Here is another pass at the dma-buf heaps patchset Andrew and I
have been working on, which tries to destage a fair chunk of ION
functionality.
The patchset implements per-heap devices which can be opened
directly and then an ioctl is used to allocate a dmabuf from the
heap.
The interface is similar to ION's, but much simpler, providing
only an ALLOC ioctl.
Also, I've provided relatively simple system and cma heaps.
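For reference, allocating from a heap in userspace looks roughly
like this (error handling omitted, and assuming the usual fcntl/ioctl
headers plus the new <linux/dma-heap.h> uapi header; the node name is
just the one the system heap registers):

    int heap_fd, dmabuf_fd;
    struct dma_heap_allocation_data data = {
            .len = 1024 * 1024,
            .fd_flags = O_RDWR | O_CLOEXEC,
            .heap_flags = 0,
    };

    /* each heap gets its own node under /dev/dma_heap/ */
    heap_fd = open("/dev/dma_heap/system_heap", O_RDWR);

    /* the only ioctl is ALLOC, which hands back a dma-buf fd */
    ioctl(heap_fd, DMA_HEAP_IOC_ALLOC, &data);
    dmabuf_fd = data.fd;

The returned fd is a regular dma-buf fd, so it can be mmap'ed or
passed to importing drivers as usual.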
I've booted and tested these patches with AOSP on the HiKey960
using the kernel tree here:
https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-heap
And the userspace changes here:
https://android-review.googlesource.com/c/device/linaro/hikey/+/909436
Compared to ION, this patchset is missing the system-contig,
carveout, and chunk heaps, as I don't have a device that uses
them, so I'm unable to do much useful validation there.
Additionally, we have no upstream users of the chunk or carveout
heaps, and system-contig has been deprecated in the
common/android-* kernels, so this should be ok.
I've also removed the stats accounting for now, since any such
accounting should be implemented by dma-buf core or the heaps
themselves.
New in v5:
* Actually, not much that I'm submitting now. I've backed out the
large-order page allocation I added in v4, as using it with the
pagelist structure in the helper buffers ended up leaking memory
unless we split the pages, and once split we didn't see any
performance improvement.
* I have spent a fair amount of time looking at allocation
performance compared to ION, and I do have a patch stack
that performs as well as or better than ION (utilizing large-order
allocations, sgtables, and a page pool). However, Andrew
convinced me that the extra complexity of these optimizations
would distract reviewers from the core functionality, and we
can submit those changes afterwards without any interface
impact. WIP patches can be found here:
https://git.linaro.org/people/john.stultz/android-dev.git/log/?h=dev/dma-buf-heap-WIP
* Minor cleanups
Outstanding concerns:
* Need to better understand various secure heap implementations.
There is some concern that heap-private flags will be needed, but
it's also possible that dma-buf heaps can't solve everyone's needs,
in which case a vendor's secure buffer driver can implement its
own dma-buf exporter. So I'm not too worried here.
Thoughts and feedback would be greatly appreciated!
thanks
-john
Cc: Laura Abbott <[email protected]>
Cc: Benjamin Gaignard <[email protected]>
Cc: Sumit Semwal <[email protected]>
Cc: Liam Mark <[email protected]>
Cc: Pratik Patel <[email protected]>
Cc: Brian Starkey <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Sudipto Paul <[email protected]>
Cc: Andrew F. Davis <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Chenbo Feng <[email protected]>
Cc: Alistair Strachan <[email protected]>
Cc: [email protected]
Andrew F. Davis (1):
dma-buf: Add dma-buf heaps framework
John Stultz (4):
dma-buf: heaps: Add heap helpers
dma-buf: heaps: Add system heap to dmabuf heaps
dma-buf: heaps: Add CMA heap to dmabuf heaps
kselftests: Add dma-heap test
MAINTAINERS | 18 ++
drivers/dma-buf/Kconfig | 10 +
drivers/dma-buf/Makefile | 2 +
drivers/dma-buf/dma-heap.c | 237 ++++++++++++++++
drivers/dma-buf/heaps/Kconfig | 14 +
drivers/dma-buf/heaps/Makefile | 4 +
drivers/dma-buf/heaps/cma_heap.c | 169 ++++++++++++
drivers/dma-buf/heaps/heap-helpers.c | 261 ++++++++++++++++++
drivers/dma-buf/heaps/heap-helpers.h | 55 ++++
drivers/dma-buf/heaps/system_heap.c | 123 +++++++++
include/linux/dma-heap.h | 59 ++++
include/uapi/linux/dma-heap.h | 56 ++++
tools/testing/selftests/dmabuf-heaps/Makefile | 11 +
.../selftests/dmabuf-heaps/dmabuf-heap.c | 232 ++++++++++++++++
14 files changed, 1251 insertions(+)
create mode 100644 drivers/dma-buf/dma-heap.c
create mode 100644 drivers/dma-buf/heaps/Kconfig
create mode 100644 drivers/dma-buf/heaps/Makefile
create mode 100644 drivers/dma-buf/heaps/cma_heap.c
create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
create mode 100644 drivers/dma-buf/heaps/system_heap.c
create mode 100644 include/linux/dma-heap.h
create mode 100644 include/uapi/linux/dma-heap.h
create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
--
2.17.1
From: "Andrew F. Davis" <[email protected]>
This framework allows a unified userspace interface for dma-buf
exporters, allowing userland to allocate specific types of memory
for use in dma-buf sharing.
Each heap is given its own device node, from which a user can
allocate a dma-buf fd using the DMA_HEAP_IOC_ALLOC ioctl.
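For in-kernel exporters, registering a heap is just a matter of
filling out a dma_heap_export_info and calling dma_heap_add(). A
rough sketch (the my_heap_* names are only placeholders):

    static int my_heap_allocate(struct dma_heap *heap, unsigned long len,
                                unsigned long fd_flags,
                                unsigned long heap_flags)
    {
            /*
             * A real heap allocates its backing memory here, wraps it
             * with dma_buf_export() and returns the fd from dma_buf_fd().
             */
            return -ENOMEM;
    }

    static struct dma_heap_ops my_heap_ops = {
            .allocate = my_heap_allocate,
    };

    static int my_heap_create(void)
    {
            struct dma_heap_export_info exp_info = {
                    .name = "my_heap",
                    .ops = &my_heap_ops,
            };

            return PTR_ERR_OR_ZERO(dma_heap_add(&exp_info));
    }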
This code is an evolution of the Android ION implementation,
and a big thanks is due to its authors/maintainers over time
for their effort:
Rebecca Schultz Zavin, Colin Cross, Benjamin Gaignard,
Laura Abbott, and many other contributors!
Cc: Laura Abbott <[email protected]>
Cc: Benjamin Gaignard <[email protected]>
Cc: Sumit Semwal <[email protected]>
Cc: Liam Mark <[email protected]>
Cc: Pratik Patel <[email protected]>
Cc: Brian Starkey <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Sudipto Paul <[email protected]>
Cc: Andrew F. Davis <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Chenbo Feng <[email protected]>
Cc: Alistair Strachan <[email protected]>
Cc: [email protected]
Reviewed-by: Benjamin Gaignard <[email protected]>
Signed-off-by: Andrew F. Davis <[email protected]>
Signed-off-by: John Stultz <[email protected]>
Change-Id: I4af43a137ad34ff6f7da4d6b2864f3cd86fb7652
---
v2:
* Folded down fixes I had previously shared in implementing
heaps
* Make flags a u64 (Suggested by Laura)
* Add PAGE_ALIGN() fix to the core alloc function
* IOCTL fixups suggested by Brian
* Added fixes suggested by Benjamin
* Removed core stats mgmt, as that should be implemented by
per-heap code
* Changed alloc to return a dma-buf fd, rather then a buffer
(as it simplifies error handling)
v3:
* Removed scare-quotes in MAINTAINERS email address
* Get rid of .release function as it didn't do anything (from
Christoph)
* Renamed filp to file (suggested by Christoph)
* Split out ioctl handling to separate function (suggested by
Christoph)
* Add comment documenting PAGE_ALIGN usage (suggested by Brian)
* Switch from idr to Xarray (suggested by Brian)
* Fixup cdev creation (suggested by Brian)
* Avoid EXPORT_SYMBOL until we finalize modules (suggested by
Brian)
* Make struct dma_heap internal only (folded in from Andrew)
* Small cleanups suggested by GregKH
* Provide class->devnode callback to get consistent /dev/
subdirectory naming (Suggested by Bjorn)
v4:
* Folded down dma-heap.h change that was in a following patch
* Added fd_flags entry to allocation structure and pass it
through to heap code for use on dma-buf fd creation (suggested
by Benjamin)
v5:
* Minor cleanups
---
MAINTAINERS | 18 +++
drivers/dma-buf/Kconfig | 8 ++
drivers/dma-buf/Makefile | 1 +
drivers/dma-buf/dma-heap.c | 237 ++++++++++++++++++++++++++++++++++
include/linux/dma-heap.h | 59 +++++++++
include/uapi/linux/dma-heap.h | 56 ++++++++
6 files changed, 379 insertions(+)
create mode 100644 drivers/dma-buf/dma-heap.c
create mode 100644 include/linux/dma-heap.h
create mode 100644 include/uapi/linux/dma-heap.h
diff --git a/MAINTAINERS b/MAINTAINERS
index a6954776a37e..5aded7e9a062 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4813,6 +4813,24 @@ F: include/linux/*fence.h
F: Documentation/driver-api/dma-buf.rst
T: git git://anongit.freedesktop.org/drm/drm-misc
+DMA-BUF HEAPS FRAMEWORK
+M: Sumit Semwal <[email protected]>
+R: Andrew F. Davis <[email protected]>
+R: Benjamin Gaignard <[email protected]>
+R: Liam Mark <[email protected]>
+R: Laura Abbott <[email protected]>
+R: Brian Starkey <[email protected]>
+R: John Stultz <[email protected]>
+S: Maintained
+L: [email protected]
+L: [email protected]
+L: [email protected] (moderated for non-subscribers)
+F: include/uapi/linux/dma-heap.h
+F: include/linux/dma-heap.h
+F: drivers/dma-buf/dma-heap.c
+F: drivers/dma-buf/heaps/*
+T: git git://anongit.freedesktop.org/drm/drm-misc
+
DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
M: Vinod Koul <[email protected]>
L: [email protected]
diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
index d5f915830b68..9b93f86f597c 100644
--- a/drivers/dma-buf/Kconfig
+++ b/drivers/dma-buf/Kconfig
@@ -39,4 +39,12 @@ config UDMABUF
A driver to let userspace turn memfd regions into dma-bufs.
Qemu can use this to create host dmabufs for guest framebuffers.
+menuconfig DMABUF_HEAPS
+ bool "DMA-BUF Userland Memory Heaps"
+ select DMA_SHARED_BUFFER
+ help
+ Choose this option to enable the DMA-BUF userland memory heaps,
+ this allows userspace to allocate dma-bufs that can be shared between
+ drivers.
+
endmenu
diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index e8c7310cb800..1cb3dd104825 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
reservation.o seqno-fence.o
+obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
obj-$(CONFIG_SYNC_FILE) += sync_file.o
obj-$(CONFIG_SW_SYNC) += sw_sync.o sync_debug.o
obj-$(CONFIG_UDMABUF) += udmabuf.o
diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
new file mode 100644
index 000000000000..bbeaf3192a0d
--- /dev/null
+++ b/drivers/dma-buf/dma-heap.c
@@ -0,0 +1,237 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Framework for userspace DMA-BUF allocations
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <linux/cdev.h>
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/err.h>
+#include <linux/xarray.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+#include <linux/dma-heap.h>
+#include <uapi/linux/dma-heap.h>
+
+#define DEVNAME "dma_heap"
+
+#define NUM_HEAP_MINORS 128
+
+/**
+ * struct dma_heap - represents a dmabuf heap in the system
+ * @name: used for debugging/device-node name
+ * @ops: ops struct for this heap
+ * @minor: minor number of this heap device
+ * @heap_devt: heap device node
+ * @heap_cdev: heap char device
+ *
+ * Represents a heap of memory from which buffers can be made.
+ */
+struct dma_heap {
+ const char *name;
+ struct dma_heap_ops *ops;
+ void *priv;
+ unsigned int minor;
+ dev_t heap_devt;
+ struct cdev heap_cdev;
+};
+
+static dev_t dma_heap_devt;
+static struct class *dma_heap_class;
+static DEFINE_XARRAY_ALLOC(dma_heap_minors);
+
+static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
+ unsigned int fd_flags,
+ unsigned int heap_flags)
+{
+ /*
+ * Allocations from all heaps have to begin
+ * and end on page boundaries.
+ */
+ len = PAGE_ALIGN(len);
+ if (!len)
+ return -EINVAL;
+
+ return heap->ops->allocate(heap, len, fd_flags, heap_flags);
+}
+
+static int dma_heap_open(struct inode *inode, struct file *file)
+{
+ struct dma_heap *heap;
+
+ heap = xa_load(&dma_heap_minors, iminor(inode));
+ if (!heap) {
+ pr_err("dma_heap: minor %d unknown.\n", iminor(inode));
+ return -ENODEV;
+ }
+
+ /* instance data as context */
+ file->private_data = heap;
+ nonseekable_open(inode, file);
+
+ return 0;
+}
+
+static long dma_heap_ioctl_allocate(struct file *file, unsigned long arg)
+{
+ struct dma_heap_allocation_data heap_allocation;
+ struct dma_heap *heap = file->private_data;
+ int fd;
+
+ if (copy_from_user(&heap_allocation, (void __user *)arg,
+ sizeof(heap_allocation)))
+ return -EFAULT;
+
+ if (heap_allocation.fd ||
+ heap_allocation.reserved0 ||
+ heap_allocation.reserved1) {
+ pr_warn_once("dma_heap: ioctl data not valid\n");
+ return -EINVAL;
+ }
+
+ if (heap_allocation.fd_flags & ~DMA_HEAP_VALID_FD_FLAGS) {
+ pr_warn_once("dma_heap: fd_flags has invalid or unsupported flags set\n");
+ return -EINVAL;
+ }
+
+ if (heap_allocation.heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS) {
+ pr_warn_once("dma_heap: heap flags has invalid or unsupported flags set\n");
+ return -EINVAL;
+ }
+
+
+ fd = dma_heap_buffer_alloc(heap, heap_allocation.len,
+ heap_allocation.fd_flags,
+ heap_allocation.heap_flags);
+ if (fd < 0)
+ return fd;
+
+ heap_allocation.fd = fd;
+
+ if (copy_to_user((void __user *)arg, &heap_allocation,
+ sizeof(heap_allocation)))
+ return -EFAULT;
+
+ return 0;
+}
+
+static long dma_heap_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ int ret = 0;
+
+ switch (cmd) {
+ case DMA_HEAP_IOC_ALLOC:
+ ret = dma_heap_ioctl_allocate(file, arg);
+ break;
+ default:
+ return -ENOTTY;
+ }
+
+ return ret;
+}
+
+static const struct file_operations dma_heap_fops = {
+ .owner = THIS_MODULE,
+ .open = dma_heap_open,
+ .unlocked_ioctl = dma_heap_ioctl,
+};
+
+/**
+ * dma_heap_get_data() - get per-subdriver data for the heap
+ * @heap: DMA-Heap to retrieve private data for
+ *
+ * Returns:
+ * The per-subdriver data for the heap.
+ */
+void *dma_heap_get_data(struct dma_heap *heap)
+{
+ return heap->priv;
+}
+
+struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
+{
+ struct dma_heap *heap;
+ struct device *dev_ret;
+ int ret;
+
+ if (!exp_info->name || !strcmp(exp_info->name, "")) {
+ pr_err("dma_heap: Cannot add heap without a name\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+ if (!exp_info->ops || !exp_info->ops->allocate) {
+ pr_err("dma_heap: Cannot add heap with invalid ops struct\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+ heap = kzalloc(sizeof(*heap), GFP_KERNEL);
+ if (!heap)
+ return ERR_PTR(-ENOMEM);
+
+ heap->name = exp_info->name;
+ heap->ops = exp_info->ops;
+ heap->priv = exp_info->priv;
+
+ /* Find unused minor number */
+ ret = xa_alloc(&dma_heap_minors, &heap->minor, heap,
+ XA_LIMIT(0, NUM_HEAP_MINORS - 1), GFP_KERNEL);
+ if (ret < 0) {
+ pr_err("dma_heap: Unable to get minor number for heap\n");
+ return ERR_PTR(ret);
+ }
+
+ /* Create device */
+ heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);
+
+ cdev_init(&heap->heap_cdev, &dma_heap_fops);
+ ret = cdev_add(&heap->heap_cdev, heap->heap_devt, 1);
+ if (ret < 0) {
+ pr_err("dma_heap: Unable to add char device\n");
+ return ERR_PTR(ret);
+ }
+
+ dev_ret = device_create(dma_heap_class,
+ NULL,
+ heap->heap_devt,
+ NULL,
+ heap->name);
+ if (IS_ERR(dev_ret)) {
+ pr_err("dma_heap: Unable to create device\n");
+ cdev_del(&heap->heap_cdev);
+ return (struct dma_heap *)dev_ret;
+ }
+
+ return heap;
+}
+
+static char *dma_heap_devnode(struct device *dev, umode_t *mode)
+{
+ return kasprintf(GFP_KERNEL, "dma_heap/%s", dev_name(dev));
+}
+
+
+static int dma_heap_init(void)
+{
+ int ret;
+
+ ret = alloc_chrdev_region(&dma_heap_devt, 0, NUM_HEAP_MINORS, DEVNAME);
+ if (ret)
+ return ret;
+
+ dma_heap_class = class_create(THIS_MODULE, DEVNAME);
+ if (IS_ERR(dma_heap_class)) {
+ unregister_chrdev_region(dma_heap_devt, NUM_HEAP_MINORS);
+ return PTR_ERR(dma_heap_class);
+ }
+ dma_heap_class->devnode = dma_heap_devnode;
+
+ return 0;
+}
+subsys_initcall(dma_heap_init);
diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
new file mode 100644
index 000000000000..7a1b633ac02f
--- /dev/null
+++ b/include/linux/dma-heap.h
@@ -0,0 +1,59 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DMABUF Heaps Allocation Infrastructure
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#ifndef _DMA_HEAPS_H
+#define _DMA_HEAPS_H
+
+#include <linux/cdev.h>
+#include <linux/types.h>
+
+struct dma_heap;
+
+/**
+ * struct dma_heap_ops - ops to operate on a given heap
+ * @allocate: allocate dmabuf and return fd
+ *
+ * allocate returns dmabuf fd on success, -errno on error.
+ */
+struct dma_heap_ops {
+ int (*allocate)(struct dma_heap *heap,
+ unsigned long len,
+ unsigned long fd_flags,
+ unsigned long heap_flags);
+};
+
+/**
+ * struct dma_heap_export_info - information needed to export a new dmabuf heap
+ * @name: used for debugging/device-node name
+ * @ops: ops struct for this heap
+ * @priv: heap exporter private data
+ *
+ * Information needed to export a new dmabuf heap.
+ */
+struct dma_heap_export_info {
+ const char *name;
+ struct dma_heap_ops *ops;
+ void *priv;
+};
+
+/**
+ * dma_heap_get_data() - get per-heap driver data
+ * @heap: DMA-Heap to retrieve private data for
+ *
+ * Returns:
+ * The per-heap data for the heap.
+ */
+void *dma_heap_get_data(struct dma_heap *heap);
+
+/**
+ * dma_heap_add - adds a heap to dmabuf heaps
+ * @exp_info: information needed to register this heap
+ */
+struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
+
+#endif /* _DMA_HEAPS_H */
diff --git a/include/uapi/linux/dma-heap.h b/include/uapi/linux/dma-heap.h
new file mode 100644
index 000000000000..c382280277d7
--- /dev/null
+++ b/include/uapi/linux/dma-heap.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * DMABUF Heaps Userspace API
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+#ifndef _UAPI_LINUX_DMABUF_POOL_H
+#define _UAPI_LINUX_DMABUF_POOL_H
+
+#include <linux/ioctl.h>
+#include <linux/types.h>
+
+/**
+ * DOC: DMABUF Heaps Userspace API
+ *
+ */
+
+/* Valid FD_FLAGS are O_CLOEXEC, O_RDONLY, O_WRONLY, O_RDWR */
+#define DMA_HEAP_VALID_FD_FLAGS (O_CLOEXEC | O_ACCMODE)
+
+/* Currently no heap flags */
+#define DMA_HEAP_VALID_HEAP_FLAGS (0)
+
+/**
+ * struct dma_heap_allocation_data - metadata passed from userspace for
+ * allocations
+ * @len: size of the allocation
+ * @fd: will be populated with a fd which provides the
+ * handle to the allocated dma-buf
+ * @fd_flags: file descriptor flags used when allocating
+ * @heap_flags: flags passed to heap
+ *
+ * Provided by userspace as an argument to the ioctl
+ */
+struct dma_heap_allocation_data {
+ __u64 len;
+ __u32 fd;
+ __u32 fd_flags;
+ __u64 heap_flags;
+ __u32 reserved0;
+ __u32 reserved1;
+};
+
+#define DMA_HEAP_IOC_MAGIC 'H'
+
+/**
+ * DOC: DMA_HEAP_IOC_ALLOC - allocate memory from a heap
+ *
+ * Takes a dma_heap_allocation_data struct and returns it with the fd field
+ * populated with the dmabuf handle of the allocation.
+ */
+#define DMA_HEAP_IOC_ALLOC _IOWR(DMA_HEAP_IOC_MAGIC, 0, \
+ struct dma_heap_allocation_data)
+
+#endif /* _UAPI_LINUX_DMABUF_POOL_H */
--
2.17.1
Add a very trivial allocation and import test for dma-heaps,
utilizing the vgem driver as a test importer.
A good chunk of this code was taken from:
tools/testing/selftests/android/ion/ionmap_test.c
Originally by Laura Abbott <[email protected]>
Cc: Benjamin Gaignard <[email protected]>
Cc: Sumit Semwal <[email protected]>
Cc: Liam Mark <[email protected]>
Cc: Pratik Patel <[email protected]>
Cc: Brian Starkey <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Sudipto Paul <[email protected]>
Cc: Andrew F. Davis <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Chenbo Feng <[email protected]>
Cc: Alistair Strachan <[email protected]>
Cc: [email protected]
Reviewed-by: Benjamin Gaignard <[email protected]>
Signed-off-by: John Stultz <[email protected]>
Change-Id: Ib98569fdda6378eb086b8092fb5d6bd419b8d431
---
v2:
* Switched to use reworked dma-heap apis
v3:
* Add simple mmap
* Utilize dma-buf testdev to test importing
v4:
* Rework to use vgem
* Pass in fd_flags to match interface changes
* Skip . and .. dirs
---
tools/testing/selftests/dmabuf-heaps/Makefile | 11 +
.../selftests/dmabuf-heaps/dmabuf-heap.c | 232 ++++++++++++++++++
2 files changed, 243 insertions(+)
create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile
new file mode 100644
index 000000000000..c414ad36b4bf
--- /dev/null
+++ b/tools/testing/selftests/dmabuf-heaps/Makefile
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+CFLAGS += -static -O3 -Wl,-no-as-needed -Wall
+#LDLIBS += -lrt -lpthread -lm
+
+# these are all "safe" tests that don't modify
+# system time or require escalated privileges
+TEST_GEN_PROGS = dmabuf-heap
+
+
+include ../lib.mk
+
diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
new file mode 100644
index 000000000000..33d4b105c673
--- /dev/null
+++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
@@ -0,0 +1,232 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <dirent.h>
+#include <errno.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <string.h>
+#include <unistd.h>
+#include <sys/ioctl.h>
+#include <sys/mman.h>
+#include <sys/types.h>
+
+#include <linux/dma-buf.h>
+#include <drm/drm.h>
+
+
+#include "../../../../include/uapi/linux/dma-heap.h"
+
+#define DEVPATH "/dev/dma_heap"
+
+int check_vgem(int fd)
+{
+ drm_version_t version = { 0 };
+ char name[5];
+ int ret;
+
+ version.name_len = 4;
+ version.name = name;
+
+ ret = ioctl(fd, DRM_IOCTL_VERSION, &version);
+ if (ret)
+ return 1;
+
+ return strcmp(name, "vgem");
+}
+
+int open_vgem(void)
+{
+ int i, fd;
+ const char *drmstr = "/dev/dri/card";
+
+ fd = -1;
+ for (i = 0; i < 16; i++) {
+ char name[80];
+
+ sprintf(name, "%s%u", drmstr, i);
+
+ fd = open(name, O_RDWR);
+ if (fd < 0)
+ continue;
+
+ if (check_vgem(fd)) {
+ close(fd);
+ continue;
+ } else {
+ break;
+ }
+
+ }
+ return fd;
+}
+
+int import_vgem_fd(int vgem_fd, int dma_buf_fd, uint32_t *handle)
+{
+ struct drm_prime_handle import_handle = { 0 };
+ int ret;
+
+ import_handle.fd = dma_buf_fd;
+ import_handle.flags = 0;
+ import_handle.handle = 0;
+
+ ret = ioctl(vgem_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &import_handle);
+ if (ret == 0)
+ *handle = import_handle.handle;
+ return ret;
+}
+
+void close_handle(int vgem_fd, uint32_t handle)
+{
+ struct drm_gem_close close = { 0 };
+
+ close.handle = handle;
+ ioctl(vgem_fd, DRM_IOCTL_GEM_CLOSE, &close);
+}
+
+
+int dmabuf_heap_open(char *name)
+{
+ int ret, fd;
+ char buf[256];
+
+ ret = sprintf(buf, "%s/%s", DEVPATH, name);
+ if (ret < 0) {
+ printf("sprintf failed!\n");
+ return ret;
+ }
+
+ fd = open(buf, O_RDWR);
+ if (fd < 0)
+ printf("open %s failed!\n", buf);
+ return fd;
+}
+
+int dmabuf_heap_alloc(int fd, size_t len, unsigned int flags, int *dmabuf_fd)
+{
+ struct dma_heap_allocation_data data = {
+ .len = len,
+ .fd_flags = O_RDWR | O_CLOEXEC,
+ .heap_flags = flags,
+ };
+ int ret;
+
+ if (dmabuf_fd == NULL)
+ return -EINVAL;
+
+ ret = ioctl(fd, DMA_HEAP_IOC_ALLOC, &data);
+ if (ret < 0)
+ return ret;
+ *dmabuf_fd = (int)data.fd;
+ return ret;
+}
+
+void dmabuf_sync(int fd, int start_stop)
+{
+ struct dma_buf_sync sync = { 0 };
+ int ret;
+
+ sync.flags = start_stop | DMA_BUF_SYNC_RW;
+ ret = ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
+ if (ret)
+ printf("sync failed %d\n", errno);
+
+}
+
+#define ONE_MEG (1024*1024)
+
+void do_test(char *heap_name)
+{
+ int heap_fd = -1, dmabuf_fd = -1, importer_fd = -1;
+ uint32_t handle = 0;
+ void *p = NULL;
+ int ret;
+
+ printf("Testing heap: %s\n", heap_name);
+
+ heap_fd = dmabuf_heap_open(heap_name);
+ if (heap_fd < 0)
+ return;
+
+ printf("Allocating 1 MEG\n");
+ ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);
+ if (ret)
+ goto out;
+
+ /* mmap and write a simple pattern */
+ p = mmap(NULL,
+ ONE_MEG,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED,
+ dmabuf_fd,
+ 0);
+ if (p == MAP_FAILED) {
+ printf("mmap() failed: %m\n");
+ goto out;
+ }
+ printf("mmap passed\n");
+
+
+ dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_START);
+
+ memset(p, 1, ONE_MEG/2);
+ memset((char *)p+ONE_MEG/2, 0, ONE_MEG/2);
+ dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_END);
+
+ importer_fd = open_vgem();
+ if (importer_fd < 0) {
+ ret = importer_fd;
+ printf("Failed to open vgem\n");
+ goto out;
+ }
+
+ ret = import_vgem_fd(importer_fd, dmabuf_fd, &handle);
+ if (ret < 0) {
+ printf("Failed to import buffer\n");
+ goto out;
+ }
+ printf("import passed\n");
+
+ dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_START);
+ memset(p, 0xff, ONE_MEG);
+ dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_END);
+ printf("syncs passed\n");
+
+ close_handle(importer_fd, handle);
+ ret = 0;
+
+out:
+ if (p)
+ munmap(p, ONE_MEG);
+ if (importer_fd >= 0)
+ close(importer_fd);
+ if (dmabuf_fd >= 0)
+ close(dmabuf_fd);
+ if (heap_fd >= 0)
+ close(heap_fd);
+}
+
+
+int main(void)
+{
+ DIR *d;
+ struct dirent *dir;
+
+ d = opendir(DEVPATH);
+ if (!d) {
+ printf("No %s directory?\n", DEVPATH);
+ return -1;
+ }
+
+ while ((dir = readdir(d)) != NULL) {
+ if (!strncmp(dir->d_name, ".", 2))
+ continue;
+ if (!strncmp(dir->d_name, "..", 3))
+ continue;
+
+ do_test(dir->d_name);
+ }
+
+ return 0;
+}
--
2.17.1
This patch adds the system heap to the dma-buf heaps framework.
This allows applications to get a page-allocator-backed dma-buf
for non-contiguous memory.
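Since the result is just a normal dma-buf, CPU access from userspace
works the same as with any other dma-buf; roughly what the selftest
later in the series does:

    /* dmabuf_fd is the fd returned by DMA_HEAP_IOC_ALLOC */
    size_t len = 1024 * 1024;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                   dmabuf_fd, 0);
    struct dma_buf_sync sync = {
            .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW,
    };

    /* bracket CPU writes with the sync ioctl */
    ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
    memset(p, 0, len);
    sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
    ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);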
This code is an evolution of the Android ION implementation, so
thanks to its original authors and maintainers:
Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
Cc: Laura Abbott <[email protected]>
Cc: Benjamin Gaignard <[email protected]>
Cc: Sumit Semwal <[email protected]>
Cc: Liam Mark <[email protected]>
Cc: Pratik Patel <[email protected]>
Cc: Brian Starkey <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Sudipto Paul <[email protected]>
Cc: Andrew F. Davis <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Chenbo Feng <[email protected]>
Cc: Alistair Strachan <[email protected]>
Cc: [email protected]
Reviewed-by: Benjamin Gaignard <[email protected]>
Signed-off-by: John Stultz <[email protected]>
Change-Id: I4dc5ff54ccb1f7ca3ac8675661114ca33813654b
---
v2:
* Switch allocate to return dmabuf fd
* Simplify init code
* Checkpatch fixups
* Droped dead system-contig code
v3:
* Whitespace fixups from Benjamin
* Make sure we're zeroing the allocated pages (from Liam)
* Use PAGE_ALIGN() consistently (suggested by Brian)
* Fold in new registration style from Andrew
* Avoid needless dynamic allocation of sys_heap (suggested by
Christoph)
* Minor cleanups
* Folded in changes from Andrew to use simplified page list
from the heap helpers
v4:
* Optimization to allocate pages in chunks, similar to old
pagepool code
* Use fd_flags when creating dmabuf fd (Suggested by Benjamin)
v5:
* Back out large order page allocations (was leaking memory,
as the page array didn't properly track order size)
---
drivers/dma-buf/Kconfig | 2 +
drivers/dma-buf/heaps/Kconfig | 6 ++
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/system_heap.c | 123 ++++++++++++++++++++++++++++
4 files changed, 132 insertions(+)
create mode 100644 drivers/dma-buf/heaps/Kconfig
create mode 100644 drivers/dma-buf/heaps/system_heap.c
diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
index 9b93f86f597c..434cfe646dad 100644
--- a/drivers/dma-buf/Kconfig
+++ b/drivers/dma-buf/Kconfig
@@ -47,4 +47,6 @@ menuconfig DMABUF_HEAPS
this allows userspace to allocate dma-bufs that can be shared between
drivers.
+source "drivers/dma-buf/heaps/Kconfig"
+
endmenu
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
new file mode 100644
index 000000000000..205052744169
--- /dev/null
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -0,0 +1,6 @@
+config DMABUF_HEAPS_SYSTEM
+ bool "DMA-BUF System Heap"
+ depends on DMABUF_HEAPS
+ help
+ Choose this option to enable the system dmabuf heap. The system heap
+ is backed by pages from the buddy allocator. If in doubt, say Y.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index de49898112db..d1808eca2581 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,2 +1,3 @@
# SPDX-License-Identifier: GPL-2.0
obj-y += heap-helpers.o
+obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
new file mode 100644
index 000000000000..863834499ce1
--- /dev/null
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -0,0 +1,123 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF System heap exporter
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#include <asm/page.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-heap.h>
+#include <linux/err.h>
+#include <linux/highmem.h>
+#include <linux/mm.h>
+#include <linux/scatterlist.h>
+#include <linux/slab.h>
+
+#include "heap-helpers.h"
+
+struct system_heap {
+ struct dma_heap *heap;
+} sys_heap;
+
+
+static void system_heap_free(struct heap_helper_buffer *buffer)
+{
+ pgoff_t pg;
+
+ for (pg = 0; pg < buffer->pagecount; pg++)
+ __free_page(buffer->pages[pg]);
+ kfree(buffer->pages);
+ kfree(buffer);
+}
+
+static int system_heap_allocate(struct dma_heap *heap,
+ unsigned long len,
+ unsigned long fd_flags,
+ unsigned long heap_flags)
+{
+ struct heap_helper_buffer *helper_buffer;
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ unsigned long size_remaining = len;
+ struct dma_buf *dmabuf;
+ int ret = -ENOMEM;
+ pgoff_t pg;
+
+ helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
+ if (!helper_buffer)
+ return -ENOMEM;
+
+ INIT_HEAP_HELPER_BUFFER(helper_buffer, system_heap_free);
+ helper_buffer->heap_buffer.flags = heap_flags;
+ helper_buffer->heap_buffer.heap = heap;
+ helper_buffer->heap_buffer.size = len;
+
+ helper_buffer->pagecount = len / PAGE_SIZE;
+ helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
+ sizeof(*helper_buffer->pages),
+ GFP_KERNEL);
+ if (!helper_buffer->pages) {
+ ret = -ENOMEM;
+ goto err0;
+ }
+
+ for (pg = 0; pg < helper_buffer->pagecount; pg++) {
+ helper_buffer->pages[pg] = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ if (!helper_buffer->pages[pg])
+ goto err1;
+ }
+
+ /* create the dmabuf */
+ exp_info.ops = &heap_helper_ops;
+ exp_info.size = len;
+ exp_info.flags = fd_flags;
+ exp_info.priv = &helper_buffer->heap_buffer;
+ dmabuf = dma_buf_export(&exp_info);
+ if (IS_ERR(dmabuf)) {
+ ret = PTR_ERR(dmabuf);
+ goto err1;
+ }
+
+ helper_buffer->heap_buffer.dmabuf = dmabuf;
+
+ ret = dma_buf_fd(dmabuf, fd_flags);
+ if (ret < 0) {
+ dma_buf_put(dmabuf);
+ /* just return, as put will call release and that will free */
+ return ret;
+ }
+
+ return ret;
+
+err1:
+ while (pg > 0)
+ __free_page(helper_buffer->pages[--pg]);
+ kfree(helper_buffer->pages);
+err0:
+ kfree(helper_buffer);
+
+ return -ENOMEM;
+}
+
+static struct dma_heap_ops system_heap_ops = {
+ .allocate = system_heap_allocate,
+};
+
+static int system_heap_create(void)
+{
+ struct dma_heap_export_info exp_info;
+ int ret = 0;
+
+ exp_info.name = "system_heap";
+ exp_info.ops = &system_heap_ops;
+ exp_info.priv = &sys_heap;
+
+ sys_heap.heap = dma_heap_add(&exp_info);
+ if (IS_ERR(sys_heap.heap))
+ ret = PTR_ERR(sys_heap.heap);
+
+ return ret;
+}
+device_initcall(system_heap_create);
--
2.17.1
Add generic helper dmabuf ops for dma heaps, so we can reduce
the amount of duplicative code for the exported dmabufs.
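The expected usage from a heap implementation is roughly the
following (condensed from the system heap later in the series;
my_heap_free stands in for the heap's own free callback, and error
handling is trimmed):

    struct heap_helper_buffer *buffer;
    DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
    struct dma_buf *dmabuf;

    buffer = kzalloc(sizeof(*buffer), GFP_KERNEL);
    INIT_HEAP_HELPER_BUFFER(buffer, my_heap_free);
    buffer->heap_buffer.heap = heap;
    buffer->heap_buffer.size = len;
    /* the heap fills in buffer->pages[] and buffer->pagecount */

    exp_info.ops = &heap_helper_ops;
    exp_info.size = len;
    exp_info.flags = fd_flags;
    exp_info.priv = &buffer->heap_buffer;
    dmabuf = dma_buf_export(&exp_info);
    buffer->heap_buffer.dmabuf = dmabuf;

    return dma_buf_fd(dmabuf, fd_flags);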
This code is an evolution of the Android ION implementation, so
thanks to its original authors and maintainers:
Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
Cc: Laura Abbott <[email protected]>
Cc: Benjamin Gaignard <[email protected]>
Cc: Sumit Semwal <[email protected]>
Cc: Liam Mark <[email protected]>
Cc: Pratik Patel <[email protected]>
Cc: Brian Starkey <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Sudipto Paul <[email protected]>
Cc: Andrew F. Davis <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Chenbo Feng <[email protected]>
Cc: Alistair Strachan <[email protected]>
Cc: [email protected]
Reviewed-by: Benjamin Gaignard <[email protected]>
Signed-off-by: John Stultz <[email protected]>
Change-Id: I48d43656e7783f266d877e363116b5187639f996
---
v2:
* Removed cache management performance hack that I had
accidentally folded in.
* Removed stats code that was in helpers
* Lots of checkpatch cleanups
v3:
* Uninline INIT_HEAP_HELPER_BUFFER (suggested by Christoph)
* Switch to WARN on buffer destroy failure (suggested by Brian)
* buffer->kmap_cnt decrementing cleanup (suggested by Christoph)
* Extra buffer->vaddr checking in dma_heap_dma_buf_kmap
(suggested by Brian)
* Switch to_helper_buffer from macro to inline function
(suggested by Benjamin)
* Rename kmap->vmap (folded in from Andrew)
* Use vmap for vmapping - not begin_cpu_access (folded in from
Andrew)
* Drop kmap for now, as its optional (folded in from Andrew)
* Fold dma_heap_map_user into the single caller (foled in from
Andrew)
* Folded in patch from Andrew to track page list per heap not
sglist, which simplifies the tracking logic
v4:
* Moved dma-heap.h change out to previous patch
---
drivers/dma-buf/Makefile | 1 +
drivers/dma-buf/heaps/Makefile | 2 +
drivers/dma-buf/heaps/heap-helpers.c | 261 +++++++++++++++++++++++++++
drivers/dma-buf/heaps/heap-helpers.h | 55 ++++++
4 files changed, 319 insertions(+)
create mode 100644 drivers/dma-buf/heaps/Makefile
create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index 1cb3dd104825..e3e3dca29e46 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1,6 +1,7 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
reservation.o seqno-fence.o
+obj-$(CONFIG_DMABUF_HEAPS) += heaps/
obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
obj-$(CONFIG_SYNC_FILE) += sync_file.o
obj-$(CONFIG_SW_SYNC) += sw_sync.o sync_debug.o
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
new file mode 100644
index 000000000000..de49898112db
--- /dev/null
+++ b/drivers/dma-buf/heaps/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0
+obj-y += heap-helpers.o
diff --git a/drivers/dma-buf/heaps/heap-helpers.c b/drivers/dma-buf/heaps/heap-helpers.c
new file mode 100644
index 000000000000..00cbdbbb97e5
--- /dev/null
+++ b/drivers/dma-buf/heaps/heap-helpers.c
@@ -0,0 +1,261 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/err.h>
+#include <linux/idr.h>
+#include <linux/list.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <uapi/linux/dma-heap.h>
+
+#include "heap-helpers.h"
+
+void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
+ void (*free)(struct heap_helper_buffer *))
+{
+ buffer->private_flags = 0;
+ buffer->priv_virt = NULL;
+ mutex_init(&buffer->lock);
+ buffer->vmap_cnt = 0;
+ buffer->vaddr = NULL;
+ INIT_LIST_HEAD(&buffer->attachments);
+ buffer->free = free;
+}
+
+
+static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
+{
+ void *vaddr;
+
+ vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
+ if (!vaddr)
+ return ERR_PTR(-ENOMEM);
+
+ return vaddr;
+}
+
+void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
+{
+ struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+ if (buffer->vmap_cnt > 0) {
+ WARN(1, "%s: buffer still mapped in the kernel\n",
+ __func__);
+ vunmap(buffer->vaddr);
+ }
+
+ buffer->free(buffer);
+}
+
+static void *dma_heap_buffer_vmap_get(struct dma_heap_buffer *heap_buffer)
+{
+ struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+ void *vaddr;
+
+ if (buffer->vmap_cnt) {
+ buffer->vmap_cnt++;
+ return buffer->vaddr;
+ }
+ vaddr = dma_heap_map_kernel(buffer);
+ if (WARN_ONCE(!vaddr,
+ "heap->ops->map_kernel should return ERR_PTR on error"))
+ return ERR_PTR(-EINVAL);
+ if (IS_ERR(vaddr))
+ return vaddr;
+ buffer->vaddr = vaddr;
+ buffer->vmap_cnt++;
+ return vaddr;
+}
+
+static void dma_heap_buffer_vmap_put(struct dma_heap_buffer *heap_buffer)
+{
+ struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+ if (!--buffer->vmap_cnt) {
+ vunmap(buffer->vaddr);
+ buffer->vaddr = NULL;
+ }
+}
+
+struct dma_heaps_attachment {
+ struct device *dev;
+ struct sg_table table;
+ struct list_head list;
+};
+
+static int dma_heap_attach(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct dma_heaps_attachment *a;
+ struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+ struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+ int ret;
+
+ a = kzalloc(sizeof(*a), GFP_KERNEL);
+ if (!a)
+ return -ENOMEM;
+
+ ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
+ buffer->pagecount, 0,
+ buffer->pagecount << PAGE_SHIFT,
+ GFP_KERNEL);
+ if (ret) {
+ kfree(a);
+ return ret;
+ }
+
+ a->dev = attachment->dev;
+ INIT_LIST_HEAD(&a->list);
+
+ attachment->priv = a;
+
+ mutex_lock(&buffer->lock);
+ list_add(&a->list, &buffer->attachments);
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+static void dma_heap_detatch(struct dma_buf *dmabuf,
+ struct dma_buf_attachment *attachment)
+{
+ struct dma_heaps_attachment *a = attachment->priv;
+ struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+ struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+ mutex_lock(&buffer->lock);
+ list_del(&a->list);
+ mutex_unlock(&buffer->lock);
+
+ sg_free_table(&a->table);
+ kfree(a);
+}
+
+static struct sg_table *dma_heap_map_dma_buf(
+ struct dma_buf_attachment *attachment,
+ enum dma_data_direction direction)
+{
+ struct dma_heaps_attachment *a = attachment->priv;
+ struct sg_table *table;
+
+ table = &a->table;
+
+ if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
+ direction))
+ table = ERR_PTR(-ENOMEM);
+ return table;
+}
+
+static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
+ struct sg_table *table,
+ enum dma_data_direction direction)
+{
+ dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
+}
+
+static vm_fault_t dma_heap_vm_fault(struct vm_fault *vmf)
+{
+ struct vm_area_struct *vma = vmf->vma;
+ struct heap_helper_buffer *buffer = vma->vm_private_data;
+
+ vmf->page = buffer->pages[vmf->pgoff];
+ get_page(vmf->page);
+
+ return 0;
+}
+
+static const struct vm_operations_struct dma_heap_vm_ops = {
+ .fault = dma_heap_vm_fault,
+};
+
+static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+ struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+ struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+ if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+ return -EINVAL;
+
+ vma->vm_ops = &dma_heap_vm_ops;
+ vma->vm_private_data = buffer;
+
+ return 0;
+}
+
+static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
+{
+ struct dma_heap_buffer *buffer = dmabuf->priv;
+
+ dma_heap_buffer_destroy(buffer);
+}
+
+static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+ struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+ struct dma_heaps_attachment *a;
+ int ret = 0;
+
+ mutex_lock(&buffer->lock);
+ list_for_each_entry(a, &buffer->attachments, list) {
+ dma_sync_sg_for_cpu(a->dev, a->table.sgl, a->table.nents,
+ direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return ret;
+}
+
+static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
+ enum dma_data_direction direction)
+{
+ struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+ struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+ struct dma_heaps_attachment *a;
+
+ mutex_lock(&buffer->lock);
+ list_for_each_entry(a, &buffer->attachments, list) {
+ dma_sync_sg_for_device(a->dev, a->table.sgl, a->table.nents,
+ direction);
+ }
+ mutex_unlock(&buffer->lock);
+
+ return 0;
+}
+
+void *dma_heap_dma_buf_vmap(struct dma_buf *dmabuf)
+{
+ struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+ struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+ void *vaddr;
+
+ mutex_lock(&buffer->lock);
+ vaddr = dma_heap_buffer_vmap_get(heap_buffer);
+ mutex_unlock(&buffer->lock);
+
+ return vaddr;
+}
+
+void dma_heap_dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
+{
+ struct dma_heap_buffer *heap_buffer = dmabuf->priv;
+ struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
+
+ mutex_lock(&buffer->lock);
+ dma_heap_buffer_vmap_put(heap_buffer);
+ mutex_unlock(&buffer->lock);
+}
+
+const struct dma_buf_ops heap_helper_ops = {
+ .map_dma_buf = dma_heap_map_dma_buf,
+ .unmap_dma_buf = dma_heap_unmap_dma_buf,
+ .mmap = dma_heap_mmap,
+ .release = dma_heap_dma_buf_release,
+ .attach = dma_heap_attach,
+ .detach = dma_heap_detatch,
+ .begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
+ .end_cpu_access = dma_heap_dma_buf_end_cpu_access,
+ .vmap = dma_heap_dma_buf_vmap,
+ .vunmap = dma_heap_dma_buf_vunmap,
+};
diff --git a/drivers/dma-buf/heaps/heap-helpers.h b/drivers/dma-buf/heaps/heap-helpers.h
new file mode 100644
index 000000000000..a17502dc22e3
--- /dev/null
+++ b/drivers/dma-buf/heaps/heap-helpers.h
@@ -0,0 +1,55 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * DMABUF Heaps helper code
+ *
+ * Copyright (C) 2011 Google, Inc.
+ * Copyright (C) 2019 Linaro Ltd.
+ */
+
+#ifndef _HEAP_HELPERS_H
+#define _HEAP_HELPERS_H
+
+#include <linux/dma-heap.h>
+#include <linux/list.h>
+
+/**
+ * struct dma_heap_buffer - metadata for a particular buffer
+ * @heap: back pointer to the heap the buffer came from
+ * @dmabuf: backing dma-buf for this buffer
+ * @size: size of the buffer
+ * @flags: buffer specific flags
+ */
+struct dma_heap_buffer {
+ struct dma_heap *heap;
+ struct dma_buf *dmabuf;
+ size_t size;
+ unsigned long flags;
+};
+
+struct heap_helper_buffer {
+ struct dma_heap_buffer heap_buffer;
+
+ unsigned long private_flags;
+ void *priv_virt;
+ struct mutex lock;
+ int vmap_cnt;
+ void *vaddr;
+ pgoff_t pagecount;
+ struct page **pages;
+ struct list_head attachments;
+
+ void (*free)(struct heap_helper_buffer *buffer);
+
+};
+
+static inline struct heap_helper_buffer *to_helper_buffer(
+ struct dma_heap_buffer *h)
+{
+ return container_of(h, struct heap_helper_buffer, heap_buffer);
+}
+
+void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
+ void (*free)(struct heap_helper_buffer *));
+extern const struct dma_buf_ops heap_helper_ops;
+
+#endif /* _HEAP_HELPERS_H */
--
2.17.1
This adds a CMA heap, which allows userspace to allocate
a dma-buf of contiguous memory out of a CMA region.
This code is an evolution of the Android ION implementation, so
thanks to its original authors and maintainers:
Benjamin Gaignard, Laura Abbott, and others!
Cc: Laura Abbott <[email protected]>
Cc: Benjamin Gaignard <[email protected]>
Cc: Sumit Semwal <[email protected]>
Cc: Liam Mark <[email protected]>
Cc: Pratik Patel <[email protected]>
Cc: Brian Starkey <[email protected]>
Cc: Vincent Donnefort <[email protected]>
Cc: Sudipto Paul <[email protected]>
Cc: Andrew F. Davis <[email protected]>
Cc: Christoph Hellwig <[email protected]>
Cc: Chenbo Feng <[email protected]>
Cc: Alistair Strachan <[email protected]>
Cc: [email protected]
Reviewed-by: Benjamin Gaignard <[email protected]>
Signed-off-by: John Stultz <[email protected]>
Change-Id: Ic2b0c5dfc0dbaff5245bd1c50170c64b06c73051
---
v2:
* Switch allocate to return dmabuf fd
* Simplify init code
* Checkpatch fixups
v3:
* Switch to inline function for to_cma_heap()
* Minor cleanups suggested by Brian
* Fold in new registration style from Andrew
* Folded in changes from Andrew to use simplified page list
from the heap helpers
v4:
* Use the fd_flags when creating dmabuf fd (Suggested by
Benjamin)
* Use precalculated pagecount (Suggested by Andrew)
---
drivers/dma-buf/heaps/Kconfig | 8 ++
drivers/dma-buf/heaps/Makefile | 1 +
drivers/dma-buf/heaps/cma_heap.c | 169 +++++++++++++++++++++++++++++++
3 files changed, 178 insertions(+)
create mode 100644 drivers/dma-buf/heaps/cma_heap.c
diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
index 205052744169..a5eef06c4226 100644
--- a/drivers/dma-buf/heaps/Kconfig
+++ b/drivers/dma-buf/heaps/Kconfig
@@ -4,3 +4,11 @@ config DMABUF_HEAPS_SYSTEM
help
Choose this option to enable the system dmabuf heap. The system heap
is backed by pages from the buddy allocator. If in doubt, say Y.
+
+config DMABUF_HEAPS_CMA
+ bool "DMA-BUF CMA Heap"
+ depends on DMABUF_HEAPS && DMA_CMA
+ help
+ Choose this option to enable dma-buf CMA heap. This heap is backed
+ by the Contiguous Memory Allocator (CMA). If your system has these
+ regions, you should say Y here.
diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
index d1808eca2581..6e54cdec3da0 100644
--- a/drivers/dma-buf/heaps/Makefile
+++ b/drivers/dma-buf/heaps/Makefile
@@ -1,3 +1,4 @@
# SPDX-License-Identifier: GPL-2.0
obj-y += heap-helpers.o
obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
+obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
new file mode 100644
index 000000000000..3d0ffbbd0a34
--- /dev/null
+++ b/drivers/dma-buf/heaps/cma_heap.c
@@ -0,0 +1,169 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * DMABUF CMA heap exporter
+ *
+ * Copyright (C) 2012, 2019 Linaro Ltd.
+ * Author: <[email protected]> for ST-Ericsson.
+ */
+
+#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <linux/slab.h>
+#include <linux/errno.h>
+#include <linux/err.h>
+#include <linux/cma.h>
+#include <linux/scatterlist.h>
+#include <linux/highmem.h>
+
+#include "heap-helpers.h"
+
+struct cma_heap {
+ struct dma_heap *heap;
+ struct cma *cma;
+};
+
+static void cma_heap_free(struct heap_helper_buffer *buffer)
+{
+ struct cma_heap *cma_heap = dma_heap_get_data(buffer->heap_buffer.heap);
+ unsigned long nr_pages = buffer->pagecount;
+ struct page *pages = buffer->priv_virt;
+
+ /* free page list */
+ kfree(buffer->pages);
+ /* release memory */
+ cma_release(cma_heap->cma, pages, nr_pages);
+ kfree(buffer);
+}
+
+/* dmabuf heap CMA operations functions */
+static int cma_heap_allocate(struct dma_heap *heap,
+ unsigned long len,
+ unsigned long fd_flags,
+ unsigned long heap_flags)
+{
+ struct cma_heap *cma_heap = dma_heap_get_data(heap);
+ struct heap_helper_buffer *helper_buffer;
+ struct page *pages;
+ size_t size = PAGE_ALIGN(len);
+ unsigned long nr_pages = size >> PAGE_SHIFT;
+ unsigned long align = get_order(size);
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+ struct dma_buf *dmabuf;
+ int ret = -ENOMEM;
+ pgoff_t pg;
+
+ if (align > CONFIG_CMA_ALIGNMENT)
+ align = CONFIG_CMA_ALIGNMENT;
+
+ helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
+ if (!helper_buffer)
+ return -ENOMEM;
+
+ INIT_HEAP_HELPER_BUFFER(helper_buffer, cma_heap_free);
+ helper_buffer->heap_buffer.flags = heap_flags;
+ helper_buffer->heap_buffer.heap = heap;
+ helper_buffer->heap_buffer.size = len;
+
+ pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
+ if (!pages)
+ goto free_buf;
+
+ if (PageHighMem(pages)) {
+ unsigned long nr_clear_pages = nr_pages;
+ struct page *page = pages;
+
+ while (nr_clear_pages > 0) {
+ void *vaddr = kmap_atomic(page);
+
+ memset(vaddr, 0, PAGE_SIZE);
+ kunmap_atomic(vaddr);
+ page++;
+ nr_clear_pages--;
+ }
+ } else {
+ memset(page_address(pages), 0, size);
+ }
+
+ helper_buffer->pagecount = nr_pages;
+ helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
+ sizeof(*helper_buffer->pages),
+ GFP_KERNEL);
+ if (!helper_buffer->pages) {
+ ret = -ENOMEM;
+ goto free_cma;
+ }
+
+ for (pg = 0; pg < helper_buffer->pagecount; pg++) {
+ helper_buffer->pages[pg] = &pages[pg];
+ if (!helper_buffer->pages[pg])
+ goto free_pages;
+ }
+
+ /* create the dmabuf */
+ exp_info.ops = &heap_helper_ops;
+ exp_info.size = len;
+ exp_info.flags = fd_flags;
+ exp_info.priv = &helper_buffer->heap_buffer;
+ dmabuf = dma_buf_export(&exp_info);
+ if (IS_ERR(dmabuf)) {
+ ret = PTR_ERR(dmabuf);
+ goto free_pages;
+ }
+
+ helper_buffer->heap_buffer.dmabuf = dmabuf;
+ helper_buffer->priv_virt = pages;
+
+ ret = dma_buf_fd(dmabuf, fd_flags);
+ if (ret < 0) {
+ dma_buf_put(dmabuf);
+ /* just return, as put will call release and that will free */
+ return ret;
+ }
+
+ return ret;
+
+free_pages:
+ kfree(helper_buffer->pages);
+free_cma:
+ cma_release(cma_heap->cma, pages, nr_pages);
+free_buf:
+ kfree(helper_buffer);
+ return ret;
+}
+
+static struct dma_heap_ops cma_heap_ops = {
+ .allocate = cma_heap_allocate,
+};
+
+static int __add_cma_heap(struct cma *cma, void *data)
+{
+ struct cma_heap *cma_heap;
+ struct dma_heap_export_info exp_info;
+
+ cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
+ if (!cma_heap)
+ return -ENOMEM;
+ cma_heap->cma = cma;
+
+ exp_info.name = cma_get_name(cma);
+ exp_info.ops = &cma_heap_ops;
+ exp_info.priv = cma_heap;
+
+ cma_heap->heap = dma_heap_add(&exp_info);
+ if (IS_ERR(cma_heap->heap)) {
+ int ret = PTR_ERR(cma_heap->heap);
+
+ kfree(cma_heap);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int add_cma_heaps(void)
+{
+ cma_for_each_area(__add_cma_heap, NULL);
+ return 0;
+}
+device_initcall(add_cma_heaps);
--
2.17.1
Hi John,
I think it's looking good. I spotted a couple of error paths which I
think are missing cleanup; no complaints about the API, though.
On Fri, Jun 07, 2019 at 03:07:15AM +0000, John Stultz wrote:
> From: "Andrew F. Davis" <[email protected]>
>
> This framework allows a unified userspace interface for dma-buf
> exporters, allowing userland to allocate specific types of memory
> for use in dma-buf sharing.
>
> Each heap is given its own device node, from which a user can
> allocate a dma-buf fd using the DMA_HEAP_IOC_ALLOC ioctl.
>
> This code is an evolution of the Android ION implementation,
> and a big thanks is due to its authors/maintainers over time
> for their effort:
> Rebecca Schultz Zavin, Colin Cross, Benjamin Gaignard,
> Laura Abbott, and many other contributors!
>
> Cc: Laura Abbott <[email protected]>
> Cc: Benjamin Gaignard <[email protected]>
> Cc: Sumit Semwal <[email protected]>
> Cc: Liam Mark <[email protected]>
> Cc: Pratik Patel <[email protected]>
> Cc: Brian Starkey <[email protected]>
> Cc: Vincent Donnefort <[email protected]>
> Cc: Sudipto Paul <[email protected]>
> Cc: Andrew F. Davis <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Chenbo Feng <[email protected]>
> Cc: Alistair Strachan <[email protected]>
> Cc: [email protected]
> Reviewed-by: Benjamin Gaignard <[email protected]>
> Signed-off-by: Andrew F. Davis <[email protected]>
> Signed-off-by: John Stultz <[email protected]>
> Change-Id: I4af43a137ad34ff6f7da4d6b2864f3cd86fb7652
> ---
> v2:
> * Folded down fixes I had previously shared in implementing
> heaps
> * Make flags a u64 (Suggested by Laura)
> * Add PAGE_ALIGN() fix to the core alloc function
> * IOCTL fixups suggested by Brian
> * Added fixes suggested by Benjamin
> * Removed core stats mgmt, as that should be implemented by
> per-heap code
> * Changed alloc to return a dma-buf fd, rather then a buffer
nit:s/then/than/
> (as it simplifies error handling)
> v3:
> * Removed scare-quotes in MAINTAINERS email address
> * Get rid of .release function as it didn't do anything (from
> Christoph)
> * Renamed filp to file (suggested by Christoph)
> * Split out ioctl handling to separate function (suggested by
> Christoph)
> * Add comment documenting PAGE_ALIGN usage (suggested by Brian)
> * Switch from idr to Xarray (suggested by Brian)
> * Fixup cdev creation (suggested by Brian)
> * Avoid EXPORT_SYMBOL until we finalize modules (suggested by
> Brian)
> * Make struct dma_heap internal only (folded in from Andrew)
> * Small cleanups suggested by GregKH
> * Provide class->devnode callback to get consistent /dev/
> subdirectory naming (Suggested by Bjorn)
> v4:
> * Folded down dma-heap.h change that was in a following patch
> * Added fd_flags entry to allocation structure and pass it
> through to heap code for use on dma-buf fd creation (suggested
> by Benjamin)
> v5:
> * Minor cleanups
> ---
> MAINTAINERS | 18 +++
> drivers/dma-buf/Kconfig | 8 ++
> drivers/dma-buf/Makefile | 1 +
> drivers/dma-buf/dma-heap.c | 237 ++++++++++++++++++++++++++++++++++
> include/linux/dma-heap.h | 59 +++++++++
> include/uapi/linux/dma-heap.h | 56 ++++++++
> 6 files changed, 379 insertions(+)
> create mode 100644 drivers/dma-buf/dma-heap.c
> create mode 100644 include/linux/dma-heap.h
> create mode 100644 include/uapi/linux/dma-heap.h
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index a6954776a37e..5aded7e9a062 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -4813,6 +4813,24 @@ F: include/linux/*fence.h
> F: Documentation/driver-api/dma-buf.rst
> T: git git://anongit.freedesktop.org/drm/drm-misc
>
> +DMA-BUF HEAPS FRAMEWORK
> +M: Sumit Semwal <[email protected]>
> +R: Andrew F. Davis <[email protected]>
> +R: Benjamin Gaignard <[email protected]>
> +R: Liam Mark <[email protected]>
> +R: Laura Abbott <[email protected]>
> +R: Brian Starkey <[email protected]>
> +R: John Stultz <[email protected]>
> +S: Maintained
> +L: [email protected]
> +L: [email protected]
> +L: [email protected] (moderated for non-subscribers)
> +F: include/uapi/linux/dma-heap.h
> +F: include/linux/dma-heap.h
> +F: drivers/dma-buf/dma-heap.c
> +F: drivers/dma-buf/heaps/*
> +T: git git://anongit.freedesktop.org/drm/drm-misc
> +
> DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
> M: Vinod Koul <[email protected]>
> L: [email protected]
> diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
> index d5f915830b68..9b93f86f597c 100644
> --- a/drivers/dma-buf/Kconfig
> +++ b/drivers/dma-buf/Kconfig
> @@ -39,4 +39,12 @@ config UDMABUF
> A driver to let userspace turn memfd regions into dma-bufs.
> Qemu can use this to create host dmabufs for guest framebuffers.
>
> +menuconfig DMABUF_HEAPS
> + bool "DMA-BUF Userland Memory Heaps"
> + select DMA_SHARED_BUFFER
> + help
> + Choose this option to enable the DMA-BUF userland memory heaps,
> + this allows userspace to allocate dma-bufs that can be shared between
> + drivers.
> +
> endmenu
> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> index e8c7310cb800..1cb3dd104825 100644
> --- a/drivers/dma-buf/Makefile
> +++ b/drivers/dma-buf/Makefile
> @@ -1,6 +1,7 @@
> # SPDX-License-Identifier: GPL-2.0-only
> obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
> reservation.o seqno-fence.o
> +obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
> obj-$(CONFIG_SYNC_FILE) += sync_file.o
> obj-$(CONFIG_SW_SYNC) += sw_sync.o sync_debug.o
> obj-$(CONFIG_UDMABUF) += udmabuf.o
> diff --git a/drivers/dma-buf/dma-heap.c b/drivers/dma-buf/dma-heap.c
> new file mode 100644
> index 000000000000..bbeaf3192a0d
> --- /dev/null
> +++ b/drivers/dma-buf/dma-heap.c
> @@ -0,0 +1,237 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Framework for userspace DMA-BUF allocations
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +
> +#include <linux/cdev.h>
> +#include <linux/debugfs.h>
> +#include <linux/device.h>
> +#include <linux/dma-buf.h>
> +#include <linux/err.h>
> +#include <linux/xarray.h>
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <linux/uaccess.h>
> +
> +#include <linux/dma-heap.h>
> +#include <uapi/linux/dma-heap.h>
> +
> +#define DEVNAME "dma_heap"
> +
> +#define NUM_HEAP_MINORS 128
> +
> +/**
> + * struct dma_heap - represents a dmabuf heap in the system
> + * @name: used for debugging/device-node name
> + * @ops: ops struct for this heap
> + * @minor: minor number of this heap device
> + * @heap_devt: heap device node
> + * @heap_cdev: heap char device
> + *
> + * Represents a heap of memory from which buffers can be made.
> + */
> +struct dma_heap {
> + const char *name;
> + struct dma_heap_ops *ops;
> + void *priv;
> + unsigned int minor;
> + dev_t heap_devt;
> + struct cdev heap_cdev;
> +};
> +
> +static dev_t dma_heap_devt;
> +static struct class *dma_heap_class;
> +static DEFINE_XARRAY_ALLOC(dma_heap_minors);
> +
> +static int dma_heap_buffer_alloc(struct dma_heap *heap, size_t len,
> + unsigned int fd_flags,
> + unsigned int heap_flags)
> +{
> + /*
> + * Allocations from all heaps have to begin
> + * and end on page boundaries.
> + */
> + len = PAGE_ALIGN(len);
> + if (!len)
> + return -EINVAL;
> +
> + return heap->ops->allocate(heap, len, fd_flags, heap_flags);
> +}
> +
> +static int dma_heap_open(struct inode *inode, struct file *file)
> +{
> + struct dma_heap *heap;
> +
> + heap = xa_load(&dma_heap_minors, iminor(inode));
> + if (!heap) {
> + pr_err("dma_heap: minor %d unknown.\n", iminor(inode));
> + return -ENODEV;
> + }
> +
> + /* instance data as context */
> + file->private_data = heap;
> + nonseekable_open(inode, file);
> +
> + return 0;
> +}
> +
> +static long dma_heap_ioctl_allocate(struct file *file, unsigned long arg)
> +{
> + struct dma_heap_allocation_data heap_allocation;
> + struct dma_heap *heap = file->private_data;
> + int fd;
> +
> + if (copy_from_user(&heap_allocation, (void __user *)arg,
> + sizeof(heap_allocation)))
> + return -EFAULT;
> +
> + if (heap_allocation.fd ||
> + heap_allocation.reserved0 ||
> + heap_allocation.reserved1) {
> + pr_warn_once("dma_heap: ioctl data not valid\n");
> + return -EINVAL;
> + }
> +
> + if (heap_allocation.fd_flags & ~DMA_HEAP_VALID_FD_FLAGS) {
> + pr_warn_once("dma_heap: fd_flags has invalid or unsupported flags set\n");
> + return -EINVAL;
> + }
> +
> + if (heap_allocation.heap_flags & ~DMA_HEAP_VALID_HEAP_FLAGS) {
> + pr_warn_once("dma_heap: heap flags has invalid or unsupported flags set\n");
> + return -EINVAL;
> + }
> +
> +
> + fd = dma_heap_buffer_alloc(heap, heap_allocation.len,
> + heap_allocation.fd_flags,
> + heap_allocation.heap_flags);
> + if (fd < 0)
> + return fd;
> +
> + heap_allocation.fd = fd;
> +
> + if (copy_to_user((void __user *)arg, &heap_allocation,
> + sizeof(heap_allocation)))
I guess there's some cleanup to be done on the dmabuf here if this copy_to_user()
fails - as it stands the fd has already been installed, so the buffer just leaks
until the process exits. There's a rough sketch of one way to handle it just after
this function.
> + return -EFAULT;
> +
> + return 0;
> +}
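For the cleanup I mentioned above, one option (just an untested sketch, and it
assumes ->allocate is reworked to hand back a struct dma_buf * rather than an
already-installed fd) would be to only install the fd once the copy back to
userspace has succeeded:

	/* assumes ->allocate is changed to return a struct dma_buf * */
	dmabuf = heap->ops->allocate(heap, len, fd_flags, heap_flags);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	fd = get_unused_fd_flags(heap_allocation.fd_flags);
	if (fd < 0) {
		dma_buf_put(dmabuf);
		return fd;
	}

	heap_allocation.fd = fd;

	if (copy_to_user((void __user *)arg, &heap_allocation,
			 sizeof(heap_allocation))) {
		put_unused_fd(fd);
		dma_buf_put(dmabuf);
		return -EFAULT;
	}

	/* nothing can fail past this point, so it's safe to expose the fd */
	fd_install(fd, dmabuf->file);

	return 0;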
> +
> +static long dma_heap_ioctl(struct file *file, unsigned int cmd,
> + unsigned long arg)
> +{
> + int ret = 0;
> +
> + switch (cmd) {
> + case DMA_HEAP_IOC_ALLOC:
> + ret = dma_heap_ioctl_allocate(file, arg);
> + break;
> + default:
> + return -ENOTTY;
> + }
> +
> + return ret;
> +}
> +
> +static const struct file_operations dma_heap_fops = {
> + .owner = THIS_MODULE,
> + .open = dma_heap_open,
> + .unlocked_ioctl = dma_heap_ioctl,
> +};
> +
> +/**
> + * dma_heap_get_data() - get per-subdriver data for the heap
> + * @heap: DMA-Heap to retrieve private data for
> + *
> + * Returns:
> + * The per-subdriver data for the heap.
> + */
> +void *dma_heap_get_data(struct dma_heap *heap)
> +{
> + return heap->priv;
> +}
> +
> +struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info)
> +{
> + struct dma_heap *heap;
> + struct device *dev_ret;
> + int ret;
> +
> + if (!exp_info->name || !strcmp(exp_info->name, "")) {
> + pr_err("dma_heap: Cannot add heap without a name\n");
> + return ERR_PTR(-EINVAL);
> + }
> +
> + if (!exp_info->ops || !exp_info->ops->allocate) {
> + pr_err("dma_heap: Cannot add heap with invalid ops struct\n");
> + return ERR_PTR(-EINVAL);
> + }
> +
> + heap = kzalloc(sizeof(*heap), GFP_KERNEL);
> + if (!heap)
> + return ERR_PTR(-ENOMEM);
It looks like 'heap' is leaked in all the error paths below - see the unwind
sketch after the xa_alloc() hunk.
> +
> + heap->name = exp_info->name;
> + heap->ops = exp_info->ops;
> + heap->priv = exp_info->priv;
> +
> + /* Find unused minor number */
> + ret = xa_alloc(&dma_heap_minors, &heap->minor, heap,
> + XA_LIMIT(0, NUM_HEAP_MINORS - 1), GFP_KERNEL);
> + if (ret < 0) {
> + pr_err("dma_heap: Unable to get minor number for heap\n");
> + return ERR_PTR(ret);
> + }
Do we need xa_erase() after this point, too?
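Something like this unwinding is what I'd have expected (just an untested
sketch) - it would also take care of the 'heap' leak mentioned above:

	ret = xa_alloc(&dma_heap_minors, &heap->minor, heap,
		       XA_LIMIT(0, NUM_HEAP_MINORS - 1), GFP_KERNEL);
	if (ret < 0) {
		pr_err("dma_heap: Unable to get minor number for heap\n");
		goto err_free;
	}

	heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);

	cdev_init(&heap->heap_cdev, &dma_heap_fops);
	ret = cdev_add(&heap->heap_cdev, heap->heap_devt, 1);
	if (ret < 0) {
		pr_err("dma_heap: Unable to add char device\n");
		goto err_xa;
	}

	dev_ret = device_create(dma_heap_class, NULL, heap->heap_devt,
				NULL, heap->name);
	if (IS_ERR(dev_ret)) {
		pr_err("dma_heap: Unable to create device\n");
		ret = PTR_ERR(dev_ret);
		goto err_cdev;
	}

	return heap;

err_cdev:
	cdev_del(&heap->heap_cdev);
err_xa:
	xa_erase(&dma_heap_minors, heap->minor);
err_free:
	kfree(heap);
	return ERR_PTR(ret);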
> +
> + /* Create device */
> + heap->heap_devt = MKDEV(MAJOR(dma_heap_devt), heap->minor);
> +
> + cdev_init(&heap->heap_cdev, &dma_heap_fops);
> + ret = cdev_add(&heap->heap_cdev, heap->heap_devt, 1);
> + if (ret < 0) {
> + pr_err("dma_heap: Unable to add char device\n");
> + return ERR_PTR(ret);
> + }
> +
> + dev_ret = device_create(dma_heap_class,
> + NULL,
> + heap->heap_devt,
> + NULL,
> + heap->name);
> + if (IS_ERR(dev_ret)) {
> + pr_err("dma_heap: Unable to create device\n");
> + cdev_del(&heap->heap_cdev);
> + return (struct dma_heap *)dev_ret;
> + }
> +
> + return heap;
> +}
> +
> +static char *dma_heap_devnode(struct device *dev, umode_t *mode)
> +{
> + return kasprintf(GFP_KERNEL, "dma_heap/%s", dev_name(dev));
> +}
> +
extra newline
> +
> +static int dma_heap_init(void)
> +{
> + int ret;
> +
> + ret = alloc_chrdev_region(&dma_heap_devt, 0, NUM_HEAP_MINORS, DEVNAME);
> + if (ret)
> + return ret;
> +
> + dma_heap_class = class_create(THIS_MODULE, DEVNAME);
> + if (IS_ERR(dma_heap_class)) {
> + unregister_chrdev_region(dma_heap_devt, NUM_HEAP_MINORS);
> + return PTR_ERR(dma_heap_class);
> + }
> + dma_heap_class->devnode = dma_heap_devnode;
> +
> + return 0;
> +}
> +subsys_initcall(dma_heap_init);
> diff --git a/include/linux/dma-heap.h b/include/linux/dma-heap.h
> new file mode 100644
> index 000000000000..7a1b633ac02f
> --- /dev/null
> +++ b/include/linux/dma-heap.h
> @@ -0,0 +1,59 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * DMABUF Heaps Allocation Infrastructure
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +
> +#ifndef _DMA_HEAPS_H
> +#define _DMA_HEAPS_H
> +
> +#include <linux/cdev.h>
> +#include <linux/types.h>
> +
> +struct dma_heap;
> +
> +/**
> + * struct dma_heap_ops - ops to operate on a given heap
> + * @allocate: allocate dmabuf and return fd
> + *
> + * allocate returns dmabuf fd on success, -errno on error.
> + */
> +struct dma_heap_ops {
> + int (*allocate)(struct dma_heap *heap,
> + unsigned long len,
> + unsigned long fd_flags,
> + unsigned long heap_flags);
> +};
> +
> +/**
> + * struct dma_heap_export_info - information needed to export a new dmabuf heap
> + * @name: used for debugging/device-node name
> + * @ops: ops struct for this heap
> + * @priv: heap exporter private data
> + *
> + * Information needed to export a new dmabuf heap.
> + */
> +struct dma_heap_export_info {
> + const char *name;
> + struct dma_heap_ops *ops;
> + void *priv;
> +};
> +
> +/**
> + * dma_heap_get_data() - get per-heap driver data
> + * @heap: DMA-Heap to retrieve private data for
> + *
> + * Returns:
> + * The per-heap data for the heap.
> + */
> +void *dma_heap_get_data(struct dma_heap *heap);
> +
> +/**
> + * dma_heap_add - adds a heap to dmabuf heaps
> + * @exp_info: information needed to register this heap
> + */
> +struct dma_heap *dma_heap_add(const struct dma_heap_export_info *exp_info);
> +
> +#endif /* _DMA_HEAPS_H */
> diff --git a/include/uapi/linux/dma-heap.h b/include/uapi/linux/dma-heap.h
> new file mode 100644
> index 000000000000..c382280277d7
> --- /dev/null
> +++ b/include/uapi/linux/dma-heap.h
> @@ -0,0 +1,56 @@
> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> +/*
> + * DMABUF Heaps Userspace API
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +#ifndef _UAPI_LINUX_DMABUF_POOL_H
> +#define _UAPI_LINUX_DMABUF_POOL_H
> +
> +#include <linux/ioctl.h>
> +#include <linux/types.h>
> +
> +/**
> + * DOC: DMABUF Heaps Userspace API
> + *
Is that line needed?
Thanks,
-Brian
> + */
> +
> +/* Valid FD_FLAGS are O_CLOEXEC, O_RDONLY, O_WRONLY, O_RDWR */
> +#define DMA_HEAP_VALID_FD_FLAGS (O_CLOEXEC | O_ACCMODE)
> +
> +/* Currently no heap flags */
> +#define DMA_HEAP_VALID_HEAP_FLAGS (0)
> +
> +/**
> + * struct dma_heap_allocation_data - metadata passed from userspace for
> + * allocations
> + * @len: size of the allocation
> + * @fd: will be populated with a fd which provides the
> + * handle to the allocated dma-buf
> + * @fd_flags: file descriptor flags used when allocating
> + * @heap_flags: flags passed to heap
> + *
> + * Provided by userspace as an argument to the ioctl
> + */
> +struct dma_heap_allocation_data {
> + __u64 len;
> + __u32 fd;
> + __u32 fd_flags;
> + __u64 heap_flags;
> + __u32 reserved0;
> + __u32 reserved1;
> +};
> +
> +#define DMA_HEAP_IOC_MAGIC 'H'
> +
> +/**
> + * DOC: DMA_HEAP_IOC_ALLOC - allocate memory from pool
> + *
> + * Takes a dma_heap_allocation_data struct and returns it with the fd field
> + * populated with the dmabuf handle of the allocation.
> + */
> +#define DMA_HEAP_IOC_ALLOC _IOWR(DMA_HEAP_IOC_MAGIC, 0, \
> + struct dma_heap_allocation_data)
> +
> +#endif /* _UAPI_LINUX_DMABUF_POOL_H */
> --
> 2.17.1
>
Hi John,
On Fri, Jun 07, 2019 at 03:07:16AM +0000, John Stultz wrote:
> Add generic helper dmabuf ops for dma heaps, so we can reduce
> the amount of duplicative code for the exported dmabufs.
>
> This code is an evolution of the Android ION implementation, so
> thanks to its original authors and maintainers:
> Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
>
> Cc: Laura Abbott <[email protected]>
> Cc: Benjamin Gaignard <[email protected]>
> Cc: Sumit Semwal <[email protected]>
> Cc: Liam Mark <[email protected]>
> Cc: Pratik Patel <[email protected]>
> Cc: Brian Starkey <[email protected]>
> Cc: Vincent Donnefort <[email protected]>
> Cc: Sudipto Paul <[email protected]>
> Cc: Andrew F. Davis <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Chenbo Feng <[email protected]>
> Cc: Alistair Strachan <[email protected]>
> Cc: [email protected]
> Reviewed-by: Benjamin Gaignard <[email protected]>
> Signed-off-by: John Stultz <[email protected]>
> Change-Id: I48d43656e7783f266d877e363116b5187639f996
> ---
> v2:
> * Removed cache management performance hack that I had
> accidentally folded in.
> * Removed stats code that was in helpers
> * Lots of checkpatch cleanups
> v3:
> * Uninline INIT_HEAP_HELPER_BUFFER (suggested by Christoph)
> * Switch to WARN on buffer destroy failure (suggested by Brian)
> * buffer->kmap_cnt decrementing cleanup (suggested by Christoph)
> * Extra buffer->vaddr checking in dma_heap_dma_buf_kmap
> (suggested by Brian)
> * Switch to_helper_buffer from macro to inline function
> (suggested by Benjamin)
> * Rename kmap->vmap (folded in from Andrew)
> * Use vmap for vmapping - not begin_cpu_access (folded in from
> Andrew)
> * Drop kmap for now, as its optional (folded in from Andrew)
> * Fold dma_heap_map_user into the single caller (folded in from
> Andrew)
> * Folded in patch from Andrew to track page list per heap not
> sglist, which simplifies the tracking logic
> v4:
> * Moved dma-heap.h change out to previous patch
> ---
> drivers/dma-buf/Makefile | 1 +
> drivers/dma-buf/heaps/Makefile | 2 +
> drivers/dma-buf/heaps/heap-helpers.c | 261 +++++++++++++++++++++++++++
> drivers/dma-buf/heaps/heap-helpers.h | 55 ++++++
> 4 files changed, 319 insertions(+)
> create mode 100644 drivers/dma-buf/heaps/Makefile
> create mode 100644 drivers/dma-buf/heaps/heap-helpers.c
> create mode 100644 drivers/dma-buf/heaps/heap-helpers.h
>
> diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> index 1cb3dd104825..e3e3dca29e46 100644
> --- a/drivers/dma-buf/Makefile
> +++ b/drivers/dma-buf/Makefile
> @@ -1,6 +1,7 @@
> # SPDX-License-Identifier: GPL-2.0-only
> obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
> reservation.o seqno-fence.o
> +obj-$(CONFIG_DMABUF_HEAPS) += heaps/
> obj-$(CONFIG_DMABUF_HEAPS) += dma-heap.o
> obj-$(CONFIG_SYNC_FILE) += sync_file.o
> obj-$(CONFIG_SW_SYNC) += sw_sync.o sync_debug.o
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> new file mode 100644
> index 000000000000..de49898112db
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -0,0 +1,2 @@
> +# SPDX-License-Identifier: GPL-2.0
> +obj-y += heap-helpers.o
> diff --git a/drivers/dma-buf/heaps/heap-helpers.c b/drivers/dma-buf/heaps/heap-helpers.c
> new file mode 100644
> index 000000000000..00cbdbbb97e5
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/heap-helpers.c
> @@ -0,0 +1,261 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <linux/device.h>
> +#include <linux/dma-buf.h>
> +#include <linux/err.h>
> +#include <linux/idr.h>
> +#include <linux/list.h>
> +#include <linux/slab.h>
> +#include <linux/uaccess.h>
> +#include <uapi/linux/dma-heap.h>
> +
> +#include "heap-helpers.h"
> +
> +void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> + void (*free)(struct heap_helper_buffer *))
> +{
> + buffer->private_flags = 0;
> + buffer->priv_virt = NULL;
> + mutex_init(&buffer->lock);
> + buffer->vmap_cnt = 0;
> + buffer->vaddr = NULL;
> + INIT_LIST_HEAD(&buffer->attachments);
> + buffer->free = free;
Should pagecount and pages be initialised here too? I'm not sure what
the rule is for picking which members do and don't get initialised.
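e.g. something like this, assuming they should be reset here rather than
relying on the caller's kzalloc():

	buffer->vaddr = NULL;
	buffer->pagecount = 0;
	buffer->pages = NULL;
	INIT_LIST_HEAD(&buffer->attachments);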
> +}
> +
extra newline
> +
> +static void *dma_heap_map_kernel(struct heap_helper_buffer *buffer)
> +{
> + void *vaddr;
> +
> + vaddr = vmap(buffer->pages, buffer->pagecount, VM_MAP, PAGE_KERNEL);
> + if (!vaddr)
> + return ERR_PTR(-ENOMEM);
> +
> + return vaddr;
> +}
> +
> +void dma_heap_buffer_destroy(struct dma_heap_buffer *heap_buffer)
can be static
> +{
> + struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> + if (buffer->vmap_cnt > 0) {
> + WARN("%s: buffer still mapped in the kernel\n",
> + __func__);
> + vunmap(buffer->vaddr);
> + }
> +
> + buffer->free(buffer);
> +}
> +
> +static void *dma_heap_buffer_vmap_get(struct dma_heap_buffer *heap_buffer)
> +{
> + struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> + void *vaddr;
> +
> + if (buffer->vmap_cnt) {
> + buffer->vmap_cnt++;
> + return buffer->vaddr;
> + }
> + vaddr = dma_heap_map_kernel(buffer);
> + if (WARN_ONCE(!vaddr,
> + "heap->ops->map_kernel should return ERR_PTR on error"))
> + return ERR_PTR(-EINVAL);
> + if (IS_ERR(vaddr))
> + return vaddr;
> + buffer->vaddr = vaddr;
> + buffer->vmap_cnt++;
> + return vaddr;
> +}
> +
> +static void dma_heap_buffer_vmap_put(struct dma_heap_buffer *heap_buffer)
> +{
> + struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> + if (!--buffer->vmap_cnt) {
> + vunmap(buffer->vaddr);
> + buffer->vaddr = NULL;
> + }
> +}
> +
> +struct dma_heaps_attachment {
> + struct device *dev;
> + struct sg_table table;
> + struct list_head list;
> +};
> +
> +static int dma_heap_attach(struct dma_buf *dmabuf,
> + struct dma_buf_attachment *attachment)
> +{
> + struct dma_heaps_attachment *a;
> + struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> + struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> + int ret;
> +
> + a = kzalloc(sizeof(*a), GFP_KERNEL);
> + if (!a)
> + return -ENOMEM;
> +
> + ret = sg_alloc_table_from_pages(&a->table, buffer->pages,
> + buffer->pagecount, 0,
> + buffer->pagecount << PAGE_SHIFT,
> + GFP_KERNEL);
> + if (ret) {
> + kfree(a);
> + return ret;
> + }
> +
> + a->dev = attachment->dev;
> + INIT_LIST_HEAD(&a->list);
> +
> + attachment->priv = a;
> +
> + mutex_lock(&buffer->lock);
> + list_add(&a->list, &buffer->attachments);
> + mutex_unlock(&buffer->lock);
> +
> + return 0;
> +}
> +
> +static void dma_heap_detatch(struct dma_buf *dmabuf,
> + struct dma_buf_attachment *attachment)
s/detatch/detach/
> +{
> + struct dma_heaps_attachment *a = attachment->priv;
> + struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> + struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> + mutex_lock(&buffer->lock);
> + list_del(&a->list);
> + mutex_unlock(&buffer->lock);
> +
> + sg_free_table(&a->table);
> + kfree(a);
> +}
> +
> +static struct sg_table *dma_heap_map_dma_buf(
> + struct dma_buf_attachment *attachment,
> + enum dma_data_direction direction)
> +{
> + struct dma_heaps_attachment *a = attachment->priv;
> + struct sg_table *table;
> +
> + table = &a->table;
> +
> + if (!dma_map_sg(attachment->dev, table->sgl, table->nents,
> + direction))
> + table = ERR_PTR(-ENOMEM);
> + return table;
> +}
> +
> +static void dma_heap_unmap_dma_buf(struct dma_buf_attachment *attachment,
> + struct sg_table *table,
> + enum dma_data_direction direction)
> +{
> + dma_unmap_sg(attachment->dev, table->sgl, table->nents, direction);
> +}
> +
> +static vm_fault_t dma_heap_vm_fault(struct vm_fault *vmf)
> +{
> + struct vm_area_struct *vma = vmf->vma;
> + struct heap_helper_buffer *buffer = vma->vm_private_data;
> +
> + vmf->page = buffer->pages[vmf->pgoff];
> + get_page(vmf->page);
> +
> + return 0;
> +}
> +
> +static const struct vm_operations_struct dma_heap_vm_ops = {
> + .fault = dma_heap_vm_fault,
> +};
> +
> +static int dma_heap_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
> +{
> + struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> + struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> + if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
> + return -EINVAL;
> +
> + vma->vm_ops = &dma_heap_vm_ops;
> + vma->vm_private_data = buffer;
> +
> + return 0;
> +}
> +
> +static void dma_heap_dma_buf_release(struct dma_buf *dmabuf)
> +{
> + struct dma_heap_buffer *buffer = dmabuf->priv;
> +
> + dma_heap_buffer_destroy(buffer);
> +}
> +
> +static int dma_heap_dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
> + enum dma_data_direction direction)
> +{
> + struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> + struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> + struct dma_heaps_attachment *a;
> + int ret = 0;
> +
> + mutex_lock(&buffer->lock);
> + list_for_each_entry(a, &buffer->attachments, list) {
> + dma_sync_sg_for_cpu(a->dev, a->table.sgl, a->table.nents,
> + direction);
> + }
> + mutex_unlock(&buffer->lock);
> +
> + return ret;
> +}
> +
> +static int dma_heap_dma_buf_end_cpu_access(struct dma_buf *dmabuf,
> + enum dma_data_direction direction)
> +{
> + struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> + struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> + struct dma_heaps_attachment *a;
> +
> + mutex_lock(&buffer->lock);
> + list_for_each_entry(a, &buffer->attachments, list) {
> + dma_sync_sg_for_device(a->dev, a->table.sgl, a->table.nents,
> + direction);
> + }
> + mutex_unlock(&buffer->lock);
> +
> + return 0;
> +}
> +
> +void *dma_heap_dma_buf_vmap(struct dma_buf *dmabuf)
can be static
> +{
> + struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> + struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> + void *vaddr;
> +
> + mutex_lock(&buffer->lock);
> + vaddr = dma_heap_buffer_vmap_get(heap_buffer);
> + mutex_unlock(&buffer->lock);
> +
> + return vaddr;
> +}
> +
> +void dma_heap_dma_buf_vunmap(struct dma_buf *dmabuf, void *vaddr)
can be static
> +{
> + struct dma_heap_buffer *heap_buffer = dmabuf->priv;
> + struct heap_helper_buffer *buffer = to_helper_buffer(heap_buffer);
> +
> + mutex_lock(&buffer->lock);
> + dma_heap_buffer_vmap_put(heap_buffer);
> + mutex_unlock(&buffer->lock);
> +}
> +
> +const struct dma_buf_ops heap_helper_ops = {
> + .map_dma_buf = dma_heap_map_dma_buf,
> + .unmap_dma_buf = dma_heap_unmap_dma_buf,
> + .mmap = dma_heap_mmap,
> + .release = dma_heap_dma_buf_release,
> + .attach = dma_heap_attach,
> + .detach = dma_heap_detatch,
> + .begin_cpu_access = dma_heap_dma_buf_begin_cpu_access,
> + .end_cpu_access = dma_heap_dma_buf_end_cpu_access,
> + .vmap = dma_heap_dma_buf_vmap,
> + .vunmap = dma_heap_dma_buf_vunmap,
> +};
> diff --git a/drivers/dma-buf/heaps/heap-helpers.h b/drivers/dma-buf/heaps/heap-helpers.h
> new file mode 100644
> index 000000000000..a17502dc22e3
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/heap-helpers.h
> @@ -0,0 +1,55 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * DMABUF Heaps helper code
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +
> +#ifndef _HEAP_HELPERS_H
> +#define _HEAP_HELPERS_H
> +
> +#include <linux/dma-heap.h>
> +#include <linux/list.h>
> +
> +/**
> + * struct dma_heap_buffer - metadata for a particular buffer
> + * @heap: back pointer to the heap the buffer came from
> + * @dmabuf: backing dma-buf for this buffer
> + * @size: size of the buffer
> + * @flags: buffer specific flags
> + */
> +struct dma_heap_buffer {
> + struct dma_heap *heap;
> + struct dma_buf *dmabuf;
> + size_t size;
> + unsigned long flags;
> +};
> +
> +struct heap_helper_buffer {
> + struct dma_heap_buffer heap_buffer;
> +
> + unsigned long private_flags;
> + void *priv_virt;
> + struct mutex lock;
> + int vmap_cnt;
> + void *vaddr;
> + pgoff_t pagecount;
> + struct page **pages;
> + struct list_head attachments;
> +
> + void (*free)(struct heap_helper_buffer *buffer);
> +
extra newline
With the statics and the detatch typo fixed:
Reviewed-by: Brian Starkey <[email protected]>
> +};
> +
> +static inline struct heap_helper_buffer *to_helper_buffer(
> + struct dma_heap_buffer *h)
> +{
> + return container_of(h, struct heap_helper_buffer, heap_buffer);
> +}
> +
> +void INIT_HEAP_HELPER_BUFFER(struct heap_helper_buffer *buffer,
> + void (*free)(struct heap_helper_buffer *));
> +extern const struct dma_buf_ops heap_helper_ops;
> +
> +#endif /* _HEAP_HELPERS_H */
> --
> 2.17.1
>
Hi,
On Fri, Jun 07, 2019 at 03:07:17AM +0000, John Stultz wrote:
> This patch adds system heap to the dma-buf heaps framework.
>
> This allows applications to get a page-allocator backed dma-buf
> for non-contiguous memory.
>
> This code is an evolution of the Android ION implementation, so
> thanks to its original authors and maintainers:
> Rebecca Schultz Zavin, Colin Cross, Laura Abbott, and others!
>
> Cc: Laura Abbott <[email protected]>
> Cc: Benjamin Gaignard <[email protected]>
> Cc: Sumit Semwal <[email protected]>
> Cc: Liam Mark <[email protected]>
> Cc: Pratik Patel <[email protected]>
> Cc: Brian Starkey <[email protected]>
> Cc: Vincent Donnefort <[email protected]>
> Cc: Sudipto Paul <[email protected]>
> Cc: Andrew F. Davis <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Chenbo Feng <[email protected]>
> Cc: Alistair Strachan <[email protected]>
> Cc: [email protected]
> Reviewed-by: Benjamin Gaignard <[email protected]>
> Signed-off-by: John Stultz <[email protected]>
> Change-Id: I4dc5ff54ccb1f7ca3ac8675661114ca33813654b
> ---
> v2:
> * Switch allocate to return dmabuf fd
> * Simplify init code
> * Checkpatch fixups
> * Dropped dead system-contig code
> v3:
> * Whitespace fixups from Benjamin
> * Make sure we're zeroing the allocated pages (from Liam)
> * Use PAGE_ALIGN() consistently (suggested by Brian)
> * Fold in new registration style from Andrew
> * Avoid needless dynamic allocation of sys_heap (suggested by
> Christoph)
> * Minor cleanups
> * Folded in changes from Andrew to use simplified page list
> from the heap helpers
> v4:
> * Optimization to allocate pages in chunks, similar to old
> pagepool code
> * Use fd_flags when creating dmabuf fd (Suggested by Benjamin)
> v5:
> * Back out large order page allocations (was leaking memory,
> as the page array didn't properly track order size)
> ---
> drivers/dma-buf/Kconfig | 2 +
> drivers/dma-buf/heaps/Kconfig | 6 ++
> drivers/dma-buf/heaps/Makefile | 1 +
> drivers/dma-buf/heaps/system_heap.c | 123 ++++++++++++++++++++++++++++
> 4 files changed, 132 insertions(+)
> create mode 100644 drivers/dma-buf/heaps/Kconfig
> create mode 100644 drivers/dma-buf/heaps/system_heap.c
>
> diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
> index 9b93f86f597c..434cfe646dad 100644
> --- a/drivers/dma-buf/Kconfig
> +++ b/drivers/dma-buf/Kconfig
> @@ -47,4 +47,6 @@ menuconfig DMABUF_HEAPS
> this allows userspace to allocate dma-bufs that can be shared between
> drivers.
>
> +source "drivers/dma-buf/heaps/Kconfig"
> +
> endmenu
> diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
> new file mode 100644
> index 000000000000..205052744169
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/Kconfig
> @@ -0,0 +1,6 @@
> +config DMABUF_HEAPS_SYSTEM
> + bool "DMA-BUF System Heap"
> + depends on DMABUF_HEAPS
> + help
> + Choose this option to enable the system dmabuf heap. The system heap
> + is backed by pages from the buddy allocator. If in doubt, say Y.
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> index de49898112db..d1808eca2581 100644
> --- a/drivers/dma-buf/heaps/Makefile
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -1,2 +1,3 @@
> # SPDX-License-Identifier: GPL-2.0
> obj-y += heap-helpers.o
> +obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> new file mode 100644
> index 000000000000..863834499ce1
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -0,0 +1,123 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * DMABUF System heap exporter
> + *
> + * Copyright (C) 2011 Google, Inc.
> + * Copyright (C) 2019 Linaro Ltd.
> + */
> +
> +#include <asm/page.h>
> +#include <linux/dma-buf.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/dma-heap.h>
> +#include <linux/err.h>
> +#include <linux/highmem.h>
> +#include <linux/mm.h>
> +#include <linux/scatterlist.h>
> +#include <linux/slab.h>
> +
> +#include "heap-helpers.h"
> +
> +struct system_heap {
> + struct dma_heap *heap;
> +} sys_heap;
> +
> +
extra newline. Otherwise, LGTM:
Reviewed-by: Brian Starkey <[email protected]>
Hi,
On Fri, Jun 07, 2019 at 03:07:18AM +0000, John Stultz wrote:
> This adds a CMA heap, which allows userspace to allocate
> a dma-buf of contiguous memory out of a CMA region.
>
> This code is an evolution of the Android ION implementation, so
> thanks to its original authors and maintainers:
> Benjamin Gaignard, Laura Abbott, and others!
>
> Cc: Laura Abbott <[email protected]>
> Cc: Benjamin Gaignard <[email protected]>
> Cc: Sumit Semwal <[email protected]>
> Cc: Liam Mark <[email protected]>
> Cc: Pratik Patel <[email protected]>
> Cc: Brian Starkey <[email protected]>
> Cc: Vincent Donnefort <[email protected]>
> Cc: Sudipto Paul <[email protected]>
> Cc: Andrew F. Davis <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Chenbo Feng <[email protected]>
> Cc: Alistair Strachan <[email protected]>
> Cc: [email protected]
> Reviewed-by: Benjamin Gaignard <[email protected]>
> Signed-off-by: John Stultz <[email protected]>
> Change-Id: Ic2b0c5dfc0dbaff5245bd1c50170c64b06c73051
> ---
> v2:
> * Switch allocate to return dmabuf fd
> * Simplify init code
> * Checkpatch fixups
> v3:
> * Switch to inline function for to_cma_heap()
> * Minor cleanups suggested by Brian
> * Fold in new registration style from Andrew
> * Folded in changes from Andrew to use simplified page list
> from the heap helpers
> v4:
> * Use the fd_flags when creating dmabuf fd (Suggested by
> Benjamin)
> * Use precalculated pagecount (Suggested by Andrew)
> ---
> drivers/dma-buf/heaps/Kconfig | 8 ++
> drivers/dma-buf/heaps/Makefile | 1 +
> drivers/dma-buf/heaps/cma_heap.c | 169 +++++++++++++++++++++++++++++++
> 3 files changed, 178 insertions(+)
> create mode 100644 drivers/dma-buf/heaps/cma_heap.c
>
> diff --git a/drivers/dma-buf/heaps/Kconfig b/drivers/dma-buf/heaps/Kconfig
> index 205052744169..a5eef06c4226 100644
> --- a/drivers/dma-buf/heaps/Kconfig
> +++ b/drivers/dma-buf/heaps/Kconfig
> @@ -4,3 +4,11 @@ config DMABUF_HEAPS_SYSTEM
> help
> Choose this option to enable the system dmabuf heap. The system heap
> is backed by pages from the buddy allocator. If in doubt, say Y.
> +
> +config DMABUF_HEAPS_CMA
> + bool "DMA-BUF CMA Heap"
> + depends on DMABUF_HEAPS && DMA_CMA
> + help
> + Choose this option to enable dma-buf CMA heap. This heap is backed
> + by the Contiguous Memory Allocator (CMA). If your system has these
> + regions, you should say Y here.
> diff --git a/drivers/dma-buf/heaps/Makefile b/drivers/dma-buf/heaps/Makefile
> index d1808eca2581..6e54cdec3da0 100644
> --- a/drivers/dma-buf/heaps/Makefile
> +++ b/drivers/dma-buf/heaps/Makefile
> @@ -1,3 +1,4 @@
> # SPDX-License-Identifier: GPL-2.0
> obj-y += heap-helpers.o
> obj-$(CONFIG_DMABUF_HEAPS_SYSTEM) += system_heap.o
> +obj-$(CONFIG_DMABUF_HEAPS_CMA) += cma_heap.o
> diff --git a/drivers/dma-buf/heaps/cma_heap.c b/drivers/dma-buf/heaps/cma_heap.c
> new file mode 100644
> index 000000000000..3d0ffbbd0a34
> --- /dev/null
> +++ b/drivers/dma-buf/heaps/cma_heap.c
> @@ -0,0 +1,169 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * DMABUF CMA heap exporter
> + *
> + * Copyright (C) 2012, 2019 Linaro Ltd.
> + * Author: <[email protected]> for ST-Ericsson.
> + */
> +
> +#include <linux/device.h>
> +#include <linux/dma-buf.h>
> +#include <linux/dma-heap.h>
> +#include <linux/slab.h>
> +#include <linux/errno.h>
> +#include <linux/err.h>
> +#include <linux/cma.h>
> +#include <linux/scatterlist.h>
> +#include <linux/highmem.h>
> +
> +#include "heap-helpers.h"
> +
> +struct cma_heap {
> + struct dma_heap *heap;
> + struct cma *cma;
> +};
> +
> +static void cma_heap_free(struct heap_helper_buffer *buffer)
> +{
> + struct cma_heap *cma_heap = dma_heap_get_data(buffer->heap_buffer.heap);
> + unsigned long nr_pages = buffer->pagecount;
> + struct page *pages = buffer->priv_virt;
> +
> + /* free page list */
> + kfree(buffer->pages);
> + /* release memory */
> + cma_release(cma_heap->cma, pages, nr_pages);
> + kfree(buffer);
Sorry for not managing to review the past couple of revisions where
the helper buffer pages were added, but:
I'm probably a bit dim, but I got a bit confused about "pages" vs
"buffer->pages" here and in the allocate function.
Could I suggest a different name for the CMA allocation? Even simply
"cma_pages" would be clearer IMO.
Otherwise LGTM:
Reviewed-by: Brian Starkey <[email protected]>
> +}
> +
> +/* dmabuf heap CMA operations functions */
> +static int cma_heap_allocate(struct dma_heap *heap,
> + unsigned long len,
> + unsigned long fd_flags,
> + unsigned long heap_flags)
> +{
> + struct cma_heap *cma_heap = dma_heap_get_data(heap);
> + struct heap_helper_buffer *helper_buffer;
> + struct page *pages;
> + size_t size = PAGE_ALIGN(len);
> + unsigned long nr_pages = size >> PAGE_SHIFT;
> + unsigned long align = get_order(size);
> + DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
> + struct dma_buf *dmabuf;
> + int ret = -ENOMEM;
> + pgoff_t pg;
> +
> + if (align > CONFIG_CMA_ALIGNMENT)
> + align = CONFIG_CMA_ALIGNMENT;
> +
> + helper_buffer = kzalloc(sizeof(*helper_buffer), GFP_KERNEL);
> + if (!helper_buffer)
> + return -ENOMEM;
> +
> + INIT_HEAP_HELPER_BUFFER(helper_buffer, cma_heap_free);
> + helper_buffer->heap_buffer.flags = heap_flags;
> + helper_buffer->heap_buffer.heap = heap;
> + helper_buffer->heap_buffer.size = len;
> +
> + pages = cma_alloc(cma_heap->cma, nr_pages, align, false);
> + if (!pages)
> + goto free_buf;
> +
> + if (PageHighMem(pages)) {
> + unsigned long nr_clear_pages = nr_pages;
> + struct page *page = pages;
> +
> + while (nr_clear_pages > 0) {
> + void *vaddr = kmap_atomic(page);
> +
> + memset(vaddr, 0, PAGE_SIZE);
> + kunmap_atomic(vaddr);
> + page++;
> + nr_clear_pages--;
> + }
> + } else {
> + memset(page_address(pages), 0, size);
> + }
> +
> + helper_buffer->pagecount = nr_pages;
> + helper_buffer->pages = kmalloc_array(helper_buffer->pagecount,
> + sizeof(*helper_buffer->pages),
> + GFP_KERNEL);
> + if (!helper_buffer->pages) {
> + ret = -ENOMEM;
> + goto free_cma;
> + }
> +
> + for (pg = 0; pg < helper_buffer->pagecount; pg++) {
> + helper_buffer->pages[pg] = &pages[pg];
> + if (!helper_buffer->pages[pg])
> + goto free_pages;
> + }
> +
> + /* create the dmabuf */
> + exp_info.ops = &heap_helper_ops;
> + exp_info.size = len;
> + exp_info.flags = fd_flags;
> + exp_info.priv = &helper_buffer->heap_buffer;
> + dmabuf = dma_buf_export(&exp_info);
> + if (IS_ERR(dmabuf)) {
> + ret = PTR_ERR(dmabuf);
> + goto free_pages;
> + }
> +
> + helper_buffer->heap_buffer.dmabuf = dmabuf;
> + helper_buffer->priv_virt = pages;
> +
> + ret = dma_buf_fd(dmabuf, fd_flags);
> + if (ret < 0) {
> + dma_buf_put(dmabuf);
> + /* just return, as put will call release and that will free */
> + return ret;
> + }
> +
> + return ret;
> +
> +free_pages:
> + kfree(helper_buffer->pages);
> +free_cma:
> + cma_release(cma_heap->cma, pages, nr_pages);
> +free_buf:
> + kfree(helper_buffer);
> + return ret;
> +}
> +
> +static struct dma_heap_ops cma_heap_ops = {
> + .allocate = cma_heap_allocate,
> +};
> +
> +static int __add_cma_heap(struct cma *cma, void *data)
> +{
> + struct cma_heap *cma_heap;
> + struct dma_heap_export_info exp_info;
> +
> + cma_heap = kzalloc(sizeof(*cma_heap), GFP_KERNEL);
> + if (!cma_heap)
> + return -ENOMEM;
> + cma_heap->cma = cma;
> +
> + exp_info.name = cma_get_name(cma);
> + exp_info.ops = &cma_heap_ops;
> + exp_info.priv = cma_heap;
> +
> + cma_heap->heap = dma_heap_add(&exp_info);
> + if (IS_ERR(cma_heap->heap)) {
> + int ret = PTR_ERR(cma_heap->heap);
> +
> + kfree(cma_heap);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static int add_cma_heaps(void)
> +{
> + cma_for_each_area(__add_cma_heap, NULL);
> + return 0;
> +}
> +device_initcall(add_cma_heaps);
> --
> 2.17.1
>
Hi John,
I haven't looked at any selftests before, but is there any advantage
to using the test harness?
https://www.kernel.org/doc/html/v5.1/dev-tools/kselftest.html#test-harness
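For example (just an untested sketch, replacing main(), and assuming a heap
named "system" is present), the harness would give you structured pass/fail
reporting for free:

	#include "../kselftest_harness.h"

	TEST(dmabuf_heap_alloc)
	{
		int heap_fd, dmabuf_fd = -1;

		/* assumes a "system" heap; adjust to whatever heaps exist */
		heap_fd = dmabuf_heap_open("system");
		ASSERT_GE(heap_fd, 0);

		ASSERT_EQ(dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd), 0);
		ASSERT_GE(dmabuf_fd, 0);

		close(dmabuf_fd);
		close(heap_fd);
	}

	TEST_HARNESS_MAIN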
Couple of minor things below
On Fri, Jun 07, 2019 at 03:07:19AM +0000, John Stultz wrote:
> Add very trivial allocation and import test for dma-heaps,
> utilizing the vgem driver as a test importer.
>
> A good chunk of this code taken from:
> tools/testing/selftests/android/ion/ionmap_test.c
> Originally by Laura Abbott <[email protected]>
>
> Cc: Benjamin Gaignard <[email protected]>
> Cc: Sumit Semwal <[email protected]>
> Cc: Liam Mark <[email protected]>
> Cc: Pratik Patel <[email protected]>
> Cc: Brian Starkey <[email protected]>
> Cc: Vincent Donnefort <[email protected]>
> Cc: Sudipto Paul <[email protected]>
> Cc: Andrew F. Davis <[email protected]>
> Cc: Christoph Hellwig <[email protected]>
> Cc: Chenbo Feng <[email protected]>
> Cc: Alistair Strachan <[email protected]>
> Cc: [email protected]
> Reviewed-by: Benjamin Gaignard <[email protected]>
> Signed-off-by: John Stultz <[email protected]>
> Change-Id: Ib98569fdda6378eb086b8092fb5d6bd419b8d431
> ---
> v2:
> * Switched to use reworked dma-heap apis
> v3:
> * Add simple mmap
> * Utilize dma-buf testdev to test importing
> v4:
> * Rework to use vgem
> * Pass in fd_flags to match interface changes
> * Skip . and .. dirs
> ---
> tools/testing/selftests/dmabuf-heaps/Makefile | 11 +
> .../selftests/dmabuf-heaps/dmabuf-heap.c | 232 ++++++++++++++++++
> 2 files changed, 243 insertions(+)
> create mode 100644 tools/testing/selftests/dmabuf-heaps/Makefile
> create mode 100644 tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
>
> diff --git a/tools/testing/selftests/dmabuf-heaps/Makefile b/tools/testing/selftests/dmabuf-heaps/Makefile
> new file mode 100644
> index 000000000000..c414ad36b4bf
> --- /dev/null
> +++ b/tools/testing/selftests/dmabuf-heaps/Makefile
> @@ -0,0 +1,11 @@
> +# SPDX-License-Identifier: GPL-2.0
> +CFLAGS += -static -O3 -Wl,-no-as-needed -Wall
> +#LDLIBS += -lrt -lpthread -lm
> +
> +# these are all "safe" tests that don't modify
> +# system time or require escalated privileges
> +TEST_GEN_PROGS = dmabuf-heap
> +
newline
> +
> +include ../lib.mk
> +
> diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> new file mode 100644
> index 000000000000..33d4b105c673
> --- /dev/null
> +++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
> @@ -0,0 +1,232 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +#include <dirent.h>
> +#include <errno.h>
> +#include <fcntl.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <stdint.h>
> +#include <string.h>
> +#include <unistd.h>
> +#include <sys/ioctl.h>
> +#include <sys/mman.h>
> +#include <sys/types.h>
> +
> +#include <linux/dma-buf.h>
> +#include <drm/drm.h>
> +
> +
> +#include "../../../../include/uapi/linux/dma-heap.h"
> +
> +#define DEVPATH "/dev/dma_heap"
> +
> +int check_vgem(int fd)
Functions can be static
> +{
> + drm_version_t version = { 0 };
> + char name[5];
> + int ret;
> +
> + version.name_len = 4;
> + version.name = name;
> +
> + ret = ioctl(fd, DRM_IOCTL_VERSION, &version);
> + if (ret)
> + return 1;
> +
> + return strcmp(name, "vgem");
> +}
> +
> +int open_vgem(void)
> +{
> + int i, fd;
> + const char *drmstr = "/dev/dri/card";
> +
> + fd = -1;
> + for (i = 0; i < 16; i++) {
> + char name[80];
> +
> + sprintf(name, "%s%u", drmstr, i);
> +
> + fd = open(name, O_RDWR);
> + if (fd < 0)
> + continue;
> +
> + if (check_vgem(fd)) {
It's a minor thing, but the naming vs the logic reads backwards to me
here. I'd expect check_vgem() to return true for vgem.
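i.e. something like (just a sketch):

	static int is_vgem(int fd)
	{
		drm_version_t version = { 0 };
		char name[5] = "";

		version.name_len = 4;
		version.name = name;

		if (ioctl(fd, DRM_IOCTL_VERSION, &version))
			return 0;

		return strcmp(name, "vgem") == 0;
	}

and then the caller reads naturally as `if (is_vgem(fd)) break;`.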
> + close(fd);
> + continue;
> + } else {
> + break;
> + }
> +
> + }
> + return fd;
> +}
> +
> +int import_vgem_fd(int vgem_fd, int dma_buf_fd, uint32_t *handle)
> +{
> + struct drm_prime_handle import_handle = { 0 };
> + int ret;
> +
> + import_handle.fd = dma_buf_fd;
> + import_handle.flags = 0;
> + import_handle.handle = 0;
You could just initialise import_handle directly. Same for the other
functions.
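e.g.:

	struct drm_prime_handle import_handle = {
		.fd = dma_buf_fd,
		.flags = 0,
		.handle = 0,
	};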
> +
> + ret = ioctl(vgem_fd, DRM_IOCTL_PRIME_FD_TO_HANDLE, &import_handle);
> + if (ret == 0)
> + *handle = import_handle.handle;
> + return ret;
> +}
> +
> +void close_handle(int vgem_fd, uint32_t handle)
> +{
> + struct drm_gem_close close = { 0 };
> +
> + close.handle = handle;
> + ioctl(vgem_fd, DRM_IOCTL_GEM_CLOSE, &close);
> +}
> +
> +
> +int dmabuf_heap_open(char *name)
> +{
> + int ret, fd;
> + char buf[256];
> +
> + ret = sprintf(buf, "%s/%s", DEVPATH, name);
> + if (ret < 0) {
> + printf("sprintf failed!\n");
> + return ret;
> + }
> +
> + fd = open(buf, O_RDWR);
> + if (fd < 0)
> + printf("open %s failed!\n", buf);
> + return fd;
> +}
> +
> +int dmabuf_heap_alloc(int fd, size_t len, unsigned int flags, int *dmabuf_fd)
> +{
> + struct dma_heap_allocation_data data = {
> + .len = len,
> + .fd_flags = O_RDWR | O_CLOEXEC,
> + .heap_flags = flags,
> + };
Like this :-)
> + int ret;
> +
> + if (dmabuf_fd == NULL)
> + return -EINVAL;
> +
> + ret = ioctl(fd, DMA_HEAP_IOC_ALLOC, &data);
> + if (ret < 0)
> + return ret;
> + *dmabuf_fd = (int)data.fd;
> + return ret;
> +}
> +
> +void dmabuf_sync(int fd, int start_stop)
> +{
> + struct dma_buf_sync sync = { 0 };
> + int ret;
> +
> + sync.flags = start_stop | DMA_BUF_SYNC_RW;
> + ret = ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
> + if (ret)
> + printf("sync failed %d\n", errno);
> +
newline
> +}
> +
> +#define ONE_MEG (1024*1024)
> +
> +void do_test(char *heap_name)
> +{
> + int heap_fd = -1, dmabuf_fd = -1, importer_fd = -1;
> + uint32_t handle = 0;
> + void *p = NULL;
> + int ret;
> +
> + printf("Testing heap: %s\n", heap_name);
> +
> + heap_fd = dmabuf_heap_open(heap_name);
> + if (heap_fd < 0)
> + return;
> +
> + printf("Allocating 1 MEG\n");
> + ret = dmabuf_heap_alloc(heap_fd, ONE_MEG, 0, &dmabuf_fd);
> + if (ret)
It'd be good to print something here so you can easily tell if
allocations are failing.
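e.g.:

	if (ret) {
		printf("Allocation failed: %d\n", ret);
		goto out;
	}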
> + goto out;
> +
> + /* mmap and write a simple pattern */
> + p = mmap(NULL,
> + ONE_MEG,
> + PROT_READ | PROT_WRITE,
> + MAP_SHARED,
> + dmabuf_fd,
> + 0);
> + if (p == MAP_FAILED) {
> + printf("mmap() failed: %m\n");
> + goto out;
> + }
> + printf("mmap passed\n");
> +
> +
> + dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_START);
> +
> + memset(p, 1, ONE_MEG/2);
> + memset((char *)p+ONE_MEG/2, 0, ONE_MEG/2);
Are the selftests using the kernel coding style? If so, there are some
spaces missing.
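e.g.:

	memset(p, 1, ONE_MEG / 2);
	memset((char *)p + ONE_MEG / 2, 0, ONE_MEG / 2);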
> + dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_END);
> +
> + importer_fd = open_vgem();
> + if (importer_fd < 0) {
> + ret = importer_fd;
> + printf("Failed to open vgem\n");
> + goto out;
> + }
> +
> + ret = import_vgem_fd(importer_fd, dmabuf_fd, &handle);
> + if (ret < 0) {
> + printf("Failed to import buffer\n");
> + goto out;
> + }
> + printf("import passed\n");
> +
> + dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_START);
> + memset(p, 0xff, ONE_MEG);
> + dmabuf_sync(dmabuf_fd, DMA_BUF_SYNC_END);
> + printf("syncs passed\n");
> +
> + close_handle(importer_fd, handle);
> + ret = 0;
No need for this - ret isn't used again after this point.
> +
> +out:
> + if (p)
> + munmap(p, ONE_MEG);
> + if (importer_fd >= 0)
> + close(importer_fd);
> + if (dmabuf_fd >= 0)
> + close(dmabuf_fd);
> + if (heap_fd >= 0)
> + close(heap_fd);
> +}
> +
> +
> +int main(void)
> +{
> + DIR *d;
> + struct dirent *dir;
> +
> + d = opendir(DEVPATH);
> + if (!d) {
> + printf("No %s directory?\n", DEVPATH);
> + return -1;
> + }
> +
> + while ((dir = readdir(d)) != NULL) {
> + if (!strncmp(dir->d_name, ".", 2))
> + continue;
> + if (!strncmp(dir->d_name, "..", 3))
> + continue;
> +
> + do_test(dir->d_name);
> + }
> +
I know it's only a test, and you're about to exit, but you should
probably still closedir() in case someone copies this code.
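i.e.:

	closedir(d);

	return 0;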
Cheers,
-Brian
> + return 0;
> +}
> --
> 2.17.1
>