Uacce (Unified/User-space-access-intended Accelerator Framework) aims to
provide Shared Virtual Addressing (SVA) between accelerators and processes,
so an accelerator can access any data structure of the main CPU.
This differs from data sharing between the CPU and an IO device, which
shares data content rather than addresses.
With a unified address space, the hardware and the user space of a process
can share the same virtual addresses during communication.
Uacce is intended to be used with Jean-Philippe Brucker's SVA
patchset [1], which enables IO-side page faults and PASID support.
We keep verifying against Jean's sva/current branch [2]
and against Eric's SMMUv3 nested-stage patches [3].
This series and the related zip & qm drivers:
https://github.com/Linaro/linux-kernel-warpdrive/tree/5.3-rc1-warpdrive-v2
The library and user application:
https://github.com/Linaro/warpdrive/tree/5.3-rc1-v2
References:
[1] http://jpbrucker.net/sva/
[2] http://www.linux-arm.org/git?p=linux-jpb.git;a=shortlog;h=refs/heads/sva/current
[3] https://github.com/eauger/linux/tree/v5.3.0-rc0-2stage-v9
Change History:
v2:
Addressed comments from Greg and Jonathan
Modified the uacce_register interface
Dropped noiommu mode for now
v1:
1. Rebase to 5.3-rc1
2. Build on iommu interface
3. Verified with Jean's SVA and Eric's nested-mode IOMMU patches.
4. The user library has developed a lot: it now supports zlib, openssl, etc.
5. Moved to misc first
RFC3:
https://lkml.org/lkml/2018/11/12/1951
RFC2:
https://lwn.net/Articles/763990/
Background of why Uacce:
A Von Neumann processor is not good at general data manipulation.
It is designed for control-bound rather than data-bound applications.
The latter need less control-path facility and more, or more specific, ALUs.
So more and more heterogeneous processors, such as
encryption/decryption accelerators, TPUs, or
EDGE (Explicit Data Graph Execution) processors, are being introduced
these days to gain better performance or power efficiency for particular
applications.
There are generally two ways to make use of these heterogeneous processors.
The first is to make them co-processors, like the FPU.
This is good for some applications, but it has its cons:
it changes the ISA permanently; all state elements must be saved when the
process is switched out, yet most data-bound processors have a huge set of
state elements; and it makes the kernel scheduler more complex.
The second is the accelerator model. The accelerator is seen as an IO
device from the CPU's point of view (though it need not be one physically).
The process, running on the CPU, holds a context of the accelerator and
sends instructions to it as if it were calling a function or a thread
running with the FPU.
The context is bound to the accelerator itself, so the state elements
remain in the hardware context until the context is released.
We believe this is the core feature of an "accelerator", as opposed to a
co-processor or other heterogeneous processors.
The intention of Uacce is to provide the basic facilities to back this
scenario. The first step is to make sure the accelerator and the process
can share the same address space, so the accelerator ISA can directly
address any data structure of the main CPU.
This differs from data sharing between the CPU and an IO device, which
shares data content rather than addresses; it is therefore different from
other DMA libraries.
In the future, we may add more facilities to support linking an
accelerator library into the main application, or managing the accelerator
context as a special thread.
In any case, this is a solid starting point for a new processor to be used
as an "accelerator", as this is the essential requirement.
Kenneth Lee (2):
uacce: Add documents for uacce
uacce: add uacce driver
Documentation/ABI/testing/sysfs-driver-uacce | 47 ++
Documentation/misc-devices/uacce.rst | 335 ++++++++
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/uacce/Kconfig | 13 +
drivers/misc/uacce/Makefile | 2 +
drivers/misc/uacce/uacce.c | 1086 ++++++++++++++++++++++++++
include/linux/uacce.h | 172 ++++
include/uapi/misc/uacce.h | 39 +
9 files changed, 1696 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-driver-uacce
create mode 100644 Documentation/misc-devices/uacce.rst
create mode 100644 drivers/misc/uacce/Kconfig
create mode 100644 drivers/misc/uacce/Makefile
create mode 100644 drivers/misc/uacce/uacce.c
create mode 100644 include/linux/uacce.h
create mode 100644 include/uapi/misc/uacce.h
--
2.7.4
From: Kenneth Lee <[email protected]>
Uacce (Unified/User-space-access-intended Accelerator Framework) is
a kernel module that aims to provide Shared Virtual Addressing (SVA)
between an accelerator and a process.
This patch adds a document explaining how it works.
Signed-off-by: Kenneth Lee <[email protected]>
Signed-off-by: Zaibo Xu <[email protected]>
Signed-off-by: Zhou Wang <[email protected]>
Signed-off-by: Zhangfei Gao <[email protected]>
---
Documentation/misc-devices/uacce.rst | 309 +++++++++++++++++++++++++++++++++++
1 file changed, 309 insertions(+)
create mode 100644 Documentation/misc-devices/uacce.rst
diff --git a/Documentation/misc-devices/uacce.rst b/Documentation/misc-devices/uacce.rst
new file mode 100644
index 0000000..a2cbd00
--- /dev/null
+++ b/Documentation/misc-devices/uacce.rst
@@ -0,0 +1,309 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Introduction of Uacce
+=========================
+
+Uacce (Unified/User-space-access-intended Accelerator Framework) aims to
+provide Shared Virtual Addressing (SVA) between accelerators and processes,
+so an accelerator can access any data structure of the main CPU.
+This differs from data sharing between the CPU and an IO device, which
+shares data content rather than addresses.
+Because of the unified address space, hardware and the user space of a
+process can share the same virtual addresses during communication.
+Uacce treats the hardware accelerator as a heterogeneous processor: the
+IOMMU shares the same CPU page tables and, as a result, the same
+translation from VA to PA.
+
+ __________________________ __________________________
+ | | | |
+ | User application (CPU) | | Hardware Accelerator |
+ |__________________________| |__________________________|
+
+ | |
+ | va | va
+ V V
+ __________ __________
+ | | | |
+ | MMU | | IOMMU |
+ |__________| |__________|
+ | |
+ | |
+ V pa V pa
+ _______________________________________
+ | |
+ | Memory |
+ |_______________________________________|
+
+
+
+Architecture
+------------
+
+Uacce is the kernel module in charge of the IOMMU and address sharing.
+The user-space drivers and libraries are called WarpDrive.
+
+A virtual concept, the queue, is used for communication. It provides a
+FIFO-like interface and maintains a unified address space between the
+application and all involved hardware.
+
+ ___________________ ________________
+ | | user API | |
+ | WarpDrive library | ------------> | user driver |
+ |___________________| |________________|
+ | |
+ | |
+ | queue fd |
+ | |
+ | |
+ v |
+ ___________________ _________ |
+ | | | | | mmap memory
+ | Other framework | | uacce | | r/w interface
+ | crypto/nic/others | |_________| |
+ |___________________| |
+ | | |
+ | register | register |
+ | | |
+ | | |
+ | _________________ __________ |
+ | | | | | |
+ ------------- | Device Driver | | IOMMU | |
+ |_________________| |__________| |
+ | |
+ | V
+ | ___________________
+ | | |
+ -------------------------- | Device(Hardware) |
+ |___________________|
+
+
+How does it work
+================
+
+Uacce uses mmap and the IOMMU to do the trick.
+
+Uacce creates a chrdev for every device registered to it. A new queue is
+created when a user application opens the chrdev, and the file descriptor
+is used as the user handle of the queue.
+The accelerator device presents itself as a Uacce object, which is exported
+as a chrdev to user space. The user application communicates with the
+hardware by ioctl (the control path) or shared memory (the data path).
+
+The control path to the hardware is via file operations, while the data
+path is via the mmap space of the queue fd.
+
+The queue file address space: ::
+
+    enum uacce_qfrt {
+        UACCE_QFRT_MMIO = 0,  /* device mmio region */
+        UACCE_QFRT_DKO  = 1,  /* device kernel-only region */
+        UACCE_QFRT_DUS  = 2,  /* device user share region */
+        UACCE_QFRT_SS   = 3,  /* static shared memory (for non-sva devices) */
+        UACCE_QFRT_MAX,
+    };
+
+All regions are optional and differ from one device type to another. The
+communication protocol is wrapped by the user driver.
+
+The device MMIO region is mapped to the hardware MMIO space. It is generally
+used for doorbells or other notifications to the hardware. It is not fast
+enough to serve as a data channel.
+
+The device kernel-only region is necessary only if the device IOMMU has no
+PASID support or cannot issue kernel-only address requests. In this case, if
+the kernel needs to share memory with the device, it has to share the iova
+address space with the user process via mmap, to prevent iova conflicts.
+
+The device user share region is used to share data buffers between the user
+process and the device. It can be merged into other regions, but a separate
+region helps with device state management. For example, the device can be
+started when this region is mapped.
+
+The static shared virtual memory region is used to share data buffers with
+the device and can be shared among queues and devices.
+Its size is set according to the application's requirements.
+
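+As a rough illustration of how a user process addresses these regions:
+each region starts at a fixed page offset inside the queue file (published
+to user space via the qfrs_offset sysfs attribute), and the mmap offset
+for a region is that page start shifted into bytes. The sketch below is
+only an illustration; the page-start values are made-up examples, not real
+device numbers, and the helper name is hypothetical.

```c
#include <assert.h>

/* Mirrors enum uacce_qfrt from this document. */
enum uacce_qfrt {
	UACCE_QFRT_MMIO = 0,	/* device mmio region */
	UACCE_QFRT_DKO  = 1,	/* device kernel-only region */
	UACCE_QFRT_DUS  = 2,	/* device user share region */
	UACCE_QFRT_SS   = 3,	/* static shared memory */
	UACCE_QFRT_MAX,
};

#define PAGE_SHIFT 12		/* assumption: 4 KiB pages */

/*
 * Example page starts for each region, as a user driver might read them
 * from qfrs_offset.  These numbers are invented for illustration only.
 */
static const unsigned long qf_pg_start[UACCE_QFRT_MAX] = {
	[UACCE_QFRT_MMIO] = 0,
	[UACCE_QFRT_DKO]  = 16,
	[UACCE_QFRT_DUS]  = 32,
	[UACCE_QFRT_SS]   = 1024,
};

/* Byte offset to pass to mmap() to select a given region type. */
static inline unsigned long uacce_qfr_mmap_off(enum uacce_qfrt type)
{
	return qf_pg_start[type] << PAGE_SHIFT;
}
```

+A user driver would then select, say, the DUS region with
+mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd,
+uacce_qfr_mmap_off(UACCE_QFRT_DUS)).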
+
+The user API
+------------
+
+We adopt a polling-style interface in user space: ::
+
+ int wd_request_queue(struct wd_queue *q);
+ void wd_release_queue(struct wd_queue *q);
+ int wd_send(struct wd_queue *q, void *req);
+ int wd_recv(struct wd_queue *q, void **req);
+ int wd_recv_sync(struct wd_queue *q, void **req);
+ void wd_flush(struct wd_queue *q);
+
+wd_recv_sync() is a wrapper around its non-sync version. It traps into the
+kernel and waits until the queue becomes available.
+
+If the queue does not support SVA/SVM, the following helper functions
+can be used to create static shared virtual memory: ::
+
+ void *wd_reserve_memory(struct wd_queue *q, size_t size);
+ int wd_share_reserved_memory(struct wd_queue *q,
+ struct wd_queue *target_q);
+
+The user API is not mandatory. It is simply a suggestion and a hint of
+what the kernel interface is supposed to be.
+
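+The call pattern this API implies can be shown with a toy, single-process
+model in which the "hardware" is just a ring buffer. This is only a sketch
+of the polling semantics, not the real WarpDrive implementation; everything
+except the wd_* prototypes above is made up.

```c
#define WD_Q_DEPTH 8	/* toy queue depth, power of two */

struct wd_queue {
	void *ring[WD_Q_DEPTH];
	unsigned int head, tail;
};

static int wd_request_queue(struct wd_queue *q)
{
	q->head = q->tail = 0;	/* the real call opens the chrdev instead */
	return 0;
}

static void wd_release_queue(struct wd_queue *q)
{
	(void)q;		/* the real call closes the queue fd */
}

/* Returns 0 on success, -1 when the queue is full (caller retries). */
static int wd_send(struct wd_queue *q, void *req)
{
	if (q->head - q->tail == WD_Q_DEPTH)
		return -1;
	q->ring[q->head++ % WD_Q_DEPTH] = req;
	return 0;
}

/* Returns 0 on success, -1 when nothing is ready yet (caller polls). */
static int wd_recv(struct wd_queue *q, void **req)
{
	if (q->head == q->tail)
		return -1;
	*req = q->ring[q->tail++ % WD_Q_DEPTH];
	return 0;
}
```

+A real caller loops on wd_recv() until it returns 0, or calls
+wd_recv_sync() to sleep in the kernel instead of spinning.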
+
+The user driver
+---------------
+
+The queue file mmap space needs a user driver to wrap the communication
+protocol. Uacce provides some attributes in sysfs for the user driver to
+match the right accelerator accordingly.
+See Documentation/ABI/testing/sysfs-driver-uacce for details.
+
+
+The Uacce register API
+-----------------------
+The register API is defined in uacce.h: ::
+
+    struct uacce_interface {
+        char name[32];
+        unsigned int flags;
+        struct uacce_ops *ops;
+    };
+
+    struct uacce *uacce_register(struct device *parent,
+                                 struct uacce_interface *interface);
+    void uacce_unregister(struct uacce *uacce);
+    void uacce_wake_up(struct uacce_queue *q);
+
+
+According to the IOMMU capability, Uacce categorizes devices as below:
+
+UACCE_DEV_SVA (UACCE_DEV_PASID | UACCE_DEV_FAULT_FROM_DEV)
+        The device has an IOMMU which can share the same page table with
+        the user process.
+
+UACCE_DEV_SHARE_DOMAIN
+        This is used for devices which do not support PASID.
+
+
+The Memory Sharing Model
+------------------------
+The ideal form of a Uacce device supports SVM/SVA. We build this upon
+Jean-Philippe Brucker's SVA patches [1].
+
+If the hardware supports SVA, the user process's page table is shared with
+the opened queue, so the device can access any address in the process
+address space and can raise a page fault if the physical page is not yet
+available. It can also access addresses in kernel space, which are referred
+to by another page table particular to the kernel. Most IOMMU
+implementations handle this with a tag on the device's address request. For
+example, the ARM SMMU uses the SSV bit to indicate whether the address
+request is for kernel or user space.
+
+The device attribute UACCE_DEV_SVA indicates this capability of the device.
+It is a combination of UACCE_DEV_FAULT_FROM_DEV and UACCE_DEV_PASID.
+
+If the device supports UACCE_DEV_PASID but not UACCE_DEV_FAULT_FROM_DEV,
+Uacce creates an unmanaged iommu_domain for the device, so it can be
+bound to multiple processes. In this case, the device cannot share the user
+page table directly. The user process must map the static share (SS) queue
+file region to create the connection; the Uacce kernel module allocates
+physical memory in the region for both the device and the user process.
+
+If the device supports neither, there is no way for Uacce to support
+multiple processes: each such device allows only one process at a time. In
+this case, the DMA API cannot be used with this device. If the device
+driver needs to share memory with the device, it should use the DKO
+(kernel-only) queue file region instead. This region is mmapped from user
+space but valid only for the kernel.
+
+We suggest the driver use a uacce_mode module parameter to choose the
+working mode of the device. It can be:
+
+UACCE_MODE_NOUACCE (0)
+        Do not register to Uacce. In this mode, the driver can register to
+        other kernel frameworks, such as crypto.
+
+UACCE_MODE_UACCE (1)
+        Register to Uacce. In this mode, the driver registers to Uacce. It
+        can also register to other kernel frameworks depending on whether
+        it supports PASID.
+
+
+The Fork Scenario
+=================
+For a process with allocated queues and shared memory, what happens if it
+forks a child?
+
+The fd of the queue is duplicated on fork, so the child can send requests
+to the same queue as its parent. But requests sent from any process other
+than the one that opened the queue will be blocked.
+
+It is recommended to add O_CLOEXEC to the queue file.
+
+The queue mmap space has VM_DONTCOPY set in its VMA, so the child loses
+all those VMAs.
+
+This is one reason why Uacce does not adopt the model used by VFIO and
+InfiniBand. Both solutions can hand any user pointer to the hardware for
+sharing, but they cannot support fork while DMA is in progress: the
+"Copy-On-Write" procedure would make the parent process lose its physical
+pages.
+
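+The VM_DONTCOPY behaviour can be observed from user space without the
+device: madvise(MADV_DONTFORK) sets the same VMA flag on an ordinary
+anonymous mapping, which stands in below for the real queue mmap space.
+This is a sketch assuming Linux; the helper name is hypothetical.

```c
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/*
 * Returns 1 if a child forked after MADV_DONTFORK (i.e. VM_DONTCOPY)
 * indeed lost the mapping, 0 if it kept it, -1 on setup failure.
 */
static int child_loses_mapping(void)
{
	long pg = sysconf(_SC_PAGESIZE);
	unsigned char vec;
	int status;
	pid_t pid;
	char *buf = mmap(NULL, pg, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED || madvise(buf, pg, MADV_DONTFORK))
		return -1;
	buf[0] = 42;	/* fault the page in */

	pid = fork();
	if (pid == 0)	/* child: the VMA is gone, so mincore() fails */
		_exit(mincore(buf, pg, &vec) == -1 ? 1 : 0);

	waitpid(pid, &status, 0);
	munmap(buf, pg);
	return WEXITSTATUS(status);
}
```

+The same effect applies to the queue mmap space: after fork, the child
+holds the duplicated fd but none of the queue mappings.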
+
+Difference from the VFIO and IB frameworks
+-------------------------------------------
+The essential function of Uacce is to let the device access user addresses
+directly. Many device drivers in the kernel do the same, and both VFIO and
+IB provide similar functions at the framework level.
+
+But Uacce has a different goal: "address space sharing". It does not treat
+a request to the accelerator as an enclosed data structure; it treats the
+accelerator as another thread of the same process, so the accelerator can
+refer to any address used by the process.
+
+Both VFIO and IB treat this as "memory sharing", not "address sharing".
+They care more about sharing a block of memory. But if an address is
+stored in the block and refers to another memory region, that address may
+not be valid.
+
+By adding more constraints to the VFIO and IB frameworks we might, in some
+sense, achieve a similar goal. But we finally gave that up: both VFIO and
+IB carry extra assumptions that are unnecessary for Uacce, and the two
+could hurt each other if merged.
+
+VFIO manages the resources of a piece of hardware as a "virtual device".
+If a device needs to serve a separate application, it must isolate the
+resources as a separate virtual device, and the life cycles of the
+application and the virtual device are unrelated, which is unnecessary
+here. Most of the concepts needed to make it a "device", such as bus,
+driver, and probe, are unnecessary as well, and the logic added to VFIO
+for address sharing does not help in "creating a virtual device".
+
+IB defines a "verbs" standard for sharing memory regions with a remote
+entity. Most of these verbs exist to synchronize memory regions between
+entities. This is not what an accelerator needs: the accelerator is in the
+same memory system as the CPU, referring to the same memory as the CPU and
+other devices, so local memory terms are good enough for it and extra
+"verbs" are unnecessary. Its queue (like a queue pair in IB) is the
+communication channel directly to the accelerator hardware; it says
+nothing about the memory itself.
+
+Further, both VFIO and IB use pinning (get_user_pages) to lock local
+memory in place. This is flexible, but it can cause other problems. For
+example, if the user process forks a child, the COW procedure may make the
+parent process lose the pages it is sharing with the device. This may be
+fixed in the future, but it is not going to be easy. (There was a
+discussion about this at the Linux Plumbers Conference 2018 [2].)
+
+So we chose to build the solution directly on top of the IOMMU interface.
+The IOMMU is the essential way for a device and a process to share their
+page mappings from the hardware perspective, so it is safe to build a
+software solution on this assumption. Uacce manages the IOMMU interface
+for the accelerator device so the device driver can export some of its
+resources to user space; Uacce can then make sure the device and the
+process have the same address space.
+
+
+References
+==========
+.. [1] http://jpbrucker.net/sva/
+.. [2] https://lwn.net/Articles/774411/
--
2.7.4
From: Kenneth Lee <[email protected]>
Uacce (Unified/User-space-access-intended Accelerator Framework) aims to
provide Shared Virtual Addressing (SVA) between accelerators and processes,
so an accelerator can access any data structure of the main CPU.
This differs from data sharing between the CPU and an IO device, which
shares data content rather than addresses.
With a unified address space, the hardware and the user space of a process
can share the same virtual addresses during communication.
Uacce creates a chrdev for every registration; a queue is allocated to
the process when the chrdev is opened. The process can then access the
hardware resources by interacting with the queue file. By mmapping the
queue file space into user space, the process can put requests directly
to the hardware without syscalls into kernel space.
Signed-off-by: Kenneth Lee <[email protected]>
Signed-off-by: Zaibo Xu <[email protected]>
Signed-off-by: Zhou Wang <[email protected]>
Signed-off-by: Zhangfei Gao <[email protected]>
---
Documentation/ABI/testing/sysfs-driver-uacce | 47 ++
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/uacce/Kconfig | 13 +
drivers/misc/uacce/Makefile | 2 +
drivers/misc/uacce/uacce.c | 1086 ++++++++++++++++++++++++++
include/linux/uacce.h | 172 ++++
include/uapi/misc/uacce.h | 39 +
8 files changed, 1361 insertions(+)
create mode 100644 Documentation/ABI/testing/sysfs-driver-uacce
create mode 100644 drivers/misc/uacce/Kconfig
create mode 100644 drivers/misc/uacce/Makefile
create mode 100644 drivers/misc/uacce/uacce.c
create mode 100644 include/linux/uacce.h
create mode 100644 include/uapi/misc/uacce.h
diff --git a/Documentation/ABI/testing/sysfs-driver-uacce b/Documentation/ABI/testing/sysfs-driver-uacce
new file mode 100644
index 0000000..44e2f69
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-driver-uacce
@@ -0,0 +1,47 @@
+What: /sys/class/uacce/hisi_zip-<n>/id
+Date: Aug 2019
+KernelVersion: 5.3
+Contact: [email protected]
+Description: Id of the device.
+
+What: /sys/class/uacce/hisi_zip-<n>/api
+Date: Aug 2019
+KernelVersion: 5.3
+Contact: [email protected]
+Description: Api of the device; used by applications to match the correct driver
+
+What: /sys/class/uacce/hisi_zip-<n>/flags
+Date: Aug 2019
+KernelVersion: 5.3
+Contact: [email protected]
+Description: Attributes of the device; see the UACCE_DEV_xxx flags defined in uacce.h
+
+What: /sys/class/uacce/hisi_zip-<n>/available_instances
+Date: Aug 2019
+KernelVersion: 5.3
+Contact: [email protected]
+Description: Available instances left on the device
+
+What: /sys/class/uacce/hisi_zip-<n>/algorithms
+Date: Aug 2019
+KernelVersion: 5.3
+Contact: [email protected]
+Description: Algorithms supported by this accelerator
+
+What: /sys/class/uacce/hisi_zip-<n>/qfrs_offset
+Date: Aug 2019
+KernelVersion: 5.3
+Contact: [email protected]
+Description: Page offset of each queue file region
+
+What: /sys/class/uacce/hisi_zip-<n>/numa_distance
+Date: Aug 2019
+KernelVersion: 5.3
+Contact: [email protected]
+Description: Distance from the device NUMA node to the CPU node
+
+What: /sys/class/uacce/hisi_zip-<n>/node_id
+Date: Aug 2019
+KernelVersion: 5.3
+Contact: [email protected]
+Description: Id of the NUMA node
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 6abfc8e..8073eb8 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -502,4 +502,5 @@ source "drivers/misc/cxl/Kconfig"
source "drivers/misc/ocxl/Kconfig"
source "drivers/misc/cardreader/Kconfig"
source "drivers/misc/habanalabs/Kconfig"
+source "drivers/misc/uacce/Kconfig"
endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index abd8ae2..93a131b 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -58,4 +58,5 @@ obj-$(CONFIG_OCXL) += ocxl/
obj-y += cardreader/
obj-$(CONFIG_PVPANIC) += pvpanic.o
obj-$(CONFIG_HABANA_AI) += habanalabs/
+obj-$(CONFIG_UACCE) += uacce/
obj-$(CONFIG_XILINX_SDFEC) += xilinx_sdfec.o
diff --git a/drivers/misc/uacce/Kconfig b/drivers/misc/uacce/Kconfig
new file mode 100644
index 0000000..e854354
--- /dev/null
+++ b/drivers/misc/uacce/Kconfig
@@ -0,0 +1,13 @@
+config UACCE
+ tristate "Accelerator Framework for User Land"
+ depends on IOMMU_API
+ help
+ UACCE provides interface for the user process to access the hardware
+ without interaction with the kernel space in data path.
+
+ The user-space interface is described in
+ include/uapi/misc/uacce.h
+
+ See Documentation/misc-devices/uacce.rst for more details.
+
+ If you don't know what to do here, say N.
diff --git a/drivers/misc/uacce/Makefile b/drivers/misc/uacce/Makefile
new file mode 100644
index 0000000..5b4374e
--- /dev/null
+++ b/drivers/misc/uacce/Makefile
@@ -0,0 +1,2 @@
+# SPDX-License-Identifier: GPL-2.0-or-later
+obj-$(CONFIG_UACCE) += uacce.o
diff --git a/drivers/misc/uacce/uacce.c b/drivers/misc/uacce/uacce.c
new file mode 100644
index 0000000..2d9bfc2
--- /dev/null
+++ b/drivers/misc/uacce/uacce.c
@@ -0,0 +1,1086 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+#include <linux/compat.h>
+#include <linux/dma-iommu.h>
+#include <linux/dma-mapping.h>
+#include <linux/file.h>
+#include <linux/idr.h>
+#include <linux/irqdomain.h>
+#include <linux/module.h>
+#include <linux/poll.h>
+#include <linux/sched/signal.h>
+#include <linux/uacce.h>
+
+static struct class *uacce_class;
+static DEFINE_IDR(uacce_idr);
+static dev_t uacce_devt;
+static DEFINE_MUTEX(uacce_mutex); /* mutex to protect uacce */
+static const struct file_operations uacce_fops;
+
+/* match with enum uacce_qfrt */
+static const char *const qfrt_str[] = {
+ "mmio",
+ "dko",
+ "dus",
+ "ss",
+ "invalid"
+};
+
+static const char *uacce_qfrt_str(struct uacce_qfile_region *qfr)
+{
+ enum uacce_qfrt type = qfr->type;
+
+ if (type > UACCE_QFRT_MAX)
+ type = UACCE_QFRT_MAX;
+
+ return qfrt_str[type];
+}
+
+/**
+ * uacce_wake_up - Wake up the process waiting on this queue
+ * @q: the accelerator queue to wake up
+ */
+void uacce_wake_up(struct uacce_queue *q)
+{
+ wake_up_interruptible(&q->wait);
+}
+EXPORT_SYMBOL_GPL(uacce_wake_up);
+
+static int uacce_queue_map_qfr(struct uacce_queue *q,
+ struct uacce_qfile_region *qfr)
+{
+ struct device *dev = q->uacce->pdev;
+ struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+ int i, j, ret;
+
+ if (!(qfr->flags & UACCE_QFRF_MAP) || (qfr->flags & UACCE_QFRF_DMA))
+ return 0;
+
+ if (!domain)
+ return -ENODEV;
+
+ for (i = 0; i < qfr->nr_pages; i++) {
+ ret = iommu_map(domain, qfr->iova + i * PAGE_SIZE,
+ page_to_phys(qfr->pages[i]),
+ PAGE_SIZE, qfr->prot | q->uacce->prot);
+ if (ret) {
+ dev_err(dev, "iommu_map page %i fail %d\n", i, ret);
+ goto err_with_map_pages;
+ }
+ get_page(qfr->pages[i]);
+ }
+
+ return 0;
+
+err_with_map_pages:
+ for (j = i - 1; j >= 0; j--) {
+ iommu_unmap(domain, qfr->iova + j * PAGE_SIZE, PAGE_SIZE);
+ put_page(qfr->pages[j]);
+ }
+ return ret;
+}
+
+static void uacce_queue_unmap_qfr(struct uacce_queue *q,
+ struct uacce_qfile_region *qfr)
+{
+ struct device *dev = q->uacce->pdev;
+ struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
+ int i;
+
+ if (!domain || !qfr)
+ return;
+
+ if (!(qfr->flags & UACCE_QFRF_MAP) || (qfr->flags & UACCE_QFRF_DMA))
+ return;
+
+ for (i = qfr->nr_pages - 1; i >= 0; i--) {
+ iommu_unmap(domain, qfr->iova + i * PAGE_SIZE, PAGE_SIZE);
+ put_page(qfr->pages[i]);
+ }
+}
+
+static int uacce_qfr_alloc_pages(struct uacce_qfile_region *qfr)
+{
+ int i, j;
+
+ qfr->pages = kcalloc(qfr->nr_pages, sizeof(*qfr->pages), GFP_ATOMIC);
+ if (!qfr->pages)
+ return -ENOMEM;
+
+ for (i = 0; i < qfr->nr_pages; i++) {
+ qfr->pages[i] = alloc_page(GFP_ATOMIC | __GFP_ZERO);
+ if (!qfr->pages[i])
+ goto err_with_pages;
+ }
+
+ return 0;
+
+err_with_pages:
+ for (j = i - 1; j >= 0; j--)
+ put_page(qfr->pages[j]);
+
+ kfree(qfr->pages);
+ return -ENOMEM;
+}
+
+static void uacce_qfr_free_pages(struct uacce_qfile_region *qfr)
+{
+ int i;
+
+ for (i = 0; i < qfr->nr_pages; i++)
+ put_page(qfr->pages[i]);
+
+ kfree(qfr->pages);
+}
+
+static inline int uacce_queue_mmap_qfr(struct uacce_queue *q,
+ struct uacce_qfile_region *qfr,
+ struct vm_area_struct *vma)
+{
+ int i, ret;
+
+ for (i = 0; i < qfr->nr_pages; i++) {
+ ret = remap_pfn_range(vma, vma->vm_start + (i << PAGE_SHIFT),
+ page_to_pfn(qfr->pages[i]), PAGE_SIZE,
+ vma->vm_page_prot);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static struct uacce_qfile_region *
+uacce_create_region(struct uacce_queue *q, struct vm_area_struct *vma,
+ enum uacce_qfrt type, unsigned int flags)
+{
+ struct uacce_qfile_region *qfr;
+ struct uacce *uacce = q->uacce;
+ unsigned long vm_pgoff;
+ int ret = -ENOMEM;
+
+ qfr = kzalloc(sizeof(*qfr), GFP_ATOMIC);
+ if (!qfr)
+ return ERR_PTR(-ENOMEM);
+
+ qfr->type = type;
+ qfr->flags = flags;
+ qfr->iova = vma->vm_start;
+ qfr->nr_pages = vma_pages(vma);
+
+ if (vma->vm_flags & VM_READ)
+ qfr->prot |= IOMMU_READ;
+
+ if (vma->vm_flags & VM_WRITE)
+ qfr->prot |= IOMMU_WRITE;
+
+ if (flags & UACCE_QFRF_SELFMT) {
+ ret = uacce->ops->mmap(q, vma, qfr);
+ if (ret)
+ goto err_with_qfr;
+ return qfr;
+ }
+
+ /* allocate memory */
+ if (flags & UACCE_QFRF_DMA) {
+ qfr->kaddr = dma_alloc_coherent(uacce->pdev,
+ qfr->nr_pages << PAGE_SHIFT,
+ &qfr->dma, GFP_KERNEL);
+ if (!qfr->kaddr) {
+ ret = -ENOMEM;
+ goto err_with_qfr;
+ }
+ } else {
+ ret = uacce_qfr_alloc_pages(qfr);
+ if (ret)
+ goto err_with_qfr;
+ }
+
+ /* map to device */
+ ret = uacce_queue_map_qfr(q, qfr);
+ if (ret)
+ goto err_with_pages;
+
+ /* mmap to user space */
+ if (flags & UACCE_QFRF_MMAP) {
+ if (flags & UACCE_QFRF_DMA) {
+
+			/*
+			 * dma_mmap_coherent() requires vm_pgoff to be 0;
+			 * restore vm_pgoff to its initial value afterwards.
+			 */
+ vm_pgoff = vma->vm_pgoff;
+ vma->vm_pgoff = 0;
+ ret = dma_mmap_coherent(uacce->pdev, vma, qfr->kaddr,
+ qfr->dma,
+ qfr->nr_pages << PAGE_SHIFT);
+ vma->vm_pgoff = vm_pgoff;
+ } else {
+ ret = uacce_queue_mmap_qfr(q, qfr, vma);
+ }
+
+ if (ret)
+ goto err_with_mapped_qfr;
+ }
+
+ return qfr;
+
+err_with_mapped_qfr:
+ uacce_queue_unmap_qfr(q, qfr);
+err_with_pages:
+ if (flags & UACCE_QFRF_DMA)
+ dma_free_coherent(uacce->pdev, qfr->nr_pages << PAGE_SHIFT,
+ qfr->kaddr, qfr->dma);
+ else
+ uacce_qfr_free_pages(qfr);
+err_with_qfr:
+ kfree(qfr);
+
+ return ERR_PTR(ret);
+}
+
+static void uacce_destroy_region(struct uacce_queue *q,
+ struct uacce_qfile_region *qfr)
+{
+ struct uacce *uacce = q->uacce;
+
+ if (qfr->flags & UACCE_QFRF_DMA) {
+ dma_free_coherent(uacce->pdev, qfr->nr_pages << PAGE_SHIFT,
+ qfr->kaddr, qfr->dma);
+ } else if (qfr->pages) {
+ if (qfr->flags & UACCE_QFRF_KMAP && qfr->kaddr) {
+ vunmap(qfr->kaddr);
+ qfr->kaddr = NULL;
+ }
+
+ uacce_qfr_free_pages(qfr);
+ }
+ kfree(qfr);
+}
+
+static long uacce_cmd_share_qfr(struct uacce_queue *tgt, int fd)
+{
+ struct file *filep;
+ struct uacce_queue *src;
+ int ret = -EINVAL;
+
+ filep = fget(fd);
+ if (!filep)
+ return ret;
+
+ if (filep->f_op != &uacce_fops)
+ goto out_with_fd;
+
+ src = filep->private_data;
+ if (!src)
+ goto out_with_fd;
+
+ /* no share sva is needed if the dev can do fault-from-dev */
+ if (tgt->uacce->flags & UACCE_DEV_FAULT_FROM_DEV)
+ goto out_with_fd;
+
+ mutex_lock(&uacce_mutex);
+ if (!src->qfrs[UACCE_QFRT_SS] || tgt->qfrs[UACCE_QFRT_SS])
+ goto out_with_lock;
+
+ ret = uacce_queue_map_qfr(tgt, src->qfrs[UACCE_QFRT_SS]);
+ if (ret)
+ goto out_with_lock;
+
+ tgt->qfrs[UACCE_QFRT_SS] = src->qfrs[UACCE_QFRT_SS];
+ list_add(&tgt->list, &src->qfrs[UACCE_QFRT_SS]->qs);
+
+out_with_lock:
+ mutex_unlock(&uacce_mutex);
+out_with_fd:
+ fput(filep);
+ return ret;
+}
+
+static int uacce_start_queue(struct uacce_queue *q)
+{
+ int ret, i, j;
+ struct uacce_qfile_region *qfr;
+ struct device *dev = &q->uacce->dev;
+
+ /*
+ * map KMAP qfr to kernel
+ * vmap should be done in non-spinlocked context!
+ */
+ for (i = 0; i < UACCE_QFRT_MAX; i++) {
+ qfr = q->qfrs[i];
+ if (qfr && (qfr->flags & UACCE_QFRF_KMAP) && !qfr->kaddr) {
+ qfr->kaddr = vmap(qfr->pages, qfr->nr_pages, VM_MAP,
+ PAGE_KERNEL);
+ if (!qfr->kaddr) {
+ ret = -ENOMEM;
+ dev_err(dev, "fail to kmap %s qfr(%d pages)\n",
+ uacce_qfrt_str(qfr), qfr->nr_pages);
+ goto err_with_vmap;
+ }
+
+ }
+ }
+
+ ret = q->uacce->ops->start_queue(q);
+ if (ret < 0)
+ goto err_with_vmap;
+
+ atomic_set(&q->uacce->state, UACCE_ST_STARTED);
+ return 0;
+
+err_with_vmap:
+	/* the qfr at index i (if any) was never vmapped; i may be QFRT_MAX */
+	for (j = i - 1; j >= 0; j--) {
+ qfr = q->qfrs[j];
+ if (qfr && qfr->kaddr) {
+ vunmap(qfr->kaddr);
+ qfr->kaddr = NULL;
+ }
+ }
+ return ret;
+}
+
+static long uacce_fops_unl_ioctl(struct file *filep,
+ unsigned int cmd, unsigned long arg)
+{
+ struct uacce_queue *q = filep->private_data;
+ struct uacce *uacce = q->uacce;
+
+ switch (cmd) {
+ case UACCE_CMD_SHARE_SVAS:
+ return uacce_cmd_share_qfr(q, arg);
+
+ case UACCE_CMD_START:
+ return uacce_start_queue(q);
+
+ default:
+ if (!uacce->ops->ioctl) {
+ dev_err(&uacce->dev,
+ "ioctl cmd (%d) is not supported!\n", cmd);
+ return -EINVAL;
+ }
+
+ return uacce->ops->ioctl(q, cmd, arg);
+ }
+}
+
+#ifdef CONFIG_COMPAT
+static long uacce_fops_compat_ioctl(struct file *filep,
+ unsigned int cmd, unsigned long arg)
+{
+ arg = (unsigned long)compat_ptr(arg);
+ return uacce_fops_unl_ioctl(filep, cmd, arg);
+}
+#endif
+
+static int uacce_dev_open_check(struct uacce *uacce)
+{
+ /*
+	 * The device can be opened only once if it does not support pasid
+ */
+ if (uacce->flags & UACCE_DEV_PASID)
+ return 0;
+
+ if (atomic_cmpxchg(&uacce->state, UACCE_ST_INIT, UACCE_ST_OPENED) !=
+ UACCE_ST_INIT) {
+		dev_info(&uacce->dev, "this device can be opened only once\n");
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
+static int uacce_fops_open(struct inode *inode, struct file *filep)
+{
+ struct uacce_queue *q;
+ struct iommu_sva *handle = NULL;
+ struct uacce *uacce;
+ int ret;
+ int pasid = 0;
+
+ uacce = idr_find(&uacce_idr, iminor(inode));
+ if (!uacce)
+ return -ENODEV;
+
+ if (atomic_read(&uacce->state) == UACCE_ST_RST)
+ return -EINVAL;
+
+ if ((!uacce->ops->get_queue) || (!uacce->ops->start_queue))
+ return -EINVAL;
+
+ if (!try_module_get(uacce->pdev->driver->owner))
+ return -ENODEV;
+
+ ret = uacce_dev_open_check(uacce);
+ if (ret)
+ goto open_err;
+
+#ifdef CONFIG_IOMMU_SVA
+ if (uacce->flags & UACCE_DEV_PASID) {
+ handle = iommu_sva_bind_device(uacce->pdev, current->mm, NULL);
+		if (IS_ERR(handle)) {
+			ret = PTR_ERR(handle);
+			goto open_err;
+		}
+		pasid = iommu_sva_get_pasid(handle);
+ }
+#endif
+ ret = uacce->ops->get_queue(uacce, pasid, &q);
+ if (ret < 0)
+ goto open_err;
+
+ q->pasid = pasid;
+ q->handle = handle;
+ q->uacce = uacce;
+ q->mm = current->mm;
+ memset(q->qfrs, 0, sizeof(q->qfrs));
+ INIT_LIST_HEAD(&q->list);
+ init_waitqueue_head(&q->wait);
+ filep->private_data = q;
+ mutex_lock(&uacce->q_lock);
+ list_add(&q->q_dev, &uacce->qs);
+ mutex_unlock(&uacce->q_lock);
+
+ return 0;
+
+open_err:
+ module_put(uacce->pdev->driver->owner);
+ return ret;
+}
+
+static int uacce_fops_release(struct inode *inode, struct file *filep)
+{
+ struct uacce_queue *q = filep->private_data;
+ struct uacce_qfile_region *qfr;
+ struct uacce *uacce = q->uacce;
+ bool is_to_free_region;
+ int free_pages = 0;
+ int i;
+
+ mutex_lock(&uacce->q_lock);
+ list_del(&q->q_dev);
+ mutex_unlock(&uacce->q_lock);
+
+ if (atomic_read(&uacce->state) == UACCE_ST_STARTED &&
+ uacce->ops->stop_queue)
+ uacce->ops->stop_queue(q);
+
+ mutex_lock(&uacce_mutex);
+
+ for (i = 0; i < UACCE_QFRT_MAX; i++) {
+ qfr = q->qfrs[i];
+ if (!qfr)
+ continue;
+
+ is_to_free_region = false;
+ uacce_queue_unmap_qfr(q, qfr);
+ if (i == UACCE_QFRT_SS) {
+ list_del(&q->list);
+ if (list_empty(&qfr->qs))
+ is_to_free_region = true;
+ } else
+ is_to_free_region = true;
+
+ if (is_to_free_region) {
+ free_pages += qfr->nr_pages;
+ uacce_destroy_region(q, qfr);
+ }
+
+ q->qfrs[i] = NULL;
+ }
+
+ mutex_unlock(&uacce_mutex);
+
+ if (current->mm == q->mm) {
+ down_write(&q->mm->mmap_sem);
+ q->mm->data_vm -= free_pages;
+ up_write(&q->mm->mmap_sem);
+ }
+
+#ifdef CONFIG_IOMMU_SVA
+ if (uacce->flags & UACCE_DEV_PASID)
+ iommu_sva_unbind_device(q->handle);
+#endif
+
+ if (uacce->ops->put_queue)
+ uacce->ops->put_queue(q);
+
+ atomic_set(&uacce->state, UACCE_ST_INIT);
+ module_put(uacce->pdev->driver->owner);
+
+ return 0;
+}
+
+static enum uacce_qfrt uacce_get_region_type(struct uacce *uacce,
+ struct vm_area_struct *vma)
+{
+ enum uacce_qfrt type = UACCE_QFRT_MAX;
+ size_t next_start = UACCE_QFR_NA;
+ int i;
+
+ for (i = UACCE_QFRT_MAX - 1; i >= 0; i--) {
+ if (vma->vm_pgoff >= uacce->qf_pg_start[i]) {
+ type = i;
+ break;
+ }
+ }
+
+ switch (type) {
+ case UACCE_QFRT_MMIO:
+ if (!uacce->ops->mmap) {
+ dev_err(&uacce->dev, "no driver mmap!\n");
+ return UACCE_QFRT_MAX;
+ }
+ break;
+
+ case UACCE_QFRT_DKO:
+ if (uacce->flags & UACCE_DEV_PASID)
+ return UACCE_QFRT_MAX;
+ break;
+
+ case UACCE_QFRT_DUS:
+ break;
+
+ case UACCE_QFRT_SS:
+ /* todo: this can be valid to protect the process space */
+ if (uacce->flags & UACCE_DEV_FAULT_FROM_DEV)
+ return UACCE_QFRT_MAX;
+ break;
+
+ default:
+ dev_err(&uacce->dev, "uacce bug (%d)!\n", type);
+ return UACCE_QFRT_MAX;
+ }
+
+ /* make sure the mapping size is exactly the same as the region */
+ if (type < UACCE_QFRT_SS) {
+ for (i = type + 1; i < UACCE_QFRT_MAX; i++)
+ if (uacce->qf_pg_start[i] != UACCE_QFR_NA) {
+ next_start = uacce->qf_pg_start[i];
+ break;
+ }
+
+ if (next_start == UACCE_QFR_NA) {
+ dev_err(&uacce->dev, "uacce config error: SS offset set improperly\n");
+ return UACCE_QFRT_MAX;
+ }
+
+ if (vma_pages(vma) !=
+ next_start - uacce->qf_pg_start[type]) {
+ dev_err(&uacce->dev, "invalid mmap size (%lu vs %lu pages) for region %s.\n",
+ vma_pages(vma),
+ next_start - uacce->qf_pg_start[type],
+ qfrt_str[type]);
+ return UACCE_QFRT_MAX;
+ }
+ }
+
+ return type;
+}
+
+static int uacce_fops_mmap(struct file *filep, struct vm_area_struct *vma)
+{
+ struct uacce_queue *q = filep->private_data;
+ struct uacce *uacce = q->uacce;
+ enum uacce_qfrt type = uacce_get_region_type(uacce, vma);
+ struct uacce_qfile_region *qfr;
+ unsigned int flags = 0;
+ int ret;
+
+ if (type == UACCE_QFRT_MAX)
+ return -EINVAL;
+
+ vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND;
+
+ mutex_lock(&uacce_mutex);
+
+ /* fixme: if the region needs no pages, we don't need to check it */
+ if (q->mm->data_vm + vma_pages(vma) >
+ rlimit(RLIMIT_DATA) >> PAGE_SHIFT) {
+ ret = -ENOMEM;
+ goto out_with_lock;
+ }
+
+ if (q->qfrs[type]) {
+ ret = -EBUSY;
+ goto out_with_lock;
+ }
+
+ switch (type) {
+ case UACCE_QFRT_MMIO:
+ flags = UACCE_QFRF_SELFMT;
+ break;
+
+ case UACCE_QFRT_SS:
+ if (atomic_read(&uacce->state) != UACCE_ST_STARTED) {
+ ret = -EINVAL;
+ goto out_with_lock;
+ }
+
+ flags = UACCE_QFRF_MAP | UACCE_QFRF_MMAP;
+
+ break;
+
+ case UACCE_QFRT_DKO:
+ flags = UACCE_QFRF_MAP | UACCE_QFRF_KMAP;
+
+ break;
+
+ case UACCE_QFRT_DUS:
+ if (uacce->flags & UACCE_DEV_PASID) {
+ flags = UACCE_QFRF_SELFMT;
+ break;
+ }
+
+ flags = UACCE_QFRF_MAP | UACCE_QFRF_MMAP;
+ break;
+
+ default:
+ /* should be unreachable: type was validated above */
+ WARN_ON(1);
+ break;
+ }
+
+ qfr = uacce_create_region(q, vma, type, flags);
+ if (IS_ERR(qfr)) {
+ ret = PTR_ERR(qfr);
+ goto out_with_lock;
+ }
+ q->qfrs[type] = qfr;
+
+ if (type == UACCE_QFRT_SS) {
+ INIT_LIST_HEAD(&qfr->qs);
+ list_add(&q->list, &q->qfrs[type]->qs);
+ }
+
+ mutex_unlock(&uacce_mutex);
+
+ if (qfr->pages)
+ q->mm->data_vm += qfr->nr_pages;
+
+ return 0;
+
+out_with_lock:
+ mutex_unlock(&uacce_mutex);
+ return ret;
+}
+
+static __poll_t uacce_fops_poll(struct file *file, poll_table *wait)
+{
+ struct uacce_queue *q = file->private_data;
+ struct uacce *uacce = q->uacce;
+
+ poll_wait(file, &q->wait, wait);
+ if (uacce->ops->is_q_updated && uacce->ops->is_q_updated(q))
+ return EPOLLIN | EPOLLRDNORM;
+
+ return 0;
+}
+
+static const struct file_operations uacce_fops = {
+ .owner = THIS_MODULE,
+ .open = uacce_fops_open,
+ .release = uacce_fops_release,
+ .unlocked_ioctl = uacce_fops_unl_ioctl,
+#ifdef CONFIG_COMPAT
+ .compat_ioctl = uacce_fops_compat_ioctl,
+#endif
+ .mmap = uacce_fops_mmap,
+ .poll = uacce_fops_poll,
+};
+
+#define UACCE_FROM_CDEV_ATTR(dev) container_of(dev, struct uacce, dev)
+
+static ssize_t id_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct uacce *uacce = UACCE_FROM_CDEV_ATTR(dev);
+
+ return sprintf(buf, "%d\n", uacce->dev_id);
+}
+
+static ssize_t api_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct uacce *uacce = UACCE_FROM_CDEV_ATTR(dev);
+
+ return sprintf(buf, "%s\n", uacce->api_ver);
+}
+
+static ssize_t numa_distance_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct uacce *uacce = UACCE_FROM_CDEV_ATTR(dev);
+ int distance;
+
+ distance = node_distance(numa_node_id(), uacce->pdev->numa_node);
+
+ return sprintf(buf, "%d\n", abs(distance));
+}
+
+static ssize_t node_id_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct uacce *uacce = UACCE_FROM_CDEV_ATTR(dev);
+ int node_id;
+
+ node_id = dev_to_node(uacce->pdev);
+
+ return sprintf(buf, "%d\n", node_id);
+}
+
+static ssize_t flags_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct uacce *uacce = UACCE_FROM_CDEV_ATTR(dev);
+
+ return sprintf(buf, "%d\n", uacce->flags);
+}
+
+static ssize_t available_instances_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct uacce *uacce = UACCE_FROM_CDEV_ATTR(dev);
+ int val = 0;
+
+ if (uacce->ops->get_available_instances)
+ val = uacce->ops->get_available_instances(uacce);
+
+ return sprintf(buf, "%d\n", val);
+}
+
+static ssize_t algorithms_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct uacce *uacce = UACCE_FROM_CDEV_ATTR(dev);
+
+ return sprintf(buf, "%s\n", uacce->algs);
+}
+
+static ssize_t qfrs_offset_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct uacce *uacce = UACCE_FROM_CDEV_ATTR(dev);
+ int i, ret;
+ unsigned long offset;
+
+ for (i = 0, ret = 0; i < UACCE_QFRT_MAX; i++) {
+ offset = uacce->qf_pg_start[i];
+ if (offset != UACCE_QFR_NA)
+ offset = offset << PAGE_SHIFT;
+ if (i == UACCE_QFRT_SS)
+ break;
+ ret += sprintf(buf + ret, "%lu\t", offset);
+ }
+ ret += sprintf(buf + ret, "%lu\n", offset);
+
+ return ret;
+}
+
+static DEVICE_ATTR_RO(id);
+static DEVICE_ATTR_RO(api);
+static DEVICE_ATTR_RO(numa_distance);
+static DEVICE_ATTR_RO(node_id);
+static DEVICE_ATTR_RO(flags);
+static DEVICE_ATTR_RO(available_instances);
+static DEVICE_ATTR_RO(algorithms);
+static DEVICE_ATTR_RO(qfrs_offset);
+
+static struct attribute *uacce_dev_attrs[] = {
+ &dev_attr_id.attr,
+ &dev_attr_api.attr,
+ &dev_attr_node_id.attr,
+ &dev_attr_numa_distance.attr,
+ &dev_attr_flags.attr,
+ &dev_attr_available_instances.attr,
+ &dev_attr_algorithms.attr,
+ &dev_attr_qfrs_offset.attr,
+ NULL,
+};
+
+static const struct attribute_group uacce_dev_attr_group = {
+ .attrs = uacce_dev_attrs,
+};
+
+static const struct attribute_group *uacce_dev_attr_groups[] = {
+ &uacce_dev_attr_group,
+ NULL
+};
+
+static int uacce_create_chrdev(struct uacce *uacce)
+{
+ int ret;
+
+ ret = idr_alloc(&uacce_idr, uacce, 0, 0, GFP_KERNEL);
+ if (ret < 0)
+ return ret;
+
+ cdev_init(&uacce->cdev, &uacce_fops);
+ uacce->dev_id = ret;
+ uacce->cdev.owner = THIS_MODULE;
+ device_initialize(&uacce->dev);
+ uacce->dev.devt = MKDEV(MAJOR(uacce_devt), uacce->dev_id);
+ uacce->dev.class = uacce_class;
+ uacce->dev.groups = uacce_dev_attr_groups;
+ uacce->dev.parent = uacce->pdev;
+ dev_set_name(&uacce->dev, "%s-%d", uacce->drv_name, uacce->dev_id);
+ ret = cdev_device_add(&uacce->cdev, &uacce->dev);
+ if (ret)
+ goto err_with_idr;
+
+ dev_dbg(&uacce->dev, "create uacce minor=%d\n", uacce->dev_id);
+ return 0;
+
+err_with_idr:
+ idr_remove(&uacce_idr, uacce->dev_id);
+ return ret;
+}
+
+static void uacce_destroy_chrdev(struct uacce *uacce)
+{
+ cdev_device_del(&uacce->cdev, &uacce->dev);
+ idr_remove(&uacce_idr, uacce->dev_id);
+}
+
+static int uacce_dev_match(struct device *dev, void *data)
+{
+ if (dev->parent == data)
+ return -EBUSY;
+
+ return 0;
+}
+
+/* Borrowed from VFIO to fix msi translation */
+static bool uacce_iommu_has_sw_msi(struct iommu_group *group,
+ phys_addr_t *base)
+{
+ struct list_head group_resv_regions;
+ struct iommu_resv_region *region, *next;
+ bool ret = false;
+
+ INIT_LIST_HEAD(&group_resv_regions);
+ iommu_get_group_resv_regions(group, &group_resv_regions);
+ list_for_each_entry(region, &group_resv_regions, list) {
+ pr_debug("uacce: found a resv region (%d) on %llx\n",
+ region->type, region->start);
+
+ /*
+ * The presence of any 'real' MSI regions should take
+ * precedence over the software-managed one if the
+ * IOMMU driver happens to advertise both types.
+ */
+ if (region->type == IOMMU_RESV_MSI) {
+ ret = false;
+ break;
+ }
+
+ if (region->type == IOMMU_RESV_SW_MSI) {
+ *base = region->start;
+ ret = true;
+ }
+ }
+ list_for_each_entry_safe(region, next, &group_resv_regions, list)
+ kfree(region);
+ return ret;
+}
+
+static int uacce_set_iommu_domain(struct uacce *uacce)
+{
+ struct iommu_domain *domain;
+ struct iommu_group *group;
+ struct device *dev = uacce->pdev;
+ bool resv_msi;
+ phys_addr_t resv_msi_base = 0;
+ int ret;
+
+ if (uacce->flags & UACCE_DEV_PASID)
+ return 0;
+
+ /*
+ * We don't support registering the same device more than once
+ * when it has no PASID support
+ */
+ ret = class_for_each_device(uacce_class, NULL, uacce->pdev,
+ uacce_dev_match);
+ if (ret)
+ return ret;
+
+ /* allocate and attach an unmanaged domain */
+ domain = iommu_domain_alloc(uacce->pdev->bus);
+ if (!domain) {
+ dev_err(&uacce->dev, "cannot get domain for iommu\n");
+ return -ENODEV;
+ }
+
+ ret = iommu_attach_device(domain, uacce->pdev);
+ if (ret)
+ goto err_with_domain;
+
+ if (iommu_capable(dev->bus, IOMMU_CAP_CACHE_COHERENCY))
+ uacce->prot |= IOMMU_CACHE;
+
+ group = iommu_group_get(dev);
+ if (!group) {
+ ret = -EINVAL;
+ goto err_with_domain;
+ }
+
+ resv_msi = uacce_iommu_has_sw_msi(group, &resv_msi_base);
+ iommu_group_put(group);
+
+ if (resv_msi) {
+ if (!irq_domain_check_msi_remap() &&
+ !iommu_capable(dev->bus, IOMMU_CAP_INTR_REMAP)) {
+ dev_warn(dev, "No interrupt remapping support!");
+ ret = -EPERM;
+ goto err_with_domain;
+ }
+
+ ret = iommu_get_msi_cookie(domain, resv_msi_base);
+ if (ret)
+ goto err_with_domain;
+ }
+
+ return 0;
+
+err_with_domain:
+ iommu_domain_free(domain);
+ return ret;
+}
+
+static void uacce_unset_iommu_domain(struct uacce *uacce)
+{
+ struct iommu_domain *domain;
+
+ if (uacce->flags & UACCE_DEV_PASID)
+ return;
+
+ domain = iommu_get_domain_for_dev(uacce->pdev);
+ if (!domain) {
+ dev_err(&uacce->dev, "bug: no domain attached to device\n");
+ return;
+ }
+
+ iommu_detach_device(domain, uacce->pdev);
+ iommu_domain_free(domain);
+}
+
+/**
+ * uacce_register - register an accelerator
+ * @uacce: the accelerator structure
+ */
+struct uacce *uacce_register(struct device *parent,
+ struct uacce_interface *interface)
+{
+ int ret, i;
+ struct uacce *uacce;
+ unsigned int flags = interface->flags;
+
+ /* if the dev supports fault-from-dev, it should support pasid */
+ if ((flags & UACCE_DEV_FAULT_FROM_DEV) && !(flags & UACCE_DEV_PASID)) {
+ dev_warn(parent, "SVM/SVA device should support PASID\n");
+ return ERR_PTR(-EINVAL);
+ }
+
+#ifdef CONFIG_IOMMU_SVA
+ if (flags & UACCE_DEV_PASID) {
+ ret = iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_SVA);
+ if (ret)
+ flags &= ~(UACCE_DEV_FAULT_FROM_DEV |
+ UACCE_DEV_PASID);
+ }
+#endif
+ uacce = kzalloc(sizeof(struct uacce), GFP_KERNEL);
+ if (!uacce)
+ return ERR_PTR(-ENOMEM);
+
+ uacce->pdev = parent;
+ uacce->flags = flags;
+ uacce->ops = interface->ops;
+ uacce->drv_name = interface->name;
+
+ for (i = 0; i < UACCE_QFRT_MAX; i++)
+ uacce->qf_pg_start[i] = UACCE_QFR_NA;
+
+ ret = uacce_set_iommu_domain(uacce);
+ if (ret)
+ goto err_free;
+
+ mutex_lock(&uacce_mutex);
+
+ ret = uacce_create_chrdev(uacce);
+ if (ret)
+ goto err_with_lock;
+
+ atomic_set(&uacce->state, UACCE_ST_INIT);
+ INIT_LIST_HEAD(&uacce->qs);
+ mutex_init(&uacce->q_lock);
+ mutex_unlock(&uacce_mutex);
+
+ return uacce;
+
+err_with_lock:
+ mutex_unlock(&uacce_mutex);
+err_free:
+ kfree(uacce);
+ return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(uacce_register);
+
+/**
+ * uacce_unregister - unregisters a uacce
+ * @uacce: the accelerator to unregister
+ *
+ * Unregister an accelerator that was previously successfully registered with
+ * uacce_register().
+ */
+void uacce_unregister(struct uacce *uacce)
+{
+ mutex_lock(&uacce_mutex);
+
+#ifdef CONFIG_IOMMU_SVA
+ if (uacce->flags & UACCE_DEV_PASID)
+ iommu_dev_disable_feature(uacce->pdev, IOMMU_DEV_FEAT_SVA);
+#endif
+ uacce_unset_iommu_domain(uacce);
+
+ uacce_destroy_chrdev(uacce);
+
+ mutex_unlock(&uacce_mutex);
+
+ kfree(uacce);
+}
+EXPORT_SYMBOL_GPL(uacce_unregister);
+
+static int __init uacce_init(void)
+{
+ int ret;
+
+ uacce_class = class_create(THIS_MODULE, UACCE_NAME);
+ if (IS_ERR(uacce_class)) {
+ ret = PTR_ERR(uacce_class);
+ goto err;
+ }
+
+ ret = alloc_chrdev_region(&uacce_devt, 0, MINORMASK, UACCE_NAME);
+ if (ret)
+ goto err_with_class;
+
+ pr_info("uacce init with major number:%d\n", MAJOR(uacce_devt));
+
+ return 0;
+
+err_with_class:
+ class_destroy(uacce_class);
+err:
+ return ret;
+}
+
+static __exit void uacce_exit(void)
+{
+ unregister_chrdev_region(uacce_devt, MINORMASK);
+ class_destroy(uacce_class);
+ idr_destroy(&uacce_idr);
+}
+
+subsys_initcall(uacce_init);
+module_exit(uacce_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Hisilicon Tech. Co., Ltd.");
+MODULE_DESCRIPTION("Accelerator interface for Userland applications");
diff --git a/include/linux/uacce.h b/include/linux/uacce.h
new file mode 100644
index 0000000..1892b94
--- /dev/null
+++ b/include/linux/uacce.h
@@ -0,0 +1,172 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef __UACCE_H
+#define __UACCE_H
+
+#include <linux/cdev.h>
+#include <linux/device.h>
+#include <linux/fs.h>
+#include <linux/list.h>
+#include <linux/iommu.h>
+#include <uapi/misc/uacce.h>
+
+#define UACCE_NAME "uacce"
+
+struct uacce_queue;
+struct uacce;
+
+/* uacce mode of the driver */
+#define UACCE_MODE_NOUACCE 0 /* don't use uacce */
+#define UACCE_MODE_UACCE 1 /* use uacce exclusively */
+
+/* uacce queue file region flags, each requiring different handling */
+#define UACCE_QFRF_MAP BIT(0) /* map to current queue */
+#define UACCE_QFRF_MMAP BIT(1) /* map to user space */
+#define UACCE_QFRF_KMAP BIT(2) /* map to kernel space */
+#define UACCE_QFRF_DMA BIT(3) /* use dma api for the region */
+#define UACCE_QFRF_SELFMT BIT(4) /* self maintained qfr */
+
+/**
+ * struct uacce_qfile_region - structure of queue file region
+ * @type: type of the qfr
+ * @iova: iova shared between user and device space
+ * @pages: pages pointer of the qfr memory
+ * @nr_pages: number of pages in the qfr memory
+ * @prot: qfr protection flag
+ * @flags: flags of qfr
+ * @qs: list sharing the same region, for ss region
+ * @kaddr: kernel addr of the qfr
+ * @dma: dma address, if created by dma api
+ */
+struct uacce_qfile_region {
+ enum uacce_qfrt type;
+ unsigned long iova;
+ struct page **pages;
+ int nr_pages;
+ int prot;
+ unsigned int flags;
+ struct list_head qs;
+ void *kaddr;
+ dma_addr_t dma;
+};
+
+/**
+ * struct uacce_ops - uacce device operations
+ * @get_available_instances: get the number of available instances left on the device
+ * @get_queue: get a queue from the device
+ * @put_queue: free a queue to the device
+ * @start_queue: make the queue start work after get_queue
+ * @stop_queue: make the queue stop work before put_queue
+ * @is_q_updated: check whether the task is finished
+ * @mask_notify: mask the task irq of queue
+ * @mmap: mmap addresses of queue to user space
+ * @reset: reset the uacce device
+ * @reset_queue: reset the queue
+ * @ioctl: ioctl for user space users of the queue
+ */
+struct uacce_ops {
+ int (*get_available_instances)(struct uacce *uacce);
+ int (*get_queue)(struct uacce *uacce, unsigned long arg,
+ struct uacce_queue **q);
+ void (*put_queue)(struct uacce_queue *q);
+ int (*start_queue)(struct uacce_queue *q);
+ void (*stop_queue)(struct uacce_queue *q);
+ int (*is_q_updated)(struct uacce_queue *q);
+ void (*mask_notify)(struct uacce_queue *q, int event_mask);
+ int (*mmap)(struct uacce_queue *q, struct vm_area_struct *vma,
+ struct uacce_qfile_region *qfr);
+ int (*reset)(struct uacce *uacce);
+ int (*reset_queue)(struct uacce_queue *q);
+ long (*ioctl)(struct uacce_queue *q, unsigned int cmd,
+ unsigned long arg);
+};
+
+/**
+ * struct uacce_interface
+ * @name: the uacce device name. Will show up in sysfs
+ * @flags: uacce device attributes
+ * @ops: pointer to the struct uacce_ops
+ *
+ * This structure is used for the uacce_register()
+ */
+struct uacce_interface {
+ char name[32];
+ unsigned int flags;
+ struct uacce_ops *ops;
+};
+
+/**
+ * struct uacce_queue
+ * @uacce: pointer to uacce
+ * @priv: private pointer
+ * @wait: wait queue head
+ * @pasid: pasid of the queue
+ * @handle: iommu_sva handle returned from iommu_sva_bind_device
+ * @list: share list for qfr->qs
+ * @mm: current->mm
+ * @qfrs: pointer of qfr regions
+ * @q_dev: list for uacce->qs
+ */
+struct uacce_queue {
+ struct uacce *uacce;
+ void *priv;
+ wait_queue_head_t wait;
+ int pasid;
+ struct iommu_sva *handle;
+ struct list_head list;
+ struct mm_struct *mm;
+ struct uacce_qfile_region *qfrs[UACCE_QFRT_MAX];
+ struct list_head q_dev;
+};
+
+/* uacce state */
+enum {
+ UACCE_ST_INIT = 0,
+ UACCE_ST_OPENED,
+ UACCE_ST_STARTED,
+ UACCE_ST_RST,
+};
+
+/**
+ * struct uacce
+ * @drv_name: the uacce driver name. Will show up in sysfs
+ * @algs: supported algorithms
+ * @api_ver: api version
+ * @flags: uacce attributes
+ * @qf_pg_start: page start of the queue file regions
+ * @ops: pointer to the struct uacce_ops
+ * @pdev: pointer to the parent device
+ * @is_vf: whether virtual function
+ * @dev_id: id of the uacce device
+ * @cdev: cdev of the uacce
+ * @dev: dev of the uacce
+ * @priv: private pointer of the uacce
+ * @state: uacce state
+ * @prot: uacce protection flag
+ * @q_lock: uacce mutex lock for queue
+ * @qs: uacce list head for share
+ */
+struct uacce {
+ const char *drv_name;
+ const char *algs;
+ const char *api_ver;
+ unsigned int flags;
+ unsigned long qf_pg_start[UACCE_QFRT_MAX];
+ struct uacce_ops *ops;
+ struct device *pdev;
+ bool is_vf;
+ u32 dev_id;
+ struct cdev cdev;
+ struct device dev;
+ void *priv;
+ atomic_t state;
+ int prot;
+ struct mutex q_lock;
+ struct list_head qs;
+};
+
+struct uacce *uacce_register(struct device *parent,
+ struct uacce_interface *interface);
+void uacce_unregister(struct uacce *uacce);
+void uacce_wake_up(struct uacce_queue *q);
+
+#endif
diff --git a/include/uapi/misc/uacce.h b/include/uapi/misc/uacce.h
new file mode 100644
index 0000000..685b4a1
--- /dev/null
+++ b/include/uapi/misc/uacce.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later WITH Linux-syscall-note */
+#ifndef _UAPIUUACCE_H
+#define _UAPIUUACCE_H
+
+#include <linux/types.h>
+#include <linux/ioctl.h>
+
+#define UACCE_CMD_SHARE_SVAS _IO('W', 0)
+#define UACCE_CMD_START _IO('W', 1)
+
+/**
+ * UACCE Device Attributes:
+ *
+ * SHARE_DOMAIN: no PASID, can share SVA only for one process and the kernel
+ * PASID: the device has an IOMMU which supports PASID setting,
+ * can do share SVA, mapped to dev per process
+ * FAULT_FROM_DEV: the device has an IOMMU which can do page fault requests,
+ * no need for share SVA, should be used with PASID
+ * SVA: full function device
+ */
+
+enum {
+ UACCE_DEV_SHARE_DOMAIN = 0x0,
+ UACCE_DEV_PASID = 0x1,
+ UACCE_DEV_FAULT_FROM_DEV = 0x2,
+ UACCE_DEV_SVA = UACCE_DEV_PASID | UACCE_DEV_FAULT_FROM_DEV,
+};
+
+#define UACCE_QFR_NA ((unsigned long)-1)
+
+enum uacce_qfrt {
+ UACCE_QFRT_MMIO = 0, /* device mmio region */
+ UACCE_QFRT_DKO = 1, /* device kernel-only */
+ UACCE_QFRT_DUS = 2, /* device user share */
+ UACCE_QFRT_SS = 3, /* static share memory */
+ UACCE_QFRT_MAX,
+};
+
+#endif
--
2.7.4
On Wed, Aug 28, 2019 at 09:27:56PM +0800, Zhangfei Gao wrote:
> +struct uacce {
> + const char *drv_name;
> + const char *algs;
> + const char *api_ver;
> + unsigned int flags;
> + unsigned long qf_pg_start[UACCE_QFRT_MAX];
> + struct uacce_ops *ops;
> + struct device *pdev;
> + bool is_vf;
> + u32 dev_id;
> + struct cdev cdev;
> + struct device dev;
> + void *priv;
> + atomic_t state;
> + int prot;
> + struct mutex q_lock;
> + struct list_head qs;
> +};
At a quick glance, this problem really stood out to me. You CAN NOT
have two different objects within a structure that have different
lifetime rules and reference counts. You do that here with both a
'struct cdev' and a 'struct device'. Pick one or the other, but never
both.
I would recommend using a 'struct device' and then a 'struct cdev *'.
That way you get the advantage of using the driver model properly, and
then just adding your char device node pointer to "the side" which
interacts with this device.
Then you might want to call this "struct uacce_device" :)
thanks,
greg k-h
Hi, Greg
On 2019/8/28 11:22 PM, Greg Kroah-Hartman wrote:
> On Wed, Aug 28, 2019 at 09:27:56PM +0800, Zhangfei Gao wrote:
>> +struct uacce {
>> + const char *drv_name;
>> + const char *algs;
>> + const char *api_ver;
>> + unsigned int flags;
>> + unsigned long qf_pg_start[UACCE_QFRT_MAX];
>> + struct uacce_ops *ops;
>> + struct device *pdev;
>> + bool is_vf;
>> + u32 dev_id;
>> + struct cdev cdev;
>> + struct device dev;
>> + void *priv;
>> + atomic_t state;
>> + int prot;
>> + struct mutex q_lock;
>> + struct list_head qs;
>> +};
> At a quick glance, this problem really stood out to me. You CAN NOT
> have two different objects within a structure that have different
> lifetime rules and reference counts. You do that here with both a
> 'struct cdev' and a 'struct device'. Pick one or the other, but never
> both.
>
> I would recommend using a 'struct device' and then a 'struct cdev *'.
> That way you get the advantage of using the driver model properly, and
> then just adding your char device node pointer to "the side" which
> interacts with this device.
>
> Then you might want to call this "struct uacce_device" :)
Here the 'struct cdev' and 'struct device' have the same lifetime and
refcount.
They are allocated with uacce when uacce_register and freed when
uacce_unregister.
To make it clear, how about adding this.
+static void uacce_release(struct device *dev)
+{
+ struct uacce *uacce = UACCE_FROM_CDEV_ATTR(dev);
+
+ idr_remove(&uacce_idr, uacce->dev_id);
+ kfree(uacce);
+}
+
static int uacce_create_chrdev(struct uacce *uacce)
{
int ret;
@@ -819,6 +827,7 @@ static int uacce_create_chrdev(struct uacce *uacce)
uacce->dev.class = uacce_class;
uacce->dev.groups = uacce_dev_attr_groups;
uacce->dev.parent = uacce->pdev;
+ uacce->dev.release = uacce_release;
dev_set_name(&uacce->dev, "%s-%d", uacce->drv_name, uacce->dev_id);
ret = cdev_device_add(&uacce->cdev, &uacce->dev);
if (ret)
@@ -835,7 +844,7 @@ static int uacce_create_chrdev(struct uacce *uacce)
static void uacce_destroy_chrdev(struct uacce *uacce)
{
cdev_device_del(&uacce->cdev, &uacce->dev);
- idr_remove(&uacce_idr, uacce->dev_id);
+ put_device(&uacce->dev);
}
static int uacce_dev_match(struct device *dev, void *data)
@@ -1042,8 +1051,6 @@ void uacce_unregister(struct uacce *uacce)
uacce_destroy_chrdev(uacce);
mutex_unlock(&uacce_mutex);
-
- kfree(uacce);
}
uacce_destroy_chrdev->put_device(&uacce->dev)->uacce_release->kfree(uacce).
And I find there are many examples in drivers/:
$ grep -rn cdev_device_add drivers/
drivers/rtc/class.c:362: err = cdev_device_add(&rtc->char_dev,
&rtc->dev);
drivers/gpio/gpiolib.c:1181: status = cdev_device_add(&gdev->chrdev,
&gdev->dev);
drivers/soc/qcom/rmtfs_mem.c:223: ret =
cdev_device_add(&rmtfs_mem->cdev, &rmtfs_mem->dev);
drivers/input/joydev.c:989: error = cdev_device_add(&joydev->cdev,
&joydev->dev);
drivers/input/mousedev.c:907: error = cdev_device_add(&mousedev->cdev,
&mousedev->dev);
drivers/input/evdev.c:1419: error = cdev_device_add(&evdev->cdev,
&evdev->dev);
Like drivers/input/evdev.c:
evdev is allocated with initialization of dev and cdev,
and evdev is freed in the release op, evdev_free.
struct evdev {
struct device dev;
struct cdev cdev;
~
};
Thanks
On Thu, Aug 29, 2019 at 05:05:13PM +0800, zhangfei wrote:
> Hi, Greg
>
> On 2019/8/28 11:22 PM, Greg Kroah-Hartman wrote:
> > On Wed, Aug 28, 2019 at 09:27:56PM +0800, Zhangfei Gao wrote:
> > > +struct uacce {
> > > + const char *drv_name;
> > > + const char *algs;
> > > + const char *api_ver;
> > > + unsigned int flags;
> > > + unsigned long qf_pg_start[UACCE_QFRT_MAX];
> > > + struct uacce_ops *ops;
> > > + struct device *pdev;
> > > + bool is_vf;
> > > + u32 dev_id;
> > > + struct cdev cdev;
> > > + struct device dev;
> > > + void *priv;
> > > + atomic_t state;
> > > + int prot;
> > > + struct mutex q_lock;
> > > + struct list_head qs;
> > > +};
> > At a quick glance, this problem really stood out to me. You CAN NOT
> > have two different objects within a structure that have different
> > lifetime rules and reference counts. You do that here with both a
> > 'struct cdev' and a 'struct device'. Pick one or the other, but never
> > both.
> >
> > I would recommend using a 'struct device' and then a 'struct cdev *'.
> > That way you get the advantage of using the driver model properly, and
> > then just adding your char device node pointer to "the side" which
> > interacts with this device.
> >
> > Then you might want to call this "struct uacce_device" :)
>
> Here the 'struct cdev' and 'struct device' have the same lifetime and
> refcount.
No they do not, that's impossible as refcounts are incremented from
different places (i.e. userspace).
> They are allocated with uacce when uacce_register and freed when
> uacce_unregister.
And that will not work.
>
> To make it clear, how about adding this.
>
> +static void uacce_release(struct device *dev)
> +{
> + struct uacce *uacce = UACCE_FROM_CDEV_ATTR(dev);
> +
> + idr_remove(&uacce_idr, uacce->dev_id);
> + kfree(uacce);
> +}
> +
> static int uacce_create_chrdev(struct uacce *uacce)
> {
> int ret;
> @@ -819,6 +827,7 @@ static int uacce_create_chrdev(struct uacce *uacce)
> uacce->dev.class = uacce_class;
> uacce->dev.groups = uacce_dev_attr_groups;
> uacce->dev.parent = uacce->pdev;
> + uacce->dev.release = uacce_release;
You have to have a release function today, otherwise you will get nasty
kernel messages from the log. I don't know why you aren't seeing that
already.
> dev_set_name(&uacce->dev, "%s-%d", uacce->drv_name, uacce->dev_id);
> ret = cdev_device_add(&uacce->cdev, &uacce->dev);
> if (ret)
> @@ -835,7 +844,7 @@ static int uacce_create_chrdev(struct uacce *uacce)
> static void uacce_destroy_chrdev(struct uacce *uacce)
> {
> cdev_device_del(&uacce->cdev, &uacce->dev);
> - idr_remove(&uacce_idr, uacce->dev_id);
> + put_device(&uacce->dev);
> }
>
> static int uacce_dev_match(struct device *dev, void *data)
> @@ -1042,8 +1051,6 @@ void uacce_unregister(struct uacce *uacce)
> uacce_destroy_chrdev(uacce);
>
> mutex_unlock(&uacce_mutex);
> -
> - kfree(uacce);
> }
>
>
> uacce_destroy_chrdev->put_device(&uacce->dev)->uacce_release->kfree(uacce).
>
> And find there are many examples in driver/
> $ grep -rn cdev_device_add drivers/
> drivers/rtc/class.c:362: err = cdev_device_add(&rtc->char_dev,
> &rtc->dev);
> drivers/gpio/gpiolib.c:1181: status = cdev_device_add(&gdev->chrdev,
> &gdev->dev);
> drivers/soc/qcom/rmtfs_mem.c:223: ret =
> cdev_device_add(&rmtfs_mem->cdev, &rmtfs_mem->dev);
> drivers/input/joydev.c:989: error = cdev_device_add(&joydev->cdev,
> &joydev->dev);
> drivers/input/mousedev.c:907: error = cdev_device_add(&mousedev->cdev,
> &mousedev->dev);
> drivers/input/evdev.c:1419: error = cdev_device_add(&evdev->cdev,
> &evdev->dev);
Are you sure these all have the full structures embedded in them?
>
> like drivers/input/evdev.c,
> evdev is alloced with initialization of dev and cdev,
> and evdev is freed in release ops evdev_free
> struct evdev {
> struct device dev;
> struct cdev cdev;
> ~
Ick, that too is totally wrong and needs to be fixed.
Please don't copy incorrect code, that's why we review stuff :)
thanks,
greg k-h
Hi, Greg
On 2019/8/29 5:54 PM, Greg Kroah-Hartman wrote:
> On Thu, Aug 29, 2019 at 05:05:13PM +0800, zhangfei wrote:
>> Hi, Greg
>>
>> On 2019/8/28 11:22 PM, Greg Kroah-Hartman wrote:
>>> On Wed, Aug 28, 2019 at 09:27:56PM +0800, Zhangfei Gao wrote:
>>>> +struct uacce {
>>>> + const char *drv_name;
>>>> + const char *algs;
>>>> + const char *api_ver;
>>>> + unsigned int flags;
>>>> + unsigned long qf_pg_start[UACCE_QFRT_MAX];
>>>> + struct uacce_ops *ops;
>>>> + struct device *pdev;
>>>> + bool is_vf;
>>>> + u32 dev_id;
>>>> + struct cdev cdev;
>>>> + struct device dev;
>>>> + void *priv;
>>>> + atomic_t state;
>>>> + int prot;
>>>> + struct mutex q_lock;
>>>> + struct list_head qs;
>>>> +};
>>> At a quick glance, this problem really stood out to me. You CAN NOT
>>> have two different objects within a structure that have different
>>> lifetime rules and reference counts. You do that here with both a
>>> 'struct cdev' and a 'struct device'. Pick one or the other, but never
>>> both.
>>>
>>> I would recommend using a 'struct device' and then a 'struct cdev *'.
>>> That way you get the advantage of using the driver model properly, and
>>> then just adding your char device node pointer to "the side" which
>>> interacts with this device.
>>>
>>> Then you might want to call this "struct uacce_device" :)
>> Here the 'struct cdev' and 'struct device' have the same lifetime and
>> refcount.
> No they do not, that's impossible as refcounts are incremented from
> different places (i.e. userspace).
Yes, the cdev's refcount is increased by open() from user space.
I'm not sure whether I understand correctly. Is this correct?
@@ -819,9 +819,10 @@ static int uacce_create_chrdev(struct uacce *uacce)
if (ret < 0)
return ret;
- cdev_init(&uacce->cdev, &uacce_fops);
+ uacce->cdev = cdev_alloc();
+ uacce->cdev->ops = &uacce_fops;
uacce->dev_id = ret;
- uacce->cdev.owner = THIS_MODULE;
+ uacce->cdev->owner = THIS_MODULE;
device_initialize(&uacce->dev);
uacce->dev.devt = MKDEV(MAJOR(uacce_devt), uacce->dev_id);
uacce->dev.class = uacce_class;
@@ -829,7 +830,7 @@ static int uacce_create_chrdev(struct uacce *uacce)
uacce->dev.parent = uacce->pdev;
uacce->dev.release = uacce_release;
dev_set_name(&uacce->dev, "%s-%d", uacce->drv_name, uacce->dev_id);
- ret = cdev_device_add(&uacce->cdev, &uacce->dev);
+ ret = cdev_device_add(uacce->cdev, &uacce->dev);
if (ret)
goto err_with_idr;
@@ -843,7 +844,7 @@ static int uacce_create_chrdev(struct uacce *uacce)
static void uacce_destroy_chrdev(struct uacce *uacce)
{
- cdev_device_del(&uacce->cdev, &uacce->dev);
+ cdev_device_del(uacce->cdev, &uacce->dev);
put_device(&uacce->dev);
}
diff --git a/include/linux/uacce.h b/include/linux/uacce.h
index 1892b94..39a2c4b 100644
--- a/include/linux/uacce.h
+++ b/include/linux/uacce.h
@@ -155,7 +155,7 @@ struct uacce {
struct device *pdev;
bool is_vf;
u32 dev_id;
- struct cdev cdev;
+ struct cdev *cdev;
And use 'struct uacce_device' instead of 'struct uacce':

-struct uacce *uacce_register(struct device *parent,
-			     struct uacce_interface *interface);
+struct uacce_device *uacce_register(struct device *parent,
+				    struct uacce_interface *interface);
>
>> They are allocated with uacce when uacce_register and freed when
>> uacce_unregister.
> And that will not work.
>
>> To make it clear, how about adding this.
>>
>> +static void uacce_release(struct device *dev)
>> +{
>> +	struct uacce *uacce = UACCE_FROM_CDEV_ATTR(dev);
>> +
>> +	idr_remove(&uacce_idr, uacce->dev_id);
>> +	kfree(uacce);
>> +}
>> +
>> static int uacce_create_chrdev(struct uacce *uacce)
>> {
>> 	int ret;
>> @@ -819,6 +827,7 @@ static int uacce_create_chrdev(struct uacce *uacce)
>> 	uacce->dev.class = uacce_class;
>> 	uacce->dev.groups = uacce_dev_attr_groups;
>> 	uacce->dev.parent = uacce->pdev;
>> +	uacce->dev.release = uacce_release;
> You have to have a release function today, otherwise you will get nasty
> kernel messages from the log. I don't know why you aren't seeing that
> already.
Yes, the kernel reports a warning after put_device when there is no release function.
>
>> 	dev_set_name(&uacce->dev, "%s-%d", uacce->drv_name, uacce->dev_id);
>> 	ret = cdev_device_add(&uacce->cdev, &uacce->dev);
>> 	if (ret)
>> @@ -835,7 +844,7 @@ static int uacce_create_chrdev(struct uacce *uacce)
>> static void uacce_destroy_chrdev(struct uacce *uacce)
>> {
>> 	cdev_device_del(&uacce->cdev, &uacce->dev);
>> -	idr_remove(&uacce_idr, uacce->dev_id);
>> +	put_device(&uacce->dev);
>> }
>>
>> static int uacce_dev_match(struct device *dev, void *data)
>> @@ -1042,8 +1051,6 @@ void uacce_unregister(struct uacce *uacce)
>> 	uacce_destroy_chrdev(uacce);
>>
>> 	mutex_unlock(&uacce_mutex);
>> -
>> -	kfree(uacce);
>> }
>>
>>
>> uacce_destroy_chrdev->put_device(&uacce->dev)->uacce_release->kfree(uacce).
>>
>> And I find there are many examples in drivers/:
>> $ grep -rn cdev_device_add drivers/
>> drivers/rtc/class.c:362: err = cdev_device_add(&rtc->char_dev, &rtc->dev);
>> drivers/gpio/gpiolib.c:1181: status = cdev_device_add(&gdev->chrdev, &gdev->dev);
>> drivers/soc/qcom/rmtfs_mem.c:223: ret = cdev_device_add(&rmtfs_mem->cdev, &rmtfs_mem->dev);
>> drivers/input/joydev.c:989: error = cdev_device_add(&joydev->cdev, &joydev->dev);
>> drivers/input/mousedev.c:907: error = cdev_device_add(&mousedev->cdev, &mousedev->dev);
>> drivers/input/evdev.c:1419: error = cdev_device_add(&evdev->cdev, &evdev->dev);
> Are you sure these all have the full structures embedded in them?
>
>> For example, in drivers/input/evdev.c,
>> evdev is allocated with both dev and cdev initialized,
>> and evdev is freed in the release op evdev_free:
>> struct evdev {
>> 	struct device dev;
>> 	struct cdev cdev;
>> 	...
> Ick, that too is totally wrong and needs to be fixed.
>
> Please don't copy incorrect code, that's why we review stuff :)
>
OK, understood. Thanks, Greg.
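To illustrate the lifetime rule discussed above (the last put_device triggers the device's release callback, which frees the containing structure), here is a small userspace C analogue. It is only a sketch: fake_device, fake_put_device, fake_uacce_release and run_release_chain_demo are hypothetical stand-ins for the kernel's struct device, put_device() and uacce_release(), not kernel APIs.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Minimal stand-in for the refcounted part of 'struct device'. */
struct fake_device {
	int refcount;
	void (*release)(struct fake_device *dev);
};

/* Container, analogous to 'struct uacce' embedding its 'struct device'. */
struct fake_uacce {
	int dev_id;
	struct fake_device dev;
};

static int release_called;

/* Analogue of uacce_release(): map back to the container and free it. */
static void fake_uacce_release(struct fake_device *dev)
{
	struct fake_uacce *uacce = (struct fake_uacce *)
		((char *)dev - offsetof(struct fake_uacce, dev));

	release_called = 1;
	free(uacce);
}

/* Analogue of put_device(): drop one reference, release at zero. */
static void fake_put_device(struct fake_device *dev)
{
	if (--dev->refcount == 0)
		dev->release(dev);
}

/* Walk the chain: destroy -> put -> release -> free. Returns 1 on success. */
int run_release_chain_demo(void)
{
	struct fake_uacce *uacce = malloc(sizeof(*uacce));

	if (!uacce)
		return 0;
	release_called = 0;
	uacce->dev.refcount = 1;	/* reference held by the driver */
	uacce->dev.release = fake_uacce_release;

	fake_put_device(&uacce->dev);	/* last put frees the container */
	return release_called;
}
```

With more than one reference (e.g. an open file), the container would survive the driver's put and be freed only on the final one, which is exactly the behavior the release callback exists to guarantee.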
Hi, Greg
On 2019/8/29 at 5:54 PM, Greg Kroah-Hartman wrote:
> On Thu, Aug 29, 2019 at 05:05:13PM +0800, zhangfei wrote:
>> Hi, Greg
>>
>>> On 2019/8/28 at 11:22 PM, Greg Kroah-Hartman wrote:
>>> On Wed, Aug 28, 2019 at 09:27:56PM +0800, Zhangfei Gao wrote:
>>>> +struct uacce {
>>>> + const char *drv_name;
>>>> + const char *algs;
>>>> + const char *api_ver;
>>>> + unsigned int flags;
>>>> + unsigned long qf_pg_start[UACCE_QFRT_MAX];
>>>> + struct uacce_ops *ops;
>>>> + struct device *pdev;
>>>> + bool is_vf;
>>>> + u32 dev_id;
>>>> + struct cdev cdev;
>>>> + struct device dev;
>>>> + void *priv;
>>>> + atomic_t state;
>>>> + int prot;
>>>> + struct mutex q_lock;
>>>> + struct list_head qs;
>>>> +};
>>> At a quick glance, this problem really stood out to me. You CAN NOT
>>> have two different objects within a structure that have different
>>> lifetime rules and reference counts. You do that here with both a
>>> 'struct cdev' and a 'struct device'. Pick one or the other, but never
>>> both.
>>>
>>> I would recommend using a 'struct device' and then a 'struct cdev *'.
>>> That way you get the advantage of using the driver model properly, and
>>> then just adding your char device node pointer to "the side" which
>>> interacts with this device.
>>>
>>> Then you might want to call this "struct uacce_device" :)
>> Here the 'struct cdev' and 'struct device' have the same lifetime and
>> refcount.
> No they do not, that's impossible as refcounts are incremented from
> different places (i.e. userspace).
>
>> They are allocated with uacce when uacce_register and freed when
>> uacce_unregister.
> And that will not work.
I am sorry, could I ask more about this part?
* This function should be used whenever the struct cdev and the
* struct device are members of the same structure whose lifetime is
* managed by the struct device.
From the cdev_device_add comments, it looks like struct cdev and struct
device can be in the same structure, like uacce, and uacce is released on
put_device(&uacce->dev).
Also, cdev_device_del does both device_del(dev) and cdev_del(cdev).
Copied from fs/char_dev.c:
/**
* cdev_device_add() - add a char device and its corresponding
* struct device, linking
* @dev: the device structure
* @cdev: the cdev structure
*
* cdev_device_add() adds the char device represented by @cdev to the system,
* just as cdev_add does. It then adds @dev to the system using device_add().
* The dev_t for the char device will be taken from the struct device which
* needs to be initialized first. This helper function correctly takes a
* reference to the parent device so the parent will not get released until
* all references to the cdev are released.
*
* This helper uses dev->devt for the device number. If it is not set
* it will not add the cdev and it will be equivalent to device_add.
*
* This function should be used whenever the struct cdev and the
* struct device are members of the same structure whose lifetime is
* managed by the struct device.
*
* NOTE: Callers must assume that userspace was able to open the cdev and
* can call cdev fops callbacks at any time, even if this function fails.
*/
int cdev_device_add(struct cdev *cdev, struct device *dev)
{
	int rc = 0;

	if (dev->devt) {
		cdev_set_parent(cdev, &dev->kobj);

		rc = cdev_add(cdev, dev->devt, 1);
		if (rc)
			return rc;
	}

	rc = device_add(dev);
	if (rc)
		cdev_del(cdev);

	return rc;
}
Thanks
Hi, Greg
On 2019/8/30 at 10:54 PM, zhangfei wrote:
>>> On 2019/8/28 at 11:22 PM, Greg Kroah-Hartman wrote:
>>>> On Wed, Aug 28, 2019 at 09:27:56PM +0800, Zhangfei Gao wrote:
>>>>> [...]
>>>> At a quick glance, this problem really stood out to me. You CAN NOT
>>>> have two different objects within a structure that have different
>>>> lifetime rules and reference counts. You do that here with both a
>>>> 'struct cdev' and a 'struct device'. Pick one or the other, but never
>>>> both.
>>>>
>>>> I would recommend using a 'struct device' and then a 'struct cdev *'.
>>>> That way you get the advantage of using the driver model properly, and
>>>> then just adding your char device node pointer to "the side" which
>>>> interacts with this device.
>>>>
>>>> Then you might want to call this "struct uacce_device" :)
I think I understand now.
'struct device' and 'struct cdev' have different refcounts.
Using a 'struct cdev *', the release is not in uacce.c but is controlled by
the cdev itself, so uacce is decoupled from the cdev.

//Using 'struct cdev *'
cdev_alloc->cdev_dynamic_release: kfree(p);
uacce_destroy_chrdev:
cdev_device_del->cdev_del(cdev)->kobject_put(&p->kobj);
	if (--refcount == 0)
		cdev_dynamic_release->kfree(p);

//Using 'struct device'
cdev_init->cdev_default_release
cdev is freed in uacce.c,
so 'struct device' and 'struct cdev' are bound together, while the cdev
and uacce->dev have different refcounts.
Thanks for the patience.
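The decoupling described above can be sketched with a small userspace analogue: a separately allocated object with its own refcount (as cdev_alloc() provides in the kernel) safely outlives the driver's teardown while userspace still holds a reference, and is freed only by its own release on the last put. All names here (fake_cdev, fake_cdev_alloc, fake_cdev_put, run_decoupled_lifetime_demo) are hypothetical stand-ins, not kernel APIs.

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for a cdev_alloc()-style object with its own refcount/release. */
struct fake_cdev {
	int refcount;
	int *freed_flag;	/* set when this object is released */
};

static struct fake_cdev *fake_cdev_alloc(int *freed_flag)
{
	struct fake_cdev *cdev = malloc(sizeof(*cdev));

	if (!cdev)
		return NULL;
	cdev->refcount = 1;	/* reference held by the driver */
	cdev->freed_flag = freed_flag;
	return cdev;
}

static void fake_cdev_put(struct fake_cdev *cdev)
{
	if (--cdev->refcount == 0) {
		*cdev->freed_flag = 1;	/* analogue of cdev_dynamic_release() */
		free(cdev);
	}
}

/* Returns 1 when the cdev outlives teardown and is freed only on last put. */
int run_decoupled_lifetime_demo(void)
{
	int cdev_freed = 0;
	struct fake_cdev *cdev = fake_cdev_alloc(&cdev_freed);

	if (!cdev)
		return 0;

	cdev->refcount++;	/* analogue of userspace open() taking a ref */

	fake_cdev_put(cdev);	/* driver teardown (cdev_del): must survive */
	if (cdev_freed)
		return 0;	/* would be a use-after-free if embedded */

	fake_cdev_put(cdev);	/* userspace close(): last ref, now freed */
	return cdev_freed;
}
```

Had the fake_cdev been embedded in a container freed at teardown time, the reference still held by "open userspace" would point into freed memory, which is the problem Greg is describing.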
On Mon, Sep 02, 2019 at 11:44:16AM +0800, zhangfei wrote:
>
> Hi, Greg
>
> On 2019/8/30 at 10:54 PM, zhangfei wrote:
> > > > On 2019/8/28 at 11:22 PM, Greg Kroah-Hartman wrote:
> > > > > [...]
> I think I understand now.
> 'struct device' and 'struct cdev' have different refcounts.
> Using a 'struct cdev *', the release is not in uacce.c but is controlled by
> the cdev itself, so uacce is decoupled from the cdev.
>
> //Using 'struct cdev *'
> cdev_alloc->cdev_dynamic_release: kfree(p);
> uacce_destroy_chrdev:
> cdev_device_del->cdev_del(cdev)->kobject_put(&p->kobj);
> 	if (--refcount == 0)
> 		cdev_dynamic_release->kfree(p);
>
> //Using 'struct device'
> cdev_init->cdev_default_release
> cdev is freed in uacce.c,
> so 'struct device' and 'struct cdev' are bound together, while the cdev
> and uacce->dev have different refcounts.
Yes, that is exactly the reason, glad you figured it out.
thanks,
greg k-h