The AI engine is an acceleration engine provided by Xilinx. It provides
high compute density for vector-based algorithms, as well as flexible
custom compute and data movement. The engine array consists of core
tiles for compute and shim tiles that interface with the FPGA fabric.
See the AI engine architecture manual for more hardware details:
https://www.xilinx.com/support/documentation/architecture-manuals/am009-versal-ai-engine.pdf
This patch series adds a Linux kernel driver to manage the Xilinx AI
engine array device and AI engine partitions (groups of AI engine tiles
dedicated to an application).
v3:
* Unlock the AIE device mutex after failing to acquire the partition
  lock in error handling
* Replace pointers with __u64 and enums with __u32 in the ioctl
  interface
v2:
* Fix dtschema check errors
* Fix test bot warning in the interrupt implementation. Removed a
  set-but-unused variable.
* Fix unused-function compilation warning from the firmware change in
  case ZynqMP firmware is not configured
* There are other warnings on the ZynqMP firmware reported by the test
  bot which are not introduced by this patch set.
"[PATCH] firmware: xlnx-zynqmp: fix compilation warning" is submitted
for those fixes.
Izhar Ameer Shaikh (1):
firmware: xilinx: Add IOCTL support for AIE ISR Clear
Nishad Saraf (2):
misc: xilinx-ai-engine: Add support to request device management
services
misc: xilinx-ai-engine: Add support for servicing error interrupts
Wendy Liang (6):
dt-binding: soc: xilinx: ai-engine: Add AI engine binding
misc: Add Xilinx AI engine device driver
misc: xilinx-ai-engine: Implement AI engine cleanup sequence
misc: xilinx-ai-engine: expose AI engine tile memories to userspace
misc: xilinx-ai-engine: add setting shim dma bd operation
misc: xilinx-ai-engine: add request and release tiles
.../bindings/soc/xilinx/xlnx,ai-engine.yaml | 126 ++++
MAINTAINERS | 8 +
drivers/firmware/xilinx/zynqmp.c | 14 +
drivers/misc/Kconfig | 12 +
drivers/misc/Makefile | 1 +
drivers/misc/xilinx-ai-engine/Makefile | 16 +
drivers/misc/xilinx-ai-engine/ai-engine-aie.c | 608 +++++++++++++++++++
drivers/misc/xilinx-ai-engine/ai-engine-clock.c | 245 ++++++++
drivers/misc/xilinx-ai-engine/ai-engine-dev.c | 496 ++++++++++++++++
drivers/misc/xilinx-ai-engine/ai-engine-dma.c | 481 +++++++++++++++
drivers/misc/xilinx-ai-engine/ai-engine-internal.h | 519 ++++++++++++++++
.../misc/xilinx-ai-engine/ai-engine-interrupt.c | 659 +++++++++++++++++++++
drivers/misc/xilinx-ai-engine/ai-engine-mem.c | 275 +++++++++
drivers/misc/xilinx-ai-engine/ai-engine-part.c | 635 ++++++++++++++++++++
drivers/misc/xilinx-ai-engine/ai-engine-res.c | 219 +++++++
drivers/misc/xilinx-ai-engine/ai-engine-reset.c | 159 +++++
include/linux/firmware/xlnx-zynqmp.h | 8 +
include/uapi/linux/xlnx-ai-engine.h | 238 ++++++++
18 files changed, 4719 insertions(+)
create mode 100644 Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
create mode 100644 drivers/misc/xilinx-ai-engine/Makefile
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-aie.c
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-clock.c
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dev.c
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dma.c
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-internal.h
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-interrupt.c
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-mem.c
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-part.c
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-res.c
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-reset.c
create mode 100644 include/uapi/linux/xlnx-ai-engine.h
--
2.7.4
From: Nishad Saraf <[email protected]>
Platform management services such as device control, resets, and power
management are provided by the Platform Loader and Manager (PLM)
through firmware driver APIs. To request some of these services, this
change reads the AI engine platform management node ID from the device
tree node. Some other features, such as clearing interrupts in the NoC
interconnect, may only be valid for particular silicon revisions. To
support such silicon-specific features, the AI engine driver queries
the silicon revision and stores it in the device instance. While at
it, this change makes the EEMI operations accessible to all the other
source files in the driver.
Signed-off-by: Nishad Saraf <[email protected]>
Signed-off-by: Wendy Liang <[email protected]>
---
drivers/misc/xilinx-ai-engine/ai-engine-dev.c | 25 +++++++++++++++++++++-
drivers/misc/xilinx-ai-engine/ai-engine-internal.h | 6 ++++++
2 files changed, 30 insertions(+), 1 deletion(-)
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-dev.c b/drivers/misc/xilinx-ai-engine/ai-engine-dev.c
index 43f4933..51c3a4f 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-dev.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-dev.c
@@ -11,6 +11,7 @@
#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/file.h>
+#include <linux/firmware/xlnx-zynqmp.h>
#include <linux/fs.h>
#include <linux/idr.h>
#include <linux/kernel.h>
@@ -26,7 +27,8 @@
#include "ai-engine-internal.h"
-#define AIE_DEV_MAX (MINORMASK + 1)
+#define AIE_DEV_MAX (MINORMASK + 1)
+#define VERSAL_SILICON_REV_MASK GENMASK(31, 28)
static dev_t aie_major;
struct class *aie_class;
@@ -322,6 +324,7 @@ static int xilinx_ai_engine_probe(struct platform_device *pdev)
{
struct aie_device *adev;
struct device *dev;
+ u32 idcode, version, pm_reg[2];
int ret;
adev = devm_kzalloc(&pdev->dev, sizeof(*adev), GFP_KERNEL);
@@ -349,6 +352,26 @@ static int xilinx_ai_engine_probe(struct platform_device *pdev)
return ret;
}
+ /*
+ * AI Engine platform management node ID is required for requesting
+ * services from firmware driver.
+ */
+ ret = of_property_read_u32_array(pdev->dev.of_node, "power-domains",
+ pm_reg, ARRAY_SIZE(pm_reg));
+ if (ret < 0) {
+ dev_err(&pdev->dev,
+ "Failed to read power management information\n");
+ return ret;
+ }
+ adev->pm_node_id = pm_reg[1];
+
+ ret = zynqmp_pm_get_chipid(&idcode, &version);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "Failed to get chip ID\n");
+ return ret;
+ }
+ adev->version = FIELD_GET(VERSAL_SILICON_REV_MASK, idcode);
+
dev = &adev->dev;
device_initialize(dev);
dev->class = aie_class;
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-internal.h b/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
index 131d22a..b21b7025 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
@@ -41,6 +41,10 @@
#define AIE_REGS_ATTR_PERM_MASK GENMASK(15, \
AIE_REGS_ATTR_PERM_SHIFT)
+/* Silicon Engineering Sample (ES) revision ID */
+#define VERSAL_ES1_REV_ID 0x0
+#define VERSAL_ES2_REV_ID 0x1
+
/**
* struct aie_tile_regs - contiguous range of AI engine register
* within an AI engine tile
@@ -173,6 +177,7 @@ struct aie_resource {
* while columns are occupied by partitions.
* @num_kernel_regs: number of kernel only registers range
* @version: AI engine device version
+ * @pm_node_id: AI Engine platform management node ID
*/
struct aie_device {
struct list_head partitions;
@@ -193,6 +198,7 @@ struct aie_device {
u32 row_shift;
u32 num_kernel_regs;
int version;
+ u32 pm_node_id;
};
/**
--
2.7.4
Add operations to set SHIM DMA buffer descriptors.
The following operations are added to set the buffer descriptors:
* attach DMA buffer, which enables the AI engine device to access the
  DMA buffer memory
* detach DMA buffer, which revokes the AI engine device's access to
  the DMA buffer memory
* set DMA buffer descriptor, which takes the buffer descriptor
  contents pointer, the dmabuf fd, and the offset to the start of the
  dmabuf as arguments. It validates the dmabuf and the offset and size
  of the buffer, then calculates the DMA address of the buffer and
  writes the buffer descriptor contents to the hardware DMA buffer
  descriptor.
The main logic that controls what goes into a buffer descriptor and
which buffer descriptor to use is in the userspace AI engine library.
Signed-off-by: Wendy Liang <[email protected]>
Reviewed-by: Hyun Kwon <[email protected]>
---
drivers/misc/xilinx-ai-engine/Makefile | 1 +
drivers/misc/xilinx-ai-engine/ai-engine-aie.c | 19 +
drivers/misc/xilinx-ai-engine/ai-engine-dma.c | 481 +++++++++++++++++++++
drivers/misc/xilinx-ai-engine/ai-engine-internal.h | 45 ++
drivers/misc/xilinx-ai-engine/ai-engine-part.c | 17 +
include/uapi/linux/xlnx-ai-engine.h | 44 ++
6 files changed, 607 insertions(+)
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dma.c
diff --git a/drivers/misc/xilinx-ai-engine/Makefile b/drivers/misc/xilinx-ai-engine/Makefile
index 2dbed42..1b743fa 100644
--- a/drivers/misc/xilinx-ai-engine/Makefile
+++ b/drivers/misc/xilinx-ai-engine/Makefile
@@ -7,6 +7,7 @@ obj-$(CONFIG_XILINX_AIE) += xilinx-aie.o
xilinx-aie-$(CONFIG_XILINX_AIE) := ai-engine-aie.o \
ai-engine-dev.o \
+ ai-engine-dma.o \
ai-engine-mem.o \
ai-engine-part.o \
ai-engine-res.o \
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-aie.c b/drivers/misc/xilinx-ai-engine/ai-engine-aie.c
index 7fce2f00..ac95aff 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-aie.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-aie.c
@@ -107,6 +107,24 @@ static const struct aie_single_reg_field aie_col_clkbuf = {
.regoff = AIE_SHIMPL_CLKCNTR_REGOFF,
};
+static const struct aie_dma_attr aie_shimdma = {
+ .laddr = {
+ .mask = 0xffffffffU,
+ .regoff = 0U,
+ },
+ .haddr = {
+ .mask = 0xffff0000U,
+ .regoff = 0x8U,
+ },
+ .buflen = {
+ .mask = 0xffffffffU,
+ .regoff = 0x4U,
+ },
+ .bd_regoff = 0x0001d000U,
+ .num_bds = 16,
+ .bd_len = 0x14U,
+};
+
static u32 aie_get_tile_type(struct aie_location *loc)
{
if (loc->row)
@@ -232,6 +250,7 @@ int aie_device_init(struct aie_device *adev)
adev->kernel_regs = aie_kernel_regs;
adev->col_rst = &aie_col_rst;
adev->col_clkbuf = &aie_col_clkbuf;
+ adev->shim_dma = &aie_shimdma;
/* Get the columns resource */
/* Get number of columns from AI engine memory resource */
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-dma.c b/drivers/misc/xilinx-ai-engine/ai-engine-dma.c
new file mode 100644
index 0000000..863790b
--- /dev/null
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-dma.c
@@ -0,0 +1,481 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx AI Engine driver DMA implementation
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ */
+
+#include "ai-engine-internal.h"
+#include <linux/dma-buf.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/refcount.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/uaccess.h>
+
+/**
+ * struct aie_dmabuf - AI engine dmabuf information
+ * @attach: dmabuf attachment pointer
+ * @sgt: scatter/gather table
+ * @refs: refcount of the attached aie_dmabuf
+ * @node: list node
+ */
+struct aie_dmabuf {
+ struct dma_buf_attachment *attach;
+ struct sg_table *sgt;
+ refcount_t refs;
+ struct list_head node;
+};
+
+/**
+ * aie_part_find_dmabuf() - find an attached dmabuf
+ * @apart: AI engine partition
+ * @dmabuf: pointer to dmabuf
+ * @return: pointer to AI engine dmabuf struct of the found dmabuf, if dmabuf
+ * is not found, returns NULL.
+ *
+ * This function scans all the attached dmabufs to check if the input dmabuf is
+ * in the list. If it is attached, it returns the corresponding struct
+ * aie_dmabuf pointer.
+ */
+static struct aie_dmabuf *
+aie_part_find_dmabuf(struct aie_partition *apart, struct dma_buf *dmabuf)
+{
+ struct aie_dmabuf *adbuf;
+
+ list_for_each_entry(adbuf, &apart->dbufs, node) {
+ if (dmabuf == adbuf->attach->dmabuf)
+ return adbuf;
+ }
+
+ return NULL;
+}
+
+/**
+ * aie_part_get_dmabuf_da_from_off() - get DMA address from offset to a dmabuf
+ * @apart: AI engine partition
+ * @dmabuf_fd: dmabuf file descriptor
+ * @off: offset to the start of a dmabuf
+ * @len: memory length
+ * @return: dma address, or 0 if @off or @len is invalid, or if @dmabuf_fd is
+ * not attached.
+ *
+ * This function returns the DMA address for the offset if it maps into a
+ * dmabuf which has been attached to the AI engine partition.
+ */
+static dma_addr_t
+aie_part_get_dmabuf_da_from_off(struct aie_partition *apart, int dmabuf_fd,
+ u64 off, size_t len)
+{
+ struct dma_buf *dbuf = dma_buf_get(dmabuf_fd);
+ struct aie_dmabuf *adbuf;
+
+ if (IS_ERR(dbuf)) {
+ dev_err(&apart->dev,
+ "failed to get dma address, not able to get dmabuf from %d.\n",
+ dmabuf_fd);
+ return 0;
+ }
+
+ adbuf = aie_part_find_dmabuf(apart, dbuf);
+ dma_buf_put(dbuf);
+ if (!adbuf) {
+ dev_err(&apart->dev,
+ "failed to get dma address, dmabuf %d not attached.\n",
+ dmabuf_fd);
+ return 0;
+ }
+
+ if (off >= dbuf->size || len > dbuf->size - off) {
+ dev_err(&apart->dev,
+ "failed to get dma address from buf %d, off=0x%llx, len=0x%zx.\n",
+ dmabuf_fd, off, len);
+ return 0;
+ }
+
+ return sg_dma_address(adbuf->sgt->sgl) + off;
+}
+
+/**
+ * aie_part_set_shimdma_bd() - Set the buffer descriptor to AI engine partition
+ * hardware
+ * @apart: AI engine partition
+ * @loc: AI engine tile location
+ * @bd_id: buffer descriptor ID
+ * @bd: pointer buffer descriptor content
+ * @return: 0 for success, negative value for failure
+ *
+ * This function sets the specified buffer descriptor content to the
+ * specified buffer descriptor in the specified AI engine SHIM NOC tile.
+ */
+static int aie_part_set_shimdma_bd(struct aie_partition *apart,
+ struct aie_location loc, u32 bd_id, u32 *bd)
+{
+ const struct aie_dma_attr *shim_dma = apart->adev->shim_dma;
+ struct aie_location loc_adjust;
+ u32 i, regoff, intile_regoff;
+
+ intile_regoff = shim_dma->bd_regoff + shim_dma->bd_len * bd_id;
+ loc_adjust.col = loc.col + apart->range.start.col;
+ loc_adjust.row = loc.row + apart->range.start.row;
+ regoff = aie_cal_regoff(apart->adev, loc_adjust, intile_regoff);
+
+ for (i = 0; i < shim_dma->bd_len / (sizeof(*bd));
+ i++, regoff += sizeof(*bd))
+ iowrite32(bd[i], apart->adev->base + regoff);
+ return 0;
+}
+
+/**
+ * aie_part_validate_bdloc() - Validate SHIM DMA buffer descriptor location
+ * @apart: AI engine partition
+ * @loc: tile location
+ * @bd_id: buffer descriptor id
+ *
+ * @return: 0 for success, negative value for failure
+ *
+ * This function validates the SHIM DMA buffer descriptor location and ID.
+ */
+static int aie_part_validate_bdloc(struct aie_partition *apart,
+ struct aie_location loc, u32 bd_id)
+{
+ const struct aie_dma_attr *shim_dma = apart->adev->shim_dma;
+ struct aie_location loc_adjust;
+ u32 ttype;
+
+ loc_adjust.col = loc.col + apart->range.start.col;
+ loc_adjust.row = loc.row + apart->range.start.row;
+
+ if (aie_validate_location(apart, loc_adjust) < 0) {
+ dev_err(&apart->dev,
+ "invalid loc (%u,%u) in (%u,%u).\n",
+ loc.col, loc.row,
+ apart->range.size.col, apart->range.size.row);
+ return -EINVAL;
+ }
+
+ ttype = apart->adev->ops->get_tile_type(&loc_adjust);
+ if (ttype != AIE_TILE_TYPE_SHIMNOC) {
+ dev_err(&apart->dev,
+ "failed to set bd, (%u,%u) is not SHIM NOC\n",
+ loc.col, loc.row);
+ return -EINVAL;
+ }
+
+ if (bd_id >= shim_dma->num_bds) {
+ dev_err(&apart->dev,
+ "invalid SHIM DMA bd id: %u.\n", bd_id);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+/**
+ * aie_part_attach_dmabuf() - Attach dmabuf to an AI engine partition
+ * @apart: AI engine partition
+ * @dbuf: pointer to the DMA buffer to attach
+ * @return: pointer to AI engine dmabuf structure for success, or error value
+ * for failure
+ *
+ * This function attaches a dmabuf to the specified AI engine partition.
+ */
+static struct aie_dmabuf *aie_part_attach_dmabuf(struct aie_partition *apart,
+ struct dma_buf *dbuf)
+{
+ struct aie_dmabuf *adbuf;
+ struct dma_buf_attachment *attach;
+ struct sg_table *sgt;
+
+ attach = dma_buf_attach(dbuf, &apart->dev);
+ if (IS_ERR(attach)) {
+ dev_err(&apart->dev, "failed to attach dmabuf\n");
+ return ERR_CAST(attach);
+ }
+
+ sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+ if (IS_ERR(sgt)) {
+ dev_err(&apart->dev, "failed to map dmabuf attachment\n");
+ dma_buf_detach(dbuf, attach);
+ return ERR_CAST(sgt);
+ }
+
+ if (sgt->nents != 1) {
+ dma_addr_t next_sg_addr = sg_dma_address(sgt->sgl);
+ struct scatterlist *s;
+ unsigned int i;
+
+ for_each_sg(sgt->sgl, s, sgt->nents, i) {
+ if (sg_dma_address(s) != next_sg_addr) {
+ dev_err(&apart->dev,
+ "dmabuf not contiguous\n");
+ dma_buf_unmap_attachment(attach, sgt,
+ attach->dir);
+ dma_buf_detach(dbuf, attach);
+ return ERR_PTR(-EINVAL);
+ }
+
+ next_sg_addr = sg_dma_address(s) + sg_dma_len(s);
+ }
+ }
+
+ adbuf = devm_kzalloc(&apart->dev, sizeof(*adbuf), GFP_KERNEL);
+ if (!adbuf) {
+ dma_buf_unmap_attachment(attach, sgt, attach->dir);
+ dma_buf_detach(dbuf, attach);
+ return ERR_PTR(-ENOMEM);
+ }
+
+ adbuf->attach = attach;
+ /*
+ * dmabuf attachment doesn't always include the sgt, store it in
+ * AI engine dma buf structure.
+ */
+ adbuf->sgt = sgt;
+
+ refcount_set(&adbuf->refs, 1);
+
+ list_add(&adbuf->node, &apart->dbufs);
+ return adbuf;
+}
+
+/**
+ * aie_part_dmabuf_attach_get() - Get reference to a dmabuf attachment
+ * @adbuf: AI engine partition attached dmabuf
+ *
+ * This call will increase the reference count by 1
+ */
+static void aie_part_dmabuf_attach_get(struct aie_dmabuf *adbuf)
+{
+ refcount_inc(&adbuf->refs);
+}
+
+/**
+ * aie_part_dmabuf_attach_put() - Put reference to a dmabuf attachment
+ * @adbuf: AI engine partition attached dmabuf
+ *
+ * This call will decrease the reference count by 1. If the refcount reaches
+ * 0, it will detach the dmabuf.
+ */
+static void aie_part_dmabuf_attach_put(struct aie_dmabuf *adbuf)
+{
+ struct dma_buf *dbuf;
+
+ if (!refcount_dec_and_test(&adbuf->refs))
+ return;
+
+ dbuf = adbuf->attach->dmabuf;
+ dma_buf_unmap_attachment(adbuf->attach, adbuf->sgt, adbuf->attach->dir);
+ dma_buf_detach(dbuf, adbuf->attach);
+ dma_buf_put(dbuf);
+ list_del(&adbuf->node);
+}
+
+/**
+ * aie_part_release_dmabufs() - detach all the attached dmabufs from partition
+ * @apart: AI engine partition
+ */
+void aie_part_release_dmabufs(struct aie_partition *apart)
+{
+ struct aie_dmabuf *adbuf, *tmpadbuf;
+
+ list_for_each_entry_safe(adbuf, tmpadbuf, &apart->dbufs, node) {
+ struct dma_buf *dbuf = adbuf->attach->dmabuf;
+
+ dma_buf_unmap_attachment(adbuf->attach, adbuf->sgt,
+ adbuf->attach->dir);
+ dma_buf_detach(dbuf, adbuf->attach);
+ dma_buf_put(dbuf);
+ list_del(&adbuf->node);
+ devm_kfree(&apart->dev, adbuf);
+ }
+}
+
+/**
+ * aie_part_attach_dmabuf_req() - Handle attaching dmabuf to an AI engine
+ * partition request
+ * @apart: AI engine partition
+ * @user_args: user AI engine dmabuf argument
+ *
+ * @return: 0 for success, negative value for failure
+ *
+ * This function attaches a dmabuf to the specified AI engine partition and maps
+ * the attachment. It checks if the dmabuf is already attached; if it is not
+ * attached, it attaches it. It returns the number of entries of the attachment
+ * to the AI engine dmabuf user argument. If the user wants to know the sg
+ * list, the AI engine get sg ioctl can be used.
+ */
+long aie_part_attach_dmabuf_req(struct aie_partition *apart,
+ void __user *user_args)
+{
+ struct aie_dmabuf *adbuf;
+ struct dma_buf *dbuf;
+ long ret;
+ int dmabuf_fd = (int)(uintptr_t)user_args;
+
+ dbuf = dma_buf_get(dmabuf_fd);
+ if (IS_ERR(dbuf)) {
+ dev_err(&apart->dev, "failed to get dmabuf from %d.\n",
+ dmabuf_fd);
+ return PTR_ERR(dbuf);
+ }
+
+ ret = mutex_lock_interruptible(&apart->mlock);
+ if (ret) {
+ dma_buf_put(dbuf);
+ return ret;
+ }
+
+ adbuf = aie_part_find_dmabuf(apart, dbuf);
+ if (!adbuf)
+ adbuf = aie_part_attach_dmabuf(apart, dbuf);
+ else
+ aie_part_dmabuf_attach_get(adbuf);
+
+ mutex_unlock(&apart->mlock);
+
+ if (IS_ERR(adbuf)) {
+ dev_err(&apart->dev, "failed to attach dmabuf\n");
+ dma_buf_put(dbuf);
+ return PTR_ERR(adbuf);
+ }
+
+ return 0;
+}
+
+/**
+ * aie_part_detach_dmabuf_req() - Handle detaching dmabuf from an AI engine
+ * partition request
+ * @apart: AI engine partition
+ * @user_args: user AI engine dmabuf argument
+ *
+ * @return: 0 for success, negative value for failure
+ *
+ * This function unmaps and detaches a dmabuf from the specified AI engine
+ * partition.
+ */
+long aie_part_detach_dmabuf_req(struct aie_partition *apart,
+ void __user *user_args)
+{
+ int dmabuf_fd;
+ struct dma_buf *dbuf;
+ struct aie_dmabuf *adbuf;
+ int ret;
+
+ dmabuf_fd = (int)(uintptr_t)user_args;
+
+ dbuf = dma_buf_get(dmabuf_fd);
+ if (IS_ERR(dbuf)) {
+ dev_err(&apart->dev, "failed to get dmabuf %d.\n", dmabuf_fd);
+ return PTR_ERR(dbuf);
+ }
+
+ ret = mutex_lock_interruptible(&apart->mlock);
+ if (ret) {
+ dma_buf_put(dbuf);
+ return ret;
+ }
+
+ adbuf = aie_part_find_dmabuf(apart, dbuf);
+ dma_buf_put(dbuf);
+ if (!adbuf) {
+ dev_err(&apart->dev, "failed to find dmabuf %d.\n", dmabuf_fd);
+ mutex_unlock(&apart->mlock);
+ return -EINVAL;
+ }
+
+ aie_part_dmabuf_attach_put(adbuf);
+
+ mutex_unlock(&apart->mlock);
+
+ return 0;
+}
+
+/**
+ * aie_part_set_dmabuf_bd() - Set AI engine SHIM DMA dmabuf buffer descriptor
+ * @apart: AI engine partition
+ * @user_args: user AI engine dmabuf argument
+ *
+ * @return: 0 for success, negative value for failure
+ *
+ * This function sets the user-specified buffer descriptor contents into the
+ * SHIM DMA buffer descriptor. The address field of the buffer descriptor
+ * contained in @user_args is the offset to the start of the dmabuf.
+ */
+long aie_part_set_dmabuf_bd(struct aie_partition *apart,
+ void __user *user_args)
+{
+ struct aie_device *adev = apart->adev;
+ const struct aie_dma_attr *shim_dma = adev->shim_dma;
+ struct aie_dmabuf_bd_args args;
+ u32 *bd, *tmpbd, len, laddr, haddr, regval;
+ u64 off;
+ dma_addr_t addr;
+ int ret;
+
+ if (copy_from_user(&args, user_args, sizeof(args)))
+ return -EFAULT;
+
+ ret = aie_part_validate_bdloc(apart, args.loc, args.bd_id);
+ if (ret) {
+ dev_err(&apart->dev, "invalid SHIM DMA BD reg address.\n");
+ return -EINVAL;
+ }
+
+ bd = memdup_user(u64_to_user_ptr(args.bd), shim_dma->bd_len);
+ if (IS_ERR(bd))
+ return PTR_ERR(bd);
+
+ regval = bd[shim_dma->buflen.regoff / sizeof(u32)];
+ len = aie_get_reg_field(&shim_dma->buflen, regval);
+ if (!len) {
+ dev_err(&apart->dev, "no buf length from shim dma bd.\n");
+ kfree(bd);
+ return -EINVAL;
+ }
+
+ /* Get low 32bit address offset */
+ tmpbd = (u32 *)((char *)bd + shim_dma->laddr.regoff);
+ laddr = *tmpbd & shim_dma->laddr.mask;
+ /* Get high 32bit address offset */
+ tmpbd = (u32 *)((char *)bd + shim_dma->haddr.regoff);
+ haddr = *tmpbd & shim_dma->haddr.mask;
+ off = laddr | ((u64)haddr << 32);
+
+ ret = mutex_lock_interruptible(&apart->mlock);
+ if (ret) {
+ kfree(bd);
+ return ret;
+ }
+
+ /* Get device address from offset */
+ addr = aie_part_get_dmabuf_da_from_off(apart, args.buf_fd, off, len);
+ if (!addr) {
+ dev_err(&apart->dev, "invalid buffer 0x%llx, 0x%x.\n",
+ off, len);
+ mutex_unlock(&apart->mlock);
+ kfree(bd);
+ return -EINVAL;
+ }
+
+ /* Set low 32bit address */
+ laddr = lower_32_bits(addr);
+ tmpbd = (u32 *)((char *)bd + shim_dma->laddr.regoff);
+ *tmpbd &= ~shim_dma->laddr.mask;
+ *tmpbd |= aie_get_field_val(&shim_dma->laddr, laddr);
+
+ /* Set high 32bit address */
+ haddr = upper_32_bits(addr);
+ tmpbd = (u32 *)((char *)bd + shim_dma->haddr.regoff);
+ *tmpbd &= ~shim_dma->haddr.mask;
+ *tmpbd |= aie_get_field_val(&shim_dma->haddr, haddr);
+
+ ret = aie_part_set_shimdma_bd(apart, args.loc, args.bd_id, bd);
+ mutex_unlock(&apart->mlock);
+ if (ret)
+ dev_err(&apart->dev, "failed to set to shim dma bd.\n");
+
+ kfree(bd);
+ return ret;
+}
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-internal.h b/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
index e84610b..bf3a09c 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
@@ -90,6 +90,24 @@ struct aie_part_mem {
};
/**
+ * struct aie_dma_attr - AI engine DMA attributes structure
+ * @laddr: low address field attributes
+ * @haddr: high address field attributes
+ * @buflen: buffer length field attributes
+ * @bd_regoff: SHIM DMA buffer descriptors register offset
+ * @num_bds: number of buffer descriptors
+ * @bd_len: length of a buffer descriptor in bytes
+ */
+struct aie_dma_attr {
+ struct aie_single_reg_field laddr;
+ struct aie_single_reg_field haddr;
+ struct aie_single_reg_field buflen;
+ u32 bd_regoff;
+ u32 num_bds;
+ u32 bd_len;
+};
+
+/**
* struct aie_tile_operations - AI engine device operations
* @get_tile_type: get type of tile based on tile operation
* @get_mem_info: get different types of memories information
@@ -127,6 +145,7 @@ struct aie_resource {
* @ops: tile operations
* @col_rst: column reset attribute
* @col_clkbuf: column clock buffer attribute
+ * @shim_dma: SHIM DMA attribute
* @size: size of the AI engine address space
* @array_shift: array address shift
* @col_shift: column address shift
@@ -147,6 +166,7 @@ struct aie_device {
const struct aie_tile_operations *ops;
const struct aie_single_reg_field *col_rst;
const struct aie_single_reg_field *col_clkbuf;
+ const struct aie_dma_attr *shim_dma;
size_t size;
struct aie_resource cols_res;
u32 array_shift;
@@ -159,6 +179,7 @@ struct aie_device {
/**
* struct aie_partition - AI engine partition structure
* @node: list node
+ * @dbufs: dmabufs list
* @adev: pointer to AI device instance
* @pmems: pointer to partition memories types
* @range: range of partition
@@ -172,6 +193,7 @@ struct aie_device {
*/
struct aie_partition {
struct list_head node;
+ struct list_head dbufs;
struct aie_device *adev;
struct aie_part_mem *pmems;
struct aie_range range;
@@ -229,6 +251,20 @@ static inline u32 aie_get_field_val(const struct aie_single_reg_field *field,
}
/**
+ * aie_get_reg_field() - get the value of a field from a register value
+ * @field: a field in a register
+ * @regval: register value
+ * @return: value of a register field
+ */
+static inline u32 aie_get_reg_field(const struct aie_single_reg_field *field,
+ u32 regval)
+{
+ long long mask64 = (long long)field->mask & 0x00000000ffffffff;
+
+ return (regval & field->mask) >> __bf_shf(mask64);
+}
+
+/**
* aie_cal_regoff() - calculate register offset to the whole AI engine
* device start address
* @adev: AI engine device
@@ -286,5 +322,14 @@ int aie_part_clean(struct aie_partition *apart);
int aie_mem_get_info(struct aie_partition *apart, unsigned long arg);
+long aie_part_attach_dmabuf_req(struct aie_partition *apart,
+ void __user *user_args);
+long aie_part_detach_dmabuf_req(struct aie_partition *apart,
+ void __user *user_args);
+long aie_part_set_bd(struct aie_partition *apart, void __user *user_args);
+long aie_part_set_dmabuf_bd(struct aie_partition *apart,
+ void __user *user_args);
+void aie_part_release_dmabufs(struct aie_partition *apart);
+
int aie_device_init(struct aie_device *adev);
#endif /* AIE_INTERNAL_H */
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-part.c b/drivers/misc/xilinx-ai-engine/ai-engine-part.c
index 4be6d38..dcfb9ec 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-part.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-part.c
@@ -8,6 +8,7 @@
#include <linux/cdev.h>
#include <linux/delay.h>
#include <linux/device.h>
+#include <linux/dma-mapping.h>
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/list.h>
@@ -221,6 +222,7 @@ static int aie_part_release(struct inode *inode, struct file *filp)
if (ret)
return ret;
+ aie_part_release_dmabufs(apart);
aie_part_clean(apart);
apart->status = 0;
@@ -296,6 +298,12 @@ static long aie_part_ioctl(struct file *fp, unsigned int cmd, unsigned long arg)
}
case AIE_GET_MEM_IOCTL:
return aie_mem_get_info(apart, arg);
+ case AIE_ATTACH_DMABUF_IOCTL:
+ return aie_part_attach_dmabuf_req(apart, argp);
+ case AIE_DETACH_DMABUF_IOCTL:
+ return aie_part_detach_dmabuf_req(apart, argp);
+ case AIE_SET_SHIMDMA_DMABUF_BD_IOCTL:
+ return aie_part_set_dmabuf_bd(apart, argp);
default:
dev_err(&apart->dev, "Invalid ioctl command %u.\n", cmd);
ret = -EINVAL;
@@ -422,6 +430,7 @@ static struct aie_partition *aie_create_partition(struct aie_device *adev,
return ERR_PTR(-ENOMEM);
apart->adev = adev;
+ INIT_LIST_HEAD(&apart->dbufs);
memcpy(&apart->range, range, sizeof(*range));
mutex_init(&apart->mlock);
@@ -443,6 +452,10 @@ static struct aie_partition *aie_create_partition(struct aie_device *adev,
return ERR_PTR(ret);
}
+ /* Set up the DMA mask */
+ dev->coherent_dma_mask = DMA_BIT_MASK(48);
+ dev->dma_mask = &dev->coherent_dma_mask;
+
/*
* Create array to keep the information of the different types of tile
* memories information of the AI engine partition.
@@ -521,6 +534,10 @@ of_aie_part_probe(struct aie_device *adev, struct device_node *nc)
apart->dev.of_node = nc;
apart->partition_id = partition_id;
+ ret = of_dma_configure(&apart->dev, nc, true);
+ if (ret)
+ dev_warn(&apart->dev, "Failed to configure DMA.\n");
+
dev_info(&adev->dev,
"AI engine part(%u,%u),(%u,%u), id %u is probed successfully.\n",
range.start.col, range.start.row,
diff --git a/include/uapi/linux/xlnx-ai-engine.h b/include/uapi/linux/xlnx-ai-engine.h
index 9faeebe..75d9dbf 100644
--- a/include/uapi/linux/xlnx-ai-engine.h
+++ b/include/uapi/linux/xlnx-ai-engine.h
@@ -130,6 +130,22 @@ struct aie_partition_req {
__u32 flag;
};
+/**
+ * struct aie_dmabuf_bd_args - AIE dmabuf buffer descriptor information
+ * @bd: DMA buffer descriptor, within the buffer descriptor, the address field
+ * will be the offset to the start of the dmabuf. It is an array of __u32
+ * words.
+ * @buf_fd: DMA buffer handle, which is the dmabuf file descriptor
+ * @loc: Tile location relative to the start of a partition
+ * @bd_id: buffer descriptor id
+ */
+struct aie_dmabuf_bd_args {
+ __u64 bd;
+ struct aie_location loc;
+ int buf_fd;
+ __u32 bd_id;
+};
+
#define AIE_IOCTL_BASE 'A'
/* AI engine device IOCTL operations */
@@ -160,4 +176,32 @@ struct aie_partition_req {
*/
#define AIE_GET_MEM_IOCTL _IOWR(AIE_IOCTL_BASE, 0x9, \
struct aie_mem_args)
+/**
+ * DOC: AIE_ATTACH_DMABUF_IOCTL - attach a dmabuf to AI engine partition
+ *
+ * This ioctl is used to attach a dmabuf to the AI engine partition. The AI
+ * engine partition will return the number of scatter gather list elements of
+ * the dmabuf.
+ */
+#define AIE_ATTACH_DMABUF_IOCTL _IOR(AIE_IOCTL_BASE, 0xa, int)
+
+/**
+ * DOC: AIE_DETACH_DMABUF_IOCTL - detach a dmabuf from AI engine partition
+ *
+ * This ioctl is used to detach a dmabuf from the AI engine partition
+ */
+#define AIE_DETACH_DMABUF_IOCTL _IOR(AIE_IOCTL_BASE, 0xb, int)
+
+/**
+ * DOC: AIE_SET_SHIMDMA_DMABUF_BD_IOCTL - set buffer descriptor which contains
+ * dmabuf to SHIM DMA
+ *
+ * This ioctl is used to set the buffer descriptor to SHIM DMA. The
+ * aie_dmabuf_bd_args contains the dmabuf fd and the buffer descriptor contents.
+ * The address field in the buffer descriptor contents should be the offset to
+ * the start of the dmabuf.
+ */
+#define AIE_SET_SHIMDMA_DMABUF_BD_IOCTL _IOW(AIE_IOCTL_BASE, 0x10, \
+ struct aie_dmabuf_bd_args)
+
#endif
--
2.7.4
From: Izhar Ameer Shaikh <[email protected]>
Latching of AIE NPI interrupts is present in the Versal ES1 silicon
revision; however, it has been removed in the ES2 revision.
As a result, on ES1 a client needs to request the PMC to clear/ack the
interrupt in order to use it.
Provide an EEMI IOCTL to serve this purpose. Note that this is only
applicable to the ES1 revision. For ES2 and other non-silicon
platforms, this call will essentially be a NOP in the firmware.
Signed-off-by: Izhar Ameer Shaikh <[email protected]>
Signed-off-by: Wendy Liang <[email protected]>
---
drivers/firmware/xilinx/zynqmp.c | 14 ++++++++++++++
include/linux/firmware/xlnx-zynqmp.h | 8 ++++++++
2 files changed, 22 insertions(+)
diff --git a/drivers/firmware/xilinx/zynqmp.c b/drivers/firmware/xilinx/zynqmp.c
index d08ac82..23e58cc 100644
--- a/drivers/firmware/xilinx/zynqmp.c
+++ b/drivers/firmware/xilinx/zynqmp.c
@@ -729,6 +729,20 @@ int zynqmp_pm_set_boot_health_status(u32 value)
}
/**
+ * zynqmp_pm_clear_aie_npi_isr - Clear AI engine NPI interrupt status register
+ * @node: AI engine node id
+ * @irq_mask: Mask of AI engine NPI interrupt bit to clear
+ *
+ * Return: Returns status, either success or error+reason
+ */
+int zynqmp_pm_clear_aie_npi_isr(u32 node, u32 irq_mask)
+{
+ return zynqmp_pm_invoke_fn(PM_IOCTL, node, IOCTL_AIE_ISR_CLEAR,
+ irq_mask, 0, NULL);
+}
+EXPORT_SYMBOL_GPL(zynqmp_pm_clear_aie_npi_isr);
+
+/**
* zynqmp_pm_reset_assert - Request setting of reset (1 - assert, 0 - release)
* @reset: Reset to be configured
* @assert_flag: Flag stating should reset be asserted (1) or
diff --git a/include/linux/firmware/xlnx-zynqmp.h b/include/linux/firmware/xlnx-zynqmp.h
index 83ac9ec..defa4ea 100644
--- a/include/linux/firmware/xlnx-zynqmp.h
+++ b/include/linux/firmware/xlnx-zynqmp.h
@@ -114,6 +114,8 @@ enum pm_ioctl_id {
IOCTL_READ_PGGS = 15,
/* Set healthy bit value */
IOCTL_SET_BOOT_HEALTH_STATUS = 17,
+ /* AI engine NPI ISR clear */
+ IOCTL_AIE_ISR_CLEAR = 24,
};
enum pm_query_id {
@@ -355,6 +357,7 @@ int zynqmp_pm_write_pggs(u32 index, u32 value);
int zynqmp_pm_read_pggs(u32 index, u32 *value);
int zynqmp_pm_system_shutdown(const u32 type, const u32 subtype);
int zynqmp_pm_set_boot_health_status(u32 value);
+int zynqmp_pm_clear_aie_npi_isr(u32 node, u32 irq_mask);
#else
static inline struct zynqmp_eemi_ops *zynqmp_pm_get_eemi_ops(void)
{
@@ -505,6 +508,11 @@ static inline int zynqmp_pm_set_boot_health_status(u32 value)
{
return -ENODEV;
}
+
+static inline int zynqmp_pm_clear_aie_npi_isr(u32 node, u32 irq_mask)
+{
+ return -ENODEV;
+}
#endif
#endif /* __FIRMWARE_ZYNQMP_H__ */
--
2.7.4
There is no concern with letting userspace directly access the AI
engine program and data memories, and it is much faster to copy
data to and from these memories directly from userspace.
We choose DMA buf for the data and program memories because of its
features: DMA buf can share DMA memory between applications and
different devices, which will help with sharing data with the AI
engine device in the future.
There is one DMA buf per type of memory in an AI engine partition. e.g.
there is one DMA buf for all the tile core program memories in an AI
engine partition, and another DMA buf for all the tile data
memories in the partition.
Signed-off-by: Wendy Liang <[email protected]>
Reviewed-by: Hyun Kwon <[email protected]>
---
drivers/misc/xilinx-ai-engine/Makefile | 1 +
drivers/misc/xilinx-ai-engine/ai-engine-aie.c | 36 +++
drivers/misc/xilinx-ai-engine/ai-engine-internal.h | 30 +++
drivers/misc/xilinx-ai-engine/ai-engine-mem.c | 275 +++++++++++++++++++++
drivers/misc/xilinx-ai-engine/ai-engine-part.c | 47 ++++
drivers/misc/xilinx-ai-engine/ai-engine-reset.c | 38 +++
include/uapi/linux/xlnx-ai-engine.h | 50 ++++
7 files changed, 477 insertions(+)
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-mem.c
diff --git a/drivers/misc/xilinx-ai-engine/Makefile b/drivers/misc/xilinx-ai-engine/Makefile
index 39bec61..2dbed42 100644
--- a/drivers/misc/xilinx-ai-engine/Makefile
+++ b/drivers/misc/xilinx-ai-engine/Makefile
@@ -7,6 +7,7 @@ obj-$(CONFIG_XILINX_AIE) += xilinx-aie.o
xilinx-aie-$(CONFIG_XILINX_AIE) := ai-engine-aie.o \
ai-engine-dev.o \
+ ai-engine-mem.o \
ai-engine-part.o \
ai-engine-res.o \
ai-engine-reset.o
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-aie.c b/drivers/misc/xilinx-ai-engine/ai-engine-aie.c
index 36127f0..7fce2f00 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-aie.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-aie.c
@@ -12,10 +12,14 @@
#include "ai-engine-internal.h"
+#define KBYTES(n) ((n) * 1024)
+
#define AIE_ARRAY_SHIFT 30U
#define AIE_COL_SHIFT 23U
#define AIE_ROW_SHIFT 18U
+#define NUM_MEMS_PER_TILE 2U
+
/*
* Registers offsets
*/
@@ -114,6 +118,37 @@ static u32 aie_get_tile_type(struct aie_location *loc)
return AIE_TILE_TYPE_SHIMNOC;
}
+static unsigned int aie_get_mem_info(struct aie_range *range,
+ struct aie_part_mem *pmem)
+{
+ unsigned int i;
+
+ if (range->start.row + range->size.row <= 1) {
+ /* SHIM row only, no memories in this range */
+ return 0;
+ }
+ if (!pmem)
+ return NUM_MEMS_PER_TILE;
+
+ for (i = 0; i < NUM_MEMS_PER_TILE; i++) {
+ struct aie_mem *mem = &pmem[i].mem;
+
+ memcpy(&mem->range, range, sizeof(*range));
+ if (!mem->range.start.row) {
+ mem->range.start.row = 1;
+ mem->range.size.row--;
+ }
+ }
+ /* Setup tile data memory information */
+ pmem[0].mem.offset = 0;
+ pmem[0].mem.size = KBYTES(32);
+ /* Setup program memory information */
+ pmem[1].mem.offset = 0x20000;
+ pmem[1].mem.size = KBYTES(16);
+
+ return NUM_MEMS_PER_TILE;
+}
+
/**
* aie_set_shim_reset() - Set AI engine SHIM reset
* @adev: AI engine device
@@ -170,6 +205,7 @@ static int aie_reset_shim(struct aie_device *adev, struct aie_range *range)
static const struct aie_tile_operations aie_ops = {
.get_tile_type = aie_get_tile_type,
+ .get_mem_info = aie_get_mem_info,
.reset_shim = aie_reset_shim,
};
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-internal.h b/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
index 2acd34f..e84610b 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
@@ -12,6 +12,8 @@
#include <linux/bits.h>
#include <linux/cdev.h>
#include <linux/device.h>
+#include <linux/dma-buf.h>
+#include <linux/file.h>
#include <linux/io.h>
#include <linux/list.h>
#include <linux/mutex.h>
@@ -67,8 +69,30 @@ struct aie_device;
struct aie_partition;
/**
+ * struct aie_part_mem - AI engine partition memory information structure
+ * @apart: AI engine partition
+ * @dbuf: dmabuf pointer associated with the memory
+ * @mem: memory information of a type of memory
+ * @size: size of the total memories in the partition
+ *
+ * This structure keeps the information of one type of memory in a
+ * partition. The memory information is stored in the @mem member.
+ * The following information is kept:
+ * * memory start address offset within a tile
+ * * memory size
+ * * what tiles contain this type of memory
+ */
+struct aie_part_mem {
+ struct aie_partition *apart;
+ struct dma_buf *dbuf;
+ struct aie_mem mem;
+ size_t size;
+};
+
+/**
* struct aie_tile_operations - AI engine device operations
* @get_tile_type: get type of tile based on tile operation
+ * @get_mem_info: get the information of the different memory types
* @reset_shim: reset shim, it will assert and then release SHIM reset
*
* Different AI engine device version has its own device
@@ -76,6 +100,8 @@ struct aie_partition;
*/
struct aie_tile_operations {
u32 (*get_tile_type)(struct aie_location *loc);
+ unsigned int (*get_mem_info)(struct aie_range *range,
+ struct aie_part_mem *pmem);
int (*reset_shim)(struct aie_device *adev, struct aie_range *range);
};
@@ -134,6 +160,7 @@ struct aie_device {
* struct aie_partition - AI engine partition structure
* @node: list node
* @adev: pointer to AI device instance
+ * @pmems: pointer to partition memories types
* @range: range of partition
* @mlock: protection for AI engine partition operations
* @dev: device for the AI engine partition
@@ -146,6 +173,7 @@ struct aie_device {
struct aie_partition {
struct list_head node;
struct aie_device *adev;
+ struct aie_part_mem *pmems;
struct aie_range range;
struct mutex mlock; /* protection for AI engine partition operations */
struct device dev;
@@ -256,5 +284,7 @@ struct aie_partition *of_aie_part_probe(struct aie_device *adev,
void aie_part_remove(struct aie_partition *apart);
int aie_part_clean(struct aie_partition *apart);
+int aie_mem_get_info(struct aie_partition *apart, unsigned long arg);
+
int aie_device_init(struct aie_device *adev);
#endif /* AIE_INTERNAL_H */
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-mem.c b/drivers/misc/xilinx-ai-engine/ai-engine-mem.c
new file mode 100644
index 0000000..4b2697e
--- /dev/null
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-mem.c
@@ -0,0 +1,275 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx AI Engine device memory implementation
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ */
+
+#include <linux/dma-buf.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/uaccess.h>
+#include <uapi/linux/xlnx-ai-engine.h>
+
+#include "ai-engine-internal.h"
+
+#define aie_cal_reg_goffset(adev, loc, regoff) ({ \
+ struct aie_device *_adev = (adev); \
+ struct aie_location *_loc = &(loc); \
+ (_loc->col << _adev->col_shift) + \
+ (_loc->row << _adev->row_shift) + (regoff); \
+ })
+
+#define aie_cal_reg_pa(adev, loc, regoff) ({ \
+ struct aie_device *__adev = (adev); \
+ __adev->res->start + aie_cal_reg_goffset(__adev, loc, regoff); \
+ })
+
+static struct sg_table *
+aie_mem_map_dma_buf(struct dma_buf_attachment *attachment,
+ enum dma_data_direction direction)
+{
+ /*
+ * TODO: This callback is mandatory in dma_buf_ops. It is used to
+ * return the scatterlist table of an attachment. There is no
+ * implementation for now, thus it is left empty.
+ */
+ (void)attachment;
+ (void)direction;
+ dev_warn(attachment->dev,
+ "AI engine memory map dma buf is not implemented.\n");
+ return NULL;
+}
+
+static void aie_mem_unmap_dma_buf(struct dma_buf_attachment *attachment,
+ struct sg_table *table,
+ enum dma_data_direction direction)
+{
+ /*
+ * TODO: This callback is mandatory in dma_buf_ops. It is used to
+ * deallocate the scatterlist table of an attachment. There is no
+ * implementation for now, thus it is left empty.
+ */
+ (void)attachment;
+ (void)table;
+ (void)direction;
+ dev_warn(attachment->dev,
+ "AI engine memory unmap dma buf is not implemented.\n");
+}
+
+static int aie_mem_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+ struct aie_part_mem *pmem = dmabuf->priv;
+ struct aie_mem *mem = &pmem->mem;
+ struct aie_partition *apart = pmem->apart;
+ struct aie_location loc;
+ unsigned long addr = vma->vm_start;
+ unsigned long offset = vma->vm_pgoff * PAGE_SIZE, moffset = 0;
+ unsigned long remainder = vma->vm_end - addr;
+ size_t msize = mem->size;
+
+ if (remainder + offset > pmem->size)
+ return -EINVAL;
+
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ for (loc.col = mem->range.start.col;
+ loc.col < mem->range.start.col + mem->range.size.col; loc.col++) {
+ for (loc.row = mem->range.start.row;
+ loc.row < mem->range.start.row + mem->range.size.row;
+ loc.row++) {
+ unsigned long toffset, len;
+ phys_addr_t mempa;
+ int ret;
+
+ remainder = vma->vm_end - addr;
+ if (!remainder)
+ return 0;
+
+ if (moffset + msize < offset) {
+ moffset += msize;
+ continue;
+ }
+ /*
+ * calculate offset within the tile memory.
+ * offset is the offset to vma->start.
+ * moffset is the tile memory start offset to
+ * vma->start.
+ */
+ toffset = offset - moffset;
+ len = msize - toffset;
+ if (len > remainder)
+ len = remainder;
+ mempa = aie_cal_reg_pa(apart->adev, loc,
+ toffset + mem->offset);
+
+ ret = remap_pfn_range(vma, addr, mempa >> PAGE_SHIFT,
+ len, vma->vm_page_prot);
+ if (ret) {
+ dev_err(&apart->dev,
+ "failed to mmap (%u,%u)memory, remap failed, 0x%pa, 0x%lx.\n",
+ loc.col, loc.row, &mempa, len);
+ return ret;
+ }
+ addr += len;
+ offset += len;
+ moffset += msize;
+ }
+ }
+ return 0;
+}
+
+static void aie_mem_dmabuf_release(struct dma_buf *dmabuf)
+{
+ struct aie_part_mem *pmem = dmabuf->priv;
+
+ pmem->dbuf = NULL;
+}
+
+static const struct dma_buf_ops aie_mem_dma_buf_ops = {
+ .map_dma_buf = aie_mem_map_dma_buf,
+ .unmap_dma_buf = aie_mem_unmap_dma_buf,
+ .mmap = aie_mem_mmap,
+ .release = aie_mem_dmabuf_release,
+};
+
+/**
+ * aie_mem_create_dmabuf() - creates DMA buffer for AI engine partition
+ * memories
+ * @apart: AI engine partition
+ * @pmem: pointer to the partition memory information
+ * @mem: pointer to where to store the memory information and the DMA
+ * buf file descriptor for the user.
+ * @return: 0 for success, negative value for failure
+ *
+ * This function will create DMA buffer for the AI engine partition memory
+ * and will store the DMA buffer file descriptor and memory information in
+ * @mem.
+ */
+static int aie_mem_create_dmabuf(struct aie_partition *apart,
+ struct aie_part_mem *pmem,
+ struct aie_mem *mem)
+{
+ struct dma_buf *dmabuf;
+ int ret;
+
+ if (!PAGE_ALIGNED(pmem->mem.size)) {
+ dev_warn(&apart->dev,
+ "no dmabuf for mem(0x%zx, 0x%zx), not aligned with page size.\n",
+ pmem->mem.offset, pmem->mem.size);
+ return -EINVAL;
+ }
+
+ dmabuf = pmem->dbuf;
+ if (!dmabuf) {
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+
+ exp_info.ops = &aie_mem_dma_buf_ops;
+ exp_info.size = pmem->size;
+ exp_info.flags = O_RDWR;
+ exp_info.priv = pmem;
+
+ dmabuf = dma_buf_export(&exp_info);
+ if (IS_ERR(dmabuf))
+ return PTR_ERR(dmabuf);
+
+ pmem->dbuf = dmabuf;
+ }
+
+ ret = dma_buf_fd(dmabuf, O_CLOEXEC);
+ if (ret < 0) {
+ dev_err(&apart->dev,
+ "dmabuf creation failed, failed to get fd.\n");
+ return ret;
+ }
+ memcpy(mem, &pmem->mem, sizeof(*mem));
+ mem->fd = ret;
+
+ return 0;
+}
+
+/**
+ * aie_mem_get_info() - get AI engine memories information
+ * @apart: AI engine partition
+ * @arg: argument from user to enquire AI engine partition memory information
+ * @return: 0 for success, and negative value for failure
+ *
+ * This function will get the memories information for the specified AI engine
+ * partition. It will create DMA buf file descriptors for the memories and
+ * return the DMA buf file descriptors to users.
+ * It will create a DMA buffer per type of memories.
+ * e.g. There will be a DMA buffer for all the tile program memories in the
+ * partition, and another DMA buffer for all the tile data memories in the
+ * partition.
+ * The user can first pass num_mems as 0 in @arg to enquire how many types of
+ * memories there are in this AI engine partition. The user can then allocate
+ * memory to keep the information for the different types of memories, and
+ * issue the same enquiry with a non-zero num_mems and a non-NULL pointer to
+ * ask for the detailed information of all the types of memories in the AI
+ * engine partition.
+ */
+int aie_mem_get_info(struct aie_partition *apart, unsigned long arg)
+{
+ struct aie_mem_args margs;
+ struct aie_mem *mems;
+ unsigned int num_mems, i;
+ int ret;
+
+ if (copy_from_user(&margs, (void __user *)arg, sizeof(margs)))
+ return -EFAULT;
+
+ num_mems = apart->adev->ops->get_mem_info(&apart->range, NULL);
+ if (!num_mems)
+ return -EINVAL;
+
+ if (!margs.num_mems) {
+ struct aie_mem_args __user *umargs_ptr = (void __user *)arg;
+
+ /* This enquiry is to get the number of types of memories. */
+ if (copy_to_user((void __user *)&umargs_ptr->num_mems,
+ &num_mems, sizeof(num_mems)))
+ return -EFAULT;
+ return 0;
+ }
+
+ if (num_mems != margs.num_mems) {
+ dev_err(&apart->dev,
+ "failed to get mem info, invalid num of mems %u,%u.\n",
+ num_mems, margs.num_mems);
+ return -EINVAL;
+ }
+ if (!margs.mems) {
+ dev_err(&apart->dev,
+ "failed to get mem info, mems pointer is NULL.\n");
+ return -EINVAL;
+ }
+
+ mems = kcalloc(num_mems, sizeof(*mems), GFP_KERNEL);
+ if (!mems)
+ return -ENOMEM;
+
+ /*
+ * Create DMA buffer for the memories.
+ * Each type of memory in the partition has its own DMA buf.
+ */
+ for (i = 0; i < num_mems; i++) {
+ ret = aie_mem_create_dmabuf(apart, &apart->pmems[i], &mems[i]);
+ if (ret)
+ break;
+ }
+ if (!ret) {
+ if (copy_to_user(u64_to_user_ptr(margs.mems), mems,
+ num_mems * sizeof(mems[0])))
+ ret = -EFAULT;
+ }
+
+ if (ret) {
+ for (i = 0; i < num_mems; i++) {
+ if (mems[i].fd)
+ put_unused_fd(mems[i].fd);
+ }
+ }
+
+ kfree(mems);
+ return ret;
+}
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-part.c b/drivers/misc/xilinx-ai-engine/ai-engine-part.c
index 98f125b..4be6d38 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-part.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-part.c
@@ -294,6 +294,8 @@ static long aie_part_ioctl(struct file *fp, unsigned int cmd, unsigned long arg)
mutex_unlock(&apart->mlock);
break;
}
+ case AIE_GET_MEM_IOCTL:
+ return aie_mem_get_info(apart, arg);
default:
dev_err(&apart->dev, "Invalid ioctl command %u.\n", cmd);
ret = -EINVAL;
@@ -337,6 +339,41 @@ static void aie_part_release_device(struct device *dev)
}
/**
+ * aie_part_create_mems_info() - create an array to store the AI engine
+ * partition memory types information
+ * @apart: AI engine partition
+ * @return: 0 for success, negative value for failure
+ *
+ * This function creates an array to store the information of the different
+ * memory types in the partition. The array is stored in @apart->pmems.
+ */
+static int aie_part_create_mems_info(struct aie_partition *apart)
+{
+ unsigned int i, num_mems;
+
+ num_mems = apart->adev->ops->get_mem_info(&apart->range, NULL);
+ if (!num_mems)
+ return 0;
+
+ apart->pmems = devm_kcalloc(&apart->dev, num_mems,
+ sizeof(struct aie_part_mem),
+ GFP_KERNEL);
+ if (!apart->pmems)
+ return -ENOMEM;
+
+ apart->adev->ops->get_mem_info(&apart->range, apart->pmems);
+ for (i = 0; i < num_mems; i++) {
+ struct aie_mem *mem = &apart->pmems[i].mem;
+
+ apart->pmems[i].apart = apart;
+ apart->pmems[i].size = mem->size *
+ mem->range.size.col *
+ mem->range.size.row;
+ }
+ return 0;
+}
+
+/**
* aie_create_partition() - create AI engine partition instance
* @adev: AI engine device
* @range: AI engine partition range to check. A range describes a group
@@ -406,6 +443,16 @@ static struct aie_partition *aie_create_partition(struct aie_device *adev,
return ERR_PTR(ret);
}
+ /*
+ * Create an array to keep the information of the different types of
+ * tile memories of the AI engine partition.
+ */
+ ret = aie_part_create_mems_info(apart);
+ if (ret) {
+ put_device(dev);
+ return ERR_PTR(ret);
+ }
+
ret = mutex_lock_interruptible(&adev->mlock);
if (ret) {
put_device(dev);
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-reset.c b/drivers/misc/xilinx-ai-engine/ai-engine-reset.c
index fc0262f7..d35cd8d 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-reset.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-reset.c
@@ -86,6 +86,43 @@ static void aie_part_set_cols_clkbuf(struct aie_partition *apart, bool enable)
}
/**
+ * aie_part_clear_mems() - clear memories of every tile in a partition
+ * @apart: AI engine partition
+ */
+static void aie_part_clear_mems(struct aie_partition *apart)
+{
+ struct aie_device *adev = apart->adev;
+ struct aie_part_mem *pmems = apart->pmems;
+ u32 i, num_mems;
+
+ /* Get the number of different types of memories */
+ num_mems = adev->ops->get_mem_info(&apart->range, NULL);
+ if (!num_mems)
+ return;
+
+ /* Clear each type of memories in the partition */
+ for (i = 0; i < num_mems; i++) {
+ struct aie_mem *mem = &pmems[i].mem;
+ struct aie_range *range = &mem->range;
+ u32 c, r;
+
+ for (c = range->start.col;
+ c < range->start.col + range->size.col; c++) {
+ for (r = range->start.row;
+ r < range->start.row + range->size.row; r++) {
+ struct aie_location loc;
+ u32 memoff;
+
+ loc.col = c;
+ loc.row = r;
+ memoff = aie_cal_regoff(adev, loc, mem->offset);
+ memset_io(adev->base + memoff, 0, mem->size);
+ }
+ }
+ }
+}
+
+/**
* aie_part_clean() - reset and clear AI engine partition
* @apart: AI engine partition
* @return: 0 for success and negative value for failure
@@ -115,6 +152,7 @@ int aie_part_clean(struct aie_partition *apart)
if (ret < 0)
return ret;
+ aie_part_clear_mems(apart);
aie_part_set_cols_clkbuf(apart, false);
return 0;
diff --git a/include/uapi/linux/xlnx-ai-engine.h b/include/uapi/linux/xlnx-ai-engine.h
index 0f46151..9faeebe 100644
--- a/include/uapi/linux/xlnx-ai-engine.h
+++ b/include/uapi/linux/xlnx-ai-engine.h
@@ -6,6 +6,10 @@
#ifndef _UAPI_AI_ENGINE_H_
#define _UAPI_AI_ENGINE_H_
+#ifndef __KERNEL__
+#include <stdlib.h>
+#endif
+
#include <linux/ioctl.h>
#include <linux/types.h>
@@ -42,6 +46,33 @@ struct aie_range {
};
/**
+ * struct aie_mem - AIE memory information
+ * @range: range of tiles of the memory
+ * @offset: register offset within a tile of the memory
+ * @size: size of the memory in one tile
+ * @fd: file descriptor of the memory
+ */
+struct aie_mem {
+ struct aie_range range;
+ size_t offset;
+ size_t size;
+ int fd;
+};
+
+/**
+ * struct aie_mem_args - AIE memory enquiry arguments
+ * @num_mems: number of "struct aie_mem" elements
+ * e.g. two memory information elements, one for tile core memory,
+ * and the other for tile data memory.
+ * @mems: array of struct aie_mem which are the AI engine memories
+ * information.
+ */
+struct aie_mem_args {
+ unsigned int num_mems;
+ __u64 mems;
+};
+
+/**
* struct aie_reg_args - AIE access register arguments
* @op: if this request is to read, write or poll register
* @mask: mask for mask write, 0 for not mask write
@@ -110,4 +141,23 @@ struct aie_partition_req {
/* AI engine partition IOCTL operations */
#define AIE_REG_IOCTL _IOWR(AIE_IOCTL_BASE, 0x8, \
struct aie_reg_args)
+/**
+ * DOC: AIE_GET_MEM_IOCTL - enquire information of memories in the AI engine
+ * partition
+ * This ioctl is used to get the information of all the different types of
+ * memories in the AI engine partition. Application can get the memories
+ * information in two steps:
+ * 1. passing 0 as @num_mems in struct aie_mem_args to enquire the number of
+ * different memories in the partition, the value will be returned in
+ * @num_mems.
+ * 2. passing the number of memories in @num_mems and valid pointer as @mems of
+ * struct aie_mem_args to store the detailed information of the different
+ * memories. The driver will create DMA buf for each type of memories, and
+ * will return the memory addressing information along with the DMA buf file
+ * descriptors in @mems.
+ * After getting the memories information, user can use mmap() with the DMA buf
+ * file descriptor to access the memories from userspace.
+ */
+#define AIE_GET_MEM_IOCTL _IOWR(AIE_IOCTL_BASE, 0x9, \
+ struct aie_mem_args)
#endif
--
2.7.4
From: Nishad Saraf <[email protected]>
AI engine error events can be routed to generate an interrupt. The
error event routing is done during AI engine configuration.
At runtime, the Linux kernel AI engine driver monitors the interrupt
and backtracks error events.
As error events from 400 AIE tiles and 50 shim tiles are channeled on
a single interrupt line, backtracking the source of the interrupt to an
AIE module is required. To keep the top-half interrupt handler short,
backtracking is deferred to the bottom half by scheduling a task in the
shared workqueue.
Signed-off-by: Nishad Saraf <[email protected]>
Signed-off-by: Wendy Liang <[email protected]>
---
drivers/misc/xilinx-ai-engine/Makefile | 1 +
drivers/misc/xilinx-ai-engine/ai-engine-aie.c | 121 ++++
drivers/misc/xilinx-ai-engine/ai-engine-dev.c | 14 +
drivers/misc/xilinx-ai-engine/ai-engine-internal.h | 144 +++++
.../misc/xilinx-ai-engine/ai-engine-interrupt.c | 659 +++++++++++++++++++++
drivers/misc/xilinx-ai-engine/ai-engine-part.c | 44 ++
drivers/misc/xilinx-ai-engine/ai-engine-res.c | 54 ++
7 files changed, 1037 insertions(+)
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-interrupt.c
diff --git a/drivers/misc/xilinx-ai-engine/Makefile b/drivers/misc/xilinx-ai-engine/Makefile
index 2e67b25..9607ecb 100644
--- a/drivers/misc/xilinx-ai-engine/Makefile
+++ b/drivers/misc/xilinx-ai-engine/Makefile
@@ -9,6 +9,7 @@ xilinx-aie-$(CONFIG_XILINX_AIE) := ai-engine-aie.o \
ai-engine-clock.o \
ai-engine-dev.o \
ai-engine-dma.o \
+ ai-engine-interrupt.o \
ai-engine-mem.o \
ai-engine-part.o \
ai-engine-res.o \
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-aie.c b/drivers/misc/xilinx-ai-engine/ai-engine-aie.c
index ff721b3..af0f997 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-aie.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-aie.c
@@ -33,7 +33,10 @@
#define AIE_SHIMPL_CLKCNTR_REGOFF 0x00036040U
#define AIE_SHIMPL_COLRESET_REGOFF 0x00036048U
#define AIE_SHIMPL_RESET_REGOFF 0x0003604cU
+#define AIE_SHIMPL_GROUP_ERROR_REGOFF 0x0003450cU
#define AIE_TILE_CORE_CLKCNTR_REGOFF 0x00036040U
+#define AIE_TILE_CORE_GROUP_ERROR_REGOFF 0x00034510U
+#define AIE_TILE_MEM_GROUP_ERROR_REGOFF 0x00014514U
/*
* Register masks
@@ -93,11 +96,27 @@ static const struct aie_tile_regs aie_kernel_regs[] = {
.soff = AIE_SHIMPL_CLKCNTR_REGOFF,
.eoff = AIE_SHIMPL_CLKCNTR_REGOFF,
},
+ /* SHIM group error enable */
+ {.attribute = (AIE_TILE_TYPE_SHIMPL | AIE_TILE_TYPE_SHIMNOC) <<
+ AIE_REGS_ATTR_TILE_TYPE_SHIFT,
+ .soff = AIE_SHIMPL_GROUP_ERROR_REGOFF,
+ .eoff = AIE_SHIMPL_GROUP_ERROR_REGOFF,
+ },
/* Tile clock control */
{.attribute = AIE_TILE_TYPE_TILE << AIE_REGS_ATTR_TILE_TYPE_SHIFT,
.soff = AIE_TILE_CORE_CLKCNTR_REGOFF,
.eoff = AIE_TILE_CORE_CLKCNTR_REGOFF,
},
+ /* Tile group error for core module */
+ {.attribute = AIE_TILE_TYPE_TILE << AIE_REGS_ATTR_TILE_TYPE_SHIFT,
+ .soff = AIE_TILE_CORE_GROUP_ERROR_REGOFF,
+ .eoff = AIE_TILE_CORE_GROUP_ERROR_REGOFF,
+ },
+ /* Tile group error for memory module */
+ {.attribute = AIE_TILE_TYPE_TILE << AIE_REGS_ATTR_TILE_TYPE_SHIFT,
+ .soff = AIE_TILE_MEM_GROUP_ERROR_REGOFF,
+ .eoff = AIE_TILE_MEM_GROUP_ERROR_REGOFF,
+ },
};
static const struct aie_single_reg_field aie_col_rst = {
@@ -128,6 +147,103 @@ static const struct aie_dma_attr aie_shimdma = {
.bd_len = 0x14U,
};
+static const struct aie_event_attr aie_pl_event = {
+ .bc_event = {
+ .mask = GENMASK(6, 0),
+ .regoff = 0x0U,
+ },
+ .group_error = {
+ .mask = GENMASK(10, 0),
+ .regoff = 0xcU,
+ },
+ .bc_regoff = 0x34010U,
+ .status_regoff = 0x34200U,
+ .group_regoff = 0x34500U,
+ .base_error_event = 62U,
+ .num_broadcasts = 16U,
+ .base_bc_event = 107U,
+ .num_events = 128U,
+};
+
+static const struct aie_event_attr aie_mem_event = {
+ .bc_event = {
+ .mask = GENMASK(6, 0),
+ .regoff = 0x0U,
+ },
+ .group_error = {
+ .mask = GENMASK(13, 0),
+ .regoff = 0x14U,
+ },
+ .bc_regoff = 0x14010U,
+ .status_regoff = 0x14200U,
+ .group_regoff = 0x14500U,
+ .base_error_event = 87U,
+ .num_broadcasts = 16U,
+ .base_bc_event = 107U,
+ .num_events = 128U,
+};
+
+static const struct aie_event_attr aie_core_event = {
+ .bc_event = {
+ .mask = GENMASK(6, 0),
+ .regoff = 0x0U,
+ },
+ .group_error = {
+ .mask = GENMASK(21, 0),
+ .regoff = 0x10U,
+ },
+ .bc_regoff = 0x34010U,
+ .status_regoff = 0x34200U,
+ .group_regoff = 0x34500U,
+ .base_error_event = 48U,
+ .num_broadcasts = 16U,
+ .base_bc_event = 107U,
+ .num_events = 128U,
+};
+
+static const struct aie_l1_intr_ctrl_attr aie_l1_intr_ctrl = {
+ .swa_status = {
+ .mask = GENMASK(19, 0),
+ .regoff = 0xcU,
+ },
+ .swb_status = {
+ .mask = GENMASK(19, 0),
+ .regoff = 0x3cU,
+ },
+ .swa_event = {
+ .mask = GENMASK(6, 0),
+ .regoff = 0x14U,
+ },
+ .swb_event = {
+ .mask = GENMASK(6, 0),
+ .regoff = 0x44U,
+ },
+ .regoff = 0x35000U,
+ .event_lsb = 8,
+ .num_broadcasts = 0x14U,
+};
+
+static const struct aie_l2_intr_ctrl_attr aie_l2_intr_ctrl = {
+ .mask = {
+ .mask = GENMASK(15, 0),
+ .regoff = 0x0U,
+ },
+ .enable = {
+ .mask = GENMASK(15, 0),
+ .regoff = 0x4U,
+ },
+ .disable = {
+ .mask = GENMASK(15, 0),
+ .regoff = 0x8U,
+ },
+ .status = {
+ .mask = GENMASK(15, 0),
+ .regoff = 0xcU,
+ },
+ .regoff = 0x15000U,
+ .num_broadcasts = 0x10U,
+};
+
static u32 aie_get_tile_type(struct aie_location *loc)
{
if (loc->row)
@@ -476,6 +592,11 @@ int aie_device_init(struct aie_device *adev)
adev->col_rst = &aie_col_rst;
adev->col_clkbuf = &aie_col_clkbuf;
adev->shim_dma = &aie_shimdma;
+ adev->pl_events = &aie_pl_event;
+ adev->mem_events = &aie_mem_event;
+ adev->core_events = &aie_core_event;
+ adev->l1_ctrl = &aie_l1_intr_ctrl;
+ adev->l2_ctrl = &aie_l2_intr_ctrl;
/* Get the columns resource */
/* Get number of columns from AI engine memory resource */
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-dev.c b/drivers/misc/xilinx-ai-engine/ai-engine-dev.c
index 51c3a4f..240ff6c 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-dev.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-dev.c
@@ -14,6 +14,7 @@
#include <linux/firmware/xlnx-zynqmp.h>
#include <linux/fs.h>
#include <linux/idr.h>
+#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/module.h>
@@ -406,6 +407,19 @@ static int xilinx_ai_engine_probe(struct platform_device *pdev)
of_xilinx_ai_engine_part_probe(adev);
dev_info(&pdev->dev, "Xilinx AI Engine device(cols=%u) probed\n",
adev->cols_res.total);
+
+ INIT_WORK(&adev->backtrack, aie_array_backtrack);
+
+ adev->irq = platform_get_irq_byname(pdev, "interrupt1");
+ if (adev->irq < 0)
+ goto free_ida;
+
+ ret = devm_request_threaded_irq(dev, adev->irq, NULL, aie_interrupt,
+ IRQF_ONESHOT, dev_name(dev), adev);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to request AIE IRQ.\n");
+ goto free_ida;
+ }
return 0;
free_ida:
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-internal.h b/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
index b21b7025..3b680f7 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
@@ -15,6 +15,7 @@
#include <linux/dma-buf.h>
#include <linux/file.h>
#include <linux/io.h>
+#include <linux/interrupt.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/of.h>
@@ -45,6 +46,55 @@
#define VERSAL_ES1_REV_ID 0x0
#define VERSAL_ES2_REV_ID 0x1
+#define AIE_NPI_ERROR_ID BIT(1)
+
+/* Macros relevant to interrupts */
+#define AIE_INTR_L2_CTRL_MASK_WIDTH 32
+
+/**
+ * enum aie_module_type - identifies different hardware modules within a
+ * tile type. An AIE tile may have memory and core
+ * modules, while a PL or shim tile may have a PL module.
+ * @AIE_MEM_MOD: comprises the following sub-modules,
+ * * data memory.
+ * * tile DMA.
+ * * lock module.
+ * * events, event broadcast and event actions.
+ * * tracing and profiling.
+ * @AIE_CORE_MOD: comprises the following sub-modules,
+ * * AIE core.
+ * * program Memory.
+ * * events, event broadcast and event actions.
+ * * tracing and profiling.
+ * * AXI-MM and AXI-S tile interconnects.
+ * @AIE_PL_MOD: comprises the following sub-modules,
+ * * PL interface.
+ * * AXI-MM and AXI-S tile interconnects.
+ * * Level 1 interrupt controllers.
+ * * events, event broadcast and event actions.
+ * * tracing and profiling.
+ * @AIE_NOC_MOD: comprises the following sub-modules,
+ * * interface from NoC Slave Unit (NSU)
+ * (bridge to AXI-MM switch)
+ * * interfaces to NoC Master Unit (NMU)
+ * * shim DMA & locks
+ * * NoC stream interface
+ */
+enum aie_module_type {
+ AIE_MEM_MOD,
+ AIE_CORE_MOD,
+ AIE_PL_MOD,
+ AIE_NOC_MOD,
+};
+
+/*
+ * enum aie_shim_switch_type - identifies different switches in shim tile.
+ */
+enum aie_shim_switch_type {
+ AIE_SHIM_SWITCH_A,
+ AIE_SHIM_SWITCH_B
+};
+
/**
* struct aie_tile_regs - contiguous range of AI engine register
* within an AI engine tile
@@ -157,6 +207,75 @@ struct aie_resource {
};
/**
+ * struct aie_event_attr - AI Engine event attributes structure.
+ * @bc_event: broadcast event attribute to capture event mask value and
+ * register offset from @bc_regoff.
+ * @group_error: group error attribute to capture error group mask value and
+ * register offset value from @group_regoff.
+ * @bc_regoff: base broadcast register offset.
+ * @status_regoff: base status register offset.
+ * @group_regoff: base group error register offset.
+ * @base_error_event: event ID of first error event in a group error.
+ * @num_broadcasts: total number of broadcast events.
+ * @base_bc_event: broadcast 0 event ID
+ * @num_events: total number of events.
+ */
+struct aie_event_attr {
+ struct aie_single_reg_field bc_event;
+ struct aie_single_reg_field group_error;
+ u32 bc_regoff;
+ u32 status_regoff;
+ u32 group_regoff;
+ u32 base_error_event;
+ u32 num_broadcasts;
+ u32 base_bc_event;
+ u32 num_events;
+};
+
+/**
+ * struct aie_l1_intr_ctrl_attr - AI engine level 1 interrupt controller
+ * attributes structure.
+ * @mask: level 1 interrupt controller mask attribute.
+ * @swa_status: switch A level 1 interrupt controller status attribute.
+ * @swb_status: switch B level 1 interrupt controller status attribute.
+ * @swa_event: switch A level 1 interrupt controller event attribute.
+ * @swb_event: switch B level 1 interrupt controller event attribute.
+ * @regoff: base level 1 interrupt controller register offset.
+ * @event_lsb: lsb of IRQ event within IRQ event switch register.
+ * @num_broadcasts: total number of broadcast signals to level 1 interrupt
+ * controller.
+ */
+struct aie_l1_intr_ctrl_attr {
+ struct aie_single_reg_field swa_status;
+ struct aie_single_reg_field swb_status;
+ struct aie_single_reg_field swa_event;
+ struct aie_single_reg_field swb_event;
+ u32 regoff;
+ u32 event_lsb;
+ u32 num_broadcasts;
+};
+
+/**
+ * struct aie_l2_intr_ctrl_attr - AI engine level 2 interrupt controller
+ * attributes structure.
+ * @mask: level 2 interrupt controller mask attribute.
+ * @enable: level 2 interrupt controller enable attribute.
+ * @disable: level 2 interrupt controller disable attribute.
+ * @status: level 2 interrupt controller status attribute.
+ * @regoff: level 2 interrupt controller register offset.
+ * @num_broadcasts: total number of broadcast signals to level 2 interrupt
+ * controller.
+ */
+struct aie_l2_intr_ctrl_attr {
+ struct aie_single_reg_field mask;
+ struct aie_single_reg_field enable;
+ struct aie_single_reg_field disable;
+ struct aie_single_reg_field status;
+ u32 regoff;
+ u32 num_broadcasts;
+};
+
+/**
* struct aie_device - AI engine device structure
* @partitions: list of partitions requested
* @cdev: cdev for the AI engine
@@ -169,6 +288,11 @@ struct aie_resource {
* @col_rst: column reset attribute
* @col_clkbuf: column clock buffer attribute
* @shim_dma: SHIM DMA attribute
+ * @pl_events: pl module event attribute
+ * @mem_events: memory module event attribute
+ * @core_events: core module event attribute
+ * @l1_ctrl: level 1 interrupt controller attribute
+ * @l2_ctrl: level 2 interrupt controller attribute
* @size: size of the AI engine address space
* @array_shift: array address shift
* @col_shift: column address shift
@@ -176,6 +300,8 @@ struct aie_resource {
* @cols_res: AI engine columns resources to indicate
* while columns are occupied by partitions.
* @num_kernel_regs: number of kernel only registers range
+ * @irq: Linux IRQ number
+ * @backtrack: workqueue to backtrack interrupt
* @version: AI engine device version
* @pm_node_id: AI Engine platform management node ID
*/
@@ -191,12 +317,19 @@ struct aie_device {
const struct aie_single_reg_field *col_rst;
const struct aie_single_reg_field *col_clkbuf;
const struct aie_dma_attr *shim_dma;
+ const struct aie_event_attr *pl_events;
+ const struct aie_event_attr *mem_events;
+ const struct aie_event_attr *core_events;
+ const struct aie_l1_intr_ctrl_attr *l1_ctrl;
+ const struct aie_l2_intr_ctrl_attr *l2_ctrl;
size_t size;
struct aie_resource cols_res;
u32 array_shift;
u32 col_shift;
u32 row_shift;
u32 num_kernel_regs;
+ int irq;
+ struct work_struct backtrack;
int version;
u32 pm_node_id;
};
@@ -212,6 +345,7 @@ struct aie_device {
* @dev: device for the AI engine partition
* @cores_clk_state: bitmap to indicate the power state of core modules
* @tiles_inuse: bitmap to indicate if a tile is in use
+ * @l2_mask: level 2 interrupt controller mask bitmap
* @partition_id: partition id. Partition ID is the identifier
* of the AI engine partition in the system.
* @status: indicate if the partition is in use
@@ -228,6 +362,7 @@ struct aie_partition {
struct device dev;
struct aie_resource cores_clk_state;
struct aie_resource tiles_inuse;
+ struct aie_resource l2_mask;
u32 partition_id;
u32 status;
u32 cntrflag;
@@ -339,7 +474,12 @@ int aie_resource_get_region(struct aie_resource *res, u32 start,
void aie_resource_put_region(struct aie_resource *res, int start, u32 count);
int aie_resource_set(struct aie_resource *res, u32 start, u32 count);
int aie_resource_clear(struct aie_resource *res, u32 start, u32 count);
+int aie_resource_clear_all(struct aie_resource *res);
bool aie_resource_testbit(struct aie_resource *res, u32 bit);
+int aie_resource_cpy_from_arr32(struct aie_resource *res, u32 start,
+ const u32 *src, u32 nbits);
+int aie_resource_cpy_to_arr32(struct aie_resource *res, u32 start, u32 *dst,
+ u32 nbits);
const struct file_operations *aie_part_get_fops(void);
u8 aie_part_in_use(struct aie_partition *apart);
@@ -372,4 +512,8 @@ int aie_part_release_tiles_from_user(struct aie_partition *apart,
void __user *user_args);
int aie_device_init(struct aie_device *adev);
+
+void aie_array_backtrack(struct work_struct *work);
+irqreturn_t aie_interrupt(int irq, void *data);
+
#endif /* AIE_INTERNAL_H */
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-interrupt.c b/drivers/misc/xilinx-ai-engine/ai-engine-interrupt.c
new file mode 100644
index 0000000..c2fd4a0
--- /dev/null
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-interrupt.c
@@ -0,0 +1,659 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx AI Engine device driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ */
+#include <linux/bitmap.h>
+#include <linux/firmware/xlnx-zynqmp.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/slab.h>
+#include <linux/workqueue.h>
+
+#include "ai-engine-internal.h"
+
+#define AIE_ARRAY_TILE_ERROR_BC_ID 0U
+#define AIE_SHIM_TILE_ERROR_IRQ_ID 16U
+
+/**
+ * aie_get_broadcast_event() - get the event ID being broadcast on a given
+ *			       broadcast line.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @module: module type.
+ * @bc_id: broadcast ID.
+ * @return: event ID.
+ */
+static u8 aie_get_broadcast_event(struct aie_partition *apart,
+ struct aie_location *loc,
+ enum aie_module_type module, u8 bc_id)
+{
+ const struct aie_event_attr *event_mod;
+ u32 bcoff, regoff;
+
+ if (module == AIE_CORE_MOD)
+ event_mod = apart->adev->core_events;
+ else if (module == AIE_MEM_MOD)
+ event_mod = apart->adev->mem_events;
+ else
+ event_mod = apart->adev->pl_events;
+
+ bcoff = event_mod->bc_regoff + event_mod->bc_event.regoff + bc_id * 4U;
+ regoff = aie_cal_regoff(apart->adev, *loc, bcoff);
+ return ioread32(apart->adev->base + regoff);
+}
+
+/**
+ * aie_read_event_status() - get the status of event status registers.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @module: module type.
+ * @reg: array to store event status register values.
+ */
+static void aie_read_event_status(struct aie_partition *apart,
+ struct aie_location *loc,
+ enum aie_module_type module, u32 *reg)
+{
+ const struct aie_event_attr *event_mod;
+ u8 offset;
+
+ if (module == AIE_CORE_MOD)
+ event_mod = apart->adev->core_events;
+ else if (module == AIE_MEM_MOD)
+ event_mod = apart->adev->mem_events;
+ else
+ event_mod = apart->adev->pl_events;
+
+ for (offset = 0; offset < (event_mod->num_events / 32); offset++) {
+ u32 status_off = event_mod->status_regoff + offset * 4U;
+ u32 regoff = aie_cal_regoff(apart->adev, *loc, status_off);
+
+ reg[offset] = ioread32(apart->adev->base + regoff);
+ }
+}
+
+/**
+ * aie_check_group_errors_enabled() - get error events enabled in group error.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @module: module type.
+ * @return: bitmap of enabled error events.
+ */
+static u32 aie_check_group_errors_enabled(struct aie_partition *apart,
+ struct aie_location *loc,
+ enum aie_module_type module)
+{
+ const struct aie_event_attr *event_mod;
+ u32 groff, regoff;
+
+ if (module == AIE_CORE_MOD)
+ event_mod = apart->adev->core_events;
+ else if (module == AIE_MEM_MOD)
+ event_mod = apart->adev->mem_events;
+ else
+ event_mod = apart->adev->pl_events;
+
+ groff = event_mod->group_regoff + event_mod->group_error.regoff;
+ regoff = aie_cal_regoff(apart->adev, *loc, groff);
+ return ioread32(apart->adev->base + regoff);
+}
+
+/**
+ * aie_set_error_event() - enable/disable error events in group error.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @module: module type.
+ * @bitmap: bitmap of error events to enable/disable in group errors.
+ */
+static void aie_set_error_event(struct aie_partition *apart,
+ struct aie_location *loc,
+ enum aie_module_type module, u32 bitmap)
+{
+ const struct aie_event_attr *event_mod;
+ u32 groff, regoff;
+
+ if (module == AIE_CORE_MOD)
+ event_mod = apart->adev->core_events;
+ else if (module == AIE_MEM_MOD)
+ event_mod = apart->adev->mem_events;
+ else
+ event_mod = apart->adev->pl_events;
+
+ groff = event_mod->group_regoff + event_mod->group_error.regoff;
+ regoff = aie_cal_regoff(apart->adev, *loc, groff);
+ iowrite32(bitmap, apart->adev->base + regoff);
+}
+
+/**
+ * aie_get_error_event() - map group error status bit to actual error
+ * event number.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @module: module type.
+ * @index: event index within group errors.
+ * @return: actual error event ID.
+ */
+static u32 aie_get_error_event(struct aie_partition *apart,
+ struct aie_location *loc,
+ enum aie_module_type module, u8 index)
+{
+ const struct aie_event_attr *event_mod;
+
+ if (module == AIE_CORE_MOD)
+ event_mod = apart->adev->core_events;
+ else if (module == AIE_MEM_MOD)
+ event_mod = apart->adev->mem_events;
+ else
+ event_mod = apart->adev->pl_events;
+
+ return event_mod->base_error_event + index;
+}
+
+/**
+ * aie_get_bc_event() - get the broadcast event ID.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @module: module type.
+ * @bc_id: broadcast line ID.
+ * @return: broadcast event ID.
+ */
+static u32 aie_get_bc_event(struct aie_partition *apart,
+ struct aie_location *loc,
+ enum aie_module_type module, u8 bc_id)
+{
+ const struct aie_event_attr *event_mod;
+
+ if (module == AIE_CORE_MOD)
+ event_mod = apart->adev->core_events;
+ else if (module == AIE_MEM_MOD)
+ event_mod = apart->adev->mem_events;
+ else
+ event_mod = apart->adev->pl_events;
+
+ return event_mod->base_bc_event + bc_id;
+}
+
+/**
+ * aie_get_l1_event() - get event ID being broadcast on level 1 IRQ.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @sw: switch type.
+ * @irq_id: IRQ event ID to be read.
+ * @return: event ID.
+ */
+static u8 aie_get_l1_event(struct aie_partition *apart,
+ struct aie_location *loc,
+ enum aie_shim_switch_type sw, u8 irq_id)
+{
+ const struct aie_l1_intr_ctrl_attr *intr_ctrl = apart->adev->l1_ctrl;
+ u32 l1off, l1mask, regoff, reg_value;
+
+ if (sw == AIE_SHIM_SWITCH_A) {
+ l1off = intr_ctrl->regoff + intr_ctrl->swa_event.regoff;
+ l1mask = intr_ctrl->swa_event.mask;
+ } else {
+ l1off = intr_ctrl->regoff + intr_ctrl->swb_event.regoff;
+ l1mask = intr_ctrl->swb_event.mask;
+ }
+
+ regoff = aie_cal_regoff(apart->adev, *loc, l1off);
+ reg_value = ioread32(apart->adev->base + regoff);
+ reg_value &= l1mask << (irq_id * intr_ctrl->event_lsb);
+ reg_value >>= (irq_id * intr_ctrl->event_lsb);
+ return reg_value;
+}
+
+/**
+ * aie_clear_l1_intr() - clear level 1 interrupt controller status.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @sw: switch type.
+ * @irq_id: IRQ ID to be cleared.
+ */
+static void aie_clear_l1_intr(struct aie_partition *apart,
+ struct aie_location *loc,
+ enum aie_shim_switch_type sw, u8 irq_id)
+{
+ const struct aie_l1_intr_ctrl_attr *intr_ctrl = apart->adev->l1_ctrl;
+ u32 l1off, regoff;
+
+ if (sw == AIE_SHIM_SWITCH_A)
+ l1off = intr_ctrl->regoff + intr_ctrl->swa_status.regoff;
+ else
+ l1off = intr_ctrl->regoff + intr_ctrl->swb_status.regoff;
+
+ regoff = aie_cal_regoff(apart->adev, *loc, l1off);
+ iowrite32(BIT(irq_id), apart->adev->base + regoff);
+}
+
+/**
+ * aie_get_l1_status() - get level 1 interrupt controller status value.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @sw: switch type.
+ * @return: status value.
+ */
+static u32 aie_get_l1_status(struct aie_partition *apart,
+ struct aie_location *loc,
+ enum aie_shim_switch_type sw)
+{
+ const struct aie_l1_intr_ctrl_attr *intr_ctrl = apart->adev->l1_ctrl;
+ u32 l1off, regoff;
+
+ if (sw == AIE_SHIM_SWITCH_A)
+ l1off = intr_ctrl->regoff + intr_ctrl->swa_status.regoff;
+ else
+ l1off = intr_ctrl->regoff + intr_ctrl->swb_status.regoff;
+
+ regoff = aie_cal_regoff(apart->adev, *loc, l1off);
+ return ioread32(apart->adev->base + regoff);
+}
+
+/**
+ * aie_clear_l2_intr() - clear level 2 interrupt controller status.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @bitmap_irq: IRQ bitmap. IRQ lines corresponding to set bits will be
+ * cleared.
+ */
+static void aie_clear_l2_intr(struct aie_partition *apart,
+ struct aie_location *loc, u32 bitmap_irq)
+{
+ const struct aie_l2_intr_ctrl_attr *intr_ctrl = apart->adev->l2_ctrl;
+ u32 l2off = intr_ctrl->regoff + intr_ctrl->status.regoff;
+ u32 regoff = aie_cal_regoff(apart->adev, *loc, l2off);
+
+ iowrite32(bitmap_irq, apart->adev->base + regoff);
+}
+
+/**
+ * aie_get_l2_status() - get level 2 interrupt controller status value.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @return: status value.
+ */
+static u32 aie_get_l2_status(struct aie_partition *apart,
+ struct aie_location *loc)
+{
+ const struct aie_l2_intr_ctrl_attr *intr_ctrl = apart->adev->l2_ctrl;
+ u32 l2off = intr_ctrl->regoff + intr_ctrl->status.regoff;
+ u32 regoff = aie_cal_regoff(apart->adev, *loc, l2off);
+
+ return ioread32(apart->adev->base + regoff);
+}
+
+/**
+ * aie_get_l2_mask() - get level 2 interrupt controller mask value.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @return: mask value.
+ */
+static u32 aie_get_l2_mask(struct aie_partition *apart,
+ struct aie_location *loc)
+{
+ const struct aie_l2_intr_ctrl_attr *intr_ctrl = apart->adev->l2_ctrl;
+ u32 l2off = intr_ctrl->regoff + intr_ctrl->mask.regoff;
+ u32 regoff = aie_cal_regoff(apart->adev, *loc, l2off);
+
+ return ioread32(apart->adev->base + regoff);
+}
+
+/**
+ * aie_enable_l2_ctrl() - enable interrupts to level 2 interrupt controller.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @bit_map: bitmap of broadcast lines to enable.
+ */
+static void aie_enable_l2_ctrl(struct aie_partition *apart,
+ struct aie_location *loc, u32 bit_map)
+{
+ const struct aie_l2_intr_ctrl_attr *intr_ctrl = apart->adev->l2_ctrl;
+ u32 l2off = intr_ctrl->regoff + intr_ctrl->enable.regoff;
+ u32 regoff = aie_cal_regoff(apart->adev, *loc, l2off);
+
+ bit_map &= intr_ctrl->enable.mask;
+ iowrite32(bit_map, apart->adev->base + regoff);
+}
+
+/**
+ * aie_disable_l2_ctrl() - disable interrupts to level 2 interrupt controller.
+ * @apart: AIE partition pointer.
+ * @loc: pointer to tile location.
+ * @bit_map: bitmap of broadcast lines to disable.
+ */
+static void aie_disable_l2_ctrl(struct aie_partition *apart,
+ struct aie_location *loc, u32 bit_map)
+{
+ const struct aie_l2_intr_ctrl_attr *intr_ctrl = apart->adev->l2_ctrl;
+ u32 l2off = intr_ctrl->regoff + intr_ctrl->disable.regoff;
+ u32 regoff = aie_cal_regoff(apart->adev, *loc, l2off);
+
+ bit_map &= intr_ctrl->disable.mask;
+ iowrite32(bit_map, apart->adev->base + regoff);
+}
+
+/**
+ * aie_tile_backtrack() - check if an error was asserted on a broadcast line
+ *			  in the given array tile and, if so, disable the
+ *			  asserted error events in the group errors.
+ * @apart: AIE partition pointer.
+ * @loc: tile location.
+ * @module: module type.
+ * @sw: switch type.
+ * @bc_id: broadcast ID.
+ * @return: true if error was asserted, else return false.
+ */
+static bool aie_tile_backtrack(struct aie_partition *apart,
+ struct aie_location loc,
+ enum aie_module_type module,
+ enum aie_shim_switch_type sw, u8 bc_id)
+{
+ unsigned long grenabled;
+ u32 status[4];
+ u8 n, grevent, eevent;
+ bool ret = false;
+
+ if (module == AIE_PL_MOD)
+ grevent = aie_get_l1_event(apart, &loc, sw, bc_id);
+ else
+ grevent = aie_get_broadcast_event(apart, &loc, module, bc_id);
+
+ aie_read_event_status(apart, &loc, module, status);
+
+ if (!(status[grevent / 32] & BIT(grevent % 32)))
+ return ret;
+
+ grenabled = aie_check_group_errors_enabled(apart, &loc, module);
+ for_each_set_bit(n, &grenabled, 32) {
+ eevent = aie_get_error_event(apart, &loc, module, n);
+ if (!(status[eevent / 32] & BIT(eevent % 32)))
+ continue;
+ grenabled &= ~BIT(n);
+ ret = true;
+ dev_err_ratelimited(&apart->adev->dev,
+ "Asserted tile error event %d at col %d row %d\n",
+ eevent, loc.col, loc.row);
+ }
+ aie_set_error_event(apart, &loc, module, grenabled);
+
+ return ret;
+}
+
+/**
+ * aie_map_l2_to_l1() - map the status bit set in level 2 interrupt controller
+ * to a level 1 interrupt controller.
+ * @apart: AIE partition pointer.
+ * @set_pos: position of level 2 set bit.
+ * @l2_col: level 2 interrupt controller column ID.
+ * @l1_col: pointer to return corresponding level 1 column ID.
+ * @sw: pointer to return the level 1 interrupt controller switch ID.
+ *
+ * This API implementation is tightly coupled with the level 2 to level 1
+ * static mapping created when AIE application CDOs are generated.
+ */
+static void aie_map_l2_to_l1(struct aie_partition *apart, u32 set_pos,
+ u32 l2_col, u32 *l1_col,
+ enum aie_shim_switch_type *sw)
+{
+ if (l2_col + 3 >= apart->range.start.col + apart->range.size.col) {
+ *l1_col = l2_col + (set_pos % 6) / 2;
+ *sw = (set_pos % 6) % 2;
+ } else if (l2_col % 2 == 0) {
+ /* set bit position could be 0 - 5 */
+ *l1_col = l2_col - (2 - (set_pos % 6) / 2);
+ *sw = (set_pos % 6) % 2;
+ } else {
+ /* set bit position could be 0 - 1 */
+ *l1_col = l2_col;
+ *sw = set_pos;
+ }
+}
+
+/**
+ * aie_l1_backtrack() - backtrack AIE array tiles or shim tile based on
+ * the level 2 status bit set.
+ * @apart: AIE partition pointer.
+ * @loc: tile location of level 2 interrupt controller.
+ * @set_pos: set bit position in level 2 controller status.
+ * @return: true if error was asserted, else return false.
+ */
+static bool aie_l1_backtrack(struct aie_partition *apart,
+ struct aie_location loc, u32 set_pos)
+{
+ struct aie_location l1_ctrl;
+ enum aie_shim_switch_type sw;
+ unsigned long status;
+ u32 srow = apart->range.start.row + 1;
+ u32 erow = apart->range.start.row + apart->range.size.row;
+ bool ret = false;
+
+ /*
+ * Based on the set status bit find which level 1 interrupt
+ * controller has generated an interrupt
+ */
+ l1_ctrl.row = 0;
+ aie_map_l2_to_l1(apart, set_pos, loc.col, &l1_ctrl.col, &sw);
+
+ status = aie_get_l1_status(apart, &l1_ctrl, sw);
+
+ /* For now, support error broadcasts only */
+ if (status & BIT(AIE_ARRAY_TILE_ERROR_BC_ID)) {
+ struct aie_location temp;
+ enum aie_module_type module;
+ u32 bc_event;
+
+ if (sw == AIE_SHIM_SWITCH_A)
+ module = AIE_CORE_MOD;
+ else
+ module = AIE_MEM_MOD;
+
+ aie_clear_l1_intr(apart, &l1_ctrl, sw,
+ AIE_ARRAY_TILE_ERROR_BC_ID);
+
+ temp.row = srow;
+ temp.col = l1_ctrl.col;
+ bc_event = aie_get_bc_event(apart, &temp, module,
+ AIE_ARRAY_TILE_ERROR_BC_ID);
+ for (; temp.row < erow; temp.row++) {
+ u32 reg[4];
+
+ if (!aie_part_check_clk_enable_loc(apart, &temp))
+ break;
+
+ if (aie_tile_backtrack(apart, temp, module, sw,
+ AIE_ARRAY_TILE_ERROR_BC_ID))
+ ret = true;
+
+ aie_read_event_status(apart, &temp, module, reg);
+ if (!(reg[bc_event / 32] & BIT(bc_event % 32)))
+ break;
+ }
+ }
+
+ if (status & BIT(AIE_SHIM_TILE_ERROR_IRQ_ID)) {
+ aie_clear_l1_intr(apart, &l1_ctrl, sw,
+ AIE_SHIM_TILE_ERROR_IRQ_ID);
+ if (aie_tile_backtrack(apart, l1_ctrl, AIE_PL_MOD, sw,
+ AIE_SHIM_TILE_ERROR_IRQ_ID))
+ ret = true;
+ }
+ return ret;
+}
+
+/**
+ * aie_l2_backtrack() - iterate through each level 2 interrupt controller
+ * in a given partition and backtrack its
+ * corresponding level 1 interrupt controller.
+ * @apart: AIE partition pointer
+ */
+static void aie_l2_backtrack(struct aie_partition *apart)
+{
+ struct aie_location loc;
+ unsigned long l2_mask = 0;
+ u32 n, ttype, l2_bitmap_offset = 0;
+ int ret;
+ bool sched_work = false;
+
+ ret = mutex_lock_interruptible(&apart->mlock);
+ if (ret) {
+ dev_err_ratelimited(&apart->dev,
+ "Failed to acquire lock. Process was interrupted by fatal signals\n");
+ return;
+ }
+
+ for (loc.col = apart->range.start.col, loc.row = 0;
+ loc.col < apart->range.start.col + apart->range.size.col;
+ loc.col++) {
+ ttype = apart->adev->ops->get_tile_type(&loc);
+ if (ttype != AIE_TILE_TYPE_SHIMNOC)
+ continue;
+
+ aie_resource_cpy_to_arr32(&apart->l2_mask, l2_bitmap_offset *
+ 32, (u32 *)&l2_mask, 32);
+ l2_bitmap_offset++;
+
+ for_each_set_bit(n, &l2_mask,
+ apart->adev->l2_ctrl->num_broadcasts)
+ aie_l1_backtrack(apart, loc, n);
+
+ aie_enable_l2_ctrl(apart, &loc, l2_mask);
+ }
+
+ /*
+ * Level 2 interrupt registers are edge-triggered. As a result,
+ * re-enabling level 2 won't trigger an interrupt for the already
+ * latched interrupts at level 1 controller.
+ */
+ for (loc.col = apart->range.start.col, loc.row = 0;
+ loc.col < apart->range.start.col + apart->range.size.col;
+ loc.col++) {
+ if (aie_get_l1_status(apart, &loc, AIE_SHIM_SWITCH_A) ||
+ aie_get_l1_status(apart, &loc, AIE_SHIM_SWITCH_B)) {
+ mutex_unlock(&apart->mlock);
+ sched_work = true;
+ schedule_work(&apart->adev->backtrack);
+ break;
+ }
+ }
+
+ if (!sched_work)
+ mutex_unlock(&apart->mlock);
+}
+
+/**
+ * aie_part_backtrack() - backtrack an individual partition.
+ * @apart: AIE partition pointer.
+ */
+static void aie_part_backtrack(struct aie_partition *apart)
+{
+ aie_l2_backtrack(apart);
+}
+
+/**
+ * aie_array_backtrack() - backtrack each partition to find the source of error
+ * interrupt.
+ * @work: pointer to the work structure.
+ *
+ * This task will re-enable the IRQ after errors in all partitions have been
+ * serviced.
+ */
+void aie_array_backtrack(struct work_struct *work)
+{
+ struct aie_device *adev;
+ struct aie_partition *apart;
+ int ret;
+
+ adev = container_of(work, struct aie_device, backtrack);
+
+ ret = mutex_lock_interruptible(&adev->mlock);
+ if (ret) {
+ dev_err_ratelimited(&adev->dev,
+ "Failed to acquire lock. Process was interrupted by fatal signals\n");
+ return;
+ }
+
+ list_for_each_entry(apart, &adev->partitions, node)
+ aie_part_backtrack(apart);
+
+ mutex_unlock(&adev->mlock);
+}
+
+/**
+ * aie_interrupt() - interrupt handler for AIE.
+ * @irq: Interrupt number.
+ * @data: AI engine device structure.
+ * @return: IRQ_HANDLED.
+ *
+ * This thread function disables level 2 interrupt controllers and schedules a
+ * task in workqueue to backtrack the source of error interrupt. Disabled
+ * interrupts are re-enabled after successful completion of bottom half.
+ */
+irqreturn_t aie_interrupt(int irq, void *data)
+{
+ struct aie_device *adev = data;
+ struct aie_partition *apart;
+ int ret;
+ bool sched_work = false;
+
+ ret = mutex_lock_interruptible(&adev->mlock);
+ if (ret) {
+ dev_err(&adev->dev,
+ "Failed to acquire lock. Process was interrupted by fatal signals\n");
+ return IRQ_NONE;
+ }
+
+ list_for_each_entry(apart, &adev->partitions, node) {
+ struct aie_location loc;
+ u32 ttype, l2_mask, l2_status, l2_bitmap_offset = 0;
+
+ ret = mutex_lock_interruptible(&apart->mlock);
+ if (ret) {
+ dev_err(&apart->dev,
+ "Failed to acquire lock. Process was interrupted by fatal signals\n");
+ mutex_unlock(&adev->mlock);
+ return IRQ_NONE;
+ }
+
+ for (loc.col = apart->range.start.col, loc.row = 0;
+ loc.col < apart->range.start.col + apart->range.size.col;
+ loc.col++) {
+ ttype = apart->adev->ops->get_tile_type(&loc);
+ if (ttype != AIE_TILE_TYPE_SHIMNOC)
+ continue;
+
+ l2_mask = aie_get_l2_mask(apart, &loc);
+ if (l2_mask) {
+ aie_resource_cpy_from_arr32(&apart->l2_mask,
+ l2_bitmap_offset *
+ 32, &l2_mask, 32);
+ aie_disable_l2_ctrl(apart, &loc, l2_mask);
+ }
+ l2_bitmap_offset++;
+
+ l2_status = aie_get_l2_status(apart, &loc);
+ if (l2_status) {
+ aie_clear_l2_intr(apart, &loc, l2_status);
+ sched_work = true;
+ } else {
+ aie_enable_l2_ctrl(apart, &loc, l2_mask);
+ }
+ }
+ mutex_unlock(&apart->mlock);
+ }
+
+ /* For ES1 silicon, interrupts are latched in NPI */
+ if (adev->version == VERSAL_ES1_REV_ID) {
+ ret = zynqmp_pm_clear_aie_npi_isr(adev->pm_node_id,
+ AIE_NPI_ERROR_ID);
+ if (ret < 0)
+ dev_err(&adev->dev, "Failed to clear NPI ISR\n");
+ }
+
+ mutex_unlock(&adev->mlock);
+
+ if (sched_work)
+ schedule_work(&adev->backtrack);
+
+ return IRQ_HANDLED;
+}
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-part.c b/drivers/misc/xilinx-ai-engine/ai-engine-part.c
index 54450b6..0670d0ad 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-part.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-part.c
@@ -247,6 +247,9 @@ static int aie_part_release(struct inode *inode, struct file *filp)
aie_part_clean(apart);
apart->status = 0;
+
+ aie_resource_clear_all(&apart->l2_mask);
+
mutex_unlock(&apart->mlock);
return 0;
@@ -369,6 +372,7 @@ static void aie_part_release_device(struct device *dev)
list_del(&apart->node);
mutex_unlock(&adev->mlock);
aie_resource_uninitialize(&apart->cores_clk_state);
+ aie_resource_uninitialize(&apart->l2_mask);
put_device(apart->dev.parent);
}
@@ -408,6 +412,39 @@ static int aie_part_create_mems_info(struct aie_partition *apart)
}
/**
+ * aie_part_create_l2_bitmap() - create bitmaps to record mask and status
+ * values for level 2 interrupt controllers.
+ * @apart: AI engine partition
+ * @return: 0 for success, and negative value for failure.
+ */
+static int aie_part_create_l2_bitmap(struct aie_partition *apart)
+{
+ struct aie_location loc;
+ u8 num_l2_ctrls = 0;
+ int ret;
+
+ loc.row = 0;
+ for (loc.col = apart->range.start.col;
+ loc.col < apart->range.start.col + apart->range.size.col;
+ loc.col++) {
+ u32 ttype = apart->adev->ops->get_tile_type(&loc);
+
+ if (ttype == AIE_TILE_TYPE_SHIMNOC)
+ num_l2_ctrls++;
+ }
+
+ ret = aie_resource_initialize(&apart->l2_mask, num_l2_ctrls *
+ AIE_INTR_L2_CTRL_MASK_WIDTH);
+ if (ret) {
+ dev_err(&apart->dev,
+ "failed to initialize l2 mask resource.\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+/**
* aie_create_partition() - create AI engine partition instance
* @adev: AI engine device
* @range: AI engine partition range to check. A range describes a group
@@ -498,6 +535,13 @@ static struct aie_partition *aie_create_partition(struct aie_device *adev,
return ERR_PTR(ret);
}
+ ret = aie_part_create_l2_bitmap(apart);
+ if (ret < 0) {
+ dev_err(&apart->dev, "Failed to allocate l2 bitmap.\n");
+ put_device(dev);
+ return ERR_PTR(ret);
+ }
+
ret = mutex_lock_interruptible(&adev->mlock);
if (ret) {
put_device(dev);
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-res.c b/drivers/misc/xilinx-ai-engine/ai-engine-res.c
index b0c0741..f1bb75b 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-res.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-res.c
@@ -132,6 +132,44 @@ int aie_resource_set(struct aie_resource *res, u32 start, u32 count)
}
/**
+ * aie_resource_cpy_from_arr32() - copies nbits from u32[] to bitmap.
+ * @res: pointer to AI engine resource
+ * @start: start bit in bitmap
+ * @src: source buffer
+ * @nbits: number of bits to copy from u32[]
+ * @return: 0 for success and negative value for failure
+ */
+int aie_resource_cpy_from_arr32(struct aie_resource *res, u32 start,
+ const u32 *src, u32 nbits)
+{
+ if (!res || !res->bitmap || !nbits || start + nbits > res->total ||
+ !src)
+ return -EINVAL;
+
+ bitmap_from_arr32(res->bitmap + BIT_WORD(start), src, nbits);
+ return 0;
+}
+
+/**
+ * aie_resource_cpy_to_arr32() - copies nbits to u32[] from bitmap.
+ * @res: pointer to AI engine resource
+ * @start: start bit in bitmap
+ * @dst: destination buffer
+ * @nbits: number of bits to copy to u32[]
+ * @return: 0 for success and negative value for failure
+ */
+int aie_resource_cpy_to_arr32(struct aie_resource *res, u32 start, u32 *dst,
+ u32 nbits)
+{
+ if (!res || !res->bitmap || !nbits || start + nbits > res->total ||
+ !dst)
+ return -EINVAL;
+
+ bitmap_to_arr32(dst, res->bitmap + BIT_WORD(start), nbits);
+ return 0;
+}
+
+/**
* aie_resource_clear() - clear the AI engine resource bits
* @res: pointer to AI engine resource
* @start: start bit to set
@@ -150,6 +188,22 @@ int aie_resource_clear(struct aie_resource *res, u32 start, u32 count)
}
/**
+ * aie_resource_clear_all() - clear all the AI engine resource bits
+ * @res: pointer to AI engine resource
+ * @return: 0 for success and negative value for failure
+ *
+ * This function clears all the bits in the resource.
+ */
+int aie_resource_clear_all(struct aie_resource *res)
+{
+ if (!res || !res->bitmap)
+ return -EINVAL;
+
+ bitmap_clear(res->bitmap, 0, res->total);
+ return 0;
+}
+
+/**
* aie_resource_testbit() - test if a bit is set in a AI engine resource
* @res: pointer to AI engine resource
* @bit: bit to check
--
2.7.4
The Xilinx AI engine array can be partitioned statically for different
applications. In the device tree, there will be a device node for the AI
engine device, and child device nodes for the statically configured AI engine
partitions. Each statically configured partition has a partition ID in the
system.
Signed-off-by: Wendy Liang <[email protected]>
---
.../bindings/soc/xilinx/xlnx,ai-engine.yaml | 126 +++++++++++++++++++++
1 file changed, 126 insertions(+)
create mode 100644 Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
diff --git a/Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml b/Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
new file mode 100644
index 0000000..1de5623
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
@@ -0,0 +1,126 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/soc/xilinx/xlnx,ai-engine.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Xilinx AI Engine
+
+maintainers:
+ - Wendy Liang <[email protected]>
+
+description: |+
+ The Xilinx AI Engine is a tile processor with many cores (up to 400) that
+ can run in parallel. The data routing between cores is configured through
+ internal switches, and shim tiles interface with external interconnect, such
+ as memory or PL.
+
+properties:
+ compatible:
+ const: xlnx,ai-engine-v1.0
+
+ reg:
+ description: |
+ Physical base address and length of the device registers.
+ The AI engine address space assigned to Linux is defined by Xilinx
+ platform design tool.
+
+ '#address-cells':
+ enum: [2]
+ description: |
+ size of cell to describe AI engine range of tiles address.
+ It is the location of the starting tile of the range.
+ As the AI engine tiles are 2D array, the location of a tile
+ is presented as (column, row), the address cell is 2.
+
+ '#size-cells':
+ enum: [2]
+ description: |
+ Number of cells used to describe the size of an AI engine range of
+ tiles. As the AI engine tiles form a 2D array, the size cell count
+ is 2.
+
+ power-domains:
+ maxItems: 1
+ description: phandle to the associated power domain
+
+ interrupts:
+ maxItems: 3
+
+ interrupt-names:
+ description: |
+ Should be "interrupt1", "interrupt2" or "interrupt3".
+
+required:
+ - compatible
+ - reg
+ - '#address-cells'
+ - '#size-cells'
+ - power-domains
+ - interrupt-parent
+ - interrupts
+ - interrupt-names
+
+patternProperties:
+ "^aie_partition@[0-9]+$":
+ type: object
+ description: |
+ AI engine partition which is a group of column based tiles of the AI
+ engine device. Each AI engine partition is isolated from the other
+ AI engine partitions. An AI engine partition is defined by Xilinx
+ platform design tools. Each partition has a SHIM row and core tiles rows.
+ A SHIM row contains SHIM tiles which are the interface to external
+ components. AXI master can access AI engine registers, push data to and
+ fetch data from AI engine through the SHIM tiles. Core tiles are the
+ compute tiles.
+
+ properties:
+ reg:
+ description: |
+ It describes the group of tiles of the AI engine partition. It needs
+ to include the SHIM row. The format is defined by the parent AI engine
+ device node's '#address-cells' and '#size-cells' properties. e.g. a v1
+ AI engine device has 2D tiles array, the first row is SHIM row. A
+ partition which has 50 columns and 8 rows of core tiles and 1 row of
+ SHIM tiles will be presented as <0 0 50 9>.
+
+ label:
+ maxItems: 1
+
+ xlnx,partition-id:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description: |
+ AI engine partition ID, which is defined by Xilinx platform design
+ tool to identify the AI engine partition in the system.
+
+ required:
+ - reg
+ - xlnx,partition-id
+ additionalProperties: false
+
+additionalProperties: false
+
+examples:
+ - |
+ bus {
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ ai_engine: ai-engine@20000000000 {
+ compatible = "xlnx,ai-engine-v1.0";
+ reg = <0x200 0x0 0x1 0x0>;
+ #address-cells = <2>;
+ #size-cells = <2>;
+ power-domains = <&versal_firmware 0x18224072>;
+ interrupt-parent = <&gic>;
+ interrupts = <0x0 0x94 0x4>,
+ <0x0 0x95 0x4>,
+ <0x0 0x96 0x4>;
+ interrupt-names = "interrupt1", "interrupt2", "interrupt3";
+
+ aie_partition0: aie_partition@0 {
+ /* 50 columns and 8 core tile rows + 1 SHIM row */
+ reg = <0 0 50 9>;
+ xlnx,partition-id = <1>;
+ };
+ };
+ };
--
2.7.4
Add request/release tiles and related clock gating functions to the AI
engine driver:
* scan when the partition is requested to know which tiles are in use
* check if a tile is clock gated or not
* tile request and release ioctls so that a user application can
enable/disable tiles at runtime
Signed-off-by: Wendy Liang <[email protected]>
Reviewed-by: Hyun Kwon <[email protected]>
---
drivers/misc/xilinx-ai-engine/Makefile | 1 +
drivers/misc/xilinx-ai-engine/ai-engine-aie.c | 225 +++++++++++++++++++
drivers/misc/xilinx-ai-engine/ai-engine-clock.c | 245 +++++++++++++++++++++
drivers/misc/xilinx-ai-engine/ai-engine-dev.c | 19 +-
drivers/misc/xilinx-ai-engine/ai-engine-internal.h | 34 +++
drivers/misc/xilinx-ai-engine/ai-engine-part.c | 32 +++
drivers/misc/xilinx-ai-engine/ai-engine-res.c | 51 +++++
include/uapi/linux/xlnx-ai-engine.h | 31 +++
8 files changed, 631 insertions(+), 7 deletions(-)
create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-clock.c
diff --git a/drivers/misc/xilinx-ai-engine/Makefile b/drivers/misc/xilinx-ai-engine/Makefile
index 1b743fa..2e67b25 100644
--- a/drivers/misc/xilinx-ai-engine/Makefile
+++ b/drivers/misc/xilinx-ai-engine/Makefile
@@ -6,6 +6,7 @@
obj-$(CONFIG_XILINX_AIE) += xilinx-aie.o
xilinx-aie-$(CONFIG_XILINX_AIE) := ai-engine-aie.o \
+ ai-engine-clock.o \
ai-engine-dev.o \
ai-engine-dma.o \
ai-engine-mem.o \
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-aie.c b/drivers/misc/xilinx-ai-engine/ai-engine-aie.c
index ac95aff..ff721b3 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-aie.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-aie.c
@@ -41,6 +41,9 @@
#define AIE_SHIMPL_SHIMRST_MASK 0x1U
#define AIE_SHIMPL_COLRST_MASK 0x1U
#define AIE_SHIMPL_CLKCNTR_COLBUF_MASK 0x1U
+#define AIE_SHIMPL_CLKCNTR_NEXTCLK_MASK BIT(1)
+#define AIE_TILE_CLKCNTR_COLBUF_MASK BIT(0)
+#define AIE_TILE_CLKCNTR_NEXTCLK_MASK BIT(1)
/*
* AI engine SHIM reset ID.
@@ -221,10 +224,232 @@ static int aie_reset_shim(struct aie_device *adev, struct aie_range *range)
return 0;
}
+static int aie_init_part_clk_state(struct aie_partition *apart)
+{
+ int ret, num_tiles;
+
+ num_tiles = apart->range.size.col * (apart->range.size.row - 1);
+
+ ret = aie_resource_initialize(&apart->cores_clk_state, num_tiles);
+ if (ret) {
+ dev_err(&apart->dev,
+ "failed to initialize cores clock state resource.\n");
+ return ret;
+ }
+
+ ret = aie_resource_initialize(&apart->tiles_inuse, num_tiles);
+ if (ret) {
+ dev_err(&apart->dev,
+ "failed to initialize tiles in use resource.\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int aie_scan_part_clocks(struct aie_partition *apart)
+{
+ struct aie_device *adev = apart->adev;
+ struct aie_range *range = &apart->range;
+ struct aie_location loc;
+
+ /* Clear the bitmap of cores and memories clock state */
+ aie_resource_put_region(&apart->cores_clk_state, 0,
+ apart->cores_clk_state.total);
+
+ for (loc.col = range->start.col;
+ loc.col < range->start.col + range->size.col;
+ loc.col++) {
+ for (loc.row = range->start.row;
+ loc.row < range->start.row + range->size.row - 1;
+ loc.row++) {
+ void __iomem *va;
+ u32 val, nbitpos;
+
+ /*
+ * Read registers of the current tile to see whether the
+ * next tile is clock gated.
+ */
+ nbitpos = loc.col * (range->size.row - 1) + loc.row;
+
+ if (aie_get_tile_type(&loc) != AIE_TILE_TYPE_TILE) {
+ /* Checks shim tile for next core tile */
+ va = adev->base +
+ aie_cal_regoff(adev, loc,
+ AIE_SHIMPL_CLKCNTR_REGOFF);
+ val = ioread32(va);
+
+ /*
+ * Check if the clock buffer and the next-tile
+ * clock bits are set. If either is clear, the
+ * remaining tiles of the column are clock gated.
+ */
+ if (!(val & AIE_SHIMPL_CLKCNTR_COLBUF_MASK) ||
+ !(val & AIE_SHIMPL_CLKCNTR_NEXTCLK_MASK))
+ break;
+
+ /* Set next tile in the row clock state on */
+ aie_resource_set(&apart->cores_clk_state,
+ nbitpos, 1);
+ continue;
+ }
+
+ /* Checks core tile for next tile */
+ va = adev->base +
+ aie_cal_regoff(adev, loc,
+ AIE_TILE_CORE_CLKCNTR_REGOFF);
+ val = ioread32(va);
+
+ /*
+ * If the next tile is gated, skip the rest of the
+ * column.
+ */
+ if (!(val & AIE_TILE_CLKCNTR_NEXTCLK_MASK))
+ break;
+
+ aie_resource_set(&apart->cores_clk_state, nbitpos, 1);
+ }
+ }
+
+ /*
+ * Set the tiles in use bitmap.
+ * When scanning, tiles whose clocks are on are considered
+ * to be in use.
+ */
+ bitmap_copy(apart->tiles_inuse.bitmap, apart->cores_clk_state.bitmap,
+ apart->tiles_inuse.total);
+
+ return 0;
+}
+
+/**
+ * aie_set_col_clocks() - set clocks of a range of tiles of a column
+ * @apart: AI engine partition
+ * @range: range of tiles of a column
+ * @enable: true to enable the clock, false to disable
+ * @return: 0 for success, negative value for failure.
+ */
+static int aie_set_col_clocks(struct aie_partition *apart,
+ struct aie_range *range, bool enable)
+{
+ struct aie_location ploc;
+ u32 startbit;
+
+ /*
+ * Check that the range covers a single column only, and that the
+ * start row is a core tile row (row 0 is the SHIM row).
+ */
+ if (range->size.col != 1 || range->start.row < 1)
+ return -EINVAL;
+
+ ploc.col = range->start.col;
+ for (ploc.row = range->start.row - 1;
+ ploc.row < range->start.row + range->size.row - 1;
+ ploc.row++) {
+ struct aie_device *adev = apart->adev;
+
+ if (!ploc.row) {
+ void __iomem *va;
+ u32 val = 0;
+
+ /*
+ * Configure SHIM clock registers to gate or
+ * ungate next tile.
+ */
+ if (enable)
+ val = AIE_SHIMPL_CLKCNTR_COLBUF_MASK |
+ AIE_SHIMPL_CLKCNTR_NEXTCLK_MASK;
+ va = adev->base +
+ aie_cal_regoff(adev, ploc,
+ AIE_SHIMPL_CLKCNTR_REGOFF);
+ iowrite32(val, va);
+ } else {
+ void __iomem *va;
+ u32 val = 0;
+
+ /*
+ * Configure core tile clock registers to gate
+ * or ungate next tile.
+ */
+ if (enable)
+ val = AIE_TILE_CLKCNTR_COLBUF_MASK |
+ AIE_TILE_CLKCNTR_NEXTCLK_MASK;
+ va = adev->base +
+ aie_cal_regoff(adev, ploc,
+ AIE_TILE_CORE_CLKCNTR_REGOFF);
+ iowrite32(val, va);
+ }
+
+ /* When gating, the first register write gates the rest of the column */
+ if (!enable)
+ break;
+ }
+
+ /* Update clock state bitmap */
+ startbit = range->start.col * (apart->range.size.row - 1) +
+ range->start.row - 1;
+ if (enable)
+ aie_resource_set(&apart->cores_clk_state, startbit,
+ range->size.row);
+ else
+ aie_resource_clear(&apart->cores_clk_state, startbit,
+ range->size.row);
+
+ return 0;
+}
+
+static int aie_set_part_clocks(struct aie_partition *apart)
+{
+ struct aie_range *range = &apart->range, lrange;
+ struct aie_location loc;
+
+ /*
+ * The tiles below the highest tile whose clock is on need to have
+ * their clocks on too. For each column, scan the in-use and clock
+ * state bitmaps to find the top in-use row and the top clocked row,
+ * then enable or gate the rows in between accordingly.
+ */
+ for (loc.col = range->start.col;
+ loc.col < range->start.col + range->size.col;
+ loc.col++) {
+ u32 startbit, inuse_toprow = 0, clk_toprow = 0;
+
+ startbit = loc.col * (range->size.row - 1);
+
+ for (loc.row = range->start.row + 1;
+ loc.row < range->start.row + range->size.row;
+ loc.row++) {
+ u32 bit = startbit + loc.row - 1;
+
+ if (aie_resource_testbit(&apart->tiles_inuse, bit))
+ inuse_toprow = loc.row;
+ if (aie_resource_testbit(&apart->cores_clk_state, bit))
+ clk_toprow = loc.row;
+ }
+
+ /* Update clock states of a column */
+ lrange.start.col = loc.col;
+ lrange.size.col = 1;
+ if (inuse_toprow < clk_toprow) {
+ lrange.start.row = inuse_toprow + 1;
+ lrange.size.row = clk_toprow - inuse_toprow;
+ aie_set_col_clocks(apart, &lrange, false);
+ } else if (inuse_toprow > clk_toprow) {
+ lrange.start.row = clk_toprow + 1;
+ lrange.size.row = inuse_toprow - clk_toprow;
+ aie_set_col_clocks(apart, &lrange, true);
+ }
+ }
+
+ return 0;
+}
+
static const struct aie_tile_operations aie_ops = {
.get_tile_type = aie_get_tile_type,
.get_mem_info = aie_get_mem_info,
.reset_shim = aie_reset_shim,
+ .init_part_clk_state = aie_init_part_clk_state,
+ .scan_part_clocks = aie_scan_part_clocks,
+ .set_part_clocks = aie_set_part_clocks,
};
/**
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-clock.c b/drivers/misc/xilinx-ai-engine/ai-engine-clock.c
new file mode 100644
index 0000000..261702b
--- /dev/null
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-clock.c
@@ -0,0 +1,245 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx AI Engine device driver
+ *
+ * Copyright (C) 2020 Xilinx, Inc.
+ */
+
+#include "ai-engine-internal.h"
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+/**
+ * aie_part_get_clk_state_bit() - return bit position of the clock state of a
+ * tile
+ * @apart: AI engine partition
+ * @loc: AI engine tile location
+ * @return: bit position for success, negative value for failure
+ */
+static int aie_part_get_clk_state_bit(struct aie_partition *apart,
+ struct aie_location *loc)
+{
+ if (apart->adev->ops->get_tile_type(loc) != AIE_TILE_TYPE_TILE)
+ return -EINVAL;
+
+ return loc->col * (apart->range.size.row - 1) + loc->row - 1;
+}
+
+/**
+ * aie_part_scan_clk_state() - scan the clock states of tiles of the AI engine
+ * partition
+ * @apart: AI engine partition
+ * @return: 0 for success, negative value for failure.
+ *
+ * This function will scan the clock status of both the memory and core
+ * modules.
+ */
+int aie_part_scan_clk_state(struct aie_partition *apart)
+{
+ return apart->adev->ops->scan_part_clocks(apart);
+}
+
+/**
+ * aie_part_check_clk_enable_loc() - return if clock of a tile is enabled
+ * @apart: AI engine partition
+ * @loc: AI engine tile location
+ * @return: true for enabled, false for disabled
+ */
+bool aie_part_check_clk_enable_loc(struct aie_partition *apart,
+ struct aie_location *loc)
+{
+ int bit;
+
+ if (apart->adev->ops->get_tile_type(loc) != AIE_TILE_TYPE_TILE)
+ return true;
+
+ bit = aie_part_get_clk_state_bit(apart, loc);
+ return aie_resource_testbit(&apart->cores_clk_state, bit);
+}
+
+/**
+ * aie_part_request_tiles() - request tiles from an AI engine partition.
+ * @apart: AI engine partition
+ * @num_tiles: number of tiles to request. If it is 0, it means all tiles
+ * @locs: the AI engine tiles locations array which will be requested
+ * @return: 0 for success, negative value for failure.
+ *
+ * This function will enable clocks of the specified tiles.
+ */
+static int aie_part_request_tiles(struct aie_partition *apart, int num_tiles,
+ struct aie_location *locs)
+{
+ if (num_tiles == 0) {
+ aie_resource_set(&apart->tiles_inuse, 0,
+ apart->tiles_inuse.total);
+ } else {
+ u32 n;
+
+ if (!locs)
+ return -EINVAL;
+
+ for (n = 0; n < num_tiles; n++) {
+ int bit = aie_part_get_clk_state_bit(apart, &locs[n]);
+
+ if (bit >= 0)
+ aie_resource_set(&apart->tiles_inuse, bit, 1);
+ }
+ }
+
+ return apart->adev->ops->set_part_clocks(apart);
+}
+
+/**
+ * aie_part_release_tiles() - release tiles from an AI engine partition.
+ * @apart: AI engine partition
+ * @num_tiles: number of tiles to release. If it is 0, it means all tiles
+ * @locs: the AI engine tiles locations array which will be released
+ * @return: 0 for success, negative value for failure.
+ *
+ * This function will disable clocks of the specified tiles.
+ */
+static int aie_part_release_tiles(struct aie_partition *apart, int num_tiles,
+ struct aie_location *locs)
+{
+ if (num_tiles == 0) {
+ aie_resource_clear(&apart->tiles_inuse, 0,
+ apart->tiles_inuse.total);
+ } else {
+ u32 n;
+
+ if (!locs)
+ return -EINVAL;
+
+ for (n = 0; n < num_tiles; n++) {
+ int bit = aie_part_get_clk_state_bit(apart, &locs[n]);
+
+ if (bit >= 0)
+ aie_resource_clear(&apart->tiles_inuse, bit, 1);
+ }
+ }
+
+ return apart->adev->ops->set_part_clocks(apart);
+}
+
+/**
+ * aie_part_request_tiles_from_user() - request tiles from an AI engine
+ * partition from user
+ * @apart: AI engine partition
+ * @user_args: user AI engine request tiles argument
+ * @return: 0 for success, negative value for failure.
+ *
+ * This function will request tiles from user request.
+ */
+int aie_part_request_tiles_from_user(struct aie_partition *apart,
+ void __user *user_args)
+{
+ struct aie_tiles_array args;
+ struct aie_location *locs = NULL;
+ int ret;
+
+ if (copy_from_user(&args, user_args, sizeof(args)))
+ return -EFAULT;
+
+ if (args.num_tiles) {
+ u32 i;
+
+ locs = kmalloc_array(args.num_tiles, sizeof(*locs),
+ GFP_KERNEL);
+ if (!locs)
+ return -ENOMEM;
+
+ if (copy_from_user(locs, u64_to_user_ptr(args.locs),
+ args.num_tiles * sizeof(*locs))) {
+ kfree(locs);
+ return -EFAULT;
+ }
+
+ /* update the location to absolute location */
+ for (i = 0; i < args.num_tiles; i++) {
+ if (locs[i].col >= apart->range.size.col ||
+ locs[i].row >= apart->range.size.row) {
+ dev_err(&apart->dev,
+ "failed to request tiles, invalid tile(%u,%u).\n",
+ locs[i].col, locs[i].row);
+ kfree(locs);
+ return -EINVAL;
+ }
+ locs[i].col += apart->range.start.col;
+ locs[i].row += apart->range.start.row;
+ }
+ }
+
+ ret = mutex_lock_interruptible(&apart->mlock);
+ if (ret) {
+ kfree(locs);
+ return ret;
+ }
+
+ ret = aie_part_request_tiles(apart, args.num_tiles, locs);
+ mutex_unlock(&apart->mlock);
+
+ kfree(locs);
+ return ret;
+}
+
+/**
+ * aie_part_release_tiles_from_user() - release tiles from an AI engine
+ * partition from user
+ * @apart: AI engine partition
+ * @user_args: user AI engine request tiles argument
+ * @return: 0 for success, negative value for failure.
+ *
+ * This function will release tiles from user request.
+ */
+int aie_part_release_tiles_from_user(struct aie_partition *apart,
+ void __user *user_args)
+{
+ struct aie_tiles_array args;
+ struct aie_location *locs = NULL;
+ int ret;
+
+ if (copy_from_user(&args, user_args, sizeof(args)))
+ return -EFAULT;
+
+ if (args.num_tiles) {
+ u32 i;
+
+ locs = kmalloc_array(args.num_tiles, sizeof(*locs),
+ GFP_KERNEL);
+ if (!locs)
+ return -ENOMEM;
+
+ if (copy_from_user(locs, u64_to_user_ptr(args.locs),
+ args.num_tiles * sizeof(*locs))) {
+ kfree(locs);
+ return -EFAULT;
+ }
+
+ /* update the location to absolute location */
+ for (i = 0; i < args.num_tiles; i++) {
+ if (locs[i].col >= apart->range.size.col ||
+ locs[i].row >= apart->range.size.row) {
+ dev_err(&apart->dev,
+ "failed to release tiles, invalid tile(%u,%u).\n",
+ locs[i].col, locs[i].row);
+ kfree(locs);
+ return -EINVAL;
+ }
+ locs[i].col += apart->range.start.col;
+ locs[i].row += apart->range.start.row;
+ }
+ }
+
+ ret = mutex_lock_interruptible(&apart->mlock);
+ if (ret) {
+ kfree(locs);
+ return ret;
+ }
+
+ ret = aie_part_release_tiles(apart, args.num_tiles, locs);
+ mutex_unlock(&apart->mlock);
+
+ kfree(locs);
+ return ret;
+}
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-dev.c b/drivers/misc/xilinx-ai-engine/ai-engine-dev.c
index 69f7216..43f4933 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-dev.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-dev.c
@@ -204,17 +204,22 @@ struct aie_partition *aie_request_partition(struct aie_device *adev,
} else {
/*
* TBD:
- * 1. setup NOC AXI MM config to only generate error events
- * for slave error and decode error.
- * 2. scan to see which tiles have been clock gated.
+ * setup NOC AXI MM config to only generate error events
+ * for slave error and decode error.
*
* This needs to be done before the AI engine partition is
* exported for user to access.
*/
- apart->status = XAIE_PART_STATUS_INUSE;
- apart->cntrflag = req->flag;
-
- mutex_unlock(&apart->mlock);
+ /* scan to setup the initial clock state for tiles */
+ ret = aie_part_scan_clk_state(apart);
+ if (ret) {
+ mutex_unlock(&apart->mlock);
+ apart = ERR_PTR(ret);
+ } else {
+ apart->status = XAIE_PART_STATUS_INUSE;
+ apart->cntrflag = req->flag;
+ mutex_unlock(&apart->mlock);
+ }
}
mutex_unlock(&adev->mlock);
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-internal.h b/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
index bf3a09c..131d22a 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-internal.h
@@ -112,6 +112,22 @@ struct aie_dma_attr {
* @get_tile_type: get type of tile based on tile operation
* @get_mem_info: get different types of memories information
* @reset_shim: reset shim, it will assert and then release SHIM reset
+ * @init_part_clk_state: initialize the clock states software structure, a
+ * bitmap for the AI engine partition which tracks
+ * whether the modules in the partition are gated.
+ * @scan_part_clocks: scan partition modules to check whether they are clock
+ * gated, and update the soft clock states structure. It
+ * must be called when the partition is requested so that
+ * the driver knows which modules are clock gated at that
+ * point. The caller must hold the partition lock.
+ * @set_part_clocks: set partition modules clock gate registers based on the
+ * partition clock states bitmap. The caller must hold the
+ * partition lock and must first set the bitmap of which
+ * tiles are required to be clocked on.
*
+ * Each AI engine device version has its own device
+ * operations.
@@ -121,6 +137,9 @@ struct aie_tile_operations {
unsigned int (*get_mem_info)(struct aie_range *range,
struct aie_part_mem *pmem);
int (*reset_shim)(struct aie_device *adev, struct aie_range *range);
+ int (*init_part_clk_state)(struct aie_partition *apart);
+ int (*scan_part_clocks)(struct aie_partition *apart);
+ int (*set_part_clocks)(struct aie_partition *apart);
};
/**
@@ -185,6 +204,8 @@ struct aie_device {
* @range: range of partition
* @mlock: protection for AI engine partition operations
* @dev: device for the AI engine partition
+ * @cores_clk_state: bitmap to indicate the clock state of core modules
+ * @tiles_inuse: bitmap to indicate if a tile is in use
* @partition_id: partition id. Partition ID is the identifier
* of the AI engine partition in the system.
* @status: indicate if the partition is in use
@@ -199,6 +220,8 @@ struct aie_partition {
struct aie_range range;
struct mutex mlock; /* protection for AI engine partition operations */
struct device dev;
+ struct aie_resource cores_clk_state;
+ struct aie_resource tiles_inuse;
u32 partition_id;
u32 status;
u32 cntrflag;
@@ -308,6 +331,9 @@ int aie_resource_check_region(struct aie_resource *res, u32 start,
int aie_resource_get_region(struct aie_resource *res, u32 start,
u32 count);
void aie_resource_put_region(struct aie_resource *res, int start, u32 count);
+int aie_resource_set(struct aie_resource *res, u32 start, u32 count);
+int aie_resource_clear(struct aie_resource *res, u32 start, u32 count);
+bool aie_resource_testbit(struct aie_resource *res, u32 bit);
const struct file_operations *aie_part_get_fops(void);
u8 aie_part_in_use(struct aie_partition *apart);
@@ -331,5 +357,13 @@ long aie_part_set_dmabuf_bd(struct aie_partition *apart,
void __user *user_args);
void aie_part_release_dmabufs(struct aie_partition *apart);
+int aie_part_scan_clk_state(struct aie_partition *apart);
+bool aie_part_check_clk_enable_loc(struct aie_partition *apart,
+ struct aie_location *loc);
+int aie_part_request_tiles_from_user(struct aie_partition *apart,
+ void __user *user_args);
+int aie_part_release_tiles_from_user(struct aie_partition *apart,
+ void __user *user_args);
+
int aie_device_init(struct aie_device *adev);
#endif /* AIE_INTERNAL_H */
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-part.c b/drivers/misc/xilinx-ai-engine/ai-engine-part.c
index dcfb9ec..54450b6 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-part.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-part.c
@@ -94,6 +94,27 @@ static int aie_part_reg_validation(struct aie_partition *apart, size_t offset,
return -EINVAL;
}
+ /*
+ * Check if a tile is gated before trying to access it.
+ * As the registers are mmap()ed read-only for faster status
+ * queries, and the memories are mmap()ed read/write for faster
+ * memory access, users can still reach clock gated tiles from
+ * userspace through the mmapped space.
+ * Accessing gated tiles can cause decode errors. With the PDI flow,
+ * the PDI sets up the SHIM NOC AXI MM to generate an AI engine error
+ * event instead of an NSU error. For the non-PDI flow, the AXI MM
+ * registers are protected, so until an EEMI API is available to
+ * update them, accessing gated tiles can cause NSU errors.
+ * TODO: To solve this, we need to either request EEMI to configure
+ * AXI MM or split the mmapped space into tiles based lists.
+ */
+ if (!aie_part_check_clk_enable_loc(apart, &loc)) {
+ dev_err(&apart->dev,
+ "Tile(%u,%d) is gated.\n", loc.col, loc.row);
+ return -EINVAL;
+ }
+
if (!is_write)
return 0;
@@ -304,6 +325,10 @@ static long aie_part_ioctl(struct file *fp, unsigned int cmd, unsigned long arg)
return aie_part_detach_dmabuf_req(apart, argp);
case AIE_SET_SHIMDMA_DMABUF_BD_IOCTL:
return aie_part_set_dmabuf_bd(apart, argp);
+ case AIE_REQUEST_TILES_IOCTL:
+ return aie_part_request_tiles_from_user(apart, argp);
+ case AIE_RELEASE_TILES_IOCTL:
+ return aie_part_release_tiles_from_user(apart, argp);
default:
dev_err(&apart->dev, "Invalid ioctl command %u.\n", cmd);
ret = -EINVAL;
@@ -343,6 +368,7 @@ static void aie_part_release_device(struct device *dev)
apart->range.size.col);
list_del(&apart->node);
mutex_unlock(&adev->mlock);
+ aie_resource_uninitialize(&apart->cores_clk_state);
put_device(apart->dev.parent);
}
@@ -466,6 +492,12 @@ static struct aie_partition *aie_create_partition(struct aie_device *adev,
return ERR_PTR(ret);
}
+ ret = adev->ops->init_part_clk_state(apart);
+ if (ret) {
+ put_device(dev);
+ return ERR_PTR(ret);
+ }
+
ret = mutex_lock_interruptible(&adev->mlock);
if (ret) {
put_device(dev);
diff --git a/drivers/misc/xilinx-ai-engine/ai-engine-res.c b/drivers/misc/xilinx-ai-engine/ai-engine-res.c
index 36f08bf..b0c0741 100644
--- a/drivers/misc/xilinx-ai-engine/ai-engine-res.c
+++ b/drivers/misc/xilinx-ai-engine/ai-engine-res.c
@@ -112,3 +112,54 @@ void aie_resource_put_region(struct aie_resource *res, int start, u32 count)
return;
bitmap_clear(res->bitmap, start, count);
}
+
+/**
+ * aie_resource_set() - set the AI engine resource bits
+ * @res: pointer to AI engine resource
+ * @start: start bit to set
+ * @count: number of bits to set
+ * @return: 0 for success and negative value for failure
+ *
+ * This function sets the specified number of bits in the resource.
+ */
+int aie_resource_set(struct aie_resource *res, u32 start, u32 count)
+{
+ if (!res || !res->bitmap || !count || start + count > res->total)
+ return -EINVAL;
+
+ bitmap_set(res->bitmap, start, count);
+ return 0;
+}
+
+/**
+ * aie_resource_clear() - clear the AI engine resource bits
+ * @res: pointer to AI engine resource
+ * @start: start bit to set
+ * @count: number of bits to clear
+ * @return: 0 for success and negative value for failure
+ *
+ * This function clears the specified number of bits in the resource.
+ */
+int aie_resource_clear(struct aie_resource *res, u32 start, u32 count)
+{
+ if (!res || !res->bitmap || !count || start + count > res->total)
+ return -EINVAL;
+
+ bitmap_clear(res->bitmap, start, count);
+ return 0;
+}
+
+/**
+ * aie_resource_testbit() - test if a bit is set in an AI engine resource
+ * @res: pointer to AI engine resource
+ * @bit: bit to check
+ * @return: true for set, false for not set
+ */
+bool aie_resource_testbit(struct aie_resource *res, u32 bit)
+{
+ if (!res || !res->bitmap || bit >= res->total)
+ return false;
+
+ /* Locate the unsigned long the required bit belongs to */
+ return test_bit(bit, res->bitmap);
+}
diff --git a/include/uapi/linux/xlnx-ai-engine.h b/include/uapi/linux/xlnx-ai-engine.h
index 75d9dbf..a090550 100644
--- a/include/uapi/linux/xlnx-ai-engine.h
+++ b/include/uapi/linux/xlnx-ai-engine.h
@@ -146,6 +146,16 @@ struct aie_dmabuf_bd_args {
__u32 bd_id;
};
+/**
+ * struct aie_tiles_array - AIE tiles array
+ * @locs: tiles locations array. An array of struct aie_location.
+ * @num_tiles: number of tiles in the tiles locations array
+ */
+struct aie_tiles_array {
+ __u64 locs;
+ __u32 num_tiles;
+};
+
#define AIE_IOCTL_BASE 'A'
/* AI engine device IOCTL operations */
@@ -204,4 +214,25 @@ struct aie_dmabuf_bd_args {
#define AIE_SET_SHIMDMA_DMABUF_BD_IOCTL _IOW(AIE_IOCTL_BASE, 0x10, \
struct aie_dmabuf_bd_args)
+/**
+ * DOC: AIE_REQUEST_TILES_IOCTL - request AI engine tiles
+ *
+ * This ioctl is used to request tiles.
+ * When the AI engine partition is requested, the kernel driver scans the
+ * partition to track which tiles are enabled. After that, a user
+ * application can use this ioctl to request additional tiles.
+ * If the aie_tiles_array is empty, all tiles in the partition are
+ * requested.
+ */
+#define AIE_REQUEST_TILES_IOCTL _IOW(AIE_IOCTL_BASE, 0xe, \
+ struct aie_tiles_array)
+
+/**
+ * DOC: AIE_RELEASE_TILES_IOCTL - release AI engine tiles
+ *
+ * This ioctl is used to release tiles.
+ */
+#define AIE_RELEASE_TILES_IOCTL _IOW(AIE_IOCTL_BASE, 0xf, \
+ struct aie_tiles_array)
#endif
--
2.7.4
On Sun, Nov 29, 2020 at 11:48:17PM -0800, Wendy Liang wrote:
> Xilinx AI engine array can be partitioned statically for different
> applications. In the device tree, there will be device node for the AI
> engine device, and device nodes for the statically configured AI engine
> partitions. Each of the statically configured partition has a partition
> ID in the system.
>
> Signed-off-by: Wendy Liang <[email protected]>
> ---
> .../bindings/soc/xilinx/xlnx,ai-engine.yaml | 126 +++++++++++++++++++++
> 1 file changed, 126 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
>
> diff --git a/Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml b/Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
> new file mode 100644
> index 0000000..1de5623
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
> @@ -0,0 +1,126 @@
> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> +%YAML 1.2
> +---
> +$id: http://devicetree.org/schemas/soc/xilinx/xlnx,ai-engine.yaml#
> +$schema: http://devicetree.org/meta-schemas/core.yaml#
> +
> +title: Xilinx AI Engine
> +
> +maintainers:
> + - Wendy Liang <[email protected]>
> +
> +description: |+
You don't need '|' unless there's formatting to preserve.
> + The Xilinx AI Engine is a tile processor with many cores (up to 400) that
> + can run in parallel. The data routing between cores is configured through
> + internal switches, and shim tiles interface with external interconnect, such
> + as memory or PL.
> +
> +properties:
> + compatible:
> + const: xlnx,ai-engine-v1.0
This is soft logic? If not, don't use version numbers.
> +
> + reg:
> + description: |
> + Physical base address and length of the device registers.
That's every 'reg' property. Drop.
> + The AI engine address space assigned to Linux is defined by Xilinx
> + platform design tool.
> +
> + '#address-cells':
> + enum: [2]
const: 2
> + description: |
> + size of cell to describe AI engine range of tiles address.
> + It is the location of the starting tile of the range.
> + As the AI engine tiles are 2D array, the location of a tile
> + is presented as (column, row), the address cell is 2.
> +
> + '#size-cells':
> + enum: [2]
> + description: |
> + size of cell to describe AI engine range of tiles size.
> + As the AI engine tiles are 2D array, the size cell is 2.
> +
> + power-domains:
> + maxItems: 1
> + description: phandle to the associated power domain
> +
> + interrupts:
> + maxItems: 3
> +
> + interrupt-names:
> + description: |
> + Should be "interrupt1", "interrupt2" or "interrupt3".
Really, not useful names. If you do have names, they should be a schema,
not freeform text.
> +
> +required:
> + - compatible
> + - reg
> + - '#address-cells'
> + - '#size-cells'
> + - power-domains
> + - interrupt-parent
Generally, never required because it could be in the parent node.
> + - interrupts
> + - interrupt-names
> +
> +patternProperties:
> + "^aie_partition@[0-9]+$":
aie-partition@
The unit-address is just the 1st cell of reg (the row)? Or needs to be
row and column, in which case you'd want something like '@0,0'. Also,
unit-address values are typically hex, not decimal.
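For example, something along these lines (a sketch assuming a column,row
unit-address in hex, per the comment above):

```dts
aie_partition0: aie-partition@0,0 {
        /* 50 columns and 8 core tile rows + 1 SHIM row */
        reg = <0 0 50 9>;
        xlnx,partition-id = <1>;
};
```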
> + type: object
> + description: |
> + AI engine partition which is a group of column based tiles of the AI
> + engine device. Each AI engine partition is isolated from the other
> + AI engine partitions. An AI engine partition is defined by Xilinx
> + platform design tools. Each partition has a SHIM row and core tiles rows.
> + A SHIM row contains SHIM tiles which are the interface to external
> + components. AXI master can access AI engine registers, push data to and
> + fetch data from AI engine through the SHIM tiles. Core tiles are the
> + compute tiles.
> +
> + properties:
> + reg:
> + description: |
> + It describes the group of tiles of the AI engine partition. It needs
> + to include the SHIM row. The format is defined by the parent AI engine
> + device node's '#address-cells' and '#size-cells' properties. e.g. a v1
> + AI engine device has 2D tiles array, the first row is SHIM row. A
> + partition which has 50 columns and 8 rows of core tiles and 1 row of
> + SHIM tiles will be presented as <0 0 50 9>.
You should be able to write some constraints like max row and column
values?
> +
> + label:
> + maxItems: 1
'label' is not an array. Why do you need label?
> +
> + xlnx,partition-id:
> + $ref: /schemas/types.yaml#/definitions/uint32
> + description: |
> + AI engine partition ID, which is defined by Xilinx platform design
> + tool to identify the AI engine partition in the system.
I find the use of 'reg' a bit odd here. Maybe using 'reg' for partition
would make more sense? Which is more closely associated with how you
address the partition?
> +
> + required:
> + - reg
> + - xlnx,partition-id
> + additionalProperties: false
> +
> +additionalProperties: false
> +
> +examples:
> + - |
> + bus {
> + #address-cells = <2>;
> + #size-cells = <2>;
> +
> + ai_engine: ai-engine@20000000000 {
> + compatible = "xlnx,ai-engine-v1.0";
> + reg = <0x200 0x0 0x1 0x0>;
> + #address-cells = <2>;
> + #size-cells = <2>;
> + power-domains = <&versal_firmware 0x18224072>;
> + interrupt-parent = <&gic>;
> + interrupts = <0x0 0x94 0x4>,
> + <0x0 0x95 0x4>,
> + <0x0 0x96 0x4>;
> + interrupt-names = "interrupt1", "interrupt2", "interrupt3";
> +
> + aie_partition0: aie_partition@0 {
> + /* 50 columns and 8 core tile rows + 1 SHIM row */
> + reg = <0 0 50 9>;
> + xlnx,partition-id = <1>;
> + };
> + };
> + };
> --
> 2.7.4
>
On 12/8/20 3:41 PM, Rob Herring wrote:
> On Sun, Nov 29, 2020 at 11:48:17PM -0800, Wendy Liang wrote:
>> Xilinx AI engine array can be partitioned statically for different
>> applications. In the device tree, there will be device node for the AI
>> engine device, and device nodes for the statically configured AI engine
>> partitions. Each of the statically configured partition has a partition
>> ID in the system.
>>
>> Signed-off-by: Wendy Liang <[email protected]>
>> ---
>> .../bindings/soc/xilinx/xlnx,ai-engine.yaml | 126 +++++++++++++++++++++
>> 1 file changed, 126 insertions(+)
>> create mode 100644 Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
>>
>> diff --git a/Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml b/Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
>> new file mode 100644
>> index 0000000..1de5623
>> --- /dev/null
>> +++ b/Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
>> @@ -0,0 +1,126 @@
>> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
>> +%YAML 1.2
>> +---
>> +$id: http://devicetree.org/schemas/soc/xilinx/xlnx,ai-engine.yaml#
>> +$schema: http://devicetree.org/meta-schemas/core.yaml#
>> +
>> +title: Xilinx AI Engine
>> +
>> +maintainers:
>> + - Wendy Liang <[email protected]>
>> +
>> +description: |+
> You don't need '|' unless there's formatting to preserve.
Will change
>
>> + The Xilinx AI Engine is a tile processor with many cores (up to 400) that
>> + can run in parallel. The data routing between cores is configured through
>> + internal switches, and shim tiles interface with external interconnect, such
>> + as memory or PL.
>> +
>> +properties:
>> + compatible:
>> + const: xlnx,ai-engine-v1.0
> This is soft logic? If not, don't use version numbers.
It is not soft logic. If there is a future version of the device, can we
use a version number in the compatible string to describe the device
version?
>
>> +
>> + reg:
>> + description: |
>> + Physical base address and length of the device registers.
> That's every 'reg' property. Drop.
[Wendy] will drop it.
>
>> + The AI engine address space assigned to Linux is defined by Xilinx
>> + platform design tool.
>> +
>> + '#address-cells':
>> + enum: [2]
> const: 2
Will change
>
>> + description: |
>> + size of cell to describe AI engine range of tiles address.
>> + It is the location of the starting tile of the range.
>> + As the AI engine tiles are 2D array, the location of a tile
>> + is presented as (column, row), the address cell is 2.
>> +
>> + '#size-cells':
>> + enum: [2]
>> + description: |
>> + size of cell to describe AI engine range of tiles size.
>> + As the AI engine tiles are 2D array, the size cell is 2.
>> +
>> + power-domains:
>> + maxItems: 1
>> + description: phandle to the associated power domain
>> +
>> + interrupts:
>> + maxItems: 3
>> +
>> + interrupt-names:
>> + description: |
>> + Should be "interrupt1", "interrupt2" or "interrupt3".
> Really, not useful names. If you do have names, they should be a schema,
> not freeform text.
>
>> +
>> +required:
>> + - compatible
>> + - reg
>> + - '#address-cells'
>> + - '#size-cells'
>> + - power-domains
>> + - interrupt-parent
> Generally, never required because it could be in the parent node.
will remove
>
>> + - interrupts
>> + - interrupt-names
>> +
>> +patternProperties:
>> + "^aie_partition@[0-9]+$":
> aie-partition@
>
> The unit-address is just the 1st cell of reg (the row)? Or needs to be
> row and column, in which case you'd want something like '@0,0'. Also,
> unit-address values are typically hex, not decimal.
It will be col,row. Will change to an address format with the starting
column and row.
>
>> + type: object
>> + description: |
>> + AI engine partition which is a group of column based tiles of the AI
>> + engine device. Each AI engine partition is isolated from the other
>> + AI engine partitions. An AI engine partition is defined by Xilinx
>> + platform design tools. Each partition has a SHIM row and core tiles rows.
>> + A SHIM row contains SHIM tiles which are the interface to external
>> + components. AXI master can access AI engine registers, push data to and
>> + fetch data from AI engine through the SHIM tiles. Core tiles are the
>> + compute tiles.
>> +
>> + properties:
>> + reg:
>> + description: |
>> + It describes the group of tiles of the AI engine partition. It needs
>> + to include the SHIM row. The format is defined by the parent AI engine
>> + device node's '#address-cells' and '#size-cells' properties. e.g. a v1
>> + AI engine device has 2D tiles array, the first row is SHIM row. A
>> + partition which has 50 columns and 8 rows of core tiles and 1 row of
>> + SHIM tiles will be presented as <0 0 50 9>.
> You should be able to write some constraints like max row and column
> values?
The max row and columns depend on the underlying hardware platform. The
driver can get the max allowed rows and columns from the size field of
the "reg" property in the parent AI engine device node.
>
>> +
>> + label:
>> + maxItems: 1
> 'label' is not an array. Why do you need label?
>
>> +
>> + xlnx,partition-id:
>> + $ref: /schemas/types.yaml#/definitions/uint32
>> + description: |
>> + AI engine partition ID, which is defined by Xilinx platform design
>> + tool to identify the AI engine partition in the system.
> I find the use of 'reg' a bit odd here. Maybe using 'reg' for partition
> would make more sense? Which is more closely associated with how you
> address the partition?
I am not clear on this comment; are you referring to partition-id?
The partition ID is generated by the Xilinx tool to identify the AI
engine partition in the firmware. The "reg" of the partition device node
describes the location of the partition within the AI engine device
node.
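For reference, a partition node revised along the lines discussed above
(hyphenated node name, col,row unit-address) might look like this sketch;
the exact naming in the next revision may differ:

```
aie_partition0: aie-partition@0,0 {
        /* 50 columns and 8 core tile rows + 1 SHIM row,
         * starting at column 0, row 0 */
        reg = <0 0 50 9>;
        xlnx,partition-id = <1>;
};
```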
Thanks,
Wendy
>
>> +
>> + required:
>> + - reg
>> + - xlnx,partition-id
>> + additionalProperties: false
>> +
>> +additionalProperties: false
>> +
>> +examples:
>> + - |
>> + bus {
>> + #address-cells = <2>;
>> + #size-cells = <2>;
>> +
>> + ai_engine: ai-engine@20000000000 {
>> + compatible = "xlnx,ai-engine-v1.0";
>> + reg = <0x200 0x0 0x1 0x0>;
>> + #address-cells = <2>;
>> + #size-cells = <2>;
>> + power-domains = <&versal_firmware 0x18224072>;
>> + interrupt-parent = <&gic>;
>> + interrupts = <0x0 0x94 0x4>,
>> + <0x0 0x95 0x4>,
>> + <0x0 0x96 0x4>;
>> + interrupt-names = "interrupt1", "interrupt2", "interrupt3";
>> +
>> + aie_partition0: aie_partition@0 {
>> + /* 50 columns and 8 core tile rows + 1 SHIM row */
>> + reg = <0 0 50 9>;
>> + xlnx,partition-id = <1>;
>> + };
>> + };
>> + };
>> --
>> 2.7.4
>>
On Mon, Nov 30, 2020 at 3:25 AM Wendy Liang <[email protected]> wrote:
>
> AI engine is the acceleration engine provided by Xilinx. These engines
> provide high compute density for vector-based algorithms, and flexible
> custom compute and data movement. It has core tiles for compute and
> shim tiles to interface the FPGA fabric.
>
> You can check the AI engine architecture document for more hardware details:
> https://www.xilinx.com/support/documentation/architecture-manuals/am009-versal-ai-engine.pdf
>
> This patch series adds a Linux kernel driver to manage the Xilinx AI
> engine array device and AI engine partitions (groups of AI engine tiles
> dedicated to an application).
Hi Wendy,
I think it would be good to provide an overview of how your stack
works in general. That would give reviewers a better handle on how
all of this fits together. I'd suggest including an overview in the
cover letter and also in the commit message and/or as a comment in the
code in one of the patches. I'm not really an expert when it comes to
FPGAs, but this basically looks like a pretty low level interface to
set up the data fabric for a kernel that will run on the soft logic or
maybe the microcontroller on the board. It doesn't have to be super
detailed, just a nice flow for how you might use this. E.g.,
Userspace uses ioctls X, Y, Z to configure the data fabric for the
FPGA kernel. The kernels can run on... . DMA access to system memory
for data sets can be allocated using ioctl A. DMA access is limited
by... . The user can then load the FPGA kernel on to one of the
engines using ioctl B and finally they can kick off the whole thing
using ioctl C. FPGA kernels are compiled using YYY toolchain and use
the following runtime (link to runtime) to configure the data
fabric using ioctls X, Y, Z.
It would also be good to go over the security implications of the
design. E.g., can the FPGA kernel(s) access the DMA engine directly,
or is it limited to just the DMA regions set up by the ioctls? Also,
does the hardware and software design allow for multiple users? If
so, how does that work?
Thanks,
Alex
>
> v3:
> * unlock AIE dev mutex after failing to gain the partition lock in
> error handling
> * replace pointer with __u64 and enum with __u32 in ioctl
>
> v2:
> * Fix dtschema check errors
> * Fix test bot warning on interrupt implementation. Removed set but
> unused variable.
> * Fix compilation unused function warning of firmware change in case
> ZynqMP firmware is not configured
> * There are other warnings on ZynqMP firmware reported from testbot
> which are not introduced by this patch set.
> "[PATCH] firmware: xlnx-zynqmp: fix compilation warning" is submitted
> for those fixes.
>
>
> Izhar Ameer Shaikh (1):
> firmware: xilinx: Add IOCTL support for AIE ISR Clear
>
> Nishad Saraf (2):
> misc: xilinx-ai-engine: Add support to request device management
> services
> misc: xilinx-ai-engine: Add support for servicing error interrupts
>
> Wendy Liang (6):
> dt-binding: soc: xilinx: ai-engine: Add AI engine binding
> misc: Add Xilinx AI engine device driver
> misc: xilinx-ai-engine: Implement AI engine cleanup sequence
> misc: xilinx-ai-engine: expose AI engine tile memories to userspace
> misc: xilinx-ai-engine: add setting shim dma bd operation
> misc: xilinx-ai-engine: add request and release tiles
>
> .../bindings/soc/xilinx/xlnx,ai-engine.yaml | 126 ++++
> MAINTAINERS | 8 +
> drivers/firmware/xilinx/zynqmp.c | 14 +
> drivers/misc/Kconfig | 12 +
> drivers/misc/Makefile | 1 +
> drivers/misc/xilinx-ai-engine/Makefile | 16 +
> drivers/misc/xilinx-ai-engine/ai-engine-aie.c | 608 +++++++++++++++++++
> drivers/misc/xilinx-ai-engine/ai-engine-clock.c | 245 ++++++++
> drivers/misc/xilinx-ai-engine/ai-engine-dev.c | 496 ++++++++++++++++
> drivers/misc/xilinx-ai-engine/ai-engine-dma.c | 481 +++++++++++++++
> drivers/misc/xilinx-ai-engine/ai-engine-internal.h | 519 ++++++++++++++++
> .../misc/xilinx-ai-engine/ai-engine-interrupt.c | 659 +++++++++++++++++++++
> drivers/misc/xilinx-ai-engine/ai-engine-mem.c | 275 +++++++++
> drivers/misc/xilinx-ai-engine/ai-engine-part.c | 635 ++++++++++++++++++++
> drivers/misc/xilinx-ai-engine/ai-engine-res.c | 219 +++++++
> drivers/misc/xilinx-ai-engine/ai-engine-reset.c | 159 +++++
> include/linux/firmware/xlnx-zynqmp.h | 8 +
> include/uapi/linux/xlnx-ai-engine.h | 238 ++++++++
> 18 files changed, 4719 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
> create mode 100644 drivers/misc/xilinx-ai-engine/Makefile
> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-aie.c
> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-clock.c
> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dev.c
> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dma.c
> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-internal.h
> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-interrupt.c
> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-mem.c
> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-part.c
> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-res.c
> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-reset.c
> create mode 100644 include/uapi/linux/xlnx-ai-engine.h
>
> --
> 2.7.4
>
> _______________________________________________
> dri-devel mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
Hi all
On Fri, Dec 11, 2020 at 8:03 PM Alex Deucher <[email protected]> wrote:
>
> On Mon, Nov 30, 2020 at 3:25 AM Wendy Liang <[email protected]> wrote:
> >
> > AI engine is the acceleration engine provided by Xilinx. These engines
> > provide high compute density for vector-based algorithms, and flexible
> > custom compute and data movement. It has core tiles for compute and
> > shim tiles to interface the FPGA fabric.
> >
> > You can check the AI engine architecture document for more hardware details:
> > https://www.xilinx.com/support/documentation/architecture-manuals/am009-versal-ai-engine.pdf
> >
> > This patch series adds a Linux kernel driver to manage the Xilinx AI
> > engine array device and AI engine partitions (groups of AI engine tiles
> > dedicated to an application).
>
> Hi Wendy,
>
> I think it would be good to provide an overview of how your stack
> works in general. That would give reviewers a better handle on how
> all of this fits together. I'd suggest including an overview in the
> cover letter and also in the commit message and/or as a comment in the
> code in one of the patches. I'm not really an expert when it comes to
> FPGAs, but this basically looks like a pretty low level interface to
> set up the data fabric for a kernel that will run on the soft logic or
> maybe the microcontroller on the board. It doesn't have to be super
> detailed, just a nice flow for how you might use this. E.g.,
>
> Userspace uses ioctls X, Y, Z to configure the data fabric for the
> FPGA kernel. The kernels can run on... . DMA access to system memory
> for data sets can be allocated using ioctl A. DMA access is limited
> by... . The user can then load the FPGA kernel on to one of the
> engines using ioctl B and finally they can kick off the whole thing
> using ioctl C. FPGA kernels are compiled using YYY toolchain and use
> the following runtime (link to runtime) to configure the data
> fabric using ioctls X, Y, Z.
At least for drm drivers we ideally have that as a .rst file in
Documentation/. With that you can even do full svg graphs, or just dot
graphs, of the overall stack if you really want to go overboard :-)
> It would also be good to go over the security implications of the
> design. E.g., can the FPGA kernel(s) access the DMA engine directly,
> or is it limited to just the DMA regions set up by the ioctls? Also,
> does the hardware and software design allow for multiple users? If
> so, how does that work?
I've also seen indications that there's some on-chip or on-card
memory. How that's planned to be used and whether we want to manage
this (maybe even with something like ttm) would be good to understand.
All excellent questions from Alex, just figured I'd add some more.
Cheers, Daniel
> Thanks,
>
> Alex
>
>
> >
> > v3:
> > * unlock AIE dev mutex after failing to gain the partition lock in
> > error handling
> > * replace pointer with __u64 and enum with __u32 in ioctl
> >
> > v2:
> > * Fix dtschema check errors
> > * Fix test bot warning on interrupt implementation. Removed set but
> > unused variable.
> > * Fix compilation unused function warning of firmware change in case
> > ZynqMP firmware is not configured
> > * There are other warnings on ZynqMP firmware reported from testbot
> > which are not introduced by this patch set.
> > "[PATCH] firmware: xlnx-zynqmp: fix compilation warning" is submitted
> > for those fixes.
> >
> >
> > Izhar Ameer Shaikh (1):
> > firmware: xilinx: Add IOCTL support for AIE ISR Clear
> >
> > Nishad Saraf (2):
> > misc: xilinx-ai-engine: Add support to request device management
> > services
> > misc: xilinx-ai-engine: Add support for servicing error interrupts
> >
> > Wendy Liang (6):
> > dt-binding: soc: xilinx: ai-engine: Add AI engine binding
> > misc: Add Xilinx AI engine device driver
> > misc: xilinx-ai-engine: Implement AI engine cleanup sequence
> > misc: xilinx-ai-engine: expose AI engine tile memories to userspace
> > misc: xilinx-ai-engine: add setting shim dma bd operation
> > misc: xilinx-ai-engine: add request and release tiles
> >
> > .../bindings/soc/xilinx/xlnx,ai-engine.yaml | 126 ++++
> > MAINTAINERS | 8 +
> > drivers/firmware/xilinx/zynqmp.c | 14 +
> > drivers/misc/Kconfig | 12 +
> > drivers/misc/Makefile | 1 +
> > drivers/misc/xilinx-ai-engine/Makefile | 16 +
> > drivers/misc/xilinx-ai-engine/ai-engine-aie.c | 608 +++++++++++++++++++
> > drivers/misc/xilinx-ai-engine/ai-engine-clock.c | 245 ++++++++
> > drivers/misc/xilinx-ai-engine/ai-engine-dev.c | 496 ++++++++++++++++
> > drivers/misc/xilinx-ai-engine/ai-engine-dma.c | 481 +++++++++++++++
> > drivers/misc/xilinx-ai-engine/ai-engine-internal.h | 519 ++++++++++++++++
> > .../misc/xilinx-ai-engine/ai-engine-interrupt.c | 659 +++++++++++++++++++++
> > drivers/misc/xilinx-ai-engine/ai-engine-mem.c | 275 +++++++++
> > drivers/misc/xilinx-ai-engine/ai-engine-part.c | 635 ++++++++++++++++++++
> > drivers/misc/xilinx-ai-engine/ai-engine-res.c | 219 +++++++
> > drivers/misc/xilinx-ai-engine/ai-engine-reset.c | 159 +++++
> > include/linux/firmware/xlnx-zynqmp.h | 8 +
> > include/uapi/linux/xlnx-ai-engine.h | 238 ++++++++
> > 18 files changed, 4719 insertions(+)
> > create mode 100644 Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
> > create mode 100644 drivers/misc/xilinx-ai-engine/Makefile
> > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-aie.c
> > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-clock.c
> > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dev.c
> > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dma.c
> > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-internal.h
> > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-interrupt.c
> > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-mem.c
> > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-part.c
> > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-res.c
> > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-reset.c
> > create mode 100644 include/uapi/linux/xlnx-ai-engine.h
> >
> > --
> > 2.7.4
> >
> > _______________________________________________
> > dri-devel mailing list
> > [email protected]
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> _______________________________________________
> dri-devel mailing list
> [email protected]
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On 12/11/20 11:39 AM, Daniel Vetter wrote:
> Hi all
>
> On Fri, Dec 11, 2020 at 8:03 PM Alex Deucher<[email protected]> wrote:
>> On Mon, Nov 30, 2020 at 3:25 AM Wendy Liang<[email protected]> wrote:
>>> AI engine is the acceleration engine provided by Xilinx. These engines
>>> provide high compute density for vector-based algorithms, and flexible
>>> custom compute and data movement. It has core tiles for compute and
>>> shim tiles to interface the FPGA fabric.
>>>
>>> You can check the AI engine architecture document for more hardware details:
>>> https://www.xilinx.com/support/documentation/architecture-manuals/am009-versal-ai-engine.pdf
>>>
>>> This patch series adds a Linux kernel driver to manage the Xilinx AI
>>> engine array device and AI engine partitions (groups of AI engine tiles
>>> dedicated to an application).
>> Hi Wendy,
>>
>> I think it would be good to provide an overview of how your stack
>> works in general. That would give reviewers a better handle on how
>> all of this fits together. I'd suggest including an overview in the
>> cover letter and also in the commit message and/or as a comment in the
>> code in one of the patches. I'm not really an expert when it comes to
>> FPGAs, but this basically looks like a pretty low level interface to
>> set up the data fabric for a kernel that will run on the soft logic or
>> maybe the microcontroller on the board. It doesn't have to be super
>> detailed, just a nice flow for how you might use this. E.g.,
>>
>> Userspace uses ioctls X, Y, Z to configure the data fabric for the
>> FPGA kernel. The kernels can run on... . DMA access to system memory
>> for data sets can be allocated using ioctl A. DMA access is limited
>> by... . The user can then load the FPGA kernel on to one of the
>> engines using ioctl B and finally they can kick off the whole thing
>> using ioctl C. FPGA kernels are compiled using YYY toolchain and use
>> the following runtime (link to runtime) to configure the data
>> fabric using ioctls X, Y, Z.
> At least for drm drivers we ideally have that as a .rst file in
> Documentation/. With that you can even do full svg graphs, or just dot
> graphs, of the overall stack if you really want to go overboard :-)
>
>> It would also be good to go over the security implications of the
>> design. E.g., can the FPGA kernel(s) access the DMA engine directly,
>> or is it limited to just the DMA regions set up by the ioctls? Also,
>> does the hardware and software design allow for multiple users? If
>> so, how does that work?
> I've also seen indications that there's some on-chip or on-card
> memory. How that's planned to be used and whether we want to manage
> this (maybe even with something like ttm) would be good to understand.
>
> All excellent questions from Alex, just figured I add some more.
>
> Cheers, Daniel
Hi Alex, Daniel,
Below is an overview of the driver.
The AI engine kernel driver manages the Xilinx AI engine device. An AI engine
device contains core tiles and SHIM tiles. Core tiles are the computation
tiles; the SHIM tiles interface to external components.
         +--------+--------+--------+--------+
         | Core   | Core   | Core   | Core   | ...
         |        |        |        |        |
         +--------+--------+--------+--------+
         | Core   | Core   | Core   | Core   | ...
         |        |        |        |        |
         +--------+--------+--------+--------+
          ...
         +--------+--------+--------+--------+
         | SHIM   | SHIM   | SHIM   | SHIM   |
         | PL     | PL     | PL     | PL|NOC |
         +---+----+---+----+---+----+-+--+--+-+
  AXI Streams |        |        |    |     | AXI MM
              |        |        |    |     |
Event Signals |        |        |    |     |
              |        |        |    |     |
         +----+--------+--------+----+--+ +-+-------+
         |             FPGA            |  |   NOC   |
         |                             |  |         |
         +-----------------------------+  +----+----+
                                               |
                                          +----+----+
                                          |   DDR   |
                                          +---------+
Each core tile contains a computing module, local memory and a DMA module. The
local memory DMA module takes data from or to the AXI streams and writes it to
or reads it from the local memory. The computing module can also directly
get/put data from/to the AXI streams. The AIE SHIM enables AIE tiles to get/put
data from/to AXI streams from the FPGA, and enables an external master to
access the AI engine address space through AXI MM. The SHIM NoC module has a
DMA engine, which can access external memory through AXI MM and push it to the
internal AXI streams.

At runtime, the AI engine tiles' interconnection needs to be configured so that
each tile can fetch data from external components or adjacent tiles, and the AI
engine core program needs to be loaded. Then the user application can push data
to the AI engine array and start/stop the AI engine cores. AI engine device
errors can be raised as events; the AI engine kernel driver listens to the
events interrupt to monitor runtime asynchronous device errors.
Instead of the application directly interacting with the AI engine kernel
APIs, user applications/libraries interact with the AI engine userspace
library:
https://github.com/Xilinx/embeddedsw/tree/master/XilinxProcessorIPLib/drivers/aienginev2
It provides a cross-OS low-level functional abstraction, such as how to
connect one stream port to another stream port, or how to configure a core
tile's local DMA.

The AI engine library can be used by other runtime libraries such as the
Xilinx runtime (XRT) library: https://xilinx.github.io/XRT/master/html/index.html,
which provides an acceleration abstraction for Xilinx accelerators and has
extensions to interface to other acceleration frameworks such as OpenCL.
XRT provides buffer handling abstractions for user applications to share
data between the application and devices.
Here is an example of application runtime stack:
            +----------------------------+
            |        Application         |
            +----------------------------+
            |            XRT             |
            +----------------------------+
            |        AIE Library         |
            +----------------------------+
  Userspace
  ------------------------------------------
  Kernel
            +----------------------------+
            |       AIE Partition        |
            +----------------------------+
            |        AIE Device          |
            +----------------------------+
The AI engine kernel driver provides the following user interfaces:
 * The AIE device driver is the root device driver that manages the partitions
   of the AI engine device array. The AI engine array can be partitioned into
   column-wise isolated partitions. Each application can only access its own
   partitions.
 * The AIE device driver monitors the interrupt from the AI engine device. All
   AI engine tiles share the same interrupt for error events.
 * The AIE partition driver controls address mapping and access of the
   registers/local memories of the tiles within a partition.
   * It provides an mmap operation to enable an application to directly access
     the tiles' local memories for small data updates, such as parameter
     updates, for performance.
   * It provides an mmap operation to map all the registers as read-only so an
     application can poll registers efficiently to check status.
   * It provides an ioctl for userspace to pass I/O commands to write/mask-write
     the registers. What to configure is defined by userspace. Userspace passes
     the I/O command sequence to the kernel driver, and the kernel driver
     validates the commands before it writes to the registers.
   * It provides an ioctl to import a dmabuf and an ioctl to configure the DMA
     module in the SHIM tile, which can access memory outside the AI engine
     array.
Buffer management is out of the scope of this driver. In the above example, the
user application uses the Xilinx runtime (XRT); XRT is the one that manages the
buffers.
Best Regards,
Wendy
>
>> Thanks,
>>
>> Alex
>>
>>
>>> v3:
>>> * unlock AIE dev mutex after failing to gain the partition lock in
>>> error handling
>>> * replace pointer with __u64 and enum with __u32 in ioctl
>>>
>>> v2:
>>> * Fix dtschema check errors
>>> * Fix test bot warning on interrupt implementation. Removed set but
>>> unused variable.
>>> * Fix compilation unused function warning of firmware change in case
>>> ZynqMP firmware is not configured
>>> * There are other warnings on ZynqMP firmware reported from testbot
>>> which are not introduced by this patch set.
>>> "[PATCH] firmware: xlnx-zynqmp: fix compilation warning" is submitted
>>> for those fixes.
>>>
>>>
>>> Izhar Ameer Shaikh (1):
>>> firmware: xilinx: Add IOCTL support for AIE ISR Clear
>>>
>>> Nishad Saraf (2):
>>> misc: xilinx-ai-engine: Add support to request device management
>>> services
>>> misc: xilinx-ai-engine: Add support for servicing error interrupts
>>>
>>> Wendy Liang (6):
>>> dt-binding: soc: xilinx: ai-engine: Add AI engine binding
>>> misc: Add Xilinx AI engine device driver
>>> misc: xilinx-ai-engine: Implement AI engine cleanup sequence
>>> misc: xilinx-ai-engine: expose AI engine tile memories to userspace
>>> misc: xilinx-ai-engine: add setting shim dma bd operation
>>> misc: xilinx-ai-engine: add request and release tiles
>>>
>>> .../bindings/soc/xilinx/xlnx,ai-engine.yaml | 126 ++++
>>> MAINTAINERS | 8 +
>>> drivers/firmware/xilinx/zynqmp.c | 14 +
>>> drivers/misc/Kconfig | 12 +
>>> drivers/misc/Makefile | 1 +
>>> drivers/misc/xilinx-ai-engine/Makefile | 16 +
>>> drivers/misc/xilinx-ai-engine/ai-engine-aie.c | 608 +++++++++++++++++++
>>> drivers/misc/xilinx-ai-engine/ai-engine-clock.c | 245 ++++++++
>>> drivers/misc/xilinx-ai-engine/ai-engine-dev.c | 496 ++++++++++++++++
>>> drivers/misc/xilinx-ai-engine/ai-engine-dma.c | 481 +++++++++++++++
>>> drivers/misc/xilinx-ai-engine/ai-engine-internal.h | 519 ++++++++++++++++
>>> .../misc/xilinx-ai-engine/ai-engine-interrupt.c | 659 +++++++++++++++++++++
>>> drivers/misc/xilinx-ai-engine/ai-engine-mem.c | 275 +++++++++
>>> drivers/misc/xilinx-ai-engine/ai-engine-part.c | 635 ++++++++++++++++++++
>>> drivers/misc/xilinx-ai-engine/ai-engine-res.c | 219 +++++++
>>> drivers/misc/xilinx-ai-engine/ai-engine-reset.c | 159 +++++
>>> include/linux/firmware/xlnx-zynqmp.h | 8 +
>>> include/uapi/linux/xlnx-ai-engine.h | 238 ++++++++
>>> 18 files changed, 4719 insertions(+)
>>> create mode 100644 Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
>>> create mode 100644 drivers/misc/xilinx-ai-engine/Makefile
>>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-aie.c
>>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-clock.c
>>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dev.c
>>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dma.c
>>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-internal.h
>>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-interrupt.c
>>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-mem.c
>>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-part.c
>>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-res.c
>>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-reset.c
>>> create mode 100644 include/uapi/linux/xlnx-ai-engine.h
>>>
>>> --
>>> 2.7.4
>>>
>>> _______________________________________________
>>> dri-devel mailing list
>>> [email protected]
>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>> _______________________________________________
>> dri-devel mailing list
>> [email protected]
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
On Mon, Dec 14, 2020 at 04:24:17PM -0800, Jiaying Liang wrote:
>
> On 12/11/20 11:39 AM, Daniel Vetter wrote:
> > Hi all
> >
> > On Fri, Dec 11, 2020 at 8:03 PM Alex Deucher<[email protected]> wrote:
> > > On Mon, Nov 30, 2020 at 3:25 AM Wendy Liang<[email protected]> wrote:
> > > > AI engine is the acceleration engine provided by Xilinx. These engines
> > > > provide high compute density for vector-based algorithms, and flexible
> > > > custom compute and data movement. It has core tiles for compute and
> > > > shim tiles to interface the FPGA fabric.
> > > >
> > > > You can check the AI engine architecture document for more hardware details:
> > > > https://www.xilinx.com/support/documentation/architecture-manuals/am009-versal-ai-engine.pdf
> > > >
> > > > This patch series adds a Linux kernel driver to manage the Xilinx AI
> > > > engine array device and AI engine partitions (groups of AI engine tiles
> > > > dedicated to an application).
> > > Hi Wendy,
> > >
> > > I think it would be good to provide an overview of how your stack
> > > works in general. That would give reviewers a better handle on how
> > > all of this fits together. I'd suggest including an overview in the
> > > cover letter and also in the commit message and/or as a comment in the
> > > code in one of the patches. I'm not really an expert when it comes to
> > > FPGAs, but this basically looks like a pretty low level interface to
> > > set up the data fabric for a kernel that will run on the soft logic or
> > > maybe the microcontroller on the board. It doesn't have to be super
> > > detailed, just a nice flow for how you might use this. E.g.,
> > >
> > > Userspace uses ioctls X, Y, Z to configure the data fabric for the
> > > FPGA kernel. The kernels can run on... . DMA access to system memory
> > > for data sets can be allocated using ioctl A. DMA access is limited
> > > by... . The user can then load the FPGA kernel on to one of the
> > > engines using ioctl B and finally they can kick off the whole thing
> > > using ioctl C. FPGA kernels are compiled using YYY toolchain and use
> > > the following runtime (link to runtime) to configure the data
> > > fabric using ioctls X, Y, Z.
> > At least for drm drivers we ideally have that as a .rst file in
> > Documentation/. With that you can even do full svg graphs, or just dot
> > graphs, of the overall stack if you really want to go overboard :-)
> >
> > > It would also be good to go over the security implications of the
> > > design. E.g., can the FPGA kernel(s) access the DMA engine directly,
> > > or is it limited to just the DMA regions set up by the ioctls? Also,
> > > does the hardware and software design allow for multiple users? If
> > > so, how does that work?
> > I've also seen indications that there's some on-chip or on-card
> > memory. How that's planned to be used and whether we want to manage
> > this (maybe even with something like ttm) would be good to understand.
> >
> > All excellent questions from Alex, just figured I add some more.
> >
> > Cheers, Daniel
>
> Hi Alex, Daniel,
>
> Below is an overview of the driver.
>
> The AI engine kernel driver manages the Xilinx AI engine device. An AI
> engine device contains core tiles and SHIM tiles. Core tiles are the
> computation tiles; the SHIM tiles interface to external components.
>
> +--------+--------+--------+--------+
> | Core   | Core   | Core   | Core   | ...
> |        |        |        |        |
> +-----------------------------------+
> | Core   | Core   | Core   | Core   | ...
> |        |        |        |        |
> +--------+--------+--------+---------
>   ...
> +--------+--------+--------+--------+
> | SHIM   | SHIM   | SHIM   | SHIM   |
> | PL     | PL     | PL     | PL NOC |
> +---+----+---+----+---+----+--+--+--+
>     |        |        |      |  |
> AXI Streams  |        |      |  | AXI MM
>     |        |        |      |  |
> Events/Signals        |      |  |
>     |        |        |      |  |
> +---+--------+--------+------+ +-+-------+
> |           FPGA             | |   NOC   |
> |                            | |         |
> +----------------------------+ +-+-------+
>                                  |
>                                  |
>                              +---+------+
>                              |   DDR    |
>                              +----------+
>
Your diagram here didn't survive email unfortunately :-/
Quick question: Where's the fpga driver for this chip? I'm assuming it's
something separate.
> Each core tile contains a computing module, local memory, and a DMA module.
> The local memory DMA module takes data from or to the AXI streams and
> writes it to or reads it from the local memory. The computing module can
> also directly get/put data from/to the AXI streams. The AIE SHIM enables
> AIE tiles to get/put data from/to AXI streams from the FPGA, and enables
> an external master to access the AI engine address space through AXI MM.
> The SHIM NoC module has a DMA engine, which can access external memory
> through AXI MM and push it to the internal AXI streams.
>
> At runtime, the AI engine tile interconnection needs to be configured so
> that it can fetch data from external components or adjacent tiles, and the
> AI engine core program needs to be loaded. The user application can then
> push data to the AI engine array and start/stop the AI engine cores. AI
> engine device errors can be raised as events; the AI engine kernel driver
> listens to the event interrupt to monitor runtime asynchronous device
> errors.
>
> Instead of the application directly interacting with the AI engine kernel
> APIs, user applications/libraries interact with the AI engine userspace
> library:
> https://github.com/Xilinx/embeddedsw/tree/master/XilinxProcessorIPLib/drivers/aienginev2
> It provides a cross-OS low-level functional abstraction, such as how to
> connect one stream port to another stream port, or how to configure a core
> tile's local DMA.
>
> The AI engine library can be used by other runtime libraries such as the
> Xilinx runtime (XRT) library:
> https://xilinx.github.io/XRT/master/html/index.html
> which provides an acceleration abstraction for Xilinx accelerators and has
> extensions to interface to other acceleration frameworks such as OpenCL.
> XRT provides buffer handling abstractions for the user application to
> share data between the application and devices.
>
> Here is an example of application runtime stack:
>
>           +----------------------------+
>           |        Application         |
>           |                            |
>           +----------------------------+
>           |            XRT             |
>           |                            |
>           +----------------------------+
>           |        AIE Library         |
>           |                            |
>           +----------------------------+
> User  -----------------------------------
> Kernel    +----------------------------+
>           |       AIE Partition        |
>           +----------------------------+
>           +----------------------------+
>           |         AIE Device         |
>           |                            |
>           +----------------------------+
>
>
> The AI engine kernel driver provides the following user interfaces:
>  * The AIE device driver is the root device driver that manages the
>    partitions of the AI engine device array. The AI engine array can be
>    partitioned into column-wise isolated partitions. Each application can
>    only access its own partitions.
>  * The AIE device driver monitors the interrupt from the AI engine
>    device. All AI engine tiles share the same interrupt for error events.
>  * The AIE partition driver controls address mapping and access of the
>    registers/local memories of the tiles within a partition.
>    * It provides an mmap operation to enable the application to directly
>      access the tiles' local memories for small data updates, such as
>      parameter updates, for performance.
>    * It provides an mmap operation to map all the registers as read-only
>      so the application can poll registers efficiently to check status.
>    * It provides an ioctl for userspace to pass I/O commands to write or
>      mask-write the registers. What to configure is defined by userspace.
>      Userspace passes the I/O command sequence to the kernel driver, and
>      the kernel driver validates the commands before writing to the
>      registers.
>    * It provides an ioctl to import a dmabuf and an ioctl to configure
>      the DMA module in the SHIM tile, which can access memory outside the
>      AI engine array.
This sounds a bit like there's no model for running multiple userspace,
aside from hard-partitioning the chip? Could we suspend/resume clients by
saving/restoring that entire mmio range that they set up?
> Buffer management is outside the scope of this driver. In the above
> example, the user application uses the Xilinx runtime (XRT), and XRT is
> the one that manages the buffers.
Somehow you're getting data in/out of these compute tiles, and from your
description it sounds like that's done through special AXI streams that
connect to this NOC thing?
So someone needs to manage that memory, and on the kernel side you
probably need something which can do the dma_map_sg for said memory. So
which kernel driver does that?
In the past there was a drm driver submission for iirc a xilinx fpga, but
we can't take that one because only the runtime, not the compiler, is open
source. Is that the part which provides memory management (together with
the userspace runtime)?
-Daniel
>
>
> Best Regards,
>
> Wendy
>
> >
> > > Thanks,
> > >
> > > Alex
> > >
> > >
> > > > v3:
> > > > * unlock AIE dev mutex after failing to gain the partition lock in
> > > > error handling
> > > > * replace pointer with __u64 and enum with __u32 in ioctl
> > > >
> > > > v2:
> > > > * Fix dtschema check errors
> > > > * Fix test bot warning on interrupt implementation. Removed a set but
> > > > unused variable.
> > > > * Fix an unused-function compilation warning in the firmware change in
> > > > case ZynqMP firmware is not configured
> > > > * There are other warnings on ZynqMP firmware reported by the test bot
> > > > which are not introduced by this patch set.
> > > > "[PATCH] firmware: xlnx-zynqmp: fix compilation warning" is submitted
> > > > for those fixes.
> > > >
> > > >
> > > > Izhar Ameer Shaikh (1):
> > > > firmware: xilinx: Add IOCTL support for AIE ISR Clear
> > > >
> > > > Nishad Saraf (2):
> > > > misc: xilinx-ai-engine: Add support to request device management
> > > > services
> > > > misc: xilinx-ai-engine: Add support for servicing error interrupts
> > > >
> > > > Wendy Liang (6):
> > > > dt-binding: soc: xilinx: ai-engine: Add AI engine binding
> > > > misc: Add Xilinx AI engine device driver
> > > > misc: xilinx-ai-engine: Implement AI engine cleanup sequence
> > > > misc: xilinx-ai-engine: expose AI engine tile memories to userspace
> > > > misc: xilinx-ai-engine: add setting shim dma bd operation
> > > > misc: xilinx-ai-engine: add request and release tiles
> > > >
> > > > .../bindings/soc/xilinx/xlnx,ai-engine.yaml | 126 ++++
> > > > MAINTAINERS | 8 +
> > > > drivers/firmware/xilinx/zynqmp.c | 14 +
> > > > drivers/misc/Kconfig | 12 +
> > > > drivers/misc/Makefile | 1 +
> > > > drivers/misc/xilinx-ai-engine/Makefile | 16 +
> > > > drivers/misc/xilinx-ai-engine/ai-engine-aie.c | 608 +++++++++++++++++++
> > > > drivers/misc/xilinx-ai-engine/ai-engine-clock.c | 245 ++++++++
> > > > drivers/misc/xilinx-ai-engine/ai-engine-dev.c | 496 ++++++++++++++++
> > > > drivers/misc/xilinx-ai-engine/ai-engine-dma.c | 481 +++++++++++++++
> > > > drivers/misc/xilinx-ai-engine/ai-engine-internal.h | 519 ++++++++++++++++
> > > > .../misc/xilinx-ai-engine/ai-engine-interrupt.c | 659 +++++++++++++++++++++
> > > > drivers/misc/xilinx-ai-engine/ai-engine-mem.c | 275 +++++++++
> > > > drivers/misc/xilinx-ai-engine/ai-engine-part.c | 635 ++++++++++++++++++++
> > > > drivers/misc/xilinx-ai-engine/ai-engine-res.c | 219 +++++++
> > > > drivers/misc/xilinx-ai-engine/ai-engine-reset.c | 159 +++++
> > > > include/linux/firmware/xlnx-zynqmp.h | 8 +
> > > > include/uapi/linux/xlnx-ai-engine.h | 238 ++++++++
> > > > 18 files changed, 4719 insertions(+)
> > > > create mode 100644 Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
> > > > create mode 100644 drivers/misc/xilinx-ai-engine/Makefile
> > > > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-aie.c
> > > > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-clock.c
> > > > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dev.c
> > > > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dma.c
> > > > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-internal.h
> > > > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-interrupt.c
> > > > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-mem.c
> > > > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-part.c
> > > > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-res.c
> > > > create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-reset.c
> > > > create mode 100644 include/uapi/linux/xlnx-ai-engine.h
> > > >
> > > > --
> > > > 2.7.4
> > > >
> > > > _______________________________________________
> > > > dri-devel mailing list
> > > > [email protected]
> > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> > > _______________________________________________
> > > dri-devel mailing list
> > > [email protected]
> > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
On Mon, Dec 14, 2020 at 7:24 PM Jiaying Liang <[email protected]> wrote:
>
>
> On 12/11/20 11:39 AM, Daniel Vetter wrote:
> > Hi all
> >
> > On Fri, Dec 11, 2020 at 8:03 PM Alex Deucher<[email protected]> wrote:
> >> On Mon, Nov 30, 2020 at 3:25 AM Wendy Liang<[email protected]> wrote:
> >>> AI engine is the acceleration engine provided by Xilinx. These engines
> >>> provide high compute density for vector-based algorithms, and flexible
> >>> custom compute and data movement. The array has core tiles for compute
> >>> and shim tiles to interface with the FPGA fabric.
> >>>
> >>> You can check the AI engine architecture document for more hardware details:
> >>> https://www.xilinx.com/support/documentation/architecture-manuals/am009-versal-ai-engine.pdf
> >>>
> >>> This patch series adds a Linux kernel driver to manage the Xilinx AI
> >>> engine array device and AI engine partitions (groups of AI engine tiles
> >>> dedicated to an application).
> >> Hi Wendy,
> >>
> >> I think it would be good to provide an overview of how your stack
> >> works in general. That would give reviewers a better handle on how
> >> all of this fits together. I'd suggest including an overview in the
> >> cover letter and also in the commit message and/or as a comment in the
> >> code in one of the patches. I'm not really an expert when it comes to
> >> FPGAs, but this basically looks like a pretty low level interface to
> >> set up the data fabric for a kernel that will run on the soft logic or
> >> maybe the microcontroller on the board. It doesn't have to be super
> >> detailed, just a nice flow for how you might use this. E.g.,
> >>
> >> Userspace uses ioctls X, Y, Z to configure the data fabric for the
> >> FPGA kernel. The kernels can run on... . DMA access to system memory
> >> for data sets can be allocated using ioctl A. DMA access is limited
> >> by... . The user can then load the FPGA kernel on to one of the
> >> engines using ioctl B and finally they can kick off the whole thing
> >> using ioctl C. FPGA kernels are compiled using the YYY toolchain and
> >> use the following runtime (link to runtime) to configure the data
> >> fabric using ioctls X, Y, Z.
> > At least for drm drivers we ideally have that as a .rst file in
> > Documentation/. With that you can even do full svg graphs, or just dot
> > graphs, of the overall stack if you really want to go overboard :-)
> >
> >> It would also be good to go over the security implications of the
> >> design. E.g., can the FPGA kernel(s) access the DMA engine directly,
> >> or is it limited to just the DMA regions set up by the ioctls? Also,
> >> does the hardware and software design allow for multiple users? If
> >> so, how does that work?
> > I've also seen indications that there's some on-chip or on-card
> > memory. How that's planned to be used and whether we want to manage
> > this (maybe even with something like ttm) would be good to understand.
> >
> > All excellent questions from Alex, just figured I add some more.
> >
> > Cheers, Daniel
>
> Hi Alex, Daniel,
>
> Below is an overview of the driver.
>
> The AI engine kernel driver manages the Xilinx AI engine device. An AI
> engine device contains core tiles and SHIM tiles. Core tiles are the
> computation tiles; the SHIM tiles interface to external components.
>
> +--------+--------+--------+--------+
> | Core   | Core   | Core   | Core   | ...
> |        |        |        |        |
> +-----------------------------------+
> | Core   | Core   | Core   | Core   | ...
> |        |        |        |        |
> +--------+--------+--------+---------
>   ...
> +--------+--------+--------+--------+
> | SHIM   | SHIM   | SHIM   | SHIM   |
> | PL     | PL     | PL     | PL NOC |
> +---+----+---+----+---+----+--+--+--+
>     |        |        |      |  |
> AXI Streams  |        |      |  | AXI MM
>     |        |        |      |  |
> Events/Signals        |      |  |
>     |        |        |      |  |
> +---+--------+--------+------+ +-+-------+
> |           FPGA             | |   NOC   |
> |                            | |         |
> +----------------------------+ +-+-------+
>                                  |
>                                  |
>                              +---+------+
>                              |   DDR    |
>                              +----------+
>
> Each core tile contains a computing module, local memory, and a DMA module.
> The local memory DMA module takes data from or to the AXI streams and
> writes it to or reads it from the local memory. The computing module can
> also directly get/put data from/to the AXI streams. The AIE SHIM enables
> AIE tiles to get/put data from/to AXI streams from the FPGA, and enables
> an external master to access the AI engine address space through AXI MM.
> The SHIM NoC module has a DMA engine, which can access external memory
> through AXI MM and push it to the internal AXI streams.
>
> At runtime, the AI engine tile interconnection needs to be configured so
> that it can fetch data from external components or adjacent tiles, and the
> AI engine core program needs to be loaded. The user application can then
> push data to the AI engine array and start/stop the AI engine cores. AI
> engine device errors can be raised as events; the AI engine kernel driver
> listens to the event interrupt to monitor runtime asynchronous device
> errors.
>
> Instead of the application directly interacting with the AI engine kernel
> APIs, user applications/libraries interact with the AI engine userspace
> library:
> https://github.com/Xilinx/embeddedsw/tree/master/XilinxProcessorIPLib/drivers/aienginev2
> It provides a cross-OS low-level functional abstraction, such as how to
> connect one stream port to another stream port, or how to configure a core
> tile's local DMA.
>
> The AI engine library can be used by other runtime libraries such as the
> Xilinx runtime (XRT) library:
> https://xilinx.github.io/XRT/master/html/index.html
> which provides an acceleration abstraction for Xilinx accelerators and has
> extensions to interface to other acceleration frameworks such as OpenCL.
> XRT provides buffer handling abstractions for the user application to
> share data between the application and devices.
>
> Here is an example of application runtime stack:
>
>           +----------------------------+
>           |        Application         |
>           |                            |
>           +----------------------------+
>           |            XRT             |
>           |                            |
>           +----------------------------+
>           |        AIE Library         |
>           |                            |
>           +----------------------------+
> User  -----------------------------------
> Kernel    +----------------------------+
>           |       AIE Partition        |
>           +----------------------------+
>           +----------------------------+
>           |         AIE Device         |
>           |                            |
>           +----------------------------+
>
>
>
> The AI engine kernel driver provides the following user interfaces:
>  * The AIE device driver is the root device driver that manages the
>    partitions of the AI engine device array. The AI engine array can be
>    partitioned into column-wise isolated partitions. Each application can
>    only access its own partitions.
>  * The AIE device driver monitors the interrupt from the AI engine
>    device. All AI engine tiles share the same interrupt for error events.
>  * The AIE partition driver controls address mapping and access of the
>    registers/local memories of the tiles within a partition.
>    * It provides an mmap operation to enable the application to directly
>      access the tiles' local memories for small data updates, such as
>      parameter updates, for performance.
>    * It provides an mmap operation to map all the registers as read-only
>      so the application can poll registers efficiently to check status.
>    * It provides an ioctl for userspace to pass I/O commands to write or
>      mask-write the registers. What to configure is defined by userspace.
>      Userspace passes the I/O command sequence to the kernel driver, and
>      the kernel driver validates the commands before writing to the
>      registers.
>    * It provides an ioctl to import a dmabuf and an ioctl to configure
>      the DMA module in the SHIM tile, which can access memory outside the
>      AI engine array.
>
> Buffer management is outside the scope of this driver. In the above
> example, the user application uses the Xilinx runtime (XRT), and XRT is
> the one that manages the buffers.
>
So if I understand this correctly, this driver handles the resource
management for the AI engines, PLs (programmable logic), and DMA
streams. I think it's important to understand that there are multiple
address spaces here. Normally when we talk about DMA in the kernel we
are referring to devices accessing an external resource like system
memory on the host CPU or another device's MMIO space (e.g., another
PCIe device). It would be good to clarify which address spaces the
DMAs in your diagram refer to. I think the DMAs in the AI engines are
specifically for DMAs within the AI engine logic (e.g., between AIs in
a partition). How is DMA to system memory handled? What about
dedicated memory on the FPGA (e.g., HBM or DDR on the FPGA itself)?
Is that what you are exposing as DMA bufs? When you allocate a
DMA-buf for a partition, is that partition only allowed to access
memory that is part of that DMA buf? I presume there is some
scatter/gather table that sets up the DMA range that the partition can
access? Who loads the soft logic (Is that the PL or some other IP)?
Is the soft logic partitioned as well? If I had some soft logic I
wanted to run on the FPGA, what would the kernel driver interaction
sequence look like? Maybe using the OpenCL soft logic would be a good
example. E.g.,
1. user has soft logic blob generated by their soft logic compiler (is
this compiler open source?)
2. user calls AI engine kernel driver to allocate the required
resources (AI engines, AI engine DMAs, doorbells of some sort? etc.)
3. user calls AI engine kernel driver to allocate system memory and/or
FPGA memory that can be used by the soft logic blob
4. user calls AI engine kernel driver to load soft logic
5. user interfaces with soft logic (how? presumably via some memory
resource allocated in 2 and 3?)
Thanks,
Alex
>
> Best Regards,
>
> Wendy
>
> >
> >> Thanks,
> >>
> >> Alex
> >>
> >>
> >>> v3:
> >>> * unlock AIE dev mutex after failing to gain the partition lock in
> >>> error handling
> >>> * replace pointer with __u64 and enum with __u32 in ioctl
> >>>
> >>> v2:
> >>> * Fix dtschema check errors
> >>> * Fix test bot warning on interrupt implementation. Removed a set but
> >>> unused variable.
> >>> * Fix an unused-function compilation warning in the firmware change in
> >>> case ZynqMP firmware is not configured
> >>> * There are other warnings on ZynqMP firmware reported by the test bot
> >>> which are not introduced by this patch set.
> >>> "[PATCH] firmware: xlnx-zynqmp: fix compilation warning" is submitted
> >>> for those fixes.
> >>>
> >>>
> >>> Izhar Ameer Shaikh (1):
> >>> firmware: xilinx: Add IOCTL support for AIE ISR Clear
> >>>
> >>> Nishad Saraf (2):
> >>> misc: xilinx-ai-engine: Add support to request device management
> >>> services
> >>> misc: xilinx-ai-engine: Add support for servicing error interrupts
> >>>
> >>> Wendy Liang (6):
> >>> dt-binding: soc: xilinx: ai-engine: Add AI engine binding
> >>> misc: Add Xilinx AI engine device driver
> >>> misc: xilinx-ai-engine: Implement AI engine cleanup sequence
> >>> misc: xilinx-ai-engine: expose AI engine tile memories to userspace
> >>> misc: xilinx-ai-engine: add setting shim dma bd operation
> >>> misc: xilinx-ai-engine: add request and release tiles
> >>>
> >>> .../bindings/soc/xilinx/xlnx,ai-engine.yaml | 126 ++++
> >>> MAINTAINERS | 8 +
> >>> drivers/firmware/xilinx/zynqmp.c | 14 +
> >>> drivers/misc/Kconfig | 12 +
> >>> drivers/misc/Makefile | 1 +
> >>> drivers/misc/xilinx-ai-engine/Makefile | 16 +
> >>> drivers/misc/xilinx-ai-engine/ai-engine-aie.c | 608 +++++++++++++++++++
> >>> drivers/misc/xilinx-ai-engine/ai-engine-clock.c | 245 ++++++++
> >>> drivers/misc/xilinx-ai-engine/ai-engine-dev.c | 496 ++++++++++++++++
> >>> drivers/misc/xilinx-ai-engine/ai-engine-dma.c | 481 +++++++++++++++
> >>> drivers/misc/xilinx-ai-engine/ai-engine-internal.h | 519 ++++++++++++++++
> >>> .../misc/xilinx-ai-engine/ai-engine-interrupt.c | 659 +++++++++++++++++++++
> >>> drivers/misc/xilinx-ai-engine/ai-engine-mem.c | 275 +++++++++
> >>> drivers/misc/xilinx-ai-engine/ai-engine-part.c | 635 ++++++++++++++++++++
> >>> drivers/misc/xilinx-ai-engine/ai-engine-res.c | 219 +++++++
> >>> drivers/misc/xilinx-ai-engine/ai-engine-reset.c | 159 +++++
> >>> include/linux/firmware/xlnx-zynqmp.h | 8 +
> >>> include/uapi/linux/xlnx-ai-engine.h | 238 ++++++++
> >>> 18 files changed, 4719 insertions(+)
> >>> create mode 100644 Documentation/devicetree/bindings/soc/xilinx/xlnx,ai-engine.yaml
> >>> create mode 100644 drivers/misc/xilinx-ai-engine/Makefile
> >>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-aie.c
> >>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-clock.c
> >>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dev.c
> >>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-dma.c
> >>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-internal.h
> >>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-interrupt.c
> >>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-mem.c
> >>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-part.c
> >>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-res.c
> >>> create mode 100644 drivers/misc/xilinx-ai-engine/ai-engine-reset.c
> >>> create mode 100644 include/uapi/linux/xlnx-ai-engine.h
> >>>
> >>> --
> >>> 2.7.4
> >>>
> >>> _______________________________________________
> >>> dri-devel mailing list
> >>> [email protected]
> >>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >> _______________________________________________
> >> dri-devel mailing list
> >> [email protected]
> >> https://lists.freedesktop.org/mailman/listinfo/dri-devel
On 12/15/20 7:23 AM, Alex Deucher wrote:
> On Mon, Dec 14, 2020 at 7:24 PM Jiaying Liang<[email protected]> wrote:
>> On 12/11/20 11:39 AM, Daniel Vetter wrote:
>>> Hi all
>>>
>>> On Fri, Dec 11, 2020 at 8:03 PM Alex Deucher<[email protected]> wrote:
>>>> On Mon, Nov 30, 2020 at 3:25 AM Wendy Liang<[email protected]> wrote:
>>>>> AI engine is the acceleration engine provided by Xilinx. These engines
>>>>> provide high compute density for vector-based algorithms, and flexible
>>>>> custom compute and data movement. The array has core tiles for compute
>>>>> and shim tiles to interface with the FPGA fabric.
>>>>>
>>>>> You can check the AI engine architecture document for more hardware details:
>>>>> https://www.xilinx.com/support/documentation/architecture-manuals/am009-versal-ai-engine.pdf
>>>>>
>>>>> This patch series adds a Linux kernel driver to manage the Xilinx AI
>>>>> engine array device and AI engine partitions (groups of AI engine tiles
>>>>> dedicated to an application).
>>>> Hi Wendy,
>>>>
>>>> I think it would be good to provide an overview of how your stack
>>>> works in general. That would give reviewers a better handle on how
>>>> all of this fits together. I'd suggest including an overview in the
>>>> cover letter and also in the commit message and/or as a comment in the
>>>> code in one of the patches. I'm not really an expert when it comes to
>>>> FPGAs, but this basically looks like a pretty low level interface to
>>>> set up the data fabric for a kernel that will run on the soft logic or
>>>> maybe the microcontroller on the board. It doesn't have to be super
>>>> detailed, just a nice flow for how you might use this. E.g.,
>>>>
>>>> Userspace uses ioctls X, Y, Z to configure the data fabric for the
>>>> FPGA kernel. The kernels can run on... . DMA access to system memory
>>>> for data sets can be allocated using ioctl A. DMA access is limited
>>>> by... . The user can then load the FPGA kernel on to one of the
>>>> engines using ioctl B and finally they can kick off the whole thing
>>>> using ioctl C. FPGA kernels are compiled using the YYY toolchain and
>>>> use the following runtime (link to runtime) to configure the data
>>>> fabric using ioctls X, Y, Z.
>>> At least for drm drivers we ideally have that as a .rst file in
>>> Documentation/. With that you can even do full svg graphs, or just dot
>>> graphs, of the overall stack if you really want to go overboard :-)
>>>
>>>> It would also be good to go over the security implications of the
>>>> design. E.g., can the FPGA kernel(s) access the DMA engine directly,
>>>> or is it limited to just the DMA regions set up by the ioctls? Also,
>>>> does the hardware and software design allow for multiple users? If
>>>> so, how does that work?
>>> I've also seen indications that there's some on-chip or on-card
>>> memory. How that's planned to be used and whether we want to manage
>>> this (maybe even with something like ttm) would be good to understand.
>>>
>>> All excellent questions from Alex, just figured I add some more.
>>>
>>> Cheers, Daniel
>> Hi Alex, Daniel,
>>
>> Below is an overview of the driver.
>>
>> The AI engine kernel driver manages the Xilinx AI engine device. An AI
>> engine device contains core tiles and SHIM tiles. Core tiles are the
>> computation tiles; the SHIM tiles interface to external components.
>>
>> +--------+--------+--------+--------+
>> | Core   | Core   | Core   | Core   | ...
>> |        |        |        |        |
>> +-----------------------------------+
>> | Core   | Core   | Core   | Core   | ...
>> |        |        |        |        |
>> +--------+--------+--------+---------
>>   ...
>> +--------+--------+--------+--------+
>> | SHIM   | SHIM   | SHIM   | SHIM   |
>> | PL     | PL     | PL     | PL NOC |
>> +---+----+---+----+---+----+--+--+--+
>>     |        |        |      |  |
>> AXI Streams  |        |      |  | AXI MM
>>     |        |        |      |  |
>> Events/Signals        |      |  |
>>     |        |        |      |  |
>> +---+--------+--------+------+ +-+-------+
>> |           FPGA             | |   NOC   |
>> |                            | |         |
>> +----------------------------+ +-+-------+
>>                                  |
>>                                  |
>>                              +---+------+
>>                              |   DDR    |
>>                              +----------+
>>
>> Each core tile contains a computing module, local memory, and a DMA module.
>> The local memory DMA module takes data from or to the AXI streams and
>> writes it to or reads it from the local memory. The computing module can
>> also directly get/put data from/to the AXI streams. The AIE SHIM enables
>> AIE tiles to get/put data from/to AXI streams from the FPGA, and enables
>> an external master to access the AI engine address space through AXI MM.
>> The SHIM NoC module has a DMA engine, which can access external memory
>> through AXI MM and push it to the internal AXI streams.
>>
>> At runtime, the AI engine tile interconnection needs to be configured so
>> that it can fetch data from external components or adjacent tiles, and the
>> AI engine core program needs to be loaded. The user application can then
>> push data to the AI engine array and start/stop the AI engine cores. AI
>> engine device errors can be raised as events; the AI engine kernel driver
>> listens to the event interrupt to monitor runtime asynchronous device
>> errors.
>>
>> Instead of the application directly interacting with the AI engine kernel
>> APIs, user applications/libraries interact with the AI engine userspace
>> library:
>> https://github.com/Xilinx/embeddedsw/tree/master/XilinxProcessorIPLib/drivers/aienginev2
>> It provides a cross-OS low-level functional abstraction, such as how to
>> connect one stream port to another stream port, or how to configure a core
>> tile's local DMA.
>>
>> The AI engine library can be used by other runtime libraries such as the
>> Xilinx runtime (XRT) library:
>> https://xilinx.github.io/XRT/master/html/index.html
>> which provides an acceleration abstraction for Xilinx accelerators and has
>> extensions to interface to other acceleration frameworks such as OpenCL.
>> XRT provides buffer handling abstractions for the user application to
>> share data between the application and devices.
>>
>> Here is an example of application runtime stack:
>>
>>           +----------------------------+
>>           |        Application         |
>>           |                            |
>>           +----------------------------+
>>           |            XRT             |
>>           |                            |
>>           +----------------------------+
>>           |        AIE Library         |
>>           |                            |
>>           +----------------------------+
>> User  -----------------------------------
>> Kernel    +----------------------------+
>>           |       AIE Partition        |
>>           +----------------------------+
>>           +----------------------------+
>>           |         AIE Device         |
>>           |                            |
>>           +----------------------------+
>>
>>
>>
>> The AI engine kernel driver provides the following user interfaces:
>>  * The AIE device driver is the root device driver that manages the
>>    partitions of the AI engine device array. The AI engine array can be
>>    partitioned into column-wise isolated partitions. Each application can
>>    only access its own partitions.
>>  * The AIE device driver monitors the interrupt from the AI engine
>>    device. All AI engine tiles share the same interrupt for error events.
>>  * The AIE partition driver controls address mapping and access of the
>>    registers/local memories of the tiles within a partition.
>>    * It provides an mmap operation to enable the application to directly
>>      access the tiles' local memories for small data updates, such as
>>      parameter updates, for performance.
>>    * It provides an mmap operation to map all the registers as read-only
>>      so the application can poll registers efficiently to check status.
>>    * It provides an ioctl for userspace to pass I/O commands to write or
>>      mask-write the registers. What to configure is defined by userspace.
>>      Userspace passes the I/O command sequence to the kernel driver, and
>>      the kernel driver validates the commands before writing to the
>>      registers.
>>    * It provides an ioctl to import a dmabuf and an ioctl to configure
>>      the DMA module in the SHIM tile, which can access memory outside the
>>      AI engine array.
>>
>> The buffer management is out of this driver. In the above example, user
>> application
>> uses Xilinx runtime(XRT), XRT is the one to manage the buffers.
>>
> So if I understand this correctly, this driver handles the resource
> management for the AI engines, PLs (programmable logic), and DMA
> streams. I think it's important to understand that there are multiple
> address spaces here. Normally when we talk about DMA in the kernel we
> are referring to devices accessing an external resource like system
> memory on the host CPU or another device's MMIO space (e.g., another
> PCIe device). It would be good to clarify which address spaces the
> DMAs in your diagram refer to. I think the DMAs in the AI engines are
> specifically for DMAs within the AI engine logic (e.g., between AIs in
> a partition). How is DMA to system memory handled? What about
> dedicated memory on the FPGA (e.g., HBM or DDR on the FPGA itself)?
> Is that what you are exposing as DMA bufs? When you allocate a
> DMA-buf for a partition, is that partition only allowed to access
> memory that is part of that DMA buf? I presume there is some
> scatter/gather table that sets up the DMA range that the partition can
> access? Who loads the soft logic (Is that the PL or some other IP)?
> Is the soft logic partitioned as well? If I had some soft logic I
> wanted to run on the FPGA, what would the kernel driver interaction
> sequence look like? Maybe using the OpenCL soft logic would be a good
> example. E.g.,
The AI engine driver only manages the resources within the AI engine
array. There are two types of DMAs in the AI engine device: one is the
AI engine tile local memory DMA, which can only access the local memory.
The other type of DMA is in the SHIM tile; this DMA can access external
address space such as DDR. Although it can access the memory on the FPGA
if the user configures the platform that way, it is preferred to use a
PL data mover to move data between FPGA memory and the AI engine device.
The PL data mover will not be managed by the AI engine driver.
One SHIM DMA has up to 16 buffer descriptors to use.
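As a side note, the 16-descriptor limit means a userspace allocator has to
recycle descriptors. A minimal sketch (not code from this patch set; the
names and the bitmap approach are invented for illustration) of tracking
one SHIM DMA's 16 buffer descriptors:

```c
#include <stdint.h>

/* Hypothetical helper: track the 16 buffer descriptors (BDs) of one
 * SHIM DMA with a bitmap. A set bit means the corresponding BD is in use.
 */
#define AIE_SHIM_NUM_BDS 16

/* Return the index of a free BD and mark it used, or -1 if all 16 BDs
 * are already allocated.
 */
int shim_bd_alloc(uint16_t *bd_map)
{
	for (int i = 0; i < AIE_SHIM_NUM_BDS; i++) {
		if (!(*bd_map & (1U << i))) {
			*bd_map |= (uint16_t)(1U << i);
			return i;
		}
	}
	return -1;
}

/* Mark a previously allocated BD as free again. */
void shim_bd_free(uint16_t *bd_map, int bd)
{
	*bd_map &= (uint16_t)~(1U << bd);
}
```

So an application that queues more than 16 transfers on one SHIM DMA has
to free descriptors as transfers complete before it can program new ones.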
The Xilinx FPGA manager is what is used to program the FPGA soft logic.
E.g., when XRT is used and the AI engine is connected to FPGA logic, the
XRT stack is what manages the configuration sequence.
> 1. user has soft logic blob generated by their soft logic compiler (is
> this compiler open source?)
The soft logic blob is generated by Xilinx tools, which are not open
source yet.
> 2. user calls AI engine kernel driver to allocate the required
> resources (AI engines, AI engine DMAs, doorbells of some sort? etc.)
The user will call the AI engine kernel driver to allocate the required
resources within the AI engine array at runtime.
However, the patches for that are not in this patch set.
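Conceptually, granting a partition amounts to reserving a contiguous,
non-overlapping column range of the array for one application. A sketch of
that check (an illustration only, not code from the patch set; the struct
and function names are made up):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of column-wise partitioning: each partition owns a
 * contiguous range of columns, and ranges granted to different
 * applications must not overlap.
 */
struct aie_part_range {
	unsigned int start_col;
	unsigned int num_cols;
};

/* True if the two column ranges share at least one column. */
bool aie_ranges_overlap(const struct aie_part_range *a,
			const struct aie_part_range *b)
{
	return a->start_col < b->start_col + b->num_cols &&
	       b->start_col < a->start_col + a->num_cols;
}

/* A new request is only grantable if it overlaps none of the partitions
 * already in use.
 */
bool aie_request_grantable(const struct aie_part_range *req,
			   const struct aie_part_range *in_use, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (aie_ranges_overlap(req, &in_use[i]))
			return false;
	return true;
}
```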
> 3. user calls AI engine kernel driver to allocate system memory and/or
> FPGA memory that can be used by the soft logic blob
The AI engine kernel driver doesn't allocate system memory. The user can
use another kernel driver to allocate memory.
E.g., when XRT is used, the user calls the XRT kernel driver (zocl) to
allocate system memory.
So far, the FPGA memory is usually assigned to a soft data mover when the
platform is created. Are you considering having the FPGA memory in the
DMA pool of the system? If it is dedicated to a device, can reserved
memory solve this problem?
The AI engine kernel driver doesn't consider this yet.
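To illustrate the reserved-memory option mentioned above (node names,
addresses, and sizes here are invented; only the `reserved-memory` /
`shared-dma-pool` / `memory-region` conventions are standard devicetree
bindings), a device-dedicated carve-out could look like:

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* Hypothetical carve-out dedicated to a soft data mover. */
	data_mover_mem: memory@60000000 {
		compatible = "shared-dma-pool";
		no-map;
		reg = <0x0 0x60000000 0x0 0x10000000>;
	};
};

data_mover@a0000000 {
	/* Made-up node; its driver allocates DMA memory from the
	 * carve-out referenced below.
	 */
	memory-region = <&data_mover_mem>;
};
```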
> 4. user calls AI engine kernel driver to load soft logic
I assume you are referring to the soft logic on the FPGA side, which is
not part of the AI engine device. The FPGA manager is what loads the
soft logic onto the FPGA.
> 5. user interfaces with soft logic (how? presumably via some memory
> resource allocated in 2 and 3?)
I assume you are referring to the soft logic on the FPGA side (not the
AI engine device).
The user interface to soft logic is managed by the soft logic IP driver.
Each piece of soft logic has some memory-mapped control registers, and
the user can access those registers through the soft logic IP driver.
About memory allocation, I think it is better to manage shared memory
outside of a specific device driver. Are you looking for memory
management that covers both the system memory and the FPGA memory, where
the device can specify which memory it prefers?
Thanks,
Wendy
On Thu, Dec 17, 2020 at 9:40 AM Jiaying Liang <[email protected]> wrote:
>
>
> On 12/15/20 7:23 AM, Alex Deucher wrote:
> > On Mon, Dec 14, 2020 at 7:24 PM Jiaying Liang<[email protected]> wrote:
> >> On 12/11/20 11:39 AM, Daniel Vetter wrote:
> >>> Hi all
> >>>
> >>> On Fri, Dec 11, 2020 at 8:03 PM Alex Deucher<[email protected]> wrote:
> >>>> On Mon, Nov 30, 2020 at 3:25 AM Wendy Liang<[email protected]> wrote:
> >>>>> AI engine is the acceleration engine provided by Xilinx. These engines
> >>>>> provide high compute density for vector-based algorithms, and flexible
> >>>>> custom compute and data movement. It has core tiles for compute and
> >>>>> shim tiles to interface the FPGA fabric.
> >>>>>
> >>>>> You can check the AI engine architecture document for more hardware details:
> >>>>> https://www.xilinx.com/support/documentation/architecture-manuals/am009-versal-ai-engine.pdf
> >>>>>
> >>>>> This patch series adds a Linux kernel driver to manage the Xilinx AI
> >>>>> engine array device and AI engine partitions (groups of AI engine tiles
> >>>>> dedicated to an application).
> >>>> Hi Wendy,
> >>>>
> >>>> I think it would be good to provide an overview of how your stack
> >>>> works in general. That would give reviewers a better handle on how
> >>>> all of this fits together. I'd suggest including an overview in the
> >>>> cover letter and also in the commit message and/or as a comment in the
> >>>> code in one of the patches. I'm not really an expert when it comes to
> >>>> FPGAs, but this basically looks like a pretty low level interface to
> >>>> set up the data fabric for a kernel that will run on the soft logic or
> >>>> maybe the microcontroller on the board. It doesn't have to be super
> >>>> detailed, just a nice flow for how you might use this. E.g.,
> >>>>
> >>>> Userspace uses ioctls X, Y, Z to configure the data fabric for the
> >>>> FPGA kernel. The kernels can run on... . DMA access to system memory
> >>>> for data sets can be allocated using ioctl A. DMA access is limited
> >>>> by... . The user can then load the FPGA kernel on to one of the
> >>>> engines using ioctl B and finally they can kick off the whole thing
> >>>> using ioctl C. FPGA kernels are compiled using the YYY toolchain and
> >>>> use the following runtime (link to runtime) to configure the data
> >>>> fabric using ioctls X, Y, Z.
> >>> At least for drm drivers we ideally have that as a .rst file in
> >>> Documentation/. With that you can even do full svg graphs, or just dot
> >>> graphs, of the overall stack if you really want to go overboard :-)
> >>>
> >>>> It would also be good to go over the security implications of the
> >>>> design. E.g., can the FPGA kernel(s) access the DMA engine directly,
> >>>> or is it limited to just the DMA regions set up by the ioctls? Also,
> >>>> does the hardware and software design allow for multiple users? If
> >>>> so, how does that work?
> >>> I've also seen indications that there's some on-chip or on-card
> >>> memory. How that's planned to be used and whether we want to manage
> >>> this (maybe even with something like ttm) would be good to understand.
> >>>
> >>> All excellent questions from Alex, just figured I add some more.
> >>>
> >>> Cheers, Daniel
> >> Hi Alex, Daniel,
> >>
> >> Below is an overview of the driver.
> >>
> >> The AI engine kernel driver manages the Xilinx AI engine device. An AI
> >> engine device contains core tiles and SHIM tiles. Core tiles are the
> >> computation tiles; SHIM tiles are the tiles interfacing to external
> >> components.
> >>
> >>    +--------+--------+--------+--------+
> >>    |  Core  |  Core  |  Core  |  Core  | ...
> >>    |        |        |        |        |
> >>    +--------+--------+--------+--------+
> >>    |  Core  |  Core  |  Core  |  Core  | ...
> >>    |        |        |        |        |
> >>    +--------+--------+--------+--------+
> >>      ...
> >>    +--------+--------+--------+--------+--------+
> >>    |  SHIM  |  SHIM  |  SHIM  |  SHIM  |  SHIM  |
> >>    |   PL   |   PL   |   PL   |   PL   |  NOC   |
> >>    +---+----+---+----+---+----+---+----+---+----+
> >>  AXI Streams |        |        |        |    | AXI MM
> >>        |     |        |        |        |    |
> >>  Event Signals        |        |        |    |
> >>        |     |        |        |        |    |
> >>    +---+-----+--------+--------+--------+  +-+--------+
> >>    |             FPGA                   |  |   NOC    |
> >>    |                                    |  +-+--------+
> >>    +------------------------------------+    |
> >>                                          +---+------+
> >>                                          |   DDR    |
> >>                                          +----------+
> >>
> >> Each core tile contains a computing module, local memory, and a DMA
> >> module. The local memory DMA module takes data from or to the AXI
> >> streams and writes it to or reads it from the local memory. The
> >> computing module can also directly get/put data from/to the AXI
> >> streams. The AIE SHIM enables AIE tiles to get/put data from/to AXI
> >> streams from the FPGA, and enables an external master to access the AI
> >> engine address space through AXI MM. The SHIM NoC module has a DMA
> >> engine, which can access external memory through AXI MM and push it to
> >> the internal AXI streams.
> >>
> >> At runtime, the AI engine tiles' interconnection needs to be configured
> >> so that it can fetch data from external components or adjacent tiles,
> >> and the AI engine core program needs to be loaded. The user application
> >> can then push data to the AI engine array and start/stop the AI engine
> >> cores. AI engine device errors can be raised as events; the AI engine
> >> kernel driver listens to the events interrupt to monitor runtime
> >> asynchronous device errors.
> >>
Ok, I think the picture is getting clearer. But now I'm wondering why
you have any interactions with dma-buf in this patch series here?
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch