2021-04-27 21:01:34

by Lizhi Hou

Subject: [PATCH V5 XRT Alveo 00/20] XRT Alveo driver overview

Hello,

This is V5 of the patch series which adds the management physical function
driver for Xilinx Alveo PCIe accelerator cards.
https://www.xilinx.com/products/boards-and-kits/alveo.html

This driver is part of Xilinx Runtime (XRT) open source stack.

XILINX ALVEO PLATFORM ARCHITECTURE

Alveo PCIe FPGA based platforms have a static *shell* partition and a
partially re-configurable *user* partition. The shell partition is
automatically loaded from flash when the host is booted and PCIe is
enumerated by the BIOS. The shell cannot be changed until the next cold
reboot. The shell exposes two PCIe physical functions:

1. management physical function
2. user physical function

The patch series includes Documentation/xrt.rst which describes the Alveo
platform, the XRT driver architecture and the deployment model in more detail.

Users compile their high-level design in C/C++/OpenCL or RTL into an FPGA
image using the Vitis tools.
https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html

The compiled image is packaged as an xclbin, which contains the partial
bitstream for the user partition and the necessary metadata. Users can
dynamically swap the image running on the user partition in order to switch
between workloads by loading different xclbins.
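
For illustration, a minimal user space sketch of downloading an xclbin
through the management function is shown below. The ioctl and structure are
the ones added by this series in include/uapi/linux/xrt/xmgnt-ioctl.h; the
device node path passed in is only an assumption:

  /* Sketch only; the device node path is an assumption. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/xrt/xclbin.h>
  #include <linux/xrt/xmgnt-ioctl.h>

  static int download_xclbin(const char *devnode, const char *file)
  {
          struct xmgnt_ioc_bitstream_axlf ioc = { 0 };
          struct axlf *xclbin;
          long size;
          FILE *fp;
          int fd, ret;

          fp = fopen(file, "rb");
          if (!fp)
                  return -1;
          fseek(fp, 0, SEEK_END);
          size = ftell(fp);
          rewind(fp);
          xclbin = malloc(size);
          if (!xclbin || fread(xclbin, 1, size, fp) != (size_t)size) {
                  fclose(fp);
                  free(xclbin);
                  return -1;
          }
          fclose(fp);

          fd = open(devnode, O_RDWR); /* node created by the xmgnt char dev */
          if (fd < 0) {
                  free(xclbin);
                  return -1;
          }
          ioc.xclbin = xclbin; /* user pointer to the axlf image */
          ret = ioctl(fd, XMGNT_IOCICAPDOWNLOAD_AXLF, &ioc);
          close(fd);
          free(xclbin);
          return ret;
  }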

XRT DRIVERS FOR XILINX ALVEO

The XRT Linux kernel driver *xrt-mgnt* binds to the management physical
function of the Alveo platform. The modular driver framework is organized
into several xrt drivers which primarily handle the following functionality:

1. Loading firmware container also called xsabin at driver attach time
2. Loading of user compiled xclbin with FPGA Manager integration
3. Clock scaling of image running on user partition
4. In-band sensors: temp, voltage, power, etc.
5. Device reset and rescan

The xrt drivers are packaged into the *xrt-lib* helper module with
well-defined interfaces. The module provides the bus (xrt_bus_type)
implementation on which these drivers are registered. More details on the
driver model can be found in Documentation/xrt.rst.
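
As a sketch of how a leaf driver plugs into xrt-lib, the skeleton below
mirrors the pattern used by the xmgnt-main leaf later in this series. All
"foo" names, the endpoint name and the subdev ID are placeholders:

  /* Hypothetical leaf driver skeleton; all 'foo' names are placeholders. */
  static int xrt_foo_probe(struct xrt_device *xdev)
  {
          xrt_info(xdev, "probing...");
          return 0;
  }

  static void xrt_foo_remove(struct xrt_device *xdev)
  {
  }

  static int xrt_foo_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
  {
          switch (cmd) {
          case XRT_XLEAF_EVENT:
                  /* react to XRT_EVENT_POST_CREATION etc. */
                  return 0;
          default:
                  return -EINVAL;
          }
  }

  static struct xrt_dev_endpoints xrt_foo_endpoints[] = {
          {
                  .xse_names = (struct xrt_dev_ep_names []) {
                          { .ep_name = "ep_foo_00" }, /* placeholder endpoint */
                          { NULL },
                  },
                  .xse_min_ep = 1,
          },
          { 0 },
  };

  static struct xrt_driver xrt_foo_driver = {
          .driver = { .name = "xrt_foo" },
          .subdev_id = XRT_SUBDEV_FOO, /* placeholder ID */
          .endpoints = xrt_foo_endpoints,
          .probe = xrt_foo_probe,
          .remove = xrt_foo_remove,
          .leaf_call = xrt_foo_leaf_call,
  };

  /* registered/unregistered with xrt_register_driver()/xrt_unregister_driver() */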

The user physical function driver is not included in this patch series.

LIBFDT REQUIREMENT

The XRT driver infrastructure uses Device Tree as the metadata format to
discover HW subsystems in the Alveo PCIe device. The Device Tree schema used
by XRT is documented in Documentation/xrt.rst.
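
For reference, a driver-side sketch of walking that metadata with the
helpers added by this series (the pattern follows lib/subdev.c):

  /* Sketch: enumerate endpoints and their IO ranges from a metadata blob. */
  char *ep_name = NULL, *compat = NULL;
  const u64 *io_range;

  xrt_md_get_next_endpoint(dev, dtb, NULL, NULL, &ep_name, &compat);
  while (ep_name) {
          if (!xrt_md_get_prop(dev, dtb, ep_name, compat, XRT_MD_PROP_IO_OFFSET,
                               (const void **)&io_range, NULL)) {
                  /* io_range[0]/[1]: big-endian offset and length within a BAR */
          }
          xrt_md_get_next_endpoint(dev, dtb, ep_name, compat, &ep_name, &compat);
  }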

TESTING AND VALIDATION

The xrt-mgnt driver can be tested with the full XRT open source stack, which
includes user space libraries, board utilities and the (out-of-tree) first
generation user physical function driver, xocl. The XRT open source runtime
stack is available at https://github.com/Xilinx/XRT

Complete documentation for the XRT open source stack, including sections on
Alveo/XRT security and platform architecture, can be found here:

https://xilinx.github.io/XRT/master/html/index.html
https://xilinx.github.io/XRT/master/html/security.html
https://xilinx.github.io/XRT/master/html/platforms_partitions.html

Changes since v4:
- Added xrt_bus_type and xrt_device. All sub devices were changed from
platform_bus_type/platform_device to xrt_bus_type/xrt_device.
- Renamed xrt-mgmt driver to xrt-mgnt driver.
- Replaced 'MGMT' with 'MGNT' and 'mgmt' with 'mgnt' in code and file names
- Moved pci function calls from infrastructure to xrt-mgnt driver.
- Renamed files: mgmt/main.c -> mgnt/xmgnt-main.c
mgmt/main-region.c -> mgnt/xmgnt-main-region.c
include/xmgmt-main.h -> include/xmgnt-main.h
mgmt/fmgr-drv.c -> mgnt/xrt-mgr.c
mgmt/fmgr.h -> mgnt/xrt-mgr.h
- Updated code base to include v4 code review comments.

Changes since v3:
- Leaf drivers use regmap-mmio to access hardware registers.
- Renamed driver module: xmgmt.ko -> xrt-mgmt.ko
- Renamed files: calib.[c|h] -> ddr_calibration.[c|h],
lib/main.[c|h] -> lib/lib-drv.[c|h],
mgmt/main-impl.h -> mgmt/xmgnt.h
- Updated code base to include v3 code review comments.

Changes since v2:
- Streamlined the driver framework into *xleaf*, *group* and *xroot*
- Updated documentation to show the driver model with examples
- Addressed kernel test robot errors
- Added a selftest for basic driver framework
- Documented device tree schema
- Removed need to export libfdt symbols

Changes since v1:
- Updated the driver to use fpga_region and fpga_bridge for FPGA
programming
- Dropped platform drivers not related to PR programming to focus on XRT
core framework
- Updated Documentation/fpga/xrt.rst with information on XRT core framework
- Addressed checkpatch issues
- Dropped xrt- prefix from some header files

For reference V4 version of patch series can be found here:

https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]/

Lizhi Hou (20):
Documentation: fpga: Add a document describing XRT Alveo drivers
fpga: xrt: driver metadata helper functions
fpga: xrt: xclbin file helper functions
fpga: xrt: xrt-lib driver manager
fpga: xrt: group driver
fpga: xrt: char dev node helper functions
fpga: xrt: root driver infrastructure
fpga: xrt: driver infrastructure
fpga: xrt: management physical function driver (root)
fpga: xrt: main driver for management function device
fpga: xrt: fpga-mgr and region implementation for xclbin download
fpga: xrt: VSEC driver
fpga: xrt: User Clock Subsystem driver
fpga: xrt: ICAP driver
fpga: xrt: devctl xrt driver
fpga: xrt: clock driver
fpga: xrt: clock frequency counter driver
fpga: xrt: DDR calibration driver
fpga: xrt: partition isolation driver
fpga: xrt: Kconfig and Makefile updates for XRT drivers

Documentation/fpga/index.rst | 1 +
Documentation/fpga/xrt.rst | 844 +++++++++++++++++
MAINTAINERS | 11 +
drivers/Makefile | 1 +
drivers/fpga/Kconfig | 2 +
drivers/fpga/Makefile | 5 +
drivers/fpga/xrt/Kconfig | 8 +
drivers/fpga/xrt/include/events.h | 45 +
drivers/fpga/xrt/include/group.h | 25 +
drivers/fpga/xrt/include/metadata.h | 236 +++++
drivers/fpga/xrt/include/subdev_id.h | 38 +
drivers/fpga/xrt/include/xclbin-helper.h | 48 +
drivers/fpga/xrt/include/xdevice.h | 131 +++
drivers/fpga/xrt/include/xleaf.h | 205 +++++
drivers/fpga/xrt/include/xleaf/axigate.h | 23 +
drivers/fpga/xrt/include/xleaf/clkfreq.h | 21 +
drivers/fpga/xrt/include/xleaf/clock.h | 29 +
.../fpga/xrt/include/xleaf/ddr_calibration.h | 28 +
drivers/fpga/xrt/include/xleaf/devctl.h | 40 +
drivers/fpga/xrt/include/xleaf/icap.h | 27 +
drivers/fpga/xrt/include/xmgnt-main.h | 34 +
drivers/fpga/xrt/include/xroot.h | 117 +++
drivers/fpga/xrt/lib/Kconfig | 17 +
drivers/fpga/xrt/lib/Makefile | 30 +
drivers/fpga/xrt/lib/cdev.c | 210 +++++
drivers/fpga/xrt/lib/group.c | 278 ++++++
drivers/fpga/xrt/lib/lib-drv.c | 328 +++++++
drivers/fpga/xrt/lib/lib-drv.h | 15 +
drivers/fpga/xrt/lib/subdev.c | 847 ++++++++++++++++++
drivers/fpga/xrt/lib/subdev_pool.h | 53 ++
drivers/fpga/xrt/lib/xclbin.c | 369 ++++++++
drivers/fpga/xrt/lib/xleaf/axigate.c | 325 +++++++
drivers/fpga/xrt/lib/xleaf/clkfreq.c | 223 +++++
drivers/fpga/xrt/lib/xleaf/clock.c | 652 ++++++++++++++
drivers/fpga/xrt/lib/xleaf/ddr_calibration.c | 210 +++++
drivers/fpga/xrt/lib/xleaf/devctl.c | 169 ++++
drivers/fpga/xrt/lib/xleaf/icap.c | 328 +++++++
drivers/fpga/xrt/lib/xleaf/ucs.c | 152 ++++
drivers/fpga/xrt/lib/xleaf/vsec.c | 372 ++++++++
drivers/fpga/xrt/lib/xroot.c | 536 +++++++++++
drivers/fpga/xrt/metadata/Kconfig | 12 +
drivers/fpga/xrt/metadata/Makefile | 16 +
drivers/fpga/xrt/metadata/metadata.c | 578 ++++++++++++
drivers/fpga/xrt/mgnt/Kconfig | 15 +
drivers/fpga/xrt/mgnt/Makefile | 19 +
drivers/fpga/xrt/mgnt/root.c | 419 +++++++++
drivers/fpga/xrt/mgnt/xmgnt-main-region.c | 485 ++++++++++
drivers/fpga/xrt/mgnt/xmgnt-main.c | 660 ++++++++++++++
drivers/fpga/xrt/mgnt/xmgnt.h | 33 +
drivers/fpga/xrt/mgnt/xrt-mgr.c | 190 ++++
drivers/fpga/xrt/mgnt/xrt-mgr.h | 16 +
include/uapi/linux/xrt/xclbin.h | 409 +++++++++
include/uapi/linux/xrt/xmgnt-ioctl.h | 46 +
53 files changed, 9931 insertions(+)
create mode 100644 Documentation/fpga/xrt.rst
create mode 100644 drivers/fpga/xrt/Kconfig
create mode 100644 drivers/fpga/xrt/include/events.h
create mode 100644 drivers/fpga/xrt/include/group.h
create mode 100644 drivers/fpga/xrt/include/metadata.h
create mode 100644 drivers/fpga/xrt/include/subdev_id.h
create mode 100644 drivers/fpga/xrt/include/xclbin-helper.h
create mode 100644 drivers/fpga/xrt/include/xdevice.h
create mode 100644 drivers/fpga/xrt/include/xleaf.h
create mode 100644 drivers/fpga/xrt/include/xleaf/axigate.h
create mode 100644 drivers/fpga/xrt/include/xleaf/clkfreq.h
create mode 100644 drivers/fpga/xrt/include/xleaf/clock.h
create mode 100644 drivers/fpga/xrt/include/xleaf/ddr_calibration.h
create mode 100644 drivers/fpga/xrt/include/xleaf/devctl.h
create mode 100644 drivers/fpga/xrt/include/xleaf/icap.h
create mode 100644 drivers/fpga/xrt/include/xmgnt-main.h
create mode 100644 drivers/fpga/xrt/include/xroot.h
create mode 100644 drivers/fpga/xrt/lib/Kconfig
create mode 100644 drivers/fpga/xrt/lib/Makefile
create mode 100644 drivers/fpga/xrt/lib/cdev.c
create mode 100644 drivers/fpga/xrt/lib/group.c
create mode 100644 drivers/fpga/xrt/lib/lib-drv.c
create mode 100644 drivers/fpga/xrt/lib/lib-drv.h
create mode 100644 drivers/fpga/xrt/lib/subdev.c
create mode 100644 drivers/fpga/xrt/lib/subdev_pool.h
create mode 100644 drivers/fpga/xrt/lib/xclbin.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/axigate.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/clkfreq.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/clock.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/devctl.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/icap.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/ucs.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/vsec.c
create mode 100644 drivers/fpga/xrt/lib/xroot.c
create mode 100644 drivers/fpga/xrt/metadata/Kconfig
create mode 100644 drivers/fpga/xrt/metadata/Makefile
create mode 100644 drivers/fpga/xrt/metadata/metadata.c
create mode 100644 drivers/fpga/xrt/mgnt/Kconfig
create mode 100644 drivers/fpga/xrt/mgnt/Makefile
create mode 100644 drivers/fpga/xrt/mgnt/root.c
create mode 100644 drivers/fpga/xrt/mgnt/xmgnt-main-region.c
create mode 100644 drivers/fpga/xrt/mgnt/xmgnt-main.c
create mode 100644 drivers/fpga/xrt/mgnt/xmgnt.h
create mode 100644 drivers/fpga/xrt/mgnt/xrt-mgr.c
create mode 100644 drivers/fpga/xrt/mgnt/xrt-mgr.h
create mode 100644 include/uapi/linux/xrt/xclbin.h
create mode 100644 include/uapi/linux/xrt/xmgnt-ioctl.h

--
2.27.0


2021-04-27 21:01:51

by Lizhi Hou

Subject: [PATCH V5 XRT Alveo 07/20] fpga: xrt: root driver infrastructure

Contains common code shared by all root drivers and handles root calls from
xrt drivers. This is part of the root driver infrastructure.
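
For context, a sketch of how a PCIe root driver is expected to consume this
infrastructure, based on the interfaces declared in xroot.h below (error
handling trimmed, all "my_" names are placeholders):

  /* Hypothetical management PF PCIe driver using the xroot API. */
  static void my_hot_reset(struct device *dev)
  {
          /* perform the PCI-level reset of the card */
  }

  static struct xroot_physical_function_callback my_cb = {
          .xpc_hot_reset = my_hot_reset,
          /* .xpc_get_id / .xpc_get_resource return PCI IDs and BAR resources */
  };

  static int my_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
  {
          char *dtb = NULL; /* metadata discovered from the device */
          void *root;
          int ret;

          ret = xroot_probe(&pdev->dev, &my_cb, &root);
          if (ret)
                  return ret;

          /* create the first group from the discovered metadata */
          ret = xroot_create_group(root, dtb);
          if (ret < 0)
                  return ret;

          if (!xroot_wait_for_bringup(root))
                  dev_warn(&pdev->dev, "group bringup reported failures");

          /* tell all leaves that the whole board is now up */
          xroot_broadcast(root, XRT_EVENT_POST_CREATION);
          return 0;
  }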

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/events.h | 45 +++
drivers/fpga/xrt/include/xroot.h | 117 +++++++
drivers/fpga/xrt/lib/subdev_pool.h | 53 +++
drivers/fpga/xrt/lib/xroot.c | 536 +++++++++++++++++++++++++++++
4 files changed, 751 insertions(+)
create mode 100644 drivers/fpga/xrt/include/events.h
create mode 100644 drivers/fpga/xrt/include/xroot.h
create mode 100644 drivers/fpga/xrt/lib/subdev_pool.h
create mode 100644 drivers/fpga/xrt/lib/xroot.c

diff --git a/drivers/fpga/xrt/include/events.h b/drivers/fpga/xrt/include/events.h
new file mode 100644
index 000000000000..775171a47c8e
--- /dev/null
+++ b/drivers/fpga/xrt/include/events.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XRT_EVENTS_H_
+#define _XRT_EVENTS_H_
+
+#include "subdev_id.h"
+
+/*
+ * Event notification.
+ */
+enum xrt_events {
+ XRT_EVENT_TEST = 0, /* for testing */
+ /*
+ * Events related to specific subdev
+ * Callback arg: struct xrt_event_arg_subdev
+ */
+ XRT_EVENT_POST_CREATION,
+ XRT_EVENT_PRE_REMOVAL,
+ /*
+ * Events related to change of the whole board
+ * Callback arg: <none>
+ */
+ XRT_EVENT_PRE_HOT_RESET,
+ XRT_EVENT_POST_HOT_RESET,
+ XRT_EVENT_PRE_GATE_CLOSE,
+ XRT_EVENT_POST_GATE_OPEN,
+};
+
+struct xrt_event_arg_subdev {
+ enum xrt_subdev_id xevt_subdev_id;
+ int xevt_subdev_instance;
+};
+
+struct xrt_event {
+ enum xrt_events xe_evt;
+ struct xrt_event_arg_subdev xe_subdev;
+};
+
+#endif /* _XRT_EVENTS_H_ */
diff --git a/drivers/fpga/xrt/include/xroot.h b/drivers/fpga/xrt/include/xroot.h
new file mode 100644
index 000000000000..56461bcb07a9
--- /dev/null
+++ b/drivers/fpga/xrt/include/xroot.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XRT_ROOT_H_
+#define _XRT_ROOT_H_
+
+#include "xdevice.h"
+#include "subdev_id.h"
+#include "events.h"
+
+typedef bool (*xrt_subdev_match_t)(enum xrt_subdev_id, struct xrt_device *, void *);
+#define XRT_SUBDEV_MATCH_PREV ((xrt_subdev_match_t)-1)
+#define XRT_SUBDEV_MATCH_NEXT ((xrt_subdev_match_t)-2)
+
+/*
+ * Root calls.
+ */
+enum xrt_root_cmd {
+ /* Leaf actions. */
+ XRT_ROOT_GET_LEAF = 0,
+ XRT_ROOT_PUT_LEAF,
+ XRT_ROOT_GET_LEAF_HOLDERS,
+
+ /* Group actions. */
+ XRT_ROOT_CREATE_GROUP,
+ XRT_ROOT_REMOVE_GROUP,
+ XRT_ROOT_LOOKUP_GROUP,
+ XRT_ROOT_WAIT_GROUP_BRINGUP,
+
+ /* Event actions. */
+ XRT_ROOT_EVENT_SYNC,
+ XRT_ROOT_EVENT_ASYNC,
+
+ /* Device info. */
+ XRT_ROOT_GET_RESOURCE,
+ XRT_ROOT_GET_ID,
+
+ /* Misc. */
+ XRT_ROOT_HOT_RESET,
+ XRT_ROOT_HWMON,
+};
+
+struct xrt_root_get_leaf {
+ struct xrt_device *xpigl_caller_xdev;
+ xrt_subdev_match_t xpigl_match_cb;
+ void *xpigl_match_arg;
+ struct xrt_device *xpigl_tgt_xdev;
+};
+
+struct xrt_root_put_leaf {
+ struct xrt_device *xpipl_caller_xdev;
+ struct xrt_device *xpipl_tgt_xdev;
+};
+
+struct xrt_root_lookup_group {
+ struct xrt_device *xpilp_xdev; /* caller's xdev */
+ xrt_subdev_match_t xpilp_match_cb;
+ void *xpilp_match_arg;
+ int xpilp_grp_inst;
+};
+
+struct xrt_root_get_holders {
+ struct xrt_device *xpigh_xdev; /* caller's xdev */
+ char *xpigh_holder_buf;
+ size_t xpigh_holder_buf_len;
+};
+
+struct xrt_root_get_res {
+ u32 xpigr_region_id;
+ struct resource *xpigr_res;
+};
+
+struct xrt_root_get_id {
+ unsigned short xpigi_vendor_id;
+ unsigned short xpigi_device_id;
+ unsigned short xpigi_sub_vendor_id;
+ unsigned short xpigi_sub_device_id;
+};
+
+struct xrt_root_hwmon {
+ bool xpih_register;
+ const char *xpih_name;
+ void *xpih_drvdata;
+ const struct attribute_group **xpih_groups;
+ struct device *xpih_hwmon_dev;
+};
+
+/*
+ * Callback for leaf to make a root request. Arguments are: parent device, parent cookie, req,
+ * and arg.
+ */
+typedef int (*xrt_subdev_root_cb_t)(struct device *, void *, u32, void *);
+int xrt_subdev_root_request(struct xrt_device *self, u32 cmd, void *arg);
+
+/*
+ * Defines physical function (MPF / UPF) specific operations
+ * needed in common root driver.
+ */
+struct xroot_physical_function_callback {
+ void (*xpc_get_id)(struct device *dev, struct xrt_root_get_id *rid);
+ int (*xpc_get_resource)(struct device *dev, struct xrt_root_get_res *res);
+ void (*xpc_hot_reset)(struct device *dev);
+};
+
+int xroot_probe(struct device *dev, struct xroot_physical_function_callback *cb, void **root);
+void xroot_remove(void *root);
+bool xroot_wait_for_bringup(void *root);
+int xroot_create_group(void *xr, char *dtb);
+int xroot_add_simple_node(void *root, char *dtb, const char *endpoint);
+void xroot_broadcast(void *root, enum xrt_events evt);
+
+#endif /* _XRT_ROOT_H_ */
diff --git a/drivers/fpga/xrt/lib/subdev_pool.h b/drivers/fpga/xrt/lib/subdev_pool.h
new file mode 100644
index 000000000000..03f617d7ffd7
--- /dev/null
+++ b/drivers/fpga/xrt/lib/subdev_pool.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XRT_SUBDEV_POOL_H_
+#define _XRT_SUBDEV_POOL_H_
+
+#include <linux/device.h>
+#include <linux/mutex.h>
+#include "xroot.h"
+
+/*
+ * The struct xrt_subdev_pool manages a list of xrt_subdevs for root and group drivers.
+ */
+struct xrt_subdev_pool {
+ struct list_head xsp_dev_list;
+ struct device *xsp_owner;
+ struct mutex xsp_lock; /* pool lock */
+ bool xsp_closing;
+};
+
+/*
+ * Subdev pool helper functions for root and group drivers only.
+ */
+void xrt_subdev_pool_init(struct device *dev,
+ struct xrt_subdev_pool *spool);
+void xrt_subdev_pool_fini(struct xrt_subdev_pool *spool);
+int xrt_subdev_pool_get(struct xrt_subdev_pool *spool,
+ xrt_subdev_match_t match,
+ void *arg, struct device *holder_dev,
+ struct xrt_device **xdevp);
+int xrt_subdev_pool_put(struct xrt_subdev_pool *spool,
+ struct xrt_device *xdev,
+ struct device *holder_dev);
+int xrt_subdev_pool_add(struct xrt_subdev_pool *spool,
+ enum xrt_subdev_id id, xrt_subdev_root_cb_t pcb,
+ void *pcb_arg, char *dtb);
+int xrt_subdev_pool_del(struct xrt_subdev_pool *spool,
+ enum xrt_subdev_id id, int instance);
+ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
+ struct xrt_device *xdev,
+ char *buf, size_t len);
+
+void xrt_subdev_pool_trigger_event(struct xrt_subdev_pool *spool,
+ enum xrt_events evt);
+void xrt_subdev_pool_handle_event(struct xrt_subdev_pool *spool,
+ struct xrt_event *evt);
+
+#endif /* _XRT_SUBDEV_POOL_H_ */
diff --git a/drivers/fpga/xrt/lib/xroot.c b/drivers/fpga/xrt/lib/xroot.c
new file mode 100644
index 000000000000..7b3e540dd6c0
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xroot.c
@@ -0,0 +1,536 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Root Functions
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#include <linux/module.h>
+#include <linux/hwmon.h>
+#include "xroot.h"
+#include "subdev_pool.h"
+#include "group.h"
+#include "metadata.h"
+
+#define xroot_err(xr, fmt, args...) dev_err((xr)->dev, "%s: " fmt, __func__, ##args)
+#define xroot_warn(xr, fmt, args...) dev_warn((xr)->dev, "%s: " fmt, __func__, ##args)
+#define xroot_info(xr, fmt, args...) dev_info((xr)->dev, "%s: " fmt, __func__, ##args)
+#define xroot_dbg(xr, fmt, args...) dev_dbg((xr)->dev, "%s: " fmt, __func__, ##args)
+
+#define XROOT_GROUP_FIRST (-1)
+#define XROOT_GROUP_LAST (-2)
+
+static int xroot_root_cb(struct device *, void *, u32, void *);
+
+struct xroot_evt {
+ struct list_head list;
+ struct xrt_event evt;
+ struct completion comp;
+ bool async;
+};
+
+struct xroot_events {
+ struct mutex evt_lock; /* event lock */
+ struct list_head evt_list;
+ struct work_struct evt_work;
+};
+
+struct xroot_groups {
+ struct xrt_subdev_pool pool;
+ struct work_struct bringup_work;
+ atomic_t bringup_pending_cnt;
+ atomic_t bringup_failed_cnt;
+ struct completion bringup_comp;
+};
+
+struct xroot {
+ struct device *dev;
+ struct xroot_events events;
+ struct xroot_groups groups;
+ struct xroot_physical_function_callback pf_cb;
+};
+
+struct xroot_group_match_arg {
+ enum xrt_subdev_id id;
+ int instance;
+};
+
+static bool xroot_group_match(enum xrt_subdev_id id, struct xrt_device *xdev, void *arg)
+{
+ struct xroot_group_match_arg *a = (struct xroot_group_match_arg *)arg;
+
+ /* xdev->instance is the instance of the subdev. */
+ return id == a->id && xdev->instance == a->instance;
+}
+
+static int xroot_get_group(struct xroot *xr, int instance, struct xrt_device **grpp)
+{
+ int rc = 0;
+ struct xrt_subdev_pool *grps = &xr->groups.pool;
+ struct device *dev = xr->dev;
+ struct xroot_group_match_arg arg = { XRT_SUBDEV_GRP, instance };
+
+ if (instance == XROOT_GROUP_LAST) {
+ rc = xrt_subdev_pool_get(grps, XRT_SUBDEV_MATCH_NEXT,
+ *grpp, dev, grpp);
+ } else if (instance == XROOT_GROUP_FIRST) {
+ rc = xrt_subdev_pool_get(grps, XRT_SUBDEV_MATCH_PREV,
+ *grpp, dev, grpp);
+ } else {
+ rc = xrt_subdev_pool_get(grps, xroot_group_match,
+ &arg, dev, grpp);
+ }
+
+ if (rc && rc != -ENOENT)
+ xroot_err(xr, "failed to hold group %d: %d", instance, rc);
+ return rc;
+}
+
+static void xroot_put_group(struct xroot *xr, struct xrt_device *grp)
+{
+ int inst = grp->instance;
+ int rc = xrt_subdev_pool_put(&xr->groups.pool, grp, xr->dev);
+
+ if (rc)
+ xroot_err(xr, "failed to release group %d: %d", inst, rc);
+}
+
+static int xroot_trigger_event(struct xroot *xr, struct xrt_event *e, bool async)
+{
+ struct xroot_evt *enew = vzalloc(sizeof(*enew));
+
+ if (!enew)
+ return -ENOMEM;
+
+ enew->evt = *e;
+ enew->async = async;
+ init_completion(&enew->comp);
+
+ mutex_lock(&xr->events.evt_lock);
+ list_add(&enew->list, &xr->events.evt_list);
+ mutex_unlock(&xr->events.evt_lock);
+
+ schedule_work(&xr->events.evt_work);
+
+ if (async)
+ return 0;
+
+ wait_for_completion(&enew->comp);
+ vfree(enew);
+ return 0;
+}
+
+static void
+xroot_group_trigger_event(struct xroot *xr, int inst, enum xrt_events e)
+{
+ int ret;
+ struct xrt_device *xdev = NULL;
+ struct xrt_event evt = { 0 };
+
+ WARN_ON(inst < 0);
+ /* Only triggers subdev specific events. */
+ if (e != XRT_EVENT_POST_CREATION && e != XRT_EVENT_PRE_REMOVAL) {
+ xroot_err(xr, "invalid event %d", e);
+ return;
+ }
+
+ ret = xroot_get_group(xr, inst, &xdev);
+ if (ret)
+ return;
+
+ /* Triggers event for children, first. */
+ xleaf_call(xdev, XRT_GROUP_TRIGGER_EVENT, (void *)(uintptr_t)e);
+
+ /* Triggers event for itself. */
+ evt.xe_evt = e;
+ evt.xe_subdev.xevt_subdev_id = XRT_SUBDEV_GRP;
+ evt.xe_subdev.xevt_subdev_instance = inst;
+ xroot_trigger_event(xr, &evt, false);
+
+ xroot_put_group(xr, xdev);
+}
+
+int xroot_create_group(void *root, char *dtb)
+{
+ struct xroot *xr = (struct xroot *)root;
+ int ret;
+
+ atomic_inc(&xr->groups.bringup_pending_cnt);
+ ret = xrt_subdev_pool_add(&xr->groups.pool, XRT_SUBDEV_GRP, xroot_root_cb, xr, dtb);
+ if (ret >= 0) {
+ schedule_work(&xr->groups.bringup_work);
+ } else {
+ atomic_dec(&xr->groups.bringup_pending_cnt);
+ atomic_inc(&xr->groups.bringup_failed_cnt);
+ xroot_err(xr, "failed to create group: %d", ret);
+ }
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xroot_create_group);
+
+static int xroot_destroy_single_group(struct xroot *xr, int instance)
+{
+ struct xrt_device *xdev = NULL;
+ int ret;
+
+ WARN_ON(instance < 0);
+ ret = xroot_get_group(xr, instance, &xdev);
+ if (ret)
+ return ret;
+
+ xroot_group_trigger_event(xr, instance, XRT_EVENT_PRE_REMOVAL);
+
+ /* Now tear down all children in this group. */
+ ret = xleaf_call(xdev, XRT_GROUP_FINI_CHILDREN, NULL);
+ xroot_put_group(xr, xdev);
+ if (!ret)
+ ret = xrt_subdev_pool_del(&xr->groups.pool, XRT_SUBDEV_GRP, instance);
+
+ return ret;
+}
+
+static int xroot_destroy_group(struct xroot *xr, int instance)
+{
+ struct xrt_device *target = NULL;
+ struct xrt_device *deps = NULL;
+ int ret;
+
+ WARN_ON(instance < 0);
+ /*
+ * Make sure target group exists and can't go away before
+ * we remove its dependents
+ */
+ ret = xroot_get_group(xr, instance, &target);
+ if (ret)
+ return ret;
+
+ /*
+ * Remove all groups that depend on the target one.
+ * Assuming subdevs in higher group IDs can depend on ones in
+ * lower ID groups, we remove them in reverse order.
+ */
+ while (xroot_get_group(xr, XROOT_GROUP_LAST, &deps) != -ENOENT) {
+ int inst = deps->instance;
+
+ xroot_put_group(xr, deps);
+ /* Reached the target group instance, stop here. */
+ if (instance == inst)
+ break;
+ xroot_destroy_single_group(xr, inst);
+ deps = NULL;
+ }
+
+ /* Now we can remove the target group. */
+ xroot_put_group(xr, target);
+ return xroot_destroy_single_group(xr, instance);
+}
+
+static int xroot_lookup_group(struct xroot *xr,
+ struct xrt_root_lookup_group *arg)
+{
+ int rc = -ENOENT;
+ struct xrt_device *grp = NULL;
+
+ while (rc < 0 && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
+ if (arg->xpilp_match_cb(XRT_SUBDEV_GRP, grp, arg->xpilp_match_arg))
+ rc = grp->instance;
+ xroot_put_group(xr, grp);
+ }
+ return rc;
+}
+
+static void xroot_event_work(struct work_struct *work)
+{
+ struct xroot_evt *tmp;
+ struct xroot *xr = container_of(work, struct xroot, events.evt_work);
+
+ mutex_lock(&xr->events.evt_lock);
+ while (!list_empty(&xr->events.evt_list)) {
+ tmp = list_first_entry(&xr->events.evt_list, struct xroot_evt, list);
+ list_del(&tmp->list);
+ mutex_unlock(&xr->events.evt_lock);
+
+ xrt_subdev_pool_handle_event(&xr->groups.pool, &tmp->evt);
+
+ if (tmp->async)
+ vfree(tmp);
+ else
+ complete(&tmp->comp);
+
+ mutex_lock(&xr->events.evt_lock);
+ }
+ mutex_unlock(&xr->events.evt_lock);
+}
+
+static void xroot_event_init(struct xroot *xr)
+{
+ INIT_LIST_HEAD(&xr->events.evt_list);
+ mutex_init(&xr->events.evt_lock);
+ INIT_WORK(&xr->events.evt_work, xroot_event_work);
+}
+
+static void xroot_event_fini(struct xroot *xr)
+{
+ flush_scheduled_work();
+ WARN_ON(!list_empty(&xr->events.evt_list));
+}
+
+static int xroot_get_leaf(struct xroot *xr, struct xrt_root_get_leaf *arg)
+{
+ int rc = -ENOENT;
+ struct xrt_device *grp = NULL;
+
+ while (rc && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
+ rc = xleaf_call(grp, XRT_GROUP_GET_LEAF, arg);
+ xroot_put_group(xr, grp);
+ }
+ return rc;
+}
+
+static int xroot_put_leaf(struct xroot *xr, struct xrt_root_put_leaf *arg)
+{
+ int rc = -ENOENT;
+ struct xrt_device *grp = NULL;
+
+ while (rc && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
+ rc = xleaf_call(grp, XRT_GROUP_PUT_LEAF, arg);
+ xroot_put_group(xr, grp);
+ }
+ return rc;
+}
+
+static int xroot_root_cb(struct device *dev, void *parg, enum xrt_root_cmd cmd, void *arg)
+{
+ struct xroot *xr = (struct xroot *)parg;
+ int rc = 0;
+
+ switch (cmd) {
+ /* Leaf actions. */
+ case XRT_ROOT_GET_LEAF: {
+ struct xrt_root_get_leaf *getleaf = (struct xrt_root_get_leaf *)arg;
+
+ rc = xroot_get_leaf(xr, getleaf);
+ break;
+ }
+ case XRT_ROOT_PUT_LEAF: {
+ struct xrt_root_put_leaf *putleaf = (struct xrt_root_put_leaf *)arg;
+
+ rc = xroot_put_leaf(xr, putleaf);
+ break;
+ }
+ case XRT_ROOT_GET_LEAF_HOLDERS: {
+ struct xrt_root_get_holders *holders = (struct xrt_root_get_holders *)arg;
+
+ rc = xrt_subdev_pool_get_holders(&xr->groups.pool,
+ holders->xpigh_xdev,
+ holders->xpigh_holder_buf,
+ holders->xpigh_holder_buf_len);
+ break;
+ }
+
+ /* Group actions. */
+ case XRT_ROOT_CREATE_GROUP:
+ rc = xroot_create_group(xr, (char *)arg);
+ break;
+ case XRT_ROOT_REMOVE_GROUP:
+ rc = xroot_destroy_group(xr, (int)(uintptr_t)arg);
+ break;
+ case XRT_ROOT_LOOKUP_GROUP: {
+ struct xrt_root_lookup_group *getgrp = (struct xrt_root_lookup_group *)arg;
+
+ rc = xroot_lookup_group(xr, getgrp);
+ break;
+ }
+ case XRT_ROOT_WAIT_GROUP_BRINGUP:
+ rc = xroot_wait_for_bringup(xr) ? 0 : -EINVAL;
+ break;
+
+ /* Event actions. */
+ case XRT_ROOT_EVENT_SYNC:
+ case XRT_ROOT_EVENT_ASYNC: {
+ bool async = (cmd == XRT_ROOT_EVENT_ASYNC);
+ struct xrt_event *evt = (struct xrt_event *)arg;
+
+ rc = xroot_trigger_event(xr, evt, async);
+ break;
+ }
+
+ /* Device info. */
+ case XRT_ROOT_GET_RESOURCE: {
+ struct xrt_root_get_res *res = (struct xrt_root_get_res *)arg;
+
+ if (xr->pf_cb.xpc_get_resource) {
+ rc = xr->pf_cb.xpc_get_resource(xr->dev, res);
+ } else {
+ xroot_err(xr, "get resource is not supported");
+ rc = -EOPNOTSUPP;
+ }
+ break;
+ }
+ case XRT_ROOT_GET_ID: {
+ struct xrt_root_get_id *id = (struct xrt_root_get_id *)arg;
+
+ if (xr->pf_cb.xpc_get_id)
+ xr->pf_cb.xpc_get_id(xr->dev, id);
+ else
+ memset(id, 0, sizeof(*id));
+ break;
+ }
+
+ /* MISC generic root driver functions. */
+ case XRT_ROOT_HOT_RESET: {
+ if (xr->pf_cb.xpc_hot_reset) {
+ xr->pf_cb.xpc_hot_reset(xr->dev);
+ } else {
+ xroot_err(xr, "hot reset is not supported");
+ rc = -EOPNOTSUPP;
+ }
+ break;
+ }
+ case XRT_ROOT_HWMON: {
+ struct xrt_root_hwmon *hwmon = (struct xrt_root_hwmon *)arg;
+
+ if (hwmon->xpih_register) {
+ hwmon->xpih_hwmon_dev =
+ hwmon_device_register_with_info(xr->dev,
+ hwmon->xpih_name,
+ hwmon->xpih_drvdata,
+ NULL,
+ hwmon->xpih_groups);
+ } else {
+ hwmon_device_unregister(hwmon->xpih_hwmon_dev);
+ }
+ break;
+ }
+
+ default:
+ xroot_err(xr, "unknown root cmd %d", cmd);
+ rc = -EINVAL;
+ break;
+ }
+
+ return rc;
+}
+
+static void xroot_bringup_group_work(struct work_struct *work)
+{
+ struct xrt_device *xdev = NULL;
+ struct xroot *xr = container_of(work, struct xroot, groups.bringup_work);
+
+ while (xroot_get_group(xr, XROOT_GROUP_FIRST, &xdev) != -ENOENT) {
+ int r, i;
+
+ i = xdev->instance;
+ r = xleaf_call(xdev, XRT_GROUP_INIT_CHILDREN, NULL);
+ xroot_put_group(xr, xdev);
+ if (r == -EEXIST)
+ continue; /* Already brought up, nothing to do. */
+ if (r)
+ atomic_inc(&xr->groups.bringup_failed_cnt);
+
+ xroot_group_trigger_event(xr, i, XRT_EVENT_POST_CREATION);
+
+ if (atomic_dec_and_test(&xr->groups.bringup_pending_cnt))
+ complete(&xr->groups.bringup_comp);
+ }
+}
+
+static void xroot_groups_init(struct xroot *xr)
+{
+ xrt_subdev_pool_init(xr->dev, &xr->groups.pool);
+ INIT_WORK(&xr->groups.bringup_work, xroot_bringup_group_work);
+ atomic_set(&xr->groups.bringup_pending_cnt, 0);
+ atomic_set(&xr->groups.bringup_failed_cnt, 0);
+ init_completion(&xr->groups.bringup_comp);
+}
+
+static void xroot_groups_fini(struct xroot *xr)
+{
+ flush_scheduled_work();
+ xrt_subdev_pool_fini(&xr->groups.pool);
+}
+
+int xroot_add_simple_node(void *root, char *dtb, const char *endpoint)
+{
+ struct xroot *xr = (struct xroot *)root;
+ struct device *dev = xr->dev;
+ struct xrt_md_endpoint ep = { 0 };
+ int ret = 0;
+
+ ep.ep_name = endpoint;
+ ret = xrt_md_add_endpoint(dev, dtb, &ep);
+ if (ret)
+ xroot_err(xr, "add %s failed, ret %d", endpoint, ret);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xroot_add_simple_node);
+
+bool xroot_wait_for_bringup(void *root)
+{
+ struct xroot *xr = (struct xroot *)root;
+
+ wait_for_completion(&xr->groups.bringup_comp);
+ return atomic_read(&xr->groups.bringup_failed_cnt) == 0;
+}
+EXPORT_SYMBOL_GPL(xroot_wait_for_bringup);
+
+int xroot_probe(struct device *dev, struct xroot_physical_function_callback *cb, void **root)
+{
+ struct xroot *xr = NULL;
+
+ dev_info(dev, "%s: probing...", __func__);
+
+ xr = devm_kzalloc(dev, sizeof(*xr), GFP_KERNEL);
+ if (!xr)
+ return -ENOMEM;
+
+ xr->dev = dev;
+ xr->pf_cb = *cb;
+ xroot_groups_init(xr);
+ xroot_event_init(xr);
+
+ *root = xr;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xroot_probe);
+
+void xroot_remove(void *root)
+{
+ struct xroot *xr = (struct xroot *)root;
+ struct xrt_device *grp = NULL;
+
+ xroot_info(xr, "leaving...");
+
+ if (xroot_get_group(xr, XROOT_GROUP_FIRST, &grp) == 0) {
+ int instance = grp->instance;
+
+ xroot_put_group(xr, grp);
+ xroot_destroy_group(xr, instance);
+ }
+
+ xroot_event_fini(xr);
+ xroot_groups_fini(xr);
+}
+EXPORT_SYMBOL_GPL(xroot_remove);
+
+void xroot_broadcast(void *root, enum xrt_events evt)
+{
+ struct xroot *xr = (struct xroot *)root;
+ struct xrt_event e = { 0 };
+
+ /* The root pf driver only broadcasts the two events below. */
+ if (evt != XRT_EVENT_POST_CREATION && evt != XRT_EVENT_PRE_REMOVAL) {
+ xroot_info(xr, "invalid event %d", evt);
+ return;
+ }
+
+ e.xe_evt = evt;
+ e.xe_subdev.xevt_subdev_id = XRT_ROOT;
+ e.xe_subdev.xevt_subdev_instance = 0;
+ xroot_trigger_event(xr, &e, false);
+}
+EXPORT_SYMBOL_GPL(xroot_broadcast);
--
2.27.0

2021-04-27 21:02:02

by Lizhi Hou

Subject: [PATCH V5 XRT Alveo 10/20] fpga: xrt: main driver for management function device

xrt driver that handles user requests, such as hot reset (through sysfs) and
xclbin download (through an ioctl).
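
As a sketch, a peer leaf can query an axlf section from this driver through
its leaf_call interface (how the caller obtained the xmgnt-main leaf handle
is assumed here):

  /* Sketch: peer leaf asking xmgnt-main for an xclbin section. */
  struct xrt_mgnt_main_get_axlf_section get = {
          .xmmigas_axlf_kind = XMGNT_ULP,
          .xmmigas_section_kind = PARTITION_METADATA,
  };
  int rc;

  /* main_leaf obtained via xleaf_get_leaf_by_epname() and released later */
  rc = xleaf_call(main_leaf, XRT_MGNT_MAIN_GET_AXLF_SECTION, &get);
  if (!rc) {
          /* get.xmmigas_section / get.xmmigas_section_size describe the section */
  }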

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/xmgnt-main.h | 34 ++
drivers/fpga/xrt/mgnt/xmgnt-main.c | 660 ++++++++++++++++++++++++++
drivers/fpga/xrt/mgnt/xmgnt.h | 33 ++
include/uapi/linux/xrt/xmgnt-ioctl.h | 46 ++
4 files changed, 773 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xmgnt-main.h
create mode 100644 drivers/fpga/xrt/mgnt/xmgnt-main.c
create mode 100644 drivers/fpga/xrt/mgnt/xmgnt.h
create mode 100644 include/uapi/linux/xrt/xmgnt-ioctl.h

diff --git a/drivers/fpga/xrt/include/xmgnt-main.h b/drivers/fpga/xrt/include/xmgnt-main.h
new file mode 100644
index 000000000000..b46dac710cd3
--- /dev/null
+++ b/drivers/fpga/xrt/include/xmgnt-main.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XMGNT_MAIN_H_
+#define _XMGNT_MAIN_H_
+
+#include <linux/xrt/xclbin.h>
+#include "xleaf.h"
+
+enum xrt_mgnt_main_leaf_cmd {
+ XRT_MGNT_MAIN_GET_AXLF_SECTION = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+ XRT_MGNT_MAIN_GET_VBNV,
+};
+
+/* There are three kinds of partitions. Each of them is programmed independently. */
+enum provider_kind {
+ XMGNT_BLP, /* Base Logic Partition */
+ XMGNT_PLP, /* Provider Logic Partition */
+ XMGNT_ULP, /* User Logic Partition */
+};
+
+struct xrt_mgnt_main_get_axlf_section {
+ enum provider_kind xmmigas_axlf_kind;
+ enum axlf_section_kind xmmigas_section_kind;
+ void *xmmigas_section;
+ u64 xmmigas_section_size;
+};
+
+#endif /* _XMGNT_MAIN_H_ */
diff --git a/drivers/fpga/xrt/mgnt/xmgnt-main.c b/drivers/fpga/xrt/mgnt/xmgnt-main.c
new file mode 100644
index 000000000000..a1c6dc34f6c0
--- /dev/null
+++ b/drivers/fpga/xrt/mgnt/xmgnt-main.c
@@ -0,0 +1,660 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA MGNT PF entry point driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Sonal Santan <[email protected]>
+ */
+
+#include <linux/firmware.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include "xclbin-helper.h"
+#include "metadata.h"
+#include "xleaf.h"
+#include <linux/xrt/xmgnt-ioctl.h>
+#include "xleaf/devctl.h"
+#include "xmgnt-main.h"
+#include "xrt-mgr.h"
+#include "xleaf/icap.h"
+#include "xleaf/axigate.h"
+#include "xmgnt.h"
+
+#define XMGNT_MAIN "xmgnt_main"
+#define XMGNT_SUPP_XCLBIN_MAJOR 2
+
+#define XMGNT_FLAG_FLASH_READY 1
+#define XMGNT_FLAG_DEVCTL_READY 2
+
+#define XMGNT_UUID_STR_LEN (UUID_SIZE * 2 + 1)
+
+struct xmgnt_main {
+ struct xrt_device *xdev;
+ struct axlf *firmware_blp;
+ struct axlf *firmware_plp;
+ struct axlf *firmware_ulp;
+ u32 flags;
+ struct fpga_manager *fmgr;
+ struct mutex lock; /* busy lock */
+ uuid_t *blp_interface_uuids;
+ u32 blp_interface_uuid_num;
+};
+
+/*
+ * VBNV stands for Vendor, BoardID, Name, Version. It is a string
+ * which describes the board and the shell.
+ *
+ * Caller is responsible for freeing the returned string.
+ */
+char *xmgnt_get_vbnv(struct xrt_device *xdev)
+{
+ struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
+ const char *vbnv;
+ char *ret;
+ int i;
+
+ if (xmm->firmware_plp)
+ vbnv = xmm->firmware_plp->header.platform_vbnv;
+ else if (xmm->firmware_blp)
+ vbnv = xmm->firmware_blp->header.platform_vbnv;
+ else
+ return NULL;
+
+ ret = kstrdup(vbnv, GFP_KERNEL);
+ if (!ret)
+ return NULL;
+
+ for (i = 0; i < strlen(ret); i++) {
+ if (ret[i] == ':' || ret[i] == '.')
+ ret[i] = '_';
+ }
+ return ret;
+}
+
+static int get_dev_uuid(struct xrt_device *xdev, char *uuidstr, size_t len)
+{
+ struct xrt_devctl_rw devctl_arg = { 0 };
+ struct xrt_device *devctl_leaf;
+ char uuid_buf[UUID_SIZE];
+ uuid_t uuid;
+ int err;
+
+ devctl_leaf = xleaf_get_leaf_by_epname(xdev, XRT_MD_NODE_BLP_ROM);
+ if (!devctl_leaf) {
+ xrt_err(xdev, "can not get %s", XRT_MD_NODE_BLP_ROM);
+ return -EINVAL;
+ }
+
+ devctl_arg.xdr_id = XRT_DEVCTL_ROM_UUID;
+ devctl_arg.xdr_buf = uuid_buf;
+ devctl_arg.xdr_len = sizeof(uuid_buf);
+ devctl_arg.xdr_offset = 0;
+ err = xleaf_call(devctl_leaf, XRT_DEVCTL_READ, &devctl_arg);
+ xleaf_put_leaf(xdev, devctl_leaf);
+ if (err) {
+ xrt_err(xdev, "can not get uuid: %d", err);
+ return err;
+ }
+ import_uuid(&uuid, uuid_buf);
+ xrt_md_trans_uuid2str(&uuid, uuidstr);
+
+ return 0;
+}
+
+int xmgnt_hot_reset(struct xrt_device *xdev)
+{
+ int ret = xleaf_broadcast_event(xdev, XRT_EVENT_PRE_HOT_RESET, false);
+
+ if (ret) {
+ xrt_err(xdev, "offline failed, hot reset is canceled");
+ return ret;
+ }
+
+ xleaf_hot_reset(xdev);
+ xleaf_broadcast_event(xdev, XRT_EVENT_POST_HOT_RESET, false);
+ return 0;
+}
+
+static ssize_t reset_store(struct device *dev, struct device_attribute *da,
+ const char *buf, size_t count)
+{
+ struct xrt_device *xdev = to_xrt_dev(dev);
+
+ xmgnt_hot_reset(xdev);
+ return count;
+}
+static DEVICE_ATTR_WO(reset);
+
+static ssize_t VBNV_show(struct device *dev, struct device_attribute *da, char *buf)
+{
+ struct xrt_device *xdev = to_xrt_dev(dev);
+ ssize_t ret;
+ char *vbnv;
+
+ vbnv = xmgnt_get_vbnv(xdev);
+ if (!vbnv)
+ return -EINVAL;
+ ret = sprintf(buf, "%s\n", vbnv);
+ kfree(vbnv);
+ return ret;
+}
+static DEVICE_ATTR_RO(VBNV);
+
+/* The logic UUID is the UUID that uniquely identifies the partition. */
+static ssize_t logic_uuids_show(struct device *dev, struct device_attribute *da, char *buf)
+{
+ struct xrt_device *xdev = to_xrt_dev(dev);
+ char uuid[XMGNT_UUID_STR_LEN];
+ ssize_t ret;
+
+ /* Getting UUID pointed to by VSEC, should be the same as logic UUID of BLP. */
+ ret = get_dev_uuid(xdev, uuid, sizeof(uuid));
+ if (ret)
+ return ret;
+ ret = sprintf(buf, "%s\n", uuid);
+ return ret;
+}
+static DEVICE_ATTR_RO(logic_uuids);
+
+static ssize_t interface_uuids_show(struct device *dev, struct device_attribute *da, char *buf)
+{
+ struct xrt_device *xdev = to_xrt_dev(dev);
+ struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
+ ssize_t ret = 0;
+ u32 i;
+
+ for (i = 0; i < xmm->blp_interface_uuid_num; i++) {
+ char uuidstr[XMGNT_UUID_STR_LEN];
+
+ xrt_md_trans_uuid2str(&xmm->blp_interface_uuids[i], uuidstr);
+ ret += sprintf(buf + ret, "%s\n", uuidstr);
+ }
+ return ret;
+}
+static DEVICE_ATTR_RO(interface_uuids);
+
+static struct attribute *xmgnt_main_attrs[] = {
+ &dev_attr_reset.attr,
+ &dev_attr_VBNV.attr,
+ &dev_attr_logic_uuids.attr,
+ &dev_attr_interface_uuids.attr,
+ NULL,
+};
+
+static const struct attribute_group xmgnt_main_attrgroup = {
+ .attrs = xmgnt_main_attrs,
+};
+
+static int load_firmware_from_disk(struct xrt_device *xdev, struct axlf **fw_buf, size_t *len)
+{
+ char uuid[XMGNT_UUID_STR_LEN];
+ const struct firmware *fw;
+ char fw_name[256];
+ int err = 0;
+
+ *len = 0;
+ err = get_dev_uuid(xdev, uuid, sizeof(uuid));
+ if (err)
+ return err;
+
+ snprintf(fw_name, sizeof(fw_name), "xilinx/%s/partition.xsabin", uuid);
+ xrt_info(xdev, "try loading fw: %s", fw_name);
+
+ err = request_firmware(&fw, fw_name, DEV(xdev));
+ if (err)
+ return err;
+
+ *fw_buf = vmalloc(fw->size);
+ if (!*fw_buf) {
+ release_firmware(fw);
+ return -ENOMEM;
+ }
+
+ *len = fw->size;
+ memcpy(*fw_buf, fw->data, fw->size);
+
+ release_firmware(fw);
+ return 0;
+}
+
+static const struct axlf *xmgnt_get_axlf_firmware(struct xmgnt_main *xmm, enum provider_kind kind)
+{
+ switch (kind) {
+ case XMGNT_BLP:
+ return xmm->firmware_blp;
+ case XMGNT_PLP:
+ return xmm->firmware_plp;
+ case XMGNT_ULP:
+ return xmm->firmware_ulp;
+ default:
+ xrt_err(xmm->xdev, "unknown axlf kind: %d", kind);
+ return NULL;
+ }
+}
+
+/* The caller needs to free the returned dtb buffer */
+char *xmgnt_get_dtb(struct xrt_device *xdev, enum provider_kind kind)
+{
+ struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
+ const struct axlf *provider;
+ char *dtb = NULL;
+ int rc;
+
+ provider = xmgnt_get_axlf_firmware(xmm, kind);
+ if (!provider)
+ return dtb;
+
+ rc = xrt_xclbin_get_metadata(DEV(xdev), provider, &dtb);
+ if (rc)
+ xrt_err(xdev, "failed to find dtb: %d", rc);
+ return dtb;
+}
+
+/* The caller needs to free the returned uuid buffer */
+static const char *get_uuid_from_firmware(struct xrt_device *xdev, const struct axlf *xclbin)
+{
+ const void *uuiddup = NULL;
+ const void *uuid = NULL;
+ void *dtb = NULL;
+ int rc;
+
+ rc = xrt_xclbin_get_section(DEV(xdev), xclbin, PARTITION_METADATA, &dtb, NULL);
+ if (rc)
+ return NULL;
+
+ rc = xrt_md_get_prop(DEV(xdev), dtb, NULL, NULL, XRT_MD_PROP_LOGIC_UUID, &uuid, NULL);
+ if (!rc)
+ uuiddup = kstrdup(uuid, GFP_KERNEL);
+ vfree(dtb);
+ return uuiddup;
+}
+
+static bool is_valid_firmware(struct xrt_device *xdev,
+ const struct axlf *xclbin, size_t fw_len)
+{
+ const char *fw_buf = (const char *)xclbin;
+ size_t axlflen = xclbin->header.length;
+ char dev_uuid[XMGNT_UUID_STR_LEN];
+ const char *fw_uuid;
+ int err;
+
+ err = get_dev_uuid(xdev, dev_uuid, sizeof(dev_uuid));
+ if (err)
+ return false;
+
+ if (memcmp(fw_buf, XCLBIN_VERSION2, sizeof(XCLBIN_VERSION2)) != 0) {
+ xrt_err(xdev, "unknown fw format");
+ return false;
+ }
+
+ if (axlflen > fw_len) {
+ xrt_err(xdev, "truncated fw, length: %zu, expect: %zu", fw_len, axlflen);
+ return false;
+ }
+
+ if (xclbin->header.version_major != XMGNT_SUPP_XCLBIN_MAJOR) {
+ xrt_err(xdev, "firmware is not supported");
+ return false;
+ }
+
+ fw_uuid = get_uuid_from_firmware(xdev, xclbin);
+ if (!fw_uuid || strncmp(fw_uuid, dev_uuid, sizeof(dev_uuid)) != 0) {
+ xrt_err(xdev, "bad fw UUID: %s, expect: %s",
+ fw_uuid ? fw_uuid : "<none>", dev_uuid);
+ kfree(fw_uuid);
+ return false;
+ }
+
+ kfree(fw_uuid);
+ return true;
+}
+
+int xmgnt_get_provider_uuid(struct xrt_device *xdev, enum provider_kind kind, uuid_t *uuid)
+{
+ struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
+ const struct axlf *fwbuf;
+ const char *fw_uuid;
+ int rc = -ENOENT;
+
+ mutex_lock(&xmm->lock);
+
+ fwbuf = xmgnt_get_axlf_firmware(xmm, kind);
+ if (!fwbuf)
+ goto done;
+
+ fw_uuid = get_uuid_from_firmware(xdev, fwbuf);
+ if (!fw_uuid)
+ goto done;
+
+ rc = xrt_md_trans_str2uuid(DEV(xdev), fw_uuid, uuid);
+ kfree(fw_uuid);
+
+done:
+ mutex_unlock(&xmm->lock);
+ return rc;
+}
+
+static int xmgnt_create_blp(struct xmgnt_main *xmm)
+{
+ const struct axlf *provider = xmgnt_get_axlf_firmware(xmm, XMGNT_BLP);
+ struct xrt_device *xdev = xmm->xdev;
+ int rc = 0;
+ char *dtb = NULL;
+
+ dtb = xmgnt_get_dtb(xdev, XMGNT_BLP);
+ if (!dtb) {
+ xrt_err(xdev, "did not get BLP metadata");
+ return -EINVAL;
+ }
+
+ rc = xmgnt_process_xclbin(xmm->xdev, xmm->fmgr, provider, XMGNT_BLP);
+ if (rc) {
+ xrt_err(xdev, "failed to process BLP: %d", rc);
+ goto failed;
+ }
+
+ rc = xleaf_create_group(xdev, dtb);
+ if (rc < 0)
+ xrt_err(xdev, "failed to create BLP group: %d", rc);
+ else
+ rc = 0;
+
+ WARN_ON(xmm->blp_interface_uuids);
+ rc = xrt_md_get_interface_uuids(&xdev->dev, dtb, 0, NULL);
+ if (rc > 0) {
+ xmm->blp_interface_uuid_num = rc;
+ xmm->blp_interface_uuids =
+ kcalloc(xmm->blp_interface_uuid_num, sizeof(uuid_t), GFP_KERNEL);
+ if (!xmm->blp_interface_uuids) {
+ rc = -ENOMEM;
+ goto failed;
+ }
+ xrt_md_get_interface_uuids(&xdev->dev, dtb, xmm->blp_interface_uuid_num,
+ xmm->blp_interface_uuids);
+ }
+
+failed:
+ vfree(dtb);
+ return rc;
+}
+
+static int xmgnt_load_firmware(struct xmgnt_main *xmm)
+{
+ struct xrt_device *xdev = xmm->xdev;
+ size_t fwlen;
+ int rc;
+
+ rc = load_firmware_from_disk(xdev, &xmm->firmware_blp, &fwlen);
+ if (!rc && is_valid_firmware(xdev, xmm->firmware_blp, fwlen))
+ xmgnt_create_blp(xmm);
+ else
+ xrt_err(xdev, "failed to find firmware, giving up: %d", rc);
+ return rc;
+}
+
+static void xmgnt_main_event_cb(struct xrt_device *xdev, void *arg)
+{
+ struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
+ struct xrt_event *evt = (struct xrt_event *)arg;
+ enum xrt_events e = evt->xe_evt;
+ struct xrt_device *leaf;
+ enum xrt_subdev_id id;
+
+ id = evt->xe_subdev.xevt_subdev_id;
+ switch (e) {
+ case XRT_EVENT_POST_CREATION: {
+ if (id == XRT_SUBDEV_DEVCTL && !(xmm->flags & XMGNT_FLAG_DEVCTL_READY)) {
+ leaf = xleaf_get_leaf_by_epname(xdev, XRT_MD_NODE_BLP_ROM);
+ if (leaf) {
+ xmm->flags |= XMGNT_FLAG_DEVCTL_READY;
+ xleaf_put_leaf(xdev, leaf);
+ }
+ } else if (id == XRT_SUBDEV_QSPI && !(xmm->flags & XMGNT_FLAG_FLASH_READY)) {
+ xmm->flags |= XMGNT_FLAG_FLASH_READY;
+ } else {
+ break;
+ }
+
+ if (xmm->flags & XMGNT_FLAG_DEVCTL_READY)
+ xmgnt_load_firmware(xmm);
+ break;
+ }
+ case XRT_EVENT_PRE_REMOVAL:
+ break;
+ default:
+ xrt_dbg(xdev, "ignored event %d", e);
+ break;
+ }
+}
+
+static int xmgnt_main_probe(struct xrt_device *xdev)
+{
+ struct xmgnt_main *xmm;
+
+ xrt_info(xdev, "probing...");
+
+ xmm = devm_kzalloc(DEV(xdev), sizeof(*xmm), GFP_KERNEL);
+ if (!xmm)
+ return -ENOMEM;
+
+ xmm->xdev = xdev;
+ xmm->fmgr = xmgnt_fmgr_probe(xdev);
+ if (IS_ERR(xmm->fmgr))
+ return PTR_ERR(xmm->fmgr);
+
+ xrt_set_drvdata(xdev, xmm);
+ mutex_init(&xmm->lock);
+
+ /* Ready to handle requests through sysfs nodes. */
+ if (sysfs_create_group(&DEV(xdev)->kobj, &xmgnt_main_attrgroup))
+ xrt_err(xdev, "failed to create sysfs group");
+ return 0;
+}
+
+static void xmgnt_main_remove(struct xrt_device *xdev)
+{
+ struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
+
+ /* By now, group driver should prevent any inter-leaf call. */
+
+ xrt_info(xdev, "leaving...");
+
+ kfree(xmm->blp_interface_uuids);
+ vfree(xmm->firmware_blp);
+ vfree(xmm->firmware_plp);
+ vfree(xmm->firmware_ulp);
+ xmgnt_region_cleanup_all(xdev);
+ xmgnt_fmgr_remove(xmm->fmgr);
+ sysfs_remove_group(&DEV(xdev)->kobj, &xmgnt_main_attrgroup);
+}
+
+static int
+xmgnt_mainleaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
+{
+ struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
+ int ret = 0;
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ xmgnt_main_event_cb(xdev, arg);
+ break;
+ case XRT_MGNT_MAIN_GET_AXLF_SECTION: {
+ struct xrt_mgnt_main_get_axlf_section *get =
+ (struct xrt_mgnt_main_get_axlf_section *)arg;
+ const struct axlf *firmware = xmgnt_get_axlf_firmware(xmm, get->xmmigas_axlf_kind);
+
+ if (!firmware) {
+ ret = -ENOENT;
+ } else {
+ ret = xrt_xclbin_get_section(DEV(xdev), firmware,
+ get->xmmigas_section_kind,
+ &get->xmmigas_section,
+ &get->xmmigas_section_size);
+ }
+ break;
+ }
+ case XRT_MGNT_MAIN_GET_VBNV: {
+ char **vbnv_p = (char **)arg;
+
+ *vbnv_p = xmgnt_get_vbnv(xdev);
+ if (!*vbnv_p)
+ ret = -EINVAL;
+ break;
+ }
+ default:
+ xrt_err(xdev, "unknown cmd: %d", cmd);
+ ret = -EINVAL;
+ break;
+ }
+ return ret;
+}
+
+static int xmgnt_main_open(struct inode *inode, struct file *file)
+{
+ struct xrt_device *xdev = xleaf_devnode_open(inode);
+
+ /* Device may have gone already when we get here. */
+ if (!xdev)
+ return -ENODEV;
+
+ xrt_info(xdev, "opened");
+ file->private_data = xrt_get_drvdata(xdev);
+ return 0;
+}
+
+static int xmgnt_main_close(struct inode *inode, struct file *file)
+{
+ struct xmgnt_main *xmm = file->private_data;
+
+ xleaf_devnode_close(inode);
+
+ xrt_info(xmm->xdev, "closed");
+ return 0;
+}
+
+/*
+ * Called for the xclbin download ioctl.
+ */
+static int xmgnt_bitstream_axlf_fpga_mgr(struct xmgnt_main *xmm, void *axlf, size_t size)
+{
+ int ret;
+
+ WARN_ON(!mutex_is_locked(&xmm->lock));
+
+ /*
+ * Should any error happen during download, we can't trust
+ * the cached xclbin any more.
+ */
+ vfree(xmm->firmware_ulp);
+ xmm->firmware_ulp = NULL;
+
+ ret = xmgnt_process_xclbin(xmm->xdev, xmm->fmgr, axlf, XMGNT_ULP);
+ if (ret == 0)
+ xmm->firmware_ulp = axlf;
+
+ return ret;
+}
+
+static int bitstream_axlf_ioctl(struct xmgnt_main *xmm, const void __user *arg)
+{
+ struct xmgnt_ioc_bitstream_axlf ioc_obj = { 0 };
+ struct axlf xclbin_obj = { {0} };
+ size_t copy_buffer_size = 0;
+ void *copy_buffer = NULL;
+ int ret = 0;
+
+ if (copy_from_user((void *)&ioc_obj, arg, sizeof(ioc_obj)))
+ return -EFAULT;
+ if (copy_from_user((void *)&xclbin_obj, ioc_obj.xclbin, sizeof(xclbin_obj)))
+ return -EFAULT;
+ if (memcmp(xclbin_obj.magic, XCLBIN_VERSION2, sizeof(XCLBIN_VERSION2)))
+ return -EINVAL;
+
+ copy_buffer_size = xclbin_obj.header.length;
+ if (copy_buffer_size > XCLBIN_MAX_SIZE || copy_buffer_size < sizeof(xclbin_obj))
+ return -EINVAL;
+ if (xclbin_obj.header.version_major != XMGNT_SUPP_XCLBIN_MAJOR)
+ return -EINVAL;
+
+ copy_buffer = vmalloc(copy_buffer_size);
+ if (!copy_buffer)
+ return -ENOMEM;
+
+ if (copy_from_user(copy_buffer, ioc_obj.xclbin, copy_buffer_size)) {
+ vfree(copy_buffer);
+ return -EFAULT;
+ }
+
+ ret = xmgnt_bitstream_axlf_fpga_mgr(xmm, copy_buffer, copy_buffer_size);
+ if (ret)
+ vfree(copy_buffer);
+
+ return ret;
+}
+
+static long xmgnt_main_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ struct xmgnt_main *xmm = filp->private_data;
+ long result = 0;
+
+ if (_IOC_TYPE(cmd) != XMGNT_IOC_MAGIC)
+ return -ENOTTY;
+
+ mutex_lock(&xmm->lock);
+
+ xrt_info(xmm->xdev, "ioctl cmd %d, arg %ld", cmd, arg);
+ switch (cmd) {
+ case XMGNT_IOCICAPDOWNLOAD_AXLF:
+ result = bitstream_axlf_ioctl(xmm, (const void __user *)arg);
+ break;
+ default:
+ result = -ENOTTY;
+ break;
+ }
+
+ mutex_unlock(&xmm->lock);
+ return result;
+}
+
+static struct xrt_dev_endpoints xrt_mgnt_main_endpoints[] = {
+ {
+ .xse_names = (struct xrt_dev_ep_names []){
+ { .ep_name = XRT_MD_NODE_MGNT_MAIN },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_driver xmgnt_main_driver = {
+ .driver = {
+ .name = XMGNT_MAIN,
+ },
+ .file_ops = {
+ .xsf_ops = {
+ .owner = THIS_MODULE,
+ .open = xmgnt_main_open,
+ .release = xmgnt_main_close,
+ .unlocked_ioctl = xmgnt_main_ioctl,
+ },
+ .xsf_dev_name = "xmgnt",
+ },
+ .subdev_id = XRT_SUBDEV_MGNT_MAIN,
+ .endpoints = xrt_mgnt_main_endpoints,
+ .probe = xmgnt_main_probe,
+ .remove = xmgnt_main_remove,
+ .leaf_call = xmgnt_mainleaf_call,
+};
+
+int xmgnt_register_leaf(void)
+{
+ return xrt_register_driver(&xmgnt_main_driver);
+}
+
+void xmgnt_unregister_leaf(void)
+{
+ xrt_unregister_driver(&xmgnt_main_driver);
+}
diff --git a/drivers/fpga/xrt/mgnt/xmgnt.h b/drivers/fpga/xrt/mgnt/xmgnt.h
new file mode 100644
index 000000000000..c8159903de4a
--- /dev/null
+++ b/drivers/fpga/xrt/mgnt/xmgnt.h
@@ -0,0 +1,33 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XMGNT_H_
+#define _XMGNT_H_
+
+#include "xmgnt-main.h"
+
+struct fpga_manager;
+int xmgnt_process_xclbin(struct xrt_device *xdev,
+ struct fpga_manager *fmgr,
+ const struct axlf *xclbin,
+ enum provider_kind kind);
+void xmgnt_region_cleanup_all(struct xrt_device *xdev);
+
+int xmgnt_hot_reset(struct xrt_device *xdev);
+
+/* Getting dtb for specified group. Caller should vfree returned dtb. */
+char *xmgnt_get_dtb(struct xrt_device *xdev, enum provider_kind kind);
+char *xmgnt_get_vbnv(struct xrt_device *xdev);
+int xmgnt_get_provider_uuid(struct xrt_device *xdev,
+ enum provider_kind kind, uuid_t *uuid);
+
+int xmgnt_register_leaf(void);
+void xmgnt_unregister_leaf(void);
+
+#endif /* _XMGNT_H_ */
diff --git a/include/uapi/linux/xrt/xmgnt-ioctl.h b/include/uapi/linux/xrt/xmgnt-ioctl.h
new file mode 100644
index 000000000000..e4ba5335fa3f
--- /dev/null
+++ b/include/uapi/linux/xrt/xmgnt-ioctl.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Copyright (C) 2015-2021, Xilinx Inc
+ *
+ */
+
+/**
+ * DOC: PCIe Kernel Driver for Management Physical Function
+ * Interfaces exposed by the *xrt-mgnt* driver are defined in the file *xmgnt-ioctl.h*.
+ * Core functionality provided by *xrt-mgnt* driver is described in the following table:
+ *
+ * ======================  ============================  ==========================
+ * Functionality           ioctl request code            data format
+ * ======================  ============================  ==========================
+ * 1 FPGA image download   XMGNT_IOCICAPDOWNLOAD_AXLF    xmgnt_ioc_bitstream_axlf
+ * ======================  ============================  ==========================
+ */
+
+#ifndef _XMGNT_IOCTL_H_
+#define _XMGNT_IOCTL_H_
+
+#include <linux/ioctl.h>
+
+#define XMGNT_IOC_MAGIC 'X'
+#define XMGNT_IOC_ICAP_DOWNLOAD_AXLF 0x6
+
+/**
+ * struct xmgnt_ioc_bitstream_axlf - load xclbin (AXLF) device image
+ * used with XMGNT_IOCICAPDOWNLOAD_AXLF ioctl
+ *
+ * @xclbin: Pointer to user's xclbin structure in memory
+ */
+struct xmgnt_ioc_bitstream_axlf {
+ struct axlf *xclbin;
+};
+
+#define XMGNT_IOCICAPDOWNLOAD_AXLF \
+ _IOW(XMGNT_IOC_MAGIC, XMGNT_IOC_ICAP_DOWNLOAD_AXLF, struct xmgnt_ioc_bitstream_axlf)
+
+/*
+ * The following definitions are for binary compatibility with classic XRT management driver
+ */
+#define XCLMGNT_IOCICAPDOWNLOAD_AXLF XMGNT_IOCICAPDOWNLOAD_AXLF
+#define xclmgnt_ioc_bitstream_axlf xmgnt_ioc_bitstream_axlf
+
+#endif
--
2.27.0

2021-04-27 21:02:14

by Lizhi Hou

Subject: [PATCH V5 XRT Alveo 08/20] fpga: xrt: driver infrastructure

Infrastructure code providing APIs for managing leaf driver instance
groups, facilitating inter-leaf driver calls and root calls.
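
For illustration, the inter-leaf call pattern enabled by this code looks
roughly like the following (the endpoint name, command and argument are
placeholders):

  /* Sketch: hold a peer leaf, call into it, then release the hold. */
  struct xrt_device *leaf;
  int rc;

  leaf = xleaf_get_leaf_by_epname(xdev, "ep_foo_00"); /* placeholder endpoint */
  if (!leaf)
          return -ENOENT;
  rc = xleaf_call(leaf, XRT_SOME_LEAF_CMD, &arg); /* placeholder cmd/arg */
  xleaf_put_leaf(xdev, leaf); /* release the hold */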

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/lib/subdev.c | 847 ++++++++++++++++++++++++++++++++++
1 file changed, 847 insertions(+)
create mode 100644 drivers/fpga/xrt/lib/subdev.c

diff --git a/drivers/fpga/xrt/lib/subdev.c b/drivers/fpga/xrt/lib/subdev.c
new file mode 100644
index 000000000000..710ccf2a2121
--- /dev/null
+++ b/drivers/fpga/xrt/lib/subdev.c
@@ -0,0 +1,847 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#include <linux/vmalloc.h>
+#include <linux/slab.h>
+#include "xleaf.h"
+#include "subdev_pool.h"
+#include "lib-drv.h"
+#include "metadata.h"
+
+extern struct bus_type xrt_bus_type;
+
+#define IS_ROOT_DEV(dev) ((dev)->bus != &xrt_bus_type)
+#define XRT_HOLDER_BUF_SZ 1024
+
+static inline struct device *find_root(struct xrt_device *xdev)
+{
+ struct device *d = DEV(xdev);
+
+ while (!IS_ROOT_DEV(d))
+ d = d->parent;
+ return d;
+}
+
+/*
+ * It represents a holder of a subdev. One holder can repeatedly hold a subdev
+ * as long as there is an unhold corresponding to each hold.
+ */
+struct xrt_subdev_holder {
+ struct list_head xsh_holder_list;
+ struct device *xsh_holder;
+ int xsh_count;
+ struct kref xsh_kref;
+};
+
+/*
+ * It represents a specific instance of an xrt driver for a subdev, which
+ * provides services to its clients (another subdev driver or root driver).
+ */
+struct xrt_subdev {
+ struct list_head xs_dev_list;
+ struct list_head xs_holder_list;
+ enum xrt_subdev_id xs_id; /* type of subdev */
+ struct xrt_device *xs_xdev;
+ struct completion xs_holder_comp;
+};
+
+static struct xrt_subdev *xrt_subdev_alloc(void)
+{
+ struct xrt_subdev *sdev = kzalloc(sizeof(*sdev), GFP_KERNEL);
+
+ if (!sdev)
+ return NULL;
+
+ INIT_LIST_HEAD(&sdev->xs_dev_list);
+ INIT_LIST_HEAD(&sdev->xs_holder_list);
+ init_completion(&sdev->xs_holder_comp);
+ return sdev;
+}
+
+int xrt_subdev_root_request(struct xrt_device *self, u32 cmd, void *arg)
+{
+ struct device *dev = DEV(self);
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(self);
+
+ WARN_ON(!pdata->xsp_root_cb);
+ return (*pdata->xsp_root_cb)(dev->parent, pdata->xsp_root_cb_arg, cmd, arg);
+}
+
+/*
+ * Subdev common sysfs nodes.
+ */
+static ssize_t holders_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ ssize_t len;
+ struct xrt_device *xdev = to_xrt_dev(dev);
+ struct xrt_root_get_holders holders = { xdev, buf, XRT_HOLDER_BUF_SZ };
+
+ len = xrt_subdev_root_request(xdev, XRT_ROOT_GET_LEAF_HOLDERS, &holders);
+ if (len >= holders.xpigh_holder_buf_len)
+ return len;
+ buf[len] = '\n';
+ return len + 1;
+}
+static DEVICE_ATTR_RO(holders);
+
+static struct attribute *xrt_subdev_attrs[] = {
+ &dev_attr_holders.attr,
+ NULL,
+};
+
+static ssize_t metadata_output(struct file *filp, struct kobject *kobj,
+ struct bin_attribute *attr, char *buf, loff_t off, size_t count)
+{
+ struct device *dev = kobj_to_dev(kobj);
+ struct xrt_device *xdev = to_xrt_dev(dev);
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(xdev);
+ unsigned char *blob;
+ unsigned long size;
+ ssize_t ret = 0;
+
+ blob = pdata->xsp_dtb;
+ size = xrt_md_size(dev, blob);
+ if (size == XRT_MD_INVALID_LENGTH) {
+ ret = -EINVAL;
+ goto failed;
+ }
+
+ if (off >= size) {
+ dev_dbg(dev, "offset (%lld) beyond total size: %ld\n", off, size);
+ goto failed;
+ }
+
+ if (off + count > size) {
+ dev_dbg(dev, "count (%ld) beyond left bytes: %lld\n", count, size - off);
+ count = size - off;
+ }
+ memcpy(buf, blob + off, count);
+
+ ret = count;
+failed:
+ return ret;
+}
+
+static struct bin_attribute meta_data_attr = {
+ .attr = {
+ .name = "metadata",
+ .mode = 0400
+ },
+ .read = metadata_output,
+ .size = 0
+};
+
+static struct bin_attribute *xrt_subdev_bin_attrs[] = {
+ &meta_data_attr,
+ NULL,
+};
+
+static const struct attribute_group xrt_subdev_attrgroup = {
+ .attrs = xrt_subdev_attrs,
+ .bin_attrs = xrt_subdev_bin_attrs,
+};
+
+/*
+ * Given the device metadata, parse it to get IO ranges and construct
+ * resource array.
+ */
+static int
+xrt_subdev_getres(struct device *parent, enum xrt_subdev_id id,
+ char *dtb, struct resource **res, int *res_num)
+{
+ struct xrt_subdev_platdata *pdata;
+ struct resource *pci_res = NULL;
+ const u64 *bar_range;
+ const u32 *bar_idx;
+ char *ep_name = NULL, *compat = NULL;
+ uint bar;
+ int count1 = 0, count2 = 0, ret;
+
+ if (!dtb)
+ return -EINVAL;
+
+ pdata = DEV_PDATA(to_xrt_dev(parent));
+
+ /* go through metadata and count endpoints in it */
+ xrt_md_get_next_endpoint(parent, dtb, NULL, NULL, &ep_name, &compat);
+ while (ep_name) {
+ ret = xrt_md_get_prop(parent, dtb, ep_name, compat,
+ XRT_MD_PROP_IO_OFFSET, (const void **)&bar_range, NULL);
+ if (!ret)
+ count1++;
+ xrt_md_get_next_endpoint(parent, dtb, ep_name, compat, &ep_name, &compat);
+ }
+ if (!count1)
+ return 0;
+
+ /* allocate a resource array for all endpoints found in the metadata */
+ *res = vzalloc(sizeof(**res) * count1);
+ if (!*res)
+ return -ENOMEM;
+
+ /* go through all endpoints again and get IO range for each endpoint */
+ ep_name = NULL;
+ xrt_md_get_next_endpoint(parent, dtb, NULL, NULL, &ep_name, &compat);
+ while (ep_name) {
+ ret = xrt_md_get_prop(parent, dtb, ep_name, compat,
+ XRT_MD_PROP_IO_OFFSET, (const void **)&bar_range, NULL);
+ if (ret) {
+ /* endpoint has no IO range; advance to the next one */
+ xrt_md_get_next_endpoint(parent, dtb, ep_name, compat, &ep_name, &compat);
+ continue;
+ }
+ xrt_md_get_prop(parent, dtb, ep_name, compat,
+ XRT_MD_PROP_BAR_IDX, (const void **)&bar_idx, NULL);
+ bar = bar_idx ? be32_to_cpu(*bar_idx) : 0;
+ xleaf_get_root_res(to_xrt_dev(parent), bar, &pci_res);
+ (*res)[count2].start = pci_res->start + be64_to_cpu(bar_range[0]);
+ (*res)[count2].end = pci_res->start + be64_to_cpu(bar_range[0]) +
+ be64_to_cpu(bar_range[1]) - 1;
+ (*res)[count2].flags = IORESOURCE_MEM;
+ /* check for conflicting resources */
+ ret = request_resource(pci_res, *res + count2);
+ if (ret) {
+ dev_err(parent, "conflicting resource %pR\n", *res + count2);
+ vfree(*res);
+ *res_num = 0;
+ *res = NULL;
+ return ret;
+ }
+ release_resource(*res + count2);
+
+ (*res)[count2].parent = pci_res;
+
+ xrt_md_find_endpoint(parent, pdata->xsp_dtb, ep_name,
+ compat, &(*res)[count2].name);
+
+ count2++;
+ xrt_md_get_next_endpoint(parent, dtb, ep_name, compat, &ep_name, &compat);
+ }
+
+ WARN_ON(count1 != count2);
+ *res_num = count2;
+
+ return 0;
+}
+
+static inline enum xrt_dev_file_mode
+xleaf_devnode_mode(struct xrt_device *xdev)
+{
+ return DEV_FILE_OPS(xdev)->xsf_mode;
+}
+
+static bool xrt_subdev_cdev_auto_creation(struct xrt_device *xdev)
+{
+ enum xrt_dev_file_mode mode = xleaf_devnode_mode(xdev);
+
+ if (!xleaf_devnode_enabled(xdev))
+ return false;
+
+ return (mode == XRT_DEV_FILE_DEFAULT || mode == XRT_DEV_FILE_MULTI_INST);
+}
+
+static struct xrt_subdev *
+xrt_subdev_create(struct device *parent, enum xrt_subdev_id id,
+ xrt_subdev_root_cb_t pcb, void *pcb_arg, char *dtb)
+{
+ struct xrt_subdev_platdata *pdata = NULL;
+ struct xrt_subdev *sdev = NULL;
+ struct xrt_device *xdev = NULL;
+ struct resource *res = NULL;
+ unsigned long dtb_len = 0;
+ bool dtb_alloced = false;
+ int res_num = 0;
+ size_t pdata_sz;
+ int ret;
+
+ sdev = xrt_subdev_alloc();
+ if (!sdev) {
+ dev_err(parent, "failed to alloc subdev for ID %d", id);
+ return NULL;
+ }
+ sdev->xs_id = id;
+
+ if (!dtb) {
+ ret = xrt_md_create(parent, &dtb);
+ if (ret) {
+ dev_err(parent, "can't create empty dtb: %d", ret);
+ goto fail;
+ }
+ dtb_alloced = true;
+ }
+ xrt_md_pack(parent, dtb);
+ dtb_len = xrt_md_size(parent, dtb);
+ if (dtb_len == XRT_MD_INVALID_LENGTH) {
+ dev_err(parent, "invalid metadata len %ld", dtb_len);
+ goto fail1;
+ }
+ pdata_sz = sizeof(struct xrt_subdev_platdata) + dtb_len;
+
+ /* Prepare platform data passed to subdev. */
+ pdata = vzalloc(pdata_sz);
+ if (!pdata)
+ goto fail1;
+
+ pdata->xsp_root_cb = pcb;
+ pdata->xsp_root_cb_arg = pcb_arg;
+ memcpy(pdata->xsp_dtb, dtb, dtb_len);
+ if (id == XRT_SUBDEV_GRP) {
+ /* Group can only be created by root driver. */
+ pdata->xsp_root_name = dev_name(parent);
+ } else {
+ struct xrt_device *grp = to_xrt_dev(parent);
+
+ /* Leaf can only be created by group driver. */
+ WARN_ON(to_xrt_drv(parent->driver)->subdev_id != XRT_SUBDEV_GRP);
+ pdata->xsp_root_name = DEV_PDATA(grp)->xsp_root_name;
+ }
+
+ /* Create subdev. */
+ if (id != XRT_SUBDEV_GRP) {
+ int rc = xrt_subdev_getres(parent, id, dtb, &res, &res_num);
+
+ if (rc) {
+ dev_err(parent, "failed to get resource for %s: %d",
+ xrt_drv_name(id), rc);
+ goto fail2;
+ }
+ }
+ xdev = xrt_device_register(parent, id, res, res_num, pdata, pdata_sz);
+ vfree(res);
+ if (!xdev) {
+ dev_err(parent, "failed to create subdev for %s", xrt_drv_name(id));
+ goto fail2;
+ }
+ sdev->xs_xdev = xdev;
+
+ if (device_attach(DEV(xdev)) != 1) {
+ xrt_err(xdev, "failed to attach");
+ goto fail3;
+ }
+
+ if (sysfs_create_group(&DEV(xdev)->kobj, &xrt_subdev_attrgroup))
+ xrt_err(xdev, "failed to create sysfs group");
+
+ /*
+ * Create a sysfs symlink under the root for leaves living under
+ * arbitrary groups, for easy access to them.
+ */
+ if (id != XRT_SUBDEV_GRP) {
+ if (sysfs_create_link(&find_root(xdev)->kobj,
+ &DEV(xdev)->kobj, dev_name(DEV(xdev)))) {
+ xrt_err(xdev, "failed to create sysfs link");
+ }
+ }
+
+ /* All done, ready to handle requests through the cdev. */
+ if (xrt_subdev_cdev_auto_creation(xdev))
+ xleaf_devnode_create(xdev, DEV_FILE_OPS(xdev)->xsf_dev_name, NULL);
+
+ vfree(pdata);
+ return sdev;
+
+fail3:
+ xrt_device_unregister(sdev->xs_xdev);
+fail2:
+ vfree(pdata);
+fail1:
+ if (dtb_alloced)
+ vfree(dtb);
+fail:
+ kfree(sdev);
+ return NULL;
+}
+
+static void xrt_subdev_destroy(struct xrt_subdev *sdev)
+{
+ struct xrt_device *xdev = sdev->xs_xdev;
+ struct device *dev = DEV(xdev);
+
+ /* Take down the device node */
+ if (xrt_subdev_cdev_auto_creation(xdev))
+ xleaf_devnode_destroy(xdev);
+ if (sdev->xs_id != XRT_SUBDEV_GRP)
+ sysfs_remove_link(&find_root(xdev)->kobj, dev_name(dev));
+ sysfs_remove_group(&dev->kobj, &xrt_subdev_attrgroup);
+ xrt_device_unregister(xdev);
+ kfree(sdev);
+}
+
+struct xrt_device *
+xleaf_get_leaf(struct xrt_device *xdev, xrt_subdev_match_t match_cb, void *match_arg)
+{
+ int rc;
+ struct xrt_root_get_leaf get_leaf = {
+ xdev, match_cb, match_arg, };
+
+ rc = xrt_subdev_root_request(xdev, XRT_ROOT_GET_LEAF, &get_leaf);
+ if (rc)
+ return NULL;
+ return get_leaf.xpigl_tgt_xdev;
+}
+EXPORT_SYMBOL_GPL(xleaf_get_leaf);
+
+bool xleaf_has_endpoint(struct xrt_device *xdev, const char *endpoint_name)
+{
+ struct resource *res;
+ int i = 0;
+
+ do {
+ res = xrt_get_resource(xdev, IORESOURCE_MEM, i);
+ if (res && !strncmp(res->name, endpoint_name, strlen(res->name) + 1))
+ return true;
+ ++i;
+ } while (res);
+
+ return false;
+}
+EXPORT_SYMBOL_GPL(xleaf_has_endpoint);
+
+int xleaf_put_leaf(struct xrt_device *xdev, struct xrt_device *leaf)
+{
+ struct xrt_root_put_leaf put_leaf = { xdev, leaf };
+
+ return xrt_subdev_root_request(xdev, XRT_ROOT_PUT_LEAF, &put_leaf);
+}
+EXPORT_SYMBOL_GPL(xleaf_put_leaf);
+
+int xleaf_create_group(struct xrt_device *xdev, char *dtb)
+{
+ return xrt_subdev_root_request(xdev, XRT_ROOT_CREATE_GROUP, dtb);
+}
+EXPORT_SYMBOL_GPL(xleaf_create_group);
+
+int xleaf_destroy_group(struct xrt_device *xdev, int instance)
+{
+ return xrt_subdev_root_request(xdev, XRT_ROOT_REMOVE_GROUP, (void *)(uintptr_t)instance);
+}
+EXPORT_SYMBOL_GPL(xleaf_destroy_group);
+
+int xleaf_wait_for_group_bringup(struct xrt_device *xdev)
+{
+ return xrt_subdev_root_request(xdev, XRT_ROOT_WAIT_GROUP_BRINGUP, NULL);
+}
+EXPORT_SYMBOL_GPL(xleaf_wait_for_group_bringup);
+
+static ssize_t
+xrt_subdev_get_holders(struct xrt_subdev *sdev, char *buf, size_t len)
+{
+ const struct list_head *ptr;
+ struct xrt_subdev_holder *h;
+ ssize_t n = 0;
+
+ list_for_each(ptr, &sdev->xs_holder_list) {
+ h = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
+ n += snprintf(buf + n, len - n, "%s:%d ",
+ dev_name(h->xsh_holder), kref_read(&h->xsh_kref));
+ /* Truncation is fine here. Buffer content is only for debugging. */
+ if (n >= (len - 1))
+ break;
+ }
+ return n;
+}
+
+void xrt_subdev_pool_init(struct device *dev, struct xrt_subdev_pool *spool)
+{
+ INIT_LIST_HEAD(&spool->xsp_dev_list);
+ spool->xsp_owner = dev;
+ mutex_init(&spool->xsp_lock);
+ spool->xsp_closing = false;
+}
+
+static void xrt_subdev_free_holder(struct xrt_subdev_holder *holder)
+{
+ list_del(&holder->xsh_holder_list);
+ vfree(holder);
+}
+
+static void xrt_subdev_pool_wait_for_holders(struct xrt_subdev_pool *spool, struct xrt_subdev *sdev)
+{
+ const struct list_head *ptr, *next;
+ char holders[128];
+ struct xrt_subdev_holder *holder;
+ struct mutex *lk = &spool->xsp_lock;
+
+ while (!list_empty(&sdev->xs_holder_list)) {
+ int rc;
+
+ /* It's most likely a bug if we ever enter this loop. */
+ xrt_subdev_get_holders(sdev, holders, sizeof(holders));
+ xrt_err(sdev->xs_xdev, "awaits holders: %s", holders);
+ mutex_unlock(lk);
+ rc = wait_for_completion_killable(&sdev->xs_holder_comp);
+ mutex_lock(lk);
+ if (rc == -ERESTARTSYS) {
+ xrt_err(sdev->xs_xdev, "give up on waiting for holders, clean up now");
+ list_for_each_safe(ptr, next, &sdev->xs_holder_list) {
+ holder = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
+ xrt_subdev_free_holder(holder);
+ }
+ }
+ }
+}
+
+void xrt_subdev_pool_fini(struct xrt_subdev_pool *spool)
+{
+ struct list_head *dl = &spool->xsp_dev_list;
+ struct mutex *lk = &spool->xsp_lock;
+
+ mutex_lock(lk);
+ if (spool->xsp_closing) {
+ mutex_unlock(lk);
+ return;
+ }
+ spool->xsp_closing = true;
+ mutex_unlock(lk);
+
+ /* Remove subdevs in the reverse order in which they were added. */
+ while (!list_empty(dl)) {
+ struct xrt_subdev *sdev = list_first_entry(dl, struct xrt_subdev, xs_dev_list);
+
+ xrt_subdev_pool_wait_for_holders(spool, sdev);
+ list_del(&sdev->xs_dev_list);
+ xrt_subdev_destroy(sdev);
+ }
+}
+
+static struct xrt_subdev_holder *xrt_subdev_find_holder(struct xrt_subdev *sdev,
+ struct device *holder_dev)
+{
+ struct list_head *hl = &sdev->xs_holder_list;
+ struct xrt_subdev_holder *holder;
+ const struct list_head *ptr;
+
+ list_for_each(ptr, hl) {
+ holder = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
+ if (holder->xsh_holder == holder_dev)
+ return holder;
+ }
+ return NULL;
+}
+
+static int xrt_subdev_hold(struct xrt_subdev *sdev, struct device *holder_dev)
+{
+ struct xrt_subdev_holder *holder = xrt_subdev_find_holder(sdev, holder_dev);
+ struct list_head *hl = &sdev->xs_holder_list;
+
+ if (!holder) {
+ holder = vzalloc(sizeof(*holder));
+ if (!holder)
+ return -ENOMEM;
+ holder->xsh_holder = holder_dev;
+ kref_init(&holder->xsh_kref);
+ list_add_tail(&holder->xsh_holder_list, hl);
+ } else {
+ kref_get(&holder->xsh_kref);
+ }
+
+ return 0;
+}
+
+static void xrt_subdev_free_holder_kref(struct kref *kref)
+{
+ struct xrt_subdev_holder *holder = container_of(kref, struct xrt_subdev_holder, xsh_kref);
+
+ xrt_subdev_free_holder(holder);
+}
+
+static int
+xrt_subdev_release(struct xrt_subdev *sdev, struct device *holder_dev)
+{
+ struct xrt_subdev_holder *holder = xrt_subdev_find_holder(sdev, holder_dev);
+ struct list_head *hl = &sdev->xs_holder_list;
+
+ if (!holder) {
+ dev_err(holder_dev, "can't release, %s did not hold %s",
+ dev_name(holder_dev), dev_name(DEV(sdev->xs_xdev)));
+ return -EINVAL;
+ }
+ kref_put(&holder->xsh_kref, xrt_subdev_free_holder_kref);
+
+ /* kref_put above may remove holder from list. */
+ if (list_empty(hl))
+ complete(&sdev->xs_holder_comp);
+ return 0;
+}
+
+int xrt_subdev_pool_add(struct xrt_subdev_pool *spool, enum xrt_subdev_id id,
+ xrt_subdev_root_cb_t pcb, void *pcb_arg, char *dtb)
+{
+ struct mutex *lk = &spool->xsp_lock;
+ struct list_head *dl = &spool->xsp_dev_list;
+ struct xrt_subdev *sdev;
+ int ret = 0;
+
+ sdev = xrt_subdev_create(spool->xsp_owner, id, pcb, pcb_arg, dtb);
+ if (sdev) {
+ mutex_lock(lk);
+ if (spool->xsp_closing) {
+ /* No new subdev when pool is going away. */
+ xrt_err(sdev->xs_xdev, "pool is closing");
+ ret = -ENODEV;
+ } else {
+ list_add(&sdev->xs_dev_list, dl);
+ }
+ mutex_unlock(lk);
+ if (ret)
+ xrt_subdev_destroy(sdev);
+ } else {
+ ret = -EINVAL;
+ }
+
+ ret = ret ? ret : sdev->xs_xdev->instance;
+ return ret;
+}
+
+int xrt_subdev_pool_del(struct xrt_subdev_pool *spool, enum xrt_subdev_id id, int instance)
+{
+ const struct list_head *ptr;
+ struct mutex *lk = &spool->xsp_lock;
+ struct list_head *dl = &spool->xsp_dev_list;
+ struct xrt_subdev *sdev;
+ int ret = -ENOENT;
+
+ mutex_lock(lk);
+ if (spool->xsp_closing) {
+ /* Pool is going away, all subdevs will be gone. */
+ mutex_unlock(lk);
+ return 0;
+ }
+ list_for_each(ptr, dl) {
+ sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+ if (sdev->xs_id != id || sdev->xs_xdev->instance != instance)
+ continue;
+ xrt_subdev_pool_wait_for_holders(spool, sdev);
+ list_del(&sdev->xs_dev_list);
+ ret = 0;
+ break;
+ }
+ mutex_unlock(lk);
+ if (ret)
+ return ret;
+
+ xrt_subdev_destroy(sdev);
+ return 0;
+}
+
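+/*
+ * Look up one subdev in the pool and mark it as held by holder_dev.
+ * 'match' is either a match callback, or one of the special values
+ * XRT_SUBDEV_MATCH_PREV/NEXT, in which case 'arg' is an xrt_device and the
+ * pool is walked relative to it (a NULL 'arg' starts from the tail or head,
+ * respectively).
+ */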
+static int xrt_subdev_pool_get_impl(struct xrt_subdev_pool *spool, xrt_subdev_match_t match,
+ void *arg, struct device *holder_dev, struct xrt_subdev **sdevp)
+{
+ struct xrt_device *xdev = (struct xrt_device *)arg;
+ struct list_head *dl = &spool->xsp_dev_list;
+ struct mutex *lk = &spool->xsp_lock;
+ struct xrt_subdev *sdev = NULL;
+ const struct list_head *ptr;
+ struct xrt_subdev *d = NULL;
+ int ret = -ENOENT;
+
+ mutex_lock(lk);
+
+ if (!xdev) {
+ if (match == XRT_SUBDEV_MATCH_PREV) {
+ sdev = list_empty(dl) ? NULL :
+ list_last_entry(dl, struct xrt_subdev, xs_dev_list);
+ } else if (match == XRT_SUBDEV_MATCH_NEXT) {
+ sdev = list_first_entry_or_null(dl, struct xrt_subdev, xs_dev_list);
+ }
+ }
+
+ list_for_each(ptr, dl) {
+ d = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+ if (match == XRT_SUBDEV_MATCH_PREV || match == XRT_SUBDEV_MATCH_NEXT) {
+ if (d->xs_xdev != xdev)
+ continue;
+ } else {
+ if (!match(d->xs_id, d->xs_xdev, arg))
+ continue;
+ }
+
+ if (match == XRT_SUBDEV_MATCH_PREV)
+ sdev = !list_is_first(ptr, dl) ? list_prev_entry(d, xs_dev_list) : NULL;
+ else if (match == XRT_SUBDEV_MATCH_NEXT)
+ sdev = !list_is_last(ptr, dl) ? list_next_entry(d, xs_dev_list) : NULL;
+ else
+ sdev = d;
+ }
+
+ if (sdev)
+ ret = xrt_subdev_hold(sdev, holder_dev);
+
+ mutex_unlock(lk);
+
+ if (!ret)
+ *sdevp = sdev;
+ return ret;
+}
+
+int xrt_subdev_pool_get(struct xrt_subdev_pool *spool, xrt_subdev_match_t match, void *arg,
+ struct device *holder_dev, struct xrt_device **xdevp)
+{
+ int rc;
+ struct xrt_subdev *sdev;
+
+ rc = xrt_subdev_pool_get_impl(spool, match, arg, holder_dev, &sdev);
+ if (rc) {
+ if (rc != -ENOENT)
+ dev_err(holder_dev, "failed to hold device: %d", rc);
+ return rc;
+ }
+
+ if (!IS_ROOT_DEV(holder_dev)) {
+ xrt_dbg(to_xrt_dev(holder_dev), "%s <<==== %s",
+ dev_name(holder_dev), dev_name(DEV(sdev->xs_xdev)));
+ }
+
+ *xdevp = sdev->xs_xdev;
+ return 0;
+}
+
+static int xrt_subdev_pool_put_impl(struct xrt_subdev_pool *spool, struct xrt_device *xdev,
+ struct device *holder_dev)
+{
+ const struct list_head *ptr;
+ struct mutex *lk = &spool->xsp_lock;
+ struct list_head *dl = &spool->xsp_dev_list;
+ struct xrt_subdev *sdev;
+ int ret = -ENOENT;
+
+ mutex_lock(lk);
+ list_for_each(ptr, dl) {
+ sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+ if (sdev->xs_xdev != xdev)
+ continue;
+ ret = xrt_subdev_release(sdev, holder_dev);
+ break;
+ }
+ mutex_unlock(lk);
+
+ return ret;
+}
+
+int xrt_subdev_pool_put(struct xrt_subdev_pool *spool, struct xrt_device *xdev,
+ struct device *holder_dev)
+{
+ int ret = xrt_subdev_pool_put_impl(spool, xdev, holder_dev);
+
+ if (ret)
+ return ret;
+
+ if (!IS_ROOT_DEV(holder_dev)) {
+ xrt_dbg(to_xrt_dev(holder_dev), "%s <<==X== %s",
+ dev_name(holder_dev), dev_name(DEV(xdev)));
+ }
+ return 0;
+}
+
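+/*
+ * Walk the whole pool via XRT_SUBDEV_MATCH_NEXT, holding each subdev in
+ * turn, and ask the root to deliver event 'e' to it synchronously.
+ */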
+void xrt_subdev_pool_trigger_event(struct xrt_subdev_pool *spool, enum xrt_events e)
+{
+ struct xrt_device *tgt = NULL;
+ struct xrt_subdev *sdev = NULL;
+ struct xrt_event evt;
+
+ while (!xrt_subdev_pool_get_impl(spool, XRT_SUBDEV_MATCH_NEXT,
+ tgt, spool->xsp_owner, &sdev)) {
+ tgt = sdev->xs_xdev;
+ evt.xe_evt = e;
+ evt.xe_subdev.xevt_subdev_id = sdev->xs_id;
+ evt.xe_subdev.xevt_subdev_instance = tgt->instance;
+ xrt_subdev_root_request(tgt, XRT_ROOT_EVENT_SYNC, &evt);
+ xrt_subdev_pool_put_impl(spool, tgt, spool->xsp_owner);
+ }
+}
+
+void xrt_subdev_pool_handle_event(struct xrt_subdev_pool *spool, struct xrt_event *evt)
+{
+ struct xrt_device *tgt = NULL;
+ struct xrt_subdev *sdev = NULL;
+
+ while (!xrt_subdev_pool_get_impl(spool, XRT_SUBDEV_MATCH_NEXT,
+ tgt, spool->xsp_owner, &sdev)) {
+ tgt = sdev->xs_xdev;
+ xleaf_call(tgt, XRT_XLEAF_EVENT, evt);
+ xrt_subdev_pool_put_impl(spool, tgt, spool->xsp_owner);
+ }
+}
+
+ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
+ struct xrt_device *xdev, char *buf, size_t len)
+{
+ const struct list_head *ptr;
+ struct mutex *lk = &spool->xsp_lock;
+ struct list_head *dl = &spool->xsp_dev_list;
+ struct xrt_subdev *sdev;
+ ssize_t ret = 0;
+
+ mutex_lock(lk);
+ list_for_each(ptr, dl) {
+ sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+ if (sdev->xs_xdev != xdev)
+ continue;
+ ret = xrt_subdev_get_holders(sdev, buf, len);
+ break;
+ }
+ mutex_unlock(lk);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_get_holders);
+
+int xleaf_broadcast_event(struct xrt_device *xdev, enum xrt_events evt, bool async)
+{
+ struct xrt_event e = { evt, };
+ enum xrt_root_cmd cmd = async ? XRT_ROOT_EVENT_ASYNC : XRT_ROOT_EVENT_SYNC;
+
+ WARN_ON(evt == XRT_EVENT_POST_CREATION || evt == XRT_EVENT_PRE_REMOVAL);
+ return xrt_subdev_root_request(xdev, cmd, &e);
+}
+EXPORT_SYMBOL_GPL(xleaf_broadcast_event);
+
+void xleaf_hot_reset(struct xrt_device *xdev)
+{
+ xrt_subdev_root_request(xdev, XRT_ROOT_HOT_RESET, NULL);
+}
+EXPORT_SYMBOL_GPL(xleaf_hot_reset);
+
+void xleaf_get_root_res(struct xrt_device *xdev, u32 region_id, struct resource **res)
+{
+ struct xrt_root_get_res arg = { 0 };
+
+ arg.xpigr_region_id = region_id;
+ xrt_subdev_root_request(xdev, XRT_ROOT_GET_RESOURCE, &arg);
+ *res = arg.xpigr_res;
+}
+
+void xleaf_get_root_id(struct xrt_device *xdev, unsigned short *vendor, unsigned short *device,
+ unsigned short *subvendor, unsigned short *subdevice)
+{
+ struct xrt_root_get_id id = { 0 };
+
+ WARN_ON(!vendor && !device && !subvendor && !subdevice);
+
+ xrt_subdev_root_request(xdev, XRT_ROOT_GET_ID, (void *)&id);
+ if (vendor)
+ *vendor = id.xpigi_vendor_id;
+ if (device)
+ *device = id.xpigi_device_id;
+ if (subvendor)
+ *subvendor = id.xpigi_sub_vendor_id;
+ if (subdevice)
+ *subdevice = id.xpigi_sub_device_id;
+}
+
+struct device *xleaf_register_hwmon(struct xrt_device *xdev, const char *name, void *drvdata,
+ const struct attribute_group **grps)
+{
+ struct xrt_root_hwmon hm = { true, name, drvdata, grps, };
+
+ xrt_subdev_root_request(xdev, XRT_ROOT_HWMON, (void *)&hm);
+ return hm.xpih_hwmon_dev;
+}
+
+void xleaf_unregister_hwmon(struct xrt_device *xdev, struct device *hwmon)
+{
+ struct xrt_root_hwmon hm = { false, };
+
+ hm.xpih_hwmon_dev = hwmon;
+ xrt_subdev_root_request(xdev, XRT_ROOT_HWMON, (void *)&hm);
+}
--
2.27.0

2021-04-27 21:02:15

by Lizhi Hou

[permalink] [raw]
Subject: [PATCH V5 XRT Alveo 11/20] fpga: xrt: fpga-mgr and region implementation for xclbin download

Add fpga-mgr and FPGA region implementation for xclbin download. These
routines will be called from the main xrt driver.
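
For context, below is a minimal caller sketch. It is not part of this patch
and is purely illustrative: the function name example_load_xclbin is a
placeholder, it assumes the caller already holds a validated axlf image, and
it creates and removes the fpga_manager in one function, whereas the real
driver keeps the manager for the life of the device. Only the functions and
the XMGNT_BLP kind introduced by this patch are used.

static int example_load_xclbin(struct xrt_device *xdev, const struct axlf *xclbin)
{
	struct fpga_manager *fmgr;
	int rc;

	fmgr = xmgnt_fmgr_probe(xdev);
	if (IS_ERR(fmgr))
		return PTR_ERR(fmgr);

	/* XMGNT_BLP denotes base firmware; user xclbins pass a different kind */
	rc = xmgnt_process_xclbin(xdev, fmgr, xclbin, XMGNT_BLP);

	xmgnt_fmgr_remove(fmgr);
	return rc;
}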

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/mgnt/xmgnt-main-region.c | 485 ++++++++++++++++++++++
drivers/fpga/xrt/mgnt/xrt-mgr.c | 190 +++++++++
drivers/fpga/xrt/mgnt/xrt-mgr.h | 16 +
3 files changed, 691 insertions(+)
create mode 100644 drivers/fpga/xrt/mgnt/xmgnt-main-region.c
create mode 100644 drivers/fpga/xrt/mgnt/xrt-mgr.c
create mode 100644 drivers/fpga/xrt/mgnt/xrt-mgr.h

diff --git a/drivers/fpga/xrt/mgnt/xmgnt-main-region.c b/drivers/fpga/xrt/mgnt/xmgnt-main-region.c
new file mode 100644
index 000000000000..398fc816b178
--- /dev/null
+++ b/drivers/fpga/xrt/mgnt/xmgnt-main-region.c
@@ -0,0 +1,485 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * FPGA Region Support for Xilinx Alveo
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors: [email protected]
+ */
+
+#include <linux/uuid.h>
+#include <linux/fpga/fpga-bridge.h>
+#include <linux/fpga/fpga-region.h>
+#include <linux/slab.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/axigate.h"
+#include "xclbin-helper.h"
+#include "xmgnt.h"
+
+struct xmgnt_bridge {
+ struct xrt_device *xdev;
+ const char *bridge_name;
+};
+
+struct xmgnt_region {
+ struct xrt_device *xdev;
+ struct fpga_region *region;
+ struct fpga_compat_id compat_id;
+ uuid_t interface_uuid;
+ struct fpga_bridge *bridge;
+ int group_instance;
+ uuid_t depend_uuid;
+ struct list_head list;
+};
+
+struct xmgnt_region_match_arg {
+ struct xrt_device *xdev;
+ uuid_t *uuids;
+ u32 uuid_num;
+};
+
+static int xmgnt_br_enable_set(struct fpga_bridge *bridge, bool enable)
+{
+ struct xmgnt_bridge *br_data = (struct xmgnt_bridge *)bridge->priv;
+ struct xrt_device *axigate_leaf;
+ int rc;
+
+ axigate_leaf = xleaf_get_leaf_by_epname(br_data->xdev, br_data->bridge_name);
+ if (!axigate_leaf) {
+ xrt_err(br_data->xdev, "failed to get leaf %s",
+ br_data->bridge_name);
+ return -ENOENT;
+ }
+
+ if (enable)
+ rc = xleaf_call(axigate_leaf, XRT_AXIGATE_OPEN, NULL);
+ else
+ rc = xleaf_call(axigate_leaf, XRT_AXIGATE_CLOSE, NULL);
+
+ if (rc) {
+ xrt_err(br_data->xdev, "failed to %s gate %s, rc %d",
+ (enable ? "free" : "freeze"), br_data->bridge_name,
+ rc);
+ }
+
+ xleaf_put_leaf(br_data->xdev, axigate_leaf);
+
+ return rc;
+}
+
+static const struct fpga_bridge_ops xmgnt_bridge_ops = {
+ .enable_set = xmgnt_br_enable_set
+};
+
+static void xmgnt_destroy_bridge(struct fpga_bridge *br)
+{
+ struct xmgnt_bridge *br_data = br->priv;
+
+ if (!br_data)
+ return;
+
+ xrt_info(br_data->xdev, "destroy fpga bridge %s", br_data->bridge_name);
+ fpga_bridge_unregister(br);
+
+ devm_kfree(DEV(br_data->xdev), br_data);
+
+ fpga_bridge_free(br);
+}
+
+static struct fpga_bridge *xmgnt_create_bridge(struct xrt_device *xdev,
+ char *dtb)
+{
+ struct fpga_bridge *br = NULL;
+ struct xmgnt_bridge *br_data;
+ const char *gate;
+ int rc;
+
+ br_data = devm_kzalloc(DEV(xdev), sizeof(*br_data), GFP_KERNEL);
+ if (!br_data)
+ return NULL;
+ br_data->xdev = xdev;
+
+ br_data->bridge_name = XRT_MD_NODE_GATE_ULP;
+ rc = xrt_md_find_endpoint(&xdev->dev, dtb, XRT_MD_NODE_GATE_ULP,
+ NULL, &gate);
+ if (rc) {
+ br_data->bridge_name = XRT_MD_NODE_GATE_PLP;
+ rc = xrt_md_find_endpoint(&xdev->dev, dtb, XRT_MD_NODE_GATE_PLP,
+ NULL, &gate);
+ }
+ if (rc) {
+ xrt_err(xdev, "failed to get axigate, rc %d", rc);
+ goto failed;
+ }
+
+ br = fpga_bridge_create(DEV(xdev), br_data->bridge_name,
+ &xmgnt_bridge_ops, br_data);
+ if (!br) {
+ xrt_err(xdev, "failed to create bridge");
+ goto failed;
+ }
+
+ rc = fpga_bridge_register(br);
+ if (rc) {
+ xrt_err(xdev, "failed to register bridge, rc %d", rc);
+ goto failed;
+ }
+
+ xrt_info(xdev, "created fpga bridge %s", br_data->bridge_name);
+
+ return br;
+
+failed:
+ if (br)
+ fpga_bridge_free(br);
+ if (br_data)
+ devm_kfree(DEV(xdev), br_data);
+
+ return NULL;
+}
+
+static void xmgnt_destroy_region(struct fpga_region *region)
+{
+ struct xmgnt_region *r_data = region->priv;
+
+ xrt_info(r_data->xdev, "destroy fpga region %llx.%llx",
+ region->compat_id->id_h, region->compat_id->id_l);
+
+ fpga_region_unregister(region);
+
+ if (r_data->group_instance > 0)
+ xleaf_destroy_group(r_data->xdev, r_data->group_instance);
+
+ if (r_data->bridge)
+ xmgnt_destroy_bridge(r_data->bridge);
+
+ if (r_data->region->info) {
+ fpga_image_info_free(r_data->region->info);
+ r_data->region->info = NULL;
+ }
+
+ fpga_region_free(region);
+
+ devm_kfree(DEV(r_data->xdev), r_data);
+}
+
+static int xmgnt_region_match(struct device *dev, const void *data)
+{
+ const struct xmgnt_region_match_arg *arg = data;
+ const struct fpga_region *match_region;
+ uuid_t compat_uuid;
+ int i;
+
+ if (dev->parent != &arg->xdev->dev)
+ return false;
+
+ match_region = to_fpga_region(dev);
+ /*
+ * The device tree provides both parent and child uuids for an
+ * xclbin in one array. Here we try both uuids to see if either
+ * matches the target region's compat_id. Strictly speaking we should
+ * only match the xclbin's parent uuid against the target region's
+ * compat_id, but since the uuids are unique by design, comparing
+ * both does no harm.
+ */
+ import_uuid(&compat_uuid, (const char *)match_region->compat_id);
+ for (i = 0; i < arg->uuid_num; i++) {
+ if (uuid_equal(&compat_uuid, &arg->uuids[i]))
+ return true;
+ }
+
+ return false;
+}
+
+static int xmgnt_region_match_base(struct device *dev, const void *data)
+{
+ const struct xmgnt_region_match_arg *arg = data;
+ const struct fpga_region *match_region;
+ const struct xmgnt_region *r_data;
+
+ if (dev->parent != &arg->xdev->dev)
+ return false;
+
+ match_region = to_fpga_region(dev);
+ r_data = match_region->priv;
+ if (uuid_is_null(&r_data->depend_uuid))
+ return true;
+
+ return false;
+}
+
+static int xmgnt_region_match_by_uuid(struct device *dev, const void *data)
+{
+ const struct xmgnt_region_match_arg *arg = data;
+ const struct fpga_region *match_region;
+ const struct xmgnt_region *r_data;
+
+ if (dev->parent != &arg->xdev->dev)
+ return false;
+
+ if (arg->uuid_num != 1)
+ return false;
+
+ match_region = to_fpga_region(dev);
+ r_data = match_region->priv;
+ if (uuid_equal(&r_data->depend_uuid, arg->uuids))
+ return true;
+
+ return false;
+}
+
+static void xmgnt_region_cleanup(struct fpga_region *region)
+{
+ struct xmgnt_region *r_data = region->priv, *pdata, *temp;
+ struct xrt_device *xdev = r_data->xdev;
+ struct xmgnt_region_match_arg arg = { 0 };
+ struct fpga_region *match_region = NULL;
+ struct device *start_dev = NULL;
+ LIST_HEAD(free_list);
+ uuid_t compat_uuid;
+
+ list_add_tail(&r_data->list, &free_list);
+ arg.xdev = xdev;
+ arg.uuid_num = 1;
+ arg.uuids = &compat_uuid;
+
+ /* find all regions depending on this region */
+ list_for_each_entry_safe(pdata, temp, &free_list, list) {
+ import_uuid(arg.uuids, (const char *)pdata->region->compat_id);
+ start_dev = NULL;
+ while ((match_region = fpga_region_class_find(start_dev, &arg,
+ xmgnt_region_match_by_uuid))) {
+ pdata = match_region->priv;
+ list_add_tail(&pdata->list, &free_list);
+ start_dev = &match_region->dev;
+ put_device(&match_region->dev);
+ }
+ }
+
+ list_del(&r_data->list);
+
+ list_for_each_entry_safe_reverse(pdata, temp, &free_list, list)
+ xmgnt_destroy_region(pdata->region);
+
+ if (r_data->group_instance > 0) {
+ xleaf_destroy_group(xdev, r_data->group_instance);
+ r_data->group_instance = -1;
+ }
+ if (r_data->region->info) {
+ fpga_image_info_free(r_data->region->info);
+ r_data->region->info = NULL;
+ }
+}
+
+void xmgnt_region_cleanup_all(struct xrt_device *xdev)
+{
+ struct xmgnt_region_match_arg arg = { 0 };
+ struct fpga_region *base_region;
+
+ arg.xdev = xdev;
+
+ while ((base_region = fpga_region_class_find(NULL, &arg, xmgnt_region_match_base))) {
+ put_device(&base_region->dev);
+
+ xmgnt_region_cleanup(base_region);
+ xmgnt_destroy_region(base_region);
+ }
+}
+
+/*
+ * Program a region with an xclbin image, then bring up the subdevs and the
+ * group object that contains them.
+ */
+static int xmgnt_region_program(struct fpga_region *region, const void *xclbin, char *dtb)
+{
+ const struct axlf *xclbin_obj = xclbin;
+ struct fpga_image_info *info;
+ struct xrt_device *xdev;
+ struct xmgnt_region *r_data;
+ int rc;
+
+ r_data = region->priv;
+ xdev = r_data->xdev;
+
+ info = fpga_image_info_alloc(&xdev->dev);
+ if (!info)
+ return -ENOMEM;
+
+ info->buf = xclbin;
+ info->count = xclbin_obj->header.length;
+ info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
+ region->info = info;
+ rc = fpga_region_program_fpga(region);
+ if (rc) {
+ xrt_err(xdev, "programming xclbin failed, rc %d", rc);
+ return rc;
+ }
+
+ /* free bridges to allow reprogram */
+ if (region->get_bridges)
+ fpga_bridges_put(&region->bridge_list);
+
+ /*
+ * Next, bring up the subdevs for this region; they are managed by
+ * this region's own group object.
+ */
+ r_data->group_instance = xleaf_create_group(xdev, dtb);
+ if (r_data->group_instance < 0) {
+ xrt_err(xdev, "failed to create group, rc %d",
+ r_data->group_instance);
+ rc = r_data->group_instance;
+ return rc;
+ }
+
+ rc = xleaf_wait_for_group_bringup(xdev);
+ if (rc)
+ xrt_err(xdev, "group bringup failed, rc %d", rc);
+ return rc;
+}
+
+static int xmgnt_get_bridges(struct fpga_region *region)
+{
+ struct xmgnt_region *r_data = region->priv;
+ struct device *dev = &r_data->xdev->dev;
+
+ return fpga_bridge_get_to_list(dev, region->info, &region->bridge_list);
+}
+
+/*
+ * Program/create FPGA regions based on input xclbin file.
+ * 1. Identify a matching existing region for this xclbin
+ * 2. Tear down any previous objects for the found region
+ * 3. Program this region with input xclbin
+ * 4. Iterate over this region's interface uuids to determine if it defines any
+ * child region. Create fpga_region for the child region.
+ */
+int xmgnt_process_xclbin(struct xrt_device *xdev,
+ struct fpga_manager *fmgr,
+ const struct axlf *xclbin,
+ enum provider_kind kind)
+{
+ struct fpga_region *region, *compat_region = NULL;
+ struct xmgnt_region_match_arg arg = { 0 };
+ struct xmgnt_region *r_data;
+ uuid_t compat_uuid;
+ char *dtb = NULL;
+ int rc, i;
+
+ rc = xrt_xclbin_get_metadata(DEV(xdev), xclbin, &dtb);
+ if (rc) {
+ xrt_err(xdev, "failed to get dtb: %d", rc);
+ goto failed;
+ }
+
+ rc = xrt_md_get_interface_uuids(DEV(xdev), dtb, 0, NULL);
+ if (rc < 0) {
+ xrt_err(xdev, "failed to get intf uuid");
+ rc = -EINVAL;
+ goto failed;
+ }
+ arg.uuid_num = rc;
+ arg.uuids = kcalloc(arg.uuid_num, sizeof(uuid_t), GFP_KERNEL);
+ if (!arg.uuids) {
+ rc = -ENOMEM;
+ goto failed;
+ }
+ arg.xdev = xdev;
+
+ rc = xrt_md_get_interface_uuids(DEV(xdev), dtb, arg.uuid_num, arg.uuids);
+ if (rc != arg.uuid_num) {
+ xrt_err(xdev, "got %d uuids, expected %d", rc, arg.uuid_num);
+ rc = -EINVAL;
+ goto failed;
+ }
+
+ /* if this is not base firmware, search for a compatible region */
+ if (kind != XMGNT_BLP) {
+ compat_region = fpga_region_class_find(NULL, &arg, xmgnt_region_match);
+ if (!compat_region) {
+ xrt_err(xdev, "failed to get compatible region");
+ rc = -ENOENT;
+ goto failed;
+ }
+
+ xmgnt_region_cleanup(compat_region);
+
+ rc = xmgnt_region_program(compat_region, xclbin, dtb);
+ if (rc) {
+ xrt_err(xdev, "failed to program region");
+ goto failed;
+ }
+ }
+
+ if (compat_region)
+ import_uuid(&compat_uuid, (const char *)compat_region->compat_id);
+
+ /* create all the new regions contained in this xclbin */
+ for (i = 0; i < arg.uuid_num; i++) {
+ if (compat_region && uuid_equal(&compat_uuid, &arg.uuids[i])) {
+ /* region for this interface already exists */
+ continue;
+ }
+
+ region = fpga_region_create(DEV(xdev), fmgr, xmgnt_get_bridges);
+ if (!region) {
+ xrt_err(xdev, "failed to create fpga region");
+ rc = -EFAULT;
+ goto failed;
+ }
+ r_data = devm_kzalloc(DEV(xdev), sizeof(*r_data), GFP_KERNEL);
+ if (!r_data) {
+ rc = -ENOMEM;
+ fpga_region_free(region);
+ goto failed;
+ }
+ r_data->xdev = xdev;
+ r_data->region = region;
+ r_data->group_instance = -1;
+ uuid_copy(&r_data->interface_uuid, &arg.uuids[i]);
+ if (compat_region)
+ import_uuid(&r_data->depend_uuid, (const char *)compat_region->compat_id);
+ r_data->bridge = xmgnt_create_bridge(xdev, dtb);
+ if (!r_data->bridge) {
+ xrt_err(xdev, "failed to create fpga bridge");
+ rc = -EFAULT;
+ devm_kfree(DEV(xdev), r_data);
+ fpga_region_free(region);
+ goto failed;
+ }
+
+ region->compat_id = &r_data->compat_id;
+ export_uuid((char *)region->compat_id, &r_data->interface_uuid);
+ region->priv = r_data;
+
+ rc = fpga_region_register(region);
+ if (rc) {
+ xrt_err(xdev, "failed to register fpga region");
+ xmgnt_destroy_bridge(r_data->bridge);
+ fpga_region_free(region);
+ devm_kfree(DEV(xdev), r_data);
+ goto failed;
+ }
+
+ xrt_info(xdev, "created fpga region %llx.%llx",
+ region->compat_id->id_h, region->compat_id->id_l);
+ }
+
+ if (compat_region)
+ put_device(&compat_region->dev);
+ vfree(dtb);
+ kfree(arg.uuids);
+ return 0;
+
+failed:
+ if (compat_region) {
+ put_device(&compat_region->dev);
+ xmgnt_region_cleanup(compat_region);
+ } else {
+ xmgnt_region_cleanup_all(xdev);
+ }
+
+ vfree(dtb);
+ kfree(arg.uuids);
+ return rc;
+}
diff --git a/drivers/fpga/xrt/mgnt/xrt-mgr.c b/drivers/fpga/xrt/mgnt/xrt-mgr.c
new file mode 100644
index 000000000000..354747f3e872
--- /dev/null
+++ b/drivers/fpga/xrt/mgnt/xrt-mgr.c
@@ -0,0 +1,190 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * FPGA Manager Support for Xilinx Alveo
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors: [email protected]
+ */
+
+#include <linux/cred.h>
+#include <linux/efi.h>
+#include <linux/fpga/fpga-mgr.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "xclbin-helper.h"
+#include "xleaf.h"
+#include "xrt-mgr.h"
+#include "xleaf/axigate.h"
+#include "xleaf/icap.h"
+#include "xmgnt.h"
+
+struct xfpga_class {
+ struct xrt_device *xdev;
+ char name[64];
+};
+
+/*
+ * xclbin download plumbing -- find the download subsystem, ICAP and
+ * pass the xclbin for heavy lifting
+ */
+static int xmgnt_download_bitstream(struct xrt_device *xdev,
+ const struct axlf *xclbin)
+{
+ struct xclbin_bit_head_info bit_header = { 0 };
+ struct xrt_device *icap_leaf = NULL;
+ struct xrt_icap_wr arg;
+ char *bitstream = NULL;
+ u64 bit_len;
+ int ret;
+
+ ret = xrt_xclbin_get_section(DEV(xdev), xclbin, BITSTREAM, (void **)&bitstream, &bit_len);
+ if (ret) {
+ xrt_err(xdev, "bitstream not found");
+ return -ENOENT;
+ }
+ ret = xrt_xclbin_parse_bitstream_header(DEV(xdev), bitstream,
+ XCLBIN_HWICAP_BITFILE_BUF_SZ,
+ &bit_header);
+ if (ret) {
+ ret = -EINVAL;
+ xrt_err(xdev, "invalid bitstream header");
+ goto fail;
+ }
+ if (bit_header.header_length + bit_header.bitstream_length > bit_len) {
+ ret = -EINVAL;
+ xrt_err(xdev, "invalid bitstream length. header %d, bitstream %d, section len %lld",
+ bit_header.header_length, bit_header.bitstream_length, bit_len);
+ goto fail;
+ }
+
+ icap_leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_ICAP, XRT_INVALID_DEVICE_INST);
+ if (!icap_leaf) {
+ ret = -ENODEV;
+ xrt_err(xdev, "icap does not exist");
+ goto fail;
+ }
+ arg.xiiw_bit_data = bitstream + bit_header.header_length;
+ arg.xiiw_data_len = bit_header.bitstream_length;
+ ret = xleaf_call(icap_leaf, XRT_ICAP_WRITE, &arg);
+ if (ret) {
+ xrt_err(xdev, "write bitstream failed, ret = %d", ret);
+ xleaf_put_leaf(xdev, icap_leaf);
+ goto fail;
+ }
+
+ xleaf_put_leaf(xdev, icap_leaf);
+ vfree(bitstream);
+
+ return 0;
+
+fail:
+ vfree(bitstream);
+
+ return ret;
+}
+
+/*
+ * No HW preparation work is done here since the full xclbin is needed
+ * for its sanity check.
+ */
+static int xmgnt_pr_write_init(struct fpga_manager *mgr,
+ struct fpga_image_info *info,
+ const char *buf, size_t count)
+{
+ const struct axlf *bin = (const struct axlf *)buf;
+ struct xfpga_class *obj = mgr->priv;
+
+ if (!(info->flags & FPGA_MGR_PARTIAL_RECONFIG)) {
+ xrt_info(obj->xdev, "%s only supports partial reconfiguration\n", obj->name);
+ return -EINVAL;
+ }
+
+ if (count < sizeof(struct axlf))
+ return -EINVAL;
+
+ if (count > bin->header.length)
+ return -EINVAL;
+
+ xrt_info(obj->xdev, "Prepare download of xclbin %pUb of length %lld B",
+ &bin->header.uuid, bin->header.length);
+
+ return 0;
+}
+
+/*
+ * The implementation requires the full xclbin image before we can start
+ * programming the hardware via the ICAP subsystem. The full image is required
+ * for checking the validity of xclbin and walking the sections to
+ * discover the bitstream.
+ */
+static int xmgnt_pr_write(struct fpga_manager *mgr,
+ const char *buf, size_t count)
+{
+ const struct axlf *bin = (const struct axlf *)buf;
+ struct xfpga_class *obj = mgr->priv;
+
+ if (bin->header.length != count)
+ return -EINVAL;
+
+ return xmgnt_download_bitstream(obj->xdev, bin);
+}
+
+static int xmgnt_pr_write_complete(struct fpga_manager *mgr,
+ struct fpga_image_info *info)
+{
+ const struct axlf *bin = (const struct axlf *)info->buf;
+ struct xfpga_class *obj = mgr->priv;
+
+ xrt_info(obj->xdev, "Finished download of xclbin %pUb",
+ &bin->header.uuid);
+ return 0;
+}
+
+static enum fpga_mgr_states xmgnt_pr_state(struct fpga_manager *mgr)
+{
+ return FPGA_MGR_STATE_UNKNOWN;
+}
+
+static const struct fpga_manager_ops xmgnt_pr_ops = {
+ .initial_header_size = sizeof(struct axlf),
+ .write_init = xmgnt_pr_write_init,
+ .write = xmgnt_pr_write,
+ .write_complete = xmgnt_pr_write_complete,
+ .state = xmgnt_pr_state,
+};
+
+struct fpga_manager *xmgnt_fmgr_probe(struct xrt_device *xdev)
+{
+ struct xfpga_class *obj = devm_kzalloc(DEV(xdev), sizeof(struct xfpga_class),
+ GFP_KERNEL);
+ struct fpga_manager *fmgr = NULL;
+ int ret = 0;
+
+ if (!obj)
+ return ERR_PTR(-ENOMEM);
+
+ snprintf(obj->name, sizeof(obj->name), "Xilinx Alveo FPGA Manager");
+ obj->xdev = xdev;
+ fmgr = fpga_mgr_create(&xdev->dev,
+ obj->name,
+ &xmgnt_pr_ops,
+ obj);
+ if (!fmgr)
+ return ERR_PTR(-ENOMEM);
+
+ ret = fpga_mgr_register(fmgr);
+ if (ret) {
+ fpga_mgr_free(fmgr);
+ return ERR_PTR(ret);
+ }
+ return fmgr;
+}
+
+int xmgnt_fmgr_remove(struct fpga_manager *fmgr)
+{
+ fpga_mgr_unregister(fmgr);
+ return 0;
+}
diff --git a/drivers/fpga/xrt/mgnt/xrt-mgr.h b/drivers/fpga/xrt/mgnt/xrt-mgr.h
new file mode 100644
index 000000000000..d505a770ea9f
--- /dev/null
+++ b/drivers/fpga/xrt/mgnt/xrt-mgr.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors: [email protected]
+ */
+
+#ifndef _XRT_MGR_H_
+#define _XRT_MGR_H_
+
+#include <linux/fpga/fpga-mgr.h>
+
+struct fpga_manager *xmgnt_fmgr_probe(struct xrt_device *xdev);
+int xmgnt_fmgr_remove(struct fpga_manager *fmgr);
+
+#endif /* _XRT_MGR_H_ */
--
2.27.0

2021-04-27 21:02:16

by Lizhi Hou

[permalink] [raw]
Subject: [PATCH V5 XRT Alveo 09/20] fpga: xrt: management physical function driver (root)

Add the PCIe device driver which attaches to the management function on
Alveo devices. It instantiates one or more group drivers which, in turn,
instantiate xrt drivers. The instantiation of group and xrt drivers is
completely dtb driven.
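
For orientation, the dtb-driven bring-up implemented in this patch boils
down to the sequence below. This is a condensed view of xmgnt_probe() using
only functions from this patch; all error handling is omitted.

	xmgnt_config_pci(xm);                    /* enable device, AER, readrq */
	xroot_probe(&pdev->dev, &xmgnt_xroot_pf_cb, &xm->root);
	xmgnt_create_root_metadata(xm, &dtb);    /* build root dtb from VSEC */
	xroot_create_group(xm->root, dtb);       /* groups create leaf devices */
	xroot_wait_for_bringup(xm->root);
	xroot_broadcast(xm->root, XRT_EVENT_POST_CREATION);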

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/mgnt/root.c | 419 +++++++++++++++++++++++++++++++++++
1 file changed, 419 insertions(+)
create mode 100644 drivers/fpga/xrt/mgnt/root.c

diff --git a/drivers/fpga/xrt/mgnt/root.c b/drivers/fpga/xrt/mgnt/root.c
new file mode 100644
index 000000000000..6e362e9d4b59
--- /dev/null
+++ b/drivers/fpga/xrt/mgnt/root.c
@@ -0,0 +1,419 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo Management Function Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/aer.h>
+#include <linux/vmalloc.h>
+#include <linux/delay.h>
+
+#include "xroot.h"
+#include "xmgnt.h"
+#include "metadata.h"
+
+#define XMGNT_MODULE_NAME "xrt-mgnt"
+#define XMGNT_DRIVER_VERSION "4.0.0"
+
+#define XMGNT_PDEV(xm) ((xm)->pdev)
+#define XMGNT_DEV(xm) (&(XMGNT_PDEV(xm)->dev))
+#define xmgnt_err(xm, fmt, args...) \
+ dev_err(XMGNT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define xmgnt_warn(xm, fmt, args...) \
+ dev_warn(XMGNT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define xmgnt_info(xm, fmt, args...) \
+ dev_info(XMGNT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define xmgnt_dbg(xm, fmt, args...) \
+ dev_dbg(XMGNT_DEV(xm), "%s: " fmt, __func__, ##args)
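+/*
+ * XMGNT_DEV_ID() identifies the card by PCI domain and bus with the devfn
+ * masked off, so that the config space of every function on the same bus
+ * can be saved and restored around a hot reset.
+ */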
+#define XMGNT_DEV_ID(_pcidev) \
+ ({ typeof(_pcidev) (pcidev) = (_pcidev); \
+ ((pci_domain_nr((pcidev)->bus) << 16) | \
+ PCI_DEVID((pcidev)->bus->number, 0)); })
+#define XRT_VSEC_ID 0x20
+#define XRT_MAX_READRQ 512
+
+static struct class *xmgnt_class;
+
+/* PCI Device IDs */
+/*
+ * The golden image is preloaded on the device when it is shipped to the
+ * customer. The customer can then load other shells (from Xilinx or another
+ * vendor). If something goes wrong with a shell, the customer can always go
+ * back to the golden image and start over.
+ */
+#define PCI_DEVICE_ID_U50_GOLDEN 0xD020
+#define PCI_DEVICE_ID_U50 0x5020
+static const struct pci_device_id xmgnt_pci_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_XILINX, PCI_DEVICE_ID_U50_GOLDEN), }, /* Alveo U50 (golden) */
+ { PCI_DEVICE(PCI_VENDOR_ID_XILINX, PCI_DEVICE_ID_U50), }, /* Alveo U50 */
+ { 0, }
+};
+
+struct xmgnt {
+ struct pci_dev *pdev;
+ void *root;
+
+ bool ready;
+};
+
+static int xmgnt_config_pci(struct xmgnt *xm)
+{
+ struct pci_dev *pdev = XMGNT_PDEV(xm);
+ int rc;
+
+ rc = pcim_enable_device(pdev);
+ if (rc < 0) {
+ xmgnt_err(xm, "failed to enable device: %d", rc);
+ return rc;
+ }
+
+ rc = pci_enable_pcie_error_reporting(pdev);
+ if (rc)
+ xmgnt_warn(xm, "failed to enable AER: %d", rc);
+
+ pci_set_master(pdev);
+
+ rc = pcie_get_readrq(pdev);
+ if (rc > XRT_MAX_READRQ)
+ pcie_set_readrq(pdev, XRT_MAX_READRQ);
+ return 0;
+}
+
+static int xmgnt_match_slot_and_save(struct device *dev, void *data)
+{
+ struct xmgnt *xm = data;
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ if (XMGNT_DEV_ID(pdev) == XMGNT_DEV_ID(xm->pdev)) {
+ pci_cfg_access_lock(pdev);
+ pci_save_state(pdev);
+ }
+
+ return 0;
+}
+
+static void xmgnt_pci_save_config_all(struct xmgnt *xm)
+{
+ bus_for_each_dev(&pci_bus_type, NULL, xm, xmgnt_match_slot_and_save);
+}
+
+static int xmgnt_match_slot_and_restore(struct device *dev, void *data)
+{
+ struct xmgnt *xm = data;
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ if (XMGNT_DEV_ID(pdev) == XMGNT_DEV_ID(xm->pdev)) {
+ pci_restore_state(pdev);
+ pci_cfg_access_unlock(pdev);
+ }
+
+ return 0;
+}
+
+static void xmgnt_pci_restore_config_all(struct xmgnt *xm)
+{
+ bus_for_each_dev(&pci_bus_type, NULL, xm, xmgnt_match_slot_and_restore);
+}
+
+static void xmgnt_root_hot_reset(struct device *dev)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct pci_bus *bus;
+ u16 pci_cmd, devctl;
+ struct xmgnt *xm;
+ u8 pci_bctl;
+ int i, ret;
+
+ xm = pci_get_drvdata(pdev);
+ xmgnt_info(xm, "hot reset start");
+ xmgnt_pci_save_config_all(xm);
+ pci_disable_device(pdev);
+ bus = pdev->bus;
+
+ /*
+ * When flipping the SBR bit, device can fall off the bus. This is
+ * usually no problem at all so long as drivers are working properly
+ * after SBR. However, some systems complain bitterly when the device
+ * falls off the bus.
+ * The quick solution is to temporarily disable the SERR reporting of
+ * switch port during SBR.
+ */
+
+ pci_read_config_word(bus->self, PCI_COMMAND, &pci_cmd);
+ pci_write_config_word(bus->self, PCI_COMMAND, (pci_cmd & ~PCI_COMMAND_SERR));
+ pcie_capability_read_word(bus->self, PCI_EXP_DEVCTL, &devctl);
+ pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, (devctl & ~PCI_EXP_DEVCTL_FERE));
+ pci_read_config_byte(bus->self, PCI_BRIDGE_CONTROL, &pci_bctl);
+ pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl | PCI_BRIDGE_CTL_BUS_RESET);
+ msleep(100);
+ pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
+ ssleep(1);
+
+ pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, devctl);
+ pci_write_config_word(bus->self, PCI_COMMAND, pci_cmd);
+
+ ret = pci_enable_device(pdev);
+ if (ret)
+ xmgnt_err(xm, "failed to enable device, ret %d", ret);
+
+ for (i = 0; i < 300; i++) {
+ pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
+ if (pci_cmd != 0xffff)
+ break;
+ msleep(20);
+ }
+ if (i == 300)
+ xmgnt_err(xm, "timed out waiting for device to be online after reset");
+
+ xmgnt_info(xm, "waited %d ms for device to come back online", i * 20);
+ xmgnt_pci_restore_config_all(xm);
+ xmgnt_config_pci(xm);
+}
+
+static int xmgnt_add_vsec_node(struct xmgnt *xm, char *dtb)
+{
+ u32 off_low, off_high, vsec_bar, header;
+ struct pci_dev *pdev = XMGNT_PDEV(xm);
+ struct xrt_md_endpoint ep = { 0 };
+ struct device *dev = DEV(pdev);
+ int cap = 0, ret = 0;
+ u64 vsec_off;
+
+ while ((cap = pci_find_next_ext_capability(pdev, cap, PCI_EXT_CAP_ID_VNDR))) {
+ pci_read_config_dword(pdev, cap + PCI_VNDR_HEADER, &header);
+ if (PCI_VNDR_HEADER_ID(header) == XRT_VSEC_ID)
+ break;
+ }
+ if (!cap) {
+ xmgnt_info(xm, "No Vendor Specific Capability.");
+ return -ENOENT;
+ }
+
+ if (pci_read_config_dword(pdev, cap + 8, &off_low) ||
+ pci_read_config_dword(pdev, cap + 12, &off_high)) {
+ xmgnt_err(xm, "failed to read vendor specific capability");
+ return -EINVAL;
+ }
+
+ ep.ep_name = XRT_MD_NODE_VSEC;
+ ret = xrt_md_add_endpoint(dev, dtb, &ep);
+ if (ret) {
+ xmgnt_err(xm, "add vsec metadata failed, ret %d", ret);
+ goto failed;
+ }
+
+ vsec_bar = cpu_to_be32(off_low & 0xf);
+ ret = xrt_md_set_prop(dev, dtb, XRT_MD_NODE_VSEC, NULL,
+ XRT_MD_PROP_BAR_IDX, &vsec_bar, sizeof(vsec_bar));
+ if (ret) {
+ xmgnt_err(xm, "add vsec bar idx failed, ret %d", ret);
+ goto failed;
+ }
+
+ vsec_off = cpu_to_be64(((u64)off_high << 32) | (off_low & ~0xfU));
+ ret = xrt_md_set_prop(dev, dtb, XRT_MD_NODE_VSEC, NULL,
+ XRT_MD_PROP_OFFSET, &vsec_off, sizeof(vsec_off));
+ if (ret) {
+ xmgnt_err(xm, "add vsec offset failed, ret %d", ret);
+ goto failed;
+ }
+
+failed:
+ return ret;
+}
+
+static int xmgnt_create_root_metadata(struct xmgnt *xm, char **root_dtb)
+{
+ char *dtb = NULL;
+ int ret;
+
+ ret = xrt_md_create(XMGNT_DEV(xm), &dtb);
+ if (ret) {
+ xmgnt_err(xm, "create metadata failed, ret %d", ret);
+ goto failed;
+ }
+
+ ret = xmgnt_add_vsec_node(xm, dtb);
+ if (ret == -ENOENT) {
+ /*
+ * We may be dealing with an MFG board.
+ * Try vsec-golden which will bring up all hard-coded leaves
+ * at hard-coded offsets.
+ */
+ ret = xroot_add_simple_node(xm->root, dtb, XRT_MD_NODE_VSEC_GOLDEN);
+ } else if (ret == 0) {
+ ret = xroot_add_simple_node(xm->root, dtb, XRT_MD_NODE_MGNT_MAIN);
+ }
+ if (ret)
+ goto failed;
+
+ *root_dtb = dtb;
+ return 0;
+
+failed:
+ vfree(dtb);
+ return ret;
+}
+
+static ssize_t ready_show(struct device *dev,
+ struct device_attribute *da,
+ char *buf)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct xmgnt *xm = pci_get_drvdata(pdev);
+
+ return sprintf(buf, "%d\n", xm->ready);
+}
+static DEVICE_ATTR_RO(ready);
+
+static struct attribute *xmgnt_root_attrs[] = {
+ &dev_attr_ready.attr,
+ NULL
+};
+
+static struct attribute_group xmgnt_root_attr_group = {
+ .attrs = xmgnt_root_attrs,
+};
+
+static void xmgnt_root_get_id(struct device *dev, struct xrt_root_get_id *rid)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ rid->xpigi_vendor_id = pdev->vendor;
+ rid->xpigi_device_id = pdev->device;
+ rid->xpigi_sub_vendor_id = pdev->subsystem_vendor;
+ rid->xpigi_sub_device_id = pdev->subsystem_device;
+}
+
+static int xmgnt_root_get_resource(struct device *dev, struct xrt_root_get_res *res)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct xmgnt *xm;
+
+ xm = pci_get_drvdata(pdev);
+ if (res->xpigr_region_id > PCI_STD_RESOURCE_END) {
+ xmgnt_err(xm, "Invalid bar idx %d", res->xpigr_region_id);
+ return -EINVAL;
+ }
+
+ res->xpigr_res = &pdev->resource[res->xpigr_region_id];
+ return 0;
+}
+
+static struct xroot_physical_function_callback xmgnt_xroot_pf_cb = {
+ .xpc_get_id = xmgnt_root_get_id,
+ .xpc_get_resource = xmgnt_root_get_resource,
+ .xpc_hot_reset = xmgnt_root_hot_reset,
+};
+
+static int xmgnt_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ int ret;
+ struct device *dev = &pdev->dev;
+ struct xmgnt *xm = devm_kzalloc(dev, sizeof(*xm), GFP_KERNEL);
+ char *dtb = NULL;
+
+ if (!xm)
+ return -ENOMEM;
+ xm->pdev = pdev;
+ pci_set_drvdata(pdev, xm);
+
+ ret = xmgnt_config_pci(xm);
+ if (ret)
+ goto failed;
+
+ ret = xroot_probe(&pdev->dev, &xmgnt_xroot_pf_cb, &xm->root);
+ if (ret)
+ goto failed;
+
+ ret = xmgnt_create_root_metadata(xm, &dtb);
+ if (ret)
+ goto failed_metadata;
+
+ ret = xroot_create_group(xm->root, dtb);
+ vfree(dtb);
+ if (ret)
+ xmgnt_err(xm, "failed to create root group: %d", ret);
+
+ if (!xroot_wait_for_bringup(xm->root))
+ xmgnt_err(xm, "failed to bring up all groups");
+ else
+ xm->ready = true;
+
+ ret = sysfs_create_group(&pdev->dev.kobj, &xmgnt_root_attr_group);
+ if (ret) {
+ /* Warning instead of failing the probe. */
+ xmgnt_warn(xm, "create xmgnt root attrs failed: %d", ret);
+ }
+
+ xroot_broadcast(xm->root, XRT_EVENT_POST_CREATION);
+ xmgnt_info(xm, "%s started successfully", XMGNT_MODULE_NAME);
+ return 0;
+
+failed_metadata:
+ xroot_remove(xm->root);
+failed:
+ pci_set_drvdata(pdev, NULL);
+ return ret;
+}
+
+static void xmgnt_remove(struct pci_dev *pdev)
+{
+ struct xmgnt *xm = pci_get_drvdata(pdev);
+
+ xroot_broadcast(xm->root, XRT_EVENT_PRE_REMOVAL);
+ sysfs_remove_group(&pdev->dev.kobj, &xmgnt_root_attr_group);
+ xroot_remove(xm->root);
+ pci_disable_pcie_error_reporting(xm->pdev);
+ xmgnt_info(xm, "%s cleaned up successfully", XMGNT_MODULE_NAME);
+}
+
+static struct pci_driver xmgnt_driver = {
+ .name = XMGNT_MODULE_NAME,
+ .id_table = xmgnt_pci_ids,
+ .probe = xmgnt_probe,
+ .remove = xmgnt_remove,
+};
+
+static int __init xmgnt_init(void)
+{
+ int res = 0;
+
+ res = xmgnt_register_leaf();
+ if (res)
+ return res;
+
+ xmgnt_class = class_create(THIS_MODULE, XMGNT_MODULE_NAME);
+ if (IS_ERR(xmgnt_class)) {
+ xmgnt_unregister_leaf();
+ return PTR_ERR(xmgnt_class);
+ }
+
+ res = pci_register_driver(&xmgnt_driver);
+ if (res) {
+ class_destroy(xmgnt_class);
+ xmgnt_unregister_leaf();
+ return res;
+ }
+
+ return 0;
+}
+
+static __exit void xmgnt_exit(void)
+{
+ pci_unregister_driver(&xmgnt_driver);
+ class_destroy(xmgnt_class);
+ xmgnt_unregister_leaf();
+}
+
+module_init(xmgnt_init);
+module_exit(xmgnt_exit);
+
+MODULE_DEVICE_TABLE(pci, xmgnt_pci_ids);
+MODULE_VERSION(XMGNT_DRIVER_VERSION);
+MODULE_AUTHOR("XRT Team <[email protected]>");
+MODULE_DESCRIPTION("Xilinx Alveo management function driver");
+MODULE_LICENSE("GPL v2");
--
2.27.0

2021-04-27 21:02:53

by Lizhi Hou

[permalink] [raw]
Subject: [PATCH V5 XRT Alveo 13/20] fpga: xrt: User Clock Subsystem driver

Add User Clock Subsystem (UCS) driver. UCS is a hardware function
discovered by walking xclbin metadata. An xrt device node will be
created for it. UCS enables/disables the dynamic region clocks.

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/lib/xleaf/ucs.c | 152 +++++++++++++++++++++++++++++++
1 file changed, 152 insertions(+)
create mode 100644 drivers/fpga/xrt/lib/xleaf/ucs.c

diff --git a/drivers/fpga/xrt/lib/xleaf/ucs.c b/drivers/fpga/xrt/lib/xleaf/ucs.c
new file mode 100644
index 000000000000..a7a96ddde44f
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/ucs.c
@@ -0,0 +1,152 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA UCS Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou<[email protected]>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/regmap.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/clock.h"
+
+#define UCS_ERR(ucs, fmt, arg...) \
+ xrt_err((ucs)->xdev, fmt "\n", ##arg)
+#define UCS_WARN(ucs, fmt, arg...) \
+ xrt_warn((ucs)->xdev, fmt "\n", ##arg)
+#define UCS_INFO(ucs, fmt, arg...) \
+ xrt_info((ucs)->xdev, fmt "\n", ##arg)
+#define UCS_DBG(ucs, fmt, arg...) \
+ xrt_dbg((ucs)->xdev, fmt "\n", ##arg)
+
+#define XRT_UCS "xrt_ucs"
+
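+/*
+ * The UCS endpoint exposes two 32-bit channel registers; writing 1 to the
+ * channel 2 register enables the clocks driving the dynamic region.
+ */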
+#define XRT_UCS_CHANNEL1_REG 0
+#define XRT_UCS_CHANNEL2_REG 8
+
+#define CLK_MAX_VALUE 6400
+
+XRT_DEFINE_REGMAP_CONFIG(ucs_regmap_config);
+
+struct xrt_ucs {
+ struct xrt_device *xdev;
+ struct regmap *regmap;
+ struct mutex ucs_lock; /* ucs dev lock */
+};
+
+static void xrt_ucs_event_cb(struct xrt_device *xdev, void *arg)
+{
+ struct xrt_event *evt = (struct xrt_event *)arg;
+ enum xrt_events e = evt->xe_evt;
+ struct xrt_device *leaf;
+ enum xrt_subdev_id id;
+ int instance;
+
+ id = evt->xe_subdev.xevt_subdev_id;
+ instance = evt->xe_subdev.xevt_subdev_instance;
+
+ if (e != XRT_EVENT_POST_CREATION) {
+ xrt_dbg(xdev, "ignored event %d", e);
+ return;
+ }
+
+ if (id != XRT_SUBDEV_CLOCK)
+ return;
+
+ leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_CLOCK, instance);
+ if (!leaf) {
+ xrt_err(xdev, "failed to get clock subdev");
+ return;
+ }
+
+ xleaf_call(leaf, XRT_CLOCK_VERIFY, NULL);
+ xleaf_put_leaf(xdev, leaf);
+}
+
+static int ucs_enable(struct xrt_ucs *ucs)
+{
+ int ret;
+
+ mutex_lock(&ucs->ucs_lock);
+ ret = regmap_write(ucs->regmap, XRT_UCS_CHANNEL2_REG, 1);
+ mutex_unlock(&ucs->ucs_lock);
+
+ return ret;
+}
+
+static int
+xrt_ucs_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
+{
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ xrt_ucs_event_cb(xdev, arg);
+ break;
+ default:
+ xrt_err(xdev, "unsupported cmd %d", cmd);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int ucs_probe(struct xrt_device *xdev)
+{
+ struct xrt_ucs *ucs = NULL;
+ void __iomem *base = NULL;
+ struct resource *res;
+
+ ucs = devm_kzalloc(&xdev->dev, sizeof(*ucs), GFP_KERNEL);
+ if (!ucs)
+ return -ENOMEM;
+
+ xrt_set_drvdata(xdev, ucs);
+ ucs->xdev = xdev;
+ mutex_init(&ucs->ucs_lock);
+
+ res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
+ if (!res)
+ return -EINVAL;
+
+ base = devm_ioremap_resource(&xdev->dev, res);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+ ucs->regmap = devm_regmap_init_mmio(&xdev->dev, base, &ucs_regmap_config);
+ if (IS_ERR(ucs->regmap)) {
+ UCS_ERR(ucs, "map base %pR failed", res);
+ return PTR_ERR(ucs->regmap);
+ }
+ return ucs_enable(ucs);
+}
+
+static struct xrt_dev_endpoints xrt_ucs_endpoints[] = {
+ {
+ .xse_names = (struct xrt_dev_ep_names[]) {
+ { .ep_name = XRT_MD_NODE_UCS_CONTROL_STATUS },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_driver xrt_ucs_driver = {
+ .driver = {
+ .name = XRT_UCS,
+ },
+ .subdev_id = XRT_SUBDEV_UCS,
+ .endpoints = xrt_ucs_endpoints,
+ .probe = ucs_probe,
+ .leaf_call = xrt_ucs_leaf_call,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(ucs);
--
2.27.0

2021-04-27 21:03:05

by Lizhi Hou

[permalink] [raw]
Subject: [PATCH V5 XRT Alveo 04/20] fpga: xrt: xrt-lib driver manager

Add xrt-lib kernel module infrastructure code to register and manage all
leaf driver modules.
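
To illustrate the driver model, below is a minimal leaf driver skeleton
built only from the interfaces declared in this patch. It is a sketch, not
part of the series: the driver name, endpoint name and callback body are
placeholders, and real leaf drivers (for example the UCS driver added later
in this series) follow the same pattern.

static int example_probe(struct xrt_device *xdev)
{
	/* map resources via xrt_get_resource(), set drvdata, etc. */
	return 0;
}

static struct xrt_dev_endpoints example_endpoints[] = {
	{
		.xse_names = (struct xrt_dev_ep_names[]) {
			{ .ep_name = "ep_example_00" },	/* placeholder */
			{ NULL },
		},
		.xse_min_ep = 1,
	},
	{ 0 },
};

static struct xrt_driver example_driver = {
	.driver = {
		.name = "xrt_example",
	},
	.subdev_id = XRT_SUBDEV_TEST,
	.endpoints = example_endpoints,
	.probe = example_probe,
};

/* registered/unregistered with xrt_register_driver()/xrt_unregister_driver() */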

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/subdev_id.h | 38 ++++
drivers/fpga/xrt/include/xdevice.h | 131 +++++++++++
drivers/fpga/xrt/include/xleaf.h | 205 +++++++++++++++++
drivers/fpga/xrt/lib/lib-drv.c | 328 +++++++++++++++++++++++++++
drivers/fpga/xrt/lib/lib-drv.h | 15 ++
5 files changed, 717 insertions(+)
create mode 100644 drivers/fpga/xrt/include/subdev_id.h
create mode 100644 drivers/fpga/xrt/include/xdevice.h
create mode 100644 drivers/fpga/xrt/include/xleaf.h
create mode 100644 drivers/fpga/xrt/lib/lib-drv.c
create mode 100644 drivers/fpga/xrt/lib/lib-drv.h

diff --git a/drivers/fpga/xrt/include/subdev_id.h b/drivers/fpga/xrt/include/subdev_id.h
new file mode 100644
index 000000000000..084737c53b88
--- /dev/null
+++ b/drivers/fpga/xrt/include/subdev_id.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XRT_SUBDEV_ID_H_
+#define _XRT_SUBDEV_ID_H_
+
+/*
+ * Every subdev driver has an ID for others to refer to it. There can be
+ * multiple instances of a subdev driver. A <subdev_id, subdev_instance>
+ * tuple uniquely identifies a specific instance of a subdev driver.
+ */
+enum xrt_subdev_id {
+ XRT_SUBDEV_GRP = 1,
+ XRT_SUBDEV_VSEC = 2,
+ XRT_SUBDEV_VSEC_GOLDEN = 3,
+ XRT_SUBDEV_DEVCTL = 4,
+ XRT_SUBDEV_AXIGATE = 5,
+ XRT_SUBDEV_ICAP = 6,
+ XRT_SUBDEV_TEST = 7,
+ XRT_SUBDEV_MGNT_MAIN = 8,
+ XRT_SUBDEV_QSPI = 9,
+ XRT_SUBDEV_MAILBOX = 10,
+ XRT_SUBDEV_CMC = 11,
+ XRT_SUBDEV_CALIB = 12,
+ XRT_SUBDEV_CLKFREQ = 13,
+ XRT_SUBDEV_CLOCK = 14,
+ XRT_SUBDEV_SRSR = 15,
+ XRT_SUBDEV_UCS = 16,
+ XRT_SUBDEV_NUM = 17, /* One larger than the highest subdev ID above. */
+ XRT_ROOT = -1, /* Special ID for root driver. */
+};
+
+#endif /* _XRT_SUBDEV_ID_H_ */
diff --git a/drivers/fpga/xrt/include/xdevice.h b/drivers/fpga/xrt/include/xdevice.h
new file mode 100644
index 000000000000..3afd96989fc5
--- /dev/null
+++ b/drivers/fpga/xrt/include/xdevice.h
@@ -0,0 +1,131 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#ifndef _XRT_DEVICE_H_
+#define _XRT_DEVICE_H_
+
+#include <linux/fs.h>
+#include <linux/cdev.h>
+
+#define XRT_MAX_DEVICE_NODES 128
+#define XRT_INVALID_DEVICE_INST (XRT_MAX_DEVICE_NODES + 1)
+
+enum {
+ XRT_DEVICE_STATE_NONE = 0,
+ XRT_DEVICE_STATE_ADDED
+};
+
+/*
+ * struct xrt_device - represents an xrt device on the xrt bus
+ *
+ * dev: generic device interface.
+ * subdev_id: subdev driver ID of this xrt device.
+ */
+struct xrt_device {
+ struct device dev;
+ u32 subdev_id;
+ const char *name;
+ u32 instance;
+ u32 state;
+ u32 num_resources;
+ struct resource *resource;
+ void *sdev_data;
+};
+
+/*
+ * If populated by xrt device driver, infra will handle the mechanics of
+ * char device (un)registration.
+ */
+enum xrt_dev_file_mode {
+	/* Infra creates the cdev with the default file name. */
+	XRT_DEV_FILE_DEFAULT = 0,
+	/* Infra creates the cdev and encodes the instance number in the file name. */
+	XRT_DEV_FILE_MULTI_INST,
+	/* No automatic cdev creation by infra; the leaf handles it by itself. */
+ XRT_DEV_FILE_NO_AUTO,
+};
+
+struct xrt_dev_file_ops {
+ const struct file_operations xsf_ops;
+ dev_t xsf_dev_t;
+ const char *xsf_dev_name;
+ enum xrt_dev_file_mode xsf_mode;
+};
+
+/*
+ * This struct defines the endpoints that belong to the same xrt device.
+ */
+struct xrt_dev_ep_names {
+ const char *ep_name;
+ const char *compat;
+};
+
+struct xrt_dev_endpoints {
+ struct xrt_dev_ep_names *xse_names;
+ /* minimum number of endpoints to support the subdevice */
+ u32 xse_min_ep;
+};
+
+/*
+ * struct xrt_driver - represents an xrt device driver
+ *
+ * driver: device driver model structure.
+ * subdev_id: subdev driver ID this driver handles.
+ * endpoints: endpoints the driver is interested in, { } member terminated.
+ * probe: mandatory callback for device binding.
+ * remove: callback for device unbinding.
+ */
+struct xrt_driver {
+ struct device_driver driver;
+ u32 subdev_id;
+ struct xrt_dev_file_ops file_ops;
+ struct xrt_dev_endpoints *endpoints;
+
+ /*
+ * Subdev driver callbacks populated by subdev driver.
+ */
+ int (*probe)(struct xrt_device *xrt_dev);
+ void (*remove)(struct xrt_device *xrt_dev);
+ /*
+ * If leaf_call is defined, these are called by other leaf drivers.
+ * Note that root driver may call into leaf_call of a group driver.
+ */
+ int (*leaf_call)(struct xrt_device *xrt_dev, u32 cmd, void *arg);
+};
+
+#define to_xrt_dev(d) container_of(d, struct xrt_device, dev)
+#define to_xrt_drv(d) container_of(d, struct xrt_driver, driver)
+
+static inline void *xrt_get_drvdata(const struct xrt_device *xdev)
+{
+ return dev_get_drvdata(&xdev->dev);
+}
+
+static inline void xrt_set_drvdata(struct xrt_device *xdev, void *data)
+{
+ dev_set_drvdata(&xdev->dev, data);
+}
+
+static inline void *xrt_get_xdev_data(struct device *dev)
+{
+ struct xrt_device *xdev = to_xrt_dev(dev);
+
+ return xdev->sdev_data;
+}
+
+struct xrt_device *
+xrt_device_register(struct device *parent, u32 id,
+ struct resource *res, u32 res_num,
+ void *pdata, size_t data_sz);
+void xrt_device_unregister(struct xrt_device *xdev);
+int xrt_register_driver(struct xrt_driver *drv);
+void xrt_unregister_driver(struct xrt_driver *drv);
+void *xrt_get_xdev_data(struct device *dev);
+struct resource *xrt_get_resource(struct xrt_device *xdev, u32 type, u32 num);
+
+#endif /* _XRT_DEVICE_H_ */
diff --git a/drivers/fpga/xrt/include/xleaf.h b/drivers/fpga/xrt/include/xleaf.h
new file mode 100644
index 000000000000..f065fc766e0f
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf.h
@@ -0,0 +1,205 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ * Sonal Santan <[email protected]>
+ */
+
+#ifndef _XRT_XLEAF_H_
+#define _XRT_XLEAF_H_
+
+#include <linux/mod_devicetable.h>
+#include "xdevice.h"
+#include "subdev_id.h"
+#include "xroot.h"
+#include "events.h"
+
+/* All subdev drivers should use below common routines to print out msg. */
+#define DEV(xdev) (&(xdev)->dev)
+#define DEV_PDATA(xdev) \
+ ((struct xrt_subdev_platdata *)xrt_get_xdev_data(DEV(xdev)))
+#define DEV_FILE_OPS(xdev) \
+ (&(to_xrt_drv((xdev)->dev.driver))->file_ops)
+#define FMT_PRT(prt_fn, xdev, fmt, args...) \
+ ({typeof(xdev) (_xdev) = (xdev); \
+ prt_fn(DEV(_xdev), "%s %s: " fmt, \
+ DEV_PDATA(_xdev)->xsp_root_name, __func__, ##args); })
+#define xrt_err(xdev, fmt, args...) FMT_PRT(dev_err, xdev, fmt, ##args)
+#define xrt_warn(xdev, fmt, args...) FMT_PRT(dev_warn, xdev, fmt, ##args)
+#define xrt_info(xdev, fmt, args...) FMT_PRT(dev_info, xdev, fmt, ##args)
+#define xrt_dbg(xdev, fmt, args...) FMT_PRT(dev_dbg, xdev, fmt, ##args)
+
+#define XRT_DEFINE_REGMAP_CONFIG(config_name) \
+ static const struct regmap_config config_name = { \
+ .reg_bits = 32, \
+ .val_bits = 32, \
+ .reg_stride = 4, \
+ .max_register = 0x1000, \
+ }
+
+enum {
+ /* Starting cmd for common leaf cmd implemented by all leaves. */
+ XRT_XLEAF_COMMON_BASE = 0,
+ /* Starting cmd for leaves' specific leaf cmds. */
+ XRT_XLEAF_CUSTOM_BASE = 64,
+};
+
+enum xrt_xleaf_common_leaf_cmd {
+ XRT_XLEAF_EVENT = XRT_XLEAF_COMMON_BASE,
+};
+
+/*
+ * Partially initialized by the parent driver, then passed in as the subdev
+ * driver's platform data when the subdev driver instance is created via the
+ * device register API (xrt_device_register()).
+ *
+ * Once the device register API returns, the driver framework makes a copy of
+ * this buffer and maintains its life cycle. The content of the buffer is
+ * completely owned by the subdev driver.
+ *
+ * Thus, the parent driver should be very careful when it touches this buffer
+ * again once it is handed over to the subdev driver. The data structure
+ * should not contain pointers to buffers that are managed by other or parent
+ * drivers, since those could have been freed before the platform data buffer
+ * is freed by the driver framework.
+ */
+struct xrt_subdev_platdata {
+ /*
+ * Per driver instance callback. The xdev points to the instance.
+ * Should always be defined for subdev driver to get service from root.
+ */
+ xrt_subdev_root_cb_t xsp_root_cb;
+ void *xsp_root_cb_arg;
+
+ /* Something to associate w/ root for msg printing. */
+ const char *xsp_root_name;
+
+ /*
+ * Char dev support for this subdev instance.
+ * Initialized by subdev driver.
+ */
+ struct cdev xsp_cdev;
+ struct device *xsp_sysdev;
+ struct mutex xsp_devnode_lock; /* devnode lock */
+ struct completion xsp_devnode_comp;
+ int xsp_devnode_ref;
+ bool xsp_devnode_online;
+ bool xsp_devnode_excl;
+
+ /*
+ * Subdev driver specific init data. The buffer should be embedded
+ * in this data structure buffer after dtb, so that it can be freed
+ * together with platform data.
+ */
+ loff_t xsp_priv_off; /* Offset into this platform data buffer. */
+ size_t xsp_priv_len;
+
+ /*
+ * Populated by parent driver to describe the device tree for
+ * the subdev driver to handle. Should always be last one since it's
+ * of variable length.
+ */
+ bool xsp_dtb_valid;
+	char xsp_dtb[];
+};
+
+struct subdev_match_arg {
+ enum xrt_subdev_id id;
+ int instance;
+};
+
+bool xleaf_has_endpoint(struct xrt_device *xdev, const char *endpoint_name);
+struct xrt_device *xleaf_get_leaf(struct xrt_device *xdev,
+ xrt_subdev_match_t cb, void *arg);
+
+static inline bool subdev_match(enum xrt_subdev_id id, struct xrt_device *xdev, void *arg)
+{
+ const struct subdev_match_arg *a = (struct subdev_match_arg *)arg;
+ int instance = a->instance;
+
+ if (id != a->id)
+ return false;
+ if (instance != xdev->instance && instance != XRT_INVALID_DEVICE_INST)
+ return false;
+ return true;
+}
+
+static inline bool xrt_subdev_match_epname(enum xrt_subdev_id id,
+ struct xrt_device *xdev, void *arg)
+{
+ return xleaf_has_endpoint(xdev, arg);
+}
+
+static inline struct xrt_device *
+xleaf_get_leaf_by_id(struct xrt_device *xdev,
+ enum xrt_subdev_id id, int instance)
+{
+ struct subdev_match_arg arg = { id, instance };
+
+ return xleaf_get_leaf(xdev, subdev_match, &arg);
+}
+
+static inline struct xrt_device *
+xleaf_get_leaf_by_epname(struct xrt_device *xdev, const char *name)
+{
+ return xleaf_get_leaf(xdev, xrt_subdev_match_epname, (void *)name);
+}
+
+static inline int xleaf_call(struct xrt_device *tgt, u32 cmd, void *arg)
+{
+ return (to_xrt_drv(tgt->dev.driver)->leaf_call)(tgt, cmd, arg);
+}
+
+int xleaf_broadcast_event(struct xrt_device *xdev, enum xrt_events evt, bool async);
+int xleaf_create_group(struct xrt_device *xdev, char *dtb);
+int xleaf_destroy_group(struct xrt_device *xdev, int instance);
+void xleaf_get_root_res(struct xrt_device *xdev, u32 region_id, struct resource **res);
+void xleaf_get_root_id(struct xrt_device *xdev, unsigned short *vendor, unsigned short *device,
+ unsigned short *subvendor, unsigned short *subdevice);
+void xleaf_hot_reset(struct xrt_device *xdev);
+int xleaf_put_leaf(struct xrt_device *xdev, struct xrt_device *leaf);
+struct device *xleaf_register_hwmon(struct xrt_device *xdev, const char *name, void *drvdata,
+ const struct attribute_group **grps);
+void xleaf_unregister_hwmon(struct xrt_device *xdev, struct device *hwmon);
+int xleaf_wait_for_group_bringup(struct xrt_device *xdev);
+
+/*
+ * Character device helper APIs for use by leaf drivers
+ */
+static inline bool xleaf_devnode_enabled(struct xrt_device *xdev)
+{
+ return DEV_FILE_OPS(xdev)->xsf_ops.open;
+}
+
+int xleaf_devnode_create(struct xrt_device *xdev,
+ const char *file_name, const char *inst_name);
+void xleaf_devnode_destroy(struct xrt_device *xdev);
+
+struct xrt_device *xleaf_devnode_open_excl(struct inode *inode);
+struct xrt_device *xleaf_devnode_open(struct inode *inode);
+void xleaf_devnode_close(struct inode *inode);
+
+/* Module's init/fini routines for leaf driver in xrt-lib module */
+#define XRT_LEAF_INIT_FINI_FUNC(name) \
+void name##_leaf_init_fini(bool init) \
+{ \
+ if (init) \
+ xrt_register_driver(&xrt_##name##_driver); \
+ else \
+ xrt_unregister_driver(&xrt_##name##_driver); \
+}
+
+/* Prototypes of per-leaf init/fini routines generated by XRT_LEAF_INIT_FINI_FUNC(). */
+void group_leaf_init_fini(bool init);
+void vsec_leaf_init_fini(bool init);
+void devctl_leaf_init_fini(bool init);
+void axigate_leaf_init_fini(bool init);
+void icap_leaf_init_fini(bool init);
+void calib_leaf_init_fini(bool init);
+void clkfreq_leaf_init_fini(bool init);
+void clock_leaf_init_fini(bool init);
+void ucs_leaf_init_fini(bool init);
+
+#endif /* _XRT_XLEAF_H_ */
diff --git a/drivers/fpga/xrt/lib/lib-drv.c b/drivers/fpga/xrt/lib/lib-drv.c
new file mode 100644
index 000000000000..ba4ac4930823
--- /dev/null
+++ b/drivers/fpga/xrt/lib/lib-drv.c
@@ -0,0 +1,328 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ * Lizhi Hou <[email protected]>
+ */
+
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/slab.h>
+#include "xleaf.h"
+#include "xroot.h"
+#include "lib-drv.h"
+
+#define XRT_IPLIB_MODULE_NAME "xrt-lib"
+#define XRT_IPLIB_MODULE_VERSION "4.0.0"
+#define XRT_DRVNAME(drv) ((drv)->driver.name)
+
+#define XRT_SUBDEV_ID_SHIFT 16
+#define XRT_SUBDEV_ID_MASK ((1 << XRT_SUBDEV_ID_SHIFT) - 1)
+
+struct xrt_find_drv_data {
+ enum xrt_subdev_id id;
+ struct xrt_driver *xdrv;
+};
+
+struct class *xrt_class;
+static DEFINE_IDA(xrt_device_ida);
+
+static inline u32 xrt_instance_to_id(enum xrt_subdev_id id, u32 instance)
+{
+ return (id << XRT_SUBDEV_ID_SHIFT) | instance;
+}
+
+static inline u32 xrt_id_to_instance(u32 id)
+{
+ return (id & XRT_SUBDEV_ID_MASK);
+}
+
+static int xrt_bus_match(struct device *dev, struct device_driver *drv)
+{
+ struct xrt_device *xdev = to_xrt_dev(dev);
+ struct xrt_driver *xdrv = to_xrt_drv(drv);
+
+ if (xdev->subdev_id == xdrv->subdev_id)
+ return 1;
+
+ return 0;
+}
+
+static int xrt_bus_probe(struct device *dev)
+{
+ struct xrt_driver *xdrv = to_xrt_drv(dev->driver);
+ struct xrt_device *xdev = to_xrt_dev(dev);
+
+ return xdrv->probe(xdev);
+}
+
+static int xrt_bus_remove(struct device *dev)
+{
+ struct xrt_driver *xdrv = to_xrt_drv(dev->driver);
+ struct xrt_device *xdev = to_xrt_dev(dev);
+
+ if (xdrv->remove)
+ xdrv->remove(xdev);
+
+ return 0;
+}
+
+struct bus_type xrt_bus_type = {
+ .name = "xrt",
+ .match = xrt_bus_match,
+ .probe = xrt_bus_probe,
+ .remove = xrt_bus_remove,
+};
+
+int xrt_register_driver(struct xrt_driver *drv)
+{
+ const char *drvname = XRT_DRVNAME(drv);
+ int rc = 0;
+
+ /* Initialize dev_t for char dev node. */
+ if (drv->file_ops.xsf_ops.open) {
+ rc = alloc_chrdev_region(&drv->file_ops.xsf_dev_t, 0,
+ XRT_MAX_DEVICE_NODES, drvname);
+ if (rc) {
+ pr_err("failed to alloc dev minor for %s: %d\n", drvname, rc);
+ return rc;
+ }
+ } else {
+ drv->file_ops.xsf_dev_t = (dev_t)-1;
+ }
+
+ drv->driver.owner = THIS_MODULE;
+ drv->driver.bus = &xrt_bus_type;
+
+ rc = driver_register(&drv->driver);
+ if (rc) {
+ pr_err("register %s xrt driver failed\n", drvname);
+ if (drv->file_ops.xsf_dev_t != (dev_t)-1) {
+ unregister_chrdev_region(drv->file_ops.xsf_dev_t,
+ XRT_MAX_DEVICE_NODES);
+ }
+ return rc;
+ }
+
+ pr_info("%s registered successfully\n", drvname);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xrt_register_driver);
+
+void xrt_unregister_driver(struct xrt_driver *drv)
+{
+ driver_unregister(&drv->driver);
+
+ if (drv->file_ops.xsf_dev_t != (dev_t)-1)
+ unregister_chrdev_region(drv->file_ops.xsf_dev_t, XRT_MAX_DEVICE_NODES);
+
+ pr_info("%s unregistered successfully\n", XRT_DRVNAME(drv));
+}
+EXPORT_SYMBOL_GPL(xrt_unregister_driver);
+
+static int __find_driver(struct device_driver *drv, void *_data)
+{
+ struct xrt_driver *xdrv = to_xrt_drv(drv);
+ struct xrt_find_drv_data *data = _data;
+
+ if (xdrv->subdev_id == data->id) {
+ data->xdrv = xdrv;
+ return 1;
+ }
+
+ return 0;
+}
+
+const char *xrt_drv_name(enum xrt_subdev_id id)
+{
+ struct xrt_find_drv_data data = { 0 };
+
+ data.id = id;
+ bus_for_each_drv(&xrt_bus_type, NULL, &data, __find_driver);
+
+ if (data.xdrv)
+ return XRT_DRVNAME(data.xdrv);
+
+ return NULL;
+}
+
+static int xrt_drv_get_instance(enum xrt_subdev_id id)
+{
+ int ret;
+
+ ret = ida_alloc_range(&xrt_device_ida, xrt_instance_to_id(id, 0),
+ xrt_instance_to_id(id, XRT_MAX_DEVICE_NODES),
+ GFP_KERNEL);
+ if (ret < 0)
+ return ret;
+
+ return xrt_id_to_instance((u32)ret);
+}
+
+static void xrt_drv_put_instance(enum xrt_subdev_id id, int instance)
+{
+ ida_free(&xrt_device_ida, xrt_instance_to_id(id, instance));
+}
+
+struct xrt_dev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id)
+{
+ struct xrt_find_drv_data data = { 0 };
+
+ data.id = id;
+ bus_for_each_drv(&xrt_bus_type, NULL, &data, __find_driver);
+
+ if (data.xdrv)
+ return data.xdrv->endpoints;
+
+ return NULL;
+}
+
+static void xrt_device_release(struct device *dev)
+{
+ struct xrt_device *xdev = container_of(dev, struct xrt_device, dev);
+
+ kfree(xdev);
+}
+
+void xrt_device_unregister(struct xrt_device *xdev)
+{
+ if (xdev->state == XRT_DEVICE_STATE_ADDED)
+ device_del(&xdev->dev);
+
+ vfree(xdev->sdev_data);
+ kfree(xdev->resource);
+
+ if (xdev->instance != XRT_INVALID_DEVICE_INST)
+ xrt_drv_put_instance(xdev->subdev_id, xdev->instance);
+
+ if (xdev->dev.release == xrt_device_release)
+ put_device(&xdev->dev);
+}
+
+struct xrt_device *
+xrt_device_register(struct device *parent, u32 id,
+ struct resource *res, u32 res_num,
+ void *pdata, size_t data_sz)
+{
+ struct xrt_device *xdev = NULL;
+ int ret;
+
+ xdev = kzalloc(sizeof(*xdev), GFP_KERNEL);
+ if (!xdev)
+ return NULL;
+ xdev->instance = XRT_INVALID_DEVICE_INST;
+
+ /* Obtain dev instance number. */
+ ret = xrt_drv_get_instance(id);
+ if (ret < 0) {
+ dev_err(parent, "failed get instance, ret %d", ret);
+ goto fail;
+ }
+
+ xdev->instance = ret;
+ xdev->name = xrt_drv_name(id);
+ xdev->subdev_id = id;
+ device_initialize(&xdev->dev);
+ xdev->dev.release = xrt_device_release;
+ xdev->dev.parent = parent;
+
+ xdev->dev.bus = &xrt_bus_type;
+ dev_set_name(&xdev->dev, "%s.%d", xdev->name, xdev->instance);
+
+ xdev->num_resources = res_num;
+ xdev->resource = kmemdup(res, sizeof(*res) * res_num, GFP_KERNEL);
+ if (!xdev->resource)
+ goto fail;
+
+ xdev->sdev_data = vzalloc(data_sz);
+ if (!xdev->sdev_data)
+ goto fail;
+
+ memcpy(xdev->sdev_data, pdata, data_sz);
+
+ ret = device_add(&xdev->dev);
+ if (ret) {
+ dev_err(parent, "failed add device, ret %d", ret);
+ goto fail;
+ }
+ xdev->state = XRT_DEVICE_STATE_ADDED;
+
+ return xdev;
+
+fail:
+	/* After device_initialize(), unregister's final put_device() frees xdev. */
+	if (xdev->dev.release == xrt_device_release)
+		xrt_device_unregister(xdev);
+	else
+		kfree(xdev);
+
+ return NULL;
+}
+
+struct resource *xrt_get_resource(struct xrt_device *xdev, u32 type, u32 num)
+{
+ u32 i;
+
+ for (i = 0; i < xdev->num_resources; i++) {
+ struct resource *r = &xdev->resource[i];
+
+ if (type == resource_type(r) && num-- == 0)
+ return r;
+ }
+ return NULL;
+}
+
+/*
+ * Leaf drivers' module init/fini callbacks. This is not an open infrastructure for dynamically
+ * plugging in drivers. All drivers must be statically added to this list.
+ */
+static void (*leaf_init_fini_cbs[])(bool) = {
+ group_leaf_init_fini,
+ vsec_leaf_init_fini,
+ devctl_leaf_init_fini,
+ axigate_leaf_init_fini,
+ icap_leaf_init_fini,
+ calib_leaf_init_fini,
+ clkfreq_leaf_init_fini,
+ clock_leaf_init_fini,
+ ucs_leaf_init_fini,
+};
+
+static __init int xrt_lib_init(void)
+{
+ int ret;
+ int i;
+
+ ret = bus_register(&xrt_bus_type);
+ if (ret)
+ return ret;
+
+ xrt_class = class_create(THIS_MODULE, XRT_IPLIB_MODULE_NAME);
+ if (IS_ERR(xrt_class)) {
+ bus_unregister(&xrt_bus_type);
+ return PTR_ERR(xrt_class);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(leaf_init_fini_cbs); i++)
+ leaf_init_fini_cbs[i](true);
+ return 0;
+}
+
+static __exit void xrt_lib_fini(void)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(leaf_init_fini_cbs); i++)
+ leaf_init_fini_cbs[i](false);
+
+ class_destroy(xrt_class);
+ bus_unregister(&xrt_bus_type);
+}
+
+module_init(xrt_lib_init);
+module_exit(xrt_lib_fini);
+
+MODULE_VERSION(XRT_IPLIB_MODULE_VERSION);
+MODULE_AUTHOR("XRT Team <[email protected]>");
+MODULE_DESCRIPTION("Xilinx Alveo IP Lib driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/fpga/xrt/lib/lib-drv.h b/drivers/fpga/xrt/lib/lib-drv.h
new file mode 100644
index 000000000000..514f904c81c0
--- /dev/null
+++ b/drivers/fpga/xrt/lib/lib-drv.h
@@ -0,0 +1,15 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _LIB_DRV_H_
+#define _LIB_DRV_H_
+
+const char *xrt_drv_name(enum xrt_subdev_id id);
+struct xrt_dev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id);
+
+#endif /* _LIB_DRV_H_ */
--
2.27.0

2021-04-27 21:03:19

by Lizhi Hou

[permalink] [raw]
Subject: [PATCH V5 XRT Alveo 12/20] fpga: xrt: VSEC driver

Add VSEC driver. VSEC is a hardware function discovered by walking the
PCI Express configuration space. An xrt device node will be created for
it. VSEC provides the board logic UUID and the offsets of a few other
hardware functions.
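
For context, the VSEC entry table sits behind a PCIe extended vendor-specific
capability; the walk over configuration space is done by the xrt-mgnt root
driver, not by this leaf. The snippet below is only a rough sketch of what
such a lookup could look like; the 0x20 VSEC ID used for matching is an
assumption made purely for illustration:

#include <linux/pci.h>

/* Illustrative only: find the offset of a vendor-specific extended capability. */
static int find_vsec_offset(struct pci_dev *pdev)
{
	int pos = 0;
	u32 hdr;

	while ((pos = pci_find_next_ext_capability(pdev, pos, PCI_EXT_CAP_ID_VNDR))) {
		pci_read_config_dword(pdev, pos + PCI_VNDR_HEADER, &hdr);
		if (PCI_VNDR_HEADER_ID(hdr) == 0x20)	/* assumed VSEC ID */
			return pos;
	}

	return -ENODEV;
}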

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/lib/xleaf/vsec.c | 372 ++++++++++++++++++++++++++++++
1 file changed, 372 insertions(+)
create mode 100644 drivers/fpga/xrt/lib/xleaf/vsec.c

diff --git a/drivers/fpga/xrt/lib/xleaf/vsec.c b/drivers/fpga/xrt/lib/xleaf/vsec.c
new file mode 100644
index 000000000000..2bc7578c5f5d
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/vsec.c
@@ -0,0 +1,372 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA VSEC Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou<[email protected]>
+ */
+
+#include <linux/regmap.h>
+#include "metadata.h"
+#include "xdevice.h"
+#include "xleaf.h"
+
+#define XRT_VSEC "xrt_vsec"
+
+#define VSEC_TYPE_UUID 0x50
+#define VSEC_TYPE_FLASH 0x51
+#define VSEC_TYPE_PLATINFO 0x52
+#define VSEC_TYPE_MAILBOX 0x53
+#define VSEC_TYPE_END 0xff
+
+#define VSEC_UUID_LEN 16
+
+#define VSEC_REG_FORMAT 0x0
+#define VSEC_REG_LENGTH 0x4
+#define VSEC_REG_ENTRY 0x8
+
+struct xrt_vsec_header {
+ u32 format;
+ u32 length;
+ u32 entry_sz;
+ u32 rsvd;
+} __packed;
+
+struct xrt_vsec_entry {
+ u8 type;
+ u8 bar_rev;
+ u16 off_lo;
+ u32 off_hi;
+ u8 ver_type;
+ u8 minor;
+ u8 major;
+ u8 rsvd0;
+ u32 rsvd1;
+} __packed;
+
+struct vsec_device {
+ u8 type;
+ char *ep_name;
+ ulong size;
+ char *compat;
+};
+
+static struct vsec_device vsec_devs[] = {
+ {
+ .type = VSEC_TYPE_UUID,
+ .ep_name = XRT_MD_NODE_BLP_ROM,
+ .size = VSEC_UUID_LEN,
+ .compat = "vsec-uuid",
+ },
+ {
+ .type = VSEC_TYPE_FLASH,
+ .ep_name = XRT_MD_NODE_FLASH_VSEC,
+ .size = 4096,
+ .compat = "vsec-flash",
+ },
+ {
+ .type = VSEC_TYPE_PLATINFO,
+ .ep_name = XRT_MD_NODE_PLAT_INFO,
+ .size = 4,
+ .compat = "vsec-platinfo",
+ },
+ {
+ .type = VSEC_TYPE_MAILBOX,
+ .ep_name = XRT_MD_NODE_MAILBOX_VSEC,
+ .size = 48,
+ .compat = "vsec-mbx",
+ },
+};
+
+XRT_DEFINE_REGMAP_CONFIG(vsec_regmap_config);
+
+struct xrt_vsec {
+ struct xrt_device *xdev;
+ struct regmap *regmap;
+ u32 length;
+
+ char *metadata;
+ char uuid[VSEC_UUID_LEN];
+ int group;
+};
+
+static inline int vsec_read_entry(struct xrt_vsec *vsec, u32 index, struct xrt_vsec_entry *entry)
+{
+ int ret;
+
+ ret = regmap_bulk_read(vsec->regmap, sizeof(struct xrt_vsec_header) +
+ index * sizeof(struct xrt_vsec_entry), entry,
+ sizeof(struct xrt_vsec_entry) /
+ vsec_regmap_config.reg_stride);
+
+ return ret;
+}
+
+static inline u32 vsec_get_bar(struct xrt_vsec_entry *entry)
+{
+ return (entry->bar_rev >> 4) & 0xf;
+}
+
+static inline u64 vsec_get_bar_off(struct xrt_vsec_entry *entry)
+{
+ return entry->off_lo | ((u64)entry->off_hi << 16);
+}
+
+static inline u32 vsec_get_rev(struct xrt_vsec_entry *entry)
+{
+ return entry->bar_rev & 0xf;
+}
+
+static char *type2epname(u32 type)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
+ if (vsec_devs[i].type == type)
+ return (vsec_devs[i].ep_name);
+ }
+
+ return NULL;
+}
+
+static ulong type2size(u32 type)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
+ if (vsec_devs[i].type == type)
+ return (vsec_devs[i].size);
+ }
+
+ return 0;
+}
+
+static char *type2compat(u32 type)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
+ if (vsec_devs[i].type == type)
+ return (vsec_devs[i].compat);
+ }
+
+ return NULL;
+}
+
+static int xrt_vsec_add_node(struct xrt_vsec *vsec,
+ void *md_blob, struct xrt_vsec_entry *p_entry)
+{
+ struct xrt_md_endpoint ep;
+ char compat_ver[64];
+ int ret;
+
+ if (!type2epname(p_entry->type))
+ return -EINVAL;
+
+ /*
+	 * VSEC may have more than one mailbox instance on a card that
+	 * has more than one physical function. This is not supported
+	 * for now; only one mailbox is assumed.
+ */
+
+ snprintf(compat_ver, sizeof(compat_ver) - 1, "%d-%d.%d.%d",
+ p_entry->ver_type, p_entry->major, p_entry->minor,
+ vsec_get_rev(p_entry));
+ ep.ep_name = type2epname(p_entry->type);
+ ep.bar_index = vsec_get_bar(p_entry);
+ ep.bar_off = vsec_get_bar_off(p_entry);
+ ep.size = type2size(p_entry->type);
+ ep.compat = type2compat(p_entry->type);
+ ep.compat_ver = compat_ver;
+ ret = xrt_md_add_endpoint(DEV(vsec->xdev), vsec->metadata, &ep);
+ if (ret)
+ xrt_err(vsec->xdev, "add ep failed, ret %d", ret);
+
+ return ret;
+}
+
+static int xrt_vsec_create_metadata(struct xrt_vsec *vsec)
+{
+ struct xrt_vsec_entry entry;
+ int i, ret;
+
+ ret = xrt_md_create(&vsec->xdev->dev, &vsec->metadata);
+ if (ret) {
+ xrt_err(vsec->xdev, "create metadata failed");
+ return ret;
+ }
+
+ for (i = 0; i * sizeof(entry) < vsec->length -
+ sizeof(struct xrt_vsec_header); i++) {
+ ret = vsec_read_entry(vsec, i, &entry);
+ if (ret) {
+ xrt_err(vsec->xdev, "failed read entry %d, ret %d", i, ret);
+ goto fail;
+ }
+
+ if (entry.type == VSEC_TYPE_END)
+ break;
+ ret = xrt_vsec_add_node(vsec, vsec->metadata, &entry);
+ if (ret)
+ goto fail;
+ }
+
+ return 0;
+
+fail:
+ vfree(vsec->metadata);
+ vsec->metadata = NULL;
+ return ret;
+}
+
+static int xrt_vsec_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
+{
+ int ret = 0;
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ /* Does not handle any event. */
+ break;
+ default:
+ ret = -EINVAL;
+		xrt_err(xdev, "should never be called");
+ break;
+ }
+
+ return ret;
+}
+
+static int xrt_vsec_mapio(struct xrt_vsec *vsec)
+{
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(vsec->xdev);
+ struct resource *res = NULL;
+ void __iomem *base = NULL;
+ const u64 *bar_off;
+ const u32 *bar;
+ u64 addr;
+ int ret;
+
+ if (!pdata || xrt_md_size(DEV(vsec->xdev), pdata->xsp_dtb) == XRT_MD_INVALID_LENGTH) {
+ xrt_err(vsec->xdev, "empty metadata");
+ return -EINVAL;
+ }
+
+ ret = xrt_md_get_prop(DEV(vsec->xdev), pdata->xsp_dtb, XRT_MD_NODE_VSEC,
+ NULL, XRT_MD_PROP_BAR_IDX, (const void **)&bar, NULL);
+ if (ret) {
+ xrt_err(vsec->xdev, "failed to get bar idx, ret %d", ret);
+ return -EINVAL;
+ }
+
+ ret = xrt_md_get_prop(DEV(vsec->xdev), pdata->xsp_dtb, XRT_MD_NODE_VSEC,
+ NULL, XRT_MD_PROP_OFFSET, (const void **)&bar_off, NULL);
+ if (ret) {
+ xrt_err(vsec->xdev, "failed to get bar off, ret %d", ret);
+ return -EINVAL;
+ }
+
+ xrt_info(vsec->xdev, "Map vsec at bar %d, offset 0x%llx",
+ be32_to_cpu(*bar), be64_to_cpu(*bar_off));
+
+ xleaf_get_root_res(vsec->xdev, be32_to_cpu(*bar), &res);
+ if (!res) {
+ xrt_err(vsec->xdev, "failed to get bar addr");
+ return -EINVAL;
+ }
+
+ addr = res->start + be64_to_cpu(*bar_off);
+
+ base = devm_ioremap(&vsec->xdev->dev, addr, vsec_regmap_config.max_register);
+ if (!base) {
+ xrt_err(vsec->xdev, "Map failed");
+ return -EIO;
+ }
+
+ vsec->regmap = devm_regmap_init_mmio(&vsec->xdev->dev, base, &vsec_regmap_config);
+ if (IS_ERR(vsec->regmap)) {
+ xrt_err(vsec->xdev, "regmap %pR failed", res);
+ return PTR_ERR(vsec->regmap);
+ }
+
+ ret = regmap_read(vsec->regmap, VSEC_REG_LENGTH, &vsec->length);
+ if (ret) {
+ xrt_err(vsec->xdev, "failed to read length %d", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static void xrt_vsec_remove(struct xrt_device *xdev)
+{
+ struct xrt_vsec *vsec;
+
+ vsec = xrt_get_drvdata(xdev);
+
+ if (vsec->group >= 0)
+ xleaf_destroy_group(xdev, vsec->group);
+ vfree(vsec->metadata);
+}
+
+static int xrt_vsec_probe(struct xrt_device *xdev)
+{
+ struct xrt_vsec *vsec;
+ int ret = 0;
+
+ vsec = devm_kzalloc(&xdev->dev, sizeof(*vsec), GFP_KERNEL);
+ if (!vsec)
+ return -ENOMEM;
+
+ vsec->xdev = xdev;
+ vsec->group = -1;
+ xrt_set_drvdata(xdev, vsec);
+
+ ret = xrt_vsec_mapio(vsec);
+ if (ret)
+ goto failed;
+
+ ret = xrt_vsec_create_metadata(vsec);
+ if (ret) {
+ xrt_err(xdev, "create metadata failed, ret %d", ret);
+ goto failed;
+ }
+ ret = xleaf_create_group(xdev, vsec->metadata);
+ if (ret < 0) {
+		xrt_err(xdev, "create group failed, ret %d", ret);
+ goto failed;
+ }
+ vsec->group = ret;
+
+ return 0;
+
+failed:
+ xrt_vsec_remove(xdev);
+
+ return ret;
+}
+
+static struct xrt_dev_endpoints xrt_vsec_endpoints[] = {
+ {
+ .xse_names = (struct xrt_dev_ep_names []){
+ { .ep_name = XRT_MD_NODE_VSEC },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_driver xrt_vsec_driver = {
+ .driver = {
+ .name = XRT_VSEC,
+ },
+ .subdev_id = XRT_SUBDEV_VSEC,
+ .endpoints = xrt_vsec_endpoints,
+ .probe = xrt_vsec_probe,
+ .remove = xrt_vsec_remove,
+ .leaf_call = xrt_vsec_leaf_call,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(vsec);
--
2.27.0

2021-04-27 21:03:29

by Lizhi Hou

[permalink] [raw]
Subject: [PATCH V5 XRT Alveo 06/20] fpga: xrt: char dev node helper functions

Helper functions for char device node creation / removal for xrt
drivers. This is part of the xrt driver infrastructure.
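
To illustrate how a leaf driver is expected to use these helpers, a
hypothetical leaf with a character node could look roughly like the sketch
below (the "foo" names are made up; the endpoint table and error handling are
omitted for brevity). The matching xleaf_devnode_destroy() call would go into
the leaf's remove callback:

static int xrt_foo_open(struct inode *inode, struct file *file)
{
	struct xrt_device *xdev = xleaf_devnode_open(inode);

	if (!xdev)
		return -EBUSY;
	file->private_data = xrt_get_drvdata(xdev);
	return 0;
}

static int xrt_foo_release(struct inode *inode, struct file *file)
{
	xleaf_devnode_close(inode);
	return 0;
}

static int xrt_foo_probe(struct xrt_device *xdev)
{
	/* Creates /dev/xrt/<root>/foo.<instance> via the helpers below. */
	return xleaf_devnode_create(xdev, "foo", NULL);
}

static struct xrt_driver xrt_foo_driver = {
	.driver = { .name = "xrt_foo" },
	.subdev_id = XRT_SUBDEV_TEST,
	.file_ops = {
		.xsf_ops = {
			.owner = THIS_MODULE,
			.open = xrt_foo_open,
			.release = xrt_foo_release,
		},
		.xsf_mode = XRT_DEV_FILE_MULTI_INST,
	},
	.probe = xrt_foo_probe,
};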

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/lib/cdev.c | 210 ++++++++++++++++++++++++++++++++++++
1 file changed, 210 insertions(+)
create mode 100644 drivers/fpga/xrt/lib/cdev.c

diff --git a/drivers/fpga/xrt/lib/cdev.c b/drivers/fpga/xrt/lib/cdev.c
new file mode 100644
index 000000000000..4edd2c1d459b
--- /dev/null
+++ b/drivers/fpga/xrt/lib/cdev.c
@@ -0,0 +1,210 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA device node helper functions.
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#include "xleaf.h"
+
+extern struct class *xrt_class;
+
+#define XRT_CDEV_DIR "xrt"
+#define INODE2PDATA(inode) \
+ container_of((inode)->i_cdev, struct xrt_subdev_platdata, xsp_cdev)
+#define INODE2PDEV(inode) \
+ to_xrt_dev(kobj_to_dev((inode)->i_cdev->kobj.parent))
+#define CDEV_NAME(sysdev) (strchr((sysdev)->kobj.name, '!') + 1)
+
+/* Allow it to be accessed from cdev. */
+static void xleaf_devnode_allowed(struct xrt_device *xdev)
+{
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(xdev);
+
+ /* Allow new opens. */
+ mutex_lock(&pdata->xsp_devnode_lock);
+ pdata->xsp_devnode_online = true;
+ mutex_unlock(&pdata->xsp_devnode_lock);
+}
+
+/* Turn off access from cdev and wait for all existing users to go away. */
+static void xleaf_devnode_disallowed(struct xrt_device *xdev)
+{
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(xdev);
+
+ mutex_lock(&pdata->xsp_devnode_lock);
+
+ /* Prevent new opens. */
+ pdata->xsp_devnode_online = false;
+ /* Wait for existing user to close. */
+ while (pdata->xsp_devnode_ref) {
+ mutex_unlock(&pdata->xsp_devnode_lock);
+ wait_for_completion(&pdata->xsp_devnode_comp);
+ mutex_lock(&pdata->xsp_devnode_lock);
+ }
+
+ mutex_unlock(&pdata->xsp_devnode_lock);
+}
+
+static struct xrt_device *
+__xleaf_devnode_open(struct inode *inode, bool excl)
+{
+ struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
+ struct xrt_device *xdev = INODE2PDEV(inode);
+ bool opened = false;
+
+ mutex_lock(&pdata->xsp_devnode_lock);
+
+ if (pdata->xsp_devnode_online) {
+ if (excl && pdata->xsp_devnode_ref) {
+ xrt_err(xdev, "%s has already been opened exclusively",
+ CDEV_NAME(pdata->xsp_sysdev));
+ } else if (!excl && pdata->xsp_devnode_excl) {
+ xrt_err(xdev, "%s has been opened exclusively",
+ CDEV_NAME(pdata->xsp_sysdev));
+ } else {
+ pdata->xsp_devnode_ref++;
+ pdata->xsp_devnode_excl = excl;
+ opened = true;
+ xrt_info(xdev, "opened %s, ref=%d",
+ CDEV_NAME(pdata->xsp_sysdev),
+ pdata->xsp_devnode_ref);
+ }
+ } else {
+ xrt_err(xdev, "%s is offline", CDEV_NAME(pdata->xsp_sysdev));
+ }
+
+ mutex_unlock(&pdata->xsp_devnode_lock);
+
+ xdev = opened ? xdev : NULL;
+ return xdev;
+}
+
+struct xrt_device *
+xleaf_devnode_open_excl(struct inode *inode)
+{
+ return __xleaf_devnode_open(inode, true);
+}
+
+struct xrt_device *
+xleaf_devnode_open(struct inode *inode)
+{
+ return __xleaf_devnode_open(inode, false);
+}
+EXPORT_SYMBOL_GPL(xleaf_devnode_open);
+
+void xleaf_devnode_close(struct inode *inode)
+{
+ struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
+ struct xrt_device *xdev = INODE2PDEV(inode);
+ bool notify = false;
+
+ mutex_lock(&pdata->xsp_devnode_lock);
+
+ WARN_ON(pdata->xsp_devnode_ref == 0);
+ pdata->xsp_devnode_ref--;
+ if (pdata->xsp_devnode_ref == 0) {
+ pdata->xsp_devnode_excl = false;
+ notify = true;
+ }
+	if (notify)
+		xrt_info(xdev, "closed %s, notifying waiter", CDEV_NAME(pdata->xsp_sysdev));
+	else
+		xrt_info(xdev, "closed %s", CDEV_NAME(pdata->xsp_sysdev));
+
+ mutex_unlock(&pdata->xsp_devnode_lock);
+
+ if (notify)
+ complete(&pdata->xsp_devnode_comp);
+}
+EXPORT_SYMBOL_GPL(xleaf_devnode_close);
+
+static inline enum xrt_dev_file_mode
+devnode_mode(struct xrt_device *xdev)
+{
+ return DEV_FILE_OPS(xdev)->xsf_mode;
+}
+
+int xleaf_devnode_create(struct xrt_device *xdev, const char *file_name,
+ const char *inst_name)
+{
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(xdev);
+ struct xrt_dev_file_ops *fops = DEV_FILE_OPS(xdev);
+ struct cdev *cdevp;
+ struct device *sysdev;
+ int ret = 0;
+ char fname[256];
+
+ mutex_init(&pdata->xsp_devnode_lock);
+ init_completion(&pdata->xsp_devnode_comp);
+
+ cdevp = &DEV_PDATA(xdev)->xsp_cdev;
+ cdev_init(cdevp, &fops->xsf_ops);
+ cdevp->owner = fops->xsf_ops.owner;
+ cdevp->dev = MKDEV(MAJOR(fops->xsf_dev_t), xdev->instance);
+
+ /*
+	 * Set xdev as the parent of cdev so that xdev (and its platform
+	 * data) will not be freed before cdev is freed.
+ */
+ cdev_set_parent(cdevp, &DEV(xdev)->kobj);
+
+ ret = cdev_add(cdevp, cdevp->dev, 1);
+ if (ret) {
+ xrt_err(xdev, "failed to add cdev: %d", ret);
+ goto failed;
+ }
+ if (!file_name)
+ file_name = xdev->name;
+ if (!inst_name) {
+ if (devnode_mode(xdev) == XRT_DEV_FILE_MULTI_INST) {
+ snprintf(fname, sizeof(fname), "%s/%s/%s.%u",
+ XRT_CDEV_DIR, DEV_PDATA(xdev)->xsp_root_name,
+ file_name, xdev->instance);
+ } else {
+ snprintf(fname, sizeof(fname), "%s/%s/%s",
+ XRT_CDEV_DIR, DEV_PDATA(xdev)->xsp_root_name,
+ file_name);
+ }
+ } else {
+ snprintf(fname, sizeof(fname), "%s/%s/%s.%s", XRT_CDEV_DIR,
+ DEV_PDATA(xdev)->xsp_root_name, file_name, inst_name);
+ }
+ sysdev = device_create(xrt_class, NULL, cdevp->dev, NULL, "%s", fname);
+ if (IS_ERR(sysdev)) {
+ ret = PTR_ERR(sysdev);
+ xrt_err(xdev, "failed to create device node: %d", ret);
+ goto failed_cdev_add;
+ }
+ pdata->xsp_sysdev = sysdev;
+
+ xleaf_devnode_allowed(xdev);
+
+ xrt_info(xdev, "created (%d, %d): /dev/%s",
+ MAJOR(cdevp->dev), xdev->instance, fname);
+ return 0;
+
+failed_cdev_add:
+ cdev_del(cdevp);
+failed:
+ cdevp->owner = NULL;
+ return ret;
+}
+
+void xleaf_devnode_destroy(struct xrt_device *xdev)
+{
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(xdev);
+ struct cdev *cdevp = &pdata->xsp_cdev;
+ dev_t dev = cdevp->dev;
+
+ xleaf_devnode_disallowed(xdev);
+
+ xrt_info(xdev, "removed (%d, %d): /dev/%s/%s", MAJOR(dev), MINOR(dev),
+ XRT_CDEV_DIR, CDEV_NAME(pdata->xsp_sysdev));
+ device_destroy(xrt_class, cdevp->dev);
+ pdata->xsp_sysdev = NULL;
+ cdev_del(cdevp);
+}
--
2.27.0

2021-04-27 21:03:30

by Lizhi Hou

[permalink] [raw]
Subject: [PATCH V5 XRT Alveo 14/20] fpga: xrt: ICAP driver

ICAP stands for Internal Configuration Access Port. ICAP is discovered
by walking the firmware metadata. An xrt device node will be created
for it. The FPGA bitstream is written to hardware through ICAP.
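
For context, other parts of the stack (for instance the xclbin download flow)
are expected to drive ICAP only through its leaf calls. Below is a hedged
sketch of such a caller, using the xleaf lookup helpers and the XRT_ICAP_WRITE
command added here; the function name is made up:

/* Illustrative caller: push a bitstream through the ICAP leaf. */
static int foo_download_bitstream(struct xrt_device *xdev, void *bits, u32 len)
{
	struct xrt_icap_wr wr = { .xiiw_bit_data = bits, .xiiw_data_len = len };
	struct xrt_device *icap_leaf;
	int ret;

	icap_leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_ICAP, XRT_INVALID_DEVICE_INST);
	if (!icap_leaf)
		return -ENODEV;

	ret = xleaf_call(icap_leaf, XRT_ICAP_WRITE, &wr);
	xleaf_put_leaf(xdev, icap_leaf);

	return ret;
}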

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/xleaf/icap.h | 27 +++
drivers/fpga/xrt/lib/xleaf/icap.c | 328 ++++++++++++++++++++++++++
2 files changed, 355 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xleaf/icap.h
create mode 100644 drivers/fpga/xrt/lib/xleaf/icap.c

diff --git a/drivers/fpga/xrt/include/xleaf/icap.h b/drivers/fpga/xrt/include/xleaf/icap.h
new file mode 100644
index 000000000000..96d39a8934fa
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf/icap.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#ifndef _XRT_ICAP_H_
+#define _XRT_ICAP_H_
+
+#include "xleaf.h"
+
+/*
+ * ICAP driver leaf calls.
+ */
+enum xrt_icap_leaf_cmd {
+ XRT_ICAP_WRITE = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+ XRT_ICAP_GET_IDCODE,
+};
+
+struct xrt_icap_wr {
+ void *xiiw_bit_data;
+ u32 xiiw_data_len;
+};
+
+#endif /* _XRT_ICAP_H_ */
diff --git a/drivers/fpga/xrt/lib/xleaf/icap.c b/drivers/fpga/xrt/lib/xleaf/icap.c
new file mode 100644
index 000000000000..755ea2fc0e75
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/icap.c
@@ -0,0 +1,328 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA ICAP Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou<[email protected]>
+ * Sonal Santan <[email protected]>
+ * Max Zhen <[email protected]>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/regmap.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/icap.h"
+#include "xclbin-helper.h"
+
+#define XRT_ICAP "xrt_icap"
+
+#define ICAP_ERR(icap, fmt, arg...) \
+ xrt_err((icap)->xdev, fmt "\n", ##arg)
+#define ICAP_WARN(icap, fmt, arg...) \
+ xrt_warn((icap)->xdev, fmt "\n", ##arg)
+#define ICAP_INFO(icap, fmt, arg...) \
+ xrt_info((icap)->xdev, fmt "\n", ##arg)
+#define ICAP_DBG(icap, fmt, arg...) \
+ xrt_dbg((icap)->xdev, fmt "\n", ##arg)
+
+/*
+ * AXI-HWICAP IP register layout. Please see
+ * https://www.xilinx.com/support/documentation/ip_documentation/axi_hwicap/v3_0/pg134-axi-hwicap.pdf
+ */
+#define ICAP_REG_GIER 0x1C
+#define ICAP_REG_ISR 0x20
+#define ICAP_REG_IER 0x28
+#define ICAP_REG_WF 0x100
+#define ICAP_REG_RF 0x104
+#define ICAP_REG_SZ 0x108
+#define ICAP_REG_CR 0x10C
+#define ICAP_REG_SR 0x110
+#define ICAP_REG_WFV 0x114
+#define ICAP_REG_RFO 0x118
+#define ICAP_REG_ASR 0x11C
+
+#define ICAP_STATUS_EOS 0x4
+#define ICAP_STATUS_DONE 0x1
+
+/*
+ * Canned command sequence to obtain IDCODE of the FPGA
+ */
+static const u32 idcode_stream[] = {
+ /* dummy word */
+ cpu_to_be32(0xffffffff),
+ /* sync word */
+ cpu_to_be32(0xaa995566),
+ /* NOP word */
+ cpu_to_be32(0x20000000),
+ /* NOP word */
+ cpu_to_be32(0x20000000),
+ /* ID code */
+ cpu_to_be32(0x28018001),
+ /* NOP word */
+ cpu_to_be32(0x20000000),
+ /* NOP word */
+ cpu_to_be32(0x20000000),
+};
+
+XRT_DEFINE_REGMAP_CONFIG(icap_regmap_config);
+
+struct icap {
+ struct xrt_device *xdev;
+ struct regmap *regmap;
+ struct mutex icap_lock; /* icap dev lock */
+ u32 idcode;
+};
+
+static int wait_for_done(const struct icap *icap)
+{
+ int i = 0;
+ int ret;
+ u32 w;
+
+ for (i = 0; i < 10; i++) {
+ /*
+		 * It takes a few microseconds for ICAP to process incoming data.
+		 * Polling every 5 us up to 10 times is good enough.
+ */
+ udelay(5);
+ ret = regmap_read(icap->regmap, ICAP_REG_SR, &w);
+ if (ret)
+ return ret;
+ ICAP_INFO(icap, "XHWICAP_SR: %x", w);
+ if (w & (ICAP_STATUS_EOS | ICAP_STATUS_DONE))
+ return 0;
+ }
+
+ ICAP_ERR(icap, "bitstream download timeout");
+ return -ETIMEDOUT;
+}
+
+static int icap_write(const struct icap *icap, const u32 *word_buf, int size)
+{
+ u32 value = 0;
+ int ret;
+ int i;
+
+ for (i = 0; i < size; i++) {
+ value = be32_to_cpu(word_buf[i]);
+ ret = regmap_write(icap->regmap, ICAP_REG_WF, value);
+ if (ret)
+ return ret;
+ }
+
+ ret = regmap_write(icap->regmap, ICAP_REG_CR, 0x1);
+ if (ret)
+ return ret;
+
+ for (i = 0; i < 20; i++) {
+ ret = regmap_read(icap->regmap, ICAP_REG_CR, &value);
+ if (ret)
+ return ret;
+
+ if ((value & 0x1) == 0)
+ return 0;
+ ndelay(50);
+ }
+
+	ICAP_ERR(icap, "timed out writing %d dwords", size);
+ return -EIO;
+}
+
+static int bitstream_helper(struct icap *icap, const u32 *word_buffer,
+ u32 word_count)
+{
+ int wr_fifo_vacancy = 0;
+ u32 word_written = 0;
+ u32 remain_word;
+ int err = 0;
+
+ WARN_ON(!mutex_is_locked(&icap->icap_lock));
+ for (remain_word = word_count; remain_word > 0;
+ remain_word -= word_written, word_buffer += word_written) {
+ err = regmap_read(icap->regmap, ICAP_REG_WFV, &wr_fifo_vacancy);
+ if (err) {
+ ICAP_ERR(icap, "read wr_fifo_vacancy failed %d", err);
+ break;
+ }
+ if (wr_fifo_vacancy <= 0) {
+ ICAP_ERR(icap, "no vacancy: %d", wr_fifo_vacancy);
+ err = -EIO;
+ break;
+ }
+ word_written = (wr_fifo_vacancy < remain_word) ?
+ wr_fifo_vacancy : remain_word;
+ if (icap_write(icap, word_buffer, word_written) != 0) {
+ ICAP_ERR(icap, "write failed remain %d, written %d",
+ remain_word, word_written);
+ err = -EIO;
+ break;
+ }
+ }
+
+ return err;
+}
+
+static int icap_download(struct icap *icap, const char *buffer,
+ unsigned long length)
+{
+ u32 num_chars_read = XCLBIN_HWICAP_BITFILE_BUF_SZ;
+ u32 byte_read;
+ int err = 0;
+
+ if (length % sizeof(u32)) {
+ ICAP_ERR(icap, "invalid bitstream length %ld", length);
+ return -EINVAL;
+ }
+
+ mutex_lock(&icap->icap_lock);
+ for (byte_read = 0; byte_read < length; byte_read += num_chars_read) {
+ num_chars_read = length - byte_read;
+ if (num_chars_read > XCLBIN_HWICAP_BITFILE_BUF_SZ)
+ num_chars_read = XCLBIN_HWICAP_BITFILE_BUF_SZ;
+
+ err = bitstream_helper(icap, (u32 *)buffer, num_chars_read / sizeof(u32));
+ if (err)
+ goto failed;
+ buffer += num_chars_read;
+ }
+
+	/* No cleanup is needed if the ICAP write times out. */
+ err = wait_for_done(icap);
+
+failed:
+ mutex_unlock(&icap->icap_lock);
+
+ return err;
+}
+
+/*
+ * Discover the FPGA IDCODE using special sequence of canned commands
+ */
+static int icap_probe_chip(struct icap *icap)
+{
+ int err;
+ u32 val = 0;
+
+ regmap_read(icap->regmap, ICAP_REG_SR, &val);
+ if (val != ICAP_STATUS_DONE)
+ return -ENODEV;
+ /* Read ICAP FIFO vacancy */
+ regmap_read(icap->regmap, ICAP_REG_WFV, &val);
+ if (val < 8)
+ return -ENODEV;
+ err = icap_write(icap, idcode_stream, ARRAY_SIZE(idcode_stream));
+ if (err)
+ return err;
+ err = wait_for_done(icap);
+ if (err)
+ return err;
+
+ /* Tell config engine how many words to transfer to read FIFO */
+ regmap_write(icap->regmap, ICAP_REG_SZ, 0x1);
+ /* Switch the ICAP to read mode */
+ regmap_write(icap->regmap, ICAP_REG_CR, 0x2);
+ err = wait_for_done(icap);
+ if (err)
+ return err;
+
+ /* Read IDCODE from Read FIFO */
+ regmap_read(icap->regmap, ICAP_REG_RF, &icap->idcode);
+ return 0;
+}
+
+static int
+xrt_icap_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
+{
+ struct xrt_icap_wr *wr_arg = arg;
+ struct icap *icap;
+ int ret = 0;
+
+ icap = xrt_get_drvdata(xdev);
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ /* Does not handle any event. */
+ break;
+ case XRT_ICAP_WRITE:
+ ret = icap_download(icap, wr_arg->xiiw_bit_data,
+ wr_arg->xiiw_data_len);
+ break;
+ case XRT_ICAP_GET_IDCODE:
+ *(u32 *)arg = icap->idcode;
+ break;
+ default:
+ ICAP_ERR(icap, "unknown command %d", cmd);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static int xrt_icap_probe(struct xrt_device *xdev)
+{
+ void __iomem *base = NULL;
+ struct resource *res;
+ struct icap *icap;
+ int result = 0;
+
+ icap = devm_kzalloc(&xdev->dev, sizeof(*icap), GFP_KERNEL);
+ if (!icap)
+ return -ENOMEM;
+
+ icap->xdev = xdev;
+ xrt_set_drvdata(xdev, icap);
+ mutex_init(&icap->icap_lock);
+
+ xrt_info(xdev, "probing");
+ res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
+ if (!res)
+ return -EINVAL;
+
+ base = devm_ioremap_resource(&xdev->dev, res);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+ icap->regmap = devm_regmap_init_mmio(&xdev->dev, base, &icap_regmap_config);
+ if (IS_ERR(icap->regmap)) {
+ ICAP_ERR(icap, "init mmio failed");
+ return PTR_ERR(icap->regmap);
+ }
+ /* Disable ICAP interrupts */
+ regmap_write(icap->regmap, ICAP_REG_GIER, 0);
+
+ result = icap_probe_chip(icap);
+ if (result)
+ xrt_err(xdev, "Failed to probe FPGA");
+ else
+ xrt_info(xdev, "Discovered FPGA IDCODE %x", icap->idcode);
+ return result;
+}
+
+static struct xrt_dev_endpoints xrt_icap_endpoints[] = {
+ {
+ .xse_names = (struct xrt_dev_ep_names[]) {
+ { .ep_name = XRT_MD_NODE_FPGA_CONFIG },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_driver xrt_icap_driver = {
+ .driver = {
+ .name = XRT_ICAP,
+ },
+ .subdev_id = XRT_SUBDEV_ICAP,
+ .endpoints = xrt_icap_endpoints,
+ .probe = xrt_icap_probe,
+ .leaf_call = xrt_icap_leaf_call,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(icap);
--
2.27.0

2021-04-27 21:05:04

by Lizhi Hou

[permalink] [raw]
Subject: [PATCH V5 XRT Alveo 18/20] fpga: xrt: DDR calibration driver

Add DDR calibration driver. DDR calibration is a hardware function
discovered by walking the firmware metadata. An xrt device node will
be created for it. Hardware provides the DDR calibration status
through this function.
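
For context, consumers of this driver would read the calibration status
through the XRT_CALIB_RESULT leaf call after a bitstream download. A rough,
hypothetical sketch (the function name is made up) using only the interfaces
from this series:

/* Illustrative caller: query calibration status after bitstream download. */
static int foo_check_ddr_calibration(struct xrt_device *xdev)
{
	enum xrt_calib_results result = XRT_CALIB_UNKNOWN;
	struct xrt_device *calib_leaf;
	int ret;

	calib_leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_CALIB, XRT_INVALID_DEVICE_INST);
	if (!calib_leaf)
		return -ENODEV;

	ret = xleaf_call(calib_leaf, XRT_CALIB_RESULT, &result);
	xleaf_put_leaf(xdev, calib_leaf);
	if (ret)
		return ret;

	return result == XRT_CALIB_SUCCEEDED ? 0 : -EIO;
}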

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
.../fpga/xrt/include/xleaf/ddr_calibration.h | 28 +++
drivers/fpga/xrt/lib/xleaf/ddr_calibration.c | 210 ++++++++++++++++++
2 files changed, 238 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xleaf/ddr_calibration.h
create mode 100644 drivers/fpga/xrt/lib/xleaf/ddr_calibration.c

diff --git a/drivers/fpga/xrt/include/xleaf/ddr_calibration.h b/drivers/fpga/xrt/include/xleaf/ddr_calibration.h
new file mode 100644
index 000000000000..878740c26ca2
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf/ddr_calibration.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XRT_DDR_CALIBRATION_H_
+#define _XRT_DDR_CALIBRATION_H_
+
+#include "xleaf.h"
+#include <linux/xrt/xclbin.h>
+
+/*
+ * Memory calibration driver leaf calls.
+ */
+enum xrt_calib_results {
+ XRT_CALIB_UNKNOWN = 0,
+ XRT_CALIB_SUCCEEDED,
+ XRT_CALIB_FAILED,
+};
+
+enum xrt_calib_leaf_cmd {
+ XRT_CALIB_RESULT = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+};
+
+#endif /* _XRT_DDR_CALIBRATION_H_ */
diff --git a/drivers/fpga/xrt/lib/xleaf/ddr_calibration.c b/drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
new file mode 100644
index 000000000000..36a0937c9195
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
@@ -0,0 +1,210 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA memory calibration driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * memory calibration
+ *
+ * Authors:
+ * Lizhi Hou<[email protected]>
+ */
+#include <linux/delay.h>
+#include <linux/regmap.h>
+#include "xclbin-helper.h"
+#include "metadata.h"
+#include "xleaf/ddr_calibration.h"
+
+#define XRT_CALIB "xrt_calib"
+
+#define XRT_CALIB_STATUS_REG 0
+#define XRT_CALIB_READ_RETRIES 20
+#define XRT_CALIB_READ_INTERVAL 500 /* ms */
+
+XRT_DEFINE_REGMAP_CONFIG(calib_regmap_config);
+
+struct calib_cache {
+ struct list_head link;
+ const char *ep_name;
+ char *data;
+ u32 data_size;
+};
+
+struct calib {
+ struct xrt_device *xdev;
+ struct regmap *regmap;
+ struct mutex lock; /* calibration dev lock */
+ struct list_head cache_list;
+ u32 cache_num;
+ enum xrt_calib_results result;
+};
+
+static void __calib_cache_clean_nolock(struct calib *calib)
+{
+ struct calib_cache *cache, *temp;
+
+ list_for_each_entry_safe(cache, temp, &calib->cache_list, link) {
+ vfree(cache->data);
+ list_del(&cache->link);
+ vfree(cache);
+ }
+ calib->cache_num = 0;
+}
+
+static void calib_cache_clean(struct calib *calib)
+{
+ mutex_lock(&calib->lock);
+ __calib_cache_clean_nolock(calib);
+ mutex_unlock(&calib->lock);
+}
+
+static int calib_calibration(struct calib *calib)
+{
+ u32 times = XRT_CALIB_READ_RETRIES;
+ u32 status;
+ int ret;
+
+ while (times != 0) {
+ ret = regmap_read(calib->regmap, XRT_CALIB_STATUS_REG, &status);
+ if (ret) {
+ xrt_err(calib->xdev, "failed to read status reg %d", ret);
+ return ret;
+ }
+
+ if (status & BIT(0))
+ break;
+ msleep(XRT_CALIB_READ_INTERVAL);
+ times--;
+ }
+
+ if (!times) {
+ xrt_err(calib->xdev,
+ "MIG calibration timeout after bitstream download");
+ return -ETIMEDOUT;
+ }
+
+ xrt_info(calib->xdev, "took %dms", (XRT_CALIB_READ_RETRIES - times) *
+ XRT_CALIB_READ_INTERVAL);
+ return 0;
+}
+
+static void xrt_calib_event_cb(struct xrt_device *xdev, void *arg)
+{
+ struct calib *calib = xrt_get_drvdata(xdev);
+ struct xrt_event *evt = (struct xrt_event *)arg;
+ enum xrt_events e = evt->xe_evt;
+ enum xrt_subdev_id id;
+ int ret;
+
+ id = evt->xe_subdev.xevt_subdev_id;
+
+ switch (e) {
+ case XRT_EVENT_POST_CREATION:
+ if (id == XRT_SUBDEV_UCS) {
+ ret = calib_calibration(calib);
+ if (ret)
+ calib->result = XRT_CALIB_FAILED;
+ else
+ calib->result = XRT_CALIB_SUCCEEDED;
+ }
+ break;
+ default:
+ xrt_dbg(xdev, "ignored event %d", e);
+ break;
+ }
+}
+
+static void xrt_calib_remove(struct xrt_device *xdev)
+{
+ struct calib *calib = xrt_get_drvdata(xdev);
+
+ calib_cache_clean(calib);
+}
+
+static int xrt_calib_probe(struct xrt_device *xdev)
+{
+ void __iomem *base = NULL;
+ struct resource *res;
+ struct calib *calib;
+ int err = 0;
+
+ calib = devm_kzalloc(&xdev->dev, sizeof(*calib), GFP_KERNEL);
+ if (!calib)
+ return -ENOMEM;
+
+ calib->xdev = xdev;
+ xrt_set_drvdata(xdev, calib);
+
+ res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ err = -EINVAL;
+ goto failed;
+ }
+
+ base = devm_ioremap_resource(&xdev->dev, res);
+ if (IS_ERR(base)) {
+ err = PTR_ERR(base);
+ goto failed;
+ }
+
+ calib->regmap = devm_regmap_init_mmio(&xdev->dev, base, &calib_regmap_config);
+ if (IS_ERR(calib->regmap)) {
+ xrt_err(xdev, "Map iomem failed");
+ err = PTR_ERR(calib->regmap);
+ goto failed;
+ }
+
+ mutex_init(&calib->lock);
+ INIT_LIST_HEAD(&calib->cache_list);
+
+ return 0;
+
+failed:
+ return err;
+}
+
+static int
+xrt_calib_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
+{
+ struct calib *calib = xrt_get_drvdata(xdev);
+ int ret = 0;
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ xrt_calib_event_cb(xdev, arg);
+ break;
+ case XRT_CALIB_RESULT: {
+ enum xrt_calib_results *r = (enum xrt_calib_results *)arg;
+ *r = calib->result;
+ break;
+ }
+ default:
+ xrt_err(xdev, "unsupported cmd %d", cmd);
+ ret = -EINVAL;
+ }
+ return ret;
+}
+
+static struct xrt_dev_endpoints xrt_calib_endpoints[] = {
+ {
+ .xse_names = (struct xrt_dev_ep_names[]) {
+ { .ep_name = XRT_MD_NODE_DDR_CALIB },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_driver xrt_calib_driver = {
+ .driver = {
+ .name = XRT_CALIB,
+ },
+ .subdev_id = XRT_SUBDEV_CALIB,
+ .endpoints = xrt_calib_endpoints,
+ .probe = xrt_calib_probe,
+ .remove = xrt_calib_remove,
+ .leaf_call = xrt_calib_leaf_call,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(calib);
--
2.27.0

2021-04-27 21:05:52

by Lizhi Hou

[permalink] [raw]
Subject: [PATCH V5 XRT Alveo 15/20] fpga: xrt: devctl xrt driver

Add devctl driver. devctl is a type of hardware function which has only
a few registers to read or write. These functions are discovered by
walking the firmware metadata. An xrt device node will be created for
them.
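
For context, other leaves would read such registers through the
XRT_DEVCTL_READ leaf call. The sketch below is illustrative only (the function
name is made up, and the usual uuid_t type is assumed); it reads the BLP ROM
UUID register bank defined in this patch:

/* Illustrative caller: read the BLP ROM UUID through the devctl leaf. */
static int foo_read_rom_uuid(struct xrt_device *xdev, uuid_t *uuid)
{
	struct xrt_devctl_rw rw = {
		.xdr_id = XRT_DEVCTL_ROM_UUID,
		.xdr_buf = uuid,
		.xdr_len = sizeof(*uuid),
		.xdr_offset = 0,
	};
	struct xrt_device *devctl_leaf;
	int ret;

	devctl_leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_DEVCTL, XRT_INVALID_DEVICE_INST);
	if (!devctl_leaf)
		return -ENODEV;

	ret = xleaf_call(devctl_leaf, XRT_DEVCTL_READ, &rw);
	xleaf_put_leaf(xdev, devctl_leaf);

	return ret;
}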

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/xleaf/devctl.h | 40 ++++++
drivers/fpga/xrt/lib/xleaf/devctl.c | 169 ++++++++++++++++++++++++
2 files changed, 209 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xleaf/devctl.h
create mode 100644 drivers/fpga/xrt/lib/xleaf/devctl.c

diff --git a/drivers/fpga/xrt/include/xleaf/devctl.h b/drivers/fpga/xrt/include/xleaf/devctl.h
new file mode 100644
index 000000000000..b97f3b6d9326
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf/devctl.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#ifndef _XRT_DEVCTL_H_
+#define _XRT_DEVCTL_H_
+
+#include "xleaf.h"
+
+/*
+ * DEVCTL driver leaf calls.
+ */
+enum xrt_devctl_leaf_cmd {
+ XRT_DEVCTL_READ = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+};
+
+enum xrt_devctl_id {
+ XRT_DEVCTL_ROM_UUID = 0,
+ XRT_DEVCTL_DDR_CALIB,
+ XRT_DEVCTL_GOLDEN_VER,
+ XRT_DEVCTL_MAX
+};
+
+struct xrt_devctl_rw {
+ u32 xdr_id;
+ void *xdr_buf;
+ u32 xdr_len;
+ u32 xdr_offset;
+};
+
+struct xrt_devctl_intf_uuid {
+ u32 uuid_num;
+ uuid_t *uuids;
+};
+
+#endif /* _XRT_DEVCTL_H_ */
diff --git a/drivers/fpga/xrt/lib/xleaf/devctl.c b/drivers/fpga/xrt/lib/xleaf/devctl.c
new file mode 100644
index 000000000000..fb2122be7e56
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/devctl.c
@@ -0,0 +1,169 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA devctl Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou<[email protected]>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/regmap.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/devctl.h"
+
+#define XRT_DEVCTL "xrt_devctl"
+
+struct xrt_name_id {
+ char *ep_name;
+ int id;
+};
+
+static struct xrt_name_id name_id[XRT_DEVCTL_MAX] = {
+ { XRT_MD_NODE_BLP_ROM, XRT_DEVCTL_ROM_UUID },
+ { XRT_MD_NODE_GOLDEN_VER, XRT_DEVCTL_GOLDEN_VER },
+};
+
+XRT_DEFINE_REGMAP_CONFIG(devctl_regmap_config);
+
+struct xrt_devctl {
+ struct xrt_device *xdev;
+ struct regmap *regmap[XRT_DEVCTL_MAX];
+ ulong sizes[XRT_DEVCTL_MAX];
+};
+
+static int xrt_devctl_name2id(struct xrt_devctl *devctl, const char *name)
+{
+ int i;
+
+ for (i = 0; i < XRT_DEVCTL_MAX && name_id[i].ep_name; i++) {
+ if (!strncmp(name_id[i].ep_name, name, strlen(name_id[i].ep_name) + 1))
+ return name_id[i].id;
+ }
+
+ return -EINVAL;
+}
+
+static int
+xrt_devctl_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
+{
+ struct xrt_devctl *devctl;
+ int ret = 0;
+
+ devctl = xrt_get_drvdata(xdev);
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ /* Does not handle any event. */
+ break;
+ case XRT_DEVCTL_READ: {
+ struct xrt_devctl_rw *rw_arg = arg;
+
+ if (rw_arg->xdr_len & 0x3) {
+ xrt_err(xdev, "invalid len %d", rw_arg->xdr_len);
+ return -EINVAL;
+ }
+
+ if (rw_arg->xdr_id >= XRT_DEVCTL_MAX) {
+ xrt_err(xdev, "invalid id %d", rw_arg->xdr_id);
+ return -EINVAL;
+ }
+
+ if (!devctl->regmap[rw_arg->xdr_id]) {
+ xrt_err(xdev, "io not found, id %d",
+ rw_arg->xdr_id);
+ return -EINVAL;
+ }
+
+ ret = regmap_bulk_read(devctl->regmap[rw_arg->xdr_id], rw_arg->xdr_offset,
+ rw_arg->xdr_buf,
+ rw_arg->xdr_len / devctl_regmap_config.reg_stride);
+ break;
+ }
+ default:
+ xrt_err(xdev, "unsupported cmd %d", cmd);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static int xrt_devctl_probe(struct xrt_device *xdev)
+{
+ struct xrt_devctl *devctl = NULL;
+ void __iomem *base = NULL;
+ struct resource *res;
+ int i, id, ret = 0;
+
+ devctl = devm_kzalloc(&xdev->dev, sizeof(*devctl), GFP_KERNEL);
+ if (!devctl)
+ return -ENOMEM;
+
+ devctl->xdev = xdev;
+ xrt_set_drvdata(xdev, devctl);
+
+ xrt_info(xdev, "probing...");
+ for (i = 0, res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
+ res;
+ res = xrt_get_resource(xdev, IORESOURCE_MEM, ++i)) {
+ struct regmap_config config = devctl_regmap_config;
+
+ id = xrt_devctl_name2id(devctl, res->name);
+ if (id < 0) {
+ xrt_err(xdev, "ep %s not found", res->name);
+ continue;
+ }
+ base = devm_ioremap_resource(&xdev->dev, res);
+ if (IS_ERR(base)) {
+ ret = PTR_ERR(base);
+ break;
+ }
+ config.max_register = res->end - res->start + 1;
+ devctl->regmap[id] = devm_regmap_init_mmio(&xdev->dev, base, &config);
+ if (IS_ERR(devctl->regmap[id])) {
+ xrt_err(xdev, "map base failed %pR", res);
+ ret = PTR_ERR(devctl->regmap[id]);
+ break;
+ }
+ devctl->sizes[id] = res->end - res->start + 1;
+ }
+
+ return ret;
+}
+
+static struct xrt_dev_endpoints xrt_devctl_endpoints[] = {
+ {
+ .xse_names = (struct xrt_dev_ep_names[]) {
+ /* add name if ep is in same partition */
+ { .ep_name = XRT_MD_NODE_BLP_ROM },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ {
+ .xse_names = (struct xrt_dev_ep_names[]) {
+ { .ep_name = XRT_MD_NODE_GOLDEN_VER },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ /* adding ep bundle generates devctl device instance */
+ { 0 },
+};
+
+static struct xrt_driver xrt_devctl_driver = {
+ .driver = {
+ .name = XRT_DEVCTL,
+ },
+ .subdev_id = XRT_SUBDEV_DEVCTL,
+ .endpoints = xrt_devctl_endpoints,
+ .probe = xrt_devctl_probe,
+ .leaf_call = xrt_devctl_leaf_call,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(devctl);
--
2.27.0

2021-04-27 21:06:16

by Lizhi Hou

[permalink] [raw]
Subject: [PATCH V5 XRT Alveo 17/20] fpga: xrt: clock frequency counter driver

Add clock frequency counter driver. The clock frequency counter is
a hardware function discovered by walking the xclbin metadata. An xrt
device node will be created for it. Other parts of the driver can read
the actual clock frequency through the clock frequency counter driver.
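
For context, a consumer (for example the clock driver when verifying a
programmed rate) could read a counter through the XRT_CLKFREQ_READ leaf call.
A rough sketch, with a made-up function name and the endpoint name supplied by
the caller:

/* Illustrative caller: read the raw counter value for one clock. */
static int foo_read_clk_freq(struct xrt_device *xdev, const char *counter_ep, u32 *freq)
{
	struct xrt_device *leaf;
	int ret;

	leaf = xleaf_get_leaf_by_epname(xdev, counter_ep);
	if (!leaf)
		return -ENODEV;

	ret = xleaf_call(leaf, XRT_CLKFREQ_READ, freq);
	xleaf_put_leaf(xdev, leaf);

	return ret;
}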

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/xleaf/clkfreq.h | 21 +++
drivers/fpga/xrt/lib/xleaf/clkfreq.c | 223 +++++++++++++++++++++++
2 files changed, 244 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xleaf/clkfreq.h
create mode 100644 drivers/fpga/xrt/lib/xleaf/clkfreq.c

diff --git a/drivers/fpga/xrt/include/xleaf/clkfreq.h b/drivers/fpga/xrt/include/xleaf/clkfreq.h
new file mode 100644
index 000000000000..005441d5df78
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf/clkfreq.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#ifndef _XRT_CLKFREQ_H_
+#define _XRT_CLKFREQ_H_
+
+#include "xleaf.h"
+
+/*
+ * CLKFREQ driver leaf calls.
+ */
+enum xrt_clkfreq_leaf_cmd {
+ XRT_CLKFREQ_READ = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+};
+
+#endif /* _XRT_CLKFREQ_H_ */
diff --git a/drivers/fpga/xrt/lib/xleaf/clkfreq.c b/drivers/fpga/xrt/lib/xleaf/clkfreq.c
new file mode 100644
index 000000000000..3d1f11152375
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/clkfreq.c
@@ -0,0 +1,223 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Clock Frequency Counter Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou<[email protected]>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/regmap.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/clkfreq.h"
+
+#define CLKFREQ_ERR(clkfreq, fmt, arg...) \
+ xrt_err((clkfreq)->xdev, fmt "\n", ##arg)
+#define CLKFREQ_WARN(clkfreq, fmt, arg...) \
+ xrt_warn((clkfreq)->xdev, fmt "\n", ##arg)
+#define CLKFREQ_INFO(clkfreq, fmt, arg...) \
+ xrt_info((clkfreq)->xdev, fmt "\n", ##arg)
+#define CLKFREQ_DBG(clkfreq, fmt, arg...) \
+ xrt_dbg((clkfreq)->xdev, fmt "\n", ##arg)
+
+#define XRT_CLKFREQ "xrt_clkfreq"
+
+#define XRT_CLKFREQ_CONTROL_STATUS_MASK 0xffff
+
+#define XRT_CLKFREQ_CONTROL_START 0x1
+#define XRT_CLKFREQ_CONTROL_DONE 0x2
+#define XRT_CLKFREQ_V5_CLK0_ENABLED 0x10000
+
+#define XRT_CLKFREQ_CONTROL_REG 0
+#define XRT_CLKFREQ_COUNT_REG 0x8
+#define XRT_CLKFREQ_V5_COUNT_REG 0x10
+
+#define XRT_CLKFREQ_READ_RETRIES 10
+
+XRT_DEFINE_REGMAP_CONFIG(clkfreq_regmap_config);
+
+struct clkfreq {
+ struct xrt_device *xdev;
+ struct regmap *regmap;
+ const char *clkfreq_ep_name;
+ struct mutex clkfreq_lock; /* clock counter dev lock */
+};
+
+static int clkfreq_read(struct clkfreq *clkfreq, u32 *freq)
+{
+ int times = XRT_CLKFREQ_READ_RETRIES;
+ u32 status;
+ int ret;
+
+ *freq = 0;
+ mutex_lock(&clkfreq->clkfreq_lock);
+ ret = regmap_write(clkfreq->regmap, XRT_CLKFREQ_CONTROL_REG, XRT_CLKFREQ_CONTROL_START);
+ if (ret) {
+ CLKFREQ_INFO(clkfreq, "write start to control reg failed %d", ret);
+ goto failed;
+ }
+ while (times != 0) {
+ ret = regmap_read(clkfreq->regmap, XRT_CLKFREQ_CONTROL_REG, &status);
+ if (ret) {
+ CLKFREQ_INFO(clkfreq, "read control reg failed %d", ret);
+ goto failed;
+ }
+ if ((status & XRT_CLKFREQ_CONTROL_STATUS_MASK) == XRT_CLKFREQ_CONTROL_DONE)
+ break;
+ mdelay(1);
+ times--;
+ }
+
+ if (!times) {
+ ret = -ETIMEDOUT;
+ goto failed;
+ }
+
+ if (status & XRT_CLKFREQ_V5_CLK0_ENABLED)
+ ret = regmap_read(clkfreq->regmap, XRT_CLKFREQ_V5_COUNT_REG, freq);
+ else
+ ret = regmap_read(clkfreq->regmap, XRT_CLKFREQ_COUNT_REG, freq);
+ if (ret) {
+ CLKFREQ_INFO(clkfreq, "read count failed %d", ret);
+ goto failed;
+ }
+
+ mutex_unlock(&clkfreq->clkfreq_lock);
+
+ return 0;
+
+failed:
+ mutex_unlock(&clkfreq->clkfreq_lock);
+
+ return ret;
+}
+
+static ssize_t freq_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct clkfreq *clkfreq = xrt_get_drvdata(to_xrt_dev(dev));
+ ssize_t count;
+ u32 freq;
+
+ if (clkfreq_read(clkfreq, &freq))
+ return -EINVAL;
+
+ count = snprintf(buf, 64, "%u\n", freq);
+
+ return count;
+}
+static DEVICE_ATTR_RO(freq);
+
+static struct attribute *clkfreq_attrs[] = {
+ &dev_attr_freq.attr,
+ NULL,
+};
+
+static struct attribute_group clkfreq_attr_group = {
+ .attrs = clkfreq_attrs,
+};
+
+static int
+xrt_clkfreq_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
+{
+ struct clkfreq *clkfreq;
+ int ret = 0;
+
+ clkfreq = xrt_get_drvdata(xdev);
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ /* Does not handle any event. */
+ break;
+ case XRT_CLKFREQ_READ:
+ ret = clkfreq_read(clkfreq, arg);
+ break;
+ default:
+ xrt_err(xdev, "unsupported cmd %d", cmd);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static void clkfreq_remove(struct xrt_device *xdev)
+{
+ sysfs_remove_group(&xdev->dev.kobj, &clkfreq_attr_group);
+}
+
+static int clkfreq_probe(struct xrt_device *xdev)
+{
+ struct clkfreq *clkfreq = NULL;
+ void __iomem *base = NULL;
+ struct resource *res;
+ int ret;
+
+ clkfreq = devm_kzalloc(&xdev->dev, sizeof(*clkfreq), GFP_KERNEL);
+ if (!clkfreq)
+ return -ENOMEM;
+
+ xrt_set_drvdata(xdev, clkfreq);
+ clkfreq->xdev = xdev;
+ mutex_init(&clkfreq->clkfreq_lock);
+
+ res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ ret = -EINVAL;
+ goto failed;
+ }
+ base = devm_ioremap_resource(&xdev->dev, res);
+ if (IS_ERR(base)) {
+ ret = PTR_ERR(base);
+ goto failed;
+ }
+
+ clkfreq->regmap = devm_regmap_init_mmio(&xdev->dev, base, &clkfreq_regmap_config);
+ if (IS_ERR(clkfreq->regmap)) {
+ CLKFREQ_ERR(clkfreq, "regmap %pR failed", res);
+ ret = PTR_ERR(clkfreq->regmap);
+ goto failed;
+ }
+ clkfreq->clkfreq_ep_name = res->name;
+
+ ret = sysfs_create_group(&xdev->dev.kobj, &clkfreq_attr_group);
+ if (ret) {
+ CLKFREQ_ERR(clkfreq, "create clkfreq attrs failed: %d", ret);
+ goto failed;
+ }
+
+ CLKFREQ_INFO(clkfreq, "successfully initialized clkfreq subdev");
+
+ return 0;
+
+failed:
+ return ret;
+}
+
+static struct xrt_dev_endpoints xrt_clkfreq_endpoints[] = {
+ {
+ .xse_names = (struct xrt_dev_ep_names[]) {
+ { .compat = XRT_MD_COMPAT_CLKFREQ },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_driver xrt_clkfreq_driver = {
+ .driver = {
+ .name = XRT_CLKFREQ,
+ },
+ .subdev_id = XRT_SUBDEV_CLKFREQ,
+ .endpoints = xrt_clkfreq_endpoints,
+ .probe = clkfreq_probe,
+ .remove = clkfreq_remove,
+ .leaf_call = xrt_clkfreq_leaf_call,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(clkfreq);
--
2.27.0

2021-04-27 21:06:28

by Lizhi Hou

[permalink] [raw]
Subject: [PATCH V5 XRT Alveo 16/20] fpga: xrt: clock driver

Add the clock driver. The clock is a hardware function discovered by
walking xclbin metadata. An xrt device node will be created for it.
Other parts of the driver configure the clock through the clock driver.
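
As an illustration only (not part of this patch), a minimal sketch of
how a caller could program and verify a clock through the leaf calls
added here; set_and_check_clock is a hypothetical helper and clock_leaf
is assumed to have been looked up by the caller, e.g. with
xleaf_get_leaf_by_epname():

    #include "xleaf.h"
    #include "xleaf/clock.h"

    static int set_and_check_clock(struct xrt_device *clock_leaf, u16 mhz)
    {
            struct xrt_clock_get get = { 0 };
            int ret;

            /* XRT_CLOCK_SET takes the target frequency in MHz as the argument */
            ret = xleaf_call(clock_leaf, XRT_CLOCK_SET, (void *)(uintptr_t)mhz);
            if (ret)
                    return ret;

            /* XRT_CLOCK_GET reports the programmed frequency and the counter reading */
            ret = xleaf_call(clock_leaf, XRT_CLOCK_GET, &get);
            if (ret)
                    return ret;

            /* XRT_CLOCK_VERIFY compares the two and fails if they diverge too much */
            return xleaf_call(clock_leaf, XRT_CLOCK_VERIFY, NULL);
    }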

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/xleaf/clock.h | 29 ++
drivers/fpga/xrt/lib/xleaf/clock.c | 652 +++++++++++++++++++++++++
2 files changed, 681 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xleaf/clock.h
create mode 100644 drivers/fpga/xrt/lib/xleaf/clock.c

diff --git a/drivers/fpga/xrt/include/xleaf/clock.h b/drivers/fpga/xrt/include/xleaf/clock.h
new file mode 100644
index 000000000000..6858473fd096
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf/clock.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#ifndef _XRT_CLOCK_H_
+#define _XRT_CLOCK_H_
+
+#include "xleaf.h"
+#include <linux/xrt/xclbin.h>
+
+/*
+ * CLOCK driver leaf calls.
+ */
+enum xrt_clock_leaf_cmd {
+ XRT_CLOCK_SET = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+ XRT_CLOCK_GET,
+ XRT_CLOCK_VERIFY,
+};
+
+struct xrt_clock_get {
+ u16 freq;
+ u32 freq_cnter;
+};
+
+#endif /* _XRT_CLOCK_H_ */
diff --git a/drivers/fpga/xrt/lib/xleaf/clock.c b/drivers/fpga/xrt/lib/xleaf/clock.c
new file mode 100644
index 000000000000..7303be55c07a
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/clock.c
@@ -0,0 +1,652 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Clock Wizard Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou<[email protected]>
+ * Sonal Santan <[email protected]>
+ * David Zhang <[email protected]>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/regmap.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/clock.h"
+#include "xleaf/clkfreq.h"
+
+/* XRT_CLOCK_MAX_NUM_CLOCKS should be a concept from XCLBIN_ in the future */
+#define XRT_CLOCK_MAX_NUM_CLOCKS 4
+#define XRT_CLOCK_STATUS_MASK 0xffff
+#define XRT_CLOCK_STATUS_MEASURE_START 0x1
+#define XRT_CLOCK_STATUS_MEASURE_DONE 0x2
+
+#define XRT_CLOCK_STATUS_REG 0x4
+#define XRT_CLOCK_CLKFBOUT_REG 0x200
+#define XRT_CLOCK_CLKOUT0_REG 0x208
+#define XRT_CLOCK_LOAD_SADDR_SEN_REG 0x25C
+#define XRT_CLOCK_DEFAULT_EXPIRE_SECS 1
+
+#define CLOCK_ERR(clock, fmt, arg...) \
+ xrt_err((clock)->xdev, fmt "\n", ##arg)
+#define CLOCK_WARN(clock, fmt, arg...) \
+ xrt_warn((clock)->xdev, fmt "\n", ##arg)
+#define CLOCK_INFO(clock, fmt, arg...) \
+ xrt_info((clock)->xdev, fmt "\n", ##arg)
+#define CLOCK_DBG(clock, fmt, arg...) \
+ xrt_dbg((clock)->xdev, fmt "\n", ##arg)
+
+#define XRT_CLOCK "xrt_clock"
+
+XRT_DEFINE_REGMAP_CONFIG(clock_regmap_config);
+
+struct clock {
+ struct xrt_device *xdev;
+ struct regmap *regmap;
+ struct mutex clock_lock; /* clock dev lock */
+
+ const char *clock_ep_name;
+};
+
+/*
+ * Precomputed table with config0 and config2 register values together with
+ * target frequency. The steps are approximately 5 MHz apart. Table is
+ * generated by platform creation tool.
+ */
+static const struct xmgnt_ocl_clockwiz {
+ /* target frequency */
+ u16 ocl;
+ /* config0 register */
+ u32 config0;
+ /* config2 register */
+ u32 config2;
+} frequency_table[] = {
+ /*1275.000*/ { 10, 0x02EE0C01, 0x0001F47F },
+ /*1575.000*/ { 15, 0x02EE0F01, 0x00000069},
+ /*1600.000*/ { 20, 0x00001001, 0x00000050},
+ /*1600.000*/ { 25, 0x00001001, 0x00000040},
+ /*1575.000*/ { 30, 0x02EE0F01, 0x0001F434},
+ /*1575.000*/ { 35, 0x02EE0F01, 0x0000002D},
+ /*1600.000*/ { 40, 0x00001001, 0x00000028},
+ /*1575.000*/ { 45, 0x02EE0F01, 0x00000023},
+ /*1600.000*/ { 50, 0x00001001, 0x00000020},
+ /*1512.500*/ { 55, 0x007D0F01, 0x0001F41B},
+ /*1575.000*/ { 60, 0x02EE0F01, 0x0000FA1A},
+ /*1462.500*/ { 65, 0x02710E01, 0x0001F416},
+ /*1575.000*/ { 70, 0x02EE0F01, 0x0001F416},
+ /*1575.000*/ { 75, 0x02EE0F01, 0x00000015},
+ /*1600.000*/ { 80, 0x00001001, 0x00000014},
+ /*1487.500*/ { 85, 0x036B0E01, 0x0001F411},
+ /*1575.000*/ { 90, 0x02EE0F01, 0x0001F411},
+ /*1425.000*/ { 95, 0x00FA0E01, 0x0000000F},
+ /*1600.000*/ { 100, 0x00001001, 0x00000010},
+ /*1575.000*/ { 105, 0x02EE0F01, 0x0000000F},
+ /*1512.500*/ { 110, 0x007D0F01, 0x0002EE0D},
+ /*1437.500*/ { 115, 0x01770E01, 0x0001F40C},
+ /*1575.000*/ { 120, 0x02EE0F01, 0x00007D0D},
+ /*1562.500*/ { 125, 0x02710F01, 0x0001F40C},
+ /*1462.500*/ { 130, 0x02710E01, 0x0000FA0B},
+ /*1350.000*/ { 135, 0x01F40D01, 0x0000000A},
+ /*1575.000*/ { 140, 0x02EE0F01, 0x0000FA0B},
+ /*1450.000*/ { 145, 0x01F40E01, 0x0000000A},
+ /*1575.000*/ { 150, 0x02EE0F01, 0x0001F40A},
+ /*1550.000*/ { 155, 0x01F40F01, 0x0000000A},
+ /*1600.000*/ { 160, 0x00001001, 0x0000000A},
+ /*1237.500*/ { 165, 0x01770C01, 0x0001F407},
+ /*1487.500*/ { 170, 0x036B0E01, 0x0002EE08},
+ /*1575.000*/ { 175, 0x02EE0F01, 0x00000009},
+ /*1575.000*/ { 180, 0x02EE0F01, 0x0002EE08},
+ /*1387.500*/ { 185, 0x036B0D01, 0x0001F407},
+ /*1425.000*/ { 190, 0x00FA0E01, 0x0001F407},
+ /*1462.500*/ { 195, 0x02710E01, 0x0001F407},
+ /*1600.000*/ { 200, 0x00001001, 0x00000008},
+ /*1537.500*/ { 205, 0x01770F01, 0x0001F407},
+ /*1575.000*/ { 210, 0x02EE0F01, 0x0001F407},
+ /*1075.000*/ { 215, 0x02EE0A01, 0x00000005},
+ /*1512.500*/ { 220, 0x007D0F01, 0x00036B06},
+ /*1575.000*/ { 225, 0x02EE0F01, 0x00000007},
+ /*1437.500*/ { 230, 0x01770E01, 0x0000FA06},
+ /*1175.000*/ { 235, 0x02EE0B01, 0x00000005},
+ /*1500.000*/ { 240, 0x00000F01, 0x0000FA06},
+ /*1225.000*/ { 245, 0x00FA0C01, 0x00000005},
+ /*1562.500*/ { 250, 0x02710F01, 0x0000FA06},
+ /*1275.000*/ { 255, 0x02EE0C01, 0x00000005},
+ /*1462.500*/ { 260, 0x02710E01, 0x00027105},
+ /*1325.000*/ { 265, 0x00FA0D01, 0x00000005},
+ /*1350.000*/ { 270, 0x01F40D01, 0x00000005},
+ /*1512.500*/ { 275, 0x007D0F01, 0x0001F405},
+ /*1575.000*/ { 280, 0x02EE0F01, 0x00027105},
+ /*1425.000*/ { 285, 0x00FA0E01, 0x00000005},
+ /*1450.000*/ { 290, 0x01F40E01, 0x00000005},
+ /*1475.000*/ { 295, 0x02EE0E01, 0x00000005},
+ /*1575.000*/ { 300, 0x02EE0F01, 0x0000FA05},
+ /*1525.000*/ { 305, 0x00FA0F01, 0x00000005},
+ /*1550.000*/ { 310, 0x01F40F01, 0x00000005},
+ /*1575.000*/ { 315, 0x02EE0F01, 0x00000005},
+ /*1600.000*/ { 320, 0x00001001, 0x00000005},
+ /*1462.500*/ { 325, 0x02710E01, 0x0001F404},
+ /*1237.500*/ { 330, 0x01770C01, 0x0002EE03},
+ /* 837.500*/ { 335, 0x01770801, 0x0001F402},
+ /*1487.500*/ { 340, 0x036B0E01, 0x00017704},
+ /* 862.500*/ { 345, 0x02710801, 0x0001F402},
+ /*1575.000*/ { 350, 0x02EE0F01, 0x0001F404},
+ /* 887.500*/ { 355, 0x036B0801, 0x0001F402},
+ /*1575.000*/ { 360, 0x02EE0F01, 0x00017704},
+ /* 912.500*/ { 365, 0x007D0901, 0x0001F402},
+ /*1387.500*/ { 370, 0x036B0D01, 0x0002EE03},
+ /*1500.000*/ { 375, 0x00000F01, 0x00000004},
+ /*1425.000*/ { 380, 0x00FA0E01, 0x0002EE03},
+ /* 962.500*/ { 385, 0x02710901, 0x0001F402},
+ /*1462.500*/ { 390, 0x02710E01, 0x0002EE03},
+ /* 987.500*/ { 395, 0x036B0901, 0x0001F402},
+ /*1600.000*/ { 400, 0x00001001, 0x00000004},
+ /*1012.500*/ { 405, 0x007D0A01, 0x0001F402},
+ /*1537.500*/ { 410, 0x01770F01, 0x0002EE03},
+ /*1037.500*/ { 415, 0x01770A01, 0x0001F402},
+ /*1575.000*/ { 420, 0x02EE0F01, 0x0002EE03},
+ /*1487.500*/ { 425, 0x036B0E01, 0x0001F403},
+ /*1075.000*/ { 430, 0x02EE0A01, 0x0001F402},
+ /*1087.500*/ { 435, 0x036B0A01, 0x0001F402},
+ /*1375.000*/ { 440, 0x02EE0D01, 0x00007D03},
+ /*1112.500*/ { 445, 0x007D0B01, 0x0001F402},
+ /*1575.000*/ { 450, 0x02EE0F01, 0x0001F403},
+ /*1137.500*/ { 455, 0x01770B01, 0x0001F402},
+ /*1437.500*/ { 460, 0x01770E01, 0x00007D03},
+ /*1162.500*/ { 465, 0x02710B01, 0x0001F402},
+ /*1175.000*/ { 470, 0x02EE0B01, 0x0001F402},
+ /*1425.000*/ { 475, 0x00FA0E01, 0x00000003},
+ /*1500.000*/ { 480, 0x00000F01, 0x00007D03},
+ /*1212.500*/ { 485, 0x007D0C01, 0x0001F402},
+ /*1225.000*/ { 490, 0x00FA0C01, 0x0001F402},
+ /*1237.500*/ { 495, 0x01770C01, 0x0001F402},
+ /*1562.500*/ { 500, 0x02710F01, 0x00007D03},
+ /*1262.500*/ { 505, 0x02710C01, 0x0001F402},
+ /*1275.000*/ { 510, 0x02EE0C01, 0x0001F402},
+ /*1287.500*/ { 515, 0x036B0C01, 0x0001F402},
+ /*1300.000*/ { 520, 0x00000D01, 0x0001F402},
+ /*1575.000*/ { 525, 0x02EE0F01, 0x00000003},
+ /*1325.000*/ { 530, 0x00FA0D01, 0x0001F402},
+ /*1337.500*/ { 535, 0x01770D01, 0x0001F402},
+ /*1350.000*/ { 540, 0x01F40D01, 0x0001F402},
+ /*1362.500*/ { 545, 0x02710D01, 0x0001F402},
+ /*1512.500*/ { 550, 0x007D0F01, 0x0002EE02},
+ /*1387.500*/ { 555, 0x036B0D01, 0x0001F402},
+ /*1400.000*/ { 560, 0x00000E01, 0x0001F402},
+ /*1412.500*/ { 565, 0x007D0E01, 0x0001F402},
+ /*1425.000*/ { 570, 0x00FA0E01, 0x0001F402},
+ /*1437.500*/ { 575, 0x01770E01, 0x0001F402},
+ /*1450.000*/ { 580, 0x01F40E01, 0x0001F402},
+ /*1462.500*/ { 585, 0x02710E01, 0x0001F402},
+ /*1475.000*/ { 590, 0x02EE0E01, 0x0001F402},
+ /*1487.500*/ { 595, 0x036B0E01, 0x0001F402},
+ /*1575.000*/ { 600, 0x02EE0F01, 0x00027102},
+ /*1512.500*/ { 605, 0x007D0F01, 0x0001F402},
+ /*1525.000*/ { 610, 0x00FA0F01, 0x0001F402},
+ /*1537.500*/ { 615, 0x01770F01, 0x0001F402},
+ /*1550.000*/ { 620, 0x01F40F01, 0x0001F402},
+ /*1562.500*/ { 625, 0x02710F01, 0x0001F402},
+ /*1575.000*/ { 630, 0x02EE0F01, 0x0001F402},
+ /*1587.500*/ { 635, 0x036B0F01, 0x0001F402},
+ /*1600.000*/ { 640, 0x00001001, 0x0001F402},
+ /*1290.000*/ { 645, 0x01F44005, 0x00000002},
+ /*1462.500*/ { 650, 0x02710E01, 0x0000FA02}
+};
+
+static u32 find_matching_freq_config(unsigned short freq,
+ const struct xmgnt_ocl_clockwiz *table,
+ int size)
+{
+ u32 end = size - 1;
+ u32 start = 0;
+ u32 idx;
+
+ if (freq < table[0].ocl)
+ return 0;
+
+ if (freq > table[size - 1].ocl)
+ return size - 1;
+
+ while (start < end) {
+ idx = (start + end) / 2;
+ if (freq == table[idx].ocl)
+ break;
+ if (freq < table[idx].ocl)
+ end = idx;
+ else
+ start = idx + 1;
+ }
+ if (freq < table[idx].ocl)
+ idx--;
+
+ return idx;
+}
+
+static u32 find_matching_freq(u32 freq,
+ const struct xmgnt_ocl_clockwiz *freq_table,
+ int freq_table_size)
+{
+ int idx = find_matching_freq_config(freq, freq_table, freq_table_size);
+
+ return freq_table[idx].ocl;
+}
+
+static inline int clock_wiz_busy(struct clock *clock, int cycle, int interval)
+{
+ u32 val = 0;
+ int count;
+ int ret;
+
+ for (count = 0; count < cycle; count++) {
+ ret = regmap_read(clock->regmap, XRT_CLOCK_STATUS_REG, &val);
+ if (ret) {
+ CLOCK_ERR(clock, "read status failed %d", ret);
+ return ret;
+ }
+ if (val == 1)
+ break;
+
+ mdelay(interval);
+ }
+ if (val != 1) {
+ CLOCK_ERR(clock, "clockwiz is (%u) busy after %d ms",
+ val, cycle * interval);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
+static int get_freq(struct clock *clock, u16 *freq)
+{
+ u32 mul_frac0 = 0;
+ u32 div_frac1 = 0;
+ u32 mul0, div0;
+ u64 input;
+ u32 div1;
+ u32 val;
+ int ret;
+
+ WARN_ON(!mutex_is_locked(&clock->clock_lock));
+
+ ret = regmap_read(clock->regmap, XRT_CLOCK_STATUS_REG, &val);
+ if (ret) {
+ CLOCK_ERR(clock, "read status failed %d", ret);
+ return ret;
+ }
+
+ if ((val & 0x1) == 0) {
+ CLOCK_ERR(clock, "clockwiz is busy %x", val);
+ *freq = 0;
+ return -EBUSY;
+ }
+
+ ret = regmap_read(clock->regmap, XRT_CLOCK_CLKFBOUT_REG, &val);
+ if (ret) {
+ CLOCK_ERR(clock, "read clkfbout failed %d", ret);
+ return ret;
+ }
+
+ div0 = val & 0xff;
+ mul0 = (val & 0xff00) >> 8;
+ if (val & BIT(26)) {
+ mul_frac0 = val >> 16;
+ mul_frac0 &= 0x3ff;
+ }
+
+ /*
+ * Multiply both numerator (mul0) and the denominator (div0) with 1000
+ * to account for fractional portion of multiplier
+ */
+ mul0 *= 1000;
+ mul0 += mul_frac0;
+ div0 *= 1000;
+
+ ret = regmap_read(clock->regmap, XRT_CLOCK_CLKOUT0_REG, &val);
+ if (ret) {
+ CLOCK_ERR(clock, "read clkout0 failed %d", ret);
+ return ret;
+ }
+
+ div1 = val & 0xff;
+ if (val & BIT(18)) {
+ div_frac1 = val >> 8;
+ div_frac1 &= 0x3ff;
+ }
+
+ /*
+ * Multiply both numerator (mul0) and the denominator (div1) with
+ * 1000 to account for fractional portion of divider
+ */
+
+ div1 *= 1000;
+ div1 += div_frac1;
+ div0 *= div1;
+ mul0 *= 1000;
+ if (div0 == 0) {
+ CLOCK_ERR(clock, "clockwiz 0 divider");
+ return 0;
+ }
+
+ input = mul0 * 100;
+ do_div(input, div0);
+ *freq = (u16)input;
+
+ return 0;
+}
+
+static int set_freq(struct clock *clock, u16 freq)
+{
+ int err = 0;
+ u32 idx = 0;
+ u32 val = 0;
+ u32 config;
+
+ mutex_lock(&clock->clock_lock);
+ idx = find_matching_freq_config(freq, frequency_table,
+ ARRAY_SIZE(frequency_table));
+
+ CLOCK_INFO(clock, "New: %d MHz", freq);
+ err = clock_wiz_busy(clock, 20, 50);
+ if (err)
+ goto fail;
+
+ config = frequency_table[idx].config0;
+ err = regmap_write(clock->regmap, XRT_CLOCK_CLKFBOUT_REG, config);
+ if (err) {
+ CLOCK_ERR(clock, "write clkfbout failed %d", err);
+ goto fail;
+ }
+
+ config = frequency_table[idx].config2;
+ err = regmap_write(clock->regmap, XRT_CLOCK_CLKOUT0_REG, config);
+ if (err) {
+ CLOCK_ERR(clock, "write clkout0 failed %d", err);
+ goto fail;
+ }
+
+ mdelay(10);
+ err = regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 7);
+ if (err) {
+ CLOCK_ERR(clock, "write load_saddr_sen failed %d", err);
+ goto fail;
+ }
+
+ mdelay(1);
+ err = regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 2);
+ if (err) {
+ CLOCK_ERR(clock, "write saddr failed %d", err);
+ goto fail;
+ }
+
+ CLOCK_INFO(clock, "clockwiz waiting for locked signal");
+
+ err = clock_wiz_busy(clock, 100, 100);
+ if (err) {
+ CLOCK_ERR(clock, "clockwiz MMCM/PLL did not lock");
+ /* restore */
+ regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 4);
+ mdelay(10);
+ regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 0);
+ goto fail;
+ }
+ regmap_read(clock->regmap, XRT_CLOCK_CLKFBOUT_REG, &val);
+ CLOCK_INFO(clock, "clockwiz CONFIG(0) 0x%x", val);
+ regmap_read(clock->regmap, XRT_CLOCK_CLKOUT0_REG, &val);
+ CLOCK_INFO(clock, "clockwiz CONFIG(2) 0x%x", val);
+
+fail:
+ mutex_unlock(&clock->clock_lock);
+ return err;
+}
+
+static int get_freq_counter(struct clock *clock, u32 *freq)
+{
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(clock->xdev);
+ struct xrt_device *xdev = clock->xdev;
+ struct xrt_device *counter_leaf;
+ const void *counter;
+ int err;
+
+ WARN_ON(!mutex_is_locked(&clock->clock_lock));
+
+ err = xrt_md_get_prop(DEV(xdev), pdata->xsp_dtb, clock->clock_ep_name,
+ NULL, XRT_MD_PROP_CLK_CNT, &counter, NULL);
+ if (err) {
+ xrt_err(xdev, "no counter specified");
+ return err;
+ }
+
+ counter_leaf = xleaf_get_leaf_by_epname(xdev, counter);
+ if (!counter_leaf) {
+ xrt_err(xdev, "can't find counter");
+ return -ENOENT;
+ }
+
+ err = xleaf_call(counter_leaf, XRT_CLKFREQ_READ, freq);
+ if (err)
+ xrt_err(xdev, "can't read counter");
+ xleaf_put_leaf(clock->xdev, counter_leaf);
+
+ return err;
+}
+
+static int clock_get_freq(struct clock *clock, u16 *freq, u32 *freq_cnter)
+{
+ int err = 0;
+
+ mutex_lock(&clock->clock_lock);
+
+ if (err == 0 && freq)
+ err = get_freq(clock, freq);
+
+ if (err == 0 && freq_cnter)
+ err = get_freq_counter(clock, freq_cnter);
+
+ mutex_unlock(&clock->clock_lock);
+ return err;
+}
+
+static int clock_verify_freq(struct clock *clock)
+{
+ u32 lookup_freq, clock_freq_counter, request_in_khz, tolerance;
+ int err = 0;
+ u16 freq;
+
+ mutex_lock(&clock->clock_lock);
+
+ err = get_freq(clock, &freq);
+ if (err) {
+ xrt_err(clock->xdev, "get freq failed, %d", err);
+ goto end;
+ }
+
+ err = get_freq_counter(clock, &clock_freq_counter);
+ if (err) {
+ xrt_err(clock->xdev, "get freq counter failed, %d", err);
+ goto end;
+ }
+
+ lookup_freq = find_matching_freq(freq, frequency_table,
+ ARRAY_SIZE(frequency_table));
+ request_in_khz = lookup_freq * 1000;
+ tolerance = lookup_freq * 50;
+ if (tolerance < abs(clock_freq_counter - request_in_khz)) {
+ CLOCK_ERR(clock,
+ "set clock(%s) failed, request %ukhz, actual %dkhz",
+ clock->clock_ep_name, request_in_khz, clock_freq_counter);
+ err = -EDOM;
+ } else {
+ CLOCK_INFO(clock, "verified clock (%s)", clock->clock_ep_name);
+ }
+
+end:
+ mutex_unlock(&clock->clock_lock);
+ return err;
+}
+
+static int clock_init(struct clock *clock)
+{
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(clock->xdev);
+ const u16 *freq;
+ int err = 0;
+
+ err = xrt_md_get_prop(DEV(clock->xdev), pdata->xsp_dtb,
+ clock->clock_ep_name, NULL, XRT_MD_PROP_CLK_FREQ,
+ (const void **)&freq, NULL);
+ if (err) {
+ xrt_info(clock->xdev, "no default freq");
+ return 0;
+ }
+
+ err = set_freq(clock, be16_to_cpu(*freq));
+
+ return err;
+}
+
+static ssize_t freq_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct clock *clock = xrt_get_drvdata(to_xrt_dev(dev));
+ ssize_t count;
+ u16 freq = 0;
+
+ count = clock_get_freq(clock, &freq, NULL);
+ if (count < 0)
+ return count;
+
+ count = snprintf(buf, 64, "%u\n", freq);
+
+ return count;
+}
+static DEVICE_ATTR_RO(freq);
+
+static struct attribute *clock_attrs[] = {
+ &dev_attr_freq.attr,
+ NULL,
+};
+
+static struct attribute_group clock_attr_group = {
+ .attrs = clock_attrs,
+};
+
+static int
+xrt_clock_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
+{
+ struct clock *clock;
+ int ret = 0;
+
+ clock = xrt_get_drvdata(xdev);
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ /* Does not handle any event. */
+ break;
+ case XRT_CLOCK_SET: {
+ u16 freq = (u16)(uintptr_t)arg;
+
+ ret = set_freq(clock, freq);
+ break;
+ }
+ case XRT_CLOCK_VERIFY:
+ ret = clock_verify_freq(clock);
+ break;
+ case XRT_CLOCK_GET: {
+ struct xrt_clock_get *get =
+ (struct xrt_clock_get *)arg;
+
+ ret = clock_get_freq(clock, &get->freq, &get->freq_cnter);
+ break;
+ }
+ default:
+ xrt_err(xdev, "unsupported cmd %d", cmd);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static void clock_remove(struct xrt_device *xdev)
+{
+ sysfs_remove_group(&xdev->dev.kobj, &clock_attr_group);
+}
+
+static int clock_probe(struct xrt_device *xdev)
+{
+ struct clock *clock = NULL;
+ void __iomem *base = NULL;
+ struct resource *res;
+ int ret;
+
+ clock = devm_kzalloc(&xdev->dev, sizeof(*clock), GFP_KERNEL);
+ if (!clock)
+ return -ENOMEM;
+
+ xrt_set_drvdata(xdev, clock);
+ clock->xdev = xdev;
+ mutex_init(&clock->clock_lock);
+
+ res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ ret = -EINVAL;
+ goto failed;
+ }
+
+ base = devm_ioremap_resource(&xdev->dev, res);
+ if (IS_ERR(base)) {
+ ret = PTR_ERR(base);
+ goto failed;
+ }
+
+ clock->regmap = devm_regmap_init_mmio(&xdev->dev, base, &clock_regmap_config);
+ if (IS_ERR(clock->regmap)) {
+ CLOCK_ERR(clock, "regmap %pR failed", res);
+ ret = PTR_ERR(clock->regmap);
+ goto failed;
+ }
+ clock->clock_ep_name = res->name;
+
+ ret = clock_init(clock);
+ if (ret)
+ goto failed;
+
+ ret = sysfs_create_group(&xdev->dev.kobj, &clock_attr_group);
+ if (ret) {
+ CLOCK_ERR(clock, "create clock attrs failed: %d", ret);
+ goto failed;
+ }
+
+ CLOCK_INFO(clock, "successfully initialized Clock subdev");
+
+ return 0;
+
+failed:
+ return ret;
+}
+
+static struct xrt_dev_endpoints xrt_clock_endpoints[] = {
+ {
+ .xse_names = (struct xrt_dev_ep_names[]) {
+ { .compat = "clkwiz" },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_driver xrt_clock_driver = {
+ .driver = {
+ .name = XRT_CLOCK,
+ },
+ .subdev_id = XRT_SUBDEV_CLOCK,
+ .endpoints = xrt_clock_endpoints,
+ .probe = clock_probe,
+ .remove = clock_remove,
+ .leaf_call = xrt_clock_leaf_call,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(clock);
--
2.27.0

2021-04-27 21:06:28

by Lizhi Hou

[permalink] [raw]
Subject: [PATCH V5 XRT Alveo 19/20] fpga: xrt: partition isolation driver

Add the partition isolation xrt driver. Partition isolation is
a hardware function discovered by walking firmware metadata.
An xrt device node will be created for it. The partition isolation
function isolates the different FPGA regions from each other.
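
As an illustration only (not part of this patch), a minimal sketch of
the intended call sequence around reprogramming a region;
reprogram_region is a hypothetical helper and gate_leaf is assumed to be
the ULP axigate leaf already looked up by the caller:

    #include "xleaf.h"
    #include "xleaf/axigate.h"

    static int reprogram_region(struct xrt_device *gate_leaf)
    {
            int ret;

            /* isolate the region before it is touched */
            ret = xleaf_call(gate_leaf, XRT_AXIGATE_CLOSE, NULL);
            if (ret)
                    return ret;

            /* ... download the partial bitstream for the region here ... */

            /* remove the isolation again */
            return xleaf_call(gate_leaf, XRT_AXIGATE_OPEN, NULL);
    }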

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/xleaf/axigate.h | 23 ++
drivers/fpga/xrt/lib/xleaf/axigate.c | 325 +++++++++++++++++++++++
2 files changed, 348 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xleaf/axigate.h
create mode 100644 drivers/fpga/xrt/lib/xleaf/axigate.c

diff --git a/drivers/fpga/xrt/include/xleaf/axigate.h b/drivers/fpga/xrt/include/xleaf/axigate.h
new file mode 100644
index 000000000000..58f32c76dca1
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf/axigate.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#ifndef _XRT_AXIGATE_H_
+#define _XRT_AXIGATE_H_
+
+#include "xleaf.h"
+#include "metadata.h"
+
+/*
+ * AXIGATE driver leaf calls.
+ */
+enum xrt_axigate_leaf_cmd {
+ XRT_AXIGATE_CLOSE = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+ XRT_AXIGATE_OPEN,
+};
+
+#endif /* _XRT_AXIGATE_H_ */
diff --git a/drivers/fpga/xrt/lib/xleaf/axigate.c b/drivers/fpga/xrt/lib/xleaf/axigate.c
new file mode 100644
index 000000000000..493707b782e4
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/axigate.c
@@ -0,0 +1,325 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA AXI Gate Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou<[email protected]>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/regmap.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/axigate.h"
+
+#define XRT_AXIGATE "xrt_axigate"
+
+#define XRT_AXIGATE_WRITE_REG 0
+#define XRT_AXIGATE_READ_REG 8
+
+#define XRT_AXIGATE_CTRL_CLOSE 0
+#define XRT_AXIGATE_CTRL_OPEN_BIT0 1
+#define XRT_AXIGATE_CTRL_OPEN_BIT1 2
+
+#define XRT_AXIGATE_INTERVAL 500 /* ns */
+
+struct xrt_axigate {
+ struct xrt_device *xdev;
+ struct regmap *regmap;
+ struct mutex gate_lock; /* gate dev lock */
+ void *evt_hdl;
+ const char *ep_name;
+ bool gate_closed;
+};
+
+XRT_DEFINE_REGMAP_CONFIG(axigate_regmap_config);
+
+/* the ep names are in the order of hardware layers */
+static const char * const xrt_axigate_epnames[] = {
+ XRT_MD_NODE_GATE_PLP, /* PLP: Provider Logic Partition */
+ XRT_MD_NODE_GATE_ULP /* ULP: User Logic Partition */
+};
+
+static inline int close_gate(struct xrt_axigate *gate)
+{
+ u32 val;
+ int ret;
+
+ ret = regmap_write(gate->regmap, XRT_AXIGATE_WRITE_REG, XRT_AXIGATE_CTRL_CLOSE);
+ if (ret) {
+ xrt_err(gate->xdev, "write gate failed %d", ret);
+ return ret;
+ }
+ ndelay(XRT_AXIGATE_INTERVAL);
+ /*
+ * Legacy hardware requires an extra read to work properly.
+ * This is not on critical path, thus the extra read should not impact performance much.
+ */
+ ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &val);
+ if (ret) {
+ xrt_err(gate->xdev, "read gate failed %d", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static inline int open_gate(struct xrt_axigate *gate)
+{
+ u32 val;
+ int ret;
+
+ ret = regmap_write(gate->regmap, XRT_AXIGATE_WRITE_REG, XRT_AXIGATE_CTRL_OPEN_BIT1);
+ if (ret) {
+ xrt_err(gate->xdev, "write 2 failed %d", ret);
+ return ret;
+ }
+ ndelay(XRT_AXIGATE_INTERVAL);
+ /*
+ * Legacy hardware requires an extra read to work properly.
+ * This is not on critical path, thus the extra read should not impact performance much.
+ */
+ ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &val);
+ if (ret) {
+ xrt_err(gate->xdev, "read 2 failed %d", ret);
+ return ret;
+ }
+ ret = regmap_write(gate->regmap, XRT_AXIGATE_WRITE_REG,
+ XRT_AXIGATE_CTRL_OPEN_BIT0 | XRT_AXIGATE_CTRL_OPEN_BIT1);
+ if (ret) {
+ xrt_err(gate->xdev, "write 3 failed %d", ret);
+ return ret;
+ }
+ ndelay(XRT_AXIGATE_INTERVAL);
+ ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &val);
+ if (ret) {
+ xrt_err(gate->xdev, "read 3 failed %d", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int xrt_axigate_epname_idx(struct xrt_device *xdev)
+{
+ struct resource *res;
+ int ret, i;
+
+ res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ xrt_err(xdev, "Empty Resource!");
+ return -EINVAL;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(xrt_axigate_epnames); i++) {
+ ret = strncmp(xrt_axigate_epnames[i], res->name,
+ strlen(xrt_axigate_epnames[i]) + 1);
+ if (!ret)
+ return i;
+ }
+
+ return -EINVAL;
+}
+
+static int xrt_axigate_close(struct xrt_device *xdev)
+{
+ struct xrt_axigate *gate;
+ u32 status = 0;
+ int ret;
+
+ gate = xrt_get_drvdata(xdev);
+
+ mutex_lock(&gate->gate_lock);
+ ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &status);
+ if (ret) {
+ xrt_err(xdev, "read gate failed %d", ret);
+ goto failed;
+ }
+ if (status) { /* gate is opened */
+ xleaf_broadcast_event(xdev, XRT_EVENT_PRE_GATE_CLOSE, false);
+ ret = close_gate(gate);
+ if (ret)
+ goto failed;
+ }
+
+ gate->gate_closed = true;
+
+failed:
+ mutex_unlock(&gate->gate_lock);
+
+ xrt_info(xdev, "close gate %s", gate->ep_name);
+ return ret;
+}
+
+static int xrt_axigate_open(struct xrt_device *xdev)
+{
+ struct xrt_axigate *gate;
+ u32 status;
+ int ret;
+
+ gate = xrt_get_drvdata(xdev);
+
+ mutex_lock(&gate->gate_lock);
+ ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &status);
+ if (ret) {
+ xrt_err(xdev, "read gate failed %d", ret);
+ goto failed;
+ }
+ if (!status) { /* gate is closed */
+ ret = open_gate(gate);
+ if (ret)
+ goto failed;
+ xleaf_broadcast_event(xdev, XRT_EVENT_POST_GATE_OPEN, true);
+ /* xrt_axigate_open() could be called from an event callback, thus
+ * we cannot wait for the broadcast to complete here.
+ */
+ }
+
+ gate->gate_closed = false;
+
+failed:
+ mutex_unlock(&gate->gate_lock);
+
+ xrt_info(xdev, "open gate %s", gate->ep_name);
+ return ret;
+}
+
+static void xrt_axigate_event_cb(struct xrt_device *xdev, void *arg)
+{
+ struct xrt_axigate *gate = xrt_get_drvdata(xdev);
+ struct xrt_event *evt = (struct xrt_event *)arg;
+ enum xrt_events e = evt->xe_evt;
+ struct xrt_device *leaf;
+ enum xrt_subdev_id id;
+ struct resource *res;
+ int instance;
+
+ if (e != XRT_EVENT_POST_CREATION)
+ return;
+
+ instance = evt->xe_subdev.xevt_subdev_instance;
+ id = evt->xe_subdev.xevt_subdev_id;
+ if (id != XRT_SUBDEV_AXIGATE)
+ return;
+
+ leaf = xleaf_get_leaf_by_id(xdev, id, instance);
+ if (!leaf)
+ return;
+
+ res = xrt_get_resource(leaf, IORESOURCE_MEM, 0);
+ if (!res || !strncmp(res->name, gate->ep_name, strlen(res->name) + 1)) {
+ xleaf_put_leaf(xdev, leaf);
+ return;
+ }
+
+ /* higher level axigate instance created, make sure the gate is opened. */
+ if (xrt_axigate_epname_idx(leaf) > xrt_axigate_epname_idx(xdev))
+ xrt_axigate_open(xdev);
+ else
+ xleaf_call(leaf, XRT_AXIGATE_OPEN, NULL);
+
+ xleaf_put_leaf(xdev, leaf);
+}
+
+static int
+xrt_axigate_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
+{
+ int ret = 0;
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ xrt_axigate_event_cb(xdev, arg);
+ break;
+ case XRT_AXIGATE_CLOSE:
+ ret = xrt_axigate_close(xdev);
+ break;
+ case XRT_AXIGATE_OPEN:
+ ret = xrt_axigate_open(xdev);
+ break;
+ default:
+ xrt_err(xdev, "unsupported cmd %d", cmd);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static int xrt_axigate_probe(struct xrt_device *xdev)
+{
+ struct xrt_axigate *gate = NULL;
+ void __iomem *base = NULL;
+ struct resource *res;
+ int ret;
+
+ gate = devm_kzalloc(&xdev->dev, sizeof(*gate), GFP_KERNEL);
+ if (!gate)
+ return -ENOMEM;
+
+ gate->xdev = xdev;
+ xrt_set_drvdata(xdev, gate);
+
+ xrt_info(xdev, "probing...");
+ res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ xrt_err(xdev, "Empty resource 0");
+ ret = -EINVAL;
+ goto failed;
+ }
+
+ base = devm_ioremap_resource(&xdev->dev, res);
+ if (IS_ERR(base)) {
+ xrt_err(xdev, "map base iomem failed");
+ ret = PTR_ERR(base);
+ goto failed;
+ }
+
+ gate->regmap = devm_regmap_init_mmio(&xdev->dev, base, &axigate_regmap_config);
+ if (IS_ERR(gate->regmap)) {
+ xrt_err(xdev, "regmap %pR failed", res);
+ ret = PTR_ERR(gate->regmap);
+ goto failed;
+ }
+ gate->ep_name = res->name;
+
+ mutex_init(&gate->gate_lock);
+
+ return 0;
+
+failed:
+ return ret;
+}
+
+static struct xrt_dev_endpoints xrt_axigate_endpoints[] = {
+ {
+ .xse_names = (struct xrt_dev_ep_names[]) {
+ { .ep_name = XRT_MD_NODE_GATE_ULP },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ {
+ .xse_names = (struct xrt_dev_ep_names[]) {
+ { .ep_name = XRT_MD_NODE_GATE_PLP },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_driver xrt_axigate_driver = {
+ .driver = {
+ .name = XRT_AXIGATE,
+ },
+ .subdev_id = XRT_SUBDEV_AXIGATE,
+ .endpoints = xrt_axigate_endpoints,
+ .probe = xrt_axigate_probe,
+ .leaf_call = xrt_axigate_leaf_call,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(axigate);
--
2.27.0

2021-04-27 21:06:41

by Lizhi Hou

[permalink] [raw]
Subject: [PATCH V5 XRT Alveo 20/20] fpga: xrt: Kconfig and Makefile updates for XRT drivers

Update the fpga Kconfig/Makefile and add Kconfig/Makefile files for the
new XRT drivers.

Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
MAINTAINERS | 11 +++++++++++
drivers/Makefile | 1 +
drivers/fpga/Kconfig | 2 ++
drivers/fpga/Makefile | 5 +++++
drivers/fpga/xrt/Kconfig | 8 ++++++++
drivers/fpga/xrt/lib/Kconfig | 17 +++++++++++++++++
drivers/fpga/xrt/lib/Makefile | 30 ++++++++++++++++++++++++++++++
drivers/fpga/xrt/metadata/Kconfig | 12 ++++++++++++
drivers/fpga/xrt/metadata/Makefile | 16 ++++++++++++++++
drivers/fpga/xrt/mgnt/Kconfig | 15 +++++++++++++++
drivers/fpga/xrt/mgnt/Makefile | 19 +++++++++++++++++++
11 files changed, 136 insertions(+)
create mode 100644 drivers/fpga/xrt/Kconfig
create mode 100644 drivers/fpga/xrt/lib/Kconfig
create mode 100644 drivers/fpga/xrt/lib/Makefile
create mode 100644 drivers/fpga/xrt/metadata/Kconfig
create mode 100644 drivers/fpga/xrt/metadata/Makefile
create mode 100644 drivers/fpga/xrt/mgnt/Kconfig
create mode 100644 drivers/fpga/xrt/mgnt/Makefile

diff --git a/MAINTAINERS b/MAINTAINERS
index 9450e052f1b1..89abe140041b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7016,6 +7016,17 @@ F: Documentation/fpga/
F: drivers/fpga/
F: include/linux/fpga/

+FPGA XRT DRIVERS
+M: Lizhi Hou <[email protected]>
+R: Max Zhen <[email protected]>
+R: Sonal Santan <[email protected]>
+L: [email protected]
+S: Supported
+W: https://github.com/Xilinx/XRT
+F: Documentation/fpga/xrt.rst
+F: drivers/fpga/xrt/
+F: include/uapi/linux/xrt/
+
FPU EMULATOR
M: Bill Metzenthen <[email protected]>
S: Maintained
diff --git a/drivers/Makefile b/drivers/Makefile
index 6fba7daba591..dbb3b727fc7a 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -179,6 +179,7 @@ obj-$(CONFIG_STM) += hwtracing/stm/
obj-$(CONFIG_ANDROID) += android/
obj-$(CONFIG_NVMEM) += nvmem/
obj-$(CONFIG_FPGA) += fpga/
+obj-$(CONFIG_FPGA_XRT_METADATA) += fpga/
obj-$(CONFIG_FSI) += fsi/
obj-$(CONFIG_TEE) += tee/
obj-$(CONFIG_MULTIPLEXER) += mux/
diff --git a/drivers/fpga/Kconfig b/drivers/fpga/Kconfig
index 5ff9438b7b46..01410ff000b9 100644
--- a/drivers/fpga/Kconfig
+++ b/drivers/fpga/Kconfig
@@ -227,4 +227,6 @@ config FPGA_MGR_ZYNQMP_FPGA
to configure the programmable logic(PL) through PS
on ZynqMP SoC.

+source "drivers/fpga/xrt/Kconfig"
+
endif # FPGA
diff --git a/drivers/fpga/Makefile b/drivers/fpga/Makefile
index 18dc9885883a..a1cad7f7af09 100644
--- a/drivers/fpga/Makefile
+++ b/drivers/fpga/Makefile
@@ -48,3 +48,8 @@ obj-$(CONFIG_FPGA_DFL_NIOS_INTEL_PAC_N3000) += dfl-n3000-nios.o

# Drivers for FPGAs which implement DFL
obj-$(CONFIG_FPGA_DFL_PCI) += dfl-pci.o
+
+# XRT drivers for Alveo
+obj-$(CONFIG_FPGA_XRT_METADATA) += xrt/metadata/
+obj-$(CONFIG_FPGA_XRT_LIB) += xrt/lib/
+obj-$(CONFIG_FPGA_XRT_XMGNT) += xrt/mgnt/
diff --git a/drivers/fpga/xrt/Kconfig b/drivers/fpga/xrt/Kconfig
new file mode 100644
index 000000000000..2424f89e6e03
--- /dev/null
+++ b/drivers/fpga/xrt/Kconfig
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Xilinx Alveo FPGA device configuration
+#
+
+source "drivers/fpga/xrt/metadata/Kconfig"
+source "drivers/fpga/xrt/lib/Kconfig"
+source "drivers/fpga/xrt/mgnt/Kconfig"
diff --git a/drivers/fpga/xrt/lib/Kconfig b/drivers/fpga/xrt/lib/Kconfig
new file mode 100644
index 000000000000..935369fad570
--- /dev/null
+++ b/drivers/fpga/xrt/lib/Kconfig
@@ -0,0 +1,17 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# XRT Alveo FPGA device configuration
+#
+
+config FPGA_XRT_LIB
+ tristate "XRT Alveo Driver Library"
+ depends on HWMON && PCI && HAS_IOMEM
+ select FPGA_XRT_METADATA
+ select REGMAP_MMIO
+ help
+ Select this option to enable Xilinx XRT Alveo driver library. This
+ library is core infrastructure of XRT Alveo FPGA drivers which
+ provides functions for working with device nodes, iteration and
+ lookup of platform devices, common interfaces for platform devices,
+ plumbing of function call and ioctls between platform devices and
+ parent partitions.
diff --git a/drivers/fpga/xrt/lib/Makefile b/drivers/fpga/xrt/lib/Makefile
new file mode 100644
index 000000000000..58563416efbf
--- /dev/null
+++ b/drivers/fpga/xrt/lib/Makefile
@@ -0,0 +1,30 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
+#
+# Authors: [email protected]
+#
+
+FULL_XRT_PATH=$(srctree)/$(src)/..
+FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
+
+obj-$(CONFIG_FPGA_XRT_LIB) += xrt-lib.o
+
+xrt-lib-objs := \
+ lib-drv.o \
+ xroot.o \
+ xclbin.o \
+ subdev.o \
+ cdev.o \
+ group.o \
+ xleaf/vsec.o \
+ xleaf/axigate.o \
+ xleaf/devctl.o \
+ xleaf/icap.o \
+ xleaf/clock.o \
+ xleaf/clkfreq.o \
+ xleaf/ucs.o \
+ xleaf/ddr_calibration.o
+
+ccflags-y := -I$(FULL_XRT_PATH)/include \
+ -I$(FULL_DTC_PATH)
diff --git a/drivers/fpga/xrt/metadata/Kconfig b/drivers/fpga/xrt/metadata/Kconfig
new file mode 100644
index 000000000000..129adda47e94
--- /dev/null
+++ b/drivers/fpga/xrt/metadata/Kconfig
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# XRT Alveo FPGA device configuration
+#
+
+config FPGA_XRT_METADATA
+ bool "XRT Alveo Driver Metadata Parser"
+ select LIBFDT
+ help
+ This option provides helper functions to parse Xilinx Alveo FPGA
+ firmware metadata. The metadata is in device tree format and the
+ XRT driver uses it to discover the HW subsystems behind PCIe BAR.
diff --git a/drivers/fpga/xrt/metadata/Makefile b/drivers/fpga/xrt/metadata/Makefile
new file mode 100644
index 000000000000..14f65ef1595c
--- /dev/null
+++ b/drivers/fpga/xrt/metadata/Makefile
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
+#
+# Authors: [email protected]
+#
+
+FULL_XRT_PATH=$(srctree)/$(src)/..
+FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
+
+obj-$(CONFIG_FPGA_XRT_METADATA) += xrt-md.o
+
+xrt-md-objs := metadata.o
+
+ccflags-y := -I$(FULL_XRT_PATH)/include \
+ -I$(FULL_DTC_PATH)
diff --git a/drivers/fpga/xrt/mgnt/Kconfig b/drivers/fpga/xrt/mgnt/Kconfig
new file mode 100644
index 000000000000..b43242c14757
--- /dev/null
+++ b/drivers/fpga/xrt/mgnt/Kconfig
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Xilinx XRT FPGA device configuration
+#
+
+config FPGA_XRT_XMGNT
+ tristate "Xilinx Alveo Management Driver"
+ depends on FPGA_XRT_LIB
+ select FPGA_XRT_METADATA
+ select FPGA_BRIDGE
+ select FPGA_REGION
+ help
+ Select this option to enable XRT PCIe driver for Xilinx Alveo FPGA.
+ This driver provides interfaces for userspace application to access
+ Alveo FPGA device.
diff --git a/drivers/fpga/xrt/mgnt/Makefile b/drivers/fpga/xrt/mgnt/Makefile
new file mode 100644
index 000000000000..b71d2ff0aa94
--- /dev/null
+++ b/drivers/fpga/xrt/mgnt/Makefile
@@ -0,0 +1,19 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
+#
+# Authors: [email protected]
+#
+
+FULL_XRT_PATH=$(srctree)/$(src)/..
+FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
+
+obj-$(CONFIG_FPGA_XRT_XMGNT) += xrt-mgnt.o
+
+xrt-mgnt-objs := root.o \
+ xmgnt-main.o \
+ xrt-mgr.o \
+ xmgnt-main-region.o
+
+ccflags-y := -I$(FULL_XRT_PATH)/include \
+ -I$(FULL_DTC_PATH)
--
2.27.0

2021-04-27 23:55:13

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 20/20] fpga: xrt: Kconfig and Makefile updates for XRT drivers

Hi Lizhi,

I love your patch! Perhaps something to improve:

[auto build test WARNING on linux/master]
[also build test WARNING on linus/master v5.12 next-20210427]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/0day-ci/linux/commits/Lizhi-Hou/XRT-Alveo-driver-overview/20210428-050424
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 1fe5501ba1abf2b7e78295df73675423bd6899a0
config: mips-allyesconfig (attached as .config)
compiler: mips-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/0day-ci/linux/commit/079fb263b22e0d961ac204b3928bdff5d8ebf3d5
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Lizhi-Hou/XRT-Alveo-driver-overview/20210428-050424
git checkout 079fb263b22e0d961ac204b3928bdff5d8ebf3d5
# save the attached .config to linux build tree
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross W=1 ARCH=mips

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>

All warnings (new ones prefixed by >>):

In file included from include/linux/printk.h:409,
from include/linux/kernel.h:16,
from include/linux/list.h:9,
from include/linux/preempt.h:11,
from include/linux/spinlock.h:51,
from include/linux/vmalloc.h:5,
from drivers/fpga/xrt/lib/subdev.c:9:
drivers/fpga/xrt/lib/subdev.c: In function 'metadata_output':
>> drivers/fpga/xrt/lib/subdev.c:120:16: warning: format '%ld' expects argument of type 'long int', but argument 4 has type 'size_t' {aka 'unsigned int'} [-Wformat=]
120 | dev_dbg(dev, "count (%ld) beyond left bytes: %lld\n", count, size - off);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/dynamic_debug.h:129:15: note: in definition of macro '__dynamic_func_call'
129 | func(&id, ##__VA_ARGS__); \
| ^~~~~~~~~~~
include/linux/dynamic_debug.h:161:2: note: in expansion of macro '_dynamic_func_call'
161 | _dynamic_func_call(fmt,__dynamic_dev_dbg, \
| ^~~~~~~~~~~~~~~~~~
include/linux/dev_printk.h:123:2: note: in expansion of macro 'dynamic_dev_dbg'
123 | dynamic_dev_dbg(dev, dev_fmt(fmt), ##__VA_ARGS__)
| ^~~~~~~~~~~~~~~
include/linux/dev_printk.h:123:23: note: in expansion of macro 'dev_fmt'
123 | dynamic_dev_dbg(dev, dev_fmt(fmt), ##__VA_ARGS__)
| ^~~~~~~
drivers/fpga/xrt/lib/subdev.c:120:3: note: in expansion of macro 'dev_dbg'
120 | dev_dbg(dev, "count (%ld) beyond left bytes: %lld\n", count, size - off);
| ^~~~~~~
drivers/fpga/xrt/lib/subdev.c:120:26: note: format string is defined here
120 | dev_dbg(dev, "count (%ld) beyond left bytes: %lld\n", count, size - off);
| ~~^
| |
| long int
| %d


vim +120 drivers/fpga/xrt/lib/subdev.c

390cff2f7a6222 Lizhi Hou 2021-04-27 96
390cff2f7a6222 Lizhi Hou 2021-04-27 97 static ssize_t metadata_output(struct file *filp, struct kobject *kobj,
390cff2f7a6222 Lizhi Hou 2021-04-27 98 struct bin_attribute *attr, char *buf, loff_t off, size_t count)
390cff2f7a6222 Lizhi Hou 2021-04-27 99 {
390cff2f7a6222 Lizhi Hou 2021-04-27 100 struct device *dev = kobj_to_dev(kobj);
390cff2f7a6222 Lizhi Hou 2021-04-27 101 struct xrt_device *xdev = to_xrt_dev(dev);
390cff2f7a6222 Lizhi Hou 2021-04-27 102 struct xrt_subdev_platdata *pdata = DEV_PDATA(xdev);
390cff2f7a6222 Lizhi Hou 2021-04-27 103 unsigned char *blob;
390cff2f7a6222 Lizhi Hou 2021-04-27 104 unsigned long size;
390cff2f7a6222 Lizhi Hou 2021-04-27 105 ssize_t ret = 0;
390cff2f7a6222 Lizhi Hou 2021-04-27 106
390cff2f7a6222 Lizhi Hou 2021-04-27 107 blob = pdata->xsp_dtb;
390cff2f7a6222 Lizhi Hou 2021-04-27 108 size = xrt_md_size(dev, blob);
390cff2f7a6222 Lizhi Hou 2021-04-27 109 if (size == XRT_MD_INVALID_LENGTH) {
390cff2f7a6222 Lizhi Hou 2021-04-27 110 ret = -EINVAL;
390cff2f7a6222 Lizhi Hou 2021-04-27 111 goto failed;
390cff2f7a6222 Lizhi Hou 2021-04-27 112 }
390cff2f7a6222 Lizhi Hou 2021-04-27 113
390cff2f7a6222 Lizhi Hou 2021-04-27 114 if (off >= size) {
390cff2f7a6222 Lizhi Hou 2021-04-27 115 dev_dbg(dev, "offset (%lld) beyond total size: %ld\n", off, size);
390cff2f7a6222 Lizhi Hou 2021-04-27 116 goto failed;
390cff2f7a6222 Lizhi Hou 2021-04-27 117 }
390cff2f7a6222 Lizhi Hou 2021-04-27 118
390cff2f7a6222 Lizhi Hou 2021-04-27 119 if (off + count > size) {
390cff2f7a6222 Lizhi Hou 2021-04-27 @120 dev_dbg(dev, "count (%ld) beyond left bytes: %lld\n", count, size - off);
390cff2f7a6222 Lizhi Hou 2021-04-27 121 count = size - off;
390cff2f7a6222 Lizhi Hou 2021-04-27 122 }
390cff2f7a6222 Lizhi Hou 2021-04-27 123 memcpy(buf, blob + off, count);
390cff2f7a6222 Lizhi Hou 2021-04-27 124
390cff2f7a6222 Lizhi Hou 2021-04-27 125 ret = count;
390cff2f7a6222 Lizhi Hou 2021-04-27 126 failed:
390cff2f7a6222 Lizhi Hou 2021-04-27 127 return ret;
390cff2f7a6222 Lizhi Hou 2021-04-27 128 }
390cff2f7a6222 Lizhi Hou 2021-04-27 129
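
A minimal sketch of one possible fix, assuming the intent is simply to
print the size_t count with the matching specifier (%zu) and to keep the
remaining-bytes value printable on both 32-bit and 64-bit builds:

    dev_dbg(dev, "count (%zu) beyond left bytes: %lld\n",
            count, (long long)(size - off));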

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]



2021-04-28 03:13:38

by kernel test robot

[permalink] [raw]
Subject: [RFC PATCH] fpga: xrt: xmgnt_bridge_ops can be static

drivers/fpga/xrt/mgnt/xmgnt-main-region.c:71:30: warning: symbol 'xmgnt_bridge_ops' was not declared. Should it be static?

Reported-by: kernel test robot <[email protected]>
Signed-off-by: kernel test robot <[email protected]>
---
xmgnt-main-region.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/fpga/xrt/mgnt/xmgnt-main-region.c b/drivers/fpga/xrt/mgnt/xmgnt-main-region.c
index 398fc816b1786..216f0e9652d47 100644
--- a/drivers/fpga/xrt/mgnt/xmgnt-main-region.c
+++ b/drivers/fpga/xrt/mgnt/xmgnt-main-region.c
@@ -68,7 +68,7 @@ static int xmgnt_br_enable_set(struct fpga_bridge *bridge, bool enable)
return rc;
}

-const struct fpga_bridge_ops xmgnt_bridge_ops = {
+static const struct fpga_bridge_ops xmgnt_bridge_ops = {
.enable_set = xmgnt_br_enable_set
};

2021-04-28 03:15:27

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 20/20] fpga: xrt: Kconfig and Makefile updates for XRT drivers

Hi Lizhi,

I love your patch! Perhaps something to improve:

[auto build test WARNING on linux/master]
[also build test WARNING on linus/master v5.12 next-20210427]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url: https://github.com/0day-ci/linux/commits/Lizhi-Hou/XRT-Alveo-driver-overview/20210428-050424
base: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 1fe5501ba1abf2b7e78295df73675423bd6899a0
config: x86_64-randconfig-s032-20210428 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce:
# apt-get install sparse
# sparse version: v0.6.3-341-g8af24329-dirty
# https://github.com/0day-ci/linux/commit/079fb263b22e0d961ac204b3928bdff5d8ebf3d5
git remote add linux-review https://github.com/0day-ci/linux
git fetch --no-tags linux-review Lizhi-Hou/XRT-Alveo-driver-overview/20210428-050424
git checkout 079fb263b22e0d961ac204b3928bdff5d8ebf3d5
# save the attached .config to linux build tree
make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' W=1 ARCH=x86_64

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <[email protected]>


sparse warnings: (new ones prefixed by >>)
>> drivers/fpga/xrt/lib/xclbin.c:314:22: sparse: sparse: incorrect type in assignment (different base types) @@ expected unsigned short [usertype] freq @@ got restricted __be16 [usertype] @@
drivers/fpga/xrt/lib/xclbin.c:314:22: sparse: expected unsigned short [usertype] freq
drivers/fpga/xrt/lib/xclbin.c:314:22: sparse: got restricted __be16 [usertype]
--
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/xleaf/vsec.c:270:9: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/xleaf/vsec.c:273:40: sparse: sparse: cast to restricted __be32
drivers/fpga/xrt/lib/xleaf/vsec.c:273:40: sparse: sparse: cast to restricted __be32
drivers/fpga/xrt/lib/xleaf/vsec.c:273:40: sparse: sparse: cast to restricted __be32
drivers/fpga/xrt/lib/xleaf/vsec.c:273:40: sparse: sparse: cast to restricted __be32
drivers/fpga/xrt/lib/xleaf/vsec.c:273:40: sparse: sparse: cast to restricted __be32
drivers/fpga/xrt/lib/xleaf/vsec.c:273:40: sparse: sparse: cast to restricted __be32
drivers/fpga/xrt/lib/xleaf/vsec.c:279:29: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/xleaf/vsec.c:279:29: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/xleaf/vsec.c:279:29: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/xleaf/vsec.c:279:29: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/xleaf/vsec.c:279:29: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/xleaf/vsec.c:279:29: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/xleaf/vsec.c:279:29: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/xleaf/vsec.c:279:29: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/xleaf/vsec.c:279:29: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/xleaf/vsec.c:279:29: sparse: sparse: cast to restricted __be64
--
>> drivers/fpga/xrt/lib/xleaf/icap.c:58:9: sparse: sparse: incorrect type in initializer (different base types) @@ expected unsigned int @@ got restricted __be32 [usertype] @@
drivers/fpga/xrt/lib/xleaf/icap.c:58:9: sparse: expected unsigned int
drivers/fpga/xrt/lib/xleaf/icap.c:58:9: sparse: got restricted __be32 [usertype]
drivers/fpga/xrt/lib/xleaf/icap.c:60:9: sparse: sparse: incorrect type in initializer (different base types) @@ expected unsigned int @@ got restricted __be32 [usertype] @@
drivers/fpga/xrt/lib/xleaf/icap.c:60:9: sparse: expected unsigned int
drivers/fpga/xrt/lib/xleaf/icap.c:60:9: sparse: got restricted __be32 [usertype]
drivers/fpga/xrt/lib/xleaf/icap.c:62:9: sparse: sparse: incorrect type in initializer (different base types) @@ expected unsigned int @@ got restricted __be32 [usertype] @@
drivers/fpga/xrt/lib/xleaf/icap.c:62:9: sparse: expected unsigned int
drivers/fpga/xrt/lib/xleaf/icap.c:62:9: sparse: got restricted __be32 [usertype]
drivers/fpga/xrt/lib/xleaf/icap.c:64:9: sparse: sparse: incorrect type in initializer (different base types) @@ expected unsigned int @@ got restricted __be32 [usertype] @@
drivers/fpga/xrt/lib/xleaf/icap.c:64:9: sparse: expected unsigned int
drivers/fpga/xrt/lib/xleaf/icap.c:64:9: sparse: got restricted __be32 [usertype]
drivers/fpga/xrt/lib/xleaf/icap.c:66:9: sparse: sparse: incorrect type in initializer (different base types) @@ expected unsigned int @@ got restricted __be32 [usertype] @@
drivers/fpga/xrt/lib/xleaf/icap.c:66:9: sparse: expected unsigned int
drivers/fpga/xrt/lib/xleaf/icap.c:66:9: sparse: got restricted __be32 [usertype]
drivers/fpga/xrt/lib/xleaf/icap.c:68:9: sparse: sparse: incorrect type in initializer (different base types) @@ expected unsigned int @@ got restricted __be32 [usertype] @@
drivers/fpga/xrt/lib/xleaf/icap.c:68:9: sparse: expected unsigned int
drivers/fpga/xrt/lib/xleaf/icap.c:68:9: sparse: got restricted __be32 [usertype]
drivers/fpga/xrt/lib/xleaf/icap.c:70:9: sparse: sparse: incorrect type in initializer (different base types) @@ expected unsigned int @@ got restricted __be32 [usertype] @@
drivers/fpga/xrt/lib/xleaf/icap.c:70:9: sparse: expected unsigned int
drivers/fpga/xrt/lib/xleaf/icap.c:70:9: sparse: got restricted __be32 [usertype]
>> drivers/fpga/xrt/lib/xleaf/icap.c:113:25: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/xleaf/icap.c:113:25: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/xleaf/icap.c:113:25: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/xleaf/icap.c:113:25: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/xleaf/icap.c:113:25: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/xleaf/icap.c:113:25: sparse: sparse: cast to restricted __be32
--
>> drivers/fpga/xrt/lib/xleaf/clock.c:506:31: sparse: sparse: cast to restricted __be16
>> drivers/fpga/xrt/lib/xleaf/clock.c:506:31: sparse: sparse: cast to restricted __be16
>> drivers/fpga/xrt/lib/xleaf/clock.c:506:31: sparse: sparse: cast to restricted __be16
>> drivers/fpga/xrt/lib/xleaf/clock.c:506:31: sparse: sparse: cast to restricted __be16
--
>> drivers/fpga/xrt/lib/subdev.c:195:33: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/subdev.c:195:33: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/subdev.c:195:33: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/subdev.c:195:33: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/subdev.c:195:33: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/subdev.c:195:33: sparse: sparse: cast to restricted __be32
>> drivers/fpga/xrt/lib/subdev.c:197:57: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/subdev.c:197:57: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/subdev.c:197:57: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/subdev.c:197:57: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/subdev.c:197:57: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/subdev.c:197:57: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/subdev.c:197:57: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/subdev.c:197:57: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/subdev.c:197:57: sparse: sparse: cast to restricted __be64
>> drivers/fpga/xrt/lib/subdev.c:197:57: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:198:55: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:198:55: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:198:55: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:198:55: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:198:55: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:198:55: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:198:55: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:198:55: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:198:55: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:198:55: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:199:25: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:199:25: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:199:25: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:199:25: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:199:25: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:199:25: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:199:25: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:199:25: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:199:25: sparse: sparse: cast to restricted __be64
drivers/fpga/xrt/lib/subdev.c:199:25: sparse: sparse: cast to restricted __be64
--
>> drivers/fpga/xrt/metadata/metadata.c:311:21: sparse: sparse: incorrect type in assignment (different base types) @@ expected unsigned int [usertype] val @@ got restricted __be32 [usertype] @@
drivers/fpga/xrt/metadata/metadata.c:311:21: sparse: expected unsigned int [usertype] val
drivers/fpga/xrt/metadata/metadata.c:311:21: sparse: got restricted __be32 [usertype]
>> drivers/fpga/xrt/metadata/metadata.c:319:29: sparse: sparse: incorrect type in assignment (different base types) @@ expected unsigned long long @@ got restricted __be64 [usertype] @@
drivers/fpga/xrt/metadata/metadata.c:319:29: sparse: expected unsigned long long
drivers/fpga/xrt/metadata/metadata.c:319:29: sparse: got restricted __be64 [usertype]
drivers/fpga/xrt/metadata/metadata.c:320:29: sparse: sparse: incorrect type in assignment (different base types) @@ expected unsigned long long @@ got restricted __be64 [usertype] @@
drivers/fpga/xrt/metadata/metadata.c:320:29: sparse: expected unsigned long long
drivers/fpga/xrt/metadata/metadata.c:320:29: sparse: got restricted __be64 [usertype]
--
>> drivers/fpga/xrt/mgnt/xmgnt-main-region.c:71:30: sparse: sparse: symbol 'xmgnt_bridge_ops' was not declared. Should it be static?
--
>> drivers/fpga/xrt/mgnt/root.c:211:18: sparse: sparse: incorrect type in assignment (different base types) @@ expected unsigned int [usertype] vsec_bar @@ got restricted __be32 [usertype] @@
drivers/fpga/xrt/mgnt/root.c:211:18: sparse: expected unsigned int [usertype] vsec_bar
drivers/fpga/xrt/mgnt/root.c:211:18: sparse: got restricted __be32 [usertype]
>> drivers/fpga/xrt/mgnt/root.c:219:18: sparse: sparse: incorrect type in assignment (different base types) @@ expected unsigned long long [usertype] vsec_off @@ got restricted __be64 [usertype] @@
drivers/fpga/xrt/mgnt/root.c:219:18: sparse: expected unsigned long long [usertype] vsec_off
drivers/fpga/xrt/mgnt/root.c:219:18: sparse: got restricted __be64 [usertype]
--
>> drivers/fpga/xrt/mgnt/xmgnt-main.c:570:56: sparse: sparse: incorrect type in argument 2 (different address spaces) @@ expected void const [noderef] __user *from @@ got struct axlf *[addressable] xclbin @@
drivers/fpga/xrt/mgnt/xmgnt-main.c:570:56: sparse: expected void const [noderef] __user *from
drivers/fpga/xrt/mgnt/xmgnt-main.c:570:56: sparse: got struct axlf *[addressable] xclbin
drivers/fpga/xrt/mgnt/xmgnt-main.c:585:48: sparse: sparse: incorrect type in argument 2 (different address spaces) @@ expected void const [noderef] __user *from @@ got struct axlf *[addressable] xclbin @@
drivers/fpga/xrt/mgnt/xmgnt-main.c:585:48: sparse: expected void const [noderef] __user *from
drivers/fpga/xrt/mgnt/xmgnt-main.c:585:48: sparse: got struct axlf *[addressable] xclbin

Please review and possibly fold the followup patch.

vim +314 drivers/fpga/xrt/lib/xclbin.c

d174deaba7ea5f Lizhi Hou 2021-04-27 243
d174deaba7ea5f Lizhi Hou 2021-04-27 244 struct xrt_clock_desc {
d174deaba7ea5f Lizhi Hou 2021-04-27 245 char *clock_ep_name;
d174deaba7ea5f Lizhi Hou 2021-04-27 246 u32 clock_xclbin_type;
d174deaba7ea5f Lizhi Hou 2021-04-27 247 char *clkfreq_ep_name;
d174deaba7ea5f Lizhi Hou 2021-04-27 @248 } clock_desc[] = {
d174deaba7ea5f Lizhi Hou 2021-04-27 249 {
d174deaba7ea5f Lizhi Hou 2021-04-27 250 .clock_ep_name = XRT_MD_NODE_CLK_KERNEL1,
d174deaba7ea5f Lizhi Hou 2021-04-27 251 .clock_xclbin_type = CT_DATA,
d174deaba7ea5f Lizhi Hou 2021-04-27 252 .clkfreq_ep_name = XRT_MD_NODE_CLKFREQ_K1,
d174deaba7ea5f Lizhi Hou 2021-04-27 253 },
d174deaba7ea5f Lizhi Hou 2021-04-27 254 {
d174deaba7ea5f Lizhi Hou 2021-04-27 255 .clock_ep_name = XRT_MD_NODE_CLK_KERNEL2,
d174deaba7ea5f Lizhi Hou 2021-04-27 256 .clock_xclbin_type = CT_KERNEL,
d174deaba7ea5f Lizhi Hou 2021-04-27 257 .clkfreq_ep_name = XRT_MD_NODE_CLKFREQ_K2,
d174deaba7ea5f Lizhi Hou 2021-04-27 258 },
d174deaba7ea5f Lizhi Hou 2021-04-27 259 {
d174deaba7ea5f Lizhi Hou 2021-04-27 260 .clock_ep_name = XRT_MD_NODE_CLK_KERNEL3,
d174deaba7ea5f Lizhi Hou 2021-04-27 261 .clock_xclbin_type = CT_SYSTEM,
d174deaba7ea5f Lizhi Hou 2021-04-27 262 .clkfreq_ep_name = XRT_MD_NODE_CLKFREQ_HBM,
d174deaba7ea5f Lizhi Hou 2021-04-27 263 },
d174deaba7ea5f Lizhi Hou 2021-04-27 264 };
d174deaba7ea5f Lizhi Hou 2021-04-27 265
d174deaba7ea5f Lizhi Hou 2021-04-27 266 const char *xrt_clock_type2epname(enum XCLBIN_CLOCK_TYPE type)
d174deaba7ea5f Lizhi Hou 2021-04-27 267 {
d174deaba7ea5f Lizhi Hou 2021-04-27 268 int i;
d174deaba7ea5f Lizhi Hou 2021-04-27 269
d174deaba7ea5f Lizhi Hou 2021-04-27 270 for (i = 0; i < ARRAY_SIZE(clock_desc); i++) {
d174deaba7ea5f Lizhi Hou 2021-04-27 271 if (clock_desc[i].clock_xclbin_type == type)
d174deaba7ea5f Lizhi Hou 2021-04-27 272 return clock_desc[i].clock_ep_name;
d174deaba7ea5f Lizhi Hou 2021-04-27 273 }
d174deaba7ea5f Lizhi Hou 2021-04-27 274 return NULL;
d174deaba7ea5f Lizhi Hou 2021-04-27 275 }
d174deaba7ea5f Lizhi Hou 2021-04-27 276 EXPORT_SYMBOL_GPL(xrt_clock_type2epname);
d174deaba7ea5f Lizhi Hou 2021-04-27 277
d174deaba7ea5f Lizhi Hou 2021-04-27 278 static const char *clock_type2clkfreq_name(enum XCLBIN_CLOCK_TYPE type)
d174deaba7ea5f Lizhi Hou 2021-04-27 279 {
d174deaba7ea5f Lizhi Hou 2021-04-27 280 int i;
d174deaba7ea5f Lizhi Hou 2021-04-27 281
d174deaba7ea5f Lizhi Hou 2021-04-27 282 for (i = 0; i < ARRAY_SIZE(clock_desc); i++) {
d174deaba7ea5f Lizhi Hou 2021-04-27 283 if (clock_desc[i].clock_xclbin_type == type)
d174deaba7ea5f Lizhi Hou 2021-04-27 284 return clock_desc[i].clkfreq_ep_name;
d174deaba7ea5f Lizhi Hou 2021-04-27 285 }
d174deaba7ea5f Lizhi Hou 2021-04-27 286 return NULL;
d174deaba7ea5f Lizhi Hou 2021-04-27 287 }
d174deaba7ea5f Lizhi Hou 2021-04-27 288
d174deaba7ea5f Lizhi Hou 2021-04-27 289 static int xrt_xclbin_add_clock_metadata(struct device *dev,
d174deaba7ea5f Lizhi Hou 2021-04-27 290 const struct axlf *xclbin,
d174deaba7ea5f Lizhi Hou 2021-04-27 291 char *dtb)
d174deaba7ea5f Lizhi Hou 2021-04-27 292 {
d174deaba7ea5f Lizhi Hou 2021-04-27 293 struct clock_freq_topology *clock_topo;
d174deaba7ea5f Lizhi Hou 2021-04-27 294 u16 freq;
d174deaba7ea5f Lizhi Hou 2021-04-27 295 int rc;
d174deaba7ea5f Lizhi Hou 2021-04-27 296 int i;
d174deaba7ea5f Lizhi Hou 2021-04-27 297
d174deaba7ea5f Lizhi Hou 2021-04-27 298 /* if clock section does not exist, add nothing and return success */
d174deaba7ea5f Lizhi Hou 2021-04-27 299 rc = xrt_xclbin_get_section(dev, xclbin, CLOCK_FREQ_TOPOLOGY,
d174deaba7ea5f Lizhi Hou 2021-04-27 300 (void **)&clock_topo, NULL);
d174deaba7ea5f Lizhi Hou 2021-04-27 301 if (rc == -ENOENT)
d174deaba7ea5f Lizhi Hou 2021-04-27 302 return 0;
d174deaba7ea5f Lizhi Hou 2021-04-27 303 else if (rc)
d174deaba7ea5f Lizhi Hou 2021-04-27 304 return rc;
d174deaba7ea5f Lizhi Hou 2021-04-27 305
d174deaba7ea5f Lizhi Hou 2021-04-27 306 for (i = 0; i < clock_topo->count; i++) {
d174deaba7ea5f Lizhi Hou 2021-04-27 307 u8 type = clock_topo->clock_freq[i].type;
d174deaba7ea5f Lizhi Hou 2021-04-27 308 const char *ep_name = xrt_clock_type2epname(type);
d174deaba7ea5f Lizhi Hou 2021-04-27 309 const char *counter_name = clock_type2clkfreq_name(type);
d174deaba7ea5f Lizhi Hou 2021-04-27 310
d174deaba7ea5f Lizhi Hou 2021-04-27 311 if (!ep_name || !counter_name)
d174deaba7ea5f Lizhi Hou 2021-04-27 312 continue;
d174deaba7ea5f Lizhi Hou 2021-04-27 313
d174deaba7ea5f Lizhi Hou 2021-04-27 @314 freq = cpu_to_be16(clock_topo->clock_freq[i].freq_MHZ);
d174deaba7ea5f Lizhi Hou 2021-04-27 315 rc = xrt_md_set_prop(dev, dtb, ep_name, NULL, XRT_MD_PROP_CLK_FREQ,
d174deaba7ea5f Lizhi Hou 2021-04-27 316 &freq, sizeof(freq));
d174deaba7ea5f Lizhi Hou 2021-04-27 317 if (rc)
d174deaba7ea5f Lizhi Hou 2021-04-27 318 break;
d174deaba7ea5f Lizhi Hou 2021-04-27 319
d174deaba7ea5f Lizhi Hou 2021-04-27 320 rc = xrt_md_set_prop(dev, dtb, ep_name, NULL, XRT_MD_PROP_CLK_CNT,
d174deaba7ea5f Lizhi Hou 2021-04-27 321 counter_name, strlen(counter_name) + 1);
d174deaba7ea5f Lizhi Hou 2021-04-27 322 if (rc)
d174deaba7ea5f Lizhi Hou 2021-04-27 323 break;
d174deaba7ea5f Lizhi Hou 2021-04-27 324 }
d174deaba7ea5f Lizhi Hou 2021-04-27 325
d174deaba7ea5f Lizhi Hou 2021-04-27 326 vfree(clock_topo);
d174deaba7ea5f Lizhi Hou 2021-04-27 327
d174deaba7ea5f Lizhi Hou 2021-04-27 328 return rc;
d174deaba7ea5f Lizhi Hou 2021-04-27 329 }
d174deaba7ea5f Lizhi Hou 2021-04-27 330
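
For reference, the :314 warning flagged above comes from storing the big-endian
result of cpu_to_be16() in a plain u16. Assuming xrt_md_set_prop() simply copies
the raw bytes into the blob (so only the declared type needs to change), a
minimal sparse-clean sketch of that spot would be:

        /* near the top of xrt_xclbin_add_clock_metadata() */
        __be16 freq;                            /* was: u16 freq; */

        /* ... */

        freq = cpu_to_be16(clock_topo->clock_freq[i].freq_MHZ);
        rc = xrt_md_set_prop(dev, dtb, ep_name, NULL, XRT_MD_PROP_CLK_FREQ,
                             &freq, sizeof(freq));

The __be32/__be64 complaints in subdev.c and metadata.c look like the mirror
image of this: values read back out of the DTB would be declared __be32/__be64
before being converted with be32_to_cpu()/be64_to_cpu().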

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/[email protected]


Attachments:
(No filename) (19.46 kB)
.config.gz (37.36 kB)

2021-04-28 20:52:25

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 00/20] XRT Alveo driver overview


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> Hello,
>
> This is V5 of patch series which adds management physical function driver
> for Xilinx Alveo PCIe accelerator cards.
> https://www.xilinx.com/products/boards-and-kits/alveo.html
>
> This driver is part of Xilinx Runtime (XRT) open source stack.
>
> XILINX ALVEO PLATFORM ARCHITECTURE
>
> Alveo PCIe FPGA based platforms have a static *shell* partition and a
> partial re-configurable *user* partition. The shell partition is
> automatically loaded from flash when host is booted and PCIe is enumerated
> by BIOS. Shell cannot be changed till the next cold reboot. The shell
> exposes two PCIe physical functions:
>
> 1. management physical function
> 2. user physical function
>
> The patch series includes Documentation/xrt.rst which describes Alveo
> platform, XRT driver architecture and deployment model in more detail.
>
> Users compile their high level design in C/C++/OpenCL or RTL into FPGA
> image using Vitis tools.
> https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html
>
> The compiled image is packaged as xclbin which contains partial bitstream
> for the user partition and necessary metadata. Users can dynamically swap
> the image running on the user partition in order to switch between
> different workloads by loading different xclbins.
>
> XRT DRIVERS FOR XILINX ALVEO
>
> XRT Linux kernel driver *xrt-mgnt* binds to management physical function of
> Alveo platform. The modular driver framework is organized into several
> platform drivers which primarily handle the following functionality:
>
> 1. Loading firmware container also called xsabin at driver attach time
> 2. Loading of user compiled xclbin with FPGA Manager integration
> 3. Clock scaling of image running on user partition
> 4. In-band sensors: temp, voltage, power, etc.
> 5. Device reset and rescan
>
> The platform drivers are packaged into *xrt-lib* helper module with well
> defined interfaces. The module provides a pseudo-bus implementation for the
> platform drivers. More details on the driver model can be found in
> Documentation/xrt.rst.
>
> User physical function driver is not included in this patch series.
>
> LIBFDT REQUIREMENT
>
> XRT driver infrastructure uses Device Tree as a metadata format to discover
> HW subsystems in the Alveo PCIe device. The Device Tree schema used by XRT
> is documented in Documentation/xrt.rst.
>
> TESTING AND VALIDATION
>
> xrt-mgnt driver can be tested with full XRT open source stack which
> includes user space libraries, board utilities and (out of tree) first
> generation user physical function driver xocl. XRT open source runtime
> stack is available at https://github.com/Xilinx/XRT
>
> Complete documentation for XRT open source stack including sections on
> Alveo/XRT security and platform architecture can be found here:
>
> https://xilinx.github.io/XRT/master/html/index.html
> https://xilinx.github.io/XRT/master/html/security.html
> https://xilinx.github.io/XRT/master/html/platforms_partitions.html
>
> Changes since v4:
> - Added xrt_bus_type and xrt_device. All sub devices were changed from
> platform_bus_type/platform_device to xrt_bus_type/xrt_device.
> - Renamed xrt-mgmt driver to xrt-mgnt driver.
> - Replaced 'MGMT' with 'MGNT' and 'mgmt' with 'mgnt' in code and file names
> - Moved pci function calls from infrastructure to xrt-mgnt driver.
> - Renamed files: mgmt/main.c -> mgnt/xmgnt-main.c
> mgmt/main-region.c -> mgnt/xmgnt-main-region.c
> include/xmgmt-main.h -> include/xmgnt-main.h
> mgmt/fmgr-drv.c -> mgnt/xrt-mgr.c
> mgmt/fmgr.h -> mgnt/xrt-mgr.h
> - Updated code base to include v4 code review comments.


An early run-through with checkpatch complains that MAINTAINERS is not
modified for the new content.

This could likely be resolved by moving your MAINTAINERS change from
patch 20 to patch 1.
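
For illustration only (the actual entry is whatever patch 20 already adds), a
stanza covering the paths in the series' diffstat quoted further down would
look roughly like:

        FPGA XRT DRIVERS
        M:      Lizhi Hou <[email protected]>
        S:      Supported
        F:      Documentation/fpga/xrt.rst
        F:      drivers/fpga/xrt/
        F:      include/uapi/linux/xrt/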

And of course fixes to make the test robot happy :)


Thanks for the refactor to the xrt_bus!

I will start taking a look.

Tom


>
> Changes since v3:
> - Leaf drivers use regmap-mmio to access hardware registers.
> - Renamed driver module: xmgmt.ko -> xrt-mgmt.ko
> - Renamed files: calib.[c|h] -> ddr_calibration.[c|h],
> lib/main.[c|h] -> lib/lib-drv.[c|h],
> mgmt/main-impl.h - > mgmt/xmgnt.h
> - Updated code base to include v3 code review comments.
>
> Changes since v2:
> - Streamlined the driver framework into *xleaf*, *group* and *xroot*
> - Updated documentation to show the driver model with examples
> - Addressed kernel test robot errors
> - Added a selftest for basic driver framework
> - Documented device tree schema
> - Removed need to export libfdt symbols
>
> Changes since v1:
> - Updated the driver to use fpga_region and fpga_bridge for FPGA
> programming
> - Dropped platform drivers not related to PR programming to focus on XRT
> core framework
> - Updated Documentation/fpga/xrt.rst with information on XRT core framework
> - Addressed checkpatch issues
> - Dropped xrt- prefix from some header files
>
> For reference V4 version of patch series can be found here:
>
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]
> https://lore.kernel.org/lkml/[email protected]/
>
> Lizhi Hou (20):
> Documentation: fpga: Add a document describing XRT Alveo drivers
> fpga: xrt: driver metadata helper functions
> fpga: xrt: xclbin file helper functions
> fpga: xrt: xrt-lib driver manager
> fpga: xrt: group driver
> fpga: xrt: char dev node helper functions
> fpga: xrt: root driver infrastructure
> fpga: xrt: driver infrastructure
> fpga: xrt: management physical function driver (root)
> fpga: xrt: main driver for management function device
> fpga: xrt: fpga-mgr and region implementation for xclbin download
> fpga: xrt: VSEC driver
> fpga: xrt: User Clock Subsystem driver
> fpga: xrt: ICAP driver
> fpga: xrt: devctl xrt driver
> fpga: xrt: clock driver
> fpga: xrt: clock frequency counter driver
> fpga: xrt: DDR calibration driver
> fpga: xrt: partition isolation driver
> fpga: xrt: Kconfig and Makefile updates for XRT drivers
>
> Documentation/fpga/index.rst | 1 +
> Documentation/fpga/xrt.rst | 844 +++++++++++++++++
> MAINTAINERS | 11 +
> drivers/Makefile | 1 +
> drivers/fpga/Kconfig | 2 +
> drivers/fpga/Makefile | 5 +
> drivers/fpga/xrt/Kconfig | 8 +
> drivers/fpga/xrt/include/events.h | 45 +
> drivers/fpga/xrt/include/group.h | 25 +
> drivers/fpga/xrt/include/metadata.h | 236 +++++
> drivers/fpga/xrt/include/subdev_id.h | 38 +
> drivers/fpga/xrt/include/xclbin-helper.h | 48 +
> drivers/fpga/xrt/include/xdevice.h | 131 +++
> drivers/fpga/xrt/include/xleaf.h | 205 +++++
> drivers/fpga/xrt/include/xleaf/axigate.h | 23 +
> drivers/fpga/xrt/include/xleaf/clkfreq.h | 21 +
> drivers/fpga/xrt/include/xleaf/clock.h | 29 +
> .../fpga/xrt/include/xleaf/ddr_calibration.h | 28 +
> drivers/fpga/xrt/include/xleaf/devctl.h | 40 +
> drivers/fpga/xrt/include/xleaf/icap.h | 27 +
> drivers/fpga/xrt/include/xmgnt-main.h | 34 +
> drivers/fpga/xrt/include/xroot.h | 117 +++
> drivers/fpga/xrt/lib/Kconfig | 17 +
> drivers/fpga/xrt/lib/Makefile | 30 +
> drivers/fpga/xrt/lib/cdev.c | 210 +++++
> drivers/fpga/xrt/lib/group.c | 278 ++++++
> drivers/fpga/xrt/lib/lib-drv.c | 328 +++++++
> drivers/fpga/xrt/lib/lib-drv.h | 15 +
> drivers/fpga/xrt/lib/subdev.c | 847 ++++++++++++++++++
> drivers/fpga/xrt/lib/subdev_pool.h | 53 ++
> drivers/fpga/xrt/lib/xclbin.c | 369 ++++++++
> drivers/fpga/xrt/lib/xleaf/axigate.c | 325 +++++++
> drivers/fpga/xrt/lib/xleaf/clkfreq.c | 223 +++++
> drivers/fpga/xrt/lib/xleaf/clock.c | 652 ++++++++++++++
> drivers/fpga/xrt/lib/xleaf/ddr_calibration.c | 210 +++++
> drivers/fpga/xrt/lib/xleaf/devctl.c | 169 ++++
> drivers/fpga/xrt/lib/xleaf/icap.c | 328 +++++++
> drivers/fpga/xrt/lib/xleaf/ucs.c | 152 ++++
> drivers/fpga/xrt/lib/xleaf/vsec.c | 372 ++++++++
> drivers/fpga/xrt/lib/xroot.c | 536 +++++++++++
> drivers/fpga/xrt/metadata/Kconfig | 12 +
> drivers/fpga/xrt/metadata/Makefile | 16 +
> drivers/fpga/xrt/metadata/metadata.c | 578 ++++++++++++
> drivers/fpga/xrt/mgnt/Kconfig | 15 +
> drivers/fpga/xrt/mgnt/Makefile | 19 +
> drivers/fpga/xrt/mgnt/root.c | 419 +++++++++
> drivers/fpga/xrt/mgnt/xmgnt-main-region.c | 485 ++++++++++
> drivers/fpga/xrt/mgnt/xmgnt-main.c | 660 ++++++++++++++
> drivers/fpga/xrt/mgnt/xmgnt.h | 33 +
> drivers/fpga/xrt/mgnt/xrt-mgr.c | 190 ++++
> drivers/fpga/xrt/mgnt/xrt-mgr.h | 16 +
> include/uapi/linux/xrt/xclbin.h | 409 +++++++++
> include/uapi/linux/xrt/xmgnt-ioctl.h | 46 +
> 53 files changed, 9931 insertions(+)
> create mode 100644 Documentation/fpga/xrt.rst
> create mode 100644 drivers/fpga/xrt/Kconfig
> create mode 100644 drivers/fpga/xrt/include/events.h
> create mode 100644 drivers/fpga/xrt/include/group.h
> create mode 100644 drivers/fpga/xrt/include/metadata.h
> create mode 100644 drivers/fpga/xrt/include/subdev_id.h
> create mode 100644 drivers/fpga/xrt/include/xclbin-helper.h
> create mode 100644 drivers/fpga/xrt/include/xdevice.h
> create mode 100644 drivers/fpga/xrt/include/xleaf.h
> create mode 100644 drivers/fpga/xrt/include/xleaf/axigate.h
> create mode 100644 drivers/fpga/xrt/include/xleaf/clkfreq.h
> create mode 100644 drivers/fpga/xrt/include/xleaf/clock.h
> create mode 100644 drivers/fpga/xrt/include/xleaf/ddr_calibration.h
> create mode 100644 drivers/fpga/xrt/include/xleaf/devctl.h
> create mode 100644 drivers/fpga/xrt/include/xleaf/icap.h
> create mode 100644 drivers/fpga/xrt/include/xmgnt-main.h
> create mode 100644 drivers/fpga/xrt/include/xroot.h
> create mode 100644 drivers/fpga/xrt/lib/Kconfig
> create mode 100644 drivers/fpga/xrt/lib/Makefile
> create mode 100644 drivers/fpga/xrt/lib/cdev.c
> create mode 100644 drivers/fpga/xrt/lib/group.c
> create mode 100644 drivers/fpga/xrt/lib/lib-drv.c
> create mode 100644 drivers/fpga/xrt/lib/lib-drv.h
> create mode 100644 drivers/fpga/xrt/lib/subdev.c
> create mode 100644 drivers/fpga/xrt/lib/subdev_pool.h
> create mode 100644 drivers/fpga/xrt/lib/xclbin.c
> create mode 100644 drivers/fpga/xrt/lib/xleaf/axigate.c
> create mode 100644 drivers/fpga/xrt/lib/xleaf/clkfreq.c
> create mode 100644 drivers/fpga/xrt/lib/xleaf/clock.c
> create mode 100644 drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
> create mode 100644 drivers/fpga/xrt/lib/xleaf/devctl.c
> create mode 100644 drivers/fpga/xrt/lib/xleaf/icap.c
> create mode 100644 drivers/fpga/xrt/lib/xleaf/ucs.c
> create mode 100644 drivers/fpga/xrt/lib/xleaf/vsec.c
> create mode 100644 drivers/fpga/xrt/lib/xroot.c
> create mode 100644 drivers/fpga/xrt/metadata/Kconfig
> create mode 100644 drivers/fpga/xrt/metadata/Makefile
> create mode 100644 drivers/fpga/xrt/metadata/metadata.c
> create mode 100644 drivers/fpga/xrt/mgnt/Kconfig
> create mode 100644 drivers/fpga/xrt/mgnt/Makefile
> create mode 100644 drivers/fpga/xrt/mgnt/root.c
> create mode 100644 drivers/fpga/xrt/mgnt/xmgnt-main-region.c
> create mode 100644 drivers/fpga/xrt/mgnt/xmgnt-main.c
> create mode 100644 drivers/fpga/xrt/mgnt/xmgnt.h
> create mode 100644 drivers/fpga/xrt/mgnt/xrt-mgr.c
> create mode 100644 drivers/fpga/xrt/mgnt/xrt-mgr.h
> create mode 100644 include/uapi/linux/xrt/xclbin.h
> create mode 100644 include/uapi/linux/xrt/xmgnt-ioctl.h
>

2021-05-03 18:06:55

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 04/20] fpga: xrt: xrt-lib driver manager


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> xrt-lib kernel module infrastructure code to register and manage all
> leaf driver modules.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/subdev_id.h | 38 ++++
> drivers/fpga/xrt/include/xdevice.h | 131 +++++++++++
> drivers/fpga/xrt/include/xleaf.h | 205 +++++++++++++++++
> drivers/fpga/xrt/lib/lib-drv.c | 328 +++++++++++++++++++++++++++
> drivers/fpga/xrt/lib/lib-drv.h | 15 ++
> 5 files changed, 717 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/subdev_id.h
> create mode 100644 drivers/fpga/xrt/include/xdevice.h
> create mode 100644 drivers/fpga/xrt/include/xleaf.h
> create mode 100644 drivers/fpga/xrt/lib/lib-drv.c
> create mode 100644 drivers/fpga/xrt/lib/lib-drv.h
>
> diff --git a/drivers/fpga/xrt/include/subdev_id.h b/drivers/fpga/xrt/include/subdev_id.h
> new file mode 100644
> index 000000000000..084737c53b88
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/subdev_id.h
> @@ -0,0 +1,38 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XRT_SUBDEV_ID_H_
> +#define _XRT_SUBDEV_ID_H_
> +
> +/*
> + * Every subdev driver has an ID for others to refer to it. There can be multiple number of
> + * instances of a subdev driver. A <subdev_id, subdev_instance> tuple is a unique identification
> + * of a specific instance of a subdev driver.
> + */
> +enum xrt_subdev_id {
> + XRT_SUBDEV_GRP = 1,

From v4, I meant that only XRT_SUBDEV_GRP = 0 needed an explicit value.

Why did all the values get incremented? (A sketch of that variant follows
the quoted enum below.)

> + XRT_SUBDEV_VSEC = 2,
> + XRT_SUBDEV_VSEC_GOLDEN = 3,
> + XRT_SUBDEV_DEVCTL = 4,
> + XRT_SUBDEV_AXIGATE = 5,
> + XRT_SUBDEV_ICAP = 6,
> + XRT_SUBDEV_TEST = 7,
> + XRT_SUBDEV_MGNT_MAIN = 8,
> + XRT_SUBDEV_QSPI = 9,
> + XRT_SUBDEV_MAILBOX = 10,
> + XRT_SUBDEV_CMC = 11,
> + XRT_SUBDEV_CALIB = 12,
> + XRT_SUBDEV_CLKFREQ = 13,
> + XRT_SUBDEV_CLOCK = 14,
> + XRT_SUBDEV_SRSR = 15,
> + XRT_SUBDEV_UCS = 16,
> + XRT_SUBDEV_NUM = 17, /* Total number of subdevs. */
Isn't this value now wrong?
> + XRT_ROOT = -1, /* Special ID for root driver. */
> +};
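
For what it's worth, here is a sketch of the variant the v4 comment seems to
have been asking for: pin only the first value and let the implicit increment
produce the rest, so XRT_SUBDEV_NUM always equals the number of subdev IDs and
cannot go stale when entries are added or removed:

        enum xrt_subdev_id {
                XRT_SUBDEV_GRP = 0,
                XRT_SUBDEV_VSEC,
                XRT_SUBDEV_VSEC_GOLDEN,
                XRT_SUBDEV_DEVCTL,
                XRT_SUBDEV_AXIGATE,
                XRT_SUBDEV_ICAP,
                XRT_SUBDEV_TEST,
                XRT_SUBDEV_MGNT_MAIN,
                XRT_SUBDEV_QSPI,
                XRT_SUBDEV_MAILBOX,
                XRT_SUBDEV_CMC,
                XRT_SUBDEV_CALIB,
                XRT_SUBDEV_CLKFREQ,
                XRT_SUBDEV_CLOCK,
                XRT_SUBDEV_SRSR,
                XRT_SUBDEV_UCS,
                XRT_SUBDEV_NUM,         /* total number of subdevs */
                XRT_ROOT = -1,          /* special ID for root driver */
        };
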
> +
> +#endif /* _XRT_SUBDEV_ID_H_ */
> diff --git a/drivers/fpga/xrt/include/xdevice.h b/drivers/fpga/xrt/include/xdevice.h
> new file mode 100644
> index 000000000000..3afd96989fc5
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xdevice.h
> @@ -0,0 +1,131 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#ifndef _XRT_DEVICE_H_
> +#define _XRT_DEVICE_H_
> +
> +#include <linux/fs.h>
> +#include <linux/cdev.h>
> +
> +#define XRT_MAX_DEVICE_NODES 128
> +#define XRT_INVALID_DEVICE_INST (XRT_MAX_DEVICE_NODES + 1)
> +
> +enum {
> + XRT_DEVICE_STATE_NONE = 0,
> + XRT_DEVICE_STATE_ADDED
> +};
> +
> +/*
> + * struct xrt_device - represent an xrt device on xrt bus
> + *
> + * dev: generic device interface.
> + * id: id of the xrt device.
> + */
> +struct xrt_device {
> + struct device dev;
> + u32 subdev_id;
> + const char *name;
> + u32 instance;
> + u32 state;
> + u32 num_resources;
> + struct resource *resource;
> + void *sdev_data;
> +};
> +
> +/*
> + * If populated by xrt device driver, infra will handle the mechanics of
> + * char device (un)registration.
> + */
> +enum xrt_dev_file_mode {
> + /* Infra create cdev, default file name */
> + XRT_DEV_FILE_DEFAULT = 0,
> + /* Infra create cdev, need to encode inst num in file name */
> + XRT_DEV_FILE_MULTI_INST,
> + /* No auto creation of cdev by infra, leaf handles it by itself */
> + XRT_DEV_FILE_NO_AUTO,
> +};
> +
> +struct xrt_dev_file_ops {
> + const struct file_operations xsf_ops;
> + dev_t xsf_dev_t;
> + const char *xsf_dev_name;
> + enum xrt_dev_file_mode xsf_mode;
> +};
> +
> +/*
> + * this struct define the endpoints belong to the same xrt device
> + */
> +struct xrt_dev_ep_names {
> + const char *ep_name;
> + const char *compat;
> +};
> +
> +struct xrt_dev_endpoints {
> + struct xrt_dev_ep_names *xse_names;
> + /* minimum number of endpoints to support the subdevice */
> + u32 xse_min_ep;
> +};
> +
> +/*
> + * struct xrt_driver - represent a xrt device driver
> + *
> + * drv: driver model structure.
> + * id_table: pointer to table of device IDs the driver is interested in.
> + * { } member terminated.
> + * probe: mandatory callback for device binding.
> + * remove: callback for device unbinding.
> + */
> +struct xrt_driver {
> + struct device_driver driver;
> + u32 subdev_id;
> + struct xrt_dev_file_ops file_ops;
> + struct xrt_dev_endpoints *endpoints;
> +
> + /*
> + * Subdev driver callbacks populated by subdev driver.
> + */
> + int (*probe)(struct xrt_device *xrt_dev);
> + void (*remove)(struct xrt_device *xrt_dev);
> + /*
> + * If leaf_call is defined, these are called by other leaf drivers.
> + * Note that root driver may call into leaf_call of a group driver.
> + */
> + int (*leaf_call)(struct xrt_device *xrt_dev, u32 cmd, void *arg);
> +};
> +
> +#define to_xrt_dev(d) container_of(d, struct xrt_device, dev)
> +#define to_xrt_drv(d) container_of(d, struct xrt_driver, driver)
> +
> +static inline void *xrt_get_drvdata(const struct xrt_device *xdev)
> +{
> + return dev_get_drvdata(&xdev->dev);
> +}
> +
> +static inline void xrt_set_drvdata(struct xrt_device *xdev, void *data)
> +{
> + dev_set_drvdata(&xdev->dev, data);
> +}
> +
> +static inline void *xrt_get_xdev_data(struct device *dev)
> +{
> + struct xrt_device *xdev = to_xrt_dev(dev);
> +
> + return xdev->sdev_data;
> +}
> +
> +struct xrt_device *
> +xrt_device_register(struct device *parent, u32 id,
> + struct resource *res, u32 res_num,
> + void *pdata, size_t data_sz);
> +void xrt_device_unregister(struct xrt_device *xdev);
> +int xrt_register_driver(struct xrt_driver *drv);
> +void xrt_unregister_driver(struct xrt_driver *drv);
> +void *xrt_get_xdev_data(struct device *dev);
> +struct resource *xrt_get_resource(struct xrt_device *xdev, u32 type, u32 num);
> +
> +#endif /* _XRT_DEVICE_H_ */
> diff --git a/drivers/fpga/xrt/include/xleaf.h b/drivers/fpga/xrt/include/xleaf.h
> new file mode 100644
> index 000000000000..f065fc766e0f
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf.h
> @@ -0,0 +1,205 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + * Sonal Santan <[email protected]>
> + */
> +
> +#ifndef _XRT_XLEAF_H_
> +#define _XRT_XLEAF_H_
> +
> +#include <linux/mod_devicetable.h>
> +#include "xdevice.h"
> +#include "subdev_id.h"
> +#include "xroot.h"
> +#include "events.h"
> +
> +/* All subdev drivers should use below common routines to print out msg. */
> +#define DEV(xdev) (&(xdev)->dev)
> +#define DEV_PDATA(xdev) \
> + ((struct xrt_subdev_platdata *)xrt_get_xdev_data(DEV(xdev)))
> +#define DEV_FILE_OPS(xdev) \
> + (&(to_xrt_drv((xdev)->dev.driver))->file_ops)
> +#define FMT_PRT(prt_fn, xdev, fmt, args...) \
> + ({typeof(xdev) (_xdev) = (xdev); \
> + prt_fn(DEV(_xdev), "%s %s: " fmt, \
> + DEV_PDATA(_xdev)->xsp_root_name, __func__, ##args); })
> +#define xrt_err(xdev, fmt, args...) FMT_PRT(dev_err, xdev, fmt, ##args)
> +#define xrt_warn(xdev, fmt, args...) FMT_PRT(dev_warn, xdev, fmt, ##args)
> +#define xrt_info(xdev, fmt, args...) FMT_PRT(dev_info, xdev, fmt, ##args)
> +#define xrt_dbg(xdev, fmt, args...) FMT_PRT(dev_dbg, xdev, fmt, ##args)
> +
> +#define XRT_DEFINE_REGMAP_CONFIG(config_name) \
> + static const struct regmap_config config_name = { \
> + .reg_bits = 32, \
> + .val_bits = 32, \
> + .reg_stride = 4, \
> + .max_register = 0x1000, \
> + }
> +
> +enum {
> + /* Starting cmd for common leaf cmd implemented by all leaves. */
> + XRT_XLEAF_COMMON_BASE = 0,
> + /* Starting cmd for leaves' specific leaf cmds. */
> + XRT_XLEAF_CUSTOM_BASE = 64,
> +};
> +
> +enum xrt_xleaf_common_leaf_cmd {
> + XRT_XLEAF_EVENT = XRT_XLEAF_COMMON_BASE,
> +};
> +
> +/*
> + * Partially initialized by the parent driver, then, passed in as subdev driver's
> + * platform data when creating subdev driver instance by calling platform
> + * device register API (xrt_device_register_data() or the likes).
> + *
> + * Once device register API returns, platform driver framework makes a copy of
> + * this buffer and maintains its life cycle. The content of the buffer is
> + * completely owned by subdev driver.
> + *
> + * Thus, parent driver should be very careful when it touches this buffer
> + * again once it's handed over to subdev driver. And the data structure
> + * should not contain pointers pointing to buffers that is managed by
> + * other or parent drivers since it could have been freed before platform
> + * data buffer is freed by platform driver framework.
> + */
> +struct xrt_subdev_platdata {
> + /*
> + * Per driver instance callback. The xdev points to the instance.
> + * Should always be defined for subdev driver to get service from root.
> + */
> + xrt_subdev_root_cb_t xsp_root_cb;
> + void *xsp_root_cb_arg;
> +
> + /* Something to associate w/ root for msg printing. */
> + const char *xsp_root_name;
> +
> + /*
> + * Char dev support for this subdev instance.
> + * Initialized by subdev driver.
> + */
> + struct cdev xsp_cdev;
> + struct device *xsp_sysdev;
> + struct mutex xsp_devnode_lock; /* devnode lock */
> + struct completion xsp_devnode_comp;
> + int xsp_devnode_ref;
> + bool xsp_devnode_online;
> + bool xsp_devnode_excl;
> +
> + /*
> + * Subdev driver specific init data. The buffer should be embedded
> + * in this data structure buffer after dtb, so that it can be freed
> + * together with platform data.
> + */
> + loff_t xsp_priv_off; /* Offset into this platform data buffer. */
> + size_t xsp_priv_len;
> +
> + /*
> + * Populated by parent driver to describe the device tree for
> + * the subdev driver to handle. Should always be last one since it's
> + * of variable length.
> + */
> + bool xsp_dtb_valid;
> + char xsp_dtb[0];
> +};
> +
> +struct subdev_match_arg {
> + enum xrt_subdev_id id;
> + int instance;
> +};
> +
> +bool xleaf_has_endpoint(struct xrt_device *xdev, const char *endpoint_name);
> +struct xrt_device *xleaf_get_leaf(struct xrt_device *xdev,
> + xrt_subdev_match_t cb, void *arg);
> +
> +static inline bool subdev_match(enum xrt_subdev_id id, struct xrt_device *xdev, void *arg)
> +{
> + const struct subdev_match_arg *a = (struct subdev_match_arg *)arg;
> + int instance = a->instance;
> +
> + if (id != a->id)
> + return false;
> + if (instance != xdev->instance && instance != XRT_INVALID_DEVICE_INST)
> + return false;
> + return true;
> +}
> +
> +static inline bool xrt_subdev_match_epname(enum xrt_subdev_id id,
> + struct xrt_device *xdev, void *arg)
> +{
> + return xleaf_has_endpoint(xdev, arg);
> +}
> +
> +static inline struct xrt_device *
> +xleaf_get_leaf_by_id(struct xrt_device *xdev,
> + enum xrt_subdev_id id, int instance)
> +{
> + struct subdev_match_arg arg = { id, instance };
> +
> + return xleaf_get_leaf(xdev, subdev_match, &arg);
> +}
> +
> +static inline struct xrt_device *
> +xleaf_get_leaf_by_epname(struct xrt_device *xdev, const char *name)
> +{
> + return xleaf_get_leaf(xdev, xrt_subdev_match_epname, (void *)name);
> +}
> +
> +static inline int xleaf_call(struct xrt_device *tgt, u32 cmd, void *arg)
> +{
> + return (to_xrt_drv(tgt->dev.driver)->leaf_call)(tgt, cmd, arg);
> +}
> +
> +int xleaf_broadcast_event(struct xrt_device *xdev, enum xrt_events evt, bool async);
> +int xleaf_create_group(struct xrt_device *xdev, char *dtb);
> +int xleaf_destroy_group(struct xrt_device *xdev, int instance);
> +void xleaf_get_root_res(struct xrt_device *xdev, u32 region_id, struct resource **res);
> +void xleaf_get_root_id(struct xrt_device *xdev, unsigned short *vendor, unsigned short *device,
> + unsigned short *subvendor, unsigned short *subdevice);
> +void xleaf_hot_reset(struct xrt_device *xdev);
> +int xleaf_put_leaf(struct xrt_device *xdev, struct xrt_device *leaf);
> +struct device *xleaf_register_hwmon(struct xrt_device *xdev, const char *name, void *drvdata,
> + const struct attribute_group **grps);
> +void xleaf_unregister_hwmon(struct xrt_device *xdev, struct device *hwmon);
> +int xleaf_wait_for_group_bringup(struct xrt_device *xdev);
> +
> +/*
> + * Character device helper APIs for use by leaf drivers
> + */
> +static inline bool xleaf_devnode_enabled(struct xrt_device *xdev)
> +{
> + return DEV_FILE_OPS(xdev)->xsf_ops.open;
> +}
> +
> +int xleaf_devnode_create(struct xrt_device *xdev,
> + const char *file_name, const char *inst_name);
> +void xleaf_devnode_destroy(struct xrt_device *xdev);
> +
> +struct xrt_device *xleaf_devnode_open_excl(struct inode *inode);
> +struct xrt_device *xleaf_devnode_open(struct inode *inode);
> +void xleaf_devnode_close(struct inode *inode);
> +
> +/* Module's init/fini routines for leaf driver in xrt-lib module */
> +#define XRT_LEAF_INIT_FINI_FUNC(name) \
> +void name##_leaf_init_fini(bool init) \
> +{ \
> + if (init) \
> + xrt_register_driver(&xrt_##name##_driver); \
> + else \
> + xrt_unregister_driver(&xrt_##name##_driver); \
> +}
> +
> +/* Module's init/fini routines for leaf driver in xrt-lib module */
> +void group_leaf_init_fini(bool init);
> +void vsec_leaf_init_fini(bool init);
> +void devctl_leaf_init_fini(bool init);
> +void axigate_leaf_init_fini(bool init);
> +void icap_leaf_init_fini(bool init);
> +void calib_leaf_init_fini(bool init);
> +void clkfreq_leaf_init_fini(bool init);
> +void clock_leaf_init_fini(bool init);
> +void ucs_leaf_init_fini(bool init);
> +
> +#endif /* _XRT_LEAF_H_ */
> diff --git a/drivers/fpga/xrt/lib/lib-drv.c b/drivers/fpga/xrt/lib/lib-drv.c
> new file mode 100644
> index 000000000000..ba4ac4930823
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/lib-drv.c
> @@ -0,0 +1,328 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#include <linux/module.h>
> +#include <linux/vmalloc.h>
> +#include <linux/slab.h>
> +#include "xleaf.h"
> +#include "xroot.h"
> +#include "lib-drv.h"
> +
> +#define XRT_IPLIB_MODULE_NAME "xrt-lib"
> +#define XRT_IPLIB_MODULE_VERSION "4.0.0"
> +#define XRT_DRVNAME(drv) ((drv)->driver.name)
> +
> +#define XRT_SUBDEV_ID_SHIFT 16
> +#define XRT_SUBDEV_ID_MASK ((1 << XRT_SUBDEV_ID_SHIFT) - 1)
> +
> +struct xrt_find_drv_data {
> + enum xrt_subdev_id id;
> + struct xrt_driver *xdrv;
> +};
> +
> +struct class *xrt_class;
> +static DEFINE_IDA(xrt_device_ida);
> +
> +static inline u32 xrt_instance_to_id(enum xrt_subdev_id id, u32 instance)
> +{
> + return (id << XRT_SUBDEV_ID_SHIFT) | instance;
> +}
> +
> +static inline u32 xrt_id_to_instance(u32 id)
> +{
> + return (id & XRT_SUBDEV_ID_MASK);
> +}
> +
> +static int xrt_bus_match(struct device *dev, struct device_driver *drv)
> +{
> + struct xrt_device *xdev = to_xrt_dev(dev);
> + struct xrt_driver *xdrv = to_xrt_drv(drv);
> +
> + if (xdev->subdev_id == xdrv->subdev_id)
> + return 1;
> +
> + return 0;
> +}
> +
> +static int xrt_bus_probe(struct device *dev)
> +{
> + struct xrt_driver *xdrv = to_xrt_drv(dev->driver);
> + struct xrt_device *xdev = to_xrt_dev(dev);
> +
> + return xdrv->probe(xdev);
> +}
> +
> +static int xrt_bus_remove(struct device *dev)
> +{
> + struct xrt_driver *xdrv = to_xrt_drv(dev->driver);
> + struct xrt_device *xdev = to_xrt_dev(dev);
> +
> + if (xdrv->remove)
> + xdrv->remove(xdev);
> +
> + return 0;
> +}
> +
> +struct bus_type xrt_bus_type = {
> + .name = "xrt",
> + .match = xrt_bus_match,
> + .probe = xrt_bus_probe,
> + .remove = xrt_bus_remove,
> +};
> +
> +int xrt_register_driver(struct xrt_driver *drv)
> +{
> + const char *drvname = XRT_DRVNAME(drv);
> + int rc = 0;
> +
> + /* Initialize dev_t for char dev node. */
> + if (drv->file_ops.xsf_ops.open) {
> + rc = alloc_chrdev_region(&drv->file_ops.xsf_dev_t, 0,
> + XRT_MAX_DEVICE_NODES, drvname);
> + if (rc) {
> + pr_err("failed to alloc dev minor for %s: %d\n", drvname, rc);
> + return rc;
> + }
> + } else {
> + drv->file_ops.xsf_dev_t = (dev_t)-1;
> + }
> +
> + drv->driver.owner = THIS_MODULE;
> + drv->driver.bus = &xrt_bus_type;
> +
> + rc = driver_register(&drv->driver);
> + if (rc) {
> + pr_err("register %s xrt driver failed\n", drvname);
> + if (drv->file_ops.xsf_dev_t != (dev_t)-1) {
> + unregister_chrdev_region(drv->file_ops.xsf_dev_t,
> + XRT_MAX_DEVICE_NODES);
> + }
> + return rc;
> + }
> +
> + pr_info("%s registered successfully\n", drvname);
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(xrt_register_driver);
> +
> +void xrt_unregister_driver(struct xrt_driver *drv)
> +{
> + driver_unregister(&drv->driver);
> +
> + if (drv->file_ops.xsf_dev_t != (dev_t)-1)
> + unregister_chrdev_region(drv->file_ops.xsf_dev_t, XRT_MAX_DEVICE_NODES);
> +
> + pr_info("%s unregistered successfully\n", XRT_DRVNAME(drv));
> +}
> +EXPORT_SYMBOL_GPL(xrt_unregister_driver);
> +
> +static int __find_driver(struct device_driver *drv, void *_data)
> +{
> + struct xrt_driver *xdrv = to_xrt_drv(drv);
> + struct xrt_find_drv_data *data = _data;
> +
> + if (xdrv->subdev_id == data->id) {
> + data->xdrv = xdrv;
> + return 1;
> + }
> +
> + return 0;
> +}
> +
> +const char *xrt_drv_name(enum xrt_subdev_id id)
> +{
> + struct xrt_find_drv_data data = { 0 };
> +
> + data.id = id;
> + bus_for_each_drv(&xrt_bus_type, NULL, &data, __find_driver);
> +
> + if (data.xdrv)
> + return XRT_DRVNAME(data.xdrv);
> +
> + return NULL;
> +}
> +
> +static int xrt_drv_get_instance(enum xrt_subdev_id id)
> +{
> + int ret;
> +
> + ret = ida_alloc_range(&xrt_device_ida, xrt_instance_to_id(id, 0),
> + xrt_instance_to_id(id, XRT_MAX_DEVICE_NODES),
> + GFP_KERNEL);
> + if (ret < 0)
> + return ret;
> +
> + return xrt_id_to_instance((u32)ret);
> +}
> +
> +static void xrt_drv_put_instance(enum xrt_subdev_id id, int instance)
> +{
> + ida_free(&xrt_device_ida, xrt_instance_to_id(id, instance));
> +}
> +
> +struct xrt_dev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id)
> +{
> + struct xrt_find_drv_data data = { 0 };
> +
> + data.id = id;
> + bus_for_each_drv(&xrt_bus_type, NULL, &data, __find_driver);
> +
> + if (data.xdrv)
> + return data.xdrv->endpoints;
> +
> + return NULL;
> +}
> +
> +static void xrt_device_release(struct device *dev)
> +{
> + struct xrt_device *xdev = container_of(dev, struct xrt_device, dev);
> +
> + kfree(xdev);
> +}
> +
> +void xrt_device_unregister(struct xrt_device *xdev)
> +{
> + if (xdev->state == XRT_DEVICE_STATE_ADDED)
> + device_del(&xdev->dev);
> +
> + vfree(xdev->sdev_data);
> + kfree(xdev->resource);
> +
> + if (xdev->instance != XRT_INVALID_DEVICE_INST)
> + xrt_drv_put_instance(xdev->subdev_id, xdev->instance);
> +
> + if (xdev->dev.release == xrt_device_release)
> + put_device(&xdev->dev);
> +}
> +
> +struct xrt_device *
> +xrt_device_register(struct device *parent, u32 id,
> + struct resource *res, u32 res_num,
> + void *pdata, size_t data_sz)
> +{
> + struct xrt_device *xdev = NULL;
> + int ret;
> +
> + xdev = kzalloc(sizeof(*xdev), GFP_KERNEL);
> + if (!xdev)
> + return NULL;
> + xdev->instance = XRT_INVALID_DEVICE_INST;
> +
> + /* Obtain dev instance number. */
> + ret = xrt_drv_get_instance(id);
> + if (ret < 0) {
> + dev_err(parent, "failed get instance, ret %d", ret);
> + goto fail;
> + }
> +
> + xdev->instance = ret;
> + xdev->name = xrt_drv_name(id);
> + xdev->subdev_id = id;
> + device_initialize(&xdev->dev);
> + xdev->dev.release = xrt_device_release;
> + xdev->dev.parent = parent;
> +
> + xdev->dev.bus = &xrt_bus_type;
> + dev_set_name(&xdev->dev, "%s.%d", xdev->name, xdev->instance);
> +
> + xdev->num_resources = res_num;
> + xdev->resource = kmemdup(res, sizeof(*res) * res_num, GFP_KERNEL);
> + if (!xdev->resource)
> + goto fail;
> +
> + xdev->sdev_data = vzalloc(data_sz);
> + if (!xdev->sdev_data)
> + goto fail;
> +
> + memcpy(xdev->sdev_data, pdata, data_sz);
> +
> + ret = device_add(&xdev->dev);
> + if (ret) {
> + dev_err(parent, "failed add device, ret %d", ret);
> + goto fail;
> + }
> + xdev->state = XRT_DEVICE_STATE_ADDED;
> +
> + return xdev;
> +
> +fail:
> + xrt_device_unregister(xdev);
> + kfree(xdev);
> +
> + return NULL;
> +}
> +
> +struct resource *xrt_get_resource(struct xrt_device *xdev, u32 type, u32 num)
> +{
> + u32 i;
> +
> + for (i = 0; i < xdev->num_resources; i++) {
> + struct resource *r = &xdev->resource[i];
> +
> + if (type == resource_type(r) && num-- == 0)
> + return r;
> + }
> + return NULL;
> +}
> +
> +/*
> + * Leaf driver's module init/fini callbacks. This is not a open infrastructure for dynamic
> + * plugging in drivers. All drivers should be statically added.

ok

Tom

> + */
> +static void (*leaf_init_fini_cbs[])(bool) = {
> + group_leaf_init_fini,
> + vsec_leaf_init_fini,
> + devctl_leaf_init_fini,
> + axigate_leaf_init_fini,
> + icap_leaf_init_fini,
> + calib_leaf_init_fini,
> + clkfreq_leaf_init_fini,
> + clock_leaf_init_fini,
> + ucs_leaf_init_fini,
> +};
> +
> +static __init int xrt_lib_init(void)
> +{
> + int ret;
> + int i;
> +
> + ret = bus_register(&xrt_bus_type);
> + if (ret)
> + return ret;
> +
> + xrt_class = class_create(THIS_MODULE, XRT_IPLIB_MODULE_NAME);
> + if (IS_ERR(xrt_class)) {
> + bus_unregister(&xrt_bus_type);
> + return PTR_ERR(xrt_class);
> + }
> +
> + for (i = 0; i < ARRAY_SIZE(leaf_init_fini_cbs); i++)
> + leaf_init_fini_cbs[i](true);
> + return 0;
> +}
> +
> +static __exit void xrt_lib_fini(void)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(leaf_init_fini_cbs); i++)
> + leaf_init_fini_cbs[i](false);
> +
> + class_destroy(xrt_class);
> + bus_unregister(&xrt_bus_type);
> +}
> +
> +module_init(xrt_lib_init);
> +module_exit(xrt_lib_fini);
> +
> +MODULE_VERSION(XRT_IPLIB_MODULE_VERSION);
> +MODULE_AUTHOR("XRT Team <[email protected]>");
> +MODULE_DESCRIPTION("Xilinx Alveo IP Lib driver");
> +MODULE_LICENSE("GPL v2");
> diff --git a/drivers/fpga/xrt/lib/lib-drv.h b/drivers/fpga/xrt/lib/lib-drv.h
> new file mode 100644
> index 000000000000..514f904c81c0
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/lib-drv.h
> @@ -0,0 +1,15 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _LIB_DRV_H_
> +#define _LIB_DRV_H_
> +
> +const char *xrt_drv_name(enum xrt_subdev_id id);
> +struct xrt_dev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id);
> +
> +#endif /* _LIB_DRV_H_ */
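
For readers new to the framework, a sketch of what a leaf driver built on these
interfaces looks like. The "foo" leaf below is hypothetical (not part of the
series), and XRT_SUBDEV_TEST is picked arbitrarily from the quoted enum; it only
illustrates how struct xrt_driver and the XRT_LEAF_INIT_FINI_FUNC() helper
quoted above fit together:

        /* assumes #include "xleaf.h" */
        static int xrt_foo_probe(struct xrt_device *xdev)
        {
                xrt_info(xdev, "probed");
                return 0;
        }

        static void xrt_foo_remove(struct xrt_device *xdev)
        {
                xrt_info(xdev, "removed");
        }

        static int xrt_foo_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
        {
                if (cmd == XRT_XLEAF_EVENT)
                        return 0;       /* ignore broadcast events */
                return -EOPNOTSUPP;
        }

        static struct xrt_driver xrt_foo_driver = {
                .driver = { .name = "xrt_foo" },
                .subdev_id = XRT_SUBDEV_TEST,
                .probe = xrt_foo_probe,
                .remove = xrt_foo_remove,
                .leaf_call = xrt_foo_leaf_call,
        };

        XRT_LEAF_INIT_FINI_FUNC(foo)

The leaf_init_fini_cbs[] table in lib-drv.c quoted above is what would then
invoke foo_leaf_init_fini() at xrt-lib module init/exit.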

2021-05-03 18:07:29

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 06/20] fpga: xrt: char dev node helper functions


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> Helper functions for char device node creation / removal for xrt
> drivers. This is part of xrt driver infrastructure.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/lib/cdev.c | 210 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 210 insertions(+)
> create mode 100644 drivers/fpga/xrt/lib/cdev.c
>
> diff --git a/drivers/fpga/xrt/lib/cdev.c b/drivers/fpga/xrt/lib/cdev.c
> new file mode 100644
> index 000000000000..4edd2c1d459b
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/cdev.c
> @@ -0,0 +1,210 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA device node helper functions.
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#include "xleaf.h"
> +
> +extern struct class *xrt_class;
> +
> +#define XRT_CDEV_DIR "xrt"
ok
> +#define INODE2PDATA(inode) \
> + container_of((inode)->i_cdev, struct xrt_subdev_platdata, xsp_cdev)
> +#define INODE2PDEV(inode) \
> + to_xrt_dev(kobj_to_dev((inode)->i_cdev->kobj.parent))
> +#define CDEV_NAME(sysdev) (strchr((sysdev)->kobj.name, '!') + 1)
> +
> +/* Allow it to be accessed from cdev. */
> +static void xleaf_devnode_allowed(struct xrt_device *xdev)
> +{
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(xdev);
> +
> + /* Allow new opens. */
> + mutex_lock(&pdata->xsp_devnode_lock);
> + pdata->xsp_devnode_online = true;
> + mutex_unlock(&pdata->xsp_devnode_lock);
> +}
> +
> +/* Turn off access from cdev and wait for all existing user to go away. */
> +static void xleaf_devnode_disallowed(struct xrt_device *xdev)
> +{
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(xdev);
> +
> + mutex_lock(&pdata->xsp_devnode_lock);
> +
> + /* Prevent new opens. */
> + pdata->xsp_devnode_online = false;
> + /* Wait for existing user to close. */
> + while (pdata->xsp_devnode_ref) {
> + mutex_unlock(&pdata->xsp_devnode_lock);
> + wait_for_completion(&pdata->xsp_devnode_comp);
> + mutex_lock(&pdata->xsp_devnode_lock);
> + }
> +
> + mutex_unlock(&pdata->xsp_devnode_lock);
> +}
> +
> +static struct xrt_device *
> +__xleaf_devnode_open(struct inode *inode, bool excl)
> +{
> + struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
> + struct xrt_device *xdev = INODE2PDEV(inode);
> + bool opened = false;
> +
> + mutex_lock(&pdata->xsp_devnode_lock);
> +
> + if (pdata->xsp_devnode_online) {
> + if (excl && pdata->xsp_devnode_ref) {
> + xrt_err(xdev, "%s has already been opened exclusively",
> + CDEV_NAME(pdata->xsp_sysdev));
> + } else if (!excl && pdata->xsp_devnode_excl) {
> + xrt_err(xdev, "%s has been opened exclusively",
> + CDEV_NAME(pdata->xsp_sysdev));
> + } else {
> + pdata->xsp_devnode_ref++;
> + pdata->xsp_devnode_excl = excl;
> + opened = true;
> + xrt_info(xdev, "opened %s, ref=%d",
> + CDEV_NAME(pdata->xsp_sysdev),
> + pdata->xsp_devnode_ref);
> + }
> + } else {
> + xrt_err(xdev, "%s is offline", CDEV_NAME(pdata->xsp_sysdev));
> + }
> +
> + mutex_unlock(&pdata->xsp_devnode_lock);
> +
> + xdev = opened ? xdev : NULL;
> + return xdev;
> +}
> +
> +struct xrt_device *
> +xleaf_devnode_open_excl(struct inode *inode)
> +{
> + return __xleaf_devnode_open(inode, true);
ok
> +}
> +
> +struct xrt_device *
> +xleaf_devnode_open(struct inode *inode)
> +{
> + return __xleaf_devnode_open(inode, false);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_devnode_open);
ok
> +
> +void xleaf_devnode_close(struct inode *inode)
> +{
> + struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
> + struct xrt_device *xdev = INODE2PDEV(inode);
> + bool notify = false;
> +
> + mutex_lock(&pdata->xsp_devnode_lock);
> +
> + WARN_ON(pdata->xsp_devnode_ref == 0);
> + pdata->xsp_devnode_ref--;
> + if (pdata->xsp_devnode_ref == 0) {
> + pdata->xsp_devnode_excl = false;
> + notify = true;
> + }
> + if (notify)
> + xrt_info(xdev, "closed %s", CDEV_NAME(pdata->xsp_sysdev));
ok
> + else
> + xrt_info(xdev, "closed %s, notifying waiter", CDEV_NAME(pdata->xsp_sysdev));
> +
> + mutex_unlock(&pdata->xsp_devnode_lock);
> +
> + if (notify)
> + complete(&pdata->xsp_devnode_comp);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_devnode_close);
> +
> +static inline enum xrt_dev_file_mode
> +devnode_mode(struct xrt_device *xdev)
> +{
> + return DEV_FILE_OPS(xdev)->xsf_mode;
> +}
> +
> +int xleaf_devnode_create(struct xrt_device *xdev, const char *file_name,
> + const char *inst_name)
> +{
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(xdev);
> + struct xrt_dev_file_ops *fops = DEV_FILE_OPS(xdev);
> + struct cdev *cdevp;
> + struct device *sysdev;
> + int ret = 0;
> + char fname[256];
> +
> + mutex_init(&pdata->xsp_devnode_lock);
> + init_completion(&pdata->xsp_devnode_comp);
> +
> + cdevp = &DEV_PDATA(xdev)->xsp_cdev;
> + cdev_init(cdevp, &fops->xsf_ops);
> + cdevp->owner = fops->xsf_ops.owner;
> + cdevp->dev = MKDEV(MAJOR(fops->xsf_dev_t), xdev->instance);
> +
> + /*
> + * Set xdev as parent of cdev so that when xdev (and its platform
> + * data) will not be freed when cdev is not freed.
> + */
> + cdev_set_parent(cdevp, &DEV(xdev)->kobj);
> +
> + ret = cdev_add(cdevp, cdevp->dev, 1);
> + if (ret) {
> + xrt_err(xdev, "failed to add cdev: %d", ret);
> + goto failed;
> + }
> + if (!file_name)
> + file_name = xdev->name;
> + if (!inst_name) {
> + if (devnode_mode(xdev) == XRT_DEV_FILE_MULTI_INST) {
> + snprintf(fname, sizeof(fname), "%s/%s/%s.%u",
> + XRT_CDEV_DIR, DEV_PDATA(xdev)->xsp_root_name,
> + file_name, xdev->instance);
> + } else {
> + snprintf(fname, sizeof(fname), "%s/%s/%s",
> + XRT_CDEV_DIR, DEV_PDATA(xdev)->xsp_root_name,
> + file_name);
> + }
> + } else {
> + snprintf(fname, sizeof(fname), "%s/%s/%s.%s", XRT_CDEV_DIR,
> + DEV_PDATA(xdev)->xsp_root_name, file_name, inst_name);
> + }
> + sysdev = device_create(xrt_class, NULL, cdevp->dev, NULL, "%s", fname);
> + if (IS_ERR(sysdev)) {
> + ret = PTR_ERR(sysdev);
> + xrt_err(xdev, "failed to create device node: %d", ret);
> + goto failed_cdev_add;
> + }
> + pdata->xsp_sysdev = sysdev;
> +
> + xleaf_devnode_allowed(xdev);
> +
> + xrt_info(xdev, "created (%d, %d): /dev/%s",
> + MAJOR(cdevp->dev), xdev->instance, fname);
> + return 0;
> +
> +failed_cdev_add:
> + cdev_del(cdevp);
> +failed:
> + cdevp->owner = NULL;
> + return ret;
> +}
> +
> +void xleaf_devnode_destroy(struct xrt_device *xdev)
> +{
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(xdev);
> + struct cdev *cdevp = &pdata->xsp_cdev;
> + dev_t dev = cdevp->dev;
> +
> + xleaf_devnode_disallowed(xdev);

ok

Reviewed-by: Tom Rix <[email protected]>

> +
> + xrt_info(xdev, "removed (%d, %d): /dev/%s/%s", MAJOR(dev), MINOR(dev),
> + XRT_CDEV_DIR, CDEV_NAME(pdata->xsp_sysdev));
> + device_destroy(xrt_class, cdevp->dev);
> + pdata->xsp_sysdev = NULL;
> + cdev_del(cdevp);
> +}
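
As a usage note, a sketch (again with a hypothetical "foo" leaf) of the
file_operations that sit on top of these helpers; xleaf_devnode_open() returns
NULL when the node is offline or held exclusively, and every successful open is
paired with xleaf_devnode_close():

        /* assumes #include "xleaf.h" */
        static int xrt_foo_open(struct inode *inode, struct file *file)
        {
                struct xrt_device *xdev = xleaf_devnode_open(inode);

                /* NULL means the node is offline or already opened exclusively */
                if (!xdev)
                        return -EBUSY;
                file->private_data = xdev;
                return 0;
        }

        static int xrt_foo_release(struct inode *inode, struct file *file)
        {
                xleaf_devnode_close(inode);
                file->private_data = NULL;
                return 0;
        }

These would be plugged into the leaf's xrt_dev_file_ops.xsf_ops as
.open/.release, at which point the infrastructure auto-creates the device node
for the XRT_DEV_FILE_DEFAULT and XRT_DEV_FILE_MULTI_INST modes.
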

2021-05-03 18:08:31

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 08/20] fpga: xrt: driver infrastructure


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> Infrastructure code providing APIs for managing leaf driver instance
> groups, facilitating inter-leaf driver calls and root calls.
ok, keeping debugging msg
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/lib/subdev.c | 847 ++++++++++++++++++++++++++++++++++
> 1 file changed, 847 insertions(+)
> create mode 100644 drivers/fpga/xrt/lib/subdev.c
>
> diff --git a/drivers/fpga/xrt/lib/subdev.c b/drivers/fpga/xrt/lib/subdev.c
> new file mode 100644
> index 000000000000..710ccf2a2121
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/subdev.c
> @@ -0,0 +1,847 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#include <linux/vmalloc.h>
> +#include <linux/slab.h>
> +#include "xleaf.h"
> +#include "subdev_pool.h"
> +#include "lib-drv.h"
> +#include "metadata.h"
> +
> +extern struct bus_type xrt_bus_type;
> +
> +#define IS_ROOT_DEV(dev) ((dev)->bus != &xrt_bus_type)
> +#define XRT_HOLDER_BUF_SZ 1024
> +
ok
> +static inline struct device *find_root(struct xrt_device *xdev)
> +{
> + struct device *d = DEV(xdev);
> +
> + while (!IS_ROOT_DEV(d))
> + d = d->parent;
> + return d;
> +}
> +
> +/*
> + * It represents a holder of a subdev. One holder can repeatedly hold a subdev
> + * as long as there is a unhold corresponding to a hold.
> + */
> +struct xrt_subdev_holder {
> + struct list_head xsh_holder_list;
> + struct device *xsh_holder;
> + int xsh_count;
> + struct kref xsh_kref;
> +};
> +
> +/*
> + * It represents a specific instance of platform driver for a subdev, which
> + * provides services to its clients (another subdev driver or root driver).
> + */
> +struct xrt_subdev {
> + struct list_head xs_dev_list;
> + struct list_head xs_holder_list;
> + enum xrt_subdev_id xs_id; /* type of subdev */
> + struct xrt_device *xs_xdev;
> + struct completion xs_holder_comp;
> +};
> +
> +static struct xrt_subdev *xrt_subdev_alloc(void)
> +{
> + struct xrt_subdev *sdev = kzalloc(sizeof(*sdev), GFP_KERNEL);
> +
> + if (!sdev)
> + return NULL;
> +
> + INIT_LIST_HEAD(&sdev->xs_dev_list);
> + INIT_LIST_HEAD(&sdev->xs_holder_list);
> + init_completion(&sdev->xs_holder_comp);
> + return sdev;
> +}
> +
ok, removed singleton
> +int xrt_subdev_root_request(struct xrt_device *self, u32 cmd, void *arg)
> +{
> + struct device *dev = DEV(self);
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(self);
> +
> + WARN_ON(!pdata->xsp_root_cb);
> + return (*pdata->xsp_root_cb)(dev->parent, pdata->xsp_root_cb_arg, cmd, arg);
> +}
> +
> +/*
> + * Subdev common sysfs nodes.
> + */
> +static ssize_t holders_show(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> + ssize_t len;
> + struct xrt_device *xdev = to_xrt_dev(dev);
> + struct xrt_root_get_holders holders = { xdev, buf, XRT_HOLDER_BUF_SZ };
ok
> +
> + len = xrt_subdev_root_request(xdev, XRT_ROOT_GET_LEAF_HOLDERS, &holders);
> + if (len >= holders.xpigh_holder_buf_len)
> + return len;
> + buf[len] = '\n';
> + return len + 1;
> +}
> +static DEVICE_ATTR_RO(holders);
> +
> +static struct attribute *xrt_subdev_attrs[] = {
> + &dev_attr_holders.attr,
> + NULL,
> +};
> +
> +static ssize_t metadata_output(struct file *filp, struct kobject *kobj,
> + struct bin_attribute *attr, char *buf, loff_t off, size_t count)
> +{
> + struct device *dev = kobj_to_dev(kobj);
> + struct xrt_device *xdev = to_xrt_dev(dev);
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(xdev);
> + unsigned char *blob;
> + unsigned long size;
> + ssize_t ret = 0;
> +
> + blob = pdata->xsp_dtb;
> + size = xrt_md_size(dev, blob);
> + if (size == XRT_MD_INVALID_LENGTH) {
> + ret = -EINVAL;
> + goto failed;
> + }
> +
> + if (off >= size) {
> + dev_dbg(dev, "offset (%lld) beyond total size: %ld\n", off, size);
ok
> + goto failed;
> + }
> +
> + if (off + count > size) {
> + dev_dbg(dev, "count (%ld) beyond left bytes: %lld\n", count, size - off);
> + count = size - off;
> + }
> + memcpy(buf, blob + off, count);
> +
> + ret = count;
> +failed:
> + return ret;
> +}
> +
> +static struct bin_attribute meta_data_attr = {
> + .attr = {
> + .name = "metadata",
> + .mode = 0400
> + },
ok, I guess..
> + .read = metadata_output,
> + .size = 0
> +};
> +
> +static struct bin_attribute *xrt_subdev_bin_attrs[] = {
> + &meta_data_attr,
> + NULL,
> +};
> +
> +static const struct attribute_group xrt_subdev_attrgroup = {
> + .attrs = xrt_subdev_attrs,
> + .bin_attrs = xrt_subdev_bin_attrs,
> +};
> +
> +/*
> + * Given the device metadata, parse it to get IO ranges and construct
> + * resource array.
> + */
> +static int
> +xrt_subdev_getres(struct device *parent, enum xrt_subdev_id id,
> + char *dtb, struct resource **res, int *res_num)
> +{
> + struct xrt_subdev_platdata *pdata;
> + struct resource *pci_res = NULL;
> + const u64 *bar_range;
> + const u32 *bar_idx;
> + char *ep_name = NULL, *compat = NULL;
> + uint bar;
> + int count1 = 0, count2 = 0, ret;
> +
> + if (!dtb)
> + return -EINVAL;
> +
> + pdata = DEV_PDATA(to_xrt_dev(parent));
> +
> + /* go through metadata and count endpoints in it */
> + xrt_md_get_next_endpoint(parent, dtb, NULL, NULL, &ep_name, &compat);
> + while (ep_name) {
ok
> + ret = xrt_md_get_prop(parent, dtb, ep_name, compat,
> + XRT_MD_PROP_IO_OFFSET, (const void **)&bar_range, NULL);
> + if (!ret)
> + count1++;
> + xrt_md_get_next_endpoint(parent, dtb, ep_name, compat, &ep_name, &compat);
> + }
> + if (!count1)
> + return 0;
> +
> + /* allocate resource array for all endpoints been found in metadata */
> + *res = vzalloc(sizeof(**res) * count1);
ok
> +
> + /* go through all endpoints again and get IO range for each endpoint */
> + ep_name = NULL;
> + xrt_md_get_next_endpoint(parent, dtb, NULL, NULL, &ep_name, &compat);
> + while (ep_name) {
> + ret = xrt_md_get_prop(parent, dtb, ep_name, compat,
> + XRT_MD_PROP_IO_OFFSET, (const void **)&bar_range, NULL);
> + if (ret)
> + continue;
> + xrt_md_get_prop(parent, dtb, ep_name, compat,
> + XRT_MD_PROP_BAR_IDX, (const void **)&bar_idx, NULL);
> + bar = bar_idx ? be32_to_cpu(*bar_idx) : 0;
> + xleaf_get_root_res(to_xrt_dev(parent), bar, &pci_res);
> + (*res)[count2].start = pci_res->start + be64_to_cpu(bar_range[0]);
> + (*res)[count2].end = pci_res->start + be64_to_cpu(bar_range[0]) +
> + be64_to_cpu(bar_range[1]) - 1;
> + (*res)[count2].flags = IORESOURCE_MEM;
> + /* check if there is a conflicting resource */
> + ret = request_resource(pci_res, *res + count2);
> + if (ret) {
> + dev_err(parent, "Conflict resource %pR\n", *res + count2);
> + vfree(*res);
> + *res_num = 0;
> + *res = NULL;
> + return ret;
> + }
> + release_resource(*res + count2);
> +
> + (*res)[count2].parent = pci_res;
> +
> + xrt_md_find_endpoint(parent, pdata->xsp_dtb, ep_name,
> + compat, &(*res)[count2].name);
> +
> + count2++;
> + xrt_md_get_next_endpoint(parent, dtb, ep_name, compat, &ep_name, &compat);
> + }
> +
> + WARN_ON(count1 != count2);
> + *res_num = count2;
> +
> + return 0;
> +}
> +
> +static inline enum xrt_dev_file_mode
> +xleaf_devnode_mode(struct xrt_device *xdev)
> +{
> + return DEV_FILE_OPS(xdev)->xsf_mode;
> +}
> +
> +static bool xrt_subdev_cdev_auto_creation(struct xrt_device *xdev)
> +{
> + enum xrt_dev_file_mode mode = xleaf_devnode_mode(xdev);
> +
> + if (!xleaf_devnode_enabled(xdev))
> + return false;
> +
> + return (mode == XRT_DEV_FILE_DEFAULT || mode == XRT_DEV_FILE_MULTI_INST);
> +}
> +
> +static struct xrt_subdev *
> +xrt_subdev_create(struct device *parent, enum xrt_subdev_id id,
> + xrt_subdev_root_cb_t pcb, void *pcb_arg, char *dtb)
> +{
> + struct xrt_subdev_platdata *pdata = NULL;
> + struct xrt_subdev *sdev = NULL;
> + struct xrt_device *xdev = NULL;
> + struct resource *res = NULL;
> + unsigned long dtb_len = 0;
> + bool dtb_alloced = false;
> + int res_num = 0;
> + size_t pdata_sz;
> + int ret;
> +
> + sdev = xrt_subdev_alloc();
> + if (!sdev) {
> + dev_err(parent, "failed to alloc subdev for ID %d", id);
> + return NULL;
> + }
> + sdev->xs_id = id;
> +
> + if (!dtb) {
> + ret = xrt_md_create(parent, &dtb);
> + if (ret) {
> + dev_err(parent, "can't create empty dtb: %d", ret);
> + goto fail;
> + }
> + dtb_alloced = true;
> + }
> + xrt_md_pack(parent, dtb);
> + dtb_len = xrt_md_size(parent, dtb);
> + if (dtb_len == XRT_MD_INVALID_LENGTH) {
> + dev_err(parent, "invalid metadata len %ld", dtb_len);
> + goto fail1;
> + }
> + pdata_sz = sizeof(struct xrt_subdev_platdata) + dtb_len;
> +
> + /* Prepare platform data passed to subdev. */
> + pdata = vzalloc(pdata_sz);
> + if (!pdata)
> + goto fail1;
> +
> + pdata->xsp_root_cb = pcb;
> + pdata->xsp_root_cb_arg = pcb_arg;
> + memcpy(pdata->xsp_dtb, dtb, dtb_len);
> + if (id == XRT_SUBDEV_GRP) {
> + /* Group can only be created by root driver. */
> + pdata->xsp_root_name = dev_name(parent);
> + } else {
> + struct xrt_device *grp = to_xrt_dev(parent);
> +
> + /* Leaf can only be created by group driver. */
> + WARN_ON(to_xrt_drv(parent->driver)->subdev_id != XRT_SUBDEV_GRP);
> + pdata->xsp_root_name = DEV_PDATA(grp)->xsp_root_name;
> + }
> +
> + /* Create subdev. */
> + if (id != XRT_SUBDEV_GRP) {
> + int rc = xrt_subdev_getres(parent, id, dtb, &res, &res_num);
> +
> + if (rc) {
> + dev_err(parent, "failed to get resource for %s: %d",
> + xrt_drv_name(id), rc);
> + goto fail2;
> + }
> + }
> + xdev = xrt_device_register(parent, id, res, res_num, pdata, pdata_sz);
> + vfree(res);
> + if (!xdev) {
> + dev_err(parent, "failed to create subdev for %s", xrt_drv_name(id));
> + goto fail2;
> + }
> + sdev->xs_xdev = xdev;
> +
> + if (device_attach(DEV(xdev)) != 1) {
> + xrt_err(xdev, "failed to attach");
> + goto fail3;
> + }
> +
> + if (sysfs_create_group(&DEV(xdev)->kobj, &xrt_subdev_attrgroup))
> + xrt_err(xdev, "failed to create sysfs group");
> +
> + /*
> + * Create sysfs sym link under root for leaves
> + * under random groups for easy access to them.
> + */
> + if (id != XRT_SUBDEV_GRP) {
> + if (sysfs_create_link(&find_root(xdev)->kobj,
> + &DEV(xdev)->kobj, dev_name(DEV(xdev)))) {
> + xrt_err(xdev, "failed to create sysfs link");
> + }
> + }
> +
> + /* All done, ready to handle req thru cdev. */
> + if (xrt_subdev_cdev_auto_creation(xdev))
> + xleaf_devnode_create(xdev, DEV_FILE_OPS(xdev)->xsf_dev_name, NULL);
> +
> + vfree(pdata);
> + return sdev;
> +
> +fail3:
> + xrt_device_unregister(sdev->xs_xdev);
ok
> +fail2:
> + vfree(pdata);
> +fail1:
> + if (dtb_alloced)
> + vfree(dtb);
> +fail:
> + kfree(sdev);
> + return NULL;
> +}
> +
> +static void xrt_subdev_destroy(struct xrt_subdev *sdev)
> +{
> + struct xrt_device *xdev = sdev->xs_xdev;
> + struct device *dev = DEV(xdev);
> +
> + /* Take down the device node */
> + if (xrt_subdev_cdev_auto_creation(xdev))
> + xleaf_devnode_destroy(xdev);
> + if (sdev->xs_id != XRT_SUBDEV_GRP)
> + sysfs_remove_link(&find_root(xdev)->kobj, dev_name(dev));
> + sysfs_remove_group(&dev->kobj, &xrt_subdev_attrgroup);
> + xrt_device_unregister(xdev);
> + kfree(sdev);
> +}
> +
> +struct xrt_device *
> +xleaf_get_leaf(struct xrt_device *xdev, xrt_subdev_match_t match_cb, void *match_arg)
> +{
> + int rc;
> + struct xrt_root_get_leaf get_leaf = {
> + xdev, match_cb, match_arg, };
> +
> + rc = xrt_subdev_root_request(xdev, XRT_ROOT_GET_LEAF, &get_leaf);
> + if (rc)
> + return NULL;
> + return get_leaf.xpigl_tgt_xdev;
> +}
> +EXPORT_SYMBOL_GPL(xleaf_get_leaf);
> +
> +bool xleaf_has_endpoint(struct xrt_device *xdev, const char *endpoint_name)
> +{
> + struct resource *res;
> + int i = 0;
> +
> + do {
> + res = xrt_get_resource(xdev, IORESOURCE_MEM, i);
> + if (res && !strncmp(res->name, endpoint_name, strlen(res->name) + 1))
> + return true;
> + ++i;
> + } while (res);
> +
> + return false;
> +}
> +EXPORT_SYMBOL_GPL(xleaf_has_endpoint);
> +
> +int xleaf_put_leaf(struct xrt_device *xdev, struct xrt_device *leaf)
> +{
> + struct xrt_root_put_leaf put_leaf = { xdev, leaf };
> +
> + return xrt_subdev_root_request(xdev, XRT_ROOT_PUT_LEAF, &put_leaf);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_put_leaf);
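(Aside, not part of the patch: a minimal sketch of how a leaf is expected to pair these two
calls. The match callback, target id and function names here are purely illustrative.)

    static bool match_by_id(enum xrt_subdev_id id, struct xrt_device *xdev, void *arg)
    {
            return id == (enum xrt_subdev_id)(uintptr_t)arg;
    }

    static int hold_and_use_clock_leaf(struct xrt_device *xdev)
    {
            struct xrt_device *leaf;

            /* Root resolves the lookup through XRT_ROOT_GET_LEAF. */
            leaf = xleaf_get_leaf(xdev, match_by_id, (void *)(uintptr_t)XRT_SUBDEV_CLOCK);
            if (!leaf)
                    return -ENODEV;

            /* ... interact with the held leaf, typically via xleaf_call() ... */

            /* Drop the hold so the owning group can be torn down later. */
            return xleaf_put_leaf(xdev, leaf);
    }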
> +
> +int xleaf_create_group(struct xrt_device *xdev, char *dtb)
> +{
> + return xrt_subdev_root_request(xdev, XRT_ROOT_CREATE_GROUP, dtb);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_create_group);
> +
> +int xleaf_destroy_group(struct xrt_device *xdev, int instance)
> +{
> + return xrt_subdev_root_request(xdev, XRT_ROOT_REMOVE_GROUP, (void *)(uintptr_t)instance);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_destroy_group);
> +
> +int xleaf_wait_for_group_bringup(struct xrt_device *xdev)
> +{
> + return xrt_subdev_root_request(xdev, XRT_ROOT_WAIT_GROUP_BRINGUP, NULL);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_wait_for_group_bringup);
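(A sketch of the group lifecycle as seen from a leaf, for instance when new partition
metadata becomes available; dtb construction is elided and the function name is illustrative.)

    static int bring_up_group_from_dtb(struct xrt_device *xdev, char *dtb)
    {
            int instance;

            /* Root creates the group described by the dtb and returns its instance. */
            instance = xleaf_create_group(xdev, dtb);
            if (instance < 0)
                    return instance;

            /* Block until the bringup work has initialized all children. */
            if (xleaf_wait_for_group_bringup(xdev)) {
                    xleaf_destroy_group(xdev, instance);
                    return -EINVAL;
            }

            return instance;
    }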
> +
> +static ssize_t
> +xrt_subdev_get_holders(struct xrt_subdev *sdev, char *buf, size_t len)
> +{
> + const struct list_head *ptr;
> + struct xrt_subdev_holder *h;
> + ssize_t n = 0;
> +
> + list_for_each(ptr, &sdev->xs_holder_list) {
> + h = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
> + n += snprintf(buf + n, len - n, "%s:%d ",
> + dev_name(h->xsh_holder), kref_read(&h->xsh_kref));
> + /* Truncation is fine here. Buffer content is only for debugging. */

ok

Reviewed-by: Tom Rix <[email protected]>

> + if (n >= (len - 1))
> + break;
> + }
> + return n;
> +}
> +
> +void xrt_subdev_pool_init(struct device *dev, struct xrt_subdev_pool *spool)
> +{
> + INIT_LIST_HEAD(&spool->xsp_dev_list);
> + spool->xsp_owner = dev;
> + mutex_init(&spool->xsp_lock);
> + spool->xsp_closing = false;
> +}
> +
> +static void xrt_subdev_free_holder(struct xrt_subdev_holder *holder)
> +{
> + list_del(&holder->xsh_holder_list);
> + vfree(holder);
> +}
> +
> +static void xrt_subdev_pool_wait_for_holders(struct xrt_subdev_pool *spool, struct xrt_subdev *sdev)
> +{
> + const struct list_head *ptr, *next;
> + char holders[128];
> + struct xrt_subdev_holder *holder;
> + struct mutex *lk = &spool->xsp_lock;
> +
> + while (!list_empty(&sdev->xs_holder_list)) {
> + int rc;
> +
> + /* It's most likely a bug if we ever enter this loop. */
> + xrt_subdev_get_holders(sdev, holders, sizeof(holders));
> + xrt_err(sdev->xs_xdev, "awaits holders: %s", holders);
> + mutex_unlock(lk);
> + rc = wait_for_completion_killable(&sdev->xs_holder_comp);
> + mutex_lock(lk);
> + if (rc == -ERESTARTSYS) {
> + xrt_err(sdev->xs_xdev, "give up on waiting for holders, clean up now");
> + list_for_each_safe(ptr, next, &sdev->xs_holder_list) {
> + holder = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
> + xrt_subdev_free_holder(holder);
> + }
> + }
> + }
> +}
> +
> +void xrt_subdev_pool_fini(struct xrt_subdev_pool *spool)
> +{
> + struct list_head *dl = &spool->xsp_dev_list;
> + struct mutex *lk = &spool->xsp_lock;
> +
> + mutex_lock(lk);
> + if (spool->xsp_closing) {
> + mutex_unlock(lk);
> + return;
> + }
> + spool->xsp_closing = true;
> + mutex_unlock(lk);
> +
> + /* Remove subdevs in the reverse order they were added. */
> + while (!list_empty(dl)) {
> + struct xrt_subdev *sdev = list_first_entry(dl, struct xrt_subdev, xs_dev_list);
> +
> + xrt_subdev_pool_wait_for_holders(spool, sdev);
> + list_del(&sdev->xs_dev_list);
> + xrt_subdev_destroy(sdev);
> + }
> +}
> +
> +static struct xrt_subdev_holder *xrt_subdev_find_holder(struct xrt_subdev *sdev,
> + struct device *holder_dev)
> +{
> + struct list_head *hl = &sdev->xs_holder_list;
> + struct xrt_subdev_holder *holder;
> + const struct list_head *ptr;
> +
> + list_for_each(ptr, hl) {
> + holder = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
> + if (holder->xsh_holder == holder_dev)
> + return holder;
> + }
> + return NULL;
> +}
> +
> +static int xrt_subdev_hold(struct xrt_subdev *sdev, struct device *holder_dev)
> +{
> + struct xrt_subdev_holder *holder = xrt_subdev_find_holder(sdev, holder_dev);
> + struct list_head *hl = &sdev->xs_holder_list;
> +
> + if (!holder) {
> + holder = vzalloc(sizeof(*holder));
> + if (!holder)
> + return -ENOMEM;
> + holder->xsh_holder = holder_dev;
> + kref_init(&holder->xsh_kref);
> + list_add_tail(&holder->xsh_holder_list, hl);
> + } else {
> + kref_get(&holder->xsh_kref);
> + }
> +
> + return 0;
> +}
> +
> +static void xrt_subdev_free_holder_kref(struct kref *kref)
> +{
> + struct xrt_subdev_holder *holder = container_of(kref, struct xrt_subdev_holder, xsh_kref);
> +
> + xrt_subdev_free_holder(holder);
> +}
> +
> +static int
> +xrt_subdev_release(struct xrt_subdev *sdev, struct device *holder_dev)
> +{
> + struct xrt_subdev_holder *holder = xrt_subdev_find_holder(sdev, holder_dev);
> + struct list_head *hl = &sdev->xs_holder_list;
> +
> + if (!holder) {
> + dev_err(holder_dev, "can't release, %s did not hold %s",
> + dev_name(holder_dev), dev_name(DEV(sdev->xs_xdev)));
> + return -EINVAL;
> + }
> + kref_put(&holder->xsh_kref, xrt_subdev_free_holder_kref);
> +
> + /* kref_put above may remove holder from list. */
> + if (list_empty(hl))
> + complete(&sdev->xs_holder_comp);
> + return 0;
> +}
> +
> +int xrt_subdev_pool_add(struct xrt_subdev_pool *spool, enum xrt_subdev_id id,
> + xrt_subdev_root_cb_t pcb, void *pcb_arg, char *dtb)
> +{
> + struct mutex *lk = &spool->xsp_lock;
> + struct list_head *dl = &spool->xsp_dev_list;
> + struct xrt_subdev *sdev;
> + int ret = 0;
> +
> + sdev = xrt_subdev_create(spool->xsp_owner, id, pcb, pcb_arg, dtb);
> + if (sdev) {
> + mutex_lock(lk);
> + if (spool->xsp_closing) {
> + /* No new subdev when pool is going away. */
> + xrt_err(sdev->xs_xdev, "pool is closing");
> + ret = -ENODEV;
> + } else {
> + list_add(&sdev->xs_dev_list, dl);
> + }
> + mutex_unlock(lk);
> + if (ret)
> + xrt_subdev_destroy(sdev);
> + } else {
> + ret = -EINVAL;
> + }
> +
> + ret = ret ? ret : sdev->xs_xdev->instance;
> + return ret;
> +}
> +
> +int xrt_subdev_pool_del(struct xrt_subdev_pool *spool, enum xrt_subdev_id id, int instance)
> +{
> + const struct list_head *ptr;
> + struct mutex *lk = &spool->xsp_lock;
> + struct list_head *dl = &spool->xsp_dev_list;
> + struct xrt_subdev *sdev;
> + int ret = -ENOENT;
> +
> + mutex_lock(lk);
> + if (spool->xsp_closing) {
> + /* Pool is going away, all subdevs will be gone. */
> + mutex_unlock(lk);
> + return 0;
> + }
> + list_for_each(ptr, dl) {
> + sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
> + if (sdev->xs_id != id || sdev->xs_xdev->instance != instance)
> + continue;
> + xrt_subdev_pool_wait_for_holders(spool, sdev);
> + list_del(&sdev->xs_dev_list);
> + ret = 0;
> + break;
> + }
> + mutex_unlock(lk);
> + if (ret)
> + return ret;
> +
> + xrt_subdev_destroy(sdev);
> + return 0;
> +}
> +
> +static int xrt_subdev_pool_get_impl(struct xrt_subdev_pool *spool, xrt_subdev_match_t match,
> + void *arg, struct device *holder_dev, struct xrt_subdev **sdevp)
> +{
> + struct xrt_device *xdev = (struct xrt_device *)arg;
> + struct list_head *dl = &spool->xsp_dev_list;
> + struct mutex *lk = &spool->xsp_lock;
> + struct xrt_subdev *sdev = NULL;
> + const struct list_head *ptr;
> + struct xrt_subdev *d = NULL;
> + int ret = -ENOENT;
> +
> + mutex_lock(lk);
> +
> + if (!xdev) {
> + if (match == XRT_SUBDEV_MATCH_PREV) {
> + sdev = list_empty(dl) ? NULL :
> + list_last_entry(dl, struct xrt_subdev, xs_dev_list);
> + } else if (match == XRT_SUBDEV_MATCH_NEXT) {
> + sdev = list_first_entry_or_null(dl, struct xrt_subdev, xs_dev_list);
> + }
> + }
> +
> + list_for_each(ptr, dl) {
> + d = list_entry(ptr, struct xrt_subdev, xs_dev_list);
> + if (match == XRT_SUBDEV_MATCH_PREV || match == XRT_SUBDEV_MATCH_NEXT) {
> + if (d->xs_xdev != xdev)
> + continue;
> + } else {
> + if (!match(d->xs_id, d->xs_xdev, arg))
> + continue;
> + }
> +
> + if (match == XRT_SUBDEV_MATCH_PREV)
> + sdev = !list_is_first(ptr, dl) ? list_prev_entry(d, xs_dev_list) : NULL;
> + else if (match == XRT_SUBDEV_MATCH_NEXT)
> + sdev = !list_is_last(ptr, dl) ? list_next_entry(d, xs_dev_list) : NULL;
> + else
> + sdev = d;
> + }
> +
> + if (sdev)
> + ret = xrt_subdev_hold(sdev, holder_dev);
> +
> + mutex_unlock(lk);
> +
> + if (!ret)
> + *sdevp = sdev;
> + return ret;
> +}
> +
> +int xrt_subdev_pool_get(struct xrt_subdev_pool *spool, xrt_subdev_match_t match, void *arg,
> + struct device *holder_dev, struct xrt_device **xdevp)
> +{
> + int rc;
> + struct xrt_subdev *sdev;
> +
> + rc = xrt_subdev_pool_get_impl(spool, match, arg, holder_dev, &sdev);
> + if (rc) {
> + if (rc != -ENOENT)
> + dev_err(holder_dev, "failed to hold device: %d", rc);
> + return rc;
> + }
> +
> + if (!IS_ROOT_DEV(holder_dev)) {
> + xrt_dbg(to_xrt_dev(holder_dev), "%s <<==== %s",
> + dev_name(holder_dev), dev_name(DEV(sdev->xs_xdev)));
> + }
> +
> + *xdevp = sdev->xs_xdev;
> + return 0;
> +}
> +
> +static int xrt_subdev_pool_put_impl(struct xrt_subdev_pool *spool, struct xrt_device *xdev,
> + struct device *holder_dev)
> +{
> + const struct list_head *ptr;
> + struct mutex *lk = &spool->xsp_lock;
> + struct list_head *dl = &spool->xsp_dev_list;
> + struct xrt_subdev *sdev;
> + int ret = -ENOENT;
> +
> + mutex_lock(lk);
> + list_for_each(ptr, dl) {
> + sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
> + if (sdev->xs_xdev != xdev)
> + continue;
> + ret = xrt_subdev_release(sdev, holder_dev);
> + break;
> + }
> + mutex_unlock(lk);
> +
> + return ret;
> +}
> +
> +int xrt_subdev_pool_put(struct xrt_subdev_pool *spool, struct xrt_device *xdev,
> + struct device *holder_dev)
> +{
> + int ret = xrt_subdev_pool_put_impl(spool, xdev, holder_dev);
> +
> + if (ret)
> + return ret;
> +
> + if (!IS_ROOT_DEV(holder_dev)) {
> + xrt_dbg(to_xrt_dev(holder_dev), "%s <<==X== %s",
> + dev_name(holder_dev), dev_name(DEV(xdev)));
> + }
> + return 0;
> +}
> +
> +void xrt_subdev_pool_trigger_event(struct xrt_subdev_pool *spool, enum xrt_events e)
> +{
> + struct xrt_device *tgt = NULL;
> + struct xrt_subdev *sdev = NULL;
> + struct xrt_event evt;
> +
> + while (!xrt_subdev_pool_get_impl(spool, XRT_SUBDEV_MATCH_NEXT,
> + tgt, spool->xsp_owner, &sdev)) {
> + tgt = sdev->xs_xdev;
> + evt.xe_evt = e;
> + evt.xe_subdev.xevt_subdev_id = sdev->xs_id;
> + evt.xe_subdev.xevt_subdev_instance = tgt->instance;
> + xrt_subdev_root_request(tgt, XRT_ROOT_EVENT_SYNC, &evt);
> + xrt_subdev_pool_put_impl(spool, tgt, spool->xsp_owner);
> + }
> +}
> +
> +void xrt_subdev_pool_handle_event(struct xrt_subdev_pool *spool, struct xrt_event *evt)
> +{
> + struct xrt_device *tgt = NULL;
> + struct xrt_subdev *sdev = NULL;
> +
> + while (!xrt_subdev_pool_get_impl(spool, XRT_SUBDEV_MATCH_NEXT,
> + tgt, spool->xsp_owner, &sdev)) {
> + tgt = sdev->xs_xdev;
> + xleaf_call(tgt, XRT_XLEAF_EVENT, evt);
> + xrt_subdev_pool_put_impl(spool, tgt, spool->xsp_owner);
> + }
> +}
> +
> +ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
> + struct xrt_device *xdev, char *buf, size_t len)
> +{
> + const struct list_head *ptr;
> + struct mutex *lk = &spool->xsp_lock;
> + struct list_head *dl = &spool->xsp_dev_list;
> + struct xrt_subdev *sdev;
> + ssize_t ret = 0;
> +
> + mutex_lock(lk);
> + list_for_each(ptr, dl) {
> + sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
> + if (sdev->xs_xdev != xdev)
> + continue;
> + ret = xrt_subdev_get_holders(sdev, buf, len);
> + break;
> + }
> + mutex_unlock(lk);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(xrt_subdev_pool_get_holders);
> +
> +int xleaf_broadcast_event(struct xrt_device *xdev, enum xrt_events evt, bool async)
> +{
> + struct xrt_event e = { evt, };
> + enum xrt_root_cmd cmd = async ? XRT_ROOT_EVENT_ASYNC : XRT_ROOT_EVENT_SYNC;
> +
> + WARN_ON(evt == XRT_EVENT_POST_CREATION || evt == XRT_EVENT_PRE_REMOVAL);
> + return xrt_subdev_root_request(xdev, cmd, &e);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_broadcast_event);
> +
> +void xleaf_hot_reset(struct xrt_device *xdev)
> +{
> + xrt_subdev_root_request(xdev, XRT_ROOT_HOT_RESET, NULL);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_hot_reset);
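(To illustrate which board-level events a leaf may legitimately broadcast — the WARN_ON
above reserves the creation/removal events for the root — a hedged sketch of a reset
request path follows; the exact sequencing is up to the caller and not mandated by this patch.)

    static void request_hot_reset(struct xrt_device *xdev)
    {
            /* Let other leaves quiesce before the root flips SBR. */
            xleaf_broadcast_event(xdev, XRT_EVENT_PRE_HOT_RESET, false);

            /* Root PCIe driver performs the actual secondary bus reset. */
            xleaf_hot_reset(xdev);

            xleaf_broadcast_event(xdev, XRT_EVENT_POST_HOT_RESET, false);
    }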
> +
> +void xleaf_get_root_res(struct xrt_device *xdev, u32 region_id, struct resource **res)
> +{
> + struct xrt_root_get_res arg = { 0 };
> +
> + arg.xpigr_region_id = region_id;
> + xrt_subdev_root_request(xdev, XRT_ROOT_GET_RESOURCE, &arg);
> + *res = arg.xpigr_res;
> +}
> +
> +void xleaf_get_root_id(struct xrt_device *xdev, unsigned short *vendor, unsigned short *device,
> + unsigned short *subvendor, unsigned short *subdevice)
> +{
> + struct xrt_root_get_id id = { 0 };
> +
> + WARN_ON(!vendor && !device && !subvendor && !subdevice);
> +
> + xrt_subdev_root_request(xdev, XRT_ROOT_GET_ID, (void *)&id);
> + if (vendor)
> + *vendor = id.xpigi_vendor_id;
> + if (device)
> + *device = id.xpigi_device_id;
> + if (subvendor)
> + *subvendor = id.xpigi_sub_vendor_id;
> + if (subdevice)
> + *subdevice = id.xpigi_sub_device_id;
> +}
> +
> +struct device *xleaf_register_hwmon(struct xrt_device *xdev, const char *name, void *drvdata,
> + const struct attribute_group **grps)
> +{
> + struct xrt_root_hwmon hm = { true, name, drvdata, grps, };
> +
> + xrt_subdev_root_request(xdev, XRT_ROOT_HWMON, (void *)&hm);
> + return hm.xpih_hwmon_dev;
> +}
> +
> +void xleaf_unregister_hwmon(struct xrt_device *xdev, struct device *hwmon)
> +{
> + struct xrt_root_hwmon hm = { false, };
> +
> + hm.xpih_hwmon_dev = hwmon;
> + xrt_subdev_root_request(xdev, XRT_ROOT_HWMON, (void *)&hm);
> +}
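(As an aside, a minimal sketch of how a leaf would use the hwmon helpers above from its
probe/remove; the attribute groups, device name and function names are placeholders, not
taken from this series.)

    static const struct attribute_group *example_hwmon_groups[] = {
            /* leaf-specific hwmon attribute groups would go here */
            NULL,
    };

    static int example_hwmon_probe(struct xrt_device *xdev)
    {
            struct device *hwmon;

            hwmon = xleaf_register_hwmon(xdev, "xrt_example", xdev, example_hwmon_groups);
            if (!hwmon)
                    xrt_err(xdev, "hwmon registration failed");
            /* Remember the hwmon device so remove() can unregister it. */
            xrt_set_drvdata(xdev, hwmon);
            return 0;
    }

    static void example_hwmon_remove(struct xrt_device *xdev)
    {
            struct device *hwmon = xrt_get_drvdata(xdev);

            if (hwmon)
                    xleaf_unregister_hwmon(xdev, hwmon);
    }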

2021-05-03 18:08:31

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 07/20] fpga: xrt: root driver infrastructure


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> Contains common code for all root drivers and handles root calls from
> xrt drivers. This is part of root driver infrastructure.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/events.h | 45 +++
> drivers/fpga/xrt/include/xroot.h | 117 +++++++
> drivers/fpga/xrt/lib/subdev_pool.h | 53 +++
> drivers/fpga/xrt/lib/xroot.c | 536 +++++++++++++++++++++++++++++
> 4 files changed, 751 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/events.h
> create mode 100644 drivers/fpga/xrt/include/xroot.h
> create mode 100644 drivers/fpga/xrt/lib/subdev_pool.h
> create mode 100644 drivers/fpga/xrt/lib/xroot.c
>
> diff --git a/drivers/fpga/xrt/include/events.h b/drivers/fpga/xrt/include/events.h
> new file mode 100644
> index 000000000000..775171a47c8e
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/events.h
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XRT_EVENTS_H_
> +#define _XRT_EVENTS_H_
> +
> +#include "subdev_id.h"
> +
> +/*
> + * Event notification.
> + */
> +enum xrt_events {
> + XRT_EVENT_TEST = 0, /* for testing */
> + /*
> + * Events related to specific subdev
> + * Callback arg: struct xrt_event_arg_subdev
> + */
> + XRT_EVENT_POST_CREATION,
> + XRT_EVENT_PRE_REMOVAL,
> + /*
> + * Events related to change of the whole board
> + * Callback arg: <none>
> + */
> + XRT_EVENT_PRE_HOT_RESET,
> + XRT_EVENT_POST_HOT_RESET,
> + XRT_EVENT_PRE_GATE_CLOSE,
> + XRT_EVENT_POST_GATE_OPEN,
> +};
> +
> +struct xrt_event_arg_subdev {
> + enum xrt_subdev_id xevt_subdev_id;
> + int xevt_subdev_instance;
> +};
> +
> +struct xrt_event {
> + enum xrt_events xe_evt;
> + struct xrt_event_arg_subdev xe_subdev;
> +};
> +
> +#endif /* _XRT_EVENTS_H_ */
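(A sketch of how a leaf might consume these events: XRT_XLEAF_EVENT is the cmd the
infrastructure uses when fanning events out to leaf_call handlers, see
xrt_subdev_pool_handle_event() in the subdev code. The handler body below is purely
illustrative.)

    static int example_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
    {
            struct xrt_event *evt = arg;

            if (cmd != XRT_XLEAF_EVENT)
                    return -EINVAL;

            switch (evt->xe_evt) {
            case XRT_EVENT_POST_CREATION:
                    /* evt->xe_subdev tells which subdev instance was created. */
                    break;
            case XRT_EVENT_PRE_HOT_RESET:
                    /* Quiesce hardware before the root resets the card. */
                    break;
            default:
                    break;
            }
            return 0;
    }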
> diff --git a/drivers/fpga/xrt/include/xroot.h b/drivers/fpga/xrt/include/xroot.h
> new file mode 100644
> index 000000000000..56461bcb07a9
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xroot.h
> @@ -0,0 +1,117 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XRT_ROOT_H_
> +#define _XRT_ROOT_H_
> +
> +#include "xdevice.h"
> +#include "subdev_id.h"
> +#include "events.h"
> +
> +typedef bool (*xrt_subdev_match_t)(enum xrt_subdev_id, struct xrt_device *, void *);
> +#define XRT_SUBDEV_MATCH_PREV ((xrt_subdev_match_t)-1)
> +#define XRT_SUBDEV_MATCH_NEXT ((xrt_subdev_match_t)-2)
> +
> +/*
> + * Root calls.
> + */
> +enum xrt_root_cmd {
> + /* Leaf actions. */
> + XRT_ROOT_GET_LEAF = 0,
> + XRT_ROOT_PUT_LEAF,
> + XRT_ROOT_GET_LEAF_HOLDERS,
> +
> + /* Group actions. */
> + XRT_ROOT_CREATE_GROUP,
> + XRT_ROOT_REMOVE_GROUP,
> + XRT_ROOT_LOOKUP_GROUP,
> + XRT_ROOT_WAIT_GROUP_BRINGUP,
> +
> + /* Event actions. */
> + XRT_ROOT_EVENT_SYNC,
> + XRT_ROOT_EVENT_ASYNC,
> +
> + /* Device info. */
> + XRT_ROOT_GET_RESOURCE,
> + XRT_ROOT_GET_ID,
> +
> + /* Misc. */
> + XRT_ROOT_HOT_RESET,
> + XRT_ROOT_HWMON,
> +};
> +
> +struct xrt_root_get_leaf {
> + struct xrt_device *xpigl_caller_xdev;
> + xrt_subdev_match_t xpigl_match_cb;
> + void *xpigl_match_arg;
> + struct xrt_device *xpigl_tgt_xdev;
> +};
> +
> +struct xrt_root_put_leaf {
> + struct xrt_device *xpipl_caller_xdev;
> + struct xrt_device *xpipl_tgt_xdev;
> +};
> +
> +struct xrt_root_lookup_group {
> + struct xrt_device *xpilp_xdev; /* caller's xdev */
> + xrt_subdev_match_t xpilp_match_cb;
> + void *xpilp_match_arg;
> + int xpilp_grp_inst;
> +};
> +
> +struct xrt_root_get_holders {
> + struct xrt_device *xpigh_xdev; /* caller's xdev */
> + char *xpigh_holder_buf;
> + size_t xpigh_holder_buf_len;
> +};
> +
> +struct xrt_root_get_res {
> + u32 xpigr_region_id;
> + struct resource *xpigr_res;
> +};
> +
> +struct xrt_root_get_id {
> + unsigned short xpigi_vendor_id;
> + unsigned short xpigi_device_id;
> + unsigned short xpigi_sub_vendor_id;
> + unsigned short xpigi_sub_device_id;
> +};
> +
> +struct xrt_root_hwmon {
> + bool xpih_register;
> + const char *xpih_name;
> + void *xpih_drvdata;
> + const struct attribute_group **xpih_groups;
> + struct device *xpih_hwmon_dev;
> +};
> +
> +/*
> + * Callback for leaf to make a root request. Arguments are: parent device, parent cookie, req,
> + * and arg.
> + */
> +typedef int (*xrt_subdev_root_cb_t)(struct device *, void *, u32, void *);
> +int xrt_subdev_root_request(struct xrt_device *self, u32 cmd, void *arg);
> +
> +/*
> + * Defines physical function (MPF / UPF) specific operations
> + * needed in common root driver.
> + */
> +struct xroot_physical_function_callback {
> + void (*xpc_get_id)(struct device *dev, struct xrt_root_get_id *rid);
> + int (*xpc_get_resource)(struct device *dev, struct xrt_root_get_res *res);
> + void (*xpc_hot_reset)(struct device *dev);
> +};
> +
> +int xroot_probe(struct device *dev, struct xroot_physical_function_callback *cb, void **root);
> +void xroot_remove(void *root);
> +bool xroot_wait_for_bringup(void *root);
> +int xroot_create_group(void *xr, char *dtb);
> +int xroot_add_simple_node(void *root, char *dtb, const char *endpoint);
> +void xroot_broadcast(void *root, enum xrt_events evt);
> +
> +#endif /* _XRT_ROOT_H_ */
> diff --git a/drivers/fpga/xrt/lib/subdev_pool.h b/drivers/fpga/xrt/lib/subdev_pool.h
> new file mode 100644
> index 000000000000..03f617d7ffd7
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/subdev_pool.h
> @@ -0,0 +1,53 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XRT_SUBDEV_POOL_H_
> +#define _XRT_SUBDEV_POOL_H_
> +
> +#include <linux/device.h>
> +#include <linux/mutex.h>
> +#include "xroot.h"
> +
> +/*
> + * The struct xrt_subdev_pool manages a list of xrt_subdevs for root and group drivers.
> + */
> +struct xrt_subdev_pool {
> + struct list_head xsp_dev_list;
> + struct device *xsp_owner;
> + struct mutex xsp_lock; /* pool lock */
> + bool xsp_closing;
> +};
> +
> +/*
> + * Subdev pool helper functions for root and group drivers only.
> + */
> +void xrt_subdev_pool_init(struct device *dev,
> + struct xrt_subdev_pool *spool);
> +void xrt_subdev_pool_fini(struct xrt_subdev_pool *spool);
> +int xrt_subdev_pool_get(struct xrt_subdev_pool *spool,
> + xrt_subdev_match_t match,
> + void *arg, struct device *holder_dev,
> + struct xrt_device **xdevp);
> +int xrt_subdev_pool_put(struct xrt_subdev_pool *spool,
> + struct xrt_device *xdev,
> + struct device *holder_dev);
> +int xrt_subdev_pool_add(struct xrt_subdev_pool *spool,
> + enum xrt_subdev_id id, xrt_subdev_root_cb_t pcb,
> + void *pcb_arg, char *dtb);
> +int xrt_subdev_pool_del(struct xrt_subdev_pool *spool,
> + enum xrt_subdev_id id, int instance);
> +ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
> + struct xrt_device *xdev,
> + char *buf, size_t len);
> +
> +void xrt_subdev_pool_trigger_event(struct xrt_subdev_pool *spool,
> + enum xrt_events evt);
> +void xrt_subdev_pool_handle_event(struct xrt_subdev_pool *spool,
> + struct xrt_event *evt);
> +
> +#endif /* _XRT_SUBDEV_POOL_H_ */
> diff --git a/drivers/fpga/xrt/lib/xroot.c b/drivers/fpga/xrt/lib/xroot.c
> new file mode 100644
> index 000000000000..7b3e540dd6c0
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xroot.c
> @@ -0,0 +1,536 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA Root Functions
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#include <linux/module.h>
> +#include <linux/hwmon.h>
> +#include "xroot.h"
> +#include "subdev_pool.h"
> +#include "group.h"
> +#include "metadata.h"
> +
> +#define xroot_err(xr, fmt, args...) dev_err((xr)->dev, "%s: " fmt, __func__, ##args)
> +#define xroot_warn(xr, fmt, args...) dev_warn((xr)->dev, "%s: " fmt, __func__, ##args)
> +#define xroot_info(xr, fmt, args...) dev_info((xr)->dev, "%s: " fmt, __func__, ##args)
> +#define xroot_dbg(xr, fmt, args...) dev_dbg((xr)->dev, "%s: " fmt, __func__, ##args)
> +
> +#define XROOT_GROUP_FIRST (-1)
> +#define XROOT_GROUP_LAST (-2)
> +
ok, vsec moved
> +static int xroot_root_cb(struct device *, void *, u32, void *);
> +
> +struct xroot_evt {
> + struct list_head list;
> + struct xrt_event evt;
> + struct completion comp;
> + bool async;
> +};
> +
> +struct xroot_events {
> + struct mutex evt_lock; /* event lock */
> + struct list_head evt_list;
> + struct work_struct evt_work;
> +};
> +
> +struct xroot_groups {
> + struct xrt_subdev_pool pool;
> + struct work_struct bringup_work;
> + atomic_t bringup_pending_cnt;
> + atomic_t bringup_failed_cnt;

ok

Reviewed-by: Tom Rix <[email protected]>

> + struct completion bringup_comp;
> +};
> +
> +struct xroot {
> + struct device *dev;
> + struct xroot_events events;
> + struct xroot_groups groups;
> + struct xroot_physical_function_callback pf_cb;
> +};
> +
> +struct xroot_group_match_arg {
> + enum xrt_subdev_id id;
> + int instance;
> +};
> +
> +static bool xroot_group_match(enum xrt_subdev_id id, struct xrt_device *xdev, void *arg)
> +{
> + struct xroot_group_match_arg *a = (struct xroot_group_match_arg *)arg;
> +
> + /* xdev->instance is the instance of the subdev. */
> + return id == a->id && xdev->instance == a->instance;
> +}
> +
> +static int xroot_get_group(struct xroot *xr, int instance, struct xrt_device **grpp)
> +{
> + int rc = 0;
> + struct xrt_subdev_pool *grps = &xr->groups.pool;
> + struct device *dev = xr->dev;
> + struct xroot_group_match_arg arg = { XRT_SUBDEV_GRP, instance };
> +
> + if (instance == XROOT_GROUP_LAST) {
> + rc = xrt_subdev_pool_get(grps, XRT_SUBDEV_MATCH_NEXT,
> + *grpp, dev, grpp);
> + } else if (instance == XROOT_GROUP_FIRST) {
> + rc = xrt_subdev_pool_get(grps, XRT_SUBDEV_MATCH_PREV,
> + *grpp, dev, grpp);
> + } else {
> + rc = xrt_subdev_pool_get(grps, xroot_group_match,
> + &arg, dev, grpp);
> + }
> +
> + if (rc && rc != -ENOENT)
> + xroot_err(xr, "failed to hold group %d: %d", instance, rc);
> + return rc;
> +}
> +
> +static void xroot_put_group(struct xroot *xr, struct xrt_device *grp)
> +{
> + int inst = grp->instance;
> + int rc = xrt_subdev_pool_put(&xr->groups.pool, grp, xr->dev);
> +
> + if (rc)
> + xroot_err(xr, "failed to release group %d: %d", inst, rc);
> +}
> +
> +static int xroot_trigger_event(struct xroot *xr, struct xrt_event *e, bool async)
> +{
> + struct xroot_evt *enew = vzalloc(sizeof(*enew));
> +
> + if (!enew)
> + return -ENOMEM;
> +
> + enew->evt = *e;
> + enew->async = async;
> + init_completion(&enew->comp);
> +
> + mutex_lock(&xr->events.evt_lock);
> + list_add(&enew->list, &xr->events.evt_list);
> + mutex_unlock(&xr->events.evt_lock);
> +
> + schedule_work(&xr->events.evt_work);
> +
> + if (async)
> + return 0;
> +
> + wait_for_completion(&enew->comp);
> + vfree(enew);
> + return 0;
> +}
> +
> +static void
> +xroot_group_trigger_event(struct xroot *xr, int inst, enum xrt_events e)
> +{
> + int ret;
> + struct xrt_device *xdev = NULL;
> + struct xrt_event evt = { 0 };
> +
> + WARN_ON(inst < 0);
> + /* Only triggers subdev specific events. */
> + if (e != XRT_EVENT_POST_CREATION && e != XRT_EVENT_PRE_REMOVAL) {
> + xroot_err(xr, "invalid event %d", e);
> + return;
> + }
> +
> + ret = xroot_get_group(xr, inst, &xdev);
> + if (ret)
> + return;
> +
> + /* Triggers event for children, first. */
> + xleaf_call(xdev, XRT_GROUP_TRIGGER_EVENT, (void *)(uintptr_t)e);
> +
> + /* Triggers event for itself. */
> + evt.xe_evt = e;
> + evt.xe_subdev.xevt_subdev_id = XRT_SUBDEV_GRP;
> + evt.xe_subdev.xevt_subdev_instance = inst;
> + xroot_trigger_event(xr, &evt, false);
> +
> + xroot_put_group(xr, xdev);
> +}
> +
> +int xroot_create_group(void *root, char *dtb)
> +{
> + struct xroot *xr = (struct xroot *)root;
> + int ret;
> +
> + atomic_inc(&xr->groups.bringup_pending_cnt);
> + ret = xrt_subdev_pool_add(&xr->groups.pool, XRT_SUBDEV_GRP, xroot_root_cb, xr, dtb);
> + if (ret >= 0) {
> + schedule_work(&xr->groups.bringup_work);
> + } else {
> + atomic_dec(&xr->groups.bringup_pending_cnt);
> + atomic_inc(&xr->groups.bringup_failed_cnt);
> + xroot_err(xr, "failed to create group: %d", ret);
> + }
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(xroot_create_group);
> +
> +static int xroot_destroy_single_group(struct xroot *xr, int instance)
> +{
> + struct xrt_device *xdev = NULL;
> + int ret;
> +
> + WARN_ON(instance < 0);
> + ret = xroot_get_group(xr, instance, &xdev);
> + if (ret)
> + return ret;
> +
> + xroot_group_trigger_event(xr, instance, XRT_EVENT_PRE_REMOVAL);
> +
> + /* Now tear down all children in this group. */
> + ret = xleaf_call(xdev, XRT_GROUP_FINI_CHILDREN, NULL);
> + xroot_put_group(xr, xdev);
> + if (!ret)
> + ret = xrt_subdev_pool_del(&xr->groups.pool, XRT_SUBDEV_GRP, instance);
> +
> + return ret;
> +}
> +
> +static int xroot_destroy_group(struct xroot *xr, int instance)
> +{
> + struct xrt_device *target = NULL;
> + struct xrt_device *deps = NULL;
> + int ret;
> +
> + WARN_ON(instance < 0);
> + /*
> + * Make sure the target group exists and can't go away before
> + * we remove its dependents.
> + */
> + ret = xroot_get_group(xr, instance, &target);
> + if (ret)
> + return ret;
> +
> + /*
> + * Remove all groups that depend on the target one.
> + * Assuming subdevs in higher-ID groups can depend on ones in
> + * lower-ID groups, we remove them in reverse order.
> + */
> + while (xroot_get_group(xr, XROOT_GROUP_LAST, &deps) != -ENOENT) {
> + int inst = deps->instance;
> +
> + xroot_put_group(xr, deps);
> + /* Reached the target group instance, stop here. */
> + if (instance == inst)
> + break;
> + xroot_destroy_single_group(xr, inst);
> + deps = NULL;
> + }
> +
> + /* Now we can remove the target group. */
> + xroot_put_group(xr, target);
> + return xroot_destroy_single_group(xr, instance);
> +}
> +
> +static int xroot_lookup_group(struct xroot *xr,
> + struct xrt_root_lookup_group *arg)
> +{
> + int rc = -ENOENT;
> + struct xrt_device *grp = NULL;
> +
> + while (rc < 0 && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
> + if (arg->xpilp_match_cb(XRT_SUBDEV_GRP, grp, arg->xpilp_match_arg))
> + rc = grp->instance;
> + xroot_put_group(xr, grp);
> + }
> + return rc;
> +}
> +
> +static void xroot_event_work(struct work_struct *work)
> +{
> + struct xroot_evt *tmp;
> + struct xroot *xr = container_of(work, struct xroot, events.evt_work);
> +
> + mutex_lock(&xr->events.evt_lock);
> + while (!list_empty(&xr->events.evt_list)) {
> + tmp = list_first_entry(&xr->events.evt_list, struct xroot_evt, list);
> + list_del(&tmp->list);
> + mutex_unlock(&xr->events.evt_lock);
> +
> + xrt_subdev_pool_handle_event(&xr->groups.pool, &tmp->evt);
> +
> + if (tmp->async)
> + vfree(tmp);
> + else
> + complete(&tmp->comp);
> +
> + mutex_lock(&xr->events.evt_lock);
> + }
> + mutex_unlock(&xr->events.evt_lock);
> +}
> +
> +static void xroot_event_init(struct xroot *xr)
> +{
> + INIT_LIST_HEAD(&xr->events.evt_list);
> + mutex_init(&xr->events.evt_lock);
> + INIT_WORK(&xr->events.evt_work, xroot_event_work);
> +}
> +
> +static void xroot_event_fini(struct xroot *xr)
> +{
> + flush_scheduled_work();
> + WARN_ON(!list_empty(&xr->events.evt_list));
> +}
> +
> +static int xroot_get_leaf(struct xroot *xr, struct xrt_root_get_leaf *arg)
> +{
> + int rc = -ENOENT;
> + struct xrt_device *grp = NULL;
> +
> + while (rc && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
> + rc = xleaf_call(grp, XRT_GROUP_GET_LEAF, arg);
> + xroot_put_group(xr, grp);
> + }
> + return rc;
> +}
> +
> +static int xroot_put_leaf(struct xroot *xr, struct xrt_root_put_leaf *arg)
> +{
> + int rc = -ENOENT;
> + struct xrt_device *grp = NULL;
> +
> + while (rc && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
> + rc = xleaf_call(grp, XRT_GROUP_PUT_LEAF, arg);
> + xroot_put_group(xr, grp);
> + }
> + return rc;
> +}
> +
> +static int xroot_root_cb(struct device *dev, void *parg, enum xrt_root_cmd cmd, void *arg)
> +{
> + struct xroot *xr = (struct xroot *)parg;
> + int rc = 0;
> +
> + switch (cmd) {
> + /* Leaf actions. */
> + case XRT_ROOT_GET_LEAF: {
> + struct xrt_root_get_leaf *getleaf = (struct xrt_root_get_leaf *)arg;
> +
> + rc = xroot_get_leaf(xr, getleaf);
> + break;
> + }
> + case XRT_ROOT_PUT_LEAF: {
> + struct xrt_root_put_leaf *putleaf = (struct xrt_root_put_leaf *)arg;
> +
> + rc = xroot_put_leaf(xr, putleaf);
> + break;
> + }
> + case XRT_ROOT_GET_LEAF_HOLDERS: {
> + struct xrt_root_get_holders *holders = (struct xrt_root_get_holders *)arg;
> +
> + rc = xrt_subdev_pool_get_holders(&xr->groups.pool,
> + holders->xpigh_xdev,
> + holders->xpigh_holder_buf,
> + holders->xpigh_holder_buf_len);
> + break;
> + }
> +
> + /* Group actions. */
> + case XRT_ROOT_CREATE_GROUP:
> + rc = xroot_create_group(xr, (char *)arg);
> + break;
> + case XRT_ROOT_REMOVE_GROUP:
> + rc = xroot_destroy_group(xr, (int)(uintptr_t)arg);
> + break;
> + case XRT_ROOT_LOOKUP_GROUP: {
> + struct xrt_root_lookup_group *getgrp = (struct xrt_root_lookup_group *)arg;
> +
> + rc = xroot_lookup_group(xr, getgrp);
> + break;
> + }
> + case XRT_ROOT_WAIT_GROUP_BRINGUP:
> + rc = xroot_wait_for_bringup(xr) ? 0 : -EINVAL;
> + break;
> +
> + /* Event actions. */
> + case XRT_ROOT_EVENT_SYNC:
> + case XRT_ROOT_EVENT_ASYNC: {
> + bool async = (cmd == XRT_ROOT_EVENT_ASYNC);
> + struct xrt_event *evt = (struct xrt_event *)arg;
> +
> + rc = xroot_trigger_event(xr, evt, async);
> + break;
> + }
> +
> + /* Device info. */
> + case XRT_ROOT_GET_RESOURCE: {
> + struct xrt_root_get_res *res = (struct xrt_root_get_res *)arg;
> +
> + if (xr->pf_cb.xpc_get_resource) {
> + rc = xr->pf_cb.xpc_get_resource(xr->dev, res);
> + } else {
> + xroot_err(xr, "get resource is not supported");
> + rc = -EOPNOTSUPP;
> + }
> + break;
> + }
> + case XRT_ROOT_GET_ID: {
> + struct xrt_root_get_id *id = (struct xrt_root_get_id *)arg;
> +
> + if (xr->pf_cb.xpc_get_id)
> + xr->pf_cb.xpc_get_id(xr->dev, id);
> + else
> + memset(id, 0, sizeof(*id));
> + break;
> + }
> +
> + /* MISC generic root driver functions. */
> + case XRT_ROOT_HOT_RESET: {
> + if (xr->pf_cb.xpc_hot_reset) {
> + xr->pf_cb.xpc_hot_reset(xr->dev);
> + } else {
> + xroot_err(xr, "hot reset is not supported");
> + rc = -EOPNOTSUPP;
> + }
> + break;
> + }
> + case XRT_ROOT_HWMON: {
> + struct xrt_root_hwmon *hwmon = (struct xrt_root_hwmon *)arg;
> +
> + if (hwmon->xpih_register) {
> + hwmon->xpih_hwmon_dev =
> + hwmon_device_register_with_info(xr->dev,
> + hwmon->xpih_name,
> + hwmon->xpih_drvdata,
> + NULL,
> + hwmon->xpih_groups);
> + } else {
> + hwmon_device_unregister(hwmon->xpih_hwmon_dev);
> + }
> + break;
> + }
> +
> + default:
> + xroot_err(xr, "unknown IOCTL cmd %d", cmd);
> + rc = -EINVAL;
> + break;
> + }
> +
> + return rc;
> +}
> +
> +static void xroot_bringup_group_work(struct work_struct *work)
> +{
> + struct xrt_device *xdev = NULL;
> + struct xroot *xr = container_of(work, struct xroot, groups.bringup_work);
> +
> + while (xroot_get_group(xr, XROOT_GROUP_FIRST, &xdev) != -ENOENT) {
> + int r, i;
> +
> + i = xdev->instance;
> + r = xleaf_call(xdev, XRT_GROUP_INIT_CHILDREN, NULL);
> + xroot_put_group(xr, xdev);
> + if (r == -EEXIST)
> + continue; /* Already brought up, nothing to do. */
> + if (r)
> + atomic_inc(&xr->groups.bringup_failed_cnt);
> +
> + xroot_group_trigger_event(xr, i, XRT_EVENT_POST_CREATION);
> +
> + if (atomic_dec_and_test(&xr->groups.bringup_pending_cnt))
> + complete(&xr->groups.bringup_comp);
> + }
> +}
> +
> +static void xroot_groups_init(struct xroot *xr)
> +{
> + xrt_subdev_pool_init(xr->dev, &xr->groups.pool);
> + INIT_WORK(&xr->groups.bringup_work, xroot_bringup_group_work);
> + atomic_set(&xr->groups.bringup_pending_cnt, 0);
> + atomic_set(&xr->groups.bringup_failed_cnt, 0);
> + init_completion(&xr->groups.bringup_comp);
> +}
> +
> +static void xroot_groups_fini(struct xroot *xr)
> +{
> + flush_scheduled_work();
> + xrt_subdev_pool_fini(&xr->groups.pool);
> +}
> +
> +int xroot_add_simple_node(void *root, char *dtb, const char *endpoint)
> +{
> + struct xroot *xr = (struct xroot *)root;
> + struct device *dev = xr->dev;
> + struct xrt_md_endpoint ep = { 0 };
> + int ret = 0;
> +
> + ep.ep_name = endpoint;
> + ret = xrt_md_add_endpoint(dev, dtb, &ep);
> + if (ret)
> + xroot_err(xr, "add %s failed, ret %d", endpoint, ret);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(xroot_add_simple_node);
> +
> +bool xroot_wait_for_bringup(void *root)
> +{
> + struct xroot *xr = (struct xroot *)root;
> +
> + wait_for_completion(&xr->groups.bringup_comp);
> + return atomic_read(&xr->groups.bringup_failed_cnt) == 0;
> +}
> +EXPORT_SYMBOL_GPL(xroot_wait_for_bringup);
> +
> +int xroot_probe(struct device *dev, struct xroot_physical_function_callback *cb, void **root)
> +{
> + struct xroot *xr = NULL;
> +
> + dev_info(dev, "%s: probing...", __func__);
> +
> + xr = devm_kzalloc(dev, sizeof(*xr), GFP_KERNEL);
> + if (!xr)
> + return -ENOMEM;
> +
> + xr->dev = dev;
> + xr->pf_cb = *cb;
> + xroot_groups_init(xr);
> + xroot_event_init(xr);
> +
> + *root = xr;
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(xroot_probe);
> +
> +void xroot_remove(void *root)
> +{
> + struct xroot *xr = (struct xroot *)root;
> + struct xrt_device *grp = NULL;
> +
> + xroot_info(xr, "leaving...");
> +
> + if (xroot_get_group(xr, XROOT_GROUP_FIRST, &grp) == 0) {
> + int instance = grp->instance;
> +
> + xroot_put_group(xr, grp);
> + xroot_destroy_group(xr, instance);
> + }
> +
> + xroot_event_fini(xr);
> + xroot_groups_fini(xr);
> +}
> +EXPORT_SYMBOL_GPL(xroot_remove);
> +
> +void xroot_broadcast(void *root, enum xrt_events evt)
> +{
> + struct xroot *xr = (struct xroot *)root;
> + struct xrt_event e = { 0 };
> +
> + /* Root pf driver only broadcasts the two events below. */
> + if (evt != XRT_EVENT_POST_CREATION && evt != XRT_EVENT_PRE_REMOVAL) {
> + xroot_info(xr, "invalid event %d", evt);
> + return;
> + }
> +
> + e.xe_evt = evt;
> + e.xe_subdev.xevt_subdev_id = XRT_ROOT;
> + e.xe_subdev.xevt_subdev_instance = 0;
> + xroot_trigger_event(xr, &e, false);
> +}
> +EXPORT_SYMBOL_GPL(xroot_broadcast);

2021-05-03 18:09:36

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 09/20] fpga: xrt: management physical function driver (root)


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> The PCIE device driver which attaches to management function on Alveo
> devices. It instantiates one or more group drivers which, in turn,
> instantiate xrt drivers. The instantiation of group and xrt drivers is
> completely dtb driven.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
Reviewed-by: Tom Rix <[email protected]>
> ---
> drivers/fpga/xrt/mgnt/root.c | 419 +++++++++++++++++++++++++++++++++++
> 1 file changed, 419 insertions(+)
> create mode 100644 drivers/fpga/xrt/mgnt/root.c
>
> diff --git a/drivers/fpga/xrt/mgnt/root.c b/drivers/fpga/xrt/mgnt/root.c
> new file mode 100644
> index 000000000000..6e362e9d4b59
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgnt/root.c
> @@ -0,0 +1,419 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo Management Function Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#include <linux/module.h>
> +#include <linux/pci.h>
> +#include <linux/aer.h>
> +#include <linux/vmalloc.h>
> +#include <linux/delay.h>
> +
> +#include "xroot.h"
> +#include "xmgnt.h"
> +#include "metadata.h"
> +
> +#define XMGNT_MODULE_NAME "xrt-mgnt"
> +#define XMGNT_DRIVER_VERSION "4.0.0"
> +
> +#define XMGNT_PDEV(xm) ((xm)->pdev)
> +#define XMGNT_DEV(xm) (&(XMGNT_PDEV(xm)->dev))
> +#define xmgnt_err(xm, fmt, args...) \
> + dev_err(XMGNT_DEV(xm), "%s: " fmt, __func__, ##args)
> +#define xmgnt_warn(xm, fmt, args...) \
> + dev_warn(XMGNT_DEV(xm), "%s: " fmt, __func__, ##args)
> +#define xmgnt_info(xm, fmt, args...) \
> + dev_info(XMGNT_DEV(xm), "%s: " fmt, __func__, ##args)
> +#define xmgnt_dbg(xm, fmt, args...) \
> + dev_dbg(XMGNT_DEV(xm), "%s: " fmt, __func__, ##args)
> +#define XMGNT_DEV_ID(_pcidev) \
> + ({ typeof(_pcidev) (pcidev) = (_pcidev); \
> + ((pci_domain_nr((pcidev)->bus) << 16) | \
> + PCI_DEVID((pcidev)->bus->number, 0)); })
> +#define XRT_VSEC_ID 0x20
> +#define XRT_MAX_READRQ 512
> +
> +static struct class *xmgnt_class;
> +
> +/* PCI Device IDs */
> +/*
> + * Golden image is preloaded on the device when it is shipped to the customer.
> + * Then, the customer can load other shells (from Xilinx or some other vendor).
> + * If something goes wrong with the shell, the customer can always go back to
> + * golden and start over again.
> + */
> +#define PCI_DEVICE_ID_U50_GOLDEN 0xD020
> +#define PCI_DEVICE_ID_U50 0x5020
> +static const struct pci_device_id xmgnt_pci_ids[] = {
> + { PCI_DEVICE(PCI_VENDOR_ID_XILINX, PCI_DEVICE_ID_U50_GOLDEN), }, /* Alveo U50 (golden) */
> + { PCI_DEVICE(PCI_VENDOR_ID_XILINX, PCI_DEVICE_ID_U50), }, /* Alveo U50 */
> + { 0, }
> +};
> +
> +struct xmgnt {
> + struct pci_dev *pdev;
> + void *root;
> +
> + bool ready;
> +};
> +
> +static int xmgnt_config_pci(struct xmgnt *xm)
> +{
> + struct pci_dev *pdev = XMGNT_PDEV(xm);
> + int rc;
> +
> + rc = pcim_enable_device(pdev);
> + if (rc < 0) {
> + xmgnt_err(xm, "failed to enable device: %d", rc);
> + return rc;
> + }
> +
> + rc = pci_enable_pcie_error_reporting(pdev);
> + if (rc)
> + xmgnt_warn(xm, "failed to enable AER: %d", rc);
> +
> + pci_set_master(pdev);
> +
> + rc = pcie_get_readrq(pdev);
> + if (rc > XRT_MAX_READRQ)
> + pcie_set_readrq(pdev, XRT_MAX_READRQ);
> + return 0;
> +}
> +
> +static int xmgnt_match_slot_and_save(struct device *dev, void *data)
> +{
> + struct xmgnt *xm = data;
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + if (XMGNT_DEV_ID(pdev) == XMGNT_DEV_ID(xm->pdev)) {
> + pci_cfg_access_lock(pdev);
> + pci_save_state(pdev);
> + }
> +
> + return 0;
> +}
> +
> +static void xmgnt_pci_save_config_all(struct xmgnt *xm)
> +{
> + bus_for_each_dev(&pci_bus_type, NULL, xm, xmgnt_match_slot_and_save);
> +}
> +
> +static int xmgnt_match_slot_and_restore(struct device *dev, void *data)
> +{
> + struct xmgnt *xm = data;
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + if (XMGNT_DEV_ID(pdev) == XMGNT_DEV_ID(xm->pdev)) {
> + pci_restore_state(pdev);
> + pci_cfg_access_unlock(pdev);
> + }
> +
> + return 0;
> +}
> +
> +static void xmgnt_pci_restore_config_all(struct xmgnt *xm)
> +{
> + bus_for_each_dev(&pci_bus_type, NULL, xm, xmgnt_match_slot_and_restore);
> +}
> +
> +static void xmgnt_root_hot_reset(struct device *dev)
> +{
> + struct pci_dev *pdev = to_pci_dev(dev);
> + struct pci_bus *bus;
> + u16 pci_cmd, devctl;
> + struct xmgnt *xm;
> + u8 pci_bctl;
> + int i, ret;
> +
> + xm = pci_get_drvdata(pdev);
> + xmgnt_info(xm, "hot reset start");
> + xmgnt_pci_save_config_all(xm);
> + pci_disable_device(pdev);
> + bus = pdev->bus;
> +
> + /*
> + * When flipping the SBR bit, the device can fall off the bus. This is
> + * usually no problem at all so long as drivers are working properly
> + * after SBR. However, some systems complain bitterly when the device
> + * falls off the bus.
> + * The quick solution is to temporarily disable SERR reporting on the
> + * switch port during SBR.
> + */
> +
> + pci_read_config_word(bus->self, PCI_COMMAND, &pci_cmd);
> + pci_write_config_word(bus->self, PCI_COMMAND, (pci_cmd & ~PCI_COMMAND_SERR));
> + pcie_capability_read_word(bus->self, PCI_EXP_DEVCTL, &devctl);
> + pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, (devctl & ~PCI_EXP_DEVCTL_FERE));
> + pci_read_config_byte(bus->self, PCI_BRIDGE_CONTROL, &pci_bctl);
> + pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl | PCI_BRIDGE_CTL_BUS_RESET);
> + msleep(100);
> + pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
> + ssleep(1);
> +
> + pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, devctl);
> + pci_write_config_word(bus->self, PCI_COMMAND, pci_cmd);
> +
> + ret = pci_enable_device(pdev);
> + if (ret)
> + xmgnt_err(xm, "failed to enable device, ret %d", ret);
> +
> + for (i = 0; i < 300; i++) {
> + pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
> + if (pci_cmd != 0xffff)
> + break;
> + msleep(20);
> + }
> + if (i == 300)
> + xmgnt_err(xm, "timed out waiting for device to be online after reset");
> +
> + xmgnt_info(xm, "waiting for %d ms", i * 20);
> + xmgnt_pci_restore_config_all(xm);
> + xmgnt_config_pci(xm);
> +}
> +
> +static int xmgnt_add_vsec_node(struct xmgnt *xm, char *dtb)
> +{
> + u32 off_low, off_high, vsec_bar, header;
> + struct pci_dev *pdev = XMGNT_PDEV(xm);
> + struct xrt_md_endpoint ep = { 0 };
> + struct device *dev = DEV(pdev);
> + int cap = 0, ret = 0;
> + u64 vsec_off;
> +
> + while ((cap = pci_find_next_ext_capability(pdev, cap, PCI_EXT_CAP_ID_VNDR))) {
> + pci_read_config_dword(pdev, cap + PCI_VNDR_HEADER, &header);
> + if (PCI_VNDR_HEADER_ID(header) == XRT_VSEC_ID)
> + break;
> + }
> + if (!cap) {
> + xmgnt_info(xm, "No Vendor Specific Capability.");
> + return -ENOENT;
> + }
> +
> + if (pci_read_config_dword(pdev, cap + 8, &off_low) ||
> + pci_read_config_dword(pdev, cap + 12, &off_high)) {
> + xmgnt_err(xm, "pci_read vendor specific failed.");
> + return -EINVAL;
> + }
> +
> + ep.ep_name = XRT_MD_NODE_VSEC;
> + ret = xrt_md_add_endpoint(dev, dtb, &ep);
> + if (ret) {
> + xmgnt_err(xm, "add vsec metadata failed, ret %d", ret);
> + goto failed;
> + }
> +
> + vsec_bar = cpu_to_be32(off_low & 0xf);
> + ret = xrt_md_set_prop(dev, dtb, XRT_MD_NODE_VSEC, NULL,
> + XRT_MD_PROP_BAR_IDX, &vsec_bar, sizeof(vsec_bar));
> + if (ret) {
> + xmgnt_err(xm, "add vsec bar idx failed, ret %d", ret);
> + goto failed;
> + }
> +
> + vsec_off = cpu_to_be64(((u64)off_high << 32) | (off_low & ~0xfU));
> + ret = xrt_md_set_prop(dev, dtb, XRT_MD_NODE_VSEC, NULL,
> + XRT_MD_PROP_OFFSET, &vsec_off, sizeof(vsec_off));
> + if (ret) {
> + xmgnt_err(xm, "add vsec offset failed, ret %d", ret);
> + goto failed;
> + }
> +
> +failed:
> + return ret;
> +}
> +
> +static int xmgnt_create_root_metadata(struct xmgnt *xm, char **root_dtb)
> +{
> + char *dtb = NULL;
> + int ret;
> +
> + ret = xrt_md_create(XMGNT_DEV(xm), &dtb);
> + if (ret) {
> + xmgnt_err(xm, "create metadata failed, ret %d", ret);
> + goto failed;
> + }
> +
> + ret = xmgnt_add_vsec_node(xm, dtb);
> + if (ret == -ENOENT) {
> + /*
> + * We may be dealing with an MFG board.
> + * Try vsec-golden which will bring up all hard-coded leaves
> + * at hard-coded offsets.
> + */
> + ret = xroot_add_simple_node(xm->root, dtb, XRT_MD_NODE_VSEC_GOLDEN);
> + } else if (ret == 0) {
> + ret = xroot_add_simple_node(xm->root, dtb, XRT_MD_NODE_MGNT_MAIN);
> + }
> + if (ret)
> + goto failed;
> +
> + *root_dtb = dtb;
> + return 0;
> +
> +failed:
> + vfree(dtb);
> + return ret;
> +}
> +
> +static ssize_t ready_show(struct device *dev,
> + struct device_attribute *da,
> + char *buf)
> +{
> + struct pci_dev *pdev = to_pci_dev(dev);
> + struct xmgnt *xm = pci_get_drvdata(pdev);
> +
> + return sprintf(buf, "%d\n", xm->ready);
> +}
> +static DEVICE_ATTR_RO(ready);
> +
> +static struct attribute *xmgnt_root_attrs[] = {
> + &dev_attr_ready.attr,
> + NULL
> +};
> +
> +static struct attribute_group xmgnt_root_attr_group = {
> + .attrs = xmgnt_root_attrs,
> +};
> +
> +static void xmgnt_root_get_id(struct device *dev, struct xrt_root_get_id *rid)
> +{
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + rid->xpigi_vendor_id = pdev->vendor;
> + rid->xpigi_device_id = pdev->device;
> + rid->xpigi_sub_vendor_id = pdev->subsystem_vendor;
> + rid->xpigi_sub_device_id = pdev->subsystem_device;
> +}
> +
> +static int xmgnt_root_get_resource(struct device *dev, struct xrt_root_get_res *res)
> +{
> + struct pci_dev *pdev = to_pci_dev(dev);
> + struct xmgnt *xm;
> +
> + xm = pci_get_drvdata(pdev);
> + if (res->xpigr_region_id > PCI_STD_RESOURCE_END) {
> + xmgnt_err(xm, "Invalid bar idx %d", res->xpigr_region_id);
> + return -EINVAL;
> + }
> +
> + res->xpigr_res = &pdev->resource[res->xpigr_region_id];
> + return 0;
> +}
> +
> +static struct xroot_physical_function_callback xmgnt_xroot_pf_cb = {
> + .xpc_get_id = xmgnt_root_get_id,
> + .xpc_get_resource = xmgnt_root_get_resource,
> + .xpc_hot_reset = xmgnt_root_hot_reset,
> +};
> +
> +static int xmgnt_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> +{
> + int ret;
> + struct device *dev = &pdev->dev;
> + struct xmgnt *xm = devm_kzalloc(dev, sizeof(*xm), GFP_KERNEL);
> + char *dtb = NULL;
> +
> + if (!xm)
> + return -ENOMEM;
> + xm->pdev = pdev;
> + pci_set_drvdata(pdev, xm);
> +
> + ret = xmgnt_config_pci(xm);
> + if (ret)
> + goto failed;
> +
> + ret = xroot_probe(&pdev->dev, &xmgnt_xroot_pf_cb, &xm->root);
> + if (ret)
> + goto failed;
> +
> + ret = xmgnt_create_root_metadata(xm, &dtb);
> + if (ret)
> + goto failed_metadata;
> +
> + ret = xroot_create_group(xm->root, dtb);
> + vfree(dtb);
> + if (ret)
> + xmgnt_err(xm, "failed to create root group: %d", ret);
> +
> + if (!xroot_wait_for_bringup(xm->root))
> + xmgnt_err(xm, "failed to bringup all groups");
> + else
> + xm->ready = true;
> +
> + ret = sysfs_create_group(&pdev->dev.kobj, &xmgnt_root_attr_group);
> + if (ret) {
> + /* Warning instead of failing the probe. */
> + xmgnt_warn(xm, "create xmgnt root attrs failed: %d", ret);
> + }
> +
> + xroot_broadcast(xm->root, XRT_EVENT_POST_CREATION);
> + xmgnt_info(xm, "%s started successfully", XMGNT_MODULE_NAME);
> + return 0;
> +
> +failed_metadata:
> + xroot_remove(xm->root);
> +failed:
> + pci_set_drvdata(pdev, NULL);
> + return ret;
> +}
> +
> +static void xmgnt_remove(struct pci_dev *pdev)
> +{
> + struct xmgnt *xm = pci_get_drvdata(pdev);
> +
> + xroot_broadcast(xm->root, XRT_EVENT_PRE_REMOVAL);
> + sysfs_remove_group(&pdev->dev.kobj, &xmgnt_root_attr_group);
> + xroot_remove(xm->root);
> + pci_disable_pcie_error_reporting(xm->pdev);
> + xmgnt_info(xm, "%s cleaned up successfully", XMGNT_MODULE_NAME);
> +}
> +
> +static struct pci_driver xmgnt_driver = {
> + .name = XMGNT_MODULE_NAME,
> + .id_table = xmgnt_pci_ids,
> + .probe = xmgnt_probe,
> + .remove = xmgnt_remove,
> +};
> +
> +static int __init xmgnt_init(void)
> +{
> + int res = 0;
> +
> + res = xmgnt_register_leaf();
> + if (res)
> + return res;
> +
> + xmgnt_class = class_create(THIS_MODULE, XMGNT_MODULE_NAME);
> + if (IS_ERR(xmgnt_class))
> + return PTR_ERR(xmgnt_class);
> +
> + res = pci_register_driver(&xmgnt_driver);
> + if (res) {
> + class_destroy(xmgnt_class);
> + return res;
> + }
> +
> + return 0;
> +}
> +
> +static __exit void xmgnt_exit(void)
> +{
> + pci_unregister_driver(&xmgnt_driver);
> + class_destroy(xmgnt_class);
> + xmgnt_unregister_leaf();
> +}
> +
> +module_init(xmgnt_init);
> +module_exit(xmgnt_exit);
> +
> +MODULE_DEVICE_TABLE(pci, xmgnt_pci_ids);
> +MODULE_VERSION(XMGNT_DRIVER_VERSION);
> +MODULE_AUTHOR("XRT Team <[email protected]>");
> +MODULE_DESCRIPTION("Xilinx Alveo management function driver");
> +MODULE_LICENSE("GPL v2");

2021-05-03 21:53:06

by Lizhi Hou

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 04/20] fpga: xrt: xrt-lib driver manager

Hi Tom,


On 05/03/2021 06:06 AM, Tom Rix wrote:
>
> On 4/27/21 1:54 PM, Lizhi Hou wrote:
>> xrt-lib kernel module infrastructure code to register and manage all
>> leaf driver modules.
>>
>> Signed-off-by: Sonal Santan <[email protected]>
>> Signed-off-by: Max Zhen <[email protected]>
>> Signed-off-by: Lizhi Hou <[email protected]>
>> ---
>> drivers/fpga/xrt/include/subdev_id.h | 38 ++++
>> drivers/fpga/xrt/include/xdevice.h | 131 +++++++++++
>> drivers/fpga/xrt/include/xleaf.h | 205 +++++++++++++++++
>> drivers/fpga/xrt/lib/lib-drv.c | 328 +++++++++++++++++++++++++++
>> drivers/fpga/xrt/lib/lib-drv.h | 15 ++
>> 5 files changed, 717 insertions(+)
>> create mode 100644 drivers/fpga/xrt/include/subdev_id.h
>> create mode 100644 drivers/fpga/xrt/include/xdevice.h
>> create mode 100644 drivers/fpga/xrt/include/xleaf.h
>> create mode 100644 drivers/fpga/xrt/lib/lib-drv.c
>> create mode 100644 drivers/fpga/xrt/lib/lib-drv.h
>>
>> diff --git a/drivers/fpga/xrt/include/subdev_id.h b/drivers/fpga/xrt/include/subdev_id.h
>> new file mode 100644
>> index 000000000000..084737c53b88
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/include/subdev_id.h
>> @@ -0,0 +1,38 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#ifndef _XRT_SUBDEV_ID_H_
>> +#define _XRT_SUBDEV_ID_H_
>> +
>> +/*
>> + * Every subdev driver has an ID for others to refer to it. There can be
>> + * multiple instances of a subdev driver. A <subdev_id, subdev_instance> tuple
>> + * is a unique identification of a specific instance of a subdev driver.
>> + */
>> +enum xrt_subdev_id {
>> + XRT_SUBDEV_GRP = 1,
>
> From v4, I meant that you only needed to set XRT_SUBDEV_GRP=0.
>
> Why did all the values get incremented?
0 could be an uninitialized id. So we increased the minimum valid ID by
1 and treated 0 as an invalid/uninitialized id.
May I set XRT_SUBDEV_GRP = 1 and remove the following explicit values as a fix here,
or should I add "XRT_SUBDEV_INVALID = 0"?
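i.e. the second option would just make the reserved value explicit, something like
(a sketch only, to show the shape of that option):

    enum xrt_subdev_id {
            XRT_SUBDEV_INVALID = 0, /* explicit uninitialized/invalid id */
            XRT_SUBDEV_GRP = 1,
            XRT_SUBDEV_VSEC = 2,
            /* ... rest unchanged ... */
    };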
>
>> + XRT_SUBDEV_VSEC = 2,
>> + XRT_SUBDEV_VSEC_GOLDEN = 3,
>> + XRT_SUBDEV_DEVCTL = 4,
>> + XRT_SUBDEV_AXIGATE = 5,
>> + XRT_SUBDEV_ICAP = 6,
>> + XRT_SUBDEV_TEST = 7,
>> + XRT_SUBDEV_MGNT_MAIN = 8,
>> + XRT_SUBDEV_QSPI = 9,
>> + XRT_SUBDEV_MAILBOX = 10,
>> + XRT_SUBDEV_CMC = 11,
>> + XRT_SUBDEV_CALIB = 12,
>> + XRT_SUBDEV_CLKFREQ = 13,
>> + XRT_SUBDEV_CLOCK = 14,
>> + XRT_SUBDEV_SRSR = 15,
>> + XRT_SUBDEV_UCS = 16,
>> + XRT_SUBDEV_NUM = 17, /* Total number of subdevs. */
> Isn't this value now wrong?
No, the invalid id 0 is counted as well.

Thanks,
Lizhi
>> + XRT_ROOT = -1, /* Special ID for root driver. */
>> +};
>> +
>> +#endif /* _XRT_SUBDEV_ID_H_ */
>> diff --git a/drivers/fpga/xrt/include/xdevice.h
>> b/drivers/fpga/xrt/include/xdevice.h
>> new file mode 100644
>> index 000000000000..3afd96989fc5
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/include/xdevice.h
>> @@ -0,0 +1,131 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Lizhi Hou <[email protected]>
>> + */
>> +
>> +#ifndef _XRT_DEVICE_H_
>> +#define _XRT_DEVICE_H_
>> +
>> +#include <linux/fs.h>
>> +#include <linux/cdev.h>
>> +
>> +#define XRT_MAX_DEVICE_NODES 128
>> +#define XRT_INVALID_DEVICE_INST (XRT_MAX_DEVICE_NODES + 1)
>> +
>> +enum {
>> + XRT_DEVICE_STATE_NONE = 0,
>> + XRT_DEVICE_STATE_ADDED
>> +};
>> +
>> +/*
>> + * struct xrt_device - represent an xrt device on xrt bus
>> + *
>> + * dev: generic device interface.
>> + * id: id of the xrt device.
>> + */
>> +struct xrt_device {
>> + struct device dev;
>> + u32 subdev_id;
>> + const char *name;
>> + u32 instance;
>> + u32 state;
>> + u32 num_resources;
>> + struct resource *resource;
>> + void *sdev_data;
>> +};
>> +
>> +/*
>> + * If populated by the xrt device driver, the infrastructure will handle
>> + * the mechanics of char device (un)registration.
>> + */
>> +enum xrt_dev_file_mode {
>> + /* Infra create cdev, default file name */
>> + XRT_DEV_FILE_DEFAULT = 0,
>> + /* Infra create cdev, need to encode inst num in file name */
>> + XRT_DEV_FILE_MULTI_INST,
>> + /* No auto creation of cdev by infra, leaf handles it by itself */
>> + XRT_DEV_FILE_NO_AUTO,
>> +};
>> +
>> +struct xrt_dev_file_ops {
>> + const struct file_operations xsf_ops;
>> + dev_t xsf_dev_t;
>> + const char *xsf_dev_name;
>> + enum xrt_dev_file_mode xsf_mode;
>> +};
>> +
>> +/*
>> + * This struct defines the endpoints belonging to the same xrt device.
>> + */
>> +struct xrt_dev_ep_names {
>> + const char *ep_name;
>> + const char *compat;
>> +};
>> +
>> +struct xrt_dev_endpoints {
>> + struct xrt_dev_ep_names *xse_names;
>> + /* minimum number of endpoints to support the subdevice */
>> + u32 xse_min_ep;
>> +};
>> +
>> +/*
>> + * struct xrt_driver - represent a xrt device driver
>> + *
>> + * drv: driver model structure.
>> + * id_table: pointer to table of device IDs the driver is interested
>> in.
>> + * { } member terminated.
>> + * probe: mandatory callback for device binding.
>> + * remove: callback for device unbinding.
>> + */
>> +struct xrt_driver {
>> + struct device_driver driver;
>> + u32 subdev_id;
>> + struct xrt_dev_file_ops file_ops;
>> + struct xrt_dev_endpoints *endpoints;
>> +
>> + /*
>> + * Subdev driver callbacks populated by subdev driver.
>> + */
>> + int (*probe)(struct xrt_device *xrt_dev);
>> + void (*remove)(struct xrt_device *xrt_dev);
>> + /*
>> + * If leaf_call is defined, these are called by other leaf
>> drivers.
>> + * Note that root driver may call into leaf_call of a group
>> driver.
>> + */
>> + int (*leaf_call)(struct xrt_device *xrt_dev, u32 cmd, void *arg);
>> +};
>> +
>> +#define to_xrt_dev(d) container_of(d, struct xrt_device, dev)
>> +#define to_xrt_drv(d) container_of(d, struct xrt_driver, driver)
>> +
>> +static inline void *xrt_get_drvdata(const struct xrt_device *xdev)
>> +{
>> + return dev_get_drvdata(&xdev->dev);
>> +}
>> +
>> +static inline void xrt_set_drvdata(struct xrt_device *xdev, void *data)
>> +{
>> + dev_set_drvdata(&xdev->dev, data);
>> +}
>> +
>> +static inline void *xrt_get_xdev_data(struct device *dev)
>> +{
>> + struct xrt_device *xdev = to_xrt_dev(dev);
>> +
>> + return xdev->sdev_data;
>> +}
>> +
>> +struct xrt_device *
>> +xrt_device_register(struct device *parent, u32 id,
>> + struct resource *res, u32 res_num,
>> + void *pdata, size_t data_sz);
>> +void xrt_device_unregister(struct xrt_device *xdev);
>> +int xrt_register_driver(struct xrt_driver *drv);
>> +void xrt_unregister_driver(struct xrt_driver *drv);
>> +void *xrt_get_xdev_data(struct device *dev);
>> +struct resource *xrt_get_resource(struct xrt_device *xdev, u32 type,
>> u32 num);
>> +
>> +#endif /* _XRT_DEVICE_H_ */
>> diff --git a/drivers/fpga/xrt/include/xleaf.h
>> b/drivers/fpga/xrt/include/xleaf.h
>> new file mode 100644
>> index 000000000000..f065fc766e0f
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/include/xleaf.h
>> @@ -0,0 +1,205 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + * Sonal Santan <[email protected]>
>> + */
>> +
>> +#ifndef _XRT_XLEAF_H_
>> +#define _XRT_XLEAF_H_
>> +
>> +#include <linux/mod_devicetable.h>
>> +#include "xdevice.h"
>> +#include "subdev_id.h"
>> +#include "xroot.h"
>> +#include "events.h"
>> +
>> +/* All subdev drivers should use below common routines to print out
>> msg. */
>> +#define DEV(xdev) (&(xdev)->dev)
>> +#define DEV_PDATA(xdev) \
>> + ((struct xrt_subdev_platdata *)xrt_get_xdev_data(DEV(xdev)))
>> +#define DEV_FILE_OPS(xdev) \
>> + (&(to_xrt_drv((xdev)->dev.driver))->file_ops)
>> +#define FMT_PRT(prt_fn, xdev, fmt, args...) \
>> + ({typeof(xdev) (_xdev) = (xdev); \
>> + prt_fn(DEV(_xdev), "%s %s: " fmt, \
>> + DEV_PDATA(_xdev)->xsp_root_name, __func__, ##args); })
>> +#define xrt_err(xdev, fmt, args...) FMT_PRT(dev_err, xdev, fmt, ##args)
>> +#define xrt_warn(xdev, fmt, args...) FMT_PRT(dev_warn, xdev, fmt,
>> ##args)
>> +#define xrt_info(xdev, fmt, args...) FMT_PRT(dev_info, xdev, fmt,
>> ##args)
>> +#define xrt_dbg(xdev, fmt, args...) FMT_PRT(dev_dbg, xdev, fmt, ##args)
>> +
>> +#define XRT_DEFINE_REGMAP_CONFIG(config_name) \
>> + static const struct regmap_config config_name = { \
>> + .reg_bits = 32, \
>> + .val_bits = 32, \
>> + .reg_stride = 4, \
>> + .max_register = 0x1000, \
>> + }
>> +
>> +enum {
>> + /* Starting cmd for common leaf cmd implemented by all leaves. */
>> + XRT_XLEAF_COMMON_BASE = 0,
>> + /* Starting cmd for leaves' specific leaf cmds. */
>> + XRT_XLEAF_CUSTOM_BASE = 64,
>> +};
>> +
>> +enum xrt_xleaf_common_leaf_cmd {
>> + XRT_XLEAF_EVENT = XRT_XLEAF_COMMON_BASE,
>> +};
>> +
>> +/*
>> + * Partially initialized by the parent driver, then passed in as the
>> + * subdev driver's platform data when a subdev driver instance is created
>> + * through the device register API (xrt_device_register_data() or the like).
>> + *
>> + * Once the device register API returns, the driver framework makes a copy
>> + * of this buffer and maintains its life cycle. The content of the buffer
>> + * is completely owned by the subdev driver.
>> + *
>> + * Thus, the parent driver should be very careful when it touches this
>> + * buffer again once it is handed over to the subdev driver. The data
>> + * structure should not contain pointers to buffers that are managed by
>> + * other drivers or the parent driver, since those could have been freed
>> + * before the platform data buffer is freed by the driver framework.
>> + */
>> +struct xrt_subdev_platdata {
>> + /*
>> + * Per driver instance callback. The xdev points to the instance.
>> + * Should always be defined for subdev driver to get service
>> from root.
>> + */
>> + xrt_subdev_root_cb_t xsp_root_cb;
>> + void *xsp_root_cb_arg;
>> +
>> + /* Something to associate w/ root for msg printing. */
>> + const char *xsp_root_name;
>> +
>> + /*
>> + * Char dev support for this subdev instance.
>> + * Initialized by subdev driver.
>> + */
>> + struct cdev xsp_cdev;
>> + struct device *xsp_sysdev;
>> + struct mutex xsp_devnode_lock; /* devnode lock */
>> + struct completion xsp_devnode_comp;
>> + int xsp_devnode_ref;
>> + bool xsp_devnode_online;
>> + bool xsp_devnode_excl;
>> +
>> + /*
>> + * Subdev driver specific init data. The buffer should be embedded
>> + * in this data structure buffer after dtb, so that it can be
>> freed
>> + * together with platform data.
>> + */
>> + loff_t xsp_priv_off; /* Offset into this platform data buffer. */
>> + size_t xsp_priv_len;
>> +
>> + /*
>> + * Populated by parent driver to describe the device tree for
>> + * the subdev driver to handle. Should always be last one since
>> it's
>> + * of variable length.
>> + */
>> + bool xsp_dtb_valid;
>> + char xsp_dtb[0];
>> +};
>> +
>> +struct subdev_match_arg {
>> + enum xrt_subdev_id id;
>> + int instance;
>> +};
>> +
>> +bool xleaf_has_endpoint(struct xrt_device *xdev, const char
>> *endpoint_name);
>> +struct xrt_device *xleaf_get_leaf(struct xrt_device *xdev,
>> + xrt_subdev_match_t cb, void *arg);
>> +
>> +static inline bool subdev_match(enum xrt_subdev_id id, struct
>> xrt_device *xdev, void *arg)
>> +{
>> + const struct subdev_match_arg *a = (struct subdev_match_arg *)arg;
>> + int instance = a->instance;
>> +
>> + if (id != a->id)
>> + return false;
>> + if (instance != xdev->instance && instance !=
>> XRT_INVALID_DEVICE_INST)
>> + return false;
>> + return true;
>> +}
>> +
>> +static inline bool xrt_subdev_match_epname(enum xrt_subdev_id id,
>> + struct xrt_device *xdev,
>> void *arg)
>> +{
>> + return xleaf_has_endpoint(xdev, arg);
>> +}
>> +
>> +static inline struct xrt_device *
>> +xleaf_get_leaf_by_id(struct xrt_device *xdev,
>> + enum xrt_subdev_id id, int instance)
>> +{
>> + struct subdev_match_arg arg = { id, instance };
>> +
>> + return xleaf_get_leaf(xdev, subdev_match, &arg);
>> +}
>> +
>> +static inline struct xrt_device *
>> +xleaf_get_leaf_by_epname(struct xrt_device *xdev, const char *name)
>> +{
>> + return xleaf_get_leaf(xdev, xrt_subdev_match_epname, (void
>> *)name);
>> +}
>> +
>> +static inline int xleaf_call(struct xrt_device *tgt, u32 cmd, void
>> *arg)
>> +{
>> + return (to_xrt_drv(tgt->dev.driver)->leaf_call)(tgt, cmd, arg);
>> +}
>> +
>> +int xleaf_broadcast_event(struct xrt_device *xdev, enum xrt_events
>> evt, bool async);
>> +int xleaf_create_group(struct xrt_device *xdev, char *dtb);
>> +int xleaf_destroy_group(struct xrt_device *xdev, int instance);
>> +void xleaf_get_root_res(struct xrt_device *xdev, u32 region_id,
>> struct resource **res);
>> +void xleaf_get_root_id(struct xrt_device *xdev, unsigned short
>> *vendor, unsigned short *device,
>> + unsigned short *subvendor, unsigned short
>> *subdevice);
>> +void xleaf_hot_reset(struct xrt_device *xdev);
>> +int xleaf_put_leaf(struct xrt_device *xdev, struct xrt_device *leaf);
>> +struct device *xleaf_register_hwmon(struct xrt_device *xdev, const
>> char *name, void *drvdata,
>> + const struct attribute_group **grps);
>> +void xleaf_unregister_hwmon(struct xrt_device *xdev, struct device
>> *hwmon);
>> +int xleaf_wait_for_group_bringup(struct xrt_device *xdev);
>> +
>> +/*
>> + * Character device helper APIs for use by leaf drivers
>> + */
>> +static inline bool xleaf_devnode_enabled(struct xrt_device *xdev)
>> +{
>> + return DEV_FILE_OPS(xdev)->xsf_ops.open;
>> +}
>> +
>> +int xleaf_devnode_create(struct xrt_device *xdev,
>> + const char *file_name, const char *inst_name);
>> +void xleaf_devnode_destroy(struct xrt_device *xdev);
>> +
>> +struct xrt_device *xleaf_devnode_open_excl(struct inode *inode);
>> +struct xrt_device *xleaf_devnode_open(struct inode *inode);
>> +void xleaf_devnode_close(struct inode *inode);
>> +
>> +/* Module's init/fini routines for leaf driver in xrt-lib module */
>> +#define XRT_LEAF_INIT_FINI_FUNC(name) \
>> +void name##_leaf_init_fini(bool
>> init) \
>> +{ \
>> + if (init) \
>> + xrt_register_driver(&xrt_##name##_driver); \
>> + else \
>> + xrt_unregister_driver(&xrt_##name##_driver); \
>> +}
>> +
>> +/* Module's init/fini routines for leaf driver in xrt-lib module */
>> +void group_leaf_init_fini(bool init);
>> +void vsec_leaf_init_fini(bool init);
>> +void devctl_leaf_init_fini(bool init);
>> +void axigate_leaf_init_fini(bool init);
>> +void icap_leaf_init_fini(bool init);
>> +void calib_leaf_init_fini(bool init);
>> +void clkfreq_leaf_init_fini(bool init);
>> +void clock_leaf_init_fini(bool init);
>> +void ucs_leaf_init_fini(bool init);
>> +
>> +#endif /* _XRT_LEAF_H_ */
>> diff --git a/drivers/fpga/xrt/lib/lib-drv.c
>> b/drivers/fpga/xrt/lib/lib-drv.c
>> new file mode 100644
>> index 000000000000..ba4ac4930823
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/lib-drv.c
>> @@ -0,0 +1,328 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + * Lizhi Hou <[email protected]>
>> + */
>> +
>> +#include <linux/module.h>
>> +#include <linux/vmalloc.h>
>> +#include <linux/slab.h>
>> +#include "xleaf.h"
>> +#include "xroot.h"
>> +#include "lib-drv.h"
>> +
>> +#define XRT_IPLIB_MODULE_NAME "xrt-lib"
>> +#define XRT_IPLIB_MODULE_VERSION "4.0.0"
>> +#define XRT_DRVNAME(drv) ((drv)->driver.name)
>> +
>> +#define XRT_SUBDEV_ID_SHIFT 16
>> +#define XRT_SUBDEV_ID_MASK ((1 << XRT_SUBDEV_ID_SHIFT) - 1)
>> +
>> +struct xrt_find_drv_data {
>> + enum xrt_subdev_id id;
>> + struct xrt_driver *xdrv;
>> +};
>> +
>> +struct class *xrt_class;
>> +static DEFINE_IDA(xrt_device_ida);
>> +
>> +static inline u32 xrt_instance_to_id(enum xrt_subdev_id id, u32
>> instance)
>> +{
>> + return (id << XRT_SUBDEV_ID_SHIFT) | instance;
>> +}
>> +
>> +static inline u32 xrt_id_to_instance(u32 id)
>> +{
>> + return (id & XRT_SUBDEV_ID_MASK);
>> +}
>> +
>> +static int xrt_bus_match(struct device *dev, struct device_driver *drv)
>> +{
>> + struct xrt_device *xdev = to_xrt_dev(dev);
>> + struct xrt_driver *xdrv = to_xrt_drv(drv);
>> +
>> + if (xdev->subdev_id == xdrv->subdev_id)
>> + return 1;
>> +
>> + return 0;
>> +}
>> +
>> +static int xrt_bus_probe(struct device *dev)
>> +{
>> + struct xrt_driver *xdrv = to_xrt_drv(dev->driver);
>> + struct xrt_device *xdev = to_xrt_dev(dev);
>> +
>> + return xdrv->probe(xdev);
>> +}
>> +
>> +static int xrt_bus_remove(struct device *dev)
>> +{
>> + struct xrt_driver *xdrv = to_xrt_drv(dev->driver);
>> + struct xrt_device *xdev = to_xrt_dev(dev);
>> +
>> + if (xdrv->remove)
>> + xdrv->remove(xdev);
>> +
>> + return 0;
>> +}
>> +
>> +struct bus_type xrt_bus_type = {
>> + .name = "xrt",
>> + .match = xrt_bus_match,
>> + .probe = xrt_bus_probe,
>> + .remove = xrt_bus_remove,
>> +};
>> +
>> +int xrt_register_driver(struct xrt_driver *drv)
>> +{
>> + const char *drvname = XRT_DRVNAME(drv);
>> + int rc = 0;
>> +
>> + /* Initialize dev_t for char dev node. */
>> + if (drv->file_ops.xsf_ops.open) {
>> + rc = alloc_chrdev_region(&drv->file_ops.xsf_dev_t, 0,
>> + XRT_MAX_DEVICE_NODES, drvname);
>> + if (rc) {
>> + pr_err("failed to alloc dev minor for %s: %d\n", drvname, rc);
>> + return rc;
>> + }
>> + } else {
>> + drv->file_ops.xsf_dev_t = (dev_t)-1;
>> + }
>> +
>> + drv->driver.owner = THIS_MODULE;
>> + drv->driver.bus = &xrt_bus_type;
>> +
>> + rc = driver_register(&drv->driver);
>> + if (rc) {
>> + pr_err("register %s xrt driver failed\n", drvname);
>> + if (drv->file_ops.xsf_dev_t != (dev_t)-1) {
>> + unregister_chrdev_region(drv->file_ops.xsf_dev_t,
>> + XRT_MAX_DEVICE_NODES);
>> + }
>> + return rc;
>> + }
>> +
>> + pr_info("%s registered successfully\n", drvname);
>> +
>> + return 0;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_register_driver);
>> +
>> +void xrt_unregister_driver(struct xrt_driver *drv)
>> +{
>> + driver_unregister(&drv->driver);
>> +
>> + if (drv->file_ops.xsf_dev_t != (dev_t)-1)
>> + unregister_chrdev_region(drv->file_ops.xsf_dev_t,
>> XRT_MAX_DEVICE_NODES);
>> +
>> + pr_info("%s unregistered successfully\n", XRT_DRVNAME(drv));
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_unregister_driver);
>> +
>> +static int __find_driver(struct device_driver *drv, void *_data)
>> +{
>> + struct xrt_driver *xdrv = to_xrt_drv(drv);
>> + struct xrt_find_drv_data *data = _data;
>> +
>> + if (xdrv->subdev_id == data->id) {
>> + data->xdrv = xdrv;
>> + return 1;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +const char *xrt_drv_name(enum xrt_subdev_id id)
>> +{
>> + struct xrt_find_drv_data data = { 0 };
>> +
>> + data.id = id;
>> + bus_for_each_drv(&xrt_bus_type, NULL, &data, __find_driver);
>> +
>> + if (data.xdrv)
>> + return XRT_DRVNAME(data.xdrv);
>> +
>> + return NULL;
>> +}
>> +
>> +static int xrt_drv_get_instance(enum xrt_subdev_id id)
>> +{
>> + int ret;
>> +
>> + ret = ida_alloc_range(&xrt_device_ida, xrt_instance_to_id(id, 0),
>> + xrt_instance_to_id(id,
>> XRT_MAX_DEVICE_NODES),
>> + GFP_KERNEL);
>> + if (ret < 0)
>> + return ret;
>> +
>> + return xrt_id_to_instance((u32)ret);
>> +}
>> +
>> +static void xrt_drv_put_instance(enum xrt_subdev_id id, int instance)
>> +{
>> + ida_free(&xrt_device_ida, xrt_instance_to_id(id, instance));
>> +}
>> +
>> +struct xrt_dev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id)
>> +{
>> + struct xrt_find_drv_data data = { 0 };
>> +
>> + data.id = id;
>> + bus_for_each_drv(&xrt_bus_type, NULL, &data, __find_driver);
>> +
>> + if (data.xdrv)
>> + return data.xdrv->endpoints;
>> +
>> + return NULL;
>> +}
>> +
>> +static void xrt_device_release(struct device *dev)
>> +{
>> + struct xrt_device *xdev = container_of(dev, struct xrt_device,
>> dev);
>> +
>> + kfree(xdev);
>> +}
>> +
>> +void xrt_device_unregister(struct xrt_device *xdev)
>> +{
>> + if (xdev->state == XRT_DEVICE_STATE_ADDED)
>> + device_del(&xdev->dev);
>> +
>> + vfree(xdev->sdev_data);
>> + kfree(xdev->resource);
>> +
>> + if (xdev->instance != XRT_INVALID_DEVICE_INST)
>> + xrt_drv_put_instance(xdev->subdev_id, xdev->instance);
>> +
>> + if (xdev->dev.release == xrt_device_release)
>> + put_device(&xdev->dev);
>> +}
>> +
>> +struct xrt_device *
>> +xrt_device_register(struct device *parent, u32 id,
>> + struct resource *res, u32 res_num,
>> + void *pdata, size_t data_sz)
>> +{
>> + struct xrt_device *xdev = NULL;
>> + int ret;
>> +
>> + xdev = kzalloc(sizeof(*xdev), GFP_KERNEL);
>> + if (!xdev)
>> + return NULL;
>> + xdev->instance = XRT_INVALID_DEVICE_INST;
>> +
>> + /* Obtain dev instance number. */
>> + ret = xrt_drv_get_instance(id);
>> + if (ret < 0) {
>> + dev_err(parent, "failed get instance, ret %d", ret);
>> + goto fail;
>> + }
>> +
>> + xdev->instance = ret;
>> + xdev->name = xrt_drv_name(id);
>> + xdev->subdev_id = id;
>> + device_initialize(&xdev->dev);
>> + xdev->dev.release = xrt_device_release;
>> + xdev->dev.parent = parent;
>> +
>> + xdev->dev.bus = &xrt_bus_type;
>> + dev_set_name(&xdev->dev, "%s.%d", xdev->name, xdev->instance);
>> +
>> + xdev->num_resources = res_num;
>> + xdev->resource = kmemdup(res, sizeof(*res) * res_num, GFP_KERNEL);
>> + if (!xdev->resource)
>> + goto fail;
>> +
>> + xdev->sdev_data = vzalloc(data_sz);
>> + if (!xdev->sdev_data)
>> + goto fail;
>> +
>> + memcpy(xdev->sdev_data, pdata, data_sz);
>> +
>> + ret = device_add(&xdev->dev);
>> + if (ret) {
>> + dev_err(parent, "failed add device, ret %d", ret);
>> + goto fail;
>> + }
>> + xdev->state = XRT_DEVICE_STATE_ADDED;
>> +
>> + return xdev;
>> +
>> +fail:
>> + xrt_device_unregister(xdev);
>> + kfree(xdev);
>> +
>> + return NULL;
>> +}
>> +
>> +struct resource *xrt_get_resource(struct xrt_device *xdev, u32 type,
>> u32 num)
>> +{
>> + u32 i;
>> +
>> + for (i = 0; i < xdev->num_resources; i++) {
>> + struct resource *r = &xdev->resource[i];
>> +
>> + if (type == resource_type(r) && num-- == 0)
>> + return r;
>> + }
>> + return NULL;
>> +}
>> +
>> +/*
>> + * Leaf drivers' module init/fini callbacks. This is not an open
>> + * infrastructure for dynamically plugging in drivers. All drivers
>> + * should be statically added.
>
> ok
>
> Tom
>
>> + */
>> +static void (*leaf_init_fini_cbs[])(bool) = {
>> + group_leaf_init_fini,
>> + vsec_leaf_init_fini,
>> + devctl_leaf_init_fini,
>> + axigate_leaf_init_fini,
>> + icap_leaf_init_fini,
>> + calib_leaf_init_fini,
>> + clkfreq_leaf_init_fini,
>> + clock_leaf_init_fini,
>> + ucs_leaf_init_fini,
>> +};
>> +
>> +static __init int xrt_lib_init(void)
>> +{
>> + int ret;
>> + int i;
>> +
>> + ret = bus_register(&xrt_bus_type);
>> + if (ret)
>> + return ret;
>> +
>> + xrt_class = class_create(THIS_MODULE, XRT_IPLIB_MODULE_NAME);
>> + if (IS_ERR(xrt_class)) {
>> + bus_unregister(&xrt_bus_type);
>> + return PTR_ERR(xrt_class);
>> + }
>> +
>> + for (i = 0; i < ARRAY_SIZE(leaf_init_fini_cbs); i++)
>> + leaf_init_fini_cbs[i](true);
>> + return 0;
>> +}
>> +
>> +static __exit void xrt_lib_fini(void)
>> +{
>> + int i;
>> +
>> + for (i = 0; i < ARRAY_SIZE(leaf_init_fini_cbs); i++)
>> + leaf_init_fini_cbs[i](false);
>> +
>> + class_destroy(xrt_class);
>> + bus_unregister(&xrt_bus_type);
>> +}
>> +
>> +module_init(xrt_lib_init);
>> +module_exit(xrt_lib_fini);
>> +
>> +MODULE_VERSION(XRT_IPLIB_MODULE_VERSION);
>> +MODULE_AUTHOR("XRT Team <[email protected]>");
>> +MODULE_DESCRIPTION("Xilinx Alveo IP Lib driver");
>> +MODULE_LICENSE("GPL v2");
>> diff --git a/drivers/fpga/xrt/lib/lib-drv.h
>> b/drivers/fpga/xrt/lib/lib-drv.h
>> new file mode 100644
>> index 000000000000..514f904c81c0
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/lib-drv.h
>> @@ -0,0 +1,15 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#ifndef _LIB_DRV_H_
>> +#define _LIB_DRV_H_
>> +
>> +const char *xrt_drv_name(enum xrt_subdev_id id);
>> +struct xrt_dev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id);
>> +
>> +#endif /* _LIB_DRV_H_ */
>

2021-05-04 13:51:21

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 10/20] fpga: xrt: main driver for management function device


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> xrt driver that handles IOCTLs, such as hot reset and xclbin download.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/xmgnt-main.h | 34 ++
> drivers/fpga/xrt/mgnt/xmgnt-main.c | 660 ++++++++++++++++++++++++++
> drivers/fpga/xrt/mgnt/xmgnt.h | 33 ++
> include/uapi/linux/xrt/xmgnt-ioctl.h | 46 ++
> 4 files changed, 773 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xmgnt-main.h
> create mode 100644 drivers/fpga/xrt/mgnt/xmgnt-main.c
> create mode 100644 drivers/fpga/xrt/mgnt/xmgnt.h
> create mode 100644 include/uapi/linux/xrt/xmgnt-ioctl.h
>
> diff --git a/drivers/fpga/xrt/include/xmgnt-main.h b/drivers/fpga/xrt/include/xmgnt-main.h
> new file mode 100644
> index 000000000000..b46dac710cd3
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xmgnt-main.h
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XMGNT_MAIN_H_
> +#define _XMGNT_MAIN_H_
> +
> +#include <linux/xrt/xclbin.h>
> +#include "xleaf.h"
> +
> +enum xrt_mgnt_main_leaf_cmd {
> + XRT_MGNT_MAIN_GET_AXLF_SECTION = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> + XRT_MGNT_MAIN_GET_VBNV,
> +};
> +
> +/* There are three kind of partitions. Each of them is programmed independently. */
> +enum provider_kind {
> + XMGNT_BLP, /* Base Logic Partition */
> + XMGNT_PLP, /* Provider Logic Partition */
> + XMGNT_ULP, /* User Logic Partition */
> +};
> +
> +struct xrt_mgnt_main_get_axlf_section {
> + enum provider_kind xmmigas_axlf_kind;
> + enum axlf_section_kind xmmigas_section_kind;
> + void *xmmigas_section;
> + u64 xmmigas_section_size;
> +};
> +
> +#endif /* _XMGNT_MAIN_H_ */
> diff --git a/drivers/fpga/xrt/mgnt/xmgnt-main.c b/drivers/fpga/xrt/mgnt/xmgnt-main.c
> new file mode 100644
> index 000000000000..a1c6dc34f6c0
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgnt/xmgnt-main.c
> @@ -0,0 +1,660 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA MGNT PF entry point driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Sonal Santan <[email protected]>
> + */
> +
> +#include <linux/firmware.h>
> +#include <linux/uaccess.h>
> +#include <linux/slab.h>
> +#include "xclbin-helper.h"
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include <linux/xrt/xmgnt-ioctl.h>
> +#include "xleaf/devctl.h"
> +#include "xmgnt-main.h"
> +#include "xrt-mgr.h"
> +#include "xleaf/icap.h"
> +#include "xleaf/axigate.h"
> +#include "xmgnt.h"
> +
> +#define XMGNT_MAIN "xmgnt_main"
> +#define XMGNT_SUPP_XCLBIN_MAJOR 2
> +
> +#define XMGNT_FLAG_FLASH_READY 1
> +#define XMGNT_FLAG_DEVCTL_READY 2
> +
> +#define XMGNT_UUID_STR_LEN (UUID_SIZE * 2 + 1)
> +
> +struct xmgnt_main {
> + struct xrt_device *xdev;
> + struct axlf *firmware_blp;
> + struct axlf *firmware_plp;
> + struct axlf *firmware_ulp;
> + u32 flags;
> + struct fpga_manager *fmgr;
> + struct mutex lock; /* busy lock */
ok
> + uuid_t *blp_interface_uuids;
> + u32 blp_interface_uuid_num;
> +};
> +
> +/*
> + * VBNV stands for Vendor, BoardID, Name, Version. It is a string
> + * which describes board and shell.
> + *
> + * Caller is responsible for freeing the returned string.
> + */
> +char *xmgnt_get_vbnv(struct xrt_device *xdev)
> +{
> + struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
> + const char *vbnv;
> + char *ret;
> + int i;
> +
> + if (xmm->firmware_plp)
> + vbnv = xmm->firmware_plp->header.platform_vbnv;
> + else if (xmm->firmware_blp)
> + vbnv = xmm->firmware_blp->header.platform_vbnv;
> + else
> + return NULL;
> +
> + ret = kstrdup(vbnv, GFP_KERNEL);
> + if (!ret)
> + return NULL;
> +
> + for (i = 0; i < strlen(ret); i++) {
> + if (ret[i] == ':' || ret[i] == '.')
> + ret[i] = '_';
> + }
> + return ret;
> +}
> +
> +static int get_dev_uuid(struct xrt_device *xdev, char *uuidstr, size_t len)
> +{
> + struct xrt_devctl_rw devctl_arg = { 0 };
> + struct xrt_device *devctl_leaf;
> + char uuid_buf[UUID_SIZE];
> + uuid_t uuid;
> + int err;
> +
> + devctl_leaf = xleaf_get_leaf_by_epname(xdev, XRT_MD_NODE_BLP_ROM);
> + if (!devctl_leaf) {
> + xrt_err(xdev, "can not get %s", XRT_MD_NODE_BLP_ROM);
> + return -EINVAL;
> + }
> +
> + devctl_arg.xdr_id = XRT_DEVCTL_ROM_UUID;
> + devctl_arg.xdr_buf = uuid_buf;
> + devctl_arg.xdr_len = sizeof(uuid_buf);
> + devctl_arg.xdr_offset = 0;
> + err = xleaf_call(devctl_leaf, XRT_DEVCTL_READ, &devctl_arg);
> + xleaf_put_leaf(xdev, devctl_leaf);
> + if (err) {
> + xrt_err(xdev, "can not get uuid: %d", err);
> + return err;
> + }
> + import_uuid(&uuid, uuid_buf);
> + xrt_md_trans_uuid2str(&uuid, uuidstr);
> +
> + return 0;
> +}
> +
> +int xmgnt_hot_reset(struct xrt_device *xdev)
> +{
> + int ret = xleaf_broadcast_event(xdev, XRT_EVENT_PRE_HOT_RESET, false);
> +
> + if (ret) {
> + xrt_err(xdev, "offline failed, hot reset is canceled");
> + return ret;
> + }
> +
> + xleaf_hot_reset(xdev);
> + xleaf_broadcast_event(xdev, XRT_EVENT_POST_HOT_RESET, false);
> + return 0;
> +}
> +
> +static ssize_t reset_store(struct device *dev, struct device_attribute *da,
> + const char *buf, size_t count)
> +{
> + struct xrt_device *xdev = to_xrt_dev(dev);
> +
> + xmgnt_hot_reset(xdev);
> + return count;
> +}
> +static DEVICE_ATTR_WO(reset);
> +
> +static ssize_t VBNV_show(struct device *dev, struct device_attribute *da, char *buf)
> +{
> + struct xrt_device *xdev = to_xrt_dev(dev);
> + ssize_t ret;
> + char *vbnv;
> +
> + vbnv = xmgnt_get_vbnv(xdev);
> + if (!vbnv)
> + return -EINVAL;
> + ret = sprintf(buf, "%s\n", vbnv);
> + kfree(vbnv);
> + return ret;
> +}
> +static DEVICE_ATTR_RO(VBNV);
> +
> +/* The logic uuid uniquely identifies the partition. */
> +static ssize_t logic_uuids_show(struct device *dev, struct device_attribute *da, char *buf)
> +{
> + struct xrt_device *xdev = to_xrt_dev(dev);
> + char uuid[XMGNT_UUID_STR_LEN];
> + ssize_t ret;
> +
> + /* Getting UUID pointed to by VSEC, should be the same as logic UUID of BLP. */
> + ret = get_dev_uuid(xdev, uuid, sizeof(uuid));
> + if (ret)
> + return ret;
> + ret = sprintf(buf, "%s\n", uuid);
> + return ret;
> +}
> +static DEVICE_ATTR_RO(logic_uuids);
> +
> +static ssize_t interface_uuids_show(struct device *dev, struct device_attribute *da, char *buf)
> +{
> + struct xrt_device *xdev = to_xrt_dev(dev);
> + struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
> + ssize_t ret = 0;
> + u32 i;
> +
> + for (i = 0; i < xmm->blp_interface_uuid_num; i++) {
> + char uuidstr[XMGNT_UUID_STR_LEN];
> +
> + xrt_md_trans_uuid2str(&xmm->blp_interface_uuids[i], uuidstr);
> + ret += sprintf(buf + ret, "%s\n", uuidstr);
> + }
> + return ret;
> +}
> +static DEVICE_ATTR_RO(interface_uuids);
> +
> +static struct attribute *xmgnt_main_attrs[] = {
> + &dev_attr_reset.attr,
> + &dev_attr_VBNV.attr,
> + &dev_attr_logic_uuids.attr,
> + &dev_attr_interface_uuids.attr,
> + NULL,
> +};
> +
> +static const struct attribute_group xmgnt_main_attrgroup = {
> + .attrs = xmgnt_main_attrs,
> +};
> +
> +static int load_firmware_from_disk(struct xrt_device *xdev, struct axlf **fw_buf, size_t *len)
> +{
> + char uuid[XMGNT_UUID_STR_LEN];
> + const struct firmware *fw;
> + char fw_name[256];
> + int err = 0;
> +
> + *len = 0;
> + err = get_dev_uuid(xdev, uuid, sizeof(uuid));
> + if (err)
> + return err;
> +
> + snprintf(fw_name, sizeof(fw_name), "xilinx/%s/partition.xsabin", uuid);
> + xrt_info(xdev, "try loading fw: %s", fw_name);
> +
> + err = request_firmware(&fw, fw_name, DEV(xdev));
> + if (err)
> + return err;
> +
> + *fw_buf = vmalloc(fw->size);
> + if (!*fw_buf) {
> + release_firmware(fw);
> + return -ENOMEM;
> + }
> +
> + *len = fw->size;
> + memcpy(*fw_buf, fw->data, fw->size);
> +
> + release_firmware(fw);
> + return 0;
> +}
> +
> +static const struct axlf *xmgnt_get_axlf_firmware(struct xmgnt_main *xmm, enum provider_kind kind)
> +{
> + switch (kind) {
> + case XMGNT_BLP:
> + return xmm->firmware_blp;
> + case XMGNT_PLP:
> + return xmm->firmware_plp;
> + case XMGNT_ULP:
> + return xmm->firmware_ulp;
> + default:
> + xrt_err(xmm->xdev, "unknown axlf kind: %d", kind);
> + return NULL;
> + }
> +}
> +
> +/* The caller needs to free the returned dtb buffer */
> +char *xmgnt_get_dtb(struct xrt_device *xdev, enum provider_kind kind)
> +{
> + struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
> + const struct axlf *provider;
> + char *dtb = NULL;
> + int rc;
> +
> + provider = xmgnt_get_axlf_firmware(xmm, kind);
> + if (!provider)
> + return dtb;
> +
> + rc = xrt_xclbin_get_metadata(DEV(xdev), provider, &dtb);
> + if (rc)
> + xrt_err(xdev, "failed to find dtb: %d", rc);
> + return dtb;
> +}
> +
> +/* The caller needs to free the returned uuid buffer */
> +static const char *get_uuid_from_firmware(struct xrt_device *xdev, const struct axlf *xclbin)
> +{
> + const void *uuiddup = NULL;
> + const void *uuid = NULL;
> + void *dtb = NULL;
> + int rc;
> +
> + rc = xrt_xclbin_get_section(DEV(xdev), xclbin, PARTITION_METADATA, &dtb, NULL);
> + if (rc)
> + return NULL;
> +
> + rc = xrt_md_get_prop(DEV(xdev), dtb, NULL, NULL, XRT_MD_PROP_LOGIC_UUID, &uuid, NULL);
> + if (!rc)
> + uuiddup = kstrdup(uuid, GFP_KERNEL);
> + vfree(dtb);
> + return uuiddup;
> +}
> +
> +static bool is_valid_firmware(struct xrt_device *xdev,
> + const struct axlf *xclbin, size_t fw_len)
> +{
> + const char *fw_buf = (const char *)xclbin;
> + size_t axlflen = xclbin->header.length;
> + char dev_uuid[XMGNT_UUID_STR_LEN];
> + const char *fw_uuid;
> + int err;
> +
> + err = get_dev_uuid(xdev, dev_uuid, sizeof(dev_uuid));
> + if (err)
> + return false;
> +
> + if (memcmp(fw_buf, XCLBIN_VERSION2, sizeof(XCLBIN_VERSION2)) != 0) {
> + xrt_err(xdev, "unknown fw format");
> + return false;
> + }
> +
> + if (axlflen > fw_len) {
> + xrt_err(xdev, "truncated fw, length: %zu, expect: %zu", fw_len, axlflen);
> + return false;
> + }
> +
> + if (xclbin->header.version_major != XMGNT_SUPP_XCLBIN_MAJOR) {
> + xrt_err(xdev, "firmware is not supported");
> + return false;
> + }
> +
> + fw_uuid = get_uuid_from_firmware(xdev, xclbin);
> + if (!fw_uuid || strncmp(fw_uuid, dev_uuid, sizeof(dev_uuid)) != 0) {
> + xrt_err(xdev, "bad fw UUID: %s, expect: %s",
> + fw_uuid ? fw_uuid : "<none>", dev_uuid);
> + kfree(fw_uuid);
> + return false;
> + }
> +
> + kfree(fw_uuid);
> + return true;
> +}
> +
> +int xmgnt_get_provider_uuid(struct xrt_device *xdev, enum provider_kind kind, uuid_t *uuid)
> +{
> + struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
> + const struct axlf *fwbuf;
> + const char *fw_uuid;
> + int rc = -ENOENT;
> +
> + mutex_lock(&xmm->lock);
> +
> + fwbuf = xmgnt_get_axlf_firmware(xmm, kind);
> + if (!fwbuf)
> + goto done;
> +
> + fw_uuid = get_uuid_from_firmware(xdev, fwbuf);
> + if (!fw_uuid)
> + goto done;
> +
> + rc = xrt_md_trans_str2uuid(DEV(xdev), fw_uuid, uuid);
> + kfree(fw_uuid);
> +
> +done:
> + mutex_unlock(&xmm->lock);
> + return rc;
> +}
> +
> +static int xmgnt_create_blp(struct xmgnt_main *xmm)
> +{
> + const struct axlf *provider = xmgnt_get_axlf_firmware(xmm, XMGNT_BLP);
> + struct xrt_device *xdev = xmm->xdev;
> + int rc = 0;
> + char *dtb = NULL;
> +
> + dtb = xmgnt_get_dtb(xdev, XMGNT_BLP);
> + if (!dtb) {
> + xrt_err(xdev, "did not get BLP metadata");
> + return -EINVAL;
> + }
> +
> + rc = xmgnt_process_xclbin(xmm->xdev, xmm->fmgr, provider, XMGNT_BLP);
> + if (rc) {
> + xrt_err(xdev, "failed to process BLP: %d", rc);
> + goto failed;
> + }
> +
> + rc = xleaf_create_group(xdev, dtb);
> + if (rc < 0)
> + xrt_err(xdev, "failed to create BLP group: %d", rc);
> + else
> + rc = 0;
> +
> + WARN_ON(xmm->blp_interface_uuids);
> + rc = xrt_md_get_interface_uuids(&xdev->dev, dtb, 0, NULL);
> + if (rc > 0) {
> + xmm->blp_interface_uuid_num = rc;
> + xmm->blp_interface_uuids =
> + kcalloc(xmm->blp_interface_uuid_num, sizeof(uuid_t), GFP_KERNEL);

ok

Reviewed-by: Tom Rix <[email protected]>

> + if (!xmm->blp_interface_uuids) {
> + rc = -ENOMEM;
> + goto failed;
> + }
> + xrt_md_get_interface_uuids(&xdev->dev, dtb, xmm->blp_interface_uuid_num,
> + xmm->blp_interface_uuids);
> + }
> +
> +failed:
> + vfree(dtb);
> + return rc;
> +}
> +
> +static int xmgnt_load_firmware(struct xmgnt_main *xmm)
> +{
> + struct xrt_device *xdev = xmm->xdev;
> + size_t fwlen;
> + int rc;
> +
> + rc = load_firmware_from_disk(xdev, &xmm->firmware_blp, &fwlen);
> + if (!rc && is_valid_firmware(xdev, xmm->firmware_blp, fwlen))
> + xmgnt_create_blp(xmm);
> + else
> + xrt_err(xdev, "failed to find firmware, giving up: %d", rc);
> + return rc;
> +}
> +
> +static void xmgnt_main_event_cb(struct xrt_device *xdev, void *arg)
> +{
> + struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
> + struct xrt_event *evt = (struct xrt_event *)arg;
> + enum xrt_events e = evt->xe_evt;
> + struct xrt_device *leaf;
> + enum xrt_subdev_id id;
> +
> + id = evt->xe_subdev.xevt_subdev_id;
> + switch (e) {
> + case XRT_EVENT_POST_CREATION: {
> + if (id == XRT_SUBDEV_DEVCTL && !(xmm->flags & XMGNT_FLAG_DEVCTL_READY)) {
> + leaf = xleaf_get_leaf_by_epname(xdev, XRT_MD_NODE_BLP_ROM);
> + if (leaf) {
> + xmm->flags |= XMGNT_FLAG_DEVCTL_READY;
> + xleaf_put_leaf(xdev, leaf);
> + }
> + } else if (id == XRT_SUBDEV_QSPI && !(xmm->flags & XMGNT_FLAG_FLASH_READY)) {
> + xmm->flags |= XMGNT_FLAG_FLASH_READY;
> + } else {
> + break;
> + }
> +
> + if (xmm->flags & XMGNT_FLAG_DEVCTL_READY)
> + xmgnt_load_firmware(xmm);
> + break;
> + }
> + case XRT_EVENT_PRE_REMOVAL:
> + break;
> + default:
> + xrt_dbg(xdev, "ignored event %d", e);
> + break;
> + }
> +}
> +
> +static int xmgnt_main_probe(struct xrt_device *xdev)
> +{
> + struct xmgnt_main *xmm;
> +
> + xrt_info(xdev, "probing...");
> +
> + xmm = devm_kzalloc(DEV(xdev), sizeof(*xmm), GFP_KERNEL);
> + if (!xmm)
> + return -ENOMEM;
> +
> + xmm->xdev = xdev;
> + xmm->fmgr = xmgnt_fmgr_probe(xdev);
> + if (IS_ERR(xmm->fmgr))
> + return PTR_ERR(xmm->fmgr);
> +
> + xrt_set_drvdata(xdev, xmm);
> + mutex_init(&xmm->lock);
> +
> + /* Ready to handle req thru sysfs nodes. */
> + if (sysfs_create_group(&DEV(xdev)->kobj, &xmgnt_main_attrgroup))
> + xrt_err(xdev, "failed to create sysfs group");
> + return 0;
> +}
> +
> +static void xmgnt_main_remove(struct xrt_device *xdev)
> +{
> + struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
> +
> + /* By now, group driver should prevent any inter-leaf call. */
> +
> + xrt_info(xdev, "leaving...");
> +
> + kfree(xmm->blp_interface_uuids);
> + vfree(xmm->firmware_blp);
> + vfree(xmm->firmware_plp);
> + vfree(xmm->firmware_ulp);
> + xmgnt_region_cleanup_all(xdev);
> + xmgnt_fmgr_remove(xmm->fmgr);
> + sysfs_remove_group(&DEV(xdev)->kobj, &xmgnt_main_attrgroup);
> +}
> +
> +static int
> +xmgnt_mainleaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
> +{
> + struct xmgnt_main *xmm = xrt_get_drvdata(xdev);
> + int ret = 0;
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + xmgnt_main_event_cb(xdev, arg);
> + break;
> + case XRT_MGNT_MAIN_GET_AXLF_SECTION: {
> + struct xrt_mgnt_main_get_axlf_section *get =
> + (struct xrt_mgnt_main_get_axlf_section *)arg;
> + const struct axlf *firmware = xmgnt_get_axlf_firmware(xmm, get->xmmigas_axlf_kind);
> +
> + if (!firmware) {
> + ret = -ENOENT;
> + } else {
> + ret = xrt_xclbin_get_section(DEV(xdev), firmware,
> + get->xmmigas_section_kind,
> + &get->xmmigas_section,
> + &get->xmmigas_section_size);
> + }
> + break;
> + }
> + case XRT_MGNT_MAIN_GET_VBNV: {
> + char **vbnv_p = (char **)arg;
> +
> + *vbnv_p = xmgnt_get_vbnv(xdev);
> + if (!*vbnv_p)
> + ret = -EINVAL;
> + break;
> + }
> + default:
> + xrt_err(xdev, "unknown cmd: %d", cmd);
> + ret = -EINVAL;
> + break;
> + }
> + return ret;
> +}
> +
> +static int xmgnt_main_open(struct inode *inode, struct file *file)
> +{
> + struct xrt_device *xdev = xleaf_devnode_open(inode);
> +
> + /* Device may have gone already when we get here. */
> + if (!xdev)
> + return -ENODEV;
> +
> + xrt_info(xdev, "opened");
> + file->private_data = xrt_get_drvdata(xdev);
> + return 0;
> +}
> +
> +static int xmgnt_main_close(struct inode *inode, struct file *file)
> +{
> + struct xmgnt_main *xmm = file->private_data;
> +
> + xleaf_devnode_close(inode);
> +
> + xrt_info(xmm->xdev, "closed");
> + return 0;
> +}
> +
> +/*
> + * Called for xclbin download xclbin load ioctl.
> + */
> +static int xmgnt_bitstream_axlf_fpga_mgr(struct xmgnt_main *xmm, void *axlf, size_t size)
> +{
> + int ret;
> +
> + WARN_ON(!mutex_is_locked(&xmm->lock));
> +
> + /*
> + * Should any error happen during download, we can't trust
> + * the cached xclbin any more.
> + */
> + vfree(xmm->firmware_ulp);
> + xmm->firmware_ulp = NULL;
> +
> + ret = xmgnt_process_xclbin(xmm->xdev, xmm->fmgr, axlf, XMGNT_ULP);
> + if (ret == 0)
> + xmm->firmware_ulp = axlf;
> +
> + return ret;
> +}
> +
> +static int bitstream_axlf_ioctl(struct xmgnt_main *xmm, const void __user *arg)
> +{
> + struct xmgnt_ioc_bitstream_axlf ioc_obj = { 0 };
> + struct axlf xclbin_obj = { {0} };
> + size_t copy_buffer_size = 0;
> + void *copy_buffer = NULL;
> + int ret = 0;
> +
> + if (copy_from_user((void *)&ioc_obj, arg, sizeof(ioc_obj)))
> + return -EFAULT;
> + if (copy_from_user((void *)&xclbin_obj, ioc_obj.xclbin, sizeof(xclbin_obj)))
> + return -EFAULT;
> + if (memcmp(xclbin_obj.magic, XCLBIN_VERSION2, sizeof(XCLBIN_VERSION2)))
> + return -EINVAL;
> +
> + copy_buffer_size = xclbin_obj.header.length;
> + if (copy_buffer_size > XCLBIN_MAX_SIZE || copy_buffer_size < sizeof(xclbin_obj))
> + return -EINVAL;
> + if (xclbin_obj.header.version_major != XMGNT_SUPP_XCLBIN_MAJOR)
> + return -EINVAL;
> +
> + copy_buffer = vmalloc(copy_buffer_size);
> + if (!copy_buffer)
> + return -ENOMEM;
> +
> + if (copy_from_user(copy_buffer, ioc_obj.xclbin, copy_buffer_size)) {
> + vfree(copy_buffer);
> + return -EFAULT;
> + }
> +
> + ret = xmgnt_bitstream_axlf_fpga_mgr(xmm, copy_buffer, copy_buffer_size);
> + if (ret)
> + vfree(copy_buffer);
> +
> + return ret;
> +}
> +
> +static long xmgnt_main_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
> +{
> + struct xmgnt_main *xmm = filp->private_data;
> + long result = 0;
> +
> + if (_IOC_TYPE(cmd) != XMGNT_IOC_MAGIC)
> + return -ENOTTY;
> +
> + mutex_lock(&xmm->lock);
> +
> + xrt_info(xmm->xdev, "ioctl cmd %d, arg %ld", cmd, arg);
> + switch (cmd) {
> + case XMGNT_IOCICAPDOWNLOAD_AXLF:
> + result = bitstream_axlf_ioctl(xmm, (const void __user *)arg);
> + break;
> + default:
> + result = -ENOTTY;
> + break;
> + }
> +
> + mutex_unlock(&xmm->lock);
> + return result;
> +}
> +
> +static struct xrt_dev_endpoints xrt_mgnt_main_endpoints[] = {
> + {
> + .xse_names = (struct xrt_dev_ep_names []){
> + { .ep_name = XRT_MD_NODE_MGNT_MAIN },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_driver xmgnt_main_driver = {
> + .driver = {
> + .name = XMGNT_MAIN,
> + },
> + .file_ops = {
> + .xsf_ops = {
> + .owner = THIS_MODULE,
> + .open = xmgnt_main_open,
> + .release = xmgnt_main_close,
> + .unlocked_ioctl = xmgnt_main_ioctl,
> + },
> + .xsf_dev_name = "xmgnt",
> + },
> + .subdev_id = XRT_SUBDEV_MGNT_MAIN,
> + .endpoints = xrt_mgnt_main_endpoints,
> + .probe = xmgnt_main_probe,
> + .remove = xmgnt_main_remove,
> + .leaf_call = xmgnt_mainleaf_call,
> +};
> +
> +int xmgnt_register_leaf(void)
> +{
> + return xrt_register_driver(&xmgnt_main_driver);
> +}
> +
> +void xmgnt_unregister_leaf(void)
> +{
> + xrt_unregister_driver(&xmgnt_main_driver);
> +}
> diff --git a/drivers/fpga/xrt/mgnt/xmgnt.h b/drivers/fpga/xrt/mgnt/xmgnt.h
> new file mode 100644
> index 000000000000..c8159903de4a
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgnt/xmgnt.h
> @@ -0,0 +1,33 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XMGNT_H_
> +#define _XMGNT_H_
> +
> +#include "xmgnt-main.h"
> +
> +struct fpga_manager;
> +int xmgnt_process_xclbin(struct xrt_device *xdev,
> + struct fpga_manager *fmgr,
> + const struct axlf *xclbin,
> + enum provider_kind kind);
> +void xmgnt_region_cleanup_all(struct xrt_device *xdev);
> +
> +int xmgnt_hot_reset(struct xrt_device *xdev);
> +
> +/* Getting dtb for specified group. Caller should vfree returned dtb. */
> +char *xmgnt_get_dtb(struct xrt_device *xdev, enum provider_kind kind);
> +char *xmgnt_get_vbnv(struct xrt_device *xdev);
> +int xmgnt_get_provider_uuid(struct xrt_device *xdev,
> + enum provider_kind kind, uuid_t *uuid);
> +
> +int xmgnt_register_leaf(void);
> +void xmgnt_unregister_leaf(void);
> +
> +#endif /* _XMGNT_H_ */
> diff --git a/include/uapi/linux/xrt/xmgnt-ioctl.h b/include/uapi/linux/xrt/xmgnt-ioctl.h
> new file mode 100644
> index 000000000000..e4ba5335fa3f
> --- /dev/null
> +++ b/include/uapi/linux/xrt/xmgnt-ioctl.h
> @@ -0,0 +1,46 @@
> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> +/*
> + * Copyright (C) 2015-2021, Xilinx Inc
> + *
> + */
> +
> +/**
> + * DOC: PCIe Kernel Driver for Management Physical Function
> + * Interfaces exposed by *xrt-mgnt* driver are defined in file, *xmgnt-ioctl.h*.
> + * Core functionality provided by *xrt-mgnt* driver is described in the following table:
> + *
> + * =========== ============================== ==================================
> + * Functionality ioctl request code data format
> + * =========== ============================== ==================================
> + * 1 FPGA image download XMGNT_IOCICAPDOWNLOAD_AXLF xmgnt_ioc_bitstream_axlf
> + * =========== ============================== ==================================
> + */
> +
> +#ifndef _XMGNT_IOCTL_H_
> +#define _XMGNT_IOCTL_H_
> +
> +#include <linux/ioctl.h>
> +
> +#define XMGNT_IOC_MAGIC 'X'
> +#define XMGNT_IOC_ICAP_DOWNLOAD_AXLF 0x6
> +
> +/**
> + * struct xmgnt_ioc_bitstream_axlf - load xclbin (AXLF) device image
> + * used with XMGNT_IOCICAPDOWNLOAD_AXLF ioctl
> + *
> + * @xclbin: Pointer to user's xclbin structure in memory
> + */
> +struct xmgnt_ioc_bitstream_axlf {
> + struct axlf *xclbin;
> +};
> +
> +#define XMGNT_IOCICAPDOWNLOAD_AXLF \
> + _IOW(XMGNT_IOC_MAGIC, XMGNT_IOC_ICAP_DOWNLOAD_AXLF, struct xmgnt_ioc_bitstream_axlf)
> +
> +/*
> + * The following definitions are for binary compatibility with classic XRT management driver
> + */
> +#define XCLMGNT_IOCICAPDOWNLOAD_AXLF XMGNT_IOCICAPDOWNLOAD_AXLF
> +#define xclmgnt_ioc_bitstream_axlf xmgnt_ioc_bitstream_axlf
> +
> +#endif
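
For illustration, a minimal user-space sketch of issuing the download ioctl
documented above might look as follows. The device node path and the
<linux/xrt/xclbin.h> include are assumptions; only XMGNT_IOCICAPDOWNLOAD_AXLF
and struct xmgnt_ioc_bitstream_axlf come from this patch, and error handling
is trimmed.

/* Hypothetical usage sketch -- the /dev path is an assumption. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/xrt/xclbin.h>
#include <linux/xrt/xmgnt-ioctl.h>

static int load_xclbin(const char *fname)
{
	struct xmgnt_ioc_bitstream_axlf ioc = { 0 };
	struct axlf *xclbin = NULL;
	FILE *f;
	long sz;
	int fd, ret = -1;

	f = fopen(fname, "rb");
	if (!f)
		return -1;
	fseek(f, 0, SEEK_END);
	sz = ftell(f);
	rewind(f);
	xclbin = malloc(sz);
	if (!xclbin || fread(xclbin, 1, sz, f) != (size_t)sz)
		goto out;

	fd = open("/dev/xmgnt.0", O_RDWR);	/* assumed node name */
	if (fd < 0)
		goto out;

	ioc.xclbin = xclbin;	/* user pointer to the full AXLF image */
	ret = ioctl(fd, XMGNT_IOCICAPDOWNLOAD_AXLF, &ioc);
	close(fd);
out:
	fclose(f);
	free(xclbin);
	return ret;
}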

2021-05-04 13:58:48

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 11/20] fpga: xrt: fpga-mgr and region implementation for xclbin download


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> fpga-mgr and region implementation for xclbin download, which will be
> called from the main xrt driver.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/mgnt/xmgnt-main-region.c | 485 ++++++++++++++++++++++
> drivers/fpga/xrt/mgnt/xrt-mgr.c | 190 +++++++++
> drivers/fpga/xrt/mgnt/xrt-mgr.h | 16 +
> 3 files changed, 691 insertions(+)
> create mode 100644 drivers/fpga/xrt/mgnt/xmgnt-main-region.c
> create mode 100644 drivers/fpga/xrt/mgnt/xrt-mgr.c
> create mode 100644 drivers/fpga/xrt/mgnt/xrt-mgr.h
>
> diff --git a/drivers/fpga/xrt/mgnt/xmgnt-main-region.c b/drivers/fpga/xrt/mgnt/xmgnt-main-region.c
> new file mode 100644
> index 000000000000..398fc816b178
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgnt/xmgnt-main-region.c
> @@ -0,0 +1,485 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * FPGA Region Support for Xilinx Alveo
ok
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors: [email protected]
> + */
> +
> +#include <linux/uuid.h>
> +#include <linux/fpga/fpga-bridge.h>
> +#include <linux/fpga/fpga-region.h>
> +#include <linux/slab.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/axigate.h"
> +#include "xclbin-helper.h"
> +#include "xmgnt.h"
> +
> +struct xmgnt_bridge {
> + struct xrt_device *xdev;
> + const char *bridge_name;
> +};
> +
> +struct xmgnt_region {
> + struct xrt_device *xdev;
> + struct fpga_region *region;
> + struct fpga_compat_id compat_id;
> + uuid_t interface_uuid;
ok
> + struct fpga_bridge *bridge;
> + int group_instance;
> + uuid_t depend_uuid;
ok
> + struct list_head list;
> +};
> +
> +struct xmgnt_region_match_arg {
> + struct xrt_device *xdev;
> + uuid_t *uuids;
> + u32 uuid_num;
> +};
> +
> +static int xmgnt_br_enable_set(struct fpga_bridge *bridge, bool enable)
> +{
> + struct xmgnt_bridge *br_data = (struct xmgnt_bridge *)bridge->priv;
> + struct xrt_device *axigate_leaf;
> + int rc;
> +
> + axigate_leaf = xleaf_get_leaf_by_epname(br_data->xdev, br_data->bridge_name);
> + if (!axigate_leaf) {
> + xrt_err(br_data->xdev, "failed to get leaf %s",
> + br_data->bridge_name);
> + return -ENOENT;
> + }
> +
> + if (enable)
> + rc = xleaf_call(axigate_leaf, XRT_AXIGATE_OPEN, NULL);
> + else
> + rc = xleaf_call(axigate_leaf, XRT_AXIGATE_CLOSE, NULL);
> +
> + if (rc) {
> + xrt_err(br_data->xdev, "failed to %s gate %s, rc %d",
> + (enable ? "free" : "freeze"), br_data->bridge_name,
> + rc);
> + }
> +
> + xleaf_put_leaf(br_data->xdev, axigate_leaf);
> +
> + return rc;
> +}
> +
> +const struct fpga_bridge_ops xmgnt_bridge_ops = {
> + .enable_set = xmgnt_br_enable_set
> +};
> +
> +static void xmgnt_destroy_bridge(struct fpga_bridge *br)
> +{
> + struct xmgnt_bridge *br_data = br->priv;
> +
> + if (!br_data)
> + return;
> +
> + xrt_info(br_data->xdev, "destroy fpga bridge %s", br_data->bridge_name);
> + fpga_bridge_unregister(br);
> +
> + devm_kfree(DEV(br_data->xdev), br_data);
> +
> + fpga_bridge_free(br);
> +}
> +
> +static struct fpga_bridge *xmgnt_create_bridge(struct xrt_device *xdev,
> + char *dtb)
> +{
> + struct fpga_bridge *br = NULL;
> + struct xmgnt_bridge *br_data;
> + const char *gate;
> + int rc;
> +
> + br_data = devm_kzalloc(DEV(xdev), sizeof(*br_data), GFP_KERNEL);
> + if (!br_data)
> + return NULL;
> + br_data->xdev = xdev;
> +
> + br_data->bridge_name = XRT_MD_NODE_GATE_ULP;
> + rc = xrt_md_find_endpoint(&xdev->dev, dtb, XRT_MD_NODE_GATE_ULP,
> + NULL, &gate);
> + if (rc) {
> + br_data->bridge_name = XRT_MD_NODE_GATE_PLP;
> + rc = xrt_md_find_endpoint(&xdev->dev, dtb, XRT_MD_NODE_GATE_PLP,
> + NULL, &gate);
> + }
> + if (rc) {
> + xrt_err(xdev, "failed to get axigate, rc %d", rc);
> + goto failed;
> + }
> +
> + br = fpga_bridge_create(DEV(xdev), br_data->bridge_name,
> + &xmgnt_bridge_ops, br_data);
> + if (!br) {
> + xrt_err(xdev, "failed to create bridge");
> + goto failed;
> + }
> +
> + rc = fpga_bridge_register(br);
> + if (rc) {
> + xrt_err(xdev, "failed to register bridge, rc %d", rc);
> + goto failed;
> + }
> +
> + xrt_info(xdev, "created fpga bridge %s", br_data->bridge_name);
> +
> + return br;
> +
> +failed:
> + if (br)
> + fpga_bridge_free(br);
> + if (br_data)
> + devm_kfree(DEV(xdev), br_data);
> +
> + return NULL;
> +}
> +
> +static void xmgnt_destroy_region(struct fpga_region *region)
> +{
> + struct xmgnt_region *r_data = region->priv;
> +
> + xrt_info(r_data->xdev, "destroy fpga region %llx.%llx",
> + region->compat_id->id_h, region->compat_id->id_l);
ok
> +
> + fpga_region_unregister(region);
> +
> + if (r_data->group_instance > 0)
> + xleaf_destroy_group(r_data->xdev, r_data->group_instance);
> +
> + if (r_data->bridge)
> + xmgnt_destroy_bridge(r_data->bridge);
> +
> + if (r_data->region->info) {
> + fpga_image_info_free(r_data->region->info);
> + r_data->region->info = NULL;
> + }
> +
> + fpga_region_free(region);
> +
> + devm_kfree(DEV(r_data->xdev), r_data);
> +}
> +
> +static int xmgnt_region_match(struct device *dev, const void *data)
> +{
> + const struct xmgnt_region_match_arg *arg = data;
> + const struct fpga_region *match_region;
> + uuid_t compat_uuid;
> + int i;
> +
> + if (dev->parent != &arg->xdev->dev)
> + return false;
> +
> + match_region = to_fpga_region(dev);
> + /*
> + * The device tree provides both parent and child uuids for an
> + * xclbin in one array. Here we try both uuids to see if either matches
> + * the target region's compat_id. Strictly speaking, we should only
> + * match the xclbin's parent uuid with the target region's compat_id,
> + * but since the uuids are by design unique, comparing against both
> + * does not hurt.
> + */
> + import_uuid(&compat_uuid, (const char *)match_region->compat_id);
> + for (i = 0; i < arg->uuid_num; i++) {
> + if (uuid_equal(&compat_uuid, &arg->uuids[i]))
> + return true;
> + }
> +
> + return false;
> +}
> +
> +static int xmgnt_region_match_base(struct device *dev, const void *data)
> +{
> + const struct xmgnt_region_match_arg *arg = data;
> + const struct fpga_region *match_region;
> + const struct xmgnt_region *r_data;
> +
> + if (dev->parent != &arg->xdev->dev)
> + return false;
> +
> + match_region = to_fpga_region(dev);
> + r_data = match_region->priv;
> + if (uuid_is_null(&r_data->depend_uuid))
> + return true;
> +
> + return false;
> +}
> +
> +static int xmgnt_region_match_by_uuid(struct device *dev, const void *data)
> +{
> + const struct xmgnt_region_match_arg *arg = data;
> + const struct fpga_region *match_region;
> + const struct xmgnt_region *r_data;
> +
> + if (dev->parent != &arg->xdev->dev)
> + return false;
> +
> + if (arg->uuid_num != 1)
> + return false;
> +
> + match_region = to_fpga_region(dev);
> + r_data = match_region->priv;
> + if (uuid_equal(&r_data->depend_uuid, arg->uuids))
> + return true;
> +
> + return false;
> +}
> +
> +static void xmgnt_region_cleanup(struct fpga_region *region)
> +{
> + struct xmgnt_region *r_data = region->priv, *pdata, *temp;
> + struct xrt_device *xdev = r_data->xdev;
> + struct xmgnt_region_match_arg arg = { 0 };
> + struct fpga_region *match_region = NULL;
> + struct device *start_dev = NULL;
> + LIST_HEAD(free_list);
> + uuid_t compat_uuid;
> +
> + list_add_tail(&r_data->list, &free_list);
> + arg.xdev = xdev;
> + arg.uuid_num = 1;
> + arg.uuids = &compat_uuid;
> +
> + /* find all regions depending on this region */
> + list_for_each_entry_safe(pdata, temp, &free_list, list) {
> + import_uuid(arg.uuids, (const char *)pdata->region->compat_id);
> + start_dev = NULL;
> + while ((match_region = fpga_region_class_find(start_dev, &arg,
> + xmgnt_region_match_by_uuid))) {
> + pdata = match_region->priv;
> + list_add_tail(&pdata->list, &free_list);
> + start_dev = &match_region->dev;
> + put_device(&match_region->dev);
> + }
> + }
> +
> + list_del(&r_data->list);
> +
> + list_for_each_entry_safe_reverse(pdata, temp, &free_list, list)
> + xmgnt_destroy_region(pdata->region);
> +
> + if (r_data->group_instance > 0) {
> + xleaf_destroy_group(xdev, r_data->group_instance);
> + r_data->group_instance = -1;
> + }
> + if (r_data->region->info) {
> + fpga_image_info_free(r_data->region->info);
> + r_data->region->info = NULL;
> + }
> +}
> +
> +void xmgnt_region_cleanup_all(struct xrt_device *xdev)
> +{
> + struct xmgnt_region_match_arg arg = { 0 };
> + struct fpga_region *base_region;
> +
> + arg.xdev = xdev;
> +
> + while ((base_region = fpga_region_class_find(NULL, &arg, xmgnt_region_match_base))) {
> + put_device(&base_region->dev);
> +
> + xmgnt_region_cleanup(base_region);
> + xmgnt_destroy_region(base_region);
> + }
> +}
> +
> +/*
> + * Program a region with a xclbin image. Bring up the subdevs and the
> + * group object to contain the subdevs.
> + */
> +static int xmgnt_region_program(struct fpga_region *region, const void *xclbin, char *dtb)
> +{
> + const struct axlf *xclbin_obj = xclbin;
> + struct fpga_image_info *info;
> + struct xrt_device *xdev;
> + struct xmgnt_region *r_data;
> + int rc;
> +
> + r_data = region->priv;
> + xdev = r_data->xdev;
> +
> + info = fpga_image_info_alloc(&xdev->dev);
> + if (!info)
> + return -ENOMEM;
> +
> + info->buf = xclbin;
> + info->count = xclbin_obj->header.length;
> + info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
> + region->info = info;
> + rc = fpga_region_program_fpga(region);
> + if (rc) {
> + xrt_err(xdev, "programming xclbin failed, rc %d", rc);
> + return rc;
> + }
> +
> + /* free bridges to allow reprogram */
> + if (region->get_bridges)
> + fpga_bridges_put(&region->bridge_list);
> +
> + /*
> + * Next bringup the subdevs for this region which will be managed by
> + * its own group object.
> + */
> + r_data->group_instance = xleaf_create_group(xdev, dtb);
> + if (r_data->group_instance < 0) {
> + xrt_err(xdev, "failed to create group, rc %d",
> + r_data->group_instance);
> + rc = r_data->group_instance;
> + return rc;
> + }
> +
> + rc = xleaf_wait_for_group_bringup(xdev);
> + if (rc)
> + xrt_err(xdev, "group bringup failed, rc %d", rc);
> + return rc;
> +}
> +
> +static int xmgnt_get_bridges(struct fpga_region *region)
> +{
> + struct xmgnt_region *r_data = region->priv;
> + struct device *dev = &r_data->xdev->dev;
> +
> + return fpga_bridge_get_to_list(dev, region->info, &region->bridge_list);
> +}
> +
> +/*
> + * Program/create FPGA regions based on input xclbin file.
> + * 1. Identify a matching existing region for this xclbin
> + * 2. Tear down any previous objects for the found region
> + * 3. Program this region with input xclbin
> + * 4. Iterate over this region's interface uuids to determine if it defines any
> + * child region. Create fpga_region for the child region.
> + */
> +int xmgnt_process_xclbin(struct xrt_device *xdev,
> + struct fpga_manager *fmgr,
> + const struct axlf *xclbin,
> + enum provider_kind kind)
> +{
> + struct fpga_region *region, *compat_region = NULL;
> + struct xmgnt_region_match_arg arg = { 0 };
> + struct xmgnt_region *r_data;
> + uuid_t compat_uuid;
> + char *dtb = NULL;
> + int rc, i;
> +
> + rc = xrt_xclbin_get_metadata(DEV(xdev), xclbin, &dtb);
> + if (rc) {
> + xrt_err(xdev, "failed to get dtb: %d", rc);
> + goto failed;
> + }
> +
> + rc = xrt_md_get_interface_uuids(DEV(xdev), dtb, 0, NULL);
> + if (rc < 0) {
> + xrt_err(xdev, "failed to get intf uuid");
> + rc = -EINVAL;
> + goto failed;
> + }
> + arg.uuid_num = rc;
> + arg.uuids = kcalloc(arg.uuid_num, sizeof(uuid_t), GFP_KERNEL);
ok
> + if (!arg.uuids) {
> + rc = -ENOMEM;
> + goto failed;
> + }
> + arg.xdev = xdev;
> +
> + rc = xrt_md_get_interface_uuids(DEV(xdev), dtb, arg.uuid_num, arg.uuids);
> + if (rc != arg.uuid_num) {
> + xrt_err(xdev, "only get %d uuids, expect %d", rc, arg.uuid_num);
> + rc = -EINVAL;
> + goto failed;
> + }
> +
> + /* if this is not base firmware, search for a compatible region */
> + if (kind != XMGNT_BLP) {
> + compat_region = fpga_region_class_find(NULL, &arg, xmgnt_region_match);
> + if (!compat_region) {
> + xrt_err(xdev, "failed to get compatible region");
> + rc = -ENOENT;
> + goto failed;
> + }
> +
> + xmgnt_region_cleanup(compat_region);
> +
> + rc = xmgnt_region_program(compat_region, xclbin, dtb);
> + if (rc) {
> + xrt_err(xdev, "failed to program region");
> + goto failed;
> + }
> + }
> +
> + if (compat_region)
> + import_uuid(&compat_uuid, (const char *)compat_region->compat_id);
> +
> + /* create all the new regions contained in this xclbin */
> + for (i = 0; i < arg.uuid_num; i++) {
> + if (compat_region && uuid_equal(&compat_uuid, &arg.uuids[i])) {
> + /* region for this interface already exists */
> + continue;
> + }
> +
> + region = fpga_region_create(DEV(xdev), fmgr, xmgnt_get_bridges);
> + if (!region) {
> + xrt_err(xdev, "failed to create fpga region");
> + rc = -EFAULT;
> + goto failed;
> + }
> + r_data = devm_kzalloc(DEV(xdev), sizeof(*r_data), GFP_KERNEL);
> + if (!r_data) {
> + rc = -ENOMEM;
> + fpga_region_free(region);
> + goto failed;
> + }
> + r_data->xdev = xdev;
> + r_data->region = region;
> + r_data->group_instance = -1;
> + uuid_copy(&r_data->interface_uuid, &arg.uuids[i]);
> + if (compat_region)
> + import_uuid(&r_data->depend_uuid, (const char *)compat_region->compat_id);
> + r_data->bridge = xmgnt_create_bridge(xdev, dtb);
> + if (!r_data->bridge) {
> + xrt_err(xdev, "failed to create fpga bridge");
> + rc = -EFAULT;
> + devm_kfree(DEV(xdev), r_data);
> + fpga_region_free(region);
> + goto failed;
> + }
> +
> + region->compat_id = &r_data->compat_id;
> + export_uuid((char *)region->compat_id, &r_data->interface_uuid);
> + region->priv = r_data;
> +
> + rc = fpga_region_register(region);
> + if (rc) {
> + xrt_err(xdev, "failed to register fpga region");
> + xmgnt_destroy_bridge(r_data->bridge);
> + fpga_region_free(region);
> + devm_kfree(DEV(xdev), r_data);
> + goto failed;
> + }
> +
> + xrt_info(xdev, "created fpga region %llx.%llx",
> + region->compat_id->id_h, region->compat_id->id_l);

ok

Reviewed-by: Tom Rix <[email protected]>

> + }
> +
> + if (compat_region)
> + put_device(&compat_region->dev);
> + vfree(dtb);
> + kfree(arg.uuids);
> + return 0;
> +
> +failed:
> + if (compat_region) {
> + put_device(&compat_region->dev);
> + xmgnt_region_cleanup(compat_region);
> + } else {
> + xmgnt_region_cleanup_all(xdev);
> + }
> +
> + vfree(dtb);
> + kfree(arg.uuids);
> + return rc;
> +}
> diff --git a/drivers/fpga/xrt/mgnt/xrt-mgr.c b/drivers/fpga/xrt/mgnt/xrt-mgr.c
> new file mode 100644
> index 000000000000..354747f3e872
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgnt/xrt-mgr.c
> @@ -0,0 +1,190 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * FPGA Manager Support for Xilinx Alveo
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors: [email protected]
> + */
> +
> +#include <linux/cred.h>
> +#include <linux/efi.h>
> +#include <linux/fpga/fpga-mgr.h>
> +#include <linux/module.h>
> +#include <linux/vmalloc.h>
> +
> +#include "xclbin-helper.h"
> +#include "xleaf.h"
> +#include "xrt-mgr.h"
> +#include "xleaf/axigate.h"
> +#include "xleaf/icap.h"
> +#include "xmgnt.h"
> +
> +struct xfpga_class {
> + struct xrt_device *xdev;
> + char name[64];
> +};
> +
> +/*
> + * xclbin download plumbing -- find the download subsystem, ICAP and
> + * pass the xclbin for heavy lifting
> + */
> +static int xmgnt_download_bitstream(struct xrt_device *xdev,
> + const struct axlf *xclbin)
> +
> +{
> + struct xclbin_bit_head_info bit_header = { 0 };
> + struct xrt_device *icap_leaf = NULL;
> + struct xrt_icap_wr arg;
> + char *bitstream = NULL;
> + u64 bit_len;
> + int ret;
> +
> + ret = xrt_xclbin_get_section(DEV(xdev), xclbin, BITSTREAM, (void **)&bitstream, &bit_len);
> + if (ret) {
> + xrt_err(xdev, "bitstream not found");
> + return -ENOENT;
> + }
> + ret = xrt_xclbin_parse_bitstream_header(DEV(xdev), bitstream,
> + XCLBIN_HWICAP_BITFILE_BUF_SZ,
> + &bit_header);
> + if (ret) {
> + ret = -EINVAL;
> + xrt_err(xdev, "invalid bitstream header");
> + goto fail;
> + }
> + if (bit_header.header_length + bit_header.bitstream_length > bit_len) {
> + ret = -EINVAL;
> + xrt_err(xdev, "invalid bitstream length. header %d, bitstream %d, section len %lld",
> + bit_header.header_length, bit_header.bitstream_length, bit_len);
> + goto fail;
> + }
> +
> + icap_leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_ICAP, XRT_INVALID_DEVICE_INST);
> + if (!icap_leaf) {
> + ret = -ENODEV;
> + xrt_err(xdev, "icap does not exist");
> + goto fail;
> + }
> + arg.xiiw_bit_data = bitstream + bit_header.header_length;
> + arg.xiiw_data_len = bit_header.bitstream_length;
> + ret = xleaf_call(icap_leaf, XRT_ICAP_WRITE, &arg);
> + if (ret) {
> + xrt_err(xdev, "write bitstream failed, ret = %d", ret);
> + xleaf_put_leaf(xdev, icap_leaf);
> + goto fail;
> + }
> +
> + xleaf_put_leaf(xdev, icap_leaf);
> + vfree(bitstream);
> +
> + return 0;
> +
> +fail:
> + vfree(bitstream);
> +
> + return ret;
> +}
> +
> +/*
> + * There is no HW prep work we do here since we need the full
> + * xclbin for its sanity check.
> + */
> +static int xmgnt_pr_write_init(struct fpga_manager *mgr,
> + struct fpga_image_info *info,
> + const char *buf, size_t count)
> +{
> + const struct axlf *bin = (const struct axlf *)buf;
> + struct xfpga_class *obj = mgr->priv;
> +
> + if (!(info->flags & FPGA_MGR_PARTIAL_RECONFIG)) {
> + xrt_info(obj->xdev, "%s only supports partial reconfiguration\n", obj->name);
> + return -EINVAL;
> + }
> +
> + if (count < sizeof(struct axlf))
> + return -EINVAL;
> +
> + if (count > bin->header.length)
> + return -EINVAL;
> +
> + xrt_info(obj->xdev, "Prepare download of xclbin %pUb of length %lld B",
> + &bin->header.uuid, bin->header.length);
> +
> + return 0;
> +}
> +
> +/*
> + * The implementation requires the full xclbin image before we can start
> + * programming the hardware via ICAP subsystem. The full image is required
> + * for checking the validity of xclbin and walking the sections to
> + * discover the bitstream.
> + */
> +static int xmgnt_pr_write(struct fpga_manager *mgr,
> + const char *buf, size_t count)
> +{
> + const struct axlf *bin = (const struct axlf *)buf;
> + struct xfpga_class *obj = mgr->priv;
> +
> + if (bin->header.length != count)
> + return -EINVAL;
> +
> + return xmgnt_download_bitstream((void *)obj->xdev, bin);
> +}
> +
> +static int xmgnt_pr_write_complete(struct fpga_manager *mgr,
> + struct fpga_image_info *info)
> +{
> + const struct axlf *bin = (const struct axlf *)info->buf;
> + struct xfpga_class *obj = mgr->priv;
> +
> + xrt_info(obj->xdev, "Finished download of xclbin %pUb",
> + &bin->header.uuid);
> + return 0;
> +}
> +
> +static enum fpga_mgr_states xmgnt_pr_state(struct fpga_manager *mgr)
> +{
> + return FPGA_MGR_STATE_UNKNOWN;
> +}
> +
> +static const struct fpga_manager_ops xmgnt_pr_ops = {
> + .initial_header_size = sizeof(struct axlf),
> + .write_init = xmgnt_pr_write_init,
> + .write = xmgnt_pr_write,
> + .write_complete = xmgnt_pr_write_complete,
> + .state = xmgnt_pr_state,
> +};
> +
> +struct fpga_manager *xmgnt_fmgr_probe(struct xrt_device *xdev)
> +{
> + struct xfpga_class *obj = devm_kzalloc(DEV(xdev), sizeof(struct xfpga_class),
> + GFP_KERNEL);
> + struct fpga_manager *fmgr = NULL;
> + int ret = 0;
> +
> + if (!obj)
> + return ERR_PTR(-ENOMEM);
> +
> + snprintf(obj->name, sizeof(obj->name), "Xilinx Alveo FPGA Manager");
> + obj->xdev = xdev;
> + fmgr = fpga_mgr_create(&xdev->dev,
> + obj->name,
> + &xmgnt_pr_ops,
> + obj);
> + if (!fmgr)
> + return ERR_PTR(-ENOMEM);
> +
> + ret = fpga_mgr_register(fmgr);
> + if (ret) {
> + fpga_mgr_free(fmgr);
> + return ERR_PTR(ret);
> + }
> + return fmgr;
> +}
> +
> +int xmgnt_fmgr_remove(struct fpga_manager *fmgr)
> +{
> + fpga_mgr_unregister(fmgr);
> + return 0;
> +}
> diff --git a/drivers/fpga/xrt/mgnt/xrt-mgr.h b/drivers/fpga/xrt/mgnt/xrt-mgr.h
> new file mode 100644
> index 000000000000..d505a770ea9f
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgnt/xrt-mgr.h
> @@ -0,0 +1,16 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors: [email protected]
> + */
> +
> +#ifndef _XRT_MGR_H_
> +#define _XRT_MGR_H_
> +
> +#include <linux/fpga/fpga-mgr.h>
> +
> +struct fpga_manager *xmgnt_fmgr_probe(struct xrt_device *xdev);
> +int xmgnt_fmgr_remove(struct fpga_manager *fmgr);
> +
> +#endif /* _XRT_MGR_H_ */
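
For readers less familiar with the fpga_region/fpga_manager split used here: the
region-programming path hands the axlf image to xmgnt_pr_ops through a
struct fpga_image_info before calling fpga_region_program_fpga(). Below is a
minimal sketch of that hand-off using only the standard FPGA manager/region API;
the function name and error handling are illustrative, not the patch's code.

#include <linux/fpga/fpga-mgr.h>
#include <linux/fpga/fpga-region.h>

static int example_program_region(struct fpga_region *region,
                                  const struct axlf *xclbin)
{
        struct fpga_image_info *info;
        int rc;

        info = fpga_image_info_alloc(&region->dev);
        if (!info)
                return -ENOMEM;

        /* the user partition is always partially reconfigured */
        info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
        info->buf = (const char *)xclbin;
        info->count = xclbin->header.length;

        region->info = info;
        /* runs write_init(), write() and write_complete() of xmgnt_pr_ops */
        rc = fpga_region_program_fpga(region);
        region->info = NULL;

        fpga_image_info_free(info);
        return rc;
}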

2021-05-04 14:03:01

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 12/20] fpga: xrt: VSEC driver


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> Add VSEC driver. VSEC is a hardware function discovered by walking
> PCI Express configuration space. An xrt device node will be created for it.
> VSEC provides the board logic UUID and the offsets of a few other hardware functions.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/lib/xleaf/vsec.c | 372 ++++++++++++++++++++++++++++++
> 1 file changed, 372 insertions(+)
> create mode 100644 drivers/fpga/xrt/lib/xleaf/vsec.c
>
> diff --git a/drivers/fpga/xrt/lib/xleaf/vsec.c b/drivers/fpga/xrt/lib/xleaf/vsec.c
> new file mode 100644
> index 000000000000..2bc7578c5f5d
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/vsec.c
> @@ -0,0 +1,372 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA VSEC Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou<[email protected]>
> + */
> +
> +#include <linux/regmap.h>
> +#include "metadata.h"
> +#include "xdevice.h"
> +#include "xleaf.h"
> +
> +#define XRT_VSEC "xrt_vsec"
> +
> +#define VSEC_TYPE_UUID 0x50
> +#define VSEC_TYPE_FLASH 0x51
> +#define VSEC_TYPE_PLATINFO 0x52
> +#define VSEC_TYPE_MAILBOX 0x53
> +#define VSEC_TYPE_END 0xff
> +
> +#define VSEC_UUID_LEN 16
> +
> +#define VSEC_REG_FORMAT 0x0
> +#define VSEC_REG_LENGTH 0x4
> +#define VSEC_REG_ENTRY 0x8
> +
> +struct xrt_vsec_header {
> + u32 format;
> + u32 length;
> + u32 entry_sz;
> + u32 rsvd;
> +} __packed;
> +
> +struct xrt_vsec_entry {
> + u8 type;
> + u8 bar_rev;
> + u16 off_lo;
> + u32 off_hi;
> + u8 ver_type;
> + u8 minor;
> + u8 major;
> + u8 rsvd0;
> + u32 rsvd1;
> +} __packed;
> +
> +struct vsec_device {
> + u8 type;
> + char *ep_name;
> + ulong size;
> + char *compat;
ok
> +};
> +
> +static struct vsec_device vsec_devs[] = {
> + {
> + .type = VSEC_TYPE_UUID,
> + .ep_name = XRT_MD_NODE_BLP_ROM,
> + .size = VSEC_UUID_LEN,
> + .compat = "vsec-uuid",
> + },
> + {
> + .type = VSEC_TYPE_FLASH,
> + .ep_name = XRT_MD_NODE_FLASH_VSEC,
> + .size = 4096,
> + .compat = "vsec-flash",
> + },
> + {
> + .type = VSEC_TYPE_PLATINFO,
> + .ep_name = XRT_MD_NODE_PLAT_INFO,
> + .size = 4,
> + .compat = "vsec-platinfo",
> + },
> + {
> + .type = VSEC_TYPE_MAILBOX,
> + .ep_name = XRT_MD_NODE_MAILBOX_VSEC,
> + .size = 48,
> + .compat = "vsec-mbx",
> + },
> +};
> +
> +XRT_DEFINE_REGMAP_CONFIG(vsec_regmap_config);
ok
> +
> +struct xrt_vsec {
> + struct xrt_device *xdev;
> + struct regmap *regmap;
> + u32 length;
> +
> + char *metadata;
> + char uuid[VSEC_UUID_LEN];
> + int group;
> +};
> +
> +static inline int vsec_read_entry(struct xrt_vsec *vsec, u32 index, struct xrt_vsec_entry *entry)
> +{
> + int ret;
> +
> + ret = regmap_bulk_read(vsec->regmap, sizeof(struct xrt_vsec_header) +
> + index * sizeof(struct xrt_vsec_entry), entry,
> + sizeof(struct xrt_vsec_entry) /
> + vsec_regmap_config.reg_stride);
> +
> + return ret;
> +}
> +
> +static inline u32 vsec_get_bar(struct xrt_vsec_entry *entry)
> +{
> + return (entry->bar_rev >> 4) & 0xf;
ok
> +}
> +
> +static inline u64 vsec_get_bar_off(struct xrt_vsec_entry *entry)
> +{
> + return entry->off_lo | ((u64)entry->off_hi << 16);
> +}
> +
> +static inline u32 vsec_get_rev(struct xrt_vsec_entry *entry)
> +{
> + return entry->bar_rev & 0xf;
> +}
> +
> +static char *type2epname(u32 type)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
> + if (vsec_devs[i].type == type)
> + return (vsec_devs[i].ep_name);
> + }
> +
> + return NULL;
> +}
> +
> +static ulong type2size(u32 type)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
> + if (vsec_devs[i].type == type)
> + return (vsec_devs[i].size);
> + }
> +
> + return 0;
> +}
> +
> +static char *type2compat(u32 type)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
> + if (vsec_devs[i].type == type)
> + return (vsec_devs[i].compat);
> + }
> +
> + return NULL;
> +}
> +
> +static int xrt_vsec_add_node(struct xrt_vsec *vsec,
> + void *md_blob, struct xrt_vsec_entry *p_entry)
> +{
> + struct xrt_md_endpoint ep;
> + char compat_ver[64];
> + int ret;
> +
> + if (!type2epname(p_entry->type))
> + return -EINVAL;
> +
> + /*
> + * A card with more than one physical function may expose more than
> + * one mailbox instance through VSEC. This is not supported for now;
> + * assume there is only one mailbox.
> + */
> +
> + snprintf(compat_ver, sizeof(compat_ver) - 1, "%d-%d.%d.%d",
> + p_entry->ver_type, p_entry->major, p_entry->minor,
> + vsec_get_rev(p_entry));
> + ep.ep_name = type2epname(p_entry->type);
> + ep.bar_index = vsec_get_bar(p_entry);
> + ep.bar_off = vsec_get_bar_off(p_entry);
> + ep.size = type2size(p_entry->type);
> + ep.compat = type2compat(p_entry->type);
> + ep.compat_ver = compat_ver;
> + ret = xrt_md_add_endpoint(DEV(vsec->xdev), vsec->metadata, &ep);
> + if (ret)
> + xrt_err(vsec->xdev, "add ep failed, ret %d", ret);
> +
> + return ret;
> +}
> +
> +static int xrt_vsec_create_metadata(struct xrt_vsec *vsec)
> +{
> + struct xrt_vsec_entry entry;
> + int i, ret;
> +
> + ret = xrt_md_create(&vsec->xdev->dev, &vsec->metadata);
> + if (ret) {
> + xrt_err(vsec->xdev, "create metadata failed");
> + return ret;
> + }
> +
> + for (i = 0; i * sizeof(entry) < vsec->length -
> + sizeof(struct xrt_vsec_header); i++) {
> + ret = vsec_read_entry(vsec, i, &entry);
> + if (ret) {
> + xrt_err(vsec->xdev, "failed read entry %d, ret %d", i, ret);
> + goto fail;
> + }
> +
> + if (entry.type == VSEC_TYPE_END)
> + break;
> + ret = xrt_vsec_add_node(vsec, vsec->metadata, &entry);
> + if (ret)
> + goto fail;
> + }
> +
> + return 0;
> +
> +fail:
> + vfree(vsec->metadata);
> + vsec->metadata = NULL;
> + return ret;
> +}
> +
> +static int xrt_vsec_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
> +{
> + int ret = 0;
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + /* Does not handle any event. */
> + break;
> + default:
> + ret = -EINVAL;
> + xrt_err(xdev, "should never been called");
> + break;
> + }
> +
> + return ret;
> +}
> +
> +static int xrt_vsec_mapio(struct xrt_vsec *vsec)
> +{
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(vsec->xdev);
> + struct resource *res = NULL;
> + void __iomem *base = NULL;
> + const u64 *bar_off;
> + const u32 *bar;
> + u64 addr;
> + int ret;
> +
> + if (!pdata || xrt_md_size(DEV(vsec->xdev), pdata->xsp_dtb) == XRT_MD_INVALID_LENGTH) {
> + xrt_err(vsec->xdev, "empty metadata");
> + return -EINVAL;
> + }
> +
> + ret = xrt_md_get_prop(DEV(vsec->xdev), pdata->xsp_dtb, XRT_MD_NODE_VSEC,
> + NULL, XRT_MD_PROP_BAR_IDX, (const void **)&bar, NULL);
> + if (ret) {
> + xrt_err(vsec->xdev, "failed to get bar idx, ret %d", ret);
> + return -EINVAL;
> + }
> +
> + ret = xrt_md_get_prop(DEV(vsec->xdev), pdata->xsp_dtb, XRT_MD_NODE_VSEC,
> + NULL, XRT_MD_PROP_OFFSET, (const void **)&bar_off, NULL);
> + if (ret) {
> + xrt_err(vsec->xdev, "failed to get bar off, ret %d", ret);
> + return -EINVAL;
> + }
> +
> + xrt_info(vsec->xdev, "Map vsec at bar %d, offset 0x%llx",
> + be32_to_cpu(*bar), be64_to_cpu(*bar_off));
> +
> + xleaf_get_root_res(vsec->xdev, be32_to_cpu(*bar), &res);
> + if (!res) {
> + xrt_err(vsec->xdev, "failed to get bar addr");
> + return -EINVAL;
> + }
> +
> + addr = res->start + be64_to_cpu(*bar_off);
> +
> + base = devm_ioremap(&vsec->xdev->dev, addr, vsec_regmap_config.max_register);
> + if (!base) {
> + xrt_err(vsec->xdev, "Map failed");
> + return -EIO;
> + }
> +
> + vsec->regmap = devm_regmap_init_mmio(&vsec->xdev->dev, base, &vsec_regmap_config);
> + if (IS_ERR(vsec->regmap)) {
> + xrt_err(vsec->xdev, "regmap %pR failed", res);
> + return PTR_ERR(vsec->regmap);
> + }
> +
> + ret = regmap_read(vsec->regmap, VSEC_REG_LENGTH, &vsec->length);
> + if (ret) {
> + xrt_err(vsec->xdev, "failed to read length %d", ret);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static void xrt_vsec_remove(struct xrt_device *xdev)
> +{
> + struct xrt_vsec *vsec;
> +
> + vsec = xrt_get_drvdata(xdev);
> +
> + if (vsec->group >= 0)
> + xleaf_destroy_group(xdev, vsec->group);
> + vfree(vsec->metadata);
> +}
> +
> +static int xrt_vsec_probe(struct xrt_device *xdev)
> +{
> + struct xrt_vsec *vsec;
> + int ret = 0;
> +
> + vsec = devm_kzalloc(&xdev->dev, sizeof(*vsec), GFP_KERNEL);
> + if (!vsec)
> + return -ENOMEM;
> +
> + vsec->xdev = xdev;
> + vsec->group = -1;
> + xrt_set_drvdata(xdev, vsec);
> +
> + ret = xrt_vsec_mapio(vsec);
> + if (ret)
> + goto failed;
> +
> + ret = xrt_vsec_create_metadata(vsec);
> + if (ret) {
> + xrt_err(xdev, "create metadata failed, ret %d", ret);
> + goto failed;
> + }
> + ret = xleaf_create_group(xdev, vsec->metadata);

ok

Reviewed-by: Tom Rix <[email protected]>

> + if (ret < 0) {
> + xrt_err(xdev, "create group failed, ret %d", vsec->group);
> + goto failed;
> + }
> + vsec->group = ret;
> +
> + return 0;
> +
> +failed:
> + xrt_vsec_remove(xdev);
> +
> + return ret;
> +}
> +
> +static struct xrt_dev_endpoints xrt_vsec_endpoints[] = {
> + {
> + .xse_names = (struct xrt_dev_ep_names []){
> + { .ep_name = XRT_MD_NODE_VSEC },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_driver xrt_vsec_driver = {
> + .driver = {
> + .name = XRT_VSEC,
> + },
> + .subdev_id = XRT_SUBDEV_VSEC,
> + .endpoints = xrt_vsec_endpoints,
> + .probe = xrt_vsec_probe,
> + .remove = xrt_vsec_remove,
> + .leaf_call = xrt_vsec_leaf_call,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(vsec);
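
The PCIe config-space walk that discovers the VSEC lives in the xrt-mgnt PCI
driver rather than in this leaf; this leaf only parses the register table that
the walk points at. For context, here is a minimal sketch of how such a walk
typically looks with the standard PCI core helpers. EXAMPLE_XRT_VSEC_ID is a
placeholder, not the real capability ID, and the function is illustrative only.

#include <linux/pci.h>

/* placeholder id; the real value comes from the Alveo shell definition */
#define EXAMPLE_XRT_VSEC_ID 0x20

static int example_find_vsec(struct pci_dev *pdev)
{
        int pos = 0;
        u32 hdr;

        while ((pos = pci_find_next_ext_capability(pdev, pos,
                                                   PCI_EXT_CAP_ID_VNDR))) {
                pci_read_config_dword(pdev, pos + PCI_VNDR_HEADER, &hdr);
                if (PCI_VNDR_HEADER_ID(hdr) == EXAMPLE_XRT_VSEC_ID)
                        return pos; /* config-space offset of the VSEC */
        }

        return -ENOENT;
}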

2021-05-04 14:07:01

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 14/20] fpga: xrt: ICAP driver


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> ICAP stands for Hardware Internal Configuration Access Port. ICAP is
> discovered by walking the firmware metadata. An xrt device node will be
> created for it. The FPGA bitstream is written to hardware through ICAP.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/xleaf/icap.h | 27 +++
> drivers/fpga/xrt/lib/xleaf/icap.c | 328 ++++++++++++++++++++++++++
> 2 files changed, 355 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xleaf/icap.h
> create mode 100644 drivers/fpga/xrt/lib/xleaf/icap.c
>
> diff --git a/drivers/fpga/xrt/include/xleaf/icap.h b/drivers/fpga/xrt/include/xleaf/icap.h
> new file mode 100644
> index 000000000000..96d39a8934fa
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf/icap.h
> @@ -0,0 +1,27 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#ifndef _XRT_ICAP_H_
> +#define _XRT_ICAP_H_
> +
> +#include "xleaf.h"
> +
> +/*
> + * ICAP driver leaf calls.
> + */
> +enum xrt_icap_leaf_cmd {
> + XRT_ICAP_WRITE = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> + XRT_ICAP_GET_IDCODE,
> +};
> +
> +struct xrt_icap_wr {
> + void *xiiw_bit_data;
> + u32 xiiw_data_len;
> +};
> +
> +#endif /* _XRT_ICAP_H_ */
> diff --git a/drivers/fpga/xrt/lib/xleaf/icap.c b/drivers/fpga/xrt/lib/xleaf/icap.c
> new file mode 100644
> index 000000000000..755ea2fc0e75
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/icap.c
> @@ -0,0 +1,328 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA ICAP Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou<[email protected]>
> + * Sonal Santan <[email protected]>
> + * Max Zhen <[email protected]>
> + */
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/regmap.h>
> +#include <linux/io.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/icap.h"
> +#include "xclbin-helper.h"
> +
> +#define XRT_ICAP "xrt_icap"
> +
> +#define ICAP_ERR(icap, fmt, arg...) \
> + xrt_err((icap)->xdev, fmt "\n", ##arg)
> +#define ICAP_WARN(icap, fmt, arg...) \
> + xrt_warn((icap)->xdev, fmt "\n", ##arg)
> +#define ICAP_INFO(icap, fmt, arg...) \
> + xrt_info((icap)->xdev, fmt "\n", ##arg)
> +#define ICAP_DBG(icap, fmt, arg...) \
> + xrt_dbg((icap)->xdev, fmt "\n", ##arg)
> +
> +/*
> + * AXI-HWICAP IP register layout. Please see
> + * https://www.xilinx.com/support/documentation/ip_documentation/axi_hwicap/v3_0/pg134-axi-hwicap.pdf
> + */
> +#define ICAP_REG_GIER 0x1C
> +#define ICAP_REG_ISR 0x20
> +#define ICAP_REG_IER 0x28
> +#define ICAP_REG_WF 0x100
> +#define ICAP_REG_RF 0x104
> +#define ICAP_REG_SZ 0x108
> +#define ICAP_REG_CR 0x10C
> +#define ICAP_REG_SR 0x110
> +#define ICAP_REG_WFV 0x114
> +#define ICAP_REG_RFO 0x118
> +#define ICAP_REG_ASR 0x11C
> +
> +#define ICAP_STATUS_EOS 0x4
> +#define ICAP_STATUS_DONE 0x1
> +
> +/*
> + * Canned command sequence to obtain IDCODE of the FPGA
> + */
> +static const u32 idcode_stream[] = {
> + /* dummy word */
> + cpu_to_be32(0xffffffff),
> + /* sync word */
> + cpu_to_be32(0xaa995566),
> + /* NOP word */
> + cpu_to_be32(0x20000000),
> + /* NOP word */
> + cpu_to_be32(0x20000000),
> + /* ID code */
> + cpu_to_be32(0x28018001),
> + /* NOP word */
> + cpu_to_be32(0x20000000),
> + /* NOP word */
> + cpu_to_be32(0x20000000),
> +};
> +
> +XRT_DEFINE_REGMAP_CONFIG(icap_regmap_config);
> +
> +struct icap {
> + struct xrt_device *xdev;
> + struct regmap *regmap;
> + struct mutex icap_lock; /* icap dev lock */

ok

Reviewed-by: Tom Rix <[email protected]>

> + u32 idcode;
> +};
> +
> +static int wait_for_done(const struct icap *icap)
> +{
> + int i = 0;
> + int ret;
> + u32 w;
> +
> + for (i = 0; i < 10; i++) {
> + /*
> + * It takes a few microseconds for ICAP to process incoming data.
> + * Polling every 5 us up to 10 times is good enough.
> + */
> + udelay(5);
> + ret = regmap_read(icap->regmap, ICAP_REG_SR, &w);
> + if (ret)
> + return ret;
> + ICAP_INFO(icap, "XHWICAP_SR: %x", w);
> + if (w & (ICAP_STATUS_EOS | ICAP_STATUS_DONE))
> + return 0;
> + }
> +
> + ICAP_ERR(icap, "bitstream download timeout");
> + return -ETIMEDOUT;
> +}
> +
> +static int icap_write(const struct icap *icap, const u32 *word_buf, int size)
> +{
> + u32 value = 0;
> + int ret;
> + int i;
> +
> + for (i = 0; i < size; i++) {
> + value = be32_to_cpu(word_buf[i]);
> + ret = regmap_write(icap->regmap, ICAP_REG_WF, value);
> + if (ret)
> + return ret;
> + }
> +
> + ret = regmap_write(icap->regmap, ICAP_REG_CR, 0x1);
> + if (ret)
> + return ret;
> +
> + for (i = 0; i < 20; i++) {
> + ret = regmap_read(icap->regmap, ICAP_REG_CR, &value);
> + if (ret)
> + return ret;
> +
> + if ((value & 0x1) == 0)
> + return 0;
> + ndelay(50);
> + }
> +
> + ICAP_ERR(icap, "writing %d dwords timeout", size);
> + return -EIO;
> +}
> +
> +static int bitstream_helper(struct icap *icap, const u32 *word_buffer,
> + u32 word_count)
> +{
> + int wr_fifo_vacancy = 0;
> + u32 word_written = 0;
> + u32 remain_word;
> + int err = 0;
> +
> + WARN_ON(!mutex_is_locked(&icap->icap_lock));
> + for (remain_word = word_count; remain_word > 0;
> + remain_word -= word_written, word_buffer += word_written) {
> + err = regmap_read(icap->regmap, ICAP_REG_WFV, &wr_fifo_vacancy);
> + if (err) {
> + ICAP_ERR(icap, "read wr_fifo_vacancy failed %d", err);
> + break;
> + }
> + if (wr_fifo_vacancy <= 0) {
> + ICAP_ERR(icap, "no vacancy: %d", wr_fifo_vacancy);
> + err = -EIO;
> + break;
> + }
> + word_written = (wr_fifo_vacancy < remain_word) ?
> + wr_fifo_vacancy : remain_word;
> + if (icap_write(icap, word_buffer, word_written) != 0) {
> + ICAP_ERR(icap, "write failed remain %d, written %d",
> + remain_word, word_written);
> + err = -EIO;
> + break;
> + }
> + }
> +
> + return err;
> +}
> +
> +static int icap_download(struct icap *icap, const char *buffer,
> + unsigned long length)
> +{
> + u32 num_chars_read = XCLBIN_HWICAP_BITFILE_BUF_SZ;
> + u32 byte_read;
> + int err = 0;
> +
> + if (length % sizeof(u32)) {
> + ICAP_ERR(icap, "invalid bitstream length %ld", length);
> + return -EINVAL;
> + }
> +
> + mutex_lock(&icap->icap_lock);
> + for (byte_read = 0; byte_read < length; byte_read += num_chars_read) {
> + num_chars_read = length - byte_read;
> + if (num_chars_read > XCLBIN_HWICAP_BITFILE_BUF_SZ)
> + num_chars_read = XCLBIN_HWICAP_BITFILE_BUF_SZ;
> +
> + err = bitstream_helper(icap, (u32 *)buffer, num_chars_read / sizeof(u32));
> + if (err)
> + goto failed;
> + buffer += num_chars_read;
> + }
> +
> + /* No cleanup is needed if writing to ICAP times out. */
> + err = wait_for_done(icap);
> +
> +failed:
> + mutex_unlock(&icap->icap_lock);
> +
> + return err;
> +}
> +
> +/*
> + * Discover the FPGA IDCODE using special sequence of canned commands
> + */
> +static int icap_probe_chip(struct icap *icap)
> +{
> + int err;
> + u32 val = 0;
> +
> + regmap_read(icap->regmap, ICAP_REG_SR, &val);
> + if (val != ICAP_STATUS_DONE)
> + return -ENODEV;
> + /* Read ICAP FIFO vacancy */
> + regmap_read(icap->regmap, ICAP_REG_WFV, &val);
> + if (val < 8)
> + return -ENODEV;
> + err = icap_write(icap, idcode_stream, ARRAY_SIZE(idcode_stream));
> + if (err)
> + return err;
> + err = wait_for_done(icap);
> + if (err)
> + return err;
> +
> + /* Tell config engine how many words to transfer to read FIFO */
> + regmap_write(icap->regmap, ICAP_REG_SZ, 0x1);
> + /* Switch the ICAP to read mode */
> + regmap_write(icap->regmap, ICAP_REG_CR, 0x2);
> + err = wait_for_done(icap);
> + if (err)
> + return err;
> +
> + /* Read IDCODE from Read FIFO */
> + regmap_read(icap->regmap, ICAP_REG_RF, &icap->idcode);
> + return 0;
> +}
> +
> +static int
> +xrt_icap_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
> +{
> + struct xrt_icap_wr *wr_arg = arg;
> + struct icap *icap;
> + int ret = 0;
> +
> + icap = xrt_get_drvdata(xdev);
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + /* Does not handle any event. */
> + break;
> + case XRT_ICAP_WRITE:
> + ret = icap_download(icap, wr_arg->xiiw_bit_data,
> + wr_arg->xiiw_data_len);
> + break;
> + case XRT_ICAP_GET_IDCODE:
> + *(u32 *)arg = icap->idcode;
> + break;
> + default:
> + ICAP_ERR(icap, "unknown command %d", cmd);
> + return -EINVAL;
> + }
> +
> + return ret;
> +}
> +
> +static int xrt_icap_probe(struct xrt_device *xdev)
> +{
> + void __iomem *base = NULL;
> + struct resource *res;
> + struct icap *icap;
> + int result = 0;
> +
> + icap = devm_kzalloc(&xdev->dev, sizeof(*icap), GFP_KERNEL);
> + if (!icap)
> + return -ENOMEM;
> +
> + icap->xdev = xdev;
> + xrt_set_drvdata(xdev, icap);
> + mutex_init(&icap->icap_lock);
> +
> + xrt_info(xdev, "probing");
> + res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
> + if (!res)
> + return -EINVAL;
> +
> + base = devm_ioremap_resource(&xdev->dev, res);
> + if (IS_ERR(base))
> + return PTR_ERR(base);
> +
> + icap->regmap = devm_regmap_init_mmio(&xdev->dev, base, &icap_regmap_config);
> + if (IS_ERR(icap->regmap)) {
> + ICAP_ERR(icap, "init mmio failed");
> + return PTR_ERR(icap->regmap);
> + }
> + /* Disable ICAP interrupts */
> + regmap_write(icap->regmap, ICAP_REG_GIER, 0);
> +
> + result = icap_probe_chip(icap);
> + if (result)
> + xrt_err(xdev, "Failed to probe FPGA");
> + else
> + xrt_info(xdev, "Discovered FPGA IDCODE %x", icap->idcode);
> + return result;
> +}
> +
> +static struct xrt_dev_endpoints xrt_icap_endpoints[] = {
> + {
> + .xse_names = (struct xrt_dev_ep_names[]) {
> + { .ep_name = XRT_MD_NODE_FPGA_CONFIG },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_driver xrt_icap_driver = {
> + .driver = {
> + .name = XRT_ICAP,
> + },
> + .subdev_id = XRT_SUBDEV_ICAP,
> + .endpoints = xrt_icap_endpoints,
> + .probe = xrt_icap_probe,
> + .leaf_call = xrt_icap_leaf_call,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(icap);
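
A usage note: besides XRT_ICAP_WRITE, which xmgnt_download_bitstream() in
xrt-mgr.c above exercises, XRT_ICAP_GET_IDCODE returns the cached IDCODE to
callers. A minimal sketch, assuming the xleaf helpers from the earlier patches
(the wrapper function itself is illustrative):

static int example_read_idcode(struct xrt_device *xdev, u32 *idcode)
{
        struct xrt_device *icap_leaf;
        int ret;

        icap_leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_ICAP,
                                         XRT_INVALID_DEVICE_INST);
        if (!icap_leaf)
                return -ENODEV;

        /* leaf_call copies icap->idcode into the caller's buffer */
        ret = xleaf_call(icap_leaf, XRT_ICAP_GET_IDCODE, idcode);
        xleaf_put_leaf(xdev, icap_leaf);

        return ret;
}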

2021-05-04 14:10:34

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 15/20] fpga: xrt: devctl xrt driver


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> Add devctl driver. devctl is a type of hardware function which only has a
> few registers to read or write. These functions are discovered by walking the
> firmware metadata. An xrt device node will be created for each of them.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>

v4 was also ok, please add my Reviewed-by line

Reviewed-by: Tom Rix <[email protected]>

> ---
> drivers/fpga/xrt/include/xleaf/devctl.h | 40 ++++++
> drivers/fpga/xrt/lib/xleaf/devctl.c | 169 ++++++++++++++++++++++++
> 2 files changed, 209 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xleaf/devctl.h
> create mode 100644 drivers/fpga/xrt/lib/xleaf/devctl.c
>
> diff --git a/drivers/fpga/xrt/include/xleaf/devctl.h b/drivers/fpga/xrt/include/xleaf/devctl.h
> new file mode 100644
> index 000000000000..b97f3b6d9326
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf/devctl.h
> @@ -0,0 +1,40 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#ifndef _XRT_DEVCTL_H_
> +#define _XRT_DEVCTL_H_
> +
> +#include "xleaf.h"
> +
> +/*
> + * DEVCTL driver leaf calls.
> + */
> +enum xrt_devctl_leaf_cmd {
> + XRT_DEVCTL_READ = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> +};
> +
> +enum xrt_devctl_id {
> + XRT_DEVCTL_ROM_UUID = 0,
> + XRT_DEVCTL_DDR_CALIB,
> + XRT_DEVCTL_GOLDEN_VER,
> + XRT_DEVCTL_MAX
> +};
> +
> +struct xrt_devctl_rw {
> + u32 xdr_id;
> + void *xdr_buf;
> + u32 xdr_len;
> + u32 xdr_offset;
> +};
> +
> +struct xrt_devctl_intf_uuid {
> + u32 uuid_num;
> + uuid_t *uuids;
> +};
> +
> +#endif /* _XRT_DEVCTL_H_ */
> diff --git a/drivers/fpga/xrt/lib/xleaf/devctl.c b/drivers/fpga/xrt/lib/xleaf/devctl.c
> new file mode 100644
> index 000000000000..fb2122be7e56
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/devctl.c
> @@ -0,0 +1,169 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA devctl Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou<[email protected]>
> + */
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/regmap.h>
> +#include <linux/io.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/devctl.h"
> +
> +#define XRT_DEVCTL "xrt_devctl"
> +
> +struct xrt_name_id {
> + char *ep_name;
> + int id;
> +};
> +
> +static struct xrt_name_id name_id[XRT_DEVCTL_MAX] = {
> + { XRT_MD_NODE_BLP_ROM, XRT_DEVCTL_ROM_UUID },
> + { XRT_MD_NODE_GOLDEN_VER, XRT_DEVCTL_GOLDEN_VER },
> +};
> +
> +XRT_DEFINE_REGMAP_CONFIG(devctl_regmap_config);
> +
> +struct xrt_devctl {
> + struct xrt_device *xdev;
> + struct regmap *regmap[XRT_DEVCTL_MAX];
> + ulong sizes[XRT_DEVCTL_MAX];
> +};
> +
> +static int xrt_devctl_name2id(struct xrt_devctl *devctl, const char *name)
> +{
> + int i;
> +
> + for (i = 0; i < XRT_DEVCTL_MAX && name_id[i].ep_name; i++) {
> + if (!strncmp(name_id[i].ep_name, name, strlen(name_id[i].ep_name) + 1))
> + return name_id[i].id;
> + }
> +
> + return -EINVAL;
> +}
> +
> +static int
> +xrt_devctl_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
> +{
> + struct xrt_devctl *devctl;
> + int ret = 0;
> +
> + devctl = xrt_get_drvdata(xdev);
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + /* Does not handle any event. */
> + break;
> + case XRT_DEVCTL_READ: {
> + struct xrt_devctl_rw *rw_arg = arg;
> +
> + if (rw_arg->xdr_len & 0x3) {
> + xrt_err(xdev, "invalid len %d", rw_arg->xdr_len);
> + return -EINVAL;
> + }
> +
> + if (rw_arg->xdr_id >= XRT_DEVCTL_MAX) {
> + xrt_err(xdev, "invalid id %d", rw_arg->xdr_id);
> + return -EINVAL;
> + }
> +
> + if (!devctl->regmap[rw_arg->xdr_id]) {
> + xrt_err(xdev, "io not found, id %d",
> + rw_arg->xdr_id);
> + return -EINVAL;
> + }
> +
> + ret = regmap_bulk_read(devctl->regmap[rw_arg->xdr_id], rw_arg->xdr_offset,
> + rw_arg->xdr_buf,
> + rw_arg->xdr_len / devctl_regmap_config.reg_stride);
> + break;
> + }
> + default:
> + xrt_err(xdev, "unsupported cmd %d", cmd);
> + return -EINVAL;
> + }
> +
> + return ret;
> +}
> +
> +static int xrt_devctl_probe(struct xrt_device *xdev)
> +{
> + struct xrt_devctl *devctl = NULL;
> + void __iomem *base = NULL;
> + struct resource *res;
> + int i, id, ret = 0;
> +
> + devctl = devm_kzalloc(&xdev->dev, sizeof(*devctl), GFP_KERNEL);
> + if (!devctl)
> + return -ENOMEM;
> +
> + devctl->xdev = xdev;
> + xrt_set_drvdata(xdev, devctl);
> +
> + xrt_info(xdev, "probing...");
> + for (i = 0, res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
> + res;
> + res = xrt_get_resource(xdev, IORESOURCE_MEM, ++i)) {
> + struct regmap_config config = devctl_regmap_config;
> +
> + id = xrt_devctl_name2id(devctl, res->name);
> + if (id < 0) {
> + xrt_err(xdev, "ep %s not found", res->name);
> + continue;
> + }
> + base = devm_ioremap_resource(&xdev->dev, res);
> + if (IS_ERR(base)) {
> + ret = PTR_ERR(base);
> + break;
> + }
> + config.max_register = res->end - res->start + 1;
> + devctl->regmap[id] = devm_regmap_init_mmio(&xdev->dev, base, &config);
> + if (IS_ERR(devctl->regmap[id])) {
> + xrt_err(xdev, "map base failed %pR", res);
> + ret = PTR_ERR(devctl->regmap[id]);
> + break;
> + }
> + devctl->sizes[id] = res->end - res->start + 1;
> + }
> +
> + return ret;
> +}
> +
> +static struct xrt_dev_endpoints xrt_devctl_endpoints[] = {
> + {
> + .xse_names = (struct xrt_dev_ep_names[]) {
> + /* add name if ep is in same partition */
> + { .ep_name = XRT_MD_NODE_BLP_ROM },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + {
> + .xse_names = (struct xrt_dev_ep_names[]) {
> + { .ep_name = XRT_MD_NODE_GOLDEN_VER },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + /* adding ep bundle generates devctl device instance */
> + { 0 },
> +};
> +
> +static struct xrt_driver xrt_devctl_driver = {
> + .driver = {
> + .name = XRT_DEVCTL,
> + },
> + .subdev_id = XRT_SUBDEV_DEVCTL,
> + .endpoints = xrt_devctl_endpoints,
> + .probe = xrt_devctl_probe,
> + .leaf_call = xrt_devctl_leaf_call,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(devctl);
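
For context, a minimal sketch (assuming the xleaf helpers from the earlier
patches) of how a caller would read the ROM UUID register block through
XRT_DEVCTL_READ; the wrapper name is illustrative:

static int example_read_rom_uuid(struct xrt_device *xdev, uuid_t *uuid)
{
        struct xrt_devctl_rw rw = { 0 };
        struct xrt_device *devctl_leaf;
        int ret;

        devctl_leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_DEVCTL,
                                           XRT_INVALID_DEVICE_INST);
        if (!devctl_leaf)
                return -ENODEV;

        rw.xdr_id = XRT_DEVCTL_ROM_UUID;
        rw.xdr_buf = uuid;
        rw.xdr_len = sizeof(*uuid);     /* must be a multiple of 4 */
        rw.xdr_offset = 0;

        ret = xleaf_call(devctl_leaf, XRT_DEVCTL_READ, &rw);
        xleaf_put_leaf(xdev, devctl_leaf);

        return ret;
}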

2021-05-04 14:13:09

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 17/20] fpga: xrt: clock frequency counter driver


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> Add clock frequency counter driver. The clock frequency counter is
> a hardware function discovered by walking xclbin metadata. An xrt
> device node will be created for it. Other parts of the driver can read
> the actual clock frequency through the clock frequency counter driver.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>

v4 was ok, please add my Reviewed-by: line

Reviewed-by: Tom Rix <[email protected]>

> ---
> drivers/fpga/xrt/include/xleaf/clkfreq.h | 21 +++
> drivers/fpga/xrt/lib/xleaf/clkfreq.c | 223 +++++++++++++++++++++++
> 2 files changed, 244 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xleaf/clkfreq.h
> create mode 100644 drivers/fpga/xrt/lib/xleaf/clkfreq.c
>
> diff --git a/drivers/fpga/xrt/include/xleaf/clkfreq.h b/drivers/fpga/xrt/include/xleaf/clkfreq.h
> new file mode 100644
> index 000000000000..005441d5df78
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf/clkfreq.h
> @@ -0,0 +1,21 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#ifndef _XRT_CLKFREQ_H_
> +#define _XRT_CLKFREQ_H_
> +
> +#include "xleaf.h"
> +
> +/*
> + * CLKFREQ driver leaf calls.
> + */
> +enum xrt_clkfreq_leaf_cmd {
> + XRT_CLKFREQ_READ = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> +};
> +
> +#endif /* _XRT_CLKFREQ_H_ */
> diff --git a/drivers/fpga/xrt/lib/xleaf/clkfreq.c b/drivers/fpga/xrt/lib/xleaf/clkfreq.c
> new file mode 100644
> index 000000000000..3d1f11152375
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/clkfreq.c
> @@ -0,0 +1,223 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA Clock Frequency Counter Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou<[email protected]>
> + */
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/regmap.h>
> +#include <linux/io.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/clkfreq.h"
> +
> +#define CLKFREQ_ERR(clkfreq, fmt, arg...) \
> + xrt_err((clkfreq)->xdev, fmt "\n", ##arg)
> +#define CLKFREQ_WARN(clkfreq, fmt, arg...) \
> + xrt_warn((clkfreq)->xdev, fmt "\n", ##arg)
> +#define CLKFREQ_INFO(clkfreq, fmt, arg...) \
> + xrt_info((clkfreq)->xdev, fmt "\n", ##arg)
> +#define CLKFREQ_DBG(clkfreq, fmt, arg...) \
> + xrt_dbg((clkfreq)->xdev, fmt "\n", ##arg)
> +
> +#define XRT_CLKFREQ "xrt_clkfreq"
> +
> +#define XRT_CLKFREQ_CONTROL_STATUS_MASK 0xffff
> +
> +#define XRT_CLKFREQ_CONTROL_START 0x1
> +#define XRT_CLKFREQ_CONTROL_DONE 0x2
> +#define XRT_CLKFREQ_V5_CLK0_ENABLED 0x10000
> +
> +#define XRT_CLKFREQ_CONTROL_REG 0
> +#define XRT_CLKFREQ_COUNT_REG 0x8
> +#define XRT_CLKFREQ_V5_COUNT_REG 0x10
> +
> +#define XRT_CLKFREQ_READ_RETRIES 10
> +
> +XRT_DEFINE_REGMAP_CONFIG(clkfreq_regmap_config);
> +
> +struct clkfreq {
> + struct xrt_device *xdev;
> + struct regmap *regmap;
> + const char *clkfreq_ep_name;
> + struct mutex clkfreq_lock; /* clock counter dev lock */
> +};
> +
> +static int clkfreq_read(struct clkfreq *clkfreq, u32 *freq)
> +{
> + int times = XRT_CLKFREQ_READ_RETRIES;
> + u32 status;
> + int ret;
> +
> + *freq = 0;
> + mutex_lock(&clkfreq->clkfreq_lock);
> + ret = regmap_write(clkfreq->regmap, XRT_CLKFREQ_CONTROL_REG, XRT_CLKFREQ_CONTROL_START);
> + if (ret) {
> + CLKFREQ_INFO(clkfreq, "write start to control reg failed %d", ret);
> + goto failed;
> + }
> + while (times != 0) {
> + ret = regmap_read(clkfreq->regmap, XRT_CLKFREQ_CONTROL_REG, &status);
> + if (ret) {
> + CLKFREQ_INFO(clkfreq, "read control reg failed %d", ret);
> + goto failed;
> + }
> + if ((status & XRT_CLKFREQ_CONTROL_STATUS_MASK) == XRT_CLKFREQ_CONTROL_DONE)
> + break;
> + mdelay(1);
> + times--;
> + }
> +
> + if (!times) {
> + ret = -ETIMEDOUT;
> + goto failed;
> + }
> +
> + if (status & XRT_CLKFREQ_V5_CLK0_ENABLED)
> + ret = regmap_read(clkfreq->regmap, XRT_CLKFREQ_V5_COUNT_REG, freq);
> + else
> + ret = regmap_read(clkfreq->regmap, XRT_CLKFREQ_COUNT_REG, freq);
> + if (ret) {
> + CLKFREQ_INFO(clkfreq, "read count failed %d", ret);
> + goto failed;
> + }
> +
> + mutex_unlock(&clkfreq->clkfreq_lock);
> +
> + return 0;
> +
> +failed:
> + mutex_unlock(&clkfreq->clkfreq_lock);
> +
> + return ret;
> +}
> +
> +static ssize_t freq_show(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> + struct clkfreq *clkfreq = xrt_get_drvdata(to_xrt_dev(dev));
> + ssize_t count;
> + u32 freq;
> +
> + if (clkfreq_read(clkfreq, &freq))
> + return -EINVAL;
> +
> + count = snprintf(buf, 64, "%u\n", freq);
> +
> + return count;
> +}
> +static DEVICE_ATTR_RO(freq);
> +
> +static struct attribute *clkfreq_attrs[] = {
> + &dev_attr_freq.attr,
> + NULL,
> +};
> +
> +static struct attribute_group clkfreq_attr_group = {
> + .attrs = clkfreq_attrs,
> +};
> +
> +static int
> +xrt_clkfreq_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
> +{
> + struct clkfreq *clkfreq;
> + int ret = 0;
> +
> + clkfreq = xrt_get_drvdata(xdev);
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + /* Does not handle any event. */
> + break;
> + case XRT_CLKFREQ_READ:
> + ret = clkfreq_read(clkfreq, arg);
> + break;
> + default:
> + xrt_err(xdev, "unsupported cmd %d", cmd);
> + return -EINVAL;
> + }
> +
> + return ret;
> +}
> +
> +static void clkfreq_remove(struct xrt_device *xdev)
> +{
> + sysfs_remove_group(&xdev->dev.kobj, &clkfreq_attr_group);
> +}
> +
> +static int clkfreq_probe(struct xrt_device *xdev)
> +{
> + struct clkfreq *clkfreq = NULL;
> + void __iomem *base = NULL;
> + struct resource *res;
> + int ret;
> +
> + clkfreq = devm_kzalloc(&xdev->dev, sizeof(*clkfreq), GFP_KERNEL);
> + if (!clkfreq)
> + return -ENOMEM;
> +
> + xrt_set_drvdata(xdev, clkfreq);
> + clkfreq->xdev = xdev;
> + mutex_init(&clkfreq->clkfreq_lock);
> +
> + res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
> + if (!res) {
> + ret = -EINVAL;
> + goto failed;
> + }
> + base = devm_ioremap_resource(&xdev->dev, res);
> + if (IS_ERR(base)) {
> + ret = PTR_ERR(base);
> + goto failed;
> + }
> +
> + clkfreq->regmap = devm_regmap_init_mmio(&xdev->dev, base, &clkfreq_regmap_config);
> + if (IS_ERR(clkfreq->regmap)) {
> + CLKFREQ_ERR(clkfreq, "regmap %pR failed", res);
> + ret = PTR_ERR(clkfreq->regmap);
> + goto failed;
> + }
> + clkfreq->clkfreq_ep_name = res->name;
> +
> + ret = sysfs_create_group(&xdev->dev.kobj, &clkfreq_attr_group);
> + if (ret) {
> + CLKFREQ_ERR(clkfreq, "create clkfreq attrs failed: %d", ret);
> + goto failed;
> + }
> +
> + CLKFREQ_INFO(clkfreq, "successfully initialized clkfreq subdev");
> +
> + return 0;
> +
> +failed:
> + return ret;
> +}
> +
> +static struct xrt_dev_endpoints xrt_clkfreq_endpoints[] = {
> + {
> + .xse_names = (struct xrt_dev_ep_names[]) {
> + { .compat = XRT_MD_COMPAT_CLKFREQ },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_driver xrt_clkfreq_driver = {
> + .driver = {
> + .name = XRT_CLKFREQ,
> + },
> + .subdev_id = XRT_SUBDEV_CLKFREQ,
> + .endpoints = xrt_clkfreq_endpoints,
> + .probe = clkfreq_probe,
> + .remove = clkfreq_remove,
> + .leaf_call = xrt_clkfreq_leaf_call,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(clkfreq);
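
Besides the sysfs "freq" attribute, other leaves can read the counter in kernel
space through XRT_CLKFREQ_READ. A minimal sketch, assuming the xleaf helpers
from the earlier patches; selection of a particular counter instance by endpoint
name is elided and the wrapper name is illustrative:

static int example_read_clk_counter(struct xrt_device *xdev, u32 *freq)
{
        struct xrt_device *leaf;
        int ret;

        leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_CLKFREQ,
                                    XRT_INVALID_DEVICE_INST);
        if (!leaf)
                return -ENODEV;

        ret = xleaf_call(leaf, XRT_CLKFREQ_READ, freq);
        xleaf_put_leaf(xdev, leaf);

        return ret;
}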

2021-05-04 14:14:24

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 19/20] fpga: xrt: partition isolation driver


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> Add partition isolation xrt driver. Partition isolation is
> a hardware function discovered by walking firmware metadata.
> An xrt device node will be created for it. The partition isolation
> function isolates the different FPGA regions.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>

v4 was fine. Please add my Reviewed-by line

Reviewed-by: Tom Rix <[email protected]>

> ---
> drivers/fpga/xrt/include/xleaf/axigate.h | 23 ++
> drivers/fpga/xrt/lib/xleaf/axigate.c | 325 +++++++++++++++++++++++
> 2 files changed, 348 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xleaf/axigate.h
> create mode 100644 drivers/fpga/xrt/lib/xleaf/axigate.c
>
> diff --git a/drivers/fpga/xrt/include/xleaf/axigate.h b/drivers/fpga/xrt/include/xleaf/axigate.h
> new file mode 100644
> index 000000000000..58f32c76dca1
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf/axigate.h
> @@ -0,0 +1,23 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#ifndef _XRT_AXIGATE_H_
> +#define _XRT_AXIGATE_H_
> +
> +#include "xleaf.h"
> +#include "metadata.h"
> +
> +/*
> + * AXIGATE driver leaf calls.
> + */
> +enum xrt_axigate_leaf_cmd {
> + XRT_AXIGATE_CLOSE = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> + XRT_AXIGATE_OPEN,
> +};
> +
> +#endif /* _XRT_AXIGATE_H_ */
> diff --git a/drivers/fpga/xrt/lib/xleaf/axigate.c b/drivers/fpga/xrt/lib/xleaf/axigate.c
> new file mode 100644
> index 000000000000..493707b782e4
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/axigate.c
> @@ -0,0 +1,325 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA AXI Gate Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou<[email protected]>
> + */
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/regmap.h>
> +#include <linux/io.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/axigate.h"
> +
> +#define XRT_AXIGATE "xrt_axigate"
> +
> +#define XRT_AXIGATE_WRITE_REG 0
> +#define XRT_AXIGATE_READ_REG 8
> +
> +#define XRT_AXIGATE_CTRL_CLOSE 0
> +#define XRT_AXIGATE_CTRL_OPEN_BIT0 1
> +#define XRT_AXIGATE_CTRL_OPEN_BIT1 2
> +
> +#define XRT_AXIGATE_INTERVAL 500 /* ns */
> +
> +struct xrt_axigate {
> + struct xrt_device *xdev;
> + struct regmap *regmap;
> + struct mutex gate_lock; /* gate dev lock */
> + void *evt_hdl;
> + const char *ep_name;
> + bool gate_closed;
> +};
> +
> +XRT_DEFINE_REGMAP_CONFIG(axigate_regmap_config);
> +
> +/* the ep names are in the order of hardware layers */
> +static const char * const xrt_axigate_epnames[] = {
> + XRT_MD_NODE_GATE_PLP, /* PLP: Provider Logic Partition */
> + XRT_MD_NODE_GATE_ULP /* ULP: User Logic Partition */
> +};
> +
> +static inline int close_gate(struct xrt_axigate *gate)
> +{
> + u32 val;
> + int ret;
> +
> + ret = regmap_write(gate->regmap, XRT_AXIGATE_WRITE_REG, XRT_AXIGATE_CTRL_CLOSE);
> + if (ret) {
> + xrt_err(gate->xdev, "write gate failed %d", ret);
> + return ret;
> + }
> + ndelay(XRT_AXIGATE_INTERVAL);
> + /*
> + * Legacy hardware requires an extra read to work properly.
> + * This is not on the critical path, so the extra read should not impact performance much.
> + */
> + ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &val);
> + if (ret) {
> + xrt_err(gate->xdev, "read gate failed %d", ret);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static inline int open_gate(struct xrt_axigate *gate)
> +{
> + u32 val;
> + int ret;
> +
> + ret = regmap_write(gate->regmap, XRT_AXIGATE_WRITE_REG, XRT_AXIGATE_CTRL_OPEN_BIT1);
> + if (ret) {
> + xrt_err(gate->xdev, "write 2 failed %d", ret);
> + return ret;
> + }
> + ndelay(XRT_AXIGATE_INTERVAL);
> + /*
> + * Legacy hardware requires an extra read to work properly.
> + * This is not on the critical path, so the extra read should not impact performance much.
> + */
> + ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &val);
> + if (ret) {
> + xrt_err(gate->xdev, "read 2 failed %d", ret);
> + return ret;
> + }
> + ret = regmap_write(gate->regmap, XRT_AXIGATE_WRITE_REG,
> + XRT_AXIGATE_CTRL_OPEN_BIT0 | XRT_AXIGATE_CTRL_OPEN_BIT1);
> + if (ret) {
> + xrt_err(gate->xdev, "write 3 failed %d", ret);
> + return ret;
> + }
> + ndelay(XRT_AXIGATE_INTERVAL);
> + ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &val);
> + if (ret) {
> + xrt_err(gate->xdev, "read 3 failed %d", ret);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static int xrt_axigate_epname_idx(struct xrt_device *xdev)
> +{
> + struct resource *res;
> + int ret, i;
> +
> + res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
> + if (!res) {
> + xrt_err(xdev, "Empty Resource!");
> + return -EINVAL;
> + }
> +
> + for (i = 0; i < ARRAY_SIZE(xrt_axigate_epnames); i++) {
> + ret = strncmp(xrt_axigate_epnames[i], res->name,
> + strlen(xrt_axigate_epnames[i]) + 1);
> + if (!ret)
> + return i;
> + }
> +
> + return -EINVAL;
> +}
> +
> +static int xrt_axigate_close(struct xrt_device *xdev)
> +{
> + struct xrt_axigate *gate;
> + u32 status = 0;
> + int ret;
> +
> + gate = xrt_get_drvdata(xdev);
> +
> + mutex_lock(&gate->gate_lock);
> + ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &status);
> + if (ret) {
> + xrt_err(xdev, "read gate failed %d", ret);
> + goto failed;
> + }
> + if (status) { /* gate is opened */
> + xleaf_broadcast_event(xdev, XRT_EVENT_PRE_GATE_CLOSE, false);
> + ret = close_gate(gate);
> + if (ret)
> + goto failed;
> + }
> +
> + gate->gate_closed = true;
> +
> +failed:
> + mutex_unlock(&gate->gate_lock);
> +
> + xrt_info(xdev, "close gate %s", gate->ep_name);
> + return ret;
> +}
> +
> +static int xrt_axigate_open(struct xrt_device *xdev)
> +{
> + struct xrt_axigate *gate;
> + u32 status;
> + int ret;
> +
> + gate = xrt_get_drvdata(xdev);
> +
> + mutex_lock(&gate->gate_lock);
> + ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &status);
> + if (ret) {
> + xrt_err(xdev, "read gate failed %d", ret);
> + goto failed;
> + }
> + if (!status) { /* gate is closed */
> + ret = open_gate(gate);
> + if (ret)
> + goto failed;
> + xleaf_broadcast_event(xdev, XRT_EVENT_POST_GATE_OPEN, true);
> + /* xrt_axigate_open() could be called in an event callback, thus
> + * we cannot wait for the broadcast to complete
> + */
> + }
> +
> + gate->gate_closed = false;
> +
> +failed:
> + mutex_unlock(&gate->gate_lock);
> +
> + xrt_info(xdev, "open gate %s", gate->ep_name);
> + return ret;
> +}
> +
> +static void xrt_axigate_event_cb(struct xrt_device *xdev, void *arg)
> +{
> + struct xrt_axigate *gate = xrt_get_drvdata(xdev);
> + struct xrt_event *evt = (struct xrt_event *)arg;
> + enum xrt_events e = evt->xe_evt;
> + struct xrt_device *leaf;
> + enum xrt_subdev_id id;
> + struct resource *res;
> + int instance;
> +
> + if (e != XRT_EVENT_POST_CREATION)
> + return;
> +
> + instance = evt->xe_subdev.xevt_subdev_instance;
> + id = evt->xe_subdev.xevt_subdev_id;
> + if (id != XRT_SUBDEV_AXIGATE)
> + return;
> +
> + leaf = xleaf_get_leaf_by_id(xdev, id, instance);
> + if (!leaf)
> + return;
> +
> + res = xrt_get_resource(leaf, IORESOURCE_MEM, 0);
> + if (!res || !strncmp(res->name, gate->ep_name, strlen(res->name) + 1)) {
> + xleaf_put_leaf(xdev, leaf);
> + return;
> + }
> +
> + /* higher level axigate instance created, make sure the gate is opened. */
> + if (xrt_axigate_epname_idx(leaf) > xrt_axigate_epname_idx(xdev))
> + xrt_axigate_open(xdev);
> + else
> + xleaf_call(leaf, XRT_AXIGATE_OPEN, NULL);
> +
> + xleaf_put_leaf(xdev, leaf);
> +}
> +
> +static int
> +xrt_axigate_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
> +{
> + int ret = 0;
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + xrt_axigate_event_cb(xdev, arg);
> + break;
> + case XRT_AXIGATE_CLOSE:
> + ret = xrt_axigate_close(xdev);
> + break;
> + case XRT_AXIGATE_OPEN:
> + ret = xrt_axigate_open(xdev);
> + break;
> + default:
> + xrt_err(xdev, "unsupported cmd %d", cmd);
> + return -EINVAL;
> + }
> +
> + return ret;
> +}
> +
> +static int xrt_axigate_probe(struct xrt_device *xdev)
> +{
> + struct xrt_axigate *gate = NULL;
> + void __iomem *base = NULL;
> + struct resource *res;
> + int ret;
> +
> + gate = devm_kzalloc(&xdev->dev, sizeof(*gate), GFP_KERNEL);
> + if (!gate)
> + return -ENOMEM;
> +
> + gate->xdev = xdev;
> + xrt_set_drvdata(xdev, gate);
> +
> + xrt_info(xdev, "probing...");
> + res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
> + if (!res) {
> + xrt_err(xdev, "Empty resource 0");
> + ret = -EINVAL;
> + goto failed;
> + }
> +
> + base = devm_ioremap_resource(&xdev->dev, res);
> + if (IS_ERR(base)) {
> + xrt_err(xdev, "map base iomem failed");
> + ret = PTR_ERR(base);
> + goto failed;
> + }
> +
> + gate->regmap = devm_regmap_init_mmio(&xdev->dev, base, &axigate_regmap_config);
> + if (IS_ERR(gate->regmap)) {
> + xrt_err(xdev, "regmap %pR failed", res);
> + ret = PTR_ERR(gate->regmap);
> + goto failed;
> + }
> + gate->ep_name = res->name;
> +
> + mutex_init(&gate->gate_lock);
> +
> + return 0;
> +
> +failed:
> + return ret;
> +}
> +
> +static struct xrt_dev_endpoints xrt_axigate_endpoints[] = {
> + {
> + .xse_names = (struct xrt_dev_ep_names[]) {
> + { .ep_name = XRT_MD_NODE_GATE_ULP },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + {
> + .xse_names = (struct xrt_dev_ep_names[]) {
> + { .ep_name = XRT_MD_NODE_GATE_PLP },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_driver xrt_axigate_driver = {
> + .driver = {
> + .name = XRT_AXIGATE,
> + },
> + .subdev_id = XRT_SUBDEV_AXIGATE,
> + .endpoints = xrt_axigate_endpoints,
> + .probe = xrt_axigate_probe,
> + .leaf_call = xrt_axigate_leaf_call,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(axigate);
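
For context, the typical caller of these leaf calls is the region-programming
path, which fences a partition before reprogramming it and reopens the gate
afterwards. A minimal sketch, assuming the xleaf helpers from the earlier
patches; instance selection by endpoint name is elided and the wrapper name is
illustrative:

static int example_reprogram_fenced(struct xrt_device *xdev)
{
        struct xrt_device *gate_leaf;
        int ret;

        gate_leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_AXIGATE,
                                         XRT_INVALID_DEVICE_INST);
        if (!gate_leaf)
                return -ENODEV;

        ret = xleaf_call(gate_leaf, XRT_AXIGATE_CLOSE, NULL);
        if (!ret) {
                /* ... download the new partial bitstream here ... */
                ret = xleaf_call(gate_leaf, XRT_AXIGATE_OPEN, NULL);
        }
        xleaf_put_leaf(xdev, gate_leaf);

        return ret;
}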

2021-05-04 14:37:01

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 13/20] fpga: xrt: User Clock Subsystem driver


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> Add User Clock Subsystem (UCS) driver. UCS is a hardware function
> discovered by walking xclbin metadata. An xrt device node will be
> created for it. UCS enables/disables the dynamic region clocks.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>

v4 was ok, please add my Reviewed-by line here, under yours.

Do the same for the others I have also ok-ed.

Reviewed-by: Tom Rix <[email protected]>

> ---
> drivers/fpga/xrt/lib/xleaf/ucs.c | 152 +++++++++++++++++++++++++++++++
> 1 file changed, 152 insertions(+)
> create mode 100644 drivers/fpga/xrt/lib/xleaf/ucs.c
>
> diff --git a/drivers/fpga/xrt/lib/xleaf/ucs.c b/drivers/fpga/xrt/lib/xleaf/ucs.c
> new file mode 100644
> index 000000000000..a7a96ddde44f
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/ucs.c
> @@ -0,0 +1,152 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA UCS Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou<[email protected]>
> + */
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/regmap.h>
> +#include <linux/io.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/clock.h"
> +
> +#define UCS_ERR(ucs, fmt, arg...) \
> + xrt_err((ucs)->xdev, fmt "\n", ##arg)
> +#define UCS_WARN(ucs, fmt, arg...) \
> + xrt_warn((ucs)->xdev, fmt "\n", ##arg)
> +#define UCS_INFO(ucs, fmt, arg...) \
> + xrt_info((ucs)->xdev, fmt "\n", ##arg)
> +#define UCS_DBG(ucs, fmt, arg...) \
> + xrt_dbg((ucs)->xdev, fmt "\n", ##arg)
> +
> +#define XRT_UCS "xrt_ucs"
> +
> +#define XRT_UCS_CHANNEL1_REG 0
> +#define XRT_UCS_CHANNEL2_REG 8
> +
> +#define CLK_MAX_VALUE 6400
> +
> +XRT_DEFINE_REGMAP_CONFIG(ucs_regmap_config);
> +
> +struct xrt_ucs {
> + struct xrt_device *xdev;
> + struct regmap *regmap;
> + struct mutex ucs_lock; /* ucs dev lock */
> +};
> +
> +static void xrt_ucs_event_cb(struct xrt_device *xdev, void *arg)
> +{
> + struct xrt_event *evt = (struct xrt_event *)arg;
> + enum xrt_events e = evt->xe_evt;
> + struct xrt_device *leaf;
> + enum xrt_subdev_id id;
> + int instance;
> +
> + id = evt->xe_subdev.xevt_subdev_id;
> + instance = evt->xe_subdev.xevt_subdev_instance;
> +
> + if (e != XRT_EVENT_POST_CREATION) {
> + xrt_dbg(xdev, "ignored event %d", e);
> + return;
> + }
> +
> + if (id != XRT_SUBDEV_CLOCK)
> + return;
> +
> + leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_CLOCK, instance);
> + if (!leaf) {
> + xrt_err(xdev, "does not get clock subdev");
> + return;
> + }
> +
> + xleaf_call(leaf, XRT_CLOCK_VERIFY, NULL);
> + xleaf_put_leaf(xdev, leaf);
> +}
> +
> +static int ucs_enable(struct xrt_ucs *ucs)
> +{
> + int ret;
> +
> + mutex_lock(&ucs->ucs_lock);
> + ret = regmap_write(ucs->regmap, XRT_UCS_CHANNEL2_REG, 1);
> + mutex_unlock(&ucs->ucs_lock);
> +
> + return ret;
> +}
> +
> +static int
> +xrt_ucs_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
> +{
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + xrt_ucs_event_cb(xdev, arg);
> + break;
> + default:
> + xrt_err(xdev, "unsupported cmd %d", cmd);
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +static int ucs_probe(struct xrt_device *xdev)
> +{
> + struct xrt_ucs *ucs = NULL;
> + void __iomem *base = NULL;
> + struct resource *res;
> +
> + ucs = devm_kzalloc(&xdev->dev, sizeof(*ucs), GFP_KERNEL);
> + if (!ucs)
> + return -ENOMEM;
> +
> + xrt_set_drvdata(xdev, ucs);
> + ucs->xdev = xdev;
> + mutex_init(&ucs->ucs_lock);
> +
> + res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
> + if (!res)
> + return -EINVAL;
> +
> + base = devm_ioremap_resource(&xdev->dev, res);
> + if (IS_ERR(base))
> + return PTR_ERR(base);
> +
> + ucs->regmap = devm_regmap_init_mmio(&xdev->dev, base, &ucs_regmap_config);
> + if (IS_ERR(ucs->regmap)) {
> + UCS_ERR(ucs, "map base %pR failed", res);
> + return PTR_ERR(ucs->regmap);
> + }
> + ucs_enable(ucs);
> +
> + return 0;
> +}
> +
> +static struct xrt_dev_endpoints xrt_ucs_endpoints[] = {
> + {
> + .xse_names = (struct xrt_dev_ep_names[]) {
> + { .ep_name = XRT_MD_NODE_UCS_CONTROL_STATUS },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_driver xrt_ucs_driver = {
> + .driver = {
> + .name = XRT_UCS,
> + },
> + .subdev_id = XRT_SUBDEV_UCS,
> + .endpoints = xrt_ucs_endpoints,
> + .probe = ucs_probe,
> + .leaf_call = xrt_ucs_leaf_call,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(ucs);

2021-05-04 14:38:06

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 16/20] fpga: xrt: clock driver


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> Add clock driver. Clock is a hardware function discovered by walking
> xclbin metadata. An xrt device node will be created for it. Other parts of
> the driver configure clocks through the clock driver.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>

v4 was also ok, please add my Reviewed-by line

Reviewed-by: Tom Rix <[email protected]>

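One usage note for readers: XRT_CLOCK_GET, declared in clock.h quoted below, is
meant to return, per its struct xrt_clock_get argument, the programmed frequency
and the frequency counter reading. A minimal sketch of a caller, assuming the
xleaf helpers from the earlier patches (the wrapper name is illustrative):

static int example_query_clock(struct xrt_device *xdev,
                               struct xrt_clock_get *get)
{
        struct xrt_device *clock_leaf;
        int ret;

        clock_leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_CLOCK,
                                          XRT_INVALID_DEVICE_INST);
        if (!clock_leaf)
                return -ENODEV;

        ret = xleaf_call(clock_leaf, XRT_CLOCK_GET, get);
        xleaf_put_leaf(xdev, clock_leaf);

        return ret;
}
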
> ---
> drivers/fpga/xrt/include/xleaf/clock.h | 29 ++
> drivers/fpga/xrt/lib/xleaf/clock.c | 652 +++++++++++++++++++++++++
> 2 files changed, 681 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xleaf/clock.h
> create mode 100644 drivers/fpga/xrt/lib/xleaf/clock.c
>
> diff --git a/drivers/fpga/xrt/include/xleaf/clock.h b/drivers/fpga/xrt/include/xleaf/clock.h
> new file mode 100644
> index 000000000000..6858473fd096
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf/clock.h
> @@ -0,0 +1,29 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#ifndef _XRT_CLOCK_H_
> +#define _XRT_CLOCK_H_
> +
> +#include "xleaf.h"
> +#include <linux/xrt/xclbin.h>
> +
> +/*
> + * CLOCK driver leaf calls.
> + */
> +enum xrt_clock_leaf_cmd {
> + XRT_CLOCK_SET = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> + XRT_CLOCK_GET,
> + XRT_CLOCK_VERIFY,
> +};
> +
> +struct xrt_clock_get {
> + u16 freq;
> + u32 freq_cnter;
> +};
> +
> +#endif /* _XRT_CLOCK_H_ */
> diff --git a/drivers/fpga/xrt/lib/xleaf/clock.c b/drivers/fpga/xrt/lib/xleaf/clock.c
> new file mode 100644
> index 000000000000..7303be55c07a
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/clock.c
> @@ -0,0 +1,652 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA Clock Wizard Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou<[email protected]>
> + * Sonal Santan <[email protected]>
> + * David Zhang <[email protected]>
> + */
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/regmap.h>
> +#include <linux/io.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/clock.h"
> +#include "xleaf/clkfreq.h"
> +
> +/* XRT_CLOCK_MAX_NUM_CLOCKS should be a concept from XCLBIN_ in the future */
> +#define XRT_CLOCK_MAX_NUM_CLOCKS 4
> +#define XRT_CLOCK_STATUS_MASK 0xffff
> +#define XRT_CLOCK_STATUS_MEASURE_START 0x1
> +#define XRT_CLOCK_STATUS_MEASURE_DONE 0x2
> +
> +#define XRT_CLOCK_STATUS_REG 0x4
> +#define XRT_CLOCK_CLKFBOUT_REG 0x200
> +#define XRT_CLOCK_CLKOUT0_REG 0x208
> +#define XRT_CLOCK_LOAD_SADDR_SEN_REG 0x25C
> +#define XRT_CLOCK_DEFAULT_EXPIRE_SECS 1
> +
> +#define CLOCK_ERR(clock, fmt, arg...) \
> + xrt_err((clock)->xdev, fmt "\n", ##arg)
> +#define CLOCK_WARN(clock, fmt, arg...) \
> + xrt_warn((clock)->xdev, fmt "\n", ##arg)
> +#define CLOCK_INFO(clock, fmt, arg...) \
> + xrt_info((clock)->xdev, fmt "\n", ##arg)
> +#define CLOCK_DBG(clock, fmt, arg...) \
> + xrt_dbg((clock)->xdev, fmt "\n", ##arg)
> +
> +#define XRT_CLOCK "xrt_clock"
> +
> +XRT_DEFINE_REGMAP_CONFIG(clock_regmap_config);
> +
> +struct clock {
> + struct xrt_device *xdev;
> + struct regmap *regmap;
> + struct mutex clock_lock; /* clock dev lock */
> +
> + const char *clock_ep_name;
> +};
> +
> +/*
> + * Precomputed table with config0 and config2 register values together with
> + * target frequency. The steps are approximately 5 MHz apart. Table is
> + * generated by platform creation tool.
> + */
> +static const struct xmgnt_ocl_clockwiz {
> + /* target frequency */
> + u16 ocl;
> + /* config0 register */
> + u32 config0;
> + /* config2 register */
> + u32 config2;
> +} frequency_table[] = {
> + /*1275.000*/ { 10, 0x02EE0C01, 0x0001F47F },
> + /*1575.000*/ { 15, 0x02EE0F01, 0x00000069},
> + /*1600.000*/ { 20, 0x00001001, 0x00000050},
> + /*1600.000*/ { 25, 0x00001001, 0x00000040},
> + /*1575.000*/ { 30, 0x02EE0F01, 0x0001F434},
> + /*1575.000*/ { 35, 0x02EE0F01, 0x0000002D},
> + /*1600.000*/ { 40, 0x00001001, 0x00000028},
> + /*1575.000*/ { 45, 0x02EE0F01, 0x00000023},
> + /*1600.000*/ { 50, 0x00001001, 0x00000020},
> + /*1512.500*/ { 55, 0x007D0F01, 0x0001F41B},
> + /*1575.000*/ { 60, 0x02EE0F01, 0x0000FA1A},
> + /*1462.500*/ { 65, 0x02710E01, 0x0001F416},
> + /*1575.000*/ { 70, 0x02EE0F01, 0x0001F416},
> + /*1575.000*/ { 75, 0x02EE0F01, 0x00000015},
> + /*1600.000*/ { 80, 0x00001001, 0x00000014},
> + /*1487.500*/ { 85, 0x036B0E01, 0x0001F411},
> + /*1575.000*/ { 90, 0x02EE0F01, 0x0001F411},
> + /*1425.000*/ { 95, 0x00FA0E01, 0x0000000F},
> + /*1600.000*/ { 100, 0x00001001, 0x00000010},
> + /*1575.000*/ { 105, 0x02EE0F01, 0x0000000F},
> + /*1512.500*/ { 110, 0x007D0F01, 0x0002EE0D},
> + /*1437.500*/ { 115, 0x01770E01, 0x0001F40C},
> + /*1575.000*/ { 120, 0x02EE0F01, 0x00007D0D},
> + /*1562.500*/ { 125, 0x02710F01, 0x0001F40C},
> + /*1462.500*/ { 130, 0x02710E01, 0x0000FA0B},
> + /*1350.000*/ { 135, 0x01F40D01, 0x0000000A},
> + /*1575.000*/ { 140, 0x02EE0F01, 0x0000FA0B},
> + /*1450.000*/ { 145, 0x01F40E01, 0x0000000A},
> + /*1575.000*/ { 150, 0x02EE0F01, 0x0001F40A},
> + /*1550.000*/ { 155, 0x01F40F01, 0x0000000A},
> + /*1600.000*/ { 160, 0x00001001, 0x0000000A},
> + /*1237.500*/ { 165, 0x01770C01, 0x0001F407},
> + /*1487.500*/ { 170, 0x036B0E01, 0x0002EE08},
> + /*1575.000*/ { 175, 0x02EE0F01, 0x00000009},
> + /*1575.000*/ { 180, 0x02EE0F01, 0x0002EE08},
> + /*1387.500*/ { 185, 0x036B0D01, 0x0001F407},
> + /*1425.000*/ { 190, 0x00FA0E01, 0x0001F407},
> + /*1462.500*/ { 195, 0x02710E01, 0x0001F407},
> + /*1600.000*/ { 200, 0x00001001, 0x00000008},
> + /*1537.500*/ { 205, 0x01770F01, 0x0001F407},
> + /*1575.000*/ { 210, 0x02EE0F01, 0x0001F407},
> + /*1075.000*/ { 215, 0x02EE0A01, 0x00000005},
> + /*1512.500*/ { 220, 0x007D0F01, 0x00036B06},
> + /*1575.000*/ { 225, 0x02EE0F01, 0x00000007},
> + /*1437.500*/ { 230, 0x01770E01, 0x0000FA06},
> + /*1175.000*/ { 235, 0x02EE0B01, 0x00000005},
> + /*1500.000*/ { 240, 0x00000F01, 0x0000FA06},
> + /*1225.000*/ { 245, 0x00FA0C01, 0x00000005},
> + /*1562.500*/ { 250, 0x02710F01, 0x0000FA06},
> + /*1275.000*/ { 255, 0x02EE0C01, 0x00000005},
> + /*1462.500*/ { 260, 0x02710E01, 0x00027105},
> + /*1325.000*/ { 265, 0x00FA0D01, 0x00000005},
> + /*1350.000*/ { 270, 0x01F40D01, 0x00000005},
> + /*1512.500*/ { 275, 0x007D0F01, 0x0001F405},
> + /*1575.000*/ { 280, 0x02EE0F01, 0x00027105},
> + /*1425.000*/ { 285, 0x00FA0E01, 0x00000005},
> + /*1450.000*/ { 290, 0x01F40E01, 0x00000005},
> + /*1475.000*/ { 295, 0x02EE0E01, 0x00000005},
> + /*1575.000*/ { 300, 0x02EE0F01, 0x0000FA05},
> + /*1525.000*/ { 305, 0x00FA0F01, 0x00000005},
> + /*1550.000*/ { 310, 0x01F40F01, 0x00000005},
> + /*1575.000*/ { 315, 0x02EE0F01, 0x00000005},
> + /*1600.000*/ { 320, 0x00001001, 0x00000005},
> + /*1462.500*/ { 325, 0x02710E01, 0x0001F404},
> + /*1237.500*/ { 330, 0x01770C01, 0x0002EE03},
> + /* 837.500*/ { 335, 0x01770801, 0x0001F402},
> + /*1487.500*/ { 340, 0x036B0E01, 0x00017704},
> + /* 862.500*/ { 345, 0x02710801, 0x0001F402},
> + /*1575.000*/ { 350, 0x02EE0F01, 0x0001F404},
> + /* 887.500*/ { 355, 0x036B0801, 0x0001F402},
> + /*1575.000*/ { 360, 0x02EE0F01, 0x00017704},
> + /* 912.500*/ { 365, 0x007D0901, 0x0001F402},
> + /*1387.500*/ { 370, 0x036B0D01, 0x0002EE03},
> + /*1500.000*/ { 375, 0x00000F01, 0x00000004},
> + /*1425.000*/ { 380, 0x00FA0E01, 0x0002EE03},
> + /* 962.500*/ { 385, 0x02710901, 0x0001F402},
> + /*1462.500*/ { 390, 0x02710E01, 0x0002EE03},
> + /* 987.500*/ { 395, 0x036B0901, 0x0001F402},
> + /*1600.000*/ { 400, 0x00001001, 0x00000004},
> + /*1012.500*/ { 405, 0x007D0A01, 0x0001F402},
> + /*1537.500*/ { 410, 0x01770F01, 0x0002EE03},
> + /*1037.500*/ { 415, 0x01770A01, 0x0001F402},
> + /*1575.000*/ { 420, 0x02EE0F01, 0x0002EE03},
> + /*1487.500*/ { 425, 0x036B0E01, 0x0001F403},
> + /*1075.000*/ { 430, 0x02EE0A01, 0x0001F402},
> + /*1087.500*/ { 435, 0x036B0A01, 0x0001F402},
> + /*1375.000*/ { 440, 0x02EE0D01, 0x00007D03},
> + /*1112.500*/ { 445, 0x007D0B01, 0x0001F402},
> + /*1575.000*/ { 450, 0x02EE0F01, 0x0001F403},
> + /*1137.500*/ { 455, 0x01770B01, 0x0001F402},
> + /*1437.500*/ { 460, 0x01770E01, 0x00007D03},
> + /*1162.500*/ { 465, 0x02710B01, 0x0001F402},
> + /*1175.000*/ { 470, 0x02EE0B01, 0x0001F402},
> + /*1425.000*/ { 475, 0x00FA0E01, 0x00000003},
> + /*1500.000*/ { 480, 0x00000F01, 0x00007D03},
> + /*1212.500*/ { 485, 0x007D0C01, 0x0001F402},
> + /*1225.000*/ { 490, 0x00FA0C01, 0x0001F402},
> + /*1237.500*/ { 495, 0x01770C01, 0x0001F402},
> + /*1562.500*/ { 500, 0x02710F01, 0x00007D03},
> + /*1262.500*/ { 505, 0x02710C01, 0x0001F402},
> + /*1275.000*/ { 510, 0x02EE0C01, 0x0001F402},
> + /*1287.500*/ { 515, 0x036B0C01, 0x0001F402},
> + /*1300.000*/ { 520, 0x00000D01, 0x0001F402},
> + /*1575.000*/ { 525, 0x02EE0F01, 0x00000003},
> + /*1325.000*/ { 530, 0x00FA0D01, 0x0001F402},
> + /*1337.500*/ { 535, 0x01770D01, 0x0001F402},
> + /*1350.000*/ { 540, 0x01F40D01, 0x0001F402},
> + /*1362.500*/ { 545, 0x02710D01, 0x0001F402},
> + /*1512.500*/ { 550, 0x007D0F01, 0x0002EE02},
> + /*1387.500*/ { 555, 0x036B0D01, 0x0001F402},
> + /*1400.000*/ { 560, 0x00000E01, 0x0001F402},
> + /*1412.500*/ { 565, 0x007D0E01, 0x0001F402},
> + /*1425.000*/ { 570, 0x00FA0E01, 0x0001F402},
> + /*1437.500*/ { 575, 0x01770E01, 0x0001F402},
> + /*1450.000*/ { 580, 0x01F40E01, 0x0001F402},
> + /*1462.500*/ { 585, 0x02710E01, 0x0001F402},
> + /*1475.000*/ { 590, 0x02EE0E01, 0x0001F402},
> + /*1487.500*/ { 595, 0x036B0E01, 0x0001F402},
> + /*1575.000*/ { 600, 0x02EE0F01, 0x00027102},
> + /*1512.500*/ { 605, 0x007D0F01, 0x0001F402},
> + /*1525.000*/ { 610, 0x00FA0F01, 0x0001F402},
> + /*1537.500*/ { 615, 0x01770F01, 0x0001F402},
> + /*1550.000*/ { 620, 0x01F40F01, 0x0001F402},
> + /*1562.500*/ { 625, 0x02710F01, 0x0001F402},
> + /*1575.000*/ { 630, 0x02EE0F01, 0x0001F402},
> + /*1587.500*/ { 635, 0x036B0F01, 0x0001F402},
> + /*1600.000*/ { 640, 0x00001001, 0x0001F402},
> + /*1290.000*/ { 645, 0x01F44005, 0x00000002},
> + /*1462.500*/ { 650, 0x02710E01, 0x0000FA02}
> +};
> +
> +static u32 find_matching_freq_config(unsigned short freq,
> + const struct xmgnt_ocl_clockwiz *table,
> + int size)
> +{
> + u32 end = size - 1;
> + u32 start = 0;
> + u32 idx;
> +
> + if (freq < table[0].ocl)
> + return 0;
> +
> + if (freq > table[size - 1].ocl)
> + return size - 1;
> +
> + while (start < end) {
> + idx = (start + end) / 2;
> + if (freq == table[idx].ocl)
> + break;
> + if (freq < table[idx].ocl)
> + end = idx;
> + else
> + start = idx + 1;
> + }
> + if (freq < table[idx].ocl)
> + idx--;
> +
> + return idx;
> +}
> +
> +static u32 find_matching_freq(u32 freq,
> + const struct xmgnt_ocl_clockwiz *freq_table,
> + int freq_table_size)
> +{
> + int idx = find_matching_freq_config(freq, freq_table, freq_table_size);
> +
> + return freq_table[idx].ocl;
> +}
> +
> +static inline int clock_wiz_busy(struct clock *clock, int cycle, int interval)
> +{
> + u32 val = 0;
> + int count;
> + int ret;
> +
> + for (count = 0; count < cycle; count++) {
> + ret = regmap_read(clock->regmap, XRT_CLOCK_STATUS_REG, &val);
> + if (ret) {
> + CLOCK_ERR(clock, "read status failed %d", ret);
> + return ret;
> + }
> + if (val == 1)
> + break;
> +
> + mdelay(interval);
> + }
> + if (val != 1) {
> + CLOCK_ERR(clock, "clockwiz is (%u) busy after %d ms",
> + val, cycle * interval);
> + return -EBUSY;
> + }
> +
> + return 0;
> +}
> +
> +static int get_freq(struct clock *clock, u16 *freq)
> +{
> + u32 mul_frac0 = 0;
> + u32 div_frac1 = 0;
> + u32 mul0, div0;
> + u64 input;
> + u32 div1;
> + u32 val;
> + int ret;
> +
> + WARN_ON(!mutex_is_locked(&clock->clock_lock));
> +
> + ret = regmap_read(clock->regmap, XRT_CLOCK_STATUS_REG, &val);
> + if (ret) {
> + CLOCK_ERR(clock, "read status failed %d", ret);
> + return ret;
> + }
> +
> + if ((val & 0x1) == 0) {
> + CLOCK_ERR(clock, "clockwiz is busy %x", val);
> + *freq = 0;
> + return -EBUSY;
> + }
> +
> + ret = regmap_read(clock->regmap, XRT_CLOCK_CLKFBOUT_REG, &val);
> + if (ret) {
> + CLOCK_ERR(clock, "read clkfbout failed %d", ret);
> + return ret;
> + }
> +
> + div0 = val & 0xff;
> + mul0 = (val & 0xff00) >> 8;
> + if (val & BIT(26)) {
> + mul_frac0 = val >> 16;
> + mul_frac0 &= 0x3ff;
> + }
> +
> +	/*
> +	 * Multiply both the numerator (mul0) and the denominator (div0) by 1000
> +	 * to account for the fractional portion of the multiplier
> +	 */
> + mul0 *= 1000;
> + mul0 += mul_frac0;
> + div0 *= 1000;
> +
> + ret = regmap_read(clock->regmap, XRT_CLOCK_CLKOUT0_REG, &val);
> + if (ret) {
> + CLOCK_ERR(clock, "read clkout0 failed %d", ret);
> + return ret;
> + }
> +
> + div1 = val & 0xff;
> + if (val & BIT(18)) {
> + div_frac1 = val >> 8;
> + div_frac1 &= 0x3ff;
> + }
> +
> +	/*
> +	 * Multiply both the numerator (mul0) and the denominator (div1) by
> +	 * 1000 to account for the fractional portion of the divider
> +	 */
> +
> + div1 *= 1000;
> + div1 += div_frac1;
> + div0 *= div1;
> + mul0 *= 1000;
> + if (div0 == 0) {
> + CLOCK_ERR(clock, "clockwiz 0 divider");
> + return 0;
> + }
> +
> + input = mul0 * 100;
> + do_div(input, div0);
> + *freq = (u16)input;
> +
> + return 0;
> +}
> +
> +static int set_freq(struct clock *clock, u16 freq)
> +{
> + int err = 0;
> + u32 idx = 0;
> + u32 val = 0;
> + u32 config;
> +
> + mutex_lock(&clock->clock_lock);
> + idx = find_matching_freq_config(freq, frequency_table,
> + ARRAY_SIZE(frequency_table));
> +
> +	CLOCK_INFO(clock, "New: %d MHz", freq);
> + err = clock_wiz_busy(clock, 20, 50);
> + if (err)
> + goto fail;
> +
> + config = frequency_table[idx].config0;
> + err = regmap_write(clock->regmap, XRT_CLOCK_CLKFBOUT_REG, config);
> + if (err) {
> + CLOCK_ERR(clock, "write clkfbout failed %d", err);
> + goto fail;
> + }
> +
> + config = frequency_table[idx].config2;
> + err = regmap_write(clock->regmap, XRT_CLOCK_CLKOUT0_REG, config);
> + if (err) {
> + CLOCK_ERR(clock, "write clkout0 failed %d", err);
> + goto fail;
> + }
> +
> + mdelay(10);
> + err = regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 7);
> + if (err) {
> + CLOCK_ERR(clock, "write load_saddr_sen failed %d", err);
> + goto fail;
> + }
> +
> + mdelay(1);
> + err = regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 2);
> + if (err) {
> + CLOCK_ERR(clock, "write saddr failed %d", err);
> + goto fail;
> + }
> +
> + CLOCK_INFO(clock, "clockwiz waiting for locked signal");
> +
> + err = clock_wiz_busy(clock, 100, 100);
> + if (err) {
> + CLOCK_ERR(clock, "clockwiz MMCM/PLL did not lock");
> + /* restore */
> + regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 4);
> + mdelay(10);
> + regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 0);
> + goto fail;
> + }
> + regmap_read(clock->regmap, XRT_CLOCK_CLKFBOUT_REG, &val);
> + CLOCK_INFO(clock, "clockwiz CONFIG(0) 0x%x", val);
> + regmap_read(clock->regmap, XRT_CLOCK_CLKOUT0_REG, &val);
> + CLOCK_INFO(clock, "clockwiz CONFIG(2) 0x%x", val);
> +
> +fail:
> + mutex_unlock(&clock->clock_lock);
> + return err;
> +}
> +
> +static int get_freq_counter(struct clock *clock, u32 *freq)
> +{
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(clock->xdev);
> + struct xrt_device *xdev = clock->xdev;
> + struct xrt_device *counter_leaf;
> + const void *counter;
> + int err;
> +
> + WARN_ON(!mutex_is_locked(&clock->clock_lock));
> +
> + err = xrt_md_get_prop(DEV(xdev), pdata->xsp_dtb, clock->clock_ep_name,
> + NULL, XRT_MD_PROP_CLK_CNT, &counter, NULL);
> + if (err) {
> + xrt_err(xdev, "no counter specified");
> + return err;
> + }
> +
> + counter_leaf = xleaf_get_leaf_by_epname(xdev, counter);
> + if (!counter_leaf) {
> + xrt_err(xdev, "can't find counter");
> + return -ENOENT;
> + }
> +
> + err = xleaf_call(counter_leaf, XRT_CLKFREQ_READ, freq);
> + if (err)
> + xrt_err(xdev, "can't read counter");
> + xleaf_put_leaf(clock->xdev, counter_leaf);
> +
> + return err;
> +}
> +
> +static int clock_get_freq(struct clock *clock, u16 *freq, u32 *freq_cnter)
> +{
> + int err = 0;
> +
> + mutex_lock(&clock->clock_lock);
> +
> + if (err == 0 && freq)
> + err = get_freq(clock, freq);
> +
> + if (err == 0 && freq_cnter)
> + err = get_freq_counter(clock, freq_cnter);
> +
> + mutex_unlock(&clock->clock_lock);
> + return err;
> +}
> +
> +static int clock_verify_freq(struct clock *clock)
> +{
> + u32 lookup_freq, clock_freq_counter, request_in_khz, tolerance;
> + int err = 0;
> + u16 freq;
> +
> + mutex_lock(&clock->clock_lock);
> +
> + err = get_freq(clock, &freq);
> + if (err) {
> + xrt_err(clock->xdev, "get freq failed, %d", err);
> + goto end;
> + }
> +
> + err = get_freq_counter(clock, &clock_freq_counter);
> + if (err) {
> + xrt_err(clock->xdev, "get freq counter failed, %d", err);
> + goto end;
> + }
> +
> + lookup_freq = find_matching_freq(freq, frequency_table,
> + ARRAY_SIZE(frequency_table));
> + request_in_khz = lookup_freq * 1000;
> + tolerance = lookup_freq * 50;
> + if (tolerance < abs(clock_freq_counter - request_in_khz)) {
> + CLOCK_ERR(clock,
> + "set clock(%s) failed, request %ukhz, actual %dkhz",
> + clock->clock_ep_name, request_in_khz, clock_freq_counter);
> + err = -EDOM;
> + } else {
> + CLOCK_INFO(clock, "verified clock (%s)", clock->clock_ep_name);
> + }
> +
> +end:
> + mutex_unlock(&clock->clock_lock);
> + return err;
> +}
> +
> +static int clock_init(struct clock *clock)
> +{
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(clock->xdev);
> + const u16 *freq;
> + int err = 0;
> +
> + err = xrt_md_get_prop(DEV(clock->xdev), pdata->xsp_dtb,
> + clock->clock_ep_name, NULL, XRT_MD_PROP_CLK_FREQ,
> + (const void **)&freq, NULL);
> + if (err) {
> + xrt_info(clock->xdev, "no default freq");
> + return 0;
> + }
> +
> + err = set_freq(clock, be16_to_cpu(*freq));
> +
> + return err;
> +}
> +
> +static ssize_t freq_show(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> + struct clock *clock = xrt_get_drvdata(to_xrt_dev(dev));
> + ssize_t count;
> + u16 freq = 0;
> +
> + count = clock_get_freq(clock, &freq, NULL);
> + if (count < 0)
> + return count;
> +
> + count = snprintf(buf, 64, "%u\n", freq);
> +
> + return count;
> +}
> +static DEVICE_ATTR_RO(freq);
> +
> +static struct attribute *clock_attrs[] = {
> + &dev_attr_freq.attr,
> + NULL,
> +};
> +
> +static struct attribute_group clock_attr_group = {
> + .attrs = clock_attrs,
> +};
> +
> +static int
> +xrt_clock_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
> +{
> + struct clock *clock;
> + int ret = 0;
> +
> + clock = xrt_get_drvdata(xdev);
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + /* Does not handle any event. */
> + break;
> + case XRT_CLOCK_SET: {
> + u16 freq = (u16)(uintptr_t)arg;
> +
> + ret = set_freq(clock, freq);
> + break;
> + }
> + case XRT_CLOCK_VERIFY:
> + ret = clock_verify_freq(clock);
> + break;
> + case XRT_CLOCK_GET: {
> + struct xrt_clock_get *get =
> + (struct xrt_clock_get *)arg;
> +
> + ret = clock_get_freq(clock, &get->freq, &get->freq_cnter);
> + break;
> + }
> + default:
> + xrt_err(xdev, "unsupported cmd %d", cmd);
> + return -EINVAL;
> + }
> +
> + return ret;
> +}
> +
> +static void clock_remove(struct xrt_device *xdev)
> +{
> + sysfs_remove_group(&xdev->dev.kobj, &clock_attr_group);
> +}
> +
> +static int clock_probe(struct xrt_device *xdev)
> +{
> + struct clock *clock = NULL;
> + void __iomem *base = NULL;
> + struct resource *res;
> + int ret;
> +
> + clock = devm_kzalloc(&xdev->dev, sizeof(*clock), GFP_KERNEL);
> + if (!clock)
> + return -ENOMEM;
> +
> + xrt_set_drvdata(xdev, clock);
> + clock->xdev = xdev;
> + mutex_init(&clock->clock_lock);
> +
> + res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
> + if (!res) {
> + ret = -EINVAL;
> + goto failed;
> + }
> +
> + base = devm_ioremap_resource(&xdev->dev, res);
> + if (IS_ERR(base)) {
> + ret = PTR_ERR(base);
> + goto failed;
> + }
> +
> + clock->regmap = devm_regmap_init_mmio(&xdev->dev, base, &clock_regmap_config);
> + if (IS_ERR(clock->regmap)) {
> + CLOCK_ERR(clock, "regmap %pR failed", res);
> + ret = PTR_ERR(clock->regmap);
> + goto failed;
> + }
> + clock->clock_ep_name = res->name;
> +
> + ret = clock_init(clock);
> + if (ret)
> + goto failed;
> +
> + ret = sysfs_create_group(&xdev->dev.kobj, &clock_attr_group);
> + if (ret) {
> + CLOCK_ERR(clock, "create clock attrs failed: %d", ret);
> + goto failed;
> + }
> +
> + CLOCK_INFO(clock, "successfully initialized Clock subdev");
> +
> + return 0;
> +
> +failed:
> + return ret;
> +}
> +
> +static struct xrt_dev_endpoints xrt_clock_endpoints[] = {
> + {
> + .xse_names = (struct xrt_dev_ep_names[]) {
> + { .compat = "clkwiz" },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_driver xrt_clock_driver = {
> + .driver = {
> + .name = XRT_CLOCK,
> + },
> + .subdev_id = XRT_SUBDEV_CLOCK,
> + .endpoints = xrt_clock_endpoints,
> + .probe = clock_probe,
> + .remove = clock_remove,
> + .leaf_call = xrt_clock_leaf_call,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(clock);
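[Editor's note] The commit message says other parts of the driver configure the clock through this clock driver. As a minimal usage sketch (not part of the patch; the function name, instance number and error handling are illustrative assumptions), another xleaf could drive the clock subdev through the leaf calls declared in xleaf/clock.h roughly like this:

	/*
	 * Illustrative sketch only -- not from the patch series.
	 * "example_set_and_check_clock" and instance 0 are assumptions.
	 */
	#include "xleaf.h"
	#include "xleaf/clock.h"

	static int example_set_and_check_clock(struct xrt_device *xdev, u16 freq_mhz)
	{
		struct xrt_clock_get get = { 0 };
		struct xrt_device *leaf;
		int ret;

		/* Look up the clock leaf; instance 0 is assumed here. */
		leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_CLOCK, 0);
		if (!leaf)
			return -ENOENT;

		/* XRT_CLOCK_SET passes the target frequency as the argument value. */
		ret = xleaf_call(leaf, XRT_CLOCK_SET, (void *)(uintptr_t)freq_mhz);
		if (ret)
			goto done;

		/* XRT_CLOCK_GET fills in the programmed frequency and the counter. */
		ret = xleaf_call(leaf, XRT_CLOCK_GET, &get);
		if (ret)
			goto done;
		xrt_info(xdev, "clock at %u MHz, counter %u", get.freq, get.freq_cnter);

		/* XRT_CLOCK_VERIFY cross-checks them, as the UCS driver does above. */
		ret = xleaf_call(leaf, XRT_CLOCK_VERIFY, NULL);
	done:
		xleaf_put_leaf(xdev, leaf);
		return ret;
	}
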

2021-05-04 14:39:07

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 20/20] fpga: xrt: Kconfig and Makefile updates for XRT drivers


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> Update the fpga Kconfig/Makefile and add Kconfig/Makefile files for the new
> XRT drivers.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> MAINTAINERS | 11 +++++++++++
> drivers/Makefile | 1 +
> drivers/fpga/Kconfig | 2 ++
> drivers/fpga/Makefile | 5 +++++
> drivers/fpga/xrt/Kconfig | 8 ++++++++
> drivers/fpga/xrt/lib/Kconfig | 17 +++++++++++++++++
> drivers/fpga/xrt/lib/Makefile | 30 ++++++++++++++++++++++++++++++
> drivers/fpga/xrt/metadata/Kconfig | 12 ++++++++++++
> drivers/fpga/xrt/metadata/Makefile | 16 ++++++++++++++++
> drivers/fpga/xrt/mgnt/Kconfig | 15 +++++++++++++++
> drivers/fpga/xrt/mgnt/Makefile | 19 +++++++++++++++++++
> 11 files changed, 136 insertions(+)
> create mode 100644 drivers/fpga/xrt/Kconfig
> create mode 100644 drivers/fpga/xrt/lib/Kconfig
> create mode 100644 drivers/fpga/xrt/lib/Makefile
> create mode 100644 drivers/fpga/xrt/metadata/Kconfig
> create mode 100644 drivers/fpga/xrt/metadata/Makefile
> create mode 100644 drivers/fpga/xrt/mgnt/Kconfig
> create mode 100644 drivers/fpga/xrt/mgnt/Makefile
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 9450e052f1b1..89abe140041b 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -7016,6 +7016,17 @@ F: Documentation/fpga/
> F: drivers/fpga/
> F: include/linux/fpga/
>
> +FPGA XRT DRIVERS
> +M: Lizhi Hou <[email protected]>
> +R: Max Zhen <[email protected]>
> +R: Sonal Santan <[email protected]>
> +L: [email protected]
> +S: Supported
ok
> +W: https://github.com/Xilinx/XRT
> +F: Documentation/fpga/xrt.rst
> +F: drivers/fpga/xrt/
> +F: include/uapi/linux/xrt/
> +
> FPU EMULATOR
> M: Bill Metzenthen <[email protected]>
> S: Maintained
> diff --git a/drivers/Makefile b/drivers/Makefile
> index 6fba7daba591..dbb3b727fc7a 100644
> --- a/drivers/Makefile
> +++ b/drivers/Makefile
> @@ -179,6 +179,7 @@ obj-$(CONFIG_STM) += hwtracing/stm/
> obj-$(CONFIG_ANDROID) += android/
> obj-$(CONFIG_NVMEM) += nvmem/
> obj-$(CONFIG_FPGA) += fpga/
> +obj-$(CONFIG_FPGA_XRT_METADATA) += fpga/
> obj-$(CONFIG_FSI) += fsi/
> obj-$(CONFIG_TEE) += tee/
> obj-$(CONFIG_MULTIPLEXER) += mux/
> diff --git a/drivers/fpga/Kconfig b/drivers/fpga/Kconfig
> index 5ff9438b7b46..01410ff000b9 100644
> --- a/drivers/fpga/Kconfig
> +++ b/drivers/fpga/Kconfig
> @@ -227,4 +227,6 @@ config FPGA_MGR_ZYNQMP_FPGA
> to configure the programmable logic(PL) through PS
> on ZynqMP SoC.
>
> +source "drivers/fpga/xrt/Kconfig"
> +
> endif # FPGA
> diff --git a/drivers/fpga/Makefile b/drivers/fpga/Makefile
> index 18dc9885883a..a1cad7f7af09 100644
> --- a/drivers/fpga/Makefile
> +++ b/drivers/fpga/Makefile
> @@ -48,3 +48,8 @@ obj-$(CONFIG_FPGA_DFL_NIOS_INTEL_PAC_N3000) += dfl-n3000-nios.o
>
> # Drivers for FPGAs which implement DFL
> obj-$(CONFIG_FPGA_DFL_PCI) += dfl-pci.o
> +
> +# XRT drivers for Alveo
> +obj-$(CONFIG_FPGA_XRT_METADATA) += xrt/metadata/
> +obj-$(CONFIG_FPGA_XRT_LIB) += xrt/lib/
> +obj-$(CONFIG_FPGA_XRT_XMGNT) += xrt/mgnt/
> diff --git a/drivers/fpga/xrt/Kconfig b/drivers/fpga/xrt/Kconfig
> new file mode 100644
> index 000000000000..2424f89e6e03
> --- /dev/null
> +++ b/drivers/fpga/xrt/Kconfig
> @@ -0,0 +1,8 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# Xilinx Alveo FPGA device configuration
> +#
> +
> +source "drivers/fpga/xrt/metadata/Kconfig"
> +source "drivers/fpga/xrt/lib/Kconfig"
> +source "drivers/fpga/xrt/mgnt/Kconfig"
> diff --git a/drivers/fpga/xrt/lib/Kconfig b/drivers/fpga/xrt/lib/Kconfig
> new file mode 100644
> index 000000000000..935369fad570
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/Kconfig
> @@ -0,0 +1,17 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# XRT Alveo FPGA device configuration
> +#
> +
> +config FPGA_XRT_LIB
> + tristate "XRT Alveo Driver Library"
> + depends on HWMON && PCI && HAS_IOMEM
> + select FPGA_XRT_METADATA
> + select REGMAP_MMIO
> + help
> + Select this option to enable Xilinx XRT Alveo driver library. This
> + library is core infrastructure of XRT Alveo FPGA drivers which
> + provides functions for working with device nodes, iteration and
> + lookup of platform devices, common interfaces for platform devices,
> + plumbing of function call and ioctls between platform devices and
> + parent partitions.
ok
> diff --git a/drivers/fpga/xrt/lib/Makefile b/drivers/fpga/xrt/lib/Makefile
> new file mode 100644
> index 000000000000..58563416efbf
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/Makefile
> @@ -0,0 +1,30 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
> +#
> +# Authors: [email protected]
> +#
> +
> +FULL_XRT_PATH=$(srctree)/$(src)/..
> +FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
> +
> +obj-$(CONFIG_FPGA_XRT_LIB) += xrt-lib.o
> +
> +xrt-lib-objs := \
> + lib-drv.o \
> + xroot.o \
> + xclbin.o \
> + subdev.o \
> + cdev.o \
> + group.o \
> + xleaf/vsec.o \
> + xleaf/axigate.o \
> + xleaf/devctl.o \
> + xleaf/icap.o \
> + xleaf/clock.o \
> + xleaf/clkfreq.o \
> + xleaf/ucs.o \
> + xleaf/ddr_calibration.o
> +
> +ccflags-y := -I$(FULL_XRT_PATH)/include \
> + -I$(FULL_DTC_PATH)
> diff --git a/drivers/fpga/xrt/metadata/Kconfig b/drivers/fpga/xrt/metadata/Kconfig
> new file mode 100644
> index 000000000000..129adda47e94
> --- /dev/null
> +++ b/drivers/fpga/xrt/metadata/Kconfig
> @@ -0,0 +1,12 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# XRT Alveo FPGA device configuration
> +#
> +
> +config FPGA_XRT_METADATA
> + bool "XRT Alveo Driver Metadata Parser"
> + select LIBFDT
> + help
> + This option provides helper functions to parse Xilinx Alveo FPGA
> + firmware metadata. The metadata is in device tree format and the
> + XRT driver uses it to discover the HW subsystems behind PCIe BAR.
ok
> diff --git a/drivers/fpga/xrt/metadata/Makefile b/drivers/fpga/xrt/metadata/Makefile
> new file mode 100644
> index 000000000000..14f65ef1595c
> --- /dev/null
> +++ b/drivers/fpga/xrt/metadata/Makefile
> @@ -0,0 +1,16 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
> +#
> +# Authors: [email protected]
> +#
> +
> +FULL_XRT_PATH=$(srctree)/$(src)/..
> +FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
> +
> +obj-$(CONFIG_FPGA_XRT_METADATA) += xrt-md.o
> +
> +xrt-md-objs := metadata.o
> +
> +ccflags-y := -I$(FULL_XRT_PATH)/include \
> + -I$(FULL_DTC_PATH)
> diff --git a/drivers/fpga/xrt/mgnt/Kconfig b/drivers/fpga/xrt/mgnt/Kconfig
> new file mode 100644
> index 000000000000..b43242c14757
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgnt/Kconfig
> @@ -0,0 +1,15 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# Xilinx XRT FPGA device configuration
> +#
> +
> +config FPGA_XRT_XMGNT
> + tristate "Xilinx Alveo Management Driver"
> + depends on FPGA_XRT_LIB
> + select FPGA_XRT_METADATA
> + select FPGA_BRIDGE
> + select FPGA_REGION
> + help
> + Select this option to enable XRT PCIe driver for Xilinx Alveo FPGA.
> + This driver provides interfaces for userspace application to access
> + Alveo FPGA device.

ok

good enough for now.

Reviewed-by: Tom Rix <[email protected]>

> diff --git a/drivers/fpga/xrt/mgnt/Makefile b/drivers/fpga/xrt/mgnt/Makefile
> new file mode 100644
> index 000000000000..b71d2ff0aa94
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgnt/Makefile
> @@ -0,0 +1,19 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
> +#
> +# Authors: [email protected]
> +#
> +
> +FULL_XRT_PATH=$(srctree)/$(src)/..
> +FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
> +
> +obj-$(CONFIG_FPGA_XRT_XMGNT) += xrt-mgnt.o
> +
> +xrt-mgnt-objs := root.o \
> + xmgnt-main.o \
> + xrt-mgr.o \
> + xmgnt-main-region.o
> +
> +ccflags-y := -I$(FULL_XRT_PATH)/include \
> + -I$(FULL_DTC_PATH)
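[Editor's note] The FPGA_XRT_LIB help text above describes xrt-lib as the core infrastructure that the individual leaf drivers build on. As a rough illustration only (the "foo" names, endpoint name and subdev id are placeholders, not symbols from this series), a new xleaf registered through that infrastructure follows the same shape as the UCS, clock and DDR calibration leaves:

	/* Illustrative skeleton only -- "foo" names are placeholders. */
	#include "xleaf.h"

	#define XRT_FOO "xrt_foo"

	static int foo_probe(struct xrt_device *xdev)
	{
		/* map resources, allocate state, xrt_set_drvdata(), etc. */
		return 0;
	}

	static int foo_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
	{
		switch (cmd) {
		case XRT_XLEAF_EVENT:
			/* react to events broadcast by the infrastructure */
			return 0;
		default:
			xrt_err(xdev, "unsupported cmd %d", cmd);
			return -EINVAL;
		}
	}

	static struct xrt_dev_endpoints xrt_foo_endpoints[] = {
		{
			.xse_names = (struct xrt_dev_ep_names[]) {
				{ .ep_name = "ep_foo_00" },	/* placeholder endpoint name */
				{ NULL },
			},
			.xse_min_ep = 1,
		},
		{ 0 },
	};

	static struct xrt_driver xrt_foo_driver = {
		.driver = {
			.name = XRT_FOO,
		},
		.subdev_id = XRT_SUBDEV_FOO,	/* placeholder subdev id */
		.endpoints = xrt_foo_endpoints,
		.probe = foo_probe,
		.leaf_call = foo_leaf_call,
	};

	XRT_LEAF_INIT_FINI_FUNC(foo);
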

2021-05-04 14:40:31

by Tom Rix

[permalink] [raw]
Subject: Re: [PATCH V5 XRT Alveo 18/20] fpga: xrt: DDR calibration driver


On 4/27/21 1:54 PM, Lizhi Hou wrote:
> Add DDR calibration driver. DDR calibration is a hardware function
> discovered by walking firmware metadata. An xrt device node will
> be created for it. The hardware reports DDR calibration status through
> this function.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>

v4 was ok, please add my Reviewed-by line

Reviewed-by: Tom Rix <[email protected]>

> ---
> .../fpga/xrt/include/xleaf/ddr_calibration.h | 28 +++
> drivers/fpga/xrt/lib/xleaf/ddr_calibration.c | 210 ++++++++++++++++++
> 2 files changed, 238 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xleaf/ddr_calibration.h
> create mode 100644 drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
>
> diff --git a/drivers/fpga/xrt/include/xleaf/ddr_calibration.h b/drivers/fpga/xrt/include/xleaf/ddr_calibration.h
> new file mode 100644
> index 000000000000..878740c26ca2
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf/ddr_calibration.h
> @@ -0,0 +1,28 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XRT_DDR_CALIBRATION_H_
> +#define _XRT_DDR_CALIBRATION_H_
> +
> +#include "xleaf.h"
> +#include <linux/xrt/xclbin.h>
> +
> +/*
> + * Memory calibration driver leaf calls.
> + */
> +enum xrt_calib_results {
> + XRT_CALIB_UNKNOWN = 0,
> + XRT_CALIB_SUCCEEDED,
> + XRT_CALIB_FAILED,
> +};
> +
> +enum xrt_calib_leaf_cmd {
> + XRT_CALIB_RESULT = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> +};
> +
> +#endif /* _XRT_DDR_CALIBRATION_H_ */
> diff --git a/drivers/fpga/xrt/lib/xleaf/ddr_calibration.c b/drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
> new file mode 100644
> index 000000000000..36a0937c9195
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
> @@ -0,0 +1,210 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA memory calibration driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * memory calibration
> + *
> + * Authors:
> + * Lizhi Hou<[email protected]>
> + */
> +#include <linux/delay.h>
> +#include <linux/regmap.h>
> +#include "xclbin-helper.h"
> +#include "metadata.h"
> +#include "xleaf/ddr_calibration.h"
> +
> +#define XRT_CALIB "xrt_calib"
> +
> +#define XRT_CALIB_STATUS_REG 0
> +#define XRT_CALIB_READ_RETRIES 20
> +#define XRT_CALIB_READ_INTERVAL 500 /* ms */
> +
> +XRT_DEFINE_REGMAP_CONFIG(calib_regmap_config);
> +
> +struct calib_cache {
> + struct list_head link;
> + const char *ep_name;
> + char *data;
> + u32 data_size;
> +};
> +
> +struct calib {
> + struct xrt_device *xdev;
> + struct regmap *regmap;
> + struct mutex lock; /* calibration dev lock */
> + struct list_head cache_list;
> + u32 cache_num;
> + enum xrt_calib_results result;
> +};
> +
> +static void __calib_cache_clean_nolock(struct calib *calib)
> +{
> + struct calib_cache *cache, *temp;
> +
> + list_for_each_entry_safe(cache, temp, &calib->cache_list, link) {
> + vfree(cache->data);
> + list_del(&cache->link);
> + vfree(cache);
> + }
> + calib->cache_num = 0;
> +}
> +
> +static void calib_cache_clean(struct calib *calib)
> +{
> + mutex_lock(&calib->lock);
> + __calib_cache_clean_nolock(calib);
> + mutex_unlock(&calib->lock);
> +}
> +
> +static int calib_calibration(struct calib *calib)
> +{
> + u32 times = XRT_CALIB_READ_RETRIES;
> + u32 status;
> + int ret;
> +
> + while (times != 0) {
> + ret = regmap_read(calib->regmap, XRT_CALIB_STATUS_REG, &status);
> + if (ret) {
> + xrt_err(calib->xdev, "failed to read status reg %d", ret);
> + return ret;
> + }
> +
> + if (status & BIT(0))
> + break;
> + msleep(XRT_CALIB_READ_INTERVAL);
> + times--;
> + }
> +
> + if (!times) {
> + xrt_err(calib->xdev,
> + "MIG calibration timeout after bitstream download");
> + return -ETIMEDOUT;
> + }
> +
> + xrt_info(calib->xdev, "took %dms", (XRT_CALIB_READ_RETRIES - times) *
> + XRT_CALIB_READ_INTERVAL);
> + return 0;
> +}
> +
> +static void xrt_calib_event_cb(struct xrt_device *xdev, void *arg)
> +{
> + struct calib *calib = xrt_get_drvdata(xdev);
> + struct xrt_event *evt = (struct xrt_event *)arg;
> + enum xrt_events e = evt->xe_evt;
> + enum xrt_subdev_id id;
> + int ret;
> +
> + id = evt->xe_subdev.xevt_subdev_id;
> +
> + switch (e) {
> + case XRT_EVENT_POST_CREATION:
> + if (id == XRT_SUBDEV_UCS) {
> + ret = calib_calibration(calib);
> + if (ret)
> + calib->result = XRT_CALIB_FAILED;
> + else
> + calib->result = XRT_CALIB_SUCCEEDED;
> + }
> + break;
> + default:
> + xrt_dbg(xdev, "ignored event %d", e);
> + break;
> + }
> +}
> +
> +static void xrt_calib_remove(struct xrt_device *xdev)
> +{
> + struct calib *calib = xrt_get_drvdata(xdev);
> +
> + calib_cache_clean(calib);
> +}
> +
> +static int xrt_calib_probe(struct xrt_device *xdev)
> +{
> + void __iomem *base = NULL;
> + struct resource *res;
> + struct calib *calib;
> + int err = 0;
> +
> + calib = devm_kzalloc(&xdev->dev, sizeof(*calib), GFP_KERNEL);
> + if (!calib)
> + return -ENOMEM;
> +
> + calib->xdev = xdev;
> + xrt_set_drvdata(xdev, calib);
> +
> + res = xrt_get_resource(xdev, IORESOURCE_MEM, 0);
> + if (!res) {
> + err = -EINVAL;
> + goto failed;
> + }
> +
> + base = devm_ioremap_resource(&xdev->dev, res);
> + if (IS_ERR(base)) {
> + err = PTR_ERR(base);
> + goto failed;
> + }
> +
> + calib->regmap = devm_regmap_init_mmio(&xdev->dev, base, &calib_regmap_config);
> + if (IS_ERR(calib->regmap)) {
> + xrt_err(xdev, "Map iomem failed");
> + err = PTR_ERR(calib->regmap);
> + goto failed;
> + }
> +
> + mutex_init(&calib->lock);
> + INIT_LIST_HEAD(&calib->cache_list);
> +
> + return 0;
> +
> +failed:
> + return err;
> +}
> +
> +static int
> +xrt_calib_leaf_call(struct xrt_device *xdev, u32 cmd, void *arg)
> +{
> + struct calib *calib = xrt_get_drvdata(xdev);
> + int ret = 0;
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + xrt_calib_event_cb(xdev, arg);
> + break;
> + case XRT_CALIB_RESULT: {
> + enum xrt_calib_results *r = (enum xrt_calib_results *)arg;
> + *r = calib->result;
> + break;
> + }
> + default:
> + xrt_err(xdev, "unsupported cmd %d", cmd);
> + ret = -EINVAL;
> + }
> + return ret;
> +}
> +
> +static struct xrt_dev_endpoints xrt_calib_endpoints[] = {
> + {
> + .xse_names = (struct xrt_dev_ep_names[]) {
> + { .ep_name = XRT_MD_NODE_DDR_CALIB },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_driver xrt_calib_driver = {
> + .driver = {
> + .name = XRT_CALIB,
> + },
> + .subdev_id = XRT_SUBDEV_CALIB,
> + .endpoints = xrt_calib_endpoints,
> + .probe = xrt_calib_probe,
> + .remove = xrt_calib_remove,
> + .leaf_call = xrt_calib_leaf_call,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(calib);
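
[Editor's note] The commit message says the hardware reports DDR calibration status through this function. As a minimal sketch (not part of the patch; the function name and instance number are assumptions, and it presumes the generic leaf lookup helper applies to the calibration subdev), another xleaf could query the result via XRT_CALIB_RESULT like this:

	/* Illustrative sketch only -- not from the patch series. */
	#include "xleaf.h"
	#include "xleaf/ddr_calibration.h"

	static int example_check_ddr_calibration(struct xrt_device *xdev)
	{
		enum xrt_calib_results result = XRT_CALIB_UNKNOWN;
		struct xrt_device *leaf;
		int ret;

		/* Instance 0 is assumed for illustration. */
		leaf = xleaf_get_leaf_by_id(xdev, XRT_SUBDEV_CALIB, 0);
		if (!leaf)
			return -ENOENT;

		ret = xleaf_call(leaf, XRT_CALIB_RESULT, &result);
		xleaf_put_leaf(xdev, leaf);
		if (ret)
			return ret;

		return result == XRT_CALIB_SUCCEEDED ? 0 : -EIO;
	}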