Hello,
This is V4 of the patch series which adds a management physical function driver
for Xilinx Alveo PCIe accelerator cards.
https://www.xilinx.com/products/boards-and-kits/alveo.html
This driver is part of Xilinx Runtime (XRT) open source stack.
The V4 patch series does not include the bus_type change suggested earlier;
that change will come with the v5 patch series.
XILINX ALVEO PLATFORM ARCHITECTURE
Alveo PCIe FPGA based platforms have a static *shell* partition and a
partially re-configurable *user* partition. The shell partition is
automatically loaded from flash when the host is booted and PCIe is enumerated
by the BIOS. The shell cannot be changed until the next cold reboot. The shell
exposes two PCIe physical functions:
1. management physical function
2. user physical function
The patch series includes Documentation/fpga/xrt.rst which describes the Alveo
platform, the XRT driver architecture and the deployment model in more detail.
Users compile their high level design in C/C++/OpenCL or RTL into an FPGA
image using the Vitis tools.
https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html
The compiled image is packaged as an xclbin, which contains the partial
bitstream for the user partition and the necessary metadata. Users can
dynamically swap the image running on the user partition to switch between
different workloads by loading different xclbins.
XRT DRIVERS FOR XILINX ALVEO
The XRT Linux kernel driver *xmgmt* binds to the management physical function of
the Alveo platform. The modular driver framework is organized into several
platform drivers which primarily handle the following functionality:
1. Loading the firmware container, also called xsabin, at driver attach time
2. Loading of user compiled xclbin with FPGA Manager integration
3. Clock scaling of image running on user partition
4. In-band sensors: temp, voltage, power, etc.
5. Device reset and rescan
The platform drivers are packaged into the *xrt-lib* helper module with well
defined interfaces. The module provides a pseudo-bus implementation for the
platform drivers. More details on the driver model can be found in
Documentation/fpga/xrt.rst.
User physical function driver is not included in this patch series.
LIBFDT REQUIREMENT
The XRT driver infrastructure uses Device Tree as the metadata format to discover
HW subsystems on the Alveo PCIe device. The Device Tree schema used by XRT
is documented in Documentation/fpga/xrt.rst.
TESTING AND VALIDATION
The xmgmt driver can be tested with the full XRT open source stack, which
includes user space libraries, board utilities and the (out of tree) first
generation user physical function driver xocl. The XRT open source runtime
stack is available at https://github.com/Xilinx/XRT
Complete documentation for XRT open source stack including sections on
Alveo/XRT security and platform architecture can be found here:
https://xilinx.github.io/XRT/master/html/index.html
https://xilinx.github.io/XRT/master/html/security.html
https://xilinx.github.io/XRT/master/html/platforms_partitions.html
Changes since v3:
- Leaf drivers use regmap-mmio to access hardware registers.
- Renamed driver module: xmgmt.ko -> xrt-mgmt.ko
- Renamed files: calib.[c|h] -> ddr_calibration.[c|h],
lib/main.[c|h] -> lib/lib-drv.[c|h],
                 mgmt/main-impl.h -> mgmt/xmgnt.h
- Updated code base to include v3 code review comments.
Changes since v2:
- Streamlined the driver framework into *xleaf*, *group* and *xroot*
- Updated documentation to show the driver model with examples
- Addressed kernel test robot errors
- Added a selftest for basic driver framework
- Documented device tree schema
- Removed need to export libfdt symbols
Changes since v1:
- Updated the driver to use fpga_region and fpga_bridge for FPGA
programming
- Dropped platform drivers not related to PR programming to focus on XRT
core framework
- Updated Documentation/fpga/xrt.rst with information on XRT core framework
- Addressed checkpatch issues
- Dropped xrt- prefix from some header files
For reference, the V3 version of the patch series can be found here:
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
https://lore.kernel.org/lkml/[email protected]
Lizhi Hou (20):
Documentation: fpga: Add a document describing XRT Alveo drivers
fpga: xrt: driver metadata helper functions
fpga: xrt: xclbin file helper functions
fpga: xrt: xrt-lib platform driver manager
fpga: xrt: group platform driver
fpga: xrt: char dev node helper functions
fpga: xrt: root driver infrastructure
fpga: xrt: platform driver infrastructure
fpga: xrt: management physical function driver (root)
fpga: xrt: main platform driver for management function device
fpga: xrt: fpga-mgr and region implementation for xclbin download
fpga: xrt: VSEC platform driver
fpga: xrt: User Clock Subsystem platform driver
fpga: xrt: ICAP platform driver
fpga: xrt: devctl platform driver
fpga: xrt: clock platform driver
fpga: xrt: clock frequency counter platform driver
fpga: xrt: DDR calibration platform driver
fpga: xrt: partition isolation platform driver
fpga: xrt: Kconfig and Makefile updates for XRT drivers
Documentation/fpga/index.rst | 1 +
Documentation/fpga/xrt.rst | 844 +++++++++++++++++
MAINTAINERS | 11 +
drivers/Makefile | 1 +
drivers/fpga/Kconfig | 2 +
drivers/fpga/Makefile | 5 +
drivers/fpga/xrt/Kconfig | 8 +
drivers/fpga/xrt/include/events.h | 45 +
drivers/fpga/xrt/include/group.h | 25 +
drivers/fpga/xrt/include/metadata.h | 233 +++++
drivers/fpga/xrt/include/subdev_id.h | 38 +
drivers/fpga/xrt/include/xclbin-helper.h | 48 +
drivers/fpga/xrt/include/xleaf.h | 264 ++++++
drivers/fpga/xrt/include/xleaf/axigate.h | 23 +
drivers/fpga/xrt/include/xleaf/clkfreq.h | 21 +
drivers/fpga/xrt/include/xleaf/clock.h | 29 +
.../fpga/xrt/include/xleaf/ddr_calibration.h | 28 +
drivers/fpga/xrt/include/xleaf/devctl.h | 40 +
drivers/fpga/xrt/include/xleaf/icap.h | 27 +
drivers/fpga/xrt/include/xmgmt-main.h | 34 +
drivers/fpga/xrt/include/xroot.h | 117 +++
drivers/fpga/xrt/lib/Kconfig | 17 +
drivers/fpga/xrt/lib/Makefile | 30 +
drivers/fpga/xrt/lib/cdev.c | 232 +++++
drivers/fpga/xrt/lib/group.c | 286 ++++++
drivers/fpga/xrt/lib/lib-drv.c | 277 ++++++
drivers/fpga/xrt/lib/lib-drv.h | 17 +
drivers/fpga/xrt/lib/subdev.c | 865 ++++++++++++++++++
drivers/fpga/xrt/lib/subdev_pool.h | 53 ++
drivers/fpga/xrt/lib/xclbin.c | 369 ++++++++
drivers/fpga/xrt/lib/xleaf/axigate.c | 342 +++++++
drivers/fpga/xrt/lib/xleaf/clkfreq.c | 240 +++++
drivers/fpga/xrt/lib/xleaf/clock.c | 669 ++++++++++++++
drivers/fpga/xrt/lib/xleaf/ddr_calibration.c | 226 +++++
drivers/fpga/xrt/lib/xleaf/devctl.c | 183 ++++
drivers/fpga/xrt/lib/xleaf/icap.c | 344 +++++++
drivers/fpga/xrt/lib/xleaf/ucs.c | 167 ++++
drivers/fpga/xrt/lib/xleaf/vsec.c | 388 ++++++++
drivers/fpga/xrt/lib/xroot.c | 589 ++++++++++++
drivers/fpga/xrt/metadata/Kconfig | 12 +
drivers/fpga/xrt/metadata/Makefile | 16 +
drivers/fpga/xrt/metadata/metadata.c | 545 +++++++++++
drivers/fpga/xrt/mgmt/Kconfig | 15 +
drivers/fpga/xrt/mgmt/Makefile | 19 +
drivers/fpga/xrt/mgmt/fmgr-drv.c | 191 ++++
drivers/fpga/xrt/mgmt/fmgr.h | 19 +
drivers/fpga/xrt/mgmt/main-region.c | 483 ++++++++++
drivers/fpga/xrt/mgmt/main.c | 670 ++++++++++++++
drivers/fpga/xrt/mgmt/root.c | 333 +++++++
drivers/fpga/xrt/mgmt/xmgnt.h | 34 +
include/uapi/linux/xrt/xclbin.h | 409 +++++++++
include/uapi/linux/xrt/xmgmt-ioctl.h | 46 +
52 files changed, 9930 insertions(+)
create mode 100644 Documentation/fpga/xrt.rst
create mode 100644 drivers/fpga/xrt/Kconfig
create mode 100644 drivers/fpga/xrt/include/events.h
create mode 100644 drivers/fpga/xrt/include/group.h
create mode 100644 drivers/fpga/xrt/include/metadata.h
create mode 100644 drivers/fpga/xrt/include/subdev_id.h
create mode 100644 drivers/fpga/xrt/include/xclbin-helper.h
create mode 100644 drivers/fpga/xrt/include/xleaf.h
create mode 100644 drivers/fpga/xrt/include/xleaf/axigate.h
create mode 100644 drivers/fpga/xrt/include/xleaf/clkfreq.h
create mode 100644 drivers/fpga/xrt/include/xleaf/clock.h
create mode 100644 drivers/fpga/xrt/include/xleaf/ddr_calibration.h
create mode 100644 drivers/fpga/xrt/include/xleaf/devctl.h
create mode 100644 drivers/fpga/xrt/include/xleaf/icap.h
create mode 100644 drivers/fpga/xrt/include/xmgmt-main.h
create mode 100644 drivers/fpga/xrt/include/xroot.h
create mode 100644 drivers/fpga/xrt/lib/Kconfig
create mode 100644 drivers/fpga/xrt/lib/Makefile
create mode 100644 drivers/fpga/xrt/lib/cdev.c
create mode 100644 drivers/fpga/xrt/lib/group.c
create mode 100644 drivers/fpga/xrt/lib/lib-drv.c
create mode 100644 drivers/fpga/xrt/lib/lib-drv.h
create mode 100644 drivers/fpga/xrt/lib/subdev.c
create mode 100644 drivers/fpga/xrt/lib/subdev_pool.h
create mode 100644 drivers/fpga/xrt/lib/xclbin.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/axigate.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/clkfreq.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/clock.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/devctl.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/icap.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/ucs.c
create mode 100644 drivers/fpga/xrt/lib/xleaf/vsec.c
create mode 100644 drivers/fpga/xrt/lib/xroot.c
create mode 100644 drivers/fpga/xrt/metadata/Kconfig
create mode 100644 drivers/fpga/xrt/metadata/Makefile
create mode 100644 drivers/fpga/xrt/metadata/metadata.c
create mode 100644 drivers/fpga/xrt/mgmt/Kconfig
create mode 100644 drivers/fpga/xrt/mgmt/Makefile
create mode 100644 drivers/fpga/xrt/mgmt/fmgr-drv.c
create mode 100644 drivers/fpga/xrt/mgmt/fmgr.h
create mode 100644 drivers/fpga/xrt/mgmt/main-region.c
create mode 100644 drivers/fpga/xrt/mgmt/main.c
create mode 100644 drivers/fpga/xrt/mgmt/root.c
create mode 100644 drivers/fpga/xrt/mgmt/xmgnt.h
create mode 100644 include/uapi/linux/xrt/xclbin.h
create mode 100644 include/uapi/linux/xrt/xmgmt-ioctl.h
--
2.27.0
Describe the XRT driver architecture and provide a basic overview of
the Xilinx Alveo platform.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
Documentation/fpga/index.rst | 1 +
Documentation/fpga/xrt.rst | 844 +++++++++++++++++++++++++++++++++++
2 files changed, 845 insertions(+)
create mode 100644 Documentation/fpga/xrt.rst
diff --git a/Documentation/fpga/index.rst b/Documentation/fpga/index.rst
index f80f95667ca2..30134357b70d 100644
--- a/Documentation/fpga/index.rst
+++ b/Documentation/fpga/index.rst
@@ -8,6 +8,7 @@ fpga
:maxdepth: 1
dfl
+ xrt
.. only:: subproject and html
diff --git a/Documentation/fpga/xrt.rst b/Documentation/fpga/xrt.rst
new file mode 100644
index 000000000000..0f7977464270
--- /dev/null
+++ b/Documentation/fpga/xrt.rst
@@ -0,0 +1,844 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==================================
+XRTV2 Linux Kernel Driver Overview
+==================================
+
+Authors:
+
+* Sonal Santan <[email protected]>
+* Max Zhen <[email protected]>
+* Lizhi Hou <[email protected]>
+
+XRTV2 drivers are second generation `XRT <https://github.com/Xilinx/XRT>`_
+drivers which support `Alveo <https://www.xilinx.com/products/boards-and-kits/alveo.html>`_
+PCIe platforms from Xilinx.
+
+XRTV2 drivers support *subsystem* style data driven platforms where the driver's
+configuration and behavior are determined by metadata provided by the platform
+(in *device tree* format). The primary management physical function (MPF) driver
+is called **xmgmt**. The primary user physical function (UPF) driver is called
+**xuser** and is under development. The XRT driver framework and HW subsystem
+drivers are packaged into a library module called **xrt-lib**, which is
+shared by **xmgmt** and **xuser** (under development). The XRT driver framework
+implements a pseudo-bus which is used to discover HW subsystems and facilitate
+inter HW subsystem interaction.
+
+Driver Modules
+==============
+
+xrt-lib.ko
+----------
+
+Repository of all subsystem drivers and pure software modules that can potentially
+be shared between xmgmt and xuser. All these drivers are structured as Linux
+*platform drivers* and are instantiated by xmgmt (or xuser, under development)
+based on metadata associated with the hardware. The metadata is in the form of a
+device tree, as mentioned before. Each platform driver statically defines a
+subsystem node array using node names or strings in its ``compatible`` property.
+This array is eventually translated to IOMEM resources of the platform device.
+
+The xrt-lib core infrastructure provides hooks to platform drivers for device node
+management, user file operations and ioctl callbacks. The core infrastructure also
+provides pseudo-bus functionality for platform driver registration, discovery and
+inter platform driver ioctl calls.
+
+.. note::
+ See code in ``include/xleaf.h``
+
+
+xmgmt.ko
+--------
+
+The xmgmt driver is a PCIe device driver for the MPF found on Xilinx's Alveo
+PCIe devices. It consists of one *root* driver, one or more *group* drivers
+and one or more *xleaf* drivers. The root and MPF specific xleaf drivers are
+in xmgmt.ko. The group driver and other xleaf drivers are in xrt-lib.ko.
+
+The instantiation of a specific group or xleaf driver is completely data
+driven, based on metadata (mostly in device tree format) found through the VSEC
+capability and inside firmware files, such as the platform xsabin or user xclbin
+file. The root driver manages the life cycle of multiple group drivers, which,
+in turn, manage multiple xleaf drivers. This allows a single set of drivers to
+support all kinds of subsystems exposed by different shells. The differences
+among these subsystems are handled in xleaf drivers, with the root and group
+drivers being part of the infrastructure and providing common services for all
+leaves found on all platforms.
+
+The driver object model looks like the following::
+
+ +-----------+
+ | xroot |
+ +-----+-----+
+ |
+ +-----------+-----------+
+ | |
+ v v
+ +-----------+ +-----------+
+ | group | ... | group |
+ +-----+-----+ +------+----+
+ | |
+ | |
+ +-----+----+ +-----+----+
+ | | | |
+ v v v v
+ +-------+ +-------+ +-------+ +-------+
+ | xleaf |..| xleaf | | xleaf |..| xleaf |
+ +-------+ +-------+ +-------+ +-------+
+
+As an example, for a Xilinx Alveo U50 before user xclbin download, the tree
+looks like the following::
+
+ +-----------+
+ | xmgmt |
+ +-----+-----+
+ |
+ +-------------------------+--------------------+
+ | | |
+ v v v
+ +--------+ +--------+ +--------+
+ | group0 | | group1 | | group2 |
+ +----+---+ +----+---+ +---+----+
+ | | |
+ | | |
+ +-----+-----+ +----+-----+---+ +-----+-----+----+--------+
+ | | | | | | | | |
+ v v | v v | v v |
+ +------------+ +------+ | +------+ +------+ | +------+ +-----------+ |
+ | xmgmt_main | | VSEC | | | GPIO | | QSPI | | | CMC | | AXI-GATE0 | |
+ +------------+ +------+ | +------+ +------+ | +------+ +-----------+ |
+ | +---------+ | +------+ +-----------+ |
+ +>| MAILBOX | +->| ICAP | | AXI-GATE1 |<+
+ +---------+ | +------+ +-----------+
+ | +-------+
+ +->| CALIB |
+ +-------+
+
+After an xclbin is downloaded, group3 is added and the tree looks like the
+following::
+
+ +-----------+
+ | xmgmt |
+ +-----+-----+
+ |
+ +-------------------------+--------------------+-----------------+
+ | | | |
+ v v v |
+ +--------+ +--------+ +--------+ |
+ | group0 | | group1 | | group2 | |
+ +----+---+ +----+---+ +---+----+ |
+ | | | |
+ | | | |
+ +-----+-----+ +-----+-----+---+ +-----+-----+----+--------+ |
+ | | | | | | | | | |
+ v v | v v | v v | |
+ +------------+ +------+ | +------+ +------+ | +------+ +-----------+ | |
+ | xmgmt_main | | VSEC | | | GPIO | | QSPI | | | CMC | | AXI-GATE0 | | |
+ +------------+ +------+ | +------+ +------+ | +------+ +-----------+ | |
+ | +---------+ | +------+ +-----------+ | |
+ +>| MAILBOX | +->| ICAP | | AXI-GATE1 |<+ |
+ +---------+ | +------+ +-----------+ |
+ | +-------+ |
+ +->| CALIB | |
+ +-------+ |
+ +---+----+ |
+ | group3 |<--------------------------------------------+
+ +--------+
+ |
+ |
+ +-------+--------+---+--+--------+------+-------+
+ | | | | | | |
+ v | v | v | v
+ +--------+ | +--------+ | +--------+ | +-----+
+ | CLOCK0 | | | CLOCK1 | | | CLOCK2 | | | UCS |
+ +--------+ v +--------+ v +--------+ v +-----+
+ +-------------+ +-------------+ +-------------+
+ | CLOCK-FREQ0 | | CLOCK-FREQ1 | | CLOCK-FREQ2 |
+ +-------------+ +-------------+ +-------------+
+
+
+xmgmt-root
+^^^^^^^^^^
+
+The xmgmt-root driver is a PCIe device driver attached to the MPF. It is part
+of the infrastructure of the MPF driver and resides in xmgmt.ko. This driver
+
+* manages one or more group drivers
+* provides access to functionality that requires a pci_dev, such as PCIe config
+  space access, to other xleaf drivers through root calls
+* facilitates event callbacks for other xleaf drivers
+* facilitates inter-leaf driver calls for other xleaf drivers
+
+When the root driver starts, it explicitly creates an initial group instance,
+which contains xleaf drivers that trigger the creation of other group
+instances. The root driver waits for all groups and leaves to be created
+before returning from its probe routine and claiming success for the
+initialization of the entire xmgmt driver. If any leaf fails to initialize,
+the xmgmt driver still comes online, but with limited functionality.
+
+.. note::
+ See code in ``lib/xroot.c`` and ``mgmt/root.c``
+
+
+group
+^^^^^
+
+The group driver represents a pseudo device whose life cycle is managed by
+the root and which does not have real IO mem or IRQ resources. It is part of
+the infrastructure of the MPF driver and resides in xrt-lib.ko. This driver
+
+* manages one or more xleaf drivers
+* provides access to root from leaves, so that root calls, event notifications
+ and inter-leaf calls can happen
+
+In xmgmt, an initial group driver instance is created by the root. This
+instance contains leaves that trigger further group instances to be created to
+manage the groups of leaves found on different partitions of the hardware, such
+as VSEC, Shell, and User.
+
+Every *fpga_region* has a group object associated with it. The group is
+created when an xclbin image is loaded on the fpga_region. The existing group
+is destroyed when a new xclbin image is loaded. The fpga_region persists
+across xclbin downloads.
+
+.. note::
+ See code in ``lib/group.c``
+
+
+xleaf
+^^^^^
+
+The xleaf driver is a platform device driver whose life cycle is managed by
+a group driver; it may or may not have real IO mem or IRQ resources. Leaves
+are the real meat of xmgmt and contain platform-specific code for the Shell
+and User partitions found on an MPF.
+
+An xleaf driver may not have real hardware resources when it merely acts as a
+driver that manages certain in-memory states for xmgmt. These in-memory states
+could be shared by multiple other leaves.
+
+Leaf drivers assigned to specific hardware resources drive specific subsystems
+in the device. To manipulate a subsystem or carry out a task, an xleaf driver
+may request help from the root via root calls and/or from other leaves via
+inter-leaf calls.
+
+An xleaf can also broadcast events through infrastructure code for other leaves
+to process. It can also receive event notifications from the infrastructure
+about certain events, such as post-creation or pre-exit of a particular xleaf.
+
+.. note::
+ See code in ``lib/xleaf/*.c``
+
+
+FPGA Manager Interaction
+========================
+
+fpga_manager
+------------
+
+An instance of fpga_manager is created by xmgmt_main and is used for xclbin
+image download. fpga_manager requires the full xclbin image before it can
+start programming the FPGA configuration engine via the Internal Configuration
+Access Port (ICAP) platform driver.
+
+fpga_region
+-----------
+
+For every interface exposed by the currently loaded xclbin/xsabin in the
+*parent* fpga_region, a new instance of fpga_region is created as a *child*
+fpga_region. The device tree of the *parent* fpga_region defines the
+resources for a new instance of fpga_bridge, which isolates the parent from the
+child fpga_region. This new instance of fpga_bridge is used when an
+xclbin image is loaded on the child fpga_region. After the xclbin image is
+downloaded to the fpga_region, an instance of group is created for the
+fpga_region using the device tree obtained as part of the xclbin. If this
+device tree defines any child interfaces, it can trigger the creation of
+fpga_bridge and fpga_region instances for the next region in the chain.
+
+fpga_bridge
+-----------
+
+Like the fpga_region, a matching fpga_bridge is also created by walking the
+device tree of the parent group.
+
+Driver Interfaces
+=================
+
+xmgmt Driver Ioctls
+-------------------
+
+Ioctls exposed by xmgmt driver to user space are enumerated in the following
+table:
+
+== ===================== ============================ ==========================
+# Functionality ioctl request code data format
+== ===================== ============================ ==========================
+1 FPGA image download XMGMT_IOCICAPDOWNLOAD_AXLF xmgmt_ioc_bitstream_axlf
+== ===================== ============================ ==========================
+
+A user xclbin can be downloaded by using the xbmgmt tool from the XRT open source
+suite. See example usage below::
+
+ xbmgmt partition --program --path /lib/firmware/xilinx/862c7020a250293e32036f19956669e5/test/verify.xclbin --force
+
+xmgmt Driver Sysfs
+------------------
+
+The xmgmt driver exposes a rich set of sysfs interfaces. Subsystem platform
+drivers export a sysfs node for every platform instance.
+
+Every partition also exports its UUIDs. See below for examples::
+
+ /sys/bus/pci/devices/0000:06:00.0/xmgmt_main.0/interface_uuids
+ /sys/bus/pci/devices/0000:06:00.0/xmgmt_main.0/logic_uuids
+
+
+hwmon
+-----
+
+The xmgmt driver exposes the standard hwmon interface to report voltage,
+current, temperature, power, etc. These can easily be viewed using the
+*sensors* command line utility.
+
+Alveo Platform Overview
+=======================
+
+Alveo platforms are architected as two physical FPGA partitions: *Shell* and
+*User*. The Shell provides basic infrastructure for the Alveo platform like
+PCIe connectivity, board management, Dynamic Function Exchange (DFX), sensors,
+clocking, reset, and security. The User partition contains the user compiled
+FPGA binary, which is loaded by a process called DFX, also known as partial
+reconfiguration.
+
+For DFX to work properly, the physical partitions require strict HW
+compatibility with each other. Every physical partition has two interface
+UUIDs: a *parent* UUID and a *child* UUID. For simple single stage platforms,
+Shell → User forms a parent-child relationship.
+
+.. note::
+   Partition compatibility matching is a key design component of Alveo platforms
+   and XRT. Partitions have a child-parent relationship. A loaded partition
+   exposes a child partition UUID to advertise its compatibility requirement.
+   When loading a child partition, the xmgmt management driver matches the
+   parent UUID of the child partition against the child UUID exported by the
+   parent. Parent and child partition UUIDs are stored in the *xclbin* (for
+   user) or *xsabin* (for shell). Except for the root UUID exported by VSEC,
+   the hardware itself does not know about UUIDs. The image format has a
+   special node called Partition UUIDs which defines the compatibility UUIDs.
+   See :ref:`partition_uuids`.
+
+
+The physical partitions and their loading are illustrated below::
+
+ SHELL USER
+ +-----------+ +-------------------+
+ | | | |
+ | VSEC UUID | CHILD PARENT | LOGIC UUID |
+ | o------->|<--------o |
+ | | UUID UUID | |
+ +-----+-----+ +--------+----------+
+ | |
+ . .
+ | |
+ +---+---+ +------+--------+
+ | POR | | USER COMPILED |
+ | FLASH | | XCLBIN |
+ +-------+ +---------------+
+
+
+Loading Sequence
+----------------
+
+The Shell partition is loaded from flash at system boot time. It establishes the
+PCIe link and exposes two physical functions to the BIOS. After the OS boots, the
+xmgmt driver attaches to PCIe physical function 0 exposed by the Shell and then
+looks for VSEC in the PCIe extended configuration space. Using VSEC, it
+determines the logic UUID of the Shell and uses the UUID to load the matching
+*xsabin* file from the Linux firmware directory. The xsabin file contains
+metadata to discover the peripherals that are part of the Shell and firmware(s)
+for any embedded soft processors in the Shell. The xsabin file also contains
+Partition UUIDs as described in :ref:`partition_uuids`.
+
+The Shell exports a child interface UUID which is used for the compatibility
+check when loading a user compiled xclbin over the User partition as part of
+DFX. When a user requests loading of a specific xclbin, the xmgmt management
+driver reads the parent interface UUID specified in the xclbin and matches it
+with the child interface UUID exported by the Shell to determine if the xclbin
+is compatible with the Shell. If the match fails, loading of the xclbin is
+denied.
+
+xclbin loading is requested using the XMGMT_IOCICAPDOWNLOAD_AXLF ioctl command.
+When loading an xclbin, the xmgmt driver performs the following *logical*
+operations:
+
+1. Copy xclbin from user to kernel memory
+2. Sanity check the xclbin contents
+3. Isolate the User partition
+4. Download the bitstream using the FPGA config engine (ICAP)
+5. De-isolate the User partition
+6. Program the clocks (ClockWiz) driving the User partition
+7. Wait for memory controller (MIG) calibration
+8. Return the loading status back to the caller
+
+`Platform Loading Overview <https://xilinx.github.io/XRT/master/html/platforms_partitions.html>`_
+provides more detailed information on platform loading.
+
+
+xsabin
+------
+
+Each Alveo platform comes packaged with its own xsabin. The xsabin is a trusted
+component of the platform. For format details refer to
+:ref:`xsabin_xclbin_container_format` below. The xsabin contains basic
+information like UUIDs, platform name and metadata in the form of a device
+tree. See :ref:`device_tree_usage` below for details and an example.
+
+xclbin
+------
+
+An xclbin is compiled by the end user using the
+`Vitis <https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html>`_
+tool set from Xilinx. The xclbin contains sections describing user compiled
+acceleration engines/kernels, memory subsystems, clocking information, etc. It
+also contains the FPGA bitstream for the user partition, UUIDs, platform name,
+etc.
+
+
+.. _xsabin_xclbin_container_format:
+
+xsabin/xclbin Container Format
+------------------------------
+
+xclbin/xsabin is an ELF-like binary container format. It is structured as a
+series of sections. There is a file header, followed by several section
+headers, which are in turn followed by the sections themselves. Each section
+header points to an actual section. There is an optional signature at the end.
+The format is defined by the header file ``xclbin.h``. The following figure
+illustrates a typical xclbin::
+
+
+ +---------------------+
+ | |
+ | HEADER |
+ +---------------------+
+ | SECTION HEADER |
+ | |
+ +---------------------+
+ | ... |
+ | |
+ +---------------------+
+ | SECTION HEADER |
+ | |
+ +---------------------+
+ | SECTION |
+ | |
+ +---------------------+
+ | ... |
+ | |
+ +---------------------+
+ | SECTION |
+ | |
+ +---------------------+
+ | SIGNATURE |
+ | (OPTIONAL) |
+ +---------------------+
+
+
+xclbin/xsabin files can be packaged, un-packaged and inspected using the XRT
+utility **xclbinutil**. xclbinutil is part of the XRT open source software
+stack. The source code for xclbinutil can be found at
+https://github.com/Xilinx/XRT/tree/master/src/runtime_src/tools/xclbinutil
+
+For example, to enumerate the contents of an xclbin/xsabin, use the *--info*
+switch as shown below::
+
+
+ xclbinutil --info --input /opt/xilinx/firmware/u50/gen3x16-xdma/blp/test/bandwidth.xclbin
+ xclbinutil --info --input /lib/firmware/xilinx/862c7020a250293e32036f19956669e5/partition.xsabin
+
+
+.. _device_tree_usage:
+
+Device Tree Usage
+-----------------
+
+As mentioned previously, the xsabin stores metadata which advertises the HW
+subsystems present in a partition. The metadata is stored in device tree format
+with a well defined schema. The XRT management driver uses this information to
+bind *platform drivers* to the subsystem instantiations. The platform drivers
+are found in the **xrt-lib.ko** kernel module described earlier.
+
+Logic UUID
+^^^^^^^^^^
+A partition is identified uniquely through the ``logic_uuid`` property::
+
+ /dts-v1/;
+ / {
+ logic_uuid = "0123456789abcdef0123456789abcdef";
+ ...
+ }
+
+Schema Version
+^^^^^^^^^^^^^^
+The schema version is defined through the ``schema_version`` node. It contains
+``major`` and ``minor`` properties as below::
+
+ /dts-v1/;
+ / {
+ schema_version {
+ major = <0x01>;
+ minor = <0x00>;
+ };
+ ...
+ }
+
+.. _partition_uuids:
+
+Partition UUIDs
+^^^^^^^^^^^^^^^
+As mentioned earlier, each partition may have parent and child UUIDs. These
+UUIDs are defined by the ``interfaces`` node and ``interface_uuid`` properties::
+
+ /dts-v1/;
+ / {
+ interfaces {
+ @0 {
+ interface_uuid = "0123456789abcdef0123456789abcdef";
+ };
+ @1 {
+ interface_uuid = "fedcba9876543210fedcba9876543210";
+ };
+ ...
+ };
+ ...
+ }
+
+
+Subsystem Instantiations
+^^^^^^^^^^^^^^^^^^^^^^^^
+Subsystem instantiations are captured as children of the
+``addressable_endpoints`` node::
+
+ /dts-v1/;
+ / {
+ addressable_endpoints {
+ abc {
+ ...
+ };
+ def {
+ ...
+ };
+ ...
+ }
+ }
+
+Subnodes 'abc' and 'def' are the names of subsystem nodes.
+
+Subsystem Node
+^^^^^^^^^^^^^^
+Each subsystem node and its properties define a hardware instance::
+
+
+   addressable_endpoints {
+       abc {
+           reg = <0xa 0xb>;
+           pcie_physical_function = <0x0>;
+           pcie_bar_mapping = <0x2>;
+           compatible = "abc def";
+           firmware {
+               firmware_product_name = "abc";
+               firmware_branch_name = "def";
+               firmware_version_major = <1>;
+               firmware_version_minor = <2>;
+           };
+       };
+       ...
+   }
+
+:reg:
+  Property defining the address range. '<0xa 0xb>' is a BAR offset and length
+  pair; both are 64-bit integers.
+:pcie_physical_function:
+  Property specifying which PCIe physical function the subsystem node resides
+  on.
+:pcie_bar_mapping:
+  Property specifying which PCIe BAR the subsystem node resides on. '<0x2>' is
+  the BAR index; it defaults to 0 if this property is not defined.
+:compatible:
+  Property holding a list of strings. The first string in the list specifies
+  the exact subsystem node. The following strings represent other devices that
+  the device is compatible with.
+:firmware:
+  Subnode defining the firmware required by this subsystem node.
+
+Alveo U50 Platform Example
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+::
+
+ /dts-v1/;
+
+ /{
+ logic_uuid = "f465b0a3ae8c64f619bc150384ace69b";
+
+ schema_version {
+ major = <0x01>;
+ minor = <0x00>;
+ };
+
+ interfaces {
+
+ @0 {
+ interface_uuid = "862c7020a250293e32036f19956669e5";
+ };
+ };
+
+ addressable_endpoints {
+
+ ep_blp_rom_00 {
+ reg = <0x00 0x1f04000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+ };
+
+ ep_card_flash_program_00 {
+ reg = <0x00 0x1f06000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_quad_spi-1.0\0axi_quad_spi";
+ interrupts = <0x03 0x03>;
+ };
+
+ ep_cmc_firmware_mem_00 {
+ reg = <0x00 0x1e20000 0x00 0x20000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+
+ firmware {
+ firmware_product_name = "cmc";
+ firmware_branch_name = "u50";
+ firmware_version_major = <0x01>;
+ firmware_version_minor = <0x00>;
+ };
+ };
+
+ ep_cmc_intc_00 {
+ reg = <0x00 0x1e03000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
+ interrupts = <0x04 0x04>;
+ };
+
+ ep_cmc_mutex_00 {
+ reg = <0x00 0x1e02000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+ };
+
+ ep_cmc_regmap_00 {
+ reg = <0x00 0x1e08000 0x00 0x2000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+
+ firmware {
+ firmware_product_name = "sc-fw";
+ firmware_branch_name = "u50";
+ firmware_version_major = <0x05>;
+ };
+ };
+
+ ep_cmc_reset_00 {
+ reg = <0x00 0x1e01000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+ };
+
+ ep_ddr_mem_calib_00 {
+ reg = <0x00 0x63000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+ };
+
+ ep_debug_bscan_mgmt_00 {
+ reg = <0x00 0x1e90000 0x00 0x10000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-debug_bridge-1.0\0debug_bridge";
+ };
+
+ ep_ert_base_address_00 {
+ reg = <0x00 0x21000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+ };
+
+ ep_ert_command_queue_mgmt_00 {
+ reg = <0x00 0x40000 0x00 0x10000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-ert_command_queue-1.0\0ert_command_queue";
+ };
+
+ ep_ert_command_queue_user_00 {
+ reg = <0x00 0x40000 0x00 0x10000>;
+ pcie_physical_function = <0x01>;
+ compatible = "xilinx.com,reg_abs-ert_command_queue-1.0\0ert_command_queue";
+ };
+
+ ep_ert_firmware_mem_00 {
+ reg = <0x00 0x30000 0x00 0x8000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+
+ firmware {
+ firmware_product_name = "ert";
+ firmware_branch_name = "v20";
+ firmware_version_major = <0x01>;
+ };
+ };
+
+ ep_ert_intc_00 {
+ reg = <0x00 0x23000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
+ interrupts = <0x05 0x05>;
+ };
+
+ ep_ert_reset_00 {
+ reg = <0x00 0x22000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+ };
+
+ ep_ert_sched_00 {
+ reg = <0x00 0x50000 0x00 0x1000>;
+ pcie_physical_function = <0x01>;
+ compatible = "xilinx.com,reg_abs-ert_sched-1.0\0ert_sched";
+ interrupts = <0x09 0x0c>;
+ };
+
+ ep_fpga_configuration_00 {
+ reg = <0x00 0x1e88000 0x00 0x8000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_hwicap-1.0\0axi_hwicap";
+ interrupts = <0x02 0x02>;
+ };
+
+ ep_icap_reset_00 {
+ reg = <0x00 0x1f07000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+ };
+
+ ep_msix_00 {
+ reg = <0x00 0x00 0x00 0x20000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-msix-1.0\0msix";
+ pcie_bar_mapping = <0x02>;
+ };
+
+ ep_pcie_link_mon_00 {
+ reg = <0x00 0x1f05000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+ };
+
+ ep_pr_isolate_plp_00 {
+ reg = <0x00 0x1f01000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+ };
+
+ ep_pr_isolate_ulp_00 {
+ reg = <0x00 0x1000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
+ };
+
+ ep_uuid_rom_00 {
+ reg = <0x00 0x64000 0x00 0x1000>;
+ pcie_physical_function = <0x00>;
+ compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
+ };
+
+ ep_xdma_00 {
+ reg = <0x00 0x00 0x00 0x10000>;
+ pcie_physical_function = <0x01>;
+ compatible = "xilinx.com,reg_abs-xdma-1.0\0xdma";
+ pcie_bar_mapping = <0x02>;
+ };
+ };
+
+ };
+
+
+
+Deployment Models
+=================
+
+Baremetal
+---------
+
+In bare-metal deployments, both MPF and UPF are visible and accessible. The
+xmgmt driver binds to the MPF. xmgmt driver operations are privileged and
+available only to the system administrator. The full stack is illustrated
+below::
+
+ HOST
+
+ [XMGMT] [XUSER]
+ | |
+ | |
+ +-----+ +-----+
+ | MPF | | UPF |
+ | | | |
+ | PF0 | | PF1 |
+ +--+--+ +--+--+
+ ......... ^................. ^..........
+ | |
+ | PCIe DEVICE |
+ | |
+ +--+------------------+--+
+ | SHELL |
+ | |
+ +------------------------+
+ | USER |
+ | |
+ | |
+ | |
+ | |
+ +------------------------+
+
+
+
+Virtualized
+-----------
+
+In virtualized deployments, the privileged MPF is assigned to the host while
+the unprivileged UPF is assigned to a guest VM via PCIe pass-through. The
+xmgmt driver in the host binds to the MPF. xmgmt driver operations are
+privileged and available only through the MPF. The full stack is illustrated
+below::
+
+
+ .............
+ HOST . VM .
+ . .
+ [XMGMT] . [XUSER] .
+ | . | .
+ | . | .
+ +-----+ . +-----+ .
+ | MPF | . | UPF | .
+ | | . | | .
+ | PF0 | . | PF1 | .
+ +--+--+ . +--+--+ .
+ ......... ^................. ^..........
+ | |
+ | PCIe DEVICE |
+ | |
+ +--+------------------+--+
+ | SHELL |
+ | |
+ +------------------------+
+ | USER |
+ | |
+ | |
+ | |
+ | |
+ +------------------------+
+
+
+
+
+
+Platform Security Considerations
+================================
+
+`Security of Alveo Platform <https://xilinx.github.io/XRT/master/html/security.html>`_
+discusses the deployment options and security implications in great detail.
--
2.27.0
XRT drivers use device tree as the metadata format to discover HW subsystems
behind a PCIe BAR. libfdt functions are therefore called by the driver to
parse the device tree blob.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/metadata.h | 233 ++++++++++++
drivers/fpga/xrt/metadata/metadata.c | 545 +++++++++++++++++++++++++++
2 files changed, 778 insertions(+)
create mode 100644 drivers/fpga/xrt/include/metadata.h
create mode 100644 drivers/fpga/xrt/metadata/metadata.c
diff --git a/drivers/fpga/xrt/include/metadata.h b/drivers/fpga/xrt/include/metadata.h
new file mode 100644
index 000000000000..479e47960c61
--- /dev/null
+++ b/drivers/fpga/xrt/include/metadata.h
@@ -0,0 +1,233 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#ifndef _XRT_METADATA_H
+#define _XRT_METADATA_H
+
+#include <linux/device.h>
+#include <linux/vmalloc.h>
+#include <linux/uuid.h>
+
+#define XRT_MD_INVALID_LENGTH (~0UL)
+
+/* metadata properties */
+#define XRT_MD_PROP_BAR_IDX "pcie_bar_mapping"
+#define XRT_MD_PROP_COMPATIBLE "compatible"
+#define XRT_MD_PROP_HWICAP "axi_hwicap"
+#define XRT_MD_PROP_INTERFACE_UUID "interface_uuid"
+#define XRT_MD_PROP_INTERRUPTS "interrupts"
+#define XRT_MD_PROP_IO_OFFSET "reg"
+#define XRT_MD_PROP_LOGIC_UUID "logic_uuid"
+#define XRT_MD_PROP_PDI_CONFIG "pdi_config_mem"
+#define XRT_MD_PROP_PF_NUM "pcie_physical_function"
+#define XRT_MD_PROP_VERSION_MAJOR "firmware_version_major"
+
+/* non IP nodes */
+#define XRT_MD_NODE_ENDPOINTS "addressable_endpoints"
+#define XRT_MD_NODE_FIRMWARE "firmware"
+#define XRT_MD_NODE_INTERFACES "interfaces"
+#define XRT_MD_NODE_PARTITION_INFO "partition_info"
+
+/*
+ * IP nodes
+ * AF: AXI Firewall
+ * CMC: Card Management Controller
+ * ERT: Embedded Runtime
+ * PLP: Provider Reconfigurable Partition
+ * ULP: User Reconfigurable Partition
+ */
+#define XRT_MD_NODE_ADDR_TRANSLATOR "ep_remap_data_c2h_00"
+#define XRT_MD_NODE_AF_BLP_CTRL_MGMT "ep_firewall_blp_ctrl_mgmt_00"
+#define XRT_MD_NODE_AF_BLP_CTRL_USER "ep_firewall_blp_ctrl_user_00"
+#define XRT_MD_NODE_AF_CTRL_DEBUG "ep_firewall_ctrl_debug_00"
+#define XRT_MD_NODE_AF_CTRL_MGMT "ep_firewall_ctrl_mgmt_00"
+#define XRT_MD_NODE_AF_CTRL_USER "ep_firewall_ctrl_user_00"
+#define XRT_MD_NODE_AF_DATA_C2H "ep_firewall_data_c2h_00"
+#define XRT_MD_NODE_AF_DATA_H2C "ep_firewall_data_h2c_00"
+#define XRT_MD_NODE_AF_DATA_M2M "ep_firewall_data_m2m_00"
+#define XRT_MD_NODE_AF_DATA_P2P "ep_firewall_data_p2p_00"
+#define XRT_MD_NODE_CLKFREQ_HBM "ep_freq_cnt_aclk_hbm_00"
+#define XRT_MD_NODE_CLKFREQ_K1 "ep_freq_cnt_aclk_kernel_00"
+#define XRT_MD_NODE_CLKFREQ_K2 "ep_freq_cnt_aclk_kernel_01"
+#define XRT_MD_NODE_CLK_KERNEL1 "ep_aclk_kernel_00"
+#define XRT_MD_NODE_CLK_KERNEL2 "ep_aclk_kernel_01"
+#define XRT_MD_NODE_CLK_KERNEL3 "ep_aclk_hbm_00"
+#define XRT_MD_NODE_CLK_SHUTDOWN "ep_aclk_shutdown_00"
+#define XRT_MD_NODE_CMC_FW_MEM "ep_cmc_firmware_mem_00"
+#define XRT_MD_NODE_CMC_MUTEX "ep_cmc_mutex_00"
+#define XRT_MD_NODE_CMC_REG "ep_cmc_regmap_00"
+#define XRT_MD_NODE_CMC_RESET "ep_cmc_reset_00"
+#define XRT_MD_NODE_DDR_CALIB "ep_ddr_mem_calib_00"
+#define XRT_MD_NODE_DDR4_RESET_GATE "ep_ddr_mem_srsr_gate_00"
+#define XRT_MD_NODE_ERT_BASE "ep_ert_base_address_00"
+#define XRT_MD_NODE_ERT_CQ_MGMT "ep_ert_command_queue_mgmt_00"
+#define XRT_MD_NODE_ERT_CQ_USER "ep_ert_command_queue_user_00"
+#define XRT_MD_NODE_ERT_FW_MEM "ep_ert_firmware_mem_00"
+#define XRT_MD_NODE_ERT_RESET "ep_ert_reset_00"
+#define XRT_MD_NODE_ERT_SCHED "ep_ert_sched_00"
+#define XRT_MD_NODE_FLASH "ep_card_flash_program_00"
+#define XRT_MD_NODE_FPGA_CONFIG "ep_fpga_configuration_00"
+#define XRT_MD_NODE_GAPPING "ep_gapping_demand_00"
+#define XRT_MD_NODE_GATE_PLP "ep_pr_isolate_plp_00"
+#define XRT_MD_NODE_GATE_ULP "ep_pr_isolate_ulp_00"
+#define XRT_MD_NODE_KDMA_CTRL "ep_kdma_ctrl_00"
+#define XRT_MD_NODE_MAILBOX_MGMT "ep_mailbox_mgmt_00"
+#define XRT_MD_NODE_MAILBOX_USER "ep_mailbox_user_00"
+#define XRT_MD_NODE_MAILBOX_XRT "ep_mailbox_user_to_ert_00"
+#define XRT_MD_NODE_MSIX "ep_msix_00"
+#define XRT_MD_NODE_P2P "ep_p2p_00"
+#define XRT_MD_NODE_PCIE_MON "ep_pcie_link_mon_00"
+#define XRT_MD_NODE_PMC_INTR "ep_pmc_intr_00"
+#define XRT_MD_NODE_PMC_MUX "ep_pmc_mux_00"
+#define XRT_MD_NODE_QDMA "ep_qdma_00"
+#define XRT_MD_NODE_QDMA4 "ep_qdma4_00"
+#define XRT_MD_NODE_REMAP_P2P "ep_remap_p2p_00"
+#define XRT_MD_NODE_STM "ep_stream_traffic_manager_00"
+#define XRT_MD_NODE_STM4 "ep_stream_traffic_manager4_00"
+#define XRT_MD_NODE_SYSMON "ep_cmp_sysmon_00"
+#define XRT_MD_NODE_XDMA "ep_xdma_00"
+#define XRT_MD_NODE_XVC_PUB "ep_debug_bscan_user_00"
+#define XRT_MD_NODE_XVC_PRI "ep_debug_bscan_mgmt_00"
+#define XRT_MD_NODE_UCS_CONTROL_STATUS "ep_ucs_control_status_00"
+
+/* endpoint regmaps */
+#define XRT_MD_REGMAP_DDR_SRSR "drv_ddr_srsr"
+#define XRT_MD_REGMAP_CLKFREQ "freq_cnt"
+
+/* driver defined endpoints */
+#define XRT_MD_NODE_BLP_ROM "drv_ep_blp_rom_00"
+#define XRT_MD_NODE_DDR_SRSR "drv_ep_ddr_srsr"
+#define XRT_MD_NODE_FLASH_VSEC "drv_ep_card_flash_program_00"
+#define XRT_MD_NODE_GOLDEN_VER "drv_ep_golden_ver_00"
+#define XRT_MD_NODE_MAILBOX_VSEC "drv_ep_mailbox_vsec_00"
+#define XRT_MD_NODE_MGMT_MAIN "drv_ep_mgmt_main_00"
+#define XRT_MD_NODE_PLAT_INFO "drv_ep_platform_info_mgmt_00"
+#define XRT_MD_NODE_PARTITION_INFO_BLP "partition_info_0"
+#define XRT_MD_NODE_PARTITION_INFO_PLP "partition_info_1"
+#define XRT_MD_NODE_TEST "drv_ep_test_00"
+#define XRT_MD_NODE_VSEC "drv_ep_vsec_00"
+#define XRT_MD_NODE_VSEC_GOLDEN "drv_ep_vsec_golden_00"
+
+/* driver defined properties */
+#define XRT_MD_PROP_OFFSET "drv_offset"
+#define XRT_MD_PROP_CLK_FREQ "drv_clock_frequency"
+#define XRT_MD_PROP_CLK_CNT "drv_clock_frequency_counter"
+#define XRT_MD_PROP_VBNV "vbnv"
+#define XRT_MD_PROP_VROM "vrom"
+#define XRT_MD_PROP_PARTITION_LEVEL "partition_level"
+
+struct xrt_md_endpoint {
+ const char *ep_name;
+ u32 bar;
+ u64 bar_off;
+ ulong size;
+ char *regmap;
+ char *regmap_ver;
+};
+
+/* Note: res_id is defined by leaf driver and must start with 0. */
+struct xrt_iores_map {
+ char *res_name;
+ int res_id;
+};
+
+static inline int xrt_md_res_name2id(const struct xrt_iores_map *res_map,
+ int entry_num, const char *res_name)
+{
+ int i;
+
+ for (i = 0; i < entry_num; i++) {
+ if (!strncmp(res_name, res_map->res_name, strlen(res_map->res_name) + 1))
+ return res_map->res_id;
+ res_map++;
+ }
+ return -1;
+}
+
+static inline const char *
+xrt_md_res_id2name(const struct xrt_iores_map *res_map, int entry_num, int id)
+{
+ int i;
+
+ for (i = 0; i < entry_num; i++) {
+ if (res_map->res_id == id)
+ return res_map->res_name;
+ res_map++;
+ }
+ return NULL;
+}
+
+unsigned long xrt_md_size(struct device *dev, const char *blob);
+int xrt_md_create(struct device *dev, char **blob);
+char *xrt_md_dup(struct device *dev, const char *blob);
+int xrt_md_add_endpoint(struct device *dev, char *blob,
+ struct xrt_md_endpoint *ep);
+int xrt_md_del_endpoint(struct device *dev, char *blob, const char *ep_name,
+ const char *regmap_name);
+int xrt_md_get_prop(struct device *dev, const char *blob, const char *ep_name,
+ const char *regmap_name, const char *prop,
+ const void **val, int *size);
+int xrt_md_set_prop(struct device *dev, char *blob, const char *ep_name,
+ const char *regmap_name, const char *prop,
+ const void *val, int size);
+int xrt_md_copy_endpoint(struct device *dev, char *blob, const char *src_blob,
+ const char *ep_name, const char *regmap_name,
+ const char *new_ep_name);
+int xrt_md_get_next_endpoint(struct device *dev, const char *blob,
+ const char *ep_name, const char *regmap_name,
+ char **next_ep, char **next_regmap);
+int xrt_md_get_compatible_endpoint(struct device *dev, const char *blob,
+ const char *regmap_name, const char **ep_name);
+int xrt_md_find_endpoint(struct device *dev, const char *blob,
+ const char *ep_name, const char *regmap_name,
+ const char **epname);
+int xrt_md_pack(struct device *dev, char *blob);
+int xrt_md_get_interface_uuids(struct device *dev, const char *blob,
+ u32 num_uuids, uuid_t *intf_uuids);
+
+/*
+ * The firmware provides a 128 bit hash string as a unique id to the
+ * partition/interface.
+ * Existing hw does not yet use the canonical form, so it is necessary to
+ * use a translation function.
+ */
+static inline void xrt_md_trans_uuid2str(const uuid_t *uuid, char *uuidstr)
+{
+ int i, p;
+ u8 tmp[UUID_SIZE];
+
+ BUILD_BUG_ON(UUID_SIZE != 16);
+ export_uuid(tmp, uuid);
+ for (p = 0, i = UUID_SIZE - 1; i >= 0; p++, i--)
+ snprintf(&uuidstr[p * 2], 3, "%02x", tmp[i]);
+}
+
+static inline int xrt_md_trans_str2uuid(struct device *dev, const char *uuidstr, uuid_t *p_uuid)
+{
+ u8 p[UUID_SIZE];
+ const char *str;
+ char tmp[3] = { 0 };
+ int i, ret;
+
+ BUILD_BUG_ON(UUID_SIZE != 16);
+ str = uuidstr + strlen(uuidstr) - 2;
+
+ for (i = 0; i < sizeof(*p_uuid) && str >= uuidstr; i++) {
+ tmp[0] = *str;
+ tmp[1] = *(str + 1);
+ ret = kstrtou8(tmp, 16, &p[i]);
+ if (ret)
+ return -EINVAL;
+ str -= 2;
+ }
+ import_uuid(p_uuid, p);
+
+ return 0;
+}
+
+#endif
diff --git a/drivers/fpga/xrt/metadata/metadata.c b/drivers/fpga/xrt/metadata/metadata.c
new file mode 100644
index 000000000000..3b2be50fcb02
--- /dev/null
+++ b/drivers/fpga/xrt/metadata/metadata.c
@@ -0,0 +1,545 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Metadata parse APIs
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#include <linux/libfdt_env.h>
+#include "libfdt.h"
+#include "metadata.h"
+
+#define MAX_BLOB_SIZE (4096 * 25)
+#define MAX_DEPTH 5
+
+static int xrt_md_setprop(struct device *dev, char *blob, int offset,
+ const char *prop, const void *val, int size)
+{
+ int ret;
+
+ ret = fdt_setprop(blob, offset, prop, val, size);
+ if (ret)
+ dev_err(dev, "failed to set prop %d", ret);
+
+ return ret;
+}
+
+static int xrt_md_add_node(struct device *dev, char *blob, int parent_offset,
+ const char *ep_name)
+{
+ int ret;
+
+ ret = fdt_add_subnode(blob, parent_offset, ep_name);
+ if (ret < 0 && ret != -FDT_ERR_EXISTS)
+ dev_err(dev, "failed to add node %s. %d", ep_name, ret);
+
+ return ret;
+}
+
+static int xrt_md_get_endpoint(struct device *dev, const char *blob,
+ const char *ep_name, const char *regmap_name,
+ int *ep_offset)
+{
+ const char *name;
+ int offset;
+
+ for (offset = fdt_next_node(blob, -1, NULL);
+ offset >= 0;
+ offset = fdt_next_node(blob, offset, NULL)) {
+ name = fdt_get_name(blob, offset, NULL);
+ if (!name || strncmp(name, ep_name, strlen(ep_name) + 1))
+ continue;
+ if (!regmap_name ||
+ !fdt_node_check_compatible(blob, offset, regmap_name))
+ break;
+ }
+ if (offset < 0)
+ return -ENODEV;
+
+ *ep_offset = offset;
+
+ return 0;
+}
+
+static inline int xrt_md_get_node(struct device *dev, const char *blob,
+ const char *name, const char *regmap_name,
+ int *offset)
+{
+ int ret = 0;
+
+ if (name) {
+ ret = xrt_md_get_endpoint(dev, blob, name, regmap_name,
+ offset);
+ if (ret) {
+ dev_err(dev, "cannot get node %s, regmap %s, ret = %d",
+ name, regmap_name, ret);
+ return -EINVAL;
+ }
+ } else {
+ ret = fdt_next_node(blob, -1, NULL);
+ if (ret < 0) {
+ dev_err(dev, "internal error, ret = %d", ret);
+ return -EINVAL;
+ }
+ *offset = ret;
+ }
+
+ return 0;
+}
+
+static int xrt_md_overlay(struct device *dev, char *blob, int target,
+ const char *overlay_blob, int overlay_offset,
+ int depth)
+{
+ int property, subnode;
+ int ret;
+
+ if (!blob || !overlay_blob) {
+ dev_err(dev, "blob is NULL");
+ return -EINVAL;
+ }
+
+ if (depth > MAX_DEPTH) {
+ dev_err(dev, "meta data depth beyond %d", MAX_DEPTH);
+ return -EINVAL;
+ }
+
+ if (target < 0) {
+ target = fdt_next_node(blob, -1, NULL);
+ if (target < 0) {
+ dev_err(dev, "invalid target");
+ return -EINVAL;
+ }
+ }
+ if (overlay_offset < 0) {
+ overlay_offset = fdt_next_node(overlay_blob, -1, NULL);
+ if (overlay_offset < 0) {
+ dev_err(dev, "invalid overlay");
+ return -EINVAL;
+ }
+ }
+
+ fdt_for_each_property_offset(property, overlay_blob, overlay_offset) {
+ const char *name;
+ const void *prop;
+ int prop_len;
+
+ prop = fdt_getprop_by_offset(overlay_blob, property, &name,
+ &prop_len);
+ if (!prop || prop_len >= MAX_BLOB_SIZE || prop_len < 0) {
+ dev_err(dev, "internal error");
+ return -EINVAL;
+ }
+
+ ret = xrt_md_setprop(dev, blob, target, name, prop,
+ prop_len);
+ if (ret) {
+ dev_err(dev, "setprop failed, ret = %d", ret);
+ return ret;
+ }
+ }
+
+ fdt_for_each_subnode(subnode, overlay_blob, overlay_offset) {
+ const char *name = fdt_get_name(overlay_blob, subnode, NULL);
+ int nnode;
+
+ nnode = xrt_md_add_node(dev, blob, target, name);
+ if (nnode == -FDT_ERR_EXISTS)
+ nnode = fdt_subnode_offset(blob, target, name);
+ if (nnode < 0) {
+ dev_err(dev, "add node failed, ret = %d", nnode);
+ return nnode;
+ }
+
+ ret = xrt_md_overlay(dev, blob, nnode, overlay_blob, subnode, depth + 1);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+unsigned long xrt_md_size(struct device *dev, const char *blob)
+{
+ unsigned long len = (long)fdt_totalsize(blob);
+
+ if (len > MAX_BLOB_SIZE)
+ return XRT_MD_INVALID_LENGTH;
+
+ return len;
+}
+EXPORT_SYMBOL_GPL(xrt_md_size);
+
+int xrt_md_create(struct device *dev, char **blob)
+{
+ int ret = 0;
+
+ if (!blob) {
+ dev_err(dev, "blob is NULL");
+ return -EINVAL;
+ }
+
+ *blob = vzalloc(MAX_BLOB_SIZE);
+ if (!*blob)
+ return -ENOMEM;
+
+ ret = fdt_create_empty_tree(*blob, MAX_BLOB_SIZE);
+ if (ret) {
+ dev_err(dev, "format blob failed, ret = %d", ret);
+ goto failed;
+ }
+
+ ret = fdt_next_node(*blob, -1, NULL);
+ if (ret < 0) {
+ dev_err(dev, "No Node, ret = %d", ret);
+ goto failed;
+ }
+
+ ret = fdt_add_subnode(*blob, 0, XRT_MD_NODE_ENDPOINTS);
+ if (ret < 0) {
+ dev_err(dev, "add node failed, ret = %d", ret);
+ goto failed;
+ }
+
+ return 0;
+
+failed:
+ vfree(*blob);
+ *blob = NULL;
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xrt_md_create);
+
+char *xrt_md_dup(struct device *dev, const char *blob)
+{
+ char *dup_blob;
+ int ret;
+
+ ret = xrt_md_create(dev, &dup_blob);
+ if (ret)
+ return NULL;
+ ret = xrt_md_overlay(dev, dup_blob, -1, blob, -1, 0);
+ if (ret) {
+ vfree(dup_blob);
+ return NULL;
+ }
+
+ return dup_blob;
+}
+EXPORT_SYMBOL_GPL(xrt_md_dup);
+
+int xrt_md_del_endpoint(struct device *dev, char *blob, const char *ep_name,
+ const char *regmap_name)
+{
+ int ep_offset;
+ int ret;
+
+ ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name, &ep_offset);
+ if (ret) {
+ dev_err(dev, "can not find ep %s", ep_name);
+ return -EINVAL;
+ }
+
+ ret = fdt_del_node(blob, ep_offset);
+ if (ret)
+ dev_err(dev, "delete node %s failed, ret %d", ep_name, ret);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xrt_md_del_endpoint);
+
+static int __xrt_md_add_endpoint(struct device *dev, char *blob,
+ struct xrt_md_endpoint *ep, int *offset,
+ const char *parent)
+{
+ int parent_offset = 0;
+ u32 val, count = 0;
+ int ep_offset = 0;
+ u64 io_range[2];
+ char comp[128];
+ int ret = 0;
+
+ if (!ep->ep_name) {
+ dev_err(dev, "empty name");
+ return -EINVAL;
+ }
+
+ if (parent) {
+ ret = xrt_md_get_endpoint(dev, blob, parent, NULL, &parent_offset);
+ if (ret) {
+ dev_err(dev, "invalid blob, ret = %d", ret);
+ return -EINVAL;
+ }
+ }
+
+ ep_offset = xrt_md_add_node(dev, blob, parent_offset, ep->ep_name);
+ if (ep_offset < 0) {
+ dev_err(dev, "add endpoint failed, ret = %d", ep_offset);
+ return -EINVAL;
+ }
+ if (offset)
+ *offset = ep_offset;
+
+ if (ep->size != 0) {
+ val = cpu_to_be32(ep->bar);
+ ret = xrt_md_setprop(dev, blob, ep_offset, XRT_MD_PROP_BAR_IDX,
+ &val, sizeof(u32));
+ if (ret) {
+ dev_err(dev, "set %s failed, ret %d",
+ XRT_MD_PROP_BAR_IDX, ret);
+ goto failed;
+ }
+ io_range[0] = cpu_to_be64((u64)ep->bar_off);
+ io_range[1] = cpu_to_be64((u64)ep->size);
+ ret = xrt_md_setprop(dev, blob, ep_offset, XRT_MD_PROP_IO_OFFSET,
+ io_range, sizeof(io_range));
+ if (ret) {
+ dev_err(dev, "set %s failed, ret %d",
+ XRT_MD_PROP_IO_OFFSET, ret);
+ goto failed;
+ }
+ }
+
+ if (ep->regmap) {
+ if (ep->regmap_ver) {
+ count = snprintf(comp, sizeof(comp) - 1,
+ "%s-%s", ep->regmap, ep->regmap_ver);
+ count++;
+ }
+ if (count > sizeof(comp)) {
+ ret = -EINVAL;
+ goto failed;
+ }
+
+ count += snprintf(comp + count, sizeof(comp) - count - 1,
+ "%s", ep->regmap);
+ count++;
+ if (count > sizeof(comp)) {
+ ret = -EINVAL;
+ goto failed;
+ }
+
+ ret = xrt_md_setprop(dev, blob, ep_offset, XRT_MD_PROP_COMPATIBLE,
+ comp, count);
+ if (ret) {
+ dev_err(dev, "set %s failed, ret %d",
+ XRT_MD_PROP_COMPATIBLE, ret);
+ goto failed;
+ }
+ }
+
+failed:
+ if (ret)
+ xrt_md_del_endpoint(dev, blob, ep->ep_name, NULL);
+
+ return ret;
+}
+
+int xrt_md_add_endpoint(struct device *dev, char *blob,
+ struct xrt_md_endpoint *ep)
+{
+ return __xrt_md_add_endpoint(dev, blob, ep, NULL, XRT_MD_NODE_ENDPOINTS);
+}
+EXPORT_SYMBOL_GPL(xrt_md_add_endpoint);
+
+int xrt_md_find_endpoint(struct device *dev, const char *blob,
+ const char *ep_name, const char *regmap_name,
+ const char **epname)
+{
+ int offset;
+ int ret;
+
+ ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name,
+ &offset);
+ if (!ret && epname)
+ *epname = fdt_get_name(blob, offset, NULL);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xrt_md_find_endpoint);
+
+int xrt_md_get_prop(struct device *dev, const char *blob, const char *ep_name,
+ const char *regmap_name, const char *prop,
+ const void **val, int *size)
+{
+ int offset;
+ int ret;
+
+ if (!val) {
+ dev_err(dev, "val is null");
+ return -EINVAL;
+ }
+
+ *val = NULL;
+ ret = xrt_md_get_node(dev, blob, ep_name, regmap_name, &offset);
+ if (ret)
+ return ret;
+
+ *val = fdt_getprop(blob, offset, prop, size);
+ if (!*val) {
+ dev_dbg(dev, "get ep %s, prop %s failed", ep_name, prop);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xrt_md_get_prop);
+
+int xrt_md_set_prop(struct device *dev, char *blob,
+ const char *ep_name, const char *regmap_name,
+ const char *prop, const void *val, int size)
+{
+ int offset;
+ int ret;
+
+ ret = xrt_md_get_node(dev, blob, ep_name, regmap_name, &offset);
+ if (ret)
+ return ret;
+
+ ret = xrt_md_setprop(dev, blob, offset, prop, val, size);
+ if (ret)
+ dev_err(dev, "set prop %s failed, ret = %d", prop, ret);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xrt_md_set_prop);
+
+int xrt_md_copy_endpoint(struct device *dev, char *blob, const char *src_blob,
+ const char *ep_name, const char *regmap_name,
+ const char *new_ep_name)
+{
+ const char *newepnm = new_ep_name ? new_ep_name : ep_name;
+ struct xrt_md_endpoint ep = {0};
+ int offset, target;
+ const char *parent;
+ int ret;
+
+ ret = xrt_md_get_endpoint(dev, src_blob, ep_name, regmap_name,
+ &offset);
+ if (ret)
+ return -EINVAL;
+
+ ret = xrt_md_get_endpoint(dev, blob, newepnm, regmap_name, &target);
+ if (ret) {
+ ep.ep_name = newepnm;
+ parent = fdt_parent_offset(src_blob, offset) == 0 ? NULL : XRT_MD_NODE_ENDPOINTS;
+ ret = __xrt_md_add_endpoint(dev, blob, &ep, &target, parent);
+ if (ret)
+ return -EINVAL;
+ }
+
+ ret = xrt_md_overlay(dev, blob, target, src_blob, offset, 0);
+ if (ret)
+ dev_err(dev, "overlay failed, ret = %d", ret);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xrt_md_copy_endpoint);
+
+int xrt_md_get_next_endpoint(struct device *dev, const char *blob,
+ const char *ep_name, const char *regmap_name,
+ char **next_ep, char **next_regmap)
+{
+ int offset, ret;
+
+ *next_ep = NULL;
+ *next_regmap = NULL;
+ if (!ep_name) {
+ ret = xrt_md_get_endpoint(dev, blob, XRT_MD_NODE_ENDPOINTS, NULL,
+ &offset);
+ } else {
+ ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name,
+ &offset);
+ }
+
+ if (ret)
+ return -EINVAL;
+
+ offset = ep_name ? fdt_next_subnode(blob, offset) :
+ fdt_first_subnode(blob, offset);
+ if (offset < 0)
+ return -EINVAL;
+
+ *next_ep = (char *)fdt_get_name(blob, offset, NULL);
+ *next_regmap = (char *)fdt_stringlist_get(blob, offset, XRT_MD_PROP_COMPATIBLE,
+ 0, NULL);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xrt_md_get_next_endpoint);
+
+int xrt_md_get_compatible_endpoint(struct device *dev, const char *blob,
+ const char *regmap_name, const char **ep_name)
+{
+ int ep_offset;
+
+ ep_offset = fdt_node_offset_by_compatible(blob, -1, regmap_name);
+ if (ep_offset < 0) {
+ *ep_name = NULL;
+ return -ENOENT;
+ }
+
+ *ep_name = fdt_get_name(blob, ep_offset, NULL);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xrt_md_get_compatible_endpoint);
+
+int xrt_md_pack(struct device *dev, char *blob)
+{
+ int ret;
+
+ ret = fdt_pack(blob);
+ if (ret)
+ dev_err(dev, "pack failed %d", ret);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xrt_md_pack);
+
+int xrt_md_get_interface_uuids(struct device *dev, const char *blob,
+ u32 num_uuids, uuid_t *interface_uuids)
+{
+ int offset, count = 0;
+ const char *uuid_str;
+ int ret;
+
+ ret = xrt_md_get_endpoint(dev, blob, XRT_MD_NODE_INTERFACES, NULL, &offset);
+ if (ret)
+ return -ENOENT;
+
+ for (offset = fdt_first_subnode(blob, offset);
+ offset >= 0;
+ offset = fdt_next_subnode(blob, offset), count++) {
+ uuid_str = fdt_getprop(blob, offset, XRT_MD_PROP_INTERFACE_UUID,
+ NULL);
+ if (!uuid_str) {
+ dev_err(dev, "empty interface uuid node");
+ return -EINVAL;
+ }
+
+ if (!num_uuids)
+ continue;
+
+ if (count == num_uuids) {
+ dev_err(dev, "too many interface uuids in blob");
+ return -EINVAL;
+ }
+
+ if (interface_uuids && count < num_uuids) {
+ ret = xrt_md_trans_str2uuid(dev, uuid_str,
+ &interface_uuids[count]);
+ if (ret)
+ return -EINVAL;
+ }
+ }
+ if (!count)
+ count = -ENOENT;
+
+ return count;
+}
+EXPORT_SYMBOL_GPL(xrt_md_get_interface_uuids);
--
2.27.0
Add xrt-lib kernel module infrastructure code to register and manage all
leaf driver modules.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/subdev_id.h | 38 ++++
drivers/fpga/xrt/include/xleaf.h | 264 +++++++++++++++++++++++++
drivers/fpga/xrt/lib/lib-drv.c | 277 +++++++++++++++++++++++++++
drivers/fpga/xrt/lib/lib-drv.h | 17 ++
4 files changed, 596 insertions(+)
create mode 100644 drivers/fpga/xrt/include/subdev_id.h
create mode 100644 drivers/fpga/xrt/include/xleaf.h
create mode 100644 drivers/fpga/xrt/lib/lib-drv.c
create mode 100644 drivers/fpga/xrt/lib/lib-drv.h
diff --git a/drivers/fpga/xrt/include/subdev_id.h b/drivers/fpga/xrt/include/subdev_id.h
new file mode 100644
index 000000000000..42fbd6f5e80a
--- /dev/null
+++ b/drivers/fpga/xrt/include/subdev_id.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XRT_SUBDEV_ID_H_
+#define _XRT_SUBDEV_ID_H_
+
+/*
+ * Every subdev driver has an ID for others to refer to it. There can be
+ * multiple instances of a subdev driver. A <subdev_id, subdev_instance> tuple
+ * uniquely identifies a specific instance of a subdev driver.
+ */
+enum xrt_subdev_id {
+ XRT_SUBDEV_GRP = 0,
+ XRT_SUBDEV_VSEC = 1,
+ XRT_SUBDEV_VSEC_GOLDEN = 2,
+ XRT_SUBDEV_DEVCTL = 3,
+ XRT_SUBDEV_AXIGATE = 4,
+ XRT_SUBDEV_ICAP = 5,
+ XRT_SUBDEV_TEST = 6,
+ XRT_SUBDEV_MGMT_MAIN = 7,
+ XRT_SUBDEV_QSPI = 8,
+ XRT_SUBDEV_MAILBOX = 9,
+ XRT_SUBDEV_CMC = 10,
+ XRT_SUBDEV_CALIB = 11,
+ XRT_SUBDEV_CLKFREQ = 12,
+ XRT_SUBDEV_CLOCK = 13,
+ XRT_SUBDEV_SRSR = 14,
+ XRT_SUBDEV_UCS = 15,
+ XRT_SUBDEV_NUM = 16, /* Total number of subdevs. */
+ XRT_ROOT = -1, /* Special ID for root driver. */
+};
+
+#endif /* _XRT_SUBDEV_ID_H_ */
diff --git a/drivers/fpga/xrt/include/xleaf.h b/drivers/fpga/xrt/include/xleaf.h
new file mode 100644
index 000000000000..acb500df04b0
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf.h
@@ -0,0 +1,264 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ * Sonal Santan <[email protected]>
+ */
+
+#ifndef _XRT_XLEAF_H_
+#define _XRT_XLEAF_H_
+
+#include <linux/platform_device.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include "subdev_id.h"
+#include "xroot.h"
+#include "events.h"
+
+/* All subdev drivers should use below common routines to print out msg. */
+#define DEV(pdev) (&(pdev)->dev)
+#define DEV_PDATA(pdev) \
+ ((struct xrt_subdev_platdata *)dev_get_platdata(DEV(pdev)))
+#define DEV_DRVDATA(pdev) \
+ ((struct xrt_subdev_drvdata *) \
+ platform_get_device_id(pdev)->driver_data)
+#define FMT_PRT(prt_fn, pdev, fmt, args...) \
+ ({typeof(pdev) (_pdev) = (pdev); \
+ prt_fn(DEV(_pdev), "%s %s: " fmt, \
+ DEV_PDATA(_pdev)->xsp_root_name, __func__, ##args); })
+#define xrt_err(pdev, fmt, args...) FMT_PRT(dev_err, pdev, fmt, ##args)
+#define xrt_warn(pdev, fmt, args...) FMT_PRT(dev_warn, pdev, fmt, ##args)
+#define xrt_info(pdev, fmt, args...) FMT_PRT(dev_info, pdev, fmt, ##args)
+#define xrt_dbg(pdev, fmt, args...) FMT_PRT(dev_dbg, pdev, fmt, ##args)
+
+enum {
+ /* Starting cmd for common leaf cmd implemented by all leaves. */
+ XRT_XLEAF_COMMON_BASE = 0,
+ /* Starting cmd for leaves' specific leaf cmds. */
+ XRT_XLEAF_CUSTOM_BASE = 64,
+};
+
+enum xrt_xleaf_common_leaf_cmd {
+ XRT_XLEAF_EVENT = XRT_XLEAF_COMMON_BASE,
+};
+
+/*
+ * If populated by subdev driver, infra will handle the mechanics of
+ * char device (un)registration.
+ */
+enum xrt_subdev_file_mode {
+ /* Infra create cdev, default file name */
+ XRT_SUBDEV_FILE_DEFAULT = 0,
+ /* Infra create cdev, need to encode inst num in file name */
+ XRT_SUBDEV_FILE_MULTI_INST,
+ /* No auto creation of cdev by infra, leaf handles it by itself */
+ XRT_SUBDEV_FILE_NO_AUTO,
+};
+
+struct xrt_subdev_file_ops {
+ const struct file_operations xsf_ops;
+ dev_t xsf_dev_t;
+ const char *xsf_dev_name;
+ enum xrt_subdev_file_mode xsf_mode;
+};
+
+/*
+ * Subdev driver callbacks populated by subdev driver.
+ */
+struct xrt_subdev_drv_ops {
+ /*
+ * Per driver instance callback. The pdev points to the instance.
+ * If defined, these are called by other leaf drivers.
+ * Note that root driver may call into xsd_leaf_call of a group driver.
+ */
+ int (*xsd_leaf_call)(struct platform_device *pdev, u32 cmd, void *arg);
+};
+
+/*
+ * Defined and populated by subdev driver, exported as driver_data in
+ * struct platform_device_id.
+ */
+struct xrt_subdev_drvdata {
+ struct xrt_subdev_file_ops xsd_file_ops;
+ struct xrt_subdev_drv_ops xsd_dev_ops;
+};
+
+/*
+ * Partially initialized by the parent driver, then passed in as the subdev
+ * driver's platform data when creating a subdev driver instance by calling a
+ * platform device register API (platform_device_register_data() or the like).
+ *
+ * Once the device register API returns, the platform driver framework makes a
+ * copy of this buffer and maintains its life cycle. The content of the buffer
+ * is completely owned by the subdev driver.
+ *
+ * Thus, the parent driver should be very careful when it touches this buffer
+ * again once it has been handed over to the subdev driver. The data structure
+ * should not contain pointers to buffers that are managed by other or parent
+ * drivers, since those could be freed before this platform data buffer is
+ * freed by the platform driver framework.
+ */
+struct xrt_subdev_platdata {
+ /*
+ * Per driver instance callback. The pdev points to the instance.
+ * Should always be defined for subdev driver to get service from root.
+ */
+ xrt_subdev_root_cb_t xsp_root_cb;
+ void *xsp_root_cb_arg;
+
+ /* Something to associate w/ root for msg printing. */
+ const char *xsp_root_name;
+
+ /*
+ * Char dev support for this subdev instance.
+ * Initialized by subdev driver.
+ */
+ struct cdev xsp_cdev;
+ struct device *xsp_sysdev;
+ struct mutex xsp_devnode_lock; /* devnode lock */
+ struct completion xsp_devnode_comp;
+ int xsp_devnode_ref;
+ bool xsp_devnode_online;
+ bool xsp_devnode_excl;
+
+ /*
+ * Subdev driver specific init data. The buffer should be embedded
+ * in this platform data buffer right after the dtb, so that it is
+ * freed together with the platform data.
+ */
+ loff_t xsp_priv_off; /* Offset into this platform data buffer. */
+ size_t xsp_priv_len;
+
+ /*
+ * Populated by the parent driver to describe the device tree for
+ * the subdev driver to handle. Should always be the last member
+ * since it is of variable length.
+ */
+ bool xsp_dtb_valid;
+ char xsp_dtb[];
+};
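The single-allocation layout described above (platform data header, then dtb, then subdev private data, located via `xsp_priv_off`) can be modeled in a userspace sketch; the struct below is a simplified stand-in for `struct xrt_subdev_platdata`, not the kernel type itself:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for struct xrt_subdev_platdata. */
struct platdata {
	long priv_off;   /* offset of priv data into this buffer */
	size_t priv_len;
	char dtb[];      /* variable-length dtb, must stay last */
};

/* Allocate one buffer holding header + dtb + priv, as the parent would. */
static struct platdata *platdata_alloc(const void *dtb, size_t dtb_len,
				       const void *priv, size_t priv_len)
{
	struct platdata *p = malloc(sizeof(*p) + dtb_len + priv_len);

	if (!p)
		return NULL;
	memcpy(p->dtb, dtb, dtb_len);
	p->priv_off = sizeof(*p) + dtb_len; /* priv lives right after dtb */
	p->priv_len = priv_len;
	memcpy((char *)p + p->priv_off, priv, priv_len);
	return p;
}
```

Because everything lives in one allocation, a single free releases the dtb and the private data together, which is exactly the life-cycle property the comment above calls out.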
+
+/*
+ * This struct defines the endpoints that belong to the same subdevice.
+ */
+struct xrt_subdev_ep_names {
+ const char *ep_name;
+ const char *regmap_name;
+};
+
+struct xrt_subdev_endpoints {
+ struct xrt_subdev_ep_names *xse_names;
+ /* minimum number of endpoints to support the subdevice */
+ u32 xse_min_ep;
+};
+
+struct subdev_match_arg {
+ enum xrt_subdev_id id;
+ int instance;
+};
+
+bool xleaf_has_endpoint(struct platform_device *pdev, const char *endpoint_name);
+struct platform_device *xleaf_get_leaf(struct platform_device *pdev,
+ xrt_subdev_match_t cb, void *arg);
+
+static inline bool subdev_match(enum xrt_subdev_id id, struct platform_device *pdev, void *arg)
+{
+ const struct subdev_match_arg *a = (struct subdev_match_arg *)arg;
+ int instance = a->instance;
+
+ if (id != a->id)
+ return false;
+ if (instance != pdev->id && instance != PLATFORM_DEVID_NONE)
+ return false;
+ return true;
+}
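The matching logic above can be exercised in isolation. A minimal userspace model (with stub types standing in for the kernel's `platform_device`) shows how an instance of `PLATFORM_DEVID_NONE` acts as a wildcard:

```c
#include <assert.h>
#include <stdbool.h>

#define PLATFORM_DEVID_NONE (-1) /* same value as the kernel constant */

struct fake_pdev { int id; };
struct match_arg { int id; int instance; };

/* Mirrors subdev_match(): same id, and either exact or wildcard instance. */
static bool fake_subdev_match(int id, struct fake_pdev *pdev, void *arg)
{
	const struct match_arg *a = arg;

	if (id != a->id)
		return false;
	if (a->instance != pdev->id && a->instance != PLATFORM_DEVID_NONE)
		return false;
	return true;
}
```

This is why `xleaf_get_leaf_by_id()` callers can pass `PLATFORM_DEVID_NONE` to fetch any instance of a given subdev id.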
+
+static inline bool xrt_subdev_match_epname(enum xrt_subdev_id id,
+ struct platform_device *pdev, void *arg)
+{
+ return xleaf_has_endpoint(pdev, arg);
+}
+
+static inline struct platform_device *
+xleaf_get_leaf_by_id(struct platform_device *pdev,
+ enum xrt_subdev_id id, int instance)
+{
+ struct subdev_match_arg arg = { id, instance };
+
+ return xleaf_get_leaf(pdev, subdev_match, &arg);
+}
+
+static inline struct platform_device *
+xleaf_get_leaf_by_epname(struct platform_device *pdev, const char *name)
+{
+ return xleaf_get_leaf(pdev, xrt_subdev_match_epname, (void *)name);
+}
+
+static inline int xleaf_call(struct platform_device *tgt, u32 cmd, void *arg)
+{
+ struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(tgt);
+
+ return (*drvdata->xsd_dev_ops.xsd_leaf_call)(tgt, cmd, arg);
+}
+
+int xleaf_broadcast_event(struct platform_device *pdev, enum xrt_events evt, bool async);
+int xleaf_create_group(struct platform_device *pdev, char *dtb);
+int xleaf_destroy_group(struct platform_device *pdev, int instance);
+void xleaf_get_barres(struct platform_device *pdev, struct resource **res, uint bar_idx);
+void xleaf_get_root_id(struct platform_device *pdev, unsigned short *vendor, unsigned short *device,
+ unsigned short *subvendor, unsigned short *subdevice);
+void xleaf_hot_reset(struct platform_device *pdev);
+int xleaf_put_leaf(struct platform_device *pdev, struct platform_device *leaf);
+struct device *xleaf_register_hwmon(struct platform_device *pdev, const char *name, void *drvdata,
+ const struct attribute_group **grps);
+void xleaf_unregister_hwmon(struct platform_device *pdev, struct device *hwmon);
+int xleaf_wait_for_group_bringup(struct platform_device *pdev);
+
+/*
+ * Character device helper APIs for use by leaf drivers
+ */
+static inline bool xleaf_devnode_enabled(struct xrt_subdev_drvdata *drvdata)
+{
+ return drvdata && drvdata->xsd_file_ops.xsf_ops.open;
+}
+
+int xleaf_devnode_create(struct platform_device *pdev,
+ const char *file_name, const char *inst_name);
+int xleaf_devnode_destroy(struct platform_device *pdev);
+
+struct platform_device *xleaf_devnode_open_excl(struct inode *inode);
+struct platform_device *xleaf_devnode_open(struct inode *inode);
+void xleaf_devnode_close(struct inode *inode);
+
+/* Helpers. */
+int xleaf_register_driver(enum xrt_subdev_id id, struct platform_driver *drv,
+ struct xrt_subdev_endpoints *eps);
+void xleaf_unregister_driver(enum xrt_subdev_id id);
+
+/* Module's init/fini routines for leaf driver in xrt-lib module */
+#define XRT_LEAF_INIT_FINI_FUNC(_id, name) \
+void name##_leaf_init_fini(bool init) \
+{ \
+ typeof(_id) id = _id; \
+ if (init) { \
+ xleaf_register_driver(id, \
+ &xrt_##name##_driver, \
+ xrt_##name##_endpoints); \
+ } else { \
+ xleaf_unregister_driver(id); \
+ } \
+}
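To illustrate the mechanics of the macro above, here is a userspace re-creation with stub register/unregister hooks (the real macro calls xleaf_register_driver()/xleaf_unregister_driver()): invoking it once at file scope generates a single `name##_leaf_init_fini(bool)` entry point that toggles between registration and unregistration:

```c
#include <assert.h>
#include <stdbool.h>

static int registered_id = -1; /* records the last (un)registration */

static void stub_register_driver(int id) { registered_id = id; }
static void stub_unregister_driver(int id) { (void)id; registered_id = -1; }

/* Userspace analogue of XRT_LEAF_INIT_FINI_FUNC. */
#define LEAF_INIT_FINI_FUNC(_id, name)			\
static void name##_leaf_init_fini(bool init)		\
{							\
	if (init)					\
		stub_register_driver(_id);		\
	else						\
		stub_unregister_driver(_id);		\
}

LEAF_INIT_FINI_FUNC(5, demo) /* expands to demo_leaf_init_fini() */
```

The xrt-lib module init path then only needs an array of these generated functions, called with `true` at load time and `false` at unload time.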
+
+void group_leaf_init_fini(bool init);
+void vsec_leaf_init_fini(bool init);
+void devctl_leaf_init_fini(bool init);
+void axigate_leaf_init_fini(bool init);
+void icap_leaf_init_fini(bool init);
+void calib_leaf_init_fini(bool init);
+void clkfreq_leaf_init_fini(bool init);
+void clock_leaf_init_fini(bool init);
+void ucs_leaf_init_fini(bool init);
+
+#endif /* _XRT_LEAF_H_ */
diff --git a/drivers/fpga/xrt/lib/lib-drv.c b/drivers/fpga/xrt/lib/lib-drv.c
new file mode 100644
index 000000000000..64bb8710be66
--- /dev/null
+++ b/drivers/fpga/xrt/lib/lib-drv.c
@@ -0,0 +1,277 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include "xleaf.h"
+#include "xroot.h"
+#include "lib-drv.h"
+
+#define XRT_IPLIB_MODULE_NAME "xrt-lib"
+#define XRT_IPLIB_MODULE_VERSION "4.0.0"
+#define XRT_MAX_DEVICE_NODES 128
+#define XRT_DRVNAME(drv) ((drv)->driver.name)
+
+/*
+ * A subdev driver is known to others by its ID. We map the ID to its
+ * struct platform_driver, which contains its binding name and driver/file
+ * ops. We also map it to its endpoint name in the DTB, if that name is
+ * different from the driver's binding name.
+ */
+struct xrt_drv_map {
+ struct list_head list;
+ enum xrt_subdev_id id;
+ struct platform_driver *drv;
+ struct xrt_subdev_endpoints *eps;
+ struct ida ida; /* manage driver instance and char dev minor */
+};
+
+static DEFINE_MUTEX(xrt_lib_lock); /* global lock protecting xrt_drv_maps list */
+static LIST_HEAD(xrt_drv_maps);
+struct class *xrt_class;
+
+static inline struct xrt_subdev_drvdata *
+xrt_drv_map2drvdata(struct xrt_drv_map *map)
+{
+ return (struct xrt_subdev_drvdata *)map->drv->id_table[0].driver_data;
+}
+
+static struct xrt_drv_map *
+__xrt_drv_find_map_by_id(enum xrt_subdev_id id)
+{
+ struct xrt_drv_map *tmap;
+
+ list_for_each_entry(tmap, &xrt_drv_maps, list) {
+ if (tmap->id == id)
+ return tmap;
+ }
+ return NULL;
+}
+
+static struct xrt_drv_map *
+xrt_drv_find_map_by_id(enum xrt_subdev_id id)
+{
+ struct xrt_drv_map *map;
+
+ mutex_lock(&xrt_lib_lock);
+ map = __xrt_drv_find_map_by_id(id);
+ mutex_unlock(&xrt_lib_lock);
+ /*
+ * The map remains valid even after the lock is dropped, since a
+ * registered driver is only unregistered when its module is being
+ * unloaded, by which time the driver is no longer in use.
+ */
+ return map;
+}
+
+static int xrt_drv_register_driver(struct xrt_drv_map *map)
+{
+ struct xrt_subdev_drvdata *drvdata;
+ int rc = 0;
+ const char *drvname = XRT_DRVNAME(map->drv);
+
+ rc = platform_driver_register(map->drv);
+ if (rc) {
+ pr_err("failed to register %s platform driver\n", drvname);
+ return rc;
+ }
+
+ drvdata = xrt_drv_map2drvdata(map);
+ if (drvdata) {
+ /* Initialize dev_t for char dev node. */
+ if (xleaf_devnode_enabled(drvdata)) {
+ rc = alloc_chrdev_region(&drvdata->xsd_file_ops.xsf_dev_t, 0,
+ XRT_MAX_DEVICE_NODES, drvname);
+ if (rc) {
+ platform_driver_unregister(map->drv);
+ pr_err("failed to alloc dev minor for %s: %d\n", drvname, rc);
+ return rc;
+ }
+ } else {
+ drvdata->xsd_file_ops.xsf_dev_t = (dev_t)-1;
+ }
+ }
+
+ ida_init(&map->ida);
+
+ pr_info("%s registered successfully\n", drvname);
+
+ return 0;
+}
+
+static void xrt_drv_unregister_driver(struct xrt_drv_map *map)
+{
+ const char *drvname = XRT_DRVNAME(map->drv);
+ struct xrt_subdev_drvdata *drvdata;
+
+ ida_destroy(&map->ida);
+
+ drvdata = xrt_drv_map2drvdata(map);
+ if (drvdata && drvdata->xsd_file_ops.xsf_dev_t != (dev_t)-1) {
+ unregister_chrdev_region(drvdata->xsd_file_ops.xsf_dev_t,
+ XRT_MAX_DEVICE_NODES);
+ }
+
+ platform_driver_unregister(map->drv);
+
+ pr_info("%s unregistered successfully\n", drvname);
+}
+
+int xleaf_register_driver(enum xrt_subdev_id id,
+ struct platform_driver *drv,
+ struct xrt_subdev_endpoints *eps)
+{
+ struct xrt_drv_map *map;
+ int rc;
+
+ mutex_lock(&xrt_lib_lock);
+
+ map = __xrt_drv_find_map_by_id(id);
+ if (map) {
+ mutex_unlock(&xrt_lib_lock);
+ pr_err("Id %d already has a registered driver, 0x%p\n",
+ id, map->drv);
+ return -EEXIST;
+ }
+
+ map = kzalloc(sizeof(*map), GFP_KERNEL);
+ if (!map) {
+ mutex_unlock(&xrt_lib_lock);
+ return -ENOMEM;
+ }
+ map->id = id;
+ map->drv = drv;
+ map->eps = eps;
+
+ rc = xrt_drv_register_driver(map);
+ if (rc) {
+ kfree(map);
+ mutex_unlock(&xrt_lib_lock);
+ return rc;
+ }
+
+ list_add(&map->list, &xrt_drv_maps);
+
+ mutex_unlock(&xrt_lib_lock);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xleaf_register_driver);
+
+void xleaf_unregister_driver(enum xrt_subdev_id id)
+{
+ struct xrt_drv_map *map;
+
+ mutex_lock(&xrt_lib_lock);
+
+ map = __xrt_drv_find_map_by_id(id);
+ if (!map) {
+ mutex_unlock(&xrt_lib_lock);
+ pr_err("Id %d has no registered driver\n", id);
+ return;
+ }
+
+ list_del(&map->list);
+
+ mutex_unlock(&xrt_lib_lock);
+
+ xrt_drv_unregister_driver(map);
+ kfree(map);
+}
+EXPORT_SYMBOL_GPL(xleaf_unregister_driver);
+
+const char *xrt_drv_name(enum xrt_subdev_id id)
+{
+ struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+
+ if (map)
+ return XRT_DRVNAME(map->drv);
+ return NULL;
+}
+
+int xrt_drv_get_instance(enum xrt_subdev_id id)
+{
+ struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+
+ return ida_alloc_range(&map->ida, 0, XRT_MAX_DEVICE_NODES, GFP_KERNEL);
+}
+
+void xrt_drv_put_instance(enum xrt_subdev_id id, int instance)
+{
+ struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+
+ ida_free(&map->ida, instance);
+}
+
+struct xrt_subdev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id)
+{
+ struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
+ struct xrt_subdev_endpoints *eps;
+
+ eps = map ? map->eps : NULL;
+ return eps;
+}
+
+/* Leaf driver's module init/fini callbacks. */
+static void (*leaf_init_fini_cbs[])(bool) = {
+ group_leaf_init_fini,
+ vsec_leaf_init_fini,
+ devctl_leaf_init_fini,
+ axigate_leaf_init_fini,
+ icap_leaf_init_fini,
+ calib_leaf_init_fini,
+ clkfreq_leaf_init_fini,
+ clock_leaf_init_fini,
+ ucs_leaf_init_fini,
+};
+
+static __init int xrt_lib_init(void)
+{
+ int i;
+
+ xrt_class = class_create(THIS_MODULE, XRT_IPLIB_MODULE_NAME);
+ if (IS_ERR(xrt_class))
+ return PTR_ERR(xrt_class);
+
+ for (i = 0; i < ARRAY_SIZE(leaf_init_fini_cbs); i++)
+ leaf_init_fini_cbs[i](true);
+ return 0;
+}
+
+static __exit void xrt_lib_fini(void)
+{
+ struct xrt_drv_map *map;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(leaf_init_fini_cbs); i++)
+ leaf_init_fini_cbs[i](false);
+
+ mutex_lock(&xrt_lib_lock);
+
+ while (!list_empty(&xrt_drv_maps)) {
+ map = list_first_entry_or_null(&xrt_drv_maps, struct xrt_drv_map, list);
+ pr_err("Unloading module with %s still registered\n", XRT_DRVNAME(map->drv));
+ list_del(&map->list);
+ mutex_unlock(&xrt_lib_lock);
+ xrt_drv_unregister_driver(map);
+ kfree(map);
+ mutex_lock(&xrt_lib_lock);
+ }
+
+ mutex_unlock(&xrt_lib_lock);
+
+ class_destroy(xrt_class);
+}
+
+module_init(xrt_lib_init);
+module_exit(xrt_lib_fini);
+
+MODULE_VERSION(XRT_IPLIB_MODULE_VERSION);
+MODULE_AUTHOR("XRT Team <[email protected]>");
+MODULE_DESCRIPTION("Xilinx Alveo IP Lib driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/fpga/xrt/lib/lib-drv.h b/drivers/fpga/xrt/lib/lib-drv.h
new file mode 100644
index 000000000000..a94c58149cb4
--- /dev/null
+++ b/drivers/fpga/xrt/lib/lib-drv.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _LIB_DRV_H_
+#define _LIB_DRV_H_
+
+const char *xrt_drv_name(enum xrt_subdev_id id);
+int xrt_drv_get_instance(enum xrt_subdev_id id);
+void xrt_drv_put_instance(enum xrt_subdev_id id, int instance);
+struct xrt_subdev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id);
+
+#endif /* _LIB_DRV_H_ */
--
2.27.0
platform driver that handles IOCTLs, such as hot reset and xclbin download.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/xmgmt-main.h | 34 ++
drivers/fpga/xrt/mgmt/main.c | 670 ++++++++++++++++++++++++++
drivers/fpga/xrt/mgmt/xmgnt.h | 34 ++
include/uapi/linux/xrt/xmgmt-ioctl.h | 46 ++
4 files changed, 784 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xmgmt-main.h
create mode 100644 drivers/fpga/xrt/mgmt/main.c
create mode 100644 drivers/fpga/xrt/mgmt/xmgnt.h
create mode 100644 include/uapi/linux/xrt/xmgmt-ioctl.h
diff --git a/drivers/fpga/xrt/include/xmgmt-main.h b/drivers/fpga/xrt/include/xmgmt-main.h
new file mode 100644
index 000000000000..dce9f0d1a0dc
--- /dev/null
+++ b/drivers/fpga/xrt/include/xmgmt-main.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XMGMT_MAIN_H_
+#define _XMGMT_MAIN_H_
+
+#include <linux/xrt/xclbin.h>
+#include "xleaf.h"
+
+enum xrt_mgmt_main_leaf_cmd {
+ XRT_MGMT_MAIN_GET_AXLF_SECTION = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+ XRT_MGMT_MAIN_GET_VBNV,
+};
+
+/* There are three kind of partitions. Each of them is programmed independently. */
+enum provider_kind {
+ XMGMT_BLP, /* Base Logic Partition */
+ XMGMT_PLP, /* Provider Logic Partition */
+ XMGMT_ULP, /* User Logic Partition */
+};
+
+struct xrt_mgmt_main_get_axlf_section {
+ enum provider_kind xmmigas_axlf_kind;
+ enum axlf_section_kind xmmigas_section_kind;
+ void *xmmigas_section;
+ u64 xmmigas_section_size;
+};
+
+#endif /* _XMGMT_MAIN_H_ */
diff --git a/drivers/fpga/xrt/mgmt/main.c b/drivers/fpga/xrt/mgmt/main.c
new file mode 100644
index 000000000000..f3b46e1fd78b
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/main.c
@@ -0,0 +1,670 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA MGMT PF entry point driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Sonal Santan <[email protected]>
+ */
+
+#include <linux/firmware.h>
+#include <linux/uaccess.h>
+#include "xclbin-helper.h"
+#include "metadata.h"
+#include "xleaf.h"
+#include <linux/xrt/xmgmt-ioctl.h>
+#include "xleaf/devctl.h"
+#include "xmgmt-main.h"
+#include "fmgr.h"
+#include "xleaf/icap.h"
+#include "xleaf/axigate.h"
+#include "xmgnt.h"
+
+#define XMGMT_MAIN "xmgmt_main"
+#define XMGMT_SUPP_XCLBIN_MAJOR 2
+
+#define XMGMT_FLAG_FLASH_READY 1
+#define XMGMT_FLAG_DEVCTL_READY 2
+
+#define XMGMT_UUID_STR_LEN 80
+
+struct xmgmt_main {
+ struct platform_device *pdev;
+ struct axlf *firmware_blp;
+ struct axlf *firmware_plp;
+ struct axlf *firmware_ulp;
+ u32 flags;
+ struct fpga_manager *fmgr;
+ struct mutex lock; /* busy lock */
+
+ uuid_t *blp_interface_uuids;
+ u32 blp_interface_uuid_num;
+};
+
+/*
+ * VBNV stands for Vendor, BoardID, Name, Version. It is a string
+ * which describes board and shell.
+ *
+ * Caller is responsible for freeing the returned string.
+ */
+char *xmgmt_get_vbnv(struct platform_device *pdev)
+{
+ struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+ const char *vbnv;
+ char *ret;
+ int i;
+
+ if (xmm->firmware_plp)
+ vbnv = xmm->firmware_plp->header.platform_vbnv;
+ else if (xmm->firmware_blp)
+ vbnv = xmm->firmware_blp->header.platform_vbnv;
+ else
+ return NULL;
+
+ ret = kstrdup(vbnv, GFP_KERNEL);
+ if (!ret)
+ return NULL;
+
+ for (i = 0; i < strlen(ret); i++) {
+ if (ret[i] == ':' || ret[i] == '.')
+ ret[i] = '_';
+ }
+ return ret;
+}
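The character substitution in xmgmt_get_vbnv() above (':' and '.' become '_', so the VBNV can be used in firmware paths and sysfs names) is easy to model standalone; the sample VBNV string in the test is hypothetical:

```c
#include <assert.h>
#include <string.h>

/* Replace ':' and '.' with '_' in place, as xmgmt_get_vbnv() does. */
static void vbnv_sanitize(char *s)
{
	for (size_t i = 0; i < strlen(s); i++) {
		if (s[i] == ':' || s[i] == '.')
			s[i] = '_';
	}
}
```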
+
+static int get_dev_uuid(struct platform_device *pdev, char *uuidstr, size_t len)
+{
+ struct xrt_devctl_rw devctl_arg = { 0 };
+ struct platform_device *devctl_leaf;
+ char uuid_buf[UUID_SIZE];
+ uuid_t uuid;
+ int err;
+
+ devctl_leaf = xleaf_get_leaf_by_epname(pdev, XRT_MD_NODE_BLP_ROM);
+ if (!devctl_leaf) {
+ xrt_err(pdev, "can not get %s", XRT_MD_NODE_BLP_ROM);
+ return -EINVAL;
+ }
+
+ devctl_arg.xdr_id = XRT_DEVCTL_ROM_UUID;
+ devctl_arg.xdr_buf = uuid_buf;
+ devctl_arg.xdr_len = sizeof(uuid_buf);
+ devctl_arg.xdr_offset = 0;
+ err = xleaf_call(devctl_leaf, XRT_DEVCTL_READ, &devctl_arg);
+ xleaf_put_leaf(pdev, devctl_leaf);
+ if (err) {
+ xrt_err(pdev, "can not get uuid: %d", err);
+ return err;
+ }
+ import_uuid(&uuid, uuid_buf);
+ xrt_md_trans_uuid2str(&uuid, uuidstr);
+
+ return 0;
+}
+
+int xmgmt_hot_reset(struct platform_device *pdev)
+{
+ int ret = xleaf_broadcast_event(pdev, XRT_EVENT_PRE_HOT_RESET, false);
+
+ if (ret) {
+ xrt_err(pdev, "offline failed, hot reset is canceled");
+ return ret;
+ }
+
+ xleaf_hot_reset(pdev);
+ xleaf_broadcast_event(pdev, XRT_EVENT_POST_HOT_RESET, false);
+ return 0;
+}
+
+static ssize_t reset_store(struct device *dev, struct device_attribute *da,
+ const char *buf, size_t count)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+
+ xmgmt_hot_reset(pdev);
+ return count;
+}
+static DEVICE_ATTR_WO(reset);
+
+static ssize_t VBNV_show(struct device *dev, struct device_attribute *da, char *buf)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ ssize_t ret;
+ char *vbnv;
+
+ vbnv = xmgmt_get_vbnv(pdev);
+ if (!vbnv)
+ return -EINVAL;
+ ret = sprintf(buf, "%s\n", vbnv);
+ kfree(vbnv);
+ return ret;
+}
+static DEVICE_ATTR_RO(VBNV);
+
+/* The logic UUID uniquely identifies the partition. */
+static ssize_t logic_uuids_show(struct device *dev, struct device_attribute *da, char *buf)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ char uuid[XMGMT_UUID_STR_LEN];
+ ssize_t ret;
+
+ /* Getting UUID pointed to by VSEC, should be the same as logic UUID of BLP. */
+ ret = get_dev_uuid(pdev, uuid, sizeof(uuid));
+ if (ret)
+ return ret;
+ ret = sprintf(buf, "%s\n", uuid);
+ return ret;
+}
+static DEVICE_ATTR_RO(logic_uuids);
+
+static ssize_t interface_uuids_show(struct device *dev, struct device_attribute *da, char *buf)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+ ssize_t ret = 0;
+ u32 i;
+
+ for (i = 0; i < xmm->blp_interface_uuid_num; i++) {
+ char uuidstr[XMGMT_UUID_STR_LEN];
+
+ xrt_md_trans_uuid2str(&xmm->blp_interface_uuids[i], uuidstr);
+ ret += sprintf(buf + ret, "%s\n", uuidstr);
+ }
+ return ret;
+}
+static DEVICE_ATTR_RO(interface_uuids);
+
+static struct attribute *xmgmt_main_attrs[] = {
+ &dev_attr_reset.attr,
+ &dev_attr_VBNV.attr,
+ &dev_attr_logic_uuids.attr,
+ &dev_attr_interface_uuids.attr,
+ NULL,
+};
+
+static const struct attribute_group xmgmt_main_attrgroup = {
+ .attrs = xmgmt_main_attrs,
+};
+
+static int load_firmware_from_disk(struct platform_device *pdev, struct axlf **fw_buf, size_t *len)
+{
+ char uuid[XMGMT_UUID_STR_LEN];
+ const struct firmware *fw;
+ char fw_name[256];
+ int err = 0;
+
+ *len = 0;
+ err = get_dev_uuid(pdev, uuid, sizeof(uuid));
+ if (err)
+ return err;
+
+ snprintf(fw_name, sizeof(fw_name), "xilinx/%s/partition.xsabin", uuid);
+ xrt_info(pdev, "try loading fw: %s", fw_name);
+
+ err = request_firmware(&fw, fw_name, DEV(pdev));
+ if (err)
+ return err;
+
+ *fw_buf = vmalloc(fw->size);
+ if (!*fw_buf) {
+ release_firmware(fw);
+ return -ENOMEM;
+ }
+
+ *len = fw->size;
+ memcpy(*fw_buf, fw->data, fw->size);
+
+ release_firmware(fw);
+ return 0;
+}
+
+static const struct axlf *xmgmt_get_axlf_firmware(struct xmgmt_main *xmm, enum provider_kind kind)
+{
+ switch (kind) {
+ case XMGMT_BLP:
+ return xmm->firmware_blp;
+ case XMGMT_PLP:
+ return xmm->firmware_plp;
+ case XMGMT_ULP:
+ return xmm->firmware_ulp;
+ default:
+ xrt_err(xmm->pdev, "unknown axlf kind: %d", kind);
+ return NULL;
+ }
+}
+
+/* The caller needs to free the returned dtb buffer */
+char *xmgmt_get_dtb(struct platform_device *pdev, enum provider_kind kind)
+{
+ struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+ const struct axlf *provider;
+ char *dtb = NULL;
+ int rc;
+
+ provider = xmgmt_get_axlf_firmware(xmm, kind);
+ if (!provider)
+ return dtb;
+
+ rc = xrt_xclbin_get_metadata(DEV(pdev), provider, &dtb);
+ if (rc)
+ xrt_err(pdev, "failed to find dtb: %d", rc);
+ return dtb;
+}
+
+/* The caller needs to free the returned uuid buffer */
+static const char *get_uuid_from_firmware(struct platform_device *pdev, const struct axlf *xclbin)
+{
+ const void *uuiddup = NULL;
+ const void *uuid = NULL;
+ void *dtb = NULL;
+ int rc;
+
+ rc = xrt_xclbin_get_section(DEV(pdev), xclbin, PARTITION_METADATA, &dtb, NULL);
+ if (rc)
+ return NULL;
+
+ rc = xrt_md_get_prop(DEV(pdev), dtb, NULL, NULL, XRT_MD_PROP_LOGIC_UUID, &uuid, NULL);
+ if (!rc)
+ uuiddup = kstrdup(uuid, GFP_KERNEL);
+ vfree(dtb);
+ return uuiddup;
+}
+
+static bool is_valid_firmware(struct platform_device *pdev,
+ const struct axlf *xclbin, size_t fw_len)
+{
+ const char *fw_buf = (const char *)xclbin;
+ size_t axlflen = xclbin->header.length;
+ char dev_uuid[XMGMT_UUID_STR_LEN];
+ const char *fw_uuid;
+ int err;
+
+ err = get_dev_uuid(pdev, dev_uuid, sizeof(dev_uuid));
+ if (err)
+ return false;
+
+ if (memcmp(fw_buf, XCLBIN_VERSION2, sizeof(XCLBIN_VERSION2)) != 0) {
+ xrt_err(pdev, "unknown fw format");
+ return false;
+ }
+
+ if (axlflen > fw_len) {
+ xrt_err(pdev, "truncated fw, length: %zu, expect: %zu", fw_len, axlflen);
+ return false;
+ }
+
+ if (xclbin->header.version_major != XMGMT_SUPP_XCLBIN_MAJOR) {
+ xrt_err(pdev, "firmware is not supported");
+ return false;
+ }
+
+ fw_uuid = get_uuid_from_firmware(pdev, xclbin);
+ if (!fw_uuid || strncmp(fw_uuid, dev_uuid, sizeof(dev_uuid)) != 0) {
+ xrt_err(pdev, "bad fw UUID: %s, expect: %s",
+ fw_uuid ? fw_uuid : "<none>", dev_uuid);
+ kfree(fw_uuid);
+ return false;
+ }
+
+ kfree(fw_uuid);
+ return true;
+}
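The magic and length checks in is_valid_firmware() can be modeled without the device: a buffer is rejected when it does not start with the xclbin magic or when the header claims more bytes than were actually read. The struct and magic below are simplified stand-ins for `struct axlf` and `XCLBIN_VERSION2`:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define FAKE_MAGIC "xclbin2" /* stands in for XCLBIN_VERSION2 */

struct fake_axlf {
	char magic[8];
	size_t length; /* total image length claimed by the header */
};

static bool fake_fw_ok(const struct fake_axlf *fw, size_t read_len)
{
	if (memcmp(fw->magic, FAKE_MAGIC, sizeof(FAKE_MAGIC)) != 0)
		return false; /* unknown format */
	if (fw->length > read_len)
		return false; /* truncated file */
	return true;
}
```

The real function additionally verifies the xclbin major version and compares the firmware's logic UUID against the UUID read from the device ROM.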
+
+int xmgmt_get_provider_uuid(struct platform_device *pdev, enum provider_kind kind, uuid_t *uuid)
+{
+ struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+ const struct axlf *fwbuf;
+ const char *fw_uuid;
+ int rc = -ENOENT;
+
+ mutex_lock(&xmm->lock);
+
+ fwbuf = xmgmt_get_axlf_firmware(xmm, kind);
+ if (!fwbuf)
+ goto done;
+
+ fw_uuid = get_uuid_from_firmware(pdev, fwbuf);
+ if (!fw_uuid)
+ goto done;
+
+ rc = xrt_md_trans_str2uuid(DEV(pdev), fw_uuid, uuid);
+ kfree(fw_uuid);
+
+done:
+ mutex_unlock(&xmm->lock);
+ return rc;
+}
+
+static int xmgmt_create_blp(struct xmgmt_main *xmm)
+{
+ const struct axlf *provider = xmgmt_get_axlf_firmware(xmm, XMGMT_BLP);
+ struct platform_device *pdev = xmm->pdev;
+ int rc = 0;
+ char *dtb = NULL;
+
+ dtb = xmgmt_get_dtb(pdev, XMGMT_BLP);
+ if (!dtb) {
+ xrt_err(pdev, "did not get BLP metadata");
+ return -EINVAL;
+ }
+
+ rc = xmgmt_process_xclbin(xmm->pdev, xmm->fmgr, provider, XMGMT_BLP);
+ if (rc) {
+ xrt_err(pdev, "failed to process BLP: %d", rc);
+ goto failed;
+ }
+
+ rc = xleaf_create_group(pdev, dtb);
+ if (rc < 0)
+ xrt_err(pdev, "failed to create BLP group: %d", rc);
+ else
+ rc = 0;
+
+ WARN_ON(xmm->blp_interface_uuids);
+ rc = xrt_md_get_interface_uuids(&pdev->dev, dtb, 0, NULL);
+ if (rc > 0) {
+ xmm->blp_interface_uuid_num = rc;
+ xmm->blp_interface_uuids = vzalloc(sizeof(uuid_t) * xmm->blp_interface_uuid_num);
+ if (!xmm->blp_interface_uuids) {
+ rc = -ENOMEM;
+ goto failed;
+ }
+ xrt_md_get_interface_uuids(&pdev->dev, dtb, xmm->blp_interface_uuid_num,
+ xmm->blp_interface_uuids);
+ }
+
+failed:
+ vfree(dtb);
+ return rc;
+}
+
+static int xmgmt_load_firmware(struct xmgmt_main *xmm)
+{
+ struct platform_device *pdev = xmm->pdev;
+ size_t fwlen;
+ int rc;
+
+ rc = load_firmware_from_disk(pdev, &xmm->firmware_blp, &fwlen);
+ if (!rc && is_valid_firmware(pdev, xmm->firmware_blp, fwlen))
+ xmgmt_create_blp(xmm);
+ else
+ xrt_err(pdev, "failed to find firmware, giving up: %d", rc);
+ return rc;
+}
+
+static void xmgmt_main_event_cb(struct platform_device *pdev, void *arg)
+{
+ struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+ struct xrt_event *evt = (struct xrt_event *)arg;
+ enum xrt_events e = evt->xe_evt;
+ struct platform_device *leaf;
+ enum xrt_subdev_id id;
+
+ id = evt->xe_subdev.xevt_subdev_id;
+ switch (e) {
+ case XRT_EVENT_POST_CREATION: {
+ if (id == XRT_SUBDEV_DEVCTL && !(xmm->flags & XMGMT_FLAG_DEVCTL_READY)) {
+ leaf = xleaf_get_leaf_by_epname(pdev, XRT_MD_NODE_BLP_ROM);
+ if (leaf) {
+ xmm->flags |= XMGMT_FLAG_DEVCTL_READY;
+ xleaf_put_leaf(pdev, leaf);
+ }
+ } else if (id == XRT_SUBDEV_QSPI && !(xmm->flags & XMGMT_FLAG_FLASH_READY)) {
+ xmm->flags |= XMGMT_FLAG_FLASH_READY;
+ } else {
+ break;
+ }
+
+ if (xmm->flags & XMGMT_FLAG_DEVCTL_READY)
+ xmgmt_load_firmware(xmm);
+ break;
+ }
+ case XRT_EVENT_PRE_REMOVAL:
+ break;
+ default:
+ xrt_dbg(pdev, "ignored event %d", e);
+ break;
+ }
+}
+
+static int xmgmt_main_probe(struct platform_device *pdev)
+{
+ struct xmgmt_main *xmm;
+
+ xrt_info(pdev, "probing...");
+
+ xmm = devm_kzalloc(DEV(pdev), sizeof(*xmm), GFP_KERNEL);
+ if (!xmm)
+ return -ENOMEM;
+
+ xmm->pdev = pdev;
+ xmm->fmgr = xmgmt_fmgr_probe(pdev);
+ if (IS_ERR(xmm->fmgr))
+ return PTR_ERR(xmm->fmgr);
+
+ platform_set_drvdata(pdev, xmm);
+ mutex_init(&xmm->lock);
+
+ /* Ready to handle requests through sysfs nodes. */
+ if (sysfs_create_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup))
+ xrt_err(pdev, "failed to create sysfs group");
+ return 0;
+}
+
+static int xmgmt_main_remove(struct platform_device *pdev)
+{
+ struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+
+ /* By now, group driver should prevent any inter-leaf call. */
+
+ xrt_info(pdev, "leaving...");
+
+ vfree(xmm->blp_interface_uuids);
+ vfree(xmm->firmware_blp);
+ vfree(xmm->firmware_plp);
+ vfree(xmm->firmware_ulp);
+ xmgmt_region_cleanup_all(pdev);
+ xmgmt_fmgr_remove(xmm->fmgr);
+ sysfs_remove_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup);
+ return 0;
+}
+
+static int
+xmgmt_mainleaf_call(struct platform_device *pdev, u32 cmd, void *arg)
+{
+ struct xmgmt_main *xmm = platform_get_drvdata(pdev);
+ int ret = 0;
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ xmgmt_main_event_cb(pdev, arg);
+ break;
+ case XRT_MGMT_MAIN_GET_AXLF_SECTION: {
+ struct xrt_mgmt_main_get_axlf_section *get =
+ (struct xrt_mgmt_main_get_axlf_section *)arg;
+ const struct axlf *firmware = xmgmt_get_axlf_firmware(xmm, get->xmmigas_axlf_kind);
+
+ if (!firmware) {
+ ret = -ENOENT;
+ } else {
+ ret = xrt_xclbin_get_section(DEV(pdev), firmware,
+ get->xmmigas_section_kind,
+ &get->xmmigas_section,
+ &get->xmmigas_section_size);
+ }
+ break;
+ }
+ case XRT_MGMT_MAIN_GET_VBNV: {
+ char **vbnv_p = (char **)arg;
+
+ *vbnv_p = xmgmt_get_vbnv(pdev);
+ if (!*vbnv_p)
+ ret = -EINVAL;
+ break;
+ }
+ default:
+ xrt_err(pdev, "unknown cmd: %d", cmd);
+ ret = -EINVAL;
+ break;
+ }
+ return ret;
+}
+
+static int xmgmt_main_open(struct inode *inode, struct file *file)
+{
+ struct platform_device *pdev = xleaf_devnode_open(inode);
+
+ /* Device may have gone already when we get here. */
+ if (!pdev)
+ return -ENODEV;
+
+ xrt_info(pdev, "opened");
+ file->private_data = platform_get_drvdata(pdev);
+ return 0;
+}
+
+static int xmgmt_main_close(struct inode *inode, struct file *file)
+{
+ struct xmgmt_main *xmm = file->private_data;
+
+ xleaf_devnode_close(inode);
+
+ xrt_info(xmm->pdev, "closed");
+ return 0;
+}
+
+/*
+ * Called for the xclbin load ioctl.
+ */
+static int xmgmt_bitstream_axlf_fpga_mgr(struct xmgmt_main *xmm, void *axlf, size_t size)
+{
+ int ret;
+
+ WARN_ON(!mutex_is_locked(&xmm->lock));
+
+ /*
+ * Should any error happen during download, we can't trust
+ * the cached xclbin any more.
+ */
+ vfree(xmm->firmware_ulp);
+ xmm->firmware_ulp = NULL;
+
+ ret = xmgmt_process_xclbin(xmm->pdev, xmm->fmgr, axlf, XMGMT_ULP);
+ if (ret == 0)
+ xmm->firmware_ulp = axlf;
+
+ return ret;
+}
+
+static int bitstream_axlf_ioctl(struct xmgmt_main *xmm, const void __user *arg)
+{
+ struct xmgmt_ioc_bitstream_axlf ioc_obj = { 0 };
+ struct axlf xclbin_obj = { {0} };
+ size_t copy_buffer_size = 0;
+ void *copy_buffer = NULL;
+ int ret = 0;
+
+ if (copy_from_user((void *)&ioc_obj, arg, sizeof(ioc_obj)))
+ return -EFAULT;
+ if (copy_from_user((void *)&xclbin_obj, ioc_obj.xclbin, sizeof(xclbin_obj)))
+ return -EFAULT;
+ if (memcmp(xclbin_obj.magic, XCLBIN_VERSION2, sizeof(XCLBIN_VERSION2)))
+ return -EINVAL;
+
+ copy_buffer_size = xclbin_obj.header.length;
+ if (copy_buffer_size > XCLBIN_MAX_SIZE || copy_buffer_size < sizeof(xclbin_obj))
+ return -EINVAL;
+ if (xclbin_obj.header.version_major != XMGMT_SUPP_XCLBIN_MAJOR)
+ return -EINVAL;
+
+ copy_buffer = vmalloc(copy_buffer_size);
+ if (!copy_buffer)
+ return -ENOMEM;
+
+ if (copy_from_user(copy_buffer, ioc_obj.xclbin, copy_buffer_size)) {
+ vfree(copy_buffer);
+ return -EFAULT;
+ }
+
+ ret = xmgmt_bitstream_axlf_fpga_mgr(xmm, copy_buffer, copy_buffer_size);
+ if (ret)
+ vfree(copy_buffer);
+
+ return ret;
+}
+
+static long xmgmt_main_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ struct xmgmt_main *xmm = filp->private_data;
+ long result = 0;
+
+ if (_IOC_TYPE(cmd) != XMGMT_IOC_MAGIC)
+ return -ENOTTY;
+
+ mutex_lock(&xmm->lock);
+
+ xrt_info(xmm->pdev, "ioctl cmd %d, arg %ld", cmd, arg);
+ switch (cmd) {
+ case XMGMT_IOCICAPDOWNLOAD_AXLF:
+ result = bitstream_axlf_ioctl(xmm, (const void __user *)arg);
+ break;
+ default:
+ result = -ENOTTY;
+ break;
+ }
+
+ mutex_unlock(&xmm->lock);
+ return result;
+}
+
+static struct xrt_subdev_endpoints xrt_mgmt_main_endpoints[] = {
+ {
+ .xse_names = (struct xrt_subdev_ep_names []){
+ { .ep_name = XRT_MD_NODE_MGMT_MAIN },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_subdev_drvdata xmgmt_main_data = {
+ .xsd_dev_ops = {
+ .xsd_leaf_call = xmgmt_mainleaf_call,
+ },
+ .xsd_file_ops = {
+ .xsf_ops = {
+ .owner = THIS_MODULE,
+ .open = xmgmt_main_open,
+ .release = xmgmt_main_close,
+ .unlocked_ioctl = xmgmt_main_ioctl,
+ },
+ .xsf_dev_name = "xmgmt",
+ },
+};
+
+static const struct platform_device_id xmgmt_main_id_table[] = {
+ { XMGMT_MAIN, (kernel_ulong_t)&xmgmt_main_data },
+ { },
+};
+
+static struct platform_driver xmgmt_main_driver = {
+ .driver = {
+ .name = XMGMT_MAIN,
+ },
+ .probe = xmgmt_main_probe,
+ .remove = xmgmt_main_remove,
+ .id_table = xmgmt_main_id_table,
+};
+
+int xmgmt_register_leaf(void)
+{
+ return xleaf_register_driver(XRT_SUBDEV_MGMT_MAIN,
+ &xmgmt_main_driver, xrt_mgmt_main_endpoints);
+}
+
+void xmgmt_unregister_leaf(void)
+{
+ xleaf_unregister_driver(XRT_SUBDEV_MGMT_MAIN);
+}
diff --git a/drivers/fpga/xrt/mgmt/xmgnt.h b/drivers/fpga/xrt/mgmt/xmgnt.h
new file mode 100644
index 000000000000..9d7c11194745
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/xmgnt.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XMGMT_XMGNT_H_
+#define _XMGMT_XMGNT_H_
+
+#include <linux/platform_device.h>
+#include "xmgmt-main.h"
+
+struct fpga_manager;
+int xmgmt_process_xclbin(struct platform_device *pdev,
+ struct fpga_manager *fmgr,
+ const struct axlf *xclbin,
+ enum provider_kind kind);
+void xmgmt_region_cleanup_all(struct platform_device *pdev);
+
+int xmgmt_hot_reset(struct platform_device *pdev);
+
+/* Get dtb for the specified group. Caller should vfree the returned dtb. */
+char *xmgmt_get_dtb(struct platform_device *pdev, enum provider_kind kind);
+char *xmgmt_get_vbnv(struct platform_device *pdev);
+int xmgmt_get_provider_uuid(struct platform_device *pdev,
+ enum provider_kind kind, uuid_t *uuid);
+
+int xmgmt_register_leaf(void);
+void xmgmt_unregister_leaf(void);
+
+#endif /* _XMGMT_XMGNT_H_ */
diff --git a/include/uapi/linux/xrt/xmgmt-ioctl.h b/include/uapi/linux/xrt/xmgmt-ioctl.h
new file mode 100644
index 000000000000..da992e581189
--- /dev/null
+++ b/include/uapi/linux/xrt/xmgmt-ioctl.h
@@ -0,0 +1,46 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Copyright (C) 2015-2021, Xilinx Inc
+ *
+ */
+
+/**
+ * DOC: PCIe Kernel Driver for Management Physical Function
+ * Interfaces exposed by the *xmgmt* driver are defined in *xmgmt-ioctl.h*.
+ * Core functionality provided by the *xmgmt* driver is described in the following table:
+ *
+ * ==================== ============================== ==========================
+ * Functionality        ioctl request code             data format
+ * ==================== ============================== ==========================
+ * FPGA image download  XMGMT_IOCICAPDOWNLOAD_AXLF     xmgmt_ioc_bitstream_axlf
+ * ==================== ============================== ==========================
+ */
+
+#ifndef _XMGMT_IOCTL_H_
+#define _XMGMT_IOCTL_H_
+
+#include <linux/ioctl.h>
+
+#define XMGMT_IOC_MAGIC 'X'
+#define XMGMT_IOC_ICAP_DOWNLOAD_AXLF 0x6
+
+/**
+ * struct xmgmt_ioc_bitstream_axlf - load xclbin (AXLF) device image
+ * used with XMGMT_IOCICAPDOWNLOAD_AXLF ioctl
+ *
+ * @xclbin: Pointer to user's xclbin structure in memory
+ */
+struct xmgmt_ioc_bitstream_axlf {
+ struct axlf *xclbin;
+};
+
+#define XMGMT_IOCICAPDOWNLOAD_AXLF \
+ _IOW(XMGMT_IOC_MAGIC, XMGMT_IOC_ICAP_DOWNLOAD_AXLF, struct xmgmt_ioc_bitstream_axlf)
+
+/*
+ * The following definitions are for binary compatibility with classic XRT management driver
+ */
+#define XCLMGMT_IOCICAPDOWNLOAD_AXLF XMGMT_IOCICAPDOWNLOAD_AXLF
+#define xclmgmt_ioc_bitstream_axlf xmgmt_ioc_bitstream_axlf
+
+#endif
--
2.27.0
Add User Clock Subsystem (UCS) driver. UCS is a hardware function
discovered by walking xclbin metadata. A platform device node will be
created for it. UCS enables/disables the dynamic region clocks.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/lib/xleaf/ucs.c | 167 +++++++++++++++++++++++++++++++
1 file changed, 167 insertions(+)
create mode 100644 drivers/fpga/xrt/lib/xleaf/ucs.c
diff --git a/drivers/fpga/xrt/lib/xleaf/ucs.c b/drivers/fpga/xrt/lib/xleaf/ucs.c
new file mode 100644
index 000000000000..d91ee229e7cb
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/ucs.c
@@ -0,0 +1,167 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA UCS Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/regmap.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/clock.h"
+
+#define UCS_ERR(ucs, fmt, arg...) \
+ xrt_err((ucs)->pdev, fmt "\n", ##arg)
+#define UCS_WARN(ucs, fmt, arg...) \
+ xrt_warn((ucs)->pdev, fmt "\n", ##arg)
+#define UCS_INFO(ucs, fmt, arg...) \
+ xrt_info((ucs)->pdev, fmt "\n", ##arg)
+#define UCS_DBG(ucs, fmt, arg...) \
+ xrt_dbg((ucs)->pdev, fmt "\n", ##arg)
+
+#define XRT_UCS "xrt_ucs"
+
+#define XRT_UCS_CHANNEL1_REG 0
+#define XRT_UCS_CHANNEL2_REG 8
+
+#define CLK_MAX_VALUE 6400
+
+static const struct regmap_config ucs_regmap_config = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
+ .max_register = 0x1000,
+};
+
+struct xrt_ucs {
+ struct platform_device *pdev;
+ struct regmap *regmap;
+ struct mutex ucs_lock; /* ucs dev lock */
+};
+
+static void xrt_ucs_event_cb(struct platform_device *pdev, void *arg)
+{
+ struct xrt_event *evt = (struct xrt_event *)arg;
+ enum xrt_events e = evt->xe_evt;
+ struct platform_device *leaf;
+ enum xrt_subdev_id id;
+ int instance;
+
+ id = evt->xe_subdev.xevt_subdev_id;
+ instance = evt->xe_subdev.xevt_subdev_instance;
+
+ if (e != XRT_EVENT_POST_CREATION) {
+ xrt_dbg(pdev, "ignored event %d", e);
+ return;
+ }
+
+ if (id != XRT_SUBDEV_CLOCK)
+ return;
+
+ leaf = xleaf_get_leaf_by_id(pdev, XRT_SUBDEV_CLOCK, instance);
+ if (!leaf) {
+ xrt_err(pdev, "failed to get clock subdev");
+ return;
+ }
+
+ xleaf_call(leaf, XRT_CLOCK_VERIFY, NULL);
+ xleaf_put_leaf(pdev, leaf);
+}
+
+static int ucs_enable(struct xrt_ucs *ucs)
+{
+ int ret;
+
+ mutex_lock(&ucs->ucs_lock);
+ ret = regmap_write(ucs->regmap, XRT_UCS_CHANNEL2_REG, 1);
+ mutex_unlock(&ucs->ucs_lock);
+
+ return ret;
+}
+
+static int
+xrt_ucs_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
+{
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ xrt_ucs_event_cb(pdev, arg);
+ break;
+ default:
+ xrt_err(pdev, "unsupported cmd %d", cmd);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int ucs_probe(struct platform_device *pdev)
+{
+ struct xrt_ucs *ucs = NULL;
+ void __iomem *base = NULL;
+ struct resource *res;
+
+ ucs = devm_kzalloc(&pdev->dev, sizeof(*ucs), GFP_KERNEL);
+ if (!ucs)
+ return -ENOMEM;
+
+ platform_set_drvdata(pdev, ucs);
+ ucs->pdev = pdev;
+ mutex_init(&ucs->ucs_lock);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res)
+ return -EINVAL;
+
+ base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+ ucs->regmap = devm_regmap_init_mmio(&pdev->dev, base, &ucs_regmap_config);
+ if (IS_ERR(ucs->regmap)) {
+ UCS_ERR(ucs, "map base %pR failed", res);
+ return PTR_ERR(ucs->regmap);
+ }
+
+ /* Enable the dynamic region clocks; propagate any failure. */
+ return ucs_enable(ucs);
+}
+
+static struct xrt_subdev_endpoints xrt_ucs_endpoints[] = {
+ {
+ .xse_names = (struct xrt_subdev_ep_names[]) {
+ { .ep_name = XRT_MD_NODE_UCS_CONTROL_STATUS },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_subdev_drvdata xrt_ucs_data = {
+ .xsd_dev_ops = {
+ .xsd_leaf_call = xrt_ucs_leaf_call,
+ },
+};
+
+static const struct platform_device_id xrt_ucs_table[] = {
+ { XRT_UCS, (kernel_ulong_t)&xrt_ucs_data },
+ { },
+};
+
+static struct platform_driver xrt_ucs_driver = {
+ .driver = {
+ .name = XRT_UCS,
+ },
+ .probe = ucs_probe,
+ .id_table = xrt_ucs_table,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_UCS, ucs);
--
2.27.0
Add the devctl driver. devctl is a type of hardware function that has
only a few registers to read or write. devctl instances are discovered by
walking firmware metadata, and a platform device node is created for each.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/xleaf/devctl.h | 40 ++++++
drivers/fpga/xrt/lib/xleaf/devctl.c | 183 ++++++++++++++++++++++++
2 files changed, 223 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xleaf/devctl.h
create mode 100644 drivers/fpga/xrt/lib/xleaf/devctl.c
diff --git a/drivers/fpga/xrt/include/xleaf/devctl.h b/drivers/fpga/xrt/include/xleaf/devctl.h
new file mode 100644
index 000000000000..b97f3b6d9326
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf/devctl.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#ifndef _XRT_DEVCTL_H_
+#define _XRT_DEVCTL_H_
+
+#include "xleaf.h"
+
+/*
+ * DEVCTL driver leaf calls.
+ */
+enum xrt_devctl_leaf_cmd {
+ XRT_DEVCTL_READ = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+};
+
+enum xrt_devctl_id {
+ XRT_DEVCTL_ROM_UUID = 0,
+ XRT_DEVCTL_DDR_CALIB,
+ XRT_DEVCTL_GOLDEN_VER,
+ XRT_DEVCTL_MAX
+};
+
+struct xrt_devctl_rw {
+ u32 xdr_id;
+ void *xdr_buf;
+ u32 xdr_len;
+ u32 xdr_offset;
+};
+
+struct xrt_devctl_intf_uuid {
+ u32 uuid_num;
+ uuid_t *uuids;
+};
+
+#endif /* _XRT_DEVCTL_H_ */
diff --git a/drivers/fpga/xrt/lib/xleaf/devctl.c b/drivers/fpga/xrt/lib/xleaf/devctl.c
new file mode 100644
index 000000000000..ae086d7c431d
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/devctl.c
@@ -0,0 +1,183 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA devctl Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/regmap.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/devctl.h"
+
+#define XRT_DEVCTL "xrt_devctl"
+
+struct xrt_name_id {
+ char *ep_name;
+ int id;
+};
+
+static struct xrt_name_id name_id[XRT_DEVCTL_MAX] = {
+ { XRT_MD_NODE_BLP_ROM, XRT_DEVCTL_ROM_UUID },
+ { XRT_MD_NODE_GOLDEN_VER, XRT_DEVCTL_GOLDEN_VER },
+};
+
+static const struct regmap_config devctl_regmap_config = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
+};
+
+struct xrt_devctl {
+ struct platform_device *pdev;
+ struct regmap *regmap[XRT_DEVCTL_MAX];
+ ulong sizes[XRT_DEVCTL_MAX];
+};
+
+static int xrt_devctl_name2id(struct xrt_devctl *devctl, const char *name)
+{
+ int i;
+
+ for (i = 0; i < XRT_DEVCTL_MAX && name_id[i].ep_name; i++) {
+ if (!strncmp(name_id[i].ep_name, name, strlen(name_id[i].ep_name) + 1))
+ return name_id[i].id;
+ }
+
+ return -EINVAL;
+}
+
+static int
+xrt_devctl_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
+{
+ struct xrt_devctl *devctl;
+ int ret = 0;
+
+ devctl = platform_get_drvdata(pdev);
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ /* Does not handle any event. */
+ break;
+ case XRT_DEVCTL_READ: {
+ struct xrt_devctl_rw *rw_arg = arg;
+
+ if (rw_arg->xdr_len & 0x3) {
+ xrt_err(pdev, "invalid len %d", rw_arg->xdr_len);
+ return -EINVAL;
+ }
+
+ if (rw_arg->xdr_id >= XRT_DEVCTL_MAX) {
+ xrt_err(pdev, "invalid id %d", rw_arg->xdr_id);
+ return -EINVAL;
+ }
+
+ if (!devctl->regmap[rw_arg->xdr_id]) {
+ xrt_err(pdev, "io not found, id %d",
+ rw_arg->xdr_id);
+ return -EINVAL;
+ }
+
+ ret = regmap_bulk_read(devctl->regmap[rw_arg->xdr_id], rw_arg->xdr_offset,
+ rw_arg->xdr_buf,
+ rw_arg->xdr_len / devctl_regmap_config.reg_stride);
+ break;
+ }
+ default:
+ xrt_err(pdev, "unsupported cmd %d", cmd);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static int xrt_devctl_probe(struct platform_device *pdev)
+{
+ struct xrt_devctl *devctl = NULL;
+ void __iomem *base = NULL;
+ struct resource *res;
+ int i, id, ret = 0;
+
+ devctl = devm_kzalloc(&pdev->dev, sizeof(*devctl), GFP_KERNEL);
+ if (!devctl)
+ return -ENOMEM;
+
+ devctl->pdev = pdev;
+ platform_set_drvdata(pdev, devctl);
+
+ xrt_info(pdev, "probing...");
+ for (i = 0, res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ res;
+ res = platform_get_resource(pdev, IORESOURCE_MEM, ++i)) {
+ struct regmap_config config = devctl_regmap_config;
+
+ id = xrt_devctl_name2id(devctl, res->name);
+ if (id < 0) {
+ xrt_err(pdev, "ep %s not found", res->name);
+ continue;
+ }
+ base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(base)) {
+ ret = PTR_ERR(base);
+ break;
+ }
+ config.max_register = resource_size(res);
+ devctl->regmap[id] = devm_regmap_init_mmio(&pdev->dev, base, &config);
+ if (IS_ERR(devctl->regmap[id])) {
+ xrt_err(pdev, "map base failed %pR", res);
+ ret = PTR_ERR(devctl->regmap[id]);
+ break;
+ }
+ devctl->sizes[id] = resource_size(res);
+ }
+
+ return ret;
+}
+
+static struct xrt_subdev_endpoints xrt_devctl_endpoints[] = {
+ {
+ .xse_names = (struct xrt_subdev_ep_names[]) {
+ /* add name if ep is in same partition */
+ { .ep_name = XRT_MD_NODE_BLP_ROM },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ {
+ .xse_names = (struct xrt_subdev_ep_names[]) {
+ { .ep_name = XRT_MD_NODE_GOLDEN_VER },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ /* adding ep bundle generates devctl device instance */
+ { 0 },
+};
+
+static struct xrt_subdev_drvdata xrt_devctl_data = {
+ .xsd_dev_ops = {
+ .xsd_leaf_call = xrt_devctl_leaf_call,
+ },
+};
+
+static const struct platform_device_id xrt_devctl_table[] = {
+ { XRT_DEVCTL, (kernel_ulong_t)&xrt_devctl_data },
+ { },
+};
+
+static struct platform_driver xrt_devctl_driver = {
+ .driver = {
+ .name = XRT_DEVCTL,
+ },
+ .probe = xrt_devctl_probe,
+ .id_table = xrt_devctl_table,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_DEVCTL, devctl);
--
2.27.0
Add the VSEC driver. VSEC is a hardware function discovered by walking
PCI Express configuration space. A platform device node will be created
for it. VSEC provides the board logic UUID and the offsets of a few
other hardware functions.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/lib/xleaf/vsec.c | 388 ++++++++++++++++++++++++++++++
1 file changed, 388 insertions(+)
create mode 100644 drivers/fpga/xrt/lib/xleaf/vsec.c
diff --git a/drivers/fpga/xrt/lib/xleaf/vsec.c b/drivers/fpga/xrt/lib/xleaf/vsec.c
new file mode 100644
index 000000000000..8595d23f5710
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/vsec.c
@@ -0,0 +1,388 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA VSEC Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou<[email protected]>
+ */
+
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include "metadata.h"
+#include "xleaf.h"
+
+#define XRT_VSEC "xrt_vsec"
+
+#define VSEC_TYPE_UUID 0x50
+#define VSEC_TYPE_FLASH 0x51
+#define VSEC_TYPE_PLATINFO 0x52
+#define VSEC_TYPE_MAILBOX 0x53
+#define VSEC_TYPE_END 0xff
+
+#define VSEC_UUID_LEN 16
+
+#define VSEC_REG_FORMAT 0x0
+#define VSEC_REG_LENGTH 0x4
+#define VSEC_REG_ENTRY 0x8
+
+struct xrt_vsec_header {
+ u32 format;
+ u32 length;
+ u32 entry_sz;
+ u32 rsvd;
+} __packed;
+
+struct xrt_vsec_entry {
+ u8 type;
+ u8 bar_rev;
+ u16 off_lo;
+ u32 off_hi;
+ u8 ver_type;
+ u8 minor;
+ u8 major;
+ u8 rsvd0;
+ u32 rsvd1;
+} __packed;
+
+struct vsec_device {
+ u8 type;
+ char *ep_name;
+ ulong size;
+ char *regmap;
+};
+
+static struct vsec_device vsec_devs[] = {
+ {
+ .type = VSEC_TYPE_UUID,
+ .ep_name = XRT_MD_NODE_BLP_ROM,
+ .size = VSEC_UUID_LEN,
+ .regmap = "vsec-uuid",
+ },
+ {
+ .type = VSEC_TYPE_FLASH,
+ .ep_name = XRT_MD_NODE_FLASH_VSEC,
+ .size = 4096,
+ .regmap = "vsec-flash",
+ },
+ {
+ .type = VSEC_TYPE_PLATINFO,
+ .ep_name = XRT_MD_NODE_PLAT_INFO,
+ .size = 4,
+ .regmap = "vsec-platinfo",
+ },
+ {
+ .type = VSEC_TYPE_MAILBOX,
+ .ep_name = XRT_MD_NODE_MAILBOX_VSEC,
+ .size = 48,
+ .regmap = "vsec-mbx",
+ },
+};
+
+static const struct regmap_config vsec_regmap_config = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
+ .max_register = 0x1000,
+};
+
+struct xrt_vsec {
+ struct platform_device *pdev;
+ struct regmap *regmap;
+ u32 length;
+
+ char *metadata;
+ char uuid[VSEC_UUID_LEN];
+ int group;
+};
+
+static inline int vsec_read_entry(struct xrt_vsec *vsec, u32 index, struct xrt_vsec_entry *entry)
+{
+ int ret;
+
+ ret = regmap_bulk_read(vsec->regmap, sizeof(struct xrt_vsec_header) +
+ index * sizeof(struct xrt_vsec_entry), entry,
+ sizeof(struct xrt_vsec_entry) /
+ vsec_regmap_config.reg_stride);
+
+ return ret;
+}
+
+static inline u32 vsec_get_bar(struct xrt_vsec_entry *entry)
+{
+ return (entry->bar_rev >> 4) & 0xf;
+}
+
+static inline u64 vsec_get_bar_off(struct xrt_vsec_entry *entry)
+{
+ return entry->off_lo | ((u64)entry->off_hi << 16);
+}
+
+static inline u32 vsec_get_rev(struct xrt_vsec_entry *entry)
+{
+ return entry->bar_rev & 0xf;
+}
+
+static char *type2epname(u32 type)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
+ if (vsec_devs[i].type == type)
+ return vsec_devs[i].ep_name;
+ }
+
+ return NULL;
+}
+
+static ulong type2size(u32 type)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
+ if (vsec_devs[i].type == type)
+ return vsec_devs[i].size;
+ }
+
+ return 0;
+}
+
+static char *type2regmap(u32 type)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
+ if (vsec_devs[i].type == type)
+ return vsec_devs[i].regmap;
+ }
+
+ return NULL;
+}
+
+static int xrt_vsec_add_node(struct xrt_vsec *vsec,
+ void *md_blob, struct xrt_vsec_entry *p_entry)
+{
+ struct xrt_md_endpoint ep;
+ char regmap_ver[64];
+ int ret;
+
+ if (!type2epname(p_entry->type))
+ return -EINVAL;
+
+ /*
+ * VSEC may have more than 1 mailbox instance for the card
+ * which has more than 1 physical function.
+ * This is not supported for now. Assuming only one mailbox
+ */
+
+ snprintf(regmap_ver, sizeof(regmap_ver) - 1, "%d-%d.%d.%d",
+ p_entry->ver_type, p_entry->major, p_entry->minor,
+ vsec_get_rev(p_entry));
+ ep.ep_name = type2epname(p_entry->type);
+ ep.bar = vsec_get_bar(p_entry);
+ ep.bar_off = vsec_get_bar_off(p_entry);
+ ep.size = type2size(p_entry->type);
+ ep.regmap = type2regmap(p_entry->type);
+ ep.regmap_ver = regmap_ver;
+ ret = xrt_md_add_endpoint(DEV(vsec->pdev), vsec->metadata, &ep);
+ if (ret)
+ xrt_err(vsec->pdev, "add ep failed, ret %d", ret);
+
+ return ret;
+}
+
+static int xrt_vsec_create_metadata(struct xrt_vsec *vsec)
+{
+ struct xrt_vsec_entry entry;
+ int i, ret;
+
+ ret = xrt_md_create(&vsec->pdev->dev, &vsec->metadata);
+ if (ret) {
+ xrt_err(vsec->pdev, "create metadata failed");
+ return ret;
+ }
+
+ for (i = 0; i * sizeof(entry) < vsec->length -
+ sizeof(struct xrt_vsec_header); i++) {
+ ret = vsec_read_entry(vsec, i, &entry);
+ if (ret) {
+ xrt_err(vsec->pdev, "failed read entry %d, ret %d", i, ret);
+ goto fail;
+ }
+
+ if (entry.type == VSEC_TYPE_END)
+ break;
+ ret = xrt_vsec_add_node(vsec, vsec->metadata, &entry);
+ if (ret)
+ goto fail;
+ }
+
+ return 0;
+
+fail:
+ vfree(vsec->metadata);
+ vsec->metadata = NULL;
+ return ret;
+}
+
+static int xrt_vsec_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
+{
+ int ret = 0;
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ /* Does not handle any event. */
+ break;
+ default:
+ ret = -EINVAL;
+ xrt_err(pdev, "should never be called");
+ break;
+ }
+
+ return ret;
+}
+
+static int xrt_vsec_mapio(struct xrt_vsec *vsec)
+{
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(vsec->pdev);
+ struct resource *res = NULL;
+ void __iomem *base = NULL;
+ const u64 *bar_off;
+ const u32 *bar;
+ u64 addr;
+ int ret;
+
+ if (!pdata || xrt_md_size(DEV(vsec->pdev), pdata->xsp_dtb) == XRT_MD_INVALID_LENGTH) {
+ xrt_err(vsec->pdev, "empty metadata");
+ return -EINVAL;
+ }
+
+ ret = xrt_md_get_prop(DEV(vsec->pdev), pdata->xsp_dtb, XRT_MD_NODE_VSEC,
+ NULL, XRT_MD_PROP_BAR_IDX, (const void **)&bar, NULL);
+ if (ret) {
+ xrt_err(vsec->pdev, "failed to get bar idx, ret %d", ret);
+ return -EINVAL;
+ }
+
+ ret = xrt_md_get_prop(DEV(vsec->pdev), pdata->xsp_dtb, XRT_MD_NODE_VSEC,
+ NULL, XRT_MD_PROP_OFFSET, (const void **)&bar_off, NULL);
+ if (ret) {
+ xrt_err(vsec->pdev, "failed to get bar off, ret %d", ret);
+ return -EINVAL;
+ }
+
+ xrt_info(vsec->pdev, "Map vsec at bar %d, offset 0x%llx",
+ be32_to_cpu(*bar), be64_to_cpu(*bar_off));
+
+ xleaf_get_barres(vsec->pdev, &res, be32_to_cpu(*bar));
+ if (!res) {
+ xrt_err(vsec->pdev, "failed to get bar addr");
+ return -EINVAL;
+ }
+
+ addr = res->start + be64_to_cpu(*bar_off);
+
+ base = devm_ioremap(&vsec->pdev->dev, addr, vsec_regmap_config.max_register);
+ if (!base) {
+ xrt_err(vsec->pdev, "Map failed");
+ return -EIO;
+ }
+
+ vsec->regmap = devm_regmap_init_mmio(&vsec->pdev->dev, base, &vsec_regmap_config);
+ if (IS_ERR(vsec->regmap)) {
+ xrt_err(vsec->pdev, "regmap %pR failed", res);
+ return PTR_ERR(vsec->regmap);
+ }
+
+ ret = regmap_read(vsec->regmap, VSEC_REG_LENGTH, &vsec->length);
+ if (ret) {
+ xrt_err(vsec->pdev, "failed to read length %d", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int xrt_vsec_remove(struct platform_device *pdev)
+{
+ struct xrt_vsec *vsec;
+
+ vsec = platform_get_drvdata(pdev);
+
+ if (vsec->group >= 0)
+ xleaf_destroy_group(pdev, vsec->group);
+ vfree(vsec->metadata);
+
+ return 0;
+}
+
+static int xrt_vsec_probe(struct platform_device *pdev)
+{
+ struct xrt_vsec *vsec;
+ int ret = 0;
+
+ vsec = devm_kzalloc(&pdev->dev, sizeof(*vsec), GFP_KERNEL);
+ if (!vsec)
+ return -ENOMEM;
+
+ vsec->pdev = pdev;
+ vsec->group = -1;
+ platform_set_drvdata(pdev, vsec);
+
+ ret = xrt_vsec_mapio(vsec);
+ if (ret)
+ goto failed;
+
+ ret = xrt_vsec_create_metadata(vsec);
+ if (ret) {
+ xrt_err(pdev, "create metadata failed, ret %d", ret);
+ goto failed;
+ }
+ vsec->group = xleaf_create_group(pdev, vsec->metadata);
+ if (vsec->group < 0) {
+ xrt_err(pdev, "create group failed, ret %d", vsec->group);
+ ret = vsec->group;
+ goto failed;
+ }
+
+ return 0;
+
+failed:
+ xrt_vsec_remove(pdev);
+
+ return ret;
+}
+
+static struct xrt_subdev_endpoints xrt_vsec_endpoints[] = {
+ {
+ .xse_names = (struct xrt_subdev_ep_names []){
+ { .ep_name = XRT_MD_NODE_VSEC },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_subdev_drvdata xrt_vsec_data = {
+ .xsd_dev_ops = {
+ .xsd_leaf_call = xrt_vsec_leaf_call,
+ },
+};
+
+static const struct platform_device_id xrt_vsec_table[] = {
+ { XRT_VSEC, (kernel_ulong_t)&xrt_vsec_data },
+ { },
+};
+
+static struct platform_driver xrt_vsec_driver = {
+ .driver = {
+ .name = XRT_VSEC,
+ },
+ .probe = xrt_vsec_probe,
+ .remove = xrt_vsec_remove,
+ .id_table = xrt_vsec_table,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_VSEC, vsec);
--
2.27.0
Add the clock frequency counter driver. The clock frequency counter is
a hardware function discovered by walking xclbin metadata. A platform
device node will be created for it. Other parts of the driver can read
the actual clock frequency through the clock frequency counter driver.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/xleaf/clkfreq.h | 21 ++
drivers/fpga/xrt/lib/xleaf/clkfreq.c | 240 +++++++++++++++++++++++
2 files changed, 261 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xleaf/clkfreq.h
create mode 100644 drivers/fpga/xrt/lib/xleaf/clkfreq.c
diff --git a/drivers/fpga/xrt/include/xleaf/clkfreq.h b/drivers/fpga/xrt/include/xleaf/clkfreq.h
new file mode 100644
index 000000000000..005441d5df78
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf/clkfreq.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#ifndef _XRT_CLKFREQ_H_
+#define _XRT_CLKFREQ_H_
+
+#include "xleaf.h"
+
+/*
+ * CLKFREQ driver leaf calls.
+ */
+enum xrt_clkfreq_leaf_cmd {
+ XRT_CLKFREQ_READ = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+};
+
+#endif /* _XRT_CLKFREQ_H_ */
diff --git a/drivers/fpga/xrt/lib/xleaf/clkfreq.c b/drivers/fpga/xrt/lib/xleaf/clkfreq.c
new file mode 100644
index 000000000000..49473adde3fd
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/clkfreq.c
@@ -0,0 +1,240 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Clock Frequency Counter Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/regmap.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/clkfreq.h"
+
+#define CLKFREQ_ERR(clkfreq, fmt, arg...) \
+ xrt_err((clkfreq)->pdev, fmt "\n", ##arg)
+#define CLKFREQ_WARN(clkfreq, fmt, arg...) \
+ xrt_warn((clkfreq)->pdev, fmt "\n", ##arg)
+#define CLKFREQ_INFO(clkfreq, fmt, arg...) \
+ xrt_info((clkfreq)->pdev, fmt "\n", ##arg)
+#define CLKFREQ_DBG(clkfreq, fmt, arg...) \
+ xrt_dbg((clkfreq)->pdev, fmt "\n", ##arg)
+
+#define XRT_CLKFREQ "xrt_clkfreq"
+
+#define XRT_CLKFREQ_CONTROL_STATUS_MASK 0xffff
+
+#define XRT_CLKFREQ_CONTROL_START 0x1
+#define XRT_CLKFREQ_CONTROL_DONE 0x2
+#define XRT_CLKFREQ_V5_CLK0_ENABLED 0x10000
+
+#define XRT_CLKFREQ_CONTROL_REG 0
+#define XRT_CLKFREQ_COUNT_REG 0x8
+#define XRT_CLKFREQ_V5_COUNT_REG 0x10
+
+#define XRT_CLKFREQ_READ_RETRIES 10
+
+static const struct regmap_config clkfreq_regmap_config = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
+ .max_register = 0x1000,
+};
+
+struct clkfreq {
+ struct platform_device *pdev;
+ struct regmap *regmap;
+ const char *clkfreq_ep_name;
+ struct mutex clkfreq_lock; /* clock counter dev lock */
+};
+
+static int clkfreq_read(struct clkfreq *clkfreq, u32 *freq)
+{
+ int times = XRT_CLKFREQ_READ_RETRIES;
+ u32 status;
+ int ret;
+
+ *freq = 0;
+ mutex_lock(&clkfreq->clkfreq_lock);
+ ret = regmap_write(clkfreq->regmap, XRT_CLKFREQ_CONTROL_REG, XRT_CLKFREQ_CONTROL_START);
+ if (ret) {
+ CLKFREQ_INFO(clkfreq, "write start to control reg failed %d", ret);
+ goto failed;
+ }
+ while (times != 0) {
+ ret = regmap_read(clkfreq->regmap, XRT_CLKFREQ_CONTROL_REG, &status);
+ if (ret) {
+ CLKFREQ_INFO(clkfreq, "read control reg failed %d", ret);
+ goto failed;
+ }
+ if ((status & XRT_CLKFREQ_CONTROL_STATUS_MASK) == XRT_CLKFREQ_CONTROL_DONE)
+ break;
+ mdelay(1);
+ times--;
+ }
+
+ if (!times) {
+ ret = -ETIMEDOUT;
+ goto failed;
+ }
+
+ if (status & XRT_CLKFREQ_V5_CLK0_ENABLED)
+ ret = regmap_read(clkfreq->regmap, XRT_CLKFREQ_V5_COUNT_REG, freq);
+ else
+ ret = regmap_read(clkfreq->regmap, XRT_CLKFREQ_COUNT_REG, freq);
+ if (ret) {
+ CLKFREQ_INFO(clkfreq, "read count failed %d", ret);
+ goto failed;
+ }
+
+ mutex_unlock(&clkfreq->clkfreq_lock);
+
+ return 0;
+
+failed:
+ mutex_unlock(&clkfreq->clkfreq_lock);
+
+ return ret;
+}
+
+static ssize_t freq_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct clkfreq *clkfreq = platform_get_drvdata(to_platform_device(dev));
+ ssize_t count;
+ u32 freq;
+
+ if (clkfreq_read(clkfreq, &freq))
+ return -EINVAL;
+
+ count = sysfs_emit(buf, "%u\n", freq);
+
+ return count;
+}
+static DEVICE_ATTR_RO(freq);
+
+static struct attribute *clkfreq_attrs[] = {
+ &dev_attr_freq.attr,
+ NULL,
+};
+
+static const struct attribute_group clkfreq_attr_group = {
+ .attrs = clkfreq_attrs,
+};
+
+static int
+xrt_clkfreq_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
+{
+ struct clkfreq *clkfreq;
+ int ret = 0;
+
+ clkfreq = platform_get_drvdata(pdev);
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ /* Does not handle any event. */
+ break;
+ case XRT_CLKFREQ_READ:
+ ret = clkfreq_read(clkfreq, arg);
+ break;
+ default:
+ xrt_err(pdev, "unsupported cmd %d", cmd);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static int clkfreq_remove(struct platform_device *pdev)
+{
+ sysfs_remove_group(&pdev->dev.kobj, &clkfreq_attr_group);
+
+ return 0;
+}
+
+static int clkfreq_probe(struct platform_device *pdev)
+{
+ struct clkfreq *clkfreq = NULL;
+ void __iomem *base = NULL;
+ struct resource *res;
+ int ret;
+
+ clkfreq = devm_kzalloc(&pdev->dev, sizeof(*clkfreq), GFP_KERNEL);
+ if (!clkfreq)
+ return -ENOMEM;
+
+ platform_set_drvdata(pdev, clkfreq);
+ clkfreq->pdev = pdev;
+ mutex_init(&clkfreq->clkfreq_lock);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ ret = -EINVAL;
+ goto failed;
+ }
+ base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(base)) {
+ ret = PTR_ERR(base);
+ goto failed;
+ }
+
+ clkfreq->regmap = devm_regmap_init_mmio(&pdev->dev, base, &clkfreq_regmap_config);
+ if (IS_ERR(clkfreq->regmap)) {
+ CLKFREQ_ERR(clkfreq, "regmap %pR failed", res);
+ ret = PTR_ERR(clkfreq->regmap);
+ goto failed;
+ }
+ clkfreq->clkfreq_ep_name = res->name;
+
+ ret = sysfs_create_group(&pdev->dev.kobj, &clkfreq_attr_group);
+ if (ret) {
+ CLKFREQ_ERR(clkfreq, "create clkfreq attrs failed: %d", ret);
+ goto failed;
+ }
+
+ CLKFREQ_INFO(clkfreq, "successfully initialized clkfreq subdev");
+
+ return 0;
+
+failed:
+ return ret;
+}
+
+static struct xrt_subdev_endpoints xrt_clkfreq_endpoints[] = {
+ {
+ .xse_names = (struct xrt_subdev_ep_names[]) {
+ { .regmap_name = XRT_MD_REGMAP_CLKFREQ },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_subdev_drvdata xrt_clkfreq_data = {
+ .xsd_dev_ops = {
+ .xsd_leaf_call = xrt_clkfreq_leaf_call,
+ },
+};
+
+static const struct platform_device_id xrt_clkfreq_table[] = {
+ { XRT_CLKFREQ, (kernel_ulong_t)&xrt_clkfreq_data },
+ { },
+};
+
+static struct platform_driver xrt_clkfreq_driver = {
+ .driver = {
+ .name = XRT_CLKFREQ,
+ },
+ .probe = clkfreq_probe,
+ .remove = clkfreq_remove,
+ .id_table = xrt_clkfreq_table,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_CLKFREQ, clkfreq);
--
2.27.0
ICAP stands for Internal Configuration Access Port. ICAP is discovered
by walking firmware metadata. A platform device node will be created
for it. The FPGA bitstream is written to the hardware through ICAP.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/xleaf/icap.h | 27 ++
drivers/fpga/xrt/lib/xleaf/icap.c | 344 ++++++++++++++++++++++++++
2 files changed, 371 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xleaf/icap.h
create mode 100644 drivers/fpga/xrt/lib/xleaf/icap.c
diff --git a/drivers/fpga/xrt/include/xleaf/icap.h b/drivers/fpga/xrt/include/xleaf/icap.h
new file mode 100644
index 000000000000..96d39a8934fa
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf/icap.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#ifndef _XRT_ICAP_H_
+#define _XRT_ICAP_H_
+
+#include "xleaf.h"
+
+/*
+ * ICAP driver leaf calls.
+ */
+enum xrt_icap_leaf_cmd {
+ XRT_ICAP_WRITE = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+ XRT_ICAP_GET_IDCODE,
+};
+
+struct xrt_icap_wr {
+ void *xiiw_bit_data;
+ u32 xiiw_data_len;
+};
+
+#endif /* _XRT_ICAP_H_ */
diff --git a/drivers/fpga/xrt/lib/xleaf/icap.c b/drivers/fpga/xrt/lib/xleaf/icap.c
new file mode 100644
index 000000000000..13db2b759138
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/icap.c
@@ -0,0 +1,344 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA ICAP Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ * Sonal Santan <[email protected]>
+ * Max Zhen <[email protected]>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/regmap.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/icap.h"
+#include "xclbin-helper.h"
+
+#define XRT_ICAP "xrt_icap"
+
+#define ICAP_ERR(icap, fmt, arg...) \
+ xrt_err((icap)->pdev, fmt "\n", ##arg)
+#define ICAP_WARN(icap, fmt, arg...) \
+ xrt_warn((icap)->pdev, fmt "\n", ##arg)
+#define ICAP_INFO(icap, fmt, arg...) \
+ xrt_info((icap)->pdev, fmt "\n", ##arg)
+#define ICAP_DBG(icap, fmt, arg...) \
+ xrt_dbg((icap)->pdev, fmt "\n", ##arg)
+
+/*
+ * AXI-HWICAP IP register layout. Please see
+ * https://www.xilinx.com/support/documentation/ip_documentation/axi_hwicap/v3_0/pg134-axi-hwicap.pdf
+ */
+#define ICAP_REG_GIER 0x1C
+#define ICAP_REG_ISR 0x20
+#define ICAP_REG_IER 0x28
+#define ICAP_REG_WF 0x100
+#define ICAP_REG_RF 0x104
+#define ICAP_REG_SZ 0x108
+#define ICAP_REG_CR 0x10C
+#define ICAP_REG_SR 0x110
+#define ICAP_REG_WFV 0x114
+#define ICAP_REG_RFO 0x118
+#define ICAP_REG_ASR 0x11C
+
+#define ICAP_STATUS_EOS 0x4
+#define ICAP_STATUS_DONE 0x1
+
+/*
+ * Canned command sequence to obtain IDCODE of the FPGA
+ */
+static const u32 idcode_stream[] = {
+ /* dummy word */
+ cpu_to_be32(0xffffffff),
+ /* sync word */
+ cpu_to_be32(0xaa995566),
+ /* NOP word */
+ cpu_to_be32(0x20000000),
+ /* NOP word */
+ cpu_to_be32(0x20000000),
+ /* ID code */
+ cpu_to_be32(0x28018001),
+ /* NOP word */
+ cpu_to_be32(0x20000000),
+ /* NOP word */
+ cpu_to_be32(0x20000000),
+};
+
+static const struct regmap_config icap_regmap_config = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
+ .max_register = 0x1000,
+};
+
+struct icap {
+ struct platform_device *pdev;
+ struct regmap *regmap;
+ struct mutex icap_lock; /* icap dev lock */
+
+ u32 idcode;
+};
+
+static int wait_for_done(const struct icap *icap)
+{
+ int i = 0;
+ int ret;
+ u32 w;
+
+ for (i = 0; i < 10; i++) {
+ /*
+ * It takes a few microseconds for the ICAP to process incoming
+ * data. Polling every 5 us up to 10 times is sufficient.
+ */
+ udelay(5);
+ ret = regmap_read(icap->regmap, ICAP_REG_SR, &w);
+ if (ret)
+ return ret;
+ ICAP_DBG(icap, "XHWICAP_SR: %x", w);
+ if (w & (ICAP_STATUS_EOS | ICAP_STATUS_DONE))
+ return 0;
+ }
+
+ ICAP_ERR(icap, "bitstream download timeout");
+ return -ETIMEDOUT;
+}
+
+static int icap_write(const struct icap *icap, const u32 *word_buf, int size)
+{
+ u32 value = 0;
+ int ret;
+ int i;
+
+ for (i = 0; i < size; i++) {
+ value = be32_to_cpu(word_buf[i]);
+ ret = regmap_write(icap->regmap, ICAP_REG_WF, value);
+ if (ret)
+ return ret;
+ }
+
+ ret = regmap_write(icap->regmap, ICAP_REG_CR, 0x1);
+ if (ret)
+ return ret;
+
+ for (i = 0; i < 20; i++) {
+ ret = regmap_read(icap->regmap, ICAP_REG_CR, &value);
+ if (ret)
+ return ret;
+
+ if ((value & 0x1) == 0)
+ return 0;
+ ndelay(50);
+ }
+
+ ICAP_ERR(icap, "timed out writing %d dwords", size);
+ return -EIO;
+}
+
+static int bitstream_helper(struct icap *icap, const u32 *word_buffer,
+ u32 word_count)
+{
+ u32 wr_fifo_vacancy = 0;
+ u32 word_written = 0;
+ u32 remain_word;
+ int err = 0;
+
+ WARN_ON(!mutex_is_locked(&icap->icap_lock));
+ for (remain_word = word_count; remain_word > 0;
+ remain_word -= word_written, word_buffer += word_written) {
+ err = regmap_read(icap->regmap, ICAP_REG_WFV, &wr_fifo_vacancy);
+ if (err) {
+ ICAP_ERR(icap, "read wr_fifo_vacancy failed %d", err);
+ break;
+ }
+ if (!wr_fifo_vacancy) {
+ ICAP_ERR(icap, "no write FIFO vacancy");
+ err = -EIO;
+ break;
+ }
+ word_written = (wr_fifo_vacancy < remain_word) ?
+ wr_fifo_vacancy : remain_word;
+ if (icap_write(icap, word_buffer, word_written) != 0) {
+ ICAP_ERR(icap, "write failed remain %d, written %d",
+ remain_word, word_written);
+ err = -EIO;
+ break;
+ }
+ }
+
+ return err;
+}
+
+static int icap_download(struct icap *icap, const char *buffer,
+ unsigned long length)
+{
+ u32 num_chars_read = XCLBIN_HWICAP_BITFILE_BUF_SZ;
+ u32 byte_read;
+ int err = 0;
+
+ if (length % sizeof(u32)) {
+ ICAP_ERR(icap, "invalid bitstream length %lu", length);
+ return -EINVAL;
+ }
+
+ mutex_lock(&icap->icap_lock);
+ for (byte_read = 0; byte_read < length; byte_read += num_chars_read) {
+ num_chars_read = length - byte_read;
+ if (num_chars_read > XCLBIN_HWICAP_BITFILE_BUF_SZ)
+ num_chars_read = XCLBIN_HWICAP_BITFILE_BUF_SZ;
+
+ err = bitstream_helper(icap, (const u32 *)buffer, num_chars_read / sizeof(u32));
+ if (err)
+ goto failed;
+ buffer += num_chars_read;
+ }
+
+ /* No cleanup is needed if the ICAP write times out. */
+ err = wait_for_done(icap);
+
+failed:
+ mutex_unlock(&icap->icap_lock);
+
+ return err;
+}
+
+/*
+ * Discover the FPGA IDCODE using special sequence of canned commands
+ */
+static int icap_probe_chip(struct icap *icap)
+{
+ int err;
+ u32 val = 0;
+
+ err = regmap_read(icap->regmap, ICAP_REG_SR, &val);
+ if (err)
+ return err;
+ if (val != ICAP_STATUS_DONE)
+ return -ENODEV;
+ /* Read ICAP write FIFO vacancy */
+ err = regmap_read(icap->regmap, ICAP_REG_WFV, &val);
+ if (err)
+ return err;
+ if (val < 8)
+ return -ENODEV;
+ err = icap_write(icap, idcode_stream, ARRAY_SIZE(idcode_stream));
+ if (err)
+ return err;
+ err = wait_for_done(icap);
+ if (err)
+ return err;
+
+ /* Tell config engine how many words to transfer to read FIFO */
+ err = regmap_write(icap->regmap, ICAP_REG_SZ, 0x1);
+ if (err)
+ return err;
+ /* Switch the ICAP to read mode */
+ err = regmap_write(icap->regmap, ICAP_REG_CR, 0x2);
+ if (err)
+ return err;
+ err = wait_for_done(icap);
+ if (err)
+ return err;
+
+ /* Read IDCODE from Read FIFO */
+ return regmap_read(icap->regmap, ICAP_REG_RF, &icap->idcode);
+}
+
+static int
+xrt_icap_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
+{
+ struct xrt_icap_wr *wr_arg = arg;
+ struct icap *icap;
+ int ret = 0;
+
+ icap = platform_get_drvdata(pdev);
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ /* Does not handle any event. */
+ break;
+ case XRT_ICAP_WRITE:
+ ret = icap_download(icap, wr_arg->xiiw_bit_data,
+ wr_arg->xiiw_data_len);
+ break;
+ case XRT_ICAP_GET_IDCODE:
+ *(u32 *)arg = icap->idcode;
+ break;
+ default:
+ ICAP_ERR(icap, "unknown command %d", cmd);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static int xrt_icap_probe(struct platform_device *pdev)
+{
+ void __iomem *base = NULL;
+ struct resource *res;
+ struct icap *icap;
+ int result = 0;
+
+ icap = devm_kzalloc(&pdev->dev, sizeof(*icap), GFP_KERNEL);
+ if (!icap)
+ return -ENOMEM;
+
+ icap->pdev = pdev;
+ platform_set_drvdata(pdev, icap);
+ mutex_init(&icap->icap_lock);
+
+ xrt_info(pdev, "probing");
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res)
+ return -EINVAL;
+
+ base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+ icap->regmap = devm_regmap_init_mmio(&pdev->dev, base, &icap_regmap_config);
+ if (IS_ERR(icap->regmap)) {
+ ICAP_ERR(icap, "init mmio failed");
+ return PTR_ERR(icap->regmap);
+ }
+ /* Disable ICAP interrupts */
+ regmap_write(icap->regmap, ICAP_REG_GIER, 0);
+
+ result = icap_probe_chip(icap);
+ if (result)
+ xrt_err(pdev, "Failed to probe FPGA");
+ else
+ xrt_info(pdev, "Discovered FPGA IDCODE %x", icap->idcode);
+ return result;
+}
+
+static struct xrt_subdev_endpoints xrt_icap_endpoints[] = {
+ {
+ .xse_names = (struct xrt_subdev_ep_names[]) {
+ { .ep_name = XRT_MD_NODE_FPGA_CONFIG },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_subdev_drvdata xrt_icap_data = {
+ .xsd_dev_ops = {
+ .xsd_leaf_call = xrt_icap_leaf_call,
+ },
+};
+
+static const struct platform_device_id xrt_icap_table[] = {
+ { XRT_ICAP, (kernel_ulong_t)&xrt_icap_data },
+ { },
+};
+
+static struct platform_driver xrt_icap_driver = {
+ .driver = {
+ .name = XRT_ICAP,
+ },
+ .probe = xrt_icap_probe,
+ .id_table = xrt_icap_table,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_ICAP, icap);
--
2.27.0
Add DDR calibration driver. DDR calibration is a hardware function
discovered by walking the firmware metadata; a platform device node is
created for it. The hardware reports DDR calibration status through
this function.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
.../fpga/xrt/include/xleaf/ddr_calibration.h | 28 +++
drivers/fpga/xrt/lib/xleaf/ddr_calibration.c | 226 ++++++++++++++++++
2 files changed, 254 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xleaf/ddr_calibration.h
create mode 100644 drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
diff --git a/drivers/fpga/xrt/include/xleaf/ddr_calibration.h b/drivers/fpga/xrt/include/xleaf/ddr_calibration.h
new file mode 100644
index 000000000000..878740c26ca2
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf/ddr_calibration.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XRT_DDR_CALIBRATION_H_
+#define _XRT_DDR_CALIBRATION_H_
+
+#include "xleaf.h"
+#include <linux/xrt/xclbin.h>
+
+/*
+ * Memory calibration driver leaf calls.
+ */
+enum xrt_calib_results {
+ XRT_CALIB_UNKNOWN = 0,
+ XRT_CALIB_SUCCEEDED,
+ XRT_CALIB_FAILED,
+};
+
+enum xrt_calib_leaf_cmd {
+ XRT_CALIB_RESULT = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+};
+
+#endif /* _XRT_DDR_CALIBRATION_H_ */
diff --git a/drivers/fpga/xrt/lib/xleaf/ddr_calibration.c b/drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
new file mode 100644
index 000000000000..5a9fa82946cb
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
@@ -0,0 +1,226 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA memory calibration driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * memory calibration
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+#include <linux/delay.h>
+#include <linux/regmap.h>
+#include "xclbin-helper.h"
+#include "metadata.h"
+#include "xleaf/ddr_calibration.h"
+
+#define XRT_CALIB "xrt_calib"
+
+#define XRT_CALIB_STATUS_REG 0
+#define XRT_CALIB_READ_RETRIES 20
+#define XRT_CALIB_READ_INTERVAL 500 /* ms */
+
+static const struct regmap_config calib_regmap_config = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
+ .max_register = 0x1000,
+};
+
+struct calib_cache {
+ struct list_head link;
+ const char *ep_name;
+ char *data;
+ u32 data_size;
+};
+
+struct calib {
+ struct platform_device *pdev;
+ struct regmap *regmap;
+ struct mutex lock; /* calibration dev lock */
+ struct list_head cache_list;
+ u32 cache_num;
+ enum xrt_calib_results result;
+};
+
+static void __calib_cache_clean_nolock(struct calib *calib)
+{
+ struct calib_cache *cache, *temp;
+
+ list_for_each_entry_safe(cache, temp, &calib->cache_list, link) {
+ vfree(cache->data);
+ list_del(&cache->link);
+ vfree(cache);
+ }
+ calib->cache_num = 0;
+}
+
+static void calib_cache_clean(struct calib *calib)
+{
+ mutex_lock(&calib->lock);
+ __calib_cache_clean_nolock(calib);
+ mutex_unlock(&calib->lock);
+}
+
+static int calib_calibration(struct calib *calib)
+{
+ u32 times = XRT_CALIB_READ_RETRIES;
+ u32 status;
+ int ret;
+
+ while (times != 0) {
+ ret = regmap_read(calib->regmap, XRT_CALIB_STATUS_REG, &status);
+ if (ret) {
+ xrt_err(calib->pdev, "failed to read status reg %d", ret);
+ return ret;
+ }
+
+ if (status & BIT(0))
+ break;
+ msleep(XRT_CALIB_READ_INTERVAL);
+ times--;
+ }
+
+ if (!times) {
+ xrt_err(calib->pdev,
+ "MIG calibration timeout after bitstream download");
+ return -ETIMEDOUT;
+ }
+
+ xrt_info(calib->pdev, "calibration took %ums",
+ (XRT_CALIB_READ_RETRIES - times) * XRT_CALIB_READ_INTERVAL);
+ return 0;
+}
+
+static void xrt_calib_event_cb(struct platform_device *pdev, void *arg)
+{
+ struct calib *calib = platform_get_drvdata(pdev);
+ struct xrt_event *evt = (struct xrt_event *)arg;
+ enum xrt_events e = evt->xe_evt;
+ enum xrt_subdev_id id;
+ int ret;
+
+ id = evt->xe_subdev.xevt_subdev_id;
+
+ switch (e) {
+ case XRT_EVENT_POST_CREATION:
+ if (id == XRT_SUBDEV_UCS) {
+ ret = calib_calibration(calib);
+ if (ret)
+ calib->result = XRT_CALIB_FAILED;
+ else
+ calib->result = XRT_CALIB_SUCCEEDED;
+ }
+ break;
+ default:
+ xrt_dbg(pdev, "ignored event %d", e);
+ break;
+ }
+}
+
+static int xrt_calib_remove(struct platform_device *pdev)
+{
+ struct calib *calib = platform_get_drvdata(pdev);
+
+ calib_cache_clean(calib);
+
+ return 0;
+}
+
+static int xrt_calib_probe(struct platform_device *pdev)
+{
+ void __iomem *base = NULL;
+ struct resource *res;
+ struct calib *calib;
+ int err = 0;
+
+ calib = devm_kzalloc(&pdev->dev, sizeof(*calib), GFP_KERNEL);
+ if (!calib)
+ return -ENOMEM;
+
+ calib->pdev = pdev;
+ platform_set_drvdata(pdev, calib);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ err = -EINVAL;
+ goto failed;
+ }
+
+ base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(base)) {
+ err = PTR_ERR(base);
+ goto failed;
+ }
+
+ calib->regmap = devm_regmap_init_mmio(&pdev->dev, base, &calib_regmap_config);
+ if (IS_ERR(calib->regmap)) {
+ xrt_err(pdev, "Map iomem failed");
+ err = PTR_ERR(calib->regmap);
+ goto failed;
+ }
+
+ mutex_init(&calib->lock);
+ INIT_LIST_HEAD(&calib->cache_list);
+
+ return 0;
+
+failed:
+ return err;
+}
+
+static int
+xrt_calib_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
+{
+ struct calib *calib = platform_get_drvdata(pdev);
+ int ret = 0;
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ xrt_calib_event_cb(pdev, arg);
+ break;
+ case XRT_CALIB_RESULT: {
+ enum xrt_calib_results *r = (enum xrt_calib_results *)arg;
+ *r = calib->result;
+ break;
+ }
+ default:
+ xrt_err(pdev, "unsupported cmd %d", cmd);
+ ret = -EINVAL;
+ }
+ return ret;
+}
+
+static struct xrt_subdev_endpoints xrt_calib_endpoints[] = {
+ {
+ .xse_names = (struct xrt_subdev_ep_names[]) {
+ { .ep_name = XRT_MD_NODE_DDR_CALIB },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_subdev_drvdata xrt_calib_data = {
+ .xsd_dev_ops = {
+ .xsd_leaf_call = xrt_calib_leaf_call,
+ },
+};
+
+static const struct platform_device_id xrt_calib_table[] = {
+ { XRT_CALIB, (kernel_ulong_t)&xrt_calib_data },
+ { },
+};
+
+static struct platform_driver xrt_calib_driver = {
+ .driver = {
+ .name = XRT_CALIB,
+ },
+ .probe = xrt_calib_probe,
+ .remove = xrt_calib_remove,
+ .id_table = xrt_calib_table,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_CALIB, calib);
--
2.27.0
Add group driver that manages the life cycle of a set of leaf driver
instances and bridges them with the root driver.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/group.h | 25 +++
drivers/fpga/xrt/lib/group.c | 286 +++++++++++++++++++++++++++++++
2 files changed, 311 insertions(+)
create mode 100644 drivers/fpga/xrt/include/group.h
create mode 100644 drivers/fpga/xrt/lib/group.c
diff --git a/drivers/fpga/xrt/include/group.h b/drivers/fpga/xrt/include/group.h
new file mode 100644
index 000000000000..09e9d03f53fe
--- /dev/null
+++ b/drivers/fpga/xrt/include/group.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XRT_GROUP_H_
+#define _XRT_GROUP_H_
+
+#include "xleaf.h"
+
+/*
+ * Group driver leaf calls.
+ */
+enum xrt_group_leaf_cmd {
+ XRT_GROUP_GET_LEAF = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+ XRT_GROUP_PUT_LEAF,
+ XRT_GROUP_INIT_CHILDREN,
+ XRT_GROUP_FINI_CHILDREN,
+ XRT_GROUP_TRIGGER_EVENT,
+};
+
+#endif /* _XRT_GROUP_H_ */
diff --git a/drivers/fpga/xrt/lib/group.c b/drivers/fpga/xrt/lib/group.c
new file mode 100644
index 000000000000..7b8716569641
--- /dev/null
+++ b/drivers/fpga/xrt/lib/group.c
@@ -0,0 +1,286 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Group Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include "xleaf.h"
+#include "subdev_pool.h"
+#include "group.h"
+#include "metadata.h"
+#include "lib-drv.h"
+
+#define XRT_GRP "xrt_group"
+
+struct xrt_group {
+ struct platform_device *pdev;
+ struct xrt_subdev_pool leaves;
+ bool leaves_created;
+ struct mutex lock; /* lock for group */
+};
+
+static int xrt_grp_root_cb(struct device *dev, void *parg,
+ enum xrt_root_cmd cmd, void *arg)
+{
+ int rc;
+ struct platform_device *pdev =
+ container_of(dev, struct platform_device, dev);
+ struct xrt_group *xg = (struct xrt_group *)parg;
+
+ switch (cmd) {
+ case XRT_ROOT_GET_LEAF_HOLDERS: {
+ struct xrt_root_get_holders *holders =
+ (struct xrt_root_get_holders *)arg;
+ rc = xrt_subdev_pool_get_holders(&xg->leaves,
+ holders->xpigh_pdev,
+ holders->xpigh_holder_buf,
+ holders->xpigh_holder_buf_len);
+ break;
+ }
+ default:
+ /* Forward parent call to root. */
+ rc = xrt_subdev_root_request(pdev, cmd, arg);
+ break;
+ }
+
+ return rc;
+}
+
+/*
+ * Cut subdev's dtb from group's dtb based on passed-in endpoint descriptor.
+ * Return the subdev's dtb through dtbp, if found.
+ */
+static int xrt_grp_cut_subdev_dtb(struct xrt_group *xg, struct xrt_subdev_endpoints *eps,
+ char *grp_dtb, char **dtbp)
+{
+ int ret, i, ep_count = 0;
+ char *dtb = NULL;
+
+ ret = xrt_md_create(DEV(xg->pdev), &dtb);
+ if (ret)
+ return ret;
+
+ for (i = 0; eps->xse_names[i].ep_name || eps->xse_names[i].regmap_name; i++) {
+ const char *ep_name = eps->xse_names[i].ep_name;
+ const char *reg_name = eps->xse_names[i].regmap_name;
+
+ if (!ep_name)
+ xrt_md_get_compatible_endpoint(DEV(xg->pdev), grp_dtb, reg_name, &ep_name);
+ if (!ep_name)
+ continue;
+
+ ret = xrt_md_copy_endpoint(DEV(xg->pdev), dtb, grp_dtb, ep_name, reg_name, NULL);
+ if (ret)
+ continue;
+ xrt_md_del_endpoint(DEV(xg->pdev), grp_dtb, ep_name, reg_name);
+ ep_count++;
+ }
+ /* Found enough endpoints, return the subdev's dtb. */
+ if (ep_count >= eps->xse_min_ep) {
+ *dtbp = dtb;
+ return 0;
+ }
+
+ /* Cleanup - Restore all endpoints that have been deleted, if any. */
+ if (ep_count > 0) {
+ xrt_md_copy_endpoint(DEV(xg->pdev), grp_dtb, dtb,
+ XRT_MD_NODE_ENDPOINTS, NULL, NULL);
+ }
+ vfree(dtb);
+ *dtbp = NULL;
+ return 0;
+}
+
+static int xrt_grp_create_leaves(struct xrt_group *xg)
+{
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(xg->pdev);
+ struct xrt_subdev_endpoints *eps = NULL;
+ int ret = 0, failed = 0;
+ enum xrt_subdev_id did;
+ char *grp_dtb = NULL;
+ unsigned long mlen;
+
+ if (!pdata)
+ return -EINVAL;
+
+ mlen = xrt_md_size(DEV(xg->pdev), pdata->xsp_dtb);
+ if (mlen == XRT_MD_INVALID_LENGTH) {
+ xrt_err(xg->pdev, "invalid dtb, len %lu", mlen);
+ return -EINVAL;
+ }
+
+ mutex_lock(&xg->lock);
+
+ if (xg->leaves_created) {
+ mutex_unlock(&xg->lock);
+ return -EEXIST;
+ }
+
+ grp_dtb = vmalloc(mlen);
+ if (!grp_dtb) {
+ mutex_unlock(&xg->lock);
+ return -ENOMEM;
+ }
+
+ /* Create all leaves based on dtb. */
+ xrt_info(xg->pdev, "bringing up leaves...");
+ memcpy(grp_dtb, pdata->xsp_dtb, mlen);
+ for (did = 0; did < XRT_SUBDEV_NUM; did++) {
+ eps = xrt_drv_get_endpoints(did);
+ while (eps && eps->xse_names) {
+ char *dtb = NULL;
+
+ ret = xrt_grp_cut_subdev_dtb(xg, eps, grp_dtb, &dtb);
+ if (ret) {
+ failed++;
+ xrt_err(xg->pdev, "failed to cut subdev dtb for drv %s: %d",
+ xrt_drv_name(did), ret);
+ }
+ if (!dtb) {
+ /*
+ * No more dtb to cut, or cutting failed for this
+ * instance; move on to the next one.
+ */
+ eps++;
+ continue;
+ }
+
+ /* Found a dtb for this instance, let's add it. */
+ ret = xrt_subdev_pool_add(&xg->leaves, did, xrt_grp_root_cb, xg, dtb);
+ if (ret < 0) {
+ failed++;
+ xrt_err(xg->pdev, "failed to add %s: %d", xrt_drv_name(did), ret);
+ }
+ vfree(dtb);
+ /* Continue searching for the same instance from grp_dtb. */
+ }
+ }
+
+ xg->leaves_created = true;
+ vfree(grp_dtb);
+ mutex_unlock(&xg->lock);
+ return failed == 0 ? 0 : -ECHILD;
+}
+
+static void xrt_grp_remove_leaves(struct xrt_group *xg)
+{
+ mutex_lock(&xg->lock);
+
+ if (!xg->leaves_created) {
+ mutex_unlock(&xg->lock);
+ return;
+ }
+
+ xrt_info(xg->pdev, "tearing down leaves...");
+ xrt_subdev_pool_fini(&xg->leaves);
+ xg->leaves_created = false;
+
+ mutex_unlock(&xg->lock);
+}
+
+static int xrt_grp_probe(struct platform_device *pdev)
+{
+ struct xrt_group *xg;
+
+ xrt_info(pdev, "probing...");
+
+ xg = devm_kzalloc(&pdev->dev, sizeof(*xg), GFP_KERNEL);
+ if (!xg)
+ return -ENOMEM;
+
+ xg->pdev = pdev;
+ mutex_init(&xg->lock);
+ xrt_subdev_pool_init(DEV(pdev), &xg->leaves);
+ platform_set_drvdata(pdev, xg);
+
+ return 0;
+}
+
+static int xrt_grp_remove(struct platform_device *pdev)
+{
+ struct xrt_group *xg = platform_get_drvdata(pdev);
+
+ xrt_info(pdev, "leaving...");
+ xrt_grp_remove_leaves(xg);
+ return 0;
+}
+
+static int xrt_grp_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
+{
+ int rc = 0;
+ struct xrt_group *xg = platform_get_drvdata(pdev);
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ /* Simply forward to every child. */
+ xrt_subdev_pool_handle_event(&xg->leaves,
+ (struct xrt_event *)arg);
+ break;
+ case XRT_GROUP_GET_LEAF: {
+ struct xrt_root_get_leaf *get_leaf =
+ (struct xrt_root_get_leaf *)arg;
+
+ rc = xrt_subdev_pool_get(&xg->leaves, get_leaf->xpigl_match_cb,
+ get_leaf->xpigl_match_arg,
+ DEV(get_leaf->xpigl_caller_pdev),
+ &get_leaf->xpigl_tgt_pdev);
+ break;
+ }
+ case XRT_GROUP_PUT_LEAF: {
+ struct xrt_root_put_leaf *put_leaf =
+ (struct xrt_root_put_leaf *)arg;
+
+ rc = xrt_subdev_pool_put(&xg->leaves, put_leaf->xpipl_tgt_pdev,
+ DEV(put_leaf->xpipl_caller_pdev));
+ break;
+ }
+ case XRT_GROUP_INIT_CHILDREN:
+ rc = xrt_grp_create_leaves(xg);
+ break;
+ case XRT_GROUP_FINI_CHILDREN:
+ xrt_grp_remove_leaves(xg);
+ break;
+ case XRT_GROUP_TRIGGER_EVENT:
+ xrt_subdev_pool_trigger_event(&xg->leaves, (enum xrt_events)(uintptr_t)arg);
+ break;
+ default:
+ xrt_err(pdev, "unknown leaf cmd %d", cmd);
+ rc = -EINVAL;
+ break;
+ }
+ return rc;
+}
+
+static struct xrt_subdev_drvdata xrt_grp_data = {
+ .xsd_dev_ops = {
+ .xsd_leaf_call = xrt_grp_leaf_call,
+ },
+};
+
+static const struct platform_device_id xrt_grp_id_table[] = {
+ { XRT_GRP, (kernel_ulong_t)&xrt_grp_data },
+ { },
+};
+
+static struct platform_driver xrt_group_driver = {
+ .driver = {
+ .name = XRT_GRP,
+ },
+ .probe = xrt_grp_probe,
+ .remove = xrt_grp_remove,
+ .id_table = xrt_grp_id_table,
+};
+
+void group_leaf_init_fini(bool init)
+{
+ if (init)
+ xleaf_register_driver(XRT_SUBDEV_GRP, &xrt_group_driver, NULL);
+ else
+ xleaf_unregister_driver(XRT_SUBDEV_GRP);
+}
--
2.27.0
Add helper functions for character device node creation and removal
for platform drivers. This is part of the platform driver
infrastructure.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/lib/cdev.c | 232 ++++++++++++++++++++++++++++++++++++
1 file changed, 232 insertions(+)
create mode 100644 drivers/fpga/xrt/lib/cdev.c
diff --git a/drivers/fpga/xrt/lib/cdev.c b/drivers/fpga/xrt/lib/cdev.c
new file mode 100644
index 000000000000..38efd24b6e10
--- /dev/null
+++ b/drivers/fpga/xrt/lib/cdev.c
@@ -0,0 +1,232 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA device node helper functions.
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#include "xleaf.h"
+
+extern struct class *xrt_class;
+
+#define XRT_CDEV_DIR "xfpga"
+#define INODE2PDATA(inode) \
+ container_of((inode)->i_cdev, struct xrt_subdev_platdata, xsp_cdev)
+#define INODE2PDEV(inode) \
+ to_platform_device(kobj_to_dev((inode)->i_cdev->kobj.parent))
+#define CDEV_NAME(sysdev) (strchr((sysdev)->kobj.name, '!') + 1)
+
+/* Allow the leaf device to be accessed through its cdev. */
+static void xleaf_devnode_allowed(struct platform_device *pdev)
+{
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+
+ /* Allow new opens. */
+ mutex_lock(&pdata->xsp_devnode_lock);
+ pdata->xsp_devnode_online = true;
+ mutex_unlock(&pdata->xsp_devnode_lock);
+}
+
+/* Turn off access from cdev and wait for all existing users to go away. */
+static int xleaf_devnode_disallowed(struct platform_device *pdev)
+{
+ int ret = 0;
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+
+ mutex_lock(&pdata->xsp_devnode_lock);
+
+ /* Prevent new opens. */
+ pdata->xsp_devnode_online = false;
+ /* Wait for existing user to close. */
+ while (!ret && pdata->xsp_devnode_ref) {
+ int rc;
+
+ mutex_unlock(&pdata->xsp_devnode_lock);
+ rc = wait_for_completion_killable(&pdata->xsp_devnode_comp);
+ mutex_lock(&pdata->xsp_devnode_lock);
+
+ if (rc == -ERESTARTSYS) {
+ /* Restore online state. */
+ pdata->xsp_devnode_online = true;
+ xrt_err(pdev, "%s is in use, ref=%d",
+ CDEV_NAME(pdata->xsp_sysdev),
+ pdata->xsp_devnode_ref);
+ ret = -EBUSY;
+ }
+ }
+
+ mutex_unlock(&pdata->xsp_devnode_lock);
+
+ return ret;
+}
+
+static struct platform_device *
+__xleaf_devnode_open(struct inode *inode, bool excl)
+{
+ struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
+ struct platform_device *pdev = INODE2PDEV(inode);
+ bool opened = false;
+
+ mutex_lock(&pdata->xsp_devnode_lock);
+
+ if (pdata->xsp_devnode_online) {
+ if (excl && pdata->xsp_devnode_ref) {
+ xrt_err(pdev, "%s has already been opened exclusively",
+ CDEV_NAME(pdata->xsp_sysdev));
+ } else if (!excl && pdata->xsp_devnode_excl) {
+ xrt_err(pdev, "%s has been opened exclusively",
+ CDEV_NAME(pdata->xsp_sysdev));
+ } else {
+ pdata->xsp_devnode_ref++;
+ pdata->xsp_devnode_excl = excl;
+ opened = true;
+ xrt_info(pdev, "opened %s, ref=%d",
+ CDEV_NAME(pdata->xsp_sysdev),
+ pdata->xsp_devnode_ref);
+ }
+ } else {
+ xrt_err(pdev, "%s is offline", CDEV_NAME(pdata->xsp_sysdev));
+ }
+
+ mutex_unlock(&pdata->xsp_devnode_lock);
+
+ pdev = opened ? pdev : NULL;
+ return pdev;
+}
+
+struct platform_device *
+xleaf_devnode_open_excl(struct inode *inode)
+{
+ return __xleaf_devnode_open(inode, true);
+}
+
+struct platform_device *
+xleaf_devnode_open(struct inode *inode)
+{
+ return __xleaf_devnode_open(inode, false);
+}
+EXPORT_SYMBOL_GPL(xleaf_devnode_open);
+
+void xleaf_devnode_close(struct inode *inode)
+{
+ struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
+ struct platform_device *pdev = INODE2PDEV(inode);
+ bool notify = false;
+
+ mutex_lock(&pdata->xsp_devnode_lock);
+
+ WARN_ON(pdata->xsp_devnode_ref == 0);
+ pdata->xsp_devnode_ref--;
+ if (pdata->xsp_devnode_ref == 0) {
+ pdata->xsp_devnode_excl = false;
+ notify = true;
+ }
+ if (notify) {
+ xrt_info(pdev, "closed %s, notifying waiter",
+ CDEV_NAME(pdata->xsp_sysdev));
+ } else {
+ xrt_info(pdev, "closed %s, ref=%d",
+ CDEV_NAME(pdata->xsp_sysdev), pdata->xsp_devnode_ref);
+ }
+
+ mutex_unlock(&pdata->xsp_devnode_lock);
+
+ if (notify)
+ complete(&pdata->xsp_devnode_comp);
+}
+EXPORT_SYMBOL_GPL(xleaf_devnode_close);
+
+static inline enum xrt_subdev_file_mode
+devnode_mode(struct xrt_subdev_drvdata *drvdata)
+{
+ return drvdata->xsd_file_ops.xsf_mode;
+}
+
+int xleaf_devnode_create(struct platform_device *pdev, const char *file_name,
+ const char *inst_name)
+{
+ struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(pdev);
+ struct xrt_subdev_file_ops *fops = &drvdata->xsd_file_ops;
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+ struct cdev *cdevp;
+ struct device *sysdev;
+ int ret = 0;
+ char fname[256];
+
+ mutex_init(&pdata->xsp_devnode_lock);
+ init_completion(&pdata->xsp_devnode_comp);
+
+ cdevp = &DEV_PDATA(pdev)->xsp_cdev;
+ cdev_init(cdevp, &fops->xsf_ops);
+ cdevp->owner = fops->xsf_ops.owner;
+ cdevp->dev = MKDEV(MAJOR(fops->xsf_dev_t), pdev->id);
+
+ /*
+ * Set pdev as the parent of cdev so that pdev (and its platform
+ * data) is not freed while the cdev is still in use.
+ */
+ cdev_set_parent(cdevp, &DEV(pdev)->kobj);
+
+ ret = cdev_add(cdevp, cdevp->dev, 1);
+ if (ret) {
+ xrt_err(pdev, "failed to add cdev: %d", ret);
+ goto failed;
+ }
+ if (!file_name)
+ file_name = pdev->name;
+ if (!inst_name) {
+ if (devnode_mode(drvdata) == XRT_SUBDEV_FILE_MULTI_INST) {
+ snprintf(fname, sizeof(fname), "%s/%s/%s.%u",
+ XRT_CDEV_DIR, DEV_PDATA(pdev)->xsp_root_name,
+ file_name, pdev->id);
+ } else {
+ snprintf(fname, sizeof(fname), "%s/%s/%s",
+ XRT_CDEV_DIR, DEV_PDATA(pdev)->xsp_root_name,
+ file_name);
+ }
+ } else {
+ snprintf(fname, sizeof(fname), "%s/%s/%s.%s", XRT_CDEV_DIR,
+ DEV_PDATA(pdev)->xsp_root_name, file_name, inst_name);
+ }
+ sysdev = device_create(xrt_class, NULL, cdevp->dev, NULL, "%s", fname);
+ if (IS_ERR(sysdev)) {
+ ret = PTR_ERR(sysdev);
+ xrt_err(pdev, "failed to create device node: %d", ret);
+ goto failed_cdev_add;
+ }
+ pdata->xsp_sysdev = sysdev;
+
+ xleaf_devnode_allowed(pdev);
+
+ xrt_info(pdev, "created (%d, %d): /dev/%s",
+ MAJOR(cdevp->dev), pdev->id, fname);
+ return 0;
+
+failed_cdev_add:
+ cdev_del(cdevp);
+failed:
+ cdevp->owner = NULL;
+ return ret;
+}
+
+int xleaf_devnode_destroy(struct platform_device *pdev)
+{
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+ struct cdev *cdevp = &pdata->xsp_cdev;
+ dev_t dev = cdevp->dev;
+ int rc;
+
+ rc = xleaf_devnode_disallowed(pdev);
+ if (rc)
+ return rc;
+
+ xrt_info(pdev, "removed (%d, %d): /dev/%s/%s", MAJOR(dev), MINOR(dev),
+ XRT_CDEV_DIR, CDEV_NAME(pdata->xsp_sysdev));
+ device_destroy(xrt_class, cdevp->dev);
+ pdata->xsp_sysdev = NULL;
+ cdev_del(cdevp);
+ return 0;
+}
--
2.27.0
Add common code shared by all root drivers to handle root calls from
platform drivers. This is part of the root driver infrastructure.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/events.h | 45 +++
drivers/fpga/xrt/include/xroot.h | 117 ++++++
drivers/fpga/xrt/lib/subdev_pool.h | 53 +++
drivers/fpga/xrt/lib/xroot.c | 589 +++++++++++++++++++++++++++++
4 files changed, 804 insertions(+)
create mode 100644 drivers/fpga/xrt/include/events.h
create mode 100644 drivers/fpga/xrt/include/xroot.h
create mode 100644 drivers/fpga/xrt/lib/subdev_pool.h
create mode 100644 drivers/fpga/xrt/lib/xroot.c
diff --git a/drivers/fpga/xrt/include/events.h b/drivers/fpga/xrt/include/events.h
new file mode 100644
index 000000000000..775171a47c8e
--- /dev/null
+++ b/drivers/fpga/xrt/include/events.h
@@ -0,0 +1,45 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XRT_EVENTS_H_
+#define _XRT_EVENTS_H_
+
+#include "subdev_id.h"
+
+/*
+ * Event notification.
+ */
+enum xrt_events {
+ XRT_EVENT_TEST = 0, /* for testing */
+ /*
+ * Events related to specific subdev
+ * Callback arg: struct xrt_event_arg_subdev
+ */
+ XRT_EVENT_POST_CREATION,
+ XRT_EVENT_PRE_REMOVAL,
+ /*
+ * Events related to change of the whole board
+ * Callback arg: <none>
+ */
+ XRT_EVENT_PRE_HOT_RESET,
+ XRT_EVENT_POST_HOT_RESET,
+ XRT_EVENT_PRE_GATE_CLOSE,
+ XRT_EVENT_POST_GATE_OPEN,
+};
+
+struct xrt_event_arg_subdev {
+ enum xrt_subdev_id xevt_subdev_id;
+ int xevt_subdev_instance;
+};
+
+struct xrt_event {
+ enum xrt_events xe_evt;
+ struct xrt_event_arg_subdev xe_subdev;
+};
+
+#endif /* _XRT_EVENTS_H_ */
diff --git a/drivers/fpga/xrt/include/xroot.h b/drivers/fpga/xrt/include/xroot.h
new file mode 100644
index 000000000000..91c0aeb30bf8
--- /dev/null
+++ b/drivers/fpga/xrt/include/xroot.h
@@ -0,0 +1,117 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XRT_ROOT_H_
+#define _XRT_ROOT_H_
+
+#include <linux/platform_device.h>
+#include <linux/pci.h>
+#include "subdev_id.h"
+#include "events.h"
+
+typedef bool (*xrt_subdev_match_t)(enum xrt_subdev_id,
+ struct platform_device *, void *);
+#define XRT_SUBDEV_MATCH_PREV ((xrt_subdev_match_t)-1)
+#define XRT_SUBDEV_MATCH_NEXT ((xrt_subdev_match_t)-2)
+
+/*
+ * Root calls.
+ */
+enum xrt_root_cmd {
+ /* Leaf actions. */
+ XRT_ROOT_GET_LEAF = 0,
+ XRT_ROOT_PUT_LEAF,
+ XRT_ROOT_GET_LEAF_HOLDERS,
+
+ /* Group actions. */
+ XRT_ROOT_CREATE_GROUP,
+ XRT_ROOT_REMOVE_GROUP,
+ XRT_ROOT_LOOKUP_GROUP,
+ XRT_ROOT_WAIT_GROUP_BRINGUP,
+
+ /* Event actions. */
+ XRT_ROOT_EVENT_SYNC,
+ XRT_ROOT_EVENT_ASYNC,
+
+ /* Device info. */
+ XRT_ROOT_GET_RESOURCE,
+ XRT_ROOT_GET_ID,
+
+ /* Misc. */
+ XRT_ROOT_HOT_RESET,
+ XRT_ROOT_HWMON,
+};
+
+struct xrt_root_get_leaf {
+ struct platform_device *xpigl_caller_pdev;
+ xrt_subdev_match_t xpigl_match_cb;
+ void *xpigl_match_arg;
+ struct platform_device *xpigl_tgt_pdev;
+};
+
+struct xrt_root_put_leaf {
+ struct platform_device *xpipl_caller_pdev;
+ struct platform_device *xpipl_tgt_pdev;
+};
+
+struct xrt_root_lookup_group {
+ struct platform_device *xpilp_pdev; /* caller's pdev */
+ xrt_subdev_match_t xpilp_match_cb;
+ void *xpilp_match_arg;
+ int xpilp_grp_inst;
+};
+
+struct xrt_root_get_holders {
+ struct platform_device *xpigh_pdev; /* caller's pdev */
+ char *xpigh_holder_buf;
+ size_t xpigh_holder_buf_len;
+};
+
+struct xrt_root_get_res {
+ struct resource *xpigr_res;
+};
+
+struct xrt_root_get_id {
+ unsigned short xpigi_vendor_id;
+ unsigned short xpigi_device_id;
+ unsigned short xpigi_sub_vendor_id;
+ unsigned short xpigi_sub_device_id;
+};
+
+struct xrt_root_hwmon {
+ bool xpih_register;
+ const char *xpih_name;
+ void *xpih_drvdata;
+ const struct attribute_group **xpih_groups;
+ struct device *xpih_hwmon_dev;
+};
+
+/*
+ * Callback for leaf to make a root request. Arguments are: parent device, parent cookie, req,
+ * and arg.
+ */
+typedef int (*xrt_subdev_root_cb_t)(struct device *, void *, u32, void *);
+int xrt_subdev_root_request(struct platform_device *self, u32 cmd, void *arg);
+
+/*
+ * Defines physical function (MPF / UPF) specific operations
+ * needed in common root driver.
+ */
+struct xroot_physical_function_callback {
+ void (*xpc_hot_reset)(struct pci_dev *pdev);
+};
+
+int xroot_probe(struct pci_dev *pdev, struct xroot_physical_function_callback *cb, void **root);
+void xroot_remove(void *root);
+bool xroot_wait_for_bringup(void *root);
+int xroot_add_vsec_node(void *root, char *dtb);
+int xroot_create_group(void *xr, char *dtb);
+int xroot_add_simple_node(void *root, char *dtb, const char *endpoint);
+void xroot_broadcast(void *root, enum xrt_events evt);
+
+#endif /* _XRT_ROOT_H_ */
diff --git a/drivers/fpga/xrt/lib/subdev_pool.h b/drivers/fpga/xrt/lib/subdev_pool.h
new file mode 100644
index 000000000000..09d148e4e7ea
--- /dev/null
+++ b/drivers/fpga/xrt/lib/subdev_pool.h
@@ -0,0 +1,53 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#ifndef _XRT_SUBDEV_POOL_H_
+#define _XRT_SUBDEV_POOL_H_
+
+#include <linux/device.h>
+#include <linux/mutex.h>
+#include "xroot.h"
+
+/*
+ * The struct xrt_subdev_pool manages a list of xrt_subdevs for root and group drivers.
+ */
+struct xrt_subdev_pool {
+ struct list_head xsp_dev_list;
+ struct device *xsp_owner;
+ struct mutex xsp_lock; /* pool lock */
+ bool xsp_closing;
+};
+
+/*
+ * Subdev pool helper functions for root and group drivers only.
+ */
+void xrt_subdev_pool_init(struct device *dev,
+ struct xrt_subdev_pool *spool);
+void xrt_subdev_pool_fini(struct xrt_subdev_pool *spool);
+int xrt_subdev_pool_get(struct xrt_subdev_pool *spool,
+ xrt_subdev_match_t match,
+ void *arg, struct device *holder_dev,
+ struct platform_device **pdevp);
+int xrt_subdev_pool_put(struct xrt_subdev_pool *spool,
+ struct platform_device *pdev,
+ struct device *holder_dev);
+int xrt_subdev_pool_add(struct xrt_subdev_pool *spool,
+ enum xrt_subdev_id id, xrt_subdev_root_cb_t pcb,
+ void *pcb_arg, char *dtb);
+int xrt_subdev_pool_del(struct xrt_subdev_pool *spool,
+ enum xrt_subdev_id id, int instance);
+ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
+ struct platform_device *pdev,
+ char *buf, size_t len);
+
+void xrt_subdev_pool_trigger_event(struct xrt_subdev_pool *spool,
+ enum xrt_events evt);
+void xrt_subdev_pool_handle_event(struct xrt_subdev_pool *spool,
+ struct xrt_event *evt);
+
+#endif /* _XRT_SUBDEV_POOL_H_ */
diff --git a/drivers/fpga/xrt/lib/xroot.c b/drivers/fpga/xrt/lib/xroot.c
new file mode 100644
index 000000000000..03407272650f
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xroot.c
@@ -0,0 +1,589 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Root Functions
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/hwmon.h>
+#include "xroot.h"
+#include "subdev_pool.h"
+#include "group.h"
+#include "metadata.h"
+
+#define XROOT_PDEV(xr) ((xr)->pdev)
+#define XROOT_DEV(xr) (&(XROOT_PDEV(xr)->dev))
+#define xroot_err(xr, fmt, args...) \
+ dev_err(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
+#define xroot_warn(xr, fmt, args...) \
+ dev_warn(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
+#define xroot_info(xr, fmt, args...) \
+ dev_info(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
+#define xroot_dbg(xr, fmt, args...) \
+ dev_dbg(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
+
+#define XRT_VSEC_ID 0x20
+
+#define XROOT_GROUP_FIRST (-1)
+#define XROOT_GROUP_LAST (-2)
+
+static int xroot_root_cb(struct device *, void *, u32, void *);
+
+struct xroot_evt {
+ struct list_head list;
+ struct xrt_event evt;
+ struct completion comp;
+ bool async;
+};
+
+struct xroot_events {
+ struct mutex evt_lock; /* event lock */
+ struct list_head evt_list;
+ struct work_struct evt_work;
+};
+
+struct xroot_groups {
+ struct xrt_subdev_pool pool;
+ struct work_struct bringup_work;
+ atomic_t bringup_pending;
+ atomic_t bringup_failed;
+ struct completion bringup_comp;
+};
+
+struct xroot {
+ struct pci_dev *pdev;
+ struct xroot_events events;
+ struct xroot_groups groups;
+ struct xroot_physical_function_callback pf_cb;
+};
+
+struct xroot_group_match_arg {
+ enum xrt_subdev_id id;
+ int instance;
+};
+
+static bool xroot_group_match(enum xrt_subdev_id id, struct platform_device *pdev, void *arg)
+{
+ struct xroot_group_match_arg *a = (struct xroot_group_match_arg *)arg;
+
+ /* pdev->id is the instance of the subdev. */
+ return id == a->id && pdev->id == a->instance;
+}
+
+static int xroot_get_group(struct xroot *xr, int instance, struct platform_device **grpp)
+{
+ int rc = 0;
+ struct xrt_subdev_pool *grps = &xr->groups.pool;
+ struct device *dev = DEV(xr->pdev);
+ struct xroot_group_match_arg arg = { XRT_SUBDEV_GRP, instance };
+
+ if (instance == XROOT_GROUP_LAST) {
+ rc = xrt_subdev_pool_get(grps, XRT_SUBDEV_MATCH_NEXT,
+ *grpp, dev, grpp);
+ } else if (instance == XROOT_GROUP_FIRST) {
+ rc = xrt_subdev_pool_get(grps, XRT_SUBDEV_MATCH_PREV,
+ *grpp, dev, grpp);
+ } else {
+ rc = xrt_subdev_pool_get(grps, xroot_group_match,
+ &arg, dev, grpp);
+ }
+
+ if (rc && rc != -ENOENT)
+ xroot_err(xr, "failed to hold group %d: %d", instance, rc);
+ return rc;
+}
+
+static void xroot_put_group(struct xroot *xr, struct platform_device *grp)
+{
+ int inst = grp->id;
+ int rc = xrt_subdev_pool_put(&xr->groups.pool, grp, DEV(xr->pdev));
+
+ if (rc)
+ xroot_err(xr, "failed to release group %d: %d", inst, rc);
+}
+
+static int xroot_trigger_event(struct xroot *xr, struct xrt_event *e, bool async)
+{
+ struct xroot_evt *enew = vzalloc(sizeof(*enew));
+
+ if (!enew)
+ return -ENOMEM;
+
+ enew->evt = *e;
+ enew->async = async;
+ init_completion(&enew->comp);
+
+ mutex_lock(&xr->events.evt_lock);
+ list_add(&enew->list, &xr->events.evt_list);
+ mutex_unlock(&xr->events.evt_lock);
+
+ schedule_work(&xr->events.evt_work);
+
+ if (async)
+ return 0;
+
+ wait_for_completion(&enew->comp);
+ vfree(enew);
+ return 0;
+}
+
+static void
+xroot_group_trigger_event(struct xroot *xr, int inst, enum xrt_events e)
+{
+ int ret;
+ struct platform_device *pdev = NULL;
+ struct xrt_event evt = { 0 };
+
+ WARN_ON(inst < 0);
+	/* Only subdev-specific events are triggered here. */
+ if (e != XRT_EVENT_POST_CREATION && e != XRT_EVENT_PRE_REMOVAL) {
+ xroot_err(xr, "invalid event %d", e);
+ return;
+ }
+
+ ret = xroot_get_group(xr, inst, &pdev);
+ if (ret)
+ return;
+
+ /* Triggers event for children, first. */
+ xleaf_call(pdev, XRT_GROUP_TRIGGER_EVENT, (void *)(uintptr_t)e);
+
+ /* Triggers event for itself. */
+ evt.xe_evt = e;
+ evt.xe_subdev.xevt_subdev_id = XRT_SUBDEV_GRP;
+ evt.xe_subdev.xevt_subdev_instance = inst;
+ xroot_trigger_event(xr, &evt, false);
+
+ xroot_put_group(xr, pdev);
+}
+
+int xroot_create_group(void *root, char *dtb)
+{
+ struct xroot *xr = (struct xroot *)root;
+ int ret;
+
+ atomic_inc(&xr->groups.bringup_pending);
+ ret = xrt_subdev_pool_add(&xr->groups.pool, XRT_SUBDEV_GRP, xroot_root_cb, xr, dtb);
+ if (ret >= 0) {
+ schedule_work(&xr->groups.bringup_work);
+ } else {
+ atomic_dec(&xr->groups.bringup_pending);
+ atomic_inc(&xr->groups.bringup_failed);
+ xroot_err(xr, "failed to create group: %d", ret);
+ }
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xroot_create_group);
+
+static int xroot_destroy_single_group(struct xroot *xr, int instance)
+{
+ struct platform_device *pdev = NULL;
+ int ret;
+
+ WARN_ON(instance < 0);
+ ret = xroot_get_group(xr, instance, &pdev);
+ if (ret)
+ return ret;
+
+ xroot_group_trigger_event(xr, instance, XRT_EVENT_PRE_REMOVAL);
+
+ /* Now tear down all children in this group. */
+ ret = xleaf_call(pdev, XRT_GROUP_FINI_CHILDREN, NULL);
+ xroot_put_group(xr, pdev);
+ if (!ret)
+ ret = xrt_subdev_pool_del(&xr->groups.pool, XRT_SUBDEV_GRP, instance);
+
+ return ret;
+}
+
+static int xroot_destroy_group(struct xroot *xr, int instance)
+{
+ struct platform_device *target = NULL;
+ struct platform_device *deps = NULL;
+ int ret;
+
+ WARN_ON(instance < 0);
+ /*
+	 * Make sure the target group exists and can't go away before
+	 * we remove its dependents.
+ */
+ ret = xroot_get_group(xr, instance, &target);
+ if (ret)
+ return ret;
+
+ /*
+	 * Remove all groups that depend on the target one.
+	 * Since subdevs in higher-ID groups can depend on ones in
+	 * lower-ID groups, we remove them in reverse order.
+ */
+ while (xroot_get_group(xr, XROOT_GROUP_LAST, &deps) != -ENOENT) {
+ int inst = deps->id;
+
+ xroot_put_group(xr, deps);
+ /* Reached the target group instance, stop here. */
+ if (instance == inst)
+ break;
+ xroot_destroy_single_group(xr, inst);
+ deps = NULL;
+ }
+
+ /* Now we can remove the target group. */
+ xroot_put_group(xr, target);
+ return xroot_destroy_single_group(xr, instance);
+}
+
+static int xroot_lookup_group(struct xroot *xr,
+ struct xrt_root_lookup_group *arg)
+{
+ int rc = -ENOENT;
+ struct platform_device *grp = NULL;
+
+ while (rc < 0 && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
+ if (arg->xpilp_match_cb(XRT_SUBDEV_GRP, grp, arg->xpilp_match_arg))
+ rc = grp->id;
+ xroot_put_group(xr, grp);
+ }
+ return rc;
+}
+
+static void xroot_event_work(struct work_struct *work)
+{
+ struct xroot_evt *tmp;
+ struct xroot *xr = container_of(work, struct xroot, events.evt_work);
+
+ mutex_lock(&xr->events.evt_lock);
+ while (!list_empty(&xr->events.evt_list)) {
+ tmp = list_first_entry(&xr->events.evt_list, struct xroot_evt, list);
+ list_del(&tmp->list);
+ mutex_unlock(&xr->events.evt_lock);
+
+ xrt_subdev_pool_handle_event(&xr->groups.pool, &tmp->evt);
+
+ if (tmp->async)
+ vfree(tmp);
+ else
+ complete(&tmp->comp);
+
+ mutex_lock(&xr->events.evt_lock);
+ }
+ mutex_unlock(&xr->events.evt_lock);
+}
+
+static void xroot_event_init(struct xroot *xr)
+{
+ INIT_LIST_HEAD(&xr->events.evt_list);
+ mutex_init(&xr->events.evt_lock);
+ INIT_WORK(&xr->events.evt_work, xroot_event_work);
+}
+
+static void xroot_event_fini(struct xroot *xr)
+{
+ flush_scheduled_work();
+ WARN_ON(!list_empty(&xr->events.evt_list));
+}
+
+static int xroot_get_leaf(struct xroot *xr, struct xrt_root_get_leaf *arg)
+{
+ int rc = -ENOENT;
+ struct platform_device *grp = NULL;
+
+ while (rc && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
+ rc = xleaf_call(grp, XRT_GROUP_GET_LEAF, arg);
+ xroot_put_group(xr, grp);
+ }
+ return rc;
+}
+
+static int xroot_put_leaf(struct xroot *xr, struct xrt_root_put_leaf *arg)
+{
+ int rc = -ENOENT;
+ struct platform_device *grp = NULL;
+
+ while (rc && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
+ rc = xleaf_call(grp, XRT_GROUP_PUT_LEAF, arg);
+ xroot_put_group(xr, grp);
+ }
+ return rc;
+}
+
+static int xroot_root_cb(struct device *dev, void *parg, u32 cmd, void *arg)
+{
+ struct xroot *xr = (struct xroot *)parg;
+ int rc = 0;
+
+ switch (cmd) {
+ /* Leaf actions. */
+ case XRT_ROOT_GET_LEAF: {
+ struct xrt_root_get_leaf *getleaf = (struct xrt_root_get_leaf *)arg;
+
+ rc = xroot_get_leaf(xr, getleaf);
+ break;
+ }
+ case XRT_ROOT_PUT_LEAF: {
+ struct xrt_root_put_leaf *putleaf = (struct xrt_root_put_leaf *)arg;
+
+ rc = xroot_put_leaf(xr, putleaf);
+ break;
+ }
+ case XRT_ROOT_GET_LEAF_HOLDERS: {
+ struct xrt_root_get_holders *holders = (struct xrt_root_get_holders *)arg;
+
+ rc = xrt_subdev_pool_get_holders(&xr->groups.pool,
+ holders->xpigh_pdev,
+ holders->xpigh_holder_buf,
+ holders->xpigh_holder_buf_len);
+ break;
+ }
+
+ /* Group actions. */
+ case XRT_ROOT_CREATE_GROUP:
+ rc = xroot_create_group(xr, (char *)arg);
+ break;
+ case XRT_ROOT_REMOVE_GROUP:
+ rc = xroot_destroy_group(xr, (int)(uintptr_t)arg);
+ break;
+ case XRT_ROOT_LOOKUP_GROUP: {
+ struct xrt_root_lookup_group *getgrp = (struct xrt_root_lookup_group *)arg;
+
+ rc = xroot_lookup_group(xr, getgrp);
+ break;
+ }
+ case XRT_ROOT_WAIT_GROUP_BRINGUP:
+ rc = xroot_wait_for_bringup(xr) ? 0 : -EINVAL;
+ break;
+
+ /* Event actions. */
+ case XRT_ROOT_EVENT_SYNC:
+ case XRT_ROOT_EVENT_ASYNC: {
+ bool async = (cmd == XRT_ROOT_EVENT_ASYNC);
+ struct xrt_event *evt = (struct xrt_event *)arg;
+
+ rc = xroot_trigger_event(xr, evt, async);
+ break;
+ }
+
+ /* Device info. */
+ case XRT_ROOT_GET_RESOURCE: {
+ struct xrt_root_get_res *res = (struct xrt_root_get_res *)arg;
+
+ res->xpigr_res = xr->pdev->resource;
+ break;
+ }
+ case XRT_ROOT_GET_ID: {
+ struct xrt_root_get_id *id = (struct xrt_root_get_id *)arg;
+
+ id->xpigi_vendor_id = xr->pdev->vendor;
+ id->xpigi_device_id = xr->pdev->device;
+ id->xpigi_sub_vendor_id = xr->pdev->subsystem_vendor;
+ id->xpigi_sub_device_id = xr->pdev->subsystem_device;
+ break;
+ }
+
+ /* MISC generic PCIE driver functions. */
+ case XRT_ROOT_HOT_RESET: {
+ xr->pf_cb.xpc_hot_reset(xr->pdev);
+ break;
+ }
+ case XRT_ROOT_HWMON: {
+ struct xrt_root_hwmon *hwmon = (struct xrt_root_hwmon *)arg;
+
+ if (hwmon->xpih_register) {
+ hwmon->xpih_hwmon_dev =
+ hwmon_device_register_with_info(DEV(xr->pdev),
+ hwmon->xpih_name,
+ hwmon->xpih_drvdata,
+ NULL,
+ hwmon->xpih_groups);
+ } else {
+ hwmon_device_unregister(hwmon->xpih_hwmon_dev);
+ }
+ break;
+ }
+
+ default:
+		xroot_err(xr, "unknown root cmd %d", cmd);
+ rc = -EINVAL;
+ break;
+ }
+
+ return rc;
+}
+
+static void xroot_bringup_group_work(struct work_struct *work)
+{
+ struct platform_device *pdev = NULL;
+ struct xroot *xr = container_of(work, struct xroot, groups.bringup_work);
+
+ while (xroot_get_group(xr, XROOT_GROUP_FIRST, &pdev) != -ENOENT) {
+ int r, i;
+
+ i = pdev->id;
+ r = xleaf_call(pdev, XRT_GROUP_INIT_CHILDREN, NULL);
+ xroot_put_group(xr, pdev);
+ if (r == -EEXIST)
+			continue; /* Already brought up, nothing to do. */
+ if (r)
+ atomic_inc(&xr->groups.bringup_failed);
+
+ xroot_group_trigger_event(xr, i, XRT_EVENT_POST_CREATION);
+
+ if (atomic_dec_and_test(&xr->groups.bringup_pending))
+ complete(&xr->groups.bringup_comp);
+ }
+}
+
+static void xroot_groups_init(struct xroot *xr)
+{
+ xrt_subdev_pool_init(DEV(xr->pdev), &xr->groups.pool);
+ INIT_WORK(&xr->groups.bringup_work, xroot_bringup_group_work);
+ atomic_set(&xr->groups.bringup_pending, 0);
+ atomic_set(&xr->groups.bringup_failed, 0);
+ init_completion(&xr->groups.bringup_comp);
+}
+
+static void xroot_groups_fini(struct xroot *xr)
+{
+ flush_scheduled_work();
+ xrt_subdev_pool_fini(&xr->groups.pool);
+}
+
+int xroot_add_vsec_node(void *root, char *dtb)
+{
+ struct xroot *xr = (struct xroot *)root;
+ struct device *dev = DEV(xr->pdev);
+ struct xrt_md_endpoint ep = { 0 };
+ int cap = 0, ret = 0;
+ u32 off_low, off_high, vsec_bar, header;
+ u64 vsec_off;
+
+ while ((cap = pci_find_next_ext_capability(xr->pdev, cap, PCI_EXT_CAP_ID_VNDR))) {
+ pci_read_config_dword(xr->pdev, cap + PCI_VNDR_HEADER, &header);
+ if (PCI_VNDR_HEADER_ID(header) == XRT_VSEC_ID)
+ break;
+ }
+ if (!cap) {
+ xroot_info(xr, "No Vendor Specific Capability.");
+ return -ENOENT;
+ }
+
+ if (pci_read_config_dword(xr->pdev, cap + 8, &off_low) ||
+ pci_read_config_dword(xr->pdev, cap + 12, &off_high)) {
+ xroot_err(xr, "pci_read vendor specific failed.");
+ return -EINVAL;
+ }
+
+ ep.ep_name = XRT_MD_NODE_VSEC;
+ ret = xrt_md_add_endpoint(dev, dtb, &ep);
+ if (ret) {
+ xroot_err(xr, "add vsec metadata failed, ret %d", ret);
+ goto failed;
+ }
+
+ vsec_bar = cpu_to_be32(off_low & 0xf);
+ ret = xrt_md_set_prop(dev, dtb, XRT_MD_NODE_VSEC, NULL,
+ XRT_MD_PROP_BAR_IDX, &vsec_bar, sizeof(vsec_bar));
+ if (ret) {
+ xroot_err(xr, "add vsec bar idx failed, ret %d", ret);
+ goto failed;
+ }
+
+ vsec_off = cpu_to_be64(((u64)off_high << 32) | (off_low & ~0xfU));
+ ret = xrt_md_set_prop(dev, dtb, XRT_MD_NODE_VSEC, NULL,
+ XRT_MD_PROP_OFFSET, &vsec_off, sizeof(vsec_off));
+ if (ret) {
+ xroot_err(xr, "add vsec offset failed, ret %d", ret);
+ goto failed;
+ }
+
+failed:
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xroot_add_vsec_node);
+
+int xroot_add_simple_node(void *root, char *dtb, const char *endpoint)
+{
+ struct xroot *xr = (struct xroot *)root;
+ struct device *dev = DEV(xr->pdev);
+ struct xrt_md_endpoint ep = { 0 };
+ int ret = 0;
+
+ ep.ep_name = endpoint;
+ ret = xrt_md_add_endpoint(dev, dtb, &ep);
+ if (ret)
+ xroot_err(xr, "add %s failed, ret %d", endpoint, ret);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xroot_add_simple_node);
+
+bool xroot_wait_for_bringup(void *root)
+{
+ struct xroot *xr = (struct xroot *)root;
+
+ wait_for_completion(&xr->groups.bringup_comp);
+ return atomic_read(&xr->groups.bringup_failed) == 0;
+}
+EXPORT_SYMBOL_GPL(xroot_wait_for_bringup);
+
+int xroot_probe(struct pci_dev *pdev, struct xroot_physical_function_callback *cb, void **root)
+{
+ struct device *dev = DEV(pdev);
+ struct xroot *xr = NULL;
+
+ dev_info(dev, "%s: probing...", __func__);
+
+ xr = devm_kzalloc(dev, sizeof(*xr), GFP_KERNEL);
+ if (!xr)
+ return -ENOMEM;
+
+ xr->pdev = pdev;
+ xr->pf_cb = *cb;
+ xroot_groups_init(xr);
+ xroot_event_init(xr);
+
+ *root = xr;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xroot_probe);
+
+void xroot_remove(void *root)
+{
+ struct xroot *xr = (struct xroot *)root;
+ struct platform_device *grp = NULL;
+
+ xroot_info(xr, "leaving...");
+
+ if (xroot_get_group(xr, XROOT_GROUP_FIRST, &grp) == 0) {
+ int instance = grp->id;
+
+ xroot_put_group(xr, grp);
+ xroot_destroy_group(xr, instance);
+ }
+
+ xroot_event_fini(xr);
+ xroot_groups_fini(xr);
+}
+EXPORT_SYMBOL_GPL(xroot_remove);
+
+void xroot_broadcast(void *root, enum xrt_events evt)
+{
+ struct xroot *xr = (struct xroot *)root;
+ struct xrt_event e = { 0 };
+
+	/* The root PF driver only broadcasts the two events below. */
+ if (evt != XRT_EVENT_POST_CREATION && evt != XRT_EVENT_PRE_REMOVAL) {
+ xroot_info(xr, "invalid event %d", evt);
+ return;
+ }
+
+ e.xe_evt = evt;
+ e.xe_subdev.xevt_subdev_id = XRT_ROOT;
+ e.xe_subdev.xevt_subdev_instance = 0;
+ xroot_trigger_event(xr, &e, false);
+}
+EXPORT_SYMBOL_GPL(xroot_broadcast);
--
2.27.0
Add infrastructure code that provides APIs for managing groups of leaf
driver instances and for facilitating inter-leaf driver calls and root calls.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/lib/subdev.c | 865 ++++++++++++++++++++++++++++++++++
1 file changed, 865 insertions(+)
create mode 100644 drivers/fpga/xrt/lib/subdev.c
diff --git a/drivers/fpga/xrt/lib/subdev.c b/drivers/fpga/xrt/lib/subdev.c
new file mode 100644
index 000000000000..6428b183fee3
--- /dev/null
+++ b/drivers/fpga/xrt/lib/subdev.c
@@ -0,0 +1,865 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#include <linux/platform_device.h>
+#include <linux/pci.h>
+#include <linux/vmalloc.h>
+#include "xleaf.h"
+#include "subdev_pool.h"
+#include "lib-drv.h"
+#include "metadata.h"
+
+#define IS_ROOT_DEV(dev) ((dev)->bus == &pci_bus_type)
+static inline struct device *find_root(struct platform_device *pdev)
+{
+ struct device *d = DEV(pdev);
+
+ while (!IS_ROOT_DEV(d))
+ d = d->parent;
+ return d;
+}
+
+/*
+ * Represents a holder of a subdev. One holder can hold a subdev repeatedly,
+ * as long as each hold is paired with a corresponding unhold.
+ */
+struct xrt_subdev_holder {
+ struct list_head xsh_holder_list;
+ struct device *xsh_holder;
+ int xsh_count;
+ struct kref xsh_kref;
+};
+
+/*
+ * Represents a specific instance of a platform driver for a subdev, which
+ * provides services to its clients (other subdev drivers or the root driver).
+ */
+struct xrt_subdev {
+ struct list_head xs_dev_list;
+ struct list_head xs_holder_list;
+ enum xrt_subdev_id xs_id; /* type of subdev */
+ struct platform_device *xs_pdev; /* a particular subdev inst */
+ struct completion xs_holder_comp;
+};
+
+static struct xrt_subdev *xrt_subdev_alloc(void)
+{
+ struct xrt_subdev *sdev = kzalloc(sizeof(*sdev), GFP_KERNEL);
+
+ if (!sdev)
+ return NULL;
+
+ INIT_LIST_HEAD(&sdev->xs_dev_list);
+ INIT_LIST_HEAD(&sdev->xs_holder_list);
+ init_completion(&sdev->xs_holder_comp);
+ return sdev;
+}
+
+static void xrt_subdev_free(struct xrt_subdev *sdev)
+{
+ kfree(sdev);
+}
+
+int xrt_subdev_root_request(struct platform_device *self, u32 cmd, void *arg)
+{
+ struct device *dev = DEV(self);
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(self);
+
+ WARN_ON(!pdata->xsp_root_cb);
+ return (*pdata->xsp_root_cb)(dev->parent, pdata->xsp_root_cb_arg, cmd, arg);
+}
+
+/*
+ * Subdev common sysfs nodes.
+ */
+static ssize_t holders_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ ssize_t len;
+ struct platform_device *pdev = to_platform_device(dev);
+ struct xrt_root_get_holders holders = { pdev, buf, 1024 };
+
+ len = xrt_subdev_root_request(pdev, XRT_ROOT_GET_LEAF_HOLDERS, &holders);
+ if (len >= holders.xpigh_holder_buf_len)
+ return len;
+ buf[len] = '\n';
+ return len + 1;
+}
+static DEVICE_ATTR_RO(holders);
+
+static struct attribute *xrt_subdev_attrs[] = {
+ &dev_attr_holders.attr,
+ NULL,
+};
+
+static ssize_t metadata_output(struct file *filp, struct kobject *kobj,
+ struct bin_attribute *attr, char *buf, loff_t off, size_t count)
+{
+ struct device *dev = kobj_to_dev(kobj);
+ struct platform_device *pdev = to_platform_device(dev);
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
+ unsigned char *blob;
+ unsigned long size;
+ ssize_t ret = 0;
+
+ blob = pdata->xsp_dtb;
+ size = xrt_md_size(dev, blob);
+ if (size == XRT_MD_INVALID_LENGTH) {
+ ret = -EINVAL;
+ goto failed;
+ }
+
+ if (off >= size)
+ goto failed;
+
+ if (off + count > size)
+ count = size - off;
+ memcpy(buf, blob + off, count);
+
+ ret = count;
+failed:
+ return ret;
+}
+
+static struct bin_attribute meta_data_attr = {
+ .attr = {
+ .name = "metadata",
+ .mode = 0400
+ },
+ .read = metadata_output,
+ .size = 0
+};
+
+static struct bin_attribute *xrt_subdev_bin_attrs[] = {
+ &meta_data_attr,
+ NULL,
+};
+
+static const struct attribute_group xrt_subdev_attrgroup = {
+ .attrs = xrt_subdev_attrs,
+ .bin_attrs = xrt_subdev_bin_attrs,
+};
+
+/*
+ * Given the device metadata, parse it to get IO ranges and construct
+ * resource array.
+ */
+static int
+xrt_subdev_getres(struct device *parent, enum xrt_subdev_id id,
+ char *dtb, struct resource **res, int *res_num)
+{
+ struct xrt_subdev_platdata *pdata;
+ struct resource *pci_res = NULL;
+ const u64 *bar_range;
+ const u32 *bar_idx;
+ char *ep_name = NULL, *regmap = NULL;
+ uint bar;
+ int count1 = 0, count2 = 0, ret;
+
+ if (!dtb)
+ return -EINVAL;
+
+ pdata = DEV_PDATA(to_platform_device(parent));
+
+ /* go through metadata and count endpoints in it */
+ for (xrt_md_get_next_endpoint(parent, dtb, NULL, NULL, &ep_name, ®map); ep_name;
+ xrt_md_get_next_endpoint(parent, dtb, ep_name, regmap, &ep_name, ®map)) {
+ ret = xrt_md_get_prop(parent, dtb, ep_name, regmap,
+ XRT_MD_PROP_IO_OFFSET, (const void **)&bar_range, NULL);
+ if (!ret)
+ count1++;
+ }
+ if (!count1)
+ return 0;
+
+	/* allocate a resource array for all endpoints found in the metadata */
+	*res = vzalloc(sizeof(**res) * count1);
+	if (!*res)
+		return -ENOMEM;
+
+ /* go through all endpoints again and get IO range for each endpoint */
+ for (xrt_md_get_next_endpoint(parent, dtb, NULL, NULL, &ep_name, ®map); ep_name;
+ xrt_md_get_next_endpoint(parent, dtb, ep_name, regmap, &ep_name, ®map)) {
+ ret = xrt_md_get_prop(parent, dtb, ep_name, regmap,
+ XRT_MD_PROP_IO_OFFSET, (const void **)&bar_range, NULL);
+ if (ret)
+ continue;
+ xrt_md_get_prop(parent, dtb, ep_name, regmap,
+ XRT_MD_PROP_BAR_IDX, (const void **)&bar_idx, NULL);
+ bar = bar_idx ? be32_to_cpu(*bar_idx) : 0;
+ xleaf_get_barres(to_platform_device(parent), &pci_res, bar);
+ (*res)[count2].start = pci_res->start +
+ be64_to_cpu(bar_range[0]);
+ (*res)[count2].end = pci_res->start +
+ be64_to_cpu(bar_range[0]) +
+ be64_to_cpu(bar_range[1]) - 1;
+ (*res)[count2].flags = IORESOURCE_MEM;
+		/* check for a conflicting resource */
+		ret = request_resource(pci_res, *res + count2);
+		if (ret) {
+			dev_err(parent, "conflicting resource %pR\n", *res + count2);
+ vfree(*res);
+ *res_num = 0;
+ *res = NULL;
+ return ret;
+ }
+ release_resource(*res + count2);
+
+ (*res)[count2].parent = pci_res;
+
+ xrt_md_find_endpoint(parent, pdata->xsp_dtb, ep_name,
+ regmap, &(*res)[count2].name);
+
+ count2++;
+ }
+
+ WARN_ON(count1 != count2);
+ *res_num = count2;
+
+ return 0;
+}
+
+static inline enum xrt_subdev_file_mode
+xleaf_devnode_mode(struct xrt_subdev_drvdata *drvdata)
+{
+ return drvdata->xsd_file_ops.xsf_mode;
+}
+
+static bool xrt_subdev_cdev_auto_creation(struct platform_device *pdev)
+{
+	struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(pdev);
+	enum xrt_subdev_file_mode mode;
+
+	if (!drvdata)
+		return false;
+	mode = xleaf_devnode_mode(drvdata);
+
+	if (!xleaf_devnode_enabled(drvdata))
+		return false;
+
+ return (mode == XRT_SUBDEV_FILE_DEFAULT || mode == XRT_SUBDEV_FILE_MULTI_INST);
+}
+
+static struct xrt_subdev *
+xrt_subdev_create(struct device *parent, enum xrt_subdev_id id,
+ xrt_subdev_root_cb_t pcb, void *pcb_arg, char *dtb)
+{
+ struct xrt_subdev_platdata *pdata = NULL;
+ struct platform_device *pdev = NULL;
+ int inst = PLATFORM_DEVID_NONE;
+ struct xrt_subdev *sdev = NULL;
+ struct resource *res = NULL;
+ unsigned long dtb_len = 0;
+ int res_num = 0;
+ size_t pdata_sz;
+ int ret;
+
+ sdev = xrt_subdev_alloc();
+ if (!sdev) {
+ dev_err(parent, "failed to alloc subdev for ID %d", id);
+ goto fail;
+ }
+ sdev->xs_id = id;
+
+ if (!dtb) {
+ ret = xrt_md_create(parent, &dtb);
+ if (ret) {
+ dev_err(parent, "can't create empty dtb: %d", ret);
+ goto fail;
+ }
+ }
+ xrt_md_pack(parent, dtb);
+ dtb_len = xrt_md_size(parent, dtb);
+ if (dtb_len == XRT_MD_INVALID_LENGTH) {
+		dev_err(parent, "invalid metadata len %lu", dtb_len);
+ goto fail;
+ }
+ pdata_sz = sizeof(struct xrt_subdev_platdata) + dtb_len;
+
+ /* Prepare platform data passed to subdev. */
+ pdata = vzalloc(pdata_sz);
+ if (!pdata)
+ goto fail;
+
+ pdata->xsp_root_cb = pcb;
+ pdata->xsp_root_cb_arg = pcb_arg;
+ memcpy(pdata->xsp_dtb, dtb, dtb_len);
+ if (id == XRT_SUBDEV_GRP) {
+ /* Group can only be created by root driver. */
+ pdata->xsp_root_name = dev_name(parent);
+ } else {
+ struct platform_device *grp = to_platform_device(parent);
+ /* Leaf can only be created by group driver. */
+ WARN_ON(strncmp(xrt_drv_name(XRT_SUBDEV_GRP),
+ platform_get_device_id(grp)->name,
+ strlen(xrt_drv_name(XRT_SUBDEV_GRP)) + 1));
+ pdata->xsp_root_name = DEV_PDATA(grp)->xsp_root_name;
+ }
+
+ /* Obtain dev instance number. */
+ inst = xrt_drv_get_instance(id);
+ if (inst < 0) {
+ dev_err(parent, "failed to obtain instance: %d", inst);
+ goto fail;
+ }
+
+ /* Create subdev. */
+ if (id != XRT_SUBDEV_GRP) {
+ int rc = xrt_subdev_getres(parent, id, dtb, &res, &res_num);
+
+ if (rc) {
+ dev_err(parent, "failed to get resource for %s.%d: %d",
+ xrt_drv_name(id), inst, rc);
+ goto fail;
+ }
+ }
+ pdev = platform_device_register_resndata(parent, xrt_drv_name(id),
+ inst, res, res_num, pdata, pdata_sz);
+ vfree(res);
+ if (IS_ERR(pdev)) {
+ dev_err(parent, "failed to create subdev for %s inst %d: %ld",
+ xrt_drv_name(id), inst, PTR_ERR(pdev));
+ goto fail;
+ }
+ sdev->xs_pdev = pdev;
+
+ if (device_attach(DEV(pdev)) != 1) {
+ xrt_err(pdev, "failed to attach");
+ goto fail;
+ }
+
+ if (sysfs_create_group(&DEV(pdev)->kobj, &xrt_subdev_attrgroup))
+ xrt_err(pdev, "failed to create sysfs group");
+
+ /*
+	 * Create a sysfs symlink under the root for each leaf,
+	 * whichever group it lives in, for easy access.
+ */
+ if (id != XRT_SUBDEV_GRP) {
+ if (sysfs_create_link(&find_root(pdev)->kobj,
+ &DEV(pdev)->kobj, dev_name(DEV(pdev)))) {
+ xrt_err(pdev, "failed to create sysfs link");
+ }
+ }
+
+	/* All done, ready to handle requests through the cdev. */
+ if (xrt_subdev_cdev_auto_creation(pdev))
+ xleaf_devnode_create(pdev, DEV_DRVDATA(pdev)->xsd_file_ops.xsf_dev_name, NULL);
+
+ vfree(pdata);
+ return sdev;
+
+fail:
+ vfree(pdata);
+ if (sdev && !IS_ERR_OR_NULL(sdev->xs_pdev))
+ platform_device_unregister(sdev->xs_pdev);
+ if (inst >= 0)
+ xrt_drv_put_instance(id, inst);
+ xrt_subdev_free(sdev);
+ return NULL;
+}
+
+static void xrt_subdev_destroy(struct xrt_subdev *sdev)
+{
+ struct platform_device *pdev = sdev->xs_pdev;
+ struct device *dev = DEV(pdev);
+ int inst = pdev->id;
+ int ret;
+
+ /* Take down the device node */
+ if (xrt_subdev_cdev_auto_creation(pdev)) {
+ ret = xleaf_devnode_destroy(pdev);
+ WARN_ON(ret);
+ }
+ if (sdev->xs_id != XRT_SUBDEV_GRP)
+ sysfs_remove_link(&find_root(pdev)->kobj, dev_name(dev));
+ sysfs_remove_group(&dev->kobj, &xrt_subdev_attrgroup);
+ platform_device_unregister(pdev);
+ xrt_drv_put_instance(sdev->xs_id, inst);
+ xrt_subdev_free(sdev);
+}
+
+struct platform_device *
+xleaf_get_leaf(struct platform_device *pdev, xrt_subdev_match_t match_cb, void *match_arg)
+{
+ int rc;
+ struct xrt_root_get_leaf get_leaf = {
+ pdev, match_cb, match_arg, };
+
+ rc = xrt_subdev_root_request(pdev, XRT_ROOT_GET_LEAF, &get_leaf);
+ if (rc)
+ return NULL;
+ return get_leaf.xpigl_tgt_pdev;
+}
+EXPORT_SYMBOL_GPL(xleaf_get_leaf);
+
+bool xleaf_has_endpoint(struct platform_device *pdev, const char *endpoint_name)
+{
+ struct resource *res;
+ int i = 0;
+
+ do {
+ res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+ if (res && !strncmp(res->name, endpoint_name, strlen(res->name) + 1))
+ return true;
+ ++i;
+ } while (res);
+
+ return false;
+}
+EXPORT_SYMBOL_GPL(xleaf_has_endpoint);
+
+int xleaf_put_leaf(struct platform_device *pdev, struct platform_device *leaf)
+{
+ struct xrt_root_put_leaf put_leaf = { pdev, leaf };
+
+ return xrt_subdev_root_request(pdev, XRT_ROOT_PUT_LEAF, &put_leaf);
+}
+EXPORT_SYMBOL_GPL(xleaf_put_leaf);
+
+int xleaf_create_group(struct platform_device *pdev, char *dtb)
+{
+ return xrt_subdev_root_request(pdev, XRT_ROOT_CREATE_GROUP, dtb);
+}
+EXPORT_SYMBOL_GPL(xleaf_create_group);
+
+int xleaf_destroy_group(struct platform_device *pdev, int instance)
+{
+ return xrt_subdev_root_request(pdev, XRT_ROOT_REMOVE_GROUP, (void *)(uintptr_t)instance);
+}
+EXPORT_SYMBOL_GPL(xleaf_destroy_group);
+
+int xleaf_wait_for_group_bringup(struct platform_device *pdev)
+{
+ return xrt_subdev_root_request(pdev, XRT_ROOT_WAIT_GROUP_BRINGUP, NULL);
+}
+EXPORT_SYMBOL_GPL(xleaf_wait_for_group_bringup);
+
+static ssize_t
+xrt_subdev_get_holders(struct xrt_subdev *sdev, char *buf, size_t len)
+{
+ const struct list_head *ptr;
+ struct xrt_subdev_holder *h;
+ ssize_t n = 0;
+
+ list_for_each(ptr, &sdev->xs_holder_list) {
+ h = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
+ n += snprintf(buf + n, len - n, "%s:%d ",
+ dev_name(h->xsh_holder), kref_read(&h->xsh_kref));
+ if (n >= (len - 1))
+ break;
+ }
+ return n;
+}
+
+void xrt_subdev_pool_init(struct device *dev, struct xrt_subdev_pool *spool)
+{
+ INIT_LIST_HEAD(&spool->xsp_dev_list);
+ spool->xsp_owner = dev;
+ mutex_init(&spool->xsp_lock);
+ spool->xsp_closing = false;
+}
+
+static void xrt_subdev_free_holder(struct xrt_subdev_holder *holder)
+{
+ list_del(&holder->xsh_holder_list);
+ vfree(holder);
+}
+
+static void xrt_subdev_pool_wait_for_holders(struct xrt_subdev_pool *spool, struct xrt_subdev *sdev)
+{
+ const struct list_head *ptr, *next;
+ char holders[128];
+ struct xrt_subdev_holder *holder;
+ struct mutex *lk = &spool->xsp_lock;
+
+ while (!list_empty(&sdev->xs_holder_list)) {
+ int rc;
+
+		/* It's most likely a bug if we ever enter this loop. */
+ xrt_subdev_get_holders(sdev, holders, sizeof(holders));
+ xrt_err(sdev->xs_pdev, "awaits holders: %s", holders);
+ mutex_unlock(lk);
+ rc = wait_for_completion_killable(&sdev->xs_holder_comp);
+ mutex_lock(lk);
+ if (rc == -ERESTARTSYS) {
+ xrt_err(sdev->xs_pdev, "give up on waiting for holders, clean up now");
+ list_for_each_safe(ptr, next, &sdev->xs_holder_list) {
+ holder = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
+ xrt_subdev_free_holder(holder);
+ }
+ }
+ }
+}
+
+void xrt_subdev_pool_fini(struct xrt_subdev_pool *spool)
+{
+ struct list_head *dl = &spool->xsp_dev_list;
+ struct mutex *lk = &spool->xsp_lock;
+
+ mutex_lock(lk);
+ if (spool->xsp_closing) {
+ mutex_unlock(lk);
+ return;
+ }
+ spool->xsp_closing = true;
+ mutex_unlock(lk);
+
+	/* Remove subdevs in the reverse order they were added. */
+ while (!list_empty(dl)) {
+ struct xrt_subdev *sdev = list_first_entry(dl, struct xrt_subdev, xs_dev_list);
+
+ xrt_subdev_pool_wait_for_holders(spool, sdev);
+ list_del(&sdev->xs_dev_list);
+ xrt_subdev_destroy(sdev);
+ }
+}
+
+static struct xrt_subdev_holder *xrt_subdev_find_holder(struct xrt_subdev *sdev,
+ struct device *holder_dev)
+{
+ struct list_head *hl = &sdev->xs_holder_list;
+ struct xrt_subdev_holder *holder;
+ const struct list_head *ptr;
+
+ list_for_each(ptr, hl) {
+ holder = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
+ if (holder->xsh_holder == holder_dev)
+ return holder;
+ }
+ return NULL;
+}
+
+static int xrt_subdev_hold(struct xrt_subdev *sdev, struct device *holder_dev)
+{
+ struct xrt_subdev_holder *holder = xrt_subdev_find_holder(sdev, holder_dev);
+ struct list_head *hl = &sdev->xs_holder_list;
+
+ if (!holder) {
+ holder = vzalloc(sizeof(*holder));
+ if (!holder)
+ return -ENOMEM;
+ holder->xsh_holder = holder_dev;
+ kref_init(&holder->xsh_kref);
+ list_add_tail(&holder->xsh_holder_list, hl);
+ } else {
+ kref_get(&holder->xsh_kref);
+ }
+
+ return 0;
+}
+
+static void xrt_subdev_free_holder_kref(struct kref *kref)
+{
+ struct xrt_subdev_holder *holder = container_of(kref, struct xrt_subdev_holder, xsh_kref);
+
+ xrt_subdev_free_holder(holder);
+}
+
+static int
+xrt_subdev_release(struct xrt_subdev *sdev, struct device *holder_dev)
+{
+ struct xrt_subdev_holder *holder = xrt_subdev_find_holder(sdev, holder_dev);
+ struct list_head *hl = &sdev->xs_holder_list;
+
+ if (!holder) {
+ dev_err(holder_dev, "can't release, %s did not hold %s",
+ dev_name(holder_dev), dev_name(DEV(sdev->xs_pdev)));
+ return -EINVAL;
+ }
+ kref_put(&holder->xsh_kref, xrt_subdev_free_holder_kref);
+
+ /* kref_put above may remove holder from list. */
+ if (list_empty(hl))
+ complete(&sdev->xs_holder_comp);
+ return 0;
+}
+
+int xrt_subdev_pool_add(struct xrt_subdev_pool *spool, enum xrt_subdev_id id,
+ xrt_subdev_root_cb_t pcb, void *pcb_arg, char *dtb)
+{
+ struct mutex *lk = &spool->xsp_lock;
+ struct list_head *dl = &spool->xsp_dev_list;
+ struct xrt_subdev *sdev;
+ int ret = 0;
+
+ sdev = xrt_subdev_create(spool->xsp_owner, id, pcb, pcb_arg, dtb);
+ if (sdev) {
+ mutex_lock(lk);
+ if (spool->xsp_closing) {
+ /* No new subdev when pool is going away. */
+ xrt_err(sdev->xs_pdev, "pool is closing");
+ ret = -ENODEV;
+ } else {
+ list_add(&sdev->xs_dev_list, dl);
+ }
+ mutex_unlock(lk);
+ if (ret)
+ xrt_subdev_destroy(sdev);
+ } else {
+ ret = -EINVAL;
+ }
+
+ ret = ret ? ret : sdev->xs_pdev->id;
+ return ret;
+}
+
+int xrt_subdev_pool_del(struct xrt_subdev_pool *spool, enum xrt_subdev_id id, int instance)
+{
+ const struct list_head *ptr;
+ struct mutex *lk = &spool->xsp_lock;
+ struct list_head *dl = &spool->xsp_dev_list;
+ struct xrt_subdev *sdev;
+ int ret = -ENOENT;
+
+ mutex_lock(lk);
+ if (spool->xsp_closing) {
+ /* Pool is going away, all subdevs will be gone. */
+ mutex_unlock(lk);
+ return 0;
+ }
+ list_for_each(ptr, dl) {
+ sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+ if (sdev->xs_id != id || sdev->xs_pdev->id != instance)
+ continue;
+ xrt_subdev_pool_wait_for_holders(spool, sdev);
+ list_del(&sdev->xs_dev_list);
+ ret = 0;
+ break;
+ }
+ mutex_unlock(lk);
+ if (ret)
+ return ret;
+
+ xrt_subdev_destroy(sdev);
+ return 0;
+}
+
+static int xrt_subdev_pool_get_impl(struct xrt_subdev_pool *spool, xrt_subdev_match_t match,
+ void *arg, struct device *holder_dev, struct xrt_subdev **sdevp)
+{
+ struct platform_device *pdev = (struct platform_device *)arg;
+ struct list_head *dl = &spool->xsp_dev_list;
+ struct mutex *lk = &spool->xsp_lock;
+ struct xrt_subdev *sdev = NULL;
+ const struct list_head *ptr;
+ struct xrt_subdev *d = NULL;
+ int ret = -ENOENT;
+
+ mutex_lock(lk);
+
+ if (!pdev) {
+ if (match == XRT_SUBDEV_MATCH_PREV) {
+ sdev = list_empty(dl) ? NULL :
+ list_last_entry(dl, struct xrt_subdev, xs_dev_list);
+ } else if (match == XRT_SUBDEV_MATCH_NEXT) {
+ sdev = list_first_entry_or_null(dl, struct xrt_subdev, xs_dev_list);
+ }
+ }
+
+ list_for_each(ptr, dl) {
+ d = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+ if (match == XRT_SUBDEV_MATCH_PREV || match == XRT_SUBDEV_MATCH_NEXT) {
+ if (d->xs_pdev != pdev)
+ continue;
+ } else {
+ if (!match(d->xs_id, d->xs_pdev, arg))
+ continue;
+ }
+
+ if (match == XRT_SUBDEV_MATCH_PREV)
+ sdev = !list_is_first(ptr, dl) ? list_prev_entry(d, xs_dev_list) : NULL;
+ else if (match == XRT_SUBDEV_MATCH_NEXT)
+ sdev = !list_is_last(ptr, dl) ? list_next_entry(d, xs_dev_list) : NULL;
+ else
+ sdev = d;
+ }
+
+ if (sdev)
+ ret = xrt_subdev_hold(sdev, holder_dev);
+
+ mutex_unlock(lk);
+
+ if (!ret)
+ *sdevp = sdev;
+ return ret;
+}
+
+int xrt_subdev_pool_get(struct xrt_subdev_pool *spool, xrt_subdev_match_t match, void *arg,
+ struct device *holder_dev, struct platform_device **pdevp)
+{
+ int rc;
+ struct xrt_subdev *sdev;
+
+ rc = xrt_subdev_pool_get_impl(spool, match, arg, holder_dev, &sdev);
+ if (rc) {
+ if (rc != -ENOENT)
+ dev_err(holder_dev, "failed to hold device: %d", rc);
+ return rc;
+ }
+
+ if (!IS_ROOT_DEV(holder_dev)) {
+ xrt_dbg(to_platform_device(holder_dev), "%s <<==== %s",
+ dev_name(holder_dev), dev_name(DEV(sdev->xs_pdev)));
+ }
+
+ *pdevp = sdev->xs_pdev;
+ return 0;
+}
+
+static int xrt_subdev_pool_put_impl(struct xrt_subdev_pool *spool, struct platform_device *pdev,
+ struct device *holder_dev)
+{
+ const struct list_head *ptr;
+ struct mutex *lk = &spool->xsp_lock;
+ struct list_head *dl = &spool->xsp_dev_list;
+ struct xrt_subdev *sdev;
+ int ret = -ENOENT;
+
+ mutex_lock(lk);
+ list_for_each(ptr, dl) {
+ sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+ if (sdev->xs_pdev != pdev)
+ continue;
+ ret = xrt_subdev_release(sdev, holder_dev);
+ break;
+ }
+ mutex_unlock(lk);
+
+ return ret;
+}
+
+int xrt_subdev_pool_put(struct xrt_subdev_pool *spool, struct platform_device *pdev,
+ struct device *holder_dev)
+{
+ int ret = xrt_subdev_pool_put_impl(spool, pdev, holder_dev);
+
+ if (ret)
+ return ret;
+
+ if (!IS_ROOT_DEV(holder_dev)) {
+ xrt_dbg(to_platform_device(holder_dev), "%s <<==X== %s",
+ dev_name(holder_dev), dev_name(DEV(pdev)));
+ }
+ return 0;
+}
+
+void xrt_subdev_pool_trigger_event(struct xrt_subdev_pool *spool, enum xrt_events e)
+{
+ struct platform_device *tgt = NULL;
+ struct xrt_subdev *sdev = NULL;
+ struct xrt_event evt;
+
+ while (!xrt_subdev_pool_get_impl(spool, XRT_SUBDEV_MATCH_NEXT,
+ tgt, spool->xsp_owner, &sdev)) {
+ tgt = sdev->xs_pdev;
+ evt.xe_evt = e;
+ evt.xe_subdev.xevt_subdev_id = sdev->xs_id;
+ evt.xe_subdev.xevt_subdev_instance = tgt->id;
+ xrt_subdev_root_request(tgt, XRT_ROOT_EVENT_SYNC, &evt);
+ xrt_subdev_pool_put_impl(spool, tgt, spool->xsp_owner);
+ }
+}
+
+void xrt_subdev_pool_handle_event(struct xrt_subdev_pool *spool, struct xrt_event *evt)
+{
+ struct platform_device *tgt = NULL;
+ struct xrt_subdev *sdev = NULL;
+
+ while (!xrt_subdev_pool_get_impl(spool, XRT_SUBDEV_MATCH_NEXT,
+ tgt, spool->xsp_owner, &sdev)) {
+ tgt = sdev->xs_pdev;
+ xleaf_call(tgt, XRT_XLEAF_EVENT, evt);
+ xrt_subdev_pool_put_impl(spool, tgt, spool->xsp_owner);
+ }
+}
+
+ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
+ struct platform_device *pdev, char *buf, size_t len)
+{
+ const struct list_head *ptr;
+ struct mutex *lk = &spool->xsp_lock;
+ struct list_head *dl = &spool->xsp_dev_list;
+ struct xrt_subdev *sdev;
+ ssize_t ret = 0;
+
+ mutex_lock(lk);
+ list_for_each(ptr, dl) {
+ sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
+ if (sdev->xs_pdev != pdev)
+ continue;
+ ret = xrt_subdev_get_holders(sdev, buf, len);
+ break;
+ }
+ mutex_unlock(lk);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(xrt_subdev_pool_get_holders);
+
+int xleaf_broadcast_event(struct platform_device *pdev, enum xrt_events evt, bool async)
+{
+ struct xrt_event e = { evt, };
+ enum xrt_root_cmd cmd = async ? XRT_ROOT_EVENT_ASYNC : XRT_ROOT_EVENT_SYNC;
+
+ WARN_ON(evt == XRT_EVENT_POST_CREATION || evt == XRT_EVENT_PRE_REMOVAL);
+ return xrt_subdev_root_request(pdev, cmd, &e);
+}
+EXPORT_SYMBOL_GPL(xleaf_broadcast_event);
+
+void xleaf_hot_reset(struct platform_device *pdev)
+{
+ xrt_subdev_root_request(pdev, XRT_ROOT_HOT_RESET, NULL);
+}
+EXPORT_SYMBOL_GPL(xleaf_hot_reset);
+
+void xleaf_get_barres(struct platform_device *pdev, struct resource **res, uint bar_idx)
+{
+ struct xrt_root_get_res arg = { 0 };
+
+ if (bar_idx > PCI_STD_RESOURCE_END) {
+ xrt_err(pdev, "Invalid bar idx %d", bar_idx);
+ *res = NULL;
+ return;
+ }
+
+ xrt_subdev_root_request(pdev, XRT_ROOT_GET_RESOURCE, &arg);
+
+ *res = &arg.xpigr_res[bar_idx];
+}
+
+void xleaf_get_root_id(struct platform_device *pdev, unsigned short *vendor, unsigned short *device,
+ unsigned short *subvendor, unsigned short *subdevice)
+{
+ struct xrt_root_get_id id = { 0 };
+
+ WARN_ON(!vendor && !device && !subvendor && !subdevice);
+
+ xrt_subdev_root_request(pdev, XRT_ROOT_GET_ID, (void *)&id);
+ if (vendor)
+ *vendor = id.xpigi_vendor_id;
+ if (device)
+ *device = id.xpigi_device_id;
+ if (subvendor)
+ *subvendor = id.xpigi_sub_vendor_id;
+ if (subdevice)
+ *subdevice = id.xpigi_sub_device_id;
+}
+
+struct device *xleaf_register_hwmon(struct platform_device *pdev, const char *name, void *drvdata,
+ const struct attribute_group **grps)
+{
+ struct xrt_root_hwmon hm = { true, name, drvdata, grps, };
+
+ xrt_subdev_root_request(pdev, XRT_ROOT_HWMON, (void *)&hm);
+ return hm.xpih_hwmon_dev;
+}
+
+void xleaf_unregister_hwmon(struct platform_device *pdev, struct device *hwmon)
+{
+ struct xrt_root_hwmon hm = { false, };
+
+ hm.xpih_hwmon_dev = hwmon;
+ xrt_subdev_root_request(pdev, XRT_ROOT_HWMON, (void *)&hm);
+}
--
2.27.0
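The holder bookkeeping in xrt_subdev_hold()/xrt_subdev_release() above can be modeled outside the kernel as a plain reference-counted list. This is an illustrative sketch only: the names are made up, and libc allocation stands in for the driver's list_head/kref/vzalloc machinery.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Toy model of a subdev's holder list: each distinct holder gets one
 * entry with a reference count; repeat holds bump the count, and the
 * entry disappears when the count drops to zero. */
struct toy_holder {
	const char *name;
	int refs;
	struct toy_holder *next;
};

static struct toy_holder *toy_hold(struct toy_holder *list, const char *name)
{
	struct toy_holder *h;

	for (h = list; h; h = h->next) {
		if (strcmp(h->name, name) == 0) {
			h->refs++;		/* existing holder: kref_get() */
			return list;
		}
	}
	h = calloc(1, sizeof(*h));		/* new holder: kref_init() to 1 */
	h->name = name;
	h->refs = 1;
	h->next = list;
	return h;
}

/* Drop one reference; free and unlink the entry when it hits zero.
 * Returns the (possibly shortened) list head. */
static struct toy_holder *toy_release(struct toy_holder *list, const char *name)
{
	struct toy_holder **pp = &list;

	while (*pp) {
		if (strcmp((*pp)->name, name) == 0) {
			if (--(*pp)->refs == 0) {
				struct toy_holder *dead = *pp;

				*pp = dead->next;
				free(dead);
			}
			return list;
		}
		pp = &(*pp)->next;
	}
	return list;	/* not a holder: the driver reports -EINVAL here */
}
```

When the last holder's count reaches zero the entry goes away, which corresponds to the point where the driver completes xs_holder_comp so teardown can proceed.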
Alveo FPGA firmware and partial reconfiguration files are in xclbin format.
This code enumerates and extracts sections from xclbin files. xclbin.h is
cross-platform and shared across all platforms and operating systems.
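As a rough illustration of the lookup this patch implements, the section table scan and bounds check in xrt_xclbin_get_section_hdr() can be sketched with trimmed-down stand-ins for the axlf structures (field names follow the patch, but the layout here is simplified and not the real UAPI layout):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Trimmed-down stand-ins for the UAPI structs; illustration only. */
struct axlf_section_header {
	uint32_t section_kind;
	uint64_t section_offset;
	uint64_t section_size;
};

struct axlf {
	struct {
		uint64_t length;	/* total size of the xclbin file */
		uint32_t num_sections;
	} header;
	struct axlf_section_header sections[8];	/* fixed size for the sketch */
};

/* Linear scan over the section header table, as in
 * xrt_xclbin_get_section_hdr(). Returns NULL when the kind is absent. */
static const struct axlf_section_header *
find_section(const struct axlf *xclbin, uint32_t kind)
{
	uint32_t i;

	for (i = 0; i < xclbin->header.num_sections; i++) {
		if (xclbin->sections[i].section_kind == kind)
			return &xclbin->sections[i];
	}
	return NULL;
}

/* Overflow-safe form of the bounds check: the section must lie entirely
 * inside the file. */
static int section_in_bounds(const struct axlf *xclbin,
			     const struct axlf_section_header *h)
{
	uint64_t len = xclbin->header.length;

	return h->section_offset <= len &&
	       h->section_size <= len - h->section_offset;
}
```

A matching section that runs past header.length is rejected, which is what lets xrt_xclbin_get_section() safely memcpy the section body afterwards.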
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/xclbin-helper.h | 48 +++
drivers/fpga/xrt/lib/xclbin.c | 369 ++++++++++++++++++++
include/uapi/linux/xrt/xclbin.h | 409 +++++++++++++++++++++++
3 files changed, 826 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xclbin-helper.h
create mode 100644 drivers/fpga/xrt/lib/xclbin.c
create mode 100644 include/uapi/linux/xrt/xclbin.h
diff --git a/drivers/fpga/xrt/include/xclbin-helper.h b/drivers/fpga/xrt/include/xclbin-helper.h
new file mode 100644
index 000000000000..382b1de97b0a
--- /dev/null
+++ b/drivers/fpga/xrt/include/xclbin-helper.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * David Zhang <[email protected]>
+ * Sonal Santan <[email protected]>
+ */
+
+#ifndef _XCLBIN_HELPER_H_
+#define _XCLBIN_HELPER_H_
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/xrt/xclbin.h>
+
+#define XCLBIN_VERSION2 "xclbin2"
+#define XCLBIN_HWICAP_BITFILE_BUF_SZ 1024
+#define XCLBIN_MAX_SIZE (1024 * 1024 * 1024) /* Assuming xclbin <= 1G, always */
+
+enum axlf_section_kind;
+struct axlf;
+
+/**
+ * Bitstream header information as defined by Xilinx tools.
+ * Please note that this struct definition is not owned by the driver.
+ */
+struct xclbin_bit_head_info {
+ u32 header_length; /* Length of header in 32 bit words */
+ u32 bitstream_length; /* Length of bitstream to read in bytes */
+	const unchar *design_name;	/* Design name read from bitstream */
+ const unchar *part_name; /* Part name read from bitstream */
+ const unchar *date; /* Date read from bitstream header */
+ const unchar *time; /* Bitstream creation time */
+ u32 magic_length; /* Length of the magic numbers */
+ const unchar *version; /* Version string */
+};
+
+/* Caller must free the allocated memory for **data. len may be NULL. */
+int xrt_xclbin_get_section(struct device *dev, const struct axlf *xclbin,
+ enum axlf_section_kind kind, void **data,
+ uint64_t *len);
+int xrt_xclbin_get_metadata(struct device *dev, const struct axlf *xclbin, char **dtb);
+int xrt_xclbin_parse_bitstream_header(struct device *dev, const unchar *data,
+ u32 size, struct xclbin_bit_head_info *head_info);
+const char *xrt_clock_type2epname(enum XCLBIN_CLOCK_TYPE type);
+
+#endif /* _XCLBIN_HELPER_H_ */
diff --git a/drivers/fpga/xrt/lib/xclbin.c b/drivers/fpga/xrt/lib/xclbin.c
new file mode 100644
index 000000000000..31b363c014a3
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xclbin.c
@@ -0,0 +1,369 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Driver XCLBIN parser
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors: David Zhang <[email protected]>
+ */
+
+#include <asm/errno.h>
+#include <linux/vmalloc.h>
+#include <linux/device.h>
+#include "xclbin-helper.h"
+#include "metadata.h"
+
+/* Used for parsing bitstream header */
+#define BITSTREAM_EVEN_MAGIC_BYTE 0x0f
+#define BITSTREAM_ODD_MAGIC_BYTE 0xf0
+
+static int xrt_xclbin_get_section_hdr(const struct axlf *xclbin,
+ enum axlf_section_kind kind,
+ const struct axlf_section_header **header)
+{
+ const struct axlf_section_header *phead = NULL;
+ u64 xclbin_len;
+ int i;
+
+ *header = NULL;
+ for (i = 0; i < xclbin->header.num_sections; i++) {
+ if (xclbin->sections[i].section_kind == kind) {
+ phead = &xclbin->sections[i];
+ break;
+ }
+ }
+
+ if (!phead)
+ return -ENOENT;
+
+ xclbin_len = xclbin->header.length;
+	if (xclbin_len > XCLBIN_MAX_SIZE ||
+	    phead->section_offset > xclbin_len ||
+	    phead->section_size > xclbin_len - phead->section_offset)
+ return -EINVAL;
+
+ *header = phead;
+ return 0;
+}
+
+static int xrt_xclbin_section_info(const struct axlf *xclbin,
+ enum axlf_section_kind kind,
+ u64 *offset, u64 *size)
+{
+ const struct axlf_section_header *mem_header = NULL;
+ int rc;
+
+ rc = xrt_xclbin_get_section_hdr(xclbin, kind, &mem_header);
+ if (rc)
+ return rc;
+
+ *offset = mem_header->section_offset;
+ *size = mem_header->section_size;
+
+ return 0;
+}
+
+/* caller must free the allocated memory for **data */
+int xrt_xclbin_get_section(struct device *dev,
+ const struct axlf *buf,
+ enum axlf_section_kind kind,
+ void **data, u64 *len)
+{
+	const struct axlf *xclbin = buf;
+ void *section = NULL;
+ u64 offset = 0;
+ u64 size = 0;
+ int err = 0;
+
+ if (!data) {
+ dev_err(dev, "invalid data pointer");
+ return -EINVAL;
+ }
+
+ err = xrt_xclbin_section_info(xclbin, kind, &offset, &size);
+ if (err) {
+ dev_dbg(dev, "parsing section failed. kind %d, err = %d", kind, err);
+ return err;
+ }
+
+ section = vzalloc(size);
+ if (!section)
+ return -ENOMEM;
+
+ memcpy(section, ((const char *)xclbin) + offset, size);
+
+ *data = section;
+ if (len)
+ *len = size;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xrt_xclbin_get_section);
+
+static inline int xclbin_bit_get_string(const unchar *data, u32 size,
+ u32 offset, unchar prefix,
+ const unchar **str)
+{
+ int len;
+ u32 tmp;
+
+ /* prefix and length will be 3 bytes */
+ if (offset + 3 > size)
+ return -EINVAL;
+
+ /* Read prefix */
+ tmp = data[offset++];
+ if (tmp != prefix)
+ return -EINVAL;
+
+ /* Get string length */
+ len = data[offset++];
+ len = (len << 8) | data[offset++];
+
+ if (offset + len > size)
+ return -EINVAL;
+
+ if (data[offset + len - 1] != '\0')
+ return -EINVAL;
+
+ *str = data + offset;
+
+ return len + 3;
+}
+
+/* parse bitstream header */
+int xrt_xclbin_parse_bitstream_header(struct device *dev, const unchar *data,
+ u32 size, struct xclbin_bit_head_info *head_info)
+{
+ u32 offset = 0;
+ int len, i;
+ u16 magic;
+
+ memset(head_info, 0, sizeof(*head_info));
+
+ /* Get "Magic" length */
+ if (size < sizeof(u16)) {
+ dev_err(dev, "invalid size");
+ return -EINVAL;
+ }
+
+ len = data[offset++];
+ len = (len << 8) | data[offset++];
+
+ if (offset + len > size) {
+ dev_err(dev, "invalid magic len");
+ return -EINVAL;
+ }
+ head_info->magic_length = len;
+
+ for (i = 0; i < head_info->magic_length - 1; i++) {
+ magic = data[offset++];
+ if (!(i % 2) && magic != BITSTREAM_EVEN_MAGIC_BYTE) {
+ dev_err(dev, "invalid magic even byte at %d", offset);
+ return -EINVAL;
+ }
+
+ if ((i % 2) && magic != BITSTREAM_ODD_MAGIC_BYTE) {
+ dev_err(dev, "invalid magic odd byte at %d", offset);
+ return -EINVAL;
+ }
+ }
+
+ if (offset + 3 > size) {
+ dev_err(dev, "invalid length of magic end");
+ return -EINVAL;
+ }
+ /* Read null end of magic data. */
+ if (data[offset++]) {
+ dev_err(dev, "invalid magic end");
+ return -EINVAL;
+ }
+
+ /* Read 0x01 (short) */
+ magic = data[offset++];
+ magic = (magic << 8) | data[offset++];
+
+ /* Check the "0x01" half word */
+ if (magic != 0x01) {
+ dev_err(dev, "invalid magic end");
+ return -EINVAL;
+ }
+
+ len = xclbin_bit_get_string(data, size, offset, 'a', &head_info->design_name);
+ if (len < 0) {
+ dev_err(dev, "get design name failed");
+ return -EINVAL;
+ }
+
+	head_info->version = strstr(head_info->design_name, "Version=");
+	if (head_info->version)
+		head_info->version += strlen("Version=");
+ offset += len;
+
+ len = xclbin_bit_get_string(data, size, offset, 'b', &head_info->part_name);
+ if (len < 0) {
+ dev_err(dev, "get part name failed");
+ return -EINVAL;
+ }
+ offset += len;
+
+ len = xclbin_bit_get_string(data, size, offset, 'c', &head_info->date);
+ if (len < 0) {
+		dev_err(dev, "get date failed");
+ return -EINVAL;
+ }
+ offset += len;
+
+ len = xclbin_bit_get_string(data, size, offset, 'd', &head_info->time);
+ if (len < 0) {
+ dev_err(dev, "get time failed");
+ return -EINVAL;
+ }
+ offset += len;
+
+ if (offset + 5 >= size) {
+		dev_err(dev, "cannot get bitstream length");
+ return -EINVAL;
+ }
+
+ /* Read 'e' */
+ if (data[offset++] != 'e') {
+ dev_err(dev, "invalid prefix of bitstream length");
+ return -EINVAL;
+ }
+
+ /* Get byte length of bitstream */
+ head_info->bitstream_length = data[offset++];
+ head_info->bitstream_length = (head_info->bitstream_length << 8) | data[offset++];
+ head_info->bitstream_length = (head_info->bitstream_length << 8) | data[offset++];
+ head_info->bitstream_length = (head_info->bitstream_length << 8) | data[offset++];
+
+ head_info->header_length = offset;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xrt_xclbin_parse_bitstream_header);
+
+struct xrt_clock_desc {
+ char *clock_ep_name;
+ u32 clock_xclbin_type;
+ char *clkfreq_ep_name;
+} clock_desc[] = {
+ {
+ .clock_ep_name = XRT_MD_NODE_CLK_KERNEL1,
+ .clock_xclbin_type = CT_DATA,
+ .clkfreq_ep_name = XRT_MD_NODE_CLKFREQ_K1,
+ },
+ {
+ .clock_ep_name = XRT_MD_NODE_CLK_KERNEL2,
+ .clock_xclbin_type = CT_KERNEL,
+ .clkfreq_ep_name = XRT_MD_NODE_CLKFREQ_K2,
+ },
+ {
+ .clock_ep_name = XRT_MD_NODE_CLK_KERNEL3,
+ .clock_xclbin_type = CT_SYSTEM,
+ .clkfreq_ep_name = XRT_MD_NODE_CLKFREQ_HBM,
+ },
+};
+
+const char *xrt_clock_type2epname(enum XCLBIN_CLOCK_TYPE type)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(clock_desc); i++) {
+ if (clock_desc[i].clock_xclbin_type == type)
+ return clock_desc[i].clock_ep_name;
+ }
+ return NULL;
+}
+EXPORT_SYMBOL_GPL(xrt_clock_type2epname);
+
+static const char *clock_type2clkfreq_name(enum XCLBIN_CLOCK_TYPE type)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(clock_desc); i++) {
+ if (clock_desc[i].clock_xclbin_type == type)
+ return clock_desc[i].clkfreq_ep_name;
+ }
+ return NULL;
+}
+
+static int xrt_xclbin_add_clock_metadata(struct device *dev,
+ const struct axlf *xclbin,
+ char *dtb)
+{
+ struct clock_freq_topology *clock_topo;
+ u16 freq;
+ int rc;
+ int i;
+
+ /* if clock section does not exist, add nothing and return success */
+ rc = xrt_xclbin_get_section(dev, xclbin, CLOCK_FREQ_TOPOLOGY,
+ (void **)&clock_topo, NULL);
+ if (rc == -ENOENT)
+ return 0;
+ else if (rc)
+ return rc;
+
+ for (i = 0; i < clock_topo->count; i++) {
+ u8 type = clock_topo->clock_freq[i].type;
+ const char *ep_name = xrt_clock_type2epname(type);
+ const char *counter_name = clock_type2clkfreq_name(type);
+
+ if (!ep_name || !counter_name)
+ continue;
+
+ freq = cpu_to_be16(clock_topo->clock_freq[i].freq_MHZ);
+ rc = xrt_md_set_prop(dev, dtb, ep_name, NULL, XRT_MD_PROP_CLK_FREQ,
+ &freq, sizeof(freq));
+ if (rc)
+ break;
+
+ rc = xrt_md_set_prop(dev, dtb, ep_name, NULL, XRT_MD_PROP_CLK_CNT,
+ counter_name, strlen(counter_name) + 1);
+ if (rc)
+ break;
+ }
+
+ vfree(clock_topo);
+
+ return rc;
+}
+
+int xrt_xclbin_get_metadata(struct device *dev, const struct axlf *xclbin, char **dtb)
+{
+ char *md = NULL, *newmd = NULL;
+ u64 len, md_len;
+ int rc;
+
+ *dtb = NULL;
+
+ rc = xrt_xclbin_get_section(dev, xclbin, PARTITION_METADATA, (void **)&md, &len);
+ if (rc)
+ goto done;
+
+ md_len = xrt_md_size(dev, md);
+
+ /* Sanity check the dtb section. */
+ if (md_len > len) {
+ rc = -EINVAL;
+ goto done;
+ }
+
+ /* use dup function here to convert incoming metadata to writable */
+ newmd = xrt_md_dup(dev, md);
+ if (!newmd) {
+		rc = -ENOMEM;
+ goto done;
+ }
+
+ /* Convert various needed xclbin sections into dtb. */
+ rc = xrt_xclbin_add_clock_metadata(dev, xclbin, newmd);
+
+ if (!rc)
+ *dtb = newmd;
+ else
+ vfree(newmd);
+done:
+ vfree(md);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xrt_xclbin_get_metadata);
diff --git a/include/uapi/linux/xrt/xclbin.h b/include/uapi/linux/xrt/xclbin.h
new file mode 100644
index 000000000000..baa14d6653ab
--- /dev/null
+++ b/include/uapi/linux/xrt/xclbin.h
@@ -0,0 +1,409 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+/*
+ * Xilinx FPGA compiled binary container format
+ *
+ * Copyright (C) 2015-2021, Xilinx Inc
+ */
+
+#ifndef _XCLBIN_H_
+#define _XCLBIN_H_
+
+#if defined(__KERNEL__)
+
+#include <linux/types.h>
+
+#elif defined(__cplusplus)
+
+#include <cstdlib>
+#include <cstdint>
+#include <algorithm>
+#include <uuid/uuid.h>
+
+#else
+
+#include <stdlib.h>
+#include <stdint.h>
+#include <uuid/uuid.h>
+
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * DOC: Container format for Xilinx FPGA images
+ * The container stores bitstreams, metadata and firmware images.
+ * xclbin/xsabin is an ELF-like binary container format. It is a structured
+ * series of sections. There is a file header followed by several section
+ * headers, which are followed by the sections themselves. A section header points to an
+ * actual section. There is an optional signature at the end. The
+ * following figure illustrates a typical xclbin:
+ *
+ * +---------------------+
+ * | |
+ * | HEADER |
+ * +---------------------+
+ * | SECTION HEADER |
+ * | |
+ * +---------------------+
+ * | ... |
+ * | |
+ * +---------------------+
+ * | SECTION HEADER |
+ * | |
+ * +---------------------+
+ * | SECTION |
+ * | |
+ * +---------------------+
+ * | ... |
+ * | |
+ * +---------------------+
+ * | SECTION |
+ * | |
+ * +---------------------+
+ * | SIGNATURE |
+ * | (OPTIONAL) |
+ * +---------------------+
+ */
+
+enum XCLBIN_MODE {
+ XCLBIN_FLAT = 0,
+ XCLBIN_PR,
+ XCLBIN_TANDEM_STAGE2,
+ XCLBIN_TANDEM_STAGE2_WITH_PR,
+ XCLBIN_HW_EMU,
+ XCLBIN_SW_EMU,
+ XCLBIN_MODE_MAX
+};
+
+enum axlf_section_kind {
+ BITSTREAM = 0,
+ CLEARING_BITSTREAM,
+ EMBEDDED_METADATA,
+ FIRMWARE,
+ DEBUG_DATA,
+ SCHED_FIRMWARE,
+ MEM_TOPOLOGY,
+ CONNECTIVITY,
+ IP_LAYOUT,
+ DEBUG_IP_LAYOUT,
+ DESIGN_CHECK_POINT,
+ CLOCK_FREQ_TOPOLOGY,
+ MCS,
+ BMC,
+ BUILD_METADATA,
+ KEYVALUE_METADATA,
+ USER_METADATA,
+ DNA_CERTIFICATE,
+ PDI,
+ BITSTREAM_PARTIAL_PDI,
+ PARTITION_METADATA,
+ EMULATION_DATA,
+ SYSTEM_METADATA,
+ SOFT_KERNEL,
+ ASK_FLASH,
+ AIE_METADATA,
+ ASK_GROUP_TOPOLOGY,
+ ASK_GROUP_CONNECTIVITY
+};
+
+enum MEM_TYPE {
+ MEM_DDR3 = 0,
+ MEM_DDR4,
+ MEM_DRAM,
+ MEM_STREAMING,
+ MEM_PREALLOCATED_GLOB,
+ MEM_ARE,
+ MEM_HBM,
+ MEM_BRAM,
+ MEM_URAM,
+ MEM_STREAMING_CONNECTION
+};
+
+enum IP_TYPE {
+ IP_MB = 0,
+ IP_KERNEL,
+ IP_DNASC,
+ IP_DDR4_CONTROLLER,
+ IP_MEM_DDR4,
+ IP_MEM_HBM
+};
+
+struct axlf_section_header {
+ uint32_t section_kind; /* Section type */
+ char section_name[16]; /* Examples: "stage2", "clear1", */
+ /* "clear2", "ocl1", "ocl2, */
+ /* "ublaze", "sched" */
+ char rsvd[4];
+ uint64_t section_offset; /* File offset of section data */
+ uint64_t section_size; /* Size of section data */
+} __packed;
+
+struct axlf_header {
+ uint64_t length; /* Total size of the xclbin file */
+ uint64_t time_stamp; /* Number of seconds since epoch */
+ /* when xclbin was created */
+ uint64_t feature_rom_timestamp; /* TimeSinceEpoch of the featureRom */
+ uint16_t version_patch; /* Patch Version */
+	uint8_t version_major;		/* Major Version - Version: 2.1.0 */
+ uint8_t version_minor; /* Minor Version */
+ uint32_t mode; /* XCLBIN_MODE */
+ union {
+ struct {
+ uint64_t platform_id; /* 64 bit platform ID: */
+ /* vendor-device-subvendor-subdev */
+ uint64_t feature_id; /* 64 bit feature id */
+ } rom;
+ unsigned char rom_uuid[16]; /* feature ROM UUID for which */
+ /* this xclbin was generated */
+ };
+ unsigned char platform_vbnv[64]; /* e.g. */
+ /* xilinx:xil-accel-rd-ku115:4ddr-xpr:3.4: null terminated */
+ union {
+ char next_axlf[16]; /* Name of next xclbin file */
+ /* in the daisy chain */
+		unsigned char uuid[16];		/* uuid of this xclbin */
+ };
+ char debug_bin[16]; /* Name of binary with debug */
+ /* information */
+ uint32_t num_sections; /* Number of section headers */
+ char rsvd[4];
+} __packed;
+
+struct axlf {
+ char magic[8]; /* Should be "xclbin2\0" */
+ int32_t signature_length; /* Length of the signature. */
+ /* -1 indicates no signature */
+ unsigned char reserved[28]; /* Note: Initialized to 0xFFs */
+
+ unsigned char key_block[256]; /* Signature for validation */
+ /* of binary */
+ uint64_t unique_id; /* axlf's uniqueId, use it to */
+ /* skip redownload etc */
+ struct axlf_header header; /* Inline header */
+ struct axlf_section_header sections[1]; /* One or more section */
+ /* headers follow */
+} __packed;
+
+/* bitstream information */
+struct xlnx_bitstream {
+ uint8_t freq[8];
+ char bits[1];
+} __packed;
+
+/**** MEMORY TOPOLOGY SECTION ****/
+struct mem_data {
+	uint8_t type;		/* enum corresponding to MEM_TYPE. */
+ uint8_t used; /* if 0 this bank is not present */
+ uint8_t rsvd[6];
+ union {
+ uint64_t size; /* if mem_type DDR, then size in KB; */
+ uint64_t route_id; /* if streaming then "route_id" */
+ };
+ union {
+ uint64_t base_address;/* if DDR then the base address; */
+ uint64_t flow_id; /* if streaming then "flow id" */
+ };
+ unsigned char tag[16]; /* DDR: BANK0,1,2,3, has to be null */
+ /* terminated; if streaming then stream0, 1 etc */
+} __packed;
+
+struct mem_topology {
+ int32_t count; /* Number of mem_data */
+ struct mem_data mem_data[1]; /* Should be sorted on mem_type */
+} __packed;
+
+/**** CONNECTIVITY SECTION ****/
+/* Connectivity of each argument of a CU (Compute Unit), in terms of the
+ * associated argument index. For associating CU instances with arguments
+ * and banks, start at the connectivity section. Using the ip_layout_index
+ * access the ip_data.name. Now we can associate this CU instance with its
+ * original CU name and get the connectivity as well. This enables us to form
+ * related groups of CU instances.
+ */
+
+struct connection {
+	int32_t arg_index;	/* From 0 to n; may not be contiguous, */
+				/* as scalars are skipped */
+ int32_t ip_layout_index; /* index into the ip_layout section. */
+ /* ip_layout.ip_data[index].type == IP_KERNEL */
+	int32_t mem_data_index;	/* index into the mem_data array; it is an */
+				/* error if the entry's used flag is false */
+} __packed;
+
+struct connectivity {
+ int32_t count;
+ struct connection connection[1];
+} __packed;
+
+/**** IP_LAYOUT SECTION ****/
+
+/* IP Kernel */
+#define IP_INT_ENABLE_MASK 0x0001
+#define IP_INTERRUPT_ID_MASK 0x00FE
+#define IP_INTERRUPT_ID_SHIFT 0x1
+
+enum IP_CONTROL {
+ AP_CTRL_HS = 0,
+ AP_CTRL_CHAIN,
+ AP_CTRL_NONE,
+ AP_CTRL_ME,
+ ACCEL_ADAPTER
+};
+
+#define IP_CONTROL_MASK 0xFF00
+#define IP_CONTROL_SHIFT 0x8
+
+/* IPs on AXI lite - their types, names, and base addresses.*/
+struct ip_data {
+ uint32_t type; /* map to IP_TYPE enum */
+ union {
+ uint32_t properties; /* Default: 32-bits to indicate ip */
+ /* specific property. */
+ /* type: IP_KERNEL
+ * int_enable : Bit - 0x0000_0001;
+ * interrupt_id : Bits - 0x0000_00FE;
+ * ip_control : Bits = 0x0000_FF00;
+ */
+ struct { /* type: IP_MEM_* */
+ uint16_t index;
+ uint8_t pc_index;
+ uint8_t unused;
+ } indices;
+ };
+ uint64_t base_address;
+ uint8_t name[64]; /* eg Kernel name corresponding to KERNEL */
+ /* instance, can embed CU name in future. */
+} __packed;
+
+struct ip_layout {
+ int32_t count;
+ struct ip_data ip_data[1]; /* All the ip_data needs to be sorted */
+ /* by base_address. */
+} __packed;
+
+/*** Debug IP section layout ****/
+enum DEBUG_IP_TYPE {
+ UNDEFINED = 0,
+ LAPC,
+ ILA,
+ AXI_MM_MONITOR,
+ AXI_TRACE_FUNNEL,
+ AXI_MONITOR_FIFO_LITE,
+ AXI_MONITOR_FIFO_FULL,
+ ACCEL_MONITOR,
+ AXI_STREAM_MONITOR,
+ AXI_STREAM_PROTOCOL_CHECKER,
+ TRACE_S2MM,
+ AXI_DMA,
+ TRACE_S2MM_FULL
+};
+
+struct debug_ip_data {
+ uint8_t type; /* type of enum DEBUG_IP_TYPE */
+ uint8_t index_lowbyte;
+ uint8_t properties;
+ uint8_t major;
+ uint8_t minor;
+ uint8_t index_highbyte;
+ uint8_t reserved[2];
+ uint64_t base_address;
+ char name[128];
+} __packed;
+
+struct debug_ip_layout {
+ uint16_t count;
+ struct debug_ip_data debug_ip_data[1];
+} __packed;
+
+/* Supported clock frequency types */
+enum XCLBIN_CLOCK_TYPE {
+ CT_UNUSED = 0, /* Initialized value */
+ CT_DATA = 1, /* Data clock */
+ CT_KERNEL = 2, /* Kernel clock */
+ CT_SYSTEM = 3 /* System Clock */
+};
+
+/* Clock Frequency Entry */
+struct clock_freq {
+ uint16_t freq_MHZ; /* Frequency in MHz */
+	uint8_t type;			/* Clock type (enum XCLBIN_CLOCK_TYPE) */
+ uint8_t unused[5]; /* Not used - padding */
+ char name[128]; /* Clock Name */
+} __packed;
+
+/* Clock frequency section */
+struct clock_freq_topology {
+ int16_t count; /* Number of entries */
+ struct clock_freq clock_freq[1]; /* Clock array */
+} __packed;
+
+/* Supported MCS file types */
+enum MCS_TYPE {
+ MCS_UNKNOWN = 0, /* Initialized value */
+ MCS_PRIMARY = 1, /* The primary mcs file data */
+ MCS_SECONDARY = 2, /* The secondary mcs file data */
+};
+
+/* One chunk of MCS data */
+struct mcs_chunk {
+ uint8_t type; /* MCS data type */
+ uint8_t unused[7]; /* padding */
+ uint64_t offset; /* data offset from the start of */
+ /* the section */
+ uint64_t size; /* data size */
+} __packed;
+
+/* MCS data section */
+struct mcs {
+ int8_t count; /* Number of chunks */
+ int8_t unused[7]; /* padding */
+ struct mcs_chunk chunk[1]; /* MCS chunks followed by data */
+} __packed;
+
+/* bmc data section */
+struct bmc {
+ uint64_t offset; /* data offset from the start of */
+ /* the section */
+ uint64_t size; /* data size (bytes) */
+ char image_name[64]; /* Name of the image */
+ /* (e.g., MSP432P401R) */
+ char device_name[64]; /* Device ID (e.g., VCU1525) */
+ char version[64];
+ char md5value[33]; /* MD5 Expected Value */
+ /* (e.g., 56027182079c0bd621761b7dab5a27ca)*/
+ char padding[7]; /* Padding */
+} __packed;
+
+/* soft kernel data section, used by classic driver */
+struct soft_kernel {
+ /** Prefix Syntax:
+ * mpo - member, pointer, offset
+ * This variable represents a zero terminated string
+	 * that is offset from the beginning of the section.
+ * The pointer to access the string is initialized as follows:
+ * char * pCharString = (address_of_section) + (mpo value)
+ */
+ uint32_t mpo_name; /* Name of the soft kernel */
+ uint32_t image_offset; /* Image offset */
+ uint32_t image_size; /* Image size */
+ uint32_t mpo_version; /* Version */
+ uint32_t mpo_md5_value; /* MD5 checksum */
+ uint32_t mpo_symbol_name; /* Symbol name */
+ uint32_t num_instances; /* Number of instances */
+ uint8_t padding[36]; /* Reserved for future use */
+ uint8_t reserved_ext[16]; /* Reserved for future extended data */
+} __packed;
+
+enum CHECKSUM_TYPE {
+ CST_UNKNOWN = 0,
+ CST_SDBM = 1,
+ CST_LAST
+};
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
--
2.27.0
The PCIe device driver that attaches to the management function on Alveo
devices. It instantiates one or more group drivers which, in turn,
instantiate platform drivers. The instantiation of group and platform
drivers is entirely dtb driven.
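One small detail worth illustrating: the driver keys each card by its PCI location via the XMGMT_DEV_ID() macro in root.c, packing the PCI domain into the upper 16 bits and the bus/function-0 device ID into the lower 16. A plain-C sketch follows; SKETCH_PCI_DEVID mirrors the kernel's PCI_DEVID() macro, and the function name here is illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the kernel's PCI_DEVID(bus, devfn): bus number in the high
 * byte, devfn in the low byte. */
#define SKETCH_PCI_DEVID(bus, devfn) ((((uint16_t)(bus)) << 8) | (devfn))

/* XMGMT_DEV_ID() equivalent: domain in bits 31:16, bus/function-0
 * device ID in bits 15:0. The function number is forced to 0 since one
 * ID stands for the whole card. */
static uint32_t xmgmt_dev_id(uint16_t domain, uint8_t bus)
{
	return ((uint32_t)domain << 16) | SKETCH_PCI_DEVID(bus, 0);
}
```

For example, domain 0, bus 0x03 yields 0x0300, and the same bus in domain 1 yields a distinct ID, so cards at the same bus number in different PCI domains never collide.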
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/mgmt/root.c | 333 +++++++++++++++++++++++++++++++++++
1 file changed, 333 insertions(+)
create mode 100644 drivers/fpga/xrt/mgmt/root.c
diff --git a/drivers/fpga/xrt/mgmt/root.c b/drivers/fpga/xrt/mgmt/root.c
new file mode 100644
index 000000000000..f97f92807c01
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/root.c
@@ -0,0 +1,333 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo Management Function Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Cheng Zhen <[email protected]>
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/aer.h>
+#include <linux/vmalloc.h>
+#include <linux/delay.h>
+
+#include "xroot.h"
+#include "xmgnt.h"
+#include "metadata.h"
+
+#define XMGMT_MODULE_NAME "xrt-mgmt"
+#define XMGMT_DRIVER_VERSION "4.0.0"
+
+#define XMGMT_PDEV(xm) ((xm)->pdev)
+#define XMGMT_DEV(xm) (&(XMGMT_PDEV(xm)->dev))
+#define xmgmt_err(xm, fmt, args...) \
+ dev_err(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define xmgmt_warn(xm, fmt, args...) \
+ dev_warn(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define xmgmt_info(xm, fmt, args...) \
+ dev_info(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define xmgmt_dbg(xm, fmt, args...) \
+ dev_dbg(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
+#define XMGMT_DEV_ID(_pcidev) \
+ ({ typeof(_pcidev) (pcidev) = (_pcidev); \
+ ((pci_domain_nr((pcidev)->bus) << 16) | \
+ PCI_DEVID((pcidev)->bus->number, 0)); })
+
+static struct class *xmgmt_class;
+
+/* PCI Device IDs */
+#define PCI_DEVICE_ID_U50_GOLDEN 0xD020
+#define PCI_DEVICE_ID_U50 0x5020
+static const struct pci_device_id xmgmt_pci_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_XILINX, PCI_DEVICE_ID_U50_GOLDEN), }, /* Alveo U50 (golden) */
+ { PCI_DEVICE(PCI_VENDOR_ID_XILINX, PCI_DEVICE_ID_U50), }, /* Alveo U50 */
+ { 0, }
+};
+
+struct xmgmt {
+ struct pci_dev *pdev;
+ void *root;
+
+ bool ready;
+};
+
+static int xmgmt_config_pci(struct xmgmt *xm)
+{
+ struct pci_dev *pdev = XMGMT_PDEV(xm);
+ int rc;
+
+ rc = pcim_enable_device(pdev);
+ if (rc < 0) {
+ xmgmt_err(xm, "failed to enable device: %d", rc);
+ return rc;
+ }
+
+ rc = pci_enable_pcie_error_reporting(pdev);
+ if (rc)
+ xmgmt_warn(xm, "failed to enable AER: %d", rc);
+
+ pci_set_master(pdev);
+
+ rc = pcie_get_readrq(pdev);
+ if (rc > 512)
+ pcie_set_readrq(pdev, 512);
+ return 0;
+}
+
+static int xmgmt_match_slot_and_save(struct device *dev, void *data)
+{
+ struct xmgmt *xm = data;
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
+ pci_cfg_access_lock(pdev);
+ pci_save_state(pdev);
+ }
+
+ return 0;
+}
+
+static void xmgmt_pci_save_config_all(struct xmgmt *xm)
+{
+ bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_save);
+}
+
+static int xmgmt_match_slot_and_restore(struct device *dev, void *data)
+{
+ struct xmgmt *xm = data;
+ struct pci_dev *pdev = to_pci_dev(dev);
+
+ if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
+ pci_restore_state(pdev);
+ pci_cfg_access_unlock(pdev);
+ }
+
+ return 0;
+}
+
+static void xmgmt_pci_restore_config_all(struct xmgmt *xm)
+{
+ bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_restore);
+}
+
+static void xmgmt_root_hot_reset(struct pci_dev *pdev)
+{
+ struct xmgmt *xm = pci_get_drvdata(pdev);
+ struct pci_bus *bus;
+ u8 pci_bctl;
+ u16 pci_cmd, devctl;
+ int i, ret;
+
+ xmgmt_info(xm, "hot reset start");
+
+ xmgmt_pci_save_config_all(xm);
+
+ pci_disable_device(pdev);
+
+ bus = pdev->bus;
+
+ /*
+ * When flipping the SBR bit, device can fall off the bus. This is
+ * usually no problem at all so long as drivers are working properly
+ * after SBR. However, some systems complain bitterly when the device
+ * falls off the bus.
+ * The quick solution is to temporarily disable the SERR reporting of
+ * switch port during SBR.
+ */
+
+ pci_read_config_word(bus->self, PCI_COMMAND, &pci_cmd);
+ pci_write_config_word(bus->self, PCI_COMMAND, (pci_cmd & ~PCI_COMMAND_SERR));
+ pcie_capability_read_word(bus->self, PCI_EXP_DEVCTL, &devctl);
+ pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, (devctl & ~PCI_EXP_DEVCTL_FERE));
+ pci_read_config_byte(bus->self, PCI_BRIDGE_CONTROL, &pci_bctl);
+ pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl | PCI_BRIDGE_CTL_BUS_RESET);
+ msleep(100);
+ pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
+ ssleep(1);
+
+ pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, devctl);
+ pci_write_config_word(bus->self, PCI_COMMAND, pci_cmd);
+
+ ret = pci_enable_device(pdev);
+ if (ret)
+ xmgmt_err(xm, "failed to enable device, ret %d", ret);
+
+ for (i = 0; i < 300; i++) {
+ pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
+ if (pci_cmd != 0xffff)
+ break;
+ msleep(20);
+ }
+ if (i == 300)
+ xmgmt_err(xm, "timed out waiting for device to come online after reset");
+
+ xmgmt_info(xm, "waited for %d ms", i * 20);
+ xmgmt_pci_restore_config_all(xm);
+ xmgmt_config_pci(xm);
+}
+
+static int xmgmt_create_root_metadata(struct xmgmt *xm, char **root_dtb)
+{
+ char *dtb = NULL;
+ int ret;
+
+ ret = xrt_md_create(XMGMT_DEV(xm), &dtb);
+ if (ret) {
+ xmgmt_err(xm, "create metadata failed, ret %d", ret);
+ goto failed;
+ }
+
+ ret = xroot_add_vsec_node(xm->root, dtb);
+ if (ret == -ENOENT) {
+ /*
+ * We may be dealing with an MFG (manufacturing) board.
+ * Try vsec-golden which will bring up all hard-coded leaves
+ * at hard-coded offsets.
+ */
+ ret = xroot_add_simple_node(xm->root, dtb, XRT_MD_NODE_VSEC_GOLDEN);
+ } else if (ret == 0) {
+ ret = xroot_add_simple_node(xm->root, dtb, XRT_MD_NODE_MGMT_MAIN);
+ }
+ if (ret)
+ goto failed;
+
+ *root_dtb = dtb;
+ return 0;
+
+failed:
+ vfree(dtb);
+ return ret;
+}
+
+static ssize_t ready_show(struct device *dev,
+ struct device_attribute *da,
+ char *buf)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct xmgmt *xm = pci_get_drvdata(pdev);
+
+ return sysfs_emit(buf, "%d\n", xm->ready);
+}
+static DEVICE_ATTR_RO(ready);
+
+static struct attribute *xmgmt_root_attrs[] = {
+ &dev_attr_ready.attr,
+ NULL
+};
+
+static struct attribute_group xmgmt_root_attr_group = {
+ .attrs = xmgmt_root_attrs,
+};
+
+static struct xroot_physical_function_callback xmgmt_xroot_pf_cb = {
+ .xpc_hot_reset = xmgmt_root_hot_reset,
+};
+
+static int xmgmt_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+{
+ int ret;
+ struct device *dev = &pdev->dev;
+ struct xmgmt *xm = devm_kzalloc(dev, sizeof(*xm), GFP_KERNEL);
+ char *dtb = NULL;
+
+ if (!xm)
+ return -ENOMEM;
+ xm->pdev = pdev;
+ pci_set_drvdata(pdev, xm);
+
+ ret = xmgmt_config_pci(xm);
+ if (ret)
+ goto failed;
+
+ ret = xroot_probe(pdev, &xmgmt_xroot_pf_cb, &xm->root);
+ if (ret)
+ goto failed;
+
+ ret = xmgmt_create_root_metadata(xm, &dtb);
+ if (ret)
+ goto failed_metadata;
+
+ ret = xroot_create_group(xm->root, dtb);
+ vfree(dtb);
+ if (ret)
+ xmgmt_err(xm, "failed to create root group: %d", ret);
+
+ if (!xroot_wait_for_bringup(xm->root))
+ xmgmt_err(xm, "failed to bringup all groups");
+ else
+ xm->ready = true;
+
+ ret = sysfs_create_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
+ if (ret) {
+ /* Warning instead of failing the probe. */
+ xmgmt_warn(xm, "create xmgmt root attrs failed: %d", ret);
+ }
+
+ xroot_broadcast(xm->root, XRT_EVENT_POST_CREATION);
+ xmgmt_info(xm, "%s started successfully", XMGMT_MODULE_NAME);
+ return 0;
+
+failed_metadata:
+ xroot_remove(xm->root);
+failed:
+ pci_set_drvdata(pdev, NULL);
+ return ret;
+}
+
+static void xmgmt_remove(struct pci_dev *pdev)
+{
+ struct xmgmt *xm = pci_get_drvdata(pdev);
+
+ xroot_broadcast(xm->root, XRT_EVENT_PRE_REMOVAL);
+ sysfs_remove_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
+ xroot_remove(xm->root);
+ pci_disable_pcie_error_reporting(xm->pdev);
+ xmgmt_info(xm, "%s cleaned up successfully", XMGMT_MODULE_NAME);
+}
+
+static struct pci_driver xmgmt_driver = {
+ .name = XMGMT_MODULE_NAME,
+ .id_table = xmgmt_pci_ids,
+ .probe = xmgmt_probe,
+ .remove = xmgmt_remove,
+};
+
+static int __init xmgmt_init(void)
+{
+ int res = 0;
+
+ res = xmgmt_register_leaf();
+ if (res)
+ return res;
+
+ xmgmt_class = class_create(THIS_MODULE, XMGMT_MODULE_NAME);
+ if (IS_ERR(xmgmt_class)) {
+ xmgmt_unregister_leaf();
+ return PTR_ERR(xmgmt_class);
+ }
+
+ res = pci_register_driver(&xmgmt_driver);
+ if (res) {
+ class_destroy(xmgmt_class);
+ xmgmt_unregister_leaf();
+ return res;
+ }
+
+ return 0;
+}
+
+static __exit void xmgmt_exit(void)
+{
+ pci_unregister_driver(&xmgmt_driver);
+ class_destroy(xmgmt_class);
+ xmgmt_unregister_leaf();
+}
+
+module_init(xmgmt_init);
+module_exit(xmgmt_exit);
+
+MODULE_DEVICE_TABLE(pci, xmgmt_pci_ids);
+MODULE_VERSION(XMGMT_DRIVER_VERSION);
+MODULE_AUTHOR("XRT Team <[email protected]>");
+MODULE_DESCRIPTION("Xilinx Alveo management function driver");
+MODULE_LICENSE("GPL v2");
--
2.27.0
Add fpga-mgr and fpga-region implementations for xclbin download, which
will be called from the main platform driver.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/mgmt/fmgr-drv.c | 191 +++++++++++
drivers/fpga/xrt/mgmt/fmgr.h | 19 ++
drivers/fpga/xrt/mgmt/main-region.c | 483 ++++++++++++++++++++++++++++
3 files changed, 693 insertions(+)
create mode 100644 drivers/fpga/xrt/mgmt/fmgr-drv.c
create mode 100644 drivers/fpga/xrt/mgmt/fmgr.h
create mode 100644 drivers/fpga/xrt/mgmt/main-region.c
diff --git a/drivers/fpga/xrt/mgmt/fmgr-drv.c b/drivers/fpga/xrt/mgmt/fmgr-drv.c
new file mode 100644
index 000000000000..12e1cc788ad9
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/fmgr-drv.c
@@ -0,0 +1,191 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * FPGA Manager Support for Xilinx Alveo Management Function Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors: [email protected]
+ */
+
+#include <linux/cred.h>
+#include <linux/efi.h>
+#include <linux/fpga/fpga-mgr.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "xclbin-helper.h"
+#include "xleaf.h"
+#include "fmgr.h"
+#include "xleaf/axigate.h"
+#include "xleaf/icap.h"
+#include "xmgnt.h"
+
+struct xfpga_class {
+ const struct platform_device *pdev;
+ char name[64];
+};
+
+/*
+ * xclbin download plumbing -- find the download subsystem, ICAP and
+ * pass the xclbin for heavy lifting
+ */
+static int xmgmt_download_bitstream(struct platform_device *pdev,
+ const struct axlf *xclbin)
+
+{
+ struct xclbin_bit_head_info bit_header = { 0 };
+ struct platform_device *icap_leaf = NULL;
+ struct xrt_icap_wr arg;
+ char *bitstream = NULL;
+ u64 bit_len;
+ int ret;
+
+ ret = xrt_xclbin_get_section(DEV(pdev), xclbin, BITSTREAM, (void **)&bitstream, &bit_len);
+ if (ret) {
+ xrt_err(pdev, "bitstream not found");
+ return -ENOENT;
+ }
+ ret = xrt_xclbin_parse_bitstream_header(DEV(pdev), bitstream,
+ XCLBIN_HWICAP_BITFILE_BUF_SZ,
+ &bit_header);
+ if (ret) {
+ ret = -EINVAL;
+ xrt_err(pdev, "invalid bitstream header");
+ goto fail;
+ }
+ if (bit_header.header_length + bit_header.bitstream_length > bit_len) {
+ ret = -EINVAL;
+ xrt_err(pdev, "invalid bitstream length. header %d, bitstream %d, section len %lld",
+ bit_header.header_length, bit_header.bitstream_length, bit_len);
+ goto fail;
+ }
+
+ icap_leaf = xleaf_get_leaf_by_id(pdev, XRT_SUBDEV_ICAP, PLATFORM_DEVID_NONE);
+ if (!icap_leaf) {
+ ret = -ENODEV;
+ xrt_err(pdev, "icap does not exist");
+ goto fail;
+ }
+ arg.xiiw_bit_data = bitstream + bit_header.header_length;
+ arg.xiiw_data_len = bit_header.bitstream_length;
+ ret = xleaf_call(icap_leaf, XRT_ICAP_WRITE, &arg);
+ if (ret) {
+ xrt_err(pdev, "write bitstream failed, ret = %d", ret);
+ xleaf_put_leaf(pdev, icap_leaf);
+ goto fail;
+ }
+
+ xleaf_put_leaf(pdev, icap_leaf);
+ vfree(bitstream);
+
+ return 0;
+
+fail:
+ vfree(bitstream);
+
+ return ret;
+}
+
+/*
+ * No HW preparation is done here since the full xclbin is needed
+ * for its sanity check.
+ */
+static int xmgmt_pr_write_init(struct fpga_manager *mgr,
+ struct fpga_image_info *info,
+ const char *buf, size_t count)
+{
+ const struct axlf *bin = (const struct axlf *)buf;
+ struct xfpga_class *obj = mgr->priv;
+
+ if (!(info->flags & FPGA_MGR_PARTIAL_RECONFIG)) {
+ xrt_info(obj->pdev, "%s only supports partial reconfiguration\n", obj->name);
+ return -EINVAL;
+ }
+
+ if (count < sizeof(struct axlf))
+ return -EINVAL;
+
+ if (count > bin->header.length)
+ return -EINVAL;
+
+ xrt_info(obj->pdev, "Prepare download of xclbin %pUb of length %lld B",
+ &bin->header.uuid, bin->header.length);
+
+ return 0;
+}
+
+/*
+ * The implementation requires the full xclbin image before we can start
+ * programming the hardware via ICAP subsystem. The full image is required
+ * for checking the validity of xclbin and walking the sections to
+ * discover the bitstream.
+ */
+static int xmgmt_pr_write(struct fpga_manager *mgr,
+ const char *buf, size_t count)
+{
+ const struct axlf *bin = (const struct axlf *)buf;
+ struct xfpga_class *obj = mgr->priv;
+
+ if (bin->header.length != count)
+ return -EINVAL;
+
+ return xmgmt_download_bitstream((void *)obj->pdev, bin);
+}
+
+static int xmgmt_pr_write_complete(struct fpga_manager *mgr,
+ struct fpga_image_info *info)
+{
+ const struct axlf *bin = (const struct axlf *)info->buf;
+ struct xfpga_class *obj = mgr->priv;
+
+ xrt_info(obj->pdev, "Finished download of xclbin %pUb",
+ &bin->header.uuid);
+ return 0;
+}
+
+static enum fpga_mgr_states xmgmt_pr_state(struct fpga_manager *mgr)
+{
+ return FPGA_MGR_STATE_UNKNOWN;
+}
+
+static const struct fpga_manager_ops xmgmt_pr_ops = {
+ .initial_header_size = sizeof(struct axlf),
+ .write_init = xmgmt_pr_write_init,
+ .write = xmgmt_pr_write,
+ .write_complete = xmgmt_pr_write_complete,
+ .state = xmgmt_pr_state,
+};
+
+struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev)
+{
+ struct xfpga_class *obj = devm_kzalloc(DEV(pdev), sizeof(struct xfpga_class),
+ GFP_KERNEL);
+ struct fpga_manager *fmgr = NULL;
+ int ret = 0;
+
+ if (!obj)
+ return ERR_PTR(-ENOMEM);
+
+ snprintf(obj->name, sizeof(obj->name), "Xilinx Alveo FPGA Manager");
+ obj->pdev = pdev;
+ fmgr = fpga_mgr_create(&pdev->dev,
+ obj->name,
+ &xmgmt_pr_ops,
+ obj);
+ if (!fmgr)
+ return ERR_PTR(-ENOMEM);
+
+ ret = fpga_mgr_register(fmgr);
+ if (ret) {
+ fpga_mgr_free(fmgr);
+ return ERR_PTR(ret);
+ }
+ return fmgr;
+}
+
+int xmgmt_fmgr_remove(struct fpga_manager *fmgr)
+{
+ fpga_mgr_unregister(fmgr);
+ return 0;
+}
diff --git a/drivers/fpga/xrt/mgmt/fmgr.h b/drivers/fpga/xrt/mgmt/fmgr.h
new file mode 100644
index 000000000000..ff1fc5f870f8
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/fmgr.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors: [email protected]
+ */
+
+#ifndef _XMGMT_FMGR_H_
+#define _XMGMT_FMGR_H_
+
+#include <linux/fpga/fpga-mgr.h>
+#include <linux/mutex.h>
+
+#include <linux/xrt/xclbin.h>
+
+struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev);
+int xmgmt_fmgr_remove(struct fpga_manager *fmgr);
+
+#endif
diff --git a/drivers/fpga/xrt/mgmt/main-region.c b/drivers/fpga/xrt/mgmt/main-region.c
new file mode 100644
index 000000000000..96a674618e86
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/main-region.c
@@ -0,0 +1,483 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * FPGA Region Support for Xilinx Alveo Management Function Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ * Bulk of the code borrowed from XRT mgmt driver file, fmgr.c
+ *
+ * Authors: [email protected]
+ */
+
+#include <linux/uuid.h>
+#include <linux/fpga/fpga-bridge.h>
+#include <linux/fpga/fpga-region.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/axigate.h"
+#include "xclbin-helper.h"
+#include "xmgnt.h"
+
+struct xmgmt_bridge {
+ struct platform_device *pdev;
+ const char *bridge_name;
+};
+
+struct xmgmt_region {
+ struct platform_device *pdev;
+ struct fpga_region *region;
+ struct fpga_compat_id compat_id;
+ uuid_t intf_uuid;
+ struct fpga_bridge *bridge;
+ int group_instance;
+ uuid_t dep_uuid;
+ struct list_head list;
+};
+
+struct xmgmt_region_match_arg {
+ struct platform_device *pdev;
+ uuid_t *uuids;
+ u32 uuid_num;
+};
+
+static int xmgmt_br_enable_set(struct fpga_bridge *bridge, bool enable)
+{
+ struct xmgmt_bridge *br_data = (struct xmgmt_bridge *)bridge->priv;
+ struct platform_device *axigate_leaf;
+ int rc;
+
+ axigate_leaf = xleaf_get_leaf_by_epname(br_data->pdev, br_data->bridge_name);
+ if (!axigate_leaf) {
+ xrt_err(br_data->pdev, "failed to get leaf %s",
+ br_data->bridge_name);
+ return -ENOENT;
+ }
+
+ if (enable)
+ rc = xleaf_call(axigate_leaf, XRT_AXIGATE_OPEN, NULL);
+ else
+ rc = xleaf_call(axigate_leaf, XRT_AXIGATE_CLOSE, NULL);
+
+ if (rc) {
+ xrt_err(br_data->pdev, "failed to %s gate %s, rc %d",
+ (enable ? "free" : "freeze"), br_data->bridge_name,
+ rc);
+ }
+
+ xleaf_put_leaf(br_data->pdev, axigate_leaf);
+
+ return rc;
+}
+
+const struct fpga_bridge_ops xmgmt_bridge_ops = {
+ .enable_set = xmgmt_br_enable_set
+};
+
+static void xmgmt_destroy_bridge(struct fpga_bridge *br)
+{
+ struct xmgmt_bridge *br_data = br->priv;
+
+ if (!br_data)
+ return;
+
+ xrt_info(br_data->pdev, "destroy fpga bridge %s", br_data->bridge_name);
+ fpga_bridge_unregister(br);
+
+ devm_kfree(DEV(br_data->pdev), br_data);
+
+ fpga_bridge_free(br);
+}
+
+static struct fpga_bridge *xmgmt_create_bridge(struct platform_device *pdev,
+ char *dtb)
+{
+ struct fpga_bridge *br = NULL;
+ struct xmgmt_bridge *br_data;
+ const char *gate;
+ int rc;
+
+ br_data = devm_kzalloc(DEV(pdev), sizeof(*br_data), GFP_KERNEL);
+ if (!br_data)
+ return NULL;
+ br_data->pdev = pdev;
+
+ br_data->bridge_name = XRT_MD_NODE_GATE_ULP;
+ rc = xrt_md_find_endpoint(&pdev->dev, dtb, XRT_MD_NODE_GATE_ULP,
+ NULL, &gate);
+ if (rc) {
+ br_data->bridge_name = XRT_MD_NODE_GATE_PLP;
+ rc = xrt_md_find_endpoint(&pdev->dev, dtb, XRT_MD_NODE_GATE_PLP,
+ NULL, &gate);
+ }
+ if (rc) {
+ xrt_err(pdev, "failed to get axigate, rc %d", rc);
+ goto failed;
+ }
+
+ br = fpga_bridge_create(DEV(pdev), br_data->bridge_name,
+ &xmgmt_bridge_ops, br_data);
+ if (!br) {
+ xrt_err(pdev, "failed to create bridge");
+ goto failed;
+ }
+
+ rc = fpga_bridge_register(br);
+ if (rc) {
+ xrt_err(pdev, "failed to register bridge, rc %d", rc);
+ goto failed;
+ }
+
+ xrt_info(pdev, "created fpga bridge %s", br_data->bridge_name);
+
+ return br;
+
+failed:
+ if (br)
+ fpga_bridge_free(br);
+ if (br_data)
+ devm_kfree(DEV(pdev), br_data);
+
+ return NULL;
+}
+
+static void xmgmt_destroy_region(struct fpga_region *region)
+{
+ struct xmgmt_region *r_data = region->priv;
+
+ xrt_info(r_data->pdev, "destroy fpga region %llx.%llx",
+ region->compat_id->id_l, region->compat_id->id_h);
+
+ fpga_region_unregister(region);
+
+ if (r_data->group_instance > 0)
+ xleaf_destroy_group(r_data->pdev, r_data->group_instance);
+
+ if (r_data->bridge)
+ xmgmt_destroy_bridge(r_data->bridge);
+
+ if (r_data->region->info) {
+ fpga_image_info_free(r_data->region->info);
+ r_data->region->info = NULL;
+ }
+
+ fpga_region_free(region);
+
+ devm_kfree(DEV(r_data->pdev), r_data);
+}
+
+static int xmgmt_region_match(struct device *dev, const void *data)
+{
+ const struct xmgmt_region_match_arg *arg = data;
+ const struct fpga_region *match_region;
+ uuid_t compat_uuid;
+ int i;
+
+ if (dev->parent != &arg->pdev->dev)
+ return false;
+
+ match_region = to_fpga_region(dev);
+ /*
+ * The device tree provides both parent and child uuids for an
+ * xclbin in one array. Here we try both uuids to see if either
+ * matches the target region's compat_id. Strictly speaking we should
+ * only match the xclbin's parent uuid with the target region's
+ * compat_id, but since the uuids are unique by design, comparing
+ * against both does no harm.
+ */
+ import_uuid(&compat_uuid, (const char *)match_region->compat_id);
+ for (i = 0; i < arg->uuid_num; i++) {
+ if (uuid_equal(&compat_uuid, &arg->uuids[i]))
+ return true;
+ }
+
+ return false;
+}
+
+static int xmgmt_region_match_base(struct device *dev, const void *data)
+{
+ const struct xmgmt_region_match_arg *arg = data;
+ const struct fpga_region *match_region;
+ const struct xmgmt_region *r_data;
+
+ if (dev->parent != &arg->pdev->dev)
+ return false;
+
+ match_region = to_fpga_region(dev);
+ r_data = match_region->priv;
+ if (uuid_is_null(&r_data->dep_uuid))
+ return true;
+
+ return false;
+}
+
+static int xmgmt_region_match_by_uuid(struct device *dev, const void *data)
+{
+ const struct xmgmt_region_match_arg *arg = data;
+ const struct fpga_region *match_region;
+ const struct xmgmt_region *r_data;
+
+ if (dev->parent != &arg->pdev->dev)
+ return false;
+
+ if (arg->uuid_num != 1)
+ return false;
+
+ match_region = to_fpga_region(dev);
+ r_data = match_region->priv;
+ if (uuid_equal(&r_data->dep_uuid, arg->uuids))
+ return true;
+
+ return false;
+}
+
+static void xmgmt_region_cleanup(struct fpga_region *region)
+{
+ struct xmgmt_region *r_data = region->priv, *pdata, *temp;
+ struct platform_device *pdev = r_data->pdev;
+ struct xmgmt_region_match_arg arg = { 0 };
+ struct fpga_region *match_region = NULL;
+ struct device *start_dev = NULL;
+ LIST_HEAD(free_list);
+ uuid_t compat_uuid;
+
+ list_add_tail(&r_data->list, &free_list);
+ arg.pdev = pdev;
+ arg.uuid_num = 1;
+ arg.uuids = &compat_uuid;
+
+ /* find all regions depending on this region */
+ list_for_each_entry_safe(pdata, temp, &free_list, list) {
+ import_uuid(arg.uuids, (const char *)pdata->region->compat_id);
+ start_dev = NULL;
+ while ((match_region = fpga_region_class_find(start_dev, &arg,
+ xmgmt_region_match_by_uuid))) {
+ pdata = match_region->priv;
+ list_add_tail(&pdata->list, &free_list);
+ start_dev = &match_region->dev;
+ put_device(&match_region->dev);
+ }
+ }
+
+ list_del(&r_data->list);
+
+ list_for_each_entry_safe_reverse(pdata, temp, &free_list, list)
+ xmgmt_destroy_region(pdata->region);
+
+ if (r_data->group_instance > 0) {
+ xleaf_destroy_group(pdev, r_data->group_instance);
+ r_data->group_instance = -1;
+ }
+ if (r_data->region->info) {
+ fpga_image_info_free(r_data->region->info);
+ r_data->region->info = NULL;
+ }
+}
+
+void xmgmt_region_cleanup_all(struct platform_device *pdev)
+{
+ struct xmgmt_region_match_arg arg = { 0 };
+ struct fpga_region *base_region;
+
+ arg.pdev = pdev;
+
+ while ((base_region = fpga_region_class_find(NULL, &arg, xmgmt_region_match_base))) {
+ put_device(&base_region->dev);
+
+ xmgmt_region_cleanup(base_region);
+ xmgmt_destroy_region(base_region);
+ }
+}
+
+/*
+ * Program a region with a xclbin image. Bring up the subdevs and the
+ * group object to contain the subdevs.
+ */
+static int xmgmt_region_program(struct fpga_region *region, const void *xclbin, char *dtb)
+{
+ const struct axlf *xclbin_obj = xclbin;
+ struct fpga_image_info *info;
+ struct platform_device *pdev;
+ struct xmgmt_region *r_data;
+ int rc;
+
+ r_data = region->priv;
+ pdev = r_data->pdev;
+
+ info = fpga_image_info_alloc(&pdev->dev);
+ if (!info)
+ return -ENOMEM;
+
+ info->buf = xclbin;
+ info->count = xclbin_obj->header.length;
+ info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
+ region->info = info;
+ rc = fpga_region_program_fpga(region);
+ if (rc) {
+ xrt_err(pdev, "programming xclbin failed, rc %d", rc);
+ return rc;
+ }
+
+ /* free bridges to allow reprogram */
+ if (region->get_bridges)
+ fpga_bridges_put(&region->bridge_list);
+
+ /*
+ * Next bringup the subdevs for this region which will be managed by
+ * its own group object.
+ */
+ r_data->group_instance = xleaf_create_group(pdev, dtb);
+ if (r_data->group_instance < 0) {
+ xrt_err(pdev, "failed to create group, rc %d",
+ r_data->group_instance);
+ rc = r_data->group_instance;
+ return rc;
+ }
+
+ rc = xleaf_wait_for_group_bringup(pdev);
+ if (rc)
+ xrt_err(pdev, "group bringup failed, rc %d", rc);
+ return rc;
+}
+
+static int xmgmt_get_bridges(struct fpga_region *region)
+{
+ struct xmgmt_region *r_data = region->priv;
+ struct device *dev = &r_data->pdev->dev;
+
+ return fpga_bridge_get_to_list(dev, region->info, &region->bridge_list);
+}
+
+/*
+ * Program/create FPGA regions based on input xclbin file.
+ * 1. Identify a matching existing region for this xclbin
+ * 2. Tear down any previous objects for the found region
+ * 3. Program this region with input xclbin
+ * 4. Iterate over this region's interface uuids to determine if it defines any
+ * child region. Create fpga_region for the child region.
+ */
+int xmgmt_process_xclbin(struct platform_device *pdev,
+ struct fpga_manager *fmgr,
+ const struct axlf *xclbin,
+ enum provider_kind kind)
+{
+ struct fpga_region *region, *compat_region = NULL;
+ struct xmgmt_region_match_arg arg = { 0 };
+ struct xmgmt_region *r_data;
+ uuid_t compat_uuid;
+ char *dtb = NULL;
+ int rc, i;
+
+ rc = xrt_xclbin_get_metadata(DEV(pdev), xclbin, &dtb);
+ if (rc) {
+ xrt_err(pdev, "failed to get dtb: %d", rc);
+ goto failed;
+ }
+
+ rc = xrt_md_get_interface_uuids(DEV(pdev), dtb, 0, NULL);
+ if (rc < 0) {
+ xrt_err(pdev, "failed to get intf uuid");
+ rc = -EINVAL;
+ goto failed;
+ }
+ arg.uuid_num = rc;
+ arg.uuids = vzalloc(sizeof(uuid_t) * arg.uuid_num);
+ if (!arg.uuids) {
+ rc = -ENOMEM;
+ goto failed;
+ }
+ arg.pdev = pdev;
+
+ rc = xrt_md_get_interface_uuids(DEV(pdev), dtb, arg.uuid_num, arg.uuids);
+ if (rc != arg.uuid_num) {
+ xrt_err(pdev, "got %d uuids, expected %d", rc, arg.uuid_num);
+ rc = -EINVAL;
+ goto failed;
+ }
+
+ /* if this is not base firmware, search for a compatible region */
+ if (kind != XMGMT_BLP) {
+ compat_region = fpga_region_class_find(NULL, &arg, xmgmt_region_match);
+ if (!compat_region) {
+ xrt_err(pdev, "failed to get compatible region");
+ rc = -ENOENT;
+ goto failed;
+ }
+
+ xmgmt_region_cleanup(compat_region);
+
+ rc = xmgmt_region_program(compat_region, xclbin, dtb);
+ if (rc) {
+ xrt_err(pdev, "failed to program region");
+ goto failed;
+ }
+ }
+
+ if (compat_region)
+ import_uuid(&compat_uuid, (const char *)compat_region->compat_id);
+
+ /* create all the new regions contained in this xclbin */
+ for (i = 0; i < arg.uuid_num; i++) {
+ if (compat_region && uuid_equal(&compat_uuid, &arg.uuids[i])) {
+ /* region for this interface already exists */
+ continue;
+ }
+
+ region = fpga_region_create(DEV(pdev), fmgr, xmgmt_get_bridges);
+ if (!region) {
+ xrt_err(pdev, "failed to create fpga region");
+ rc = -EFAULT;
+ goto failed;
+ }
+ r_data = devm_kzalloc(DEV(pdev), sizeof(*r_data), GFP_KERNEL);
+ if (!r_data) {
+ rc = -ENOMEM;
+ fpga_region_free(region);
+ goto failed;
+ }
+ r_data->pdev = pdev;
+ r_data->region = region;
+ r_data->group_instance = -1;
+ uuid_copy(&r_data->intf_uuid, &arg.uuids[i]);
+ if (compat_region)
+ import_uuid(&r_data->dep_uuid, (const char *)compat_region->compat_id);
+ r_data->bridge = xmgmt_create_bridge(pdev, dtb);
+ if (!r_data->bridge) {
+ xrt_err(pdev, "failed to create fpga bridge");
+ rc = -EFAULT;
+ devm_kfree(DEV(pdev), r_data);
+ fpga_region_free(region);
+ goto failed;
+ }
+
+ region->compat_id = &r_data->compat_id;
+ export_uuid((char *)region->compat_id, &r_data->intf_uuid);
+ region->priv = r_data;
+
+ rc = fpga_region_register(region);
+ if (rc) {
+ xrt_err(pdev, "failed to register fpga region");
+ xmgmt_destroy_bridge(r_data->bridge);
+ fpga_region_free(region);
+ devm_kfree(DEV(pdev), r_data);
+ goto failed;
+ }
+
+ xrt_info(pdev, "created fpga region %llx.%llx",
+ region->compat_id->id_l, region->compat_id->id_h);
+ }
+
+ if (compat_region)
+ put_device(&compat_region->dev);
+ vfree(dtb);
+ return 0;
+
+failed:
+ if (compat_region) {
+ put_device(&compat_region->dev);
+ xmgmt_region_cleanup(compat_region);
+ } else {
+ xmgmt_region_cleanup_all(pdev);
+ }
+
+ vfree(dtb);
+ return rc;
+}
--
2.27.0
Add the clock driver. The clock is a hardware function discovered by
walking the xclbin metadata; a platform device node is created for it.
Other parts of the driver configure clocks through this driver.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/xleaf/clock.h | 29 ++
drivers/fpga/xrt/lib/xleaf/clock.c | 669 +++++++++++++++++++++++++
2 files changed, 698 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xleaf/clock.h
create mode 100644 drivers/fpga/xrt/lib/xleaf/clock.c
diff --git a/drivers/fpga/xrt/include/xleaf/clock.h b/drivers/fpga/xrt/include/xleaf/clock.h
new file mode 100644
index 000000000000..6858473fd096
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf/clock.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#ifndef _XRT_CLOCK_H_
+#define _XRT_CLOCK_H_
+
+#include "xleaf.h"
+#include <linux/xrt/xclbin.h>
+
+/*
+ * CLOCK driver leaf calls.
+ */
+enum xrt_clock_leaf_cmd {
+ XRT_CLOCK_SET = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+ XRT_CLOCK_GET,
+ XRT_CLOCK_VERIFY,
+};
+
+struct xrt_clock_get {
+ u16 freq;
+ u32 freq_cnter;
+};
+
+#endif /* _XRT_CLOCK_H_ */
diff --git a/drivers/fpga/xrt/lib/xleaf/clock.c b/drivers/fpga/xrt/lib/xleaf/clock.c
new file mode 100644
index 000000000000..071485e4bf65
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/clock.c
@@ -0,0 +1,669 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA Clock Wizard Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ * Sonal Santan <[email protected]>
+ * David Zhang <[email protected]>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/regmap.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/clock.h"
+#include "xleaf/clkfreq.h"
+
+/* XRT_CLOCK_MAX_NUM_CLOCKS should eventually be derived from xclbin metadata */
+#define XRT_CLOCK_MAX_NUM_CLOCKS 4
+#define XRT_CLOCK_STATUS_MASK 0xffff
+#define XRT_CLOCK_STATUS_MEASURE_START 0x1
+#define XRT_CLOCK_STATUS_MEASURE_DONE 0x2
+
+#define XRT_CLOCK_STATUS_REG 0x4
+#define XRT_CLOCK_CLKFBOUT_REG 0x200
+#define XRT_CLOCK_CLKOUT0_REG 0x208
+#define XRT_CLOCK_LOAD_SADDR_SEN_REG 0x25C
+#define XRT_CLOCK_DEFAULT_EXPIRE_SECS 1
+
+#define CLOCK_ERR(clock, fmt, arg...) \
+ xrt_err((clock)->pdev, fmt "\n", ##arg)
+#define CLOCK_WARN(clock, fmt, arg...) \
+ xrt_warn((clock)->pdev, fmt "\n", ##arg)
+#define CLOCK_INFO(clock, fmt, arg...) \
+ xrt_info((clock)->pdev, fmt "\n", ##arg)
+#define CLOCK_DBG(clock, fmt, arg...) \
+ xrt_dbg((clock)->pdev, fmt "\n", ##arg)
+
+#define XRT_CLOCK "xrt_clock"
+
+static const struct regmap_config clock_regmap_config = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
+ .max_register = 0x1000,
+};
+
+struct clock {
+ struct platform_device *pdev;
+ struct regmap *regmap;
+ struct mutex clock_lock; /* clock dev lock */
+
+ const char *clock_ep_name;
+};
+
+/*
+ * Precomputed table with config0 and config2 register values together with
+ * target frequency. The steps are approximately 5 MHz apart. Table is
+ * generated by platform creation tool.
+ */
+static const struct xmgmt_ocl_clockwiz {
+ /* target frequency */
+ u16 ocl;
+ /* config0 register */
+ u32 config0;
+ /* config2 register */
+ u32 config2;
+} frequency_table[] = {
+ /*1275.000*/ { 10, 0x02EE0C01, 0x0001F47F },
+ /*1575.000*/ { 15, 0x02EE0F01, 0x00000069},
+ /*1600.000*/ { 20, 0x00001001, 0x00000050},
+ /*1600.000*/ { 25, 0x00001001, 0x00000040},
+ /*1575.000*/ { 30, 0x02EE0F01, 0x0001F434},
+ /*1575.000*/ { 35, 0x02EE0F01, 0x0000002D},
+ /*1600.000*/ { 40, 0x00001001, 0x00000028},
+ /*1575.000*/ { 45, 0x02EE0F01, 0x00000023},
+ /*1600.000*/ { 50, 0x00001001, 0x00000020},
+ /*1512.500*/ { 55, 0x007D0F01, 0x0001F41B},
+ /*1575.000*/ { 60, 0x02EE0F01, 0x0000FA1A},
+ /*1462.500*/ { 65, 0x02710E01, 0x0001F416},
+ /*1575.000*/ { 70, 0x02EE0F01, 0x0001F416},
+ /*1575.000*/ { 75, 0x02EE0F01, 0x00000015},
+ /*1600.000*/ { 80, 0x00001001, 0x00000014},
+ /*1487.500*/ { 85, 0x036B0E01, 0x0001F411},
+ /*1575.000*/ { 90, 0x02EE0F01, 0x0001F411},
+ /*1425.000*/ { 95, 0x00FA0E01, 0x0000000F},
+ /*1600.000*/ { 100, 0x00001001, 0x00000010},
+ /*1575.000*/ { 105, 0x02EE0F01, 0x0000000F},
+ /*1512.500*/ { 110, 0x007D0F01, 0x0002EE0D},
+ /*1437.500*/ { 115, 0x01770E01, 0x0001F40C},
+ /*1575.000*/ { 120, 0x02EE0F01, 0x00007D0D},
+ /*1562.500*/ { 125, 0x02710F01, 0x0001F40C},
+ /*1462.500*/ { 130, 0x02710E01, 0x0000FA0B},
+ /*1350.000*/ { 135, 0x01F40D01, 0x0000000A},
+ /*1575.000*/ { 140, 0x02EE0F01, 0x0000FA0B},
+ /*1450.000*/ { 145, 0x01F40E01, 0x0000000A},
+ /*1575.000*/ { 150, 0x02EE0F01, 0x0001F40A},
+ /*1550.000*/ { 155, 0x01F40F01, 0x0000000A},
+ /*1600.000*/ { 160, 0x00001001, 0x0000000A},
+ /*1237.500*/ { 165, 0x01770C01, 0x0001F407},
+ /*1487.500*/ { 170, 0x036B0E01, 0x0002EE08},
+ /*1575.000*/ { 175, 0x02EE0F01, 0x00000009},
+ /*1575.000*/ { 180, 0x02EE0F01, 0x0002EE08},
+ /*1387.500*/ { 185, 0x036B0D01, 0x0001F407},
+ /*1425.000*/ { 190, 0x00FA0E01, 0x0001F407},
+ /*1462.500*/ { 195, 0x02710E01, 0x0001F407},
+ /*1600.000*/ { 200, 0x00001001, 0x00000008},
+ /*1537.500*/ { 205, 0x01770F01, 0x0001F407},
+ /*1575.000*/ { 210, 0x02EE0F01, 0x0001F407},
+ /*1075.000*/ { 215, 0x02EE0A01, 0x00000005},
+ /*1512.500*/ { 220, 0x007D0F01, 0x00036B06},
+ /*1575.000*/ { 225, 0x02EE0F01, 0x00000007},
+ /*1437.500*/ { 230, 0x01770E01, 0x0000FA06},
+ /*1175.000*/ { 235, 0x02EE0B01, 0x00000005},
+ /*1500.000*/ { 240, 0x00000F01, 0x0000FA06},
+ /*1225.000*/ { 245, 0x00FA0C01, 0x00000005},
+ /*1562.500*/ { 250, 0x02710F01, 0x0000FA06},
+ /*1275.000*/ { 255, 0x02EE0C01, 0x00000005},
+ /*1462.500*/ { 260, 0x02710E01, 0x00027105},
+ /*1325.000*/ { 265, 0x00FA0D01, 0x00000005},
+ /*1350.000*/ { 270, 0x01F40D01, 0x00000005},
+ /*1512.500*/ { 275, 0x007D0F01, 0x0001F405},
+ /*1575.000*/ { 280, 0x02EE0F01, 0x00027105},
+ /*1425.000*/ { 285, 0x00FA0E01, 0x00000005},
+ /*1450.000*/ { 290, 0x01F40E01, 0x00000005},
+ /*1475.000*/ { 295, 0x02EE0E01, 0x00000005},
+ /*1575.000*/ { 300, 0x02EE0F01, 0x0000FA05},
+ /*1525.000*/ { 305, 0x00FA0F01, 0x00000005},
+ /*1550.000*/ { 310, 0x01F40F01, 0x00000005},
+ /*1575.000*/ { 315, 0x02EE0F01, 0x00000005},
+ /*1600.000*/ { 320, 0x00001001, 0x00000005},
+ /*1462.500*/ { 325, 0x02710E01, 0x0001F404},
+ /*1237.500*/ { 330, 0x01770C01, 0x0002EE03},
+ /* 837.500*/ { 335, 0x01770801, 0x0001F402},
+ /*1487.500*/ { 340, 0x036B0E01, 0x00017704},
+ /* 862.500*/ { 345, 0x02710801, 0x0001F402},
+ /*1575.000*/ { 350, 0x02EE0F01, 0x0001F404},
+ /* 887.500*/ { 355, 0x036B0801, 0x0001F402},
+ /*1575.000*/ { 360, 0x02EE0F01, 0x00017704},
+ /* 912.500*/ { 365, 0x007D0901, 0x0001F402},
+ /*1387.500*/ { 370, 0x036B0D01, 0x0002EE03},
+ /*1500.000*/ { 375, 0x00000F01, 0x00000004},
+ /*1425.000*/ { 380, 0x00FA0E01, 0x0002EE03},
+ /* 962.500*/ { 385, 0x02710901, 0x0001F402},
+ /*1462.500*/ { 390, 0x02710E01, 0x0002EE03},
+ /* 987.500*/ { 395, 0x036B0901, 0x0001F402},
+ /*1600.000*/ { 400, 0x00001001, 0x00000004},
+ /*1012.500*/ { 405, 0x007D0A01, 0x0001F402},
+ /*1537.500*/ { 410, 0x01770F01, 0x0002EE03},
+ /*1037.500*/ { 415, 0x01770A01, 0x0001F402},
+ /*1575.000*/ { 420, 0x02EE0F01, 0x0002EE03},
+ /*1487.500*/ { 425, 0x036B0E01, 0x0001F403},
+ /*1075.000*/ { 430, 0x02EE0A01, 0x0001F402},
+ /*1087.500*/ { 435, 0x036B0A01, 0x0001F402},
+ /*1375.000*/ { 440, 0x02EE0D01, 0x00007D03},
+ /*1112.500*/ { 445, 0x007D0B01, 0x0001F402},
+ /*1575.000*/ { 450, 0x02EE0F01, 0x0001F403},
+ /*1137.500*/ { 455, 0x01770B01, 0x0001F402},
+ /*1437.500*/ { 460, 0x01770E01, 0x00007D03},
+ /*1162.500*/ { 465, 0x02710B01, 0x0001F402},
+ /*1175.000*/ { 470, 0x02EE0B01, 0x0001F402},
+ /*1425.000*/ { 475, 0x00FA0E01, 0x00000003},
+ /*1500.000*/ { 480, 0x00000F01, 0x00007D03},
+ /*1212.500*/ { 485, 0x007D0C01, 0x0001F402},
+ /*1225.000*/ { 490, 0x00FA0C01, 0x0001F402},
+ /*1237.500*/ { 495, 0x01770C01, 0x0001F402},
+ /*1562.500*/ { 500, 0x02710F01, 0x00007D03},
+ /*1262.500*/ { 505, 0x02710C01, 0x0001F402},
+ /*1275.000*/ { 510, 0x02EE0C01, 0x0001F402},
+ /*1287.500*/ { 515, 0x036B0C01, 0x0001F402},
+ /*1300.000*/ { 520, 0x00000D01, 0x0001F402},
+ /*1575.000*/ { 525, 0x02EE0F01, 0x00000003},
+ /*1325.000*/ { 530, 0x00FA0D01, 0x0001F402},
+ /*1337.500*/ { 535, 0x01770D01, 0x0001F402},
+ /*1350.000*/ { 540, 0x01F40D01, 0x0001F402},
+ /*1362.500*/ { 545, 0x02710D01, 0x0001F402},
+ /*1512.500*/ { 550, 0x007D0F01, 0x0002EE02},
+ /*1387.500*/ { 555, 0x036B0D01, 0x0001F402},
+ /*1400.000*/ { 560, 0x00000E01, 0x0001F402},
+ /*1412.500*/ { 565, 0x007D0E01, 0x0001F402},
+ /*1425.000*/ { 570, 0x00FA0E01, 0x0001F402},
+ /*1437.500*/ { 575, 0x01770E01, 0x0001F402},
+ /*1450.000*/ { 580, 0x01F40E01, 0x0001F402},
+ /*1462.500*/ { 585, 0x02710E01, 0x0001F402},
+ /*1475.000*/ { 590, 0x02EE0E01, 0x0001F402},
+ /*1487.500*/ { 595, 0x036B0E01, 0x0001F402},
+ /*1575.000*/ { 600, 0x02EE0F01, 0x00027102},
+ /*1512.500*/ { 605, 0x007D0F01, 0x0001F402},
+ /*1525.000*/ { 610, 0x00FA0F01, 0x0001F402},
+ /*1537.500*/ { 615, 0x01770F01, 0x0001F402},
+ /*1550.000*/ { 620, 0x01F40F01, 0x0001F402},
+ /*1562.500*/ { 625, 0x02710F01, 0x0001F402},
+ /*1575.000*/ { 630, 0x02EE0F01, 0x0001F402},
+ /*1587.500*/ { 635, 0x036B0F01, 0x0001F402},
+ /*1600.000*/ { 640, 0x00001001, 0x0001F402},
+ /*1290.000*/ { 645, 0x01F44005, 0x00000002},
+ /*1462.500*/ { 650, 0x02710E01, 0x0000FA02}
+};
+
+static u32 find_matching_freq_config(unsigned short freq,
+ const struct xmgmt_ocl_clockwiz *table,
+ int size)
+{
+ u32 end = size - 1;
+ u32 start = 0;
+ u32 idx;
+
+ if (freq < table[0].ocl)
+ return 0;
+
+ if (freq > table[size - 1].ocl)
+ return size - 1;
+
+ while (start < end) {
+ idx = (start + end) / 2;
+ if (freq == table[idx].ocl)
+ break;
+ if (freq < table[idx].ocl)
+ end = idx;
+ else
+ start = idx + 1;
+ }
+ if (freq < table[idx].ocl)
+ idx--;
+
+ return idx;
+}
+
+static u32 find_matching_freq(u32 freq,
+ const struct xmgmt_ocl_clockwiz *freq_table,
+ int freq_table_size)
+{
+ int idx = find_matching_freq_config(freq, freq_table, freq_table_size);
+
+ return freq_table[idx].ocl;
+}
+
+static inline int clock_wiz_busy(struct clock *clock, int cycle, int interval)
+{
+ u32 val = 0;
+ int count;
+ int ret;
+
+ for (count = 0; count < cycle; count++) {
+ ret = regmap_read(clock->regmap, XRT_CLOCK_STATUS_REG, &val);
+ if (ret) {
+ CLOCK_ERR(clock, "read status failed %d", ret);
+ return ret;
+ }
+ if (val == 1)
+ break;
+
+ mdelay(interval);
+ }
+ if (val != 1) {
+ CLOCK_ERR(clock, "clockwiz is busy (%u) after %d ms",
+ val, cycle * interval);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
+static int get_freq(struct clock *clock, u16 *freq)
+{
+ u32 mul_frac0 = 0;
+ u32 div_frac1 = 0;
+ u32 mul0, div0;
+ u64 input;
+ u32 div1;
+ u32 val;
+ int ret;
+
+ WARN_ON(!mutex_is_locked(&clock->clock_lock));
+
+ ret = regmap_read(clock->regmap, XRT_CLOCK_STATUS_REG, &val);
+ if (ret) {
+ CLOCK_ERR(clock, "read status failed %d", ret);
+ return ret;
+ }
+
+ if ((val & 0x1) == 0) {
+ CLOCK_ERR(clock, "clockwiz is busy %x", val);
+ *freq = 0;
+ return -EBUSY;
+ }
+
+ ret = regmap_read(clock->regmap, XRT_CLOCK_CLKFBOUT_REG, &val);
+ if (ret) {
+ CLOCK_ERR(clock, "read clkfbout failed %d", ret);
+ return ret;
+ }
+
+ div0 = val & 0xff;
+ mul0 = (val & 0xff00) >> 8;
+ if (val & BIT(26)) {
+ mul_frac0 = val >> 16;
+ mul_frac0 &= 0x3ff;
+ }
+
+ /*
+ * Multiply both numerator (mul0) and the denominator (div0) with 1000
+ * to account for fractional portion of multiplier
+ */
+ mul0 *= 1000;
+ mul0 += mul_frac0;
+ div0 *= 1000;
+
+ ret = regmap_read(clock->regmap, XRT_CLOCK_CLKOUT0_REG, &val);
+ if (ret) {
+ CLOCK_ERR(clock, "read clkout0 failed %d", ret);
+ return ret;
+ }
+
+ div1 = val & 0xff;
+ if (val & BIT(18)) {
+ div_frac1 = val >> 8;
+ div_frac1 &= 0x3ff;
+ }
+
+ /*
+ * Multiply both numerator (mul0) and the denominator (div1) with
+ * 1000 to account for fractional portion of divider
+ */
+
+ div1 *= 1000;
+ div1 += div_frac1;
+ div0 *= div1;
+ mul0 *= 1000;
+ if (div0 == 0) {
+ CLOCK_ERR(clock, "clockwiz 0 divider");
+ return -EINVAL;
+ }
+
+ input = (u64)mul0 * 100;
+ do_div(input, div0);
+ *freq = (u16)input;
+
+ return 0;
+}
+
+static int set_freq(struct clock *clock, u16 freq)
+{
+ int err = 0;
+ u32 idx = 0;
+ u32 val = 0;
+ u32 config;
+
+ mutex_lock(&clock->clock_lock);
+ idx = find_matching_freq_config(freq, frequency_table,
+ ARRAY_SIZE(frequency_table));
+
+ CLOCK_INFO(clock, "New: %u MHz", freq);
+ err = clock_wiz_busy(clock, 20, 50);
+ if (err)
+ goto fail;
+
+ config = frequency_table[idx].config0;
+ err = regmap_write(clock->regmap, XRT_CLOCK_CLKFBOUT_REG, config);
+ if (err) {
+ CLOCK_ERR(clock, "write clkfbout failed %d", err);
+ goto fail;
+ }
+
+ config = frequency_table[idx].config2;
+ err = regmap_write(clock->regmap, XRT_CLOCK_CLKOUT0_REG, config);
+ if (err) {
+ CLOCK_ERR(clock, "write clkout0 failed %d", err);
+ goto fail;
+ }
+
+ mdelay(10);
+ err = regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 7);
+ if (err) {
+ CLOCK_ERR(clock, "write load_saddr_sen failed %d", err);
+ goto fail;
+ }
+
+ mdelay(1);
+ err = regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 2);
+ if (err) {
+ CLOCK_ERR(clock, "write saddr failed %d", err);
+ goto fail;
+ }
+
+ CLOCK_INFO(clock, "clockwiz waiting for locked signal");
+
+ err = clock_wiz_busy(clock, 100, 100);
+ if (err) {
+ CLOCK_ERR(clock, "clockwiz MMCM/PLL did not lock");
+ /* restore */
+ regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 4);
+ mdelay(10);
+ regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 0);
+ goto fail;
+ }
+ regmap_read(clock->regmap, XRT_CLOCK_CLKFBOUT_REG, &val);
+ CLOCK_INFO(clock, "clockwiz CONFIG(0) 0x%x", val);
+ regmap_read(clock->regmap, XRT_CLOCK_CLKOUT0_REG, &val);
+ CLOCK_INFO(clock, "clockwiz CONFIG(2) 0x%x", val);
+
+fail:
+ mutex_unlock(&clock->clock_lock);
+ return err;
+}
+
+static int get_freq_counter(struct clock *clock, u32 *freq)
+{
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(clock->pdev);
+ struct platform_device *pdev = clock->pdev;
+ struct platform_device *counter_leaf;
+ const void *counter;
+ int err;
+
+ WARN_ON(!mutex_is_locked(&clock->clock_lock));
+
+ err = xrt_md_get_prop(DEV(pdev), pdata->xsp_dtb, clock->clock_ep_name,
+ NULL, XRT_MD_PROP_CLK_CNT, &counter, NULL);
+ if (err) {
+ xrt_err(pdev, "no counter specified");
+ return err;
+ }
+
+ counter_leaf = xleaf_get_leaf_by_epname(pdev, counter);
+ if (!counter_leaf) {
+ xrt_err(pdev, "can't find counter");
+ return -ENOENT;
+ }
+
+ err = xleaf_call(counter_leaf, XRT_CLKFREQ_READ, freq);
+ if (err)
+ xrt_err(pdev, "can't read counter");
+ xleaf_put_leaf(clock->pdev, counter_leaf);
+
+ return err;
+}
+
+static int clock_get_freq(struct clock *clock, u16 *freq, u32 *freq_cnter)
+{
+ int err = 0;
+
+ mutex_lock(&clock->clock_lock);
+
+ if (err == 0 && freq)
+ err = get_freq(clock, freq);
+
+ if (err == 0 && freq_cnter)
+ err = get_freq_counter(clock, freq_cnter);
+
+ mutex_unlock(&clock->clock_lock);
+ return err;
+}
+
+static int clock_verify_freq(struct clock *clock)
+{
+ u32 lookup_freq, clock_freq_counter, request_in_khz, tolerance;
+ int err = 0;
+ u16 freq;
+
+ mutex_lock(&clock->clock_lock);
+
+ err = get_freq(clock, &freq);
+ if (err) {
+ xrt_err(clock->pdev, "get freq failed, %d", err);
+ goto end;
+ }
+
+ err = get_freq_counter(clock, &clock_freq_counter);
+ if (err) {
+ xrt_err(clock->pdev, "get freq counter failed, %d", err);
+ goto end;
+ }
+
+ lookup_freq = find_matching_freq(freq, frequency_table,
+ ARRAY_SIZE(frequency_table));
+ request_in_khz = lookup_freq * 1000;
+ tolerance = lookup_freq * 50;
+ if (tolerance < abs(clock_freq_counter - request_in_khz)) {
+ CLOCK_ERR(clock,
+ "set clock(%s) failed, request %ukHz, actual %ukHz",
+ clock->clock_ep_name, request_in_khz, clock_freq_counter);
+ err = -EDOM;
+ } else {
+ CLOCK_INFO(clock, "verified clock (%s)", clock->clock_ep_name);
+ }
+
+end:
+ mutex_unlock(&clock->clock_lock);
+ return err;
+}
+
+static int clock_init(struct clock *clock)
+{
+ struct xrt_subdev_platdata *pdata = DEV_PDATA(clock->pdev);
+ const u16 *freq;
+ int err = 0;
+
+ err = xrt_md_get_prop(DEV(clock->pdev), pdata->xsp_dtb,
+ clock->clock_ep_name, NULL, XRT_MD_PROP_CLK_FREQ,
+ (const void **)&freq, NULL);
+ if (err) {
+ xrt_info(clock->pdev, "no default freq");
+ return 0;
+ }
+
+ err = set_freq(clock, be16_to_cpu(*freq));
+
+ return err;
+}
+
+static ssize_t freq_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct clock *clock = platform_get_drvdata(to_platform_device(dev));
+ ssize_t count;
+ u16 freq = 0;
+
+ count = clock_get_freq(clock, &freq, NULL);
+ if (count < 0)
+ return count;
+
+ count = sysfs_emit(buf, "%u\n", freq);
+
+ return count;
+}
+static DEVICE_ATTR_RO(freq);
+
+static struct attribute *clock_attrs[] = {
+ &dev_attr_freq.attr,
+ NULL,
+};
+
+static struct attribute_group clock_attr_group = {
+ .attrs = clock_attrs,
+};
+
+static int
+xrt_clock_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
+{
+ struct clock *clock;
+ int ret = 0;
+
+ clock = platform_get_drvdata(pdev);
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ /* Does not handle any event. */
+ break;
+ case XRT_CLOCK_SET: {
+ u16 freq = (u16)(uintptr_t)arg;
+
+ ret = set_freq(clock, freq);
+ break;
+ }
+ case XRT_CLOCK_VERIFY:
+ ret = clock_verify_freq(clock);
+ break;
+ case XRT_CLOCK_GET: {
+ struct xrt_clock_get *get =
+ (struct xrt_clock_get *)arg;
+
+ ret = clock_get_freq(clock, &get->freq, &get->freq_cnter);
+ break;
+ }
+ default:
+ xrt_err(pdev, "unsupported cmd %d", cmd);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static int clock_remove(struct platform_device *pdev)
+{
+ sysfs_remove_group(&pdev->dev.kobj, &clock_attr_group);
+
+ return 0;
+}
+
+static int clock_probe(struct platform_device *pdev)
+{
+ struct clock *clock = NULL;
+ void __iomem *base = NULL;
+ struct resource *res;
+ int ret;
+
+ clock = devm_kzalloc(&pdev->dev, sizeof(*clock), GFP_KERNEL);
+ if (!clock)
+ return -ENOMEM;
+
+ platform_set_drvdata(pdev, clock);
+ clock->pdev = pdev;
+ mutex_init(&clock->clock_lock);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ ret = -EINVAL;
+ goto failed;
+ }
+
+ base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(base)) {
+ ret = PTR_ERR(base);
+ goto failed;
+ }
+
+ clock->regmap = devm_regmap_init_mmio(&pdev->dev, base, &clock_regmap_config);
+ if (IS_ERR(clock->regmap)) {
+ CLOCK_ERR(clock, "regmap %pR failed", res);
+ ret = PTR_ERR(clock->regmap);
+ goto failed;
+ }
+ clock->clock_ep_name = res->name;
+
+ ret = clock_init(clock);
+ if (ret)
+ goto failed;
+
+ ret = sysfs_create_group(&pdev->dev.kobj, &clock_attr_group);
+ if (ret) {
+ CLOCK_ERR(clock, "create clock attrs failed: %d", ret);
+ goto failed;
+ }
+
+ CLOCK_INFO(clock, "successfully initialized Clock subdev");
+
+ return 0;
+
+failed:
+ return ret;
+}
+
+static struct xrt_subdev_endpoints xrt_clock_endpoints[] = {
+ {
+ .xse_names = (struct xrt_subdev_ep_names[]) {
+ { .regmap_name = "clkwiz" },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_subdev_drvdata xrt_clock_data = {
+ .xsd_dev_ops = {
+ .xsd_leaf_call = xrt_clock_leaf_call,
+ },
+};
+
+static const struct platform_device_id xrt_clock_table[] = {
+ { XRT_CLOCK, (kernel_ulong_t)&xrt_clock_data },
+ { },
+};
+
+static struct platform_driver xrt_clock_driver = {
+ .driver = {
+ .name = XRT_CLOCK,
+ },
+ .probe = clock_probe,
+ .remove = clock_remove,
+ .id_table = xrt_clock_table,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_CLOCK, clock);
--
2.27.0
Add the partition isolation platform driver. Partition isolation is
a hardware function discovered by walking firmware metadata.
A platform device node will be created for it. The partition isolation
function isolates the different FPGA regions.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
drivers/fpga/xrt/include/xleaf/axigate.h | 23 ++
drivers/fpga/xrt/lib/xleaf/axigate.c | 342 +++++++++++++++++++++++
2 files changed, 365 insertions(+)
create mode 100644 drivers/fpga/xrt/include/xleaf/axigate.h
create mode 100644 drivers/fpga/xrt/lib/xleaf/axigate.c
diff --git a/drivers/fpga/xrt/include/xleaf/axigate.h b/drivers/fpga/xrt/include/xleaf/axigate.h
new file mode 100644
index 000000000000..58f32c76dca1
--- /dev/null
+++ b/drivers/fpga/xrt/include/xleaf/axigate.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou <[email protected]>
+ */
+
+#ifndef _XRT_AXIGATE_H_
+#define _XRT_AXIGATE_H_
+
+#include "xleaf.h"
+#include "metadata.h"
+
+/*
+ * AXIGATE driver leaf calls.
+ */
+enum xrt_axigate_leaf_cmd {
+ XRT_AXIGATE_CLOSE = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
+ XRT_AXIGATE_OPEN,
+};
+
+#endif /* _XRT_AXIGATE_H_ */
diff --git a/drivers/fpga/xrt/lib/xleaf/axigate.c b/drivers/fpga/xrt/lib/xleaf/axigate.c
new file mode 100644
index 000000000000..231bb0335278
--- /dev/null
+++ b/drivers/fpga/xrt/lib/xleaf/axigate.c
@@ -0,0 +1,342 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xilinx Alveo FPGA AXI Gate Driver
+ *
+ * Copyright (C) 2020-2021 Xilinx, Inc.
+ *
+ * Authors:
+ * Lizhi Hou<[email protected]>
+ */
+
+#include <linux/mod_devicetable.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/regmap.h>
+#include <linux/io.h>
+#include "metadata.h"
+#include "xleaf.h"
+#include "xleaf/axigate.h"
+
+#define XRT_AXIGATE "xrt_axigate"
+
+#define XRT_AXIGATE_WRITE_REG 0
+#define XRT_AXIGATE_READ_REG 8
+
+#define XRT_AXIGATE_CTRL_CLOSE 0
+#define XRT_AXIGATE_CTRL_OPEN_BIT0 1
+#define XRT_AXIGATE_CTRL_OPEN_BIT1 2
+
+#define XRT_AXIGATE_INTERVAL 500 /* ns */
+
+struct xrt_axigate {
+ struct platform_device *pdev;
+ struct regmap *regmap;
+ struct mutex gate_lock; /* gate dev lock */
+
+ void *evt_hdl;
+ const char *ep_name;
+
+ bool gate_closed;
+};
+
+static const struct regmap_config axigate_regmap_config = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
+ .max_register = 0x1000,
+};
+
+/* the ep names are in the order of hardware layers */
+static const char * const xrt_axigate_epnames[] = {
+ XRT_MD_NODE_GATE_PLP, /* PLP: Provider Logic Partition */
+ XRT_MD_NODE_GATE_ULP /* ULP: User Logic Partition */
+};
+
+static inline int close_gate(struct xrt_axigate *gate)
+{
+ u32 val;
+ int ret;
+
+ ret = regmap_write(gate->regmap, XRT_AXIGATE_WRITE_REG, XRT_AXIGATE_CTRL_CLOSE);
+ if (ret) {
+ xrt_err(gate->pdev, "write gate failed %d", ret);
+ return ret;
+ }
+ ndelay(XRT_AXIGATE_INTERVAL);
+ /*
+ * Legacy hardware requires an extra read to work properly.
+ * This is not on the critical path, so the extra read should not impact performance much.
+ */
+ ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &val);
+ if (ret) {
+ xrt_err(gate->pdev, "read gate failed %d", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static inline int open_gate(struct xrt_axigate *gate)
+{
+ u32 val;
+ int ret;
+
+ ret = regmap_write(gate->regmap, XRT_AXIGATE_WRITE_REG, XRT_AXIGATE_CTRL_OPEN_BIT1);
+ if (ret) {
+ xrt_err(gate->pdev, "write 2 failed %d", ret);
+ return ret;
+ }
+ ndelay(XRT_AXIGATE_INTERVAL);
+ /*
+ * Legacy hardware requires an extra read to work properly.
+ * This is not on the critical path, so the extra read should not impact performance much.
+ */
+ ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &val);
+ if (ret) {
+ xrt_err(gate->pdev, "read 2 failed %d", ret);
+ return ret;
+ }
+ ret = regmap_write(gate->regmap, XRT_AXIGATE_WRITE_REG,
+ XRT_AXIGATE_CTRL_OPEN_BIT0 | XRT_AXIGATE_CTRL_OPEN_BIT1);
+ if (ret) {
+ xrt_err(gate->pdev, "write 3 failed %d", ret);
+ return ret;
+ }
+ ndelay(XRT_AXIGATE_INTERVAL);
+ ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &val);
+ if (ret) {
+ xrt_err(gate->pdev, "read 3 failed %d", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int xrt_axigate_epname_idx(struct platform_device *pdev)
+{
+ struct resource *res;
+ int ret, i;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ xrt_err(pdev, "Empty Resource!");
+ return -EINVAL;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(xrt_axigate_epnames); i++) {
+ ret = strncmp(xrt_axigate_epnames[i], res->name,
+ strlen(xrt_axigate_epnames[i]) + 1);
+ if (!ret)
+ return i;
+ }
+
+ return -EINVAL;
+}
+
+static int xrt_axigate_close(struct platform_device *pdev)
+{
+ struct xrt_axigate *gate;
+ u32 status = 0;
+ int ret;
+
+ gate = platform_get_drvdata(pdev);
+
+ mutex_lock(&gate->gate_lock);
+ ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &status);
+ if (ret) {
+ xrt_err(pdev, "read gate failed %d", ret);
+ goto failed;
+ }
+ if (status) { /* gate is open */
+ xleaf_broadcast_event(pdev, XRT_EVENT_PRE_GATE_CLOSE, false);
+ ret = close_gate(gate);
+ if (ret)
+ goto failed;
+ }
+
+ gate->gate_closed = true;
+
+failed:
+ mutex_unlock(&gate->gate_lock);
+
+ xrt_info(pdev, "close gate %s", gate->ep_name);
+ return ret;
+}
+
+static int xrt_axigate_open(struct platform_device *pdev)
+{
+ struct xrt_axigate *gate;
+ u32 status;
+ int ret;
+
+ gate = platform_get_drvdata(pdev);
+
+ mutex_lock(&gate->gate_lock);
+ ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &status);
+ if (ret) {
+ xrt_err(pdev, "read gate failed %d", ret);
+ goto failed;
+ }
+ if (!status) { /* gate is closed */
+ ret = open_gate(gate);
+ if (ret)
+ goto failed;
+ xleaf_broadcast_event(pdev, XRT_EVENT_POST_GATE_OPEN, true);
+ /* xrt_axigate_open() could be called from an event callback, thus
+ * we cannot wait for the event handling to complete
+ */
+ }
+
+ gate->gate_closed = false;
+
+failed:
+ mutex_unlock(&gate->gate_lock);
+
+ xrt_info(pdev, "open gate %s", gate->ep_name);
+ return ret;
+}
+
+static void xrt_axigate_event_cb(struct platform_device *pdev, void *arg)
+{
+ struct xrt_axigate *gate = platform_get_drvdata(pdev);
+ struct xrt_event *evt = (struct xrt_event *)arg;
+ enum xrt_events e = evt->xe_evt;
+ struct platform_device *leaf;
+ enum xrt_subdev_id id;
+ struct resource *res;
+ int instance;
+
+ if (e != XRT_EVENT_POST_CREATION)
+ return;
+
+ instance = evt->xe_subdev.xevt_subdev_instance;
+ id = evt->xe_subdev.xevt_subdev_id;
+ if (id != XRT_SUBDEV_AXIGATE)
+ return;
+
+ leaf = xleaf_get_leaf_by_id(pdev, id, instance);
+ if (!leaf)
+ return;
+
+ res = platform_get_resource(leaf, IORESOURCE_MEM, 0);
+ if (!res || !strncmp(res->name, gate->ep_name, strlen(res->name) + 1)) {
+ xleaf_put_leaf(pdev, leaf);
+ return;
+ }
+
+ /* higher level axigate instance created, make sure the gate is opened. */
+ if (xrt_axigate_epname_idx(leaf) > xrt_axigate_epname_idx(pdev))
+ xrt_axigate_open(pdev);
+ else
+ xleaf_call(leaf, XRT_AXIGATE_OPEN, NULL);
+
+ xleaf_put_leaf(pdev, leaf);
+}
+
+static int
+xrt_axigate_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
+{
+ int ret = 0;
+
+ switch (cmd) {
+ case XRT_XLEAF_EVENT:
+ xrt_axigate_event_cb(pdev, arg);
+ break;
+ case XRT_AXIGATE_CLOSE:
+ ret = xrt_axigate_close(pdev);
+ break;
+ case XRT_AXIGATE_OPEN:
+ ret = xrt_axigate_open(pdev);
+ break;
+ default:
+ xrt_err(pdev, "unsupported cmd %d", cmd);
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static int xrt_axigate_probe(struct platform_device *pdev)
+{
+ struct xrt_axigate *gate = NULL;
+ void __iomem *base = NULL;
+ struct resource *res;
+ int ret;
+
+ gate = devm_kzalloc(&pdev->dev, sizeof(*gate), GFP_KERNEL);
+ if (!gate)
+ return -ENOMEM;
+
+ gate->pdev = pdev;
+ platform_set_drvdata(pdev, gate);
+
+ xrt_info(pdev, "probing...");
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ xrt_err(pdev, "Empty resource 0");
+ ret = -EINVAL;
+ goto failed;
+ }
+
+ base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(base)) {
+ xrt_err(pdev, "map base iomem failed");
+ ret = PTR_ERR(base);
+ goto failed;
+ }
+
+ gate->regmap = devm_regmap_init_mmio(&pdev->dev, base, &axigate_regmap_config);
+ if (IS_ERR(gate->regmap)) {
+ xrt_err(pdev, "regmap %pR failed", res);
+ ret = PTR_ERR(gate->regmap);
+ goto failed;
+ }
+ gate->ep_name = res->name;
+
+ mutex_init(&gate->gate_lock);
+
+ return 0;
+
+failed:
+ return ret;
+}
+
+static struct xrt_subdev_endpoints xrt_axigate_endpoints[] = {
+ {
+ .xse_names = (struct xrt_subdev_ep_names[]) {
+ { .ep_name = XRT_MD_NODE_GATE_ULP },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ {
+ .xse_names = (struct xrt_subdev_ep_names[]) {
+ { .ep_name = XRT_MD_NODE_GATE_PLP },
+ { NULL },
+ },
+ .xse_min_ep = 1,
+ },
+ { 0 },
+};
+
+static struct xrt_subdev_drvdata xrt_axigate_data = {
+ .xsd_dev_ops = {
+ .xsd_leaf_call = xrt_axigate_leaf_call,
+ },
+};
+
+static const struct platform_device_id xrt_axigate_table[] = {
+ { XRT_AXIGATE, (kernel_ulong_t)&xrt_axigate_data },
+ { },
+};
+
+static struct platform_driver xrt_axigate_driver = {
+ .driver = {
+ .name = XRT_AXIGATE,
+ },
+ .probe = xrt_axigate_probe,
+ .id_table = xrt_axigate_table,
+};
+
+XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_AXIGATE, axigate);
--
2.27.0
Update fpga Kconfig/Makefile and add Kconfig/Makefile for new drivers.
Signed-off-by: Sonal Santan <[email protected]>
Signed-off-by: Max Zhen <[email protected]>
Signed-off-by: Lizhi Hou <[email protected]>
---
MAINTAINERS | 11 +++++++++++
drivers/Makefile | 1 +
drivers/fpga/Kconfig | 2 ++
drivers/fpga/Makefile | 5 +++++
drivers/fpga/xrt/Kconfig | 8 ++++++++
drivers/fpga/xrt/lib/Kconfig | 17 +++++++++++++++++
drivers/fpga/xrt/lib/Makefile | 30 ++++++++++++++++++++++++++++++
drivers/fpga/xrt/metadata/Kconfig | 12 ++++++++++++
drivers/fpga/xrt/metadata/Makefile | 16 ++++++++++++++++
drivers/fpga/xrt/mgmt/Kconfig | 15 +++++++++++++++
drivers/fpga/xrt/mgmt/Makefile | 19 +++++++++++++++++++
11 files changed, 136 insertions(+)
create mode 100644 drivers/fpga/xrt/Kconfig
create mode 100644 drivers/fpga/xrt/lib/Kconfig
create mode 100644 drivers/fpga/xrt/lib/Makefile
create mode 100644 drivers/fpga/xrt/metadata/Kconfig
create mode 100644 drivers/fpga/xrt/metadata/Makefile
create mode 100644 drivers/fpga/xrt/mgmt/Kconfig
create mode 100644 drivers/fpga/xrt/mgmt/Makefile
diff --git a/MAINTAINERS b/MAINTAINERS
index aa84121c5611..44ccc52987ac 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7009,6 +7009,17 @@ F: Documentation/fpga/
F: drivers/fpga/
F: include/linux/fpga/
+FPGA XRT DRIVERS
+M: Lizhi Hou <[email protected]>
+R: Max Zhen <[email protected]>
+R: Sonal Santan <[email protected]>
+L: [email protected]
+S: Maintained
+W: https://github.com/Xilinx/XRT
+F: Documentation/fpga/xrt.rst
+F: drivers/fpga/xrt/
+F: include/uapi/linux/xrt/
+
FPU EMULATOR
M: Bill Metzenthen <[email protected]>
S: Maintained
diff --git a/drivers/Makefile b/drivers/Makefile
index 6fba7daba591..dbb3b727fc7a 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -179,6 +179,7 @@ obj-$(CONFIG_STM) += hwtracing/stm/
obj-$(CONFIG_ANDROID) += android/
obj-$(CONFIG_NVMEM) += nvmem/
obj-$(CONFIG_FPGA) += fpga/
+obj-$(CONFIG_FPGA_XRT_METADATA) += fpga/
obj-$(CONFIG_FSI) += fsi/
obj-$(CONFIG_TEE) += tee/
obj-$(CONFIG_MULTIPLEXER) += mux/
diff --git a/drivers/fpga/Kconfig b/drivers/fpga/Kconfig
index 5ff9438b7b46..01410ff000b9 100644
--- a/drivers/fpga/Kconfig
+++ b/drivers/fpga/Kconfig
@@ -227,4 +227,6 @@ config FPGA_MGR_ZYNQMP_FPGA
to configure the programmable logic(PL) through PS
on ZynqMP SoC.
+source "drivers/fpga/xrt/Kconfig"
+
endif # FPGA
diff --git a/drivers/fpga/Makefile b/drivers/fpga/Makefile
index 18dc9885883a..4b887bf95cb3 100644
--- a/drivers/fpga/Makefile
+++ b/drivers/fpga/Makefile
@@ -48,3 +48,8 @@ obj-$(CONFIG_FPGA_DFL_NIOS_INTEL_PAC_N3000) += dfl-n3000-nios.o
# Drivers for FPGAs which implement DFL
obj-$(CONFIG_FPGA_DFL_PCI) += dfl-pci.o
+
+# XRT drivers for Alveo
+obj-$(CONFIG_FPGA_XRT_METADATA) += xrt/metadata/
+obj-$(CONFIG_FPGA_XRT_LIB) += xrt/lib/
+obj-$(CONFIG_FPGA_XRT_XMGMT) += xrt/mgmt/
diff --git a/drivers/fpga/xrt/Kconfig b/drivers/fpga/xrt/Kconfig
new file mode 100644
index 000000000000..0e2c59589ddd
--- /dev/null
+++ b/drivers/fpga/xrt/Kconfig
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Xilinx Alveo FPGA device configuration
+#
+
+source "drivers/fpga/xrt/metadata/Kconfig"
+source "drivers/fpga/xrt/lib/Kconfig"
+source "drivers/fpga/xrt/mgmt/Kconfig"
diff --git a/drivers/fpga/xrt/lib/Kconfig b/drivers/fpga/xrt/lib/Kconfig
new file mode 100644
index 000000000000..935369fad570
--- /dev/null
+++ b/drivers/fpga/xrt/lib/Kconfig
@@ -0,0 +1,17 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# XRT Alveo FPGA device configuration
+#
+
+config FPGA_XRT_LIB
+ tristate "XRT Alveo Driver Library"
+ depends on HWMON && PCI && HAS_IOMEM
+ select FPGA_XRT_METADATA
+ select REGMAP_MMIO
+ help
+ Select this option to enable Xilinx XRT Alveo driver library. This
+ library is core infrastructure of XRT Alveo FPGA drivers which
+ provides functions for working with device nodes, iteration and
+ lookup of platform devices, common interfaces for platform devices,
+ plumbing of function call and ioctls between platform devices and
+ parent partitions.
diff --git a/drivers/fpga/xrt/lib/Makefile b/drivers/fpga/xrt/lib/Makefile
new file mode 100644
index 000000000000..58563416efbf
--- /dev/null
+++ b/drivers/fpga/xrt/lib/Makefile
@@ -0,0 +1,30 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
+#
+# Authors: [email protected]
+#
+
+FULL_XRT_PATH=$(srctree)/$(src)/..
+FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
+
+obj-$(CONFIG_FPGA_XRT_LIB) += xrt-lib.o
+
+xrt-lib-objs := \
+ lib-drv.o \
+ xroot.o \
+ xclbin.o \
+ subdev.o \
+ cdev.o \
+ group.o \
+ xleaf/vsec.o \
+ xleaf/axigate.o \
+ xleaf/devctl.o \
+ xleaf/icap.o \
+ xleaf/clock.o \
+ xleaf/clkfreq.o \
+ xleaf/ucs.o \
+ xleaf/ddr_calibration.o
+
+ccflags-y := -I$(FULL_XRT_PATH)/include \
+ -I$(FULL_DTC_PATH)
diff --git a/drivers/fpga/xrt/metadata/Kconfig b/drivers/fpga/xrt/metadata/Kconfig
new file mode 100644
index 000000000000..129adda47e94
--- /dev/null
+++ b/drivers/fpga/xrt/metadata/Kconfig
@@ -0,0 +1,12 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# XRT Alveo FPGA device configuration
+#
+
+config FPGA_XRT_METADATA
+ bool "XRT Alveo Driver Metadata Parser"
+ select LIBFDT
+ help
+ This option provides helper functions to parse Xilinx Alveo FPGA
+ firmware metadata. The metadata is in device tree format and the
+ XRT driver uses it to discover the HW subsystems behind PCIe BAR.
diff --git a/drivers/fpga/xrt/metadata/Makefile b/drivers/fpga/xrt/metadata/Makefile
new file mode 100644
index 000000000000..14f65ef1595c
--- /dev/null
+++ b/drivers/fpga/xrt/metadata/Makefile
@@ -0,0 +1,16 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
+#
+# Authors: [email protected]
+#
+
+FULL_XRT_PATH=$(srctree)/$(src)/..
+FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
+
+obj-$(CONFIG_FPGA_XRT_METADATA) += xrt-md.o
+
+xrt-md-objs := metadata.o
+
+ccflags-y := -I$(FULL_XRT_PATH)/include \
+ -I$(FULL_DTC_PATH)
diff --git a/drivers/fpga/xrt/mgmt/Kconfig b/drivers/fpga/xrt/mgmt/Kconfig
new file mode 100644
index 000000000000..31e9e19fffb8
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/Kconfig
@@ -0,0 +1,15 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Xilinx XRT FPGA device configuration
+#
+
+config FPGA_XRT_XMGMT
+ tristate "Xilinx Alveo Management Driver"
+ depends on FPGA_XRT_LIB
+ select FPGA_XRT_METADATA
+ select FPGA_BRIDGE
+ select FPGA_REGION
+ help
+ Select this option to enable XRT PCIe driver for Xilinx Alveo FPGA.
+ This driver provides interfaces for userspace application to access
+ Alveo FPGA device.
diff --git a/drivers/fpga/xrt/mgmt/Makefile b/drivers/fpga/xrt/mgmt/Makefile
new file mode 100644
index 000000000000..acabd811f3fd
--- /dev/null
+++ b/drivers/fpga/xrt/mgmt/Makefile
@@ -0,0 +1,19 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
+#
+# Authors: [email protected]
+#
+
+FULL_XRT_PATH=$(srctree)/$(src)/..
+FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
+
+obj-$(CONFIG_FPGA_XRT_XMGMT) += xrt-mgmt.o
+
+xrt-mgmt-objs := root.o \
+ main.o \
+ fmgr-drv.o \
+ main-region.o
+
+ccflags-y := -I$(FULL_XRT_PATH)/include \
+ -I$(FULL_DTC_PATH)
--
2.27.0
A general problem throughout: xmgmt needs to be renamed to xrt-mgmt.
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> Describe XRT driver architecture and provide basic overview of
> Xilinx Alveo platform.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> Documentation/fpga/index.rst | 1 +
> Documentation/fpga/xrt.rst | 844 +++++++++++++++++++++++++++++++++++
> 2 files changed, 845 insertions(+)
> create mode 100644 Documentation/fpga/xrt.rst
>
> diff --git a/Documentation/fpga/index.rst b/Documentation/fpga/index.rst
> index f80f95667ca2..30134357b70d 100644
> --- a/Documentation/fpga/index.rst
> +++ b/Documentation/fpga/index.rst
> @@ -8,6 +8,7 @@ fpga
> :maxdepth: 1
>
> dfl
> + xrt
>
> .. only:: subproject and html
>
> diff --git a/Documentation/fpga/xrt.rst b/Documentation/fpga/xrt.rst
> new file mode 100644
> index 000000000000..0f7977464270
> --- /dev/null
> +++ b/Documentation/fpga/xrt.rst
> @@ -0,0 +1,844 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +==================================
> +XRTV2 Linux Kernel Driver Overview
> +==================================
> +
> +Authors:
> +
> +* Sonal Santan <[email protected]>
> +* Max Zhen <[email protected]>
> +* Lizhi Hou <[email protected]>
> +
> +XRTV2 drivers are second generation `XRT <https://github.com/Xilinx/XRT>`_
> +drivers which support `Alveo <https://www.xilinx.com/products/boards-and-kits/alveo.html>`_
> +PCIe platforms from Xilinx.
> +
> +XRTV2 drivers support *subsystem* style data driven platforms where driver's
> +configuration and behavior is determined by meta data provided by the platform
> +(in *device tree* format). Primary management physical function (MPF) driver
> +is called **xmgmt**. Primary user physical function (UPF) driver is called
> +**xuser** and is under development. xrt driver framework and HW subsystem
> +drivers are packaged into a library module called **xrt-lib**, which is
> +shared by **xmgmt** and **xuser** (under development). The xrt driver framework
> +implements a pseudo-bus which is used to discover HW subsystems and facilitate
> +inter HW subsystem interaction.
> +
> +Driver Modules
> +==============
> +
> +xrt-lib.ko
> +----------
> +
> +Repository of all subsystem drivers and pure software modules that can potentially
> +be shared between xmgmt and xuser. All these drivers are structured as Linux
> +*platform driver* and are instantiated by xmgmt (or xuser under development) based
> +on meta data associated with the hardware. The metadata is in the form of a device
ok
> +tree as mentioned before. Each platform driver statically defines a subsystem node
> +array by using node name or a string in its ``compatible`` property. And this
> +array is eventually translated to IOMEM resources of the platform device.
> +
> +The xrt-lib core infrastructure provides hooks to platform drivers for device node
> +management, user file operations and ioctl callbacks. The core infrastructure also
ok
> +provides pseudo-bus functionality for platform driver registration, discovery and
> +inter platform driver ioctl calls.
If/where this infrastructure moves is undecided.
> +
> +.. note::
> + See code in ``include/xleaf.h``
> +
> +
> +xmgmt.ko
> +--------
> +
> +The xmgmt driver is a PCIe device driver driving MPF found on Xilinx's Alveo
> +PCIE device. It consists of one *root* driver, one or more *group* drivers
> +and one or more *xleaf* drivers. The root and MPF specific xleaf drivers are
> +in xmgmt.ko. The group driver and other xleaf drivers are in xrt-lib.ko.
> +
> +The instantiation of specific group driver or xleaf driver is completely data
> +driven based on meta data (mostly in device tree format) found through VSEC
> +capability and inside firmware files, such as platform xsabin or user xclbin file.
> +The root driver manages the life cycle of multiple group drivers, which, in turn,
> +manages multiple xleaf drivers. This allows a single set of drivers to support
ok
> +all kinds of subsystems exposed by different shells. The difference among all
> +these subsystems will be handled in xleaf drivers with root and group drivers
> +being part of the infrastructure and provide common services for all leaves
> +found on all platforms.
> +
> +The driver object model looks like the following::
> +
> + +-----------+
> + | xroot |
> + +-----+-----+
> + |
> + +-----------+-----------+
> + | |
> + v v
> + +-----------+ +-----------+
> + | group | ... | group |
> + +-----+-----+ +------+----+
> + | |
> + | |
> + +-----+----+ +-----+----+
> + | | | |
> + v v v v
> + +-------+ +-------+ +-------+ +-------+
> + | xleaf |..| xleaf | | xleaf |..| xleaf |
> + +-------+ +-------+ +-------+ +-------+
> +
> +As an example for Xilinx Alveo U50 before user xclbin download, the tree
> +looks like the following::
> +
> + +-----------+
> + | xmgmt |
> + +-----+-----+
> + |
> + +-------------------------+--------------------+
> + | | |
> + v v v
> + +--------+ +--------+ +--------+
> + | group0 | | group1 | | group2 |
> + +----+---+ +----+---+ +---+----+
> + | | |
> + | | |
> + +-----+-----+ +----+-----+---+ +-----+-----+----+--------+
> + | | | | | | | | |
> + v v | v v | v v |
> + +------------+ +------+ | +------+ +------+ | +------+ +-----------+ |
> + | xmgmt_main | | VSEC | | | GPIO | | QSPI | | | CMC | | AXI-GATE0 | |
> + +------------+ +------+ | +------+ +------+ | +------+ +-----------+ |
> + | +---------+ | +------+ +-----------+ |
> + +>| MAILBOX | +->| ICAP | | AXI-GATE1 |<+
> + +---------+ | +------+ +-----------+
> + | +-------+
> + +->| CALIB |
> + +-------+
> +
> +After an xclbin is download, group3 will be added and the tree looks like the
> +following::
> +
> + +-----------+
> + | xmgmt |
> + +-----+-----+
> + |
> + +-------------------------+--------------------+-----------------+
> + | | | |
> + v v v |
> + +--------+ +--------+ +--------+ |
> + | group0 | | group1 | | group2 | |
> + +----+---+ +----+---+ +---+----+ |
> + | | | |
> + | | | |
> + +-----+-----+ +-----+-----+---+ +-----+-----+----+--------+ |
> + | | | | | | | | | |
> + v v | v v | v v | |
> + +------------+ +------+ | +------+ +------+ | +------+ +-----------+ | |
> + | xmgmt_main | | VSEC | | | GPIO | | QSPI | | | CMC | | AXI-GATE0 | | |
> + +------------+ +------+ | +------+ +------+ | +------+ +-----------+ | |
> + | +---------+ | +------+ +-----------+ | |
> + +>| MAILBOX | +->| ICAP | | AXI-GATE1 |<+ |
> + +---------+ | +------+ +-----------+ |
> + | +-------+ |
> + +->| CALIB | |
> + +-------+ |
> + +---+----+ |
> + | group3 |<--------------------------------------------+
> + +--------+
> + |
> + |
> + +-------+--------+---+--+--------+------+-------+
> + | | | | | | |
> + v | v | v | v
> + +--------+ | +--------+ | +--------+ | +-----+
> + | CLOCK0 | | | CLOCK1 | | | CLOCK2 | | | UCS |
> + +--------+ v +--------+ v +--------+ v +-----+
> + +-------------+ +-------------+ +-------------+
> + | CLOCK-FREQ0 | | CLOCK-FREQ1 | | CLOCK-FREQ2 |
> + +-------------+ +-------------+ +-------------+
> +
> +
> +xmgmt-root
> +^^^^^^^^^^
> +
> +The xmgmt-root driver is a PCIe device driver attached to MPF. It's part of the
> +infrastructure of the MPF driver and resides in xmgmt.ko. This driver
> +
> +* manages one or more group drivers
> +* provides access to functionalities that requires pci_dev, such as PCIE config
> + space access, to other xleaf drivers through root calls
> +* facilities event callbacks for other xleaf drivers
> +* facilities inter-leaf driver calls for other xleaf drivers
> +
> +When root driver starts, it will explicitly create an initial group instance,
> +which contains xleaf drivers that will trigger the creation of other group
> +instances. The root driver will wait for all group and leaves to be created
> +before it returns from it's probe routine and claim success of the
> +initialization of the entire xmgmt driver. If any leaf fails to initialize the
> +xmgmt driver will still come online but with limited functionality.
thanks for adding this
> +
> +.. note::
> + See code in ``lib/xroot.c`` and ``mgmt/root.c``
> +
> +
> +group
> +^^^^^
> +
> +The group driver represents a pseudo device whose life cycle is managed by
ok
> +root and does not have real IO mem or IRQ resources. It's part of the
> +infrastructure of the MPF driver and resides in xrt-lib.ko. This driver
> +
> +* manages one or more xleaf drivers
> +* provides access to root from leaves, so that root calls, event notifications
> + and inter-leaf calls can happen
> +
> +In xmgmt, an initial group driver instance will be created by the root. This
> +instance contains leaves that will trigger group instances to be created to
> +manage groups of leaves found on different partitions on hardware, such as
> +VSEC, Shell, and User.
> +
> +Every *fpga_region* has a group object associated with it. The group is
> +created when xclbin image is loaded on the fpga_region. The existing group
> +is destroyed when a new xclbin image is loaded. The fpga_region persists
> +across xclbin downloads.
> +
> +.. note::
> + See code in ``lib/group.c``
> +
> +
> +xleaf
> +^^^^^
> +
> +The xleaf driver is a platform device driver whose life cycle is managed by
> +a group driver and may or may not have real IO mem or IRQ resources. They
> +are the real meat of xmgmt and contains platform specific code to Shell and
> +User found on a MPF.
> +
> +A xleaf driver may not have real hardware resources when it merely acts as a
> +driver that manages certain in-memory states for xmgmt.
A xleaf driver without real hardware resources manages in-memory states for xrt-mgmt.
A more concise wording of the above; change it if you like.
I noticed use of xmgmt; this changed in v4 to xrt-mgmt. Check the doc for other instances.
> These in-memory states
> +could be shared by multiple other leaves.
> +
> +Leaf drivers assigned to specific hardware resources drive specific subsystem in
drive a specific
> +the device. To manipulate the subsystem or carry out a task, a xleaf driver may
> +ask help from root via root calls and/or from other leaves via inter-leaf calls.
ask for help
> +
> +A xleaf can also broadcast events through infrastructure code for other leaves
> +to process. It can also receive event notification from infrastructure about
> +certain events, such as post-creation or pre-exit of a particular xleaf.
> +
> +.. note::
> + See code in ``lib/xleaf/*.c``
> +
> +
> +FPGA Manager Interaction
> +========================
> +
> +fpga_manager
> +------------
> +
> +An instance of fpga_manager is created by xmgmt_main and is used for xclbin
> +image download. fpga_manager requires the full xclbin image before it can
> +start programming the FPGA configuration engine via Internal Configuration
> +Access Port (ICAP) platform driver.
thanks for expanding icap
> +
> +fpga_region
> +-----------
> +
> +For every interface exposed by the currently loaded xclbin/xsabin in the
ok
> +*parent* fpga_region a new instance of fpga_region is created like a *child*
ok
> +fpga_region. The device tree of the *parent* fpga_region defines the
> +resources for a new instance of fpga_bridge which isolates the parent from
> +child fpga_region. This new instance of fpga_bridge will be used when a
ok
> +xclbin image is loaded on the child fpga_region. After the xclbin image is
> +downloaded to the fpga_region, an instance of group is created for the
> +fpga_region using the device tree obtained as part of the xclbin. If this
> +device tree defines any child interfaces then it can trigger the creation of
> +fpga_bridge and fpga_region for the next region in the chain.
an fpga_bridge and an fpga_region
> +
> +fpga_bridge
> +-----------
> +
> +Like the fpga_region, matching fpga_bridge is also created by walking the
ok
> +device tree of the parent group.
> +
> +Driver Interfaces
> +=================
> +
> +xmgmt Driver Ioctls
> +-------------------
> +
> +Ioctls exposed by xmgmt driver to user space are enumerated in the following
> +table:
> +
> +== ===================== ============================ ==========================
> +# Functionality ioctl request code data format
> +== ===================== ============================ ==========================
> +1 FPGA image download XMGMT_IOCICAPDOWNLOAD_AXLF xmgmt_ioc_bitstream_axlf
> +== ===================== ============================ ==========================
> +
> +A user xclbin can be downloaded by using the xbmgmt tool from the XRT open source
> +suite. See example usage below::
ok
> +
> + xbmgmt partition --program --path /lib/firmware/xilinx/862c7020a250293e32036f19956669e5/test/verify.xclbin --force
> +
> +xmgmt Driver Sysfs
> +------------------
> +
> +xmgmt driver exposes a rich set of sysfs interfaces. Subsystem platform
> +drivers export sysfs node for every platform instance.
> +
> +Every partition also exports its UUIDs. See below for examples::
> +
> + /sys/bus/pci/devices/0000:06:00.0/xmgmt_main.0/interface_uuids
> + /sys/bus/pci/devices/0000:06:00.0/xmgmt_main.0/logic_uuids
> +
> +
> +hwmon
> +-----
> +
> +xmgmt driver exposes standard hwmon interface to report voltage, current,
> +temperature, power, etc. These can easily be viewed using *sensors* command
> +line utility.
> +
> +Alveo Platform Overview
> +=======================
> +
> +Alveo platforms are architected as two physical FPGA partitions: *Shell* and
> +*User*. The Shell provides basic infrastructure for the Alveo platform like
> +PCIe connectivity, board management, Dynamic Function Exchange (DFX), sensors,
> +clocking, reset, and security. User partition contains user compiled FPGA
The User partition
contains the user
> +binary which is loaded by a process called DFX also known as partial
> +reconfiguration.
> +
> +For DFX to work properly physical partitions require strict HW compatibility
properly, physical
> +with each other. Every physical partition has two interface UUIDs: *parent* UUID
> +and *child* UUID. For simple single stage platforms, Shell → User forms parent
> +child relationship.
> +
> +.. note::
> + Partition compatibility matching is key design component of Alveo platforms
is a key
> + and XRT. Partitions have child and parent relationship. A loaded partition
> + exposes child partition UUID to advertise its compatibility requirement.When
space needed after '.'
> + loading a child partition the xmgmt management driver matches parent UUID of
> + the child partition against child UUID exported by the parent. Parent and
> + child partition UUIDs are stored in the *xclbin* (for user) or *xsabin* (for
> + shell). Except for root UUID exported by VSEC, hardware itself does not know
> + about UUIDs. UUIDs are stored in xsabin and xclbin. The image format has a
> + special node called Partition UUIDs which define the compatibility UUIDs. See
> + :ref:`partition_uuids`.
> +
This is worded better, thanks.
> +
> +The physical partitions and their loading is illustrated below::
> +
> + SHELL USER
> + +-----------+ +-------------------+
> + | | | |
> + | VSEC UUID | CHILD PARENT | LOGIC UUID |
> + | o------->|<--------o |
> + | | UUID UUID | |
> + +-----+-----+ +--------+----------+
> + | |
> + . .
> + | |
> + +---+---+ +------+--------+
> + | POR | | USER COMPILED |
> + | FLASH | | XCLBIN |
> + +-------+ +---------------+
> +
> +
> +Loading Sequence
> +----------------
> +
> +The Shell partition is loaded from flash at system boot time. It establishes the
> +PCIe link and exposes two physical functions to the BIOS. After the OS boots, xmgmt
the xrt-mgmt
> +driver attaches to the PCIe physical function 0 exposed by the Shell and then looks
> +for VSEC in PCIe extended configuration space. Using VSEC it determines the logic
in the PCIe
Using VSEC, it
> +UUID of Shell and uses the UUID to load matching *xsabin* file from Linux firmware
> +directory. The xsabin file contains metadata to discover peripherals that are part
contains the metadata
the peripherals
> +of Shell and firmware(s) for any embedded soft processors in Shell. The xsabin file
of the Shell and the firmware
can drop '(s)'
> +also contains Partition UUIDs as described here :ref:`partition_uuids`.
> +
> +The Shell exports a child interface UUID which is used for the compatibility check
ok
> +when loading user compiled xclbin over the User partition as part of DFX. When a user
> +requests loading of a specific xclbin the xmgmt management driver reads the parent
xclbin, the xrt-mgmt driver
can drop 'management' since 'mgmt' already means management
> +interface UUID specified in the xclbin and matches it with child interface UUID
> +exported by Shell to determine if xclbin is compatible with the Shell. If match
> +fails loading of xclbin is denied.
If the match fails, loading is denied.
> +
> +xclbin loading is requested using ICAP_DOWNLOAD_AXLF ioctl command. When loading
> +xclbin, xmgmt driver performs the following *logical* operations:
ok
> +
> +1. Copy xclbin from user to kernel memory
> +2. Sanity check the xclbin contents
> +3. Isolate the User partition
> +4. Download the bitstream using the FPGA config engine (ICAP)
> +5. De-isolate the User partition
> +6. Program the clocks (ClockWiz) driving the User partition
> +7. Wait for memory controller (MIG) calibration
for the
> +8. Return the loading status back to the caller
> +
> +`Platform Loading Overview <https://xilinx.github.io/XRT/master/html/platforms_partitions.html>`_
> +provides more detailed information on platform loading.
> +
> +
> +xsabin
> +------
> +
> +Each Alveo platform comes packaged with its own xsabin. The xsabin is a trusted
ok
> +component of the platform. For format details refer to :ref:`xsabin_xclbin_container_format`
> +below. xsabin contains basic information like UUIDs, platform name and metadata in the
> +form of device tree. See :ref:`device_tree_usage` below for details and example.
ok
> +
> +xclbin
> +------
> +
> +xclbin is compiled by end user using
> +`Vitis <https://www.xilinx.com/products/design-tools/vitis/vitis-platform.html>`_
> +tool set from Xilinx. The xclbin contains sections describing user compiled
> +acceleration engines/kernels, memory subsystems, clocking information etc. It also
> +contains FPGA bitstream for the user partition, UUIDs, platform name, etc.
needs to be either
FPGA bitstreams
or
an FPGA bitstream
> +
> +
> +.. _xsabin_xclbin_container_format:
> +
> +xsabin/xclbin Container Format
> +------------------------------
> +
> +xclbin/xsabin is ELF-like binary container format. It is structured as series of
> +sections. There is a file header followed by several section headers which is
> +followed by sections. A section header points to an actual section. There is an
> +optional signature at the end. The format is defined by header file ``xclbin.h``.
ok
> +The following figure illustrates a typical xclbin::
> +
> +
> + +---------------------+
> + | |
> + | HEADER |
> + +---------------------+
> + | SECTION HEADER |
> + | |
> + +---------------------+
> + | ... |
> + | |
> + +---------------------+
> + | SECTION HEADER |
> + | |
> + +---------------------+
> + | SECTION |
> + | |
> + +---------------------+
> + | ... |
> + | |
> + +---------------------+
> + | SECTION |
> + | |
> + +---------------------+
> + | SIGNATURE |
> + | (OPTIONAL) |
> + +---------------------+
> +
> +
> +xclbin/xsabin files can be packaged, un-packaged and inspected using XRT utility
using an XRT utility
> +called **xclbinutil**. xclbinutil is part of XRT open source software stack. The
of the XRT
> +source code for xclbinutil can be found at
> +https://github.com/Xilinx/XRT/tree/master/src/runtime_src/tools/xclbinutil
> +
> +For example to enumerate the contents of a xclbin/xsabin use the *--info* switch
> +as shown below::
> +
> +
> + xclbinutil --info --input /opt/xilinx/firmware/u50/gen3x16-xdma/blp/test/bandwidth.xclbin
> + xclbinutil --info --input /lib/firmware/xilinx/862c7020a250293e32036f19956669e5/partition.xsabin
> +
> +
> +.. _device_tree_usage:
> +
> +Device Tree Usage
> +-----------------
> +
> +As mentioned previously xsabin stores metadata which advertise HW subsystems present
previously, xsabin
> +in a partition. The metadata is stored in device tree format with a well defined schema.
ok
> +XRT management driver uses this information to bind *platform drivers* to the subsystem
> +instantiations. The platform drivers are found in **xrt-lib.ko** kernel module defined
> +later.
> +
> +Logic UUID
> +^^^^^^^^^^
> +A partition is identified uniquely through ``logic_uuid`` property::
> +
> + /dts-v1/;
> + / {
> + logic_uuid = "0123456789abcdef0123456789abcdef";
> + ...
> + }
> +
> +Schema Version
> +^^^^^^^^^^^^^^
> +Schema version is defined through ``schema_version`` node. And it contains ``major``
> +and ``minor`` properties as below::
> +
> + /dts-v1/;
> + / {
> + schema_version {
> + major = <0x01>;
> + minor = <0x00>;
> + };
> + ...
> + }
> +
> +.. _partition_uuids:
> +
> +Partition UUIDs
> +^^^^^^^^^^^^^^^
> +As mentioned earlier, each partition may have parent and child UUIDs. These UUIDs are
> +defined by ``interfaces`` node and ``interface_uuid`` property::
> +
> + /dts-v1/;
> + / {
> + interfaces {
> + @0 {
> + interface_uuid = "0123456789abcdef0123456789abcdef";
> + };
> + @1 {
> + interface_uuid = "fedcba9876543210fedcba9876543210";
> + };
> + ...
> + };
> + ...
> + }
> +
> +
> +Subsystem Instantiations
> +^^^^^^^^^^^^^^^^^^^^^^^^
> +Subsystem instantiations are captured as children of ``addressable_endpoints``
> +node::
> +
> + /dts-v1/;
> + / {
> + addressable_endpoints {
> + abc {
> + ...
> + };
> + def {
> + ...
> + };
> + ...
> + }
> + }
> +
> +Subnode 'abc' and 'def' are the name of subsystem nodes
> +
> +Subsystem Node
> +^^^^^^^^^^^^^^
> +Each subsystem node and its properties define a hardware instance::
> +
> +
> + addressable_endpoints {
> + abc {
> + reg = <0xa 0xb>
> + pcie_physical_function = <0x0>;
> + pcie_bar_mapping = <0x2>;
> + compatible = "abc def";
> + firmware {
> + firmware_product_name = "abc"
> + firmware_branch_name = "def"
> + firmware_version_major = <1>
> + firmware_version_minor = <2>
> + };
> + }
> + ...
> + }
> +
> +:reg:
> + Property defines address range. '<0xa 0xb>' is BAR offset and length pair, both
defines an address
is the BAR
> + are 64-bit integer.
integers
> +:pcie_physical_function:
> + Property specifies which PCIe physical function the subsystem node resides.
> +:pcie_bar_mapping:
> + Property specifies which PCIe BAR the subsystem node resides. '<0x2>' is BAR
> + index and it is 0 if this property is not defined.
index; the BAR index defaults to 0 if this property is not defined.
> +:compatible:
> + Property is a list of strings. The first string in the list specifies the exact
> + subsystem node. The following strings represent other devices that the device
> + is compatible with.
> +:firmware:
> + Subnode defines the firmware required by this subsystem node.
> +
> +Alveo U50 Platform Example
> +^^^^^^^^^^^^^^^^^^^^^^^^^^
> +::
> +
> + /dts-v1/;
> +
> + /{
> + logic_uuid = "f465b0a3ae8c64f619bc150384ace69b";
> +
> + schema_version {
> + major = <0x01>;
> + minor = <0x00>;
> + };
> +
> + interfaces {
> +
> + @0 {
> + interface_uuid = "862c7020a250293e32036f19956669e5";
> + };
> + };
> +
> + addressable_endpoints {
> +
> + ep_blp_rom_00 {
> + reg = <0x00 0x1f04000 0x00 0x1000>;
this is 4 values, not the 2 described for 'reg' above
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> + };
> +
> + ep_card_flash_program_00 {
> + reg = <0x00 0x1f06000 0x00 0x1000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_quad_spi-1.0\0axi_quad_spi";
> + interrupts = <0x03 0x03>;
the 'interrupts' property is not documented above
> + };
> +
> + ep_cmc_firmware_mem_00 {
> + reg = <0x00 0x1e20000 0x00 0x20000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> +
> + firmware {
> + firmware_product_name = "cmc";
> + firmware_branch_name = "u50";
> + firmware_version_major = <0x01>;
> + firmware_version_minor = <0x00>;
> + };
> + };
> +
> + ep_cmc_intc_00 {
> + reg = <0x00 0x1e03000 0x00 0x1000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
> + interrupts = <0x04 0x04>;
> + };
> +
> + ep_cmc_mutex_00 {
> + reg = <0x00 0x1e02000 0x00 0x1000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> + };
> +
> + ep_cmc_regmap_00 {
> + reg = <0x00 0x1e08000 0x00 0x2000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> +
> + firmware {
> + firmware_product_name = "sc-fw";
> + firmware_branch_name = "u50";
> + firmware_version_major = <0x05>;
> + };
> + };
> +
> + ep_cmc_reset_00 {
> + reg = <0x00 0x1e01000 0x00 0x1000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> + };
> +
> + ep_ddr_mem_calib_00 {
> + reg = <0x00 0x63000 0x00 0x1000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> + };
> +
> + ep_debug_bscan_mgmt_00 {
> + reg = <0x00 0x1e90000 0x00 0x10000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-debug_bridge-1.0\0debug_bridge";
> + };
> +
> + ep_ert_base_address_00 {
> + reg = <0x00 0x21000 0x00 0x1000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> + };
> +
> + ep_ert_command_queue_mgmt_00 {
> + reg = <0x00 0x40000 0x00 0x10000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-ert_command_queue-1.0\0ert_command_queue";
> + };
> +
> + ep_ert_command_queue_user_00 {
> + reg = <0x00 0x40000 0x00 0x10000>;
> + pcie_physical_function = <0x01>;
> + compatible = "xilinx.com,reg_abs-ert_command_queue-1.0\0ert_command_queue";
> + };
> +
> + ep_ert_firmware_mem_00 {
> + reg = <0x00 0x30000 0x00 0x8000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> +
> + firmware {
> + firmware_product_name = "ert";
> + firmware_branch_name = "v20";
> + firmware_version_major = <0x01>;
> + };
> + };
> +
> + ep_ert_intc_00 {
> + reg = <0x00 0x23000 0x00 0x1000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_intc-1.0\0axi_intc";
> + interrupts = <0x05 0x05>;
> + };
> +
> + ep_ert_reset_00 {
> + reg = <0x00 0x22000 0x00 0x1000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> + };
> +
> + ep_ert_sched_00 {
> + reg = <0x00 0x50000 0x00 0x1000>;
> + pcie_physical_function = <0x01>;
> + compatible = "xilinx.com,reg_abs-ert_sched-1.0\0ert_sched";
> + interrupts = <0x09 0x0c>;
> + };
> +
> + ep_fpga_configuration_00 {
> + reg = <0x00 0x1e88000 0x00 0x8000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_hwicap-1.0\0axi_hwicap";
> + interrupts = <0x02 0x02>;
> + };
> +
> + ep_icap_reset_00 {
> + reg = <0x00 0x1f07000 0x00 0x1000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> + };
> +
> + ep_msix_00 {
> + reg = <0x00 0x00 0x00 0x20000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-msix-1.0\0msix";
> + pcie_bar_mapping = <0x02>;
> + };
> +
> + ep_pcie_link_mon_00 {
> + reg = <0x00 0x1f05000 0x00 0x1000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> + };
> +
> + ep_pr_isolate_plp_00 {
> + reg = <0x00 0x1f01000 0x00 0x1000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> + };
> +
> + ep_pr_isolate_ulp_00 {
> + reg = <0x00 0x1000 0x00 0x1000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_gpio-1.0\0axi_gpio";
> + };
> +
> + ep_uuid_rom_00 {
> + reg = <0x00 0x64000 0x00 0x1000>;
> + pcie_physical_function = <0x00>;
> + compatible = "xilinx.com,reg_abs-axi_bram_ctrl-1.0\0axi_bram_ctrl";
> + };
> +
> + ep_xdma_00 {
> + reg = <0x00 0x00 0x00 0x10000>;
> + pcie_physical_function = <0x01>;
> + compatible = "xilinx.com,reg_abs-xdma-1.0\0xdma";
> + pcie_bar_mapping = <0x02>;
> + };
> + };
> +
> + }
> +
> +
> +
> +Deployment Models
> +=================
> +
> +Baremetal
> +---------
> +
> +In bare-metal deployments, both MPF and UPF are visible and accessible. xmgmt
ok
xmgmt -> xrt-mgmt
> +driver binds to MPF. xmgmt driver operations are privileged and available to
> +system administrator. The full stack is illustrated below::
> +
> + HOST
> +
> + [XMGMT] [XUSER]
> + | |
> + | |
> + +-----+ +-----+
> + | MPF | | UPF |
> + | | | |
> + | PF0 | | PF1 |
> + +--+--+ +--+--+
> + ......... ^................. ^..........
> + | |
> + | PCIe DEVICE |
> + | |
> + +--+------------------+--+
> + | SHELL |
> + | |
> + +------------------------+
> + | USER |
> + | |
> + | |
> + | |
> + | |
> + +------------------------+
> +
> +
> +
> +Virtualized
> +-----------
> +
> +In virtualized deployments, privileged MPF is assigned to host but unprivileged
An article is needed to precede 'MPF' and 'UPF'; pick either 'a' or 'the'.
Thanks for all the changes.
Tom
> +UPF is assigned to guest VM via PCIe pass-through. xmgmt driver in host binds
> +to MPF. xmgmt driver operations are privileged and only accessible to the MPF.
> +The full stack is illustrated below::
> +
> +
> + .............
> + HOST . VM .
> + . .
> + [XMGMT] . [XUSER] .
> + | . | .
> + | . | .
> + +-----+ . +-----+ .
> + | MPF | . | UPF | .
> + | | . | | .
> + | PF0 | . | PF1 | .
> + +--+--+ . +--+--+ .
> + ......... ^................. ^..........
> + | |
> + | PCIe DEVICE |
> + | |
> + +--+------------------+--+
> + | SHELL |
> + | |
> + +------------------------+
> + | USER |
> + | |
> + | |
> + | |
> + | |
> + +------------------------+
> +
> +
> +
> +
> +
> +Platform Security Considerations
> +================================
> +
> +`Security of Alveo Platform <https://xilinx.github.io/XRT/master/html/security.html>`_
> +discusses the deployment options and security implications in great detail.
Do not reorder function definitions; this makes comparing changes from the previous patchset difficult.
A general issue with returning consistent error codes: there are several cases where fdt_* error codes are not translated.
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> XRT drivers use device tree as metadata format to discover HW subsystems
> behind PCIe BAR. Thus libfdt functions are called for the driver to parse
> device tree blob.
to parse the device
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/metadata.h | 233 ++++++++++++
> drivers/fpga/xrt/metadata/metadata.c | 545 +++++++++++++++++++++++++++
> 2 files changed, 778 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/metadata.h
> create mode 100644 drivers/fpga/xrt/metadata/metadata.c
>
> diff --git a/drivers/fpga/xrt/include/metadata.h b/drivers/fpga/xrt/include/metadata.h
> new file mode 100644
> index 000000000000..479e47960c61
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/metadata.h
> @@ -0,0 +1,233 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#ifndef _XRT_METADATA_H
> +#define _XRT_METADATA_H
> +
> +#include <linux/device.h>
> +#include <linux/vmalloc.h>
> +#include <linux/uuid.h>
> +
> +#define XRT_MD_INVALID_LENGTH (~0UL)
> +
> +/* metadata properties */
> +#define XRT_MD_PROP_BAR_IDX "pcie_bar_mapping"
> +#define XRT_MD_PROP_COMPATIBLE "compatible"
> +#define XRT_MD_PROP_HWICAP "axi_hwicap"
> +#define XRT_MD_PROP_INTERFACE_UUID "interface_uuid"
> +#define XRT_MD_PROP_INTERRUPTS "interrupts"
> +#define XRT_MD_PROP_IO_OFFSET "reg"
> +#define XRT_MD_PROP_LOGIC_UUID "logic_uuid"
> +#define XRT_MD_PROP_PDI_CONFIG "pdi_config_mem"
> +#define XRT_MD_PROP_PF_NUM "pcie_physical_function"
> +#define XRT_MD_PROP_VERSION_MAJOR "firmware_version_major"
> +
> +/* non IP nodes */
> +#define XRT_MD_NODE_ENDPOINTS "addressable_endpoints"
> +#define XRT_MD_NODE_FIRMWARE "firmware"
> +#define XRT_MD_NODE_INTERFACES "interfaces"
> +#define XRT_MD_NODE_PARTITION_INFO "partition_info"
> +
> +/*
> + * IP nodes
> + * AF: AXI Firewall
> + * CMC: Card Management Controller
> + * ERT: Embedded Runtime
> + * EP: End Point
> + * PLP: Provider Reconfigurable Partition
> + * ULP: User Reconfigurable Partition
> + */
> +#define XRT_MD_NODE_ADDR_TRANSLATOR "ep_remap_data_c2h_00"
> +#define XRT_MD_NODE_AF_BLP_CTRL_MGMT "ep_firewall_blp_ctrl_mgmt_00"
> +#define XRT_MD_NODE_AF_BLP_CTRL_USER "ep_firewall_blp_ctrl_user_00"
> +#define XRT_MD_NODE_AF_CTRL_DEBUG "ep_firewall_ctrl_debug_00"
> +#define XRT_MD_NODE_AF_CTRL_MGMT "ep_firewall_ctrl_mgmt_00"
> +#define XRT_MD_NODE_AF_CTRL_USER "ep_firewall_ctrl_user_00"
> +#define XRT_MD_NODE_AF_DATA_C2H "ep_firewall_data_c2h_00"
c2h ? (card-to-host?)
> +#define XRT_MD_NODE_AF_DATA_H2C "ep_firewall_data_h2c_00"
> +#define XRT_MD_NODE_AF_DATA_M2M "ep_firewall_data_m2m_00"
> +#define XRT_MD_NODE_AF_DATA_P2P "ep_firewall_data_p2p_00"
> +#define XRT_MD_NODE_CLKFREQ_HBM "ep_freq_cnt_aclk_hbm_00"
> +#define XRT_MD_NODE_CLKFREQ_K1 "ep_freq_cnt_aclk_kernel_00"
> +#define XRT_MD_NODE_CLKFREQ_K2 "ep_freq_cnt_aclk_kernel_01"
> +#define XRT_MD_NODE_CLK_KERNEL1 "ep_aclk_kernel_00"
> +#define XRT_MD_NODE_CLK_KERNEL2 "ep_aclk_kernel_01"
> +#define XRT_MD_NODE_CLK_KERNEL3 "ep_aclk_hbm_00"
hbm ?
Unusual acronyms like these should be documented.
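For example, the acronym key at the top of this list could be extended along these lines (the c2h/h2c/hbm expansions below are my reading of the names, not confirmed by the patch):

```
 * C2H/H2C: Card-to-Host / Host-to-Card DMA direction
 * M2M: Memory-to-Memory
 * P2P: Peer-to-Peer
 * HBM: High Bandwidth Memory
```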
> +#define XRT_MD_NODE_CLK_SHUTDOWN "ep_aclk_shutdown_00"
> +#define XRT_MD_NODE_CMC_FW_MEM "ep_cmc_firmware_mem_00"
> +#define XRT_MD_NODE_CMC_MUTEX "ep_cmc_mutex_00"
> +#define XRT_MD_NODE_CMC_REG "ep_cmc_regmap_00"
> +#define XRT_MD_NODE_CMC_RESET "ep_cmc_reset_00"
> +#define XRT_MD_NODE_DDR_CALIB "ep_ddr_mem_calib_00"
> +#define XRT_MD_NODE_DDR4_RESET_GATE "ep_ddr_mem_srsr_gate_00"
> +#define XRT_MD_NODE_ERT_BASE "ep_ert_base_address_00"
> +#define XRT_MD_NODE_ERT_CQ_MGMT "ep_ert_command_queue_mgmt_00"
> +#define XRT_MD_NODE_ERT_CQ_USER "ep_ert_command_queue_user_00"
> +#define XRT_MD_NODE_ERT_FW_MEM "ep_ert_firmware_mem_00"
> +#define XRT_MD_NODE_ERT_RESET "ep_ert_reset_00"
> +#define XRT_MD_NODE_ERT_SCHED "ep_ert_sched_00"
> +#define XRT_MD_NODE_FLASH "ep_card_flash_program_00"
> +#define XRT_MD_NODE_FPGA_CONFIG "ep_fpga_configuration_00"
> +#define XRT_MD_NODE_GAPPING "ep_gapping_demand_00"
> +#define XRT_MD_NODE_GATE_PLP "ep_pr_isolate_plp_00"
> +#define XRT_MD_NODE_GATE_ULP "ep_pr_isolate_ulp_00"
> +#define XRT_MD_NODE_KDMA_CTRL "ep_kdma_ctrl_00"
> +#define XRT_MD_NODE_MAILBOX_MGMT "ep_mailbox_mgmt_00"
> +#define XRT_MD_NODE_MAILBOX_USER "ep_mailbox_user_00"
> +#define XRT_MD_NODE_MAILBOX_XRT "ep_mailbox_user_to_ert_00"
> +#define XRT_MD_NODE_MSIX "ep_msix_00"
> +#define XRT_MD_NODE_P2P "ep_p2p_00"
> +#define XRT_MD_NODE_PCIE_MON "ep_pcie_link_mon_00"
> +#define XRT_MD_NODE_PMC_INTR "ep_pmc_intr_00"
> +#define XRT_MD_NODE_PMC_MUX "ep_pmc_mux_00"
> +#define XRT_MD_NODE_QDMA "ep_qdma_00"
> +#define XRT_MD_NODE_QDMA4 "ep_qdma4_00"
> +#define XRT_MD_NODE_REMAP_P2P "ep_remap_p2p_00"
> +#define XRT_MD_NODE_STM "ep_stream_traffic_manager_00"
> +#define XRT_MD_NODE_STM4 "ep_stream_traffic_manager4_00"
> +#define XRT_MD_NODE_SYSMON "ep_cmp_sysmon_00"
> +#define XRT_MD_NODE_XDMA "ep_xdma_00"
> +#define XRT_MD_NODE_XVC_PUB "ep_debug_bscan_user_00"
> +#define XRT_MD_NODE_XVC_PRI "ep_debug_bscan_mgmt_00"
> +#define XRT_MD_NODE_UCS_CONTROL_STATUS "ep_ucs_control_status_00"
> +
> +/* endpoint regmaps */
> +#define XRT_MD_REGMAP_DDR_SRSR "drv_ddr_srsr"
> +#define XRT_MD_REGMAP_CLKFREQ "freq_cnt"
The macro name says CLKFREQ (clock frequency) but the string says "freq_cnt" (frequency counter)?
Is this mismatch intentional?
> +
> +/* driver defined endpoints */
> +#define XRT_MD_NODE_BLP_ROM "drv_ep_blp_rom_00"
> +#define XRT_MD_NODE_DDR_SRSR "drv_ep_ddr_srsr"
> +#define XRT_MD_NODE_FLASH_VSEC "drv_ep_card_flash_program_00"
> +#define XRT_MD_NODE_GOLDEN_VER "drv_ep_golden_ver_00"
> +#define XRT_MD_NODE_MAILBOX_VSEC "drv_ep_mailbox_vsec_00"
> +#define XRT_MD_NODE_MGMT_MAIN "drv_ep_mgmt_main_00"
> +#define XRT_MD_NODE_PLAT_INFO "drv_ep_platform_info_mgmt_00"
> +#define XRT_MD_NODE_PARTITION_INFO_BLP "partition_info_0"
> +#define XRT_MD_NODE_PARTITION_INFO_PLP "partition_info_1"
> +#define XRT_MD_NODE_TEST "drv_ep_test_00"
> +#define XRT_MD_NODE_VSEC "drv_ep_vsec_00"
> +#define XRT_MD_NODE_VSEC_GOLDEN "drv_ep_vsec_golden_00"
> +
> +/* driver defined properties */
> +#define XRT_MD_PROP_OFFSET "drv_offset"
> +#define XRT_MD_PROP_CLK_FREQ "drv_clock_frequency"
> +#define XRT_MD_PROP_CLK_CNT "drv_clock_frequency_counter"
> +#define XRT_MD_PROP_VBNV "vbnv"
> +#define XRT_MD_PROP_VROM "vrom"
> +#define XRT_MD_PROP_PARTITION_LEVEL "partition_level"
> +
> +struct xrt_md_endpoint {
> + const char *ep_name;
> + u32 bar;
> + u64 bar_off;
> + ulong size;
bar_off changed from long to u64.
Should bar and size also be changed to u64?
> + char *regmap;
It seems like this is really a compatibility string and not a regmap.
> + char *regmap_ver;
> +};
> +
> +/* Note: res_id is defined by leaf driver and must start with 0. */
> +struct xrt_iores_map {
> + char *res_name;
> + int res_id;
> +};
> +
> +static inline int xrt_md_res_name2id(const struct xrt_iores_map *res_map,
> + int entry_num, const char *res_name)
> +{
> + int i;
> +
> + for (i = 0; i < entry_num; i++) {
> + if (!strncmp(res_name, res_map->res_name, strlen(res_map->res_name) + 1))
> + return res_map->res_id;
> + res_map++;
> + }
> + return -1;
> +}
> +
> +static inline const char *
> +xrt_md_res_id2name(const struct xrt_iores_map *res_map, int entry_num, int id)
> +{
> + int i;
> +
> + for (i = 0; i < entry_num; i++) {
> + if (res_map->res_id == id)
> + return res_map->res_name;
> + res_map++;
> + }
> + return NULL;
> +}
> +
> +unsigned long xrt_md_size(struct device *dev, const char *blob);
> +int xrt_md_create(struct device *dev, char **blob);
> +char *xrt_md_dup(struct device *dev, const char *blob);
> +int xrt_md_add_endpoint(struct device *dev, char *blob,
> + struct xrt_md_endpoint *ep);
> +int xrt_md_del_endpoint(struct device *dev, char *blob, const char *ep_name,
> + const char *regmap_name);
> +int xrt_md_get_prop(struct device *dev, const char *blob, const char *ep_name,
> + const char *regmap_name, const char *prop,
> + const void **val, int *size);
> +int xrt_md_set_prop(struct device *dev, char *blob, const char *ep_name,
> + const char *regmap_name, const char *prop,
> + const void *val, int size);
> +int xrt_md_copy_endpoint(struct device *dev, char *blob, const char *src_blob,
> + const char *ep_name, const char *regmap_name,
> + const char *new_ep_name);
> +int xrt_md_get_next_endpoint(struct device *dev, const char *blob,
> + const char *ep_name, const char *regmap_name,
> + char **next_ep, char **next_regmap);
> +int xrt_md_get_compatible_endpoint(struct device *dev, const char *blob,
> + const char *regmap_name, const char **ep_name);
> +int xrt_md_find_endpoint(struct device *dev, const char *blob,
> + const char *ep_name, const char *regmap_name,
> + const char **epname);
> +int xrt_md_pack(struct device *dev, char *blob);
> +int xrt_md_get_interface_uuids(struct device *dev, const char *blob,
> + u32 num_uuids, uuid_t *intf_uuids);
> +
> +/*
> + * The firmware provides a 128 bit hash string as a unique id to the
> + * partition/interface.
> + * Existing hw does not yet use the canonical form, so it is necessary to
> + * use a translation function.
> + */
> +static inline void xrt_md_trans_uuid2str(const uuid_t *uuid, char *uuidstr)
> +{
> + int i, p;
> + u8 tmp[UUID_SIZE];
> +
> + BUILD_BUG_ON(UUID_SIZE != 16);
> + export_uuid(tmp, uuid);
ok
> + for (p = 0, i = UUID_SIZE - 1; i >= 0; p++, i--)
> + snprintf(&uuidstr[p * 2], 3, "%02x", tmp[i]);
XMGMT_UUID_STR_LEN is 80.
This logic says it could be reduced to 33.
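To make the 33 concrete: 16 bytes render as 32 hex digits plus a NUL. A standalone sketch mirroring the loop (uuid2str and UUID_STR_LEN are my names for illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define UUID_SIZE 16
/* 32 hex characters plus the NUL terminator */
#define UUID_STR_LEN (UUID_SIZE * 2 + 1)

/* mirrors xrt_md_trans_uuid2str(): bytes rendered in reverse order,
 * two hex digits each; snprintf writes the trailing NUL */
static void uuid2str(const unsigned char tmp[UUID_SIZE],
		     char uuidstr[UUID_STR_LEN])
{
	int i, p;

	for (p = 0, i = UUID_SIZE - 1; i >= 0; p++, i--)
		snprintf(&uuidstr[p * 2], 3, "%02x", tmp[i]);
}
```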
> +}
> +
> +static inline int xrt_md_trans_str2uuid(struct device *dev, const char *uuidstr, uuid_t *p_uuid)
> +{
> + u8 p[UUID_SIZE];
> + const char *str;
> + char tmp[3] = { 0 };
> + int i, ret;
> +
> + BUILD_BUG_ON(UUID_SIZE != 16);
This BUILD_BUG_ON also appears above; no need to repeat it.
> + str = uuidstr + strlen(uuidstr) - 2;
Needs an underflow check: if strlen(uuidstr) < 2, str points before the start of the buffer.
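Something along these lines would guard it; a standalone sketch of just the validation step (str2uuid_check is a hypothetical helper name):

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

#define UUID_SIZE 16

/* reject strings too short (which would make str underflow the buffer),
 * too long, or of odd length before walking backwards two chars at a time */
static int str2uuid_check(const char *uuidstr)
{
	size_t len = strlen(uuidstr);

	if (len < 2 || len > UUID_SIZE * 2 || (len & 1))
		return -EINVAL;
	return 0;
}
```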
> +
> + for (i = 0; i < sizeof(*p_uuid) && str >= uuidstr; i++) {
> + tmp[0] = *str;
> + tmp[1] = *(str + 1);
> + ret = kstrtou8(tmp, 16, &p[i]);
> + if (ret)
> + return -EINVAL;
> + str -= 2;
> + }
> + import_uuid(p_uuid, p);
> +
> + return 0;
> +}
> +
> +#endif
> diff --git a/drivers/fpga/xrt/metadata/metadata.c b/drivers/fpga/xrt/metadata/metadata.c
> new file mode 100644
> index 000000000000..3b2be50fcb02
> --- /dev/null
> +++ b/drivers/fpga/xrt/metadata/metadata.c
> @@ -0,0 +1,545 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA Metadata parse APIs
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#include <linux/libfdt_env.h>
> +#include "libfdt.h"
> +#include "metadata.h"
> +
> +#define MAX_BLOB_SIZE (4096 * 25)
> +#define MAX_DEPTH 5
MAX_BLOB_SIZE is already defined in include/keys/trusted-type.h.
In general, add a prefix to help avoid conflicts, e.g. XRT_MAX_BLOB_SIZE etc.
> +
> +static int xrt_md_setprop(struct device *dev, char *blob, int offset,
> + const char *prop, const void *val, int size)
> +{
> + int ret;
> +
> + ret = fdt_setprop(blob, offset, prop, val, size);
> + if (ret)
> + dev_err(dev, "failed to set prop %d", ret);
> +
> + return ret;
> +}
> +
> +static int xrt_md_add_node(struct device *dev, char *blob, int parent_offset,
> + const char *ep_name)
> +{
> + int ret;
> +
> + ret = fdt_add_subnode(blob, parent_offset, ep_name);
> + if (ret < 0 && ret != -FDT_ERR_EXISTS)
> + dev_err(dev, "failed to add node %s. %d", ep_name, ret);
> +
> + return ret;
> +}
> +
> +static int xrt_md_get_endpoint(struct device *dev, const char *blob,
> + const char *ep_name, const char *regmap_name,
> + int *ep_offset)
> +{
> + const char *name;
> + int offset;
> +
> + for (offset = fdt_next_node(blob, -1, NULL);
> + offset >= 0;
> + offset = fdt_next_node(blob, offset, NULL)) {
> + name = fdt_get_name(blob, offset, NULL);
> + if (!name || strncmp(name, ep_name, strlen(ep_name) + 1))
> + continue;
> + if (!regmap_name ||
regmap_name is invariant across the loop, yet the !regmap_name test runs on every iteration.
This check should be hoisted outside of the loop.
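The shape I have in mind, as a standalone sketch with a hypothetical flattened node table standing in for the fdt_next_node() walk (struct node and find_node are my names, not from the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* hypothetical stand-in for the device tree node list */
struct node {
	const char *name;
	const char *compatible;
};

/* pick the matching strategy once up front instead of re-testing
 * !regmap_name on every iteration */
static int find_node(const struct node *nodes, int n,
		     const char *ep_name, const char *regmap_name)
{
	int i;

	if (!regmap_name) {
		for (i = 0; i < n; i++)
			if (!strcmp(nodes[i].name, ep_name))
				return i;
		return -1;
	}
	for (i = 0; i < n; i++)
		if (!strcmp(nodes[i].name, ep_name) &&
		    nodes[i].compatible &&
		    !strcmp(nodes[i].compatible, regmap_name))
			return i;
	return -1;
}
```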
> + !fdt_node_check_compatible(blob, offset, regmap_name))
> + break;
> + }
> + if (offset < 0)
> + return -ENODEV;
> +
> + *ep_offset = offset;
> +
> + return 0;
> +}
> +
> +static inline int xrt_md_get_node(struct device *dev, const char *blob,
> + const char *name, const char *regmap_name,
> + int *offset)
> +{
> + int ret = 0;
> +
> + if (name) {
> + ret = xrt_md_get_endpoint(dev, blob, name, regmap_name,
> + offset);
> + if (ret) {
> + dev_err(dev, "cannot get node %s, regmap %s, ret = %d",
> + name, regmap_name, ret);
From above, regmap_name can be NULL here, so this %s would print "(null)".
> + return -EINVAL;
> + }
> + } else {
> + ret = fdt_next_node(blob, -1, NULL);
> + if (ret < 0) {
> + dev_err(dev, "internal error, ret = %d", ret);
> + return -EINVAL;
> + }
> + *offset = ret;
> + }
> +
> + return 0;
> +}
> +
> +static int xrt_md_overlay(struct device *dev, char *blob, int target,
> + const char *overlay_blob, int overlay_offset,
> + int depth)
> +{
> + int property, subnode;
> + int ret;
Whitespace: looks like tabs after 'int';
should be consistent with the spaces used elsewhere.
> +
> + if (!blob || !overlay_blob) {
> + dev_err(dev, "blob is NULL");
> + return -EINVAL;
> + }
> +
> + if (depth > MAX_DEPTH) {
ok
> + dev_err(dev, "meta data depth beyond %d", MAX_DEPTH);
> + return -EINVAL;
> + }
> +
> + if (target < 0) {
> + target = fdt_next_node(blob, -1, NULL);
> + if (target < 0) {
> + dev_err(dev, "invalid target");
> + return -EINVAL;
> + }
> + }
> + if (overlay_offset < 0) {
> + overlay_offset = fdt_next_node(overlay_blob, -1, NULL);
> + if (overlay_offset < 0) {
> + dev_err(dev, "invalid overlay");
> + return -EINVAL;
> + }
> + }
> +
> + fdt_for_each_property_offset(property, overlay_blob, overlay_offset) {
> + const char *name;
> + const void *prop;
> + int prop_len;
> +
> + prop = fdt_getprop_by_offset(overlay_blob, property, &name,
> + &prop_len);
> + if (!prop || prop_len >= MAX_BLOB_SIZE || prop_len < 0) {
> + dev_err(dev, "internal error");
> + return -EINVAL;
> + }
> +
> + ret = xrt_md_setprop(dev, blob, target, name, prop,
> + prop_len);
> + if (ret) {
> + dev_err(dev, "setprop failed, ret = %d", ret);
> + return ret;
> + }
> + }
> +
> + fdt_for_each_subnode(subnode, overlay_blob, overlay_offset) {
> + const char *name = fdt_get_name(overlay_blob, subnode, NULL);
> + int nnode;
> +
> + nnode = xrt_md_add_node(dev, blob, target, name);
> + if (nnode == -FDT_ERR_EXISTS)
> + nnode = fdt_subnode_offset(blob, target, name);
> + if (nnode < 0) {
> + dev_err(dev, "add node failed, ret = %d", nnode);
> + return nnode;
nnode is a node offset here, not an error code.
Return -EINVAL or similar instead.
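i.e. something like this at the call site (standalone sketch; normalize_node_offset is a hypothetical helper name):

```c
#include <assert.h>
#include <errno.h>

/* fdt_subnode_offset() returns a node offset on success and a negative
 * -FDT_ERR_* value on failure; don't leak the libfdt value to callers */
static int normalize_node_offset(int nnode)
{
	if (nnode < 0)
		return -EINVAL;
	return nnode;
}
```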
> + }
> +
> + ret = xrt_md_overlay(dev, blob, nnode, overlay_blob, subnode, depth + 1);
> + if (ret)
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +unsigned long xrt_md_size(struct device *dev, const char *blob)
> +{
Review fdt_ro_probe():
fdt_totalsize() is a 32-bit value, so widening it to a (sometimes 64-bit) unsigned long is not necessary.
At most it should be uint32_t.
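For reference, fdt_totalsize() just loads the header's big-endian 32-bit totalsize field, so the result always fits in a u32. A standalone sketch of that load (be32_load is my name for the helper):

```c
#include <assert.h>
#include <stdint.h>

/* load a big-endian 32-bit field, as fdt_totalsize() does for the
 * totalsize member of the fdt header */
static uint32_t be32_load(const unsigned char *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}
```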
> + unsigned long len = (long)fdt_totalsize(blob);
> +
> + if (len > MAX_BLOB_SIZE)
> + return XRT_MD_INVALID_LENGTH;
> +
> + return len;
> +}
> +EXPORT_SYMBOL_GPL(xrt_md_size);
> +
> +int xrt_md_create(struct device *dev, char **blob)
> +{
> + int ret = 0;
> +
> + if (!blob) {
> + dev_err(dev, "blob is NULL");
> + return -EINVAL;
> + }
> +
> + *blob = vzalloc(MAX_BLOB_SIZE);
> + if (!*blob)
> + return -ENOMEM;
> +
> + ret = fdt_create_empty_tree(*blob, MAX_BLOB_SIZE);
> + if (ret) {
> + dev_err(dev, "format blob failed, ret = %d", ret);
> + goto failed;
> + }
> +
> + ret = fdt_next_node(*blob, -1, NULL);
> + if (ret < 0) {
> + dev_err(dev, "No Node, ret = %d", ret);
> + goto failed;
> + }
> +
> + ret = fdt_add_subnode(*blob, 0, XRT_MD_NODE_ENDPOINTS);
> + if (ret < 0) {
fdt error code is returned to the caller; translate it to a standard errno.
> + dev_err(dev, "add node failed, ret = %d", ret);
> + goto failed;
> + }
> +
> + return 0;
> +
> +failed:
> + vfree(*blob);
> + *blob = NULL;
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(xrt_md_create);
> +
> +char *xrt_md_dup(struct device *dev, const char *blob)
> +{
> + char *dup_blob;
> + int ret;
> +
> + ret = xrt_md_create(dev, &dup_blob);
> + if (ret)
> + return NULL;
> + ret = xrt_md_overlay(dev, dup_blob, -1, blob, -1, 0);
> + if (ret) {
> + vfree(dup_blob);
> + return NULL;
> + }
> +
> + return dup_blob;
> +}
> +EXPORT_SYMBOL_GPL(xrt_md_dup);
Wasn't xrt_md_dup going to be replaced by memcpy ?
> +
> +int xrt_md_del_endpoint(struct device *dev, char *blob, const char *ep_name,
> + const char *regmap_name)
> +{
> + int ep_offset;
> + int ret;
> +
> + ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name, &ep_offset);
> + if (ret) {
> + dev_err(dev, "can not find ep %s", ep_name);
> + return -EINVAL;
> + }
> +
> + ret = fdt_del_node(blob, ep_offset);
fdt return code leaks to the caller here as well.
Fix these generally.
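One way to fix them generally is a single translation point; a standalone sketch (xrt_fdt_err is a hypothetical helper name, the FDT_ERR_* values mirror libfdt's fdt.h, and the exact errno mapping chosen here is illustrative):

```c
#include <assert.h>
#include <errno.h>

/* local stand-ins for libfdt's error numbers */
#define FDT_ERR_NOTFOUND	1
#define FDT_ERR_EXISTS		2
#define FDT_ERR_NOSPACE		3

/* translate a negative -FDT_ERR_* return into a standard errno so
 * libfdt codes never reach callers */
static int xrt_fdt_err(int fdt_ret)
{
	if (fdt_ret >= 0)
		return 0;
	switch (-fdt_ret) {
	case FDT_ERR_NOTFOUND:
		return -ENOENT;
	case FDT_ERR_EXISTS:
		return -EEXIST;
	case FDT_ERR_NOSPACE:
		return -ENOMEM;
	default:
		return -EINVAL;
	}
}
```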
> + if (ret)
> + dev_err(dev, "delete node %s failed, ret %d", ep_name, ret);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(xrt_md_del_endpoint);
> +
> +static int __xrt_md_add_endpoint(struct device *dev, char *blob,
> + struct xrt_md_endpoint *ep, int *offset,
> + const char *parent)
> +{
> + int parent_offset = 0;
> + u32 val, count = 0;
> + int ep_offset = 0;
> + u64 io_range[2];
> + char comp[128];
> + int ret = 0;
> +
> + if (!ep->ep_name) {
> + dev_err(dev, "empty name");
> + return -EINVAL;
> + }
> +
> + if (parent) {
> + ret = xrt_md_get_endpoint(dev, blob, parent, NULL, &parent_offset);
> + if (ret) {
> + dev_err(dev, "invalid blob, ret = %d", ret);
> + return -EINVAL;
> + }
> + }
> +
> + ep_offset = xrt_md_add_node(dev, blob, parent_offset, ep->ep_name);
> + if (ep_offset < 0) {
> + dev_err(dev, "add endpoint failed, ret = %d", ret);
> + return -EINVAL;
> + }
> + if (offset)
> + *offset = ep_offset;
> +
> + if (ep->size != 0) {
> + val = cpu_to_be32(ep->bar);
> + ret = xrt_md_setprop(dev, blob, ep_offset, XRT_MD_PROP_BAR_IDX,
> + &val, sizeof(u32));
> + if (ret) {
> + dev_err(dev, "set %s failed, ret %d",
> + XRT_MD_PROP_BAR_IDX, ret);
> + goto failed;
> + }
> + io_range[0] = cpu_to_be64((u64)ep->bar_off);
> + io_range[1] = cpu_to_be64((u64)ep->size);
if ep->bar is an index, then rename the element to 'bar_index'
> + ret = xrt_md_setprop(dev, blob, ep_offset, XRT_MD_PROP_IO_OFFSET,
> + io_range, sizeof(io_range));
> + if (ret) {
> + dev_err(dev, "set %s failed, ret %d",
> + XRT_MD_PROP_IO_OFFSET, ret);
> + goto failed;
> + }
> + }
> +
> + if (ep->regmap) {
> + if (ep->regmap_ver) {
> + count = snprintf(comp, sizeof(comp) - 1,
The -1 should be good enough that the if-check below is not needed
> + "%s-%s", ep->regmap, ep->regmap_ver);
> + count++;
> + }
> + if (count > sizeof(comp)) {
> + ret = -EINVAL;
> + goto failed;
> + }
> +
> + count += snprintf(comp + count, sizeof(comp) - count - 1,
> + "%s", ep->regmap);
What happens when only part of regmap fits in comp (snprintf truncation)?
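A safer shape checks each snprintf return against the remaining space and fails on truncation instead of silently emitting a partial string; standalone sketch (build_compat is a hypothetical helper name):

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* build the NUL-separated compatible strings "<regmap>-<ver>\0<regmap>\0",
 * rejecting any truncation; returns total length incl. final NUL */
static int build_compat(char *comp, size_t sz,
			const char *regmap, const char *ver)
{
	size_t count = 0;
	int rc;

	if (ver) {
		rc = snprintf(comp, sz, "%s-%s", regmap, ver);
		if (rc < 0 || (size_t)rc >= sz)
			return -EINVAL;
		count = rc + 1;	/* keep the NUL separator */
	}
	rc = snprintf(comp + count, sz - count, "%s", regmap);
	if (rc < 0 || (size_t)rc >= sz - count)
		return -EINVAL;
	return (int)(count + rc + 1);
}
```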
> + count++;
> + if (count > sizeof(comp)) {
> + ret = -EINVAL;
> + goto failed;
> + }
> +
> + ret = xrt_md_setprop(dev, blob, ep_offset, XRT_MD_PROP_COMPATIBLE,
> + comp, count);
> + if (ret) {
> + dev_err(dev, "set %s failed, ret %d",
> + XRT_MD_PROP_COMPATIBLE, ret);
> + goto failed;
> + }
> + }
> +
> +failed:
> + if (ret)
> + xrt_md_del_endpoint(dev, blob, ep->ep_name, NULL);
> +
> + return ret;
> +}
> +
> +int xrt_md_add_endpoint(struct device *dev, char *blob,
> + struct xrt_md_endpoint *ep)
> +{
> + return __xrt_md_add_endpoint(dev, blob, ep, NULL, XRT_MD_NODE_ENDPOINTS);
> +}
> +EXPORT_SYMBOL_GPL(xrt_md_add_endpoint);
> +
> +int xrt_md_find_endpoint(struct device *dev, const char *blob,
> + const char *ep_name, const char *regmap_name,
> + const char **epname)
> +{
> + int offset;
> + int ret;
> +
> + ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name,
> + &offset);
> + if (!ret && epname)
Split this condition: if the call failed, check it and return early.
> + *epname = fdt_get_name(blob, offset, NULL);
What happens if fdt_get_name() fails and returns NULL?
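Both points together suggest a shape like this; a standalone sketch with hypothetical stand-ins for xrt_md_get_endpoint()/fdt_get_name() so the control flow can be shown on its own:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* hypothetical stand-ins, names are not from the patch */
static int fake_get_endpoint(int *offset) { *offset = 4; return 0; }
static const char *fake_get_name(int offset)
{
	return offset == 4 ? "ep_test" : NULL;
}

/* return early on failure, then check the name lookup for NULL */
static int find_endpoint(const char **epname)
{
	int offset;
	int ret;

	ret = fake_get_endpoint(&offset);
	if (ret)
		return ret;
	if (epname) {
		*epname = fake_get_name(offset);
		if (!*epname)
			return -EINVAL;
	}
	return 0;
}
```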
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(xrt_md_find_endpoint);
> +
> +int xrt_md_get_prop(struct device *dev, const char *blob, const char *ep_name,
> + const char *regmap_name, const char *prop,
> + const void **val, int *size)
> +{
> + int offset;
> + int ret;
> +
> + if (!val) {
> + dev_err(dev, "val is null");
> + return -EINVAL;
ok
> + }
> +
> + *val = NULL;
> + ret = xrt_md_get_node(dev, blob, ep_name, regmap_name, &offset);
> + if (ret)
> + return ret;
> +
> + *val = fdt_getprop(blob, offset, prop, size);
> + if (!*val) {
> + dev_dbg(dev, "get ep %s, prop %s failed", ep_name, prop);
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(xrt_md_get_prop);
> +
> +int xrt_md_set_prop(struct device *dev, char *blob,
> + const char *ep_name, const char *regmap_name,
> + const char *prop, const void *val, int size)
> +{
> + int offset;
> + int ret;
> +
> + ret = xrt_md_get_node(dev, blob, ep_name, regmap_name, &offset);
> + if (ret)
> + return ret;
> +
> + ret = xrt_md_setprop(dev, blob, offset, prop, val, size);
ok
> + if (ret)
> + dev_err(dev, "set prop %s failed, ret = %d", prop, ret);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(xrt_md_set_prop);
> +
> +int xrt_md_copy_endpoint(struct device *dev, char *blob, const char *src_blob,
> + const char *ep_name, const char *regmap_name,
> + const char *new_ep_name)
> +{
> + const char *newepnm = new_ep_name ? new_ep_name : ep_name;
> + struct xrt_md_endpoint ep = {0};
> + int offset, target;
> + const char *parent;
> + int ret;
> +
> + ret = xrt_md_get_endpoint(dev, src_blob, ep_name, regmap_name,
> + &offset);
> + if (ret)
> + return -EINVAL;
> +
> + ret = xrt_md_get_endpoint(dev, blob, newepnm, regmap_name, &target);
> + if (ret) {
> + ep.ep_name = newepnm;
> + parent = fdt_parent_offset(src_blob, offset) == 0 ? NULL : XRT_MD_NODE_ENDPOINTS;
> + ret = __xrt_md_add_endpoint(dev, blob, &ep, &target, parent);
> + if (ret)
> + return -EINVAL;
> + }
> +
> + ret = xrt_md_overlay(dev, blob, target, src_blob, offset, 0);
> + if (ret)
> + dev_err(dev, "overlay failed, ret = %d", ret);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(xrt_md_copy_endpoint);
> +
> +int xrt_md_get_next_endpoint(struct device *dev, const char *blob,
> + const char *ep_name, const char *regmap_name,
> + char **next_ep, char **next_regmap)
> +{
> + int offset, ret;
> +
> + *next_ep = NULL;
> + *next_regmap = NULL;
> + if (!ep_name) {
> + ret = xrt_md_get_endpoint(dev, blob, XRT_MD_NODE_ENDPOINTS, NULL,
> + &offset);
> + } else {
> + ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name,
> + &offset);
> + }
> +
> + if (ret)
> + return -EINVAL;
> +
> + offset = ep_name ? fdt_next_subnode(blob, offset) :
> + fdt_first_subnode(blob, offset);
A ternary wrapping two function calls is harder to follow; convert this to if-else logic.
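i.e. one call per branch; a standalone sketch with hypothetical stand-ins for fdt_first_subnode()/fdt_next_subnode():

```c
#include <assert.h>
#include <stddef.h>

/* hypothetical stand-ins so the rewrite can be shown on its own */
static int fake_first_subnode(int offset) { return offset + 100; }
static int fake_next_subnode(int offset)  { return offset + 1; }

/* the suggested if-else form of the ternary */
static int advance(const char *ep_name, int offset)
{
	if (ep_name)
		return fake_next_subnode(offset);
	return fake_first_subnode(offset);
}
```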
> + if (offset < 0)
> + return -EINVAL;
> +
> + *next_ep = (char *)fdt_get_name(blob, offset, NULL);
> + *next_regmap = (char *)fdt_stringlist_get(blob, offset, XRT_MD_PROP_COMPATIBLE,
> + 0, NULL);
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(xrt_md_get_next_endpoint);
> +
> +int xrt_md_get_compatible_endpoint(struct device *dev, const char *blob,
> + const char *regmap_name, const char **ep_name)
> +{
> + int ep_offset;
> +
> + ep_offset = fdt_node_offset_by_compatible(blob, -1, regmap_name);
> + if (ep_offset < 0) {
> + *ep_name = NULL;
> + return -ENOENT;
> + }
> +
> + *ep_name = fdt_get_name(blob, ep_offset, NULL);
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(xrt_md_get_compatible_endpoint);
> +
> +int xrt_md_pack(struct device *dev, char *blob)
> +{
> + int ret;
> +
> + ret = fdt_pack(blob);
> + if (ret)
> + dev_err(dev, "pack failed %d", ret);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(xrt_md_pack);
> +
> +int xrt_md_get_interface_uuids(struct device *dev, const char *blob,
ok
> + u32 num_uuids, uuid_t *interface_uuids)
> +{
> + int offset, count = 0;
> + const char *uuid_str;
> + int ret;
> +
> + ret = xrt_md_get_endpoint(dev, blob, XRT_MD_NODE_INTERFACES, NULL, &offset);
> + if (ret)
> + return -ENOENT;
> +
> + for (offset = fdt_first_subnode(blob, offset);
> + offset >= 0;
> + offset = fdt_next_subnode(blob, offset), count++) {
> + uuid_str = fdt_getprop(blob, offset, XRT_MD_PROP_INTERFACE_UUID,
> + NULL);
> + if (!uuid_str) {
> + dev_err(dev, "empty interface uuid node");
> + return -EINVAL;
> + }
> +
> + if (!num_uuids)
> + continue;
> +
> + if (count == num_uuids) {
ok
> + dev_err(dev, "too many interface uuid in blob");
> + return -EINVAL;
> + }
> +
> + if (interface_uuids && count < num_uuids) {
> + ret = xrt_md_trans_str2uuid(dev, uuid_str,
> + &interface_uuids[count]);
> + if (ret)
> + return -EINVAL;
> + }
> + }
> + if (!count)
> + count = -ENOENT;
> +
> + return count;
> +}
> +EXPORT_SYMBOL_GPL(xrt_md_get_interface_uuids);
Thanks for the changes,
Tom
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> Alveo FPGA firmware and partial reconfigure file are in xclbin format. This
> code enumerates and extracts sections from xclbin files. xclbin.h is cross
> platform and used across all platforms and OS.
ok
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/xclbin-helper.h | 48 +++
> drivers/fpga/xrt/lib/xclbin.c | 369 ++++++++++++++++++++
> include/uapi/linux/xrt/xclbin.h | 409 +++++++++++++++++++++++
> 3 files changed, 826 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xclbin-helper.h
> create mode 100644 drivers/fpga/xrt/lib/xclbin.c
> create mode 100644 include/uapi/linux/xrt/xclbin.h
>
> diff --git a/drivers/fpga/xrt/include/xclbin-helper.h b/drivers/fpga/xrt/include/xclbin-helper.h
> new file mode 100644
> index 000000000000..382b1de97b0a
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xclbin-helper.h
> @@ -0,0 +1,48 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * David Zhang <[email protected]>
> + * Sonal Santan <[email protected]>
> + */
> +
> +#ifndef _XCLBIN_HELPER_H_
> +#define _XCLBIN_HELPER_H_
ok
> +
> +#include <linux/types.h>
> +#include <linux/device.h>
> +#include <linux/xrt/xclbin.h>
> +
> +#define XCLBIN_VERSION2 "xclbin2"
> +#define XCLBIN_HWICAP_BITFILE_BUF_SZ 1024
> +#define XCLBIN_MAX_SIZE (1024 * 1024 * 1024) /* Assuming xclbin <= 1G, always */
ok
> +
> +enum axlf_section_kind;
> +struct axlf;
> +
> +/**
> + * Bitstream header information as defined by Xilinx tools.
> + * Please note that this struct definition is not owned by the driver.
> + */
> +struct xclbin_bit_head_info {
> + u32 header_length; /* Length of header in 32 bit words */
> + u32 bitstream_length; /* Length of bitstream to read in bytes */
> + const unchar *design_name; /* Design name get from bitstream */
> + const unchar *part_name; /* Part name read from bitstream */
> + const unchar *date; /* Date read from bitstream header */
> + const unchar *time; /* Bitstream creation time */
> + u32 magic_length; /* Length of the magic numbers */
> + const unchar *version; /* Version string */
> +};
> +
ok, bit removed.
> +/* caller must free the allocated memory for **data. len could be NULL. */
> +int xrt_xclbin_get_section(struct device *dev, const struct axlf *xclbin,
> + enum axlf_section_kind kind, void **data,
> + uint64_t *len);
Need to add a comment that the user must free *data, and that len is optional.
> +int xrt_xclbin_get_metadata(struct device *dev, const struct axlf *xclbin, char **dtb);
> +int xrt_xclbin_parse_bitstream_header(struct device *dev, const unchar *data,
> + u32 size, struct xclbin_bit_head_info *head_info);
> +const char *xrt_clock_type2epname(enum XCLBIN_CLOCK_TYPE type);
ok
> +
> +#endif /* _XCLBIN_HELPER_H_ */
> diff --git a/drivers/fpga/xrt/lib/xclbin.c b/drivers/fpga/xrt/lib/xclbin.c
> new file mode 100644
> index 000000000000..31b363c014a3
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xclbin.c
> @@ -0,0 +1,369 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA Driver XCLBIN parser
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors: David Zhang <[email protected]>
> + */
> +
> +#include <asm/errno.h>
> +#include <linux/vmalloc.h>
> +#include <linux/device.h>
> +#include "xclbin-helper.h"
> +#include "metadata.h"
> +
> +/* Used for parsing bitstream header */
> +#define BITSTREAM_EVEN_MAGIC_BYTE 0x0f
> +#define BITSTREAM_ODD_MAGIC_BYTE 0xf0
ok
> +
> +static int xrt_xclbin_get_section_hdr(const struct axlf *xclbin,
> + enum axlf_section_kind kind,
> + const struct axlf_section_header **header)
> +{
> + const struct axlf_section_header *phead = NULL;
> + u64 xclbin_len;
> + int i;
> +
> + *header = NULL;
> + for (i = 0; i < xclbin->header.num_sections; i++) {
> + if (xclbin->sections[i].section_kind == kind) {
> + phead = &xclbin->sections[i];
> + break;
> + }
> + }
> +
> + if (!phead)
> + return -ENOENT;
> +
> + xclbin_len = xclbin->header.length;
> + if (xclbin_len > XCLBIN_MAX_SIZE ||
> + phead->section_offset + phead->section_size > xclbin_len)
> + return -EINVAL;
> +
> + *header = phead;
> + return 0;
> +}
> +
> +static int xrt_xclbin_section_info(const struct axlf *xclbin,
> + enum axlf_section_kind kind,
> + u64 *offset, u64 *size)
> +{
> + const struct axlf_section_header *mem_header = NULL;
> + int rc;
> +
> + rc = xrt_xclbin_get_section_hdr(xclbin, kind, &mem_header);
> + if (rc)
> + return rc;
> +
> + *offset = mem_header->section_offset;
> + *size = mem_header->section_size;
ok
> +
> + return 0;
> +}
> +
> +/* caller must free the allocated memory for **data */
> +int xrt_xclbin_get_section(struct device *dev,
> + const struct axlf *buf,
> + enum axlf_section_kind kind,
> + void **data, u64 *len)
> +{
> + const struct axlf *xclbin = (const struct axlf *)buf;
> + void *section = NULL;
> + u64 offset = 0;
> + u64 size = 0;
> + int err = 0;
> +
> + if (!data) {
ok
> + dev_err(dev, "invalid data pointer");
> + return -EINVAL;
> + }
> +
> + err = xrt_xclbin_section_info(xclbin, kind, &offset, &size);
> + if (err) {
> + dev_dbg(dev, "parsing section failed. kind %d, err = %d", kind, err);
> + return err;
> + }
> +
> + section = vzalloc(size);
> + if (!section)
> + return -ENOMEM;
> +
> + memcpy(section, ((const char *)xclbin) + offset, size);
> +
> + *data = section;
> + if (len)
> + *len = size;
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(xrt_xclbin_get_section);
> +
> +static inline int xclbin_bit_get_string(const unchar *data, u32 size,
> + u32 offset, unchar prefix,
> + const unchar **str)
> +{
> + int len;
> + u32 tmp;
> +
> + /* prefix and length will be 3 bytes */
> + if (offset + 3 > size)
> + return -EINVAL;
> +
> + /* Read prefix */
> + tmp = data[offset++];
> + if (tmp != prefix)
> + return -EINVAL;
> +
> + /* Get string length */
> + len = data[offset++];
> + len = (len << 8) | data[offset++];
> +
> + if (offset + len > size)
> + return -EINVAL;
> +
> + if (data[offset + len - 1] != '\0')
> + return -EINVAL;
> +
> + *str = data + offset;
> +
> + return len + 3;
> +}
> +
> +/* parse bitstream header */
> +int xrt_xclbin_parse_bitstream_header(struct device *dev, const unchar *data,
> + u32 size, struct xclbin_bit_head_info *head_info)
> +{
> + u32 offset = 0;
> + int len, i;
> + u16 magic;
> +
> + memset(head_info, 0, sizeof(*head_info));
> +
> + /* Get "Magic" length */
> + if (size < sizeof(u16)) {
> + dev_err(dev, "invalid size");
> + return -EINVAL;
> + }
ok
> +
> + len = data[offset++];
> + len = (len << 8) | data[offset++];
> +
> + if (offset + len > size) {
> + dev_err(dev, "invalid magic len");
> + return -EINVAL;
> + }
> + head_info->magic_length = len;
> +
> + for (i = 0; i < head_info->magic_length - 1; i++) {
> + magic = data[offset++];
> + if (!(i % 2) && magic != BITSTREAM_EVEN_MAGIC_BYTE) {
> + dev_err(dev, "invalid magic even byte at %d", offset);
> + return -EINVAL;
> + }
> +
> + if ((i % 2) && magic != BITSTREAM_ODD_MAGIC_BYTE) {
> + dev_err(dev, "invalid magic odd byte at %d", offset);
> + return -EINVAL;
> + }
> + }
> +
> + if (offset + 3 > size) {
> + dev_err(dev, "invalid length of magic end");
> + return -EINVAL;
> + }
> + /* Read null end of magic data. */
> + if (data[offset++]) {
> + dev_err(dev, "invalid magic end");
> + return -EINVAL;
> + }
> +
> + /* Read 0x01 (short) */
> + magic = data[offset++];
> + magic = (magic << 8) | data[offset++];
> +
> + /* Check the "0x01" half word */
> + if (magic != 0x01) {
> + dev_err(dev, "invalid magic end");
> + return -EINVAL;
> + }
> +
> + len = xclbin_bit_get_string(data, size, offset, 'a', &head_info->design_name);
> + if (len < 0) {
> + dev_err(dev, "get design name failed");
> + return -EINVAL;
> + }
> +
> + head_info->version = strstr(head_info->design_name, "Version=") + strlen("Version=");
> + offset += len;
> +
> + len = xclbin_bit_get_string(data, size, offset, 'b', &head_info->part_name);
> + if (len < 0) {
> + dev_err(dev, "get part name failed");
> + return -EINVAL;
> + }
> + offset += len;
> +
> + len = xclbin_bit_get_string(data, size, offset, 'c', &head_info->date);
> + if (len < 0) {
> + dev_err(dev, "get data failed");
> + return -EINVAL;
> + }
> + offset += len;
> +
> + len = xclbin_bit_get_string(data, size, offset, 'd', &head_info->time);
> + if (len < 0) {
> + dev_err(dev, "get time failed");
> + return -EINVAL;
> + }
> + offset += len;
> +
> + if (offset + 5 >= size) {
> + dev_err(dev, "can not get bitstream length");
> + return -EINVAL;
> + }
> +
> + /* Read 'e' */
> + if (data[offset++] != 'e') {
> + dev_err(dev, "invalid prefix of bitstream length");
> + return -EINVAL;
> + }
> +
> + /* Get byte length of bitstream */
> + head_info->bitstream_length = data[offset++];
> + head_info->bitstream_length = (head_info->bitstream_length << 8) | data[offset++];
> + head_info->bitstream_length = (head_info->bitstream_length << 8) | data[offset++];
> + head_info->bitstream_length = (head_info->bitstream_length << 8) | data[offset++];
OK
> +
> + head_info->header_length = offset;
ok
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(xrt_xclbin_parse_bitstream_header);
ok, removed xrt_xclbin_free_header
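xclbin_bit_get_string() itself is not part of this hunk, so for anyone following along, here is a minimal userspace sketch of the keyed-field layout it appears to parse ('a' = design name, 'b' = part name, 'c' = date, 'd' = time: a key byte, a big-endian u16 length, then a NUL-terminated payload). The helper name and exact return convention are my assumptions, not the patch's code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Sketch: parse one keyed field of a .bit-style header.  Returns total
 * bytes consumed (key + 2-byte BE length + payload), or -1 on a short
 * buffer, wrong key, or a payload that is not NUL-terminated. */
static int bit_get_keyed_string(const uint8_t *data, size_t size,
                                size_t offset, char key, const char **str)
{
	uint16_t len;

	if (offset + 3 > size || data[offset] != (uint8_t)key)
		return -1;
	offset++;
	len = (uint16_t)((data[offset] << 8) | data[offset + 1]); /* big-endian */
	offset += 2;
	if (len == 0 || offset + len > size || data[offset + len - 1] != '\0')
		return -1;
	*str = (const char *)&data[offset];
	return 1 + 2 + (int)len;
}
```

The caller advances `offset` by the returned length and then expects the next key ('e' carries the 4-byte big-endian bitstream length), matching the flow in the function above.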
> +
> +struct xrt_clock_desc {
> + char *clock_ep_name;
> + u32 clock_xclbin_type;
> + char *clkfreq_ep_name;
> +} clock_desc[] = {
> + {
> + .clock_ep_name = XRT_MD_NODE_CLK_KERNEL1,
> + .clock_xclbin_type = CT_DATA,
> + .clkfreq_ep_name = XRT_MD_NODE_CLKFREQ_K1,
> + },
> + {
> + .clock_ep_name = XRT_MD_NODE_CLK_KERNEL2,
> + .clock_xclbin_type = CT_KERNEL,
> + .clkfreq_ep_name = XRT_MD_NODE_CLKFREQ_K2,
> + },
> + {
> + .clock_ep_name = XRT_MD_NODE_CLK_KERNEL3,
> + .clock_xclbin_type = CT_SYSTEM,
> + .clkfreq_ep_name = XRT_MD_NODE_CLKFREQ_HBM,
> + },
> +};
> +
> +const char *xrt_clock_type2epname(enum XCLBIN_CLOCK_TYPE type)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(clock_desc); i++) {
> + if (clock_desc[i].clock_xclbin_type == type)
> + return clock_desc[i].clock_ep_name;
> + }
> + return NULL;
> +}
> +EXPORT_SYMBOL_GPL(xrt_clock_type2epname);
> +
> +static const char *clock_type2clkfreq_name(enum XCLBIN_CLOCK_TYPE type)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(clock_desc); i++) {
> + if (clock_desc[i].clock_xclbin_type == type)
> + return clock_desc[i].clkfreq_ep_name;
> + }
> + return NULL;
> +}
> +
> +static int xrt_xclbin_add_clock_metadata(struct device *dev,
> + const struct axlf *xclbin,
> + char *dtb)
> +{
> + struct clock_freq_topology *clock_topo;
> + u16 freq;
> + int rc;
> + int i;
> +
> + /* if clock section does not exist, add nothing and return success */
ok
> + rc = xrt_xclbin_get_section(dev, xclbin, CLOCK_FREQ_TOPOLOGY,
> + (void **)&clock_topo, NULL);
> + if (rc == -ENOENT)
> + return 0;
> + else if (rc)
> + return rc;
> +
> + for (i = 0; i < clock_topo->count; i++) {
> + u8 type = clock_topo->clock_freq[i].type;
> + const char *ep_name = xrt_clock_type2epname(type);
> + const char *counter_name = clock_type2clkfreq_name(type);
> +
> + if (!ep_name || !counter_name)
> + continue;
> +
> + freq = cpu_to_be16(clock_topo->clock_freq[i].freq_MHZ);
> + rc = xrt_md_set_prop(dev, dtb, ep_name, NULL, XRT_MD_PROP_CLK_FREQ,
> + &freq, sizeof(freq));
> + if (rc)
> + break;
> +
> + rc = xrt_md_set_prop(dev, dtb, ep_name, NULL, XRT_MD_PROP_CLK_CNT,
> + counter_name, strlen(counter_name) + 1);
> + if (rc)
> + break;
> + }
> +
> + vfree(clock_topo);
> +
> + return rc;
> +}
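For reference, the cpu_to_be16() above is needed because device-tree property values are stored big-endian regardless of host endianness; a userspace sketch of the same byte order (helper name is mine):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch: serialize a u16 in big-endian (device-tree) byte order,
 * the same transform cpu_to_be16() performs in the kernel. */
static void u16_to_be_bytes(uint16_t v, uint8_t out[2])
{
	out[0] = (uint8_t)(v >> 8);   /* most significant byte first */
	out[1] = (uint8_t)(v & 0xff);
}
```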
> +
> +int xrt_xclbin_get_metadata(struct device *dev, const struct axlf *xclbin, char **dtb)
> +{
> + char *md = NULL, *newmd = NULL;
> + u64 len, md_len;
> + int rc;
> +
> + *dtb = NULL;
ok
> +
> + rc = xrt_xclbin_get_section(dev, xclbin, PARTITION_METADATA, (void **)&md, &len);
> + if (rc)
> + goto done;
> +
> + md_len = xrt_md_size(dev, md);
> +
> + /* Sanity check the dtb section. */
> + if (md_len > len) {
> + rc = -EINVAL;
> + goto done;
> + }
> +
> + /* use dup function here to convert incoming metadata to writable */
> + newmd = xrt_md_dup(dev, md);
> + if (!newmd) {
> + rc = -ENOMEM;
> + goto done;
> + }
> +
> + /* Convert various needed xclbin sections into dtb. */
> + rc = xrt_xclbin_add_clock_metadata(dev, xclbin, newmd);
> +
> + if (!rc)
> + *dtb = newmd;
> + else
> + vfree(newmd);
ok
> +done:
> + vfree(md);
> + return rc;
> +}
> +EXPORT_SYMBOL_GPL(xrt_xclbin_get_metadata);
> diff --git a/include/uapi/linux/xrt/xclbin.h b/include/uapi/linux/xrt/xclbin.h
> new file mode 100644
> index 000000000000..baa14d6653ab
> --- /dev/null
> +++ b/include/uapi/linux/xrt/xclbin.h
> @@ -0,0 +1,409 @@
> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> +/*
> + * Xilinx FPGA compiled binary container format
> + *
> + * Copyright (C) 2015-2021, Xilinx Inc
> + */
> +
> +#ifndef _XCLBIN_H_
> +#define _XCLBIN_H_
ok, removed _WIN32_
> +
> +#if defined(__KERNEL__)
> +
> +#include <linux/types.h>
ok, removed uuid.h and version.h
> +
> +#elif defined(__cplusplus)
> +
> +#include <cstdlib>
> +#include <cstdint>
> +#include <algorithm>
> +#include <uuid/uuid.h>
> +
> +#else
> +
> +#include <stdlib.h>
> +#include <stdint.h>
> +#include <uuid/uuid.h>
> +
> +#endif
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/**
> + * DOC: Container format for Xilinx FPGA images
> + * The container stores bitstreams, metadata and firmware images.
> + * xclbin/xsabin is an ELF-like binary container format. It is a structured
ok
> + * series of sections. There is a file header followed by several section
> + * headers, which are followed by the sections. Each section header points to an
> + * actual section. There is an optional signature at the end. The
> + * following figure illustrates a typical xclbin:
> + *
> + * +---------------------+
> + * | |
> + * | HEADER |
> + * +---------------------+
> + * | SECTION HEADER |
> + * | |
> + * +---------------------+
> + * | ... |
> + * | |
> + * +---------------------+
> + * | SECTION HEADER |
> + * | |
> + * +---------------------+
> + * | SECTION |
> + * | |
> + * +---------------------+
> + * | ... |
> + * | |
> + * +---------------------+
> + * | SECTION |
> + * | |
> + * +---------------------+
> + * | SIGNATURE |
> + * | (OPTIONAL) |
> + * +---------------------+
ok on the tabs to spaces
> + */
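To make the layout above concrete, here is a hedged userspace sketch of walking the section-header table; the struct is a cut-down stand-in for axlf_section_header (field subset only, not the packed on-disk layout):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for struct axlf_section_header: just the fields a lookup
 * needs.  The real struct is __packed and has a name field too. */
struct sec_hdr {
	uint32_t kind;    /* axlf_section_kind value */
	uint64_t offset;  /* file offset of section payload */
	uint64_t size;    /* payload size in bytes */
};

/* Return the index of the first header matching `kind`, or -1. */
static int find_section(const struct sec_hdr *hdrs, uint32_t num, uint32_t kind)
{
	uint32_t i;

	for (i = 0; i < num; i++) {
		if (hdrs[i].kind == kind)
			return (int)i;
	}
	return -1;
}
```

A real consumer would read num_sections from the inline axlf_header and then index `sections[]` at the end of struct axlf the same way.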
> +
> +enum XCLBIN_MODE {
> + XCLBIN_FLAT = 0,
ok
> + XCLBIN_PR,
> + XCLBIN_TANDEM_STAGE2,
> + XCLBIN_TANDEM_STAGE2_WITH_PR,
> + XCLBIN_HW_EMU,
> + XCLBIN_SW_EMU,
> + XCLBIN_MODE_MAX
> +};
> +
> +enum axlf_section_kind {
> + BITSTREAM = 0,
> + CLEARING_BITSTREAM,
> + EMBEDDED_METADATA,
> + FIRMWARE,
> + DEBUG_DATA,
> + SCHED_FIRMWARE,
> + MEM_TOPOLOGY,
> + CONNECTIVITY,
> + IP_LAYOUT,
> + DEBUG_IP_LAYOUT,
> + DESIGN_CHECK_POINT,
> + CLOCK_FREQ_TOPOLOGY,
> + MCS,
> + BMC,
> + BUILD_METADATA,
> + KEYVALUE_METADATA,
> + USER_METADATA,
> + DNA_CERTIFICATE,
> + PDI,
> + BITSTREAM_PARTIAL_PDI,
> + PARTITION_METADATA,
> + EMULATION_DATA,
> + SYSTEM_METADATA,
> + SOFT_KERNEL,
> + ASK_FLASH,
> + AIE_METADATA,
> + ASK_GROUP_TOPOLOGY,
> + ASK_GROUP_CONNECTIVITY
> +};
> +
> +enum MEM_TYPE {
> + MEM_DDR3 = 0,
> + MEM_DDR4,
> + MEM_DRAM,
> + MEM_STREAMING,
> + MEM_PREALLOCATED_GLOB,
> + MEM_ARE,
> + MEM_HBM,
> + MEM_BRAM,
> + MEM_URAM,
> + MEM_STREAMING_CONNECTION
> +};
> +
> +enum IP_TYPE {
> + IP_MB = 0,
> + IP_KERNEL,
> + IP_DNASC,
> + IP_DDR4_CONTROLLER,
> + IP_MEM_DDR4,
> + IP_MEM_HBM
> +};
> +
> +struct axlf_section_header {
> + uint32_t section_kind; /* Section type */
> + char section_name[16]; /* Examples: "stage2", "clear1", */
> + /* "clear2", "ocl1", "ocl2, */
> + /* "ublaze", "sched" */
> + char rsvd[4];
> + uint64_t section_offset; /* File offset of section data */
> + uint64_t section_size; /* Size of section data */
> +} __packed;
> +
> +struct axlf_header {
> + uint64_t length; /* Total size of the xclbin file */
> + uint64_t time_stamp; /* Number of seconds since epoch */
> + /* when xclbin was created */
> + uint64_t feature_rom_timestamp; /* TimeSinceEpoch of the featureRom */
> + uint16_t version_patch; /* Patch Version */
> + uint8_t version_major; /* Major Version - Version: 2.1.0*/
ok, version checked
whitespace: needs '2.1.0 */'
I see this is a general problem; look at other places.
Maybe it is a tab and the diff is messing it up; convert tabs to spaces.
> + uint8_t version_minor; /* Minor Version */
> + uint32_t mode; /* XCLBIN_MODE */
> + union {
> + struct {
> + uint64_t platform_id; /* 64 bit platform ID: */
> + /* vendor-device-subvendor-subdev */
> + uint64_t feature_id; /* 64 bit feature id */
> + } rom;
> + unsigned char rom_uuid[16]; /* feature ROM UUID for which */
> + /* this xclbin was generated */
> + };
> + unsigned char platform_vbnv[64]; /* e.g. */
> + /* xilinx:xil-accel-rd-ku115:4ddr-xpr:3.4: null terminated */
> + union {
> + char next_axlf[16]; /* Name of next xclbin file */
> + /* in the daisy chain */
> + unsigned char uuid[16]; /* uuid of this xclbin*/
ok
whitespace: comment needs a ' ' before */
> + };
> + char debug_bin[16]; /* Name of binary with debug */
> + /* information */
> + uint32_t num_sections; /* Number of section headers */
> + char rsvd[4];
> +} __packed;
> +
> +struct axlf {
> + char magic[8]; /* Should be "xclbin2\0" */
> + int32_t signature_length; /* Length of the signature. */
> + /* -1 indicates no signature */
> + unsigned char reserved[28]; /* Note: Initialized to 0xFFs */
> +
> + unsigned char key_block[256]; /* Signature for validation */
> + /* of binary */
> + uint64_t unique_id; /* axlf's uniqueId, use it to */
> + /* skip redownload etc */
> + struct axlf_header header; /* Inline header */
> + struct axlf_section_header sections[1]; /* One or more section */
> + /* headers follow */
> +} __packed;
ok, thanks!
> +
> +/* bitstream information */
> +struct xlnx_bitstream {
> + uint8_t freq[8];
> + char bits[1];
> +} __packed;
> +
> +/**** MEMORY TOPOLOGY SECTION ****/
> +struct mem_data {
> + uint8_t type; /* enum corresponding to mem_type. */
> + uint8_t used; /* if 0 this bank is not present */
> + uint8_t rsvd[6];
> + union {
> + uint64_t size; /* if mem_type DDR, then size in KB; */
> + uint64_t route_id; /* if streaming then "route_id" */
> + };
> + union {
> + uint64_t base_address;/* if DDR then the base address; */
> + uint64_t flow_id; /* if streaming then "flow id" */
> + };
> + unsigned char tag[16]; /* DDR: BANK0,1,2,3, has to be null */
> + /* terminated; if streaming then stream0, 1 etc */
> +} __packed;
> +
> +struct mem_topology {
> + int32_t count; /* Number of mem_data */
> + struct mem_data mem_data[1]; /* Should be sorted on mem_type */
> +} __packed;
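A hedged sketch of how a consumer might total the capacity of the used banks; mem_entry is a stand-in for mem_data with only the fields the loop needs, and per the comment above `size` is in KB for DDR-type banks:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for struct mem_data (field subset). */
struct mem_entry {
	uint8_t used;      /* 0 means the bank is not present */
	uint64_t size_kb;  /* capacity in KB for DDR-type banks */
};

/* Sum capacity across banks marked used. */
static uint64_t total_used_kb(const struct mem_entry *m, int count)
{
	uint64_t total = 0;
	int i;

	for (i = 0; i < count; i++) {
		if (m[i].used)
			total += m[i].size_kb;
	}
	return total;
}
```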
> +
> +/**** CONNECTIVITY SECTION ****/
> +/* Connectivity of each argument of CU(Compute Unit). It will be in terms
ok
> + * of argument index associated. For associating CU instances with arguments
> + * and banks, start at the connectivity section. Using the ip_layout_index
> + * access the ip_data.name. Now we can associate this CU instance with its
> + * original CU name and get the connectivity as well. This enables us to form
> + * related groups of CU instances.
> + */
> +
> +struct connection {
> + int32_t arg_index; /* From 0 to n, may not be contiguous as scalars */
> + /* skipped */
> + int32_t ip_layout_index; /* index into the ip_layout section. */
> + /* ip_layout.ip_data[index].type == IP_KERNEL */
> + int32_t mem_data_index; /* index into the mem_data array; it is */
> + /* an error if that entry's used flag is 0 */
> +} __packed;
> +
> +struct connectivity {
> + int32_t count;
> + struct connection connection[1];
> +} __packed;
> +
> +/**** IP_LAYOUT SECTION ****/
> +
> +/* IP Kernel */
> +#define IP_INT_ENABLE_MASK 0x0001
> +#define IP_INTERRUPT_ID_MASK 0x00FE
> +#define IP_INTERRUPT_ID_SHIFT 0x1
> +
> +enum IP_CONTROL {
> + AP_CTRL_HS = 0,
ok
Thanks for the changes!
Tom
> + AP_CTRL_CHAIN,
> + AP_CTRL_NONE,
> + AP_CTRL_ME,
> + ACCEL_ADAPTER
> +};
> +
> +#define IP_CONTROL_MASK 0xFF00
> +#define IP_CONTROL_SHIFT 0x8
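The three mask/shift pairs decode the IP_KERNEL `properties` word documented in struct ip_data below; a small userspace sketch (helper names are mine, the defines mirror the ones above):

```c
#include <assert.h>
#include <stdint.h>

#define IP_INT_ENABLE_MASK    0x0001
#define IP_INTERRUPT_ID_MASK  0x00FE
#define IP_INTERRUPT_ID_SHIFT 0x1
#define IP_CONTROL_MASK       0xFF00
#define IP_CONTROL_SHIFT      0x8

/* Extract the interrupt id from an IP_KERNEL properties word. */
static uint32_t ip_irq_id(uint32_t props)
{
	return (props & IP_INTERRUPT_ID_MASK) >> IP_INTERRUPT_ID_SHIFT;
}

/* Extract the control protocol (enum IP_CONTROL value). */
static uint32_t ip_control(uint32_t props)
{
	return (props & IP_CONTROL_MASK) >> IP_CONTROL_SHIFT;
}
```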
> +
> +/* IPs on AXI lite - their types, names, and base addresses.*/
> +struct ip_data {
> + uint32_t type; /* map to IP_TYPE enum */
> + union {
> + uint32_t properties; /* Default: 32-bits to indicate ip */
> + /* specific property. */
> + /* type: IP_KERNEL
> + * int_enable : Bit - 0x0000_0001;
> + * interrupt_id : Bits - 0x0000_00FE;
> + * ip_control : Bits = 0x0000_FF00;
> + */
> + struct { /* type: IP_MEM_* */
> + uint16_t index;
> + uint8_t pc_index;
> + uint8_t unused;
> + } indices;
> + };
> + uint64_t base_address;
> + uint8_t name[64]; /* eg Kernel name corresponding to KERNEL */
> + /* instance, can embed CU name in future. */
> +} __packed;
> +
> +struct ip_layout {
> + int32_t count;
> + struct ip_data ip_data[1]; /* All the ip_data needs to be sorted */
> + /* by base_address. */
> +} __packed;
> +
> +/*** Debug IP section layout ****/
> +enum DEBUG_IP_TYPE {
> + UNDEFINED = 0,
> + LAPC,
> + ILA,
> + AXI_MM_MONITOR,
> + AXI_TRACE_FUNNEL,
> + AXI_MONITOR_FIFO_LITE,
> + AXI_MONITOR_FIFO_FULL,
> + ACCEL_MONITOR,
> + AXI_STREAM_MONITOR,
> + AXI_STREAM_PROTOCOL_CHECKER,
> + TRACE_S2MM,
> + AXI_DMA,
> + TRACE_S2MM_FULL
> +};
> +
> +struct debug_ip_data {
> + uint8_t type; /* type of enum DEBUG_IP_TYPE */
> + uint8_t index_lowbyte;
> + uint8_t properties;
> + uint8_t major;
> + uint8_t minor;
> + uint8_t index_highbyte;
> + uint8_t reserved[2];
> + uint64_t base_address;
> + char name[128];
> +} __packed;
> +
> +struct debug_ip_layout {
> + uint16_t count;
> + struct debug_ip_data debug_ip_data[1];
> +} __packed;
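Since the 16-bit debug IP index is split across index_lowbyte and index_highbyte above, a consumer reassembles it like this (helper name is an assumption, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/* Rebuild the 16-bit index from the split bytes in debug_ip_data. */
static uint16_t debug_ip_index(uint8_t lowbyte, uint8_t highbyte)
{
	return (uint16_t)(((uint16_t)highbyte << 8) | lowbyte);
}
```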
> +
> +/* Supported clock frequency types */
> +enum XCLBIN_CLOCK_TYPE {
> + CT_UNUSED = 0, /* Initialized value */
> + CT_DATA = 1, /* Data clock */
> + CT_KERNEL = 2, /* Kernel clock */
> + CT_SYSTEM = 3 /* System Clock */
> +};
> +
> +/* Clock Frequency Entry */
> +struct clock_freq {
> + uint16_t freq_MHZ; /* Frequency in MHz */
> + uint8_t type; /* Clock type (enum CLOCK_TYPE) */
> + uint8_t unused[5]; /* Not used - padding */
> + char name[128]; /* Clock Name */
> +} __packed;
> +
> +/* Clock frequency section */
> +struct clock_freq_topology {
> + int16_t count; /* Number of entries */
> + struct clock_freq clock_freq[1]; /* Clock array */
> +} __packed;
> +
> +/* Supported MCS file types */
> +enum MCS_TYPE {
> + MCS_UNKNOWN = 0, /* Initialized value */
> + MCS_PRIMARY = 1, /* The primary mcs file data */
> + MCS_SECONDARY = 2, /* The secondary mcs file data */
> +};
> +
> +/* One chunk of MCS data */
> +struct mcs_chunk {
> + uint8_t type; /* MCS data type */
> + uint8_t unused[7]; /* padding */
> + uint64_t offset; /* data offset from the start of */
> + /* the section */
> + uint64_t size; /* data size */
> +} __packed;
> +
> +/* MCS data section */
> +struct mcs {
> + int8_t count; /* Number of chunks */
> + int8_t unused[7]; /* padding */
> + struct mcs_chunk chunk[1]; /* MCS chunks followed by data */
> +} __packed;
> +
> +/* bmc data section */
> +struct bmc {
> + uint64_t offset; /* data offset from the start of */
> + /* the section */
> + uint64_t size; /* data size (bytes) */
> + char image_name[64]; /* Name of the image */
> + /* (e.g., MSP432P401R) */
> + char device_name[64]; /* Device ID (e.g., VCU1525) */
> + char version[64];
> + char md5value[33]; /* MD5 Expected Value */
> + /* (e.g., 56027182079c0bd621761b7dab5a27ca)*/
> + char padding[7]; /* Padding */
> +} __packed;
> +
> +/* soft kernel data section, used by classic driver */
> +struct soft_kernel {
> + /** Prefix Syntax:
> + * mpo - member, pointer, offset
> + * This variable represents a zero terminated string
> + * that is offset from the beginning of the section.
> + * The pointer to access the string is initialized as follows:
> + * char * pCharString = (address_of_section) + (mpo value)
> + */
> + uint32_t mpo_name; /* Name of the soft kernel */
> + uint32_t image_offset; /* Image offset */
> + uint32_t image_size; /* Image size */
> + uint32_t mpo_version; /* Version */
> + uint32_t mpo_md5_value; /* MD5 checksum */
> + uint32_t mpo_symbol_name; /* Symbol name */
> + uint32_t num_instances; /* Number of instances */
> + uint8_t padding[36]; /* Reserved for future use */
> + uint8_t reserved_ext[16]; /* Reserved for future extended data */
> +} __packed;
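Following the mpo convention documented above (each mpo_* field is a byte offset from the start of the section to a NUL-terminated string), a userspace sketch of resolving such an offset; sk_hdr is a one-field stand-in, not the real struct:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* One-field stand-in for struct soft_kernel. */
struct sk_hdr {
	uint32_t mpo_name; /* offset of the kernel name string */
};

/* Resolve an mpo offset to a string pointer within the section. */
static const char *mpo_string(const void *section_base, uint32_t mpo)
{
	return (const char *)section_base + mpo;
}
```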
> +
> +enum CHECKSUM_TYPE {
> + CST_UNKNOWN = 0,
> + CST_SDBM = 1,
> + CST_LAST
> +};
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif
Bisectability may be an issue.
Moritz,
Building happens on the last patch, so in theory there will never be a build break needing bisection. Do we care about the misordering of several of these patches?
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> xrt-lib kernel module infrastructure code to register and manage all
> leaf driver modules.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/subdev_id.h | 38 ++++
> drivers/fpga/xrt/include/xleaf.h | 264 +++++++++++++++++++++++++
> drivers/fpga/xrt/lib/lib-drv.c | 277 +++++++++++++++++++++++++++
ok
> drivers/fpga/xrt/lib/lib-drv.h | 17 ++
> 4 files changed, 596 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/subdev_id.h
> create mode 100644 drivers/fpga/xrt/include/xleaf.h
> create mode 100644 drivers/fpga/xrt/lib/lib-drv.c
> create mode 100644 drivers/fpga/xrt/lib/lib-drv.h
>
> diff --git a/drivers/fpga/xrt/include/subdev_id.h b/drivers/fpga/xrt/include/subdev_id.h
> new file mode 100644
> index 000000000000..42fbd6f5e80a
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/subdev_id.h
> @@ -0,0 +1,38 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XRT_SUBDEV_ID_H_
> +#define _XRT_SUBDEV_ID_H_
> +
> +/*
> + * Every subdev driver has an ID by which others refer to it. There can be
> + * multiple instances of a subdev driver. A <subdev_id, subdev_instance> tuple
> + * uniquely identifies a specific instance of a subdev driver.
> + */
> +enum xrt_subdev_id {
> + XRT_SUBDEV_GRP = 0,
It is not necessary to initialize every value unless there are gaps.
> + XRT_SUBDEV_VSEC = 1,
> + XRT_SUBDEV_VSEC_GOLDEN = 2,
> + XRT_SUBDEV_DEVCTL = 3,
> + XRT_SUBDEV_AXIGATE = 4,
> + XRT_SUBDEV_ICAP = 5,
> + XRT_SUBDEV_TEST = 6,
> + XRT_SUBDEV_MGMT_MAIN = 7,
> + XRT_SUBDEV_QSPI = 8,
> + XRT_SUBDEV_MAILBOX = 9,
> + XRT_SUBDEV_CMC = 10,
> + XRT_SUBDEV_CALIB = 11,
> + XRT_SUBDEV_CLKFREQ = 12,
> + XRT_SUBDEV_CLOCK = 13,
> + XRT_SUBDEV_SRSR = 14,
> + XRT_SUBDEV_UCS = 15,
> + XRT_SUBDEV_NUM = 16, /* Total number of subdevs. */
> + XRT_ROOT = -1, /* Special ID for root driver. */
> +};
> +
> +#endif /* _XRT_SUBDEV_ID_H_ */
> diff --git a/drivers/fpga/xrt/include/xleaf.h b/drivers/fpga/xrt/include/xleaf.h
> new file mode 100644
> index 000000000000..acb500df04b0
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf.h
> @@ -0,0 +1,264 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + * Sonal Santan <[email protected]>
> + */
> +
> +#ifndef _XRT_XLEAF_H_
> +#define _XRT_XLEAF_H_
> +
> +#include <linux/platform_device.h>
> +#include <linux/fs.h>
> +#include <linux/cdev.h>
> +#include "subdev_id.h"
> +#include "xroot.h"
> +#include "events.h"
> +
> +/* All subdev drivers should use below common routines to print out msg. */
> +#define DEV(pdev) (&(pdev)->dev)
> +#define DEV_PDATA(pdev) \
> + ((struct xrt_subdev_platdata *)dev_get_platdata(DEV(pdev)))
> +#define DEV_DRVDATA(pdev) \
> + ((struct xrt_subdev_drvdata *) \
> + platform_get_device_id(pdev)->driver_data)
> +#define FMT_PRT(prt_fn, pdev, fmt, args...) \
> + ({typeof(pdev) (_pdev) = (pdev); \
> + prt_fn(DEV(_pdev), "%s %s: " fmt, \
> + DEV_PDATA(_pdev)->xsp_root_name, __func__, ##args); })
> +#define xrt_err(pdev, fmt, args...) FMT_PRT(dev_err, pdev, fmt, ##args)
> +#define xrt_warn(pdev, fmt, args...) FMT_PRT(dev_warn, pdev, fmt, ##args)
> +#define xrt_info(pdev, fmt, args...) FMT_PRT(dev_info, pdev, fmt, ##args)
> +#define xrt_dbg(pdev, fmt, args...) FMT_PRT(dev_dbg, pdev, fmt, ##args)
> +
> +enum {
> + /* Starting cmd for common leaf cmd implemented by all leaves. */
> + XRT_XLEAF_COMMON_BASE = 0,
> + /* Starting cmd for leaves' specific leaf cmds. */
> + XRT_XLEAF_CUSTOM_BASE = 64,
> +};
> +
> +enum xrt_xleaf_common_leaf_cmd {
> + XRT_XLEAF_EVENT = XRT_XLEAF_COMMON_BASE,
> +};
> +
> +/*
> + * If populated by subdev driver, infra will handle the mechanics of
> + * char device (un)registration.
> + */
> +enum xrt_subdev_file_mode {
> + /* Infra create cdev, default file name */
> + XRT_SUBDEV_FILE_DEFAULT = 0,
> + /* Infra create cdev, need to encode inst num in file name */
> + XRT_SUBDEV_FILE_MULTI_INST,
> + /* No auto creation of cdev by infra, leaf handles it by itself */
> + XRT_SUBDEV_FILE_NO_AUTO,
> +};
> +
> +struct xrt_subdev_file_ops {
> + const struct file_operations xsf_ops;
> + dev_t xsf_dev_t;
> + const char *xsf_dev_name;
> + enum xrt_subdev_file_mode xsf_mode;
> +};
> +
> +/*
> + * Subdev driver callbacks populated by subdev driver.
> + */
> +struct xrt_subdev_drv_ops {
> + /*
> + * Per driver instance callback. The pdev points to the instance.
> + * If defined, these are called by other leaf drivers.
> + * Note that root driver may call into xsd_leaf_call of a group driver.
> + */
> + int (*xsd_leaf_call)(struct platform_device *pdev, u32 cmd, void *arg);
> +};
> +
> +/*
> + * Defined and populated by subdev driver, exported as driver_data in
> + * struct platform_device_id.
> + */
> +struct xrt_subdev_drvdata {
> + struct xrt_subdev_file_ops xsd_file_ops;
> + struct xrt_subdev_drv_ops xsd_dev_ops;
> +};
> +
> +/*
> + * Partially initialized by the parent driver, then passed in as the subdev driver's
> + * platform data when creating subdev driver instance by calling platform
> + * device register API (platform_device_register_data() or the likes).
> + *
> + * Once device register API returns, platform driver framework makes a copy of
> + * this buffer and maintains its life cycle. The content of the buffer is
> + * completely owned by subdev driver.
> + *
> + * Thus, parent driver should be very careful when it touches this buffer
> + * again once it's handed over to subdev driver. And the data structure
> + * should not contain pointers pointing to buffers that is managed by
> + * other or parent drivers since it could have been freed before platform
> + * data buffer is freed by platform driver framework.
> + */
> +struct xrt_subdev_platdata {
> + /*
> + * Per driver instance callback. The pdev points to the instance.
> + * Should always be defined for subdev driver to get service from root.
> + */
> + xrt_subdev_root_cb_t xsp_root_cb;
> + void *xsp_root_cb_arg;
> +
> + /* Something to associate w/ root for msg printing. */
> + const char *xsp_root_name;
> +
> + /*
> + * Char dev support for this subdev instance.
> + * Initialized by subdev driver.
> + */
> + struct cdev xsp_cdev;
> + struct device *xsp_sysdev;
> + struct mutex xsp_devnode_lock; /* devnode lock */
> + struct completion xsp_devnode_comp;
> + int xsp_devnode_ref;
> + bool xsp_devnode_online;
> + bool xsp_devnode_excl;
> +
> + /*
> + * Subdev driver specific init data. The buffer should be embedded
> + * in this data structure buffer after dtb, so that it can be freed
> + * together with platform data.
> + */
> + loff_t xsp_priv_off; /* Offset into this platform data buffer. */
> + size_t xsp_priv_len;
> +
> + /*
> + * Populated by parent driver to describe the device tree for
> + * the subdev driver to handle. Should always be last one since it's
> + * of variable length.
> + */
> + bool xsp_dtb_valid;
> + char xsp_dtb[];
> +};
> +
> +/*
> + * This struct defines the endpoints that belong to the same subdevice.
> + */
> +struct xrt_subdev_ep_names {
> + const char *ep_name;
> + const char *regmap_name;
> +};
> +
> +struct xrt_subdev_endpoints {
> + struct xrt_subdev_ep_names *xse_names;
> + /* minimum number of endpoints to support the subdevice */
> + u32 xse_min_ep;
> +};
> +
> +struct subdev_match_arg {
> + enum xrt_subdev_id id;
> + int instance;
> +};
> +
> +bool xleaf_has_endpoint(struct platform_device *pdev, const char *endpoint_name);
> +struct platform_device *xleaf_get_leaf(struct platform_device *pdev,
> + xrt_subdev_match_t cb, void *arg);
> +
> +static inline bool subdev_match(enum xrt_subdev_id id, struct platform_device *pdev, void *arg)
> +{
> + const struct subdev_match_arg *a = (struct subdev_match_arg *)arg;
> + int instance = a->instance;
> +
> + if (id != a->id)
> + return false;
> + if (instance != pdev->id && instance != PLATFORM_DEVID_NONE)
> + return false;
> + return true;
> +}
> +
> +static inline bool xrt_subdev_match_epname(enum xrt_subdev_id id,
> + struct platform_device *pdev, void *arg)
> +{
> + return xleaf_has_endpoint(pdev, arg);
> +}
> +
> +static inline struct platform_device *
> +xleaf_get_leaf_by_id(struct platform_device *pdev,
> + enum xrt_subdev_id id, int instance)
> +{
> + struct subdev_match_arg arg = { id, instance };
> +
> + return xleaf_get_leaf(pdev, subdev_match, &arg);
> +}
> +
> +static inline struct platform_device *
> +xleaf_get_leaf_by_epname(struct platform_device *pdev, const char *name)
> +{
> + return xleaf_get_leaf(pdev, xrt_subdev_match_epname, (void *)name);
> +}
> +
> +static inline int xleaf_call(struct platform_device *tgt, u32 cmd, void *arg)
> +{
> + struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(tgt);
> +
> + return (*drvdata->xsd_dev_ops.xsd_leaf_call)(tgt, cmd, arg);
> +}
> +
> +int xleaf_broadcast_event(struct platform_device *pdev, enum xrt_events evt, bool async);
> +int xleaf_create_group(struct platform_device *pdev, char *dtb);
> +int xleaf_destroy_group(struct platform_device *pdev, int instance);
> +void xleaf_get_barres(struct platform_device *pdev, struct resource **res, uint bar_idx);
> +void xleaf_get_root_id(struct platform_device *pdev, unsigned short *vendor, unsigned short *device,
> + unsigned short *subvendor, unsigned short *subdevice);
> +void xleaf_hot_reset(struct platform_device *pdev);
> +int xleaf_put_leaf(struct platform_device *pdev, struct platform_device *leaf);
> +struct device *xleaf_register_hwmon(struct platform_device *pdev, const char *name, void *drvdata,
> + const struct attribute_group **grps);
> +void xleaf_unregister_hwmon(struct platform_device *pdev, struct device *hwmon);
> +int xleaf_wait_for_group_bringup(struct platform_device *pdev);
> +
> +/*
> + * Character device helper APIs for use by leaf drivers
> + */
> +static inline bool xleaf_devnode_enabled(struct xrt_subdev_drvdata *drvdata)
> +{
> + return drvdata && drvdata->xsd_file_ops.xsf_ops.open;
> +}
> +
> +int xleaf_devnode_create(struct platform_device *pdev,
> + const char *file_name, const char *inst_name);
> +int xleaf_devnode_destroy(struct platform_device *pdev);
> +
> +struct platform_device *xleaf_devnode_open_excl(struct inode *inode);
> +struct platform_device *xleaf_devnode_open(struct inode *inode);
> +void xleaf_devnode_close(struct inode *inode);
> +
> +/* Helpers. */
> +int xleaf_register_driver(enum xrt_subdev_id id, struct platform_driver *drv,
> + struct xrt_subdev_endpoints *eps);
> +void xleaf_unregister_driver(enum xrt_subdev_id id);
> +
> +/* Module's init/fini routines for leaf driver in xrt-lib module */
> +#define XRT_LEAF_INIT_FINI_FUNC(_id, name) \
> +void name##_leaf_init_fini(bool init) \
> +{ \
> + typeof(_id) id = _id; \
> + if (init) { \
> + xleaf_register_driver(id, \
> + &xrt_##name##_driver, \
> + xrt_##name##_endpoints); \
> + } else { \
> + xleaf_unregister_driver(id); \
> + } \
> +}
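For readers unfamiliar with the pattern, here is a userspace analog of what XRT_LEAF_INIT_FINI_FUNC expands to; fake_register()/fake_unregister() stand in for the xleaf_register_driver()/xleaf_unregister_driver() calls:

```c
#include <assert.h>

/* Track registration state in place of the real driver list. */
static int registered;

static void fake_register(void)   { registered = 1; }
static void fake_unregister(void) { registered = 0; }

/* Token-pasting macro: emits a name##_leaf_init_fini() wrapper that
 * forwards to register on init and unregister on fini, mirroring the
 * kernel macro's shape. */
#define LEAF_INIT_FINI_FUNC(name) \
void name##_leaf_init_fini(int init) \
{ \
	if (init) \
		fake_register(); \
	else \
		fake_unregister(); \
}

LEAF_INIT_FINI_FUNC(clock) /* emits clock_leaf_init_fini() */
```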
> +
> +void group_leaf_init_fini(bool init);
> +void vsec_leaf_init_fini(bool init);
> +void devctl_leaf_init_fini(bool init);
> +void axigate_leaf_init_fini(bool init);
> +void icap_leaf_init_fini(bool init);
> +void calib_leaf_init_fini(bool init);
> +void clkfreq_leaf_init_fini(bool init);
> +void clock_leaf_init_fini(bool init);
> +void ucs_leaf_init_fini(bool init);
> +
> +#endif /* _XRT_LEAF_H_ */
> diff --git a/drivers/fpga/xrt/lib/lib-drv.c b/drivers/fpga/xrt/lib/lib-drv.c
> new file mode 100644
> index 000000000000..64bb8710be66
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/lib-drv.c
> @@ -0,0 +1,277 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#include <linux/module.h>
> +#include <linux/vmalloc.h>
> +#include "xleaf.h"
> +#include "xroot.h"
> +#include "lib-drv.h"
> +
> +#define XRT_IPLIB_MODULE_NAME "xrt-lib"
> +#define XRT_IPLIB_MODULE_VERSION "4.0.0"
> +#define XRT_MAX_DEVICE_NODES 128
> +#define XRT_DRVNAME(drv) ((drv)->driver.name)
> +
> +/*
> + * A subdev driver is known to others by its ID. We map the ID to its
ok
> + * struct platform_driver, which contains its binding name and driver/file ops.
> + * We also map it to the endpoint name in the DTB, if it is different
> + * from the driver's binding name.
> + */
> +struct xrt_drv_map {
> + struct list_head list;
> + enum xrt_subdev_id id;
> + struct platform_driver *drv;
> + struct xrt_subdev_endpoints *eps;
> + struct ida ida; /* manage driver instance and char dev minor */
> +};
> +
> +static DEFINE_MUTEX(xrt_lib_lock); /* global lock protecting xrt_drv_maps list */
> +static LIST_HEAD(xrt_drv_maps);
> +struct class *xrt_class;
> +
> +static inline struct xrt_subdev_drvdata *
> +xrt_drv_map2drvdata(struct xrt_drv_map *map)
> +{
> + return (struct xrt_subdev_drvdata *)map->drv->id_table[0].driver_data;
> +}
> +
> +static struct xrt_drv_map *
> +__xrt_drv_find_map_by_id(enum xrt_subdev_id id)
ok
> +{
> + struct xrt_drv_map *tmap;
> +
> + list_for_each_entry(tmap, &xrt_drv_maps, list) {
> + if (tmap->id == id)
> + return tmap;
> + }
> + return NULL;
> +}
> +
> +static struct xrt_drv_map *
> +xrt_drv_find_map_by_id(enum xrt_subdev_id id)
> +{
> + struct xrt_drv_map *map;
> +
> + mutex_lock(&xrt_lib_lock);
> + map = __xrt_drv_find_map_by_id(id);
> + mutex_unlock(&xrt_lib_lock);
> + /*
> + * map should remain valid even after the lock is dropped since a registered
ok
> + * driver should only be unregistered when driver module is being unloaded,
> + * which means that the driver should not be used by then.
> + */
> + return map;
> +}
> +
> +static int xrt_drv_register_driver(struct xrt_drv_map *map)
> +{
> + struct xrt_subdev_drvdata *drvdata;
> + int rc = 0;
> + const char *drvname = XRT_DRVNAME(map->drv);
> +
> + rc = platform_driver_register(map->drv);
> + if (rc) {
> + pr_err("register %s platform driver failed\n", drvname);
> + return rc;
> + }
> +
> + drvdata = xrt_drv_map2drvdata(map);
> + if (drvdata) {
> + /* Initialize dev_t for char dev node. */
> + if (xleaf_devnode_enabled(drvdata)) {
> + rc = alloc_chrdev_region(&drvdata->xsd_file_ops.xsf_dev_t, 0,
> + XRT_MAX_DEVICE_NODES, drvname);
> + if (rc) {
> + platform_driver_unregister(map->drv);
> + pr_err("failed to alloc dev minor for %s: %d\n", drvname, rc);
> + return rc;
> + }
> + } else {
> + drvdata->xsd_file_ops.xsf_dev_t = (dev_t)-1;
> + }
> + }
> +
> + ida_init(&map->ida);
> +
> + pr_info("%s registered successfully\n", drvname);
> +
> + return 0;
> +}
> +
> +static void xrt_drv_unregister_driver(struct xrt_drv_map *map)
> +{
> + const char *drvname = XRT_DRVNAME(map->drv);
> + struct xrt_subdev_drvdata *drvdata;
> +
> + ida_destroy(&map->ida);
> +
> + drvdata = xrt_drv_map2drvdata(map);
> + if (drvdata && drvdata->xsd_file_ops.xsf_dev_t != (dev_t)-1) {
> + unregister_chrdev_region(drvdata->xsd_file_ops.xsf_dev_t,
> + XRT_MAX_DEVICE_NODES);
> + }
> +
> + platform_driver_unregister(map->drv);
> +
> + pr_info("%s unregistered successfully\n", drvname);
> +}
> +
> +int xleaf_register_driver(enum xrt_subdev_id id,
> + struct platform_driver *drv,
> + struct xrt_subdev_endpoints *eps)
> +{
> + struct xrt_drv_map *map;
> + int rc;
> +
> + mutex_lock(&xrt_lib_lock);
> +
> + map = __xrt_drv_find_map_by_id(id);
> + if (map) {
> + mutex_unlock(&xrt_lib_lock);
> + pr_err("Id %d already has a registered driver, 0x%p\n",
> + id, map->drv);
> + return -EEXIST;
> + }
> +
> + map = kzalloc(sizeof(*map), GFP_KERNEL);
ok
> + if (!map) {
> + mutex_unlock(&xrt_lib_lock);
> + return -ENOMEM;
> + }
> + map->id = id;
> + map->drv = drv;
> + map->eps = eps;
> +
> + rc = xrt_drv_register_driver(map);
> + if (rc) {
ok
> + kfree(map);
> + mutex_unlock(&xrt_lib_lock);
> + return rc;
> + }
> +
> + list_add(&map->list, &xrt_drv_maps);
> +
> + mutex_unlock(&xrt_lib_lock);
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(xleaf_register_driver);
> +
> +void xleaf_unregister_driver(enum xrt_subdev_id id)
> +{
> + struct xrt_drv_map *map;
> +
> + mutex_lock(&xrt_lib_lock);
> +
> + map = __xrt_drv_find_map_by_id(id);
> + if (!map) {
> + mutex_unlock(&xrt_lib_lock);
> + pr_err("Id %d has no registered driver\n", id);
> + return;
> + }
> +
> + list_del(&map->list);
> +
> + mutex_unlock(&xrt_lib_lock);
> +
> + xrt_drv_unregister_driver(map);
> + kfree(map);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_unregister_driver);
> +
> +const char *xrt_drv_name(enum xrt_subdev_id id)
> +{
> + struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
> +
> + if (map)
> + return XRT_DRVNAME(map->drv);
> + return NULL;
> +}
> +
> +int xrt_drv_get_instance(enum xrt_subdev_id id)
> +{
> + struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
> +
> + return ida_alloc_range(&map->ida, 0, XRT_MAX_DEVICE_NODES, GFP_KERNEL);
> +}
> +
> +void xrt_drv_put_instance(enum xrt_subdev_id id, int instance)
> +{
> + struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
> +
> + ida_free(&map->ida, instance);
> +}
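xrt_drv_get_instance()/xrt_drv_put_instance() are thin wrappers over the per-driver IDA. The lowest-free-slot behavior they rely on can be modeled with a bitmap (a sketch of the semantics, not the kernel IDA implementation):

```c
#include <errno.h>
#include <stdbool.h>

#define MAX_NODES 128	/* stand-in for XRT_MAX_DEVICE_NODES */

static bool used[MAX_NODES + 1];

/* Model of ida_alloc_range(&ida, 0, MAX_NODES, ...): hand out lowest free id. */
static int get_instance(void)
{
	int i;

	for (i = 0; i <= MAX_NODES; i++) {
		if (!used[i]) {
			used[i] = true;
			return i;
		}
	}
	return -ENOSPC;
}

/* Model of ida_free(): release an id for reuse. */
static void put_instance(int inst)
{
	if (inst >= 0 && inst <= MAX_NODES)
		used[inst] = false;
}
```

Note the real helpers also look the map up by id first; this sketch only models the id allocation itself.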
> +
> +struct xrt_subdev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id)
> +{
> + struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
> + struct xrt_subdev_endpoints *eps;
> +
> + eps = map ? map->eps : NULL;
> + return eps;
> +}
> +
> +/* Leaf driver's module init/fini callbacks. */
Add a comment to the effect that dynamically adding drivers/IDs is not supported.
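Something along these lines, perhaps (wording is only a suggestion):

```c
/*
 * All leaf drivers are registered from this fixed table at module init
 * time; dynamically adding drivers or subdev IDs at runtime is not
 * supported.
 */
```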
> +static void (*leaf_init_fini_cbs[])(bool) = {
> + group_leaf_init_fini,
> + vsec_leaf_init_fini,
> + devctl_leaf_init_fini,
> + axigate_leaf_init_fini,
> + icap_leaf_init_fini,
> + calib_leaf_init_fini,
> + clkfreq_leaf_init_fini,
> + clock_leaf_init_fini,
> + ucs_leaf_init_fini,
> +};
> +
> +static __init int xrt_lib_init(void)
> +{
> + int i;
> +
> + xrt_class = class_create(THIS_MODULE, XRT_IPLIB_MODULE_NAME);
> + if (IS_ERR(xrt_class))
> + return PTR_ERR(xrt_class);
> +
> + for (i = 0; i < ARRAY_SIZE(leaf_init_fini_cbs); i++)
> + leaf_init_fini_cbs[i](true);
> + return 0;
> +}
> +
> +static __exit void xrt_lib_fini(void)
> +{
> + struct xrt_drv_map *map;
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(leaf_init_fini_cbs); i++)
> + leaf_init_fini_cbs[i](false);
> +
> + mutex_lock(&xrt_lib_lock);
> +
> + while (!list_empty(&xrt_drv_maps)) {
> + map = list_first_entry_or_null(&xrt_drv_maps, struct xrt_drv_map, list);
> + pr_err("Unloading module with %s still registered\n", XRT_DRVNAME(map->drv));
> + list_del(&map->list);
> + mutex_unlock(&xrt_lib_lock);
> + xrt_drv_unregister_driver(map);
> + kfree(map);
> + mutex_lock(&xrt_lib_lock);
> + }
> +
> + mutex_unlock(&xrt_lib_lock);
> +
> + class_destroy(xrt_class);
> +}
> +
> +module_init(xrt_lib_init);
> +module_exit(xrt_lib_fini);
> +
> +MODULE_VERSION(XRT_IPLIB_MODULE_VERSION);
> +MODULE_AUTHOR("XRT Team <[email protected]>");
> +MODULE_DESCRIPTION("Xilinx Alveo IP Lib driver");
> +MODULE_LICENSE("GPL v2");
> diff --git a/drivers/fpga/xrt/lib/lib-drv.h b/drivers/fpga/xrt/lib/lib-drv.h
> new file mode 100644
> index 000000000000..a94c58149cb4
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/lib-drv.h
> @@ -0,0 +1,17 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _LIB_DRV_H_
> +#define _LIB_DRV_H_
> +
> +const char *xrt_drv_name(enum xrt_subdev_id id);
Bisectability may be / is still an issue.
Tom
> +int xrt_drv_get_instance(enum xrt_subdev_id id);
> +void xrt_drv_put_instance(enum xrt_subdev_id id, int instance);
> +struct xrt_subdev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id);
> +
> +#endif /* _LIB_DRV_H_ */
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> group driver that manages life cycle of a bunch of leaf driver instances
> and bridges them with root.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/group.h | 25 +++
> drivers/fpga/xrt/lib/group.c | 286 +++++++++++++++++++++++++++++++
> 2 files changed, 311 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/group.h
> create mode 100644 drivers/fpga/xrt/lib/group.c
>
> diff --git a/drivers/fpga/xrt/include/group.h b/drivers/fpga/xrt/include/group.h
> new file mode 100644
> index 000000000000..09e9d03f53fe
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/group.h
> @@ -0,0 +1,25 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
ok, removed generic boilerplate
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XRT_GROUP_H_
> +#define _XRT_GROUP_H_
> +
> +#include "xleaf.h"
move header to another patch
> +
> +/*
> + * Group driver leaf calls.
ok
> + */
> +enum xrt_group_leaf_cmd {
> + XRT_GROUP_GET_LEAF = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
ok
> + XRT_GROUP_PUT_LEAF,
> + XRT_GROUP_INIT_CHILDREN,
> + XRT_GROUP_FINI_CHILDREN,
> + XRT_GROUP_TRIGGER_EVENT,
> +};
> +
> +#endif /* _XRT_GROUP_H_ */
> diff --git a/drivers/fpga/xrt/lib/group.c b/drivers/fpga/xrt/lib/group.c
> new file mode 100644
> index 000000000000..7b8716569641
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/group.c
> @@ -0,0 +1,286 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA Group Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/platform_device.h>
> +#include "xleaf.h"
> +#include "subdev_pool.h"
> +#include "group.h"
> +#include "metadata.h"
> +#include "lib-drv.h"
> +
> +#define XRT_GRP "xrt_group"
> +
> +struct xrt_group {
> + struct platform_device *pdev;
> + struct xrt_subdev_pool leaves;
> + bool leaves_created;
> + struct mutex lock; /* lock for group */
> +};
> +
> +static int xrt_grp_root_cb(struct device *dev, void *parg,
> + enum xrt_root_cmd cmd, void *arg)
ok
> +{
> + int rc;
> + struct platform_device *pdev =
> + container_of(dev, struct platform_device, dev);
> + struct xrt_group *xg = (struct xrt_group *)parg;
> +
> + switch (cmd) {
> + case XRT_ROOT_GET_LEAF_HOLDERS: {
> + struct xrt_root_get_holders *holders =
> + (struct xrt_root_get_holders *)arg;
> + rc = xrt_subdev_pool_get_holders(&xg->leaves,
> + holders->xpigh_pdev,
> + holders->xpigh_holder_buf,
> + holders->xpigh_holder_buf_len);
> + break;
> + }
> + default:
> + /* Forward parent call to root. */
> + rc = xrt_subdev_root_request(pdev, cmd, arg);
> + break;
> + }
> +
> + return rc;
> +}
> +
> +/*
> + * Cut subdev's dtb from group's dtb based on passed-in endpoint descriptor.
> + * Return the subdev's dtb through dtbp, if found.
> + */
> +static int xrt_grp_cut_subdev_dtb(struct xrt_group *xg, struct xrt_subdev_endpoints *eps,
> + char *grp_dtb, char **dtbp)
> +{
> + int ret, i, ep_count = 0;
> + char *dtb = NULL;
> +
> + ret = xrt_md_create(DEV(xg->pdev), &dtb);
> + if (ret)
> + return ret;
> +
> + for (i = 0; eps->xse_names[i].ep_name || eps->xse_names[i].regmap_name; i++) {
> + const char *ep_name = eps->xse_names[i].ep_name;
> + const char *reg_name = eps->xse_names[i].regmap_name;
> +
> + if (!ep_name)
> + xrt_md_get_compatible_endpoint(DEV(xg->pdev), grp_dtb, reg_name, &ep_name);
> + if (!ep_name)
> + continue;
> +
> + ret = xrt_md_copy_endpoint(DEV(xg->pdev), dtb, grp_dtb, ep_name, reg_name, NULL);
> + if (ret)
> + continue;
> + xrt_md_del_endpoint(DEV(xg->pdev), grp_dtb, ep_name, reg_name);
> + ep_count++;
> + }
> + /* Found enough endpoints, return the subdev's dtb. */
> + if (ep_count >= eps->xse_min_ep) {
> + *dtbp = dtb;
> + return 0;
> + }
> +
> + /* Cleanup - Restore all endpoints that have been deleted, if any. */
> + if (ep_count > 0) {
> + xrt_md_copy_endpoint(DEV(xg->pdev), grp_dtb, dtb,
> + XRT_MD_NODE_ENDPOINTS, NULL, NULL);
> + }
> + vfree(dtb);
> + *dtbp = NULL;
> + return 0;
> +}
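xrt_grp_cut_subdev_dtb() moves matching endpoints from the group's dtb into a per-subdev dtb, keeps the cut only if at least xse_min_ep endpoints were found, and restores them otherwise. The move-then-restore logic can be modeled over plain string arrays (illustrative helpers, not the metadata API):

```c
#include <string.h>

#define MAX_EP 8

struct ep_set {
	const char *names[MAX_EP];
	int count;
};

static int ep_find(struct ep_set *s, const char *name)
{
	int i;

	for (i = 0; i < s->count; i++)
		if (strcmp(s->names[i], name) == 0)
			return i;
	return -1;
}

static void ep_del(struct ep_set *s, int idx)
{
	s->names[idx] = s->names[--s->count];
}

static void ep_add(struct ep_set *s, const char *name)
{
	s->names[s->count++] = name;
}

/*
 * Move endpoints named in 'want' from 'grp' into 'sub'. Keep the cut only
 * if at least min_ep were found; otherwise put everything back.
 */
static int cut_endpoints(struct ep_set *grp, struct ep_set *sub,
			 const char **want, int nwant, int min_ep)
{
	int i, idx, found = 0;

	for (i = 0; i < nwant; i++) {
		idx = ep_find(grp, want[i]);
		if (idx < 0)
			continue;
		ep_add(sub, grp->names[idx]);
		ep_del(grp, idx);
		found++;
	}
	if (found >= min_ep)
		return found;
	/* Not enough endpoints: restore what was cut. */
	for (i = 0; i < sub->count; i++)
		ep_add(grp, sub->names[i]);
	sub->count = 0;
	return 0;
}
```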
> +
> +static int xrt_grp_create_leaves(struct xrt_group *xg)
> +{
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(xg->pdev);
> + struct xrt_subdev_endpoints *eps = NULL;
> + int ret = 0, failed = 0;
> + enum xrt_subdev_id did;
> + char *grp_dtb = NULL;
> + unsigned long mlen;
> +
> + if (!pdata)
> + return -EINVAL;
ok
> +
> + mlen = xrt_md_size(DEV(xg->pdev), pdata->xsp_dtb);
> + if (mlen == XRT_MD_INVALID_LENGTH) {
> + xrt_err(xg->pdev, "invalid dtb, len %ld", mlen);
> + return -EINVAL;
> + }
> +
> + mutex_lock(&xg->lock);
> +
> + if (xg->leaves_created) {
> + mutex_unlock(&xg->lock);
Add a comment that this is not an error and/or that the error is handled elsewhere.
> + return -EEXIST;
> + }
> +
> + grp_dtb = vmalloc(mlen);
> + if (!grp_dtb) {
> + mutex_unlock(&xg->lock);
> + return -ENOMEM;
ok
> + }
> +
> + /* Create all leaves based on dtb. */
> + xrt_info(xg->pdev, "bringing up leaves...");
> + memcpy(grp_dtb, pdata->xsp_dtb, mlen);
> + for (did = 0; did < XRT_SUBDEV_NUM; did++) {
ok
> + eps = xrt_drv_get_endpoints(did);
> + while (eps && eps->xse_names) {
> + char *dtb = NULL;
> +
> + ret = xrt_grp_cut_subdev_dtb(xg, eps, grp_dtb, &dtb);
> + if (ret) {
> + failed++;
> + xrt_err(xg->pdev, "failed to cut subdev dtb for drv %s: %d",
> + xrt_drv_name(did), ret);
> + }
> + if (!dtb) {
> + /*
> + * No more dtb to cut or bad things happened for this instance,
> + * switch to the next one.
> + */
> + eps++;
> + continue;
> + }
> +
> + /* Found a dtb for this instance, let's add it. */
> + ret = xrt_subdev_pool_add(&xg->leaves, did, xrt_grp_root_cb, xg, dtb);
> + if (ret < 0) {
> + failed++;
> + xrt_err(xg->pdev, "failed to add %s: %d", xrt_drv_name(did), ret);
Add a comment that this is not a fatal error and that cleanup happens elsewhere.
Tom
> + }
> + vfree(dtb);
> + /* Continue searching for the same instance from grp_dtb. */
> + }
> + }
> +
> + xg->leaves_created = true;
> + vfree(grp_dtb);
> + mutex_unlock(&xg->lock);
> + return failed == 0 ? 0 : -ECHILD;
> +}
> +
> +static void xrt_grp_remove_leaves(struct xrt_group *xg)
> +{
> + mutex_lock(&xg->lock);
> +
> + if (!xg->leaves_created) {
> + mutex_unlock(&xg->lock);
> + return;
> + }
> +
> + xrt_info(xg->pdev, "tearing down leaves...");
> + xrt_subdev_pool_fini(&xg->leaves);
> + xg->leaves_created = false;
> +
> + mutex_unlock(&xg->lock);
> +}
> +
> +static int xrt_grp_probe(struct platform_device *pdev)
> +{
> + struct xrt_group *xg;
> +
> + xrt_info(pdev, "probing...");
> +
> + xg = devm_kzalloc(&pdev->dev, sizeof(*xg), GFP_KERNEL);
> + if (!xg)
> + return -ENOMEM;
> +
> + xg->pdev = pdev;
> + mutex_init(&xg->lock);
> + xrt_subdev_pool_init(DEV(pdev), &xg->leaves);
> + platform_set_drvdata(pdev, xg);
> +
> + return 0;
> +}
> +
> +static int xrt_grp_remove(struct platform_device *pdev)
> +{
> + struct xrt_group *xg = platform_get_drvdata(pdev);
> +
> + xrt_info(pdev, "leaving...");
> + xrt_grp_remove_leaves(xg);
> + return 0;
> +}
> +
> +static int xrt_grp_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
> +{
> + int rc = 0;
> + struct xrt_group *xg = platform_get_drvdata(pdev);
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + /* Simply forward to every child. */
> + xrt_subdev_pool_handle_event(&xg->leaves,
> + (struct xrt_event *)arg);
> + break;
> + case XRT_GROUP_GET_LEAF: {
> + struct xrt_root_get_leaf *get_leaf =
> + (struct xrt_root_get_leaf *)arg;
> +
> + rc = xrt_subdev_pool_get(&xg->leaves, get_leaf->xpigl_match_cb,
> + get_leaf->xpigl_match_arg,
> + DEV(get_leaf->xpigl_caller_pdev),
> + &get_leaf->xpigl_tgt_pdev);
> + break;
> + }
> + case XRT_GROUP_PUT_LEAF: {
> + struct xrt_root_put_leaf *put_leaf =
> + (struct xrt_root_put_leaf *)arg;
> +
> + rc = xrt_subdev_pool_put(&xg->leaves, put_leaf->xpipl_tgt_pdev,
> + DEV(put_leaf->xpipl_caller_pdev));
> + break;
> + }
> + case XRT_GROUP_INIT_CHILDREN:
> + rc = xrt_grp_create_leaves(xg);
> + break;
> + case XRT_GROUP_FINI_CHILDREN:
> + xrt_grp_remove_leaves(xg);
> + break;
> + case XRT_GROUP_TRIGGER_EVENT:
> + xrt_subdev_pool_trigger_event(&xg->leaves, (enum xrt_events)(uintptr_t)arg);
> + break;
> + default:
> + xrt_err(pdev, "unknown IOCTL cmd %d", cmd);
> + rc = -EINVAL;
> + break;
> + }
> + return rc;
> +}
> +
> +static struct xrt_subdev_drvdata xrt_grp_data = {
> + .xsd_dev_ops = {
> + .xsd_leaf_call = xrt_grp_leaf_call,
> + },
> +};
> +
> +static const struct platform_device_id xrt_grp_id_table[] = {
> + { XRT_GRP, (kernel_ulong_t)&xrt_grp_data },
> + { },
> +};
> +
> +static struct platform_driver xrt_group_driver = {
> + .driver = {
> + .name = XRT_GRP,
> + },
> + .probe = xrt_grp_probe,
> + .remove = xrt_grp_remove,
> + .id_table = xrt_grp_id_table,
> +};
> +
> +void group_leaf_init_fini(bool init)
> +{
> + if (init)
> + xleaf_register_driver(XRT_SUBDEV_GRP, &xrt_group_driver, NULL);
> + else
> + xleaf_unregister_driver(XRT_SUBDEV_GRP);
> +}
It is unclear from the changelog whether this new patch was split from an existing patch or is new content.
The file ops seem to come from mgmt/main.c, which calls what could be file ops here. Why is this complicated redirection needed?
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> Helper functions for char device node creation / removal for platform
> drivers. This is part of platform driver infrastructure.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/lib/cdev.c | 232 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 232 insertions(+)
> create mode 100644 drivers/fpga/xrt/lib/cdev.c
>
> diff --git a/drivers/fpga/xrt/lib/cdev.c b/drivers/fpga/xrt/lib/cdev.c
> new file mode 100644
> index 000000000000..38efd24b6e10
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/cdev.c
> @@ -0,0 +1,232 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA device node helper functions.
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#include "xleaf.h"
> +
> +extern struct class *xrt_class;
> +
> +#define XRT_CDEV_DIR "xfpga"
Maybe "xrt_fpga" or just "xrt".
> +#define INODE2PDATA(inode) \
> + container_of((inode)->i_cdev, struct xrt_subdev_platdata, xsp_cdev)
> +#define INODE2PDEV(inode) \
> + to_platform_device(kobj_to_dev((inode)->i_cdev->kobj.parent))
> +#define CDEV_NAME(sysdev) (strchr((sysdev)->kobj.name, '!') + 1)
> +
> +/* Allow it to be accessed from cdev. */
> +static void xleaf_devnode_allowed(struct platform_device *pdev)
> +{
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
> +
> + /* Allow new opens. */
> + mutex_lock(&pdata->xsp_devnode_lock);
> + pdata->xsp_devnode_online = true;
> + mutex_unlock(&pdata->xsp_devnode_lock);
> +}
> +
> +/* Turn off access from cdev and wait for all existing user to go away. */
> +static int xleaf_devnode_disallowed(struct platform_device *pdev)
> +{
> + int ret = 0;
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
> +
> + mutex_lock(&pdata->xsp_devnode_lock);
> +
> + /* Prevent new opens. */
> + pdata->xsp_devnode_online = false;
> + /* Wait for existing user to close. */
> + while (!ret && pdata->xsp_devnode_ref) {
> + int rc;
> +
> + mutex_unlock(&pdata->xsp_devnode_lock);
> + rc = wait_for_completion_killable(&pdata->xsp_devnode_comp);
> + mutex_lock(&pdata->xsp_devnode_lock);
> +
> + if (rc == -ERESTARTSYS) {
> + /* Restore online state. */
> + pdata->xsp_devnode_online = true;
> + xrt_err(pdev, "%s is in use, ref=%d",
> + CDEV_NAME(pdata->xsp_sysdev),
> + pdata->xsp_devnode_ref);
> + ret = -EBUSY;
> + }
> + }
> +
> + mutex_unlock(&pdata->xsp_devnode_lock);
> +
> + return ret;
> +}
> +
> +static struct platform_device *
> +__xleaf_devnode_open(struct inode *inode, bool excl)
> +{
> + struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
> + struct platform_device *pdev = INODE2PDEV(inode);
> + bool opened = false;
> +
> + mutex_lock(&pdata->xsp_devnode_lock);
> +
> + if (pdata->xsp_devnode_online) {
> + if (excl && pdata->xsp_devnode_ref) {
> + xrt_err(pdev, "%s has already been opened exclusively",
> + CDEV_NAME(pdata->xsp_sysdev));
> + } else if (!excl && pdata->xsp_devnode_excl) {
> + xrt_err(pdev, "%s has been opened exclusively",
> + CDEV_NAME(pdata->xsp_sysdev));
> + } else {
> + pdata->xsp_devnode_ref++;
> + pdata->xsp_devnode_excl = excl;
> + opened = true;
> + xrt_info(pdev, "opened %s, ref=%d",
> + CDEV_NAME(pdata->xsp_sysdev),
> + pdata->xsp_devnode_ref);
> + }
> + } else {
> + xrt_err(pdev, "%s is offline", CDEV_NAME(pdata->xsp_sysdev));
> + }
> +
> + mutex_unlock(&pdata->xsp_devnode_lock);
> +
> + pdev = opened ? pdev : NULL;
> + return pdev;
> +}
> +
> +struct platform_device *
> +xleaf_devnode_open_excl(struct inode *inode)
> +{
> + return __xleaf_devnode_open(inode, true);
> +}
This function is unused, remove.
> +
> +struct platform_device *
> +xleaf_devnode_open(struct inode *inode)
> +{
> + return __xleaf_devnode_open(inode, false);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_devnode_open);
Does this really need to be exported?
> +
> +void xleaf_devnode_close(struct inode *inode)
> +{
> + struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
> + struct platform_device *pdev = INODE2PDEV(inode);
> + bool notify = false;
> +
> + mutex_lock(&pdata->xsp_devnode_lock);
> +
> + WARN_ON(pdata->xsp_devnode_ref == 0);
> + pdata->xsp_devnode_ref--;
> + if (pdata->xsp_devnode_ref == 0) {
> + pdata->xsp_devnode_excl = false;
> + notify = true;
> + }
> + if (notify) {
> + xrt_info(pdev, "closed %s, ref=%d",
> + CDEV_NAME(pdata->xsp_sysdev), pdata->xsp_devnode_ref);
xsp_devnode_ref will always be 0, so no need to report it.
> + } else {
> + xrt_info(pdev, "closed %s, notifying waiter",
> + CDEV_NAME(pdata->xsp_sysdev));
> + }
> +
> + mutex_unlock(&pdata->xsp_devnode_lock);
> +
> + if (notify)
> + complete(&pdata->xsp_devnode_comp);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_devnode_close);
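The open/close accounting in __xleaf_devnode_open()/xleaf_devnode_close() implements shared-vs-exclusive opens over a single refcount. A single-threaded userspace model of those rules (the lock and the completion are omitted; names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

struct devnode {
	bool online;
	bool excl;
	int ref;
};

/* Mirrors __xleaf_devnode_open(): returns true if the open is admitted. */
static bool devnode_open(struct devnode *n, bool excl)
{
	if (!n->online)
		return false;		/* node is offline, no new opens */
	if (excl && n->ref)
		return false;		/* exclusive open needs zero users */
	if (!excl && n->excl)
		return false;		/* already opened exclusively */
	n->ref++;
	n->excl = excl;
	return true;
}

/* Mirrors xleaf_devnode_close(): last close clears the exclusive flag. */
static void devnode_close(struct devnode *n)
{
	assert(n->ref > 0);
	n->ref--;
	if (n->ref == 0)
		n->excl = false;
}
```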
> +
> +static inline enum xrt_subdev_file_mode
> +devnode_mode(struct xrt_subdev_drvdata *drvdata)
> +{
> + return drvdata->xsd_file_ops.xsf_mode;
> +}
> +
> +int xleaf_devnode_create(struct platform_device *pdev, const char *file_name,
> + const char *inst_name)
> +{
> + struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(pdev);
> + struct xrt_subdev_file_ops *fops = &drvdata->xsd_file_ops;
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
> + struct cdev *cdevp;
> + struct device *sysdev;
> + int ret = 0;
> + char fname[256];
> +
> + mutex_init(&pdata->xsp_devnode_lock);
> + init_completion(&pdata->xsp_devnode_comp);
> +
> + cdevp = &DEV_PDATA(pdev)->xsp_cdev;
> + cdev_init(cdevp, &fops->xsf_ops);
> + cdevp->owner = fops->xsf_ops.owner;
> + cdevp->dev = MKDEV(MAJOR(fops->xsf_dev_t), pdev->id);
> +
> + /*
> + * Set pdev as parent of cdev so that pdev (and its platform
> + * data) will not be freed before cdev is freed.
> + */
> + cdev_set_parent(cdevp, &DEV(pdev)->kobj);
> +
> + ret = cdev_add(cdevp, cdevp->dev, 1);
> + if (ret) {
> + xrt_err(pdev, "failed to add cdev: %d", ret);
> + goto failed;
> + }
> + if (!file_name)
> + file_name = pdev->name;
> + if (!inst_name) {
> + if (devnode_mode(drvdata) == XRT_SUBDEV_FILE_MULTI_INST) {
> + snprintf(fname, sizeof(fname), "%s/%s/%s.%u",
> + XRT_CDEV_DIR, DEV_PDATA(pdev)->xsp_root_name,
> + file_name, pdev->id);
> + } else {
> + snprintf(fname, sizeof(fname), "%s/%s/%s",
> + XRT_CDEV_DIR, DEV_PDATA(pdev)->xsp_root_name,
> + file_name);
> + }
> + } else {
> + snprintf(fname, sizeof(fname), "%s/%s/%s.%s", XRT_CDEV_DIR,
> + DEV_PDATA(pdev)->xsp_root_name, file_name, inst_name);
> + }
> + sysdev = device_create(xrt_class, NULL, cdevp->dev, NULL, "%s", fname);
> + if (IS_ERR(sysdev)) {
> + ret = PTR_ERR(sysdev);
> + xrt_err(pdev, "failed to create device node: %d", ret);
> + goto failed_cdev_add;
> + }
> + pdata->xsp_sysdev = sysdev;
> +
> + xleaf_devnode_allowed(pdev);
> +
> + xrt_info(pdev, "created (%d, %d): /dev/%s",
> + MAJOR(cdevp->dev), pdev->id, fname);
> + return 0;
> +
> +failed_cdev_add:
> + cdev_del(cdevp);
> +failed:
> + cdevp->owner = NULL;
> + return ret;
> +}
> +
> +int xleaf_devnode_destroy(struct platform_device *pdev)
> +{
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
> + struct cdev *cdevp = &pdata->xsp_cdev;
> + dev_t dev = cdevp->dev;
> + int rc;
> +
> + rc = xleaf_devnode_disallowed(pdev);
> + if (rc)
> + return rc;
Failure of one leaf to be destroyed is not handled well.
Could an able-to-destroy check be done over the whole group?
Tom
> +
> + xrt_info(pdev, "removed (%d, %d): /dev/%s/%s", MAJOR(dev), MINOR(dev),
> + XRT_CDEV_DIR, CDEV_NAME(pdata->xsp_sysdev));
> + device_destroy(xrt_class, cdevp->dev);
> + pdata->xsp_sysdev = NULL;
> + cdev_del(cdevp);
> + return 0;
> +}
This was split from 'fpga: xrt: platform driver infrastructure'
and 'fpga: xrt: management physical function driver (root)'.
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> Contains common code for all root drivers and handles root calls from
> platform drivers. This is part of root driver infrastructure.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/events.h | 45 +++
> drivers/fpga/xrt/include/xroot.h | 117 ++++++
> drivers/fpga/xrt/lib/subdev_pool.h | 53 +++
> drivers/fpga/xrt/lib/xroot.c | 589 +++++++++++++++++++++++++++++
> 4 files changed, 804 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/events.h
> create mode 100644 drivers/fpga/xrt/include/xroot.h
> create mode 100644 drivers/fpga/xrt/lib/subdev_pool.h
> create mode 100644 drivers/fpga/xrt/lib/xroot.c
>
> diff --git a/drivers/fpga/xrt/include/events.h b/drivers/fpga/xrt/include/events.h
> new file mode 100644
> index 000000000000..775171a47c8e
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/events.h
> @@ -0,0 +1,45 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
ok
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XRT_EVENTS_H_
> +#define _XRT_EVENTS_H_
ok
> +
> +#include "subdev_id.h"
> +
> +/*
> + * Event notification.
> + */
> +enum xrt_events {
> + XRT_EVENT_TEST = 0, /* for testing */
> + /*
> + * Events related to specific subdev
> + * Callback arg: struct xrt_event_arg_subdev
> + */
> + XRT_EVENT_POST_CREATION,
> + XRT_EVENT_PRE_REMOVAL,
> + /*
> + * Events related to change of the whole board
> + * Callback arg: <none>
> + */
> + XRT_EVENT_PRE_HOT_RESET,
> + XRT_EVENT_POST_HOT_RESET,
> + XRT_EVENT_PRE_GATE_CLOSE,
> + XRT_EVENT_POST_GATE_OPEN,
> +};
> +
> +struct xrt_event_arg_subdev {
> + enum xrt_subdev_id xevt_subdev_id;
> + int xevt_subdev_instance;
> +};
> +
> +struct xrt_event {
> + enum xrt_events xe_evt;
> + struct xrt_event_arg_subdev xe_subdev;
> +};
> +
> +#endif /* _XRT_EVENTS_H_ */
> diff --git a/drivers/fpga/xrt/include/xroot.h b/drivers/fpga/xrt/include/xroot.h
> new file mode 100644
> index 000000000000..91c0aeb30bf8
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xroot.h
> @@ -0,0 +1,117 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XRT_ROOT_H_
> +#define _XRT_ROOT_H_
> +
> +#include <linux/platform_device.h>
> +#include <linux/pci.h>
> +#include "subdev_id.h"
> +#include "events.h"
> +
> +typedef bool (*xrt_subdev_match_t)(enum xrt_subdev_id,
> + struct platform_device *, void *);
> +#define XRT_SUBDEV_MATCH_PREV ((xrt_subdev_match_t)-1)
> +#define XRT_SUBDEV_MATCH_NEXT ((xrt_subdev_match_t)-2)
> +
> +/*
> + * Root calls.
> + */
> +enum xrt_root_cmd {
> + /* Leaf actions. */
> + XRT_ROOT_GET_LEAF = 0,
> + XRT_ROOT_PUT_LEAF,
> + XRT_ROOT_GET_LEAF_HOLDERS,
> +
> + /* Group actions. */
> + XRT_ROOT_CREATE_GROUP,
> + XRT_ROOT_REMOVE_GROUP,
> + XRT_ROOT_LOOKUP_GROUP,
> + XRT_ROOT_WAIT_GROUP_BRINGUP,
> +
> + /* Event actions. */
> + XRT_ROOT_EVENT_SYNC,
> + XRT_ROOT_EVENT_ASYNC,
> +
> + /* Device info. */
> + XRT_ROOT_GET_RESOURCE,
> + XRT_ROOT_GET_ID,
> +
> + /* Misc. */
> + XRT_ROOT_HOT_RESET,
> + XRT_ROOT_HWMON,
> +};
> +
> +struct xrt_root_get_leaf {
> + struct platform_device *xpigl_caller_pdev;
> + xrt_subdev_match_t xpigl_match_cb;
> + void *xpigl_match_arg;
> + struct platform_device *xpigl_tgt_pdev;
> +};
> +
> +struct xrt_root_put_leaf {
> + struct platform_device *xpipl_caller_pdev;
> + struct platform_device *xpipl_tgt_pdev;
> +};
> +
> +struct xrt_root_lookup_group {
> + struct platform_device *xpilp_pdev; /* caller's pdev */
> + xrt_subdev_match_t xpilp_match_cb;
> + void *xpilp_match_arg;
> + int xpilp_grp_inst;
> +};
> +
> +struct xrt_root_get_holders {
> + struct platform_device *xpigh_pdev; /* caller's pdev */
> + char *xpigh_holder_buf;
> + size_t xpigh_holder_buf_len;
> +};
> +
> +struct xrt_root_get_res {
> + struct resource *xpigr_res;
> +};
> +
> +struct xrt_root_get_id {
> + unsigned short xpigi_vendor_id;
> + unsigned short xpigi_device_id;
> + unsigned short xpigi_sub_vendor_id;
> + unsigned short xpigi_sub_device_id;
> +};
> +
> +struct xrt_root_hwmon {
> + bool xpih_register;
> + const char *xpih_name;
> + void *xpih_drvdata;
> + const struct attribute_group **xpih_groups;
> + struct device *xpih_hwmon_dev;
> +};
> +
> +/*
> + * Callback for leaf to make a root request. Arguments are: parent device, parent cookie, req,
> + * and arg.
> + */
> +typedef int (*xrt_subdev_root_cb_t)(struct device *, void *, u32, void *);
> +int xrt_subdev_root_request(struct platform_device *self, u32 cmd, void *arg);
> +
> +/*
> + * Defines physical function (MPF / UPF) specific operations
> + * needed in common root driver.
> + */
> +struct xroot_physical_function_callback {
> + void (*xpc_hot_reset)(struct pci_dev *pdev);
> +};
> +
> +int xroot_probe(struct pci_dev *pdev, struct xroot_physical_function_callback *cb, void **root);
> +void xroot_remove(void *root);
> +bool xroot_wait_for_bringup(void *root);
> +int xroot_add_vsec_node(void *root, char *dtb);
> +int xroot_create_group(void *xr, char *dtb);
> +int xroot_add_simple_node(void *root, char *dtb, const char *endpoint);
> +void xroot_broadcast(void *root, enum xrt_events evt);
> +
> +#endif /* _XRT_ROOT_H_ */
> diff --git a/drivers/fpga/xrt/lib/subdev_pool.h b/drivers/fpga/xrt/lib/subdev_pool.h
> new file mode 100644
> index 000000000000..09d148e4e7ea
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/subdev_pool.h
> @@ -0,0 +1,53 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XRT_SUBDEV_POOL_H_
> +#define _XRT_SUBDEV_POOL_H_
> +
> +#include <linux/device.h>
> +#include <linux/mutex.h>
> +#include "xroot.h"
> +
> +/*
> + * The struct xrt_subdev_pool manages a list of xrt_subdevs for root and group drivers.
> + */
> +struct xrt_subdev_pool {
> + struct list_head xsp_dev_list;
> + struct device *xsp_owner;
> + struct mutex xsp_lock; /* pool lock */
> + bool xsp_closing;
> +};
> +
> +/*
> + * Subdev pool helper functions for root and group drivers only.
> + */
> +void xrt_subdev_pool_init(struct device *dev,
> + struct xrt_subdev_pool *spool);
> +void xrt_subdev_pool_fini(struct xrt_subdev_pool *spool);
> +int xrt_subdev_pool_get(struct xrt_subdev_pool *spool,
> + xrt_subdev_match_t match,
> + void *arg, struct device *holder_dev,
> + struct platform_device **pdevp);
> +int xrt_subdev_pool_put(struct xrt_subdev_pool *spool,
> + struct platform_device *pdev,
> + struct device *holder_dev);
> +int xrt_subdev_pool_add(struct xrt_subdev_pool *spool,
> + enum xrt_subdev_id id, xrt_subdev_root_cb_t pcb,
> + void *pcb_arg, char *dtb);
> +int xrt_subdev_pool_del(struct xrt_subdev_pool *spool,
> + enum xrt_subdev_id id, int instance);
> +ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
> + struct platform_device *pdev,
> + char *buf, size_t len);
> +
> +void xrt_subdev_pool_trigger_event(struct xrt_subdev_pool *spool,
> + enum xrt_events evt);
> +void xrt_subdev_pool_handle_event(struct xrt_subdev_pool *spool,
> + struct xrt_event *evt);
> +
> +#endif /* _XRT_SUBDEV_POOL_H_ */
> diff --git a/drivers/fpga/xrt/lib/xroot.c b/drivers/fpga/xrt/lib/xroot.c
> new file mode 100644
> index 000000000000..03407272650f
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xroot.c
> @@ -0,0 +1,589 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA Root Functions
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#include <linux/module.h>
> +#include <linux/pci.h>
> +#include <linux/hwmon.h>
> +#include "xroot.h"
> +#include "subdev_pool.h"
> +#include "group.h"
> +#include "metadata.h"
> +
> +#define XROOT_PDEV(xr) ((xr)->pdev)
> +#define XROOT_DEV(xr) (&(XROOT_PDEV(xr)->dev))
> +#define xroot_err(xr, fmt, args...) \
> + dev_err(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
> +#define xroot_warn(xr, fmt, args...) \
> + dev_warn(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
> +#define xroot_info(xr, fmt, args...) \
> + dev_info(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
> +#define xroot_dbg(xr, fmt, args...) \
> + dev_dbg(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
> +
> +#define XRT_VSEC_ID 0x20
> +
'root' is an abstraction, 'pci' is an implementation.
Consider splitting.
I think this will be part of the pseudo bus, so figure out how to do root there.
> +#define XROOT_GROUP_FIRST (-1)
> +#define XROOT_GROUP_LAST (-2)
> +
> +static int xroot_root_cb(struct device *, void *, u32, void *);
> +
> +struct xroot_evt {
> + struct list_head list;
> + struct xrt_event evt;
> + struct completion comp;
> + bool async;
> +};
> +
> +struct xroot_events {
> + struct mutex evt_lock; /* event lock */
> + struct list_head evt_list;
> + struct work_struct evt_work;
> +};
> +
> +struct xroot_groups {
> + struct xrt_subdev_pool pool;
> + struct work_struct bringup_work;
Add a comment that these two elements are counters, or append '_cnt' or similar to their names.
> + atomic_t bringup_pending;
> + atomic_t bringup_failed;
> + struct completion bringup_comp;
> +};
> +
> +struct xroot {
> + struct pci_dev *pdev;
> + struct xroot_events events;
> + struct xroot_groups groups;
> + struct xroot_physical_function_callback pf_cb;
ok
> +};
> +
> +struct xroot_group_match_arg {
> + enum xrt_subdev_id id;
> + int instance;
> +};
> +
> +static bool xroot_group_match(enum xrt_subdev_id id, struct platform_device *pdev, void *arg)
> +{
> + struct xroot_group_match_arg *a = (struct xroot_group_match_arg *)arg;
> +
> + /* pdev->id is the instance of the subdev. */
ok
> + return id == a->id && pdev->id == a->instance;
> +}
> +
> +static int xroot_get_group(struct xroot *xr, int instance, struct platform_device **grpp)
> +{
> + int rc = 0;
> + struct xrt_subdev_pool *grps = &xr->groups.pool;
> + struct device *dev = DEV(xr->pdev);
> + struct xroot_group_match_arg arg = { XRT_SUBDEV_GRP, instance };
> +
> + if (instance == XROOT_GROUP_LAST) {
> + rc = xrt_subdev_pool_get(grps, XRT_SUBDEV_MATCH_NEXT,
> + *grpp, dev, grpp);
> + } else if (instance == XROOT_GROUP_FIRST) {
> + rc = xrt_subdev_pool_get(grps, XRT_SUBDEV_MATCH_PREV,
> + *grpp, dev, grpp);
> + } else {
> + rc = xrt_subdev_pool_get(grps, xroot_group_match,
> + &arg, dev, grpp);
> + }
> +
> + if (rc && rc != -ENOENT)
> + xroot_err(xr, "failed to hold group %d: %d", instance, rc);
> + return rc;
> +}
> +
> +static void xroot_put_group(struct xroot *xr, struct platform_device *grp)
> +{
> + int inst = grp->id;
> + int rc = xrt_subdev_pool_put(&xr->groups.pool, grp, DEV(xr->pdev));
> +
> + if (rc)
> + xroot_err(xr, "failed to release group %d: %d", inst, rc);
> +}
> +
> +static int xroot_trigger_event(struct xroot *xr, struct xrt_event *e, bool async)
> +{
> + struct xroot_evt *enew = vzalloc(sizeof(*enew));
> +
> + if (!enew)
> + return -ENOMEM;
> +
> + enew->evt = *e;
> + enew->async = async;
> + init_completion(&enew->comp);
> +
> + mutex_lock(&xr->events.evt_lock);
> + list_add(&enew->list, &xr->events.evt_list);
> + mutex_unlock(&xr->events.evt_lock);
> +
> + schedule_work(&xr->events.evt_work);
> +
> + if (async)
> + return 0;
> +
> + wait_for_completion(&enew->comp);
> + vfree(enew);
> + return 0;
> +}
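xroot_trigger_event() queues the event, schedules the work item, and either returns immediately (async) or blocks on a completion until the work has processed the event. A pthread sketch of that handoff (one event, one worker; a simplified model, not the kernel workqueue API):

```c
#include <pthread.h>
#include <stdbool.h>

struct evt {
	int id;
	bool done;
	pthread_mutex_t lock;
	pthread_cond_t comp;	/* stands in for struct completion */
};

static int processed_id;	/* set by the "work item" */

/* Stand-in for the scheduled work: process the event, then complete it. */
static void *evt_worker(void *arg)
{
	struct evt *e = arg;

	pthread_mutex_lock(&e->lock);
	processed_id = e->id;	/* "deliver" the event */
	e->done = true;
	pthread_cond_signal(&e->comp);
	pthread_mutex_unlock(&e->lock);
	return NULL;
}

/* Mirrors xroot_trigger_event(): async returns at once, sync waits. */
static void trigger_event(struct evt *e, bool async)
{
	pthread_t t;

	pthread_mutex_init(&e->lock, NULL);
	pthread_cond_init(&e->comp, NULL);
	e->done = false;
	pthread_create(&t, NULL, evt_worker, e);	/* schedule_work() */
	if (async) {
		pthread_detach(t);
		return;
	}
	pthread_mutex_lock(&e->lock);			/* wait_for_completion() */
	while (!e->done)
		pthread_cond_wait(&e->comp, &e->lock);
	pthread_mutex_unlock(&e->lock);
	pthread_join(t, NULL);
}
```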
> +
> +static void
> +xroot_group_trigger_event(struct xroot *xr, int inst, enum xrt_events e)
> +{
> + int ret;
> + struct platform_device *pdev = NULL;
> + struct xrt_event evt = { 0 };
> +
> + WARN_ON(inst < 0);
> + /* Only triggers subdev specific events. */
> + if (e != XRT_EVENT_POST_CREATION && e != XRT_EVENT_PRE_REMOVAL) {
> + xroot_err(xr, "invalid event %d", e);
> + return;
> + }
> +
> + ret = xroot_get_group(xr, inst, &pdev);
> + if (ret)
> + return;
> +
> + /* Trigger the event for the children first. */
> + xleaf_call(pdev, XRT_GROUP_TRIGGER_EVENT, (void *)(uintptr_t)e);
ok
> +
> + /* Triggers event for itself. */
> + evt.xe_evt = e;
> + evt.xe_subdev.xevt_subdev_id = XRT_SUBDEV_GRP;
> + evt.xe_subdev.xevt_subdev_instance = inst;
> + xroot_trigger_event(xr, &evt, false);
> +
> + xroot_put_group(xr, pdev);
> +}
> +
> +int xroot_create_group(void *root, char *dtb)
> +{
> + struct xroot *xr = (struct xroot *)root;
> + int ret;
> +
> + atomic_inc(&xr->groups.bringup_pending);
> + ret = xrt_subdev_pool_add(&xr->groups.pool, XRT_SUBDEV_GRP, xroot_root_cb, xr, dtb);
> + if (ret >= 0) {
> + schedule_work(&xr->groups.bringup_work);
> + } else {
> + atomic_dec(&xr->groups.bringup_pending);
> + atomic_inc(&xr->groups.bringup_failed);
> + xroot_err(xr, "failed to create group: %d", ret);
> + }
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(xroot_create_group);
> +
> +static int xroot_destroy_single_group(struct xroot *xr, int instance)
> +{
ok as-is
> + struct platform_device *pdev = NULL;
> + int ret;
> +
> + WARN_ON(instance < 0);
> + ret = xroot_get_group(xr, instance, &pdev);
> + if (ret)
> + return ret;
> +
> + xroot_group_trigger_event(xr, instance, XRT_EVENT_PRE_REMOVAL);
> +
> + /* Now tear down all children in this group. */
> + ret = xleaf_call(pdev, XRT_GROUP_FINI_CHILDREN, NULL);
> + xroot_put_group(xr, pdev);
> + if (!ret)
> + ret = xrt_subdev_pool_del(&xr->groups.pool, XRT_SUBDEV_GRP, instance);
> +
> + return ret;
> +}
> +
> +static int xroot_destroy_group(struct xroot *xr, int instance)
> +{
> + struct platform_device *target = NULL;
> + struct platform_device *deps = NULL;
> + int ret;
> +
> + WARN_ON(instance < 0);
> + /*
> + * Make sure the target group exists and can't go away before
> + * we remove its dependents.
> + */
> + ret = xroot_get_group(xr, instance, &target);
> + if (ret)
> + return ret;
> +
> + /*
> + * Remove all groups that depend on the target one.
> + * Assuming subdevs in higher group IDs can depend on ones in
> + * lower ID groups, we remove them in reverse order.
> + */
> + while (xroot_get_group(xr, XROOT_GROUP_LAST, &deps) != -ENOENT) {
> + int inst = deps->id;
> +
> + xroot_put_group(xr, deps);
> + /* Reached the target group instance, stop here. */
ok
> + if (instance == inst)
> + break;
> + xroot_destroy_single_group(xr, inst);
> + deps = NULL;
> + }
> +
> + /* Now we can remove the target group. */
> + xroot_put_group(xr, target);
> + return xroot_destroy_single_group(xr, instance);
> +}
> +
> +static int xroot_lookup_group(struct xroot *xr,
> + struct xrt_root_lookup_group *arg)
> +{
> + int rc = -ENOENT;
> + struct platform_device *grp = NULL;
> +
> + while (rc < 0 && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
> + if (arg->xpilp_match_cb(XRT_SUBDEV_GRP, grp, arg->xpilp_match_arg))
> + rc = grp->id;
> + xroot_put_group(xr, grp);
> + }
> + return rc;
> +}
> +
> +static void xroot_event_work(struct work_struct *work)
> +{
> + struct xroot_evt *tmp;
> + struct xroot *xr = container_of(work, struct xroot, events.evt_work);
> +
> + mutex_lock(&xr->events.evt_lock);
> + while (!list_empty(&xr->events.evt_list)) {
> + tmp = list_first_entry(&xr->events.evt_list, struct xroot_evt, list);
> + list_del(&tmp->list);
> + mutex_unlock(&xr->events.evt_lock);
> +
> + xrt_subdev_pool_handle_event(&xr->groups.pool, &tmp->evt);
> +
> + if (tmp->async)
> + vfree(tmp);
> + else
> + complete(&tmp->comp);
> +
> + mutex_lock(&xr->events.evt_lock);
> + }
> + mutex_unlock(&xr->events.evt_lock);
> +}
> +
> +static void xroot_event_init(struct xroot *xr)
> +{
> + INIT_LIST_HEAD(&xr->events.evt_list);
> + mutex_init(&xr->events.evt_lock);
> + INIT_WORK(&xr->events.evt_work, xroot_event_work);
> +}
> +
> +static void xroot_event_fini(struct xroot *xr)
> +{
> + flush_scheduled_work();
> + WARN_ON(!list_empty(&xr->events.evt_list));
> +}
> +
> +static int xroot_get_leaf(struct xroot *xr, struct xrt_root_get_leaf *arg)
> +{
> + int rc = -ENOENT;
> + struct platform_device *grp = NULL;
> +
> + while (rc && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
> + rc = xleaf_call(grp, XRT_GROUP_GET_LEAF, arg);
> + xroot_put_group(xr, grp);
> + }
> + return rc;
> +}
> +
> +static int xroot_put_leaf(struct xroot *xr, struct xrt_root_put_leaf *arg)
> +{
> + int rc = -ENOENT;
> + struct platform_device *grp = NULL;
> +
> + while (rc && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
> + rc = xleaf_call(grp, XRT_GROUP_PUT_LEAF, arg);
> + xroot_put_group(xr, grp);
> + }
> + return rc;
> +}
> +
> +static int xroot_root_cb(struct device *dev, void *parg, enum xrt_root_cmd cmd, void *arg)
> +{
> + struct xroot *xr = (struct xroot *)parg;
> + int rc = 0;
> +
> + switch (cmd) {
> + /* Leaf actions. */
> + case XRT_ROOT_GET_LEAF: {
> + struct xrt_root_get_leaf *getleaf = (struct xrt_root_get_leaf *)arg;
> +
> + rc = xroot_get_leaf(xr, getleaf);
> + break;
> + }
> + case XRT_ROOT_PUT_LEAF: {
> + struct xrt_root_put_leaf *putleaf = (struct xrt_root_put_leaf *)arg;
> +
> + rc = xroot_put_leaf(xr, putleaf);
> + break;
> + }
> + case XRT_ROOT_GET_LEAF_HOLDERS: {
> + struct xrt_root_get_holders *holders = (struct xrt_root_get_holders *)arg;
> +
> + rc = xrt_subdev_pool_get_holders(&xr->groups.pool,
> + holders->xpigh_pdev,
> + holders->xpigh_holder_buf,
> + holders->xpigh_holder_buf_len);
> + break;
> + }
> +
> + /* Group actions. */
> + case XRT_ROOT_CREATE_GROUP:
> + rc = xroot_create_group(xr, (char *)arg);
> + break;
> + case XRT_ROOT_REMOVE_GROUP:
> + rc = xroot_destroy_group(xr, (int)(uintptr_t)arg);
> + break;
> + case XRT_ROOT_LOOKUP_GROUP: {
> + struct xrt_root_lookup_group *getgrp = (struct xrt_root_lookup_group *)arg;
> +
> + rc = xroot_lookup_group(xr, getgrp);
> + break;
> + }
> + case XRT_ROOT_WAIT_GROUP_BRINGUP:
> + rc = xroot_wait_for_bringup(xr) ? 0 : -EINVAL;
> + break;
> +
> + /* Event actions. */
> + case XRT_ROOT_EVENT_SYNC:
> + case XRT_ROOT_EVENT_ASYNC: {
> + bool async = (cmd == XRT_ROOT_EVENT_ASYNC);
> + struct xrt_event *evt = (struct xrt_event *)arg;
> +
> + rc = xroot_trigger_event(xr, evt, async);
> + break;
> + }
> +
> + /* Device info. */
> + case XRT_ROOT_GET_RESOURCE: {
> + struct xrt_root_get_res *res = (struct xrt_root_get_res *)arg;
> +
> + res->xpigr_res = xr->pdev->resource;
> + break;
> + }
> + case XRT_ROOT_GET_ID: {
> + struct xrt_root_get_id *id = (struct xrt_root_get_id *)arg;
> +
> + id->xpigi_vendor_id = xr->pdev->vendor;
> + id->xpigi_device_id = xr->pdev->device;
> + id->xpigi_sub_vendor_id = xr->pdev->subsystem_vendor;
> + id->xpigi_sub_device_id = xr->pdev->subsystem_device;
> + break;
> + }
> +
> + /* Misc generic PCIe driver functions. */
> + case XRT_ROOT_HOT_RESET: {
> + xr->pf_cb.xpc_hot_reset(xr->pdev);
> + break;
> + }
> + case XRT_ROOT_HWMON: {
> + struct xrt_root_hwmon *hwmon = (struct xrt_root_hwmon *)arg;
> +
> + if (hwmon->xpih_register) {
> + hwmon->xpih_hwmon_dev =
> + hwmon_device_register_with_info(DEV(xr->pdev),
> + hwmon->xpih_name,
> + hwmon->xpih_drvdata,
> + NULL,
> + hwmon->xpih_groups);
> + } else {
> + hwmon_device_unregister(hwmon->xpih_hwmon_dev);
> + }
> + break;
> + }
> +
> + default:
> + xroot_err(xr, "unknown IOCTL cmd %d", cmd);
> + rc = -EINVAL;
> + break;
> + }
> +
> + return rc;
> +}
> +
> +static void xroot_bringup_group_work(struct work_struct *work)
> +{
> + struct platform_device *pdev = NULL;
> + struct xroot *xr = container_of(work, struct xroot, groups.bringup_work);
> +
> + while (xroot_get_group(xr, XROOT_GROUP_FIRST, &pdev) != -ENOENT) {
> + int r, i;
> +
> + i = pdev->id;
> + r = xleaf_call(pdev, XRT_GROUP_INIT_CHILDREN, NULL);
> + xroot_put_group(xr, pdev);
> + if (r == -EEXIST)
> + continue; /* Already brought up, nothing to do. */
> + if (r)
> + atomic_inc(&xr->groups.bringup_failed);
> +
> + xroot_group_trigger_event(xr, i, XRT_EVENT_POST_CREATION);
> +
> + if (atomic_dec_and_test(&xr->groups.bringup_pending))
> + complete(&xr->groups.bringup_comp);
> + }
> +}
> +
> +static void xroot_groups_init(struct xroot *xr)
ok
> +{
> + xrt_subdev_pool_init(DEV(xr->pdev), &xr->groups.pool);
> + INIT_WORK(&xr->groups.bringup_work, xroot_bringup_group_work);
> + atomic_set(&xr->groups.bringup_pending, 0);
> + atomic_set(&xr->groups.bringup_failed, 0);
> + init_completion(&xr->groups.bringup_comp);
> +}
> +
> +static void xroot_groups_fini(struct xroot *xr)
> +{
> + flush_scheduled_work();
> + xrt_subdev_pool_fini(&xr->groups.pool);
> +}
> +
> +int xroot_add_vsec_node(void *root, char *dtb)
> +{
> + struct xroot *xr = (struct xroot *)root;
> + struct device *dev = DEV(xr->pdev);
> + struct xrt_md_endpoint ep = { 0 };
> + int cap = 0, ret = 0;
> + u32 off_low, off_high, vsec_bar, header;
> + u64 vsec_off;
> +
> + while ((cap = pci_find_next_ext_capability(xr->pdev, cap, PCI_EXT_CAP_ID_VNDR))) {
> + pci_read_config_dword(xr->pdev, cap + PCI_VNDR_HEADER, &header);
> + if (PCI_VNDR_HEADER_ID(header) == XRT_VSEC_ID)
> + break;
> + }
> + if (!cap) {
> + xroot_info(xr, "No Vendor Specific Capability.");
> + return -ENOENT;
> + }
> +
> + if (pci_read_config_dword(xr->pdev, cap + 8, &off_low) ||
> + pci_read_config_dword(xr->pdev, cap + 12, &off_high)) {
> + xroot_err(xr, "pci_read vendor specific failed.");
> + return -EINVAL;
> + }
> +
> + ep.ep_name = XRT_MD_NODE_VSEC;
> + ret = xrt_md_add_endpoint(dev, dtb, &ep);
> + if (ret) {
> + xroot_err(xr, "add vsec metadata failed, ret %d", ret);
> + goto failed;
> + }
> +
> + vsec_bar = cpu_to_be32(off_low & 0xf);
> + ret = xrt_md_set_prop(dev, dtb, XRT_MD_NODE_VSEC, NULL,
> + XRT_MD_PROP_BAR_IDX, &vsec_bar, sizeof(vsec_bar));
> + if (ret) {
> + xroot_err(xr, "add vsec bar idx failed, ret %d", ret);
> + goto failed;
> + }
> +
> + vsec_off = cpu_to_be64(((u64)off_high << 32) | (off_low & ~0xfU));
> + ret = xrt_md_set_prop(dev, dtb, XRT_MD_NODE_VSEC, NULL,
> + XRT_MD_PROP_OFFSET, &vsec_off, sizeof(vsec_off));
> + if (ret) {
> + xroot_err(xr, "add vsec offset failed, ret %d", ret);
> + goto failed;
> + }
> +
> +failed:
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(xroot_add_vsec_node);
> +
> +int xroot_add_simple_node(void *root, char *dtb, const char *endpoint)
> +{
> + struct xroot *xr = (struct xroot *)root;
> + struct device *dev = DEV(xr->pdev);
> + struct xrt_md_endpoint ep = { 0 };
> + int ret = 0;
> +
> + ep.ep_name = endpoint;
> + ret = xrt_md_add_endpoint(dev, dtb, &ep);
> + if (ret)
> + xroot_err(xr, "add %s failed, ret %d", endpoint, ret);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(xroot_add_simple_node);
> +
> +bool xroot_wait_for_bringup(void *root)
> +{
> + struct xroot *xr = (struct xroot *)root;
> +
> + wait_for_completion(&xr->groups.bringup_comp);
> + return atomic_read(&xr->groups.bringup_failed) == 0;
ok
Tom
> +}
> +EXPORT_SYMBOL_GPL(xroot_wait_for_bringup);
> +
> +int xroot_probe(struct pci_dev *pdev, struct xroot_physical_function_callback *cb, void **root)
> +{
> + struct device *dev = DEV(pdev);
> + struct xroot *xr = NULL;
> +
> + dev_info(dev, "%s: probing...", __func__);
> +
> + xr = devm_kzalloc(dev, sizeof(*xr), GFP_KERNEL);
> + if (!xr)
> + return -ENOMEM;
> +
> + xr->pdev = pdev;
> + xr->pf_cb = *cb;
> + xroot_groups_init(xr);
> + xroot_event_init(xr);
> +
> + *root = xr;
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(xroot_probe);
> +
> +void xroot_remove(void *root)
> +{
> + struct xroot *xr = (struct xroot *)root;
> + struct platform_device *grp = NULL;
> +
> + xroot_info(xr, "leaving...");
> +
> + if (xroot_get_group(xr, XROOT_GROUP_FIRST, &grp) == 0) {
> + int instance = grp->id;
> +
> + xroot_put_group(xr, grp);
> + xroot_destroy_group(xr, instance);
> + }
> +
> + xroot_event_fini(xr);
> + xroot_groups_fini(xr);
> +}
> +EXPORT_SYMBOL_GPL(xroot_remove);
> +
> +void xroot_broadcast(void *root, enum xrt_events evt)
> +{
> + struct xroot *xr = (struct xroot *)root;
> + struct xrt_event e = { 0 };
> +
> + /* The root PF driver only broadcasts the two events below. */
> + if (evt != XRT_EVENT_POST_CREATION && evt != XRT_EVENT_PRE_REMOVAL) {
> + xroot_info(xr, "invalid event %d", evt);
> + return;
> + }
> +
> + e.xe_evt = evt;
> + e.xe_subdev.xevt_subdev_id = XRT_ROOT;
> + e.xe_subdev.xevt_subdev_instance = 0;
> + xroot_trigger_event(xr, &e, false);
> +}
> +EXPORT_SYMBOL_GPL(xroot_broadcast);
There are several debug-only items here; consider adding a CONFIG_XRT_DEBUGGING option to guard them.
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> Infrastructure code providing APIs for managing leaf driver instance
> groups, facilitating inter-leaf driver calls and root calls.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/lib/subdev.c | 865 ++++++++++++++++++++++++++++++++++
> 1 file changed, 865 insertions(+)
> create mode 100644 drivers/fpga/xrt/lib/subdev.c
>
> diff --git a/drivers/fpga/xrt/lib/subdev.c b/drivers/fpga/xrt/lib/subdev.c
> new file mode 100644
> index 000000000000..6428b183fee3
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/subdev.c
> @@ -0,0 +1,865 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#include <linux/platform_device.h>
> +#include <linux/pci.h>
> +#include <linux/vmalloc.h>
> +#include "xleaf.h"
> +#include "subdev_pool.h"
> +#include "lib-drv.h"
> +#include "metadata.h"
> +
> +#define IS_ROOT_DEV(dev) ((dev)->bus == &pci_bus_type)
For readability, add a newline here.
> +static inline struct device *find_root(struct platform_device *pdev)
> +{
> + struct device *d = DEV(pdev);
> +
> + while (!IS_ROOT_DEV(d))
> + d = d->parent;
> + return d;
> +}
> +
> +/*
> + * It represents a holder of a subdev. One holder can repeatedly hold a subdev
> + * as long as there is an unhold corresponding to each hold.
> + */
> +struct xrt_subdev_holder {
> + struct list_head xsh_holder_list;
> + struct device *xsh_holder;
> + int xsh_count;
> + struct kref xsh_kref;
> +};
> +
> +/*
> + * It represents a specific instance of platform driver for a subdev, which
> + * provides services to its clients (another subdev driver or root driver).
> + */
> +struct xrt_subdev {
> + struct list_head xs_dev_list;
> + struct list_head xs_holder_list;
> + enum xrt_subdev_id xs_id; /* type of subdev */
> + struct platform_device *xs_pdev; /* a particular subdev inst */
> + struct completion xs_holder_comp;
> +};
> +
> +static struct xrt_subdev *xrt_subdev_alloc(void)
> +{
> + struct xrt_subdev *sdev = kzalloc(sizeof(*sdev), GFP_KERNEL);
ok
> +
> + if (!sdev)
> + return NULL;
> +
> + INIT_LIST_HEAD(&sdev->xs_dev_list);
> + INIT_LIST_HEAD(&sdev->xs_holder_list);
> + init_completion(&sdev->xs_holder_comp);
> + return sdev;
> +}
> +
> +static void xrt_subdev_free(struct xrt_subdev *sdev)
> +{
> + kfree(sdev);
Abstraction for a single function is not needed, use kfree directly.
> +}
> +
> +int xrt_subdev_root_request(struct platform_device *self, u32 cmd, void *arg)
> +{
> + struct device *dev = DEV(self);
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(self);
> +
> + WARN_ON(!pdata->xsp_root_cb);
ok
> + return (*pdata->xsp_root_cb)(dev->parent, pdata->xsp_root_cb_arg, cmd, arg);
> +}
> +
> +/*
> + * Subdev common sysfs nodes.
> + */
> +static ssize_t holders_show(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> + ssize_t len;
> + struct platform_device *pdev = to_platform_device(dev);
> + struct xrt_root_get_holders holders = { pdev, buf, 1024 };
Since 1024 is a tunable, #define it somewhere so it can be tweaked later.
> +
> + len = xrt_subdev_root_request(pdev, XRT_ROOT_GET_LEAF_HOLDERS, &holders);
> + if (len >= holders.xpigh_holder_buf_len)
> + return len;
> + buf[len] = '\n';
> + return len + 1;
> +}
> +static DEVICE_ATTR_RO(holders);
> +
> +static struct attribute *xrt_subdev_attrs[] = {
> + &dev_attr_holders.attr,
> + NULL,
> +};
> +
> +static ssize_t metadata_output(struct file *filp, struct kobject *kobj,
> + struct bin_attribute *attr, char *buf, loff_t off, size_t count)
> +{
> + struct device *dev = kobj_to_dev(kobj);
> + struct platform_device *pdev = to_platform_device(dev);
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
> + unsigned char *blob;
> + unsigned long size;
> + ssize_t ret = 0;
> +
> + blob = pdata->xsp_dtb;
> + size = xrt_md_size(dev, blob);
> + if (size == XRT_MD_INVALID_LENGTH) {
> + ret = -EINVAL;
> + goto failed;
> + }
> +
> + if (off >= size)
> + goto failed;
If this and the next check are used for debugging, add a dev_dbg() here to help out with the debugging.
> +
> + if (off + count > size)
> + count = size - off;
> + memcpy(buf, blob + off, count);
> +
> + ret = count;
> +failed:
> + return ret;
> +}
> +
> +static struct bin_attribute meta_data_attr = {
> + .attr = {
> + .name = "metadata",
> + .mode = 0400
> + },
Permissions alone will not be enough; anyone can be root.
A developer-only interface should be hidden behind a CONFIG_ option.
> + .read = metadata_output,
> + .size = 0
> +};
> +
> +static struct bin_attribute *xrt_subdev_bin_attrs[] = {
> + &meta_data_attr,
> + NULL,
> +};
> +
> +static const struct attribute_group xrt_subdev_attrgroup = {
> + .attrs = xrt_subdev_attrs,
> + .bin_attrs = xrt_subdev_bin_attrs,
> +};
> +
> +/*
> + * Given the device metadata, parse it to get IO ranges and construct
> + * resource array.
> + */
> +static int
> +xrt_subdev_getres(struct device *parent, enum xrt_subdev_id id,
> + char *dtb, struct resource **res, int *res_num)
> +{
> + struct xrt_subdev_platdata *pdata;
> + struct resource *pci_res = NULL;
> + const u64 *bar_range;
> + const u32 *bar_idx;
> + char *ep_name = NULL, *regmap = NULL;
> + uint bar;
> + int count1 = 0, count2 = 0, ret;
> +
> + if (!dtb)
> + return -EINVAL;
> +
> + pdata = DEV_PDATA(to_platform_device(parent));
> +
> + /* go through metadata and count endpoints in it */
> + for (xrt_md_get_next_endpoint(parent, dtb, NULL, NULL, &ep_name, &regmap); ep_name;
Embedding function calls in the for-loop header is difficult to debug; consider changing this loop into something easier to read.
Maybe
xrt_md_get_next_endpoint(parent, dtb, NULL, NULL, &ep_name, &regmap);
while (ep_name) {
...
xrt_md_get_next_endpoint(parent, dtb, ep_name, regmap, &ep_name, &regmap);
}
similar below
> + xrt_md_get_next_endpoint(parent, dtb, ep_name, regmap, &ep_name, ®map)) {
> + ret = xrt_md_get_prop(parent, dtb, ep_name, regmap,
> + XRT_MD_PROP_IO_OFFSET, (const void **)&bar_range, NULL);
> + if (!ret)
> + count1++;
> + }
> + if (!count1)
> + return 0;
> +
> + /* allocate resource array for all endpoints found in metadata */
> + *res = vzalloc(sizeof(**res) * count1);
if this is small, convert to kzalloc
> +
> + /* go through all endpoints again and get IO range for each endpoint */
> + for (xrt_md_get_next_endpoint(parent, dtb, NULL, NULL, &ep_name, &regmap); ep_name;
> + xrt_md_get_next_endpoint(parent, dtb, ep_name, regmap, &ep_name, &regmap)) {
> + ret = xrt_md_get_prop(parent, dtb, ep_name, regmap,
> + XRT_MD_PROP_IO_OFFSET, (const void **)&bar_range, NULL);
> + if (ret)
> + continue;
> + xrt_md_get_prop(parent, dtb, ep_name, regmap,
> + XRT_MD_PROP_BAR_IDX, (const void **)&bar_idx, NULL);
> + bar = bar_idx ? be32_to_cpu(*bar_idx) : 0;
> + xleaf_get_barres(to_platform_device(parent), &pci_res, bar);
> + (*res)[count2].start = pci_res->start +
> + be64_to_cpu(bar_range[0]);
> + (*res)[count2].end = pci_res->start +
> + be64_to_cpu(bar_range[0]) +
> + be64_to_cpu(bar_range[1]) - 1;
> + (*res)[count2].flags = IORESOURCE_MEM;
> + /* check if there is a conflicting resource */
> + ret = request_resource(pci_res, *res + count2);
> + if (ret) {
> + dev_err(parent, "Conflict resource %pR\n", *res + count2);
> + vfree(*res);
> + *res_num = 0;
> + *res = NULL;
> + return ret;
> + }
> + release_resource(*res + count2);
> +
> + (*res)[count2].parent = pci_res;
> +
> + xrt_md_find_endpoint(parent, pdata->xsp_dtb, ep_name,
> + regmap, &(*res)[count2].name);
> +
> + count2++;
> + }
> +
> + WARN_ON(count1 != count2);
> + *res_num = count2;
> +
> + return 0;
> +}
> +
> +static inline enum xrt_subdev_file_mode
> +xleaf_devnode_mode(struct xrt_subdev_drvdata *drvdata)
> +{
> + return drvdata->xsd_file_ops.xsf_mode;
> +}
> +
> +static bool xrt_subdev_cdev_auto_creation(struct platform_device *pdev)
> +{
> + struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(pdev);
> + enum xrt_subdev_file_mode mode;
> +
> + if (!drvdata)
> + return false;
> +
> + if (!xleaf_devnode_enabled(drvdata))
> + return false;
> +
> + mode = xleaf_devnode_mode(drvdata);
> + return (mode == XRT_SUBDEV_FILE_DEFAULT || mode == XRT_SUBDEV_FILE_MULTI_INST);
Should this check happen before xleaf_devnode_enabled()? Also, drvdata is dereferenced by xleaf_devnode_mode() before the NULL check.
> +}
> +
> +static struct xrt_subdev *
> +xrt_subdev_create(struct device *parent, enum xrt_subdev_id id,
> + xrt_subdev_root_cb_t pcb, void *pcb_arg, char *dtb)
> +{
> + struct xrt_subdev_platdata *pdata = NULL;
> + struct platform_device *pdev = NULL;
> + int inst = PLATFORM_DEVID_NONE;
> + struct xrt_subdev *sdev = NULL;
> + struct resource *res = NULL;
> + unsigned long dtb_len = 0;
> + int res_num = 0;
> + size_t pdata_sz;
> + int ret;
> +
> + sdev = xrt_subdev_alloc();
> + if (!sdev) {
> + dev_err(parent, "failed to alloc subdev for ID %d", id);
> + goto fail;
> + }
> + sdev->xs_id = id;
> +
> + if (!dtb) {
> + ret = xrt_md_create(parent, &dtb);
> + if (ret) {
> + dev_err(parent, "can't create empty dtb: %d", ret);
> + goto fail;
> + }
> + }
> + xrt_md_pack(parent, dtb);
> + dtb_len = xrt_md_size(parent, dtb);
> + if (dtb_len == XRT_MD_INVALID_LENGTH) {
> + dev_err(parent, "invalid metadata len %ld", dtb_len);
> + goto fail;
> + }
> + pdata_sz = sizeof(struct xrt_subdev_platdata) + dtb_len;
ok
> +
> + /* Prepare platform data passed to subdev. */
> + pdata = vzalloc(pdata_sz);
> + if (!pdata)
> + goto fail;
> +
> + pdata->xsp_root_cb = pcb;
> + pdata->xsp_root_cb_arg = pcb_arg;
> + memcpy(pdata->xsp_dtb, dtb, dtb_len);
> + if (id == XRT_SUBDEV_GRP) {
> + /* Group can only be created by root driver. */
> + pdata->xsp_root_name = dev_name(parent);
> + } else {
> + struct platform_device *grp = to_platform_device(parent);
> + /* Leaf can only be created by group driver. */
> + WARN_ON(strncmp(xrt_drv_name(XRT_SUBDEV_GRP),
> + platform_get_device_id(grp)->name,
> + strlen(xrt_drv_name(XRT_SUBDEV_GRP)) + 1));
> + pdata->xsp_root_name = DEV_PDATA(grp)->xsp_root_name;
> + }
> +
> + /* Obtain dev instance number. */
> + inst = xrt_drv_get_instance(id);
> + if (inst < 0) {
> + dev_err(parent, "failed to obtain instance: %d", inst);
> + goto fail;
> + }
> +
> + /* Create subdev. */
> + if (id != XRT_SUBDEV_GRP) {
> + int rc = xrt_subdev_getres(parent, id, dtb, &res, &res_num);
> +
> + if (rc) {
> + dev_err(parent, "failed to get resource for %s.%d: %d",
> + xrt_drv_name(id), inst, rc);
> + goto fail;
> + }
> + }
> + pdev = platform_device_register_resndata(parent, xrt_drv_name(id),
> + inst, res, res_num, pdata, pdata_sz);
ok
> + vfree(res);
> + if (IS_ERR(pdev)) {
> + dev_err(parent, "failed to create subdev for %s inst %d: %ld",
> + xrt_drv_name(id), inst, PTR_ERR(pdev));
> + goto fail;
> + }
> + sdev->xs_pdev = pdev;
> +
> + if (device_attach(DEV(pdev)) != 1) {
> + xrt_err(pdev, "failed to attach");
> + goto fail;
> + }
> +
> + if (sysfs_create_group(&DEV(pdev)->kobj, &xrt_subdev_attrgroup))
> + xrt_err(pdev, "failed to create sysfs group");
> +
> + /*
> + * Create sysfs sym link under root for leaves
> + * under random groups for easy access to them.
> + */
> + if (id != XRT_SUBDEV_GRP) {
> + if (sysfs_create_link(&find_root(pdev)->kobj,
> + &DEV(pdev)->kobj, dev_name(DEV(pdev)))) {
> + xrt_err(pdev, "failed to create sysfs link");
> + }
> + }
> +
> + /* All done, ready to handle requests through the cdev. */
> + if (xrt_subdev_cdev_auto_creation(pdev))
> + xleaf_devnode_create(pdev, DEV_DRVDATA(pdev)->xsd_file_ops.xsf_dev_name, NULL);
> +
> + vfree(pdata);
> + return sdev;
> +
> +fail:
Take another look at splitting this error handling.
Jumping to specific labels is more common.
> + vfree(pdata);
> + if (sdev && !IS_ERR_OR_NULL(sdev->xs_pdev))
> + platform_device_unregister(sdev->xs_pdev);
> + if (inst >= 0)
> + xrt_drv_put_instance(id, inst);
> + xrt_subdev_free(sdev);
> + return NULL;
> +}
> +
> +static void xrt_subdev_destroy(struct xrt_subdev *sdev)
> +{
> + struct platform_device *pdev = sdev->xs_pdev;
> + struct device *dev = DEV(pdev);
> + int inst = pdev->id;
> + int ret;
> +
> + /* Take down the device node */
> + if (xrt_subdev_cdev_auto_creation(pdev)) {
> + ret = xleaf_devnode_destroy(pdev);
> + WARN_ON(ret);
> + }
> + if (sdev->xs_id != XRT_SUBDEV_GRP)
> + sysfs_remove_link(&find_root(pdev)->kobj, dev_name(dev));
> + sysfs_remove_group(&dev->kobj, &xrt_subdev_attrgroup);
> + platform_device_unregister(pdev);
> + xrt_drv_put_instance(sdev->xs_id, inst);
> + xrt_subdev_free(sdev);
> +}
> +
> +struct platform_device *
> +xleaf_get_leaf(struct platform_device *pdev, xrt_subdev_match_t match_cb, void *match_arg)
> +{
> + int rc;
> + struct xrt_root_get_leaf get_leaf = {
> + pdev, match_cb, match_arg, };
> +
> + rc = xrt_subdev_root_request(pdev, XRT_ROOT_GET_LEAF, &get_leaf);
> + if (rc)
> + return NULL;
> + return get_leaf.xpigl_tgt_pdev;
> +}
> +EXPORT_SYMBOL_GPL(xleaf_get_leaf);
> +
> +bool xleaf_has_endpoint(struct platform_device *pdev, const char *endpoint_name)
> +{
> + struct resource *res;
> + int i = 0;
ok
> +
> + do {
> + res = platform_get_resource(pdev, IORESOURCE_MEM, i);
> + if (res && !strncmp(res->name, endpoint_name, strlen(res->name) + 1))
> + return true;
> + ++i;
ok
> + } while (res);
> +
> + return false;
> +}
> +EXPORT_SYMBOL_GPL(xleaf_has_endpoint);
> +
> +int xleaf_put_leaf(struct platform_device *pdev, struct platform_device *leaf)
> +{
> + struct xrt_root_put_leaf put_leaf = { pdev, leaf };
> +
> + return xrt_subdev_root_request(pdev, XRT_ROOT_PUT_LEAF, &put_leaf);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_put_leaf);
> +
> +int xleaf_create_group(struct platform_device *pdev, char *dtb)
> +{
> + return xrt_subdev_root_request(pdev, XRT_ROOT_CREATE_GROUP, dtb);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_create_group);
> +
> +int xleaf_destroy_group(struct platform_device *pdev, int instance)
> +{
> + return xrt_subdev_root_request(pdev, XRT_ROOT_REMOVE_GROUP, (void *)(uintptr_t)instance);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_destroy_group);
> +
> +int xleaf_wait_for_group_bringup(struct platform_device *pdev)
> +{
> + return xrt_subdev_root_request(pdev, XRT_ROOT_WAIT_GROUP_BRINGUP, NULL);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_wait_for_group_bringup);
> +
> +static ssize_t
> +xrt_subdev_get_holders(struct xrt_subdev *sdev, char *buf, size_t len)
> +{
> + const struct list_head *ptr;
> + struct xrt_subdev_holder *h;
> + ssize_t n = 0;
> +
> + list_for_each(ptr, &sdev->xs_holder_list) {
> + h = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
> + n += snprintf(buf + n, len - n, "%s:%d ",
> + dev_name(h->xsh_holder), kref_read(&h->xsh_kref));
Add a comment noting that truncation is fine here.
> + if (n >= (len - 1))
> + break;
> + }
> + return n;
> +}
> +
> +void xrt_subdev_pool_init(struct device *dev, struct xrt_subdev_pool *spool)
> +{
> + INIT_LIST_HEAD(&spool->xsp_dev_list);
> + spool->xsp_owner = dev;
> + mutex_init(&spool->xsp_lock);
> + spool->xsp_closing = false;
> +}
> +
> +static void xrt_subdev_free_holder(struct xrt_subdev_holder *holder)
> +{
> + list_del(&holder->xsh_holder_list);
> + vfree(holder);
> +}
> +
> +static void xrt_subdev_pool_wait_for_holders(struct xrt_subdev_pool *spool, struct xrt_subdev *sdev)
> +{
> + const struct list_head *ptr, *next;
> + char holders[128];
> + struct xrt_subdev_holder *holder;
> + struct mutex *lk = &spool->xsp_lock;
> +
> + while (!list_empty(&sdev->xs_holder_list)) {
> + int rc;
> +
> + /* It's most likely a bug if we ever enter this loop. */
> + xrt_subdev_get_holders(sdev, holders, sizeof(holders));
Debug-only items like this should only run when debugging is enabled.
> + xrt_err(sdev->xs_pdev, "awaits holders: %s", holders);
> + mutex_unlock(lk);
> + rc = wait_for_completion_killable(&sdev->xs_holder_comp);
> + mutex_lock(lk);
> + if (rc == -ERESTARTSYS) {
> + xrt_err(sdev->xs_pdev, "give up on waiting for holders, clean up now");
> + list_for_each_safe(ptr, next, &sdev->xs_holder_list) {
> + holder = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
> + xrt_subdev_free_holder(holder);
> + }
> + }
> + }
> +}
> +
> +void xrt_subdev_pool_fini(struct xrt_subdev_pool *spool)
> +{
> + struct list_head *dl = &spool->xsp_dev_list;
> + struct mutex *lk = &spool->xsp_lock;
> +
> + mutex_lock(lk);
> + if (spool->xsp_closing) {
> + mutex_unlock(lk);
> + return;
> + }
> + spool->xsp_closing = true;
> + mutex_unlock(lk);
ok
> +
> + /* Remove subdev in the reverse order of added. */
> + while (!list_empty(dl)) {
> + struct xrt_subdev *sdev = list_first_entry(dl, struct xrt_subdev, xs_dev_list);
> +
> + xrt_subdev_pool_wait_for_holders(spool, sdev);
> + list_del(&sdev->xs_dev_list);
> + xrt_subdev_destroy(sdev);
> + }
> +}
> +
> +static struct xrt_subdev_holder *xrt_subdev_find_holder(struct xrt_subdev *sdev,
> + struct device *holder_dev)
> +{
> + struct list_head *hl = &sdev->xs_holder_list;
> + struct xrt_subdev_holder *holder;
> + const struct list_head *ptr;
> +
> + list_for_each(ptr, hl) {
> + holder = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
> + if (holder->xsh_holder == holder_dev)
> + return holder;
> + }
> + return NULL;
> +}
> +
> +static int xrt_subdev_hold(struct xrt_subdev *sdev, struct device *holder_dev)
> +{
> + struct xrt_subdev_holder *holder = xrt_subdev_find_holder(sdev, holder_dev);
> + struct list_head *hl = &sdev->xs_holder_list;
> +
> + if (!holder) {
> + holder = vzalloc(sizeof(*holder));
> + if (!holder)
> + return -ENOMEM;
> + holder->xsh_holder = holder_dev;
> + kref_init(&holder->xsh_kref);
> + list_add_tail(&holder->xsh_holder_list, hl);
> + } else {
> + kref_get(&holder->xsh_kref);
> + }
> +
> + return 0;
> +}
> +
> +static void xrt_subdev_free_holder_kref(struct kref *kref)
> +{
> + struct xrt_subdev_holder *holder = container_of(kref, struct xrt_subdev_holder, xsh_kref);
> +
> + xrt_subdev_free_holder(holder);
> +}
> +
> +static int
> +xrt_subdev_release(struct xrt_subdev *sdev, struct device *holder_dev)
> +{
> + struct xrt_subdev_holder *holder = xrt_subdev_find_holder(sdev, holder_dev);
> + struct list_head *hl = &sdev->xs_holder_list;
> +
> + if (!holder) {
> + dev_err(holder_dev, "can't release, %s did not hold %s",
> + dev_name(holder_dev), dev_name(DEV(sdev->xs_pdev)));
> + return -EINVAL;
> + }
> + kref_put(&holder->xsh_kref, xrt_subdev_free_holder_kref);
> +
> + /* kref_put above may remove holder from list. */
> + if (list_empty(hl))
> + complete(&sdev->xs_holder_comp);
> + return 0;
> +}
> +
> +int xrt_subdev_pool_add(struct xrt_subdev_pool *spool, enum xrt_subdev_id id,
> + xrt_subdev_root_cb_t pcb, void *pcb_arg, char *dtb)
> +{
> + struct mutex *lk = &spool->xsp_lock;
> + struct list_head *dl = &spool->xsp_dev_list;
> + struct xrt_subdev *sdev;
> + int ret = 0;
> +
> + sdev = xrt_subdev_create(spool->xsp_owner, id, pcb, pcb_arg, dtb);
> + if (sdev) {
> + mutex_lock(lk);
> + if (spool->xsp_closing) {
> + /* No new subdev when pool is going away. */
> + xrt_err(sdev->xs_pdev, "pool is closing");
> + ret = -ENODEV;
> + } else {
> + list_add(&sdev->xs_dev_list, dl);
> + }
> + mutex_unlock(lk);
> + if (ret)
> + xrt_subdev_destroy(sdev);
> + } else {
> + ret = -EINVAL;
> + }
> +
> + ret = ret ? ret : sdev->xs_pdev->id;
> + return ret;
> +}
> +
> +int xrt_subdev_pool_del(struct xrt_subdev_pool *spool, enum xrt_subdev_id id, int instance)
> +{
> + const struct list_head *ptr;
> + struct mutex *lk = &spool->xsp_lock;
> + struct list_head *dl = &spool->xsp_dev_list;
> + struct xrt_subdev *sdev;
> + int ret = -ENOENT;
> +
> + mutex_lock(lk);
> + if (spool->xsp_closing) {
> + /* Pool is going away, all subdevs will be gone. */
> + mutex_unlock(lk);
> + return 0;
> + }
> + list_for_each(ptr, dl) {
> + sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
> + if (sdev->xs_id != id || sdev->xs_pdev->id != instance)
> + continue;
> + xrt_subdev_pool_wait_for_holders(spool, sdev);
> + list_del(&sdev->xs_dev_list);
> + ret = 0;
> + break;
> + }
> + mutex_unlock(lk);
> + if (ret)
> + return ret;
> +
> + xrt_subdev_destroy(sdev);
> + return 0;
> +}
> +
> +static int xrt_subdev_pool_get_impl(struct xrt_subdev_pool *spool, xrt_subdev_match_t match,
> + void *arg, struct device *holder_dev, struct xrt_subdev **sdevp)
> +{
> + struct platform_device *pdev = (struct platform_device *)arg;
> + struct list_head *dl = &spool->xsp_dev_list;
> + struct mutex *lk = &spool->xsp_lock;
> + struct xrt_subdev *sdev = NULL;
> + const struct list_head *ptr;
> + struct xrt_subdev *d = NULL;
> + int ret = -ENOENT;
> +
> + mutex_lock(lk);
> +
> + if (!pdev) {
> + if (match == XRT_SUBDEV_MATCH_PREV) {
> + sdev = list_empty(dl) ? NULL :
> + list_last_entry(dl, struct xrt_subdev, xs_dev_list);
> + } else if (match == XRT_SUBDEV_MATCH_NEXT) {
> + sdev = list_first_entry_or_null(dl, struct xrt_subdev, xs_dev_list);
> + }
> + }
> +
> + list_for_each(ptr, dl) {
ok
> + d = list_entry(ptr, struct xrt_subdev, xs_dev_list);
> + if (match == XRT_SUBDEV_MATCH_PREV || match == XRT_SUBDEV_MATCH_NEXT) {
> + if (d->xs_pdev != pdev)
> + continue;
> + } else {
> + if (!match(d->xs_id, d->xs_pdev, arg))
> + continue;
> + }
> +
> + if (match == XRT_SUBDEV_MATCH_PREV)
> + sdev = !list_is_first(ptr, dl) ? list_prev_entry(d, xs_dev_list) : NULL;
> + else if (match == XRT_SUBDEV_MATCH_NEXT)
> + sdev = !list_is_last(ptr, dl) ? list_next_entry(d, xs_dev_list) : NULL;
> + else
> + sdev = d;
> + }
> +
> + if (sdev)
> + ret = xrt_subdev_hold(sdev, holder_dev);
> +
> + mutex_unlock(lk);
> +
> + if (!ret)
> + *sdevp = sdev;
> + return ret;
> +}
> +
> +int xrt_subdev_pool_get(struct xrt_subdev_pool *spool, xrt_subdev_match_t match, void *arg,
> + struct device *holder_dev, struct platform_device **pdevp)
> +{
> + int rc;
> + struct xrt_subdev *sdev;
> +
> + rc = xrt_subdev_pool_get_impl(spool, match, arg, holder_dev, &sdev);
> + if (rc) {
> + if (rc != -ENOENT)
> + dev_err(holder_dev, "failed to hold device: %d", rc);
> + return rc;
> + }
> +
> + if (!IS_ROOT_DEV(holder_dev)) {
ok
> + xrt_dbg(to_platform_device(holder_dev), "%s <<==== %s",
> + dev_name(holder_dev), dev_name(DEV(sdev->xs_pdev)));
> + }
> +
> + *pdevp = sdev->xs_pdev;
> + return 0;
> +}
> +
> +static int xrt_subdev_pool_put_impl(struct xrt_subdev_pool *spool, struct platform_device *pdev,
> + struct device *holder_dev)
> +{
> + const struct list_head *ptr;
> + struct mutex *lk = &spool->xsp_lock;
> + struct list_head *dl = &spool->xsp_dev_list;
> + struct xrt_subdev *sdev;
> + int ret = -ENOENT;
> +
> + mutex_lock(lk);
> + list_for_each(ptr, dl) {
> + sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
> + if (sdev->xs_pdev != pdev)
> + continue;
> + ret = xrt_subdev_release(sdev, holder_dev);
> + break;
> + }
> + mutex_unlock(lk);
> +
> + return ret;
> +}
> +
> +int xrt_subdev_pool_put(struct xrt_subdev_pool *spool, struct platform_device *pdev,
> + struct device *holder_dev)
> +{
> + int ret = xrt_subdev_pool_put_impl(spool, pdev, holder_dev);
> +
> + if (ret)
> + return ret;
> +
> + if (!IS_ROOT_DEV(holder_dev)) {
ok
> + xrt_dbg(to_platform_device(holder_dev), "%s <<==X== %s",
> + dev_name(holder_dev), dev_name(DEV(pdev)));
> + }
> + return 0;
> +}
> +
> +void xrt_subdev_pool_trigger_event(struct xrt_subdev_pool *spool, enum xrt_events e)
> +{
> + struct platform_device *tgt = NULL;
> + struct xrt_subdev *sdev = NULL;
> + struct xrt_event evt;
> +
> + while (!xrt_subdev_pool_get_impl(spool, XRT_SUBDEV_MATCH_NEXT,
> + tgt, spool->xsp_owner, &sdev)) {
> + tgt = sdev->xs_pdev;
> + evt.xe_evt = e;
> + evt.xe_subdev.xevt_subdev_id = sdev->xs_id;
> + evt.xe_subdev.xevt_subdev_instance = tgt->id;
> + xrt_subdev_root_request(tgt, XRT_ROOT_EVENT_SYNC, &evt);
> + xrt_subdev_pool_put_impl(spool, tgt, spool->xsp_owner);
> + }
> +}
> +
> +void xrt_subdev_pool_handle_event(struct xrt_subdev_pool *spool, struct xrt_event *evt)
> +{
> + struct platform_device *tgt = NULL;
> + struct xrt_subdev *sdev = NULL;
> +
> + while (!xrt_subdev_pool_get_impl(spool, XRT_SUBDEV_MATCH_NEXT,
> + tgt, spool->xsp_owner, &sdev)) {
> + tgt = sdev->xs_pdev;
> + xleaf_call(tgt, XRT_XLEAF_EVENT, evt);
> + xrt_subdev_pool_put_impl(spool, tgt, spool->xsp_owner);
> + }
> +}
> +
> +ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
> + struct platform_device *pdev, char *buf, size_t len)
> +{
> + const struct list_head *ptr;
> + struct mutex *lk = &spool->xsp_lock;
> + struct list_head *dl = &spool->xsp_dev_list;
> + struct xrt_subdev *sdev;
> + ssize_t ret = 0;
> +
> + mutex_lock(lk);
> + list_for_each(ptr, dl) {
> + sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
> + if (sdev->xs_pdev != pdev)
> + continue;
> + ret = xrt_subdev_get_holders(sdev, buf, len);
> + break;
> + }
> + mutex_unlock(lk);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(xrt_subdev_pool_get_holders);
> +
> +int xleaf_broadcast_event(struct platform_device *pdev, enum xrt_events evt, bool async)
> +{
> + struct xrt_event e = { evt, };
> + enum xrt_root_cmd cmd = async ? XRT_ROOT_EVENT_ASYNC : XRT_ROOT_EVENT_SYNC;
> +
> + WARN_ON(evt == XRT_EVENT_POST_CREATION || evt == XRT_EVENT_PRE_REMOVAL);
> + return xrt_subdev_root_request(pdev, cmd, &e);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_broadcast_event);
> +
> +void xleaf_hot_reset(struct platform_device *pdev)
> +{
> + xrt_subdev_root_request(pdev, XRT_ROOT_HOT_RESET, NULL);
> +}
> +EXPORT_SYMBOL_GPL(xleaf_hot_reset);
> +
> +void xleaf_get_barres(struct platform_device *pdev, struct resource **res, uint bar_idx)
> +{
> + struct xrt_root_get_res arg = { 0 };
> +
> + if (bar_idx > PCI_STD_RESOURCE_END) {
> + xrt_err(pdev, "Invalid bar idx %d", bar_idx);
> + *res = NULL;
> + return;
> + }
> +
> + xrt_subdev_root_request(pdev, XRT_ROOT_GET_RESOURCE, &arg);
> +
> + *res = &arg.xpigr_res[bar_idx];
> +}
> +
> +void xleaf_get_root_id(struct platform_device *pdev, unsigned short *vendor, unsigned short *device,
> + unsigned short *subvendor, unsigned short *subdevice)
> +{
> + struct xrt_root_get_id id = { 0 };
> +
> + WARN_ON(!vendor && !device && !subvendor && !subdevice);
ok
Tom
> +
> + xrt_subdev_root_request(pdev, XRT_ROOT_GET_ID, (void *)&id);
> + if (vendor)
> + *vendor = id.xpigi_vendor_id;
> + if (device)
> + *device = id.xpigi_device_id;
> + if (subvendor)
> + *subvendor = id.xpigi_sub_vendor_id;
> + if (subdevice)
> + *subdevice = id.xpigi_sub_device_id;
> +}
> +
> +struct device *xleaf_register_hwmon(struct platform_device *pdev, const char *name, void *drvdata,
> + const struct attribute_group **grps)
> +{
> + struct xrt_root_hwmon hm = { true, name, drvdata, grps, };
> +
> + xrt_subdev_root_request(pdev, XRT_ROOT_HWMON, (void *)&hm);
> + return hm.xpih_hwmon_dev;
> +}
> +
> +void xleaf_unregister_hwmon(struct platform_device *pdev, struct device *hwmon)
> +{
> + struct xrt_root_hwmon hm = { false, };
> +
> + hm.xpih_hwmon_dev = hwmon;
> + xrt_subdev_root_request(pdev, XRT_ROOT_HWMON, (void *)&hm);
> +}
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> The PCIE device driver which attaches to management function on Alveo
> devices. It instantiates one or more group drivers which, in turn,
> instantiate platform drivers. The instantiation of group and platform
> drivers is completely dtb driven.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/mgmt/root.c | 333 +++++++++++++++++++++++++++++++++++
> 1 file changed, 333 insertions(+)
> create mode 100644 drivers/fpga/xrt/mgmt/root.c
>
> diff --git a/drivers/fpga/xrt/mgmt/root.c b/drivers/fpga/xrt/mgmt/root.c
> new file mode 100644
> index 000000000000..f97f92807c01
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgmt/root.c
> @@ -0,0 +1,333 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo Management Function Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#include <linux/module.h>
> +#include <linux/pci.h>
> +#include <linux/aer.h>
> +#include <linux/vmalloc.h>
> +#include <linux/delay.h>
> +
> +#include "xroot.h"
> +#include "xmgnt.h"
> +#include "metadata.h"
> +
> +#define XMGMT_MODULE_NAME "xrt-mgmt"
ok
> +#define XMGMT_DRIVER_VERSION "4.0.0"
> +
> +#define XMGMT_PDEV(xm) ((xm)->pdev)
> +#define XMGMT_DEV(xm) (&(XMGMT_PDEV(xm)->dev))
> +#define xmgmt_err(xm, fmt, args...) \
> + dev_err(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
> +#define xmgmt_warn(xm, fmt, args...) \
> + dev_warn(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
> +#define xmgmt_info(xm, fmt, args...) \
> + dev_info(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
> +#define xmgmt_dbg(xm, fmt, args...) \
> + dev_dbg(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
> +#define XMGMT_DEV_ID(_pcidev) \
> + ({ typeof(_pcidev) (pcidev) = (_pcidev); \
> + ((pci_domain_nr((pcidev)->bus) << 16) | \
> + PCI_DEVID((pcidev)->bus->number, 0)); })
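For readers of the macro above: XMGMT_DEV_ID folds the PCI domain and bus into one integer and forces devfn to 0, so the management PF and user PF of the same card compare equal. A userspace sketch of the packing (toy_* names are mine; the PCI_DEVID expansion mirrors its (bus << 8) | devfn definition):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of PCI_DEVID(bus, devfn): pack bus:devfn as (bus << 8) | devfn. */
static uint32_t toy_pci_devid(uint8_t bus, uint8_t devfn)
{
	return ((uint32_t)bus << 8) | devfn;
}

/*
 * Mirror of XMGMT_DEV_ID: domain in the high 16 bits, bus:00.0 in the
 * low 16. Because devfn is forced to 0, every function on the same
 * domain/bus yields the same ID.
 */
static uint32_t toy_dev_id(uint16_t domain, uint8_t bus)
{
	return ((uint32_t)domain << 16) | toy_pci_devid(bus, 0);
}
```

This is what xmgmt_match_slot_and_save() relies on to catch all functions on the card with a single comparison.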
> +
> +static struct class *xmgmt_class;
> +
> +/* PCI Device IDs */
Add a comment here explaining what a golden image is, something like:
/*
* Golden image is preloaded on the device when it is shipped to customer.
* Then, customer can load other shells (from Xilinx or some other vendor).
* If something goes wrong with the shell, customer can always go back to
* golden and start over again.
*/
> +#define PCI_DEVICE_ID_U50_GOLDEN 0xD020
> +#define PCI_DEVICE_ID_U50 0x5020
> +static const struct pci_device_id xmgmt_pci_ids[] = {
> + { PCI_DEVICE(PCI_VENDOR_ID_XILINX, PCI_DEVICE_ID_U50_GOLDEN), }, /* Alveo U50 (golden) */
> + { PCI_DEVICE(PCI_VENDOR_ID_XILINX, PCI_DEVICE_ID_U50), }, /* Alveo U50 */
> + { 0, }
> +};
> +
> +struct xmgmt {
> + struct pci_dev *pdev;
> + void *root;
> +
> + bool ready;
> +};
> +
> +static int xmgmt_config_pci(struct xmgmt *xm)
> +{
> + struct pci_dev *pdev = XMGMT_PDEV(xm);
> + int rc;
> +
> + rc = pcim_enable_device(pdev);
> + if (rc < 0) {
> + xmgmt_err(xm, "failed to enable device: %d", rc);
> + return rc;
> + }
> +
> + rc = pci_enable_pcie_error_reporting(pdev);
> + if (rc)
ok
> + xmgmt_warn(xm, "failed to enable AER: %d", rc);
> +
> + pci_set_master(pdev);
> +
> + rc = pcie_get_readrq(pdev);
> + if (rc > 512)
512 is a magic number; change it to a #define.
> + pcie_set_readrq(pdev, 512);
> + return 0;
> +}
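To make the readrq clamp concrete, this is the shape I'd expect after the magic number is named; XMGMT_MAX_READRQ is a suggested name, not something in the patch, and the clamp is sketched as a pure function rather than the pcie_get_readrq()/pcie_set_readrq() pair:

```c
#include <assert.h>

/* Suggested name: cap on the PCIe max read request size, in bytes. */
#define XMGMT_MAX_READRQ	512

/* Clamp a device's current read request size to the driver's cap. */
static int toy_clamp_readrq(int current_rq)
{
	return current_rq > XMGMT_MAX_READRQ ? XMGMT_MAX_READRQ : current_rq;
}
```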
> +
> +static int xmgmt_match_slot_and_save(struct device *dev, void *data)
> +{
> + struct xmgmt *xm = data;
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
> + pci_cfg_access_lock(pdev);
> + pci_save_state(pdev);
> + }
> +
> + return 0;
> +}
> +
> +static void xmgmt_pci_save_config_all(struct xmgmt *xm)
> +{
> + bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_save);
Refactor expected in v5 when the pseudo-bus change happens.
> +}
> +
> +static int xmgmt_match_slot_and_restore(struct device *dev, void *data)
> +{
> + struct xmgmt *xm = data;
> + struct pci_dev *pdev = to_pci_dev(dev);
> +
> + if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
> + pci_restore_state(pdev);
> + pci_cfg_access_unlock(pdev);
> + }
> +
> + return 0;
> +}
> +
> +static void xmgmt_pci_restore_config_all(struct xmgmt *xm)
> +{
> + bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_restore);
> +}
> +
> +static void xmgmt_root_hot_reset(struct pci_dev *pdev)
> +{
> + struct xmgmt *xm = pci_get_drvdata(pdev);
> + struct pci_bus *bus;
> + u8 pci_bctl;
> + u16 pci_cmd, devctl;
> + int i, ret;
> +
> + xmgmt_info(xm, "hot reset start");
> +
> + xmgmt_pci_save_config_all(xm);
> +
> + pci_disable_device(pdev);
> +
> + bus = pdev->bus;
Whitespace: all these newlines are not needed.
> +
> + /*
> + * When flipping the SBR bit, device can fall off the bus. This is
> + * usually no problem at all so long as drivers are working properly
> + * after SBR. However, some systems complain bitterly when the device
> + * falls off the bus.
> + * The quick solution is to temporarily disable the SERR reporting of
> + * switch port during SBR.
> + */
> +
> + pci_read_config_word(bus->self, PCI_COMMAND, &pci_cmd);
> + pci_write_config_word(bus->self, PCI_COMMAND, (pci_cmd & ~PCI_COMMAND_SERR));
> + pcie_capability_read_word(bus->self, PCI_EXP_DEVCTL, &devctl);
> + pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, (devctl & ~PCI_EXP_DEVCTL_FERE));
> + pci_read_config_byte(bus->self, PCI_BRIDGE_CONTROL, &pci_bctl);
> + pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl | PCI_BRIDGE_CTL_BUS_RESET);
ok
> + msleep(100);
> + pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
> + ssleep(1);
> +
> + pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, devctl);
> + pci_write_config_word(bus->self, PCI_COMMAND, pci_cmd);
> +
> + ret = pci_enable_device(pdev);
> + if (ret)
> + xmgmt_err(xm, "failed to enable device, ret %d", ret);
> +
> + for (i = 0; i < 300; i++) {
> + pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
> + if (pci_cmd != 0xffff)
> + break;
> + msleep(20);
> + }
> + if (i == 300)
> + xmgmt_err(xm, "time'd out waiting for device to be online after reset");
time'd -> timed
Tom
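One more note on the recovery loop above: it is a bounded poll (300 tries, 20 ms apart) for config space to stop reading back all-ones. Sketched in userspace with the config read stubbed out via a callback (toy_* names and the stub are mine):

```c
#include <assert.h>
#include <stdint.h>

#define TOY_MAX_POLLS	300	/* mirrors the 300 iterations in the patch */

/*
 * Poll a config-space read (stubbed via callback) until it stops
 * returning all-ones, up to TOY_MAX_POLLS tries. Returns the number
 * of polls taken, or -1 on timeout.
 */
static int toy_wait_online(uint16_t (*read_cmd)(void *), void *ctx)
{
	int i;

	for (i = 0; i < TOY_MAX_POLLS; i++) {
		if (read_cmd(ctx) != 0xffff)
			return i;
		/* the driver sleeps 20 ms here; omitted in this sketch */
	}
	return -1;
}

/* Stub: device reads as all-ones for a fixed number of polls, then online. */
static uint16_t toy_stub_read(void *ctx)
{
	int *remaining = ctx;

	return (*remaining)-- > 0 ? 0xffff : 0x0006;
}
```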
> +
> + xmgmt_info(xm, "waiting for %d ms", i * 20);
> + xmgmt_pci_restore_config_all(xm);
> + xmgmt_config_pci(xm);
> +}
> +
> +static int xmgmt_create_root_metadata(struct xmgmt *xm, char **root_dtb)
> +{
> + char *dtb = NULL;
> + int ret;
> +
> + ret = xrt_md_create(XMGMT_DEV(xm), &dtb);
> + if (ret) {
> + xmgmt_err(xm, "create metadata failed, ret %d", ret);
> + goto failed;
> + }
> +
> + ret = xroot_add_vsec_node(xm->root, dtb);
> + if (ret == -ENOENT) {
> + /*
> + * We may be dealing with a MFG board.
> + * Try vsec-golden which will bring up all hard-coded leaves
> + * at hard-coded offsets.
> + */
> + ret = xroot_add_simple_node(xm->root, dtb, XRT_MD_NODE_VSEC_GOLDEN);
> + } else if (ret == 0) {
> + ret = xroot_add_simple_node(xm->root, dtb, XRT_MD_NODE_MGMT_MAIN);
> + }
> + if (ret)
> + goto failed;
> +
> + *root_dtb = dtb;
> + return 0;
> +
> +failed:
> + vfree(dtb);
> + return ret;
> +}
> +
> +static ssize_t ready_show(struct device *dev,
> + struct device_attribute *da,
> + char *buf)
> +{
> + struct pci_dev *pdev = to_pci_dev(dev);
> + struct xmgmt *xm = pci_get_drvdata(pdev);
> +
> + return sprintf(buf, "%d\n", xm->ready);
> +}
> +static DEVICE_ATTR_RO(ready);
> +
> +static struct attribute *xmgmt_root_attrs[] = {
> + &dev_attr_ready.attr,
> + NULL
> +};
> +
> +static struct attribute_group xmgmt_root_attr_group = {
> + .attrs = xmgmt_root_attrs,
> +};
> +
> +static struct xroot_physical_function_callback xmgmt_xroot_pf_cb = {
> + .xpc_hot_reset = xmgmt_root_hot_reset,
> +};
> +
> +static int xmgmt_probe(struct pci_dev *pdev, const struct pci_device_id *id)
> +{
> + int ret;
> + struct device *dev = &pdev->dev;
> + struct xmgmt *xm = devm_kzalloc(dev, sizeof(*xm), GFP_KERNEL);
> + char *dtb = NULL;
> +
> + if (!xm)
> + return -ENOMEM;
> + xm->pdev = pdev;
> + pci_set_drvdata(pdev, xm);
> +
> + ret = xmgmt_config_pci(xm);
> + if (ret)
> + goto failed;
> +
> + ret = xroot_probe(pdev, &xmgmt_xroot_pf_cb, &xm->root);
> + if (ret)
> + goto failed;
> +
> + ret = xmgmt_create_root_metadata(xm, &dtb);
> + if (ret)
> + goto failed_metadata;
> +
> + ret = xroot_create_group(xm->root, dtb);
> + vfree(dtb);
> + if (ret)
> + xmgmt_err(xm, "failed to create root group: %d", ret);
> +
> + if (!xroot_wait_for_bringup(xm->root))
> + xmgmt_err(xm, "failed to bringup all groups");
> + else
> + xm->ready = true;
> +
> + ret = sysfs_create_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
> + if (ret) {
> + /* Warning instead of failing the probe. */
> + xmgmt_warn(xm, "create xmgmt root attrs failed: %d", ret);
> + }
> +
> + xroot_broadcast(xm->root, XRT_EVENT_POST_CREATION);
> + xmgmt_info(xm, "%s started successfully", XMGMT_MODULE_NAME);
> + return 0;
> +
> +failed_metadata:
> + xroot_remove(xm->root);
> +failed:
> + pci_set_drvdata(pdev, NULL);
> + return ret;
> +}
> +
> +static void xmgmt_remove(struct pci_dev *pdev)
> +{
> + struct xmgmt *xm = pci_get_drvdata(pdev);
> +
> + xroot_broadcast(xm->root, XRT_EVENT_PRE_REMOVAL);
> + sysfs_remove_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
> + xroot_remove(xm->root);
> + pci_disable_pcie_error_reporting(xm->pdev);
> + xmgmt_info(xm, "%s cleaned up successfully", XMGMT_MODULE_NAME);
> +}
> +
> +static struct pci_driver xmgmt_driver = {
> + .name = XMGMT_MODULE_NAME,
> + .id_table = xmgmt_pci_ids,
> + .probe = xmgmt_probe,
> + .remove = xmgmt_remove,
> +};
> +
> +static int __init xmgmt_init(void)
> +{
> + int res = 0;
> +
> + res = xmgmt_register_leaf();
> + if (res)
> + return res;
> +
> + xmgmt_class = class_create(THIS_MODULE, XMGMT_MODULE_NAME);
> + if (IS_ERR(xmgmt_class))
> + return PTR_ERR(xmgmt_class);
> +
> + res = pci_register_driver(&xmgmt_driver);
> + if (res) {
> + class_destroy(xmgmt_class);
> + return res;
> + }
> +
> + return 0;
> +}
> +
> +static __exit void xmgmt_exit(void)
> +{
> + pci_unregister_driver(&xmgmt_driver);
> + class_destroy(xmgmt_class);
> + xmgmt_unregister_leaf();
> +}
> +
> +module_init(xmgmt_init);
> +module_exit(xmgmt_exit);
> +
> +MODULE_DEVICE_TABLE(pci, xmgmt_pci_ids);
> +MODULE_VERSION(XMGMT_DRIVER_VERSION);
> +MODULE_AUTHOR("XRT Team <[email protected]>");
> +MODULE_DESCRIPTION("Xilinx Alveo management function driver");
> +MODULE_LICENSE("GPL v2");
Small allocations should use kzalloc.
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> fpga-mgr and region implementation for xclbin download which will be
> called from main platform driver
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/mgmt/fmgr-drv.c | 191 +++++++++++
> drivers/fpga/xrt/mgmt/fmgr.h | 19 ++
> drivers/fpga/xrt/mgmt/main-region.c | 483 ++++++++++++++++++++++++++++
> 3 files changed, 693 insertions(+)
> create mode 100644 drivers/fpga/xrt/mgmt/fmgr-drv.c
> create mode 100644 drivers/fpga/xrt/mgmt/fmgr.h
A better file name would be xrt-mgr.*
> create mode 100644 drivers/fpga/xrt/mgmt/main-region.c
>
> diff --git a/drivers/fpga/xrt/mgmt/fmgr-drv.c b/drivers/fpga/xrt/mgmt/fmgr-drv.c
> new file mode 100644
> index 000000000000..12e1cc788ad9
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgmt/fmgr-drv.c
> @@ -0,0 +1,191 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * FPGA Manager Support for Xilinx Alveo Management Function Driver
Since there is only one FPGA manager for XRT, this could be shortened to
* FPGA Manager Support for Xilinx Alveo
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors: [email protected]
> + */
> +
> +#include <linux/cred.h>
> +#include <linux/efi.h>
> +#include <linux/fpga/fpga-mgr.h>
> +#include <linux/platform_device.h>
> +#include <linux/module.h>
> +#include <linux/vmalloc.h>
> +
> +#include "xclbin-helper.h"
> +#include "xleaf.h"
> +#include "fmgr.h"
> +#include "xleaf/axigate.h"
> +#include "xleaf/icap.h"
> +#include "xmgnt.h"
> +
> +struct xfpga_class {
> + const struct platform_device *pdev;
> + char name[64];
> +};
> +
> +/*
> + * xclbin download plumbing -- find the download subsystem, ICAP and
> + * pass the xclbin for heavy lifting
> + */
> +static int xmgmt_download_bitstream(struct platform_device *pdev,
> + const struct axlf *xclbin)
> +
> +{
> + struct xclbin_bit_head_info bit_header = { 0 };
> + struct platform_device *icap_leaf = NULL;
> + struct xrt_icap_wr arg;
> + char *bitstream = NULL;
> + u64 bit_len;
> + int ret;
> +
> + ret = xrt_xclbin_get_section(DEV(pdev), xclbin, BITSTREAM, (void **)&bitstream, &bit_len);
> + if (ret) {
> + xrt_err(pdev, "bitstream not found");
> + return -ENOENT;
> + }
> + ret = xrt_xclbin_parse_bitstream_header(DEV(pdev), bitstream,
> + XCLBIN_HWICAP_BITFILE_BUF_SZ,
> + &bit_header);
> + if (ret) {
> + ret = -EINVAL;
> + xrt_err(pdev, "invalid bitstream header");
> + goto fail;
> + }
> + if (bit_header.header_length + bit_header.bitstream_length > bit_len) {
> + ret = -EINVAL;
> + xrt_err(pdev, "invalid bitstream length. header %d, bitstream %d, section len %lld",
> + bit_header.header_length, bit_header.bitstream_length, bit_len);
> + goto fail;
> + }
> +
> + icap_leaf = xleaf_get_leaf_by_id(pdev, XRT_SUBDEV_ICAP, PLATFORM_DEVID_NONE);
> + if (!icap_leaf) {
> + ret = -ENODEV;
> + xrt_err(pdev, "icap does not exist");
> + goto fail;
> + }
> + arg.xiiw_bit_data = bitstream + bit_header.header_length;
> + arg.xiiw_data_len = bit_header.bitstream_length;
> + ret = xleaf_call(icap_leaf, XRT_ICAP_WRITE, &arg);
> + if (ret) {
> + xrt_err(pdev, "write bitstream failed, ret = %d", ret);
> + xleaf_put_leaf(pdev, icap_leaf);
> + goto fail;
> + }
ok, free_header removed
> +
> + xleaf_put_leaf(pdev, icap_leaf);
> + vfree(bitstream);
> +
> + return 0;
> +
> +fail:
> + vfree(bitstream);
> +
> + return ret;
> +}
> +
> +/*
> + * There is no HW prep work we do here since we need the full
> + * xclbin for its sanity check.
> + */
> +static int xmgmt_pr_write_init(struct fpga_manager *mgr,
> + struct fpga_image_info *info,
> + const char *buf, size_t count)
> +{
> + const struct axlf *bin = (const struct axlf *)buf;
> + struct xfpga_class *obj = mgr->priv;
> +
> + if (!(info->flags & FPGA_MGR_PARTIAL_RECONFIG)) {
> + xrt_info(obj->pdev, "%s only supports partial reconfiguration\n", obj->name);
> + return -EINVAL;
> + }
> +
> + if (count < sizeof(struct axlf))
> + return -EINVAL;
> +
> + if (count > bin->header.length)
> + return -EINVAL;
> +
> + xrt_info(obj->pdev, "Prepare download of xclbin %pUb of length %lld B",
> + &bin->header.uuid, bin->header.length);
> +
> + return 0;
> +}
> +
> +/*
> + * The implementation requires the full xclbin image before we can start
> + * programming the hardware via ICAP subsystem. The full image is required
ok
> + * for checking the validity of xclbin and walking the sections to
> + * discover the bitstream.
> + */
> +static int xmgmt_pr_write(struct fpga_manager *mgr,
> + const char *buf, size_t count)
> +{
> + const struct axlf *bin = (const struct axlf *)buf;
> + struct xfpga_class *obj = mgr->priv;
> +
> + if (bin->header.length != count)
> + return -EINVAL;
> +
> + return xmgmt_download_bitstream((void *)obj->pdev, bin);
> +}
> +
> +static int xmgmt_pr_write_complete(struct fpga_manager *mgr,
> + struct fpga_image_info *info)
> +{
> + const struct axlf *bin = (const struct axlf *)info->buf;
> + struct xfpga_class *obj = mgr->priv;
> +
> + xrt_info(obj->pdev, "Finished download of xclbin %pUb",
> + &bin->header.uuid);
> + return 0;
> +}
> +
> +static enum fpga_mgr_states xmgmt_pr_state(struct fpga_manager *mgr)
> +{
> + return FPGA_MGR_STATE_UNKNOWN;
ok as-is
> +}
> +
> +static const struct fpga_manager_ops xmgmt_pr_ops = {
> + .initial_header_size = sizeof(struct axlf),
> + .write_init = xmgmt_pr_write_init,
> + .write = xmgmt_pr_write,
> + .write_complete = xmgmt_pr_write_complete,
> + .state = xmgmt_pr_state,
> +};
> +
> +struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev)
> +{
> + struct xfpga_class *obj = devm_kzalloc(DEV(pdev), sizeof(struct xfpga_class),
> + GFP_KERNEL);
> + struct fpga_manager *fmgr = NULL;
> + int ret = 0;
> +
> + if (!obj)
> + return ERR_PTR(-ENOMEM);
> +
> + snprintf(obj->name, sizeof(obj->name), "Xilinx Alveo FPGA Manager");
> + obj->pdev = pdev;
> + fmgr = fpga_mgr_create(&pdev->dev,
> + obj->name,
> + &xmgmt_pr_ops,
> + obj);
> + if (!fmgr)
> + return ERR_PTR(-ENOMEM);
> +
> + ret = fpga_mgr_register(fmgr);
> + if (ret) {
> + fpga_mgr_free(fmgr);
> + return ERR_PTR(ret);
> + }
> + return fmgr;
> +}
> +
> +int xmgmt_fmgr_remove(struct fpga_manager *fmgr)
> +{
> + fpga_mgr_unregister(fmgr);
> + return 0;
> +}
> diff --git a/drivers/fpga/xrt/mgmt/fmgr.h b/drivers/fpga/xrt/mgmt/fmgr.h
> new file mode 100644
> index 000000000000..ff1fc5f870f8
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgmt/fmgr.h
> @@ -0,0 +1,19 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors: [email protected]
> + */
> +
> +#ifndef _XMGMT_FMGR_H_
> +#define _XMGMT_FMGR_H_
> +
> +#include <linux/fpga/fpga-mgr.h>
> +#include <linux/mutex.h>
Why do mutex.h and xclbin.h need to be included?
Consider removing them.
> +
> +#include <linux/xrt/xclbin.h>
ok enum removed.
> +
> +struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev);
> +int xmgmt_fmgr_remove(struct fpga_manager *fmgr);
> +
> +#endif
> diff --git a/drivers/fpga/xrt/mgmt/main-region.c b/drivers/fpga/xrt/mgmt/main-region.c
> new file mode 100644
> index 000000000000..96a674618e86
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgmt/main-region.c
> @@ -0,0 +1,483 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * FPGA Region Support for Xilinx Alveo Management Function Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + * Bulk of the code borrowed from XRT mgmt driver file, fmgr.c
Review this line; there is no fmgr.c.
> + *
> + * Authors: [email protected]
> + */
> +
> +#include <linux/uuid.h>
> +#include <linux/fpga/fpga-bridge.h>
> +#include <linux/fpga/fpga-region.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/axigate.h"
> +#include "xclbin-helper.h"
> +#include "xmgnt.h"
> +
> +struct xmgmt_bridge {
> + struct platform_device *pdev;
> + const char *bridge_name;
ok
> +};
> +
> +struct xmgmt_region {
> + struct platform_device *pdev;
> + struct fpga_region *region;
> + struct fpga_compat_id compat_id;
> + uuid_t intf_uuid;
interface_uuid would be a clearer name.
> + struct fpga_bridge *bridge;
> + int group_instance;
> + uuid_t dep_uuid;
dep? Expand this abbreviation.
> + struct list_head list;
> +};
> +
> +struct xmgmt_region_match_arg {
> + struct platform_device *pdev;
> + uuid_t *uuids;
> + u32 uuid_num;
> +};
> +
> +static int xmgmt_br_enable_set(struct fpga_bridge *bridge, bool enable)
> +{
> + struct xmgmt_bridge *br_data = (struct xmgmt_bridge *)bridge->priv;
> + struct platform_device *axigate_leaf;
> + int rc;
> +
> + axigate_leaf = xleaf_get_leaf_by_epname(br_data->pdev, br_data->bridge_name);
> + if (!axigate_leaf) {
> + xrt_err(br_data->pdev, "failed to get leaf %s",
> + br_data->bridge_name);
> + return -ENOENT;
> + }
> +
> + if (enable)
> + rc = xleaf_call(axigate_leaf, XRT_AXIGATE_OPEN, NULL);
> + else
> + rc = xleaf_call(axigate_leaf, XRT_AXIGATE_CLOSE, NULL);
> +
> + if (rc) {
> + xrt_err(br_data->pdev, "failed to %s gate %s, rc %d",
> + (enable ? "free" : "freeze"), br_data->bridge_name,
> + rc);
> + }
> +
> + xleaf_put_leaf(br_data->pdev, axigate_leaf);
> +
> + return rc;
> +}
> +
> +const struct fpga_bridge_ops xmgmt_bridge_ops = {
> + .enable_set = xmgmt_br_enable_set
> +};
> +
> +static void xmgmt_destroy_bridge(struct fpga_bridge *br)
> +{
> + struct xmgmt_bridge *br_data = br->priv;
> +
> + if (!br_data)
> + return;
> +
> + xrt_info(br_data->pdev, "destroy fpga bridge %s", br_data->bridge_name);
> + fpga_bridge_unregister(br);
> +
> + devm_kfree(DEV(br_data->pdev), br_data);
> +
> + fpga_bridge_free(br);
> +}
> +
> +static struct fpga_bridge *xmgmt_create_bridge(struct platform_device *pdev,
> + char *dtb)
> +{
> + struct fpga_bridge *br = NULL;
> + struct xmgmt_bridge *br_data;
> + const char *gate;
> + int rc;
> +
> + br_data = devm_kzalloc(DEV(pdev), sizeof(*br_data), GFP_KERNEL);
> + if (!br_data)
> + return NULL;
> + br_data->pdev = pdev;
> +
> + br_data->bridge_name = XRT_MD_NODE_GATE_ULP;
> + rc = xrt_md_find_endpoint(&pdev->dev, dtb, XRT_MD_NODE_GATE_ULP,
> + NULL, &gate);
> + if (rc) {
> + br_data->bridge_name = XRT_MD_NODE_GATE_PLP;
> + rc = xrt_md_find_endpoint(&pdev->dev, dtb, XRT_MD_NODE_GATE_PLP,
> + NULL, &gate);
> + }
> + if (rc) {
> + xrt_err(pdev, "failed to get axigate, rc %d", rc);
> + goto failed;
> + }
> +
> + br = fpga_bridge_create(DEV(pdev), br_data->bridge_name,
> + &xmgmt_bridge_ops, br_data);
> + if (!br) {
> + xrt_err(pdev, "failed to create bridge");
> + goto failed;
> + }
> +
> + rc = fpga_bridge_register(br);
> + if (rc) {
> + xrt_err(pdev, "failed to register bridge, rc %d", rc);
> + goto failed;
> + }
> +
> + xrt_info(pdev, "created fpga bridge %s", br_data->bridge_name);
> +
> + return br;
> +
> +failed:
> + if (br)
> + fpga_bridge_free(br);
> + if (br_data)
> + devm_kfree(DEV(pdev), br_data);
> +
> + return NULL;
> +}
> +
> +static void xmgmt_destroy_region(struct fpga_region *region)
ok
> +{
> + struct xmgmt_region *r_data = region->priv;
> +
> + xrt_info(r_data->pdev, "destroy fpga region %llx.%llx",
> + region->compat_id->id_l, region->compat_id->id_h);
Are the args ordered correctly? I expected id_h to be first.
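To show what I mean about the argument order: with "%llx.%llx" and id_l passed first, the message renders low.high, which reads backwards. A toy sketch of the expected high.low formatting (the struct layout here is illustrative, not a claim about fpga_compat_id):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Same two halves fpga_compat_id carries; layout here is illustrative. */
struct toy_compat_id {
	uint64_t id_h;
	uint64_t id_l;
};

/* Format as high.low, the order a reader usually expects. */
static int toy_format_id(const struct toy_compat_id *id, char *buf, size_t len)
{
	return snprintf(buf, len, "%llx.%llx",
			(unsigned long long)id->id_h,
			(unsigned long long)id->id_l);
}
```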
> +
> + fpga_region_unregister(region);
> +
> + if (r_data->group_instance > 0)
> + xleaf_destroy_group(r_data->pdev, r_data->group_instance);
> +
> + if (r_data->bridge)
> + xmgmt_destroy_bridge(r_data->bridge);
> +
> + if (r_data->region->info) {
> + fpga_image_info_free(r_data->region->info);
> + r_data->region->info = NULL;
> + }
> +
> + fpga_region_free(region);
> +
> + devm_kfree(DEV(r_data->pdev), r_data);
> +}
> +
> +static int xmgmt_region_match(struct device *dev, const void *data)
> +{
> + const struct xmgmt_region_match_arg *arg = data;
> + const struct fpga_region *match_region;
ok
> + uuid_t compat_uuid;
> + int i;
> +
> + if (dev->parent != &arg->pdev->dev)
> + return false;
> +
> + match_region = to_fpga_region(dev);
> + /*
> + * The device tree provides both parent and child uuids for an
> + * xclbin in one array. Here we try both uuids to see if it matches
> + * with target region's compat_id. Strictly speaking we should
> + * only match xclbin's parent uuid with target region's compat_id
> + * but given the uuids by design are unique comparing with both
> + * does not hurt.
> + */
> + import_uuid(&compat_uuid, (const char *)match_region->compat_id);
> + for (i = 0; i < arg->uuid_num; i++) {
> + if (uuid_equal(&compat_uuid, &arg->uuids[i]))
> + return true;
> + }
> +
> + return false;
> +}
> +
> +static int xmgmt_region_match_base(struct device *dev, const void *data)
> +{
> + const struct xmgmt_region_match_arg *arg = data;
> + const struct fpga_region *match_region;
> + const struct xmgmt_region *r_data;
> +
> + if (dev->parent != &arg->pdev->dev)
> + return false;
> +
> + match_region = to_fpga_region(dev);
> + r_data = match_region->priv;
> + if (uuid_is_null(&r_data->dep_uuid))
> + return true;
> +
> + return false;
> +}
> +
> +static int xmgmt_region_match_by_uuid(struct device *dev, const void *data)
ok
> +{
> + const struct xmgmt_region_match_arg *arg = data;
> + const struct fpga_region *match_region;
> + const struct xmgmt_region *r_data;
> +
> + if (dev->parent != &arg->pdev->dev)
> + return false;
> +
> + if (arg->uuid_num != 1)
> + return false;
ok
> +
> + match_region = to_fpga_region(dev);
> + r_data = match_region->priv;
> + if (uuid_equal(&r_data->dep_uuid, arg->uuids))
> + return true;
> +
> + return false;
> +}
> +
> +static void xmgmt_region_cleanup(struct fpga_region *region)
> +{
> + struct xmgmt_region *r_data = region->priv, *pdata, *temp;
> + struct platform_device *pdev = r_data->pdev;
> + struct xmgmt_region_match_arg arg = { 0 };
> + struct fpga_region *match_region = NULL;
> + struct device *start_dev = NULL;
> + LIST_HEAD(free_list);
> + uuid_t compat_uuid;
> +
> + list_add_tail(&r_data->list, &free_list);
> + arg.pdev = pdev;
> + arg.uuid_num = 1;
> + arg.uuids = &compat_uuid;
> +
> + /* find all regions depending on this region */
> + list_for_each_entry_safe(pdata, temp, &free_list, list) {
ok
> + import_uuid(arg.uuids, (const char *)pdata->region->compat_id);
> + start_dev = NULL;
> + while ((match_region = fpga_region_class_find(start_dev, &arg,
> + xmgmt_region_match_by_uuid))) {
> + pdata = match_region->priv;
> + list_add_tail(&pdata->list, &free_list);
> + start_dev = &match_region->dev;
> + put_device(&match_region->dev);
> + }
> + }
> +
> + list_del(&r_data->list);
> +
> + list_for_each_entry_safe_reverse(pdata, temp, &free_list, list)
> + xmgmt_destroy_region(pdata->region);
> +
> + if (r_data->group_instance > 0) {
> + xleaf_destroy_group(pdev, r_data->group_instance);
> + r_data->group_instance = -1;
> + }
> + if (r_data->region->info) {
> + fpga_image_info_free(r_data->region->info);
> + r_data->region->info = NULL;
> + }
> +}
> +
> +void xmgmt_region_cleanup_all(struct platform_device *pdev)
> +{
> + struct xmgmt_region_match_arg arg = { 0 };
> + struct fpga_region *base_region;
> +
> + arg.pdev = pdev;
> +
> + while ((base_region = fpga_region_class_find(NULL, &arg, xmgmt_region_match_base))) {
ok
> + put_device(&base_region->dev);
> +
> + xmgmt_region_cleanup(base_region);
> + xmgmt_destroy_region(base_region);
> + }
> +}
> +
> +/*
> + * Program a region with an xclbin image. Bring up the subdevs and the
ok
> + * group object to contain the subdevs.
> + */
> +static int xmgmt_region_program(struct fpga_region *region, const void *xclbin, char *dtb)
> +{
> + const struct axlf *xclbin_obj = xclbin;
> + struct fpga_image_info *info;
> + struct platform_device *pdev;
> + struct xmgmt_region *r_data;
> + int rc;
> +
> + r_data = region->priv;
> + pdev = r_data->pdev;
> +
> + info = fpga_image_info_alloc(&pdev->dev);
> + if (!info)
> + return -ENOMEM;
> +
> + info->buf = xclbin;
> + info->count = xclbin_obj->header.length;
> + info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
> + region->info = info;
> + rc = fpga_region_program_fpga(region);
> + if (rc) {
> + xrt_err(pdev, "programming xclbin failed, rc %d", rc);
> + return rc;
> + }
> +
> + /* free bridges to allow reprogram */
> + if (region->get_bridges)
> + fpga_bridges_put(&region->bridge_list);
> +
> + /*
> + * Next bringup the subdevs for this region which will be managed by
> + * its own group object.
> + */
> + r_data->group_instance = xleaf_create_group(pdev, dtb);
> + if (r_data->group_instance < 0) {
> + xrt_err(pdev, "failed to create group, rc %d",
> + r_data->group_instance);
> + rc = r_data->group_instance;
> + return rc;
> + }
> +
> + rc = xleaf_wait_for_group_bringup(pdev);
> + if (rc)
> + xrt_err(pdev, "group bringup failed, rc %d", rc);
> + return rc;
> +}
> +
> +static int xmgmt_get_bridges(struct fpga_region *region)
> +{
> + struct xmgmt_region *r_data = region->priv;
> + struct device *dev = &r_data->pdev->dev;
> +
> + return fpga_bridge_get_to_list(dev, region->info, &region->bridge_list);
> +}
> +
> +/*
> + * Program/create FPGA regions based on input xclbin file.
ok, dropped sentence
> + * 1. Identify a matching existing region for this xclbin
> + * 2. Tear down any previous objects for the found region
> + * 3. Program this region with input xclbin
> + * 4. Iterate over this region's interface uuids to determine if it defines any
> + * child region. Create fpga_region for the child region.
> + */
> +int xmgmt_process_xclbin(struct platform_device *pdev,
> + struct fpga_manager *fmgr,
> + const struct axlf *xclbin,
> + enum provider_kind kind)
> +{
> + struct fpga_region *region, *compat_region = NULL;
> + struct xmgmt_region_match_arg arg = { 0 };
ok
> + struct xmgmt_region *r_data;
> + uuid_t compat_uuid;
> + char *dtb = NULL;
> + int rc, i;
> +
> + rc = xrt_xclbin_get_metadata(DEV(pdev), xclbin, &dtb);
> + if (rc) {
> + xrt_err(pdev, "failed to get dtb: %d", rc);
> + goto failed;
> + }
> +
> + rc = xrt_md_get_interface_uuids(DEV(pdev), dtb, 0, NULL);
> + if (rc < 0) {
> + xrt_err(pdev, "failed to get intf uuid");
> + rc = -EINVAL;
ok
> + goto failed;
> + }
> + arg.uuid_num = rc;
> + arg.uuids = vzalloc(sizeof(uuid_t) * arg.uuid_num);
uuids should be small, so convert from vzalloc to kzalloc (or kcalloc for the array form) and switch the matching vfree() to kfree()
> + if (!arg.uuids) {
> + rc = -ENOMEM;
> + goto failed;
> + }
> + arg.pdev = pdev;
> +
> + rc = xrt_md_get_interface_uuids(DEV(pdev), dtb, arg.uuid_num, arg.uuids);
> + if (rc != arg.uuid_num) {
> + xrt_err(pdev, "only get %d uuids, expect %d", rc, arg.uuid_num);
> + rc = -EINVAL;
> + goto failed;
> + }
> +
> + /* if this is not base firmware, search for a compatible region */
> + if (kind != XMGMT_BLP) {
> + compat_region = fpga_region_class_find(NULL, &arg, xmgmt_region_match);
> + if (!compat_region) {
> + xrt_err(pdev, "failed to get compatible region");
> + rc = -ENOENT;
> + goto failed;
> + }
> +
> + xmgmt_region_cleanup(compat_region);
> +
> + rc = xmgmt_region_program(compat_region, xclbin, dtb);
> + if (rc) {
> + xrt_err(pdev, "failed to program region");
> + goto failed;
> + }
> + }
> +
> + if (compat_region)
> + import_uuid(&compat_uuid, (const char *)compat_region->compat_id);
> +
> + /* create all the new regions contained in this xclbin */
> + for (i = 0; i < arg.uuid_num; i++) {
> + if (compat_region && uuid_equal(&compat_uuid, &arg.uuids[i])) {
> + /* region for this interface already exists */
> + continue;
> + }
> +
> + region = fpga_region_create(DEV(pdev), fmgr, xmgmt_get_bridges);
> + if (!region) {
> + xrt_err(pdev, "failed to create fpga region");
> + rc = -EFAULT;
> + goto failed;
> + }
> + r_data = devm_kzalloc(DEV(pdev), sizeof(*r_data), GFP_KERNEL);
> + if (!r_data) {
> + rc = -ENOMEM;
> + fpga_region_free(region);
> + goto failed;
> + }
> + r_data->pdev = pdev;
> + r_data->region = region;
> + r_data->group_instance = -1;
> + uuid_copy(&r_data->intf_uuid, &arg.uuids[i]);
> + if (compat_region)
> + import_uuid(&r_data->dep_uuid, (const char *)compat_region->compat_id);
> + r_data->bridge = xmgmt_create_bridge(pdev, dtb);
> + if (!r_data->bridge) {
> + xrt_err(pdev, "failed to create fpga bridge");
> + rc = -EFAULT;
> + devm_kfree(DEV(pdev), r_data);
> + fpga_region_free(region);
> + goto failed;
> + }
> +
> + region->compat_id = &r_data->compat_id;
> + export_uuid((char *)region->compat_id, &r_data->intf_uuid);
> + region->priv = r_data;
> +
> + rc = fpga_region_register(region);
> + if (rc) {
> + xrt_err(pdev, "failed to register fpga region");
> + xmgmt_destroy_bridge(r_data->bridge);
> + fpga_region_free(region);
> + devm_kfree(DEV(pdev), r_data);
> + goto failed;
> + }
> +
> + xrt_info(pdev, "created fpga region %llx%llx",
> + region->compat_id->id_l, region->compat_id->id_h);
see above comment on id_h
the destroy path's info message used %llx.%llx; for consistency either add the '.' here or remove it there
Tom
> + }
> +
> + if (compat_region)
> + put_device(&compat_region->dev);
> + vfree(dtb);
> + return 0;
> +
> +failed:
> + if (compat_region) {
> + put_device(&compat_region->dev);
> + xmgmt_region_cleanup(compat_region);
> + } else {
> + xmgmt_region_cleanup_all(pdev);
> + }
> +
> + vfree(dtb);
> + return rc;
> +}
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> platform driver that handles IOCTLs, such as hot reset and xclbin download.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/xmgmt-main.h | 34 ++
> drivers/fpga/xrt/mgmt/main.c | 670 ++++++++++++++++++++++++++
> drivers/fpga/xrt/mgmt/xmgnt.h | 34 ++
> include/uapi/linux/xrt/xmgmt-ioctl.h | 46 ++
> 4 files changed, 784 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xmgmt-main.h
> create mode 100644 drivers/fpga/xrt/mgmt/main.c
'main' is generic, how about xmgnt-main ?
> create mode 100644 drivers/fpga/xrt/mgmt/xmgnt.h
> create mode 100644 include/uapi/linux/xrt/xmgmt-ioctl.h
>
> diff --git a/drivers/fpga/xrt/include/xmgmt-main.h b/drivers/fpga/xrt/include/xmgmt-main.h
> new file mode 100644
> index 000000000000..dce9f0d1a0dc
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xmgmt-main.h
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XMGMT_MAIN_H_
> +#define _XMGMT_MAIN_H_
> +
> +#include <linux/xrt/xclbin.h>
> +#include "xleaf.h"
> +
> +enum xrt_mgmt_main_leaf_cmd {
> + XRT_MGMT_MAIN_GET_AXLF_SECTION = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> + XRT_MGMT_MAIN_GET_VBNV,
> +};
> +
> +/* There are three kind of partitions. Each of them is programmed independently. */
> +enum provider_kind {
> + XMGMT_BLP, /* Base Logic Partition */
> + XMGMT_PLP, /* Provider Logic Partition */
> + XMGMT_ULP, /* User Logic Partition */
ok
> +};
> +
> +struct xrt_mgmt_main_get_axlf_section {
> + enum provider_kind xmmigas_axlf_kind;
> + enum axlf_section_kind xmmigas_section_kind;
> + void *xmmigas_section;
> + u64 xmmigas_section_size;
> +};
> +
> +#endif /* _XMGMT_MAIN_H_ */
> diff --git a/drivers/fpga/xrt/mgmt/main.c b/drivers/fpga/xrt/mgmt/main.c
> new file mode 100644
> index 000000000000..f3b46e1fd78b
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgmt/main.c
> @@ -0,0 +1,670 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA MGMT PF entry point driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Sonal Santan <[email protected]>
> + */
> +
> +#include <linux/firmware.h>
> +#include <linux/uaccess.h>
> +#include "xclbin-helper.h"
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include <linux/xrt/xmgmt-ioctl.h>
> +#include "xleaf/devctl.h"
> +#include "xmgmt-main.h"
> +#include "fmgr.h"
> +#include "xleaf/icap.h"
> +#include "xleaf/axigate.h"
> +#include "xmgnt.h"
> +
> +#define XMGMT_MAIN "xmgmt_main"
> +#define XMGMT_SUPP_XCLBIN_MAJOR 2
> +
> +#define XMGMT_FLAG_FLASH_READY 1
> +#define XMGMT_FLAG_DEVCTL_READY 2
> +
> +#define XMGMT_UUID_STR_LEN 80
> +
> +struct xmgmt_main {
> + struct platform_device *pdev;
> + struct axlf *firmware_blp;
> + struct axlf *firmware_plp;
> + struct axlf *firmware_ulp;
> + u32 flags;
ok
> + struct fpga_manager *fmgr;
> + struct mutex lock; /* busy lock */
ok
> +
do not need this nl
> + uuid_t *blp_interface_uuids;
> + u32 blp_interface_uuid_num;
ok
> +};
> +
> +/*
> + * VBNV stands for Vendor, BoardID, Name, Version. It is a string
> + * which describes board and shell.
> + *
> + * Caller is responsible for freeing the returned string.
ok
> + */
> +char *xmgmt_get_vbnv(struct platform_device *pdev)
> +{
> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> + const char *vbnv;
> + char *ret;
> + int i;
> +
> + if (xmm->firmware_plp)
> + vbnv = xmm->firmware_plp->header.platform_vbnv;
> + else if (xmm->firmware_blp)
> + vbnv = xmm->firmware_blp->header.platform_vbnv;
> + else
> + return NULL;
> +
> + ret = kstrdup(vbnv, GFP_KERNEL);
> + if (!ret)
> + return NULL;
> +
> + for (i = 0; i < strlen(ret); i++) {
> + if (ret[i] == ':' || ret[i] == '.')
> + ret[i] = '_';
> + }
> + return ret;
> +}
> +
> +static int get_dev_uuid(struct platform_device *pdev, char *uuidstr, size_t len)
> +{
> + struct xrt_devctl_rw devctl_arg = { 0 };
> + struct platform_device *devctl_leaf;
> + char uuid_buf[UUID_SIZE];
> + uuid_t uuid;
> + int err;
> +
> + devctl_leaf = xleaf_get_leaf_by_epname(pdev, XRT_MD_NODE_BLP_ROM);
> + if (!devctl_leaf) {
> + xrt_err(pdev, "can not get %s", XRT_MD_NODE_BLP_ROM);
> + return -EINVAL;
> + }
> +
> + devctl_arg.xdr_id = XRT_DEVCTL_ROM_UUID;
> + devctl_arg.xdr_buf = uuid_buf;
> + devctl_arg.xdr_len = sizeof(uuid_buf);
> + devctl_arg.xdr_offset = 0;
> + err = xleaf_call(devctl_leaf, XRT_DEVCTL_READ, &devctl_arg);
> + xleaf_put_leaf(pdev, devctl_leaf);
> + if (err) {
> + xrt_err(pdev, "can not get uuid: %d", err);
> + return err;
> + }
> + import_uuid(&uuid, uuid_buf);
ok
> + xrt_md_trans_uuid2str(&uuid, uuidstr);
> +
> + return 0;
> +}
> +
> +int xmgmt_hot_reset(struct platform_device *pdev)
> +{
> + int ret = xleaf_broadcast_event(pdev, XRT_EVENT_PRE_HOT_RESET, false);
> +
> + if (ret) {
> + xrt_err(pdev, "offline failed, hot reset is canceled");
> + return ret;
> + }
> +
> + xleaf_hot_reset(pdev);
> + xleaf_broadcast_event(pdev, XRT_EVENT_POST_HOT_RESET, false);
> + return 0;
> +}
> +
> +static ssize_t reset_store(struct device *dev, struct device_attribute *da,
> + const char *buf, size_t count)
> +{
> + struct platform_device *pdev = to_platform_device(dev);
> +
> + xmgmt_hot_reset(pdev);
> + return count;
> +}
> +static DEVICE_ATTR_WO(reset);
> +
> +static ssize_t VBNV_show(struct device *dev, struct device_attribute *da, char *buf)
> +{
> + struct platform_device *pdev = to_platform_device(dev);
> + ssize_t ret;
> + char *vbnv;
> +
> + vbnv = xmgmt_get_vbnv(pdev);
> + if (!vbnv)
> + return -EINVAL;
ok
> + ret = sprintf(buf, "%s\n", vbnv);
> + kfree(vbnv);
> + return ret;
> +}
> +static DEVICE_ATTR_RO(VBNV);
> +
> +/* logic uuid is the uuid that uniquely identifies the partition */
> +static ssize_t logic_uuids_show(struct device *dev, struct device_attribute *da, char *buf)
> +{
> + struct platform_device *pdev = to_platform_device(dev);
> + char uuid[XMGMT_UUID_STR_LEN];
ok
> + ssize_t ret;
> +
> + /* Getting UUID pointed to by VSEC, should be the same as logic UUID of BLP. */
> + ret = get_dev_uuid(pdev, uuid, sizeof(uuid));
> + if (ret)
> + return ret;
> + ret = sprintf(buf, "%s\n", uuid);
> + return ret;
> +}
> +static DEVICE_ATTR_RO(logic_uuids);
> +
> +static ssize_t interface_uuids_show(struct device *dev, struct device_attribute *da, char *buf)
> +{
> + struct platform_device *pdev = to_platform_device(dev);
> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> + ssize_t ret = 0;
> + u32 i;
> +
> + for (i = 0; i < xmm->blp_interface_uuid_num; i++) {
> + char uuidstr[XMGMT_UUID_STR_LEN];
> +
> + xrt_md_trans_uuid2str(&xmm->blp_interface_uuids[i], uuidstr);
> + ret += sprintf(buf + ret, "%s\n", uuidstr);
> + }
> + return ret;
> +}
> +static DEVICE_ATTR_RO(interface_uuids);
> +
> +static struct attribute *xmgmt_main_attrs[] = {
> + &dev_attr_reset.attr,
> + &dev_attr_VBNV.attr,
> + &dev_attr_logic_uuids.attr,
> + &dev_attr_interface_uuids.attr,
> + NULL,
> +};
> +
> +static const struct attribute_group xmgmt_main_attrgroup = {
> + .attrs = xmgmt_main_attrs,
> +};
> +
ok, removed ulp_image_write()
> +static int load_firmware_from_disk(struct platform_device *pdev, struct axlf **fw_buf, size_t *len)
> +{
> + char uuid[XMGMT_UUID_STR_LEN];
> + const struct firmware *fw;
> + char fw_name[256];
> + int err = 0;
> +
> + *len = 0;
ok
> + err = get_dev_uuid(pdev, uuid, sizeof(uuid));
> + if (err)
> + return err;
> +
> + snprintf(fw_name, sizeof(fw_name), "xilinx/%s/partition.xsabin", uuid);
> + xrt_info(pdev, "try loading fw: %s", fw_name);
> +
> + err = request_firmware(&fw, fw_name, DEV(pdev));
> + if (err)
> + return err;
> +
> + *fw_buf = vmalloc(fw->size);
> + if (!*fw_buf) {
> + release_firmware(fw);
> + return -ENOMEM;
> + }
> +
> + *len = fw->size;
> + memcpy(*fw_buf, fw->data, fw->size);
> +
> + release_firmware(fw);
> + return 0;
> +}
> +
> +static const struct axlf *xmgmt_get_axlf_firmware(struct xmgmt_main *xmm, enum provider_kind kind)
> +{
> + switch (kind) {
> + case XMGMT_BLP:
> + return xmm->firmware_blp;
> + case XMGMT_PLP:
> + return xmm->firmware_plp;
> + case XMGMT_ULP:
> + return xmm->firmware_ulp;
> + default:
> + xrt_err(xmm->pdev, "unknown axlf kind: %d", kind);
> + return NULL;
> + }
> +}
> +
> +/* The caller needs to free the returned dtb buffer */
ok
> +char *xmgmt_get_dtb(struct platform_device *pdev, enum provider_kind kind)
> +{
> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> + const struct axlf *provider;
> + char *dtb = NULL;
> + int rc;
> +
> + provider = xmgmt_get_axlf_firmware(xmm, kind);
> + if (!provider)
> + return dtb;
> +
> + rc = xrt_xclbin_get_metadata(DEV(pdev), provider, &dtb);
> + if (rc)
> + xrt_err(pdev, "failed to find dtb: %d", rc);
> + return dtb;
> +}
> +
> +/* The caller needs to free the returned uuid buffer */
ok
> +static const char *get_uuid_from_firmware(struct platform_device *pdev, const struct axlf *xclbin)
> +{
> + const void *uuiddup = NULL;
> + const void *uuid = NULL;
> + void *dtb = NULL;
> + int rc;
> +
> + rc = xrt_xclbin_get_section(DEV(pdev), xclbin, PARTITION_METADATA, &dtb, NULL);
> + if (rc)
> + return NULL;
> +
> + rc = xrt_md_get_prop(DEV(pdev), dtb, NULL, NULL, XRT_MD_PROP_LOGIC_UUID, &uuid, NULL);
> + if (!rc)
> + uuiddup = kstrdup(uuid, GFP_KERNEL);
> + vfree(dtb);
> + return uuiddup;
> +}
> +
> +static bool is_valid_firmware(struct platform_device *pdev,
> + const struct axlf *xclbin, size_t fw_len)
> +{
> + const char *fw_buf = (const char *)xclbin;
> + size_t axlflen = xclbin->header.length;
> + char dev_uuid[XMGMT_UUID_STR_LEN];
> + const char *fw_uuid;
> + int err;
> +
> + err = get_dev_uuid(pdev, dev_uuid, sizeof(dev_uuid));
> + if (err)
> + return false;
> +
> + if (memcmp(fw_buf, XCLBIN_VERSION2, sizeof(XCLBIN_VERSION2)) != 0) {
> + xrt_err(pdev, "unknown fw format");
> + return false;
> + }
> +
> + if (axlflen > fw_len) {
> + xrt_err(pdev, "truncated fw, length: %zu, expect: %zu", fw_len, axlflen);
> + return false;
> + }
> +
> + if (xclbin->header.version_major != XMGMT_SUPP_XCLBIN_MAJOR) {
> + xrt_err(pdev, "firmware is not supported");
> + return false;
> + }
> +
> + fw_uuid = get_uuid_from_firmware(pdev, xclbin);
> + if (!fw_uuid || strncmp(fw_uuid, dev_uuid, sizeof(dev_uuid)) != 0) {
> + xrt_err(pdev, "bad fw UUID: %s, expect: %s",
> + fw_uuid ? fw_uuid : "<none>", dev_uuid);
> + kfree(fw_uuid);
> + return false;
> + }
> +
> + kfree(fw_uuid);
> + return true;
> +}
> +
> +int xmgmt_get_provider_uuid(struct platform_device *pdev, enum provider_kind kind, uuid_t *uuid)
> +{
> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> + const struct axlf *fwbuf;
> + const char *fw_uuid;
> + int rc = -ENOENT;
> +
> + mutex_lock(&xmm->lock);
> +
> + fwbuf = xmgmt_get_axlf_firmware(xmm, kind);
> + if (!fwbuf)
> + goto done;
> +
> + fw_uuid = get_uuid_from_firmware(pdev, fwbuf);
> + if (!fw_uuid)
> + goto done;
> +
> + rc = xrt_md_trans_str2uuid(DEV(pdev), fw_uuid, uuid);
> + kfree(fw_uuid);
> +
> +done:
> + mutex_unlock(&xmm->lock);
> + return rc;
> +}
> +
> +static int xmgmt_create_blp(struct xmgmt_main *xmm)
> +{
> + const struct axlf *provider = xmgmt_get_axlf_firmware(xmm, XMGMT_BLP);
> + struct platform_device *pdev = xmm->pdev;
> + int rc = 0;
> + char *dtb = NULL;
> +
> + dtb = xmgmt_get_dtb(pdev, XMGMT_BLP);
> + if (!dtb) {
> + xrt_err(pdev, "did not get BLP metadata");
> + return -EINVAL;
ok
> + }
> +
> + rc = xmgmt_process_xclbin(xmm->pdev, xmm->fmgr, provider, XMGMT_BLP);
> + if (rc) {
> + xrt_err(pdev, "failed to process BLP: %d", rc);
> + goto failed;
> + }
> +
> + rc = xleaf_create_group(pdev, dtb);
> + if (rc < 0)
> + xrt_err(pdev, "failed to create BLP group: %d", rc);
> + else
> + rc = 0;
> +
> + WARN_ON(xmm->blp_interface_uuids);
> + rc = xrt_md_get_interface_uuids(&pdev->dev, dtb, 0, NULL);
> + if (rc > 0) {
> + xmm->blp_interface_uuid_num = rc;
> + xmm->blp_interface_uuids = vzalloc(sizeof(uuid_t) * xmm->blp_interface_uuid_num);
blp_interface_uuids should be small, so convert to kzalloc
> + if (!xmm->blp_interface_uuids) {
ok
> + rc = -ENOMEM;
> + goto failed;
> + }
> + xrt_md_get_interface_uuids(&pdev->dev, dtb, xmm->blp_interface_uuid_num,
> + xmm->blp_interface_uuids);
> + }
> +
> +failed:
> + vfree(dtb);
> + return rc;
> +}
> +
> +static int xmgmt_load_firmware(struct xmgmt_main *xmm)
> +{
> + struct platform_device *pdev = xmm->pdev;
> + size_t fwlen;
> + int rc;
> +
> + rc = load_firmware_from_disk(pdev, &xmm->firmware_blp, &fwlen);
ok
> + if (!rc && is_valid_firmware(pdev, xmm->firmware_blp, fwlen))
> + xmgmt_create_blp(xmm);
> + else
> + xrt_err(pdev, "failed to find firmware, giving up: %d", rc);
> + return rc;
> +}
> +
> +static void xmgmt_main_event_cb(struct platform_device *pdev, void *arg)
> +{
> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> + struct xrt_event *evt = (struct xrt_event *)arg;
> + enum xrt_events e = evt->xe_evt;
> + struct platform_device *leaf;
> + enum xrt_subdev_id id;
> +
> + id = evt->xe_subdev.xevt_subdev_id;
> + switch (e) {
> + case XRT_EVENT_POST_CREATION: {
> + if (id == XRT_SUBDEV_DEVCTL && !(xmm->flags & XMGMT_FLAG_DEVCTL_READY)) {
> + leaf = xleaf_get_leaf_by_epname(pdev, XRT_MD_NODE_BLP_ROM);
> + if (leaf) {
> + xmm->flags |= XMGMT_FLAG_DEVCTL_READY;
> + xleaf_put_leaf(pdev, leaf);
> + }
> + } else if (id == XRT_SUBDEV_QSPI && !(xmm->flags & XMGMT_FLAG_FLASH_READY)) {
> + xmm->flags |= XMGMT_FLAG_FLASH_READY;
> + } else {
> + break;
> + }
> +
> + if (xmm->flags & XMGMT_FLAG_DEVCTL_READY)
> + xmgmt_load_firmware(xmm);
> + break;
> + }
> + case XRT_EVENT_PRE_REMOVAL:
> + break;
> + default:
> + xrt_dbg(pdev, "ignored event %d", e);
> + break;
> + }
> +}
> +
> +static int xmgmt_main_probe(struct platform_device *pdev)
> +{
> + struct xmgmt_main *xmm;
> +
> + xrt_info(pdev, "probing...");
> +
> + xmm = devm_kzalloc(DEV(pdev), sizeof(*xmm), GFP_KERNEL);
> + if (!xmm)
> + return -ENOMEM;
> +
> + xmm->pdev = pdev;
> + xmm->fmgr = xmgmt_fmgr_probe(pdev);
> + if (IS_ERR(xmm->fmgr))
> + return PTR_ERR(xmm->fmgr);
> +
> + platform_set_drvdata(pdev, xmm);
> + mutex_init(&xmm->lock);
> +
> + /* Ready to handle requests through sysfs nodes. */
> + if (sysfs_create_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup))
> + xrt_err(pdev, "failed to create sysfs group");
> + return 0;
> +}
> +
> +static int xmgmt_main_remove(struct platform_device *pdev)
> +{
> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> +
> + /* By now, group driver should prevent any inter-leaf call. */
> +
> + xrt_info(pdev, "leaving...");
> +
> + vfree(xmm->blp_interface_uuids);
> + vfree(xmm->firmware_blp);
> + vfree(xmm->firmware_plp);
> + vfree(xmm->firmware_ulp);
> + xmgmt_region_cleanup_all(pdev);
> + xmgmt_fmgr_remove(xmm->fmgr);
> + sysfs_remove_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup);
> + return 0;
> +}
> +
> +static int
> +xmgmt_mainleaf_call(struct platform_device *pdev, u32 cmd, void *arg)
> +{
> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
> + int ret = 0;
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + xmgmt_main_event_cb(pdev, arg);
> + break;
> + case XRT_MGMT_MAIN_GET_AXLF_SECTION: {
> + struct xrt_mgmt_main_get_axlf_section *get =
> + (struct xrt_mgmt_main_get_axlf_section *)arg;
> + const struct axlf *firmware = xmgmt_get_axlf_firmware(xmm, get->xmmigas_axlf_kind);
> +
> + if (!firmware) {
> + ret = -ENOENT;
> + } else {
> + ret = xrt_xclbin_get_section(DEV(pdev), firmware,
> + get->xmmigas_section_kind,
> + &get->xmmigas_section,
> + &get->xmmigas_section_size);
> + }
> + break;
> + }
> + case XRT_MGMT_MAIN_GET_VBNV: {
> + char **vbnv_p = (char **)arg;
> +
> + *vbnv_p = xmgmt_get_vbnv(pdev);
> + if (!*vbnv_p)
> + ret = -EINVAL;
ok
> + break;
> + }
> + default:
> + xrt_err(pdev, "unknown cmd: %d", cmd);
> + ret = -EINVAL;
> + break;
> + }
> + return ret;
> +}
> +
> +static int xmgmt_main_open(struct inode *inode, struct file *file)
> +{
> + struct platform_device *pdev = xleaf_devnode_open(inode);
> +
> + /* Device may have gone already when we get here. */
> + if (!pdev)
> + return -ENODEV;
> +
> + xrt_info(pdev, "opened");
> + file->private_data = platform_get_drvdata(pdev);
> + return 0;
> +}
> +
> +static int xmgmt_main_close(struct inode *inode, struct file *file)
> +{
> + struct xmgmt_main *xmm = file->private_data;
> +
> + xleaf_devnode_close(inode);
> +
> + xrt_info(xmm->pdev, "closed");
> + return 0;
> +}
> +
> +/*
> + * Called for the xclbin download ioctl.
> + */
> +static int xmgmt_bitstream_axlf_fpga_mgr(struct xmgmt_main *xmm, void *axlf, size_t size)
> +{
> + int ret;
> +
> + WARN_ON(!mutex_is_locked(&xmm->lock));
> +
> + /*
> + * Should any error happen during download, we can't trust
> + * the cached xclbin any more.
> + */
> + vfree(xmm->firmware_ulp);
> + xmm->firmware_ulp = NULL;
> +
> + ret = xmgmt_process_xclbin(xmm->pdev, xmm->fmgr, axlf, XMGMT_ULP);
> + if (ret == 0)
> + xmm->firmware_ulp = axlf;
> +
> + return ret;
> +}
> +
> +static int bitstream_axlf_ioctl(struct xmgmt_main *xmm, const void __user *arg)
> +{
> + struct xmgmt_ioc_bitstream_axlf ioc_obj = { 0 };
> + struct axlf xclbin_obj = { {0} };
> + size_t copy_buffer_size = 0;
> + void *copy_buffer = NULL;
> + int ret = 0;
> +
> + if (copy_from_user((void *)&ioc_obj, arg, sizeof(ioc_obj)))
> + return -EFAULT;
> + if (copy_from_user((void *)&xclbin_obj, ioc_obj.xclbin, sizeof(xclbin_obj)))
> + return -EFAULT;
> + if (memcmp(xclbin_obj.magic, XCLBIN_VERSION2, sizeof(XCLBIN_VERSION2)))
> + return -EINVAL;
> +
> + copy_buffer_size = xclbin_obj.header.length;
> + if (copy_buffer_size > XCLBIN_MAX_SIZE || copy_buffer_size < sizeof(xclbin_obj))
ok
Tom
> + return -EINVAL;
> + if (xclbin_obj.header.version_major != XMGMT_SUPP_XCLBIN_MAJOR)
> + return -EINVAL;
> +
> + copy_buffer = vmalloc(copy_buffer_size);
> + if (!copy_buffer)
> + return -ENOMEM;
> +
> + if (copy_from_user(copy_buffer, ioc_obj.xclbin, copy_buffer_size)) {
> + vfree(copy_buffer);
> + return -EFAULT;
> + }
> +
> + ret = xmgmt_bitstream_axlf_fpga_mgr(xmm, copy_buffer, copy_buffer_size);
> + if (ret)
> + vfree(copy_buffer);
> +
> + return ret;
> +}
> +
> +static long xmgmt_main_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
> +{
> + struct xmgmt_main *xmm = filp->private_data;
> + long result = 0;
> +
> + if (_IOC_TYPE(cmd) != XMGMT_IOC_MAGIC)
> + return -ENOTTY;
> +
> + mutex_lock(&xmm->lock);
> +
> + xrt_info(xmm->pdev, "ioctl cmd %d, arg %ld", cmd, arg);
> + switch (cmd) {
> + case XMGMT_IOCICAPDOWNLOAD_AXLF:
> + result = bitstream_axlf_ioctl(xmm, (const void __user *)arg);
> + break;
> + default:
> + result = -ENOTTY;
> + break;
> + }
> +
> + mutex_unlock(&xmm->lock);
> + return result;
> +}
> +
> +static struct xrt_subdev_endpoints xrt_mgmt_main_endpoints[] = {
> + {
> + .xse_names = (struct xrt_subdev_ep_names []){
> + { .ep_name = XRT_MD_NODE_MGMT_MAIN },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_subdev_drvdata xmgmt_main_data = {
> + .xsd_dev_ops = {
> + .xsd_leaf_call = xmgmt_mainleaf_call,
> + },
> + .xsd_file_ops = {
> + .xsf_ops = {
> + .owner = THIS_MODULE,
> + .open = xmgmt_main_open,
> + .release = xmgmt_main_close,
> + .unlocked_ioctl = xmgmt_main_ioctl,
> + },
> + .xsf_dev_name = "xmgmt",
> + },
> +};
> +
> +static const struct platform_device_id xmgmt_main_id_table[] = {
> + { XMGMT_MAIN, (kernel_ulong_t)&xmgmt_main_data },
> + { },
> +};
> +
> +static struct platform_driver xmgmt_main_driver = {
> + .driver = {
> + .name = XMGMT_MAIN,
> + },
> + .probe = xmgmt_main_probe,
> + .remove = xmgmt_main_remove,
> + .id_table = xmgmt_main_id_table,
> +};
> +
> +int xmgmt_register_leaf(void)
> +{
> + return xleaf_register_driver(XRT_SUBDEV_MGMT_MAIN,
> + &xmgmt_main_driver, xrt_mgmt_main_endpoints);
> +}
> +
> +void xmgmt_unregister_leaf(void)
> +{
> + xleaf_unregister_driver(XRT_SUBDEV_MGMT_MAIN);
> +}
> diff --git a/drivers/fpga/xrt/mgmt/xmgnt.h b/drivers/fpga/xrt/mgmt/xmgnt.h
> new file mode 100644
> index 000000000000..9d7c11194745
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgmt/xmgnt.h
> @@ -0,0 +1,34 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XMGMT_XMGNT_H_
> +#define _XMGMT_XMGNT_H_
For consistency, should be shortened to _XMGMNT_H_
> +
> +#include <linux/platform_device.h>
> +#include "xmgmt-main.h"
> +
> +struct fpga_manager;
> +int xmgmt_process_xclbin(struct platform_device *pdev,
> + struct fpga_manager *fmgr,
> + const struct axlf *xclbin,
> + enum provider_kind kind);
> +void xmgmt_region_cleanup_all(struct platform_device *pdev);
> +
> +int xmgmt_hot_reset(struct platform_device *pdev);
> +
> +/* Getting dtb for specified group. Caller should vfree returned dtb .*/
> +char *xmgmt_get_dtb(struct platform_device *pdev, enum provider_kind kind);
> +char *xmgmt_get_vbnv(struct platform_device *pdev);
> +int xmgmt_get_provider_uuid(struct platform_device *pdev,
> + enum provider_kind kind, uuid_t *uuid);
> +
> +int xmgmt_register_leaf(void);
ok
> +void xmgmt_unregister_leaf(void);
> +
> +#endif /* _XMGMT_XMGNT_H_ */
> diff --git a/include/uapi/linux/xrt/xmgmt-ioctl.h b/include/uapi/linux/xrt/xmgmt-ioctl.h
> new file mode 100644
> index 000000000000..da992e581189
> --- /dev/null
> +++ b/include/uapi/linux/xrt/xmgmt-ioctl.h
> @@ -0,0 +1,46 @@
> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
> +/*
> + * Copyright (C) 2015-2021, Xilinx Inc
> + *
> + */
> +
> +/**
> + * DOC: PCIe Kernel Driver for Management Physical Function
> + * Interfaces exposed by the *xmgmt* driver are defined in file, *xmgmt-ioctl.h*.
> + * Core functionality provided by *xmgmt* driver is described in the following table:
> + *
> + * =========== ============================== ==================================
> + * Functionality ioctl request code data format
> + * =========== ============================== ==================================
> + * 1 FPGA image download XMGMT_IOCICAPDOWNLOAD_AXLF xmgmt_ioc_bitstream_axlf
> + * =========== ============================== ==================================
> + */
> +
> +#ifndef _XMGMT_IOCTL_H_
> +#define _XMGMT_IOCTL_H_
> +
> +#include <linux/ioctl.h>
> +
> +#define XMGMT_IOC_MAGIC 'X'
> +#define XMGMT_IOC_ICAP_DOWNLOAD_AXLF 0x6
> +
> +/**
> + * struct xmgmt_ioc_bitstream_axlf - load xclbin (AXLF) device image
> + * used with XMGMT_IOCICAPDOWNLOAD_AXLF ioctl
> + *
> + * @xclbin: Pointer to user's xclbin structure in memory
> + */
> +struct xmgmt_ioc_bitstream_axlf {
> + struct axlf *xclbin;
> +};
> +
> +#define XMGMT_IOCICAPDOWNLOAD_AXLF \
> + _IOW(XMGMT_IOC_MAGIC, XMGMT_IOC_ICAP_DOWNLOAD_AXLF, struct xmgmt_ioc_bitstream_axlf)
> +
> +/*
> + * The following definitions are for binary compatibility with classic XRT management driver
> + */
> +#define XCLMGMT_IOCICAPDOWNLOAD_AXLF XMGMT_IOCICAPDOWNLOAD_AXLF
> +#define xclmgmt_ioc_bitstream_axlf xmgmt_ioc_bitstream_axlf
> +
> +#endif
local use of 'regmap' conflicts with global meaning.
reword local regmap to something else.
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> Add VSEC driver. VSEC is a hardware function discovered by walking
> PCI Express configure space. A platform device node will be created
> for it. VSEC provides board logic UUID and few offset of other hardware
> functions.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/lib/xleaf/vsec.c | 388 ++++++++++++++++++++++++++++++
> 1 file changed, 388 insertions(+)
> create mode 100644 drivers/fpga/xrt/lib/xleaf/vsec.c
>
> diff --git a/drivers/fpga/xrt/lib/xleaf/vsec.c b/drivers/fpga/xrt/lib/xleaf/vsec.c
> new file mode 100644
> index 000000000000..8595d23f5710
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/vsec.c
> @@ -0,0 +1,388 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA VSEC Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#include <linux/platform_device.h>
> +#include <linux/regmap.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +
> +#define XRT_VSEC "xrt_vsec"
> +
> +#define VSEC_TYPE_UUID 0x50
> +#define VSEC_TYPE_FLASH 0x51
> +#define VSEC_TYPE_PLATINFO 0x52
> +#define VSEC_TYPE_MAILBOX 0x53
> +#define VSEC_TYPE_END 0xff
> +
> +#define VSEC_UUID_LEN 16
> +
> +#define VSEC_REG_FORMAT 0x0
> +#define VSEC_REG_LENGTH 0x4
> +#define VSEC_REG_ENTRY 0x8
> +
> +struct xrt_vsec_header {
> + u32 format;
> + u32 length;
> + u32 entry_sz;
> + u32 rsvd;
> +} __packed;
> +
> +struct xrt_vsec_entry {
> + u8 type;
> + u8 bar_rev;
> + u16 off_lo;
> + u32 off_hi;
> + u8 ver_type;
> + u8 minor;
> + u8 major;
> + u8 rsvd0;
> + u32 rsvd1;
> +} __packed;
> +
> +struct vsec_device {
> + u8 type;
> + char *ep_name;
> + ulong size;
> + char *regmap;
This element should be 'char *name;' as regmap is a different thing.
> +};
> +
> +static struct vsec_device vsec_devs[] = {
> + {
> + .type = VSEC_TYPE_UUID,
> + .ep_name = XRT_MD_NODE_BLP_ROM,
> + .size = VSEC_UUID_LEN,
> + .regmap = "vsec-uuid",
> + },
> + {
> + .type = VSEC_TYPE_FLASH,
> + .ep_name = XRT_MD_NODE_FLASH_VSEC,
> + .size = 4096,
> + .regmap = "vsec-flash",
> + },
> + {
> + .type = VSEC_TYPE_PLATINFO,
> + .ep_name = XRT_MD_NODE_PLAT_INFO,
> + .size = 4,
> + .regmap = "vsec-platinfo",
> + },
> + {
> + .type = VSEC_TYPE_MAILBOX,
> + .ep_name = XRT_MD_NODE_MAILBOX_VSEC,
> + .size = 48,
> + .regmap = "vsec-mbx",
> + },
> +};
> +
> +static const struct regmap_config vsec_regmap_config = {
> + .reg_bits = 32,
> + .val_bits = 32,
> + .reg_stride = 4,
> + .max_register = 0x1000,
At least the 0x1000 could be a #define; arguably all of these values should be.
> +};
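As a sketch of that suggestion (the macro names below are made up, not from the patch):

```c
#include <assert.h>

/* Hypothetical names for the magic numbers in vsec_regmap_config */
#define VSEC_REG_BITS		32
#define VSEC_REG_STRIDE		4
#define VSEC_REGMAP_MAX_REG	0x1000

static const unsigned int vsec_max_register = VSEC_REGMAP_MAX_REG;
```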
> +
> +struct xrt_vsec {
> + struct platform_device *pdev;
> + struct regmap *regmap;
> + u32 length;
> +
> + char *metadata;
> + char uuid[VSEC_UUID_LEN];
> + int group;
> +};
> +
> +static inline int vsec_read_entry(struct xrt_vsec *vsec, u32 index, struct xrt_vsec_entry *entry)
> +{
> + int ret;
> +
> + ret = regmap_bulk_read(vsec->regmap, sizeof(struct xrt_vsec_header) +
> + index * sizeof(struct xrt_vsec_entry), entry,
> + sizeof(struct xrt_vsec_entry) /
> + vsec_regmap_config.reg_stride);
> +
> + return ret;
> +}
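For reference, the offset arithmetic in vsec_read_entry() works out as below (struct sizes taken from the packed structs above); note regmap_bulk_read() takes a count in register-stride units, not bytes:

```c
#include <assert.h>

#define VSEC_HEADER_SZ	16u	/* sizeof(struct xrt_vsec_header): 4 x u32 */
#define VSEC_ENTRY_SZ	16u	/* sizeof(struct xrt_vsec_entry) */
#define VSEC_REG_STRIDE	4u	/* vsec_regmap_config.reg_stride */

/* Byte offset of entry 'index', just past the VSEC header */
static unsigned int vsec_entry_offset(unsigned int index)
{
	return VSEC_HEADER_SZ + index * VSEC_ENTRY_SZ;
}

/* regmap_bulk_read() count: registers per entry, not bytes */
static unsigned int vsec_entry_reg_count(void)
{
	return VSEC_ENTRY_SZ / VSEC_REG_STRIDE;
}
```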
> +
> +static inline u32 vsec_get_bar(struct xrt_vsec_entry *entry)
> +{
> + return ((entry)->bar_rev >> 4) & 0xf;
The extra () around 'entry' were needed when this was a macro; they aren't now.
Remove them here and in the next two functions.
> +}
> +
> +static inline u64 vsec_get_bar_off(struct xrt_vsec_entry *entry)
> +{
> + return (entry)->off_lo | ((u64)(entry)->off_hi << 16);
> +}
> +
> +static inline u32 vsec_get_rev(struct xrt_vsec_entry *entry)
> +{
> + return (entry)->bar_rev & 0xf;
> +}
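With the leftover macro parentheses dropped, the three helpers would read as below (userspace stdint types stand in for the kernel's u8/u16/u32/u64 here):

```c
#include <assert.h>
#include <stdint.h>

struct xrt_vsec_entry {
	uint8_t type;
	uint8_t bar_rev;
	uint16_t off_lo;
	uint32_t off_hi;
};

static inline uint32_t vsec_get_bar(struct xrt_vsec_entry *entry)
{
	return (entry->bar_rev >> 4) & 0xf;	/* BAR index: high nibble */
}

static inline uint64_t vsec_get_bar_off(struct xrt_vsec_entry *entry)
{
	return entry->off_lo | ((uint64_t)entry->off_hi << 16);
}

static inline uint32_t vsec_get_rev(struct xrt_vsec_entry *entry)
{
	return entry->bar_rev & 0xf;		/* revision: low nibble */
}

static struct xrt_vsec_entry sample_entry = {
	.bar_rev = 0x23, .off_lo = 0x1000, .off_hi = 0x2,
};
```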
> +
> +static char *type2epname(u32 type)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
> + if (vsec_devs[i].type == type)
> + return (vsec_devs[i].ep_name);
> + }
> +
> + return NULL;
> +}
> +
> +static ulong type2size(u32 type)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
> + if (vsec_devs[i].type == type)
> + return (vsec_devs[i].size);
> + }
> +
> + return 0;
> +}
> +
> +static char *type2regmap(u32 type)
> +{
> + int i;
> +
> + for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
> + if (vsec_devs[i].type == type)
> + return (vsec_devs[i].regmap);
> + }
> +
> + return NULL;
> +}
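Incidentally, the three type2* walkers could collapse into a single lookup that returns the table entry, with callers picking the field they need. A sketch only, with placeholder table contents:

```c
#include <assert.h>
#include <stddef.h>

struct vsec_device {
	unsigned char type;
	const char *ep_name;
	unsigned long size;
	const char *name;
};

static const struct vsec_device vsec_devs[] = {
	{ 0x50, "blp_rom", 16, "vsec-uuid" },	/* VSEC_TYPE_UUID */
	{ 0x51, "flash", 4096, "vsec-flash" },	/* VSEC_TYPE_FLASH */
};

#define ARRAY_SIZE(a)	(sizeof(a) / sizeof((a)[0]))

/* One table walk instead of three near-identical loops. */
static const struct vsec_device *vsec_lookup(unsigned int type)
{
	size_t i;

	for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
		if (vsec_devs[i].type == type)
			return &vsec_devs[i];
	}
	return NULL;
}
```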
> +
> +static int xrt_vsec_add_node(struct xrt_vsec *vsec,
> + void *md_blob, struct xrt_vsec_entry *p_entry)
> +{
> + struct xrt_md_endpoint ep;
> + char regmap_ver[64];
> + int ret;
> +
> + if (!type2epname(p_entry->type))
> + return -EINVAL;
> +
> + /*
> + * VSEC may have more than 1 mailbox instance for the card
> + * which has more than 1 physical function.
> + * This is not supported for now. Assuming only one mailbox
> + */
> +
> + snprintf(regmap_ver, sizeof(regmap_ver) - 1, "%d-%d.%d.%d",
> + p_entry->ver_type, p_entry->major, p_entry->minor,
> + vsec_get_rev(p_entry));
> + ep.ep_name = type2epname(p_entry->type);
> + ep.bar = vsec_get_bar(p_entry);
> + ep.bar_off = vsec_get_bar_off(p_entry);
ok
> + ep.size = type2size(p_entry->type);
> + ep.regmap = type2regmap(p_entry->type);
> + ep.regmap_ver = regmap_ver;
> + ret = xrt_md_add_endpoint(DEV(vsec->pdev), vsec->metadata, &ep);
> + if (ret)
> + xrt_err(vsec->pdev, "add ep failed, ret %d", ret);
> +
> + return ret;
> +}
> +
> +static int xrt_vsec_create_metadata(struct xrt_vsec *vsec)
> +{
> + struct xrt_vsec_entry entry;
> + int i, ret;
> +
> + ret = xrt_md_create(&vsec->pdev->dev, &vsec->metadata);
> + if (ret) {
> + xrt_err(vsec->pdev, "create metadata failed");
> + return ret;
> + }
> +
> + for (i = 0; i * sizeof(entry) < vsec->length -
> + sizeof(struct xrt_vsec_header); i++) {
> + ret = vsec_read_entry(vsec, i, &entry);
> + if (ret) {
> + xrt_err(vsec->pdev, "failed read entry %d, ret %d", i, ret);
> + goto fail;
> + }
> +
> + if (entry.type == VSEC_TYPE_END)
> + break;
> + ret = xrt_vsec_add_node(vsec, vsec->metadata, &entry);
> + if (ret)
> + goto fail;
ok
> + }
> +
> + return 0;
> +
> +fail:
> + vfree(vsec->metadata);
> + vsec->metadata = NULL;
> + return ret;
> +}
> +
> +static int xrt_vsec_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
> +{
> + int ret = 0;
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + /* Does not handle any event. */
> + break;
> + default:
> + ret = -EINVAL;
> + xrt_err(pdev, "should never be called");
> + break;
> + }
> +
> + return ret;
> +}
> +
> +static int xrt_vsec_mapio(struct xrt_vsec *vsec)
> +{
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(vsec->pdev);
> + struct resource *res = NULL;
> + void __iomem *base = NULL;
> + const u64 *bar_off;
> + const u32 *bar;
> + u64 addr;
ok
> + int ret;
> +
> + if (!pdata || xrt_md_size(DEV(vsec->pdev), pdata->xsp_dtb) == XRT_MD_INVALID_LENGTH) {
> + xrt_err(vsec->pdev, "empty metadata");
> + return -EINVAL;
> + }
> +
> + ret = xrt_md_get_prop(DEV(vsec->pdev), pdata->xsp_dtb, XRT_MD_NODE_VSEC,
> + NULL, XRT_MD_PROP_BAR_IDX, (const void **)&bar, NULL);
> + if (ret) {
> + xrt_err(vsec->pdev, "failed to get bar idx, ret %d", ret);
> + return -EINVAL;
> + }
> +
> + ret = xrt_md_get_prop(DEV(vsec->pdev), pdata->xsp_dtb, XRT_MD_NODE_VSEC,
> + NULL, XRT_MD_PROP_OFFSET, (const void **)&bar_off, NULL);
> + if (ret) {
> + xrt_err(vsec->pdev, "failed to get bar off, ret %d", ret);
> + return -EINVAL;
> + }
> +
> + xrt_info(vsec->pdev, "Map vsec at bar %d, offset 0x%llx",
> + be32_to_cpu(*bar), be64_to_cpu(*bar_off));
> +
> + xleaf_get_barres(vsec->pdev, &res, be32_to_cpu(*bar));
> + if (!res) {
> + xrt_err(vsec->pdev, "failed to get bar addr");
> + return -EINVAL;
> + }
> +
> + addr = res->start + be64_to_cpu(*bar_off);
> +
> + base = devm_ioremap(&vsec->pdev->dev, addr, vsec_regmap_config.max_register);
> + if (!base) {
> + xrt_err(vsec->pdev, "Map failed");
> + return -EIO;
> + }
> +
> + vsec->regmap = devm_regmap_init_mmio(&vsec->pdev->dev, base, &vsec_regmap_config);
> + if (IS_ERR(vsec->regmap)) {
> + xrt_err(vsec->pdev, "regmap %pR failed", res);
> + return PTR_ERR(vsec->regmap);
> + }
> +
> + ret = regmap_read(vsec->regmap, VSEC_REG_LENGTH, &vsec->length);
> + if (ret) {
> + xrt_err(vsec->pdev, "failed to read length %d", ret);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static int xrt_vsec_remove(struct platform_device *pdev)
> +{
> + struct xrt_vsec *vsec;
> +
> + vsec = platform_get_drvdata(pdev);
> +
> + if (vsec->group >= 0)
> + xleaf_destroy_group(pdev, vsec->group);
> + vfree(vsec->metadata);
> +
> + return 0;
> +}
> +
> +static int xrt_vsec_probe(struct platform_device *pdev)
> +{
> + struct xrt_vsec *vsec;
> + int ret = 0;
> +
> + vsec = devm_kzalloc(&pdev->dev, sizeof(*vsec), GFP_KERNEL);
> + if (!vsec)
> + return -ENOMEM;
> +
> + vsec->pdev = pdev;
> + vsec->group = -1;
> + platform_set_drvdata(pdev, vsec);
> +
> + ret = xrt_vsec_mapio(vsec);
> + if (ret)
> + goto failed;
> +
> + ret = xrt_vsec_create_metadata(vsec);
> + if (ret) {
> + xrt_err(pdev, "create metadata failed, ret %d", ret);
> + goto failed;
> + }
> + vsec->group = xleaf_create_group(pdev, vsec->metadata);
> + if (ret < 0) {
This is a bug: 'ret' is not set by xleaf_create_group(), so this check
never fires. Test vsec->group for a negative value instead.
Tom
> + xrt_err(pdev, "create group failed, ret %d", vsec->group);
> + ret = vsec->group;
> + goto failed;
> + }
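A userspace sketch of the fix, checking the value the call actually returned (xleaf_create_group() is stubbed here; -22 stands in for -EINVAL):

```c
#include <assert.h>

static int last_group;

/* Stub standing in for xleaf_create_group(): returns a group id,
 * or a negative errno on failure. */
static int xleaf_create_group_stub(int should_fail)
{
	return should_fail ? -22 : 7;
}

/* Corrected tail of probe: test the fresh return value in
 * last_group, not the stale 'ret' from the previous call. */
static int probe_tail(int should_fail)
{
	int ret = 0;

	last_group = xleaf_create_group_stub(should_fail);
	if (last_group < 0) {
		ret = last_group;
		return ret;		/* caller would goto failed */
	}
	return 0;
}
```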
> +
> + return 0;
> +
> +failed:
> + xrt_vsec_remove(pdev);
> +
> + return ret;
> +}
> +
> +static struct xrt_subdev_endpoints xrt_vsec_endpoints[] = {
> + {
> + .xse_names = (struct xrt_subdev_ep_names []){
> + { .ep_name = XRT_MD_NODE_VSEC },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_subdev_drvdata xrt_vsec_data = {
> + .xsd_dev_ops = {
> + .xsd_leaf_call = xrt_vsec_leaf_call,
> + },
> +};
> +
> +static const struct platform_device_id xrt_vsec_table[] = {
> + { XRT_VSEC, (kernel_ulong_t)&xrt_vsec_data },
> + { },
> +};
> +
> +static struct platform_driver xrt_vsec_driver = {
> + .driver = {
> + .name = XRT_VSEC,
> + },
> + .probe = xrt_vsec_probe,
> + .remove = xrt_vsec_remove,
> + .id_table = xrt_vsec_table,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_VSEC, vsec);
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> Add User Clock Subsystem (UCS) driver. UCS is a hardware function
ok
> discovered by walking xclbin metadata. A platform device node will be
> created for it. UCS enables/disables the dynamic region clocks.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/lib/xleaf/ucs.c | 167 +++++++++++++++++++++++++++++++
ok on removing ucs.h
> 1 file changed, 167 insertions(+)
> create mode 100644 drivers/fpga/xrt/lib/xleaf/ucs.c
>
> diff --git a/drivers/fpga/xrt/lib/xleaf/ucs.c b/drivers/fpga/xrt/lib/xleaf/ucs.c
> new file mode 100644
> index 000000000000..d91ee229e7cb
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/ucs.c
> @@ -0,0 +1,167 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA UCS Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou<[email protected]>
> + */
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/platform_device.h>
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/regmap.h>
> +#include <linux/io.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/clock.h"
> +
> +#define UCS_ERR(ucs, fmt, arg...) \
> + xrt_err((ucs)->pdev, fmt "\n", ##arg)
> +#define UCS_WARN(ucs, fmt, arg...) \
> + xrt_warn((ucs)->pdev, fmt "\n", ##arg)
> +#define UCS_INFO(ucs, fmt, arg...) \
> + xrt_info((ucs)->pdev, fmt "\n", ##arg)
> +#define UCS_DBG(ucs, fmt, arg...) \
> + xrt_dbg((ucs)->pdev, fmt "\n", ##arg)
> +
> +#define XRT_UCS "xrt_ucs"
> +
> +#define XRT_UCS_CHANNEL1_REG 0
> +#define XRT_UCS_CHANNEL2_REG 8
> +
> +#define CLK_MAX_VALUE 6400
> +
> +static const struct regmap_config ucs_regmap_config = {
> + .reg_bits = 32,
> + .val_bits = 32,
> + .reg_stride = 4,
> + .max_register = 0x1000,
> +};
> +
> +struct xrt_ucs {
> + struct platform_device *pdev;
> + struct regmap *regmap;
ok
> + struct mutex ucs_lock; /* ucs dev lock */
> +};
> +
> +static void xrt_ucs_event_cb(struct platform_device *pdev, void *arg)
> +{
> + struct xrt_event *evt = (struct xrt_event *)arg;
> + enum xrt_events e = evt->xe_evt;
> + struct platform_device *leaf;
> + enum xrt_subdev_id id;
> + int instance;
> +
> + id = evt->xe_subdev.xevt_subdev_id;
> + instance = evt->xe_subdev.xevt_subdev_instance;
> +
> + if (e != XRT_EVENT_POST_CREATION) {
> + xrt_dbg(pdev, "ignored event %d", e);
> + return;
> + }
> +
> + if (id != XRT_SUBDEV_CLOCK)
> + return;
ok
> +
> + leaf = xleaf_get_leaf_by_id(pdev, XRT_SUBDEV_CLOCK, instance);
> + if (!leaf) {
> + xrt_err(pdev, "failed to get clock subdev");
> + return;
> + }
> +
> + xleaf_call(leaf, XRT_CLOCK_VERIFY, NULL);
> + xleaf_put_leaf(pdev, leaf);
> +}
ok on removing ucs_check.
> +
> +static int ucs_enable(struct xrt_ucs *ucs)
> +{
> + int ret;
> +
> + mutex_lock(&ucs->ucs_lock);
ok
> + ret = regmap_write(ucs->regmap, XRT_UCS_CHANNEL2_REG, 1);
> + mutex_unlock(&ucs->ucs_lock);
> +
> + return ret;
> +}
> +
> +static int
> +xrt_ucs_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
ok
Looks fine.
Reviewed-by: Tom Rix <[email protected]>
> +{
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + xrt_ucs_event_cb(pdev, arg);
> + break;
> + default:
> + xrt_err(pdev, "unsupported cmd %d", cmd);
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
> +
> +static int ucs_probe(struct platform_device *pdev)
> +{
> + struct xrt_ucs *ucs = NULL;
> + void __iomem *base = NULL;
> + struct resource *res;
> +
> + ucs = devm_kzalloc(&pdev->dev, sizeof(*ucs), GFP_KERNEL);
> + if (!ucs)
> + return -ENOMEM;
> +
> + platform_set_drvdata(pdev, ucs);
> + ucs->pdev = pdev;
> + mutex_init(&ucs->ucs_lock);
> +
> + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + if (!res)
> + return -EINVAL;
> +
> + base = devm_ioremap_resource(&pdev->dev, res);
> + if (IS_ERR(base))
> + return PTR_ERR(base);
> +
> + ucs->regmap = devm_regmap_init_mmio(&pdev->dev, base, &ucs_regmap_config);
> + if (IS_ERR(ucs->regmap)) {
> + UCS_ERR(ucs, "map base %pR failed", res);
> + return PTR_ERR(ucs->regmap);
> + }
> + ucs_enable(ucs);
> +
> + return 0;
> +}
> +
> +static struct xrt_subdev_endpoints xrt_ucs_endpoints[] = {
> + {
> + .xse_names = (struct xrt_subdev_ep_names[]) {
> + { .ep_name = XRT_MD_NODE_UCS_CONTROL_STATUS },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_subdev_drvdata xrt_ucs_data = {
> + .xsd_dev_ops = {
> + .xsd_leaf_call = xrt_ucs_leaf_call,
> + },
> +};
> +
> +static const struct platform_device_id xrt_ucs_table[] = {
> + { XRT_UCS, (kernel_ulong_t)&xrt_ucs_data },
> + { },
> +};
> +
> +static struct platform_driver xrt_ucs_driver = {
> + .driver = {
> + .name = XRT_UCS,
> + },
> + .probe = ucs_probe,
> + .id_table = xrt_ucs_table,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_UCS, ucs);
Hi Tom,
On 3/30/21 8:11 AM, Tom Rix wrote:
> This was split from 'fpga: xrt: platform driver infrastructure'
>
> and fpga: xrt: management physical function driver (root)
Yes, we are trying to avoid a huge patch for review.
>
>
> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>> Contains common code for all root drivers and handles root calls from
>> platform drivers. This is part of root driver infrastructure.
>>
>> Signed-off-by: Sonal Santan <[email protected]>
>> Signed-off-by: Max Zhen <[email protected]>
>> Signed-off-by: Lizhi Hou <[email protected]>
>> ---
>> drivers/fpga/xrt/include/events.h | 45 +++
>> drivers/fpga/xrt/include/xroot.h | 117 ++++++
>> drivers/fpga/xrt/lib/subdev_pool.h | 53 +++
>> drivers/fpga/xrt/lib/xroot.c | 589 +++++++++++++++++++++++++++++
>> 4 files changed, 804 insertions(+)
>> create mode 100644 drivers/fpga/xrt/include/events.h
>> create mode 100644 drivers/fpga/xrt/include/xroot.h
>> create mode 100644 drivers/fpga/xrt/lib/subdev_pool.h
>> create mode 100644 drivers/fpga/xrt/lib/xroot.c
>>
>> diff --git a/drivers/fpga/xrt/include/events.h b/drivers/fpga/xrt/include/events.h
>> new file mode 100644
>> index 000000000000..775171a47c8e
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/include/events.h
>> @@ -0,0 +1,45 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
> ok
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#ifndef _XRT_EVENTS_H_
>> +#define _XRT_EVENTS_H_
> ok
>> +
>> +#include "subdev_id.h"
>> +
>> +/*
>> + * Event notification.
>> + */
>> +enum xrt_events {
>> + XRT_EVENT_TEST = 0, /* for testing */
>> + /*
>> + * Events related to specific subdev
>> + * Callback arg: struct xrt_event_arg_subdev
>> + */
>> + XRT_EVENT_POST_CREATION,
>> + XRT_EVENT_PRE_REMOVAL,
>> + /*
>> + * Events related to change of the whole board
>> + * Callback arg: <none>
>> + */
>> + XRT_EVENT_PRE_HOT_RESET,
>> + XRT_EVENT_POST_HOT_RESET,
>> + XRT_EVENT_PRE_GATE_CLOSE,
>> + XRT_EVENT_POST_GATE_OPEN,
>> +};
>> +
>> +struct xrt_event_arg_subdev {
>> + enum xrt_subdev_id xevt_subdev_id;
>> + int xevt_subdev_instance;
>> +};
>> +
>> +struct xrt_event {
>> + enum xrt_events xe_evt;
>> + struct xrt_event_arg_subdev xe_subdev;
>> +};
>> +
>> +#endif /* _XRT_EVENTS_H_ */
>> diff --git a/drivers/fpga/xrt/include/xroot.h b/drivers/fpga/xrt/include/xroot.h
>> new file mode 100644
>> index 000000000000..91c0aeb30bf8
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/include/xroot.h
>> @@ -0,0 +1,117 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#ifndef _XRT_ROOT_H_
>> +#define _XRT_ROOT_H_
>> +
>> +#include <linux/platform_device.h>
>> +#include <linux/pci.h>
>> +#include "subdev_id.h"
>> +#include "events.h"
>> +
>> +typedef bool (*xrt_subdev_match_t)(enum xrt_subdev_id,
>> + struct platform_device *, void *);
>> +#define XRT_SUBDEV_MATCH_PREV ((xrt_subdev_match_t)-1)
>> +#define XRT_SUBDEV_MATCH_NEXT ((xrt_subdev_match_t)-2)
>> +
>> +/*
>> + * Root calls.
>> + */
>> +enum xrt_root_cmd {
>> + /* Leaf actions. */
>> + XRT_ROOT_GET_LEAF = 0,
>> + XRT_ROOT_PUT_LEAF,
>> + XRT_ROOT_GET_LEAF_HOLDERS,
>> +
>> + /* Group actions. */
>> + XRT_ROOT_CREATE_GROUP,
>> + XRT_ROOT_REMOVE_GROUP,
>> + XRT_ROOT_LOOKUP_GROUP,
>> + XRT_ROOT_WAIT_GROUP_BRINGUP,
>> +
>> + /* Event actions. */
>> + XRT_ROOT_EVENT_SYNC,
>> + XRT_ROOT_EVENT_ASYNC,
>> +
>> + /* Device info. */
>> + XRT_ROOT_GET_RESOURCE,
>> + XRT_ROOT_GET_ID,
>> +
>> + /* Misc. */
>> + XRT_ROOT_HOT_RESET,
>> + XRT_ROOT_HWMON,
>> +};
>> +
>> +struct xrt_root_get_leaf {
>> + struct platform_device *xpigl_caller_pdev;
>> + xrt_subdev_match_t xpigl_match_cb;
>> + void *xpigl_match_arg;
>> + struct platform_device *xpigl_tgt_pdev;
>> +};
>> +
>> +struct xrt_root_put_leaf {
>> + struct platform_device *xpipl_caller_pdev;
>> + struct platform_device *xpipl_tgt_pdev;
>> +};
>> +
>> +struct xrt_root_lookup_group {
>> + struct platform_device *xpilp_pdev; /* caller's pdev */
>> + xrt_subdev_match_t xpilp_match_cb;
>> + void *xpilp_match_arg;
>> + int xpilp_grp_inst;
>> +};
>> +
>> +struct xrt_root_get_holders {
>> + struct platform_device *xpigh_pdev; /* caller's pdev */
>> + char *xpigh_holder_buf;
>> + size_t xpigh_holder_buf_len;
>> +};
>> +
>> +struct xrt_root_get_res {
>> + struct resource *xpigr_res;
>> +};
>> +
>> +struct xrt_root_get_id {
>> + unsigned short xpigi_vendor_id;
>> + unsigned short xpigi_device_id;
>> + unsigned short xpigi_sub_vendor_id;
>> + unsigned short xpigi_sub_device_id;
>> +};
>> +
>> +struct xrt_root_hwmon {
>> + bool xpih_register;
>> + const char *xpih_name;
>> + void *xpih_drvdata;
>> + const struct attribute_group **xpih_groups;
>> + struct device *xpih_hwmon_dev;
>> +};
>> +
>> +/*
>> + * Callback for leaf to make a root request. Arguments are: parent device, parent cookie, req,
>> + * and arg.
>> + */
>> +typedef int (*xrt_subdev_root_cb_t)(struct device *, void *, u32, void *);
>> +int xrt_subdev_root_request(struct platform_device *self, u32 cmd, void *arg);
>> +
>> +/*
>> + * Defines physical function (MPF / UPF) specific operations
>> + * needed in common root driver.
>> + */
>> +struct xroot_physical_function_callback {
>> + void (*xpc_hot_reset)(struct pci_dev *pdev);
>> +};
>> +
>> +int xroot_probe(struct pci_dev *pdev, struct xroot_physical_function_callback *cb, void **root);
>> +void xroot_remove(void *root);
>> +bool xroot_wait_for_bringup(void *root);
>> +int xroot_add_vsec_node(void *root, char *dtb);
>> +int xroot_create_group(void *xr, char *dtb);
>> +int xroot_add_simple_node(void *root, char *dtb, const char *endpoint);
>> +void xroot_broadcast(void *root, enum xrt_events evt);
>> +
>> +#endif /* _XRT_ROOT_H_ */
>> diff --git a/drivers/fpga/xrt/lib/subdev_pool.h b/drivers/fpga/xrt/lib/subdev_pool.h
>> new file mode 100644
>> index 000000000000..09d148e4e7ea
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/subdev_pool.h
>> @@ -0,0 +1,53 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#ifndef _XRT_SUBDEV_POOL_H_
>> +#define _XRT_SUBDEV_POOL_H_
>> +
>> +#include <linux/device.h>
>> +#include <linux/mutex.h>
>> +#include "xroot.h"
>> +
>> +/*
>> + * The struct xrt_subdev_pool manages a list of xrt_subdevs for root and group drivers.
>> + */
>> +struct xrt_subdev_pool {
>> + struct list_head xsp_dev_list;
>> + struct device *xsp_owner;
>> + struct mutex xsp_lock; /* pool lock */
>> + bool xsp_closing;
>> +};
>> +
>> +/*
>> + * Subdev pool helper functions for root and group drivers only.
>> + */
>> +void xrt_subdev_pool_init(struct device *dev,
>> + struct xrt_subdev_pool *spool);
>> +void xrt_subdev_pool_fini(struct xrt_subdev_pool *spool);
>> +int xrt_subdev_pool_get(struct xrt_subdev_pool *spool,
>> + xrt_subdev_match_t match,
>> + void *arg, struct device *holder_dev,
>> + struct platform_device **pdevp);
>> +int xrt_subdev_pool_put(struct xrt_subdev_pool *spool,
>> + struct platform_device *pdev,
>> + struct device *holder_dev);
>> +int xrt_subdev_pool_add(struct xrt_subdev_pool *spool,
>> + enum xrt_subdev_id id, xrt_subdev_root_cb_t pcb,
>> + void *pcb_arg, char *dtb);
>> +int xrt_subdev_pool_del(struct xrt_subdev_pool *spool,
>> + enum xrt_subdev_id id, int instance);
>> +ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
>> + struct platform_device *pdev,
>> + char *buf, size_t len);
>> +
>> +void xrt_subdev_pool_trigger_event(struct xrt_subdev_pool *spool,
>> + enum xrt_events evt);
>> +void xrt_subdev_pool_handle_event(struct xrt_subdev_pool *spool,
>> + struct xrt_event *evt);
>> +
>> +#endif /* _XRT_SUBDEV_POOL_H_ */
>> diff --git a/drivers/fpga/xrt/lib/xroot.c b/drivers/fpga/xrt/lib/xroot.c
>> new file mode 100644
>> index 000000000000..03407272650f
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/xroot.c
>> @@ -0,0 +1,589 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Xilinx Alveo FPGA Root Functions
>> + *
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#include <linux/module.h>
>> +#include <linux/pci.h>
>> +#include <linux/hwmon.h>
>> +#include "xroot.h"
>> +#include "subdev_pool.h"
>> +#include "group.h"
>> +#include "metadata.h"
>> +
>> +#define XROOT_PDEV(xr) ((xr)->pdev)
>> +#define XROOT_DEV(xr) (&(XROOT_PDEV(xr)->dev))
>> +#define xroot_err(xr, fmt, args...) \
>> + dev_err(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
>> +#define xroot_warn(xr, fmt, args...) \
>> + dev_warn(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
>> +#define xroot_info(xr, fmt, args...) \
>> + dev_info(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
>> +#define xroot_dbg(xr, fmt, args...) \
>> + dev_dbg(XROOT_DEV(xr), "%s: " fmt, __func__, ##args)
>> +
>> +#define XRT_VSEC_ID 0x20
>> +
> 'root' is an abstraction, 'pci' is an implementation.
>
> Consider splitting.
>
> I think this will be part of the pseudo bus, so figure out how to do root there.
Yes, we will remove all PCI-specific code from the infrastructure code
(xroot.c, subdev.c and group.c). It will be moved to xrt-mgmt.ko, which
is indeed a PCI device driver.
>
>
>> +#define XROOT_GROUP_FIRST (-1)
>> +#define XROOT_GROUP_LAST (-2)
>> +
>> +static int xroot_root_cb(struct device *, void *, u32, void *);
>> +
>> +struct xroot_evt {
>> + struct list_head list;
>> + struct xrt_event evt;
>> + struct completion comp;
>> + bool async;
>> +};
>> +
>> +struct xroot_events {
>> + struct mutex evt_lock; /* event lock */
>> + struct list_head evt_list;
>> + struct work_struct evt_work;
>> +};
>> +
>> +struct xroot_groups {
>> + struct xrt_subdev_pool pool;
>> + struct work_struct bringup_work;
> Add a comment noting that these two elements are counters, or append '_cnt' or similar to their names.
Will add '_cnt' to the names.
Thanks,
Max
>> + atomic_t bringup_pending;
>> + atomic_t bringup_failed;
>> + struct completion bringup_comp;
>> +};
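Something like this sketch of the rename (plain ints stand in for atomic_t here):

```c
#include <assert.h>

/* Renamed counters per the review; comments spell out what is counted. */
struct xroot_groups_sketch {
	int bringup_pending_cnt;	/* groups still being brought up */
	int bringup_failed_cnt;		/* groups that failed to come up */
};

static struct xroot_groups_sketch grps = { 0, 0 };
```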
>> +
>> +struct xroot {
>> + struct pci_dev *pdev;
>> + struct xroot_events events;
>> + struct xroot_groups groups;
>> + struct xroot_physical_function_callback pf_cb;
> ok
>> +};
>> +
>> +struct xroot_group_match_arg {
>> + enum xrt_subdev_id id;
>> + int instance;
>> +};
>> +
>> +static bool xroot_group_match(enum xrt_subdev_id id, struct platform_device *pdev, void *arg)
>> +{
>> + struct xroot_group_match_arg *a = (struct xroot_group_match_arg *)arg;
>> +
>> + /* pdev->id is the instance of the subdev. */
> ok
>> + return id == a->id && pdev->id == a->instance;
>> +}
>> +
>> +static int xroot_get_group(struct xroot *xr, int instance, struct platform_device **grpp)
>> +{
>> + int rc = 0;
>> + struct xrt_subdev_pool *grps = &xr->groups.pool;
>> + struct device *dev = DEV(xr->pdev);
>> + struct xroot_group_match_arg arg = { XRT_SUBDEV_GRP, instance };
>> +
>> + if (instance == XROOT_GROUP_LAST) {
>> + rc = xrt_subdev_pool_get(grps, XRT_SUBDEV_MATCH_NEXT,
>> + *grpp, dev, grpp);
>> + } else if (instance == XROOT_GROUP_FIRST) {
>> + rc = xrt_subdev_pool_get(grps, XRT_SUBDEV_MATCH_PREV,
>> + *grpp, dev, grpp);
>> + } else {
>> + rc = xrt_subdev_pool_get(grps, xroot_group_match,
>> + &arg, dev, grpp);
>> + }
>> +
>> + if (rc && rc != -ENOENT)
>> + xroot_err(xr, "failed to hold group %d: %d", instance, rc);
>> + return rc;
>> +}
>> +
>> +static void xroot_put_group(struct xroot *xr, struct platform_device *grp)
>> +{
>> + int inst = grp->id;
>> + int rc = xrt_subdev_pool_put(&xr->groups.pool, grp, DEV(xr->pdev));
>> +
>> + if (rc)
>> + xroot_err(xr, "failed to release group %d: %d", inst, rc);
>> +}
>> +
>> +static int xroot_trigger_event(struct xroot *xr, struct xrt_event *e, bool async)
>> +{
>> + struct xroot_evt *enew = vzalloc(sizeof(*enew));
>> +
>> + if (!enew)
>> + return -ENOMEM;
>> +
>> + enew->evt = *e;
>> + enew->async = async;
>> + init_completion(&enew->comp);
>> +
>> + mutex_lock(&xr->events.evt_lock);
>> + list_add(&enew->list, &xr->events.evt_list);
>> + mutex_unlock(&xr->events.evt_lock);
>> +
>> + schedule_work(&xr->events.evt_work);
>> +
>> + if (async)
>> + return 0;
>> +
>> + wait_for_completion(&enew->comp);
>> + vfree(enew);
>> + return 0;
>> +}
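The sync/async ownership rule shared by xroot_trigger_event() and xroot_event_work() can be sketched without the kernel plumbing: the worker frees async events itself, while sync events are completed and then freed by the waiting caller. A hypothetical single-threaded model:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

struct evt_sketch {
	bool async;
	bool handled;
	bool completed;
};

/* Models xroot_event_work(): handle the event, then either free it
 * (async) or signal the waiting caller (sync, like complete()). */
static void worker_handle(struct evt_sketch *e, bool *freed_by_worker)
{
	e->handled = true;
	if (e->async) {
		free(e);
		*freed_by_worker = true;
	} else {
		e->completed = true;
		*freed_by_worker = false;
	}
}

static bool sync_event_roundtrip(void)
{
	struct evt_sketch *e = calloc(1, sizeof(*e));
	bool freed;
	bool ok;

	worker_handle(e, &freed);
	ok = e->handled && e->completed && !freed;
	free(e);	/* caller frees sync events after waiting */
	return ok;
}

static bool async_event_roundtrip(void)
{
	struct evt_sketch *e = calloc(1, sizeof(*e));
	bool freed;

	e->async = true;
	worker_handle(e, &freed);
	return freed;	/* worker owned and freed it */
}
```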
>> +
>> +static void
>> +xroot_group_trigger_event(struct xroot *xr, int inst, enum xrt_events e)
>> +{
>> + int ret;
>> + struct platform_device *pdev = NULL;
>> + struct xrt_event evt = { 0 };
>> +
>> + WARN_ON(inst < 0);
>> + /* Only triggers subdev specific events. */
>> + if (e != XRT_EVENT_POST_CREATION && e != XRT_EVENT_PRE_REMOVAL) {
>> + xroot_err(xr, "invalid event %d", e);
>> + return;
>> + }
>> +
>> + ret = xroot_get_group(xr, inst, &pdev);
>> + if (ret)
>> + return;
>> +
>> + /* Triggers event for children, first. */
>> + xleaf_call(pdev, XRT_GROUP_TRIGGER_EVENT, (void *)(uintptr_t)e);
> ok
>> +
>> + /* Triggers event for itself. */
>> + evt.xe_evt = e;
>> + evt.xe_subdev.xevt_subdev_id = XRT_SUBDEV_GRP;
>> + evt.xe_subdev.xevt_subdev_instance = inst;
>> + xroot_trigger_event(xr, &evt, false);
>> +
>> + xroot_put_group(xr, pdev);
>> +}
>> +
>> +int xroot_create_group(void *root, char *dtb)
>> +{
>> + struct xroot *xr = (struct xroot *)root;
>> + int ret;
>> +
>> + atomic_inc(&xr->groups.bringup_pending);
>> + ret = xrt_subdev_pool_add(&xr->groups.pool, XRT_SUBDEV_GRP, xroot_root_cb, xr, dtb);
>> + if (ret >= 0) {
>> + schedule_work(&xr->groups.bringup_work);
>> + } else {
>> + atomic_dec(&xr->groups.bringup_pending);
>> + atomic_inc(&xr->groups.bringup_failed);
>> + xroot_err(xr, "failed to create group: %d", ret);
>> + }
>> + return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(xroot_create_group);
>> +
>> +static int xroot_destroy_single_group(struct xroot *xr, int instance)
>> +{
> ok as-is
>> + struct platform_device *pdev = NULL;
>> + int ret;
>> +
>> + WARN_ON(instance < 0);
>> + ret = xroot_get_group(xr, instance, &pdev);
>> + if (ret)
>> + return ret;
>> +
>> + xroot_group_trigger_event(xr, instance, XRT_EVENT_PRE_REMOVAL);
>> +
>> + /* Now tear down all children in this group. */
>> + ret = xleaf_call(pdev, XRT_GROUP_FINI_CHILDREN, NULL);
>> + xroot_put_group(xr, pdev);
>> + if (!ret)
>> + ret = xrt_subdev_pool_del(&xr->groups.pool, XRT_SUBDEV_GRP, instance);
>> +
>> + return ret;
>> +}
>> +
>> +static int xroot_destroy_group(struct xroot *xr, int instance)
>> +{
>> + struct platform_device *target = NULL;
>> + struct platform_device *deps = NULL;
>> + int ret;
>> +
>> + WARN_ON(instance < 0);
>> + /*
>> + * Make sure target group exists and can't go away before
>> + * we remove its dependents
>> + */
>> + ret = xroot_get_group(xr, instance, &target);
>> + if (ret)
>> + return ret;
>> +
>> + /*
>> + * Remove all groups that depend on the target one.
>> + * Assuming subdevs in higher-ID groups can depend on ones in
>> + * lower-ID groups, we remove them in reverse order.
>> + */
>> + while (xroot_get_group(xr, XROOT_GROUP_LAST, &deps) != -ENOENT) {
>> + int inst = deps->id;
>> +
>> + xroot_put_group(xr, deps);
>> + /* Reached the target group instance, stop here. */
> ok
>> + if (instance == inst)
>> + break;
>> + xroot_destroy_single_group(xr, inst);
>> + deps = NULL;
>> + }
>> +
>> + /* Now we can remove the target group. */
>> + xroot_put_group(xr, target);
>> + return xroot_destroy_single_group(xr, instance);
>> +}
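The teardown order above, walking from the highest group id down and removing dependents until the target is reached, reduces to the following sketch (ids assumed ascending, as the comment in the patch assumes):

```c
#include <assert.h>

static const int sample_ids[4] = { 0, 1, 2, 3 };
static int sample_out[4];

/* Collect the removal order used by xroot_destroy_group():
 * dependents (higher ids) first, the target itself last. */
static int removal_order(const int *ids, int n, int target, int *out)
{
	int count = 0;
	int i;

	for (i = n - 1; i >= 0; i--) {	/* XROOT_GROUP_LAST first */
		if (ids[i] == target)
			break;		/* stop; target is removed last */
		out[count++] = ids[i];
	}
	out[count++] = target;
	return count;
}
```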
>> +
>> +static int xroot_lookup_group(struct xroot *xr,
>> + struct xrt_root_lookup_group *arg)
>> +{
>> + int rc = -ENOENT;
>> + struct platform_device *grp = NULL;
>> +
>> + while (rc < 0 && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
>> + if (arg->xpilp_match_cb(XRT_SUBDEV_GRP, grp, arg->xpilp_match_arg))
>> + rc = grp->id;
>> + xroot_put_group(xr, grp);
>> + }
>> + return rc;
>> +}
>> +
>> +static void xroot_event_work(struct work_struct *work)
>> +{
>> + struct xroot_evt *tmp;
>> + struct xroot *xr = container_of(work, struct xroot, events.evt_work);
>> +
>> + mutex_lock(&xr->events.evt_lock);
>> + while (!list_empty(&xr->events.evt_list)) {
>> + tmp = list_first_entry(&xr->events.evt_list, struct xroot_evt, list);
>> + list_del(&tmp->list);
>> + mutex_unlock(&xr->events.evt_lock);
>> +
>> + xrt_subdev_pool_handle_event(&xr->groups.pool, &tmp->evt);
>> +
>> + if (tmp->async)
>> + vfree(tmp);
>> + else
>> + complete(&tmp->comp);
>> +
>> + mutex_lock(&xr->events.evt_lock);
>> + }
>> + mutex_unlock(&xr->events.evt_lock);
>> +}
>> +
>> +static void xroot_event_init(struct xroot *xr)
>> +{
>> + INIT_LIST_HEAD(&xr->events.evt_list);
>> + mutex_init(&xr->events.evt_lock);
>> + INIT_WORK(&xr->events.evt_work, xroot_event_work);
>> +}
>> +
>> +static void xroot_event_fini(struct xroot *xr)
>> +{
>> + flush_scheduled_work();
>> + WARN_ON(!list_empty(&xr->events.evt_list));
>> +}
>> +
>> +static int xroot_get_leaf(struct xroot *xr, struct xrt_root_get_leaf *arg)
>> +{
>> + int rc = -ENOENT;
>> + struct platform_device *grp = NULL;
>> +
>> + while (rc && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
>> + rc = xleaf_call(grp, XRT_GROUP_GET_LEAF, arg);
>> + xroot_put_group(xr, grp);
>> + }
>> + return rc;
>> +}
>> +
>> +static int xroot_put_leaf(struct xroot *xr, struct xrt_root_put_leaf *arg)
>> +{
>> + int rc = -ENOENT;
>> + struct platform_device *grp = NULL;
>> +
>> + while (rc && xroot_get_group(xr, XROOT_GROUP_LAST, &grp) != -ENOENT) {
>> + rc = xleaf_call(grp, XRT_GROUP_PUT_LEAF, arg);
>> + xroot_put_group(xr, grp);
>> + }
>> + return rc;
>> +}
>> +
>> +static int xroot_root_cb(struct device *dev, void *parg, enum xrt_root_cmd cmd, void *arg)
>> +{
>> + struct xroot *xr = (struct xroot *)parg;
>> + int rc = 0;
>> +
>> + switch (cmd) {
>> + /* Leaf actions. */
>> + case XRT_ROOT_GET_LEAF: {
>> + struct xrt_root_get_leaf *getleaf = (struct xrt_root_get_leaf *)arg;
>> +
>> + rc = xroot_get_leaf(xr, getleaf);
>> + break;
>> + }
>> + case XRT_ROOT_PUT_LEAF: {
>> + struct xrt_root_put_leaf *putleaf = (struct xrt_root_put_leaf *)arg;
>> +
>> + rc = xroot_put_leaf(xr, putleaf);
>> + break;
>> + }
>> + case XRT_ROOT_GET_LEAF_HOLDERS: {
>> + struct xrt_root_get_holders *holders = (struct xrt_root_get_holders *)arg;
>> +
>> + rc = xrt_subdev_pool_get_holders(&xr->groups.pool,
>> + holders->xpigh_pdev,
>> + holders->xpigh_holder_buf,
>> + holders->xpigh_holder_buf_len);
>> + break;
>> + }
>> +
>> + /* Group actions. */
>> + case XRT_ROOT_CREATE_GROUP:
>> + rc = xroot_create_group(xr, (char *)arg);
>> + break;
>> + case XRT_ROOT_REMOVE_GROUP:
>> + rc = xroot_destroy_group(xr, (int)(uintptr_t)arg);
>> + break;
>> + case XRT_ROOT_LOOKUP_GROUP: {
>> + struct xrt_root_lookup_group *getgrp = (struct xrt_root_lookup_group *)arg;
>> +
>> + rc = xroot_lookup_group(xr, getgrp);
>> + break;
>> + }
>> + case XRT_ROOT_WAIT_GROUP_BRINGUP:
>> + rc = xroot_wait_for_bringup(xr) ? 0 : -EINVAL;
>> + break;
>> +
>> + /* Event actions. */
>> + case XRT_ROOT_EVENT_SYNC:
>> + case XRT_ROOT_EVENT_ASYNC: {
>> + bool async = (cmd == XRT_ROOT_EVENT_ASYNC);
>> + struct xrt_event *evt = (struct xrt_event *)arg;
>> +
>> + rc = xroot_trigger_event(xr, evt, async);
>> + break;
>> + }
>> +
>> + /* Device info. */
>> + case XRT_ROOT_GET_RESOURCE: {
>> + struct xrt_root_get_res *res = (struct xrt_root_get_res *)arg;
>> +
>> + res->xpigr_res = xr->pdev->resource;
>> + break;
>> + }
>> + case XRT_ROOT_GET_ID: {
>> + struct xrt_root_get_id *id = (struct xrt_root_get_id *)arg;
>> +
>> + id->xpigi_vendor_id = xr->pdev->vendor;
>> + id->xpigi_device_id = xr->pdev->device;
>> + id->xpigi_sub_vendor_id = xr->pdev->subsystem_vendor;
>> + id->xpigi_sub_device_id = xr->pdev->subsystem_device;
>> + break;
>> + }
>> +
>> + /* MISC generic PCIE driver functions. */
>> + case XRT_ROOT_HOT_RESET: {
>> + xr->pf_cb.xpc_hot_reset(xr->pdev);
>> + break;
>> + }
>> + case XRT_ROOT_HWMON: {
>> + struct xrt_root_hwmon *hwmon = (struct xrt_root_hwmon *)arg;
>> +
>> + if (hwmon->xpih_register) {
>> + hwmon->xpih_hwmon_dev =
>> + hwmon_device_register_with_info(DEV(xr->pdev),
>> + hwmon->xpih_name,
>> + hwmon->xpih_drvdata,
>> + NULL,
>> + hwmon->xpih_groups);
>> + } else {
>> + hwmon_device_unregister(hwmon->xpih_hwmon_dev);
>> + }
>> + break;
>> + }
>> +
>> + default:
>> + xroot_err(xr, "unknown IOCTL cmd %d", cmd);
>> + rc = -EINVAL;
>> + break;
>> + }
>> +
>> + return rc;
>> +}
>> +
>> +static void xroot_bringup_group_work(struct work_struct *work)
>> +{
>> + struct platform_device *pdev = NULL;
>> + struct xroot *xr = container_of(work, struct xroot, groups.bringup_work);
>> +
>> + while (xroot_get_group(xr, XROOT_GROUP_FIRST, &pdev) != -ENOENT) {
>> + int r, i;
>> +
>> + i = pdev->id;
>> + r = xleaf_call(pdev, XRT_GROUP_INIT_CHILDREN, NULL);
>> + xroot_put_group(xr, pdev);
>> + if (r == -EEXIST)
>> + continue; /* Already brought up, nothing to do. */
>> + if (r)
>> + atomic_inc(&xr->groups.bringup_failed);
>> +
>> + xroot_group_trigger_event(xr, i, XRT_EVENT_POST_CREATION);
>> +
>> + if (atomic_dec_and_test(&xr->groups.bringup_pending))
>> + complete(&xr->groups.bringup_comp);
>> + }
>> +}
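For anyone following the bringup accounting above: the single wake-up relies on atomic_dec_and_test() returning true exactly when the counter reaches zero. A minimal userspace sketch of that semantic (C11 atomics; the names here are illustrative, not driver code):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative only: mirrors the kernel's atomic_dec_and_test(), which
 * decrements and returns true exactly when the counter reaches zero. */
static bool dec_and_test(atomic_int *v)
{
	/* atomic_fetch_sub() returns the value *before* the decrement. */
	return atomic_fetch_sub(v, 1) == 1;
}
```

So with bringup_pending initialized to the number of groups, only the last group to finish completes bringup_comp.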
>> +
>> +static void xroot_groups_init(struct xroot *xr)
> ok
>> +{
>> + xrt_subdev_pool_init(DEV(xr->pdev), &xr->groups.pool);
>> + INIT_WORK(&xr->groups.bringup_work, xroot_bringup_group_work);
>> + atomic_set(&xr->groups.bringup_pending, 0);
>> + atomic_set(&xr->groups.bringup_failed, 0);
>> + init_completion(&xr->groups.bringup_comp);
>> +}
>> +
>> +static void xroot_groups_fini(struct xroot *xr)
>> +{
>> + flush_scheduled_work();
>> + xrt_subdev_pool_fini(&xr->groups.pool);
>> +}
>> +
>> +int xroot_add_vsec_node(void *root, char *dtb)
>> +{
>> + struct xroot *xr = (struct xroot *)root;
>> + struct device *dev = DEV(xr->pdev);
>> + struct xrt_md_endpoint ep = { 0 };
>> + int cap = 0, ret = 0;
>> + u32 off_low, off_high, vsec_bar, header;
>> + u64 vsec_off;
>> +
>> + while ((cap = pci_find_next_ext_capability(xr->pdev, cap, PCI_EXT_CAP_ID_VNDR))) {
>> + pci_read_config_dword(xr->pdev, cap + PCI_VNDR_HEADER, &header);
>> + if (PCI_VNDR_HEADER_ID(header) == XRT_VSEC_ID)
>> + break;
>> + }
>> + if (!cap) {
>> + xroot_info(xr, "No Vendor Specific Capability.");
>> + return -ENOENT;
>> + }
>> +
>> + if (pci_read_config_dword(xr->pdev, cap + 8, &off_low) ||
>> + pci_read_config_dword(xr->pdev, cap + 12, &off_high)) {
>> + xroot_err(xr, "pci_read vendor specific failed.");
>> + return -EINVAL;
>> + }
>> +
>> + ep.ep_name = XRT_MD_NODE_VSEC;
>> + ret = xrt_md_add_endpoint(dev, dtb, &ep);
>> + if (ret) {
>> + xroot_err(xr, "add vsec metadata failed, ret %d", ret);
>> + goto failed;
>> + }
>> +
>> + vsec_bar = cpu_to_be32(off_low & 0xf);
>> + ret = xrt_md_set_prop(dev, dtb, XRT_MD_NODE_VSEC, NULL,
>> + XRT_MD_PROP_BAR_IDX, &vsec_bar, sizeof(vsec_bar));
>> + if (ret) {
>> + xroot_err(xr, "add vsec bar idx failed, ret %d", ret);
>> + goto failed;
>> + }
>> +
>> + vsec_off = cpu_to_be64(((u64)off_high << 32) | (off_low & ~0xfU));
>> + ret = xrt_md_set_prop(dev, dtb, XRT_MD_NODE_VSEC, NULL,
>> + XRT_MD_PROP_OFFSET, &vsec_off, sizeof(vsec_off));
>> + if (ret) {
>> + xroot_err(xr, "add vsec offset failed, ret %d", ret);
>> + goto failed;
>> + }
>> +
>> +failed:
>> + return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(xroot_add_vsec_node);
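For reference, the BAR/offset packing decoded above can be checked in isolation. This is a hedged standalone sketch (helper names invented); the driver additionally converts the values with cpu_to_be32()/cpu_to_be64() before storing them in the device tree:

```c
#include <stdint.h>

/* The low 4 bits of the low dword carry the BAR index; the rest of the
 * low dword plus the high dword form the 64-bit offset within that BAR. */
static uint32_t vsec_bar_index(uint32_t off_low)
{
	return off_low & 0xf;
}

static uint64_t vsec_bar_offset(uint32_t off_low, uint32_t off_high)
{
	return ((uint64_t)off_high << 32) | (off_low & ~0xfU);
}
```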
>> +
>> +int xroot_add_simple_node(void *root, char *dtb, const char *endpoint)
>> +{
>> + struct xroot *xr = (struct xroot *)root;
>> + struct device *dev = DEV(xr->pdev);
>> + struct xrt_md_endpoint ep = { 0 };
>> + int ret = 0;
>> +
>> + ep.ep_name = endpoint;
>> + ret = xrt_md_add_endpoint(dev, dtb, &ep);
>> + if (ret)
>> + xroot_err(xr, "add %s failed, ret %d", endpoint, ret);
>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(xroot_add_simple_node);
>> +
>> +bool xroot_wait_for_bringup(void *root)
>> +{
>> + struct xroot *xr = (struct xroot *)root;
>> +
>> + wait_for_completion(&xr->groups.bringup_comp);
>> + return atomic_read(&xr->groups.bringup_failed) == 0;
> ok
>
> Tom
>
>> +}
>> +EXPORT_SYMBOL_GPL(xroot_wait_for_bringup);
>> +
>> +int xroot_probe(struct pci_dev *pdev, struct xroot_physical_function_callback *cb, void **root)
>> +{
>> + struct device *dev = DEV(pdev);
>> + struct xroot *xr = NULL;
>> +
>> + dev_info(dev, "%s: probing...", __func__);
>> +
>> + xr = devm_kzalloc(dev, sizeof(*xr), GFP_KERNEL);
>> + if (!xr)
>> + return -ENOMEM;
>> +
>> + xr->pdev = pdev;
>> + xr->pf_cb = *cb;
>> + xroot_groups_init(xr);
>> + xroot_event_init(xr);
>> +
>> + *root = xr;
>> + return 0;
>> +}
>> +EXPORT_SYMBOL_GPL(xroot_probe);
>> +
>> +void xroot_remove(void *root)
>> +{
>> + struct xroot *xr = (struct xroot *)root;
>> + struct platform_device *grp = NULL;
>> +
>> + xroot_info(xr, "leaving...");
>> +
>> + if (xroot_get_group(xr, XROOT_GROUP_FIRST, &grp) == 0) {
>> + int instance = grp->id;
>> +
>> + xroot_put_group(xr, grp);
>> + xroot_destroy_group(xr, instance);
>> + }
>> +
>> + xroot_event_fini(xr);
>> + xroot_groups_fini(xr);
>> +}
>> +EXPORT_SYMBOL_GPL(xroot_remove);
>> +
>> +void xroot_broadcast(void *root, enum xrt_events evt)
>> +{
>> + struct xroot *xr = (struct xroot *)root;
>> + struct xrt_event e = { 0 };
>> +
>> + /* Root pf driver only broadcasts below two events. */
>> + if (evt != XRT_EVENT_POST_CREATION && evt != XRT_EVENT_PRE_REMOVAL) {
>> + xroot_info(xr, "invalid event %d", evt);
>> + return;
>> + }
>> +
>> + e.xe_evt = evt;
>> + e.xe_subdev.xevt_subdev_id = XRT_ROOT;
>> + e.xe_subdev.xevt_subdev_instance = 0;
>> + xroot_trigger_event(xr, &e, false);
>> +}
>> +EXPORT_SYMBOL_GPL(xroot_broadcast);
Hi Tom,
On 03/28/2021 08:30 AM, Tom Rix wrote:
> Do not reorder function definitions, this makes comparing changes from the previous patchset difficult.
>
> A general issue with returning consistent error codes. There are several cases where fdt_* code are not translated.
Sure. Will fix.
>
> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>> XRT drivers use device tree as metadata format to discover HW subsystems
>> behind PCIe BAR. Thus libfdt functions are called for the driver to parse
>> device tree blob.
> to parse the device
Will fix
>> Signed-off-by: Sonal Santan <[email protected]>
>> Signed-off-by: Max Zhen <[email protected]>
>> Signed-off-by: Lizhi Hou <[email protected]>
>> ---
>> drivers/fpga/xrt/include/metadata.h | 233 ++++++++++++
>> drivers/fpga/xrt/metadata/metadata.c | 545 +++++++++++++++++++++++++++
>> 2 files changed, 778 insertions(+)
>> create mode 100644 drivers/fpga/xrt/include/metadata.h
>> create mode 100644 drivers/fpga/xrt/metadata/metadata.c
>>
>> diff --git a/drivers/fpga/xrt/include/metadata.h b/drivers/fpga/xrt/include/metadata.h
>> new file mode 100644
>> index 000000000000..479e47960c61
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/include/metadata.h
>> @@ -0,0 +1,233 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Lizhi Hou <[email protected]>
>> + */
>> +
>> +#ifndef _XRT_METADATA_H
>> +#define _XRT_METADATA_H
>> +
>> +#include <linux/device.h>
>> +#include <linux/vmalloc.h>
>> +#include <linux/uuid.h>
>> +
>> +#define XRT_MD_INVALID_LENGTH (~0UL)
>> +
>> +/* metadata properties */
>> +#define XRT_MD_PROP_BAR_IDX "pcie_bar_mapping"
>> +#define XRT_MD_PROP_COMPATIBLE "compatible"
>> +#define XRT_MD_PROP_HWICAP "axi_hwicap"
>> +#define XRT_MD_PROP_INTERFACE_UUID "interface_uuid"
>> +#define XRT_MD_PROP_INTERRUPTS "interrupts"
>> +#define XRT_MD_PROP_IO_OFFSET "reg"
>> +#define XRT_MD_PROP_LOGIC_UUID "logic_uuid"
>> +#define XRT_MD_PROP_PDI_CONFIG "pdi_config_mem"
>> +#define XRT_MD_PROP_PF_NUM "pcie_physical_function"
>> +#define XRT_MD_PROP_VERSION_MAJOR "firmware_version_major"
>> +
>> +/* non IP nodes */
>> +#define XRT_MD_NODE_ENDPOINTS "addressable_endpoints"
>> +#define XRT_MD_NODE_FIRMWARE "firmware"
>> +#define XRT_MD_NODE_INTERFACES "interfaces"
>> +#define XRT_MD_NODE_PARTITION_INFO "partition_info"
>> +
>> +/*
>> + * IP nodes
>> + * AF: AXI Firewall
>> + * CMC: Card Management Controller
>> + * ERT: Embedded Runtime
> * EP: End Point
Will add
>> + * PLP: Provider Reconfigurable Partition
>> + * ULP: User Reconfigurable Partition
>> + */
>> +#define XRT_MD_NODE_ADDR_TRANSLATOR "ep_remap_data_c2h_00"
>> +#define XRT_MD_NODE_AF_BLP_CTRL_MGMT "ep_firewall_blp_ctrl_mgmt_00"
>> +#define XRT_MD_NODE_AF_BLP_CTRL_USER "ep_firewall_blp_ctrl_user_00"
>> +#define XRT_MD_NODE_AF_CTRL_DEBUG "ep_firewall_ctrl_debug_00"
>> +#define XRT_MD_NODE_AF_CTRL_MGMT "ep_firewall_ctrl_mgmt_00"
>> +#define XRT_MD_NODE_AF_CTRL_USER "ep_firewall_ctrl_user_00"
>> +#define XRT_MD_NODE_AF_DATA_C2H "ep_firewall_data_c2h_00"
> c2h ?
Card to host. I will add a comment.
>> +#define XRT_MD_NODE_AF_DATA_H2C "ep_firewall_data_h2c_00"
>> +#define XRT_MD_NODE_AF_DATA_M2M "ep_firewall_data_m2m_00"
>> +#define XRT_MD_NODE_AF_DATA_P2P "ep_firewall_data_p2p_00"
>> +#define XRT_MD_NODE_CLKFREQ_HBM "ep_freq_cnt_aclk_hbm_00"
>> +#define XRT_MD_NODE_CLKFREQ_K1 "ep_freq_cnt_aclk_kernel_00"
>> +#define XRT_MD_NODE_CLKFREQ_K2 "ep_freq_cnt_aclk_kernel_01"
>> +#define XRT_MD_NODE_CLK_KERNEL1 "ep_aclk_kernel_00"
>> +#define XRT_MD_NODE_CLK_KERNEL2 "ep_aclk_kernel_01"
>> +#define XRT_MD_NODE_CLK_KERNEL3 "ep_aclk_hbm_00"
> hbm ?
>
> unusual acronyms should be documented.
High bandwidth memory. I will add a comment.
>
>> +#define XRT_MD_NODE_CLK_SHUTDOWN "ep_aclk_shutdown_00"
>> +#define XRT_MD_NODE_CMC_FW_MEM "ep_cmc_firmware_mem_00"
>> +#define XRT_MD_NODE_CMC_MUTEX "ep_cmc_mutex_00"
>> +#define XRT_MD_NODE_CMC_REG "ep_cmc_regmap_00"
>> +#define XRT_MD_NODE_CMC_RESET "ep_cmc_reset_00"
>> +#define XRT_MD_NODE_DDR_CALIB "ep_ddr_mem_calib_00"
>> +#define XRT_MD_NODE_DDR4_RESET_GATE "ep_ddr_mem_srsr_gate_00"
>> +#define XRT_MD_NODE_ERT_BASE "ep_ert_base_address_00"
>> +#define XRT_MD_NODE_ERT_CQ_MGMT "ep_ert_command_queue_mgmt_00"
>> +#define XRT_MD_NODE_ERT_CQ_USER "ep_ert_command_queue_user_00"
>> +#define XRT_MD_NODE_ERT_FW_MEM "ep_ert_firmware_mem_00"
>> +#define XRT_MD_NODE_ERT_RESET "ep_ert_reset_00"
>> +#define XRT_MD_NODE_ERT_SCHED "ep_ert_sched_00"
>> +#define XRT_MD_NODE_FLASH "ep_card_flash_program_00"
>> +#define XRT_MD_NODE_FPGA_CONFIG "ep_fpga_configuration_00"
>> +#define XRT_MD_NODE_GAPPING "ep_gapping_demand_00"
>> +#define XRT_MD_NODE_GATE_PLP "ep_pr_isolate_plp_00"
>> +#define XRT_MD_NODE_GATE_ULP "ep_pr_isolate_ulp_00"
>> +#define XRT_MD_NODE_KDMA_CTRL "ep_kdma_ctrl_00"
>> +#define XRT_MD_NODE_MAILBOX_MGMT "ep_mailbox_mgmt_00"
>> +#define XRT_MD_NODE_MAILBOX_USER "ep_mailbox_user_00"
>> +#define XRT_MD_NODE_MAILBOX_XRT "ep_mailbox_user_to_ert_00"
>> +#define XRT_MD_NODE_MSIX "ep_msix_00"
>> +#define XRT_MD_NODE_P2P "ep_p2p_00"
>> +#define XRT_MD_NODE_PCIE_MON "ep_pcie_link_mon_00"
>> +#define XRT_MD_NODE_PMC_INTR "ep_pmc_intr_00"
>> +#define XRT_MD_NODE_PMC_MUX "ep_pmc_mux_00"
>> +#define XRT_MD_NODE_QDMA "ep_qdma_00"
>> +#define XRT_MD_NODE_QDMA4 "ep_qdma4_00"
>> +#define XRT_MD_NODE_REMAP_P2P "ep_remap_p2p_00"
>> +#define XRT_MD_NODE_STM "ep_stream_traffic_manager_00"
>> +#define XRT_MD_NODE_STM4 "ep_stream_traffic_manager4_00"
>> +#define XRT_MD_NODE_SYSMON "ep_cmp_sysmon_00"
>> +#define XRT_MD_NODE_XDMA "ep_xdma_00"
>> +#define XRT_MD_NODE_XVC_PUB "ep_debug_bscan_user_00"
>> +#define XRT_MD_NODE_XVC_PRI "ep_debug_bscan_mgmt_00"
>> +#define XRT_MD_NODE_UCS_CONTROL_STATUS "ep_ucs_control_status_00"
>> +
>> +/* endpoint regmaps */
>> +#define XRT_MD_REGMAP_DDR_SRSR "drv_ddr_srsr"
>> +#define XRT_MD_REGMAP_CLKFREQ "freq_cnt"
> clock frequency vs frequency count ?
>
> is this ok?
Yes. "freq_cnt" has been used by the hardware tools that generate the
metadata. It stands for clock frequency counter.
>
>> +
>> +/* driver defined endpoints */
>> +#define XRT_MD_NODE_BLP_ROM "drv_ep_blp_rom_00"
>> +#define XRT_MD_NODE_DDR_SRSR "drv_ep_ddr_srsr"
>> +#define XRT_MD_NODE_FLASH_VSEC "drv_ep_card_flash_program_00"
>> +#define XRT_MD_NODE_GOLDEN_VER "drv_ep_golden_ver_00"
>> +#define XRT_MD_NODE_MAILBOX_VSEC "drv_ep_mailbox_vsec_00"
>> +#define XRT_MD_NODE_MGMT_MAIN "drv_ep_mgmt_main_00"
>> +#define XRT_MD_NODE_PLAT_INFO "drv_ep_platform_info_mgmt_00"
>> +#define XRT_MD_NODE_PARTITION_INFO_BLP "partition_info_0"
>> +#define XRT_MD_NODE_PARTITION_INFO_PLP "partition_info_1"
>> +#define XRT_MD_NODE_TEST "drv_ep_test_00"
>> +#define XRT_MD_NODE_VSEC "drv_ep_vsec_00"
>> +#define XRT_MD_NODE_VSEC_GOLDEN "drv_ep_vsec_golden_00"
>> +
>> +/* driver defined properties */
>> +#define XRT_MD_PROP_OFFSET "drv_offset"
>> +#define XRT_MD_PROP_CLK_FREQ "drv_clock_frequency"
>> +#define XRT_MD_PROP_CLK_CNT "drv_clock_frequency_counter"
>> +#define XRT_MD_PROP_VBNV "vbnv"
>> +#define XRT_MD_PROP_VROM "vrom"
>> +#define XRT_MD_PROP_PARTITION_LEVEL "partition_level"
>> +
>> +struct xrt_md_endpoint {
>> + const char *ep_name;
>> + u32 bar;
>> + u64 bar_off;
>> + ulong size;
> bar_off changed from long to u64.
>
> should bar and size both be changed to u64 ?
bar is bar index and u32 should be good enough for it. I will change
size to u64 and rename 'bar' to 'bar_index'.
>
>> + char *regmap;
> It seems like this is really a compatibility string and not a regmap.
Yes. I will rename 'regmap' to 'compat' and 'regmap_ver' to 'compat_ver'
>
>> + char *regmap_ver;
>> +};
>> +
>> +/* Note: res_id is defined by leaf driver and must start with 0. */
>> +struct xrt_iores_map {
>> + char *res_name;
>> + int res_id;
>> +};
>> +
>> +static inline int xrt_md_res_name2id(const struct xrt_iores_map *res_map,
>> + int entry_num, const char *res_name)
>> +{
>> + int i;
>> +
>> + for (i = 0; i < entry_num; i++) {
>> + if (!strncmp(res_name, res_map->res_name, strlen(res_map->res_name) + 1))
>> + return res_map->res_id;
>> + res_map++;
>> + }
>> + return -1;
>> +}
>> +
>> +static inline const char *
>> +xrt_md_res_id2name(const struct xrt_iores_map *res_map, int entry_num, int id)
>> +{
>> + int i;
>> +
>> + for (i = 0; i < entry_num; i++) {
>> + if (res_map->res_id == id)
>> + return res_map->res_name;
>> + res_map++;
>> + }
>> + return NULL;
>> +}
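A small standalone usage sketch of the lookup helpers above (the table contents are invented for illustration; the real tables are defined by each leaf driver):

```c
#include <string.h>

struct iores_map {
	const char *res_name;
	int res_id;
};

static int res_name2id(const struct iores_map *map, int n, const char *name)
{
	int i;

	for (i = 0; i < n; i++) {
		/* strcmp() here is equivalent to the
		 * strncmp(..., strlen() + 1) form in the header. */
		if (!strcmp(name, map[i].res_name))
			return map[i].res_id;
	}
	return -1;
}
```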
>> +
>> +unsigned long xrt_md_size(struct device *dev, const char *blob);
>> +int xrt_md_create(struct device *dev, char **blob);
>> +char *xrt_md_dup(struct device *dev, const char *blob);
>> +int xrt_md_add_endpoint(struct device *dev, char *blob,
>> + struct xrt_md_endpoint *ep);
>> +int xrt_md_del_endpoint(struct device *dev, char *blob, const char *ep_name,
>> + const char *regmap_name);
>> +int xrt_md_get_prop(struct device *dev, const char *blob, const char *ep_name,
>> + const char *regmap_name, const char *prop,
>> + const void **val, int *size);
>> +int xrt_md_set_prop(struct device *dev, char *blob, const char *ep_name,
>> + const char *regmap_name, const char *prop,
>> + const void *val, int size);
>> +int xrt_md_copy_endpoint(struct device *dev, char *blob, const char *src_blob,
>> + const char *ep_name, const char *regmap_name,
>> + const char *new_ep_name);
>> +int xrt_md_get_next_endpoint(struct device *dev, const char *blob,
>> + const char *ep_name, const char *regmap_name,
>> + char **next_ep, char **next_regmap);
>> +int xrt_md_get_compatible_endpoint(struct device *dev, const char *blob,
>> + const char *regmap_name, const char **ep_name);
>> +int xrt_md_find_endpoint(struct device *dev, const char *blob,
>> + const char *ep_name, const char *regmap_name,
>> + const char **epname);
>> +int xrt_md_pack(struct device *dev, char *blob);
>> +int xrt_md_get_interface_uuids(struct device *dev, const char *blob,
>> + u32 num_uuids, uuid_t *intf_uuids);
>> +
>> +/*
>> + * The firmware provides a 128 bit hash string as a unique id to the
>> + * partition/interface.
>> + * Existing hw does not yet use the canonical form, so it is necessary to
>> + * use a translation function.
>> + */
>> +static inline void xrt_md_trans_uuid2str(const uuid_t *uuid, char *uuidstr)
>> +{
>> + int i, p;
>> + u8 tmp[UUID_SIZE];
>> +
>> + BUILD_BUG_ON(UUID_SIZE != 16);
>> + export_uuid(tmp, uuid);
> ok
>> + for (p = 0, i = UUID_SIZE - 1; i >= 0; p++, i--)
>> + snprintf(&uuidstr[p * 2], 3, "%02x", tmp[i]);
> XMGMT_UUID_STR_LEN is 80.
>
> This logic say it could be reduced to 33.
Sure. I will define it as (UUID_SIZE * 2 + 1).
>
>> +}
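To illustrate the reversed-byte encoding and why (UUID_SIZE * 2 + 1), i.e. 33 bytes, is a sufficient buffer, here is a hedged standalone sketch (plain C; the macro names are invented):

```c
#include <stdio.h>
#include <stdint.h>

#define DEMO_UUID_SIZE 16
#define DEMO_UUID_STR_LEN (DEMO_UUID_SIZE * 2 + 1)	/* 32 hex chars + NUL */

/* Hex-encode the 16 bytes last-to-first, matching the reversed
 * ordering used by the firmware hash string. */
static void uuid_bytes_to_str(const uint8_t b[DEMO_UUID_SIZE],
			      char out[DEMO_UUID_STR_LEN])
{
	int i, p;

	for (p = 0, i = DEMO_UUID_SIZE - 1; i >= 0; p++, i--)
		snprintf(&out[p * 2], 3, "%02x", b[i]);
}
```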
>> +
>> +static inline int xrt_md_trans_str2uuid(struct device *dev, const char *uuidstr, uuid_t *p_uuid)
>> +{
>> + u8 p[UUID_SIZE];
>> + const char *str;
>> + char tmp[3] = { 0 };
>> + int i, ret;
>> +
>> + BUILD_BUG_ON(UUID_SIZE != 16);
> Also defined above, do not need to repeat.
Will remove.
>> + str = uuidstr + strlen(uuidstr) - 2;
> needs an underflow check
Will add a check:
if (strlen(uuidstr) != UUID_SIZE * 2)
return -EINVAL;
>> +
>> + for (i = 0; i < sizeof(*p_uuid) && str >= uuidstr; i++) {
>> + tmp[0] = *str;
>> + tmp[1] = *(str + 1);
>> + ret = kstrtou8(tmp, 16, &p[i]);
>> + if (ret)
>> + return -EINVAL;
>> + str -= 2;
>> + }
>> + import_uuid(p_uuid, p);
>> +
>> + return 0;
>> +}
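A hedged sketch of the hardened decoder with that length check added (standalone C, using strtoul() in place of kstrtou8(); names invented, not the final patch):

```c
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <errno.h>

#define DEMO_UUID_SIZE 16

/* Reject any string that is not exactly 32 hex characters before
 * walking it backwards two characters at a time. */
static int uuid_str_to_bytes(const char *s, uint8_t out[DEMO_UUID_SIZE])
{
	char tmp[3] = { 0 };
	int i;

	if (strlen(s) != DEMO_UUID_SIZE * 2)
		return -EINVAL;

	for (i = 0; i < DEMO_UUID_SIZE; i++) {
		char *end;

		/* out[0] comes from the last character pair. */
		tmp[0] = s[(DEMO_UUID_SIZE - 1 - i) * 2];
		tmp[1] = s[(DEMO_UUID_SIZE - 1 - i) * 2 + 1];
		out[i] = (uint8_t)strtoul(tmp, &end, 16);
		if (*end)
			return -EINVAL;
	}
	return 0;
}
```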
>> +
>> +#endif
>> diff --git a/drivers/fpga/xrt/metadata/metadata.c b/drivers/fpga/xrt/metadata/metadata.c
>> new file mode 100644
>> index 000000000000..3b2be50fcb02
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/metadata/metadata.c
>> @@ -0,0 +1,545 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Xilinx Alveo FPGA Metadata parse APIs
>> + *
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Lizhi Hou <[email protected]>
>> + */
>> +
>> +#include <linux/libfdt_env.h>
>> +#include "libfdt.h"
>> +#include "metadata.h"
>> +
>> +#define MAX_BLOB_SIZE (4096 * 25)
>> +#define MAX_DEPTH 5
> MAX_BLOB_SIZE is defined in keys/trusted-type.h
>
> General, add a prefix to help avoid conflicts.
>
> Like
>
> XRT_MAX_BLOB_SIZE
>
> etc.
Sure. Will add XRT_
>
>> +
>> +static int xrt_md_setprop(struct device *dev, char *blob, int offset,
>> + const char *prop, const void *val, int size)
>> +{
>> + int ret;
>> +
>> + ret = fdt_setprop(blob, offset, prop, val, size);
>> + if (ret)
>> + dev_err(dev, "failed to set prop %d", ret);
>> +
>> + return ret;
>> +}
>> +
>> +static int xrt_md_add_node(struct device *dev, char *blob, int parent_offset,
>> + const char *ep_name)
>> +{
>> + int ret;
>> +
>> + ret = fdt_add_subnode(blob, parent_offset, ep_name);
>> + if (ret < 0 && ret != -FDT_ERR_EXISTS)
>> + dev_err(dev, "failed to add node %s. %d", ep_name, ret);
>> +
>> + return ret;
>> +}
>> +
>> +static int xrt_md_get_endpoint(struct device *dev, const char *blob,
>> + const char *ep_name, const char *regmap_name,
>> + int *ep_offset)
>> +{
>> + const char *name;
>> + int offset;
>> +
>> + for (offset = fdt_next_node(blob, -1, NULL);
>> + offset >= 0;
>> + offset = fdt_next_node(blob, offset, NULL)) {
>> + name = fdt_get_name(blob, offset, NULL);
>> + if (!name || strncmp(name, ep_name, strlen(ep_name) + 1))
>> + continue;
>> + if (!regmap_name ||
> regmap_name is known at the start but checked here in the loop.
>
> this check should be made outside of the loop.
Will change.
>
>> + !fdt_node_check_compatible(blob, offset, regmap_name))
>> + break;
>> + }
>> + if (offset < 0)
>> + return -ENODEV;
>> +
>> + *ep_offset = offset;
>> +
>> + return 0;
>> +}
>> +
>> +static inline int xrt_md_get_node(struct device *dev, const char *blob,
>> + const char *name, const char *regmap_name,
>> + int *offset)
>> +{
>> + int ret = 0;
>> +
>> + if (name) {
>> + ret = xrt_md_get_endpoint(dev, blob, name, regmap_name,
>> + offset);
>> + if (ret) {
>> + dev_err(dev, "cannot get node %s, regmap %s, ret = %d",
>> + name, regmap_name, ret);
> from above regmap_name is sometimes NULL.
Will add a check.
>> + return -EINVAL;
>> + }
>> + } else {
>> + ret = fdt_next_node(blob, -1, NULL);
>> + if (ret < 0) {
>> + dev_err(dev, "internal error, ret = %d", ret);
>> + return -EINVAL;
>> + }
>> + *offset = ret;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int xrt_md_overlay(struct device *dev, char *blob, int target,
>> + const char *overlay_blob, int overlay_offset,
>> + int depth)
>> +{
>> + int property, subnode;
>> + int ret;
> whitespace, looks like tab's after 'int'
>
> should be consistent with space used elsewhere
Will fix.
>
>> +
>> + if (!blob || !overlay_blob) {
>> + dev_err(dev, "blob is NULL");
>> + return -EINVAL;
>> + }
>> +
>> + if (depth > MAX_DEPTH) {
> ok
>> + dev_err(dev, "meta data depth beyond %d", MAX_DEPTH);
>> + return -EINVAL;
>> + }
>> +
>> + if (target < 0) {
>> + target = fdt_next_node(blob, -1, NULL);
>> + if (target < 0) {
>> + dev_err(dev, "invalid target");
>> + return -EINVAL;
>> + }
>> + }
>> + if (overlay_offset < 0) {
>> + overlay_offset = fdt_next_node(overlay_blob, -1, NULL);
>> + if (overlay_offset < 0) {
>> + dev_err(dev, "invalid overlay");
>> + return -EINVAL;
>> + }
>> + }
>> +
>> + fdt_for_each_property_offset(property, overlay_blob, overlay_offset) {
>> + const char *name;
>> + const void *prop;
>> + int prop_len;
>> +
>> + prop = fdt_getprop_by_offset(overlay_blob, property, &name,
>> + &prop_len);
>> + if (!prop || prop_len >= MAX_BLOB_SIZE || prop_len < 0) {
>> + dev_err(dev, "internal error");
>> + return -EINVAL;
>> + }
>> +
>> + ret = xrt_md_setprop(dev, blob, target, name, prop,
>> + prop_len);
>> + if (ret) {
>> + dev_err(dev, "setprop failed, ret = %d", ret);
>> + return ret;
>> + }
>> + }
>> +
>> + fdt_for_each_subnode(subnode, overlay_blob, overlay_offset) {
>> + const char *name = fdt_get_name(overlay_blob, subnode, NULL);
>> + int nnode;
>> +
>> + nnode = xrt_md_add_node(dev, blob, target, name);
>> + if (nnode == -FDT_ERR_EXISTS)
>> + nnode = fdt_subnode_offset(blob, target, name);
>> + if (nnode < 0) {
>> + dev_err(dev, "add node failed, ret = %d", nnode);
>> + return nnode;
> This is an offset, not an error code
>
> return -EINVAL or similar
Will fix.
>
>> + }
>> +
>> + ret = xrt_md_overlay(dev, blob, nnode, overlay_blob, subnode, depth + 1);
>> + if (ret)
>> + return ret;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +unsigned long xrt_md_size(struct device *dev, const char *blob)
>> +{
> review fdt_ro_probe.
>
> fdt_totalsize is signed 32 bit, this conversion to sometimes 64 bit is not necessary.
>
> at most it should be uint32_t
Will use u32.
>
>> + unsigned long len = (long)fdt_totalsize(blob);
>> +
>> + if (len > MAX_BLOB_SIZE)
>> + return XRT_MD_INVALID_LENGTH;
>> +
>> + return len;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_md_size);
>> +
>> +int xrt_md_create(struct device *dev, char **blob)
>> +{
>> + int ret = 0;
>> +
>> + if (!blob) {
>> + dev_err(dev, "blob is NULL");
>> + return -EINVAL;
>> + }
>> +
>> + *blob = vzalloc(MAX_BLOB_SIZE);
>> + if (!*blob)
>> + return -ENOMEM;
>> +
>> + ret = fdt_create_empty_tree(*blob, MAX_BLOB_SIZE);
>> + if (ret) {
>> + dev_err(dev, "format blob failed, ret = %d", ret);
>> + goto failed;
>> + }
>> +
>> + ret = fdt_next_node(*blob, -1, NULL);
>> + if (ret < 0) {
>> + dev_err(dev, "No Node, ret = %d", ret);
>> + goto failed;
>> + }
>> +
>> + ret = fdt_add_subnode(*blob, 0, XRT_MD_NODE_ENDPOINTS);
>> + if (ret < 0) {
> fdt error code
will return -EINVAL.
>> + dev_err(dev, "add node failed, ret = %d", ret);
>> + goto failed;
>> + }
>> +
>> + return 0;
>> +
>> +failed:
>> + vfree(*blob);
>> + *blob = NULL;
>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_md_create);
>> +
>> +char *xrt_md_dup(struct device *dev, const char *blob)
>> +{
>> + char *dup_blob;
>> + int ret;
>> +
>> + ret = xrt_md_create(dev, &dup_blob);
>> + if (ret)
>> + return NULL;
>> + ret = xrt_md_overlay(dev, dup_blob, -1, blob, -1, 0);
>> + if (ret) {
>> + vfree(dup_blob);
>> + return NULL;
>> + }
>> +
>> + return dup_blob;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_md_dup);
> Wasn't xrt_md_dup going to be replaced by memcpy ?
Looked into this more and found it cannot be replaced by memcpy.
The blob read from firmware could be marked as 'read only', and we need
'read-write' for driver metadata.
>> +
>> +int xrt_md_del_endpoint(struct device *dev, char *blob, const char *ep_name,
>> + const char *regmap_name)
>> +{
>> + int ep_offset;
>> + int ret;
>> +
>> + ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name, &ep_offset);
>> + if (ret) {
>> + dev_err(dev, "can not find ep %s", ep_name);
>> + return -EINVAL;
>> + }
>> +
>> + ret = fdt_del_node(blob, ep_offset);
> fdt return code
>
> Fix these generally.
sure.
>
>> + if (ret)
>> + dev_err(dev, "delete node %s failed, ret %d", ep_name, ret);
>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_md_del_endpoint);
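On translating fdt error codes generally, something like the following helper could be used. This is a hedged sketch, not the final patch; the FDT_ERR_* values are reproduced from libfdt.h so the sketch is self-contained, and the errno mapping choices are illustrative:

```c
#include <errno.h>

/* From libfdt.h; reproduced here for a self-contained sketch. */
#define FDT_ERR_NOTFOUND	1
#define FDT_ERR_EXISTS		2
#define FDT_ERR_NOSPACE		3

/* Translate a negative libfdt error into a negative errno value so
 * callers never see raw -FDT_ERR_* codes. */
static int fdt_err_to_errno(int fdt_err)
{
	switch (-fdt_err) {
	case 0:
		return 0;
	case FDT_ERR_NOTFOUND:
		return -ENOENT;
	case FDT_ERR_EXISTS:
		return -EEXIST;
	case FDT_ERR_NOSPACE:
		return -ENOMEM;
	default:
		return -EINVAL;
	}
}
```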
>> +
>> +static int __xrt_md_add_endpoint(struct device *dev, char *blob,
>> + struct xrt_md_endpoint *ep, int *offset,
>> + const char *parent)
>> +{
>> + int parent_offset = 0;
>> + u32 val, count = 0;
>> + int ep_offset = 0;
>> + u64 io_range[2];
>> + char comp[128];
>> + int ret = 0;
>> +
>> + if (!ep->ep_name) {
>> + dev_err(dev, "empty name");
>> + return -EINVAL;
>> + }
>> +
>> + if (parent) {
>> + ret = xrt_md_get_endpoint(dev, blob, parent, NULL, &parent_offset);
>> + if (ret) {
>> + dev_err(dev, "invalid blob, ret = %d", ret);
>> + return -EINVAL;
>> + }
>> + }
>> +
>> + ep_offset = xrt_md_add_node(dev, blob, parent_offset, ep->ep_name);
>> + if (ep_offset < 0) {
>> + dev_err(dev, "add endpoint failed, ret = %d", ret);
>> + return -EINVAL;
>> + }
>> + if (offset)
>> + *offset = ep_offset;
>> +
>> + if (ep->size != 0) {
>> + val = cpu_to_be32(ep->bar);
>> + ret = xrt_md_setprop(dev, blob, ep_offset, XRT_MD_PROP_BAR_IDX,
>> + &val, sizeof(u32));
>> + if (ret) {
>> + dev_err(dev, "set %s failed, ret %d",
>> + XRT_MD_PROP_BAR_IDX, ret);
>> + goto failed;
>> + }
>> + io_range[0] = cpu_to_be64((u64)ep->bar_off);
>> + io_range[1] = cpu_to_be64((u64)ep->size);
> if ep->bar is an index, then rename the element to 'bar_index'
sure.
>> + ret = xrt_md_setprop(dev, blob, ep_offset, XRT_MD_PROP_IO_OFFSET,
>> + io_range, sizeof(io_range));
>> + if (ret) {
>> + dev_err(dev, "set %s failed, ret %d",
>> + XRT_MD_PROP_IO_OFFSET, ret);
>> + goto failed;
>> + }
>> + }
>> +
>> + if (ep->regmap) {
>> + if (ep->regmap_ver) {
>> + count = snprintf(comp, sizeof(comp) - 1,
> The -1 should be good enough that the if-check below is not needed
We do not expect the compat string to exceed 127 bytes. I will change it to
if (count >= sizeof(comp))
so that an error is returned if the converted compat string exceeds 127 bytes.
>> + "%s-%s", ep->regmap, ep->regmap_ver);
>> + count++;
>> + }
>> + if (count > sizeof(comp)) {
>> + ret = -EINVAL;
>> + goto failed;
>> + }
>> +
>> + count += snprintf(comp + count, sizeof(comp) - count - 1,
>> + "%s", ep->regmap);
> what happens when only part of regmap is added to comp ?
It should never happen, and an error will be returned if it does.
>> + count++;
>> + if (count > sizeof(comp)) {
>> + ret = -EINVAL;
>> + goto failed;
>> + }
>> +
>> + ret = xrt_md_setprop(dev, blob, ep_offset, XRT_MD_PROP_COMPATIBLE,
>> + comp, count);
>> + if (ret) {
>> + dev_err(dev, "set %s failed, ret %d",
>> + XRT_MD_PROP_COMPATIBLE, ret);
>> + goto failed;
>> + }
>> + }
>> +
>> +failed:
>> + if (ret)
>> + xrt_md_del_endpoint(dev, blob, ep->ep_name, NULL);
>> +
>> + return ret;
>> +}
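The bounds-checked construction of the compatible property could look like the following. A hedged standalone sketch (names invented); the property is a list of NUL-terminated strings, "name-version" (when a version exists) followed by "name":

```c
#include <stdio.h>
#include <errno.h>
#include <stddef.h>

/* Returns the total length including both NULs, or -EINVAL when the
 * buffer would overflow or snprintf() would truncate. */
static int build_compat(char *buf, size_t bufsz,
			const char *name, const char *ver)
{
	size_t count = 0;
	int n;

	if (ver) {
		n = snprintf(buf, bufsz, "%s-%s", name, ver);
		if (n < 0 || (size_t)n >= bufsz)
			return -EINVAL;
		count = n + 1;		/* keep the NUL separator */
	}

	n = snprintf(buf + count, bufsz - count, "%s", name);
	if (n < 0 || (size_t)n >= bufsz - count)
		return -EINVAL;
	count += n + 1;			/* keep the trailing NUL */

	return (int)count;
}
```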
>> +
>> +int xrt_md_add_endpoint(struct device *dev, char *blob,
>> + struct xrt_md_endpoint *ep)
>> +{
>> + return __xrt_md_add_endpoint(dev, blob, ep, NULL, XRT_MD_NODE_ENDPOINTS);
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_md_add_endpoint);
>> +
>> +int xrt_md_find_endpoint(struct device *dev, const char *blob,
>> + const char *ep_name, const char *regmap_name,
>> + const char **epname)
>> +{
>> + int offset;
>> + int ret;
>> +
>> + ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name,
>> + &offset);
>> + if (!ret && epname)
> split this condition, if the call failed, check and return early.
sure.
>> + *epname = fdt_get_name(blob, offset, NULL);
> what happens if fdt_get_name fails ?
will add check.
>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_md_find_endpoint);
>> +
>> +int xrt_md_get_prop(struct device *dev, const char *blob, const char *ep_name,
>> + const char *regmap_name, const char *prop,
>> + const void **val, int *size)
>> +{
>> + int offset;
>> + int ret;
>> +
>> + if (!val) {
>> + dev_err(dev, "val is null");
>> + return -EINVAL;
> ok
>> + }
>> +
>> + *val = NULL;
>> + ret = xrt_md_get_node(dev, blob, ep_name, regmap_name, &offset);
>> + if (ret)
>> + return ret;
>> +
>> + *val = fdt_getprop(blob, offset, prop, size);
>> + if (!*val) {
>> + dev_dbg(dev, "get ep %s, prop %s failed", ep_name, prop);
>> + return -EINVAL;
>> + }
>> +
>> + return 0;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_md_get_prop);
>> +
>> +int xrt_md_set_prop(struct device *dev, char *blob,
>> + const char *ep_name, const char *regmap_name,
>> + const char *prop, const void *val, int size)
>> +{
>> + int offset;
>> + int ret;
>> +
>> + ret = xrt_md_get_node(dev, blob, ep_name, regmap_name, &offset);
>> + if (ret)
>> + return ret;
>> +
>> + ret = xrt_md_setprop(dev, blob, offset, prop, val, size);
> ok
>> + if (ret)
>> + dev_err(dev, "set prop %s failed, ret = %d", prop, ret);
>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_md_set_prop);
>> +
>> +int xrt_md_copy_endpoint(struct device *dev, char *blob, const char *src_blob,
>> + const char *ep_name, const char *regmap_name,
>> + const char *new_ep_name)
>> +{
>> + const char *newepnm = new_ep_name ? new_ep_name : ep_name;
>> + struct xrt_md_endpoint ep = {0};
>> + int offset, target;
>> + const char *parent;
>> + int ret;
>> +
>> + ret = xrt_md_get_endpoint(dev, src_blob, ep_name, regmap_name,
>> + &offset);
>> + if (ret)
>> + return -EINVAL;
>> +
>> + ret = xrt_md_get_endpoint(dev, blob, newepnm, regmap_name, &target);
>> + if (ret) {
>> + ep.ep_name = newepnm;
>> + parent = fdt_parent_offset(src_blob, offset) == 0 ? NULL : XRT_MD_NODE_ENDPOINTS;
>> + ret = __xrt_md_add_endpoint(dev, blob, &ep, &target, parent);
>> + if (ret)
>> + return -EINVAL;
>> + }
>> +
>> + ret = xrt_md_overlay(dev, blob, target, src_blob, offset, 0);
>> + if (ret)
>> + dev_err(dev, "overlay failed, ret = %d", ret);
>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_md_copy_endpoint);
>> +
>> +int xrt_md_get_next_endpoint(struct device *dev, const char *blob,
>> + const char *ep_name, const char *regmap_name,
>> + char **next_ep, char **next_regmap)
>> +{
>> + int offset, ret;
>> +
>> + *next_ep = NULL;
>> + *next_regmap = NULL;
>> + if (!ep_name) {
>> + ret = xrt_md_get_endpoint(dev, blob, XRT_MD_NODE_ENDPOINTS, NULL,
>> + &offset);
>> + } else {
>> + ret = xrt_md_get_endpoint(dev, blob, ep_name, regmap_name,
>> + &offset);
>> + }
>> +
>> + if (ret)
>> + return -EINVAL;
>> +
>> + offset = ep_name ? fdt_next_subnode(blob, offset) :
>> + fdt_first_subnode(blob, offset);
> tristate with function calls is harder to follow, convert this to if-else logic
Will change.
Thanks,
Lizhi
>> + if (offset < 0)
>> + return -EINVAL;
>> +
>> + *next_ep = (char *)fdt_get_name(blob, offset, NULL);
>> + *next_regmap = (char *)fdt_stringlist_get(blob, offset, XRT_MD_PROP_COMPATIBLE,
>> + 0, NULL);
>> +
>> + return 0;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_md_get_next_endpoint);
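The if-else form of the tristate could look like this. The fdt calls are stubbed with invented functions purely so the control-flow shape compiles standalone:

```c
/* Stand-ins for fdt_next_subnode()/fdt_first_subnode(); invented here
 * only so the rewrite can be shown in isolation. */
static int stub_next_subnode(int offset)  { return offset + 2; }
static int stub_first_subnode(int offset) { return offset + 1; }

/* The suggested rewrite: plain if-else instead of a ternary whose
 * arms are function calls. */
static int advance_offset(const char *ep_name, int offset)
{
	if (ep_name)
		offset = stub_next_subnode(offset);
	else
		offset = stub_first_subnode(offset);
	return offset;
}
```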
>> +
>> +int xrt_md_get_compatible_endpoint(struct device *dev, const char *blob,
>> + const char *regmap_name, const char **ep_name)
>> +{
>> + int ep_offset;
>> +
>> + ep_offset = fdt_node_offset_by_compatible(blob, -1, regmap_name);
>> + if (ep_offset < 0) {
>> + *ep_name = NULL;
>> + return -ENOENT;
>> + }
>> +
>> + *ep_name = fdt_get_name(blob, ep_offset, NULL);
>> +
>> + return 0;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_md_get_compatible_endpoint);
>> +
>> +int xrt_md_pack(struct device *dev, char *blob)
>> +{
>> + int ret;
>> +
>> + ret = fdt_pack(blob);
>> + if (ret)
>> + dev_err(dev, "pack failed %d", ret);
>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_md_pack);
>> +
>> +int xrt_md_get_interface_uuids(struct device *dev, const char *blob,
> ok
>> + u32 num_uuids, uuid_t *interface_uuids)
>> +{
>> + int offset, count = 0;
>> + const char *uuid_str;
>> + int ret;
>> +
>> + ret = xrt_md_get_endpoint(dev, blob, XRT_MD_NODE_INTERFACES, NULL, &offset);
>> + if (ret)
>> + return -ENOENT;
>> +
>> + for (offset = fdt_first_subnode(blob, offset);
>> + offset >= 0;
>> + offset = fdt_next_subnode(blob, offset), count++) {
>> + uuid_str = fdt_getprop(blob, offset, XRT_MD_PROP_INTERFACE_UUID,
>> + NULL);
>> + if (!uuid_str) {
>> + dev_err(dev, "empty interface uuid node");
>> + return -EINVAL;
>> + }
>> +
>> + if (!num_uuids)
>> + continue;
>> +
>> + if (count == num_uuids) {
> ok
>
>> + dev_err(dev, "too many interface uuids in blob");
>> + return -EINVAL;
>> + }
>> +
>> + if (interface_uuids && count < num_uuids) {
>> + ret = xrt_md_trans_str2uuid(dev, uuid_str,
>> + &interface_uuids[count]);
>> + if (ret)
>> + return -EINVAL;
>> + }
>> + }
>> + if (!count)
>> + count = -ENOENT;
>> +
>> + return count;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_md_get_interface_uuids);
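For reference, the return convention of this helper (return the count, -ENOENT for an empty list, -EINVAL once the blob holds more entries than the caller's buffer, copy only while a slot is free) can be sketched in plain userspace C. The FDT walk is replaced here by a string array, and all names below are illustrative, not part of the patch:

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for one parsed UUID; the real code uses uuid_t. */
struct demo_uuid { char str[37]; };

/*
 * Mirror of the counting logic in xrt_md_get_interface_uuids():
 * - returns the number of entries found
 * - returns -ENOENT when the list is empty
 * - returns -EINVAL when num_uuids != 0 and the list is larger
 * - copies entries only while a buffer slot is available
 */
static int demo_get_uuids(const char **list, size_t list_len,
			  unsigned int num_uuids, struct demo_uuid *out)
{
	int count = 0;
	size_t i;

	for (i = 0; i < list_len; i++, count++) {
		if (!num_uuids)
			continue;	/* caller only wants the count */
		if ((unsigned int)count == num_uuids)
			return -EINVAL;	/* too many entries for the buffer */
		if (out)
			strncpy(out[count].str, list[i],
				sizeof(out[count].str) - 1);
	}
	return count ? count : -ENOENT;
}
```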
> Thanks for the changes,
>
> Tom
>
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> ICAP stands for Internal Configuration Access Port. ICAP is
> discovered by walking firmware metadata. A platform device node will be
by walking the firmware
> created for it. FPGA bitstream is written to hardware through ICAP.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/xleaf/icap.h | 27 ++
> drivers/fpga/xrt/lib/xleaf/icap.c | 344 ++++++++++++++++++++++++++
> 2 files changed, 371 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xleaf/icap.h
> create mode 100644 drivers/fpga/xrt/lib/xleaf/icap.c
>
> diff --git a/drivers/fpga/xrt/include/xleaf/icap.h b/drivers/fpga/xrt/include/xleaf/icap.h
> new file mode 100644
> index 000000000000..96d39a8934fa
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf/icap.h
> @@ -0,0 +1,27 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#ifndef _XRT_ICAP_H_
> +#define _XRT_ICAP_H_
> +
> +#include "xleaf.h"
> +
> +/*
> + * ICAP driver leaf calls.
> + */
> +enum xrt_icap_leaf_cmd {
> + XRT_ICAP_WRITE = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> + XRT_ICAP_GET_IDCODE,
ok
> +};
> +
> +struct xrt_icap_wr {
> + void *xiiw_bit_data;
> + u32 xiiw_data_len;
> +};
> +
> +#endif /* _XRT_ICAP_H_ */
> diff --git a/drivers/fpga/xrt/lib/xleaf/icap.c b/drivers/fpga/xrt/lib/xleaf/icap.c
> new file mode 100644
> index 000000000000..13db2b759138
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/icap.c
> @@ -0,0 +1,344 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA ICAP Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou<[email protected]>
> + * Sonal Santan <[email protected]>
> + * Max Zhen <[email protected]>
> + */
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/platform_device.h>
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/regmap.h>
> +#include <linux/io.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/icap.h"
> +#include "xclbin-helper.h"
> +
> +#define XRT_ICAP "xrt_icap"
> +
> +#define ICAP_ERR(icap, fmt, arg...) \
> + xrt_err((icap)->pdev, fmt "\n", ##arg)
> +#define ICAP_WARN(icap, fmt, arg...) \
> + xrt_warn((icap)->pdev, fmt "\n", ##arg)
> +#define ICAP_INFO(icap, fmt, arg...) \
> + xrt_info((icap)->pdev, fmt "\n", ##arg)
> +#define ICAP_DBG(icap, fmt, arg...) \
> + xrt_dbg((icap)->pdev, fmt "\n", ##arg)
> +
> +/*
> + * AXI-HWICAP IP register layout. Please see
> + * https://www.xilinx.com/support/documentation/ip_documentation/axi_hwicap/v3_0/pg134-axi-hwicap.pdf
url works, looks good
> + */
> +#define ICAP_REG_GIER 0x1C
> +#define ICAP_REG_ISR 0x20
> +#define ICAP_REG_IER 0x28
> +#define ICAP_REG_WF 0x100
> +#define ICAP_REG_RF 0x104
> +#define ICAP_REG_SZ 0x108
> +#define ICAP_REG_CR 0x10C
> +#define ICAP_REG_SR 0x110
> +#define ICAP_REG_WFV 0x114
> +#define ICAP_REG_RFO 0x118
> +#define ICAP_REG_ASR 0x11C
> +
> +#define ICAP_STATUS_EOS 0x4
> +#define ICAP_STATUS_DONE 0x1
> +
> +/*
> + * Canned command sequence to obtain IDCODE of the FPGA
> + */
> +static const u32 idcode_stream[] = {
> + /* dummy word */
> + cpu_to_be32(0xffffffff),
> + /* sync word */
> + cpu_to_be32(0xaa995566),
> + /* NOP word */
> + cpu_to_be32(0x20000000),
> + /* NOP word */
> + cpu_to_be32(0x20000000),
> + /* ID code */
> + cpu_to_be32(0x28018001),
> + /* NOP word */
> + cpu_to_be32(0x20000000),
> + /* NOP word */
> + cpu_to_be32(0x20000000),
> +};
> +
> +static const struct regmap_config icap_regmap_config = {
ok
> + .reg_bits = 32,
> + .val_bits = 32,
> + .reg_stride = 4,
> + .max_register = 0x1000,
> +};
> +
> +struct icap {
> + struct platform_device *pdev;
> + struct regmap *regmap;
> + struct mutex icap_lock; /* icap dev lock */
> +
whitespace, remove extra nl
> + u32 idcode;
> +};
> +
> +static int wait_for_done(const struct icap *icap)
> +{
> + int i = 0;
> + int ret;
> + u32 w;
> +
> + for (i = 0; i < 10; i++) {
> + /*
> + * It takes a few microseconds for ICAP to process incoming data.
> + * Polling every 5 us up to 10 times is sufficient.
ok
> + */
> + udelay(5);
> + ret = regmap_read(icap->regmap, ICAP_REG_SR, &w);
> + if (ret)
> + return ret;
> + ICAP_INFO(icap, "XHWICAP_SR: %x", w);
> + if (w & (ICAP_STATUS_EOS | ICAP_STATUS_DONE))
ok
> + return 0;
> + }
> +
> + ICAP_ERR(icap, "bitstream download timeout");
> + return -ETIMEDOUT;
> +}
> +
> +static int icap_write(const struct icap *icap, const u32 *word_buf, int size)
> +{
> + u32 value = 0;
> + int ret;
> + int i;
> +
> + for (i = 0; i < size; i++) {
> + value = be32_to_cpu(word_buf[i]);
> + ret = regmap_write(icap->regmap, ICAP_REG_WF, value);
> + if (ret)
> + return ret;
> + }
> +
> + ret = regmap_write(icap->regmap, ICAP_REG_CR, 0x1);
> + if (ret)
> + return ret;
> +
> + for (i = 0; i < 20; i++) {
> + ret = regmap_read(icap->regmap, ICAP_REG_CR, &value);
> + if (ret)
> + return ret;
> +
> + if ((value & 0x1) == 0)
> + return 0;
> + ndelay(50);
> + }
> +
> + ICAP_ERR(icap, "writing %d dwords timeout", size);
> + return -EIO;
> +}
> +
> +static int bitstream_helper(struct icap *icap, const u32 *word_buffer,
> + u32 word_count)
> +{
> + int wr_fifo_vacancy = 0;
> + u32 word_written = 0;
> + u32 remain_word;
> + int err = 0;
> +
> + WARN_ON(!mutex_is_locked(&icap->icap_lock));
> + for (remain_word = word_count; remain_word > 0;
> + remain_word -= word_written, word_buffer += word_written) {
> + err = regmap_read(icap->regmap, ICAP_REG_WFV, &wr_fifo_vacancy);
> + if (err) {
> + ICAP_ERR(icap, "read wr_fifo_vacancy failed %d", err);
> + break;
> + }
> + if (wr_fifo_vacancy <= 0) {
> + ICAP_ERR(icap, "no vacancy: %d", wr_fifo_vacancy);
> + err = -EIO;
> + break;
> + }
> + word_written = (wr_fifo_vacancy < remain_word) ?
> + wr_fifo_vacancy : remain_word;
> + if (icap_write(icap, word_buffer, word_written) != 0) {
> + ICAP_ERR(icap, "write failed remain %d, written %d",
> + remain_word, word_written);
> + err = -EIO;
> + break;
> + }
> + }
> +
> + return err;
> +}
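The write loop above is driven by the FIFO vacancy register: each round pushes min(vacancy, remaining) words. A standalone sketch of that chunking (a fixed vacancy stands in for the ICAP_REG_WFV read; the real code also errors out on a non-positive vacancy):

```c
/*
 * Mirror of the bitstream_helper() loop: with a write FIFO reporting
 * `vacancy` free words each round, count how many rounds it takes to
 * push `total` words. Each round writes min(vacancy, remaining).
 */
static unsigned int demo_rounds(unsigned int vacancy, unsigned int total)
{
	unsigned int remain, written, rounds = 0;

	if (!vacancy)
		return 0;	/* real code treats this as an I/O error */

	for (remain = total; remain > 0; remain -= written) {
		written = vacancy < remain ? vacancy : remain;
		rounds++;
	}
	return rounds;
}
```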
> +
> +static int icap_download(struct icap *icap, const char *buffer,
> + unsigned long length)
> +{
> + u32 num_chars_read = XCLBIN_HWICAP_BITFILE_BUF_SZ;
> + u32 byte_read;
> + int err = 0;
> +
> + if (length % sizeof(u32)) {
ok
> + ICAP_ERR(icap, "invalid bitstream length %ld", length);
> + return -EINVAL;
> + }
> +
> + mutex_lock(&icap->icap_lock);
> + for (byte_read = 0; byte_read < length; byte_read += num_chars_read) {
> + num_chars_read = length - byte_read;
> + if (num_chars_read > XCLBIN_HWICAP_BITFILE_BUF_SZ)
> + num_chars_read = XCLBIN_HWICAP_BITFILE_BUF_SZ;
> +
> + err = bitstream_helper(icap, (u32 *)buffer, num_chars_read / sizeof(u32));
> + if (err)
> + goto failed;
> + buffer += num_chars_read;
> + }
> +
> + /* No cleanup is needed if writing to the ICAP times out. */
> + err = wait_for_done(icap);
> +
> +failed:
> + mutex_unlock(&icap->icap_lock);
> +
> + return err;
> +}
> +
> +/*
> + * Discover the FPGA IDCODE using special sequence of canned commands
> + */
> +static int icap_probe_chip(struct icap *icap)
> +{
> + int err;
> + u32 val = 0;
ok, thanks for demagic-ing this function.
Looks good overall, only a few minor things.
Reviewed-by: Tom Rix <[email protected]>
> +
> + regmap_read(icap->regmap, ICAP_REG_SR, &val);
> + if (val != ICAP_STATUS_DONE)
> + return -ENODEV;
> + /* Read ICAP FIFO vacancy */
> + regmap_read(icap->regmap, ICAP_REG_WFV, &val);
> + if (val < 8)
> + return -ENODEV;
> + err = icap_write(icap, idcode_stream, ARRAY_SIZE(idcode_stream));
> + if (err)
> + return err;
> + err = wait_for_done(icap);
> + if (err)
> + return err;
> +
> + /* Tell config engine how many words to transfer to read FIFO */
> + regmap_write(icap->regmap, ICAP_REG_SZ, 0x1);
> + /* Switch the ICAP to read mode */
> + regmap_write(icap->regmap, ICAP_REG_CR, 0x2);
> + err = wait_for_done(icap);
> + if (err)
> + return err;
> +
> + /* Read IDCODE from Read FIFO */
> + regmap_read(icap->regmap, ICAP_REG_RF, &icap->idcode);
> + return 0;
> +}
> +
> +static int
> +xrt_icap_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
> +{
> + struct xrt_icap_wr *wr_arg = arg;
> + struct icap *icap;
> + int ret = 0;
> +
> + icap = platform_get_drvdata(pdev);
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + /* Does not handle any event. */
> + break;
> + case XRT_ICAP_WRITE:
> + ret = icap_download(icap, wr_arg->xiiw_bit_data,
> + wr_arg->xiiw_data_len);
> + break;
> + case XRT_ICAP_GET_IDCODE:
> + *(u32 *)arg = icap->idcode;
> + break;
> + default:
> + ICAP_ERR(icap, "unknown command %d", cmd);
> + return -EINVAL;
> + }
> +
> + return ret;
> +}
> +
> +static int xrt_icap_probe(struct platform_device *pdev)
> +{
> + void __iomem *base = NULL;
> + struct resource *res;
> + struct icap *icap;
> + int result = 0;
> +
> + icap = devm_kzalloc(&pdev->dev, sizeof(*icap), GFP_KERNEL);
> + if (!icap)
> + return -ENOMEM;
> +
> + icap->pdev = pdev;
> + platform_set_drvdata(pdev, icap);
> + mutex_init(&icap->icap_lock);
> +
> + xrt_info(pdev, "probing");
> + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + if (!res)
> + return -EINVAL;
> +
> + base = devm_ioremap_resource(&pdev->dev, res);
> + if (IS_ERR(base))
> + return PTR_ERR(base);
> +
> + icap->regmap = devm_regmap_init_mmio(&pdev->dev, base, &icap_regmap_config);
> + if (IS_ERR(icap->regmap)) {
> + ICAP_ERR(icap, "init mmio failed");
> + return PTR_ERR(icap->regmap);
> + }
> + /* Disable ICAP interrupts */
> + regmap_write(icap->regmap, ICAP_REG_GIER, 0);
> +
> + result = icap_probe_chip(icap);
> + if (result)
> + xrt_err(pdev, "Failed to probe FPGA");
> + else
> + xrt_info(pdev, "Discovered FPGA IDCODE %x", icap->idcode);
> + return result;
> +}
> +
> +static struct xrt_subdev_endpoints xrt_icap_endpoints[] = {
> + {
> + .xse_names = (struct xrt_subdev_ep_names[]) {
> + { .ep_name = XRT_MD_NODE_FPGA_CONFIG },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_subdev_drvdata xrt_icap_data = {
> + .xsd_dev_ops = {
> + .xsd_leaf_call = xrt_icap_leaf_call,
> + },
> +};
> +
> +static const struct platform_device_id xrt_icap_table[] = {
> + { XRT_ICAP, (kernel_ulong_t)&xrt_icap_data },
> + { },
> +};
> +
> +static struct platform_driver xrt_icap_driver = {
> + .driver = {
> + .name = XRT_ICAP,
> + },
> + .probe = xrt_icap_probe,
> + .id_table = xrt_icap_table,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_ICAP, icap);
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> Add devctl driver. devctl is a type of hardware function which has only
> a few registers to read or write. They are discovered by walking firmware
> metadata. A platform device node will be created for them.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/xleaf/devctl.h | 40 ++++++
> drivers/fpga/xrt/lib/xleaf/devctl.c | 183 ++++++++++++++++++++++++
> 2 files changed, 223 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xleaf/devctl.h
> create mode 100644 drivers/fpga/xrt/lib/xleaf/devctl.c
>
> diff --git a/drivers/fpga/xrt/include/xleaf/devctl.h b/drivers/fpga/xrt/include/xleaf/devctl.h
> new file mode 100644
> index 000000000000..b97f3b6d9326
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf/devctl.h
> @@ -0,0 +1,40 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#ifndef _XRT_DEVCTL_H_
> +#define _XRT_DEVCTL_H_
> +
> +#include "xleaf.h"
> +
> +/*
> + * DEVCTL driver leaf calls.
> + */
> +enum xrt_devctl_leaf_cmd {
> + XRT_DEVCTL_READ = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> +};
> +
> +enum xrt_devctl_id {
> + XRT_DEVCTL_ROM_UUID = 0,
ok
> + XRT_DEVCTL_DDR_CALIB,
> + XRT_DEVCTL_GOLDEN_VER,
> + XRT_DEVCTL_MAX
> +};
> +
> +struct xrt_devctl_rw {
> + u32 xdr_id;
> + void *xdr_buf;
> + u32 xdr_len;
> + u32 xdr_offset;
> +};
> +
> +struct xrt_devctl_intf_uuid {
> + u32 uuid_num;
> + uuid_t *uuids;
> +};
> +
> +#endif /* _XRT_DEVCTL_H_ */
> diff --git a/drivers/fpga/xrt/lib/xleaf/devctl.c b/drivers/fpga/xrt/lib/xleaf/devctl.c
> new file mode 100644
> index 000000000000..ae086d7c431d
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/devctl.c
> @@ -0,0 +1,183 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA devctl Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou<[email protected]>
> + */
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/platform_device.h>
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/regmap.h>
> +#include <linux/io.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/devctl.h"
> +
> +#define XRT_DEVCTL "xrt_devctl"
> +
> +struct xrt_name_id {
> + char *ep_name;
> + int id;
> +};
> +
> +static struct xrt_name_id name_id[XRT_DEVCTL_MAX] = {
> + { XRT_MD_NODE_BLP_ROM, XRT_DEVCTL_ROM_UUID },
> + { XRT_MD_NODE_GOLDEN_VER, XRT_DEVCTL_GOLDEN_VER },
> +};
> +
> +static const struct regmap_config devctl_regmap_config = {
> + .reg_bits = 32,
> + .val_bits = 32,
> + .reg_stride = 4,
ok
> +};
> +
> +struct xrt_devctl {
> + struct platform_device *pdev;
> + struct regmap *regmap[XRT_DEVCTL_MAX];
> + ulong sizes[XRT_DEVCTL_MAX];
> +};
> +
> +static int xrt_devctl_name2id(struct xrt_devctl *devctl, const char *name)
> +{
> + int i;
> +
> + for (i = 0; i < XRT_DEVCTL_MAX && name_id[i].ep_name; i++) {
> + if (!strncmp(name_id[i].ep_name, name, strlen(name_id[i].ep_name) + 1))
> + return name_id[i].id;
> + }
> +
> + return -EINVAL;
> +}
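The strncmp(ep_name, name, strlen(ep_name) + 1) idiom above is an exact match: including the terminating NUL in the compared length rejects names for which ep_name is only a prefix. A standalone sketch with a made-up table:

```c
#include <errno.h>
#include <string.h>

/* Exact-match lookup in the style of xrt_devctl_name2id();
 * the table entries are illustrative only. */
struct demo_name_id { const char *name; int id; };

static int demo_name2id(const struct demo_name_id *tbl, int n,
			const char *name)
{
	int i;

	for (i = 0; i < n && tbl[i].name; i++) {
		/* strlen(tbl[i].name) + 1 covers the NUL, so "ep_rom"
		 * does not match "ep_rom_extra": the compare reaches
		 * '\0' vs '_' and fails. */
		if (!strncmp(tbl[i].name, name, strlen(tbl[i].name) + 1))
			return tbl[i].id;
	}
	return -EINVAL;
}
```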
> +
> +static int
> +xrt_devctl_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
> +{
> + struct xrt_devctl *devctl;
> + int ret = 0;
> +
> + devctl = platform_get_drvdata(pdev);
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + /* Does not handle any event. */
> + break;
> + case XRT_DEVCTL_READ: {
> + struct xrt_devctl_rw *rw_arg = arg;
> +
> + if (rw_arg->xdr_len & 0x3) {
> + xrt_err(pdev, "invalid len %d", rw_arg->xdr_len);
> + return -EINVAL;
> + }
> +
> + if (rw_arg->xdr_id >= XRT_DEVCTL_MAX) {
ok
> + xrt_err(pdev, "invalid id %d", rw_arg->xdr_id);
> + return -EINVAL;
> + }
> +
> + if (!devctl->regmap[rw_arg->xdr_id]) {
> + xrt_err(pdev, "io not found, id %d",
> + rw_arg->xdr_id);
> + return -EINVAL;
> + }
> +
> + ret = regmap_bulk_read(devctl->regmap[rw_arg->xdr_id], rw_arg->xdr_offset,
> + rw_arg->xdr_buf,
> + rw_arg->xdr_len / devctl_regmap_config.reg_stride);
> + break;
> + }
ok, *_WRITE removed.
Thanks for the changes
Reviewed-by: Tom Rix <[email protected]>
> + default:
> + xrt_err(pdev, "unsupported cmd %d", cmd);
> + return -EINVAL;
> + }
> +
> + return ret;
> +}
> +
> +static int xrt_devctl_probe(struct platform_device *pdev)
> +{
> + struct xrt_devctl *devctl = NULL;
> + void __iomem *base = NULL;
> + struct resource *res;
> + int i, id, ret = 0;
> +
> + devctl = devm_kzalloc(&pdev->dev, sizeof(*devctl), GFP_KERNEL);
> + if (!devctl)
> + return -ENOMEM;
> +
> + devctl->pdev = pdev;
> + platform_set_drvdata(pdev, devctl);
> +
> + xrt_info(pdev, "probing...");
> + for (i = 0, res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + res;
> + res = platform_get_resource(pdev, IORESOURCE_MEM, ++i)) {
> + struct regmap_config config = devctl_regmap_config;
> +
> + id = xrt_devctl_name2id(devctl, res->name);
> + if (id < 0) {
> + xrt_err(pdev, "ep %s not found", res->name);
> + continue;
> + }
> + base = devm_ioremap_resource(&pdev->dev, res);
> + if (IS_ERR(base)) {
> + ret = PTR_ERR(base);
> + break;
> + }
> + config.max_register = res->end - res->start + 1;
> + devctl->regmap[id] = devm_regmap_init_mmio(&pdev->dev, base, &config);
> + if (IS_ERR(devctl->regmap[id])) {
> + xrt_err(pdev, "map base failed %pR", res);
> + ret = PTR_ERR(devctl->regmap[id]);
> + break;
> + }
> + devctl->sizes[id] = res->end - res->start + 1;
> + }
> +
> + return ret;
> +}
> +
> +static struct xrt_subdev_endpoints xrt_devctl_endpoints[] = {
> + {
> + .xse_names = (struct xrt_subdev_ep_names[]) {
> + /* add name if ep is in same partition */
> + { .ep_name = XRT_MD_NODE_BLP_ROM },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + {
> + .xse_names = (struct xrt_subdev_ep_names[]) {
> + { .ep_name = XRT_MD_NODE_GOLDEN_VER },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + /* adding ep bundle generates devctl device instance */
> + { 0 },
> +};
> +
> +static struct xrt_subdev_drvdata xrt_devctl_data = {
> + .xsd_dev_ops = {
> + .xsd_leaf_call = xrt_devctl_leaf_call,
> + },
> +};
> +
> +static const struct platform_device_id xrt_devctl_table[] = {
> + { XRT_DEVCTL, (kernel_ulong_t)&xrt_devctl_data },
> + { },
> +};
> +
> +static struct platform_driver xrt_devctl_driver = {
> + .driver = {
> + .name = XRT_DEVCTL,
> + },
> + .probe = xrt_devctl_probe,
> + .id_table = xrt_devctl_table,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_DEVCTL, devctl);
Hi Tom,
On 03/29/2021 10:12 AM, Tom Rix wrote:
> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>> Alveo FPGA firmware and partial reconfigure file are in xclbin format. This
>> code enumerates and extracts sections from xclbin files. xclbin.h is cross
>> platform and used across all platforms and OS.
> ok
>> Signed-off-by: Sonal Santan <[email protected]>
>> Signed-off-by: Max Zhen <[email protected]>
>> Signed-off-by: Lizhi Hou <[email protected]>
>> ---
>> drivers/fpga/xrt/include/xclbin-helper.h | 48 +++
>> drivers/fpga/xrt/lib/xclbin.c | 369 ++++++++++++++++++++
>> include/uapi/linux/xrt/xclbin.h | 409 +++++++++++++++++++++++
>> 3 files changed, 826 insertions(+)
>> create mode 100644 drivers/fpga/xrt/include/xclbin-helper.h
>> create mode 100644 drivers/fpga/xrt/lib/xclbin.c
>> create mode 100644 include/uapi/linux/xrt/xclbin.h
>>
>> diff --git a/drivers/fpga/xrt/include/xclbin-helper.h b/drivers/fpga/xrt/include/xclbin-helper.h
>> new file mode 100644
>> index 000000000000..382b1de97b0a
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/include/xclbin-helper.h
>> @@ -0,0 +1,48 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * David Zhang <[email protected]>
>> + * Sonal Santan <[email protected]>
>> + */
>> +
>> +#ifndef _XCLBIN_HELPER_H_
>> +#define _XCLBIN_HELPER_H_
> ok
>> +
>> +#include <linux/types.h>
>> +#include <linux/device.h>
>> +#include <linux/xrt/xclbin.h>
>> +
>> +#define XCLBIN_VERSION2 "xclbin2"
>> +#define XCLBIN_HWICAP_BITFILE_BUF_SZ 1024
>> +#define XCLBIN_MAX_SIZE (1024 * 1024 * 1024) /* Assuming xclbin <= 1G, always */
> ok
>> +
>> +enum axlf_section_kind;
>> +struct axlf;
>> +
>> +/**
>> + * Bitstream header information as defined by Xilinx tools.
>> + * Please note that this struct definition is not owned by the driver.
>> + */
>> +struct xclbin_bit_head_info {
>> + u32 header_length; /* Length of header in 32 bit words */
>> + u32 bitstream_length; /* Length of bitstream to read in bytes */
>> + const unchar *design_name; /* Design name get from bitstream */
>> + const unchar *part_name; /* Part name read from bitstream */
>> + const unchar *date; /* Date read from bitstream header */
>> + const unchar *time; /* Bitstream creation time */
>> + u32 magic_length; /* Length of the magic numbers */
>> + const unchar *version; /* Version string */
>> +};
>> +
> ok, bit removed.
>> +/* caller must free the allocated memory for **data. len could be NULL. */
>> +int xrt_xclbin_get_section(struct device *dev, const struct axlf *xclbin,
>> + enum axlf_section_kind kind, void **data,
>> + uint64_t *len);
> need to add comment that user must free data
>
> need to add comment that len is optional
That is the comment above the function:
/* caller must free the allocated memory for **data. len could be NULL. */
Do you mean I need to add more detail or format the comment in a different way?
>
>> +int xrt_xclbin_get_metadata(struct device *dev, const struct axlf *xclbin, char **dtb);
>> +int xrt_xclbin_parse_bitstream_header(struct device *dev, const unchar *data,
>> + u32 size, struct xclbin_bit_head_info *head_info);
>> +const char *xrt_clock_type2epname(enum XCLBIN_CLOCK_TYPE type);
> ok
>> +
>> +#endif /* _XCLBIN_HELPER_H_ */
>> diff --git a/drivers/fpga/xrt/lib/xclbin.c b/drivers/fpga/xrt/lib/xclbin.c
>> new file mode 100644
>> index 000000000000..31b363c014a3
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/xclbin.c
>> @@ -0,0 +1,369 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Xilinx Alveo FPGA Driver XCLBIN parser
>> + *
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors: David Zhang <[email protected]>
>> + */
>> +
>> +#include <asm/errno.h>
>> +#include <linux/vmalloc.h>
>> +#include <linux/device.h>
>> +#include "xclbin-helper.h"
>> +#include "metadata.h"
>> +
>> +/* Used for parsing bitstream header */
>> +#define BITSTREAM_EVEN_MAGIC_BYTE 0x0f
>> +#define BITSTREAM_ODD_MAGIC_BYTE 0xf0
> ok
>> +
>> +static int xrt_xclbin_get_section_hdr(const struct axlf *xclbin,
>> + enum axlf_section_kind kind,
>> + const struct axlf_section_header **header)
>> +{
>> + const struct axlf_section_header *phead = NULL;
>> + u64 xclbin_len;
>> + int i;
>> +
>> + *header = NULL;
>> + for (i = 0; i < xclbin->header.num_sections; i++) {
>> + if (xclbin->sections[i].section_kind == kind) {
>> + phead = &xclbin->sections[i];
>> + break;
>> + }
>> + }
>> +
>> + if (!phead)
>> + return -ENOENT;
>> +
>> + xclbin_len = xclbin->header.length;
>> + if (xclbin_len > XCLBIN_MAX_SIZE ||
>> + phead->section_offset + phead->section_size > xclbin_len)
>> + return -EINVAL;
>> +
>> + *header = phead;
>> + return 0;
>> +}
>> +
>> +static int xrt_xclbin_section_info(const struct axlf *xclbin,
>> + enum axlf_section_kind kind,
>> + u64 *offset, u64 *size)
>> +{
>> + const struct axlf_section_header *mem_header = NULL;
>> + int rc;
>> +
>> + rc = xrt_xclbin_get_section_hdr(xclbin, kind, &mem_header);
>> + if (rc)
>> + return rc;
>> +
>> + *offset = mem_header->section_offset;
>> + *size = mem_header->section_size;
> ok
>> +
>> + return 0;
>> +}
>> +
>> +/* caller must free the allocated memory for **data */
>> +int xrt_xclbin_get_section(struct device *dev,
>> + const struct axlf *buf,
>> + enum axlf_section_kind kind,
>> + void **data, u64 *len)
>> +{
>> + const struct axlf *xclbin = (const struct axlf *)buf;
>> + void *section = NULL;
>> + u64 offset = 0;
>> + u64 size = 0;
>> + int err = 0;
>> +
>> + if (!data) {
> ok
>> + dev_err(dev, "invalid data pointer");
>> + return -EINVAL;
>> + }
>> +
>> + err = xrt_xclbin_section_info(xclbin, kind, &offset, &size);
>> + if (err) {
>> + dev_dbg(dev, "parsing section failed. kind %d, err = %d", kind, err);
>> + return err;
>> + }
>> +
>> + section = vzalloc(size);
>> + if (!section)
>> + return -ENOMEM;
>> +
>> + memcpy(section, ((const char *)xclbin) + offset, size);
>> +
>> + *data = section;
>> + if (len)
>> + *len = size;
>> +
>> + return 0;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_xclbin_get_section);
>> +
>> +static inline int xclbin_bit_get_string(const unchar *data, u32 size,
>> + u32 offset, unchar prefix,
>> + const unchar **str)
>> +{
>> + int len;
>> + u32 tmp;
>> +
>> + /* prefix and length will be 3 bytes */
>> + if (offset + 3 > size)
>> + return -EINVAL;
>> +
>> + /* Read prefix */
>> + tmp = data[offset++];
>> + if (tmp != prefix)
>> + return -EINVAL;
>> +
>> + /* Get string length */
>> + len = data[offset++];
>> + len = (len << 8) | data[offset++];
>> +
>> + if (offset + len > size)
>> + return -EINVAL;
>> +
>> + if (data[offset + len - 1] != '\0')
>> + return -EINVAL;
>> +
>> + *str = data + offset;
>> +
>> + return len + 3;
>> +}
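Each string record parsed above is one prefix byte, a big-endian 16-bit length, then `length` bytes ending in NUL. A userspace rendering of the same logic (a zero length is additionally rejected here for safety; the return value is the bytes consumed, length + 3, or -1 on error rather than a kernel errno):

```c
#include <stddef.h>

/*
 * Userspace sketch of xclbin_bit_get_string(): record layout is
 *   [prefix][len_hi][len_lo][len bytes, NUL-terminated]
 */
static int demo_get_string(const unsigned char *data, unsigned int size,
			   unsigned int offset, unsigned char prefix,
			   const unsigned char **str)
{
	unsigned int len;

	/* prefix and length take 3 bytes */
	if (offset + 3 > size || data[offset] != prefix)
		return -1;
	len = (data[offset + 1] << 8) | data[offset + 2];
	offset += 3;
	if (offset + len > size || !len || data[offset + len - 1] != '\0')
		return -1;
	*str = data + offset;
	return (int)(len + 3);
}
```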
>> +
>> +/* parse bitstream header */
>> +int xrt_xclbin_parse_bitstream_header(struct device *dev, const unchar *data,
>> + u32 size, struct xclbin_bit_head_info *head_info)
>> +{
>> + u32 offset = 0;
>> + int len, i;
>> + u16 magic;
>> +
>> + memset(head_info, 0, sizeof(*head_info));
>> +
>> + /* Get "Magic" length */
>> + if (size < sizeof(u16)) {
>> + dev_err(dev, "invalid size");
>> + return -EINVAL;
>> + }
> ok
>> +
>> + len = data[offset++];
>> + len = (len << 8) | data[offset++];
>> +
>> + if (offset + len > size) {
>> + dev_err(dev, "invalid magic len");
>> + return -EINVAL;
>> + }
>> + head_info->magic_length = len;
>> +
>> + for (i = 0; i < head_info->magic_length - 1; i++) {
>> + magic = data[offset++];
>> + if (!(i % 2) && magic != BITSTREAM_EVEN_MAGIC_BYTE) {
>> + dev_err(dev, "invalid magic even byte at %d", offset);
>> + return -EINVAL;
>> + }
>> +
>> + if ((i % 2) && magic != BITSTREAM_ODD_MAGIC_BYTE) {
>> + dev_err(dev, "invalid magic odd byte at %d", offset);
>> + return -EINVAL;
>> + }
>> + }
>> +
>> + if (offset + 3 > size) {
>> + dev_err(dev, "invalid length of magic end");
>> + return -EINVAL;
>> + }
>> + /* Read null end of magic data. */
>> + if (data[offset++]) {
>> + dev_err(dev, "invalid magic end");
>> + return -EINVAL;
>> + }
>> +
>> + /* Read 0x01 (short) */
>> + magic = data[offset++];
>> + magic = (magic << 8) | data[offset++];
>> +
>> + /* Check the "0x01" half word */
>> + if (magic != 0x01) {
>> + dev_err(dev, "invalid magic end");
>> + return -EINVAL;
>> + }
>> +
>> + len = xclbin_bit_get_string(data, size, offset, 'a', &head_info->design_name);
>> + if (len < 0) {
>> + dev_err(dev, "get design name failed");
>> + return -EINVAL;
>> + }
>> +
>> + head_info->version = strstr(head_info->design_name, "Version=") + strlen("Version=");
>> + offset += len;
>> +
>> + len = xclbin_bit_get_string(data, size, offset, 'b', &head_info->part_name);
>> + if (len < 0) {
>> + dev_err(dev, "get part name failed");
>> + return -EINVAL;
>> + }
>> + offset += len;
>> +
>> + len = xclbin_bit_get_string(data, size, offset, 'c', &head_info->date);
>> + if (len < 0) {
>> + dev_err(dev, "get date failed");
>> + return -EINVAL;
>> + }
>> + offset += len;
>> +
>> + len = xclbin_bit_get_string(data, size, offset, 'd', &head_info->time);
>> + if (len < 0) {
>> + dev_err(dev, "get time failed");
>> + return -EINVAL;
>> + }
>> + offset += len;
>> +
>> + if (offset + 5 >= size) {
>> + dev_err(dev, "cannot get bitstream length");
>> + return -EINVAL;
>> + }
>> +
>> + /* Read 'e' */
>> + if (data[offset++] != 'e') {
>> + dev_err(dev, "invalid prefix of bitstream length");
>> + return -EINVAL;
>> + }
>> +
>> + /* Get byte length of bitstream */
>> + head_info->bitstream_length = data[offset++];
>> + head_info->bitstream_length = (head_info->bitstream_length << 8) | data[offset++];
>> + head_info->bitstream_length = (head_info->bitstream_length << 8) | data[offset++];
>> + head_info->bitstream_length = (head_info->bitstream_length << 8) | data[offset++];
> OK
>> +
>> + head_info->header_length = offset;
> ok
>> +
>> + return 0;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_xclbin_parse_bitstream_header);
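The header ends with an 'e' record: one prefix byte followed by a big-endian 32-bit byte count, assembled by the four shift-or lines above. A standalone sketch of just that record (returning -1 on error instead of a kernel errno):

```c
#include <stdint.h>

/*
 * Parse the final 'e' record of a bitstream header: 'e' prefix, then
 * four bytes of big-endian bitstream length, assembled as in
 * xrt_xclbin_parse_bitstream_header().
 */
static int demo_get_bitstream_length(const unsigned char *data,
				     unsigned int size, unsigned int offset,
				     uint32_t *length)
{
	uint32_t len;

	if (offset + 5 > size || data[offset] != 'e')
		return -1;
	len = data[offset + 1];
	len = (len << 8) | data[offset + 2];
	len = (len << 8) | data[offset + 3];
	len = (len << 8) | data[offset + 4];
	*length = len;
	return 0;
}
```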
> ok, removed xrt_xclbin_free_header
>> +
>> +struct xrt_clock_desc {
>> + char *clock_ep_name;
>> + u32 clock_xclbin_type;
>> + char *clkfreq_ep_name;
>> +} clock_desc[] = {
>> + {
>> + .clock_ep_name = XRT_MD_NODE_CLK_KERNEL1,
>> + .clock_xclbin_type = CT_DATA,
>> + .clkfreq_ep_name = XRT_MD_NODE_CLKFREQ_K1,
>> + },
>> + {
>> + .clock_ep_name = XRT_MD_NODE_CLK_KERNEL2,
>> + .clock_xclbin_type = CT_KERNEL,
>> + .clkfreq_ep_name = XRT_MD_NODE_CLKFREQ_K2,
>> + },
>> + {
>> + .clock_ep_name = XRT_MD_NODE_CLK_KERNEL3,
>> + .clock_xclbin_type = CT_SYSTEM,
>> + .clkfreq_ep_name = XRT_MD_NODE_CLKFREQ_HBM,
>> + },
>> +};
>> +
>> +const char *xrt_clock_type2epname(enum XCLBIN_CLOCK_TYPE type)
>> +{
>> + int i;
>> +
>> + for (i = 0; i < ARRAY_SIZE(clock_desc); i++) {
>> + if (clock_desc[i].clock_xclbin_type == type)
>> + return clock_desc[i].clock_ep_name;
>> + }
>> + return NULL;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_clock_type2epname);
>> +
>> +static const char *clock_type2clkfreq_name(enum XCLBIN_CLOCK_TYPE type)
>> +{
>> + int i;
>> +
>> + for (i = 0; i < ARRAY_SIZE(clock_desc); i++) {
>> + if (clock_desc[i].clock_xclbin_type == type)
>> + return clock_desc[i].clkfreq_ep_name;
>> + }
>> + return NULL;
>> +}
>> +
>> +static int xrt_xclbin_add_clock_metadata(struct device *dev,
>> + const struct axlf *xclbin,
>> + char *dtb)
>> +{
>> + struct clock_freq_topology *clock_topo;
>> + u16 freq;
>> + int rc;
>> + int i;
>> +
>> + /* if clock section does not exist, add nothing and return success */
> ok
>> + rc = xrt_xclbin_get_section(dev, xclbin, CLOCK_FREQ_TOPOLOGY,
>> + (void **)&clock_topo, NULL);
>> + if (rc == -ENOENT)
>> + return 0;
>> + else if (rc)
>> + return rc;
>> +
>> + for (i = 0; i < clock_topo->count; i++) {
>> + u8 type = clock_topo->clock_freq[i].type;
>> + const char *ep_name = xrt_clock_type2epname(type);
>> + const char *counter_name = clock_type2clkfreq_name(type);
>> +
>> + if (!ep_name || !counter_name)
>> + continue;
>> +
>> + freq = cpu_to_be16(clock_topo->clock_freq[i].freq_MHZ);
>> + rc = xrt_md_set_prop(dev, dtb, ep_name, NULL, XRT_MD_PROP_CLK_FREQ,
>> + &freq, sizeof(freq));
>> + if (rc)
>> + break;
>> +
>> + rc = xrt_md_set_prop(dev, dtb, ep_name, NULL, XRT_MD_PROP_CLK_CNT,
>> + counter_name, strlen(counter_name) + 1);
>> + if (rc)
>> + break;
>> + }
>> +
>> + vfree(clock_topo);
>> +
>> + return rc;
>> +}
>> +
>> +int xrt_xclbin_get_metadata(struct device *dev, const struct axlf *xclbin, char **dtb)
>> +{
>> + char *md = NULL, *newmd = NULL;
>> + u64 len, md_len;
>> + int rc;
>> +
>> + *dtb = NULL;
> ok
>> +
>> + rc = xrt_xclbin_get_section(dev, xclbin, PARTITION_METADATA, (void **)&md, &len);
>> + if (rc)
>> + goto done;
>> +
>> + md_len = xrt_md_size(dev, md);
>> +
>> + /* Sanity check the dtb section. */
>> + if (md_len > len) {
>> + rc = -EINVAL;
>> + goto done;
>> + }
>> +
>> + /* use dup function here to convert incoming metadata to writable */
>> + newmd = xrt_md_dup(dev, md);
>> + if (!newmd) {
>> + rc = -EFAULT;
>> + goto done;
>> + }
>> +
>> + /* Convert various needed xclbin sections into dtb. */
>> + rc = xrt_xclbin_add_clock_metadata(dev, xclbin, newmd);
>> +
>> + if (!rc)
>> + *dtb = newmd;
>> + else
>> + vfree(newmd);
> ok
>> +done:
>> + vfree(md);
>> + return rc;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_xclbin_get_metadata);
>> diff --git a/include/uapi/linux/xrt/xclbin.h b/include/uapi/linux/xrt/xclbin.h
>> new file mode 100644
>> index 000000000000..baa14d6653ab
>> --- /dev/null
>> +++ b/include/uapi/linux/xrt/xclbin.h
>> @@ -0,0 +1,409 @@
>> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
>> +/*
>> + * Xilinx FPGA compiled binary container format
>> + *
>> + * Copyright (C) 2015-2021, Xilinx Inc
>> + */
>> +
>> +#ifndef _XCLBIN_H_
>> +#define _XCLBIN_H_
> ok, removed _WIN32_
>> +
>> +#if defined(__KERNEL__)
>> +
>> +#include <linux/types.h>
> ok, removed uuid.h and version.h
>> +
>> +#elif defined(__cplusplus)
>> +
>> +#include <cstdlib>
>> +#include <cstdint>
>> +#include <algorithm>
>> +#include <uuid/uuid.h>
>> +
>> +#else
>> +
>> +#include <stdlib.h>
>> +#include <stdint.h>
>> +#include <uuid/uuid.h>
>> +
>> +#endif
>> +
>> +#ifdef __cplusplus
>> +extern "C" {
>> +#endif
>> +
>> +/**
>> + * DOC: Container format for Xilinx FPGA images
>> + * The container stores bitstreams, metadata and firmware images.
>> + * xclbin/xsabin is an ELF-like binary container format. It is a structured
> ok
>> + * series of sections. There is a file header followed by several section
>> + * headers, which are followed by the sections. A section header points to an
>> + * actual section. There is an optional signature at the end. The
>> + * following figure illustrates a typical xclbin:
>> + *
>> + * +---------------------+
>> + * | |
>> + * | HEADER |
>> + * +---------------------+
>> + * | SECTION HEADER |
>> + * | |
>> + * +---------------------+
>> + * | ... |
>> + * | |
>> + * +---------------------+
>> + * | SECTION HEADER |
>> + * | |
>> + * +---------------------+
>> + * | SECTION |
>> + * | |
>> + * +---------------------+
>> + * | ... |
>> + * | |
>> + * +---------------------+
>> + * | SECTION |
>> + * | |
>> + * +---------------------+
>> + * | SIGNATURE |
>> + * | (OPTIONAL) |
>> + * +---------------------+
> ok on the tabs to spaces
>> + */
>> +
>> +enum XCLBIN_MODE {
>> + XCLBIN_FLAT = 0,
> ok
>> + XCLBIN_PR,
>> + XCLBIN_TANDEM_STAGE2,
>> + XCLBIN_TANDEM_STAGE2_WITH_PR,
>> + XCLBIN_HW_EMU,
>> + XCLBIN_SW_EMU,
>> + XCLBIN_MODE_MAX
>> +};
>> +
>> +enum axlf_section_kind {
>> + BITSTREAM = 0,
>> + CLEARING_BITSTREAM,
>> + EMBEDDED_METADATA,
>> + FIRMWARE,
>> + DEBUG_DATA,
>> + SCHED_FIRMWARE,
>> + MEM_TOPOLOGY,
>> + CONNECTIVITY,
>> + IP_LAYOUT,
>> + DEBUG_IP_LAYOUT,
>> + DESIGN_CHECK_POINT,
>> + CLOCK_FREQ_TOPOLOGY,
>> + MCS,
>> + BMC,
>> + BUILD_METADATA,
>> + KEYVALUE_METADATA,
>> + USER_METADATA,
>> + DNA_CERTIFICATE,
>> + PDI,
>> + BITSTREAM_PARTIAL_PDI,
>> + PARTITION_METADATA,
>> + EMULATION_DATA,
>> + SYSTEM_METADATA,
>> + SOFT_KERNEL,
>> + ASK_FLASH,
>> + AIE_METADATA,
>> + ASK_GROUP_TOPOLOGY,
>> + ASK_GROUP_CONNECTIVITY
>> +};
>> +
>> +enum MEM_TYPE {
>> + MEM_DDR3 = 0,
>> + MEM_DDR4,
>> + MEM_DRAM,
>> + MEM_STREAMING,
>> + MEM_PREALLOCATED_GLOB,
>> + MEM_ARE,
>> + MEM_HBM,
>> + MEM_BRAM,
>> + MEM_URAM,
>> + MEM_STREAMING_CONNECTION
>> +};
>> +
>> +enum IP_TYPE {
>> + IP_MB = 0,
>> + IP_KERNEL,
>> + IP_DNASC,
>> + IP_DDR4_CONTROLLER,
>> + IP_MEM_DDR4,
>> + IP_MEM_HBM
>> +};
>> +
>> +struct axlf_section_header {
>> + uint32_t section_kind; /* Section type */
>> + char section_name[16]; /* Examples: "stage2", "clear1", */
>> + /* "clear2", "ocl1", "ocl2, */
>> + /* "ublaze", "sched" */
>> + char rsvd[4];
>> + uint64_t section_offset; /* File offset of section data */
>> + uint64_t section_size; /* Size of section data */
>> +} __packed;
>> +
>> +struct axlf_header {
>> + uint64_t length; /* Total size of the xclbin file */
>> + uint64_t time_stamp; /* Number of seconds since epoch */
>> + /* when xclbin was created */
>> + uint64_t feature_rom_timestamp; /* TimeSinceEpoch of the featureRom */
>> + uint16_t version_patch; /* Patch Version */
>> + uint8_t version_major; /* Major Version - Version: 2.1.0*/
> ok, version checked
>
> whitespace, needs '2.1.0 */'
>
> I see this is a general problem; look in other places.
>
> maybe it is a 'tab' and the diff is messing it up; convert tabs to spaces.
Will fix it.
>
>> + uint8_t version_minor; /* Minor Version */
>> + uint32_t mode; /* XCLBIN_MODE */
>> + union {
>> + struct {
>> + uint64_t platform_id; /* 64 bit platform ID: */
>> + /* vendor-device-subvendor-subdev */
>> + uint64_t feature_id; /* 64 bit feature id */
>> + } rom;
>> + unsigned char rom_uuid[16]; /* feature ROM UUID for which */
>> + /* this xclbin was generated */
>> + };
>> + unsigned char platform_vbnv[64]; /* e.g. */
>> + /* xilinx:xil-accel-rd-ku115:4ddr-xpr:3.4: null terminated */
>> + union {
>> + char next_axlf[16]; /* Name of next xclbin file */
>> + /* in the daisy chain */
>> + unsigned char uuid[16]; /* uuid of this xclbin*/
> ok
>
> whitespace: comment needs a ' ' before */
Will fix.
Thanks,
Lizhi
>
>> + };
>> + char debug_bin[16]; /* Name of binary with debug */
>> + /* information */
>> + uint32_t num_sections; /* Number of section headers */
>> + char rsvd[4];
>> +} __packed;
>> +
>> +struct axlf {
>> + char magic[8]; /* Should be "xclbin2\0" */
>> + int32_t signature_length; /* Length of the signature. */
>> + /* -1 indicates no signature */
>> + unsigned char reserved[28]; /* Note: Initialized to 0xFFs */
>> +
>> + unsigned char key_block[256]; /* Signature for validation */
>> + /* of binary */
>> + uint64_t unique_id; /* axlf's uniqueId, use it to */
>> + /* skip redownload etc */
>> + struct axlf_header header; /* Inline header */
>> + struct axlf_section_header sections[1]; /* One or more section */
>> + /* headers follow */
>> +} __packed;
> ok, thanks!
>> +
>> +/* bitstream information */
>> +struct xlnx_bitstream {
>> + uint8_t freq[8];
>> + char bits[1];
>> +} __packed;
>> +
>> +/**** MEMORY TOPOLOGY SECTION ****/
>> +struct mem_data {
>> + uint8_t type; /* enum corresponding to mem_type. */
>> + uint8_t used; /* if 0 this bank is not present */
>> + uint8_t rsvd[6];
>> + union {
>> + uint64_t size; /* if mem_type DDR, then size in KB; */
>> + uint64_t route_id; /* if streaming then "route_id" */
>> + };
>> + union {
>> + uint64_t base_address;/* if DDR then the base address; */
>> + uint64_t flow_id; /* if streaming then "flow id" */
>> + };
>> + unsigned char tag[16]; /* DDR: BANK0,1,2,3, has to be null */
>> + /* terminated; if streaming then stream0, 1 etc */
>> +} __packed;
>> +
>> +struct mem_topology {
>> + int32_t count; /* Number of mem_data */
>> + struct mem_data mem_data[1]; /* Should be sorted on mem_type */
>> +} __packed;
>> +
>> +/**** CONNECTIVITY SECTION ****/
>> +/* Connectivity of each argument of CU(Compute Unit). It will be in terms
> ok
>> + * of argument index associated. For associating CU instances with arguments
>> + * and banks, start at the connectivity section. Using the ip_layout_index
>> + * access the ip_data.name. Now we can associate this CU instance with its
>> + * original CU name and get the connectivity as well. This enables us to form
>> + * related groups of CU instances.
>> + */
>> +
>> +struct connection {
>> + int32_t arg_index; /* From 0 to n, may not be contiguous as scalars */
>> + /* skipped */
>> + int32_t ip_layout_index; /* index into the ip_layout section. */
>> + /* ip_layout.ip_data[index].type == IP_KERNEL */
>> + int32_t mem_data_index; /* index of the mem_data . Flag error is */
>> + /* used false. */
>> +} __packed;
>> +
>> +struct connectivity {
>> + int32_t count;
>> + struct connection connection[1];
>> +} __packed;
>> +
>> +/**** IP_LAYOUT SECTION ****/
>> +
>> +/* IP Kernel */
>> +#define IP_INT_ENABLE_MASK 0x0001
>> +#define IP_INTERRUPT_ID_MASK 0x00FE
>> +#define IP_INTERRUPT_ID_SHIFT 0x1
>> +
>> +enum IP_CONTROL {
>> + AP_CTRL_HS = 0,
> ok
>
> Thanks for the changes!
>
> Tom
>
>> + AP_CTRL_CHAIN,
>> + AP_CTRL_NONE,
>> + AP_CTRL_ME,
>> + ACCEL_ADAPTER
>> +};
>> +
>> +#define IP_CONTROL_MASK 0xFF00
>> +#define IP_CONTROL_SHIFT 0x8
>> +
>> +/* IPs on AXI lite - their types, names, and base addresses.*/
>> +struct ip_data {
>> + uint32_t type; /* map to IP_TYPE enum */
>> + union {
>> + uint32_t properties; /* Default: 32-bits to indicate ip */
>> + /* specific property. */
>> + /* type: IP_KERNEL
>> + * int_enable : Bit - 0x0000_0001;
>> + * interrupt_id : Bits - 0x0000_00FE;
>> + * ip_control : Bits = 0x0000_FF00;
>> + */
>> + struct { /* type: IP_MEM_* */
>> + uint16_t index;
>> + uint8_t pc_index;
>> + uint8_t unused;
>> + } indices;
>> + };
>> + uint64_t base_address;
>> + uint8_t name[64]; /* eg Kernel name corresponding to KERNEL */
>> + /* instance, can embed CU name in future. */
>> +} __packed;
>> +
>> +struct ip_layout {
>> + int32_t count;
>> + struct ip_data ip_data[1]; /* All the ip_data needs to be sorted */
>> + /* by base_address. */
>> +} __packed;
>> +
>> +/*** Debug IP section layout ****/
>> +enum DEBUG_IP_TYPE {
>> + UNDEFINED = 0,
>> + LAPC,
>> + ILA,
>> + AXI_MM_MONITOR,
>> + AXI_TRACE_FUNNEL,
>> + AXI_MONITOR_FIFO_LITE,
>> + AXI_MONITOR_FIFO_FULL,
>> + ACCEL_MONITOR,
>> + AXI_STREAM_MONITOR,
>> + AXI_STREAM_PROTOCOL_CHECKER,
>> + TRACE_S2MM,
>> + AXI_DMA,
>> + TRACE_S2MM_FULL
>> +};
>> +
>> +struct debug_ip_data {
>> + uint8_t type; /* type of enum DEBUG_IP_TYPE */
>> + uint8_t index_lowbyte;
>> + uint8_t properties;
>> + uint8_t major;
>> + uint8_t minor;
>> + uint8_t index_highbyte;
>> + uint8_t reserved[2];
>> + uint64_t base_address;
>> + char name[128];
>> +} __packed;
>> +
>> +struct debug_ip_layout {
>> + uint16_t count;
>> + struct debug_ip_data debug_ip_data[1];
>> +} __packed;
>> +
>> +/* Supported clock frequency types */
>> +enum XCLBIN_CLOCK_TYPE {
>> + CT_UNUSED = 0, /* Initialized value */
>> + CT_DATA = 1, /* Data clock */
>> + CT_KERNEL = 2, /* Kernel clock */
>> + CT_SYSTEM = 3 /* System Clock */
>> +};
>> +
>> +/* Clock Frequency Entry */
>> +struct clock_freq {
>> + uint16_t freq_MHZ; /* Frequency in MHz */
>> + uint8_t type; /* Clock type (enum CLOCK_TYPE) */
>> + uint8_t unused[5]; /* Not used - padding */
>> + char name[128]; /* Clock Name */
>> +} __packed;
>> +
>> +/* Clock frequency section */
>> +struct clock_freq_topology {
>> + int16_t count; /* Number of entries */
>> + struct clock_freq clock_freq[1]; /* Clock array */
>> +} __packed;
>> +
>> +/* Supported MCS file types */
>> +enum MCS_TYPE {
>> + MCS_UNKNOWN = 0, /* Initialized value */
>> + MCS_PRIMARY = 1, /* The primary mcs file data */
>> + MCS_SECONDARY = 2, /* The secondary mcs file data */
>> +};
>> +
>> +/* One chunk of MCS data */
>> +struct mcs_chunk {
>> + uint8_t type; /* MCS data type */
>> + uint8_t unused[7]; /* padding */
>> + uint64_t offset; /* data offset from the start of */
>> + /* the section */
>> + uint64_t size; /* data size */
>> +} __packed;
>> +
>> +/* MCS data section */
>> +struct mcs {
>> + int8_t count; /* Number of chunks */
>> + int8_t unused[7]; /* padding */
>> + struct mcs_chunk chunk[1]; /* MCS chunks followed by data */
>> +} __packed;
>> +
>> +/* bmc data section */
>> +struct bmc {
>> + uint64_t offset; /* data offset from the start of */
>> + /* the section */
>> + uint64_t size; /* data size (bytes) */
>> + char image_name[64]; /* Name of the image */
>> + /* (e.g., MSP432P401R) */
>> + char device_name[64]; /* Device ID (e.g., VCU1525) */
>> + char version[64];
>> + char md5value[33]; /* MD5 Expected Value */
>> + /* (e.g., 56027182079c0bd621761b7dab5a27ca)*/
>> + char padding[7]; /* Padding */
>> +} __packed;
>> +
>> +/* soft kernel data section, used by classic driver */
>> +struct soft_kernel {
>> + /** Prefix Syntax:
>> + * mpo - member, pointer, offset
>> + * This variable represents a zero terminated string
>> + * that is offset from the beginning of the section.
>> + * The pointer to access the string is initialized as follows:
>> + * char * pCharString = (address_of_section) + (mpo value)
>> + */
>> + uint32_t mpo_name; /* Name of the soft kernel */
>> + uint32_t image_offset; /* Image offset */
>> + uint32_t image_size; /* Image size */
>> + uint32_t mpo_version; /* Version */
>> + uint32_t mpo_md5_value; /* MD5 checksum */
>> + uint32_t mpo_symbol_name; /* Symbol name */
>> + uint32_t num_instances; /* Number of instances */
>> + uint8_t padding[36]; /* Reserved for future use */
>> + uint8_t reserved_ext[16]; /* Reserved for future extended data */
>> +} __packed;
>> +
>> +enum CHECKSUM_TYPE {
>> + CST_UNKNOWN = 0,
>> + CST_SDBM = 1,
>> + CST_LAST
>> +};
>> +
>> +#ifdef __cplusplus
>> +}
>> +#endif
>> +
>> +#endif
On 3/30/21 6:45 AM, Tom Rix wrote:
>
>
> It is unclear from the changelog if this new patch was split from an existing patch or is new content.
>
> The file ops seem to come from mgmt/main.c, which calls what could be file ops here. Why is this complicated redirection needed?
This is part of infra code which is split from subdev.c, not from
mgmt/main.c. Sorry about the confusion. We further split infra code to
avoid having one huge patch for review.
>
> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>> Helper functions for char device node creation / removal for platform
>> drivers. This is part of platform driver infrastructure.
>>
>> Signed-off-by: Sonal Santan <[email protected]>
>> Signed-off-by: Max Zhen <[email protected]>
>> Signed-off-by: Lizhi Hou <[email protected]>
>> ---
>> drivers/fpga/xrt/lib/cdev.c | 232 ++++++++++++++++++++++++++++++++++++
>> 1 file changed, 232 insertions(+)
>> create mode 100644 drivers/fpga/xrt/lib/cdev.c
>>
>> diff --git a/drivers/fpga/xrt/lib/cdev.c b/drivers/fpga/xrt/lib/cdev.c
>> new file mode 100644
>> index 000000000000..38efd24b6e10
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/cdev.c
>> @@ -0,0 +1,232 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Xilinx Alveo FPGA device node helper functions.
>> + *
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#include "xleaf.h"
>> +
>> +extern struct class *xrt_class;
>> +
>> +#define XRT_CDEV_DIR "xfpga"
> maybe "xrt_fpga" or just "xrt"
Will change it to just "xrt", yes.
>> +#define INODE2PDATA(inode) \
>> + container_of((inode)->i_cdev, struct xrt_subdev_platdata, xsp_cdev)
>> +#define INODE2PDEV(inode) \
>> + to_platform_device(kobj_to_dev((inode)->i_cdev->kobj.parent))
>> +#define CDEV_NAME(sysdev) (strchr((sysdev)->kobj.name, '!') + 1)
>> +
>> +/* Allow it to be accessed from cdev. */
>> +static void xleaf_devnode_allowed(struct platform_device *pdev)
>> +{
>> + struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
>> +
>> + /* Allow new opens. */
>> + mutex_lock(&pdata->xsp_devnode_lock);
>> + pdata->xsp_devnode_online = true;
>> + mutex_unlock(&pdata->xsp_devnode_lock);
>> +}
>> +
>> +/* Turn off access from cdev and wait for all existing users to go away. */
>> +static int xleaf_devnode_disallowed(struct platform_device *pdev)
>> +{
>> + int ret = 0;
>> + struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
>> +
>> + mutex_lock(&pdata->xsp_devnode_lock);
>> +
>> + /* Prevent new opens. */
>> + pdata->xsp_devnode_online = false;
>> + /* Wait for existing user to close. */
>> + while (!ret && pdata->xsp_devnode_ref) {
>> + int rc;
>> +
>> + mutex_unlock(&pdata->xsp_devnode_lock);
>> + rc = wait_for_completion_killable(&pdata->xsp_devnode_comp);
>> + mutex_lock(&pdata->xsp_devnode_lock);
>> +
>> + if (rc == -ERESTARTSYS) {
>> + /* Restore online state. */
>> + pdata->xsp_devnode_online = true;
>> + xrt_err(pdev, "%s is in use, ref=%d",
>> + CDEV_NAME(pdata->xsp_sysdev),
>> + pdata->xsp_devnode_ref);
>> + ret = -EBUSY;
>> + }
>> + }
>> +
>> + mutex_unlock(&pdata->xsp_devnode_lock);
>> +
>> + return ret;
>> +}
>> +
>> +static struct platform_device *
>> +__xleaf_devnode_open(struct inode *inode, bool excl)
>> +{
>> + struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
>> + struct platform_device *pdev = INODE2PDEV(inode);
>> + bool opened = false;
>> +
>> + mutex_lock(&pdata->xsp_devnode_lock);
>> +
>> + if (pdata->xsp_devnode_online) {
>> + if (excl && pdata->xsp_devnode_ref) {
>> + xrt_err(pdev, "%s has already been opened exclusively",
>> + CDEV_NAME(pdata->xsp_sysdev));
>> + } else if (!excl && pdata->xsp_devnode_excl) {
>> + xrt_err(pdev, "%s has been opened exclusively",
>> + CDEV_NAME(pdata->xsp_sysdev));
>> + } else {
>> + pdata->xsp_devnode_ref++;
>> + pdata->xsp_devnode_excl = excl;
>> + opened = true;
>> + xrt_info(pdev, "opened %s, ref=%d",
>> + CDEV_NAME(pdata->xsp_sysdev),
>> + pdata->xsp_devnode_ref);
>> + }
>> + } else {
>> + xrt_err(pdev, "%s is offline", CDEV_NAME(pdata->xsp_sysdev));
>> + }
>> +
>> + mutex_unlock(&pdata->xsp_devnode_lock);
>> +
>> + pdev = opened ? pdev : NULL;
>> + return pdev;
>> +}
>> +
>> +struct platform_device *
>> +xleaf_devnode_open_excl(struct inode *inode)
>> +{
>> + return __xleaf_devnode_open(inode, true);
>> +}
> This function is unused, remove.
This is part of the infra's API for leaf drivers to call. The caller has
been removed from this initial patch set to reduce its size. You will
see it in the next follow-up patch once we finish reviewing the current
one.
>> +
>> +struct platform_device *
>> +xleaf_devnode_open(struct inode *inode)
>> +{
>> + return __xleaf_devnode_open(inode, false);
>> +}
>> +EXPORT_SYMBOL_GPL(xleaf_devnode_open);
> does this really need to be exported?
Yes, this is part of the infra's API in xrt-lib.ko. One of the callers is in
xrt-mgmt.ko.
>> +
>> +void xleaf_devnode_close(struct inode *inode)
>> +{
>> + struct xrt_subdev_platdata *pdata = INODE2PDATA(inode);
>> + struct platform_device *pdev = INODE2PDEV(inode);
>> + bool notify = false;
>> +
>> + mutex_lock(&pdata->xsp_devnode_lock);
>> +
>> + WARN_ON(pdata->xsp_devnode_ref == 0);
>> + pdata->xsp_devnode_ref--;
>> + if (pdata->xsp_devnode_ref == 0) {
>> + pdata->xsp_devnode_excl = false;
>> + notify = true;
>> + }
>> + if (notify) {
>> + xrt_info(pdev, "closed %s, ref=%d",
>> + CDEV_NAME(pdata->xsp_sysdev), pdata->xsp_devnode_ref);
> xsp_devnode_ref will always be 0, so no need to report it.
Will remove.
>> + } else {
>> + xrt_info(pdev, "closed %s, notifying waiter",
>> + CDEV_NAME(pdata->xsp_sysdev));
>> + }
>> +
>> + mutex_unlock(&pdata->xsp_devnode_lock);
>> +
>> + if (notify)
>> + complete(&pdata->xsp_devnode_comp);
>> +}
>> +EXPORT_SYMBOL_GPL(xleaf_devnode_close);
>> +
>> +static inline enum xrt_subdev_file_mode
>> +devnode_mode(struct xrt_subdev_drvdata *drvdata)
>> +{
>> + return drvdata->xsd_file_ops.xsf_mode;
>> +}
>> +
>> +int xleaf_devnode_create(struct platform_device *pdev, const char *file_name,
>> + const char *inst_name)
>> +{
>> + struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(pdev);
>> + struct xrt_subdev_file_ops *fops = &drvdata->xsd_file_ops;
>> + struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
>> + struct cdev *cdevp;
>> + struct device *sysdev;
>> + int ret = 0;
>> + char fname[256];
>> +
>> + mutex_init(&pdata->xsp_devnode_lock);
>> + init_completion(&pdata->xsp_devnode_comp);
>> +
>> + cdevp = &DEV_PDATA(pdev)->xsp_cdev;
>> + cdev_init(cdevp, &fops->xsf_ops);
>> + cdevp->owner = fops->xsf_ops.owner;
>> + cdevp->dev = MKDEV(MAJOR(fops->xsf_dev_t), pdev->id);
>> +
>> + /*
>> + * Set pdev as parent of cdev so that pdev (and its platform
>> + * data) will not be freed while cdev is still in use.
>> + */
>> + cdev_set_parent(cdevp, &DEV(pdev)->kobj);
>> +
>> + ret = cdev_add(cdevp, cdevp->dev, 1);
>> + if (ret) {
>> + xrt_err(pdev, "failed to add cdev: %d", ret);
>> + goto failed;
>> + }
>> + if (!file_name)
>> + file_name = pdev->name;
>> + if (!inst_name) {
>> + if (devnode_mode(drvdata) == XRT_SUBDEV_FILE_MULTI_INST) {
>> + snprintf(fname, sizeof(fname), "%s/%s/%s.%u",
>> + XRT_CDEV_DIR, DEV_PDATA(pdev)->xsp_root_name,
>> + file_name, pdev->id);
>> + } else {
>> + snprintf(fname, sizeof(fname), "%s/%s/%s",
>> + XRT_CDEV_DIR, DEV_PDATA(pdev)->xsp_root_name,
>> + file_name);
>> + }
>> + } else {
>> + snprintf(fname, sizeof(fname), "%s/%s/%s.%s", XRT_CDEV_DIR,
>> + DEV_PDATA(pdev)->xsp_root_name, file_name, inst_name);
>> + }
>> + sysdev = device_create(xrt_class, NULL, cdevp->dev, NULL, "%s", fname);
>> + if (IS_ERR(sysdev)) {
>> + ret = PTR_ERR(sysdev);
>> + xrt_err(pdev, "failed to create device node: %d", ret);
>> + goto failed_cdev_add;
>> + }
>> + pdata->xsp_sysdev = sysdev;
>> +
>> + xleaf_devnode_allowed(pdev);
>> +
>> + xrt_info(pdev, "created (%d, %d): /dev/%s",
>> + MAJOR(cdevp->dev), pdev->id, fname);
>> + return 0;
>> +
>> +failed_cdev_add:
>> + cdev_del(cdevp);
>> +failed:
>> + cdevp->owner = NULL;
>> + return ret;
>> +}
>> +
>> +int xleaf_devnode_destroy(struct platform_device *pdev)
>> +{
>> + struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
>> + struct cdev *cdevp = &pdata->xsp_cdev;
>> + dev_t dev = cdevp->dev;
>> + int rc;
>> +
>> + rc = xleaf_devnode_disallowed(pdev);
>> + if (rc)
>> + return rc;
> Failure of one leaf to be destroyed is not handled well.
>
> Could an able-to-destroy check be done over the whole group?
Yes, this is not handled properly. Handling this type of error in the
cleanup code path is not very clean. I think it might make more sense
to change the code so that xleaf_devnode_disallowed() cannot fail; it
would instead just wait for existing users to quit. This is just like
the remove callback of a platform device, which does not return an
error.
Or maybe there is a better way to handle an error like this?
Thanks,
Max
>
> Tom
>
>> +
>> + xrt_info(pdev, "removed (%d, %d): /dev/%s/%s", MAJOR(dev), MINOR(dev),
>> + XRT_CDEV_DIR, CDEV_NAME(pdata->xsp_sysdev));
>> + device_destroy(xrt_class, cdevp->dev);
>> + pdata->xsp_sysdev = NULL;
>> + cdev_del(cdevp);
>> + return 0;
>> +}
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> Add DDR calibration driver. DDR calibration is a hardware function
> discovered by walking firmware metadata. A platform device node will
> be created for it. Hardware provides DDR calibration status through
> this function.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> .../fpga/xrt/include/xleaf/ddr_calibration.h | 28 +++
> drivers/fpga/xrt/lib/xleaf/ddr_calibration.c | 226 ++++++++++++++++++
ok
> 2 files changed, 254 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xleaf/ddr_calibration.h
> create mode 100644 drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
>
> diff --git a/drivers/fpga/xrt/include/xleaf/ddr_calibration.h b/drivers/fpga/xrt/include/xleaf/ddr_calibration.h
> new file mode 100644
> index 000000000000..878740c26ca2
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf/ddr_calibration.h
> @@ -0,0 +1,28 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Cheng Zhen <[email protected]>
> + */
> +
> +#ifndef _XRT_DDR_CALIBRATION_H_
> +#define _XRT_DDR_CALIBRATION_H_
> +
> +#include "xleaf.h"
> +#include <linux/xrt/xclbin.h>
> +
> +/*
> + * Memory calibration driver leaf calls.
> + */
> +enum xrt_calib_results {
> + XRT_CALIB_UNKNOWN = 0,
ok
> + XRT_CALIB_SUCCEEDED,
> + XRT_CALIB_FAILED,
> +};
> +
> +enum xrt_calib_leaf_cmd {
> + XRT_CALIB_RESULT = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> +};
> +
> +#endif /* _XRT_DDR_CALIBRATION_H_ */
> diff --git a/drivers/fpga/xrt/lib/xleaf/ddr_calibration.c b/drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
> new file mode 100644
> index 000000000000..5a9fa82946cb
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/ddr_calibration.c
> @@ -0,0 +1,226 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA memory calibration driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * memory calibration
> + *
> + * Authors:
> + * Lizhi Hou<[email protected]>
> + */
> +#include <linux/delay.h>
> +#include <linux/regmap.h>
> +#include "xclbin-helper.h"
> +#include "metadata.h"
> +#include "xleaf/ddr_calibration.h"
> +
> +#define XRT_CALIB "xrt_calib"
> +
> +#define XRT_CALIB_STATUS_REG 0
> +#define XRT_CALIB_READ_RETRIES 20
> +#define XRT_CALIB_READ_INTERVAL 500 /* ms */
> +
> +static const struct regmap_config calib_regmap_config = {
> + .reg_bits = 32,
> + .val_bits = 32,
> + .reg_stride = 4,
> + .max_register = 0x1000,
ok
> +};
> +
> +struct calib_cache {
> + struct list_head link;
> + const char *ep_name;
> + char *data;
> + u32 data_size;
> +};
> +
> +struct calib {
> + struct platform_device *pdev;
> + struct regmap *regmap;
> + struct mutex lock; /* calibration dev lock */
> + struct list_head cache_list;
> + u32 cache_num;
> + enum xrt_calib_results result;
> +};
> +
> +static void __calib_cache_clean_nolock(struct calib *calib)
ok
> +{
> + struct calib_cache *cache, *temp;
> +
> + list_for_each_entry_safe(cache, temp, &calib->cache_list, link) {
> + vfree(cache->data);
> + list_del(&cache->link);
> + vfree(cache);
> + }
> + calib->cache_num = 0;
> +}
> +
> +static void calib_cache_clean(struct calib *calib)
> +{
> + mutex_lock(&calib->lock);
> + __calib_cache_clean_nolock(calib);
> + mutex_unlock(&calib->lock);
> +}
> +
> +static int calib_calibration(struct calib *calib)
> +{
> + u32 times = XRT_CALIB_READ_RETRIES;
ok
> + u32 status;
> + int ret;
> +
> + while (times != 0) {
> + ret = regmap_read(calib->regmap, XRT_CALIB_STATUS_REG, &status);
> + if (ret) {
> + xrt_err(calib->pdev, "failed to read status reg %d", ret);
> + return ret;
> + }
> +
> + if (status & BIT(0))
> + break;
> + msleep(XRT_CALIB_READ_INTERVAL);
ok
Reviewed-by: Tom Rix <[email protected]>
> + times--;
> + }
> +
> + if (!times) {
> + xrt_err(calib->pdev,
> + "MIG calibration timeout after bitstream download");
> + return -ETIMEDOUT;
> + }
> +
> + xrt_info(calib->pdev, "took %dms", (XRT_CALIB_READ_RETRIES - times) *
> + XRT_CALIB_READ_INTERVAL);
> + return 0;
> +}
> +
> +static void xrt_calib_event_cb(struct platform_device *pdev, void *arg)
> +{
> + struct calib *calib = platform_get_drvdata(pdev);
> + struct xrt_event *evt = (struct xrt_event *)arg;
> + enum xrt_events e = evt->xe_evt;
> + enum xrt_subdev_id id;
> + int ret;
> +
> + id = evt->xe_subdev.xevt_subdev_id;
> +
> + switch (e) {
> + case XRT_EVENT_POST_CREATION:
> + if (id == XRT_SUBDEV_UCS) {
> + ret = calib_calibration(calib);
> + if (ret)
> + calib->result = XRT_CALIB_FAILED;
> + else
> + calib->result = XRT_CALIB_SUCCEEDED;
> + }
> + break;
> + default:
> + xrt_dbg(pdev, "ignored event %d", e);
> + break;
> + }
> +}
> +
> +static int xrt_calib_remove(struct platform_device *pdev)
> +{
> + struct calib *calib = platform_get_drvdata(pdev);
> +
> + calib_cache_clean(calib);
> +
> + return 0;
> +}
> +
> +static int xrt_calib_probe(struct platform_device *pdev)
> +{
> + void __iomem *base = NULL;
> + struct resource *res;
> + struct calib *calib;
> + int err = 0;
> +
> + calib = devm_kzalloc(&pdev->dev, sizeof(*calib), GFP_KERNEL);
> + if (!calib)
> + return -ENOMEM;
> +
> + calib->pdev = pdev;
> + platform_set_drvdata(pdev, calib);
> +
> + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + if (!res) {
> + err = -EINVAL;
> + goto failed;
> + }
> +
> + base = devm_ioremap_resource(&pdev->dev, res);
> + if (IS_ERR(base)) {
> + err = PTR_ERR(base);
> + goto failed;
> + }
> +
> + calib->regmap = devm_regmap_init_mmio(&pdev->dev, base, &calib_regmap_config);
> + if (IS_ERR(calib->regmap)) {
> + xrt_err(pdev, "Map iomem failed");
> + err = PTR_ERR(calib->regmap);
> + goto failed;
> + }
> +
> + mutex_init(&calib->lock);
> + INIT_LIST_HEAD(&calib->cache_list);
> +
> + return 0;
> +
> +failed:
> + return err;
> +}
> +
> +static int
> +xrt_calib_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
> +{
> + struct calib *calib = platform_get_drvdata(pdev);
> + int ret = 0;
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + xrt_calib_event_cb(pdev, arg);
> + break;
> + case XRT_CALIB_RESULT: {
> + enum xrt_calib_results *r = (enum xrt_calib_results *)arg;
> + *r = calib->result;
> + break;
> + }
> + default:
> + xrt_err(pdev, "unsupported cmd %d", cmd);
> + ret = -EINVAL;
> + }
> + return ret;
> +}
> +
> +static struct xrt_subdev_endpoints xrt_calib_endpoints[] = {
> + {
> + .xse_names = (struct xrt_subdev_ep_names[]) {
> + { .ep_name = XRT_MD_NODE_DDR_CALIB },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_subdev_drvdata xrt_calib_data = {
> + .xsd_dev_ops = {
> + .xsd_leaf_call = xrt_calib_leaf_call,
> + },
> +};
> +
> +static const struct platform_device_id xrt_calib_table[] = {
> + { XRT_CALIB, (kernel_ulong_t)&xrt_calib_data },
> + { },
> +};
> +
> +static struct platform_driver xrt_calib_driver = {
> + .driver = {
> + .name = XRT_CALIB,
> + },
> + .probe = xrt_calib_probe,
> + .remove = xrt_calib_remove,
> + .id_table = xrt_calib_table,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_CALIB, calib);
Hi Tom,
On 3/29/21 12:44 PM, Tom Rix wrote:
> Bisectability may be an issue.
>
> Moritz,
>
> building happens on the last patch, so in theory there will never be a build break needing bisection. Do we care about the misordering of several of these patches?
The general idea about ordering of patches is that global defines should
be introduced before their users.
>
> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>> xrt-lib kernel module infrastructure code to register and manage all
>> leaf driver modules.
>>
>> Signed-off-by: Sonal Santan <[email protected]>
>> Signed-off-by: Max Zhen <[email protected]>
>> Signed-off-by: Lizhi Hou <[email protected]>
>> ---
>> drivers/fpga/xrt/include/subdev_id.h | 38 ++++
>> drivers/fpga/xrt/include/xleaf.h | 264 +++++++++++++++++++++++++
>> drivers/fpga/xrt/lib/lib-drv.c | 277 +++++++++++++++++++++++++++
> ok
>> drivers/fpga/xrt/lib/lib-drv.h | 17 ++
>> 4 files changed, 596 insertions(+)
>> create mode 100644 drivers/fpga/xrt/include/subdev_id.h
>> create mode 100644 drivers/fpga/xrt/include/xleaf.h
>> create mode 100644 drivers/fpga/xrt/lib/lib-drv.c
>> create mode 100644 drivers/fpga/xrt/lib/lib-drv.h
>>
>> diff --git a/drivers/fpga/xrt/include/subdev_id.h b/drivers/fpga/xrt/include/subdev_id.h
>> new file mode 100644
>> index 000000000000..42fbd6f5e80a
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/include/subdev_id.h
>> @@ -0,0 +1,38 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#ifndef _XRT_SUBDEV_ID_H_
>> +#define _XRT_SUBDEV_ID_H_
>> +
>> +/*
>> + * Every subdev driver has an ID for others to refer to it. There can be multiple
>> + * instances of a subdev driver. A <subdev_id, subdev_instance> tuple is a unique identification
>> + * of a specific instance of a subdev driver.
>> + */
>> +enum xrt_subdev_id {
>> + XRT_SUBDEV_GRP = 0,
> not necessary to initialize all unless there are gaps.
Yeah, just trying to avoid any issue when things are accidentally reordered.
>> + XRT_SUBDEV_VSEC = 1,
>> + XRT_SUBDEV_VSEC_GOLDEN = 2,
>> + XRT_SUBDEV_DEVCTL = 3,
>> + XRT_SUBDEV_AXIGATE = 4,
>> + XRT_SUBDEV_ICAP = 5,
>> + XRT_SUBDEV_TEST = 6,
>> + XRT_SUBDEV_MGMT_MAIN = 7,
>> + XRT_SUBDEV_QSPI = 8,
>> + XRT_SUBDEV_MAILBOX = 9,
>> + XRT_SUBDEV_CMC = 10,
>> + XRT_SUBDEV_CALIB = 11,
>> + XRT_SUBDEV_CLKFREQ = 12,
>> + XRT_SUBDEV_CLOCK = 13,
>> + XRT_SUBDEV_SRSR = 14,
>> + XRT_SUBDEV_UCS = 15,
>> + XRT_SUBDEV_NUM = 16, /* Total number of subdevs. */
>> + XRT_ROOT = -1, /* Special ID for root driver. */
>> +};
>> +
>> +#endif /* _XRT_SUBDEV_ID_H_ */
>> diff --git a/drivers/fpga/xrt/include/xleaf.h b/drivers/fpga/xrt/include/xleaf.h
>> new file mode 100644
>> index 000000000000..acb500df04b0
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/include/xleaf.h
>> @@ -0,0 +1,264 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + * Sonal Santan <[email protected]>
>> + */
>> +
>> +#ifndef _XRT_XLEAF_H_
>> +#define _XRT_XLEAF_H_
>> +
>> +#include <linux/platform_device.h>
>> +#include <linux/fs.h>
>> +#include <linux/cdev.h>
>> +#include "subdev_id.h"
>> +#include "xroot.h"
>> +#include "events.h"
>> +
>> +/* All subdev drivers should use below common routines to print out msg. */
>> +#define DEV(pdev) (&(pdev)->dev)
>> +#define DEV_PDATA(pdev) \
>> + ((struct xrt_subdev_platdata *)dev_get_platdata(DEV(pdev)))
>> +#define DEV_DRVDATA(pdev) \
>> + ((struct xrt_subdev_drvdata *) \
>> + platform_get_device_id(pdev)->driver_data)
>> +#define FMT_PRT(prt_fn, pdev, fmt, args...) \
>> + ({typeof(pdev) (_pdev) = (pdev); \
>> + prt_fn(DEV(_pdev), "%s %s: " fmt, \
>> + DEV_PDATA(_pdev)->xsp_root_name, __func__, ##args); })
>> +#define xrt_err(pdev, fmt, args...) FMT_PRT(dev_err, pdev, fmt, ##args)
>> +#define xrt_warn(pdev, fmt, args...) FMT_PRT(dev_warn, pdev, fmt, ##args)
>> +#define xrt_info(pdev, fmt, args...) FMT_PRT(dev_info, pdev, fmt, ##args)
>> +#define xrt_dbg(pdev, fmt, args...) FMT_PRT(dev_dbg, pdev, fmt, ##args)
>> +
>> +enum {
>> + /* Starting cmd for common leaf cmd implemented by all leaves. */
>> + XRT_XLEAF_COMMON_BASE = 0,
>> + /* Starting cmd for leaves' specific leaf cmds. */
>> + XRT_XLEAF_CUSTOM_BASE = 64,
>> +};
>> +
>> +enum xrt_xleaf_common_leaf_cmd {
>> + XRT_XLEAF_EVENT = XRT_XLEAF_COMMON_BASE,
>> +};
>> +
>> +/*
>> + * If populated by subdev driver, infra will handle the mechanics of
>> + * char device (un)registration.
>> + */
>> +enum xrt_subdev_file_mode {
>> + /* Infra creates cdev with default file name */
>> + XRT_SUBDEV_FILE_DEFAULT = 0,
>> + /* Infra creates cdev, encoding instance number in file name */
>> + XRT_SUBDEV_FILE_MULTI_INST,
>> + /* No auto creation of cdev by infra; the leaf handles it itself */
>> + XRT_SUBDEV_FILE_NO_AUTO,
>> +};
>> +
>> +struct xrt_subdev_file_ops {
>> + const struct file_operations xsf_ops;
>> + dev_t xsf_dev_t;
>> + const char *xsf_dev_name;
>> + enum xrt_subdev_file_mode xsf_mode;
>> +};
>> +
>> +/*
>> + * Subdev driver callbacks populated by subdev driver.
>> + */
>> +struct xrt_subdev_drv_ops {
>> + /*
>> + * Per driver instance callback. The pdev points to the instance.
>> + * If defined, these are called by other leaf drivers.
>> + * Note that root driver may call into xsd_leaf_call of a group driver.
>> + */
>> + int (*xsd_leaf_call)(struct platform_device *pdev, u32 cmd, void *arg);
>> +};
>> +
>> +/*
>> + * Defined and populated by subdev driver, exported as driver_data in
>> + * struct platform_device_id.
>> + */
>> +struct xrt_subdev_drvdata {
>> + struct xrt_subdev_file_ops xsd_file_ops;
>> + struct xrt_subdev_drv_ops xsd_dev_ops;
>> +};
>> +
>> +/*
>> + * Partially initialized by the parent driver, then passed in as the subdev
>> + * driver's platform data when creating a subdev driver instance via the
>> + * platform device registration API (platform_device_register_data() or the
>> + * like).
>> + *
>> + * Once the registration API returns, the platform driver framework makes a
>> + * copy of this buffer and maintains its life cycle. The content of the
>> + * buffer is completely owned by the subdev driver.
>> + *
>> + * Thus, the parent driver should be very careful when it touches this
>> + * buffer again once it has been handed over to the subdev driver. The data
>> + * structure should not contain pointers to buffers managed by other drivers
>> + * or the parent driver, since those could have been freed before the
>> + * platform data buffer is freed by the platform driver framework.
>> + */
>> +struct xrt_subdev_platdata {
>> + /*
>> + * Per driver instance callback. The pdev points to the instance.
>> + * Should always be defined for subdev driver to get service from root.
>> + */
>> + xrt_subdev_root_cb_t xsp_root_cb;
>> + void *xsp_root_cb_arg;
>> +
>> + /* Something to associate w/ root for msg printing. */
>> + const char *xsp_root_name;
>> +
>> + /*
>> + * Char dev support for this subdev instance.
>> + * Initialized by subdev driver.
>> + */
>> + struct cdev xsp_cdev;
>> + struct device *xsp_sysdev;
>> + struct mutex xsp_devnode_lock; /* devnode lock */
>> + struct completion xsp_devnode_comp;
>> + int xsp_devnode_ref;
>> + bool xsp_devnode_online;
>> + bool xsp_devnode_excl;
>> +
>> + /*
>> + * Subdev driver specific init data. The buffer should be embedded
>> + * in this data structure buffer after dtb, so that it can be freed
>> + * together with platform data.
>> + */
>> + loff_t xsp_priv_off; /* Offset into this platform data buffer. */
>> + size_t xsp_priv_len;
>> +
>> + /*
>> + * Populated by parent driver to describe the device tree for
>> + * the subdev driver to handle. Should always be the last member since
>> + * it is of variable length.
>> + */
>> + bool xsp_dtb_valid;
>> + char xsp_dtb[0];
>> +};
>> +
>> +/*
>> + * This struct defines the endpoints that belong to the same subdevice.
>> + */
>> +struct xrt_subdev_ep_names {
>> + const char *ep_name;
>> + const char *regmap_name;
>> +};
>> +
>> +struct xrt_subdev_endpoints {
>> + struct xrt_subdev_ep_names *xse_names;
>> + /* minimum number of endpoints to support the subdevice */
>> + u32 xse_min_ep;
>> +};
>> +
>> +struct subdev_match_arg {
>> + enum xrt_subdev_id id;
>> + int instance;
>> +};
>> +
>> +bool xleaf_has_endpoint(struct platform_device *pdev, const char *endpoint_name);
>> +struct platform_device *xleaf_get_leaf(struct platform_device *pdev,
>> + xrt_subdev_match_t cb, void *arg);
>> +
>> +static inline bool subdev_match(enum xrt_subdev_id id, struct platform_device *pdev, void *arg)
>> +{
>> + const struct subdev_match_arg *a = (struct subdev_match_arg *)arg;
>> + int instance = a->instance;
>> +
>> + if (id != a->id)
>> + return false;
>> + if (instance != pdev->id && instance != PLATFORM_DEVID_NONE)
>> + return false;
>> + return true;
>> +}
>> +
>> +static inline bool xrt_subdev_match_epname(enum xrt_subdev_id id,
>> + struct platform_device *pdev, void *arg)
>> +{
>> + return xleaf_has_endpoint(pdev, arg);
>> +}
>> +
>> +static inline struct platform_device *
>> +xleaf_get_leaf_by_id(struct platform_device *pdev,
>> + enum xrt_subdev_id id, int instance)
>> +{
>> + struct subdev_match_arg arg = { id, instance };
>> +
>> + return xleaf_get_leaf(pdev, subdev_match, &arg);
>> +}
>> +
>> +static inline struct platform_device *
>> +xleaf_get_leaf_by_epname(struct platform_device *pdev, const char *name)
>> +{
>> + return xleaf_get_leaf(pdev, xrt_subdev_match_epname, (void *)name);
>> +}
>> +
>> +static inline int xleaf_call(struct platform_device *tgt, u32 cmd, void *arg)
>> +{
>> + struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(tgt);
>> +
>> + return (*drvdata->xsd_dev_ops.xsd_leaf_call)(tgt, cmd, arg);
>> +}
>> +
>> +int xleaf_broadcast_event(struct platform_device *pdev, enum xrt_events evt, bool async);
>> +int xleaf_create_group(struct platform_device *pdev, char *dtb);
>> +int xleaf_destroy_group(struct platform_device *pdev, int instance);
>> +void xleaf_get_barres(struct platform_device *pdev, struct resource **res, uint bar_idx);
>> +void xleaf_get_root_id(struct platform_device *pdev, unsigned short *vendor, unsigned short *device,
>> + unsigned short *subvendor, unsigned short *subdevice);
>> +void xleaf_hot_reset(struct platform_device *pdev);
>> +int xleaf_put_leaf(struct platform_device *pdev, struct platform_device *leaf);
>> +struct device *xleaf_register_hwmon(struct platform_device *pdev, const char *name, void *drvdata,
>> + const struct attribute_group **grps);
>> +void xleaf_unregister_hwmon(struct platform_device *pdev, struct device *hwmon);
>> +int xleaf_wait_for_group_bringup(struct platform_device *pdev);
>> +
>> +/*
>> + * Character device helper APIs for use by leaf drivers
>> + */
>> +static inline bool xleaf_devnode_enabled(struct xrt_subdev_drvdata *drvdata)
>> +{
>> + return drvdata && drvdata->xsd_file_ops.xsf_ops.open;
>> +}
>> +
>> +int xleaf_devnode_create(struct platform_device *pdev,
>> + const char *file_name, const char *inst_name);
>> +int xleaf_devnode_destroy(struct platform_device *pdev);
>> +
>> +struct platform_device *xleaf_devnode_open_excl(struct inode *inode);
>> +struct platform_device *xleaf_devnode_open(struct inode *inode);
>> +void xleaf_devnode_close(struct inode *inode);
>> +
>> +/* Helpers. */
>> +int xleaf_register_driver(enum xrt_subdev_id id, struct platform_driver *drv,
>> + struct xrt_subdev_endpoints *eps);
>> +void xleaf_unregister_driver(enum xrt_subdev_id id);
>> +
>> +/* Module's init/fini routines for leaf driver in xrt-lib module */
>> +#define XRT_LEAF_INIT_FINI_FUNC(_id, name) \
>> +void name##_leaf_init_fini(bool init) \
>> +{ \
>> + typeof(_id) id = _id; \
>> + if (init) { \
>> + xleaf_register_driver(id, \
>> + &xrt_##name##_driver, \
>> + xrt_##name##_endpoints); \
>> + } else { \
>> + xleaf_unregister_driver(id); \
>> + } \
>> +}
>> +
>> +void group_leaf_init_fini(bool init);
>> +void vsec_leaf_init_fini(bool init);
>> +void devctl_leaf_init_fini(bool init);
>> +void axigate_leaf_init_fini(bool init);
>> +void icap_leaf_init_fini(bool init);
>> +void calib_leaf_init_fini(bool init);
>> +void clkfreq_leaf_init_fini(bool init);
>> +void clock_leaf_init_fini(bool init);
>> +void ucs_leaf_init_fini(bool init);
>> +
>> +#endif /* _XRT_XLEAF_H_ */
>> diff --git a/drivers/fpga/xrt/lib/lib-drv.c b/drivers/fpga/xrt/lib/lib-drv.c
>> new file mode 100644
>> index 000000000000..64bb8710be66
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/lib-drv.c
>> @@ -0,0 +1,277 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#include <linux/module.h>
>> +#include <linux/vmalloc.h>
>> +#include "xleaf.h"
>> +#include "xroot.h"
>> +#include "lib-drv.h"
>> +
>> +#define XRT_IPLIB_MODULE_NAME "xrt-lib"
>> +#define XRT_IPLIB_MODULE_VERSION "4.0.0"
>> +#define XRT_MAX_DEVICE_NODES 128
>> +#define XRT_DRVNAME(drv) ((drv)->driver.name)
>> +
>> +/*
>> + * A subdev driver is known by its ID to others. We map the ID to its
> ok
>> + * struct platform_driver, which contains its binding name and driver/file ops.
>> + * We also map it to the endpoint name in the DTB, if that is different
>> + * from the driver's binding name.
>> + */
>> +struct xrt_drv_map {
>> + struct list_head list;
>> + enum xrt_subdev_id id;
>> + struct platform_driver *drv;
>> + struct xrt_subdev_endpoints *eps;
>> + struct ida ida; /* manage driver instance and char dev minor */
>> +};
>> +
>> +static DEFINE_MUTEX(xrt_lib_lock); /* global lock protecting xrt_drv_maps list */
>> +static LIST_HEAD(xrt_drv_maps);
>> +struct class *xrt_class;
>> +
>> +static inline struct xrt_subdev_drvdata *
>> +xrt_drv_map2drvdata(struct xrt_drv_map *map)
>> +{
>> + return (struct xrt_subdev_drvdata *)map->drv->id_table[0].driver_data;
>> +}
>> +
>> +static struct xrt_drv_map *
>> +__xrt_drv_find_map_by_id(enum xrt_subdev_id id)
> ok
>> +{
>> + struct xrt_drv_map *tmap;
>> +
>> + list_for_each_entry(tmap, &xrt_drv_maps, list) {
>> + if (tmap->id == id)
>> + return tmap;
>> + }
>> + return NULL;
>> +}
>> +
>> +static struct xrt_drv_map *
>> +xrt_drv_find_map_by_id(enum xrt_subdev_id id)
>> +{
>> + struct xrt_drv_map *map;
>> +
>> + mutex_lock(&xrt_lib_lock);
>> + map = __xrt_drv_find_map_by_id(id);
>> + mutex_unlock(&xrt_lib_lock);
>> + /*
>> + * map should remain valid even after the lock is dropped since a registered
> ok
>> + * driver should only be unregistered when its module is being unloaded,
>> + * at which point the driver is no longer in use.
>> + */
>> + return map;
>> +}
>> +
>> +static int xrt_drv_register_driver(struct xrt_drv_map *map)
>> +{
>> + struct xrt_subdev_drvdata *drvdata;
>> + int rc = 0;
>> + const char *drvname = XRT_DRVNAME(map->drv);
>> +
>> + rc = platform_driver_register(map->drv);
>> + if (rc) {
>> + pr_err("register %s platform driver failed\n", drvname);
>> + return rc;
>> + }
>> +
>> + drvdata = xrt_drv_map2drvdata(map);
>> + if (drvdata) {
>> + /* Initialize dev_t for char dev node. */
>> + if (xleaf_devnode_enabled(drvdata)) {
>> + rc = alloc_chrdev_region(&drvdata->xsd_file_ops.xsf_dev_t, 0,
>> + XRT_MAX_DEVICE_NODES, drvname);
>> + if (rc) {
>> + platform_driver_unregister(map->drv);
>> + pr_err("failed to alloc dev minor for %s: %d\n", drvname, rc);
>> + return rc;
>> + }
>> + } else {
>> + drvdata->xsd_file_ops.xsf_dev_t = (dev_t)-1;
>> + }
>> + }
>> +
>> + ida_init(&map->ida);
>> +
>> + pr_info("%s registered successfully\n", drvname);
>> +
>> + return 0;
>> +}
>> +
>> +static void xrt_drv_unregister_driver(struct xrt_drv_map *map)
>> +{
>> + const char *drvname = XRT_DRVNAME(map->drv);
>> + struct xrt_subdev_drvdata *drvdata;
>> +
>> + ida_destroy(&map->ida);
>> +
>> + drvdata = xrt_drv_map2drvdata(map);
>> + if (drvdata && drvdata->xsd_file_ops.xsf_dev_t != (dev_t)-1) {
>> + unregister_chrdev_region(drvdata->xsd_file_ops.xsf_dev_t,
>> + XRT_MAX_DEVICE_NODES);
>> + }
>> +
>> + platform_driver_unregister(map->drv);
>> +
>> + pr_info("%s unregistered successfully\n", drvname);
>> +}
>> +
>> +int xleaf_register_driver(enum xrt_subdev_id id,
>> + struct platform_driver *drv,
>> + struct xrt_subdev_endpoints *eps)
>> +{
>> + struct xrt_drv_map *map;
>> + int rc;
>> +
>> + mutex_lock(&xrt_lib_lock);
>> +
>> + map = __xrt_drv_find_map_by_id(id);
>> + if (map) {
>> + mutex_unlock(&xrt_lib_lock);
>> + pr_err("Id %d already has a registered driver, 0x%p\n",
>> + id, map->drv);
>> + return -EEXIST;
>> + }
>> +
>> + map = kzalloc(sizeof(*map), GFP_KERNEL);
> ok
>> + if (!map) {
>> + mutex_unlock(&xrt_lib_lock);
>> + return -ENOMEM;
>> + }
>> + map->id = id;
>> + map->drv = drv;
>> + map->eps = eps;
>> +
>> + rc = xrt_drv_register_driver(map);
>> + if (rc) {
> ok
>> + kfree(map);
>> + mutex_unlock(&xrt_lib_lock);
>> + return rc;
>> + }
>> +
>> + list_add(&map->list, &xrt_drv_maps);
>> +
>> + mutex_unlock(&xrt_lib_lock);
>> +
>> + return 0;
>> +}
>> +EXPORT_SYMBOL_GPL(xleaf_register_driver);
>> +
>> +void xleaf_unregister_driver(enum xrt_subdev_id id)
>> +{
>> + struct xrt_drv_map *map;
>> +
>> + mutex_lock(&xrt_lib_lock);
>> +
>> + map = __xrt_drv_find_map_by_id(id);
>> + if (!map) {
>> + mutex_unlock(&xrt_lib_lock);
>> + pr_err("Id %d has no registered driver\n", id);
>> + return;
>> + }
>> +
>> + list_del(&map->list);
>> +
>> + mutex_unlock(&xrt_lib_lock);
>> +
>> + xrt_drv_unregister_driver(map);
>> + kfree(map);
>> +}
>> +EXPORT_SYMBOL_GPL(xleaf_unregister_driver);
>> +
>> +const char *xrt_drv_name(enum xrt_subdev_id id)
>> +{
>> + struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
>> +
>> + if (map)
>> + return XRT_DRVNAME(map->drv);
>> + return NULL;
>> +}
>> +
>> +int xrt_drv_get_instance(enum xrt_subdev_id id)
>> +{
>> + struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
>> +
>> + return ida_alloc_range(&map->ida, 0, XRT_MAX_DEVICE_NODES, GFP_KERNEL);
>> +}
>> +
>> +void xrt_drv_put_instance(enum xrt_subdev_id id, int instance)
>> +{
>> + struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
>> +
>> + ida_free(&map->ida, instance);
>> +}
>> +
>> +struct xrt_subdev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id)
>> +{
>> + struct xrt_drv_map *map = xrt_drv_find_map_by_id(id);
>> +
>> + return map ? map->eps : NULL;
>> +}
>> +
>> +/* Leaf driver's module init/fini callbacks. */
> add a comment to the effect that dynamically adding drivers/IDs is not supported.
Will do.
>> +static void (*leaf_init_fini_cbs[])(bool) = {
>> + group_leaf_init_fini,
>> + vsec_leaf_init_fini,
>> + devctl_leaf_init_fini,
>> + axigate_leaf_init_fini,
>> + icap_leaf_init_fini,
>> + calib_leaf_init_fini,
>> + clkfreq_leaf_init_fini,
>> + clock_leaf_init_fini,
>> + ucs_leaf_init_fini,
>> +};
>> +
>> +static __init int xrt_lib_init(void)
>> +{
>> + int i;
>> +
>> + xrt_class = class_create(THIS_MODULE, XRT_IPLIB_MODULE_NAME);
>> + if (IS_ERR(xrt_class))
>> + return PTR_ERR(xrt_class);
>> +
>> + for (i = 0; i < ARRAY_SIZE(leaf_init_fini_cbs); i++)
>> + leaf_init_fini_cbs[i](true);
>> + return 0;
>> +}
>> +
>> +static __exit void xrt_lib_fini(void)
>> +{
>> + struct xrt_drv_map *map;
>> + int i;
>> +
>> + for (i = 0; i < ARRAY_SIZE(leaf_init_fini_cbs); i++)
>> + leaf_init_fini_cbs[i](false);
>> +
>> + mutex_lock(&xrt_lib_lock);
>> +
>> + while (!list_empty(&xrt_drv_maps)) {
>> + map = list_first_entry_or_null(&xrt_drv_maps, struct xrt_drv_map, list);
>> + pr_err("Unloading module with %s still registered\n", XRT_DRVNAME(map->drv));
>> + list_del(&map->list);
>> + mutex_unlock(&xrt_lib_lock);
>> + xrt_drv_unregister_driver(map);
>> + kfree(map);
>> + mutex_lock(&xrt_lib_lock);
>> + }
>> +
>> + mutex_unlock(&xrt_lib_lock);
>> +
>> + class_destroy(xrt_class);
>> +}
>> +
>> +module_init(xrt_lib_init);
>> +module_exit(xrt_lib_fini);
>> +
>> +MODULE_VERSION(XRT_IPLIB_MODULE_VERSION);
>> +MODULE_AUTHOR("XRT Team <[email protected]>");
>> +MODULE_DESCRIPTION("Xilinx Alveo IP Lib driver");
>> +MODULE_LICENSE("GPL v2");
>> diff --git a/drivers/fpga/xrt/lib/lib-drv.h b/drivers/fpga/xrt/lib/lib-drv.h
>> new file mode 100644
>> index 000000000000..a94c58149cb4
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/lib-drv.h
>> @@ -0,0 +1,17 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#ifndef _LIB_DRV_H_
>> +#define _LIB_DRV_H_
>> +
>> +const char *xrt_drv_name(enum xrt_subdev_id id);
> bisectability may be / is still an issue.
See my comments at the beginning of this reply, please. As you pointed
out, there should be no issue since the make file is integrated as the
very last patch.
Thanks,
Max
>
> Tom
>
>> +int xrt_drv_get_instance(enum xrt_subdev_id id);
>> +void xrt_drv_put_instance(enum xrt_subdev_id id, int instance);
>> +struct xrt_subdev_endpoints *xrt_drv_get_endpoints(enum xrt_subdev_id id);
>> +
>> +#endif /* _LIB_DRV_H_ */
Hi Tom,
On 3/30/21 5:52 AM, Tom Rix wrote:
> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>> group driver that manages life cycle of a bunch of leaf driver instances
>> and bridges them with root.
>>
>> Signed-off-by: Sonal Santan <[email protected]>
>> Signed-off-by: Max Zhen <[email protected]>
>> Signed-off-by: Lizhi Hou <[email protected]>
>> ---
>> drivers/fpga/xrt/include/group.h | 25 +++
>> drivers/fpga/xrt/lib/group.c | 286 +++++++++++++++++++++++++++++++
>> 2 files changed, 311 insertions(+)
>> create mode 100644 drivers/fpga/xrt/include/group.h
>> create mode 100644 drivers/fpga/xrt/lib/group.c
>>
>> diff --git a/drivers/fpga/xrt/include/group.h b/drivers/fpga/xrt/include/group.h
>> new file mode 100644
>> index 000000000000..09e9d03f53fe
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/include/group.h
>> @@ -0,0 +1,25 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
> ok, removed generic boilerplate
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#ifndef _XRT_GROUP_H_
>> +#define _XRT_GROUP_H_
>> +
>> +#include "xleaf.h"
> move header to another patch
Yes, the header is moved to 04/20 patch.
>> +
>> +/*
>> + * Group driver leaf calls.
> ok
>> + */
>> +enum xrt_group_leaf_cmd {
>> + XRT_GROUP_GET_LEAF = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> ok
>> + XRT_GROUP_PUT_LEAF,
>> + XRT_GROUP_INIT_CHILDREN,
>> + XRT_GROUP_FINI_CHILDREN,
>> + XRT_GROUP_TRIGGER_EVENT,
>> +};
>> +
>> +#endif /* _XRT_GROUP_H_ */
>> diff --git a/drivers/fpga/xrt/lib/group.c b/drivers/fpga/xrt/lib/group.c
>> new file mode 100644
>> index 000000000000..7b8716569641
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/group.c
>> @@ -0,0 +1,286 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Xilinx Alveo FPGA Group Driver
>> + *
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#include <linux/mod_devicetable.h>
>> +#include <linux/platform_device.h>
>> +#include "xleaf.h"
>> +#include "subdev_pool.h"
>> +#include "group.h"
>> +#include "metadata.h"
>> +#include "lib-drv.h"
>> +
>> +#define XRT_GRP "xrt_group"
>> +
>> +struct xrt_group {
>> + struct platform_device *pdev;
>> + struct xrt_subdev_pool leaves;
>> + bool leaves_created;
>> + struct mutex lock; /* lock for group */
>> +};
>> +
>> +static int xrt_grp_root_cb(struct device *dev, void *parg,
>> + enum xrt_root_cmd cmd, void *arg)
> ok
>> +{
>> + int rc;
>> + struct platform_device *pdev =
>> + container_of(dev, struct platform_device, dev);
>> + struct xrt_group *xg = (struct xrt_group *)parg;
>> +
>> + switch (cmd) {
>> + case XRT_ROOT_GET_LEAF_HOLDERS: {
>> + struct xrt_root_get_holders *holders =
>> + (struct xrt_root_get_holders *)arg;
>> + rc = xrt_subdev_pool_get_holders(&xg->leaves,
>> + holders->xpigh_pdev,
>> + holders->xpigh_holder_buf,
>> + holders->xpigh_holder_buf_len);
>> + break;
>> + }
>> + default:
>> + /* Forward parent call to root. */
>> + rc = xrt_subdev_root_request(pdev, cmd, arg);
>> + break;
>> + }
>> +
>> + return rc;
>> +}
>> +
>> +/*
>> + * Cut subdev's dtb from group's dtb based on passed-in endpoint descriptor.
>> + * Return the subdev's dtb through dtbp, if found.
>> + */
>> +static int xrt_grp_cut_subdev_dtb(struct xrt_group *xg, struct xrt_subdev_endpoints *eps,
>> + char *grp_dtb, char **dtbp)
>> +{
>> + int ret, i, ep_count = 0;
>> + char *dtb = NULL;
>> +
>> + ret = xrt_md_create(DEV(xg->pdev), &dtb);
>> + if (ret)
>> + return ret;
>> +
>> + for (i = 0; eps->xse_names[i].ep_name || eps->xse_names[i].regmap_name; i++) {
>> + const char *ep_name = eps->xse_names[i].ep_name;
>> + const char *reg_name = eps->xse_names[i].regmap_name;
>> +
>> + if (!ep_name)
>> + xrt_md_get_compatible_endpoint(DEV(xg->pdev), grp_dtb, reg_name, &ep_name);
>> + if (!ep_name)
>> + continue;
>> +
>> + ret = xrt_md_copy_endpoint(DEV(xg->pdev), dtb, grp_dtb, ep_name, reg_name, NULL);
>> + if (ret)
>> + continue;
>> + xrt_md_del_endpoint(DEV(xg->pdev), grp_dtb, ep_name, reg_name);
>> + ep_count++;
>> + }
>> + /* Found enough endpoints, return the subdev's dtb. */
>> + if (ep_count >= eps->xse_min_ep) {
>> + *dtbp = dtb;
>> + return 0;
>> + }
>> +
>> + /* Cleanup - Restore all endpoints that have been deleted, if any. */
>> + if (ep_count > 0) {
>> + xrt_md_copy_endpoint(DEV(xg->pdev), grp_dtb, dtb,
>> + XRT_MD_NODE_ENDPOINTS, NULL, NULL);
>> + }
>> + vfree(dtb);
>> + *dtbp = NULL;
>> + return 0;
>> +}
>> +
>> +static int xrt_grp_create_leaves(struct xrt_group *xg)
>> +{
>> + struct xrt_subdev_platdata *pdata = DEV_PDATA(xg->pdev);
>> + struct xrt_subdev_endpoints *eps = NULL;
>> + int ret = 0, failed = 0;
>> + enum xrt_subdev_id did;
>> + char *grp_dtb = NULL;
>> + unsigned long mlen;
>> +
>> + if (!pdata)
>> + return -EINVAL;
> ok
>> +
>> + mlen = xrt_md_size(DEV(xg->pdev), pdata->xsp_dtb);
>> + if (mlen == XRT_MD_INVALID_LENGTH) {
>> + xrt_err(xg->pdev, "invalid dtb, len %lu", mlen);
>> + return -EINVAL;
>> + }
>> +
>> + mutex_lock(&xg->lock);
>> +
>> + if (xg->leaves_created) {
>> + mutex_unlock(&xg->lock);
> add a comment that this is not an error and/or error is handled elsewhere
Will do.
>> + return -EEXIST;
>> + }
>> +
>> + grp_dtb = vmalloc(mlen);
>> + if (!grp_dtb) {
>> + mutex_unlock(&xg->lock);
>> + return -ENOMEM;
> ok
>> + }
>> +
>> + /* Create all leaves based on dtb. */
>> + xrt_info(xg->pdev, "bringing up leaves...");
>> + memcpy(grp_dtb, pdata->xsp_dtb, mlen);
>> + for (did = 0; did < XRT_SUBDEV_NUM; did++) {
> ok
>> + eps = xrt_drv_get_endpoints(did);
>> + while (eps && eps->xse_names) {
>> + char *dtb = NULL;
>> +
>> + ret = xrt_grp_cut_subdev_dtb(xg, eps, grp_dtb, &dtb);
>> + if (ret) {
>> + failed++;
>> + xrt_err(xg->pdev, "failed to cut subdev dtb for drv %s: %d",
>> + xrt_drv_name(did), ret);
>> + }
>> + if (!dtb) {
>> + /*
>> + * No more dtb to cut or bad things happened for this instance,
>> + * switch to the next one.
>> + */
>> + eps++;
>> + continue;
>> + }
>> +
>> + /* Found a dtb for this instance, let's add it. */
>> + ret = xrt_subdev_pool_add(&xg->leaves, did, xrt_grp_root_cb, xg, dtb);
>> + if (ret < 0) {
>> + failed++;
>> + xrt_err(xg->pdev, "failed to add %s: %d", xrt_drv_name(did), ret);
> add a comment that this is not a fatal error and cleanup happens elsewhere
Will do.
Thanks,
Max
>
> Tom
>
>> + }
>> + vfree(dtb);
>> + /* Continue searching for the same instance from grp_dtb. */
>> + }
>> + }
>> +
>> + xg->leaves_created = true;
>> + vfree(grp_dtb);
>> + mutex_unlock(&xg->lock);
>> + return failed == 0 ? 0 : -ECHILD;
>> +}
>> +
>> +static void xrt_grp_remove_leaves(struct xrt_group *xg)
>> +{
>> + mutex_lock(&xg->lock);
>> +
>> + if (!xg->leaves_created) {
>> + mutex_unlock(&xg->lock);
>> + return;
>> + }
>> +
>> + xrt_info(xg->pdev, "tearing down leaves...");
>> + xrt_subdev_pool_fini(&xg->leaves);
>> + xg->leaves_created = false;
>> +
>> + mutex_unlock(&xg->lock);
>> +}
>> +
>> +static int xrt_grp_probe(struct platform_device *pdev)
>> +{
>> + struct xrt_group *xg;
>> +
>> + xrt_info(pdev, "probing...");
>> +
>> + xg = devm_kzalloc(&pdev->dev, sizeof(*xg), GFP_KERNEL);
>> + if (!xg)
>> + return -ENOMEM;
>> +
>> + xg->pdev = pdev;
>> + mutex_init(&xg->lock);
>> + xrt_subdev_pool_init(DEV(pdev), &xg->leaves);
>> + platform_set_drvdata(pdev, xg);
>> +
>> + return 0;
>> +}
>> +
>> +static int xrt_grp_remove(struct platform_device *pdev)
>> +{
>> + struct xrt_group *xg = platform_get_drvdata(pdev);
>> +
>> + xrt_info(pdev, "leaving...");
>> + xrt_grp_remove_leaves(xg);
>> + return 0;
>> +}
>> +
>> +static int xrt_grp_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
>> +{
>> + int rc = 0;
>> + struct xrt_group *xg = platform_get_drvdata(pdev);
>> +
>> + switch (cmd) {
>> + case XRT_XLEAF_EVENT:
>> + /* Simply forward to every child. */
>> + xrt_subdev_pool_handle_event(&xg->leaves,
>> + (struct xrt_event *)arg);
>> + break;
>> + case XRT_GROUP_GET_LEAF: {
>> + struct xrt_root_get_leaf *get_leaf =
>> + (struct xrt_root_get_leaf *)arg;
>> +
>> + rc = xrt_subdev_pool_get(&xg->leaves, get_leaf->xpigl_match_cb,
>> + get_leaf->xpigl_match_arg,
>> + DEV(get_leaf->xpigl_caller_pdev),
>> + &get_leaf->xpigl_tgt_pdev);
>> + break;
>> + }
>> + case XRT_GROUP_PUT_LEAF: {
>> + struct xrt_root_put_leaf *put_leaf =
>> + (struct xrt_root_put_leaf *)arg;
>> +
>> + rc = xrt_subdev_pool_put(&xg->leaves, put_leaf->xpipl_tgt_pdev,
>> + DEV(put_leaf->xpipl_caller_pdev));
>> + break;
>> + }
>> + case XRT_GROUP_INIT_CHILDREN:
>> + rc = xrt_grp_create_leaves(xg);
>> + break;
>> + case XRT_GROUP_FINI_CHILDREN:
>> + xrt_grp_remove_leaves(xg);
>> + break;
>> + case XRT_GROUP_TRIGGER_EVENT:
>> + xrt_subdev_pool_trigger_event(&xg->leaves, (enum xrt_events)(uintptr_t)arg);
>> + break;
>> + default:
>> + xrt_err(pdev, "unknown leaf cmd %d", cmd);
>> + rc = -EINVAL;
>> + break;
>> + }
>> + return rc;
>> +}
>> +
>> +static struct xrt_subdev_drvdata xrt_grp_data = {
>> + .xsd_dev_ops = {
>> + .xsd_leaf_call = xrt_grp_leaf_call,
>> + },
>> +};
>> +
>> +static const struct platform_device_id xrt_grp_id_table[] = {
>> + { XRT_GRP, (kernel_ulong_t)&xrt_grp_data },
>> + { },
>> +};
>> +
>> +static struct platform_driver xrt_group_driver = {
>> + .driver = {
>> + .name = XRT_GRP,
>> + },
>> + .probe = xrt_grp_probe,
>> + .remove = xrt_grp_remove,
>> + .id_table = xrt_grp_id_table,
>> +};
>> +
>> +void group_leaf_init_fini(bool init)
>> +{
>> + if (init)
>> + xleaf_register_driver(XRT_SUBDEV_GRP, &xrt_group_driver, NULL);
>> + else
>> + xleaf_unregister_driver(XRT_SUBDEV_GRP);
>> +}
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> Add clock driver. Clock is a hardware function discovered by walking
> xclbin metadata. A platform device node will be created for it. Other
> parts of the driver configure the clock through the clock driver.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/xleaf/clock.h | 29 ++
> drivers/fpga/xrt/lib/xleaf/clock.c | 669 +++++++++++++++++++++++++
> 2 files changed, 698 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xleaf/clock.h
> create mode 100644 drivers/fpga/xrt/lib/xleaf/clock.c
>
> diff --git a/drivers/fpga/xrt/include/xleaf/clock.h b/drivers/fpga/xrt/include/xleaf/clock.h
> new file mode 100644
> index 000000000000..6858473fd096
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf/clock.h
> @@ -0,0 +1,29 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#ifndef _XRT_CLOCK_H_
> +#define _XRT_CLOCK_H_
> +
> +#include "xleaf.h"
> +#include <linux/xrt/xclbin.h>
> +
> +/*
> + * CLOCK driver leaf calls.
> + */
> +enum xrt_clock_leaf_cmd {
> + XRT_CLOCK_SET = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> + XRT_CLOCK_GET,
> + XRT_CLOCK_VERIFY,
> +};
> +
> +struct xrt_clock_get {
> + u16 freq;
> + u32 freq_cnter;
> +};
> +
> +#endif /* _XRT_CLOCK_H_ */
> diff --git a/drivers/fpga/xrt/lib/xleaf/clock.c b/drivers/fpga/xrt/lib/xleaf/clock.c
> new file mode 100644
> index 000000000000..071485e4bf65
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/clock.c
> @@ -0,0 +1,669 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA Clock Wizard Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou<[email protected]>
> + * Sonal Santan <[email protected]>
> + * David Zhang <[email protected]>
> + */
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/platform_device.h>
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/regmap.h>
> +#include <linux/io.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/clock.h"
> +#include "xleaf/clkfreq.h"
> +
> +/* XRT_CLOCK_MAX_NUM_CLOCKS should be a concept from XCLBIN_ in the future */
> +#define XRT_CLOCK_MAX_NUM_CLOCKS 4
> +#define XRT_CLOCK_STATUS_MASK 0xffff
> +#define XRT_CLOCK_STATUS_MEASURE_START 0x1
> +#define XRT_CLOCK_STATUS_MEASURE_DONE 0x2
ok
> +
> +#define XRT_CLOCK_STATUS_REG 0x4
> +#define XRT_CLOCK_CLKFBOUT_REG 0x200
> +#define XRT_CLOCK_CLKOUT0_REG 0x208
> +#define XRT_CLOCK_LOAD_SADDR_SEN_REG 0x25C
> +#define XRT_CLOCK_DEFAULT_EXPIRE_SECS 1
> +
> +#define CLOCK_ERR(clock, fmt, arg...) \
> + xrt_err((clock)->pdev, fmt "\n", ##arg)
> +#define CLOCK_WARN(clock, fmt, arg...) \
> + xrt_warn((clock)->pdev, fmt "\n", ##arg)
> +#define CLOCK_INFO(clock, fmt, arg...) \
> + xrt_info((clock)->pdev, fmt "\n", ##arg)
> +#define CLOCK_DBG(clock, fmt, arg...) \
> + xrt_dbg((clock)->pdev, fmt "\n", ##arg)
> +
> +#define XRT_CLOCK "xrt_clock"
> +
> +static const struct regmap_config clock_regmap_config = {
> + .reg_bits = 32,
> + .val_bits = 32,
> + .reg_stride = 4,
> + .max_register = 0x1000,
> +};
> +
> +struct clock {
> + struct platform_device *pdev;
> + struct regmap *regmap;
> + struct mutex clock_lock; /* clock dev lock */
> +
> + const char *clock_ep_name;
> +};
> +
> +/*
> + * Precomputed table with config0 and config2 register values together with
> + * target frequency. The steps are approximately 5 MHz apart. The table
> + * is generated by the platform creation tool.
ok
> + */
> +static const struct xmgmt_ocl_clockwiz {
> + /* target frequency */
> + u16 ocl;
> + /* config0 register */
> + u32 config0;
> + /* config2 register */
> + u32 config2;
> +} frequency_table[] = {
> + /*1275.000*/ { 10, 0x02EE0C01, 0x0001F47F },
ok
> + /*1575.000*/ { 15, 0x02EE0F01, 0x00000069},
> + /*1600.000*/ { 20, 0x00001001, 0x00000050},
> + /*1600.000*/ { 25, 0x00001001, 0x00000040},
> + /*1575.000*/ { 30, 0x02EE0F01, 0x0001F434},
> + /*1575.000*/ { 35, 0x02EE0F01, 0x0000002D},
> + /*1600.000*/ { 40, 0x00001001, 0x00000028},
> + /*1575.000*/ { 45, 0x02EE0F01, 0x00000023},
> + /*1600.000*/ { 50, 0x00001001, 0x00000020},
> + /*1512.500*/ { 55, 0x007D0F01, 0x0001F41B},
> + /*1575.000*/ { 60, 0x02EE0F01, 0x0000FA1A},
> + /*1462.500*/ { 65, 0x02710E01, 0x0001F416},
> + /*1575.000*/ { 70, 0x02EE0F01, 0x0001F416},
> + /*1575.000*/ { 75, 0x02EE0F01, 0x00000015},
> + /*1600.000*/ { 80, 0x00001001, 0x00000014},
> + /*1487.500*/ { 85, 0x036B0E01, 0x0001F411},
> + /*1575.000*/ { 90, 0x02EE0F01, 0x0001F411},
> + /*1425.000*/ { 95, 0x00FA0E01, 0x0000000F},
> + /*1600.000*/ { 100, 0x00001001, 0x00000010},
> + /*1575.000*/ { 105, 0x02EE0F01, 0x0000000F},
> + /*1512.500*/ { 110, 0x007D0F01, 0x0002EE0D},
> + /*1437.500*/ { 115, 0x01770E01, 0x0001F40C},
> + /*1575.000*/ { 120, 0x02EE0F01, 0x00007D0D},
> + /*1562.500*/ { 125, 0x02710F01, 0x0001F40C},
> + /*1462.500*/ { 130, 0x02710E01, 0x0000FA0B},
> + /*1350.000*/ { 135, 0x01F40D01, 0x0000000A},
> + /*1575.000*/ { 140, 0x02EE0F01, 0x0000FA0B},
> + /*1450.000*/ { 145, 0x01F40E01, 0x0000000A},
> + /*1575.000*/ { 150, 0x02EE0F01, 0x0001F40A},
> + /*1550.000*/ { 155, 0x01F40F01, 0x0000000A},
> + /*1600.000*/ { 160, 0x00001001, 0x0000000A},
> + /*1237.500*/ { 165, 0x01770C01, 0x0001F407},
> + /*1487.500*/ { 170, 0x036B0E01, 0x0002EE08},
> + /*1575.000*/ { 175, 0x02EE0F01, 0x00000009},
> + /*1575.000*/ { 180, 0x02EE0F01, 0x0002EE08},
> + /*1387.500*/ { 185, 0x036B0D01, 0x0001F407},
> + /*1425.000*/ { 190, 0x00FA0E01, 0x0001F407},
> + /*1462.500*/ { 195, 0x02710E01, 0x0001F407},
> + /*1600.000*/ { 200, 0x00001001, 0x00000008},
> + /*1537.500*/ { 205, 0x01770F01, 0x0001F407},
> + /*1575.000*/ { 210, 0x02EE0F01, 0x0001F407},
> + /*1075.000*/ { 215, 0x02EE0A01, 0x00000005},
> + /*1512.500*/ { 220, 0x007D0F01, 0x00036B06},
> + /*1575.000*/ { 225, 0x02EE0F01, 0x00000007},
> + /*1437.500*/ { 230, 0x01770E01, 0x0000FA06},
> + /*1175.000*/ { 235, 0x02EE0B01, 0x00000005},
> + /*1500.000*/ { 240, 0x00000F01, 0x0000FA06},
> + /*1225.000*/ { 245, 0x00FA0C01, 0x00000005},
> + /*1562.500*/ { 250, 0x02710F01, 0x0000FA06},
> + /*1275.000*/ { 255, 0x02EE0C01, 0x00000005},
> + /*1462.500*/ { 260, 0x02710E01, 0x00027105},
> + /*1325.000*/ { 265, 0x00FA0D01, 0x00000005},
> + /*1350.000*/ { 270, 0x01F40D01, 0x00000005},
> + /*1512.500*/ { 275, 0x007D0F01, 0x0001F405},
> + /*1575.000*/ { 280, 0x02EE0F01, 0x00027105},
> + /*1425.000*/ { 285, 0x00FA0E01, 0x00000005},
> + /*1450.000*/ { 290, 0x01F40E01, 0x00000005},
> + /*1475.000*/ { 295, 0x02EE0E01, 0x00000005},
> + /*1575.000*/ { 300, 0x02EE0F01, 0x0000FA05},
> + /*1525.000*/ { 305, 0x00FA0F01, 0x00000005},
> + /*1550.000*/ { 310, 0x01F40F01, 0x00000005},
> + /*1575.000*/ { 315, 0x02EE0F01, 0x00000005},
> + /*1600.000*/ { 320, 0x00001001, 0x00000005},
> + /*1462.500*/ { 325, 0x02710E01, 0x0001F404},
> + /*1237.500*/ { 330, 0x01770C01, 0x0002EE03},
> + /* 837.500*/ { 335, 0x01770801, 0x0001F402},
> + /*1487.500*/ { 340, 0x036B0E01, 0x00017704},
> + /* 862.500*/ { 345, 0x02710801, 0x0001F402},
> + /*1575.000*/ { 350, 0x02EE0F01, 0x0001F404},
> + /* 887.500*/ { 355, 0x036B0801, 0x0001F402},
> + /*1575.000*/ { 360, 0x02EE0F01, 0x00017704},
> + /* 912.500*/ { 365, 0x007D0901, 0x0001F402},
> + /*1387.500*/ { 370, 0x036B0D01, 0x0002EE03},
> + /*1500.000*/ { 375, 0x00000F01, 0x00000004},
> + /*1425.000*/ { 380, 0x00FA0E01, 0x0002EE03},
> + /* 962.500*/ { 385, 0x02710901, 0x0001F402},
> + /*1462.500*/ { 390, 0x02710E01, 0x0002EE03},
> + /* 987.500*/ { 395, 0x036B0901, 0x0001F402},
> + /*1600.000*/ { 400, 0x00001001, 0x00000004},
> + /*1012.500*/ { 405, 0x007D0A01, 0x0001F402},
> + /*1537.500*/ { 410, 0x01770F01, 0x0002EE03},
> + /*1037.500*/ { 415, 0x01770A01, 0x0001F402},
> + /*1575.000*/ { 420, 0x02EE0F01, 0x0002EE03},
> + /*1487.500*/ { 425, 0x036B0E01, 0x0001F403},
> + /*1075.000*/ { 430, 0x02EE0A01, 0x0001F402},
> + /*1087.500*/ { 435, 0x036B0A01, 0x0001F402},
> + /*1375.000*/ { 440, 0x02EE0D01, 0x00007D03},
> + /*1112.500*/ { 445, 0x007D0B01, 0x0001F402},
> + /*1575.000*/ { 450, 0x02EE0F01, 0x0001F403},
> + /*1137.500*/ { 455, 0x01770B01, 0x0001F402},
> + /*1437.500*/ { 460, 0x01770E01, 0x00007D03},
> + /*1162.500*/ { 465, 0x02710B01, 0x0001F402},
> + /*1175.000*/ { 470, 0x02EE0B01, 0x0001F402},
> + /*1425.000*/ { 475, 0x00FA0E01, 0x00000003},
> + /*1500.000*/ { 480, 0x00000F01, 0x00007D03},
> + /*1212.500*/ { 485, 0x007D0C01, 0x0001F402},
> + /*1225.000*/ { 490, 0x00FA0C01, 0x0001F402},
> + /*1237.500*/ { 495, 0x01770C01, 0x0001F402},
> + /*1562.500*/ { 500, 0x02710F01, 0x00007D03},
> + /*1262.500*/ { 505, 0x02710C01, 0x0001F402},
> + /*1275.000*/ { 510, 0x02EE0C01, 0x0001F402},
> + /*1287.500*/ { 515, 0x036B0C01, 0x0001F402},
> + /*1300.000*/ { 520, 0x00000D01, 0x0001F402},
> + /*1575.000*/ { 525, 0x02EE0F01, 0x00000003},
> + /*1325.000*/ { 530, 0x00FA0D01, 0x0001F402},
> + /*1337.500*/ { 535, 0x01770D01, 0x0001F402},
> + /*1350.000*/ { 540, 0x01F40D01, 0x0001F402},
> + /*1362.500*/ { 545, 0x02710D01, 0x0001F402},
> + /*1512.500*/ { 550, 0x007D0F01, 0x0002EE02},
> + /*1387.500*/ { 555, 0x036B0D01, 0x0001F402},
> + /*1400.000*/ { 560, 0x00000E01, 0x0001F402},
> + /*1412.500*/ { 565, 0x007D0E01, 0x0001F402},
> + /*1425.000*/ { 570, 0x00FA0E01, 0x0001F402},
> + /*1437.500*/ { 575, 0x01770E01, 0x0001F402},
> + /*1450.000*/ { 580, 0x01F40E01, 0x0001F402},
> + /*1462.500*/ { 585, 0x02710E01, 0x0001F402},
> + /*1475.000*/ { 590, 0x02EE0E01, 0x0001F402},
> + /*1487.500*/ { 595, 0x036B0E01, 0x0001F402},
> + /*1575.000*/ { 600, 0x02EE0F01, 0x00027102},
> + /*1512.500*/ { 605, 0x007D0F01, 0x0001F402},
> + /*1525.000*/ { 610, 0x00FA0F01, 0x0001F402},
> + /*1537.500*/ { 615, 0x01770F01, 0x0001F402},
> + /*1550.000*/ { 620, 0x01F40F01, 0x0001F402},
> + /*1562.500*/ { 625, 0x02710F01, 0x0001F402},
> + /*1575.000*/ { 630, 0x02EE0F01, 0x0001F402},
> + /*1587.500*/ { 635, 0x036B0F01, 0x0001F402},
> + /*1600.000*/ { 640, 0x00001001, 0x0001F402},
> + /*1290.000*/ { 645, 0x01F44005, 0x00000002},
> + /*1462.500*/ { 650, 0x02710E01, 0x0000FA02}
> +};
> +
> +static u32 find_matching_freq_config(unsigned short freq,
> + const struct xmgmt_ocl_clockwiz *table,
> + int size)
> +{
> + u32 end = size - 1;
> + u32 start = 0;
> + u32 idx;
> +
> + if (freq < table[0].ocl)
> + return 0;
> +
> + if (freq > table[size - 1].ocl)
> + return size - 1;
ok
> +
> + while (start < end) {
> + idx = (start + end) / 2;
ok
> + if (freq == table[idx].ocl)
> + break;
> + if (freq < table[idx].ocl)
> + end = idx;
> + else
> + start = idx + 1;
> + }
> + if (freq < table[idx].ocl)
> + idx--;
> +
> + return idx;
> +}
> +
> +static u32 find_matching_freq(u32 freq,
> + const struct xmgmt_ocl_clockwiz *freq_table,
> + int freq_table_size)
> +{
> + int idx = find_matching_freq_config(freq, freq_table, freq_table_size);
> +
> + return freq_table[idx].ocl;
> +}
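A side note for future readers: the lookup above clamps out-of-range requests to the table ends and otherwise returns the closest entry at or below the requested frequency. A minimal user-space sketch of that behavior (the demo table and all names here are mine, not from the patch):

```c
#include <assert.h>
#include <stddef.h>

/* demo stand-in for the driver's frequency table; values are invented */
struct freq_entry {
	unsigned short ocl;	/* target frequency in MHz */
};

const struct freq_entry demo_table[] = { { 10 }, { 15 }, { 20 }, { 25 } };

/* index of the largest entry with ->ocl <= freq, clamped to the table */
size_t find_floor_idx(unsigned short freq, const struct freq_entry *table,
		      size_t size)
{
	size_t start = 0, end = size - 1, idx = 0;

	if (freq <= table[0].ocl)
		return 0;
	if (freq >= table[size - 1].ocl)
		return size - 1;

	while (start < end) {
		idx = (start + end) / 2;
		if (freq == table[idx].ocl)
			return idx;
		if (freq < table[idx].ocl)
			end = idx;
		else
			start = idx + 1;
	}
	if (freq < table[idx].ocl)	/* overshot: step back to the floor */
		idx--;
	return idx;
}
```

With the demo table, a request for 17 MHz lands on the 15 MHz entry and anything above 25 MHz clamps to the last entry.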
> +
> +static inline int clock_wiz_busy(struct clock *clock, int cycle, int interval)
> +{
> + u32 val = 0;
> + int count;
> + int ret;
> +
> + for (count = 0; count < cycle; count++) {
> + ret = regmap_read(clock->regmap, XRT_CLOCK_STATUS_REG, &val);
> + if (ret) {
> + CLOCK_ERR(clock, "read status failed %d", ret);
> + return ret;
> + }
> + if (val == 1)
> + break;
ok
> +
> + mdelay(interval);
> + }
> + if (val != 1) {
> + CLOCK_ERR(clock, "clockwiz is (%u) busy after %d ms",
> + val, cycle * interval);
> + return -EBUSY;
> + }
> +
> + return 0;
> +}
> +
> +static int get_freq(struct clock *clock, u16 *freq)
> +{
ok, #define removed
> + u32 mul_frac0 = 0;
> + u32 div_frac1 = 0;
> + u32 mul0, div0;
> + u64 input;
> + u32 div1;
> + u32 val;
> + int ret;
> +
> + WARN_ON(!mutex_is_locked(&clock->clock_lock));
> +
> + ret = regmap_read(clock->regmap, XRT_CLOCK_STATUS_REG, &val);
ok, regs defined above
> + if (ret) {
> + CLOCK_ERR(clock, "read status failed %d", ret);
> + return ret;
> + }
> +
> + if ((val & 0x1) == 0) {
> + CLOCK_ERR(clock, "clockwiz is busy %x", val);
> + *freq = 0;
> + return -EBUSY;
> + }
> +
> + ret = regmap_read(clock->regmap, XRT_CLOCK_CLKFBOUT_REG, &val);
> + if (ret) {
> + CLOCK_ERR(clock, "read clkfbout failed %d", ret);
> + return ret;
> + }
> +
> + div0 = val & 0xff;
> + mul0 = (val & 0xff00) >> 8;
> + if (val & BIT(26)) {
> + mul_frac0 = val >> 16;
> + mul_frac0 &= 0x3ff;
> + }
> +
> + /*
> + * Multiply both the numerator (mul0) and the denominator (div0) by 1000
> + * to account for the fractional portion of the multiplier
> + */
> + mul0 *= 1000;
> + mul0 += mul_frac0;
> + div0 *= 1000;
> +
> + ret = regmap_read(clock->regmap, XRT_CLOCK_CLKOUT0_REG, &val);
> + if (ret) {
> + CLOCK_ERR(clock, "read clkout0 failed %d", ret);
> + return ret;
> + }
> +
> + div1 = val & 0xff;
> + if (val & BIT(18)) {
> + div_frac1 = val >> 8;
> + div_frac1 &= 0x3ff;
> + }
> +
> + /*
> + * Multiply both the numerator (mul0) and the denominator (div1) by
> + * 1000 to account for the fractional portion of the divider
> + */
> + div1 *= 1000;
> + div1 += div_frac1;
> + div0 *= div1;
> + mul0 *= 1000;
> + if (div0 == 0) {
> + CLOCK_ERR(clock, "clockwiz 0 divider");
> + return -EINVAL;
> + }
> +
> + input = mul0 * 100;
> + do_div(input, div0);
> + *freq = (u16)input;
> +
> + return 0;
> +}
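The fixed-point arithmetic in get_freq() is easier to follow with concrete numbers: out = 100 MHz * M / (D0 * D1), where the multiplier M and the output divider D1 each carry a 1/1000 fractional part. A user-space sketch under that assumption (the function name, the 100 MHz reference input, and the sample values in the test are invented, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * out = 100 MHz * M / (D0 * D1). M and D1 carry a 1/1000 fractional
 * part, so both sides of the division are scaled by 1000, the same way
 * get_freq() does it.
 */
uint16_t clockwiz_output_mhz(uint32_t mul_int, uint32_t mul_frac,
			     uint32_t div0_int,
			     uint32_t div1_int, uint32_t div1_frac)
{
	uint64_t mul = (uint64_t)mul_int * 1000 + mul_frac;    /* M  x1000 */
	uint64_t div0 = (uint64_t)div0_int * 1000;             /* D0 x1000 */
	uint64_t div1 = (uint64_t)div1_int * 1000 + div1_frac; /* D1 x1000 */
	uint64_t den = div0 * div1;                            /* x10^6 */

	if (den == 0)
		return 0;

	/* numerator needs x10^6 total to match, hence the extra x1000 */
	return (uint16_t)(mul * 1000 * 100 / den);
}
```

For example, M = 12.75, D0 = 1, D1 = 25.5 gives 100 * 12.75 / 25.5 = 50 MHz, with no floating point involved.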
> +
> +static int set_freq(struct clock *clock, u16 freq)
> +{
> + int err = 0;
> + u32 idx = 0;
> + u32 val = 0;
> + u32 config;
> +
> + mutex_lock(&clock->clock_lock);
> + idx = find_matching_freq_config(freq, frequency_table,
> + ARRAY_SIZE(frequency_table));
> +
> + CLOCK_INFO(clock, "New: %d Mhz", freq);
> + err = clock_wiz_busy(clock, 20, 50);
> + if (err)
> + goto fail;
> +
> + config = frequency_table[idx].config0;
> + err = regmap_write(clock->regmap, XRT_CLOCK_CLKFBOUT_REG, config);
> + if (err) {
> + CLOCK_ERR(clock, "write clkfbout failed %d", err);
> + goto fail;
> + }
> +
> + config = frequency_table[idx].config2;
> + err = regmap_write(clock->regmap, XRT_CLOCK_CLKOUT0_REG, config);
> + if (err) {
> + CLOCK_ERR(clock, "write clkout0 failed %d", err);
> + goto fail;
> + }
> +
> + mdelay(10);
> + err = regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 7);
> + if (err) {
> + CLOCK_ERR(clock, "write load_saddr_sen failed %d", err);
> + goto fail;
> + }
> +
> + mdelay(1);
> + err = regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 2);
> + if (err) {
> + CLOCK_ERR(clock, "write saddr failed %d", err);
> + goto fail;
> + }
> +
> + CLOCK_INFO(clock, "clockwiz waiting for locked signal");
> +
> + err = clock_wiz_busy(clock, 100, 100);
> + if (err) {
> + CLOCK_ERR(clock, "clockwiz MMCM/PLL did not lock");
> + /* restore */
> + regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 4);
> + mdelay(10);
> + regmap_write(clock->regmap, XRT_CLOCK_LOAD_SADDR_SEN_REG, 0);
> + goto fail;
> + }
> + regmap_read(clock->regmap, XRT_CLOCK_CLKFBOUT_REG, &val);
> + CLOCK_INFO(clock, "clockwiz CONFIG(0) 0x%x", val);
> + regmap_read(clock->regmap, XRT_CLOCK_CLKOUT0_REG, &val);
> + CLOCK_INFO(clock, "clockwiz CONFIG(2) 0x%x", val);
> +
> +fail:
> + mutex_unlock(&clock->clock_lock);
> + return err;
> +}
> +
> +static int get_freq_counter(struct clock *clock, u32 *freq)
> +{
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(clock->pdev);
> + struct platform_device *pdev = clock->pdev;
> + struct platform_device *counter_leaf;
> + const void *counter;
ok
> + int err;
> +
> + WARN_ON(!mutex_is_locked(&clock->clock_lock));
> +
> + err = xrt_md_get_prop(DEV(pdev), pdata->xsp_dtb, clock->clock_ep_name,
> + NULL, XRT_MD_PROP_CLK_CNT, &counter, NULL);
> + if (err) {
> + xrt_err(pdev, "no counter specified");
> + return err;
> + }
> +
> + counter_leaf = xleaf_get_leaf_by_epname(pdev, counter);
> + if (!counter_leaf) {
> + xrt_err(pdev, "can't find counter");
> + return -ENOENT;
> + }
> +
> + err = xleaf_call(counter_leaf, XRT_CLKFREQ_READ, freq);
> + if (err)
> + xrt_err(pdev, "can't read counter");
> + xleaf_put_leaf(clock->pdev, counter_leaf);
> +
> + return err;
> +}
> +
> +static int clock_get_freq(struct clock *clock, u16 *freq, u32 *freq_cnter)
> +{
> + int err = 0;
> +
> + mutex_lock(&clock->clock_lock);
> +
> + if (err == 0 && freq)
> + err = get_freq(clock, freq);
> +
> + if (err == 0 && freq_cnter)
> + err = get_freq_counter(clock, freq_cnter);
> +
> + mutex_unlock(&clock->clock_lock);
> + return err;
> +}
> +
ok, clock_set_freq removed
> +static int clock_verify_freq(struct clock *clock)
> +{
> + u32 lookup_freq, clock_freq_counter, request_in_khz, tolerance;
> + int err = 0;
> + u16 freq;
> +
> + mutex_lock(&clock->clock_lock);
> +
> + err = get_freq(clock, &freq);
> + if (err) {
> + xrt_err(clock->pdev, "get freq failed, %d", err);
> + goto end;
> + }
> +
> + err = get_freq_counter(clock, &clock_freq_counter);
> + if (err) {
> + xrt_err(clock->pdev, "get freq counter failed, %d", err);
> + goto end;
> + }
> +
> + lookup_freq = find_matching_freq(freq, frequency_table,
> + ARRAY_SIZE(frequency_table));
> + request_in_khz = lookup_freq * 1000;
> + tolerance = lookup_freq * 50;
> + if (tolerance < abs(clock_freq_counter - request_in_khz)) {
> + CLOCK_ERR(clock,
> + "set clock(%s) failed, request %ukhz, actual %dkhz",
> + clock->clock_ep_name, request_in_khz, clock_freq_counter);
> + err = -EDOM;
> + } else {
> + CLOCK_INFO(clock, "verified clock (%s)", clock->clock_ep_name);
> + }
> +
> +end:
> + mutex_unlock(&clock->clock_lock);
> + return err;
> +}
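For reference, the tolerance check above is a 5% band: lookup_freq * 50 kHz of slack against a request of lookup_freq * 1000 kHz. Pulled out as a plain helper for illustration (the helper name is mine):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Acceptance band used by clock_verify_freq(): the measured counter
 * value (kHz) must be within 5% of the programmed table frequency (MHz).
 */
bool clock_within_tolerance(uint32_t lookup_mhz, uint32_t counter_khz)
{
	uint32_t request_khz = lookup_mhz * 1000;
	uint32_t tolerance_khz = lookup_mhz * 50;	/* 5% of the request */
	uint32_t delta = counter_khz > request_khz ?
		counter_khz - request_khz : request_khz - counter_khz;

	return delta <= tolerance_khz;
}
```

So a 300 MHz clock verifies as long as the counter reads between 285000 and 315000 kHz.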
> +
> +static int clock_init(struct clock *clock)
> +{
> + struct xrt_subdev_platdata *pdata = DEV_PDATA(clock->pdev);
> + const u16 *freq;
> + int err = 0;
> +
> + err = xrt_md_get_prop(DEV(clock->pdev), pdata->xsp_dtb,
> + clock->clock_ep_name, NULL, XRT_MD_PROP_CLK_FREQ,
> + (const void **)&freq, NULL);
> + if (err) {
> + xrt_info(clock->pdev, "no default freq");
> + return 0;
> + }
> +
> + err = set_freq(clock, be16_to_cpu(*freq));
> +
> + return err;
> +}
> +
> +static ssize_t freq_show(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> + struct clock *clock = platform_get_drvdata(to_platform_device(dev));
> + ssize_t count;
> + u16 freq = 0;
> +
> + count = clock_get_freq(clock, &freq, NULL);
> + if (count < 0)
> + return count;
> +
> + count = snprintf(buf, 64, "%u\n", freq);
ok
Thanks for the changes
Reviewed-by: Tom Rix <[email protected]>
> +
> + return count;
> +}
> +static DEVICE_ATTR_RO(freq);
> +
> +static struct attribute *clock_attrs[] = {
> + &dev_attr_freq.attr,
> + NULL,
> +};
> +
> +static struct attribute_group clock_attr_group = {
> + .attrs = clock_attrs,
> +};
> +
> +static int
> +xrt_clock_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
> +{
> + struct clock *clock;
> + int ret = 0;
> +
> + clock = platform_get_drvdata(pdev);
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + /* Does not handle any event. */
> + break;
> + case XRT_CLOCK_SET: {
> + u16 freq = (u16)(uintptr_t)arg;
> +
> + ret = set_freq(clock, freq);
> + break;
> + }
> + case XRT_CLOCK_VERIFY:
> + ret = clock_verify_freq(clock);
> + break;
> + case XRT_CLOCK_GET: {
> + struct xrt_clock_get *get =
> + (struct xrt_clock_get *)arg;
> +
> + ret = clock_get_freq(clock, &get->freq, &get->freq_cnter);
> + break;
> + }
> + default:
> + xrt_err(pdev, "unsupported cmd %d", cmd);
> + return -EINVAL;
> + }
> +
> + return ret;
> +}
> +
> +static int clock_remove(struct platform_device *pdev)
> +{
> + sysfs_remove_group(&pdev->dev.kobj, &clock_attr_group);
> +
> + return 0;
> +}
> +
> +static int clock_probe(struct platform_device *pdev)
> +{
> + struct clock *clock = NULL;
> + void __iomem *base = NULL;
> + struct resource *res;
> + int ret;
> +
> + clock = devm_kzalloc(&pdev->dev, sizeof(*clock), GFP_KERNEL);
> + if (!clock)
> + return -ENOMEM;
> +
> + platform_set_drvdata(pdev, clock);
> + clock->pdev = pdev;
> + mutex_init(&clock->clock_lock);
> +
> + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + if (!res) {
> + ret = -EINVAL;
> + goto failed;
> + }
> +
> + base = devm_ioremap_resource(&pdev->dev, res);
> + if (IS_ERR(base)) {
> + ret = PTR_ERR(base);
> + goto failed;
> + }
> +
> + clock->regmap = devm_regmap_init_mmio(&pdev->dev, base, &clock_regmap_config);
> + if (IS_ERR(clock->regmap)) {
> + CLOCK_ERR(clock, "regmap %pR failed", res);
> + ret = PTR_ERR(clock->regmap);
> + goto failed;
> + }
> + clock->clock_ep_name = res->name;
> +
> + ret = clock_init(clock);
> + if (ret)
> + goto failed;
> +
> + ret = sysfs_create_group(&pdev->dev.kobj, &clock_attr_group);
> + if (ret) {
> + CLOCK_ERR(clock, "create clock attrs failed: %d", ret);
> + goto failed;
> + }
> +
> + CLOCK_INFO(clock, "successfully initialized Clock subdev");
> +
> + return 0;
> +
> +failed:
> + return ret;
> +}
> +
> +static struct xrt_subdev_endpoints xrt_clock_endpoints[] = {
> + {
> + .xse_names = (struct xrt_subdev_ep_names[]) {
> + { .regmap_name = "clkwiz" },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_subdev_drvdata xrt_clock_data = {
> + .xsd_dev_ops = {
> + .xsd_leaf_call = xrt_clock_leaf_call,
> + },
> +};
> +
> +static const struct platform_device_id xrt_clock_table[] = {
> + { XRT_CLOCK, (kernel_ulong_t)&xrt_clock_data },
> + { },
> +};
> +
> +static struct platform_driver xrt_clock_driver = {
> + .driver = {
> + .name = XRT_CLOCK,
> + },
> + .probe = clock_probe,
> + .remove = clock_remove,
> + .id_table = xrt_clock_table,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_CLOCK, clock);
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> Add clock frequency counter driver. The clock frequency counter is
> a hardware function discovered by walking xclbin metadata. A platform
> device node will be created for it. Other parts of the driver can read
> the actual clock frequency through the clock frequency counter driver.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/xleaf/clkfreq.h | 21 ++
> drivers/fpga/xrt/lib/xleaf/clkfreq.c | 240 +++++++++++++++++++++++
> 2 files changed, 261 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xleaf/clkfreq.h
> create mode 100644 drivers/fpga/xrt/lib/xleaf/clkfreq.c
>
> diff --git a/drivers/fpga/xrt/include/xleaf/clkfreq.h b/drivers/fpga/xrt/include/xleaf/clkfreq.h
> new file mode 100644
> index 000000000000..005441d5df78
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf/clkfreq.h
> @@ -0,0 +1,21 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#ifndef _XRT_CLKFREQ_H_
> +#define _XRT_CLKFREQ_H_
> +
> +#include "xleaf.h"
> +
> +/*
> + * CLKFREQ driver leaf calls.
> + */
> +enum xrt_clkfreq_leaf_cmd {
> + XRT_CLKFREQ_READ = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> +};
> +
> +#endif /* _XRT_CLKFREQ_H_ */
> diff --git a/drivers/fpga/xrt/lib/xleaf/clkfreq.c b/drivers/fpga/xrt/lib/xleaf/clkfreq.c
> new file mode 100644
> index 000000000000..49473adde3fd
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/clkfreq.c
> @@ -0,0 +1,240 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA Clock Frequency Counter Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/platform_device.h>
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/regmap.h>
> +#include <linux/io.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/clkfreq.h"
> +
> +#define CLKFREQ_ERR(clkfreq, fmt, arg...) \
> + xrt_err((clkfreq)->pdev, fmt "\n", ##arg)
> +#define CLKFREQ_WARN(clkfreq, fmt, arg...) \
> + xrt_warn((clkfreq)->pdev, fmt "\n", ##arg)
> +#define CLKFREQ_INFO(clkfreq, fmt, arg...) \
> + xrt_info((clkfreq)->pdev, fmt "\n", ##arg)
> +#define CLKFREQ_DBG(clkfreq, fmt, arg...) \
> + xrt_dbg((clkfreq)->pdev, fmt "\n", ##arg)
> +
> +#define XRT_CLKFREQ "xrt_clkfreq"
> +
> +#define XRT_CLKFREQ_CONTROL_STATUS_MASK 0xffff
> +
> +#define XRT_CLKFREQ_CONTROL_START 0x1
> +#define XRT_CLKFREQ_CONTROL_DONE 0x2
> +#define XRT_CLKFREQ_V5_CLK0_ENABLED 0x10000
> +
> +#define XRT_CLKFREQ_CONTROL_REG 0
> +#define XRT_CLKFREQ_COUNT_REG 0x8
> +#define XRT_CLKFREQ_V5_COUNT_REG 0x10
> +
> +#define XRT_CLKFREQ_READ_RETRIES 10
> +
> +static const struct regmap_config clkfreq_regmap_config = {
> + .reg_bits = 32,
> + .val_bits = 32,
> + .reg_stride = 4,
> + .max_register = 0x1000,
ok
> +};
> +
> +struct clkfreq {
> + struct platform_device *pdev;
> + struct regmap *regmap;
> + const char *clkfreq_ep_name;
> + struct mutex clkfreq_lock; /* clock counter dev lock */
> +};
> +
> +static int clkfreq_read(struct clkfreq *clkfreq, u32 *freq)
ok
> +{
> + int times = XRT_CLKFREQ_READ_RETRIES;
ok
> + u32 status;
> + int ret;
> +
> + *freq = 0;
> + mutex_lock(&clkfreq->clkfreq_lock);
> + ret = regmap_write(clkfreq->regmap, XRT_CLKFREQ_CONTROL_REG, XRT_CLKFREQ_CONTROL_START);
> + if (ret) {
> + CLKFREQ_INFO(clkfreq, "write start to control reg failed %d", ret);
> + goto failed;
> + }
> + while (times != 0) {
> + ret = regmap_read(clkfreq->regmap, XRT_CLKFREQ_CONTROL_REG, &status);
> + if (ret) {
> + CLKFREQ_INFO(clkfreq, "read control reg failed %d", ret);
> + goto failed;
> + }
> + if ((status & XRT_CLKFREQ_CONTROL_STATUS_MASK) == XRT_CLKFREQ_CONTROL_DONE)
> + break;
> + mdelay(1);
> + times--;
> + }
> +
> + if (!times) {
> + ret = -ETIMEDOUT;
> + goto failed;
> + }
> +
> + if (status & XRT_CLKFREQ_V5_CLK0_ENABLED)
> + ret = regmap_read(clkfreq->regmap, XRT_CLKFREQ_V5_COUNT_REG, freq);
> + else
> + ret = regmap_read(clkfreq->regmap, XRT_CLKFREQ_COUNT_REG, freq);
> + if (ret) {
> + CLKFREQ_INFO(clkfreq, "read count failed %d", ret);
> + goto failed;
> + }
ok
> +
> + mutex_unlock(&clkfreq->clkfreq_lock);
> +
> + return 0;
> +
> +failed:
> + mutex_unlock(&clkfreq->clkfreq_lock);
> +
> + return ret;
> +}
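The handshake here is: write START, poll the control register until the low 16 bits read DONE (up to 10 polls, 1 ms apart), then read whichever count register the V5 bit selects. A user-space sketch of the polling part, with the regmap access replaced by a caller-supplied callback so it can run anywhere (all names and the callback shape are assumptions, not from the patch):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define DEMO_STATUS_MASK 0xffff
#define DEMO_DONE        0x2
#define DEMO_RETRIES     10

/* poll a status source until DONE or the retry budget runs out */
int poll_for_done(uint32_t (*read_status)(void *ctx), void *ctx)
{
	int times = DEMO_RETRIES;

	while (times != 0) {
		if ((read_status(ctx) & DEMO_STATUS_MASK) == DEMO_DONE)
			return 0;
		/* the driver sleeps 1 ms (mdelay) between polls here */
		times--;
	}
	return -ETIMEDOUT;
}

/* demo status source: reports DONE on the third read */
uint32_t demo_status_after_3(void *ctx)
{
	int *calls = ctx;

	return ++(*calls) >= 3 ? DEMO_DONE : 0;
}

/* demo status source: never completes */
uint32_t demo_never_done(void *ctx)
{
	(void)ctx;
	return 0;
}
```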
> +
> +static ssize_t freq_show(struct device *dev, struct device_attribute *attr, char *buf)
> +{
> + struct clkfreq *clkfreq = platform_get_drvdata(to_platform_device(dev));
> + ssize_t count;
> + u32 freq;
> +
> + if (clkfreq_read(clkfreq, &freq))
> + return -EINVAL;
ok
> +
> + count = snprintf(buf, 64, "%u\n", freq);
ok
> +
> + return count;
> +}
> +static DEVICE_ATTR_RO(freq);
> +
> +static struct attribute *clkfreq_attrs[] = {
> + &dev_attr_freq.attr,
> + NULL,
> +};
> +
> +static struct attribute_group clkfreq_attr_group = {
> + .attrs = clkfreq_attrs,
> +};
> +
> +static int
> +xrt_clkfreq_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
> +{
> + struct clkfreq *clkfreq;
> + int ret = 0;
> +
> + clkfreq = platform_get_drvdata(pdev);
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + /* Does not handle any event. */
> + break;
> + case XRT_CLKFREQ_READ:
ok
> + ret = clkfreq_read(clkfreq, arg);
ok
> + break;
> + default:
> + xrt_err(pdev, "unsupported cmd %d", cmd);
> + return -EINVAL;
> + }
> +
> + return ret;
> +}
> +
> +static int clkfreq_remove(struct platform_device *pdev)
> +{
> + sysfs_remove_group(&pdev->dev.kobj, &clkfreq_attr_group);
> +
> + return 0;
> +}
> +
> +static int clkfreq_probe(struct platform_device *pdev)
> +{
> + struct clkfreq *clkfreq = NULL;
> + void __iomem *base = NULL;
> + struct resource *res;
> + int ret;
> +
> + clkfreq = devm_kzalloc(&pdev->dev, sizeof(*clkfreq), GFP_KERNEL);
> + if (!clkfreq)
> + return -ENOMEM;
> +
> + platform_set_drvdata(pdev, clkfreq);
> + clkfreq->pdev = pdev;
> + mutex_init(&clkfreq->clkfreq_lock);
> +
> + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + if (!res) {
> + ret = -EINVAL;
> + goto failed;
> + }
> + base = devm_ioremap_resource(&pdev->dev, res);
> + if (IS_ERR(base)) {
> + ret = PTR_ERR(base);
> + goto failed;
> + }
> +
> + clkfreq->regmap = devm_regmap_init_mmio(&pdev->dev, base, &clkfreq_regmap_config);
> + if (IS_ERR(clkfreq->regmap)) {
> + CLKFREQ_ERR(clkfreq, "regmap %pR failed", res);
> + ret = PTR_ERR(clkfreq->regmap);
> + goto failed;
> + }
> + clkfreq->clkfreq_ep_name = res->name;
> +
> + ret = sysfs_create_group(&pdev->dev.kobj, &clkfreq_attr_group);
> + if (ret) {
> + CLKFREQ_ERR(clkfreq, "create clkfreq attrs failed: %d", ret);
> + goto failed;
> + }
> +
> + CLKFREQ_INFO(clkfreq, "successfully initialized clkfreq subdev");
> +
> + return 0;
> +
> +failed:
> + return ret;
> +}
> +
> +static struct xrt_subdev_endpoints xrt_clkfreq_endpoints[] = {
> + {
> + .xse_names = (struct xrt_subdev_ep_names[]) {
> + { .regmap_name = XRT_MD_REGMAP_CLKFREQ },
ok
Looks good to me
Reviewed-by: Tom Rix <[email protected]>
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_subdev_drvdata xrt_clkfreq_data = {
> + .xsd_dev_ops = {
> + .xsd_leaf_call = xrt_clkfreq_leaf_call,
> + },
> +};
> +
> +static const struct platform_device_id xrt_clkfreq_table[] = {
> + { XRT_CLKFREQ, (kernel_ulong_t)&xrt_clkfreq_data },
> + { },
> +};
> +
> +static struct platform_driver xrt_clkfreq_driver = {
> + .driver = {
> + .name = XRT_CLKFREQ,
> + },
> + .probe = clkfreq_probe,
> + .remove = clkfreq_remove,
> + .id_table = xrt_clkfreq_table,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_CLKFREQ, clkfreq);
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> Add partition isolation platform driver. Partition isolation is
> a hardware function discovered by walking firmware metadata.
> A platform device node will be created for it. The partition
> isolation function isolates the different FPGA regions.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> drivers/fpga/xrt/include/xleaf/axigate.h | 23 ++
> drivers/fpga/xrt/lib/xleaf/axigate.c | 342 +++++++++++++++++++++++
> 2 files changed, 365 insertions(+)
> create mode 100644 drivers/fpga/xrt/include/xleaf/axigate.h
> create mode 100644 drivers/fpga/xrt/lib/xleaf/axigate.c
>
> diff --git a/drivers/fpga/xrt/include/xleaf/axigate.h b/drivers/fpga/xrt/include/xleaf/axigate.h
> new file mode 100644
> index 000000000000..58f32c76dca1
> --- /dev/null
> +++ b/drivers/fpga/xrt/include/xleaf/axigate.h
> @@ -0,0 +1,23 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#ifndef _XRT_AXIGATE_H_
> +#define _XRT_AXIGATE_H_
> +
> +#include "xleaf.h"
> +#include "metadata.h"
> +
> +/*
> + * AXIGATE driver leaf calls.
> + */
> +enum xrt_axigate_leaf_cmd {
> + XRT_AXIGATE_CLOSE = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
> + XRT_AXIGATE_OPEN,
ok
> +};
> +
> +#endif /* _XRT_AXIGATE_H_ */
> diff --git a/drivers/fpga/xrt/lib/xleaf/axigate.c b/drivers/fpga/xrt/lib/xleaf/axigate.c
> new file mode 100644
> index 000000000000..231bb0335278
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/xleaf/axigate.c
> @@ -0,0 +1,342 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xilinx Alveo FPGA AXI Gate Driver
> + *
> + * Copyright (C) 2020-2021 Xilinx, Inc.
> + *
> + * Authors:
> + * Lizhi Hou <[email protected]>
> + */
> +
> +#include <linux/mod_devicetable.h>
> +#include <linux/platform_device.h>
> +#include <linux/delay.h>
> +#include <linux/device.h>
> +#include <linux/regmap.h>
> +#include <linux/io.h>
> +#include "metadata.h"
> +#include "xleaf.h"
> +#include "xleaf/axigate.h"
> +
> +#define XRT_AXIGATE "xrt_axigate"
> +
> +#define XRT_AXIGATE_WRITE_REG 0
> +#define XRT_AXIGATE_READ_REG 8
> +
> +#define XRT_AXIGATE_CTRL_CLOSE 0
> +#define XRT_AXIGATE_CTRL_OPEN_BIT0 1
> +#define XRT_AXIGATE_CTRL_OPEN_BIT1 2
> +
> +#define XRT_AXIGATE_INTERVAL 500 /* ns */
> +
> +struct xrt_axigate {
> + struct platform_device *pdev;
> + struct regmap *regmap;
> + struct mutex gate_lock; /* gate dev lock */
> +
> + void *evt_hdl;
> + const char *ep_name;
> +
> + bool gate_closed;
Whitespace: the extra blank lines are not needed.
> +};
> +
> +static const struct regmap_config axigate_regmap_config = {
> + .reg_bits = 32,
> + .val_bits = 32,
> + .reg_stride = 4,
> + .max_register = 0x1000,
ok
> +};
> +
> +/* the ep names are in the order of hardware layers */
> +static const char * const xrt_axigate_epnames[] = {
> + XRT_MD_NODE_GATE_PLP, /* PLP: Provider Logic Partition */
> + XRT_MD_NODE_GATE_ULP /* ULP: User Logic Partition */
ok
> +};
> +
> +static inline int close_gate(struct xrt_axigate *gate)
> +{
> + u32 val;
> + int ret;
> +
> + ret = regmap_write(gate->regmap, XRT_AXIGATE_WRITE_REG, XRT_AXIGATE_CTRL_CLOSE);
ok, regs defined
> + if (ret) {
> + xrt_err(gate->pdev, "write gate failed %d", ret);
> + return ret;
> + }
> + ndelay(XRT_AXIGATE_INTERVAL);
> + /*
> + * Legacy hardware requires an extra read to work properly.
> + * This is not on the critical path, so the extra read should not impact performance much.
> + */
> + ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &val);
> + if (ret) {
> + xrt_err(gate->pdev, "read gate failed %d", ret);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +static inline int open_gate(struct xrt_axigate *gate)
> +{
> + u32 val;
> + int ret;
> +
> + ret = regmap_write(gate->regmap, XRT_AXIGATE_WRITE_REG, XRT_AXIGATE_CTRL_OPEN_BIT1);
> + if (ret) {
> + xrt_err(gate->pdev, "write 2 failed %d", ret);
> + return ret;
> + }
> + ndelay(XRT_AXIGATE_INTERVAL);
> + /*
> + * Legacy hardware requires an extra read to work properly.
> + * This is not on the critical path, so the extra read should not impact performance much.
> + */
> + ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &val);
> + if (ret) {
> + xrt_err(gate->pdev, "read 2 failed %d", ret);
> + return ret;
> + }
> + ret = regmap_write(gate->regmap, XRT_AXIGATE_WRITE_REG,
> + XRT_AXIGATE_CTRL_OPEN_BIT0 | XRT_AXIGATE_CTRL_OPEN_BIT1);
> + if (ret) {
> + xrt_err(gate->pdev, "write 3 failed %d", ret);
> + return ret;
> + }
> + ndelay(XRT_AXIGATE_INTERVAL);
> + ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &val);
> + if (ret) {
> + xrt_err(gate->pdev, "read 3 failed %d", ret);
> + return ret;
> + }
> +
> + return 0;
> +}
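For the record, the ungating is deliberately two writes: first OPEN_BIT1 alone, then OPEN_BIT0 | OPEN_BIT1, with a delay and a read-back after each. A trivial sketch that just records the control words so the ordering can be checked (the helper name is invented; in the driver each step goes through regmap with an ndelay() and a read in between):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define GATE_OPEN_BIT0 0x1
#define GATE_OPEN_BIT1 0x2

/* record the control words open_gate() would write, in order */
size_t gate_open_sequence(uint32_t *out, size_t max)
{
	size_t n = 0;

	if (n < max)	/* step 1: release bit 1 alone */
		out[n++] = GATE_OPEN_BIT1;
	if (n < max)	/* step 2: then both bits, fully opening the gate */
		out[n++] = GATE_OPEN_BIT0 | GATE_OPEN_BIT1;
	return n;
}
```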
> +
> +static int xrt_axigate_epname_idx(struct platform_device *pdev)
> +{
> + struct resource *res;
> + int ret, i;
ok
> +
> + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + if (!res) {
> + xrt_err(pdev, "Empty Resource!");
> + return -EINVAL;
> + }
> +
> + for (i = 0; i < ARRAY_SIZE(xrt_axigate_epnames); i++) {
ok
> + ret = strncmp(xrt_axigate_epnames[i], res->name,
> + strlen(xrt_axigate_epnames[i]) + 1);
ok
> + if (!ret)
> + return i;
> + }
> +
> + return -EINVAL;
> +}
> +
> +static int xrt_axigate_close(struct platform_device *pdev)
> +{
> + struct xrt_axigate *gate;
> + u32 status = 0;
> + int ret;
> +
> + gate = platform_get_drvdata(pdev);
> +
> + mutex_lock(&gate->gate_lock);
> + ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &status);
> + if (ret) {
> + xrt_err(pdev, "read gate failed %d", ret);
> + goto failed;
> + }
> + if (status) { /* gate is opened */
> + xleaf_broadcast_event(pdev, XRT_EVENT_PRE_GATE_CLOSE, false);
> + ret = close_gate(gate);
ok
> + if (ret)
> + goto failed;
> + }
> +
> + gate->gate_closed = true;
ok
> +
> +failed:
> + mutex_unlock(&gate->gate_lock);
> +
> + xrt_info(pdev, "close gate %s", gate->ep_name);
> + return ret;
> +}
> +
> +static int xrt_axigate_open(struct platform_device *pdev)
> +{
> + struct xrt_axigate *gate;
> + u32 status;
> + int ret;
> +
> + gate = platform_get_drvdata(pdev);
> +
> + mutex_lock(&gate->gate_lock);
> + ret = regmap_read(gate->regmap, XRT_AXIGATE_READ_REG, &status);
> + if (ret) {
> + xrt_err(pdev, "read gate failed %d", ret);
> + goto failed;
> + }
> + if (!status) { /* gate is closed */
> + ret = open_gate(gate);
> + if (ret)
> + goto failed;
> + xleaf_broadcast_event(pdev, XRT_EVENT_POST_GATE_OPEN, true);
> + /*
> + * xrt_axigate_open() could be called from the event callback, thus
> + * we cannot wait for the event handling to complete.
> + */
> + }
> +
> + gate->gate_closed = false;
> +
> +failed:
> + mutex_unlock(&gate->gate_lock);
> +
> + xrt_info(pdev, "open gate %s", gate->ep_name);
> + return ret;
> +}
> +
> +static void xrt_axigate_event_cb(struct platform_device *pdev, void *arg)
> +{
> + struct xrt_axigate *gate = platform_get_drvdata(pdev);
> + struct xrt_event *evt = (struct xrt_event *)arg;
> + enum xrt_events e = evt->xe_evt;
> + struct platform_device *leaf;
> + enum xrt_subdev_id id;
> + struct resource *res;
> + int instance;
> +
> + if (e != XRT_EVENT_POST_CREATION)
> + return;
> +
> + instance = evt->xe_subdev.xevt_subdev_instance;
> + id = evt->xe_subdev.xevt_subdev_id;
> + if (id != XRT_SUBDEV_AXIGATE)
> + return;
ok
> +
> + leaf = xleaf_get_leaf_by_id(pdev, id, instance);
> + if (!leaf)
> + return;
> +
> + res = platform_get_resource(leaf, IORESOURCE_MEM, 0);
> + if (!res || !strncmp(res->name, gate->ep_name, strlen(res->name) + 1)) {
> + xleaf_put_leaf(pdev, leaf);
> + return;
> + }
> +
> + /* higher level axigate instance created, make sure the gate is opened. */
ok
Only a minor whitespace issue; otherwise good to go.
Reviewed-by: Tom Rix <[email protected]>
> + if (xrt_axigate_epname_idx(leaf) > xrt_axigate_epname_idx(pdev))
> + xrt_axigate_open(pdev);
> + else
> + xleaf_call(leaf, XRT_AXIGATE_OPEN, NULL);
> +
> + xleaf_put_leaf(pdev, leaf);
> +}
> +
> +static int
> +xrt_axigate_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
> +{
> + int ret = 0;
> +
> + switch (cmd) {
> + case XRT_XLEAF_EVENT:
> + xrt_axigate_event_cb(pdev, arg);
> + break;
> + case XRT_AXIGATE_CLOSE:
> + ret = xrt_axigate_close(pdev);
> + break;
> + case XRT_AXIGATE_OPEN:
> + ret = xrt_axigate_open(pdev);
> + break;
> + default:
> + xrt_err(pdev, "unsupported cmd %d", cmd);
> + return -EINVAL;
> + }
> +
> + return ret;
> +}
> +
> +static int xrt_axigate_probe(struct platform_device *pdev)
> +{
> + struct xrt_axigate *gate = NULL;
> + void __iomem *base = NULL;
> + struct resource *res;
> + int ret;
> +
> + gate = devm_kzalloc(&pdev->dev, sizeof(*gate), GFP_KERNEL);
> + if (!gate)
> + return -ENOMEM;
> +
> + gate->pdev = pdev;
> + platform_set_drvdata(pdev, gate);
> +
> + xrt_info(pdev, "probing...");
> + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + if (!res) {
> + xrt_err(pdev, "Empty resource 0");
> + ret = -EINVAL;
> + goto failed;
> + }
> +
> + base = devm_ioremap_resource(&pdev->dev, res);
> + if (IS_ERR(base)) {
> + xrt_err(pdev, "map base iomem failed");
> + ret = PTR_ERR(base);
> + goto failed;
> + }
> +
> + gate->regmap = devm_regmap_init_mmio(&pdev->dev, base, &axigate_regmap_config);
> + if (IS_ERR(gate->regmap)) {
> + xrt_err(pdev, "regmap %pR failed", res);
> + ret = PTR_ERR(gate->regmap);
> + goto failed;
> + }
> + gate->ep_name = res->name;
> +
> + mutex_init(&gate->gate_lock);
> +
> + return 0;
> +
> +failed:
> + return ret;
> +}
> +
> +static struct xrt_subdev_endpoints xrt_axigate_endpoints[] = {
> + {
> + .xse_names = (struct xrt_subdev_ep_names[]) {
> + { .ep_name = XRT_MD_NODE_GATE_ULP },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + {
> + .xse_names = (struct xrt_subdev_ep_names[]) {
> + { .ep_name = XRT_MD_NODE_GATE_PLP },
> + { NULL },
> + },
> + .xse_min_ep = 1,
> + },
> + { 0 },
> +};
> +
> +static struct xrt_subdev_drvdata xrt_axigate_data = {
> + .xsd_dev_ops = {
> + .xsd_leaf_call = xrt_axigate_leaf_call,
> + },
> +};
> +
> +static const struct platform_device_id xrt_axigate_table[] = {
> + { XRT_AXIGATE, (kernel_ulong_t)&xrt_axigate_data },
> + { },
> +};
> +
> +static struct platform_driver xrt_axigate_driver = {
> + .driver = {
> + .name = XRT_AXIGATE,
> + },
> + .probe = xrt_axigate_probe,
> + .id_table = xrt_axigate_table,
> +};
> +
> +XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_AXIGATE, axigate);
On 3/23/21 10:29 PM, Lizhi Hou wrote:
> Update fpga Kconfig/Makefile and add Kconfig/Makefile for new drivers.
>
> Signed-off-by: Sonal Santan <[email protected]>
> Signed-off-by: Max Zhen <[email protected]>
> Signed-off-by: Lizhi Hou <[email protected]>
> ---
> MAINTAINERS | 11 +++++++++++
> drivers/Makefile | 1 +
> drivers/fpga/Kconfig | 2 ++
> drivers/fpga/Makefile | 5 +++++
> drivers/fpga/xrt/Kconfig | 8 ++++++++
> drivers/fpga/xrt/lib/Kconfig | 17 +++++++++++++++++
> drivers/fpga/xrt/lib/Makefile | 30 ++++++++++++++++++++++++++++++
> drivers/fpga/xrt/metadata/Kconfig | 12 ++++++++++++
> drivers/fpga/xrt/metadata/Makefile | 16 ++++++++++++++++
> drivers/fpga/xrt/mgmt/Kconfig | 15 +++++++++++++++
> drivers/fpga/xrt/mgmt/Makefile | 19 +++++++++++++++++++
> 11 files changed, 136 insertions(+)
> create mode 100644 drivers/fpga/xrt/Kconfig
> create mode 100644 drivers/fpga/xrt/lib/Kconfig
> create mode 100644 drivers/fpga/xrt/lib/Makefile
> create mode 100644 drivers/fpga/xrt/metadata/Kconfig
> create mode 100644 drivers/fpga/xrt/metadata/Makefile
> create mode 100644 drivers/fpga/xrt/mgmt/Kconfig
> create mode 100644 drivers/fpga/xrt/mgmt/Makefile
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index aa84121c5611..44ccc52987ac 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -7009,6 +7009,17 @@ F: Documentation/fpga/
> F: drivers/fpga/
> F: include/linux/fpga/
>
> +FPGA XRT DRIVERS
> +M: Lizhi Hou <[email protected]>
> +R: Max Zhen <[email protected]>
> +R: Sonal Santan <[email protected]>
> +L: [email protected]
> +S: Maintained
Should this be 'Supported' ?
> +W: https://github.com/Xilinx/XRT
> +F: Documentation/fpga/xrt.rst
> +F: drivers/fpga/xrt/
> +F: include/uapi/linux/xrt/
> +
> FPU EMULATOR
> M: Bill Metzenthen <[email protected]>
> S: Maintained
> diff --git a/drivers/Makefile b/drivers/Makefile
> index 6fba7daba591..dbb3b727fc7a 100644
> --- a/drivers/Makefile
> +++ b/drivers/Makefile
> @@ -179,6 +179,7 @@ obj-$(CONFIG_STM) += hwtracing/stm/
> obj-$(CONFIG_ANDROID) += android/
> obj-$(CONFIG_NVMEM) += nvmem/
> obj-$(CONFIG_FPGA) += fpga/
> +obj-$(CONFIG_FPGA_XRT_METADATA) += fpga/
CONFIG_FPGA_XRT_METADATA is only defined when CONFIG_FPGA is, so I don't
think this line is needed.
> obj-$(CONFIG_FSI) += fsi/
> obj-$(CONFIG_TEE) += tee/
> obj-$(CONFIG_MULTIPLEXER) += mux/
> diff --git a/drivers/fpga/Kconfig b/drivers/fpga/Kconfig
> index 5ff9438b7b46..01410ff000b9 100644
> --- a/drivers/fpga/Kconfig
> +++ b/drivers/fpga/Kconfig
> @@ -227,4 +227,6 @@ config FPGA_MGR_ZYNQMP_FPGA
> to configure the programmable logic(PL) through PS
> on ZynqMP SoC.
>
> +source "drivers/fpga/xrt/Kconfig"
> +
> endif # FPGA
This is where it is defined.
> diff --git a/drivers/fpga/Makefile b/drivers/fpga/Makefile
> index 18dc9885883a..4b887bf95cb3 100644
> --- a/drivers/fpga/Makefile
> +++ b/drivers/fpga/Makefile
> @@ -48,3 +48,8 @@ obj-$(CONFIG_FPGA_DFL_NIOS_INTEL_PAC_N3000) += dfl-n3000-nios.o
>
> # Drivers for FPGAs which implement DFL
> obj-$(CONFIG_FPGA_DFL_PCI) += dfl-pci.o
> +
> +# XRT drivers for Alveo
> +obj-$(CONFIG_FPGA_XRT_METADATA) += xrt/metadata/
> +obj-$(CONFIG_FPGA_XRT_LIB) += xrt/lib/
> +obj-$(CONFIG_FPGA_XRT_XMGMT) += xrt/mgmt/
> diff --git a/drivers/fpga/xrt/Kconfig b/drivers/fpga/xrt/Kconfig
> new file mode 100644
> index 000000000000..0e2c59589ddd
> --- /dev/null
> +++ b/drivers/fpga/xrt/Kconfig
> @@ -0,0 +1,8 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# Xilinx Alveo FPGA device configuration
> +#
> +
> +source "drivers/fpga/xrt/metadata/Kconfig"
> +source "drivers/fpga/xrt/lib/Kconfig"
> +source "drivers/fpga/xrt/mgmt/Kconfig"
> diff --git a/drivers/fpga/xrt/lib/Kconfig b/drivers/fpga/xrt/lib/Kconfig
> new file mode 100644
> index 000000000000..935369fad570
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/Kconfig
> @@ -0,0 +1,17 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# XRT Alveo FPGA device configuration
> +#
> +
> +config FPGA_XRT_LIB
> + tristate "XRT Alveo Driver Library"
> + depends on HWMON && PCI && HAS_IOMEM
> + select FPGA_XRT_METADATA
> + select REGMAP_MMIO
> + help
> + Select this option to enable Xilinx XRT Alveo driver library. This
> + library is core infrastructure of XRT Alveo FPGA drivers which
> + provides functions for working with device nodes, iteration and
> + lookup of platform devices, common interfaces for platform devices,
> + plumbing of function call and ioctls between platform devices and
> + parent partitions.
> diff --git a/drivers/fpga/xrt/lib/Makefile b/drivers/fpga/xrt/lib/Makefile
> new file mode 100644
> index 000000000000..58563416efbf
> --- /dev/null
> +++ b/drivers/fpga/xrt/lib/Makefile
> @@ -0,0 +1,30 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
> +#
> +# Authors: [email protected]
> +#
> +
> +FULL_XRT_PATH=$(srctree)/$(src)/..
> +FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
> +
> +obj-$(CONFIG_FPGA_XRT_LIB) += xrt-lib.o
> +
> +xrt-lib-objs := \
> + lib-drv.o \
> + xroot.o \
> + xclbin.o \
> + subdev.o \
> + cdev.o \
> + group.o \
> + xleaf/vsec.o \
> + xleaf/axigate.o \
> + xleaf/devctl.o \
> + xleaf/icap.o \
> + xleaf/clock.o \
> + xleaf/clkfreq.o \
> + xleaf/ucs.o \
> + xleaf/ddr_calibration.o
> +
> +ccflags-y := -I$(FULL_XRT_PATH)/include \
> + -I$(FULL_DTC_PATH)
> diff --git a/drivers/fpga/xrt/metadata/Kconfig b/drivers/fpga/xrt/metadata/Kconfig
> new file mode 100644
> index 000000000000..129adda47e94
> --- /dev/null
> +++ b/drivers/fpga/xrt/metadata/Kconfig
> @@ -0,0 +1,12 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# XRT Alveo FPGA device configuration
> +#
> +
> +config FPGA_XRT_METADATA
> + bool "XRT Alveo Driver Metadata Parser"
> + select LIBFDT
> + help
> + This option provides helper functions to parse Xilinx Alveo FPGA
> + firmware metadata. The metadata is in device tree format and the
> + XRT driver uses it to discover the HW subsystems behind PCIe BAR.
> diff --git a/drivers/fpga/xrt/metadata/Makefile b/drivers/fpga/xrt/metadata/Makefile
> new file mode 100644
> index 000000000000..14f65ef1595c
> --- /dev/null
> +++ b/drivers/fpga/xrt/metadata/Makefile
> @@ -0,0 +1,16 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
> +#
> +# Authors: [email protected]
> +#
> +
> +FULL_XRT_PATH=$(srctree)/$(src)/..
> +FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
> +
> +obj-$(CONFIG_FPGA_XRT_METADATA) += xrt-md.o
> +
> +xrt-md-objs := metadata.o
> +
> +ccflags-y := -I$(FULL_XRT_PATH)/include \
> + -I$(FULL_DTC_PATH)
> diff --git a/drivers/fpga/xrt/mgmt/Kconfig b/drivers/fpga/xrt/mgmt/Kconfig
> new file mode 100644
> index 000000000000..31e9e19fffb8
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgmt/Kconfig
> @@ -0,0 +1,15 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# Xilinx XRT FPGA device configuration
> +#
> +
> +config FPGA_XRT_XMGMT
> + tristate "Xilinx Alveo Management Driver"
> + depends on FPGA_XRT_LIB
> + select FPGA_XRT_METADATA
If the XRT driver depends on these other two configs and it does not
make sense to build those two separately, could you remove these configs
and just use something like FPGA_XRT?
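Something like the following, perhaps (illustrative only; the single option name FPGA_XRT and the merged select list are a suggestion, not what the patch currently does):

```kconfig
config FPGA_XRT
	tristate "Xilinx Alveo XRT Drivers"
	depends on HWMON && PCI && HAS_IOMEM
	select LIBFDT
	select REGMAP_MMIO
	select FPGA_BRIDGE
	select FPGA_REGION
	help
	  Select this option to enable the XRT PCIe driver stack for
	  Xilinx Alveo FPGA devices.
```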
Tom
> + select FPGA_BRIDGE
> + select FPGA_REGION
> + help
> + Select this option to enable XRT PCIe driver for Xilinx Alveo FPGA.
> + This driver provides interfaces for userspace application to access
> + Alveo FPGA device.
> diff --git a/drivers/fpga/xrt/mgmt/Makefile b/drivers/fpga/xrt/mgmt/Makefile
> new file mode 100644
> index 000000000000..acabd811f3fd
> --- /dev/null
> +++ b/drivers/fpga/xrt/mgmt/Makefile
> @@ -0,0 +1,19 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
> +#
> +# Authors: [email protected]
> +#
> +
> +FULL_XRT_PATH=$(srctree)/$(src)/..
> +FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
> +
> +obj-$(CONFIG_FPGA_XRT_XMGMT) += xrt-mgmt.o
> +
> +xrt-mgmt-objs := root.o \
> + main.o \
> + fmgr-drv.o \
> + main-region.o
> +
> +ccflags-y := -I$(FULL_XRT_PATH)/include \
> + -I$(FULL_DTC_PATH)
On 04/02/2021 07:12 AM, Tom Rix wrote:
>
> Local use of 'regmap' conflicts with the global meaning.
>
> Reword the local regmap to something else.
Will change local regmap to 'compat'.
>
> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>> Add VSEC driver. VSEC is a hardware function discovered by walking
>> PCI Express configure space. A platform device node will be created
>> for it. VSEC provides board logic UUID and few offset of other hardware
>> functions.
>>
>> Signed-off-by: Sonal Santan <[email protected]>
>> Signed-off-by: Max Zhen <[email protected]>
>> Signed-off-by: Lizhi Hou <[email protected]>
>> ---
>> drivers/fpga/xrt/lib/xleaf/vsec.c | 388 ++++++++++++++++++++++++++++++
>> 1 file changed, 388 insertions(+)
>> create mode 100644 drivers/fpga/xrt/lib/xleaf/vsec.c
>>
>> diff --git a/drivers/fpga/xrt/lib/xleaf/vsec.c b/drivers/fpga/xrt/lib/xleaf/vsec.c
>> new file mode 100644
>> index 000000000000..8595d23f5710
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/xleaf/vsec.c
>> @@ -0,0 +1,388 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Xilinx Alveo FPGA VSEC Driver
>> + *
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Lizhi Hou<[email protected]>
>> + */
>> +
>> +#include <linux/platform_device.h>
>> +#include <linux/regmap.h>
>> +#include "metadata.h"
>> +#include "xleaf.h"
>> +
>> +#define XRT_VSEC "xrt_vsec"
>> +
>> +#define VSEC_TYPE_UUID 0x50
>> +#define VSEC_TYPE_FLASH 0x51
>> +#define VSEC_TYPE_PLATINFO 0x52
>> +#define VSEC_TYPE_MAILBOX 0x53
>> +#define VSEC_TYPE_END 0xff
>> +
>> +#define VSEC_UUID_LEN 16
>> +
>> +#define VSEC_REG_FORMAT 0x0
>> +#define VSEC_REG_LENGTH 0x4
>> +#define VSEC_REG_ENTRY 0x8
>> +
>> +struct xrt_vsec_header {
>> + u32 format;
>> + u32 length;
>> + u32 entry_sz;
>> + u32 rsvd;
>> +} __packed;
>> +
>> +struct xrt_vsec_entry {
>> + u8 type;
>> + u8 bar_rev;
>> + u16 off_lo;
>> + u32 off_hi;
>> + u8 ver_type;
>> + u8 minor;
>> + u8 major;
>> + u8 rsvd0;
>> + u32 rsvd1;
>> +} __packed;
>> +
>> +struct vsec_device {
>> + u8 type;
>> + char *ep_name;
>> + ulong size;
>> + char *regmap;
> This element should be 'char *name;' as regmap is a different thing.
Will change to 'compat'.
>
>> +};
>> +
>> +static struct vsec_device vsec_devs[] = {
>> + {
>> + .type = VSEC_TYPE_UUID,
>> + .ep_name = XRT_MD_NODE_BLP_ROM,
>> + .size = VSEC_UUID_LEN,
>> + .regmap = "vsec-uuid",
>> + },
>> + {
>> + .type = VSEC_TYPE_FLASH,
>> + .ep_name = XRT_MD_NODE_FLASH_VSEC,
>> + .size = 4096,
>> + .regmap = "vsec-flash",
>> + },
>> + {
>> + .type = VSEC_TYPE_PLATINFO,
>> + .ep_name = XRT_MD_NODE_PLAT_INFO,
>> + .size = 4,
>> + .regmap = "vsec-platinfo",
>> + },
>> + {
>> + .type = VSEC_TYPE_MAILBOX,
>> + .ep_name = XRT_MD_NODE_MAILBOX_VSEC,
>> + .size = 48,
>> + .regmap = "vsec-mbx",
>> + },
>> +};
>> +
>> +static const struct regmap_config vsec_regmap_config = {
>> + .reg_bits = 32,
>> + .val_bits = 32,
>> + .reg_stride = 4,
>> + .max_register = 0x1000,
> At least 0x1000 could be a #define, maybe all of them.
Will #define all of them.
>> +};
>> +
>> +struct xrt_vsec {
>> + struct platform_device *pdev;
>> + struct regmap *regmap;
>> + u32 length;
>> +
>> + char *metadata;
>> + char uuid[VSEC_UUID_LEN];
>> + int group;
>> +};
>> +
>> +static inline int vsec_read_entry(struct xrt_vsec *vsec, u32 index, struct xrt_vsec_entry *entry)
>> +{
>> + int ret;
>> +
>> + ret = regmap_bulk_read(vsec->regmap, sizeof(struct xrt_vsec_header) +
>> + index * sizeof(struct xrt_vsec_entry), entry,
>> + sizeof(struct xrt_vsec_entry) /
>> + vsec_regmap_config.reg_stride);
>> +
>> + return ret;
>> +}
>> +
>> +static inline u32 vsec_get_bar(struct xrt_vsec_entry *entry)
>> +{
>> + return ((entry)->bar_rev >> 4) & 0xf;
> The extra () were needed when this was a macro; they aren't needed now.
>
> Remove them here and in the next 2 functions.
will remove ().
>
>> +}
>> +
>> +static inline u64 vsec_get_bar_off(struct xrt_vsec_entry *entry)
>> +{
>> + return (entry)->off_lo | ((u64)(entry)->off_hi << 16);
>> +}
>> +
>> +static inline u32 vsec_get_rev(struct xrt_vsec_entry *entry)
>> +{
>> + return (entry)->bar_rev & 0xf;
>> +}
>> +
>> +static char *type2epname(u32 type)
>> +{
>> + int i;
>> +
>> + for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
>> + if (vsec_devs[i].type == type)
>> + return (vsec_devs[i].ep_name);
>> + }
>> +
>> + return NULL;
>> +}
>> +
>> +static ulong type2size(u32 type)
>> +{
>> + int i;
>> +
>> + for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
>> + if (vsec_devs[i].type == type)
>> + return (vsec_devs[i].size);
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static char *type2regmap(u32 type)
>> +{
>> + int i;
>> +
>> + for (i = 0; i < ARRAY_SIZE(vsec_devs); i++) {
>> + if (vsec_devs[i].type == type)
>> + return (vsec_devs[i].regmap);
>> + }
>> +
>> + return NULL;
>> +}
>> +
>> +static int xrt_vsec_add_node(struct xrt_vsec *vsec,
>> + void *md_blob, struct xrt_vsec_entry *p_entry)
>> +{
>> + struct xrt_md_endpoint ep;
>> + char regmap_ver[64];
>> + int ret;
>> +
>> + if (!type2epname(p_entry->type))
>> + return -EINVAL;
>> +
>> + /*
>> + * VSEC may have more than 1 mailbox instance for the card
>> + * which has more than 1 physical function.
>> + * This is not supported for now. Assuming only one mailbox
>> + */
>> +
>> + snprintf(regmap_ver, sizeof(regmap_ver) - 1, "%d-%d.%d.%d",
>> + p_entry->ver_type, p_entry->major, p_entry->minor,
>> + vsec_get_rev(p_entry));
>> + ep.ep_name = type2epname(p_entry->type);
>> + ep.bar = vsec_get_bar(p_entry);
>> + ep.bar_off = vsec_get_bar_off(p_entry);
> ok
>> + ep.size = type2size(p_entry->type);
>> + ep.regmap = type2regmap(p_entry->type);
>> + ep.regmap_ver = regmap_ver;
>> + ret = xrt_md_add_endpoint(DEV(vsec->pdev), vsec->metadata, &ep);
>> + if (ret)
>> + xrt_err(vsec->pdev, "add ep failed, ret %d", ret);
>> +
>> + return ret;
>> +}
>> +
>> +static int xrt_vsec_create_metadata(struct xrt_vsec *vsec)
>> +{
>> + struct xrt_vsec_entry entry;
>> + int i, ret;
>> +
>> + ret = xrt_md_create(&vsec->pdev->dev, &vsec->metadata);
>> + if (ret) {
>> + xrt_err(vsec->pdev, "create metadata failed");
>> + return ret;
>> + }
>> +
>> + for (i = 0; i * sizeof(entry) < vsec->length - sizeof(struct xrt_vsec_header); i++) {
>> + ret = vsec_read_entry(vsec, i, &entry);
>> + if (ret) {
>> + xrt_err(vsec->pdev, "failed read entry %d, ret %d", i, ret);
>> + goto fail;
>> + }
>> +
>> + if (entry.type == VSEC_TYPE_END)
>> + break;
>> + ret = xrt_vsec_add_node(vsec, vsec->metadata, &entry);
>> + if (ret)
>> + goto fail;
> ok
>> + }
>> +
>> + return 0;
>> +
>> +fail:
>> + vfree(vsec->metadata);
>> + vsec->metadata = NULL;
>> + return ret;
>> +}
>> +
>> +static int xrt_vsec_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
>> +{
>> + int ret = 0;
>> +
>> + switch (cmd) {
>> + case XRT_XLEAF_EVENT:
>> + /* Does not handle any event. */
>> + break;
>> + default:
>> + ret = -EINVAL;
>> + xrt_err(pdev, "should never be called");
>> + break;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +static int xrt_vsec_mapio(struct xrt_vsec *vsec)
>> +{
>> + struct xrt_subdev_platdata *pdata = DEV_PDATA(vsec->pdev);
>> + struct resource *res = NULL;
>> + void __iomem *base = NULL;
>> + const u64 *bar_off;
>> + const u32 *bar;
>> + u64 addr;
> ok
>> + int ret;
>> +
>> + if (!pdata || xrt_md_size(DEV(vsec->pdev), pdata->xsp_dtb) == XRT_MD_INVALID_LENGTH) {
>> + xrt_err(vsec->pdev, "empty metadata");
>> + return -EINVAL;
>> + }
>> +
>> + ret = xrt_md_get_prop(DEV(vsec->pdev), pdata->xsp_dtb, XRT_MD_NODE_VSEC,
>> + NULL, XRT_MD_PROP_BAR_IDX, (const void **)&bar, NULL);
>> + if (ret) {
>> + xrt_err(vsec->pdev, "failed to get bar idx, ret %d", ret);
>> + return -EINVAL;
>> + }
>> +
>> + ret = xrt_md_get_prop(DEV(vsec->pdev), pdata->xsp_dtb, XRT_MD_NODE_VSEC,
>> + NULL, XRT_MD_PROP_OFFSET, (const void **)&bar_off, NULL);
>> + if (ret) {
>> + xrt_err(vsec->pdev, "failed to get bar off, ret %d", ret);
>> + return -EINVAL;
>> + }
>> +
>> + xrt_info(vsec->pdev, "Map vsec at bar %d, offset 0x%llx",
>> + be32_to_cpu(*bar), be64_to_cpu(*bar_off));
>> +
>> + xleaf_get_barres(vsec->pdev, &res, be32_to_cpu(*bar));
>> + if (!res) {
>> + xrt_err(vsec->pdev, "failed to get bar addr");
>> + return -EINVAL;
>> + }
>> +
>> + addr = res->start + be64_to_cpu(*bar_off);
>> +
>> + base = devm_ioremap(&vsec->pdev->dev, addr, vsec_regmap_config.max_register);
>> + if (!base) {
>> + xrt_err(vsec->pdev, "Map failed");
>> + return -EIO;
>> + }
>> +
>> + vsec->regmap = devm_regmap_init_mmio(&vsec->pdev->dev, base, &vsec_regmap_config);
>> + if (IS_ERR(vsec->regmap)) {
>> + xrt_err(vsec->pdev, "regmap %pR failed", res);
>> + return PTR_ERR(vsec->regmap);
>> + }
>> +
>> + ret = regmap_read(vsec->regmap, VSEC_REG_LENGTH, &vsec->length);
>> + if (ret) {
>> + xrt_err(vsec->pdev, "failed to read length %d", ret);
>> + return ret;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static int xrt_vsec_remove(struct platform_device *pdev)
>> +{
>> + struct xrt_vsec *vsec;
>> +
>> + vsec = platform_get_drvdata(pdev);
>> +
>> + if (vsec->group >= 0)
>> + xleaf_destroy_group(pdev, vsec->group);
>> + vfree(vsec->metadata);
>> +
>> + return 0;
>> +}
>> +
>> +static int xrt_vsec_probe(struct platform_device *pdev)
>> +{
>> + struct xrt_vsec *vsec;
>> + int ret = 0;
>> +
>> + vsec = devm_kzalloc(&pdev->dev, sizeof(*vsec), GFP_KERNEL);
>> + if (!vsec)
>> + return -ENOMEM;
>> +
>> + vsec->pdev = pdev;
>> + vsec->group = -1;
>> + platform_set_drvdata(pdev, vsec);
>> +
>> + ret = xrt_vsec_mapio(vsec);
>> + if (ret)
>> + goto failed;
>> +
>> + ret = xrt_vsec_create_metadata(vsec);
>> + if (ret) {
>> + xrt_err(pdev, "create metadata failed, ret %d", ret);
>> + goto failed;
>> + }
>> + vsec->group = xleaf_create_group(pdev, vsec->metadata);
>> + if (ret < 0) {
> This is a bug: ret is not set by xleaf_create_group, so the error check never fires.
Will fix it.
Lizhi
>
> Tom
>
>> + xrt_err(pdev, "create group failed, ret %d", vsec->group);
>> + ret = vsec->group;
>> + goto failed;
>> + }
>> +
>> + return 0;
>> +
>> +failed:
>> + xrt_vsec_remove(pdev);
>> +
>> + return ret;
>> +}
>> +
>> +static struct xrt_subdev_endpoints xrt_vsec_endpoints[] = {
>> + {
>> + .xse_names = (struct xrt_subdev_ep_names []){
>> + { .ep_name = XRT_MD_NODE_VSEC },
>> + { NULL },
>> + },
>> + .xse_min_ep = 1,
>> + },
>> + { 0 },
>> +};
>> +
>> +static struct xrt_subdev_drvdata xrt_vsec_data = {
>> + .xsd_dev_ops = {
>> + .xsd_leaf_call = xrt_vsec_leaf_call,
>> + },
>> +};
>> +
>> +static const struct platform_device_id xrt_vsec_table[] = {
>> + { XRT_VSEC, (kernel_ulong_t)&xrt_vsec_data },
>> + { },
>> +};
>> +
>> +static struct platform_driver xrt_vsec_driver = {
>> + .driver = {
>> + .name = XRT_VSEC,
>> + },
>> + .probe = xrt_vsec_probe,
>> + .remove = xrt_vsec_remove,
>> + .id_table = xrt_vsec_table,
>> +};
>> +
>> +XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_VSEC, vsec);
Hi Tom,
On 04/06/2021 06:50 AM, Tom Rix wrote:
>
>
> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>> ICAP stands for Hardware Internal Configuration Access Port. ICAP is
>> discovered by walking firmware metadata. A platform device node will be
> by walking the firmware metadata
Sure.
>> created for it. FPGA bitstream is written to hardware through ICAP.
>>
>> Signed-off-by: Sonal Santan <[email protected]>
>> Signed-off-by: Max Zhen <[email protected]>
>> Signed-off-by: Lizhi Hou <[email protected]>
>> ---
>> drivers/fpga/xrt/include/xleaf/icap.h | 27 ++
>> drivers/fpga/xrt/lib/xleaf/icap.c | 344 ++++++++++++++++++++++++++
>> 2 files changed, 371 insertions(+)
>> create mode 100644 drivers/fpga/xrt/include/xleaf/icap.h
>> create mode 100644 drivers/fpga/xrt/lib/xleaf/icap.c
>>
>> diff --git a/drivers/fpga/xrt/include/xleaf/icap.h b/drivers/fpga/xrt/include/xleaf/icap.h
>> new file mode 100644
>> index 000000000000..96d39a8934fa
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/include/xleaf/icap.h
>> @@ -0,0 +1,27 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Lizhi Hou <[email protected]>
>> + */
>> +
>> +#ifndef _XRT_ICAP_H_
>> +#define _XRT_ICAP_H_
>> +
>> +#include "xleaf.h"
>> +
>> +/*
>> + * ICAP driver leaf calls.
>> + */
>> +enum xrt_icap_leaf_cmd {
>> + XRT_ICAP_WRITE = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
>> + XRT_ICAP_GET_IDCODE,
> ok
>> +};
>> +
>> +struct xrt_icap_wr {
>> + void *xiiw_bit_data;
>> + u32 xiiw_data_len;
>> +};
>> +
>> +#endif /* _XRT_ICAP_H_ */
>> diff --git a/drivers/fpga/xrt/lib/xleaf/icap.c b/drivers/fpga/xrt/lib/xleaf/icap.c
>> new file mode 100644
>> index 000000000000..13db2b759138
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/xleaf/icap.c
>> @@ -0,0 +1,344 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Xilinx Alveo FPGA ICAP Driver
>> + *
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Lizhi Hou<[email protected]>
>> + * Sonal Santan <[email protected]>
>> + * Max Zhen <[email protected]>
>> + */
>> +
>> +#include <linux/mod_devicetable.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/delay.h>
>> +#include <linux/device.h>
>> +#include <linux/regmap.h>
>> +#include <linux/io.h>
>> +#include "metadata.h"
>> +#include "xleaf.h"
>> +#include "xleaf/icap.h"
>> +#include "xclbin-helper.h"
>> +
>> +#define XRT_ICAP "xrt_icap"
>> +
>> +#define ICAP_ERR(icap, fmt, arg...) \
>> + xrt_err((icap)->pdev, fmt "\n", ##arg)
>> +#define ICAP_WARN(icap, fmt, arg...) \
>> + xrt_warn((icap)->pdev, fmt "\n", ##arg)
>> +#define ICAP_INFO(icap, fmt, arg...) \
>> + xrt_info((icap)->pdev, fmt "\n", ##arg)
>> +#define ICAP_DBG(icap, fmt, arg...) \
>> + xrt_dbg((icap)->pdev, fmt "\n", ##arg)
>> +
>> +/*
>> + * AXI-HWICAP IP register layout. Please see
>> + * https://www.xilinx.com/support/documentation/ip_documentation/axi_hwicap/v3_0/pg134-axi-hwicap.pdf
> URL works, looks good.
>> + */
>> +#define ICAP_REG_GIER 0x1C
>> +#define ICAP_REG_ISR 0x20
>> +#define ICAP_REG_IER 0x28
>> +#define ICAP_REG_WF 0x100
>> +#define ICAP_REG_RF 0x104
>> +#define ICAP_REG_SZ 0x108
>> +#define ICAP_REG_CR 0x10C
>> +#define ICAP_REG_SR 0x110
>> +#define ICAP_REG_WFV 0x114
>> +#define ICAP_REG_RFO 0x118
>> +#define ICAP_REG_ASR 0x11C
>> +
>> +#define ICAP_STATUS_EOS 0x4
>> +#define ICAP_STATUS_DONE 0x1
>> +
>> +/*
>> + * Canned command sequence to obtain IDCODE of the FPGA
>> + */
>> +static const u32 idcode_stream[] = {
>> + /* dummy word */
>> + cpu_to_be32(0xffffffff),
>> + /* sync word */
>> + cpu_to_be32(0xaa995566),
>> + /* NOP word */
>> + cpu_to_be32(0x20000000),
>> + /* NOP word */
>> + cpu_to_be32(0x20000000),
>> + /* ID code */
>> + cpu_to_be32(0x28018001),
>> + /* NOP word */
>> + cpu_to_be32(0x20000000),
>> + /* NOP word */
>> + cpu_to_be32(0x20000000),
>> +};
>> +
>> +static const struct regmap_config icap_regmap_config = {
> ok
>> + .reg_bits = 32,
>> + .val_bits = 32,
>> + .reg_stride = 4,
>> + .max_register = 0x1000,
>> +};
>> +
>> +struct icap {
>> + struct platform_device *pdev;
>> + struct regmap *regmap;
>> + struct mutex icap_lock; /* icap dev lock */
>> +
> whitespace, remove extra nl
Sure.
Thanks,
Lizhi
>> + u32 idcode;
>> +};
>> +
>> +static int wait_for_done(const struct icap *icap)
>> +{
>> + int i = 0;
>> + int ret;
>> + u32 w;
>> +
>> + for (i = 0; i < 10; i++) {
>> + /*
>> + * it requires a few microseconds for ICAP to process incoming data.
>> + * Polling every 5us for 10 times would be good enough.
> ok
>> + */
>> + udelay(5);
>> + ret = regmap_read(icap->regmap, ICAP_REG_SR, &w);
>> + if (ret)
>> + return ret;
>> + ICAP_INFO(icap, "XHWICAP_SR: %x", w);
>> + if (w & (ICAP_STATUS_EOS | ICAP_STATUS_DONE))
> ok
>> + return 0;
>> + }
>> +
>> + ICAP_ERR(icap, "bitstream download timeout");
>> + return -ETIMEDOUT;
>> +}
>> +
>> +static int icap_write(const struct icap *icap, const u32 *word_buf, int size)
>> +{
>> + u32 value = 0;
>> + int ret;
>> + int i;
>> +
>> + for (i = 0; i < size; i++) {
>> + value = be32_to_cpu(word_buf[i]);
>> + ret = regmap_write(icap->regmap, ICAP_REG_WF, value);
>> + if (ret)
>> + return ret;
>> + }
>> +
>> + ret = regmap_write(icap->regmap, ICAP_REG_CR, 0x1);
>> + if (ret)
>> + return ret;
>> +
>> + for (i = 0; i < 20; i++) {
>> + ret = regmap_read(icap->regmap, ICAP_REG_CR, &value);
>> + if (ret)
>> + return ret;
>> +
>> + if ((value & 0x1) == 0)
>> + return 0;
>> + ndelay(50);
>> + }
>> +
>> + ICAP_ERR(icap, "writing %d dwords timeout", size);
>> + return -EIO;
>> +}
>> +
>> +static int bitstream_helper(struct icap *icap, const u32 *word_buffer,
>> + u32 word_count)
>> +{
>> + int wr_fifo_vacancy = 0;
>> + u32 word_written = 0;
>> + u32 remain_word;
>> + int err = 0;
>> +
>> + WARN_ON(!mutex_is_locked(&icap->icap_lock));
>> + for (remain_word = word_count; remain_word > 0;
>> + remain_word -= word_written, word_buffer += word_written) {
>> + err = regmap_read(icap->regmap, ICAP_REG_WFV, &wr_fifo_vacancy);
>> + if (err) {
>> + ICAP_ERR(icap, "read wr_fifo_vacancy failed %d", err);
>> + break;
>> + }
>> + if (wr_fifo_vacancy <= 0) {
>> + ICAP_ERR(icap, "no vacancy: %d", wr_fifo_vacancy);
>> + err = -EIO;
>> + break;
>> + }
>> + word_written = (wr_fifo_vacancy < remain_word) ?
>> + wr_fifo_vacancy : remain_word;
>> + if (icap_write(icap, word_buffer, word_written) != 0) {
>> + ICAP_ERR(icap, "write failed remain %d, written %d",
>> + remain_word, word_written);
>> + err = -EIO;
>> + break;
>> + }
>> + }
>> +
>> + return err;
>> +}
>> +
>> +static int icap_download(struct icap *icap, const char *buffer,
>> + unsigned long length)
>> +{
>> + u32 num_chars_read = XCLBIN_HWICAP_BITFILE_BUF_SZ;
>> + u32 byte_read;
>> + int err = 0;
>> +
>> + if (length % sizeof(u32)) {
> ok
>> + ICAP_ERR(icap, "invalid bitstream length %ld", length);
>> + return -EINVAL;
>> + }
>> +
>> + mutex_lock(&icap->icap_lock);
>> + for (byte_read = 0; byte_read < length; byte_read += num_chars_read) {
>> + num_chars_read = length - byte_read;
>> + if (num_chars_read > XCLBIN_HWICAP_BITFILE_BUF_SZ)
>> + num_chars_read = XCLBIN_HWICAP_BITFILE_BUF_SZ;
>> +
>> + err = bitstream_helper(icap, (u32 *)buffer, num_chars_read / sizeof(u32));
>> + if (err)
>> + goto failed;
>> + buffer += num_chars_read;
>> + }
>> +
>> + /* No cleanup is needed if writing to the ICAP times out. */
>> + err = wait_for_done(icap);
>> +
>> +failed:
>> + mutex_unlock(&icap->icap_lock);
>> +
>> + return err;
>> +}
>> +
>> +/*
>> + * Discover the FPGA IDCODE using special sequence of canned commands
>> + */
>> +static int icap_probe_chip(struct icap *icap)
>> +{
>> + int err;
>> + u32 val = 0;
>
> ok, thanks for demagic-ing this function.
>
> Looks good overall, only a few minor things.
>
> Reviewed-by: Tom Rix <[email protected]>
>
>> +
>> + regmap_read(icap->regmap, ICAP_REG_SR, &val);
>> + if (val != ICAP_STATUS_DONE)
>> + return -ENODEV;
>> + /* Read ICAP FIFO vacancy */
>> + regmap_read(icap->regmap, ICAP_REG_WFV, &val);
>> + if (val < 8)
>> + return -ENODEV;
>> + err = icap_write(icap, idcode_stream, ARRAY_SIZE(idcode_stream));
>> + if (err)
>> + return err;
>> + err = wait_for_done(icap);
>> + if (err)
>> + return err;
>> +
>> + /* Tell config engine how many words to transfer to read FIFO */
>> + regmap_write(icap->regmap, ICAP_REG_SZ, 0x1);
>> + /* Switch the ICAP to read mode */
>> + regmap_write(icap->regmap, ICAP_REG_CR, 0x2);
>> + err = wait_for_done(icap);
>> + if (err)
>> + return err;
>> +
>> + /* Read IDCODE from Read FIFO */
>> + regmap_read(icap->regmap, ICAP_REG_RF, &icap->idcode);
>> + return 0;
>> +}
>> +
>> +static int
>> +xrt_icap_leaf_call(struct platform_device *pdev, u32 cmd, void *arg)
>> +{
>> + struct xrt_icap_wr *wr_arg = arg;
>> + struct icap *icap;
>> + int ret = 0;
>> +
>> + icap = platform_get_drvdata(pdev);
>> +
>> + switch (cmd) {
>> + case XRT_XLEAF_EVENT:
>> + /* Does not handle any event. */
>> + break;
>> + case XRT_ICAP_WRITE:
>> + ret = icap_download(icap, wr_arg->xiiw_bit_data,
>> + wr_arg->xiiw_data_len);
>> + break;
>> + case XRT_ICAP_GET_IDCODE:
>> + *(u32 *)arg = icap->idcode;
>> + break;
>> + default:
>> + ICAP_ERR(icap, "unknown command %d", cmd);
>> + return -EINVAL;
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +static int xrt_icap_probe(struct platform_device *pdev)
>> +{
>> + void __iomem *base = NULL;
>> + struct resource *res;
>> + struct icap *icap;
>> + int result = 0;
>> +
>> + icap = devm_kzalloc(&pdev->dev, sizeof(*icap), GFP_KERNEL);
>> + if (!icap)
>> + return -ENOMEM;
>> +
>> + icap->pdev = pdev;
>> + platform_set_drvdata(pdev, icap);
>> + mutex_init(&icap->icap_lock);
>> +
>> + xrt_info(pdev, "probing");
>> + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
>> + if (!res)
>> + return -EINVAL;
>> +
>> + base = devm_ioremap_resource(&pdev->dev, res);
>> + if (IS_ERR(base))
>> + return PTR_ERR(base);
>> +
>> + icap->regmap = devm_regmap_init_mmio(&pdev->dev, base, &icap_regmap_config);
>> + if (IS_ERR(icap->regmap)) {
>> + ICAP_ERR(icap, "init mmio failed");
>> + return PTR_ERR(icap->regmap);
>> + }
>> + /* Disable ICAP interrupts */
>> + regmap_write(icap->regmap, ICAP_REG_GIER, 0);
>> +
>> + result = icap_probe_chip(icap);
>> + if (result)
>> + xrt_err(pdev, "Failed to probe FPGA");
>> + else
>> + xrt_info(pdev, "Discovered FPGA IDCODE %x", icap->idcode);
>> + return result;
>> +}
>> +
>> +static struct xrt_subdev_endpoints xrt_icap_endpoints[] = {
>> + {
>> + .xse_names = (struct xrt_subdev_ep_names[]) {
>> + { .ep_name = XRT_MD_NODE_FPGA_CONFIG },
>> + { NULL },
>> + },
>> + .xse_min_ep = 1,
>> + },
>> + { 0 },
>> +};
>> +
>> +static struct xrt_subdev_drvdata xrt_icap_data = {
>> + .xsd_dev_ops = {
>> + .xsd_leaf_call = xrt_icap_leaf_call,
>> + },
>> +};
>> +
>> +static const struct platform_device_id xrt_icap_table[] = {
>> + { XRT_ICAP, (kernel_ulong_t)&xrt_icap_data },
>> + { },
>> +};
>> +
>> +static struct platform_driver xrt_icap_driver = {
>> + .driver = {
>> + .name = XRT_ICAP,
>> + },
>> + .probe = xrt_icap_probe,
>> + .id_table = xrt_icap_table,
>> +};
>> +
>> +XRT_LEAF_INIT_FINI_FUNC(XRT_SUBDEV_ICAP, icap);
>
Hi Tom,
On 04/06/2021 02:00 PM, Tom Rix wrote:
>
> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>> Update fpga Kconfig/Makefile and add Kconfig/Makefile for new drivers.
>>
>> Signed-off-by: Sonal Santan <[email protected]>
>> Signed-off-by: Max Zhen <[email protected]>
>> Signed-off-by: Lizhi Hou <[email protected]>
>> ---
>> MAINTAINERS | 11 +++++++++++
>> drivers/Makefile | 1 +
>> drivers/fpga/Kconfig | 2 ++
>> drivers/fpga/Makefile | 5 +++++
>> drivers/fpga/xrt/Kconfig | 8 ++++++++
>> drivers/fpga/xrt/lib/Kconfig | 17 +++++++++++++++++
>> drivers/fpga/xrt/lib/Makefile | 30 ++++++++++++++++++++++++++++++
>> drivers/fpga/xrt/metadata/Kconfig | 12 ++++++++++++
>> drivers/fpga/xrt/metadata/Makefile | 16 ++++++++++++++++
>> drivers/fpga/xrt/mgmt/Kconfig | 15 +++++++++++++++
>> drivers/fpga/xrt/mgmt/Makefile | 19 +++++++++++++++++++
>> 11 files changed, 136 insertions(+)
>> create mode 100644 drivers/fpga/xrt/Kconfig
>> create mode 100644 drivers/fpga/xrt/lib/Kconfig
>> create mode 100644 drivers/fpga/xrt/lib/Makefile
>> create mode 100644 drivers/fpga/xrt/metadata/Kconfig
>> create mode 100644 drivers/fpga/xrt/metadata/Makefile
>> create mode 100644 drivers/fpga/xrt/mgmt/Kconfig
>> create mode 100644 drivers/fpga/xrt/mgmt/Makefile
>>
>> diff --git a/MAINTAINERS b/MAINTAINERS
>> index aa84121c5611..44ccc52987ac 100644
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -7009,6 +7009,17 @@ F: Documentation/fpga/
>> F: drivers/fpga/
>> F: include/linux/fpga/
>>
>> +FPGA XRT DRIVERS
>> +M: Lizhi Hou <[email protected]>
>> +R: Max Zhen <[email protected]>
>> +R: Sonal Santan <[email protected]>
>> +L: [email protected]
>> +S: Maintained
> Should this be 'Supported' ?
Sure.
>> +W: https://github.com/Xilinx/XRT
>> +F: Documentation/fpga/xrt.rst
>> +F: drivers/fpga/xrt/
>> +F: include/uapi/linux/xrt/
>> +
>> FPU EMULATOR
>> M: Bill Metzenthen <[email protected]>
>> S: Maintained
>> diff --git a/drivers/Makefile b/drivers/Makefile
>> index 6fba7daba591..dbb3b727fc7a 100644
>> --- a/drivers/Makefile
>> +++ b/drivers/Makefile
>> @@ -179,6 +179,7 @@ obj-$(CONFIG_STM) += hwtracing/stm/
>> obj-$(CONFIG_ANDROID) += android/
>> obj-$(CONFIG_NVMEM) += nvmem/
>> obj-$(CONFIG_FPGA) += fpga/
>> +obj-$(CONFIG_FPGA_XRT_METADATA) += fpga/
> CONFIG_FPGA_XRT_METADATA is only defined when CONFIG_FPGA is, so I don't
> think this line is needed.
CONFIG_FPGA could be 'm'.
And as we discussed before, CONFIG_FPGA_XRT_METADATA extends fdt_* and can
only be built into the kernel ('y'). Maybe it cannot rely on CONFIG_FPGA?
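A sketch of the situation being discussed (illustrative only; the exact
descent behavior should be checked against the kbuild documentation): with
CONFIG_FPGA=m, the fpga/ directory is only visited for the module build, so
objects selected by the bool (y-only) FPGA_XRT_METADATA would not be linked
into vmlinux without a second, built-in descent.

```make
# drivers/Makefile (sketch): the first line descends into fpga/ only for
# the module build when CONFIG_FPGA=m; the second forces a built-in
# descent so the y-only metadata objects reach vmlinux. kbuild copes with
# a directory being listed for both obj-y and obj-m.
obj-$(CONFIG_FPGA)              += fpga/
obj-$(CONFIG_FPGA_XRT_METADATA) += fpga/
```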
>> obj-$(CONFIG_FSI) += fsi/
>> obj-$(CONFIG_TEE) += tee/
>> obj-$(CONFIG_MULTIPLEXER) += mux/
>> diff --git a/drivers/fpga/Kconfig b/drivers/fpga/Kconfig
>> index 5ff9438b7b46..01410ff000b9 100644
>> --- a/drivers/fpga/Kconfig
>> +++ b/drivers/fpga/Kconfig
>> @@ -227,4 +227,6 @@ config FPGA_MGR_ZYNQMP_FPGA
>> to configure the programmable logic(PL) through PS
>> on ZynqMP SoC.
>>
>> +source "drivers/fpga/xrt/Kconfig"
>> +
>> endif # FPGA
> This is where it is defined..
>> diff --git a/drivers/fpga/Makefile b/drivers/fpga/Makefile
>> index 18dc9885883a..4b887bf95cb3 100644
>> --- a/drivers/fpga/Makefile
>> +++ b/drivers/fpga/Makefile
>> @@ -48,3 +48,8 @@ obj-$(CONFIG_FPGA_DFL_NIOS_INTEL_PAC_N3000) += dfl-n3000-nios.o
>>
>> # Drivers for FPGAs which implement DFL
>> obj-$(CONFIG_FPGA_DFL_PCI) += dfl-pci.o
>> +
>> +# XRT drivers for Alveo
>> +obj-$(CONFIG_FPGA_XRT_METADATA) += xrt/metadata/
>> +obj-$(CONFIG_FPGA_XRT_LIB) += xrt/lib/
>> +obj-$(CONFIG_FPGA_XRT_XMGMT) += xrt/mgmt/
>> diff --git a/drivers/fpga/xrt/Kconfig b/drivers/fpga/xrt/Kconfig
>> new file mode 100644
>> index 000000000000..0e2c59589ddd
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/Kconfig
>> @@ -0,0 +1,8 @@
>> +# SPDX-License-Identifier: GPL-2.0-only
>> +#
>> +# Xilinx Alveo FPGA device configuration
>> +#
>> +
>> +source "drivers/fpga/xrt/metadata/Kconfig"
>> +source "drivers/fpga/xrt/lib/Kconfig"
>> +source "drivers/fpga/xrt/mgmt/Kconfig"
>> diff --git a/drivers/fpga/xrt/lib/Kconfig b/drivers/fpga/xrt/lib/Kconfig
>> new file mode 100644
>> index 000000000000..935369fad570
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/Kconfig
>> @@ -0,0 +1,17 @@
>> +# SPDX-License-Identifier: GPL-2.0-only
>> +#
>> +# XRT Alveo FPGA device configuration
>> +#
>> +
>> +config FPGA_XRT_LIB
>> + tristate "XRT Alveo Driver Library"
>> + depends on HWMON && PCI && HAS_IOMEM
>> + select FPGA_XRT_METADATA
>> + select REGMAP_MMIO
>> + help
>> + Select this option to enable Xilinx XRT Alveo driver library. This
>> + library is core infrastructure of XRT Alveo FPGA drivers which
>> + provides functions for working with device nodes, iteration and
>> + lookup of platform devices, common interfaces for platform devices,
>> + plumbing of function call and ioctls between platform devices and
>> + parent partitions.
>> diff --git a/drivers/fpga/xrt/lib/Makefile
>> b/drivers/fpga/xrt/lib/Makefile
>> new file mode 100644
>> index 000000000000..58563416efbf
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/Makefile
>> @@ -0,0 +1,30 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +#
>> +# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
>> +#
>> +# Authors: [email protected]
>> +#
>> +
>> +FULL_XRT_PATH=$(srctree)/$(src)/..
>> +FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
>> +
>> +obj-$(CONFIG_FPGA_XRT_LIB) += xrt-lib.o
>> +
>> +xrt-lib-objs := \
>> + lib-drv.o \
>> + xroot.o \
>> + xclbin.o \
>> + subdev.o \
>> + cdev.o \
>> + group.o \
>> + xleaf/vsec.o \
>> + xleaf/axigate.o \
>> + xleaf/devctl.o \
>> + xleaf/icap.o \
>> + xleaf/clock.o \
>> + xleaf/clkfreq.o \
>> + xleaf/ucs.o \
>> + xleaf/ddr_calibration.o
>> +
>> +ccflags-y := -I$(FULL_XRT_PATH)/include \
>> + -I$(FULL_DTC_PATH)
>> diff --git a/drivers/fpga/xrt/metadata/Kconfig
>> b/drivers/fpga/xrt/metadata/Kconfig
>> new file mode 100644
>> index 000000000000..129adda47e94
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/metadata/Kconfig
>> @@ -0,0 +1,12 @@
>> +# SPDX-License-Identifier: GPL-2.0-only
>> +#
>> +# XRT Alveo FPGA device configuration
>> +#
>> +
>> +config FPGA_XRT_METADATA
>> + bool "XRT Alveo Driver Metadata Parser"
>> + select LIBFDT
>> + help
>> + This option provides helper functions to parse Xilinx Alveo FPGA
>> + firmware metadata. The metadata is in device tree format and the
>> + XRT driver uses it to discover the HW subsystems behind PCIe BAR.
>> diff --git a/drivers/fpga/xrt/metadata/Makefile
>> b/drivers/fpga/xrt/metadata/Makefile
>> new file mode 100644
>> index 000000000000..14f65ef1595c
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/metadata/Makefile
>> @@ -0,0 +1,16 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +#
>> +# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
>> +#
>> +# Authors: [email protected]
>> +#
>> +
>> +FULL_XRT_PATH=$(srctree)/$(src)/..
>> +FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
>> +
>> +obj-$(CONFIG_FPGA_XRT_METADATA) += xrt-md.o
>> +
>> +xrt-md-objs := metadata.o
>> +
>> +ccflags-y := -I$(FULL_XRT_PATH)/include \
>> + -I$(FULL_DTC_PATH)
>> diff --git a/drivers/fpga/xrt/mgmt/Kconfig
>> b/drivers/fpga/xrt/mgmt/Kconfig
>> new file mode 100644
>> index 000000000000..31e9e19fffb8
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/mgmt/Kconfig
>> @@ -0,0 +1,15 @@
>> +# SPDX-License-Identifier: GPL-2.0-only
>> +#
>> +# Xilinx XRT FPGA device configuration
>> +#
>> +
>> +config FPGA_XRT_XMGMT
>> + tristate "Xilinx Alveo Management Driver"
>> + depends on FPGA_XRT_LIB
>> + select FPGA_XRT_METADATA
>
> If the XRT driver depends on these other two configs and it does not
> make sense to build these two seperately, could you remove these configs
> and just use something like FPGA_XRT ?
This is for a similar reason as above. CONFIG_FPGA_XRT_METADATA can only be
built in, while FPGA_XRT_LIB can be built as a module, so they might not be
built together.
Thanks,
Lizhi
>
> Tom
>
>> + select FPGA_BRIDGE
>> + select FPGA_REGION
>> + help
>> + Select this option to enable XRT PCIe driver for Xilinx Alveo FPGA.
>> + This driver provides interfaces for userspace application to access
>> + Alveo FPGA device.
>> diff --git a/drivers/fpga/xrt/mgmt/Makefile
>> b/drivers/fpga/xrt/mgmt/Makefile
>> new file mode 100644
>> index 000000000000..acabd811f3fd
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/mgmt/Makefile
>> @@ -0,0 +1,19 @@
>> +# SPDX-License-Identifier: GPL-2.0
>> +#
>> +# Copyright (C) 2020-2021 Xilinx, Inc. All rights reserved.
>> +#
>> +# Authors: [email protected]
>> +#
>> +
>> +FULL_XRT_PATH=$(srctree)/$(src)/..
>> +FULL_DTC_PATH=$(srctree)/scripts/dtc/libfdt
>> +
>> +obj-$(CONFIG_FPGA_XRT_XMGMT) += xrt-mgmt.o
>> +
>> +xrt-mgmt-objs := root.o \
>> + main.o \
>> + fmgr-drv.o \
>> + main-region.o
>> +
>> +ccflags-y := -I$(FULL_XRT_PATH)/include \
>> + -I$(FULL_DTC_PATH)
>
Hi Tom,
On 04/01/2021 07:43 AM, Tom Rix wrote:
> small allocs should use kzalloc.
>
> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>> fpga-mgr and region implementation for xclbin download which will be
>> called from main platform driver
>>
>> Signed-off-by: Sonal Santan <[email protected]>
>> Signed-off-by: Max Zhen <[email protected]>
>> Signed-off-by: Lizhi Hou <[email protected]>
>> ---
>> drivers/fpga/xrt/mgmt/fmgr-drv.c | 191 +++++++++++
>> drivers/fpga/xrt/mgmt/fmgr.h | 19 ++
>> drivers/fpga/xrt/mgmt/main-region.c | 483 ++++++++++++++++++++++++++++
>> 3 files changed, 693 insertions(+)
>> create mode 100644 drivers/fpga/xrt/mgmt/fmgr-drv.c
>> create mode 100644 drivers/fpga/xrt/mgmt/fmgr.h
> a better file name would be xrt-mgr.*
Will change file name to xrt-mgr.*
>> create mode 100644 drivers/fpga/xrt/mgmt/main-region.c
>>
>> diff --git a/drivers/fpga/xrt/mgmt/fmgr-drv.c b/drivers/fpga/xrt/mgmt/fmgr-drv.c
>> new file mode 100644
>> index 000000000000..12e1cc788ad9
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/mgmt/fmgr-drv.c
>> @@ -0,0 +1,191 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * FPGA Manager Support for Xilinx Alveo Management Function Driver
> Since there is only one fpga mgr for xrt, this could be shortened to
>
> * FPGA Manager Support for Xilinx Alevo
Sure.
>
>> + *
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors: [email protected]
>> + */
>> +
>> +#include <linux/cred.h>
>> +#include <linux/efi.h>
>> +#include <linux/fpga/fpga-mgr.h>
>> +#include <linux/platform_device.h>
>> +#include <linux/module.h>
>> +#include <linux/vmalloc.h>
>> +
>> +#include "xclbin-helper.h"
>> +#include "xleaf.h"
>> +#include "fmgr.h"
>> +#include "xleaf/axigate.h"
>> +#include "xleaf/icap.h"
>> +#include "xmgnt.h"
>> +
>> +struct xfpga_class {
>> + const struct platform_device *pdev;
>> + char name[64];
>> +};
>> +
>> +/*
>> + * xclbin download plumbing -- find the download subsystem, ICAP and
>> + * pass the xclbin for heavy lifting
>> + */
>> +static int xmgmt_download_bitstream(struct platform_device *pdev,
>> + const struct axlf *xclbin)
>> +
>> +{
>> + struct xclbin_bit_head_info bit_header = { 0 };
>> + struct platform_device *icap_leaf = NULL;
>> + struct xrt_icap_wr arg;
>> + char *bitstream = NULL;
>> + u64 bit_len;
>> + int ret;
>> +
>> + ret = xrt_xclbin_get_section(DEV(pdev), xclbin, BITSTREAM, (void **)&bitstream, &bit_len);
>> + if (ret) {
>> + xrt_err(pdev, "bitstream not found");
>> + return -ENOENT;
>> + }
>> + ret = xrt_xclbin_parse_bitstream_header(DEV(pdev), bitstream,
>> + XCLBIN_HWICAP_BITFILE_BUF_SZ,
>> + &bit_header);
>> + if (ret) {
>> + ret = -EINVAL;
>> + xrt_err(pdev, "invalid bitstream header");
>> + goto fail;
>> + }
>> + if (bit_header.header_length + bit_header.bitstream_length > bit_len) {
>> + ret = -EINVAL;
>> + xrt_err(pdev, "invalid bitstream length. header %d, bitstream %d, section len %lld",
>> + bit_header.header_length, bit_header.bitstream_length, bit_len);
>> + goto fail;
>> + }
>> +
>> + icap_leaf = xleaf_get_leaf_by_id(pdev, XRT_SUBDEV_ICAP, PLATFORM_DEVID_NONE);
>> + if (!icap_leaf) {
>> + ret = -ENODEV;
>> + xrt_err(pdev, "icap does not exist");
>> + goto fail;
>> + }
>> + arg.xiiw_bit_data = bitstream + bit_header.header_length;
>> + arg.xiiw_data_len = bit_header.bitstream_length;
>> + ret = xleaf_call(icap_leaf, XRT_ICAP_WRITE, &arg);
>> + if (ret) {
>> + xrt_err(pdev, "write bitstream failed, ret = %d", ret);
>> + xleaf_put_leaf(pdev, icap_leaf);
>> + goto fail;
>> + }
> ok, free_header removed
>> +
>> + xleaf_put_leaf(pdev, icap_leaf);
>> + vfree(bitstream);
>> +
>> + return 0;
>> +
>> +fail:
>> + vfree(bitstream);
>> +
>> + return ret;
>> +}
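As a side note, the length sanity check above can be sketched in plain
userspace C (illustrative names only, not the driver API). The driver sums
two 32-bit lengths from the parsed bitstream header and compares against
the 64-bit section length, so promoting to 64-bit keeps the addition from
wrapping:

```c
#include <stdint.h>

/*
 * Sketch of the check in xmgmt_download_bitstream(): the parsed header
 * plus the bitstream payload must fit inside the section actually read
 * from the xclbin. Widths mirror the driver: two 32-bit lengths summed
 * in 64-bit arithmetic against a 64-bit section length.
 */
static int bit_len_valid(uint32_t header_len, uint32_t bitstream_len,
			 uint64_t section_len)
{
	return (uint64_t)header_len + bitstream_len <= section_len;
}
```

For instance, `bit_len_valid(128, 1024, 1152)` accepts a section that is
exactly filled, while `bit_len_valid(128, 2048, 1152)` rejects a bitstream
that claims more bytes than the section holds.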
>> +
>> +/*
>> + * There is no HW prep work we do here since we need the full
>> + * xclbin for its sanity check.
>> + */
>> +static int xmgmt_pr_write_init(struct fpga_manager *mgr,
>> + struct fpga_image_info *info,
>> + const char *buf, size_t count)
>> +{
>> + const struct axlf *bin = (const struct axlf *)buf;
>> + struct xfpga_class *obj = mgr->priv;
>> +
>> + if (!(info->flags & FPGA_MGR_PARTIAL_RECONFIG)) {
>> + xrt_info(obj->pdev, "%s only supports partial reconfiguration\n", obj->name);
>> + return -EINVAL;
>> + }
>> +
>> + if (count < sizeof(struct axlf))
>> + return -EINVAL;
>> +
>> + if (count > bin->header.length)
>> + return -EINVAL;
>> +
>> + xrt_info(obj->pdev, "Prepare download of xclbin %pUb of length %lld B",
>> + &bin->header.uuid, bin->header.length);
>> +
>> + return 0;
>> +}
>> +
>> +/*
>> + * The implementation requires the full xclbin image before we can start
>> + * programming the hardware via ICAP subsystem. The full image is required
> ok
>> + * for checking the validity of xclbin and walking the sections to
>> + * discover the bitstream.
>> + */
>> +static int xmgmt_pr_write(struct fpga_manager *mgr,
>> + const char *buf, size_t count)
>> +{
>> + const struct axlf *bin = (const struct axlf *)buf;
>> + struct xfpga_class *obj = mgr->priv;
>> +
>> + if (bin->header.length != count)
>> + return -EINVAL;
>> +
>> + return xmgmt_download_bitstream((void *)obj->pdev, bin);
>> +}
>> +
>> +static int xmgmt_pr_write_complete(struct fpga_manager *mgr,
>> + struct fpga_image_info *info)
>> +{
>> + const struct axlf *bin = (const struct axlf *)info->buf;
>> + struct xfpga_class *obj = mgr->priv;
>> +
>> + xrt_info(obj->pdev, "Finished download of xclbin %pUb",
>> + &bin->header.uuid);
>> + return 0;
>> +}
>> +
>> +static enum fpga_mgr_states xmgmt_pr_state(struct fpga_manager *mgr)
>> +{
>> + return FPGA_MGR_STATE_UNKNOWN;
> ok as-is
>> +}
>> +
>> +static const struct fpga_manager_ops xmgmt_pr_ops = {
>> + .initial_header_size = sizeof(struct axlf),
>> + .write_init = xmgmt_pr_write_init,
>> + .write = xmgmt_pr_write,
>> + .write_complete = xmgmt_pr_write_complete,
>> + .state = xmgmt_pr_state,
>> +};
>> +
>> +struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev)
>> +{
>> + struct xfpga_class *obj = devm_kzalloc(DEV(pdev), sizeof(struct xfpga_class),
>> + GFP_KERNEL);
>> + struct fpga_manager *fmgr = NULL;
>> + int ret = 0;
>> +
>> + if (!obj)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + snprintf(obj->name, sizeof(obj->name), "Xilinx Alveo FPGA Manager");
>> + obj->pdev = pdev;
>> + fmgr = fpga_mgr_create(&pdev->dev,
>> + obj->name,
>> + &xmgmt_pr_ops,
>> + obj);
>> + if (!fmgr)
>> + return ERR_PTR(-ENOMEM);
>> +
>> + ret = fpga_mgr_register(fmgr);
>> + if (ret) {
>> + fpga_mgr_free(fmgr);
>> + return ERR_PTR(ret);
>> + }
>> + return fmgr;
>> +}
>> +
>> +int xmgmt_fmgr_remove(struct fpga_manager *fmgr)
>> +{
>> + fpga_mgr_unregister(fmgr);
>> + return 0;
>> +}
>> diff --git a/drivers/fpga/xrt/mgmt/fmgr.h b/drivers/fpga/xrt/mgmt/fmgr.h
>> new file mode 100644
>> index 000000000000..ff1fc5f870f8
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/mgmt/fmgr.h
>> @@ -0,0 +1,19 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors: [email protected]
>> + */
>> +
>> +#ifndef _XMGMT_FMGR_H_
>> +#define _XMGMT_FMGR_H_
>> +
>> +#include <linux/fpga/fpga-mgr.h>
>> +#include <linux/mutex.h>
> why do mutex.h and xclbin.h need to be included ?
>
> consider removing them.
Sure.
>
>> +
>> +#include <linux/xrt/xclbin.h>
> ok enum removed.
>> +
>> +struct fpga_manager *xmgmt_fmgr_probe(struct platform_device *pdev);
>> +int xmgmt_fmgr_remove(struct fpga_manager *fmgr);
>> +
>> +#endif
>> diff --git a/drivers/fpga/xrt/mgmt/main-region.c b/drivers/fpga/xrt/mgmt/main-region.c
>> new file mode 100644
>> index 000000000000..96a674618e86
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/mgmt/main-region.c
>> @@ -0,0 +1,483 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * FPGA Region Support for Xilinx Alveo Management Function Driver
>> + *
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + * Bulk of the code borrowed from XRT mgmt driver file, fmgr.c
> review this line, there is not fmgr.c
Will remove this line.
>> + *
>> + * Authors: [email protected]
>> + */
>> +
>> +#include <linux/uuid.h>
>> +#include <linux/fpga/fpga-bridge.h>
>> +#include <linux/fpga/fpga-region.h>
>> +#include "metadata.h"
>> +#include "xleaf.h"
>> +#include "xleaf/axigate.h"
>> +#include "xclbin-helper.h"
>> +#include "xmgnt.h"
>> +
>> +struct xmgmt_bridge {
>> + struct platform_device *pdev;
>> + const char *bridge_name;
> ok
>> +};
>> +
>> +struct xmgmt_region {
>> + struct platform_device *pdev;
>> + struct fpga_region *region;
>> + struct fpga_compat_id compat_id;
>> + uuid_t intf_uuid;
> interface_uuid
Sure.
>> + struct fpga_bridge *bridge;
>> + int group_instance;
>> + uuid_t dep_uuid;
> dep ? expand.
Will use 'depend_uuid'
>> + struct list_head list;
>> +};
>> +
>> +struct xmgmt_region_match_arg {
>> + struct platform_device *pdev;
>> + uuid_t *uuids;
>> + u32 uuid_num;
>> +};
>> +
>> +static int xmgmt_br_enable_set(struct fpga_bridge *bridge, bool enable)
>> +{
>> + struct xmgmt_bridge *br_data = (struct xmgmt_bridge *)bridge->priv;
>> + struct platform_device *axigate_leaf;
>> + int rc;
>> +
>> + axigate_leaf = xleaf_get_leaf_by_epname(br_data->pdev, br_data->bridge_name);
>> + if (!axigate_leaf) {
>> + xrt_err(br_data->pdev, "failed to get leaf %s",
>> + br_data->bridge_name);
>> + return -ENOENT;
>> + }
>> +
>> + if (enable)
>> + rc = xleaf_call(axigate_leaf, XRT_AXIGATE_OPEN, NULL);
>> + else
>> + rc = xleaf_call(axigate_leaf, XRT_AXIGATE_CLOSE, NULL);
>> +
>> + if (rc) {
>> + xrt_err(br_data->pdev, "failed to %s gate %s, rc %d",
>> + (enable ? "free" : "freeze"), br_data->bridge_name,
>> + rc);
>> + }
>> +
>> + xleaf_put_leaf(br_data->pdev, axigate_leaf);
>> +
>> + return rc;
>> +}
>> +
>> +const struct fpga_bridge_ops xmgmt_bridge_ops = {
>> + .enable_set = xmgmt_br_enable_set
>> +};
>> +
>> +static void xmgmt_destroy_bridge(struct fpga_bridge *br)
>> +{
>> + struct xmgmt_bridge *br_data = br->priv;
>> +
>> + if (!br_data)
>> + return;
>> +
>> + xrt_info(br_data->pdev, "destroy fpga bridge %s", br_data->bridge_name);
>> + fpga_bridge_unregister(br);
>> +
>> + devm_kfree(DEV(br_data->pdev), br_data);
>> +
>> + fpga_bridge_free(br);
>> +}
>> +
>> +static struct fpga_bridge *xmgmt_create_bridge(struct platform_device *pdev,
>> + char *dtb)
>> +{
>> + struct fpga_bridge *br = NULL;
>> + struct xmgmt_bridge *br_data;
>> + const char *gate;
>> + int rc;
>> +
>> + br_data = devm_kzalloc(DEV(pdev), sizeof(*br_data), GFP_KERNEL);
>> + if (!br_data)
>> + return NULL;
>> + br_data->pdev = pdev;
>> +
>> + br_data->bridge_name = XRT_MD_NODE_GATE_ULP;
>> + rc = xrt_md_find_endpoint(&pdev->dev, dtb, XRT_MD_NODE_GATE_ULP,
>> + NULL, &gate);
>> + if (rc) {
>> + br_data->bridge_name = XRT_MD_NODE_GATE_PLP;
>> + rc = xrt_md_find_endpoint(&pdev->dev, dtb, XRT_MD_NODE_GATE_PLP,
>> + NULL, &gate);
>> + }
>> + if (rc) {
>> + xrt_err(pdev, "failed to get axigate, rc %d", rc);
>> + goto failed;
>> + }
>> +
>> + br = fpga_bridge_create(DEV(pdev), br_data->bridge_name,
>> + &xmgmt_bridge_ops, br_data);
>> + if (!br) {
>> + xrt_err(pdev, "failed to create bridge");
>> + goto failed;
>> + }
>> +
>> + rc = fpga_bridge_register(br);
>> + if (rc) {
>> + xrt_err(pdev, "failed to register bridge, rc %d", rc);
>> + goto failed;
>> + }
>> +
>> + xrt_info(pdev, "created fpga bridge %s", br_data->bridge_name);
>> +
>> + return br;
>> +
>> +failed:
>> + if (br)
>> + fpga_bridge_free(br);
>> + if (br_data)
>> + devm_kfree(DEV(pdev), br_data);
>> +
>> + return NULL;
>> +}
>> +
>> +static void xmgmt_destroy_region(struct fpga_region *region)
> ok
>> +{
>> + struct xmgmt_region *r_data = region->priv;
>> +
>> + xrt_info(r_data->pdev, "destroy fpga region %llx.%llx",
>> + region->compat_id->id_l, region->compat_id->id_h);
> are the args ordered correctly ? I expected id_h to be first.
Will switch the order.
>> +
>> + fpga_region_unregister(region);
>> +
>> + if (r_data->group_instance > 0)
>> + xleaf_destroy_group(r_data->pdev, r_data->group_instance);
>> +
>> + if (r_data->bridge)
>> + xmgmt_destroy_bridge(r_data->bridge);
>> +
>> + if (r_data->region->info) {
>> + fpga_image_info_free(r_data->region->info);
>> + r_data->region->info = NULL;
>> + }
>> +
>> + fpga_region_free(region);
>> +
>> + devm_kfree(DEV(r_data->pdev), r_data);
>> +}
>> +
>> +static int xmgmt_region_match(struct device *dev, const void *data)
>> +{
>> + const struct xmgmt_region_match_arg *arg = data;
>> + const struct fpga_region *match_region;
> ok
>> + uuid_t compat_uuid;
>> + int i;
>> +
>> + if (dev->parent != &arg->pdev->dev)
>> + return false;
>> +
>> + match_region = to_fpga_region(dev);
>> + /*
>> + * The device tree provides both parent and child uuids for an
>> + * xclbin in one array. Here we try both uuids to see if it matches
>> + * with target region's compat_id. Strictly speaking we should
>> + * only match xclbin's parent uuid with target region's compat_id
>> + * but given the uuids by design are unique comparing with both
>> + * does not hurt.
>> + */
>> + import_uuid(&compat_uuid, (const char *)match_region->compat_id);
>> + for (i = 0; i < arg->uuid_num; i++) {
>> + if (uuid_equal(&compat_uuid, &arg->uuids[i]))
>> + return true;
>> + }
>> +
>> + return false;
>> +}
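The matching logic here boils down to comparing the region's 16-byte
compat_id against every interface UUID carried by the xclbin. A userspace
sketch (illustrative types and names, not the kernel uuid_t API):

```c
#include <string.h>

/*
 * Sketch of xmgmt_region_match(): any interface UUID in the xclbin that
 * equals the target region's compat_id means the xclbin can be loaded
 * into that region. UUIDs are unique by design, so comparing against
 * both parent and child UUIDs in one pass is safe.
 */
struct xuuid { unsigned char b[16]; };

static int uuid_eq(const struct xuuid *a, const struct xuuid *b)
{
	return memcmp(a->b, b->b, sizeof(a->b)) == 0;
}

static int region_matches(const struct xuuid *compat_id,
			  const struct xuuid *uuids, unsigned int n)
{
	for (unsigned int i = 0; i < n; i++)
		if (uuid_eq(compat_id, &uuids[i]))
			return 1;
	return 0;
}
```

Calling `region_matches()` with the compat_id present anywhere in the array
returns 1; with no hit (or `n` short of the matching entry) it returns 0.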
>> +
>> +static int xmgmt_region_match_base(struct device *dev, const void *data)
>> +{
>> + const struct xmgmt_region_match_arg *arg = data;
>> + const struct fpga_region *match_region;
>> + const struct xmgmt_region *r_data;
>> +
>> + if (dev->parent != &arg->pdev->dev)
>> + return false;
>> +
>> + match_region = to_fpga_region(dev);
>> + r_data = match_region->priv;
>> + if (uuid_is_null(&r_data->dep_uuid))
>> + return true;
>> +
>> + return false;
>> +}
>> +
>> +static int xmgmt_region_match_by_uuid(struct device *dev, const void *data)
> ok
>> +{
>> + const struct xmgmt_region_match_arg *arg = data;
>> + const struct fpga_region *match_region;
>> + const struct xmgmt_region *r_data;
>> +
>> + if (dev->parent != &arg->pdev->dev)
>> + return false;
>> +
>> + if (arg->uuid_num != 1)
>> + return false;
> ok
>> +
>> + match_region = to_fpga_region(dev);
>> + r_data = match_region->priv;
>> + if (uuid_equal(&r_data->dep_uuid, arg->uuids))
>> + return true;
>> +
>> + return false;
>> +}
>> +
>> +static void xmgmt_region_cleanup(struct fpga_region *region)
>> +{
>> + struct xmgmt_region *r_data = region->priv, *pdata, *temp;
>> + struct platform_device *pdev = r_data->pdev;
>> + struct xmgmt_region_match_arg arg = { 0 };
>> + struct fpga_region *match_region = NULL;
>> + struct device *start_dev = NULL;
>> + LIST_HEAD(free_list);
>> + uuid_t compat_uuid;
>> +
>> + list_add_tail(&r_data->list, &free_list);
>> + arg.pdev = pdev;
>> + arg.uuid_num = 1;
>> + arg.uuids = &compat_uuid;
>> +
>> + /* find all regions depending on this region */
>> + list_for_each_entry_safe(pdata, temp, &free_list, list) {
> ok
>> + import_uuid(arg.uuids, (const char *)pdata->region->compat_id);
>> + start_dev = NULL;
>> + while ((match_region = fpga_region_class_find(start_dev, &arg,
>> + xmgmt_region_match_by_uuid))) {
>> + pdata = match_region->priv;
>> + list_add_tail(&pdata->list, &free_list);
>> + start_dev = &match_region->dev;
>> + put_device(&match_region->dev);
>> + }
>> + }
>> +
>> + list_del(&r_data->list);
>> +
>> + list_for_each_entry_safe_reverse(pdata, temp, &free_list, list)
>> + xmgmt_destroy_region(pdata->region);
>> +
>> + if (r_data->group_instance > 0) {
>> + xleaf_destroy_group(pdev, r_data->group_instance);
>> + r_data->group_instance = -1;
>> + }
>> + if (r_data->region->info) {
>> + fpga_image_info_free(r_data->region->info);
>> + r_data->region->info = NULL;
>> + }
>> +}
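The traversal above, collecting every region that transitively depends on
the one being cleaned up and then tearing them down in reverse discovery
order, can be sketched in userspace C. Here a region is just an index and
`dep[i]` names the region that region i depends on (-1 for none); all names
are illustrative:

```c
/*
 * Sketch of the dependency walk in xmgmt_region_cleanup(): breadth-first
 * from the root, queue every region whose dep points at an already-queued
 * region, then destroy in reverse order so a dependent region is always
 * gone before the region it depends on.
 */
#define NREG 4

static int collect(const int *dep, int root, int *order)
{
	int n = 0;

	order[n++] = root;
	/* for each queued region, append its direct dependents */
	for (int q = 0; q < n; q++)
		for (int i = 0; i < NREG; i++)
			if (dep[i] == order[q])
				order[n++] = i;
	return n;
}
```

With `dep = { -1, 0, 1, -1 }` (region 1 depends on 0, region 2 on 1, region
3 standalone), collecting from root 0 yields the order 0, 1, 2; walking that
list backwards gives the teardown order 2, 1, 0, and region 3 is untouched.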
>> +
>> +void xmgmt_region_cleanup_all(struct platform_device *pdev)
>> +{
>> + struct xmgmt_region_match_arg arg = { 0 };
>> + struct fpga_region *base_region;
>> +
>> + arg.pdev = pdev;
>> +
>> + while ((base_region = fpga_region_class_find(NULL, &arg, xmgmt_region_match_base))) {
> ok
>> + put_device(&base_region->dev);
>> +
>> + xmgmt_region_cleanup(base_region);
>> + xmgmt_destroy_region(base_region);
>> + }
>> +}
>> +
>> +/*
>> + * Program a region with a xclbin image. Bring up the subdevs and the
> ok
>> + * group object to contain the subdevs.
>> + */
>> +static int xmgmt_region_program(struct fpga_region *region, const void *xclbin, char *dtb)
>> +{
>> + const struct axlf *xclbin_obj = xclbin;
>> + struct fpga_image_info *info;
>> + struct platform_device *pdev;
>> + struct xmgmt_region *r_data;
>> + int rc;
>> +
>> + r_data = region->priv;
>> + pdev = r_data->pdev;
>> +
>> + info = fpga_image_info_alloc(&pdev->dev);
>> + if (!info)
>> + return -ENOMEM;
>> +
>> + info->buf = xclbin;
>> + info->count = xclbin_obj->header.length;
>> + info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
>> + region->info = info;
>> + rc = fpga_region_program_fpga(region);
>> + if (rc) {
>> + xrt_err(pdev, "programming xclbin failed, rc %d", rc);
>> + return rc;
>> + }
>> +
>> + /* free bridges to allow reprogram */
>> + if (region->get_bridges)
>> + fpga_bridges_put(&region->bridge_list);
>> +
>> + /*
>> + * Next bringup the subdevs for this region which will be managed by
>> + * its own group object.
>> + */
>> + r_data->group_instance = xleaf_create_group(pdev, dtb);
>> + if (r_data->group_instance < 0) {
>> + xrt_err(pdev, "failed to create group, rc %d",
>> + r_data->group_instance);
>> + rc = r_data->group_instance;
>> + return rc;
>> + }
>> +
>> + rc = xleaf_wait_for_group_bringup(pdev);
>> + if (rc)
>> + xrt_err(pdev, "group bringup failed, rc %d", rc);
>> + return rc;
>> +}
>> +
>> +static int xmgmt_get_bridges(struct fpga_region *region)
>> +{
>> + struct xmgmt_region *r_data = region->priv;
>> + struct device *dev = &r_data->pdev->dev;
>> +
>> + return fpga_bridge_get_to_list(dev, region->info, &region->bridge_list);
>> +}
>> +
>> +/*
>> + * Program/create FPGA regions based on input xclbin file.
> ok, dropped sentence
>> + * 1. Identify a matching existing region for this xclbin
>> + * 2. Tear down any previous objects for the found region
>> + * 3. Program this region with input xclbin
>> + * 4. Iterate over this region's interface uuids to determine if it defines any
>> + * child region. Create fpga_region for the child region.
>> + */
>> +int xmgmt_process_xclbin(struct platform_device *pdev,
>> + struct fpga_manager *fmgr,
>> + const struct axlf *xclbin,
>> + enum provider_kind kind)
>> +{
>> + struct fpga_region *region, *compat_region = NULL;
>> + struct xmgmt_region_match_arg arg = { 0 };
> ok
>> + struct xmgmt_region *r_data;
>> + uuid_t compat_uuid;
>> + char *dtb = NULL;
>> + int rc, i;
>> +
>> + rc = xrt_xclbin_get_metadata(DEV(pdev), xclbin, &dtb);
>> + if (rc) {
>> + xrt_err(pdev, "failed to get dtb: %d", rc);
>> + goto failed;
>> + }
>> +
>> + rc = xrt_md_get_interface_uuids(DEV(pdev), dtb, 0, NULL);
>> + if (rc < 0) {
>> + xrt_err(pdev, "failed to get intf uuid");
>> + rc = -EINVAL;
> ok
>> + goto failed;
>> + }
>> + arg.uuid_num = rc;
>> + arg.uuids = vzalloc(sizeof(uuid_t) * arg.uuid_num);
> uuids small, convert to kzalloc
Will change to kcalloc.
>> + if (!arg.uuids) {
>> + rc = -ENOMEM;
>> + goto failed;
>> + }
>> + arg.pdev = pdev;
>> +
>> + rc = xrt_md_get_interface_uuids(DEV(pdev), dtb, arg.uuid_num, arg.uuids);
>> + if (rc != arg.uuid_num) {
>> + xrt_err(pdev, "only get %d uuids, expect %d", rc, arg.uuid_num);
>> + rc = -EINVAL;
>> + goto failed;
>> + }
>> +
>> + /* if this is not base firmware, search for a compatible region */
>> + if (kind != XMGMT_BLP) {
>> + compat_region = fpga_region_class_find(NULL, &arg, xmgmt_region_match);
>> + if (!compat_region) {
>> + xrt_err(pdev, "failed to get compatible region");
>> + rc = -ENOENT;
>> + goto failed;
>> + }
>> +
>> + xmgmt_region_cleanup(compat_region);
>> +
>> + rc = xmgmt_region_program(compat_region, xclbin, dtb);
>> + if (rc) {
>> + xrt_err(pdev, "failed to program region");
>> + goto failed;
>> + }
>> + }
>> +
>> + if (compat_region)
>> + import_uuid(&compat_uuid, (const char *)compat_region->compat_id);
>> +
>> + /* create all the new regions contained in this xclbin */
>> + for (i = 0; i < arg.uuid_num; i++) {
>> + if (compat_region && uuid_equal(&compat_uuid, &arg.uuids[i])) {
>> + /* region for this interface already exists */
>> + continue;
>> + }
>> +
>> + region = fpga_region_create(DEV(pdev), fmgr, xmgmt_get_bridges);
>> + if (!region) {
>> + xrt_err(pdev, "failed to create fpga region");
>> + rc = -EFAULT;
>> + goto failed;
>> + }
>> + r_data = devm_kzalloc(DEV(pdev), sizeof(*r_data), GFP_KERNEL);
>> + if (!r_data) {
>> + rc = -ENOMEM;
>> + fpga_region_free(region);
>> + goto failed;
>> + }
>> + r_data->pdev = pdev;
>> + r_data->region = region;
>> + r_data->group_instance = -1;
>> + uuid_copy(&r_data->intf_uuid, &arg.uuids[i]);
>> + if (compat_region)
>> + import_uuid(&r_data->dep_uuid, (const char *)compat_region->compat_id);
>> + r_data->bridge = xmgmt_create_bridge(pdev, dtb);
>> + if (!r_data->bridge) {
>> + xrt_err(pdev, "failed to create fpga bridge");
>> + rc = -EFAULT;
>> + devm_kfree(DEV(pdev), r_data);
>> + fpga_region_free(region);
>> + goto failed;
>> + }
>> +
>> + region->compat_id = &r_data->compat_id;
>> + export_uuid((char *)region->compat_id, &r_data->intf_uuid);
>> + region->priv = r_data;
>> +
>> + rc = fpga_region_register(region);
>> + if (rc) {
>> + xrt_err(pdev, "failed to register fpga region");
>> + xmgmt_destroy_bridge(r_data->bridge);
>> + fpga_region_free(region);
>> + devm_kfree(DEV(pdev), r_data);
>> + goto failed;
>> + }
>> +
>> + xrt_info(pdev, "created fpga region %llx%llx",
>> + region->compat_id->id_l, region->compat_id->id_h);
> see above comment on id_h
>
> destroy's info used %llx.%llx, for consistency need to add or remove a '.'
Sure.
Thanks,
Lizhi
>
> Tom
>
>> + }
>> +
>> + if (compat_region)
>> + put_device(&compat_region->dev);
>> + vfree(dtb);
>> + return 0;
>> +
>> +failed:
>> + if (compat_region) {
>> + put_device(&compat_region->dev);
>> + xmgmt_region_cleanup(compat_region);
>> + } else {
>> + xmgmt_region_cleanup_all(pdev);
>> + }
>> +
>> + vfree(dtb);
>> + return rc;
>> +}
Hi Tom,
On 04/01/2021 07:07 AM, Tom Rix wrote:
>
> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>> platform driver that handles IOCTLs, such as hot reset and xclbin download.
>>
>> Signed-off-by: Sonal Santan <[email protected]>
>> Signed-off-by: Max Zhen <[email protected]>
>> Signed-off-by: Lizhi Hou <[email protected]>
>> ---
>> drivers/fpga/xrt/include/xmgmt-main.h | 34 ++
>> drivers/fpga/xrt/mgmt/main.c | 670 ++++++++++++++++++++++++++
>> drivers/fpga/xrt/mgmt/xmgnt.h | 34 ++
>> include/uapi/linux/xrt/xmgmt-ioctl.h | 46 ++
>> 4 files changed, 784 insertions(+)
>> create mode 100644 drivers/fpga/xrt/include/xmgmt-main.h
>> create mode 100644 drivers/fpga/xrt/mgmt/main.c
> 'main' is generic, how about xmgnt-main ?
Sure. Will change to xmgnt-main
>> create mode 100644 drivers/fpga/xrt/mgmt/xmgnt.h
>> create mode 100644 include/uapi/linux/xrt/xmgmt-ioctl.h
>>
>> diff --git a/drivers/fpga/xrt/include/xmgmt-main.h b/drivers/fpga/xrt/include/xmgmt-main.h
>> new file mode 100644
>> index 000000000000..dce9f0d1a0dc
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/include/xmgmt-main.h
>> @@ -0,0 +1,34 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#ifndef _XMGMT_MAIN_H_
>> +#define _XMGMT_MAIN_H_
>> +
>> +#include <linux/xrt/xclbin.h>
>> +#include "xleaf.h"
>> +
>> +enum xrt_mgmt_main_leaf_cmd {
>> + XRT_MGMT_MAIN_GET_AXLF_SECTION = XRT_XLEAF_CUSTOM_BASE, /* See comments in xleaf.h */
>> + XRT_MGMT_MAIN_GET_VBNV,
>> +};
>> +
>> +/* There are three kinds of partitions. Each of them is programmed independently. */
>> +enum provider_kind {
>> + XMGMT_BLP, /* Base Logic Partition */
>> + XMGMT_PLP, /* Provider Logic Partition */
>> + XMGMT_ULP, /* User Logic Partition */
> ok
>> +};
>> +
>> +struct xrt_mgmt_main_get_axlf_section {
>> + enum provider_kind xmmigas_axlf_kind;
>> + enum axlf_section_kind xmmigas_section_kind;
>> + void *xmmigas_section;
>> + u64 xmmigas_section_size;
>> +};
>> +
>> +#endif /* _XMGMT_MAIN_H_ */
>> diff --git a/drivers/fpga/xrt/mgmt/main.c b/drivers/fpga/xrt/mgmt/main.c
>> new file mode 100644
>> index 000000000000..f3b46e1fd78b
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/mgmt/main.c
>> @@ -0,0 +1,670 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Xilinx Alveo FPGA MGMT PF entry point driver
>> + *
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Sonal Santan <[email protected]>
>> + */
>> +
>> +#include <linux/firmware.h>
>> +#include <linux/uaccess.h>
>> +#include "xclbin-helper.h"
>> +#include "metadata.h"
>> +#include "xleaf.h"
>> +#include <linux/xrt/xmgmt-ioctl.h>
>> +#include "xleaf/devctl.h"
>> +#include "xmgmt-main.h"
>> +#include "fmgr.h"
>> +#include "xleaf/icap.h"
>> +#include "xleaf/axigate.h"
>> +#include "xmgnt.h"
>> +
>> +#define XMGMT_MAIN "xmgmt_main"
>> +#define XMGMT_SUPP_XCLBIN_MAJOR 2
>> +
>> +#define XMGMT_FLAG_FLASH_READY 1
>> +#define XMGMT_FLAG_DEVCTL_READY 2
>> +
>> +#define XMGMT_UUID_STR_LEN 80
>> +
>> +struct xmgmt_main {
>> + struct platform_device *pdev;
>> + struct axlf *firmware_blp;
>> + struct axlf *firmware_plp;
>> + struct axlf *firmware_ulp;
>> + u32 flags;
> ok
>> + struct fpga_manager *fmgr;
>> + struct mutex lock; /* busy lock */
> ok
>> +
> do not need this nl
Will remove.
>> + uuid_t *blp_interface_uuids;
>> + u32 blp_interface_uuid_num;
> ok
>> +};
>> +
>> +/*
>> + * VBNV stands for Vendor, BoardID, Name, Version. It is a string
>> + * which describes board and shell.
>> + *
>> + * Caller is responsible for freeing the returned string.
> ok
>> + */
>> +char *xmgmt_get_vbnv(struct platform_device *pdev)
>> +{
>> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
>> + const char *vbnv;
>> + char *ret;
>> + int i;
>> +
>> + if (xmm->firmware_plp)
>> + vbnv = xmm->firmware_plp->header.platform_vbnv;
>> + else if (xmm->firmware_blp)
>> + vbnv = xmm->firmware_blp->header.platform_vbnv;
>> + else
>> + return NULL;
>> +
>> + ret = kstrdup(vbnv, GFP_KERNEL);
>> + if (!ret)
>> + return NULL;
>> +
>> + for (i = 0; i < strlen(ret); i++) {
>> + if (ret[i] == ':' || ret[i] == '.')
>> + ret[i] = '_';
>> + }
>> + return ret;
>> +}
>> +
>> +static int get_dev_uuid(struct platform_device *pdev, char *uuidstr, size_t len)
>> +{
>> + struct xrt_devctl_rw devctl_arg = { 0 };
>> + struct platform_device *devctl_leaf;
>> + char uuid_buf[UUID_SIZE];
>> + uuid_t uuid;
>> + int err;
>> +
>> + devctl_leaf = xleaf_get_leaf_by_epname(pdev, XRT_MD_NODE_BLP_ROM);
>> + if (!devctl_leaf) {
>> + xrt_err(pdev, "can not get %s", XRT_MD_NODE_BLP_ROM);
>> + return -EINVAL;
>> + }
>> +
>> + devctl_arg.xdr_id = XRT_DEVCTL_ROM_UUID;
>> + devctl_arg.xdr_buf = uuid_buf;
>> + devctl_arg.xdr_len = sizeof(uuid_buf);
>> + devctl_arg.xdr_offset = 0;
>> + err = xleaf_call(devctl_leaf, XRT_DEVCTL_READ, &devctl_arg);
>> + xleaf_put_leaf(pdev, devctl_leaf);
>> + if (err) {
>> + xrt_err(pdev, "can not get uuid: %d", err);
>> + return err;
>> + }
>> + import_uuid(&uuid, uuid_buf);
> ok
>> + xrt_md_trans_uuid2str(&uuid, uuidstr);
>> +
>> + return 0;
>> +}
>> +
>> +int xmgmt_hot_reset(struct platform_device *pdev)
>> +{
>> + int ret = xleaf_broadcast_event(pdev, XRT_EVENT_PRE_HOT_RESET, false);
>> +
>> + if (ret) {
>> + xrt_err(pdev, "offline failed, hot reset is canceled");
>> + return ret;
>> + }
>> +
>> + xleaf_hot_reset(pdev);
>> + xleaf_broadcast_event(pdev, XRT_EVENT_POST_HOT_RESET, false);
>> + return 0;
>> +}
>> +
>> +static ssize_t reset_store(struct device *dev, struct device_attribute *da,
>> + const char *buf, size_t count)
>> +{
>> + struct platform_device *pdev = to_platform_device(dev);
>> +
>> + xmgmt_hot_reset(pdev);
>> + return count;
>> +}
>> +static DEVICE_ATTR_WO(reset);
>> +
>> +static ssize_t VBNV_show(struct device *dev, struct device_attribute *da, char *buf)
>> +{
>> + struct platform_device *pdev = to_platform_device(dev);
>> + ssize_t ret;
>> + char *vbnv;
>> +
>> + vbnv = xmgmt_get_vbnv(pdev);
>> + if (!vbnv)
>> + return -EINVAL;
> ok
>> + ret = sprintf(buf, "%s\n", vbnv);
>> + kfree(vbnv);
>> + return ret;
>> +}
>> +static DEVICE_ATTR_RO(VBNV);
>> +
>> +/* logic uuid is the uuid that uniquely identifies the partition */
>> +static ssize_t logic_uuids_show(struct device *dev, struct device_attribute *da, char *buf)
>> +{
>> + struct platform_device *pdev = to_platform_device(dev);
>> + char uuid[XMGMT_UUID_STR_LEN];
> ok
>> + ssize_t ret;
>> +
>> + /* Getting UUID pointed to by VSEC, should be the same as logic UUID of BLP. */
>> + ret = get_dev_uuid(pdev, uuid, sizeof(uuid));
>> + if (ret)
>> + return ret;
>> + ret = sprintf(buf, "%s\n", uuid);
>> + return ret;
>> +}
>> +static DEVICE_ATTR_RO(logic_uuids);
>> +
>> +static ssize_t interface_uuids_show(struct device *dev, struct device_attribute *da, char *buf)
>> +{
>> + struct platform_device *pdev = to_platform_device(dev);
>> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
>> + ssize_t ret = 0;
>> + u32 i;
>> +
>> + for (i = 0; i < xmm->blp_interface_uuid_num; i++) {
>> + char uuidstr[XMGMT_UUID_STR_LEN];
>> +
>> + xrt_md_trans_uuid2str(&xmm->blp_interface_uuids[i], uuidstr);
>> + ret += sprintf(buf + ret, "%s\n", uuidstr);
>> + }
>> + return ret;
>> +}
>> +static DEVICE_ATTR_RO(interface_uuids);
>> +
>> +static struct attribute *xmgmt_main_attrs[] = {
>> + &dev_attr_reset.attr,
>> + &dev_attr_VBNV.attr,
>> + &dev_attr_logic_uuids.attr,
>> + &dev_attr_interface_uuids.attr,
>> + NULL,
>> +};
>> +
>> +static const struct attribute_group xmgmt_main_attrgroup = {
>> + .attrs = xmgmt_main_attrs,
>> +};
>> +
> ok, removed ulp_image_write()
>> +static int load_firmware_from_disk(struct platform_device *pdev, struct axlf **fw_buf, size_t *len)
>> +{
>> + char uuid[XMGMT_UUID_STR_LEN];
>> + const struct firmware *fw;
>> + char fw_name[256];
>> + int err = 0;
>> +
>> + *len = 0;
> ok
>> + err = get_dev_uuid(pdev, uuid, sizeof(uuid));
>> + if (err)
>> + return err;
>> +
>> + snprintf(fw_name, sizeof(fw_name), "xilinx/%s/partition.xsabin", uuid);
>> + xrt_info(pdev, "try loading fw: %s", fw_name);
>> +
>> + err = request_firmware(&fw, fw_name, DEV(pdev));
>> + if (err)
>> + return err;
>> +
>> + *fw_buf = vmalloc(fw->size);
>> + if (!*fw_buf) {
>> + release_firmware(fw);
>> + return -ENOMEM;
>> + }
>> +
>> + *len = fw->size;
>> + memcpy(*fw_buf, fw->data, fw->size);
>> +
>> + release_firmware(fw);
>> + return 0;
>> +}
>> +
>> +static const struct axlf *xmgmt_get_axlf_firmware(struct xmgmt_main *xmm, enum provider_kind kind)
>> +{
>> + switch (kind) {
>> + case XMGMT_BLP:
>> + return xmm->firmware_blp;
>> + case XMGMT_PLP:
>> + return xmm->firmware_plp;
>> + case XMGMT_ULP:
>> + return xmm->firmware_ulp;
>> + default:
>> + xrt_err(xmm->pdev, "unknown axlf kind: %d", kind);
>> + return NULL;
>> + }
>> +}
>> +
>> +/* The caller needs to free the returned dtb buffer */
> ok
>> +char *xmgmt_get_dtb(struct platform_device *pdev, enum provider_kind kind)
>> +{
>> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
>> + const struct axlf *provider;
>> + char *dtb = NULL;
>> + int rc;
>> +
>> + provider = xmgmt_get_axlf_firmware(xmm, kind);
>> + if (!provider)
>> + return dtb;
>> +
>> + rc = xrt_xclbin_get_metadata(DEV(pdev), provider, &dtb);
>> + if (rc)
>> + xrt_err(pdev, "failed to find dtb: %d", rc);
>> + return dtb;
>> +}
>> +
>> +/* The caller needs to free the returned uuid buffer */
> ok
>> +static const char *get_uuid_from_firmware(struct platform_device *pdev, const struct axlf *xclbin)
>> +{
>> + const void *uuiddup = NULL;
>> + const void *uuid = NULL;
>> + void *dtb = NULL;
>> + int rc;
>> +
>> + rc = xrt_xclbin_get_section(DEV(pdev), xclbin, PARTITION_METADATA, &dtb, NULL);
>> + if (rc)
>> + return NULL;
>> +
>> + rc = xrt_md_get_prop(DEV(pdev), dtb, NULL, NULL, XRT_MD_PROP_LOGIC_UUID, &uuid, NULL);
>> + if (!rc)
>> + uuiddup = kstrdup(uuid, GFP_KERNEL);
>> + vfree(dtb);
>> + return uuiddup;
>> +}
>> +
>> +static bool is_valid_firmware(struct platform_device *pdev,
>> + const struct axlf *xclbin, size_t fw_len)
>> +{
>> + const char *fw_buf = (const char *)xclbin;
>> + size_t axlflen = xclbin->header.length;
>> + char dev_uuid[XMGMT_UUID_STR_LEN];
>> + const char *fw_uuid;
>> + int err;
>> +
>> + err = get_dev_uuid(pdev, dev_uuid, sizeof(dev_uuid));
>> + if (err)
>> + return false;
>> +
>> + if (memcmp(fw_buf, XCLBIN_VERSION2, sizeof(XCLBIN_VERSION2)) != 0) {
>> + xrt_err(pdev, "unknown fw format");
>> + return false;
>> + }
>> +
>> + if (axlflen > fw_len) {
>> + xrt_err(pdev, "truncated fw, length: %zu, expect: %zu", fw_len, axlflen);
>> + return false;
>> + }
>> +
>> + if (xclbin->header.version_major != XMGMT_SUPP_XCLBIN_MAJOR) {
>> + xrt_err(pdev, "firmware is not supported");
>> + return false;
>> + }
>> +
>> + fw_uuid = get_uuid_from_firmware(pdev, xclbin);
>> + if (!fw_uuid || strncmp(fw_uuid, dev_uuid, sizeof(dev_uuid)) != 0) {
>> + xrt_err(pdev, "bad fw UUID: %s, expect: %s",
>> + fw_uuid ? fw_uuid : "<none>", dev_uuid);
>> + kfree(fw_uuid);
>> + return false;
>> + }
>> +
>> + kfree(fw_uuid);
>> + return true;
>> +}
>> +
>> +int xmgmt_get_provider_uuid(struct platform_device *pdev, enum provider_kind kind, uuid_t *uuid)
>> +{
>> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
>> + const struct axlf *fwbuf;
>> + const char *fw_uuid;
>> + int rc = -ENOENT;
>> +
>> + mutex_lock(&xmm->lock);
>> +
>> + fwbuf = xmgmt_get_axlf_firmware(xmm, kind);
>> + if (!fwbuf)
>> + goto done;
>> +
>> + fw_uuid = get_uuid_from_firmware(pdev, fwbuf);
>> + if (!fw_uuid)
>> + goto done;
>> +
>> + rc = xrt_md_trans_str2uuid(DEV(pdev), fw_uuid, uuid);
>> + kfree(fw_uuid);
>> +
>> +done:
>> + mutex_unlock(&xmm->lock);
>> + return rc;
>> +}
>> +
>> +static int xmgmt_create_blp(struct xmgmt_main *xmm)
>> +{
>> + const struct axlf *provider = xmgmt_get_axlf_firmware(xmm, XMGMT_BLP);
>> + struct platform_device *pdev = xmm->pdev;
>> + int rc = 0;
>> + char *dtb = NULL;
>> +
>> + dtb = xmgmt_get_dtb(pdev, XMGMT_BLP);
>> + if (!dtb) {
>> + xrt_err(pdev, "did not get BLP metadata");
>> + return -EINVAL;
> ok
>> + }
>> +
>> + rc = xmgmt_process_xclbin(xmm->pdev, xmm->fmgr, provider, XMGMT_BLP);
>> + if (rc) {
>> + xrt_err(pdev, "failed to process BLP: %d", rc);
>> + goto failed;
>> + }
>> +
>> + rc = xleaf_create_group(pdev, dtb);
>> + if (rc < 0)
>> + xrt_err(pdev, "failed to create BLP group: %d", rc);
>> + else
>> + rc = 0;
>> +
>> + WARN_ON(xmm->blp_interface_uuids);
>> + rc = xrt_md_get_interface_uuids(&pdev->dev, dtb, 0, NULL);
>> + if (rc > 0) {
>> + xmm->blp_interface_uuid_num = rc;
>> + xmm->blp_interface_uuids = vzalloc(sizeof(uuid_t) * xmm->blp_interface_uuid_num);
> blp_interface_uuids should be small, so convert to kzalloc
Will convert to kcalloc.
Thanks,
Lizhi
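For reference, the zeroing and overflow safety that make kcalloc() the right call here can be illustrated with its userspace analogue, calloc(). The uuid_t below is a 16-byte stand-in for the kernel type, and alloc_interface_uuids() is an illustrative name:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for the kernel's uuid_t (16 raw bytes). */
typedef struct { uint8_t b[16]; } uuid_t;

/*
 * Userspace analogue of the suggested change: kcalloc(n, size, GFP_KERNEL)
 * behaves like calloc(n, size). It zeroes the memory and fails cleanly
 * when n * size would overflow, unlike an open-coded
 * vzalloc(sizeof(uuid_t) * n) multiplication.
 */
static uuid_t *alloc_interface_uuids(size_t num)
{
	return calloc(num, sizeof(uuid_t));
}
```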
>> + if (!xmm->blp_interface_uuids) {
> ok
>> + rc = -ENOMEM;
>> + goto failed;
>> + }
>> + xrt_md_get_interface_uuids(&pdev->dev, dtb, xmm->blp_interface_uuid_num,
>> + xmm->blp_interface_uuids);
>> + }
>> +
>> +failed:
>> + vfree(dtb);
>> + return rc;
>> +}
>> +
>> +static int xmgmt_load_firmware(struct xmgmt_main *xmm)
>> +{
>> + struct platform_device *pdev = xmm->pdev;
>> + size_t fwlen;
>> + int rc;
>> +
>> + rc = load_firmware_from_disk(pdev, &xmm->firmware_blp, &fwlen);
> ok
>> + if (!rc && is_valid_firmware(pdev, xmm->firmware_blp, fwlen))
>> + xmgmt_create_blp(xmm);
>> + else
>> + xrt_err(pdev, "failed to find firmware, giving up: %d", rc);
>> + return rc;
>> +}
>> +
>> +static void xmgmt_main_event_cb(struct platform_device *pdev, void *arg)
>> +{
>> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
>> + struct xrt_event *evt = (struct xrt_event *)arg;
>> + enum xrt_events e = evt->xe_evt;
>> + struct platform_device *leaf;
>> + enum xrt_subdev_id id;
>> +
>> + id = evt->xe_subdev.xevt_subdev_id;
>> + switch (e) {
>> + case XRT_EVENT_POST_CREATION: {
>> + if (id == XRT_SUBDEV_DEVCTL && !(xmm->flags & XMGMT_FLAG_DEVCTL_READY)) {
>> + leaf = xleaf_get_leaf_by_epname(pdev, XRT_MD_NODE_BLP_ROM);
>> + if (leaf) {
>> + xmm->flags |= XMGMT_FLAG_DEVCTL_READY;
>> + xleaf_put_leaf(pdev, leaf);
>> + }
>> + } else if (id == XRT_SUBDEV_QSPI && !(xmm->flags & XMGMT_FLAG_FLASH_READY)) {
>> + xmm->flags |= XMGMT_FLAG_FLASH_READY;
>> + } else {
>> + break;
>> + }
>> +
>> + if (xmm->flags & XMGMT_FLAG_DEVCTL_READY)
>> + xmgmt_load_firmware(xmm);
>> + break;
>> + }
>> + case XRT_EVENT_PRE_REMOVAL:
>> + break;
>> + default:
>> + xrt_dbg(pdev, "ignored event %d", e);
>> + break;
>> + }
>> +}
>> +
>> +static int xmgmt_main_probe(struct platform_device *pdev)
>> +{
>> + struct xmgmt_main *xmm;
>> +
>> + xrt_info(pdev, "probing...");
>> +
>> + xmm = devm_kzalloc(DEV(pdev), sizeof(*xmm), GFP_KERNEL);
>> + if (!xmm)
>> + return -ENOMEM;
>> +
>> + xmm->pdev = pdev;
>> + xmm->fmgr = xmgmt_fmgr_probe(pdev);
>> + if (IS_ERR(xmm->fmgr))
>> + return PTR_ERR(xmm->fmgr);
>> +
>> + platform_set_drvdata(pdev, xmm);
>> + mutex_init(&xmm->lock);
>> +
>> + /* Ready to handle requests through sysfs nodes. */
>> + if (sysfs_create_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup))
>> + xrt_err(pdev, "failed to create sysfs group");
>> + return 0;
>> +}
>> +
>> +static int xmgmt_main_remove(struct platform_device *pdev)
>> +{
>> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
>> +
>> + /* By now, group driver should prevent any inter-leaf call. */
>> +
>> + xrt_info(pdev, "leaving...");
>> +
>> + vfree(xmm->blp_interface_uuids);
>> + vfree(xmm->firmware_blp);
>> + vfree(xmm->firmware_plp);
>> + vfree(xmm->firmware_ulp);
>> + xmgmt_region_cleanup_all(pdev);
>> + xmgmt_fmgr_remove(xmm->fmgr);
>> + sysfs_remove_group(&DEV(pdev)->kobj, &xmgmt_main_attrgroup);
>> + return 0;
>> +}
>> +
>> +static int
>> +xmgmt_mainleaf_call(struct platform_device *pdev, u32 cmd, void *arg)
>> +{
>> + struct xmgmt_main *xmm = platform_get_drvdata(pdev);
>> + int ret = 0;
>> +
>> + switch (cmd) {
>> + case XRT_XLEAF_EVENT:
>> + xmgmt_main_event_cb(pdev, arg);
>> + break;
>> + case XRT_MGMT_MAIN_GET_AXLF_SECTION: {
>> + struct xrt_mgmt_main_get_axlf_section *get =
>> + (struct xrt_mgmt_main_get_axlf_section *)arg;
>> + const struct axlf *firmware = xmgmt_get_axlf_firmware(xmm, get->xmmigas_axlf_kind);
>> +
>> + if (!firmware) {
>> + ret = -ENOENT;
>> + } else {
>> + ret = xrt_xclbin_get_section(DEV(pdev), firmware,
>> + get->xmmigas_section_kind,
>> + &get->xmmigas_section,
>> + &get->xmmigas_section_size);
>> + }
>> + break;
>> + }
>> + case XRT_MGMT_MAIN_GET_VBNV: {
>> + char **vbnv_p = (char **)arg;
>> +
>> + *vbnv_p = xmgmt_get_vbnv(pdev);
>> + if (!*vbnv_p)
>> + ret = -EINVAL;
> ok
>> + break;
>> + }
>> + default:
>> + xrt_err(pdev, "unknown cmd: %d", cmd);
>> + ret = -EINVAL;
>> + break;
>> + }
>> + return ret;
>> +}
>> +
>> +static int xmgmt_main_open(struct inode *inode, struct file *file)
>> +{
>> + struct platform_device *pdev = xleaf_devnode_open(inode);
>> +
>> + /* Device may have gone already when we get here. */
>> + if (!pdev)
>> + return -ENODEV;
>> +
>> + xrt_info(pdev, "opened");
>> + file->private_data = platform_get_drvdata(pdev);
>> + return 0;
>> +}
>> +
>> +static int xmgmt_main_close(struct inode *inode, struct file *file)
>> +{
>> + struct xmgmt_main *xmm = file->private_data;
>> +
>> + xleaf_devnode_close(inode);
>> +
>> + xrt_info(xmm->pdev, "closed");
>> + return 0;
>> +}
>> +
>> +/*
>> + * Called for the xclbin download ioctl.
>> + */
>> +static int xmgmt_bitstream_axlf_fpga_mgr(struct xmgmt_main *xmm, void *axlf, size_t size)
>> +{
>> + int ret;
>> +
>> + WARN_ON(!mutex_is_locked(&xmm->lock));
>> +
>> + /*
>> + * Should any error happen during download, we can't trust
>> + * the cached xclbin anymore.
>> + */
>> + vfree(xmm->firmware_ulp);
>> + xmm->firmware_ulp = NULL;
>> +
>> + ret = xmgmt_process_xclbin(xmm->pdev, xmm->fmgr, axlf, XMGMT_ULP);
>> + if (ret == 0)
>> + xmm->firmware_ulp = axlf;
>> +
>> + return ret;
>> +}
>> +
>> +static int bitstream_axlf_ioctl(struct xmgmt_main *xmm, const void __user *arg)
>> +{
>> + struct xmgmt_ioc_bitstream_axlf ioc_obj = { 0 };
>> + struct axlf xclbin_obj = { {0} };
>> + size_t copy_buffer_size = 0;
>> + void *copy_buffer = NULL;
>> + int ret = 0;
>> +
>> + if (copy_from_user((void *)&ioc_obj, arg, sizeof(ioc_obj)))
>> + return -EFAULT;
>> + if (copy_from_user((void *)&xclbin_obj, ioc_obj.xclbin, sizeof(xclbin_obj)))
>> + return -EFAULT;
>> + if (memcmp(xclbin_obj.magic, XCLBIN_VERSION2, sizeof(XCLBIN_VERSION2)))
>> + return -EINVAL;
>> +
>> + copy_buffer_size = xclbin_obj.header.length;
>> + if (copy_buffer_size > XCLBIN_MAX_SIZE || copy_buffer_size < sizeof(xclbin_obj))
> ok
>
> Tom
>
>> + return -EINVAL;
>> + if (xclbin_obj.header.version_major != XMGMT_SUPP_XCLBIN_MAJOR)
>> + return -EINVAL;
>> +
>> + copy_buffer = vmalloc(copy_buffer_size);
>> + if (!copy_buffer)
>> + return -ENOMEM;
>> +
>> + if (copy_from_user(copy_buffer, ioc_obj.xclbin, copy_buffer_size)) {
>> + vfree(copy_buffer);
>> + return -EFAULT;
>> + }
>> +
>> + ret = xmgmt_bitstream_axlf_fpga_mgr(xmm, copy_buffer, copy_buffer_size);
>> + if (ret)
>> + vfree(copy_buffer);
>> +
>> + return ret;
>> +}
>> +
>> +static long xmgmt_main_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
>> +{
>> + struct xmgmt_main *xmm = filp->private_data;
>> + long result = 0;
>> +
>> + if (_IOC_TYPE(cmd) != XMGMT_IOC_MAGIC)
>> + return -ENOTTY;
>> +
>> + mutex_lock(&xmm->lock);
>> +
>> + xrt_info(xmm->pdev, "ioctl cmd %d, arg %ld", cmd, arg);
>> + switch (cmd) {
>> + case XMGMT_IOCICAPDOWNLOAD_AXLF:
>> + result = bitstream_axlf_ioctl(xmm, (const void __user *)arg);
>> + break;
>> + default:
>> + result = -ENOTTY;
>> + break;
>> + }
>> +
>> + mutex_unlock(&xmm->lock);
>> + return result;
>> +}
>> +
>> +static struct xrt_subdev_endpoints xrt_mgmt_main_endpoints[] = {
>> + {
>> + .xse_names = (struct xrt_subdev_ep_names []){
>> + { .ep_name = XRT_MD_NODE_MGMT_MAIN },
>> + { NULL },
>> + },
>> + .xse_min_ep = 1,
>> + },
>> + { 0 },
>> +};
>> +
>> +static struct xrt_subdev_drvdata xmgmt_main_data = {
>> + .xsd_dev_ops = {
>> + .xsd_leaf_call = xmgmt_mainleaf_call,
>> + },
>> + .xsd_file_ops = {
>> + .xsf_ops = {
>> + .owner = THIS_MODULE,
>> + .open = xmgmt_main_open,
>> + .release = xmgmt_main_close,
>> + .unlocked_ioctl = xmgmt_main_ioctl,
>> + },
>> + .xsf_dev_name = "xmgmt",
>> + },
>> +};
>> +
>> +static const struct platform_device_id xmgmt_main_id_table[] = {
>> + { XMGMT_MAIN, (kernel_ulong_t)&xmgmt_main_data },
>> + { },
>> +};
>> +
>> +static struct platform_driver xmgmt_main_driver = {
>> + .driver = {
>> + .name = XMGMT_MAIN,
>> + },
>> + .probe = xmgmt_main_probe,
>> + .remove = xmgmt_main_remove,
>> + .id_table = xmgmt_main_id_table,
>> +};
>> +
>> +int xmgmt_register_leaf(void)
>> +{
>> + return xleaf_register_driver(XRT_SUBDEV_MGMT_MAIN,
>> + &xmgmt_main_driver, xrt_mgmt_main_endpoints);
>> +}
>> +
>> +void xmgmt_unregister_leaf(void)
>> +{
>> + xleaf_unregister_driver(XRT_SUBDEV_MGMT_MAIN);
>> +}
>> diff --git a/drivers/fpga/xrt/mgmt/xmgnt.h b/drivers/fpga/xrt/mgmt/xmgnt.h
>> new file mode 100644
>> index 000000000000..9d7c11194745
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/mgmt/xmgnt.h
>> @@ -0,0 +1,34 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Lizhi Hou <[email protected]>
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#ifndef _XMGMT_XMGNT_H_
>> +#define _XMGMT_XMGNT_H_
> For consistency, should be shortened to _XMGMNT_H_
>
>> +
>> +#include <linux/platform_device.h>
>> +#include "xmgmt-main.h"
>> +
>> +struct fpga_manager;
>> +int xmgmt_process_xclbin(struct platform_device *pdev,
>> + struct fpga_manager *fmgr,
>> + const struct axlf *xclbin,
>> + enum provider_kind kind);
>> +void xmgmt_region_cleanup_all(struct platform_device *pdev);
>> +
>> +int xmgmt_hot_reset(struct platform_device *pdev);
>> +
>> +/* Get dtb for the specified group. Caller should vfree the returned dtb. */
>> +char *xmgmt_get_dtb(struct platform_device *pdev, enum provider_kind kind);
>> +char *xmgmt_get_vbnv(struct platform_device *pdev);
>> +int xmgmt_get_provider_uuid(struct platform_device *pdev,
>> + enum provider_kind kind, uuid_t *uuid);
>> +
>> +int xmgmt_register_leaf(void);
> ok
>> +void xmgmt_unregister_leaf(void);
>> +
>> +#endif /* _XMGMT_XMGNT_H_ */
>> diff --git a/include/uapi/linux/xrt/xmgmt-ioctl.h b/include/uapi/linux/xrt/xmgmt-ioctl.h
>> new file mode 100644
>> index 000000000000..da992e581189
>> --- /dev/null
>> +++ b/include/uapi/linux/xrt/xmgmt-ioctl.h
>> @@ -0,0 +1,46 @@
>> +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
>> +/*
>> + * Copyright (C) 2015-2021, Xilinx Inc
>> + *
>> + */
>> +
>> +/**
>> + * DOC: PCIe Kernel Driver for Management Physical Function
>> + * Interfaces exposed by the *xmgmt* driver are defined in *xmgmt-ioctl.h*.
>> + * Core functionality provided by *xmgmt* driver is described in the following table:
>> + *
>> + * =========== ============================== ==================================
>> + * Functionality ioctl request code data format
>> + * =========== ============================== ==================================
>> + * 1 FPGA image download XMGMT_IOCICAPDOWNLOAD_AXLF xmgmt_ioc_bitstream_axlf
>> + * =========== ============================== ==================================
>> + */
>> +
>> +#ifndef _XMGMT_IOCTL_H_
>> +#define _XMGMT_IOCTL_H_
>> +
>> +#include <linux/ioctl.h>
>> +
>> +#define XMGMT_IOC_MAGIC 'X'
>> +#define XMGMT_IOC_ICAP_DOWNLOAD_AXLF 0x6
>> +
>> +/**
>> + * struct xmgmt_ioc_bitstream_axlf - load xclbin (AXLF) device image
>> + * used with XMGMT_IOCICAPDOWNLOAD_AXLF ioctl
>> + *
>> + * @xclbin: Pointer to user's xclbin structure in memory
>> + */
>> +struct xmgmt_ioc_bitstream_axlf {
>> + struct axlf *xclbin;
>> +};
>> +
>> +#define XMGMT_IOCICAPDOWNLOAD_AXLF \
>> + _IOW(XMGMT_IOC_MAGIC, XMGMT_IOC_ICAP_DOWNLOAD_AXLF, struct xmgmt_ioc_bitstream_axlf)
>> +
>> +/*
>> + * The following definitions are for binary compatibility with classic XRT management driver
>> + */
>> +#define XCLMGMT_IOCICAPDOWNLOAD_AXLF XMGMT_IOCICAPDOWNLOAD_AXLF
>> +#define xclmgmt_ioc_bitstream_axlf xmgmt_ioc_bitstream_axlf
>> +
>> +#endif
Hi Tom,
On 3/31/21 5:50 AM, Tom Rix wrote:
> Several just for debugging items, consider adding a CONFIG_XRT_DEBUGGING
I'd like to clarify what "only for debugging" means here. It actually
means that the content of the msg/output only makes sense to a
developer, versus an end user. It does not mean that only developers
will ever execute the code path which triggers the debugging code.
We have messages from print functions like this, and we have output from
sysfs nodes like this. We can't just disable all of them by default
because the content only makes sense to a developer. In some cases,
requiring a recompilation of the driver to enable the debugging code is
very difficult, e.g., when we need to debug a customer issue and we do
not have access to the system. It's a big ask for our customer to
recompile, reload the driver and reproduce the issue for us (versus just
collecting and sending us the messages/output).
> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>> Infrastructure code providing APIs for managing leaf driver instance
>> groups, facilitating inter-leaf driver calls and root calls.
>>
>> Signed-off-by: Sonal Santan<[email protected]>
>> Signed-off-by: Max Zhen<[email protected]>
>> Signed-off-by: Lizhi Hou<[email protected]>
>> ---
>> drivers/fpga/xrt/lib/subdev.c | 865 ++++++++++++++++++++++++++++++++++
>> 1 file changed, 865 insertions(+)
>> create mode 100644 drivers/fpga/xrt/lib/subdev.c
>>
>> diff --git a/drivers/fpga/xrt/lib/subdev.c b/drivers/fpga/xrt/lib/subdev.c
>> new file mode 100644
>> index 000000000000..6428b183fee3
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/lib/subdev.c
>> @@ -0,0 +1,865 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen<[email protected]>
>> + */
>> +
>> +#include <linux/platform_device.h>
>> +#include <linux/pci.h>
>> +#include <linux/vmalloc.h>
>> +#include "xleaf.h"
>> +#include "subdev_pool.h"
>> +#include "lib-drv.h"
>> +#include "metadata.h"
>> +
>> +#define IS_ROOT_DEV(dev) ((dev)->bus == &pci_bus_type)
> for readablity, add a new line here
Will do.
>> +static inline struct device *find_root(struct platform_device *pdev)
>> +{
>> + struct device *d = DEV(pdev);
>> +
>> + while (!IS_ROOT_DEV(d))
>> + d = d->parent;
>> + return d;
>> +}
>> +
>> +/*
>> + * It represents a holder of a subdev. One holder can repeatedly hold a subdev
>> + * as long as there is an unhold corresponding to each hold.
>> + */
>> +struct xrt_subdev_holder {
>> + struct list_head xsh_holder_list;
>> + struct device *xsh_holder;
>> + int xsh_count;
>> + struct kref xsh_kref;
>> +};
>> +
>> +/*
>> + * It represents a specific instance of platform driver for a subdev, which
>> + * provides services to its clients (another subdev driver or root driver).
>> + */
>> +struct xrt_subdev {
>> + struct list_head xs_dev_list;
>> + struct list_head xs_holder_list;
>> + enum xrt_subdev_id xs_id; /* type of subdev */
>> + struct platform_device *xs_pdev; /* a particular subdev inst */
>> + struct completion xs_holder_comp;
>> +};
>> +
>> +static struct xrt_subdev *xrt_subdev_alloc(void)
>> +{
>> + struct xrt_subdev *sdev = kzalloc(sizeof(*sdev), GFP_KERNEL);
> ok
>> +
>> + if (!sdev)
>> + return NULL;
>> +
>> + INIT_LIST_HEAD(&sdev->xs_dev_list);
>> + INIT_LIST_HEAD(&sdev->xs_holder_list);
>> + init_completion(&sdev->xs_holder_comp);
>> + return sdev;
>> +}
>> +
>> +static void xrt_subdev_free(struct xrt_subdev *sdev)
>> +{
>> + kfree(sdev);
> Abstraction for a single function is not needed, use kfree directly.
Will do.
>> +}
>> +
>> +int xrt_subdev_root_request(struct platform_device *self, u32 cmd, void *arg)
>> +{
>> + struct device *dev = DEV(self);
>> + struct xrt_subdev_platdata *pdata = DEV_PDATA(self);
>> +
>> + WARN_ON(!pdata->xsp_root_cb);
> ok
>> + return (*pdata->xsp_root_cb)(dev->parent, pdata->xsp_root_cb_arg, cmd, arg);
>> +}
>> +
>> +/*
>> + * Subdev common sysfs nodes.
>> + */
>> +static ssize_t holders_show(struct device *dev, struct device_attribute *attr, char *buf)
>> +{
>> + ssize_t len;
>> + struct platform_device *pdev = to_platform_device(dev);
>> + struct xrt_root_get_holders holders = { pdev, buf, 1024 };
> Since 1024 is config, #define it somewhere so it can be tweaked later
Will do.
>> +
>> + len = xrt_subdev_root_request(pdev, XRT_ROOT_GET_LEAF_HOLDERS, &holders);
>> + if (len >= holders.xpigh_holder_buf_len)
>> + return len;
>> + buf[len] = '\n';
>> + return len + 1;
>> +}
>> +static DEVICE_ATTR_RO(holders);
>> +
>> +static struct attribute *xrt_subdev_attrs[] = {
>> + &dev_attr_holders.attr,
>> + NULL,
>> +};
>> +
>> +static ssize_t metadata_output(struct file *filp, struct kobject *kobj,
>> + struct bin_attribute *attr, char *buf, loff_t off, size_t count)
>> +{
>> + struct device *dev = kobj_to_dev(kobj);
>> + struct platform_device *pdev = to_platform_device(dev);
>> + struct xrt_subdev_platdata *pdata = DEV_PDATA(pdev);
>> + unsigned char *blob;
>> + unsigned long size;
>> + ssize_t ret = 0;
>> +
>> + blob = pdata->xsp_dtb;
>> + size = xrt_md_size(dev, blob);
>> + if (size == XRT_MD_INVALID_LENGTH) {
>> + ret = -EINVAL;
>> + goto failed;
>> + }
>> +
>> + if (off >= size)
>> + goto failed;
> if this and next are used for debugging, add a 'dev_dbg()' to help out the debugging.
Will do.
>> +
>> + if (off + count > size)
>> + count = size - off;
>> + memcpy(buf, blob + off, count);
>> +
>> + ret = count;
>> +failed:
>> + return ret;
>> +}
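The off/count clamping in metadata_output() above follows the usual sysfs bin_attribute read convention: a read starting at or past EOF returns 0, and a read straddling the end is truncated. A userspace sketch of just that logic (bounded_read() is an illustrative name, not the driver's):

```c
#include <assert.h>
#include <string.h>
#include <sys/types.h>

/*
 * Userspace sketch of the clamping in metadata_output(): read `count`
 * bytes at offset `off` from a blob of `size` bytes. Returns 0 at or
 * past EOF (not an error), and truncates a read that would run past
 * the end of the blob.
 */
static ssize_t bounded_read(char *dst, const char *blob, size_t size,
			    off_t off, size_t count)
{
	if (off >= (off_t)size)
		return 0;	/* read past end: EOF, not an error */
	if (off + count > size)
		count = size - off;
	memcpy(dst, blob + off, count);
	return count;
}
```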
>> +
>> +static struct bin_attribute meta_data_attr = {
>> + .attr = {
>> + .name = "metadata",
>> + .mode = 0400
>> + },
> Permissions will not be enough, anyone can be root.
>
> A developer only interface should be hidden behind a CONFIG_
Please see my comment at the beginning of this reply. Leaving it here
will ease troubleshooting on customer systems. Furthermore, whoever is
root has already gained access to this metadata, so there is no
additional security concern in letting root users also read it from this
sysfs node.
>> + .read = metadata_output,
>> + .size = 0
>> +};
>> +
>> +static struct bin_attribute *xrt_subdev_bin_attrs[] = {
>> + &meta_data_attr,
>> + NULL,
>> +};
>> +
>> +static const struct attribute_group xrt_subdev_attrgroup = {
>> + .attrs = xrt_subdev_attrs,
>> + .bin_attrs = xrt_subdev_bin_attrs,
>> +};
>> +
>> +/*
>> + * Given the device metadata, parse it to get IO ranges and construct
>> + * resource array.
>> + */
>> +static int
>> +xrt_subdev_getres(struct device *parent, enum xrt_subdev_id id,
>> + char *dtb, struct resource **res, int *res_num)
>> +{
>> + struct xrt_subdev_platdata *pdata;
>> + struct resource *pci_res = NULL;
>> + const u64 *bar_range;
>> + const u32 *bar_idx;
>> + char *ep_name = NULL, *regmap = NULL;
>> + uint bar;
>> + int count1 = 0, count2 = 0, ret;
>> +
>> + if (!dtb)
>> + return -EINVAL;
>> +
>> + pdata = DEV_PDATA(to_platform_device(parent));
>> +
>> + /* go through metadata and count endpoints in it */
>> + for (xrt_md_get_next_endpoint(parent, dtb, NULL, NULL, &ep_name, ®map); ep_name;
> Embedding functions in the for-loop is difficult to debug; consider changing this loop into something easier to read.
>
> Maybe
>
> xrt_md_get_next_endpoint(parent, dtb, NULL, NULL, &ep_name, ®map);
>
> while (ep_name) {
>
> ...
>
> xrt_md_get_next_endpoint(parent, dtb, ep_name, regmap, &ep_name, ®map)
>
> }
>
> similar below
Will change.
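The suggested restructuring can be sketched with a hypothetical stand-in iterator. get_next_endpoint() below is a mock that walks a static table, not the xrt_md API, but the loop shape is the one Tom proposed: fetch once before the loop, advance at the bottom of the body, nothing embedded in a for(;;) header.

```c
#include <assert.h>
#include <stddef.h>

/* Mock endpoint table; the real code walks device-tree metadata. */
static const char *endpoints[] = { "ep_blp_rom", "ep_icap", "ep_gate", NULL };

/* Mock of xrt_md_get_next_endpoint(): yield the entry after `cur`,
 * or the first entry when cur is NULL, or NULL when exhausted. */
static void get_next_endpoint(const char *cur, const char **next)
{
	size_t i = 0;

	if (!cur) {
		*next = endpoints[0];
		return;
	}
	while (endpoints[i] && endpoints[i] != cur)
		i++;
	*next = endpoints[i] ? endpoints[i + 1] : NULL;
}

/* The restructured loop: each iteration is a plain statement,
 * so it is easy to single-step in a debugger. */
static int count_endpoints(void)
{
	const char *ep_name;
	int count = 0;

	get_next_endpoint(NULL, &ep_name);
	while (ep_name) {
		count++;
		get_next_endpoint(ep_name, &ep_name);
	}
	return count;
}
```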
>> + xrt_md_get_next_endpoint(parent, dtb, ep_name, regmap, &ep_name, ®map)) {
>> + ret = xrt_md_get_prop(parent, dtb, ep_name, regmap,
>> + XRT_MD_PROP_IO_OFFSET, (const void **)&bar_range, NULL);
>> + if (!ret)
>> + count1++;
>> + }
>> + if (!count1)
>> + return 0;
>> +
>> + /* allocate resource array for all endpoints been found in metadata */
>> + *res = vzalloc(sizeof(**res) * count1);
> if this is small, convert to kzalloc
It depends on the value of count1, so it could be big. I'll keep it as is.
>> +
>> + /* go through all endpoints again and get IO range for each endpoint */
>> + for (xrt_md_get_next_endpoint(parent, dtb, NULL, NULL, &ep_name, ®map); ep_name;
>> + xrt_md_get_next_endpoint(parent, dtb, ep_name, regmap, &ep_name, ®map)) {
>> + ret = xrt_md_get_prop(parent, dtb, ep_name, regmap,
>> + XRT_MD_PROP_IO_OFFSET, (const void **)&bar_range, NULL);
>> + if (ret)
>> + continue;
>> + xrt_md_get_prop(parent, dtb, ep_name, regmap,
>> + XRT_MD_PROP_BAR_IDX, (const void **)&bar_idx, NULL);
>> + bar = bar_idx ? be32_to_cpu(*bar_idx) : 0;
>> + xleaf_get_barres(to_platform_device(parent), &pci_res, bar);
>> + (*res)[count2].start = pci_res->start +
>> + be64_to_cpu(bar_range[0]);
>> + (*res)[count2].end = pci_res->start +
>> + be64_to_cpu(bar_range[0]) +
>> + be64_to_cpu(bar_range[1]) - 1;
>> + (*res)[count2].flags = IORESOURCE_MEM;
>> + /* check if there is a conflicting resource */
>> + ret = request_resource(pci_res, *res + count2);
>> + if (ret) {
>> + dev_err(parent, "Conflict resource %pR\n", *res + count2);
>> + vfree(*res);
>> + *res_num = 0;
>> + *res = NULL;
>> + return ret;
>> + }
>> + release_resource(*res + count2);
>> +
>> + (*res)[count2].parent = pci_res;
>> +
>> + xrt_md_find_endpoint(parent, pdata->xsp_dtb, ep_name,
>> + regmap, &(*res)[count2].name);
>> +
>> + count2++;
>> + }
>> +
>> + WARN_ON(count1 != count2);
>> + *res_num = count2;
>> +
>> + return 0;
>> +}
>> +
>> +static inline enum xrt_subdev_file_mode
>> +xleaf_devnode_mode(struct xrt_subdev_drvdata *drvdata)
>> +{
>> + return drvdata->xsd_file_ops.xsf_mode;
>> +}
>> +
>> +static bool xrt_subdev_cdev_auto_creation(struct platform_device *pdev)
>> +{
>> + struct xrt_subdev_drvdata *drvdata = DEV_DRVDATA(pdev);
>> + enum xrt_subdev_file_mode mode;
>> +
>> + if (!drvdata)
>> + return false;
>> + mode = xleaf_devnode_mode(drvdata);
>> +
>> + if (!xleaf_devnode_enabled(drvdata))
>> + return false;
>> +
>> + return (mode == XRT_SUBDEV_FILE_DEFAULT || mode == XRT_SUBDEV_FILE_MULTI_INST);
> should this check happen before xleaf_devnode_enable() ?
The code here has changed due to the bus type change. Please review the
new code in the next version.
>> +}
>> +
>> +static struct xrt_subdev *
>> +xrt_subdev_create(struct device *parent, enum xrt_subdev_id id,
>> + xrt_subdev_root_cb_t pcb, void *pcb_arg, char *dtb)
>> +{
>> + struct xrt_subdev_platdata *pdata = NULL;
>> + struct platform_device *pdev = NULL;
>> + int inst = PLATFORM_DEVID_NONE;
>> + struct xrt_subdev *sdev = NULL;
>> + struct resource *res = NULL;
>> + unsigned long dtb_len = 0;
>> + int res_num = 0;
>> + size_t pdata_sz;
>> + int ret;
>> +
>> + sdev = xrt_subdev_alloc();
>> + if (!sdev) {
>> + dev_err(parent, "failed to alloc subdev for ID %d", id);
>> + goto fail;
>> + }
>> + sdev->xs_id = id;
>> +
>> + if (!dtb) {
>> + ret = xrt_md_create(parent, &dtb);
>> + if (ret) {
>> + dev_err(parent, "can't create empty dtb: %d", ret);
>> + goto fail;
>> + }
>> + }
>> + xrt_md_pack(parent, dtb);
>> + dtb_len = xrt_md_size(parent, dtb);
>> + if (dtb_len == XRT_MD_INVALID_LENGTH) {
>> + dev_err(parent, "invalid metadata len %lu", dtb_len);
>> + goto fail;
>> + }
>> + pdata_sz = sizeof(struct xrt_subdev_platdata) + dtb_len;
> ok
>> +
>> + /* Prepare platform data passed to subdev. */
>> + pdata = vzalloc(pdata_sz);
>> + if (!pdata)
>> + goto fail;
>> +
>> + pdata->xsp_root_cb = pcb;
>> + pdata->xsp_root_cb_arg = pcb_arg;
>> + memcpy(pdata->xsp_dtb, dtb, dtb_len);
>> + if (id == XRT_SUBDEV_GRP) {
>> + /* Group can only be created by root driver. */
>> + pdata->xsp_root_name = dev_name(parent);
>> + } else {
>> + struct platform_device *grp = to_platform_device(parent);
>> + /* Leaf can only be created by group driver. */
>> + WARN_ON(strncmp(xrt_drv_name(XRT_SUBDEV_GRP),
>> + platform_get_device_id(grp)->name,
>> + strlen(xrt_drv_name(XRT_SUBDEV_GRP)) + 1));
>> + pdata->xsp_root_name = DEV_PDATA(grp)->xsp_root_name;
>> + }
>> +
>> + /* Obtain dev instance number. */
>> + inst = xrt_drv_get_instance(id);
>> + if (inst < 0) {
>> + dev_err(parent, "failed to obtain instance: %d", inst);
>> + goto fail;
>> + }
>> +
>> + /* Create subdev. */
>> + if (id != XRT_SUBDEV_GRP) {
>> + int rc = xrt_subdev_getres(parent, id, dtb, &res, &res_num);
>> +
>> + if (rc) {
>> + dev_err(parent, "failed to get resource for %s.%d: %d",
>> + xrt_drv_name(id), inst, rc);
>> + goto fail;
>> + }
>> + }
>> + pdev = platform_device_register_resndata(parent, xrt_drv_name(id),
>> + inst, res, res_num, pdata, pdata_sz);
> ok
>> + vfree(res);
>> + if (IS_ERR(pdev)) {
>> + dev_err(parent, "failed to create subdev for %s inst %d: %ld",
>> + xrt_drv_name(id), inst, PTR_ERR(pdev));
>> + goto fail;
>> + }
>> + sdev->xs_pdev = pdev;
>> +
>> + if (device_attach(DEV(pdev)) != 1) {
>> + xrt_err(pdev, "failed to attach");
>> + goto fail;
>> + }
>> +
>> + if (sysfs_create_group(&DEV(pdev)->kobj, &xrt_subdev_attrgroup))
>> + xrt_err(pdev, "failed to create sysfs group");
>> +
>> + /*
>> + * Create sysfs sym link under root for leaves
>> + * under random groups for easy access to them.
>> + */
>> + if (id != XRT_SUBDEV_GRP) {
>> + if (sysfs_create_link(&find_root(pdev)->kobj,
>> + &DEV(pdev)->kobj, dev_name(DEV(pdev)))) {
>> + xrt_err(pdev, "failed to create sysfs link");
>> + }
>> + }
>> +
>> + /* All done, ready to handle requests through cdev. */
>> + if (xrt_subdev_cdev_auto_creation(pdev))
>> + xleaf_devnode_create(pdev, DEV_DRVDATA(pdev)->xsd_file_ops.xsf_dev_name, NULL);
>> +
>> + vfree(pdata);
>> + return sdev;
>> +
>> +fail:
> Take another look at splitting this error handling.
>
> Jumping to specific labels is more common.
Will change.
>> + vfree(pdata);
>> + if (sdev && !IS_ERR_OR_NULL(sdev->xs_pdev))
>> + platform_device_unregister(sdev->xs_pdev);
>> + if (inst >= 0)
>> + xrt_drv_put_instance(id, inst);
>> + xrt_subdev_free(sdev);
>> + return NULL;
>> +}
>> +
>> +static void xrt_subdev_destroy(struct xrt_subdev *sdev)
>> +{
>> + struct platform_device *pdev = sdev->xs_pdev;
>> + struct device *dev = DEV(pdev);
>> + int inst = pdev->id;
>> + int ret;
>> +
>> + /* Take down the device node */
>> + if (xrt_subdev_cdev_auto_creation(pdev)) {
>> + ret = xleaf_devnode_destroy(pdev);
>> + WARN_ON(ret);
>> + }
>> + if (sdev->xs_id != XRT_SUBDEV_GRP)
>> + sysfs_remove_link(&find_root(pdev)->kobj, dev_name(dev));
>> + sysfs_remove_group(&dev->kobj, &xrt_subdev_attrgroup);
>> + platform_device_unregister(pdev);
>> + xrt_drv_put_instance(sdev->xs_id, inst);
>> + xrt_subdev_free(sdev);
>> +}
>> +
>> +struct platform_device *
>> +xleaf_get_leaf(struct platform_device *pdev, xrt_subdev_match_t match_cb, void *match_arg)
>> +{
>> + int rc;
>> + struct xrt_root_get_leaf get_leaf = {
>> + pdev, match_cb, match_arg, };
>> +
>> + rc = xrt_subdev_root_request(pdev, XRT_ROOT_GET_LEAF, &get_leaf);
>> + if (rc)
>> + return NULL;
>> + return get_leaf.xpigl_tgt_pdev;
>> +}
>> +EXPORT_SYMBOL_GPL(xleaf_get_leaf);
>> +
>> +bool xleaf_has_endpoint(struct platform_device *pdev, const char *endpoint_name)
>> +{
>> + struct resource *res;
>> + int i = 0;
> ok
>> +
>> + do {
>> + res = platform_get_resource(pdev, IORESOURCE_MEM, i);
>> + if (res && !strncmp(res->name, endpoint_name, strlen(res->name) + 1))
>> + return true;
>> + ++i;
> ok
>> + } while (res);
>> +
>> + return false;
>> +}
>> +EXPORT_SYMBOL_GPL(xleaf_has_endpoint);
>> +
>> +int xleaf_put_leaf(struct platform_device *pdev, struct platform_device *leaf)
>> +{
>> + struct xrt_root_put_leaf put_leaf = { pdev, leaf };
>> +
>> + return xrt_subdev_root_request(pdev, XRT_ROOT_PUT_LEAF, &put_leaf);
>> +}
>> +EXPORT_SYMBOL_GPL(xleaf_put_leaf);
>> +
>> +int xleaf_create_group(struct platform_device *pdev, char *dtb)
>> +{
>> + return xrt_subdev_root_request(pdev, XRT_ROOT_CREATE_GROUP, dtb);
>> +}
>> +EXPORT_SYMBOL_GPL(xleaf_create_group);
>> +
>> +int xleaf_destroy_group(struct platform_device *pdev, int instance)
>> +{
>> + return xrt_subdev_root_request(pdev, XRT_ROOT_REMOVE_GROUP, (void *)(uintptr_t)instance);
>> +}
>> +EXPORT_SYMBOL_GPL(xleaf_destroy_group);
>> +
>> +int xleaf_wait_for_group_bringup(struct platform_device *pdev)
>> +{
>> + return xrt_subdev_root_request(pdev, XRT_ROOT_WAIT_GROUP_BRINGUP, NULL);
>> +}
>> +EXPORT_SYMBOL_GPL(xleaf_wait_for_group_bringup);
>> +
>> +static ssize_t
>> +xrt_subdev_get_holders(struct xrt_subdev *sdev, char *buf, size_t len)
>> +{
>> + const struct list_head *ptr;
>> + struct xrt_subdev_holder *h;
>> + ssize_t n = 0;
>> +
>> + list_for_each(ptr, &sdev->xs_holder_list) {
>> + h = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
>> + n += snprintf(buf + n, len - n, "%s:%d ",
>> + dev_name(h->xsh_holder), kref_read(&h->xsh_kref));
> add a comment that truncation is fine
Will change.
>> + if (n >= (len - 1))
>> + break;
>> + }
>> + return n;
>> +}
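Regarding the truncation comment above: snprintf() returns the length the string *would* have had, so the running offset has to be checked against the buffer size on each iteration. A small userspace sketch of the same accumulate-and-clamp pattern, with made-up names, where truncation is acceptable because the output is only a diagnostic string:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Append "name:count" pairs into a fixed buffer; truncation is fine. */
static size_t append_holders(char *buf, size_t len,
			     const char * const *names, const int *refs, int n)
{
	size_t pos = 0;
	int i;

	for (i = 0; i < n; i++) {
		int w = snprintf(buf + pos, len - pos, "%s:%d ",
				 names[i], refs[i]);
		if (w < 0)
			break;
		pos += (size_t)w;
		if (pos >= len - 1) {	/* truncated: fine for a log string */
			pos = len - 1;	/* clamp to what was actually stored */
			break;
		}
	}
	return pos;
}
```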
>> +
>> +void xrt_subdev_pool_init(struct device *dev, struct xrt_subdev_pool *spool)
>> +{
>> + INIT_LIST_HEAD(&spool->xsp_dev_list);
>> + spool->xsp_owner = dev;
>> + mutex_init(&spool->xsp_lock);
>> + spool->xsp_closing = false;
>> +}
>> +
>> +static void xrt_subdev_free_holder(struct xrt_subdev_holder *holder)
>> +{
>> + list_del(&holder->xsh_holder_list);
>> + vfree(holder);
>> +}
>> +
>> +static void xrt_subdev_pool_wait_for_holders(struct xrt_subdev_pool *spool, struct xrt_subdev *sdev)
>> +{
>> + const struct list_head *ptr, *next;
>> + char holders[128];
>> + struct xrt_subdev_holder *holder;
>> + struct mutex *lk = &spool->xsp_lock;
>> +
>> + while (!list_empty(&sdev->xs_holder_list)) {
>> + int rc;
>> +
>> + /* It's most likely a bug if we ever enter this loop. */
>> + xrt_subdev_get_holders(sdev, holders, sizeof(holders));
> Items just for debugging need to run just for debugging
Please see my comment at the beginning of this reply. I'd like to keep
the error message here. It might be very valuable to us since it could
help debug a race condition that is not easy to reproduce.
Thanks,
Max
>> + xrt_err(sdev->xs_pdev, "awaits holders: %s", holders);
>> + mutex_unlock(lk);
>> + rc = wait_for_completion_killable(&sdev->xs_holder_comp);
>> + mutex_lock(lk);
>> + if (rc == -ERESTARTSYS) {
>> + xrt_err(sdev->xs_pdev, "give up on waiting for holders, clean up now");
>> + list_for_each_safe(ptr, next, &sdev->xs_holder_list) {
>> + holder = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
>> + xrt_subdev_free_holder(holder);
>> + }
>> + }
>> + }
>> +}
>> +
>> +void xrt_subdev_pool_fini(struct xrt_subdev_pool *spool)
>> +{
>> + struct list_head *dl = &spool->xsp_dev_list;
>> + struct mutex *lk = &spool->xsp_lock;
>> +
>> + mutex_lock(lk);
>> + if (spool->xsp_closing) {
>> + mutex_unlock(lk);
>> + return;
>> + }
>> + spool->xsp_closing = true;
>> + mutex_unlock(lk);
> ok
>> +
>> + /* Remove subdevs in the reverse order they were added. */
>> + while (!list_empty(dl)) {
>> + struct xrt_subdev *sdev = list_first_entry(dl, struct xrt_subdev, xs_dev_list);
>> +
>> + xrt_subdev_pool_wait_for_holders(spool, sdev);
>> + list_del(&sdev->xs_dev_list);
>> + xrt_subdev_destroy(sdev);
>> + }
>> +}
>> +
>> +static struct xrt_subdev_holder *xrt_subdev_find_holder(struct xrt_subdev *sdev,
>> + struct device *holder_dev)
>> +{
>> + struct list_head *hl = &sdev->xs_holder_list;
>> + struct xrt_subdev_holder *holder;
>> + const struct list_head *ptr;
>> +
>> + list_for_each(ptr, hl) {
>> + holder = list_entry(ptr, struct xrt_subdev_holder, xsh_holder_list);
>> + if (holder->xsh_holder == holder_dev)
>> + return holder;
>> + }
>> + return NULL;
>> +}
>> +
>> +static int xrt_subdev_hold(struct xrt_subdev *sdev, struct device *holder_dev)
>> +{
>> + struct xrt_subdev_holder *holder = xrt_subdev_find_holder(sdev, holder_dev);
>> + struct list_head *hl = &sdev->xs_holder_list;
>> +
>> + if (!holder) {
>> + holder = vzalloc(sizeof(*holder));
>> + if (!holder)
>> + return -ENOMEM;
>> + holder->xsh_holder = holder_dev;
>> + kref_init(&holder->xsh_kref);
>> + list_add_tail(&holder->xsh_holder_list, hl);
>> + } else {
>> + kref_get(&holder->xsh_kref);
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static void xrt_subdev_free_holder_kref(struct kref *kref)
>> +{
>> + struct xrt_subdev_holder *holder = container_of(kref, struct xrt_subdev_holder, xsh_kref);
>> +
>> + xrt_subdev_free_holder(holder);
>> +}
>> +
>> +static int
>> +xrt_subdev_release(struct xrt_subdev *sdev, struct device *holder_dev)
>> +{
>> + struct xrt_subdev_holder *holder = xrt_subdev_find_holder(sdev, holder_dev);
>> + struct list_head *hl = &sdev->xs_holder_list;
>> +
>> + if (!holder) {
>> + dev_err(holder_dev, "can't release, %s did not hold %s",
>> + dev_name(holder_dev), dev_name(DEV(sdev->xs_pdev)));
>> + return -EINVAL;
>> + }
>> + kref_put(&holder->xsh_kref, xrt_subdev_free_holder_kref);
>> +
>> + /* kref_put above may remove holder from list. */
>> + if (list_empty(hl))
>> + complete(&sdev->xs_holder_comp);
>> + return 0;
>> +}
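The hold/release protocol above (a kref per holder, with the completion fired when the holder list drains) can be sketched in plain C with an explicit reference count standing in for struct kref; all names here are illustrative:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* One entry per distinct holder, with a plain int in place of a kref. */
struct holder { const char *who; int refs; struct holder *next; };

static int hold(struct holder **list, const char *who)
{
	struct holder *h;

	for (h = *list; h; h = h->next) {
		if (strcmp(h->who, who) == 0) {
			h->refs++;		/* existing holder: kref_get */
			return 0;
		}
	}
	h = calloc(1, sizeof(*h));		/* new holder: kref_init */
	if (!h)
		return -1;
	h->who = who;
	h->refs = 1;
	h->next = *list;
	*list = h;
	return 0;
}

/* Returns 1 when the last reference is gone and the list became empty. */
static int release(struct holder **list, const char *who)
{
	struct holder **pp;

	for (pp = list; *pp; pp = &(*pp)->next) {
		if (strcmp((*pp)->who, who) == 0) {
			if (--(*pp)->refs == 0) {	/* kref_put path */
				struct holder *dead = *pp;
				*pp = dead->next;
				free(dead);
			}
			return *list == NULL;	/* the complete() condition */
		}
	}
	return -1;				/* not a holder: -EINVAL case */
}
```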
>> +
>> +int xrt_subdev_pool_add(struct xrt_subdev_pool *spool, enum xrt_subdev_id id,
>> + xrt_subdev_root_cb_t pcb, void *pcb_arg, char *dtb)
>> +{
>> + struct mutex *lk = &spool->xsp_lock;
>> + struct list_head *dl = &spool->xsp_dev_list;
>> + struct xrt_subdev *sdev;
>> + int ret = 0;
>> +
>> + sdev = xrt_subdev_create(spool->xsp_owner, id, pcb, pcb_arg, dtb);
>> + if (sdev) {
>> + mutex_lock(lk);
>> + if (spool->xsp_closing) {
>> + /* No new subdev when pool is going away. */
>> + xrt_err(sdev->xs_pdev, "pool is closing");
>> + ret = -ENODEV;
>> + } else {
>> + list_add(&sdev->xs_dev_list, dl);
>> + }
>> + mutex_unlock(lk);
>> + if (ret)
>> + xrt_subdev_destroy(sdev);
>> + } else {
>> + ret = -EINVAL;
>> + }
>> +
>> + ret = ret ? ret : sdev->xs_pdev->id;
>> + return ret;
>> +}
>> +
>> +int xrt_subdev_pool_del(struct xrt_subdev_pool *spool, enum xrt_subdev_id id, int instance)
>> +{
>> + const struct list_head *ptr;
>> + struct mutex *lk = &spool->xsp_lock;
>> + struct list_head *dl = &spool->xsp_dev_list;
>> + struct xrt_subdev *sdev;
>> + int ret = -ENOENT;
>> +
>> + mutex_lock(lk);
>> + if (spool->xsp_closing) {
>> + /* Pool is going away, all subdevs will be gone. */
>> + mutex_unlock(lk);
>> + return 0;
>> + }
>> + list_for_each(ptr, dl) {
>> + sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
>> + if (sdev->xs_id != id || sdev->xs_pdev->id != instance)
>> + continue;
>> + xrt_subdev_pool_wait_for_holders(spool, sdev);
>> + list_del(&sdev->xs_dev_list);
>> + ret = 0;
>> + break;
>> + }
>> + mutex_unlock(lk);
>> + if (ret)
>> + return ret;
>> +
>> + xrt_subdev_destroy(sdev);
>> + return 0;
>> +}
>> +
>> +static int xrt_subdev_pool_get_impl(struct xrt_subdev_pool *spool, xrt_subdev_match_t match,
>> + void *arg, struct device *holder_dev, struct xrt_subdev **sdevp)
>> +{
>> + struct platform_device *pdev = (struct platform_device *)arg;
>> + struct list_head *dl = &spool->xsp_dev_list;
>> + struct mutex *lk = &spool->xsp_lock;
>> + struct xrt_subdev *sdev = NULL;
>> + const struct list_head *ptr;
>> + struct xrt_subdev *d = NULL;
>> + int ret = -ENOENT;
>> +
>> + mutex_lock(lk);
>> +
>> + if (!pdev) {
>> + if (match == XRT_SUBDEV_MATCH_PREV) {
>> + sdev = list_empty(dl) ? NULL :
>> + list_last_entry(dl, struct xrt_subdev, xs_dev_list);
>> + } else if (match == XRT_SUBDEV_MATCH_NEXT) {
>> + sdev = list_first_entry_or_null(dl, struct xrt_subdev, xs_dev_list);
>> + }
>> + }
>> +
>> + list_for_each(ptr, dl) {
> ok
>> + d = list_entry(ptr, struct xrt_subdev, xs_dev_list);
>> + if (match == XRT_SUBDEV_MATCH_PREV || match == XRT_SUBDEV_MATCH_NEXT) {
>> + if (d->xs_pdev != pdev)
>> + continue;
>> + } else {
>> + if (!match(d->xs_id, d->xs_pdev, arg))
>> + continue;
>> + }
>> +
>> + if (match == XRT_SUBDEV_MATCH_PREV)
>> + sdev = !list_is_first(ptr, dl) ? list_prev_entry(d, xs_dev_list) : NULL;
>> + else if (match == XRT_SUBDEV_MATCH_NEXT)
>> + sdev = !list_is_last(ptr, dl) ? list_next_entry(d, xs_dev_list) : NULL;
>> + else
>> + sdev = d;
>> + }
>> +
>> + if (sdev)
>> + ret = xrt_subdev_hold(sdev, holder_dev);
>> +
>> + mutex_unlock(lk);
>> +
>> + if (!ret)
>> + *sdevp = sdev;
>> + return ret;
>> +}
>> +
>> +int xrt_subdev_pool_get(struct xrt_subdev_pool *spool, xrt_subdev_match_t match, void *arg,
>> + struct device *holder_dev, struct platform_device **pdevp)
>> +{
>> + int rc;
>> + struct xrt_subdev *sdev;
>> +
>> + rc = xrt_subdev_pool_get_impl(spool, match, arg, holder_dev, &sdev);
>> + if (rc) {
>> + if (rc != -ENOENT)
>> + dev_err(holder_dev, "failed to hold device: %d", rc);
>> + return rc;
>> + }
>> +
>> + if (!IS_ROOT_DEV(holder_dev)) {
> ok
>> + xrt_dbg(to_platform_device(holder_dev), "%s <<==== %s",
>> + dev_name(holder_dev), dev_name(DEV(sdev->xs_pdev)));
>> + }
>> +
>> + *pdevp = sdev->xs_pdev;
>> + return 0;
>> +}
>> +
>> +static int xrt_subdev_pool_put_impl(struct xrt_subdev_pool *spool, struct platform_device *pdev,
>> + struct device *holder_dev)
>> +{
>> + const struct list_head *ptr;
>> + struct mutex *lk = &spool->xsp_lock;
>> + struct list_head *dl = &spool->xsp_dev_list;
>> + struct xrt_subdev *sdev;
>> + int ret = -ENOENT;
>> +
>> + mutex_lock(lk);
>> + list_for_each(ptr, dl) {
>> + sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
>> + if (sdev->xs_pdev != pdev)
>> + continue;
>> + ret = xrt_subdev_release(sdev, holder_dev);
>> + break;
>> + }
>> + mutex_unlock(lk);
>> +
>> + return ret;
>> +}
>> +
>> +int xrt_subdev_pool_put(struct xrt_subdev_pool *spool, struct platform_device *pdev,
>> + struct device *holder_dev)
>> +{
>> + int ret = xrt_subdev_pool_put_impl(spool, pdev, holder_dev);
>> +
>> + if (ret)
>> + return ret;
>> +
>> + if (!IS_ROOT_DEV(holder_dev)) {
> ok
>> + xrt_dbg(to_platform_device(holder_dev), "%s <<==X== %s",
>> + dev_name(holder_dev), dev_name(DEV(pdev)));
>> + }
>> + return 0;
>> +}
>> +
>> +void xrt_subdev_pool_trigger_event(struct xrt_subdev_pool *spool, enum xrt_events e)
>> +{
>> + struct platform_device *tgt = NULL;
>> + struct xrt_subdev *sdev = NULL;
>> + struct xrt_event evt;
>> +
>> + while (!xrt_subdev_pool_get_impl(spool, XRT_SUBDEV_MATCH_NEXT,
>> + tgt, spool->xsp_owner, &sdev)) {
>> + tgt = sdev->xs_pdev;
>> + evt.xe_evt = e;
>> + evt.xe_subdev.xevt_subdev_id = sdev->xs_id;
>> + evt.xe_subdev.xevt_subdev_instance = tgt->id;
>> + xrt_subdev_root_request(tgt, XRT_ROOT_EVENT_SYNC, &evt);
>> + xrt_subdev_pool_put_impl(spool, tgt, spool->xsp_owner);
>> + }
>> +}
>> +
>> +void xrt_subdev_pool_handle_event(struct xrt_subdev_pool *spool, struct xrt_event *evt)
>> +{
>> + struct platform_device *tgt = NULL;
>> + struct xrt_subdev *sdev = NULL;
>> +
>> + while (!xrt_subdev_pool_get_impl(spool, XRT_SUBDEV_MATCH_NEXT,
>> + tgt, spool->xsp_owner, &sdev)) {
>> + tgt = sdev->xs_pdev;
>> + xleaf_call(tgt, XRT_XLEAF_EVENT, evt);
>> + xrt_subdev_pool_put_impl(spool, tgt, spool->xsp_owner);
>> + }
>> +}
>> +
>> +ssize_t xrt_subdev_pool_get_holders(struct xrt_subdev_pool *spool,
>> + struct platform_device *pdev, char *buf, size_t len)
>> +{
>> + const struct list_head *ptr;
>> + struct mutex *lk = &spool->xsp_lock;
>> + struct list_head *dl = &spool->xsp_dev_list;
>> + struct xrt_subdev *sdev;
>> + ssize_t ret = 0;
>> +
>> + mutex_lock(lk);
>> + list_for_each(ptr, dl) {
>> + sdev = list_entry(ptr, struct xrt_subdev, xs_dev_list);
>> + if (sdev->xs_pdev != pdev)
>> + continue;
>> + ret = xrt_subdev_get_holders(sdev, buf, len);
>> + break;
>> + }
>> + mutex_unlock(lk);
>> +
>> + return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(xrt_subdev_pool_get_holders);
>> +
>> +int xleaf_broadcast_event(struct platform_device *pdev, enum xrt_events evt, bool async)
>> +{
>> + struct xrt_event e = { evt, };
>> + enum xrt_root_cmd cmd = async ? XRT_ROOT_EVENT_ASYNC : XRT_ROOT_EVENT_SYNC;
>> +
>> + WARN_ON(evt == XRT_EVENT_POST_CREATION || evt == XRT_EVENT_PRE_REMOVAL);
>> + return xrt_subdev_root_request(pdev, cmd, &e);
>> +}
>> +EXPORT_SYMBOL_GPL(xleaf_broadcast_event);
>> +
>> +void xleaf_hot_reset(struct platform_device *pdev)
>> +{
>> + xrt_subdev_root_request(pdev, XRT_ROOT_HOT_RESET, NULL);
>> +}
>> +EXPORT_SYMBOL_GPL(xleaf_hot_reset);
>> +
>> +void xleaf_get_barres(struct platform_device *pdev, struct resource **res, uint bar_idx)
>> +{
>> + struct xrt_root_get_res arg = { 0 };
>> +
>> + if (bar_idx > PCI_STD_RESOURCE_END) {
>> + xrt_err(pdev, "Invalid bar idx %d", bar_idx);
>> + *res = NULL;
>> + return;
>> + }
>> +
>> + xrt_subdev_root_request(pdev, XRT_ROOT_GET_RESOURCE, &arg);
>> +
>> + *res = &arg.xpigr_res[bar_idx];
>> +}
>> +
>> +void xleaf_get_root_id(struct platform_device *pdev, unsigned short *vendor, unsigned short *device,
>> + unsigned short *subvendor, unsigned short *subdevice)
>> +{
>> + struct xrt_root_get_id id = { 0 };
>> +
>> + WARN_ON(!vendor && !device && !subvendor && !subdevice);
> ok
>
> Tom
>
>> +
>> + xrt_subdev_root_request(pdev, XRT_ROOT_GET_ID, (void *)&id);
>> + if (vendor)
>> + *vendor = id.xpigi_vendor_id;
>> + if (device)
>> + *device = id.xpigi_device_id;
>> + if (subvendor)
>> + *subvendor = id.xpigi_sub_vendor_id;
>> + if (subdevice)
>> + *subdevice = id.xpigi_sub_device_id;
>> +}
>> +
>> +struct device *xleaf_register_hwmon(struct platform_device *pdev, const char *name, void *drvdata,
>> + const struct attribute_group **grps)
>> +{
>> + struct xrt_root_hwmon hm = { true, name, drvdata, grps, };
>> +
>> + xrt_subdev_root_request(pdev, XRT_ROOT_HWMON, (void *)&hm);
>> + return hm.xpih_hwmon_dev;
>> +}
>> +
>> +void xleaf_unregister_hwmon(struct platform_device *pdev, struct device *hwmon)
>> +{
>> + struct xrt_root_hwmon hm = { false, };
>> +
>> + hm.xpih_hwmon_dev = hwmon;
>> + xrt_subdev_root_request(pdev, XRT_ROOT_HWMON, (void *)&hm);
>> +}
Hi Tom,
On 3/31/21 6:03 AM, Tom Rix wrote:
> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>> The PCIE device driver which attaches to management function on Alveo
>> devices. It instantiates one or more group drivers which, in turn,
>> instantiate platform drivers. The instantiation of group and platform
>> drivers is completely dtb driven.
>>
>> Signed-off-by: Sonal Santan <[email protected]>
>> Signed-off-by: Max Zhen <[email protected]>
>> Signed-off-by: Lizhi Hou <[email protected]>
>> ---
>> drivers/fpga/xrt/mgmt/root.c | 333 +++++++++++++++++++++++++++++++++++
>> 1 file changed, 333 insertions(+)
>> create mode 100644 drivers/fpga/xrt/mgmt/root.c
>>
>> diff --git a/drivers/fpga/xrt/mgmt/root.c b/drivers/fpga/xrt/mgmt/root.c
>> new file mode 100644
>> index 000000000000..f97f92807c01
>> --- /dev/null
>> +++ b/drivers/fpga/xrt/mgmt/root.c
>> @@ -0,0 +1,333 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Xilinx Alveo Management Function Driver
>> + *
>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>> + *
>> + * Authors:
>> + * Cheng Zhen <[email protected]>
>> + */
>> +
>> +#include <linux/module.h>
>> +#include <linux/pci.h>
>> +#include <linux/aer.h>
>> +#include <linux/vmalloc.h>
>> +#include <linux/delay.h>
>> +
>> +#include "xroot.h"
>> +#include "xmgnt.h"
>> +#include "metadata.h"
>> +
>> +#define XMGMT_MODULE_NAME "xrt-mgmt"
> ok
>> +#define XMGMT_DRIVER_VERSION "4.0.0"
>> +
>> +#define XMGMT_PDEV(xm) ((xm)->pdev)
>> +#define XMGMT_DEV(xm) (&(XMGMT_PDEV(xm)->dev))
>> +#define xmgmt_err(xm, fmt, args...) \
>> + dev_err(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
>> +#define xmgmt_warn(xm, fmt, args...) \
>> + dev_warn(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
>> +#define xmgmt_info(xm, fmt, args...) \
>> + dev_info(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
>> +#define xmgmt_dbg(xm, fmt, args...) \
>> + dev_dbg(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
>> +#define XMGMT_DEV_ID(_pcidev) \
>> + ({ typeof(_pcidev) (pcidev) = (_pcidev); \
>> + ((pci_domain_nr((pcidev)->bus) << 16) | \
>> + PCI_DEVID((pcidev)->bus->number, 0)); })
>> +
>> +static struct class *xmgmt_class;
>> +
>> +/* PCI Device IDs */
> add a comment on what a golden image is here, something like:
>
> /*
>  * Golden image is preloaded on the device when it is shipped to customer.
>  * Then, customer can load other shells (from Xilinx or some other vendor).
>  * If something goes wrong with the shell, customer can always go back to
>  * golden and start over again.
>  */
>
Will do.
>> +#define PCI_DEVICE_ID_U50_GOLDEN 0xD020
>> +#define PCI_DEVICE_ID_U50 0x5020
>> +static const struct pci_device_id xmgmt_pci_ids[] = {
>> + { PCI_DEVICE(PCI_VENDOR_ID_XILINX, PCI_DEVICE_ID_U50_GOLDEN), }, /* Alveo U50 (golden) */
>> + { PCI_DEVICE(PCI_VENDOR_ID_XILINX, PCI_DEVICE_ID_U50), }, /* Alveo U50 */
>> + { 0, }
>> +};
>> +
>> +struct xmgmt {
>> + struct pci_dev *pdev;
>> + void *root;
>> +
>> + bool ready;
>> +};
>> +
>> +static int xmgmt_config_pci(struct xmgmt *xm)
>> +{
>> + struct pci_dev *pdev = XMGMT_PDEV(xm);
>> + int rc;
>> +
>> + rc = pcim_enable_device(pdev);
>> + if (rc < 0) {
>> + xmgmt_err(xm, "failed to enable device: %d", rc);
>> + return rc;
>> + }
>> +
>> + rc = pci_enable_pcie_error_reporting(pdev);
>> + if (rc)
> ok
>> + xmgmt_warn(xm, "failed to enable AER: %d", rc);
>> +
>> + pci_set_master(pdev);
>> +
>> + rc = pcie_get_readrq(pdev);
>> + if (rc > 512)
> 512 is magic number, change this to a #define
Will do.
>> + pcie_set_readrq(pdev, 512);
>> + return 0;
>> +}
>> +
>> +static int xmgmt_match_slot_and_save(struct device *dev, void *data)
>> +{
>> + struct xmgmt *xm = data;
>> + struct pci_dev *pdev = to_pci_dev(dev);
>> +
>> + if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
>> + pci_cfg_access_lock(pdev);
>> + pci_save_state(pdev);
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static void xmgmt_pci_save_config_all(struct xmgmt *xm)
>> +{
>> + bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_save);
> refactor expected in v5 when pseudo bus change happens.
There might be some misunderstanding here...
No matter how we reorganize our code (using platform_device bus type or
defining our own bus type), it's a driver that drives a PCIE device
after all. So, this mgmt/root.c must be a PCIE driver, which may
interact with a whole bunch of IP drivers through a pseudo bus we are
about to create.
What this code is doing here is purely PCIE business (PCIE config
space access). So, I think it is appropriate code in a PCIE driver.
The PCIE device we are driving is a multi-function device. The mgmt pf
is of function 0, which, according to PCIE spec, can manage other
functions on the same device. So, I think it's appropriate for mgmt pf
driver (this root driver) to find its peer function (through PCIE bus
type) on the same device and do something about it in certain special cases.
Please let me know why you expect this code to be refactored and how you
want it to be refactored. I might have missed something here...
>> +}
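To illustrate the peer-function matching described above: XMGMT_DEV_ID() forces devfn to zero before comparing, so the mgmt PF and the user PF on the same domain/bus collapse to one ID. A userspace sketch of that computation, assuming the standard PCI_DEVID packing of ((bus) << 8) | (devfn):

```c
#include <assert.h>

/* PCI_DEVID-style packing: bus in the high byte, devfn in the low byte. */
static unsigned int demo_devid(unsigned int bus, unsigned int devfn)
{
	return (bus << 8) | devfn;
}

/* XMGMT_DEV_ID-style ID: domain above bus, devfn forced to zero. */
static unsigned int demo_dev_id(unsigned int domain, unsigned int bus,
				unsigned int devfn)
{
	(void)devfn;	/* deliberately ignored: any function matches */
	return (domain << 16) | demo_devid(bus, 0);
}
```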
>> +
>> +static int xmgmt_match_slot_and_restore(struct device *dev, void *data)
>> +{
>> + struct xmgmt *xm = data;
>> + struct pci_dev *pdev = to_pci_dev(dev);
>> +
>> + if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
>> + pci_restore_state(pdev);
>> + pci_cfg_access_unlock(pdev);
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static void xmgmt_pci_restore_config_all(struct xmgmt *xm)
>> +{
>> + bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_restore);
>> +}
>> +
>> +static void xmgmt_root_hot_reset(struct pci_dev *pdev)
>> +{
>> + struct xmgmt *xm = pci_get_drvdata(pdev);
>> + struct pci_bus *bus;
>> + u8 pci_bctl;
>> + u16 pci_cmd, devctl;
>> + int i, ret;
>> +
>> + xmgmt_info(xm, "hot reset start");
>> +
>> + xmgmt_pci_save_config_all(xm);
>> +
>> + pci_disable_device(pdev);
>> +
>> + bus = pdev->bus;
> whitespace, all these newlines are not needed
Will remove them.
>> +
>> + /*
>> + * When flipping the SBR bit, device can fall off the bus. This is
>> + * usually no problem at all so long as drivers are working properly
>> + * after SBR. However, some systems complain bitterly when the device
>> + * falls off the bus.
>> + * The quick solution is to temporarily disable the SERR reporting of
>> + * switch port during SBR.
>> + */
>> +
>> + pci_read_config_word(bus->self, PCI_COMMAND, &pci_cmd);
>> + pci_write_config_word(bus->self, PCI_COMMAND, (pci_cmd & ~PCI_COMMAND_SERR));
>> + pcie_capability_read_word(bus->self, PCI_EXP_DEVCTL, &devctl);
>> + pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, (devctl & ~PCI_EXP_DEVCTL_FERE));
>> + pci_read_config_byte(bus->self, PCI_BRIDGE_CONTROL, &pci_bctl);
>> + pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl | PCI_BRIDGE_CTL_BUS_RESET);
> ok
>> + msleep(100);
>> + pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
>> + ssleep(1);
>> +
>> + pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, devctl);
>> + pci_write_config_word(bus->self, PCI_COMMAND, pci_cmd);
>> +
>> + ret = pci_enable_device(pdev);
>> + if (ret)
>> + xmgmt_err(xm, "failed to enable device, ret %d", ret);
>> +
>> + for (i = 0; i < 300; i++) {
>> + pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
>> + if (pci_cmd != 0xffff)
>> + break;
>> + msleep(20);
>> + }
>> + if (i == 300)
>> + xmgmt_err(xm, "time'd out waiting for device to be online after reset");
> time'd -> timed
Will do.
Thanks,
Max
> Tom
>
>> +
>> + xmgmt_info(xm, "waiting for %d ms", i * 20);
>> + xmgmt_pci_restore_config_all(xm);
>> + xmgmt_config_pci(xm);
>> +}
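The bounded config-space poll at the end of xmgmt_root_hot_reset() (300 tries, 20 ms apart) follows a generic retry-with-budget shape. A userspace sketch with a fabricated probe callback standing in for the PCI config read:

```c
#include <assert.h>

#define DEMO_MAX_TRIES 300	/* mirrors the 300 * 20ms budget above */

/* Retry probe() until it reports ready or the budget is exhausted. */
static int poll_until_ready(int (*probe)(void *), void *arg, int *tries)
{
	int i;

	for (i = 0; i < DEMO_MAX_TRIES; i++) {
		if (probe(arg))	/* device answered: config reads are valid */
			break;
		/* real code sleeps here: msleep(20) between config reads */
	}
	*tries = i;
	return i == DEMO_MAX_TRIES ? -1 : 0;	/* -1 means timed out */
}

/* Fake probe: becomes ready after a preset number of attempts. */
static int demo_probe(void *arg)
{
	int *countdown = arg;

	return --(*countdown) <= 0;
}
```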
>> +
>> +static int xmgmt_create_root_metadata(struct xmgmt *xm, char **root_dtb)
>> +{
>> + char *dtb = NULL;
>> + int ret;
>> +
>> + ret = xrt_md_create(XMGMT_DEV(xm), &dtb);
>> + if (ret) {
>> + xmgmt_err(xm, "create metadata failed, ret %d", ret);
>> + goto failed;
>> + }
>> +
>> + ret = xroot_add_vsec_node(xm->root, dtb);
>> + if (ret == -ENOENT) {
>> + /*
>> + * We may be dealing with a MFG board.
>> + * Try vsec-golden which will bring up all hard-coded leaves
>> + * at hard-coded offsets.
>> + */
>> + ret = xroot_add_simple_node(xm->root, dtb, XRT_MD_NODE_VSEC_GOLDEN);
>> + } else if (ret == 0) {
>> + ret = xroot_add_simple_node(xm->root, dtb, XRT_MD_NODE_MGMT_MAIN);
>> + }
>> + if (ret)
>> + goto failed;
>> +
>> + *root_dtb = dtb;
>> + return 0;
>> +
>> +failed:
>> + vfree(dtb);
>> + return ret;
>> +}
>> +
>> +static ssize_t ready_show(struct device *dev,
>> + struct device_attribute *da,
>> + char *buf)
>> +{
>> + struct pci_dev *pdev = to_pci_dev(dev);
>> + struct xmgmt *xm = pci_get_drvdata(pdev);
>> +
>> + return sprintf(buf, "%d\n", xm->ready);
>> +}
>> +static DEVICE_ATTR_RO(ready);
>> +
>> +static struct attribute *xmgmt_root_attrs[] = {
>> + &dev_attr_ready.attr,
>> + NULL
>> +};
>> +
>> +static struct attribute_group xmgmt_root_attr_group = {
>> + .attrs = xmgmt_root_attrs,
>> +};
>> +
>> +static struct xroot_physical_function_callback xmgmt_xroot_pf_cb = {
>> + .xpc_hot_reset = xmgmt_root_hot_reset,
>> +};
>> +
>> +static int xmgmt_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>> +{
>> + int ret;
>> + struct device *dev = &pdev->dev;
>> + struct xmgmt *xm = devm_kzalloc(dev, sizeof(*xm), GFP_KERNEL);
>> + char *dtb = NULL;
>> +
>> + if (!xm)
>> + return -ENOMEM;
>> + xm->pdev = pdev;
>> + pci_set_drvdata(pdev, xm);
>> +
>> + ret = xmgmt_config_pci(xm);
>> + if (ret)
>> + goto failed;
>> +
>> + ret = xroot_probe(pdev, &xmgmt_xroot_pf_cb, &xm->root);
>> + if (ret)
>> + goto failed;
>> +
>> + ret = xmgmt_create_root_metadata(xm, &dtb);
>> + if (ret)
>> + goto failed_metadata;
>> +
>> + ret = xroot_create_group(xm->root, dtb);
>> + vfree(dtb);
>> + if (ret)
>> + xmgmt_err(xm, "failed to create root group: %d", ret);
>> +
>> + if (!xroot_wait_for_bringup(xm->root))
>> + xmgmt_err(xm, "failed to bringup all groups");
>> + else
>> + xm->ready = true;
>> +
>> + ret = sysfs_create_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
>> + if (ret) {
>> + /* Warning instead of failing the probe. */
>> + xmgmt_warn(xm, "create xmgmt root attrs failed: %d", ret);
>> + }
>> +
>> + xroot_broadcast(xm->root, XRT_EVENT_POST_CREATION);
>> + xmgmt_info(xm, "%s started successfully", XMGMT_MODULE_NAME);
>> + return 0;
>> +
>> +failed_metadata:
>> + xroot_remove(xm->root);
>> +failed:
>> + pci_set_drvdata(pdev, NULL);
>> + return ret;
>> +}
>> +
>> +static void xmgmt_remove(struct pci_dev *pdev)
>> +{
>> + struct xmgmt *xm = pci_get_drvdata(pdev);
>> +
>> + xroot_broadcast(xm->root, XRT_EVENT_PRE_REMOVAL);
>> + sysfs_remove_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
>> + xroot_remove(xm->root);
>> + pci_disable_pcie_error_reporting(xm->pdev);
>> + xmgmt_info(xm, "%s cleaned up successfully", XMGMT_MODULE_NAME);
>> +}
>> +
>> +static struct pci_driver xmgmt_driver = {
>> + .name = XMGMT_MODULE_NAME,
>> + .id_table = xmgmt_pci_ids,
>> + .probe = xmgmt_probe,
>> + .remove = xmgmt_remove,
>> +};
>> +
>> +static int __init xmgmt_init(void)
>> +{
>> + int res = 0;
>> +
>> + res = xmgmt_register_leaf();
>> + if (res)
>> + return res;
>> +
>> + xmgmt_class = class_create(THIS_MODULE, XMGMT_MODULE_NAME);
>> + if (IS_ERR(xmgmt_class)) {
>> + xmgmt_unregister_leaf();
>> + return PTR_ERR(xmgmt_class);
>> + }
>> +
>> + res = pci_register_driver(&xmgmt_driver);
>> + if (res) {
>> + class_destroy(xmgmt_class);
>> + xmgmt_unregister_leaf();
>> + return res;
>> + }
>> +
>> + return 0;
>> +}
>> +
>> +static __exit void xmgmt_exit(void)
>> +{
>> + pci_unregister_driver(&xmgmt_driver);
>> + class_destroy(xmgmt_class);
>> + xmgmt_unregister_leaf();
>> +}
>> +
>> +module_init(xmgmt_init);
>> +module_exit(xmgmt_exit);
>> +
>> +MODULE_DEVICE_TABLE(pci, xmgmt_pci_ids);
>> +MODULE_VERSION(XMGMT_DRIVER_VERSION);
>> +MODULE_AUTHOR("XRT Team <[email protected]>");
>> +MODULE_DESCRIPTION("Xilinx Alveo management function driver");
>> +MODULE_LICENSE("GPL v2");
On 4/9/21 11:50 AM, Max Zhen wrote:
> Hi Tom,
>
>
> On 3/31/21 6:03 AM, Tom Rix wrote:
>> On 3/23/21 10:29 PM, Lizhi Hou wrote:
>>> The PCIe device driver that attaches to the management function on
>>> Alveo devices. It instantiates one or more group drivers which, in
>>> turn, instantiate platform drivers. The instantiation of group and
>>> platform drivers is completely dtb-driven.
>>>
>>> Signed-off-by: Sonal Santan<[email protected]>
>>> Signed-off-by: Max Zhen<[email protected]>
>>> Signed-off-by: Lizhi Hou<[email protected]>
>>> ---
>>> drivers/fpga/xrt/mgmt/root.c | 333 +++++++++++++++++++++++++++++++++++
>>> 1 file changed, 333 insertions(+)
>>> create mode 100644 drivers/fpga/xrt/mgmt/root.c
>>>
>>> diff --git a/drivers/fpga/xrt/mgmt/root.c b/drivers/fpga/xrt/mgmt/root.c
>>> new file mode 100644
>>> index 000000000000..f97f92807c01
>>> --- /dev/null
>>> +++ b/drivers/fpga/xrt/mgmt/root.c
>>> @@ -0,0 +1,333 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +/*
>>> + * Xilinx Alveo Management Function Driver
>>> + *
>>> + * Copyright (C) 2020-2021 Xilinx, Inc.
>>> + *
>>> + * Authors:
>>> + * Cheng Zhen<[email protected]>
>>> + */
>>> +
>>> +#include <linux/module.h>
>>> +#include <linux/pci.h>
>>> +#include <linux/aer.h>
>>> +#include <linux/vmalloc.h>
>>> +#include <linux/delay.h>
>>> +
>>> +#include "xroot.h"
>>> +#include "xmgnt.h"
>>> +#include "metadata.h"
>>> +
>>> +#define XMGMT_MODULE_NAME "xrt-mgmt"
>> ok
>>> +#define XMGMT_DRIVER_VERSION "4.0.0"
>>> +
>>> +#define XMGMT_PDEV(xm) ((xm)->pdev)
>>> +#define XMGMT_DEV(xm) (&(XMGMT_PDEV(xm)->dev))
>>> +#define xmgmt_err(xm, fmt, args...) \
>>> + dev_err(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
>>> +#define xmgmt_warn(xm, fmt, args...) \
>>> + dev_warn(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
>>> +#define xmgmt_info(xm, fmt, args...) \
>>> + dev_info(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
>>> +#define xmgmt_dbg(xm, fmt, args...) \
>>> + dev_dbg(XMGMT_DEV(xm), "%s: " fmt, __func__, ##args)
>>> +#define XMGMT_DEV_ID(_pcidev) \
>>> + ({ typeof(_pcidev) (pcidev) = (_pcidev); \
>>> + ((pci_domain_nr((pcidev)->bus) << 16) | \
>>> + PCI_DEVID((pcidev)->bus->number, 0)); })
>>> +
>>> +static struct class *xmgmt_class;
>>> +
>>> +/* PCI Device IDs */
>> add a comment on what a golden image is here, something like:
>>
>> /*
>>  * Golden image is preloaded on the device when it is shipped to customer.
>>  * Then, customer can load other shells (from Xilinx or some other vendor).
>>  * If something goes wrong with the shell, customer can always go back to
>>  * golden and start over again.
>>  */
>>
>
> Will do.
>
>
>>> +#define PCI_DEVICE_ID_U50_GOLDEN 0xD020
>>> +#define PCI_DEVICE_ID_U50 0x5020
>>> +static const struct pci_device_id xmgmt_pci_ids[] = {
>>> + { PCI_DEVICE(PCI_VENDOR_ID_XILINX, PCI_DEVICE_ID_U50_GOLDEN), }, /* Alveo U50 (golden) */
>>> + { PCI_DEVICE(PCI_VENDOR_ID_XILINX, PCI_DEVICE_ID_U50), }, /* Alveo U50 */
>>> + { 0, }
>>> +};
>>> +
>>> +struct xmgmt {
>>> + struct pci_dev *pdev;
>>> + void *root;
>>> +
>>> + bool ready;
>>> +};
>>> +
>>> +static int xmgmt_config_pci(struct xmgmt *xm)
>>> +{
>>> + struct pci_dev *pdev = XMGMT_PDEV(xm);
>>> + int rc;
>>> +
>>> + rc = pcim_enable_device(pdev);
>>> + if (rc < 0) {
>>> + xmgmt_err(xm, "failed to enable device: %d", rc);
>>> + return rc;
>>> + }
>>> +
>>> + rc = pci_enable_pcie_error_reporting(pdev);
>>> + if (rc)
>> ok
>>> + xmgmt_warn(xm, "failed to enable AER: %d", rc);
>>> +
>>> + pci_set_master(pdev);
>>> +
>>> + rc = pcie_get_readrq(pdev);
>>> + if (rc > 512)
>> 512 is a magic number; change this to a #define
>
>
> Will do.
>
>
>>> + pcie_set_readrq(pdev, 512);
>>> + return 0;
>>> +}
>>> +
>>> +static int xmgmt_match_slot_and_save(struct device *dev, void *data)
>>> +{
>>> + struct xmgmt *xm = data;
>>> + struct pci_dev *pdev = to_pci_dev(dev);
>>> +
>>> + if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
>>> + pci_cfg_access_lock(pdev);
>>> + pci_save_state(pdev);
>>> + }
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +static void xmgmt_pci_save_config_all(struct xmgmt *xm)
>>> +{
>>> + bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_save);
>> refactor expected in v5 when pseudo bus change happens.
>
>
> There might be some misunderstanding here...
>
> No matter how we reorganize our code (using the platform_device bus
> type or defining our own bus type), it's a driver that drives a PCIe
> device after all. So, this mgmt/root.c must be a PCIe driver, which may
> interact with a whole bunch of IP drivers through the pseudo bus we are
> about to create.
>
> What this code is doing here is entirely PCIe business (PCIe config
> space access), so I think it is appropriate code for a PCIe driver.
>
> The PCIe device we are driving is a multi-function device. The mgmt PF
> is function 0, which, according to the PCIe spec, can manage the other
> functions on the same device. So, I think it's appropriate for the mgmt
> PF driver (this root driver) to find its peer function (through the
> PCIe bus type) on the same device and act on it in certain special
> cases.
>
> Please let me know why you expect this code to be refactored and how
> you want it to be refactored. I might have missed something here...
>
ok, i get it.
thanks for the explanation.
Tom
>
>>> +}
>>> +
>>> +static int xmgmt_match_slot_and_restore(struct device *dev, void *data)
>>> +{
>>> + struct xmgmt *xm = data;
>>> + struct pci_dev *pdev = to_pci_dev(dev);
>>> +
>>> + if (XMGMT_DEV_ID(pdev) == XMGMT_DEV_ID(xm->pdev)) {
>>> + pci_restore_state(pdev);
>>> + pci_cfg_access_unlock(pdev);
>>> + }
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +static void xmgmt_pci_restore_config_all(struct xmgmt *xm)
>>> +{
>>> + bus_for_each_dev(&pci_bus_type, NULL, xm, xmgmt_match_slot_and_restore);
>>> +}
>>> +
>>> +static void xmgmt_root_hot_reset(struct pci_dev *pdev)
>>> +{
>>> + struct xmgmt *xm = pci_get_drvdata(pdev);
>>> + struct pci_bus *bus;
>>> + u8 pci_bctl;
>>> + u16 pci_cmd, devctl;
>>> + int i, ret;
>>> +
>>> + xmgmt_info(xm, "hot reset start");
>>> +
>>> + xmgmt_pci_save_config_all(xm);
>>> +
>>> + pci_disable_device(pdev);
>>> +
>>> + bus = pdev->bus;
>> whitespace, all these nl's are not needed
>
>
> Will remove them.
>
>
>>> +
>>> + /*
>>> + * When flipping the SBR bit, device can fall off the bus. This is
>>> + * usually no problem at all so long as drivers are working properly
>>> + * after SBR. However, some systems complain bitterly when the device
>>> + * falls off the bus.
>>> + * The quick solution is to temporarily disable the SERR reporting of
>>> + * switch port during SBR.
>>> + */
>>> +
>>> + pci_read_config_word(bus->self, PCI_COMMAND, &pci_cmd);
>>> + pci_write_config_word(bus->self, PCI_COMMAND, (pci_cmd & ~PCI_COMMAND_SERR));
>>> + pcie_capability_read_word(bus->self, PCI_EXP_DEVCTL, &devctl);
>>> + pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, (devctl & ~PCI_EXP_DEVCTL_FERE));
>>> + pci_read_config_byte(bus->self, PCI_BRIDGE_CONTROL, &pci_bctl);
>>> + pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl | PCI_BRIDGE_CTL_BUS_RESET);
>> ok
>>> + msleep(100);
>>> + pci_write_config_byte(bus->self, PCI_BRIDGE_CONTROL, pci_bctl);
>>> + ssleep(1);
>>> +
>>> + pcie_capability_write_word(bus->self, PCI_EXP_DEVCTL, devctl);
>>> + pci_write_config_word(bus->self, PCI_COMMAND, pci_cmd);
>>> +
>>> + ret = pci_enable_device(pdev);
>>> + if (ret)
>>> + xmgmt_err(xm, "failed to enable device, ret %d", ret);
>>> +
>>> + for (i = 0; i < 300; i++) {
>>> + pci_read_config_word(pdev, PCI_COMMAND, &pci_cmd);
>>> + if (pci_cmd != 0xffff)
>>> + break;
>>> + msleep(20);
>>> + }
>>> + if (i == 300)
>>> + xmgmt_err(xm, "time'd out waiting for device to be online after reset");
>> time'd -> timed
>
>
> Will do.
>
>
> Thanks,
>
> Max
>
>> Tom
>>
>>> +
>>> + xmgmt_info(xm, "waiting for %d ms", i * 20);
>>> + xmgmt_pci_restore_config_all(xm);
>>> + xmgmt_config_pci(xm);
>>> +}
>>> +
>>> +static int xmgmt_create_root_metadata(struct xmgmt *xm, char **root_dtb)
>>> +{
>>> + char *dtb = NULL;
>>> + int ret;
>>> +
>>> + ret = xrt_md_create(XMGMT_DEV(xm), &dtb);
>>> + if (ret) {
>>> + xmgmt_err(xm, "create metadata failed, ret %d", ret);
>>> + goto failed;
>>> + }
>>> +
>>> + ret = xroot_add_vsec_node(xm->root, dtb);
>>> + if (ret == -ENOENT) {
>>> + /*
>>> + * We may be dealing with a MFG board.
>>> + * Try vsec-golden which will bring up all hard-coded leaves
>>> + * at hard-coded offsets.
>>> + */
>>> + ret = xroot_add_simple_node(xm->root, dtb, XRT_MD_NODE_VSEC_GOLDEN);
>>> + } else if (ret == 0) {
>>> + ret = xroot_add_simple_node(xm->root, dtb, XRT_MD_NODE_MGMT_MAIN);
>>> + }
>>> + if (ret)
>>> + goto failed;
>>> +
>>> + *root_dtb = dtb;
>>> + return 0;
>>> +
>>> +failed:
>>> + vfree(dtb);
>>> + return ret;
>>> +}
>>> +
>>> +static ssize_t ready_show(struct device *dev,
>>> + struct device_attribute *da,
>>> + char *buf)
>>> +{
>>> + struct pci_dev *pdev = to_pci_dev(dev);
>>> + struct xmgmt *xm = pci_get_drvdata(pdev);
>>> +
>>> + return sprintf(buf, "%d\n", xm->ready);
>>> +}
>>> +static DEVICE_ATTR_RO(ready);
>>> +
>>> +static struct attribute *xmgmt_root_attrs[] = {
>>> + &dev_attr_ready.attr,
>>> + NULL
>>> +};
>>> +
>>> +static struct attribute_group xmgmt_root_attr_group = {
>>> + .attrs = xmgmt_root_attrs,
>>> +};
>>> +
>>> +static struct xroot_physical_function_callback xmgmt_xroot_pf_cb = {
>>> + .xpc_hot_reset = xmgmt_root_hot_reset,
>>> +};
>>> +
>>> +static int xmgmt_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>>> +{
>>> + int ret;
>>> + struct device *dev = &pdev->dev;
>>> + struct xmgmt *xm = devm_kzalloc(dev, sizeof(*xm), GFP_KERNEL);
>>> + char *dtb = NULL;
>>> +
>>> + if (!xm)
>>> + return -ENOMEM;
>>> + xm->pdev = pdev;
>>> + pci_set_drvdata(pdev, xm);
>>> +
>>> + ret = xmgmt_config_pci(xm);
>>> + if (ret)
>>> + goto failed;
>>> +
>>> + ret = xroot_probe(pdev, &xmgmt_xroot_pf_cb, &xm->root);
>>> + if (ret)
>>> + goto failed;
>>> +
>>> + ret = xmgmt_create_root_metadata(xm, &dtb);
>>> + if (ret)
>>> + goto failed_metadata;
>>> +
>>> + ret = xroot_create_group(xm->root, dtb);
>>> + vfree(dtb);
>>> + if (ret)
>>> + xmgmt_err(xm, "failed to create root group: %d", ret);
>>> +
>>> + if (!xroot_wait_for_bringup(xm->root))
>>> + xmgmt_err(xm, "failed to bringup all groups");
>>> + else
>>> + xm->ready = true;
>>> +
>>> + ret = sysfs_create_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
>>> + if (ret) {
>>> + /* Warning instead of failing the probe. */
>>> + xmgmt_warn(xm, "create xmgmt root attrs failed: %d", ret);
>>> + }
>>> +
>>> + xroot_broadcast(xm->root, XRT_EVENT_POST_CREATION);
>>> + xmgmt_info(xm, "%s started successfully", XMGMT_MODULE_NAME);
>>> + return 0;
>>> +
>>> +failed_metadata:
>>> + xroot_remove(xm->root);
>>> +failed:
>>> + pci_set_drvdata(pdev, NULL);
>>> + return ret;
>>> +}
>>> +
>>> +static void xmgmt_remove(struct pci_dev *pdev)
>>> +{
>>> + struct xmgmt *xm = pci_get_drvdata(pdev);
>>> +
>>> + xroot_broadcast(xm->root, XRT_EVENT_PRE_REMOVAL);
>>> + sysfs_remove_group(&pdev->dev.kobj, &xmgmt_root_attr_group);
>>> + xroot_remove(xm->root);
>>> + pci_disable_pcie_error_reporting(xm->pdev);
>>> + xmgmt_info(xm, "%s cleaned up successfully", XMGMT_MODULE_NAME);
>>> +}
>>> +
>>> +static struct pci_driver xmgmt_driver = {
>>> + .name = XMGMT_MODULE_NAME,
>>> + .id_table = xmgmt_pci_ids,
>>> + .probe = xmgmt_probe,
>>> + .remove = xmgmt_remove,
>>> +};
>>> +
>>> +static int __init xmgmt_init(void)
>>> +{
>>> + int res = 0;
>>> +
>>> + res = xmgmt_register_leaf();
>>> + if (res)
>>> + return res;
>>> +
>>> + xmgmt_class = class_create(THIS_MODULE, XMGMT_MODULE_NAME);
>>> + if (IS_ERR(xmgmt_class))
>>> + return PTR_ERR(xmgmt_class);
>>> +
>>> + res = pci_register_driver(&xmgmt_driver);
>>> + if (res) {
>>> + class_destroy(xmgmt_class);
>>> + return res;
>>> + }
>>> +
>>> + return 0;
>>> +}
>>> +
>>> +static __exit void xmgmt_exit(void)
>>> +{
>>> + pci_unregister_driver(&xmgmt_driver);
>>> + class_destroy(xmgmt_class);
>>> + xmgmt_unregister_leaf();
>>> +}
>>> +
>>> +module_init(xmgmt_init);
>>> +module_exit(xmgmt_exit);
>>> +
>>> +MODULE_DEVICE_TABLE(pci, xmgmt_pci_ids);
>>> +MODULE_VERSION(XMGMT_DRIVER_VERSION);
>>> +MODULE_AUTHOR("XRT Team<[email protected]>");
>>> +MODULE_DESCRIPTION("Xilinx Alveo management function driver");
>>> +MODULE_LICENSE("GPL v2");
>