2021-01-08 21:32:14

by mark gross

Subject: [PATCH v2 00/34] Intel Vision Processing base enabling

From: Mark Gross <[email protected]>

Second try; the first attempt failed to land on lore.

The Intel Vision Processing Unit (VPU) is an IP block that is showing up
for the first time as part of the Keem Bay SoC. Keem Bay is a quad-core
Arm Cortex-A53 SoC. It is designed to be used as a standalone SoC as well
as in a PCIe Vision Processing accelerator add-in card.

This is version 2 of the earlier patch set:
Subject: [PATCH 00/22] Intel Vision Processing Unit base enabling part 1
Date: Tue, 1 Dec 2020 14:34:49 -0800

It also includes all the part 2 patches, along with updates addressing
feedback from the earlier posting.
The most notable changes are:
* added a VPU mailbox IPC driver
* reworked the Keem Bay IPC driver to use that mailbox
* folded the refactoring of the xlink core patch into the xlink core set
* updated documentation
* corrected MAINTAINERS file entries WRT supported vs. maintained use
* added associated drivers:
  * tsens -- thermal management
  * HDDL -- device management
  * Xlink SMBus
  * vpu manager -- user mode interface


Thanks for looking at these and providing feedback.

--mark

C, Udhayakumar (8):
dt-bindings: misc: intel_tsens: Add tsens thermal bindings
documentation
misc: Tsens ARM host thermal driver.
misc: Intel tsens IA host driver.
Intel tsens i2c slave driver.
misc:intel_tsens: Intel Keem Bay tsens driver.
dt-bindings: misc: hddl_dev: Add hddl device management documentation
misc: Hddl device management for local host
misc: HDDL device management for IA host

Daniele Alessandrelli (4):
dt-bindings: mailbox: Add Intel VPU IPC mailbox bindings
mailbox: vpu-ipc-mailbox: Add support for Intel VPU IPC mailbox
dt-bindings: Add bindings for Keem Bay IPC driver
keembay-ipc: Add Keem Bay IPC module

Li, Tingqian (2):
dt-bindings: misc: Add Keem Bay vpumgr
misc: Add Keem Bay VPU manager

Paul Murphy (2):
dt-bindings: Add bindings for Keem Bay VPU IPC driver
keembay-vpu-ipc: Add Keem Bay VPU IPC module

Ramya P Karanth (1):
Intel Keem Bay XLink SMBus driver

Seamus Kelly (7):
xlink-ipc: Add xlink ipc device tree bindings
xlink-ipc: Add xlink ipc driver
xlink-core: Add xlink core device tree bindings
xlink-core: Add xlink core driver xLink
xlink-core: Enable xlink protocol over pcie
xlink-core: Enable VPU IP management and runtime control
xlink-core: add async channel and events

Srikanth Thokala (9):
misc: xlink-pcie: Add documentation for XLink PCIe driver
misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host
misc: xlink-pcie: lh: Add PCIe EP DMA functionality
misc: xlink-pcie: lh: Add core communication logic
misc: xlink-pcie: lh: Prepare changes for adding remote host driver
misc: xlink-pcie: rh: Add PCIe EP driver for Remote Host
misc: xlink-pcie: rh: Add core communication logic
misc: xlink-pcie: Add XLink API interface
misc: xlink-pcie: Add asynchronous event notification support for
XLink

mark gross (1):
Add Vision Processing Unit (VPU) documentation.

.../mailbox/intel,vpu-ipc-mailbox.yaml | 69 +
.../bindings/misc/intel,hddl-client.yaml | 114 +
.../bindings/misc/intel,intel-tsens.yaml | 122 +
.../bindings/misc/intel,keembay-vpu-mgr.yaml | 48 +
.../misc/intel,keembay-xlink-ipc.yaml | 49 +
.../bindings/misc/intel,keembay-xlink.yaml | 27 +
.../bindings/soc/intel/intel,keembay-ipc.yaml | 45 +
.../soc/intel/intel,keembay-vpu-ipc.yaml | 153 ++
Documentation/hwmon/index.rst | 2 +
Documentation/hwmon/intel_tsens_host.rst | 71 +
Documentation/hwmon/intel_tsens_sensor.rst | 67 +
Documentation/i2c/busses/index.rst | 1 +
.../i2c/busses/intel-xlink-smbus.rst | 71 +
Documentation/index.rst | 1 +
.../misc-devices/hddl_device_client.rst | 212 ++
.../misc-devices/hddl_device_server.rst | 205 ++
Documentation/misc-devices/index.rst | 2 +
Documentation/vpu/index.rst | 20 +
Documentation/vpu/vpu-stack-overview.rst | 270 +++
Documentation/vpu/xlink-core.rst | 81 +
Documentation/vpu/xlink-ipc.rst | 51 +
Documentation/vpu/xlink-pcie.rst | 90 +
MAINTAINERS | 54 +
drivers/mailbox/Kconfig | 11 +
drivers/mailbox/Makefile | 2 +
drivers/mailbox/vpu-ipc-mailbox.c | 297 +++
drivers/misc/Kconfig | 7 +
drivers/misc/Makefile | 7 +
drivers/misc/hddl_device/Kconfig | 26 +
drivers/misc/hddl_device/Makefile | 7 +
drivers/misc/hddl_device/hddl_device.c | 565 +++++
drivers/misc/hddl_device/hddl_device_lh.c | 764 +++++++
drivers/misc/hddl_device/hddl_device_rh.c | 837 +++++++
drivers/misc/hddl_device/hddl_device_util.h | 52 +
drivers/misc/intel_tsens/Kconfig | 54 +
drivers/misc/intel_tsens/Makefile | 10 +
drivers/misc/intel_tsens/intel_tsens_host.c | 351 +++
drivers/misc/intel_tsens/intel_tsens_i2c.c | 119 +
.../misc/intel_tsens/intel_tsens_thermal.c | 651 ++++++
.../misc/intel_tsens/intel_tsens_thermal.h | 38 +
drivers/misc/intel_tsens/keembay_thermal.c | 169 ++
drivers/misc/intel_tsens/keembay_tsens.h | 366 +++
drivers/misc/vpumgr/Kconfig | 9 +
drivers/misc/vpumgr/Makefile | 3 +
drivers/misc/vpumgr/vpu_common.h | 31 +
drivers/misc/vpumgr/vpu_mgr.c | 370 +++
drivers/misc/vpumgr/vpu_smm.c | 554 +++++
drivers/misc/vpumgr/vpu_smm.h | 30 +
drivers/misc/vpumgr/vpu_vcm.c | 585 +++++
drivers/misc/vpumgr/vpu_vcm.h | 84 +
drivers/misc/xlink-core/Kconfig | 33 +
drivers/misc/xlink-core/Makefile | 5 +
drivers/misc/xlink-core/xlink-core.c | 1331 +++++++++++
drivers/misc/xlink-core/xlink-core.h | 25 +
drivers/misc/xlink-core/xlink-defs.h | 181 ++
drivers/misc/xlink-core/xlink-dispatcher.c | 436 ++++
drivers/misc/xlink-core/xlink-dispatcher.h | 26 +
drivers/misc/xlink-core/xlink-ioctl.c | 554 +++++
drivers/misc/xlink-core/xlink-ioctl.h | 36 +
drivers/misc/xlink-core/xlink-multiplexer.c | 1164 ++++++++++
drivers/misc/xlink-core/xlink-multiplexer.h | 35 +
drivers/misc/xlink-core/xlink-platform.c | 273 +++
drivers/misc/xlink-core/xlink-platform.h | 65 +
drivers/misc/xlink-ipc/Kconfig | 7 +
drivers/misc/xlink-ipc/Makefile | 4 +
drivers/misc/xlink-ipc/xlink-ipc.c | 878 +++++++
drivers/misc/xlink-pcie/Kconfig | 20 +
drivers/misc/xlink-pcie/Makefile | 2 +
drivers/misc/xlink-pcie/common/core.h | 247 ++
drivers/misc/xlink-pcie/common/interface.c | 126 +
drivers/misc/xlink-pcie/common/util.c | 375 +++
drivers/misc/xlink-pcie/common/util.h | 70 +
drivers/misc/xlink-pcie/common/xpcie.h | 102 +
drivers/misc/xlink-pcie/local_host/Makefile | 6 +
drivers/misc/xlink-pcie/local_host/core.c | 819 +++++++
drivers/misc/xlink-pcie/local_host/dma.c | 577 +++++
drivers/misc/xlink-pcie/local_host/epf.c | 522 +++++
drivers/misc/xlink-pcie/local_host/epf.h | 103 +
drivers/misc/xlink-pcie/remote_host/Makefile | 6 +
drivers/misc/xlink-pcie/remote_host/core.c | 623 +++++
drivers/misc/xlink-pcie/remote_host/main.c | 95 +
drivers/misc/xlink-pcie/remote_host/pci.c | 525 +++++
drivers/misc/xlink-pcie/remote_host/pci.h | 67 +
drivers/misc/xlink-smbus/Kconfig | 26 +
drivers/misc/xlink-smbus/Makefile | 5 +
drivers/misc/xlink-smbus/xlink-smbus.c | 467 ++++
drivers/soc/Kconfig | 1 +
drivers/soc/Makefile | 1 +
drivers/soc/intel/Kconfig | 33 +
drivers/soc/intel/Makefile | 5 +
drivers/soc/intel/keembay-ipc.c | 1364 +++++++++++
drivers/soc/intel/keembay-vpu-ipc.c | 2036 +++++++++++++++++
include/linux/hddl_device.h | 153 ++
include/linux/intel_tsens_host.h | 34 +
include/linux/soc/intel/keembay-ipc.h | 30 +
include/linux/soc/intel/keembay-vpu-ipc.h | 62 +
include/linux/xlink-ipc.h | 48 +
include/linux/xlink.h | 146 ++
include/linux/xlink_drv_inf.h | 72 +
include/uapi/misc/vpumgr.h | 64 +
include/uapi/misc/xlink_uapi.h | 145 ++
101 files changed, 21854 insertions(+)
create mode 100644 Documentation/devicetree/bindings/mailbox/intel,vpu-ipc-mailbox.yaml
create mode 100644 Documentation/devicetree/bindings/misc/intel,hddl-client.yaml
create mode 100644 Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml
create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-vpu-mgr.yaml
create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink-ipc.yaml
create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
create mode 100644 Documentation/devicetree/bindings/soc/intel/intel,keembay-ipc.yaml
create mode 100644 Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml
create mode 100644 Documentation/hwmon/intel_tsens_host.rst
create mode 100644 Documentation/hwmon/intel_tsens_sensor.rst
create mode 100644 Documentation/i2c/busses/intel-xlink-smbus.rst
create mode 100644 Documentation/misc-devices/hddl_device_client.rst
create mode 100644 Documentation/misc-devices/hddl_device_server.rst
create mode 100644 Documentation/vpu/index.rst
create mode 100644 Documentation/vpu/vpu-stack-overview.rst
create mode 100644 Documentation/vpu/xlink-core.rst
create mode 100644 Documentation/vpu/xlink-ipc.rst
create mode 100644 Documentation/vpu/xlink-pcie.rst
create mode 100644 drivers/mailbox/vpu-ipc-mailbox.c
create mode 100644 drivers/misc/hddl_device/Kconfig
create mode 100644 drivers/misc/hddl_device/Makefile
create mode 100644 drivers/misc/hddl_device/hddl_device.c
create mode 100644 drivers/misc/hddl_device/hddl_device_lh.c
create mode 100644 drivers/misc/hddl_device/hddl_device_rh.c
create mode 100644 drivers/misc/hddl_device/hddl_device_util.h
create mode 100644 drivers/misc/intel_tsens/Kconfig
create mode 100644 drivers/misc/intel_tsens/Makefile
create mode 100644 drivers/misc/intel_tsens/intel_tsens_host.c
create mode 100644 drivers/misc/intel_tsens/intel_tsens_i2c.c
create mode 100644 drivers/misc/intel_tsens/intel_tsens_thermal.c
create mode 100644 drivers/misc/intel_tsens/intel_tsens_thermal.h
create mode 100644 drivers/misc/intel_tsens/keembay_thermal.c
create mode 100644 drivers/misc/intel_tsens/keembay_tsens.h
create mode 100644 drivers/misc/vpumgr/Kconfig
create mode 100644 drivers/misc/vpumgr/Makefile
create mode 100644 drivers/misc/vpumgr/vpu_common.h
create mode 100644 drivers/misc/vpumgr/vpu_mgr.c
create mode 100644 drivers/misc/vpumgr/vpu_smm.c
create mode 100644 drivers/misc/vpumgr/vpu_smm.h
create mode 100644 drivers/misc/vpumgr/vpu_vcm.c
create mode 100644 drivers/misc/vpumgr/vpu_vcm.h
create mode 100644 drivers/misc/xlink-core/Kconfig
create mode 100644 drivers/misc/xlink-core/Makefile
create mode 100644 drivers/misc/xlink-core/xlink-core.c
create mode 100644 drivers/misc/xlink-core/xlink-core.h
create mode 100644 drivers/misc/xlink-core/xlink-defs.h
create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.c
create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.h
create mode 100644 drivers/misc/xlink-core/xlink-ioctl.c
create mode 100644 drivers/misc/xlink-core/xlink-ioctl.h
create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.c
create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.h
create mode 100644 drivers/misc/xlink-core/xlink-platform.c
create mode 100644 drivers/misc/xlink-core/xlink-platform.h
create mode 100644 drivers/misc/xlink-ipc/Kconfig
create mode 100644 drivers/misc/xlink-ipc/Makefile
create mode 100644 drivers/misc/xlink-ipc/xlink-ipc.c
create mode 100644 drivers/misc/xlink-pcie/Kconfig
create mode 100644 drivers/misc/xlink-pcie/Makefile
create mode 100644 drivers/misc/xlink-pcie/common/core.h
create mode 100644 drivers/misc/xlink-pcie/common/interface.c
create mode 100644 drivers/misc/xlink-pcie/common/util.c
create mode 100644 drivers/misc/xlink-pcie/common/util.h
create mode 100644 drivers/misc/xlink-pcie/common/xpcie.h
create mode 100644 drivers/misc/xlink-pcie/local_host/Makefile
create mode 100644 drivers/misc/xlink-pcie/local_host/core.c
create mode 100644 drivers/misc/xlink-pcie/local_host/dma.c
create mode 100644 drivers/misc/xlink-pcie/local_host/epf.c
create mode 100644 drivers/misc/xlink-pcie/local_host/epf.h
create mode 100644 drivers/misc/xlink-pcie/remote_host/Makefile
create mode 100644 drivers/misc/xlink-pcie/remote_host/core.c
create mode 100644 drivers/misc/xlink-pcie/remote_host/main.c
create mode 100644 drivers/misc/xlink-pcie/remote_host/pci.c
create mode 100644 drivers/misc/xlink-pcie/remote_host/pci.h
create mode 100644 drivers/misc/xlink-smbus/Kconfig
create mode 100644 drivers/misc/xlink-smbus/Makefile
create mode 100644 drivers/misc/xlink-smbus/xlink-smbus.c
create mode 100644 drivers/soc/intel/Kconfig
create mode 100644 drivers/soc/intel/Makefile
create mode 100644 drivers/soc/intel/keembay-ipc.c
create mode 100644 drivers/soc/intel/keembay-vpu-ipc.c
create mode 100644 include/linux/hddl_device.h
create mode 100644 include/linux/intel_tsens_host.h
create mode 100644 include/linux/soc/intel/keembay-ipc.h
create mode 100644 include/linux/soc/intel/keembay-vpu-ipc.h
create mode 100644 include/linux/xlink-ipc.h
create mode 100644 include/linux/xlink.h
create mode 100644 include/linux/xlink_drv_inf.h
create mode 100644 include/uapi/misc/vpumgr.h
create mode 100644 include/uapi/misc/xlink_uapi.h

--
2.17.1


2021-01-08 21:32:24

by mark gross

Subject: [PATCH v2 25/34] misc: Add Keem Bay VPU manager

From: "Li, Tingqian" <[email protected]>

The VPU IP on the Keem Bay SoC is a vision-acceleration IP complex
under the control of an RTOS-based firmware (running on a RISC MCU
inside the VPU IP), serving user-space applications running on the
CPU side for HW-accelerated computer vision tasks.

This module is the kernel counterpart of VPUAL (the VPU abstraction
layer), which bridges the firmware on the VPU side and applications
in CPU user space. It assists the firmware in serving multiple
user-space application processes on the CPU side concurrently, while
also performing the necessary data buffer management on behalf of
the VPU IP.

objmgr provides the basic infrastructure for creating/destroying
VPU-side software objects concurrently, on demand of user-space
applications, and also automatically releases leaked objects during
handling of application termination. Note that this module only
cares about the life cycle of such objects; it is up to the
application and the firmware to define the behavior/operations of
each object.

objmgr does its job by communicating with the firmware through a
fixed, reserved xlink channel, using a very simple message protocol.

smm provides DMABuf allocation/import facilities that let a
user-space app pass data to/from the VPU in a zero-copy fashion. It
also provides a convenient ioctl for converting the virtual pointer
of a mem-mapped and imported DMABuf into its corresponding DMA
address, so that a user-space app can specify sub-regions of a
bigger DMABuf to be processed by the VPU.
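
For illustration, a minimal user-space sequence using this interface
might look like the sketch below. Assumptions: the uapi header is
installed as <misc/vpumgr.h>, the device node is /dev/vpumgr0
(vpu_ipc_id 0), the vpumgr_vcm_wait fields follow their usage in the
ioctl handler, and all error checking is omitted:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <misc/vpumgr.h>

int main(void)
{
        int fd = open("/dev/vpumgr0", O_RDWR);
        struct vpumgr_args_alloc alloc = { .size = 4096 };
        struct vpumgr_args_import imp = { 0 };
        struct vpumgr_vcm_submit sub = { 0 };
        struct vpumgr_vcm_wait wt = { 0 };
        char args[16] = "example";
        __u64 vpu_ptr;
        void *va;

        /* allocate a VPU-accessible buffer, exported as a DMABuf fd */
        ioctl(fd, VPUMGR_IOCTL_DMABUF_ALLOC, &alloc);

        /* map it for the CPU and import it for VPU access */
        va = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
                  alloc.fd, 0);
        imp.fd = alloc.fd;
        imp.vpu_access = VPU_ACCESS_RW;
        ioctl(fd, VPUMGR_IOCTL_DMABUF_IMPORT, &imp);

        /* translate a pointer inside the mapping to a VPU DMA address */
        vpu_ptr = (__u64)(uintptr_t)va + 128;
        ioctl(fd, VPUMGR_IOCTL_DMABUF_PTR2VPU, &vpu_ptr);

        /* submit a command (first non-reserved code, as an example) */
        sub.cmd = VCTX_KMD_RESERVED_CMD_LAST + 1;
        sub.in = (__u64)(uintptr_t)args;
        sub.in_len = sizeof(args);
        ioctl(fd, VPUMGR_IOCTL_VCM_SUBMIT, &sub);

        /* wait for the firmware's reply, with a 1s timeout */
        wt.submit_id = sub.submit_id;
        wt.timeout_ms = 1000;
        ioctl(fd, VPUMGR_IOCTL_VCM_WAIT, &wt);
        printf("vpu rc = %d\n", wt.vpu_rc);

        ioctl(fd, VPUMGR_IOCTL_DMABUF_UNIMPORT, &alloc.fd);
        munmap(va, 4096);
        close(alloc.fd);
        close(fd);
        return 0;
}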

Signed-off-by: Li, Tingqian <[email protected]>
Signed-off-by: Zhou, Luwei <[email protected]>
Signed-off-by: Wang, jue <[email protected]>
---
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/vpumgr/Kconfig | 9 +
drivers/misc/vpumgr/Makefile | 3 +
drivers/misc/vpumgr/vpu_common.h | 31 ++
drivers/misc/vpumgr/vpu_mgr.c | 370 +++++++++++++++++++
drivers/misc/vpumgr/vpu_smm.c | 554 +++++++++++++++++++++++++++++
drivers/misc/vpumgr/vpu_smm.h | 30 ++
drivers/misc/vpumgr/vpu_vcm.c | 585 +++++++++++++++++++++++++++++++
drivers/misc/vpumgr/vpu_vcm.h | 84 +++++
include/uapi/misc/vpumgr.h | 64 ++++
11 files changed, 1732 insertions(+)
create mode 100644 drivers/misc/vpumgr/Kconfig
create mode 100644 drivers/misc/vpumgr/Makefile
create mode 100644 drivers/misc/vpumgr/vpu_common.h
create mode 100644 drivers/misc/vpumgr/vpu_mgr.c
create mode 100644 drivers/misc/vpumgr/vpu_smm.c
create mode 100644 drivers/misc/vpumgr/vpu_smm.h
create mode 100644 drivers/misc/vpumgr/vpu_vcm.c
create mode 100644 drivers/misc/vpumgr/vpu_vcm.h
create mode 100644 include/uapi/misc/vpumgr.h

diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 09ae65e98681..2d1f7b165cc8 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -484,4 +484,5 @@ source "drivers/misc/uacce/Kconfig"
source "drivers/misc/xlink-pcie/Kconfig"
source "drivers/misc/xlink-ipc/Kconfig"
source "drivers/misc/xlink-core/Kconfig"
+source "drivers/misc/vpumgr/Kconfig"
endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index f3a6eb03bae9..2936930f3edc 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -60,3 +60,4 @@ obj-$(CONFIG_HISI_HIKEY_USB) += hisi_hikey_usb.o
obj-y += xlink-pcie/
obj-$(CONFIG_XLINK_IPC) += xlink-ipc/
obj-$(CONFIG_XLINK_CORE) += xlink-core/
+obj-$(CONFIG_VPUMGR) += vpumgr/
diff --git a/drivers/misc/vpumgr/Kconfig b/drivers/misc/vpumgr/Kconfig
new file mode 100644
index 000000000000..bb82ff83afd3
--- /dev/null
+++ b/drivers/misc/vpumgr/Kconfig
@@ -0,0 +1,9 @@
+config VPUMGR
+ tristate "VPU Manager"
+ depends on ARM64 && XLINK_CORE
+ help
+ VPUMGR manages the life cycle of VPU-related resources that are
+ dynamically allocated on demand of user-space applications.
+
+ Select y or m if you have a processor that includes the Intel
+ Vision Processing Unit (VPU) on it.
diff --git a/drivers/misc/vpumgr/Makefile b/drivers/misc/vpumgr/Makefile
new file mode 100644
index 000000000000..51441dc8a930
--- /dev/null
+++ b/drivers/misc/vpumgr/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_VPUMGR) += vpumgr.o
+vpumgr-objs := vpu_mgr.o vpu_smm.o vpu_vcm.o
diff --git a/drivers/misc/vpumgr/vpu_common.h b/drivers/misc/vpumgr/vpu_common.h
new file mode 100644
index 000000000000..cd474ffc05f3
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_common.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ * VPUMGR Kernel module - common definition
+ * Copyright (C) 2020-2021 Intel Corporation
+ */
+#ifndef _VPU_COMMON_H
+#define _VPU_COMMON_H
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+
+#include <uapi/misc/vpumgr.h>
+
+#include "vpu_vcm.h"
+
+/* there will be one such device for each HW instance */
+struct vpumgr_device {
+ struct device *sdev;
+ struct device *dev;
+ dev_t devnum;
+ struct cdev cdev;
+ struct platform_device *pdev;
+
+ struct vcm_dev vcm;
+ struct dentry *debugfs_root;
+
+ struct mutex client_mutex; /* protect client_list */
+ struct list_head client_list;
+};
+
+#define XLINK_INVALID_SW_DEVID 0xDEADBEEF
+
+#endif
diff --git a/drivers/misc/vpumgr/vpu_mgr.c b/drivers/misc/vpumgr/vpu_mgr.c
new file mode 100644
index 000000000000..75be64ebc3b0
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_mgr.c
@@ -0,0 +1,370 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * VPU Manager Kernel module.
+ *
+ * Copyright (C) 2020-2021 Intel Corporation
+ *
+ */
+#include <linux/debugfs.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+#include "vpu_common.h"
+#include "vpu_smm.h"
+#include "vpu_vcm.h"
+
+#define DRIVER_NAME "vpumgr"
+#define MAX_DEV_CNT 32
+
+/* Define the max xlink device number */
+#define MAX_SW_DEV_CNT 20
+
+/* Define the SW_DEVICE_ID bit mask and offset */
+#define IPC_INTERFACE 0x00
+#define PCIE_INTERFACE 0x01
+#define MASK_INTERFACE 0x7000000
+#define BIT_OFFSET_INTERFACE 24
+#define MASK_VPU_IPC_ID 0x000E
+#define BIT_OFFSET_VPU_ID 1
+
+#define SWDEVID_INTERFACE(sw_dev_id) (((sw_dev_id) & MASK_INTERFACE) >> BIT_OFFSET_INTERFACE)
+#define SWDEVID_VPU_IPC_ID(sw_dev_id) (((sw_dev_id) & MASK_VPU_IPC_ID) >> BIT_OFFSET_VPU_ID)
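+
+/*
+ * Example (illustrative): a sw_device_id of 0x0000002 decodes to
+ * SWDEVID_INTERFACE() == IPC_INTERFACE (bits 26:24 clear) and
+ * SWDEVID_VPU_IPC_ID() == 1 (bits 3:1).
+ */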
+
+static dev_t vpumgr_devnum;
+static struct class *vpumgr_class;
+
+/**
+ * struct vpumgr_fpriv - per-process context stored in FD private data.
+ * @vdev: vpumgr device corresponding to the file
+ * @smm: memory manager
+ * @ctx: vpu context manager
+ * @list: for global list of all opened file
+ * @pid: process which opens the device file
+ */
+struct vpumgr_fpriv {
+ struct vpumgr_device *vdev;
+ struct vpumgr_smm smm;
+ struct vpumgr_ctx ctx;
+ struct list_head list;
+ pid_t pid;
+};
+
+static u32 get_sw_device_id(int vpu_ipc_id)
+{
+ u32 sw_id_list[MAX_SW_DEV_CNT];
+ enum xlink_error rc;
+ u32 num = 0;
+ u32 swid;
+ int i;
+
+ rc = xlink_get_device_list(sw_id_list, &num);
+ if (rc) {
+ pr_err("XLINK get device list error %d in %s\n", rc, __func__);
+ return XLINK_INVALID_SW_DEVID;
+ }
+
+ for (i = 0; i < num; i++) {
+ swid = sw_id_list[i];
+ if (SWDEVID_INTERFACE(swid) == IPC_INTERFACE &&
+ SWDEVID_VPU_IPC_ID(swid) == vpu_ipc_id)
+ return swid;
+ }
+ return XLINK_INVALID_SW_DEVID;
+}
+
+static int vpumgr_open(struct inode *inode, struct file *filp)
+{
+ struct vpumgr_fpriv *vpriv;
+ struct vpumgr_device *vdev;
+ int rc;
+
+ vpriv = kzalloc(sizeof(*vpriv), GFP_KERNEL);
+ if (!vpriv)
+ return -ENOMEM;
+
+ vdev = container_of(inode->i_cdev, struct vpumgr_device, cdev);
+ rc = smm_open(&vpriv->smm, vdev);
+ if (rc)
+ goto free_priv;
+
+ rc = vcm_open(&vpriv->ctx, vdev);
+ if (rc)
+ goto close_smm;
+
+ vpriv->vdev = vdev;
+ vpriv->pid = task_pid_nr(current);
+ INIT_LIST_HEAD(&vpriv->list);
+
+ mutex_lock(&vdev->client_mutex);
+ list_add_tail(&vpriv->list, &vdev->client_list);
+ mutex_unlock(&vdev->client_mutex);
+
+ filp->private_data = vpriv;
+ return 0;
+
+close_smm:
+ smm_close(&vpriv->smm);
+free_priv:
+ kfree(vpriv);
+ return rc;
+}
+
+static int vpumgr_release(struct inode *inode, struct file *filp)
+{
+ struct vpumgr_fpriv *vpriv = filp->private_data;
+ struct vpumgr_device *vdev = container_of(inode->i_cdev, struct vpumgr_device, cdev);
+
+ vcm_close(&vpriv->ctx);
+ smm_close(&vpriv->smm);
+
+ mutex_lock(&vdev->client_mutex);
+ list_del(&vpriv->list);
+ mutex_unlock(&vdev->client_mutex);
+
+ kfree(vpriv);
+ filp->private_data = NULL;
+ return 0;
+}
+
+static long vpumgr_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ struct vpumgr_fpriv *vpriv = filp->private_data;
+ const unsigned int io_dir = _IOC_DIR(cmd);
+ const unsigned int io_size = _IOC_SIZE(cmd);
+ struct vpumgr_vcm_submit *vs;
+ struct vpumgr_vcm_wait *vw;
+ char tmp[128];
+ int rc = 0;
+
+ if (_IOC_TYPE(cmd) != VPUMGR_MAGIC || _IOC_NR(cmd) >= _IOC_NR(VPUMGR_IOCTL_END))
+ return -EINVAL;
+
+ if (io_size > sizeof(tmp))
+ return -EFAULT;
+
+ if (io_dir & _IOC_READ) {
+ if (copy_from_user(tmp, (void __user *)arg, io_size) != 0)
+ return -EFAULT;
+ }
+
+ switch (cmd) {
+ case VPUMGR_IOCTL_DMABUF_ALLOC:
+ rc = smm_alloc(&vpriv->smm, (void *)tmp);
+ break;
+ case VPUMGR_IOCTL_DMABUF_IMPORT:
+ rc = smm_import(&vpriv->smm, (void *)tmp);
+ break;
+ case VPUMGR_IOCTL_DMABUF_UNIMPORT:
+ rc = smm_unimport(&vpriv->smm, (void *)tmp);
+ break;
+ case VPUMGR_IOCTL_DMABUF_PTR2VPU:
+ rc = smm_ptr2vpu(&vpriv->smm, (void *)tmp);
+ break;
+ case VPUMGR_IOCTL_VCM_SUBMIT:
+ vs = (struct vpumgr_vcm_submit *)tmp;
+ if (vs->cmd <= VCTX_KMD_RESERVED_CMD_LAST) {
+ /*
+ * User space may talk to the VPU context living in firmware
+ * with any command other than those reserved for kernel
+ * mode.
+ */
+ rc = -EACCES;
+ break;
+ }
+ rc = vcm_submit(&vpriv->ctx, vs->cmd,
+ (const void *)vs->in, vs->in_len, &vs->submit_id);
+ break;
+ case VPUMGR_IOCTL_VCM_WAIT:
+ vw = (struct vpumgr_vcm_wait *)tmp;
+ rc = vcm_wait(&vpriv->ctx, vw->submit_id, &vw->vpu_rc,
+ (void *)vw->out, &vw->out_len, vw->timeout_ms);
+ break;
+ }
+
+ if (!rc) {
+ if (io_dir & _IOC_WRITE) {
+ if (copy_to_user((void __user *)arg, tmp, io_size) != 0)
+ return -EFAULT;
+ }
+ }
+ return rc;
+}
+
+static const struct file_operations vpumgr_devfile_fops = {
+ .owner = THIS_MODULE,
+ .open = vpumgr_open,
+ .release = vpumgr_release,
+ .unlocked_ioctl = vpumgr_ioctl,
+};
+
+static int vpumgr_debugfs_stats_show(struct seq_file *file, void *offset)
+{
+ struct vpumgr_device *vdev = dev_get_drvdata(file->private);
+ struct vpumgr_fpriv *fpriv;
+ int i = 0;
+
+ mutex_lock(&vdev->client_mutex);
+ list_for_each_entry(fpriv, &vdev->client_list, list) {
+ seq_printf(file, "client #%d pid:%d\n", i++, fpriv->pid);
+ vcm_debugfs_stats_show(file, &fpriv->ctx);
+ smm_debugfs_stats_show(file, &fpriv->smm);
+ }
+ mutex_unlock(&vdev->client_mutex);
+ return 0;
+}
+
+static const struct of_device_id keembay_vpumgr_of_match[] = {
+ { .compatible = "intel,keembay-vpu-mgr"},
+ { .compatible = "intel,keembay-vpusmm"},
+ {}
+};
+MODULE_DEVICE_TABLE(of, keembay_vpumgr_of_match);
+
+static int vpumgr_driver_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct vpumgr_device *vdev;
+ u32 ipc_sw_device_id;
+ u32 vpu_ipc_id = 0;
+ int rc;
+
+ /* get device id */
+ rc = of_property_read_u32(dev->of_node, "intel,keembay-vpu-ipc-id",
+ &vpu_ipc_id);
+ if (rc && rc != -EINVAL) {
+ dev_err(dev, "%s: vpu-ipc-id read failed with rc %d\n", __func__, rc);
+ return -EINVAL;
+ }
+
+ ipc_sw_device_id = get_sw_device_id(vpu_ipc_id);
+ if (ipc_sw_device_id == XLINK_INVALID_SW_DEVID)
+ dev_warn(dev, "%s: no xlink sw device for vpu_ipc_id %d\n",
+ __func__, vpu_ipc_id);
+
+ vdev = devm_kzalloc(dev, sizeof(struct vpumgr_device), GFP_KERNEL);
+ if (!vdev)
+ return -ENOMEM;
+
+ vdev->devnum = MKDEV(MAJOR(vpumgr_devnum), vpu_ipc_id);
+ vdev->pdev = pdev;
+ vdev->dev = dev;
+
+ dev_dbg(dev, "dev->devnum %u, id %u, major %u\n",
+ vdev->devnum, vpu_ipc_id, MAJOR(vdev->devnum));
+
+ vdev->sdev = device_create(vpumgr_class, dev, vdev->devnum,
+ NULL, DRIVER_NAME "%d", vpu_ipc_id);
+ if (IS_ERR(vdev->sdev)) {
+ dev_err(dev, "%s: device_create failed\n", __func__);
+ return PTR_ERR(vdev->sdev);
+ }
+
+ cdev_init(&vdev->cdev, &vpumgr_devfile_fops);
+ vdev->cdev.owner = THIS_MODULE;
+ rc = cdev_add(&vdev->cdev, vdev->devnum, 1);
+ if (rc) {
+ dev_err(dev, "%s: cdev_add failed.\n", __func__);
+ goto destroy_device;
+ }
+
+ vdev->debugfs_root = debugfs_create_dir(dev_name(vdev->sdev), NULL);
+
+ debugfs_create_devm_seqfile(dev, "stats", vdev->debugfs_root,
+ vpumgr_debugfs_stats_show);
+
+ rc = smm_init(vdev);
+ if (rc)
+ goto remove_debugfs;
+
+ rc = vcm_init(vdev, ipc_sw_device_id);
+ if (rc)
+ goto fini_smm;
+
+ INIT_LIST_HEAD(&vdev->client_list);
+ mutex_init(&vdev->client_mutex);
+
+ dev_set_drvdata(dev, vdev);
+ return 0;
+
+fini_smm:
+ smm_fini(vdev);
+remove_debugfs:
+ debugfs_remove_recursive(vdev->debugfs_root);
+ cdev_del(&vdev->cdev);
+destroy_device:
+ device_destroy(vpumgr_class, vdev->devnum);
+ return rc;
+}
+
+static int vpumgr_driver_remove(struct platform_device *pdev)
+{
+ struct vpumgr_device *vdev = dev_get_drvdata(&pdev->dev);
+
+ mutex_destroy(&vdev->client_mutex);
+ vcm_fini(vdev);
+ smm_fini(vdev);
+ debugfs_remove_recursive(vdev->debugfs_root);
+ cdev_del(&vdev->cdev);
+ device_destroy(vpumgr_class, vdev->devnum);
+ return 0;
+}
+
+static struct platform_driver vpumgr_driver = {
+ .probe = vpumgr_driver_probe,
+ .remove = vpumgr_driver_remove,
+ .driver = {
+ .owner = THIS_MODULE,
+ .name = "keembay-vpu-mgr",
+ .of_match_table = keembay_vpumgr_of_match,
+ },
+};
+
+static int __init vpumgr_init(void)
+{
+ int rc;
+
+ rc = alloc_chrdev_region(&vpumgr_devnum, 0, MAX_DEV_CNT, DRIVER_NAME);
+ if (rc < 0) {
+ pr_err("[%s] err: alloc_chrdev_region\n", __func__);
+ return rc;
+ }
+
+ vpumgr_class = class_create(THIS_MODULE, DRIVER_NAME "_class");
+ if (IS_ERR(vpumgr_class)) {
+ rc = PTR_ERR(vpumgr_class);
+ pr_err("[%s] err: class_create\n", __func__);
+ goto unreg_chrdev;
+ }
+
+ rc = platform_driver_register(&vpumgr_driver);
+ if (rc) {
+ pr_err("[%s] err platform_driver_register\n", __func__);
+ goto destroy_class;
+ }
+
+ return 0;
+
+destroy_class:
+ class_destroy(vpumgr_class);
+unreg_chrdev:
+ unregister_chrdev_region(vpumgr_devnum, MAX_DEV_CNT);
+ return rc;
+}
+
+static void vpumgr_exit(void)
+{
+ platform_driver_unregister(&vpumgr_driver);
+ class_destroy(vpumgr_class);
+ unregister_chrdev_region(vpumgr_devnum, MAX_DEV_CNT);
+}
+
+module_init(vpumgr_init)
+module_exit(vpumgr_exit)
+
+MODULE_DESCRIPTION("VPU resource manager driver");
+MODULE_AUTHOR("Tingqian Li <[email protected]>");
+MODULE_AUTHOR("Luwei Zhou <[email protected]>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/misc/vpumgr/vpu_smm.c b/drivers/misc/vpumgr/vpu_smm.c
new file mode 100644
index 000000000000..a89f62984a48
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_smm.c
@@ -0,0 +1,554 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation
+ */
+#include <linux/cdev.h>
+#include <linux/debugfs.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-direct.h>
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/of_device.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/rbtree.h>
+#include <linux/slab.h>
+#include <linux/seq_file.h>
+#include <linux/uaccess.h>
+
+#include <linux/sched.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
+
+#include <uapi/misc/vpumgr.h>
+
+#include "vpu_common.h"
+#include "vpu_smm.h"
+
+/*
+ * A DMABuf exported by the VPU device is described by the following data
+ * structure. The life of the buffer may be longer than the session (it may
+ * be shared with another driver), so it contains a pointer to the device
+ * rather than to the session.
+ */
+struct vpusmm_buffer {
+ struct device *dev;
+ void *cookie;
+ dma_addr_t dma_addr;
+ size_t size;
+ unsigned long dma_attrs;
+};
+
+/*
+ * DMABufs imported to the VPU device are maintained in an rb tree keyed by
+ * the dmabuf's pointer, so user space can unimport one by specifying its fd,
+ * and the ptr2vpu API can work by searching this rb tree.
+ */
+struct vpusmm_impbuf {
+ struct rb_node node;
+ struct dma_buf *dmabuf;
+ struct dma_buf_attachment *attach;
+ struct sg_table *sgt;
+ enum dma_data_direction direction;
+ dma_addr_t vpu_addr;
+ int refcount;
+};
+
+/*
+ * VPU imported dmabuf management
+ */
+static void vpusmm_insert_impbuf(struct vpumgr_smm *sess,
+ struct vpusmm_impbuf *new_item)
+{
+ struct rb_root *root = &sess->imp_rb;
+ struct rb_node **iter = &root->rb_node, *parent = NULL;
+ struct dma_buf *value = new_item->dmabuf;
+ struct vpusmm_impbuf *item;
+
+ while (*iter) {
+ parent = *iter;
+ item = rb_entry(parent, struct vpusmm_impbuf, node);
+
+ if (item->dmabuf > value)
+ iter = &(*iter)->rb_left;
+ else
+ iter = &(*iter)->rb_right;
+ }
+
+ /* Put the new node there */
+ rb_link_node(&new_item->node, parent, iter);
+ rb_insert_color(&new_item->node, root);
+}
+
+static struct vpusmm_impbuf *vpusmm_find_impbuf(struct vpumgr_smm *sess,
+ struct dma_buf *dmabuf)
+{
+ struct rb_node *node = sess->imp_rb.rb_node;
+ struct vpusmm_impbuf *item;
+
+ while (node) {
+ item = rb_entry(node, struct vpusmm_impbuf, node);
+
+ if (item->dmabuf > dmabuf)
+ node = node->rb_left;
+ else if (item->dmabuf < dmabuf)
+ node = node->rb_right;
+ else
+ return item;
+ }
+ return NULL;
+}
+
+static struct vpusmm_impbuf *vpusmm_import_dmabuf(struct vpumgr_smm *sess,
+ int dmabuf_fd,
+ enum dma_data_direction direction, int vpu_id)
+{
+ struct vpusmm_impbuf *item;
+ struct dma_buf_attachment *attach;
+ struct device *dma_dev = sess->vdev->dev;
+ struct dma_buf *dmabuf;
+ struct sg_table *sgt;
+
+ dmabuf = dma_buf_get(dmabuf_fd);
+ if (IS_ERR(dmabuf))
+ return ERR_CAST(dmabuf);
+
+ mutex_lock(&sess->imp_rb_lock);
+ item = vpusmm_find_impbuf(sess, dmabuf);
+ if (item) {
+ item->refcount++;
+ goto found_impbuf;
+ }
+
+ attach = dma_buf_attach(dmabuf, dma_dev);
+ if (IS_ERR(attach)) {
+ item = ERR_CAST(attach);
+ goto fail_attach;
+ }
+
+ sgt = dma_buf_map_attachment(attach, direction);
+ if (IS_ERR(sgt)) {
+ item = ERR_CAST(sgt);
+ goto fail_map;
+ }
+
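+ /*
+ * vpu_addr below uses only the first sg entry, so reject
+ * buffers that did not map into one contiguous DMA segment.
+ */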
+ if (sgt->nents > 1) {
+ item = ERR_PTR(-EINVAL);
+ goto fail_import;
+ }
+
+ item = kzalloc(sizeof(*item), GFP_KERNEL);
+ if (!item) {
+ item = ERR_PTR(-ENOMEM);
+ goto fail_import;
+ }
+
+ item->dmabuf = dmabuf;
+ item->attach = attach;
+ item->sgt = sgt;
+ item->direction = direction;
+ item->vpu_addr = sg_dma_address(sgt->sgl);
+ item->refcount = 1;
+
+ vpusmm_insert_impbuf(sess, item);
+
+ mutex_unlock(&sess->imp_rb_lock);
+ return item;
+
+fail_import:
+ dma_buf_unmap_attachment(attach, sgt, direction);
+fail_map:
+ dma_buf_detach(dmabuf, attach);
+fail_attach:
+found_impbuf:
+ mutex_unlock(&sess->imp_rb_lock);
+ dma_buf_put(dmabuf);
+ return item;
+}
+
+int smm_open(struct vpumgr_smm *sess, struct vpumgr_device *vdev)
+{
+ sess->vdev = vdev;
+ sess->imp_rb = RB_ROOT;
+ mutex_init(&sess->imp_rb_lock);
+ return 0;
+}
+
+int smm_close(struct vpumgr_smm *sess)
+{
+ struct device *dev = sess->vdev->sdev;
+ struct rb_node *cur, *next;
+ struct vpusmm_impbuf *item;
+
+ mutex_destroy(&sess->imp_rb_lock);
+
+ cur = rb_first(&sess->imp_rb);
+ while (cur) {
+ item = rb_entry(cur, struct vpusmm_impbuf, node);
+ next = rb_next(cur);
+ if (item) {
+ dev_dbg(dev, "[%s] PID:%d free leaked imported dmabuf %zu bytes, %d refs\n",
+ __func__, current->pid, item->dmabuf->size, item->refcount);
+ dma_buf_unmap_attachment(item->attach, item->sgt, item->direction);
+ dma_buf_detach(item->dmabuf, item->attach);
+ dma_buf_put(item->dmabuf);
+ rb_erase(&item->node, &sess->imp_rb);
+ kfree(item);
+ }
+ cur = next;
+ }
+ return 0;
+}
+
+/*
+ * DMABuf
+ */
+static struct sg_table *map_dma_buf_vpusmm(struct dma_buf_attachment *attach,
+ enum dma_data_direction dir)
+{
+ struct vpusmm_buffer *buff = attach->dmabuf->priv;
+ struct sg_table *sgt;
+ int rc;
+
+ if (WARN_ON(dir == DMA_NONE))
+ return ERR_PTR(-EINVAL);
+
+ sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
+ if (!sgt)
+ return ERR_PTR(-ENOMEM);
+
+ rc = dma_get_sgtable(buff->dev, sgt, buff->cookie, buff->dma_addr, buff->size);
+ if (rc < 0)
+ goto free_sgt;
+
+ rc = dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir, DMA_ATTR_SKIP_CPU_SYNC);
+ if (!rc)
+ goto free_sg_table;
+ return sgt;
+
+free_sg_table:
+ sg_free_table(sgt);
+free_sgt:
+ kfree(sgt);
+ return ERR_PTR(rc);
+}
+
+static void unmap_dma_buf_vpusmm(struct dma_buf_attachment *attach,
+ struct sg_table *sgt, enum dma_data_direction dir)
+{
+ dma_unmap_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir, DMA_ATTR_SKIP_CPU_SYNC);
+ sg_free_table(sgt);
+ kfree(sgt);
+}
+
+static void release_vpusmm(struct dma_buf *dmabuf)
+{
+ struct vpusmm_buffer *buff = dmabuf->priv;
+
+ dma_free_attrs(buff->dev, buff->size, buff->cookie, buff->dma_addr, buff->dma_attrs);
+ kfree(buff);
+}
+
+static int mmap_vpusmm(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+ struct vpusmm_buffer *buff = dmabuf->priv;
+ unsigned long vm_size;
+
+ vm_size = vma->vm_end - vma->vm_start;
+ if (vm_size > PAGE_ALIGN(buff->size))
+ return -EINVAL;
+
+ vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+ vma->vm_pgoff = 0;
+
+ return dma_mmap_attrs(buff->dev, vma, buff->cookie, buff->dma_addr,
+ buff->size, buff->dma_attrs);
+}
+
+static int vmap_vpusmm(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+ struct vpusmm_buffer *buff = dmabuf->priv;
+
+ dma_buf_map_set_vaddr(map, buff->cookie);
+
+ return 0;
+}
+
+static const struct dma_buf_ops vpusmm_dmabuf_ops = {
+ .cache_sgt_mapping = true,
+ .map_dma_buf = map_dma_buf_vpusmm,
+ .unmap_dma_buf = unmap_dma_buf_vpusmm,
+ .release = release_vpusmm,
+ .mmap = mmap_vpusmm,
+ .vmap = vmap_vpusmm,
+};
+
+/*
+ * Allocate a DMA buffer suitable for VPU access and export it as a
+ * DMABuf fd. The session will hold an additional refcount on the dmabuf
+ * when asked to pass it to the VPU side for processing.
+ */
+int smm_alloc(struct vpumgr_smm *sess, struct vpumgr_args_alloc *arg)
+{
+ struct vpumgr_device *vdev = sess->vdev;
+ const int flags = O_RDWR | O_CLOEXEC;
+ size_t buffer_size = arg->size;
+ struct dma_buf *dmabuf = NULL;
+ phys_addr_t phys_addr;
+ struct dma_buf_export_info exp_info = {
+ .exp_name = dev_name(vdev->sdev),
+ .owner = THIS_MODULE,
+ .ops = &vpusmm_dmabuf_ops,
+ .size = buffer_size,
+ .flags = flags
+ };
+ struct vpusmm_buffer *buff;
+ int retval;
+
+ buff = kzalloc(sizeof(*buff), GFP_KERNEL);
+ if (!buff) {
+ retval = -ENOMEM;
+ goto failed;
+ }
+
+ buff->dev = vdev->dev;
+ buff->size = buffer_size;
+ buff->dma_attrs = DMA_ATTR_FORCE_CONTIGUOUS | DMA_ATTR_WRITE_COMBINE;
+ buff->cookie = dma_alloc_attrs(vdev->dev, buff->size, &buff->dma_addr,
+ GFP_KERNEL | GFP_DMA, buff->dma_attrs);
+ if (!buff->cookie) {
+ retval = -ENOMEM;
+ goto failed;
+ }
+
+ phys_addr = dma_to_phys(vdev->dev, buff->dma_addr);
+
+ exp_info.priv = buff;
+ dmabuf = dma_buf_export(&exp_info);
+ if (IS_ERR(dmabuf)) {
+ retval = PTR_ERR(dmabuf);
+ dmabuf = NULL;
+ goto failed;
+ }
+
+ retval = dma_buf_fd(dmabuf, flags);
+ if (retval < 0) {
+ goto failed;
+ } else {
+ arg->fd = retval;
+ retval = 0;
+ }
+
+ dev_dbg(vdev->dev, "%s: dma_addr=%llx, phys_addr=%llx allocated from %s\n",
+ __func__, buff->dma_addr, phys_addr, dev_name(vdev->dev));
+
+ return 0;
+failed:
+ dev_err(vdev->dev, "%s failed with %d\n", __func__, retval);
+
+ if (dmabuf) {
+ /* this will finally release underlying buff */
+ dma_buf_put(dmabuf);
+ } else if (buff) {
+ if (buff->cookie)
+ dma_free_attrs(vdev->dev, buff->size, buff->cookie,
+ buff->dma_addr, buff->dma_attrs);
+ kfree(buff);
+ }
+ return retval;
+}
+
+int smm_import(struct vpumgr_smm *sess, struct vpumgr_args_import *arg)
+{
+ struct device *dev = sess->vdev->sdev;
+ enum dma_data_direction direction;
+ struct vpusmm_impbuf *item;
+
+ switch (arg->vpu_access) {
+ case VPU_ACCESS_READ:
+ direction = DMA_TO_DEVICE;
+ break;
+ case VPU_ACCESS_WRITE:
+ direction = DMA_FROM_DEVICE;
+ break;
+ case VPU_ACCESS_DEFAULT:
+ case VPU_ACCESS_RW:
+ direction = DMA_BIDIRECTIONAL;
+ break;
+ default:
+ dev_err(dev, "Unknown vpu_access:%d\n", arg->vpu_access);
+ return -EINVAL;
+ }
+
+ item = vpusmm_import_dmabuf(sess, arg->fd, direction, 0);
+ if (IS_ERR(item))
+ return PTR_ERR(item);
+
+ arg->vpu_addr = item->vpu_addr;
+ arg->size = item->dmabuf->size;
+ return 0;
+}
+
+int smm_unimport(struct vpumgr_smm *sess, int *p_dmabuf_fd)
+{
+ struct vpusmm_impbuf *item;
+ struct dma_buf *dmabuf;
+ int rc = 0;
+
+ dmabuf = dma_buf_get(*p_dmabuf_fd);
+ if (IS_ERR(dmabuf))
+ return PTR_ERR(dmabuf);
+
+ mutex_lock(&sess->imp_rb_lock);
+ item = vpusmm_find_impbuf(sess, dmabuf);
+ if (!item) {
+ rc = -EINVAL;
+ goto exit;
+ }
+
+ item->refcount--;
+ if (item->refcount <= 0) {
+ rb_erase(&item->node, &sess->imp_rb);
+ dma_buf_unmap_attachment(item->attach, item->sgt, item->direction);
+ dma_buf_detach(item->dmabuf, item->attach);
+ dma_buf_put(item->dmabuf);
+ kfree(item);
+ }
+exit:
+ mutex_unlock(&sess->imp_rb_lock);
+ dma_buf_put(dmabuf);
+ return rc;
+}
+
+int smm_ptr2vpu(struct vpumgr_smm *sess, unsigned long *arg)
+{
+ struct device *dev = sess->vdev->sdev;
+ struct task_struct *task = current;
+ struct dma_buf *dmabuf = NULL;
+ unsigned long vaddr = *arg;
+ struct vm_area_struct *vma;
+ struct vpusmm_impbuf *item;
+ struct mm_struct *mm;
+
+ mm = get_task_mm(task);
+ if (!mm)
+ goto failed;
+
+ mmap_read_lock(mm);
+
+ vma = find_vma(mm, vaddr);
+ if (!vma) {
+ dev_dbg(dev, "cannot find vaddr: %lx\n", vaddr);
+ goto failed;
+ }
+
+ if (vaddr < vma->vm_start) {
+ dev_dbg(dev, "failed at line %d, vaddr=%lx, vma->vm_start=%lx\n",
+ __LINE__, vaddr, vma->vm_start);
+ goto failed;
+ }
+
+ /* make sure the vma is backed by a dmabuf */
+ if (!vma->vm_file) {
+ dev_dbg(dev, "failed at line %d\n", __LINE__);
+ goto failed;
+ }
+
+ dmabuf = vma->vm_file->private_data;
+ if (!dmabuf) {
+ dev_dbg(dev, "failed at line %d\n", __LINE__);
+ goto failed;
+ }
+
+ if (dmabuf->file != vma->vm_file) {
+ dev_dbg(dev, "failed at line %d\n", __LINE__);
+ goto failed;
+ }
+ mmap_read_unlock(mm);
+ mmput(mm);
+
+ mutex_lock(&sess->imp_rb_lock);
+ item = vpusmm_find_impbuf(sess, dmabuf);
+ mutex_unlock(&sess->imp_rb_lock);
+
+ if (!item) {
+ dev_dbg(dev, "failed to find dmabuf in imported list for vaddr=0x%lx (%d)\n",
+ vaddr, __LINE__);
+ return -EFAULT;
+ }
+
+ *arg = (dma_addr_t)(vaddr - vma->vm_start) + item->vpu_addr;
+ return 0;
+
+failed:
+ if (mm) {
+ mmap_read_unlock(mm);
+ mmput(mm);
+ }
+ return -EFAULT;
+}
+
+int smm_debugfs_stats_show(struct seq_file *file, struct vpumgr_smm *sess)
+{
+ struct rb_node *cur, *next;
+ struct vpusmm_impbuf *item;
+ int i;
+
+ seq_puts(file, "\tdmabuf\texpname\tsize(bytes)\tfilecount\trefs\n");
+
+ mutex_lock(&sess->imp_rb_lock);
+ cur = rb_first(&sess->imp_rb);
+ i = 0;
+ while (cur) {
+ item = rb_entry(cur, struct vpusmm_impbuf, node);
+ next = rb_next(cur);
+ if (item)
+ seq_printf(file, "\t%d:%s\t%s\t%zu\t%ld\t%d\n", i++,
+ item->dmabuf->name ? : "",
+ item->dmabuf->exp_name,
+ item->dmabuf->size,
+ file_count(item->dmabuf->file),
+ item->refcount);
+ cur = next;
+ }
+ mutex_unlock(&sess->imp_rb_lock);
+ return 0;
+}
+
+int smm_init(struct vpumgr_device *vdev)
+{
+ int rc = 0;
+
+ if (!vdev->dev->of_node) {
+ /*
+ * No of_node implies:
+ * 1. no IOMMU; the VPU device is only 32-bit DMA capable
+ * 2. use the default CMA, because no device tree node specifies a
+ *    memory-region
+ */
+ dma_set_mask(vdev->dev, DMA_BIT_MASK(32));
+ dma_set_coherent_mask(vdev->dev, DMA_BIT_MASK(32));
+ } else {
+ /* Initialize reserved memory resources */
+ rc = of_reserved_mem_device_init(vdev->dev);
+ if (rc) {
+ if (rc == -ENODEV) {
+ dev_warn(vdev->dev,
+ "No reserved memory specified, use default cma\n");
+ rc = 0;
+ } else {
+ dev_err(vdev->dev,
+ "Failed to init reserved memory, rc=%d\n", rc);
+ }
+ }
+ }
+ return rc;
+}
+
+int smm_fini(struct vpumgr_device *vdev)
+{
+ return 0;
+}
diff --git a/drivers/misc/vpumgr/vpu_smm.h b/drivers/misc/vpumgr/vpu_smm.h
new file mode 100644
index 000000000000..ff547649d95c
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_smm.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ * Copyright (C) 2020 Intel Corporation
+ */
+#ifndef _VPU_SMM_H
+#define _VPU_SMM_H
+#include <linux/kernel.h>
+#include <linux/rbtree.h>
+
+#include "vpu_common.h"
+
+struct vpumgr_smm {
+ struct vpumgr_device *vdev;
+ struct rb_root imp_rb;
+ struct mutex imp_rb_lock; /* protects imp_rb */
+};
+
+int smm_init(struct vpumgr_device *vdev);
+int smm_fini(struct vpumgr_device *vdev);
+
+int smm_open(struct vpumgr_smm *sess, struct vpumgr_device *vdev);
+int smm_close(struct vpumgr_smm *sess);
+
+int smm_alloc(struct vpumgr_smm *sess, struct vpumgr_args_alloc *arg);
+int smm_import(struct vpumgr_smm *sess, struct vpumgr_args_import *arg);
+int smm_unimport(struct vpumgr_smm *sess, int *p_dmabuf_fd);
+int smm_ptr2vpu(struct vpumgr_smm *sess, unsigned long *arg);
+
+int smm_debugfs_stats_show(struct seq_file *file, struct vpumgr_smm *sess);
+
+#endif
diff --git a/drivers/misc/vpumgr/vpu_vcm.c b/drivers/misc/vpumgr/vpu_vcm.c
new file mode 100644
index 000000000000..03311dbd579a
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_vcm.c
@@ -0,0 +1,585 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020-2021 Intel Corporation
+ */
+#include <linux/delay.h>
+#include <linux/dma-buf.h>
+#include <linux/idr.h>
+#include <linux/mutex.h>
+#include <linux/kthread.h>
+#include <linux/sched/task.h>
+#include <linux/seq_file.h>
+#include <linux/uaccess.h>
+#include <linux/workqueue.h>
+#include <linux/xlink.h>
+#include "vpu_common.h"
+#include "vpu_vcm.h"
+
+#define XLINK_IPC_TIMEOUT 1000u
+
+/* Static xlink configuration */
+#define VCM_XLINK_CHANNEL 1
+#define VCM_XLINK_CHAN_SIZE 128
+
+static const int msg_header_size = offsetof(struct vcm_msg, payload.data);
+
+struct vpu_cmd {
+ struct work_struct work;
+ struct kref refcount;
+ struct xlink_handle *handle;
+ struct vpumgr_ctx *vctx; /* the submitting vpu context */
+ struct vcm_msg msg; /* message buffer for send/recv */
+ struct completion complete; /* completion for async submit/reply */
+ int submit_err; /* error code for the submission process */
+};
+
+static int vcm_vpu_link_init(struct vcm_dev *pvcm)
+{
+ struct vpumgr_device *vdev = container_of(pvcm, struct vpumgr_device, vcm);
+ enum xlink_error rc;
+
+ pvcm->ipc_xlink_handle.dev_type = VPUIP_DEVICE;
+ pvcm->ipc_xlink_handle.sw_device_id = pvcm->sw_dev_id;
+
+ rc = xlink_initialize();
+ if (rc != X_LINK_SUCCESS)
+ goto exit;
+
+ rc = xlink_connect(&pvcm->ipc_xlink_handle);
+ if (rc != X_LINK_SUCCESS)
+ goto exit;
+
+ rc = xlink_open_channel(&pvcm->ipc_xlink_handle, VCM_XLINK_CHANNEL,
+ RXB_TXB, VCM_XLINK_CHAN_SIZE, XLINK_IPC_TIMEOUT);
+ if (rc != X_LINK_SUCCESS && rc != X_LINK_ALREADY_OPEN) {
+ xlink_disconnect(&pvcm->ipc_xlink_handle);
+ goto exit;
+ }
+
+ rc = 0;
+exit:
+ dev_info(vdev->dev, "%s: rc = %d\n", __func__, rc);
+ return -(int)rc;
+}
+
+static int vcm_vpu_link_fini(struct vcm_dev *pvcm)
+{
+ xlink_close_channel(&pvcm->ipc_xlink_handle, VCM_XLINK_CHANNEL);
+ xlink_disconnect(&pvcm->ipc_xlink_handle);
+ return 0;
+}
+
+/*
+ * Send a vcm_msg by xlink.
+ * Given limited xlink payload size, packing is also performed.
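+ * For example, a 300-byte message is sent as three chunks of 128, 128
+ * and 44 bytes, since VCM_XLINK_CHAN_SIZE is 128.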
+ */
+static int vcm_send(struct xlink_handle *xlnk_handle, struct vcm_msg *req)
+{
+ struct vpumgr_device *vdev;
+ enum xlink_error rc;
+ u8 *ptr = (u8 *)req;
+ u32 size = 0;
+ u32 len = req->size;
+
+ vdev = container_of(xlnk_handle, struct vpumgr_device, vcm.ipc_xlink_handle);
+ if (len > sizeof(*req))
+ return -EINVAL;
+ do {
+ size = len > VCM_XLINK_CHAN_SIZE ? VCM_XLINK_CHAN_SIZE : len;
+ rc = xlink_write_volatile(xlnk_handle, VCM_XLINK_CHANNEL, ptr, size);
+ if (rc != X_LINK_SUCCESS) {
+ dev_warn(vdev->dev, "%s xlink_write_volatile error %d\n", __func__, rc);
+ return -EINVAL;
+ }
+ ptr += size;
+ len -= size;
+ } while (len > 0);
+
+ return 0;
+}
+
+/*
+ * Receives a vcm_msg by xlink.
+ * Given limited xlink payload size, unpacking is also performed.
+ */
+static int vcm_recv(struct xlink_handle *xlnk_handle, struct vcm_msg *rep)
+{
+ struct vpumgr_device *vdev;
+ enum xlink_error rc;
+ u64 size;
+ u32 total_size = 0;
+ u32 rx_size = 0;
+ u8 *ptr = (u8 *)rep;
+
+ vdev = container_of(xlnk_handle, struct vpumgr_device, vcm.ipc_xlink_handle);
+ do {
+ /*
+ * Workaround for a bug in xlink_read_data_to_buffer():
+ * although its last argument is declared to be of type (u32 *), the
+ * function actually writes a 64-bit value into that address.
+ */
+ rc = xlink_read_data_to_buffer(xlnk_handle, VCM_XLINK_CHANNEL, ptr, (u32 *)&size);
+ if (rc != X_LINK_SUCCESS) {
+ dev_warn(vdev->dev, "%s: xlink_read_data_to_buffer failed, rc:%d\n",
+ __func__, rc);
+ return -EPIPE;
+ }
+
+ if (total_size == 0) {
+ if (size < msg_header_size) {
+ dev_warn(vdev->dev, "%s: first packet is too small (%llu)\n",
+ __func__, size);
+ return -EINVAL;
+ }
+
+ total_size = rep->size;
+ if (total_size > sizeof(*rep)) {
+ dev_warn(vdev->dev, "%s: packet size (%u) is too big\n",
+ __func__, total_size);
+ return -EINVAL;
+ }
+ if (total_size < size) {
+ dev_warn(vdev->dev,
+ "%s: first packet is smaller than claimed (%llu)\n",
+ __func__, size);
+ return -EINVAL;
+ }
+ }
+
+ ptr += size;
+ rx_size += size;
+ } while (rx_size < total_size);
+
+ if (rx_size != total_size) {
+ dev_warn(vdev->dev, "%s: actual size %u exceeds claimed size %u\n",
+ __func__, rx_size, total_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void vcmd_free(struct kref *kref)
+{
+ struct vpu_cmd *vcmd = container_of(kref, struct vpu_cmd, refcount);
+
+ kvfree(vcmd);
+}
+
+static struct vpu_cmd *vcmd_get(struct vcm_dev *pvcm, int msgid)
+{
+ struct vpu_cmd *vcmd;
+
+ mutex_lock(&pvcm->msg_idr_lock);
+ vcmd = idr_find(&pvcm->msg_idr, msgid);
+ if (vcmd)
+ kref_get(&vcmd->refcount);
+ mutex_unlock(&pvcm->msg_idr_lock);
+
+ return vcmd;
+}
+
+static void vcmd_put(struct vpu_cmd *vcmd)
+{
+ kref_put(&vcmd->refcount, vcmd_free);
+}
+
+static int vcmd_alloc_msgid(struct vcm_dev *pvcm, struct vpu_cmd *vcmd)
+{
+ int msgid;
+
+ mutex_lock(&pvcm->msg_idr_lock);
+ msgid = idr_alloc_cyclic(&pvcm->msg_idr, vcmd, 1, 0, GFP_KERNEL);
+ if (msgid >= 0)
+ kref_init(&vcmd->refcount);
+ mutex_unlock(&pvcm->msg_idr_lock);
+ return msgid;
+}
+
+static void vcmd_remove_msgid(struct vcm_dev *pvcm, int msgid)
+{
+ struct vpu_cmd *vcmd;
+
+ mutex_lock(&pvcm->msg_idr_lock);
+ vcmd = idr_remove(&pvcm->msg_idr, msgid);
+ if (vcmd)
+ kref_put(&vcmd->refcount, vcmd_free);
+ mutex_unlock(&pvcm->msg_idr_lock);
+}
+
+static void vcmd_clean_ctx(struct vcm_dev *pvcm, struct vpumgr_ctx *v)
+{
+ struct vpu_cmd *vcmd;
+ int i;
+
+ mutex_lock(&pvcm->msg_idr_lock);
+ idr_for_each_entry(&pvcm->msg_idr, vcmd, i)
+ if (vcmd->vctx == v) {
+ idr_remove(&pvcm->msg_idr, i);
+ kref_put(&vcmd->refcount, vcmd_free);
+ }
+ mutex_unlock(&pvcm->msg_idr_lock);
+}
+
+static int vcmd_count_ctx(struct vcm_dev *pvcm, struct vpumgr_ctx *v)
+{
+ struct vpu_cmd *vcmd;
+ int i;
+ int count = 0;
+
+ mutex_lock(&pvcm->msg_idr_lock);
+ idr_for_each_entry(&pvcm->msg_idr, vcmd, i)
+ if (vcmd->vctx == v)
+ count++;
+ mutex_unlock(&pvcm->msg_idr_lock);
+
+ return count;
+}
+
+static void vpu_cmd_submit(struct work_struct *work)
+{
+ struct vpu_cmd *p = container_of(work, struct vpu_cmd, work);
+
+ p->submit_err = vcm_send(p->handle, &p->msg);
+}
+
+/*
+ * vcm_submit() - Submit a command to the VPU
+ * @v: Pointer to local vpu context data structure.
+ * @cmd: Command code
+ * @data_in: Data arguments
+ * @in_len: Length of the data arguments
+ * @submit_id: On return, this will contain a newly allocated,
+ * vpu-device-wise unique ID for the submitted command
+ *
+ * Submit a command to the corresponding VPU context running on the
+ * firmware for execution.
+ */
+int vcm_submit(struct vpumgr_ctx *v,
+ u32 cmd, const void *data_in, u32 in_len, s32 *submit_id)
+{
+ struct vcm_dev *pvcm = &v->vdev->vcm;
+ int ctx = v->vpu_ctx_id;
+ int rc = 0;
+ struct vpu_cmd *vcmd;
+
+ if (!v->vdev->vcm.enabled)
+ return -ENOENT;
+
+ if (in_len > VCM_PAYLOAD_SIZE)
+ return -EINVAL;
+
+ vcmd = kvmalloc(sizeof(*vcmd), GFP_KERNEL);
+ if (!vcmd)
+ return -ENOMEM;
+
+ vcmd->vctx = v;
+
+ rc = vcmd_alloc_msgid(pvcm, vcmd);
+ if (rc < 0) {
+ rc = -EEXIST;
+ kvfree(vcmd);
+ return rc;
+ }
+
+ /* from now on, vcmd is refcount-managed */
+ vcmd->msg.id = rc;
+ *submit_id = vcmd->msg.id;
+
+ if (data_in && in_len > 0) {
+ if (access_ok((void __user *)data_in, in_len)) {
+ rc = copy_from_user(vcmd->msg.payload.data,
+ (const void __user *)data_in, in_len);
+ if (rc)
+ goto remove_msgid;
+ } else {
+ memcpy(vcmd->msg.payload.data, data_in, in_len);
+ }
+ }
+
+ init_completion(&vcmd->complete);
+ vcmd->handle = &pvcm->ipc_xlink_handle;
+ vcmd->msg.size = msg_header_size + in_len;
+ vcmd->msg.ctx = ctx;
+ vcmd->msg.cmd = cmd;
+ vcmd->msg.rc = 0;
+ INIT_WORK(&vcmd->work, vpu_cmd_submit);
+
+ if (!queue_work(pvcm->wq, &vcmd->work)) {
+ rc = -EEXIST;
+ goto remove_msgid;
+ }
+
+ return 0;
+
+remove_msgid:
+ vcmd_remove_msgid(pvcm, vcmd->msg.id);
+ return rc;
+}
+
+/*
+ * vcm_wait() - Wait for a submitted command to finish
+ * @v: Pointer to local vpu context data structure.
+ * @submit_id: Unique ID of the submitted command to wait for
+ * @vpu_rc: Return code of the submitted command
+ * @data_out: Return data payload of the submitted command
+ * @p_out_len: Length of the returned payload
+ * @timeout_ms: Time in milliseconds before the wait expires
+ *
+ * Wait for a submitted command to finish and retrieve its return
+ * code and output on success, with a timeout.
+ */
+int vcm_wait(struct vpumgr_ctx *v, s32 submit_id,
+ s32 *vpu_rc, void *data_out, u32 *p_out_len, u32 timeout_ms)
+{
+ struct vcm_dev *pvcm = &v->vdev->vcm;
+ struct device *dev = v->vdev->sdev;
+ unsigned long timeout = msecs_to_jiffies(timeout_ms);
+ struct vpu_cmd *vcmd;
+ int rc, len;
+
+ if (!v->vdev->vcm.enabled)
+ return -ENOENT;
+
+ vcmd = vcmd_get(pvcm, submit_id);
+ if (!vcmd) {
+ dev_err(dev, "%s:cannot find submit_id %d\n", __func__, submit_id);
+ return -EINVAL;
+ }
+
+ if (v != vcmd->vctx) {
+ dev_err(dev, "%s: submit %d does not belong to vpu context %d\n",
+ __func__, submit_id, v->vpu_ctx_id);
+ vcmd_put(vcmd); /* drop the reference taken by vcmd_get() */
+ return -EINVAL;
+ }
+
+ /* wait for submission work to be done */
+ flush_work(&vcmd->work);
+ rc = vcmd->submit_err;
+ if (rc)
+ goto exit;
+
+ /* wait for reply */
+ rc = wait_for_completion_interruptible_timeout(&vcmd->complete, timeout);
+ if (rc < 0) {
+ goto exit;
+ } else if (rc == 0) {
+ rc = -ETIMEDOUT;
+ goto exit;
+ } else {
+ /* wait_for_completion_interruptible_timeout returns a positive
+ * rc on success, but we return zero for success.
+ */
+ rc = 0;
+ }
+
+ if (vpu_rc)
+ *vpu_rc = vcmd->msg.rc;
+
+ if (data_out && p_out_len) {
+ /* truncate payload to fit output buffer size provided */
+ len = vcmd->msg.size - msg_header_size;
+ if (len > (*p_out_len)) {
+ dev_err(dev, "%s: output is truncated from %d to %d to fit buffer size.\n",
+ __func__, len, (*p_out_len));
+ len = (*p_out_len);
+ }
+
+ if (len > 0) {
+ if (access_ok((void __user *)data_out, len)) {
+ rc = copy_to_user((void __user *)data_out,
+ vcmd->msg.payload.data, len);
+ if (rc)
+ goto exit;
+ } else {
+ memcpy(data_out, vcmd->msg.payload.data, len);
+ }
+ }
+
+ /* tell the caller the exact length received, even if
+ * we have truncated due to output buffer size limitation
+ */
+ *p_out_len = len;
+ }
+exit:
+ v->total_vcmds++;
+ vcmd_put(vcmd);
+ vcmd_remove_msgid(pvcm, submit_id);
+ return rc;
+}
+
+static int vcm_call(struct vpumgr_ctx *v,
+ s32 cmd, const void *data_in, u32 in_len,
+ s32 *res_rc, void *data_out, u32 *p_out_len)
+{
+ int submit_id, rc;
+
+ if (!v->vdev->vcm.enabled)
+ return -ENOENT;
+
+ rc = vcm_submit(v, cmd, data_in, in_len, &submit_id);
+ if (rc)
+ return rc;
+
+ return vcm_wait(v, submit_id, res_rc, data_out, p_out_len, 1000);
+}
+
+int vcm_open(struct vpumgr_ctx *v, struct vpumgr_device *vdev)
+{
+ struct device *dev = vdev->sdev;
+ int rep_rc, rc;
+
+ v->vdev = vdev;
+
+ if (!vdev->vcm.enabled)
+ return 0;
+
+ rc = vcm_call(v, VCTX_MSG_CREATE, NULL, 0, &rep_rc, NULL, NULL);
+
+ if (rc != 0 || rep_rc < 0)
+ dev_err(dev, "%s: Vpu context create with rc:%d and vpu reply rc:%d\n",
+ __func__, rc, rep_rc);
+ if (rc)
+ return rc;
+ if (rep_rc < 0)
+ return -ENXIO;
+
+ v->vpu_ctx_id = rep_rc;
+ v->total_vcmds = 0;
+ return 0;
+}
+
+int vcm_close(struct vpumgr_ctx *v)
+{
+ struct device *dev = v->vdev->sdev;
+ int rep_rc, rc;
+
+ if (!v->vdev->vcm.enabled)
+ return 0;
+
+ rc = vcm_call(v, VCTX_MSG_DESTROY, NULL, 0, &rep_rc, NULL, NULL);
+ dev_dbg(dev, "vpu context %d is destroyed with rc:%d and vpu reply rc:%d\n",
+ v->vpu_ctx_id, rc, rep_rc);
+
+ /* remove submits belonging to this context */
+ vcmd_clean_ctx(&v->vdev->vcm, v);
+ return 0;
+}
+
+int vcm_debugfs_stats_show(struct seq_file *file, struct vpumgr_ctx *v)
+{
+ if (!v->vdev->vcm.enabled)
+ return 0;
+ seq_printf(file, "\tvpu context: #%d\n", v->vpu_ctx_id);
+ seq_printf(file, "\t\tNum of completed cmds: %llu\n", v->total_vcmds);
+ seq_printf(file, "\t\tNum of on-going cmds: %d\n", vcmd_count_ctx(&v->vdev->vcm, v));
+ return 0;
+}
+
+static int vcm_rxthread(void *param)
+{
+ struct vpumgr_device *vdev = param;
+ struct device *dev = vdev->sdev;
+ struct vcm_dev *pvcm = &vdev->vcm;
+ struct vcm_msg *msg = &pvcm->rxmsg;
+ struct vpu_cmd *vcmd;
+ int rc;
+
+ while (!kthread_should_stop()) {
+ rc = vcm_recv(&pvcm->ipc_xlink_handle, msg);
+ if (rc == -EPIPE)
+ break;
+ if (rc)
+ continue;
+
+ switch (msg->cmd) {
+ case VCTX_MSG_REPLY:
+ /* find local data associated with that msg id */
+ vcmd = vcmd_get(pvcm, (unsigned long)msg->id);
+ if (!vcmd)
+ break;
+
+ if (msg->ctx != vcmd->msg.ctx)
+ dev_warn(dev, "reply msg #%u's ctx (%u) mismatches vcmd ctx (%u)\n",
+ msg->id, msg->ctx, vcmd->msg.ctx);
+
+ vcmd->submit_err = 0;
+
+ /* submit corresponding to msg->id is done, do post process */
+ memcpy(&vcmd->msg, msg, msg->size);
+ complete(&vcmd->complete);
+
+ vcmd_put(vcmd);
+ break;
+ default:
+ break;
+ }
+ }
+ return rc;
+}
+
+int vcm_init(struct vpumgr_device *vdev, u32 sw_dev_id)
+{
+ struct vcm_dev *pvcm = &vdev->vcm;
+ struct task_struct *rxthread;
+ int rc = 0;
+
+ if (sw_dev_id == XLINK_INVALID_SW_DEVID) {
+ dev_warn(vdev->dev, "%s: vcm is not enabled!\n",
+ __func__);
+ rc = 0;
+ goto exit;
+ }
+
+ pvcm->sw_dev_id = sw_dev_id;
+ rc = vcm_vpu_link_init(pvcm);
+ if (rc)
+ goto exit;
+
+ pvcm->wq = alloc_ordered_workqueue("vcm workqueue", WQ_MEM_RECLAIM | WQ_HIGHPRI);
+ if (!pvcm->wq) {
+ rc = -ENOMEM;
+ goto vpu_link_fini;
+ }
+
+ mutex_init(&pvcm->msg_idr_lock);
+ idr_init(&pvcm->msg_idr);
+
+ rxthread = kthread_run(vcm_rxthread,
+ (void *)vdev, "vcmrx");
+ if (IS_ERR(rxthread)) {
+ rc = PTR_ERR(rxthread);
+ goto destroy_idr;
+ }
+
+ pvcm->rxthread = get_task_struct(rxthread);
+
+ pvcm->enabled = true;
+ return 0;
+
+destroy_idr:
+ idr_destroy(&pvcm->msg_idr);
+ destroy_workqueue(pvcm->wq);
+vpu_link_fini:
+ vcm_vpu_link_fini(pvcm);
+exit:
+ pvcm->enabled = false;
+ return rc;
+}
+
+int vcm_fini(struct vpumgr_device *vdev)
+{
+ struct vcm_dev *pvcm = &vdev->vcm;
+
+ if (!pvcm->enabled)
+ return 0;
+
+ vcm_vpu_link_fini(pvcm);
+
+ kthread_stop(pvcm->rxthread);
+ put_task_struct(pvcm->rxthread);
+
+ idr_destroy(&pvcm->msg_idr);
+ destroy_workqueue(pvcm->wq);
+ mutex_destroy(&pvcm->msg_idr_lock);
+ return 0;
+}
diff --git a/drivers/misc/vpumgr/vpu_vcm.h b/drivers/misc/vpumgr/vpu_vcm.h
new file mode 100644
index 000000000000..9e89c281092b
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_vcm.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ * RESMGR driver - VPU Context Manager
+ * Copyright (C) 2020-2021 Intel Corporation
+ */
+#ifndef __VPU_VCM_H
+#define __VPU_VCM_H
+
+#include <linux/xlink.h>
+
+struct vpumgr_device;
+
+/* Command code for message to/from VPU context manager on firmware */
+#define VCTX_MSG_CREATE 1
+#define VCTX_MSG_DESTROY 2
+#define VCTX_MSG_REPLY 3
+
+/* Maximal payload size supported for request or reply */
+#define VCM_PAYLOAD_SIZE (8192 - 5 * sizeof(u32))
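+/* i.e. the five u32 header fields of struct vcm_msg below plus a
+ * maximal payload add up to exactly 8192 bytes per message.
+ */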
+
+/**
+ * struct vcm_msg: VPU context manager message
+ * @size: size tag; must be at the beginning of the struct
+ * @ctx: the context this message is sent to or received from
+ * @cmd: the type of this message
+ * @id: index to identify this message in context ctx
+ * @rc: return code or misc args
+ * @payload: the payload of message
+ */
+struct vcm_msg {
+ u32 size;
+ u32 ctx;
+ u32 cmd;
+ u32 id;
+ s32 rc;
+ union {
+ char data[VCM_PAYLOAD_SIZE];
+ } payload;
+} __packed;
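+
+/*
+ * Size note: the five 32-bit header fields above occupy
+ * 5 * sizeof(u32) = 20 bytes, so together with a payload of
+ * VCM_PAYLOAD_SIZE (8192 - 20) bytes the packed struct is exactly
+ * 8192 bytes, one full message buffer.
+ */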
+
+struct vcm_dev {
+ bool enabled;
+ /*
+ * XLINK IPC related.
+ */
+ struct xlink_handle ipc_xlink_handle;
+ s32 sw_dev_id;
+
+	/*
+	 * Dispatch work queue.
+	 * Xlink transactions are handled in the work queue to decouple
+	 * xlink API invocations from user-space system calls, because a
+	 * SIGINT would otherwise make xlink_read* return an error.
+	 */
+ struct workqueue_struct *wq;
+
+ /* kthread for rx */
+ struct task_struct *rxthread;
+
+ /* message buffer for receiving thread */
+ struct vcm_msg rxmsg;
+
+ struct mutex msg_idr_lock; /* protects msg_idr */
+ struct idr msg_idr;
+};
+
+struct vpumgr_ctx {
+ struct vpumgr_device *vdev;
+ u32 vpu_ctx_id;
+ u64 total_vcmds;
+};
+
+int vcm_init(struct vpumgr_device *vdev, u32 sw_dev_id);
+int vcm_fini(struct vpumgr_device *vdev);
+
+int vcm_open(struct vpumgr_ctx *v, struct vpumgr_device *vdev);
+int vcm_close(struct vpumgr_ctx *v);
+int vcm_debugfs_stats_show(struct seq_file *file, struct vpumgr_ctx *v);
+
+int vcm_submit(struct vpumgr_ctx *v,
+ u32 cmd, const void *data_in, u32 in_len, s32 *submit_id);
+int vcm_wait(struct vpumgr_ctx *v, s32 submit_id,
+ s32 *vpu_rc, void *data_out, u32 *p_out_len, u32 timeout_ms);
+
+#endif /* __VPU_VCM_H */
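
For context, a minimal sketch of how an in-kernel user could drive the
submit/wait pair declared above (the wrapper name is illustrative and
error handling is trimmed; the context is assumed to have been set up
with vcm_open()):

    /* Sketch only: submit one request to the VPU and wait for its reply. */
    static int vcm_roundtrip(struct vpumgr_ctx *v, u32 cmd,
                             const void *req, u32 req_len,
                             void *rep, u32 *rep_len)
    {
            s32 submit_id, vpu_rc;
            int rc;

            rc = vcm_submit(v, cmd, req, req_len, &submit_id);
            if (rc)
                    return rc;
            /* block up to 1000 ms for the rx thread to complete the cmd */
            rc = vcm_wait(v, submit_id, &vpu_rc, rep, rep_len, 1000);
            if (rc)
                    return rc;
            return vpu_rc ? -EIO : 0;
    }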
diff --git a/include/uapi/misc/vpumgr.h b/include/uapi/misc/vpumgr.h
new file mode 100644
index 000000000000..910b26e60097
--- /dev/null
+++ b/include/uapi/misc/vpumgr.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note
+ * VPU manager Linux Kernel API
+ * Copyright (C) 2020-2021 Intel Corporation
+ *
+ */
+#ifndef __VPUMGR_UAPI_H
+#define __VPUMGR_UAPI_H
+
+#include <linux/types.h>
+
+/* ioctl numbers */
+#define VPUMGR_MAGIC 'V'
+/* VPU manager IOCTLs */
+#define VPUMGR_IOCTL_DMABUF_ALLOC _IOWR(VPUMGR_MAGIC, 2, struct vpumgr_args_alloc)
+#define VPUMGR_IOCTL_DMABUF_IMPORT _IOWR(VPUMGR_MAGIC, 3, struct vpumgr_args_import)
+#define VPUMGR_IOCTL_DMABUF_UNIMPORT _IOWR(VPUMGR_MAGIC, 4, __s32)
+#define VPUMGR_IOCTL_DMABUF_PTR2VPU _IOWR(VPUMGR_MAGIC, 5, __u64)
+#define VPUMGR_IOCTL_VCM_SUBMIT _IOWR(VPUMGR_MAGIC, 6, struct vpumgr_vcm_submit)
+#define VPUMGR_IOCTL_VCM_WAIT _IOWR(VPUMGR_MAGIC, 7, struct vpumgr_vcm_wait)
+#define VPUMGR_IOCTL_END _IO(VPUMGR_MAGIC, 8)
+
+struct vpumgr_args_alloc {
+ __s32 fd; /* out: DMABuf fd */
+ __s32 reserved[2]; /* in: reserved */
+ __u64 size; /* in: required buffer size */
+};
+
+/* vpu_access flags */
+enum vpu_access_type {
+ VPU_ACCESS_DEFAULT = 0,
+ VPU_ACCESS_READ = 1,
+ VPU_ACCESS_WRITE = 2,
+ VPU_ACCESS_RW = 3
+};
+
+struct vpumgr_args_import {
+ __s32 fd; /* in: input DMABuf fd */
+ __s32 vpu_access; /* in: how vpu is going to access the buffer */
+ __u64 vpu_addr; /* out: vpu dma address of the DMABuf */
+ __u64 size; /* out: the size of the DMABuf */
+};
+
+/* Command codes reserved for the kernel mode driver;
+ * user space must not use command codes smaller than
+ * or equal to this macro.
+ */
+#define VCTX_KMD_RESERVED_CMD_LAST 31
+
+struct vpumgr_vcm_submit {
+ __u32 cmd; /* in: command code */
+	__u64 in; /* in: input parameter buffer address */
+	__u32 in_len; /* in: input parameter buffer length */
+ __s32 submit_id; /* out: submit id */
+};
+
+struct vpumgr_vcm_wait {
+ __s32 submit_id; /* in: submit id */
+ __s32 vpu_rc; /* out: vpu return code */
+ __u64 out; /* in: address of the buffer for receiving result */
+ __u32 out_len; /* in: length of the result buffer */
+ __u32 timeout_ms; /* in: timeout in milliseconds */
+};
+
+#endif /* __VPUMGR_UAPI_H */
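
For illustration, a minimal user-space sketch of the two-step
submit/wait UAPI defined above (the /dev/vpumgr node name is an
assumption, and error handling is trimmed):

    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <misc/vpumgr.h>

    int vpu_roundtrip(const char *req, unsigned int req_len,
                      char *rep, unsigned int rep_len)
    {
        struct vpumgr_vcm_submit sub = {0};
        struct vpumgr_vcm_wait wait = {0};
        int fd, rc;

        fd = open("/dev/vpumgr", O_RDWR);  /* node name assumed */
        if (fd < 0)
            return -1;
        /* user command codes must be above VCTX_KMD_RESERVED_CMD_LAST */
        sub.cmd = VCTX_KMD_RESERVED_CMD_LAST + 1;
        sub.in = (__u64)(uintptr_t)req;
        sub.in_len = req_len;
        rc = ioctl(fd, VPUMGR_IOCTL_VCM_SUBMIT, &sub);
        if (!rc) {
            wait.submit_id = sub.submit_id;
            wait.out = (__u64)(uintptr_t)rep;
            wait.out_len = rep_len;
            wait.timeout_ms = 1000;
            rc = ioctl(fd, VPUMGR_IOCTL_VCM_WAIT, &wait);
        }
        close(fd);
        return rc;
    }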
--
2.17.1

2021-01-08 21:32:27

by mark gross

[permalink] [raw]
Subject: [PATCH v2 34/34] misc: HDDL device management for IA host

From: "C, Udhayakumar" <[email protected]>

Add IA host hddl device management driver for Intel Edge.AI Computer Vision
platforms.

About Intel Edge.AI Computer Vision platforms:
---------------------------------------------
The Intel Edge.AI Computer Vision platforms are vision processing systems
targeting machine vision applications for connected devices.

They are based on an ARM A53 CPU running Linux and act as PCIe
endpoint devices.

High-level architecture:
------------------------

Remote Host IA CPU Local Host ARM CPU
------------------------------- ----------------------------
| * Send time as xlink packet | |* Sync time with IA host |
| * receive sensor details | |* Prepare and share sensor|
| and register as i2c or | | details to IA host as |
| xlink smbus slaves | | xlink packets |
------------------------------- ----------------------------
| hddl server | <=====> | hddl client |
------------------------------- xlink ----------------------------

hddl device module:
-------------------
The HDDL client driver acts as a software RTC to sync with network
time. It abstracts the xlink protocol to communicate with the remote
host. The driver exports the details of the sensors available on the
platform to the remote host as xlink packets. It also handles device
connect/disconnect events and identifies the board id and soc id using
GPIOs, based on the platform configuration.

- Remote Host driver
* Intended for IA CPU
* It is based on xlink Framework
* Driver path:
{tree}/drivers/misc/hddl_device/hddl_device_rh.c

The local ARM host and remote IA host drivers communicate using the
XLINK protocol.
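
As a taste of the handshake, the server's connect task starts by
pushing its wall-clock time to the client; roughly (a condensed sketch
of the code below, with the helper name and channel type chosen here
for illustration):

    /* Sketch: first handshake step of the connect task (server side). */
    static int hddl_send_time(struct xlink_handle *xlink, u16 chan)
    {
            struct timespec64 ts;

            ktime_get_real_ts64(&ts);
            return xlink_write_volatile(xlink, chan, (u8 *)&ts,
                                        sizeof(struct timespec64));
    }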

Signed-off-by: C, Udhayakumar <[email protected]>
---
.../misc-devices/hddl_device_server.rst | 205 +++++
Documentation/misc-devices/index.rst | 1 +
drivers/misc/hddl_device/Kconfig | 12 +
drivers/misc/hddl_device/Makefile | 2 +
drivers/misc/hddl_device/hddl_device_rh.c | 837 ++++++++++++++++++
5 files changed, 1057 insertions(+)
create mode 100644 Documentation/misc-devices/hddl_device_server.rst
create mode 100644 drivers/misc/hddl_device/hddl_device_rh.c

diff --git a/Documentation/misc-devices/hddl_device_server.rst b/Documentation/misc-devices/hddl_device_server.rst
new file mode 100644
index 000000000000..0be37973d1fe
--- /dev/null
+++ b/Documentation/misc-devices/hddl_device_server.rst
@@ -0,0 +1,205 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Kernel driver: hddl_device_server
+=================================
+
+Supported chips:
+ * Intel Edge.AI Computer Vision platforms: Keem Bay
+
+Authors:
+ - Thalaiappan, Rathina <[email protected]>
+ - Udhayakumar C <[email protected]>
+
+High-level architecture
+=======================
+::
+
+ Remote Host IA CPU Local Host ARM CPU
+ ------------------------------- ----------------------------
+ | * Send time as xlink packet | |* Sync time with IA host |
+ | * receive sensor details | |* Prepare and share sensor|
+ | and register as i2c or | | details to IA host as |
+ | xlink smbus slaves | | xlink packets |
+ ------------------------------- ----------------------------
+ | hddl server | <=====> | hddl client |
+ ------------------------------- xlink ----------------------------
+
+Overview
+========
+
+This driver supports hddl device management for Intel Edge.AI Computer Vision
+platforms. This driver runs on the IA host.
+
+This driver supports the following features:
+
+  - Receives details of the temperature sensors, current sensor and fan
+    controller present in Intel Edge.AI Computer Vision platforms.
+  - Sends time sync data to the Intel Edge.AI Computer Vision platform.
+  - Handles device connect and disconnect events.
+  - Gets a free slave address for the memory mapped thermal sensors present
+    in the SoC (Documentation/hwmon/intel_tsens_sensors.rst) and shares it
+    with the Intel Edge.AI Computer Vision platform.
+  - Registers i2c slave devices for the slaves present in the Intel Edge.AI
+    Computer Vision platform.
+
+The Keem Bay platform has the following sensors.
+
+On-chip sensors:
+
+  - Media Subsystem (mss) temperature sensor
+  - NN subsystem (nce) temperature sensor
+  - Compute subsystem (cse) temperature sensor
+  - SOC (maximum of mss, nce and cse)
+
+Onboard sensors:
+
+ - two lm75 temperature sensors
+ - emc2103 fan controller
+ - ina3221 current sensor
+
+Driver Structure
+================
+
+The driver provides a platform device where the ``probe`` and ``remove``
+operations are provided.
+
+  - probe: spawns a kernel thread to monitor new PCIe devices.
+
+  - init task: polls for new PCIe devices every 5 seconds and creates a
+    connect task to set up each new device.
+
+  - connect task: the main entity, which connects to the hddl device
+    client using xlink and performs the basic initialisation and
+    handshaking. It also monitors hddl device client link down/up events
+    and reinitialises the drivers on the server side accordingly.
+
+  - remove: unregisters i2c client devices and i2c adapters, and closes
+    the xlink channel.
+
+HDDL Server Sequence - Basic Setup and handshaking with HDDL Device Client
+==========================================================================
+::
+
+ ,-----. ,---------. ,------------. ,------------------.
+ |probe| |Init task| |connect task| |hddl device client|
+ `--+--' `----+----' `-----+------' `--------+---------'
+ ----. | | |
+ | "Init char dev"| | |
+ <---' | | |
+ | | | |
+ | ,----------------------!. | |
+ | |Initialize char device|_\ | |
+ | |for ioctls | | |
+ | `------------------------' | |
+ | "Creates kthread" | | |
+ |------------------->| | |
+ | | | |
+ | ,-----------------------!. | |
+ | |creates kernel thread |_\ | |
+ | |to check for new device | | |
+ | `-------------------------' | |
+ ,---------------------!. ----. | |
+ |check for new device |_\ | | |
+ |with time interval of | <---' | |
+ |5 seconds | | | |
+ `-----------------------' | | |
+ ,---------------------!. | | |
+ |if new device found?.|_\ | | |
+ |creates connect task | |-------------------->| |
+ |to setup new device | | | |
+ `-----------------------' | | |
+ | ,-------------------!. |----. |
+ | |setup xlink channel|_\| | |
+ | |to communicate with ||<---' |
+ | |client || |
+ | `---------------------'| |
+ | | | share time data |
+ | | | to client |
+ | | | -------------------------->
+ | | | |
+ | | | receives board id |
+ | | | <--------------------------
+ | | | |
+ | | | Gets total number of |
+ | | | sensors available in SoC |
+ | | | <--------------------------
+ | | | |
+ | ,-----------------------!. | |
+ | |For each sensors get |_\| |
+ | |sensor type, name, trip || <--------------------------
+ | |temp, trip type || |
+ | `-------------------------'| |
+ | | | Send complete. |
+ | | | -------------------------->
+ | | | |
+ | | |----. |
+ | | | | Register xlink i2c |
+ | | |<---' adapters. |
+ | | | |
+ | | | |
+ | | | send slave addr for |
+ | | | each slave in SoC |
+ | | | -------------------------->
+ | | | |
+ | | |----. |
+ | | | | Register i2c clients.|
+ | | |<---' |
+ | | | |
+ | | |----.
+ | | | | poll for device status
+ | | |<---'
+ ,--+--. ,----+----. ,-----+------. ,--------+---------.
+ |probe| |Init task| |connect task| |hddl device client|
+ `-----' `---------' `------------' `------------------'
+
+
+XLINK i2c sequence:
+===================
+::
+
+ ,-----------------. ,--------. ,---------. ,-----.
+ |xlink-i2c-adapter| |I2C core| |i2c-slave| |xlink|
+ `--------+--------' `---+----' `----+----' `--+--'
+ | | | |
+ |---------------------->| | |
+ | | | |
+ | ,--------------------------!. | |
+ | |Initialize xlink based i2c|_\ | |
+ | |adapters. | | |
+ | `----------------------------' | |
+ | | | |
+ | | <------------------| |
+ | | | |
+ | | ,----------------------!. |
+ | | |Linux i2c slave device|_\ |
+ | | |standard request | |
+ | | `------------------------' |
+ | i2c request from | | |
+ | clients. | | |
+ |<----------------------| | |
+ | | | |
+ | | | |
+ |-------------------------------------------------------------->|
+ | | | |
+ | | ,----------------------------!. |
+ | | |I2C request is sent as xlink|_\ |
+ | | |packet to SoC | |
+ | | `------------------------------' |
+ | | | |
+ |<--------------------------------------------------------------|
+ | | | |
+ | | ,------------------------------!. |
+ | | |I2C response from SoC as xlink|_\ |
+ | | |packet | |
+ | | `--------------------------------' |
+ | | | |
+ |---------------------->| | |
+ | | | |
+ | ,---------------------------!. | |
+ | |xlink response is converted|_\ | |
+ | |to standard i2c response. | | |
+ | `-----------------------------' | |
+ | | i2c response | |
+ | | ------------------>| |
+ ,--------+--------. ,---+----. ,----+----. ,--+--.
+ |xlink-i2c-adapter| |I2C core| |i2c-slave| |xlink|
+ `-----------------' `--------' `---------' `-----'
diff --git a/Documentation/misc-devices/index.rst b/Documentation/misc-devices/index.rst
index 102f7f9dea87..5a77a86261b7 100644
--- a/Documentation/misc-devices/index.rst
+++ b/Documentation/misc-devices/index.rst
@@ -20,6 +20,7 @@ fit into other categories.
eeprom
c2port
hddl_device_client.rst
+ hddl_device_server.rst
ibmvmc
ics932s401
isl29003
diff --git a/drivers/misc/hddl_device/Kconfig b/drivers/misc/hddl_device/Kconfig
index e1ae81fdf177..7f9a6a685275 100644
--- a/drivers/misc/hddl_device/Kconfig
+++ b/drivers/misc/hddl_device/Kconfig
@@ -12,3 +12,15 @@ config HDDL_DEVICE_CLIENT
the device connect/disconnect programming sequence.
Say Y if using a processor that includes the Intel VPU such as
Keem Bay. If unsure, say N.
+
+config HDDL_DEVICE_SERVER
+ tristate "Support for hddl device server"
+ depends on XLINK_CORE && !HDDL_DEVICE_CLIENT
+ help
+ This option enables HDDL device server module.
+
+	  This driver shares time sync data with the local host and
+	  retrieves the sensors available on the platform. It also handles
+	  the device connect/disconnect programming sequence.
+ Say Y if using a processor that includes the Intel VPU such as
+ Keem Bay. If unsure, say N.
diff --git a/drivers/misc/hddl_device/Makefile b/drivers/misc/hddl_device/Makefile
index dca381660baa..0e9a4cd2cef3 100644
--- a/drivers/misc/hddl_device/Makefile
+++ b/drivers/misc/hddl_device/Makefile
@@ -3,3 +3,5 @@

obj-$(CONFIG_HDDL_DEVICE_CLIENT) += hddl_device_client.o
hddl_device_client-objs += hddl_device_lh.o hddl_device.o
+obj-$(CONFIG_HDDL_DEVICE_SERVER) += hddl_device_server.o
+hddl_device_server-objs += hddl_device_rh.o hddl_device.o
diff --git a/drivers/misc/hddl_device/hddl_device_rh.c b/drivers/misc/hddl_device/hddl_device_rh.c
new file mode 100644
index 000000000000..78546ea64356
--- /dev/null
+++ b/drivers/misc/hddl_device/hddl_device_rh.c
@@ -0,0 +1,837 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * High Density Deep Learning Kernel module.
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <asm/page.h>
+#include <linux/cdev.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/hddl_device.h>
+#include <linux/i2c.h>
+#include <linux/intel_tsens_host.h>
+#include <linux/ioctl.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/platform_device.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
+#include <linux/slab.h>
+#include <linux/thermal.h>
+#include <linux/time.h>
+#include <linux/uaccess.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+#include <linux/xlink.h>
+
+#include <uapi/linux/stat.h>
+
+#include "hddl_device_util.h"
+
+#define DRIVER_NAME "hddl_device_server"
+
+/*
+ * I2C client Reserved addr: 0x00 - 0x0f
+ * 0xf0 - 0xff
+ */
+#define HDDL_FREE_CLIENT_ADDR_START 0x10
+#define HDDL_FREE_CLIENT_ADDR_END 0xf0
+#define HDDL_FREE_CLIENT_ADDR_SIZE (HDDL_FREE_CLIENT_ADDR_END - \
+ HDDL_FREE_CLIENT_ADDR_START)
+/* Xlink channel reserved for HDDL device management
+ * HDDL_NODE_XLINK_CHANNEL - Default channel for HDDL device
+ * Management communication.
+ * HDDL_I2C_XLINK_CHANNEL - channel used for xlink I2C
+ * communication.
+ */
+#define HDDL_NODE_XLINK_CHANNEL 1080
+#define HDDL_I2C_XLINK_CHANNEL 1081
+
+#define HDDL_RESET_SUCCESS 1
+#define HDDL_RESET_FAILED 0
+
+static const int hddl_host_reserved_addrs[] = {
+ 0x42,
+ 0x52,
+ 0x54,
+ 0x60
+};
+
+struct intel_hddl_server_plat_data {
+ u32 xlink_chan;
+ u32 i2c_xlink_chan;
+};
+
+struct intel_hddl_device_priv {
+ u32 xlink_chan;
+ u32 i2c_xlink_chan;
+ u32 ndevs;
+ DECLARE_BITMAP(client_addr, HDDL_FREE_CLIENT_ADDR_SIZE);
+ /* HDDL device lock */
+ struct mutex lock;
+ struct platform_device *pdev;
+ struct intel_hddl_clients **hddl_client;
+ struct task_struct *hddl_dev_init_task;
+ struct intel_hddl_server_plat_data *plat_data;
+ struct i2c_adapter *smbus_adap;
+ struct class *dev_class;
+ struct cdev hddl_cdev;
+ dev_t cdev;
+};
+
+static struct intel_hddl_device_priv *g_priv;
+
+static inline int intel_hddl_get_xlink_data(struct device *dev,
+ struct xlink_handle *xlink,
+ int chan_num, u8 *msg,
+ int *size)
+{
+ int rc;
+
+ rc = xlink_read_data_to_buffer(xlink, chan_num,
+ msg, size);
+ if (rc) {
+ dev_err(dev,
+ "HDDL: xlink read data failed rc = %d\n",
+ rc);
+ return -EFAULT;
+ }
+ rc = xlink_release_data(xlink, chan_num, NULL);
+ if (rc) {
+ dev_err(dev,
+ "HDDL: xlink release failed rc = %d\n",
+ rc);
+ return -EFAULT;
+ }
+ return rc;
+}
+
+struct intel_hddl_clients **intel_hddl_get_clients(int *n_devs)
+{
+ if (!g_priv)
+ return NULL;
+ *n_devs = g_priv->ndevs;
+ return g_priv->hddl_client;
+}
+
+static long hddl_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+	struct intel_hddl_device_priv *priv = file->private_data;
+	u32 __user *user_ptr = (u32 __user *)arg;
+	struct sw_id_soft_reset soft_reset;
+	struct sw_id_hddl_data swid_data;
+	struct intel_hddl_clients **clients;
+	struct intel_hddl_clients *client = NULL;
+	struct device *dev;
+	int i, rc;
+
+	/* Check priv before using it to resolve the device pointer. */
+	if (!priv)
+		return -ENODEV;
+	dev = &priv->pdev->dev;
+	if (!user_ptr) {
+		dev_err(dev, "Null pointer from user\n");
+		return -EINVAL;
+	}
+ clients = priv->hddl_client;
+ if (!clients) {
+ dev_err(dev, "Device ioctl failed\n");
+ return -ENODEV;
+ }
+ switch (cmd) {
+ case HDDL_SOFT_RESET:
+ if (copy_from_user(&soft_reset,
+ user_ptr,
+ sizeof(struct sw_id_soft_reset)))
+ return -EFAULT;
+ for (i = 0; i < priv->ndevs; i++) {
+ if (clients[i]->xlink_dev.sw_device_id ==
+ soft_reset.sw_id) {
+ client = clients[i];
+ break;
+ }
+ }
+
+ if (!client) {
+			dev_err(dev, "target device to reset not found %d\n",
+				soft_reset.sw_id);
+ return -ENODEV;
+ }
+ /* xlink-reset */
+ rc = xlink_reset_device(&client->xlink_dev);
+ if (rc > 0) {
+ dev_err(dev, "xlink_reset_device failed");
+ soft_reset.return_id = HDDL_RESET_FAILED;
+ } else {
+ soft_reset.return_id = HDDL_RESET_SUCCESS;
+ }
+ if (copy_to_user(user_ptr,
+ &soft_reset, sizeof(struct sw_id_soft_reset)))
+ return -EFAULT;
+ break;
+ case HDDL_READ_SW_ID_DATA:
+ if (copy_from_user(&swid_data, user_ptr,
+ sizeof(struct sw_id_hddl_data)))
+ return -EFAULT;
+ for (i = 0; i < priv->ndevs; i++) {
+ if (clients[i]->xlink_dev.sw_device_id ==
+ swid_data.sw_id) {
+ client = clients[i];
+ break;
+ }
+ }
+
+ if (!client) {
+			dev_err(dev, "target device not found %d\n",
+				swid_data.sw_id);
+ return -ENODEV;
+ }
+ swid_data.board_id = client->board_info.board_id;
+ swid_data.soc_id = client->board_info.soc_id;
+ if (client->adap[0])
+ swid_data.soc_adaptor_no[0] = client->adap[0]->nr;
+ if (client->adap[1])
+ swid_data.soc_adaptor_no[1] = client->adap[1]->nr;
+ swid_data.return_id = 1;
+ if (copy_to_user(user_ptr,
+ &swid_data, sizeof(struct sw_id_hddl_data)))
+ return -EFAULT;
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static int hddl_open(struct inode *inode, struct file *filp)
+{
+ struct intel_hddl_device_priv *priv;
+
+ priv = container_of(inode->i_cdev,
+ struct intel_hddl_device_priv, hddl_cdev);
+	if (!priv) {
+		pr_err("hddl: device open failed\n");
+		return -ENODEV;
+	}
+ filp->private_data = priv;
+ return 0;
+}
+
+static const struct file_operations hddl_fops = {
+ .owner = THIS_MODULE,
+ .open = hddl_open,
+ .unlocked_ioctl = hddl_ioctl,
+};
+
+static int intel_hddl_cdev_init(struct intel_hddl_device_priv *priv)
+{
+	/* Allocate a major number */
+	if ((alloc_chrdev_region(&priv->cdev, 0, 1, "hddl_dev")) < 0) {
+		dev_err(&priv->pdev->dev, "Cannot allocate major number\n");
+		return -EINVAL;
+	}
+	dev_info(&priv->pdev->dev, "Major = %d Minor = %d\n",
+		 MAJOR(priv->cdev), MINOR(priv->cdev));
+	/* Initialize the cdev structure */
+	cdev_init(&priv->hddl_cdev, &hddl_fops);
+	/* Add the character device to the system */
+	if ((cdev_add(&priv->hddl_cdev, priv->cdev, 1)) < 0) {
+		dev_err(&priv->pdev->dev,
+			"Cannot add the device to the system\n");
+		goto r_region;
+	}
+	/* Create the struct class; class_create() returns ERR_PTR on failure */
+	priv->dev_class = class_create(THIS_MODULE, "hddl_class");
+	if (IS_ERR(priv->dev_class)) {
+		dev_err(&priv->pdev->dev, "Cannot create the struct class\n");
+		goto r_device;
+	}
+	/* Create the device node; device_create() also returns ERR_PTR */
+	if (IS_ERR(device_create(priv->dev_class, NULL, priv->cdev, NULL,
+				 "hddl_device"))) {
+		dev_err(&priv->pdev->dev, "Cannot create the Device\n");
+		goto r_class;
+	}
+ return 0;
+
+r_class:
+ class_destroy(priv->dev_class);
+r_device:
+ cdev_del(&priv->hddl_cdev);
+r_region:
+ unregister_chrdev_region(priv->cdev, 1);
+ return -EINVAL;
+}
+
+static void intel_hddl_cdev_remove(struct intel_hddl_device_priv *priv)
+{
+ device_destroy(priv->dev_class, priv->cdev);
+ class_destroy(priv->dev_class);
+ cdev_del(&priv->hddl_cdev);
+ unregister_chrdev_region(priv->cdev, 1);
+}
+
+void intel_hddl_unregister_pdev(struct intel_hddl_clients *c)
+{
+ struct intel_hddl_device_priv *priv = c->pdata;
+
+ intel_hddl_xlink_remove_i2c_adap(&priv->pdev->dev, c);
+}
+
+void intel_hddl_free_i2c_client(struct intel_hddl_clients *d,
+ struct intel_hddl_i2c_devs *i2c_dev)
+{
+ struct intel_hddl_device_priv *priv = d->pdata;
+ int bit_pos = i2c_dev->addr - HDDL_FREE_CLIENT_ADDR_START;
+
+ if (i2c_dev->xlk_client)
+ i2c_unregister_device(i2c_dev->xlk_client);
+ if (i2c_dev->i2c_client)
+ i2c_unregister_device(i2c_dev->i2c_client);
+ if (i2c_dev->smbus_client)
+ i2c_unregister_device(i2c_dev->smbus_client);
+ i2c_dev->xlk_client = NULL;
+ i2c_dev->i2c_client = NULL;
+ i2c_dev->smbus_client = NULL;
+ mutex_lock(&priv->lock);
+ clear_bit(bit_pos, priv->client_addr);
+ mutex_unlock(&priv->lock);
+}
+
+/**
+ * hddl_get_free_client - get a free i2c client address
+ *
+ * https://i2c.info/i2c-bus-specification
+ * Per the i2c bus specification, the following client addresses are
+ * reserved: 0x00 - 0x0f and 0xf0 - 0xff.
+ *
+ * Find a free client address outside the reserved ranges and distinct
+ * from the i2c client addresses already used by the host. If a free
+ * client address is found, mark it as reserved by setting the bit
+ * corresponding to the address and return the address.
+ */
+static int hddl_get_free_client(struct intel_hddl_device_priv *priv)
+{
+ unsigned long bit_pos;
+ int client_addr;
+
+ bit_pos = find_first_zero_bit(priv->client_addr,
+ HDDL_FREE_CLIENT_ADDR_SIZE);
+ if (bit_pos >= HDDL_FREE_CLIENT_ADDR_SIZE)
+ return -EINVAL;
+ client_addr = bit_pos + HDDL_FREE_CLIENT_ADDR_START;
+ set_bit(bit_pos, priv->client_addr);
+ return client_addr;
+}
+
+static int intel_hddl_i2c_register_clients(struct device *dev,
+ struct intel_hddl_clients *c)
+{
+ struct intel_hddl_device_priv *priv = c->pdata;
+ struct xlink_handle *xlink = &c->xlink_dev;
+ struct intel_hddl_i2c_devs **i2c_devs;
+ struct intel_hddl_tsens_msg msg;
+ int rc, ndevs, size, i;
+
+ /* Get N I2C devices */
+ msg.msg_type = HDDL_GET_N_I2C_DEVS;
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)&msg, sizeof(msg));
+ if (rc) {
+ dev_err(dev,
+ "xlink write data failed rc = %d\n",
+ rc);
+ return rc;
+ }
+ rc = intel_hddl_get_xlink_data(dev,
+ xlink, c->chan_num,
+ (u8 *)&ndevs, &size);
+ if (rc)
+ return rc;
+ c->n_clients = ndevs;
+ i2c_devs = devm_kcalloc(dev, ndevs,
+ sizeof(struct intel_hddl_i2c_devs *),
+ GFP_KERNEL);
+ if (!i2c_devs)
+ return -ENOMEM;
+ c->i2c_devs = i2c_devs;
+ for (i = 0; i < ndevs; i++) {
+ struct intel_hddl_i2c_devs *i2c;
+ struct intel_hddl_i2c_devs_data i2c_data;
+
+ i2c = devm_kzalloc(dev,
+ sizeof(struct intel_hddl_i2c_devs),
+ GFP_KERNEL);
+ if (!i2c)
+ return -ENOMEM;
+ i2c_devs[i] = i2c;
+
+ /* Get Details*/
+ msg.msg_type = HDDL_GET_I2C_DEVS;
+ msg.sensor_type = i;
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)&msg, sizeof(msg));
+ if (rc) {
+ dev_err(dev, "xlink write data failed rc = %d\n", rc);
+ return rc;
+ }
+ rc = intel_hddl_get_xlink_data(dev,
+ xlink, c->chan_num,
+ (u8 *)&i2c_data, &size);
+ if (rc)
+ return rc;
+
+		/* bound the copy; the name comes from an xlink message */
+		strscpy(i2c->name, i2c_data.name, sizeof(i2c->name));
+ i2c->addr = i2c_data.addr;
+ i2c->bus = i2c_data.bus;
+ i2c->enabled = i2c_data.enabled;
+ i2c->local_host = i2c_data.local_host;
+ i2c->remote_host = i2c_data.remote_host;
+ }
+
+ mutex_lock(&priv->lock);
+ for (i = 0; i < ndevs; i++) {
+ if (i2c_devs[i]->addr & (1 << 30))
+ i2c_devs[i]->addr = hddl_get_free_client(priv);
+
+		strscpy(i2c_devs[i]->board_info.type, i2c_devs[i]->name,
+			sizeof(i2c_devs[i]->board_info.type));
+ i2c_devs[i]->board_info.addr = i2c_devs[i]->addr;
+ }
+ mutex_unlock(&priv->lock);
+ /* Send Complete */
+ msg.msg_type = HDDL_GET_SENS_COMPLETE;
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)&msg, sizeof(msg));
+ if (rc) {
+ dev_err(dev, "xlink write data failed rc = %d\n", rc);
+ return rc;
+ }
+
+ mutex_lock(&priv->lock);
+
+ /* Get msg type */
+ rc = intel_hddl_get_xlink_data(dev,
+ xlink, c->chan_num,
+ (u8 *)&msg, &size);
+ if (rc) {
+ mutex_unlock(&priv->lock);
+ return rc;
+ }
+
+ while (msg.msg_type != HDDL_GET_SENS_COMPLETE) {
+ switch (msg.msg_type) {
+ case HDDL_GET_I2C_DEV_ADDR:
+ {
+ i = msg.sensor_type;
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)&i2c_devs[i]->addr,
+ sizeof(i2c_devs[i]->addr));
+ if (rc) {
+ dev_err(dev,
+ "xlink write data failed rc = %d\n",
+ rc);
+ mutex_unlock(&priv->lock);
+ return rc;
+ }
+ }
+ break;
+ default:
+ break;
+ }
+ rc = intel_hddl_get_xlink_data(dev,
+ xlink, c->chan_num,
+ (u8 *)&msg, &size);
+ if (rc) {
+ mutex_unlock(&priv->lock);
+ return rc;
+ }
+ }
+ intel_hddl_add_xlink_i2c_clients(dev, c, c->i2c_devs,
+ c->n_clients, 1);
+ mutex_unlock(&priv->lock);
+ return 0;
+}
+
+static int intel_hddl_tsens_data(struct intel_hddl_clients *c)
+{
+ struct intel_hddl_device_priv *priv = c->pdata;
+ struct xlink_handle *xlink = &c->xlink_dev;
+ struct intel_tsens_host **p_tsens;
+ struct intel_hddl_tsens_msg msg;
+ u32 size, i, j;
+ u32 nsens;
+ int rc;
+
+ /* Get Nsens */
+ msg.msg_type = HDDL_GET_NSENS;
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)&msg, sizeof(msg));
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "xlink write data failed rc = %d\n",
+ rc);
+ return rc;
+ }
+ rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+ xlink, c->chan_num,
+ (u8 *)&nsens, &size);
+ if (rc)
+ return rc;
+
+ c->nsens = nsens;
+ p_tsens = devm_kcalloc(&priv->pdev->dev, nsens,
+ sizeof(struct intel_tsens_host *),
+ GFP_KERNEL);
+ if (!p_tsens)
+ return -ENOMEM;
+ c->tsens = (void **)p_tsens;
+ for (i = 0; i < nsens; i++) {
+ struct intel_tsens_host *tsens;
+ struct intel_tsens_data *tsens_data;
+
+ tsens = devm_kzalloc(&priv->pdev->dev,
+ sizeof(struct intel_tsens_host),
+ GFP_KERNEL);
+ if (!tsens)
+ return -ENOMEM;
+ tsens_data = devm_kzalloc(&priv->pdev->dev,
+ sizeof(struct intel_tsens_data),
+ GFP_KERNEL);
+ if (!tsens_data)
+ return -ENOMEM;
+ tsens->t_data = tsens_data;
+
+ /* Get Details*/
+ msg.msg_type = HDDL_GET_SENS_DETAILS;
+ msg.sensor_type = i;
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)&msg, sizeof(msg));
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "xlink write data failed rc = %d\n",
+ rc);
+ return rc;
+ }
+ rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+ xlink, c->chan_num,
+ (u8 *)tsens_data, &size);
+ if (rc)
+ return rc;
+
+ /* Get trip info*/
+ tsens->trip_info =
+ devm_kcalloc(&priv->pdev->dev, tsens_data->n_trips,
+ sizeof(struct intel_tsens_host_trip_info *),
+ GFP_KERNEL);
+ if (!tsens->trip_info)
+ return -ENOMEM;
+ for (j = 0; j < tsens_data->n_trips; j++) {
+ struct intel_tsens_host_trip_info *t_info;
+
+ t_info =
+ devm_kzalloc(&priv->pdev->dev,
+ sizeof(struct intel_tsens_host_trip_info),
+ GFP_KERNEL);
+ if (!t_info)
+ return -ENOMEM;
+ tsens->trip_info[j] = t_info;
+ msg.msg_type = HDDL_GET_SENS_TRIP_INFO;
+ msg.sensor_type = i;
+ msg.trip_info_idx = j;
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)&msg, sizeof(msg));
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "xlink write data failed rc = %d\n",
+ rc);
+ return rc;
+ }
+ rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+ xlink, c->chan_num,
+ (u8 *)t_info, &size);
+ if (rc)
+ return rc;
+ }
+ p_tsens[i] = tsens;
+ }
+ /* Send Complete */
+ msg.msg_type = HDDL_GET_SENS_COMPLETE;
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)&msg, sizeof(msg));
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "xlink write data failed rc = %d\n",
+ rc);
+ return rc;
+ }
+
+ return 0;
+}
+
+static int intel_hddl_device_connect_task(void *data)
+{
+ struct intel_hddl_clients *c = (struct intel_hddl_clients *)data;
+ struct intel_hddl_device_priv *priv = c->pdata;
+ struct intel_hddl_board_info board_info_rcvd;
+ struct xlink_handle *xlink = &c->xlink_dev;
+ struct timespec64 ts;
+	u32 size;
+	int rc;
+
+ c->chan_num = priv->xlink_chan;
+ c->i2c_chan_num = priv->i2c_xlink_chan;
+ c->smbus_adap = priv->smbus_adap;
+ if (intel_hddl_open_xlink_device(&priv->pdev->dev, c)) {
+ dev_err(&priv->pdev->dev, "HDDL open xlink dev failed\n");
+ return -ENODEV;
+ }
+ ktime_get_real_ts64(&ts);
+ rc = xlink_write_volatile(xlink, c->chan_num, (u8 *)&ts,
+ sizeof(struct timespec64));
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "xlink write data failed rc = %d\n",
+ rc);
+ return rc;
+ }
+
+ size = sizeof(c->board_info);
+ rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+ xlink, c->chan_num,
+ (u8 *)&c->board_info, &size);
+ if (rc)
+ return rc;
+ board_info_rcvd.board_id = ~(c->board_info.board_id);
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)&board_info_rcvd,
+ sizeof(board_info_rcvd));
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "xlink write data failed rc = %d\n",
+ rc);
+ return rc;
+ }
+
+ rc = intel_hddl_tsens_data(c);
+ if (rc) {
+ dev_err(&priv->pdev->dev, "HDDL: tsens data not rcvd\n");
+ goto close_xlink_dev;
+ }
+ rc = intel_hddl_register_xlink_i2c_adap(&priv->pdev->dev, c);
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "HDDL: register xlink i2c adapter failed\n");
+ goto close_xlink_dev;
+ }
+ rc = intel_hddl_i2c_register_clients(&priv->pdev->dev, c);
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "HDDL: register i2c clients failed\n");
+ goto remove_xlink_i2c_adap;
+ }
+ while (!kthread_should_stop())
+ msleep_interruptible(HDDL_NEW_DEV_POLL_TIME);
+
+ return 0;
+
+remove_xlink_i2c_adap:
+ intel_hddl_xlink_remove_i2c_adap(&priv->pdev->dev, c);
+close_xlink_dev:
+ intel_hddl_close_xlink_device(&priv->pdev->dev, c);
+ return rc;
+}
+
+static int intel_hddl_check_for_new_device(struct intel_hddl_device_priv *priv)
+{
+ struct intel_hddl_clients **hddl_clients;
+
+ hddl_clients =
+ intel_hddl_setup_device(&priv->pdev->dev,
+ intel_hddl_device_connect_task,
+ &priv->ndevs, priv->hddl_client,
+ priv);
+
+ if (!hddl_clients) {
+ dev_err(&priv->pdev->dev,
+ "intel_hddl_setup_device returned NULL\n");
+ return 0;
+ }
+ priv->hddl_client = hddl_clients;
+ return 1;
+}
+
+static int intel_hddl_device_init_task(void *data)
+{
+ struct intel_hddl_device_priv *priv =
+ (struct intel_hddl_device_priv *)data;
+
+ while (!kthread_should_stop()) {
+ if (!intel_hddl_check_for_new_device(priv)) {
+ dev_err(&priv->pdev->dev,
+ "Error while checking for new device\n");
+ return -EFAULT;
+ }
+ msleep_interruptible(HDDL_NEW_DEV_POLL_TIME);
+ }
+
+ return 0;
+}
+
+static int intel_hddl_device_init(struct intel_hddl_device_priv *priv)
+{
+ struct i2c_adapter *temp;
+ int j = 0;
+
+	while ((temp = i2c_get_adapter(j))) {
+		if (strstr(temp->name, "SMBus I801")) {
+			/* Keep the reference on the adapter we use. */
+			priv->smbus_adap = temp;
+		} else {
+			i2c_put_adapter(temp);
+		}
+		j++;
+	}
+	priv->hddl_dev_init_task = kthread_run(intel_hddl_device_init_task,
+					       (void *)priv,
+					       "hddl_device_init");
+	/* kthread_run() returns ERR_PTR() on failure, not NULL */
+	if (IS_ERR(priv->hddl_dev_init_task)) {
+		dev_err(&priv->pdev->dev, "failed to create thread\n");
+		return PTR_ERR(priv->hddl_dev_init_task);
+	}
+
+ return 0;
+}
+
+static int intel_hddl_server_probe(struct platform_device *pdev)
+{
+ struct intel_hddl_server_plat_data *plat_data;
+ struct intel_hddl_device_priv *priv;
+ int ret, i;
+
+ plat_data = pdev->dev.platform_data;
+ if (!plat_data) {
+ dev_err(&pdev->dev, "Platform data not found\n");
+ return -EINVAL;
+ }
+
+ priv = devm_kzalloc(&pdev->dev,
+ sizeof(struct intel_hddl_device_priv),
+ GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+ priv->pdev = pdev;
+ priv->plat_data = plat_data;
+ priv->xlink_chan = plat_data->xlink_chan;
+ priv->i2c_xlink_chan = plat_data->i2c_xlink_chan;
+ mutex_init(&priv->lock);
+ g_priv = priv;
+ ret = intel_hddl_cdev_init(priv);
+ if (ret) {
+ dev_err(&pdev->dev, "HDDL char device init failed\n");
+ return -EINVAL;
+ }
+	/*
+	 * https://i2c.info/i2c-bus-specification
+	 * Per the i2c bus specification, client addresses 0x00 - 0x0f and
+	 * 0xf0 - 0xff are reserved.
+	 *
+	 * hddl_get_free_client() never hands out the standard reserved i2c
+	 * client addresses, so there is no need to mark them here. Only
+	 * mark the addresses used by i2c clients connected to the host as
+	 * used.
+	 */
+ for (i = 0; i < ARRAY_SIZE(hddl_host_reserved_addrs); i++) {
+ int bit_pos = hddl_host_reserved_addrs[i] -
+ HDDL_FREE_CLIENT_ADDR_START;
+ set_bit(bit_pos, priv->client_addr);
+ }
+ ret = intel_hddl_device_init(priv);
+ if (ret) {
+ dev_err(&pdev->dev, "HDDL device init failed\n");
+ ret = -EINVAL;
+ goto free_cdev;
+ }
+ platform_set_drvdata(pdev, priv);
+
+ return 0;
+free_cdev:
+ intel_hddl_cdev_remove(priv);
+ return ret;
+}
+
+/* Device Exit */
+static int intel_hddl_server_remove(struct platform_device *pdev)
+{
+ struct intel_hddl_device_priv *priv = platform_get_drvdata(pdev);
+ int i;
+
+ if (!priv)
+ return -EINVAL;
+	intel_hddl_cdev_remove(priv);
+	for (i = 0; i < priv->ndevs; i++)
+		intel_hddl_device_remove(priv->hddl_client[i]);
+	kthread_stop(priv->hddl_dev_init_task);
+	/* Drop the SMBus adapter reference taken in intel_hddl_device_init() */
+	if (priv->smbus_adap)
+		i2c_put_adapter(priv->smbus_adap);
+
+ return 0;
+}
+
+static struct platform_driver intel_hddl_server_driver = {
+ .probe = intel_hddl_server_probe,
+ .remove = intel_hddl_server_remove,
+ .driver = {
+ .name = "intel_hddl_server",
+ },
+};
+
+static struct platform_device *intel_hddl_server_pdev;
+
+static void intel_hddl_server_exit(void)
+{
+ platform_driver_unregister(&intel_hddl_server_driver);
+ platform_device_unregister(intel_hddl_server_pdev);
+}
+
+static int __init intel_hddl_server_init(void)
+{
+ struct intel_hddl_server_plat_data plat;
+ struct platform_device_info pdevinfo;
+ struct platform_device *dd;
+ int ret;
+
+ ret = platform_driver_register(&intel_hddl_server_driver);
+ if (ret) {
+		pr_err("HDDL SERVER: platform_driver_register failed %d\n", ret);
+ return ret;
+ }
+ memset(&pdevinfo, 0, sizeof(pdevinfo));
+ pdevinfo.name = "intel_hddl_server";
+ pdevinfo.data = &plat;
+ plat.xlink_chan = HDDL_NODE_XLINK_CHANNEL;
+ plat.i2c_xlink_chan = HDDL_I2C_XLINK_CHANNEL;
+ pdevinfo.size_data = sizeof(struct intel_hddl_server_plat_data);
+ dd = platform_device_register_full(&pdevinfo);
+	if (IS_ERR(dd)) {
+		pr_err("HDDL SERVER: platform device register failed\n");
+		platform_driver_unregister(&intel_hddl_server_driver);
+		return PTR_ERR(dd);
+	}
+ intel_hddl_server_pdev = dd;
+ return 0;
+}
+
+module_init(intel_hddl_server_init);
+module_exit(intel_hddl_server_exit);
+
+MODULE_DESCRIPTION("Intel HDDL Device host driver");
+MODULE_AUTHOR("Sandeep Singh <[email protected]>");
+MODULE_AUTHOR("Vaidya, Mahesh R <[email protected]>");
+MODULE_AUTHOR("Udhayakumar C <[email protected]>");
+MODULE_LICENSE("GPL v2");
--
2.17.1

2021-01-08 21:32:30

by mark gross

[permalink] [raw]
Subject: [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host

From: Srikanth Thokala <[email protected]>

Add the PCIe EPF driver for the local host (lh) to configure BARs and
other HW resources. The underlying PCIe HW controller is a Synopsys DWC
PCIe core.
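
As a reading guide, the bind path added by this patch boils down to the
following condensed sketch (mirroring intel_xpcie_epf_bind(); error
unwinding and IRQ setup are trimmed, and the wrapper name is
illustrative):

    static int xpcie_bind_flow(struct pci_epf *epf)
    {
            struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
            const struct pci_epc_features *features;
            size_t align = SZ_16K;  /* default when no EPC features */
            int ret;

            features = pci_epc_get_features(epf->epc, epf->func_no);
            if (features) {
                    align = features->align;
                    ret = intel_xpcie_configure_bar(epf, features);
                    if (ret)
                            return ret;
            }
            ret = intel_xpcie_setup_bars(epf, align); /* BAR2 + BAR4 */
            if (ret)
                    return ret;
            return intel_xpcie_epf_get_platform_data(epf->epc->dev.parent,
                                                     xpcie_epf);
    }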

Cc: Derek Kiernan <[email protected]>
Cc: Dragan Cvetic <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Srikanth Thokala <[email protected]>
---
MAINTAINERS | 6 +
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/xlink-pcie/Kconfig | 9 +
drivers/misc/xlink-pcie/Makefile | 1 +
drivers/misc/xlink-pcie/local_host/Makefile | 2 +
drivers/misc/xlink-pcie/local_host/epf.c | 413 ++++++++++++++++++++
drivers/misc/xlink-pcie/local_host/epf.h | 39 ++
drivers/misc/xlink-pcie/local_host/xpcie.h | 38 ++
9 files changed, 510 insertions(+)
create mode 100644 drivers/misc/xlink-pcie/Kconfig
create mode 100644 drivers/misc/xlink-pcie/Makefile
create mode 100644 drivers/misc/xlink-pcie/local_host/Makefile
create mode 100644 drivers/misc/xlink-pcie/local_host/epf.c
create mode 100644 drivers/misc/xlink-pcie/local_host/epf.h
create mode 100644 drivers/misc/xlink-pcie/local_host/xpcie.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 2c118fcab623..036658cba574 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1962,6 +1962,12 @@ F: Documentation/devicetree/bindings/arm/intel,keembay.yaml
F: arch/arm64/boot/dts/intel/keembay-evm.dts
F: arch/arm64/boot/dts/intel/keembay-soc.dtsi

+ARM KEEM BAY XLINK PCIE SUPPORT
+M: Srikanth Thokala <[email protected]>
+M: Mark Gross <[email protected]>
+S: Supported
+F: drivers/misc/xlink-pcie/
+
ARM/INTEL RESEARCH IMOTE/STARGATE 2 MACHINE SUPPORT
M: Jonathan Cameron <[email protected]>
L: [email protected] (moderated for non-subscribers)
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index fafa8b0d8099..dfb98e444c6e 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -481,4 +481,5 @@ source "drivers/misc/ocxl/Kconfig"
source "drivers/misc/cardreader/Kconfig"
source "drivers/misc/habanalabs/Kconfig"
source "drivers/misc/uacce/Kconfig"
+source "drivers/misc/xlink-pcie/Kconfig"
endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index d23231e73330..d17621fc43d5 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -57,3 +57,4 @@ obj-$(CONFIG_HABANA_AI) += habanalabs/
obj-$(CONFIG_UACCE) += uacce/
obj-$(CONFIG_XILINX_SDFEC) += xilinx_sdfec.o
obj-$(CONFIG_HISI_HIKEY_USB) += hisi_hikey_usb.o
+obj-y += xlink-pcie/
diff --git a/drivers/misc/xlink-pcie/Kconfig b/drivers/misc/xlink-pcie/Kconfig
new file mode 100644
index 000000000000..46aa401d79b7
--- /dev/null
+++ b/drivers/misc/xlink-pcie/Kconfig
@@ -0,0 +1,9 @@
+config XLINK_PCIE_LH_DRIVER
+ tristate "XLink PCIe Local Host driver"
+ depends on PCI_ENDPOINT && ARCH_KEEMBAY
+ help
+ This option enables XLink PCIe Local Host driver.
+
+	  Choose M here to compile this driver as a module; the module will
+	  be called mxlk_ep. This driver is used for XLink communication
+	  over PCIe and is loaded on the Intel Keem Bay platform.
diff --git a/drivers/misc/xlink-pcie/Makefile b/drivers/misc/xlink-pcie/Makefile
new file mode 100644
index 000000000000..d693d382e9c6
--- /dev/null
+++ b/drivers/misc/xlink-pcie/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_XLINK_PCIE_LH_DRIVER) += local_host/
diff --git a/drivers/misc/xlink-pcie/local_host/Makefile b/drivers/misc/xlink-pcie/local_host/Makefile
new file mode 100644
index 000000000000..514d3f0c91bc
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_XLINK_PCIE_LH_DRIVER) += mxlk_ep.o
+mxlk_ep-objs := epf.o
diff --git a/drivers/misc/xlink-pcie/local_host/epf.c b/drivers/misc/xlink-pcie/local_host/epf.c
new file mode 100644
index 000000000000..9e6d407aa6b3
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/epf.c
@@ -0,0 +1,413 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#include <linux/of.h>
+#include <linux/platform_device.h>
+
+#include "epf.h"
+
+#define BAR2_MIN_SIZE SZ_16K
+#define BAR4_MIN_SIZE SZ_16K
+
+#define PCIE_REGS_PCIE_INTR_ENABLE 0x18
+#define PCIE_REGS_PCIE_INTR_FLAGS 0x1C
+#define LBC_CII_EVENT_FLAG BIT(18)
+#define PCIE_REGS_PCIE_ERR_INTR_FLAGS 0x24
+#define LINK_REQ_RST_FLG BIT(15)
+
+static struct pci_epf_header xpcie_header = {
+ .vendorid = PCI_VENDOR_ID_INTEL,
+ .deviceid = PCI_DEVICE_ID_INTEL_KEEMBAY,
+ .baseclass_code = PCI_BASE_CLASS_MULTIMEDIA,
+ .subclass_code = 0x0,
+ .subsys_vendor_id = 0x0,
+ .subsys_id = 0x0,
+};
+
+static const struct pci_epf_device_id xpcie_epf_ids[] = {
+ {
+ .name = "mxlk_pcie_epf",
+ },
+ {},
+};
+
+static irqreturn_t intel_xpcie_err_interrupt(int irq, void *args)
+{
+ struct xpcie_epf *xpcie_epf;
+ struct xpcie *xpcie = args;
+ u32 val;
+
+ xpcie_epf = container_of(xpcie, struct xpcie_epf, xpcie);
+ val = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_ERR_INTR_FLAGS);
+
+ iowrite32(val, xpcie_epf->apb_base + PCIE_REGS_PCIE_ERR_INTR_FLAGS);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t intel_xpcie_host_interrupt(int irq, void *args)
+{
+ struct xpcie_epf *xpcie_epf;
+ struct xpcie *xpcie = args;
+ u32 val;
+
+ xpcie_epf = container_of(xpcie, struct xpcie_epf, xpcie);
+ val = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_INTR_FLAGS);
+ if (val & LBC_CII_EVENT_FLAG) {
+ iowrite32(LBC_CII_EVENT_FLAG,
+ xpcie_epf->apb_base + PCIE_REGS_PCIE_INTR_FLAGS);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static int intel_xpcie_check_bar(struct pci_epf *epf,
+ struct pci_epf_bar *epf_bar,
+ enum pci_barno barno,
+ size_t size, u8 reserved_bar)
+{
+ if (reserved_bar & (1 << barno)) {
+ dev_err(&epf->dev, "BAR%d is already reserved\n", barno);
+ return -EFAULT;
+ }
+
+ if (epf_bar->size != 0 && epf_bar->size < size) {
+ dev_err(&epf->dev, "BAR%d fixed size is not enough\n", barno);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static int intel_xpcie_configure_bar(struct pci_epf *epf,
+ const struct pci_epc_features
+ *epc_features)
+{
+ struct pci_epf_bar *epf_bar;
+ bool bar_fixed_64bit;
+ int ret, i;
+
+ for (i = BAR_0; i <= BAR_5; i++) {
+ epf_bar = &epf->bar[i];
+ bar_fixed_64bit = !!(epc_features->bar_fixed_64bit & (1 << i));
+ if (bar_fixed_64bit)
+ epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
+ if (epc_features->bar_fixed_size[i])
+ epf_bar->size = epc_features->bar_fixed_size[i];
+
+ if (i == BAR_2) {
+ ret = intel_xpcie_check_bar(epf, epf_bar, BAR_2,
+ BAR2_MIN_SIZE,
+ epc_features->reserved_bar);
+ if (ret)
+ return ret;
+ }
+
+ if (i == BAR_4) {
+ ret = intel_xpcie_check_bar(epf, epf_bar, BAR_4,
+ BAR4_MIN_SIZE,
+ epc_features->reserved_bar);
+ if (ret)
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+static void intel_xpcie_cleanup_bar(struct pci_epf *epf, enum pci_barno barno)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+ struct pci_epc *epc = epf->epc;
+
+ if (xpcie_epf->vaddr[barno]) {
+ pci_epc_clear_bar(epc, epf->func_no, &epf->bar[barno]);
+ pci_epf_free_space(epf, xpcie_epf->vaddr[barno], barno);
+ xpcie_epf->vaddr[barno] = NULL;
+ }
+}
+
+static void intel_xpcie_cleanup_bars(struct pci_epf *epf)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+
+ intel_xpcie_cleanup_bar(epf, BAR_2);
+ intel_xpcie_cleanup_bar(epf, BAR_4);
+ xpcie_epf->xpcie.mmio = NULL;
+ xpcie_epf->xpcie.bar4 = NULL;
+}
+
+static int intel_xpcie_setup_bar(struct pci_epf *epf, enum pci_barno barno,
+ size_t min_size, size_t align)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+ struct pci_epf_bar *bar = &epf->bar[barno];
+ struct pci_epc *epc = epf->epc;
+ void *vaddr;
+ int ret;
+
+ bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
+ if (!bar->size)
+ bar->size = min_size;
+
+ if (barno == BAR_4)
+ bar->flags |= PCI_BASE_ADDRESS_MEM_PREFETCH;
+
+ vaddr = pci_epf_alloc_space(epf, bar->size, barno, align);
+ if (!vaddr) {
+ dev_err(&epf->dev, "Failed to map BAR%d\n", barno);
+ return -ENOMEM;
+ }
+
+ ret = pci_epc_set_bar(epc, epf->func_no, bar);
+ if (ret) {
+ pci_epf_free_space(epf, vaddr, barno);
+ dev_err(&epf->dev, "Failed to set BAR%d\n", barno);
+ return ret;
+ }
+
+ xpcie_epf->vaddr[barno] = vaddr;
+
+ return 0;
+}
+
+static int intel_xpcie_setup_bars(struct pci_epf *epf, size_t align)
+{
+ int ret;
+
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+
+ ret = intel_xpcie_setup_bar(epf, BAR_2, BAR2_MIN_SIZE, align);
+ if (ret)
+ return ret;
+
+ ret = intel_xpcie_setup_bar(epf, BAR_4, BAR4_MIN_SIZE, align);
+ if (ret) {
+ intel_xpcie_cleanup_bar(epf, BAR_2);
+ return ret;
+ }
+
+ xpcie_epf->comm_bar = BAR_2;
+ xpcie_epf->xpcie.mmio = (void *)xpcie_epf->vaddr[BAR_2] +
+ XPCIE_MMIO_OFFSET;
+
+ xpcie_epf->bar4 = BAR_4;
+ xpcie_epf->xpcie.bar4 = xpcie_epf->vaddr[BAR_4];
+
+ return 0;
+}
+
+static int intel_xpcie_epf_get_platform_data(struct device *dev,
+ struct xpcie_epf *xpcie_epf)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct device_node *soc_node, *version_node;
+ struct resource *res;
+ const char *prop;
+ int prop_size;
+
+ xpcie_epf->irq_dma = platform_get_irq_byname(pdev, "intr");
+ if (xpcie_epf->irq_dma < 0) {
+ dev_err(&xpcie_epf->epf->dev, "failed to get IRQ: %d\n",
+ xpcie_epf->irq_dma);
+ return -EINVAL;
+ }
+
+ xpcie_epf->irq_err = platform_get_irq_byname(pdev, "err_intr");
+ if (xpcie_epf->irq_err < 0) {
+		dev_err(&xpcie_epf->epf->dev, "failed to get error IRQ: %d\n",
+ xpcie_epf->irq_err);
+ return -EINVAL;
+ }
+
+ xpcie_epf->irq = platform_get_irq_byname(pdev, "ev_intr");
+ if (xpcie_epf->irq < 0) {
+ dev_err(&xpcie_epf->epf->dev, "failed to get event IRQ: %d\n",
+ xpcie_epf->irq);
+ return -EINVAL;
+ }
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "apb");
+	if (!res)
+		return -ENODEV;
+	xpcie_epf->apb_base =
+		devm_ioremap(dev, res->start, resource_size(res));
+	/* devm_ioremap() returns NULL on failure, not an ERR_PTR */
+	if (!xpcie_epf->apb_base)
+		return -ENOMEM;
+
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
+	if (!res)
+		return -ENODEV;
+	xpcie_epf->dbi_base =
+		devm_ioremap(dev, res->start, resource_size(res));
+	if (!xpcie_epf->dbi_base)
+		return -ENOMEM;
+
+ memcpy(xpcie_epf->stepping, "B0", 2);
+ soc_node = of_get_parent(pdev->dev.of_node);
+ if (soc_node) {
+ version_node = of_get_child_by_name(soc_node, "version-info");
+ if (version_node) {
+ prop = of_get_property(version_node, "stepping",
+ &prop_size);
+ if (prop && prop_size <= KEEMBAY_XPCIE_STEPPING_MAXLEN)
+ memcpy(xpcie_epf->stepping, prop, prop_size);
+ of_node_put(version_node);
+ }
+ of_node_put(soc_node);
+ }
+
+ return 0;
+}
+
+static int intel_xpcie_epf_bind(struct pci_epf *epf)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+ const struct pci_epc_features *features;
+ struct pci_epc *epc = epf->epc;
+ struct device *dev;
+ size_t align = SZ_16K;
+ int ret;
+
+ if (WARN_ON_ONCE(!epc))
+ return -EINVAL;
+
+ dev = epc->dev.parent;
+ features = pci_epc_get_features(epc, epf->func_no);
+ xpcie_epf->epc_features = features;
+ if (features) {
+ align = features->align;
+ ret = intel_xpcie_configure_bar(epf, features);
+ if (ret)
+ return ret;
+ }
+
+ ret = intel_xpcie_setup_bars(epf, align);
+ if (ret) {
+ dev_err(&epf->dev, "BAR initialization failed\n");
+ return ret;
+ }
+
+ ret = intel_xpcie_epf_get_platform_data(dev, xpcie_epf);
+ if (ret) {
+ dev_err(&epf->dev, "Unable to get platform data\n");
+ return -EINVAL;
+ }
+
+ if (!strcmp(xpcie_epf->stepping, "A0")) {
+ xpcie_epf->xpcie.legacy_a0 = true;
+ iowrite32(1, (void __iomem *)xpcie_epf->xpcie.mmio +
+ XPCIE_MMIO_LEGACY_A0);
+ } else {
+ xpcie_epf->xpcie.legacy_a0 = false;
+ iowrite32(0, (void __iomem *)xpcie_epf->xpcie.mmio +
+ XPCIE_MMIO_LEGACY_A0);
+ }
+
+ /* Enable interrupt */
+ writel(LBC_CII_EVENT_FLAG,
+ xpcie_epf->apb_base + PCIE_REGS_PCIE_INTR_ENABLE);
+ ret = devm_request_irq(&epf->dev, xpcie_epf->irq,
+ &intel_xpcie_host_interrupt, 0,
+ XPCIE_DRIVER_NAME, &xpcie_epf->xpcie);
+ if (ret) {
+ dev_err(&epf->dev, "failed to request irq\n");
+ goto err_cleanup_bars;
+ }
+
+ ret = devm_request_irq(&epf->dev, xpcie_epf->irq_err,
+ &intel_xpcie_err_interrupt, 0,
+ XPCIE_DRIVER_NAME, &xpcie_epf->xpcie);
+ if (ret) {
+ dev_err(&epf->dev, "failed to request error irq\n");
+ goto err_cleanup_bars;
+ }
+
+ return 0;
+
+err_cleanup_bars:
+ intel_xpcie_cleanup_bars(epf);
+
+ return ret;
+}
+
+static void intel_xpcie_epf_unbind(struct pci_epf *epf)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+ struct pci_epc *epc = epf->epc;
+
+	/* IRQs were requested with devm_request_irq(), so release them
+	 * through devm to avoid a double free at device teardown.
+	 */
+	devm_free_irq(&epf->dev, xpcie_epf->irq, &xpcie_epf->xpcie);
+	devm_free_irq(&epf->dev, xpcie_epf->irq_err, &xpcie_epf->xpcie);
+
+ pci_epc_stop(epc);
+
+ intel_xpcie_cleanup_bars(epf);
+}
+
+static int intel_xpcie_epf_probe(struct pci_epf *epf)
+{
+ struct device *dev = &epf->dev;
+ struct xpcie_epf *xpcie_epf;
+
+ xpcie_epf = devm_kzalloc(dev, sizeof(*xpcie_epf), GFP_KERNEL);
+ if (!xpcie_epf)
+ return -ENOMEM;
+
+ epf->header = &xpcie_header;
+ xpcie_epf->epf = epf;
+ epf_set_drvdata(epf, xpcie_epf);
+
+ return 0;
+}
+
+static void intel_xpcie_epf_shutdown(struct device *dev)
+{
+ struct pci_epf *epf = to_pci_epf(dev);
+ struct xpcie_epf *xpcie_epf;
+
+ xpcie_epf = epf_get_drvdata(epf);
+
+	/* Notify the host in case PCIe hot plug is not supported */
+ if (xpcie_epf)
+ pci_epc_raise_irq(epf->epc, epf->func_no, PCI_EPC_IRQ_MSI, 1);
+}
+
+static struct pci_epf_ops ops = {
+ .bind = intel_xpcie_epf_bind,
+ .unbind = intel_xpcie_epf_unbind,
+};
+
+static struct pci_epf_driver xpcie_epf_driver = {
+ .driver.name = "mxlk_pcie_epf",
+ .driver.shutdown = intel_xpcie_epf_shutdown,
+ .probe = intel_xpcie_epf_probe,
+ .id_table = xpcie_epf_ids,
+ .ops = &ops,
+ .owner = THIS_MODULE,
+};
+
+static int __init intel_xpcie_epf_init(void)
+{
+ int ret;
+
+ ret = pci_epf_register_driver(&xpcie_epf_driver);
+ if (ret) {
+ pr_err("Failed to register xlink pcie epf driver: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+module_init(intel_xpcie_epf_init);
+
+static void __exit intel_xpcie_epf_exit(void)
+{
+ pci_epf_unregister_driver(&xpcie_epf_driver);
+}
+module_exit(intel_xpcie_epf_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION(XPCIE_DRIVER_DESC);
diff --git a/drivers/misc/xlink-pcie/local_host/epf.h b/drivers/misc/xlink-pcie/local_host/epf.h
new file mode 100644
index 000000000000..2b38c87b3701
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/epf.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef XPCIE_EPF_HEADER_
+#define XPCIE_EPF_HEADER_
+
+#include <linux/pci-epc.h>
+#include <linux/pci-epf.h>
+
+#include "xpcie.h"
+
+#define XPCIE_DRIVER_NAME "mxlk_pcie_epf"
+#define XPCIE_DRIVER_DESC "Intel(R) xLink PCIe endpoint function driver"
+
+#define KEEMBAY_XPCIE_STEPPING_MAXLEN 8
+
+struct xpcie_epf {
+ struct pci_epf *epf;
+ void *vaddr[BAR_5 + 1];
+ enum pci_barno comm_bar;
+ enum pci_barno bar4;
+ const struct pci_epc_features *epc_features;
+ struct xpcie xpcie;
+ int irq;
+ int irq_dma;
+ int irq_err;
+ void __iomem *apb_base;
+ void __iomem *dma_base;
+ void __iomem *dbi_base;
+ char stepping[KEEMBAY_XPCIE_STEPPING_MAXLEN];
+};
+
+#endif /* XPCIE_EPF_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/local_host/xpcie.h b/drivers/misc/xlink-pcie/local_host/xpcie.h
new file mode 100644
index 000000000000..0745e6dfee10
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/xpcie.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef XPCIE_HEADER_
+#define XPCIE_HEADER_
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci_ids.h>
+
+#ifndef PCI_DEVICE_ID_INTEL_KEEMBAY
+#define PCI_DEVICE_ID_INTEL_KEEMBAY 0x6240
+#endif
+
+#define XPCIE_IO_COMM_SIZE SZ_16K
+#define XPCIE_MMIO_OFFSET SZ_4K
+
+/* MMIO layout and offsets shared between device and host */
+struct xpcie_mmio {
+ u8 legacy_a0;
+} __packed;
+
+#define XPCIE_MMIO_LEGACY_A0 (offsetof(struct xpcie_mmio, legacy_a0))
+
+struct xpcie {
+ u32 status;
+ bool legacy_a0;
+ void *mmio;
+ void *bar4;
+};
+
+#endif /* XPCIE_HEADER_ */
--
2.17.1

2021-01-08 21:32:29

by mark gross

[permalink] [raw]
Subject: [PATCH v2 18/34] xlink-ipc: Add xlink ipc driver

From: Seamus Kelly <[email protected]>

Add the xLink IPC driver, which interfaces the xLink Core driver with
the Keem Bay VPU IPC driver, thus enabling xLink to control and
communicate with the VPU IP present on the Intel Keem Bay SoC.

Specifically the driver enables xLink Core to:

* Boot / Reset the VPU IP
* Register to VPU IP event notifications (device connected, device
disconnected, WDT event)
* Query the status of the VPU IP (OFF, BUSY, READY, ERROR, RECOVERY)
* Exchange data with the VPU IP, using the Keem Bay IPC mechanism
  - Including the ability to send 'volatile' data (i.e., small amounts
    of data, up to 128 bytes, not allocated in the CPU/VPU shared
    memory region)
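
The driver addresses VPU devices through an xlink 'sw device id'
bitfield; a minimal sketch of the encoding, built from the macros
defined later in xlink-ipc.c (the helper name is illustrative):

    /* Sketch: compose the IPC-interface sw_device_id for VPU 'vpu_id'. */
    static u32 xlink_ipc_sw_device_id(u32 vpu_id)
    {
            return SW_DEVICE_ID_INTERFACE_IPC_SVALUE |
                   ((vpu_id << SW_DEVICE_ID_VPU_ID_SHIFT) &
                    SW_DEVICE_ID_VPU_ID_SMASK);
    }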

Cc: Jonathan Corbet <[email protected]>
Cc: Derek Kiernan <[email protected]>
Cc: Dragan Cvetic <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: [email protected]
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Seamus Kelly <[email protected]>
Signed-off-by: Ryan Carnaghi <[email protected]>
---
Documentation/vpu/index.rst | 1 +
Documentation/vpu/xlink-ipc.rst | 51 ++
MAINTAINERS | 6 +
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/xlink-ipc/Kconfig | 7 +
drivers/misc/xlink-ipc/Makefile | 4 +
drivers/misc/xlink-ipc/xlink-ipc.c | 878 +++++++++++++++++++++++++++++
include/linux/xlink-ipc.h | 48 ++
9 files changed, 997 insertions(+)
create mode 100644 Documentation/vpu/xlink-ipc.rst
create mode 100644 drivers/misc/xlink-ipc/Kconfig
create mode 100644 drivers/misc/xlink-ipc/Makefile
create mode 100644 drivers/misc/xlink-ipc/xlink-ipc.c
create mode 100644 include/linux/xlink-ipc.h

diff --git a/Documentation/vpu/index.rst b/Documentation/vpu/index.rst
index 661cc700ee45..49c78bb65b83 100644
--- a/Documentation/vpu/index.rst
+++ b/Documentation/vpu/index.rst
@@ -15,3 +15,4 @@ This documentation contains information for the Intel VPU stack.

vpu-stack-overview
xlink-pcie
+ xlink-ipc
diff --git a/Documentation/vpu/xlink-ipc.rst b/Documentation/vpu/xlink-ipc.rst
new file mode 100644
index 000000000000..97ee62b10e93
--- /dev/null
+++ b/Documentation/vpu/xlink-ipc.rst
@@ -0,0 +1,51 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============================
+Kernel driver: xLink IPC driver
+===============================
+
+Supported chips:
+
+* | Intel Edge.AI Computer Vision platforms: Keem Bay
+ | Suffix: Bay
+ | Datasheet: (not yet publicly available)
+
+Introduction
+============
+
+The xLink IPC driver interfaces the xLink Core driver with the Keem Bay VPU IPC
+driver, thus enabling xLink to control and communicate with the VPU IP present
+on the Intel Keem Bay SoC.
+
+Specifically the driver enables xLink Core to:
+
+* Boot / Reset the VPU IP
+* Register to VPU IP event notifications (device connected, device disconnected,
+ WDT event)
+* Query the status of the VPU IP (OFF, BUSY, READY, ERROR, RECOVERY)
+* Exchange data with the VPU IP, using the Keem Bay IPC mechanism
+
+  * Including the ability to send 'volatile' data (i.e., small amounts of
+    data, up to 128 bytes, not allocated in the CPU/VPU shared memory region)
+
+Sending / Receiving 'volatile' data
+===================================
+
+Data to be exchanged with Keem Bay IPC needs to be allocated in the portion of
+DDR shared between the CPU and VPU.
+
+This can be impractical for small amounts of data that user code can allocate
+on the stack.
+
+To reduce the burden on user code, xLink Core provides special send / receive
+functions to send up to 128 bytes of 'volatile data', i.e., data that is not
+allocated in the shared memory and that might also disappear after the xLink
+API is called (e.g., because it was allocated on the stack).
+
+The xLink IPC driver implements support for transferring such 'volatile data'
+to the VPU using Keem Bay IPC. To this end, the driver reserves some memory in
+the shared memory region.
+
+When volatile data is to be sent, xLink IPC allocates a buffer from the
+reserved memory region and copies the volatile data to the buffer. The buffer
+is then transferred to the VPU using Keem Bay IPC.
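+
+A minimal sketch of that send path (the helper names here are
+illustrative, not the driver's actual symbols)::
+
+    /* Copy the caller's data into a buffer carved from the reserved
+     * region, then hand that buffer to the Keem Bay IPC layer.
+     */
+    static int send_volatile(u32 sw_device_id, u16 chan,
+                             const void *data, size_t size)
+    {
+        void *buf;
+
+        if (size > 128)                   /* volatile-data limit */
+            return -EINVAL;
+        buf = vol_buf_alloc(size);        /* from the reserved region */
+        if (!buf)
+            return -ENOMEM;
+        memcpy(buf, data, size);
+        return ipc_send(sw_device_id, chan, buf, size);
+    }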
diff --git a/MAINTAINERS b/MAINTAINERS
index df7b61900664..92693390c59d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1962,6 +1962,12 @@ F: Documentation/devicetree/bindings/arm/intel,keembay.yaml
F: arch/arm64/boot/dts/intel/keembay-evm.dts
F: arch/arm64/boot/dts/intel/keembay-soc.dtsi

+ARM/INTEL XLINK IPC SUPPORT
+M: Seamus Kelly <[email protected]>
+M: Mark Gross <[email protected]>
+S: Supported
+F: drivers/misc/xlink-ipc/
+
ARM/INTEL KEEM BAY XLINK PCIE SUPPORT
M: Srikanth Thokala <[email protected]>
M: Mark Gross <[email protected]>
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index dfb98e444c6e..1f81ea915b95 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -482,4 +482,5 @@ source "drivers/misc/cardreader/Kconfig"
source "drivers/misc/habanalabs/Kconfig"
source "drivers/misc/uacce/Kconfig"
source "drivers/misc/xlink-pcie/Kconfig"
+source "drivers/misc/xlink-ipc/Kconfig"
endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index d17621fc43d5..b51495a2f1e0 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -58,3 +58,4 @@ obj-$(CONFIG_UACCE) += uacce/
obj-$(CONFIG_XILINX_SDFEC) += xilinx_sdfec.o
obj-$(CONFIG_HISI_HIKEY_USB) += hisi_hikey_usb.o
obj-y += xlink-pcie/
+obj-$(CONFIG_XLINK_IPC) += xlink-ipc/
diff --git a/drivers/misc/xlink-ipc/Kconfig b/drivers/misc/xlink-ipc/Kconfig
new file mode 100644
index 000000000000..6e5374c7d85c
--- /dev/null
+++ b/drivers/misc/xlink-ipc/Kconfig
@@ -0,0 +1,7 @@
+config XLINK_IPC
+ tristate "Support for XLINK IPC"
+ depends on KEEMBAY_VPU_IPC
+ help
+ XLINK IPC enables the IPC communication/control sub-system used to reach the VPU.
+
+ Select M if you have an Intel SoC with a Vision Processing Unit (VPU).
diff --git a/drivers/misc/xlink-ipc/Makefile b/drivers/misc/xlink-ipc/Makefile
new file mode 100644
index 000000000000..f92c95525e23
--- /dev/null
+++ b/drivers/misc/xlink-ipc/Makefile
@@ -0,0 +1,4 @@
+#
+# Makefile for xlink IPC Linux driver
+#
+obj-$(CONFIG_XLINK_IPC) += xlink-ipc.o
diff --git a/drivers/misc/xlink-ipc/xlink-ipc.c b/drivers/misc/xlink-ipc/xlink-ipc.c
new file mode 100644
index 000000000000..c04b8f44f684
--- /dev/null
+++ b/drivers/misc/xlink-ipc/xlink-ipc.c
@@ -0,0 +1,878 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Linux Kernel Platform API
+ *
+ * Copyright (C) 2018-2020 Intel Corporation
+ *
+ */
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include <linux/soc/intel/keembay-vpu-ipc.h>
+
+#include <linux/xlink-ipc.h>
+
+#define XLINK_IPC_MAX_DEVICE_NAME_SIZE 12
+
+/* used to extract fields from and create xlink sw device id */
+#define SW_DEVICE_ID_INTERFACE_SHIFT 24U
+#define SW_DEVICE_ID_INTERFACE_MASK 0x7
+#define SW_DEVICE_ID_INTERFACE_SMASK \
+ (SW_DEVICE_ID_INTERFACE_MASK << SW_DEVICE_ID_INTERFACE_SHIFT)
+#define SW_DEVICE_ID_INTERFACE_IPC_VALUE 0x0
+#define SW_DEVICE_ID_INTERFACE_IPC_SVALUE \
+ (SW_DEVICE_ID_INTERFACE_IPC_VALUE << SW_DEVICE_ID_INTERFACE_SHIFT)
+#define SW_DEVICE_ID_VPU_ID_SHIFT 1U
+#define SW_DEVICE_ID_VPU_ID_MASK 0x7
+#define SW_DEVICE_ID_VPU_ID_SMASK \
+ (SW_DEVICE_ID_VPU_ID_MASK << SW_DEVICE_ID_VPU_ID_SHIFT)
+#define GET_VPU_ID_FROM_SW_DEVICE_ID(id) \
+ (((id) >> SW_DEVICE_ID_VPU_ID_SHIFT) & SW_DEVICE_ID_VPU_ID_MASK)
+#define GET_SW_DEVICE_ID_FROM_VPU_ID(id) \
+ ((((id) << SW_DEVICE_ID_VPU_ID_SHIFT) & SW_DEVICE_ID_VPU_ID_SMASK) \
+ | SW_DEVICE_ID_INTERFACE_IPC_SVALUE)
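+/*
+ * Example: vpu id 1 maps to sw device id 0x2 -- the vpu id sits in
+ * bits [3:1] and the interface type (0x0 for IPC) in bits [26:24].
+ */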
+
+/* the maximum buffer size for volatile xlink operations */
+#define XLINK_MAX_BUF_SIZE 128U
+
+/* indices used to retrieve reserved memory from the dt */
+#define LOCAL_XLINK_IPC_BUFFER_IDX 0
+#define REMOTE_XLINK_IPC_BUFFER_IDX 1
+
+/* index used to retrieve the vpu ipc device phandle from the dt */
+#define VPU_IPC_DEVICE_PHANDLE_IDX 1
+
+/* the timeout (in ms) used to wait for the vpu ready message */
+#define XLINK_VPU_WAIT_FOR_READY_MS 3000
+
+/* xlink buffer memory region */
+struct xlink_buf_mem {
+ struct device *dev; /* child device managing the memory region */
+ void *vaddr; /* the virtual address of the memory region */
+ dma_addr_t dma_handle; /* the physical address of the memory region */
+ size_t size; /* the size of the memory region */
+};
+
+/* xlink buffer pool */
+struct xlink_buf_pool {
+ void *buf; /* pointer to the start of pool area */
+ size_t buf_cnt; /* pool size (i.e., number of buffers) */
+ size_t idx; /* current index */
+ spinlock_t lock; /* the lock protecting this pool */
+};
+
+/* xlink ipc device */
+struct xlink_ipc_dev {
+ struct platform_device *pdev; /* pointer to platform device */
+ u32 vpu_id; /* the vpu id defined in the device tree */
+ u32 sw_device_id; /* the sw device id */
+ const char *device_name; /* the vpu device name */
+ struct xlink_buf_mem local_xlink_mem; /* tx buffer memory region */
+ struct xlink_buf_mem remote_xlink_mem; /* rx buffer memory region */
+ struct xlink_buf_pool xlink_buf_pool; /* tx buffer pool */
+ struct device *vpu_dev; /* pointer to vpu ipc device */
+ int (*callback)(u32 sw_device_id, u32 event);
+};
+
+/* Events that can be notified via callback, when registered. */
+enum xlink_vpu_event {
+ XLINK_VPU_NOTIFY_DISCONNECT = 0,
+ XLINK_VPU_NOTIFY_CONNECT,
+ XLINK_VPU_NOTIFY_MSS_WDT,
+ XLINK_VPU_NOTIFY_NCE_WDT,
+ NUM_EVENT_TYPE
+};
+
+#define VPU_ID 0
+#define VPU_DEVICE_NAME "vpu-0"
+#define VPU_SW_DEVICE_ID 0
+static struct xlink_ipc_dev *xlink_dev;
+
+/*
+ * Functions related to reserved-memory sub-devices.
+ */
+
+/*
+ * xlink_reserved_memory_remove() - Removes the reserved memory sub-devices.
+ *
+ * @xlink_dev: [in] The xlink ipc device with reserved memory sub-devices.
+ */
+static void xlink_reserved_memory_remove(struct xlink_ipc_dev *xlink_dev)
+{
+ device_unregister(xlink_dev->local_xlink_mem.dev);
+ device_unregister(xlink_dev->remote_xlink_mem.dev);
+}
+
+/*
+ * xlink_reserved_mem_release() - Reserved memory release callback function.
+ *
+ * @dev: [in] The reserved memory sub-device.
+ */
+static void xlink_reserved_mem_release(struct device *dev)
+{
+ of_reserved_mem_device_release(dev);
+}
+
+/*
+ * get_xlink_reserved_mem_size() - Gets the size of the reserved memory region.
+ *
+ * @dev: [in] The device the reserved memory region is allocated to.
+ * @idx: [in] The reserved memory region's index in the phandle table.
+ *
+ * Return: The reserved memory size, 0 on failure.
+ */
+static resource_size_t get_xlink_reserved_mem_size(struct device *dev, int idx)
+{
+ struct device_node *np;
+ struct resource mem;
+ int rc;
+
+ np = of_parse_phandle(dev->of_node, "memory-region", idx);
+ if (!np) {
+ dev_err(dev, "Couldn't find memory-region %d\n", idx);
+ return 0;
+ }
+
+ rc = of_address_to_resource(np, 0, &mem);
+ if (rc) {
+ dev_err(dev, "Couldn't map address to resource %d\n", idx);
+ return 0;
+ }
+ return resource_size(&mem);
+}
+
+/*
+ * init_xlink_reserved_mem_dev() - Initializes a reserved memory sub-device.
+ *
+ * @dev: [in] The parent device of the reserved memory sub-device.
+ * @name: [in] The name to assign to the memory region.
+ * @idx: [in] The reserved memory region index in the phandle table.
+ *
+ * Return: The initialized sub-device, NULL on failure.
+ */
+static struct device *init_xlink_reserved_mem_dev(struct device *dev,
+ const char *name, int idx)
+{
+ struct device *child;
+ int rc;
+
+ child = devm_kzalloc(dev, sizeof(*child), GFP_KERNEL);
+ if (!child)
+ return NULL;
+
+ device_initialize(child);
+ dev_set_name(child, "%s:%s", dev_name(dev), name);
+ dev_err(dev, " dev_name %s, name %s\n", dev_name(dev), name);
+ child->parent = dev;
+ child->dma_mask = dev->dma_mask;
+ child->coherent_dma_mask = dev->coherent_dma_mask;
+ /* set up dma configuration using information from parent's dt node */
+ rc = of_dma_configure(child, dev->of_node, true);
+ if (rc)
+ return NULL;
+ child->release = xlink_reserved_mem_release;
+
+ rc = device_add(child);
+ if (rc)
+ goto err;
+ rc = of_reserved_mem_device_init_by_idx(child, dev->of_node, idx);
+ if (rc) {
+ dev_err(dev, "Couldn't get reserved memory with idx = %d, %d\n",
+ idx, rc);
+ device_del(child);
+ goto err;
+ }
+ return child;
+
+err:
+ put_device(child);
+ return NULL;
+}
+
+/*
+ * xlink_reserved_memory_init() - Initialize reserved memory for the device.
+ *
+ * @xlink_dev: [in] The xlink ipc device the reserved memory is allocated to.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int xlink_reserved_memory_init(struct xlink_ipc_dev *xlink_dev)
+{
+ struct device *dev = &xlink_dev->pdev->dev;
+ struct xlink_buf_mem *lxm = &xlink_dev->local_xlink_mem;
+ struct xlink_buf_mem *rxm = &xlink_dev->remote_xlink_mem;
+
+ lxm->dev = init_xlink_reserved_mem_dev(dev, "xlink_local_reserved",
+ LOCAL_XLINK_IPC_BUFFER_IDX);
+ if (!lxm->dev)
+ return -ENOMEM;
+
+ lxm->size = get_xlink_reserved_mem_size(dev, LOCAL_XLINK_IPC_BUFFER_IDX);
+
+ rxm->dev = init_xlink_reserved_mem_dev(dev, "xlink_remote_reserved",
+ REMOTE_XLINK_IPC_BUFFER_IDX);
+ if (!rxm->dev) {
+ device_unregister(xlink_dev->local_xlink_mem.dev);
+ return -ENOMEM;
+ }
+
+ rxm->size = get_xlink_reserved_mem_size(dev, REMOTE_XLINK_IPC_BUFFER_IDX);
+
+ return 0;
+}
+
+/*
+ * xlink_reserved_memory_alloc() - Allocate reserved memory for the device.
+ *
+ * @xlink_dev: [in] The xlink ipc device.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int xlink_reserved_memory_alloc(struct xlink_ipc_dev *xlink_dev)
+{
+ struct xlink_buf_mem *lxm = &xlink_dev->local_xlink_mem;
+ struct xlink_buf_mem *rxm = &xlink_dev->remote_xlink_mem;
+
+ lxm->vaddr = dmam_alloc_coherent(xlink_dev->local_xlink_mem.dev,
+ xlink_dev->local_xlink_mem.size,
+ &xlink_dev->local_xlink_mem.dma_handle,
+ GFP_KERNEL);
+ if (!lxm->vaddr) {
+ dev_err(&xlink_dev->pdev->dev,
+ "Failed to allocate from local reserved memory.\n");
+ return -ENOMEM;
+ }
+ rxm->vaddr = dmam_alloc_coherent(xlink_dev->remote_xlink_mem.dev,
+ xlink_dev->remote_xlink_mem.size,
+ &xlink_dev->remote_xlink_mem.dma_handle,
+ GFP_KERNEL);
+ if (!rxm->vaddr) {
+ dev_err(&xlink_dev->pdev->dev,
+ "Failed to allocate from remote reserved memory.\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+/*
+ * init_xlink_buf_pool() - Initialize the device's tx buffer pool.
+ *
+ * @xlink_dev: [in] The xlink ipc device.
+ *
+ * Return: 0 on success.
+ */
+static int init_xlink_buf_pool(struct xlink_ipc_dev *xlink_dev)
+{
+ struct xlink_buf_mem *mem = &xlink_dev->local_xlink_mem;
+ struct xlink_buf_pool *xbufpool = &xlink_dev->xlink_buf_pool;
+
+ memset(mem->vaddr, 0, mem->size);
+ xbufpool->buf = mem->vaddr;
+ xbufpool->buf_cnt = mem->size / XLINK_MAX_BUF_SIZE;
+ xbufpool->idx = 0;
+ dev_info(&xlink_dev->pdev->dev, "xlink buffer pool contains %zu buffers\n",
+ xbufpool->buf_cnt);
+ spin_lock_init(&xbufpool->lock);
+
+ return 0;
+}
+
+/*
+ * xlink_phys_to_virt() - Convert an xlink physical address to a virtual one.
+ *
+ * @xlink_mem: [in] The memory region where the physical address is located.
+ * @paddr: [in] The physical address to convert to a virtual one.
+ *
+ * Return: The corresponding virtual address, or NULL if the
+ * physical address is not in the expected memory
+ * range.
+ */
+static void *xlink_phys_to_virt(const struct xlink_buf_mem *xlink_mem,
+ u32 paddr)
+{
+ if (unlikely(paddr < xlink_mem->dma_handle ||
+ paddr >= (xlink_mem->dma_handle + xlink_mem->size)))
+ return NULL;
+
+ return xlink_mem->vaddr + (paddr - xlink_mem->dma_handle);
+}
+
+/*
+ * xlink_virt_to_phys() - Convert an xlink virtual address to a physical one.
+ *
+ * @xlink_mem: [in] The memory region where the physical address is located.
+ * @vaddr: [in] The virtual address to convert to a physical one.
+ * @paddr: [out] Where to store the computed physical address.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int xlink_virt_to_phys(struct xlink_buf_mem *xlink_mem, void *vaddr,
+ u32 *paddr)
+{
+ if (unlikely((xlink_mem->dma_handle + xlink_mem->size) > 0xFFFFFFFF))
+ return -EINVAL;
+ if (unlikely(vaddr < xlink_mem->vaddr ||
+ vaddr >= (xlink_mem->vaddr + xlink_mem->size)))
+ return -EINVAL;
+ *paddr = xlink_mem->dma_handle + (vaddr - xlink_mem->vaddr);
+
+ return 0;
+}
+
+/*
+ * get_next_xlink_buf() - Get next xlink buffer from an xlink device's pool.
+ *
+ * @xlink_dev: [in] The xlink ipc device to get a buffer from.
+ * @buf: [out] Where to store the reference to the next buffer.
+ * @size: [in] The size of the buffer to get.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int get_next_xlink_buf(struct xlink_ipc_dev *xlink_dev, void **buf,
+ int size)
+{
+ struct xlink_buf_pool *pool;
+ unsigned long flags;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ if (size > XLINK_MAX_BUF_SIZE)
+ return -EINVAL;
+
+ pool = &xlink_dev->xlink_buf_pool;
+
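+ /*
+ * Buffers are handed out round-robin and reused on wrap-around with no
+ * completion tracking; the pool is assumed to be large enough that a
+ * buffer is not overwritten while its contents are still in flight.
+ */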
+ spin_lock_irqsave(&pool->lock, flags);
+ if (pool->idx == pool->buf_cnt) {
+ /* reached end of buffers - wrap around */
+ pool->idx = 0;
+ }
+ *buf = pool->buf + (pool->idx * XLINK_MAX_BUF_SIZE);
+ pool->idx++;
+ spin_unlock_irqrestore(&pool->lock, flags);
+ return 0;
+}
+
+/*
+ * Functions related to the vpu ipc device reference.
+ */
+
+/*
+ * vpu_ipc_device_put() - Release the vpu ipc device held by the xlink device.
+ *
+ * @xlink_dev: [in] The xlink ipc device.
+ */
+static void vpu_ipc_device_put(struct xlink_ipc_dev *xlink_dev)
+{
+ put_device(xlink_dev->vpu_dev);
+}
+
+/*
+ * vpu_ipc_device_get() - Get the vpu ipc device reference for the xlink device.
+ *
+ * @xlink_dev: [in] The xlink ipc device.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int vpu_ipc_device_get(struct xlink_ipc_dev *xlink_dev)
+{
+ struct device *dev = &xlink_dev->pdev->dev;
+ struct platform_device *pdev;
+ struct device_node *np;
+
+ np = of_parse_phandle(dev->of_node, "intel,keembay-vpu-ipc", 0);
+ if (!np)
+ return -ENODEV;
+
+ pdev = of_find_device_by_node(np);
+ if (!pdev) {
+ dev_info(dev, "IPC device not probed\n");
+ of_node_put(np);
+ return -EPROBE_DEFER;
+ }
+
+ xlink_dev->vpu_dev = get_device(&pdev->dev);
+ of_node_put(np);
+
+ dev_info(dev, "Using IPC device: %s\n", dev_name(xlink_dev->vpu_dev));
+ return 0;
+}
+
+/*
+ * xlink platform api - ipc interface functions
+ */
+
+/*
+ * xlink_ipc_connect() - platform connect interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to connect to.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_connect(u32 sw_device_id)
+{
+ if (!xlink_dev)
+ return -ENODEV;
+
+ return 0;
+}
+EXPORT_SYMBOL(xlink_ipc_connect);
+
+/*
+ * xlink_ipc_write() - platform write interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to write to.
+ * @data: [in] The data buffer to write.
+ * @size: [in-out] The amount of data to write/written.
+ * @timeout: [in] The time (in ms) to wait before timing out.
+ * @context: [in] The ipc operation context.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_write(u32 sw_device_id, void *data, size_t * const size,
+ u32 timeout, void *context)
+{
+ struct xlink_ipc_context *ctx = context;
+ void *vaddr = NULL;
+ u32 paddr;
+ int rc;
+
+ if (!ctx)
+ return -EINVAL;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ if (ctx->is_volatile) {
+ if (*size > XLINK_MAX_BUF_SIZE)
+ return -EINVAL;
+ rc = get_next_xlink_buf(xlink_dev, &vaddr, *size);
+ if (rc)
+ return rc;
+ memcpy(vaddr, data, *size);
+ rc = xlink_virt_to_phys(&xlink_dev->local_xlink_mem, vaddr,
+ &paddr);
+ if (rc)
+ return rc;
+ } else {
+ paddr = *(u32 *)data;
+ }
+ rc = intel_keembay_vpu_ipc_send(xlink_dev->vpu_dev,
+ KMB_VPU_IPC_NODE_LEON_MSS, ctx->chan,
+ paddr, *size);
+
+ return rc;
+}
+EXPORT_SYMBOL(xlink_ipc_write);
+
+/*
+ * xlink_ipc_read() - platform read interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to read from.
+ * @data: [out] The data buffer to read into.
+ * @size: [in-out] The amount of data to read/was read.
+ * @timeout: [in] The time (in ms) to wait before timing out.
+ * @context: [in] The ipc operation context.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_read(u32 sw_device_id, void *data, size_t * const size,
+ u32 timeout, void *context)
+{
+ struct xlink_ipc_context *ctx = context;
+ u32 addr = 0;
+ void *vaddr;
+ int rc;
+
+ if (!ctx)
+ return -EINVAL;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ rc = intel_keembay_vpu_ipc_recv(xlink_dev->vpu_dev,
+ KMB_VPU_IPC_NODE_LEON_MSS, ctx->chan,
+ &addr, size, timeout);
+ if (rc)
+ return rc;
+
+ if (ctx->is_volatile) {
+ vaddr = xlink_phys_to_virt(&xlink_dev->remote_xlink_mem, addr);
+ if (vaddr)
+ memcpy(data, vaddr, *size);
+ else
+ return -ENXIO;
+ } else {
+ *(u32 *)data = addr;
+ }
+ return rc;
+}
+EXPORT_SYMBOL(xlink_ipc_read);
+
+/*
+ * xlink_ipc_get_device_list() - platform get device list interface.
+ *
+ * @sw_device_id_list: [out] The list of devices found.
+ * @num_devices: [out] The number of devices found.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_get_device_list(u32 *sw_device_id_list, u32 *num_devices)
+{
+ int i = 0;
+
+ if (!sw_device_id_list || !num_devices)
+ return -EINVAL;
+
+ if (xlink_dev) {
+ *sw_device_id_list = xlink_dev->sw_device_id;
+ i++;
+ }
+
+ *num_devices = i;
+ return 0;
+}
+EXPORT_SYMBOL(xlink_ipc_get_device_list);
+
+/*
+ * xlink_ipc_get_device_name() - platform get device name interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to get name of.
+ * @device_name: [out] The name of the xlink ipc device.
+ * @name_size: [in] The maximum size of the name.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_get_device_name(u32 sw_device_id, char *device_name,
+ size_t name_size)
+{
+ size_t size;
+
+ if (!device_name)
+ return -EINVAL;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ size = (name_size > XLINK_IPC_MAX_DEVICE_NAME_SIZE)
+ ? XLINK_IPC_MAX_DEVICE_NAME_SIZE
+ : name_size;
+ strscpy(device_name, xlink_dev->device_name, size);
+ return 0;
+}
+EXPORT_SYMBOL(xlink_ipc_get_device_name);
+
+/*
+ * xlink_ipc_get_device_status() - platform get device status interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to get status of.
+ * @device_status: [out] The device status.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_get_device_status(u32 sw_device_id, u32 *device_status)
+{
+ if (!device_status)
+ return -EINVAL;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ *device_status = intel_keembay_vpu_status(xlink_dev->vpu_dev);
+ return 0;
+}
+EXPORT_SYMBOL(xlink_ipc_get_device_status);
+
+static void kernel_callback(struct device *dev, enum intel_keembay_vpu_event event)
+{
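+ /* the vpu ipc event numbering is assumed to match enum xlink_vpu_event */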
+ if ((enum xlink_vpu_event)event >= NUM_EVENT_TYPE)
+ return;
+
+ if (xlink_dev && xlink_dev->callback)
+ xlink_dev->callback(xlink_dev->sw_device_id, event);
+}
+
+/*
+ * xlink_ipc_register_for_events() - platform register for events interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to register events for.
+ * @callback: [in] The callback function for events.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_register_for_events(u32 sw_device_id,
+ int (*callback)(u32 sw_device_id, u32 event))
+{
+ int rc;
+
+ if (!xlink_dev)
+ return -ENODEV;
+ xlink_dev->callback = callback;
+ rc = intel_keembay_vpu_register_for_events(xlink_dev->vpu_dev, kernel_callback);
+ return rc;
+}
+EXPORT_SYMBOL(xlink_ipc_register_for_events);
+
+/*
+ * xlink_ipc_unregister_for_events() - platform unregister for events interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to unregister events for.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_unregister_for_events(u32 sw_device_id)
+{
+ int rc;
+
+ if (!xlink_dev)
+ return -ENODEV;
+ rc = intel_keembay_vpu_unregister_for_events(xlink_dev->vpu_dev);
+ return rc;
+}
+EXPORT_SYMBOL(xlink_ipc_unregister_for_events);
+
+/*
+ * xlink_ipc_boot_device() - platform boot device interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to boot.
+ * @binary_name: [in] The file name of the firmware binary to boot.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_boot_device(u32 sw_device_id, const char *binary_name)
+{
+ enum intel_keembay_vpu_state state;
+ int rc;
+
+ if (!binary_name)
+ return -EINVAL;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ pr_info("\nStart VPU 0x%x - %s\n", sw_device_id, binary_name);
+ rc = intel_keembay_vpu_startup(xlink_dev->vpu_dev, binary_name);
+ if (rc) {
+ pr_err("Failed to start VPU: %d\n", rc);
+ return -EBUSY;
+ }
+ pr_info("Successfully started VPU!\n");
+
+ /* Wait for VPU to be READY */
+ rc = intel_keembay_vpu_wait_for_ready(xlink_dev->vpu_dev,
+ XLINK_VPU_WAIT_FOR_READY_MS);
+ if (rc) {
+ pr_err("Tried to start VPU but never got READY.\n");
+ return -EBUSY;
+ }
+ pr_info("Successfully synchronised state with VPU!\n");
+
+ /* Check state */
+ state = intel_keembay_vpu_status(xlink_dev->vpu_dev);
+ if (state != KEEMBAY_VPU_READY) {
+ pr_err("VPU was not ready, it was %d\n", state);
+ return -EBUSY;
+ }
+ pr_info("VPU was ready.\n");
+ return 0;
+}
+EXPORT_SYMBOL(xlink_ipc_boot_device);
+
+/*
+ * xlink_ipc_reset_device() - platform reset device interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to reset.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_reset_device(u32 sw_device_id)
+{
+ enum intel_keembay_vpu_state state;
+ int rc;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ /* stop the vpu */
+ rc = intel_keembay_vpu_stop(xlink_dev->vpu_dev);
+ if (rc) {
+ pr_err("Failed to stop VPU: %d\n", rc);
+ return -EBUSY;
+ }
+ pr_info("Successfully stopped VPU!\n");
+
+ /* check state */
+ state = intel_keembay_vpu_status(xlink_dev->vpu_dev);
+ if (state != KEEMBAY_VPU_OFF) {
+ pr_err("VPU was not OFF after stop request, it was %d\n",
+ state);
+ return -EBUSY;
+ }
+ return 0;
+}
+EXPORT_SYMBOL(xlink_ipc_reset_device);
+
+/*
+ * xlink_ipc_open_channel() - platform open channel interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to open channel to.
+ * @channel: [in] The channel id to open.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_open_channel(u32 sw_device_id, u32 channel)
+{
+ int rc;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ rc = intel_keembay_vpu_ipc_open_channel(xlink_dev->vpu_dev,
+ KMB_VPU_IPC_NODE_LEON_MSS, channel);
+ return rc;
+}
+EXPORT_SYMBOL(xlink_ipc_open_channel);
+
+/*
+ * xlink_ipc_close_channel() - platform close channel interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to close channel to.
+ * @channel: [in] The channel id to close.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_close_channel(u32 sw_device_id, u32 channel)
+{
+ int rc;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ rc = intel_keembay_vpu_ipc_close_channel(xlink_dev->vpu_dev,
+ KMB_VPU_IPC_NODE_LEON_MSS, channel);
+ return rc;
+}
+EXPORT_SYMBOL(xlink_ipc_close_channel);
+
+/*
+ * xlink ipc driver functions
+ */
+
+static int keembay_xlink_ipc_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ int rc;
+
+ /* allocate device data structure */
+ xlink_dev = kzalloc(sizeof(*xlink_dev), GFP_KERNEL);
+ if (!xlink_dev)
+ return -ENOMEM;
+
+ xlink_dev->pdev = pdev;
+ dev_info(dev, "Keem Bay xlink IPC driver probed.\n");
+
+ /* grab reserved memory regions and assign to child devices */
+ rc = xlink_reserved_memory_init(xlink_dev);
+ if (rc < 0) {
+ dev_err(&pdev->dev,
+ "Failed to set up reserved memory regions.\n");
+ goto r_cleanup;
+ }
+
+ /* allocate memory from the reserved memory regions */
+ rc = xlink_reserved_memory_alloc(xlink_dev);
+ if (rc < 0) {
+ dev_err(&pdev->dev,
+ "Failed to allocate reserved memory regions.\n");
+ goto r_cleanup;
+ }
+
+ /* init the xlink buffer pool used for tx */
+ init_xlink_buf_pool(xlink_dev);
+
+ /* get reference to vpu ipc device */
+ rc = vpu_ipc_device_get(xlink_dev);
+ if (rc)
+ goto r_cleanup;
+
+ /* get device id */
+ rc = of_property_read_u32(dev->of_node, "intel,keembay-vpu-ipc-id",
+ &xlink_dev->vpu_id);
+ if (rc) {
+ dev_err(dev, "Cannot get VPU ID from DT.\n");
+ goto r_cleanup;
+ }
+
+ /* assign a sw device id */
+ xlink_dev->sw_device_id = GET_SW_DEVICE_ID_FROM_VPU_ID
+ (xlink_dev->vpu_id);
+
+ /* assign a device name */
+ rc = of_property_read_string(dev->of_node, "intel,keembay-vpu-ipc-name",
+ &xlink_dev->device_name);
+ if (rc) {
+ /* only warn for now; we will enforce this in the future */
+ dev_warn(dev, "VPU name not defined in DT, using %s as default.\n",
+ VPU_DEVICE_NAME);
+ dev_warn(dev, "WARNING: additional VPU devices may fail probing.\n");
+ xlink_dev->device_name = VPU_DEVICE_NAME;
+ }
+
+ /* get platform data reference */
+ platform_set_drvdata(pdev, xlink_dev);
+
+ dev_info(dev, "Device id=%u sw_device_id=0x%x name=%s probe complete.\n",
+ xlink_dev->vpu_id, xlink_dev->sw_device_id,
+ xlink_dev->device_name);
+ return 0;
+
+r_cleanup:
+ xlink_reserved_memory_remove(xlink_dev);
+ return rc;
+}
+
+static int keembay_xlink_ipc_remove(struct platform_device *pdev)
+{
+ struct xlink_ipc_dev *xlink_dev = platform_get_drvdata(pdev);
+ struct device *dev = &pdev->dev;
+
+ /*
+ * no need to de-alloc xlink mem (local_xlink_mem and remote_xlink_mem)
+ * since it was allocated with dmam_alloc
+ */
+ xlink_reserved_memory_remove(xlink_dev);
+
+ /* release vpu ipc device */
+ vpu_ipc_device_put(xlink_dev);
+
+ dev_info(dev, "Keem Bay xlink IPC driver removed.\n");
+ return 0;
+}
+
+static const struct of_device_id keembay_xlink_ipc_of_match[] = {
+ {
+ .compatible = "intel,keembay-xlink-ipc",
+ },
+ {}
+};
+
+static struct platform_driver keembay_xlink_ipc_driver = {
+ .driver = {
+ .name = "keembay-xlink-ipc",
+ .of_match_table = keembay_xlink_ipc_of_match,
+ },
+ .probe = keembay_xlink_ipc_probe,
+ .remove = keembay_xlink_ipc_remove,
+};
+module_platform_driver(keembay_xlink_ipc_driver);
+
+MODULE_DESCRIPTION("Keem Bay xlink IPC Driver");
+MODULE_AUTHOR("Ryan Carnaghi <[email protected]>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/xlink-ipc.h b/include/linux/xlink-ipc.h
new file mode 100644
index 000000000000..f26b53bf6506
--- /dev/null
+++ b/include/linux/xlink-ipc.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay xlink IPC Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef _XLINK_IPC_H_
+#define _XLINK_IPC_H_
+
+#include <linux/types.h>
+
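+/*
+ * Operation context for xlink_ipc_read()/xlink_ipc_write(): @chan selects
+ * the ipc channel; @is_volatile marks the data buffer as small 'volatile'
+ * data to be copied through the driver's reserved memory pool rather than
+ * passed as a physical address in CPU/VPU shared memory.
+ */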
+struct xlink_ipc_context {
+ u16 chan;
+ u8 is_volatile;
+};
+
+int xlink_ipc_connect(u32 sw_device_id);
+
+int xlink_ipc_write(u32 sw_device_id, void *data, size_t * const size,
+ u32 timeout, void *context);
+
+int xlink_ipc_read(u32 sw_device_id, void *data, size_t * const size,
+ u32 timeout, void *context);
+
+int xlink_ipc_get_device_list(u32 *sw_device_id_list, u32 *num_devices);
+
+int xlink_ipc_get_device_name(u32 sw_device_id, char *device_name,
+ size_t name_size);
+
+int xlink_ipc_get_device_status(u32 sw_device_id, u32 *device_status);
+
+int xlink_ipc_boot_device(u32 sw_device_id, const char *binary_name);
+
+int xlink_ipc_reset_device(u32 sw_device_id);
+
+int xlink_ipc_open_channel(u32 sw_device_id, u32 channel);
+
+int xlink_ipc_close_channel(u32 sw_device_id, u32 channel);
+
+int xlink_ipc_register_for_events(u32 sw_device_id,
+ int (*callback)(u32 sw_device_id, u32 event));
+
+int xlink_ipc_unregister_for_events(u32 sw_device_id);
+
+#endif /* _XLINK_IPC_H_ */
--
2.17.1

2021-01-08 21:32:34

by mark gross

[permalink] [raw]
Subject: [PATCH v2 10/34] misc: xlink-pcie: lh: Add PCIe EP DMA functionality

From: Srikanth Thokala <[email protected]>

Add Synopsys PCIe DWC core embedded-DMA functionality for local host

Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Srikanth Thokala <[email protected]>
---
drivers/misc/xlink-pcie/local_host/Makefile | 1 +
drivers/misc/xlink-pcie/local_host/dma.c | 577 ++++++++++++++++++++
drivers/misc/xlink-pcie/local_host/epf.c | 15 +-
drivers/misc/xlink-pcie/local_host/epf.h | 41 ++
4 files changed, 631 insertions(+), 3 deletions(-)
create mode 100644 drivers/misc/xlink-pcie/local_host/dma.c

diff --git a/drivers/misc/xlink-pcie/local_host/Makefile b/drivers/misc/xlink-pcie/local_host/Makefile
index 514d3f0c91bc..54fc118e2dd1 100644
--- a/drivers/misc/xlink-pcie/local_host/Makefile
+++ b/drivers/misc/xlink-pcie/local_host/Makefile
@@ -1,2 +1,3 @@
obj-$(CONFIG_XLINK_PCIE_LH_DRIVER) += mxlk_ep.o
mxlk_ep-objs := epf.o
+mxlk_ep-objs += dma.o
diff --git a/drivers/misc/xlink-pcie/local_host/dma.c b/drivers/misc/xlink-pcie/local_host/dma.c
new file mode 100644
index 000000000000..811e5eebb7ab
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/dma.c
@@ -0,0 +1,577 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/wait.h>
+
+#include "epf.h"
+
+#define DMA_DBI_OFFSET (0x380000)
+
+/* PCIe DMA control 1 register definitions. */
+#define DMA_CH_CONTROL1_CB_SHIFT (0)
+#define DMA_CH_CONTROL1_TCB_SHIFT (1)
+#define DMA_CH_CONTROL1_LLP_SHIFT (2)
+#define DMA_CH_CONTROL1_LIE_SHIFT (3)
+#define DMA_CH_CONTROL1_CS_SHIFT (5)
+#define DMA_CH_CONTROL1_CCS_SHIFT (8)
+#define DMA_CH_CONTROL1_LLE_SHIFT (9)
+#define DMA_CH_CONTROL1_CB_MASK (BIT(DMA_CH_CONTROL1_CB_SHIFT))
+#define DMA_CH_CONTROL1_TCB_MASK (BIT(DMA_CH_CONTROL1_TCB_SHIFT))
+#define DMA_CH_CONTROL1_LLP_MASK (BIT(DMA_CH_CONTROL1_LLP_SHIFT))
+#define DMA_CH_CONTROL1_LIE_MASK (BIT(DMA_CH_CONTROL1_LIE_SHIFT))
+#define DMA_CH_CONTROL1_CS_MASK (0x3 << DMA_CH_CONTROL1_CS_SHIFT)
+#define DMA_CH_CONTROL1_CCS_MASK (BIT(DMA_CH_CONTROL1_CCS_SHIFT))
+#define DMA_CH_CONTROL1_LLE_MASK (BIT(DMA_CH_CONTROL1_LLE_SHIFT))
+
+/* DMA control 1 register Channel Status */
+#define DMA_CH_CONTROL1_CS_RUNNING (0x1 << DMA_CH_CONTROL1_CS_SHIFT)
+#define DMA_CH_CONTROL1_CS_HALTED (0x2 << DMA_CH_CONTROL1_CS_SHIFT)
+#define DMA_CH_CONTROL1_CS_STOPPED (0x3 << DMA_CH_CONTROL1_CS_SHIFT)
+
+/* PCIe DMA Engine enable register definitions. */
+#define DMA_ENGINE_EN_SHIFT (0)
+#define DMA_ENGINE_EN_MASK (BIT(DMA_ENGINE_EN_SHIFT))
+
+/* PCIe DMA interrupt registers definitions. */
+#define DMA_ABORT_INTERRUPT_SHIFT (16)
+#define DMA_ABORT_INTERRUPT_MASK (0xFF << DMA_ABORT_INTERRUPT_SHIFT)
+#define DMA_ABORT_INTERRUPT_CH_MASK(_c) (BIT(_c) << DMA_ABORT_INTERRUPT_SHIFT)
+#define DMA_DONE_INTERRUPT_MASK (0xFF)
+#define DMA_DONE_INTERRUPT_CH_MASK(_c) (BIT(_c))
+#define DMA_ALL_INTERRUPT_MASK \
+ (DMA_ABORT_INTERRUPT_MASK | DMA_DONE_INTERRUPT_MASK)
+
+#define DMA_LL_ERROR_SHIFT (16)
+#define DMA_CPL_ABORT_SHIFT (8)
+#define DMA_CPL_TIMEOUT_SHIFT (16)
+#define DMA_DATA_POI_SHIFT (24)
+#define DMA_AR_ERROR_CH_MASK(_c) (BIT(_c))
+#define DMA_LL_ERROR_CH_MASK(_c) (BIT(_c) << DMA_LL_ERROR_SHIFT)
+#define DMA_UNREQ_ERROR_CH_MASK(_c) (BIT(_c))
+#define DMA_CPL_ABORT_ERROR_CH_MASK(_c) (BIT(_c) << DMA_CPL_ABORT_SHIFT)
+#define DMA_CPL_TIMEOUT_ERROR_CH_MASK(_c) (BIT(_c) << DMA_CPL_TIMEOUT_SHIFT)
+#define DMA_DATA_POI_ERROR_CH_MASK(_c) (BIT(_c) << DMA_DATA_POI_SHIFT)
+
+#define DMA_LLLAIE_SHIFT (16)
+#define DMA_LLLAIE_MASK (0xF << DMA_LLLAIE_SHIFT)
+
+#define DMA_CHAN_WRITE_MAX_WEIGHT (0x7)
+#define DMA_CHAN_READ_MAX_WEIGHT (0x3)
+#define DMA_CHAN0_WEIGHT_OFFSET (0)
+#define DMA_CHAN1_WEIGHT_OFFSET (5)
+#define DMA_CHAN2_WEIGHT_OFFSET (10)
+#define DMA_CHAN3_WEIGHT_OFFSET (15)
+#define DMA_CHAN_WRITE_ALL_MAX_WEIGHT \
+ ((DMA_CHAN_WRITE_MAX_WEIGHT << DMA_CHAN0_WEIGHT_OFFSET) | \
+ (DMA_CHAN_WRITE_MAX_WEIGHT << DMA_CHAN1_WEIGHT_OFFSET) | \
+ (DMA_CHAN_WRITE_MAX_WEIGHT << DMA_CHAN2_WEIGHT_OFFSET) | \
+ (DMA_CHAN_WRITE_MAX_WEIGHT << DMA_CHAN3_WEIGHT_OFFSET))
+#define DMA_CHAN_READ_ALL_MAX_WEIGHT \
+ ((DMA_CHAN_READ_MAX_WEIGHT << DMA_CHAN0_WEIGHT_OFFSET) | \
+ (DMA_CHAN_READ_MAX_WEIGHT << DMA_CHAN1_WEIGHT_OFFSET) | \
+ (DMA_CHAN_READ_MAX_WEIGHT << DMA_CHAN2_WEIGHT_OFFSET) | \
+ (DMA_CHAN_READ_MAX_WEIGHT << DMA_CHAN3_WEIGHT_OFFSET))
+
+#define PCIE_REGS_PCIE_APP_CNTRL 0x8
+#define APP_XFER_PENDING BIT(6)
+#define PCIE_REGS_PCIE_SII_PM_STATE_1 0xb4
+#define PM_LINKST_IN_L1 BIT(10)
+
+#define DMA_POLLING_TIMEOUT 1000000
+#define DMA_ENABLE_TIMEOUT 1000
+#define DMA_PCIE_PM_L1_TIMEOUT 20
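+/* the timeouts above are loop-iteration counts for the polling loops below */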
+
+struct __packed pcie_dma_reg {
+ u32 dma_ctrl_data_arb_prior;
+ u32 reserved1;
+ u32 dma_ctrl;
+ u32 dma_write_engine_en;
+ u32 dma_write_doorbell;
+ u32 reserved2;
+ u32 dma_write_channel_arb_weight_low;
+ u32 dma_write_channel_arb_weight_high;
+ u32 reserved3[3];
+ u32 dma_read_engine_en;
+ u32 dma_read_doorbell;
+ u32 reserved4;
+ u32 dma_read_channel_arb_weight_low;
+ u32 dma_read_channel_arb_weight_high;
+ u32 reserved5[3];
+ u32 dma_write_int_status;
+ u32 reserved6;
+ u32 dma_write_int_mask;
+ u32 dma_write_int_clear;
+ u32 dma_write_err_status;
+ u32 dma_write_done_imwr_low;
+ u32 dma_write_done_imwr_high;
+ u32 dma_write_abort_imwr_low;
+ u32 dma_write_abort_imwr_high;
+ u16 dma_write_ch_imwr_data[8];
+ u32 reserved7[4];
+ u32 dma_write_linked_list_err_en;
+ u32 reserved8[3];
+ u32 dma_read_int_status;
+ u32 reserved9;
+ u32 dma_read_int_mask;
+ u32 dma_read_int_clear;
+ u32 reserved10;
+ u32 dma_read_err_status_low;
+ u32 dma_rd_err_sts_h;
+ u32 reserved11[2];
+ u32 dma_read_linked_list_err_en;
+ u32 reserved12;
+ u32 dma_read_done_imwr_low;
+ u32 dma_read_done_imwr_high;
+ u32 dma_read_abort_imwr_low;
+ u32 dma_read_abort_imwr_high;
+ u16 dma_read_ch_imwr_data[8];
+};
+
+struct __packed pcie_dma_chan {
+ u32 dma_ch_control1;
+ u32 reserved1;
+ u32 dma_transfer_size;
+ u32 dma_sar_low;
+ u32 dma_sar_high;
+ u32 dma_dar_low;
+ u32 dma_dar_high;
+ u32 dma_llp_low;
+ u32 dma_llp_high;
+};
+
+enum xpcie_ep_engine_type {
+ WRITE_ENGINE,
+ READ_ENGINE
+};
+
+static u32 dma_chan_offset[2][DMA_CHAN_NUM] = {
+ { 0x200, 0x400, 0x600, 0x800 },
+ { 0x300, 0x500, 0x700, 0x900 }
+};
+
+static void __iomem *intel_xpcie_ep_get_dma_base(struct pci_epf *epf)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+
+ return xpcie_epf->dbi_base + DMA_DBI_OFFSET;
+}
+
+static int intel_xpcie_ep_dma_disable(void __iomem *dma_base,
+ enum xpcie_ep_engine_type rw)
+{
+ struct pcie_dma_reg __iomem *dma_reg =
+ (struct pcie_dma_reg __iomem *)dma_base;
+ void __iomem *int_mask, *int_clear;
+ void __iomem *engine_en, *ll_err;
+ int i;
+
+ if (rw == WRITE_ENGINE) {
+ engine_en = (void __iomem *)&dma_reg->dma_write_engine_en;
+ int_mask = (void __iomem *)&dma_reg->dma_write_int_mask;
+ int_clear = (void __iomem *)&dma_reg->dma_write_int_clear;
+ ll_err = (void __iomem *)&dma_reg->dma_write_linked_list_err_en;
+ } else {
+ engine_en = (void __iomem *)&dma_reg->dma_read_engine_en;
+ int_mask = (void __iomem *)&dma_reg->dma_read_int_mask;
+ int_clear = (void __iomem *)&dma_reg->dma_read_int_clear;
+ ll_err = (void __iomem *)&dma_reg->dma_read_linked_list_err_en;
+ }
+
+ iowrite32(0x0, engine_en);
+
+ /* Mask all interrupts. */
+ iowrite32(DMA_ALL_INTERRUPT_MASK, int_mask);
+
+ /* Clear all interrupts. */
+ iowrite32(DMA_ALL_INTERRUPT_MASK, int_clear);
+
+ /* Disable LL abort interrupt (LLLAIE). */
+ iowrite32(0, ll_err);
+
+ /* Wait until the engine is disabled. */
+ for (i = 0; i < DMA_ENABLE_TIMEOUT; i++) {
+ if (!(ioread32(engine_en) & DMA_ENGINE_EN_MASK))
+ return 0;
+ msleep(20);
+ }
+
+ return -EBUSY;
+}
+
+static void intel_xpcie_ep_dma_enable(void __iomem *dma_base,
+ enum xpcie_ep_engine_type rw)
+{
+ struct pcie_dma_reg __iomem *dma_reg =
+ (struct pcie_dma_reg __iomem *)dma_base;
+ void __iomem *engine_en, *ll_err, *arb_weight;
+ struct pcie_dma_chan __iomem *dma_chan;
+ void __iomem *int_mask, *int_clear;
+ u32 offset, weight;
+ int i;
+
+ if (rw == WRITE_ENGINE) {
+ engine_en = (void __iomem *)&dma_reg->dma_write_engine_en;
+ int_mask = (void __iomem *)&dma_reg->dma_write_int_mask;
+ int_clear = (void __iomem *)&dma_reg->dma_write_int_clear;
+ ll_err = (void __iomem *)&dma_reg->dma_write_linked_list_err_en;
+ arb_weight = (void __iomem *)
+ &dma_reg->dma_write_channel_arb_weight_low;
+ weight = DMA_CHAN_WRITE_ALL_MAX_WEIGHT;
+ } else {
+ engine_en = (void __iomem *)&dma_reg->dma_read_engine_en;
+ int_mask = (void __iomem *)&dma_reg->dma_read_int_mask;
+ int_clear = (void __iomem *)&dma_reg->dma_read_int_clear;
+ ll_err = (void __iomem *)&dma_reg->dma_read_linked_list_err_en;
+ arb_weight = (void __iomem *)
+ &dma_reg->dma_read_channel_arb_weight_low;
+ weight = DMA_CHAN_READ_ALL_MAX_WEIGHT;
+ }
+
+ iowrite32(DMA_ENGINE_EN_MASK, engine_en);
+
+ /* Unmask all interrupts, so that the interrupt line gets asserted. */
+ iowrite32(~(u32)DMA_ALL_INTERRUPT_MASK, int_mask);
+
+ /* Clear all interrupts. */
+ iowrite32(DMA_ALL_INTERRUPT_MASK, int_clear);
+
+ /* Set channel round robin weight. */
+ iowrite32(weight, arb_weight);
+
+ /* Enable LL abort interrupt (LLLAIE). */
+ iowrite32(DMA_LLLAIE_MASK, ll_err);
+
+ /* Enable linked list mode. */
+ for (i = 0; i < DMA_CHAN_NUM; i++) {
+ offset = dma_chan_offset[rw][i];
+ dma_chan = (struct pcie_dma_chan __iomem *)(dma_base + offset);
+ iowrite32(DMA_CH_CONTROL1_LLE_MASK,
+ (void __iomem *)&dma_chan->dma_ch_control1);
+ }
+}
+
+/*
+ * Make sure the EP is not in the L1 state when ringing the DMA doorbell.
+ * The DMA controller may start the wrong channel if the doorbell rings at
+ * the same time as the controller is transitioning to L1.
+ */
+static int intel_xpcie_ep_dma_doorbell(struct xpcie_epf *xpcie_epf, int chan,
+ void __iomem *doorbell)
+{
+ int i = DMA_PCIE_PM_L1_TIMEOUT, rc = 0;
+ u32 val, pm_val;
+
+ val = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_APP_CNTRL);
+ iowrite32(val | APP_XFER_PENDING,
+ xpcie_epf->apb_base + PCIE_REGS_PCIE_APP_CNTRL);
+ pm_val = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_SII_PM_STATE_1);
+ while (pm_val & PM_LINKST_IN_L1) {
+ if (i-- < 0) {
+ rc = -ETIME;
+ break;
+ }
+ udelay(5);
+ pm_val = ioread32(xpcie_epf->apb_base +
+ PCIE_REGS_PCIE_SII_PM_STATE_1);
+ }
+
+ iowrite32((u32)chan, doorbell);
+
+ iowrite32(val & ~APP_XFER_PENDING,
+ xpcie_epf->apb_base + PCIE_REGS_PCIE_APP_CNTRL);
+
+ return rc;
+}
+
+static int intel_xpcie_ep_dma_err_status(void __iomem *err_status, int chan)
+{
+ if (ioread32(err_status) &
+ (DMA_AR_ERROR_CH_MASK(chan) | DMA_LL_ERROR_CH_MASK(chan)))
+ return -EIO;
+
+ return 0;
+}
+
+static int intel_xpcie_ep_dma_rd_err_sts_h(void __iomem *err_status,
+ int chan)
+{
+ if (ioread32(err_status) &
+ (DMA_UNREQ_ERROR_CH_MASK(chan) |
+ DMA_CPL_ABORT_ERROR_CH_MASK(chan) |
+ DMA_CPL_TIMEOUT_ERROR_CH_MASK(chan) |
+ DMA_DATA_POI_ERROR_CH_MASK(chan)))
+ return -EIO;
+
+ return 0;
+}
+
+static void
+intel_xpcie_ep_dma_setup_ll_descs(struct pcie_dma_chan __iomem *dma_chan,
+ struct xpcie_dma_ll_desc_buf *desc_buf,
+ int descs_num)
+{
+ struct xpcie_dma_ll_desc *descs = desc_buf->virt;
+ int i;
+
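+ /*
+ * Elements 0..descs_num-1 carry data (the last one also raises the done
+ * interrupt via LIE); element descs_num is a link element (LLP) pointing
+ * back to the start of the descriptor buffer.
+ */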
+ /* Setup linked list descriptors */
+ for (i = 0; i < descs_num - 1; i++)
+ descs[i].dma_ch_control1 = DMA_CH_CONTROL1_CB_MASK;
+ descs[descs_num - 1].dma_ch_control1 = DMA_CH_CONTROL1_LIE_MASK |
+ DMA_CH_CONTROL1_CB_MASK;
+ descs[descs_num].dma_ch_control1 = DMA_CH_CONTROL1_LLP_MASK |
+ DMA_CH_CONTROL1_TCB_MASK;
+ descs[descs_num].src_addr = (phys_addr_t)desc_buf->phys;
+
+ /* Setup linked list settings */
+ iowrite32(DMA_CH_CONTROL1_LLE_MASK | DMA_CH_CONTROL1_CCS_MASK,
+ (void __iomem *)&dma_chan->dma_ch_control1);
+ iowrite32((u32)desc_buf->phys, (void __iomem *)&dma_chan->dma_llp_low);
+ iowrite32((u64)desc_buf->phys >> 32,
+ (void __iomem *)&dma_chan->dma_llp_high);
+}
+
+int intel_xpcie_ep_dma_write_ll(struct pci_epf *epf, int chan, int descs_num)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+ void __iomem *dma_base = xpcie_epf->dma_base;
+ struct pcie_dma_chan __iomem *dma_chan;
+ struct xpcie_dma_ll_desc_buf *desc_buf;
+ struct pcie_dma_reg __iomem *dma_reg =
+ (struct pcie_dma_reg __iomem *)dma_base;
+ int i, rc;
+
+ if (descs_num <= 0 || descs_num > XPCIE_NUM_TX_DESCS)
+ return -EINVAL;
+
+ if (chan < 0 || chan >= DMA_CHAN_NUM)
+ return -EINVAL;
+
+ dma_chan = (struct pcie_dma_chan __iomem *)
+ (dma_base + dma_chan_offset[WRITE_ENGINE][chan]);
+
+ desc_buf = &xpcie_epf->tx_desc_buf[chan];
+
+ intel_xpcie_ep_dma_setup_ll_descs(dma_chan, desc_buf, descs_num);
+
+ /* Start DMA transfer. */
+ rc = intel_xpcie_ep_dma_doorbell(xpcie_epf, chan,
+ (void __iomem *)
+ &dma_reg->dma_write_doorbell);
+ if (rc)
+ return rc;
+
+ /* Wait for DMA transfer to complete. */
+ for (i = 0; i < DMA_POLLING_TIMEOUT; i++) {
+ usleep_range(5, 10);
+ if (ioread32((void __iomem *)&dma_reg->dma_write_int_status) &
+ (DMA_DONE_INTERRUPT_CH_MASK(chan) |
+ DMA_ABORT_INTERRUPT_CH_MASK(chan)))
+ break;
+ }
+ if (i == DMA_POLLING_TIMEOUT) {
+ dev_err(&xpcie_epf->epf->dev, "DMA Wr timeout\n");
+ rc = -ETIME;
+ goto cleanup;
+ }
+
+ rc = intel_xpcie_ep_dma_err_status((void __iomem *)
+ &dma_reg->dma_write_err_status,
+ chan);
+
+cleanup:
+ /* Clear the done/abort interrupt. */
+ iowrite32((DMA_DONE_INTERRUPT_CH_MASK(chan) |
+ DMA_ABORT_INTERRUPT_CH_MASK(chan)),
+ (void __iomem *)&dma_reg->dma_write_int_clear);
+
+ if (rc) {
+ if (intel_xpcie_ep_dma_disable(dma_base, WRITE_ENGINE)) {
+ dev_err(&xpcie_epf->epf->dev,
+ "failed to disable WR DMA\n");
+ return rc;
+ }
+ intel_xpcie_ep_dma_enable(dma_base, WRITE_ENGINE);
+ }
+
+ return rc;
+}
+
+int intel_xpcie_ep_dma_read_ll(struct pci_epf *epf, int chan, int descs_num)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+ void __iomem *dma_base = xpcie_epf->dma_base;
+ struct xpcie_dma_ll_desc_buf *desc_buf;
+ struct pcie_dma_reg __iomem *dma_reg =
+ (struct pcie_dma_reg __iomem *)dma_base;
+ struct pcie_dma_chan __iomem *dma_chan;
+ int i, rc;
+
+ if (descs_num <= 0 || descs_num > XPCIE_NUM_RX_DESCS)
+ return -EINVAL;
+
+ if (chan < 0 || chan >= DMA_CHAN_NUM)
+ return -EINVAL;
+
+ dma_chan = (struct pcie_dma_chan __iomem *)
+ (dma_base + dma_chan_offset[READ_ENGINE][chan]);
+
+ desc_buf = &xpcie_epf->rx_desc_buf[chan];
+
+ intel_xpcie_ep_dma_setup_ll_descs(dma_chan, desc_buf, descs_num);
+
+ /* Start DMA transfer. */
+ rc = intel_xpcie_ep_dma_doorbell(xpcie_epf, chan,
+ (void __iomem *)
+ &dma_reg->dma_read_doorbell);
+ if (rc)
+ return rc;
+
+ /* Wait for DMA transfer to complete. */
+ for (i = 0; i < DMA_POLLING_TIMEOUT; i++) {
+ usleep_range(5, 10);
+ if (ioread32((void __iomem *)&dma_reg->dma_read_int_status) &
+ (DMA_DONE_INTERRUPT_CH_MASK(chan) |
+ DMA_ABORT_INTERRUPT_CH_MASK(chan)))
+ break;
+ }
+ if (i == DMA_POLLING_TIMEOUT) {
+ dev_err(&xpcie_epf->epf->dev, "DMA Rd timeout\n");
+ rc = -ETIME;
+ goto cleanup;
+ }
+
+ rc = intel_xpcie_ep_dma_err_status((void __iomem *)
+ &dma_reg->dma_read_err_status_low,
+ chan);
+ if (!rc)
+ rc = intel_xpcie_ep_dma_rd_err_sts_h((void __iomem *)
+ &dma_reg->dma_rd_err_sts_h,
+ chan);
+
+cleanup:
+ /* Clear the done/abort interrupt. */
+ iowrite32((DMA_DONE_INTERRUPT_CH_MASK(chan) |
+ DMA_ABORT_INTERRUPT_CH_MASK(chan)),
+ (void __iomem *)&dma_reg->dma_read_int_clear);
+
+ if (rc) {
+ if (intel_xpcie_ep_dma_disable(dma_base, READ_ENGINE)) {
+ dev_err(&xpcie_epf->epf->dev,
+ "failed to disable RD DMA\n");
+ return rc;
+ }
+ intel_xpcie_ep_dma_enable(dma_base, READ_ENGINE);
+ }
+
+ return rc;
+}
+
+static void intel_xpcie_ep_dma_free_ll_descs_mem(struct xpcie_epf *xpcie_epf)
+{
+ struct device *dma_dev = xpcie_epf->epf->epc->dev.parent;
+ int i;
+
+ for (i = 0; i < DMA_CHAN_NUM; i++) {
+ if (xpcie_epf->tx_desc_buf[i].virt) {
+ dma_free_coherent(dma_dev,
+ xpcie_epf->tx_desc_buf[i].size,
+ xpcie_epf->tx_desc_buf[i].virt,
+ xpcie_epf->tx_desc_buf[i].phys);
+ }
+ if (xpcie_epf->rx_desc_buf[i].virt) {
+ dma_free_coherent(dma_dev,
+ xpcie_epf->rx_desc_buf[i].size,
+ xpcie_epf->rx_desc_buf[i].virt,
+ xpcie_epf->rx_desc_buf[i].phys);
+ }
+
+ memset(&xpcie_epf->tx_desc_buf[i], 0,
+ sizeof(struct xpcie_dma_ll_desc_buf));
+ memset(&xpcie_epf->rx_desc_buf[i], 0,
+ sizeof(struct xpcie_dma_ll_desc_buf));
+ }
+}
+
+static int intel_xpcie_ep_dma_alloc_ll_descs_mem(struct xpcie_epf *xpcie_epf)
+{
+ struct device *dma_dev = xpcie_epf->epf->epc->dev.parent;
+ int tx_num = XPCIE_NUM_TX_DESCS + 1;
+ int rx_num = XPCIE_NUM_RX_DESCS + 1;
+ size_t tx_size, rx_size;
+ int i;
+
+ tx_size = tx_num * sizeof(struct xpcie_dma_ll_desc);
+ rx_size = rx_num * sizeof(struct xpcie_dma_ll_desc);
+
+ for (i = 0; i < DMA_CHAN_NUM; i++) {
+ xpcie_epf->tx_desc_buf[i].virt =
+ dma_alloc_coherent(dma_dev, tx_size,
+ &xpcie_epf->tx_desc_buf[i].phys,
+ GFP_KERNEL);
+ xpcie_epf->rx_desc_buf[i].virt =
+ dma_alloc_coherent(dma_dev, rx_size,
+ &xpcie_epf->rx_desc_buf[i].phys,
+ GFP_KERNEL);
+
+ if (!xpcie_epf->tx_desc_buf[i].virt ||
+ !xpcie_epf->rx_desc_buf[i].virt) {
+ intel_xpcie_ep_dma_free_ll_descs_mem(xpcie_epf);
+ return -ENOMEM;
+ }
+
+ xpcie_epf->tx_desc_buf[i].size = tx_size;
+ xpcie_epf->rx_desc_buf[i].size = rx_size;
+ }
+ return 0;
+}
+
+int intel_xpcie_ep_dma_reset(struct pci_epf *epf)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+
+ /* Disable the DMA read/write engine. */
+ if (intel_xpcie_ep_dma_disable(xpcie_epf->dma_base, WRITE_ENGINE) ||
+ intel_xpcie_ep_dma_disable(xpcie_epf->dma_base, READ_ENGINE))
+ return -EBUSY;
+
+ intel_xpcie_ep_dma_enable(xpcie_epf->dma_base, WRITE_ENGINE);
+ intel_xpcie_ep_dma_enable(xpcie_epf->dma_base, READ_ENGINE);
+
+ return 0;
+}
+
+int intel_xpcie_ep_dma_uninit(struct pci_epf *epf)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+
+ if (intel_xpcie_ep_dma_disable(xpcie_epf->dma_base, WRITE_ENGINE) ||
+ intel_xpcie_ep_dma_disable(xpcie_epf->dma_base, READ_ENGINE))
+ return -EBUSY;
+
+ intel_xpcie_ep_dma_free_ll_descs_mem(xpcie_epf);
+
+ return 0;
+}
+
+int intel_xpcie_ep_dma_init(struct pci_epf *epf)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+ int rc;
+
+ xpcie_epf->dma_base = intel_xpcie_ep_get_dma_base(epf);
+
+ rc = intel_xpcie_ep_dma_alloc_ll_descs_mem(xpcie_epf);
+ if (rc)
+ return rc;
+
+ return intel_xpcie_ep_dma_reset(epf);
+}
diff --git a/drivers/misc/xlink-pcie/local_host/epf.c b/drivers/misc/xlink-pcie/local_host/epf.c
index 9e6d407aa6b3..dd8ffcabf5f9 100644
--- a/drivers/misc/xlink-pcie/local_host/epf.c
+++ b/drivers/misc/xlink-pcie/local_host/epf.c
@@ -45,6 +45,8 @@ static irqreturn_t intel_xpcie_err_interrupt(int irq, void *args)

xpcie_epf = container_of(xpcie, struct xpcie_epf, xpcie);
val = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_ERR_INTR_FLAGS);
+ if (val & LINK_REQ_RST_FLG)
+ intel_xpcie_ep_dma_reset(xpcie_epf->epf);

iowrite32(val, xpcie_epf->apb_base + PCIE_REGS_PCIE_ERR_INTR_FLAGS);

@@ -325,8 +327,17 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)
goto err_cleanup_bars;
}

+ ret = intel_xpcie_ep_dma_init(epf);
+ if (ret) {
+ dev_err(&epf->dev, "DMA initialization failed\n");
+ goto err_free_err_irq;
+ }
+
return 0;

+err_free_err_irq:
+ free_irq(xpcie_epf->irq_err, &xpcie_epf->xpcie);
+
err_cleanup_bars:
intel_xpcie_cleanup_bars(epf);

@@ -335,11 +346,9 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)

static void intel_xpcie_epf_unbind(struct pci_epf *epf)
{
- struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
struct pci_epc *epc = epf->epc;

- free_irq(xpcie_epf->irq, &xpcie_epf->xpcie);
- free_irq(xpcie_epf->irq_err, &xpcie_epf->xpcie);
+ intel_xpcie_ep_dma_uninit(epf);

pci_epc_stop(epc);

diff --git a/drivers/misc/xlink-pcie/local_host/epf.h b/drivers/misc/xlink-pcie/local_host/epf.h
index 2b38c87b3701..6ce5260e67be 100644
--- a/drivers/misc/xlink-pcie/local_host/epf.h
+++ b/drivers/misc/xlink-pcie/local_host/epf.h
@@ -20,6 +20,38 @@

#define KEEMBAY_XPCIE_STEPPING_MAXLEN 8

+#define DMA_CHAN_NUM (4)
+
+#define XPCIE_NUM_TX_DESCS (64)
+#define XPCIE_NUM_RX_DESCS (64)
+
+extern bool dma_ll_mode;
+
+struct xpcie_dma_ll_desc {
+ u32 dma_ch_control1;
+ u32 dma_transfer_size;
+ union {
+ struct {
+ u32 dma_sar_low;
+ u32 dma_sar_high;
+ };
+ phys_addr_t src_addr;
+ };
+ union {
+ struct {
+ u32 dma_dar_low;
+ u32 dma_dar_high;
+ };
+ phys_addr_t dst_addr;
+ };
+} __packed;
+
+struct xpcie_dma_ll_desc_buf {
+ struct xpcie_dma_ll_desc *virt;
+ dma_addr_t phys;
+ size_t size;
+};
+
struct xpcie_epf {
struct pci_epf *epf;
void *vaddr[BAR_5 + 1];
@@ -34,6 +66,15 @@ struct xpcie_epf {
void __iomem *dma_base;
void __iomem *dbi_base;
char stepping[KEEMBAY_XPCIE_STEPPING_MAXLEN];
+
+ struct xpcie_dma_ll_desc_buf tx_desc_buf[DMA_CHAN_NUM];
+ struct xpcie_dma_ll_desc_buf rx_desc_buf[DMA_CHAN_NUM];
};

+int intel_xpcie_ep_dma_init(struct pci_epf *epf);
+int intel_xpcie_ep_dma_uninit(struct pci_epf *epf);
+int intel_xpcie_ep_dma_reset(struct pci_epf *epf);
+int intel_xpcie_ep_dma_read_ll(struct pci_epf *epf, int chan, int descs_num);
+int intel_xpcie_ep_dma_write_ll(struct pci_epf *epf, int chan, int descs_num);
+
#endif /* XPCIE_EPF_HEADER_ */
--
2.17.1

2021-01-08 21:32:37

by mark gross

[permalink] [raw]
Subject: [PATCH v2 28/34] misc: Intel tsens IA host driver.

From: "C, Udhayakumar" <[email protected]>

Add Intel tsens IA host driver for Intel Edge.AI Computer Vision
platforms.

About Intel Edge.AI Computer Vision platforms:
---------------------------------------------
The Intel Edge.AI Computer Vision platforms are vision processing systems
targeting machine vision applications for connected devices.

They are based on an ARM A53 CPU running Linux and act as a PCIe
endpoint device.

High-level architecture:
------------------------

Remote Host IA CPU Local Host ARM CPU
---------------- --------------------------
| Platform | | Thermal Daemon |
| Management SW| | |
---------------- --------------------------
| Intel tsens | | intel tsens i2c slave |
| i2c client | | and thermal driver |
---------------- --------------------------
| XLINK I2C | | XLINK I2C Slave |
| controller | <=========> | controller |
---------------- xlink smbus --------------------------

intel tsens module:
-------------------
The tsens module enables reading of the on-chip sensors present
in the Intel Edge.AI Computer Vision platforms. In the tsens module,
various junction and SoC temperatures are reported using the thermal
subsystem and the i2c subsystem.

Temperature data reported through the thermal subsystem is used by
cooling agents such as DVFS and fan control, and to shut down the
system if a critical temperature is reached.

Temperature data reported through the i2c subsystem is used by the
platform manageability software running on the IA host.

- Remote Host driver
* Intended for IA CPU
* It is an I2C client driver
* Driver path:
{tree}/drivers/misc/intel_tsens/intel_tsens_host.c

The local host and remote host drivers communicate using the
I2C SMBus protocol.

Acked-by: Mark Gross <[email protected]>
Signed-off-by: C, Udhayakumar <[email protected]>
---
Documentation/hwmon/index.rst | 1 +
Documentation/hwmon/intel_tsens_host.rst | 71 ++++
drivers/misc/intel_tsens/Kconfig | 13 +
drivers/misc/intel_tsens/Makefile | 1 +
drivers/misc/intel_tsens/intel_tsens_host.c | 351 ++++++++++++++++++++
include/linux/intel_tsens_host.h | 34 ++
6 files changed, 471 insertions(+)
create mode 100644 Documentation/hwmon/intel_tsens_host.rst
create mode 100644 drivers/misc/intel_tsens/intel_tsens_host.c
create mode 100644 include/linux/intel_tsens_host.h

diff --git a/Documentation/hwmon/index.rst b/Documentation/hwmon/index.rst
index fc29100bef73..7a9eaddd1ab3 100644
--- a/Documentation/hwmon/index.rst
+++ b/Documentation/hwmon/index.rst
@@ -81,6 +81,7 @@ Hardware Monitoring Kernel Drivers
isl68137
it87
intel_tsens_sensor.rst
+ intel_tsens_host.rst
jc42
k10temp
k8temp
diff --git a/Documentation/hwmon/intel_tsens_host.rst b/Documentation/hwmon/intel_tsens_host.rst
new file mode 100644
index 000000000000..012c593f969f
--- /dev/null
+++ b/Documentation/hwmon/intel_tsens_host.rst
@@ -0,0 +1,71 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==========================
+Kernel driver: intel_tsens
+==========================
+
+Supported chips:
+ * Intel Edge.AI Computer Vision platforms: Keem Bay
+
+ Slave address: The address is assigned by the hddl device management
+ driver.
+
+ Datasheet:
+ Documentation/hwmon/intel_tsens_sensor.rst#Remote Thermal Interface
+
+Authors:
+ - Thalaiappan, Rathina <[email protected]>
+ - Udhayakumar C <[email protected]>
+
+Description
+===========
+The intel_tsens driver reports the junction temperature from different
+heating points inside the SoC. The driver receives the temperature over an
+SMBus connection. The reported temperature is in millidegrees Celsius.
+
+In Keem Bay, the four thermal junction temperature points are
+Media Subsystem (mss), NN Subsystem (nce), Compute Subsystem (cse) and
+SoC (the maximum of mss, nce and cse).
+
+Example
+=======
+Temperature reported by a Keem Bay on the Linux Thermal sysfs interface.
+
+# cat /sys/class/thermal/thermal_zone*/type
+mss
+css
+nce
+soc
+
+# cat /sys/class/thermal/thermal_zone*/temp
+0
+29210
+28478
+29210
+
++-----------+-------------+
+| offset | Sensor |
++-----------+-------------+
+| 0 | mss |
++-----------+-------------+
+| 1 | css |
++-----------+-------------+
+| 2 | nce |
++-----------+-------------+
+| 3 | soc |
++-----------+-------------+
+
+#sudo i2cdetect -l
+i2c-8 smbus SMBus I801 adapter at efa0 SMBus adapter
+
+To read mss junction temperature:
+#i2cget -y 8 <slave addr> 0x0 w
+
+To read cse junction temperature:
+#i2cget -y 8 <slave addr> 0x1 w
+
+To read nce junction temperature:
+#i2cget -y 8 <slave addr> 0x2 w
+
+To read overall SoC temperature:
+#i2cget -y 8 <slave addr> 0x3 w
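+
+As a sketch of the command encoding used by the host driver (assembled the
+same way intel_tsens_get_temp() below does it): the low bits of the SMBus
+command select the sensor offset from the table above and bits [7:6] select
+one byte of the 32-bit millidegree value, so a reading can be built from
+four byte reads::
+
+    s32 temp = 0;
+    int i;
+
+    /* offset: 0 = mss, 1 = css, 2 = nce, 3 = soc; read_byte() is illustrative */
+    for (i = 0; i < 4; i++)
+        temp |= read_byte(offset | (i << 6)) << (8 * i);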
diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
index bfb8fe1997f4..8b263fdd80c3 100644
--- a/drivers/misc/intel_tsens/Kconfig
+++ b/drivers/misc/intel_tsens/Kconfig
@@ -13,3 +13,16 @@ config INTEL_TSENS_LOCAL_HOST
management controller.
Say Y if using a processor that includes the Intel VPU such as
Keem Bay. If unsure, say N.
+
+config INTEL_TSENS_IA_HOST
+ tristate "Temperature sensor driver for intel tsens remote host"
+ depends on I2C && THERMAL
+ depends on I2C_SMBUS
+ help
+ This option enables the tsens i2c and thermal remote host driver.
+
+ This driver reads thermal data via I2C SMBus and registers itself
+ with the thermal framework, so the data can be used by the thermal
+ daemon on the remote IA host.
+ Say Y if using a processor that includes the Intel VPU such as
+ Keem Bay. If unsure, say N.
diff --git a/drivers/misc/intel_tsens/Makefile b/drivers/misc/intel_tsens/Makefile
index 93dee8b9f481..250dc484fb49 100644
--- a/drivers/misc/intel_tsens/Makefile
+++ b/drivers/misc/intel_tsens/Makefile
@@ -5,3 +5,4 @@
#

obj-$(CONFIG_INTEL_TSENS_LOCAL_HOST) += intel_tsens_thermal.o
+obj-$(CONFIG_INTEL_TSENS_IA_HOST) += intel_tsens_host.o
diff --git a/drivers/misc/intel_tsens/intel_tsens_host.c b/drivers/misc/intel_tsens/intel_tsens_host.c
new file mode 100644
index 000000000000..adb553f3f2e3
--- /dev/null
+++ b/drivers/misc/intel_tsens/intel_tsens_host.c
@@ -0,0 +1,351 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * Intel tsens I2C thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <asm/page.h>
+#include <linux/cdev.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/hddl_device.h>
+#include <linux/i2c.h>
+#include <linux/ioctl.h>
+#include <linux/intel_tsens_host.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/printk.h>
+#include <linux/platform_device.h>
+#include <linux/sched.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
+#include <linux/slab.h>
+#include <linux/thermal.h>
+#include <linux/time.h>
+#include <linux/uaccess.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+#include <uapi/linux/stat.h>
+
+#define TSENS_BINDING_NAME "intel_tsens"
+#define TSENS_BYTE_INDEX_SHIFT 0x6
+#define TSENS_READ_BYTE0 (0x0 << TSENS_BYTE_INDEX_SHIFT)
+#define TSENS_READ_BYTE1 (0x1 << TSENS_BYTE_INDEX_SHIFT)
+#define TSENS_READ_BYTE2 (0x2 << TSENS_BYTE_INDEX_SHIFT)
+#define TSENS_READ_BYTE3 (0x3 << TSENS_BYTE_INDEX_SHIFT)
+
+static int tsens_i2c_smbus_read_byte_data(struct i2c_client *i2c, u8 command,
+ u8 *i2c_val)
+{
+ union i2c_smbus_data data;
+ int status;
+
+ status = i2c_smbus_xfer(i2c->adapter, i2c->addr, i2c->flags,
+ I2C_SMBUS_READ, command,
+ I2C_SMBUS_BYTE_DATA, &data);
+ *i2c_val = data.byte;
+ return status;
+}
+
+/**
+ * intel_tsens_get_temp - get the updated temperature
+ * @zone: Thermal zone device
+ * @temp: Updated temperature value.
+ *
+ * Temperature values read from the sensors range from -40000 (-40 degrees
+ * Celsius) to 126000 (126 degrees Celsius). If reading fails, *temp is set
+ * to -255 to indicate the failure.
+ */
+static int intel_tsens_get_temp(struct thermal_zone_device *zone,
+ int *temp)
+{
+ struct intel_tsens_host *tsens =
+ (struct intel_tsens_host *)zone->devdata;
+ struct i2c_client *i2c_c;
+ int status, sensor_type;
+ u8 i2c_val;
+ s32 val;
+
+ if (strstr(zone->type, "smb"))
+ i2c_c = tsens->i2c_smbus;
+ else
+ i2c_c = tsens->i2c_xlk;
+
+ *temp = -255;
+ sensor_type = tsens->t_data->sensor_type | TSENS_READ_BYTE0;
+ status = tsens_i2c_smbus_read_byte_data(i2c_c,
+ sensor_type,
+ &i2c_val);
+ if (status < 0)
+ return status;
+ val = i2c_val;
+ sensor_type = tsens->t_data->sensor_type | TSENS_READ_BYTE1;
+ status = tsens_i2c_smbus_read_byte_data(i2c_c,
+ sensor_type,
+ &i2c_val);
+ if (status < 0)
+ return status;
+ val |= (i2c_val << 8);
+ sensor_type = tsens->t_data->sensor_type | TSENS_READ_BYTE2;
+ status = tsens_i2c_smbus_read_byte_data(i2c_c,
+ sensor_type,
+ &i2c_val);
+ if (status < 0)
+ return status;
+ val |= (i2c_val << 16);
+ sensor_type = tsens->t_data->sensor_type | TSENS_READ_BYTE3;
+ status = tsens_i2c_smbus_read_byte_data(i2c_c,
+ sensor_type,
+ &i2c_val);
+ if (status < 0)
+ return status;
+ val |= (i2c_val << 24);
+ *temp = val;
+ return 0;
+}
+
+static int intel_tsens_thermal_get_trip_type(struct thermal_zone_device *zone,
+ int trip,
+ enum thermal_trip_type *type)
+{
+ struct intel_tsens_host *tsens =
+ (struct intel_tsens_host *)zone->devdata;
+
+ *type = tsens->trip_info[trip]->trip_type;
+ return 0;
+}
+
+static int intel_tsens_thermal_get_trip_temp(struct thermal_zone_device *zone,
+ int trip, int *temp)
+{
+ struct intel_tsens_host *tsens =
+ (struct intel_tsens_host *)zone->devdata;
+
+ *temp = tsens->trip_info[trip]->temp;
+ return 0;
+}
+
+static int intel_tsens_thermal_notify(struct thermal_zone_device *tz,
+ int trip, enum thermal_trip_type type)
+{
+ int ret = 0;
+
+ switch (type) {
+ case THERMAL_TRIP_ACTIVE:
+ dev_warn(&tz->device,
+			 "zone %s reached active temperature %d\n",
+ tz->type, tz->temperature);
+ ret = 1;
+ break;
+ case THERMAL_TRIP_CRITICAL:
+ dev_warn(&tz->device,
+			 "zone %s reached critical temperature %d\n",
+ tz->type, tz->temperature);
+ ret = 1;
+ break;
+ default:
+ break;
+ }
+ return ret;
+}
+
+static int intel_tsens_bind(struct thermal_zone_device *tz,
+ struct thermal_cooling_device *cdev)
+{
+ int ret;
+
+ /*
+	 * Check that the thermal zone type and the cooling device type
+	 * match, then bind the cooling device to the zone.
+ */
+ if (strncmp(TSENS_BINDING_NAME, cdev->type,
+ strlen(TSENS_BINDING_NAME)) == 0) {
+ ret = thermal_zone_bind_cooling_device
+ (tz,
+ THERMAL_TRIP_ACTIVE,
+ cdev,
+ THERMAL_NO_LIMIT,
+ THERMAL_NO_LIMIT,
+ THERMAL_WEIGHT_DEFAULT);
+ if (ret) {
+ dev_err(&tz->device,
+ "binding zone %s with cdev %s failed:%d\n",
+ tz->type, cdev->type, ret);
+ return ret;
+ }
+ }
+ return 0;
+}
+
+static int intel_tsens_unbind(struct thermal_zone_device *tz,
+ struct thermal_cooling_device *cdev)
+{
+ int ret;
+
+ if (strncmp(TSENS_BINDING_NAME, cdev->type,
+ strlen(TSENS_BINDING_NAME)) == 0) {
+ ret = thermal_zone_unbind_cooling_device(tz, 0, cdev);
+ if (ret) {
+ dev_err(&tz->device,
+ "unbinding zone %s with cdev %s failed:%d\n",
+ tz->type, cdev->type, ret);
+ return ret;
+ }
+ }
+ return 0;
+}
+
+static struct thermal_zone_device_ops tsens_thermal_ops = {
+ .bind = intel_tsens_bind,
+ .unbind = intel_tsens_unbind,
+ .get_temp = intel_tsens_get_temp,
+ .get_trip_type = intel_tsens_thermal_get_trip_type,
+ .get_trip_temp = intel_tsens_thermal_get_trip_temp,
+ .notify = intel_tsens_thermal_notify,
+};
+
+static int intel_tsens_add_tz(struct intel_tsens_host *tsens,
+ struct thermal_zone_device **tz,
+ const char *name,
+ struct device *dev,
+ int i)
+{
+ int ret;
+
+ *tz = thermal_zone_device_register(name,
+ tsens->t_data->n_trips,
+ 0, tsens,
+ &tsens_thermal_ops,
+ NULL,
+ tsens->t_data->passive_delay,
+ tsens->t_data->polling_delay);
+ if (IS_ERR(*tz)) {
+ ret = PTR_ERR(*tz);
+ dev_err(dev,
+ "failed to register thermal zone device %s\n",
+ tsens->t_data->name);
+ return ret;
+ }
+ return 0;
+}
+
+static void intel_tsens_remove_tz(struct intel_hddl_clients *d)
+{
+ int i;
+
+ for (i = 0; i < d->nsens; i++) {
+ struct intel_tsens_host *tsens = d->tsens[i];
+
+ if (tsens->tz_smbus) {
+ thermal_zone_device_unregister(tsens->tz_smbus);
+ tsens->tz_smbus = NULL;
+ }
+ if (tsens->tz_xlk) {
+ thermal_zone_device_unregister(tsens->tz_xlk);
+ tsens->tz_xlk = NULL;
+ }
+ }
+}
+
+static int intel_tsens_tj_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct intel_hddl_clients *d = client->dev.platform_data;
+ u32 device_id = tsens_get_device_id(d);
+ char *i2c_str;
+ int ret, i;
+
+ if (strstr(client->adapter->name, "SMBus I801")) {
+ i2c_str = "smb";
+ for (i = 0; i < d->nsens; i++) {
+ struct intel_tsens_host *tsens = d->tsens[i];
+
+ tsens->sensor_name_smbus =
+ kasprintf(GFP_KERNEL,
+ "%s_%s-%x",
+ tsens->t_data->name,
+ i2c_str, device_id);
+ tsens->i2c_smbus = client;
+ ret = intel_tsens_add_tz(tsens,
+ &tsens->tz_smbus,
+ tsens->sensor_name_smbus,
+ &client->dev,
+ i);
+ if (ret) {
+ dev_err(&client->dev,
+ "thermal zone configuration failed\n");
+ intel_tsens_remove_tz(d);
+ return ret;
+ }
+ }
+ } else {
+ i2c_str = "xlk";
+ for (i = 0; i < d->nsens; i++) {
+ struct intel_tsens_host *tsens = d->tsens[i];
+
+ tsens->sensor_name_xlk =
+ kasprintf(GFP_KERNEL,
+ "%s_%s-%x",
+ tsens->t_data->name,
+ i2c_str, device_id);
+ tsens->i2c_xlk = client;
+ ret = intel_tsens_add_tz(tsens,
+ &tsens->tz_xlk,
+ tsens->sensor_name_xlk,
+ &client->dev,
+ i);
+ if (ret) {
+ dev_err(&client->dev,
+ "thermal zone configuration failed\n");
+ intel_tsens_remove_tz(d);
+ return ret;
+ }
+ }
+ }
+
+ i2c_set_clientdata(client, d);
+
+ return 0;
+}
+
+static int intel_tsens_tj_exit(struct i2c_client *client)
+{
+ struct intel_hddl_clients *d = client->dev.platform_data;
+
+ if (!d) {
+ dev_err(&client->dev,
+ "Unable to get private data\n");
+ return -EINVAL;
+ }
+ intel_tsens_remove_tz(d);
+ return 0;
+}
+
+static const struct i2c_device_id i2c_intel_tsens_id[] = {
+ { "intel_tsens", (kernel_ulong_t)NULL },
+ {}
+};
+MODULE_DEVICE_TABLE(i2c, i2c_intel_tsens_id);
+
+static struct i2c_driver i2c_intel_tsens_driver = {
+ .driver = {
+ .name = "intel_tsens",
+ },
+ .probe = intel_tsens_tj_probe,
+ .remove = intel_tsens_tj_exit,
+ .id_table = i2c_intel_tsens_id,
+};
+module_i2c_driver(i2c_intel_tsens_driver);
+
+MODULE_DESCRIPTION("Intel tsens host device driver");
+MODULE_AUTHOR("Sandeep Singh <[email protected]>");
+MODULE_AUTHOR("Vaidya, Mahesh R <[email protected]>");
+MODULE_AUTHOR("Udhayakumar C <[email protected]>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/intel_tsens_host.h b/include/linux/intel_tsens_host.h
new file mode 100644
index 000000000000..4b9b2d6a5cfc
--- /dev/null
+++ b/include/linux/intel_tsens_host.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *
+ * Intel tsens host I2C thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#ifndef _LINUX_INTEL_TSENS_HOST_DEVICE_H
+#define _LINUX_INTEL_TSENS_HOST_DEVICE_H
+
+struct intel_tsens_host_trip_info {
+ enum thermal_trip_type trip_type;
+ int temp;
+} __packed __aligned(4);
+
+struct intel_tsens_host {
+ const char *sensor_name_smbus;
+ const char *sensor_name_xlk;
+ struct intel_tsens_data *t_data;
+ struct intel_tsens_host_trip_info **trip_info;
+ u32 device_id;
+ struct i2c_client *i2c_xlk;
+ struct i2c_client *i2c_smbus;
+ struct thermal_zone_device *tz_xlk;
+ struct thermal_zone_device *tz_smbus;
+};
+
+struct intel_tsens_host_plat_data {
+ int nsens;
+ struct intel_tsens_host **tsens;
+};
+#endif /*_LINUX_INTEL_TSENS_HOST_DEVICE_H*/
--
2.17.1

2021-01-08 21:32:41

by mark gross

[permalink] [raw]
Subject: [PATCH v2 21/34] xlink-core: Enable xlink protocol over pcie

From: Seamus Kelly <[email protected]>

Enable host system access to the VPU over the xlink protocol over PCIe by
enabling channel multiplexing and dispatching. This allows for remote host
communication channels across PCIe links.

Add the xlink-dispatcher:
 - creates tx and rx threads per link
 - queues messages for transmission and on reception

Update the multiplexer to utilise the dispatcher:
 - multiplexes channels over a single interface link, e.g. PCIe
 - processes messages received by the dispatcher
 - passes messages created by API calls to the dispatcher for transmission


Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Seamus Kelly <[email protected]>
---
drivers/misc/xlink-core/Makefile | 2 +-
drivers/misc/xlink-core/xlink-core.c | 35 +-
drivers/misc/xlink-core/xlink-dispatcher.c | 441 +++++++++++++++++
drivers/misc/xlink-core/xlink-dispatcher.h | 26 +
drivers/misc/xlink-core/xlink-multiplexer.c | 498 +++++++++++++++++++-
5 files changed, 999 insertions(+), 3 deletions(-)
create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.c
create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.h
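
The dispatcher is a classic producer/consumer pair: API calls enqueue
events and post a semaphore, while a per-link tx thread blocks on that
semaphore, dequeues, and transmits. A minimal userspace sketch of the same
pattern using pthreads and a POSIX semaphore; the sentinel event here
stands in for the kthread_should_stop()/SIGTERM shutdown the driver uses:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>

/* illustrative event; the driver's struct xlink_event carries much more */
struct event {
	int id;
	struct event *next;
};

static struct event *head, *tail;	/* FIFO of pending events */
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static sem_t event_sem;			/* counts queued events */

/* producer side, the role xlink_dispatcher_event_add() plays */
static void event_add(int id)
{
	struct event *ev = calloc(1, sizeof(*ev));

	ev->id = id;
	pthread_mutex_lock(&qlock);
	if (tail)
		tail->next = ev;
	else
		head = ev;
	tail = ev;
	pthread_mutex_unlock(&qlock);
	sem_post(&event_sem);		/* wake the tx thread */
}

/* consumer side, the role the dispatcher tx thread plays */
static void *txthread(void *arg)
{
	(void)arg;
	for (;;) {
		struct event *ev;

		sem_wait(&event_sem);	/* sleep until work arrives */
		pthread_mutex_lock(&qlock);
		ev = head;
		head = ev->next;
		if (!head)
			tail = NULL;
		pthread_mutex_unlock(&qlock);

		if (ev->id < 0) {	/* sentinel: stop the thread */
			free(ev);
			break;
		}
		printf("sending event %d\n", ev->id);	/* transmit stand-in */
		free(ev);
	}
	return NULL;
}

int main(void)
{
	pthread_t tid;
	int i;

	sem_init(&event_sem, 0, 0);
	pthread_create(&tid, NULL, txthread, NULL);
	for (i = 0; i < 3; i++)
		event_add(i);
	event_add(-1);			/* ask the thread to exit */
	pthread_join(tid, NULL);
	return 0;
}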

diff --git a/drivers/misc/xlink-core/Makefile b/drivers/misc/xlink-core/Makefile
index e82b7c72b6b9..ee81f9d05f2b 100644
--- a/drivers/misc/xlink-core/Makefile
+++ b/drivers/misc/xlink-core/Makefile
@@ -2,4 +2,4 @@
# Makefile for Keem Bay xlink Linux driver
#
obj-$(CONFIG_XLINK_CORE) += xlink.o
-xlink-objs += xlink-core.o xlink-multiplexer.o xlink-platform.o xlink-ioctl.o
+xlink-objs += xlink-core.o xlink-multiplexer.o xlink-dispatcher.o xlink-platform.o xlink-ioctl.o
diff --git a/drivers/misc/xlink-core/xlink-core.c b/drivers/misc/xlink-core/xlink-core.c
index 1a443f54786d..017d6776ce4c 100644
--- a/drivers/misc/xlink-core/xlink-core.c
+++ b/drivers/misc/xlink-core/xlink-core.c
@@ -21,6 +21,7 @@

#include "xlink-core.h"
#include "xlink-defs.h"
+#include "xlink-dispatcher.h"
#include "xlink-ioctl.h"
#include "xlink-multiplexer.h"
#include "xlink-platform.h"
@@ -151,6 +152,12 @@ static int kmb_xlink_probe(struct platform_device *pdev)
goto r_multiplexer;
}

+ // initialize dispatcher
+ rc = xlink_dispatcher_init(xlink_dev->pdev);
+ if (rc != X_LINK_SUCCESS) {
+ pr_err("Dispatcher initialization failed\n");
+ goto r_dispatcher;
+ }
// initialize xlink data structure
xlink_dev->nmb_connected_links = 0;
mutex_init(&xlink_dev->lock);
@@ -168,7 +175,7 @@ static int kmb_xlink_probe(struct platform_device *pdev)
/*Allocating Major number*/
if ((alloc_chrdev_region(&xdev, 0, 1, "xlinkdev")) < 0) {
dev_info(&pdev->dev, "Cannot allocate major number\n");
- goto r_multiplexer;
+ goto r_dispatcher;
}
dev_info(&pdev->dev, "Major = %d Minor = %d\n", MAJOR(xdev),
MINOR(xdev));
@@ -205,6 +212,8 @@ static int kmb_xlink_probe(struct platform_device *pdev)
class_destroy(dev_class);
r_class:
unregister_chrdev_region(xdev, 1);
+r_dispatcher:
+ xlink_dispatcher_destroy();
r_multiplexer:
xlink_multiplexer_destroy();
return -1;
@@ -220,6 +229,10 @@ static int kmb_xlink_remove(struct platform_device *pdev)
rc = xlink_multiplexer_destroy();
if (rc != X_LINK_SUCCESS)
pr_err("Multiplexer destroy failed\n");
+ // stop dispatchers and destroy
+ rc = xlink_dispatcher_destroy();
+ if (rc != X_LINK_SUCCESS)
+ pr_err("Dispatcher destroy failed\n");

mutex_unlock(&xlink->lock);
mutex_destroy(&xlink->lock);
@@ -314,6 +327,14 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
link->handle = *handle;
xlink->nmb_connected_links++;
kref_init(&link->refcount);
+ if (interface != IPC_INTERFACE) {
+ // start dispatcher
+ rc = xlink_dispatcher_start(link->id, &link->handle);
+ if (rc) {
+ pr_err("dispatcher start failed\n");
+ goto r_cleanup;
+ }
+ }
// initialize multiplexer connection
rc = xlink_multiplexer_connect(link->id);
if (rc) {
@@ -649,6 +670,7 @@ EXPORT_SYMBOL(xlink_release_data);
enum xlink_error xlink_disconnect(struct xlink_handle *handle)
{
struct xlink_link *link;
+ int interface = NULL_INTERFACE;
enum xlink_error rc = X_LINK_ERROR;

if (!xlink || !handle)
@@ -661,6 +683,17 @@ enum xlink_error xlink_disconnect(struct xlink_handle *handle)
// decrement refcount, if count is 0 lock mutex and disconnect
if (kref_put_mutex(&link->refcount, release_after_kref_put,
&xlink->lock)) {
+ // stop dispatcher
+ interface = get_interface_from_sw_device_id(link->handle.sw_device_id);
+ if (interface != IPC_INTERFACE) {
+ // stop dispatcher
+ rc = xlink_dispatcher_stop(link->id);
+ if (rc != X_LINK_SUCCESS) {
+ pr_err("dispatcher stop failed\n");
+ mutex_unlock(&xlink->lock);
+ return X_LINK_ERROR;
+ }
+ }
// deinitialize multiplexer connection
rc = xlink_multiplexer_disconnect(link->id);
if (rc) {
diff --git a/drivers/misc/xlink-core/xlink-dispatcher.c b/drivers/misc/xlink-core/xlink-dispatcher.c
new file mode 100644
index 000000000000..11ef8e4110ca
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-dispatcher.c
@@ -0,0 +1,441 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Dispatcher.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/semaphore.h>
+#include <linux/mutex.h>
+#include <linux/completion.h>
+#include <linux/sched/signal.h>
+#include <linux/platform_device.h>
+
+#include "xlink-dispatcher.h"
+#include "xlink-multiplexer.h"
+#include "xlink-platform.h"
+
+#define DISPATCHER_RX_TIMEOUT_MSEC 0
+
+/* state of a dispatcher servicing a link to a device*/
+enum dispatcher_state {
+ XLINK_DISPATCHER_INIT, /* initialized but not used */
+ XLINK_DISPATCHER_RUNNING, /* currently servicing a link */
+ XLINK_DISPATCHER_STOPPED, /* no longer servicing a link */
+ XLINK_DISPATCHER_ERROR, /* fatal error */
+};
+
+/* queue for dispatcher tx thread event handling */
+struct event_queue {
+ u32 count; /* number of events in the queue */
+ u32 capacity; /* capacity of events in the queue */
+ struct list_head head; /* head of event linked list */
+ struct mutex lock; /* locks queue while accessing */
+};
+
+/* dispatcher servicing a single link to a device */
+struct dispatcher {
+ u32 link_id; /* id of link being serviced */
+ enum dispatcher_state state; /* state of the dispatcher */
+ struct xlink_handle *handle; /* xlink device handle */
+ int interface; /* underlying interface of link */
+ struct task_struct *rxthread; /* kthread servicing rx */
+ struct task_struct *txthread; /* kthread servicing tx */
+ struct event_queue queue; /* xlink event queue */
+ struct semaphore event_sem; /* signals tx kthread of events */
+ struct completion rx_done; /* sync start/stop of rx kthread */
+ struct completion tx_done; /* sync start/stop of tx thread */
+};
+
+/* xlink dispatcher system component */
+struct xlink_dispatcher {
+ struct dispatcher dispatchers[XLINK_MAX_CONNECTIONS]; /* disp queue */
+ struct device *dev; /* deallocate data */
+ struct mutex lock; /* locks when start new disp */
+};
+
+/* global reference to the xlink dispatcher data structure */
+static struct xlink_dispatcher *xlinkd;
+
+/*
+ * Dispatcher Internal Functions
+ *
+ */
+
+static struct dispatcher *get_dispatcher_by_id(u32 id)
+{
+ if (!xlinkd)
+ return NULL;
+
+ if (id >= XLINK_MAX_CONNECTIONS)
+ return NULL;
+
+ return &xlinkd->dispatchers[id];
+}
+
+static u32 event_generate_id(void)
+{
+ static u32 id = 0xa; // TODO: temporary solution
+
+ return id++;
+}
+
+static struct xlink_event *event_dequeue(struct event_queue *queue)
+{
+ struct xlink_event *event = NULL;
+
+ mutex_lock(&queue->lock);
+ if (!list_empty(&queue->head)) {
+ event = list_first_entry(&queue->head, struct xlink_event,
+ list);
+ list_del(&event->list);
+ queue->count--;
+ }
+ mutex_unlock(&queue->lock);
+ return event;
+}
+
+static int event_enqueue(struct event_queue *queue, struct xlink_event *event)
+{
+ int rc = -1;
+
+ mutex_lock(&queue->lock);
+ if (queue->count < ((queue->capacity / 10) * 7)) {
+ list_add_tail(&event->list, &queue->head);
+ queue->count++;
+ rc = 0;
+ }
+ mutex_unlock(&queue->lock);
+ return rc;
+}
+
+static struct xlink_event *dispatcher_event_get(struct dispatcher *disp)
+{
+ int rc = 0;
+ struct xlink_event *event = NULL;
+
+ // wait until an event is available
+ rc = down_interruptible(&disp->event_sem);
+ // dequeue and return next event to process
+ if (!rc)
+ event = event_dequeue(&disp->queue);
+ return event;
+}
+
+static int is_valid_event_header(struct xlink_event *event)
+{
+ if (event->header.magic != XLINK_EVENT_HEADER_MAGIC)
+ return 0;
+ else
+ return 1;
+}
+
+static int dispatcher_event_send(struct xlink_event *event)
+{
+ size_t event_header_size = sizeof(event->header);
+ int rc;
+
+ // write event header
+ // printk(KERN_DEBUG "Sending event: type = 0x%x, id = 0x%x\n",
+ // event->header.type, event->header.id);
+ rc = xlink_platform_write(event->interface,
+ event->handle->sw_device_id, &event->header,
+ &event_header_size, event->header.timeout, NULL);
+ if (rc || event_header_size != sizeof(event->header)) {
+ pr_err("Write header failed %d\n", rc);
+ return rc;
+ }
+ if (event->header.type == XLINK_WRITE_REQ ||
+ event->header.type == XLINK_WRITE_VOLATILE_REQ) {
+ // write event data
+ rc = xlink_platform_write(event->interface,
+ event->handle->sw_device_id, event->data,
+ &event->header.size, event->header.timeout,
+ NULL);
+ if (rc)
+ pr_err("Write data failed %d\n", rc);
+ if (event->user_data == 1) {
+ if (event->paddr != 0) {
+ xlink_platform_deallocate(xlinkd->dev,
+ event->data, event->paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_CMA_MEMORY);
+ } else {
+ xlink_platform_deallocate(xlinkd->dev,
+ event->data, event->paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ }
+ }
+ }
+ return rc;
+}
+
+static int xlink_dispatcher_rxthread(void *context)
+{
+ struct dispatcher *disp = (struct dispatcher *)context;
+ struct xlink_event *event;
+ size_t size;
+ int rc;
+
+ // printk(KERN_DEBUG "dispatcher rxthread started\n");
+ event = xlink_create_event(disp->link_id, 0, disp->handle, 0, 0, 0);
+ if (!event)
+ return -1;
+
+ allow_signal(SIGTERM); // allow thread termination while waiting on sem
+ complete(&disp->rx_done);
+ while (!kthread_should_stop()) {
+ size = sizeof(event->header);
+ rc = xlink_platform_read(disp->interface,
+ disp->handle->sw_device_id,
+ &event->header, &size,
+ DISPATCHER_RX_TIMEOUT_MSEC, NULL);
+ if (rc || size != (int)sizeof(event->header))
+ continue;
+ if (is_valid_event_header(event)) {
+ event->link_id = disp->link_id;
+ rc = xlink_multiplexer_rx(event);
+ if (!rc) {
+ event = xlink_create_event(disp->link_id, 0,
+ disp->handle, 0, 0,
+ 0);
+ if (!event)
+ return -1;
+ }
+ }
+ }
+ // printk(KERN_INFO "dispatcher rxthread stopped\n");
+ complete(&disp->rx_done);
+	return 0;
+}
+
+static int xlink_dispatcher_txthread(void *context)
+{
+ struct dispatcher *disp = (struct dispatcher *)context;
+ struct xlink_event *event;
+
+ // printk(KERN_DEBUG "dispatcher txthread started\n");
+ allow_signal(SIGTERM); // allow thread termination while waiting on sem
+ complete(&disp->tx_done);
+ while (!kthread_should_stop()) {
+ event = dispatcher_event_get(disp);
+ if (!event)
+ continue;
+
+ dispatcher_event_send(event);
+ xlink_destroy_event(event); // free handled event
+ }
+ // printk(KERN_INFO "dispatcher txthread stopped\n");
+ complete(&disp->tx_done);
+	return 0;
+}
+
+/*
+ * Dispatcher External Functions
+ *
+ */
+
+enum xlink_error xlink_dispatcher_init(void *dev)
+{
+ struct platform_device *plat_dev = (struct platform_device *)dev;
+ int i;
+
+ xlinkd = kzalloc(sizeof(*xlinkd), GFP_KERNEL);
+ if (!xlinkd)
+ return X_LINK_ERROR;
+
+ xlinkd->dev = &plat_dev->dev;
+ for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+ xlinkd->dispatchers[i].link_id = i;
+ sema_init(&xlinkd->dispatchers[i].event_sem, 0);
+ init_completion(&xlinkd->dispatchers[i].rx_done);
+ init_completion(&xlinkd->dispatchers[i].tx_done);
+ INIT_LIST_HEAD(&xlinkd->dispatchers[i].queue.head);
+ mutex_init(&xlinkd->dispatchers[i].queue.lock);
+ xlinkd->dispatchers[i].queue.count = 0;
+ xlinkd->dispatchers[i].queue.capacity =
+ XLINK_EVENT_QUEUE_CAPACITY;
+ xlinkd->dispatchers[i].state = XLINK_DISPATCHER_INIT;
+ }
+ mutex_init(&xlinkd->lock);
+
+ return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_dispatcher_start(int id, struct xlink_handle *handle)
+{
+ struct dispatcher *disp;
+
+ mutex_lock(&xlinkd->lock);
+ // get dispatcher by link id
+ disp = get_dispatcher_by_id(id);
+ if (!disp)
+ goto r_error;
+
+ // cannot start a running or failed dispatcher
+ if (disp->state == XLINK_DISPATCHER_RUNNING ||
+ disp->state == XLINK_DISPATCHER_ERROR)
+ goto r_error;
+
+ // set the dispatcher context
+ disp->handle = handle;
+ disp->interface = get_interface_from_sw_device_id(handle->sw_device_id);
+
+ // run dispatcher thread to handle and write outgoing packets
+ disp->txthread = kthread_run(xlink_dispatcher_txthread,
+ (void *)disp, "txthread");
+	if (IS_ERR(disp->txthread)) {
+ pr_err("xlink txthread creation failed\n");
+ goto r_txthread;
+ }
+ wait_for_completion(&disp->tx_done);
+ disp->state = XLINK_DISPATCHER_RUNNING;
+ // run dispatcher thread to read and handle incoming packets
+ disp->rxthread = kthread_run(xlink_dispatcher_rxthread,
+ (void *)disp, "rxthread");
+	if (IS_ERR(disp->rxthread)) {
+ pr_err("xlink rxthread creation failed\n");
+ goto r_rxthread;
+ }
+ wait_for_completion(&disp->rx_done);
+ mutex_unlock(&xlinkd->lock);
+
+ return X_LINK_SUCCESS;
+
+r_rxthread:
+ kthread_stop(disp->txthread);
+r_txthread:
+ disp->state = XLINK_DISPATCHER_STOPPED;
+r_error:
+ mutex_unlock(&xlinkd->lock);
+ return X_LINK_ERROR;
+}
+
+enum xlink_error xlink_dispatcher_event_add(enum xlink_event_origin origin,
+ struct xlink_event *event)
+{
+ struct dispatcher *disp;
+ int rc;
+
+ // get dispatcher by handle
+ disp = get_dispatcher_by_id(event->link_id);
+ if (!disp)
+ return X_LINK_ERROR;
+
+ // only add events if the dispatcher is running
+ if (disp->state != XLINK_DISPATCHER_RUNNING)
+ return X_LINK_ERROR;
+
+ // configure event and add to queue
+ if (origin == EVENT_TX)
+ event->header.id = event_generate_id();
+ event->origin = origin;
+ rc = event_enqueue(&disp->queue, event);
+ if (rc)
+ return X_LINK_CHAN_FULL;
+
+ // notify dispatcher tx thread of new event
+ up(&disp->event_sem);
+ return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_dispatcher_stop(int id)
+{
+ struct dispatcher *disp;
+ int rc;
+
+ mutex_lock(&xlinkd->lock);
+ // get dispatcher by link id
+ disp = get_dispatcher_by_id(id);
+ if (!disp)
+ goto r_error;
+
+ // don't stop dispatcher if not started
+ if (disp->state != XLINK_DISPATCHER_RUNNING)
+ goto r_error;
+
+ if (disp->rxthread) {
+ // stop dispatcher rx thread
+ send_sig(SIGTERM, disp->rxthread, 0);
+ rc = kthread_stop(disp->rxthread);
+ if (rc)
+ goto r_thread;
+ }
+ wait_for_completion(&disp->rx_done);
+ if (disp->txthread) {
+ // stop dispatcher tx thread
+ send_sig(SIGTERM, disp->txthread, 0);
+ rc = kthread_stop(disp->txthread);
+ if (rc)
+ goto r_thread;
+ }
+ wait_for_completion(&disp->tx_done);
+ disp->state = XLINK_DISPATCHER_STOPPED;
+ mutex_unlock(&xlinkd->lock);
+ return X_LINK_SUCCESS;
+
+r_thread:
+ // dispatcher now in error state and cannot be used
+ disp->state = XLINK_DISPATCHER_ERROR;
+r_error:
+ mutex_unlock(&xlinkd->lock);
+ return X_LINK_ERROR;
+}
+
+enum xlink_error xlink_dispatcher_destroy(void)
+{
+ enum xlink_event_type type;
+ struct xlink_event *event;
+ struct dispatcher *disp;
+ int i;
+
+ for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+ // get dispatcher by link id
+ disp = get_dispatcher_by_id(i);
+ if (!disp)
+ continue;
+
+ // stop all running dispatchers
+ if (disp->state == XLINK_DISPATCHER_RUNNING)
+ xlink_dispatcher_stop(i);
+
+ // empty queues of all used dispatchers
+ if (disp->state == XLINK_DISPATCHER_INIT)
+ continue;
+
+ // deallocate remaining events in queue
+ while (!list_empty(&disp->queue.head)) {
+ event = event_dequeue(&disp->queue);
+ if (!event)
+ continue;
+ type = event->header.type;
+ if (type == XLINK_WRITE_REQ ||
+ type == XLINK_WRITE_VOLATILE_REQ) {
+ // deallocate event data
+ xlink_platform_deallocate(xlinkd->dev,
+ event->data,
+ event->paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ }
+ xlink_destroy_event(event);
+ }
+ // destroy dispatcher
+ mutex_destroy(&disp->queue.lock);
+ }
+ mutex_destroy(&xlinkd->lock);
+ return X_LINK_SUCCESS;
+}
diff --git a/drivers/misc/xlink-core/xlink-dispatcher.h b/drivers/misc/xlink-core/xlink-dispatcher.h
new file mode 100644
index 000000000000..d1458e7a4ab7
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-dispatcher.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Dispatcher.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_DISPATCHER_H
+#define __XLINK_DISPATCHER_H
+
+#include "xlink-defs.h"
+
+enum xlink_error xlink_dispatcher_init(void *dev);
+
+enum xlink_error xlink_dispatcher_start(int id, struct xlink_handle *handle);
+
+enum xlink_error xlink_dispatcher_event_add(enum xlink_event_origin origin,
+ struct xlink_event *event);
+
+enum xlink_error xlink_dispatcher_stop(int id);
+
+enum xlink_error xlink_dispatcher_destroy(void);
+
+enum xlink_error xlink_dispatcher_ipc_passthru_event_add(struct xlink_event *event);
+
+#endif /* __XLINK_DISPATCHER_H */
diff --git a/drivers/misc/xlink-core/xlink-multiplexer.c b/drivers/misc/xlink-core/xlink-multiplexer.c
index 9b1ed008bb56..339734826f3e 100644
--- a/drivers/misc/xlink-core/xlink-multiplexer.c
+++ b/drivers/misc/xlink-core/xlink-multiplexer.c
@@ -27,6 +27,7 @@
#include <linux/xlink-ipc.h>
#endif

+#include "xlink-dispatcher.h"
#include "xlink-multiplexer.h"
#include "xlink-platform.h"

@@ -165,6 +166,32 @@ static int is_channel_for_device(u16 chan, u32 sw_device_id,
return 0;
}

+static int is_enough_space_in_channel(struct open_channel *opchan,
+ u32 size)
+{
+ if (opchan->tx_packet_level >= ((XLINK_PACKET_QUEUE_CAPACITY / 100) * THR_UPR)) {
+ pr_info("Packet queue limit reached\n");
+ return 0;
+ }
+ if (opchan->tx_up_limit == 0) {
+ if ((opchan->tx_fill_level + size)
+ > ((opchan->chan->size / 100) * THR_UPR)) {
+ opchan->tx_up_limit = 1;
+ return 0;
+ }
+ }
+ if (opchan->tx_up_limit == 1) {
+ if ((opchan->tx_fill_level + size)
+ < ((opchan->chan->size / 100) * THR_LWR)) {
+ opchan->tx_up_limit = 0;
+ return 1;
+ } else {
+ return 0;
+ }
+ }
+ return 1;
+}
+
static int is_control_channel(u16 chan)
{
if (chan == IP_CONTROL_CHANNEL || chan == VPU_CONTROL_CHANNEL)
@@ -173,6 +200,51 @@ static int is_control_channel(u16 chan)
return 0;
}

+static struct open_channel *get_channel(u32 link_id, u16 chan)
+{
+ if (!xmux->channels[link_id][chan].opchan)
+ return NULL;
+ mutex_lock(&xmux->channels[link_id][chan].opchan->lock);
+ return xmux->channels[link_id][chan].opchan;
+}
+
+static void release_channel(struct open_channel *opchan)
+{
+ if (opchan)
+ mutex_unlock(&opchan->lock);
+}
+
+static int add_packet_to_channel(struct open_channel *opchan,
+ struct packet_queue *queue,
+ void *buffer, u32 size,
+ dma_addr_t paddr)
+{
+ struct packet *pkt;
+
+ if (queue->count < queue->capacity) {
+ pkt = kzalloc(sizeof(*pkt), GFP_KERNEL);
+ if (!pkt)
+ return X_LINK_ERROR;
+ pkt->data = buffer;
+ pkt->length = size;
+ pkt->paddr = paddr;
+ list_add_tail(&pkt->list, &queue->head);
+ queue->count++;
+ opchan->rx_fill_level += pkt->length;
+ }
+ return X_LINK_SUCCESS;
+}
+
+static struct packet *get_packet_from_channel(struct packet_queue *queue)
+{
+ struct packet *pkt = NULL;
+ // get first packet in queue
+ if (!list_empty(&queue->head))
+ pkt = list_first_entry(&queue->head, struct packet, list);
+
+ return pkt;
+}
+
static int release_packet_from_channel(struct open_channel *opchan,
struct packet_queue *queue,
u8 * const addr, u32 *size)
@@ -355,15 +427,46 @@ enum xlink_error xlink_multiplexer_destroy(void)
return X_LINK_SUCCESS;
}

+static int compl_wait(struct completion *compl, struct open_channel *opchan)
+{
+ int rc;
+ unsigned long tout = msecs_to_jiffies(opchan->chan->timeout);
+
+ if (opchan->chan->timeout == 0) {
+ mutex_unlock(&opchan->lock);
+ rc = wait_for_completion_interruptible(compl);
+ mutex_lock(&opchan->lock);
+ if (rc < 0) // wait interrupted
+ rc = X_LINK_ERROR;
+ } else {
+ mutex_unlock(&opchan->lock);
+ rc = wait_for_completion_interruptible_timeout(compl, tout);
+ mutex_lock(&opchan->lock);
+ if (rc == 0)
+ rc = X_LINK_TIMEOUT;
+ else if (rc < 0) // wait interrupted
+ rc = X_LINK_ERROR;
+ else if (rc > 0)
+ rc = X_LINK_SUCCESS;
+ }
+ return rc;
+}
+
enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
int *event_queued)
{
+ struct open_channel *opchan = NULL;
+ struct packet *pkt = NULL;
int rc = X_LINK_SUCCESS;
+ u32 link_id = 0;
+ u32 size = 0;
u16 chan = 0;
+ u32 save_timeout = 0;

if (!xmux || !event)
return X_LINK_ERROR;

+ link_id = event->link_id;
chan = event->header.chan;

// verify channel ID is in range
@@ -379,9 +482,402 @@ enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
if (is_control_channel(chan))
return X_LINK_ERROR;

- if (chan < XLINK_IPC_MAX_CHANNELS && event->interface == IPC_INTERFACE)
+ if (chan < XLINK_IPC_MAX_CHANNELS && event->interface == IPC_INTERFACE) {
// event should be handled by passthrough
rc = xlink_passthrough(event);
+ return rc;
+ }
+ // event should be handled by dispatcher
+ switch (event->header.type) {
+ case XLINK_WRITE_REQ:
+ case XLINK_WRITE_VOLATILE_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan || opchan->chan->status != CHAN_OPEN) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ event->header.timeout = opchan->chan->timeout;
+ while (!is_enough_space_in_channel(opchan,
+ event->header.size)) {
+ if (opchan->chan->mode == RXN_TXB ||
+ opchan->chan->mode == RXB_TXB) {
+ // channel is blocking,
+ // wait for packet to be released
+ rc = compl_wait(&opchan->pkt_released, opchan);
+ } else {
+ rc = X_LINK_CHAN_FULL;
+ break;
+ }
+ }
+ if (rc == X_LINK_SUCCESS) {
+ opchan->tx_fill_level += event->header.size;
+ opchan->tx_packet_level++;
+ xlink_dispatcher_event_add(EVENT_TX, event);
+ *event_queued = 1;
+ if (opchan->chan->mode == RXN_TXB ||
+ opchan->chan->mode == RXB_TXB) {
+					// channel is blocking, wait for the
+					// packet to be consumed; compl_wait()
+					// drops and retakes opchan->lock
+					// internally
+					rc = compl_wait(&opchan->pkt_consumed, opchan);
+ }
+ }
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_READ_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan || opchan->chan->status != CHAN_OPEN) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ event->header.timeout = opchan->chan->timeout;
+ if (opchan->chan->mode == RXB_TXN ||
+ opchan->chan->mode == RXB_TXB) {
+				// channel is blocking, wait for a packet to
+				// become available; compl_wait() drops and
+				// retakes opchan->lock internally
+				rc = compl_wait(&opchan->pkt_available, opchan);
+ }
+ if (rc == X_LINK_SUCCESS) {
+ pkt = get_packet_from_channel(&opchan->rx_queue);
+ if (pkt) {
+ *(u32 **)event->pdata = (u32 *)pkt->data;
+ *event->length = pkt->length;
+ xlink_dispatcher_event_add(EVENT_TX, event);
+ *event_queued = 1;
+ } else {
+ rc = X_LINK_ERROR;
+ }
+ }
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_READ_TO_BUFFER_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan || opchan->chan->status != CHAN_OPEN) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ event->header.timeout = opchan->chan->timeout;
+ if (opchan->chan->mode == RXB_TXN ||
+ opchan->chan->mode == RXB_TXB) {
+				// channel is blocking, wait for a packet to
+				// become available; compl_wait() drops and
+				// retakes opchan->lock internally
+				rc = compl_wait(&opchan->pkt_available, opchan);
+ }
+ if (rc == X_LINK_SUCCESS) {
+ pkt = get_packet_from_channel(&opchan->rx_queue);
+ if (pkt) {
+ memcpy(event->data, pkt->data, pkt->length);
+ *event->length = pkt->length;
+ xlink_dispatcher_event_add(EVENT_TX, event);
+ *event_queued = 1;
+ } else {
+ rc = X_LINK_ERROR;
+ }
+ }
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_RELEASE_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ rc = release_packet_from_channel(opchan,
+ &opchan->rx_queue,
+ event->data,
+ &size);
+ if (rc) {
+ rc = X_LINK_ERROR;
+ } else {
+ event->header.size = size;
+ xlink_dispatcher_event_add(EVENT_TX, event);
+ *event_queued = 1;
+ }
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_OPEN_CHANNEL_REQ:
+ if (xmux->channels[link_id][chan].status == CHAN_CLOSED) {
+ xmux->channels[link_id][chan].size = event->header.size;
+ xmux->channels[link_id][chan].timeout = event->header.timeout;
+ xmux->channels[link_id][chan].mode = (uintptr_t)event->data;
+ rc = multiplexer_open_channel(link_id, chan);
+ if (rc) {
+ rc = X_LINK_ERROR;
+ } else {
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ xlink_dispatcher_event_add(EVENT_TX, event);
+ *event_queued = 1;
+					// compl_wait() below drops and retakes opchan->lock internally
+ save_timeout = opchan->chan->timeout;
+ opchan->chan->timeout = OPEN_CHANNEL_TIMEOUT_MSEC;
+ rc = compl_wait(&opchan->opened, opchan);
+ opchan->chan->timeout = save_timeout;
+ if (rc == 0) {
+ xmux->channels[link_id][chan].status = CHAN_OPEN;
+ release_channel(opchan);
+ } else {
+ multiplexer_close_channel(opchan);
+ }
+ }
+ }
+ } else if (xmux->channels[link_id][chan].status == CHAN_OPEN_PEER) {
+ /* channel already open */
+ xmux->channels[link_id][chan].status = CHAN_OPEN; // opened locally
+ xmux->channels[link_id][chan].size = event->header.size;
+ xmux->channels[link_id][chan].timeout = event->header.timeout;
+ xmux->channels[link_id][chan].mode = (uintptr_t)event->data;
+ rc = multiplexer_open_channel(link_id, chan);
+ } else {
+ /* channel already open */
+ rc = X_LINK_ALREADY_OPEN;
+ }
+ break;
+ case XLINK_CLOSE_CHANNEL_REQ:
+ if (xmux->channels[link_id][chan].status == CHAN_OPEN) {
+ opchan = get_channel(link_id, chan);
+ if (!opchan)
+ return X_LINK_COMMUNICATION_FAIL;
+ rc = multiplexer_close_channel(opchan);
+ if (rc)
+ rc = X_LINK_ERROR;
+ else
+ xmux->channels[link_id][chan].status = CHAN_CLOSED;
+ } else {
+ /* can't close channel not open */
+ rc = X_LINK_ERROR;
+ }
+ break;
+ case XLINK_PING_REQ:
+ break;
+ case XLINK_WRITE_RESP:
+ case XLINK_WRITE_VOLATILE_RESP:
+ case XLINK_READ_RESP:
+ case XLINK_READ_TO_BUFFER_RESP:
+ case XLINK_RELEASE_RESP:
+ case XLINK_OPEN_CHANNEL_RESP:
+ case XLINK_CLOSE_CHANNEL_RESP:
+ case XLINK_PING_RESP:
+ default:
+ rc = X_LINK_ERROR;
+ }
+	return rc;
+}
+
+enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
+{
+ struct xlink_event *passthru_event = NULL;
+ struct open_channel *opchan = NULL;
+ int rc = X_LINK_SUCCESS;
+ dma_addr_t paddr = 0;
+ void *buffer = NULL;
+ size_t size = 0;
+ u32 link_id;
+ u16 chan;
+
+ if (!xmux || !event)
+ return X_LINK_ERROR;
+
+ link_id = event->link_id;
+ chan = event->header.chan;
+
+ switch (event->header.type) {
+ case XLINK_WRITE_REQ:
+ case XLINK_WRITE_VOLATILE_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ // if we receive data on a closed channel - flush/read the data
+ buffer = xlink_platform_allocate(xmux->dev, &paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ if (buffer) {
+ size = event->header.size;
+ xlink_platform_read(event->interface,
+ event->handle->sw_device_id,
+ buffer, &size, 1000, NULL);
+ xlink_platform_deallocate(xmux->dev, buffer,
+ paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ } else {
+ pr_err("Fatal error: can't allocate memory in line:%d func:%s\n", __LINE__, __func__);
+ }
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ event->header.timeout = opchan->chan->timeout;
+ buffer = xlink_platform_allocate(xmux->dev, &paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ if (buffer) {
+ size = event->header.size;
+ rc = xlink_platform_read(event->interface,
+ event->handle->sw_device_id,
+ buffer, &size,
+ opchan->chan->timeout,
+ NULL);
+ if (rc || event->header.size != size) {
+ xlink_platform_deallocate(xmux->dev, buffer,
+ paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ rc = X_LINK_ERROR;
+ release_channel(opchan);
+ break;
+ }
+ event->paddr = paddr;
+ event->data = buffer;
+ if (add_packet_to_channel(opchan, &opchan->rx_queue,
+ event->data,
+ event->header.size,
+ paddr)) {
+ xlink_platform_deallocate(xmux->dev,
+ buffer, paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ rc = X_LINK_ERROR;
+ release_channel(opchan);
+ break;
+ }
+ event->header.type = XLINK_WRITE_VOLATILE_RESP;
+ xlink_dispatcher_event_add(EVENT_RX, event);
+ //complete regardless of mode/timeout
+ complete(&opchan->pkt_available);
+ } else {
+ // failed to allocate buffer
+ rc = X_LINK_ERROR;
+ }
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_READ_REQ:
+ case XLINK_READ_TO_BUFFER_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ break;
+ }
+ event->header.timeout = opchan->chan->timeout;
+ event->header.type = XLINK_READ_TO_BUFFER_RESP;
+ xlink_dispatcher_event_add(EVENT_RX, event);
+ //complete regardless of mode/timeout
+ complete(&opchan->pkt_consumed);
+ release_channel(opchan);
+ break;
+ case XLINK_RELEASE_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ event->header.timeout = opchan->chan->timeout;
+ opchan->tx_fill_level -= event->header.size;
+ opchan->tx_packet_level--;
+ event->header.type = XLINK_RELEASE_RESP;
+ xlink_dispatcher_event_add(EVENT_RX, event);
+ //complete regardless of mode/timeout
+ complete(&opchan->pkt_released);
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_OPEN_CHANNEL_REQ:
+ if (xmux->channels[link_id][chan].status == CHAN_CLOSED) {
+ xmux->channels[link_id][chan].size = event->header.size;
+ xmux->channels[link_id][chan].timeout = event->header.timeout;
+ //xmux->channels[link_id][chan].mode = *(enum xlink_opmode *)event->data;
+ rc = multiplexer_open_channel(link_id, chan);
+ if (rc) {
+ rc = X_LINK_ERROR;
+ } else {
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ xmux->channels[link_id][chan].status = CHAN_OPEN_PEER;
+ complete(&opchan->opened);
+ passthru_event = xlink_create_event(link_id,
+ XLINK_OPEN_CHANNEL_RESP,
+ event->handle,
+ chan,
+ 0,
+ opchan->chan->timeout);
+ if (!passthru_event) {
+ rc = X_LINK_ERROR;
+ release_channel(opchan);
+ break;
+ }
+ xlink_dispatcher_event_add(EVENT_RX,
+ passthru_event);
+ }
+ release_channel(opchan);
+ }
+ } else {
+ /* channel already open */
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ passthru_event = xlink_create_event(link_id,
+ XLINK_OPEN_CHANNEL_RESP,
+ event->handle,
+ chan, 0, 0);
+ if (!passthru_event) {
+ release_channel(opchan);
+ rc = X_LINK_ERROR;
+ break;
+ }
+ xlink_dispatcher_event_add(EVENT_RX,
+ passthru_event);
+ }
+ release_channel(opchan);
+ }
+ rc = xlink_passthrough(event);
+ if (rc == 0)
+ xlink_destroy_event(event); // event is handled and can now be freed
+ break;
+ case XLINK_CLOSE_CHANNEL_REQ:
+ case XLINK_PING_REQ:
+ break;
+ case XLINK_WRITE_RESP:
+ case XLINK_WRITE_VOLATILE_RESP:
+ opchan = get_channel(link_id, chan);
+ if (!opchan)
+ rc = X_LINK_COMMUNICATION_FAIL;
+ else
+ xlink_destroy_event(event); // event is handled and can now be freed
+ release_channel(opchan);
+ break;
+ case XLINK_READ_RESP:
+ case XLINK_READ_TO_BUFFER_RESP:
+ case XLINK_RELEASE_RESP:
+ xlink_destroy_event(event); // event is handled and can now be freed
+ break;
+ case XLINK_OPEN_CHANNEL_RESP:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ xlink_destroy_event(event); // event is handled and can now be freed
+ complete(&opchan->opened);
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_CLOSE_CHANNEL_RESP:
+ case XLINK_PING_RESP:
+ xlink_destroy_event(event); // event is handled and can now be freed
+ break;
+ default:
+ rc = X_LINK_ERROR;
+ }
+
return rc;
}

--
2.17.1

2021-01-08 21:32:43

by mark gross

[permalink] [raw]
Subject: [PATCH v2 11/34] misc: xlink-pcie: lh: Add core communication logic

From: Srikanth Thokala <[email protected]>

Add logic to establish communication with the remote host, implemented
through ring buffer management and MSI/doorbell interrupts.

Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Srikanth Thokala <[email protected]>
---
drivers/misc/xlink-pcie/local_host/Makefile | 2 +
drivers/misc/xlink-pcie/local_host/core.c | 808 ++++++++++++++++++++
drivers/misc/xlink-pcie/local_host/core.h | 247 ++++++
drivers/misc/xlink-pcie/local_host/epf.c | 116 ++-
drivers/misc/xlink-pcie/local_host/epf.h | 23 +
drivers/misc/xlink-pcie/local_host/util.c | 375 +++++++++
drivers/misc/xlink-pcie/local_host/util.h | 70 ++
drivers/misc/xlink-pcie/local_host/xpcie.h | 63 ++
include/linux/xlink_drv_inf.h | 60 ++
9 files changed, 1756 insertions(+), 8 deletions(-)
create mode 100644 drivers/misc/xlink-pcie/local_host/core.c
create mode 100644 drivers/misc/xlink-pcie/local_host/core.h
create mode 100644 drivers/misc/xlink-pcie/local_host/util.c
create mode 100644 drivers/misc/xlink-pcie/local_host/util.h
create mode 100644 include/linux/xlink_drv_inf.h
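
The communication rings are plain circular buffers: transfer descriptors
live in BAR-mapped memory, the producer advances a tail index, the
consumer advances a head index, and a doorbell/MSI tells the other side to
look. A minimal sketch of the index arithmetic, keeping one slot empty to
distinguish full from empty (circular_inc plays the role of the patch's
XPCIE_CIRCULAR_INC macro; all names are illustrative):

#include <stdbool.h>
#include <stdio.h>

#define NDESC 8				/* descriptor count per ring */

struct ring {
	unsigned int head;		/* next slot the consumer reads */
	unsigned int tail;		/* next slot the producer fills */
};

/* wrap-around increment of a ring index */
static unsigned int circular_inc(unsigned int idx, unsigned int ndesc)
{
	return (idx + 1) % ndesc;
}

static bool ring_full(const struct ring *r)
{
	/* one slot stays empty so full and empty are distinguishable */
	return circular_inc(r->tail, NDESC) == r->head;
}

static bool ring_empty(const struct ring *r)
{
	return r->head == r->tail;
}

int main(void)
{
	struct ring r = { 0, 0 };
	int i;

	for (i = 0; i < 10; i++) {	/* produce until the ring fills */
		if (ring_full(&r))
			break;		/* raise "data sent" doorbell here */
		r.tail = circular_inc(r.tail, NDESC);
	}
	printf("queued %u descriptors\n", r.tail);	/* 7 with NDESC=8 */

	while (!ring_empty(&r))		/* consumer drains, advancing head */
		r.head = circular_inc(r.head, NDESC);
	return 0;
}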

diff --git a/drivers/misc/xlink-pcie/local_host/Makefile b/drivers/misc/xlink-pcie/local_host/Makefile
index 54fc118e2dd1..28761751d43b 100644
--- a/drivers/misc/xlink-pcie/local_host/Makefile
+++ b/drivers/misc/xlink-pcie/local_host/Makefile
@@ -1,3 +1,5 @@
obj-$(CONFIG_XLINK_PCIE_LH_DRIVER) += mxlk_ep.o
mxlk_ep-objs := epf.o
mxlk_ep-objs += dma.o
+mxlk_ep-objs += core.o
+mxlk_ep-objs += util.o
diff --git a/drivers/misc/xlink-pcie/local_host/core.c b/drivers/misc/xlink-pcie/local_host/core.c
new file mode 100644
index 000000000000..612ab917db45
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/core.c
@@ -0,0 +1,808 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#include <linux/of_reserved_mem.h>
+
+#include "epf.h"
+#include "core.h"
+#include "util.h"
+
+static struct xpcie *global_xpcie;
+
+static struct xpcie *intel_xpcie_core_get_by_id(u32 sw_device_id)
+{
+ return (sw_device_id == xlink_sw_id) ? global_xpcie : NULL;
+}
+
+static int intel_xpcie_map_dma(struct xpcie *xpcie, struct xpcie_buf_desc *bd,
+ int direction)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct pci_epf *epf = xpcie_epf->epf;
+ struct device *dma_dev = epf->epc->dev.parent;
+
+ bd->phys = dma_map_single(dma_dev, bd->data, bd->length, direction);
+
+ return dma_mapping_error(dma_dev, bd->phys);
+}
+
+static void intel_xpcie_unmap_dma(struct xpcie *xpcie,
+ struct xpcie_buf_desc *bd, int direction)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct pci_epf *epf = xpcie_epf->epf;
+ struct device *dma_dev = epf->epc->dev.parent;
+
+ dma_unmap_single(dma_dev, bd->phys, bd->length, direction);
+}
+
+static void intel_xpcie_set_cap_txrx(struct xpcie *xpcie)
+{
+ size_t tx_len = sizeof(struct xpcie_transfer_desc) *
+ XPCIE_NUM_TX_DESCS;
+ size_t rx_len = sizeof(struct xpcie_transfer_desc) *
+ XPCIE_NUM_RX_DESCS;
+ size_t hdr_len = sizeof(struct xpcie_cap_txrx);
+ u32 start = sizeof(struct xpcie_mmio);
+ struct xpcie_cap_txrx *cap;
+ struct xpcie_cap_hdr *hdr;
+ u16 next;
+
+ next = (u16)(start + hdr_len + tx_len + rx_len);
+ intel_xpcie_iowrite32(start, xpcie->mmio + XPCIE_MMIO_CAP_OFF);
+ cap = (void *)xpcie->mmio + start;
+ memset(cap, 0, sizeof(struct xpcie_cap_txrx));
+ cap->hdr.id = XPCIE_CAP_TXRX;
+ cap->hdr.next = next;
+ cap->fragment_size = XPCIE_FRAGMENT_SIZE;
+ cap->tx.ring = start + hdr_len;
+ cap->tx.ndesc = XPCIE_NUM_TX_DESCS;
+ cap->rx.ring = start + hdr_len + tx_len;
+ cap->rx.ndesc = XPCIE_NUM_RX_DESCS;
+
+ hdr = (struct xpcie_cap_hdr *)((void *)xpcie->mmio + next);
+ hdr->id = XPCIE_CAP_NULL;
+}
+
+static void intel_xpcie_txrx_cleanup(struct xpcie *xpcie)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct device *dma_dev = xpcie_epf->epf->epc->dev.parent;
+ struct xpcie_interface *inf = &xpcie->interfaces[0];
+ struct xpcie_stream *tx = &xpcie->tx;
+ struct xpcie_stream *rx = &xpcie->rx;
+ struct xpcie_transfer_desc *td;
+ int index;
+
+ xpcie->stop_flag = true;
+ xpcie->no_tx_buffer = false;
+ inf->data_avail = true;
+ wake_up_interruptible(&xpcie->tx_waitq);
+ wake_up_interruptible(&inf->rx_waitq);
+ mutex_lock(&xpcie->wlock);
+ mutex_lock(&inf->rlock);
+
+ for (index = 0; index < rx->pipe.ndesc; index++) {
+ td = rx->pipe.tdr + index;
+ intel_xpcie_set_td_address(td, 0);
+ intel_xpcie_set_td_length(td, 0);
+ }
+ for (index = 0; index < tx->pipe.ndesc; index++) {
+ td = tx->pipe.tdr + index;
+ intel_xpcie_set_td_address(td, 0);
+ intel_xpcie_set_td_length(td, 0);
+ }
+
+ intel_xpcie_list_cleanup(&xpcie->tx_pool);
+ intel_xpcie_list_cleanup(&xpcie->rx_pool);
+
+ if (xpcie_epf->tx_virt) {
+ dma_free_coherent(dma_dev, xpcie_epf->tx_size,
+ xpcie_epf->tx_virt, xpcie_epf->tx_phys);
+ }
+
+ mutex_unlock(&inf->rlock);
+ mutex_unlock(&xpcie->wlock);
+}
+
+/*
+ * RX/TX are named from the Remote Host's perspective;
+ * on the Local Host they are reversed.
+ */
+static int intel_xpcie_txrx_init(struct xpcie *xpcie,
+ struct xpcie_cap_txrx *cap)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct device *dma_dev = xpcie_epf->epf->epc->dev.parent;
+ struct xpcie_stream *tx = &xpcie->tx;
+ struct xpcie_stream *rx = &xpcie->rx;
+ int tx_pool_size, rx_pool_size;
+ struct xpcie_buf_desc *bd;
+ int index, ndesc, rc;
+
+ xpcie->txrx = cap;
+ xpcie->fragment_size = cap->fragment_size;
+ xpcie->stop_flag = false;
+
+ rx->pipe.ndesc = cap->tx.ndesc;
+ rx->pipe.head = &cap->tx.head;
+ rx->pipe.tail = &cap->tx.tail;
+ rx->pipe.tdr = (void *)xpcie->mmio + cap->tx.ring;
+
+ tx->pipe.ndesc = cap->rx.ndesc;
+ tx->pipe.head = &cap->rx.head;
+ tx->pipe.tail = &cap->rx.tail;
+ tx->pipe.tdr = (void *)xpcie->mmio + cap->rx.ring;
+
+ intel_xpcie_list_init(&xpcie->rx_pool);
+ rx_pool_size = roundup(SZ_32M, xpcie->fragment_size);
+ ndesc = rx_pool_size / xpcie->fragment_size;
+
+ /* Initialize reserved memory resources */
+ rc = of_reserved_mem_device_init(dma_dev);
+ if (rc) {
+ dev_err(dma_dev, "Could not get reserved memory\n");
+ goto error;
+ }
+
+ for (index = 0; index < ndesc; index++) {
+ bd = intel_xpcie_alloc_bd(xpcie->fragment_size);
+ if (bd) {
+ intel_xpcie_list_put(&xpcie->rx_pool, bd);
+ } else {
+ dev_err(xpcie_to_dev(xpcie),
+ "failed to alloc all rx pool descriptors\n");
+ goto error;
+ }
+ }
+
+ intel_xpcie_list_init(&xpcie->tx_pool);
+ tx_pool_size = roundup(SZ_32M, xpcie->fragment_size);
+ ndesc = tx_pool_size / xpcie->fragment_size;
+
+ xpcie_epf->tx_size = tx_pool_size;
+ xpcie_epf->tx_virt = dma_alloc_coherent(dma_dev,
+ xpcie_epf->tx_size,
+ &xpcie_epf->tx_phys,
+ GFP_KERNEL);
+ if (!xpcie_epf->tx_virt)
+ goto error;
+
+ for (index = 0; index < ndesc; index++) {
+ bd = intel_xpcie_alloc_bd_reuse(xpcie->fragment_size,
+ xpcie_epf->tx_virt +
+ (index *
+ xpcie->fragment_size),
+ xpcie_epf->tx_phys +
+ (index *
+ xpcie->fragment_size));
+ if (bd) {
+ intel_xpcie_list_put(&xpcie->tx_pool, bd);
+ } else {
+ dev_err(xpcie_to_dev(xpcie),
+ "failed to alloc all tx pool descriptors\n");
+ goto error;
+ }
+ }
+
+ return 0;
+
+error:
+ intel_xpcie_txrx_cleanup(xpcie);
+
+ return -ENOMEM;
+}
+
+static int intel_xpcie_discover_txrx(struct xpcie *xpcie)
+{
+ struct xpcie_cap_txrx *cap;
+ int error;
+
+ cap = intel_xpcie_cap_find(xpcie, 0, XPCIE_CAP_TXRX);
+ if (cap) {
+ error = intel_xpcie_txrx_init(xpcie, cap);
+ } else {
+ dev_err(xpcie_to_dev(xpcie), "xpcie txrx info not found\n");
+ error = -EIO;
+ }
+
+ return error;
+}
+
+static void intel_xpcie_start_tx(struct xpcie *xpcie, unsigned long delay)
+{
+ /*
+	 * Use only one WQ for both Rx and Tx.
+	 *
+	 * Synchronous reads and writes to DDR were found to cause memory
+	 * mismatch errors in stability tests, due to a silicon bug in A0 SoC.
+ */
+ if (xpcie->legacy_a0)
+ queue_delayed_work(xpcie->rx_wq, &xpcie->tx_event, delay);
+ else
+ queue_delayed_work(xpcie->tx_wq, &xpcie->tx_event, delay);
+}
+
+static void intel_xpcie_start_rx(struct xpcie *xpcie, unsigned long delay)
+{
+ queue_delayed_work(xpcie->rx_wq, &xpcie->rx_event, delay);
+}
+
+static void intel_xpcie_rx_event_handler(struct work_struct *work)
+{
+ struct xpcie *xpcie = container_of(work, struct xpcie, rx_event.work);
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct xpcie_buf_desc *bd_head, *bd_tail, *bd;
+ u32 head, tail, ndesc, length, initial_head;
+ unsigned long delay = msecs_to_jiffies(1);
+ struct xpcie_stream *rx = &xpcie->rx;
+ int descs_num = 0, chan = 0, rc;
+ struct xpcie_dma_ll_desc *desc;
+ struct xpcie_transfer_desc *td;
+ bool reset_work = false;
+ u16 interface;
+ u64 address;
+
+ if (intel_xpcie_get_host_status(xpcie) != XPCIE_STATUS_RUN)
+ return;
+
+ bd_head = NULL;
+ bd_tail = NULL;
+ ndesc = rx->pipe.ndesc;
+ tail = intel_xpcie_get_tdr_tail(&rx->pipe);
+ initial_head = intel_xpcie_get_tdr_head(&rx->pipe);
+ head = initial_head;
+
+ while (head != tail) {
+ td = rx->pipe.tdr + head;
+
+ bd = intel_xpcie_alloc_rx_bd(xpcie);
+ if (!bd) {
+ reset_work = true;
+ if (descs_num == 0) {
+ delay = msecs_to_jiffies(10);
+ goto task_exit;
+ }
+ break;
+ }
+
+ interface = intel_xpcie_get_td_interface(td);
+ length = intel_xpcie_get_td_length(td);
+ address = intel_xpcie_get_td_address(td);
+
+ bd->length = length;
+ bd->interface = interface;
+ rc = intel_xpcie_map_dma(xpcie, bd, DMA_FROM_DEVICE);
+ if (rc) {
+ dev_err(xpcie_to_dev(xpcie),
+ "failed to map rx bd (%d)\n", rc);
+ intel_xpcie_free_rx_bd(xpcie, bd);
+ break;
+ }
+
+ desc = &xpcie_epf->rx_desc_buf[chan].virt[descs_num++];
+ desc->dma_transfer_size = length;
+ desc->dst_addr = bd->phys;
+ desc->src_addr = address;
+
+ if (bd_head)
+ bd_tail->next = bd;
+ else
+ bd_head = bd;
+ bd_tail = bd;
+
+ head = XPCIE_CIRCULAR_INC(head, ndesc);
+ }
+
+ if (descs_num == 0)
+ goto task_exit;
+
+ rc = intel_xpcie_copy_from_host_ll(xpcie, chan, descs_num);
+
+ bd = bd_head;
+ while (bd) {
+ intel_xpcie_unmap_dma(xpcie, bd, DMA_FROM_DEVICE);
+ bd = bd->next;
+ }
+
+ if (rc) {
+ dev_err(xpcie_to_dev(xpcie),
+ "failed to DMA from host (%d)\n", rc);
+ intel_xpcie_free_rx_bd(xpcie, bd_head);
+ delay = msecs_to_jiffies(5);
+ reset_work = true;
+ goto task_exit;
+ }
+
+ head = initial_head;
+ bd = bd_head;
+ while (bd) {
+ td = rx->pipe.tdr + head;
+ bd_head = bd_head->next;
+ bd->next = NULL;
+
+ if (likely(bd->interface < XPCIE_NUM_INTERFACES)) {
+ intel_xpcie_set_td_status(td,
+ XPCIE_DESC_STATUS_SUCCESS);
+ intel_xpcie_add_bd_to_interface(xpcie, bd);
+ } else {
+ dev_err(xpcie_to_dev(xpcie),
+ "detected rx desc interface failure (%u)\n",
+ bd->interface);
+ intel_xpcie_set_td_status(td, XPCIE_DESC_STATUS_ERROR);
+ intel_xpcie_free_rx_bd(xpcie, bd);
+ }
+
+ bd = bd_head;
+ head = XPCIE_CIRCULAR_INC(head, ndesc);
+ }
+
+ if (head != initial_head) {
+ intel_xpcie_set_tdr_head(&rx->pipe, head);
+ intel_xpcie_raise_irq(xpcie, DATA_RECEIVED);
+ }
+
+task_exit:
+ if (reset_work)
+ intel_xpcie_start_rx(xpcie, delay);
+}
+
+static void intel_xpcie_tx_event_handler(struct work_struct *work)
+{
+ struct xpcie *xpcie = container_of(work, struct xpcie, tx_event.work);
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct xpcie_buf_desc *bd_head, *bd_tail, *bd;
+ struct xpcie_stream *tx = &xpcie->tx;
+ u32 head, tail, ndesc, initial_tail;
+ struct xpcie_dma_ll_desc *desc;
+ struct xpcie_transfer_desc *td;
+ int descs_num = 0, chan = 0, rc;
+ size_t buffers = 0, bytes = 0;
+ u64 address;
+
+ if (intel_xpcie_get_host_status(xpcie) != XPCIE_STATUS_RUN)
+ return;
+
+ bd_head = NULL;
+ bd_tail = NULL;
+ ndesc = tx->pipe.ndesc;
+ initial_tail = intel_xpcie_get_tdr_tail(&tx->pipe);
+ tail = initial_tail;
+ head = intel_xpcie_get_tdr_head(&tx->pipe);
+
+ /* add new entries */
+ while (XPCIE_CIRCULAR_INC(tail, ndesc) != head) {
+ bd = intel_xpcie_list_get(&xpcie->write);
+ if (!bd)
+ break;
+
+ td = tx->pipe.tdr + tail;
+ address = intel_xpcie_get_td_address(td);
+
+ desc = &xpcie_epf->tx_desc_buf[chan].virt[descs_num++];
+ desc->dma_transfer_size = bd->length;
+ desc->src_addr = bd->phys;
+ desc->dst_addr = address;
+
+ if (bd_head)
+ bd_tail->next = bd;
+ else
+ bd_head = bd;
+ bd_tail = bd;
+
+ tail = XPCIE_CIRCULAR_INC(tail, ndesc);
+ }
+
+ if (descs_num == 0)
+ goto task_exit;
+
+ rc = intel_xpcie_copy_to_host_ll(xpcie, chan, descs_num);
+
+ tail = initial_tail;
+ bd = bd_head;
+ while (bd) {
+ if (rc) {
+ bd = bd->next;
+ continue;
+ }
+
+ td = tx->pipe.tdr + tail;
+ intel_xpcie_set_td_status(td, XPCIE_DESC_STATUS_SUCCESS);
+ intel_xpcie_set_td_length(td, bd->length);
+ intel_xpcie_set_td_interface(td, bd->interface);
+
+ bd = bd->next;
+ tail = XPCIE_CIRCULAR_INC(tail, ndesc);
+ }
+
+ if (rc) {
+ dev_err(xpcie_to_dev(xpcie),
+ "failed to DMA to host (%d)\n", rc);
+ intel_xpcie_list_put_head(&xpcie->write, bd_head);
+ return;
+ }
+
+ intel_xpcie_free_tx_bd(xpcie, bd_head);
+
+ if (intel_xpcie_get_tdr_tail(&tx->pipe) != tail) {
+ intel_xpcie_set_tdr_tail(&tx->pipe, tail);
+ intel_xpcie_raise_irq(xpcie, DATA_SENT);
+ }
+
+task_exit:
+ intel_xpcie_list_info(&xpcie->write, &bytes, &buffers);
+ if (buffers) {
+ xpcie->tx_pending = true;
+ head = intel_xpcie_get_tdr_head(&tx->pipe);
+ if (XPCIE_CIRCULAR_INC(tail, ndesc) != head)
+ intel_xpcie_start_tx(xpcie, 0);
+ } else {
+ xpcie->tx_pending = false;
+ }
+}
+
+static irqreturn_t intel_xpcie_core_irq_cb(int irq, void *args)
+{
+ struct xpcie *xpcie = args;
+
+ if (intel_xpcie_get_doorbell(xpcie, TO_DEVICE, DATA_SENT)) {
+ intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DATA_SENT, 0);
+ intel_xpcie_start_rx(xpcie, 0);
+ }
+ if (intel_xpcie_get_doorbell(xpcie, TO_DEVICE, DATA_RECEIVED)) {
+ intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DATA_RECEIVED, 0);
+ if (xpcie->tx_pending)
+ intel_xpcie_start_tx(xpcie, 0);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static int intel_xpcie_events_init(struct xpcie *xpcie)
+{
+ xpcie->rx_wq = alloc_ordered_workqueue(XPCIE_DRIVER_NAME,
+ WQ_MEM_RECLAIM | WQ_HIGHPRI);
+ if (!xpcie->rx_wq) {
+ dev_err(xpcie_to_dev(xpcie), "failed to allocate workqueue\n");
+ return -ENOMEM;
+ }
+
+ if (!xpcie->legacy_a0) {
+ xpcie->tx_wq = alloc_ordered_workqueue(XPCIE_DRIVER_NAME,
+ WQ_MEM_RECLAIM |
+ WQ_HIGHPRI);
+ if (!xpcie->tx_wq) {
+ dev_err(xpcie_to_dev(xpcie),
+ "failed to allocate workqueue\n");
+ destroy_workqueue(xpcie->rx_wq);
+ return -ENOMEM;
+ }
+ }
+
+ INIT_DELAYED_WORK(&xpcie->rx_event, intel_xpcie_rx_event_handler);
+ INIT_DELAYED_WORK(&xpcie->tx_event, intel_xpcie_tx_event_handler);
+
+ return 0;
+}
+
+static void intel_xpcie_events_cleanup(struct xpcie *xpcie)
+{
+ cancel_delayed_work_sync(&xpcie->rx_event);
+ cancel_delayed_work_sync(&xpcie->tx_event);
+
+ destroy_workqueue(xpcie->rx_wq);
+ if (!xpcie->legacy_a0)
+ destroy_workqueue(xpcie->tx_wq);
+}
+
+int intel_xpcie_core_init(struct xpcie *xpcie)
+{
+ int error;
+
+ global_xpcie = xpcie;
+
+ intel_xpcie_set_cap_txrx(xpcie);
+
+ error = intel_xpcie_events_init(xpcie);
+ if (error)
+ return error;
+
+ error = intel_xpcie_discover_txrx(xpcie);
+ if (error)
+ goto error_txrx;
+
+ intel_xpcie_interfaces_init(xpcie);
+
+ intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DATA_SENT, 0);
+ intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DATA_RECEIVED, 0);
+ intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DEV_EVENT, NO_OP);
+ intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, DATA_SENT, 0);
+ intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, DATA_RECEIVED, 0);
+ intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, DEV_EVENT, NO_OP);
+
+ intel_xpcie_register_host_irq(xpcie, intel_xpcie_core_irq_cb);
+
+ return 0;
+
+error_txrx:
+ intel_xpcie_events_cleanup(xpcie);
+
+ return error;
+}
+
+void intel_xpcie_core_cleanup(struct xpcie *xpcie)
+{
+ if (xpcie->status == XPCIE_STATUS_RUN) {
+ intel_xpcie_events_cleanup(xpcie);
+ intel_xpcie_interfaces_cleanup(xpcie);
+ intel_xpcie_txrx_cleanup(xpcie);
+ }
+}
+
+int intel_xpcie_core_read(struct xpcie *xpcie, void *buffer,
+ size_t *length, u32 timeout_ms)
+{
+ long jiffies_timeout = (long)msecs_to_jiffies(timeout_ms);
+ struct xpcie_interface *inf = &xpcie->interfaces[0];
+ unsigned long jiffies_start = jiffies;
+ struct xpcie_buf_desc *bd;
+ long jiffies_passed = 0;
+ size_t len, remaining;
+ int ret;
+
+ if (*length == 0)
+ return -EINVAL;
+
+ if (xpcie->status != XPCIE_STATUS_RUN)
+ return -ENODEV;
+
+ len = *length;
+ remaining = len;
+ *length = 0;
+
+ ret = mutex_lock_interruptible(&inf->rlock);
+ if (ret < 0)
+ return -EINTR;
+
+ do {
+ while (!inf->data_avail) {
+ mutex_unlock(&inf->rlock);
+ if (timeout_ms == 0) {
+ ret =
+ wait_event_interruptible(inf->rx_waitq,
+ inf->data_avail);
+ } else {
+ ret =
+ wait_event_interruptible_timeout(inf->rx_waitq,
+ inf->data_avail,
+ jiffies_timeout -
+ jiffies_passed);
+ if (ret == 0)
+ return -ETIME;
+ }
+ if (ret < 0 || xpcie->stop_flag)
+ return -EINTR;
+
+ ret = mutex_lock_interruptible(&inf->rlock);
+ if (ret < 0)
+ return -EINTR;
+ }
+
+ bd = (inf->partial_read) ? inf->partial_read :
+ intel_xpcie_list_get(&inf->read);
+
+ while (remaining && bd) {
+ size_t bcopy;
+
+ bcopy = min(remaining, bd->length);
+ memcpy(buffer, bd->data, bcopy);
+
+ buffer += bcopy;
+ remaining -= bcopy;
+ bd->data += bcopy;
+ bd->length -= bcopy;
+
+ if (bd->length == 0) {
+ intel_xpcie_free_rx_bd(xpcie, bd);
+ bd = intel_xpcie_list_get(&inf->read);
+ }
+ }
+
+ /* save for next time */
+ inf->partial_read = bd;
+
+ if (!bd)
+ inf->data_avail = false;
+
+ *length = len - remaining;
+
+ jiffies_passed = (long)jiffies - (long)jiffies_start;
+ } while (remaining > 0 && (jiffies_passed < jiffies_timeout ||
+ timeout_ms == 0));
+
+ mutex_unlock(&inf->rlock);
+
+ return 0;
+}
+
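+/*
+ * Queue *length bytes from buffer for transmission, splitting the payload
+ * across TX buffer descriptors and kicking the TX worker. Sleeps while the
+ * TX pool is exhausted, subject to the same timeout rules as core_read.
+ */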
+int intel_xpcie_core_write(struct xpcie *xpcie, void *buffer,
+ size_t *length, u32 timeout_ms)
+{
+ long jiffies_timeout = (long)msecs_to_jiffies(timeout_ms);
+ struct xpcie_interface *inf = &xpcie->interfaces[0];
+ unsigned long jiffies_start = jiffies;
+ struct xpcie_buf_desc *bd, *head;
+ long jiffies_passed = 0;
+ size_t remaining, len;
+ int ret;
+
+ if (*length == 0)
+ return -EINVAL;
+
+ if (xpcie->status != XPCIE_STATUS_RUN)
+ return -ENODEV;
+
+ if (intel_xpcie_get_host_status(xpcie) != XPCIE_STATUS_RUN)
+ return -ENODEV;
+
+ len = *length;
+ remaining = len;
+ *length = 0;
+
+ ret = mutex_lock_interruptible(&xpcie->wlock);
+ if (ret < 0)
+ return -EINTR;
+
+ do {
+ bd = intel_xpcie_alloc_tx_bd(xpcie);
+ head = bd;
+ while (!head) {
+ mutex_unlock(&xpcie->wlock);
+ if (timeout_ms == 0) {
+ ret =
+ wait_event_interruptible(xpcie->tx_waitq,
+ !xpcie->no_tx_buffer);
+ } else {
+ ret =
+ wait_event_interruptible_timeout(xpcie->tx_waitq,
+ !xpcie->no_tx_buffer,
+ jiffies_timeout -
+ jiffies_passed);
+ if (ret == 0)
+ return -ETIME;
+ }
+ if (ret < 0 || xpcie->stop_flag)
+ return -EINTR;
+
+ ret = mutex_lock_interruptible(&xpcie->wlock);
+ if (ret < 0)
+ return -EINTR;
+
+ bd = intel_xpcie_alloc_tx_bd(xpcie);
+ head = bd;
+ }
+
+ while (remaining && bd) {
+ size_t bcopy;
+
+ bcopy = min(bd->length, remaining);
+ memcpy(bd->data, buffer, bcopy);
+
+ buffer += bcopy;
+ remaining -= bcopy;
+ bd->length = bcopy;
+ bd->interface = inf->id;
+
+ if (remaining) {
+ bd->next = intel_xpcie_alloc_tx_bd(xpcie);
+ bd = bd->next;
+ }
+ }
+
+ intel_xpcie_list_put(&inf->xpcie->write, head);
+ intel_xpcie_start_tx(xpcie, 0);
+
+ *length = len - remaining;
+
+ jiffies_passed = (long)jiffies - (long)jiffies_start;
+ } while (remaining > 0 && (jiffies_passed < jiffies_timeout ||
+ timeout_ms == 0));
+
+ mutex_unlock(&xpcie->wlock);
+
+ return 0;
+}
+
+int intel_xpcie_get_device_status_by_id(u32 id, u32 *status)
+{
+ struct xpcie *xpcie = intel_xpcie_core_get_by_id(id);
+
+ if (!xpcie)
+ return -ENODEV;
+
+ *status = xpcie->status;
+
+ return 0;
+}
+
+u32 intel_xpcie_get_device_num(u32 *id_list)
+{
+ u32 num_devices = 0;
+
+ if (xlink_sw_id) {
+ num_devices = 1;
+ *id_list = xlink_sw_id;
+ }
+
+ return num_devices;
+}
+
+int intel_xpcie_get_device_name_by_id(u32 id,
+ char *device_name, size_t name_size)
+{
+ struct xpcie *xpcie;
+
+ xpcie = intel_xpcie_core_get_by_id(id);
+ if (!xpcie)
+ return -ENODEV;
+
+ memset(device_name, 0, name_size);
+ if (name_size > strlen(XPCIE_DRIVER_NAME))
+ name_size = strlen(XPCIE_DRIVER_NAME);
+ else if (name_size)
+ name_size--; /* keep room for the NUL terminator */
+ memcpy(device_name, XPCIE_DRIVER_NAME, name_size);
+
+ return 0;
+}
+
+int intel_xpcie_pci_connect_device(u32 id)
+{
+ struct xpcie *xpcie;
+
+ xpcie = intel_xpcie_core_get_by_id(id);
+ if (!xpcie)
+ return -ENODEV;
+
+ if (xpcie->status != XPCIE_STATUS_RUN)
+ return -EIO;
+
+ return 0;
+}
+
+int intel_xpcie_pci_read(u32 id, void *data, size_t *size, u32 timeout)
+{
+ struct xpcie *xpcie;
+
+ xpcie = intel_xpcie_core_get_by_id(id);
+ if (!xpcie)
+ return -ENODEV;
+
+ return intel_xpcie_core_read(xpcie, data, size, timeout);
+}
+
+int intel_xpcie_pci_write(u32 id, void *data, size_t *size, u32 timeout)
+{
+ struct xpcie *xpcie;
+
+ xpcie = intel_xpcie_core_get_by_id(id);
+ if (!xpcie)
+ return -ENODEV;
+
+ return intel_xpcie_core_write(xpcie, data, size, timeout);
+}
+
+int intel_xpcie_pci_reset_device(u32 id)
+{
+ return 0;
+}
diff --git a/drivers/misc/xlink-pcie/local_host/core.h b/drivers/misc/xlink-pcie/local_host/core.h
new file mode 100644
index 000000000000..84985ef41a64
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/core.h
@@ -0,0 +1,247 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef XPCIE_CORE_HEADER_
+#define XPCIE_CORE_HEADER_
+
+#include <linux/io.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+#include <linux/slab.h>
+#include <linux/mutex.h>
+#include <linux/mempool.h>
+#include <linux/dma-mapping.h>
+#include <linux/cache.h>
+#include <linux/wait.h>
+
+#include <linux/xlink_drv_inf.h>
+
+/* Number of interfaces to statically allocate resources for */
+#define XPCIE_NUM_INTERFACES (1)
+
+/* max must always be a power of 2 */
+#define XPCIE_CIRCULAR_INC(val, max) (((val) + 1) & ((max) - 1))
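+/*
+ * e.g. with max = 64 the mask is 63, so 63 wraps to 0 without a modulo;
+ * a non-power-of-2 max would make the mask skip indices.
+ */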
+
+#define XPCIE_FRAGMENT_SIZE SZ_128K
+
+/* Status encoding of the transfer descriptors */
+#define XPCIE_DESC_STATUS_SUCCESS (0)
+#define XPCIE_DESC_STATUS_ERROR (0xFFFF)
+
+/* Layout of the transfer descriptors shared by device and host */
+struct xpcie_transfer_desc {
+ u64 address;
+ u32 length;
+ u16 status;
+ u16 interface;
+} __packed;
+
+struct xpcie_pipe {
+ u32 old;
+ u32 ndesc;
+ u32 *head;
+ u32 *tail;
+ struct xpcie_transfer_desc *tdr;
+};
+
+struct xpcie_buf_desc {
+ struct xpcie_buf_desc *next;
+ void *head;
+ dma_addr_t phys;
+ size_t true_len;
+ void *data;
+ size_t length;
+ int interface;
+ bool own_mem;
+};
+
+struct xpcie_stream {
+ size_t frag;
+ struct xpcie_pipe pipe;
+};
+
+struct xpcie_list {
+ spinlock_t lock; /* list lock */
+ size_t bytes;
+ size_t buffers;
+ struct xpcie_buf_desc *head;
+ struct xpcie_buf_desc *tail;
+};
+
+struct xpcie_interface {
+ int id;
+ struct xpcie *xpcie;
+ struct mutex rlock; /* read lock */
+ struct xpcie_list read;
+ struct xpcie_buf_desc *partial_read;
+ bool data_avail;
+ wait_queue_head_t rx_waitq;
+};
+
+struct xpcie_debug_stats {
+ struct {
+ size_t cnts;
+ size_t bytes;
+ } tx_krn, rx_krn, tx_usr, rx_usr;
+ size_t send_ints;
+ size_t interrupts;
+ size_t rx_event_runs;
+ size_t tx_event_runs;
+};
+
+/* Defined capabilities located in mmio space */
+#define XPCIE_CAP_NULL (0)
+#define XPCIE_CAP_TXRX (1)
+
+#define XPCIE_CAP_TTL (32)
+#define XPCIE_CAP_HDR_ID (offsetof(struct xpcie_cap_hdr, id))
+#define XPCIE_CAP_HDR_NEXT (offsetof(struct xpcie_cap_hdr, next))
+
+/* Header at the beginning of each capability to define and link to next */
+struct xpcie_cap_hdr {
+ u16 id;
+ u16 next;
+} __packed;
+
+struct xpcie_cap_pipe {
+ u32 ring;
+ u32 ndesc;
+ u32 head;
+ u32 tail;
+} __packed;
+
+/* Transmit and Receive capability */
+struct xpcie_cap_txrx {
+ struct xpcie_cap_hdr hdr;
+ u32 fragment_size;
+ struct xpcie_cap_pipe tx;
+ struct xpcie_cap_pipe rx;
+} __packed;
+
+static inline u64 _ioread64(void __iomem *addr)
+{
+ u64 low, high;
+
+ low = ioread32(addr);
+ high = ioread32(addr + sizeof(u32));
+
+ return low | (high << 32);
+}
+
+static inline void _iowrite64(u64 value, void __iomem *addr)
+{
+ iowrite32(value, addr);
+ iowrite32(value >> 32, addr + sizeof(u32));
+}
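+
+/*
+ * Note: the 64-bit accessors above are composed of two 32-bit accesses
+ * and are therefore not atomic with respect to the other end of the
+ * link; ordering relies on the head/tail handshake, not the I/O itself.
+ */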
+
+#define intel_xpcie_iowrite64(value, addr) \
+ _iowrite64(value, (void __iomem *)addr)
+#define intel_xpcie_iowrite32(value, addr) \
+ iowrite32(value, (void __iomem *)addr)
+#define intel_xpcie_iowrite16(value, addr) \
+ iowrite16(value, (void __iomem *)addr)
+#define intel_xpcie_iowrite8(value, addr) \
+ iowrite8(value, (void __iomem *)addr)
+#define intel_xpcie_ioread64(addr) \
+ _ioread64((void __iomem *)addr)
+#define intel_xpcie_ioread32(addr) \
+ ioread32((void __iomem *)addr)
+#define intel_xpcie_ioread16(addr) \
+ ioread16((void __iomem *)addr)
+#define intel_xpcie_ioread8(addr) \
+ ioread8((void __iomem *)addr)
+
+static inline
+void intel_xpcie_set_td_address(struct xpcie_transfer_desc *td, u64 address)
+{
+ intel_xpcie_iowrite64(address, &td->address);
+}
+
+static inline
+u64 intel_xpcie_get_td_address(struct xpcie_transfer_desc *td)
+{
+ return intel_xpcie_ioread64(&td->address);
+}
+
+static inline
+void intel_xpcie_set_td_length(struct xpcie_transfer_desc *td, u32 length)
+{
+ intel_xpcie_iowrite32(length, &td->length);
+}
+
+static inline
+u32 intel_xpcie_get_td_length(struct xpcie_transfer_desc *td)
+{
+ return intel_xpcie_ioread32(&td->length);
+}
+
+static inline
+void intel_xpcie_set_td_interface(struct xpcie_transfer_desc *td, u16 interface)
+{
+ intel_xpcie_iowrite16(interface, &td->interface);
+}
+
+static inline
+u16 intel_xpcie_get_td_interface(struct xpcie_transfer_desc *td)
+{
+ return intel_xpcie_ioread16(&td->interface);
+}
+
+static inline
+void intel_xpcie_set_td_status(struct xpcie_transfer_desc *td, u16 status)
+{
+ intel_xpcie_iowrite16(status, &td->status);
+}
+
+static inline
+u16 intel_xpcie_get_td_status(struct xpcie_transfer_desc *td)
+{
+ return intel_xpcie_ioread16(&td->status);
+}
+
+static inline
+void intel_xpcie_set_tdr_head(struct xpcie_pipe *p, u32 head)
+{
+ intel_xpcie_iowrite32(head, p->head);
+}
+
+static inline
+u32 intel_xpcie_get_tdr_head(struct xpcie_pipe *p)
+{
+ return intel_xpcie_ioread32(p->head);
+}
+
+static inline
+void intel_xpcie_set_tdr_tail(struct xpcie_pipe *p, u32 tail)
+{
+ intel_xpcie_iowrite32(tail, p->tail);
+}
+
+static inline
+u32 intel_xpcie_get_tdr_tail(struct xpcie_pipe *p)
+{
+ return intel_xpcie_ioread32(p->tail);
+}
+
+int intel_xpcie_core_init(struct xpcie *xpcie);
+void intel_xpcie_core_cleanup(struct xpcie *xpcie);
+int intel_xpcie_core_read(struct xpcie *xpcie, void *buffer, size_t *length,
+ u32 timeout_ms);
+int intel_xpcie_core_write(struct xpcie *xpcie, void *buffer, size_t *length,
+ u32 timeout_ms);
+u32 intel_xpcie_get_device_num(u32 *id_list);
+struct xpcie_dev *intel_xpcie_get_device_by_id(u32 id);
+int intel_xpcie_get_device_name_by_id(u32 id, char *device_name,
+ size_t name_size);
+int intel_xpcie_get_device_status_by_id(u32 id, u32 *status);
+int intel_xpcie_pci_connect_device(u32 id);
+int intel_xpcie_pci_read(u32 id, void *data, size_t *size, u32 timeout);
+int intel_xpcie_pci_write(u32 id, void *data, size_t *size, u32 timeout);
+int intel_xpcie_pci_reset_device(u32 id);
+#endif /* XPCIE_CORE_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/local_host/epf.c b/drivers/misc/xlink-pcie/local_host/epf.c
index dd8ffcabf5f9..b3112ca47bad 100644
--- a/drivers/misc/xlink-pcie/local_host/epf.c
+++ b/drivers/misc/xlink-pcie/local_host/epf.c
@@ -9,6 +9,7 @@

#include <linux/of.h>
#include <linux/platform_device.h>
+#include <linux/reboot.h>

#include "epf.h"

@@ -21,6 +22,12 @@
#define PCIE_REGS_PCIE_ERR_INTR_FLAGS 0x24
#define LINK_REQ_RST_FLG BIT(15)

+#define PCIE_REGS_PCIE_SYS_CFG_CORE 0x7C
+#define PCIE_CFG_PBUS_NUM_OFFSET 8
+#define PCIE_CFG_PBUS_NUM_MASK 0xFF
+#define PCIE_CFG_PBUS_DEV_NUM_OFFSET 16
+#define PCIE_CFG_PBUS_DEV_NUM_MASK 0x1F
+
static struct pci_epf_header xpcie_header = {
.vendorid = PCI_VENDOR_ID_INTEL,
.deviceid = PCI_DEVICE_ID_INTEL_KEEMBAY,
@@ -37,6 +44,45 @@ static const struct pci_epf_device_id xpcie_epf_ids[] = {
{},
};

+u32 xlink_sw_id;
+
+int intel_xpcie_copy_from_host_ll(struct xpcie *xpcie, int chan, int descs_num)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct pci_epf *epf = xpcie_epf->epf;
+
+ return intel_xpcie_ep_dma_read_ll(epf, chan, descs_num);
+}
+
+int intel_xpcie_copy_to_host_ll(struct xpcie *xpcie, int chan, int descs_num)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct pci_epf *epf = xpcie_epf->epf;
+
+ return intel_xpcie_ep_dma_write_ll(epf, chan, descs_num);
+}
+
+void intel_xpcie_register_host_irq(struct xpcie *xpcie, irq_handler_t func)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+
+ xpcie_epf->core_irq_callback = func;
+}
+
+int intel_xpcie_raise_irq(struct xpcie *xpcie, enum xpcie_doorbell_type type)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct pci_epf *epf = xpcie_epf->epf;
+
+ intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, type, 1);
+
+ return pci_epc_raise_irq(epf->epc, epf->func_no, PCI_EPC_IRQ_MSI, 1);
+}
+
static irqreturn_t intel_xpcie_err_interrupt(int irq, void *args)
{
struct xpcie_epf *xpcie_epf;
@@ -57,6 +103,7 @@ static irqreturn_t intel_xpcie_host_interrupt(int irq, void *args)
{
struct xpcie_epf *xpcie_epf;
struct xpcie *xpcie = args;
+ u8 event;
u32 val;

xpcie_epf = container_of(xpcie, struct xpcie_epf, xpcie);
@@ -64,6 +111,18 @@ static irqreturn_t intel_xpcie_host_interrupt(int irq, void *args)
if (val & LBC_CII_EVENT_FLAG) {
iowrite32(LBC_CII_EVENT_FLAG,
xpcie_epf->apb_base + PCIE_REGS_PCIE_INTR_FLAGS);
+
+ event = intel_xpcie_get_doorbell(xpcie, TO_DEVICE, DEV_EVENT);
+ if (unlikely(event != NO_OP)) {
+ intel_xpcie_set_doorbell(xpcie, TO_DEVICE,
+ DEV_EVENT, NO_OP);
+ if (event == REQUEST_RESET)
+ orderly_reboot();
+ return IRQ_HANDLED;
+ }
+
+ if (likely(xpcie_epf->core_irq_callback))
+ xpcie_epf->core_irq_callback(irq, xpcie);
}

return IRQ_HANDLED;
@@ -269,6 +328,7 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)
struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
const struct pci_epc_features *features;
struct pci_epc *epc = epf->epc;
+ u32 bus_num, dev_num;
struct device *dev;
size_t align = SZ_16K;
int ret;
@@ -300,12 +360,12 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)

if (!strcmp(xpcie_epf->stepping, "A0")) {
xpcie_epf->xpcie.legacy_a0 = true;
- iowrite32(1, (void __iomem *)xpcie_epf->xpcie.mmio +
- XPCIE_MMIO_LEGACY_A0);
+ intel_xpcie_iowrite32(1, xpcie_epf->xpcie.mmio +
+ XPCIE_MMIO_LEGACY_A0);
} else {
xpcie_epf->xpcie.legacy_a0 = false;
- iowrite32(0, (void __iomem *)xpcie_epf->xpcie.mmio +
- XPCIE_MMIO_LEGACY_A0);
+ intel_xpcie_iowrite32(0, xpcie_epf->xpcie.mmio +
+ XPCIE_MMIO_LEGACY_A0);
}

/* Enable interrupt */
@@ -330,13 +390,46 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)
ret = intel_xpcie_ep_dma_init(epf);
if (ret) {
dev_err(&epf->dev, "DMA initialization failed\n");
- goto err_free_err_irq;
+ goto err_cleanup_bars;
}

+ intel_xpcie_set_device_status(&xpcie_epf->xpcie, XPCIE_STATUS_READY);
+
+ ret = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_SYS_CFG_CORE);
+ bus_num = (ret >> PCIE_CFG_PBUS_NUM_OFFSET) & PCIE_CFG_PBUS_NUM_MASK;
+ dev_num = (ret >> PCIE_CFG_PBUS_DEV_NUM_OFFSET) &
+ PCIE_CFG_PBUS_DEV_NUM_MASK;
+
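+ /*
+ * Advertise this endpoint to the xlink core: encode the interface
+ * type, PCI bus/device ids, device type, PCIe instance and function
+ * into the xlink software device id.
+ */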
+ xlink_sw_id = FIELD_PREP(XLINK_DEV_INF_TYPE_MASK,
+ XLINK_DEV_INF_PCIE) |
+ FIELD_PREP(XLINK_DEV_PHYS_ID_MASK,
+ bus_num << 8 | dev_num) |
+ FIELD_PREP(XLINK_DEV_TYPE_MASK, XLINK_DEV_TYPE_KMB) |
+ FIELD_PREP(XLINK_DEV_PCIE_ID_MASK, XLINK_DEV_PCIE_0) |
+ FIELD_PREP(XLINK_DEV_FUNC_MASK, XLINK_DEV_FUNC_VPU);
+
+ ret = intel_xpcie_core_init(&xpcie_epf->xpcie);
+ if (ret) {
+ dev_err(&epf->dev, "Core component configuration failed\n");
+ goto err_uninit_dma;
+ }
+
+ intel_xpcie_iowrite32(XPCIE_STATUS_UNINIT,
+ xpcie_epf->xpcie.mmio + XPCIE_MMIO_HOST_STATUS);
+ intel_xpcie_set_device_status(&xpcie_epf->xpcie, XPCIE_STATUS_RUN);
+ intel_xpcie_set_doorbell(&xpcie_epf->xpcie, FROM_DEVICE,
+ DEV_EVENT, NO_OP);
+ memcpy(xpcie_epf->xpcie.mmio + XPCIE_MMIO_MAGIC_OFF, XPCIE_MAGIC_YOCTO,
+ strlen(XPCIE_MAGIC_YOCTO));
+
return 0;

-err_free_err_irq:
- free_irq(xpcie_epf->irq_err, &xpcie_epf->xpcie);
+err_uninit_dma:
+ intel_xpcie_set_device_status(&xpcie_epf->xpcie, XPCIE_STATUS_ERROR);
+ memcpy(xpcie_epf->xpcie.mmio + XPCIE_MMIO_MAGIC_OFF, XPCIE_MAGIC_YOCTO,
+ strlen(XPCIE_MAGIC_YOCTO));
+
+ intel_xpcie_ep_dma_uninit(epf);

err_cleanup_bars:
intel_xpcie_cleanup_bars(epf);
@@ -346,8 +439,12 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)

static void intel_xpcie_epf_unbind(struct pci_epf *epf)
{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
struct pci_epc *epc = epf->epc;

+ intel_xpcie_core_cleanup(&xpcie_epf->xpcie);
+ intel_xpcie_set_device_status(&xpcie_epf->xpcie, XPCIE_STATUS_READY);
+
intel_xpcie_ep_dma_uninit(epf);

pci_epc_stop(epc);
@@ -379,8 +476,11 @@ static void intel_xpcie_epf_shutdown(struct device *dev)
xpcie_epf = epf_get_drvdata(epf);

/* Notify host in case PCIe hot plug not supported */
- if (xpcie_epf)
+ if (xpcie_epf && xpcie_epf->xpcie.status == XPCIE_STATUS_RUN) {
+ intel_xpcie_set_doorbell(&xpcie_epf->xpcie, FROM_DEVICE,
+ DEV_EVENT, DEV_SHUTDOWN);
pci_epc_raise_irq(epf->epc, epf->func_no, PCI_EPC_IRQ_MSI, 1);
+ }
}

static struct pci_epf_ops ops = {
diff --git a/drivers/misc/xlink-pcie/local_host/epf.h b/drivers/misc/xlink-pcie/local_host/epf.h
index 6ce5260e67be..ca01e17c5107 100644
--- a/drivers/misc/xlink-pcie/local_host/epf.h
+++ b/drivers/misc/xlink-pcie/local_host/epf.h
@@ -14,6 +14,7 @@
#include <linux/pci-epf.h>

#include "xpcie.h"
+#include "util.h"

#define XPCIE_DRIVER_NAME "mxlk_pcie_epf"
#define XPCIE_DRIVER_DESC "Intel(R) xLink PCIe endpoint function driver"
@@ -26,6 +27,7 @@
#define XPCIE_NUM_RX_DESCS (64)

extern bool dma_ll_mode;
+extern u32 xlink_sw_id;

struct xpcie_dma_ll_desc {
u32 dma_ch_control1;
@@ -67,14 +69,35 @@ struct xpcie_epf {
void __iomem *dbi_base;
char stepping[KEEMBAY_XPCIE_STEPPING_MAXLEN];

+ irq_handler_t core_irq_callback;
+ dma_addr_t tx_phys;
+ void *tx_virt;
+ size_t tx_size;
+
struct xpcie_dma_ll_desc_buf tx_desc_buf[DMA_CHAN_NUM];
struct xpcie_dma_ll_desc_buf rx_desc_buf[DMA_CHAN_NUM];
};

+static inline struct device *xpcie_to_dev(struct xpcie *xpcie)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+
+ return &xpcie_epf->epf->dev;
+}
+
int intel_xpcie_ep_dma_init(struct pci_epf *epf);
int intel_xpcie_ep_dma_uninit(struct pci_epf *epf);
int intel_xpcie_ep_dma_reset(struct pci_epf *epf);
int intel_xpcie_ep_dma_read_ll(struct pci_epf *epf, int chan, int descs_num);
int intel_xpcie_ep_dma_write_ll(struct pci_epf *epf, int chan, int descs_num);

+void intel_xpcie_register_host_irq(struct xpcie *xpcie,
+ irq_handler_t func);
+int intel_xpcie_raise_irq(struct xpcie *xpcie,
+ enum xpcie_doorbell_type type);
+int intel_xpcie_copy_from_host_ll(struct xpcie *xpcie,
+ int chan, int descs_num);
+int intel_xpcie_copy_to_host_ll(struct xpcie *xpcie,
+ int chan, int descs_num);
#endif /* XPCIE_EPF_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/local_host/util.c b/drivers/misc/xlink-pcie/local_host/util.c
new file mode 100644
index 000000000000..ec808b0cd72b
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/util.c
@@ -0,0 +1,375 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#include "util.h"
+
+void intel_xpcie_set_device_status(struct xpcie *xpcie, u32 status)
+{
+ xpcie->status = status;
+ intel_xpcie_iowrite32(status, xpcie->mmio + XPCIE_MMIO_DEV_STATUS);
+}
+
+u32 intel_xpcie_get_device_status(struct xpcie *xpcie)
+{
+ return intel_xpcie_ioread32(xpcie->mmio + XPCIE_MMIO_DEV_STATUS);
+}
+
+static size_t intel_xpcie_doorbell_offset(struct xpcie *xpcie,
+ enum xpcie_doorbell_direction dirt,
+ enum xpcie_doorbell_type type)
+{
+ if (dirt == TO_DEVICE && type == DATA_SENT)
+ return XPCIE_MMIO_HTOD_TX_DOORBELL;
+ if (dirt == TO_DEVICE && type == DATA_RECEIVED)
+ return XPCIE_MMIO_HTOD_RX_DOORBELL;
+ if (dirt == TO_DEVICE && type == DEV_EVENT)
+ return XPCIE_MMIO_HTOD_EVENT_DOORBELL;
+ if (dirt == FROM_DEVICE && type == DATA_SENT)
+ return XPCIE_MMIO_DTOH_TX_DOORBELL;
+ if (dirt == FROM_DEVICE && type == DATA_RECEIVED)
+ return XPCIE_MMIO_DTOH_RX_DOORBELL;
+ if (dirt == FROM_DEVICE && type == DEV_EVENT)
+ return XPCIE_MMIO_DTOH_EVENT_DOORBELL;
+
+ return 0;
+}
+
+void intel_xpcie_set_doorbell(struct xpcie *xpcie,
+ enum xpcie_doorbell_direction dirt,
+ enum xpcie_doorbell_type type, u8 value)
+{
+ size_t offset = intel_xpcie_doorbell_offset(xpcie, dirt, type);
+
+ intel_xpcie_iowrite8(value, xpcie->mmio + offset);
+}
+
+u8 intel_xpcie_get_doorbell(struct xpcie *xpcie,
+ enum xpcie_doorbell_direction dirt,
+ enum xpcie_doorbell_type type)
+{
+ size_t offset = intel_xpcie_doorbell_offset(xpcie, dirt, type);
+
+ return intel_xpcie_ioread8(xpcie->mmio + offset);
+}
+
+u32 intel_xpcie_get_host_status(struct xpcie *xpcie)
+{
+ return intel_xpcie_ioread32(xpcie->mmio + XPCIE_MMIO_HOST_STATUS);
+}
+
+void intel_xpcie_set_host_status(struct xpcie *xpcie, u32 status)
+{
+ xpcie->status = status;
+ intel_xpcie_iowrite32(status, xpcie->mmio + XPCIE_MMIO_HOST_STATUS);
+}
+
+struct xpcie_buf_desc *intel_xpcie_alloc_bd(size_t length)
+{
+ struct xpcie_buf_desc *bd;
+
+ bd = kzalloc(sizeof(*bd), GFP_KERNEL);
+ if (!bd)
+ return NULL;
+
+ bd->head = kzalloc(roundup(length, cache_line_size()), GFP_KERNEL);
+ if (!bd->head) {
+ kfree(bd);
+ return NULL;
+ }
+
+ bd->data = bd->head;
+ bd->length = length;
+ bd->true_len = length;
+ bd->next = NULL;
+ bd->own_mem = true;
+
+ return bd;
+}
+
+struct xpcie_buf_desc *intel_xpcie_alloc_bd_reuse(size_t length, void *virt,
+ dma_addr_t phys)
+{
+ struct xpcie_buf_desc *bd;
+
+ bd = kzalloc(sizeof(*bd), GFP_KERNEL);
+ if (!bd)
+ return NULL;
+
+ bd->head = virt;
+ bd->phys = phys;
+ bd->data = bd->head;
+ bd->length = length;
+ bd->true_len = length;
+ bd->next = NULL;
+ bd->own_mem = false;
+
+ return bd;
+}
+
+void intel_xpcie_free_bd(struct xpcie_buf_desc *bd)
+{
+ if (bd) {
+ if (bd->own_mem)
+ kfree(bd->head);
+ kfree(bd);
+ }
+}
+
+int intel_xpcie_list_init(struct xpcie_list *list)
+{
+ spin_lock_init(&list->lock);
+ list->bytes = 0;
+ list->buffers = 0;
+ list->head = NULL;
+ list->tail = NULL;
+
+ return 0;
+}
+
+void intel_xpcie_list_cleanup(struct xpcie_list *list)
+{
+ struct xpcie_buf_desc *bd;
+
+ spin_lock(&list->lock);
+ while (list->head) {
+ bd = list->head;
+ list->head = bd->next;
+ intel_xpcie_free_bd(bd);
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+ spin_unlock(&list->lock);
+}
+
+int intel_xpcie_list_put(struct xpcie_list *list, struct xpcie_buf_desc *bd)
+{
+ if (!bd)
+ return -EINVAL;
+
+ spin_lock(&list->lock);
+ if (list->head)
+ list->tail->next = bd;
+ else
+ list->head = bd;
+
+ while (bd) {
+ list->tail = bd;
+ list->bytes += bd->length;
+ list->buffers++;
+ bd = bd->next;
+ }
+ spin_unlock(&list->lock);
+
+ return 0;
+}
+
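+/*
+ * Push a chain of buffer descriptors back onto the front of the list,
+ * preserving order; used to requeue descriptors, e.g. after a failed
+ * DMA transfer, so they are retried first.
+ */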
+int intel_xpcie_list_put_head(struct xpcie_list *list,
+ struct xpcie_buf_desc *bd)
+{
+ struct xpcie_buf_desc *old_head;
+
+ if (!bd)
+ return -EINVAL;
+
+ spin_lock(&list->lock);
+ old_head = list->head;
+ list->head = bd;
+ while (bd) {
+ list->bytes += bd->length;
+ list->buffers++;
+ if (!bd->next) {
+ list->tail = list->tail ? list->tail : bd;
+ bd->next = old_head;
+ break;
+ }
+ bd = bd->next;
+ }
+ spin_unlock(&list->lock);
+
+ return 0;
+}
+
+struct xpcie_buf_desc *intel_xpcie_list_get(struct xpcie_list *list)
+{
+ struct xpcie_buf_desc *bd;
+
+ spin_lock(&list->lock);
+ bd = list->head;
+ if (list->head) {
+ list->head = list->head->next;
+ if (!list->head)
+ list->tail = NULL;
+ bd->next = NULL;
+ list->bytes -= bd->length;
+ list->buffers--;
+ }
+ spin_unlock(&list->lock);
+
+ return bd;
+}
+
+void intel_xpcie_list_info(struct xpcie_list *list,
+ size_t *bytes, size_t *buffers)
+{
+ spin_lock(&list->lock);
+ *bytes = list->bytes;
+ *buffers = list->buffers;
+ spin_unlock(&list->lock);
+}
+
+struct xpcie_buf_desc *intel_xpcie_alloc_rx_bd(struct xpcie *xpcie)
+{
+ struct xpcie_buf_desc *bd;
+
+ bd = intel_xpcie_list_get(&xpcie->rx_pool);
+ if (bd) {
+ bd->data = bd->head;
+ bd->length = bd->true_len;
+ bd->next = NULL;
+ bd->interface = 0;
+ }
+
+ return bd;
+}
+
+void intel_xpcie_free_rx_bd(struct xpcie *xpcie, struct xpcie_buf_desc *bd)
+{
+ if (bd)
+ intel_xpcie_list_put(&xpcie->rx_pool, bd);
+}
+
+struct xpcie_buf_desc *intel_xpcie_alloc_tx_bd(struct xpcie *xpcie)
+{
+ struct xpcie_buf_desc *bd;
+
+ bd = intel_xpcie_list_get(&xpcie->tx_pool);
+ if (bd) {
+ bd->data = bd->head;
+ bd->length = bd->true_len;
+ bd->next = NULL;
+ bd->interface = 0;
+ } else {
+ xpcie->no_tx_buffer = true;
+ }
+
+ return bd;
+}
+
+void intel_xpcie_free_tx_bd(struct xpcie *xpcie, struct xpcie_buf_desc *bd)
+{
+ if (!bd)
+ return;
+
+ intel_xpcie_list_put(&xpcie->tx_pool, bd);
+
+ xpcie->no_tx_buffer = false;
+ wake_up_interruptible(&xpcie->tx_waitq);
+}
+
+int intel_xpcie_interface_init(struct xpcie *xpcie, int id)
+{
+ struct xpcie_interface *inf = xpcie->interfaces + id;
+
+ inf->id = id;
+ inf->xpcie = xpcie;
+
+ inf->partial_read = NULL;
+ intel_xpcie_list_init(&inf->read);
+ mutex_init(&inf->rlock);
+ inf->data_avail = false;
+ init_waitqueue_head(&inf->rx_waitq);
+
+ return 0;
+}
+
+void intel_xpcie_interface_cleanup(struct xpcie_interface *inf)
+{
+ struct xpcie_buf_desc *bd;
+
+ intel_xpcie_free_rx_bd(inf->xpcie, inf->partial_read);
+ while ((bd = intel_xpcie_list_get(&inf->read)))
+ intel_xpcie_free_rx_bd(inf->xpcie, bd);
+
+ mutex_destroy(&inf->rlock);
+}
+
+void intel_xpcie_interfaces_cleanup(struct xpcie *xpcie)
+{
+ int index;
+
+ for (index = 0; index < XPCIE_NUM_INTERFACES; index++)
+ intel_xpcie_interface_cleanup(xpcie->interfaces + index);
+
+ intel_xpcie_list_cleanup(&xpcie->write);
+ mutex_destroy(&xpcie->wlock);
+}
+
+int intel_xpcie_interfaces_init(struct xpcie *xpcie)
+{
+ int index;
+
+ mutex_init(&xpcie->wlock);
+ intel_xpcie_list_init(&xpcie->write);
+ init_waitqueue_head(&xpcie->tx_waitq);
+ xpcie->no_tx_buffer = false;
+
+ for (index = 0; index < XPCIE_NUM_INTERFACES; index++)
+ intel_xpcie_interface_init(xpcie, index);
+
+ return 0;
+}
+
+void intel_xpcie_add_bd_to_interface(struct xpcie *xpcie,
+ struct xpcie_buf_desc *bd)
+{
+ struct xpcie_interface *inf;
+
+ inf = xpcie->interfaces + bd->interface;
+
+ intel_xpcie_list_put(&inf->read, bd);
+
+ mutex_lock(&inf->rlock);
+ inf->data_avail = true;
+ mutex_unlock(&inf->rlock);
+ wake_up_interruptible(&inf->rx_waitq);
+}
+
+void *intel_xpcie_cap_find(struct xpcie *xpcie, u32 start, u16 id)
+{
+ int ttl = XPCIE_CAP_TTL;
+ void *hdr;
+ u16 id_out, next;
+
+ /* If user didn't specify start, assume start of mmio */
+ if (!start)
+ start = intel_xpcie_ioread32(xpcie->mmio + XPCIE_MMIO_CAP_OFF);
+
+ /* Read header info */
+ hdr = xpcie->mmio + start;
+
+ /* Check if we still have time to live */
+ while (ttl--) {
+ id_out = intel_xpcie_ioread16(hdr + XPCIE_CAP_HDR_ID);
+ next = intel_xpcie_ioread16(hdr + XPCIE_CAP_HDR_NEXT);
+
+ /* If cap matches, return header */
+ if (id_out == id)
+ return hdr;
+ /* If cap is NULL, we are at the end of the list */
+ else if (id_out == XPCIE_CAP_NULL)
+ return NULL;
+ /* If no match and no end of list, traverse the linked list */
+ else
+ hdr = xpcie->mmio + next;
+ }
+
+ /* If we reached here, the capability list is corrupted */
+ return NULL;
+}
diff --git a/drivers/misc/xlink-pcie/local_host/util.h b/drivers/misc/xlink-pcie/local_host/util.h
new file mode 100644
index 000000000000..908be897a61d
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/util.h
@@ -0,0 +1,70 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef XPCIE_UTIL_HEADER_
+#define XPCIE_UTIL_HEADER_
+
+#include "xpcie.h"
+
+enum xpcie_doorbell_direction {
+ TO_DEVICE,
+ FROM_DEVICE
+};
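+
+/*
+ * TO_DEVICE doorbells are written by the host and polled by the device;
+ * FROM_DEVICE doorbells are written by the device just before raising
+ * an MSI towards the host.
+ */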
+
+enum xpcie_doorbell_type {
+ DATA_SENT,
+ DATA_RECEIVED,
+ DEV_EVENT
+};
+
+enum xpcie_event_type {
+ NO_OP,
+ REQUEST_RESET,
+ DEV_SHUTDOWN
+};
+
+void intel_xpcie_set_doorbell(struct xpcie *xpcie,
+ enum xpcie_doorbell_direction dirt,
+ enum xpcie_doorbell_type type, u8 value);
+u8 intel_xpcie_get_doorbell(struct xpcie *xpcie,
+ enum xpcie_doorbell_direction dirt,
+ enum xpcie_doorbell_type type);
+
+void intel_xpcie_set_device_status(struct xpcie *xpcie, u32 status);
+u32 intel_xpcie_get_device_status(struct xpcie *xpcie);
+u32 intel_xpcie_get_host_status(struct xpcie *xpcie);
+void intel_xpcie_set_host_status(struct xpcie *xpcie, u32 status);
+
+struct xpcie_buf_desc *intel_xpcie_alloc_bd(size_t length);
+struct xpcie_buf_desc *intel_xpcie_alloc_bd_reuse(size_t length, void *virt,
+ dma_addr_t phys);
+void intel_xpcie_free_bd(struct xpcie_buf_desc *bd);
+
+int intel_xpcie_list_init(struct xpcie_list *list);
+void intel_xpcie_list_cleanup(struct xpcie_list *list);
+int intel_xpcie_list_put(struct xpcie_list *list, struct xpcie_buf_desc *bd);
+int intel_xpcie_list_put_head(struct xpcie_list *list,
+ struct xpcie_buf_desc *bd);
+struct xpcie_buf_desc *intel_xpcie_list_get(struct xpcie_list *list);
+void intel_xpcie_list_info(struct xpcie_list *list, size_t *bytes,
+ size_t *buffers);
+
+struct xpcie_buf_desc *intel_xpcie_alloc_rx_bd(struct xpcie *xpcie);
+void intel_xpcie_free_rx_bd(struct xpcie *xpcie, struct xpcie_buf_desc *bd);
+struct xpcie_buf_desc *intel_xpcie_alloc_tx_bd(struct xpcie *xpcie);
+void intel_xpcie_free_tx_bd(struct xpcie *xpcie, struct xpcie_buf_desc *bd);
+
+int intel_xpcie_interface_init(struct xpcie *xpcie, int id);
+void intel_xpcie_interface_cleanup(struct xpcie_interface *inf);
+void intel_xpcie_interfaces_cleanup(struct xpcie *xpcie);
+int intel_xpcie_interfaces_init(struct xpcie *xpcie);
+void intel_xpcie_add_bd_to_interface(struct xpcie *xpcie,
+ struct xpcie_buf_desc *bd);
+void *intel_xpcie_cap_find(struct xpcie *xpcie, u32 start, u16 id);
+#endif /* XPCIE_UTIL_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/local_host/xpcie.h b/drivers/misc/xlink-pcie/local_host/xpcie.h
index 0745e6dfee10..8a559617daba 100644
--- a/drivers/misc/xlink-pcie/local_host/xpcie.h
+++ b/drivers/misc/xlink-pcie/local_host/xpcie.h
@@ -14,6 +14,8 @@
#include <linux/module.h>
#include <linux/pci_ids.h>

+#include "core.h"
+
#ifndef PCI_DEVICE_ID_INTEL_KEEMBAY
#define PCI_DEVICE_ID_INTEL_KEEMBAY 0x6240
#endif
@@ -21,18 +23,79 @@
#define XPCIE_IO_COMM_SIZE SZ_16K
#define XPCIE_MMIO_OFFSET SZ_4K

+/* Status encoding of both device and host */
+#define XPCIE_STATUS_ERROR (0xFFFFFFFF)
+#define XPCIE_STATUS_UNINIT (0)
+#define XPCIE_STATUS_READY (1)
+#define XPCIE_STATUS_RECOVERY (2)
+#define XPCIE_STATUS_OFF (3)
+#define XPCIE_STATUS_RUN (4)
+
+#define XPCIE_MAGIC_STRLEN (16)
+#define XPCIE_MAGIC_YOCTO "VPUYOCTO"
+
/* MMIO layout and offsets shared between device and host */
struct xpcie_mmio {
+ u32 device_status;
+ u32 host_status;
u8 legacy_a0;
+ u8 htod_tx_doorbell;
+ u8 htod_rx_doorbell;
+ u8 htod_event_doorbell;
+ u8 dtoh_tx_doorbell;
+ u8 dtoh_rx_doorbell;
+ u8 dtoh_event_doorbell;
+ u8 reserved;
+ u32 cap_offset;
+ u8 magic[XPCIE_MAGIC_STRLEN];
} __packed;

+#define XPCIE_MMIO_DEV_STATUS (offsetof(struct xpcie_mmio, device_status))
+#define XPCIE_MMIO_HOST_STATUS (offsetof(struct xpcie_mmio, host_status))
#define XPCIE_MMIO_LEGACY_A0 (offsetof(struct xpcie_mmio, legacy_a0))
+#define XPCIE_MMIO_HTOD_TX_DOORBELL \
+ (offsetof(struct xpcie_mmio, htod_tx_doorbell))
+#define XPCIE_MMIO_HTOD_RX_DOORBELL \
+ (offsetof(struct xpcie_mmio, htod_rx_doorbell))
+#define XPCIE_MMIO_HTOD_EVENT_DOORBELL \
+ (offsetof(struct xpcie_mmio, htod_event_doorbell))
+#define XPCIE_MMIO_DTOH_TX_DOORBELL \
+ (offsetof(struct xpcie_mmio, dtoh_tx_doorbell))
+#define XPCIE_MMIO_DTOH_RX_DOORBELL \
+ (offsetof(struct xpcie_mmio, dtoh_rx_doorbell))
+#define XPCIE_MMIO_DTOH_EVENT_DOORBELL \
+ (offsetof(struct xpcie_mmio, dtoh_event_doorbell))
+#define XPCIE_MMIO_CAP_OFF (offsetof(struct xpcie_mmio, cap_offset))
+#define XPCIE_MMIO_MAGIC_OFF (offsetof(struct xpcie_mmio, magic))

struct xpcie {
u32 status;
bool legacy_a0;
void *mmio;
void *bar4;
+
+ struct workqueue_struct *rx_wq;
+ struct workqueue_struct *tx_wq;
+
+ struct xpcie_interface interfaces[XPCIE_NUM_INTERFACES];
+
+ size_t fragment_size;
+ struct xpcie_cap_txrx *txrx;
+ struct xpcie_stream tx;
+ struct xpcie_stream rx;
+
+ struct mutex wlock; /* write lock */
+ struct xpcie_list write;
+ bool no_tx_buffer;
+ wait_queue_head_t tx_waitq;
+ bool tx_pending;
+ bool stop_flag;
+
+ struct xpcie_list rx_pool;
+ struct xpcie_list tx_pool;
+
+ struct delayed_work rx_event;
+ struct delayed_work tx_event;
};

#endif /* XPCIE_HEADER_ */
diff --git a/include/linux/xlink_drv_inf.h b/include/linux/xlink_drv_inf.h
new file mode 100644
index 000000000000..ffe8f4c253e6
--- /dev/null
+++ b/include/linux/xlink_drv_inf.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef _XLINK_DRV_INF_H_
+#define _XLINK_DRV_INF_H_
+
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+#include <linux/types.h>
+
+#define XLINK_DEV_INF_TYPE_MASK GENMASK(27, 24)
+#define XLINK_DEV_PHYS_ID_MASK GENMASK(23, 8)
+#define XLINK_DEV_TYPE_MASK GENMASK(6, 4)
+#define XLINK_DEV_PCIE_ID_MASK GENMASK(3, 1)
+#define XLINK_DEV_FUNC_MASK GENMASK(0, 0)
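+
+/*
+ * Example layout, as composed by the Keem Bay EPF driver with FIELD_PREP:
+ * bits 27:24 interface type, bits 23:8 physical id (bus << 8 | device),
+ * bits 6:4 device type, bits 3:1 PCIe instance, bit 0 function.
+ */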
+
+enum xlink_device_inf_type {
+ XLINK_DEV_INF_PCIE = 1,
+};
+
+enum xlink_device_type {
+ XLINK_DEV_TYPE_KMB = 0,
+};
+
+enum xlink_device_pcie {
+ XLINK_DEV_PCIE_0 = 0,
+};
+
+enum xlink_device_func {
+ XLINK_DEV_FUNC_VPU = 0,
+};
+
+enum _xlink_device_status {
+ _XLINK_DEV_OFF,
+ _XLINK_DEV_ERROR,
+ _XLINK_DEV_BUSY,
+ _XLINK_DEV_RECOVERY,
+ _XLINK_DEV_READY
+};
+
+int xlink_pcie_get_device_list(u32 *sw_device_id_list,
+ u32 *num_devices);
+int xlink_pcie_get_device_name(u32 sw_device_id, char *device_name,
+ size_t name_size);
+int xlink_pcie_get_device_status(u32 sw_device_id,
+ u32 *device_status);
+int xlink_pcie_boot_device(u32 sw_device_id, const char *binary_name);
+int xlink_pcie_connect(u32 sw_device_id);
+int xlink_pcie_read(u32 sw_device_id, void *data, size_t *const size,
+ u32 timeout);
+int xlink_pcie_write(u32 sw_device_id, void *data, size_t *const size,
+ u32 timeout);
+int xlink_pcie_reset_device(u32 sw_device_id);
+#endif
--
2.17.1

2021-01-08 21:32:44

by mark gross

[permalink] [raw]
Subject: [PATCH v2 29/34] Intel tsens i2c slave driver.

From: "C, Udhayakumar" <[email protected]>

Add Intel tsens i2c slave driver for Intel Edge.AI Computer Vision
platforms.

The tsens i2c slave driver enables reading of the on-chip sensors present
in the Intel Edge.AI Computer Vision platforms. In the tsens i2c module,
various junction and SoC temperatures are reported using the i2c slave
protocol.

Signed-off-by: C, Udhayakumar <[email protected]>
---
drivers/misc/intel_tsens/Kconfig | 14 +++
drivers/misc/intel_tsens/Makefile | 1 +
drivers/misc/intel_tsens/intel_tsens_i2c.c | 119 +++++++++++++++++++++
3 files changed, 134 insertions(+)
create mode 100644 drivers/misc/intel_tsens/intel_tsens_i2c.c

diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
index 8b263fdd80c3..c2138339bd89 100644
--- a/drivers/misc/intel_tsens/Kconfig
+++ b/drivers/misc/intel_tsens/Kconfig
@@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
Say Y if using a processor that includes the Intel VPU such as
Keem Bay. If unsure, say N.

+config INTEL_TSENS_I2C_SLAVE
+ bool "I2C slave driver for intel tsens"
+ depends on INTEL_TSENS_LOCAL_HOST
+ select I2C
+ select I2C_SLAVE
+ help
+ This option enables the tsens i2c slave driver.
+
+ This driver is used for reporting thermal data via I2C
+ SMBus to a remote host.
+ Enable this option if you want to have support for the
+ thermal management controller.
+ Say Y if using a processor that includes the Intel VPU such as
+ Keem Bay. If unsure, say N.
+
config INTEL_TSENS_IA_HOST
tristate "Temperature sensor driver for intel tsens remote host"
depends on I2C && THERMAL
diff --git a/drivers/misc/intel_tsens/Makefile b/drivers/misc/intel_tsens/Makefile
index 250dc484fb49..f6f41bbca80c 100644
--- a/drivers/misc/intel_tsens/Makefile
+++ b/drivers/misc/intel_tsens/Makefile
@@ -5,4 +5,5 @@
#

obj-$(CONFIG_INTEL_TSENS_LOCAL_HOST) += intel_tsens_thermal.o
+obj-$(CONFIG_INTEL_TSENS_I2C_SLAVE) += intel_tsens_i2c.o
obj-$(CONFIG_INTEL_TSENS_IA_HOST) += intel_tsens_host.o
diff --git a/drivers/misc/intel_tsens/intel_tsens_i2c.c b/drivers/misc/intel_tsens/intel_tsens_i2c.c
new file mode 100644
index 000000000000..520c3f4bf392
--- /dev/null
+++ b/drivers/misc/intel_tsens/intel_tsens_i2c.c
@@ -0,0 +1,119 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * Intel tsens I2C thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <linux/i2c.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include "intel_tsens_thermal.h"
+
+#define TSENS_BYTE_INDEX_SHIFT 0x6
+#define TSENS_BYTE_INDEX_MASK 0x3
+#define TSENS_SENSOR_TYPE_MASK 0x3F
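+
+/*
+ * The single command byte written by the remote host encodes both the
+ * sensor id (bits 5:0) and which byte of the 32-bit temperature value
+ * to return on the subsequent read (bits 7:6).
+ */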
+
+struct intel_tsens_i2c {
+ int sensor_type;
+ u16 buffer_idx;
+ bool read_only;
+ u8 idx_write_cnt;
+ struct intel_tsens_i2c_plat_data *plat_data;
+};
+
+static int intel_i2c_tsens_slave_cb(struct i2c_client *client,
+ enum i2c_slave_event event, u8 *val)
+{
+ struct intel_tsens_i2c *tsens_i2c = i2c_get_clientdata(client);
+ struct intel_tsens_i2c_plat_data *plat_data = tsens_i2c->plat_data;
+ int ret = 0;
+
+ switch (event) {
+ case I2C_SLAVE_WRITE_RECEIVED:
+ tsens_i2c->sensor_type = *val;
+ break;
+
+ case I2C_SLAVE_READ_PROCESSED:
+ case I2C_SLAVE_READ_REQUESTED:
+ if (plat_data->get_temp) {
+ int temp;
+ int sensor_type = tsens_i2c->sensor_type &
+ TSENS_SENSOR_TYPE_MASK;
+
+ if (!plat_data->get_temp(sensor_type, &temp,
+ plat_data->pdata)) {
+ u8 offset = (tsens_i2c->sensor_type >>
+ TSENS_BYTE_INDEX_SHIFT) &
+ TSENS_BYTE_INDEX_MASK;
+ u8 *ptr_temp = (u8 *)&temp;
+
+ *val = ptr_temp[offset];
+ tsens_i2c->buffer_idx++;
+ ret = 0;
+ } else {
+ ret = -EINVAL;
+ }
+ } else {
+ ret = -EINVAL;
+ }
+ break;
+
+ case I2C_SLAVE_STOP:
+ case I2C_SLAVE_WRITE_REQUESTED:
+ tsens_i2c->idx_write_cnt = 0;
+ tsens_i2c->buffer_idx = 0;
+ break;
+
+ default:
+ break;
+ }
+ return ret;
+}
+
+static int intel_i2c_tsens_slave_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct intel_tsens_i2c *priv;
+ int ret;
+
+ if (!id->driver_data) {
+ dev_err(&client->dev, "No platform data\n");
+ return -EINVAL;
+ }
+ priv = devm_kzalloc(&client->dev,
+ sizeof(struct intel_tsens_i2c),
+ GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+ priv->plat_data = (struct intel_tsens_i2c_plat_data *)id->driver_data;
+ i2c_set_clientdata(client, priv);
+ ret = i2c_slave_register(client, intel_i2c_tsens_slave_cb);
+ if (ret)
+ dev_err(&client->dev, "i2c slave register failed\n");
+
+ return ret;
+}
+
+static struct i2c_device_id intel_i2c_tsens_slave_id[] = {
+ { "intel_tsens", (kernel_ulong_t)&i2c_plat_data},
+ {}
+};
+MODULE_DEVICE_TABLE(i2c, intel_i2c_tsens_slave_id);
+
+static struct i2c_driver intel_i2c_tsens_slave_driver = {
+ .driver = {
+ .name = "intel_tsens",
+ },
+ .probe = intel_i2c_tsens_slave_probe,
+ .remove = i2c_slave_unregister,
+ .id_table = intel_i2c_tsens_slave_id,
+};
+
+module_i2c_driver(intel_i2c_tsens_slave_driver);
+
+MODULE_AUTHOR("Udhayakumar C <[email protected]>");
+MODULE_DESCRIPTION("tsens i2c slave driver");
+MODULE_LICENSE("GPL");
--
2.17.1

2021-01-08 21:33:04

by mark gross

[permalink] [raw]
Subject: [PATCH v2 24/34] dt-bindings: misc: Add Keem Bay vpumgr

From: "Li, Tingqian" <[email protected]>

Add DT binding schema for the VPU manager on the Keem Bay SoC platform.

Signed-off-by: Li, Tingqian <[email protected]>
---
.../bindings/misc/intel,keembay-vpu-mgr.yaml | 48 +++++++++++++++++++
1 file changed, 48 insertions(+)
create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-vpu-mgr.yaml

diff --git a/Documentation/devicetree/bindings/misc/intel,keembay-vpu-mgr.yaml b/Documentation/devicetree/bindings/misc/intel,keembay-vpu-mgr.yaml
new file mode 100644
index 000000000000..7fad14274ee2
--- /dev/null
+++ b/Documentation/devicetree/bindings/misc/intel,keembay-vpu-mgr.yaml
@@ -0,0 +1,48 @@
+# SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause
+# Copyright (C) 2020 Intel
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/misc/intel,keembay-vpu-mgr.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Intel VPU manager bindings
+
+maintainers:
+ - Li, Tingqian <[email protected]>
+ - Zhou, Luwei <[email protected]>
+
+description: |
+ The Intel VPU manager provides shared memory and process
+ dependent context management for the Intel VPU hardware IP.
+
+properties:
+ compatible:
+ items:
+ - enum:
+ - intel,keembay-vpu-mgr
+ - intel,keembay-vpusmm
+
+ memory-region:
+ description:
+ phandle to a node describing reserved memory (System RAM memory)
+ used by the VPU (see bindings/reserved-memory/reserved-memory.txt)
+ maxItems: 1
+
+ intel,keembay-vpu-ipc-id:
+ $ref: /schemas/types.yaml#/definitions/uint32
+ description:
+ the index of the VPU slice to be managed. Default is 0.
+
+required:
+ - compatible
+ - memory-region
+
+additionalProperties: false
+
+examples:
+ - |
+ vpumgr0 {
+ compatible = "intel,keembay-vpu-mgr";
+ memory-region = <&vpu_reserved>;
+ intel,keembay-vpu-ipc-id = <0x0>;
+ };
--
2.17.1

2021-01-08 21:33:05

by mark gross

[permalink] [raw]
Subject: [PATCH v2 20/34] xlink-core: Add xlink core driver xLink

From: Seamus Kelly <[email protected]>

Add xLink driver, which provides an abstracted control and communication
subsystem based on channel identification.
It is intended to support VPU technology both at SoC level and at
IP level, over multiple interfaces. This initial patch enables local host
user mode to open/close/read/write via IOCTLs.

Specifically, the driver enables an application/process to:

* Access a common xLink API across all interfaces from both kernel and
user space.
* Call typical API types (open, close, read, write) that you would
associate with a communication interface.
* Call other APIs that are related to other functions that the
device can perform, e.g. boot, reset, get/set device mode. Device mode
refers to the power load of the VPU and an API can be used to read and
control it.
* Use multiple communication channels that the driver manages from one
interface to another, providing routing of data through these multiple
channels across a single physical interface.

xLink: Add xLink Core device tree bindings

Add device tree bindings for the xLink Core driver which enables xLink
to control and communicate with the VPU IP present on the Intel Keem Bay
SoC.

Cc: Jonathan Corbet <[email protected]>
Cc: Derek Kiernan <[email protected]>
Cc: Dragan Cvetic <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: [email protected]
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Seamus Kelly <[email protected]>
---
Documentation/vpu/index.rst | 1 +
Documentation/vpu/xlink-core.rst | 81 +++
MAINTAINERS | 12 +
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/xlink-core/Kconfig | 33 +
drivers/misc/xlink-core/Makefile | 5 +
drivers/misc/xlink-core/xlink-core.c | 738 ++++++++++++++++++++
drivers/misc/xlink-core/xlink-core.h | 22 +
drivers/misc/xlink-core/xlink-defs.h | 175 +++++
drivers/misc/xlink-core/xlink-ioctl.c | 212 ++++++
drivers/misc/xlink-core/xlink-ioctl.h | 21 +
drivers/misc/xlink-core/xlink-multiplexer.c | 534 ++++++++++++++
drivers/misc/xlink-core/xlink-multiplexer.h | 35 +
drivers/misc/xlink-core/xlink-platform.c | 160 +++++
drivers/misc/xlink-core/xlink-platform.h | 65 ++
include/linux/xlink.h | 108 +++
include/uapi/misc/xlink_uapi.h | 145 ++++
18 files changed, 2349 insertions(+)
create mode 100644 Documentation/vpu/xlink-core.rst
create mode 100644 drivers/misc/xlink-core/Kconfig
create mode 100644 drivers/misc/xlink-core/Makefile
create mode 100644 drivers/misc/xlink-core/xlink-core.c
create mode 100644 drivers/misc/xlink-core/xlink-core.h
create mode 100644 drivers/misc/xlink-core/xlink-defs.h
create mode 100644 drivers/misc/xlink-core/xlink-ioctl.c
create mode 100644 drivers/misc/xlink-core/xlink-ioctl.h
create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.c
create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.h
create mode 100644 drivers/misc/xlink-core/xlink-platform.c
create mode 100644 drivers/misc/xlink-core/xlink-platform.h
create mode 100644 include/linux/xlink.h
create mode 100644 include/uapi/misc/xlink_uapi.h

diff --git a/Documentation/vpu/index.rst b/Documentation/vpu/index.rst
index 49c78bb65b83..cd4272e089ec 100644
--- a/Documentation/vpu/index.rst
+++ b/Documentation/vpu/index.rst
@@ -16,3 +16,4 @@ This documentation contains information for the Intel VPU stack.
vpu-stack-overview
xlink-pcie
xlink-ipc
+ xlink-core
diff --git a/Documentation/vpu/xlink-core.rst b/Documentation/vpu/xlink-core.rst
new file mode 100644
index 000000000000..1c471ec803d3
--- /dev/null
+++ b/Documentation/vpu/xlink-core.rst
@@ -0,0 +1,81 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============================
+xLink-core software subsystem
+=============================
+
+The purpose of the xLink software subsystem is to facilitate communication
+between multiple users on multiple nodes in the system.
+
+There are three types of xLink nodes:
+
+1. Remote Host: this is an external IA/x86 host system that is only capable of
+ communicating directly with the Local Host node on VPU 2.x products.
+2. Local Host: this is the ARM core within the VPU 2.x SoC. The Local Host can
+ communicate upstream to the Remote Host node, and downstream to the VPU IP
+ node.
+3. VPU IP: this is the Leon RT core within the VPU 2.x SoC. The VPU IP can only
+ communicate upstream to the Local Host node.
+
+xLink provides a common API across all interfaces for users to access xLink
+functions and provides user space APIs via an IOCTL interface implemented in
+the xLink core.
+
+xLink manages communications from one interface to another and provides routing
+of data through multiple channels across a single physical interface.
+
+It exposes a common API across all interfaces at both kernel and user levels
+for processes/applications to access.
+
+It has typical API types (open, close, read, write) that you would associate
+with a communication interface.
+
+It also has other APIs that are related to other functions that the device can
+perform, e.g. boot, reset, get/set device mode.
+
+The driver is broken down into four source files.
+
+xlink-core:
+Contains driver initialization, driver API and IOCTL interface (for user
+space).
+
+xlink-multiplexer:
+The Multiplexer component is responsible for securely routing messages through
+multiple communication channels over a single physical interface.
+
+xlink-dispatcher:
+The Dispatcher component is responsible for queueing and handling xLink
+communication requests from all users in the system and invoking the underlying
+platform interface drivers.
+
+xlink-platform:
+Provides abstraction for each supported interface (PCIe, USB, IPC, etc.).
+
+Typical xLink transaction (simplified):
+When a user wants to send data across an interface via xLink, it first calls
+xlink connect, which connects to the relevant interface (PCIe, USB, IPC, etc.),
+and then xlink open channel.
+
+Then it calls the xlink write function; this takes the data and passes it to
+the kernel, which packages up the data and channel and then adds it to a
+transmit queue.
+
+A separate thread reads this transaction queue and pops off data if available
+and passes the data to the underlying interface (e.g. PCIe) write function.
+Using this thread provides serialization of transactions and decouples the user
+write from the platform write.
+
+On the other side of the interface, a thread is continually reading the
+interface (e.g. PCIe) via the platform interface read function and, if it
+reads any data, adds it to the channel packet container.
+
+The application on this side of the interface will have called xlink connect,
+opened the channel and called the xlink read function to read data from the
+interface; if any exists for that channel, the data gets popped from the
+channel packet container and copied from kernel space to the user space buffer
+provided by the call.
+
+xLink can handle API requests from multi-process and multi-threaded
+applications/processes.
+
+xLink maintains 4096 channels per device connected (via xlink connect) and
+maintains a separate channel infrastructure for each device.
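+
+A minimal kernel-space usage sketch (illustrative only; it assumes the
+kernel API exported in include/linux/xlink.h and elides error handling
+and the exact channel open/read/write signatures)::
+
+    struct xlink_handle handle = {
+        /* e.g. obtained from xlink_pcie_get_device_list() */
+        .sw_device_id = sw_device_id,
+    };
+
+    xlink_initialize();
+    xlink_connect(&handle);
+    /* open a channel, read/write on it, then disconnect */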
diff --git a/MAINTAINERS b/MAINTAINERS
index 92693390c59d..e4165c9983cd 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1962,6 +1962,12 @@ F: Documentation/devicetree/bindings/arm/intel,keembay.yaml
F: arch/arm64/boot/dts/intel/keembay-evm.dts
F: arch/arm64/boot/dts/intel/keembay-soc.dtsi

+ARM/INTEL XLINK CORE SUPPORT
+M: Seamus Kelly <[email protected]>
+M: Mark Gross <[email protected]>
+S: Supported
+F: drivers/misc/xlink-core/
+
ARM/INTEL XLINK IPC SUPPORT
M: Seamus Kelly <[email protected]>
M: Mark Gross <[email protected]>
@@ -1974,6 +1980,12 @@ M: Mark Gross <[email protected]>
S: Supported
F: drivers/misc/xlink-pcie/

+ARM/INTEL TSENS SUPPORT
+M: Udhayakumar C <[email protected]>
+S: Supported
+F: drivers/misc/hddl_device/
+F: drivers/misc/intel_tsens/
+
ARM/INTEL RESEARCH IMOTE/STARGATE 2 MACHINE SUPPORT
M: Jonathan Cameron <[email protected]>
L: [email protected] (moderated for non-subscribers)
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 1f81ea915b95..09ae65e98681 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -483,4 +483,5 @@ source "drivers/misc/habanalabs/Kconfig"
source "drivers/misc/uacce/Kconfig"
source "drivers/misc/xlink-pcie/Kconfig"
source "drivers/misc/xlink-ipc/Kconfig"
+source "drivers/misc/xlink-core/Kconfig"
endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index b51495a2f1e0..f3a6eb03bae9 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -59,3 +59,4 @@ obj-$(CONFIG_XILINX_SDFEC) += xilinx_sdfec.o
obj-$(CONFIG_HISI_HIKEY_USB) += hisi_hikey_usb.o
obj-y += xlink-pcie/
obj-$(CONFIG_XLINK_IPC) += xlink-ipc/
+obj-$(CONFIG_XLINK_CORE) += xlink-core/
diff --git a/drivers/misc/xlink-core/Kconfig b/drivers/misc/xlink-core/Kconfig
new file mode 100644
index 000000000000..a0ceb0b48219
--- /dev/null
+++ b/drivers/misc/xlink-core/Kconfig
@@ -0,0 +1,33 @@
+config XLINK_CORE
+ tristate "Support for XLINK CORE"
+ depends on ((XLINK_PCIE_RH_DRIVER || XBAY_XLINK_PCIE_RH_DRIVER) || (XLINK_LOCAL_HOST && (XLINK_PCIE_LH_DRIVER || XBAY_XLINK_PCIE_RH_DRIVER)))
+ help
+ XLINK CORE enables the communication/control subsystem.
+
+ If unsure, say N.
+
+ To compile this driver as a module, choose M here: the
+ module will be called xlink.ko.
+
+config XLINK_LOCAL_HOST
+ tristate "Support for XLINK LOCAL HOST"
+ depends on XLINK_IPC
+ help
+ XLINK LOCAL HOST enables local host functionality for
+ the communication/control Sub-System.
+
+ Enable this config when building the kernel for the Intel Vision
+ Processing Unit (VPU) Local Host core.
+
+ If building for a Remote Host kernel, say N.
+
+config XLINK_PSS
+ tristate "Support for XLINK PSS (Pre-Silicon Solution)"
+ depends on XLINK_LOCAL_HOST
+ help
+ XLINK PSS enables the communication/control subsystem on a PSS platform.
+
+ Enable this config when building the kernel for the Intel Vision
+ Processing Unit (VPU) in a simulated env.
+
+ If building for a VPU silicon, say N.
diff --git a/drivers/misc/xlink-core/Makefile b/drivers/misc/xlink-core/Makefile
new file mode 100644
index 000000000000..e82b7c72b6b9
--- /dev/null
+++ b/drivers/misc/xlink-core/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for Keem Bay xlink Linux driver
+#
+obj-$(CONFIG_XLINK_CORE) += xlink.o
+xlink-objs += xlink-core.o xlink-multiplexer.o xlink-platform.o xlink-ioctl.o
diff --git a/drivers/misc/xlink-core/xlink-core.c b/drivers/misc/xlink-core/xlink-core.c
new file mode 100644
index 000000000000..1a443f54786d
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-core.c
@@ -0,0 +1,738 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Core Driver.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/mod_devicetable.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/kref.h>
+
+#ifdef CONFIG_XLINK_LOCAL_HOST
+#include <linux/xlink-ipc.h>
+#endif
+
+#include "xlink-core.h"
+#include "xlink-defs.h"
+#include "xlink-ioctl.h"
+#include "xlink-multiplexer.h"
+#include "xlink-platform.h"
+
+// xlink version number
+#define XLINK_VERSION_MAJOR 0
+#define XLINK_VERSION_MINOR 1
+#define XLINK_VERSION_REVISION 2
+#define XLINK_VERSION_SUB_REV "a"
+
+// timeout in milliseconds used to wait for the ready message from the VPU
+#ifdef CONFIG_XLINK_PSS
+#define XLINK_VPU_WAIT_FOR_READY (3000000)
+#else
+#define XLINK_VPU_WAIT_FOR_READY (3000)
+#endif
+
+// device, class, and driver names
+#define DEVICE_NAME "xlnk"
+#define CLASS_NAME "xlkcore"
+#define DRV_NAME "xlink-driver"
+
+// used to determine if an API was called from user or kernel space
+#define CHANNEL_SET_USER_BIT(chan) ((chan) |= (1 << 15))
+#define CHANNEL_USER_BIT_IS_SET(chan) ((chan) & (1 << 15))
+#define CHANNEL_CLEAR_USER_BIT(chan) ((chan) &= ~(1 << 15))
+
+static dev_t xdev;
+static struct class *dev_class;
+static struct cdev xlink_cdev;
+
+static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
+
+static const struct file_operations fops = {
+ .owner = THIS_MODULE,
+ .unlocked_ioctl = xlink_ioctl,
+};
+
+struct xlink_link {
+ u32 id;
+ struct xlink_handle handle;
+ struct kref refcount;
+};
+
+struct keembay_xlink_dev {
+ struct platform_device *pdev;
+ struct xlink_link links[XLINK_MAX_CONNECTIONS];
+ u32 nmb_connected_links;
+ struct mutex lock; // protect access to xlink_dev
+};
+
+/*
+ * global variable pointing to our xlink device.
+ *
+ * This is meant to be used only when platform_get_drvdata() cannot be used
+ * because we lack a reference to our platform_device.
+ */
+static struct keembay_xlink_dev *xlink;
+
+/*
+ * get_next_link() - Searches the list of links to find the next available.
+ *
+ * Note: This function is only used in xlink_connect, where the xlink mutex is
+ * already locked.
+ *
+ * Return: the next available link, or NULL if maximum connections reached.
+ */
+static struct xlink_link *get_next_link(void)
+{
+ struct xlink_link *link = NULL;
+ int i;
+
+ for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+ if (xlink->links[i].handle.sw_device_id == XLINK_INVALID_SW_DEVICE_ID) {
+ link = &xlink->links[i];
+ break;
+ }
+ }
+ return link;
+}
+
+/*
+ * get_link_by_sw_device_id()
+ *
+ * Searches the list of links to find a link by sw device id.
+ *
+ * Return: the handle, or NULL if the handle is not found.
+ */
+static struct xlink_link *get_link_by_sw_device_id(u32 sw_device_id)
+{
+ struct xlink_link *link = NULL;
+ int i;
+
+ mutex_lock(&xlink->lock);
+ for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+ if (xlink->links[i].handle.sw_device_id == sw_device_id) {
+ link = &xlink->links[i];
+ break;
+ }
+ }
+ mutex_unlock(&xlink->lock);
+ return link;
+}
+
+// For now, do nothing and leave for further consideration
+static void release_after_kref_put(struct kref *ref) {}
+
+/* Driver probing. */
+static int kmb_xlink_probe(struct platform_device *pdev)
+{
+ struct keembay_xlink_dev *xlink_dev;
+ struct device *dev_ret;
+ int rc, i;
+
+ dev_info(&pdev->dev, "Keem Bay xlink v%d.%d.%d:%s\n", XLINK_VERSION_MAJOR,
+ XLINK_VERSION_MINOR, XLINK_VERSION_REVISION, XLINK_VERSION_SUB_REV);
+
+ xlink_dev = devm_kzalloc(&pdev->dev, sizeof(*xlink_dev), GFP_KERNEL);
+ if (!xlink_dev)
+ return -ENOMEM;
+
+ xlink_dev->pdev = pdev;
+
+ // initialize multiplexer
+ rc = xlink_multiplexer_init(xlink_dev->pdev);
+ if (rc != X_LINK_SUCCESS) {
+ pr_err("Multiplexer initialization failed\n");
+ goto r_multiplexer;
+ }
+
+ // initialize xlink data structure
+ xlink_dev->nmb_connected_links = 0;
+ mutex_init(&xlink_dev->lock);
+ for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+ xlink_dev->links[i].id = i;
+ xlink_dev->links[i].handle.sw_device_id =
+ XLINK_INVALID_SW_DEVICE_ID;
+ }
+
+ platform_set_drvdata(pdev, xlink_dev);
+
+ /* Set the global reference to our device. */
+ xlink = xlink_dev;
+
+	/* Allocate a major number */
+	if ((alloc_chrdev_region(&xdev, 0, 1, "xlinkdev")) < 0) {
+		dev_err(&pdev->dev, "Cannot allocate major number\n");
+		goto r_multiplexer;
+	}
+ dev_info(&pdev->dev, "Major = %d Minor = %d\n", MAJOR(xdev),
+ MINOR(xdev));
+
+	/* Create the device class */
+	dev_class = class_create(THIS_MODULE, CLASS_NAME);
+	if (IS_ERR(dev_class)) {
+		dev_err(&pdev->dev, "Cannot create the device class - Err %ld\n",
+			PTR_ERR(dev_class));
+		goto r_class;
+	}
+
+	/* Create the device */
+	dev_ret = device_create(dev_class, NULL, xdev, NULL, DEVICE_NAME);
+	if (IS_ERR(dev_ret)) {
+		dev_err(&pdev->dev, "Cannot create the device - Err %ld\n",
+			PTR_ERR(dev_ret));
+		goto r_device;
+	}
+	dev_info(&pdev->dev, "xlink device driver registered\n");
+
+	/* Initialize the cdev structure */
+	cdev_init(&xlink_cdev, &fops);
+
+	/* Add the character device to the system */
+	if (cdev_add(&xlink_cdev, xdev, 1) < 0) {
+		dev_err(&pdev->dev, "Cannot add the device to the system\n");
+		goto r_cdev;
+	}
+
+ return 0;
+
+r_cdev:
+	device_destroy(dev_class, xdev);
+r_device:
+	class_destroy(dev_class);
+r_class:
+	unregister_chrdev_region(xdev, 1);
+r_multiplexer:
+	xlink_multiplexer_destroy();
+	return -EIO;
+}
+
+/* Driver removal. */
+static int kmb_xlink_remove(struct platform_device *pdev)
+{
+ int rc;
+
+ mutex_lock(&xlink->lock);
+ // destroy multiplexer
+ rc = xlink_multiplexer_destroy();
+ if (rc != X_LINK_SUCCESS)
+ pr_err("Multiplexer destroy failed\n");
+
+ mutex_unlock(&xlink->lock);
+ mutex_destroy(&xlink->lock);
+	// unregister and destroy the device in reverse order of creation
+	cdev_del(&xlink_cdev);
+	device_destroy(dev_class, xdev);
+	class_destroy(dev_class);
+	unregister_chrdev_region(xdev, 1);
+ pr_info("XLink Driver removed\n");
+ return 0;
+}
+
+/*
+ * IOCTL function for User Space access to xlink kernel functions
+ *
+ */
+
+static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+ int rc;
+
+ switch (cmd) {
+ case XL_CONNECT:
+ rc = ioctl_connect(arg);
+ break;
+ case XL_OPEN_CHANNEL:
+ rc = ioctl_open_channel(arg);
+ break;
+ case XL_READ_DATA:
+ rc = ioctl_read_data(arg);
+ break;
+ case XL_WRITE_DATA:
+ rc = ioctl_write_data(arg);
+ break;
+ case XL_WRITE_VOLATILE:
+ rc = ioctl_write_volatile_data(arg);
+ break;
+ case XL_RELEASE_DATA:
+ rc = ioctl_release_data(arg);
+ break;
+ case XL_CLOSE_CHANNEL:
+ rc = ioctl_close_channel(arg);
+ break;
+ case XL_DISCONNECT:
+ rc = ioctl_disconnect(arg);
+		break;
+	default:
+		return -ENOTTY;
+	}
+	return rc ? -EIO : 0;
+}
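+
+/*
+ * Illustrative user-space call sequence (a hypothetical sketch, not part of
+ * this patch; user space supplies a structure matching struct xlink_handle):
+ *
+ *	struct xlink_handle devh = { .sw_device_id = 0x1 };
+ *	__u32 ret = ~0;
+ *	struct xlinkconnect con = { .handle = &devh, .return_code = &ret };
+ *	int fd = open("/dev/xlnk", O_RDWR);
+ *
+ *	if (fd >= 0 && ioctl(fd, XL_CONNECT, &con) == 0)
+ *		; // ret now holds the xlink_error result written by the driver
+ */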
+
+/*
+ * xlink Kernel API.
+ */
+
+enum xlink_error xlink_initialize(void)
+{
+ return X_LINK_SUCCESS;
+}
+EXPORT_SYMBOL(xlink_initialize);
+
+enum xlink_error xlink_connect(struct xlink_handle *handle)
+{
+ struct xlink_link *link;
+ enum xlink_error rc;
+ int interface;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ mutex_lock(&xlink->lock);
+ if (!link) {
+ link = get_next_link();
+ if (!link) {
+ pr_err("max connections reached %d\n",
+ XLINK_MAX_CONNECTIONS);
+ mutex_unlock(&xlink->lock);
+ return X_LINK_ERROR;
+ }
+ // platform connect
+ interface = get_interface_from_sw_device_id(handle->sw_device_id);
+ rc = xlink_platform_connect(interface, handle->sw_device_id);
+ if (rc) {
+ pr_err("platform connect failed %d\n", rc);
+ mutex_unlock(&xlink->lock);
+ return X_LINK_ERROR;
+ }
+ // set link handle reference and link id
+ link->handle = *handle;
+ xlink->nmb_connected_links++;
+ kref_init(&link->refcount);
+ // initialize multiplexer connection
+ rc = xlink_multiplexer_connect(link->id);
+ if (rc) {
+ pr_err("multiplexer connect failed\n");
+ goto r_cleanup;
+ }
+ pr_info("dev 0x%x connected - dev_type %d - nmb_connected_links %d\n",
+ link->handle.sw_device_id,
+ link->handle.dev_type,
+ xlink->nmb_connected_links);
+ } else {
+ // already connected
+ pr_info("dev 0x%x ALREADY connected - dev_type %d\n",
+ link->handle.sw_device_id,
+ link->handle.dev_type);
+ kref_get(&link->refcount);
+ *handle = link->handle;
+ }
+ mutex_unlock(&xlink->lock);
+ // TODO: implement ping
+ return X_LINK_SUCCESS;
+
+r_cleanup:
+ link->handle.sw_device_id = XLINK_INVALID_SW_DEVICE_ID;
+ mutex_unlock(&xlink->lock);
+ return X_LINK_ERROR;
+}
+EXPORT_SYMBOL(xlink_connect);
+
+enum xlink_error xlink_open_channel(struct xlink_handle *handle,
+ u16 chan, enum xlink_opmode mode,
+ u32 data_size, u32 timeout)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ int event_queued = 0;
+ enum xlink_error rc;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_OPEN_CHANNEL_REQ,
+ &link->handle, chan, data_size, timeout);
+ if (!event)
+ return X_LINK_ERROR;
+
+	event->data = (void *)(uintptr_t)mode;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ return rc;
+}
+EXPORT_SYMBOL(xlink_open_channel);
+
+enum xlink_error xlink_close_channel(struct xlink_handle *handle,
+ u16 chan)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ enum xlink_error rc;
+ int event_queued = 0;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_CLOSE_CHANNEL_REQ,
+ &link->handle, chan, 0, 0);
+ if (!event)
+ return X_LINK_ERROR;
+
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ return rc;
+}
+EXPORT_SYMBOL(xlink_close_channel);
+
+enum xlink_error xlink_write_data(struct xlink_handle *handle,
+ u16 chan, u8 const *pmessage, u32 size)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ enum xlink_error rc;
+ int event_queued = 0;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ if (size > XLINK_MAX_DATA_SIZE)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_WRITE_REQ, &link->handle,
+ chan, size, 0);
+ if (!event)
+ return X_LINK_ERROR;
+
+ if (chan < XLINK_IPC_MAX_CHANNELS &&
+ event->interface == IPC_INTERFACE) {
+ /* only passing message address across IPC interface */
+ event->data = &pmessage;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ xlink_destroy_event(event);
+ } else {
+ event->data = (u8 *)pmessage;
+ event->paddr = 0;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ }
+ return rc;
+}
+EXPORT_SYMBOL(xlink_write_data);
+
+enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
+ u16 chan, u8 const *pmessage,
+ u32 size)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ enum xlink_error rc;
+ int event_queued = 0;
+ dma_addr_t paddr = 0;
+ u32 addr;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ if (size > XLINK_MAX_DATA_SIZE)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_WRITE_REQ, &link->handle,
+ chan, size, 0);
+ if (!event)
+ return X_LINK_ERROR;
+ event->user_data = 1;
+
+ if (chan < XLINK_IPC_MAX_CHANNELS &&
+ event->interface == IPC_INTERFACE) {
+ /* only passing message address across IPC interface */
+ if (get_user(addr, (u32 __user *)pmessage)) {
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ event->data = &addr;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ xlink_destroy_event(event);
+ } else {
+ event->data = xlink_platform_allocate(&xlink->pdev->dev, &paddr,
+ size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ if (!event->data) {
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ if (copy_from_user(event->data, (void __user *)pmessage, size)) {
+ xlink_platform_deallocate(&xlink->pdev->dev,
+ event->data, paddr, size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ event->paddr = paddr;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued) {
+ xlink_platform_deallocate(&xlink->pdev->dev,
+ event->data, paddr, size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ xlink_destroy_event(event);
+ }
+ }
+ return rc;
+}
+
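+/*
+ * Note: "volatile" writes copy the caller's buffer (at most
+ * XLINK_MAX_BUF_SIZE bytes) into a newly allocated kernel buffer before
+ * transmission, so the caller may reuse its buffer as soon as the call
+ * returns.
+ */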
+enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
+ u16 chan, u8 const *message, u32 size)
+{
+	return do_xlink_write_volatile(handle, chan, message, size, 0);
+}
+EXPORT_SYMBOL(xlink_write_volatile);
+
+enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
+ u16 chan, u8 const *message,
+ u32 size, u32 user_flag)
+{
+ enum xlink_error rc = 0;
+ struct xlink_link *link = NULL;
+ struct xlink_event *event = NULL;
+ int event_queued = 0;
+ dma_addr_t paddr;
+ int region = 0;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ if (size > XLINK_MAX_BUF_SIZE)
+ return X_LINK_ERROR; // TODO: XLink Parameter Error
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_WRITE_VOLATILE_REQ,
+ &link->handle, chan, size, 0);
+ if (!event)
+ return X_LINK_ERROR;
+
+ region = XLINK_NORMAL_MEMORY;
+ event->data = xlink_platform_allocate(&xlink->pdev->dev, &paddr, size,
+ XLINK_PACKET_ALIGNMENT, region);
+ if (!event->data) {
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ memcpy(event->data, message, size);
+ event->user_data = user_flag;
+ event->paddr = paddr;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued) {
+ xlink_platform_deallocate(&xlink->pdev->dev, event->data, paddr, size,
+ XLINK_PACKET_ALIGNMENT, region);
+ xlink_destroy_event(event);
+ }
+ return rc;
+}
+
+enum xlink_error xlink_read_data(struct xlink_handle *handle,
+ u16 chan, u8 **pmessage, u32 *size)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ int event_queued = 0;
+ enum xlink_error rc;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_READ_REQ, &link->handle,
+ chan, *size, 0);
+ if (!event)
+ return X_LINK_ERROR;
+
+ event->pdata = (void **)pmessage;
+ event->length = size;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ return rc;
+}
+EXPORT_SYMBOL(xlink_read_data);
+
+enum xlink_error xlink_read_data_to_buffer(struct xlink_handle *handle,
+ u16 chan, u8 *const message, u32 *size)
+{
+ enum xlink_error rc = 0;
+ struct xlink_link *link = NULL;
+ struct xlink_event *event = NULL;
+ int event_queued = 0;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_READ_TO_BUFFER_REQ,
+ &link->handle, chan, *size, 0);
+ if (!event)
+ return X_LINK_ERROR;
+
+ event->data = message;
+ event->length = size;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ return rc;
+}
+EXPORT_SYMBOL(xlink_read_data_to_buffer);
+
+enum xlink_error xlink_release_data(struct xlink_handle *handle,
+ u16 chan, u8 * const data_addr)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ int event_queued = 0;
+ enum xlink_error rc;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_RELEASE_REQ, &link->handle,
+ chan, 0, 0);
+ if (!event)
+ return X_LINK_ERROR;
+
+ event->data = data_addr;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ return rc;
+}
+EXPORT_SYMBOL(xlink_release_data);
+
+enum xlink_error xlink_disconnect(struct xlink_handle *handle)
+{
+ struct xlink_link *link;
+ enum xlink_error rc = X_LINK_ERROR;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ // decrement refcount, if count is 0 lock mutex and disconnect
+ if (kref_put_mutex(&link->refcount, release_after_kref_put,
+ &xlink->lock)) {
+ // deinitialize multiplexer connection
+ rc = xlink_multiplexer_disconnect(link->id);
+ if (rc) {
+ pr_err("multiplexer disconnect failed\n");
+ mutex_unlock(&xlink->lock);
+ return X_LINK_ERROR;
+ }
+ // TODO: reset device?
+ // invalidate link handle reference
+ link->handle.sw_device_id = XLINK_INVALID_SW_DEVICE_ID;
+ xlink->nmb_connected_links--;
+ mutex_unlock(&xlink->lock);
+ }
+ return rc;
+}
+EXPORT_SYMBOL(xlink_disconnect);
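+
+/*
+ * Illustrative in-kernel usage (a sketch only; the device id, channel,
+ * buffer and length names are hypothetical and the channel/device pairing
+ * must respect the default channel table in xlink-multiplexer.c):
+ *
+ *	struct xlink_handle h = {
+ *		.sw_device_id = ipc_dev_id,	// an IPC-interface device id
+ *		.dev_type = VPUIP_DEVICE,
+ *	};
+ *
+ *	if (xlink_connect(&h) == X_LINK_SUCCESS) {
+ *		xlink_open_channel(&h, 0xB, RXB_TXB, 128, 0);
+ *		xlink_write_volatile(&h, 0xB, msg, msg_len);
+ *		xlink_close_channel(&h, 0xB);
+ *		xlink_disconnect(&h);
+ *	}
+ */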
+
+/* Device tree driver match. */
+static const struct of_device_id kmb_xlink_of_match[] = {
+ {
+ .compatible = "intel,keembay-xlink",
+ },
+ {}
+};
+
+/* The xlink driver is a platform device. */
+static struct platform_driver kmb_xlink_driver = {
+ .probe = kmb_xlink_probe,
+ .remove = kmb_xlink_remove,
+ .driver = {
+ .name = DRV_NAME,
+ .of_match_table = kmb_xlink_of_match,
+ },
+};
+
+/*
+ * The remote host system will need to create an xlink platform
+ * device for the platform driver to match with
+ */
+#ifndef CONFIG_XLINK_LOCAL_HOST
+static struct platform_device pdev;
+static void kmb_xlink_release(struct device *dev) {}
+#endif
+
+static int kmb_xlink_init(void)
+{
+ int rc;
+
+ rc = platform_driver_register(&kmb_xlink_driver);
+#ifndef CONFIG_XLINK_LOCAL_HOST
+ pdev.dev.release = kmb_xlink_release;
+ pdev.name = DRV_NAME;
+ pdev.id = -1;
+ if (!rc) {
+ rc = platform_device_register(&pdev);
+ if (rc)
+ platform_driver_unregister(&kmb_xlink_driver);
+ }
+#endif
+ return rc;
+}
+module_init(kmb_xlink_init);
+
+static void kmb_xlink_exit(void)
+{
+#ifndef CONFIG_XLINK_LOCAL_HOST
+ platform_device_unregister(&pdev);
+#endif
+ platform_driver_unregister(&kmb_xlink_driver);
+}
+module_exit(kmb_xlink_exit);
+
+MODULE_DESCRIPTION("Keem Bay xlink Kernel Driver");
+MODULE_AUTHOR("Seamus Kelly <[email protected]>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/xlink-core/xlink-core.h b/drivers/misc/xlink-core/xlink-core.h
new file mode 100644
index 000000000000..5ba7ac653bf7
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-core.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink core header file.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+
+#ifndef XLINK_CORE_H_
+#define XLINK_CORE_H_
+#include "xlink-defs.h"
+
+#define NUM_REG_EVENTS 4
+
+enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
+ u16 chan, u8 const *message,
+ u32 size, u32 user_flag);
+
+enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
+ u16 chan, u8 const *pmessage,
+ u32 size);
+#endif /* XLINK_CORE_H_ */
diff --git a/drivers/misc/xlink-core/xlink-defs.h b/drivers/misc/xlink-core/xlink-defs.h
new file mode 100644
index 000000000000..09aee36d5542
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-defs.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Defines.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_DEFS_H
+#define __XLINK_DEFS_H
+
+#include <linux/slab.h>
+#include <linux/xlink.h>
+
+#define XLINK_MAX_BUF_SIZE 128U
+#define XLINK_MAX_DATA_SIZE (1024U * 1024U * 1024U)
+#define XLINK_MAX_CONTROL_DATA_SIZE 100U
+#define XLINK_MAX_CONNECTIONS 16
+#define XLINK_PACKET_ALIGNMENT 64
+#define XLINK_INVALID_EVENT_ID 0xDEADBEEF
+#define XLINK_INVALID_CHANNEL_ID 0xDEAD
+#define XLINK_PACKET_QUEUE_CAPACITY 10000
+#define XLINK_EVENT_QUEUE_CAPACITY 10000
+#define XLINK_EVENT_HEADER_MAGIC 0x786C6E6B
+#define XLINK_PING_TIMEOUT_MS 5000U
+#define XLINK_MAX_DEVICE_NAME_SIZE 128
+#define XLINK_MAX_DEVICE_LIST_SIZE 8
+#define XLINK_INVALID_LINK_ID 0xDEADBEEF
+#define XLINK_INVALID_SW_DEVICE_ID 0xDEADBEEF
+
+#define NMB_CHANNELS 4096
+#define IP_CONTROL_CHANNEL 0x0A
+#define VPU_CONTROL_CHANNEL 0x400
+#define CONTROL_CHANNEL_OPMODE RXB_TXB // blocking
+#define CONTROL_CHANNEL_DATASIZE 128U // size of internal rx/tx buffers
+#define CONTROL_CHANNEL_TIMEOUT_MS 0U // wait indefinitely
+#define SIGXLNK 44 // signal XLink uses for callback signalling
+
+#define UNUSED(x) (void)(x)
+
+// the end of the IPC channel range (starting at zero)
+#define XLINK_IPC_MAX_CHANNELS 1024
+
+// used to extract the interface type from a sw device id
+#define SW_DEVICE_ID_INTERFACE_SHIFT 24U
+#define SW_DEVICE_ID_INTERFACE_MASK 0x7
+#define GET_INTERFACE_FROM_SW_DEVICE_ID(id) (((id) >> SW_DEVICE_ID_INTERFACE_SHIFT) & \
+ SW_DEVICE_ID_INTERFACE_MASK)
+#define SW_DEVICE_ID_IPC_INTERFACE 0x0
+#define SW_DEVICE_ID_PCIE_INTERFACE 0x1
+#define SW_DEVICE_ID_USB_INTERFACE 0x2
+#define SW_DEVICE_ID_ETH_INTERFACE 0x3
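+// e.g. a sw device id of 0x02000005 has bits 26:24 == 0x2, the USB interface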
+
+enum xlink_interface {
+ NULL_INTERFACE = -1,
+ IPC_INTERFACE = 0,
+ PCIE_INTERFACE,
+ USB_CDC_INTERFACE,
+ ETH_INTERFACE,
+ NMB_OF_INTERFACES,
+};
+
+static inline int get_interface_from_sw_device_id(u32 sw_device_id)
+{
+ u32 interface = 0;
+
+ interface = GET_INTERFACE_FROM_SW_DEVICE_ID(sw_device_id);
+ switch (interface) {
+ case SW_DEVICE_ID_IPC_INTERFACE:
+ return IPC_INTERFACE;
+ case SW_DEVICE_ID_PCIE_INTERFACE:
+ return PCIE_INTERFACE;
+ case SW_DEVICE_ID_USB_INTERFACE:
+ return USB_CDC_INTERFACE;
+ case SW_DEVICE_ID_ETH_INTERFACE:
+ return ETH_INTERFACE;
+ default:
+ return NULL_INTERFACE;
+ }
+}
+
+enum xlink_channel_status {
+ CHAN_CLOSED = 0x0000,
+ CHAN_OPEN = 0x0001,
+ CHAN_BLOCKED_READ = 0x0010,
+ CHAN_BLOCKED_WRITE = 0x0100,
+ CHAN_OPEN_PEER = 0x1000,
+};
+
+enum xlink_event_origin {
+ EVENT_TX = 0, // outgoing events
+ EVENT_RX, // incoming events
+};
+
+enum xlink_event_type {
+ // request events
+ XLINK_WRITE_REQ = 0x00,
+ XLINK_WRITE_VOLATILE_REQ,
+ XLINK_READ_REQ,
+ XLINK_READ_TO_BUFFER_REQ,
+ XLINK_RELEASE_REQ,
+ XLINK_OPEN_CHANNEL_REQ,
+ XLINK_CLOSE_CHANNEL_REQ,
+ XLINK_PING_REQ,
+ XLINK_REQ_LAST,
+ // response events
+ XLINK_WRITE_RESP = 0x10,
+ XLINK_WRITE_VOLATILE_RESP,
+ XLINK_READ_RESP,
+ XLINK_READ_TO_BUFFER_RESP,
+ XLINK_RELEASE_RESP,
+ XLINK_OPEN_CHANNEL_RESP,
+ XLINK_CLOSE_CHANNEL_RESP,
+ XLINK_PING_RESP,
+ XLINK_RESP_LAST,
+};
+
+struct xlink_event_header {
+ u32 magic;
+ u32 id;
+ enum xlink_event_type type;
+ u32 chan;
+ size_t size;
+ u32 timeout;
+ u8 control_data[XLINK_MAX_CONTROL_DATA_SIZE];
+};
+
+struct xlink_event {
+ struct xlink_event_header header;
+ enum xlink_event_origin origin;
+ u32 link_id;
+ struct xlink_handle *handle;
+ int interface;
+ void *data;
+ struct task_struct *calling_pid;
+ char callback_origin;
+ char user_data;
+ void **pdata;
+ dma_addr_t paddr;
+ u32 *length;
+ struct list_head list;
+};
+
+static inline struct xlink_event *xlink_create_event(u32 link_id,
+ enum xlink_event_type type,
+ struct xlink_handle *handle,
+ u32 chan, u32 size, u32 timeout)
+{
+ struct xlink_event *new_event = NULL;
+
+ // allocate new event
+ new_event = kzalloc(sizeof(*new_event), GFP_KERNEL);
+ if (!new_event)
+ return NULL;
+
+ // set event context
+ new_event->link_id = link_id;
+ new_event->handle = handle;
+ new_event->interface = get_interface_from_sw_device_id(handle->sw_device_id);
+ new_event->user_data = 0;
+
+ // set event header
+ new_event->header.magic = XLINK_EVENT_HEADER_MAGIC;
+ new_event->header.id = XLINK_INVALID_EVENT_ID;
+ new_event->header.type = type;
+ new_event->header.chan = chan;
+ new_event->header.size = size;
+ new_event->header.timeout = timeout;
+ return new_event;
+}
+
+static inline void xlink_destroy_event(struct xlink_event *event)
+{
+ kfree(event);
+}
+#endif /* __XLINK_DEFS_H */
diff --git a/drivers/misc/xlink-core/xlink-ioctl.c b/drivers/misc/xlink-core/xlink-ioctl.c
new file mode 100644
index 000000000000..1f75ad38137b
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-ioctl.c
@@ -0,0 +1,212 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Core Driver.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/mod_devicetable.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/kref.h>
+
+#include "xlink-defs.h"
+#include "xlink-core.h"
+#include "xlink-ioctl.h"
+
+#define CHANNEL_SET_USER_BIT(chan) ((chan) |= (1 << 15))
+
+static int copy_result_to_user(u32 *where, int rc)
+{
+ if (copy_to_user((void __user *)where, &rc, sizeof(rc)))
+ return -EFAULT;
+ return rc;
+}
+
+static enum xlink_error xlink_write_volatile_user(struct xlink_handle *handle,
+ u16 chan, u8 const *message, u32 size)
+{
+	return do_xlink_write_volatile(handle, chan, message, size, 1);
+}
+
+int ioctl_connect(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkconnect con = {};
+ int rc = 0;
+
+ if (copy_from_user(&con, (void __user *)arg,
+ sizeof(struct xlinkconnect)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)con.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ rc = xlink_connect(&devh);
+ if (rc == X_LINK_SUCCESS) {
+ if (copy_to_user((void __user *)con.handle,
+ &devh, sizeof(struct xlink_handle)))
+ return -EFAULT;
+ }
+
+ return copy_result_to_user(con.return_code, rc);
+}
+
+int ioctl_open_channel(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkopenchannel op = {};
+ int rc = 0;
+
+ if (copy_from_user(&op, (void __user *)arg,
+ sizeof(struct xlinkopenchannel)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)op.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ rc = xlink_open_channel(&devh, op.chan, op.mode, op.data_size,
+ op.timeout);
+
+ return copy_result_to_user(op.return_code, rc);
+}
+
+int ioctl_read_data(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkreaddata rd = {};
+ int rc = 0;
+ u8 *rdaddr;
+ u32 size;
+ int interface;
+
+ if (copy_from_user(&rd, (void __user *)arg,
+ sizeof(struct xlinkreaddata)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)rd.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ rc = xlink_read_data(&devh, rd.chan, &rdaddr, &size);
+ if (!rc) {
+ interface = get_interface_from_sw_device_id(devh.sw_device_id);
+ if (interface == IPC_INTERFACE) {
+ if (copy_to_user((void __user *)rd.pmessage, (void *)&rdaddr,
+ sizeof(u32)))
+ return -EFAULT;
+ } else {
+ if (copy_to_user((void __user *)rd.pmessage, (void *)rdaddr,
+ size))
+ return -EFAULT;
+ }
+ if (copy_to_user((void __user *)rd.size, (void *)&size, sizeof(size)))
+ return -EFAULT;
+ }
+
+ return copy_result_to_user(rd.return_code, rc);
+}
+
+int ioctl_write_data(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkwritedata wr = {};
+ int rc = 0;
+
+ if (copy_from_user(&wr, (void __user *)arg,
+ sizeof(struct xlinkwritedata)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)wr.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ if (wr.size > XLINK_MAX_DATA_SIZE)
+		return -EINVAL;
+ rc = xlink_write_data_user(&devh, wr.chan, wr.pmessage, wr.size);
+
+ return copy_result_to_user(wr.return_code, rc);
+}
+
+int ioctl_write_volatile_data(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkwritedata wr = {};
+ int rc = 0;
+ u8 volbuf[XLINK_MAX_BUF_SIZE]; // buffer for volatile transactions
+
+ if (copy_from_user(&wr, (void __user *)arg,
+ sizeof(struct xlinkwritedata)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)wr.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ if (wr.size > XLINK_MAX_BUF_SIZE)
+		return -EINVAL;
+ if (copy_from_user(volbuf, (void __user *)wr.pmessage, wr.size))
+ return -EFAULT;
+ rc = xlink_write_volatile_user(&devh, wr.chan, volbuf, wr.size);
+
+ return copy_result_to_user(wr.return_code, rc);
+}
+
+int ioctl_release_data(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkrelease rel = {};
+ int rc = 0;
+	u32 reladdr;
+
+ if (copy_from_user(&rel, (void __user *)arg,
+ sizeof(struct xlinkrelease)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)rel.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ if (rel.addr) {
+ if (get_user(reladdr, (u32 __user *const)rel.addr))
+ return -EFAULT;
+ rc = xlink_release_data(&devh, rel.chan,
+ (u8 *)&reladdr);
+ } else {
+ rc = xlink_release_data(&devh, rel.chan, NULL);
+ }
+
+ return copy_result_to_user(rel.return_code, rc);
+}
+
+int ioctl_close_channel(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkopenchannel op = {};
+ int rc = 0;
+
+ if (copy_from_user(&op, (void __user *)arg,
+ sizeof(struct xlinkopenchannel)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)op.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ rc = xlink_close_channel(&devh, op.chan);
+
+ return copy_result_to_user(op.return_code, rc);
+}
+
+int ioctl_disconnect(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkconnect con = {};
+ int rc = 0;
+
+ if (copy_from_user(&con, (void __user *)arg,
+ sizeof(struct xlinkconnect)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)con.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ rc = xlink_disconnect(&devh);
+
+ return copy_result_to_user(con.return_code, rc);
+}
diff --git a/drivers/misc/xlink-core/xlink-ioctl.h b/drivers/misc/xlink-core/xlink-ioctl.h
new file mode 100644
index 000000000000..0f317c6c2b94
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-ioctl.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink ioctl header files.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+
+#ifndef XLINK_IOCTL_H_
+#define XLINK_IOCTL_H_
+
+int ioctl_connect(unsigned long arg);
+int ioctl_open_channel(unsigned long arg);
+int ioctl_read_data(unsigned long arg);
+int ioctl_write_data(unsigned long arg);
+int ioctl_write_volatile_data(unsigned long arg);
+int ioctl_release_data(unsigned long arg);
+int ioctl_close_channel(unsigned long arg);
+int ioctl_disconnect(unsigned long arg);
+
+#endif /* XLINK_IOCTL_H_ */
diff --git a/drivers/misc/xlink-core/xlink-multiplexer.c b/drivers/misc/xlink-core/xlink-multiplexer.c
new file mode 100644
index 000000000000..9b1ed008bb56
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-multiplexer.c
@@ -0,0 +1,534 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Multiplexer.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/version.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-direct.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/completion.h>
+#include <linux/sched/signal.h>
+
+#ifdef CONFIG_XLINK_LOCAL_HOST
+#include <linux/xlink-ipc.h>
+#endif
+
+#include "xlink-multiplexer.h"
+#include "xlink-platform.h"
+
+#define THR_UPR 85
+#define THR_LWR 80
+
+// timeout used for open channel
+#define OPEN_CHANNEL_TIMEOUT_MSEC 5000
+
+/* Channel mapping table. */
+struct xlink_channel_type {
+ enum xlink_interface remote_to_local;
+ enum xlink_interface local_to_ip;
+};
+
+struct xlink_channel_table_entry {
+ u16 start_range;
+ u16 stop_range;
+ struct xlink_channel_type type;
+};
+
+static const struct xlink_channel_table_entry default_channel_table[] = {
+ {0x0, 0x1, {PCIE_INTERFACE, IPC_INTERFACE}},
+ {0x2, 0x9, {USB_CDC_INTERFACE, IPC_INTERFACE}},
+ {0xA, 0x3FD, {PCIE_INTERFACE, IPC_INTERFACE}},
+ {0x3FE, 0x3FF, {ETH_INTERFACE, IPC_INTERFACE}},
+ {0x400, 0xFFE, {PCIE_INTERFACE, NULL_INTERFACE}},
+ {0xFFF, 0xFFF, {ETH_INTERFACE, NULL_INTERFACE}},
+ {NMB_CHANNELS, NMB_CHANNELS, {NULL_INTERFACE, NULL_INTERFACE}},
+};
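+// Example: channel 0x100 falls in the 0xA-0x3FD range, so it travels between
+// hosts over PCIe and is forwarded to the VPU IP over IPC on the local host.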
+
+struct channel {
+ struct open_channel *opchan;
+ enum xlink_opmode mode;
+ enum xlink_channel_status status;
+ enum xlink_channel_status ipc_status;
+ u32 size;
+ u32 timeout;
+};
+
+struct packet {
+ u8 *data;
+ u32 length;
+ dma_addr_t paddr;
+ struct list_head list;
+};
+
+struct packet_queue {
+ u32 count;
+ u32 capacity;
+ struct list_head head;
+ struct mutex lock; // lock to protect packet queue
+};
+
+struct open_channel {
+ u16 id;
+ struct channel *chan;
+ struct packet_queue rx_queue;
+ struct packet_queue tx_queue;
+ s32 rx_fill_level;
+ s32 tx_fill_level;
+ s32 tx_packet_level;
+ s32 tx_up_limit;
+ struct completion opened;
+ struct completion pkt_available;
+ struct completion pkt_consumed;
+ struct completion pkt_released;
+ struct task_struct *ready_calling_pid;
+ void *ready_callback;
+ struct task_struct *consumed_calling_pid;
+ void *consumed_callback;
+ char callback_origin;
+ struct mutex lock; // lock to protect channel config
+};
+
+struct xlink_multiplexer {
+ struct device *dev;
+ struct channel channels[XLINK_MAX_CONNECTIONS][NMB_CHANNELS];
+};
+
+static struct xlink_multiplexer *xmux;
+
+/*
+ * Multiplexer Internal Functions
+ *
+ */
+
+static inline int chan_is_non_blocking_read(struct open_channel *opchan)
+{
+ if (opchan->chan->mode == RXN_TXN || opchan->chan->mode == RXN_TXB)
+ return 1;
+
+ return 0;
+}
+
+static inline int chan_is_non_blocking_write(struct open_channel *opchan)
+{
+ if (opchan->chan->mode == RXN_TXN || opchan->chan->mode == RXB_TXN)
+ return 1;
+
+ return 0;
+}
+
+static struct xlink_channel_type const *get_channel_type(u16 chan)
+{
+ struct xlink_channel_type const *type = NULL;
+ int i = 0;
+
+ while (default_channel_table[i].start_range < NMB_CHANNELS) {
+ if (chan >= default_channel_table[i].start_range &&
+ chan <= default_channel_table[i].stop_range) {
+ type = &default_channel_table[i].type;
+ break;
+ }
+ i++;
+ }
+ return type;
+}
+
+static int is_channel_for_device(u16 chan, u32 sw_device_id,
+ enum xlink_dev_type dev_type)
+{
+ struct xlink_channel_type const *chan_type = get_channel_type(chan);
+ int interface = NULL_INTERFACE;
+
+ if (chan_type) {
+ interface = get_interface_from_sw_device_id(sw_device_id);
+ if (dev_type == VPUIP_DEVICE) {
+ if (chan_type->local_to_ip == interface)
+ return 1;
+ } else {
+ if (chan_type->remote_to_local == interface)
+ return 1;
+ }
+ }
+ return 0;
+}
+
+static int is_control_channel(u16 chan)
+{
+	return chan == IP_CONTROL_CHANNEL || chan == VPU_CONTROL_CHANNEL;
+}
+
+static int release_packet_from_channel(struct open_channel *opchan,
+ struct packet_queue *queue,
+ u8 * const addr, u32 *size)
+{
+ u8 packet_found = 0;
+ struct packet *pkt = NULL;
+
+ if (!addr) {
+ // address is null, release first packet in queue
+ if (!list_empty(&queue->head)) {
+ pkt = list_first_entry(&queue->head, struct packet,
+ list);
+ packet_found = 1;
+ }
+ } else {
+ // find packet in channel rx queue
+ list_for_each_entry(pkt, &queue->head, list) {
+ if (pkt->data == addr) {
+ packet_found = 1;
+ break;
+ }
+ }
+ }
+ if (!pkt || !packet_found)
+ return X_LINK_ERROR;
+ // packet found, deallocate and remove from queue
+ xlink_platform_deallocate(xmux->dev, pkt->data, pkt->paddr, pkt->length,
+ XLINK_PACKET_ALIGNMENT, XLINK_NORMAL_MEMORY);
+ list_del(&pkt->list);
+ queue->count--;
+ opchan->rx_fill_level -= pkt->length;
+ if (size)
+ *size = pkt->length;
+ kfree(pkt);
+ return X_LINK_SUCCESS;
+}
+
+static int multiplexer_open_channel(u32 link_id, u16 chan)
+{
+ struct open_channel *opchan;
+
+ // channel already open
+ if (xmux->channels[link_id][chan].opchan)
+ return X_LINK_SUCCESS;
+
+ // allocate open channel
+ opchan = kzalloc(sizeof(*opchan), GFP_KERNEL);
+ if (!opchan)
+ return X_LINK_ERROR;
+
+ // initialize open channel
+ opchan->id = chan;
+ opchan->chan = &xmux->channels[link_id][chan];
+ // TODO: remove circular dependency
+ xmux->channels[link_id][chan].opchan = opchan;
+ INIT_LIST_HEAD(&opchan->rx_queue.head);
+ opchan->rx_queue.count = 0;
+ opchan->rx_queue.capacity = XLINK_PACKET_QUEUE_CAPACITY;
+ INIT_LIST_HEAD(&opchan->tx_queue.head);
+ opchan->tx_queue.count = 0;
+ opchan->tx_queue.capacity = XLINK_PACKET_QUEUE_CAPACITY;
+ opchan->rx_fill_level = 0;
+ opchan->tx_fill_level = 0;
+ opchan->tx_packet_level = 0;
+ opchan->tx_up_limit = 0;
+ init_completion(&opchan->opened);
+ init_completion(&opchan->pkt_available);
+ init_completion(&opchan->pkt_consumed);
+ init_completion(&opchan->pkt_released);
+	mutex_init(&opchan->rx_queue.lock);
+	mutex_init(&opchan->tx_queue.lock);
+	mutex_init(&opchan->lock);
+ return X_LINK_SUCCESS;
+}
+
+static int multiplexer_close_channel(struct open_channel *opchan)
+{
+ if (!opchan)
+ return X_LINK_ERROR;
+
+ // free remaining packets
+ while (!list_empty(&opchan->rx_queue.head)) {
+ release_packet_from_channel(opchan, &opchan->rx_queue,
+ NULL, NULL);
+ }
+
+ while (!list_empty(&opchan->tx_queue.head)) {
+ release_packet_from_channel(opchan, &opchan->tx_queue,
+ NULL, NULL);
+ }
+
+ // deallocate data structure and destroy
+ opchan->chan->opchan = NULL; // TODO: remove circular dependency
+ mutex_destroy(&opchan->rx_queue.lock);
+ mutex_destroy(&opchan->tx_queue.lock);
+	mutex_destroy(&opchan->lock);
+ kfree(opchan);
+ return X_LINK_SUCCESS;
+}
+
+/*
+ * Multiplexer External Functions
+ *
+ */
+
+enum xlink_error xlink_multiplexer_init(void *dev)
+{
+ struct platform_device *plat_dev = (struct platform_device *)dev;
+
+ // allocate multiplexer data structure
+ xmux = kzalloc(sizeof(*xmux), GFP_KERNEL);
+ if (!xmux)
+ return X_LINK_ERROR;
+
+ xmux->dev = &plat_dev->dev;
+ return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_multiplexer_connect(u32 link_id)
+{
+ int rc;
+
+ if (!xmux)
+ return X_LINK_ERROR;
+
+	// open ip control channel
+	rc = multiplexer_open_channel(link_id, IP_CONTROL_CHANNEL);
+	if (rc)
+		goto r_cleanup;
+	xmux->channels[link_id][IP_CONTROL_CHANNEL].size = CONTROL_CHANNEL_DATASIZE;
+	xmux->channels[link_id][IP_CONTROL_CHANNEL].timeout = CONTROL_CHANNEL_TIMEOUT_MS;
+	xmux->channels[link_id][IP_CONTROL_CHANNEL].mode = CONTROL_CHANNEL_OPMODE;
+	xmux->channels[link_id][IP_CONTROL_CHANNEL].status = CHAN_OPEN;
+
+	// open vpu control channel
+	rc = multiplexer_open_channel(link_id, VPU_CONTROL_CHANNEL);
+	if (rc)
+		goto r_cleanup;
+	xmux->channels[link_id][VPU_CONTROL_CHANNEL].size = CONTROL_CHANNEL_DATASIZE;
+	xmux->channels[link_id][VPU_CONTROL_CHANNEL].timeout = CONTROL_CHANNEL_TIMEOUT_MS;
+	xmux->channels[link_id][VPU_CONTROL_CHANNEL].mode = CONTROL_CHANNEL_OPMODE;
+	xmux->channels[link_id][VPU_CONTROL_CHANNEL].status = CHAN_OPEN;
+	return X_LINK_SUCCESS;
+
+r_cleanup:
+ xlink_multiplexer_disconnect(link_id);
+ return X_LINK_ERROR;
+}
+
+enum xlink_error xlink_multiplexer_disconnect(u32 link_id)
+{
+ int i;
+
+ if (!xmux)
+ return X_LINK_ERROR;
+
+ for (i = 0; i < NMB_CHANNELS; i++) {
+ if (xmux->channels[link_id][i].opchan)
+ multiplexer_close_channel(xmux->channels[link_id][i].opchan);
+ }
+ return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_multiplexer_destroy(void)
+{
+ int i;
+
+ if (!xmux)
+ return X_LINK_ERROR;
+
+ // close all open channels and deallocate remaining packets
+ for (i = 0; i < XLINK_MAX_CONNECTIONS; i++)
+ xlink_multiplexer_disconnect(i);
+
+ // destroy multiplexer
+ kfree(xmux);
+ xmux = NULL;
+ return X_LINK_SUCCESS;
+}
+
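+/*
+ * Note: in this version of the multiplexer only the IPC passthrough path
+ * below is wired up; *event_queued is never set here, so callers always
+ * destroy the event after transmission.
+ */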
+enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
+ int *event_queued)
+{
+ int rc = X_LINK_SUCCESS;
+ u16 chan = 0;
+
+ if (!xmux || !event)
+ return X_LINK_ERROR;
+
+ chan = event->header.chan;
+
+ // verify channel ID is in range
+ if (chan >= NMB_CHANNELS)
+ return X_LINK_ERROR;
+
+ // verify communication to device on channel is valid
+ if (!is_channel_for_device(chan, event->handle->sw_device_id,
+ event->handle->dev_type))
+ return X_LINK_ERROR;
+
+ // verify this is not a control channel
+ if (is_control_channel(chan))
+ return X_LINK_ERROR;
+
+	if (chan < XLINK_IPC_MAX_CHANNELS &&
+	    event->interface == IPC_INTERFACE) {
+		// event should be handled by the IPC passthrough
+		rc = xlink_passthrough(event);
+	}
+	return rc;
+}
+
+enum xlink_error xlink_passthrough(struct xlink_event *event)
+{
+ int rc = 0;
+#ifdef CONFIG_XLINK_LOCAL_HOST
+ struct xlink_ipc_context ipc = {0};
+ phys_addr_t physaddr = 0;
+ dma_addr_t vpuaddr = 0;
+ u32 timeout = 0;
+ u32 link_id;
+ u16 chan;
+
+ if (!xmux || !event)
+ return X_LINK_ERROR;
+
+ link_id = event->link_id;
+ chan = event->header.chan;
+ ipc.chan = chan;
+
+ if (ipc.chan >= XLINK_IPC_MAX_CHANNELS)
+ return rc;
+
+ switch (event->header.type) {
+ case XLINK_WRITE_REQ:
+ if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+ /* Translate physical address to VPU address */
+ vpuaddr = phys_to_dma(xmux->dev, *(u32 *)event->data);
+ event->data = &vpuaddr;
+ rc = xlink_platform_write(IPC_INTERFACE,
+ event->handle->sw_device_id,
+ event->data,
+ &event->header.size, 0, &ipc);
+ } else {
+ /* channel not open */
+ rc = X_LINK_ERROR;
+ }
+ break;
+ case XLINK_WRITE_VOLATILE_REQ:
+ if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+ ipc.is_volatile = 1;
+ rc = xlink_platform_write(IPC_INTERFACE,
+ event->handle->sw_device_id,
+ event->data,
+ &event->header.size, 0, &ipc);
+ } else {
+ /* channel not open */
+ rc = X_LINK_ERROR;
+ }
+ break;
+ case XLINK_READ_REQ:
+ if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+ /* if channel has receive blocking set,
+ * then set timeout to U32_MAX
+ */
+ if (xmux->channels[link_id][chan].mode == RXB_TXN ||
+ xmux->channels[link_id][chan].mode == RXB_TXB) {
+ timeout = U32_MAX;
+ } else {
+ timeout = xmux->channels[link_id][chan].timeout;
+ }
+ rc = xlink_platform_read(IPC_INTERFACE,
+ event->handle->sw_device_id,
+ &vpuaddr,
+ (size_t *)event->length,
+ timeout, &ipc);
+ /* Translate VPU address to physical address */
+ physaddr = dma_to_phys(xmux->dev, vpuaddr);
+ *(phys_addr_t *)event->pdata = physaddr;
+ } else {
+ /* channel not open */
+ rc = X_LINK_ERROR;
+ }
+ break;
+ case XLINK_READ_TO_BUFFER_REQ:
+ if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+ /* if channel has receive blocking set,
+ * then set timeout to U32_MAX
+ */
+ if (xmux->channels[link_id][chan].mode == RXB_TXN ||
+ xmux->channels[link_id][chan].mode == RXB_TXB) {
+ timeout = U32_MAX;
+ } else {
+ timeout = xmux->channels[link_id][chan].timeout;
+ }
+ ipc.is_volatile = 1;
+ rc = xlink_platform_read(IPC_INTERFACE,
+ event->handle->sw_device_id,
+ event->data,
+ (size_t *)event->length,
+ timeout, &ipc);
+ if (rc || *event->length > XLINK_MAX_BUF_SIZE)
+ rc = X_LINK_ERROR;
+ } else {
+ /* channel not open */
+ rc = X_LINK_ERROR;
+ }
+ break;
+ case XLINK_RELEASE_REQ:
+ break;
+ case XLINK_OPEN_CHANNEL_REQ:
+ if (xmux->channels[link_id][chan].ipc_status == CHAN_CLOSED) {
+ xmux->channels[link_id][chan].size = event->header.size;
+ xmux->channels[link_id][chan].timeout = event->header.timeout;
+ xmux->channels[link_id][chan].mode = (uintptr_t)event->data;
+ rc = xlink_platform_open_channel(IPC_INTERFACE,
+ event->handle->sw_device_id,
+ chan);
+ if (rc)
+ rc = X_LINK_ERROR;
+ else
+ xmux->channels[link_id][chan].ipc_status = CHAN_OPEN;
+ } else {
+ /* channel already open */
+ rc = X_LINK_ALREADY_OPEN;
+ }
+ break;
+ case XLINK_CLOSE_CHANNEL_REQ:
+ if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+ rc = xlink_platform_close_channel(IPC_INTERFACE,
+ event->handle->sw_device_id,
+ chan);
+ if (rc)
+ rc = X_LINK_ERROR;
+ else
+ xmux->channels[link_id][chan].ipc_status = CHAN_CLOSED;
+ } else {
+ /* can't close channel not open */
+ rc = X_LINK_ERROR;
+ }
+ break;
+ case XLINK_PING_REQ:
+ case XLINK_WRITE_RESP:
+ case XLINK_WRITE_VOLATILE_RESP:
+ case XLINK_READ_RESP:
+ case XLINK_READ_TO_BUFFER_RESP:
+ case XLINK_RELEASE_RESP:
+ case XLINK_OPEN_CHANNEL_RESP:
+ case XLINK_CLOSE_CHANNEL_RESP:
+ case XLINK_PING_RESP:
+ break;
+ default:
+ rc = X_LINK_ERROR;
+ }
+#else
+ rc = 0;
+#endif // CONFIG_XLINK_LOCAL_HOST
+ return rc;
+}
diff --git a/drivers/misc/xlink-core/xlink-multiplexer.h b/drivers/misc/xlink-core/xlink-multiplexer.h
new file mode 100644
index 000000000000..c978e5683b45
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-multiplexer.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Multiplexer.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_MULTIPLEXER_H
+#define __XLINK_MULTIPLEXER_H
+
+#include "xlink-defs.h"
+
+enum xlink_error xlink_multiplexer_init(void *dev);
+
+enum xlink_error xlink_multiplexer_connect(u32 link_id);
+
+enum xlink_error xlink_multiplexer_disconnect(u32 link_id);
+
+enum xlink_error xlink_multiplexer_destroy(void);
+
+enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
+ int *event_queued);
+
+enum xlink_error xlink_multiplexer_rx(struct xlink_event *event);
+
+enum xlink_error xlink_passthrough(struct xlink_event *event);
+
+void *find_allocated_buffer(dma_addr_t paddr);
+
+int unregister_allocated_buffer(void *buf, dma_addr_t paddr);
+
+int core_release_packet_from_channel(u32 link_id, uint16_t chan,
+ uint8_t * const addr);
+
+#endif /* __XLINK_MULTIPLEXER_H */
diff --git a/drivers/misc/xlink-core/xlink-platform.c b/drivers/misc/xlink-core/xlink-platform.c
new file mode 100644
index 000000000000..c34b69ee206b
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-platform.c
@@ -0,0 +1,160 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Linux Kernel Platform API
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/dma-mapping.h>
+#include <linux/xlink_drv_inf.h>
+
+#include "xlink-platform.h"
+
+#ifdef CONFIG_XLINK_LOCAL_HOST
+#include <linux/xlink-ipc.h>
+#else /* !CONFIG_XLINK_LOCAL_HOST */
+
+static inline int xlink_ipc_connect(u32 sw_device_id)
+{ return -1; }
+
+static inline int xlink_ipc_write(u32 sw_device_id, void *data,
+ size_t * const size, u32 timeout, void *context)
+{ return -1; }
+
+static inline int xlink_ipc_read(u32 sw_device_id, void *data,
+ size_t * const size, u32 timeout, void *context)
+{ return -1; }
+
+static inline int xlink_ipc_open_channel(u32 sw_device_id,
+ u32 channel)
+{ return -1; }
+
+static inline int xlink_ipc_close_channel(u32 sw_device_id,
+ u32 channel)
+{ return -1; }
+
+#endif /* CONFIG_XLINK_LOCAL_HOST */
+
+/*
+ * xlink low-level driver interface arrays
+ *
+ * note: array indices based on xlink_interface enum definition
+ */
+
+static int (*connect_fcts[NMB_OF_INTERFACES])(u32) = {
+ xlink_ipc_connect, xlink_pcie_connect, NULL, NULL};
+
+static int (*write_fcts[NMB_OF_INTERFACES])(u32, void *, size_t * const, u32) = {
+ NULL, xlink_pcie_write, NULL, NULL};
+
+static int (*read_fcts[NMB_OF_INTERFACES])(u32, void *, size_t * const, u32) = {
+ NULL, xlink_pcie_read, NULL, NULL};
+
+static int (*open_chan_fcts[NMB_OF_INTERFACES])(u32, u32) = {
+ xlink_ipc_open_channel, NULL, NULL, NULL};
+
+static int (*close_chan_fcts[NMB_OF_INTERFACES])(u32, u32) = {
+ xlink_ipc_close_channel, NULL, NULL, NULL};
+
+/*
+ * xlink low-level driver interface
+ */
+
+int xlink_platform_connect(u32 interface, u32 sw_device_id)
+{
+ if (interface >= NMB_OF_INTERFACES || !connect_fcts[interface])
+ return -1;
+
+ return connect_fcts[interface](sw_device_id);
+}
+
+int xlink_platform_write(u32 interface, u32 sw_device_id, void *data,
+ size_t * const size, u32 timeout, void *context)
+{
+ if (interface == IPC_INTERFACE)
+ return xlink_ipc_write(sw_device_id, data, size, timeout,
+ context);
+
+ if (interface >= NMB_OF_INTERFACES || !write_fcts[interface])
+ return -1;
+
+ return write_fcts[interface](sw_device_id, data, size, timeout);
+}
+
+int xlink_platform_read(u32 interface, u32 sw_device_id, void *data,
+ size_t * const size, u32 timeout, void *context)
+{
+ if (interface == IPC_INTERFACE)
+ return xlink_ipc_read(sw_device_id, data, size, timeout,
+ context);
+
+ if (interface >= NMB_OF_INTERFACES || !read_fcts[interface])
+ return -1;
+
+ return read_fcts[interface](sw_device_id, data, size, timeout);
+}
+
+int xlink_platform_open_channel(u32 interface, u32 sw_device_id,
+ u32 channel)
+{
+ if (interface >= NMB_OF_INTERFACES || !open_chan_fcts[interface])
+ return -1;
+
+ return open_chan_fcts[interface](sw_device_id, channel);
+}
+
+int xlink_platform_close_channel(u32 interface, u32 sw_device_id,
+ u32 channel)
+{
+ if (interface >= NMB_OF_INTERFACES || !close_chan_fcts[interface])
+ return -1;
+
+ return close_chan_fcts[interface](sw_device_id, channel);
+}
+
+void *xlink_platform_allocate(struct device *dev, dma_addr_t *handle,
+ u32 size, u32 alignment,
+ enum xlink_memory_region region)
+{
+#if defined(CONFIG_XLINK_PSS) || !defined(CONFIG_XLINK_LOCAL_HOST)
+ *handle = 0;
+ return kzalloc(size, GFP_KERNEL);
+#else
+ void *p;
+
+ if (region == XLINK_CMA_MEMORY) {
+		// CMA only serves requests larger than one page, so round up to two pages
+ size = (size < PAGE_SIZE * 2) ? (PAGE_SIZE * 2) : size;
+ p = dma_alloc_coherent(dev, size, handle, GFP_KERNEL);
+ return p;
+ }
+ *handle = 0;
+ return kzalloc(size, GFP_KERNEL);
+#endif
+}
+
+void xlink_platform_deallocate(struct device *dev, void *buf,
+ dma_addr_t handle, u32 size, u32 alignment,
+ enum xlink_memory_region region)
+{
+#if defined(CONFIG_XLINK_PSS) || !defined(CONFIG_XLINK_LOCAL_HOST)
+ kfree(buf);
+#else
+ if (region == XLINK_CMA_MEMORY) {
+		// CMA only serves requests larger than one page, so round up to two pages
+ size = (size < PAGE_SIZE * 2) ? (PAGE_SIZE * 2) : size;
+ dma_free_coherent(dev, size, buf, handle);
+ } else {
+ kfree(buf);
+ }
+#endif
+}
diff --git a/drivers/misc/xlink-core/xlink-platform.h b/drivers/misc/xlink-core/xlink-platform.h
new file mode 100644
index 000000000000..2c7c4c418099
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-platform.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Linux Kernel Platform API
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_PLATFORM_H
+#define __XLINK_PLATFORM_H
+
+#include "xlink-defs.h"
+
+int xlink_platform_connect(u32 interface, u32 sw_device_id);
+
+int xlink_platform_write(u32 interface, u32 sw_device_id, void *data,
+ size_t * const size, u32 timeout, void *context);
+
+int xlink_platform_read(u32 interface, u32 sw_device_id, void *data,
+ size_t * const size, u32 timeout, void *context);
+
+int xlink_platform_reset_device(u32 interface, u32 sw_device_id);
+
+int xlink_platform_boot_device(u32 interface, u32 sw_device_id,
+ const char *binary_name);
+
+int xlink_platform_get_device_name(u32 interface, u32 sw_device_id,
+ char *device_name, size_t name_size);
+
+int xlink_platform_get_device_list(u32 interface, u32 *sw_device_id_list,
+ u32 *num_devices);
+
+int xlink_platform_get_device_status(u32 interface, u32 sw_device_id,
+ u32 *device_status);
+
+int xlink_platform_set_device_mode(u32 interface, u32 sw_device_id,
+ u32 power_mode);
+
+int xlink_platform_get_device_mode(u32 interface, u32 sw_device_id,
+ u32 *power_mode);
+
+int xlink_platform_open_channel(u32 interface, u32 sw_device_id,
+ u32 channel);
+
+int xlink_platform_close_channel(u32 interface, u32 sw_device_id,
+ u32 channel);
+
+int xlink_platform_register_for_events(u32 interface, u32 sw_device_id,
+ xlink_device_event_cb event_notif_fn);
+
+int xlink_platform_unregister_for_events(u32 interface, u32 sw_device_id);
+
+enum xlink_memory_region {
+ XLINK_NORMAL_MEMORY = 0,
+ XLINK_CMA_MEMORY
+};
+
+void *xlink_platform_allocate(struct device *dev, dma_addr_t *handle,
+ u32 size, u32 alignment,
+ enum xlink_memory_region region);
+
+void xlink_platform_deallocate(struct device *dev, void *buf,
+ dma_addr_t handle, u32 size, u32 alignment,
+ enum xlink_memory_region region);
+
+#endif /* __XLINK_PLATFORM_H */
diff --git a/include/linux/xlink.h b/include/linux/xlink.h
new file mode 100644
index 000000000000..c22439d5aade
--- /dev/null
+++ b/include/linux/xlink.h
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Linux Kernel API
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_H
+#define __XLINK_H
+
+#include <uapi/misc/xlink_uapi.h>
+
+enum xlink_dev_type {
+	HOST_DEVICE = 0, /* used when communicating host to host */
+	VPUIP_DEVICE /* used when communicating host to vpu ip */
+};
+
+struct xlink_handle {
+ u32 sw_device_id; /* identifies a device in the system */
+ enum xlink_dev_type dev_type; /* determines direction of comms */
+};
+
+enum xlink_opmode {
+ RXB_TXB = 0, /* blocking read, blocking write */
+ RXN_TXN, /* non-blocking read, non-blocking write */
+ RXB_TXN, /* blocking read, non-blocking write */
+ RXN_TXB /* non-blocking read, blocking write */
+};
+
+enum xlink_device_power_mode {
+ POWER_DEFAULT_NOMINAL_MAX = 0, /* no load reduction, default mode */
+ POWER_SUBNOMINAL_HIGH, /* slight load reduction */
+ POWER_MEDIUM, /* average load reduction */
+ POWER_LOW, /* significant load reduction */
+ POWER_MIN, /* maximum load reduction */
+ POWER_SUSPENDED /* power off or device suspend */
+};
+
+enum xlink_error {
+ X_LINK_SUCCESS = 0, /* xlink operation completed successfully */
+ X_LINK_ALREADY_INIT, /* xlink already initialized */
+ X_LINK_ALREADY_OPEN, /* channel already open */
+ X_LINK_COMMUNICATION_NOT_OPEN, /* operation on a closed channel */
+ X_LINK_COMMUNICATION_FAIL, /* communication failure */
+ X_LINK_COMMUNICATION_UNKNOWN_ERROR, /* error unknown */
+ X_LINK_DEVICE_NOT_FOUND, /* device specified not found */
+ X_LINK_TIMEOUT, /* operation timed out */
+ X_LINK_ERROR, /* parameter error */
+ X_LINK_CHAN_FULL /* channel has reached fill level */
+};
+
+enum xlink_device_status {
+	XLINK_DEV_OFF = 0, /* device is off */
+	XLINK_DEV_ERROR, /* device HW failure is detected */
+	XLINK_DEV_BUSY, /* device is busy and not available */
+	XLINK_DEV_RECOVERY, /* device is in recovery mode */
+	XLINK_DEV_READY /* device is available for use */
+};
+
+/* xlink API */
+
+typedef void (*xlink_event)(u16 chan);
+typedef int (*xlink_device_event_cb)(u32 sw_device_id, u32 event_type);
+
+enum xlink_error xlink_initialize(void);
+
+enum xlink_error xlink_connect(struct xlink_handle *handle);
+
+enum xlink_error xlink_open_channel(struct xlink_handle *handle,
+ u16 chan, enum xlink_opmode mode,
+ u32 data_size, u32 timeout);
+
+enum xlink_error xlink_close_channel(struct xlink_handle *handle, u16 chan);
+
+enum xlink_error xlink_write_data(struct xlink_handle *handle,
+ u16 chan, u8 const *message, u32 size);
+
+enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
+ u16 chan, u8 const *message, u32 size);
+
+enum xlink_error xlink_read_data(struct xlink_handle *handle,
+ u16 chan, u8 **message, u32 *size);
+
+enum xlink_error xlink_read_data_to_buffer(struct xlink_handle *handle,
+ u16 chan, u8 * const message,
+ u32 *size);
+
+enum xlink_error xlink_release_data(struct xlink_handle *handle,
+ u16 chan, u8 * const data_addr);
+
+enum xlink_error xlink_disconnect(struct xlink_handle *handle);
+
+/* API functions to be implemented
+ *
+ * enum xlink_error xlink_write_crc_data(struct xlink_handle *handle,
+ * u16 chan, u8 const *message, size_t * const size);
+ *
+ * enum xlink_error xlink_read_crc_data(struct xlink_handle *handle,
+ * u16 chan, u8 **message, size_t * const size);
+ *
+ * enum xlink_error xlink_read_crc_data_to_buffer(struct xlink_handle *handle,
+ * u16 chan, u8 * const message, size_t * const size);
+ *
+ * enum xlink_error xlink_reset_all(void);
+ *
+ */
+
+#endif /* __XLINK_H */
diff --git a/include/uapi/misc/xlink_uapi.h b/include/uapi/misc/xlink_uapi.h
new file mode 100644
index 000000000000..77794299c4a6
--- /dev/null
+++ b/include/uapi/misc/xlink_uapi.h
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
+/*
+ * xlink Linux Kernel API
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_UAPI_H
+#define __XLINK_UAPI_H
+
+#include <linux/types.h>
+
+#define XLINK_MAGIC 'x'
+#define XL_OPEN_CHANNEL _IOW(XLINK_MAGIC, 1, void*)
+#define XL_READ_DATA _IOW(XLINK_MAGIC, 2, void*)
+#define XL_WRITE_DATA _IOW(XLINK_MAGIC, 3, void*)
+#define XL_CLOSE_CHANNEL _IOW(XLINK_MAGIC, 4, void*)
+#define XL_WRITE_VOLATILE _IOW(XLINK_MAGIC, 5, void*)
+#define XL_READ_TO_BUFFER _IOW(XLINK_MAGIC, 6, void*)
+#define XL_START_VPU _IOW(XLINK_MAGIC, 7, void*)
+#define XL_STOP_VPU _IOW(XLINK_MAGIC, 8, void*)
+#define XL_RESET_VPU _IOW(XLINK_MAGIC, 9, void*)
+#define XL_CONNECT _IOW(XLINK_MAGIC, 10, void*)
+#define XL_RELEASE_DATA _IOW(XLINK_MAGIC, 11, void*)
+#define XL_DISCONNECT _IOW(XLINK_MAGIC, 12, void*)
+#define XL_WRITE_CONTROL_DATA _IOW(XLINK_MAGIC, 13, void*)
+#define XL_DATA_READY_CALLBACK _IOW(XLINK_MAGIC, 14, void*)
+#define XL_DATA_CONSUMED_CALLBACK _IOW(XLINK_MAGIC, 15, void*)
+#define XL_GET_DEVICE_NAME _IOW(XLINK_MAGIC, 16, void*)
+#define XL_GET_DEVICE_LIST _IOW(XLINK_MAGIC, 17, void*)
+#define XL_GET_DEVICE_STATUS _IOW(XLINK_MAGIC, 18, void*)
+#define XL_BOOT_DEVICE _IOW(XLINK_MAGIC, 19, void*)
+#define XL_RESET_DEVICE _IOW(XLINK_MAGIC, 20, void*)
+#define XL_GET_DEVICE_MODE _IOW(XLINK_MAGIC, 21, void*)
+#define XL_SET_DEVICE_MODE _IOW(XLINK_MAGIC, 22, void*)
+#define XL_REGISTER_DEV_EVENT _IOW(XLINK_MAGIC, 23, void*)
+#define XL_UNREGISTER_DEV_EVENT _IOW(XLINK_MAGIC, 24, void*)
+
+struct xlinkopenchannel {
+ void *handle;
+ __u16 chan;
+ int mode;
+ __u32 data_size;
+ __u32 timeout;
+ __u32 *return_code;
+};
+
+struct xlinkcallback {
+ void *handle;
+ __u16 chan;
+ void (*callback)(__u16 chan);
+ __u32 *return_code;
+};
+
+struct xlinkwritedata {
+ void *handle;
+ __u16 chan;
+ void const *pmessage;
+ __u32 size;
+ __u32 *return_code;
+};
+
+struct xlinkreaddata {
+ void *handle;
+ __u16 chan;
+ void *pmessage;
+ __u32 *size;
+ __u32 *return_code;
+};
+
+struct xlinkreadtobuffer {
+ void *handle;
+ __u16 chan;
+ void *pmessage;
+ __u32 *size;
+ __u32 *return_code;
+};
+
+struct xlinkconnect {
+ void *handle;
+ __u32 *return_code;
+};
+
+struct xlinkrelease {
+ void *handle;
+ __u16 chan;
+ void *addr;
+ __u32 *return_code;
+};
+
+struct xlinkstartvpu {
+ char *filename;
+ int namesize;
+ __u32 *return_code;
+};
+
+struct xlinkstopvpu {
+ __u32 *return_code;
+};
+
+struct xlinkgetdevicename {
+ void *handle;
+ char *name;
+ __u32 name_size;
+ __u32 *return_code;
+};
+
+struct xlinkgetdevicelist {
+ __u32 *sw_device_id_list;
+ __u32 *num_devices;
+ __u32 *return_code;
+};
+
+struct xlinkgetdevicestatus {
+ void *handle;
+ __u32 *device_status;
+ __u32 *return_code;
+};
+
+struct xlinkbootdevice {
+ void *handle;
+ const char *binary_name;
+ __u32 binary_name_size;
+ __u32 *return_code;
+};
+
+struct xlinkresetdevice {
+ void *handle;
+ __u32 *return_code;
+};
+
+struct xlinkdevmode {
+ void *handle;
+ int *device_mode;
+ __u32 *return_code;
+};
+
+struct xlinkregdevevent {
+ void *handle;
+ __u32 *event_list;
+ __u32 num_events;
+ __u32 *return_code;
+};
+
+#endif /* __XLINK_UAPI_H */
--
2.17.1

2021-01-08 21:33:14

by mark gross

[permalink] [raw]
Subject: [PATCH v2 26/34] dt-bindings: misc: intel_tsens: Add tsens thermal bindings documentation

From: "C, Udhayakumar" <[email protected]>

Add device tree bindings for the local host thermal sensors on
Intel Edge.AI Computer Vision platforms.

The tsens module enables reading of the on-chip sensors present
in the Intel Bay series SoCs. The tsens module reports the various
junction and SoC temperatures through the thermal and i2c
subsystems.

Temperature data reported through the thermal subsystem is used by
cooling agents such as DVFS and fan control, and to shut the system
down if a critical temperature is reached.

Temperature data reported through the i2c subsystem is used by
platform manageability software running on the remote host.
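
For example, a consumer on the local host could read one of the reported
junction temperatures through the standard thermal core API (an
illustrative sketch only; the "mss" zone name is taken from the binding
example below):

	struct thermal_zone_device *tz;
	int temp;

	tz = thermal_zone_get_zone_by_name("mss");
	if (!IS_ERR(tz) && !thermal_zone_get_temp(tz, &temp))
		pr_info("mss junction temperature: %d millidegrees\n", temp);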

Acked-by: mark gross <[email protected]>
Signed-off-by: C, Udhayakumar <[email protected]>
---
.../bindings/misc/intel,intel-tsens.yaml | 122 ++++++++++++++++++
1 file changed, 122 insertions(+)
create mode 100644 Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml

diff --git a/Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml b/Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml
new file mode 100644
index 000000000000..abac41995643
--- /dev/null
+++ b/Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml
@@ -0,0 +1,122 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/misc/intel,intel_tsens.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Intel Temperature sensors in Bay series
+
+maintainers:
+ - Udhayakumar C <[email protected]>
+
+description: |
+  The tsens driver enables reading of the on-chip sensors present
+  in the Intel Bay SoC.
+  Each subnode of the tsens node represents a sensor available
+  on the SoC.
+
+select: false
+
+properties:
+ compatible:
+ items:
+ - const: intel,intel-tsens
+
+ plat_name:
+ contains:
+ enum:
+ - intel,keembay_thermal
+
+ reg:
+ minItems: 1
+ maxItems: 2
+
+ clocks:
+ items:
+ - description: thermal sensor clock
+
+ clk-rate:
+ minItems: 1
+ maxItems: 1
+ additionalItems: false
+ items:
+ - description: thermal sensor clock freq
+
+ sensor_name:
+ type: object
+ description:
+ Details to configure the sensor trip points and their types.
+
+ properties:
+ passive_delay:
+ minItems: 1
+ maxItems: 1
+ description: number of milliseconds to wait between polls when
+ performing passive cooling
+
+ polling_delay:
+ minItems: 1
+ maxItems: 1
+ description: number of milliseconds to wait between polls when checking
+ whether trip points have been crossed (0 for interrupt
+ driven systems)
+
+ trip_temp:
+ minItems: 1
+ description: temperature for trip points
+
+ trip_type:
+ minItems: 1
+ description: trip type list for trip points
+
+ required:
+ - passive_delay
+ - polling_delay
+ - trip_temp
+ - trip_type
+
+required:
+ - compatible
+
+additionalProperties: false
+
+examples:
+ - |
+ tsens: tsens@20260000 {
+ compatible = "intel,intel-tsens";
+ status = "disabled";
+ #address-cells = <2>;
+ #size-cells = <2>;
+ plat_name = "intel,keembay_thermal";
+ reg = <0x0 0x20260000 0x0 0x100>;
+ clocks = <&scmi_clk>;
+ clk-rate = <1250000>;
+
+ mss {
+ passive_delay = <1000>;
+ polling_delay = <2000>;
+ trip_temp = <40000 80000 1000000>;
+ trip_type = "passive", "passive", "critical";
+ };
+
+ css {
+ passive_delay = <1000>;
+ polling_delay = <2000>;
+ trip_temp = <40000 80000 1000000>;
+ trip_type = "passive", "passive", "critical";
+ };
+
+ nce {
+ passive_delay = <1000>;
+ polling_delay = <2000>;
+ trip_temp = <40000 80000 1000000>;
+ trip_type = "passive", "passive", "critical";
+ };
+
+ soc {
+ passive_delay = <1000>;
+ polling_delay = <2000>;
+ trip_temp = <40000 80000 1000000>;
+ trip_type = "passive", "passive", "critical";
+ };
+ };
--
2.17.1

2021-01-08 21:33:20

by mark gross

[permalink] [raw]
Subject: [PATCH v2 32/34] dt-bindings: misc: hddl_dev: Add hddl device management documentation

From: "C, Udhayakumar" <[email protected]>

Add hddl device management documentation.

The HDDL client driver acts as a software RTC that syncs with network
time. It abstracts the xlink protocol used to communicate with the
remote IA host. This driver exports the details about the sensors
available on the platform to the remote IA host as xlink packets.
This driver also handles device connect/disconnect events and
identifies the board ID and SoC ID using GPIOs, based on the platform
configuration.

Signed-off-by: C, Udhayakumar <[email protected]>
---
.../bindings/misc/intel,hddl-client.yaml | 114 ++++++++++++++++++
1 file changed, 114 insertions(+)
create mode 100644 Documentation/devicetree/bindings/misc/intel,hddl-client.yaml

diff --git a/Documentation/devicetree/bindings/misc/intel,hddl-client.yaml b/Documentation/devicetree/bindings/misc/intel,hddl-client.yaml
new file mode 100644
index 000000000000..c1d121c35fc5
--- /dev/null
+++ b/Documentation/devicetree/bindings/misc/intel,hddl-client.yaml
@@ -0,0 +1,114 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/misc/intel,hddl-client.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Intel hddl client device to handle platform management in Bay series
+
+maintainers:
+ - Udhayakumar C <[email protected]>
+
+description: |
+ The HDDL client driver acts as a software RTC that syncs with network
+ time. It abstracts the xlink protocol used to communicate with the remote
+ host. This driver exports the details about the sensors available on the
+ platform to the remote host as xlink packets.
+ This driver also handles device connect/disconnect events and identifies
+ the board ID and SoC ID using GPIOs, based on platform configuration.
+
+select: false
+
+properties:
+ compatible:
+ items:
+ - const: intel,hddl-client
+
+ reg:
+ minItems: 4
+ maxItems: 4
+
+ xlink_chan:
+ minItems: 1
+ maxItems: 1
+ description: xlink channel number used for communication
+ with remote host for time sync and sharing sensor
+ details available in platform.
+
+ i2c_xlink_chan:
+ minItems: 1
+ maxItems: 1
+ description: xlink channel number used for communication
+ with remote host for xlink i2c smbus.
+
+ sensor_name:
+ type: object
+ description:
+ Details about the sensors and their configuration on the local host and
+ remote host.
+
+ properties:
+ compatible:
+ items:
+ - const: intel_tsens
+
+ reg:
+ description: i2c slave address for sensor.
+
+ local-host:
+ minItems: 1
+ maxItems: 1
+ description: enable bit 0 to register sensor as i2c slave
+ in local host (normal i2c client)
+ enable bit 1 to mimic sensor as i2c slave
+ in local host (onchip sensors as i2c slave)
+ enable bit 2 to register i2c slave as xlink smbus slave
+ in local host.
+ remote-host:
+ minItems: 1
+ maxItems: 1
+ description: enable bit 0 to register sensor as i2c slave
+ in remote host (normal i2c client)
+ enable bit 1 to mimic sensor as i2c slave
+ in remote host (onchip sensors as i2c slave)
+ enable bit 2 to register i2c slave as xlink smbus slave
+ in remote host.
+
+ bus:
+ minItems: 1
+ maxItems: 1
+ description: i2c bus number for the i2c client device.
+
+ required:
+ - compatible
+ - reg
+ - local-host
+ - remote-host
+ - bus
+
+required:
+ - compatible
+ - reg
+ - xlink_chan
+ - i2c_xlink_chan
+
+additionalProperties: false
+
+examples:
+ - |
+ hddl_dev: hddl@20320000 {
+ compatible = "intel,hddl-client";
+ #address-cells = <2>;
+ #size-cells = <2>;
+ status = "disabled";
+ reg = <0x0 0x20320000 0x0 0x800>;
+ xlink_chan = <1080>;
+ i2c_xlink_chan = <1081>;
+ kmb_xlink_tj {
+ status = "okay";
+ compatible = "intel_tsens";
+ local-host = <0x3>;
+ remote-host = <0x3>;
+ bus = <0x1>;
+ };
+ };
--
2.17.1

2021-01-08 21:33:22

by mark gross

[permalink] [raw]
Subject: [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface

From: Srikanth Thokala <[email protected]>

Provide an interface for the XLink layer to interact with the XLink
PCIe transport layer on both the local host and the remote host.
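
As a rough illustration, a kernel client could use these calls to
enumerate devices and connect to the ready ones. The 8-entry list size
and the calling context are assumptions; the three functions and the
_XLINK_DEV_READY state come from this patch and xlink_drv_inf.h:

    #include <linux/types.h>
    #include <linux/xlink_drv_inf.h>

    static void xpcie_scan_example(void)
    {
            u32 ids[8];             /* caller-sized list (assumption) */
            u32 num = 0, status;
            u32 i;

            if (xlink_pcie_get_device_list(ids, &num))
                    return;

            for (i = 0; i < num && i < 8; i++) {
                    if (xlink_pcie_get_device_status(ids[i], &status))
                            continue;
                    if (status == _XLINK_DEV_READY)
                            xlink_pcie_connect(ids[i]);
            }
    }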

Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Srikanth Thokala <[email protected]>
---
drivers/misc/xlink-pcie/common/interface.c | 109 +++++++++++++++++++
drivers/misc/xlink-pcie/local_host/Makefile | 1 +
drivers/misc/xlink-pcie/remote_host/Makefile | 1 +
3 files changed, 111 insertions(+)
create mode 100644 drivers/misc/xlink-pcie/common/interface.c

diff --git a/drivers/misc/xlink-pcie/common/interface.c b/drivers/misc/xlink-pcie/common/interface.c
new file mode 100644
index 000000000000..56c1d9ed9d8f
--- /dev/null
+++ b/drivers/misc/xlink-pcie/common/interface.c
@@ -0,0 +1,109 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#include <linux/xlink_drv_inf.h>
+
+#include "core.h"
+#include "xpcie.h"
+
+/* Define xpcie driver interface API */
+int xlink_pcie_get_device_list(u32 *sw_device_id_list, u32 *num_devices)
+{
+ if (!sw_device_id_list || !num_devices)
+ return -EINVAL;
+
+ *num_devices = intel_xpcie_get_device_num(sw_device_id_list);
+
+ return 0;
+}
+EXPORT_SYMBOL(xlink_pcie_get_device_list);
+
+int xlink_pcie_get_device_name(u32 sw_device_id, char *device_name,
+ size_t name_size)
+{
+ if (!device_name)
+ return -EINVAL;
+
+ return intel_xpcie_get_device_name_by_id(sw_device_id,
+ device_name, name_size);
+}
+EXPORT_SYMBOL(xlink_pcie_get_device_name);
+
+int xlink_pcie_get_device_status(u32 sw_device_id, u32 *device_status)
+{
+ u32 status;
+ int rc;
+
+ if (!device_status)
+ return -EINVAL;
+
+ rc = intel_xpcie_get_device_status_by_id(sw_device_id, &status);
+ if (rc)
+ return rc;
+
+ switch (status) {
+ case XPCIE_STATUS_READY:
+ case XPCIE_STATUS_RUN:
+ *device_status = _XLINK_DEV_READY;
+ break;
+ case XPCIE_STATUS_ERROR:
+ *device_status = _XLINK_DEV_ERROR;
+ break;
+ case XPCIE_STATUS_RECOVERY:
+ *device_status = _XLINK_DEV_RECOVERY;
+ break;
+ case XPCIE_STATUS_OFF:
+ *device_status = _XLINK_DEV_OFF;
+ break;
+ default:
+ *device_status = _XLINK_DEV_BUSY;
+ break;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(xlink_pcie_get_device_status);
+
+int xlink_pcie_boot_device(u32 sw_device_id, const char *binary_name)
+{
+ return 0;
+}
+EXPORT_SYMBOL(xlink_pcie_boot_device);
+
+int xlink_pcie_connect(u32 sw_device_id)
+{
+ return intel_xpcie_pci_connect_device(sw_device_id);
+}
+EXPORT_SYMBOL(xlink_pcie_connect);
+
+int xlink_pcie_read(u32 sw_device_id, void *data, size_t *const size,
+ u32 timeout)
+{
+ if (!data || !size)
+ return -EINVAL;
+
+ return intel_xpcie_pci_read(sw_device_id, data, size, timeout);
+}
+EXPORT_SYMBOL(xlink_pcie_read);
+
+int xlink_pcie_write(u32 sw_device_id, void *data, size_t *const size,
+ u32 timeout)
+{
+ if (!data || !size)
+ return -EINVAL;
+
+ return intel_xpcie_pci_write(sw_device_id, data, size, timeout);
+}
+EXPORT_SYMBOL(xlink_pcie_write);
+
+int xlink_pcie_reset_device(u32 sw_device_id)
+{
+ return intel_xpcie_pci_reset_device(sw_device_id);
+}
+EXPORT_SYMBOL(xlink_pcie_reset_device);
diff --git a/drivers/misc/xlink-pcie/local_host/Makefile b/drivers/misc/xlink-pcie/local_host/Makefile
index 65df94c7e860..16bb1e7345ac 100644
--- a/drivers/misc/xlink-pcie/local_host/Makefile
+++ b/drivers/misc/xlink-pcie/local_host/Makefile
@@ -3,3 +3,4 @@ mxlk_ep-objs := epf.o
mxlk_ep-objs += dma.o
mxlk_ep-objs += core.o
mxlk_ep-objs += ../common/util.o
+mxlk_ep-objs += ../common/interface.o
diff --git a/drivers/misc/xlink-pcie/remote_host/Makefile b/drivers/misc/xlink-pcie/remote_host/Makefile
index e8074dbb1161..088e121ad46e 100644
--- a/drivers/misc/xlink-pcie/remote_host/Makefile
+++ b/drivers/misc/xlink-pcie/remote_host/Makefile
@@ -3,3 +3,4 @@ mxlk-objs := main.o
mxlk-objs += pci.o
mxlk-objs += core.o
mxlk-objs += ../common/util.o
+mxlk-objs += ../common/interface.o
--
2.17.1

2021-01-08 21:33:29

by mark gross

[permalink] [raw]
Subject: [PATCH v2 23/34] xlink-core: add async channel and events

From: Seamus Kelly <[email protected]>

Enable asynchronous channel and event communication.

Add APIs:
    data ready callback:
        The xLink Data Ready Callback function is used to
        register a callback function that is invoked when data
        is ready to be read from a channel.
    data consumed callback:
        The xLink Data Consumed Callback function is used to
        register a callback function that is invoked when data
        is consumed by the peer node on a channel.
Add event notification handling, including APIs:
    register device event:
        The xLink Register Device Event function is used to
        register a callback for notification of certain system
        events. Currently XLink supports 4 such events [0-3],
        whose meaning is system dependent. Registering for an
        event means that the callback is called when the event
        occurs, with 2 parameters: the sw_device_id of the
        device that triggered the event and the event number
        [0-3]. A usage sketch follows this list.
    unregister device event:
        The xLink Unregister Device Event function is used to
        unregister events that have previously been registered
        through the register device event API.
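
To make the flow concrete, here is a minimal in-kernel sketch of
registering for events 0 and 1 on an already connected handle. The
handle setup and the callback body are placeholders; only
xlink_register_device_event() and the callback signature come from
this patch:

    #include <linux/kernel.h>
    #include <linux/xlink.h>

    static int my_event_cb(u32 sw_device_id, u32 event_type)
    {
            /* invoked when the device raises a registered event */
            pr_info("device 0x%x raised event %u\n",
                    sw_device_id, event_type);
            return 0;
    }

    static void register_events_example(struct xlink_handle *handle)
    {
            u32 events[2] = { 0, 1 }; /* system-dependent event numbers */

            if (xlink_register_device_event(handle, events, 2,
                                            my_event_cb) != X_LINK_SUCCESS)
                    pr_err("xlink event registration failed\n");
    }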

Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Seamus Kelly <[email protected]>
---
drivers/misc/xlink-core/xlink-core.c | 497 ++++++++++++++++----
drivers/misc/xlink-core/xlink-core.h | 11 +-
drivers/misc/xlink-core/xlink-defs.h | 6 +-
drivers/misc/xlink-core/xlink-dispatcher.c | 53 +--
drivers/misc/xlink-core/xlink-ioctl.c | 146 +++++-
drivers/misc/xlink-core/xlink-ioctl.h | 6 +
drivers/misc/xlink-core/xlink-multiplexer.c | 176 +++++--
drivers/misc/xlink-core/xlink-platform.c | 27 ++
include/linux/xlink.h | 15 +-
9 files changed, 757 insertions(+), 180 deletions(-)

diff --git a/drivers/misc/xlink-core/xlink-core.c b/drivers/misc/xlink-core/xlink-core.c
index f30ac584a01c..8c1fd3f54afa 100644
--- a/drivers/misc/xlink-core/xlink-core.c
+++ b/drivers/misc/xlink-core/xlink-core.c
@@ -55,6 +55,8 @@ static struct cdev xlink_cdev;

static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg);

+static struct mutex dev_event_lock;
+
static const struct file_operations fops = {
.owner = THIS_MODULE,
.unlocked_ioctl = xlink_ioctl,
@@ -66,14 +68,75 @@ struct xlink_link {
struct kref refcount;
};

+struct xlink_attr {
+ unsigned long value;
+ u32 sw_dev_id;
+};
+
struct keembay_xlink_dev {
struct platform_device *pdev;
struct xlink_link links[XLINK_MAX_CONNECTIONS];
u32 nmb_connected_links;
struct mutex lock; // protect access to xlink_dev
+ struct xlink_attr eventx[4];
+};
+
+struct event_info {
+ struct list_head list;
+ u32 sw_device_id;
+ u32 event_type;
+ u32 user_flag;
+ xlink_device_event_cb event_notif_fn;
};

-static u8 volbuf[XLINK_MAX_BUF_SIZE]; // buffer for volatile transactions
+// sysfs attribute functions
+
+static ssize_t eventx_show(struct device *dev, struct device_attribute *attr,
+ int index, char *buf)
+{
+ struct keembay_xlink_dev *xlink_dev = dev_get_drvdata(dev);
+ struct xlink_attr *a = &xlink_dev->eventx[index];
+
+ return sysfs_emit(buf, "0x%x 0x%lx\n", a->sw_dev_id, a->value);
+}
+
+static ssize_t event0_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ return eventx_show(dev, attr, 0, buf);
+}
+
+static ssize_t event1_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ return eventx_show(dev, attr, 1, buf);
+}
+
+static ssize_t event2_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ return eventx_show(dev, attr, 2, buf);
+}
+
+static ssize_t event3_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ return eventx_show(dev, attr, 3, buf);
+}
+
+static DEVICE_ATTR_RO(event0);
+static DEVICE_ATTR_RO(event1);
+static DEVICE_ATTR_RO(event2);
+static DEVICE_ATTR_RO(event3);
+static struct attribute *xlink_sysfs_entries[] = {
+ &dev_attr_event0.attr,
+ &dev_attr_event1.attr,
+ &dev_attr_event2.attr,
+ &dev_attr_event3.attr,
+ NULL,
+};
+
+static const struct attribute_group xlink_sysfs_group = {
+ .attrs = xlink_sysfs_entries,
+};
+
+static struct event_info ev_info;

/*
* global variable pointing to our xlink device.
@@ -207,7 +270,14 @@ static int kmb_xlink_probe(struct platform_device *pdev)
dev_info(&pdev->dev, "Cannot add the device to the system\n");
goto r_class;
}
+ INIT_LIST_HEAD(&ev_info.list);

+ rc = devm_device_add_group(&pdev->dev, &xlink_sysfs_group);
+ if (rc) {
+ dev_err(&pdev->dev, "failed to create sysfs entries: %d\n", rc);
+ return rc;
+ }
+ mutex_init(&dev_event_lock);
return 0;

r_device:
@@ -231,7 +301,6 @@ static int kmb_xlink_remove(struct platform_device *pdev)
rc = xlink_multiplexer_destroy();
if (rc != X_LINK_SUCCESS)
pr_err("Multiplexer destroy failed\n");
- // stop dispatchers and destroy
rc = xlink_dispatcher_destroy();
if (rc != X_LINK_SUCCESS)
pr_err("Dispatcher destroy failed\n");
@@ -251,7 +320,6 @@ static int kmb_xlink_remove(struct platform_device *pdev)
* IOCTL function for User Space access to xlink kernel functions
*
*/
-
static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
int rc;
@@ -263,6 +331,12 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
case XL_OPEN_CHANNEL:
rc = ioctl_open_channel(arg);
break;
+ case XL_DATA_READY_CALLBACK:
+ rc = ioctl_data_ready_callback(arg);
+ break;
+ case XL_DATA_CONSUMED_CALLBACK:
+ rc = ioctl_data_consumed_callback(arg);
+ break;
case XL_READ_DATA:
rc = ioctl_read_data(arg);
break;
@@ -275,6 +349,9 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
case XL_WRITE_VOLATILE:
rc = ioctl_write_volatile_data(arg);
break;
+ case XL_WRITE_CONTROL_DATA:
+ rc = ioctl_write_control_data(arg);
+ break;
case XL_RELEASE_DATA:
rc = ioctl_release_data(arg);
break;
@@ -285,10 +362,10 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
rc = ioctl_start_vpu(arg);
break;
case XL_STOP_VPU:
- rc = xlink_stop_vpu();
+ rc = ioctl_stop_vpu();
break;
case XL_RESET_VPU:
- rc = xlink_stop_vpu();
+ rc = ioctl_stop_vpu();
break;
case XL_DISCONNECT:
rc = ioctl_disconnect(arg);
@@ -314,6 +391,12 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
case XL_SET_DEVICE_MODE:
rc = ioctl_set_device_mode(arg);
break;
+ case XL_REGISTER_DEV_EVENT:
+ rc = ioctl_register_device_event(arg);
+ break;
+ case XL_UNREGISTER_DEV_EVENT:
+ rc = ioctl_unregister_device_event(arg);
+ break;
}
if (rc)
return -EIO;
@@ -387,14 +470,12 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
xlink->nmb_connected_links++;
kref_init(&link->refcount);
if (interface != IPC_INTERFACE) {
- // start dispatcher
rc = xlink_dispatcher_start(link->id, &link->handle);
if (rc) {
pr_err("dispatcher start failed\n");
goto r_cleanup;
}
}
- // initialize multiplexer connection
rc = xlink_multiplexer_connect(link->id);
if (rc) {
pr_err("multiplexer connect failed\n");
@@ -405,7 +486,6 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
link->handle.dev_type,
xlink->nmb_connected_links);
} else {
- // already connected
pr_info("dev 0x%x ALREADY connected - dev_type %d\n",
link->handle.sw_device_id,
link->handle.dev_type);
@@ -413,7 +493,6 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
*handle = link->handle;
}
mutex_unlock(&xlink->lock);
- // TODO: implement ping
return X_LINK_SUCCESS;

r_cleanup:
@@ -423,64 +502,109 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
}
EXPORT_SYMBOL(xlink_connect);

-enum xlink_error xlink_open_channel(struct xlink_handle *handle,
- u16 chan, enum xlink_opmode mode,
- u32 data_size, u32 timeout)
+enum xlink_error xlink_data_available_event(struct xlink_handle *handle,
+ u16 chan,
+ xlink_event data_available_event)
{
struct xlink_event *event;
struct xlink_link *link;
- int event_queued = 0;
enum xlink_error rc;
+ int event_queued = 0;
+ char origin = 'K';

if (!xlink || !handle)
return X_LINK_ERROR;

+ if (CHANNEL_USER_BIT_IS_SET(chan))
+ origin = 'U'; // function called from user space
+ CHANNEL_CLEAR_USER_BIT(chan); // restore proper channel value
+
link = get_link_by_sw_device_id(handle->sw_device_id);
if (!link)
return X_LINK_ERROR;
-
- event = xlink_create_event(link->id, XLINK_OPEN_CHANNEL_REQ,
- &link->handle, chan, data_size, timeout);
+ event = xlink_create_event(link->id, XLINK_DATA_READY_CALLBACK_REQ,
+ &link->handle, chan, 0, 0);
if (!event)
return X_LINK_ERROR;
-
- event->data = (void *)mode;
+ event->data = data_available_event;
+ event->callback_origin = origin;
+ if (!data_available_event)
+ event->calling_pid = NULL; // disable callbacks on this channel
+ else
+ event->calling_pid = current;
rc = xlink_multiplexer_tx(event, &event_queued);
if (!event_queued)
xlink_destroy_event(event);
return rc;
}
-EXPORT_SYMBOL(xlink_open_channel);
-
-enum xlink_error xlink_close_channel(struct xlink_handle *handle,
- u16 chan)
+EXPORT_SYMBOL(xlink_data_available_event);
+
+enum xlink_error xlink_data_consumed_event(struct xlink_handle *handle,
+ u16 chan,
+ xlink_event data_consumed_event)
{
struct xlink_event *event;
struct xlink_link *link;
enum xlink_error rc;
int event_queued = 0;
+ char origin = 'K';

if (!xlink || !handle)
return X_LINK_ERROR;

+ if (CHANNEL_USER_BIT_IS_SET(chan))
+ origin = 'U'; // function called from user space
+ CHANNEL_CLEAR_USER_BIT(chan); // restore proper channel value
+
link = get_link_by_sw_device_id(handle->sw_device_id);
if (!link)
return X_LINK_ERROR;
-
- event = xlink_create_event(link->id, XLINK_CLOSE_CHANNEL_REQ,
+ event = xlink_create_event(link->id, XLINK_DATA_CONSUMED_CALLBACK_REQ,
&link->handle, chan, 0, 0);
if (!event)
return X_LINK_ERROR;
+ event->data = data_consumed_event;
+ event->callback_origin = origin;
+ if (!data_consumed_event)
+ event->calling_pid = NULL; // disable callbacks on this channel
+ else
+ event->calling_pid = current;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ return rc;
+}
+EXPORT_SYMBOL(xlink_data_consumed_event);
+
+enum xlink_error xlink_open_channel(struct xlink_handle *handle,
+ u16 chan, enum xlink_opmode mode,
+ u32 data_size, u32 timeout)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ int event_queued = 0;
+ enum xlink_error rc;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_OPEN_CHANNEL_REQ,
+ &link->handle, chan, data_size, timeout);
+ if (!event)
+ return X_LINK_ERROR;

+ event->data = (void *)mode;
rc = xlink_multiplexer_tx(event, &event_queued);
if (!event_queued)
xlink_destroy_event(event);
return rc;
}
-EXPORT_SYMBOL(xlink_close_channel);
+EXPORT_SYMBOL(xlink_open_channel);

-enum xlink_error xlink_write_data(struct xlink_handle *handle,
- u16 chan, u8 const *pmessage, u32 size)
+enum xlink_error xlink_close_channel(struct xlink_handle *handle,
+ u16 chan)
{
struct xlink_event *event;
struct xlink_link *link;
@@ -490,38 +614,26 @@ enum xlink_error xlink_write_data(struct xlink_handle *handle,
if (!xlink || !handle)
return X_LINK_ERROR;

- if (size > XLINK_MAX_DATA_SIZE)
- return X_LINK_ERROR;
-
link = get_link_by_sw_device_id(handle->sw_device_id);
if (!link)
return X_LINK_ERROR;

- event = xlink_create_event(link->id, XLINK_WRITE_REQ, &link->handle,
- chan, size, 0);
+ event = xlink_create_event(link->id, XLINK_CLOSE_CHANNEL_REQ,
+ &link->handle, chan, 0, 0);
if (!event)
return X_LINK_ERROR;

- if (chan < XLINK_IPC_MAX_CHANNELS &&
- event->interface == IPC_INTERFACE) {
- /* only passing message address across IPC interface */
- event->data = &pmessage;
- rc = xlink_multiplexer_tx(event, &event_queued);
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
xlink_destroy_event(event);
- } else {
- event->data = (u8 *)pmessage;
- event->paddr = 0;
- rc = xlink_multiplexer_tx(event, &event_queued);
- if (!event_queued)
- xlink_destroy_event(event);
- }
+
return rc;
}
-EXPORT_SYMBOL(xlink_write_data);
+EXPORT_SYMBOL(xlink_close_channel);

-enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
- u16 chan, u8 const *pmessage,
- u32 size)
+static enum xlink_error do_xlink_write_data(struct xlink_handle *handle,
+ u16 chan, u8 const *pmessage,
+ u32 size, u32 user_flag)
{
struct xlink_event *event;
struct xlink_link *link;
@@ -544,48 +656,78 @@ enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
chan, size, 0);
if (!event)
return X_LINK_ERROR;
- event->user_data = 1;
+ event->user_data = user_flag;

if (chan < XLINK_IPC_MAX_CHANNELS &&
event->interface == IPC_INTERFACE) {
/* only passing message address across IPC interface */
- if (get_user(addr, (u32 __user *)pmessage)) {
- xlink_destroy_event(event);
- return X_LINK_ERROR;
+ if (user_flag) {
+ if (get_user(addr, (u32 __user *)pmessage)) {
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ event->data = &addr;
+ } else {
+ event->data = &pmessage;
}
- event->data = &addr;
rc = xlink_multiplexer_tx(event, &event_queued);
xlink_destroy_event(event);
} else {
- event->data = xlink_platform_allocate(&xlink->pdev->dev, &paddr,
- size,
- XLINK_PACKET_ALIGNMENT,
- XLINK_NORMAL_MEMORY);
- if (!event->data) {
- xlink_destroy_event(event);
- return X_LINK_ERROR;
- }
- if (copy_from_user(event->data, (void __user *)pmessage, size)) {
- xlink_platform_deallocate(&xlink->pdev->dev,
- event->data, paddr, size,
- XLINK_PACKET_ALIGNMENT,
- XLINK_NORMAL_MEMORY);
- xlink_destroy_event(event);
- return X_LINK_ERROR;
+ if (user_flag) {
+ event->data = xlink_platform_allocate(&xlink->pdev->dev, &paddr,
+ size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ if (!event->data) {
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ if (copy_from_user(event->data, (void __user *)pmessage, size)) {
+ xlink_platform_deallocate(&xlink->pdev->dev,
+ event->data, paddr, size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ event->paddr = paddr;
+ } else {
+ event->data = (u8 *)pmessage;
+ event->paddr = 0;
}
- event->paddr = paddr;
rc = xlink_multiplexer_tx(event, &event_queued);
if (!event_queued) {
- xlink_platform_deallocate(&xlink->pdev->dev,
- event->data, paddr, size,
- XLINK_PACKET_ALIGNMENT,
- XLINK_NORMAL_MEMORY);
+ if (user_flag) {
+ xlink_platform_deallocate(&xlink->pdev->dev,
+ event->data, paddr, size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ }
xlink_destroy_event(event);
}
}
return rc;
}

+enum xlink_error xlink_write_data(struct xlink_handle *handle,
+ u16 chan, u8 const *pmessage, u32 size)
+{
+ enum xlink_error rc = 0;
+
+ rc = do_xlink_write_data(handle, chan, pmessage, size, 0);
+ return rc;
+}
+EXPORT_SYMBOL(xlink_write_data);
+
+enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
+ u16 chan, u8 const *pmessage, u32 size)
+{
+ enum xlink_error rc = 0;
+
+ rc = do_xlink_write_data(handle, chan, pmessage, size, 1);
+ return rc;
+}
+
enum xlink_error xlink_write_control_data(struct xlink_handle *handle,
u16 chan, u8 const *pmessage,
u32 size)
@@ -614,16 +756,7 @@ enum xlink_error xlink_write_control_data(struct xlink_handle *handle,
}
EXPORT_SYMBOL(xlink_write_control_data);

-enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
- u16 chan, u8 const *message, u32 size)
-{
- enum xlink_error rc = 0;
-
- rc = do_xlink_write_volatile(handle, chan, message, size, 0);
- return rc;
-}
-
-enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
+static enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
u16 chan, u8 const *message,
u32 size, u32 user_flag)
{
@@ -668,6 +801,26 @@ enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
return rc;
}

+enum xlink_error xlink_write_volatile_user(struct xlink_handle *handle,
+ u16 chan, u8 const *message,
+ u32 size)
+{
+ enum xlink_error rc = 0;
+
+ rc = do_xlink_write_volatile(handle, chan, message, size, 1);
+ return rc;
+}
+
+enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
+ u16 chan, u8 const *message, u32 size)
+{
+ enum xlink_error rc = 0;
+
+ rc = do_xlink_write_volatile(handle, chan, message, size, 0);
+ return rc;
+}
+EXPORT_SYMBOL(xlink_write_volatile);
+
enum xlink_error xlink_read_data(struct xlink_handle *handle,
u16 chan, u8 **pmessage, u32 *size)
{
@@ -757,8 +910,8 @@ EXPORT_SYMBOL(xlink_release_data);
enum xlink_error xlink_disconnect(struct xlink_handle *handle)
{
struct xlink_link *link;
- int interface = NULL_INTERFACE;
- enum xlink_error rc = X_LINK_ERROR;
+ int interface;
+ enum xlink_error rc = 0;

if (!xlink || !handle)
return X_LINK_ERROR;
@@ -767,7 +920,6 @@ enum xlink_error xlink_disconnect(struct xlink_handle *handle)
if (!link)
return X_LINK_ERROR;

- // decrement refcount, if count is 0 lock mutex and disconnect
if (kref_put_mutex(&link->refcount, release_after_kref_put,
&xlink->lock)) {
// stop dispatcher
@@ -946,6 +1098,179 @@ enum xlink_error xlink_get_device_mode(struct xlink_handle *handle,
return rc;
}
EXPORT_SYMBOL(xlink_get_device_mode);
+
+static int xlink_device_event_handler(u32 sw_device_id, u32 event_type)
+{
+ struct event_info *events = NULL;
+ xlink_device_event_cb event_cb;
+ bool found = false;
+ char event_attr[7];
+
+ mutex_lock(&dev_event_lock);
+ // find sw_device_id, event_type in list
+ list_for_each_entry(events, &ev_info.list, list) {
+ if (events) {
+ if (events->sw_device_id == sw_device_id &&
+ events->event_type == event_type) {
+ event_cb = events->event_notif_fn;
+ found = true;
+ break;
+ }
+ }
+ }
+ if (found) {
+ if (events->user_flag) {
+ xlink->eventx[events->event_type].value = events->event_type;
+ xlink->eventx[events->event_type].sw_dev_id = sw_device_id;
+ sprintf(event_attr, "event%d", events->event_type);
+ sysfs_notify(&xlink->pdev->dev.kobj, NULL, event_attr);
+ } else {
+ if (event_cb) {
+ event_cb(sw_device_id, event_type);
+ } else {
+ pr_info("No callback found for sw_device_id : 0x%x event type %d\n",
+ sw_device_id, event_type);
+ mutex_unlock(&dev_event_lock);
+ return X_LINK_ERROR;
+ }
+ }
+ pr_info("sysfs_notify event %d swdev_id %xs\n",
+ events->event_type, sw_device_id);
+ }
+ mutex_unlock(&dev_event_lock);
+ return X_LINK_SUCCESS;
+}
+
+static bool event_registered(u32 sw_dev_id, u32 event)
+{
+ struct event_info *events = NULL;
+
+ list_for_each_entry(events, &ev_info.list, list) {
+ if (events) {
+ if (events->sw_device_id == sw_dev_id &&
+ events->event_type == event) {
+ return true;
+ }
+ }
+ }
+ return false;
+}
+
+static enum xlink_error do_xlink_register_device_event(struct xlink_handle *handle,
+ u32 *event_list,
+ u32 num_events,
+ xlink_device_event_cb event_notif_fn,
+ u32 user_flag)
+{
+ struct event_info *events;
+ u32 interface;
+ u32 event;
+ int i;
+
+ if (num_events == 0 || num_events > NUM_REG_EVENTS)
+ return X_LINK_ERROR;
+ for (i = 0; i < num_events; i++) {
+ events = kzalloc(sizeof(*events), GFP_KERNEL);
+ if (!events)
+ return X_LINK_ERROR;
+ event = *event_list;
+ events->sw_device_id = handle->sw_device_id;
+ events->event_notif_fn = event_notif_fn;
+ events->event_type = *event_list++;
+ events->user_flag = user_flag;
+ if (user_flag) {
+ /* only add to list once if userspace */
+ /* xlink userspace handles multi process callbacks */
+ if (event_registered(handle->sw_device_id, event)) {
+ pr_info("xlink-core: Event 0x%x - %d already registered\n",
+ handle->sw_device_id, event);
+ kfree(events);
+ continue;
+ }
+ }
+ pr_info("xlink-core:Events: sw_device_id 0x%x event %d fn %p user_flag %d\n",
+ events->sw_device_id, events->event_type,
+ events->event_notif_fn, events->user_flag);
+ list_add_tail(&events->list, &ev_info.list);
+ }
+ interface = get_interface_from_sw_device_id(handle->sw_device_id);
+ if (interface == NULL_INTERFACE)
+ return X_LINK_ERROR;
+ xlink_platform_register_for_events(interface, handle->sw_device_id,
+ xlink_device_event_handler);
+ return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_register_device_event_user(struct xlink_handle *handle,
+ u32 *event_list, u32 num_events,
+ xlink_device_event_cb event_notif_fn)
+{
+ enum xlink_error rc;
+
+ rc = do_xlink_register_device_event(handle, event_list, num_events,
+ event_notif_fn, 1);
+ return rc;
+}
+
+enum xlink_error xlink_register_device_event(struct xlink_handle *handle,
+ u32 *event_list, u32 num_events,
+ xlink_device_event_cb event_notif_fn)
+{
+ enum xlink_error rc;
+
+ rc = do_xlink_register_device_event(handle, event_list, num_events,
+ event_notif_fn, 0);
+ return rc;
+}
+EXPORT_SYMBOL(xlink_register_device_event);
+
+enum xlink_error xlink_unregister_device_event(struct xlink_handle *handle,
+ u32 *event_list,
+ u32 num_events)
+{
+ struct event_info *events = NULL;
+ u32 interface;
+ int found = 0;
+ int count = 0;
+ int i;
+
+ if (num_events == 0 || num_events > NUM_REG_EVENTS)
+ return X_LINK_ERROR;
+ for (i = 0; i < num_events; i++) {
+ list_for_each_entry(events, &ev_info.list, list) {
+ if (events->sw_device_id == handle->sw_device_id &&
+ events->event_type == event_list[i]) {
+ found = 1;
+ break;
+ }
+ }
+ if (!events || !found)
+ return X_LINK_ERROR;
+ pr_info("removing event %d for sw_device_id 0x%x\n",
+ events->event_type, events->sw_device_id);
+ list_del(&events->list);
+ kfree(events);
+ }
+ // check if any events left for this sw_device_id
+ // are still registered ( in list )
+ list_for_each_entry(events, &ev_info.list, list) {
+ if (events) {
+ if (events->sw_device_id == handle->sw_device_id) {
+ count++;
+ break;
+ }
+ }
+ }
+ if (count == 0) {
+ interface = get_interface_from_sw_device_id(handle->sw_device_id);
+ if (interface == NULL_INTERFACE)
+ return X_LINK_ERROR;
+ xlink_platform_unregister_for_events(interface, handle->sw_device_id);
+ }
+ return X_LINK_SUCCESS;
+}
+EXPORT_SYMBOL(xlink_unregister_device_event);
+
/* Device tree driver match. */
static const struct of_device_id kmb_xlink_of_match[] = {
{
diff --git a/drivers/misc/xlink-core/xlink-core.h b/drivers/misc/xlink-core/xlink-core.h
index 5ba7ac653bf7..ee10058a15ac 100644
--- a/drivers/misc/xlink-core/xlink-core.h
+++ b/drivers/misc/xlink-core/xlink-core.h
@@ -12,11 +12,14 @@

#define NUM_REG_EVENTS 4

-enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
- u16 chan, u8 const *message,
- u32 size, u32 user_flag);
-
enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
u16 chan, u8 const *pmessage,
u32 size);
+enum xlink_error xlink_register_device_event_user(struct xlink_handle *handle,
+ u32 *event_list,
+ u32 num_events,
+ xlink_device_event_cb event_notif_fn);
+enum xlink_error xlink_write_volatile_user(struct xlink_handle *handle,
+ u16 chan, u8 const *message,
+ u32 size);
#endif /* XLINK_CORE_H_ */
diff --git a/drivers/misc/xlink-core/xlink-defs.h b/drivers/misc/xlink-core/xlink-defs.h
index 8985f6631175..81aa3bfffcd3 100644
--- a/drivers/misc/xlink-core/xlink-defs.h
+++ b/drivers/misc/xlink-core/xlink-defs.h
@@ -35,7 +35,7 @@
#define CONTROL_CHANNEL_TIMEOUT_MS 0U // wait indefinitely
#define SIGXLNK 44 // signal XLink uses for callback signalling

-#define UNUSED(x) (void)(x)
+#define UNUSED(x) ((void)(x))

// the end of the IPC channel range (starting at zero)
#define XLINK_IPC_MAX_CHANNELS 1024
@@ -102,6 +102,8 @@ enum xlink_event_type {
XLINK_CLOSE_CHANNEL_REQ,
XLINK_PING_REQ,
XLINK_WRITE_CONTROL_REQ,
+ XLINK_DATA_READY_CALLBACK_REQ,
+ XLINK_DATA_CONSUMED_CALLBACK_REQ,
XLINK_REQ_LAST,
// response events
XLINK_WRITE_RESP = 0x10,
@@ -113,6 +115,8 @@ enum xlink_event_type {
XLINK_CLOSE_CHANNEL_RESP,
XLINK_PING_RESP,
XLINK_WRITE_CONTROL_RESP,
+ XLINK_DATA_READY_CALLBACK_RESP,
+ XLINK_DATA_CONSUMED_CALLBACK_RESP,
XLINK_RESP_LAST,
};

diff --git a/drivers/misc/xlink-core/xlink-dispatcher.c b/drivers/misc/xlink-core/xlink-dispatcher.c
index 11ef8e4110ca..bc2f184488ac 100644
--- a/drivers/misc/xlink-core/xlink-dispatcher.c
+++ b/drivers/misc/xlink-core/xlink-dispatcher.c
@@ -5,18 +5,18 @@
* Copyright (C) 2018-2019 Intel Corporation
*
*/
-#include <linux/init.h>
-#include <linux/module.h>
+#include <linux/completion.h>
#include <linux/device.h>
+#include <linux/init.h>
#include <linux/kernel.h>
-#include <linux/slab.h>
#include <linux/kthread.h>
#include <linux/list.h>
-#include <linux/semaphore.h>
+#include <linux/module.h>
#include <linux/mutex.h>
-#include <linux/completion.h>
-#include <linux/sched/signal.h>
#include <linux/platform_device.h>
+#include <linux/sched/signal.h>
+#include <linux/semaphore.h>
+#include <linux/slab.h>

#include "xlink-dispatcher.h"
#include "xlink-multiplexer.h"
@@ -34,18 +34,18 @@ enum dispatcher_state {

/* queue for dispatcher tx thread event handling */
struct event_queue {
+ struct list_head head; /* head of event linked list */
u32 count; /* number of events in the queue */
u32 capacity; /* capacity of events in the queue */
- struct list_head head; /* head of event linked list */
struct mutex lock; /* locks queue while accessing */
};

/* dispatcher servicing a single link to a device */
struct dispatcher {
u32 link_id; /* id of link being serviced */
+ int interface; /* underlying interface of link */
enum dispatcher_state state; /* state of the dispatcher */
struct xlink_handle *handle; /* xlink device handle */
- int interface; /* underlying interface of link */
struct task_struct *rxthread; /* kthread servicing rx */
struct task_struct *txthread; /* kthread servicing tx */
struct event_queue queue; /* xlink event queue */
@@ -82,7 +82,7 @@ static struct dispatcher *get_dispatcher_by_id(u32 id)

static u32 event_generate_id(void)
{
- static u32 id = 0xa; // TODO: temporary solution
+ static u32 id = 0xa;

return id++;
}
@@ -142,9 +142,6 @@ static int dispatcher_event_send(struct xlink_event *event)
size_t event_header_size = sizeof(event->header);
int rc;

- // write event header
- // printk(KERN_DEBUG "Sending event: type = 0x%x, id = 0x%x\n",
- // event->header.type, event->header.id);
rc = xlink_platform_write(event->interface,
event->handle->sw_device_id, &event->header,
&event_header_size, event->header.timeout, NULL);
@@ -159,8 +156,10 @@ static int dispatcher_event_send(struct xlink_event *event)
event->handle->sw_device_id, event->data,
&event->header.size, event->header.timeout,
NULL);
- if (rc)
+ if (rc) {
pr_err("Write data failed %d\n", rc);
+ return rc;
+ }
if (event->user_data == 1) {
if (event->paddr != 0) {
xlink_platform_deallocate(xlinkd->dev,
@@ -187,7 +186,6 @@ static int xlink_dispatcher_rxthread(void *context)
size_t size;
int rc;

- // printk(KERN_DEBUG "dispatcher rxthread started\n");
event = xlink_create_event(disp->link_id, 0, disp->handle, 0, 0, 0);
if (!event)
return -1;
@@ -214,7 +212,6 @@ static int xlink_dispatcher_rxthread(void *context)
}
}
}
- // printk(KERN_INFO "dispatcher rxthread stopped\n");
complete(&disp->rx_done);
do_exit(0);
return 0;
@@ -225,7 +222,6 @@ static int xlink_dispatcher_txthread(void *context)
struct dispatcher *disp = (struct dispatcher *)context;
struct xlink_event *event;

- // printk(KERN_DEBUG "dispatcher txthread started\n");
allow_signal(SIGTERM); // allow thread termination while waiting on sem
complete(&disp->tx_done);
while (!kthread_should_stop()) {
@@ -236,7 +232,6 @@ static int xlink_dispatcher_txthread(void *context)
dispatcher_event_send(event);
xlink_destroy_event(event); // free handled event
}
- // printk(KERN_INFO "dispatcher txthread stopped\n");
complete(&disp->tx_done);
do_exit(0);
return 0;
@@ -250,6 +245,7 @@ static int xlink_dispatcher_txthread(void *context)
enum xlink_error xlink_dispatcher_init(void *dev)
{
struct platform_device *plat_dev = (struct platform_device *)dev;
+ struct dispatcher *dsp;
int i;

xlinkd = kzalloc(sizeof(*xlinkd), GFP_KERNEL);
@@ -258,16 +254,16 @@ enum xlink_error xlink_dispatcher_init(void *dev)

xlinkd->dev = &plat_dev->dev;
for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
- xlinkd->dispatchers[i].link_id = i;
- sema_init(&xlinkd->dispatchers[i].event_sem, 0);
- init_completion(&xlinkd->dispatchers[i].rx_done);
- init_completion(&xlinkd->dispatchers[i].tx_done);
- INIT_LIST_HEAD(&xlinkd->dispatchers[i].queue.head);
- mutex_init(&xlinkd->dispatchers[i].queue.lock);
- xlinkd->dispatchers[i].queue.count = 0;
- xlinkd->dispatchers[i].queue.capacity =
- XLINK_EVENT_QUEUE_CAPACITY;
- xlinkd->dispatchers[i].state = XLINK_DISPATCHER_INIT;
+ dsp = &xlinkd->dispatchers[i];
+ dsp->link_id = i;
+ sema_init(&dsp->event_sem, 0);
+ init_completion(&dsp->rx_done);
+ init_completion(&dsp->tx_done);
+ INIT_LIST_HEAD(&dsp->queue.head);
+ mutex_init(&dsp->queue.lock);
+ dsp->queue.count = 0;
+ dsp->queue.capacity = XLINK_EVENT_QUEUE_CAPACITY;
+ dsp->state = XLINK_DISPATCHER_INIT;
}
mutex_init(&xlinkd->lock);

@@ -329,7 +325,7 @@ enum xlink_error xlink_dispatcher_event_add(enum xlink_event_origin origin,
struct dispatcher *disp;
int rc;

- // get dispatcher by handle
+ // get dispatcher by link id
disp = get_dispatcher_by_id(event->link_id);
if (!disp)
return X_LINK_ERROR;
@@ -433,7 +429,6 @@ enum xlink_error xlink_dispatcher_destroy(void)
}
xlink_destroy_event(event);
}
- // destroy dispatcher
mutex_destroy(&disp->queue.lock);
}
mutex_destroy(&xlinkd->lock);
diff --git a/drivers/misc/xlink-core/xlink-ioctl.c b/drivers/misc/xlink-core/xlink-ioctl.c
index 90947bbccfed..7822a7b35bb6 100644
--- a/drivers/misc/xlink-core/xlink-ioctl.c
+++ b/drivers/misc/xlink-core/xlink-ioctl.c
@@ -28,15 +28,6 @@ static int copy_result_to_user(u32 *where, int rc)
return rc;
}

-static enum xlink_error xlink_write_volatile_user(struct xlink_handle *handle,
- u16 chan, u8 const *message, u32 size)
-{
- enum xlink_error rc = 0;
-
- rc = do_xlink_write_volatile(handle, chan, message, size, 1);
- return rc;
-}
-
int ioctl_connect(unsigned long arg)
{
struct xlink_handle devh = {};
@@ -158,6 +149,28 @@ int ioctl_write_data(unsigned long arg)
return copy_result_to_user(wr.return_code, rc);
}

+int ioctl_write_control_data(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkwritedata wr = {};
+ u8 volbuf[XLINK_MAX_BUF_SIZE];
+ int rc = 0;
+
+ if (copy_from_user(&wr, (void __user *)arg,
+ sizeof(struct xlinkwritedata)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)wr.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ if (wr.size > XLINK_MAX_CONTROL_DATA_SIZE)
+ return -EFAULT;
+ if (copy_from_user(volbuf, (void __user *)wr.pmessage, wr.size))
+ return -EFAULT;
+ rc = xlink_write_control_data(&devh, wr.chan, volbuf, wr.size);
+
+ return copy_result_to_user(wr.return_code, rc);
+}
+
int ioctl_write_volatile_data(unsigned long arg)
{
struct xlink_handle devh = {};
@@ -242,6 +255,14 @@ int ioctl_start_vpu(unsigned long arg)
return copy_result_to_user(startvpu.return_code, rc);
}

+int ioctl_stop_vpu(void)
+{
+ int rc = 0;
+
+ rc = xlink_stop_vpu();
+ return rc;
+}
+
int ioctl_disconnect(unsigned long arg)
{
struct xlink_handle devh = {};
@@ -424,3 +445,110 @@ int ioctl_set_device_mode(unsigned long arg)

return copy_result_to_user(devm.return_code, rc);
}
+
+int ioctl_register_device_event(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkregdevevent regdevevent = {};
+ u32 num_events = 0;
+ u32 *ev_list;
+ int rc = 0;
+
+ if (copy_from_user(&regdevevent, (void __user *)arg,
+ sizeof(struct xlinkregdevevent)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)regdevevent.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ num_events = regdevevent.num_events;
+ if (num_events > 0 && num_events <= NUM_REG_EVENTS) {
+ ev_list = kzalloc((num_events * sizeof(u32)), GFP_KERNEL);
+ if (ev_list) {
+ if (copy_from_user(ev_list,
+ (void __user *)regdevevent.event_list,
+ (num_events * sizeof(u32)))) {
+ kfree(ev_list);
+ return -EFAULT;
+ }
+ rc = xlink_register_device_event_user(&devh,
+ ev_list,
+ num_events,
+ NULL);
+ kfree(ev_list);
+ } else {
+ rc = X_LINK_ERROR;
+ }
+ } else {
+ rc = X_LINK_ERROR;
+ }
+
+ return copy_result_to_user(regdevevent.return_code, rc);
+}
+
+int ioctl_unregister_device_event(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkregdevevent regdevevent = {};
+ u32 num_events = 0;
+ u32 *ev_list;
+ int rc = 0;
+
+ if (copy_from_user(&regdevevent, (void __user *)arg,
+ sizeof(struct xlinkregdevevent)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)regdevevent.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ num_events = regdevevent.num_events;
+ if (num_events <= NUM_REG_EVENTS) {
+ ev_list = kzalloc((num_events * sizeof(u32)), GFP_KERNEL);
+ if (!ev_list)
+ return -ENOMEM;
+ if (copy_from_user(ev_list,
+ (void __user *)regdevevent.event_list,
+ (num_events * sizeof(u32)))) {
+ kfree(ev_list);
+ return -EFAULT;
+ }
+ rc = xlink_unregister_device_event(&devh, ev_list, num_events);
+ kfree(ev_list);
+ } else {
+ rc = X_LINK_ERROR;
+ }
+
+ return copy_result_to_user(regdevevent.return_code, rc);
+}
+
+int ioctl_data_ready_callback(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkcallback cb = {};
+ int rc = 0;
+
+ if (copy_from_user(&cb, (void __user *)arg,
+ sizeof(struct xlinkcallback)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)cb.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ CHANNEL_SET_USER_BIT(cb.chan); // set MSbit for user space call
+ rc = xlink_data_available_event(&devh, cb.chan, cb.callback);
+
+ return copy_result_to_user(cb.return_code, rc);
+}
+
+int ioctl_data_consumed_callback(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkcallback cb = {};
+ int rc = 0;
+
+ if (copy_from_user(&cb, (void __user *)arg,
+ sizeof(struct xlinkcallback)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)cb.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ CHANNEL_SET_USER_BIT(cb.chan); // set MSbit for user space call
+ rc = xlink_data_consumed_event(&devh, cb.chan, cb.callback);
+
+ return copy_result_to_user(cb.return_code, rc);
+}
diff --git a/drivers/misc/xlink-core/xlink-ioctl.h b/drivers/misc/xlink-core/xlink-ioctl.h
index d016d8418f30..7818b676d488 100644
--- a/drivers/misc/xlink-core/xlink-ioctl.h
+++ b/drivers/misc/xlink-core/xlink-ioctl.h
@@ -14,10 +14,12 @@ int ioctl_open_channel(unsigned long arg);
int ioctl_read_data(unsigned long arg);
int ioctl_read_to_buffer(unsigned long arg);
int ioctl_write_data(unsigned long arg);
+int ioctl_write_control_data(unsigned long arg);
int ioctl_write_volatile_data(unsigned long arg);
int ioctl_release_data(unsigned long arg);
int ioctl_close_channel(unsigned long arg);
int ioctl_start_vpu(unsigned long arg);
+int ioctl_stop_vpu(void);
int ioctl_disconnect(unsigned long arg);
int ioctl_get_device_name(unsigned long arg);
int ioctl_get_device_list(unsigned long arg);
@@ -26,5 +28,9 @@ int ioctl_boot_device(unsigned long arg);
int ioctl_reset_device(unsigned long arg);
int ioctl_get_device_mode(unsigned long arg);
int ioctl_set_device_mode(unsigned long arg);
+int ioctl_register_device_event(unsigned long arg);
+int ioctl_unregister_device_event(unsigned long arg);
+int ioctl_data_ready_callback(unsigned long arg);
+int ioctl_data_consumed_callback(unsigned long arg);

#endif /* XLINK_IOCTL_H_ */
diff --git a/drivers/misc/xlink-core/xlink-multiplexer.c b/drivers/misc/xlink-core/xlink-multiplexer.c
index 48451dc30712..e09458b62c45 100644
--- a/drivers/misc/xlink-core/xlink-multiplexer.c
+++ b/drivers/misc/xlink-core/xlink-multiplexer.c
@@ -115,6 +115,38 @@ static struct xlink_multiplexer *xmux;
*
*/

+static enum xlink_error run_callback(struct open_channel *opchan,
+ void *callback, struct task_struct *pid)
+{
+ enum xlink_error rc = X_LINK_SUCCESS;
+ struct kernel_siginfo info;
+ void (*func)(int chan);
+ int ret;
+
+ if (opchan->callback_origin == 'U') { // user-space origin
+ if (pid) {
+ memset(&info, 0, sizeof(struct kernel_siginfo));
+ info.si_signo = SIGXLNK;
+ info.si_code = SI_QUEUE;
+ info.si_errno = opchan->id;
+ info.si_ptr = (void __user *)callback;
+ ret = send_sig_info(SIGXLNK, &info, pid);
+ if (ret < 0) {
+ pr_err("Unable to send signal %d\n", ret);
+ rc = X_LINK_ERROR;
+ }
+ } else {
+ pr_err("CHAN 0x%x -- calling_pid == NULL\n",
+ opchan->id);
+ rc = X_LINK_ERROR;
+ }
+ } else { // kernel origin
+ func = callback;
+ func(opchan->id);
+ }
+ return rc;
+}
+
static inline int chan_is_non_blocking_read(struct open_channel *opchan)
{
if (opchan->chan->mode == RXN_TXN || opchan->chan->mode == RXN_TXB)
@@ -151,7 +183,7 @@ static int is_channel_for_device(u16 chan, u32 sw_device_id,
enum xlink_dev_type dev_type)
{
struct xlink_channel_type const *chan_type = get_channel_type(chan);
- int interface = NULL_INTERFACE;
+ int interface;

if (chan_type) {
interface = get_interface_from_sw_device_id(sw_device_id);
@@ -181,13 +213,9 @@ static int is_enough_space_in_channel(struct open_channel *opchan,
}
}
if (opchan->tx_up_limit == 1) {
- if ((opchan->tx_fill_level + size)
- < ((opchan->chan->size / 100) * THR_LWR)) {
- opchan->tx_up_limit = 0;
- return 1;
- } else {
+ if ((opchan->tx_fill_level + size) >=
+ ((opchan->chan->size / 100) * THR_LWR))
return 0;
- }
}
return 1;
}
@@ -231,6 +259,8 @@ static int add_packet_to_channel(struct open_channel *opchan,
list_add_tail(&pkt->list, &queue->head);
queue->count++;
opchan->rx_fill_level += pkt->length;
+ } else {
+ return X_LINK_ERROR;
}
return X_LINK_SUCCESS;
}
@@ -262,9 +292,11 @@ static int release_packet_from_channel(struct open_channel *opchan,
} else {
// find packet in channel rx queue
list_for_each_entry(pkt, &queue->head, list) {
- if (pkt->data == addr) {
- packet_found = 1;
- break;
+ if (pkt) {
+ if (pkt->data == addr) {
+ packet_found = 1;
+ break;
+ }
}
}
}
@@ -629,16 +661,46 @@ enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
}
} else if (xmux->channels[link_id][chan].status == CHAN_OPEN_PEER) {
/* channel already open */
- xmux->channels[link_id][chan].status = CHAN_OPEN; // opened locally
- xmux->channels[link_id][chan].size = event->header.size;
- xmux->channels[link_id][chan].timeout = event->header.timeout;
- xmux->channels[link_id][chan].mode = (uintptr_t)event->data;
rc = multiplexer_open_channel(link_id, chan);
+ if (rc == X_LINK_SUCCESS) {
+ struct channel *xchan = &xmux->channels[link_id][chan];
+
+ xchan->status = CHAN_OPEN; // opened locally
+ xchan->size = event->header.size;
+ xchan->timeout = event->header.timeout;
+ xchan->mode = (uintptr_t)event->data;
+ }
} else {
/* channel already open */
rc = X_LINK_ALREADY_OPEN;
}
break;
+ case XLINK_DATA_READY_CALLBACK_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ opchan->ready_callback = event->data;
+ opchan->ready_calling_pid = event->calling_pid;
+ opchan->callback_origin = event->callback_origin;
+ pr_info("xlink ready callback process registered - %lx chan %d\n",
+ (uintptr_t)event->calling_pid, chan);
+ release_channel(opchan);
+ }
+ break;
+ case XLINK_DATA_CONSUMED_CALLBACK_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ opchan->consumed_callback = event->data;
+ opchan->consumed_calling_pid = event->calling_pid;
+ opchan->callback_origin = event->callback_origin;
+ pr_info("xlink consumed callback process registered - %lx chan %d\n",
+ (uintptr_t)event->calling_pid, chan);
+ release_channel(opchan);
+ }
+ break;
case XLINK_CLOSE_CHANNEL_REQ:
if (xmux->channels[link_id][chan].status == CHAN_OPEN) {
opchan = get_channel(link_id, chan);
@@ -709,7 +771,8 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
XLINK_PACKET_ALIGNMENT,
XLINK_NORMAL_MEMORY);
} else {
- pr_err("Fatal error: can't allocate memory in line:%d func:%s\n", __LINE__, __func__);
+ pr_err("Fatal error: can't allocate memory in line:%d func:%s\n",
+ __LINE__, __func__);
}
rc = X_LINK_COMMUNICATION_FAIL;
} else {
@@ -754,6 +817,14 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
xlink_dispatcher_event_add(EVENT_RX, event);
//complete regardless of mode/timeout
complete(&opchan->pkt_available);
+ // run callback
+ if (xmux->channels[link_id][chan].status == CHAN_OPEN &&
+ chan_is_non_blocking_read(opchan) &&
+ opchan->ready_callback) {
+ rc = run_callback(opchan, opchan->ready_callback,
+ opchan->ready_calling_pid);
+ break;
+ }
} else {
// failed to allocate buffer
rc = X_LINK_ERROR;
@@ -813,6 +884,13 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
xlink_dispatcher_event_add(EVENT_RX, event);
//complete regardless of mode/timeout
complete(&opchan->pkt_consumed);
+ // run callback
+ if (xmux->channels[link_id][chan].status == CHAN_OPEN &&
+ chan_is_non_blocking_write(opchan) &&
+ opchan->consumed_callback) {
+ rc = run_callback(opchan, opchan->consumed_callback,
+ opchan->consumed_calling_pid);
+ }
release_channel(opchan);
break;
case XLINK_RELEASE_REQ:
@@ -838,47 +916,47 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
rc = multiplexer_open_channel(link_id, chan);
if (rc) {
rc = X_LINK_ERROR;
- } else {
- opchan = get_channel(link_id, chan);
- if (!opchan) {
- rc = X_LINK_COMMUNICATION_FAIL;
- } else {
- xmux->channels[link_id][chan].status = CHAN_OPEN_PEER;
- complete(&opchan->opened);
- passthru_event = xlink_create_event(link_id,
- XLINK_OPEN_CHANNEL_RESP,
- event->handle,
- chan,
- 0,
- opchan->chan->timeout);
- if (!passthru_event) {
- rc = X_LINK_ERROR;
- release_channel(opchan);
- break;
- }
- xlink_dispatcher_event_add(EVENT_RX,
- passthru_event);
- }
+ break;
+ }
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ break;
+ }
+ xmux->channels[link_id][chan].status = CHAN_OPEN_PEER;
+ complete(&opchan->opened);
+ passthru_event = xlink_create_event(link_id,
+ XLINK_OPEN_CHANNEL_RESP,
+ event->handle,
+ chan,
+ 0,
+ opchan->chan->timeout);
+ if (!passthru_event) {
+ rc = X_LINK_ERROR;
release_channel(opchan);
+ break;
}
+ xlink_dispatcher_event_add(EVENT_RX,
+ passthru_event);
+ release_channel(opchan);
} else {
/* channel already open */
opchan = get_channel(link_id, chan);
if (!opchan) {
rc = X_LINK_COMMUNICATION_FAIL;
- } else {
- passthru_event = xlink_create_event(link_id,
- XLINK_OPEN_CHANNEL_RESP,
- event->handle,
- chan, 0, 0);
- if (!passthru_event) {
- release_channel(opchan);
- rc = X_LINK_ERROR;
- break;
- }
- xlink_dispatcher_event_add(EVENT_RX,
- passthru_event);
+ break;
+ }
+ passthru_event = xlink_create_event(link_id,
+ XLINK_OPEN_CHANNEL_RESP,
+ event->handle,
+ chan, 0, 0);
+ if (!passthru_event) {
+ release_channel(opchan);
+ rc = X_LINK_ERROR;
+ break;
}
+ xlink_dispatcher_event_add(EVENT_RX,
+ passthru_event);
release_channel(opchan);
}
rc = xlink_passthrough(event);
@@ -930,7 +1008,7 @@ enum xlink_error xlink_passthrough(struct xlink_event *event)
#ifdef CONFIG_XLINK_LOCAL_HOST
struct xlink_ipc_context ipc = {0};
phys_addr_t physaddr = 0;
- dma_addr_t vpuaddr = 0;
+ static dma_addr_t vpuaddr;
u32 timeout = 0;
u32 link_id;
u16 chan;
diff --git a/drivers/misc/xlink-core/xlink-platform.c b/drivers/misc/xlink-core/xlink-platform.c
index 56eb8da28a5f..b0076cb3671d 100644
--- a/drivers/misc/xlink-core/xlink-platform.c
+++ b/drivers/misc/xlink-core/xlink-platform.c
@@ -56,6 +56,11 @@ static inline int xlink_ipc_close_channel(u32 sw_device_id,
u32 channel)
{ return -1; }

+static inline int xlink_ipc_register_for_events(u32 sw_device_id,
+ int (*callback)(u32 sw_device_id, u32 event))
+{ return -1; }
+static inline int xlink_ipc_unregister_for_events(u32 sw_device_id)
+{ return -1; }
#endif /* CONFIG_XLINK_LOCAL_HOST */

/*
@@ -95,6 +100,13 @@ static int (*open_chan_fcts[NMB_OF_INTERFACES])(u32, u32) = {

static int (*close_chan_fcts[NMB_OF_INTERFACES])(u32, u32) = {
xlink_ipc_close_channel, NULL, NULL, NULL};
+static int (*register_for_events_fcts[NMB_OF_INTERFACES])(u32,
+ int (*callback)(u32 sw_device_id, u32 event)) = {
+ xlink_ipc_register_for_events,
+ xlink_pcie_register_device_event,
+ NULL, NULL};
+static int (*unregister_for_events_fcts[NMB_OF_INTERFACES])(u32) = {
+ xlink_ipc_unregister_for_events, xlink_pcie_unregister_device_event, NULL, NULL};

/*
* xlink low-level driver interface
@@ -207,6 +219,21 @@ int xlink_platform_close_channel(u32 interface, u32 sw_device_id,
return close_chan_fcts[interface](sw_device_id, channel);
}

+int xlink_platform_register_for_events(u32 interface, u32 sw_device_id,
+ xlink_device_event_cb event_notif_fn)
+{
+ if (interface >= NMB_OF_INTERFACES || !register_for_events_fcts[interface])
+ return -1;
+ return register_for_events_fcts[interface](sw_device_id, event_notif_fn);
+}
+
+int xlink_platform_unregister_for_events(u32 interface, u32 sw_device_id)
+{
+ if (interface >= NMB_OF_INTERFACES || !unregister_for_events_fcts[interface])
+ return -1;
+ return unregister_for_events_fcts[interface](sw_device_id);
+}
+
void *xlink_platform_allocate(struct device *dev, dma_addr_t *handle,
u32 size, u32 alignment,
enum xlink_memory_region region)
diff --git a/include/linux/xlink.h b/include/linux/xlink.h
index b00dbc719530..ac196ff85469 100644
--- a/include/linux/xlink.h
+++ b/include/linux/xlink.h
@@ -70,6 +70,12 @@ enum xlink_error xlink_open_channel(struct xlink_handle *handle,
u16 chan, enum xlink_opmode mode,
u32 data_size, u32 timeout);

+enum xlink_error xlink_data_available_event(struct xlink_handle *handle,
+ u16 chan,
+ xlink_event data_available_event);
+enum xlink_error xlink_data_consumed_event(struct xlink_handle *handle,
+ u16 chan,
+ xlink_event data_consumed_event);
enum xlink_error xlink_close_channel(struct xlink_handle *handle, u16 chan);

enum xlink_error xlink_write_data(struct xlink_handle *handle,
@@ -113,9 +119,14 @@ enum xlink_error xlink_set_device_mode(struct xlink_handle *handle,
enum xlink_error xlink_get_device_mode(struct xlink_handle *handle,
enum xlink_device_power_mode *power_mode);

-enum xlink_error xlink_start_vpu(char *filename); /* depreciated */
+enum xlink_error xlink_register_device_event(struct xlink_handle *handle,
+ u32 *event_list, u32 num_events,
+ xlink_device_event_cb event_notif_fn);
+enum xlink_error xlink_unregister_device_event(struct xlink_handle *handle,
+ u32 *event_list, u32 num_events);
+enum xlink_error xlink_start_vpu(char *filename); /* deprecated */

-enum xlink_error xlink_stop_vpu(void); /* depreciated */
+enum xlink_error xlink_stop_vpu(void); /* deprecated */

/* API functions to be implemented
*
--
2.17.1

2021-01-08 21:33:46

by mark gross

[permalink] [raw]
Subject: [PATCH v2 14/34] misc: xlink-pcie: rh: Add core communication logic

From: Srikanth Thokala <[email protected]>

Add logic to establish communication with the local host, which is done
through ring buffer management and MSI/doorbell interrupts.
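
In outline, each direction is a head/tail circular ring of transfer
descriptors: the producer fills slots at the tail and stops when advancing
the tail would collide with the head, the consumer drains slots from the
head, and an MSI/doorbell tells the peer that the indices moved. A minimal
sketch of that invariant, restating the XPCIE_CIRCULAR_INC logic used in
the patch below (not a verbatim excerpt):

#define CIRC_INC(i, n)	(((i) + 1) % (n))

/* Producer: claim the slot at tail; one slot stays free as the full marker. */
static bool ring_can_produce(u32 head, u32 tail, u32 ndesc)
{
	return CIRC_INC(tail, ndesc) != head;
}

/* Consumer: there is work to drain while head has not caught up with tail. */
static bool ring_can_consume(u32 head, u32 tail)
{
	return head != tail;
}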

Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Srikanth Thokala <[email protected]>
---
drivers/misc/xlink-pcie/common/core.h | 11 +-
drivers/misc/xlink-pcie/remote_host/Makefile | 2 +
drivers/misc/xlink-pcie/remote_host/core.c | 623 +++++++++++++++++++
drivers/misc/xlink-pcie/remote_host/pci.c | 48 +-
4 files changed, 672 insertions(+), 12 deletions(-)
create mode 100644 drivers/misc/xlink-pcie/remote_host/core.c

diff --git a/drivers/misc/xlink-pcie/common/core.h b/drivers/misc/xlink-pcie/common/core.h
index 84985ef41a64..eec8566c19d9 100644
--- a/drivers/misc/xlink-pcie/common/core.h
+++ b/drivers/misc/xlink-pcie/common/core.h
@@ -10,15 +10,11 @@
#ifndef XPCIE_CORE_HEADER_
#define XPCIE_CORE_HEADER_

-#include <linux/io.h>
-#include <linux/types.h>
-#include <linux/workqueue.h>
-#include <linux/slab.h>
-#include <linux/mutex.h>
-#include <linux/mempool.h>
#include <linux/dma-mapping.h>
-#include <linux/cache.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
#include <linux/wait.h>
+#include <linux/workqueue.h>

#include <linux/xlink_drv_inf.h>

@@ -64,6 +60,7 @@ struct xpcie_buf_desc {
struct xpcie_stream {
size_t frag;
struct xpcie_pipe pipe;
+ struct xpcie_buf_desc **ddr;
};

struct xpcie_list {
diff --git a/drivers/misc/xlink-pcie/remote_host/Makefile b/drivers/misc/xlink-pcie/remote_host/Makefile
index 96374a43023e..e8074dbb1161 100644
--- a/drivers/misc/xlink-pcie/remote_host/Makefile
+++ b/drivers/misc/xlink-pcie/remote_host/Makefile
@@ -1,3 +1,5 @@
obj-$(CONFIG_XLINK_PCIE_RH_DRIVER) += mxlk.o
mxlk-objs := main.o
mxlk-objs += pci.o
+mxlk-objs += core.o
+mxlk-objs += ../common/util.o
diff --git a/drivers/misc/xlink-pcie/remote_host/core.c b/drivers/misc/xlink-pcie/remote_host/core.c
new file mode 100644
index 000000000000..2e20e1490076
--- /dev/null
+++ b/drivers/misc/xlink-pcie/remote_host/core.c
@@ -0,0 +1,623 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#include "pci.h"
+
+#include "../common/core.h"
+#include "../common/util.h"
+
+static int intel_xpcie_map_dma(struct xpcie *xpcie, struct xpcie_buf_desc *bd,
+ int direction)
+{
+ struct xpcie_dev *xdev = container_of(xpcie, struct xpcie_dev, xpcie);
+ struct device *dev = &xdev->pci->dev;
+
+ bd->phys = dma_map_single(dev, bd->data, bd->length, direction);
+
+ return dma_mapping_error(dev, bd->phys);
+}
+
+static void intel_xpcie_unmap_dma(struct xpcie *xpcie,
+ struct xpcie_buf_desc *bd,
+ int direction)
+{
+ struct xpcie_dev *xdev = container_of(xpcie, struct xpcie_dev, xpcie);
+ struct device *dev = &xdev->pci->dev;
+
+ dma_unmap_single(dev, bd->phys, bd->length, direction);
+}
+
+static void intel_xpcie_txrx_cleanup(struct xpcie *xpcie)
+{
+ struct xpcie_interface *inf = &xpcie->interfaces[0];
+ struct xpcie_stream *tx = &xpcie->tx;
+ struct xpcie_stream *rx = &xpcie->rx;
+ struct xpcie_buf_desc *bd;
+ int index;
+
+ xpcie->stop_flag = true;
+ xpcie->no_tx_buffer = false;
+ inf->data_avail = true;
+ wake_up_interruptible(&xpcie->tx_waitq);
+ wake_up_interruptible(&inf->rx_waitq);
+ mutex_lock(&xpcie->wlock);
+ mutex_lock(&inf->rlock);
+
+ if (tx->ddr) {
+ for (index = 0; index < tx->pipe.ndesc; index++) {
+ struct xpcie_transfer_desc *td = tx->pipe.tdr + index;
+
+ bd = tx->ddr[index];
+ if (bd) {
+ intel_xpcie_unmap_dma(xpcie, bd, DMA_TO_DEVICE);
+ intel_xpcie_free_tx_bd(xpcie, bd);
+ intel_xpcie_set_td_address(td, 0);
+ intel_xpcie_set_td_length(td, 0);
+ }
+ }
+ kfree(tx->ddr);
+ }
+
+ if (rx->ddr) {
+ for (index = 0; index < rx->pipe.ndesc; index++) {
+ struct xpcie_transfer_desc *td = rx->pipe.tdr + index;
+
+ bd = rx->ddr[index];
+ if (bd) {
+ intel_xpcie_unmap_dma(xpcie,
+ bd, DMA_FROM_DEVICE);
+ intel_xpcie_free_rx_bd(xpcie, bd);
+ intel_xpcie_set_td_address(td, 0);
+ intel_xpcie_set_td_length(td, 0);
+ }
+ }
+ kfree(rx->ddr);
+ }
+
+ intel_xpcie_list_cleanup(&xpcie->tx_pool);
+ intel_xpcie_list_cleanup(&xpcie->rx_pool);
+
+ mutex_unlock(&inf->rlock);
+ mutex_unlock(&xpcie->wlock);
+}
+
+static int intel_xpcie_txrx_init(struct xpcie *xpcie,
+ struct xpcie_cap_txrx *cap)
+{
+ struct xpcie_stream *tx = &xpcie->tx;
+ struct xpcie_stream *rx = &xpcie->rx;
+ int tx_pool_size, rx_pool_size;
+ struct xpcie_buf_desc *bd;
+ int rc, index, ndesc;
+
+ xpcie->txrx = cap;
+ xpcie->fragment_size = intel_xpcie_ioread32(&cap->fragment_size);
+ xpcie->stop_flag = false;
+
+ tx->pipe.ndesc = intel_xpcie_ioread32(&cap->tx.ndesc);
+ tx->pipe.head = &cap->tx.head;
+ tx->pipe.tail = &cap->tx.tail;
+ tx->pipe.old = intel_xpcie_ioread32(&cap->tx.tail);
+ tx->pipe.tdr = (struct xpcie_transfer_desc *)(xpcie->mmio +
+ intel_xpcie_ioread32(&cap->tx.ring));
+
+ tx->ddr = kcalloc(tx->pipe.ndesc, sizeof(struct xpcie_buf_desc *),
+ GFP_KERNEL);
+ if (!tx->ddr) {
+ rc = -ENOMEM;
+ goto error;
+ }
+
+ rx->pipe.ndesc = intel_xpcie_ioread32(&cap->rx.ndesc);
+ rx->pipe.head = &cap->rx.head;
+ rx->pipe.tail = &cap->rx.tail;
+ rx->pipe.old = intel_xpcie_ioread32(&cap->rx.head);
+ rx->pipe.tdr = (struct xpcie_transfer_desc *)(xpcie->mmio +
+ intel_xpcie_ioread32(&cap->rx.ring));
+
+ rx->ddr = kcalloc(rx->pipe.ndesc, sizeof(struct xpcie_buf_desc *),
+ GFP_KERNEL);
+ if (!rx->ddr) {
+ rc = -ENOMEM;
+ goto error;
+ }
+
+ intel_xpcie_list_init(&xpcie->rx_pool);
+ rx_pool_size = roundup(SZ_32M, xpcie->fragment_size);
+ ndesc = rx_pool_size / xpcie->fragment_size;
+
+ for (index = 0; index < ndesc; index++) {
+ bd = intel_xpcie_alloc_bd(xpcie->fragment_size);
+ if (bd) {
+ intel_xpcie_list_put(&xpcie->rx_pool, bd);
+ } else {
+ rc = -ENOMEM;
+ goto error;
+ }
+ }
+
+ intel_xpcie_list_init(&xpcie->tx_pool);
+ tx_pool_size = roundup(SZ_32M, xpcie->fragment_size);
+ ndesc = tx_pool_size / xpcie->fragment_size;
+
+ for (index = 0; index < ndesc; index++) {
+ bd = intel_xpcie_alloc_bd(xpcie->fragment_size);
+ if (bd) {
+ intel_xpcie_list_put(&xpcie->tx_pool, bd);
+ } else {
+ rc = -ENOMEM;
+ goto error;
+ }
+ }
+
+ for (index = 0; index < rx->pipe.ndesc; index++) {
+ struct xpcie_transfer_desc *td = rx->pipe.tdr + index;
+
+ bd = intel_xpcie_alloc_rx_bd(xpcie);
+ if (!bd) {
+ rc = -ENOMEM;
+ goto error;
+ }
+
+ if (intel_xpcie_map_dma(xpcie, bd, DMA_FROM_DEVICE)) {
+ dev_err(xpcie_to_dev(xpcie), "failed to map rx bd\n");
+ rc = -ENOMEM;
+ goto error;
+ }
+
+ rx->ddr[index] = bd;
+ intel_xpcie_set_td_address(td, bd->phys);
+ intel_xpcie_set_td_length(td, bd->length);
+ }
+
+ return 0;
+
+error:
+ intel_xpcie_txrx_cleanup(xpcie);
+
+ return rc;
+}
+
+static int intel_xpcie_discover_txrx(struct xpcie *xpcie)
+{
+ struct xpcie_cap_txrx *cap;
+ int error;
+
+ cap = intel_xpcie_cap_find(xpcie, 0, XPCIE_CAP_TXRX);
+ if (cap)
+ error = intel_xpcie_txrx_init(xpcie, cap);
+ else
+ error = -EIO;
+
+ return error;
+}
+
+static void intel_xpcie_start_tx(struct xpcie *xpcie, unsigned long delay)
+{
+ queue_delayed_work(xpcie->tx_wq, &xpcie->tx_event, delay);
+}
+
+static void intel_xpcie_start_rx(struct xpcie *xpcie, unsigned long delay)
+{
+ queue_delayed_work(xpcie->rx_wq, &xpcie->rx_event, delay);
+}
+
+static void intel_xpcie_rx_event_handler(struct work_struct *work)
+{
+ struct xpcie *xpcie = container_of(work, struct xpcie, rx_event.work);
+ struct xpcie_dev *xdev = container_of(xpcie, struct xpcie_dev, xpcie);
+ struct xpcie_buf_desc *bd, *replacement = NULL;
+ unsigned long delay = msecs_to_jiffies(1);
+ struct xpcie_stream *rx = &xpcie->rx;
+ struct xpcie_transfer_desc *td;
+ u32 head, tail, ndesc, length;
+ u16 status, interface;
+ int rc;
+
+ if (intel_xpcie_get_device_status(xpcie) != XPCIE_STATUS_RUN)
+ return;
+
+ ndesc = rx->pipe.ndesc;
+ tail = intel_xpcie_get_tdr_tail(&rx->pipe);
+ head = intel_xpcie_get_tdr_head(&rx->pipe);
+
+ while (head != tail) {
+ td = rx->pipe.tdr + head;
+ bd = rx->ddr[head];
+
+ replacement = intel_xpcie_alloc_rx_bd(xpcie);
+ if (!replacement) {
+ delay = msecs_to_jiffies(20);
+ break;
+ }
+
+ rc = intel_xpcie_map_dma(xpcie, replacement, DMA_FROM_DEVICE);
+ if (rc) {
+ dev_err(xpcie_to_dev(xpcie),
+ "failed to map rx bd (%d)\n", rc);
+ intel_xpcie_free_rx_bd(xpcie, replacement);
+ break;
+ }
+
+ status = intel_xpcie_get_td_status(td);
+ interface = intel_xpcie_get_td_interface(td);
+ length = intel_xpcie_get_td_length(td);
+ intel_xpcie_unmap_dma(xpcie, bd, DMA_FROM_DEVICE);
+
+ if (unlikely(status != XPCIE_DESC_STATUS_SUCCESS) ||
+ unlikely(interface >= XPCIE_NUM_INTERFACES)) {
+ dev_err(xpcie_to_dev(xpcie),
+ "rx desc failure, status(%u), interface(%u)\n",
+ status, interface);
+ intel_xpcie_free_rx_bd(xpcie, bd);
+ } else {
+ bd->interface = interface;
+ bd->length = length;
+ bd->next = NULL;
+
+ intel_xpcie_add_bd_to_interface(xpcie, bd);
+ }
+
+ rx->ddr[head] = replacement;
+ intel_xpcie_set_td_address(td, replacement->phys);
+ intel_xpcie_set_td_length(td, replacement->length);
+ head = XPCIE_CIRCULAR_INC(head, ndesc);
+ }
+
+ if (intel_xpcie_get_tdr_head(&rx->pipe) != head) {
+ intel_xpcie_set_tdr_head(&rx->pipe, head);
+ intel_xpcie_pci_raise_irq(xdev, DATA_RECEIVED, 1);
+ }
+
+ if (!replacement)
+ intel_xpcie_start_rx(xpcie, delay);
+}
+
+static void intel_xpcie_tx_event_handler(struct work_struct *work)
+{
+ struct xpcie *xpcie = container_of(work, struct xpcie, tx_event.work);
+ struct xpcie_dev *xdev = container_of(xpcie, struct xpcie_dev, xpcie);
+ struct xpcie_stream *tx = &xpcie->tx;
+ struct xpcie_transfer_desc *td;
+ u32 head, tail, old, ndesc;
+ struct xpcie_buf_desc *bd;
+ size_t bytes, buffers;
+ u16 status;
+
+ if (intel_xpcie_get_device_status(xpcie) != XPCIE_STATUS_RUN)
+ return;
+
+ ndesc = tx->pipe.ndesc;
+ old = tx->pipe.old;
+ tail = intel_xpcie_get_tdr_tail(&tx->pipe);
+ head = intel_xpcie_get_tdr_head(&tx->pipe);
+
+ /* clean old entries first */
+ while (old != head) {
+ bd = tx->ddr[old];
+ td = tx->pipe.tdr + old;
+ status = intel_xpcie_get_td_status(td);
+ if (status != XPCIE_DESC_STATUS_SUCCESS)
+ dev_err(xpcie_to_dev(xpcie),
+ "detected tx desc failure (%u)\n", status);
+
+ intel_xpcie_unmap_dma(xpcie, bd, DMA_TO_DEVICE);
+ intel_xpcie_free_tx_bd(xpcie, bd);
+ tx->ddr[old] = NULL;
+ old = XPCIE_CIRCULAR_INC(old, ndesc);
+ }
+ tx->pipe.old = old;
+
+ /* add new entries */
+ while (XPCIE_CIRCULAR_INC(tail, ndesc) != head) {
+ bd = intel_xpcie_list_get(&xpcie->write);
+ if (!bd)
+ break;
+
+ td = tx->pipe.tdr + tail;
+
+ if (intel_xpcie_map_dma(xpcie, bd, DMA_TO_DEVICE)) {
+ dev_err(xpcie_to_dev(xpcie),
+ "dma mapping error bd addr %p, size %zu\n",
+ bd->data, bd->length);
+ break;
+ }
+
+ tx->ddr[tail] = bd;
+ intel_xpcie_set_td_address(td, bd->phys);
+ intel_xpcie_set_td_length(td, bd->length);
+ intel_xpcie_set_td_interface(td, bd->interface);
+ intel_xpcie_set_td_status(td, XPCIE_DESC_STATUS_ERROR);
+
+ tail = XPCIE_CIRCULAR_INC(tail, ndesc);
+ }
+
+ if (intel_xpcie_get_tdr_tail(&tx->pipe) != tail) {
+ intel_xpcie_set_tdr_tail(&tx->pipe, tail);
+ intel_xpcie_pci_raise_irq(xdev, DATA_SENT, 1);
+ }
+
+ intel_xpcie_list_info(&xpcie->write, &bytes, &buffers);
+ if (buffers)
+ xpcie->tx_pending = true;
+ else
+ xpcie->tx_pending = false;
+}
+
+static irqreturn_t intel_xpcie_interrupt(int irq, void *args)
+{
+ struct xpcie_dev *xdev = args;
+ struct xpcie *xpcie;
+
+ xpcie = &xdev->xpcie;
+
+ if (intel_xpcie_get_doorbell(xpcie, FROM_DEVICE, DATA_SENT)) {
+ intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, DATA_SENT, 0);
+ intel_xpcie_start_rx(xpcie, 0);
+ }
+ if (intel_xpcie_get_doorbell(xpcie, FROM_DEVICE, DATA_RECEIVED)) {
+ intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, DATA_RECEIVED, 0);
+ if (xpcie->tx_pending)
+ intel_xpcie_start_tx(xpcie, 0);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static int intel_xpcie_events_init(struct xpcie *xpcie)
+{
+ xpcie->rx_wq = alloc_ordered_workqueue(XPCIE_DRIVER_NAME,
+ WQ_MEM_RECLAIM | WQ_HIGHPRI);
+ if (!xpcie->rx_wq) {
+ dev_err(xpcie_to_dev(xpcie), "failed to allocate workqueue\n");
+ return -ENOMEM;
+ }
+
+ xpcie->tx_wq = alloc_ordered_workqueue(XPCIE_DRIVER_NAME,
+ WQ_MEM_RECLAIM | WQ_HIGHPRI);
+ if (!xpcie->tx_wq) {
+ dev_err(xpcie_to_dev(xpcie), "failed to allocate workqueue\n");
+ destroy_workqueue(xpcie->rx_wq);
+ return -ENOMEM;
+ }
+
+ INIT_DELAYED_WORK(&xpcie->rx_event, intel_xpcie_rx_event_handler);
+ INIT_DELAYED_WORK(&xpcie->tx_event, intel_xpcie_tx_event_handler);
+
+ return 0;
+}
+
+static void intel_xpcie_events_cleanup(struct xpcie *xpcie)
+{
+ cancel_delayed_work_sync(&xpcie->rx_event);
+ cancel_delayed_work_sync(&xpcie->tx_event);
+
+ destroy_workqueue(xpcie->rx_wq);
+ destroy_workqueue(xpcie->tx_wq);
+}
+
+int intel_xpcie_core_init(struct xpcie *xpcie)
+{
+ struct xpcie_dev *xdev = container_of(xpcie, struct xpcie_dev, xpcie);
+ int status, rc;
+
+ status = intel_xpcie_get_device_status(xpcie);
+ if (status != XPCIE_STATUS_RUN) {
+ dev_err(&xdev->pci->dev,
+ "device status not RUNNING (%d)\n", status);
+ rc = -EBUSY;
+ return rc;
+ }
+
+ if (intel_xpcie_ioread8(xpcie->mmio + XPCIE_MMIO_LEGACY_A0))
+ xpcie->legacy_a0 = true;
+
+ rc = intel_xpcie_events_init(xpcie);
+ if (rc)
+ return rc;
+
+ rc = intel_xpcie_discover_txrx(xpcie);
+ if (rc)
+ goto error_txrx;
+
+ intel_xpcie_interfaces_init(xpcie);
+
+ rc = intel_xpcie_pci_register_irq(xdev, &intel_xpcie_interrupt);
+ if (rc)
+ goto error_txrx;
+
+ intel_xpcie_set_host_status(xpcie, XPCIE_STATUS_RUN);
+
+ return 0;
+
+error_txrx:
+ intel_xpcie_events_cleanup(xpcie);
+ intel_xpcie_set_host_status(xpcie, XPCIE_STATUS_ERROR);
+
+ return rc;
+}
+
+void intel_xpcie_core_cleanup(struct xpcie *xpcie)
+{
+ if (xpcie->status == XPCIE_STATUS_RUN) {
+ intel_xpcie_set_host_status(xpcie, XPCIE_STATUS_UNINIT);
+ intel_xpcie_events_cleanup(xpcie);
+ intel_xpcie_interfaces_cleanup(xpcie);
+ intel_xpcie_txrx_cleanup(xpcie);
+ }
+}
+
+int intel_xpcie_core_read(struct xpcie *xpcie, void *buffer, size_t *length,
+ uint32_t timeout_ms)
+{
+ long jiffies_timeout = (long)msecs_to_jiffies(timeout_ms);
+ struct xpcie_interface *inf = &xpcie->interfaces[0];
+ unsigned long jiffies_start = jiffies;
+ struct xpcie_buf_desc *bd;
+ size_t remaining, len;
+ long jiffies_passed = 0;
+ int ret;
+
+ if (*length == 0)
+ return -EINVAL;
+
+ if (xpcie->status != XPCIE_STATUS_RUN)
+ return -ENODEV;
+
+ len = *length;
+ remaining = len;
+ *length = 0;
+
+ ret = mutex_lock_interruptible(&inf->rlock);
+ if (ret < 0)
+ return -EINTR;
+
+ do {
+ while (!inf->data_avail) {
+ mutex_unlock(&inf->rlock);
+ if (timeout_ms == 0) {
+ ret = wait_event_interruptible(inf->rx_waitq,
+ inf->data_avail);
+ } else {
+ ret =
+ wait_event_interruptible_timeout(inf->rx_waitq,
+ inf->data_avail,
+ jiffies_timeout -
+ jiffies_passed);
+ if (ret == 0)
+ return -ETIME;
+ }
+ if (ret < 0 || xpcie->stop_flag)
+ return -EINTR;
+
+ ret = mutex_lock_interruptible(&inf->rlock);
+ if (ret < 0)
+ return -EINTR;
+ }
+
+ bd = (inf->partial_read) ? inf->partial_read :
+ intel_xpcie_list_get(&inf->read);
+ while (remaining && bd) {
+ size_t bcopy;
+
+ bcopy = min(remaining, bd->length);
+ memcpy(buffer, bd->data, bcopy);
+
+ buffer += bcopy;
+ remaining -= bcopy;
+ bd->data += bcopy;
+ bd->length -= bcopy;
+
+ if (bd->length == 0) {
+ intel_xpcie_free_rx_bd(xpcie, bd);
+ bd = intel_xpcie_list_get(&inf->read);
+ }
+ }
+
+ /* save for next time */
+ inf->partial_read = bd;
+
+ if (!bd)
+ inf->data_avail = false;
+
+ *length = len - remaining;
+
+ jiffies_passed = (long)jiffies - (long)jiffies_start;
+ } while (remaining > 0 && (jiffies_passed < jiffies_timeout ||
+ timeout_ms == 0));
+
+ mutex_unlock(&inf->rlock);
+
+ return 0;
+}
+
+int intel_xpcie_core_write(struct xpcie *xpcie, void *buffer, size_t *length,
+ uint32_t timeout_ms)
+{
+ long jiffies_timeout = (long)msecs_to_jiffies(timeout_ms);
+ struct xpcie_interface *inf = &xpcie->interfaces[0];
+ unsigned long jiffies_start = jiffies;
+ struct xpcie_buf_desc *bd, *head;
+ long jiffies_passed = 0;
+ size_t remaining, len;
+ int ret;
+
+ if (*length == 0)
+ return -EINVAL;
+
+ if (xpcie->status != XPCIE_STATUS_RUN)
+ return -ENODEV;
+
+ len = *length;
+ remaining = len;
+ *length = 0;
+
+ ret = mutex_lock_interruptible(&xpcie->wlock);
+ if (ret < 0)
+ return -EINTR;
+
+ do {
+ bd = intel_xpcie_alloc_tx_bd(xpcie);
+ head = bd;
+ while (!head) {
+ mutex_unlock(&xpcie->wlock);
+ if (timeout_ms == 0) {
+ ret =
+ wait_event_interruptible(xpcie->tx_waitq,
+ !xpcie->no_tx_buffer);
+ } else {
+ ret =
+ wait_event_interruptible_timeout(xpcie->tx_waitq,
+ !xpcie->no_tx_buffer,
+ jiffies_timeout -
+ jiffies_passed);
+ if (ret == 0)
+ return -ETIME;
+ }
+ if (ret < 0 || xpcie->stop_flag)
+ return -EINTR;
+
+ ret = mutex_lock_interruptible(&xpcie->wlock);
+ if (ret < 0)
+ return -EINTR;
+
+ bd = intel_xpcie_alloc_tx_bd(xpcie);
+ head = bd;
+ }
+
+ while (remaining && bd) {
+ size_t bcopy;
+
+ bcopy = min(bd->length, remaining);
+ memcpy(bd->data, buffer, bcopy);
+
+ buffer += bcopy;
+ remaining -= bcopy;
+ bd->length = bcopy;
+ bd->interface = inf->id;
+
+ if (remaining) {
+ bd->next = intel_xpcie_alloc_tx_bd(xpcie);
+ bd = bd->next;
+ }
+ }
+
+ intel_xpcie_list_put(&xpcie->write, head);
+ intel_xpcie_start_tx(xpcie, 0);
+
+ *length = len - remaining;
+
+ jiffies_passed = (long)jiffies - (long)jiffies_start;
+ } while (remaining > 0 && (jiffies_passed < jiffies_timeout ||
+ timeout_ms == 0));
+
+ mutex_unlock(&xpcie->wlock);
+
+ return 0;
+}
diff --git a/drivers/misc/xlink-pcie/remote_host/pci.c b/drivers/misc/xlink-pcie/remote_host/pci.c
index 0ca0755b591f..f92a78f41324 100644
--- a/drivers/misc/xlink-pcie/remote_host/pci.c
+++ b/drivers/misc/xlink-pcie/remote_host/pci.c
@@ -208,10 +208,8 @@ static void xpcie_device_poll(struct work_struct *work)
{
struct xpcie_dev *xdev = container_of(work, struct xpcie_dev,
wait_event.work);
- u32 dev_status = intel_xpcie_ioread32(xdev->xpcie.mmio +
- XPCIE_MMIO_DEV_STATUS);

- if (dev_status < XPCIE_STATUS_RUN)
+ if (intel_xpcie_get_device_status(&xdev->xpcie) < XPCIE_STATUS_RUN)
schedule_delayed_work(&xdev->wait_event,
msecs_to_jiffies(100));
else
@@ -224,9 +222,10 @@ static int intel_xpcie_pci_prepare_dev_reset(struct xpcie_dev *xdev,
if (mutex_lock_interruptible(&xdev->lock))
return -EINTR;

- if (xdev->core_irq_callback)
+ if (xdev->core_irq_callback) {
xdev->core_irq_callback = NULL;
-
+ intel_xpcie_core_cleanup(&xdev->xpcie);
+ }
xdev->xpcie.status = XPCIE_STATUS_OFF;
if (notify)
intel_xpcie_pci_raise_irq(xdev, DEV_EVENT, REQUEST_RESET);
@@ -326,6 +325,8 @@ int intel_xpcie_pci_cleanup(struct xpcie_dev *xdev)
xdev->core_irq_callback = NULL;
intel_xpcie_pci_irq_cleanup(xdev);

+ intel_xpcie_core_cleanup(&xdev->xpcie);
+
intel_xpcie_pci_unmap_bar(xdev);
pci_release_regions(xdev->pci);
pci_disable_device(xdev->pci);
@@ -359,6 +360,7 @@ int intel_xpcie_pci_raise_irq(struct xpcie_dev *xdev,
{
u16 pci_status;

+ intel_xpcie_set_doorbell(&xdev->xpcie, TO_DEVICE, type, value);
pci_read_config_word(xdev->pci, PCI_STATUS, &pci_status);

return 0;
@@ -445,7 +447,43 @@ int intel_xpcie_pci_connect_device(u32 id)
goto connect_cleanup;
}

+ rc = intel_xpcie_core_init(&xdev->xpcie);
+ if (rc < 0) {
+ dev_err(&xdev->pci->dev, "failed to sync with device\n");
+ goto connect_cleanup;
+ }
+
connect_cleanup:
mutex_unlock(&xdev->lock);
return rc;
}
+
+int intel_xpcie_pci_read(u32 id, void *data, size_t *size, u32 timeout)
+{
+ struct xpcie_dev *xdev = intel_xpcie_get_device_by_id(id);
+
+ if (!xdev)
+ return -ENODEV;
+
+ return intel_xpcie_core_read(&xdev->xpcie, data, size, timeout);
+}
+
+int intel_xpcie_pci_write(u32 id, void *data, size_t *size, u32 timeout)
+{
+ struct xpcie_dev *xdev = intel_xpcie_get_device_by_id(id);
+
+ if (!xdev)
+ return -ENODEV;
+
+ return intel_xpcie_core_write(&xdev->xpcie, data, size, timeout);
+}
+
+int intel_xpcie_pci_reset_device(u32 id)
+{
+ struct xpcie_dev *xdev = intel_xpcie_get_device_by_id(id);
+
+ if (!xdev)
+ return -ENODEV;
+
+ return intel_xpcie_pci_prepare_dev_reset(xdev, true);
+}
--
2.17.1

2021-01-08 21:34:19

by mark gross

[permalink] [raw]
Subject: [PATCH v2 16/34] misc: xlink-pcie: Add asynchronous event notification support for XLink

From: Srikanth Thokala <[email protected]>

Add support to notify the XLink layer upon PCIe link UP/DOWN events

Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Srikanth Thokala <[email protected]>
---
drivers/misc/xlink-pcie/common/core.h | 3 ++
drivers/misc/xlink-pcie/common/interface.c | 17 ++++++++++
drivers/misc/xlink-pcie/local_host/core.c | 11 +++++++
drivers/misc/xlink-pcie/remote_host/main.c | 3 ++
drivers/misc/xlink-pcie/remote_host/pci.c | 36 ++++++++++++++++++++++
drivers/misc/xlink-pcie/remote_host/pci.h | 3 ++
include/linux/xlink_drv_inf.h | 12 ++++++++
7 files changed, 85 insertions(+)

diff --git a/drivers/misc/xlink-pcie/common/core.h b/drivers/misc/xlink-pcie/common/core.h
index eec8566c19d9..34b6c268aac5 100644
--- a/drivers/misc/xlink-pcie/common/core.h
+++ b/drivers/misc/xlink-pcie/common/core.h
@@ -241,4 +241,7 @@ int intel_xpcie_pci_connect_device(u32 id);
int intel_xpcie_pci_read(u32 id, void *data, size_t *size, u32 timeout);
int intel_xpcie_pci_write(u32 id, void *data, size_t *size, u32 timeout);
int intel_xpcie_pci_reset_device(u32 id);
+int intel_xpcie_pci_register_device_event(u32 sw_device_id,
+ xlink_device_event event_notif_fn);
+int intel_xpcie_pci_unregister_device_event(u32 sw_device_id);
#endif /* XPCIE_CORE_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/common/interface.c b/drivers/misc/xlink-pcie/common/interface.c
index 56c1d9ed9d8f..4ad291ff97c8 100644
--- a/drivers/misc/xlink-pcie/common/interface.c
+++ b/drivers/misc/xlink-pcie/common/interface.c
@@ -107,3 +107,20 @@ int xlink_pcie_reset_device(u32 sw_device_id)
return intel_xpcie_pci_reset_device(sw_device_id);
}
EXPORT_SYMBOL(xlink_pcie_reset_device);
+
+int xlink_pcie_register_device_event(u32 sw_device_id,
+ xlink_device_event event_notif_fn)
+{
+ if (!event_notif_fn)
+ return -EINVAL;
+
+ return intel_xpcie_pci_register_device_event(sw_device_id,
+ event_notif_fn);
+}
+EXPORT_SYMBOL(xlink_pcie_register_device_event);
+
+int xlink_pcie_unregister_device_event(u32 sw_device_id)
+{
+ return intel_xpcie_pci_unregister_device_event(sw_device_id);
+}
+EXPORT_SYMBOL(xlink_pcie_unregister_device_event);
diff --git a/drivers/misc/xlink-pcie/local_host/core.c b/drivers/misc/xlink-pcie/local_host/core.c
index 51fa25259515..a343e30d8b45 100644
--- a/drivers/misc/xlink-pcie/local_host/core.c
+++ b/drivers/misc/xlink-pcie/local_host/core.c
@@ -806,3 +806,14 @@ int intel_xpcie_pci_reset_device(u32 id)
{
return 0;
}
+
+int intel_xpcie_pci_register_device_event(u32 sw_device_id,
+ xlink_device_event event_notif_fn)
+{
+ return 0;
+}
+
+int intel_xpcie_pci_unregister_device_event(u32 sw_device_id)
+{
+ return 0;
+}
diff --git a/drivers/misc/xlink-pcie/remote_host/main.c b/drivers/misc/xlink-pcie/remote_host/main.c
index d88257dd2585..fd31ee3c153b 100644
--- a/drivers/misc/xlink-pcie/remote_host/main.c
+++ b/drivers/misc/xlink-pcie/remote_host/main.c
@@ -55,6 +55,8 @@ static int intel_xpcie_probe(struct pci_dev *pdev,
if (new_device)
intel_xpcie_list_add_device(xdev);

+ intel_xpcie_pci_notify_event(xdev, NOTIFY_DEVICE_CONNECTED);
+
return ret;
}

@@ -64,6 +66,7 @@ static void intel_xpcie_remove(struct pci_dev *pdev)

if (xdev) {
intel_xpcie_pci_cleanup(xdev);
+ intel_xpcie_pci_notify_event(xdev, NOTIFY_DEVICE_DISCONNECTED);
intel_xpcie_remove_device(xdev);
}
}
diff --git a/drivers/misc/xlink-pcie/remote_host/pci.c b/drivers/misc/xlink-pcie/remote_host/pci.c
index f92a78f41324..0046bff5f604 100644
--- a/drivers/misc/xlink-pcie/remote_host/pci.c
+++ b/drivers/misc/xlink-pcie/remote_host/pci.c
@@ -8,6 +8,7 @@
****************************************************************************/

#include <linux/mutex.h>
+#include <linux/pci.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
@@ -487,3 +488,38 @@ int intel_xpcie_pci_reset_device(u32 id)

return intel_xpcie_pci_prepare_dev_reset(xdev, true);
}
+
+int intel_xpcie_pci_register_device_event(u32 sw_device_id,
+ xlink_device_event event_notif_fn)
+{
+ struct xpcie_dev *xdev = intel_xpcie_get_device_by_id(sw_device_id);
+
+ if (!xdev)
+ return -ENODEV;
+
+ xdev->event_fn = event_notif_fn;
+
+ return 0;
+}
+
+int intel_xpcie_pci_unregister_device_event(u32 sw_device_id)
+{
+ struct xpcie_dev *xdev = intel_xpcie_get_device_by_id(sw_device_id);
+
+ if (!xdev)
+ return -ENODEV;
+
+ xdev->event_fn = NULL;
+
+ return 0;
+}
+
+void intel_xpcie_pci_notify_event(struct xpcie_dev *xdev,
+ enum xlink_device_event_type event_type)
+{
+ if (event_type >= NUM_EVENT_TYPE)
+ return;
+
+ if (xdev->event_fn)
+ xdev->event_fn(xdev->devid, event_type);
+}
diff --git a/drivers/misc/xlink-pcie/remote_host/pci.h b/drivers/misc/xlink-pcie/remote_host/pci.h
index 72de3701f83a..a05dedf36a12 100644
--- a/drivers/misc/xlink-pcie/remote_host/pci.h
+++ b/drivers/misc/xlink-pcie/remote_host/pci.h
@@ -38,6 +38,7 @@ struct xpcie_dev {
irq_handler_t core_irq_callback;

struct xpcie xpcie;
+ xlink_device_event event_fn;
};

static inline struct device *xpcie_to_dev(struct xpcie *xpcie)
@@ -60,5 +61,7 @@ struct xpcie_dev *intel_xpcie_create_device(u32 sw_device_id,
void intel_xpcie_remove_device(struct xpcie_dev *xdev);
void intel_xpcie_list_add_device(struct xpcie_dev *xdev);
void intel_xpcie_list_del_device(struct xpcie_dev *xdev);
+void intel_xpcie_pci_notify_event(struct xpcie_dev *xdev,
+ enum xlink_device_event_type event_type);

#endif /* XPCIE_PCI_HEADER_ */
diff --git a/include/linux/xlink_drv_inf.h b/include/linux/xlink_drv_inf.h
index ffe8f4c253e6..5ca0ae1ae2e3 100644
--- a/include/linux/xlink_drv_inf.h
+++ b/include/linux/xlink_drv_inf.h
@@ -44,6 +44,15 @@ enum _xlink_device_status {
_XLINK_DEV_READY
};

+enum xlink_device_event_type {
+ NOTIFY_DEVICE_DISCONNECTED,
+ NOTIFY_DEVICE_CONNECTED,
+ NUM_EVENT_TYPE
+};
+
+typedef int (*xlink_device_event)(u32 sw_device_id,
+ enum xlink_device_event_type event_type);
+
int xlink_pcie_get_device_list(u32 *sw_device_id_list,
u32 *num_devices);
int xlink_pcie_get_device_name(u32 sw_device_id, char *device_name,
@@ -57,4 +66,7 @@ int xlink_pcie_read(u32 sw_device_id, void *data, size_t *const size,
int xlink_pcie_write(u32 sw_device_id, void *data, size_t *const size,
u32 timeout);
int xlink_pcie_reset_device(u32 sw_device_id);
+int xlink_pcie_register_device_event(u32 sw_device_id,
+ xlink_device_event event_notif_fn);
+int xlink_pcie_unregister_device_event(u32 sw_device_id);
#endif
--
2.17.1
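
To see how an upper layer consumes these notifications, here is a short
sketch; the sw_device_id value and log text are illustrative, while the
enum, typedef and registration function come from include/linux/xlink_drv_inf.h
as extended in this patch.

#include <linux/kernel.h>
#include <linux/xlink_drv_inf.h>

static int my_link_event(u32 sw_device_id,
			 enum xlink_device_event_type event_type)
{
	if (event_type == NOTIFY_DEVICE_CONNECTED)
		pr_info("xlink-pcie: device %u connected\n", sw_device_id);
	else if (event_type == NOTIFY_DEVICE_DISCONNECTED)
		pr_info("xlink-pcie: device %u disconnected\n", sw_device_id);

	return 0;
}

static int my_watch_device(u32 sw_device_id)
{
	return xlink_pcie_register_device_event(sw_device_id, my_link_event);
}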

2021-01-12 23:05:54

by Randy Dunlap

[permalink] [raw]
Subject: Re: [PATCH v2 29/34] Intel tsens i2c slave driver.

On 1/8/21 1:25 PM, [email protected] wrote:
> diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> index 8b263fdd80c3..c2138339bd89 100644
> --- a/drivers/misc/intel_tsens/Kconfig
> +++ b/drivers/misc/intel_tsens/Kconfig
> @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
> Say Y if using a processor that includes the Intel VPU such as
> Keem Bay. If unsure, say N.
>
> +config INTEL_TSENS_I2C_SLAVE
> + bool "I2C slave driver for intel tsens"

Why bool instead of tristate?

> + depends on INTEL_TSENS_LOCAL_HOST
> + select I2C
> + select I2C_SLAVE
> + help
> + This option enables tsens i2c slave driver.

I2C

> +
> + This driver is used for reporting thermal data via I2C
> + SMBUS to remote host.
> + Enable this option if you want to have support for thermal
> + management controller

controller.

> + Say Y if using a processor that includes the Intel VPU such as
> + Keem Bay. If unsure, say N.


--
~Randy

2021-01-19 20:01:04

by Randy Dunlap

[permalink] [raw]
Subject: Re: [PATCH v2 20/34] xlink-core: Add xlink core driver xLink

Doc comments only:

On 1/8/21 1:25 PM, [email protected] wrote:
> diff --git a/Documentation/vpu/xlink-core.rst b/Documentation/vpu/xlink-core.rst
> new file mode 100644
> index 000000000000..1c471ec803d3
> --- /dev/null
> +++ b/Documentation/vpu/xlink-core.rst
> @@ -0,0 +1,81 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=============================
> +xLink-core software subsystem
> +=============================
> +
> +The purpose of the xLink software subsystem is to facilitate communication
> +between multiple users on multiple nodes in the system.
> +
> +There are three types of xLink nodes:
> +
> +1. Remote Host: this is an external IA/x86 host system that is only capable of
> + communicating directly to the Local Host node on VPU 2.x products.
> +2. Local Host: this is the ARM core within the VPU 2.x SoC. The Local Host can
> + communicate upstream to the Remote Host node, and downstream to the VPU IP
> + node.
> +3. VPU IP: this is the Leon RT core within the VPU 2.x SoC. The VPU IP can only
> + communicate upstream to the Local Host node.
> +
> +xLink provides a common API across all interfaces for users to access xLink
> +functions and provides user space APIs via an IOCTL interface implemented in
> +the xLink core.
> +
> +xLink manages communications from one interface to another and provides routing
> +of data through multiple channels across a single physical interface.
> +
> +It exposes a common API across all interfaces at both kernel and user levels
> +for processes/applications to access.
> +
> +It has typical API types (open, close, read, write) that you would associate
> +with a communication interface.
> +
> +It also has other APIs that are related to other functions that the device can
> +perform, e.g. boot, reset get/set device mode.
> +The driver is broken down into 4 source files.
> +
> +xlink-core:
> +Contains driver initialization, driver API and IOCTL interface (for user
> +space).
> +
> +xlink-multiplexer:
> +The Multiplexer component is responsible for securely routing messages through
> +multiple communication channels over a single physical interface.
> +
> +xlink-dispatcher:
> +The Dispatcher component is responsible for queueing and handling xLink
> +communication requests from all users in the system and invoking the underlying
> +platform interface drivers.
> +
> +xlink-platform:
> +provides abstraction to each interface supported (PCIe, USB, IPC, etc).
> +
> +Typical xLink transaction (simplified):
> +When a user wants to send data across an interface via xLink it firstly calls
> +xlink connect which connects to the relevant interface (PCIe, USB, IPC, etc.)
> +and then xlink open channel.
> +
> +Then it calls xlink write function, this takes the data, passes it to the

function. This takes

> +kernel which packages up the data and channel and then adds it to a transmit
> +queue.
> +
> +A separate thread reads this transaction queue and pops off data if available
> +and passes the data to the underlying interface (e.g. PCIe) write function.
> +Using this thread provides serialization of transactions and decouples the user
> +write from the platform write.
> +
> +On the other side of the interface, a thread is continually reading the
> +interface (e.g. PCIe) via the platform interface read function and if it reads
> +any data it adds it to channel packet container.
> +
> +The application at this side of the interface will have called xlink connect,
> +opened the channel and called xlink read function to read data from the
> +interface and if any exists for that channel , the data gets popped from the

channel, the

> +channel packet container and copied from kernel space to user space buffer
> +provided by the call.
> +
> +xLink can handle API requests from multi-process and multi-threaded
> +application/processes.
> +
> +xLink maintains 4096 channels per device connected (via xlink connect) and
> +maintains a separate channel infrastructure for each device.
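
To make the transaction flow described above concrete, a rough kernel-space
sketch follows. The xlink_open_channel()/xlink_close_channel() signatures
are taken from include/linux/xlink.h in this series; xlink_connect() and
the xlink_write_data() argument order are assumptions, and the channel
number, size and timeout are made-up values.

#include <linux/xlink.h>

static void my_xlink_roundtrip(struct xlink_handle *handle,
			       enum xlink_opmode mode)
{
	u8 buf[64] = { 0 };
	u16 chan = 10;				/* illustrative channel */

	if (xlink_connect(handle))		/* assumed signature */
		return;

	if (xlink_open_channel(handle, chan, mode, sizeof(buf), 1000))
		return;

	/* argument order assumed from the declaration in xlink.h */
	xlink_write_data(handle, chan, buf, sizeof(buf));

	xlink_close_channel(handle, chan);
}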


--
~Randy
"He closes his eyes and drops the goggles. You can't get hurt
by looking at a bitmap. Or can you?"
(Neal Stephenson: Snow Crash)

2021-01-20 18:02:52

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host

On Fri, Jan 08, 2021 at 01:25:35PM -0800, [email protected] wrote:
> From: Srikanth Thokala <[email protected]>
>
> Add PCIe EPF driver for local host (lh) to configure BARs and other
> HW resources. Underlying PCIe HW controller is a Synopsys DWC PCIe core.
>
> Cc: Derek Kiernan <[email protected]>
> Cc: Dragan Cvetic <[email protected]>
> Cc: Arnd Bergmann <[email protected]>
> Cc: Greg Kroah-Hartman <[email protected]>
> Reviewed-by: Mark Gross <[email protected]>
> Signed-off-by: Srikanth Thokala <[email protected]>
> ---
> MAINTAINERS | 6 +
> drivers/misc/Kconfig | 1 +
> drivers/misc/Makefile | 1 +
> drivers/misc/xlink-pcie/Kconfig | 9 +
> drivers/misc/xlink-pcie/Makefile | 1 +
> drivers/misc/xlink-pcie/local_host/Makefile | 2 +
> drivers/misc/xlink-pcie/local_host/epf.c | 413 ++++++++++++++++++++
> drivers/misc/xlink-pcie/local_host/epf.h | 39 ++
> drivers/misc/xlink-pcie/local_host/xpcie.h | 38 ++

Why such a deep directory tree? Why is "local_host" needed?

Anyway, one thing stood out instantly:

> +static int intel_xpcie_check_bar(struct pci_epf *epf,
> + struct pci_epf_bar *epf_bar,
> + enum pci_barno barno,
> + size_t size, u8 reserved_bar)
> +{
> + if (reserved_bar & (1 << barno)) {
> + dev_err(&epf->dev, "BAR%d is already reserved\n", barno);
> + return -EFAULT;

That error is only allowed when you really have a fault from
reading/writing to/from userspace memory. Not this type of foolish
programming error by the caller.

> + }
> +
> + if (epf_bar->size != 0 && epf_bar->size < size) {
> + dev_err(&epf->dev, "BAR%d fixed size is not enough\n", barno);
> + return -ENOMEM;

Did you really run out of memory, or were the parameters sent to you
incorrect? -EINVAL is the proper thing here, right?



> + }
> +
> + return 0;
> +}
> +
> +static int intel_xpcie_configure_bar(struct pci_epf *epf,
> + const struct pci_epc_features
> + *epc_features)

Odd indentation :(

> +{
> + struct pci_epf_bar *epf_bar;
> + bool bar_fixed_64bit;
> + int ret, i;
> +
> + for (i = BAR_0; i <= BAR_5; i++) {
> + epf_bar = &epf->bar[i];
> + bar_fixed_64bit = !!(epc_features->bar_fixed_64bit & (1 << i));
> + if (bar_fixed_64bit)
> + epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
> + if (epc_features->bar_fixed_size[i])
> + epf_bar->size = epc_features->bar_fixed_size[i];
> +
> + if (i == BAR_2) {
> + ret = intel_xpcie_check_bar(epf, epf_bar, BAR_2,
> + BAR2_MIN_SIZE,
> + epc_features->reserved_bar);
> + if (ret)
> + return ret;
> + }
> +
> + if (i == BAR_4) {
> + ret = intel_xpcie_check_bar(epf, epf_bar, BAR_4,
> + BAR4_MIN_SIZE,
> + epc_features->reserved_bar);
> + if (ret)
> + return ret;
> + }

Why do you need to check all of this? Where is the data coming from
that could be incorrect?

thanks,

greg k-h

2021-01-20 18:06:35

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface

On Fri, Jan 08, 2021 at 01:25:41PM -0800, [email protected] wrote:
> From: Srikanth Thokala <[email protected]>
>
> Provide interface for XLink layer to interact with XLink PCIe transport
> layer on both local host and remote host.
>
> Cc: Arnd Bergmann <[email protected]>
> Cc: Greg Kroah-Hartman <[email protected]>
> Reviewed-by: Mark Gross <[email protected]>
> Signed-off-by: Srikanth Thokala <[email protected]>
> ---
> drivers/misc/xlink-pcie/common/interface.c | 109 +++++++++++++++++++
> drivers/misc/xlink-pcie/local_host/Makefile | 1 +
> drivers/misc/xlink-pcie/remote_host/Makefile | 1 +
> 3 files changed, 111 insertions(+)
> create mode 100644 drivers/misc/xlink-pcie/common/interface.c
>
> diff --git a/drivers/misc/xlink-pcie/common/interface.c b/drivers/misc/xlink-pcie/common/interface.c
> new file mode 100644
> index 000000000000..56c1d9ed9d8f
> --- /dev/null
> +++ b/drivers/misc/xlink-pcie/common/interface.c
> @@ -0,0 +1,109 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +/*****************************************************************************
> + *
> + * Intel Keem Bay XLink PCIe Driver
> + *
> + * Copyright (C) 2020 Intel Corporation
> + *
> + ****************************************************************************/

Do you really need the ******* mess? :)

> +
> +#include <linux/xlink_drv_inf.h>
> +
> +#include "core.h"
> +#include "xpcie.h"
> +
> +/* Define xpcie driver interface API */
> +int xlink_pcie_get_device_list(u32 *sw_device_id_list, u32 *num_devices)
> +{
> + if (!sw_device_id_list || !num_devices)
> + return -EINVAL;
> +
> + *num_devices = intel_xpcie_get_device_num(sw_device_id_list);
> +
> + return 0;
> +}
> +EXPORT_SYMBOL(xlink_pcie_get_device_list);

EXPORT_SYMBOL_GPL() for all of these perhaps? I have to ask...

thanks,

greg k-h

2021-01-21 23:52:27

by mark gross

[permalink] [raw]
Subject: Re: [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface

On Wed, Jan 20, 2021 at 06:59:33PM +0100, Greg KH wrote:
> On Fri, Jan 08, 2021 at 01:25:41PM -0800, [email protected] wrote:
> > From: Srikanth Thokala <[email protected]>
> >
> > Provide interface for XLink layer to interact with XLink PCIe transport
> > layer on both local host and remote host.
> >
> > Cc: Arnd Bergmann <[email protected]>
> > Cc: Greg Kroah-Hartman <[email protected]>
> > Reviewed-by: Mark Gross <[email protected]>
> > Signed-off-by: Srikanth Thokala <[email protected]>
> > ---
> > drivers/misc/xlink-pcie/common/interface.c | 109 +++++++++++++++++++
> > drivers/misc/xlink-pcie/local_host/Makefile | 1 +
> > drivers/misc/xlink-pcie/remote_host/Makefile | 1 +
> > 3 files changed, 111 insertions(+)
> > create mode 100644 drivers/misc/xlink-pcie/common/interface.c
> >
> > diff --git a/drivers/misc/xlink-pcie/common/interface.c b/drivers/misc/xlink-pcie/common/interface.c
> > new file mode 100644
> > index 000000000000..56c1d9ed9d8f
> > --- /dev/null
> > +++ b/drivers/misc/xlink-pcie/common/interface.c
> > @@ -0,0 +1,109 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> > +/*****************************************************************************
> > + *
> > + * Intel Keem Bay XLink PCIe Driver
> > + *
> > + * Copyright (C) 2020 Intel Corporation
> > + *
> > + ****************************************************************************/
>
> Do you really need the ******* mess? :)
>
> > +
> > +#include <linux/xlink_drv_inf.h>
> > +
> > +#include "core.h"
> > +#include "xpcie.h"
> > +
> > +/* Define xpcie driver interface API */
> > +int xlink_pcie_get_device_list(u32 *sw_device_id_list, u32 *num_devices)
> > +{
> > + if (!sw_device_id_list || !num_devices)
> > + return -EINVAL;
> > +
> > + *num_devices = intel_xpcie_get_device_num(sw_device_id_list);
> > +
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(xlink_pcie_get_device_list);
>
> EXPORT_SYMBOL_GPL() for all of these perhaps? I have to ask...
I can't think of a reason not to use the _GPL flavor of export symbol. I'll
change them all if that's desired.

--mark
>
> thanks,
>
> greg k-h

2021-01-24 11:47:58

by Thokala, Srikanth

[permalink] [raw]
Subject: RE: [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface

Hi Greg,

Thank you for the review.

> -----Original Message-----
> From: Greg KH <[email protected]>
> Subject: Re: [PATCH v2 15/34] misc: xlink-pcie: Add XLink API interface
>
> On Fri, Jan 08, 2021 at 01:25:41PM -0800, [email protected] wrote:
> > From: Srikanth Thokala <[email protected]>
> >
> > Provide interface for XLink layer to interact with XLink PCIe transport
> > layer on both local host and remote host.
> >
> > Cc: Arnd Bergmann <[email protected]>
> > Cc: Greg Kroah-Hartman <[email protected]>
> > Reviewed-by: Mark Gross <[email protected]>
> > Signed-off-by: Srikanth Thokala <[email protected]>
> > ---
> > drivers/misc/xlink-pcie/common/interface.c | 109 +++++++++++++++++++
> > drivers/misc/xlink-pcie/local_host/Makefile | 1 +
> > drivers/misc/xlink-pcie/remote_host/Makefile | 1 +
> > 3 files changed, 111 insertions(+)
> > create mode 100644 drivers/misc/xlink-pcie/common/interface.c
> >
> > diff --git a/drivers/misc/xlink-pcie/common/interface.c
> b/drivers/misc/xlink-pcie/common/interface.c
> > new file mode 100644
> > index 000000000000..56c1d9ed9d8f
> > --- /dev/null
> > +++ b/drivers/misc/xlink-pcie/common/interface.c
> > @@ -0,0 +1,109 @@
> > +// SPDX-License-Identifier: GPL-2.0-only
> >
> > +/*****************************************************************************
> > + *
> > + * Intel Keem Bay XLink PCIe Driver
> > + *
> > + * Copyright (C) 2020 Intel Corporation
> > + *
> > + ****************************************************************************/
>
> Do you really need the ******* mess? :)

It is not required; I will clean it up.

>
> > +
> > +#include <linux/xlink_drv_inf.h>
> > +
> > +#include "core.h"
> > +#include "xpcie.h"
> > +
> > +/* Define xpcie driver interface API */
> > +int xlink_pcie_get_device_list(u32 *sw_device_id_list, u32
> *num_devices)
> > +{
> > + if (!sw_device_id_list || !num_devices)
> > + return -EINVAL;
> > +
> > + *num_devices = intel_xpcie_get_device_num(sw_device_id_list);
> > +
> > + return 0;
> > +}
> > +EXPORT_SYMBOL(xlink_pcie_get_device_list);
>
> EXPORT_SYMBOL_GPL() for all of these perhaps? I have to ask...

I agree with Mark, will make the change to use EXPORT_SYMBOL_GPL().

Thanks!
Srikanth

>
> thanks,
>
> greg k-h

2021-01-24 11:54:30

by Thokala, Srikanth

[permalink] [raw]
Subject: RE: [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host

Hi Greg,

Thank you for the review.

> -----Original Message-----
> From: Greg KH <[email protected]>
> Subject: Re: [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host
>
> On Fri, Jan 08, 2021 at 01:25:35PM -0800, [email protected] wrote:
> > From: Srikanth Thokala <[email protected]>
> >
> > Add PCIe EPF driver for local host (lh) to configure BARs and other
> > HW resources. Underlying PCIe HW controller is a Synopsys DWC PCIe core.
> >
> > Cc: Derek Kiernan <[email protected]>
> > Cc: Dragan Cvetic <[email protected]>
> > Cc: Arnd Bergmann <[email protected]>
> > Cc: Greg Kroah-Hartman <[email protected]>
> > Reviewed-by: Mark Gross <[email protected]>
> > Signed-off-by: Srikanth Thokala <[email protected]>
> > ---
> > MAINTAINERS | 6 +
> > drivers/misc/Kconfig | 1 +
> > drivers/misc/Makefile | 1 +
> > drivers/misc/xlink-pcie/Kconfig | 9 +
> > drivers/misc/xlink-pcie/Makefile | 1 +
> > drivers/misc/xlink-pcie/local_host/Makefile | 2 +
> > drivers/misc/xlink-pcie/local_host/epf.c | 413 ++++++++++++++++++++
> > drivers/misc/xlink-pcie/local_host/epf.h | 39 ++
> > drivers/misc/xlink-pcie/local_host/xpcie.h | 38 ++
>
> Why such a deep directory tree? Why is "local_host" needed?

Xlink-pcie comprises local host (ARM CPU) and remote host (IA CPU)
variants. It is a transport layer that establishes communication between
them.

local_host/, running on the ARM CPU, is based on the PCI Endpoint Framework
remote_host/, running on the IA CPU, is a PCIe endpoint driver

As these two variants are architecturally different, we maintain them
under two directories.

>
> Anyway, one thing stood out instantly:
>
> > +static int intel_xpcie_check_bar(struct pci_epf *epf,
> > + struct pci_epf_bar *epf_bar,
> > + enum pci_barno barno,
> > + size_t size, u8 reserved_bar)
> > +{
> > + if (reserved_bar & (1 << barno)) {
> > + dev_err(&epf->dev, "BAR%d is already reserved\n", barno);
> > + return -EFAULT;
>
> That error is only allowed when you really have a fault from
> reading/writing to/from userspace memory. Not this type of foolish
> programming error by the caller.

Thanks for pointing out, I will use appropriate error value to return.

> > + }
> > +
> > + if (epf_bar->size != 0 && epf_bar->size < size) {
> > + dev_err(&epf->dev, "BAR%d fixed size is not enough\n", barno);
> > + return -ENOMEM;
>
> Did you really run out of memory, or were the parameters sent to you
> incorrect? -EINVAL is the proper thing here, right?

Sure, I will change to return -EINVAL.

>
>
>
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +static int intel_xpcie_configure_bar(struct pci_epf *epf,
> > + const struct pci_epc_features
> > + *epc_features)
>
> Odd indentation :(

I had to break this because checkpatch complained about the 80-column limit.
I will fix this to have better indentation.

>
> > +{
> > + struct pci_epf_bar *epf_bar;
> > + bool bar_fixed_64bit;
> > + int ret, i;
> > +
> > + for (i = BAR_0; i <= BAR_5; i++) {
> > + epf_bar = &epf->bar[i];
> > + bar_fixed_64bit = !!(epc_features->bar_fixed_64bit & (1 << i));
> > + if (bar_fixed_64bit)
> > + epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
> > + if (epc_features->bar_fixed_size[i])
> > + epf_bar->size = epc_features->bar_fixed_size[i];
> > +
> > + if (i == BAR_2) {
> > + ret = intel_xpcie_check_bar(epf, epf_bar, BAR_2,
> > + BAR2_MIN_SIZE,
> > + epc_features->reserved_bar);
> > + if (ret)
> > + return ret;
> > + }
> > +
> > + if (i == BAR_4) {
> > + ret = intel_xpcie_check_bar(epf, epf_bar, BAR_4,
> > + BAR4_MIN_SIZE,
> > + epc_features->reserved_bar);
> > + if (ret)
> > + return ret;
> > + }
>
> Why do you need to check all of this? Where is the data coming from
> that could be incorrect?

PCI BAR attributes come in from the PCIe controller driver through the
PCI Endpoint Framework. These checks are required to compare the
configuration this driver expects with the configuration coming from
the PCIe controller driver.

FYI, PCIe controller driver for Intel Keem Bay is currently under review:
https://lore.kernel.org/linux-pci/[email protected]/

Thanks!
Srikanth

>
> thanks,
>
> greg k-h

2021-01-24 12:01:00

by Greg Kroah-Hartman

[permalink] [raw]
Subject: Re: [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host

On Sun, Jan 24, 2021 at 11:48:29AM +0000, Thokala, Srikanth wrote:
> > > +{
> > > + struct pci_epf_bar *epf_bar;
> > > + bool bar_fixed_64bit;
> > > + int ret, i;
> > > +
> > > + for (i = BAR_0; i <= BAR_5; i++) {
> > > + epf_bar = &epf->bar[i];
> > > + bar_fixed_64bit = !!(epc_features->bar_fixed_64bit & (1 << i));
> > > + if (bar_fixed_64bit)
> > > + epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
> > > + if (epc_features->bar_fixed_size[i])
> > > + epf_bar->size = epc_features->bar_fixed_size[i];
> > > +
> > > + if (i == BAR_2) {
> > > + ret = intel_xpcie_check_bar(epf, epf_bar, BAR_2,
> > > + BAR2_MIN_SIZE,
> > > + epc_features->reserved_bar);
> > > + if (ret)
> > > + return ret;
> > > + }
> > > +
> > > + if (i == BAR_4) {
> > > + ret = intel_xpcie_check_bar(epf, epf_bar, BAR_4,
> > > + BAR4_MIN_SIZE,
> > > + epc_features->reserved_bar);
> > > + if (ret)
> > > + return ret;
> > > + }
> >
> > Why do you need to check all of this? Where is the data coming from
> > that could be incorrect?
>
> PCI BAR attributes come in from the PCIe controller driver through the
> PCI Endpoint Framework. These checks are required to compare the
> configuration this driver expects with the configuration coming from
> the PCIe controller driver.

So why do you not trust that information coming from the caller?
Shouldn't it always be correct as it already is validated by that
in-kernel caller? Don't check for things you don't have to check for
because you control the code that calls this stuff.

thanks,

greg k-h

2021-01-24 18:22:19

by Thokala, Srikanth

[permalink] [raw]
Subject: RE: [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host

Hi Greg,

> -----Original Message-----
> From: Greg KH <[email protected]>
> Subject: Re: [PATCH v2 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host
>
> On Sun, Jan 24, 2021 at 11:48:29AM +0000, Thokala, Srikanth wrote:
> > > > +{
> > > > + struct pci_epf_bar *epf_bar;
> > > > + bool bar_fixed_64bit;
> > > > + int ret, i;
> > > > +
> > > > + for (i = BAR_0; i <= BAR_5; i++) {
> > > > + epf_bar = &epf->bar[i];
> > > > + bar_fixed_64bit = !!(epc_features->bar_fixed_64bit & (1 << i));
> > > > + if (bar_fixed_64bit)
> > > > + epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
> > > > + if (epc_features->bar_fixed_size[i])
> > > > + epf_bar->size = epc_features->bar_fixed_size[i];
> > > > +
> > > > + if (i == BAR_2) {
> > > > + ret = intel_xpcie_check_bar(epf, epf_bar, BAR_2,
> > > > + BAR2_MIN_SIZE,
> > > > + epc_features->reserved_bar);
> > > > + if (ret)
> > > > + return ret;
> > > > + }
> > > > +
> > > > + if (i == BAR_4) {
> > > > + ret = intel_xpcie_check_bar(epf, epf_bar, BAR_4,
> > > > + BAR4_MIN_SIZE,
> > > > + epc_features->reserved_bar);
> > > > + if (ret)
> > > > + return ret;
> > > > + }
> > >
> > > Why do you need to check all of this? Where is the data coming from
> > > that could be incorrect?
> >
> > PCI BAR attributes come in from the PCIe controller driver through the
> > PCI Endpoint Framework. These checks are required to compare the
> > configuration this driver expects with the configuration coming from
> > the PCIe controller driver.
>
> So why do you not trust that information coming from the caller?
> Shouldn't it always be correct as it already is validated by that
> in-kernel caller? Don't check for things you don't have to check for
> because you control the code that calls this stuff.

Sure, I agree with your point.
I will fix it in my next version.

Thanks!
Srikanth

>
> thanks,
>
> greg k-h

2021-01-25 23:46:05

by mark gross

[permalink] [raw]
Subject: Re: [PATCH v2 29/34] Intel tsens i2c slave driver.

On Mon, Jan 11, 2021 at 11:15:06PM -0800, Randy Dunlap wrote:
> On 1/8/21 1:25 PM, [email protected] wrote:
> > diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> > index 8b263fdd80c3..c2138339bd89 100644
> > --- a/drivers/misc/intel_tsens/Kconfig
> > +++ b/drivers/misc/intel_tsens/Kconfig
> > @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
> > Say Y if using a processor that includes the Intel VPU such as
> > Keem Bay. If unsure, say N.
> >
> > +config INTEL_TSENS_I2C_SLAVE
> > + bool "I2C slave driver for intel tsens"
>
> Why bool instead of tristate?
Because the I2C driver depends on a file-scoped global i2c_plat_data
instantiated in the INTEL_TSENS_LOCAL_HOST driver (intel_tsens_thermal.[ch]).

Udhaya, would you care to comment further?

--mark


>
> > + depends on INTEL_TSENS_LOCAL_HOST
> > + select I2C
> > + select I2C_SLAVE
> > + help
> > + This option enables tsens i2c slave driver.
>
> I2C
>
> > +
> > + This driver is used for reporting thermal data via I2C
> > + SMBUS to remote host.
> > + Enable this option if you want to have support for thermal
> > + management controller
>
> controller.
>
> > + Say Y if using a processor that includes the Intel VPU such as
> > + Keem Bay. If unsure, say N.
>
>
> --
> ~Randy
>

2021-01-26 15:00:42

by Gross, Mark

[permalink] [raw]
Subject: RE: [PATCH v2 29/34] Intel tsens i2c slave driver.



> -----Original Message-----
> From: Arnd Bergmann <[email protected]>
> Subject: Re: [PATCH v2 29/34] Intel tsens i2c slave driver.
>
> On Tue, Jan 26, 2021 at 12:39 AM mark gross <[email protected]> wrote:
> >
> > On Mon, Jan 11, 2021 at 11:15:06PM -0800, Randy Dunlap wrote:
> > > On 1/8/21 1:25 PM, [email protected] wrote:
> > > > diff --git a/drivers/misc/intel_tsens/Kconfig
> > > > b/drivers/misc/intel_tsens/Kconfig
> > > > index 8b263fdd80c3..c2138339bd89 100644
> > > > --- a/drivers/misc/intel_tsens/Kconfig
> > > > +++ b/drivers/misc/intel_tsens/Kconfig
> > > > @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
> > > > Say Y if using a processor that includes the Intel VPU such as
> > > > Keem Bay. If unsure, say N.
> > > >
> > > > +config INTEL_TSENS_I2C_SLAVE
> > > > + bool "I2C slave driver for intel tsens"
> > >
> > > Why bool instead of tristate?
> > Because the I2C driver depends on a file-scoped global i2c_plat_data
> > instantiated in the INTEL_TSENS_LOCAL_HOST driver
> > (intel_tsens_thermal.[ch])
> >
> > Udhaya, would you care to comment further?
>
> > > > + depends on INTEL_TSENS_LOCAL_HOST
> > > > + select I2C
> > > > + select I2C_SLAVE
>
> Please make this 'depends on I2C=y && I2C_SLAVE' instead of 'select'
> in this case. A random driver should never force-enable another subsystem.
Will do, thanks for the feedback!

--mark



2021-01-26 20:04:56

by Arnd Bergmann

[permalink] [raw]
Subject: Re: [PATCH v2 29/34] Intel tsens i2c slave driver.

On Tue, Jan 26, 2021 at 12:39 AM mark gross <[email protected]> wrote:
>
> On Mon, Jan 11, 2021 at 11:15:06PM -0800, Randy Dunlap wrote:
> > On 1/8/21 1:25 PM, [email protected] wrote:
> > > diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> > > index 8b263fdd80c3..c2138339bd89 100644
> > > --- a/drivers/misc/intel_tsens/Kconfig
> > > +++ b/drivers/misc/intel_tsens/Kconfig
> > > @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
> > > Say Y if using a processor that includes the Intel VPU such as
> > > Keem Bay. If unsure, say N.
> > >
> > > +config INTEL_TSENS_I2C_SLAVE
> > > + bool "I2C slave driver for intel tsens"
> >
> > Why bool instead of tristate?
> Because the I2C driver depends on a file-scoped global i2c_plat_data
> instantiated in the INTEL_TSENS_LOCAL_HOST driver (intel_tsens_thermal.[ch])
>
> Udhaya, would you care to comment further?

> > > + depends on INTEL_TSENS_LOCAL_HOST
> > > + select I2C
> > > + select I2C_SLAVE

Please make this 'depends on I2C=y && I2C_SLAVE' instead of 'select'
in this case. A random driver should never force-enable another subsystem.
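
Concretely, the reworked Kconfig entry would look roughly like this (a
sketch only: whether the symbol stays bool or becomes tristate was still
under discussion above, and the help text is abbreviated):

config INTEL_TSENS_I2C_SLAVE
	bool "I2C slave driver for Intel tsens"
	depends on INTEL_TSENS_LOCAL_HOST
	depends on I2C=y && I2C_SLAVE
	help
	  This option enables the tsens I2C slave driver, used to report
	  thermal data to a remote host over I2C/SMBus.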

Arnd

2021-01-27 21:07:36

by C, Udhayakumar

[permalink] [raw]
Subject: RE: [PATCH v2 29/34] Intel tsens i2c slave driver.

> On Mon, Jan 11, 2021 at 11:15:06PM -0800, Randy Dunlap wrote:
> > On 1/8/21 1:25 PM, [email protected] wrote:
> > > diff --git a/drivers/misc/intel_tsens/Kconfig
> > > b/drivers/misc/intel_tsens/Kconfig
> > > index 8b263fdd80c3..c2138339bd89 100644
> > > --- a/drivers/misc/intel_tsens/Kconfig
> > > +++ b/drivers/misc/intel_tsens/Kconfig
> > > @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
> > > Say Y if using a processor that includes the Intel VPU such as
> > > Keem Bay. If unsure, say N.
> > >
> > > +config INTEL_TSENS_I2C_SLAVE
> > > + bool "I2C slave driver for intel tsens"
> >
> > Why bool instead of tristate?
> Because the I2C driver depends on a file-scoped global i2c_plat_data
> instantiated in the INTEL_TSENS_LOCAL_HOST driver
> (intel_tsens_thermal.[ch])
>
> Udhaya, would you care to comment further?
>
> --mark
>
As Mark mentioned above, i2c_plat_data from intel_tsens_thermal.[ch] will be used by the tsens_i2c client driver.
>
> >
> > > + depends on INTEL_TSENS_LOCAL_HOST
> > > + select I2C
> > > + select I2C_SLAVE
> > > + help
> > > + This option enables tsens i2c slave driver.
> >
> > I2C
> >
Will update in the next version of the patch. Thanks for the feedback.
> > > +
> > > + This driver is used for reporting thermal data via I2C
> > > + SMBUS to remote host.
> > > + Enable this option if you want to have support for thermal
> > > + management controller
> >
> > controller.
> >
Will update in the next version of the patch. Thanks for the feedback.
> > > + Say Y if using a processor that includes the Intel VPU such as
> > > + Keem Bay. If unsure, say N.
> >
> >
> > --
> > ~Randy
> >

2021-01-27 21:09:44

by C, Udhayakumar

[permalink] [raw]
Subject: RE: [PATCH v2 29/34] Intel tsens i2c slave driver.

> On Tue, Jan 26, 2021 at 12:39 AM mark gross <[email protected]>
> wrote:
> >
> > On Mon, Jan 11, 2021 at 11:15:06PM -0800, Randy Dunlap wrote:
> > > On 1/8/21 1:25 PM, [email protected] wrote:
> > > > diff --git a/drivers/misc/intel_tsens/Kconfig
> > > > b/drivers/misc/intel_tsens/Kconfig
> > > > index 8b263fdd80c3..c2138339bd89 100644
> > > > --- a/drivers/misc/intel_tsens/Kconfig
> > > > +++ b/drivers/misc/intel_tsens/Kconfig
> > > > @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
> > > > Say Y if using a processor that includes the Intel VPU such as
> > > > Keem Bay. If unsure, say N.
> > > >
> > > > +config INTEL_TSENS_I2C_SLAVE
> > > > + bool "I2C slave driver for intel tsens"
> > >
> > > Why bool instead of tristate?
> > Because the I2C driver depends on a file-scoped global i2c_plat_data
> > instantiated in the INTEL_TSENS_LOCAL_HOST driver
> > (intel_tsens_thermal.[ch])
> >
> > Udhaya, would you care to comment further?
>
> > > > + depends on INTEL_TSENS_LOCAL_HOST
> > > > + select I2C
> > > > + select I2C_SLAVE
>
> Please make this 'depends on I2C=y && I2C_SLAVE' instead of 'select'
> in this case. A random driver should never force-enable another subsystem.
>
Will update in the next version of the patch. Thanks for the feedback.
> Arnd