2021-02-12 22:33:03

by mark gross

Subject: [PATCH v6 00/34] Intel Vision Processing base enabling

From: Mark Gross <[email protected]>

The Intel Vision Processing Unit (VPU) is an IP block appearing for the
first time as part of the Keem Bay SoC, a quad-core Arm Cortex-A53 SoC. Keem
Bay is designed to be used both as a standalone SoC and as part of a PCIe
vision-processing accelerator add-in card.

This sixth version of the patch set includes further updates to the xlink
SMBus patch, correcting some language in its Kconfig file.

At the bottom of this cover letter is the delta between v5 and this version for
easy review of the modifications.

Thanks for looking at these and providing feedback.


C, Udhayakumar (8):
dt-bindings: misc: intel_tsens: Add tsens thermal bindings
documentation
misc: Tsens ARM host thermal driver.
misc: Intel tsens IA host driver.
Intel tsens i2c slave driver.
misc:intel_tsens: Intel Keem Bay tsens driver.
dt-bindings: misc: hddl_dev: Add hddl device management documentation
misc: Hddl device management for local host
misc: HDDL device management for IA host

Daniele Alessandrelli (4):
dt-bindings: mailbox: Add Intel VPU IPC mailbox bindings
mailbox: vpu-ipc-mailbox: Add support for Intel VPU IPC mailbox
dt-bindings: Add bindings for Keem Bay IPC driver
keembay-ipc: Add Keem Bay IPC module

Li, Tingqian (2):
dt-bindings: misc: Add Keem Bay vpumgr
misc: Add Keem Bay VPU manager

Paul Murphy (2):
dt-bindings: Add bindings for Keem Bay VPU IPC driver
keembay-vpu-ipc: Add Keem Bay VPU IPC module

Ramya P Karanth (1):
Intel Keembay XLink SMBus driver

Seamus Kelly (7):
xlink-ipc: Add xlink ipc device tree bindings
xlink-ipc: Add xlink ipc driver
xlink-core: Add xlink core device tree bindings
xlink-core: Add xlink core driver xLink
xlink-core: Enable xlink protocol over pcie
xlink-core: Enable VPU IP management and runtime control
xlink-core: add async channel and events

Srikanth Thokala (9):
misc: xlink-pcie: Add documentation for XLink PCIe driver
misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host
misc: xlink-pcie: lh: Add PCIe EP DMA functionality
misc: xlink-pcie: lh: Add core communication logic
misc: xlink-pcie: lh: Prepare changes for adding remote host driver
misc: xlink-pcie: rh: Add PCIe EP driver for Remote Host
misc: xlink-pcie: rh: Add core communication logic
misc: xlink-pcie: Add XLink API interface
misc: xlink-pcie: Add asynchronous event notification support for
XLink

mark gross (1):
Add Vision Processing Unit (VPU) documentation.

.../mailbox/intel,vpu-ipc-mailbox.yaml | 69 +
.../bindings/misc/intel,hddl-client.yaml | 117 +
.../bindings/misc/intel,intel-tsens.yaml | 122 +
.../bindings/misc/intel,keembay-vpu-mgr.yaml | 48 +
.../misc/intel,keembay-xlink-ipc.yaml | 51 +
.../bindings/misc/intel,keembay-xlink.yaml | 29 +
.../bindings/soc/intel/intel,keembay-ipc.yaml | 45 +
.../soc/intel/intel,keembay-vpu-ipc.yaml | 143 ++
Documentation/hwmon/index.rst | 2 +
Documentation/hwmon/intel_tsens_host.rst | 71 +
Documentation/hwmon/intel_tsens_sensor.rst | 67 +
Documentation/i2c/busses/index.rst | 1 +
.../i2c/busses/intel-xlink-smbus.rst | 71 +
Documentation/index.rst | 1 +
.../misc-devices/hddl_device_client.rst | 212 ++
.../misc-devices/hddl_device_server.rst | 205 ++
Documentation/misc-devices/index.rst | 2 +
Documentation/vpu/index.rst | 20 +
Documentation/vpu/vpu-stack-overview.rst | 270 +++
Documentation/vpu/xlink-core.rst | 81 +
Documentation/vpu/xlink-ipc.rst | 51 +
Documentation/vpu/xlink-pcie.rst | 90 +
MAINTAINERS | 54 +
drivers/mailbox/Kconfig | 11 +
drivers/mailbox/Makefile | 2 +
drivers/mailbox/vpu-ipc-mailbox.c | 297 +++
drivers/misc/Kconfig | 7 +
drivers/misc/Makefile | 7 +
drivers/misc/hddl_device/Kconfig | 26 +
drivers/misc/hddl_device/Makefile | 7 +
drivers/misc/hddl_device/hddl_device.c | 565 +++++
drivers/misc/hddl_device/hddl_device_lh.c | 764 +++++++
drivers/misc/hddl_device/hddl_device_rh.c | 837 +++++++
drivers/misc/hddl_device/hddl_device_util.h | 52 +
drivers/misc/intel_tsens/Kconfig | 54 +
drivers/misc/intel_tsens/Makefile | 10 +
drivers/misc/intel_tsens/intel_tsens_host.c | 352 +++
drivers/misc/intel_tsens/intel_tsens_i2c.c | 119 +
.../misc/intel_tsens/intel_tsens_thermal.c | 651 ++++++
.../misc/intel_tsens/intel_tsens_thermal.h | 38 +
drivers/misc/intel_tsens/keembay_thermal.c | 169 ++
drivers/misc/intel_tsens/keembay_tsens.h | 366 +++
drivers/misc/vpumgr/Kconfig | 9 +
drivers/misc/vpumgr/Makefile | 3 +
drivers/misc/vpumgr/vpu_common.h | 31 +
drivers/misc/vpumgr/vpu_mgr.c | 370 +++
drivers/misc/vpumgr/vpu_smm.c | 554 +++++
drivers/misc/vpumgr/vpu_smm.h | 30 +
drivers/misc/vpumgr/vpu_vcm.c | 584 +++++
drivers/misc/vpumgr/vpu_vcm.h | 84 +
drivers/misc/xlink-core/Kconfig | 33 +
drivers/misc/xlink-core/Makefile | 5 +
drivers/misc/xlink-core/xlink-core.c | 1331 +++++++++++
drivers/misc/xlink-core/xlink-core.h | 25 +
drivers/misc/xlink-core/xlink-defs.h | 181 ++
drivers/misc/xlink-core/xlink-dispatcher.c | 436 ++++
drivers/misc/xlink-core/xlink-dispatcher.h | 26 +
drivers/misc/xlink-core/xlink-ioctl.c | 554 +++++
drivers/misc/xlink-core/xlink-ioctl.h | 36 +
drivers/misc/xlink-core/xlink-multiplexer.c | 1164 ++++++++++
drivers/misc/xlink-core/xlink-multiplexer.h | 35 +
drivers/misc/xlink-core/xlink-platform.c | 273 +++
drivers/misc/xlink-core/xlink-platform.h | 65 +
drivers/misc/xlink-ipc/Kconfig | 7 +
drivers/misc/xlink-ipc/Makefile | 4 +
drivers/misc/xlink-ipc/xlink-ipc.c | 878 +++++++
drivers/misc/xlink-pcie/Kconfig | 20 +
drivers/misc/xlink-pcie/Makefile | 2 +
drivers/misc/xlink-pcie/common/core.h | 245 ++
drivers/misc/xlink-pcie/common/interface.c | 124 +
drivers/misc/xlink-pcie/common/util.c | 373 +++
drivers/misc/xlink-pcie/common/util.h | 68 +
drivers/misc/xlink-pcie/common/xpcie.h | 100 +
drivers/misc/xlink-pcie/local_host/Makefile | 6 +
drivers/misc/xlink-pcie/local_host/core.c | 817 +++++++
drivers/misc/xlink-pcie/local_host/dma.c | 575 +++++
drivers/misc/xlink-pcie/local_host/epf.c | 482 ++++
drivers/misc/xlink-pcie/local_host/epf.h | 101 +
drivers/misc/xlink-pcie/remote_host/Makefile | 6 +
drivers/misc/xlink-pcie/remote_host/core.c | 621 +++++
drivers/misc/xlink-pcie/remote_host/main.c | 93 +
drivers/misc/xlink-pcie/remote_host/pci.c | 523 +++++
drivers/misc/xlink-pcie/remote_host/pci.h | 65 +
drivers/misc/xlink-smbus/Kconfig | 25 +
drivers/misc/xlink-smbus/Makefile | 5 +
drivers/misc/xlink-smbus/xlink-smbus.c | 467 ++++
drivers/soc/Kconfig | 1 +
drivers/soc/Makefile | 1 +
drivers/soc/intel/Kconfig | 33 +
drivers/soc/intel/Makefile | 5 +
drivers/soc/intel/keembay-ipc.c | 1364 +++++++++++
drivers/soc/intel/keembay-vpu-ipc.c | 2026 +++++++++++++++++
include/linux/hddl_device.h | 153 ++
include/linux/intel_tsens_host.h | 34 +
include/linux/soc/intel/keembay-ipc.h | 30 +
include/linux/soc/intel/keembay-vpu-ipc.h | 62 +
include/linux/xlink-ipc.h | 48 +
include/linux/xlink.h | 146 ++
include/linux/xlink_drv_inf.h | 70 +
include/uapi/misc/vpumgr.h | 64 +
include/uapi/misc/xlink_uapi.h | 145 ++
101 files changed, 21774 insertions(+)
create mode 100644 Documentation/devicetree/bindings/mailbox/intel,vpu-ipc-mailbox.yaml
create mode 100644 Documentation/devicetree/bindings/misc/intel,hddl-client.yaml
create mode 100644 Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml
create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-vpu-mgr.yaml
create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink-ipc.yaml
create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
create mode 100644 Documentation/devicetree/bindings/soc/intel/intel,keembay-ipc.yaml
create mode 100644 Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml
create mode 100644 Documentation/hwmon/intel_tsens_host.rst
create mode 100644 Documentation/hwmon/intel_tsens_sensor.rst
create mode 100644 Documentation/i2c/busses/intel-xlink-smbus.rst
create mode 100644 Documentation/misc-devices/hddl_device_client.rst
create mode 100644 Documentation/misc-devices/hddl_device_server.rst
create mode 100644 Documentation/vpu/index.rst
create mode 100644 Documentation/vpu/vpu-stack-overview.rst
create mode 100644 Documentation/vpu/xlink-core.rst
create mode 100644 Documentation/vpu/xlink-ipc.rst
create mode 100644 Documentation/vpu/xlink-pcie.rst
create mode 100644 drivers/mailbox/vpu-ipc-mailbox.c
create mode 100644 drivers/misc/hddl_device/Kconfig
create mode 100644 drivers/misc/hddl_device/Makefile
create mode 100644 drivers/misc/hddl_device/hddl_device.c
create mode 100644 drivers/misc/hddl_device/hddl_device_lh.c
create mode 100644 drivers/misc/hddl_device/hddl_device_rh.c
create mode 100644 drivers/misc/hddl_device/hddl_device_util.h
create mode 100644 drivers/misc/intel_tsens/Kconfig
create mode 100644 drivers/misc/intel_tsens/Makefile
create mode 100644 drivers/misc/intel_tsens/intel_tsens_host.c
create mode 100644 drivers/misc/intel_tsens/intel_tsens_i2c.c
create mode 100644 drivers/misc/intel_tsens/intel_tsens_thermal.c
create mode 100644 drivers/misc/intel_tsens/intel_tsens_thermal.h
create mode 100644 drivers/misc/intel_tsens/keembay_thermal.c
create mode 100644 drivers/misc/intel_tsens/keembay_tsens.h
create mode 100644 drivers/misc/vpumgr/Kconfig
create mode 100644 drivers/misc/vpumgr/Makefile
create mode 100644 drivers/misc/vpumgr/vpu_common.h
create mode 100644 drivers/misc/vpumgr/vpu_mgr.c
create mode 100644 drivers/misc/vpumgr/vpu_smm.c
create mode 100644 drivers/misc/vpumgr/vpu_smm.h
create mode 100644 drivers/misc/vpumgr/vpu_vcm.c
create mode 100644 drivers/misc/vpumgr/vpu_vcm.h
create mode 100644 drivers/misc/xlink-core/Kconfig
create mode 100644 drivers/misc/xlink-core/Makefile
create mode 100644 drivers/misc/xlink-core/xlink-core.c
create mode 100644 drivers/misc/xlink-core/xlink-core.h
create mode 100644 drivers/misc/xlink-core/xlink-defs.h
create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.c
create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.h
create mode 100644 drivers/misc/xlink-core/xlink-ioctl.c
create mode 100644 drivers/misc/xlink-core/xlink-ioctl.h
create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.c
create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.h
create mode 100644 drivers/misc/xlink-core/xlink-platform.c
create mode 100644 drivers/misc/xlink-core/xlink-platform.h
create mode 100644 drivers/misc/xlink-ipc/Kconfig
create mode 100644 drivers/misc/xlink-ipc/Makefile
create mode 100644 drivers/misc/xlink-ipc/xlink-ipc.c
create mode 100644 drivers/misc/xlink-pcie/Kconfig
create mode 100644 drivers/misc/xlink-pcie/Makefile
create mode 100644 drivers/misc/xlink-pcie/common/core.h
create mode 100644 drivers/misc/xlink-pcie/common/interface.c
create mode 100644 drivers/misc/xlink-pcie/common/util.c
create mode 100644 drivers/misc/xlink-pcie/common/util.h
create mode 100644 drivers/misc/xlink-pcie/common/xpcie.h
create mode 100644 drivers/misc/xlink-pcie/local_host/Makefile
create mode 100644 drivers/misc/xlink-pcie/local_host/core.c
create mode 100644 drivers/misc/xlink-pcie/local_host/dma.c
create mode 100644 drivers/misc/xlink-pcie/local_host/epf.c
create mode 100644 drivers/misc/xlink-pcie/local_host/epf.h
create mode 100644 drivers/misc/xlink-pcie/remote_host/Makefile
create mode 100644 drivers/misc/xlink-pcie/remote_host/core.c
create mode 100644 drivers/misc/xlink-pcie/remote_host/main.c
create mode 100644 drivers/misc/xlink-pcie/remote_host/pci.c
create mode 100644 drivers/misc/xlink-pcie/remote_host/pci.h
create mode 100644 drivers/misc/xlink-smbus/Kconfig
create mode 100644 drivers/misc/xlink-smbus/Makefile
create mode 100644 drivers/misc/xlink-smbus/xlink-smbus.c
create mode 100644 drivers/soc/intel/Kconfig
create mode 100644 drivers/soc/intel/Makefile
create mode 100644 drivers/soc/intel/keembay-ipc.c
create mode 100644 drivers/soc/intel/keembay-vpu-ipc.c
create mode 100644 include/linux/hddl_device.h
create mode 100644 include/linux/intel_tsens_host.h
create mode 100644 include/linux/soc/intel/keembay-ipc.h
create mode 100644 include/linux/soc/intel/keembay-vpu-ipc.h
create mode 100644 include/linux/xlink-ipc.h
create mode 100644 include/linux/xlink.h
create mode 100644 include/linux/xlink_drv_inf.h
create mode 100644 include/uapi/misc/vpumgr.h
create mode 100644 include/uapi/misc/xlink_uapi.h

--
2.17.1

diff --git a/drivers/misc/xlink-smbus/Kconfig b/drivers/misc/xlink-smbus/Kconfig
index e6cdf8b9a096..8d2451f8d1be 100644
--- a/drivers/misc/xlink-smbus/Kconfig
+++ b/drivers/misc/xlink-smbus/Kconfig
@@ -2,25 +2,24 @@
# SPDX-License-Identifier: GPL-2.0-only

config XLINK_SMBUS
- tristate "Enable smbus interface over Xlink PCIe"
+ tristate "Enable SMBUS interface over Xlink PCIe"
depends on XLINK_CORE
depends on HDDL_DEVICE_CLIENT || HDDL_DEVICE_SERVER
help
- Enable xlink-pcie as i2c adapter both slave and master. The server
- (Remote Host) will use this interface to get sensor data from the soc
- (vision accelerator - Local Host) which is connected over PCIe.
- This driver is loaded on both Remote Host and Local Host.
- Select M to compile the driver as a module, name is xlink-smbus.
- If unsure, select N.
-
+ Enable xlink-pcie as I2C adapter both slave and master. The server
+ (Remote Host) will use this interface to get sensor data from the SoC
+ (vision accelerator - Local Host) which is connected over PCIe.
+ This driver is loaded on both Remote Host and Local Host.
+ Select M to compile the driver as a module, name is xlink-smbus.
+ If unsure, select N.

config XLINK_SMBUS_PROXY
tristate "Enable SMBUS adapter as proxy for I2C controller"
depends on XLINK_CORE
depends on XLINK_SMBUS
help
- Enable this config when SMBUS adapter is acting as proxy for
- another I2C controller.
- Select M or Y if building for Intel Vision Processing Unit (VPU)
- Local Host core.
- Select N, if building for a Remote Host kernel.
+ Enable this config when SMBUS adapter is acting as proxy for
+ another I2C controller.
+ Select M or Y if building for Intel Vision Processing Unit (VPU)
+ Local Host core.
+ Select N, if building for a Remote Host kernel.
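The hunk above restores the kernel's standard Kconfig help-text layout while fixing casing (SMBUS, I2C, SoC). As a reference, the convention is that `help` is indented one tab and each help line one tab plus two spaces; this is an illustrative fragment only, and `EXAMPLE_OPTION` is a made-up symbol, not part of the patch:

```kconfig
# Sketch of the Kconfig help-text convention the hunk above follows:
# "help" indented one tab, help lines one tab plus two spaces.
config EXAMPLE_OPTION
	tristate "Example option"
	depends on XLINK_CORE
	help
	  Help text lines are indented with one tab followed by two
	  spaces, and end with guidance such as: if unsure, say N.
```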


2021-02-12 22:33:21

by mark gross

Subject: [PATCH v6 21/34] xlink-core: Enable xlink protocol over pcie

From: Seamus Kelly <[email protected]>

Enable host-system access to the VPU over the xlink protocol by enabling
channel multiplexing and dispatching. This allows remote-host communication
channels across PCIe links.

Add the xlink-dispatcher:
 - creates tx and rx threads
 - enables queueing of messages for transmission and on reception

Update the multiplexer to use the dispatcher:
 - handle multiplexing of channels over a single interface link, e.g. PCIe
 - process messages received by the dispatcher
 - pass messages created by API calls to the dispatcher for transmission


Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
Signed-off-by: Seamus Kelly <[email protected]>
---
drivers/misc/xlink-core/Makefile | 2 +-
drivers/misc/xlink-core/xlink-core.c | 35 +-
drivers/misc/xlink-core/xlink-dispatcher.c | 441 +++++++++++++++++
drivers/misc/xlink-core/xlink-dispatcher.h | 26 +
drivers/misc/xlink-core/xlink-multiplexer.c | 498 +++++++++++++++++++-
5 files changed, 999 insertions(+), 3 deletions(-)
create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.c
create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.h

diff --git a/drivers/misc/xlink-core/Makefile b/drivers/misc/xlink-core/Makefile
index e82b7c72b6b9..ee81f9d05f2b 100644
--- a/drivers/misc/xlink-core/Makefile
+++ b/drivers/misc/xlink-core/Makefile
@@ -2,4 +2,4 @@
# Makefile for Keem Bay xlink Linux driver
#
obj-$(CONFIG_XLINK_CORE) += xlink.o
-xlink-objs += xlink-core.o xlink-multiplexer.o xlink-platform.o xlink-ioctl.o
+xlink-objs += xlink-core.o xlink-multiplexer.o xlink-dispatcher.o xlink-platform.o xlink-ioctl.o
diff --git a/drivers/misc/xlink-core/xlink-core.c b/drivers/misc/xlink-core/xlink-core.c
index dd8db834c184..bdbf8c6a99ca 100644
--- a/drivers/misc/xlink-core/xlink-core.c
+++ b/drivers/misc/xlink-core/xlink-core.c
@@ -21,6 +21,7 @@

#include "xlink-core.h"
#include "xlink-defs.h"
+#include "xlink-dispatcher.h"
#include "xlink-ioctl.h"
#include "xlink-multiplexer.h"
#include "xlink-platform.h"
@@ -151,6 +152,12 @@ static int kmb_xlink_probe(struct platform_device *pdev)
goto r_multiplexer;
}

+ // initialize dispatcher
+ rc = xlink_dispatcher_init(xlink_dev->pdev);
+ if (rc != X_LINK_SUCCESS) {
+ pr_err("Dispatcher initialization failed\n");
+ goto r_dispatcher;
+ }
// initialize xlink data structure
xlink_dev->nmb_connected_links = 0;
mutex_init(&xlink_dev->lock);
@@ -168,7 +175,7 @@ static int kmb_xlink_probe(struct platform_device *pdev)
/*Allocating Major number*/
if ((alloc_chrdev_region(&xdev, 0, 1, "xlinkdev")) < 0) {
dev_info(&pdev->dev, "Cannot allocate major number\n");
- goto r_multiplexer;
+ goto r_dispatcher;
}
dev_info(&pdev->dev, "Major = %d Minor = %d\n", MAJOR(xdev),
MINOR(xdev));
@@ -205,6 +212,8 @@ static int kmb_xlink_probe(struct platform_device *pdev)
class_destroy(dev_class);
r_class:
unregister_chrdev_region(xdev, 1);
+r_dispatcher:
+ xlink_dispatcher_destroy();
r_multiplexer:
xlink_multiplexer_destroy();
return -1;
@@ -220,6 +229,10 @@ static int kmb_xlink_remove(struct platform_device *pdev)
rc = xlink_multiplexer_destroy();
if (rc != X_LINK_SUCCESS)
pr_err("Multiplexer destroy failed\n");
+ // stop dispatchers and destroy
+ rc = xlink_dispatcher_destroy();
+ if (rc != X_LINK_SUCCESS)
+ pr_err("Dispatcher destroy failed\n");

mutex_unlock(&xlink->lock);
mutex_destroy(&xlink->lock);
@@ -314,6 +327,14 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
link->handle = *handle;
xlink->nmb_connected_links++;
kref_init(&link->refcount);
+ if (interface != IPC_INTERFACE) {
+ // start dispatcher
+ rc = xlink_dispatcher_start(link->id, &link->handle);
+ if (rc) {
+ pr_err("dispatcher start failed\n");
+ goto r_cleanup;
+ }
+ }
// initialize multiplexer connection
rc = xlink_multiplexer_connect(link->id);
if (rc) {
@@ -649,6 +670,7 @@ EXPORT_SYMBOL_GPL(xlink_release_data);
enum xlink_error xlink_disconnect(struct xlink_handle *handle)
{
struct xlink_link *link;
+ int interface = NULL_INTERFACE;
enum xlink_error rc = X_LINK_ERROR;

if (!xlink || !handle)
@@ -661,6 +683,17 @@ enum xlink_error xlink_disconnect(struct xlink_handle *handle)
// decrement refcount, if count is 0 lock mutex and disconnect
if (kref_put_mutex(&link->refcount, release_after_kref_put,
&xlink->lock)) {
+ // stop dispatcher
+ interface = get_interface_from_sw_device_id(link->handle.sw_device_id);
+ if (interface != IPC_INTERFACE) {
+ // stop dispatcher
+ rc = xlink_dispatcher_stop(link->id);
+ if (rc != X_LINK_SUCCESS) {
+ pr_err("dispatcher stop failed\n");
+ mutex_unlock(&xlink->lock);
+ return X_LINK_ERROR;
+ }
+ }
// deinitialize multiplexer connection
rc = xlink_multiplexer_disconnect(link->id);
if (rc) {
diff --git a/drivers/misc/xlink-core/xlink-dispatcher.c b/drivers/misc/xlink-core/xlink-dispatcher.c
new file mode 100644
index 000000000000..11ef8e4110ca
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-dispatcher.c
@@ -0,0 +1,441 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Dispatcher.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/semaphore.h>
+#include <linux/mutex.h>
+#include <linux/completion.h>
+#include <linux/sched/signal.h>
+#include <linux/platform_device.h>
+
+#include "xlink-dispatcher.h"
+#include "xlink-multiplexer.h"
+#include "xlink-platform.h"
+
+#define DISPATCHER_RX_TIMEOUT_MSEC 0
+
+/* state of a dispatcher servicing a link to a device*/
+enum dispatcher_state {
+ XLINK_DISPATCHER_INIT, /* initialized but not used */
+ XLINK_DISPATCHER_RUNNING, /* currently servicing a link */
+ XLINK_DISPATCHER_STOPPED, /* no longer servicing a link */
+ XLINK_DISPATCHER_ERROR, /* fatal error */
+};
+
+/* queue for dispatcher tx thread event handling */
+struct event_queue {
+ u32 count; /* number of events in the queue */
+ u32 capacity; /* capacity of events in the queue */
+ struct list_head head; /* head of event linked list */
+ struct mutex lock; /* locks queue while accessing */
+};
+
+/* dispatcher servicing a single link to a device */
+struct dispatcher {
+ u32 link_id; /* id of link being serviced */
+ enum dispatcher_state state; /* state of the dispatcher */
+ struct xlink_handle *handle; /* xlink device handle */
+ int interface; /* underlying interface of link */
+ struct task_struct *rxthread; /* kthread servicing rx */
+ struct task_struct *txthread; /* kthread servicing tx */
+ struct event_queue queue; /* xlink event queue */
+ struct semaphore event_sem; /* signals tx kthread of events */
+ struct completion rx_done; /* sync start/stop of rx kthread */
+ struct completion tx_done; /* sync start/stop of tx thread */
+};
+
+/* xlink dispatcher system component */
+struct xlink_dispatcher {
+ struct dispatcher dispatchers[XLINK_MAX_CONNECTIONS]; /* disp queue */
+ struct device *dev; /* deallocate data */
+ struct mutex lock; /* locks when start new disp */
+};
+
+/* global reference to the xlink dispatcher data structure */
+static struct xlink_dispatcher *xlinkd;
+
+/*
+ * Dispatcher Internal Functions
+ *
+ */
+
+static struct dispatcher *get_dispatcher_by_id(u32 id)
+{
+ if (!xlinkd)
+ return NULL;
+
+ if (id >= XLINK_MAX_CONNECTIONS)
+ return NULL;
+
+ return &xlinkd->dispatchers[id];
+}
+
+static u32 event_generate_id(void)
+{
+ static u32 id = 0xa; // TODO: temporary solution
+
+ return id++;
+}
+
+static struct xlink_event *event_dequeue(struct event_queue *queue)
+{
+ struct xlink_event *event = NULL;
+
+ mutex_lock(&queue->lock);
+ if (!list_empty(&queue->head)) {
+ event = list_first_entry(&queue->head, struct xlink_event,
+ list);
+ list_del(&event->list);
+ queue->count--;
+ }
+ mutex_unlock(&queue->lock);
+ return event;
+}
+
+static int event_enqueue(struct event_queue *queue, struct xlink_event *event)
+{
+ int rc = -1;
+
+ mutex_lock(&queue->lock);
+ if (queue->count < ((queue->capacity / 10) * 7)) {
+ list_add_tail(&event->list, &queue->head);
+ queue->count++;
+ rc = 0;
+ }
+ mutex_unlock(&queue->lock);
+ return rc;
+}
+
+static struct xlink_event *dispatcher_event_get(struct dispatcher *disp)
+{
+ int rc = 0;
+ struct xlink_event *event = NULL;
+
+ // wait until an event is available
+ rc = down_interruptible(&disp->event_sem);
+ // dequeue and return next event to process
+ if (!rc)
+ event = event_dequeue(&disp->queue);
+ return event;
+}
+
+static int is_valid_event_header(struct xlink_event *event)
+{
+ if (event->header.magic != XLINK_EVENT_HEADER_MAGIC)
+ return 0;
+ else
+ return 1;
+}
+
+static int dispatcher_event_send(struct xlink_event *event)
+{
+ size_t event_header_size = sizeof(event->header);
+ int rc;
+
+ // write event header
+ // printk(KERN_DEBUG "Sending event: type = 0x%x, id = 0x%x\n",
+ // event->header.type, event->header.id);
+ rc = xlink_platform_write(event->interface,
+ event->handle->sw_device_id, &event->header,
+ &event_header_size, event->header.timeout, NULL);
+ if (rc || event_header_size != sizeof(event->header)) {
+ pr_err("Write header failed %d\n", rc);
+ return rc;
+ }
+ if (event->header.type == XLINK_WRITE_REQ ||
+ event->header.type == XLINK_WRITE_VOLATILE_REQ) {
+ // write event data
+ rc = xlink_platform_write(event->interface,
+ event->handle->sw_device_id, event->data,
+ &event->header.size, event->header.timeout,
+ NULL);
+ if (rc)
+ pr_err("Write data failed %d\n", rc);
+ if (event->user_data == 1) {
+ if (event->paddr != 0) {
+ xlink_platform_deallocate(xlinkd->dev,
+ event->data, event->paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_CMA_MEMORY);
+ } else {
+ xlink_platform_deallocate(xlinkd->dev,
+ event->data, event->paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ }
+ }
+ }
+ return rc;
+}
+
+static int xlink_dispatcher_rxthread(void *context)
+{
+ struct dispatcher *disp = (struct dispatcher *)context;
+ struct xlink_event *event;
+ size_t size;
+ int rc;
+
+ // printk(KERN_DEBUG "dispatcher rxthread started\n");
+ event = xlink_create_event(disp->link_id, 0, disp->handle, 0, 0, 0);
+ if (!event)
+ return -1;
+
+ allow_signal(SIGTERM); // allow thread termination while waiting on sem
+ complete(&disp->rx_done);
+ while (!kthread_should_stop()) {
+ size = sizeof(event->header);
+ rc = xlink_platform_read(disp->interface,
+ disp->handle->sw_device_id,
+ &event->header, &size,
+ DISPATCHER_RX_TIMEOUT_MSEC, NULL);
+ if (rc || size != (int)sizeof(event->header))
+ continue;
+ if (is_valid_event_header(event)) {
+ event->link_id = disp->link_id;
+ rc = xlink_multiplexer_rx(event);
+ if (!rc) {
+ event = xlink_create_event(disp->link_id, 0,
+ disp->handle, 0, 0,
+ 0);
+ if (!event)
+ return -1;
+ }
+ }
+ }
+ // printk(KERN_INFO "dispatcher rxthread stopped\n");
+ complete(&disp->rx_done);
+ do_exit(0);
+ return 0;
+}
+
+static int xlink_dispatcher_txthread(void *context)
+{
+ struct dispatcher *disp = (struct dispatcher *)context;
+ struct xlink_event *event;
+
+ // printk(KERN_DEBUG "dispatcher txthread started\n");
+ allow_signal(SIGTERM); // allow thread termination while waiting on sem
+ complete(&disp->tx_done);
+ while (!kthread_should_stop()) {
+ event = dispatcher_event_get(disp);
+ if (!event)
+ continue;
+
+ dispatcher_event_send(event);
+ xlink_destroy_event(event); // free handled event
+ }
+ // printk(KERN_INFO "dispatcher txthread stopped\n");
+ complete(&disp->tx_done);
+ do_exit(0);
+ return 0;
+}
+
+/*
+ * Dispatcher External Functions
+ *
+ */
+
+enum xlink_error xlink_dispatcher_init(void *dev)
+{
+ struct platform_device *plat_dev = (struct platform_device *)dev;
+ int i;
+
+ xlinkd = kzalloc(sizeof(*xlinkd), GFP_KERNEL);
+ if (!xlinkd)
+ return X_LINK_ERROR;
+
+ xlinkd->dev = &plat_dev->dev;
+ for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+ xlinkd->dispatchers[i].link_id = i;
+ sema_init(&xlinkd->dispatchers[i].event_sem, 0);
+ init_completion(&xlinkd->dispatchers[i].rx_done);
+ init_completion(&xlinkd->dispatchers[i].tx_done);
+ INIT_LIST_HEAD(&xlinkd->dispatchers[i].queue.head);
+ mutex_init(&xlinkd->dispatchers[i].queue.lock);
+ xlinkd->dispatchers[i].queue.count = 0;
+ xlinkd->dispatchers[i].queue.capacity =
+ XLINK_EVENT_QUEUE_CAPACITY;
+ xlinkd->dispatchers[i].state = XLINK_DISPATCHER_INIT;
+ }
+ mutex_init(&xlinkd->lock);
+
+ return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_dispatcher_start(int id, struct xlink_handle *handle)
+{
+ struct dispatcher *disp;
+
+ mutex_lock(&xlinkd->lock);
+ // get dispatcher by link id
+ disp = get_dispatcher_by_id(id);
+ if (!disp)
+ goto r_error;
+
+ // cannot start a running or failed dispatcher
+ if (disp->state == XLINK_DISPATCHER_RUNNING ||
+ disp->state == XLINK_DISPATCHER_ERROR)
+ goto r_error;
+
+ // set the dispatcher context
+ disp->handle = handle;
+ disp->interface = get_interface_from_sw_device_id(handle->sw_device_id);
+
+ // run dispatcher thread to handle and write outgoing packets
+ disp->txthread = kthread_run(xlink_dispatcher_txthread,
+ (void *)disp, "txthread");
+ if (!disp->txthread) {
+ pr_err("xlink txthread creation failed\n");
+ goto r_txthread;
+ }
+ wait_for_completion(&disp->tx_done);
+ disp->state = XLINK_DISPATCHER_RUNNING;
+ // run dispatcher thread to read and handle incoming packets
+ disp->rxthread = kthread_run(xlink_dispatcher_rxthread,
+ (void *)disp, "rxthread");
+ if (!disp->rxthread) {
+ pr_err("xlink rxthread creation failed\n");
+ goto r_rxthread;
+ }
+ wait_for_completion(&disp->rx_done);
+ mutex_unlock(&xlinkd->lock);
+
+ return X_LINK_SUCCESS;
+
+r_rxthread:
+ kthread_stop(disp->txthread);
+r_txthread:
+ disp->state = XLINK_DISPATCHER_STOPPED;
+r_error:
+ mutex_unlock(&xlinkd->lock);
+ return X_LINK_ERROR;
+}
+
+enum xlink_error xlink_dispatcher_event_add(enum xlink_event_origin origin,
+ struct xlink_event *event)
+{
+ struct dispatcher *disp;
+ int rc;
+
+ // get dispatcher by handle
+ disp = get_dispatcher_by_id(event->link_id);
+ if (!disp)
+ return X_LINK_ERROR;
+
+ // only add events if the dispatcher is running
+ if (disp->state != XLINK_DISPATCHER_RUNNING)
+ return X_LINK_ERROR;
+
+ // configure event and add to queue
+ if (origin == EVENT_TX)
+ event->header.id = event_generate_id();
+ event->origin = origin;
+ rc = event_enqueue(&disp->queue, event);
+ if (rc)
+ return X_LINK_CHAN_FULL;
+
+ // notify dispatcher tx thread of new event
+ up(&disp->event_sem);
+ return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_dispatcher_stop(int id)
+{
+ struct dispatcher *disp;
+ int rc;
+
+ mutex_lock(&xlinkd->lock);
+ // get dispatcher by link id
+ disp = get_dispatcher_by_id(id);
+ if (!disp)
+ goto r_error;
+
+ // don't stop dispatcher if not started
+ if (disp->state != XLINK_DISPATCHER_RUNNING)
+ goto r_error;
+
+ if (disp->rxthread) {
+ // stop dispatcher rx thread
+ send_sig(SIGTERM, disp->rxthread, 0);
+ rc = kthread_stop(disp->rxthread);
+ if (rc)
+ goto r_thread;
+ }
+ wait_for_completion(&disp->rx_done);
+ if (disp->txthread) {
+ // stop dispatcher tx thread
+ send_sig(SIGTERM, disp->txthread, 0);
+ rc = kthread_stop(disp->txthread);
+ if (rc)
+ goto r_thread;
+ }
+ wait_for_completion(&disp->tx_done);
+ disp->state = XLINK_DISPATCHER_STOPPED;
+ mutex_unlock(&xlinkd->lock);
+ return X_LINK_SUCCESS;
+
+r_thread:
+ // dispatcher now in error state and cannot be used
+ disp->state = XLINK_DISPATCHER_ERROR;
+r_error:
+ mutex_unlock(&xlinkd->lock);
+ return X_LINK_ERROR;
+}
+
+enum xlink_error xlink_dispatcher_destroy(void)
+{
+ enum xlink_event_type type;
+ struct xlink_event *event;
+ struct dispatcher *disp;
+ int i;
+
+ for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+ // get dispatcher by link id
+ disp = get_dispatcher_by_id(i);
+ if (!disp)
+ continue;
+
+ // stop all running dispatchers
+ if (disp->state == XLINK_DISPATCHER_RUNNING)
+ xlink_dispatcher_stop(i);
+
+ // empty queues of all used dispatchers
+ if (disp->state == XLINK_DISPATCHER_INIT)
+ continue;
+
+ // deallocate remaining events in queue
+ while (!list_empty(&disp->queue.head)) {
+ event = event_dequeue(&disp->queue);
+ if (!event)
+ continue;
+ type = event->header.type;
+ if (type == XLINK_WRITE_REQ ||
+ type == XLINK_WRITE_VOLATILE_REQ) {
+ // deallocate event data
+ xlink_platform_deallocate(xlinkd->dev,
+ event->data,
+ event->paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ }
+ xlink_destroy_event(event);
+ }
+ // destroy dispatcher
+ mutex_destroy(&disp->queue.lock);
+ }
+ mutex_destroy(&xlinkd->lock);
+ return X_LINK_SUCCESS;
+}
diff --git a/drivers/misc/xlink-core/xlink-dispatcher.h b/drivers/misc/xlink-core/xlink-dispatcher.h
new file mode 100644
index 000000000000..d1458e7a4ab7
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-dispatcher.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Dispatcher.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_DISPATCHER_H
+#define __XLINK_DISPATCHER_H
+
+#include "xlink-defs.h"
+
+enum xlink_error xlink_dispatcher_init(void *dev);
+
+enum xlink_error xlink_dispatcher_start(int id, struct xlink_handle *handle);
+
+enum xlink_error xlink_dispatcher_event_add(enum xlink_event_origin origin,
+ struct xlink_event *event);
+
+enum xlink_error xlink_dispatcher_stop(int id);
+
+enum xlink_error xlink_dispatcher_destroy(void);
+
+enum xlink_error xlink_dispatcher_ipc_passthru_event_add(struct xlink_event *event);
+
+#endif /* __XLINK_DISPATCHER_H */
diff --git a/drivers/misc/xlink-core/xlink-multiplexer.c b/drivers/misc/xlink-core/xlink-multiplexer.c
index 9b1ed008bb56..339734826f3e 100644
--- a/drivers/misc/xlink-core/xlink-multiplexer.c
+++ b/drivers/misc/xlink-core/xlink-multiplexer.c
@@ -27,6 +27,7 @@
#include <linux/xlink-ipc.h>
#endif

+#include "xlink-dispatcher.h"
#include "xlink-multiplexer.h"
#include "xlink-platform.h"

@@ -165,6 +166,32 @@ static int is_channel_for_device(u16 chan, u32 sw_device_id,
return 0;
}

+static int is_enough_space_in_channel(struct open_channel *opchan,
+ u32 size)
+{
+ if (opchan->tx_packet_level >= ((XLINK_PACKET_QUEUE_CAPACITY / 100) * THR_UPR)) {
+ pr_info("Packet queue limit reached\n");
+ return 0;
+ }
+ if (opchan->tx_up_limit == 0) {
+ if ((opchan->tx_fill_level + size)
+ > ((opchan->chan->size / 100) * THR_UPR)) {
+ opchan->tx_up_limit = 1;
+ return 0;
+ }
+ }
+ if (opchan->tx_up_limit == 1) {
+ if ((opchan->tx_fill_level + size)
+ < ((opchan->chan->size / 100) * THR_LWR)) {
+ opchan->tx_up_limit = 0;
+ return 1;
+ } else {
+ return 0;
+ }
+ }
+ return 1;
+}
+
static int is_control_channel(u16 chan)
{
if (chan == IP_CONTROL_CHANNEL || chan == VPU_CONTROL_CHANNEL)
@@ -173,6 +200,51 @@ static int is_control_channel(u16 chan)
return 0;
}

+static struct open_channel *get_channel(u32 link_id, u16 chan)
+{
+ if (!xmux->channels[link_id][chan].opchan)
+ return NULL;
+ mutex_lock(&xmux->channels[link_id][chan].opchan->lock);
+ return xmux->channels[link_id][chan].opchan;
+}
+
+static void release_channel(struct open_channel *opchan)
+{
+ if (opchan)
+ mutex_unlock(&opchan->lock);
+}
+
+static int add_packet_to_channel(struct open_channel *opchan,
+ struct packet_queue *queue,
+ void *buffer, u32 size,
+ dma_addr_t paddr)
+{
+ struct packet *pkt;
+
+ if (queue->count < queue->capacity) {
+ pkt = kzalloc(sizeof(*pkt), GFP_KERNEL);
+ if (!pkt)
+ return X_LINK_ERROR;
+ pkt->data = buffer;
+ pkt->length = size;
+ pkt->paddr = paddr;
+ list_add_tail(&pkt->list, &queue->head);
+ queue->count++;
+ opchan->rx_fill_level += pkt->length;
+ }
+ return X_LINK_SUCCESS;
+}
+
+static struct packet *get_packet_from_channel(struct packet_queue *queue)
+{
+ struct packet *pkt = NULL;
+ // get first packet in queue
+ if (!list_empty(&queue->head))
+ pkt = list_first_entry(&queue->head, struct packet, list);
+
+ return pkt;
+}
+
static int release_packet_from_channel(struct open_channel *opchan,
struct packet_queue *queue,
u8 * const addr, u32 *size)
@@ -355,15 +427,46 @@ enum xlink_error xlink_multiplexer_destroy(void)
return X_LINK_SUCCESS;
}

+static int compl_wait(struct completion *compl, struct open_channel *opchan)
+{
+ int rc;
+ unsigned long tout = msecs_to_jiffies(opchan->chan->timeout);
+
+ if (opchan->chan->timeout == 0) {
+ mutex_unlock(&opchan->lock);
+ rc = wait_for_completion_interruptible(compl);
+ mutex_lock(&opchan->lock);
+ if (rc < 0) // wait interrupted
+ rc = X_LINK_ERROR;
+ } else {
+ mutex_unlock(&opchan->lock);
+ rc = wait_for_completion_interruptible_timeout(compl, tout);
+ mutex_lock(&opchan->lock);
+ if (rc == 0)
+ rc = X_LINK_TIMEOUT;
+ else if (rc < 0) // wait interrupted
+ rc = X_LINK_ERROR;
+ else if (rc > 0)
+ rc = X_LINK_SUCCESS;
+ }
+ return rc;
+}
+
enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
int *event_queued)
{
+ struct open_channel *opchan = NULL;
+ struct packet *pkt = NULL;
int rc = X_LINK_SUCCESS;
+ u32 link_id = 0;
+ u32 size = 0;
u16 chan = 0;
+ u32 save_timeout = 0;

if (!xmux || !event)
return X_LINK_ERROR;

+ link_id = event->link_id;
chan = event->header.chan;

// verify channel ID is in range
@@ -379,9 +482,402 @@ enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
if (is_control_channel(chan))
return X_LINK_ERROR;

- if (chan < XLINK_IPC_MAX_CHANNELS && event->interface == IPC_INTERFACE)
+ if (chan < XLINK_IPC_MAX_CHANNELS && event->interface == IPC_INTERFACE) {
// event should be handled by passthrough
rc = xlink_passthrough(event);
+ return rc;
+ }
+ // event should be handled by dispatcher
+ switch (event->header.type) {
+ case XLINK_WRITE_REQ:
+ case XLINK_WRITE_VOLATILE_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan || opchan->chan->status != CHAN_OPEN) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ event->header.timeout = opchan->chan->timeout;
+ while (!is_enough_space_in_channel(opchan,
+ event->header.size)) {
+ if (opchan->chan->mode == RXN_TXB ||
+ opchan->chan->mode == RXB_TXB) {
+ // channel is blocking,
+ // wait for packet to be released
+ rc = compl_wait(&opchan->pkt_released, opchan);
+ } else {
+ rc = X_LINK_CHAN_FULL;
+ break;
+ }
+ }
+ if (rc == X_LINK_SUCCESS) {
+ opchan->tx_fill_level += event->header.size;
+ opchan->tx_packet_level++;
+ xlink_dispatcher_event_add(EVENT_TX, event);
+ *event_queued = 1;
+ if (opchan->chan->mode == RXN_TXB ||
+ opchan->chan->mode == RXB_TXB) {
+ // channel is blocking,
+ // wait for packet to be consumed
+ mutex_unlock(&opchan->lock);
+ rc = compl_wait(&opchan->pkt_consumed, opchan);
+ mutex_lock(&opchan->lock);
+ }
+ }
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_READ_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan || opchan->chan->status != CHAN_OPEN) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ event->header.timeout = opchan->chan->timeout;
+ if (opchan->chan->mode == RXB_TXN ||
+ opchan->chan->mode == RXB_TXB) {
+ // channel is blocking, wait for packet to become available
+ mutex_unlock(&opchan->lock);
+ rc = compl_wait(&opchan->pkt_available, opchan);
+ mutex_lock(&opchan->lock);
+ }
+ if (rc == X_LINK_SUCCESS) {
+ pkt = get_packet_from_channel(&opchan->rx_queue);
+ if (pkt) {
+ *(u32 **)event->pdata = (u32 *)pkt->data;
+ *event->length = pkt->length;
+ xlink_dispatcher_event_add(EVENT_TX, event);
+ *event_queued = 1;
+ } else {
+ rc = X_LINK_ERROR;
+ }
+ }
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_READ_TO_BUFFER_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan || opchan->chan->status != CHAN_OPEN) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ event->header.timeout = opchan->chan->timeout;
+ if (opchan->chan->mode == RXB_TXN ||
+ opchan->chan->mode == RXB_TXB) {
+ // channel is blocking, wait for packet to become available
+ mutex_unlock(&opchan->lock);
+ rc = compl_wait(&opchan->pkt_available, opchan);
+ mutex_lock(&opchan->lock);
+ }
+ if (rc == X_LINK_SUCCESS) {
+ pkt = get_packet_from_channel(&opchan->rx_queue);
+ if (pkt) {
+ memcpy(event->data, pkt->data, pkt->length);
+ *event->length = pkt->length;
+ xlink_dispatcher_event_add(EVENT_TX, event);
+ *event_queued = 1;
+ } else {
+ rc = X_LINK_ERROR;
+ }
+ }
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_RELEASE_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ rc = release_packet_from_channel(opchan,
+ &opchan->rx_queue,
+ event->data,
+ &size);
+ if (rc) {
+ rc = X_LINK_ERROR;
+ } else {
+ event->header.size = size;
+ xlink_dispatcher_event_add(EVENT_TX, event);
+ *event_queued = 1;
+ }
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_OPEN_CHANNEL_REQ:
+ if (xmux->channels[link_id][chan].status == CHAN_CLOSED) {
+ xmux->channels[link_id][chan].size = event->header.size;
+ xmux->channels[link_id][chan].timeout = event->header.timeout;
+ xmux->channels[link_id][chan].mode = (uintptr_t)event->data;
+ rc = multiplexer_open_channel(link_id, chan);
+ if (rc) {
+ rc = X_LINK_ERROR;
+ } else {
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ xlink_dispatcher_event_add(EVENT_TX, event);
+ *event_queued = 1;
+ mutex_unlock(&opchan->lock);
+ save_timeout = opchan->chan->timeout;
+ opchan->chan->timeout = OPEN_CHANNEL_TIMEOUT_MSEC;
+ rc = compl_wait(&opchan->opened, opchan);
+ opchan->chan->timeout = save_timeout;
+ if (rc == 0) {
+ xmux->channels[link_id][chan].status = CHAN_OPEN;
+ release_channel(opchan);
+ } else {
+ multiplexer_close_channel(opchan);
+ }
+ }
+ }
+ } else if (xmux->channels[link_id][chan].status == CHAN_OPEN_PEER) {
+ /* channel already open */
+ xmux->channels[link_id][chan].status = CHAN_OPEN; // opened locally
+ xmux->channels[link_id][chan].size = event->header.size;
+ xmux->channels[link_id][chan].timeout = event->header.timeout;
+ xmux->channels[link_id][chan].mode = (uintptr_t)event->data;
+ rc = multiplexer_open_channel(link_id, chan);
+ } else {
+ /* channel already open */
+ rc = X_LINK_ALREADY_OPEN;
+ }
+ break;
+ case XLINK_CLOSE_CHANNEL_REQ:
+ if (xmux->channels[link_id][chan].status == CHAN_OPEN) {
+ opchan = get_channel(link_id, chan);
+ if (!opchan)
+ return X_LINK_COMMUNICATION_FAIL;
+ rc = multiplexer_close_channel(opchan);
+ if (rc)
+ rc = X_LINK_ERROR;
+ else
+ xmux->channels[link_id][chan].status = CHAN_CLOSED;
+ } else {
+ /* can't close channel not open */
+ rc = X_LINK_ERROR;
+ }
+ break;
+ case XLINK_PING_REQ:
+ break;
+ case XLINK_WRITE_RESP:
+ case XLINK_WRITE_VOLATILE_RESP:
+ case XLINK_READ_RESP:
+ case XLINK_READ_TO_BUFFER_RESP:
+ case XLINK_RELEASE_RESP:
+ case XLINK_OPEN_CHANNEL_RESP:
+ case XLINK_CLOSE_CHANNEL_RESP:
+ case XLINK_PING_RESP:
+ default:
+ rc = X_LINK_ERROR;
+ }
+ return rc;
+}
+
+enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
+{
+ struct xlink_event *passthru_event = NULL;
+ struct open_channel *opchan = NULL;
+ int rc = X_LINK_SUCCESS;
+ dma_addr_t paddr = 0;
+ void *buffer = NULL;
+ size_t size = 0;
+ u32 link_id;
+ u16 chan;
+
+ if (!xmux || !event)
+ return X_LINK_ERROR;
+
+ link_id = event->link_id;
+ chan = event->header.chan;
+
+ switch (event->header.type) {
+ case XLINK_WRITE_REQ:
+ case XLINK_WRITE_VOLATILE_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ // if we receive data on a closed channel - flush/read the data
+ buffer = xlink_platform_allocate(xmux->dev, &paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ if (buffer) {
+ size = event->header.size;
+ xlink_platform_read(event->interface,
+ event->handle->sw_device_id,
+ buffer, &size, 1000, NULL);
+ xlink_platform_deallocate(xmux->dev, buffer,
+ paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ } else {
+ pr_err("Fatal error: can't allocate memory in line:%d func:%s\n", __LINE__, __func__);
+ }
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ event->header.timeout = opchan->chan->timeout;
+ buffer = xlink_platform_allocate(xmux->dev, &paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ if (buffer) {
+ size = event->header.size;
+ rc = xlink_platform_read(event->interface,
+ event->handle->sw_device_id,
+ buffer, &size,
+ opchan->chan->timeout,
+ NULL);
+ if (rc || event->header.size != size) {
+ xlink_platform_deallocate(xmux->dev, buffer,
+ paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ rc = X_LINK_ERROR;
+ release_channel(opchan);
+ break;
+ }
+ event->paddr = paddr;
+ event->data = buffer;
+ if (add_packet_to_channel(opchan, &opchan->rx_queue,
+ event->data,
+ event->header.size,
+ paddr)) {
+ xlink_platform_deallocate(xmux->dev,
+ buffer, paddr,
+ event->header.size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ rc = X_LINK_ERROR;
+ release_channel(opchan);
+ break;
+ }
+ event->header.type = XLINK_WRITE_VOLATILE_RESP;
+ xlink_dispatcher_event_add(EVENT_RX, event);
+ //complete regardless of mode/timeout
+ complete(&opchan->pkt_available);
+ } else {
+ // failed to allocate buffer
+ rc = X_LINK_ERROR;
+ }
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_READ_REQ:
+ case XLINK_READ_TO_BUFFER_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ break;
+ }
+ event->header.timeout = opchan->chan->timeout;
+ event->header.type = XLINK_READ_TO_BUFFER_RESP;
+ xlink_dispatcher_event_add(EVENT_RX, event);
+ //complete regardless of mode/timeout
+ complete(&opchan->pkt_consumed);
+ release_channel(opchan);
+ break;
+ case XLINK_RELEASE_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ event->header.timeout = opchan->chan->timeout;
+ opchan->tx_fill_level -= event->header.size;
+ opchan->tx_packet_level--;
+ event->header.type = XLINK_RELEASE_RESP;
+ xlink_dispatcher_event_add(EVENT_RX, event);
+ //complete regardless of mode/timeout
+ complete(&opchan->pkt_released);
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_OPEN_CHANNEL_REQ:
+ if (xmux->channels[link_id][chan].status == CHAN_CLOSED) {
+ xmux->channels[link_id][chan].size = event->header.size;
+ xmux->channels[link_id][chan].timeout = event->header.timeout;
+ //xmux->channels[link_id][chan].mode = *(enum xlink_opmode *)event->data;
+ rc = multiplexer_open_channel(link_id, chan);
+ if (rc) {
+ rc = X_LINK_ERROR;
+ } else {
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ xmux->channels[link_id][chan].status = CHAN_OPEN_PEER;
+ complete(&opchan->opened);
+ passthru_event = xlink_create_event(link_id,
+ XLINK_OPEN_CHANNEL_RESP,
+ event->handle,
+ chan,
+ 0,
+ opchan->chan->timeout);
+ if (!passthru_event) {
+ rc = X_LINK_ERROR;
+ release_channel(opchan);
+ break;
+ }
+ xlink_dispatcher_event_add(EVENT_RX,
+ passthru_event);
+ }
+ release_channel(opchan);
+ }
+ } else {
+ /* channel already open */
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ passthru_event = xlink_create_event(link_id,
+ XLINK_OPEN_CHANNEL_RESP,
+ event->handle,
+ chan, 0, 0);
+ if (!passthru_event) {
+ release_channel(opchan);
+ rc = X_LINK_ERROR;
+ break;
+ }
+ xlink_dispatcher_event_add(EVENT_RX,
+ passthru_event);
+ }
+ release_channel(opchan);
+ }
+ rc = xlink_passthrough(event);
+ if (rc == 0)
+ xlink_destroy_event(event); // event is handled and can now be freed
+ break;
+ case XLINK_CLOSE_CHANNEL_REQ:
+ case XLINK_PING_REQ:
+ break;
+ case XLINK_WRITE_RESP:
+ case XLINK_WRITE_VOLATILE_RESP:
+ opchan = get_channel(link_id, chan);
+ if (!opchan)
+ rc = X_LINK_COMMUNICATION_FAIL;
+ else
+ xlink_destroy_event(event); // event is handled and can now be freed
+ release_channel(opchan);
+ break;
+ case XLINK_READ_RESP:
+ case XLINK_READ_TO_BUFFER_RESP:
+ case XLINK_RELEASE_RESP:
+ xlink_destroy_event(event); // event is handled and can now be freed
+ break;
+ case XLINK_OPEN_CHANNEL_RESP:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ xlink_destroy_event(event); // event is handled and can now be freed
+ complete(&opchan->opened);
+ }
+ release_channel(opchan);
+ break;
+ case XLINK_CLOSE_CHANNEL_RESP:
+ case XLINK_PING_RESP:
+ xlink_destroy_event(event); // event is handled and can now be freed
+ break;
+ default:
+ rc = X_LINK_ERROR;
+ }
+
return rc;
}

--
2.17.1

2021-02-12 22:33:23

by mark gross

Subject: [PATCH v6 09/34] misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host

From: Srikanth Thokala <[email protected]>

Add the PCIe EPF driver for the local host (lh) to configure BARs and other
HW resources. The underlying PCIe HW controller is a Synopsys DWC PCIe core.

Cc: Derek Kiernan <[email protected]>
Cc: Dragan Cvetic <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
Signed-off-by: Srikanth Thokala <[email protected]>
---
MAINTAINERS | 6 +
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/xlink-pcie/Kconfig | 9 +
drivers/misc/xlink-pcie/Makefile | 1 +
drivers/misc/xlink-pcie/local_host/Makefile | 2 +
drivers/misc/xlink-pcie/local_host/epf.c | 373 ++++++++++++++++++++
drivers/misc/xlink-pcie/local_host/epf.h | 37 ++
drivers/misc/xlink-pcie/local_host/xpcie.h | 38 ++
9 files changed, 468 insertions(+)
create mode 100644 drivers/misc/xlink-pcie/Kconfig
create mode 100644 drivers/misc/xlink-pcie/Makefile
create mode 100644 drivers/misc/xlink-pcie/local_host/Makefile
create mode 100644 drivers/misc/xlink-pcie/local_host/epf.c
create mode 100644 drivers/misc/xlink-pcie/local_host/epf.h
create mode 100644 drivers/misc/xlink-pcie/local_host/xpcie.h

diff --git a/MAINTAINERS b/MAINTAINERS
index c18640ae6955..3a7fe50f17a9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1961,6 +1961,12 @@ F: Documentation/devicetree/bindings/arm/intel,keembay.yaml
F: arch/arm64/boot/dts/intel/keembay-evm.dts
F: arch/arm64/boot/dts/intel/keembay-soc.dtsi

+ARM KEEM BAY XLINK PCIE SUPPORT
+M: Srikanth Thokala <[email protected]>
+M: Mark Gross <[email protected]>
+S: Supported
+F: drivers/misc/xlink-pcie/
+
ARM/INTEL RESEARCH IMOTE/STARGATE 2 MACHINE SUPPORT
M: Jonathan Cameron <[email protected]>
L: [email protected] (moderated for non-subscribers)
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index fafa8b0d8099..dfb98e444c6e 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -481,4 +481,5 @@ source "drivers/misc/ocxl/Kconfig"
source "drivers/misc/cardreader/Kconfig"
source "drivers/misc/habanalabs/Kconfig"
source "drivers/misc/uacce/Kconfig"
+source "drivers/misc/xlink-pcie/Kconfig"
endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index d23231e73330..d17621fc43d5 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -57,3 +57,4 @@ obj-$(CONFIG_HABANA_AI) += habanalabs/
obj-$(CONFIG_UACCE) += uacce/
obj-$(CONFIG_XILINX_SDFEC) += xilinx_sdfec.o
obj-$(CONFIG_HISI_HIKEY_USB) += hisi_hikey_usb.o
+obj-y += xlink-pcie/
diff --git a/drivers/misc/xlink-pcie/Kconfig b/drivers/misc/xlink-pcie/Kconfig
new file mode 100644
index 000000000000..46aa401d79b7
--- /dev/null
+++ b/drivers/misc/xlink-pcie/Kconfig
@@ -0,0 +1,9 @@
+config XLINK_PCIE_LH_DRIVER
+ tristate "XLink PCIe Local Host driver"
+ depends on PCI_ENDPOINT && ARCH_KEEMBAY
+ help
+ This option enables the XLink PCIe Local Host driver.
+
+ Choose M here to compile this driver as a module; it will be called mxlk_ep.
+ This driver is used for XLink communication over PCIe and is to be
+ loaded on the Intel Keem Bay platform.
diff --git a/drivers/misc/xlink-pcie/Makefile b/drivers/misc/xlink-pcie/Makefile
new file mode 100644
index 000000000000..d693d382e9c6
--- /dev/null
+++ b/drivers/misc/xlink-pcie/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_XLINK_PCIE_LH_DRIVER) += local_host/
diff --git a/drivers/misc/xlink-pcie/local_host/Makefile b/drivers/misc/xlink-pcie/local_host/Makefile
new file mode 100644
index 000000000000..514d3f0c91bc
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_XLINK_PCIE_LH_DRIVER) += mxlk_ep.o
+mxlk_ep-objs := epf.o
diff --git a/drivers/misc/xlink-pcie/local_host/epf.c b/drivers/misc/xlink-pcie/local_host/epf.c
new file mode 100644
index 000000000000..0234756e89ae
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/epf.c
@@ -0,0 +1,373 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2021 Intel Corporation
+ */
+
+#include <linux/of.h>
+#include <linux/platform_device.h>
+
+#include "epf.h"
+
+#define BAR2_MIN_SIZE SZ_16K
+#define BAR4_MIN_SIZE SZ_16K
+
+#define PCIE_REGS_PCIE_INTR_ENABLE 0x18
+#define PCIE_REGS_PCIE_INTR_FLAGS 0x1C
+#define LBC_CII_EVENT_FLAG BIT(18)
+#define PCIE_REGS_PCIE_ERR_INTR_FLAGS 0x24
+#define LINK_REQ_RST_FLG BIT(15)
+
+static struct pci_epf_header xpcie_header = {
+ .vendorid = PCI_VENDOR_ID_INTEL,
+ .deviceid = PCI_DEVICE_ID_INTEL_KEEMBAY,
+ .baseclass_code = PCI_BASE_CLASS_MULTIMEDIA,
+ .subclass_code = 0x0,
+ .subsys_vendor_id = 0x0,
+ .subsys_id = 0x0,
+};
+
+static const struct pci_epf_device_id xpcie_epf_ids[] = {
+ {
+ .name = "mxlk_pcie_epf",
+ },
+ {},
+};
+
+static irqreturn_t intel_xpcie_err_interrupt(int irq, void *args)
+{
+ struct xpcie_epf *xpcie_epf;
+ struct xpcie *xpcie = args;
+ u32 val;
+
+ xpcie_epf = container_of(xpcie, struct xpcie_epf, xpcie);
+ val = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_ERR_INTR_FLAGS);
+
+ iowrite32(val, xpcie_epf->apb_base + PCIE_REGS_PCIE_ERR_INTR_FLAGS);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t intel_xpcie_host_interrupt(int irq, void *args)
+{
+ struct xpcie_epf *xpcie_epf;
+ struct xpcie *xpcie = args;
+ u32 val;
+
+ xpcie_epf = container_of(xpcie, struct xpcie_epf, xpcie);
+ val = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_INTR_FLAGS);
+ if (val & LBC_CII_EVENT_FLAG) {
+ iowrite32(LBC_CII_EVENT_FLAG,
+ xpcie_epf->apb_base + PCIE_REGS_PCIE_INTR_FLAGS);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static void
+intel_xpcie_configure_bar(struct pci_epf *epf,
+ const struct pci_epc_features *epc_features)
+{
+ struct pci_epf_bar *epf_bar;
+ bool bar_fixed_64bit;
+ int i;
+
+ for (i = BAR_0; i <= BAR_5; i++) {
+ epf_bar = &epf->bar[i];
+ bar_fixed_64bit = !!(epc_features->bar_fixed_64bit & (1 << i));
+ if (bar_fixed_64bit)
+ epf_bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
+ if (epc_features->bar_fixed_size[i])
+ epf_bar->size = epc_features->bar_fixed_size[i];
+ }
+}
+
+static void intel_xpcie_cleanup_bar(struct pci_epf *epf, enum pci_barno barno)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+ struct pci_epc *epc = epf->epc;
+
+ if (xpcie_epf->vaddr[barno]) {
+ pci_epc_clear_bar(epc, epf->func_no, &epf->bar[barno]);
+ pci_epf_free_space(epf, xpcie_epf->vaddr[barno], barno);
+ xpcie_epf->vaddr[barno] = NULL;
+ }
+}
+
+static void intel_xpcie_cleanup_bars(struct pci_epf *epf)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+
+ intel_xpcie_cleanup_bar(epf, BAR_2);
+ intel_xpcie_cleanup_bar(epf, BAR_4);
+ xpcie_epf->xpcie.mmio = NULL;
+ xpcie_epf->xpcie.bar4 = NULL;
+}
+
+static int intel_xpcie_setup_bar(struct pci_epf *epf, enum pci_barno barno,
+ size_t min_size, size_t align)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+ struct pci_epf_bar *bar = &epf->bar[barno];
+ struct pci_epc *epc = epf->epc;
+ void *vaddr;
+ int ret;
+
+ bar->flags |= PCI_BASE_ADDRESS_MEM_TYPE_64;
+ if (!bar->size)
+ bar->size = min_size;
+
+ if (barno == BAR_4)
+ bar->flags |= PCI_BASE_ADDRESS_MEM_PREFETCH;
+
+ vaddr = pci_epf_alloc_space(epf, bar->size, barno, align);
+ if (!vaddr) {
+ dev_err(&epf->dev, "Failed to map BAR%d\n", barno);
+ return -ENOMEM;
+ }
+
+ ret = pci_epc_set_bar(epc, epf->func_no, bar);
+ if (ret) {
+ pci_epf_free_space(epf, vaddr, barno);
+ dev_err(&epf->dev, "Failed to set BAR%d\n", barno);
+ return ret;
+ }
+
+ xpcie_epf->vaddr[barno] = vaddr;
+
+ return 0;
+}
+
+static int intel_xpcie_setup_bars(struct pci_epf *epf, size_t align)
+{
+ int ret;
+
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+
+ ret = intel_xpcie_setup_bar(epf, BAR_2, BAR2_MIN_SIZE, align);
+ if (ret)
+ return ret;
+
+ ret = intel_xpcie_setup_bar(epf, BAR_4, BAR4_MIN_SIZE, align);
+ if (ret) {
+ intel_xpcie_cleanup_bar(epf, BAR_2);
+ return ret;
+ }
+
+ xpcie_epf->comm_bar = BAR_2;
+ xpcie_epf->xpcie.mmio = (void *)xpcie_epf->vaddr[BAR_2] +
+ XPCIE_MMIO_OFFSET;
+
+ xpcie_epf->bar4 = BAR_4;
+ xpcie_epf->xpcie.bar4 = xpcie_epf->vaddr[BAR_4];
+
+ return 0;
+}
+
+static int intel_xpcie_epf_get_platform_data(struct device *dev,
+ struct xpcie_epf *xpcie_epf)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct device_node *soc_node, *version_node;
+ struct resource *res;
+ const char *prop;
+ int prop_size;
+
+ xpcie_epf->irq_dma = platform_get_irq_byname(pdev, "intr");
+ if (xpcie_epf->irq_dma < 0) {
+ dev_err(&xpcie_epf->epf->dev, "failed to get IRQ: %d\n",
+ xpcie_epf->irq_dma);
+ return -EINVAL;
+ }
+
+ xpcie_epf->irq_err = platform_get_irq_byname(pdev, "err_intr");
+ if (xpcie_epf->irq_err < 0) {
+ dev_err(&xpcie_epf->epf->dev, "failed to get erroe IRQ: %d\n",
+ xpcie_epf->irq_err);
+ return -EINVAL;
+ }
+
+ xpcie_epf->irq = platform_get_irq_byname(pdev, "ev_intr");
+ if (xpcie_epf->irq < 0) {
+ dev_err(&xpcie_epf->epf->dev, "failed to get event IRQ: %d\n",
+ xpcie_epf->irq);
+ return -EINVAL;
+ }
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "apb");
+ xpcie_epf->apb_base =
+ devm_ioremap(dev, res->start, resource_size(res));
+ if (!xpcie_epf->apb_base)
+ return -ENOMEM;
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dbi");
+ xpcie_epf->dbi_base =
+ devm_ioremap(dev, res->start, resource_size(res));
+ if (!xpcie_epf->dbi_base)
+ return -ENOMEM;
+
+ memcpy(xpcie_epf->stepping, "B0", 2);
+ soc_node = of_get_parent(pdev->dev.of_node);
+ if (soc_node) {
+ version_node = of_get_child_by_name(soc_node, "version-info");
+ if (version_node) {
+ prop = of_get_property(version_node, "stepping",
+ &prop_size);
+ if (prop && prop_size <= KEEMBAY_XPCIE_STEPPING_MAXLEN)
+ memcpy(xpcie_epf->stepping, prop, prop_size);
+ of_node_put(version_node);
+ }
+ of_node_put(soc_node);
+ }
+
+ return 0;
+}
+
+static int intel_xpcie_epf_bind(struct pci_epf *epf)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+ const struct pci_epc_features *features;
+ struct pci_epc *epc = epf->epc;
+ struct device *dev;
+ size_t align = SZ_16K;
+ int ret;
+
+ if (WARN_ON_ONCE(!epc))
+ return -EINVAL;
+
+ dev = epc->dev.parent;
+ features = pci_epc_get_features(epc, epf->func_no);
+ xpcie_epf->epc_features = features;
+ if (features) {
+ align = features->align;
+ intel_xpcie_configure_bar(epf, features);
+ }
+
+ ret = intel_xpcie_setup_bars(epf, align);
+ if (ret) {
+ dev_err(&epf->dev, "BAR initialization failed\n");
+ return ret;
+ }
+
+ ret = intel_xpcie_epf_get_platform_data(dev, xpcie_epf);
+ if (ret) {
+ dev_err(&epf->dev, "Unable to get platform data\n");
+ return -EINVAL;
+ }
+
+ if (!strcmp(xpcie_epf->stepping, "A0")) {
+ xpcie_epf->xpcie.legacy_a0 = true;
+ iowrite32(1, (void __iomem *)xpcie_epf->xpcie.mmio +
+ XPCIE_MMIO_LEGACY_A0);
+ } else {
+ xpcie_epf->xpcie.legacy_a0 = false;
+ iowrite32(0, (void __iomem *)xpcie_epf->xpcie.mmio +
+ XPCIE_MMIO_LEGACY_A0);
+ }
+
+ /* Enable interrupt */
+ writel(LBC_CII_EVENT_FLAG,
+ xpcie_epf->apb_base + PCIE_REGS_PCIE_INTR_ENABLE);
+ ret = devm_request_irq(&epf->dev, xpcie_epf->irq,
+ &intel_xpcie_host_interrupt, 0,
+ XPCIE_DRIVER_NAME, &xpcie_epf->xpcie);
+ if (ret) {
+ dev_err(&epf->dev, "failed to request irq\n");
+ goto err_cleanup_bars;
+ }
+
+ ret = devm_request_irq(&epf->dev, xpcie_epf->irq_err,
+ &intel_xpcie_err_interrupt, 0,
+ XPCIE_DRIVER_NAME, &xpcie_epf->xpcie);
+ if (ret) {
+ dev_err(&epf->dev, "failed to request error irq\n");
+ goto err_cleanup_bars;
+ }
+
+ return 0;
+
+err_cleanup_bars:
+ intel_xpcie_cleanup_bars(epf);
+
+ return ret;
+}
+
+static void intel_xpcie_epf_unbind(struct pci_epf *epf)
+{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
+ struct pci_epc *epc = epf->epc;
+
+ devm_free_irq(&epf->dev, xpcie_epf->irq, &xpcie_epf->xpcie);
+ devm_free_irq(&epf->dev, xpcie_epf->irq_err, &xpcie_epf->xpcie);
+
+ pci_epc_stop(epc);
+
+ intel_xpcie_cleanup_bars(epf);
+}
+
+static int intel_xpcie_epf_probe(struct pci_epf *epf)
+{
+ struct device *dev = &epf->dev;
+ struct xpcie_epf *xpcie_epf;
+
+ xpcie_epf = devm_kzalloc(dev, sizeof(*xpcie_epf), GFP_KERNEL);
+ if (!xpcie_epf)
+ return -ENOMEM;
+
+ epf->header = &xpcie_header;
+ xpcie_epf->epf = epf;
+ epf_set_drvdata(epf, xpcie_epf);
+
+ return 0;
+}
+
+static void intel_xpcie_epf_shutdown(struct device *dev)
+{
+ struct pci_epf *epf = to_pci_epf(dev);
+ struct xpcie_epf *xpcie_epf;
+
+ xpcie_epf = epf_get_drvdata(epf);
+
+ /* Notify host in case PCIe hot plug is not supported */
+ if (xpcie_epf)
+ pci_epc_raise_irq(epf->epc, epf->func_no, PCI_EPC_IRQ_MSI, 1);
+}
+
+static struct pci_epf_ops ops = {
+ .bind = intel_xpcie_epf_bind,
+ .unbind = intel_xpcie_epf_unbind,
+};
+
+static struct pci_epf_driver xpcie_epf_driver = {
+ .driver.name = "mxlk_pcie_epf",
+ .driver.shutdown = intel_xpcie_epf_shutdown,
+ .probe = intel_xpcie_epf_probe,
+ .id_table = xpcie_epf_ids,
+ .ops = &ops,
+ .owner = THIS_MODULE,
+};
+
+static int __init intel_xpcie_epf_init(void)
+{
+ int ret;
+
+ ret = pci_epf_register_driver(&xpcie_epf_driver);
+ if (ret) {
+ pr_err("Failed to register xlink pcie epf driver: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+module_init(intel_xpcie_epf_init);
+
+static void __exit intel_xpcie_epf_exit(void)
+{
+ pci_epf_unregister_driver(&xpcie_epf_driver);
+}
+module_exit(intel_xpcie_epf_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Intel Corporation");
+MODULE_DESCRIPTION(XPCIE_DRIVER_DESC);
diff --git a/drivers/misc/xlink-pcie/local_host/epf.h b/drivers/misc/xlink-pcie/local_host/epf.h
new file mode 100644
index 000000000000..a60cd43fe555
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/epf.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2021 Intel Corporation
+ */
+
+#ifndef XPCIE_EPF_HEADER_
+#define XPCIE_EPF_HEADER_
+
+#include <linux/pci-epc.h>
+#include <linux/pci-epf.h>
+
+#include "xpcie.h"
+
+#define XPCIE_DRIVER_NAME "mxlk_pcie_epf"
+#define XPCIE_DRIVER_DESC "Intel(R) xLink PCIe endpoint function driver"
+
+#define KEEMBAY_XPCIE_STEPPING_MAXLEN 8
+
+struct xpcie_epf {
+ struct pci_epf *epf;
+ void *vaddr[BAR_5 + 1];
+ enum pci_barno comm_bar;
+ enum pci_barno bar4;
+ const struct pci_epc_features *epc_features;
+ struct xpcie xpcie;
+ int irq;
+ int irq_dma;
+ int irq_err;
+ void __iomem *apb_base;
+ void __iomem *dma_base;
+ void __iomem *dbi_base;
+ char stepping[KEEMBAY_XPCIE_STEPPING_MAXLEN];
+};
+
+#endif /* XPCIE_EPF_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/local_host/xpcie.h b/drivers/misc/xlink-pcie/local_host/xpcie.h
new file mode 100644
index 000000000000..0745e6dfee10
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/xpcie.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef XPCIE_HEADER_
+#define XPCIE_HEADER_
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci_ids.h>
+
+#ifndef PCI_DEVICE_ID_INTEL_KEEMBAY
+#define PCI_DEVICE_ID_INTEL_KEEMBAY 0x6240
+#endif
+
+#define XPCIE_IO_COMM_SIZE SZ_16K
+#define XPCIE_MMIO_OFFSET SZ_4K
+
+/* MMIO layout and offsets shared between device and host */
+struct xpcie_mmio {
+ u8 legacy_a0;
+} __packed;
+
+#define XPCIE_MMIO_LEGACY_A0 (offsetof(struct xpcie_mmio, legacy_a0))
+
+struct xpcie {
+ u32 status;
+ bool legacy_a0;
+ void *mmio;
+ void *bar4;
+};
+
+#endif /* XPCIE_HEADER_ */
--
2.17.1

2021-02-12 22:34:46

by mark gross

Subject: [PATCH v6 19/34] xlink-core: Add xlink core device tree bindings

From: Seamus Kelly <[email protected]>

Add device tree bindings for keembay-xlink.

Cc: Rob Herring <[email protected]>
Cc: [email protected]
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
Signed-off-by: Seamus Kelly <[email protected]>
---
.../bindings/misc/intel,keembay-xlink.yaml | 29 +++++++++++++++++++
1 file changed, 29 insertions(+)
create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml

diff --git a/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml b/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
new file mode 100644
index 000000000000..5ac2e7fa5b5e
--- /dev/null
+++ b/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
@@ -0,0 +1,29 @@
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+# Copyright (c) Intel Corporation. All rights reserved.
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/misc/intel,keembay-xlink.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Intel Keem Bay xlink
+
+maintainers:
+ - Seamus Kelly <[email protected]>
+
+description: |
+ The Keem Bay xlink driver enables the communication/control sub-system
+ for internal and external communications to the Intel Keem Bay SoC.
+
+properties:
+ compatible:
+ oneOf:
+ - items:
+ - const: intel,keembay-xlink
+
+additionalProperties: false
+
+examples:
+ - |
+ xlink {
+ compatible = "intel,keembay-xlink";
+ };
--
2.17.1

2021-02-12 22:35:15

by mark gross

Subject: [PATCH v6 27/34] misc: Tsens ARM host thermal driver.

From: "C, Udhayakumar" <[email protected]>

Add tsens ARM host thermal driver for Intel Edge.AI Computer Vision
platforms.

About Intel Edge.AI Computer Vision platforms:
---------------------------------------------
The Intel Edge.AI Computer Vision platforms are vision processing systems
targeting machine vision applications for connected devices.

They are based on an ARM A53 CPU running Linux and act as PCIe
endpoint devices.

High-level architecture:
------------------------

 Remote Host IA CPU            Local Host ARM CPU
----------------              --------------------------
| Platform     |              | Thermal Daemon         |
| Management SW|              |                        |
----------------              --------------------------
| Intel tsens  |              | intel tsens i2c slave  |
| i2c client   |              | and thermal driver     |
----------------              --------------------------
| XLINK I2C    |              | XLINK I2C Slave        |
| controller   | <=========>  | controller             |
----------------    smbus     --------------------------

intel tsens module:
-------------------
The tsens module enables reading the on-chip sensors present in the
Intel Edge.AI Computer Vision platforms. The tsens module reports the
various junction and SoC temperatures through both the thermal
subsystem and the I2C subsystem.

Temperature data reported through the thermal subsystem is used by
cooling agents such as DVFS and fan control, and to shut the system
down if a critical temperature is reached.

Temperature data reported through the I2C subsystem is used by the
platform manageability software running on the IA host.

- Local Host driver
* Intended for ARM CPU
* It is based on the Thermal and I2C slave frameworks
* Driver path:
{tree}/drivers/misc/intel_tsens/intel_tsens_thermal.c

The local host and remote host drivers communicate using the XLINK
I2C SMBUS protocol.

Acked-by: Mark Gross <[email protected]>
Signed-off-by: C Udhayakumar <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
---
Documentation/hwmon/index.rst | 1 +
Documentation/hwmon/intel_tsens_sensor.rst | 67 ++
MAINTAINERS | 5 +
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/intel_tsens/Kconfig | 15 +
drivers/misc/intel_tsens/Makefile | 7 +
.../misc/intel_tsens/intel_tsens_thermal.c | 651 ++++++++++++++++++
.../misc/intel_tsens/intel_tsens_thermal.h | 38 +
include/linux/hddl_device.h | 153 ++++
10 files changed, 939 insertions(+)
create mode 100644 Documentation/hwmon/intel_tsens_sensor.rst
create mode 100644 drivers/misc/intel_tsens/Kconfig
create mode 100644 drivers/misc/intel_tsens/Makefile
create mode 100644 drivers/misc/intel_tsens/intel_tsens_thermal.c
create mode 100644 drivers/misc/intel_tsens/intel_tsens_thermal.h
create mode 100644 include/linux/hddl_device.h

diff --git a/Documentation/hwmon/index.rst b/Documentation/hwmon/index.rst
index fcb870ce6286..fc29100bef73 100644
--- a/Documentation/hwmon/index.rst
+++ b/Documentation/hwmon/index.rst
@@ -80,6 +80,7 @@ Hardware Monitoring Kernel Drivers
ir38064
isl68137
it87
+ intel_tsens_sensor.rst
jc42
k10temp
k8temp
diff --git a/Documentation/hwmon/intel_tsens_sensor.rst b/Documentation/hwmon/intel_tsens_sensor.rst
new file mode 100644
index 000000000000..0f53dfca477e
--- /dev/null
+++ b/Documentation/hwmon/intel_tsens_sensor.rst
@@ -0,0 +1,67 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==================================
+Kernel driver: intel_tsens_thermal
+==================================
+
+Supported chips:
+ * Intel Edge.AI Computer Vision platforms: Keem Bay
+
+ Slave address: The address is assigned by the hddl device management
+ driver.
+
+Authors:
+ - Thalaiappan, Rathina <[email protected]>
+ - Udhayakumar C <[email protected]>
+
+Description
+===========
+The Intel Edge.AI Computer Vision platforms have memory mapped thermal sensors
+which are accessible locally. The intel_tsens_thermal driver handles these
+thermal sensors and exposes the temperature to
+
+* the external host, similar to a standard SMBUS-based thermal sensor
+  (such as the LM73), by registering with the I2C subsystem as a
+  slave interface (Documentation/i2c/slave-interface.rst).
+* the local CPU as a standard thermal device.
+
+In Keem Bay, the four thermal junction temperature points are:
+Media Subsystem (mss), NN Subsystem (nce), Compute Subsystem (cse) and
+SoC (maximum of mss, nce and cse).
+
+Similarity: /drivers/thermal/qcom
+
+Example
+=======
+Local Thermal Interface:
+
+Temperature reported in Keem Bay on the Linux Thermal sysfs interface.
+
+# cat /sys/class/thermal/thermal_zone*/type
+mss
+css
+nce
+soc
+
+# cat /sys/class/thermal/thermal_zone*/temp
+0
+29210
+28478
+29210
+
+Remote Thermal Interface:
+
+The tsens i2c slave driver reports the junction temperature of the
+various subsystems based on the table below.
+
++-----------+-------------+
+| offset | Sensor |
++-----------+-------------+
+| 0 | mss |
++-----------+-------------+
+| 1 | css |
++-----------+-------------+
+| 2 | nce |
++-----------+-------------+
+| 3 | soc |
++-----------+-------------+
diff --git a/MAINTAINERS b/MAINTAINERS
index 1429db8d592c..4fb7d6f98569 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1985,6 +1985,11 @@ S: Supported
F: drivers/misc/hddl_device/
F: drivers/misc/intel_tsens/

+ARM/INTEL TSENS SUPPORT
+M: Udhayakumar C <[email protected]>
+S: Supported
+F: drivers/misc/intel_tsens/
+
ARM/INTEL RESEARCH IMOTE/STARGATE 2 MACHINE SUPPORT
M: Jonathan Cameron <[email protected]>
L: [email protected] (moderated for non-subscribers)
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 2d1f7b165cc8..aed3ef61897c 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -485,4 +485,5 @@ source "drivers/misc/xlink-pcie/Kconfig"
source "drivers/misc/xlink-ipc/Kconfig"
source "drivers/misc/xlink-core/Kconfig"
source "drivers/misc/vpumgr/Kconfig"
+source "drivers/misc/intel_tsens/Kconfig"
endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index 2936930f3edc..c08502b22778 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -61,3 +61,4 @@ obj-y += xlink-pcie/
obj-$(CONFIG_XLINK_IPC) += xlink-ipc/
obj-$(CONFIG_XLINK_CORE) += xlink-core/
obj-$(CONFIG_VPUMGR) += vpumgr/
+obj-y += intel_tsens/
diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
new file mode 100644
index 000000000000..bfb8fe1997f4
--- /dev/null
+++ b/drivers/misc/intel_tsens/Kconfig
@@ -0,0 +1,15 @@
+# Copyright (C) 2020 Intel Corporation
+# SPDX-License-Identifier: GPL-2.0-only
+
+config INTEL_TSENS_LOCAL_HOST
+ bool "Temperature sensor driver for intel tsens"
+ select THERMAL
+ help
+ This option enables tsens thermal local Host driver.
+
+ This driver is used for reporting thermal data via thermal
+ framework.
+ Enable this option if you want to have support for thermal
+ management controller.
+ Say Y if using a processor that includes the Intel VPU such as
+ Keem Bay. If unsure, say N.
diff --git a/drivers/misc/intel_tsens/Makefile b/drivers/misc/intel_tsens/Makefile
new file mode 100644
index 000000000000..93dee8b9f481
--- /dev/null
+++ b/drivers/misc/intel_tsens/Makefile
@@ -0,0 +1,7 @@
+# Copyright (C) 2020 Intel Corporation
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# Makefile for intel tsens Thermal Linux driver
+#
+
+obj-$(CONFIG_INTEL_TSENS_LOCAL_HOST) += intel_tsens_thermal.o
diff --git a/drivers/misc/intel_tsens/intel_tsens_thermal.c b/drivers/misc/intel_tsens/intel_tsens_thermal.c
new file mode 100644
index 000000000000..5a08e6f216f2
--- /dev/null
+++ b/drivers/misc/intel_tsens/intel_tsens_thermal.c
@@ -0,0 +1,651 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * Intel tsens thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/i2c.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/slab.h>
+#include <linux/thermal.h>
+#include "intel_tsens_thermal.h"
+
+struct intel_tsens_trip_info {
+ enum thermal_trip_type trip_type;
+ int temp;
+};
+
+struct intel_tsens {
+ char name[20];
+ u32 n_trips;
+ u32 passive_delay;
+ u32 polling_delay;
+ u32 sensor_type;
+ u64 addr;
+ u64 size;
+ u32 curr_temp;
+ void __iomem *base_addr;
+ struct intel_tsens_trip_info **trip_info;
+ struct thermal_zone_device *tz;
+ void *pdata;
+ struct intel_tsens_plat_info plat_info;
+};
+
+struct intel_tsens_priv {
+ int n_sens;
+ bool global_clk_available;
+ void __iomem *base_addr;
+ struct clk *tsens_clk;
+ u32 tsens_clk_rate;
+ struct intel_tsens **intel_tsens;
+ struct device *dev;
+ struct platform_device *pdev;
+ struct intel_tsens_plat_info plat_info;
+};
+
+static int intel_tsens_register_pdev(struct intel_tsens_plat_info *plat_info)
+{
+ struct intel_tsens_plat_data plat_data;
+ struct platform_device_info pdevinfo;
+ struct platform_device *dd;
+
+ memset(&pdevinfo, 0, sizeof(pdevinfo));
+ pdevinfo.name = plat_info->plat_name;
+ pdevinfo.id = plat_info->id;
+ plat_data.base_addr = plat_info->base_addr;
+ plat_data.name = plat_info->plat_name;
+ plat_data.get_temp = NULL;
+ pdevinfo.data = &plat_data;
+ pdevinfo.size_data = sizeof(plat_data);
+ dd = platform_device_register_full(&pdevinfo);
+ if (IS_ERR(dd))
+ return -EINVAL;
+ plat_info->pdev = dd;
+
+ return 0;
+}
+
+static void intel_tsens_unregister_pdev(struct intel_tsens_priv *priv)
+{
+ int i;
+
+ for (i = 0; i < priv->n_sens; i++) {
+ if (priv->plat_info.pdev)
+ platform_device_unregister(priv->plat_info.pdev);
+ }
+}
+
+static int intel_tsens_add_pdev(struct intel_tsens_priv *priv)
+{
+ int i, ret;
+
+ /*
+ * Register platform device for each sensor.
+ *
+ */
+ if (priv->plat_info.plat_name) {
+ priv->plat_info.base_addr = priv->base_addr;
+ ret = intel_tsens_register_pdev(&priv->plat_info);
+ if (ret) {
+ dev_err(&priv->pdev->dev,
+ "platform device register failed for %s\n",
+ priv->plat_info.plat_name);
+ return ret;
+ }
+ }
+ for (i = 0; i < priv->n_sens; i++) {
+ struct intel_tsens *tsens = priv->intel_tsens[i];
+
+ if (!tsens->plat_info.plat_name)
+ continue;
+ tsens->plat_info.base_addr = tsens->base_addr;
+ ret = intel_tsens_register_pdev(&tsens->plat_info);
+ if (ret) {
+ dev_err(&priv->pdev->dev,
+ "platform device register failed for %s\n",
+ tsens->name);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+static int intel_tsens_thermal_get_temp(struct thermal_zone_device *tz,
+ int *temp)
+{
+ struct intel_tsens *tsens = (struct intel_tsens *)tz->devdata;
+ struct intel_tsens_priv *priv =
+ (struct intel_tsens_priv *)tsens->pdata;
+ struct intel_tsens_plat_data *plat_data;
+ int type = tsens->sensor_type;
+ struct platform_device *pdev;
+
+ if (tsens->plat_info.plat_name) {
+ pdev = tsens->plat_info.pdev;
+ plat_data = pdev->dev.platform_data;
+
+ if (!plat_data) {
+ dev_err(&pdev->dev, "Platform data not found for %s\n",
+ tsens->name);
+ return -EINVAL;
+ }
+ if (!plat_data->get_temp) {
+ *temp = 0;
+ return -EINVAL;
+ }
+ if (plat_data->get_temp(pdev, type, temp))
+ return -EINVAL;
+ tsens->curr_temp = *temp;
+ return 0;
+ }
+ if (priv->plat_info.plat_name) {
+ pdev = priv->plat_info.pdev;
+ plat_data = pdev->dev.platform_data;
+
+ if (!plat_data) {
+ dev_err(&pdev->dev, "Platform data not found for %s\n",
+ tsens->name);
+ return -EINVAL;
+ }
+ if (!plat_data->get_temp) {
+ *temp = 0;
+ return -EINVAL;
+ }
+
+ if (plat_data->get_temp(pdev, type, temp))
+ return -EINVAL;
+ tsens->curr_temp = *temp;
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static int intel_tsens_thermal_get_trip_type(struct thermal_zone_device *tz,
+ int trip,
+ enum thermal_trip_type *type)
+{
+ struct intel_tsens *tsens = (struct intel_tsens *)tz->devdata;
+
+ *type = tsens->trip_info[trip]->trip_type;
+ return 0;
+}
+
+static int intel_tsens_thermal_get_trip_temp(struct thermal_zone_device *tz,
+ int trip, int *temp)
+{
+ struct intel_tsens *tsens = (struct intel_tsens *)tz->devdata;
+
+ *temp = tsens->trip_info[trip]->temp;
+ return 0;
+}
+
+/* Refer https://lwn.net/Articles/242046/
+ * how to receive this event in userspace
+ */
+static int intel_tsens_notify_user_space(struct thermal_zone_device *tz,
+ int trip)
+{
+ char *thermal_prop[5];
+ int i, ret = 0;
+
+ mutex_lock(&tz->lock);
+ thermal_prop[0] = kasprintf(GFP_KERNEL, "NAME=%s", tz->type);
+ thermal_prop[1] = kasprintf(GFP_KERNEL, "TEMP=%d",
+ tz->emul_temperature);
+ thermal_prop[2] = kasprintf(GFP_KERNEL, "TRIP=%d", trip);
+ thermal_prop[3] = kasprintf(GFP_KERNEL, "EVENT=%d", tz->notify_event);
+ thermal_prop[4] = NULL;
+ if (thermal_prop[0] && thermal_prop[1] &&
+ thermal_prop[2] && thermal_prop[3]) {
+ kobject_uevent_env(&tz->device.kobj, KOBJ_CHANGE,
+ thermal_prop);
+ } else {
+ ret = -ENOMEM;
+ }
+ for (i = 0; i < 4; ++i)
+ kfree(thermal_prop[i]);
+ mutex_unlock(&tz->lock);
+ return ret;
+}
+
+static int intel_tsens_thermal_notify(struct thermal_zone_device *tz,
+ int trip, enum thermal_trip_type type)
+{
+ intel_tsens_notify_user_space(tz, trip);
+
+ if (type == THERMAL_TRIP_PASSIVE || type == THERMAL_TRIP_CRITICAL)
+ return 1;
+ return 0;
+}
+
+static int intel_tsens_thermal_bind(struct thermal_zone_device *tz,
+ struct thermal_cooling_device *cdev)
+{
+ struct intel_tsens *tsens = (struct intel_tsens *)tz->devdata;
+ struct intel_tsens_priv *priv =
+ (struct intel_tsens_priv *)tsens->pdata;
+ int ret = -EINVAL;
+
+ /*
+ * Check here thermal device zone name and cdev name to match,
+ * then call the bind device
+ */
+ if (strncmp(tz->type, cdev->type, THERMAL_NAME_LENGTH) == 0) {
+ ret = thermal_zone_bind_cooling_device
+ (tz,
+ THERMAL_TRIP_PASSIVE,
+ cdev,
+ THERMAL_NO_LIMIT,
+ THERMAL_NO_LIMIT,
+ THERMAL_WEIGHT_DEFAULT);
+ if (ret) {
+ dev_err(&priv->pdev->dev,
+ "binding zone %s with cdev %s failed:%d\n",
+ tz->type, cdev->type, ret);
+ }
+ }
+ return ret;
+}
+
+static int intel_tsens_thermal_unbind(struct thermal_zone_device *tz,
+ struct thermal_cooling_device *cdev)
+{
+ int ret;
+
+ ret = thermal_zone_unbind_cooling_device(tz, 0, cdev);
+ if (ret) {
+ dev_err(&tz->device,
+ "unbinding zone %s with cdev %s failed:%d\n",
+ tz->type, cdev->type, ret);
+ }
+ return ret;
+}
+
+static struct thermal_zone_device_ops tsens_thermal_ops = {
+ .bind = intel_tsens_thermal_bind,
+ .unbind = intel_tsens_thermal_unbind,
+ .get_temp = intel_tsens_thermal_get_temp,
+ .get_trip_type = intel_tsens_thermal_get_trip_type,
+ .get_trip_temp = intel_tsens_thermal_get_trip_temp,
+ .notify = intel_tsens_thermal_notify,
+/* .set_emul_temp = tsens_thermal_emulation */
+
+};
+
+static int intel_tsens_get_temp(int type, int *temp, void *pdata)
+{
+ struct intel_tsens_priv *priv = (struct intel_tsens_priv *)pdata;
+
+ if (!priv)
+ return -EINVAL;
+
+ return intel_tsens_thermal_get_temp(priv->intel_tsens[type]->tz, temp);
+}
+
+struct intel_tsens_i2c_plat_data i2c_plat_data = {
+ .get_temp = intel_tsens_get_temp,
+};
+
+static void intel_tsens_remove_thermal_zones(struct intel_tsens_priv *priv)
+{
+ int i;
+
+ for (i = 0; i < priv->n_sens; i++) {
+ struct intel_tsens *tsens = priv->intel_tsens[i];
+
+ if (tsens->tz) {
+ thermal_zone_device_unregister(tsens->tz);
+ tsens->tz = NULL;
+ }
+ }
+}
+
+static int intel_tsens_add_thermal_zones(struct intel_tsens_priv *priv)
+{
+ int i;
+
+ for (i = 0; i < priv->n_sens; i++) {
+ struct intel_tsens *tsens = priv->intel_tsens[i];
+
+ tsens->tz =
+ thermal_zone_device_register(tsens->name,
+ tsens->n_trips,
+ 0,
+ tsens,
+ &tsens_thermal_ops,
+ NULL,
+ tsens->passive_delay,
+ tsens->polling_delay);
+ if (IS_ERR(tsens->tz)) {
+ dev_err(&priv->pdev->dev,
+ "failed to register thermal zone device %s\n",
+ tsens->name);
+ return PTR_ERR(tsens->tz);
+ }
+ }
+
+ return 0;
+}
+
+static void intel_tsens_remove_clk(struct intel_tsens_priv *priv)
+{
+ struct platform_device *pdev = priv->pdev;
+
+ clk_disable_unprepare(priv->tsens_clk);
+ devm_clk_put(&pdev->dev, priv->tsens_clk);
+}
+
+static int intel_tsens_clk_config(struct intel_tsens_priv *priv)
+{
+ struct platform_device *pdev = priv->pdev;
+ int ret;
+
+ if (priv->global_clk_available) {
+ priv->tsens_clk = devm_clk_get(&pdev->dev, NULL);
+ if (IS_ERR(priv->tsens_clk)) {
+ ret = PTR_ERR(priv->tsens_clk);
+ if (ret != -EPROBE_DEFER) {
+ dev_err(&pdev->dev,
+ "failed to get thermal clk: %d\n", ret);
+ }
+ return PTR_ERR(priv->tsens_clk);
+ }
+ ret = clk_set_rate(priv->tsens_clk, priv->tsens_clk_rate);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "failed to set rate for thermal clk: %d\n",
+ ret);
+ devm_clk_put(&pdev->dev, priv->tsens_clk);
+ return ret;
+ }
+ ret = clk_prepare_enable(priv->tsens_clk);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "failed to enable thermal clk: %d\n",
+ ret);
+ devm_clk_put(&pdev->dev, priv->tsens_clk);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+static int intel_tsens_config_sensors(struct device_node *s_node,
+ struct intel_tsens *tsens,
+ int sensor_type)
+{
+ struct intel_tsens_priv *priv = (struct intel_tsens_priv *)tsens->pdata;
+ struct platform_device *pdev = priv->pdev;
+ s32 trip_temp_count, trip_temp_type_c, i;
+
+ of_property_read_string_index(s_node, "plat_name", 0,
+ &tsens->plat_info.plat_name);
+ tsens->plat_info.id = 1 << sensor_type;
+ tsens->sensor_type = sensor_type;
+ if (of_property_read_u32(s_node, "passive_delay",
+ &tsens->passive_delay)) {
+ dev_err(&pdev->dev,
+ "passive_delay missing in dt for %s\n",
+ tsens->name);
+ return -EINVAL;
+ }
+ if (of_property_read_u32(s_node, "polling_delay",
+ &tsens->polling_delay)) {
+ dev_err(&pdev->dev,
+ "polling_delay missing in dt for %s\n",
+ tsens->name);
+ return -EINVAL;
+ }
+ trip_temp_count = of_property_count_u32_elems(s_node, "trip_temp");
+ trip_temp_type_c = of_property_count_strings(s_node, "trip_type");
+ if (trip_temp_count != trip_temp_type_c ||
+ trip_temp_count <= 0 || trip_temp_type_c <= 0) {
+ dev_err(&pdev->dev,
+ "trip temp config is missing in dt for %s\n",
+ tsens->name);
+ return -EINVAL;
+ }
+
+ tsens->trip_info =
+ devm_kcalloc(&pdev->dev, trip_temp_count,
+ sizeof(struct intel_tsens_trip_info *),
+ GFP_KERNEL);
+ if (!tsens->trip_info)
+ return -ENOMEM;
+ tsens->n_trips = trip_temp_count;
+ for (i = 0; i < trip_temp_count; i++) {
+ struct intel_tsens_trip_info *trip_info;
+ const char *trip_name;
+
+ trip_info = devm_kzalloc(&pdev->dev,
+ sizeof(struct intel_tsens_trip_info),
+ GFP_KERNEL);
+ if (!trip_info)
+ return -ENOMEM;
+
+ of_property_read_u32_index(s_node, "trip_temp", i,
+ &trip_info->temp);
+ of_property_read_string_index(s_node, "trip_type", i,
+ &trip_name);
+ if (!strcmp(trip_name, "passive"))
+ trip_info->trip_type = THERMAL_TRIP_PASSIVE;
+ else if (!strcmp(trip_name, "critical"))
+ trip_info->trip_type = THERMAL_TRIP_CRITICAL;
+ else if (!strcmp(trip_name, "hot"))
+ trip_info->trip_type = THERMAL_TRIP_HOT;
+ else
+ trip_info->trip_type = THERMAL_TRIP_ACTIVE;
+ tsens->trip_info[i] = trip_info;
+ }
+
+ return 0;
+}
+
+static int intel_tsens_config_dt(struct intel_tsens_priv *priv)
+{
+ struct platform_device *pdev = priv->pdev;
+ struct device_node *np = pdev->dev.of_node;
+ struct device_node *s_node = NULL, *node;
+ struct resource *res;
+ int i = 0, ret;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ priv->base_addr = devm_ioremap_resource(&pdev->dev, res);
+ node = of_parse_phandle(np, "soc-sensors", 0);
+ if (!node)
+ return -EINVAL;
+ priv->n_sens = of_get_child_count(node);
+ if (priv->n_sens == 0) {
+ dev_err(&pdev->dev, "No sensors configured in dt\n");
+ return -EINVAL;
+ }
+ priv->global_clk_available = of_property_read_bool(np, "clocks");
+ if (priv->global_clk_available) {
+ ret = of_property_read_u32(np, "clk-rate",
+ &priv->tsens_clk_rate);
+ if (ret) {
+ dev_err(&pdev->dev, "clk-rate not available in dt");
+ return ret;
+ }
+ }
+ of_property_read_string_index(np, "plat_name", 0,
+ &priv->plat_info.plat_name);
+ priv->intel_tsens =
+ devm_kcalloc(&pdev->dev, priv->n_sens,
+ sizeof(struct intel_tsens *),
+ GFP_KERNEL);
+ if (!priv->intel_tsens)
+ return -ENOMEM;
+ for_each_child_of_node(node, s_node) {
+ int r_count, size_count;
+ struct intel_tsens *ts;
+
+ ts = devm_kzalloc(&pdev->dev, sizeof(struct intel_tsens),
+ GFP_KERNEL);
+ if (!ts) {
+ of_node_put(s_node);
+ return -ENOMEM;
+ }
+ strcpy(ts->name, s_node->name);
+ if (!of_property_read_u32(s_node, "address-cells", &r_count) &&
+ !of_property_read_u32(s_node, "size-cells", &size_count)) {
+ if (r_count > 1) {
+ ret = of_property_read_u64_index(s_node, "reg",
+ 0, &ts->addr);
+ } else {
+ u32 *addr = (u32 *)&ts->addr;
+
+ ret = of_property_read_u32_index(s_node, "reg",
+ 0, addr);
+ }
+ if (ret) {
+ dev_err(&pdev->dev, "Invalid reg base address");
+ of_node_put(s_node);
+ return ret;
+ }
+ if (size_count > 1) {
+ int index =
+ (r_count > 1) ? (r_count / 2) :
+ r_count;
+
+ ret = of_property_read_u64_index(s_node, "reg",
+ index,
+ &ts->size);
+ } else {
+ u32 *size = (u32 *)&ts->size;
+
+ ret = of_property_read_u32_index(s_node, "reg",
+ r_count, size);
+ }
+ if (ret) {
+ dev_err(&pdev->dev, "Invalid size");
+ of_node_put(s_node);
+ return ret;
+ }
+ ts->base_addr = devm_ioremap(&pdev->dev,
+ ts->addr,
+ ts->size);
+ } else {
+ ts->base_addr = priv->base_addr;
+ }
+ if (!ts->base_addr) {
+ dev_err(&pdev->dev, "ioremap failed for %s\n",
+ ts->name);
+ of_node_put(s_node);
+ return -EINVAL;
+ }
+ ts->pdata = priv;
+ if (intel_tsens_config_sensors(s_node, ts, i)) {
+ dev_err(&pdev->dev,
+ "Missing sensor info in dts for %s\n",
+ ts->name);
+ of_node_put(s_node);
+ return -EINVAL;
+ }
+ priv->intel_tsens[i] = ts;
+ i++;
+ }
+
+ return 0;
+}
+
+static int intel_tsens_thermal_probe(struct platform_device *pdev)
+{
+ struct intel_tsens_priv *intel_tsens_priv;
+ int ret;
+
+ intel_tsens_priv = devm_kzalloc(&pdev->dev,
+ sizeof(struct intel_tsens_priv),
+ GFP_KERNEL);
+ if (!intel_tsens_priv)
+ return -ENOMEM;
+ intel_tsens_priv->pdev = pdev;
+ if (pdev->dev.of_node) {
+ ret = intel_tsens_config_dt(intel_tsens_priv);
+ if (ret) {
+ dev_err(&pdev->dev, "dt configuration failed\n");
+ return ret;
+ }
+ } else {
+ dev_err(&pdev->dev, "Non Device Tree build is not supported\n");
+ return -EINVAL;
+ }
+ ret = intel_tsens_clk_config(intel_tsens_priv);
+ if (ret) {
+ dev_err(&pdev->dev, "Thermal clk config failed\n");
+ return ret;
+ }
+ ret = intel_tsens_add_pdev(intel_tsens_priv);
+ if (ret) {
+ dev_err(&pdev->dev, "platform device registration failed\n");
+ goto remove_pdev;
+ }
+ ret = intel_tsens_add_thermal_zones(intel_tsens_priv);
+ if (ret) {
+ dev_err(&pdev->dev, "thermal zone configuration failed\n");
+ goto remove_tz;
+ }
+ platform_set_drvdata(pdev, intel_tsens_priv);
+ i2c_plat_data.pdata = intel_tsens_priv;
+ return 0;
+
+remove_tz:
+ intel_tsens_remove_thermal_zones(intel_tsens_priv);
+remove_pdev:
+ intel_tsens_unregister_pdev(intel_tsens_priv);
+ intel_tsens_remove_clk(intel_tsens_priv);
+ return ret;
+}
+
+/* Device Exit */
+static int intel_tsens_thermal_exit(struct platform_device *pdev)
+{
+ struct intel_tsens_priv *priv = platform_get_drvdata(pdev);
+
+ if (!priv) {
+ dev_err(&pdev->dev,
+ "unable to get private data\n");
+ return -EINVAL;
+ }
+ intel_tsens_remove_thermal_zones(priv);
+ intel_tsens_unregister_pdev(priv);
+ intel_tsens_remove_clk(priv);
+
+ return 0;
+}
+
+static const struct of_device_id intel_tsens_thermal_id_table[] = {
+ { .compatible = "intel,intel-tsens" },
+ {}
+};
+MODULE_DEVICE_TABLE(of, intel_tsens_thermal_id_table);
+
+static struct platform_driver intel_tsens_thermal_driver = {
+ .probe = intel_tsens_thermal_probe,
+ .remove = intel_tsens_thermal_exit,
+ .driver = {
+ .name = "intel_tsens_thermal",
+ .of_match_table = intel_tsens_thermal_id_table,
+ },
+};
+
+module_platform_driver(intel_tsens_thermal_driver);
+
+MODULE_DESCRIPTION("TSENS Thermal Driver");
+MODULE_AUTHOR("Sandeep Singh <[email protected]>");
+MODULE_AUTHOR("Raja Subramanian, Lakshmi Bai <[email protected]>");
+MODULE_AUTHOR("Udhayakumar C <[email protected]>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/intel_tsens/intel_tsens_thermal.h b/drivers/misc/intel_tsens/intel_tsens_thermal.h
new file mode 100644
index 000000000000..a531c95b20b3
--- /dev/null
+++ b/drivers/misc/intel_tsens/intel_tsens_thermal.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *
+ * Intel tsens thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#ifndef _LINUX_INTEL_TSENS_H
+#define _LINUX_INTEL_TSENS_H
+
+#include <linux/platform_device.h>
+#include <linux/thermal.h>
+
+struct intel_tsens_plat_data {
+ const char *name;
+ void __iomem *base_addr;
+ int (*get_temp)(struct platform_device *pdev, int type, int *temp);
+ void *pdata;
+};
+
+struct intel_tsens_plat_info {
+ const char *plat_name;
+ int id;
+ struct platform_device *pdev;
+ void __iomem *base_addr;
+};
+
+struct intel_tsens_i2c_plat_data {
+ int (*get_temp)(int type, int *temp, void *pdata);
+ void *pdata;
+};
+
+/* TSENS i2c platform data */
+extern struct intel_tsens_i2c_plat_data i2c_plat_data;
+
+#endif /* _LINUX_INTEL_TSENS_H */
diff --git a/include/linux/hddl_device.h b/include/linux/hddl_device.h
new file mode 100644
index 000000000000..1c21ad27ea33
--- /dev/null
+++ b/include/linux/hddl_device.h
@@ -0,0 +1,153 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *
+ * High Density Deep Learning Kernel module.
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#ifndef __HDDL_DEVICE_H
+#define __HDDL_DEVICE_H
+
+#include <linux/i2c.h>
+#include <linux/platform_device.h>
+#include <linux/thermal.h>
+#include <linux/types.h>
+#if IS_ENABLED(CONFIG_XLINK_CORE)
+#include <linux/xlink.h>
+#include <linux/xlink_drv_inf.h>
+#endif /* XLINK_CORE */
+
+#define HDDL_ALIGN 4
+
+#define HDDL_MAGIC 'x'
+#define HDDL_READ_SW_ID_DATA _IOW(HDDL_MAGIC, 'a', void*)
+#define HDDL_SOFT_RESET _IOW(HDDL_MAGIC, 'b', void*)
+
+struct sw_id_hddl_data {
+ u32 board_id;
+ u32 soc_id;
+ u32 soc_adaptor_no[2];
+ u32 sw_id;
+ u32 return_id;
+};
+
+struct sw_id_soft_reset {
+ u32 sw_id;
+ u32 return_id;
+};
+
+enum hddl_xlink_adapter {
+ HDDL_XLINK_I2C_MASTER,
+ HDDL_XLINK_I2C_SLAVE,
+ HDDL_XLINK_I2C_END,
+};
+
+enum hddl_device {
+ HDDL_I2C_CLIENT = (1 << 0),
+ HDDL_XLINK_CLIENT = (1 << 1),
+ HDDL_XLINK_SMBUS_CLIENT = (1 << 2),
+};
+
+enum hddl_device_status {
+ HDDL_DEV_STATUS_START,
+ HDDL_DEV_STATUS_CONNECTED,
+ HDDL_DEV_STATUS_DISCONNECTED,
+ HDDL_DEV_STATUS_END,
+};
+
+enum hddl_msg_type {
+ HDDL_GET_NSENS = 0x10,
+ HDDL_GET_SENS_NAME = 0x11,
+ HDDL_GET_SENS_DETAILS = 0x12,
+ HDDL_GET_SENS_TRIP_INFO = 0x13,
+ HDDL_GET_N_I2C_DEVS = 0x14,
+ HDDL_GET_I2C_DEVS = 0x15,
+ HDDL_GET_I2C_DEV_ADDR = 0x16,
+ HDDL_GET_SENS_COMPLETE = 0x20,
+};
+
+struct intel_hddl_tsens_msg {
+ int msg_type;
+ u32 sensor_type;
+ u32 trip_info_idx;
+} __packed __aligned(HDDL_ALIGN);
+
+struct intel_hddl_board_info {
+ int board_id;
+ int soc_id;
+} __packed __aligned(HDDL_ALIGN);
+
+struct intel_tsens_data {
+ char name[20];
+ u32 n_trips;
+ u32 passive_delay;
+ u32 polling_delay;
+ u32 sensor_type;
+} __packed __aligned(HDDL_ALIGN);
+
+struct intel_hddl_i2c_devs_data {
+ char name[20];
+ u32 addr;
+ u32 bus;
+ int enabled;
+ int local_host;
+ int remote_host;
+} __packed __aligned(HDDL_ALIGN);
+
+struct intel_hddl_i2c_devs {
+ char name[20];
+ u32 addr;
+ u32 bus;
+ int enabled;
+ int local_host;
+ int remote_host;
+ struct i2c_board_info board_info;
+ struct i2c_client *xlk_client;
+ struct i2c_client *i2c_client;
+ struct i2c_client *smbus_client;
+};
+
+struct intel_hddl_clients {
+#if IS_ENABLED(CONFIG_XLINK_CORE)
+ struct xlink_handle xlink_dev;
+#endif /* XLINK_CORE */
+ struct task_struct *hddl_dev_connect_task;
+ void *task;
+ u32 chan_num;
+ void *pdata;
+ struct intel_hddl_board_info board_info;
+ u32 xlink_i2c_ch[HDDL_XLINK_I2C_END];
+ u32 i2c_chan_num;
+ void **tsens;
+ u32 nsens;
+ struct platform_device *xlink_i2c_plt_dev[HDDL_XLINK_I2C_END];
+ struct platform_device *pdev;
+ struct i2c_adapter *adap[HDDL_XLINK_I2C_END];
+ struct i2c_adapter *smbus_adap;
+ struct intel_hddl_i2c_devs **i2c_devs;
+ int n_clients;
+ enum hddl_device_status status;
+ /* hddl device lock */
+ struct mutex lock;
+};
+
+struct intel_tsens_trip_info {
+ enum thermal_trip_type trip_type;
+ int temp;
+} __packed __aligned(HDDL_ALIGN);
+
+#if IS_ENABLED(CONFIG_XLINK_CORE)
+static inline u32 tsens_get_device_id(struct intel_hddl_clients *d)
+{
+ return d->xlink_dev.sw_device_id;
+}
+#else
+static inline u32 tsens_get_device_id(struct intel_hddl_clients *d)
+{
+ return -EINVAL;
+}
+#endif /* XLINK_CORE */
+
+#endif /* __HDDL_DEVICE_H */
--
2.17.1

2021-02-12 22:36:05

by mark gross

Subject: [PATCH v6 25/34] misc: Add Keem Bay VPU manager

From: "Li, Tingqian" <[email protected]>

The VPU IP on the Keem Bay SOC is a vision acceleration IP complex
under the control of an RTOS-based firmware (running on a RISC MCU
inside the VPU IP) that serves user-space applications running on
the CPU side for HW-accelerated computer vision tasks.

This module is the kernel counterpart of the VPUAL (VPU abstraction
layer), which bridges the firmware on the VPU side and applications
in CPU user space. It assists the VPU-side firmware in serving
multiple user-space application processes concurrently, while also
performing the necessary data buffer management on behalf of the
VPU IP.

objmgr provides the basic infrastructure to create/destroy VPU-side
software objects concurrently, on demand from user-space applications,
and also automatically releases leaked objects while handling
application termination. Note that this module only cares about the
life cycle of such objects; it is up to the application and firmware
to define the behavior/operations of each object.

objmgr does its job by communicating with the firmware through a
fixed, reserved xlink channel, using a very simple message protocol.

smm provides DMABuf allocation/import facilities that allow user-space
applications to pass data to/from the VPU in a zero-copy fashion. It
also provides a convenient ioctl for converting the virtual pointer of
a mem-mapped, imported DMABuf into its corresponding DMA address,
allowing a user-space application to specify sub-regions of a larger
DMABuf to be processed by the VPU.

Signed-off-by: Li Tingqian <[email protected]>
Signed-off-by: Zhou Luwei <[email protected]>
Signed-off-by: Wang jue <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
---
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/vpumgr/Kconfig | 9 +
drivers/misc/vpumgr/Makefile | 3 +
drivers/misc/vpumgr/vpu_common.h | 31 ++
drivers/misc/vpumgr/vpu_mgr.c | 370 ++++++++++++++++++++
drivers/misc/vpumgr/vpu_smm.c | 554 +++++++++++++++++++++++++++++
drivers/misc/vpumgr/vpu_smm.h | 30 ++
drivers/misc/vpumgr/vpu_vcm.c | 584 +++++++++++++++++++++++++++++++
drivers/misc/vpumgr/vpu_vcm.h | 84 +++++
include/uapi/misc/vpumgr.h | 64 ++++
11 files changed, 1731 insertions(+)
create mode 100644 drivers/misc/vpumgr/Kconfig
create mode 100644 drivers/misc/vpumgr/Makefile
create mode 100644 drivers/misc/vpumgr/vpu_common.h
create mode 100644 drivers/misc/vpumgr/vpu_mgr.c
create mode 100644 drivers/misc/vpumgr/vpu_smm.c
create mode 100644 drivers/misc/vpumgr/vpu_smm.h
create mode 100644 drivers/misc/vpumgr/vpu_vcm.c
create mode 100644 drivers/misc/vpumgr/vpu_vcm.h
create mode 100644 include/uapi/misc/vpumgr.h

diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 09ae65e98681..2d1f7b165cc8 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -484,4 +484,5 @@ source "drivers/misc/uacce/Kconfig"
source "drivers/misc/xlink-pcie/Kconfig"
source "drivers/misc/xlink-ipc/Kconfig"
source "drivers/misc/xlink-core/Kconfig"
+source "drivers/misc/vpumgr/Kconfig"
endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index f3a6eb03bae9..2936930f3edc 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -60,3 +60,4 @@ obj-$(CONFIG_HISI_HIKEY_USB) += hisi_hikey_usb.o
obj-y += xlink-pcie/
obj-$(CONFIG_XLINK_IPC) += xlink-ipc/
obj-$(CONFIG_XLINK_CORE) += xlink-core/
+obj-$(CONFIG_VPUMGR) += vpumgr/
diff --git a/drivers/misc/vpumgr/Kconfig b/drivers/misc/vpumgr/Kconfig
new file mode 100644
index 000000000000..bb82ff83afd3
--- /dev/null
+++ b/drivers/misc/vpumgr/Kconfig
@@ -0,0 +1,9 @@
+config VPUMGR
+ tristate "VPU Manager"
+ depends on ARM64 && XLINK_CORE
+ help
+ VPUMGR manages the life cycle of VPU-related resources that are
+ dynamically allocated on demand by user-space applications.
+
+ Select y or m if your processor includes an Intel Vision
+ Processing Unit (VPU).
diff --git a/drivers/misc/vpumgr/Makefile b/drivers/misc/vpumgr/Makefile
new file mode 100644
index 000000000000..51441dc8a930
--- /dev/null
+++ b/drivers/misc/vpumgr/Makefile
@@ -0,0 +1,3 @@
+# SPDX-License-Identifier: GPL-2.0-only
+obj-$(CONFIG_VPUMGR) += vpumgr.o
+vpumgr-objs := vpu_mgr.o vpu_smm.o vpu_vcm.o
diff --git a/drivers/misc/vpumgr/vpu_common.h b/drivers/misc/vpumgr/vpu_common.h
new file mode 100644
index 000000000000..cd474ffc05f3
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_common.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ * VPUMGR Kernel module - common definition
+ * Copyright (C) 2020-2021 Intel Corporation
+ */
+#ifndef _VPU_COMMON_H
+#define _VPU_COMMON_H
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+
+#include <uapi/misc/vpumgr.h>
+
+#include "vpu_vcm.h"
+
+/* there will be one such device for each HW instance */
+struct vpumgr_device {
+ struct device *sdev;
+ struct device *dev;
+ dev_t devnum;
+ struct cdev cdev;
+ struct platform_device *pdev;
+
+ struct vcm_dev vcm;
+ struct dentry *debugfs_root;
+
+ struct mutex client_mutex; /* protect client_list */
+ struct list_head client_list;
+};
+
+#define XLINK_INVALID_SW_DEVID 0xDEADBEEF
+
+#endif
diff --git a/drivers/misc/vpumgr/vpu_mgr.c b/drivers/misc/vpumgr/vpu_mgr.c
new file mode 100644
index 000000000000..75be64ebc3b0
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_mgr.c
@@ -0,0 +1,370 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * VPU Manager Kernel module.
+ *
+ * Copyright (C) 2020-2021 Intel Corporation
+ *
+ */
+#include <linux/debugfs.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+
+#include "vpu_common.h"
+#include "vpu_smm.h"
+#include "vpu_vcm.h"
+
+#define DRIVER_NAME "vpumgr"
+#define MAX_DEV_CNT 32
+
+/* Define the max xlink device number */
+#define MAX_SW_DEV_CNT 20
+
+/* Define the SW_DEVICE_ID bit mask and offset */
+#define IPC_INTERFACE 0x00
+#define PCIE_INTERFACE 0x01
+#define MASK_INTERFACE 0x7000000
+#define BIT_OFFSET_INTERFACE 24
+#define MASK_VPU_IPC_ID 0x000E
+#define BIT_OFFSET_VPU_ID 1
+
+#define SWDEVID_INTERFACE(sw_dev_id) (((sw_dev_id) & MASK_INTERFACE) >> BIT_OFFSET_INTERFACE)
+#define SWDEVID_VPU_IPC_ID(sw_dev_id) (((sw_dev_id) & MASK_VPU_IPC_ID) >> BIT_OFFSET_VPU_ID)
+
+static dev_t vpumgr_devnum;
+static struct class *vpumgr_class;
+
+/**
+ * struct vpumgr_fpriv - per-process context stored in FD private data.
+ * @vdev: vpumgr device corresponding to the file
+ * @smm: memory manager
+ * @ctx: vpu context manager
+ * @list: for global list of all opened file
+ * @pid: process which opens the device file
+ */
+struct vpumgr_fpriv {
+ struct vpumgr_device *vdev;
+ struct vpumgr_smm smm;
+ struct vpumgr_ctx ctx;
+ struct list_head list;
+ pid_t pid;
+};
+
+static u32 get_sw_device_id(int vpu_ipc_id)
+{
+ u32 sw_id_list[MAX_SW_DEV_CNT];
+ enum xlink_error rc;
+ u32 num = 0;
+ u32 swid;
+ int i;
+
+ rc = xlink_get_device_list(sw_id_list, &num);
+ if (rc) {
+ pr_err("XLINK get device list error %d in %s\n", rc, __func__);
+ return XLINK_INVALID_SW_DEVID;
+ }
+
+ for (i = 0; i < num; i++) {
+ swid = sw_id_list[i];
+ if (SWDEVID_INTERFACE(swid) == IPC_INTERFACE &&
+ SWDEVID_VPU_IPC_ID(swid) == vpu_ipc_id)
+ return swid;
+ }
+ return XLINK_INVALID_SW_DEVID;
+}
+
+static int vpumgr_open(struct inode *inode, struct file *filp)
+{
+ struct vpumgr_fpriv *vpriv;
+ struct vpumgr_device *vdev;
+ int rc;
+
+ vpriv = kzalloc(sizeof(*vpriv), GFP_KERNEL);
+ if (!vpriv)
+ return -ENOMEM;
+
+ vdev = container_of(inode->i_cdev, struct vpumgr_device, cdev);
+ rc = smm_open(&vpriv->smm, vdev);
+ if (rc)
+ goto free_priv;
+
+ rc = vcm_open(&vpriv->ctx, vdev);
+ if (rc)
+ goto close_smm;
+
+ vpriv->vdev = vdev;
+ vpriv->pid = task_pid_nr(current);
+ INIT_LIST_HEAD(&vpriv->list);
+
+ mutex_lock(&vdev->client_mutex);
+ list_add_tail(&vpriv->list, &vdev->client_list);
+ mutex_unlock(&vdev->client_mutex);
+
+ filp->private_data = vpriv;
+ return 0;
+
+close_smm:
+ smm_close(&vpriv->smm);
+free_priv:
+ kfree(vpriv);
+ return rc;
+}
+
+static int vpumgr_release(struct inode *inode, struct file *filp)
+{
+ struct vpumgr_fpriv *vpriv = filp->private_data;
+ struct vpumgr_device *vdev = container_of(inode->i_cdev, struct vpumgr_device, cdev);
+
+ vcm_close(&vpriv->ctx);
+ smm_close(&vpriv->smm);
+
+ mutex_lock(&vdev->client_mutex);
+ list_del(&vpriv->list);
+ mutex_unlock(&vdev->client_mutex);
+
+ kfree(vpriv);
+ filp->private_data = NULL;
+ return 0;
+}
+
+static long vpumgr_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ struct vpumgr_fpriv *vpriv = filp->private_data;
+ const unsigned int io_dir = _IOC_DIR(cmd);
+ const unsigned int io_size = _IOC_SIZE(cmd);
+ struct vpumgr_vcm_submit *vs;
+ struct vpumgr_vcm_wait *vw;
+ char tmp[128];
+ int rc = 0;
+
+ if (_IOC_TYPE(cmd) != VPUMGR_MAGIC || _IOC_NR(cmd) >= _IOC_NR(VPUMGR_IOCTL_END))
+ return -EINVAL;
+
+ if (io_size > sizeof(tmp))
+ return -EFAULT;
+
+ if (io_dir & _IOC_READ) {
+ if (copy_from_user(tmp, (void __user *)arg, io_size) != 0)
+ return -EFAULT;
+ }
+
+ switch (cmd) {
+ case VPUMGR_IOCTL_DMABUF_ALLOC:
+ rc = smm_alloc(&vpriv->smm, (void *)tmp);
+ break;
+ case VPUMGR_IOCTL_DMABUF_IMPORT:
+ rc = smm_import(&vpriv->smm, (void *)tmp);
+ break;
+ case VPUMGR_IOCTL_DMABUF_UNIMPORT:
+ rc = smm_unimport(&vpriv->smm, (void *)tmp);
+ break;
+ case VPUMGR_IOCTL_DMABUF_PTR2VPU:
+ rc = smm_ptr2vpu(&vpriv->smm, (void *)tmp);
+ break;
+ case VPUMGR_IOCTL_VCM_SUBMIT:
+ vs = (struct vpumgr_vcm_submit *)tmp;
+ if (vs->cmd <= VCTX_KMD_RESERVED_CMD_LAST) {
+ /*
+ * User space may talk to the vpu context living in firmware
+ * using any command other than those reserved for kernel
+ * mode.
+ */
+ rc = -EACCES;
+ break;
+ }
+ rc = vcm_submit(&vpriv->ctx, vs->cmd,
+ (const void *)vs->in, vs->in_len, &vs->submit_id);
+ break;
+ case VPUMGR_IOCTL_VCM_WAIT:
+ vw = (struct vpumgr_vcm_wait *)tmp;
+ rc = vcm_wait(&vpriv->ctx, vw->submit_id, &vw->vpu_rc,
+ (void *)vw->out, &vw->out_len, vw->timeout_ms);
+ break;
+ }
+
+ if (!rc) {
+ if (io_dir & _IOC_WRITE) {
+ if (copy_to_user((void __user *)arg, tmp, io_size) != 0)
+ return -EFAULT;
+ }
+ }
+ return rc;
+}
+
+static const struct file_operations vpumgr_devfile_fops = {
+ .owner = THIS_MODULE,
+ .open = vpumgr_open,
+ .release = vpumgr_release,
+ .unlocked_ioctl = vpumgr_ioctl,
+};
+
+static int vpumgr_debugfs_stats_show(struct seq_file *file, void *offset)
+{
+ struct vpumgr_device *vdev = dev_get_drvdata(file->private);
+ struct vpumgr_fpriv *fpriv;
+ int i = 0;
+
+ mutex_lock(&vdev->client_mutex);
+ list_for_each_entry(fpriv, &vdev->client_list, list) {
+ seq_printf(file, "client #%d pid:%d\n", i++, fpriv->pid);
+ vcm_debugfs_stats_show(file, &fpriv->ctx);
+ smm_debugfs_stats_show(file, &fpriv->smm);
+ }
+ mutex_unlock(&vdev->client_mutex);
+ return 0;
+}
+
+static const struct of_device_id keembay_vpumgr_of_match[] = {
+ { .compatible = "intel,keembay-vpu-mgr"},
+ { .compatible = "intel,keembay-vpusmm"},
+ {}
+};
+MODULE_DEVICE_TABLE(of, keembay_vpumgr_of_match);
+
+static int vpumgr_driver_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct vpumgr_device *vdev;
+ u32 ipc_sw_device_id;
+ u32 vpu_ipc_id = 0;
+ int rc;
+
+ /* get device id */
+ rc = of_property_read_u32(dev->of_node, "intel,keembay-vpu-ipc-id",
+ &vpu_ipc_id);
+ if (rc && rc != -EINVAL) {
+ dev_err(dev, "%s: vpu-ipc-id read failed with rc %d\n", __func__, rc);
+ return -EINVAL;
+ }
+
+ ipc_sw_device_id = get_sw_device_id(vpu_ipc_id);
+ if (ipc_sw_device_id == XLINK_INVALID_SW_DEVID)
+ dev_warn(dev, "%s: no xlink sw device for vpu_ipc_id %d\n",
+ __func__, vpu_ipc_id);
+
+ vdev = devm_kzalloc(dev, sizeof(struct vpumgr_device), GFP_KERNEL);
+ if (!vdev)
+ return -ENOMEM;
+
+ vdev->devnum = MKDEV(MAJOR(vpumgr_devnum), vpu_ipc_id);
+ vdev->pdev = pdev;
+ vdev->dev = dev;
+
+ dev_dbg(dev, "dev->devnum %u, id %u, major %u\n",
+ vdev->devnum, vpu_ipc_id, MAJOR(vdev->devnum));
+
+ vdev->sdev = device_create(vpumgr_class, dev, vdev->devnum,
+ NULL, DRIVER_NAME "%d", vpu_ipc_id);
+ if (IS_ERR(vdev->sdev)) {
+ dev_err(dev, "%s: device_create failed\n", __func__);
+ return PTR_ERR(vdev->sdev);
+ }
+
+ cdev_init(&vdev->cdev, &vpumgr_devfile_fops);
+ vdev->cdev.owner = THIS_MODULE;
+ rc = cdev_add(&vdev->cdev, vdev->devnum, 1);
+ if (rc) {
+ dev_err(dev, "%s: cdev_add failed.\n", __func__);
+ goto destroy_device;
+ }
+
+ vdev->debugfs_root = debugfs_create_dir(dev_name(vdev->sdev), NULL);
+
+ debugfs_create_devm_seqfile(dev, "stats", vdev->debugfs_root,
+ vpumgr_debugfs_stats_show);
+
+ rc = smm_init(vdev);
+ if (rc)
+ goto remove_debugfs;
+
+ rc = vcm_init(vdev, ipc_sw_device_id);
+ if (rc)
+ goto fini_smm;
+
+ INIT_LIST_HEAD(&vdev->client_list);
+ mutex_init(&vdev->client_mutex);
+
+ dev_set_drvdata(dev, vdev);
+ return 0;
+
+fini_smm:
+ smm_fini(vdev);
+remove_debugfs:
+ debugfs_remove_recursive(vdev->debugfs_root);
+ cdev_del(&vdev->cdev);
+destroy_device:
+ device_destroy(vpumgr_class, vdev->devnum);
+ return rc;
+}
+
+static int vpumgr_driver_remove(struct platform_device *pdev)
+{
+ struct vpumgr_device *vdev = dev_get_drvdata(&pdev->dev);
+
+ mutex_destroy(&vdev->client_mutex);
+ vcm_fini(vdev);
+ smm_fini(vdev);
+ debugfs_remove_recursive(vdev->debugfs_root);
+ cdev_del(&vdev->cdev);
+ device_destroy(vpumgr_class, vdev->devnum);
+ return 0;
+}
+
+static struct platform_driver vpumgr_driver = {
+ .probe = vpumgr_driver_probe,
+ .remove = vpumgr_driver_remove,
+ .driver = {
+ .owner = THIS_MODULE,
+ .name = "keembay-vpu-mgr",
+ .of_match_table = keembay_vpumgr_of_match,
+ },
+};
+
+static int __init vpumgr_init(void)
+{
+ int rc;
+
+ rc = alloc_chrdev_region(&vpumgr_devnum, 0, MAX_DEV_CNT, DRIVER_NAME);
+ if (rc < 0) {
+ pr_err("[%s] err: alloc_chrdev_region\n", __func__);
+ return rc;
+ }
+
+ vpumgr_class = class_create(THIS_MODULE, DRIVER_NAME "_class");
+ if (IS_ERR(vpumgr_class)) {
+ rc = PTR_ERR(vpumgr_class);
+ pr_err("[%s] err: class_create\n", __func__);
+ goto unreg_chrdev;
+ }
+
+ rc = platform_driver_register(&vpumgr_driver);
+ if (rc) {
+ pr_err("[%s] err platform_driver_register\n", __func__);
+ goto destroy_class;
+ }
+
+ return 0;
+
+destroy_class:
+ class_destroy(vpumgr_class);
+unreg_chrdev:
+ unregister_chrdev_region(vpumgr_devnum, MAX_DEV_CNT);
+ return rc;
+}
+
+static void vpumgr_exit(void)
+{
+ platform_driver_unregister(&vpumgr_driver);
+ class_destroy(vpumgr_class);
+ unregister_chrdev_region(vpumgr_devnum, MAX_DEV_CNT);
+}
+
+module_init(vpumgr_init)
+module_exit(vpumgr_exit)
+
+MODULE_DESCRIPTION("VPU resource manager driver");
+MODULE_AUTHOR("Tingqian Li <[email protected]>");
+MODULE_AUTHOR("Luwei Zhou <[email protected]>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/misc/vpumgr/vpu_smm.c b/drivers/misc/vpumgr/vpu_smm.c
new file mode 100644
index 000000000000..a89f62984a48
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_smm.c
@@ -0,0 +1,554 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 Intel Corporation
+ */
+#include <linux/cdev.h>
+#include <linux/debugfs.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-buf.h>
+#include <linux/dma-direct.h>
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/of_device.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/rbtree.h>
+#include <linux/slab.h>
+#include <linux/seq_file.h>
+#include <linux/uaccess.h>
+
+#include <linux/sched.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
+
+#include <uapi/misc/vpumgr.h>
+
+#include "vpu_common.h"
+#include "vpu_smm.h"
+
+/*
+ * A DMABuf exported by the VPU device is described by this structure.
+ * The buffer may outlive the session (it may be shared with other
+ * drivers), so it holds a pointer to the device rather than the session.
+ */
+struct vpusmm_buffer {
+ struct device *dev;
+ void *cookie;
+ dma_addr_t dma_addr;
+ size_t size;
+ unsigned long dma_attrs;
+};
+
+/*
+ * DMABufs imported into the VPU device are kept in an rb tree keyed by
+ * the dmabuf pointer, so user space can unimport a buffer by fd and the
+ * ptr2vpu API can translate addresses by searching this tree.
+ */
+struct vpusmm_impbuf {
+ struct rb_node node;
+ struct dma_buf *dmabuf;
+ struct dma_buf_attachment *attach;
+ struct sg_table *sgt;
+ enum dma_data_direction direction;
+ dma_addr_t vpu_addr;
+ int refcount;
+};
+
+/*
+ * VPU imported dmabuf management
+ */
+static void vpusmm_insert_impbuf(struct vpumgr_smm *sess,
+ struct vpusmm_impbuf *new_item)
+{
+ struct rb_root *root = &sess->imp_rb;
+ struct rb_node **iter = &root->rb_node, *parent = NULL;
+ struct dma_buf *value = new_item->dmabuf;
+ struct vpusmm_impbuf *item;
+
+ while (*iter) {
+ parent = *iter;
+ item = rb_entry(parent, struct vpusmm_impbuf, node);
+
+ if (item->dmabuf > value)
+ iter = &(*iter)->rb_left;
+ else
+ iter = &(*iter)->rb_right;
+ }
+
+ /* Put the new node there */
+ rb_link_node(&new_item->node, parent, iter);
+ rb_insert_color(&new_item->node, root);
+}
+
+static struct vpusmm_impbuf *vpusmm_find_impbuf(struct vpumgr_smm *sess,
+ struct dma_buf *dmabuf)
+{
+ struct rb_node *node = sess->imp_rb.rb_node;
+ struct vpusmm_impbuf *item;
+
+ while (node) {
+ item = rb_entry(node, struct vpusmm_impbuf, node);
+
+ if (item->dmabuf > dmabuf)
+ node = node->rb_left;
+ else if (item->dmabuf < dmabuf)
+ node = node->rb_right;
+ else
+ return item;
+ }
+ return NULL;
+}
+
+static struct vpusmm_impbuf *vpusmm_import_dmabuf(struct vpumgr_smm *sess,
+ int dmabuf_fd,
+ enum dma_data_direction direction, int vpu_id)
+{
+ struct vpusmm_impbuf *item;
+ struct dma_buf_attachment *attach;
+ struct device *dma_dev = sess->vdev->dev;
+ struct dma_buf *dmabuf;
+ struct sg_table *sgt;
+
+ dmabuf = dma_buf_get(dmabuf_fd);
+ if (IS_ERR(dmabuf))
+ return ERR_CAST(dmabuf);
+
+ mutex_lock(&sess->imp_rb_lock);
+ item = vpusmm_find_impbuf(sess, dmabuf);
+ if (item) {
+ item->refcount++;
+ goto found_impbuf;
+ }
+
+ attach = dma_buf_attach(dmabuf, dma_dev);
+ if (IS_ERR(attach)) {
+ item = ERR_CAST(attach);
+ goto fail_attach;
+ }
+
+ sgt = dma_buf_map_attachment(attach, direction);
+ if (IS_ERR(sgt)) {
+ item = ERR_CAST(sgt);
+ goto fail_map;
+ }
+
+ if (sgt->nents > 1) {
+ item = ERR_PTR(-EINVAL);
+ goto fail_import;
+ }
+
+ item = kzalloc(sizeof(*item), GFP_KERNEL);
+ if (!item) {
+ item = ERR_PTR(-ENOMEM);
+ goto fail_import;
+ }
+
+ item->dmabuf = dmabuf;
+ item->attach = attach;
+ item->sgt = sgt;
+ item->direction = direction;
+ item->vpu_addr = sg_dma_address(sgt->sgl);
+ item->refcount = 1;
+
+ vpusmm_insert_impbuf(sess, item);
+
+ mutex_unlock(&sess->imp_rb_lock);
+ return item;
+
+fail_import:
+ dma_buf_unmap_attachment(attach, sgt, direction);
+fail_map:
+ dma_buf_detach(dmabuf, attach);
+fail_attach:
+found_impbuf:
+ mutex_unlock(&sess->imp_rb_lock);
+ dma_buf_put(dmabuf);
+ return item;
+}
+
+int smm_open(struct vpumgr_smm *sess, struct vpumgr_device *vdev)
+{
+ sess->vdev = vdev;
+ sess->imp_rb = RB_ROOT;
+ mutex_init(&sess->imp_rb_lock);
+ return 0;
+}
+
+int smm_close(struct vpumgr_smm *sess)
+{
+ struct device *dev = sess->vdev->sdev;
+ struct rb_node *cur, *next;
+ struct vpusmm_impbuf *item;
+
+ mutex_destroy(&sess->imp_rb_lock);
+
+ cur = rb_first(&sess->imp_rb);
+ while (cur) {
+ item = rb_entry(cur, struct vpusmm_impbuf, node);
+ next = rb_next(cur);
+ if (item) {
+ dev_dbg(dev, "[%s] PID:%d free leaked imported dmabuf %zu bytes, %d refs\n",
+ __func__, current->pid, item->dmabuf->size, item->refcount);
+ dma_buf_unmap_attachment(item->attach, item->sgt, item->direction);
+ dma_buf_detach(item->dmabuf, item->attach);
+ dma_buf_put(item->dmabuf);
+ rb_erase(&item->node, &sess->imp_rb);
+ kfree(item);
+ }
+ cur = next;
+ }
+ return 0;
+}
+
+/*
+ * DMABuf
+ */
+static struct sg_table *map_dma_buf_vpusmm(struct dma_buf_attachment *attach,
+ enum dma_data_direction dir)
+{
+ struct vpusmm_buffer *buff = attach->dmabuf->priv;
+ struct sg_table *sgt;
+ int rc;
+
+ if (WARN_ON(dir == DMA_NONE))
+ return ERR_PTR(-EINVAL);
+
+ sgt = kzalloc(sizeof(*sgt), GFP_KERNEL);
+ if (!sgt)
+ return NULL;
+
+ rc = dma_get_sgtable(buff->dev, sgt, buff->cookie, buff->dma_addr, buff->size);
+ if (rc < 0)
+ goto free_sgt;
+
+ rc = dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir, DMA_ATTR_SKIP_CPU_SYNC);
+ if (!rc)
+ goto free_sg_table;
+ return sgt;
+
+free_sg_table:
+ sg_free_table(sgt);
+free_sgt:
+ kfree(sgt);
+ return ERR_PTR(rc);
+}
+
+static void unmap_dma_buf_vpusmm(struct dma_buf_attachment *attach,
+ struct sg_table *sgt, enum dma_data_direction dir)
+{
+ dma_unmap_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir, DMA_ATTR_SKIP_CPU_SYNC);
+ sg_free_table(sgt);
+ kfree(sgt);
+}
+
+static void release_vpusmm(struct dma_buf *dmabuf)
+{
+ struct vpusmm_buffer *buff = dmabuf->priv;
+
+ dma_free_attrs(buff->dev, buff->size, buff->cookie, buff->dma_addr, buff->dma_attrs);
+ kfree(buff);
+}
+
+static int mmap_vpusmm(struct dma_buf *dmabuf, struct vm_area_struct *vma)
+{
+ struct vpusmm_buffer *buff = dmabuf->priv;
+ unsigned long vm_size;
+
+ vm_size = vma->vm_end - vma->vm_start;
+ if (vm_size > PAGE_ALIGN(buff->size))
+ return -EINVAL;
+
+ vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+ vma->vm_pgoff = 0;
+
+ return dma_mmap_attrs(buff->dev, vma, buff->cookie, buff->dma_addr,
+ buff->size, buff->dma_attrs);
+}
+
+static int vmap_vpusmm(struct dma_buf *dmabuf, struct dma_buf_map *map)
+{
+ struct vpusmm_buffer *buff = dmabuf->priv;
+
+ dma_buf_map_set_vaddr(map, buff->cookie);
+
+ return 0;
+}
+
+static const struct dma_buf_ops vpusmm_dmabuf_ops = {
+ .cache_sgt_mapping = true,
+ .map_dma_buf = map_dma_buf_vpusmm,
+ .unmap_dma_buf = unmap_dma_buf_vpusmm,
+ .release = release_vpusmm,
+ .mmap = mmap_vpusmm,
+ .vmap = vmap_vpusmm,
+};
+
+/*
+ * Allocate a DMA buffer suitable for VPU access and export it as a
+ * DMABuf fd. The session holds an additional refcount on the dmabuf
+ * when it is passed to the VPU side for processing.
+ */
+int smm_alloc(struct vpumgr_smm *sess, struct vpumgr_args_alloc *arg)
+{
+ struct vpumgr_device *vdev = sess->vdev;
+ const int flags = O_RDWR | O_CLOEXEC;
+ size_t buffer_size = arg->size;
+ struct dma_buf *dmabuf = NULL;
+ phys_addr_t phys_addr;
+ struct dma_buf_export_info exp_info = {
+ .exp_name = dev_name(vdev->sdev),
+ .owner = THIS_MODULE,
+ .ops = &vpusmm_dmabuf_ops,
+ .size = buffer_size,
+ .flags = flags
+ };
+ struct vpusmm_buffer *buff;
+ int retval;
+
+ buff = kzalloc(sizeof(*buff), GFP_KERNEL);
+ if (!buff) {
+ retval = -ENOMEM;
+ goto failed;
+ }
+
+ buff->dev = vdev->dev;
+ buff->size = buffer_size;
+ buff->dma_attrs = DMA_ATTR_FORCE_CONTIGUOUS | DMA_ATTR_WRITE_COMBINE;
+ buff->cookie = dma_alloc_attrs(vdev->dev, buff->size, &buff->dma_addr,
+ GFP_KERNEL | GFP_DMA, buff->dma_attrs);
+ if (!buff->cookie) {
+ retval = -ENOMEM;
+ goto failed;
+ }
+
+ phys_addr = dma_to_phys(vdev->dev, buff->dma_addr);
+
+ exp_info.priv = buff;
+ dmabuf = dma_buf_export(&exp_info);
+ if (IS_ERR(dmabuf)) {
+ retval = PTR_ERR(dmabuf);
+ dmabuf = NULL;
+ goto failed;
+ }
+
+ retval = dma_buf_fd(dmabuf, flags);
+ if (retval < 0) {
+ goto failed;
+ } else {
+ arg->fd = retval;
+ retval = 0;
+ }
+
+ dev_dbg(vdev->dev, "%s: dma_addr=%llx, phys_addr=%llx allocated from %s\n",
+ __func__, buff->dma_addr, phys_addr, dev_name(vdev->dev));
+
+ return 0;
+failed:
+ dev_err(vdev->dev, "%s failed with %d\n", __func__, retval);
+
+ if (dmabuf) {
+ /* this will finally release underlying buff */
+ dma_buf_put(dmabuf);
+ } else if (buff) {
+ if (buff->cookie)
+ dma_free_attrs(vdev->dev, buff->size, buff->cookie,
+ buff->dma_addr, buff->dma_attrs);
+ kfree(buff);
+ }
+ return retval;
+}
+
+int smm_import(struct vpumgr_smm *sess, struct vpumgr_args_import *arg)
+{
+ struct device *dev = sess->vdev->sdev;
+ enum dma_data_direction direction;
+ struct vpusmm_impbuf *item;
+
+ switch (arg->vpu_access) {
+ case VPU_ACCESS_READ:
+ direction = DMA_TO_DEVICE;
+ break;
+ case VPU_ACCESS_WRITE:
+ direction = DMA_FROM_DEVICE;
+ break;
+ case VPU_ACCESS_DEFAULT:
+ case VPU_ACCESS_RW:
+ direction = DMA_BIDIRECTIONAL;
+ break;
+ default:
+ dev_err(dev, "Unknown vpu_access:%d\n", arg->vpu_access);
+ return -EINVAL;
+ }
+
+ item = vpusmm_import_dmabuf(sess, arg->fd, direction, 0);
+ if (IS_ERR(item))
+ return PTR_ERR(item);
+
+ arg->vpu_addr = item->vpu_addr;
+ arg->size = item->dmabuf->size;
+ return 0;
+}
+
+int smm_unimport(struct vpumgr_smm *sess, int *p_dmabuf_fd)
+{
+ struct vpusmm_impbuf *item;
+ struct dma_buf *dmabuf;
+ int rc = 0;
+
+ dmabuf = dma_buf_get(*p_dmabuf_fd);
+ if (IS_ERR(dmabuf))
+ return PTR_ERR(dmabuf);
+
+ mutex_lock(&sess->imp_rb_lock);
+ item = vpusmm_find_impbuf(sess, dmabuf);
+ if (!item) {
+ rc = -EINVAL;
+ goto exit;
+ }
+
+ item->refcount--;
+ if (item->refcount <= 0) {
+ rb_erase(&item->node, &sess->imp_rb);
+ dma_buf_unmap_attachment(item->attach, item->sgt, item->direction);
+ dma_buf_detach(item->dmabuf, item->attach);
+ dma_buf_put(item->dmabuf);
+ kfree(item);
+ }
+exit:
+ mutex_unlock(&sess->imp_rb_lock);
+ dma_buf_put(dmabuf);
+ return rc;
+}
+
+int smm_ptr2vpu(struct vpumgr_smm *sess, unsigned long *arg)
+{
+ struct device *dev = sess->vdev->sdev;
+ struct task_struct *task = current;
+ struct dma_buf *dmabuf = NULL;
+ unsigned long vaddr = *arg;
+ struct vm_area_struct *vma;
+ struct vpusmm_impbuf *item;
+ struct mm_struct *mm;
+
+ mm = get_task_mm(task);
+ if (!mm)
+ goto failed;
+
+ mmap_read_lock(mm);
+
+ vma = find_vma(mm, vaddr);
+ if (!vma) {
+ dev_dbg(dev, "cannot find vaddr: %lx\n", vaddr);
+ goto failed;
+ }
+
+ if (vaddr < vma->vm_start) {
+ dev_dbg(dev, "failed at line %d, vaddr=%lx, vma->vm_start=%lx\n",
+ __LINE__, vaddr, vma->vm_start);
+ goto failed;
+ }
+
+ /* make sure the vma is backed by a dmabuf */
+ if (!vma->vm_file) {
+ dev_dbg(dev, "failed at line %d\n", __LINE__);
+ goto failed;
+ }
+
+ dmabuf = vma->vm_file->private_data;
+ if (!dmabuf) {
+ dev_dbg(dev, "failed at line %d\n", __LINE__);
+ goto failed;
+ }
+
+ if (dmabuf->file != vma->vm_file) {
+ dev_dbg(dev, "failed at line %d\n", __LINE__);
+ goto failed;
+ }
+ mmap_read_unlock(mm);
+ mmput(mm);
+
+ mutex_lock(&sess->imp_rb_lock);
+ item = vpusmm_find_impbuf(sess, dmabuf);
+ mutex_unlock(&sess->imp_rb_lock);
+
+ if (!item) {
+ dev_dbg(dev, "failed to find dmabuf in imported list for vaddr=0x%lx (%d)\n",
+ vaddr, __LINE__);
+ return -EFAULT;
+ }
+
+ *arg = (dma_addr_t)(vaddr - vma->vm_start) + item->vpu_addr;
+ return 0;
+
+failed:
+ if (mm) {
+ mmap_read_unlock(mm);
+ mmput(mm);
+ }
+ return -EFAULT;
+}
+
+int smm_debugfs_stats_show(struct seq_file *file, struct vpumgr_smm *sess)
+{
+ struct rb_node *cur, *next;
+ struct vpusmm_impbuf *item;
+ int i;
+
+ seq_puts(file, "\tdmabuf\texpname\tsize(bytes)\tfilecount\trefs\n");
+
+ mutex_lock(&sess->imp_rb_lock);
+ cur = rb_first(&sess->imp_rb);
+ i = 0;
+ while (cur) {
+ item = rb_entry(cur, struct vpusmm_impbuf, node);
+ next = rb_next(cur);
+ if (item)
+ seq_printf(file, "\t%d:%s\t%s\t%zu\t%ld\t%d\n", i++,
+ item->dmabuf->name ? : "",
+ item->dmabuf->exp_name,
+ item->dmabuf->size,
+ file_count(item->dmabuf->file),
+ item->refcount);
+ cur = next;
+ }
+ mutex_unlock(&sess->imp_rb_lock);
+ return 0;
+}
+
+int smm_init(struct vpumgr_device *vdev)
+{
+ int rc = 0;
+
+ if (!vdev->dev->of_node) {
+ /*
+ * No of_node implies:
+ * 1. no IOMMU; the VPU device is only 32-bit DMA capable
+ * 2. default CMA, since no device tree node specifies a memory-region
+ */
+ dma_set_mask(vdev->dev, DMA_BIT_MASK(32));
+ dma_set_coherent_mask(vdev->dev, DMA_BIT_MASK(32));
+ } else {
+ /* Initialize reserved memory resources */
+ rc = of_reserved_mem_device_init(vdev->dev);
+ if (rc) {
+ if (rc == -ENODEV) {
+ dev_warn(vdev->dev,
+ "No reserved memory specified, use default cma\n");
+ rc = 0;
+ } else {
+ dev_err(vdev->dev,
+ "Failed to init reserved memory, rc=%d\n", rc);
+ }
+ }
+ }
+ return rc;
+}
+
+int smm_fini(struct vpumgr_device *vdev)
+{
+ return 0;
+}
diff --git a/drivers/misc/vpumgr/vpu_smm.h b/drivers/misc/vpumgr/vpu_smm.h
new file mode 100644
index 000000000000..ff547649d95c
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_smm.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ * Copyright (C) 2020 Intel Corporation
+ */
+#ifndef _VPU_SMM_H
+#define _VPU_SMM_H
+#include <linux/kernel.h>
+#include <linux/rbtree.h>
+
+#include "vpu_common.h"
+
+struct vpumgr_smm {
+ struct vpumgr_device *vdev;
+ struct rb_root imp_rb;
+ struct mutex imp_rb_lock; /* protects imp_rb */
+};
+
+int smm_init(struct vpumgr_device *vdev);
+int smm_fini(struct vpumgr_device *vdev);
+
+int smm_open(struct vpumgr_smm *sess, struct vpumgr_device *vdev);
+int smm_close(struct vpumgr_smm *sess);
+
+int smm_alloc(struct vpumgr_smm *sess, struct vpumgr_args_alloc *arg);
+int smm_import(struct vpumgr_smm *sess, struct vpumgr_args_import *arg);
+int smm_unimport(struct vpumgr_smm *sess, int *p_dmabuf_fd);
+int smm_ptr2vpu(struct vpumgr_smm *sess, unsigned long *arg);
+
+int smm_debugfs_stats_show(struct seq_file *file, struct vpumgr_smm *sess);
+
+#endif
diff --git a/drivers/misc/vpumgr/vpu_vcm.c b/drivers/misc/vpumgr/vpu_vcm.c
new file mode 100644
index 000000000000..44b9408d478e
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_vcm.c
@@ -0,0 +1,584 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020-2021 Intel Corporation
+ */
+#include <linux/delay.h>
+#include <linux/dma-buf.h>
+#include <linux/idr.h>
+#include <linux/mutex.h>
+#include <linux/kthread.h>
+#include <linux/sched/task.h>
+#include <linux/seq_file.h>
+#include <linux/uaccess.h>
+#include <linux/workqueue.h>
+#include <linux/xlink.h>
+#include "vpu_common.h"
+#include "vpu_vcm.h"
+
+#define XLINK_IPC_TIMEOUT 1000u
+
+/* Static xlink configuration */
+#define VCM_XLINK_CHANNEL 1
+#define VCM_XLINK_CHAN_SIZE 128
+
+static const int msg_header_size = offsetof(struct vcm_msg, payload.data);
+
+struct vpu_cmd {
+ struct work_struct work;
+ struct kref refcount;
+ struct xlink_handle *handle;
+ struct vpumgr_ctx *vctx; /* the submitting vpu context */
+ struct vcm_msg msg; /* message buffer for send/recv */
+ struct completion complete; /* completion for async submit/reply */
+ int submit_err; /* error code for submittion process */
+};
+
+static int vcm_vpu_link_init(struct vcm_dev *pvcm)
+{
+ struct vpumgr_device *vdev = container_of(pvcm, struct vpumgr_device, vcm);
+ enum xlink_error rc;
+
+ pvcm->ipc_xlink_handle.dev_type = VPUIP_DEVICE;
+ pvcm->ipc_xlink_handle.sw_device_id = pvcm->sw_dev_id;
+
+ rc = xlink_initialize();
+ if (rc != X_LINK_SUCCESS)
+ goto exit;
+
+ rc = xlink_connect(&pvcm->ipc_xlink_handle);
+ if (rc != X_LINK_SUCCESS)
+ goto exit;
+
+ rc = xlink_open_channel(&pvcm->ipc_xlink_handle, VCM_XLINK_CHANNEL,
+ RXB_TXB, VCM_XLINK_CHAN_SIZE, XLINK_IPC_TIMEOUT);
+ if (rc != X_LINK_SUCCESS && rc != X_LINK_ALREADY_OPEN) {
+ xlink_disconnect(&pvcm->ipc_xlink_handle);
+ goto exit;
+ }
+
+ rc = 0;
+exit:
+ dev_info(vdev->dev, "%s: rc = %d\n", __func__, rc);
+ return -(int)rc;
+}
+
+static int vcm_vpu_link_fini(struct vcm_dev *pvcm)
+{
+ xlink_close_channel(&pvcm->ipc_xlink_handle, VCM_XLINK_CHANNEL);
+ xlink_disconnect(&pvcm->ipc_xlink_handle);
+ return 0;
+}
+
+/*
+ * Send a vcm_msg by xlink.
+ * Given limited xlink payload size, packing is also performed.
+ */
+static int vcm_send(struct xlink_handle *xlnk_handle, struct vcm_msg *req)
+{
+ struct vpumgr_device *vdev;
+ enum xlink_error rc;
+ u8 *ptr = (u8 *)req;
+ u32 size = 0;
+ u32 len = req->size;
+
+ vdev = container_of(xlnk_handle, struct vpumgr_device, vcm.ipc_xlink_handle);
+ if (len > sizeof(*req))
+ return -EINVAL;
+ do {
+ size = len > VCM_XLINK_CHAN_SIZE ? VCM_XLINK_CHAN_SIZE : len;
+ rc = xlink_write_volatile(xlnk_handle, VCM_XLINK_CHANNEL, ptr, size);
+ if (rc != X_LINK_SUCCESS) {
+ dev_warn(vdev->dev, "%s xlink_write_volatile error %d\n", __func__, rc);
+ return -EINVAL;
+ }
+ ptr += size;
+ len -= size;
+ } while (len > 0);
+
+ return 0;
+}
+
+/*
+ * Receives a vcm_msg by xlink.
+ * Given limited xlink payload size, unpacking is also performed.
+ */
+static int vcm_recv(struct xlink_handle *xlnk_handle, struct vcm_msg *rep)
+{
+ struct vpumgr_device *vdev;
+ enum xlink_error rc;
+ u64 size;
+ u32 total_size = 0;
+ u32 rx_size = 0;
+ u8 *ptr = (u8 *)rep;
+
+ vdev = container_of(xlnk_handle, struct vpumgr_device, vcm.ipc_xlink_handle);
+ do {
+ /* Workaround for a bug in xlink_read_data_to_buffer():
+ * although its last argument is declared as (u32 *), the
+ * function actually writes a 64-bit value to that address.
+ */
+ rc = xlink_read_data_to_buffer(xlnk_handle, VCM_XLINK_CHANNEL, ptr, (u32 *)&size);
+ if (rc != X_LINK_SUCCESS) {
+ dev_warn(vdev->dev, "%s: xlink_read_data_to_buffer failed, rc:%d\n",
+ __func__, rc);
+ return -EPIPE;
+ }
+
+ if (total_size == 0) {
+ if (size < msg_header_size) {
+ dev_warn(vdev->dev, "%s: first packet is too small (%llu)\n",
+ __func__, size);
+ return -EINVAL;
+ }
+
+ total_size = rep->size;
+ if (total_size > sizeof(*rep)) {
+ dev_warn(vdev->dev, "%s: packet size (%u) is too big\n",
+ __func__, total_size);
+ return -EINVAL;
+ }
+ if (total_size < size) {
+ dev_warn(vdev->dev,
+ "%s: claimed size is smaller than first packet (%llu)\n",
+ __func__, size);
+ return -EINVAL;
+ }
+ }
+
+ ptr += size;
+ rx_size += size;
+ } while (rx_size < total_size);
+
+ if (rx_size != total_size) {
+ dev_warn(vdev->dev, "%s: actual size %u exceeds claimed size %u\n",
+ __func__, rx_size, total_size);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static void vcmd_free(struct kref *kref)
+{
+ struct vpu_cmd *vcmd = container_of(kref, struct vpu_cmd, refcount);
+
+ kvfree(vcmd);
+}
+
+static struct vpu_cmd *vcmd_get(struct vcm_dev *pvcm, int msgid)
+{
+ struct vpu_cmd *vcmd;
+
+ mutex_lock(&pvcm->msg_idr_lock);
+ vcmd = idr_find(&pvcm->msg_idr, msgid);
+ if (vcmd)
+ kref_get(&vcmd->refcount);
+ mutex_unlock(&pvcm->msg_idr_lock);
+
+ return vcmd;
+}
+
+static void vcmd_put(struct vpu_cmd *vcmd)
+{
+ kref_put(&vcmd->refcount, vcmd_free);
+}
+
+static int vcmd_alloc_msgid(struct vcm_dev *pvcm, struct vpu_cmd *vcmd)
+{
+ int msgid;
+
+ mutex_lock(&pvcm->msg_idr_lock);
+ msgid = idr_alloc_cyclic(&pvcm->msg_idr, vcmd, 1, 0, GFP_KERNEL);
+ if (msgid >= 0)
+ kref_init(&vcmd->refcount);
+ mutex_unlock(&pvcm->msg_idr_lock);
+ return msgid;
+}
+
+static void vcmd_remove_msgid(struct vcm_dev *pvcm, int msgid)
+{
+ struct vpu_cmd *vcmd;
+
+ mutex_lock(&pvcm->msg_idr_lock);
+ vcmd = idr_remove(&pvcm->msg_idr, msgid);
+ if (vcmd)
+ kref_put(&vcmd->refcount, vcmd_free);
+ mutex_unlock(&pvcm->msg_idr_lock);
+}
+
+static void vcmd_clean_ctx(struct vcm_dev *pvcm, struct vpumgr_ctx *v)
+{
+ struct vpu_cmd *vcmd;
+ int i;
+
+ mutex_lock(&pvcm->msg_idr_lock);
+ idr_for_each_entry(&pvcm->msg_idr, vcmd, i)
+ if (vcmd->vctx == v) {
+ idr_remove(&pvcm->msg_idr, i);
+ kref_put(&vcmd->refcount, vcmd_free);
+ }
+ mutex_unlock(&pvcm->msg_idr_lock);
+}
+
+static int vcmd_count_ctx(struct vcm_dev *pvcm, struct vpumgr_ctx *v)
+{
+ struct vpu_cmd *vcmd;
+ int i;
+ int count = 0;
+
+ mutex_lock(&pvcm->msg_idr_lock);
+ idr_for_each_entry(&pvcm->msg_idr, vcmd, i)
+ if (vcmd->vctx == v)
+ count++;
+ mutex_unlock(&pvcm->msg_idr_lock);
+
+ return count;
+}
+
+static void vpu_cmd_submit(struct work_struct *work)
+{
+ struct vpu_cmd *p = container_of(work, struct vpu_cmd, work);
+
+ p->submit_err = vcm_send(p->handle, &p->msg);
+}
+
+/*
+ * vcm_submit() - Submit a command to VPU
+ * @v: Pointer to local vpu context data structure.
+ * @cmd: Command code
+ * @data_in: Data arguments
+ * @in_len: Length of the data arguments
+ * @submit_id: On return, this will contain a newly allocated
+ * vpu-device-wise unique ID for the submitted command
+ *
+ * Submit a command to corresponding vpu context running on firmware to execute
+ */
+int vcm_submit(struct vpumgr_ctx *v,
+ u32 cmd, const void *data_in, u32 in_len, s32 *submit_id)
+{
+ struct vcm_dev *pvcm = &v->vdev->vcm;
+ int ctx = v->vpu_ctx_id;
+ int rc = 0;
+ struct vpu_cmd *vcmd;
+
+ if (!v->vdev->vcm.enabled)
+ return -ENOENT;
+
+ if (in_len > VCM_PAYLOAD_SIZE)
+ return -EINVAL;
+
+ vcmd = kvmalloc(sizeof(*vcmd), GFP_KERNEL);
+ if (!vcmd)
+ return -ENOMEM;
+
+ vcmd->vctx = v;
+
+ rc = vcmd_alloc_msgid(pvcm, vcmd);
+ if (rc < 0) {
+ kvfree(vcmd);
+ return rc;
+ }
+
+ /* from now on, vcmd is refcount-managed */
+ vcmd->msg.id = rc;
+ *submit_id = vcmd->msg.id;
+
+ if (data_in && in_len > 0) {
+ if (access_ok((void __user *)data_in, in_len)) {
+ rc = copy_from_user(vcmd->msg.payload.data,
+ (const void __user *)data_in, in_len);
+ if (rc)
+ goto remove_msgid;
+ } else {
+ memcpy(vcmd->msg.payload.data, data_in, in_len);
+ }
+ }
+
+ init_completion(&vcmd->complete);
+ vcmd->handle = &pvcm->ipc_xlink_handle;
+ vcmd->msg.size = msg_header_size + in_len;
+ vcmd->msg.ctx = ctx;
+ vcmd->msg.cmd = cmd;
+ vcmd->msg.rc = 0;
+ INIT_WORK(&vcmd->work, vpu_cmd_submit);
+
+ if (!queue_work(pvcm->wq, &vcmd->work)) {
+ rc = -EEXIST;
+ goto remove_msgid;
+ }
+
+ return 0;
+
+remove_msgid:
+ vcmd_remove_msgid(pvcm, vcmd->msg.id);
+ return rc;
+}
+
+/**
+ * vcm_wait() - Wait for a submitted command to finish
+ * @v: Pointer to local vpu context data structure.
+ * @submit_id: Unique ID of the submitted command to wait for
+ * @vpu_rc: Return code of the submitted command
+ * @data_out: Return data payload of the submitted command
+ * @p_out_len: Length of the returned payload
+ * @timeout_ms: Time in milliseconds before the wait expires
+ *
+ * Wait for a submitted command to finish and retrieve its return
+ * code and output payload, subject to the timeout.
+ */
+int vcm_wait(struct vpumgr_ctx *v, s32 submit_id,
+ s32 *vpu_rc, void *data_out, u32 *p_out_len, u32 timeout_ms)
+{
+ struct vcm_dev *pvcm = &v->vdev->vcm;
+ struct device *dev = v->vdev->sdev;
+ unsigned long timeout = msecs_to_jiffies(timeout_ms);
+ struct vpu_cmd *vcmd;
+ int rc, len;
+
+ if (!v->vdev->vcm.enabled)
+ return -ENOENT;
+
+ vcmd = vcmd_get(pvcm, submit_id);
+ if (!vcmd) {
+ dev_err(dev, "%s:cannot find submit_id %d\n", __func__, submit_id);
+ return -EINVAL;
+ }
+
+ if (v != vcmd->vctx) {
+ dev_err(dev, "%s:submit %d does not belong to vpu context %d\n",
+ __func__, submit_id, v->vpu_ctx_id);
+ vcmd_put(vcmd);
+ return -EINVAL;
+ }
+
+ /* wait for submission work to be done */
+ flush_work(&vcmd->work);
+ rc = vcmd->submit_err;
+ if (rc)
+ goto exit;
+
+ /* wait for reply */
+ rc = wait_for_completion_interruptible_timeout(&vcmd->complete, timeout);
+ if (rc < 0) {
+ goto exit;
+ } else if (rc == 0) {
+ rc = -ETIMEDOUT;
+ goto exit;
+ } else {
+ /* wait_for_completion_interruptible_timeout return positive
+ * rc on success, but we return zero as success.
+ */
+ rc = 0;
+ }
+
+ if (vpu_rc)
+ *vpu_rc = vcmd->msg.rc;
+
+ if (data_out && p_out_len) {
+ /* truncate payload to fit output buffer size provided */
+ len = vcmd->msg.size - msg_header_size;
+ if (len > (*p_out_len)) {
+ dev_err(dev, "%s: output is truncated from %d to %d to fit buffer size.\n",
+ __func__, len, (*p_out_len));
+ len = (*p_out_len);
+ }
+
+ if (len > 0) {
+ if (access_ok((void __user *)data_out, len)) {
+ rc = copy_to_user((void __user *)data_out,
+ vcmd->msg.payload.data, len);
+ if (rc)
+ goto exit;
+ } else {
+ memcpy(data_out, vcmd->msg.payload.data, len);
+ }
+ }
+
+ /* tell the caller the exact length received, even if
+ * we have truncated due to output buffer size limitation
+ */
+ *p_out_len = len;
+ }
+exit:
+ v->total_vcmds++;
+ vcmd_put(vcmd);
+ vcmd_remove_msgid(pvcm, submit_id);
+ return rc;
+}
+
+static int vcm_call(struct vpumgr_ctx *v,
+ s32 cmd, const void *data_in, u32 in_len,
+ s32 *res_rc, void *data_out, u32 *p_out_len)
+{
+ int submit_id, rc;
+
+ if (!v->vdev->vcm.enabled)
+ return -ENOENT;
+
+ rc = vcm_submit(v, cmd, data_in, in_len, &submit_id);
+ if (rc)
+ return rc;
+
+ return vcm_wait(v, submit_id, res_rc, data_out, p_out_len, 1000);
+}
+
+int vcm_open(struct vpumgr_ctx *v, struct vpumgr_device *vdev)
+{
+ struct device *dev = vdev->sdev;
+ int rep_rc, rc;
+
+ v->vdev = vdev;
+
+ if (!vdev->vcm.enabled)
+ return 0;
+
+ rc = vcm_call(v, VCTX_MSG_CREATE, NULL, 0, &rep_rc, NULL, NULL);
+
+ if (rc != 0 || rep_rc < 0)
+ dev_err(dev, "%s: Vpu context create with rc:%d and vpu reply rc:%d\n",
+ __func__, rc, rep_rc);
+ if (rc)
+ return rc;
+ if (rep_rc < 0)
+ return -ENXIO;
+
+ v->vpu_ctx_id = rep_rc;
+ v->total_vcmds = 0;
+ return 0;
+}
+
+int vcm_close(struct vpumgr_ctx *v)
+{
+ struct device *dev = v->vdev->sdev;
+ int rep_rc, rc;
+
+ if (!v->vdev->vcm.enabled)
+ return 0;
+
+ rc = vcm_call(v, VCTX_MSG_DESTROY, NULL, 0, &rep_rc, NULL, NULL);
+ dev_dbg(dev, "vpu context %d is destroyed with rc:%d and vpu reply rc:%d\n",
+ v->vpu_ctx_id, rc, rep_rc);
+
+ /* remove submit belongs to this context */
+ vcmd_clean_ctx(&v->vdev->vcm, v);
+ return 0;
+}
+
+int vcm_debugfs_stats_show(struct seq_file *file, struct vpumgr_ctx *v)
+{
+ if (!v->vdev->vcm.enabled)
+ return 0;
+ seq_printf(file, "\tvpu context: #%d\n", v->vpu_ctx_id);
+ seq_printf(file, "\t\tNum of completed cmds: %llu\n", v->total_vcmds);
+ seq_printf(file, "\t\tNum of on-going cmds: %d\n", vcmd_count_ctx(&v->vdev->vcm, v));
+ return 0;
+}
+
+static int vcm_rxthread(void *param)
+{
+ struct vpumgr_device *vdev = param;
+ struct device *dev = vdev->sdev;
+ struct vcm_dev *pvcm = &vdev->vcm;
+ struct vcm_msg *msg = &pvcm->rxmsg;
+ struct vpu_cmd *vcmd;
+ int rc = 0;
+
+ while (!kthread_should_stop()) {
+ rc = vcm_recv(&pvcm->ipc_xlink_handle, msg);
+ if (rc == -EPIPE)
+ break;
+ if (rc)
+ continue;
+
+ switch (msg->cmd) {
+ case VCTX_MSG_REPLY:
+ /* find local data associated with that msg id */
+ vcmd = vcmd_get(pvcm, msg->id);
+ if (!vcmd)
+ break;
+
+ if (msg->ctx != vcmd->msg.ctx)
+ dev_warn(dev, "reply msg #%u's ctx (%u) mismatches vcmd ctx (%u)\n",
+ msg->id, msg->ctx, vcmd->msg.ctx);
+
+ vcmd->submit_err = 0;
+
+ /* submit corresponding to msg->id is done, do post process */
+ memcpy(&vcmd->msg, msg, msg->size);
+ complete(&vcmd->complete);
+
+ vcmd_put(vcmd);
+ break;
+ default:
+ break;
+ }
+ }
+ return rc;
+}
+
+int vcm_init(struct vpumgr_device *vdev, u32 sw_dev_id)
+{
+ struct vcm_dev *pvcm = &vdev->vcm;
+ struct task_struct *rxthread;
+ int rc = 0;
+
+ if (sw_dev_id == XLINK_INVALID_SW_DEVID) {
+ dev_warn(vdev->dev, "%s: vcm is not enabled!\n",
+ __func__);
+ rc = 0;
+ goto exit;
+ }
+
+ pvcm->sw_dev_id = sw_dev_id;
+ rc = vcm_vpu_link_init(pvcm);
+ if (rc)
+ goto exit;
+
+ pvcm->wq = alloc_ordered_workqueue("vcm workqueue", WQ_MEM_RECLAIM | WQ_HIGHPRI);
+ if (!pvcm->wq) {
+ rc = -ENOMEM;
+ goto vpu_link_fini;
+ }
+
+ mutex_init(&pvcm->msg_idr_lock);
+ idr_init(&pvcm->msg_idr);
+
+ rxthread = kthread_run(vcm_rxthread,
+ (void *)vdev, "vcmrx");
+ if (IS_ERR(rxthread)) {
+ rc = PTR_ERR(rxthread);
+ goto destroy_idr;
+ }
+
+ pvcm->rxthread = get_task_struct(rxthread);
+
+ pvcm->enabled = true;
+ return 0;
+
+destroy_idr:
+ idr_destroy(&pvcm->msg_idr);
+ destroy_workqueue(pvcm->wq);
+vpu_link_fini:
+ vcm_vpu_link_fini(pvcm);
+exit:
+ pvcm->enabled = false;
+ return rc;
+}
+
+int vcm_fini(struct vpumgr_device *vdev)
+{
+ struct vcm_dev *pvcm = &vdev->vcm;
+
+ if (!pvcm->enabled)
+ return 0;
+
+ vcm_vpu_link_fini(pvcm);
+
+ kthread_stop(pvcm->rxthread);
+ put_task_struct(pvcm->rxthread);
+
+ idr_destroy(&pvcm->msg_idr);
+ destroy_workqueue(pvcm->wq);
+ mutex_destroy(&pvcm->msg_idr_lock);
+ return 0;
+}
diff --git a/drivers/misc/vpumgr/vpu_vcm.h b/drivers/misc/vpumgr/vpu_vcm.h
new file mode 100644
index 000000000000..9e89c281092b
--- /dev/null
+++ b/drivers/misc/vpumgr/vpu_vcm.h
@@ -0,0 +1,84 @@
+/* SPDX-License-Identifier: GPL-2.0-only
+ * RESMGR driver - VPU Context Manager
+ * Copyright (C) 2020-2021 Intel Corporation
+ */
+#ifndef __VPU_VCM_H
+#define __VPU_VCM_H
+
+#include <linux/xlink.h>
+
+struct vpumgr_device;
+
+/* Command code for message to/from VPU context manager on firmware */
+#define VCTX_MSG_CREATE 1
+#define VCTX_MSG_DESTROY 2
+#define VCTX_MSG_REPLY 3
+
+/* Maximal payload size supported for request or reply */
+#define VCM_PAYLOAD_SIZE (8192 - 5 * sizeof(u32))
+
+/**
+ * struct vcm_msg: VPU context manager message
+ * @size: total message size; this field must be first in the struct
+ * @ctx: context this message is sent to or from
+ * @cmd: the type of this message
+ * @id: index to identify this message in context ctx
+ * @rc: return code or misc args
+ * @payload: the payload of message
+ */
+struct vcm_msg {
+ u32 size;
+ u32 ctx;
+ u32 cmd;
+ u32 id;
+ s32 rc;
+ union {
+ char data[VCM_PAYLOAD_SIZE];
+ } payload;
+} __packed;
+
+struct vcm_dev {
+ bool enabled;
+ /*
+ * XLINK IPC related.
+ */
+ struct xlink_handle ipc_xlink_handle;
+ s32 sw_dev_id;
+
+ /*
+ * Dispatch work queue.
+ * Xlink transactions are handled in the work queue to
+ * decouple xlink API calls from user-space system calls,
+ * because a SIGINT would otherwise cause xlink_read* errors.
+ */
+ struct workqueue_struct *wq;
+
+ /* kthread for rx */
+ struct task_struct *rxthread;
+
+ /* message buffer for receiving thread */
+ struct vcm_msg rxmsg;
+
+ struct mutex msg_idr_lock; /* protects msg_idr */
+ struct idr msg_idr;
+};
+
+struct vpumgr_ctx {
+ struct vpumgr_device *vdev;
+ u32 vpu_ctx_id;
+ u64 total_vcmds;
+};
+
+int vcm_init(struct vpumgr_device *vdev, u32 sw_dev_id);
+int vcm_fini(struct vpumgr_device *vdev);
+
+int vcm_open(struct vpumgr_ctx *v, struct vpumgr_device *vdev);
+int vcm_close(struct vpumgr_ctx *v);
+int vcm_debugfs_stats_show(struct seq_file *file, struct vpumgr_ctx *v);
+
+int vcm_submit(struct vpumgr_ctx *v,
+ u32 cmd, const void *data_in, u32 in_len, s32 *submit_id);
+int vcm_wait(struct vpumgr_ctx *v, s32 submit_id,
+ s32 *vpu_rc, void *data_out, u32 *p_out_len, u32 timeout_ms);
+
+#endif /* __VPU_VCM_H */
diff --git a/include/uapi/misc/vpumgr.h b/include/uapi/misc/vpumgr.h
new file mode 100644
index 000000000000..910b26e60097
--- /dev/null
+++ b/include/uapi/misc/vpumgr.h
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note
+ * VPU manager Linux Kernel API
+ * Copyright (C) 2020-2021 Intel Corporation
+ *
+ */
+#ifndef __VPUMGR_UAPI_H
+#define __VPUMGR_UAPI_H
+
+#include <linux/types.h>
+
+/* ioctl numbers */
+#define VPUMGR_MAGIC 'V'
+/* VPU manager IOCTLs */
+#define VPUMGR_IOCTL_DMABUF_ALLOC _IOWR(VPUMGR_MAGIC, 2, struct vpumgr_args_alloc)
+#define VPUMGR_IOCTL_DMABUF_IMPORT _IOWR(VPUMGR_MAGIC, 3, struct vpumgr_args_import)
+#define VPUMGR_IOCTL_DMABUF_UNIMPORT _IOWR(VPUMGR_MAGIC, 4, __s32)
+#define VPUMGR_IOCTL_DMABUF_PTR2VPU _IOWR(VPUMGR_MAGIC, 5, __u64)
+#define VPUMGR_IOCTL_VCM_SUBMIT _IOWR(VPUMGR_MAGIC, 6, struct vpumgr_vcm_submit)
+#define VPUMGR_IOCTL_VCM_WAIT _IOWR(VPUMGR_MAGIC, 7, struct vpumgr_vcm_wait)
+#define VPUMGR_IOCTL_END _IO(VPUMGR_MAGIC, 8)
+
+struct vpumgr_args_alloc {
+ __s32 fd; /* out: DMABuf fd */
+ __s32 reserved[2]; /* in: reserved */
+ __u64 size; /* in: required buffer size */
+};
+
+/* vpu_access flags */
+enum vpu_access_type {
+ VPU_ACCESS_DEFAULT = 0,
+ VPU_ACCESS_READ = 1,
+ VPU_ACCESS_WRITE = 2,
+ VPU_ACCESS_RW = 3
+};
+
+struct vpumgr_args_import {
+ __s32 fd; /* in: input DMABuf fd */
+ __s32 vpu_access; /* in: how vpu is going to access the buffer */
+ __u64 vpu_addr; /* out: vpu dma address of the DMABuf */
+ __u64 size; /* out: the size of the DMABuf */
+};
+
+/* Command codes reserved for the kernel mode driver.
+ * User space should not use a command code smaller
+ * than or equal to this macro.
+ */
+#define VCTX_KMD_RESERVED_CMD_LAST 31
+
+struct vpumgr_vcm_submit {
+ __u32 cmd; /* in: command code */
+ __u64 in; /* in: input parameter buffer address */
+ __u32 in_len; /* in: input parameter buffer length */
+ __s32 submit_id; /* out: submit id */
+};
+
+struct vpumgr_vcm_wait {
+ __s32 submit_id; /* in: submit id */
+ __s32 vpu_rc; /* out: vpu return code */
+ __u64 out; /* in: address of the buffer for receiving result */
+ __u32 out_len; /* in: length of the result buffer */
+ __u32 timeout_ms; /* in: timeout in milliseconds */
+};
+
+#endif /* __VPUMGR_UAPI_H */
--
2.17.1

2021-02-12 22:36:23

by mark gross

[permalink] [raw]
Subject: [PATCH v6 30/34] misc:intel_tsens: Intel Keem Bay tsens driver.

From: "C, Udhayakumar" <[email protected]>

Add the keembay_thermal driver to expose on-chip temperature
sensors and register callback functions for periodic sampling.

This driver does the following:
* Reads temperature data from on-chip sensors present in the Keem Bay
platform.
* Registers a callback function with the intel tsens driver for
periodic temperature sampling.
* Decodes the raw register values to Celsius.

Acked-by: mark gross <[email protected]>
Signed-off-by: C Udhayakumar <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
---
drivers/misc/intel_tsens/Kconfig | 12 +
drivers/misc/intel_tsens/Makefile | 1 +
drivers/misc/intel_tsens/keembay_thermal.c | 169 ++++++++++
drivers/misc/intel_tsens/keembay_tsens.h | 366 +++++++++++++++++++++
4 files changed, 548 insertions(+)
create mode 100644 drivers/misc/intel_tsens/keembay_thermal.c
create mode 100644 drivers/misc/intel_tsens/keembay_tsens.h

diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
index be8d27e81864..5cfe6b4004e5 100644
--- a/drivers/misc/intel_tsens/Kconfig
+++ b/drivers/misc/intel_tsens/Kconfig
@@ -28,6 +28,18 @@ config INTEL_TSENS_I2C_SLAVE
Say Y if using a processor that includes the Intel VPU such as
Keem Bay. If unsure, say N.

+config KEEMBAY_THERMAL
+ tristate "Temperature sensor driver for intel Keem Bay"
+ depends on INTEL_TSENS_LOCAL_HOST && ARCH_KEEMBAY
+ help
+ Enable this option if you want to have support for Keem Bay
+ thermal management sensors.
+
+ This driver is used for reading onchip temperature sensor
+ values from Keem Bay SoC.
+ Say Y if using a processor that includes the Intel VPU such as
+ Keem Bay. If unsure, say N.
+
config INTEL_TSENS_IA_HOST
tristate "Temperature sensor driver for intel tsens remote host"
depends on I2C && THERMAL
diff --git a/drivers/misc/intel_tsens/Makefile b/drivers/misc/intel_tsens/Makefile
index f6f41bbca80c..00f63c2d5b2f 100644
--- a/drivers/misc/intel_tsens/Makefile
+++ b/drivers/misc/intel_tsens/Makefile
@@ -7,3 +7,4 @@
obj-$(CONFIG_INTEL_TSENS_LOCAL_HOST) += intel_tsens_thermal.o
obj-$(CONFIG_INTEL_TSENS_I2C_SLAVE) += intel_tsens_i2c.o
obj-$(CONFIG_INTEL_TSENS_IA_HOST) += intel_tsens_host.o
+obj-$(CONFIG_KEEMBAY_THERMAL) += keembay_thermal.o
diff --git a/drivers/misc/intel_tsens/keembay_thermal.c b/drivers/misc/intel_tsens/keembay_thermal.c
new file mode 100644
index 000000000000..d6c8fa8fc3aa
--- /dev/null
+++ b/drivers/misc/intel_tsens/keembay_thermal.c
@@ -0,0 +1,169 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * Intel Keem Bay thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/thermal.h>
+#include "intel_tsens_thermal.h"
+#include "keembay_tsens.h"
+
+struct keembay_thermal_priv {
+ const char *name;
+ void __iomem *base_addr;
+ /* sensor lock */
+ spinlock_t lock;
+ int current_temp[KEEMBAY_SENSOR_MAX];
+ struct intel_tsens_plat_data *plat_data;
+};
+
+static void kmb_sensor_read_temp(void __iomem *regs_val,
+ int offset,
+ int sample_valid_mask,
+ int sample_value,
+ int bit_shift,
+ int *temp)
+{
+ int reg_val, kmb_raw_index;
+
+ /* clear the bit of TSENS_EN and re-enable again */
+ iowrite32(0x00, regs_val + AON_TSENS_CFG);
+ iowrite32(CFG_MASK_MANUAL, regs_val + AON_TSENS_CFG);
+ reg_val = ioread32(regs_val + offset);
+ if (reg_val & sample_valid_mask) {
+ reg_val = (reg_val >> bit_shift) & sample_value;
+ kmb_raw_index = reg_val - KEEMBAY_SENSOR_BASE_TEMP;
+ if (kmb_raw_index < 0)
+ reg_val = raw_kmb[0];
+ else if (kmb_raw_index > (raw_kmb_size - 1))
+ reg_val = raw_kmb[raw_kmb_size - 1];
+ else
+ reg_val = raw_kmb[kmb_raw_index];
+ *temp = reg_val;
+ } else {
+ *temp = -255;
+ }
+}
+
+/* Serializes sensor reads by taking priv->lock internally. */
+static int keembay_get_temp(struct platform_device *pdev, int type, int *temp)
+{
+ struct keembay_thermal_priv *priv = platform_get_drvdata(pdev);
+
+ spin_lock(&priv->lock);
+ switch (type) {
+ case KEEMBAY_SENSOR_MSS:
+ kmb_sensor_read_temp(priv->base_addr,
+ AON_TSENS_DATA0,
+ MSS_T_SAMPLE_VALID,
+ MSS_T_SAMPLE,
+ MSS_BIT_SHIFT,
+ temp);
+ priv->current_temp[KEEMBAY_SENSOR_MSS] = *temp;
+ break;
+
+ case KEEMBAY_SENSOR_CSS:
+ kmb_sensor_read_temp(priv->base_addr,
+ AON_TSENS_DATA0,
+ CSS_T_SAMPLE_VALID,
+ CSS_T_SAMPLE,
+ CSS_BIT_SHIFT,
+ temp);
+ priv->current_temp[KEEMBAY_SENSOR_CSS] = *temp;
+ break;
+
+ case KEEMBAY_SENSOR_NCE:
+ {
+ int nce0, nce1;
+
+ kmb_sensor_read_temp(priv->base_addr,
+ AON_TSENS_DATA1,
+ NCE0_T_SAMPLE_VALID,
+ NCE0_T_SAMPLE,
+ NCE0_BIT_SHIFT,
+ &nce0);
+ kmb_sensor_read_temp(priv->base_addr,
+ AON_TSENS_DATA1,
+ NCE1_T_SAMPLE_VALID,
+ NCE1_T_SAMPLE,
+ NCE1_BIT_SHIFT,
+ &nce1);
+ *temp = max(nce0, nce1);
+ priv->current_temp[KEEMBAY_SENSOR_NCE] = *temp;
+ }
+ break;
+
+ case KEEMBAY_SENSOR_SOC:
+ {
+ int i;
+
+ *temp = 0;
+ for (i = 0; i < KEEMBAY_SENSOR_MAX; i++)
+ *temp = max(*temp, priv->current_temp[i]);
+ }
+ break;
+
+ default:
+ break;
+ }
+ spin_unlock(&priv->lock);
+
+ return 0;
+}
+
+static int keembay_thermal_probe(struct platform_device *pdev)
+{
+ struct intel_tsens_plat_data *plat_data;
+ struct keembay_thermal_priv *priv;
+
+ plat_data = pdev->dev.platform_data;
+ if (!plat_data) {
+ dev_err(&pdev->dev, "Platform data not found\n");
+ return -EINVAL;
+ }
+ if (!plat_data->base_addr)
+ return -EINVAL;
+
+ priv = devm_kzalloc(&pdev->dev,
+ sizeof(struct keembay_thermal_priv),
+ GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+ priv->name = plat_data->name;
+ priv->base_addr = plat_data->base_addr;
+ priv->plat_data = plat_data;
+ plat_data->get_temp = keembay_get_temp;
+ spin_lock_init(&priv->lock);
+ platform_set_drvdata(pdev, priv);
+ dev_info(&pdev->dev, "Thermal driver loaded for %s\n",
+ plat_data->name);
+ return 0;
+}
+
+static struct platform_driver keembay_thermal_driver = {
+ .probe = keembay_thermal_probe,
+ .driver = {
+ .name = "intel,keembay_thermal",
+ },
+};
+
+module_platform_driver(keembay_thermal_driver);
+
+MODULE_DESCRIPTION("Keem Bay Thermal Driver");
+MODULE_AUTHOR("Sandeep Singh <[email protected]>");
+MODULE_AUTHOR("Raja Subramanian, Lakshmi Bai <[email protected]>");
+MODULE_AUTHOR("Udhayakumar C <[email protected]>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/intel_tsens/keembay_tsens.h b/drivers/misc/intel_tsens/keembay_tsens.h
new file mode 100644
index 000000000000..e9b927a31e22
--- /dev/null
+++ b/drivers/misc/intel_tsens/keembay_tsens.h
@@ -0,0 +1,366 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *
+ * Intel Keem Bay thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#ifndef _LINUX_KEEMBAY_TSENS_H
+#define _LINUX_KEEMBAY_TSENS_H
+
+#include <linux/thermal.h>
+
+/* Register values for keembay temperature (PVT Sensor) */
+#define AON_TSENS_TRIM0_CFG 0x0030
+#define AON_TSENS_TRIM1_CFG 0x0034
+#define AON_TSENS_CFG 0x0038
+#define AON_TSENS_INT0 0x203c
+#define AON_TSENS_INT1 0x2040
+#define AON_TSENS_IRQ_CLEAR 0x0044
+#define AON_TSENS_DATA0 0x0048
+#define MSS_T_SAMPLE_VALID 0x80000000
+#define MSS_T_SAMPLE 0x3ff
+#define CSS_T_SAMPLE_VALID 0x8000
+#define CSS_T_SAMPLE 0x3ff
+#define NCE1_T_SAMPLE_VALID 0x80000000
+#define NCE1_T_SAMPLE 0x3ff
+#define NCE0_T_SAMPLE_VALID 0x8000
+#define NCE0_T_SAMPLE 0x3ff
+#define AON_TSENS_DATA1 0x004c
+#define AON_INTERFACE 0x20260000
+/* Bit shift for registers */
+#define MSS_BIT_SHIFT 16
+#define CSS_BIT_SHIFT 0
+#define NCE0_BIT_SHIFT 0
+#define NCE1_BIT_SHIFT 16
+/* mask values for config register */
+#define CFG_MASK_AUTO 0x80ff /* auto configuration */
+#define CFG_IRQ_MASK 0x8fff
+#define CFG_MASK_MANUAL 0x000f /* TSENS_EN (manual config) */
+
+/**
+ * KEEMBAY_SENSOR_MSS - Media subsystem junction temperature.
+ * KEEMBAY_SENSOR_CSS - Compute subsystem junction temperature.
+ * KEEMBAY_SENSOR_NCE - Neural computing engine junction temperature.
+ * Two NCE sensors are available on the Keem Bay
+ * platform; the maximum of the two is returned
+ * as the NCE temperature.
+ * KEEMBAY_SENSOR_SOC - SoC temperature.
+ * The maximum of MSS, CSS and NCE is returned
+ * as the SoC temperature.
+ */
+enum keembay_thermal_sensor_en {
+ KEEMBAY_SENSOR_MSS,
+ KEEMBAY_SENSOR_CSS,
+ KEEMBAY_SENSOR_NCE,
+ KEEMBAY_SENSOR_SOC,
+ KEEMBAY_SENSOR_MAX
+};
+
+#define KEEMBAY_SENSOR_BASE_TEMP 27
+
+static const int raw_kmb[] = {
+-39956, -39637, -39319, -39001, -38684,

+-38367, -38050, -37734, -37418, -37103,

+-36787, -36472, -36158, -35844, -35530,

+-35216, -34903, -34590, -34278, -33966,

+-33654, -33343, -33032, -32721, -32411,

+-32101, -31791, -31482, -31173, -30864,

+-30556, -30248, -29940, -29633, -29326,

+-29020, -28713, -28407, -28102, -27797,

+-27492, -27187, -26883, -26579, -26276,

+-25973, -25670, -25367, -25065, -24763,

+-24462, -24160, -23860, -23559, -23259,

+-22959, -22660, -22360, -22062, -21763,

+-21465, -21167, -20869, -20572, -20275,

+-19979, -19683, -19387, -19091, -18796,

+-18501, -18206, -17912, -17618, -17325,
+
+-17031, -16738, -16446, -16153, -15861,
+
+-15570, -15278, -14987, -14697, -14406,
+
+-14116, -13826, -13537, -13248, -12959,
+
+-12670, -12382, -12094, -11807, -11520,
+
+-11233, -10946, -10660, -10374, -10088,
+
+-9803, -9518, -9233, -8949, -8665,
+
+-8381, -8097, -7814, -7531, -7249,
+
+-6967, -6685, -6403, -6122, -5841,
+
+-5560, -5279, -4999, -4720, -4440,
+
+-4161, -3882, -3603, -3325, -3047,
+
+-2770, -2492, -2215, -1938, -1662,
+
+-1386, -1110, -834, -559, -284,
+
+-9, 265, 539, 813, 1086,
+
+1360, 1633, 1905, 2177, 2449,
+
+2721, 2993, 3264, 3535, 3805,
+
+4075, 4345, 4615, 4884, 5153,
+
+5422, 5691, 5959, 6227, 6495,
+
+6762, 7029, 7296, 7562, 7829,
+
+8095, 8360, 8626, 8891, 9155,
+
+9420, 9684, 9948, 10212, 10475,
+
+10738, 11001, 11264, 11526, 11788,
+
+12049, 12311, 12572, 12833, 13093,
+
+13354, 13614, 13874, 14133, 14392,
+
+14651, 14910, 15168, 15426, 15684,
+
+15942, 16199, 16456, 16713, 16969,
+
+17225, 17481, 17737, 17992, 18247,
+
+18502, 18757, 19011, 19265, 19519,
+
+19772, 20025, 20278, 20531, 20784,
+
+21036, 21288, 21539, 21791, 22042,
+
+22292, 22543, 22793, 23043, 23293,
+
+23543, 23792, 24041, 24290, 24538,
+
+24786, 25034, 25282, 25529, 25776,
+
+26023, 26270, 26516, 26763, 27008,
+
+27254, 27499, 27745, 27989, 28234,
+
+28478, 28722, 28966, 29210, 29453,
+
+29696, 29939, 30182, 30424, 30666,
+
+30908, 31149, 31391, 31632, 31873,
+
+32113, 32353, 32593, 32833, 33073,
+
+33312, 33551, 33790, 34029, 34267,
+
+34505, 34743, 34980, 35218, 35455,
+
+35692, 35928, 36165, 36401, 36637,
+
+36872, 37108, 37343, 37578, 37813,
+
+38047, 38281, 38515, 38749, 38982,
+
+39216, 39448, 39681, 39914, 40146,
+
+40378, 40610, 40841, 41073, 41304,
+
+41535, 41765, 41996, 42226, 42456,
+
+42686, 42915, 43144, 43373, 43602,
+
+43830, 44059, 44287, 44515, 44742,
+
+44970, 45197, 45424, 45650, 45877,
+
+46103, 46329, 46555, 46780, 47006,
+
+47231, 47456, 47680, 47905, 48129,
+
+48353, 48576, 48800, 49023, 49246,
+
+49469, 49692, 49914, 50136, 50358,
+
+50580, 50801, 51023, 51244, 51464,
+
+51685, 51905, 52126, 52346, 52565,
+
+52785, 53004, 53223, 53442, 53661,
+
+53879, 54097, 54315, 54533, 54750,
+
+54968, 55185, 55402, 55618, 55835,
+
+56051, 56267, 56483, 56699, 56914,
+
+57129, 57344, 57559, 57773, 57988,
+
+58202, 58416, 58630, 58843, 59056,
+
+59269, 59482, 59695, 59907, 60120,
+
+60332, 60543, 60755, 60966, 61178,
+
+61389, 61599, 61810, 62020, 62231,
+
+62440, 62650, 62860, 63069, 63278,
+
+63487, 63696, 63904, 64113, 64321,
+
+64529, 64737, 64944, 65151, 65358,
+
+65565, 65772, 65979, 66185, 66391,
+
+66597, 66803, 67008, 67213, 67419,
+
+67624, 67828, 68033, 68237, 68441,
+
+68645, 68849, 69052, 69256, 69459,
+
+69662, 69865, 70067, 70270, 70472,
+
+70674, 70876, 71077, 71279, 71480,
+
+71681, 71882, 72082, 72283, 72483,
+
+72683, 72883, 73083, 73282, 73481,
+
+73680, 73879, 74078, 74277, 74475,
+
+74673, 74871, 75069, 75266, 75464,
+
+75661, 75858, 76055, 76252, 76448,
+
+76644, 76841, 77037, 77232, 77428,
+
+77623, 77818, 78013, 78208, 78403,
+
+78597, 78792, 78986, 79180, 79373,
+
+79567, 79760, 79953, 80146, 80339,
+
+80532, 80724, 80917, 81109, 81301,
+
+81492, 81684, 81875, 82066, 82258,
+
+82448, 82639, 82830, 83020, 83210,
+
+83400, 83590, 83779, 83969, 84158,
+
+84347, 84536, 84725, 84913, 85102,
+
+85290, 85478, 85666, 85854, 86041,
+
+86228, 86416, 86603, 86789, 86976,
+
+87163, 87349, 87535, 87721, 87907,
+
+88092, 88278, 88463, 88648, 88833,
+
+89018, 89203, 89387, 89571, 89755,
+
+89939, 90123, 90307, 90490, 90674,
+
+90857, 91040, 91222, 91405, 91587,
+
+91770, 91952, 92134, 92315, 92497,
+
+92679, 92860, 93041, 93222, 93403,
+
+93583, 93764, 93944, 94124, 94304,
+
+94484, 94664, 94843, 95023, 95202,
+
+95381, 95560, 95738, 95917, 96095,
+
+96273, 96451, 96629, 96807, 96985,
+
+97162, 97339, 97516, 97693, 97870,
+
+98047, 98223, 98399, 98576, 98752,
+
+98927, 99103, 99279, 99454, 99629,
+
+99804, 99979, 100154, 100328, 100503,
+
+100677, 100851, 101025, 101199, 101373,
+
+101546, 101720, 101893, 102066, 102239,
+
+102411, 102584, 102756, 102929, 103101,
+
+103273, 103445, 103616, 103788, 103959,
+
+104130, 104302, 104472, 104643, 104814,
+
+104984, 105155, 105325, 105495, 105665,
+
+105835, 106004, 106174, 106343, 106512,
+
+106681, 106850, 107019, 107187, 107355,
+
+107524, 107692, 107860, 108028, 108195,
+
+108363, 108530, 108697, 108865, 109031,
+
+109198, 109365, 109531, 109698, 109864,
+
+110030, 110196, 110362, 110528, 110693,
+
+110858, 111024, 111189, 111354, 111518,
+
+111683, 111848, 112012, 112176, 112340,
+
+112504, 112668, 112832, 112995, 113159,
+
+113322, 113485, 113648, 113811, 113973,
+
+114136, 114298, 114461, 114623, 114785,
+
+114947, 115108, 115270, 115431, 115593,
+
+115754, 115915, 116076, 116236, 116397,
+
+116558, 116718, 116878, 117038, 117198,
+
+117358, 117518, 117677, 117836, 117996,
+
+118155, 118314, 118473, 118631, 118790,
+
+118948, 119107, 119265, 119423, 119581,
+
+119739, 119896, 120054, 120211, 120368,
+
+120525, 120682, 120839, 120996, 121153,
+
+121309, 121465, 121622, 121778, 121934,
+
+122089, 122245, 122400, 122556, 122711,
+
+122866, 123021, 123176, 123331, 123486,
+
+123640, 123794, 123949, 124103, 124257,
+
+124411, 124564, 124718, 124871, 125025,
+};
+
+static const int raw_kmb_size = ARRAY_SIZE(raw_kmb);
+
+#endif /* _LINUX_KEEMBAY_TSENS_H */
--
2.17.1

2021-02-12 22:36:25

by mark gross

[permalink] [raw]
Subject: [PATCH v6 03/34] mailbox: vpu-ipc-mailbox: Add support for Intel VPU IPC mailbox

From: Daniele Alessandrelli <[email protected]>

Add mailbox controller enabling inter-processor communication (IPC)
between the CPU (aka, the Application Processor - AP) and the VPU on
Intel Movidius SoCs like Keem Bay.

The controller uses HW FIFOs to enable such communication. Specifically,
there are two FIFOs, one for the CPU and one for VPU. Each FIFO can hold
128 entries (messages) of 32-bit each (but only 26 bits are actually
usable, since the 6 least-significant bits are reserved for the overflow
detection mechanism, more info below).

When the Linux kernel on the AP needs to send messages to the VPU
firmware, it writes them to the VPU FIFO; similarly, when the VPU
firmware needs to send messages to the AP, it writes them to the CPU
FIFO.

It's important to note that the AP is not the only one that can write to
the VPU FIFO: other components within the SoC can write to it too. Each
of these components (including the AP) has a unique 6-bit processor ID
associated to it. The VPU HW FIFO expects the last 6 bits of each 32-bit
FIFO entry to contain the processor ID of the sender.

Therefore, sending a message from the AP to the VPU works as follows:

1. The message must be a 32-bit value with the last 6-bit set to 0 (in
practice, the message is meant to be a 32-bit address value, aligned
to 64 bytes; but this is not really relevant for this mailbox
controller).

2. The AP adds its processor ID to the 32-bit message being sent:
M = m | AP_ProcID

3. The sender writes the message (M) to the TIM_IPC_FIFO register of the
VPU FIFO.

4. The HW atomically checks if the FIFO is full; if not, it writes the
entry to the actual FIFO. If the FIFO is full, the HW reads the ProcID
from M and then sets the corresponding bit of TIM_IPC_FIFO_OF_FLAG0,
to signal that the write failed because the FIFO was full.

5. The AP reads the TIM_IPC_FIFO_OF_FLAG0 register and checks if the
bit corresponding to its ProcID has been set (in order to know if the
TX succeeded or failed); if the bit is set, the AP clears it.

Note: as briefly mentioned above, the 32-bit value is meant to be a 32-
bit physical address (64-byte aligned). This address points to a
predefined struct (i.e., the IPC packet) in shared memory. However,
since this struct is not HW dependent (it's just the struct the VPU
firmware expects and in theory it could change if a different VPU FW is
used), it's not defined here, but in the Keem Bay IPC driver, which is
the mailbox client for this controller.

The AP is notified of pending messages in the CPU FIFO by means of the
'FIFO-not-empty' interrupt, which is generated by the CPU FIFO while not
empty. This interrupt is cleared automatically once all messages have
been read from the FIFO (i.e., the FIFO has been emptied).

The hardware doesn't provide a TX done IRQ (i.e., an IRQ that allows
the VPU firmware to notify the AP that the message put into the VPU FIFO
has been received); however the AP can ensure that the message has been
successfully put into the VPU FIFO (and therefore transmitted) by
checking the VPU FIFO status register (TIM_IPC_FIFO_OF_FLAG0) to ensure
that writing the message didn't cause the FIFO to overflow (as described
above).

Therefore, the mailbox controller is configured as capable of tx_done
IRQs and a tasklet is used to simulate the tx_done IRQ. The tasklet is
activated by send_data() right after the message has been put into the
VPU FIFO and the VPU FIFO status register has been checked. If an
overflow is reported by the status register, the tasklet passes -EBUSY
to mbox_chan_txdone(), to notify the mailbox client of the failed TX.

The client should therefore register a tx_done() callback to properly
handle failed transmissions.

Note: the 'txdone_poll' mechanism cannot be used because it doesn't
provide a way to report a failed transmission.

Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Daniele Alessandrelli <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
---
MAINTAINERS | 1 +
drivers/mailbox/Kconfig | 11 ++
drivers/mailbox/Makefile | 2 +
drivers/mailbox/vpu-ipc-mailbox.c | 297 ++++++++++++++++++++++++++++++
4 files changed, 311 insertions(+)
create mode 100644 drivers/mailbox/vpu-ipc-mailbox.c

diff --git a/MAINTAINERS b/MAINTAINERS
index 68e6af3e5650..a88f5eb35e39 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -9184,6 +9184,7 @@ M: Daniele Alessandrelli <[email protected]>
M: Mark Gross <[email protected]>
S: Supported
F: Documentation/devicetree/bindings/mailbox/intel,vpu-ipc-mailbox.yaml
+F: drivers/mailbox/vpu-ipc-mailbox.c

INTEL WIRELESS 3945ABG/BG, 4965AGN (iwlegacy)
M: Stanislaw Gruszka <[email protected]>
diff --git a/drivers/mailbox/Kconfig b/drivers/mailbox/Kconfig
index f4abe3529acd..cb50b541a5c6 100644
--- a/drivers/mailbox/Kconfig
+++ b/drivers/mailbox/Kconfig
@@ -29,6 +29,17 @@ config IMX_MBOX
help
Mailbox implementation for i.MX Messaging Unit (MU).

+config INTEL_VPU_IPC_MBOX
+ tristate "Intel VPU IPC Mailbox"
+ depends on HAS_IOMEM
+ depends on OF || COMPILE_TEST
+ help
+ Mailbox implementation for enabling inter-processor communication
+ between application processors and Intel VPUs.
+
+ Say Y or M here if you are building for an SoC equipped with an Intel
+ VPU. If M is selected, the module will be called vpu-ipc-mailbox.
+
config PLATFORM_MHU
tristate "Platform MHU Mailbox"
depends on OF
diff --git a/drivers/mailbox/Makefile b/drivers/mailbox/Makefile
index 7194fa92c787..68768bb2ee43 100644
--- a/drivers/mailbox/Makefile
+++ b/drivers/mailbox/Makefile
@@ -56,3 +56,5 @@ obj-$(CONFIG_SUN6I_MSGBOX) += sun6i-msgbox.o
obj-$(CONFIG_SPRD_MBOX) += sprd-mailbox.o

obj-$(CONFIG_QCOM_IPCC) += qcom-ipcc.o
+
+obj-$(CONFIG_INTEL_VPU_IPC_MBOX) += vpu-ipc-mailbox.o
diff --git a/drivers/mailbox/vpu-ipc-mailbox.c b/drivers/mailbox/vpu-ipc-mailbox.c
new file mode 100644
index 000000000000..ad161a7bbabb
--- /dev/null
+++ b/drivers/mailbox/vpu-ipc-mailbox.c
@@ -0,0 +1,297 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Intel VPU IPC mailbox driver.
+ *
+ * Copyright (c) 2020-2021 Intel Corporation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <linux/mailbox_controller.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+
+/*
+ * The IPC FIFO registers (offsets to the base address defined in device tree).
+ */
+
+/*
+ * TIM_IPC_FIFO - Write a 32-bit entry to FIFO.
+ *
+ * The entry to be put in the FIFO must be written to this register.
+ *
+ * NOTE: the 6 least-significant bits are reserved for the writing processor
+ * to include its processor ID, 0 <= x <= 62, so it can determine if the entry
+ * was written correctly by checking the appropriate bit of register
+ * TIM_IPC_FIFO_OF_FLAG[n].
+ *
+ * Internally, the hardware increments FIFO write pointer and fill level.
+ *
+ */
+#define IPC_FIFO 0x00
+
+/* The last 6 bits of an IPC entry are reserved. */
+#define IPC_FIFO_ENTRY_RSVD_MASK 0x3f
+
+/*
+ * IPC_FIFO_ATM - Read from FIFO using ATM mode.
+ *
+ * If the FIFO is empty, reading from this register returns 0xFFFFFFFF; otherwise
+ * it returns the value from the FIFO with the 6 least-significant bits set to 0.
+ *
+ * Internally, the hardware increments FIFO read pointer and decrements fill
+ * level.
+ */
+#define IPC_FIFO_ATM 0x04
+#define IPC_FIFO_EMPTY 0xFFFFFFFF
+
+/*
+ * TIM_IPC_FIFO_OF_FLAG[n] - IPC FIFO overflow status for processor IDs 0-62.
+ *
+ * Read:
+ *
+ * A processor can check that its writes to the IPC FIFO were successful by
+ * reading the value of TIM_IPC_FIFO_OF_FLAG0 or TIM_IPC_FIFO_OF_FLAG1
+ * (depending on its processor ID).
+ *
+ * Bit x, 0 <= x <= 31, of TIM_IPC_FIFO_OF_FLAG0 is set high if a write
+ * to TIM_IPC_FIFO by processor ID x failed because the FIFO was full.
+ *
+ * Bit x, 0 <= x <= 30, of TIM_IPC_FIFO_OF_FLAG1 is set high if a write
+ * to TIM_IPC_FIFO by processor ID x+32 failed because the FIFO was
+ * full.
+ *
+ * Processors are identified by the 6 least-significant bits of words
+ * written to TIM_IPC_FIFO, i.e. x = TIM_IPC_FIFO[5:0].
+ * Processor ID = 0x3F is reserved to indicate a read of an empty FIFO
+ * has occurred.
+ *
+ * Write:
+ *
+ * Writing 1 to bit position x of TIM_IPC_FIFO_OF_FLAG0 clears the
+ * overflow flag corresponding to processor ID x. Writing 1 to bit
+ * position x of TIM_IPC_FIFO_OF_FLAG1 clears the overflow flag
+ * corresponding to processor ID x+32.
+ *
+ * Writing 0 to any bit position has no effect.
+ */
+#define IPC_FIFO_OF_FLAG0 0x10
+#define IPC_FIFO_OF_FLAG1 0x14
+
+/* The processor ID of the CPU. */
+#define IPC_FIFO_ID_CPU 0
+
+/**
+ * struct vpu_ipc_mbox - Intel VPU IPC mailbox controller.
+ * @mbox: Mailbox controller.
+ * @mbox_chan: The only channel supported by this controller.
+ * @dev: The device associated with this controller.
+ * @cpu_fifo_base: Base address of CPU FIFO registers.
+ * @vpu_fifo_base: Base address of VPU FIFO registers.
+ * @txdone_tasklet: Tasklet calling mbox_chan_txdone(). It's activated by
+ * the send_data() function, after the VPU FIFO has been
+ * written; a tasklet is used because send_data() cannot
+ * call mbox_chan_txdone() directly.
+ * @txdone_result: The result of the last TX. It's set by the send_data()
+ * function before activating the txdone_tasklet.
+ */
+struct vpu_ipc_mbox {
+ struct mbox_controller mbox;
+ struct mbox_chan mbox_chan;
+ void __iomem *cpu_fifo_base;
+ void __iomem *vpu_fifo_base;
+ struct tasklet_struct txdone_tasklet;
+ int txdone_result;
+};
+
+/* The IRQ handler servicing 'FIFO-not-empty' IRQs coming from the CPU FIFO. */
+static irqreturn_t vpu_ipc_mailbox_irq_handler(int irq, void *data)
+{
+ struct vpu_ipc_mbox *vpu_ipc_mbox = data;
+ u32 entry;
+
+ /* Extract and process one entry from CPU FIFO. */
+ entry = ioread32(vpu_ipc_mbox->cpu_fifo_base + IPC_FIFO_ATM);
+ if (unlikely(entry == IPC_FIFO_EMPTY))
+ return IRQ_NONE;
+
+ /* Notify mailbox client of new data. */
+ mbox_chan_received_data(&vpu_ipc_mbox->mbox_chan, (void *)&entry);
+
+ return IRQ_HANDLED;
+}
+
+/*
+ * The function implementing the txdone_tasklet.
+ *
+ * It calls mbox_chan_txdone() passing as arguments the only channel we have
+ * and the result of the last TX (as stored in the vpu_ipc_mbox struct).
+ */
+static void txdone_tasklet_func(unsigned long vpu_ipc_mbox_ptr)
+{
+ struct vpu_ipc_mbox *vpu_ipc_mbox = (void *)vpu_ipc_mbox_ptr;
+
+ /* Notify client that tx is completed and pass proper result code. */
+ mbox_chan_txdone(&vpu_ipc_mbox->mbox_chan, vpu_ipc_mbox->txdone_result);
+}
+
+/*
+ * Mailbox controller 'send_data()' function.
+ *
+ * This function tries to put 'data' into the VPU FIFO. This is done by
+ * writing to the IPC_FIFO VPU register and then checking whether the FIFO
+ * overflowed (by reading the IPC_FIFO_OF_FLAG0 VPU register).
+ *
+ * If the FIFO overflowed, the TX has failed and we notify the mailbox client
+ * by passing -EBUSY to mbox_chan_txdone(); otherwise the TX succeeded (we
+ * pass 0 to mbox_chan_txdone()). Note: mbox_chan_txdone() cannot be called
+ * directly (since that would cause a deadlock), therefore a tasklet is used
+ * to defer the call.
+ *
+ * 'data' is meant to be a 32-bit unsigned integer with the 6 least-significant
+ * bits set to 0.
+ */
+static int vpu_ipc_mailbox_send_data(struct mbox_chan *chan, void *data)
+{
+ struct vpu_ipc_mbox *vpu_ipc_mbox = chan->con_priv;
+ u32 entry, overflow;
+
+ entry = *((u32 *)data);
+
+ /* Ensure the last 6 bits of the entry are not used. */
+ if (unlikely(entry & IPC_FIFO_ENTRY_RSVD_MASK)) {
+ vpu_ipc_mbox->txdone_result = -EINVAL;
+ goto exit;
+ }
+
+ /* Add processor ID to entry. */
+ entry |= IPC_FIFO_ID_CPU & IPC_FIFO_ENTRY_RSVD_MASK;
+
+ /* Write entry to VPU FIFO. */
+ iowrite32(entry, vpu_ipc_mbox->vpu_fifo_base + IPC_FIFO);
+
+ /* Check whether we overflowed the VPU FIFO. */
+ overflow = ioread32(vpu_ipc_mbox->vpu_fifo_base + IPC_FIFO_OF_FLAG0) &
+ BIT(IPC_FIFO_ID_CPU);
+ if (unlikely(overflow)) {
+ /* Reset overflow register. */
+ iowrite32(BIT(IPC_FIFO_ID_CPU),
+ vpu_ipc_mbox->vpu_fifo_base + IPC_FIFO_OF_FLAG0);
+ vpu_ipc_mbox->txdone_result = -EBUSY;
+ goto exit;
+ }
+ vpu_ipc_mbox->txdone_result = 0;
+
+exit:
+ /* Schedule tasklet to call mbox_chan_txdone(). */
+ tasklet_schedule(&vpu_ipc_mbox->txdone_tasklet);
+
+ return 0;
+}
+
+/* The mailbox channel ops for this controller. */
+static const struct mbox_chan_ops vpu_ipc_mbox_chan_ops = {
+ .send_data = vpu_ipc_mailbox_send_data,
+};
+
+static int vpu_ipc_mailbox_probe(struct platform_device *pdev)
+{
+ struct vpu_ipc_mbox *vpu_ipc_mbox;
+ struct device *dev = &pdev->dev;
+ void __iomem *base;
+ int irq;
+ int rc;
+
+ vpu_ipc_mbox = devm_kzalloc(dev, sizeof(*vpu_ipc_mbox), GFP_KERNEL);
+ if (!vpu_ipc_mbox)
+ return -ENOMEM;
+
+ /* Map CPU FIFO registers. */
+ base = devm_platform_ioremap_resource_byname(pdev, "cpu_fifo");
+ if (IS_ERR(base)) {
+ dev_err(dev, "Failed to ioremap CPU FIFO registers\n");
+ return PTR_ERR(base);
+ }
+ vpu_ipc_mbox->cpu_fifo_base = base;
+
+ /* Map VPU FIFO registers. */
+ base = devm_platform_ioremap_resource_byname(pdev, "vpu_fifo");
+ if (IS_ERR(base)) {
+ dev_err(dev, "Failed to ioremap VPU FIFO registers\n");
+ return PTR_ERR(base);
+ }
+ vpu_ipc_mbox->vpu_fifo_base = base;
+
+ /* Initialize mailbox channels. */
+ vpu_ipc_mbox->mbox_chan.con_priv = vpu_ipc_mbox;
+
+ /* Initialize mailbox controller. */
+ vpu_ipc_mbox->mbox.dev = dev;
+ vpu_ipc_mbox->mbox.ops = &vpu_ipc_mbox_chan_ops;
+ vpu_ipc_mbox->mbox.chans = &vpu_ipc_mbox->mbox_chan;
+ vpu_ipc_mbox->mbox.num_chans = 1;
+ /*
+ * Set txdone_irq; we don't have a HW IRQ, but we use a txdone tasklet
+ * to simulate it.
+ */
+ vpu_ipc_mbox->mbox.txdone_irq = true;
+
+ /* Init TX done tasklet. */
+ tasklet_init(&vpu_ipc_mbox->txdone_tasklet, txdone_tasklet_func,
+ (uintptr_t)vpu_ipc_mbox);
+
+ rc = devm_mbox_controller_register(dev, &vpu_ipc_mbox->mbox);
+ if (rc) {
+ dev_err(&pdev->dev, "Failed to register VPU IPC controller\n");
+ return rc;
+ }
+
+ /* Register interrupt handler for CPU FIFO. */
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0)
+ return irq;
+ rc = devm_request_irq(dev, irq, vpu_ipc_mailbox_irq_handler, 0,
+ dev_name(dev), vpu_ipc_mbox);
+ if (rc)
+ return rc;
+
+ platform_set_drvdata(pdev, vpu_ipc_mbox);
+
+ return 0;
+}
+
+static int vpu_ipc_mailbox_remove(struct platform_device *pdev)
+{
+ struct vpu_ipc_mbox *vpu_ipc_mbox = platform_get_drvdata(pdev);
+
+ /*
+ * Just kill the tasklet as iomem and irq have been requested with
+ * devm_* functions and, therefore, are freed automatically.
+ */
+ tasklet_kill(&vpu_ipc_mbox->txdone_tasklet);
+
+ return 0;
+}
+
+static const struct of_device_id vpu_ipc_mailbox_of_match[] = {
+ {
+ .compatible = "intel,vpu-ipc-mailbox",
+ },
+ {}
+};
+
+static struct platform_driver vpu_ipc_mailbox_driver = {
+ .driver = {
+ .name = "vpu-ipc-mailbox",
+ .of_match_table = vpu_ipc_mailbox_of_match,
+ },
+ .probe = vpu_ipc_mailbox_probe,
+ .remove = vpu_ipc_mailbox_remove,
+};
+module_platform_driver(vpu_ipc_mailbox_driver);
+
+MODULE_DESCRIPTION("Intel VPU IPC mailbox driver");
+MODULE_AUTHOR("Daniele Alessandrelli <[email protected]>");
+MODULE_LICENSE("GPL");
--
2.17.1

2021-02-12 22:36:25

by mark gross

Subject: [PATCH v6 23/34] xlink-core: add async channel and events

From: Seamus Kelly <[email protected]>

Enable asynchronous channel and event communication.

Add APIs:
data ready callback:
The xLink Data Ready Callback function is used to
register a callback function that is invoked when data
is ready to be read from a channel.
data consumed callback:
The xLink Data Consumed Callback function is used to
register a callback function that is invoked when data
is consumed by the peer node on a channel.
Add event notification handling, including APIs:
register device event:
The xLink Register Device Event function is used to
register a callback for notification of certain system
events. Currently XLink supports 4 such events [0-3],
whose meaning is system dependent. Registering for an
event means that the callback will be called when the
event occurs, with two parameters: the sw_device_id of
the device that triggered the event and the event
number [0-3].
unregister device event:
The xLink Unregister Device Event function is used to
unregister events that have previously been registered
by the register device event API.

Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
Signed-off-by: Seamus Kelly <[email protected]>
---
drivers/misc/xlink-core/xlink-core.c | 497 ++++++++++++++++----
drivers/misc/xlink-core/xlink-core.h | 11 +-
drivers/misc/xlink-core/xlink-defs.h | 6 +-
drivers/misc/xlink-core/xlink-dispatcher.c | 53 +--
drivers/misc/xlink-core/xlink-ioctl.c | 146 +++++-
drivers/misc/xlink-core/xlink-ioctl.h | 6 +
drivers/misc/xlink-core/xlink-multiplexer.c | 176 +++++--
drivers/misc/xlink-core/xlink-platform.c | 27 ++
include/linux/xlink.h | 15 +-
9 files changed, 757 insertions(+), 180 deletions(-)

diff --git a/drivers/misc/xlink-core/xlink-core.c b/drivers/misc/xlink-core/xlink-core.c
index d0a3f98d16af..23c0025f6f0d 100644
--- a/drivers/misc/xlink-core/xlink-core.c
+++ b/drivers/misc/xlink-core/xlink-core.c
@@ -55,6 +55,8 @@ static struct cdev xlink_cdev;

static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg);

+static struct mutex dev_event_lock;
+
static const struct file_operations fops = {
.owner = THIS_MODULE,
.unlocked_ioctl = xlink_ioctl,
@@ -66,14 +68,75 @@ struct xlink_link {
struct kref refcount;
};

+struct xlink_attr {
+ unsigned long value;
+ u32 sw_dev_id;
+};
+
struct keembay_xlink_dev {
struct platform_device *pdev;
struct xlink_link links[XLINK_MAX_CONNECTIONS];
u32 nmb_connected_links;
struct mutex lock; // protect access to xlink_dev
+ struct xlink_attr eventx[4];
+};
+
+struct event_info {
+ struct list_head list;
+ u32 sw_device_id;
+ u32 event_type;
+ u32 user_flag;
+ xlink_device_event_cb event_notif_fn;
};

-static u8 volbuf[XLINK_MAX_BUF_SIZE]; // buffer for volatile transactions
+// sysfs attribute functions
+
+static ssize_t eventx_show(struct device *dev, struct device_attribute *attr,
+ int index, char *buf)
+{
+ struct keembay_xlink_dev *xlink_dev = dev_get_drvdata(dev);
+ struct xlink_attr *a = &xlink_dev->eventx[index];
+
+ return sysfs_emit(buf, "0x%x 0x%lx\n", a->sw_dev_id, a->value);
+}
+
+static ssize_t event0_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ return eventx_show(dev, attr, 0, buf);
+}
+
+static ssize_t event1_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ return eventx_show(dev, attr, 1, buf);
+}
+
+static ssize_t event2_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ return eventx_show(dev, attr, 2, buf);
+}
+
+static ssize_t event3_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ return eventx_show(dev, attr, 3, buf);
+}
+
+static DEVICE_ATTR_RO(event0);
+static DEVICE_ATTR_RO(event1);
+static DEVICE_ATTR_RO(event2);
+static DEVICE_ATTR_RO(event3);
+static struct attribute *xlink_sysfs_entries[] = {
+ &dev_attr_event0.attr,
+ &dev_attr_event1.attr,
+ &dev_attr_event2.attr,
+ &dev_attr_event3.attr,
+ NULL,
+};
+
+static const struct attribute_group xlink_sysfs_group = {
+ .attrs = xlink_sysfs_entries,
+};
+
+static struct event_info ev_info;

/*
* global variable pointing to our xlink device.
@@ -207,7 +270,14 @@ static int kmb_xlink_probe(struct platform_device *pdev)
dev_info(&pdev->dev, "Cannot add the device to the system\n");
goto r_class;
}
+ INIT_LIST_HEAD(&ev_info.list);

+ rc = devm_device_add_group(&pdev->dev, &xlink_sysfs_group);
+ if (rc) {
+ dev_err(&pdev->dev, "failed to create sysfs entries: %d\n", rc);
+ return rc;
+ }
+ mutex_init(&dev_event_lock);
return 0;

r_device:
@@ -231,7 +301,6 @@ static int kmb_xlink_remove(struct platform_device *pdev)
rc = xlink_multiplexer_destroy();
if (rc != X_LINK_SUCCESS)
pr_err("Multiplexer destroy failed\n");
- // stop dispatchers and destroy
rc = xlink_dispatcher_destroy();
if (rc != X_LINK_SUCCESS)
pr_err("Dispatcher destroy failed\n");
@@ -251,7 +320,6 @@ static int kmb_xlink_remove(struct platform_device *pdev)
* IOCTL function for User Space access to xlink kernel functions
*
*/
-
static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
int rc;
@@ -263,6 +331,12 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
case XL_OPEN_CHANNEL:
rc = ioctl_open_channel(arg);
break;
+ case XL_DATA_READY_CALLBACK:
+ rc = ioctl_data_ready_callback(arg);
+ break;
+ case XL_DATA_CONSUMED_CALLBACK:
+ rc = ioctl_data_consumed_callback(arg);
+ break;
case XL_READ_DATA:
rc = ioctl_read_data(arg);
break;
@@ -275,6 +349,9 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
case XL_WRITE_VOLATILE:
rc = ioctl_write_volatile_data(arg);
break;
+ case XL_WRITE_CONTROL_DATA:
+ rc = ioctl_write_control_data(arg);
+ break;
case XL_RELEASE_DATA:
rc = ioctl_release_data(arg);
break;
@@ -285,10 +362,10 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
rc = ioctl_start_vpu(arg);
break;
case XL_STOP_VPU:
- rc = xlink_stop_vpu();
+ rc = ioctl_stop_vpu();
break;
case XL_RESET_VPU:
- rc = xlink_stop_vpu();
+ rc = ioctl_stop_vpu();
break;
case XL_DISCONNECT:
rc = ioctl_disconnect(arg);
@@ -314,6 +391,12 @@ static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
case XL_SET_DEVICE_MODE:
rc = ioctl_set_device_mode(arg);
break;
+ case XL_REGISTER_DEV_EVENT:
+ rc = ioctl_register_device_event(arg);
+ break;
+ case XL_UNREGISTER_DEV_EVENT:
+ rc = ioctl_unregister_device_event(arg);
+ break;
}
if (rc)
return -EIO;
@@ -387,14 +470,12 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
xlink->nmb_connected_links++;
kref_init(&link->refcount);
if (interface != IPC_INTERFACE) {
- // start dispatcher
rc = xlink_dispatcher_start(link->id, &link->handle);
if (rc) {
pr_err("dispatcher start failed\n");
goto r_cleanup;
}
}
- // initialize multiplexer connection
rc = xlink_multiplexer_connect(link->id);
if (rc) {
pr_err("multiplexer connect failed\n");
@@ -405,7 +486,6 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
link->handle.dev_type,
xlink->nmb_connected_links);
} else {
- // already connected
pr_info("dev 0x%x ALREADY connected - dev_type %d\n",
link->handle.sw_device_id,
link->handle.dev_type);
@@ -413,7 +493,6 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
*handle = link->handle;
}
mutex_unlock(&xlink->lock);
- // TODO: implement ping
return X_LINK_SUCCESS;

r_cleanup:
@@ -423,64 +502,109 @@ enum xlink_error xlink_connect(struct xlink_handle *handle)
}
EXPORT_SYMBOL_GPL(xlink_connect);

-enum xlink_error xlink_open_channel(struct xlink_handle *handle,
- u16 chan, enum xlink_opmode mode,
- u32 data_size, u32 timeout)
+enum xlink_error xlink_data_available_event(struct xlink_handle *handle,
+ u16 chan,
+ xlink_event data_available_event)
{
struct xlink_event *event;
struct xlink_link *link;
- int event_queued = 0;
enum xlink_error rc;
+ int event_queued = 0;
+ char origin = 'K';

if (!xlink || !handle)
return X_LINK_ERROR;

+ if (CHANNEL_USER_BIT_IS_SET(chan))
+ origin = 'U'; // function called from user space
+ CHANNEL_CLEAR_USER_BIT(chan); // restore proper channel value
+
link = get_link_by_sw_device_id(handle->sw_device_id);
if (!link)
return X_LINK_ERROR;
-
- event = xlink_create_event(link->id, XLINK_OPEN_CHANNEL_REQ,
- &link->handle, chan, data_size, timeout);
+ event = xlink_create_event(link->id, XLINK_DATA_READY_CALLBACK_REQ,
+ &link->handle, chan, 0, 0);
if (!event)
return X_LINK_ERROR;
-
- event->data = (void *)mode;
+ event->data = data_available_event;
+ event->callback_origin = origin;
+ if (!data_available_event)
+ event->calling_pid = NULL; // disable callbacks on this channel
+ else
+ event->calling_pid = current;
rc = xlink_multiplexer_tx(event, &event_queued);
if (!event_queued)
xlink_destroy_event(event);
return rc;
}
-EXPORT_SYMBOL_GPL(xlink_open_channel);
-
-enum xlink_error xlink_close_channel(struct xlink_handle *handle,
- u16 chan)
+EXPORT_SYMBOL_GPL(xlink_data_available_event);
+enum xlink_error xlink_data_consumed_event(struct xlink_handle *handle,
+ u16 chan,
+ xlink_event data_consumed_event)
{
struct xlink_event *event;
struct xlink_link *link;
enum xlink_error rc;
int event_queued = 0;
+ char origin = 'K';

if (!xlink || !handle)
return X_LINK_ERROR;

+ if (CHANNEL_USER_BIT_IS_SET(chan))
+ origin = 'U'; // function called from user space
+ CHANNEL_CLEAR_USER_BIT(chan); // restore proper channel value
+
link = get_link_by_sw_device_id(handle->sw_device_id);
if (!link)
return X_LINK_ERROR;
-
- event = xlink_create_event(link->id, XLINK_CLOSE_CHANNEL_REQ,
+ event = xlink_create_event(link->id, XLINK_DATA_CONSUMED_CALLBACK_REQ,
&link->handle, chan, 0, 0);
if (!event)
return X_LINK_ERROR;
+ event->data = data_consumed_event;
+ event->callback_origin = origin;
+ if (!data_consumed_event)
+ event->calling_pid = NULL; // disable callbacks on this channel
+ else
+ event->calling_pid = current;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_data_consumed_event);
+enum xlink_error xlink_open_channel(struct xlink_handle *handle,
+ u16 chan, enum xlink_opmode mode,
+ u32 data_size, u32 timeout)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ int event_queued = 0;
+ enum xlink_error rc;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_OPEN_CHANNEL_REQ,
+ &link->handle, chan, data_size, timeout);
+ if (!event)
+ return X_LINK_ERROR;

+ event->data = (void *)mode;
rc = xlink_multiplexer_tx(event, &event_queued);
if (!event_queued)
xlink_destroy_event(event);
return rc;
}
-EXPORT_SYMBOL_GPL(xlink_close_channel);
+EXPORT_SYMBOL_GPL(xlink_open_channel);

-enum xlink_error xlink_write_data(struct xlink_handle *handle,
- u16 chan, u8 const *pmessage, u32 size)
+enum xlink_error xlink_close_channel(struct xlink_handle *handle,
+ u16 chan)
{
struct xlink_event *event;
struct xlink_link *link;
@@ -490,38 +614,26 @@ enum xlink_error xlink_write_data(struct xlink_handle *handle,
if (!xlink || !handle)
return X_LINK_ERROR;

- if (size > XLINK_MAX_DATA_SIZE)
- return X_LINK_ERROR;
-
link = get_link_by_sw_device_id(handle->sw_device_id);
if (!link)
return X_LINK_ERROR;

- event = xlink_create_event(link->id, XLINK_WRITE_REQ, &link->handle,
- chan, size, 0);
+ event = xlink_create_event(link->id, XLINK_CLOSE_CHANNEL_REQ,
+ &link->handle, chan, 0, 0);
if (!event)
return X_LINK_ERROR;

- if (chan < XLINK_IPC_MAX_CHANNELS &&
- event->interface == IPC_INTERFACE) {
- /* only passing message address across IPC interface */
- event->data = &pmessage;
- rc = xlink_multiplexer_tx(event, &event_queued);
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
xlink_destroy_event(event);
- } else {
- event->data = (u8 *)pmessage;
- event->paddr = 0;
- rc = xlink_multiplexer_tx(event, &event_queued);
- if (!event_queued)
- xlink_destroy_event(event);
- }
+
return rc;
}
-EXPORT_SYMBOL_GPL(xlink_write_data);
+EXPORT_SYMBOL_GPL(xlink_close_channel);

-enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
- u16 chan, u8 const *pmessage,
- u32 size)
+static enum xlink_error do_xlink_write_data(struct xlink_handle *handle,
+ u16 chan, u8 const *pmessage,
+ u32 size, u32 user_flag)
{
struct xlink_event *event;
struct xlink_link *link;
@@ -544,48 +656,78 @@ enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
chan, size, 0);
if (!event)
return X_LINK_ERROR;
- event->user_data = 1;
+ event->user_data = user_flag;

if (chan < XLINK_IPC_MAX_CHANNELS &&
event->interface == IPC_INTERFACE) {
/* only passing message address across IPC interface */
- if (get_user(addr, (u32 __user *)pmessage)) {
- xlink_destroy_event(event);
- return X_LINK_ERROR;
+ if (user_flag) {
+ if (get_user(addr, (u32 __user *)pmessage)) {
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ event->data = &addr;
+ } else {
+ event->data = &pmessage;
}
- event->data = &addr;
rc = xlink_multiplexer_tx(event, &event_queued);
xlink_destroy_event(event);
} else {
- event->data = xlink_platform_allocate(&xlink->pdev->dev, &paddr,
- size,
- XLINK_PACKET_ALIGNMENT,
- XLINK_NORMAL_MEMORY);
- if (!event->data) {
- xlink_destroy_event(event);
- return X_LINK_ERROR;
- }
- if (copy_from_user(event->data, (void __user *)pmessage, size)) {
- xlink_platform_deallocate(&xlink->pdev->dev,
- event->data, paddr, size,
- XLINK_PACKET_ALIGNMENT,
- XLINK_NORMAL_MEMORY);
- xlink_destroy_event(event);
- return X_LINK_ERROR;
+ if (user_flag) {
+ event->data = xlink_platform_allocate(&xlink->pdev->dev, &paddr,
+ size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ if (!event->data) {
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ if (copy_from_user(event->data, (void __user *)pmessage, size)) {
+ xlink_platform_deallocate(&xlink->pdev->dev,
+ event->data, paddr, size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ event->paddr = paddr;
+ } else {
+ event->data = (u8 *)pmessage;
+ event->paddr = 0;
}
- event->paddr = paddr;
rc = xlink_multiplexer_tx(event, &event_queued);
if (!event_queued) {
- xlink_platform_deallocate(&xlink->pdev->dev,
- event->data, paddr, size,
- XLINK_PACKET_ALIGNMENT,
- XLINK_NORMAL_MEMORY);
+ if (user_flag) {
+ xlink_platform_deallocate(&xlink->pdev->dev,
+ event->data, paddr, size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ }
xlink_destroy_event(event);
}
}
return rc;
}

+enum xlink_error xlink_write_data(struct xlink_handle *handle,
+ u16 chan, u8 const *pmessage, u32 size)
+{
+ enum xlink_error rc = 0;
+
+ rc = do_xlink_write_data(handle, chan, pmessage, size, 0);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_write_data);
+
+enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
+ u16 chan, u8 const *pmessage, u32 size)
+{
+ enum xlink_error rc = 0;
+
+ rc = do_xlink_write_data(handle, chan, pmessage, size, 1);
+ return rc;
+}
+
enum xlink_error xlink_write_control_data(struct xlink_handle *handle,
u16 chan, u8 const *pmessage,
u32 size)
@@ -614,16 +756,7 @@ enum xlink_error xlink_write_control_data(struct xlink_handle *handle,
}
EXPORT_SYMBOL_GPL(xlink_write_control_data);

-enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
- u16 chan, u8 const *message, u32 size)
-{
- enum xlink_error rc = 0;
-
- rc = do_xlink_write_volatile(handle, chan, message, size, 0);
- return rc;
-}
-
-enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
+static enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
u16 chan, u8 const *message,
u32 size, u32 user_flag)
{
@@ -668,6 +801,26 @@ enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
return rc;
}

+enum xlink_error xlink_write_volatile_user(struct xlink_handle *handle,
+ u16 chan, u8 const *message,
+ u32 size)
+{
+ enum xlink_error rc = 0;
+
+ rc = do_xlink_write_volatile(handle, chan, message, size, 1);
+ return rc;
+}
+
+enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
+ u16 chan, u8 const *message, u32 size)
+{
+ enum xlink_error rc = 0;
+
+ rc = do_xlink_write_volatile(handle, chan, message, size, 0);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_write_volatile);
+
enum xlink_error xlink_read_data(struct xlink_handle *handle,
u16 chan, u8 **pmessage, u32 *size)
{
@@ -757,8 +910,8 @@ EXPORT_SYMBOL_GPL(xlink_release_data);
enum xlink_error xlink_disconnect(struct xlink_handle *handle)
{
struct xlink_link *link;
- int interface = NULL_INTERFACE;
- enum xlink_error rc = X_LINK_ERROR;
+ int interface;
+ enum xlink_error rc = 0;

if (!xlink || !handle)
return X_LINK_ERROR;
@@ -767,7 +920,6 @@ enum xlink_error xlink_disconnect(struct xlink_handle *handle)
if (!link)
return X_LINK_ERROR;

- // decrement refcount, if count is 0 lock mutex and disconnect
if (kref_put_mutex(&link->refcount, release_after_kref_put,
&xlink->lock)) {
// stop dispatcher
@@ -946,6 +1098,179 @@ enum xlink_error xlink_get_device_mode(struct xlink_handle *handle,
return rc;
}
EXPORT_SYMBOL_GPL(xlink_get_device_mode);
+
+static int xlink_device_event_handler(u32 sw_device_id, u32 event_type)
+{
+ struct event_info *events = NULL;
+ xlink_device_event_cb event_cb;
+ bool found = false;
+ char event_attr[7];
+
+ mutex_lock(&dev_event_lock);
+ // find sw_device_id, event_type in list
+ list_for_each_entry(events, &ev_info.list, list) {
+ if (events) {
+ if (events->sw_device_id == sw_device_id &&
+ events->event_type == event_type) {
+ event_cb = events->event_notif_fn;
+ found = true;
+ break;
+ }
+ }
+ }
+ if (found) {
+ if (events->user_flag) {
+ xlink->eventx[events->event_type].value = events->event_type;
+ xlink->eventx[events->event_type].sw_dev_id = sw_device_id;
+ sprintf(event_attr, "event%d", events->event_type);
+ sysfs_notify(&xlink->pdev->dev.kobj, NULL, event_attr);
+ } else {
+ if (event_cb) {
+ event_cb(sw_device_id, event_type);
+ } else {
+ pr_info("No callback found for sw_device_id 0x%x event type %d\n",
+ sw_device_id, event_type);
+ mutex_unlock(&dev_event_lock);
+ return X_LINK_ERROR;
+ }
+ }
+ pr_info("sysfs_notify event %d swdev_id 0x%x\n",
+ events->event_type, sw_device_id);
+ }
+ mutex_unlock(&dev_event_lock);
+ return X_LINK_SUCCESS;
+}
+
+static bool event_registered(u32 sw_dev_id, u32 event)
+{
+ struct event_info *events = NULL;
+
+ list_for_each_entry(events, &ev_info.list, list) {
+ if (events) {
+ if (events->sw_device_id == sw_dev_id &&
+ events->event_type == event) {
+ return true;
+ }
+ }
+ }
+ return false;
+}
+
+static enum xlink_error do_xlink_register_device_event(struct xlink_handle *handle,
+ u32 *event_list,
+ u32 num_events,
+ xlink_device_event_cb event_notif_fn,
+ u32 user_flag)
+{
+ struct event_info *events;
+ u32 interface;
+ u32 event;
+ int i;
+
+ if (num_events == 0 || num_events > NUM_REG_EVENTS)
+ return X_LINK_ERROR;
+ for (i = 0; i < num_events; i++) {
+ events = kzalloc(sizeof(*events), GFP_KERNEL);
+ if (!events)
+ return X_LINK_ERROR;
+ event = *event_list;
+ events->sw_device_id = handle->sw_device_id;
+ events->event_notif_fn = event_notif_fn;
+ events->event_type = *event_list++;
+ events->user_flag = user_flag;
+ if (user_flag) {
+ /* only add to list once if userspace */
+ /* xlink userspace handles multi process callbacks */
+ if (event_registered(handle->sw_device_id, event)) {
+ pr_info("xlink-core: Event 0x%x - %d already registered\n",
+ handle->sw_device_id, event);
+ kfree(events);
+ continue;
+ }
+ }
+ pr_info("xlink-core:Events: sw_device_id 0x%x event %d fn %p user_flag %d\n",
+ events->sw_device_id, events->event_type,
+ events->event_notif_fn, events->user_flag);
+ list_add_tail(&events->list, &ev_info.list);
+ }
+ interface = get_interface_from_sw_device_id(handle->sw_device_id);
+ if (interface == NULL_INTERFACE)
+ return X_LINK_ERROR;
+ xlink_platform_register_for_events(interface, handle->sw_device_id,
+ xlink_device_event_handler);
+ return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_register_device_event_user(struct xlink_handle *handle,
+ u32 *event_list, u32 num_events,
+ xlink_device_event_cb event_notif_fn)
+{
+ enum xlink_error rc;
+
+ rc = do_xlink_register_device_event(handle, event_list, num_events,
+ event_notif_fn, 1);
+ return rc;
+}
+
+enum xlink_error xlink_register_device_event(struct xlink_handle *handle,
+ u32 *event_list, u32 num_events,
+ xlink_device_event_cb event_notif_fn)
+{
+ enum xlink_error rc;
+
+ rc = do_xlink_register_device_event(handle, event_list, num_events,
+ event_notif_fn, 0);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_register_device_event);
+
+enum xlink_error xlink_unregister_device_event(struct xlink_handle *handle,
+ u32 *event_list,
+ u32 num_events)
+{
+ struct event_info *events = NULL;
+ u32 interface;
+ int found = 0;
+ int count = 0;
+ int i;
+
+ if (num_events == 0 || num_events > NUM_REG_EVENTS)
+ return X_LINK_ERROR;
+ for (i = 0; i < num_events; i++) {
+ list_for_each_entry(events, &ev_info.list, list) {
+ if (events->sw_device_id == handle->sw_device_id &&
+ events->event_type == event_list[i]) {
+ found = 1;
+ break;
+ }
+ }
+ if (!events || !found)
+ return X_LINK_ERROR;
+ pr_info("removing event %d for sw_device_id 0x%x\n",
+ events->event_type, events->sw_device_id);
+ list_del(&events->list);
+ kfree(events);
+ }
+ // check if any events left for this sw_device_id
+ // are still registered ( in list )
+ list_for_each_entry(events, &ev_info.list, list) {
+ if (events) {
+ if (events->sw_device_id == handle->sw_device_id) {
+ count++;
+ break;
+ }
+ }
+ }
+ if (count == 0) {
+ interface = get_interface_from_sw_device_id(handle->sw_device_id);
+ if (interface == NULL_INTERFACE)
+ return X_LINK_ERROR;
+ xlink_platform_unregister_for_events(interface, handle->sw_device_id);
+ }
+ return X_LINK_SUCCESS;
+}
+EXPORT_SYMBOL_GPL(xlink_unregister_device_event);
+
/* Device tree driver match. */
static const struct of_device_id kmb_xlink_of_match[] = {
{
diff --git a/drivers/misc/xlink-core/xlink-core.h b/drivers/misc/xlink-core/xlink-core.h
index 5ba7ac653bf7..ee10058a15ac 100644
--- a/drivers/misc/xlink-core/xlink-core.h
+++ b/drivers/misc/xlink-core/xlink-core.h
@@ -12,11 +12,14 @@

#define NUM_REG_EVENTS 4

-enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
- u16 chan, u8 const *message,
- u32 size, u32 user_flag);
-
enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
u16 chan, u8 const *pmessage,
u32 size);
+enum xlink_error xlink_register_device_event_user(struct xlink_handle *handle,
+ u32 *event_list,
+ u32 num_events,
+ xlink_device_event_cb event_notif_fn);
+enum xlink_error xlink_write_volatile_user(struct xlink_handle *handle,
+ u16 chan, u8 const *message,
+ u32 size);
#endif /* XLINK_CORE_H_ */
diff --git a/drivers/misc/xlink-core/xlink-defs.h b/drivers/misc/xlink-core/xlink-defs.h
index 8985f6631175..81aa3bfffcd3 100644
--- a/drivers/misc/xlink-core/xlink-defs.h
+++ b/drivers/misc/xlink-core/xlink-defs.h
@@ -35,7 +35,7 @@
#define CONTROL_CHANNEL_TIMEOUT_MS 0U // wait indefinitely
#define SIGXLNK 44 // signal XLink uses for callback signalling

-#define UNUSED(x) (void)(x)
+#define UNUSED(x) ((void)(x))

// the end of the IPC channel range (starting at zero)
#define XLINK_IPC_MAX_CHANNELS 1024
@@ -102,6 +102,8 @@ enum xlink_event_type {
XLINK_CLOSE_CHANNEL_REQ,
XLINK_PING_REQ,
XLINK_WRITE_CONTROL_REQ,
+ XLINK_DATA_READY_CALLBACK_REQ,
+ XLINK_DATA_CONSUMED_CALLBACK_REQ,
XLINK_REQ_LAST,
// response events
XLINK_WRITE_RESP = 0x10,
@@ -113,6 +115,8 @@ enum xlink_event_type {
XLINK_CLOSE_CHANNEL_RESP,
XLINK_PING_RESP,
XLINK_WRITE_CONTROL_RESP,
+ XLINK_DATA_READY_CALLBACK_RESP,
+ XLINK_DATA_CONSUMED_CALLBACK_RESP,
XLINK_RESP_LAST,
};

diff --git a/drivers/misc/xlink-core/xlink-dispatcher.c b/drivers/misc/xlink-core/xlink-dispatcher.c
index 11ef8e4110ca..bc2f184488ac 100644
--- a/drivers/misc/xlink-core/xlink-dispatcher.c
+++ b/drivers/misc/xlink-core/xlink-dispatcher.c
@@ -5,18 +5,18 @@
* Copyright (C) 2018-2019 Intel Corporation
*
*/
-#include <linux/init.h>
-#include <linux/module.h>
+#include <linux/completion.h>
#include <linux/device.h>
+#include <linux/init.h>
#include <linux/kernel.h>
-#include <linux/slab.h>
#include <linux/kthread.h>
#include <linux/list.h>
-#include <linux/semaphore.h>
+#include <linux/module.h>
#include <linux/mutex.h>
-#include <linux/completion.h>
-#include <linux/sched/signal.h>
#include <linux/platform_device.h>
+#include <linux/sched/signal.h>
+#include <linux/semaphore.h>
+#include <linux/slab.h>

#include "xlink-dispatcher.h"
#include "xlink-multiplexer.h"
@@ -34,18 +34,18 @@ enum dispatcher_state {

/* queue for dispatcher tx thread event handling */
struct event_queue {
+ struct list_head head; /* head of event linked list */
u32 count; /* number of events in the queue */
u32 capacity; /* capacity of events in the queue */
- struct list_head head; /* head of event linked list */
struct mutex lock; /* locks queue while accessing */
};

/* dispatcher servicing a single link to a device */
struct dispatcher {
u32 link_id; /* id of link being serviced */
+ int interface; /* underlying interface of link */
enum dispatcher_state state; /* state of the dispatcher */
struct xlink_handle *handle; /* xlink device handle */
- int interface; /* underlying interface of link */
struct task_struct *rxthread; /* kthread servicing rx */
struct task_struct *txthread; /* kthread servicing tx */
struct event_queue queue; /* xlink event queue */
@@ -82,7 +82,7 @@ static struct dispatcher *get_dispatcher_by_id(u32 id)

static u32 event_generate_id(void)
{
- static u32 id = 0xa; // TODO: temporary solution
+ static u32 id = 0xa;

return id++;
}
@@ -142,9 +142,6 @@ static int dispatcher_event_send(struct xlink_event *event)
size_t event_header_size = sizeof(event->header);
int rc;

- // write event header
- // printk(KERN_DEBUG "Sending event: type = 0x%x, id = 0x%x\n",
- // event->header.type, event->header.id);
rc = xlink_platform_write(event->interface,
event->handle->sw_device_id, &event->header,
&event_header_size, event->header.timeout, NULL);
@@ -159,8 +156,10 @@ static int dispatcher_event_send(struct xlink_event *event)
event->handle->sw_device_id, event->data,
&event->header.size, event->header.timeout,
NULL);
- if (rc)
+ if (rc) {
pr_err("Write data failed %d\n", rc);
+ return rc;
+ }
if (event->user_data == 1) {
if (event->paddr != 0) {
xlink_platform_deallocate(xlinkd->dev,
@@ -187,7 +186,6 @@ static int xlink_dispatcher_rxthread(void *context)
size_t size;
int rc;

- // printk(KERN_DEBUG "dispatcher rxthread started\n");
event = xlink_create_event(disp->link_id, 0, disp->handle, 0, 0, 0);
if (!event)
return -1;
@@ -214,7 +212,6 @@ static int xlink_dispatcher_rxthread(void *context)
}
}
}
- // printk(KERN_INFO "dispatcher rxthread stopped\n");
complete(&disp->rx_done);
do_exit(0);
return 0;
@@ -225,7 +222,6 @@ static int xlink_dispatcher_txthread(void *context)
struct dispatcher *disp = (struct dispatcher *)context;
struct xlink_event *event;

- // printk(KERN_DEBUG "dispatcher txthread started\n");
allow_signal(SIGTERM); // allow thread termination while waiting on sem
complete(&disp->tx_done);
while (!kthread_should_stop()) {
@@ -236,7 +232,6 @@ static int xlink_dispatcher_txthread(void *context)
dispatcher_event_send(event);
xlink_destroy_event(event); // free handled event
}
- // printk(KERN_INFO "dispatcher txthread stopped\n");
complete(&disp->tx_done);
do_exit(0);
return 0;
@@ -250,6 +245,7 @@ static int xlink_dispatcher_txthread(void *context)
enum xlink_error xlink_dispatcher_init(void *dev)
{
struct platform_device *plat_dev = (struct platform_device *)dev;
+ struct dispatcher *dsp;
int i;

xlinkd = kzalloc(sizeof(*xlinkd), GFP_KERNEL);
@@ -258,16 +254,16 @@ enum xlink_error xlink_dispatcher_init(void *dev)

xlinkd->dev = &plat_dev->dev;
for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
- xlinkd->dispatchers[i].link_id = i;
- sema_init(&xlinkd->dispatchers[i].event_sem, 0);
- init_completion(&xlinkd->dispatchers[i].rx_done);
- init_completion(&xlinkd->dispatchers[i].tx_done);
- INIT_LIST_HEAD(&xlinkd->dispatchers[i].queue.head);
- mutex_init(&xlinkd->dispatchers[i].queue.lock);
- xlinkd->dispatchers[i].queue.count = 0;
- xlinkd->dispatchers[i].queue.capacity =
- XLINK_EVENT_QUEUE_CAPACITY;
- xlinkd->dispatchers[i].state = XLINK_DISPATCHER_INIT;
+ dsp = &xlinkd->dispatchers[i];
+ dsp->link_id = i;
+ sema_init(&dsp->event_sem, 0);
+ init_completion(&dsp->rx_done);
+ init_completion(&dsp->tx_done);
+ INIT_LIST_HEAD(&dsp->queue.head);
+ mutex_init(&dsp->queue.lock);
+ dsp->queue.count = 0;
+ dsp->queue.capacity = XLINK_EVENT_QUEUE_CAPACITY;
+ dsp->state = XLINK_DISPATCHER_INIT;
}
mutex_init(&xlinkd->lock);

@@ -329,7 +325,7 @@ enum xlink_error xlink_dispatcher_event_add(enum xlink_event_origin origin,
struct dispatcher *disp;
int rc;

- // get dispatcher by handle
+ // get dispatcher by link id
disp = get_dispatcher_by_id(event->link_id);
if (!disp)
return X_LINK_ERROR;
@@ -433,7 +429,6 @@ enum xlink_error xlink_dispatcher_destroy(void)
}
xlink_destroy_event(event);
}
- // destroy dispatcher
mutex_destroy(&disp->queue.lock);
}
mutex_destroy(&xlinkd->lock);
diff --git a/drivers/misc/xlink-core/xlink-ioctl.c b/drivers/misc/xlink-core/xlink-ioctl.c
index 90947bbccfed..7822a7b35bb6 100644
--- a/drivers/misc/xlink-core/xlink-ioctl.c
+++ b/drivers/misc/xlink-core/xlink-ioctl.c
@@ -28,15 +28,6 @@ static int copy_result_to_user(u32 *where, int rc)
return rc;
}

-static enum xlink_error xlink_write_volatile_user(struct xlink_handle *handle,
- u16 chan, u8 const *message, u32 size)
-{
- enum xlink_error rc = 0;
-
- rc = do_xlink_write_volatile(handle, chan, message, size, 1);
- return rc;
-}
-
int ioctl_connect(unsigned long arg)
{
struct xlink_handle devh = {};
@@ -158,6 +149,28 @@ int ioctl_write_data(unsigned long arg)
return copy_result_to_user(wr.return_code, rc);
}

+int ioctl_write_control_data(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkwritedata wr = {};
+ u8 volbuf[XLINK_MAX_BUF_SIZE];
+ int rc = 0;
+
+ if (copy_from_user(&wr, (void __user *)arg,
+ sizeof(struct xlinkwritedata)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)wr.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ if (wr.size > XLINK_MAX_CONTROL_DATA_SIZE)
+ return -EINVAL;
+ if (copy_from_user(volbuf, (void __user *)wr.pmessage, wr.size))
+ return -EFAULT;
+ rc = xlink_write_control_data(&devh, wr.chan, volbuf, wr.size);
+
+ return copy_result_to_user(wr.return_code, rc);
+}
+
int ioctl_write_volatile_data(unsigned long arg)
{
struct xlink_handle devh = {};
@@ -242,6 +255,14 @@ int ioctl_start_vpu(unsigned long arg)
return copy_result_to_user(startvpu.return_code, rc);
}

+int ioctl_stop_vpu(void)
+{
+ int rc = 0;
+
+ rc = xlink_stop_vpu();
+ return rc;
+}
+
int ioctl_disconnect(unsigned long arg)
{
struct xlink_handle devh = {};
@@ -424,3 +445,110 @@ int ioctl_set_device_mode(unsigned long arg)

return copy_result_to_user(devm.return_code, rc);
}
+
+int ioctl_register_device_event(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkregdevevent regdevevent = {};
+ u32 num_events = 0;
+ u32 *ev_list;
+ int rc = 0;
+
+ if (copy_from_user(&regdevevent, (void __user *)arg,
+ sizeof(struct xlinkregdevevent)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)regdevevent.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ num_events = regdevevent.num_events;
+ if (num_events > 0 && num_events <= NUM_REG_EVENTS) {
+ ev_list = kzalloc((num_events * sizeof(u32)), GFP_KERNEL);
+ if (ev_list) {
+ if (copy_from_user(ev_list,
+ (void __user *)regdevevent.event_list,
+ (num_events * sizeof(u32)))) {
+ kfree(ev_list);
+ return -EFAULT;
+ }
+ rc = xlink_register_device_event_user(&devh,
+ ev_list,
+ num_events,
+ NULL);
+ kfree(ev_list);
+ } else {
+ rc = X_LINK_ERROR;
+ }
+ } else {
+ rc = X_LINK_ERROR;
+ }
+
+ return copy_result_to_user(regdevevent.return_code, rc);
+}
+
+int ioctl_unregister_device_event(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkregdevevent regdevevent = {};
+ u32 num_events = 0;
+ u32 *ev_list;
+ int rc = 0;
+
+ if (copy_from_user(&regdevevent, (void __user *)arg,
+ sizeof(struct xlinkregdevevent)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)regdevevent.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ num_events = regdevevent.num_events;
+ if (num_events <= NUM_REG_EVENTS) {
+ ev_list = kzalloc((num_events * sizeof(u32)), GFP_KERNEL);
+ if (!ev_list)
+ return -ENOMEM;
+ if (copy_from_user(ev_list,
+ (void __user *)regdevevent.event_list,
+ (num_events * sizeof(u32)))) {
+ kfree(ev_list);
+ return -EFAULT;
+ }
+ rc = xlink_unregister_device_event(&devh, ev_list, num_events);
+ kfree(ev_list);
+ } else {
+ rc = X_LINK_ERROR;
+ }
+
+ return copy_result_to_user(regdevevent.return_code, rc);
+}
+
+int ioctl_data_ready_callback(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkcallback cb = {};
+ int rc = 0;
+
+ if (copy_from_user(&cb, (void __user *)arg,
+ sizeof(struct xlinkcallback)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)cb.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ CHANNEL_SET_USER_BIT(cb.chan); // set MSbit for user space call
+ rc = xlink_data_available_event(&devh, cb.chan, cb.callback);
+
+ return copy_result_to_user(cb.return_code, rc);
+}
+
+int ioctl_data_consumed_callback(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkcallback cb = {};
+ int rc = 0;
+
+ if (copy_from_user(&cb, (void __user *)arg,
+ sizeof(struct xlinkcallback)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)cb.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ CHANNEL_SET_USER_BIT(cb.chan); // set MSbit for user space call
+ rc = xlink_data_consumed_event(&devh, cb.chan, cb.callback);
+
+ return copy_result_to_user(cb.return_code, rc);
+}
diff --git a/drivers/misc/xlink-core/xlink-ioctl.h b/drivers/misc/xlink-core/xlink-ioctl.h
index d016d8418f30..7818b676d488 100644
--- a/drivers/misc/xlink-core/xlink-ioctl.h
+++ b/drivers/misc/xlink-core/xlink-ioctl.h
@@ -14,10 +14,12 @@ int ioctl_open_channel(unsigned long arg);
int ioctl_read_data(unsigned long arg);
int ioctl_read_to_buffer(unsigned long arg);
int ioctl_write_data(unsigned long arg);
+int ioctl_write_control_data(unsigned long arg);
int ioctl_write_volatile_data(unsigned long arg);
int ioctl_release_data(unsigned long arg);
int ioctl_close_channel(unsigned long arg);
int ioctl_start_vpu(unsigned long arg);
+int ioctl_stop_vpu(void);
int ioctl_disconnect(unsigned long arg);
int ioctl_get_device_name(unsigned long arg);
int ioctl_get_device_list(unsigned long arg);
@@ -26,5 +28,9 @@ int ioctl_boot_device(unsigned long arg);
int ioctl_reset_device(unsigned long arg);
int ioctl_get_device_mode(unsigned long arg);
int ioctl_set_device_mode(unsigned long arg);
+int ioctl_register_device_event(unsigned long arg);
+int ioctl_unregister_device_event(unsigned long arg);
+int ioctl_data_ready_callback(unsigned long arg);
+int ioctl_data_consumed_callback(unsigned long arg);

#endif /* XLINK_IOCTL_H_ */
diff --git a/drivers/misc/xlink-core/xlink-multiplexer.c b/drivers/misc/xlink-core/xlink-multiplexer.c
index 48451dc30712..e09458b62c45 100644
--- a/drivers/misc/xlink-core/xlink-multiplexer.c
+++ b/drivers/misc/xlink-core/xlink-multiplexer.c
@@ -115,6 +115,38 @@ static struct xlink_multiplexer *xmux;
*
*/

+static enum xlink_error run_callback(struct open_channel *opchan,
+ void *callback, struct task_struct *pid)
+{
+ enum xlink_error rc = X_LINK_SUCCESS;
+ struct kernel_siginfo info;
+ void (*func)(int chan);
+ int ret;
+
+ if (opchan->callback_origin == 'U') { // user-space origin
+ if (pid) {
+ memset(&info, 0, sizeof(struct kernel_siginfo));
+ info.si_signo = SIGXLNK;
+ info.si_code = SI_QUEUE;
+ info.si_errno = opchan->id;
+ info.si_ptr = (void __user *)callback;
+ ret = send_sig_info(SIGXLNK, &info, pid);
+ if (ret < 0) {
+ pr_err("Unable to send signal %d\n", ret);
+ rc = X_LINK_ERROR;
+ }
+ } else {
+ pr_err("CHAN 0x%x -- calling_pid == NULL\n",
+ opchan->id);
+ rc = X_LINK_ERROR;
+ }
+ } else { // kernel origin
+ func = callback;
+ func(opchan->id);
+ }
+ return rc;
+}
+
static inline int chan_is_non_blocking_read(struct open_channel *opchan)
{
if (opchan->chan->mode == RXN_TXN || opchan->chan->mode == RXN_TXB)
@@ -151,7 +183,7 @@ static int is_channel_for_device(u16 chan, u32 sw_device_id,
enum xlink_dev_type dev_type)
{
struct xlink_channel_type const *chan_type = get_channel_type(chan);
- int interface = NULL_INTERFACE;
+ int interface;

if (chan_type) {
interface = get_interface_from_sw_device_id(sw_device_id);
@@ -181,13 +213,9 @@ static int is_enough_space_in_channel(struct open_channel *opchan,
}
}
if (opchan->tx_up_limit == 1) {
- if ((opchan->tx_fill_level + size)
- < ((opchan->chan->size / 100) * THR_LWR)) {
- opchan->tx_up_limit = 0;
- return 1;
- } else {
+ if ((opchan->tx_fill_level + size) >=
+ ((opchan->chan->size / 100) * THR_LWR))
return 0;
- }
}
return 1;
}
@@ -231,6 +259,8 @@ static int add_packet_to_channel(struct open_channel *opchan,
list_add_tail(&pkt->list, &queue->head);
queue->count++;
opchan->rx_fill_level += pkt->length;
+ } else {
+ return X_LINK_ERROR;
}
return X_LINK_SUCCESS;
}
@@ -262,9 +292,11 @@ static int release_packet_from_channel(struct open_channel *opchan,
} else {
// find packet in channel rx queue
list_for_each_entry(pkt, &queue->head, list) {
- if (pkt->data == addr) {
- packet_found = 1;
- break;
+ if (pkt) {
+ if (pkt->data == addr) {
+ packet_found = 1;
+ break;
+ }
}
}
}
@@ -629,16 +661,46 @@ enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
}
} else if (xmux->channels[link_id][chan].status == CHAN_OPEN_PEER) {
/* channel already open */
- xmux->channels[link_id][chan].status = CHAN_OPEN; // opened locally
- xmux->channels[link_id][chan].size = event->header.size;
- xmux->channels[link_id][chan].timeout = event->header.timeout;
- xmux->channels[link_id][chan].mode = (uintptr_t)event->data;
rc = multiplexer_open_channel(link_id, chan);
+ if (rc == X_LINK_SUCCESS) {
+ struct channel *xchan = &xmux->channels[link_id][chan];
+
+ xchan->status = CHAN_OPEN; // opened locally
+ xchan->size = event->header.size;
+ xchan->timeout = event->header.timeout;
+ xchan->mode = (uintptr_t)event->data;
+ }
} else {
/* channel already open */
rc = X_LINK_ALREADY_OPEN;
}
break;
+ case XLINK_DATA_READY_CALLBACK_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ opchan->ready_callback = event->data;
+ opchan->ready_calling_pid = event->calling_pid;
+ opchan->callback_origin = event->callback_origin;
+ pr_info("xlink ready callback process registered - %lx chan %d\n",
+ (uintptr_t)event->calling_pid, chan);
+ release_channel(opchan);
+ }
+ break;
+ case XLINK_DATA_CONSUMED_CALLBACK_REQ:
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ } else {
+ opchan->consumed_callback = event->data;
+ opchan->consumed_calling_pid = event->calling_pid;
+ opchan->callback_origin = event->callback_origin;
+ pr_info("xlink consumed callback process registered - %lx chan %d\n",
+ (uintptr_t)event->calling_pid, chan);
+ release_channel(opchan);
+ }
+ break;
case XLINK_CLOSE_CHANNEL_REQ:
if (xmux->channels[link_id][chan].status == CHAN_OPEN) {
opchan = get_channel(link_id, chan);
@@ -709,7 +771,8 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
XLINK_PACKET_ALIGNMENT,
XLINK_NORMAL_MEMORY);
} else {
- pr_err("Fatal error: can't allocate memory in line:%d func:%s\n", __LINE__, __func__);
+ pr_err("Fatal error: can't allocate memory in line:%d func:%s\n",
+ __LINE__, __func__);
}
rc = X_LINK_COMMUNICATION_FAIL;
} else {
@@ -754,6 +817,14 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
xlink_dispatcher_event_add(EVENT_RX, event);
//complete regardless of mode/timeout
complete(&opchan->pkt_available);
+ // run callback
+ if (xmux->channels[link_id][chan].status == CHAN_OPEN &&
+ chan_is_non_blocking_read(opchan) &&
+ opchan->ready_callback) {
+ rc = run_callback(opchan, opchan->ready_callback,
+ opchan->ready_calling_pid);
+ break;
+ }
} else {
// failed to allocate buffer
rc = X_LINK_ERROR;
@@ -813,6 +884,13 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
xlink_dispatcher_event_add(EVENT_RX, event);
//complete regardless of mode/timeout
complete(&opchan->pkt_consumed);
+ // run callback
+ if (xmux->channels[link_id][chan].status == CHAN_OPEN &&
+ chan_is_non_blocking_write(opchan) &&
+ opchan->consumed_callback) {
+ rc = run_callback(opchan, opchan->consumed_callback,
+ opchan->consumed_calling_pid);
+ }
release_channel(opchan);
break;
case XLINK_RELEASE_REQ:
@@ -838,47 +916,47 @@ enum xlink_error xlink_multiplexer_rx(struct xlink_event *event)
rc = multiplexer_open_channel(link_id, chan);
if (rc) {
rc = X_LINK_ERROR;
- } else {
- opchan = get_channel(link_id, chan);
- if (!opchan) {
- rc = X_LINK_COMMUNICATION_FAIL;
- } else {
- xmux->channels[link_id][chan].status = CHAN_OPEN_PEER;
- complete(&opchan->opened);
- passthru_event = xlink_create_event(link_id,
- XLINK_OPEN_CHANNEL_RESP,
- event->handle,
- chan,
- 0,
- opchan->chan->timeout);
- if (!passthru_event) {
- rc = X_LINK_ERROR;
- release_channel(opchan);
- break;
- }
- xlink_dispatcher_event_add(EVENT_RX,
- passthru_event);
- }
+ break;
+ }
+ opchan = get_channel(link_id, chan);
+ if (!opchan) {
+ rc = X_LINK_COMMUNICATION_FAIL;
+ break;
+ }
+ xmux->channels[link_id][chan].status = CHAN_OPEN_PEER;
+ complete(&opchan->opened);
+ passthru_event = xlink_create_event(link_id,
+ XLINK_OPEN_CHANNEL_RESP,
+ event->handle,
+ chan,
+ 0,
+ opchan->chan->timeout);
+ if (!passthru_event) {
+ rc = X_LINK_ERROR;
release_channel(opchan);
+ break;
}
+ xlink_dispatcher_event_add(EVENT_RX,
+ passthru_event);
+ release_channel(opchan);
} else {
/* channel already open */
opchan = get_channel(link_id, chan);
if (!opchan) {
rc = X_LINK_COMMUNICATION_FAIL;
- } else {
- passthru_event = xlink_create_event(link_id,
- XLINK_OPEN_CHANNEL_RESP,
- event->handle,
- chan, 0, 0);
- if (!passthru_event) {
- release_channel(opchan);
- rc = X_LINK_ERROR;
- break;
- }
- xlink_dispatcher_event_add(EVENT_RX,
- passthru_event);
+ break;
+ }
+ passthru_event = xlink_create_event(link_id,
+ XLINK_OPEN_CHANNEL_RESP,
+ event->handle,
+ chan, 0, 0);
+ if (!passthru_event) {
+ release_channel(opchan);
+ rc = X_LINK_ERROR;
+ break;
}
+ xlink_dispatcher_event_add(EVENT_RX,
+ passthru_event);
release_channel(opchan);
}
rc = xlink_passthrough(event);
@@ -930,7 +1008,7 @@ enum xlink_error xlink_passthrough(struct xlink_event *event)
#ifdef CONFIG_XLINK_LOCAL_HOST
struct xlink_ipc_context ipc = {0};
phys_addr_t physaddr = 0;
- dma_addr_t vpuaddr = 0;
+ static dma_addr_t vpuaddr;
u32 timeout = 0;
u32 link_id;
u16 chan;
diff --git a/drivers/misc/xlink-core/xlink-platform.c b/drivers/misc/xlink-core/xlink-platform.c
index 56eb8da28a5f..b0076cb3671d 100644
--- a/drivers/misc/xlink-core/xlink-platform.c
+++ b/drivers/misc/xlink-core/xlink-platform.c
@@ -56,6 +56,11 @@ static inline int xlink_ipc_close_channel(u32 sw_device_id,
u32 channel)
{ return -1; }

+static inline int xlink_ipc_register_for_events(u32 sw_device_id,
+ int (*callback)(u32 sw_device_id, u32 event))
+{ return -1; }
+static inline int xlink_ipc_unregister_for_events(u32 sw_device_id)
+{ return -1; }
#endif /* CONFIG_XLINK_LOCAL_HOST */

/*
@@ -95,6 +100,13 @@ static int (*open_chan_fcts[NMB_OF_INTERFACES])(u32, u32) = {

static int (*close_chan_fcts[NMB_OF_INTERFACES])(u32, u32) = {
xlink_ipc_close_channel, NULL, NULL, NULL};
+static int (*register_for_events_fcts[NMB_OF_INTERFACES])(u32,
+ int (*callback)(u32 sw_device_id, u32 event)) = {
+ xlink_ipc_register_for_events,
+ xlink_pcie_register_device_event,
+ NULL, NULL};
+static int (*unregister_for_events_fcts[NMB_OF_INTERFACES])(u32) = {
+ xlink_ipc_unregister_for_events, xlink_pcie_unregister_device_event, NULL, NULL};

/*
* xlink low-level driver interface
@@ -207,6 +219,21 @@ int xlink_platform_close_channel(u32 interface, u32 sw_device_id,
return close_chan_fcts[interface](sw_device_id, channel);
}

+int xlink_platform_register_for_events(u32 interface, u32 sw_device_id,
+ xlink_device_event_cb event_notif_fn)
+{
+ if (interface >= NMB_OF_INTERFACES || !register_for_events_fcts[interface])
+ return -1;
+ return register_for_events_fcts[interface](sw_device_id, event_notif_fn);
+}
+
+int xlink_platform_unregister_for_events(u32 interface, u32 sw_device_id)
+{
+ if (interface >= NMB_OF_INTERFACES || !unregister_for_events_fcts[interface])
+ return -1;
+ return unregister_for_events_fcts[interface](sw_device_id);
+}
+
void *xlink_platform_allocate(struct device *dev, dma_addr_t *handle,
u32 size, u32 alignment,
enum xlink_memory_region region)
diff --git a/include/linux/xlink.h b/include/linux/xlink.h
index b00dbc719530..ac196ff85469 100644
--- a/include/linux/xlink.h
+++ b/include/linux/xlink.h
@@ -70,6 +70,12 @@ enum xlink_error xlink_open_channel(struct xlink_handle *handle,
u16 chan, enum xlink_opmode mode,
u32 data_size, u32 timeout);

+enum xlink_error xlink_data_available_event(struct xlink_handle *handle,
+ u16 chan,
+ xlink_event data_available_event);
+enum xlink_error xlink_data_consumed_event(struct xlink_handle *handle,
+ u16 chan,
+ xlink_event data_consumed_event);
enum xlink_error xlink_close_channel(struct xlink_handle *handle, u16 chan);

enum xlink_error xlink_write_data(struct xlink_handle *handle,
@@ -113,9 +119,14 @@ enum xlink_error xlink_set_device_mode(struct xlink_handle *handle,
enum xlink_error xlink_get_device_mode(struct xlink_handle *handle,
enum xlink_device_power_mode *power_mode);

-enum xlink_error xlink_start_vpu(char *filename); /* depreciated */
+enum xlink_error xlink_register_device_event(struct xlink_handle *handle,
+ u32 *event_list, u32 num_events,
+ xlink_device_event_cb event_notif_fn);
+enum xlink_error xlink_unregister_device_event(struct xlink_handle *handle,
+ u32 *event_list, u32 num_events);
+enum xlink_error xlink_start_vpu(char *filename); /* deprecated */

-enum xlink_error xlink_stop_vpu(void); /* depreciated */
+enum xlink_error xlink_stop_vpu(void); /* deprecated */

/* API functions to be implemented
*
--
2.17.1

2021-02-12 22:37:39

by mark gross

Subject: [PATCH v6 28/34] misc: Intel tsens IA host driver.

From: "C, Udhayakumar" <[email protected]>

Add Intel tsens IA host driver for Intel Edge.AI Computer Vision
platforms.

About Intel Edge.AI Computer Vision platforms:
---------------------------------------------
The Intel Edge.AI Computer Vision platforms are vision processing systems
targeting machine vision applications for connected devices.

They are based on an ARM A53 CPU running Linux and act as a PCIe
endpoint device.

High-level architecture:
------------------------

Remote Host IA CPU         Local Host ARM CPU
------------------         --------------------------
|   Platform     |         |     Thermal Daemon     |
|  Management SW |         |                        |
------------------         --------------------------
|  Intel tsens   |         | intel tsens i2c slave  |
|  i2c client    |         |  and thermal driver    |
------------------         --------------------------
|   XLINK I2C    |         |    XLINK I2C Slave     |
|   controller   | <=====> |      controller        |
------------------  xlink  --------------------------
                    smbus

intel tsens module:
-------------------
The tsens module enables reading of the on-chip sensors present
in the Intel Edge.AI Computer Vision platforms. In the tsens module,
various junction and SoC temperatures are reported through the
thermal and I2C subsystems.

Temperature data reported through the thermal subsystem is used by
various cooling agents such as DVFS and fan control, and to shut
down the system if a critical temperature is reached.

Temperature data reported through the I2C subsystem is used by
platform manageability software running on the IA host.

- Remote Host driver
* Intended for IA CPU
* It is an I2C client driver
* Driver path:
{tree}/drivers/misc/intel_tsens/intel_tsens_host.c

The local host and remote host drivers communicate using the
I2C SMBus protocol.

Acked-by: Mark Gross <[email protected]>
Signed-off-by: C Udhayakumar <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
---
Documentation/hwmon/index.rst | 1 +
Documentation/hwmon/intel_tsens_host.rst | 71 ++++
drivers/misc/intel_tsens/Kconfig | 13 +
drivers/misc/intel_tsens/Makefile | 1 +
drivers/misc/intel_tsens/intel_tsens_host.c | 352 ++++++++++++++++++++
include/linux/intel_tsens_host.h | 34 ++
6 files changed, 472 insertions(+)
create mode 100644 Documentation/hwmon/intel_tsens_host.rst
create mode 100644 drivers/misc/intel_tsens/intel_tsens_host.c
create mode 100644 include/linux/intel_tsens_host.h

diff --git a/Documentation/hwmon/index.rst b/Documentation/hwmon/index.rst
index fc29100bef73..7a9eaddd1ab3 100644
--- a/Documentation/hwmon/index.rst
+++ b/Documentation/hwmon/index.rst
@@ -81,6 +81,7 @@ Hardware Monitoring Kernel Drivers
isl68137
it87
intel_tsens_sensor.rst
+ intel_tsens_host.rst
jc42
k10temp
k8temp
diff --git a/Documentation/hwmon/intel_tsens_host.rst b/Documentation/hwmon/intel_tsens_host.rst
new file mode 100644
index 000000000000..012c593f969f
--- /dev/null
+++ b/Documentation/hwmon/intel_tsens_host.rst
@@ -0,0 +1,71 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==========================
+Kernel driver: intel_tsens
+==========================
+
+Supported chips:
+ * Intel Edge.AI Computer Vision platforms: Keem Bay
+
+ Slave address: The address is assigned by the hddl device management
+ driver.
+
+ Datasheet:
+ Documentation/hwmon/intel_tsens_sensor.rst#Remote Thermal Interface
+
+Authors:
+ - Thalaiappan, Rathina <[email protected]>
+ - Udhayakumar C <[email protected]>
+
+Description
+===========
+The intel_tsens driver is a temperature sensor driver that reports the
+junction temperature from different heating points inside the SoC. The
+driver receives the temperature over an SMBus connection. The reported
+temperature is in millidegrees Celsius.
+
+In Keem Bay, the four thermal junction temperature points are
+Media Subsystem (mss), NN Subsystem (nce), Compute Subsystem (cse) and
+SoC (maximum of mss, nce and cse).
+
+Example
+=======
+Temperature reported by a Keem Bay on the Linux Thermal sysfs interface.
+
+# cat /sys/class/thermal/thermal_zone*/type
+mss
+css
+nce
+soc
+
+# cat /sys/class/thermal/thermal_zone*/temp
+0
+29210
+28478
+29210
+
++-----------+-------------+
+| offset | Sensor |
++-----------+-------------+
+| 0 | mss |
++-----------+-------------+
+| 1 | css |
++-----------+-------------+
+| 2 | nce |
++-----------+-------------+
+| 3 | soc |
++-----------+-------------+
+
+#sudo i2cdetect -l
+i2c-8 smbus SMBus I801 adapter at efa0 SMBus adapter
+
+To read mss junction temperature:
+#i2cget -y 8 <slave addr> 0x0 w
+
+To read cse junction temperature:
+#i2cget -y 8 <slave addr> 0x1 w
+
+To read nce junction temperature:
+#i2cget -y 8 <slave addr> 0x2 w
+
+To read overall SoC temperature:
+#i2cget -y 8 <slave addr> 0x3 w
diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
index bfb8fe1997f4..8b263fdd80c3 100644
--- a/drivers/misc/intel_tsens/Kconfig
+++ b/drivers/misc/intel_tsens/Kconfig
@@ -13,3 +13,16 @@ config INTEL_TSENS_LOCAL_HOST
management controller.
Say Y if using a processor that includes the Intel VPU such as
Keem Bay. If unsure, say N.
+
+config INTEL_TSENS_IA_HOST
+ tristate "Temperature sensor driver for intel tsens remote host"
+ depends on I2C && THERMAL
+ depends on I2C_SMBUS
+ help
+ This option enables the tsens I2C and thermal remote (IA) host driver.
+
+ This driver reads thermal data via I2C SMBus and registers
+ itself with the thermal framework, so that the data can be used
+ by a thermal daemon on the remote IA host.
+ Say Y if using an IA host connected to a processor that includes
+ the Intel VPU such as Keem Bay. If unsure, say N.
diff --git a/drivers/misc/intel_tsens/Makefile b/drivers/misc/intel_tsens/Makefile
index 93dee8b9f481..250dc484fb49 100644
--- a/drivers/misc/intel_tsens/Makefile
+++ b/drivers/misc/intel_tsens/Makefile
@@ -5,3 +5,4 @@
#

obj-$(CONFIG_INTEL_TSENS_LOCAL_HOST) += intel_tsens_thermal.o
+obj-$(CONFIG_INTEL_TSENS_IA_HOST) += intel_tsens_host.o
diff --git a/drivers/misc/intel_tsens/intel_tsens_host.c b/drivers/misc/intel_tsens/intel_tsens_host.c
new file mode 100644
index 000000000000..85de4ab13765
--- /dev/null
+++ b/drivers/misc/intel_tsens/intel_tsens_host.c
@@ -0,0 +1,352 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * Intel tsens I2C thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <asm/page.h>
+#include <linux/cdev.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/hddl_device.h>
+#include <linux/i2c.h>
+#include <linux/ioctl.h>
+#include <linux/intel_tsens_host.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/printk.h>
+#include <linux/platform_device.h>
+#include <linux/sched.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
+#include <linux/slab.h>
+#include <linux/thermal.h>
+#include <linux/time.h>
+#include <linux/uaccess.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+#include <uapi/linux/stat.h>
+
+#define TSENS_BINDING_NAME "intel_tsens"
+#define TSENS_BYTE_INDEX_SHIFT 0x6
+#define TSENS_READ_BYTE0 (0x0 << TSENS_BYTE_INDEX_SHIFT)
+#define TSENS_READ_BYTE1 (0x1 << TSENS_BYTE_INDEX_SHIFT)
+#define TSENS_READ_BYTE2 (0x2 << TSENS_BYTE_INDEX_SHIFT)
+#define TSENS_READ_BYTE3 (0x3 << TSENS_BYTE_INDEX_SHIFT)
+
+static int tsens_i2c_smbus_read_byte_data(struct i2c_client *i2c, u8 command,
+ u8 *i2c_val)
+{
+ union i2c_smbus_data data;
+ int status;
+
+ status = i2c_smbus_xfer(i2c->adapter, i2c->addr, i2c->flags,
+ I2C_SMBUS_READ, command,
+ I2C_SMBUS_BYTE_DATA, &data);
+ if (!status)
+ *i2c_val = data.byte;
+ return status;
+}
+
+/**
+ * intel_tsens_get_temp - get updated temperature
+ * @zone: Thermal zone device
+ * @temp: updated temperature value.
+ *
+ * The temperature value read from the sensors ranges from -40000 (-40
+ * degrees Celsius) to 126000 (126 degrees Celsius). If reading the
+ * temperature fails, *temp is set to -255 to indicate the failure.
+ */
+static int intel_tsens_get_temp(struct thermal_zone_device *zone,
+ int *temp)
+{
+ struct intel_tsens_host *tsens =
+ (struct intel_tsens_host *)zone->devdata;
+ struct i2c_client *i2c_c;
+ int status, sensor_type;
+ u8 i2c_val;
+ s32 val;
+
+ if (strstr(zone->type, "smb"))
+ i2c_c = tsens->i2c_smbus;
+ else
+ i2c_c = tsens->i2c_xlk;
+
+ *temp = -255;
+ sensor_type = tsens->t_data->sensor_type | TSENS_READ_BYTE0;
+ status = tsens_i2c_smbus_read_byte_data(i2c_c,
+ sensor_type,
+ &i2c_val);
+ if (status < 0)
+ return status;
+ val = i2c_val;
+ sensor_type = tsens->t_data->sensor_type | TSENS_READ_BYTE1;
+ status = tsens_i2c_smbus_read_byte_data(i2c_c,
+ sensor_type,
+ &i2c_val);
+ if (status < 0)
+ return status;
+ val |= (i2c_val << 8);
+ sensor_type = tsens->t_data->sensor_type | TSENS_READ_BYTE2;
+ status = tsens_i2c_smbus_read_byte_data(i2c_c,
+ sensor_type,
+ &i2c_val);
+ if (status < 0)
+ return status;
+ val |= (i2c_val << 16);
+ sensor_type = tsens->t_data->sensor_type | TSENS_READ_BYTE3;
+ status = tsens_i2c_smbus_read_byte_data(i2c_c,
+ sensor_type,
+ &i2c_val);
+ if (status < 0)
+ return status;
+ val |= (i2c_val << 24);
+ *temp = val;
+ return 0;
+}
+
+static int intel_tsens_thermal_get_trip_type(struct thermal_zone_device *zone,
+ int trip,
+ enum thermal_trip_type *type)
+{
+ struct intel_tsens_host *tsens =
+ (struct intel_tsens_host *)zone->devdata;
+
+ *type = tsens->trip_info[trip]->trip_type;
+ return 0;
+}
+
+static int intel_tsens_thermal_get_trip_temp(struct thermal_zone_device *zone,
+ int trip, int *temp)
+{
+ struct intel_tsens_host *tsens =
+ (struct intel_tsens_host *)zone->devdata;
+
+ *temp = tsens->trip_info[trip]->temp;
+ return 0;
+}
+
+static int intel_tsens_thermal_notify(struct thermal_zone_device *tz,
+ int trip, enum thermal_trip_type type)
+{
+ int ret = 0;
+
+ switch (type) {
+ case THERMAL_TRIP_ACTIVE:
+ dev_warn(&tz->device,
+ "zone %s reached active temperature %d\n",
+ tz->type, tz->temperature);
+ ret = 1;
+ break;
+ case THERMAL_TRIP_CRITICAL:
+ dev_warn(&tz->device,
+ "zone %s reached critical temperature %d\n",
+ tz->type, tz->temperature);
+ ret = 1;
+ break;
+ default:
+ break;
+ }
+ return ret;
+}
+
+static int intel_tsens_bind(struct thermal_zone_device *tz,
+ struct thermal_cooling_device *cdev)
+{
+ int ret;
+
+ /*
+ * Check here thermal device zone name and cdev name to match,
+ * then call the bind device
+ */
+ if (strncmp(TSENS_BINDING_NAME, cdev->type,
+ strlen(TSENS_BINDING_NAME)) == 0) {
+ ret = thermal_zone_bind_cooling_device
+ (tz,
+ THERMAL_TRIP_ACTIVE,
+ cdev,
+ THERMAL_NO_LIMIT,
+ THERMAL_NO_LIMIT,
+ THERMAL_WEIGHT_DEFAULT);
+ if (ret) {
+ dev_err(&tz->device,
+ "binding zone %s with cdev %s failed:%d\n",
+ tz->type, cdev->type, ret);
+ return ret;
+ }
+ }
+ return 0;
+}
+
+static int intel_tsens_unbind(struct thermal_zone_device *tz,
+ struct thermal_cooling_device *cdev)
+{
+ int ret;
+
+ if (strncmp(TSENS_BINDING_NAME, cdev->type,
+ strlen(TSENS_BINDING_NAME)) == 0) {
+ ret = thermal_zone_unbind_cooling_device(tz, 0, cdev);
+ if (ret) {
+ dev_err(&tz->device,
+ "unbinding zone %s with cdev %s failed:%d\n",
+ tz->type, cdev->type, ret);
+ return ret;
+ }
+ }
+ return 0;
+}
+
+static struct thermal_zone_device_ops tsens_thermal_ops = {
+ .bind = intel_tsens_bind,
+ .unbind = intel_tsens_unbind,
+ .get_temp = intel_tsens_get_temp,
+ .get_trip_type = intel_tsens_thermal_get_trip_type,
+ .get_trip_temp = intel_tsens_thermal_get_trip_temp,
+ .notify = intel_tsens_thermal_notify,
+};
+
+static int intel_tsens_add_tz(struct intel_tsens_host *tsens,
+ struct thermal_zone_device **tz,
+ const char *name,
+ struct device *dev,
+ int i)
+{
+ int ret;
+
+ *tz = thermal_zone_device_register(name,
+ tsens->t_data->n_trips,
+ 0, tsens,
+ &tsens_thermal_ops,
+ NULL,
+ tsens->t_data->passive_delay,
+ tsens->t_data->polling_delay);
+ if (IS_ERR(*tz)) {
+ ret = PTR_ERR(*tz);
+ dev_err(dev,
+ "failed to register thermal zone device %s\n",
+ tsens->t_data->name);
+ return ret;
+ }
+ return 0;
+}
+
+static void intel_tsens_remove_tz(struct intel_hddl_clients *d)
+{
+ int i;
+
+ for (i = 0; i < d->nsens; i++) {
+ struct intel_tsens_host *tsens = d->tsens[i];
+
+ if (tsens->tz_smbus) {
+ thermal_zone_device_unregister(tsens->tz_smbus);
+ tsens->tz_smbus = NULL;
+ }
+ if (tsens->tz_xlk) {
+ thermal_zone_device_unregister(tsens->tz_xlk);
+ tsens->tz_xlk = NULL;
+ }
+ }
+}
+
+static int intel_tsens_tj_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct intel_hddl_clients *d = client->dev.platform_data;
+ u32 device_id = tsens_get_device_id(d);
+ char *i2c_str;
+ int ret, i;
+
+ if (strstr(client->adapter->name, "SMBus I801")) {
+ i2c_str = "smb";
+ for (i = 0; i < d->nsens; i++) {
+ struct intel_tsens_host *tsens = d->tsens[i];
+
+ tsens->sensor_name_smbus =
+ kasprintf(GFP_KERNEL,
+ "%s_%s-%x",
+ tsens->t_data->name,
+ i2c_str, device_id);
+ tsens->i2c_smbus = client;
+ ret = intel_tsens_add_tz(tsens,
+ &tsens->tz_smbus,
+ tsens->sensor_name_smbus,
+ &client->dev,
+ i);
+ if (ret) {
+ dev_err(&client->dev,
+ "thermal zone configuration failed\n");
+ intel_tsens_remove_tz(d);
+ return ret;
+ }
+ }
+ } else {
+ i2c_str = "xlk";
+ for (i = 0; i < d->nsens; i++) {
+ struct intel_tsens_host *tsens = d->tsens[i];
+
+ tsens->sensor_name_xlk =
+ kasprintf(GFP_KERNEL,
+ "%s_%s-%x",
+ tsens->t_data->name,
+ i2c_str, device_id);
+ tsens->i2c_xlk = client;
+ ret = intel_tsens_add_tz(tsens,
+ &tsens->tz_xlk,
+ tsens->sensor_name_xlk,
+ &client->dev,
+ i);
+ if (ret) {
+ dev_err(&client->dev,
+ "thermal zone configuration failed\n");
+ intel_tsens_remove_tz(d);
+ return ret;
+ }
+ }
+ }
+
+ i2c_set_clientdata(client, d);
+
+ return 0;
+}
+
+static int intel_tsens_tj_exit(struct i2c_client *client)
+{
+ struct intel_hddl_clients *d = client->dev.platform_data;
+
+ if (!d) {
+ dev_err(&client->dev,
+ "Unable to get private data\n");
+ return -EINVAL;
+ }
+ intel_tsens_remove_tz(d);
+ return 0;
+}
+
+static const struct i2c_device_id i2c_intel_tsens_id[] = {
+ { "intel_tsens", (kernel_ulong_t)NULL },
+ {}
+};
+MODULE_DEVICE_TABLE(i2c, i2c_intel_tsens_id);
+
+static struct i2c_driver i2c_intel_tsens_driver = {
+ .driver = {
+ .name = "intel_tsens",
+ },
+ .probe = intel_tsens_tj_probe,
+ .remove = intel_tsens_tj_exit,
+ .id_table = i2c_intel_tsens_id,
+};
+module_i2c_driver(i2c_intel_tsens_driver);
+
+MODULE_DESCRIPTION("Intel tsens host Device driver");
+MODULE_AUTHOR("Sandeep Singh <[email protected]>");
+MODULE_AUTHOR("Vaidya, Mahesh R <[email protected]>");
+MODULE_AUTHOR("Udhayakumar C <[email protected]>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/intel_tsens_host.h b/include/linux/intel_tsens_host.h
new file mode 100644
index 000000000000..4b9b2d6a5cfc
--- /dev/null
+++ b/include/linux/intel_tsens_host.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *
+ * Intel tsens host I2C thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#ifndef _LINUX_INTEL_TSENS_HOST_DEVICE_H
+#define _LINUX_INTEL_TSENS_HOST_DEVICE_H
+
+struct intel_tsens_host_trip_info {
+ enum thermal_trip_type trip_type;
+ int temp;
+} __packed __aligned(4);
+
+struct intel_tsens_host {
+ const char *sensor_name_smbus;
+ const char *sensor_name_xlk;
+ struct intel_tsens_data *t_data;
+ struct intel_tsens_host_trip_info **trip_info;
+ u32 device_id;
+ struct i2c_client *i2c_xlk;
+ struct i2c_client *i2c_smbus;
+ struct thermal_zone_device *tz_xlk;
+ struct thermal_zone_device *tz_smbus;
+};
+
+struct intel_tsens_host_plat_data {
+ int nsens;
+ struct intel_tsens_host **tsens;
+};
+#endif /*_LINUX_INTEL_TSENS_HOST_DEVICE_H*/
--
2.17.1

2021-02-12 22:38:10

by mark gross

[permalink] [raw]
Subject: [PATCH v6 18/34] xlink-ipc: Add xlink ipc driver

From: Seamus Kelly <[email protected]>

Add the xLink IPC driver, which interfaces the xLink Core driver with the Keem
Bay VPU IPC driver, thus enabling xLink to control and communicate with
the VPU IP present on the Intel Keem Bay SoC.

Specifically the driver enables xLink Core to:

* Boot / Reset the VPU IP
* Register to VPU IP event notifications (device connected, device
disconnected, WDT event)
* Query the status of the VPU IP (OFF, BUSY, READY, ERROR, RECOVERY)
* Exchange data with the VPU IP, using the Keem Bay IPC mechanism
- Including the ability to send 'volatile' data (i.e., small amounts of
data, up to 128 bytes, not allocated in the CPU/VPU shared memory
region)

Cc: Jonathan Corbet <[email protected]>
Cc: Derek Kiernan <[email protected]>
Cc: Dragan Cvetic <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: [email protected]
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
Signed-off-by: Seamus Kelly <[email protected]>
---
Documentation/vpu/index.rst | 1 +
Documentation/vpu/xlink-ipc.rst | 51 ++
MAINTAINERS | 6 +
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/xlink-ipc/Kconfig | 7 +
drivers/misc/xlink-ipc/Makefile | 4 +
drivers/misc/xlink-ipc/xlink-ipc.c | 878 +++++++++++++++++++++++++++++
include/linux/xlink-ipc.h | 48 ++
9 files changed, 997 insertions(+)
create mode 100644 Documentation/vpu/xlink-ipc.rst
create mode 100644 drivers/misc/xlink-ipc/Kconfig
create mode 100644 drivers/misc/xlink-ipc/Makefile
create mode 100644 drivers/misc/xlink-ipc/xlink-ipc.c
create mode 100644 include/linux/xlink-ipc.h

diff --git a/Documentation/vpu/index.rst b/Documentation/vpu/index.rst
index 661cc700ee45..49c78bb65b83 100644
--- a/Documentation/vpu/index.rst
+++ b/Documentation/vpu/index.rst
@@ -15,3 +15,4 @@ This documentation contains information for the Intel VPU stack.

vpu-stack-overview
xlink-pcie
+ xlink-ipc
diff --git a/Documentation/vpu/xlink-ipc.rst b/Documentation/vpu/xlink-ipc.rst
new file mode 100644
index 000000000000..97ee62b10e93
--- /dev/null
+++ b/Documentation/vpu/xlink-ipc.rst
@@ -0,0 +1,51 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===============================
+Kernel driver: xLink IPC driver
+===============================
+
+Supported chips:
+
+* | Intel Edge.AI Computer Vision platforms: Keem Bay
+ | Suffix: Bay
+ | Datasheet: (not yet publicly available)
+
+Introduction
+============
+
+The xLink IPC driver interfaces the xLink Core driver with the Keem Bay VPU IPC
+driver, thus enabling xLink to control and communicate with the VPU IP present
+on the Intel Keem Bay SoC.
+
+Specifically the driver enables xLink Core to:
+
+* Boot / Reset the VPU IP
+* Register to VPU IP event notifications (device connected, device disconnected,
+ WDT event)
+* Query the status of the VPU IP (OFF, BUSY, READY, ERROR, RECOVERY)
+* Exchange data with the VPU IP, using the Keem Bay IPC mechanism
+
+ * Including the ability to send 'volatile' data (i.e. small amounts of data,
+ up to 128 bytes, not allocated in the CPU/VPU shared memory region)
+
+Sending / Receiving 'volatile' data
+===================================
+
+Data to be exchanged with Keem Bay IPC needs to be allocated in the portion of
+DDR shared between the CPU and VPU.
+
+This can be impractical for small amounts of data that user code can allocate
+on the stack.
+
+To reduce the burden on user code, xLink Core provides special send / receive
+functions to send up to 128 bytes of 'volatile data', i.e., data that is not
+allocated in the shared memory and that might also disappear after the xLink
+API is called (e.g., because allocated on the stack).
+
+The xLink IPC driver implements support for transferring such 'volatile data'
+to the VPU using Keem Bay IPC. To this end, the driver reserves some memory in
+the shared memory region.
+
+When volatile data is to be sent, xLink IPC allocates a buffer from the
+reserved memory region and copies the volatile data to the buffer. The buffer
+is then transferred to the VPU using Keem Bay IPC.
diff --git a/MAINTAINERS b/MAINTAINERS
index b42a432dc667..aff4d8dd53a4 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1961,6 +1961,12 @@ F: Documentation/devicetree/bindings/arm/intel,keembay.yaml
F: arch/arm64/boot/dts/intel/keembay-evm.dts
F: arch/arm64/boot/dts/intel/keembay-soc.dtsi

+ARM/INTEL XLINK IPC SUPPORT
+M: Seamus Kelly <[email protected]>
+M: Mark Gross <[email protected]>
+S: Supported
+F: drivers/misc/xlink-ipc/
+
ARM/INTEL KEEM BAY XLINK PCIE SUPPORT
M: Srikanth Thokala <[email protected]>
M: Mark Gross <[email protected]>
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index dfb98e444c6e..1f81ea915b95 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -482,4 +482,5 @@ source "drivers/misc/cardreader/Kconfig"
source "drivers/misc/habanalabs/Kconfig"
source "drivers/misc/uacce/Kconfig"
source "drivers/misc/xlink-pcie/Kconfig"
+source "drivers/misc/xlink-ipc/Kconfig"
endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index d17621fc43d5..b51495a2f1e0 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -58,3 +58,4 @@ obj-$(CONFIG_UACCE) += uacce/
obj-$(CONFIG_XILINX_SDFEC) += xilinx_sdfec.o
obj-$(CONFIG_HISI_HIKEY_USB) += hisi_hikey_usb.o
obj-y += xlink-pcie/
+obj-$(CONFIG_XLINK_IPC) += xlink-ipc/
diff --git a/drivers/misc/xlink-ipc/Kconfig b/drivers/misc/xlink-ipc/Kconfig
new file mode 100644
index 000000000000..6e5374c7d85c
--- /dev/null
+++ b/drivers/misc/xlink-ipc/Kconfig
@@ -0,0 +1,7 @@
+config XLINK_IPC
+ tristate "Support for XLINK IPC"
+ depends on KEEMBAY_VPU_IPC
+ help
+ XLINK IPC enables communication with and control of the VPU over the
+ Keem Bay IPC sub-system.
+
+ Select M if you have an Intel SoC with a Vision Processing Unit (VPU).
diff --git a/drivers/misc/xlink-ipc/Makefile b/drivers/misc/xlink-ipc/Makefile
new file mode 100644
index 000000000000..f92c95525e23
--- /dev/null
+++ b/drivers/misc/xlink-ipc/Makefile
@@ -0,0 +1,4 @@
+#
+# Makefile for xlink IPC Linux driver
+#
+obj-$(CONFIG_XLINK_IPC) += xlink-ipc.o
diff --git a/drivers/misc/xlink-ipc/xlink-ipc.c b/drivers/misc/xlink-ipc/xlink-ipc.c
new file mode 100644
index 000000000000..e98a5a4b66a2
--- /dev/null
+++ b/drivers/misc/xlink-ipc/xlink-ipc.c
@@ -0,0 +1,878 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Linux Kernel Platform API
+ *
+ * Copyright (C) 2018-2020 Intel Corporation
+ *
+ */
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include <linux/soc/intel/keembay-vpu-ipc.h>
+
+#include <linux/xlink-ipc.h>
+
+#define XLINK_IPC_MAX_DEVICE_NAME_SIZE 12
+
+/* used to extract fields from and create xlink sw device id */
+#define SW_DEVICE_ID_INTERFACE_SHIFT 24U
+#define SW_DEVICE_ID_INTERFACE_MASK 0x7
+#define SW_DEVICE_ID_INTERFACE_SMASK \
+ (SW_DEVICE_ID_INTERFACE_MASK << SW_DEVICE_ID_INTERFACE_SHIFT)
+#define SW_DEVICE_ID_INTERFACE_IPC_VALUE 0x0
+#define SW_DEVICE_ID_INTERFACE_IPC_SVALUE \
+ (SW_DEVICE_ID_INTERFACE_IPC_VALUE << SW_DEVICE_ID_INTERFACE_SHIFT)
+#define SW_DEVICE_ID_VPU_ID_SHIFT 1U
+#define SW_DEVICE_ID_VPU_ID_MASK 0x7
+#define SW_DEVICE_ID_VPU_ID_SMASK \
+ (SW_DEVICE_ID_VPU_ID_MASK << SW_DEVICE_ID_VPU_ID_SHIFT)
+#define GET_VPU_ID_FROM_SW_DEVICE_ID(id) \
+ (((id) >> SW_DEVICE_ID_VPU_ID_SHIFT) & SW_DEVICE_ID_VPU_ID_MASK)
+#define GET_SW_DEVICE_ID_FROM_VPU_ID(id) \
+ ((((id) << SW_DEVICE_ID_VPU_ID_SHIFT) & SW_DEVICE_ID_VPU_ID_SMASK) \
+ | SW_DEVICE_ID_INTERFACE_IPC_SVALUE)
+
+/* the maximum buffer size for volatile xlink operations */
+#define XLINK_MAX_BUF_SIZE 128U
+
+/* indices used to retrieve reserved memory from the dt */
+#define LOCAL_XLINK_IPC_BUFFER_IDX 0
+#define REMOTE_XLINK_IPC_BUFFER_IDX 1
+
+/* index used to retrieve the vpu ipc device phandle from the dt */
+#define VPU_IPC_DEVICE_PHANDLE_IDX 1
+
+/* the timeout (in ms) used to wait for the vpu ready message */
+#define XLINK_VPU_WAIT_FOR_READY_MS 3000
+
+/* xlink buffer memory region */
+struct xlink_buf_mem {
+ struct device *dev; /* child device managing the memory region */
+ void *vaddr; /* the virtual address of the memory region */
+ dma_addr_t dma_handle; /* the physical address of the memory region */
+ size_t size; /* the size of the memory region */
+};
+
+/* xlink buffer pool */
+struct xlink_buf_pool {
+ void *buf; /* pointer to the start of pool area */
+ size_t buf_cnt; /* pool size (i.e., number of buffers) */
+ size_t idx; /* current index */
+ spinlock_t lock; /* the lock protecting this pool */
+};
+
+/* xlink ipc device */
+struct xlink_ipc_dev {
+ struct platform_device *pdev; /* pointer to platform device */
+ u32 vpu_id; /* The VPU ID defined in the device tree */
+ u32 sw_device_id; /* the sw device id */
+ const char *device_name; /* the vpu device name */
+ struct xlink_buf_mem local_xlink_mem; /* tx buffer memory region */
+ struct xlink_buf_mem remote_xlink_mem; /* rx buffer memory region */
+ struct xlink_buf_pool xlink_buf_pool; /* tx buffer pool */
+ struct device *vpu_dev; /* pointer to vpu ipc device */
+ int (*callback)(u32 sw_device_id, u32 event);
+};
+
+/* Events that can be notified via callback, when registered. */
+enum xlink_vpu_event {
+ XLINK_VPU_NOTIFY_DISCONNECT = 0,
+ XLINK_VPU_NOTIFY_CONNECT,
+ XLINK_VPU_NOTIFY_MSS_WDT,
+ XLINK_VPU_NOTIFY_NCE_WDT,
+ NUM_EVENT_TYPE
+};
+
+#define VPU_ID 0
+#define VPU_DEVICE_NAME "vpu-0"
+#define VPU_SW_DEVICE_ID 0
+static struct xlink_ipc_dev *xlink_dev;
+
+/*
+ * Functions related to reserved-memory sub-devices.
+ */
+
+/*
+ * xlink_reserved_memory_remove() - Removes the reserved memory sub-devices.
+ *
+ * @xlink_dev: [in] The xlink ipc device with reserved memory sub-devices.
+ */
+static void xlink_reserved_memory_remove(struct xlink_ipc_dev *xlink_dev)
+{
+ device_unregister(xlink_dev->local_xlink_mem.dev);
+ device_unregister(xlink_dev->remote_xlink_mem.dev);
+}
+
+/*
+ * xlink_reserved_mem_release() - Reserved memory release callback function.
+ *
+ * @dev: [in] The reserved memory sub-device.
+ */
+static void xlink_reserved_mem_release(struct device *dev)
+{
+ of_reserved_mem_device_release(dev);
+}
+
+/*
+ * get_xlink_reserved_mem_size() - Gets the size of the reserved memory region.
+ *
+ * @dev: [in] The device the reserved memory region is allocated to.
+ * @idx: [in] The reserved memory region's index in the phandle table.
+ *
+ * Return: The reserved memory size, 0 on failure.
+ */
+static resource_size_t get_xlink_reserved_mem_size(struct device *dev, int idx)
+{
+ struct device_node *np;
+ struct resource mem;
+ int rc;
+
+ np = of_parse_phandle(dev->of_node, "memory-region", idx);
+ if (!np) {
+ dev_err(dev, "Couldn't find memory-region %d\n", idx);
+ return 0;
+ }
+
+ rc = of_address_to_resource(np, 0, &mem);
+ if (rc) {
+ dev_err(dev, "Couldn't map address to resource %d\n", idx);
+ return 0;
+ }
+ return resource_size(&mem);
+}
+
+/*
+ * init_xlink_reserved_mem_dev() - Initialize a reserved memory sub-device.
+ *
+ * @dev: [in] The parent device of the reserved memory sub-device.
+ * @name: [in] The name to assign to the memory region.
+ * @idx: [in] The reserved memory region index in the phandle table.
+ *
+ * Return: The initialized sub-device, NULL on failure.
+ */
+static struct device *init_xlink_reserved_mem_dev(struct device *dev,
+ const char *name, int idx)
+{
+ struct device *child;
+ int rc;
+
+ child = devm_kzalloc(dev, sizeof(*child), GFP_KERNEL);
+ if (!child)
+ return NULL;
+
+ device_initialize(child);
+ dev_set_name(child, "%s:%s", dev_name(dev), name);
+ dev_dbg(dev, "dev_name %s, name %s\n", dev_name(dev), name);
+ child->parent = dev;
+ child->dma_mask = dev->dma_mask;
+ child->coherent_dma_mask = dev->coherent_dma_mask;
+ /* set up dma configuration using information from parent's dt node */
+ rc = of_dma_configure(child, dev->of_node, true);
+ if (rc)
+ return NULL;
+ child->release = xlink_reserved_mem_release;
+
+ rc = device_add(child);
+ if (rc)
+ goto err;
+ rc = of_reserved_mem_device_init_by_idx(child, dev->of_node, idx);
+ if (rc) {
+ dev_err(dev, "Couldn't get reserved memory with idx = %d, %d\n",
+ idx, rc);
+ device_del(child);
+ goto err;
+ }
+ return child;
+
+err:
+ put_device(child);
+ return NULL;
+}
+
+/*
+ * xlink_reserved_memory_init() - Initialize reserved memory for the device.
+ *
+ * @xlink_dev: [in] The xlink ipc device the reserved memory is allocated to.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int xlink_reserved_memory_init(struct xlink_ipc_dev *xlink_dev)
+{
+ struct device *dev = &xlink_dev->pdev->dev;
+ struct xlink_buf_mem *lxm = &xlink_dev->local_xlink_mem;
+ struct xlink_buf_mem *rxm = &xlink_dev->remote_xlink_mem;
+
+ lxm->dev = init_xlink_reserved_mem_dev(dev, "xlink_local_reserved",
+ LOCAL_XLINK_IPC_BUFFER_IDX);
+ if (!lxm->dev)
+ return -ENOMEM;
+
+ lxm->size = get_xlink_reserved_mem_size(dev, LOCAL_XLINK_IPC_BUFFER_IDX);
+
+ rxm->dev = init_xlink_reserved_mem_dev(dev, "xlink_remote_reserved",
+ REMOTE_XLINK_IPC_BUFFER_IDX);
+ if (!rxm->dev) {
+ device_unregister(xlink_dev->local_xlink_mem.dev);
+ return -ENOMEM;
+ }
+
+ rxm->size = get_xlink_reserved_mem_size(dev, REMOTE_XLINK_IPC_BUFFER_IDX);
+
+ return 0;
+}
+
+/*
+ * xlink_reserved_memory_alloc() - Allocate reserved memory for the device.
+ *
+ * @xlink_dev: [in] The xlink ipc device.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int xlink_reserved_memory_alloc(struct xlink_ipc_dev *xlink_dev)
+{
+ struct xlink_buf_mem *lxm = &xlink_dev->local_xlink_mem;
+ struct xlink_buf_mem *rxm = &xlink_dev->remote_xlink_mem;
+
+ lxm->vaddr = dmam_alloc_coherent(xlink_dev->local_xlink_mem.dev,
+ xlink_dev->local_xlink_mem.size,
+ &xlink_dev->local_xlink_mem.dma_handle,
+ GFP_KERNEL);
+ if (!lxm->vaddr) {
+ dev_err(&xlink_dev->pdev->dev,
+ "Failed to allocate from local reserved memory.\n");
+ return -ENOMEM;
+ }
+ rxm->vaddr = dmam_alloc_coherent(xlink_dev->remote_xlink_mem.dev,
+ xlink_dev->remote_xlink_mem.size,
+ &xlink_dev->remote_xlink_mem.dma_handle,
+ GFP_KERNEL);
+ if (!rxm->vaddr) {
+ dev_err(&xlink_dev->pdev->dev,
+ "Failed to allocate from remote reserved memory.\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+/*
+ * init_xlink_buf_pool() - Initialize the device's tx buffer pool.
+ *
+ * @xlink_dev: [in] The xlink ipc device.
+ *
+ * Return: 0 on success.
+ */
+static int init_xlink_buf_pool(struct xlink_ipc_dev *xlink_dev)
+{
+ struct xlink_buf_mem *mem = &xlink_dev->local_xlink_mem;
+ struct xlink_buf_pool *xbufpool = &xlink_dev->xlink_buf_pool;
+
+ memset(mem->vaddr, 0, mem->size);
+ xbufpool->buf = mem->vaddr;
+ xbufpool->buf_cnt = mem->size / XLINK_MAX_BUF_SIZE;
+ xbufpool->idx = 0;
+ dev_info(&xlink_dev->pdev->dev, "xlink Buffer Pool size: %zX\n",
+ xbufpool->buf_cnt);
+ spin_lock_init(&xbufpool->lock);
+
+ return 0;
+}
+
+/*
+ * xlink_phys_to_virt() - Convert an xlink physical address to a virtual one.
+ *
+ * @xlink_mem: [in] The memory region where the physical address is located.
+ * @paddr: [in] The physical address to convert to a virtual one.
+ *
+ * Return: The corresponding virtual address, or NULL if the
+ * physical address is not in the expected memory
+ * range.
+ */
+static void *xlink_phys_to_virt(const struct xlink_buf_mem *xlink_mem,
+ u32 paddr)
+{
+ if (unlikely(paddr < xlink_mem->dma_handle) ||
+ paddr >= (xlink_mem->dma_handle + xlink_mem->size))
+ return NULL;
+
+ return xlink_mem->vaddr + (paddr - xlink_mem->dma_handle);
+}
+
+/*
+ * xlink_virt_to_phys() - Convert an xlink virtual address to a physical one.
+ *
+ * @xlink_mem: [in] The memory region where the physical address is located.
+ * @vaddr: [in] The virtual address to convert to a physical one.
+ * @paddr: [out] Where to store the computed physical address.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int xlink_virt_to_phys(struct xlink_buf_mem *xlink_mem, void *vaddr,
+ u32 *paddr)
+{
+ if (unlikely((xlink_mem->dma_handle + xlink_mem->size) > 0xFFFFFFFF))
+ return -EINVAL;
+ if (unlikely(vaddr < xlink_mem->vaddr ||
+ vaddr >= (xlink_mem->vaddr + xlink_mem->size)))
+ return -EINVAL;
+ *paddr = xlink_mem->dma_handle + (vaddr - xlink_mem->vaddr);
+
+ return 0;
+}
+
+/*
+ * get_next_xlink_buf() - Get next xlink buffer from an xlink device's pool.
+ *
+ * @xlink_dev: [in] The xlink ipc device to get a buffer from.
+ * @buf: [out] Where to store the reference to the next buffer.
+ * @size: [in] The size of the buffer to get.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int get_next_xlink_buf(struct xlink_ipc_dev *xlink_dev, void **buf,
+ int size)
+{
+ struct xlink_buf_pool *pool;
+ unsigned long flags;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ if (size > XLINK_MAX_BUF_SIZE)
+ return -EINVAL;
+
+ pool = &xlink_dev->xlink_buf_pool;
+
+ spin_lock_irqsave(&pool->lock, flags);
+ if (pool->idx == pool->buf_cnt) {
+ /* reached end of buffers - wrap around */
+ pool->idx = 0;
+ }
+ *buf = pool->buf + (pool->idx * XLINK_MAX_BUF_SIZE);
+ pool->idx++;
+ spin_unlock_irqrestore(&pool->lock, flags);
+ return 0;
+}
+
+/*
+ * Functions related to the vpu ipc device reference.
+ */
+
+/*
+ * vpu_ipc_device_put() - Release the vpu ipc device held by the xlink device.
+ *
+ * @xlink_dev: [in] The xlink ipc device.
+ */
+static void vpu_ipc_device_put(struct xlink_ipc_dev *xlink_dev)
+{
+ put_device(xlink_dev->vpu_dev);
+}
+
+/*
+ * vpu_ipc_device_get() - Get the vpu ipc device reference for the xlink device.
+ *
+ * @xlink_dev: [in] The xlink ipc device.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+static int vpu_ipc_device_get(struct xlink_ipc_dev *xlink_dev)
+{
+ struct device *dev = &xlink_dev->pdev->dev;
+ struct platform_device *pdev;
+ struct device_node *np;
+
+ np = of_parse_phandle(dev->of_node, "intel,keembay-vpu-ipc", 0);
+ if (!np)
+ return -ENODEV;
+
+ pdev = of_find_device_by_node(np);
+ if (!pdev) {
+ dev_info(dev, "IPC device not probed\n");
+ of_node_put(np);
+ return -EPROBE_DEFER;
+ }
+
+ xlink_dev->vpu_dev = get_device(&pdev->dev);
+ of_node_put(np);
+
+ dev_info(dev, "Using IPC device: %s\n", dev_name(xlink_dev->vpu_dev));
+ return 0;
+}
+
+/*
+ * xlink platform api - ipc interface functions
+ */
+
+/*
+ * xlink_ipc_connect() - platform connect interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to connect to.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_connect(u32 sw_device_id)
+{
+ if (!xlink_dev)
+ return -ENODEV;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xlink_ipc_connect);
+
+/*
+ * xlink_ipc_write() - platform write interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to write to.
+ * @data: [in] The data buffer to write.
+ * @size: [in-out] The amount of data to write/written.
+ * @timeout: [in] The time (in ms) to wait before timing out.
+ * @context: [in] The ipc operation context.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_write(u32 sw_device_id, void *data, size_t * const size,
+ u32 timeout, void *context)
+{
+ struct xlink_ipc_context *ctx = context;
+ void *vaddr = NULL;
+ u32 paddr;
+ int rc;
+
+ if (!ctx)
+ return -EINVAL;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ if (ctx->is_volatile) {
+ rc = get_next_xlink_buf(xlink_dev, &vaddr, XLINK_MAX_BUF_SIZE);
+ if (rc)
+ return rc;
+ memcpy(vaddr, data, *size);
+ rc = xlink_virt_to_phys(&xlink_dev->local_xlink_mem, vaddr,
+ &paddr);
+ if (rc)
+ return rc;
+ } else {
+ paddr = *(u32 *)data;
+ }
+ rc = intel_keembay_vpu_ipc_send(xlink_dev->vpu_dev,
+ KMB_VPU_IPC_NODE_LEON_MSS, ctx->chan,
+ paddr, *size);
+
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_ipc_write);
+
+/*
+ * xlink_ipc_read() - platform read interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to read from.
+ * @data: [out] The data buffer to read into.
+ * @size: [in-out] The amount of data to read/was read.
+ * @timeout: [in] The time (in ms) to wait before timing out.
+ * @context: [in] The ipc operation context.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_read(u32 sw_device_id, void *data, size_t * const size,
+ u32 timeout, void *context)
+{
+ struct xlink_ipc_context *ctx = context;
+ u32 addr = 0;
+ void *vaddr;
+ int rc;
+
+ if (!ctx)
+ return -EINVAL;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ rc = intel_keembay_vpu_ipc_recv(xlink_dev->vpu_dev,
+ KMB_VPU_IPC_NODE_LEON_MSS, ctx->chan,
+ &addr, size, timeout);
+ if (rc)
+ return rc;
+
+ if (ctx->is_volatile) {
+ vaddr = xlink_phys_to_virt(&xlink_dev->remote_xlink_mem, addr);
+ if (vaddr)
+ memcpy(data, vaddr, *size);
+ else
+ return -ENXIO;
+ } else {
+ *(u32 *)data = addr;
+ }
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_ipc_read);
+
+/*
+ * xlink_ipc_get_device_list() - platform get device list interface.
+ *
+ * @sw_device_id_list: [out] The list of devices found.
+ * @num_devices: [out] The number of devices found.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_get_device_list(u32 *sw_device_id_list, u32 *num_devices)
+{
+ int i = 0;
+
+ if (!sw_device_id_list || !num_devices)
+ return -EINVAL;
+
+ if (xlink_dev) {
+ *sw_device_id_list = xlink_dev->sw_device_id;
+ i++;
+ }
+
+ *num_devices = i;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xlink_ipc_get_device_list);
+
+/*
+ * xlink_ipc_get_device_name() - platform get device name interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to get name of.
+ * @device_name: [out] The name of the xlink ipc device.
+ * @name_size: [in] The maximum size of the name.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_get_device_name(u32 sw_device_id, char *device_name,
+ size_t name_size)
+{
+ size_t size;
+
+ if (!device_name)
+ return -EINVAL;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ size = (name_size > XLINK_IPC_MAX_DEVICE_NAME_SIZE)
+ ? XLINK_IPC_MAX_DEVICE_NAME_SIZE
+ : name_size;
+ strncpy(device_name, xlink_dev->device_name, size);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xlink_ipc_get_device_name);
+
+/*
+ * xlink_ipc_get_device_status() - platform get device status interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to get status of.
+ * @device_status: [out] The device status.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_get_device_status(u32 sw_device_id, u32 *device_status)
+{
+ if (!device_status)
+ return -EINVAL;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ *device_status = intel_keembay_vpu_status(xlink_dev->vpu_dev);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xlink_ipc_get_device_status);
+
+static void kernel_callback(struct device *dev, enum intel_keembay_vpu_event event)
+{
+ if ((enum xlink_vpu_event)event >= NUM_EVENT_TYPE)
+ return;
+
+ if (xlink_dev) {
+ if (xlink_dev->callback)
+ xlink_dev->callback(xlink_dev->sw_device_id, event);
+ }
+}
+
+/*
+ * xlink_ipc_register_for_events() - platform register for events
+ *
+ * @sw_device_id: [in] The sw device id of the device to register events for.
+ * @callback: [in] Callback function for events
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_register_for_events(u32 sw_device_id,
+ int (*callback)(u32 sw_device_id, enum xlink_vpu_event event))
+{
+ int rc;
+
+ if (!xlink_dev)
+ return -ENODEV;
+ xlink_dev->callback = callback;
+ rc = intel_keembay_vpu_register_for_events(xlink_dev->vpu_dev, kernel_callback);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_ipc_register_for_events);
+/*
+ * xlink_ipc_unregister_for_events() - platform unregister for events
+ *
+ * @sw_device_id: [in] The sw device id of the device to get status of.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_unregister_for_events(u32 sw_device_id)
+{
+ int rc;
+
+ if (!xlink_dev)
+ return -ENODEV;
+ rc = intel_keembay_vpu_unregister_for_events(xlink_dev->vpu_dev);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_ipc_unregister_for_events);
+
+/*
+ * xlink_ipc_boot_device() - platform boot device interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to boot.
+ * @binary_name: [in] The file name of the firmware binary to boot.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_boot_device(u32 sw_device_id, const char *binary_name)
+{
+ enum intel_keembay_vpu_state state;
+ int rc;
+
+ if (!binary_name)
+ return -EINVAL;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+	pr_info("Start VPU 0x%x - %s\n", sw_device_id, binary_name);
+ rc = intel_keembay_vpu_startup(xlink_dev->vpu_dev, binary_name);
+ if (rc) {
+ pr_err("Failed to start VPU: %d\n", rc);
+ return -EBUSY;
+ }
+ pr_info("Successfully started VPU!\n");
+
+ /* Wait for VPU to be READY */
+ rc = intel_keembay_vpu_wait_for_ready(xlink_dev->vpu_dev,
+ XLINK_VPU_WAIT_FOR_READY_MS);
+ if (rc) {
+ pr_err("Tried to start VPU but never got READY.\n");
+ return -EBUSY;
+ }
+ pr_info("Successfully synchronised state with VPU!\n");
+
+ /* Check state */
+ state = intel_keembay_vpu_status(xlink_dev->vpu_dev);
+ if (state != KEEMBAY_VPU_READY) {
+ pr_err("VPU was not ready, it was %d\n", state);
+ return -EBUSY;
+ }
+ pr_info("VPU was ready.\n");
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xlink_ipc_boot_device);
+
+/*
+ * xlink_ipc_reset_device() - platform reset device interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to reset.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_reset_device(u32 sw_device_id)
+{
+ enum intel_keembay_vpu_state state;
+ int rc;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ /* stop the vpu */
+ rc = intel_keembay_vpu_stop(xlink_dev->vpu_dev);
+ if (rc) {
+ pr_err("Failed to stop VPU: %d\n", rc);
+ return -EBUSY;
+ }
+ pr_info("Successfully stopped VPU!\n");
+
+ /* check state */
+ state = intel_keembay_vpu_status(xlink_dev->vpu_dev);
+ if (state != KEEMBAY_VPU_OFF) {
+ pr_err("VPU was not OFF after stop request, it was %d\n",
+ state);
+ return -EBUSY;
+ }
+ return 0;
+}
+EXPORT_SYMBOL_GPL(xlink_ipc_reset_device);
+
+/*
+ * xlink_ipc_open_channel() - platform open channel interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to open channel to.
+ * @channel: [in] The channel id to open.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_open_channel(u32 sw_device_id, u32 channel)
+{
+ int rc;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ rc = intel_keembay_vpu_ipc_open_channel(xlink_dev->vpu_dev,
+ KMB_VPU_IPC_NODE_LEON_MSS, channel);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_ipc_open_channel);
+
+/*
+ * xlink_ipc_close_channel() - platform close channel interface.
+ *
+ * @sw_device_id: [in] The sw device id of the device to close channel to.
+ * @channel: [in] The channel id to close.
+ *
+ * Return: 0 on success, negative error code otherwise.
+ */
+int xlink_ipc_close_channel(u32 sw_device_id, u32 channel)
+{
+ int rc;
+
+ if (!xlink_dev)
+ return -ENODEV;
+
+ rc = intel_keembay_vpu_ipc_close_channel(xlink_dev->vpu_dev,
+ KMB_VPU_IPC_NODE_LEON_MSS, channel);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_ipc_close_channel);
+
+/*
+ * xlink ipc driver functions
+ */
+
+static int keembay_xlink_ipc_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ int rc;
+
+ /* allocate device data structure */
+ xlink_dev = kzalloc(sizeof(*xlink_dev), GFP_KERNEL);
+ if (!xlink_dev)
+ return -ENOMEM;
+
+ xlink_dev->pdev = pdev;
+ dev_info(dev, "Keem Bay xlink IPC driver probed.\n");
+
+ /* grab reserved memory regions and assign to child devices */
+ rc = xlink_reserved_memory_init(xlink_dev);
+ if (rc < 0) {
+ dev_err(&pdev->dev,
+ "Failed to set up reserved memory regions.\n");
+ goto r_cleanup;
+ }
+
+ /* allocate memory from the reserved memory regions */
+ rc = xlink_reserved_memory_alloc(xlink_dev);
+ if (rc < 0) {
+ dev_err(&pdev->dev,
+ "Failed to allocate reserved memory regions.\n");
+ goto r_cleanup;
+ }
+
+ /* init the xlink buffer pool used for rx/tx */
+ init_xlink_buf_pool(xlink_dev);
+
+ /* get reference to vpu ipc device */
+ rc = vpu_ipc_device_get(xlink_dev);
+ if (rc)
+ goto r_cleanup;
+
+ /* get device id */
+ rc = of_property_read_u32(dev->of_node, "intel,keembay-vpu-ipc-id",
+ &xlink_dev->vpu_id);
+ if (rc) {
+ dev_err(dev, "Cannot get VPU ID from DT.\n");
+ goto r_cleanup;
+ }
+
+ /* assign a sw device id */
+ xlink_dev->sw_device_id = GET_SW_DEVICE_ID_FROM_VPU_ID
+ (xlink_dev->vpu_id);
+
+ /* assign a device name */
+ rc = of_property_read_string(dev->of_node, "intel,keembay-vpu-ipc-name",
+ &xlink_dev->device_name);
+ if (rc) {
+ /* only warn for now; we will enforce this in the future */
+ dev_warn(dev, "VPU name not defined in DT, using %s as default.\n",
+ VPU_DEVICE_NAME);
+ dev_warn(dev, "WARNING: additional VPU devices may fail probing.\n");
+ xlink_dev->device_name = VPU_DEVICE_NAME;
+ }
+
+ /* get platform data reference */
+ platform_set_drvdata(pdev, xlink_dev);
+
+ dev_info(dev, "Device id=%u sw_device_id=0x%x name=%s probe complete.\n",
+ xlink_dev->vpu_id, xlink_dev->sw_device_id,
+ xlink_dev->device_name);
+ return 0;
+
+r_cleanup:
+ xlink_reserved_memory_remove(xlink_dev);
+ return rc;
+}
+
+static int keembay_xlink_ipc_remove(struct platform_device *pdev)
+{
+ struct xlink_ipc_dev *xlink_dev = platform_get_drvdata(pdev);
+ struct device *dev = &pdev->dev;
+
+ /*
+ * no need to de-alloc xlink mem (local_xlink_mem and remote_xlink_mem)
+ * since it was allocated with dmam_alloc
+ */
+ xlink_reserved_memory_remove(xlink_dev);
+
+ /* release vpu ipc device */
+ vpu_ipc_device_put(xlink_dev);
+
+ dev_info(dev, "Keem Bay xlink IPC driver removed.\n");
+ return 0;
+}
+
+static const struct of_device_id keembay_xlink_ipc_of_match[] = {
+ {
+ .compatible = "intel,keembay-xlink-ipc",
+ },
+ {}
+};
+
+static struct platform_driver keembay_xlink_ipc_driver = {
+ .driver = {
+ .name = "keembay-xlink-ipc",
+ .of_match_table = keembay_xlink_ipc_of_match,
+ },
+ .probe = keembay_xlink_ipc_probe,
+ .remove = keembay_xlink_ipc_remove,
+};
+module_platform_driver(keembay_xlink_ipc_driver);
+
+MODULE_DESCRIPTION("Keem Bay xlink IPC Driver");
+MODULE_AUTHOR("Ryan Carnaghi <[email protected]>");
+MODULE_LICENSE("GPL v2");
diff --git a/include/linux/xlink-ipc.h b/include/linux/xlink-ipc.h
new file mode 100644
index 000000000000..f26b53bf6506
--- /dev/null
+++ b/include/linux/xlink-ipc.h
@@ -0,0 +1,48 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay xlink IPC Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef _XLINK_IPC_H_
+#define _XLINK_IPC_H_
+
+#include <linux/types.h>
+
+struct xlink_ipc_context {
+ u16 chan;
+ u8 is_volatile;
+};
+
+int xlink_ipc_connect(u32 sw_device_id);
+
+int xlink_ipc_write(u32 sw_device_id, void *data, size_t * const size,
+ u32 timeout, void *context);
+
+int xlink_ipc_read(u32 sw_device_id, void *data, size_t * const size,
+ u32 timeout, void *context);
+
+int xlink_ipc_get_device_list(u32 *sw_device_id_list, u32 *num_devices);
+
+int xlink_ipc_get_device_name(u32 sw_device_id, char *device_name,
+ size_t name_size);
+
+int xlink_ipc_get_device_status(u32 sw_device_id, u32 *device_status);
+
+int xlink_ipc_boot_device(u32 sw_device_id, const char *binary_name);
+
+int xlink_ipc_reset_device(u32 sw_device_id);
+
+int xlink_ipc_open_channel(u32 sw_device_id, u32 channel);
+
+int xlink_ipc_close_channel(u32 sw_device_id, u32 channel);
+
+int xlink_ipc_register_for_events(u32 sw_device_id,
+ int (*callback)(u32 sw_device_id, u32 event));
+
+int xlink_ipc_unregister_for_events(u32 sw_device_id);
+
+#endif /* _XLINK_IPC_H_ */
--
2.17.1

2021-02-12 22:38:15

by mark gross

[permalink] [raw]
Subject: [PATCH v6 11/34] misc: xlink-pcie: lh: Add core communication logic

From: Srikanth Thokala <[email protected]>

Add logic to establish communication with the remote host, which is
done through ring buffer management and MSI/doorbell interrupts.

Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
Signed-off-by: Srikanth Thokala <[email protected]>
---
drivers/misc/xlink-pcie/local_host/Makefile | 2 +
drivers/misc/xlink-pcie/local_host/core.c | 806 ++++++++++++++++++++
drivers/misc/xlink-pcie/local_host/core.h | 247 ++++++
drivers/misc/xlink-pcie/local_host/epf.c | 116 ++-
drivers/misc/xlink-pcie/local_host/epf.h | 23 +
drivers/misc/xlink-pcie/local_host/util.c | 375 +++++++++
drivers/misc/xlink-pcie/local_host/util.h | 70 ++
drivers/misc/xlink-pcie/local_host/xpcie.h | 63 ++
include/linux/xlink_drv_inf.h | 58 ++
9 files changed, 1752 insertions(+), 8 deletions(-)
create mode 100644 drivers/misc/xlink-pcie/local_host/core.c
create mode 100644 drivers/misc/xlink-pcie/local_host/core.h
create mode 100644 drivers/misc/xlink-pcie/local_host/util.c
create mode 100644 drivers/misc/xlink-pcie/local_host/util.h
create mode 100644 include/linux/xlink_drv_inf.h

diff --git a/drivers/misc/xlink-pcie/local_host/Makefile b/drivers/misc/xlink-pcie/local_host/Makefile
index 54fc118e2dd1..28761751d43b 100644
--- a/drivers/misc/xlink-pcie/local_host/Makefile
+++ b/drivers/misc/xlink-pcie/local_host/Makefile
@@ -1,3 +1,5 @@
obj-$(CONFIG_XLINK_PCIE_LH_DRIVER) += mxlk_ep.o
mxlk_ep-objs := epf.o
mxlk_ep-objs += dma.o
+mxlk_ep-objs += core.o
+mxlk_ep-objs += util.o
diff --git a/drivers/misc/xlink-pcie/local_host/core.c b/drivers/misc/xlink-pcie/local_host/core.c
new file mode 100644
index 000000000000..c67ce2c3067d
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/core.c
@@ -0,0 +1,806 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2021 Intel Corporation
+ */
+
+#include <linux/of_reserved_mem.h>
+
+#include "epf.h"
+#include "core.h"
+#include "util.h"
+
+static struct xpcie *global_xpcie;
+
+static struct xpcie *intel_xpcie_core_get_by_id(u32 sw_device_id)
+{
+ return (sw_device_id == xlink_sw_id) ? global_xpcie : NULL;
+}
+
+static int intel_xpcie_map_dma(struct xpcie *xpcie, struct xpcie_buf_desc *bd,
+ int direction)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct pci_epf *epf = xpcie_epf->epf;
+ struct device *dma_dev = epf->epc->dev.parent;
+
+ bd->phys = dma_map_single(dma_dev, bd->data, bd->length, direction);
+
+ return dma_mapping_error(dma_dev, bd->phys);
+}
+
+static void intel_xpcie_unmap_dma(struct xpcie *xpcie,
+ struct xpcie_buf_desc *bd, int direction)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct pci_epf *epf = xpcie_epf->epf;
+ struct device *dma_dev = epf->epc->dev.parent;
+
+ dma_unmap_single(dma_dev, bd->phys, bd->length, direction);
+}
+
+static void intel_xpcie_set_cap_txrx(struct xpcie *xpcie)
+{
+ size_t tx_len = sizeof(struct xpcie_transfer_desc) *
+ XPCIE_NUM_TX_DESCS;
+ size_t rx_len = sizeof(struct xpcie_transfer_desc) *
+ XPCIE_NUM_RX_DESCS;
+ size_t hdr_len = sizeof(struct xpcie_cap_txrx);
+ u32 start = sizeof(struct xpcie_mmio);
+ struct xpcie_cap_txrx *cap;
+ struct xpcie_cap_hdr *hdr;
+ u16 next;
+
+ next = (u16)(start + hdr_len + tx_len + rx_len);
+ intel_xpcie_iowrite32(start, xpcie->mmio + XPCIE_MMIO_CAP_OFF);
+ cap = (void *)xpcie->mmio + start;
+ memset(cap, 0, sizeof(struct xpcie_cap_txrx));
+ cap->hdr.id = XPCIE_CAP_TXRX;
+ cap->hdr.next = next;
+ cap->fragment_size = XPCIE_FRAGMENT_SIZE;
+ cap->tx.ring = start + hdr_len;
+ cap->tx.ndesc = XPCIE_NUM_TX_DESCS;
+ cap->rx.ring = start + hdr_len + tx_len;
+ cap->rx.ndesc = XPCIE_NUM_RX_DESCS;
+
+ hdr = (struct xpcie_cap_hdr *)((void *)xpcie->mmio + next);
+ hdr->id = XPCIE_CAP_NULL;
+}
+
+static void intel_xpcie_txrx_cleanup(struct xpcie *xpcie)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct device *dma_dev = xpcie_epf->epf->epc->dev.parent;
+ struct xpcie_interface *inf = &xpcie->interfaces[0];
+ struct xpcie_stream *tx = &xpcie->tx;
+ struct xpcie_stream *rx = &xpcie->rx;
+ struct xpcie_transfer_desc *td;
+ int index;
+
+ xpcie->stop_flag = true;
+ xpcie->no_tx_buffer = false;
+ inf->data_avail = true;
+ wake_up_interruptible(&xpcie->tx_waitq);
+ wake_up_interruptible(&inf->rx_waitq);
+ mutex_lock(&xpcie->wlock);
+ mutex_lock(&inf->rlock);
+
+ for (index = 0; index < rx->pipe.ndesc; index++) {
+ td = rx->pipe.tdr + index;
+ intel_xpcie_set_td_address(td, 0);
+ intel_xpcie_set_td_length(td, 0);
+ }
+ for (index = 0; index < tx->pipe.ndesc; index++) {
+ td = tx->pipe.tdr + index;
+ intel_xpcie_set_td_address(td, 0);
+ intel_xpcie_set_td_length(td, 0);
+ }
+
+ intel_xpcie_list_cleanup(&xpcie->tx_pool);
+ intel_xpcie_list_cleanup(&xpcie->rx_pool);
+
+ if (xpcie_epf->tx_virt) {
+ dma_free_coherent(dma_dev, xpcie_epf->tx_size,
+ xpcie_epf->tx_virt, xpcie_epf->tx_phys);
+ }
+
+ mutex_unlock(&inf->rlock);
+ mutex_unlock(&xpcie->wlock);
+}
+
+/*
+ * RX/TX are named from the Remote Host's perspective;
+ * on the Local Host they are reversed.
+ */
+static int intel_xpcie_txrx_init(struct xpcie *xpcie,
+ struct xpcie_cap_txrx *cap)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct device *dma_dev = xpcie_epf->epf->epc->dev.parent;
+ struct xpcie_stream *tx = &xpcie->tx;
+ struct xpcie_stream *rx = &xpcie->rx;
+ int tx_pool_size, rx_pool_size;
+ struct xpcie_buf_desc *bd;
+ int index, ndesc, rc;
+
+ xpcie->txrx = cap;
+ xpcie->fragment_size = cap->fragment_size;
+ xpcie->stop_flag = false;
+
+ rx->pipe.ndesc = cap->tx.ndesc;
+ rx->pipe.head = &cap->tx.head;
+ rx->pipe.tail = &cap->tx.tail;
+ rx->pipe.tdr = (void *)xpcie->mmio + cap->tx.ring;
+
+ tx->pipe.ndesc = cap->rx.ndesc;
+ tx->pipe.head = &cap->rx.head;
+ tx->pipe.tail = &cap->rx.tail;
+ tx->pipe.tdr = (void *)xpcie->mmio + cap->rx.ring;
+
+ intel_xpcie_list_init(&xpcie->rx_pool);
+ rx_pool_size = roundup(SZ_32M, xpcie->fragment_size);
+ ndesc = rx_pool_size / xpcie->fragment_size;
+
+ /* Initialize reserved memory resources */
+ rc = of_reserved_mem_device_init(dma_dev);
+ if (rc) {
+ dev_err(dma_dev, "Could not get reserved memory\n");
+ goto error;
+ }
+
+ for (index = 0; index < ndesc; index++) {
+ bd = intel_xpcie_alloc_bd(xpcie->fragment_size);
+ if (bd) {
+ intel_xpcie_list_put(&xpcie->rx_pool, bd);
+ } else {
+ dev_err(xpcie_to_dev(xpcie),
+ "failed to alloc all rx pool descriptors\n");
+ goto error;
+ }
+ }
+
+ intel_xpcie_list_init(&xpcie->tx_pool);
+ tx_pool_size = roundup(SZ_32M, xpcie->fragment_size);
+ ndesc = tx_pool_size / xpcie->fragment_size;
+
+ xpcie_epf->tx_size = tx_pool_size;
+ xpcie_epf->tx_virt = dma_alloc_coherent(dma_dev,
+ xpcie_epf->tx_size,
+ &xpcie_epf->tx_phys,
+ GFP_KERNEL);
+ if (!xpcie_epf->tx_virt)
+ goto error;
+
+ for (index = 0; index < ndesc; index++) {
+ bd = intel_xpcie_alloc_bd_reuse(xpcie->fragment_size,
+ xpcie_epf->tx_virt +
+ (index *
+ xpcie->fragment_size),
+ xpcie_epf->tx_phys +
+ (index *
+ xpcie->fragment_size));
+ if (bd) {
+ intel_xpcie_list_put(&xpcie->tx_pool, bd);
+ } else {
+ dev_err(xpcie_to_dev(xpcie),
+ "failed to alloc all tx pool descriptors\n");
+ goto error;
+ }
+ }
+
+ return 0;
+
+error:
+ intel_xpcie_txrx_cleanup(xpcie);
+
+ return -ENOMEM;
+}
+
+static int intel_xpcie_discover_txrx(struct xpcie *xpcie)
+{
+ struct xpcie_cap_txrx *cap;
+ int error;
+
+ cap = intel_xpcie_cap_find(xpcie, 0, XPCIE_CAP_TXRX);
+ if (cap) {
+ error = intel_xpcie_txrx_init(xpcie, cap);
+ } else {
+ dev_err(xpcie_to_dev(xpcie), "xpcie txrx info not found\n");
+ error = -EIO;
+ }
+
+ return error;
+}
+
+static void intel_xpcie_start_tx(struct xpcie *xpcie, unsigned long delay)
+{
+	/*
+	 * Use only one WQ for both RX and TX.
+	 *
+	 * Due to a silicon bug in the A0 SoC, synchronous reads from and
+	 * writes to DDR caused memory mismatch errors in stability tests.
+	 */
+ if (xpcie->legacy_a0)
+ queue_delayed_work(xpcie->rx_wq, &xpcie->tx_event, delay);
+ else
+ queue_delayed_work(xpcie->tx_wq, &xpcie->tx_event, delay);
+}
+
+static void intel_xpcie_start_rx(struct xpcie *xpcie, unsigned long delay)
+{
+ queue_delayed_work(xpcie->rx_wq, &xpcie->rx_event, delay);
+}
+
+static void intel_xpcie_rx_event_handler(struct work_struct *work)
+{
+ struct xpcie *xpcie = container_of(work, struct xpcie, rx_event.work);
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct xpcie_buf_desc *bd_head, *bd_tail, *bd;
+ u32 head, tail, ndesc, length, initial_head;
+ unsigned long delay = msecs_to_jiffies(1);
+ struct xpcie_stream *rx = &xpcie->rx;
+ int descs_num = 0, chan = 0, rc;
+ struct xpcie_dma_ll_desc *desc;
+ struct xpcie_transfer_desc *td;
+ bool reset_work = false;
+ u16 interface;
+ u64 address;
+
+ if (intel_xpcie_get_host_status(xpcie) != XPCIE_STATUS_RUN)
+ return;
+
+ bd_head = NULL;
+ bd_tail = NULL;
+ ndesc = rx->pipe.ndesc;
+ tail = intel_xpcie_get_tdr_tail(&rx->pipe);
+ initial_head = intel_xpcie_get_tdr_head(&rx->pipe);
+ head = initial_head;
+
+ while (head != tail) {
+ td = rx->pipe.tdr + head;
+
+ bd = intel_xpcie_alloc_rx_bd(xpcie);
+ if (!bd) {
+ reset_work = true;
+ if (descs_num == 0) {
+ delay = msecs_to_jiffies(10);
+ goto task_exit;
+ }
+ break;
+ }
+
+ interface = intel_xpcie_get_td_interface(td);
+ length = intel_xpcie_get_td_length(td);
+ address = intel_xpcie_get_td_address(td);
+
+ bd->length = length;
+ bd->interface = interface;
+ rc = intel_xpcie_map_dma(xpcie, bd, DMA_FROM_DEVICE);
+ if (rc) {
+ dev_err(xpcie_to_dev(xpcie),
+ "failed to map rx bd (%d)\n", rc);
+ intel_xpcie_free_rx_bd(xpcie, bd);
+ break;
+ }
+
+ desc = &xpcie_epf->rx_desc_buf[chan].virt[descs_num++];
+ desc->dma_transfer_size = length;
+ desc->dst_addr = bd->phys;
+ desc->src_addr = address;
+
+ if (bd_head)
+ bd_tail->next = bd;
+ else
+ bd_head = bd;
+ bd_tail = bd;
+
+ head = XPCIE_CIRCULAR_INC(head, ndesc);
+ }
+
+ if (descs_num == 0)
+ goto task_exit;
+
+ rc = intel_xpcie_copy_from_host_ll(xpcie, chan, descs_num);
+
+ bd = bd_head;
+ while (bd) {
+ intel_xpcie_unmap_dma(xpcie, bd, DMA_FROM_DEVICE);
+ bd = bd->next;
+ }
+
+ if (rc) {
+ dev_err(xpcie_to_dev(xpcie),
+ "failed to DMA from host (%d)\n", rc);
+ intel_xpcie_free_rx_bd(xpcie, bd_head);
+ delay = msecs_to_jiffies(5);
+ reset_work = true;
+ goto task_exit;
+ }
+
+ head = initial_head;
+ bd = bd_head;
+ while (bd) {
+ td = rx->pipe.tdr + head;
+ bd_head = bd_head->next;
+ bd->next = NULL;
+
+ if (likely(bd->interface < XPCIE_NUM_INTERFACES)) {
+ intel_xpcie_set_td_status(td,
+ XPCIE_DESC_STATUS_SUCCESS);
+ intel_xpcie_add_bd_to_interface(xpcie, bd);
+ } else {
+ dev_err(xpcie_to_dev(xpcie),
+ "detected rx desc interface failure (%u)\n",
+ bd->interface);
+ intel_xpcie_set_td_status(td, XPCIE_DESC_STATUS_ERROR);
+ intel_xpcie_free_rx_bd(xpcie, bd);
+ }
+
+ bd = bd_head;
+ head = XPCIE_CIRCULAR_INC(head, ndesc);
+ }
+
+ if (head != initial_head) {
+ intel_xpcie_set_tdr_head(&rx->pipe, head);
+ intel_xpcie_raise_irq(xpcie, DATA_RECEIVED);
+ }
+
+task_exit:
+ if (reset_work)
+ intel_xpcie_start_rx(xpcie, delay);
+}
+
+static void intel_xpcie_tx_event_handler(struct work_struct *work)
+{
+ struct xpcie *xpcie = container_of(work, struct xpcie, tx_event.work);
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct xpcie_buf_desc *bd_head, *bd_tail, *bd;
+ struct xpcie_stream *tx = &xpcie->tx;
+ u32 head, tail, ndesc, initial_tail;
+ struct xpcie_dma_ll_desc *desc;
+ struct xpcie_transfer_desc *td;
+ int descs_num = 0, chan = 0, rc;
+ size_t buffers = 0, bytes = 0;
+ u64 address;
+
+ if (intel_xpcie_get_host_status(xpcie) != XPCIE_STATUS_RUN)
+ return;
+
+ bd_head = NULL;
+ bd_tail = NULL;
+ ndesc = tx->pipe.ndesc;
+ initial_tail = intel_xpcie_get_tdr_tail(&tx->pipe);
+ tail = initial_tail;
+ head = intel_xpcie_get_tdr_head(&tx->pipe);
+
+ /* add new entries */
+ while (XPCIE_CIRCULAR_INC(tail, ndesc) != head) {
+ bd = intel_xpcie_list_get(&xpcie->write);
+ if (!bd)
+ break;
+
+ td = tx->pipe.tdr + tail;
+ address = intel_xpcie_get_td_address(td);
+
+ desc = &xpcie_epf->tx_desc_buf[chan].virt[descs_num++];
+ desc->dma_transfer_size = bd->length;
+ desc->src_addr = bd->phys;
+ desc->dst_addr = address;
+
+ if (bd_head)
+ bd_tail->next = bd;
+ else
+ bd_head = bd;
+ bd_tail = bd;
+
+ tail = XPCIE_CIRCULAR_INC(tail, ndesc);
+ }
+
+ if (descs_num == 0)
+ goto task_exit;
+
+ rc = intel_xpcie_copy_to_host_ll(xpcie, chan, descs_num);
+
+ tail = initial_tail;
+ bd = bd_head;
+ while (bd) {
+ if (rc) {
+ bd = bd->next;
+ continue;
+ }
+
+ td = tx->pipe.tdr + tail;
+ intel_xpcie_set_td_status(td, XPCIE_DESC_STATUS_SUCCESS);
+ intel_xpcie_set_td_length(td, bd->length);
+ intel_xpcie_set_td_interface(td, bd->interface);
+
+ bd = bd->next;
+ tail = XPCIE_CIRCULAR_INC(tail, ndesc);
+ }
+
+ if (rc) {
+ dev_err(xpcie_to_dev(xpcie),
+ "failed to DMA to host (%d)\n", rc);
+ intel_xpcie_list_put_head(&xpcie->write, bd_head);
+ return;
+ }
+
+ intel_xpcie_free_tx_bd(xpcie, bd_head);
+
+ if (intel_xpcie_get_tdr_tail(&tx->pipe) != tail) {
+ intel_xpcie_set_tdr_tail(&tx->pipe, tail);
+ intel_xpcie_raise_irq(xpcie, DATA_SENT);
+ }
+
+task_exit:
+ intel_xpcie_list_info(&xpcie->write, &bytes, &buffers);
+ if (buffers) {
+ xpcie->tx_pending = true;
+ head = intel_xpcie_get_tdr_head(&tx->pipe);
+ if (XPCIE_CIRCULAR_INC(tail, ndesc) != head)
+ intel_xpcie_start_tx(xpcie, 0);
+ } else {
+ xpcie->tx_pending = false;
+ }
+}
+
+static irqreturn_t intel_xpcie_core_irq_cb(int irq, void *args)
+{
+ struct xpcie *xpcie = args;
+
+ if (intel_xpcie_get_doorbell(xpcie, TO_DEVICE, DATA_SENT)) {
+ intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DATA_SENT, 0);
+ intel_xpcie_start_rx(xpcie, 0);
+ }
+ if (intel_xpcie_get_doorbell(xpcie, TO_DEVICE, DATA_RECEIVED)) {
+ intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DATA_RECEIVED, 0);
+ if (xpcie->tx_pending)
+ intel_xpcie_start_tx(xpcie, 0);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static int intel_xpcie_events_init(struct xpcie *xpcie)
+{
+ xpcie->rx_wq = alloc_ordered_workqueue(XPCIE_DRIVER_NAME,
+ WQ_MEM_RECLAIM | WQ_HIGHPRI);
+ if (!xpcie->rx_wq) {
+ dev_err(xpcie_to_dev(xpcie), "failed to allocate workqueue\n");
+ return -ENOMEM;
+ }
+
+ if (!xpcie->legacy_a0) {
+ xpcie->tx_wq = alloc_ordered_workqueue(XPCIE_DRIVER_NAME,
+ WQ_MEM_RECLAIM |
+ WQ_HIGHPRI);
+ if (!xpcie->tx_wq) {
+ dev_err(xpcie_to_dev(xpcie),
+ "failed to allocate workqueue\n");
+ destroy_workqueue(xpcie->rx_wq);
+ return -ENOMEM;
+ }
+ }
+
+ INIT_DELAYED_WORK(&xpcie->rx_event, intel_xpcie_rx_event_handler);
+ INIT_DELAYED_WORK(&xpcie->tx_event, intel_xpcie_tx_event_handler);
+
+ return 0;
+}
+
+static void intel_xpcie_events_cleanup(struct xpcie *xpcie)
+{
+ cancel_delayed_work_sync(&xpcie->rx_event);
+ cancel_delayed_work_sync(&xpcie->tx_event);
+
+ destroy_workqueue(xpcie->rx_wq);
+ if (!xpcie->legacy_a0)
+ destroy_workqueue(xpcie->tx_wq);
+}
+
+int intel_xpcie_core_init(struct xpcie *xpcie)
+{
+ int error;
+
+ global_xpcie = xpcie;
+
+ intel_xpcie_set_cap_txrx(xpcie);
+
+ error = intel_xpcie_events_init(xpcie);
+ if (error)
+ return error;
+
+ error = intel_xpcie_discover_txrx(xpcie);
+ if (error)
+ goto error_txrx;
+
+ intel_xpcie_interfaces_init(xpcie);
+
+ intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DATA_SENT, 0);
+ intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DATA_RECEIVED, 0);
+ intel_xpcie_set_doorbell(xpcie, TO_DEVICE, DEV_EVENT, NO_OP);
+ intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, DATA_SENT, 0);
+ intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, DATA_RECEIVED, 0);
+ intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, DEV_EVENT, NO_OP);
+
+ intel_xpcie_register_host_irq(xpcie, intel_xpcie_core_irq_cb);
+
+ return 0;
+
+error_txrx:
+ intel_xpcie_events_cleanup(xpcie);
+
+ return error;
+}
+
+void intel_xpcie_core_cleanup(struct xpcie *xpcie)
+{
+ if (xpcie->status == XPCIE_STATUS_RUN) {
+ intel_xpcie_events_cleanup(xpcie);
+ intel_xpcie_interfaces_cleanup(xpcie);
+ intel_xpcie_txrx_cleanup(xpcie);
+ }
+}
+
+int intel_xpcie_core_read(struct xpcie *xpcie, void *buffer,
+ size_t *length, u32 timeout_ms)
+{
+ long jiffies_timeout = (long)msecs_to_jiffies(timeout_ms);
+ struct xpcie_interface *inf = &xpcie->interfaces[0];
+ unsigned long jiffies_start = jiffies;
+ struct xpcie_buf_desc *bd;
+ long jiffies_passed = 0;
+ size_t len, remaining;
+ int ret;
+
+ if (*length == 0)
+ return -EINVAL;
+
+ if (xpcie->status != XPCIE_STATUS_RUN)
+ return -ENODEV;
+
+ len = *length;
+ remaining = len;
+ *length = 0;
+
+ ret = mutex_lock_interruptible(&inf->rlock);
+ if (ret < 0)
+ return -EINTR;
+
+ do {
+ while (!inf->data_avail) {
+ mutex_unlock(&inf->rlock);
+ if (timeout_ms == 0) {
+ ret =
+ wait_event_interruptible(inf->rx_waitq,
+ inf->data_avail);
+ } else {
+ ret =
+ wait_event_interruptible_timeout(inf->rx_waitq,
+ inf->data_avail,
+ jiffies_timeout -
+ jiffies_passed);
+ if (ret == 0)
+ return -ETIME;
+ }
+ if (ret < 0 || xpcie->stop_flag)
+ return -EINTR;
+
+ ret = mutex_lock_interruptible(&inf->rlock);
+ if (ret < 0)
+ return -EINTR;
+ }
+
+ bd = (inf->partial_read) ? inf->partial_read :
+ intel_xpcie_list_get(&inf->read);
+
+ while (remaining && bd) {
+ size_t bcopy;
+
+ bcopy = min(remaining, bd->length);
+ memcpy(buffer, bd->data, bcopy);
+
+ buffer += bcopy;
+ remaining -= bcopy;
+ bd->data += bcopy;
+ bd->length -= bcopy;
+
+ if (bd->length == 0) {
+ intel_xpcie_free_rx_bd(xpcie, bd);
+ bd = intel_xpcie_list_get(&inf->read);
+ }
+ }
+
+ /* save for next time */
+ inf->partial_read = bd;
+
+ if (!bd)
+ inf->data_avail = false;
+
+ *length = len - remaining;
+
+ jiffies_passed = (long)jiffies - (long)jiffies_start;
+ } while (remaining > 0 && (jiffies_passed < jiffies_timeout ||
+ timeout_ms == 0));
+
+ mutex_unlock(&inf->rlock);
+
+ return 0;
+}
+
+int intel_xpcie_core_write(struct xpcie *xpcie, void *buffer,
+ size_t *length, u32 timeout_ms)
+{
+ long jiffies_timeout = (long)msecs_to_jiffies(timeout_ms);
+ struct xpcie_interface *inf = &xpcie->interfaces[0];
+ unsigned long jiffies_start = jiffies;
+ struct xpcie_buf_desc *bd, *head;
+ long jiffies_passed = 0;
+ size_t remaining, len;
+ int ret;
+
+ if (*length == 0)
+ return -EINVAL;
+
+ if (xpcie->status != XPCIE_STATUS_RUN)
+ return -ENODEV;
+
+ if (intel_xpcie_get_host_status(xpcie) != XPCIE_STATUS_RUN)
+ return -ENODEV;
+
+ len = *length;
+ remaining = len;
+ *length = 0;
+
+ ret = mutex_lock_interruptible(&xpcie->wlock);
+ if (ret < 0)
+ return -EINTR;
+
+ do {
+ bd = intel_xpcie_alloc_tx_bd(xpcie);
+ head = bd;
+ while (!head) {
+ mutex_unlock(&xpcie->wlock);
+ if (timeout_ms == 0) {
+ ret =
+ wait_event_interruptible(xpcie->tx_waitq,
+ !xpcie->no_tx_buffer);
+ } else {
+ ret =
+ wait_event_interruptible_timeout(xpcie->tx_waitq,
+ !xpcie->no_tx_buffer,
+ jiffies_timeout -
+ jiffies_passed);
+ if (ret == 0)
+ return -ETIME;
+ }
+ if (ret < 0 || xpcie->stop_flag)
+ return -EINTR;
+
+ ret = mutex_lock_interruptible(&xpcie->wlock);
+ if (ret < 0)
+ return -EINTR;
+
+ bd = intel_xpcie_alloc_tx_bd(xpcie);
+ head = bd;
+ }
+
+ while (remaining && bd) {
+ size_t bcopy;
+
+ bcopy = min(bd->length, remaining);
+ memcpy(bd->data, buffer, bcopy);
+
+ buffer += bcopy;
+ remaining -= bcopy;
+ bd->length = bcopy;
+ bd->interface = inf->id;
+
+ if (remaining) {
+ bd->next = intel_xpcie_alloc_tx_bd(xpcie);
+ bd = bd->next;
+ }
+ }
+
+ intel_xpcie_list_put(&inf->xpcie->write, head);
+ intel_xpcie_start_tx(xpcie, 0);
+
+ *length = len - remaining;
+
+ jiffies_passed = (long)jiffies - (long)jiffies_start;
+ } while (remaining > 0 && (jiffies_passed < jiffies_timeout ||
+ timeout_ms == 0));
+
+ mutex_unlock(&xpcie->wlock);
+
+ return 0;
+}
+
+int intel_xpcie_get_device_status_by_id(u32 id, u32 *status)
+{
+ struct xpcie *xpcie = intel_xpcie_core_get_by_id(id);
+
+ if (!xpcie)
+ return -ENODEV;
+
+ *status = xpcie->status;
+
+ return 0;
+}
+
+u32 intel_xpcie_get_device_num(u32 *id_list)
+{
+ u32 num_devices = 0;
+
+ if (xlink_sw_id) {
+ num_devices = 1;
+ *id_list = xlink_sw_id;
+ }
+
+ return num_devices;
+}
+
+int intel_xpcie_get_device_name_by_id(u32 id,
+ char *device_name, size_t name_size)
+{
+ struct xpcie *xpcie;
+
+ xpcie = intel_xpcie_core_get_by_id(id);
+ if (!xpcie)
+ return -ENODEV;
+
+	strscpy(device_name, XPCIE_DRIVER_NAME, name_size);
+
+ return 0;
+}
+
+int intel_xpcie_pci_connect_device(u32 id)
+{
+ struct xpcie *xpcie;
+
+ xpcie = intel_xpcie_core_get_by_id(id);
+ if (!xpcie)
+ return -ENODEV;
+
+ if (xpcie->status != XPCIE_STATUS_RUN)
+ return -EIO;
+
+ return 0;
+}
+
+int intel_xpcie_pci_read(u32 id, void *data, size_t *size, u32 timeout)
+{
+ struct xpcie *xpcie;
+
+ xpcie = intel_xpcie_core_get_by_id(id);
+ if (!xpcie)
+ return -ENODEV;
+
+ return intel_xpcie_core_read(xpcie, data, size, timeout);
+}
+
+int intel_xpcie_pci_write(u32 id, void *data, size_t *size, u32 timeout)
+{
+ struct xpcie *xpcie;
+
+ xpcie = intel_xpcie_core_get_by_id(id);
+ if (!xpcie)
+ return -ENODEV;
+
+ return intel_xpcie_core_write(xpcie, data, size, timeout);
+}
+
+int intel_xpcie_pci_reset_device(u32 id)
+{
+ return 0;
+}
diff --git a/drivers/misc/xlink-pcie/local_host/core.h b/drivers/misc/xlink-pcie/local_host/core.h
new file mode 100644
index 000000000000..84985ef41a64
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/core.h
@@ -0,0 +1,247 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef XPCIE_CORE_HEADER_
+#define XPCIE_CORE_HEADER_
+
+#include <linux/io.h>
+#include <linux/types.h>
+#include <linux/workqueue.h>
+#include <linux/slab.h>
+#include <linux/mutex.h>
+#include <linux/mempool.h>
+#include <linux/dma-mapping.h>
+#include <linux/cache.h>
+#include <linux/wait.h>
+
+#include <linux/xlink_drv_inf.h>
+
+/* Number of interfaces to statically allocate resources for */
+#define XPCIE_NUM_INTERFACES (1)
+
+/* max should be always power of '2' */
+#define XPCIE_CIRCULAR_INC(val, max) (((val) + 1) & ((max) - 1))
+
+#define XPCIE_FRAGMENT_SIZE SZ_128K
+
+/* Status encoding of the transfer descriptors */
+#define XPCIE_DESC_STATUS_SUCCESS (0)
+#define XPCIE_DESC_STATUS_ERROR (0xFFFF)
+
+/* Layout transfer descriptors used by device and host */
+struct xpcie_transfer_desc {
+ u64 address;
+ u32 length;
+ u16 status;
+ u16 interface;
+} __packed;
+
+struct xpcie_pipe {
+ u32 old;
+ u32 ndesc;
+ u32 *head;
+ u32 *tail;
+ struct xpcie_transfer_desc *tdr;
+};
+
+struct xpcie_buf_desc {
+ struct xpcie_buf_desc *next;
+ void *head;
+ dma_addr_t phys;
+ size_t true_len;
+ void *data;
+ size_t length;
+ int interface;
+ bool own_mem;
+};
+
+struct xpcie_stream {
+ size_t frag;
+ struct xpcie_pipe pipe;
+};
+
+struct xpcie_list {
+ spinlock_t lock; /* list lock */
+ size_t bytes;
+ size_t buffers;
+ struct xpcie_buf_desc *head;
+ struct xpcie_buf_desc *tail;
+};
+
+struct xpcie_interface {
+ int id;
+ struct xpcie *xpcie;
+ struct mutex rlock; /* read lock */
+ struct xpcie_list read;
+ struct xpcie_buf_desc *partial_read;
+ bool data_avail;
+ wait_queue_head_t rx_waitq;
+};
+
+struct xpcie_debug_stats {
+ struct {
+ size_t cnts;
+ size_t bytes;
+ } tx_krn, rx_krn, tx_usr, rx_usr;
+ size_t send_ints;
+ size_t interrupts;
+ size_t rx_event_runs;
+ size_t tx_event_runs;
+};
+
+/* Defined capabilities located in mmio space */
+#define XPCIE_CAP_NULL (0)
+#define XPCIE_CAP_TXRX (1)
+
+#define XPCIE_CAP_TTL (32)
+#define XPCIE_CAP_HDR_ID (offsetof(struct xpcie_cap_hdr, id))
+#define XPCIE_CAP_HDR_NEXT (offsetof(struct xpcie_cap_hdr, next))
+
+/* Header at the beginning of each capability to define and link to next */
+struct xpcie_cap_hdr {
+ u16 id;
+ u16 next;
+} __packed;
+
+struct xpcie_cap_pipe {
+ u32 ring;
+ u32 ndesc;
+ u32 head;
+ u32 tail;
+} __packed;
+
+/* Transmit and Receive capability */
+struct xpcie_cap_txrx {
+ struct xpcie_cap_hdr hdr;
+ u32 fragment_size;
+ struct xpcie_cap_pipe tx;
+ struct xpcie_cap_pipe rx;
+} __packed;
+
+static inline u64 _ioread64(void __iomem *addr)
+{
+ u64 low, high;
+
+ low = ioread32(addr);
+ high = ioread32(addr + sizeof(u32));
+
+ return low | (high << 32);
+}
+
+static inline void _iowrite64(u64 value, void __iomem *addr)
+{
+ iowrite32(value, addr);
+ iowrite32(value >> 32, addr + sizeof(u32));
+}
+
+#define intel_xpcie_iowrite64(value, addr) \
+ _iowrite64(value, (void __iomem *)addr)
+#define intel_xpcie_iowrite32(value, addr) \
+ iowrite32(value, (void __iomem *)addr)
+#define intel_xpcie_iowrite16(value, addr) \
+ iowrite16(value, (void __iomem *)addr)
+#define intel_xpcie_iowrite8(value, addr) \
+ iowrite8(value, (void __iomem *)addr)
+#define intel_xpcie_ioread64(addr) \
+ _ioread64((void __iomem *)addr)
+#define intel_xpcie_ioread32(addr) \
+ ioread32((void __iomem *)addr)
+#define intel_xpcie_ioread16(addr) \
+ ioread16((void __iomem *)addr)
+#define intel_xpcie_ioread8(addr) \
+ ioread8((void __iomem *)addr)
+
+static inline
+void intel_xpcie_set_td_address(struct xpcie_transfer_desc *td, u64 address)
+{
+ intel_xpcie_iowrite64(address, &td->address);
+}
+
+static inline
+u64 intel_xpcie_get_td_address(struct xpcie_transfer_desc *td)
+{
+ return intel_xpcie_ioread64(&td->address);
+}
+
+static inline
+void intel_xpcie_set_td_length(struct xpcie_transfer_desc *td, u32 length)
+{
+ intel_xpcie_iowrite32(length, &td->length);
+}
+
+static inline
+u32 intel_xpcie_get_td_length(struct xpcie_transfer_desc *td)
+{
+ return intel_xpcie_ioread32(&td->length);
+}
+
+static inline
+void intel_xpcie_set_td_interface(struct xpcie_transfer_desc *td, u16 interface)
+{
+ intel_xpcie_iowrite16(interface, &td->interface);
+}
+
+static inline
+u16 intel_xpcie_get_td_interface(struct xpcie_transfer_desc *td)
+{
+ return intel_xpcie_ioread16(&td->interface);
+}
+
+static inline
+void intel_xpcie_set_td_status(struct xpcie_transfer_desc *td, u16 status)
+{
+ intel_xpcie_iowrite16(status, &td->status);
+}
+
+static inline
+u16 intel_xpcie_get_td_status(struct xpcie_transfer_desc *td)
+{
+ return intel_xpcie_ioread16(&td->status);
+}
+
+static inline
+void intel_xpcie_set_tdr_head(struct xpcie_pipe *p, u32 head)
+{
+ intel_xpcie_iowrite32(head, p->head);
+}
+
+static inline
+u32 intel_xpcie_get_tdr_head(struct xpcie_pipe *p)
+{
+ return intel_xpcie_ioread32(p->head);
+}
+
+static inline
+void intel_xpcie_set_tdr_tail(struct xpcie_pipe *p, u32 tail)
+{
+ intel_xpcie_iowrite32(tail, p->tail);
+}
+
+static inline
+u32 intel_xpcie_get_tdr_tail(struct xpcie_pipe *p)
+{
+ return intel_xpcie_ioread32(p->tail);
+}
+
+int intel_xpcie_core_init(struct xpcie *xpcie);
+void intel_xpcie_core_cleanup(struct xpcie *xpcie);
+int intel_xpcie_core_read(struct xpcie *xpcie, void *buffer, size_t *length,
+ u32 timeout_ms);
+int intel_xpcie_core_write(struct xpcie *xpcie, void *buffer, size_t *length,
+ u32 timeout_ms);
+u32 intel_xpcie_get_device_num(u32 *id_list);
+struct xpcie_dev *intel_xpcie_get_device_by_id(u32 id);
+int intel_xpcie_get_device_name_by_id(u32 id, char *device_name,
+ size_t name_size);
+int intel_xpcie_get_device_status_by_id(u32 id, u32 *status);
+int intel_xpcie_pci_connect_device(u32 id);
+int intel_xpcie_pci_read(u32 id, void *data, size_t *size, u32 timeout);
+int intel_xpcie_pci_write(u32 id, void *data, size_t *size, u32 timeout);
+int intel_xpcie_pci_reset_device(u32 id);
+#endif /* XPCIE_CORE_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/local_host/epf.c b/drivers/misc/xlink-pcie/local_host/epf.c
index 7019aecd6a81..7fceed9d2a4f 100644
--- a/drivers/misc/xlink-pcie/local_host/epf.c
+++ b/drivers/misc/xlink-pcie/local_host/epf.c
@@ -7,6 +7,7 @@

#include <linux/of.h>
#include <linux/platform_device.h>
+#include <linux/reboot.h>

#include "epf.h"

@@ -19,6 +20,12 @@
#define PCIE_REGS_PCIE_ERR_INTR_FLAGS 0x24
#define LINK_REQ_RST_FLG BIT(15)

+#define PCIE_REGS_PCIE_SYS_CFG_CORE 0x7C
+#define PCIE_CFG_PBUS_NUM_OFFSET 8
+#define PCIE_CFG_PBUS_NUM_MASK 0xFF
+#define PCIE_CFG_PBUS_DEV_NUM_OFFSET 16
+#define PCIE_CFG_PBUS_DEV_NUM_MASK 0x1F
+
static struct pci_epf_header xpcie_header = {
.vendorid = PCI_VENDOR_ID_INTEL,
.deviceid = PCI_DEVICE_ID_INTEL_KEEMBAY,
@@ -35,6 +42,45 @@ static const struct pci_epf_device_id xpcie_epf_ids[] = {
{},
};

+u32 xlink_sw_id;
+
+int intel_xpcie_copy_from_host_ll(struct xpcie *xpcie, int chan, int descs_num)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct pci_epf *epf = xpcie_epf->epf;
+
+ return intel_xpcie_ep_dma_read_ll(epf, chan, descs_num);
+}
+
+int intel_xpcie_copy_to_host_ll(struct xpcie *xpcie, int chan, int descs_num)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct pci_epf *epf = xpcie_epf->epf;
+
+ return intel_xpcie_ep_dma_write_ll(epf, chan, descs_num);
+}
+
+void intel_xpcie_register_host_irq(struct xpcie *xpcie, irq_handler_t func)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+
+ xpcie_epf->core_irq_callback = func;
+}
+
+int intel_xpcie_raise_irq(struct xpcie *xpcie, enum xpcie_doorbell_type type)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+ struct pci_epf *epf = xpcie_epf->epf;
+
+ intel_xpcie_set_doorbell(xpcie, FROM_DEVICE, type, 1);
+
+ return pci_epc_raise_irq(epf->epc, epf->func_no, PCI_EPC_IRQ_MSI, 1);
+}
+
static irqreturn_t intel_xpcie_err_interrupt(int irq, void *args)
{
struct xpcie_epf *xpcie_epf;
@@ -55,6 +101,7 @@ static irqreturn_t intel_xpcie_host_interrupt(int irq, void *args)
{
struct xpcie_epf *xpcie_epf;
struct xpcie *xpcie = args;
+ u8 event;
u32 val;

xpcie_epf = container_of(xpcie, struct xpcie_epf, xpcie);
@@ -62,6 +109,18 @@ static irqreturn_t intel_xpcie_host_interrupt(int irq, void *args)
if (val & LBC_CII_EVENT_FLAG) {
iowrite32(LBC_CII_EVENT_FLAG,
xpcie_epf->apb_base + PCIE_REGS_PCIE_INTR_FLAGS);
+
+ event = intel_xpcie_get_doorbell(xpcie, TO_DEVICE, DEV_EVENT);
+ if (unlikely(event != NO_OP)) {
+ intel_xpcie_set_doorbell(xpcie, TO_DEVICE,
+ DEV_EVENT, NO_OP);
+ if (event == REQUEST_RESET)
+ orderly_reboot();
+ return IRQ_HANDLED;
+ }
+
+ if (likely(xpcie_epf->core_irq_callback))
+ xpcie_epf->core_irq_callback(irq, xpcie);
}

return IRQ_HANDLED;
@@ -231,6 +290,7 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)
struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
const struct pci_epc_features *features;
struct pci_epc *epc = epf->epc;
+ u32 bus_num, dev_num;
struct device *dev;
size_t align = SZ_16K;
int ret;
@@ -260,12 +320,12 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)

if (!strcmp(xpcie_epf->stepping, "A0")) {
xpcie_epf->xpcie.legacy_a0 = true;
- iowrite32(1, (void __iomem *)xpcie_epf->xpcie.mmio +
- XPCIE_MMIO_LEGACY_A0);
+ intel_xpcie_iowrite32(1, xpcie_epf->xpcie.mmio +
+ XPCIE_MMIO_LEGACY_A0);
} else {
xpcie_epf->xpcie.legacy_a0 = false;
- iowrite32(0, (void __iomem *)xpcie_epf->xpcie.mmio +
- XPCIE_MMIO_LEGACY_A0);
+ intel_xpcie_iowrite32(0, xpcie_epf->xpcie.mmio +
+ XPCIE_MMIO_LEGACY_A0);
}

/* Enable interrupt */
@@ -290,13 +350,46 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)
ret = intel_xpcie_ep_dma_init(epf);
if (ret) {
dev_err(&epf->dev, "DMA initialization failed\n");
- goto err_free_err_irq;
+ goto err_cleanup_bars;
}

+ intel_xpcie_set_device_status(&xpcie_epf->xpcie, XPCIE_STATUS_READY);
+
+ ret = ioread32(xpcie_epf->apb_base + PCIE_REGS_PCIE_SYS_CFG_CORE);
+ bus_num = (ret >> PCIE_CFG_PBUS_NUM_OFFSET) & PCIE_CFG_PBUS_NUM_MASK;
+ dev_num = (ret >> PCIE_CFG_PBUS_DEV_NUM_OFFSET) &
+ PCIE_CFG_PBUS_DEV_NUM_MASK;
+
+ xlink_sw_id = FIELD_PREP(XLINK_DEV_INF_TYPE_MASK,
+ XLINK_DEV_INF_PCIE) |
+ FIELD_PREP(XLINK_DEV_PHYS_ID_MASK,
+ bus_num << 8 | dev_num) |
+ FIELD_PREP(XLINK_DEV_TYPE_MASK, XLINK_DEV_TYPE_KMB) |
+ FIELD_PREP(XLINK_DEV_PCIE_ID_MASK, XLINK_DEV_PCIE_0) |
+ FIELD_PREP(XLINK_DEV_FUNC_MASK, XLINK_DEV_FUNC_VPU);
+
+ ret = intel_xpcie_core_init(&xpcie_epf->xpcie);
+ if (ret) {
+ dev_err(&epf->dev, "Core component configuration failed\n");
+ goto err_uninit_dma;
+ }
+
+ intel_xpcie_iowrite32(XPCIE_STATUS_UNINIT,
+ xpcie_epf->xpcie.mmio + XPCIE_MMIO_HOST_STATUS);
+ intel_xpcie_set_device_status(&xpcie_epf->xpcie, XPCIE_STATUS_RUN);
+ intel_xpcie_set_doorbell(&xpcie_epf->xpcie, FROM_DEVICE,
+ DEV_EVENT, NO_OP);
+ memcpy(xpcie_epf->xpcie.mmio + XPCIE_MMIO_MAGIC_OFF, XPCIE_MAGIC_YOCTO,
+ strlen(XPCIE_MAGIC_YOCTO));
+
return 0;

-err_free_err_irq:
- free_irq(xpcie_epf->irq_err, &xpcie_epf->xpcie);
+err_uninit_dma:
+ intel_xpcie_set_device_status(&xpcie_epf->xpcie, XPCIE_STATUS_ERROR);
+ memcpy(xpcie_epf->xpcie.mmio + XPCIE_MMIO_MAGIC_OFF, XPCIE_MAGIC_YOCTO,
+ strlen(XPCIE_MAGIC_YOCTO));
+
+ intel_xpcie_ep_dma_uninit(epf);

err_cleanup_bars:
intel_xpcie_cleanup_bars(epf);
@@ -306,8 +399,12 @@ static int intel_xpcie_epf_bind(struct pci_epf *epf)

static void intel_xpcie_epf_unbind(struct pci_epf *epf)
{
+ struct xpcie_epf *xpcie_epf = epf_get_drvdata(epf);
struct pci_epc *epc = epf->epc;

+ intel_xpcie_core_cleanup(&xpcie_epf->xpcie);
+ intel_xpcie_set_device_status(&xpcie_epf->xpcie, XPCIE_STATUS_READY);
+
intel_xpcie_ep_dma_uninit(epf);

pci_epc_stop(epc);
@@ -339,8 +436,11 @@ static void intel_xpcie_epf_shutdown(struct device *dev)
xpcie_epf = epf_get_drvdata(epf);

/* Notify host in case PCIe hot plug not supported */
- if (xpcie_epf)
+ if (xpcie_epf && xpcie_epf->xpcie.status == XPCIE_STATUS_RUN) {
+ intel_xpcie_set_doorbell(&xpcie_epf->xpcie, FROM_DEVICE,
+ DEV_EVENT, DEV_SHUTDOWN);
pci_epc_raise_irq(epf->epc, epf->func_no, PCI_EPC_IRQ_MSI, 1);
+ }
}

static struct pci_epf_ops ops = {
diff --git a/drivers/misc/xlink-pcie/local_host/epf.h b/drivers/misc/xlink-pcie/local_host/epf.h
index 82410404d54e..7220cead0973 100644
--- a/drivers/misc/xlink-pcie/local_host/epf.h
+++ b/drivers/misc/xlink-pcie/local_host/epf.h
@@ -12,6 +12,7 @@
#include <linux/pci-epf.h>

#include "xpcie.h"
+#include "util.h"

#define XPCIE_DRIVER_NAME "mxlk_pcie_epf"
#define XPCIE_DRIVER_DESC "Intel(R) xLink PCIe endpoint function driver"
@@ -24,6 +25,7 @@
#define XPCIE_NUM_RX_DESCS (64)

extern bool dma_ll_mode;
+extern u32 xlink_sw_id;

struct xpcie_dma_ll_desc {
u32 dma_ch_control1;
@@ -65,14 +67,35 @@ struct xpcie_epf {
void __iomem *dbi_base;
char stepping[KEEMBAY_XPCIE_STEPPING_MAXLEN];

+ irq_handler_t core_irq_callback;
+ dma_addr_t tx_phys;
+ void *tx_virt;
+ size_t tx_size;
+
struct xpcie_dma_ll_desc_buf tx_desc_buf[DMA_CHAN_NUM];
struct xpcie_dma_ll_desc_buf rx_desc_buf[DMA_CHAN_NUM];
};

+static inline struct device *xpcie_to_dev(struct xpcie *xpcie)
+{
+ struct xpcie_epf *xpcie_epf = container_of(xpcie,
+ struct xpcie_epf, xpcie);
+
+ return &xpcie_epf->epf->dev;
+}
+
int intel_xpcie_ep_dma_init(struct pci_epf *epf);
int intel_xpcie_ep_dma_uninit(struct pci_epf *epf);
int intel_xpcie_ep_dma_reset(struct pci_epf *epf);
int intel_xpcie_ep_dma_read_ll(struct pci_epf *epf, int chan, int descs_num);
int intel_xpcie_ep_dma_write_ll(struct pci_epf *epf, int chan, int descs_num);

+void intel_xpcie_register_host_irq(struct xpcie *xpcie,
+ irq_handler_t func);
+int intel_xpcie_raise_irq(struct xpcie *xpcie,
+ enum xpcie_doorbell_type type);
+int intel_xpcie_copy_from_host_ll(struct xpcie *xpcie,
+ int chan, int descs_num);
+int intel_xpcie_copy_to_host_ll(struct xpcie *xpcie,
+ int chan, int descs_num);
#endif /* XPCIE_EPF_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/local_host/util.c b/drivers/misc/xlink-pcie/local_host/util.c
new file mode 100644
index 000000000000..ec808b0cd72b
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/util.c
@@ -0,0 +1,375 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#include "util.h"
+
+void intel_xpcie_set_device_status(struct xpcie *xpcie, u32 status)
+{
+ xpcie->status = status;
+ intel_xpcie_iowrite32(status, xpcie->mmio + XPCIE_MMIO_DEV_STATUS);
+}
+
+u32 intel_xpcie_get_device_status(struct xpcie *xpcie)
+{
+ return intel_xpcie_ioread32(xpcie->mmio + XPCIE_MMIO_DEV_STATUS);
+}
+
+static size_t intel_xpcie_doorbell_offset(struct xpcie *xpcie,
+ enum xpcie_doorbell_direction dirt,
+ enum xpcie_doorbell_type type)
+{
+ if (dirt == TO_DEVICE && type == DATA_SENT)
+ return XPCIE_MMIO_HTOD_TX_DOORBELL;
+ if (dirt == TO_DEVICE && type == DATA_RECEIVED)
+ return XPCIE_MMIO_HTOD_RX_DOORBELL;
+ if (dirt == TO_DEVICE && type == DEV_EVENT)
+ return XPCIE_MMIO_HTOD_EVENT_DOORBELL;
+ if (dirt == FROM_DEVICE && type == DATA_SENT)
+ return XPCIE_MMIO_DTOH_TX_DOORBELL;
+ if (dirt == FROM_DEVICE && type == DATA_RECEIVED)
+ return XPCIE_MMIO_DTOH_RX_DOORBELL;
+ if (dirt == FROM_DEVICE && type == DEV_EVENT)
+ return XPCIE_MMIO_DTOH_EVENT_DOORBELL;
+
+ return 0;
+}
+
+void intel_xpcie_set_doorbell(struct xpcie *xpcie,
+ enum xpcie_doorbell_direction dirt,
+ enum xpcie_doorbell_type type, u8 value)
+{
+ size_t offset = intel_xpcie_doorbell_offset(xpcie, dirt, type);
+
+ intel_xpcie_iowrite8(value, xpcie->mmio + offset);
+}
+
+u8 intel_xpcie_get_doorbell(struct xpcie *xpcie,
+ enum xpcie_doorbell_direction dirt,
+ enum xpcie_doorbell_type type)
+{
+ size_t offset = intel_xpcie_doorbell_offset(xpcie, dirt, type);
+
+ return intel_xpcie_ioread8(xpcie->mmio + offset);
+}
+
+u32 intel_xpcie_get_host_status(struct xpcie *xpcie)
+{
+ return intel_xpcie_ioread32(xpcie->mmio + XPCIE_MMIO_HOST_STATUS);
+}
+
+void intel_xpcie_set_host_status(struct xpcie *xpcie, u32 status)
+{
+ xpcie->status = status;
+ intel_xpcie_iowrite32(status, xpcie->mmio + XPCIE_MMIO_HOST_STATUS);
+}
+
+struct xpcie_buf_desc *intel_xpcie_alloc_bd(size_t length)
+{
+ struct xpcie_buf_desc *bd;
+
+ bd = kzalloc(sizeof(*bd), GFP_KERNEL);
+ if (!bd)
+ return NULL;
+
+ bd->head = kzalloc(roundup(length, cache_line_size()), GFP_KERNEL);
+ if (!bd->head) {
+ kfree(bd);
+ return NULL;
+ }
+
+ bd->data = bd->head;
+ bd->length = length;
+ bd->true_len = length;
+ bd->next = NULL;
+ bd->own_mem = true;
+
+ return bd;
+}
+
+struct xpcie_buf_desc *intel_xpcie_alloc_bd_reuse(size_t length, void *virt,
+ dma_addr_t phys)
+{
+ struct xpcie_buf_desc *bd;
+
+ bd = kzalloc(sizeof(*bd), GFP_KERNEL);
+ if (!bd)
+ return NULL;
+
+ bd->head = virt;
+ bd->phys = phys;
+ bd->data = bd->head;
+ bd->length = length;
+ bd->true_len = length;
+ bd->next = NULL;
+ bd->own_mem = false;
+
+ return bd;
+}
+
+void intel_xpcie_free_bd(struct xpcie_buf_desc *bd)
+{
+ if (bd) {
+ if (bd->own_mem)
+ kfree(bd->head);
+ kfree(bd);
+ }
+}
+
+int intel_xpcie_list_init(struct xpcie_list *list)
+{
+ spin_lock_init(&list->lock);
+ list->bytes = 0;
+ list->buffers = 0;
+ list->head = NULL;
+ list->tail = NULL;
+
+ return 0;
+}
+
+void intel_xpcie_list_cleanup(struct xpcie_list *list)
+{
+ struct xpcie_buf_desc *bd;
+
+ spin_lock(&list->lock);
+ while (list->head) {
+ bd = list->head;
+ list->head = bd->next;
+ intel_xpcie_free_bd(bd);
+ }
+
+ list->head = NULL;
+ list->tail = NULL;
+ spin_unlock(&list->lock);
+}
+
+int intel_xpcie_list_put(struct xpcie_list *list, struct xpcie_buf_desc *bd)
+{
+ if (!bd)
+ return -EINVAL;
+
+ spin_lock(&list->lock);
+ if (list->head)
+ list->tail->next = bd;
+ else
+ list->head = bd;
+
+ while (bd) {
+ list->tail = bd;
+ list->bytes += bd->length;
+ list->buffers++;
+ bd = bd->next;
+ }
+ spin_unlock(&list->lock);
+
+ return 0;
+}
+
+int intel_xpcie_list_put_head(struct xpcie_list *list,
+ struct xpcie_buf_desc *bd)
+{
+ struct xpcie_buf_desc *old_head;
+
+ if (!bd)
+ return -EINVAL;
+
+ spin_lock(&list->lock);
+ old_head = list->head;
+ list->head = bd;
+ while (bd) {
+ list->bytes += bd->length;
+ list->buffers++;
+ if (!bd->next) {
+ list->tail = list->tail ? list->tail : bd;
+ bd->next = old_head;
+ break;
+ }
+ bd = bd->next;
+ }
+ spin_unlock(&list->lock);
+
+ return 0;
+}
+
+struct xpcie_buf_desc *intel_xpcie_list_get(struct xpcie_list *list)
+{
+ struct xpcie_buf_desc *bd;
+
+ spin_lock(&list->lock);
+ bd = list->head;
+ if (list->head) {
+ list->head = list->head->next;
+ if (!list->head)
+ list->tail = NULL;
+ bd->next = NULL;
+ list->bytes -= bd->length;
+ list->buffers--;
+ }
+ spin_unlock(&list->lock);
+
+ return bd;
+}
+
+void intel_xpcie_list_info(struct xpcie_list *list,
+ size_t *bytes, size_t *buffers)
+{
+ spin_lock(&list->lock);
+ *bytes = list->bytes;
+ *buffers = list->buffers;
+ spin_unlock(&list->lock);
+}
+
+struct xpcie_buf_desc *intel_xpcie_alloc_rx_bd(struct xpcie *xpcie)
+{
+ struct xpcie_buf_desc *bd;
+
+ bd = intel_xpcie_list_get(&xpcie->rx_pool);
+ if (bd) {
+ bd->data = bd->head;
+ bd->length = bd->true_len;
+ bd->next = NULL;
+ bd->interface = 0;
+ }
+
+ return bd;
+}
+
+void intel_xpcie_free_rx_bd(struct xpcie *xpcie, struct xpcie_buf_desc *bd)
+{
+ if (bd)
+ intel_xpcie_list_put(&xpcie->rx_pool, bd);
+}
+
+struct xpcie_buf_desc *intel_xpcie_alloc_tx_bd(struct xpcie *xpcie)
+{
+ struct xpcie_buf_desc *bd;
+
+ bd = intel_xpcie_list_get(&xpcie->tx_pool);
+ if (bd) {
+ bd->data = bd->head;
+ bd->length = bd->true_len;
+ bd->next = NULL;
+ bd->interface = 0;
+ } else {
+ xpcie->no_tx_buffer = true;
+ }
+
+ return bd;
+}
+
+void intel_xpcie_free_tx_bd(struct xpcie *xpcie, struct xpcie_buf_desc *bd)
+{
+ if (!bd)
+ return;
+
+ intel_xpcie_list_put(&xpcie->tx_pool, bd);
+
+ xpcie->no_tx_buffer = false;
+ wake_up_interruptible(&xpcie->tx_waitq);
+}
+
+int intel_xpcie_interface_init(struct xpcie *xpcie, int id)
+{
+ struct xpcie_interface *inf = xpcie->interfaces + id;
+
+ inf->id = id;
+ inf->xpcie = xpcie;
+
+ inf->partial_read = NULL;
+ intel_xpcie_list_init(&inf->read);
+ mutex_init(&inf->rlock);
+ inf->data_avail = false;
+ init_waitqueue_head(&inf->rx_waitq);
+
+ return 0;
+}
+
+void intel_xpcie_interface_cleanup(struct xpcie_interface *inf)
+{
+ struct xpcie_buf_desc *bd;
+
+ intel_xpcie_free_rx_bd(inf->xpcie, inf->partial_read);
+ while ((bd = intel_xpcie_list_get(&inf->read)))
+ intel_xpcie_free_rx_bd(inf->xpcie, bd);
+
+ mutex_destroy(&inf->rlock);
+}
+
+void intel_xpcie_interfaces_cleanup(struct xpcie *xpcie)
+{
+ int index;
+
+ for (index = 0; index < XPCIE_NUM_INTERFACES; index++)
+ intel_xpcie_interface_cleanup(xpcie->interfaces + index);
+
+ intel_xpcie_list_cleanup(&xpcie->write);
+ mutex_destroy(&xpcie->wlock);
+}
+
+int intel_xpcie_interfaces_init(struct xpcie *xpcie)
+{
+ int index;
+
+ mutex_init(&xpcie->wlock);
+ intel_xpcie_list_init(&xpcie->write);
+ init_waitqueue_head(&xpcie->tx_waitq);
+ xpcie->no_tx_buffer = false;
+
+ for (index = 0; index < XPCIE_NUM_INTERFACES; index++)
+ intel_xpcie_interface_init(xpcie, index);
+
+ return 0;
+}
+
+void intel_xpcie_add_bd_to_interface(struct xpcie *xpcie,
+ struct xpcie_buf_desc *bd)
+{
+ struct xpcie_interface *inf;
+
+ inf = xpcie->interfaces + bd->interface;
+
+ intel_xpcie_list_put(&inf->read, bd);
+
+ mutex_lock(&inf->rlock);
+ inf->data_avail = true;
+ mutex_unlock(&inf->rlock);
+ wake_up_interruptible(&inf->rx_waitq);
+}
+
+void *intel_xpcie_cap_find(struct xpcie *xpcie, u32 start, u16 id)
+{
+ int ttl = XPCIE_CAP_TTL;
+ void *hdr;
+ u16 id_out, next;
+
+ /* If user didn't specify start, assume start of mmio */
+ if (!start)
+ start = intel_xpcie_ioread32(xpcie->mmio + XPCIE_MMIO_CAP_OFF);
+
+ /* Read header info */
+ hdr = xpcie->mmio + start;
+
+ /* Check if we still have time to live */
+ while (ttl--) {
+ id_out = intel_xpcie_ioread16(hdr + XPCIE_CAP_HDR_ID);
+ next = intel_xpcie_ioread16(hdr + XPCIE_CAP_HDR_NEXT);
+
+ /* If cap matches, return header */
+ if (id_out == id)
+ return hdr;
+ /* If cap is NULL, we are at the end of the list */
+ else if (id_out == XPCIE_CAP_NULL)
+ return NULL;
+ /* If no match and no end of list, traverse the linked list */
+ else
+ hdr = xpcie->mmio + next;
+ }
+
+ /* If we reached here, the capability list is corrupted */
+ return NULL;
+}
diff --git a/drivers/misc/xlink-pcie/local_host/util.h b/drivers/misc/xlink-pcie/local_host/util.h
new file mode 100644
index 000000000000..908be897a61d
--- /dev/null
+++ b/drivers/misc/xlink-pcie/local_host/util.h
@@ -0,0 +1,70 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*****************************************************************************
+ *
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ ****************************************************************************/
+
+#ifndef XPCIE_UTIL_HEADER_
+#define XPCIE_UTIL_HEADER_
+
+#include "xpcie.h"
+
+enum xpcie_doorbell_direction {
+ TO_DEVICE,
+ FROM_DEVICE
+};
+
+enum xpcie_doorbell_type {
+ DATA_SENT,
+ DATA_RECEIVED,
+ DEV_EVENT
+};
+
+enum xpcie_event_type {
+ NO_OP,
+ REQUEST_RESET,
+ DEV_SHUTDOWN
+};
+
+void intel_xpcie_set_doorbell(struct xpcie *xpcie,
+ enum xpcie_doorbell_direction dirt,
+ enum xpcie_doorbell_type type, u8 value);
+u8 intel_xpcie_get_doorbell(struct xpcie *xpcie,
+ enum xpcie_doorbell_direction dirt,
+ enum xpcie_doorbell_type type);
+
+void intel_xpcie_set_device_status(struct xpcie *xpcie, u32 status);
+u32 intel_xpcie_get_device_status(struct xpcie *xpcie);
+u32 intel_xpcie_get_host_status(struct xpcie *xpcie);
+void intel_xpcie_set_host_status(struct xpcie *xpcie, u32 status);
+
+struct xpcie_buf_desc *intel_xpcie_alloc_bd(size_t length);
+struct xpcie_buf_desc *intel_xpcie_alloc_bd_reuse(size_t length, void *virt,
+ dma_addr_t phys);
+void intel_xpcie_free_bd(struct xpcie_buf_desc *bd);
+
+int intel_xpcie_list_init(struct xpcie_list *list);
+void intel_xpcie_list_cleanup(struct xpcie_list *list);
+int intel_xpcie_list_put(struct xpcie_list *list, struct xpcie_buf_desc *bd);
+int intel_xpcie_list_put_head(struct xpcie_list *list,
+ struct xpcie_buf_desc *bd);
+struct xpcie_buf_desc *intel_xpcie_list_get(struct xpcie_list *list);
+void intel_xpcie_list_info(struct xpcie_list *list, size_t *bytes,
+ size_t *buffers);
+
+struct xpcie_buf_desc *intel_xpcie_alloc_rx_bd(struct xpcie *xpcie);
+void intel_xpcie_free_rx_bd(struct xpcie *xpcie, struct xpcie_buf_desc *bd);
+struct xpcie_buf_desc *intel_xpcie_alloc_tx_bd(struct xpcie *xpcie);
+void intel_xpcie_free_tx_bd(struct xpcie *xpcie, struct xpcie_buf_desc *bd);
+
+int intel_xpcie_interface_init(struct xpcie *xpcie, int id);
+void intel_xpcie_interface_cleanup(struct xpcie_interface *inf);
+void intel_xpcie_interfaces_cleanup(struct xpcie *xpcie);
+int intel_xpcie_interfaces_init(struct xpcie *xpcie);
+void intel_xpcie_add_bd_to_interface(struct xpcie *xpcie,
+ struct xpcie_buf_desc *bd);
+void *intel_xpcie_cap_find(struct xpcie *xpcie, u32 start, u16 id);
+#endif /* XPCIE_UTIL_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/local_host/xpcie.h b/drivers/misc/xlink-pcie/local_host/xpcie.h
index 0745e6dfee10..8a559617daba 100644
--- a/drivers/misc/xlink-pcie/local_host/xpcie.h
+++ b/drivers/misc/xlink-pcie/local_host/xpcie.h
@@ -14,6 +14,8 @@
#include <linux/module.h>
#include <linux/pci_ids.h>

+#include "core.h"
+
#ifndef PCI_DEVICE_ID_INTEL_KEEMBAY
#define PCI_DEVICE_ID_INTEL_KEEMBAY 0x6240
#endif
@@ -21,18 +23,79 @@
#define XPCIE_IO_COMM_SIZE SZ_16K
#define XPCIE_MMIO_OFFSET SZ_4K

+/* Status encoding of both device and host */
+#define XPCIE_STATUS_ERROR (0xFFFFFFFF)
+#define XPCIE_STATUS_UNINIT (0)
+#define XPCIE_STATUS_READY (1)
+#define XPCIE_STATUS_RECOVERY (2)
+#define XPCIE_STATUS_OFF (3)
+#define XPCIE_STATUS_RUN (4)
+
+#define XPCIE_MAGIC_STRLEN (16)
+#define XPCIE_MAGIC_YOCTO "VPUYOCTO"
+
/* MMIO layout and offsets shared between device and host */
struct xpcie_mmio {
+ u32 device_status;
+ u32 host_status;
u8 legacy_a0;
+ u8 htod_tx_doorbell;
+ u8 htod_rx_doorbell;
+ u8 htod_event_doorbell;
+ u8 dtoh_tx_doorbell;
+ u8 dtoh_rx_doorbell;
+ u8 dtoh_event_doorbell;
+ u8 reserved;
+ u32 cap_offset;
+ u8 magic[XPCIE_MAGIC_STRLEN];
} __packed;

+#define XPCIE_MMIO_DEV_STATUS (offsetof(struct xpcie_mmio, device_status))
+#define XPCIE_MMIO_HOST_STATUS (offsetof(struct xpcie_mmio, host_status))
#define XPCIE_MMIO_LEGACY_A0 (offsetof(struct xpcie_mmio, legacy_a0))
+#define XPCIE_MMIO_HTOD_TX_DOORBELL \
+ (offsetof(struct xpcie_mmio, htod_tx_doorbell))
+#define XPCIE_MMIO_HTOD_RX_DOORBELL \
+ (offsetof(struct xpcie_mmio, htod_rx_doorbell))
+#define XPCIE_MMIO_HTOD_EVENT_DOORBELL \
+ (offsetof(struct xpcie_mmio, htod_event_doorbell))
+#define XPCIE_MMIO_DTOH_TX_DOORBELL \
+ (offsetof(struct xpcie_mmio, dtoh_tx_doorbell))
+#define XPCIE_MMIO_DTOH_RX_DOORBELL \
+ (offsetof(struct xpcie_mmio, dtoh_rx_doorbell))
+#define XPCIE_MMIO_DTOH_EVENT_DOORBELL \
+ (offsetof(struct xpcie_mmio, dtoh_event_doorbell))
+#define XPCIE_MMIO_CAP_OFF (offsetof(struct xpcie_mmio, cap_offset))
+#define XPCIE_MMIO_MAGIC_OFF (offsetof(struct xpcie_mmio, magic))

struct xpcie {
u32 status;
bool legacy_a0;
void *mmio;
void *bar4;
+
+ struct workqueue_struct *rx_wq;
+ struct workqueue_struct *tx_wq;
+
+ struct xpcie_interface interfaces[XPCIE_NUM_INTERFACES];
+
+ size_t fragment_size;
+ struct xpcie_cap_txrx *txrx;
+ struct xpcie_stream tx;
+ struct xpcie_stream rx;
+
+ struct mutex wlock; /* write lock */
+ struct xpcie_list write;
+ bool no_tx_buffer;
+ wait_queue_head_t tx_waitq;
+ bool tx_pending;
+ bool stop_flag;
+
+ struct xpcie_list rx_pool;
+ struct xpcie_list tx_pool;
+
+ struct delayed_work rx_event;
+ struct delayed_work tx_event;
};

#endif /* XPCIE_HEADER_ */
diff --git a/include/linux/xlink_drv_inf.h b/include/linux/xlink_drv_inf.h
new file mode 100644
index 000000000000..8ffbaafecc88
--- /dev/null
+++ b/include/linux/xlink_drv_inf.h
@@ -0,0 +1,58 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Intel Keem Bay XLink PCIe Driver
+ *
+ * Copyright (C) 2021 Intel Corporation
+ */
+
+#ifndef _XLINK_DRV_INF_H_
+#define _XLINK_DRV_INF_H_
+
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+#include <linux/types.h>
+
+#define XLINK_DEV_INF_TYPE_MASK GENMASK(27, 24)
+#define XLINK_DEV_PHYS_ID_MASK GENMASK(23, 8)
+#define XLINK_DEV_TYPE_MASK GENMASK(6, 4)
+#define XLINK_DEV_PCIE_ID_MASK GENMASK(3, 1)
+#define XLINK_DEV_FUNC_MASK GENMASK(0, 0)
+
+enum xlink_device_inf_type {
+ XLINK_DEV_INF_PCIE = 1,
+};
+
+enum xlink_device_type {
+ XLINK_DEV_TYPE_KMB = 0,
+};
+
+enum xlink_device_pcie {
+ XLINK_DEV_PCIE_0 = 0,
+};
+
+enum xlink_device_func {
+ XLINK_DEV_FUNC_VPU = 0,
+};
+
+enum _xlink_device_status {
+ _XLINK_DEV_OFF,
+ _XLINK_DEV_ERROR,
+ _XLINK_DEV_BUSY,
+ _XLINK_DEV_RECOVERY,
+ _XLINK_DEV_READY
+};
+
+int xlink_pcie_get_device_list(u32 *sw_device_id_list,
+ u32 *num_devices);
+int xlink_pcie_get_device_name(u32 sw_device_id, char *device_name,
+ size_t name_size);
+int xlink_pcie_get_device_status(u32 sw_device_id,
+ u32 *device_status);
+int xlink_pcie_boot_device(u32 sw_device_id, const char *binary_name);
+int xlink_pcie_connect(u32 sw_device_id);
+int xlink_pcie_read(u32 sw_device_id, void *data, size_t *const size,
+ u32 timeout);
+int xlink_pcie_write(u32 sw_device_id, void *data, size_t *const size,
+ u32 timeout);
+int xlink_pcie_reset_device(u32 sw_device_id);
+#endif
--
2.17.1

2021-02-12 22:39:02

by mark gross

Subject: [PATCH v6 31/34] Intel Keembay XLink SMBus driver

From: Ramya P Karanth <[email protected]>

Add the XLink SMBus driver for the Intel Keem Bay SoC.

The xlink-smbus driver is a logical SMBus adapter that uses XLink
(xlink-pcie) as its transport. Keem Bay vision accelerators are
connected to the server over a PCIe interface. The server needs to know
the temperature of the SoC, which can be read from either on-board or
on-chip sensors. These sensors are typically connected to the SoC's I2C
bus, so the server has no direct access to them. With the xlink-smbus
interface, the server accesses the on-board/on-chip sensors through the
xlink-smbus adapter.

Signed-off-by: Ramya P Karanth <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
---
Documentation/i2c/busses/index.rst | 1 +
.../i2c/busses/intel-xlink-smbus.rst | 71 +++
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/xlink-smbus/Kconfig | 25 +
drivers/misc/xlink-smbus/Makefile | 5 +
drivers/misc/xlink-smbus/xlink-smbus.c | 467 ++++++++++++++++++
7 files changed, 571 insertions(+)
create mode 100644 Documentation/i2c/busses/intel-xlink-smbus.rst
create mode 100644 drivers/misc/xlink-smbus/Kconfig
create mode 100644 drivers/misc/xlink-smbus/Makefile
create mode 100644 drivers/misc/xlink-smbus/xlink-smbus.c

diff --git a/Documentation/i2c/busses/index.rst b/Documentation/i2c/busses/index.rst
index 5e4077b08d86..6ce4a740f616 100644
--- a/Documentation/i2c/busses/index.rst
+++ b/Documentation/i2c/busses/index.rst
@@ -29,4 +29,5 @@ I2C Bus Drivers
i2c-taos-evm
i2c-viapro
i2c-via
+ intel-xlink-smbus
scx200_acb
diff --git a/Documentation/i2c/busses/intel-xlink-smbus.rst b/Documentation/i2c/busses/intel-xlink-smbus.rst
new file mode 100644
index 000000000000..ab87d18051b4
--- /dev/null
+++ b/Documentation/i2c/busses/intel-xlink-smbus.rst
@@ -0,0 +1,71 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==========================
+Kernel driver: xlink_smbus
+==========================
+
+Supported chips:
+ * Intel Edge.AI Computer Vision platforms: Keem Bay
+
+ Suffix: Bay
+
+ Slave address: The address is selectable via device tree. (TBD)
+
+Authors:
+ - Raja Subramanian, Lakshmi Bai <[email protected]>
+ - Thalaiappan, Rathina <[email protected]>
+ - Karanth, Ramya P <[email protected]>
+
+Description
+===========
+Intel Edge.AI Computer Vision platforms are monitored through platform
+devices such as sensors, fan controllers and IO expanders. Some of these
+devices are memory mapped and some are I2C based; neither kind is
+directly accessible to the host.
+
+The host here refers to the server to which the vision accelerators are
+connected over the PCIe interface. The host needs to take consolidated
+action based on the parameters of these platform devices. In general,
+most of the standard devices (sensors, fan controllers, IO expanders,
+etc.) are I2C/SMBus based and are used to report the status of the
+accelerator. Standard drivers for these devices already exist and are
+built on the I2C/SMBus APIs.
+
+Instead of changing the sensor drivers to adapt to the PCIe interface, a
+generic I2C adapter, "xlink-smbus", which underneath uses xlink as the
+physical medium, is provided. With xlink-smbus, the drivers for the
+platform devices do not need any interface change.
+
+High-level architecture
+=======================
+
+Accessing Onchip devices::
+
+ ------------------- -------------------
+ | Remote Host | | Local Host |
+ | IA CPU | | Vision platforms|
+ ------------------- -------------------
+ | Onchip | | i2c slave | ==> Access the device
+ | sensor driver | | handler | ==> which is mmio based
+ ------------------- -------------------
+ |Intel XLINK_SMBUS| |Intel XLINK_SMBUS|
+ | adapter | | adapter |
+ | (Master) | | (I2C_SLAVE) |
+ ------------------- -------------------
+ | XLINK | <==========> | XLINK |
+ ------------------- PCIE -------------------
+
+Accessing Onboard devices::
+
+ ------------------- ----------------------
+ | Remote Host | | Local Host |
+ | IA CPU | | Vision platforms |
+ ------------------- ----------------------
+ | On board | | i2c smbus | ==> Access the device
+ | sensor driver | | xfer [synopsys] | ==> which is on i2c bus
+ ------------------- ----------------------
+ |Intel XLINK_SMBUS| | Intel XLINK_SMBUS |
+ | adapter | | adapter |
+ | (Master) | |(SMBUS_PROXY Master)|
+ ------------------- ----------------------
+ | XLINK | <==========> | XLINK |
+ ------------------- PCIE ----------------------
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index aed3ef61897c..f6229dd8ba9e 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -486,4 +486,5 @@ source "drivers/misc/xlink-ipc/Kconfig"
source "drivers/misc/xlink-core/Kconfig"
source "drivers/misc/vpumgr/Kconfig"
source "drivers/misc/intel_tsens/Kconfig"
+source "drivers/misc/xlink-smbus/Kconfig"
endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index c08502b22778..0ed8a62cbb20 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -62,3 +62,4 @@ obj-$(CONFIG_XLINK_IPC) += xlink-ipc/
obj-$(CONFIG_XLINK_CORE) += xlink-core/
obj-$(CONFIG_VPUMGR) += vpumgr/
obj-y += intel_tsens/
+obj-$(CONFIG_XLINK_SMBUS) += xlink-smbus/
diff --git a/drivers/misc/xlink-smbus/Kconfig b/drivers/misc/xlink-smbus/Kconfig
new file mode 100644
index 000000000000..8d2451f8d1be
--- /dev/null
+++ b/drivers/misc/xlink-smbus/Kconfig
@@ -0,0 +1,25 @@
+# Copyright (C) 2020 Intel Corporation
+# SPDX-License-Identifier: GPL-2.0-only
+
+config XLINK_SMBUS
+ tristate "Enable SMBUS interface over Xlink PCIe"
+ depends on XLINK_CORE
+ depends on HDDL_DEVICE_CLIENT || HDDL_DEVICE_SERVER
+ help
+ Enable xlink-pcie as an I2C adapter, both slave and master. The server
+ (Remote Host) will use this interface to get sensor data from the SoC
+ (vision accelerator - Local Host) which is connected over PCIe.
+ This driver is loaded on both Remote Host and Local Host.
+ Select M to compile the driver as a module, name is xlink-smbus.
+ If unsure, select N.
+
+config XLINK_SMBUS_PROXY
+ tristate "Enable SMBUS adapter as proxy for I2C controller"
+ depends on XLINK_CORE
+ depends on XLINK_SMBUS
+ help
+ Enable this config when SMBUS adapter is acting as proxy for
+ another I2C controller.
+ Select M or Y if building for Intel Vision Processing Unit (VPU)
+ Local Host core.
+ Select N if building for a Remote Host kernel.
diff --git a/drivers/misc/xlink-smbus/Makefile b/drivers/misc/xlink-smbus/Makefile
new file mode 100644
index 000000000000..27369dfa488c
--- /dev/null
+++ b/drivers/misc/xlink-smbus/Makefile
@@ -0,0 +1,5 @@
+# Copyright (C) 2020 Intel Corporation
+# SPDX-License-Identifier: GPL-2.0-only
+# Makefile for Xlink SMBus
+#
+obj-$(CONFIG_XLINK_SMBUS) += xlink-smbus.o
diff --git a/drivers/misc/xlink-smbus/xlink-smbus.c b/drivers/misc/xlink-smbus/xlink-smbus.c
new file mode 100644
index 000000000000..fc652e6c96bb
--- /dev/null
+++ b/drivers/misc/xlink-smbus/xlink-smbus.c
@@ -0,0 +1,467 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Intel Xlink SMBus Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ */
+
+#include <linux/hddl_device.h>
+#include <linux/i2c.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/stddef.h>
+#include <linux/time.h>
+#include <linux/xlink.h>
+
+struct xlink_msg {
+ u16 addr;
+ u16 flags;
+ u8 read_write;
+ u8 command;
+ u16 padding;
+ u32 protocol;
+ union i2c_smbus_data data;
+ s32 status;
+ struct list_head node;
+};
+
+struct xlink_adapter_data {
+ struct xlink_handle *xhandle;
+ struct completion work;
+ struct task_struct *task_recv;
+ struct i2c_client *slave;
+ struct list_head head;
+ struct i2c_adapter *adap;
+ u32 channel;
+};
+
+#if defined(CONFIG_XLINK_SMBUS_PROXY)
+/*
+ * PROXY the commands using existing adapter
+ * I2C2 is fixed for Keem Bay, it has all sensors connected
+ */
+#define proxy_i2c_adapter_info() i2c_get_adapter(2)
+#else
+/*
+ * This is an adapter by itself
+ * It doesn't proxy transfer on another adapter
+ */
+#define proxy_i2c_adapter_info() ((void *)0)
+#endif
+
+#if IS_ENABLED(CONFIG_I2C_SLAVE)
+/*
+ * The complete slave protocol is implemented in one shot here as
+ * the whole chunk of data is transferred or received via xlink,
+ * not byte-by-byte
+ * Refer to https://lwn.net/Articles/640346/ for the protocol
+ */
+static s32 handle_slave_mode(struct i2c_client *slave, struct xlink_msg *msg)
+{
+ struct device *dev = &slave->dev;
+ u8 temp;
+
+ /* Send the command as first write */
+ i2c_slave_event(slave, I2C_SLAVE_WRITE_REQUESTED, NULL);
+ i2c_slave_event(slave, I2C_SLAVE_WRITE_RECEIVED, &msg->command);
+
+ /* Now handle specifics to read/write */
+ if (msg->read_write == I2C_SMBUS_WRITE) {
+ if (msg->protocol == I2C_SMBUS_BYTE_DATA) {
+ i2c_slave_event(slave, I2C_SLAVE_WRITE_RECEIVED,
+ &msg->data.byte);
+ } else if (msg->protocol == I2C_SMBUS_WORD_DATA) {
+ temp = msg->data.word & 0xFF;
+ i2c_slave_event(slave,
+ I2C_SLAVE_WRITE_RECEIVED,
+ &temp);
+ temp = (msg->data.word >> 8) & 0xFF;
+ i2c_slave_event(slave,
+ I2C_SLAVE_WRITE_RECEIVED,
+ &temp);
+ } else if (msg->protocol == I2C_SMBUS_BLOCK_DATA) {
+ int i;
+
+ if (msg->data.block[0] > I2C_SMBUS_BLOCK_MAX)
+ return -EPROTO;
+
+ for (i = 1; (i <= msg->data.block[0] &&
+ i <= I2C_SMBUS_BLOCK_MAX); ++i) {
+ i2c_slave_event(slave,
+ I2C_SLAVE_WRITE_RECEIVED,
+ &msg->data.block[i]);
+ }
+ } else {
+ dev_err(dev,
+ "unknown protocol (%d) received in %s\n",
+ msg->protocol,
+ __func__
+ );
+ return -EOPNOTSUPP;
+ }
+ } else {
+ if (msg->protocol == I2C_SMBUS_BYTE_DATA) {
+ i2c_slave_event(slave,
+ I2C_SLAVE_READ_REQUESTED,
+ &msg->data.byte);
+ } else if (msg->protocol == I2C_SMBUS_WORD_DATA) {
+ /* mirror the write path: low byte first */
+ i2c_slave_event(slave,
+ I2C_SLAVE_READ_REQUESTED,
+ &temp);
+ msg->data.word = temp;
+ i2c_slave_event(slave,
+ I2C_SLAVE_READ_REQUESTED,
+ &temp);
+ msg->data.word |= temp << 8;
+ } else if (msg->protocol == I2C_SMBUS_BLOCK_DATA) {
+ int i;
+
+ if (msg->data.block[0] > I2C_SMBUS_BLOCK_MAX)
+ return -EPROTO;
+
+ for (i = 1; (i <= msg->data.block[0] &&
+ i <= I2C_SMBUS_BLOCK_MAX); ++i) {
+ i2c_slave_event(slave,
+ I2C_SLAVE_READ_REQUESTED,
+ &msg->data.block[i]);
+ }
+ } else {
+ dev_err(dev,
+ "unknown protocol (%d) received in %s\n",
+ msg->protocol,
+ __func__);
+ return -EOPNOTSUPP;
+ }
+ i2c_slave_event(slave, I2C_SLAVE_READ_PROCESSED, &temp);
+ }
+ i2c_slave_event(slave, I2C_SLAVE_STOP, NULL);
+ return 0;
+}
+#endif /* CONFIG_I2C_SLAVE */
+
+static s32 xlink_smbus_xfer(struct i2c_adapter *adap, u16 addr,
+ unsigned short flags, char read_write,
+ u8 command, int protocol,
+ union i2c_smbus_data *data)
+{
+ struct xlink_adapter_data *adapt_data = NULL;
+ struct device *dev = NULL;
+ struct xlink_msg tx_msg, *rx_msg;
+ enum xlink_error xerr;
+ s32 rc = 0;
+
+ if (!adap)
+ return -ENODEV;
+ adapt_data = i2c_get_adapdata(adap);
+ dev = &adapt_data->adap->dev;
+
+ /* data may be NULL, e.g. for SMBus quick transactions */
+ tx_msg.addr = addr;
+ tx_msg.flags = flags;
+ tx_msg.read_write = read_write;
+ tx_msg.command = command;
+ tx_msg.protocol = protocol;
+ if (data)
+ tx_msg.data = *data;
+ tx_msg.status = 0;
+
+ xerr = xlink_write_data(adapt_data->xhandle, adapt_data->channel,
+ (u8 *)&tx_msg,
+ sizeof(struct xlink_msg));
+
+ if (xerr != X_LINK_SUCCESS) {
+ dev_err_ratelimited(dev,
+ "xlink_write_data failed (%d), dropping packet\n",
+ xerr);
+ return -EIO;
+ }
+
+ /*
+ * Wait for the response from the peer host device. The message is
+ * received by xlinki2c_receive_thread and signalled here through
+ * the completion.
+ */
+ if (wait_for_completion_interruptible_timeout(&adapt_data->work,
+ 4 * HZ) > 0) {
+ rx_msg = list_first_entry(&adapt_data->head,
+ struct xlink_msg,
+ node);
+ list_del(&rx_msg->node);
+
+ /* Update data and status from the received xlink message */
+ if (data)
+ *data = rx_msg->data;
+ rc = rx_msg->status;
+
+ /* free the response received from Proxy */
+ kfree(rx_msg);
+ } else {
+ dev_err_ratelimited(dev, "timed out waiting for VPU response\n");
+ rc = -ETIMEDOUT;
+ }
+
+ return rc;
+}
+
+static int xlinki2c_receive_thread(void *param)
+{
+ struct xlink_adapter_data *adapt_data = param;
+ struct device *dev = &adapt_data->adap->dev;
+ struct i2c_adapter *adap;
+ enum xlink_error xerr;
+ struct xlink_msg *msg;
+ u32 size;
+
+ while (!kthread_should_stop()) {
+ /* msg will be freed in this context or other */
+ msg = kzalloc(sizeof(*msg), GFP_KERNEL);
+ if (!msg)
+ return -ENOMEM;
+
+ /* Wait to receive xlink message from the peer device */
+ xerr = xlink_read_data_to_buffer(adapt_data->xhandle,
+ adapt_data->channel,
+ (u8 *)msg, &size);
+ if (xerr != X_LINK_SUCCESS) {
+ if (xerr != X_LINK_TIMEOUT) {
+ dev_warn_ratelimited(dev,
+ "[%d] Error (%d) dropping packet.\n",
+ adapt_data->adap->nr, xerr);
+ }
+ kfree(msg);
+ continue;
+ }
+ xlink_release_data(adapt_data->xhandle, adapt_data->channel,
+ NULL);
+ adap = proxy_i2c_adapter_info();
+
+ if (adap) {
+#if IS_ENABLED(CONFIG_I2C_SLAVE)
+ if (adapt_data->slave) {
+ msg->status = handle_slave_mode
+ (adapt_data->slave, msg);
+ goto send_resp;
+ }
+#endif
+ /*
+ * This is a proxy for an existing adapter.
+ * call the local adapter to receive the data
+ * from the hardware.
+ */
+ msg->status = i2c_smbus_xfer(adap,
+ msg->addr,
+ msg->flags,
+ msg->read_write,
+ msg->command,
+ msg->protocol,
+ &msg->data);
+
+ /*
+ * Send back the complete message that
+ * carries status, back to sender which is
+ * waiting on xlinki2c_receive_thread
+ */
+#if IS_ENABLED(CONFIG_I2C_SLAVE)
+send_resp:
+#endif
+ xlink_write_data(adapt_data->xhandle,
+ adapt_data->channel, (u8 *)msg,
+ sizeof(struct xlink_msg));
+ kfree(msg);
+ /* drop the reference taken by i2c_get_adapter() */
+ i2c_put_adapter(adap);
+ } else {
+ /*
+ * This is an adapter on its own.
+ * Receives the status and data over xlink (msg).
+ * Indicate the data received to the component
+ * which is waiting in xlink_smbus_xfer
+ */
+ list_add_tail(&msg->node, &adapt_data->head);
+ complete(&adapt_data->work);
+ }
+ } /* thread loop */
+ dev_dbg(dev, "[%d] %s stopped\n", adapt_data->adap->nr, __func__);
+
+ return 0;
+}
+
+static inline u32 xlink_smbus_func(struct i2c_adapter *adapter)
+{
+ u32 func = I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE |
+ I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA |
+ I2C_FUNC_SMBUS_BLOCK_DATA;
+
+ return func;
+}
+
+#if IS_ENABLED(CONFIG_I2C_SLAVE)
+
+/*
+ * This will be called when slave client driver
+ * register itself to an adapter
+ */
+static int xlink_smbus_reg_slave(struct i2c_client *slave)
+{
+ struct xlink_adapter_data *adapt_data =
+ i2c_get_adapdata(slave->adapter);
+
+ adapt_data->slave = slave;
+
+ return 0;
+}
+
+static int xlink_smbus_unreg_slave(struct i2c_client *slave)
+{
+ struct xlink_adapter_data *adapt_data =
+ i2c_get_adapdata(slave->adapter);
+
+ adapt_data->slave = NULL;
+
+ return 0;
+}
+#endif
+
+static const struct i2c_algorithm xlink_algorithm = {
+ .smbus_xfer = xlink_smbus_xfer,
+ .functionality = xlink_smbus_func,
+#if IS_ENABLED(CONFIG_I2C_SLAVE)
+ .reg_slave = xlink_smbus_reg_slave,
+ .unreg_slave = xlink_smbus_unreg_slave,
+#endif
+};
+
+static int xlink_i2c_probe(struct platform_device *pdev)
+{
+ struct intel_hddl_clients *c = pdev->dev.platform_data;
+ struct xlink_handle *devhandle = &c->xlink_dev;
+ struct xlink_adapter_data *adapt_data;
+ struct device *dev = &pdev->dev;
+ struct i2c_adapter *adap;
+ int rc;
+
+ dev_dbg(dev, "Registering xlink SMBus adapter...\n");
+
+ adap = kzalloc(sizeof(*adap), GFP_KERNEL);
+ if (!adap)
+ return -ENOMEM;
+
+ c->adap[pdev->id & 0x3] = adap;
+ adap->owner = THIS_MODULE;
+ adap->algo = &xlink_algorithm;
+ strscpy(adap->name, "xlink adapter", sizeof(adap->name));
+ platform_set_drvdata(pdev, adap);
+
+ adapt_data = kzalloc(sizeof(*adapt_data), GFP_KERNEL);
+ if (!adapt_data) {
+ kfree(adap);
+ return -ENOMEM;
+ }
+
+ init_completion(&adapt_data->work);
+
+ INIT_LIST_HEAD(&adapt_data->head);
+ adapt_data->channel = c->xlink_i2c_ch[pdev->id & 0x3];
+ adapt_data->xhandle = devhandle;
+ adapt_data->adap = adap;
+
+ rc = xlink_open_channel(devhandle,
+ adapt_data->channel,
+ RXB_TXB, /* mode */
+ 64 * 1024,
+ 100); /* timeout */
+ if (rc != X_LINK_SUCCESS) {
+ dev_err(dev, "xlink_open_channel failed[%d][%d][%p]\n", rc,
+ adapt_data->channel,
+ adapt_data->xhandle);
+ goto err_kfree;
+ }
+
+ i2c_set_adapdata(adap, adapt_data);
+
+ rc = i2c_add_adapter(adap);
+ if (rc)
+ goto err_exit;
+
+ /* Create receiver thread */
+ adapt_data->task_recv = kthread_run(xlinki2c_receive_thread,
+ adapt_data,
+ "xlinki2c_receive_thread");
+ if (IS_ERR(adapt_data->task_recv)) {
+ rc = PTR_ERR(adapt_data->task_recv);
+ dev_err(dev, "%s: thread creation failed\n", __func__);
+ i2c_del_adapter(adapt_data->adap);
+ goto err_exit;
+ }
+ return 0;
+
+err_exit:
+ xlink_close_channel(adapt_data->xhandle, adapt_data->channel);
+err_kfree:
+ kfree(adap);
+ kfree(adapt_data);
+ return rc;
+}
+
+static int xlink_i2c_remove(struct platform_device *pdev)
+{
+ struct i2c_adapter *adap = platform_get_drvdata(pdev);
+ struct xlink_adapter_data *adapt_data = i2c_get_adapdata(adap);
+
+ kthread_stop(adapt_data->task_recv);
+
+ dev_info(&adap->dev, "Delete the adapter[%d]\n", adap->nr);
+ /* Close the channel and disconnect */
+ xlink_close_channel(adapt_data->xhandle, adapt_data->channel);
+ /* This will block the dynamic registration */
+ i2c_del_adapter(adapt_data->adap);
+ kfree(adapt_data);
+
+ return 0;
+}
+
+static struct platform_driver xlink_i2c_driver = {
+ .probe = xlink_i2c_probe,
+ .remove = xlink_i2c_remove,
+ .driver = {
+ .name = "i2c_xlink"
+ }
+};
+
+static void __exit xlink_adapter_exit(void)
+{
+ platform_driver_unregister(&xlink_i2c_driver);
+}
+
+static int __init xlink_adapter_init(void)
+{
+ return platform_driver_register(&xlink_i2c_driver);
+}
+
+module_init(xlink_adapter_init);
+module_exit(xlink_adapter_exit);
+
+MODULE_AUTHOR("Raja Subramanian, Lakshmi Bai <[email protected]>");
+MODULE_AUTHOR("Thalaiappan, Rathina <[email protected]>");
+MODULE_AUTHOR("Karanth, Ramya P <[email protected]>");
+MODULE_DESCRIPTION("xlink i2c adapter");
+MODULE_LICENSE("GPL");
--
2.17.1

2021-02-12 22:39:09

by mark gross

[permalink] [raw]
Subject: [PATCH v6 29/34] Intel tsens i2c slave driver.

From: "C, Udhayakumar" <[email protected]>

Add Intel tsens i2c slave driver for Intel Edge.AI Computer Vision
platforms.

The tsens i2c slave driver enables reading the on-chip sensors present
in the Intel Edge.AI Computer Vision platforms. The tsens i2c module
reports the various junction and SoC temperatures using the i2c slave
protocol.
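As a rough illustration of the register layout: the patch below encodes both
the sensor id (low 6 bits) and the byte index of the 32-bit temperature (top
2 bits) into the single SMBus command byte, per the TSENS_* masks in the
driver. The following userspace sketch mirrors that encoding; the helper
names are illustrative and not part of the driver itself.

```c
#include <assert.h>
#include <stdint.h>

#define TSENS_BYTE_INDEX_SHIFT	0x6
#define TSENS_BYTE_INDEX_MASK	0x3
#define TSENS_SENSOR_TYPE_MASK	0x3F

/* Pack a sensor id and a byte index into one SMBus command byte,
 * mirroring the layout the tsens slave callback decodes. */
static uint8_t tsens_pack_cmd(uint8_t sensor, uint8_t byte_idx)
{
	return (uint8_t)(((byte_idx & TSENS_BYTE_INDEX_MASK) <<
			  TSENS_BYTE_INDEX_SHIFT) |
			 (sensor & TSENS_SENSOR_TYPE_MASK));
}

/* Byte the slave would place on the bus for a given command and a
 * 32-bit temperature (picked in memory order, as the driver does). */
static uint8_t tsens_reply_byte(uint8_t cmd, int32_t temp)
{
	uint8_t offset = (cmd >> TSENS_BYTE_INDEX_SHIFT) &
			 TSENS_BYTE_INDEX_MASK;
	const uint8_t *p = (const uint8_t *)&temp;

	return p[offset];
}
```

Reading a full temperature therefore takes four byte-data reads, one per
byte index, which the remote host reassembles into the 32-bit value.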

Signed-off-by: C Udhayakumar <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
---
drivers/misc/intel_tsens/Kconfig | 14 +++
drivers/misc/intel_tsens/Makefile | 1 +
drivers/misc/intel_tsens/intel_tsens_i2c.c | 119 +++++++++++++++++++++
3 files changed, 134 insertions(+)
create mode 100644 drivers/misc/intel_tsens/intel_tsens_i2c.c

diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
index 8b263fdd80c3..be8d27e81864 100644
--- a/drivers/misc/intel_tsens/Kconfig
+++ b/drivers/misc/intel_tsens/Kconfig
@@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
Say Y if using a processor that includes the Intel VPU such as
Keem Bay. If unsure, say N.

+config INTEL_TSENS_I2C_SLAVE
+ bool "I2C slave driver for intel tsens"
+ depends on INTEL_TSENS_LOCAL_HOST
+ depends on I2C=y && I2C_SLAVE
+ help
+ This option enables the tsens I2C slave driver.
+
+ This driver is used for reporting thermal data via I2C
+ SMBus to a remote host.
+ Enable this option if you want support for a thermal
+ management controller.
+ Say Y if using a processor that includes the Intel VPU such as
+ Keem Bay. If unsure, say N.
+
config INTEL_TSENS_IA_HOST
tristate "Temperature sensor driver for intel tsens remote host"
depends on I2C && THERMAL
diff --git a/drivers/misc/intel_tsens/Makefile b/drivers/misc/intel_tsens/Makefile
index 250dc484fb49..f6f41bbca80c 100644
--- a/drivers/misc/intel_tsens/Makefile
+++ b/drivers/misc/intel_tsens/Makefile
@@ -5,4 +5,5 @@
#

obj-$(CONFIG_INTEL_TSENS_LOCAL_HOST) += intel_tsens_thermal.o
+obj-$(CONFIG_INTEL_TSENS_I2C_SLAVE) += intel_tsens_i2c.o
obj-$(CONFIG_INTEL_TSENS_IA_HOST) += intel_tsens_host.o
diff --git a/drivers/misc/intel_tsens/intel_tsens_i2c.c b/drivers/misc/intel_tsens/intel_tsens_i2c.c
new file mode 100644
index 000000000000..520c3f4bf392
--- /dev/null
+++ b/drivers/misc/intel_tsens/intel_tsens_i2c.c
@@ -0,0 +1,119 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * Intel tsens I2C thermal Driver
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <linux/i2c.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include "intel_tsens_thermal.h"
+
+#define TSENS_BYTE_INDEX_SHIFT 0x6
+#define TSENS_BYTE_INDEX_MASK 0x3
+#define TSENS_SENSOR_TYPE_MASK 0x3F
+
+struct intel_tsens_i2c {
+ int sensor_type;
+ u16 buffer_idx;
+ bool read_only;
+ u8 idx_write_cnt;
+ struct intel_tsens_i2c_plat_data *plat_data;
+};
+
+static int intel_i2c_tsens_slave_cb(struct i2c_client *client,
+ enum i2c_slave_event event, u8 *val)
+{
+ struct intel_tsens_i2c *tsens_i2c = i2c_get_clientdata(client);
+ struct intel_tsens_i2c_plat_data *plat_data = tsens_i2c->plat_data;
+ int ret = 0;
+
+ switch (event) {
+ case I2C_SLAVE_WRITE_RECEIVED:
+ tsens_i2c->sensor_type = *val;
+ break;
+
+ case I2C_SLAVE_READ_PROCESSED:
+ case I2C_SLAVE_READ_REQUESTED:
+ if (plat_data->get_temp) {
+ int temp;
+ int sensor_type = tsens_i2c->sensor_type &
+ TSENS_SENSOR_TYPE_MASK;
+
+ if (!plat_data->get_temp(sensor_type, &temp,
+ plat_data->pdata)) {
+ u8 offset = (tsens_i2c->sensor_type >>
+ TSENS_BYTE_INDEX_SHIFT) &
+ TSENS_BYTE_INDEX_MASK;
+ u8 *ptr_temp = (u8 *)&temp;
+
+ *val = ptr_temp[offset];
+ tsens_i2c->buffer_idx++;
+ ret = 0;
+ } else {
+ ret = -EINVAL;
+ }
+ } else {
+ ret = -EINVAL;
+ }
+ break;
+
+ case I2C_SLAVE_STOP:
+ case I2C_SLAVE_WRITE_REQUESTED:
+ tsens_i2c->idx_write_cnt = 0;
+ tsens_i2c->buffer_idx = 0;
+ break;
+
+ default:
+ break;
+ }
+ return ret;
+}
+
+static int intel_i2c_tsens_slave_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct intel_tsens_i2c *priv;
+ int ret;
+
+ if (!id->driver_data) {
+ dev_err(&client->dev, "No platform data");
+ return -EINVAL;
+ }
+ priv = devm_kzalloc(&client->dev,
+ sizeof(struct intel_tsens_i2c),
+ GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+ priv->plat_data = (struct intel_tsens_i2c_plat_data *)id->driver_data;
+ i2c_set_clientdata(client, priv);
+ ret = i2c_slave_register(client, intel_i2c_tsens_slave_cb);
+ if (ret)
+ dev_err(&client->dev, "i2c slave register failed\n");
+
+ return ret;
+}
+
+static const struct i2c_device_id intel_i2c_tsens_slave_id[] = {
+ { "intel_tsens", (kernel_ulong_t)&i2c_plat_data},
+ {}
+};
+MODULE_DEVICE_TABLE(i2c, intel_i2c_tsens_slave_id);
+
+static struct i2c_driver intel_i2c_tsens_slave_driver = {
+ .driver = {
+ .name = "intel_tsens",
+ },
+ .probe = intel_i2c_tsens_slave_probe,
+ .remove = i2c_slave_unregister,
+ .id_table = intel_i2c_tsens_slave_id,
+};
+
+module_i2c_driver(intel_i2c_tsens_slave_driver);
+
+MODULE_AUTHOR("Udhayakumar C <[email protected]>");
+MODULE_DESCRIPTION("tsens i2c slave driver");
+MODULE_LICENSE("GPL");
--
2.17.1

2021-02-12 22:40:27

by mark gross

[permalink] [raw]
Subject: [PATCH v6 33/34] misc: Hddl device management for local host

From: "C, Udhayakumar" <[email protected]>

Add local host hddl device management for Intel Edge.AI Computer Vision
platforms.

About Intel Edge.AI Computer Vision platforms:
---------------------------------------------
The Intel Edge.AI Computer Vision platforms are vision processing systems
targeting machine vision applications for connected devices.

They are based on an ARM A53 CPU running Linux and act as PCIe
endpoint devices.

High-level architecture:
------------------------

Remote Host IA CPU Local Host ARM CPU
------------------------------- ----------------------------
| * Send time as xlink packet | |* Sync time with IA host |
| * receive sensor details | |* Prepare and share sensor|
| and register as i2c or | | details to IA host as |
| xlink smbus slaves | | xlink packets |
------------------------------- ----------------------------
| hddl server | <=====> | hddl client |
------------------------------- xlink ----------------------------

hddl device module:
-------------------
The HDDL client driver acts as a software RTC to sync with network
time. It abstracts the xlink protocol used to communicate with the
remote host. This driver exports the details about the sensors
available on the platform to the remote host as xlink packets.
It also handles device connect/disconnect events and identifies the
board id and SoC id using GPIOs, based on the platform configuration.

- Local Host driver
* Intended for ARM CPU
* It is based on xlink Framework
* Driver path:
{tree}/drivers/misc/hddl_device/hddl_device_client.c

The local ARM host and remote IA host drivers communicate using the
XLINK protocol.
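As a rough sketch of the time-sync step described above: the server's
wall-clock time arrives as an xlink packet and the client applies it
locally. The packet layout and helper below are assumptions for
illustration only, not the driver's actual wire format.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative time-sync packet; the field names are assumptions,
 * not the driver's actual wire format. */
struct hddl_time_pkt {
	int64_t tv_sec;
	int64_t tv_nsec;
};

/* Convert the received time to nanoseconds, optionally compensating
 * for an estimated one-way xlink/PCIe transfer delay. */
static int64_t hddl_adjust_time_ns(const struct hddl_time_pkt *pkt,
				   int64_t link_delay_ns)
{
	return pkt->tv_sec * 1000000000LL + pkt->tv_nsec + link_delay_ns;
}
```

The client would feed the adjusted value into its clock before the
sensor-handshake steps that follow, so that thermal events on both hosts
share a common timebase.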

Signed-off-by: C Udhayakumar <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
---
.../misc-devices/hddl_device_client.rst | 212 +++++
Documentation/misc-devices/index.rst | 1 +
Documentation/vpu/index.rst | 1 +
MAINTAINERS | 1 +
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/hddl_device/Kconfig | 14 +
drivers/misc/hddl_device/Makefile | 5 +
drivers/misc/hddl_device/hddl_device.c | 565 +++++++++++++
drivers/misc/hddl_device/hddl_device_lh.c | 764 ++++++++++++++++++
drivers/misc/hddl_device/hddl_device_util.h | 52 ++
11 files changed, 1617 insertions(+)
create mode 100644 Documentation/misc-devices/hddl_device_client.rst
create mode 100644 drivers/misc/hddl_device/Kconfig
create mode 100644 drivers/misc/hddl_device/Makefile
create mode 100644 drivers/misc/hddl_device/hddl_device.c
create mode 100644 drivers/misc/hddl_device/hddl_device_lh.c
create mode 100644 drivers/misc/hddl_device/hddl_device_util.h

diff --git a/Documentation/misc-devices/hddl_device_client.rst b/Documentation/misc-devices/hddl_device_client.rst
new file mode 100644
index 000000000000..413643b6b500
--- /dev/null
+++ b/Documentation/misc-devices/hddl_device_client.rst
@@ -0,0 +1,212 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=================================
+Kernel driver: hddl_device_client
+=================================
+
+Supported chips:
+ * Intel Edge.AI Computer Vision platforms: Keem Bay
+
+Authors:
+ - Thalaiappan, Rathina <[email protected]>
+ - Udhayakumar C <[email protected]>
+
+
+Overview
+========
+
+This driver supports hddl device management for Intel Edge.AI Computer Vision
+platforms.
+
+This driver supports the following features:
+
+ - Exports details of the temperature sensors, current sensor and fan
+ controller present in Intel Edge.AI Computer Vision platforms to the IA
+ host.
+ - Enables time sync of the Intel Edge.AI Computer Vision platform with the
+ IA host.
+ - Handles device connect and disconnect events.
+ - Receives slave addresses from the IA host for the memory mapped thermal
+ sensors present in the SoC (Documentation/hwmon/intel_tsens_sensors.rst).
+ - Registers i2c slave devices for the slaves present in the Intel Edge.AI
+ Computer Vision platform.
+
+The Keem Bay platform has the following sensors.
+
+On-chip sensors:
+
+ - Media Subsystem (mss) temperature sensor
+ - NN subsystem (nce) temperature sensor
+ - Compute subsystem (cse) temperature sensor
+ - SoC (maximum of mss, nce and cse)
+
+On-board sensors:
+
+ - two lm75 temperature sensors
+ - emc2103 fan controller
+ - ina3221 current sensor
+
+High-level architecture
+=======================
+::
+
+ Remote Host IA CPU Local Host ARM CPU
+ ------------------------------- ----------------------------
+ | * Send time as xlink packet | |* Sync time with IA host |
+ | * receive sensor details | |* Prepare and share sensor|
+ | and register as i2c or | | details to IA host as |
+ | xlink smbus slaves | | xlink packets |
+ ------------------------------- ----------------------------
+ | hddl server | <=====> | hddl client |
+ ------------------------------- xlink ----------------------------
+
+Driver Structure
+================
+
+The driver provides a platform device where the ``probe`` and ``remove``
+operations are provided.
+
+ - probe: gets the list of external sensors from device-tree entries,
+ identifies the board id and SoC id based on the device-tree
+ configuration, and spawns a kernel thread to monitor for new PCIe
+ devices.
+
+ - init task: polls for new PCIe devices at a 5 second interval and
+ creates a connect task to set up each new device.
+
+ - connect task: the main entity, which connects to the hddl device server
+ using xlink and performs the basic initialization and handshaking.
+ It also monitors hddl device server link down/link up events and
+ reinitializes the client-side drivers accordingly.
+
+ - remove: unregisters the i2c client devices and i2c adapters, and closes
+ the xlink channel.
+
+HDDL Client Sequence – Basic Setup and handshaking with HDDL Device Server
+==========================================================================
+::
+
+ ,-----. ,---------. ,------------. ,------------------.
+ |probe| |Init task| |connect task| |hddl device server|
+ `--+--' `----+----' `-----+------' `--------+---------'
+ ----. | | |
+ | "Parse DT" | | |
+ <---' | | |
+ | | | |
+ | ,------------------!. | | |
+ | |Get sensor details|_\ | | |
+ | |from device tree | | | |
+ | `--------------------' | | |
+ ----. | | |
+ | "Identify Board Id"| | |
+ <---' | | |
+ | | | |
+ | "Creates kthread" | | |
+ |----------------------->| | |
+ | | | |
+ | ,-----------------------!. | |
+ | |creates kernel thread |_\ | |
+ | |to check for new device | | |
+ | `-------------------------' | |
+ ,---------------------!. ----. | |
+ |check for new device |_\ | | |
+ |with time interval of | <---' | |
+ |5 seconds | | | |
+ `-----------------------' | | |
+ ,---------------------!. | | |
+ |if new device found?.|_\ | | |
+ |creates connect task | |-------------------->| |
+ |to setup new device | | | |
+ `-----------------------' | | |
+ | ,-------------------!. |----. |
+ | |setup xlink channel|_\| | |
+ | |to communicate with ||<---' |
+ | |server || |
+ | `---------------------'| |
+ | | | Get time data |
+ | | | from server |
+ | | | <--------------------------
+ | | | |
+ | | | share board id |
+ | | | -------------------------->
+ | | | |
+ | | | share total number of |
+ | | | sensors available in SoC |
+ | | | -------------------------->
+ | | | |
+ | ,-----------------------!. | |
+ | |For each sensors share |_\| |
+ | |sensor type, name, trip || -------------------------->
+ | |temp, trip type || |
+ | `-------------------------'| |
+ | | | Receives Send complete. |
+ | | | <--------------------------
+ | | | |
+ | | |----. |
+ | | | | Register xlink i2c |
+ | | |<---' adapters. |
+ | | | |
+ | | | |
+ | | | Receives slave addr for |
+ | | | each slave in SoC |
+ | | | <--------------------------
+ | | | |
+ | | |----. |
+ | | | | Register i2c clients.|
+ | | |<---' |
+ | | | |
+ | | |----.
+ | | | | poll for device status
+ | | |<---'
+ ,--+--. ,----+----. ,-----+------. ,--------+---------.
+ |probe| |Init task| |connect task| |hddl device server|
+ `-----' `---------' `------------' `------------------'
+
+
+XLINK i2c sequence:
+===================
+::
+
+ ,-----------------. ,--------. ,-----. ,---------.
+ |xlink-i2c-adapter| |I2C core| |xlink| |i2c-slave|
+ `--------+--------' `---+----' `--+--' `----+----'
+ | | | |
+ |---------------------->| | |
+ | | | |
+ | ,--------------------------!. | |
+ | |Initialize xlink based i2c|_\ | |
+ | |adapters. | | |
+ | `----------------------------' | |
+ | | | |
+ |<-----------------------------------------| |
+ | | | |
+ | ,--------------------------------!. | |
+ | |I2C request is received as xlink|_\ | |
+ | |packet from IA host | | |
+ | `----------------------------------' | |
+ | | | |
+ |---------------------->| | |
+ | | | |
+ | | ,---------------------------------!.
+ | | |xlink I2C request is converted to|_\
+ | | |standard i2c request |
+ | | `-----------------------------------'
+ | | | |
+ | | ----------------------------------->|
+ | | | |
+ | | ,----------------------!. |
+ | | |Linux i2c slave device|_\ |
+ | | |standard request | |
+ | | `------------------------' |
+ | | | |
+ | | <-----------------------------------|
+ | | | |
+ | | ,----------------------!. |
+ | | |Linux i2c slave device|_\ |
+ | | |standard response | |
+ | | `------------------------' |
+ | I2C response | | |
+ |<----------------------| | |
+ | | | |
+ | | | ,-------------------------!.
+ |----------------------------------------->| |I2C response is converted|_\
+ | | | |to xlink packet |
+ | | | `---------------------------'
+ ,--------+--------. ,---+----. ,--+--. ,----+----.
+ |xlink-i2c-adapter| |I2C core| |xlink| |i2c-slave|
+ `-----------------' `--------' `-----' `---------'
diff --git a/Documentation/misc-devices/index.rst b/Documentation/misc-devices/index.rst
index 64420b3314fe..102f7f9dea87 100644
--- a/Documentation/misc-devices/index.rst
+++ b/Documentation/misc-devices/index.rst
@@ -19,6 +19,7 @@ fit into other categories.
bh1770glc
eeprom
c2port
+ hddl_device_client
ibmvmc
ics932s401
isl29003
diff --git a/Documentation/vpu/index.rst b/Documentation/vpu/index.rst
index cd4272e089ec..b50b1376b591 100644
--- a/Documentation/vpu/index.rst
+++ b/Documentation/vpu/index.rst
@@ -17,3 +17,4 @@ This documentation contains information for the Intel VPU stack.
xlink-pcie
xlink-ipc
xlink-core
+ hddl_device_client
diff --git a/MAINTAINERS b/MAINTAINERS
index 4fb7d6f98569..cf49733a83d7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1988,6 +1988,7 @@ F: drivers/misc/intel_tsens/
ARM/INTEL TSENS SUPPORT
M: Udhayakumar C <[email protected]>
S: Supported
+F: drivers/misc/hddl_device/
F: drivers/misc/intel_tsens/

ARM/INTEL RESEARCH IMOTE/STARGATE 2 MACHINE SUPPORT
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index f6229dd8ba9e..f9f3d4d89e7b 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -487,4 +487,5 @@ source "drivers/misc/xlink-core/Kconfig"
source "drivers/misc/vpumgr/Kconfig"
source "drivers/misc/intel_tsens/Kconfig"
source "drivers/misc/xlink-smbus/Kconfig"
+source "drivers/misc/hddl_device/Kconfig"
endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index 0ed8a62cbb20..350618dbe678 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -63,3 +63,4 @@ obj-$(CONFIG_XLINK_CORE) += xlink-core/
obj-$(CONFIG_VPUMGR) += vpumgr/
obj-y += intel_tsens/
obj-$(CONFIG_XLINK_SMBUS) += xlink-smbus/
+obj-y += hddl_device/
diff --git a/drivers/misc/hddl_device/Kconfig b/drivers/misc/hddl_device/Kconfig
new file mode 100644
index 000000000000..e1ae81fdf177
--- /dev/null
+++ b/drivers/misc/hddl_device/Kconfig
@@ -0,0 +1,14 @@
+# Copyright (C) 2020 Intel Corporation
+# SPDX-License-Identifier: GPL-2.0-only
+
+config HDDL_DEVICE_CLIENT
+ tristate "Support for hddl device client"
+ depends on XLINK_CORE && INTEL_TSENS_LOCAL_HOST
+ help
+ This option enables the HDDL device client module.
+
+ This driver is used for sharing time sync data with the local host
+ and retrieves the sensors available on the platform. It also
+ handles the device connect/disconnect programming sequence.
+ Say Y if using a processor that includes the Intel VPU such as
+ Keem Bay. If unsure, say N.
diff --git a/drivers/misc/hddl_device/Makefile b/drivers/misc/hddl_device/Makefile
new file mode 100644
index 000000000000..dca381660baa
--- /dev/null
+++ b/drivers/misc/hddl_device/Makefile
@@ -0,0 +1,5 @@
+# Copyright (C) 2020 Intel Corporation
+# SPDX-License-Identifier: GPL-2.0-only
+
+obj-$(CONFIG_HDDL_DEVICE_CLIENT) += hddl_device_client.o
+hddl_device_client-objs += hddl_device_lh.o hddl_device.o
diff --git a/drivers/misc/hddl_device/hddl_device.c b/drivers/misc/hddl_device/hddl_device.c
new file mode 100644
index 000000000000..89e22adc3a03
--- /dev/null
+++ b/drivers/misc/hddl_device/hddl_device.c
@@ -0,0 +1,565 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * High Density Deep Learning HELPER module.
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <asm/page.h>
+#include <linux/cdev.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/gpio/consumer.h>
+#include <linux/hddl_device.h>
+#include <linux/i2c.h>
+#include <linux/ioctl.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
+#include <linux/slab.h>
+#include <linux/time.h>
+#include <linux/uaccess.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+#include <linux/xlink.h>
+#include <uapi/linux/stat.h>
+#include "hddl_device_util.h"
+
+#define HDDL_XLINK_OPEN_TIMEOUT 1000
+#define HDDL_I2C_CLIENT_INIT_TIME 1000
+
+enum hddl_device_event_type {
+ HDDL_NOTIFY_DEVICE_DISCONNECTED,
+ HDDL_NOTIFY_DEVICE_CONNECTED,
+ HDDL_NUM_EVENT_TYPE,
+};
+
+/*
+ * Register callback for device
+ * connect and disconnect events.
+ */
+static u32 hddl_device_events[] = {
+ HDDL_NOTIFY_DEVICE_DISCONNECTED,
+ HDDL_NOTIFY_DEVICE_CONNECTED,
+ HDDL_NUM_EVENT_TYPE
+};
+
+/**
+ * intel_hddl_new_device - check for new hddl device
+ * @dev: The hddl device.
+ * @hddl_clients: list of existing client devices.
+ * @sw_device_id_list: list of software device ids to check for a new device.
+ * @num_devices: number of ids present in sw_device_id_list.
+ * @n_devs: number of existing client devices in hddl_clients list.
+ *
+ * Returns a list of client devices, built by comparing the device ids in
+ * sw_device_id_list with those of the existing client devices. If a
+ * software id does not match any existing client device, a new list is
+ * allocated to accommodate the new client device, the existing client
+ * devices are copied to the new list and the old list is freed.
+ * If no new device id is found in sw_device_id_list, the existing list
+ * of client devices is returned.
+ */
+static struct intel_hddl_clients **
+ intel_hddl_new_device(struct device *dev,
+ struct intel_hddl_clients **hddl_clients,
+ u32 *sw_device_id_list,
+ u32 num_devices,
+ u32 *n_devs)
+{
+ struct intel_hddl_clients **cls;
+ bool match_found, new_dev = false;
+ int i, j;
+
+ /*
+ * Check whether there is any new device by comparing ids with the
+ * existing list.
+ */
+ for (i = 0; i < num_devices; i++) {
+ match_found = false;
+ for (j = 0; j < (*n_devs); j++) {
+ if (sw_device_id_list[i] ==
+ hddl_clients[j]->xlink_dev.sw_device_id) {
+ match_found = true;
+ break;
+ }
+ }
+ if (!match_found) {
+ new_dev = true;
+ break;
+ }
+ }
+ if (!new_dev)
+ return hddl_clients;
+ /*
+ * Allocate memory for new list
+ */
+ cls = kcalloc(num_devices,
+ sizeof(struct intel_hddl_clients *),
+ GFP_KERNEL);
+ if (!cls)
+ return hddl_clients;
+ /*
+ * copy hddl client devices to new list.
+ */
+ for (i = 0; i < num_devices; i++) {
+ for (j = 0; j < *n_devs; j++) {
+ if (sw_device_id_list[i] ==
+ hddl_clients[j]->xlink_dev.sw_device_id) {
+ cls[i] = hddl_clients[j];
+ break;
+ }
+ }
+ }
+ /*
+ * update number of devices to include new device and free up existing
+ * list.
+ */
+ *n_devs = num_devices;
+ kfree(hddl_clients);
+ return cls;
+}
+
+/**
+ * intel_hddl_setup_device - Initialize new client device
+ * @dev: The hddl device.
+ * @task: Thread function to setup and connect to host/device.
+ * @n_devs: number of client devices in hddl_clients list.
+ * @hddl_clients: list of existing client devices.
+ * @pdata: platform data.
+ *
+ * Returns the list of client devices. It also initializes each new device
+ * and creates a kernel thread to initiate communication over xlink.
+ */
+struct intel_hddl_clients **
+ intel_hddl_setup_device(struct device *dev,
+ intel_hddl_connect_task task, u32 *n_devs,
+ struct intel_hddl_clients **hddl_clients,
+ void *pdata)
+{
+ u32 sw_device_id_list[XLINK_MAX_DEVICE_LIST_SIZE];
+ char device_name[XLINK_MAX_DEVICE_NAME_SIZE];
+ struct intel_hddl_clients **cls;
+ u32 num_devices = 0;
+ u32 i = 0;
+
+ xlink_get_device_list(sw_device_id_list, &num_devices);
+ if (num_devices == 0) {
+ dev_err(dev, "HDDL:No devices found\n");
+ return NULL;
+ }
+
+ /*
+ * If list available, add new device to the existing client devices
+ * list, if list is not available create new list.
+ */
+ if (hddl_clients) {
+ cls = intel_hddl_new_device(dev,
+ hddl_clients,
+ sw_device_id_list,
+ num_devices, n_devs);
+ if (!cls)
+ return NULL;
+ } else {
+ cls = devm_kcalloc(dev, num_devices,
+ sizeof(struct intel_hddl_clients *),
+ GFP_KERNEL);
+ if (!cls)
+ return NULL;
+ /*
+ * update number of devices in client devices list
+ */
+ *n_devs = num_devices;
+ }
+ hddl_clients = cls;
+ for (i = 0; i < num_devices; i++) {
+ struct intel_hddl_clients *c = hddl_clients[i];
+ int rc;
+
+ /*
+ * Initialize new client device.
+ */
+ if (c)
+ continue;
+ c = devm_kzalloc(dev,
+ sizeof(struct intel_hddl_clients),
+ GFP_KERNEL);
+ if (!c)
+ return hddl_clients;
+ c->pdata = pdata;
+ c->xlink_dev.dev_type = HOST_DEVICE;
+ c->xlink_dev.sw_device_id = sw_device_id_list[i];
+ rc = xlink_get_device_name((&c->xlink_dev),
+ device_name,
+ XLINK_MAX_DEVICE_NAME_SIZE);
+ if (rc > 0) {
+ dev_err(dev,
+ "HDDL:Failed to get device name [EC%d] %x\n",
+ rc, c->xlink_dev.sw_device_id);
+ return hddl_clients;
+ }
+ dev_info(dev, "HDDL:Device name: %x %s\n",
+ c->xlink_dev.sw_device_id, device_name);
+ if (GET_INTERFACE_FROM_SW_DEVICE_ID(sw_device_id_list[i]) ==
+ SW_DEVICE_ID_PCIE_INTERFACE) {
+ /*
+ * Start kernel thread to initialize
+ * xlink communication.
+ */
+ c->hddl_dev_connect_task = kthread_run(task,
+ (void *)c,
+ device_name);
+ if (IS_ERR(c->hddl_dev_connect_task)) {
+ dev_err(dev, "failed to create thread\n");
+ return hddl_clients;
+ }
+ c->task = (void *)task;
+ }
+ hddl_clients[i] = c;
+ }
+
+ return hddl_clients;
+}
+
+int intel_hddl_xlink_remove_i2c_adap(struct device *dev,
+ struct intel_hddl_clients *c)
+{
+ int i;
+
+ for (i = 0; i < HDDL_XLINK_I2C_END; i++) {
+ if (c->xlink_i2c_plt_dev[i]) {
+ dev_info(dev,
+ "HDDL : platform_device_unregister = %d\n",
+ i);
+ platform_device_unregister(c->xlink_i2c_plt_dev[i]);
+ c->xlink_i2c_plt_dev[i] = NULL;
+ }
+ }
+ return 0;
+}
+
+static void hddl_register_remote_smbus_client(struct device *dev,
+ struct intel_hddl_clients *c,
+ struct intel_hddl_i2c_devs *i2c)
+{
+ struct platform_device *xlink_pdev =
+ c->xlink_i2c_plt_dev[HDDL_XLINK_I2C_MASTER];
+ struct i2c_adapter *adap;
+
+ if (!xlink_pdev)
+ return;
+ adap = (struct i2c_adapter *)platform_get_drvdata(xlink_pdev);
+ if (!adap)
+ return;
+ c->adap[HDDL_XLINK_I2C_MASTER] = adap;
+ i2c->smbus_client =
+ i2c_new_client_device(adap,
+ &i2c->board_info);
+ msleep_interruptible(HDDL_I2C_CLIENT_INIT_TIME);
+}
+
+static void hddl_register_remote_i2c_client(struct device *dev,
+ struct intel_hddl_clients *c,
+ struct intel_hddl_i2c_devs *i2c)
+{
+ if (c->smbus_adap) {
+ i2c->board_info.platform_data = c;
+ i2c->i2c_client = i2c_new_client_device(c->smbus_adap,
+ &i2c->board_info);
+ msleep_interruptible(HDDL_I2C_CLIENT_INIT_TIME);
+ }
+}
+
+static void hddl_register_remote_xlink_client(struct device *dev,
+ struct intel_hddl_clients *c,
+ struct intel_hddl_i2c_devs *i2c)
+{
+ struct platform_device *xlink_pdev =
+ c->xlink_i2c_plt_dev[HDDL_XLINK_I2C_SLAVE];
+ struct i2c_adapter *adap;
+
+ if (!xlink_pdev)
+ return;
+ adap = (struct i2c_adapter *)platform_get_drvdata(xlink_pdev);
+ if (!adap)
+ return;
+ c->adap[HDDL_XLINK_I2C_SLAVE] = adap;
+ i2c->board_info.platform_data = c;
+ i2c->xlk_client = i2c_new_client_device(adap, &i2c->board_info);
+ msleep_interruptible(HDDL_I2C_CLIENT_INIT_TIME);
+}
+
+static void intel_hddl_add_remote_clients(struct device *dev,
+ struct intel_hddl_clients *c,
+ struct intel_hddl_i2c_devs *i2c_devs)
+{
+ if (!i2c_devs->enabled)
+ return;
+ /*
+ * Register this device as xlink i2c client.
+ */
+ if (i2c_devs->remote_host & HDDL_XLINK_CLIENT)
+ hddl_register_remote_xlink_client(dev, c, i2c_devs);
+ /*
+ * Register this device as an i2c smbus client.
+ */
+ if (i2c_devs->remote_host & HDDL_I2C_CLIENT)
+ hddl_register_remote_i2c_client(dev, c, i2c_devs);
+ /*
+ * Register this device as an xlink smbus i2c client.
+ * Based on the remote_host bit mask, the same device may be
+ * registered as both an xlink i2c client and an smbus i2c client.
+ */
+ if (i2c_devs->remote_host & HDDL_XLINK_SMBUS_CLIENT)
+ hddl_register_remote_smbus_client(dev, c, i2c_devs);
+}
+
+static void intel_hddl_add_localhost_clients(struct device *dev,
+ struct intel_hddl_clients *c,
+ struct intel_hddl_i2c_devs *i2c)
+{
+ if (!i2c->enabled)
+ return;
+ /*
+ * Register this device as xlink i2c client.
+ */
+ if (i2c->local_host & HDDL_XLINK_CLIENT) {
+ struct platform_device *xlink_pdev =
+ c->xlink_i2c_plt_dev[HDDL_XLINK_I2C_SLAVE];
+ struct i2c_adapter *adap;
+
+ if (!xlink_pdev)
+ return;
+ adap = (struct i2c_adapter *)platform_get_drvdata(xlink_pdev);
+ if (!adap)
+ return;
+ c->adap[HDDL_XLINK_I2C_SLAVE] = adap;
+ i2c->xlk_client =
+ i2c_new_client_device(adap,
+ &i2c->board_info);
+ msleep_interruptible(HDDL_I2C_CLIENT_INIT_TIME);
+ }
+ /*
+ * Register this device as an smbus i2c client.
+ * Based on the local_host bit mask, the same device may be
+ * registered as both an xlink i2c client and an smbus i2c client.
+ */
+ if (i2c->local_host & HDDL_I2C_CLIENT) {
+ i2c->i2c_client =
+ i2c_new_client_device(i2c_get_adapter(i2c->bus),
+ &i2c->board_info);
+ msleep_interruptible(HDDL_I2C_CLIENT_INIT_TIME);
+ }
+}
+
+void intel_hddl_add_xlink_i2c_clients(struct device *dev,
+ struct intel_hddl_clients *c,
+ struct intel_hddl_i2c_devs **i2c_devs,
+ int n_clients, int remote)
+{
+ int i;
+
+ for (i = 0; i < n_clients; i++) {
+ if (remote)
+ intel_hddl_add_remote_clients(dev, c, i2c_devs[i]);
+ else
+ intel_hddl_add_localhost_clients(dev, c, i2c_devs[i]);
+ }
+}
+
+int intel_hddl_register_xlink_i2c_adap(struct device *dev,
+ struct intel_hddl_clients *c)
+{
+ int i;
+
+ for (i = 0; i < HDDL_XLINK_I2C_END; i++) {
+ struct platform_device_info xlink_i2c_info;
+ int soc_id = c->board_info.soc_id;
+
+ memset(&xlink_i2c_info, 0, sizeof(xlink_i2c_info));
+ xlink_i2c_info.name = "i2c_xlink";
+ xlink_i2c_info.id = c->board_info.board_id << 4 |
+ soc_id << 2 | i;
+ c->xlink_i2c_ch[i] =
+ c->i2c_chan_num + (soc_id * 2) + i;
+ xlink_i2c_info.data = c;
+ xlink_i2c_info.size_data =
+ sizeof(struct intel_hddl_clients);
+ c->xlink_i2c_plt_dev[i] =
+ platform_device_register_full(&xlink_i2c_info);
+ if (IS_ERR(c->xlink_i2c_plt_dev[i])) {
+ dev_err(dev, "platform device register failed\n");
+ c->xlink_i2c_plt_dev[i] = NULL;
+ return -EFAULT;
+ }
+ }
+ return 0;
+}
+
+static int intel_hddl_device_probe(struct intel_hddl_clients *d)
+{
+ char device_name[XLINK_MAX_DEVICE_NAME_SIZE];
+ int rc;
+
+ if (d->status == HDDL_DEV_STATUS_CONNECTED)
+ return 0;
+ rc = xlink_get_device_name(&d->xlink_dev,
+ device_name, XLINK_MAX_DEVICE_NAME_SIZE);
+ if (rc > 0) {
+ dev_err(&d->pdev->dev,
+ "HDDL:Failed to get device name of id [EC%d] %x\n",
+ rc, d->xlink_dev.sw_device_id);
+ return -ENODEV;
+ }
+
+ d->hddl_dev_connect_task =
+ kthread_run((intel_hddl_connect_task)d->task,
+ (void *)d,
+ device_name);
+ if (IS_ERR(d->hddl_dev_connect_task)) {
+ dev_err(&d->pdev->dev, "failed to create thread\n");
+ return -EFAULT;
+ }
+ d->status = HDDL_DEV_STATUS_CONNECTED;
+
+ return 0;
+}
+
+void intel_hddl_device_remove(struct intel_hddl_clients *d)
+{
+ int i;
+
+ /*
+ * Lock device removal.
+ * The xlink core gives multiple device-disconnected notifications,
+ * so take the lock while disconnecting the device and updating the
+ * status to disconnected; subsequent notifications check the status
+ * and return if the device is already disconnected.
+ */
+ mutex_lock(&d->lock);
+ if (d->status == HDDL_DEV_STATUS_DISCONNECTED) {
+ mutex_unlock(&d->lock);
+ return;
+ }
+
+ for (i = 0; i < d->n_clients; i++)
+ intel_hddl_free_i2c_client(d, d->i2c_devs[i]);
+ intel_hddl_unregister_pdev(d);
+ xlink_close_channel(&d->xlink_dev, d->chan_num);
+ xlink_disconnect(&d->xlink_dev);
+ kthread_stop(d->hddl_dev_connect_task);
+ d->status = HDDL_DEV_STATUS_DISCONNECTED;
+ mutex_unlock(&d->lock);
+}
+
+static int intel_hddl_device_event_notify(u32 sw_device_id,
+ uint32_t event_type)
+{
+ struct intel_hddl_clients **clients;
+ struct intel_hddl_clients *client = NULL;
+ int i, ret = 0;
+ int ndevs = 0;
+
+ clients = intel_hddl_get_clients(&ndevs);
+ if (!clients)
+ return 0;
+ for (i = 0; i < ndevs; i++) {
+ if (clients[i]->xlink_dev.sw_device_id == sw_device_id) {
+ client = clients[i];
+ break;
+ }
+ }
+ if (!client)
+ return -EINVAL;
+ switch (event_type) {
+ case HDDL_NOTIFY_DEVICE_DISCONNECTED:
+ intel_hddl_device_remove(client);
+ break;
+
+ case HDDL_NOTIFY_DEVICE_CONNECTED:
+ ret = intel_hddl_device_probe(client);
+ break;
+
+ default:
+ dev_err(&client->pdev->dev,
+ "HDDL:xlink pcie notify - Error[%x]: [%d]\n",
+ sw_device_id, event_type);
+ ret = -EINVAL;
+ break;
+ }
+ return ret;
+}
+
+void intel_hddl_close_xlink_device(struct device *dev,
+ struct intel_hddl_clients *d)
+{
+ xlink_close_channel(&d->xlink_dev, d->chan_num);
+ xlink_disconnect(&d->xlink_dev);
+ kthread_stop(d->hddl_dev_connect_task);
+ d->status = HDDL_DEV_STATUS_DISCONNECTED;
+}
+
+int intel_hddl_open_xlink_device(struct device *dev,
+ struct intel_hddl_clients *d)
+{
+ char device_name[XLINK_MAX_DEVICE_NAME_SIZE];
+ u32 device_status = 0xFF;
+ int rc;
+
+ rc = xlink_get_device_name(&d->xlink_dev,
+ device_name, XLINK_MAX_DEVICE_NAME_SIZE);
+ if (rc > 0) {
+ dev_err(dev,
+ "HDDL:Failed to get device name of id [EC%d] %x\n",
+ rc, d->xlink_dev.sw_device_id);
+ return -ENODEV;
+ }
+ if (xlink_boot_device(&d->xlink_dev, device_name) !=
+ X_LINK_SUCCESS) {
+ dev_err(dev, "xlink_boot_device failed\n");
+ return -ENODEV;
+ }
+ if (xlink_get_device_status(&d->xlink_dev, &device_status) !=
+ X_LINK_SUCCESS) {
+ dev_err(dev, "xlink_get_device_status failed\n");
+ return -ENODEV;
+ }
+ if (xlink_connect(&d->xlink_dev) != X_LINK_SUCCESS) {
+ dev_err(dev, "xlink_connect failed\n");
+ return -ENODEV;
+ }
+ mutex_init(&d->lock);
+ xlink_register_device_event(&d->xlink_dev,
+ hddl_device_events,
+ HDDL_NUM_EVENT_TYPE,
+ intel_hddl_device_event_notify);
+
+ d->status = HDDL_DEV_STATUS_CONNECTED;
+ /*
+ * Try opening the xlink channel; the open will fail until the
+ * host/client initializes the channel. intel_hddl_open_xlink_device
+ * is invoked from a kernel thread, so it is safe to retry
+ * indefinitely.
+ */
+ while (xlink_open_channel(&d->xlink_dev,
+ d->chan_num, RXB_TXB,
+ 64 * 1024, 0 /* timeout */) !=
+ X_LINK_SUCCESS) {
+ if (kthread_should_stop()) {
+ xlink_disconnect(&d->xlink_dev);
+ return -ENODEV;
+ }
+ }
+
+ return 0;
+}
diff --git a/drivers/misc/hddl_device/hddl_device_lh.c b/drivers/misc/hddl_device/hddl_device_lh.c
new file mode 100644
index 000000000000..e44c480379bc
--- /dev/null
+++ b/drivers/misc/hddl_device/hddl_device_lh.c
@@ -0,0 +1,764 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ *
+ * High Density Deep Learning Kernel module.
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#include <asm/page.h>
+#include <linux/cdev.h>
+#include <linux/delay.h>
+#include <linux/fs.h>
+#include <linux/gpio/consumer.h>
+#include <linux/hddl_device.h>
+#include <linux/i2c.h>
+#include <linux/ioctl.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/kthread.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/platform_device.h>
+#include <linux/printk.h>
+#include <linux/sched.h>
+#include <linux/sched/mm.h>
+#include <linux/sched/task.h>
+#include <linux/slab.h>
+#include <linux/time.h>
+#include <linux/uaccess.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+#include <linux/xlink.h>
+#include <uapi/linux/stat.h>
+#include "hddl_device_util.h"
+
+#define DRIVER_NAME "hddl_device_client"
+
+struct intel_tsens {
+ struct intel_tsens_data data;
+ struct intel_tsens_trip_info **trip_info;
+};
+
+struct intel_hddl_client_priv {
+ void __iomem *base_addr;
+ int board_id;
+ int soc_id;
+ int n_clients;
+ u32 nsens;
+ u32 xlink_chan;
+ u32 i2c_xlink_chan;
+ u32 n_hddl_devs;
+ struct platform_device *pdev;
+ struct intel_hddl_clients **hddl_client;
+ struct task_struct *hddl_dev_init_task;
+ struct intel_hddl_i2c_devs **i2c_devs;
+ struct intel_tsens **tsens;
+ struct intel_hddl_board_info board_info;
+};
+
+static struct intel_hddl_client_priv *g_priv;
+
+static inline int intel_hddl_get_xlink_data(struct device *dev,
+ struct xlink_handle *xlink,
+ int chan_num, u8 *msg,
+ int *size)
+{
+ int rc;
+
+ rc = xlink_read_data_to_buffer(xlink, chan_num,
+ msg, size);
+ if (rc) {
+ dev_err(dev,
+ "HDDL: xlink read data failed rc = %d\n",
+ rc);
+ return -EFAULT;
+ }
+ rc = xlink_release_data(xlink, chan_num, NULL);
+ if (rc) {
+ dev_err(dev,
+ "HDDL: xlink release failed rc = %d\n",
+ rc);
+ return -EFAULT;
+ }
+ return rc;
+}
+
+void intel_hddl_free_i2c_client(struct intel_hddl_clients *d,
+ struct intel_hddl_i2c_devs *i2c_dev)
+{
+ if (i2c_dev->xlk_client)
+ i2c_unregister_device(i2c_dev->xlk_client);
+ if (i2c_dev->i2c_client)
+ i2c_unregister_device(i2c_dev->i2c_client);
+ if (i2c_dev->smbus_client)
+ i2c_unregister_device(i2c_dev->smbus_client);
+ i2c_dev->xlk_client = NULL;
+ i2c_dev->i2c_client = NULL;
+ i2c_dev->smbus_client = NULL;
+}
+
+struct intel_hddl_clients **intel_hddl_get_clients(int *n_devs)
+{
+ if (!g_priv || !n_devs)
+ return NULL;
+ *n_devs = g_priv->n_hddl_devs;
+ return g_priv->hddl_client;
+}
+
+void intel_hddl_unregister_pdev(struct intel_hddl_clients *c)
+{
+ struct intel_hddl_client_priv *priv = c->pdata;
+
+ intel_hddl_xlink_remove_i2c_adap(&priv->pdev->dev, c);
+}
+
+static u8 *intel_tsens_thermal_msg(struct intel_hddl_clients *c,
+ struct intel_hddl_tsens_msg *msg,
+ u32 *size)
+{
+ struct intel_hddl_client_priv *priv = c->pdata;
+ struct intel_tsens **tsens = priv->tsens;
+
+ switch (msg->msg_type) {
+ case HDDL_GET_NSENS:
+ {
+ u32 *data;
+ *size = sizeof(int);
+ data = kzalloc(*size, GFP_KERNEL);
+ if (!data)
+ return NULL;
+ *data = priv->nsens;
+ return (u8 *)data;
+ }
+ case HDDL_GET_SENS_DETAILS:
+ {
+ struct intel_tsens_data *data;
+ u32 sensor_type = msg->sensor_type;
+ struct intel_tsens_data *tsens_data =
+ &tsens[sensor_type]->data;
+
+ *size = sizeof(struct intel_tsens_data);
+ data = kzalloc(*size, GFP_KERNEL);
+ if (!data)
+ return NULL;
+ strcpy(data->name, tsens_data->name);
+ data->n_trips = tsens_data->n_trips;
+ data->passive_delay = tsens_data->passive_delay;
+ data->polling_delay = tsens_data->polling_delay;
+ data->sensor_type = tsens_data->sensor_type;
+ return (u8 *)data;
+ }
+ case HDDL_GET_SENS_TRIP_INFO:
+ {
+ struct intel_tsens_trip_info *data;
+ u32 sensor_type = msg->sensor_type;
+ u32 trip_info_idx = msg->trip_info_idx;
+
+ *size = sizeof(struct intel_tsens_trip_info);
+ data = kzalloc(*size, GFP_KERNEL);
+ if (!data)
+ return NULL;
+ memcpy(data, tsens[sensor_type]->trip_info[trip_info_idx],
+ sizeof(struct intel_tsens_trip_info));
+ return (u8 *)data;
+ }
+ default:
+ break;
+ }
+ return NULL;
+}
+
+static int intel_hddl_i2c_register_clients(struct device *dev,
+ struct intel_hddl_clients *c)
+{
+ struct intel_hddl_client_priv *priv = c->pdata;
+ struct xlink_handle *xlink = &c->xlink_dev;
+ struct intel_hddl_tsens_msg msg;
+ int rc, i;
+ int size;
+
+ /* Get msg type */
+ rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+ xlink, c->chan_num,
+ (u8 *)&msg, &size);
+ if (rc)
+ return rc;
+
+ while (msg.msg_type != HDDL_GET_SENS_COMPLETE) {
+ u32 *data;
+
+ switch (msg.msg_type) {
+ case HDDL_GET_N_I2C_DEVS:
+ {
+ size = sizeof(int);
+ data = kzalloc(size, GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+ *data = priv->n_clients;
+ break;
+ }
+ case HDDL_GET_I2C_DEVS:
+ {
+ struct intel_hddl_i2c_devs_data *i2c_dev;
+ int sensor_type = msg.sensor_type;
+
+ size = sizeof(struct intel_hddl_i2c_devs_data);
+ i2c_dev = kzalloc(size, GFP_KERNEL);
+ if (!i2c_dev)
+ return -ENOMEM;
+ strcpy(i2c_dev->name,
+ priv->i2c_devs[sensor_type]->name);
+ i2c_dev->addr = priv->i2c_devs[sensor_type]->addr;
+ i2c_dev->bus = priv->i2c_devs[sensor_type]->bus;
+ i2c_dev->enabled =
+ priv->i2c_devs[sensor_type]->enabled;
+ i2c_dev->local_host =
+ priv->i2c_devs[sensor_type]->local_host;
+ i2c_dev->remote_host =
+ priv->i2c_devs[sensor_type]->remote_host;
+ data = (u32 *)i2c_dev;
+ break;
+ }
+ default:
+ dev_err(&priv->pdev->dev,
+ "HDDL: Invalid msg received\n");
+ return -EINVAL;
+ }
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)data, size);
+ kfree(data);
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "xlink write data failed rc = %d\n",
+ rc);
+ return rc;
+ }
+ rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+ xlink, c->chan_num,
+ (u8 *)&msg, &size);
+ if (rc)
+ return rc;
+ }
+
+ for (i = 0; i < priv->n_clients; i++) {
+ msg.msg_type = HDDL_GET_I2C_DEV_ADDR;
+ msg.sensor_type = i;
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)&msg, sizeof(msg));
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "xlink write data failed rc = %d\n",
+ rc);
+ return rc;
+ }
+ rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+ xlink, c->chan_num,
+ (u8 *)&priv->i2c_devs[i]->addr,
+ &size);
+ if (rc)
+ return rc;
+ priv->i2c_devs[i]->board_info.addr = priv->i2c_devs[i]->addr;
+ }
+ /* Send Complete */
+ msg.msg_type = HDDL_GET_SENS_COMPLETE;
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)&msg, sizeof(msg));
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "xlink write data failed rc = %d\n",
+ rc);
+ return rc;
+ }
+
+ intel_hddl_add_xlink_i2c_clients(&priv->pdev->dev, c, priv->i2c_devs,
+ priv->n_clients, 0);
+ return 0;
+}
+
+static int intel_hddl_send_tsens_data(struct intel_hddl_clients *c)
+{
+ struct intel_hddl_client_priv *priv = c->pdata;
+ struct xlink_handle *xlink = &c->xlink_dev;
+ struct intel_hddl_tsens_msg msg;
+ u32 size;
+ u8 *data;
+ int rc;
+
+ /* Get msg type */
+ rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+ xlink, c->chan_num,
+ (u8 *)&msg, &size);
+ if (rc)
+ return rc;
+
+ while (msg.msg_type != HDDL_GET_SENS_COMPLETE) {
+ data = intel_tsens_thermal_msg(c, &msg, &size);
+ if (!data) {
+ dev_err(&priv->pdev->dev, "HDDL: failed to get details\n");
+ return -EINVAL;
+ }
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)data, size);
+ kfree(data);
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "xlink write data failed rc = %d\n",
+ rc);
+ return rc;
+ }
+ rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+ xlink, c->chan_num,
+ (u8 *)&msg, &size);
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
+
+static int intel_hddl_device_connect_task(void *data)
+{
+ struct intel_hddl_clients *c = (struct intel_hddl_clients *)data;
+ struct intel_hddl_client_priv *priv = c->pdata;
+ struct intel_hddl_board_info board_info_rcvd;
+ struct xlink_handle *xlink = &c->xlink_dev;
+ struct timespec64 ts;
+ u32 size;
+ int rc;
+
+ memcpy(&c->board_info, &priv->board_info,
+ sizeof(struct intel_hddl_board_info));
+ c->chan_num = priv->xlink_chan;
+ c->i2c_chan_num = priv->i2c_xlink_chan;
+ c->i2c_devs = priv->i2c_devs;
+ c->n_clients = priv->n_clients;
+ if (intel_hddl_open_xlink_device(&priv->pdev->dev, c)) {
+ dev_err(&priv->pdev->dev, "HDDL open xlink dev failed\n");
+ return -EINVAL;
+ }
+ size = sizeof(ts);
+ rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+ xlink, c->chan_num,
+ (u8 *)&ts, &size);
+ if (rc)
+ goto close_xlink_dev;
+ do_settimeofday64(&ts);
+
+ rc = xlink_write_volatile(xlink, c->chan_num,
+ (u8 *)&c->board_info,
+ sizeof(struct intel_hddl_board_info));
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "xlink write data failed rc = %d\n",
+ rc);
+ goto close_xlink_dev;
+ }
+
+ size = sizeof(board_info_rcvd);
+ rc = intel_hddl_get_xlink_data(&priv->pdev->dev,
+ xlink, c->chan_num,
+ (u8 *)&board_info_rcvd,
+ &size);
+ if (rc)
+ goto close_xlink_dev;
+ rc = intel_hddl_send_tsens_data(c);
+ if (rc) {
+ dev_err(&priv->pdev->dev, "HDDL: tsens data not sent\n");
+ goto close_xlink_dev;
+ }
+ rc = intel_hddl_register_xlink_i2c_adap(&priv->pdev->dev, c);
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "HDDL: register xlink i2c adapter failed\n");
+ goto remove_xlink_i2c_adap;
+ }
+ rc = intel_hddl_i2c_register_clients(&priv->pdev->dev, c);
+ if (rc) {
+ dev_err(&priv->pdev->dev,
+ "HDDL: register i2c clients failed\n");
+ goto remove_xlink_i2c_adap;
+ }
+
+ return 0;
+remove_xlink_i2c_adap:
+ intel_hddl_xlink_remove_i2c_adap(&priv->pdev->dev, c);
+close_xlink_dev:
+ intel_hddl_close_xlink_device(&priv->pdev->dev, c);
+ return rc;
+}
+
+static int intel_hddl_check_for_new_device(struct intel_hddl_client_priv *priv)
+{
+ struct intel_hddl_clients **hddl_clients;
+
+ hddl_clients =
+ intel_hddl_setup_device(&priv->pdev->dev,
+ intel_hddl_device_connect_task,
+ &priv->n_hddl_devs, priv->hddl_client,
+ priv);
+ if (!hddl_clients) {
+ dev_err(&priv->pdev->dev,
+ "intel_hddl_setup_device returned NULL\n");
+ return 0;
+ }
+ priv->hddl_client = hddl_clients;
+ return 1;
+}
+
+static int intel_hddl_device_init_task(void *data)
+{
+ struct intel_hddl_client_priv *priv =
+ (struct intel_hddl_client_priv *)data;
+
+ while (!kthread_should_stop()) {
+ if (!intel_hddl_check_for_new_device(priv)) {
+ dev_err(&priv->pdev->dev,
+ "Error while checking for new device\n");
+ return -EFAULT;
+ }
+ msleep_interruptible(HDDL_NEW_DEV_POLL_TIME);
+ }
+
+ return 0;
+}
+
+static int intel_hddl_device_init(struct intel_hddl_client_priv *priv)
+{
+ priv->hddl_dev_init_task = kthread_run(intel_hddl_device_init_task,
+ (void *)priv,
+ "hddl_device_init");
+ if (IS_ERR(priv->hddl_dev_init_task)) {
+ dev_err(&priv->pdev->dev, "failed to create thread\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int hddl_tsens_config_sensors(struct device_node *s_node,
+ struct intel_hddl_client_priv *priv,
+ int sensor_type)
+{
+ struct intel_tsens *tsens = priv->tsens[sensor_type];
+ struct platform_device *pdev = priv->pdev;
+ s32 trip_temp_count, trip_temp_type_c, i;
+ int ret;
+
+ tsens->data.sensor_type = sensor_type;
+ if (of_property_read_u32(s_node, "passive_delay_rh",
+ &tsens->data.passive_delay)) {
+ dev_err(&pdev->dev,
+ "passive_delay missing in dt for %s\n",
+ tsens->data.name);
+ return -EINVAL;
+ }
+ if (of_property_read_u32(s_node, "polling_delay_rh",
+ &tsens->data.polling_delay)) {
+ dev_err(&pdev->dev,
+ "polling_delay missing in dt for %s\n",
+ tsens->data.name);
+ return -EINVAL;
+ }
+ trip_temp_count = of_property_count_u32_elems(s_node, "trip_temp_rh");
+ trip_temp_type_c = of_property_count_strings(s_node, "trip_type_rh");
+ if (trip_temp_count != trip_temp_type_c ||
+ trip_temp_count <= 0 || trip_temp_type_c <= 0) {
+ dev_err(&pdev->dev,
+ "trip temp config is missing in dt for %s\n",
+ tsens->data.name);
+ return -EINVAL;
+ }
+
+ tsens->trip_info =
+ devm_kcalloc(&pdev->dev, trip_temp_count,
+ sizeof(struct intel_tsens_trip_info *),
+ GFP_KERNEL);
+ if (!tsens->trip_info)
+ return -ENOMEM;
+ tsens->data.n_trips = trip_temp_count;
+ for (i = 0; i < trip_temp_count; i++) {
+ const char *trip_name;
+
+ tsens->trip_info[i] =
+ devm_kzalloc(&pdev->dev,
+ sizeof(struct intel_tsens_trip_info),
+ GFP_KERNEL);
+ if (!tsens->trip_info[i])
+ return -ENOMEM;
+ ret = of_property_read_u32_index(s_node, "trip_temp_rh", i,
+ &tsens->trip_info[i]->temp);
+ if (ret) {
+ dev_err(&pdev->dev, "Invalid trip temp");
+ return ret;
+ }
+ ret = of_property_read_string_index(s_node, "trip_type_rh", i,
+ &trip_name);
+ if (ret) {
+ dev_err(&pdev->dev, "Invalid trip type");
+ return ret;
+ }
+ if (!strcmp(trip_name, "passive"))
+ tsens->trip_info[i]->trip_type = THERMAL_TRIP_PASSIVE;
+ else if (!strcmp(trip_name, "critical"))
+ tsens->trip_info[i]->trip_type = THERMAL_TRIP_CRITICAL;
+ else if (!strcmp(trip_name, "hot"))
+ tsens->trip_info[i]->trip_type = THERMAL_TRIP_HOT;
+ else
+ tsens->trip_info[i]->trip_type = THERMAL_TRIP_ACTIVE;
+ }
+
+ return 0;
+}
+
+static int hddl_get_onchip_sensors(struct platform_device *pdev,
+ struct intel_hddl_client_priv *priv)
+{
+ struct device_node *s_node;
+ struct device_node *np = NULL;
+ int i = 0;
+
+ s_node = of_parse_phandle(pdev->dev.of_node, "soc-sensors", 0);
+ if (!s_node)
+ return -EINVAL;
+ priv->nsens = of_get_child_count(s_node);
+ if (priv->nsens == 0) {
+ dev_err(&pdev->dev, "No onchip sensors configured in dt\n");
+ return -EINVAL;
+ }
+ priv->tsens =
+ devm_kcalloc(&pdev->dev, priv->nsens,
+ sizeof(struct intel_tsens *),
+ GFP_KERNEL);
+ if (!priv->tsens)
+ return -ENOMEM;
+ for_each_child_of_node(s_node, np) {
+ struct intel_tsens *tsens;
+
+ tsens = devm_kzalloc(&pdev->dev,
+ sizeof(struct intel_tsens),
+ GFP_KERNEL);
+ if (!tsens)
+ return -ENOMEM;
+ priv->tsens[i] = tsens;
+ strcpy(tsens->data.name, np->name);
+ if (hddl_tsens_config_sensors(np, priv, i)) {
+ dev_err(&pdev->dev,
+ "Missing sensor info in dts for %s\n",
+ tsens->data.name);
+ return -EINVAL;
+ }
+ i++;
+ }
+
+ return 0;
+}
+
+static int intel_hddl_get_ids(struct platform_device *pdev,
+ struct intel_hddl_client_priv *priv)
+{
+ int ret;
+ struct gpio_descs *board_id_gpios;
+ struct gpio_descs *soc_id_gpios;
+ unsigned long values = 0;
+
+ board_id_gpios =
+ gpiod_get_array_optional(&pdev->dev, "board-id",
+ GPIOD_IN);
+ if (board_id_gpios) {
+ ret = gpiod_get_array_value(board_id_gpios->ndescs,
+ board_id_gpios->desc,
+ NULL, &values);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "failed to get boardid values %d",
+ ret);
+ return ret;
+ }
+ priv->board_info.board_id = values;
+ priv->board_id = priv->board_info.board_id;
+ }
+ soc_id_gpios =
+ gpiod_get_array_optional(&pdev->dev, "soc-id",
+ GPIOD_IN);
+ if (soc_id_gpios) {
+ values = 0;
+ ret = gpiod_get_array_value(soc_id_gpios->ndescs,
+ soc_id_gpios->desc,
+ NULL, &values);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "failed to get soc-id values %d",
+ ret);
+ return ret;
+ }
+ priv->board_info.soc_id = values;
+ priv->soc_id = priv->board_info.soc_id;
+ }
+
+ return 0;
+}
+
+static int intel_hddl_config_dt(struct intel_hddl_client_priv *priv)
+{
+ struct platform_device *pdev = priv->pdev;
+ struct device_node *np = pdev->dev.of_node;
+ struct device_node *s_node = NULL;
+ struct resource *res;
+ int i, ret;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (res) {
+ priv->base_addr = ioremap(res->start,
+ resource_size(res));
+ }
+ priv->board_info.soc_id = 0;
+ priv->board_info.board_id = 0;
+ ret = of_property_read_u32(np, "xlink_chan",
+ &priv->xlink_chan);
+ if (ret) {
+ dev_err(&pdev->dev, "xlink channel not available in dt");
+ return ret;
+ }
+ ret = of_property_read_u32(np, "i2c_xlink_chan",
+ &priv->i2c_xlink_chan);
+ if (ret) {
+ dev_err(&pdev->dev, "i2c xlink channel not available in dt");
+ return ret;
+ }
+ ret = intel_hddl_get_ids(pdev, priv);
+ if (ret) {
+ dev_err(&pdev->dev, "Unable to get board/soc id");
+ return ret;
+ }
+ ret = hddl_get_onchip_sensors(pdev, priv);
+ if (ret) {
+ dev_err(&pdev->dev, "Onchip sensor config failed");
+ return ret;
+ }
+ priv->n_clients = of_get_child_count(np);
+ priv->i2c_devs = devm_kcalloc(&pdev->dev, priv->n_clients,
+ sizeof(struct intel_hddl_i2c_devs *),
+ GFP_KERNEL);
+ if (!priv->i2c_devs)
+ return -ENOMEM;
+ i = 0;
+ for_each_child_of_node(np, s_node) {
+ const char *status;
+ struct intel_hddl_i2c_devs *i2c_dev;
+
+ i2c_dev = devm_kzalloc(&pdev->dev,
+ sizeof(struct intel_hddl_i2c_devs),
+ GFP_KERNEL);
+ if (!i2c_dev)
+ return -ENOMEM;
+ of_property_read_string_index(s_node, "status", 0,
+ &status);
+ if (!strcmp(status, "okay")) {
+ u32 addr;
+ const char *name = NULL;
+
+ i2c_dev->enabled = 1;
+ of_property_read_string_index(s_node, "compatible", 0,
+ &name);
+ if (name) {
+ strcpy(i2c_dev->name, name);
+ strcpy(i2c_dev->board_info.type,
+ i2c_dev->name);
+ }
+ /**
+ * below dt params are optional.
+ */
+ of_property_read_u32(s_node, "reg", &addr);
+ i2c_dev->board_info.addr = addr;
+ i2c_dev->addr = addr;
+ of_property_read_u32(s_node, "bus",
+ &i2c_dev->bus);
+ of_property_read_u32(s_node, "remote-host",
+ &i2c_dev->remote_host);
+ of_property_read_u32(s_node, "local-host",
+ &i2c_dev->local_host);
+ }
+ priv->i2c_devs[i] = i2c_dev;
+ i++;
+ }
+ return 0;
+}
+
+static int intel_hddl_client_probe(struct platform_device *pdev)
+{
+ struct intel_hddl_client_priv *priv;
+ int ret;
+
+ priv = devm_kzalloc(&pdev->dev,
+ sizeof(struct intel_hddl_client_priv),
+ GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+ priv->pdev = pdev;
+ if (pdev->dev.of_node) {
+ ret = intel_hddl_config_dt(priv);
+ if (ret) {
+ dev_err(&pdev->dev, "dt configuration failed\n");
+ devm_kfree(&pdev->dev, priv);
+ return ret;
+ }
+ } else {
+ dev_err(&pdev->dev,
+ "Non Device Tree build is not supported\n");
+ devm_kfree(&pdev->dev, priv);
+ return -EINVAL;
+ }
+ ret = intel_hddl_device_init(priv);
+ if (ret) {
+ dev_err(&pdev->dev, "HDDL device init failed\n");
+ devm_kfree(&pdev->dev, priv);
+ return -EINVAL;
+ }
+ g_priv = priv;
+ platform_set_drvdata(pdev, priv);
+ return 0;
+}
+
+/* Device Exit */
+static int intel_hddl_client_exit(struct platform_device *pdev)
+{
+ int k;
+ struct intel_hddl_client_priv *priv = platform_get_drvdata(pdev);
+
+ if (!priv)
+ return -EINVAL;
+ for (k = 0; k < priv->n_hddl_devs; k++) {
+ struct intel_hddl_clients *d = priv->hddl_client[k];
+
+ intel_hddl_device_remove(d);
+ }
+
+ return 0;
+}
+
+static const struct of_device_id intel_hddl_client_id_table[] = {
+ { .compatible = "intel,hddl-client" },
+ {}
+};
+MODULE_DEVICE_TABLE(of, intel_hddl_client_id_table);
+
+static struct platform_driver intel_hddl_client_driver = {
+ .probe = intel_hddl_client_probe,
+ .remove = intel_hddl_client_exit,
+ .driver = {
+ .name = "intel_hddl_client",
+ .of_match_table = intel_hddl_client_id_table,
+ },
+};
+
+module_platform_driver(intel_hddl_client_driver);
+
+MODULE_DESCRIPTION("Intel HDDL Device driver");
+MODULE_AUTHOR("Sandeep Singh <[email protected]>");
+MODULE_AUTHOR("Vaidya, Mahesh R <[email protected]>");
+MODULE_AUTHOR("Udhayakumar C <[email protected]>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/hddl_device/hddl_device_util.h b/drivers/misc/hddl_device/hddl_device_util.h
new file mode 100644
index 000000000000..628619e0cdb9
--- /dev/null
+++ b/drivers/misc/hddl_device/hddl_device_util.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ *
+ * High Density Deep Learning utils.
+ *
+ * Copyright (C) 2020 Intel Corporation
+ *
+ */
+
+#ifndef _LINUX_HDDL_DEVICE_UTIL_H
+#define _LINUX_HDDL_DEVICE_UTIL_H
+
+#include <linux/xlink_drv_inf.h>
+#include "../xlink-core/xlink-defs.h"
+
+#define HDDL_NEW_DEV_POLL_TIME 2000
+
+typedef int (*intel_hddl_connect_task)(void *);
+
+struct intel_hddl_clients **
+ intel_hddl_setup_device(struct device *dev,
+ intel_hddl_connect_task task, u32 *n_devs,
+ struct intel_hddl_clients **hddl_clients,
+ void *pdata);
+
+int intel_hddl_xlink_remove_i2c_adap(struct device *dev,
+ struct intel_hddl_clients *c);
+
+void intel_hddl_add_xlink_i2c_clients(struct device *dev,
+ struct intel_hddl_clients *c,
+ struct intel_hddl_i2c_devs **i2c_devs,
+ int n_clients, int remote);
+
+int intel_hddl_register_xlink_i2c_adap(struct device *dev,
+ struct intel_hddl_clients *c);
+
+struct intel_hddl_clients **intel_hddl_get_clients(int *n_devs);
+
+void intel_hddl_device_remove(struct intel_hddl_clients *d);
+
+void intel_hddl_unregister_pdev(struct intel_hddl_clients *c);
+
+void intel_hddl_close_xlink_device(struct device *dev,
+ struct intel_hddl_clients *d);
+
+int intel_hddl_open_xlink_device(struct device *dev,
+ struct intel_hddl_clients *d);
+
+void intel_hddl_free_i2c_client(struct intel_hddl_clients *d,
+ struct intel_hddl_i2c_devs *i2c_dev);
+
+#endif /* _LINUX_HDDL_DEVICE_UTIL_H */
--
2.17.1

2021-02-12 22:40:32

by mark gross

Subject: [PATCH v6 26/34] dt-bindings: misc: intel_tsens: Add tsens thermal bindings documentation

From: "C, Udhayakumar" <[email protected]>

Add device tree bindings for local host thermal sensors on
Intel Edge.AI Computer Vision platforms.

The tsens module enables reading of the on-chip sensors present
in the Intel Bay series SoC. The tsens module reports the various
junction temperatures and the SoC temperature through the thermal
subsystem and the i2c subsystem.

Temperature data reported through the thermal subsystem is used by
various cooling agents such as DVFS and fan control, and to shut
down the system if a critical temperature is reached.

Temperature data reported through the i2c subsystem is used by
platform manageability software running on the remote host.

Acked-by: mark gross <[email protected]>
Signed-off-by: C Udhayakumar <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
---
.../bindings/misc/intel,intel-tsens.yaml | 122 ++++++++++++++++++
1 file changed, 122 insertions(+)
create mode 100644 Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml

diff --git a/Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml b/Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml
new file mode 100644
index 000000000000..f307dc6d2012
--- /dev/null
+++ b/Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml
@@ -0,0 +1,122 @@
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/misc/intel,intel-tsens.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: Intel Temperature sensors in Bay series
+
+maintainers:
+ - Udhayakumar C <[email protected]>
+
+description: |
+ The tsens driver enables reading of the on-chip sensors present
+ in the Intel Bay SoC.
+ Each subnode of tsens represents a sensor available
+ on the SoC.
+
+select: false
+
+properties:
+ compatible:
+ items:
+ - const: intel,intel-tsens
+
+ plat_name:
+ contains:
+ enum:
+ - intel,keembay_thermal
+
+ reg:
+ minItems: 1
+ maxItems: 2
+
+ clocks:
+ items:
+ - description: thermal sensor clock
+
+ clk-rate:
+ items:
+ - description: thermal sensor clock freq
+
+ sensor_name:
+ type: object
+ description:
+ Details to configure sensor trip points and their types.
+
+ properties:
+ passive_delay:
+ minItems: 1
+ maxItems: 1
+ description: number of milliseconds to wait between polls when
+ performing passive cooling
+
+ polling_delay:
+ minItems: 1
+ maxItems: 1
+ description: number of milliseconds to wait between polls when checking
+ whether trip points have been crossed (0 for interrupt
+ driven systems)
+
+ trip_temp:
+ minItems: 1
+ description: temperature for trip points
+
+ trip_type:
+ minItems: 1
+ description: trip type list for trip points
+
+ required:
+ - passive_delay
+ - polling_delay
+ - trip_temp
+ - trip_type
+
+required:
+ - compatible
+
+additionalProperties: false
+
+examples:
+ - |
+ tsens {
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ tsens@20260000 {
+ compatible = "intel,intel-tsens";
+ status = "disabled";
+ plat_name = "intel,keembay_thermal";
+ reg = <0x0 0x20260000 0x0 0x100>;
+ clocks = <&scmi_clk>;
+ clk-rate = <1250000>;
+
+ mss {
+ passive_delay = <1000>;
+ polling_delay = <2000>;
+ trip_temp = <40000 80000 1000000>;
+ trip_type = "passive", "passive", "critical";
+ };
+
+ css {
+ passive_delay = <1000>;
+ polling_delay = <2000>;
+ trip_temp = <40000 80000 1000000>;
+ trip_type = "passive", "passive", "critical";
+ };
+
+ nce {
+ passive_delay = <1000>;
+ polling_delay = <2000>;
+ trip_temp = <40000 80000 1000000>;
+ trip_type = "passive", "passive", "critical";
+ };
+
+ soc {
+ passive_delay = <1000>;
+ polling_delay = <2000>;
+ trip_temp = <40000 80000 1000000>;
+ trip_type = "passive", "passive", "critical";
+ };
+ };
+ };
--
2.17.1

2021-02-12 22:41:53

by mark gross

Subject: [PATCH v6 16/34] misc: xlink-pcie: Add asynchronous event notification support for XLink

From: Srikanth Thokala <[email protected]>

Add support to notify XLink layer upon PCIe link UP/DOWN events

Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
Signed-off-by: Srikanth Thokala <[email protected]>
---
drivers/misc/xlink-pcie/common/core.h | 3 ++
drivers/misc/xlink-pcie/common/interface.c | 17 ++++++++++
drivers/misc/xlink-pcie/local_host/core.c | 11 +++++++
drivers/misc/xlink-pcie/remote_host/main.c | 3 ++
drivers/misc/xlink-pcie/remote_host/pci.c | 36 ++++++++++++++++++++++
drivers/misc/xlink-pcie/remote_host/pci.h | 3 ++
include/linux/xlink_drv_inf.h | 12 ++++++++
7 files changed, 85 insertions(+)

diff --git a/drivers/misc/xlink-pcie/common/core.h b/drivers/misc/xlink-pcie/common/core.h
index f43c175b7a48..87b302f87cfd 100644
--- a/drivers/misc/xlink-pcie/common/core.h
+++ b/drivers/misc/xlink-pcie/common/core.h
@@ -239,4 +239,7 @@ int intel_xpcie_pci_connect_device(u32 id);
int intel_xpcie_pci_read(u32 id, void *data, size_t *size, u32 timeout);
int intel_xpcie_pci_write(u32 id, void *data, size_t *size, u32 timeout);
int intel_xpcie_pci_reset_device(u32 id);
+int intel_xpcie_pci_register_device_event(u32 sw_device_id,
+ xlink_device_event event_notif_fn);
+int intel_xpcie_pci_unregister_device_event(u32 sw_device_id);
#endif /* XPCIE_CORE_HEADER_ */
diff --git a/drivers/misc/xlink-pcie/common/interface.c b/drivers/misc/xlink-pcie/common/interface.c
index fcc69a940a4c..5d30c27dd18d 100644
--- a/drivers/misc/xlink-pcie/common/interface.c
+++ b/drivers/misc/xlink-pcie/common/interface.c
@@ -105,3 +105,20 @@ int xlink_pcie_reset_device(u32 sw_device_id)
return intel_xpcie_pci_reset_device(sw_device_id);
}
EXPORT_SYMBOL_GPL(xlink_pcie_reset_device);
+
+int xlink_pcie_register_device_event(u32 sw_device_id,
+ xlink_device_event event_notif_fn)
+{
+ if (!event_notif_fn)
+ return -EINVAL;
+
+ return intel_xpcie_pci_register_device_event(sw_device_id,
+ event_notif_fn);
+}
+EXPORT_SYMBOL_GPL(xlink_pcie_register_device_event);
+
+int xlink_pcie_unregister_device_event(u32 sw_device_id)
+{
+ return intel_xpcie_pci_unregister_device_event(sw_device_id);
+}
+EXPORT_SYMBOL_GPL(xlink_pcie_unregister_device_event);
diff --git a/drivers/misc/xlink-pcie/local_host/core.c b/drivers/misc/xlink-pcie/local_host/core.c
index 2c4e29bce7f7..bfb14c18c24c 100644
--- a/drivers/misc/xlink-pcie/local_host/core.c
+++ b/drivers/misc/xlink-pcie/local_host/core.c
@@ -804,3 +804,14 @@ int intel_xpcie_pci_reset_device(u32 id)
{
return 0;
}
+
+int intel_xpcie_pci_register_device_event(u32 sw_device_id,
+ xlink_device_event event_notif_fn)
+{
+ return 0;
+}
+
+int intel_xpcie_pci_unregister_device_event(u32 sw_device_id)
+{
+ return 0;
+}
diff --git a/drivers/misc/xlink-pcie/remote_host/main.c b/drivers/misc/xlink-pcie/remote_host/main.c
index ed1a431ed5d4..efc9143a2fac 100644
--- a/drivers/misc/xlink-pcie/remote_host/main.c
+++ b/drivers/misc/xlink-pcie/remote_host/main.c
@@ -53,6 +53,8 @@ static int intel_xpcie_probe(struct pci_dev *pdev,
if (new_device)
intel_xpcie_list_add_device(xdev);

+ intel_xpcie_pci_notify_event(xdev, NOTIFY_DEVICE_CONNECTED);
+
return ret;
}

@@ -62,6 +64,7 @@ static void intel_xpcie_remove(struct pci_dev *pdev)

if (xdev) {
intel_xpcie_pci_cleanup(xdev);
+ intel_xpcie_pci_notify_event(xdev, NOTIFY_DEVICE_DISCONNECTED);
intel_xpcie_remove_device(xdev);
}
}
diff --git a/drivers/misc/xlink-pcie/remote_host/pci.c b/drivers/misc/xlink-pcie/remote_host/pci.c
index 71cbe779d1bc..6a79782b983e 100644
--- a/drivers/misc/xlink-pcie/remote_host/pci.c
+++ b/drivers/misc/xlink-pcie/remote_host/pci.c
@@ -6,6 +6,7 @@
*/

#include <linux/mutex.h>
+#include <linux/pci.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
@@ -485,3 +486,38 @@ int intel_xpcie_pci_reset_device(u32 id)

return intel_xpcie_pci_prepare_dev_reset(xdev, true);
}
+
+int intel_xpcie_pci_register_device_event(u32 sw_device_id,
+ xlink_device_event event_notif_fn)
+{
+ struct xpcie_dev *xdev = intel_xpcie_get_device_by_id(sw_device_id);
+
+ if (!xdev)
+ return -ENODEV;
+
+ xdev->event_fn = event_notif_fn;
+
+ return 0;
+}
+
+int intel_xpcie_pci_unregister_device_event(u32 sw_device_id)
+{
+ struct xpcie_dev *xdev = intel_xpcie_get_device_by_id(sw_device_id);
+
+ if (!xdev)
+ return -ENODEV;
+
+ xdev->event_fn = NULL;
+
+ return 0;
+}
+
+void intel_xpcie_pci_notify_event(struct xpcie_dev *xdev,
+ enum xlink_device_event_type event_type)
+{
+ if (event_type >= NUM_EVENT_TYPE)
+ return;
+
+ if (xdev->event_fn)
+ xdev->event_fn(xdev->devid, event_type);
+}
diff --git a/drivers/misc/xlink-pcie/remote_host/pci.h b/drivers/misc/xlink-pcie/remote_host/pci.h
index bd6b01cc58b8..b082bfb73e4f 100644
--- a/drivers/misc/xlink-pcie/remote_host/pci.h
+++ b/drivers/misc/xlink-pcie/remote_host/pci.h
@@ -36,6 +36,7 @@ struct xpcie_dev {
irq_handler_t core_irq_callback;

struct xpcie xpcie;
+ xlink_device_event event_fn;
};

static inline struct device *xpcie_to_dev(struct xpcie *xpcie)
@@ -58,5 +59,7 @@ struct xpcie_dev *intel_xpcie_create_device(u32 sw_device_id,
void intel_xpcie_remove_device(struct xpcie_dev *xdev);
void intel_xpcie_list_add_device(struct xpcie_dev *xdev);
void intel_xpcie_list_del_device(struct xpcie_dev *xdev);
+void intel_xpcie_pci_notify_event(struct xpcie_dev *xdev,
+ enum xlink_device_event_type event_type);

#endif /* XPCIE_PCI_HEADER_ */
diff --git a/include/linux/xlink_drv_inf.h b/include/linux/xlink_drv_inf.h
index 8ffbaafecc88..f20f69bd879a 100644
--- a/include/linux/xlink_drv_inf.h
+++ b/include/linux/xlink_drv_inf.h
@@ -42,6 +42,15 @@ enum _xlink_device_status {
_XLINK_DEV_READY
};

+enum xlink_device_event_type {
+ NOTIFY_DEVICE_DISCONNECTED,
+ NOTIFY_DEVICE_CONNECTED,
+ NUM_EVENT_TYPE
+};
+
+typedef int (*xlink_device_event)(u32 sw_device_id,
+ enum xlink_device_event_type event_type);
+
int xlink_pcie_get_device_list(u32 *sw_device_id_list,
u32 *num_devices);
int xlink_pcie_get_device_name(u32 sw_device_id, char *device_name,
@@ -55,4 +64,7 @@ int xlink_pcie_read(u32 sw_device_id, void *data, size_t *const size,
int xlink_pcie_write(u32 sw_device_id, void *data, size_t *const size,
u32 timeout);
int xlink_pcie_reset_device(u32 sw_device_id);
+int xlink_pcie_register_device_event(u32 sw_device_id,
+ xlink_device_event event_notif_fn);
+int xlink_pcie_unregister_device_event(u32 sw_device_id);
#endif
--
2.17.1

2021-02-12 22:44:05

by mark gross

Subject: [PATCH v6 20/34] xlink-core: Add xlink core driver xLink

From: Seamus Kelly <[email protected]>

Add xLink driver, which provides an abstracted control and communication
subsystem based on channel identification.
It is intended to support VPU technology both at SoC level as well as at
IP level, over multiple interfaces. This initial patch enables local host
user mode to open/close/read/write via IOCTLs.

Specifically the driver enables an application/process to:

* Access a common xLink API across all interfaces from both kernel and
user space.
* Call typical API types (open, close, read, write) that you would
associate with a communication interface.
* Call other APIs that are related to other functions that the
device can perform, e.g. boot, reset, get/set device mode. Device mode
refers to the power load of the VPU and an API can be used to read and
control it.
* Use multiple communication channels that the driver manages from one
interface to another, providing routing of data through these multiple
channels across a single physical interface.

xLink: Add xLink Core device tree bindings

Add device tree bindings for the xLink Core driver which enables xLink
to control and communicate with the VPU IP present on the Intel Keem Bay
SoC.

Cc: Jonathan Corbet <[email protected]>
Cc: Derek Kiernan <[email protected]>
Cc: Dragan Cvetic <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: [email protected]
Reviewed-by: Mark Gross <[email protected]>
Signed-off-by: Mark Gross <[email protected]>
Signed-off-by: Seamus Kelly <[email protected]>
---
Documentation/vpu/index.rst | 1 +
Documentation/vpu/xlink-core.rst | 81 +++
MAINTAINERS | 12 +
drivers/misc/Kconfig | 1 +
drivers/misc/Makefile | 1 +
drivers/misc/xlink-core/Kconfig | 33 +
drivers/misc/xlink-core/Makefile | 5 +
drivers/misc/xlink-core/xlink-core.c | 738 ++++++++++++++++++++
drivers/misc/xlink-core/xlink-core.h | 22 +
drivers/misc/xlink-core/xlink-defs.h | 175 +++++
drivers/misc/xlink-core/xlink-ioctl.c | 212 ++++++
drivers/misc/xlink-core/xlink-ioctl.h | 21 +
drivers/misc/xlink-core/xlink-multiplexer.c | 534 ++++++++++++++
drivers/misc/xlink-core/xlink-multiplexer.h | 35 +
drivers/misc/xlink-core/xlink-platform.c | 160 +++++
drivers/misc/xlink-core/xlink-platform.h | 65 ++
include/linux/xlink.h | 108 +++
include/uapi/misc/xlink_uapi.h | 145 ++++
18 files changed, 2349 insertions(+)
create mode 100644 Documentation/vpu/xlink-core.rst
create mode 100644 drivers/misc/xlink-core/Kconfig
create mode 100644 drivers/misc/xlink-core/Makefile
create mode 100644 drivers/misc/xlink-core/xlink-core.c
create mode 100644 drivers/misc/xlink-core/xlink-core.h
create mode 100644 drivers/misc/xlink-core/xlink-defs.h
create mode 100644 drivers/misc/xlink-core/xlink-ioctl.c
create mode 100644 drivers/misc/xlink-core/xlink-ioctl.h
create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.c
create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.h
create mode 100644 drivers/misc/xlink-core/xlink-platform.c
create mode 100644 drivers/misc/xlink-core/xlink-platform.h
create mode 100644 include/linux/xlink.h
create mode 100644 include/uapi/misc/xlink_uapi.h

diff --git a/Documentation/vpu/index.rst b/Documentation/vpu/index.rst
index 49c78bb65b83..cd4272e089ec 100644
--- a/Documentation/vpu/index.rst
+++ b/Documentation/vpu/index.rst
@@ -16,3 +16,4 @@ This documentation contains information for the Intel VPU stack.
vpu-stack-overview
xlink-pcie
xlink-ipc
+ xlink-core
diff --git a/Documentation/vpu/xlink-core.rst b/Documentation/vpu/xlink-core.rst
new file mode 100644
index 000000000000..441c18230491
--- /dev/null
+++ b/Documentation/vpu/xlink-core.rst
@@ -0,0 +1,81 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=============================
+xLink-core software subsystem
+=============================
+
+The purpose of the xLink software subsystem is to facilitate communication
+between multiple users on multiple nodes in the system.
+
+There are three types of xLink nodes:
+
+1. Remote Host: this is an external IA/x86 host system that is only capable of
+ communicating directly to the Local Host node on VPU 2.x products.
+2. Local Host: this is the ARM core within the VPU 2.x SoC. The Local Host can
+ communicate upstream to the Remote Host node, and downstream to the VPU IP
+ node.
+3. VPU IP: this is the Leon RT core within the VPU 2.x SoC. The VPU IP can only
+ communicate upstream to the Local Host node.
+
+xLink provides a common API across all interfaces for users to access xLink
+functions and provides user space APIs via an IOCTL interface implemented in
+the xLink core.
+
+xLink manages communications from one interface to another and provides routing
+of data through multiple channels across a single physical interface.
+
+It exposes a common API across all interfaces at both kernel and user levels
+for processes/applications to access.
+
+It has typical API types (open, close, read, write) that you would associate
+with a communication interface.
+
+It also has other APIs that are related to other functions that the device can
+perform, e.g. boot, reset, get/set device mode.
+
+The driver is broken down into four source files.
+
+xlink-core:
+Contains driver initialization, driver API and IOCTL interface (for user
+space).
+
+xlink-multiplexer:
+The Multiplexer component is responsible for securely routing messages through
+multiple communication channels over a single physical interface.
+
+xlink-dispatcher:
+The Dispatcher component is responsible for queueing and handling xLink
+communication requests from all users in the system and invoking the underlying
+platform interface drivers.
+
+xlink-platform:
+provides abstraction to each interface supported (PCIe, USB, IPC, etc).
+
+Typical xLink transaction (simplified):
+When a user wants to send data across an interface via xLink, it first calls
+xlink connect, which connects to the relevant interface (PCIe, USB, IPC, etc.),
+and then xlink open channel.
+
+Then it calls the xlink write function. This takes the data and passes it to
+the kernel, which packages up the data and channel and then adds it to a
+transmit queue.
+
+A separate thread reads this transaction queue and pops off data if available
+and passes the data to the underlying interface (e.g. PCIe) write function.
+Using this thread provides serialization of transactions and decouples the user
+write from the platform write.
+
+On the other side of the interface, a thread continually reads the
+interface (e.g. PCIe) via the platform interface read function and adds
+any data it reads to a channel packet container.
+
+The application on this side of the interface will have called xlink connect,
+opened the channel and called the xlink read function to read data from the
+interface; if any exists for that channel, the data is popped from the
+channel packet container and copied from kernel space to the user space
+buffer provided by the call.
+
+xLink can handle API requests from multi-process and multi-threaded
+applications.
+
+xLink maintains 4096 channels per device connected (via xlink connect) and
+maintains a separate channel infrastructure for each device.
diff --git a/MAINTAINERS b/MAINTAINERS
index aff4d8dd53a4..1429db8d592c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1961,6 +1961,12 @@ F: Documentation/devicetree/bindings/arm/intel,keembay.yaml
F: arch/arm64/boot/dts/intel/keembay-evm.dts
F: arch/arm64/boot/dts/intel/keembay-soc.dtsi

+ARM/INTEL XLINK CORE SUPPORT
+M: Seamus Kelly <[email protected]>
+M: Mark Gross <[email protected]>
+S: Supported
+F: drivers/misc/xlink-core/
+
ARM/INTEL XLINK IPC SUPPORT
M: Seamus Kelly <[email protected]>
M: Mark Gross <[email protected]>
@@ -1973,6 +1979,12 @@ M: Mark Gross <[email protected]>
S: Supported
F: drivers/misc/xlink-pcie/

+ARM/INTEL TSENS SUPPORT
+M: Udhayakumar C <[email protected]>
+S: Supported
+F: drivers/misc/hddl_device/
+F: drivers/misc/intel_tsens/
+
ARM/INTEL RESEARCH IMOTE/STARGATE 2 MACHINE SUPPORT
M: Jonathan Cameron <[email protected]>
L: [email protected] (moderated for non-subscribers)
diff --git a/drivers/misc/Kconfig b/drivers/misc/Kconfig
index 1f81ea915b95..09ae65e98681 100644
--- a/drivers/misc/Kconfig
+++ b/drivers/misc/Kconfig
@@ -483,4 +483,5 @@ source "drivers/misc/habanalabs/Kconfig"
source "drivers/misc/uacce/Kconfig"
source "drivers/misc/xlink-pcie/Kconfig"
source "drivers/misc/xlink-ipc/Kconfig"
+source "drivers/misc/xlink-core/Kconfig"
endmenu
diff --git a/drivers/misc/Makefile b/drivers/misc/Makefile
index b51495a2f1e0..f3a6eb03bae9 100644
--- a/drivers/misc/Makefile
+++ b/drivers/misc/Makefile
@@ -59,3 +59,4 @@ obj-$(CONFIG_XILINX_SDFEC) += xilinx_sdfec.o
obj-$(CONFIG_HISI_HIKEY_USB) += hisi_hikey_usb.o
obj-y += xlink-pcie/
obj-$(CONFIG_XLINK_IPC) += xlink-ipc/
+obj-$(CONFIG_XLINK_CORE) += xlink-core/
diff --git a/drivers/misc/xlink-core/Kconfig b/drivers/misc/xlink-core/Kconfig
new file mode 100644
index 000000000000..a0ceb0b48219
--- /dev/null
+++ b/drivers/misc/xlink-core/Kconfig
@@ -0,0 +1,33 @@
+config XLINK_CORE
+ tristate "Support for XLINK CORE"
+ depends on ((XLINK_PCIE_RH_DRIVER || XBAY_XLINK_PCIE_RH_DRIVER) || (XLINK_LOCAL_HOST && (XLINK_PCIE_LH_DRIVER || XBAY_XLINK_PCIE_RH_DRIVER)))
+ help
+ XLINK CORE enables the communication/control subsystem.
+
+ If unsure, say N.
+
+ To compile this driver as a module, choose M here: the
+ module will be called xlink.ko.
+
+config XLINK_LOCAL_HOST
+ tristate "Support for XLINK LOCAL HOST"
+ depends on XLINK_IPC
+ help
+ XLINK LOCAL HOST enables local host functionality for
+ the communication/control Sub-System.
+
+ Enable this config when building the kernel for the Intel Vision
+ Processing Unit (VPU) Local Host core.
+
+ If building for a Remote Host kernel, say N.
+
+config XLINK_PSS
+ tristate "Support for XLINK PSS (Pre-Silicon Solution)"
+ depends on XLINK_LOCAL_HOST
+ help
+ XLINK PSS enables the communication/control subsystem on a PSS platform.
+
+ Enable this config when building the kernel for the Intel Vision
+ Processing Unit (VPU) in a simulated environment.
+
+ If building for a VPU silicon, say N.
diff --git a/drivers/misc/xlink-core/Makefile b/drivers/misc/xlink-core/Makefile
new file mode 100644
index 000000000000..e82b7c72b6b9
--- /dev/null
+++ b/drivers/misc/xlink-core/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for Keem Bay xlink Linux driver
+#
+obj-$(CONFIG_XLINK_CORE) += xlink.o
+xlink-objs += xlink-core.o xlink-multiplexer.o xlink-platform.o xlink-ioctl.o
diff --git a/drivers/misc/xlink-core/xlink-core.c b/drivers/misc/xlink-core/xlink-core.c
new file mode 100644
index 000000000000..dd8db834c184
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-core.c
@@ -0,0 +1,738 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Core Driver.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/mod_devicetable.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/kref.h>
+
+#ifdef CONFIG_XLINK_LOCAL_HOST
+#include <linux/xlink-ipc.h>
+#endif
+
+#include "xlink-core.h"
+#include "xlink-defs.h"
+#include "xlink-ioctl.h"
+#include "xlink-multiplexer.h"
+#include "xlink-platform.h"
+
+// xlink version number
+#define XLINK_VERSION_MAJOR 0
+#define XLINK_VERSION_MINOR 1
+#define XLINK_VERSION_REVISION 2
+#define XLINK_VERSION_SUB_REV "a"
+
+// timeout in milliseconds used to wait for the ready message from the VPU
+#ifdef CONFIG_XLINK_PSS
+#define XLINK_VPU_WAIT_FOR_READY (3000000)
+#else
+#define XLINK_VPU_WAIT_FOR_READY (3000)
+#endif
+
+// device, class, and driver names
+#define DEVICE_NAME "xlnk"
+#define CLASS_NAME "xlkcore"
+#define DRV_NAME "xlink-driver"
+
+// used to determine if an API was called from user or kernel space
+#define CHANNEL_SET_USER_BIT(chan) ((chan) |= (1 << 15))
+#define CHANNEL_USER_BIT_IS_SET(chan) ((chan) & (1 << 15))
+#define CHANNEL_CLEAR_USER_BIT(chan) ((chan) &= ~(1 << 15))
+
+static dev_t xdev;
+static struct class *dev_class;
+static struct cdev xlink_cdev;
+
+static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
+
+static const struct file_operations fops = {
+ .owner = THIS_MODULE,
+ .unlocked_ioctl = xlink_ioctl,
+};
+
+struct xlink_link {
+ u32 id;
+ struct xlink_handle handle;
+ struct kref refcount;
+};
+
+struct keembay_xlink_dev {
+ struct platform_device *pdev;
+ struct xlink_link links[XLINK_MAX_CONNECTIONS];
+ u32 nmb_connected_links;
+ struct mutex lock; // protect access to xlink_dev
+};
+
+/*
+ * global variable pointing to our xlink device.
+ *
+ * This is meant to be used only when platform_get_drvdata() cannot be used
+ * because we lack a reference to our platform_device.
+ */
+static struct keembay_xlink_dev *xlink;
+
+/*
+ * get_next_link() - Searches the list of links to find the next available.
+ *
+ * Note: This function is only used in xlink_connect, where the xlink mutex is
+ * already locked.
+ *
+ * Return: the next available link, or NULL if maximum connections reached.
+ */
+static struct xlink_link *get_next_link(void)
+{
+ struct xlink_link *link = NULL;
+ int i;
+
+ for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+ if (xlink->links[i].handle.sw_device_id == XLINK_INVALID_SW_DEVICE_ID) {
+ link = &xlink->links[i];
+ break;
+ }
+ }
+ return link;
+}
+
+/*
+ * get_link_by_sw_device_id()
+ *
+ * Searches the list of links to find a link by sw device id.
+ *
+ * Return: the handle, or NULL if the handle is not found.
+ */
+static struct xlink_link *get_link_by_sw_device_id(u32 sw_device_id)
+{
+ struct xlink_link *link = NULL;
+ int i;
+
+ mutex_lock(&xlink->lock);
+ for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+ if (xlink->links[i].handle.sw_device_id == sw_device_id) {
+ link = &xlink->links[i];
+ break;
+ }
+ }
+ mutex_unlock(&xlink->lock);
+ return link;
+}
+
+// For now, do nothing; leave for further consideration
+static void release_after_kref_put(struct kref *ref) {}
+
+/* Driver probing. */
+static int kmb_xlink_probe(struct platform_device *pdev)
+{
+ struct keembay_xlink_dev *xlink_dev;
+ struct device *dev_ret;
+ int rc, i;
+
+ dev_info(&pdev->dev, "Keem Bay xlink v%d.%d.%d:%s\n", XLINK_VERSION_MAJOR,
+ XLINK_VERSION_MINOR, XLINK_VERSION_REVISION, XLINK_VERSION_SUB_REV);
+
+ xlink_dev = devm_kzalloc(&pdev->dev, sizeof(*xlink), GFP_KERNEL);
+ if (!xlink_dev)
+ return -ENOMEM;
+
+ xlink_dev->pdev = pdev;
+
+ // initialize multiplexer
+ rc = xlink_multiplexer_init(xlink_dev->pdev);
+ if (rc != X_LINK_SUCCESS) {
+ pr_err("Multiplexer initialization failed\n");
+ goto r_multiplexer;
+ }
+
+ // initialize xlink data structure
+ xlink_dev->nmb_connected_links = 0;
+ mutex_init(&xlink_dev->lock);
+ for (i = 0; i < XLINK_MAX_CONNECTIONS; i++) {
+ xlink_dev->links[i].id = i;
+ xlink_dev->links[i].handle.sw_device_id =
+ XLINK_INVALID_SW_DEVICE_ID;
+ }
+
+ platform_set_drvdata(pdev, xlink_dev);
+
+ /* Set the global reference to our device. */
+ xlink = xlink_dev;
+
+ /* Allocate major number */
+ if ((alloc_chrdev_region(&xdev, 0, 1, "xlinkdev")) < 0) {
+ dev_info(&pdev->dev, "Cannot allocate major number\n");
+ goto r_multiplexer;
+ }
+ dev_info(&pdev->dev, "Major = %d Minor = %d\n", MAJOR(xdev),
+ MINOR(xdev));
+
+ /* Create the device class */
+ dev_class = class_create(THIS_MODULE, CLASS_NAME);
+ if (IS_ERR(dev_class)) {
+ dev_info(&pdev->dev, "Cannot create the struct class - Err %ld\n",
+ PTR_ERR(dev_class));
+ goto r_class;
+ }
+
+ /* Create the device */
+ dev_ret = device_create(dev_class, NULL, xdev, NULL, DEVICE_NAME);
+ if (IS_ERR(dev_ret)) {
+ dev_info(&pdev->dev, "Cannot create the Device 1 - Err %ld\n",
+ PTR_ERR(dev_ret));
+ goto r_device;
+ }
+ dev_info(&pdev->dev, "Device Driver Insert...Done!!!\n");
+
+ /* Initialize cdev structure */
+ cdev_init(&xlink_cdev, &fops);
+
+ /* Add character device to the system */
+ if ((cdev_add(&xlink_cdev, xdev, 1)) < 0) {
+ dev_info(&pdev->dev, "Cannot add the device to the system\n");
+ goto r_class;
+ }
+
+ return 0;
+
+r_device:
+ class_destroy(dev_class);
+r_class:
+ unregister_chrdev_region(xdev, 1);
+r_multiplexer:
+ xlink_multiplexer_destroy();
+ return -1;
+}
+
+/* Driver removal. */
+static int kmb_xlink_remove(struct platform_device *pdev)
+{
+ int rc;
+
+ mutex_lock(&xlink->lock);
+ // destroy multiplexer
+ rc = xlink_multiplexer_destroy();
+ if (rc != X_LINK_SUCCESS)
+ pr_err("Multiplexer destroy failed\n");
+
+ mutex_unlock(&xlink->lock);
+ mutex_destroy(&xlink->lock);
+ // unregister and destroy device
+ unregister_chrdev_region(xdev, 1);
+ device_destroy(dev_class, xdev);
+ cdev_del(&xlink_cdev);
+ class_destroy(dev_class);
+ pr_info("XLink Driver removed\n");
+ return 0;
+}
+
+/*
+ * IOCTL function for User Space access to xlink kernel functions
+ *
+ */
+
+static long xlink_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+ int rc;
+
+ switch (cmd) {
+ case XL_CONNECT:
+ rc = ioctl_connect(arg);
+ break;
+ case XL_OPEN_CHANNEL:
+ rc = ioctl_open_channel(arg);
+ break;
+ case XL_READ_DATA:
+ rc = ioctl_read_data(arg);
+ break;
+ case XL_WRITE_DATA:
+ rc = ioctl_write_data(arg);
+ break;
+ case XL_WRITE_VOLATILE:
+ rc = ioctl_write_volatile_data(arg);
+ break;
+ case XL_RELEASE_DATA:
+ rc = ioctl_release_data(arg);
+ break;
+ case XL_CLOSE_CHANNEL:
+ rc = ioctl_close_channel(arg);
+ break;
+ case XL_DISCONNECT:
+ rc = ioctl_disconnect(arg);
+ break;
+	default:
+		return -ENOTTY;
+	}
+	return rc ? -EIO : 0;
+}
+
+/*
+ * xlink Kernel API.
+ */
+
+enum xlink_error xlink_initialize(void)
+{
+ return X_LINK_SUCCESS;
+}
+EXPORT_SYMBOL_GPL(xlink_initialize);
+
+enum xlink_error xlink_connect(struct xlink_handle *handle)
+{
+ struct xlink_link *link;
+ enum xlink_error rc;
+ int interface;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ mutex_lock(&xlink->lock);
+ if (!link) {
+ link = get_next_link();
+ if (!link) {
+ pr_err("max connections reached %d\n",
+ XLINK_MAX_CONNECTIONS);
+ mutex_unlock(&xlink->lock);
+ return X_LINK_ERROR;
+ }
+ // platform connect
+ interface = get_interface_from_sw_device_id(handle->sw_device_id);
+ rc = xlink_platform_connect(interface, handle->sw_device_id);
+ if (rc) {
+ pr_err("platform connect failed %d\n", rc);
+ mutex_unlock(&xlink->lock);
+ return X_LINK_ERROR;
+ }
+ // set link handle reference and link id
+ link->handle = *handle;
+ xlink->nmb_connected_links++;
+ kref_init(&link->refcount);
+ // initialize multiplexer connection
+ rc = xlink_multiplexer_connect(link->id);
+ if (rc) {
+ pr_err("multiplexer connect failed\n");
+ goto r_cleanup;
+ }
+ pr_info("dev 0x%x connected - dev_type %d - nmb_connected_links %d\n",
+ link->handle.sw_device_id,
+ link->handle.dev_type,
+ xlink->nmb_connected_links);
+ } else {
+ // already connected
+ pr_info("dev 0x%x ALREADY connected - dev_type %d\n",
+ link->handle.sw_device_id,
+ link->handle.dev_type);
+ kref_get(&link->refcount);
+ *handle = link->handle;
+ }
+ mutex_unlock(&xlink->lock);
+ // TODO: implement ping
+ return X_LINK_SUCCESS;
+
+r_cleanup:
+ link->handle.sw_device_id = XLINK_INVALID_SW_DEVICE_ID;
+ mutex_unlock(&xlink->lock);
+ return X_LINK_ERROR;
+}
+EXPORT_SYMBOL_GPL(xlink_connect);
+
+enum xlink_error xlink_open_channel(struct xlink_handle *handle,
+ u16 chan, enum xlink_opmode mode,
+ u32 data_size, u32 timeout)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ int event_queued = 0;
+ enum xlink_error rc;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_OPEN_CHANNEL_REQ,
+ &link->handle, chan, data_size, timeout);
+ if (!event)
+ return X_LINK_ERROR;
+
+	event->data = (void *)(uintptr_t)mode;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_open_channel);
+
+enum xlink_error xlink_close_channel(struct xlink_handle *handle,
+ u16 chan)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ enum xlink_error rc;
+ int event_queued = 0;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_CLOSE_CHANNEL_REQ,
+ &link->handle, chan, 0, 0);
+ if (!event)
+ return X_LINK_ERROR;
+
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_close_channel);
+
+enum xlink_error xlink_write_data(struct xlink_handle *handle,
+ u16 chan, u8 const *pmessage, u32 size)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ enum xlink_error rc;
+ int event_queued = 0;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ if (size > XLINK_MAX_DATA_SIZE)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_WRITE_REQ, &link->handle,
+ chan, size, 0);
+ if (!event)
+ return X_LINK_ERROR;
+
+ if (chan < XLINK_IPC_MAX_CHANNELS &&
+ event->interface == IPC_INTERFACE) {
+ /* only passing message address across IPC interface */
+ event->data = &pmessage;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ xlink_destroy_event(event);
+ } else {
+ event->data = (u8 *)pmessage;
+ event->paddr = 0;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ }
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_write_data);
+
+enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
+ u16 chan, u8 const *pmessage,
+ u32 size)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ enum xlink_error rc;
+ int event_queued = 0;
+ dma_addr_t paddr = 0;
+ u32 addr;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ if (size > XLINK_MAX_DATA_SIZE)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_WRITE_REQ, &link->handle,
+ chan, size, 0);
+ if (!event)
+ return X_LINK_ERROR;
+ event->user_data = 1;
+
+ if (chan < XLINK_IPC_MAX_CHANNELS &&
+ event->interface == IPC_INTERFACE) {
+ /* only passing message address across IPC interface */
+ if (get_user(addr, (u32 __user *)pmessage)) {
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ event->data = &addr;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ xlink_destroy_event(event);
+ } else {
+ event->data = xlink_platform_allocate(&xlink->pdev->dev, &paddr,
+ size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ if (!event->data) {
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ if (copy_from_user(event->data, (void __user *)pmessage, size)) {
+ xlink_platform_deallocate(&xlink->pdev->dev,
+ event->data, paddr, size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ event->paddr = paddr;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued) {
+ xlink_platform_deallocate(&xlink->pdev->dev,
+ event->data, paddr, size,
+ XLINK_PACKET_ALIGNMENT,
+ XLINK_NORMAL_MEMORY);
+ xlink_destroy_event(event);
+ }
+ }
+ return rc;
+}
+
+enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
+ u16 chan, u8 const *message, u32 size)
+{
+	return do_xlink_write_volatile(handle, chan, message, size, 0);
+}
+
+enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
+ u16 chan, u8 const *message,
+ u32 size, u32 user_flag)
+{
+ enum xlink_error rc = 0;
+ struct xlink_link *link = NULL;
+ struct xlink_event *event = NULL;
+ int event_queued = 0;
+ dma_addr_t paddr;
+ int region = 0;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ if (size > XLINK_MAX_BUF_SIZE)
+ return X_LINK_ERROR; // TODO: XLink Parameter Error
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_WRITE_VOLATILE_REQ,
+ &link->handle, chan, size, 0);
+ if (!event)
+ return X_LINK_ERROR;
+
+ region = XLINK_NORMAL_MEMORY;
+ event->data = xlink_platform_allocate(&xlink->pdev->dev, &paddr, size,
+ XLINK_PACKET_ALIGNMENT, region);
+ if (!event->data) {
+ xlink_destroy_event(event);
+ return X_LINK_ERROR;
+ }
+ memcpy(event->data, message, size);
+ event->user_data = user_flag;
+ event->paddr = paddr;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued) {
+ xlink_platform_deallocate(&xlink->pdev->dev, event->data, paddr, size,
+ XLINK_PACKET_ALIGNMENT, region);
+ xlink_destroy_event(event);
+ }
+ return rc;
+}
+
+enum xlink_error xlink_read_data(struct xlink_handle *handle,
+ u16 chan, u8 **pmessage, u32 *size)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ int event_queued = 0;
+ enum xlink_error rc;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_READ_REQ, &link->handle,
+ chan, *size, 0);
+ if (!event)
+ return X_LINK_ERROR;
+
+ event->pdata = (void **)pmessage;
+ event->length = size;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_read_data);
+
+enum xlink_error xlink_read_data_to_buffer(struct xlink_handle *handle,
+ u16 chan, u8 *const message, u32 *size)
+{
+ enum xlink_error rc = 0;
+ struct xlink_link *link = NULL;
+ struct xlink_event *event = NULL;
+ int event_queued = 0;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_READ_TO_BUFFER_REQ,
+ &link->handle, chan, *size, 0);
+ if (!event)
+ return X_LINK_ERROR;
+
+ event->data = message;
+ event->length = size;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_read_data_to_buffer);
+
+enum xlink_error xlink_release_data(struct xlink_handle *handle,
+ u16 chan, u8 * const data_addr)
+{
+ struct xlink_event *event;
+ struct xlink_link *link;
+ int event_queued = 0;
+ enum xlink_error rc;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ event = xlink_create_event(link->id, XLINK_RELEASE_REQ, &link->handle,
+ chan, 0, 0);
+ if (!event)
+ return X_LINK_ERROR;
+
+ event->data = data_addr;
+ rc = xlink_multiplexer_tx(event, &event_queued);
+ if (!event_queued)
+ xlink_destroy_event(event);
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_release_data);
+
+enum xlink_error xlink_disconnect(struct xlink_handle *handle)
+{
+ struct xlink_link *link;
+ enum xlink_error rc = X_LINK_ERROR;
+
+ if (!xlink || !handle)
+ return X_LINK_ERROR;
+
+ link = get_link_by_sw_device_id(handle->sw_device_id);
+ if (!link)
+ return X_LINK_ERROR;
+
+ // decrement refcount, if count is 0 lock mutex and disconnect
+ if (kref_put_mutex(&link->refcount, release_after_kref_put,
+ &xlink->lock)) {
+ // deinitialize multiplexer connection
+ rc = xlink_multiplexer_disconnect(link->id);
+ if (rc) {
+ pr_err("multiplexer disconnect failed\n");
+ mutex_unlock(&xlink->lock);
+ return X_LINK_ERROR;
+ }
+ // TODO: reset device?
+ // invalidate link handle reference
+ link->handle.sw_device_id = XLINK_INVALID_SW_DEVICE_ID;
+ xlink->nmb_connected_links--;
+ mutex_unlock(&xlink->lock);
+ }
+ return rc;
+}
+EXPORT_SYMBOL_GPL(xlink_disconnect);
+
+/* Device tree driver match. */
+static const struct of_device_id kmb_xlink_of_match[] = {
+ {
+ .compatible = "intel,keembay-xlink",
+ },
+ {}
+};
+
+/* The xlink driver is a platform device. */
+static struct platform_driver kmb_xlink_driver = {
+ .probe = kmb_xlink_probe,
+ .remove = kmb_xlink_remove,
+ .driver = {
+ .name = DRV_NAME,
+ .of_match_table = kmb_xlink_of_match,
+ },
+};
+
+/*
+ * The remote host system will need to create an xlink platform
+ * device for the platform driver to match with
+ */
+#ifndef CONFIG_XLINK_LOCAL_HOST
+static struct platform_device pdev;
+static void kmb_xlink_release(struct device *dev) { }
+#endif
+
+static int kmb_xlink_init(void)
+{
+ int rc;
+
+ rc = platform_driver_register(&kmb_xlink_driver);
+#ifndef CONFIG_XLINK_LOCAL_HOST
+ pdev.dev.release = kmb_xlink_release;
+ pdev.name = DRV_NAME;
+ pdev.id = -1;
+ if (!rc) {
+ rc = platform_device_register(&pdev);
+ if (rc)
+ platform_driver_unregister(&kmb_xlink_driver);
+ }
+#endif
+ return rc;
+}
+module_init(kmb_xlink_init);
+
+static void kmb_xlink_exit(void)
+{
+#ifndef CONFIG_XLINK_LOCAL_HOST
+ platform_device_unregister(&pdev);
+#endif
+ platform_driver_unregister(&kmb_xlink_driver);
+}
+module_exit(kmb_xlink_exit);
+
+MODULE_DESCRIPTION("Keem Bay xlink Kernel Driver");
+MODULE_AUTHOR("Seamus Kelly <[email protected]>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/misc/xlink-core/xlink-core.h b/drivers/misc/xlink-core/xlink-core.h
new file mode 100644
index 000000000000..5ba7ac653bf7
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-core.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink core header file.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+
+#ifndef XLINK_CORE_H_
+#define XLINK_CORE_H_
+#include "xlink-defs.h"
+
+#define NUM_REG_EVENTS 4
+
+enum xlink_error do_xlink_write_volatile(struct xlink_handle *handle,
+ u16 chan, u8 const *message,
+ u32 size, u32 user_flag);
+
+enum xlink_error xlink_write_data_user(struct xlink_handle *handle,
+ u16 chan, u8 const *pmessage,
+ u32 size);
+#endif /* XLINK_CORE_H_ */
diff --git a/drivers/misc/xlink-core/xlink-defs.h b/drivers/misc/xlink-core/xlink-defs.h
new file mode 100644
index 000000000000..09aee36d5542
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-defs.h
@@ -0,0 +1,175 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Defines.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_DEFS_H
+#define __XLINK_DEFS_H
+
+#include <linux/slab.h>
+#include <linux/xlink.h>
+
+#define XLINK_MAX_BUF_SIZE 128U
+#define XLINK_MAX_DATA_SIZE (1024U * 1024U * 1024U)
+#define XLINK_MAX_CONTROL_DATA_SIZE 100U
+#define XLINK_MAX_CONNECTIONS 16
+#define XLINK_PACKET_ALIGNMENT 64
+#define XLINK_INVALID_EVENT_ID 0xDEADBEEF
+#define XLINK_INVALID_CHANNEL_ID 0xDEAD
+#define XLINK_PACKET_QUEUE_CAPACITY 10000
+#define XLINK_EVENT_QUEUE_CAPACITY 10000
+#define XLINK_EVENT_HEADER_MAGIC 0x786C6E6B
+#define XLINK_PING_TIMEOUT_MS 5000U
+#define XLINK_MAX_DEVICE_NAME_SIZE 128
+#define XLINK_MAX_DEVICE_LIST_SIZE 8
+#define XLINK_INVALID_LINK_ID 0xDEADBEEF
+#define XLINK_INVALID_SW_DEVICE_ID 0xDEADBEEF
+
+#define NMB_CHANNELS 4096
+#define IP_CONTROL_CHANNEL 0x0A
+#define VPU_CONTROL_CHANNEL 0x400
+#define CONTROL_CHANNEL_OPMODE RXB_TXB // blocking
+#define CONTROL_CHANNEL_DATASIZE 128U // size of internal rx/tx buffers
+#define CONTROL_CHANNEL_TIMEOUT_MS 0U // wait indefinitely
+#define SIGXLNK 44 // signal XLink uses for callback signalling
+
+#define UNUSED(x) (void)(x)
+
+// the end of the IPC channel range (starting at zero)
+#define XLINK_IPC_MAX_CHANNELS 1024
+
+// used to extract the interface type from a sw device id
+#define SW_DEVICE_ID_INTERFACE_SHIFT 24U
+#define SW_DEVICE_ID_INTERFACE_MASK 0x7
+#define GET_INTERFACE_FROM_SW_DEVICE_ID(id) (((id) >> SW_DEVICE_ID_INTERFACE_SHIFT) & \
+ SW_DEVICE_ID_INTERFACE_MASK)
+#define SW_DEVICE_ID_IPC_INTERFACE 0x0
+#define SW_DEVICE_ID_PCIE_INTERFACE 0x1
+#define SW_DEVICE_ID_USB_INTERFACE 0x2
+#define SW_DEVICE_ID_ETH_INTERFACE 0x3
+
+enum xlink_interface {
+ NULL_INTERFACE = -1,
+ IPC_INTERFACE = 0,
+ PCIE_INTERFACE,
+ USB_CDC_INTERFACE,
+ ETH_INTERFACE,
+ NMB_OF_INTERFACES,
+};
+
+static inline int get_interface_from_sw_device_id(u32 sw_device_id)
+{
+	u32 interface = GET_INTERFACE_FROM_SW_DEVICE_ID(sw_device_id);
+
+ switch (interface) {
+ case SW_DEVICE_ID_IPC_INTERFACE:
+ return IPC_INTERFACE;
+ case SW_DEVICE_ID_PCIE_INTERFACE:
+ return PCIE_INTERFACE;
+ case SW_DEVICE_ID_USB_INTERFACE:
+ return USB_CDC_INTERFACE;
+ case SW_DEVICE_ID_ETH_INTERFACE:
+ return ETH_INTERFACE;
+ default:
+ return NULL_INTERFACE;
+ }
+}
+
+enum xlink_channel_status {
+ CHAN_CLOSED = 0x0000,
+ CHAN_OPEN = 0x0001,
+ CHAN_BLOCKED_READ = 0x0010,
+ CHAN_BLOCKED_WRITE = 0x0100,
+ CHAN_OPEN_PEER = 0x1000,
+};
+
+enum xlink_event_origin {
+ EVENT_TX = 0, // outgoing events
+ EVENT_RX, // incoming events
+};
+
+enum xlink_event_type {
+ // request events
+ XLINK_WRITE_REQ = 0x00,
+ XLINK_WRITE_VOLATILE_REQ,
+ XLINK_READ_REQ,
+ XLINK_READ_TO_BUFFER_REQ,
+ XLINK_RELEASE_REQ,
+ XLINK_OPEN_CHANNEL_REQ,
+ XLINK_CLOSE_CHANNEL_REQ,
+ XLINK_PING_REQ,
+ XLINK_REQ_LAST,
+ // response events
+ XLINK_WRITE_RESP = 0x10,
+ XLINK_WRITE_VOLATILE_RESP,
+ XLINK_READ_RESP,
+ XLINK_READ_TO_BUFFER_RESP,
+ XLINK_RELEASE_RESP,
+ XLINK_OPEN_CHANNEL_RESP,
+ XLINK_CLOSE_CHANNEL_RESP,
+ XLINK_PING_RESP,
+ XLINK_RESP_LAST,
+};
+
+struct xlink_event_header {
+ u32 magic;
+ u32 id;
+ enum xlink_event_type type;
+ u32 chan;
+ size_t size;
+ u32 timeout;
+ u8 control_data[XLINK_MAX_CONTROL_DATA_SIZE];
+};
+
+struct xlink_event {
+ struct xlink_event_header header;
+ enum xlink_event_origin origin;
+ u32 link_id;
+ struct xlink_handle *handle;
+ int interface;
+ void *data;
+ struct task_struct *calling_pid;
+ char callback_origin;
+ char user_data;
+ void **pdata;
+ dma_addr_t paddr;
+ u32 *length;
+ struct list_head list;
+};
+
+static inline struct xlink_event *xlink_create_event(u32 link_id,
+ enum xlink_event_type type,
+ struct xlink_handle *handle,
+ u32 chan, u32 size, u32 timeout)
+{
+ struct xlink_event *new_event = NULL;
+
+ // allocate new event
+ new_event = kzalloc(sizeof(*new_event), GFP_KERNEL);
+ if (!new_event)
+ return NULL;
+
+ // set event context
+ new_event->link_id = link_id;
+ new_event->handle = handle;
+ new_event->interface = get_interface_from_sw_device_id(handle->sw_device_id);
+ new_event->user_data = 0;
+
+ // set event header
+ new_event->header.magic = XLINK_EVENT_HEADER_MAGIC;
+ new_event->header.id = XLINK_INVALID_EVENT_ID;
+ new_event->header.type = type;
+ new_event->header.chan = chan;
+ new_event->header.size = size;
+ new_event->header.timeout = timeout;
+ return new_event;
+}
+
+static inline void xlink_destroy_event(struct xlink_event *event)
+{
+ kfree(event);
+}
+#endif /* __XLINK_DEFS_H */
diff --git a/drivers/misc/xlink-core/xlink-ioctl.c b/drivers/misc/xlink-core/xlink-ioctl.c
new file mode 100644
index 000000000000..1f75ad38137b
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-ioctl.c
@@ -0,0 +1,212 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Core Driver.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/mod_devicetable.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/kref.h>
+
+#include "xlink-defs.h"
+#include "xlink-core.h"
+#include "xlink-ioctl.h"
+
+#define CHANNEL_SET_USER_BIT(chan) ((chan) |= (1 << 15))
+
+static int copy_result_to_user(u32 __user *where, int rc)
+{
+ if (copy_to_user((void __user *)where, &rc, sizeof(rc)))
+ return -EFAULT;
+ return rc;
+}
+
+static enum xlink_error xlink_write_volatile_user(struct xlink_handle *handle,
+ u16 chan, u8 const *message, u32 size)
+{
+	return do_xlink_write_volatile(handle, chan, message, size, 1);
+}
+
+int ioctl_connect(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkconnect con = {};
+ int rc = 0;
+
+ if (copy_from_user(&con, (void __user *)arg,
+ sizeof(struct xlinkconnect)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)con.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ rc = xlink_connect(&devh);
+ if (rc == X_LINK_SUCCESS) {
+ if (copy_to_user((void __user *)con.handle,
+ &devh, sizeof(struct xlink_handle)))
+ return -EFAULT;
+ }
+
+ return copy_result_to_user(con.return_code, rc);
+}
+
+int ioctl_open_channel(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkopenchannel op = {};
+ int rc = 0;
+
+ if (copy_from_user(&op, (void __user *)arg,
+ sizeof(struct xlinkopenchannel)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)op.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ rc = xlink_open_channel(&devh, op.chan, op.mode, op.data_size,
+ op.timeout);
+
+ return copy_result_to_user(op.return_code, rc);
+}
+
+int ioctl_read_data(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkreaddata rd = {};
+ int rc = 0;
+ u8 *rdaddr;
+ u32 size;
+ int interface;
+
+ if (copy_from_user(&rd, (void __user *)arg,
+ sizeof(struct xlinkreaddata)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)rd.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ rc = xlink_read_data(&devh, rd.chan, &rdaddr, &size);
+ if (!rc) {
+ interface = get_interface_from_sw_device_id(devh.sw_device_id);
+ if (interface == IPC_INTERFACE) {
+ if (copy_to_user((void __user *)rd.pmessage, (void *)&rdaddr,
+ sizeof(u32)))
+ return -EFAULT;
+ } else {
+ if (copy_to_user((void __user *)rd.pmessage, (void *)rdaddr,
+ size))
+ return -EFAULT;
+ }
+ if (copy_to_user((void __user *)rd.size, (void *)&size, sizeof(size)))
+ return -EFAULT;
+ }
+
+ return copy_result_to_user(rd.return_code, rc);
+}
+
+int ioctl_write_data(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkwritedata wr = {};
+ int rc = 0;
+
+ if (copy_from_user(&wr, (void __user *)arg,
+ sizeof(struct xlinkwritedata)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)wr.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ if (wr.size > XLINK_MAX_DATA_SIZE)
+ return -EFAULT;
+ rc = xlink_write_data_user(&devh, wr.chan, wr.pmessage, wr.size);
+
+ return copy_result_to_user(wr.return_code, rc);
+}
+
+int ioctl_write_volatile_data(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkwritedata wr = {};
+ int rc = 0;
+ u8 volbuf[XLINK_MAX_BUF_SIZE]; // buffer for volatile transactions
+
+ if (copy_from_user(&wr, (void __user *)arg,
+ sizeof(struct xlinkwritedata)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)wr.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ if (wr.size > XLINK_MAX_BUF_SIZE)
+ return -EFAULT;
+ if (copy_from_user(volbuf, (void __user *)wr.pmessage, wr.size))
+ return -EFAULT;
+ rc = xlink_write_volatile_user(&devh, wr.chan, volbuf, wr.size);
+
+ return copy_result_to_user(wr.return_code, rc);
+}
+
+int ioctl_release_data(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkrelease rel = {};
+ int rc = 0;
+	u32 reladdr;
+
+ if (copy_from_user(&rel, (void __user *)arg,
+ sizeof(struct xlinkrelease)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)rel.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ if (rel.addr) {
+ if (get_user(reladdr, (u32 __user *const)rel.addr))
+ return -EFAULT;
+ rc = xlink_release_data(&devh, rel.chan,
+ (u8 *)&reladdr);
+ } else {
+ rc = xlink_release_data(&devh, rel.chan, NULL);
+ }
+
+ return copy_result_to_user(rel.return_code, rc);
+}
+
+int ioctl_close_channel(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkopenchannel op = {};
+ int rc = 0;
+
+ if (copy_from_user(&op, (void __user *)arg,
+ sizeof(struct xlinkopenchannel)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)op.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ rc = xlink_close_channel(&devh, op.chan);
+
+ return copy_result_to_user(op.return_code, rc);
+}
+
+int ioctl_disconnect(unsigned long arg)
+{
+ struct xlink_handle devh = {};
+ struct xlinkconnect con = {};
+ int rc = 0;
+
+ if (copy_from_user(&con, (void __user *)arg,
+ sizeof(struct xlinkconnect)))
+ return -EFAULT;
+ if (copy_from_user(&devh, (void __user *)con.handle,
+ sizeof(struct xlink_handle)))
+ return -EFAULT;
+ rc = xlink_disconnect(&devh);
+
+ return copy_result_to_user(con.return_code, rc);
+}
diff --git a/drivers/misc/xlink-core/xlink-ioctl.h b/drivers/misc/xlink-core/xlink-ioctl.h
new file mode 100644
index 000000000000..0f317c6c2b94
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-ioctl.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink ioctl header files.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+
+#ifndef XLINK_IOCTL_H_
+#define XLINK_IOCTL_H_
+
+int ioctl_connect(unsigned long arg);
+int ioctl_open_channel(unsigned long arg);
+int ioctl_read_data(unsigned long arg);
+int ioctl_write_data(unsigned long arg);
+int ioctl_write_volatile_data(unsigned long arg);
+int ioctl_release_data(unsigned long arg);
+int ioctl_close_channel(unsigned long arg);
+int ioctl_disconnect(unsigned long arg);
+
+#endif /* XLINK_IOCTL_H_ */
diff --git a/drivers/misc/xlink-core/xlink-multiplexer.c b/drivers/misc/xlink-core/xlink-multiplexer.c
new file mode 100644
index 000000000000..9b1ed008bb56
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-multiplexer.c
@@ -0,0 +1,534 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Multiplexer.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/version.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma-direct.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/completion.h>
+#include <linux/sched/signal.h>
+
+#ifdef CONFIG_XLINK_LOCAL_HOST
+#include <linux/xlink-ipc.h>
+#endif
+
+#include "xlink-multiplexer.h"
+#include "xlink-platform.h"
+
+#define THR_UPR 85
+#define THR_LWR 80
+
+// timeout used for open channel
+#define OPEN_CHANNEL_TIMEOUT_MSEC 5000
+
+/* Channel mapping table. */
+struct xlink_channel_type {
+ enum xlink_interface remote_to_local;
+ enum xlink_interface local_to_ip;
+};
+
+struct xlink_channel_table_entry {
+ u16 start_range;
+ u16 stop_range;
+ struct xlink_channel_type type;
+};
+
+static const struct xlink_channel_table_entry default_channel_table[] = {
+ {0x0, 0x1, {PCIE_INTERFACE, IPC_INTERFACE}},
+ {0x2, 0x9, {USB_CDC_INTERFACE, IPC_INTERFACE}},
+ {0xA, 0x3FD, {PCIE_INTERFACE, IPC_INTERFACE}},
+ {0x3FE, 0x3FF, {ETH_INTERFACE, IPC_INTERFACE}},
+ {0x400, 0xFFE, {PCIE_INTERFACE, NULL_INTERFACE}},
+ {0xFFF, 0xFFF, {ETH_INTERFACE, NULL_INTERFACE}},
+ {NMB_CHANNELS, NMB_CHANNELS, {NULL_INTERFACE, NULL_INTERFACE}},
+};
+
+struct channel {
+ struct open_channel *opchan;
+ enum xlink_opmode mode;
+ enum xlink_channel_status status;
+ enum xlink_channel_status ipc_status;
+ u32 size;
+ u32 timeout;
+};
+
+struct packet {
+ u8 *data;
+ u32 length;
+ dma_addr_t paddr;
+ struct list_head list;
+};
+
+struct packet_queue {
+ u32 count;
+ u32 capacity;
+ struct list_head head;
+ struct mutex lock; // lock to protect packet queue
+};
+
+struct open_channel {
+ u16 id;
+ struct channel *chan;
+ struct packet_queue rx_queue;
+ struct packet_queue tx_queue;
+ s32 rx_fill_level;
+ s32 tx_fill_level;
+ s32 tx_packet_level;
+ s32 tx_up_limit;
+ struct completion opened;
+ struct completion pkt_available;
+ struct completion pkt_consumed;
+ struct completion pkt_released;
+ struct task_struct *ready_calling_pid;
+ void *ready_callback;
+ struct task_struct *consumed_calling_pid;
+ void *consumed_callback;
+ char callback_origin;
+ struct mutex lock; // lock to protect channel config
+};
+
+struct xlink_multiplexer {
+ struct device *dev;
+ struct channel channels[XLINK_MAX_CONNECTIONS][NMB_CHANNELS];
+};
+
+static struct xlink_multiplexer *xmux;
+
+/*
+ * Multiplexer Internal Functions
+ *
+ */
+
+static inline int chan_is_non_blocking_read(struct open_channel *opchan)
+{
+ if (opchan->chan->mode == RXN_TXN || opchan->chan->mode == RXN_TXB)
+ return 1;
+
+ return 0;
+}
+
+static inline int chan_is_non_blocking_write(struct open_channel *opchan)
+{
+ if (opchan->chan->mode == RXN_TXN || opchan->chan->mode == RXB_TXN)
+ return 1;
+
+ return 0;
+}
+
+static struct xlink_channel_type const *get_channel_type(u16 chan)
+{
+ struct xlink_channel_type const *type = NULL;
+ int i = 0;
+
+ while (default_channel_table[i].start_range < NMB_CHANNELS) {
+ if (chan >= default_channel_table[i].start_range &&
+ chan <= default_channel_table[i].stop_range) {
+ type = &default_channel_table[i].type;
+ break;
+ }
+ i++;
+ }
+ return type;
+}
+
+static int is_channel_for_device(u16 chan, u32 sw_device_id,
+ enum xlink_dev_type dev_type)
+{
+ struct xlink_channel_type const *chan_type = get_channel_type(chan);
+ int interface = NULL_INTERFACE;
+
+ if (chan_type) {
+ interface = get_interface_from_sw_device_id(sw_device_id);
+ if (dev_type == VPUIP_DEVICE) {
+ if (chan_type->local_to_ip == interface)
+ return 1;
+ } else {
+ if (chan_type->remote_to_local == interface)
+ return 1;
+ }
+ }
+ return 0;
+}
+
+static int is_control_channel(u16 chan)
+{
+	return chan == IP_CONTROL_CHANNEL || chan == VPU_CONTROL_CHANNEL;
+}
+
+static int release_packet_from_channel(struct open_channel *opchan,
+ struct packet_queue *queue,
+ u8 * const addr, u32 *size)
+{
+ u8 packet_found = 0;
+ struct packet *pkt = NULL;
+
+ if (!addr) {
+ // address is null, release first packet in queue
+ if (!list_empty(&queue->head)) {
+ pkt = list_first_entry(&queue->head, struct packet,
+ list);
+ packet_found = 1;
+ }
+ } else {
+ // find packet in channel rx queue
+ list_for_each_entry(pkt, &queue->head, list) {
+ if (pkt->data == addr) {
+ packet_found = 1;
+ break;
+ }
+ }
+ }
+ if (!pkt || !packet_found)
+ return X_LINK_ERROR;
+ // packet found, deallocate and remove from queue
+ xlink_platform_deallocate(xmux->dev, pkt->data, pkt->paddr, pkt->length,
+ XLINK_PACKET_ALIGNMENT, XLINK_NORMAL_MEMORY);
+ list_del(&pkt->list);
+ queue->count--;
+ opchan->rx_fill_level -= pkt->length;
+ if (size)
+ *size = pkt->length;
+ kfree(pkt);
+ return X_LINK_SUCCESS;
+}
+
+static int multiplexer_open_channel(u32 link_id, u16 chan)
+{
+ struct open_channel *opchan;
+
+ // channel already open
+ if (xmux->channels[link_id][chan].opchan)
+ return X_LINK_SUCCESS;
+
+ // allocate open channel
+ opchan = kzalloc(sizeof(*opchan), GFP_KERNEL);
+ if (!opchan)
+ return X_LINK_ERROR;
+
+ // initialize open channel
+ opchan->id = chan;
+ opchan->chan = &xmux->channels[link_id][chan];
+ // TODO: remove circular dependency
+ xmux->channels[link_id][chan].opchan = opchan;
+ INIT_LIST_HEAD(&opchan->rx_queue.head);
+ opchan->rx_queue.count = 0;
+ opchan->rx_queue.capacity = XLINK_PACKET_QUEUE_CAPACITY;
+ INIT_LIST_HEAD(&opchan->tx_queue.head);
+ opchan->tx_queue.count = 0;
+ opchan->tx_queue.capacity = XLINK_PACKET_QUEUE_CAPACITY;
+ opchan->rx_fill_level = 0;
+ opchan->tx_fill_level = 0;
+ opchan->tx_packet_level = 0;
+ opchan->tx_up_limit = 0;
+	mutex_init(&opchan->rx_queue.lock);
+	mutex_init(&opchan->tx_queue.lock);
+	init_completion(&opchan->opened);
+ init_completion(&opchan->pkt_available);
+ init_completion(&opchan->pkt_consumed);
+ init_completion(&opchan->pkt_released);
+ mutex_init(&opchan->lock);
+ return X_LINK_SUCCESS;
+}
+
+static int multiplexer_close_channel(struct open_channel *opchan)
+{
+ if (!opchan)
+ return X_LINK_ERROR;
+
+ // free remaining packets
+ while (!list_empty(&opchan->rx_queue.head)) {
+ release_packet_from_channel(opchan, &opchan->rx_queue,
+ NULL, NULL);
+ }
+
+ while (!list_empty(&opchan->tx_queue.head)) {
+ release_packet_from_channel(opchan, &opchan->tx_queue,
+ NULL, NULL);
+ }
+
+ // deallocate data structure and destroy
+ opchan->chan->opchan = NULL; // TODO: remove circular dependency
+ mutex_destroy(&opchan->rx_queue.lock);
+ mutex_destroy(&opchan->tx_queue.lock);
+ mutex_unlock(&opchan->lock);
+ mutex_destroy(&opchan->lock);
+ kfree(opchan);
+ return X_LINK_SUCCESS;
+}
+
+/*
+ * Multiplexer External Functions
+ *
+ */
+
+enum xlink_error xlink_multiplexer_init(void *dev)
+{
+ struct platform_device *plat_dev = (struct platform_device *)dev;
+
+ // allocate multiplexer data structure
+ xmux = kzalloc(sizeof(*xmux), GFP_KERNEL);
+ if (!xmux)
+ return X_LINK_ERROR;
+
+ xmux->dev = &plat_dev->dev;
+ return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_multiplexer_connect(u32 link_id)
+{
+ int rc;
+
+ if (!xmux)
+ return X_LINK_ERROR;
+
+ // open ip control channel
+	rc = multiplexer_open_channel(link_id, IP_CONTROL_CHANNEL);
+	if (rc)
+		goto r_cleanup;
+	xmux->channels[link_id][IP_CONTROL_CHANNEL].size = CONTROL_CHANNEL_DATASIZE;
+	xmux->channels[link_id][IP_CONTROL_CHANNEL].timeout = CONTROL_CHANNEL_TIMEOUT_MS;
+	xmux->channels[link_id][IP_CONTROL_CHANNEL].mode = CONTROL_CHANNEL_OPMODE;
+	xmux->channels[link_id][IP_CONTROL_CHANNEL].status = CHAN_OPEN;
+ // open vpu control channel
+	rc = multiplexer_open_channel(link_id, VPU_CONTROL_CHANNEL);
+	if (rc)
+		goto r_cleanup;
+	xmux->channels[link_id][VPU_CONTROL_CHANNEL].size = CONTROL_CHANNEL_DATASIZE;
+	xmux->channels[link_id][VPU_CONTROL_CHANNEL].timeout = CONTROL_CHANNEL_TIMEOUT_MS;
+	xmux->channels[link_id][VPU_CONTROL_CHANNEL].mode = CONTROL_CHANNEL_OPMODE;
+	xmux->channels[link_id][VPU_CONTROL_CHANNEL].status = CHAN_OPEN;
+ return X_LINK_SUCCESS;
+
+r_cleanup:
+ xlink_multiplexer_disconnect(link_id);
+ return X_LINK_ERROR;
+}
+
+enum xlink_error xlink_multiplexer_disconnect(u32 link_id)
+{
+ int i;
+
+ if (!xmux)
+ return X_LINK_ERROR;
+
+ for (i = 0; i < NMB_CHANNELS; i++) {
+ if (xmux->channels[link_id][i].opchan)
+ multiplexer_close_channel(xmux->channels[link_id][i].opchan);
+ }
+ return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_multiplexer_destroy(void)
+{
+ int i;
+
+ if (!xmux)
+ return X_LINK_ERROR;
+
+ // close all open channels and deallocate remaining packets
+ for (i = 0; i < XLINK_MAX_CONNECTIONS; i++)
+ xlink_multiplexer_disconnect(i);
+
+ // destroy multiplexer
+ kfree(xmux);
+ xmux = NULL;
+ return X_LINK_SUCCESS;
+}
+
+enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
+ int *event_queued)
+{
+ int rc = X_LINK_SUCCESS;
+ u16 chan = 0;
+
+ if (!xmux || !event)
+ return X_LINK_ERROR;
+
+ chan = event->header.chan;
+
+ // verify channel ID is in range
+ if (chan >= NMB_CHANNELS)
+ return X_LINK_ERROR;
+
+ // verify communication to device on channel is valid
+ if (!is_channel_for_device(chan, event->handle->sw_device_id,
+ event->handle->dev_type))
+ return X_LINK_ERROR;
+
+ // verify this is not a control channel
+ if (is_control_channel(chan))
+ return X_LINK_ERROR;
+
+ // events on IPC channels are handled by the passthrough
+ if (chan < XLINK_IPC_MAX_CHANNELS && event->interface == IPC_INTERFACE)
+ rc = xlink_passthrough(event);
+ return rc;
+}
+
+enum xlink_error xlink_passthrough(struct xlink_event *event)
+{
+ int rc = 0;
+#ifdef CONFIG_XLINK_LOCAL_HOST
+ struct xlink_ipc_context ipc = {0};
+ phys_addr_t physaddr = 0;
+ dma_addr_t vpuaddr = 0;
+ u32 timeout = 0;
+ u32 link_id;
+ u16 chan;
+
+ if (!xmux || !event)
+ return X_LINK_ERROR;
+
+ link_id = event->link_id;
+ chan = event->header.chan;
+ ipc.chan = chan;
+
+ if (ipc.chan >= XLINK_IPC_MAX_CHANNELS)
+ return rc;
+
+ switch (event->header.type) {
+ case XLINK_WRITE_REQ:
+ if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+ /* Translate physical address to VPU address */
+ vpuaddr = phys_to_dma(xmux->dev, *(u32 *)event->data);
+ event->data = &vpuaddr;
+ rc = xlink_platform_write(IPC_INTERFACE,
+ event->handle->sw_device_id,
+ event->data,
+ &event->header.size, 0, &ipc);
+ } else {
+ /* channel not open */
+ rc = X_LINK_ERROR;
+ }
+ break;
+ case XLINK_WRITE_VOLATILE_REQ:
+ if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+ ipc.is_volatile = 1;
+ rc = xlink_platform_write(IPC_INTERFACE,
+ event->handle->sw_device_id,
+ event->data,
+ &event->header.size, 0, &ipc);
+ } else {
+ /* channel not open */
+ rc = X_LINK_ERROR;
+ }
+ break;
+ case XLINK_READ_REQ:
+ if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+ /* if channel has receive blocking set,
+ * then set timeout to U32_MAX
+ */
+ if (xmux->channels[link_id][chan].mode == RXB_TXN ||
+ xmux->channels[link_id][chan].mode == RXB_TXB) {
+ timeout = U32_MAX;
+ } else {
+ timeout = xmux->channels[link_id][chan].timeout;
+ }
+ rc = xlink_platform_read(IPC_INTERFACE,
+ event->handle->sw_device_id,
+ &vpuaddr,
+ (size_t *)event->length,
+ timeout, &ipc);
+ /* Translate VPU address to physical address */
+ physaddr = dma_to_phys(xmux->dev, vpuaddr);
+ *(phys_addr_t *)event->pdata = physaddr;
+ } else {
+ /* channel not open */
+ rc = X_LINK_ERROR;
+ }
+ break;
+ case XLINK_READ_TO_BUFFER_REQ:
+ if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+ /* if channel has receive blocking set,
+ * then set timeout to U32_MAX
+ */
+ if (xmux->channels[link_id][chan].mode == RXB_TXN ||
+ xmux->channels[link_id][chan].mode == RXB_TXB) {
+ timeout = U32_MAX;
+ } else {
+ timeout = xmux->channels[link_id][chan].timeout;
+ }
+ ipc.is_volatile = 1;
+ rc = xlink_platform_read(IPC_INTERFACE,
+ event->handle->sw_device_id,
+ event->data,
+ (size_t *)event->length,
+ timeout, &ipc);
+ if (rc || *event->length > XLINK_MAX_BUF_SIZE)
+ rc = X_LINK_ERROR;
+ } else {
+ /* channel not open */
+ rc = X_LINK_ERROR;
+ }
+ break;
+ case XLINK_RELEASE_REQ:
+ break;
+ case XLINK_OPEN_CHANNEL_REQ:
+ if (xmux->channels[link_id][chan].ipc_status == CHAN_CLOSED) {
+ xmux->channels[link_id][chan].size = event->header.size;
+ xmux->channels[link_id][chan].timeout = event->header.timeout;
+ xmux->channels[link_id][chan].mode = (uintptr_t)event->data;
+ rc = xlink_platform_open_channel(IPC_INTERFACE,
+ event->handle->sw_device_id,
+ chan);
+ if (rc)
+ rc = X_LINK_ERROR;
+ else
+ xmux->channels[link_id][chan].ipc_status = CHAN_OPEN;
+ } else {
+ /* channel already open */
+ rc = X_LINK_ALREADY_OPEN;
+ }
+ break;
+ case XLINK_CLOSE_CHANNEL_REQ:
+ if (xmux->channels[link_id][chan].ipc_status == CHAN_OPEN) {
+ rc = xlink_platform_close_channel(IPC_INTERFACE,
+ event->handle->sw_device_id,
+ chan);
+ if (rc)
+ rc = X_LINK_ERROR;
+ else
+ xmux->channels[link_id][chan].ipc_status = CHAN_CLOSED;
+ } else {
+ /* can't close channel not open */
+ rc = X_LINK_ERROR;
+ }
+ break;
+ case XLINK_PING_REQ:
+ case XLINK_WRITE_RESP:
+ case XLINK_WRITE_VOLATILE_RESP:
+ case XLINK_READ_RESP:
+ case XLINK_READ_TO_BUFFER_RESP:
+ case XLINK_RELEASE_RESP:
+ case XLINK_OPEN_CHANNEL_RESP:
+ case XLINK_CLOSE_CHANNEL_RESP:
+ case XLINK_PING_RESP:
+ break;
+ default:
+ rc = X_LINK_ERROR;
+ }
+#else
+ rc = 0;
+#endif // CONFIG_XLINK_LOCAL_HOST
+ return rc;
+}
diff --git a/drivers/misc/xlink-core/xlink-multiplexer.h b/drivers/misc/xlink-core/xlink-multiplexer.h
new file mode 100644
index 000000000000..c978e5683b45
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-multiplexer.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Multiplexer.
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_MULTIPLEXER_H
+#define __XLINK_MULTIPLEXER_H
+
+#include "xlink-defs.h"
+
+enum xlink_error xlink_multiplexer_init(void *dev);
+
+enum xlink_error xlink_multiplexer_connect(u32 link_id);
+
+enum xlink_error xlink_multiplexer_disconnect(u32 link_id);
+
+enum xlink_error xlink_multiplexer_destroy(void);
+
+enum xlink_error xlink_multiplexer_tx(struct xlink_event *event,
+ int *event_queued);
+
+enum xlink_error xlink_multiplexer_rx(struct xlink_event *event);
+
+enum xlink_error xlink_passthrough(struct xlink_event *event);
+
+void *find_allocated_buffer(dma_addr_t paddr);
+
+int unregister_allocated_buffer(void *buf, dma_addr_t paddr);
+
+int core_release_packet_from_channel(u32 link_id, uint16_t chan,
+ uint8_t * const addr);
+
+#endif /* __XLINK_MULTIPLEXER_H */
diff --git a/drivers/misc/xlink-core/xlink-platform.c b/drivers/misc/xlink-core/xlink-platform.c
new file mode 100644
index 000000000000..c34b69ee206b
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-platform.c
@@ -0,0 +1,160 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * xlink Linux Kernel Platform API
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/cdev.h>
+#include <linux/platform_device.h>
+#include <linux/uaccess.h>
+#include <linux/slab.h>
+#include <linux/dma-mapping.h>
+#include <linux/xlink_drv_inf.h>
+
+#include "xlink-platform.h"
+
+#ifdef CONFIG_XLINK_LOCAL_HOST
+#include <linux/xlink-ipc.h>
+#else /* !CONFIG_XLINK_LOCAL_HOST */
+
+static inline int xlink_ipc_connect(u32 sw_device_id)
+{ return -1; }
+
+static inline int xlink_ipc_write(u32 sw_device_id, void *data,
+ size_t * const size, u32 timeout, void *context)
+{ return -1; }
+
+static inline int xlink_ipc_read(u32 sw_device_id, void *data,
+ size_t * const size, u32 timeout, void *context)
+{ return -1; }
+
+static inline int xlink_ipc_open_channel(u32 sw_device_id,
+ u32 channel)
+{ return -1; }
+
+static inline int xlink_ipc_close_channel(u32 sw_device_id,
+ u32 channel)
+{ return -1; }
+
+#endif /* CONFIG_XLINK_LOCAL_HOST */
+
+/*
+ * xlink low-level driver interface arrays
+ *
+ * note: array indices based on xlink_interface enum definition
+ */
+
+static int (*connect_fcts[NMB_OF_INTERFACES])(u32) = {
+ xlink_ipc_connect, xlink_pcie_connect, NULL, NULL};
+
+static int (*write_fcts[NMB_OF_INTERFACES])(u32, void *, size_t * const, u32) = {
+ NULL, xlink_pcie_write, NULL, NULL};
+
+static int (*read_fcts[NMB_OF_INTERFACES])(u32, void *, size_t * const, u32) = {
+ NULL, xlink_pcie_read, NULL, NULL};
+
+static int (*open_chan_fcts[NMB_OF_INTERFACES])(u32, u32) = {
+ xlink_ipc_open_channel, NULL, NULL, NULL};
+
+static int (*close_chan_fcts[NMB_OF_INTERFACES])(u32, u32) = {
+ xlink_ipc_close_channel, NULL, NULL, NULL};
+
+/*
+ * xlink low-level driver interface
+ */
+
+int xlink_platform_connect(u32 interface, u32 sw_device_id)
+{
+ if (interface >= NMB_OF_INTERFACES || !connect_fcts[interface])
+ return -1;
+
+ return connect_fcts[interface](sw_device_id);
+}
+
+int xlink_platform_write(u32 interface, u32 sw_device_id, void *data,
+ size_t * const size, u32 timeout, void *context)
+{
+ if (interface == IPC_INTERFACE)
+ return xlink_ipc_write(sw_device_id, data, size, timeout,
+ context);
+
+ if (interface >= NMB_OF_INTERFACES || !write_fcts[interface])
+ return -1;
+
+ return write_fcts[interface](sw_device_id, data, size, timeout);
+}
+
+int xlink_platform_read(u32 interface, u32 sw_device_id, void *data,
+ size_t * const size, u32 timeout, void *context)
+{
+ if (interface == IPC_INTERFACE)
+ return xlink_ipc_read(sw_device_id, data, size, timeout,
+ context);
+
+ if (interface >= NMB_OF_INTERFACES || !read_fcts[interface])
+ return -1;
+
+ return read_fcts[interface](sw_device_id, data, size, timeout);
+}
+
+int xlink_platform_open_channel(u32 interface, u32 sw_device_id,
+ u32 channel)
+{
+ if (interface >= NMB_OF_INTERFACES || !open_chan_fcts[interface])
+ return -1;
+
+ return open_chan_fcts[interface](sw_device_id, channel);
+}
+
+int xlink_platform_close_channel(u32 interface, u32 sw_device_id,
+ u32 channel)
+{
+ if (interface >= NMB_OF_INTERFACES || !close_chan_fcts[interface])
+ return -1;
+
+ return close_chan_fcts[interface](sw_device_id, channel);
+}
+
+void *xlink_platform_allocate(struct device *dev, dma_addr_t *handle,
+ u32 size, u32 alignment,
+ enum xlink_memory_region region)
+{
+#if defined(CONFIG_XLINK_PSS) || !defined(CONFIG_XLINK_LOCAL_HOST)
+ *handle = 0;
+ return kzalloc(size, GFP_KERNEL);
+#else
+ void *p;
+
+ if (region == XLINK_CMA_MEMORY) {
+ // round requests up to two pages (> PAGE_SIZE) so they are served from CMA
+ size = (size < PAGE_SIZE * 2) ? (PAGE_SIZE * 2) : size;
+ p = dma_alloc_coherent(dev, size, handle, GFP_KERNEL);
+ return p;
+ }
+ *handle = 0;
+ return kzalloc(size, GFP_KERNEL);
+#endif
+}
+
+void xlink_platform_deallocate(struct device *dev, void *buf,
+ dma_addr_t handle, u32 size, u32 alignment,
+ enum xlink_memory_region region)
+{
+#if defined(CONFIG_XLINK_PSS) || !defined(CONFIG_XLINK_LOCAL_HOST)
+ kfree(buf);
+#else
+ if (region == XLINK_CMA_MEMORY) {
+ // round requests up to two pages (> PAGE_SIZE) so they are served from CMA
+ size = (size < PAGE_SIZE * 2) ? (PAGE_SIZE * 2) : size;
+ dma_free_coherent(dev, size, buf, handle);
+ } else {
+ kfree(buf);
+ }
+#endif
+}
diff --git a/drivers/misc/xlink-core/xlink-platform.h b/drivers/misc/xlink-core/xlink-platform.h
new file mode 100644
index 000000000000..2c7c4c418099
--- /dev/null
+++ b/drivers/misc/xlink-core/xlink-platform.h
@@ -0,0 +1,65 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Linux Kernel Platform API
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_PLATFORM_H
+#define __XLINK_PLATFORM_H
+
+#include "xlink-defs.h"
+
+int xlink_platform_connect(u32 interface, u32 sw_device_id);
+
+int xlink_platform_write(u32 interface, u32 sw_device_id, void *data,
+ size_t * const size, u32 timeout, void *context);
+
+int xlink_platform_read(u32 interface, u32 sw_device_id, void *data,
+ size_t * const size, u32 timeout, void *context);
+
+int xlink_platform_reset_device(u32 interface, u32 sw_device_id);
+
+int xlink_platform_boot_device(u32 interface, u32 sw_device_id,
+ const char *binary_name);
+
+int xlink_platform_get_device_name(u32 interface, u32 sw_device_id,
+ char *device_name, size_t name_size);
+
+int xlink_platform_get_device_list(u32 interface, u32 *sw_device_id_list,
+ u32 *num_devices);
+
+int xlink_platform_get_device_status(u32 interface, u32 sw_device_id,
+ u32 *device_status);
+
+int xlink_platform_set_device_mode(u32 interface, u32 sw_device_id,
+ u32 power_mode);
+
+int xlink_platform_get_device_mode(u32 interface, u32 sw_device_id,
+ u32 *power_mode);
+
+int xlink_platform_open_channel(u32 interface, u32 sw_device_id,
+ u32 channel);
+
+int xlink_platform_close_channel(u32 interface, u32 sw_device_id,
+ u32 channel);
+
+int xlink_platform_register_for_events(u32 interface, u32 sw_device_id,
+ xlink_device_event_cb event_notif_fn);
+
+int xlink_platform_unregister_for_events(u32 interface, u32 sw_device_id);
+
+enum xlink_memory_region {
+ XLINK_NORMAL_MEMORY = 0,
+ XLINK_CMA_MEMORY
+};
+
+void *xlink_platform_allocate(struct device *dev, dma_addr_t *handle,
+ u32 size, u32 alignment,
+ enum xlink_memory_region region);
+
+void xlink_platform_deallocate(struct device *dev, void *buf,
+ dma_addr_t handle, u32 size, u32 alignment,
+ enum xlink_memory_region region);
+
+#endif /* __XLINK_PLATFORM_H */
diff --git a/include/linux/xlink.h b/include/linux/xlink.h
new file mode 100644
index 000000000000..c22439d5aade
--- /dev/null
+++ b/include/linux/xlink.h
@@ -0,0 +1,108 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * xlink Linux Kernel API
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_H
+#define __XLINK_H
+
+#include <uapi/misc/xlink_uapi.h>
+
+enum xlink_dev_type {
+ HOST_DEVICE = 0, /* used when communicating host to host*/
+ VPUIP_DEVICE /* used when communicating host to vpu ip */
+};
+
+struct xlink_handle {
+ u32 sw_device_id; /* identifies a device in the system */
+ enum xlink_dev_type dev_type; /* determines direction of comms */
+};
+
+enum xlink_opmode {
+ RXB_TXB = 0, /* blocking read, blocking write */
+ RXN_TXN, /* non-blocking read, non-blocking write */
+ RXB_TXN, /* blocking read, non-blocking write */
+ RXN_TXB /* non-blocking read, blocking write */
+};
+
+enum xlink_device_power_mode {
+ POWER_DEFAULT_NOMINAL_MAX = 0, /* no load reduction, default mode */
+ POWER_SUBNOMINAL_HIGH, /* slight load reduction */
+ POWER_MEDIUM, /* average load reduction */
+ POWER_LOW, /* significant load reduction */
+ POWER_MIN, /* maximum load reduction */
+ POWER_SUSPENDED /* power off or device suspend */
+};
+
+enum xlink_error {
+ X_LINK_SUCCESS = 0, /* xlink operation completed successfully */
+ X_LINK_ALREADY_INIT, /* xlink already initialized */
+ X_LINK_ALREADY_OPEN, /* channel already open */
+ X_LINK_COMMUNICATION_NOT_OPEN, /* operation on a closed channel */
+ X_LINK_COMMUNICATION_FAIL, /* communication failure */
+ X_LINK_COMMUNICATION_UNKNOWN_ERROR, /* error unknown */
+ X_LINK_DEVICE_NOT_FOUND, /* device specified not found */
+ X_LINK_TIMEOUT, /* operation timed out */
+ X_LINK_ERROR, /* parameter error */
+ X_LINK_CHAN_FULL /* channel has reached fill level */
+};
+
+enum xlink_device_status {
+ XLINK_DEV_OFF = 0, /* device is off */
+ XLINK_DEV_ERROR, /* device HW failure is detected */
+ XLINK_DEV_BUSY, /* device is busy and not available */
+ XLINK_DEV_RECOVERY, /* device is in recovery mode */
+ XLINK_DEV_READY /* device is available for use */
+};
+
+/* xlink API */
+
+typedef void (*xlink_event)(u16 chan);
+typedef int (*xlink_device_event_cb)(u32 sw_device_id, u32 event_type);
+
+enum xlink_error xlink_initialize(void);
+
+enum xlink_error xlink_connect(struct xlink_handle *handle);
+
+enum xlink_error xlink_open_channel(struct xlink_handle *handle,
+ u16 chan, enum xlink_opmode mode,
+ u32 data_size, u32 timeout);
+
+enum xlink_error xlink_close_channel(struct xlink_handle *handle, u16 chan);
+
+enum xlink_error xlink_write_data(struct xlink_handle *handle,
+ u16 chan, u8 const *message, u32 size);
+
+enum xlink_error xlink_write_volatile(struct xlink_handle *handle,
+ u16 chan, u8 const *message, u32 size);
+
+enum xlink_error xlink_read_data(struct xlink_handle *handle,
+ u16 chan, u8 **message, u32 *size);
+
+enum xlink_error xlink_read_data_to_buffer(struct xlink_handle *handle,
+ u16 chan, u8 * const message,
+ u32 *size);
+
+enum xlink_error xlink_release_data(struct xlink_handle *handle,
+ u16 chan, u8 * const data_addr);
+
+enum xlink_error xlink_disconnect(struct xlink_handle *handle);
+
+/* API functions to be implemented
+ *
+ * enum xlink_error xlink_write_crc_data(struct xlink_handle *handle,
+ * u16 chan, u8 const *message, size_t * const size);
+ *
+ * enum xlink_error xlink_read_crc_data(struct xlink_handle *handle,
+ * u16 chan, u8 **message, size_t * const size);
+ *
+ * enum xlink_error xlink_read_crc_data_to_buffer(struct xlink_handle *handle,
+ * u16 chan, u8 * const message, size_t * const size);
+ *
+ * enum xlink_error xlink_reset_all(void);
+ *
+ */
+
+#endif /* __XLINK_H */
diff --git a/include/uapi/misc/xlink_uapi.h b/include/uapi/misc/xlink_uapi.h
new file mode 100644
index 000000000000..77794299c4a6
--- /dev/null
+++ b/include/uapi/misc/xlink_uapi.h
@@ -0,0 +1,145 @@
+/* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */
+/*
+ * xlink Linux Kernel API
+ *
+ * Copyright (C) 2018-2019 Intel Corporation
+ *
+ */
+#ifndef __XLINK_UAPI_H
+#define __XLINK_UAPI_H
+
+#include <linux/types.h>
+
+#define XLINK_MAGIC 'x'
+#define XL_OPEN_CHANNEL _IOW(XLINK_MAGIC, 1, void*)
+#define XL_READ_DATA _IOW(XLINK_MAGIC, 2, void*)
+#define XL_WRITE_DATA _IOW(XLINK_MAGIC, 3, void*)
+#define XL_CLOSE_CHANNEL _IOW(XLINK_MAGIC, 4, void*)
+#define XL_WRITE_VOLATILE _IOW(XLINK_MAGIC, 5, void*)
+#define XL_READ_TO_BUFFER _IOW(XLINK_MAGIC, 6, void*)
+#define XL_START_VPU _IOW(XLINK_MAGIC, 7, void*)
+#define XL_STOP_VPU _IOW(XLINK_MAGIC, 8, void*)
+#define XL_RESET_VPU _IOW(XLINK_MAGIC, 9, void*)
+#define XL_CONNECT _IOW(XLINK_MAGIC, 10, void*)
+#define XL_RELEASE_DATA _IOW(XLINK_MAGIC, 11, void*)
+#define XL_DISCONNECT _IOW(XLINK_MAGIC, 12, void*)
+#define XL_WRITE_CONTROL_DATA _IOW(XLINK_MAGIC, 13, void*)
+#define XL_DATA_READY_CALLBACK _IOW(XLINK_MAGIC, 14, void*)
+#define XL_DATA_CONSUMED_CALLBACK _IOW(XLINK_MAGIC, 15, void*)
+#define XL_GET_DEVICE_NAME _IOW(XLINK_MAGIC, 16, void*)
+#define XL_GET_DEVICE_LIST _IOW(XLINK_MAGIC, 17, void*)
+#define XL_GET_DEVICE_STATUS _IOW(XLINK_MAGIC, 18, void*)
+#define XL_BOOT_DEVICE _IOW(XLINK_MAGIC, 19, void*)
+#define XL_RESET_DEVICE _IOW(XLINK_MAGIC, 20, void*)
+#define XL_GET_DEVICE_MODE _IOW(XLINK_MAGIC, 21, void*)
+#define XL_SET_DEVICE_MODE _IOW(XLINK_MAGIC, 22, void*)
+#define XL_REGISTER_DEV_EVENT _IOW(XLINK_MAGIC, 23, void*)
+#define XL_UNREGISTER_DEV_EVENT _IOW(XLINK_MAGIC, 24, void*)
+
+struct xlinkopenchannel {
+ void *handle;
+ __u16 chan;
+ int mode;
+ __u32 data_size;
+ __u32 timeout;
+ __u32 *return_code;
+};
+
+struct xlinkcallback {
+ void *handle;
+ __u16 chan;
+ void (*callback)(__u16 chan);
+ __u32 *return_code;
+};
+
+struct xlinkwritedata {
+ void *handle;
+ __u16 chan;
+ void const *pmessage;
+ __u32 size;
+ __u32 *return_code;
+};
+
+struct xlinkreaddata {
+ void *handle;
+ __u16 chan;
+ void *pmessage;
+ __u32 *size;
+ __u32 *return_code;
+};
+
+struct xlinkreadtobuffer {
+ void *handle;
+ __u16 chan;
+ void *pmessage;
+ __u32 *size;
+ __u32 *return_code;
+};
+
+struct xlinkconnect {
+ void *handle;
+ __u32 *return_code;
+};
+
+struct xlinkrelease {
+ void *handle;
+ __u16 chan;
+ void *addr;
+ __u32 *return_code;
+};
+
+struct xlinkstartvpu {
+ char *filename;
+ int namesize;
+ __u32 *return_code;
+};
+
+struct xlinkstopvpu {
+ __u32 *return_code;
+};
+
+struct xlinkgetdevicename {
+ void *handle;
+ char *name;
+ __u32 name_size;
+ __u32 *return_code;
+};
+
+struct xlinkgetdevicelist {
+ __u32 *sw_device_id_list;
+ __u32 *num_devices;
+ __u32 *return_code;
+};
+
+struct xlinkgetdevicestatus {
+ void *handle;
+ __u32 *device_status;
+ __u32 *return_code;
+};
+
+struct xlinkbootdevice {
+ void *handle;
+ const char *binary_name;
+ __u32 binary_name_size;
+ __u32 *return_code;
+};
+
+struct xlinkresetdevice {
+ void *handle;
+ __u32 *return_code;
+};
+
+struct xlinkdevmode {
+ void *handle;
+ int *device_mode;
+ __u32 *return_code;
+};
+
+struct xlinkregdevevent {
+ void *handle;
+ __u32 *event_list;
+ __u32 num_events;
+ __u32 *return_code;
+};
+
+#endif /* __XLINK_UAPI_H */
--
2.17.1

2021-02-14 17:43:00

by Randy Dunlap

Subject: Re: [PATCH v6 25/34] misc: Add Keem Bay VPU manager

On 2/12/21 2:22 PM, [email protected] wrote:
> diff --git a/drivers/misc/vpumgr/Kconfig b/drivers/misc/vpumgr/Kconfig
> new file mode 100644
> index 000000000000..bb82ff83afd3
> --- /dev/null
> +++ b/drivers/misc/vpumgr/Kconfig
> @@ -0,0 +1,9 @@
> +config VPUMGR
> + tristate "VPU Manager"
> + depends on ARM64 && XLINK_CORE
> + help
> + VPUMGR manages life-cycle of VPU related resources which were
> + dynamically allocated on demands of user-space application

End the sentence above with a period ('.').

> +
> + Select y or m if you have a processor including the Intel
> + Vision Processor (VPU) on it.


--
~Randy

2021-02-14 17:45:00

by Randy Dunlap

Subject: Re: [PATCH v6 29/34] Intel tsens i2c slave driver.

On 2/12/21 2:22 PM, [email protected] wrote:
> diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> index 8b263fdd80c3..be8d27e81864 100644
> --- a/drivers/misc/intel_tsens/Kconfig
> +++ b/drivers/misc/intel_tsens/Kconfig
> @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
> Say Y if using a processor that includes the Intel VPU such as
> Keem Bay. If unsure, say N.
>
> +config INTEL_TSENS_I2C_SLAVE
> + bool "I2C slave driver for intel tsens"

s/intel/Intel/ ?
(as below)

> + depends on INTEL_TSENS_LOCAL_HOST
> + depends on I2C=y && I2C_SLAVE
> + help
> + This option enables tsens I2C slave driver.
> +
> + This driver is used for reporting thermal data via I2C
> + SMBUS to remote host.
> + Enable this option if you want to have support for thermal
> + management controller.
> + Say Y if using a processor that includes the Intel VPU such as
> + Keem Bay. If unsure, say N.


--
~Randy

2021-02-14 17:45:27

by Randy Dunlap

Subject: Re: [PATCH v6 30/34] misc:intel_tsens: Intel Keem Bay tsens driver.

On 2/12/21 2:23 PM, [email protected] wrote:
> diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> index be8d27e81864..5cfe6b4004e5 100644
> --- a/drivers/misc/intel_tsens/Kconfig
> +++ b/drivers/misc/intel_tsens/Kconfig
> @@ -28,6 +28,18 @@ config INTEL_TSENS_I2C_SLAVE
> Say Y if using a processor that includes the Intel VPU such as
> Keem Bay. If unsure, say N.
>
> +config KEEMBAY_THERMAL
> + tristate "Temperature sensor driver for intel Keem Bay"

s/intel/Intel/ ?

as above and below.

> + depends on INTEL_TSENS_LOCAL_HOST && ARCH_KEEMBAY
> + help
> + Enable this option if you want to have support for Keem Bay
> + thermal management sensors.
> +
> + This driver is used for reading onchip temperature sensor
> + values from Keem Bay SoC.
> + Say Y if using a processor that includes the Intel VPU such as
> + Keem Bay. If unsure, say N.


--
~Randy

2021-02-14 17:47:47

by Randy Dunlap

Subject: Re: [PATCH v6 27/34] misc: Tsens ARM host thermal driver.

On 2/12/21 2:22 PM, [email protected] wrote:
> diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> new file mode 100644
> index 000000000000..bfb8fe1997f4
> --- /dev/null
> +++ b/drivers/misc/intel_tsens/Kconfig
> @@ -0,0 +1,15 @@
> +# Copyright (C) 2020 Intel Corporation
> +# SPDX-License-Identifier: GPL-2.0-only
> +
> +config INTEL_TSENS_LOCAL_HOST
> + bool "Temperature sensor driver for intel tsens"

s/intel/Intel/ ?

as below.

> + select THERMAL
> + help
> + This option enables tsens thermal local Host driver.
> +
> + This driver is used for reporting thermal data via thermal
> + framework.
> + Enable this option if you want to have support for thermal
> + management controller.
> + Say Y if using a processor that includes the Intel VPU such as
> + Keem Bay. If unsure, say N.


--
~Randy

2021-02-14 17:48:39

by Randy Dunlap

Subject: Re: [PATCH v6 28/34] misc: Intel tsens IA host driver.

On 2/12/21 2:22 PM, [email protected] wrote:
> diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> index bfb8fe1997f4..8b263fdd80c3 100644
> --- a/drivers/misc/intel_tsens/Kconfig
> +++ b/drivers/misc/intel_tsens/Kconfig
> @@ -13,3 +13,16 @@ config INTEL_TSENS_LOCAL_HOST
> management controller.
> Say Y if using a processor that includes the Intel VPU such as
> Keem Bay. If unsure, say N.
> +
> +config INTEL_TSENS_IA_HOST
> + tristate "Temperature sensor driver for intel tsens remote host"

s/intel/Intel/ ?

as above and below.

> + depends on I2C && THERMAL
> + depends on I2C_SMBUS
> + help
> + This option enables tsens i2c and thermal local Host driver.
> +
> + This driver is used for reading thermal data via I2C SMBUS
> + and registers itself to thermal framework, which can be
> + used by thermal daemon in remote IA host
> + Say Y if using a processor that includes the Intel VPU such as
> + Keem Bay. If unsure, say N.


--
~Randy

2021-02-14 17:51:53

by Randy Dunlap

Subject: Re: [PATCH v6 33/34] misc: Hddl device management for local host

On 2/12/21 2:23 PM, [email protected] wrote:
> diff --git a/drivers/misc/hddl_device/Kconfig b/drivers/misc/hddl_device/Kconfig
> new file mode 100644
> index 000000000000..e1ae81fdf177
> --- /dev/null
> +++ b/drivers/misc/hddl_device/Kconfig
> @@ -0,0 +1,14 @@
> +# Copyright (C) 2020 Intel Corporation
> +# SPDX-License-Identifier: GPL-2.0-only
> +
> +config HDDL_DEVICE_CLIENT
> + tristate "Support for hddl device client"

Please use "HDDL" consistently.

> + depends on XLINK_CORE && INTEL_TSENS_LOCAL_HOST
> + help
> + This option enables HDDL device client module.
> +
> + This driver is used for sharing time sync data to local host and
> + retrieves the sensors available on the platform. This also handles
> + the device connect/disconnect programming sequence.
> + Say Y if using a processor that includes the Intel VPU such as
> + Keem Bay. If unsure, say N.


--
~Randy

2021-02-14 17:55:28

by Randy Dunlap

Subject: Re: [PATCH v6 20/34] xlink-core: Add xlink core driver xLink

On 2/12/21 2:22 PM, [email protected] wrote:
> diff --git a/drivers/misc/xlink-core/Kconfig b/drivers/misc/xlink-core/Kconfig
> new file mode 100644
> index 000000000000..a0ceb0b48219
> --- /dev/null
> +++ b/drivers/misc/xlink-core/Kconfig
> @@ -0,0 +1,33 @@
> +config XLINK_CORE
> + tristate "Support for XLINK CORE"
> + depends on ((XLINK_PCIE_RH_DRIVER || XBAY_XLINK_PCIE_RH_DRIVER) || (XLINK_LOCAL_HOST && (XLINK_PCIE_LH_DRIVER || XBAY_XLINK_PCIE_RH_DRIVER)))

-ELINETOOLONG. Use '\' for line continuation in Kconfig files.

> + help
> + XLINK CORE enables the communication/control subsystem.
> +
> + If unsure, say N.
> +
> + To compile this driver as a module, choose M here: the
> + module will be called xlink.ko.
> +
> +config XLINK_LOCAL_HOST
> + tristate "Support for XLINK LOCAL HOST"
> + depends on XLINK_IPC
> + help
> + XLINK LOCAL HOST enables local host functionality for
> + the communication/control Sub-System.
> +
> + Enable this config when building the kernel for the Intel Vision
> + Processing Unit (VPU) Local Host core.
> +
> + If building for a Remote Host kernel, say N.
> +
> +config XLINK_PSS
> + tristate "Support for XLINK PSS (Pre-Silicon Solution)"
> + depends on XLINK_LOCAL_HOST
> + help
> + XLINK PSS enables the communication/control subsystem on a PSS platform.
> +
> + Enable this config when building the kernel for the Intel Vision
> + Processing Unit (VPU) in a simulated env.

Please s/env/environment/.

> +
> + If building for a VPU silicon, say N.


--
~Randy

2021-02-15 04:56:54

by Jassi Brar

Subject: Re: [PATCH v6 03/34] mailbox: vpu-ipc-mailbox: Add support for Intel VPU IPC mailbox

On Fri, Feb 12, 2021 at 4:23 PM <[email protected]> wrote:
>
> From: Daniele Alessandrelli <[email protected]>
>
> Add mailbox controller enabling inter-processor communication (IPC)
> between the CPU (aka, the Application Processor - AP) and the VPU on
> Intel Movidius SoCs like Keem Bay.
>
> The controller uses HW FIFOs to enable such communication. Specifically,
> there are two FIFOs, one for the CPU and one for VPU. Each FIFO can hold
> 128 entries (messages) of 32-bit each (but only 26 bits are actually
> usable, since the 6 least-significant bits are reserved for the overflow
> detection mechanism, more info below).
>
> When the Linux kernel on the AP needs to send messages to the VPU
> firmware, it writes them to the VPU FIFO; similarly, when the VPU
> firmware needs to send messages to the AP, it writes them to the CPU
> FIFO.
>
> It's important to note that the AP is not the only one that can write to
> the VPU FIFO: other components within the SoC can write to it too. Each
> of these components (including the AP) has a unique 6-bit processor ID
> associated to it. The VPU HW FIFO expects the last 6 bits of each 32-bit
> FIFO entry to contain the processor ID of the sender.
>
> Therefore, sending a message from the AP to the VPU works as follows:
>
> 1. The message must be a 32-bit value with the last 6-bit set to 0 (in
> practice, the message is meant to be a 32-bit address value, aligned
> to 64 bytes; but this is not really relevant for this mailbox
> controller).
>
> 2. The AP adds its processor ID to the 32-bit message being sent:
> M = m | AP_ProcID
>
> 3. The sender writes the message (M) to the TIM_IPC_FIFO register of the
> VPU FIFO.
>
> 4. The HW atomically checks if the FIFO is full and if not it writes it
> to the actual FIFO; if the FIFO is full, the HW reads the ProcID
> from M and then sets the corresponding bit of TIM_IPC_FIFO_OF_FLAG0,
> to signal that the write failed, because the FIFO was full.
>
> 5. The AP reads the TIM_IPC_FIFO_OF_FLAG0 register and checks if the
> bit corresponding to its ProcID has been set (in order to know if the
> TX succeeded or failed); if the bit is set, the AP clears it.
>
> Note: as briefly mentioned above, the 32-bit value is meant to be a 32-
> bit physical address (64-byte aligned). This address points to a
> predefined struct (i.e., the IPC packet) in shared memory. However,
> since this struct is not HW dependent (it's just the struct the VPU
> firmware expects and in theory it could change if a different VPU FW is
> used), it's not defined here, but in the Keem Bay IPC driver, which is
> the mailbox client for this controller.
>
> The AP is notified of pending messages in the CPU FIFO by means of the
> 'FIFO-not-empty' interrupt, which is generated by the CPU FIFO while not
> empty. This interrupt is cleared automatically once all messages have
> been read from the FIFO (i.e., the FIFO has been emptied).
>
> The hardware doesn't provide a TX done IRQ (i.e., an IRQ that allows
> the VPU firmware to notify the AP that the message put into the VPU FIFO
> has been received); however the AP can ensure that the message has been
> successfully put into the VPU FIFO (and therefore transmitted) by
> checking the VPU FIFO status register (TIM_IPC_FIFO_OF_FLAG0) to ensure
> that writing the message didn't cause the FIFO to overflow (as described
> above).
>
I don't remember the last time I saw such a detailed explanation! Thanks.

> Therefore, the mailbox controller is configured as capable of tx_done
> IRQs and a tasklet is used to simulate the tx_done IRQ. The tasklet is
> activated by send_data() right after the message has been put into the
> VPU FIFO and the VPU FIFO status registers has been checked. If an
> overflow is reported by the status register, the tasklet passes -EBUSY
> to mbox_chan_txdone(), to notify the mailbox client of the failed TX.
>
> The client should therefore register a tx_done() callback to properly
> handle failed transmissions.
>
> Note: the 'txdone_poll' mechanism cannot be used because it doesn't
> provide a way to report a failed transmission.
>
IIUIC, maybe the solution is simpler .... What if we set txdone_poll.
Always return success in send_data(). And check if we overflew the
fifo in last_tx_done(). If we did overflow, try to rewrite the data
and check again. Return true, if not overflew this time, otherwise
return false so that mailbox api can ask us to try again in next
last_tx_done(). This way we can do away with the tasklet and, more
importantly, avoid send_data() failures and retries on clients' part.

Cheers!

2021-02-17 23:52:42

by mark gross

Subject: Re: [PATCH v6 25/34] misc: Add Keem Bay VPU manager

On Sun, Feb 14, 2021 at 09:39:55AM -0800, Randy Dunlap wrote:
> On 2/12/21 2:22 PM, [email protected] wrote:
> > diff --git a/drivers/misc/vpumgr/Kconfig b/drivers/misc/vpumgr/Kconfig
> > new file mode 100644
> > index 000000000000..bb82ff83afd3
> > --- /dev/null
> > +++ b/drivers/misc/vpumgr/Kconfig
> > @@ -0,0 +1,9 @@
> > +config VPUMGR
> > + tristate "VPU Manager"
> > + depends on ARM64 && XLINK_CORE
> > + help
> > + VPUMGR manages life-cycle of VPU related resources which were
> > + dynamically allocated on demands of user-space application
>
> End the sentence above with a period ('.').
fixed and queued up for merge window closure.

--mark

>
> > +
> > + Select y or m if you have a processor including the Intel
> > + Vision Processor (VPU) on it.
>
>
> --
> ~Randy
>

2021-02-17 23:54:42

by mark gross

Subject: Re: [PATCH v6 27/34] misc: Tsens ARM host thermal driver.

On Sun, Feb 14, 2021 at 09:44:53AM -0800, Randy Dunlap wrote:
> On 2/12/21 2:22 PM, [email protected] wrote:
> > diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> > new file mode 100644
> > index 000000000000..bfb8fe1997f4
> > --- /dev/null
> > +++ b/drivers/misc/intel_tsens/Kconfig
> > @@ -0,0 +1,15 @@
> > +# Copyright (C) 2020 Intel Corporation
> > +# SPDX-License-Identifier: GPL-2.0-only
> > +
> > +config INTEL_TSENS_LOCAL_HOST
> > + bool "Temperature sensor driver for intel tsens"
>
> s/intel/Intel/ ?
>
> as below.
fixed and queued up for posting when the merge window closes.

--mark

>
> > + select THERMAL
> > + help
> > + This option enables tsens thermal local Host driver.
> > +
> > + This driver is used for reporting thermal data via thermal
> > + framework.
> > + Enable this option if you want to have support for thermal
> > + management controller.
> > + Say Y if using a processor that includes the Intel VPU such as
> > + Keem Bay. If unsure, say N.
>
>
> --
> ~Randy
>

2021-02-17 23:57:52

by mark gross

Subject: Re: [PATCH v6 28/34] misc: Intel tsens IA host driver.

On Sun, Feb 14, 2021 at 09:45:56AM -0800, Randy Dunlap wrote:
> On 2/12/21 2:22 PM, [email protected] wrote:
> > diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> > index bfb8fe1997f4..8b263fdd80c3 100644
> > --- a/drivers/misc/intel_tsens/Kconfig
> > +++ b/drivers/misc/intel_tsens/Kconfig
> > @@ -13,3 +13,16 @@ config INTEL_TSENS_LOCAL_HOST
> > management controller.
> > Say Y if using a processor that includes the Intel VPU such as
> > Keem Bay. If unsure, say N.
> > +
> > +config INTEL_TSENS_IA_HOST
> > + tristate "Temperature sensor driver for intel tsens remote host"
>
> s/intel/Intel/ ?
>
> as above and below.
fixed and queued up for posting when the merge window closes.

Thanks!

--mark
>
> > + depends on I2C && THERMAL
> > + depends on I2C_SMBUS
> > + help
> > + This option enables tsens i2c and thermal local Host driver.
> > +
> > + This driver is used for reading thermal data via I2C SMBUS
> > + and registers itself to thermal framework, which can be
> > + used by thermal daemon in remote IA host
> > + Say Y if using a processor that includes the Intel VPU such as
> > + Keem Bay. If unsure, say N.
>
>
> --
> ~Randy
>

2021-02-18 00:00:46

by mark gross

Subject: Re: [PATCH v6 30/34] misc:intel_tsens: Intel Keem Bay tsens driver.

On Sun, Feb 14, 2021 at 09:42:22AM -0800, Randy Dunlap wrote:
> On 2/12/21 2:23 PM, [email protected] wrote:
> > diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> > index be8d27e81864..5cfe6b4004e5 100644
> > --- a/drivers/misc/intel_tsens/Kconfig
> > +++ b/drivers/misc/intel_tsens/Kconfig
> > @@ -28,6 +28,18 @@ config INTEL_TSENS_I2C_SLAVE
> > Say Y if using a processor that includes the Intel VPU such as
> > Keem Bay. If unsure, say N.
> >
> > +config KEEMBAY_THERMAL
> > + tristate "Temperature sensor driver for intel Keem Bay"
>
> s/intel/Intel/ ?
>
> as above and below.
fixed and queued up for posting when the merge window closes.

--mark
>
> > + depends on INTEL_TSENS_LOCAL_HOST && ARCH_KEEMBAY
> > + help
> > + Enable this option if you want to have support for Keem Bay
> > + thermal management sensors.
> > +
> > + This driver is used for reading onchip temperature sensor
> > + values from Keem Bay SoC.
> > + Say Y if using a processor that includes the Intel VPU such as
> > + Keem Bay. If unsure, say N.
>
>
> --
> ~Randy
>

2021-02-18 00:01:15

by mark gross

Subject: Re: [PATCH v6 29/34] Intel tsens i2c slave driver.

On Sun, Feb 14, 2021 at 09:41:26AM -0800, Randy Dunlap wrote:
> On 2/12/21 2:22 PM, [email protected] wrote:
> > diff --git a/drivers/misc/intel_tsens/Kconfig b/drivers/misc/intel_tsens/Kconfig
> > index 8b263fdd80c3..be8d27e81864 100644
> > --- a/drivers/misc/intel_tsens/Kconfig
> > +++ b/drivers/misc/intel_tsens/Kconfig
> > @@ -14,6 +14,20 @@ config INTEL_TSENS_LOCAL_HOST
> > Say Y if using a processor that includes the Intel VPU such as
> > Keem Bay. If unsure, say N.
> >
> > +config INTEL_TSENS_I2C_SLAVE
> > + bool "I2C slave driver for intel tsens"
>
> s/intel/Intel/ ?
> (as below)
fixed and queued up for posting when the merge window closes.

Thanks

--mark

>
> > + depends on INTEL_TSENS_LOCAL_HOST
> > + depends on I2C=y && I2C_SLAVE
> > + help
> > + This option enables tsens I2C slave driver.
> > +
> > + This driver is used for reporting thermal data via I2C
> > + SMBUS to remote host.
> > + Enable this option if you want to have support for thermal
> > + management controller.
> > + Say Y if using a processor that includes the Intel VPU such as
> > + Keem Bay. If unsure, say N.
>
>
> --
> ~Randy
>

2021-02-18 00:56:30

by mark gross

Subject: Re: [PATCH v6 20/34] xlink-core: Add xlink core driver xLink

On Sun, Feb 14, 2021 at 09:52:51AM -0800, Randy Dunlap wrote:
> On 2/12/21 2:22 PM, [email protected] wrote:
> > diff --git a/drivers/misc/xlink-core/Kconfig b/drivers/misc/xlink-core/Kconfig
> > new file mode 100644
> > index 000000000000..a0ceb0b48219
> > --- /dev/null
> > +++ b/drivers/misc/xlink-core/Kconfig
> > @@ -0,0 +1,33 @@
> > +config XLINK_CORE
> > + tristate "Support for XLINK CORE"
> > + depends on ((XLINK_PCIE_RH_DRIVER || XBAY_XLINK_PCIE_RH_DRIVER) || (XLINK_LOCAL_HOST && (XLINK_PCIE_LH_DRIVER || XBAY_XLINK_PCIE_RH_DRIVER)))
>
> -ELINETOOLONG. Use '\' for line continuation in Kconfig files.
fixed

>
> > + help
> > + XLINK CORE enables the communication/control subsystem.
> > +
> > + If unsure, say N.
> > +
> > + To compile this driver as a module, choose M here: the
> > + module will be called xlink.ko.
> > +
> > +config XLINK_LOCAL_HOST
> > + tristate "Support for XLINK LOCAL HOST"
> > + depends on XLINK_IPC
> > + help
> > + XLINK LOCAL HOST enables local host functionality for
> > + the communication/control Sub-System.
> > +
> > + Enable this config when building the kernel for the Intel Vision
> > + Processing Unit (VPU) Local Host core.
> > +
> > + If building for a Remote Host kernel, say N.
> > +
> > +config XLINK_PSS
> > + tristate "Support for XLINK PSS (Pre-Silicon Solution)"
> > + depends on XLINK_LOCAL_HOST
> > + help
> > + XLINK PSS enables the communication/control subsystem on a PSS platform.
> > +
> > + Enable this config when building the kernel for the Intel Vision
> > + Processing Unit (VPU) in a simulated env.
>
> Please s/env/environment/.
fixed

I have the update queued up and will post a rebased v7 when the merge window closes.

--mark

>
> > +
> > + If building for a VPU silicon, say N.
>
>
> --
> ~Randy
>

2021-02-18 00:58:24

by mark gross

Subject: Re: [PATCH v6 33/34] misc: Hddl device management for local host

On Sun, Feb 14, 2021 at 09:47:51AM -0800, Randy Dunlap wrote:
> On 2/12/21 2:23 PM, [email protected] wrote:
> > diff --git a/drivers/misc/hddl_device/Kconfig b/drivers/misc/hddl_device/Kconfig
> > new file mode 100644
> > index 000000000000..e1ae81fdf177
> > --- /dev/null
> > +++ b/drivers/misc/hddl_device/Kconfig
> > @@ -0,0 +1,14 @@
> > +# Copyright (C) 2020 Intel Corporation
> > +# SPDX-License-Identifier: GPL-2.0-only
> > +
> > +config HDDL_DEVICE_CLIENT
> > + tristate "Support for hddl device client"
>
> Please use "HDDL" consistently.
fixed and queued up for merge window closure.

--mark


> > + depends on XLINK_CORE && INTEL_TSENS_LOCAL_HOST
> > + help
> > + This option enables HDDL device client module.
> > +
> > + This driver is used for sharing time sync data to local host and
> > + retrives the sensors available on the platform. This also handles
> > + the device connect/disconnect programming sequence.
> > + Say Y if using a processor that includes the Intel VPU such as
> > + Keem Bay. If unsure, say N.
>
>
> --
> ~Randy
>

2021-02-18 14:00:55

by Alessandrelli, Daniele

Subject: Re: [PATCH v6 03/34] mailbox: vpu-ipc-mailbox: Add support for Intel VPU IPC mailbox

Hi Jassi,

Thank you very much for your feedback.

On Sun, 2021-02-14 at 22:54 -0600, Jassi Brar wrote:
> IIUIC, maybe the solution is simpler .... What if we set txdone_poll.
> Always return success in send_data(). And check if we overflew the
> fifo in last_tx_done(). If we did overflow, try to rewrite the data
> and check again. Return true, if not overflew this time, otherwise
> return false so that mailbox api can ask us to try again in next
> last_tx_done(). This way we can do away with the tasklet and, more
> importantly, avoid send_data() failures and retries on clients' part.

That's a clever solution to avoid the tasklet. The only issue for us is
the automatic TX retry from the controller. I understand that's
generally a desirable feature, but in our case we'd like the client to
have full control on re-transmission attempts.

That's because some of our data is time-sensitive. For instance, when
we process frames from a video stream we prefer dropping a frame rather
than re-transmitting it and delaying the processing of the rest.

Now, I understand that the client can set the 'tx_block' and 'tx_tout'
channel fields to specify how long it wishes to wait, but the problem
is that our (single) channel is shared between multiple applications
having different timing requirements. That's why we prefer to let
applications deal with re-transmissions.

Given the above, do you think it's reasonable to leave the
implementation as it is now?
(from initial analysis, the tasklet doesn't seem to affect the
performance of our use cases significantly, so we are fine with it)


2021-02-18 21:14:11

by Jassi Brar

Subject: Re: [PATCH v6 03/34] mailbox: vpu-ipc-mailbox: Add support for Intel VPU IPC mailbox

On Thu, Feb 18, 2021 at 6:02 AM Alessandrelli, Daniele
<[email protected]> wrote:
>
> Hi Jassi,
>
> Thank you very much for your feedback.
>
> On Sun, 2021-02-14 at 22:54 -0600, Jassi Brar wrote:
> > IIUIC, maybe the solution is simpler .... What if we set txdone_poll.
> > Always return success in send_data(). And check if we overflew the
> > fifo in last_tx_done(). If we did overflow, try to rewrite the data
> > and check again. Return true, if not overflew this time, otherwise
> > return false so that mailbox api can ask us to try again in next
> > last_tx_done(). This way we can do away with the tasklet and, more
> > importantly, avoid send_data() failures and retries on clients' part.
>
> That's a clever solution to avoid the tasklet. The only issue for us is
> the automatic TX retry from the controller. I understand that's
> generally a desirable feature, but in our case we'd like the client to
> have full control on re-transmission attempts.
>
> That's because some of our data is time-sensitive. For instance, when
> we process frames from a video stream we prefer dropping a frame rather
> than re-transmitting it and delaying the processing of the rest.
>
> Now, I understand that the client can set the 'tx_block' and 'tx_tout'
> channel fields to specify how long it wishes to wait, but the problem
> is that our (single) channel is shared between multiple applications
> having different timing requirements. That's why we prefer to let
> applications deal with re-transmissions.
>
> Given the above, do you think it's reasonable to leave the
> implementation as it is now?
> (from initial analysis, the tasklet doesn't seem to affect the
> performance of our use cases significantly, so we are fine with it)
>
Yup. It is Intel-specific so, hopefully, we don't have to deal with
other vendors trying to support their use cases.
Are you targeting the next merge window or this one?

cheers.

2021-02-18 22:43:05

by mark gross

Subject: Re: [PATCH v6 03/34] mailbox: vpu-ipc-mailbox: Add support for Intel VPU IPC mailbox

On Thu, Feb 18, 2021 at 03:09:55PM -0600, Jassi Brar wrote:
> On Thu, Feb 18, 2021 at 6:02 AM Alessandrelli, Daniele
> <[email protected]> wrote:
> >
> > Hi Jassi,
> >
> > Thank you very much for your feedback.
> >
> > On Sun, 2021-02-14 at 22:54 -0600, Jassi Brar wrote:
> > > IIUIC, maybe the solution is simpler .... What if we set txdone_poll.
> > > Always return success in send_data(). And check if we overflew the
> > > fifo in last_tx_done(). If we did overflow, try to rewrite the data
> > > and check again. Return true, if not overflew this time, otherwise
> > > return false so that mailbox api can ask us to try again in next
> > > last_tx_done(). This way we can do away with the tasklet and, more
> > > importantly, avoid send_data() failures and retries on clients' part.
> >
> > That's a clever solution to avoid the tasklet. The only issue for us is
> > the automatic TX retry from the controller. I understand that's
> > generally a desirable feature, but in our case we'd like the client to
> > have full control on re-transmission attempts.
> >
> > That's because some of our data is time-sensitive. For instance, when
> > we process frames from a video stream we prefer dropping a frame rather
> > than re-transmitting it and delaying the processing of the rest.
> >
> > Now, I understand that the client can set the 'tx_block' and 'tx_tout'
> > channel fields to specify how long it wishes to wait, but the problem
> > is that our (single) channel is shared between multiple applications
> > having different timing requirements. That's why we prefer to let
> > applications deal with re-transmissions.
> >
> > Given the above, do you think it's reasonable to leave the
> > implementation as it is now?
> > (from initial analysis, the tasklet doesn't seem to affect the
> > performance of our use cases significantly, so we are fine with it)
> >
> Yup. It is Intel-specific so, hopefully, we don't have to deal with
> other vendors trying to support their use cases.
> Are you targeting the next merge window or this one?
>

It's a bit late in the v5.12 merge window for most of the code in this patchset
at this point, but if you feel like getting this initial bit in, that would be
great.

I'm hoping we can get the rest of this series into linux-next after RC1.

--mark

2021-03-05 21:11:14

by Rob Herring

Subject: Re: [PATCH v6 19/34] xlink-core: Add xlink core device tree bindings

On Fri, Feb 12, 2021 at 02:22:49PM -0800, [email protected] wrote:
> From: Seamus Kelly <[email protected]>
>
> Add device tree bindings for keembay-xlink.
>
> Cc: Rob Herring <[email protected]>
> Cc: [email protected]
> Reviewed-by: Mark Gross <[email protected]>
> Signed-off-by: Mark Gross <[email protected]>
> Signed-off-by: Seamus Kelly <[email protected]>
> ---
> .../bindings/misc/intel,keembay-xlink.yaml | 29 +++++++++++++++++++
> 1 file changed, 29 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
>
> diff --git a/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml b/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
> new file mode 100644
> index 000000000000..5ac2e7fa5b5e
> --- /dev/null
> +++ b/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
> @@ -0,0 +1,29 @@
> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> +# Copyright (c) Intel Corporation. All rights reserved.
> +%YAML 1.2
> +---
> +$id: "http://devicetree.org/schemas/misc/intel,keembay-xlink.yaml#"
> +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
> +
> +title: Intel Keem Bay xlink
> +
> +maintainers:
> + - Seamus Kelly <[email protected]>
> +
> +description: |
> + The Keem Bay xlink driver enables the communication/control sub-system
> + for internal and external communications to the Intel Keem Bay SoC.
> +
> +properties:
> + compatible:
> + oneOf:
> + - items:
> + - const: intel,keembay-xlink
> +
> +additionalProperties: False
> +
> +examples:
> + - |
> + xlink {
> + compatible = "intel,keembay-xlink";

A node with only a compatible is almost always abusing DT just to
instantiate your driver.

Rob

2021-03-08 20:32:52

by mark gross

Subject: Re: [PATCH v6 19/34] xlink-core: Add xlink core device tree bindings

On Fri, Mar 05, 2021 at 03:03:00PM -0600, Rob Herring wrote:
> On Fri, Feb 12, 2021 at 02:22:49PM -0800, [email protected] wrote:
> > From: Seamus Kelly <[email protected]>
> >
> > Add device tree bindings for keembay-xlink.
> >
> > Cc: Rob Herring <[email protected]>
> > Cc: [email protected]
> > Reviewed-by: Mark Gross <[email protected]>
> > Signed-off-by: Mark Gross <[email protected]>
> > Signed-off-by: Seamus Kelly <[email protected]>
> > ---
> > .../bindings/misc/intel,keembay-xlink.yaml | 29 +++++++++++++++++++
> > 1 file changed, 29 insertions(+)
> > create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
> >
> > diff --git a/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml b/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
> > new file mode 100644
> > index 000000000000..5ac2e7fa5b5e
> > --- /dev/null
> > +++ b/Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
> > @@ -0,0 +1,29 @@
> > +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> > +# Copyright (c) Intel Corporation. All rights reserved.
> > +%YAML 1.2
> > +---
> > +$id: "http://devicetree.org/schemas/misc/intel,keembay-xlink.yaml#"
> > +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
> > +
> > +title: Intel Keem Bay xlink
> > +
> > +maintainers:
> > + - Seamus Kelly <[email protected]>
> > +
> > +description: |
> > + The Keem Bay xlink driver enables the communication/control sub-system
> > + for internal and external communications to the Intel Keem Bay SoC.
> > +
> > +properties:
> > + compatible:
> > + oneOf:
> > + - items:
> > + - const: intel,keembay-xlink
> > +
> > +additionalProperties: False
> > +
> > +examples:
> > + - |
> > + xlink {
> > + compatible = "intel,keembay-xlink";
>
> A node with only a compatible is almost always abusing DT just to
> instantiate your driver.

Is it normal to make drivers that would otherwise abuse DT in this way into
platform devices instead?

Any advice would be welcome and helpful.

thanks!

--mark

2021-04-13 07:18:03

by Dave Hansen

Subject: Re: [PATCH v6 19/34] xlink-core: Add xlink core device tree bindings

On 2/12/21 2:22 PM, [email protected] wrote:
> +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
> +# Copyright (c) Intel Corporation. All rights reserved.
> +%YAML 1.2
> +---
> +$id: "http://devicetree.org/schemas/misc/intel,keembay-xlink.yaml#"
> +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
> +
> +title: Intel Keem Bay xlink

Is there a specific reason this is dual licensed? If so, can you please
include information about the license choice in the next post's cover
letter?

If there is no specific reason for this contribution to be dual
licensed, please make it GPL-2.0 only.

2021-04-20 22:13:02

by Gross, Mark

Subject: RE: [PATCH v6 19/34] xlink-core: Add xlink core device tree bindings



> -----Original Message-----
> From: Dave Hansen <[email protected]>
> Sent: Monday, April 12, 2021 2:33 PM
> To: [email protected]; [email protected]; [email protected]; [email protected];
> [email protected]; [email protected];
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]; [email protected];
> [email protected]; [email protected]
> Cc: [email protected]; Kelly, Seamus <[email protected]>;
> [email protected]
> Subject: Re: [PATCH v6 19/34] xlink-core: Add xlink core device tree bindings
>
> On 2/12/21 2:22 PM, [email protected] wrote:
> > +# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) # Copyright (c)
> > +Intel Corporation. All rights reserved.
> > +%YAML 1.2
> > +---
> > +$id: "http://devicetree.org/schemas/misc/intel,keembay-xlink.yaml#"
> > +$schema: "http://devicetree.org/meta-schemas/core.yaml#"
> > +
> > +title: Intel Keem Bay xlink
>
> Is there a specific reason this is dual licensed? If so, can you please include
> information about the license choice in the next post's cover letter?
>
> If there is no specific reason for this contribution to be dual licensed, please make it
> GPL-2.0 only.
I will.

I'm just waiting on some testing before making the next post.

Thanks,
--mark

2021-07-09 18:18:15

by mark gross

Subject: Re: [PATCH v6 00/34] Intel Vision Processing base enabling

I want to thank you for all the work and feedback given while reviewing this
series of patches. Our upstream strategy for the VPU enabling has changed and
we are regrouping and redesigning this stack in a significant way.

To avoid legacy ABI compatibility issues with the redesigned stack, we are
dropping this set of patches. Other KMB changes that are not VPU related
will continue to be upstreamed.

Thanks again for all the valuable feedback and lessons given through the
reviews.

--mark

On Fri, Feb 12, 2021 at 02:22:30PM -0800, [email protected] wrote:
> From: Mark Gross <[email protected]>
>
> The Intel Vision Processing Unit (VPU) is an IP block that is showing up for
> the first time as part of the Keem Bay SOC. Keem Bay is a quad core A53 Arm
> SOC. It is designed to be used as a stand alone SOC as well as in an PCIe
> Vision Processing accelerator add in card.
>
> This 6th version of this patch set includes more updates to the xlink SMBUS
> patch, correcting some language in its Kconfig file.
>
> At the bottom of this cover letter is the delta between v5 and this version for
> easy review of the modifications.
>
> Thanks for looking at these and providing feedback.
>
>
> C, Udhayakumar (8):
> dt-bindings: misc: intel_tsens: Add tsens thermal bindings
> documentation
> misc: Tsens ARM host thermal driver.
> misc: Intel tsens IA host driver.
> Intel tsens i2c slave driver.
> misc:intel_tsens: Intel Keem Bay tsens driver.
> dt-bindings: misc: hddl_dev: Add hddl device management documentation
> misc: Hddl device management for local host
> misc: HDDL device management for IA host
>
> Daniele Alessandrelli (4):
> dt-bindings: mailbox: Add Intel VPU IPC mailbox bindings
> mailbox: vpu-ipc-mailbox: Add support for Intel VPU IPC mailbox
> dt-bindings: Add bindings for Keem Bay IPC driver
> keembay-ipc: Add Keem Bay IPC module
>
> Li, Tingqian (2):
> dt-bindings: misc: Add Keem Bay vpumgr
> misc: Add Keem Bay VPU manager
>
> Paul Murphy (2):
> dt-bindings: Add bindings for Keem Bay VPU IPC driver
> keembay-vpu-ipc: Add Keem Bay VPU IPC module
>
> Ramya P Karanth (1):
> Intel Keembay XLink SMBus driver
>
> Seamus Kelly (7):
> xlink-ipc: Add xlink ipc device tree bindings
> xlink-ipc: Add xlink ipc driver
> xlink-core: Add xlink core device tree bindings
> xlink-core: Add xlink core driver xLink
> xlink-core: Enable xlink protocol over pcie
> xlink-core: Enable VPU IP management and runtime control
> xlink-core: add async channel and events
>
> Srikanth Thokala (9):
> misc: xlink-pcie: Add documentation for XLink PCIe driver
> misc: xlink-pcie: lh: Add PCIe EPF driver for Local Host
> misc: xlink-pcie: lh: Add PCIe EP DMA functionality
> misc: xlink-pcie: lh: Add core communication logic
> misc: xlink-pcie: lh: Prepare changes for adding remote host driver
> misc: xlink-pcie: rh: Add PCIe EP driver for Remote Host
> misc: xlink-pcie: rh: Add core communication logic
> misc: xlink-pcie: Add XLink API interface
> misc: xlink-pcie: Add asynchronous event notification support for
> XLink
>
> mark gross (1):
> Add Vision Processing Unit (VPU) documentation.
>
> .../mailbox/intel,vpu-ipc-mailbox.yaml | 69 +
> .../bindings/misc/intel,hddl-client.yaml | 117 +
> .../bindings/misc/intel,intel-tsens.yaml | 122 +
> .../bindings/misc/intel,keembay-vpu-mgr.yaml | 48 +
> .../misc/intel,keembay-xlink-ipc.yaml | 51 +
> .../bindings/misc/intel,keembay-xlink.yaml | 29 +
> .../bindings/soc/intel/intel,keembay-ipc.yaml | 45 +
> .../soc/intel/intel,keembay-vpu-ipc.yaml | 143 ++
> Documentation/hwmon/index.rst | 2 +
> Documentation/hwmon/intel_tsens_host.rst | 71 +
> Documentation/hwmon/intel_tsens_sensor.rst | 67 +
> Documentation/i2c/busses/index.rst | 1 +
> .../i2c/busses/intel-xlink-smbus.rst | 71 +
> Documentation/index.rst | 1 +
> .../misc-devices/hddl_device_client.rst | 212 ++
> .../misc-devices/hddl_device_server.rst | 205 ++
> Documentation/misc-devices/index.rst | 2 +
> Documentation/vpu/index.rst | 20 +
> Documentation/vpu/vpu-stack-overview.rst | 270 +++
> Documentation/vpu/xlink-core.rst | 81 +
> Documentation/vpu/xlink-ipc.rst | 51 +
> Documentation/vpu/xlink-pcie.rst | 90 +
> MAINTAINERS | 54 +
> drivers/mailbox/Kconfig | 11 +
> drivers/mailbox/Makefile | 2 +
> drivers/mailbox/vpu-ipc-mailbox.c | 297 +++
> drivers/misc/Kconfig | 7 +
> drivers/misc/Makefile | 7 +
> drivers/misc/hddl_device/Kconfig | 26 +
> drivers/misc/hddl_device/Makefile | 7 +
> drivers/misc/hddl_device/hddl_device.c | 565 +++++
> drivers/misc/hddl_device/hddl_device_lh.c | 764 +++++++
> drivers/misc/hddl_device/hddl_device_rh.c | 837 +++++++
> drivers/misc/hddl_device/hddl_device_util.h | 52 +
> drivers/misc/intel_tsens/Kconfig | 54 +
> drivers/misc/intel_tsens/Makefile | 10 +
> drivers/misc/intel_tsens/intel_tsens_host.c | 352 +++
> drivers/misc/intel_tsens/intel_tsens_i2c.c | 119 +
> .../misc/intel_tsens/intel_tsens_thermal.c | 651 ++++++
> .../misc/intel_tsens/intel_tsens_thermal.h | 38 +
> drivers/misc/intel_tsens/keembay_thermal.c | 169 ++
> drivers/misc/intel_tsens/keembay_tsens.h | 366 +++
> drivers/misc/vpumgr/Kconfig | 9 +
> drivers/misc/vpumgr/Makefile | 3 +
> drivers/misc/vpumgr/vpu_common.h | 31 +
> drivers/misc/vpumgr/vpu_mgr.c | 370 +++
> drivers/misc/vpumgr/vpu_smm.c | 554 +++++
> drivers/misc/vpumgr/vpu_smm.h | 30 +
> drivers/misc/vpumgr/vpu_vcm.c | 584 +++++
> drivers/misc/vpumgr/vpu_vcm.h | 84 +
> drivers/misc/xlink-core/Kconfig | 33 +
> drivers/misc/xlink-core/Makefile | 5 +
> drivers/misc/xlink-core/xlink-core.c | 1331 +++++++++++
> drivers/misc/xlink-core/xlink-core.h | 25 +
> drivers/misc/xlink-core/xlink-defs.h | 181 ++
> drivers/misc/xlink-core/xlink-dispatcher.c | 436 ++++
> drivers/misc/xlink-core/xlink-dispatcher.h | 26 +
> drivers/misc/xlink-core/xlink-ioctl.c | 554 +++++
> drivers/misc/xlink-core/xlink-ioctl.h | 36 +
> drivers/misc/xlink-core/xlink-multiplexer.c | 1164 ++++++++++
> drivers/misc/xlink-core/xlink-multiplexer.h | 35 +
> drivers/misc/xlink-core/xlink-platform.c | 273 +++
> drivers/misc/xlink-core/xlink-platform.h | 65 +
> drivers/misc/xlink-ipc/Kconfig | 7 +
> drivers/misc/xlink-ipc/Makefile | 4 +
> drivers/misc/xlink-ipc/xlink-ipc.c | 878 +++++++
> drivers/misc/xlink-pcie/Kconfig | 20 +
> drivers/misc/xlink-pcie/Makefile | 2 +
> drivers/misc/xlink-pcie/common/core.h | 245 ++
> drivers/misc/xlink-pcie/common/interface.c | 124 +
> drivers/misc/xlink-pcie/common/util.c | 373 +++
> drivers/misc/xlink-pcie/common/util.h | 68 +
> drivers/misc/xlink-pcie/common/xpcie.h | 100 +
> drivers/misc/xlink-pcie/local_host/Makefile | 6 +
> drivers/misc/xlink-pcie/local_host/core.c | 817 +++++++
> drivers/misc/xlink-pcie/local_host/dma.c | 575 +++++
> drivers/misc/xlink-pcie/local_host/epf.c | 482 ++++
> drivers/misc/xlink-pcie/local_host/epf.h | 101 +
> drivers/misc/xlink-pcie/remote_host/Makefile | 6 +
> drivers/misc/xlink-pcie/remote_host/core.c | 621 +++++
> drivers/misc/xlink-pcie/remote_host/main.c | 93 +
> drivers/misc/xlink-pcie/remote_host/pci.c | 523 +++++
> drivers/misc/xlink-pcie/remote_host/pci.h | 65 +
> drivers/misc/xlink-smbus/Kconfig | 25 +
> drivers/misc/xlink-smbus/Makefile | 5 +
> drivers/misc/xlink-smbus/xlink-smbus.c | 467 ++++
> drivers/soc/Kconfig | 1 +
> drivers/soc/Makefile | 1 +
> drivers/soc/intel/Kconfig | 33 +
> drivers/soc/intel/Makefile | 5 +
> drivers/soc/intel/keembay-ipc.c | 1364 +++++++++++
> drivers/soc/intel/keembay-vpu-ipc.c | 2026 +++++++++++++++++
> include/linux/hddl_device.h | 153 ++
> include/linux/intel_tsens_host.h | 34 +
> include/linux/soc/intel/keembay-ipc.h | 30 +
> include/linux/soc/intel/keembay-vpu-ipc.h | 62 +
> include/linux/xlink-ipc.h | 48 +
> include/linux/xlink.h | 146 ++
> include/linux/xlink_drv_inf.h | 70 +
> include/uapi/misc/vpumgr.h | 64 +
> include/uapi/misc/xlink_uapi.h | 145 ++
> 101 files changed, 21774 insertions(+)
> create mode 100644 Documentation/devicetree/bindings/mailbox/intel,vpu-ipc-mailbox.yaml
> create mode 100644 Documentation/devicetree/bindings/misc/intel,hddl-client.yaml
> create mode 100644 Documentation/devicetree/bindings/misc/intel,intel-tsens.yaml
> create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-vpu-mgr.yaml
> create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink-ipc.yaml
> create mode 100644 Documentation/devicetree/bindings/misc/intel,keembay-xlink.yaml
> create mode 100644 Documentation/devicetree/bindings/soc/intel/intel,keembay-ipc.yaml
> create mode 100644 Documentation/devicetree/bindings/soc/intel/intel,keembay-vpu-ipc.yaml
> create mode 100644 Documentation/hwmon/intel_tsens_host.rst
> create mode 100644 Documentation/hwmon/intel_tsens_sensor.rst
> create mode 100644 Documentation/i2c/busses/intel-xlink-smbus.rst
> create mode 100644 Documentation/misc-devices/hddl_device_client.rst
> create mode 100644 Documentation/misc-devices/hddl_device_server.rst
> create mode 100644 Documentation/vpu/index.rst
> create mode 100644 Documentation/vpu/vpu-stack-overview.rst
> create mode 100644 Documentation/vpu/xlink-core.rst
> create mode 100644 Documentation/vpu/xlink-ipc.rst
> create mode 100644 Documentation/vpu/xlink-pcie.rst
> create mode 100644 drivers/mailbox/vpu-ipc-mailbox.c
> create mode 100644 drivers/misc/hddl_device/Kconfig
> create mode 100644 drivers/misc/hddl_device/Makefile
> create mode 100644 drivers/misc/hddl_device/hddl_device.c
> create mode 100644 drivers/misc/hddl_device/hddl_device_lh.c
> create mode 100644 drivers/misc/hddl_device/hddl_device_rh.c
> create mode 100644 drivers/misc/hddl_device/hddl_device_util.h
> create mode 100644 drivers/misc/intel_tsens/Kconfig
> create mode 100644 drivers/misc/intel_tsens/Makefile
> create mode 100644 drivers/misc/intel_tsens/intel_tsens_host.c
> create mode 100644 drivers/misc/intel_tsens/intel_tsens_i2c.c
> create mode 100644 drivers/misc/intel_tsens/intel_tsens_thermal.c
> create mode 100644 drivers/misc/intel_tsens/intel_tsens_thermal.h
> create mode 100644 drivers/misc/intel_tsens/keembay_thermal.c
> create mode 100644 drivers/misc/intel_tsens/keembay_tsens.h
> create mode 100644 drivers/misc/vpumgr/Kconfig
> create mode 100644 drivers/misc/vpumgr/Makefile
> create mode 100644 drivers/misc/vpumgr/vpu_common.h
> create mode 100644 drivers/misc/vpumgr/vpu_mgr.c
> create mode 100644 drivers/misc/vpumgr/vpu_smm.c
> create mode 100644 drivers/misc/vpumgr/vpu_smm.h
> create mode 100644 drivers/misc/vpumgr/vpu_vcm.c
> create mode 100644 drivers/misc/vpumgr/vpu_vcm.h
> create mode 100644 drivers/misc/xlink-core/Kconfig
> create mode 100644 drivers/misc/xlink-core/Makefile
> create mode 100644 drivers/misc/xlink-core/xlink-core.c
> create mode 100644 drivers/misc/xlink-core/xlink-core.h
> create mode 100644 drivers/misc/xlink-core/xlink-defs.h
> create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.c
> create mode 100644 drivers/misc/xlink-core/xlink-dispatcher.h
> create mode 100644 drivers/misc/xlink-core/xlink-ioctl.c
> create mode 100644 drivers/misc/xlink-core/xlink-ioctl.h
> create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.c
> create mode 100644 drivers/misc/xlink-core/xlink-multiplexer.h
> create mode 100644 drivers/misc/xlink-core/xlink-platform.c
> create mode 100644 drivers/misc/xlink-core/xlink-platform.h
> create mode 100644 drivers/misc/xlink-ipc/Kconfig
> create mode 100644 drivers/misc/xlink-ipc/Makefile
> create mode 100644 drivers/misc/xlink-ipc/xlink-ipc.c
> create mode 100644 drivers/misc/xlink-pcie/Kconfig
> create mode 100644 drivers/misc/xlink-pcie/Makefile
> create mode 100644 drivers/misc/xlink-pcie/common/core.h
> create mode 100644 drivers/misc/xlink-pcie/common/interface.c
> create mode 100644 drivers/misc/xlink-pcie/common/util.c
> create mode 100644 drivers/misc/xlink-pcie/common/util.h
> create mode 100644 drivers/misc/xlink-pcie/common/xpcie.h
> create mode 100644 drivers/misc/xlink-pcie/local_host/Makefile
> create mode 100644 drivers/misc/xlink-pcie/local_host/core.c
> create mode 100644 drivers/misc/xlink-pcie/local_host/dma.c
> create mode 100644 drivers/misc/xlink-pcie/local_host/epf.c
> create mode 100644 drivers/misc/xlink-pcie/local_host/epf.h
> create mode 100644 drivers/misc/xlink-pcie/remote_host/Makefile
> create mode 100644 drivers/misc/xlink-pcie/remote_host/core.c
> create mode 100644 drivers/misc/xlink-pcie/remote_host/main.c
> create mode 100644 drivers/misc/xlink-pcie/remote_host/pci.c
> create mode 100644 drivers/misc/xlink-pcie/remote_host/pci.h
> create mode 100644 drivers/misc/xlink-smbus/Kconfig
> create mode 100644 drivers/misc/xlink-smbus/Makefile
> create mode 100644 drivers/misc/xlink-smbus/xlink-smbus.c
> create mode 100644 drivers/soc/intel/Kconfig
> create mode 100644 drivers/soc/intel/Makefile
> create mode 100644 drivers/soc/intel/keembay-ipc.c
> create mode 100644 drivers/soc/intel/keembay-vpu-ipc.c
> create mode 100644 include/linux/hddl_device.h
> create mode 100644 include/linux/intel_tsens_host.h
> create mode 100644 include/linux/soc/intel/keembay-ipc.h
> create mode 100644 include/linux/soc/intel/keembay-vpu-ipc.h
> create mode 100644 include/linux/xlink-ipc.h
> create mode 100644 include/linux/xlink.h
> create mode 100644 include/linux/xlink_drv_inf.h
> create mode 100644 include/uapi/misc/vpumgr.h
> create mode 100644 include/uapi/misc/xlink_uapi.h
>
> --
> 2.17.1
>
> diff --git a/drivers/misc/xlink-smbus/Kconfig b/drivers/misc/xlink-smbus/Kconfig
> index e6cdf8b9a096..8d2451f8d1be 100644
> --- a/drivers/misc/xlink-smbus/Kconfig
> +++ b/drivers/misc/xlink-smbus/Kconfig
> @@ -2,25 +2,24 @@
> # SPDX-License-Identifier: GPL-2.0-only
>
> config XLINK_SMBUS
> - tristate "Enable smbus interface over Xlink PCIe"
> + tristate "Enable SMBUS interface over Xlink PCIe"
> depends on XLINK_CORE
> depends on HDDL_DEVICE_CLIENT || HDDL_DEVICE_SERVER
> help
> - Enable xlink-pcie as i2c adapter both slave and master. The server
> - (Remote Host) will use this interface to get sensor data from the soc
> - (vision accelerator - Local Host) which is connected over PCIe.
> - This driver is loaded on both Remote Host and Local Host.
> - Select M to compile the driver as a module, name is xlink-smbus.
> - If unsure, select N.
> -
> + Enable xlink-pcie as I2C adapter both slave and master. The server
> + (Remote Host) will use this interface to get sensor data from the SoC
> + (vision accelerator - Local Host) which is connected over PCIe.
> + This driver is loaded on both Remote Host and Local Host.
> + Select M to compile the driver as a module, name is xlink-smbus.
> + If unsure, select N.
>
> config XLINK_SMBUS_PROXY
> tristate "Enable SMBUS adapter as proxy for I2C controller"
> depends on XLINK_CORE
> depends on XLINK_SMBUS
> help
> - Enable this config when SMBUS adapter is acting as proxy for
> - another I2C controller.
> - Select M or Y if building for Intel Vision Processing Unit (VPU)
> - Local Host core.
> - Select N, if building for a Remote Host kernel.
> + Enable this config when SMBUS adapter is acting as proxy for
> + another I2C controller.
> + Select M or Y if building for Intel Vision Processing Unit (VPU)
> + Local Host core.
> + Select N, if building for a Remote Host kernel.
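For reviewers who want to try the SMBus pieces of this series, the Kconfig symbols in the diff above could be enabled in a kernel .config roughly as follows. This is only a sketch based on the dependencies shown (`XLINK_CORE`, `HDDL_DEVICE_CLIENT`/`HDDL_DEVICE_SERVER`); the mapping of the client option to the Local Host and the server option to the Remote Host is my reading of the help text and may need adjusting for a given board.

```
# Local Host (VPU accelerator SoC) build, per the proxy option's help text:
CONFIG_XLINK_CORE=m
CONFIG_HDDL_DEVICE_CLIENT=m
CONFIG_XLINK_SMBUS=m
CONFIG_XLINK_SMBUS_PROXY=m

# Remote Host (IA server) build, where the proxy is not wanted:
CONFIG_XLINK_CORE=m
CONFIG_HDDL_DEVICE_SERVER=m
CONFIG_XLINK_SMBUS=m
# CONFIG_XLINK_SMBUS_PROXY is not set
```

With `=m`, both hosts end up with a loadable module named xlink-smbus, as the help text states; the proxy option only changes behavior on the Local Host side.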