Hello,
This series adds initial support for the Qualcomm-specific Modem Host Interface
(MHI) bus in endpoint devices such as SDX55 modems. The MHI bus in endpoint
devices communicates with the MHI bus in host machines (e.g., x86) over a
physical bus such as PCIe for data connectivity. The MHI host support is
already in mainline [1] and has been used by PCIe-based modems and WLAN
devices running vendor code (downstream).
Overview
========
This series aims at adding the MHI support in the endpoint devices with the goal
of getting data connectivity using the mainline kernel running on the modems.
Modems here refer to the combination of an APPS processor (Cortex A grade) and
a baseband processor (DSP). The MHI bus is located in the APPS processor and it
transfers data packets from the baseband processor to the host machine.
The MHI Endpoint (MHI EP) stack proposed here is inspired by the downstream
code written by Qualcomm, but the complete stack has been mostly rewritten to
adapt to the "bus" framework and made modular so that it can work with
upstream subsystems like "PCI Endpoint". The code structure of the MHI
endpoint stack follows the MHI host stack to maintain uniformity.
With this initial MHI EP stack (along with a few other drivers), we can
establish the network interface between host and endpoint over the MHI
software channels (IP_SW0) and can do things like IP forwarding, SSH, etc.
Stack Organization
==================
The MHI EP stack has the concept of controller and device drivers, just like
the MHI host stack. The MHI EP controller driver can be a PCI Endpoint
Function driver and the MHI device driver can be an MHI EP networking driver
or QRTR driver. The MHI EP controller driver is tied to the PCI Endpoint
subsystem and handles all bus-related activities like mapping the host
memory, raising IRQs, passing link-specific events, etc. The MHI EP
networking driver is tied to the networking stack and handles all
networking-related activities like sending/receiving SKBs from netdev,
statistics collection, etc.
This series only contains the MHI EP code; the PCIe EPF driver and MHI EP
networking drivers are not yet submitted and can be found here [2]. Though
the MHI EP stack has no build-time dependency on them, it cannot function
without them.
Test setup
==========
This series has been tested on the Telit FN980 TLB board powered by the
Qualcomm SDX55 (a.k.a. X55 modem), interfaced to the 96Boards Poplar board
(ARM64 host) over PCIe.
Limitations
===========
We cannot _yet_ get data packets from the modem, as that involves integrating
the Qualcomm IP Accelerator (IPA) with the MHI endpoint stack. But we are
planning to add support for it in the coming days.
References
==========
MHI bus: https://www.kernel.org/doc/html/latest/mhi/mhi.html
Linaro Connect presentation on this topic: https://connect.linaro.org/resources/lvc21f/lvc21f-222/
Thanks,
Mani
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/bus/mhi
[2] https://git.linaro.org/landing-teams/working/qualcomm/kernel.git/log/?h=tracking-qcomlt-sdx55-drivers
Manivannan Sadhasivam (20):
bus: mhi: Move host MHI code to "host" directory
bus: mhi: Move common MHI definitions out of host directory
bus: mhi: Make mhi_state_str[] array static const and move to common.h
bus: mhi: Cleanup the register definitions used in headers
bus: mhi: ep: Add support for registering MHI endpoint controllers
bus: mhi: ep: Add support for registering MHI endpoint client drivers
bus: mhi: ep: Add support for creating and destroying MHI EP devices
bus: mhi: ep: Add support for managing MMIO registers
bus: mhi: ep: Add support for ring management
bus: mhi: ep: Add support for sending events to the host
bus: mhi: ep: Add support for managing MHI state machine
bus: mhi: ep: Add support for processing MHI endpoint interrupts
bus: mhi: ep: Add support for powering up the MHI endpoint stack
bus: mhi: ep: Add support for powering down the MHI endpoint stack
bus: mhi: ep: Add support for handling MHI_RESET
bus: mhi: ep: Add support for handling SYS_ERR condition
bus: mhi: ep: Add support for processing command and TRE rings
bus: mhi: ep: Add support for queueing SKBs over MHI bus
bus: mhi: ep: Add support for suspending and resuming channels
bus: mhi: ep: Add uevent support for module autoloading
drivers/bus/Makefile | 2 +-
drivers/bus/mhi/Kconfig | 28 +-
drivers/bus/mhi/Makefile | 9 +-
drivers/bus/mhi/common.h | 284 ++++
drivers/bus/mhi/ep/Kconfig | 10 +
drivers/bus/mhi/ep/Makefile | 2 +
drivers/bus/mhi/ep/internal.h | 237 +++
drivers/bus/mhi/ep/main.c | 1674 +++++++++++++++++++++
drivers/bus/mhi/ep/mmio.c | 303 ++++
drivers/bus/mhi/ep/ring.c | 316 ++++
drivers/bus/mhi/ep/sm.c | 181 +++
drivers/bus/mhi/host/Kconfig | 31 +
drivers/bus/mhi/{core => host}/Makefile | 4 +-
drivers/bus/mhi/{core => host}/boot.c | 0
drivers/bus/mhi/{core => host}/debugfs.c | 0
drivers/bus/mhi/{core => host}/init.c | 12 -
drivers/bus/mhi/{core => host}/internal.h | 436 ++----
drivers/bus/mhi/{core => host}/main.c | 0
drivers/bus/mhi/{ => host}/pci_generic.c | 0
drivers/bus/mhi/{core => host}/pm.c | 0
include/linux/mhi_ep.h | 289 ++++
include/linux/mod_devicetable.h | 2 +
scripts/mod/file2alias.c | 10 +
23 files changed, 3448 insertions(+), 382 deletions(-)
create mode 100644 drivers/bus/mhi/common.h
create mode 100644 drivers/bus/mhi/ep/Kconfig
create mode 100644 drivers/bus/mhi/ep/Makefile
create mode 100644 drivers/bus/mhi/ep/internal.h
create mode 100644 drivers/bus/mhi/ep/main.c
create mode 100644 drivers/bus/mhi/ep/mmio.c
create mode 100644 drivers/bus/mhi/ep/ring.c
create mode 100644 drivers/bus/mhi/ep/sm.c
create mode 100644 drivers/bus/mhi/host/Kconfig
rename drivers/bus/mhi/{core => host}/Makefile (54%)
rename drivers/bus/mhi/{core => host}/boot.c (100%)
rename drivers/bus/mhi/{core => host}/debugfs.c (100%)
rename drivers/bus/mhi/{core => host}/init.c (99%)
rename drivers/bus/mhi/{core => host}/internal.h (51%)
rename drivers/bus/mhi/{core => host}/main.c (100%)
rename drivers/bus/mhi/{ => host}/pci_generic.c (100%)
rename drivers/bus/mhi/{core => host}/pm.c (100%)
create mode 100644 include/linux/mhi_ep.h
--
2.25.1
In preparation for the endpoint MHI support, let's move the host MHI code
to its own "host" directory and adjust the top-level MHI Kconfig & Makefile.
While at it, let's also move the "pci_generic" driver to the "host" directory
as it is a host MHI controller driver.
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/Makefile | 2 +-
drivers/bus/mhi/Kconfig | 27 ++------------------
drivers/bus/mhi/Makefile | 8 ++----
drivers/bus/mhi/host/Kconfig | 31 +++++++++++++++++++++++
drivers/bus/mhi/{core => host}/Makefile | 4 ++-
drivers/bus/mhi/{core => host}/boot.c | 0
drivers/bus/mhi/{core => host}/debugfs.c | 0
drivers/bus/mhi/{core => host}/init.c | 0
drivers/bus/mhi/{core => host}/internal.h | 0
drivers/bus/mhi/{core => host}/main.c | 0
drivers/bus/mhi/{ => host}/pci_generic.c | 0
drivers/bus/mhi/{core => host}/pm.c | 0
12 files changed, 39 insertions(+), 33 deletions(-)
create mode 100644 drivers/bus/mhi/host/Kconfig
rename drivers/bus/mhi/{core => host}/Makefile (54%)
rename drivers/bus/mhi/{core => host}/boot.c (100%)
rename drivers/bus/mhi/{core => host}/debugfs.c (100%)
rename drivers/bus/mhi/{core => host}/init.c (100%)
rename drivers/bus/mhi/{core => host}/internal.h (100%)
rename drivers/bus/mhi/{core => host}/main.c (100%)
rename drivers/bus/mhi/{ => host}/pci_generic.c (100%)
rename drivers/bus/mhi/{core => host}/pm.c (100%)
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 52c2f35a26a9..16da51130d1a 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -39,4 +39,4 @@ obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
obj-$(CONFIG_DA8XX_MSTPRI) += da8xx-mstpri.o
# MHI
-obj-$(CONFIG_MHI_BUS) += mhi/
+obj-y += mhi/
diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
index da5cd0c9fc62..4748df7f9cd5 100644
--- a/drivers/bus/mhi/Kconfig
+++ b/drivers/bus/mhi/Kconfig
@@ -2,30 +2,7 @@
#
# MHI bus
#
-# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+# Copyright (c) 2021, Linaro Ltd.
#
-config MHI_BUS
- tristate "Modem Host Interface (MHI) bus"
- help
- Bus driver for MHI protocol. Modem Host Interface (MHI) is a
- communication protocol used by the host processors to control
- and communicate with modem devices over a high speed peripheral
- bus or shared memory.
-
-config MHI_BUS_DEBUG
- bool "Debugfs support for the MHI bus"
- depends on MHI_BUS && DEBUG_FS
- help
- Enable debugfs support for use with the MHI transport. Allows
- reading and/or modifying some values within the MHI controller
- for debug and test purposes.
-
-config MHI_BUS_PCI_GENERIC
- tristate "MHI PCI controller driver"
- depends on MHI_BUS
- depends on PCI
- help
- This driver provides MHI PCI controller driver for devices such as
- Qualcomm SDX55 based PCIe modems.
-
+source "drivers/bus/mhi/host/Kconfig"
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
index 0a2d778d6fb4..5f5708a249f5 100644
--- a/drivers/bus/mhi/Makefile
+++ b/drivers/bus/mhi/Makefile
@@ -1,6 +1,2 @@
-# core layer
-obj-y += core/
-
-obj-$(CONFIG_MHI_BUS_PCI_GENERIC) += mhi_pci_generic.o
-mhi_pci_generic-y += pci_generic.o
-
+# Host MHI stack
+obj-y += host/
diff --git a/drivers/bus/mhi/host/Kconfig b/drivers/bus/mhi/host/Kconfig
new file mode 100644
index 000000000000..da5cd0c9fc62
--- /dev/null
+++ b/drivers/bus/mhi/host/Kconfig
@@ -0,0 +1,31 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# MHI bus
+#
+# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+#
+
+config MHI_BUS
+ tristate "Modem Host Interface (MHI) bus"
+ help
+ Bus driver for MHI protocol. Modem Host Interface (MHI) is a
+ communication protocol used by the host processors to control
+ and communicate with modem devices over a high speed peripheral
+ bus or shared memory.
+
+config MHI_BUS_DEBUG
+ bool "Debugfs support for the MHI bus"
+ depends on MHI_BUS && DEBUG_FS
+ help
+ Enable debugfs support for use with the MHI transport. Allows
+ reading and/or modifying some values within the MHI controller
+ for debug and test purposes.
+
+config MHI_BUS_PCI_GENERIC
+ tristate "MHI PCI controller driver"
+ depends on MHI_BUS
+ depends on PCI
+ help
+ This driver provides MHI PCI controller driver for devices such as
+ Qualcomm SDX55 based PCIe modems.
+
diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/host/Makefile
similarity index 54%
rename from drivers/bus/mhi/core/Makefile
rename to drivers/bus/mhi/host/Makefile
index c3feb4130aa3..859c2f38451c 100644
--- a/drivers/bus/mhi/core/Makefile
+++ b/drivers/bus/mhi/host/Makefile
@@ -1,4 +1,6 @@
obj-$(CONFIG_MHI_BUS) += mhi.o
-
mhi-y := init.o main.o pm.o boot.o
mhi-$(CONFIG_MHI_BUS_DEBUG) += debugfs.o
+
+obj-$(CONFIG_MHI_BUS_PCI_GENERIC) += mhi_pci_generic.o
+mhi_pci_generic-y += pci_generic.o
diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/host/boot.c
similarity index 100%
rename from drivers/bus/mhi/core/boot.c
rename to drivers/bus/mhi/host/boot.c
diff --git a/drivers/bus/mhi/core/debugfs.c b/drivers/bus/mhi/host/debugfs.c
similarity index 100%
rename from drivers/bus/mhi/core/debugfs.c
rename to drivers/bus/mhi/host/debugfs.c
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/host/init.c
similarity index 100%
rename from drivers/bus/mhi/core/init.c
rename to drivers/bus/mhi/host/init.c
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/host/internal.h
similarity index 100%
rename from drivers/bus/mhi/core/internal.h
rename to drivers/bus/mhi/host/internal.h
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/host/main.c
similarity index 100%
rename from drivers/bus/mhi/core/main.c
rename to drivers/bus/mhi/host/main.c
diff --git a/drivers/bus/mhi/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
similarity index 100%
rename from drivers/bus/mhi/pci_generic.c
rename to drivers/bus/mhi/host/pci_generic.c
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/host/pm.c
similarity index 100%
rename from drivers/bus/mhi/core/pm.c
rename to drivers/bus/mhi/host/pm.c
--
2.25.1
The mhi_state_str[] array could also be used by the MHI endpoint stack. So
let's make the array "static const" and move it inside the "common.h" header
so that the endpoint stack can also make use of it. Otherwise, the definition
would need to be present in both the host and endpoint stacks, resulting in
duplication.
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/common.h | 13 ++++++++++++-
drivers/bus/mhi/host/init.c | 12 ------------
2 files changed, 12 insertions(+), 13 deletions(-)
diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
index 0f4f3b9f3027..2ea438205617 100644
--- a/drivers/bus/mhi/common.h
+++ b/drivers/bus/mhi/common.h
@@ -174,7 +174,18 @@ struct mhi_cmd_ctxt {
__u64 wp __packed __aligned(4);
};
-extern const char * const mhi_state_str[MHI_STATE_MAX];
+static const char * const mhi_state_str[MHI_STATE_MAX] = {
+ [MHI_STATE_RESET] = "RESET",
+ [MHI_STATE_READY] = "READY",
+ [MHI_STATE_M0] = "M0",
+ [MHI_STATE_M1] = "M1",
+ [MHI_STATE_M2] = "M2",
+ [MHI_STATE_M3] = "M3",
+ [MHI_STATE_M3_FAST] = "M3 FAST",
+ [MHI_STATE_BHI] = "BHI",
+ [MHI_STATE_SYS_ERR] = "SYS ERROR",
+};
+
#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
!mhi_state_str[state]) ? \
"INVALID_STATE" : mhi_state_str[state])
diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
index 5aaca6d0f52b..fa904e7468d8 100644
--- a/drivers/bus/mhi/host/init.c
+++ b/drivers/bus/mhi/host/init.c
@@ -44,18 +44,6 @@ const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
[DEV_ST_TRANSITION_DISABLE] = "DISABLE",
};
-const char * const mhi_state_str[MHI_STATE_MAX] = {
- [MHI_STATE_RESET] = "RESET",
- [MHI_STATE_READY] = "READY",
- [MHI_STATE_M0] = "M0",
- [MHI_STATE_M1] = "M1",
- [MHI_STATE_M2] = "M2",
- [MHI_STATE_M3] = "M3",
- [MHI_STATE_M3_FAST] = "M3 FAST",
- [MHI_STATE_BHI] = "BHI",
- [MHI_STATE_SYS_ERR] = "SYS ERROR",
-};
-
const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX] = {
[MHI_CH_STATE_TYPE_RESET] = "RESET",
[MHI_CH_STATE_TYPE_STOP] = "STOP",
--
2.25.1
Move the common MHI definitions in the host "internal.h" to "common.h" so
that the endpoint code can make use of them. This also avoids duplicating
the definitions in the endpoint stack.
The MHI register definitions are not moved yet, since the offsets vary
between host and endpoint.
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/common.h | 182 ++++++++++++++++++++++++++++++++
drivers/bus/mhi/host/internal.h | 154 +--------------------------
2 files changed, 183 insertions(+), 153 deletions(-)
create mode 100644 drivers/bus/mhi/common.h
diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
new file mode 100644
index 000000000000..0f4f3b9f3027
--- /dev/null
+++ b/drivers/bus/mhi/common.h
@@ -0,0 +1,182 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2021, Linaro Ltd.
+ *
+ */
+
+#ifndef _MHI_COMMON_H
+#define _MHI_COMMON_H
+
+#include <linux/mhi.h>
+
+/* Command Ring Element macros */
+/* No operation command */
+#define MHI_TRE_CMD_NOOP_PTR (0)
+#define MHI_TRE_CMD_NOOP_DWORD0 (0)
+#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
+
+/* Channel reset command */
+#define MHI_TRE_CMD_RESET_PTR (0)
+#define MHI_TRE_CMD_RESET_DWORD0 (0)
+#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
+ (MHI_CMD_RESET_CHAN << 16))
+
+/* Channel stop command */
+#define MHI_TRE_CMD_STOP_PTR (0)
+#define MHI_TRE_CMD_STOP_DWORD0 (0)
+#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
+ (MHI_CMD_STOP_CHAN << 16))
+
+/* Channel start command */
+#define MHI_TRE_CMD_START_PTR (0)
+#define MHI_TRE_CMD_START_DWORD0 (0)
+#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
+ (MHI_CMD_START_CHAN << 16))
+
+#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+
+/* Event descriptor macros */
+/* Transfer completion event */
+#define MHI_TRE_EV_PTR(ptr) (ptr)
+#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
+#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
+#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
+#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
+#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
+#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
+#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
+#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
+
+/* State change event */
+#define MHI_SC_EV_PTR 0
+#define MHI_SC_EV_DWORD0(state) (state << 24)
+#define MHI_SC_EV_DWORD1(type) (type << 16)
+
+/* EE event */
+#define MHI_EE_EV_PTR 0
+#define MHI_EE_EV_DWORD0(ee) (ee << 24)
+#define MHI_EE_EV_DWORD1(type) (type << 16)
+
+/* Command Completion event */
+#define MHI_CC_EV_PTR(ptr) (ptr)
+#define MHI_CC_EV_DWORD0(code) (code << 24)
+#define MHI_CC_EV_DWORD1(type) (type << 16)
+
+/* Transfer descriptor macros */
+#define MHI_TRE_DATA_PTR(ptr) (ptr)
+#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
+#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
+ | (ieot << 9) | (ieob << 8) | chain)
+
+/* RSC transfer descriptor macros */
+#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
+#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
+#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
+
+enum mhi_pkt_type {
+ MHI_PKT_TYPE_INVALID = 0x0,
+ MHI_PKT_TYPE_NOOP_CMD = 0x1,
+ MHI_PKT_TYPE_TRANSFER = 0x2,
+ MHI_PKT_TYPE_COALESCING = 0x8,
+ MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
+ MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
+ MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
+ MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
+ MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
+ MHI_PKT_TYPE_TX_EVENT = 0x22,
+ MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
+ MHI_PKT_TYPE_EE_EVENT = 0x40,
+ MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
+ MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
+ MHI_PKT_TYPE_STALE_EVENT, /* internal event */
+};
+
+/* MHI transfer completion events */
+enum mhi_ev_ccs {
+ MHI_EV_CC_INVALID = 0x0,
+ MHI_EV_CC_SUCCESS = 0x1,
+ MHI_EV_CC_EOT = 0x2, /* End of transfer event */
+ MHI_EV_CC_OVERFLOW = 0x3,
+ MHI_EV_CC_EOB = 0x4, /* End of block event */
+ MHI_EV_CC_OOB = 0x5, /* Out of block event */
+ MHI_EV_CC_DB_MODE = 0x6,
+ MHI_EV_CC_UNDEFINED_ERR = 0x10,
+ MHI_EV_CC_BAD_TRE = 0x11,
+};
+
+/* Channel state */
+enum mhi_ch_state {
+ MHI_CH_STATE_DISABLED,
+ MHI_CH_STATE_ENABLED,
+ MHI_CH_STATE_RUNNING,
+ MHI_CH_STATE_SUSPENDED,
+ MHI_CH_STATE_STOP,
+ MHI_CH_STATE_ERROR,
+};
+
+enum mhi_cmd_type {
+ MHI_CMD_NOP = 1,
+ MHI_CMD_RESET_CHAN = 16,
+ MHI_CMD_STOP_CHAN = 17,
+ MHI_CMD_START_CHAN = 18,
+};
+
+#define EV_CTX_RESERVED_MASK GENMASK(7, 0)
+#define EV_CTX_INTMODC_MASK GENMASK(15, 8)
+#define EV_CTX_INTMODC_SHIFT 8
+#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
+#define EV_CTX_INTMODT_SHIFT 16
+struct mhi_event_ctxt {
+ __u32 intmod;
+ __u32 ertype;
+ __u32 msivec;
+
+ __u64 rbase __packed __aligned(4);
+ __u64 rlen __packed __aligned(4);
+ __u64 rp __packed __aligned(4);
+ __u64 wp __packed __aligned(4);
+};
+
+#define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
+#define CHAN_CTX_CHSTATE_SHIFT 0
+#define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
+#define CHAN_CTX_BRSTMODE_SHIFT 8
+#define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
+#define CHAN_CTX_POLLCFG_SHIFT 10
+#define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
+struct mhi_chan_ctxt {
+ __u32 chcfg;
+ __u32 chtype;
+ __u32 erindex;
+
+ __u64 rbase __packed __aligned(4);
+ __u64 rlen __packed __aligned(4);
+ __u64 rp __packed __aligned(4);
+ __u64 wp __packed __aligned(4);
+};
+
+struct mhi_cmd_ctxt {
+ __u32 reserved0;
+ __u32 reserved1;
+ __u32 reserved2;
+
+ __u64 rbase __packed __aligned(4);
+ __u64 rlen __packed __aligned(4);
+ __u64 rp __packed __aligned(4);
+ __u64 wp __packed __aligned(4);
+};
+
+extern const char * const mhi_state_str[MHI_STATE_MAX];
+#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
+ !mhi_state_str[state]) ? \
+ "INVALID_STATE" : mhi_state_str[state])
+
+#endif /* _MHI_COMMON_H */
diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
index 3a732afaf73e..a324a76684d0 100644
--- a/drivers/bus/mhi/host/internal.h
+++ b/drivers/bus/mhi/host/internal.h
@@ -7,7 +7,7 @@
#ifndef _MHI_INT_H
#define _MHI_INT_H
-#include <linux/mhi.h>
+#include "../common.h"
extern struct bus_type mhi_bus_type;
@@ -203,51 +203,6 @@ extern struct bus_type mhi_bus_type;
#define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
#define SOC_HW_VERSION_MINOR_VER_SHFT (0)
-#define EV_CTX_RESERVED_MASK GENMASK(7, 0)
-#define EV_CTX_INTMODC_MASK GENMASK(15, 8)
-#define EV_CTX_INTMODC_SHIFT 8
-#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
-#define EV_CTX_INTMODT_SHIFT 16
-struct mhi_event_ctxt {
- __u32 intmod;
- __u32 ertype;
- __u32 msivec;
-
- __u64 rbase __packed __aligned(4);
- __u64 rlen __packed __aligned(4);
- __u64 rp __packed __aligned(4);
- __u64 wp __packed __aligned(4);
-};
-
-#define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
-#define CHAN_CTX_CHSTATE_SHIFT 0
-#define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
-#define CHAN_CTX_BRSTMODE_SHIFT 8
-#define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
-#define CHAN_CTX_POLLCFG_SHIFT 10
-#define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
-struct mhi_chan_ctxt {
- __u32 chcfg;
- __u32 chtype;
- __u32 erindex;
-
- __u64 rbase __packed __aligned(4);
- __u64 rlen __packed __aligned(4);
- __u64 rp __packed __aligned(4);
- __u64 wp __packed __aligned(4);
-};
-
-struct mhi_cmd_ctxt {
- __u32 reserved0;
- __u32 reserved1;
- __u32 reserved2;
-
- __u64 rbase __packed __aligned(4);
- __u64 rlen __packed __aligned(4);
- __u64 rp __packed __aligned(4);
- __u64 wp __packed __aligned(4);
-};
-
struct mhi_ctxt {
struct mhi_event_ctxt *er_ctxt;
struct mhi_chan_ctxt *chan_ctxt;
@@ -267,108 +222,6 @@ struct bhi_vec_entry {
u64 size;
};
-enum mhi_cmd_type {
- MHI_CMD_NOP = 1,
- MHI_CMD_RESET_CHAN = 16,
- MHI_CMD_STOP_CHAN = 17,
- MHI_CMD_START_CHAN = 18,
-};
-
-/* No operation command */
-#define MHI_TRE_CMD_NOOP_PTR (0)
-#define MHI_TRE_CMD_NOOP_DWORD0 (0)
-#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
-
-/* Channel reset command */
-#define MHI_TRE_CMD_RESET_PTR (0)
-#define MHI_TRE_CMD_RESET_DWORD0 (0)
-#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
- (MHI_CMD_RESET_CHAN << 16))
-
-/* Channel stop command */
-#define MHI_TRE_CMD_STOP_PTR (0)
-#define MHI_TRE_CMD_STOP_DWORD0 (0)
-#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
- (MHI_CMD_STOP_CHAN << 16))
-
-/* Channel start command */
-#define MHI_TRE_CMD_START_PTR (0)
-#define MHI_TRE_CMD_START_DWORD0 (0)
-#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
- (MHI_CMD_START_CHAN << 16))
-
-#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
-#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
-
-/* Event descriptor macros */
-#define MHI_TRE_EV_PTR(ptr) (ptr)
-#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
-#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
-#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
-#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
-#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
-#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
-#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
-#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
-
-/* Transfer descriptor macros */
-#define MHI_TRE_DATA_PTR(ptr) (ptr)
-#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
-#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
- | (ieot << 9) | (ieob << 8) | chain)
-
-/* RSC transfer descriptor macros */
-#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
-#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
-#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
-
-enum mhi_pkt_type {
- MHI_PKT_TYPE_INVALID = 0x0,
- MHI_PKT_TYPE_NOOP_CMD = 0x1,
- MHI_PKT_TYPE_TRANSFER = 0x2,
- MHI_PKT_TYPE_COALESCING = 0x8,
- MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
- MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
- MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
- MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
- MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
- MHI_PKT_TYPE_TX_EVENT = 0x22,
- MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
- MHI_PKT_TYPE_EE_EVENT = 0x40,
- MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
- MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
- MHI_PKT_TYPE_STALE_EVENT, /* internal event */
-};
-
-/* MHI transfer completion events */
-enum mhi_ev_ccs {
- MHI_EV_CC_INVALID = 0x0,
- MHI_EV_CC_SUCCESS = 0x1,
- MHI_EV_CC_EOT = 0x2, /* End of transfer event */
- MHI_EV_CC_OVERFLOW = 0x3,
- MHI_EV_CC_EOB = 0x4, /* End of block event */
- MHI_EV_CC_OOB = 0x5, /* Out of block event */
- MHI_EV_CC_DB_MODE = 0x6,
- MHI_EV_CC_UNDEFINED_ERR = 0x10,
- MHI_EV_CC_BAD_TRE = 0x11,
-};
-
-enum mhi_ch_state {
- MHI_CH_STATE_DISABLED = 0x0,
- MHI_CH_STATE_ENABLED = 0x1,
- MHI_CH_STATE_RUNNING = 0x2,
- MHI_CH_STATE_SUSPENDED = 0x3,
- MHI_CH_STATE_STOP = 0x4,
- MHI_CH_STATE_ERROR = 0x5,
-};
-
enum mhi_ch_state_type {
MHI_CH_STATE_TYPE_RESET,
MHI_CH_STATE_TYPE_STOP,
@@ -409,11 +262,6 @@ extern const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX];
#define TO_DEV_STATE_TRANS_STR(state) (((state) >= DEV_ST_TRANSITION_MAX) ? \
"INVALID_STATE" : dev_state_tran_str[state])
-extern const char * const mhi_state_str[MHI_STATE_MAX];
-#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
- !mhi_state_str[state]) ? \
- "INVALID_STATE" : mhi_state_str[state])
-
/* internal power states */
enum mhi_pm_state {
MHI_PM_STATE_DISABLE,
--
2.25.1
The cleanup includes:
1. Moving the MHI register bit definitions to the common.h header (only the
register offsets differ between host and endpoint, not the bit definitions)
2. Using the GENMASK macro for masks
3. Removing brackets around single values
4. Using lowercase for hex values
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/common.h | 129 ++++++++++++---
drivers/bus/mhi/host/internal.h | 282 +++++++++++---------------------
2 files changed, 207 insertions(+), 204 deletions(-)
diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
index 2ea438205617..c1272d61e54e 100644
--- a/drivers/bus/mhi/common.h
+++ b/drivers/bus/mhi/common.h
@@ -9,32 +9,123 @@
#include <linux/mhi.h>
+/* MHI register bits */
+#define MHIREGLEN_MHIREGLEN_MASK GENMASK(31, 0)
+#define MHIREGLEN_MHIREGLEN_SHIFT 0
+
+#define MHIVER_MHIVER_MASK GENMASK(31, 0)
+#define MHIVER_MHIVER_SHIFT 0
+
+#define MHICFG_NHWER_MASK GENMASK(31, 24)
+#define MHICFG_NHWER_SHIFT 24
+#define MHICFG_NER_MASK GENMASK(23, 16)
+#define MHICFG_NER_SHIFT 16
+#define MHICFG_NHWCH_MASK GENMASK(15, 8)
+#define MHICFG_NHWCH_SHIFT 8
+#define MHICFG_NCH_MASK GENMASK(7, 0)
+#define MHICFG_NCH_SHIFT 0
+
+#define CHDBOFF_CHDBOFF_MASK GENMASK(31, 0)
+#define CHDBOFF_CHDBOFF_SHIFT 0
+
+#define ERDBOFF_ERDBOFF_MASK GENMASK(31, 0)
+#define ERDBOFF_ERDBOFF_SHIFT 0
+
+#define BHIOFF_BHIOFF_MASK GENMASK(31, 0)
+#define BHIOFF_BHIOFF_SHIFT 0
+
+#define BHIEOFF_BHIEOFF_MASK GENMASK(31, 0)
+#define BHIEOFF_BHIEOFF_SHIFT 0
+
+#define DEBUGOFF_DEBUGOFF_MASK GENMASK(31, 0)
+#define DEBUGOFF_DEBUGOFF_SHIFT 0
+
+#define MHICTRL_MHISTATE_MASK GENMASK(15, 8)
+#define MHICTRL_MHISTATE_SHIFT 8
+#define MHICTRL_RESET_MASK 2
+#define MHICTRL_RESET_SHIFT 1
+
+#define MHISTATUS_MHISTATE_MASK GENMASK(15, 8)
+#define MHISTATUS_MHISTATE_SHIFT 8
+#define MHISTATUS_SYSERR_MASK 4
+#define MHISTATUS_SYSERR_SHIFT 2
+#define MHISTATUS_READY_MASK 1
+#define MHISTATUS_READY_SHIFT 0
+
+#define CCABAP_LOWER_CCABAP_LOWER_MASK GENMASK(31, 0)
+#define CCABAP_LOWER_CCABAP_LOWER_SHIFT 0
+
+#define CCABAP_HIGHER_CCABAP_HIGHER_MASK GENMASK(31, 0)
+#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT 0
+
+#define ECABAP_LOWER_ECABAP_LOWER_MASK GENMASK(31, 0)
+#define ECABAP_LOWER_ECABAP_LOWER_SHIFT 0
+
+#define ECABAP_HIGHER_ECABAP_HIGHER_MASK GENMASK(31, 0)
+#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT 0
+
+#define CRCBAP_LOWER_CRCBAP_LOWER_MASK GENMASK(31, 0)
+#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT 0
+
+#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK GENMASK(31, 0)
+#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT 0
+
+#define CRDB_LOWER_CRDB_LOWER_MASK GENMASK(31, 0)
+#define CRDB_LOWER_CRDB_LOWER_SHIFT 0
+
+#define CRDB_HIGHER_CRDB_HIGHER_MASK GENMASK(31, 0)
+#define CRDB_HIGHER_CRDB_HIGHER_SHIFT 0
+
+#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK GENMASK(31, 0)
+#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT 0
+
+#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK GENMASK(31, 0)
+#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT 0
+
+#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK GENMASK(31, 0)
+#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT 0
+
+#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK GENMASK(31, 0)
+#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT 0
+
+#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK GENMASK(31, 0)
+#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT 0
+
+#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK GENMASK(31, 0)
+#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT 0
+
+#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK GENMASK(31, 0)
+#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT 0
+
+#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK GENMASK(31, 0)
+#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT 0
+
/* Command Ring Element macros */
/* No operation command */
-#define MHI_TRE_CMD_NOOP_PTR (0)
-#define MHI_TRE_CMD_NOOP_DWORD0 (0)
+#define MHI_TRE_CMD_NOOP_PTR 0
+#define MHI_TRE_CMD_NOOP_DWORD0 0
#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
/* Channel reset command */
-#define MHI_TRE_CMD_RESET_PTR (0)
-#define MHI_TRE_CMD_RESET_DWORD0 (0)
+#define MHI_TRE_CMD_RESET_PTR 0
+#define MHI_TRE_CMD_RESET_DWORD0 0
#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
(MHI_CMD_RESET_CHAN << 16))
/* Channel stop command */
-#define MHI_TRE_CMD_STOP_PTR (0)
-#define MHI_TRE_CMD_STOP_DWORD0 (0)
+#define MHI_TRE_CMD_STOP_PTR 0
+#define MHI_TRE_CMD_STOP_DWORD0 0
#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
(MHI_CMD_STOP_CHAN << 16))
/* Channel start command */
-#define MHI_TRE_CMD_START_PTR (0)
-#define MHI_TRE_CMD_START_DWORD0 (0)
+#define MHI_TRE_CMD_START_PTR 0
+#define MHI_TRE_CMD_START_DWORD0 0
#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
(MHI_CMD_START_CHAN << 16))
-#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
-#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xff)
+#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xff)
/* Event descriptor macros */
/* Transfer completion event */
@@ -42,18 +133,18 @@
#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
-#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
-#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xff)
+#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xffff)
+#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xff)
+#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xff)
+#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xff)
+#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xff)
#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
-#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
-#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
-#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
+#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xff)
+#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xff)
+#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xff)
/* State change event */
#define MHI_SC_EV_PTR 0
diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
index a324a76684d0..c882245b9133 100644
--- a/drivers/bus/mhi/host/internal.h
+++ b/drivers/bus/mhi/host/internal.h
@@ -11,197 +11,109 @@
extern struct bus_type mhi_bus_type;
-#define MHIREGLEN (0x0)
-#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
-#define MHIREGLEN_MHIREGLEN_SHIFT (0)
-
-#define MHIVER (0x8)
-#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
-#define MHIVER_MHIVER_SHIFT (0)
-
-#define MHICFG (0x10)
-#define MHICFG_NHWER_MASK (0xFF000000)
-#define MHICFG_NHWER_SHIFT (24)
-#define MHICFG_NER_MASK (0xFF0000)
-#define MHICFG_NER_SHIFT (16)
-#define MHICFG_NHWCH_MASK (0xFF00)
-#define MHICFG_NHWCH_SHIFT (8)
-#define MHICFG_NCH_MASK (0xFF)
-#define MHICFG_NCH_SHIFT (0)
-
-#define CHDBOFF (0x18)
-#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
-#define CHDBOFF_CHDBOFF_SHIFT (0)
-
-#define ERDBOFF (0x20)
-#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
-#define ERDBOFF_ERDBOFF_SHIFT (0)
-
-#define BHIOFF (0x28)
-#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
-#define BHIOFF_BHIOFF_SHIFT (0)
-
-#define BHIEOFF (0x2C)
-#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
-#define BHIEOFF_BHIEOFF_SHIFT (0)
-
-#define DEBUGOFF (0x30)
-#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
-#define DEBUGOFF_DEBUGOFF_SHIFT (0)
-
-#define MHICTRL (0x38)
-#define MHICTRL_MHISTATE_MASK (0x0000FF00)
-#define MHICTRL_MHISTATE_SHIFT (8)
-#define MHICTRL_RESET_MASK (0x2)
-#define MHICTRL_RESET_SHIFT (1)
-
-#define MHISTATUS (0x48)
-#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
-#define MHISTATUS_MHISTATE_SHIFT (8)
-#define MHISTATUS_SYSERR_MASK (0x4)
-#define MHISTATUS_SYSERR_SHIFT (2)
-#define MHISTATUS_READY_MASK (0x1)
-#define MHISTATUS_READY_SHIFT (0)
-
-#define CCABAP_LOWER (0x58)
-#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
-#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
-
-#define CCABAP_HIGHER (0x5C)
-#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
-#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
-
-#define ECABAP_LOWER (0x60)
-#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
-#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
-
-#define ECABAP_HIGHER (0x64)
-#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
-#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
-
-#define CRCBAP_LOWER (0x68)
-#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
-#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
-
-#define CRCBAP_HIGHER (0x6C)
-#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
-#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
-
-#define CRDB_LOWER (0x70)
-#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
-#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
-
-#define CRDB_HIGHER (0x74)
-#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
-#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
-
-#define MHICTRLBASE_LOWER (0x80)
-#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
-#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
-
-#define MHICTRLBASE_HIGHER (0x84)
-#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
-#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
-
-#define MHICTRLLIMIT_LOWER (0x88)
-#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
-#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
-
-#define MHICTRLLIMIT_HIGHER (0x8C)
-#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
-#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
-
-#define MHIDATABASE_LOWER (0x98)
-#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
-#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
-
-#define MHIDATABASE_HIGHER (0x9C)
-#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
-#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
-
-#define MHIDATALIMIT_LOWER (0xA0)
-#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
-#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
-
-#define MHIDATALIMIT_HIGHER (0xA4)
-#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
-#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
+/* MHI registers */
+#define MHIREGLEN 0x0
+#define MHIVER 0x8
+#define MHICFG 0x10
+#define CHDBOFF 0x18
+#define ERDBOFF 0x20
+#define BHIOFF 0x28
+#define BHIEOFF 0x2c
+#define DEBUGOFF 0x30
+#define MHICTRL 0x38
+#define MHISTATUS 0x48
+#define CCABAP_LOWER 0x58
+#define CCABAP_HIGHER 0x5c
+#define ECABAP_LOWER 0x60
+#define ECABAP_HIGHER 0x64
+#define CRCBAP_LOWER 0x68
+#define CRCBAP_HIGHER 0x6c
+#define CRDB_LOWER 0x70
+#define CRDB_HIGHER 0x74
+#define MHICTRLBASE_LOWER 0x80
+#define MHICTRLBASE_HIGHER 0x84
+#define MHICTRLLIMIT_LOWER 0x88
+#define MHICTRLLIMIT_HIGHER 0x8c
+#define MHIDATABASE_LOWER 0x98
+#define MHIDATABASE_HIGHER 0x9c
+#define MHIDATALIMIT_LOWER 0xa0
+#define MHIDATALIMIT_HIGHER 0xa4
/* Host request register */
-#define MHI_SOC_RESET_REQ_OFFSET (0xB0)
-#define MHI_SOC_RESET_REQ BIT(0)
+#define MHI_SOC_RESET_REQ_OFFSET 0xb0
+#define MHI_SOC_RESET_REQ BIT(0)
/* MHI BHI offsets */
-#define BHI_BHIVERSION_MINOR (0x00)
-#define BHI_BHIVERSION_MAJOR (0x04)
-#define BHI_IMGADDR_LOW (0x08)
-#define BHI_IMGADDR_HIGH (0x0C)
-#define BHI_IMGSIZE (0x10)
-#define BHI_RSVD1 (0x14)
-#define BHI_IMGTXDB (0x18)
-#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHI_TXDB_SEQNUM_SHFT (0)
-#define BHI_RSVD2 (0x1C)
-#define BHI_INTVEC (0x20)
-#define BHI_RSVD3 (0x24)
-#define BHI_EXECENV (0x28)
-#define BHI_STATUS (0x2C)
-#define BHI_ERRCODE (0x30)
-#define BHI_ERRDBG1 (0x34)
-#define BHI_ERRDBG2 (0x38)
-#define BHI_ERRDBG3 (0x3C)
-#define BHI_SERIALNU (0x40)
-#define BHI_SBLANTIROLLVER (0x44)
-#define BHI_NUMSEG (0x48)
-#define BHI_MSMHWID(n) (0x4C + (0x4 * (n)))
-#define BHI_OEMPKHASH(n) (0x64 + (0x4 * (n)))
-#define BHI_RSVD5 (0xC4)
-#define BHI_STATUS_MASK (0xC0000000)
-#define BHI_STATUS_SHIFT (30)
-#define BHI_STATUS_ERROR (3)
-#define BHI_STATUS_SUCCESS (2)
-#define BHI_STATUS_RESET (0)
+#define BHI_BHIVERSION_MINOR 0x00
+#define BHI_BHIVERSION_MAJOR 0x04
+#define BHI_IMGADDR_LOW 0x08
+#define BHI_IMGADDR_HIGH 0x0c
+#define BHI_IMGSIZE 0x10
+#define BHI_RSVD1 0x14
+#define BHI_IMGTXDB 0x18
+#define BHI_TXDB_SEQNUM_BMSK GENMASK(29, 0)
+#define BHI_TXDB_SEQNUM_SHFT 0
+#define BHI_RSVD2 0x1c
+#define BHI_INTVEC 0x20
+#define BHI_RSVD3 0x24
+#define BHI_EXECENV 0x28
+#define BHI_STATUS 0x2c
+#define BHI_ERRCODE 0x30
+#define BHI_ERRDBG1 0x34
+#define BHI_ERRDBG2 0x38
+#define BHI_ERRDBG3 0x3c
+#define BHI_SERIALNU 0x40
+#define BHI_SBLANTIROLLVER 0x44
+#define BHI_NUMSEG 0x48
+#define BHI_MSMHWID(n) (0x4c + (0x4 * (n)))
+#define BHI_OEMPKHASH(n) (0x64 + (0x4 * (n)))
+#define BHI_RSVD5 0xc4
+#define BHI_STATUS_MASK GENMASK(31, 30)
+#define BHI_STATUS_SHIFT 30
+#define BHI_STATUS_ERROR 3
+#define BHI_STATUS_SUCCESS 2
+#define BHI_STATUS_RESET 0
/* MHI BHIE offsets */
-#define BHIE_MSMSOCID_OFFS (0x0000)
-#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
-#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
-#define BHIE_TXVECSIZE_OFFS (0x0034)
-#define BHIE_TXVECDB_OFFS (0x003C)
-#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHIE_TXVECDB_SEQNUM_SHFT (0)
-#define BHIE_TXVECSTATUS_OFFS (0x0044)
-#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
-#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
-#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
-#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
-#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
-#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
-#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
-#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
-#define BHIE_RXVECSIZE_OFFS (0x0068)
-#define BHIE_RXVECDB_OFFS (0x0070)
-#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHIE_RXVECDB_SEQNUM_SHFT (0)
-#define BHIE_RXVECSTATUS_OFFS (0x0078)
-#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
-#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
-#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
-#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
-#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
-#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
-#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
-
-#define SOC_HW_VERSION_OFFS (0x224)
-#define SOC_HW_VERSION_FAM_NUM_BMSK (0xF0000000)
-#define SOC_HW_VERSION_FAM_NUM_SHFT (28)
-#define SOC_HW_VERSION_DEV_NUM_BMSK (0x0FFF0000)
-#define SOC_HW_VERSION_DEV_NUM_SHFT (16)
-#define SOC_HW_VERSION_MAJOR_VER_BMSK (0x0000FF00)
-#define SOC_HW_VERSION_MAJOR_VER_SHFT (8)
-#define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
-#define SOC_HW_VERSION_MINOR_VER_SHFT (0)
+#define BHIE_MSMSOCID_OFFS 0x0000
+#define BHIE_TXVECADDR_LOW_OFFS 0x002c
+#define BHIE_TXVECADDR_HIGH_OFFS 0x0030
+#define BHIE_TXVECSIZE_OFFS 0x0034
+#define BHIE_TXVECDB_OFFS 0x003c
+#define BHIE_TXVECDB_SEQNUM_BMSK GENMASK(29, 0)
+#define BHIE_TXVECDB_SEQNUM_SHFT 0
+#define BHIE_TXVECSTATUS_OFFS 0x0044
+#define BHIE_TXVECSTATUS_SEQNUM_BMSK GENMASK(29, 0)
+#define BHIE_TXVECSTATUS_SEQNUM_SHFT 0
+#define BHIE_TXVECSTATUS_STATUS_BMSK GENMASK(31, 30)
+#define BHIE_TXVECSTATUS_STATUS_SHFT 30
+#define BHIE_TXVECSTATUS_STATUS_RESET 0x00
+#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL 0x02
+#define BHIE_TXVECSTATUS_STATUS_ERROR 0x03
+#define BHIE_RXVECADDR_LOW_OFFS 0x0060
+#define BHIE_RXVECADDR_HIGH_OFFS 0x0064
+#define BHIE_RXVECSIZE_OFFS 0x0068
+#define BHIE_RXVECDB_OFFS 0x0070
+#define BHIE_RXVECDB_SEQNUM_BMSK GENMASK(29, 0)
+#define BHIE_RXVECDB_SEQNUM_SHFT 0
+#define BHIE_RXVECSTATUS_OFFS 0x0078
+#define BHIE_RXVECSTATUS_SEQNUM_BMSK GENMASK(29, 0)
+#define BHIE_RXVECSTATUS_SEQNUM_SHFT 0
+#define BHIE_RXVECSTATUS_STATUS_BMSK GENMASK(31, 30)
+#define BHIE_RXVECSTATUS_STATUS_SHFT 30
+#define BHIE_RXVECSTATUS_STATUS_RESET 0x00
+#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL 0x02
+#define BHIE_RXVECSTATUS_STATUS_ERROR 0x03
+
+#define SOC_HW_VERSION_OFFS 0x224
+#define SOC_HW_VERSION_FAM_NUM_BMSK GENMASK(31, 28)
+#define SOC_HW_VERSION_FAM_NUM_SHFT 28
+#define SOC_HW_VERSION_DEV_NUM_BMSK GENMASK(27, 16)
+#define SOC_HW_VERSION_DEV_NUM_SHFT 16
+#define SOC_HW_VERSION_MAJOR_VER_BMSK GENMASK(15, 8)
+#define SOC_HW_VERSION_MAJOR_VER_SHFT 8
+#define SOC_HW_VERSION_MINOR_VER_BMSK GENMASK(7, 0)
+#define SOC_HW_VERSION_MINOR_VER_SHFT 0
struct mhi_ctxt {
struct mhi_event_ctxt *er_ctxt;
--
2.25.1
This commit adds support for registering MHI endpoint controller drivers
with the MHI endpoint stack. MHI endpoint controller drivers manage the
interaction with host machines such as x86. They also act as the MHI
endpoint bus master, in charge of managing the physical link between the
host and the endpoint device.
The endpoint controller driver encapsulates all information about the
underlying physical bus, such as PCIe. The registration process involves
parsing the channel configuration and allocating an MHI EP device.
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/Kconfig | 1 +
drivers/bus/mhi/Makefile | 3 +
drivers/bus/mhi/ep/Kconfig | 10 ++
drivers/bus/mhi/ep/Makefile | 2 +
drivers/bus/mhi/ep/internal.h | 158 +++++++++++++++++++++++
drivers/bus/mhi/ep/main.c | 231 ++++++++++++++++++++++++++++++++++
include/linux/mhi_ep.h | 140 +++++++++++++++++++++
7 files changed, 545 insertions(+)
create mode 100644 drivers/bus/mhi/ep/Kconfig
create mode 100644 drivers/bus/mhi/ep/Makefile
create mode 100644 drivers/bus/mhi/ep/internal.h
create mode 100644 drivers/bus/mhi/ep/main.c
create mode 100644 include/linux/mhi_ep.h
diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
index 4748df7f9cd5..b39a11e6c624 100644
--- a/drivers/bus/mhi/Kconfig
+++ b/drivers/bus/mhi/Kconfig
@@ -6,3 +6,4 @@
#
source "drivers/bus/mhi/host/Kconfig"
+source "drivers/bus/mhi/ep/Kconfig"
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
index 5f5708a249f5..46981331b38f 100644
--- a/drivers/bus/mhi/Makefile
+++ b/drivers/bus/mhi/Makefile
@@ -1,2 +1,5 @@
# Host MHI stack
obj-y += host/
+
+# Endpoint MHI stack
+obj-y += ep/
diff --git a/drivers/bus/mhi/ep/Kconfig b/drivers/bus/mhi/ep/Kconfig
new file mode 100644
index 000000000000..229c71397b30
--- /dev/null
+++ b/drivers/bus/mhi/ep/Kconfig
@@ -0,0 +1,10 @@
+config MHI_BUS_EP
+ tristate "Modem Host Interface (MHI) bus Endpoint implementation"
+ help
+ Bus driver for MHI protocol. Modem Host Interface (MHI) is a
+ communication protocol used by the host processors to control
+ and communicate with modem devices over a high speed peripheral
+ bus or shared memory.
+
+ MHI_BUS_EP implements the MHI protocol for the endpoint devices
+ like SDX55 modem connected to the host machine over PCIe.
diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
new file mode 100644
index 000000000000..64e29252b608
--- /dev/null
+++ b/drivers/bus/mhi/ep/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
+mhi_ep-y := main.o
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
new file mode 100644
index 000000000000..7b164daf4332
--- /dev/null
+++ b/drivers/bus/mhi/ep/internal.h
@@ -0,0 +1,158 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2021, Linaro Ltd.
+ *
+ */
+
+#ifndef _MHI_EP_INTERNAL_
+#define _MHI_EP_INTERNAL_
+
+#include <linux/bitfield.h>
+
+#include "../common.h"
+
+extern struct bus_type mhi_ep_bus_type;
+
+/* MHI register definitions */
+#define MHIREGLEN 0x100
+#define MHIVER 0x108
+#define MHICFG 0x110
+#define CHDBOFF 0x118
+#define ERDBOFF 0x120
+#define BHIOFF 0x128
+#define DEBUGOFF 0x130
+#define MHICTRL 0x138
+#define MHISTATUS 0x148
+#define CCABAP_LOWER 0x158
+#define CCABAP_HIGHER 0x15c
+#define ECABAP_LOWER 0x160
+#define ECABAP_HIGHER 0x164
+#define CRCBAP_LOWER 0x168
+#define CRCBAP_HIGHER 0x16c
+#define CRDB_LOWER 0x170
+#define CRDB_HIGHER 0x174
+#define MHICTRLBASE_LOWER 0x180
+#define MHICTRLBASE_HIGHER 0x184
+#define MHICTRLLIMIT_LOWER 0x188
+#define MHICTRLLIMIT_HIGHER 0x18c
+#define MHIDATABASE_LOWER 0x198
+#define MHIDATABASE_HIGHER 0x19c
+#define MHIDATALIMIT_LOWER 0x1a0
+#define MHIDATALIMIT_HIGHER 0x1a4
+#define CHDB_LOWER_n(n) (0x400 + 0x8 * (n))
+#define CHDB_HIGHER_n(n) (0x404 + 0x8 * (n))
+#define ERDB_LOWER_n(n) (0x800 + 0x8 * (n))
+#define ERDB_HIGHER_n(n) (0x804 + 0x8 * (n))
+#define BHI_INTVEC 0x220
+#define BHI_EXECENV 0x228
+#define BHI_IMGTXDB 0x218
+
+#define MHI_CTRL_INT_STATUS_A7 0x4
+#define MHI_CTRL_INT_STATUS_A7_MSK BIT(0)
+#define MHI_CTRL_INT_STATUS_CRDB_MSK BIT(1)
+#define MHI_CHDB_INT_STATUS_A7_n(n) (0x28 + 0x4 * (n))
+#define MHI_ERDB_INT_STATUS_A7_n(n) (0x38 + 0x4 * (n))
+
+#define MHI_CTRL_INT_CLEAR_A7 0x4c
+#define MHI_CTRL_INT_MMIO_WR_CLEAR BIT(2)
+#define MHI_CTRL_INT_CRDB_CLEAR BIT(1)
+#define MHI_CTRL_INT_CRDB_MHICTRL_CLEAR BIT(0)
+
+#define MHI_CHDB_INT_CLEAR_A7_n(n) (0x70 + 0x4 * (n))
+#define MHI_CHDB_INT_CLEAR_A7_n_CLEAR_ALL GENMASK(31, 0)
+#define MHI_ERDB_INT_CLEAR_A7_n(n) (0x80 + 0x4 * (n))
+#define MHI_ERDB_INT_CLEAR_A7_n_CLEAR_ALL GENMASK(31, 0)
+
+#define MHI_CTRL_INT_MASK_A7 0x94
+#define MHI_CTRL_INT_MASK_A7_MASK_MASK GENMASK(1, 0)
+#define MHI_CTRL_MHICTRL_MASK BIT(0)
+#define MHI_CTRL_MHICTRL_SHFT 0
+#define MHI_CTRL_CRDB_MASK BIT(1)
+#define MHI_CTRL_CRDB_SHFT 1
+
+#define MHI_CHDB_INT_MASK_A7_n(n) (0xb8 + 0x4 * (n))
+#define MHI_CHDB_INT_MASK_A7_n_EN_ALL GENMASK(31, 0)
+#define MHI_ERDB_INT_MASK_A7_n(n) (0xc8 + 0x4 * (n))
+#define MHI_ERDB_INT_MASK_A7_n_EN_ALL GENMASK(31, 0)
+
+#define NR_OF_CMD_RINGS 1
+#define MHI_MASK_ROWS_CH_EV_DB 4
+#define MHI_MASK_CH_EV_LEN 32
+
+/* Generic context */
+struct mhi_generic_ctx {
+ __u32 reserved0;
+ __u32 reserved1;
+ __u32 reserved2;
+
+ __u64 rbase __packed __aligned(4);
+ __u64 rlen __packed __aligned(4);
+ __u64 rp __packed __aligned(4);
+ __u64 wp __packed __aligned(4);
+};
+
+enum mhi_ep_ring_state {
+ RING_STATE_UINT = 0,
+ RING_STATE_IDLE,
+};
+
+enum mhi_ep_ring_type {
+ RING_TYPE_CMD = 0,
+ RING_TYPE_ER,
+ RING_TYPE_CH,
+ RING_TYPE_INVALID,
+};
+
+struct mhi_ep_ring_element {
+ u64 ptr;
+ u32 dword[2];
+};
+
+/* Transfer ring element type */
+union mhi_ep_ring_ctx {
+ struct mhi_cmd_ctxt cmd;
+ struct mhi_event_ctxt ev;
+ struct mhi_chan_ctxt ch;
+ struct mhi_generic_ctx generic;
+};
+
+struct mhi_ep_ring {
+ struct list_head list;
+ struct mhi_ep_cntrl *mhi_cntrl;
+ int (*ring_cb)(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
+ union mhi_ep_ring_ctx *ring_ctx;
+ struct mhi_ep_ring_element *ring_cache;
+ enum mhi_ep_ring_type type;
+ enum mhi_ep_ring_state state;
+ size_t rd_offset;
+ size_t wr_offset;
+ size_t ring_size;
+ u32 db_offset_h;
+ u32 db_offset_l;
+ u32 ch_id;
+};
+
+struct mhi_ep_cmd {
+ struct mhi_ep_ring ring;
+};
+
+struct mhi_ep_event {
+ struct mhi_ep_ring ring;
+};
+
+struct mhi_ep_chan {
+ char *name;
+ struct mhi_ep_device *mhi_dev;
+ struct mhi_ep_ring ring;
+ struct mutex lock;
+ void (*xfer_cb)(struct mhi_ep_device *mhi_dev, struct mhi_result *result);
+ enum mhi_ch_state state;
+ enum dma_data_direction dir;
+ u64 tre_loc;
+ u32 tre_size;
+ u32 tre_bytes_left;
+ u32 chan;
+ bool skip_td;
+};
+
+#endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
new file mode 100644
index 000000000000..db664360c8ab
--- /dev/null
+++ b/drivers/bus/mhi/ep/main.c
@@ -0,0 +1,231 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * MHI Bus Endpoint stack
+ *
+ * Copyright (C) 2021 Linaro Ltd.
+ * Author: Manivannan Sadhasivam <[email protected]>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/dma-direction.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/mhi_ep.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include "internal.h"
+
+static DEFINE_IDA(mhi_ep_cntrl_ida);
+
+static void mhi_ep_release_device(struct device *dev)
+{
+ struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+
+ /*
+ * We need to set the mhi_chan->mhi_dev to NULL here since the MHI
+ * devices for the channels will only get created if the mhi_dev
+ * associated with it is NULL.
+ */
+ if (mhi_dev->ul_chan)
+ mhi_dev->ul_chan->mhi_dev = NULL;
+
+ if (mhi_dev->dl_chan)
+ mhi_dev->dl_chan->mhi_dev = NULL;
+
+ kfree(mhi_dev);
+}
+
+static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ struct mhi_ep_device *mhi_dev;
+ struct device *dev;
+
+ mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
+ if (!mhi_dev)
+ return ERR_PTR(-ENOMEM);
+
+ dev = &mhi_dev->dev;
+ device_initialize(dev);
+ dev->bus = &mhi_ep_bus_type;
+ dev->release = mhi_ep_release_device;
+
+ if (mhi_cntrl->mhi_dev) {
+ /* for MHI client devices, parent is the MHI controller device */
+ dev->parent = &mhi_cntrl->mhi_dev->dev;
+ } else {
+ /* for MHI controller device, parent is the bus device (e.g. PCI EPF) */
+ dev->parent = mhi_cntrl->cntrl_dev;
+ }
+
+ mhi_dev->mhi_cntrl = mhi_cntrl;
+
+ return mhi_dev;
+}
+
+static int parse_ch_cfg(struct mhi_ep_cntrl *mhi_cntrl,
+ const struct mhi_ep_cntrl_config *config)
+{
+ const struct mhi_ep_channel_config *ch_cfg;
+ struct device *dev = mhi_cntrl->cntrl_dev;
+ u32 chan, i;
+ int ret = -EINVAL;
+
+ mhi_cntrl->max_chan = config->max_channels;
+
+ /*
+ * Allocate max_channels supported by the MHI endpoint and populate
+ * only the defined channels
+ */
+ mhi_cntrl->mhi_chan = kcalloc(mhi_cntrl->max_chan, sizeof(*mhi_cntrl->mhi_chan),
+ GFP_KERNEL);
+ if (!mhi_cntrl->mhi_chan)
+ return -ENOMEM;
+
+ for (i = 0; i < config->num_channels; i++) {
+ struct mhi_ep_chan *mhi_chan;
+
+ ch_cfg = &config->ch_cfg[i];
+
+ chan = ch_cfg->num;
+ if (chan >= mhi_cntrl->max_chan) {
+ dev_err(dev, "Channel %d not available\n", chan);
+ goto error_chan_cfg;
+ }
+
+ mhi_chan = &mhi_cntrl->mhi_chan[chan];
+ mhi_chan->name = ch_cfg->name;
+ mhi_chan->chan = chan;
+ mhi_chan->dir = ch_cfg->dir;
+ mutex_init(&mhi_chan->lock);
+
+ /* Bi-directional and directionless channels are not supported */
+ if (mhi_chan->dir == DMA_BIDIRECTIONAL || mhi_chan->dir == DMA_NONE) {
+ dev_err(dev, "Invalid channel configuration\n");
+ goto error_chan_cfg;
+ }
+ }
+
+ return 0;
+
+error_chan_cfg:
+ kfree(mhi_cntrl->mhi_chan);
+
+ return ret;
+}
+
+/*
+ * Allocate channel and command rings here. Event rings will be allocated
+ * in mhi_ep_power_up() as the config comes from the host.
+ */
+int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
+ const struct mhi_ep_cntrl_config *config)
+{
+ struct mhi_ep_device *mhi_dev;
+ int ret;
+
+ if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
+ return -EINVAL;
+
+ ret = parse_ch_cfg(mhi_cntrl, config);
+ if (ret)
+ return ret;
+
+ mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS, sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
+ if (!mhi_cntrl->mhi_cmd) {
+ ret = -ENOMEM;
+ goto err_free_ch;
+ }
+
+ /* Set controller index */
+ mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
+ if (mhi_cntrl->index < 0) {
+ ret = mhi_cntrl->index;
+ goto err_free_cmd;
+ }
+
+ /* Allocate the controller device */
+ mhi_dev = mhi_ep_alloc_device(mhi_cntrl);
+ if (IS_ERR(mhi_dev)) {
+ dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate controller device\n");
+ ret = PTR_ERR(mhi_dev);
+ goto err_ida_free;
+ }
+
+ mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
+ dev_set_name(&mhi_dev->dev, "mhi_ep%d", mhi_cntrl->index);
+ mhi_dev->name = dev_name(&mhi_dev->dev);
+
+ ret = device_add(&mhi_dev->dev);
+ if (ret)
+ goto err_release_dev;
+
+ mhi_cntrl->mhi_dev = mhi_dev;
+
+ dev_dbg(&mhi_dev->dev, "MHI EP Controller registered\n");
+
+ return 0;
+
+err_release_dev:
+ put_device(&mhi_dev->dev);
+err_ida_free:
+ ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
+err_free_cmd:
+ kfree(mhi_cntrl->mhi_cmd);
+err_free_ch:
+ kfree(mhi_cntrl->mhi_chan);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_ep_register_controller);
+
+void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
+
+ kfree(mhi_cntrl->mhi_cmd);
+ kfree(mhi_cntrl->mhi_chan);
+
+ device_del(&mhi_dev->dev);
+ put_device(&mhi_dev->dev);
+
+ ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
+}
+EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
+
+static int mhi_ep_match(struct device *dev, struct device_driver *drv)
+{
+ struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+
+ /*
+ * If the device is a controller type then there is no client driver
+ * associated with it
+ */
+ if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+ return 0;
+
+ return 0;
+};
+
+struct bus_type mhi_ep_bus_type = {
+ .name = "mhi_ep",
+ .dev_name = "mhi_ep",
+ .match = mhi_ep_match,
+};
+
+static int __init mhi_ep_init(void)
+{
+ return bus_register(&mhi_ep_bus_type);
+}
+
+static void __exit mhi_ep_exit(void)
+{
+ bus_unregister(&mhi_ep_bus_type);
+}
+
+postcore_initcall(mhi_ep_init);
+module_exit(mhi_ep_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("MHI Bus Endpoint stack");
+MODULE_AUTHOR("Manivannan Sadhasivam <[email protected]>");
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
new file mode 100644
index 000000000000..14fd40af8974
--- /dev/null
+++ b/include/linux/mhi_ep.h
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2021, Linaro Ltd.
+ *
+ */
+#ifndef _MHI_EP_H_
+#define _MHI_EP_H_
+
+#include <linux/dma-direction.h>
+#include <linux/mhi.h>
+
+#define MHI_EP_DEFAULT_MTU 0x4000
+
+/**
+ * struct mhi_ep_channel_config - Channel configuration structure for controller
+ * @name: The name of this channel
+ * @num: The number assigned to this channel
+ * @num_elements: The number of elements that can be queued to this channel
+ * @dir: Direction that data may flow on this channel
+ */
+struct mhi_ep_channel_config {
+ char *name;
+ u32 num;
+ u32 num_elements;
+ enum dma_data_direction dir;
+};
+
+/**
+ * struct mhi_ep_cntrl_config - MHI Endpoint controller configuration
+ * @max_channels: Maximum number of channels supported
+ * @num_channels: Number of channels defined in @ch_cfg
+ * @ch_cfg: Array of defined channels
+ * @mhi_version: MHI spec version supported by the controller
+ */
+struct mhi_ep_cntrl_config {
+ u32 max_channels;
+ u32 num_channels;
+ const struct mhi_ep_channel_config *ch_cfg;
+ u32 mhi_version;
+};
+
+/**
+ * struct mhi_ep_db_info - MHI Endpoint doorbell info
+ * @mask: Mask of the doorbell interrupt
+ * @status: Status of the doorbell interrupt
+ */
+struct mhi_ep_db_info {
+ u32 mask;
+ u32 status;
+};
+
+/**
+ * struct mhi_ep_cntrl - MHI Endpoint controller structure
+ * @cntrl_dev: Pointer to the struct device of physical bus acting as the MHI
+ * Endpoint controller
+ * @mhi_dev: MHI Endpoint device instance for the controller
+ * @mmio: MMIO region containing the MHI registers
+ * @mhi_chan: Points to the channel configuration table
+ * @mhi_event: Points to the event ring configurations table
+ * @mhi_cmd: Points to the command ring configurations table
+ * @sm: MHI Endpoint state machine
+ * @raise_irq: CB function for raising IRQ to the host
+ * @alloc_addr: CB function for allocating memory in endpoint for storing host context
+ * @map_addr: CB function for mapping host context to endpoint
+ * @free_addr: CB function to free the allocated memory in endpoint for storing host context
+ * @unmap_addr: CB function to unmap the host context in endpoint
+ * @mhi_state: MHI Endpoint state
+ * @max_chan: Maximum channels supported by the endpoint controller
+ * @index: MHI Endpoint controller index
+ */
+struct mhi_ep_cntrl {
+ struct device *cntrl_dev;
+ struct mhi_ep_device *mhi_dev;
+ void __iomem *mmio;
+
+ struct mhi_ep_chan *mhi_chan;
+ struct mhi_ep_event *mhi_event;
+ struct mhi_ep_cmd *mhi_cmd;
+ struct mhi_ep_sm *sm;
+
+ void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl);
+ void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl,
+ phys_addr_t *phys_addr, size_t size);
+ int (*map_addr)(struct mhi_ep_cntrl *mhi_cntrl,
+ phys_addr_t phys_addr, u64 pci_addr, size_t size);
+ void (*free_addr)(struct mhi_ep_cntrl *mhi_cntrl,
+ phys_addr_t phys_addr, void __iomem *virt_addr, size_t size);
+ void (*unmap_addr)(struct mhi_ep_cntrl *mhi_cntrl,
+ phys_addr_t phys_addr);
+
+ enum mhi_state mhi_state;
+
+ u32 max_chan;
+ int index;
+};
+
+/**
+ * struct mhi_ep_device - Structure representing an MHI Endpoint device that binds
+ * to channels or is associated with controllers
+ * @dev: Driver model device node for the MHI Endpoint device
+ * @mhi_cntrl: Controller the device belongs to
+ * @id: Pointer to MHI Endpoint device ID struct
+ * @name: Name of the associated MHI Endpoint device
+ * @ul_chan: UL channel for the device
+ * @dl_chan: DL channel for the device
+ * @dev_type: MHI device type
+ * @ul_chan_id: Channel id for UL transfer
+ * @dl_chan_id: Channel id for DL transfer
+ */
+struct mhi_ep_device {
+ struct device dev;
+ struct mhi_ep_cntrl *mhi_cntrl;
+ const struct mhi_device_id *id;
+ const char *name;
+ struct mhi_ep_chan *ul_chan;
+ struct mhi_ep_chan *dl_chan;
+ enum mhi_device_type dev_type;
+ int ul_chan_id;
+ int dl_chan_id;
+};
+
+#define to_mhi_ep_device(dev) container_of(dev, struct mhi_ep_device, dev)
+
+/**
+ * mhi_ep_register_controller - Register MHI Endpoint controller
+ * @mhi_cntrl: MHI Endpoint controller to register
+ * @config: Configuration to use for the controller
+ *
+ * Return: 0 if controller registrations succeeds, a negative error code otherwise.
+ */
+int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
+ const struct mhi_ep_cntrl_config *config);
+
+/**
+ * mhi_ep_unregister_controller - Unregister MHI Endpoint controller
+ * @mhi_cntrl: MHI Endpoint controller to unregister
+ */
+void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
+
+#endif
--
2.25.1
This commit adds support for registering MHI endpoint client drivers
with the MHI endpoint stack. MHI endpoint client drivers bind to one
or more MHI endpoint devices in order to send and receive upper-layer
protocol packets like IP packets, modem control messages, and diagnostic
messages over the MHI bus.
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/main.c | 85 +++++++++++++++++++++++++++++++++++++++
include/linux/mhi_ep.h | 53 ++++++++++++++++++++++++
2 files changed, 138 insertions(+)
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index db664360c8ab..ce0f99f22058 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -193,9 +193,88 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
}
EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
+static int mhi_ep_driver_probe(struct device *dev)
+{
+ struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+ struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
+ struct mhi_ep_chan *ul_chan = mhi_dev->ul_chan;
+ struct mhi_ep_chan *dl_chan = mhi_dev->dl_chan;
+
+ if (ul_chan)
+ ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
+
+ if (dl_chan)
+ dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
+
+ return mhi_drv->probe(mhi_dev, mhi_dev->id);
+}
+
+static int mhi_ep_driver_remove(struct device *dev)
+{
+ struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+ struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
+ struct mhi_result result = {};
+ struct mhi_ep_chan *mhi_chan;
+ int dir;
+
+ /* Skip if it is a controller device */
+ if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+ return 0;
+
+ /* Disconnect the channels associated with the driver */
+ for (dir = 0; dir < 2; dir++) {
+ mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+ if (!mhi_chan)
+ continue;
+
+ mutex_lock(&mhi_chan->lock);
+ /* Send channel disconnect status to the client driver */
+ if (mhi_chan->xfer_cb) {
+ result.transaction_status = -ENOTCONN;
+ result.bytes_xferd = 0;
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+ }
+
+ /* Set channel state to DISABLED */
+ mhi_chan->state = MHI_CH_STATE_DISABLED;
+ mhi_chan->xfer_cb = NULL;
+ mutex_unlock(&mhi_chan->lock);
+ }
+
+ /* Remove the client driver now */
+ mhi_drv->remove(mhi_dev);
+
+ return 0;
+}
+
+int __mhi_ep_driver_register(struct mhi_ep_driver *mhi_drv, struct module *owner)
+{
+ struct device_driver *driver = &mhi_drv->driver;
+
+ if (!mhi_drv->probe || !mhi_drv->remove)
+ return -EINVAL;
+
+ driver->bus = &mhi_ep_bus_type;
+ driver->owner = owner;
+ driver->probe = mhi_ep_driver_probe;
+ driver->remove = mhi_ep_driver_remove;
+
+ return driver_register(driver);
+}
+EXPORT_SYMBOL_GPL(__mhi_ep_driver_register);
+
+void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv)
+{
+ driver_unregister(&mhi_drv->driver);
+}
+EXPORT_SYMBOL_GPL(mhi_ep_driver_unregister);
+
static int mhi_ep_match(struct device *dev, struct device_driver *drv)
{
struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+ struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(drv);
+ const struct mhi_device_id *id;
/*
* If the device is a controller type then there is no client driver
@@ -204,6 +283,12 @@ static int mhi_ep_match(struct device *dev, struct device_driver *drv)
if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
return 0;
+ for (id = mhi_drv->id_table; id->chan[0]; id++) {
+ if (!strcmp(mhi_dev->name, id->chan)) {
+ mhi_dev->id = id;
+ return 1;
+ }
+ }
+
return 0;
};
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 14fd40af8974..bc72c197db4d 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -119,7 +119,60 @@ struct mhi_ep_device {
int dl_chan_id;
};
+/**
+ * struct mhi_ep_driver - Structure representing a MHI Endpoint client driver
+ * @id_table: Pointer to MHI Endpoint device ID table
+ * @driver: Device driver model driver
+ * @probe: CB function for client driver probe function
+ * @remove: CB function for client driver remove function
+ * @ul_xfer_cb: CB function for UL data transfer
+ * @dl_xfer_cb: CB function for DL data transfer
+ */
+struct mhi_ep_driver {
+ const struct mhi_device_id *id_table;
+ struct device_driver driver;
+ int (*probe)(struct mhi_ep_device *mhi_ep,
+ const struct mhi_device_id *id);
+ void (*remove)(struct mhi_ep_device *mhi_ep);
+ void (*ul_xfer_cb)(struct mhi_ep_device *mhi_dev,
+ struct mhi_result *result);
+ void (*dl_xfer_cb)(struct mhi_ep_device *mhi_dev,
+ struct mhi_result *result);
+};
+
#define to_mhi_ep_device(dev) container_of(dev, struct mhi_ep_device, dev)
+#define to_mhi_ep_driver(drv) container_of(drv, struct mhi_ep_driver, driver)
+
+/*
+ * module_mhi_ep_driver() - Helper macro for drivers that don't do
+ * anything special other than using default mhi_ep_driver_register() and
+ * mhi_ep_driver_unregister(). This eliminates a lot of boilerplate.
+ * Each module may only use this macro once.
+ */
+#define module_mhi_ep_driver(mhi_drv) \
+ module_driver(mhi_drv, mhi_ep_driver_register, \
+ mhi_ep_driver_unregister)
+
+/*
+ * Macro to avoid include chaining to get THIS_MODULE
+ */
+#define mhi_ep_driver_register(mhi_drv) \
+ __mhi_ep_driver_register(mhi_drv, THIS_MODULE)
+
+/**
+ * __mhi_ep_driver_register - Register a driver with MHI Endpoint bus
+ * @mhi_drv: Driver to be associated with the device
+ * @owner: The module owner
+ *
+ * Return: 0 if driver registration succeeds, a negative error code otherwise.
+ */
+int __mhi_ep_driver_register(struct mhi_ep_driver *mhi_drv, struct module *owner);
+
+/**
+ * mhi_ep_driver_unregister - Unregister a driver from MHI Endpoint bus
+ * @mhi_drv: Driver associated with the device
+ */
+void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv);
/**
* mhi_ep_register_controller - Register MHI Endpoint controller
--
2.25.1
This commit adds support for creating and destroying MHI endpoint devices.
The MHI endpoint devices bind to the MHI endpoint channels and are used
to transfer data between the MHI host and the endpoint device.
There is a single MHI EP device for each channel pair. The devices are
created when the corresponding channels have been started by the host and
are destroyed during MHI EP power down and reset.
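The UL/DL pairing done in mhi_ep_create_device() for the primary and the
adjacent secondary channel can be modeled in a few lines of plain C. This is
a minimal userspace sketch with hypothetical stand-in types, not the kernel
structs:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for struct mhi_ep_chan / struct mhi_ep_device fields */
enum dir { TO_DEVICE, FROM_DEVICE };

struct chan {
	enum dir dir;
	unsigned int id;
};

struct ep_dev {
	struct chan *ul_chan, *dl_chan;
	unsigned int ul_chan_id, dl_chan_id;
};

/* Assign a channel to the UL or DL slot based on its DMA direction,
 * as done for both channels of the pair during device creation. */
static void assign_chan(struct ep_dev *dev, struct chan *c)
{
	if (c->dir == TO_DEVICE) {
		dev->ul_chan = c;
		dev->ul_chan_id = c->id;
	} else {
		dev->dl_chan = c;
		dev->dl_chan_id = c->id;
	}
}
```

Calling assign_chan() for two adjacent channels with opposite directions
yields one device exposing both an UL and a DL channel, which is why a
single MHI EP device covers a channel pair.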
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/main.c | 85 +++++++++++++++++++++++++++++++++++++++
1 file changed, 85 insertions(+)
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index ce0f99f22058..f0b5f49db95a 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -63,6 +63,91 @@ static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl)
return mhi_dev;
}
+static int mhi_ep_create_device(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id)
+{
+ struct mhi_ep_device *mhi_dev;
+ struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ch_id];
+ int ret;
+
+ mhi_dev = mhi_ep_alloc_device(mhi_cntrl);
+ if (IS_ERR(mhi_dev))
+ return PTR_ERR(mhi_dev);
+
+ mhi_dev->dev_type = MHI_DEVICE_XFER;
+
+ /* Configure primary channel */
+ if (mhi_chan->dir == DMA_TO_DEVICE) {
+ mhi_dev->ul_chan = mhi_chan;
+ mhi_dev->ul_chan_id = mhi_chan->chan;
+ } else {
+ mhi_dev->dl_chan = mhi_chan;
+ mhi_dev->dl_chan_id = mhi_chan->chan;
+ }
+
+ get_device(&mhi_dev->dev);
+ mhi_chan->mhi_dev = mhi_dev;
+
+ /* Configure secondary channel as well */
+ mhi_chan++;
+ if (mhi_chan->dir == DMA_TO_DEVICE) {
+ mhi_dev->ul_chan = mhi_chan;
+ mhi_dev->ul_chan_id = mhi_chan->chan;
+ } else {
+ mhi_dev->dl_chan = mhi_chan;
+ mhi_dev->dl_chan_id = mhi_chan->chan;
+ }
+
+ get_device(&mhi_dev->dev);
+ mhi_chan->mhi_dev = mhi_dev;
+
+ /* Channel name is the same for both UL and DL */
+ mhi_dev->name = mhi_chan->name;
+ dev_set_name(&mhi_dev->dev, "%s_%s",
+ dev_name(&mhi_cntrl->mhi_dev->dev),
+ mhi_dev->name);
+
+ ret = device_add(&mhi_dev->dev);
+ if (ret)
+ put_device(&mhi_dev->dev);
+
+ return ret;
+}
+
+static int mhi_ep_destroy_device(struct device *dev, void *data)
+{
+ struct mhi_ep_device *mhi_dev;
+ struct mhi_ep_cntrl *mhi_cntrl;
+ struct mhi_ep_chan *ul_chan, *dl_chan;
+
+ if (dev->bus != &mhi_ep_bus_type)
+ return 0;
+
+ mhi_dev = to_mhi_ep_device(dev);
+ mhi_cntrl = mhi_dev->mhi_cntrl;
+
+ /* Only destroy devices created for channels */
+ if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+ return 0;
+
+ ul_chan = mhi_dev->ul_chan;
+ dl_chan = mhi_dev->dl_chan;
+
+ if (ul_chan)
+ put_device(&ul_chan->mhi_dev->dev);
+
+ if (dl_chan)
+ put_device(&dl_chan->mhi_dev->dev);
+
+ dev_dbg(&mhi_cntrl->mhi_dev->dev, "Destroying device for chan:%s\n",
+ mhi_dev->name);
+
+ /* Notify the client and remove the device from MHI bus */
+ device_del(dev);
+ put_device(dev);
+
+ return 0;
+}
+
static int parse_ch_cfg(struct mhi_ep_cntrl *mhi_cntrl,
const struct mhi_ep_cntrl_config *config)
{
--
2.25.1
Add support for managing the Memory Mapped Input Output (MMIO) registers
of the MHI bus. All MHI operations are carried out using the MMIO
registers by both the host and the endpoint device.
The MMIO registers reside inside the endpoint device memory (fixed
location based on the platform) and the address is passed by the MHI EP
controller driver during its registration.
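The masked accessors added here follow the usual read-modify-write pattern:
clear the field selected by the mask, then OR in the shifted value. A minimal
userspace model of that arithmetic (over a plain variable rather than a real
MMIO region; only the helper's name is taken from the patch):

```c
#include <stdint.h>

/* Model of mhi_ep_mmio_masked_write(): update only the bits covered by
 * mask, leaving the rest of the register value untouched. */
static uint32_t masked_write(uint32_t regval, uint32_t mask,
			     uint32_t shift, uint32_t val)
{
	regval &= ~mask;		/* clear the field */
	regval |= (val << shift) & mask;	/* insert the new value */
	return regval;
}

/* Model of mhi_ep_mmio_masked_read(): extract the field back out. */
static uint32_t masked_read(uint32_t regval, uint32_t mask, uint32_t shift)
{
	return (regval & mask) >> shift;
}
```

The same pattern is what mhi_ep_mmio_clear_reset() relies on to clear only
the MHICTRL RESET bit while preserving the MHI state field.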
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/Makefile | 2 +-
drivers/bus/mhi/ep/internal.h | 36 ++++
drivers/bus/mhi/ep/main.c | 6 +-
drivers/bus/mhi/ep/mmio.c | 303 ++++++++++++++++++++++++++++++++++
include/linux/mhi_ep.h | 18 ++
5 files changed, 363 insertions(+), 2 deletions(-)
create mode 100644 drivers/bus/mhi/ep/mmio.c
diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
index 64e29252b608..a1555ae287ad 100644
--- a/drivers/bus/mhi/ep/Makefile
+++ b/drivers/bus/mhi/ep/Makefile
@@ -1,2 +1,2 @@
obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
-mhi_ep-y := main.o
+mhi_ep-y := main.o mmio.o
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index 7b164daf4332..39eeb5f384e2 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -91,6 +91,12 @@ struct mhi_generic_ctx {
__u64 wp __packed __aligned(4);
};
+enum mhi_ep_execenv {
+ MHI_EP_SBL_EE = 1,
+ MHI_EP_AMSS_EE = 2,
+ MHI_EP_UNRESERVED
+};
+
enum mhi_ep_ring_state {
RING_STATE_UINT = 0,
RING_STATE_IDLE,
@@ -155,4 +161,34 @@ struct mhi_ep_chan {
bool skip_td;
};
+/* MMIO related functions */
+void mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 *regval);
+void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
+void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset,
+ u32 mask, u32 shift, u32 val);
+int mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset,
+ u32 mask, u32 shift, u32 *regval);
+void mhi_ep_mmio_enable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_disable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_enable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_disable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_enable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id);
+void mhi_ep_mmio_disable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id);
+void mhi_ep_mmio_enable_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_read_chdb_status_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_mask_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_get_chc_base(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_get_erc_base(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_get_crc_base(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_get_ch_db(struct mhi_ep_ring *ring, u64 *wr_ptr);
+void mhi_ep_mmio_get_er_db(struct mhi_ep_ring *ring, u64 *wr_ptr);
+void mhi_ep_mmio_get_cmd_db(struct mhi_ep_ring *ring, u64 *wr_ptr);
+void mhi_ep_mmio_set_env(struct mhi_ep_cntrl *mhi_cntrl, u32 value);
+void mhi_ep_mmio_clear_reset(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_reset(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
+ bool *mhi_reset);
+void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
+
#endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index f0b5f49db95a..fddf75dfb9c7 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -209,7 +209,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
struct mhi_ep_device *mhi_dev;
int ret;
- if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
+ if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio)
return -EINVAL;
ret = parse_ch_cfg(mhi_cntrl, config);
@@ -222,6 +222,10 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
goto err_free_ch;
}
+ /* Set MHI version and AMSS EE before enumeration */
+ mhi_ep_mmio_write(mhi_cntrl, MHIVER, config->mhi_version);
+ mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
+
/* Set controller index */
mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
if (mhi_cntrl->index < 0) {
diff --git a/drivers/bus/mhi/ep/mmio.c b/drivers/bus/mhi/ep/mmio.c
new file mode 100644
index 000000000000..157ef1240f6f
--- /dev/null
+++ b/drivers/bus/mhi/ep/mmio.c
@@ -0,0 +1,303 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Linaro Ltd.
+ * Author: Manivannan Sadhasivam <[email protected]>
+ */
+
+#include <linux/bitfield.h>
+#include <linux/io.h>
+#include <linux/mhi_ep.h>
+
+#include "internal.h"
+
+void mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 *regval)
+{
+ *regval = readl(mhi_cntrl->mmio + offset);
+}
+
+void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val)
+{
+ writel(val, mhi_cntrl->mmio + offset);
+}
+
+void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 mask,
+ u32 shift, u32 val)
+{
+ u32 regval;
+
+ mhi_ep_mmio_read(mhi_cntrl, offset, &regval);
+ regval &= ~mask;
+ regval |= ((val << shift) & mask);
+ mhi_ep_mmio_write(mhi_cntrl, offset, regval);
+}
+
+int mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset,
+ u32 mask, u32 shift, u32 *regval)
+{
+ mhi_ep_mmio_read(dev, offset, regval);
+ *regval &= mask;
+ *regval >>= shift;
+
+ return 0;
+}
+
+void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
+ bool *mhi_reset)
+{
+ u32 regval;
+
+ mhi_ep_mmio_read(mhi_cntrl, MHICTRL, &regval);
+ *state = FIELD_GET(MHICTRL_MHISTATE_MASK, regval);
+ *mhi_reset = !!FIELD_GET(MHICTRL_RESET_MASK, regval);
+}
+
+static void mhi_ep_mmio_mask_set_chdb_int_a7(struct mhi_ep_cntrl *mhi_cntrl,
+ u32 chdb_id, bool enable)
+{
+ u32 chid_mask, chid_idx, chid_shft, val = 0;
+
+ chid_shft = chdb_id % 32;
+ chid_mask = BIT(chid_shft);
+ chid_idx = chdb_id / 32;
+
+ if (chid_idx >= MHI_MASK_ROWS_CH_EV_DB)
+ return;
+
+ if (enable)
+ val = 1;
+
+ mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CHDB_INT_MASK_A7_n(chid_idx),
+ chid_mask, chid_shft, val);
+ mhi_ep_mmio_read(mhi_cntrl, MHI_CHDB_INT_MASK_A7_n(chid_idx),
+ &mhi_cntrl->chdb[chid_idx].mask);
+}
+
+void mhi_ep_mmio_enable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id)
+{
+ mhi_ep_mmio_mask_set_chdb_int_a7(mhi_cntrl, chdb_id, true);
+}
+
+void mhi_ep_mmio_disable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id)
+{
+ mhi_ep_mmio_mask_set_chdb_int_a7(mhi_cntrl, chdb_id, false);
+}
+
+static void mhi_ep_mmio_set_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl, bool enable)
+{
+ u32 val = 0, i = 0;
+
+ if (enable)
+ val = MHI_CHDB_INT_MASK_A7_n_EN_ALL;
+
+ for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
+ mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_MASK_A7_n(i), val);
+ mhi_cntrl->chdb[i].mask = val;
+ }
+}
+
+void mhi_ep_mmio_enable_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ mhi_ep_mmio_set_chdb_interrupts(mhi_cntrl, true);
+}
+
+static void mhi_ep_mmio_mask_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ mhi_ep_mmio_set_chdb_interrupts(mhi_cntrl, false);
+}
+
+void mhi_ep_mmio_read_chdb_status_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ u32 i;
+
+ for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++)
+ mhi_ep_mmio_read(mhi_cntrl, MHI_CHDB_INT_STATUS_A7_n(i),
+ &mhi_cntrl->chdb[i].status);
+}
+
+static void mhi_ep_mmio_set_erdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl, bool enable)
+{
+ u32 val = 0, i;
+
+ if (enable)
+ val = MHI_ERDB_INT_MASK_A7_n_EN_ALL;
+
+ for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++)
+ mhi_ep_mmio_write(mhi_cntrl, MHI_ERDB_INT_MASK_A7_n(i), val);
+}
+
+static void mhi_ep_mmio_mask_erdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ mhi_ep_mmio_set_erdb_interrupts(mhi_cntrl, false);
+}
+
+void mhi_ep_mmio_enable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK_A7,
+ MHI_CTRL_MHICTRL_MASK,
+ MHI_CTRL_MHICTRL_SHFT, 1);
+}
+
+void mhi_ep_mmio_disable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK_A7,
+ MHI_CTRL_MHICTRL_MASK,
+ MHI_CTRL_MHICTRL_SHFT, 0);
+}
+
+void mhi_ep_mmio_enable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK_A7,
+ MHI_CTRL_CRDB_MASK,
+ MHI_CTRL_CRDB_SHFT, 1);
+}
+
+void mhi_ep_mmio_disable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CTRL_INT_MASK_A7,
+ MHI_CTRL_CRDB_MASK,
+ MHI_CTRL_CRDB_SHFT, 0);
+}
+
+void mhi_ep_mmio_mask_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ mhi_ep_mmio_disable_ctrl_interrupt(mhi_cntrl);
+ mhi_ep_mmio_disable_cmdb_interrupt(mhi_cntrl);
+ mhi_ep_mmio_mask_chdb_interrupts(mhi_cntrl);
+ mhi_ep_mmio_mask_erdb_interrupts(mhi_cntrl);
+}
+
+static void mhi_ep_mmio_clear_interrupts(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ u32 i = 0;
+
+ for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++)
+ mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_CLEAR_A7_n(i),
+ MHI_CHDB_INT_CLEAR_A7_n_CLEAR_ALL);
+
+ for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++)
+ mhi_ep_mmio_write(mhi_cntrl, MHI_ERDB_INT_CLEAR_A7_n(i),
+ MHI_ERDB_INT_CLEAR_A7_n_CLEAR_ALL);
+
+ mhi_ep_mmio_write(mhi_cntrl, MHI_CTRL_INT_CLEAR_A7,
+ MHI_CTRL_INT_MMIO_WR_CLEAR |
+ MHI_CTRL_INT_CRDB_CLEAR |
+ MHI_CTRL_INT_CRDB_MHICTRL_CLEAR);
+}
+
+void mhi_ep_mmio_get_chc_base(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ u32 ccabap_value = 0;
+
+ mhi_ep_mmio_read(mhi_cntrl, CCABAP_HIGHER, &ccabap_value);
+ mhi_cntrl->ch_ctx_host_pa = ccabap_value;
+ mhi_cntrl->ch_ctx_host_pa <<= 32;
+
+ mhi_ep_mmio_read(mhi_cntrl, CCABAP_LOWER, &ccabap_value);
+ mhi_cntrl->ch_ctx_host_pa |= ccabap_value;
+}
+
+void mhi_ep_mmio_get_erc_base(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ u32 ecabap_value = 0;
+
+ mhi_ep_mmio_read(mhi_cntrl, ECABAP_HIGHER, &ecabap_value);
+ mhi_cntrl->ev_ctx_host_pa = ecabap_value;
+ mhi_cntrl->ev_ctx_host_pa <<= 32;
+
+ mhi_ep_mmio_read(mhi_cntrl, ECABAP_LOWER, &ecabap_value);
+ mhi_cntrl->ev_ctx_host_pa |= ecabap_value;
+}
+
+void mhi_ep_mmio_get_crc_base(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ u32 crcbap_value = 0;
+
+ mhi_ep_mmio_read(mhi_cntrl, CRCBAP_HIGHER, &crcbap_value);
+ mhi_cntrl->cmd_ctx_host_pa = crcbap_value;
+ mhi_cntrl->cmd_ctx_host_pa <<= 32;
+
+ mhi_ep_mmio_read(mhi_cntrl, CRCBAP_LOWER, &crcbap_value);
+ mhi_cntrl->cmd_ctx_host_pa |= crcbap_value;
+}
+
+void mhi_ep_mmio_get_ch_db(struct mhi_ep_ring *ring, u64 *wr_ptr)
+{
+ struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+ u32 value = 0;
+
+ mhi_ep_mmio_read(mhi_cntrl, ring->db_offset_h, &value);
+ *wr_ptr = value;
+ *wr_ptr <<= 32;
+
+ mhi_ep_mmio_read(mhi_cntrl, ring->db_offset_l, &value);
+
+ *wr_ptr |= value;
+}
+
+void mhi_ep_mmio_get_er_db(struct mhi_ep_ring *ring, u64 *wr_ptr)
+{
+ struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+ u32 value = 0;
+
+ mhi_ep_mmio_read(mhi_cntrl, ring->db_offset_h, &value);
+ *wr_ptr = value;
+ *wr_ptr <<= 32;
+
+ mhi_ep_mmio_read(mhi_cntrl, ring->db_offset_l, &value);
+
+ *wr_ptr |= value;
+}
+
+void mhi_ep_mmio_get_cmd_db(struct mhi_ep_ring *ring, u64 *wr_ptr)
+{
+ struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+ u32 value = 0;
+
+ mhi_ep_mmio_read(mhi_cntrl, ring->db_offset_h, &value);
+ *wr_ptr = value;
+ *wr_ptr <<= 32;
+
+ mhi_ep_mmio_read(mhi_cntrl, ring->db_offset_l, &value);
+ *wr_ptr |= value;
+}
+
+void mhi_ep_mmio_set_env(struct mhi_ep_cntrl *mhi_cntrl, u32 value)
+{
+ mhi_ep_mmio_write(mhi_cntrl, BHI_EXECENV, value);
+}
+
+void mhi_ep_mmio_clear_reset(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ mhi_ep_mmio_masked_write(mhi_cntrl, MHICTRL, MHICTRL_RESET_MASK,
+ MHICTRL_RESET_SHIFT, 0);
+}
+
+void mhi_ep_mmio_reset(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ mhi_ep_mmio_write(mhi_cntrl, MHICTRL, 0);
+ mhi_ep_mmio_write(mhi_cntrl, MHISTATUS, 0);
+ mhi_ep_mmio_clear_interrupts(mhi_cntrl);
+}
+
+void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ u32 mhi_cfg;
+
+ mhi_ep_mmio_read(mhi_cntrl, CHDBOFF, &mhi_cntrl->chdb_offset);
+ mhi_ep_mmio_read(mhi_cntrl, ERDBOFF, &mhi_cntrl->erdb_offset);
+
+ mhi_ep_mmio_read(mhi_cntrl, MHICFG, &mhi_cfg);
+ mhi_cntrl->event_rings = FIELD_GET(MHICFG_NER_MASK, mhi_cfg);
+ mhi_cntrl->hw_event_rings = FIELD_GET(MHICFG_NHWER_MASK, mhi_cfg);
+
+ mhi_ep_mmio_reset(mhi_cntrl);
+}
+
+void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ u32 mhi_cfg;
+
+ mhi_ep_mmio_read(mhi_cntrl, MHICFG, &mhi_cfg);
+ mhi_cntrl->event_rings = FIELD_GET(MHICFG_NER_MASK, mhi_cfg);
+ mhi_cntrl->hw_event_rings = FIELD_GET(MHICFG_NHWER_MASK, mhi_cfg);
+}
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index bc72c197db4d..902c8febd856 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -59,6 +59,10 @@ struct mhi_ep_db_info {
* @mhi_event: Points to the event ring configurations table
* @mhi_cmd: Points to the command ring configurations table
* @sm: MHI Endpoint state machine
+ * @ch_ctx_host_pa: Physical address of host channel context data structure
+ * @ev_ctx_host_pa: Physical address of host event context data structure
+ * @cmd_ctx_host_pa: Physical address of host command context data structure
+ * @chdb: Array of channel doorbell interrupt info
* @raise_irq: CB function for raising IRQ to the host
* @alloc_addr: CB function for allocating memory in endpoint for storing host context
* @map_addr: CB function for mapping host context to endpoint
@@ -66,6 +70,10 @@ struct mhi_ep_db_info {
* @unmap_addr: CB function to unmap the host context in endpoint
* @mhi_state: MHI Endpoint state
* @max_chan: Maximum channels supported by the endpoint controller
+ * @event_rings: Number of event rings supported by the endpoint controller
+ * @hw_event_rings: Number of hardware event rings supported by the endpoint controller
+ * @chdb_offset: Channel doorbell offset set by the host
+ * @erdb_offset: Event ring doorbell offset set by the host
* @index: MHI Endpoint controller index
*/
struct mhi_ep_cntrl {
@@ -78,6 +86,12 @@ struct mhi_ep_cntrl {
struct mhi_ep_cmd *mhi_cmd;
struct mhi_ep_sm *sm;
+ u64 ch_ctx_host_pa;
+ u64 ev_ctx_host_pa;
+ u64 cmd_ctx_host_pa;
+
+ struct mhi_ep_db_info chdb[4];
+
void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl);
void __iomem *(*alloc_addr)(struct mhi_ep_cntrl *mhi_cntrl,
phys_addr_t *phys_addr, size_t size);
@@ -91,6 +105,10 @@ struct mhi_ep_cntrl {
enum mhi_state mhi_state;
u32 max_chan;
+ u32 event_rings;
+ u32 hw_event_rings;
+ u32 chdb_offset;
+ u32 erdb_offset;
int index;
};
--
2.25.1
Add support for managing the MHI rings. An MHI ring is a circular queue
of data structures used to pass information between the host and the
endpoint.
MHI supports three types of rings:
1. Transfer ring
2. Event ring
3. Command ring
All rings reside in the host memory and the MHI EP device maps them to
the device memory using hardware blocks like the PCIe iATU. The mapping
is handled in the MHI EP controller driver itself.
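The ring bookkeeping in this patch reduces to modular arithmetic over
element offsets. A minimal userspace sketch of the pointer-to-offset
conversion, the wrap-around index increment, and the free-element count used
by mhi_ep_ring_add_element() (an element size of 16 bytes is assumed here;
the real code uses sizeof(struct mhi_ep_ring_element)):

```c
#include <stdint.h>
#include <stddef.h>

#define EL_SIZE 16u	/* assumed sizeof(struct mhi_ep_ring_element) */

/* Mirror of mhi_ep_ring_addr2offset(): host ring pointer -> element index */
static size_t addr2offset(uint64_t rbase, uint64_t ptr)
{
	return (ptr - rbase) / EL_SIZE;
}

/* Mirror of mhi_ep_ring_inc_index(): advance with wrap-around */
static size_t inc_index(size_t rd_offset, size_t ring_size)
{
	return (rd_offset + 1) % ring_size;
}

/* Free elements between read and write offsets on the circular ring;
 * one slot is kept unused so a full ring is distinguishable from empty. */
static size_t num_free_elem(size_t rd, size_t wr, size_t ring_size)
{
	if (rd < wr)
		return (wr - rd) - 1;
	return ((ring_size - rd) + wr) - 1;
}
```

With rd == wr (an empty ring of 8 elements) the count comes out as 7,
matching the one-slot-reserved convention of circular queues.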
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/Makefile | 2 +-
drivers/bus/mhi/ep/internal.h | 23 +++
drivers/bus/mhi/ep/main.c | 53 +++++-
drivers/bus/mhi/ep/ring.c | 314 ++++++++++++++++++++++++++++++++++
include/linux/mhi_ep.h | 11 ++
5 files changed, 401 insertions(+), 2 deletions(-)
create mode 100644 drivers/bus/mhi/ep/ring.c
diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
index a1555ae287ad..7ba0e04801eb 100644
--- a/drivers/bus/mhi/ep/Makefile
+++ b/drivers/bus/mhi/ep/Makefile
@@ -1,2 +1,2 @@
obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
-mhi_ep-y := main.o mmio.o
+mhi_ep-y := main.o mmio.o ring.o
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index 39eeb5f384e2..a7a4e6934f7d 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -97,6 +97,18 @@ enum mhi_ep_execenv {
MHI_EP_UNRESERVED
};
+/* Transfer Ring Element macros */
+#define MHI_EP_TRE_PTR(ptr) (ptr)
+#define MHI_EP_TRE_DWORD0(len) (len & MHI_MAX_MTU)
+#define MHI_EP_TRE_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
+ | (ieot << 9) | (ieob << 8) | chain)
+#define MHI_EP_TRE_GET_PTR(tre) ((tre)->ptr)
+#define MHI_EP_TRE_GET_LEN(tre) ((tre)->dword[0] & 0xffff)
+#define MHI_EP_TRE_GET_CHAIN(tre) FIELD_GET(BIT(0), (tre)->dword[1])
+#define MHI_EP_TRE_GET_IEOB(tre) FIELD_GET(BIT(8), (tre)->dword[1])
+#define MHI_EP_TRE_GET_IEOT(tre) FIELD_GET(BIT(9), (tre)->dword[1])
+#define MHI_EP_TRE_GET_BEI(tre) FIELD_GET(BIT(10), (tre)->dword[1])
+
enum mhi_ep_ring_state {
RING_STATE_UINT = 0,
RING_STATE_IDLE,
@@ -161,6 +173,17 @@ struct mhi_ep_chan {
bool skip_td;
};
+/* MHI Ring related functions */
+void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id);
+void mhi_ep_ring_stop(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring);
+size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr);
+int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
+ union mhi_ep_ring_ctx *ctx);
+int mhi_ep_process_ring(struct mhi_ep_ring *ring);
+int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *element,
+ int evt_offset);
+void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring);
+
/* MMIO related functions */
void mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 *regval);
void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index fddf75dfb9c7..6d448d42f527 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -18,6 +18,42 @@
static DEFINE_IDA(mhi_ep_cntrl_ida);
+static void mhi_ep_ring_worker(struct work_struct *work)
+{
+ struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
+ struct mhi_ep_cntrl, ring_work);
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ struct mhi_ep_ring *ring;
+ struct list_head *cp, *q;
+ unsigned long flags;
+ int ret = 0;
+
+ /* Process the command ring first */
+ ret = mhi_ep_process_ring(&mhi_cntrl->mhi_cmd->ring);
+ if (ret) {
+ dev_err(dev, "Error processing command ring\n");
+ return;
+ }
+
+ spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
+ /* Process the channel rings now */
+ list_for_each_safe(cp, q, &mhi_cntrl->ch_db_list) {
+ ring = list_entry(cp, struct mhi_ep_ring, list);
+ list_del(cp);
+ ret = mhi_ep_process_ring(ring);
+ if (ret) {
+ dev_err(dev, "Error processing channel ring: %d\n", ring->ch_id);
+ goto err_unlock;
+ }
+
+ /* Re-enable channel interrupt */
+ mhi_ep_mmio_enable_chdb_a7(mhi_cntrl, ring->ch_id);
+ }
+
+err_unlock:
+ spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
+}
+
static void mhi_ep_release_device(struct device *dev)
{
struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -222,6 +258,17 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
goto err_free_ch;
}
+ INIT_WORK(&mhi_cntrl->ring_work, mhi_ep_ring_worker);
+
+ mhi_cntrl->ring_wq = alloc_ordered_workqueue("mhi_ep_ring_wq", WQ_HIGHPRI);
+ if (!mhi_cntrl->ring_wq) {
+ ret = -ENOMEM;
+ goto err_free_cmd;
+ }
+
+ INIT_LIST_HEAD(&mhi_cntrl->ch_db_list);
+ spin_lock_init(&mhi_cntrl->list_lock);
+
/* Set MHI version and AMSS EE before enumeration */
mhi_ep_mmio_write(mhi_cntrl, MHIVER, config->mhi_version);
mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
@@ -230,7 +277,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
if (mhi_cntrl->index < 0) {
ret = mhi_cntrl->index;
- goto err_free_cmd;
+ goto err_destroy_ring_wq;
}
/* Allocate the controller device */
@@ -259,6 +306,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
put_device(&mhi_dev->dev);
err_ida_free:
ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
+err_destroy_ring_wq:
+ destroy_workqueue(mhi_cntrl->ring_wq);
err_free_cmd:
kfree(mhi_cntrl->mhi_cmd);
err_free_ch:
@@ -272,6 +321,8 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
{
struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
+ destroy_workqueue(mhi_cntrl->ring_wq);
+
kfree(mhi_cntrl->mhi_cmd);
kfree(mhi_cntrl->mhi_chan);
diff --git a/drivers/bus/mhi/ep/ring.c b/drivers/bus/mhi/ep/ring.c
new file mode 100644
index 000000000000..763b8506d309
--- /dev/null
+++ b/drivers/bus/mhi/ep/ring.c
@@ -0,0 +1,314 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Linaro Ltd.
+ * Author: Manivannan Sadhasivam <[email protected]>
+ */
+
+#include <linux/mhi_ep.h>
+#include "internal.h"
+
+size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr)
+{
+ u64 rbase;
+
+ rbase = ring->ring_ctx->generic.rbase;
+
+ return (ptr - rbase) / sizeof(struct mhi_ep_ring_element);
+}
+
+static u32 mhi_ep_ring_num_elems(struct mhi_ep_ring *ring)
+{
+ return ring->ring_ctx->generic.rlen / sizeof(struct mhi_ep_ring_element);
+}
+
+void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring)
+{
+ ring->rd_offset++;
+ if (ring->rd_offset == ring->ring_size)
+ ring->rd_offset = 0;
+}
+
+static int __mhi_ep_cache_ring(struct mhi_ep_ring *ring, size_t end)
+{
+ struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ size_t start, copy_size;
+ struct mhi_ep_ring_element *ring_shadow;
+ phys_addr_t ring_shadow_phys;
+ size_t size = ring->ring_size * sizeof(struct mhi_ep_ring_element);
+ int ret;
+
+ /* No need to cache event rings */
+ if (ring->type == RING_TYPE_ER)
+ return 0;
+
+ /* No need to cache the ring if write pointer is unmodified */
+ if (ring->wr_offset == end)
+ return 0;
+
+ start = ring->wr_offset;
+
+ /* Allocate memory for host ring */
+ ring_shadow = mhi_cntrl->alloc_addr(mhi_cntrl, &ring_shadow_phys,
+ size);
+ if (!ring_shadow) {
+ dev_err(dev, "Failed to allocate memory for ring_shadow\n");
+ return -ENOMEM;
+ }
+
+ /* Map host ring */
+ ret = mhi_cntrl->map_addr(mhi_cntrl, ring_shadow_phys,
+ ring->ring_ctx->generic.rbase, size);
+ if (ret) {
+ dev_err(dev, "Failed to map ring_shadow\n");
+ goto err_ring_free;
+ }
+
+ if (start < end) {
+ copy_size = (end - start) * sizeof(struct mhi_ep_ring_element);
+ memcpy_fromio(&ring->ring_cache[start], &ring_shadow[start], copy_size);
+ } else {
+ copy_size = (ring->ring_size - start) * sizeof(struct mhi_ep_ring_element);
+ memcpy_fromio(&ring->ring_cache[start], &ring_shadow[start], copy_size);
+ if (end)
+ memcpy_fromio(&ring->ring_cache[0], &ring_shadow[0],
+ end * sizeof(struct mhi_ep_ring_element));
+ }
+
+ dev_dbg(dev, "Cached ring: start %zu end %zu size %zu", start, end, copy_size);
+
+ /* Now unmap and free host ring */
+ mhi_cntrl->unmap_addr(mhi_cntrl, ring_shadow_phys);
+ mhi_cntrl->free_addr(mhi_cntrl, ring_shadow_phys, ring_shadow, size);
+
+ return 0;
+
+err_ring_free:
+ mhi_cntrl->free_addr(mhi_cntrl, ring_shadow_phys, ring_shadow, size);
+
+ return ret;
+}
+
+static int mhi_ep_cache_ring(struct mhi_ep_ring *ring, u64 wr_ptr)
+{
+ size_t wr_offset;
+ int ret;
+
+ wr_offset = mhi_ep_ring_addr2offset(ring, wr_ptr);
+
+ /* Cache the host ring till write offset */
+ ret = __mhi_ep_cache_ring(ring, wr_offset);
+ if (ret)
+ return ret;
+
+ ring->wr_offset = wr_offset;
+
+ return 0;
+}
+
+static int mhi_ep_update_wr_offset(struct mhi_ep_ring *ring)
+{
+ u64 wr_ptr;
+
+ switch (ring->type) {
+ case RING_TYPE_CMD:
+ mhi_ep_mmio_get_cmd_db(ring, &wr_ptr);
+ break;
+ case RING_TYPE_ER:
+ mhi_ep_mmio_get_er_db(ring, &wr_ptr);
+ break;
+ case RING_TYPE_CH:
+ mhi_ep_mmio_get_ch_db(ring, &wr_ptr);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return mhi_ep_cache_ring(ring, wr_ptr);
+}
+
+static int mhi_ep_process_ring_element(struct mhi_ep_ring *ring, size_t offset)
+{
+ struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ struct mhi_ep_ring_element *el;
+ int ret = -ENODEV;
+
+ /* Get the element and invoke the respective callback */
+ el = &ring->ring_cache[offset];
+
+ if (ring->ring_cb)
+ ret = ring->ring_cb(ring, el);
+ else
+ dev_err(dev, "No callback registered for ring\n");
+
+ return ret;
+}
+
+int mhi_ep_process_ring(struct mhi_ep_ring *ring)
+{
+ struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ int ret = 0;
+
+ /* Event rings should not be processed */
+ if (ring->type == RING_TYPE_ER)
+ return -EINVAL;
+
+ dev_dbg(dev, "Processing ring of type: %d\n", ring->type);
+
+ /* Update the write offset for the ring */
+ ret = mhi_ep_update_wr_offset(ring);
+ if (ret) {
+ dev_err(dev, "Error updating write offset for ring\n");
+ return ret;
+ }
+
+ /* Sanity check to make sure there are elements in the ring */
+ if (ring->rd_offset == ring->wr_offset)
+ return 0;
+
+ /* Process channel ring first */
+ if (ring->type == RING_TYPE_CH) {
+ ret = mhi_ep_process_ring_element(ring, ring->rd_offset);
+ if (ret)
+ dev_err(dev, "Error processing ch ring element: %zu\n", ring->rd_offset);
+
+ return ret;
+ }
+
+ /* Process command ring now */
+ while (ring->rd_offset != ring->wr_offset) {
+ ret = mhi_ep_process_ring_element(ring, ring->rd_offset);
+ if (ret) {
+ dev_err(dev, "Error processing cmd ring element: %zu\n", ring->rd_offset);
+ return ret;
+ }
+
+ mhi_ep_ring_inc_index(ring);
+ }
+
+ return 0;
+}
+
+/* TODO: Support for adding multiple ring elements to the ring */
+int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el, int size)
+{
+ struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ struct mhi_ep_ring_element *ring_shadow;
+ size_t ring_size = ring->ring_size * sizeof(struct mhi_ep_ring_element);
+ phys_addr_t ring_shadow_phys;
+ size_t old_offset = 0;
+ u32 num_free_elem;
+ int ret;
+
+ ret = mhi_ep_update_wr_offset(ring);
+ if (ret) {
+ dev_err(dev, "Error updating write pointer\n");
+ return ret;
+ }
+
+ if (ring->rd_offset < ring->wr_offset)
+ num_free_elem = (ring->wr_offset - ring->rd_offset) - 1;
+ else
+ num_free_elem = ((ring->ring_size - ring->rd_offset) + ring->wr_offset) - 1;
+
+ /* Check if there is space in ring for adding at least an element */
+ if (num_free_elem < 1) {
+ dev_err(dev, "No space left in the ring\n");
+ return -ENOSPC;
+ }
+
+ old_offset = ring->rd_offset;
+ mhi_ep_ring_inc_index(ring);
+
+ dev_dbg(dev, "Adding an element to ring at offset (%zu)\n", ring->rd_offset);
+
+ /* Update rp in ring context */
+ ring->ring_ctx->generic.rp = (ring->rd_offset * sizeof(struct mhi_ep_ring_element)) +
+ ring->ring_ctx->generic.rbase;
+
+ /* Allocate memory for host ring */
+ ring_shadow = mhi_cntrl->alloc_addr(mhi_cntrl, &ring_shadow_phys, ring_size);
+ if (!ring_shadow) {
+ dev_err(dev, "failed to allocate ring_shadow\n");
+ return -ENOMEM;
+ }
+
+ /* Map host ring */
+ ret = mhi_cntrl->map_addr(mhi_cntrl, ring_shadow_phys,
+ ring->ring_ctx->generic.rbase, ring_size);
+ if (ret) {
+ dev_err(dev, "failed to map ring_shadow\n");
+ goto err_ring_free;
+ }
+
+ /* Copy the element to ring */
+ memcpy_toio(&ring_shadow[old_offset], el, sizeof(struct mhi_ep_ring_element));
+
+ /* Now unmap and free host ring */
+ mhi_cntrl->unmap_addr(mhi_cntrl, ring_shadow_phys);
+ mhi_cntrl->free_addr(mhi_cntrl, ring_shadow_phys, ring_shadow, ring_size);
+
+ return 0;
+
+err_ring_free:
+ mhi_cntrl->free_addr(mhi_cntrl, ring_shadow_phys, ring_shadow, ring_size);
+
+ return ret;
+}
+
+void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id)
+{
+ ring->state = RING_STATE_UINT;
+ ring->type = type;
+ if (ring->type == RING_TYPE_CMD) {
+ ring->db_offset_h = CRDB_HIGHER;
+ ring->db_offset_l = CRDB_LOWER;
+ } else if (ring->type == RING_TYPE_CH) {
+ ring->db_offset_h = CHDB_HIGHER_n(id);
+ ring->db_offset_l = CHDB_LOWER_n(id);
+ ring->ch_id = id;
+ } else if (ring->type == RING_TYPE_ER) {
+ ring->db_offset_h = ERDB_HIGHER_n(id);
+ ring->db_offset_l = ERDB_LOWER_n(id);
+ }
+}
+
+int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
+ union mhi_ep_ring_ctx *ctx)
+{
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ int ret;
+
+ ring->mhi_cntrl = mhi_cntrl;
+ ring->ring_ctx = ctx;
+ ring->ring_size = mhi_ep_ring_num_elems(ring);
+
+ /* During ring init, both rp and wp are equal */
+ ring->rd_offset = mhi_ep_ring_addr2offset(ring, ring->ring_ctx->generic.rp);
+ ring->wr_offset = mhi_ep_ring_addr2offset(ring, ring->ring_ctx->generic.rp);
+ ring->state = RING_STATE_IDLE;
+
+ /* Allocate ring cache memory for holding the copy of host ring */
+ ring->ring_cache = kcalloc(ring->ring_size, sizeof(struct mhi_ep_ring_element),
+ GFP_KERNEL);
+ if (!ring->ring_cache)
+ return -ENOMEM;
+
+ ret = mhi_ep_cache_ring(ring, ring->ring_ctx->generic.wp);
+ if (ret) {
+ dev_err(dev, "Failed to cache ring\n");
+ kfree(ring->ring_cache);
+ return ret;
+ }
+
+ return 0;
+}
+
+void mhi_ep_ring_stop(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring)
+{
+ ring->state = RING_STATE_UINT;
+ kfree(ring->ring_cache);
+}
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 902c8febd856..729f4b802b74 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -62,6 +62,11 @@ struct mhi_ep_db_info {
* @ch_ctx_host_pa: Physical address of host channel context data structure
* @ev_ctx_host_pa: Physical address of host event context data structure
* @cmd_ctx_host_pa: Physical address of host command context data structure
+ * @ring_wq: Dedicated workqueue for processing MHI rings
+ * @ring_work: Ring worker
+ * @ch_db_list: List of queued channel doorbells
+ * @st_transition_list: List of state transitions
+ * @list_lock: Lock for protecting state transition and channel doorbell lists
* @chdb: Array of channel doorbell interrupt info
* @raise_irq: CB function for raising IRQ to the host
* @alloc_addr: CB function for allocating memory in endpoint for storing host context
@@ -90,6 +95,12 @@ struct mhi_ep_cntrl {
u64 ev_ctx_host_pa;
u64 cmd_ctx_host_pa;
+ struct workqueue_struct *ring_wq;
+ struct work_struct ring_work;
+
+ struct list_head ch_db_list;
+ struct list_head st_transition_list;
+ spinlock_t list_lock;
struct mhi_ep_db_info chdb[4];
void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl);
--
2.25.1
Add support for sending events to the host over the MHI bus from the
endpoint. The following events are supported:
1. Transfer completion event
2. Command completion event
3. State change event
4. Execution Environment (EE) change event
An event is sent whenever an operation has been completed in the MHI EP
device. The event is sent using the MHI event ring and the host is
additionally notified using an IRQ if required.
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/internal.h | 4 ++
drivers/bus/mhi/ep/main.c | 126 ++++++++++++++++++++++++++++++++++
include/linux/mhi_ep.h | 2 +
3 files changed, 132 insertions(+)
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index a7a4e6934f7d..3551e673d99a 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -214,4 +214,8 @@ void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *s
void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
+/* MHI EP core functions */
+int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
+int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ep_execenv exec_env);
+
#endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 6d448d42f527..999784eadb65 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -18,6 +18,131 @@
static DEFINE_IDA(mhi_ep_cntrl_ida);
+static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 event_ring,
+ struct mhi_ep_ring_element *el)
+{
+ struct mhi_ep_ring *ring = &mhi_cntrl->mhi_event[event_ring].ring;
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ union mhi_ep_ring_ctx *ctx;
+ int ret;
+
+ mutex_lock(&mhi_cntrl->event_lock);
+ ctx = (union mhi_ep_ring_ctx *)&mhi_cntrl->ev_ctx_cache[event_ring];
+ if (ring->state == RING_STATE_UINT) {
+ ret = mhi_ep_ring_start(mhi_cntrl, ring, ctx);
+ if (ret) {
+ dev_err(dev, "Error starting event ring (%d)\n", event_ring);
+ goto err_unlock;
+ }
+ }
+
+ /* Add the element to the event ring */
+ ret = mhi_ep_ring_add_element(ring, el, 0);
+ if (ret) {
+ dev_err(dev, "Error adding element to event ring (%d)\n", event_ring);
+ goto err_unlock;
+ }
+
+ /* Ensure that the ring pointer gets updated in host memory before triggering IRQ */
+ wmb();
+
+ mutex_unlock(&mhi_cntrl->event_lock);
+
+ /*
+ * Raise IRQ to host only if the BEI flag is not set in TRE. Host might
+ * set this flag for interrupt moderation as per MHI protocol.
+ */
+ if (!MHI_EP_TRE_GET_BEI(el))
+ mhi_cntrl->raise_irq(mhi_cntrl);
+
+ return 0;
+
+err_unlock:
+ mutex_unlock(&mhi_cntrl->event_lock);
+
+ return ret;
+}
+
+static int mhi_ep_send_completion_event(struct mhi_ep_cntrl *mhi_cntrl,
+ struct mhi_ep_ring *ring, u32 len,
+ enum mhi_ev_ccs code)
+{
+ struct mhi_ep_ring_element event = {};
+ u32 er_index, tmp;
+
+ er_index = mhi_cntrl->ch_ctx_cache[ring->ch_id].erindex;
+ event.ptr = ring->ring_ctx->generic.rbase +
+ ring->rd_offset * sizeof(struct mhi_ep_ring_element);
+
+ tmp = event.dword[0];
+ tmp |= MHI_TRE_EV_DWORD0(code, len);
+ event.dword[0] = tmp;
+
+ tmp = event.dword[1];
+ tmp |= MHI_TRE_EV_DWORD1(ring->ch_id, MHI_PKT_TYPE_TX_EVENT);
+ event.dword[1] = tmp;
+
+ return mhi_ep_send_event(mhi_cntrl, er_index, &event);
+}
+
+int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state)
+{
+ struct mhi_ep_ring_element event = {};
+ u32 tmp;
+
+ tmp = event.dword[0];
+ tmp |= MHI_SC_EV_DWORD0(state);
+ event.dword[0] = tmp;
+
+ tmp = event.dword[1];
+ tmp |= MHI_SC_EV_DWORD1(MHI_PKT_TYPE_STATE_CHANGE_EVENT);
+ event.dword[1] = tmp;
+
+ return mhi_ep_send_event(mhi_cntrl, 0, &event);
+}
+
+int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ep_execenv exec_env)
+{
+ struct mhi_ep_ring_element event = {};
+ u32 tmp;
+
+ tmp = event.dword[0];
+ tmp |= MHI_EE_EV_DWORD0(exec_env);
+ event.dword[0] = tmp;
+
+ tmp = event.dword[1];
+ tmp |= MHI_SC_EV_DWORD1(MHI_PKT_TYPE_EE_EVENT);
+ event.dword[1] = tmp;
+
+ return mhi_ep_send_event(mhi_cntrl, 0, &event);
+}
+
+static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ev_ccs code)
+{
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ struct mhi_ep_ring_element event = {};
+ u32 tmp;
+
+ if (code > MHI_EV_CC_BAD_TRE) {
+ dev_err(dev, "Invalid command completion code: %d\n", code);
+ return -EINVAL;
+ }
+
+ event.ptr = mhi_cntrl->cmd_ctx_cache->rbase +
+ (mhi_cntrl->mhi_cmd->ring.rd_offset * sizeof(struct mhi_ep_ring_element));
+
+ tmp = event.dword[0];
+ tmp |= MHI_CC_EV_DWORD0(code);
+ event.dword[0] = tmp;
+
+ tmp = event.dword[1];
+ tmp |= MHI_CC_EV_DWORD1(MHI_PKT_TYPE_CMD_COMPLETION_EVENT);
+ event.dword[1] = tmp;
+
+ return mhi_ep_send_event(mhi_cntrl, 0, &event);
+}
+
static void mhi_ep_ring_worker(struct work_struct *work)
{
struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
@@ -268,6 +393,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
INIT_LIST_HEAD(&mhi_cntrl->ch_db_list);
spin_lock_init(&mhi_cntrl->list_lock);
+ mutex_init(&mhi_cntrl->event_lock);
/* Set MHI version and AMSS EE before enumeration */
mhi_ep_mmio_write(mhi_cntrl, MHIVER, config->mhi_version);
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 729f4b802b74..323cd3319b13 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -67,6 +67,7 @@ struct mhi_ep_db_info {
* @ch_db_list: List of queued channel doorbells
* @st_transition_list: List of state transitions
* @list_lock: Lock for protecting state transition and channel doorbell lists
+ * @event_lock: Lock for protecting event rings
* @chdb: Array of channel doorbell interrupt info
* @raise_irq: CB function for raising IRQ to the host
* @alloc_addr: CB function for allocating memory in endpoint for storing host context
@@ -101,6 +102,7 @@ struct mhi_ep_cntrl {
struct list_head ch_db_list;
struct list_head st_transition_list;
spinlock_t list_lock;
+ struct mutex event_lock;
struct mhi_ep_db_info chdb[4];
void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl);
--
2.25.1
Add support for managing the MHI state machine by controlling the state
transitions. Only the following MHI state transitions are supported:
1. Ready state
2. M0 state
3. M3 state
4. SYS_ERR state
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/Makefile | 2 +-
drivers/bus/mhi/ep/internal.h | 11 +++
drivers/bus/mhi/ep/main.c | 49 +++++++++-
drivers/bus/mhi/ep/sm.c | 175 ++++++++++++++++++++++++++++++++++
include/linux/mhi_ep.h | 6 ++
5 files changed, 241 insertions(+), 2 deletions(-)
create mode 100644 drivers/bus/mhi/ep/sm.c
diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
index 7ba0e04801eb..aad85f180b70 100644
--- a/drivers/bus/mhi/ep/Makefile
+++ b/drivers/bus/mhi/ep/Makefile
@@ -1,2 +1,2 @@
obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
-mhi_ep-y := main.o mmio.o ring.o
+mhi_ep-y := main.o mmio.o ring.o sm.o
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index 3551e673d99a..ec508201c5c0 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -158,6 +158,11 @@ struct mhi_ep_event {
struct mhi_ep_ring ring;
};
+struct mhi_ep_state_transition {
+ struct list_head node;
+ enum mhi_state state;
+};
+
struct mhi_ep_chan {
char *name;
struct mhi_ep_device *mhi_dev;
@@ -217,5 +222,11 @@ void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
/* MHI EP core functions */
int mhi_ep_send_state_change_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state state);
int mhi_ep_send_ee_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_ep_execenv exec_env);
+bool mhi_ep_check_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state cur_mhi_state,
+ enum mhi_state mhi_state);
+int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_state);
+int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
+int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
+int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
#endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 999784eadb65..f9b80fccfe70 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -179,6 +179,42 @@ static void mhi_ep_ring_worker(struct work_struct *work)
spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
}
+static void mhi_ep_state_worker(struct work_struct *work)
+{
+ struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ struct mhi_ep_state_transition *itr, *tmp;
+ unsigned long flags;
+ LIST_HEAD(head);
+ int ret;
+
+ spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
+ list_splice_tail_init(&mhi_cntrl->st_transition_list, &head);
+ spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
+
+ list_for_each_entry_safe(itr, tmp, &head, node) {
+ list_del(&itr->node);
+ dev_dbg(dev, "Handling MHI state transition to %s\n",
+ TO_MHI_STATE_STR(itr->state));
+
+ switch (itr->state) {
+ case MHI_STATE_M0:
+ ret = mhi_ep_set_m0_state(mhi_cntrl);
+ if (ret)
+ dev_err(dev, "Failed to transition to M0 state\n");
+ break;
+ case MHI_STATE_M3:
+ ret = mhi_ep_set_m3_state(mhi_cntrl);
+ if (ret)
+ dev_err(dev, "Failed to transition to M3 state\n");
+ break;
+ default:
+ dev_err(dev, "Invalid MHI state transition: %d\n", itr->state);
+ break;
+ }
+ }
+}
+
static void mhi_ep_release_device(struct device *dev)
{
struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -384,6 +420,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
}
INIT_WORK(&mhi_cntrl->ring_work, mhi_ep_ring_worker);
+ INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
mhi_cntrl->ring_wq = alloc_ordered_workqueue("mhi_ep_ring_wq", WQ_HIGHPRI);
if (!mhi_cntrl->ring_wq) {
@@ -391,7 +428,14 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
goto err_free_cmd;
}
+ mhi_cntrl->state_wq = alloc_ordered_workqueue("mhi_ep_state_wq", WQ_HIGHPRI);
+ if (!mhi_cntrl->state_wq) {
+ ret = -ENOMEM;
+ goto err_destroy_ring_wq;
+ }
+
INIT_LIST_HEAD(&mhi_cntrl->ch_db_list);
+ INIT_LIST_HEAD(&mhi_cntrl->st_transition_list);
spin_lock_init(&mhi_cntrl->list_lock);
mutex_init(&mhi_cntrl->event_lock);
@@ -403,7 +447,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
if (mhi_cntrl->index < 0) {
ret = mhi_cntrl->index;
- goto err_destroy_ring_wq;
+ goto err_destroy_state_wq;
}
/* Allocate the controller device */
@@ -432,6 +476,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
put_device(&mhi_dev->dev);
err_ida_free:
ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
+err_destroy_state_wq:
+ destroy_workqueue(mhi_cntrl->state_wq);
err_destroy_ring_wq:
destroy_workqueue(mhi_cntrl->ring_wq);
err_free_cmd:
@@ -447,6 +493,7 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
{
struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
+ destroy_workqueue(mhi_cntrl->state_wq);
destroy_workqueue(mhi_cntrl->ring_wq);
kfree(mhi_cntrl->mhi_cmd);
diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
new file mode 100644
index 000000000000..95cec5c627b4
--- /dev/null
+++ b/drivers/bus/mhi/ep/sm.c
@@ -0,0 +1,175 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Linaro Ltd.
+ * Author: Manivannan Sadhasivam <[email protected]>
+ */
+
+#include <linux/delay.h>
+#include <linux/errno.h>
+#include <linux/mhi_ep.h>
+#include "internal.h"
+
+bool __must_check mhi_ep_check_mhi_state(struct mhi_ep_cntrl *mhi_cntrl,
+ enum mhi_state cur_mhi_state,
+ enum mhi_state mhi_state)
+{
+ bool valid = false;
+
+ switch (mhi_state) {
+ case MHI_STATE_READY:
+ valid = (cur_mhi_state == MHI_STATE_RESET);
+ break;
+ case MHI_STATE_M0:
+ valid = (cur_mhi_state == MHI_STATE_READY ||
+ cur_mhi_state == MHI_STATE_M3);
+ break;
+ case MHI_STATE_M3:
+ valid = (cur_mhi_state == MHI_STATE_M0);
+ break;
+ case MHI_STATE_SYS_ERR:
+ /* Transition to SYS_ERR state is allowed all the time */
+ valid = true;
+ break;
+ default:
+ break;
+ }
+
+ return valid;
+}
+
+int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_state)
+{
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+
+ if (!mhi_ep_check_mhi_state(mhi_cntrl, mhi_cntrl->mhi_state, mhi_state)) {
+ dev_err(dev, "MHI state change to %s from %s is not allowed!\n",
+ TO_MHI_STATE_STR(mhi_state),
+ TO_MHI_STATE_STR(mhi_cntrl->mhi_state));
+ return -EACCES;
+ }
+
+ switch (mhi_state) {
+ case MHI_STATE_READY:
+ mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
+ MHISTATUS_READY_MASK,
+ MHISTATUS_READY_SHIFT, 1);
+
+ mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
+ MHISTATUS_MHISTATE_MASK,
+ MHISTATUS_MHISTATE_SHIFT, mhi_state);
+ break;
+ case MHI_STATE_SYS_ERR:
+ mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
+ MHISTATUS_SYSERR_MASK,
+ MHISTATUS_SYSERR_SHIFT, 1);
+
+ mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
+ MHISTATUS_MHISTATE_MASK,
+ MHISTATUS_MHISTATE_SHIFT, mhi_state);
+ break;
+ case MHI_STATE_M1:
+ case MHI_STATE_M2:
+ dev_err(dev, "MHI state (%s) not supported\n", TO_MHI_STATE_STR(mhi_state));
+ return -EOPNOTSUPP;
+ case MHI_STATE_M0:
+ case MHI_STATE_M3:
+ mhi_ep_mmio_masked_write(mhi_cntrl, MHISTATUS,
+ MHISTATUS_MHISTATE_MASK,
+ MHISTATUS_MHISTATE_SHIFT, mhi_state);
+ break;
+ default:
+ dev_err(dev, "Invalid MHI state (%d)\n", mhi_state);
+ return -EINVAL;
+ }
+
+ mhi_cntrl->mhi_state = mhi_state;
+
+ return 0;
+}
+
+int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ enum mhi_state old_state;
+ int ret;
+
+ spin_lock_bh(&mhi_cntrl->state_lock);
+ old_state = mhi_cntrl->mhi_state;
+
+ ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
+ if (ret) {
+ spin_unlock_bh(&mhi_cntrl->state_lock);
+ return ret;
+ }
+
+ spin_unlock_bh(&mhi_cntrl->state_lock);
+ /* Signal host that the device moved to M0 */
+ ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M0);
+ if (ret) {
+ dev_err(dev, "Failed sending M0 state change event: %d\n", ret);
+ return ret;
+ }
+
+ if (old_state == MHI_STATE_READY) {
+ /* Allow the host to process state change event */
+ mdelay(1);
+
+ /* Send AMSS EE event to host */
+ ret = mhi_ep_send_ee_event(mhi_cntrl, MHI_EP_AMSS_EE);
+ if (ret) {
+ dev_err(dev, "Failed sending AMSS EE event: %d\n", ret);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ int ret;
+
+ spin_lock_bh(&mhi_cntrl->state_lock);
+ ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
+ if (ret) {
+ spin_unlock_bh(&mhi_cntrl->state_lock);
+ return ret;
+ }
+
+ spin_unlock_bh(&mhi_cntrl->state_lock);
+
+ /* Signal host that the device moved to M3 */
+ ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M3);
+ if (ret) {
+ dev_err(dev, "Failed sending M3 state change event: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ enum mhi_state mhi_state;
+ int ret, is_ready;
+
+ spin_lock_bh(&mhi_cntrl->state_lock);
+ /* Ensure that the MHISTATUS is set to RESET by host */
+ mhi_ep_mmio_masked_read(mhi_cntrl, MHISTATUS, MHISTATUS_MHISTATE_MASK,
+ MHISTATUS_MHISTATE_SHIFT, &mhi_state);
+ mhi_ep_mmio_masked_read(mhi_cntrl, MHISTATUS, MHISTATUS_READY_MASK,
+ MHISTATUS_READY_SHIFT, &is_ready);
+
+ if (mhi_state != MHI_STATE_RESET || is_ready) {
+ dev_err(dev, "READY state transition failed. MHI host not in RESET state\n");
+ spin_unlock_bh(&mhi_cntrl->state_lock);
+ return -EFAULT;
+ }
+
+ ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_READY);
+ spin_unlock_bh(&mhi_cntrl->state_lock);
+
+ return ret;
+}
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 323cd3319b13..ea7435d0e609 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -62,11 +62,14 @@ struct mhi_ep_db_info {
* @ch_ctx_host_pa: Physical address of host channel context data structure
* @ev_ctx_host_pa: Physical address of host event context data structure
* @cmd_ctx_host_pa: Physical address of host command context data structure
+ * @state_wq: Dedicated workqueue for handling MHI state transitions
* @ring_wq: Dedicated workqueue for processing MHI rings
+ * @state_work: State transition worker
* @ring_work: Ring worker
* @ch_db_list: List of queued channel doorbells
* @st_transition_list: List of state transitions
* @list_lock: Lock for protecting state transition and channel doorbell lists
+ * @state_lock: Lock for protecting state transitions
* @event_lock: Lock for protecting event rings
* @chdb: Array of channel doorbell interrupt info
* @raise_irq: CB function for raising IRQ to the host
@@ -96,12 +99,15 @@ struct mhi_ep_cntrl {
u64 ev_ctx_host_pa;
u64 cmd_ctx_host_pa;
+ struct workqueue_struct *state_wq;
struct workqueue_struct *ring_wq;
+ struct work_struct state_work;
struct work_struct ring_work;
struct list_head ch_db_list;
struct list_head st_transition_list;
spinlock_t list_lock;
+ spinlock_t state_lock;
struct mutex event_lock;
struct mhi_ep_db_info chdb[4];
--
2.25.1
Add support for processing MHI endpoint interrupts such as the control
interrupt, command interrupt, and channel interrupt from the host.
These interrupts are generated in the endpoint device whenever the host
writes to the corresponding doorbell registers. The doorbell logic
is handled internally by the hardware.
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/main.c | 114 +++++++++++++++++++++++++++++++++++++-
include/linux/mhi_ep.h | 2 +
2 files changed, 114 insertions(+), 2 deletions(-)
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index f9b80fccfe70..70740358d329 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -179,6 +179,56 @@ static void mhi_ep_ring_worker(struct work_struct *work)
spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
}
+static void mhi_ep_queue_channel_db(struct mhi_ep_cntrl *mhi_cntrl,
+ unsigned long ch_int, u32 ch_idx)
+{
+ struct mhi_ep_ring *ring;
+ unsigned int i;
+ u32 ch_id;
+
+ for_each_set_bit(i, &ch_int, 32) {
+ /* Channel index varies for each register: 0, 32, 64, 96 */
+ ch_id = ch_idx + i;
+ ring = &mhi_cntrl->mhi_chan[ch_id].ring;
+
+ spin_lock(&mhi_cntrl->list_lock);
+ list_add(&ring->list, &mhi_cntrl->ch_db_list);
+ spin_unlock(&mhi_cntrl->list_lock);
+
+ /*
+ * Disable the channel interrupt here and enable it once
+ * the current interrupt has been serviced
+ */
+ mhi_ep_mmio_disable_chdb_a7(mhi_cntrl, ch_id);
+ queue_work(mhi_cntrl->ring_wq, &mhi_cntrl->ring_work);
+ }
+}
+
+/*
+ * Channel interrupt statuses are contained in four 32-bit registers.
+ * To check all interrupts, loop through each register and then check
+ * for bits set.
+ */
+static void mhi_ep_check_channel_interrupt(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ u32 ch_int, ch_idx;
+ int i;
+
+ mhi_ep_mmio_read_chdb_status_interrupts(mhi_cntrl);
+
+ for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
+ ch_idx = i * MHI_MASK_CH_EV_LEN;
+
+ /* Only process channel interrupt if the mask is enabled */
+ ch_int = (mhi_cntrl->chdb[i].status & mhi_cntrl->chdb[i].mask);
+ if (ch_int) {
+ dev_dbg(dev, "Processing channel doorbell interrupt\n");
+ mhi_ep_queue_channel_db(mhi_cntrl, ch_int, ch_idx);
+ mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_CLEAR_A7_n(i),
+ mhi_cntrl->chdb[i].status);
+ }
+ }
+}
+
static void mhi_ep_state_worker(struct work_struct *work)
{
struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, state_work);
@@ -215,6 +265,54 @@ static void mhi_ep_state_worker(struct work_struct *work)
}
}
+static void mhi_ep_process_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl,
+ enum mhi_state state)
+{
+ struct mhi_ep_state_transition *item;
+
+ item = kmalloc(sizeof(*item), GFP_ATOMIC);
+ if (!item)
+ return;
+
+ item->state = state;
+ spin_lock(&mhi_cntrl->list_lock);
+ list_add_tail(&item->node, &mhi_cntrl->st_transition_list);
+ spin_unlock(&mhi_cntrl->list_lock);
+
+ queue_work(mhi_cntrl->state_wq, &mhi_cntrl->state_work);
+}
+
+/*
+ * Interrupt handler that services interrupts raised by the host writing to
+ * MHICTRL and Command ring doorbell (CRDB) registers for state change and
+ * channel interrupts.
+ */
+static irqreturn_t mhi_ep_irq(int irq, void *data)
+{
+ struct mhi_ep_cntrl *mhi_cntrl = data;
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ enum mhi_state state;
+ u32 int_value = 0;
+ bool mhi_reset;
+
+ /* Acknowledge the interrupts */
+ mhi_ep_mmio_read(mhi_cntrl, MHI_CTRL_INT_STATUS_A7, &int_value);
+ mhi_ep_mmio_write(mhi_cntrl, MHI_CTRL_INT_CLEAR_A7, int_value);
+
+ /* Check for ctrl interrupt */
+ if (FIELD_GET(MHI_CTRL_INT_STATUS_A7_MSK, int_value)) {
+ mhi_ep_mmio_get_mhi_state(mhi_cntrl, &state, &mhi_reset);
+ dev_dbg(dev, "Processing ctrl interrupt with %s state\n", TO_MHI_STATE_STR(state));
+ mhi_ep_process_ctrl_interrupt(mhi_cntrl, state);
+ }
+
+ /* Check for command doorbell interrupt */
+ if (FIELD_GET(MHI_CTRL_INT_STATUS_CRDB_MSK, int_value)) {
+ dev_dbg(dev, "Processing command doorbell interrupt\n");
+ queue_work(mhi_cntrl->ring_wq, &mhi_cntrl->ring_work);
+ }
+
+ /* Check for channel interrupts */
+ mhi_ep_check_channel_interrupt(mhi_cntrl);
+
+ return IRQ_HANDLED;
+}
+
static void mhi_ep_release_device(struct device *dev)
{
struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -406,7 +504,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
struct mhi_ep_device *mhi_dev;
int ret;
- if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio)
+ if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio || !mhi_cntrl->irq)
return -EINVAL;
ret = parse_ch_cfg(mhi_cntrl, config);
@@ -450,12 +548,20 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
goto err_destroy_state_wq;
}
+ irq_set_status_flags(mhi_cntrl->irq, IRQ_NOAUTOEN);
+ ret = request_irq(mhi_cntrl->irq, mhi_ep_irq, IRQF_TRIGGER_HIGH,
+ "doorbell_irq", mhi_cntrl);
+ if (ret) {
+ dev_err(mhi_cntrl->cntrl_dev, "Failed to request Doorbell IRQ: %d\n", ret);
+ goto err_ida_free;
+ }
+
/* Allocate the controller device */
mhi_dev = mhi_ep_alloc_device(mhi_cntrl);
if (IS_ERR(mhi_dev)) {
dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate controller device\n");
ret = PTR_ERR(mhi_dev);
- goto err_ida_free;
+ goto err_free_irq;
}
mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
@@ -474,6 +580,8 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
err_release_dev:
put_device(&mhi_dev->dev);
+err_free_irq:
+ free_irq(mhi_cntrl->irq, mhi_cntrl);
err_ida_free:
ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
err_destroy_state_wq:
@@ -496,6 +604,8 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
destroy_workqueue(mhi_cntrl->state_wq);
destroy_workqueue(mhi_cntrl->ring_wq);
+ free_irq(mhi_cntrl->irq, mhi_cntrl);
+
kfree(mhi_cntrl->mhi_cmd);
kfree(mhi_cntrl->mhi_chan);
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index ea7435d0e609..7a665cd55579 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -84,6 +84,7 @@ struct mhi_ep_db_info {
* @chdb_offset: Channel doorbell offset set by the host
* @erdb_offset: Event ring doorbell offset set by the host
* @index: MHI Endpoint controller index
+ * @irq: IRQ used by the endpoint controller
*/
struct mhi_ep_cntrl {
struct device *cntrl_dev;
@@ -129,6 +130,7 @@ struct mhi_ep_cntrl {
u32 chdb_offset;
u32 erdb_offset;
int index;
+ int irq;
};
/**
--
2.25.1
Add support for MHI endpoint power_up that includes initializing the MMIO
and rings, caching the host MHI registers, and setting the MHI state to M0.
After registering the MHI EP controller, the stack has to be powered up
before use.
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/main.c | 228 ++++++++++++++++++++++++++++++++++++++
include/linux/mhi_ep.h | 28 +++++
2 files changed, 256 insertions(+)
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 70740358d329..5f62b6fb6dbc 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -16,6 +16,9 @@
#include <linux/module.h>
#include "internal.h"
+#define MHI_SUSPEND_MIN 100
+#define MHI_SUSPEND_TIMEOUT 600
+
static DEFINE_IDA(mhi_ep_cntrl_ida);
static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 event_ring,
@@ -143,6 +146,175 @@ static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_e
return mhi_ep_send_event(mhi_cntrl, 0, &event);
}
+static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ int ret;
+
+ /* Update the number of event rings (NER) programmed by the host */
+ mhi_ep_mmio_update_ner(mhi_cntrl);
+
+ dev_dbg(dev, "Number of Event rings: %d, HW Event rings: %d\n",
+ mhi_cntrl->event_rings, mhi_cntrl->hw_event_rings);
+
+ mhi_cntrl->ch_ctx_host_size = sizeof(struct mhi_chan_ctxt) *
+ mhi_cntrl->max_chan;
+ mhi_cntrl->ev_ctx_host_size = sizeof(struct mhi_event_ctxt) *
+ mhi_cntrl->event_rings;
+ mhi_cntrl->cmd_ctx_host_size = sizeof(struct mhi_cmd_ctxt);
+
+ /* Get the channel context base pointer from host */
+ mhi_ep_mmio_get_chc_base(mhi_cntrl);
+
+ /* Allocate memory for caching host channel context */
+ mhi_cntrl->ch_ctx_cache = mhi_cntrl->alloc_addr(mhi_cntrl, &mhi_cntrl->ch_ctx_cache_phys,
+ mhi_cntrl->ch_ctx_host_size);
+ if (!mhi_cntrl->ch_ctx_cache) {
+ dev_err(dev, "Failed to allocate ch_ctx_cache memory\n");
+ return -ENOMEM;
+ }
+
+ /* Map the host channel context */
+ ret = mhi_cntrl->map_addr(mhi_cntrl, mhi_cntrl->ch_ctx_cache_phys,
+ mhi_cntrl->ch_ctx_host_pa, mhi_cntrl->ch_ctx_host_size);
+ if (ret) {
+ dev_err(dev, "Failed to map ch_ctx_cache\n");
+ goto err_ch_ctx;
+ }
+
+ /* Get the event context base pointer from host */
+ mhi_ep_mmio_get_erc_base(mhi_cntrl);
+
+ /* Allocate memory for caching host event context */
+ mhi_cntrl->ev_ctx_cache = mhi_cntrl->alloc_addr(mhi_cntrl, &mhi_cntrl->ev_ctx_cache_phys,
+ mhi_cntrl->ev_ctx_host_size);
+ if (!mhi_cntrl->ev_ctx_cache) {
+ dev_err(dev, "Failed to allocate ev_ctx_cache memory\n");
+ ret = -ENOMEM;
+ goto err_ch_ctx_map;
+ }
+
+ /* Map the host event context */
+ ret = mhi_cntrl->map_addr(mhi_cntrl, mhi_cntrl->ev_ctx_cache_phys,
+ mhi_cntrl->ev_ctx_host_pa, mhi_cntrl->ev_ctx_host_size);
+ if (ret) {
+ dev_err(dev, "Failed to map ev_ctx_cache\n");
+ goto err_ev_ctx;
+ }
+
+ /* Get the command context base pointer from host */
+ mhi_ep_mmio_get_crc_base(mhi_cntrl);
+
+ /* Allocate memory for caching host command context */
+ mhi_cntrl->cmd_ctx_cache = mhi_cntrl->alloc_addr(mhi_cntrl, &mhi_cntrl->cmd_ctx_cache_phys,
+ mhi_cntrl->cmd_ctx_host_size);
+ if (!mhi_cntrl->cmd_ctx_cache) {
+ dev_err(dev, "Failed to allocate cmd_ctx_cache memory\n");
+ ret = -ENOMEM;
+ goto err_ev_ctx_map;
+ }
+
+ /* Map the host command context */
+ ret = mhi_cntrl->map_addr(mhi_cntrl, mhi_cntrl->cmd_ctx_cache_phys,
+ mhi_cntrl->cmd_ctx_host_pa, mhi_cntrl->cmd_ctx_host_size);
+ if (ret) {
+ dev_err(dev, "Failed to map cmd_ctx_cache\n");
+ goto err_cmd_ctx;
+ }
+
+ /* Initialize command ring */
+ ret = mhi_ep_ring_start(mhi_cntrl, &mhi_cntrl->mhi_cmd->ring,
+ (union mhi_ep_ring_ctx *)mhi_cntrl->cmd_ctx_cache);
+ if (ret) {
+ dev_err(dev, "Failed to start the command ring\n");
+ goto err_cmd_ctx_map;
+ }
+
+ return ret;
+
+err_cmd_ctx_map:
+ mhi_cntrl->unmap_addr(mhi_cntrl, mhi_cntrl->cmd_ctx_cache_phys);
+
+err_cmd_ctx:
+ mhi_cntrl->free_addr(mhi_cntrl, mhi_cntrl->cmd_ctx_cache_phys,
+ mhi_cntrl->cmd_ctx_cache, mhi_cntrl->cmd_ctx_host_size);
+
+err_ev_ctx_map:
+ mhi_cntrl->unmap_addr(mhi_cntrl, mhi_cntrl->ev_ctx_cache_phys);
+
+err_ev_ctx:
+ mhi_cntrl->free_addr(mhi_cntrl, mhi_cntrl->ev_ctx_cache_phys,
+ mhi_cntrl->ev_ctx_cache, mhi_cntrl->ev_ctx_host_size);
+
+err_ch_ctx_map:
+ mhi_cntrl->unmap_addr(mhi_cntrl, mhi_cntrl->ch_ctx_cache_phys);
+
+err_ch_ctx:
+ mhi_cntrl->free_addr(mhi_cntrl, mhi_cntrl->ch_ctx_cache_phys,
+ mhi_cntrl->ch_ctx_cache, mhi_cntrl->ch_ctx_host_size);
+
+ return ret;
+}
+
+static void mhi_ep_free_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ mhi_cntrl->unmap_addr(mhi_cntrl, mhi_cntrl->cmd_ctx_cache_phys);
+ mhi_cntrl->free_addr(mhi_cntrl, mhi_cntrl->cmd_ctx_cache_phys,
+ mhi_cntrl->cmd_ctx_cache, mhi_cntrl->cmd_ctx_host_size);
+ mhi_cntrl->unmap_addr(mhi_cntrl, mhi_cntrl->ev_ctx_cache_phys);
+ mhi_cntrl->free_addr(mhi_cntrl, mhi_cntrl->ev_ctx_cache_phys,
+ mhi_cntrl->ev_ctx_cache, mhi_cntrl->ev_ctx_host_size);
+ mhi_cntrl->unmap_addr(mhi_cntrl, mhi_cntrl->ch_ctx_cache_phys);
+ mhi_cntrl->free_addr(mhi_cntrl, mhi_cntrl->ch_ctx_cache_phys,
+ mhi_cntrl->ch_ctx_cache, mhi_cntrl->ch_ctx_host_size);
+}
+
+static void mhi_ep_enable_int(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ mhi_ep_mmio_enable_chdb_interrupts(mhi_cntrl);
+ mhi_ep_mmio_enable_ctrl_interrupt(mhi_cntrl);
+ mhi_ep_mmio_enable_cmdb_interrupt(mhi_cntrl);
+}
+
+static int mhi_ep_enable(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ enum mhi_state state;
+ u32 max_cnt = 0;
+ bool mhi_reset;
+ int ret;
+
+ /* Wait for Host to set the M0 state */
+ do {
+ msleep(MHI_SUSPEND_MIN);
+ mhi_ep_mmio_get_mhi_state(mhi_cntrl, &state, &mhi_reset);
+ if (mhi_reset) {
+ /* Clear the MHI reset if host is in reset state */
+ mhi_ep_mmio_clear_reset(mhi_cntrl);
+ dev_dbg(dev, "Host initiated reset while waiting for M0\n");
+ }
+ max_cnt++;
+ } while (state != MHI_STATE_M0 && max_cnt < MHI_SUSPEND_TIMEOUT);
+
+ if (state == MHI_STATE_M0) {
+ ret = mhi_ep_cache_host_cfg(mhi_cntrl);
+ if (ret) {
+ dev_err(dev, "Failed to cache host config\n");
+ return ret;
+ }
+
+ mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
+ } else {
+ dev_err(dev, "Host failed to enter M0\n");
+ return -ETIMEDOUT;
+ }
+
+ /* Enable all interrupts now */
+ mhi_ep_enable_int(mhi_cntrl);
+
+ return 0;
+}
+
static void mhi_ep_ring_worker(struct work_struct *work)
{
struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
@@ -313,6 +485,62 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
return IRQ_HANDLED;
}
+int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ int ret, i;
+
+ /*
+ * Mask all interrupts until the state machine is ready. Interrupts will
+ * be enabled later with mhi_ep_enable().
+ */
+ mhi_ep_mmio_mask_interrupts(mhi_cntrl);
+ mhi_ep_mmio_init(mhi_cntrl);
+
+ mhi_cntrl->mhi_event = kzalloc(mhi_cntrl->event_rings * (sizeof(*mhi_cntrl->mhi_event)),
+ GFP_KERNEL);
+ if (!mhi_cntrl->mhi_event)
+ return -ENOMEM;
+
+ /* Initialize command, channel and event rings */
+ mhi_ep_ring_init(&mhi_cntrl->mhi_cmd->ring, RING_TYPE_CMD, 0);
+ for (i = 0; i < mhi_cntrl->max_chan; i++)
+ mhi_ep_ring_init(&mhi_cntrl->mhi_chan[i].ring, RING_TYPE_CH, i);
+ for (i = 0; i < mhi_cntrl->event_rings; i++)
+ mhi_ep_ring_init(&mhi_cntrl->mhi_event[i].ring, RING_TYPE_ER, i);
+
+ spin_lock_bh(&mhi_cntrl->state_lock);
+ mhi_cntrl->mhi_state = MHI_STATE_RESET;
+ spin_unlock_bh(&mhi_cntrl->state_lock);
+
+ /* Set AMSS EE before signaling ready state */
+ mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
+
+ /* All set, notify the host that we are ready */
+ ret = mhi_ep_set_ready_state(mhi_cntrl);
+ if (ret)
+ goto err_free_event;
+
+ dev_dbg(dev, "READY state notification sent to the host\n");
+
+ ret = mhi_ep_enable(mhi_cntrl);
+ if (ret) {
+ dev_err(dev, "Failed to enable MHI endpoint: %d\n", ret);
+ goto err_free_event;
+ }
+
+ enable_irq(mhi_cntrl->irq);
+ mhi_cntrl->is_enabled = true;
+
+ return 0;
+
+err_free_event:
+ kfree(mhi_cntrl->mhi_event);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_ep_power_up);
+
static void mhi_ep_release_device(struct device *dev)
{
struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 7a665cd55579..105e8067409a 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -59,9 +59,18 @@ struct mhi_ep_db_info {
* @mhi_event: Points to the event ring configurations table
* @mhi_cmd: Points to the command ring configurations table
* @sm: MHI Endpoint state machine
+ * @ch_ctx_cache: Cache of host channel context data structure
+ * @ev_ctx_cache: Cache of host event context data structure
+ * @cmd_ctx_cache: Cache of host command context data structure
* @ch_ctx_host_pa: Physical address of host channel context data structure
* @ev_ctx_host_pa: Physical address of host event context data structure
* @cmd_ctx_host_pa: Physical address of host command context data structure
+ * @ch_ctx_cache_phys: Physical address of the host channel context cache
+ * @ev_ctx_cache_phys: Physical address of the host event context cache
+ * @cmd_ctx_cache_phys: Physical address of the host command context cache
+ * @ch_ctx_host_size: Size of the host channel context data structure
+ * @ev_ctx_host_size: Size of the host event context data structure
+ * @cmd_ctx_host_size: Size of the host command context data structure
* @state_wq: Dedicated workqueue for handling MHI state transitions
* @ring_wq: Dedicated workqueue for processing MHI rings
* @state_work: State transition worker
@@ -85,6 +94,7 @@ struct mhi_ep_db_info {
* @erdb_offset: Event ring doorbell offset set by the host
* @index: MHI Endpoint controller index
* @irq: IRQ used by the endpoint controller
+ * @is_enabled: Flag indicating whether the endpoint controller is enabled
*/
struct mhi_ep_cntrl {
struct device *cntrl_dev;
@@ -96,9 +106,18 @@ struct mhi_ep_cntrl {
struct mhi_ep_cmd *mhi_cmd;
struct mhi_ep_sm *sm;
+ struct mhi_chan_ctxt *ch_ctx_cache;
+ struct mhi_event_ctxt *ev_ctx_cache;
+ struct mhi_cmd_ctxt *cmd_ctx_cache;
u64 ch_ctx_host_pa;
u64 ev_ctx_host_pa;
u64 cmd_ctx_host_pa;
+ phys_addr_t ch_ctx_cache_phys;
+ phys_addr_t ev_ctx_cache_phys;
+ phys_addr_t cmd_ctx_cache_phys;
+ size_t ch_ctx_host_size;
+ size_t ev_ctx_host_size;
+ size_t cmd_ctx_host_size;
struct workqueue_struct *state_wq;
struct workqueue_struct *ring_wq;
@@ -131,6 +150,7 @@ struct mhi_ep_cntrl {
u32 erdb_offset;
int index;
int irq;
+ bool is_enabled;
};
/**
@@ -229,4 +249,12 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
*/
void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
+/**
+ * mhi_ep_power_up - Power up the MHI endpoint stack
+ * @mhi_cntrl: MHI Endpoint controller
+ *
+ * Return: 0 if power up succeeds, a negative error code otherwise.
+ */
+int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
+
#endif
--
2.25.1
Add support for handling MHI_RESET in the MHI endpoint stack. MHI_RESET will
be issued by the host during shutdown and in error scenarios so that it can
recover the endpoint device without restarting the whole device.
MHI_RESET handling involves resetting the internal MHI registers, data
structures, and state machines, resetting all channels/rings, and clearing
the MHICTRL.RESET bit. Additionally, the device will move to the READY state
if the reset was due to SYS_ERR.
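The recovery decision described above can be sketched as a small state model. This is a simplified userspace sketch, not the kernel code: the struct, enum values, and helper name here are illustrative only, loosely mirroring what mhi_ep_reset_worker() does.

```c
#include <assert.h>
#include <stdbool.h>

enum mhi_state_model { ST_RESET, ST_READY, ST_SYS_ERR, ST_M0 };

struct ep_model {
	enum mhi_state_model state;
	bool mmio_initialized;
};

/*
 * Model of the MHI_RESET handling: always acknowledge the reset by
 * clearing the MMIO state, but re-initialize and signal READY only when
 * the reset was triggered by SYS_ERR (a host-driven shutdown needs no
 * re-init). Returns true if re-initialization was performed.
 */
static bool handle_reset(struct ep_model *ep)
{
	enum mhi_state_model prev = ep->state;

	ep->mmio_initialized = false;	/* MHICTRL.RESET acknowledged */
	ep->state = ST_RESET;

	if (prev == ST_SYS_ERR) {
		ep->mmio_initialized = true;	/* mhi_ep_mmio_init() */
		ep->state = ST_READY;		/* notify host: READY */
		return true;
	}
	return false;
}
```

With this model, a reset issued while in SYS_ERR ends in READY, while a reset during shutdown leaves the device parked in RESET.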
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/main.c | 53 +++++++++++++++++++++++++++++++++++++++
include/linux/mhi_ep.h | 2 ++
2 files changed, 55 insertions(+)
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 89d1bb780747..0b0fad6bf69a 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -464,6 +464,7 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
struct device *dev = &mhi_cntrl->mhi_dev->dev;
enum mhi_state state;
u32 int_value = 0;
+ bool mhi_reset;
/* Acknowledge the interrupts */
mhi_ep_mmio_read(mhi_cntrl, MHI_CTRL_INT_STATUS_A7, &int_value);
@@ -472,6 +473,14 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
/* Check for ctrl interrupt */
if (FIELD_GET(MHI_CTRL_INT_STATUS_A7_MSK, int_value)) {
dev_dbg(dev, "Processing ctrl interrupt\n");
+ mhi_ep_mmio_get_mhi_state(mhi_cntrl, &state, &mhi_reset);
+ if (mhi_reset) {
+ dev_info(dev, "Host triggered MHI reset!\n");
+ disable_irq_nosync(mhi_cntrl->irq);
+ schedule_work(&mhi_cntrl->reset_work);
+ return IRQ_HANDLED;
+ }
+
mhi_ep_process_ctrl_interrupt(mhi_cntrl, state);
}
@@ -552,6 +561,49 @@ static void mhi_ep_abort_transfer(struct mhi_ep_cntrl *mhi_cntrl)
mhi_cntrl->is_enabled = false;
}
+static void mhi_ep_reset_worker(struct work_struct *work)
+{
+ struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, reset_work);
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ enum mhi_state cur_state;
+ int ret;
+
+ mhi_ep_abort_transfer(mhi_cntrl);
+
+ spin_lock_bh(&mhi_cntrl->state_lock);
+ /* Reset MMIO to signal host that the MHI_RESET is completed in endpoint */
+ mhi_ep_mmio_reset(mhi_cntrl);
+ cur_state = mhi_cntrl->mhi_state;
+ spin_unlock_bh(&mhi_cntrl->state_lock);
+
+ /*
+ * Only proceed further if the reset is due to SYS_ERR. The host also
+ * issues MHI_RESET during shutdown, and we don't need to re-init in
+ * that case.
+ */
+ if (cur_state == MHI_STATE_SYS_ERR) {
+ mhi_ep_mmio_init(mhi_cntrl);
+
+ /* Set AMSS EE before signaling ready state */
+ mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
+
+ /* All set, notify the host that we are ready */
+ ret = mhi_ep_set_ready_state(mhi_cntrl);
+ if (ret)
+ return;
+
+ dev_dbg(dev, "READY state notification sent to the host\n");
+
+ ret = mhi_ep_enable(mhi_cntrl);
+ if (ret) {
+ dev_err(dev, "Failed to enable MHI endpoint: %d\n", ret);
+ return;
+ }
+
+ enable_irq(mhi_cntrl->irq);
+ }
+}
+
int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
@@ -824,6 +876,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
INIT_WORK(&mhi_cntrl->ring_work, mhi_ep_ring_worker);
INIT_WORK(&mhi_cntrl->state_work, mhi_ep_state_worker);
+ INIT_WORK(&mhi_cntrl->reset_work, mhi_ep_reset_worker);
mhi_cntrl->ring_wq = alloc_ordered_workqueue("mhi_ep_ring_wq", WQ_HIGHPRI);
if (!mhi_cntrl->ring_wq) {
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 57fa445661f6..6482b0c91865 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -75,6 +75,7 @@ struct mhi_ep_db_info {
* @ring_wq: Dedicated workqueue for processing MHI rings
* @state_work: State transition worker
* @ring_work: Ring worker
+ * @reset_work: Worker for MHI Endpoint reset
* @ch_db_list: List of queued channel doorbells
* @st_transition_list: List of state transitions
* @list_lock: Lock for protecting state transition and channel doorbell lists
@@ -123,6 +124,7 @@ struct mhi_ep_cntrl {
struct workqueue_struct *ring_wq;
struct work_struct state_work;
struct work_struct ring_work;
+ struct work_struct reset_work;
struct list_head ch_db_list;
struct list_head st_transition_list;
--
2.25.1
Add support for MHI endpoint power_down, which includes stopping all
available channels, destroying the channel devices, resetting the event and
transfer rings, and freeing the cached host context.
The stack will be powered down whenever the physical bus link goes down.
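The teardown ordering matters here: clients are disconnected before any ring is reset, and the cached host context is freed only after every ring that references it has been stopped. A minimal userspace sketch of that ordering (the step names and logging struct are illustrative, not kernel APIs):

```c
#include <assert.h>
#include <string.h>

#define MAX_STEPS 8

struct teardown_log {
	const char *steps[MAX_STEPS];
	int n;
};

static void log_step(struct teardown_log *log, const char *step)
{
	log->steps[log->n++] = step;
}

/*
 * Illustrative ordering of mhi_ep_abort_transfer(): notify client
 * drivers and disable channels first (so nothing new is queued), then
 * reset the transfer, event, and command rings, and free the cached
 * host context last.
 */
static void abort_transfer_model(struct teardown_log *log)
{
	log_step(log, "disconnect-clients");
	log_step(log, "destroy-devices");
	log_step(log, "reset-transfer-rings");
	log_step(log, "reset-event-rings");
	log_step(log, "reset-cmd-ring");
	log_step(log, "free-host-cache");
}
```

Reversing any of these steps would risk a ring worker touching a freed host context cache.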
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/main.c | 81 +++++++++++++++++++++++++++++++++++++++
include/linux/mhi_ep.h | 6 +++
2 files changed, 87 insertions(+)
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 5f62b6fb6dbc..89d1bb780747 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -21,6 +21,8 @@
static DEFINE_IDA(mhi_ep_cntrl_ida);
+static int mhi_ep_destroy_device(struct device *dev, void *data);
+
static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 event_ring,
struct mhi_ep_ring_element *el)
{
@@ -485,6 +487,71 @@ static irqreturn_t mhi_ep_irq(int irq, void *data)
return IRQ_HANDLED;
}
+static void mhi_ep_abort_transfer(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ struct mhi_ep_ring *ch_ring, *ev_ring;
+ struct mhi_result result = {};
+ struct mhi_ep_chan *mhi_chan;
+ int i;
+
+ /* Stop all the channels */
+ for (i = 0; i < mhi_cntrl->max_chan; i++) {
+ ch_ring = &mhi_cntrl->mhi_chan[i].ring;
+ if (ch_ring->state == RING_STATE_UINT)
+ continue;
+
+ mhi_chan = &mhi_cntrl->mhi_chan[i];
+ mutex_lock(&mhi_chan->lock);
+ /* Send channel disconnect status to client drivers */
+ if (mhi_chan->xfer_cb) {
+ result.transaction_status = -ENOTCONN;
+ result.bytes_xferd = 0;
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+ }
+
+ /* Set channel state to DISABLED */
+ mhi_chan->state = MHI_CH_STATE_DISABLED;
+ mutex_unlock(&mhi_chan->lock);
+ }
+
+ flush_workqueue(mhi_cntrl->ring_wq);
+ flush_workqueue(mhi_cntrl->state_wq);
+
+ /* Destroy devices associated with all channels */
+ device_for_each_child(&mhi_cntrl->mhi_dev->dev, NULL, mhi_ep_destroy_device);
+
+ /* Stop and reset the transfer rings */
+ for (i = 0; i < mhi_cntrl->max_chan; i++) {
+ ch_ring = &mhi_cntrl->mhi_chan[i].ring;
+ if (ch_ring->state == RING_STATE_UINT)
+ continue;
+
+ mhi_chan = &mhi_cntrl->mhi_chan[i];
+ mutex_lock(&mhi_chan->lock);
+ mhi_ep_ring_stop(mhi_cntrl, ch_ring);
+ mutex_unlock(&mhi_chan->lock);
+ }
+
+ /* Stop and reset the event rings */
+ for (i = 0; i < mhi_cntrl->event_rings; i++) {
+ ev_ring = &mhi_cntrl->mhi_event[i].ring;
+ if (ev_ring->state == RING_STATE_UINT)
+ continue;
+
+ mutex_lock(&mhi_cntrl->event_lock);
+ mhi_ep_ring_stop(mhi_cntrl, ev_ring);
+ mutex_unlock(&mhi_cntrl->event_lock);
+ }
+
+ /* Stop and reset the command ring */
+ mhi_ep_ring_stop(mhi_cntrl, &mhi_cntrl->mhi_cmd->ring);
+
+ mhi_ep_free_host_cfg(mhi_cntrl);
+ mhi_ep_mmio_mask_interrupts(mhi_cntrl);
+
+ mhi_cntrl->is_enabled = false;
+}
+
int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
@@ -541,6 +608,16 @@ int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl)
}
EXPORT_SYMBOL_GPL(mhi_ep_power_up);
+void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ if (mhi_cntrl->is_enabled)
+ mhi_ep_abort_transfer(mhi_cntrl);
+
+ kfree(mhi_cntrl->mhi_event);
+ disable_irq(mhi_cntrl->irq);
+}
+EXPORT_SYMBOL_GPL(mhi_ep_power_down);
+
static void mhi_ep_release_device(struct device *dev)
{
struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -825,6 +902,10 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
}
EXPORT_SYMBOL_GPL(mhi_ep_register_controller);
+/*
+ * It is expected that the controller drivers will power down the MHI EP stack
+ * using "mhi_ep_power_down()" before calling this function to unregister themselves.
+ */
void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
{
struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 105e8067409a..57fa445661f6 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -257,4 +257,10 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
*/
int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
+/**
+ * mhi_ep_power_down - Power down the MHI endpoint stack
+ * @mhi_cntrl: MHI controller
+ */
+void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
+
#endif
--
2.25.1
Add support for handling the SYS_ERR (System Error) condition in the MHI
endpoint stack. The SYS_ERR flag will be asserted by the endpoint device
when it detects an internal error. The host will then issue a reset and
reinitialize MHI to recover from the error state.
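The endpoint side of this handshake is just a guarded notification. A simplified userspace model of the flow (struct and flag names are illustrative, not the kernel code):

```c
#include <assert.h>
#include <stdbool.h>

struct syserr_model {
	bool enabled;		/* mhi_cntrl->is_enabled */
	bool state_set;		/* local state moved to SYS_ERR */
	bool event_sent;	/* host notified via event ring */
};

/*
 * Mirrors mhi_ep_handle_syserr(): a disabled controller is a no-op,
 * and the state-change event is sent only once the local transition
 * to SYS_ERR succeeds. The host then responds with MHI_RESET.
 */
static void handle_syserr(struct syserr_model *m, bool set_state_ok)
{
	if (!m->enabled)
		return;
	if (!set_state_ok)
		return;
	m->state_set = true;
	m->event_sent = true;
}
```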
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/internal.h | 1 +
drivers/bus/mhi/ep/main.c | 24 ++++++++++++++++++++++++
drivers/bus/mhi/ep/sm.c | 2 ++
3 files changed, 27 insertions(+)
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index ec508201c5c0..5c6b622482c9 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -228,5 +228,6 @@ int mhi_ep_set_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state mhi_stat
int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl);
#endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 0b0fad6bf69a..088eac0808d1 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -148,6 +148,30 @@ static int mhi_ep_send_cmd_comp_event(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_e
return mhi_ep_send_event(mhi_cntrl, 0, &event);
}
+/*
+ * We don't need to do anything special other than setting the MHI SYS_ERR
+ * state. The host will then reset all contexts and issue MHI_RESET so
+ * that we can also recover from the error state.
+ */
+void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ int ret;
+
+ /* If MHI EP is not enabled, nothing to do */
+ if (!mhi_cntrl->is_enabled)
+ return;
+
+ ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
+ if (ret)
+ return;
+
+ /* Signal host that the device went to SYS_ERR state */
+ ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_SYS_ERR);
+ if (ret)
+ dev_err(dev, "Failed sending SYS_ERR state change event: %d\n", ret);
+}
+
static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
index 95cec5c627b4..50378b9f7300 100644
--- a/drivers/bus/mhi/ep/sm.c
+++ b/drivers/bus/mhi/ep/sm.c
@@ -98,6 +98,7 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
if (ret) {
+ mhi_ep_handle_syserr(mhi_cntrl);
spin_unlock_bh(&mhi_cntrl->state_lock);
return ret;
}
@@ -133,6 +134,7 @@ int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
spin_lock_bh(&mhi_cntrl->state_lock);
ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M3);
if (ret) {
+ mhi_ep_handle_syserr(mhi_cntrl);
spin_unlock_bh(&mhi_cntrl->state_lock);
return ret;
}
--
2.25.1
Since we now have all the necessary infrastructure, let's add support for
processing the command and TRE rings in the MHI endpoint stack. As part of
the TRE ring processing, channel read functionality is also added, allowing
the MHI endpoint device to read data from the host over any available MHI
channel.
During TRE ring processing, the client driver will also be notified about
data availability.
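A key detail of TRE ring processing is that a transfer descriptor (TD) spans consecutive TREs linked by the CHAIN bit and ends at the first TRE whose CHAIN bit is clear. A self-contained userspace sketch of that boundary detection (the struct and helper are illustrative; the real code works on ring offsets and MHI_EP_TRE_GET_* accessors):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct tre_model {
	bool chain;	/* more TREs belong to this TD */
	bool ieot;	/* interrupt on end of transfer */
	bool ieob;	/* interrupt on end of block */
};

/*
 * Walks queued TREs the way the TRE-ring processing does: consume
 * chained TREs until the first one with CHAIN clear, which marks the
 * TD boundary. Returns the number of TREs that make up the TD, or n
 * if the queue runs out before a boundary is seen.
 */
static size_t td_length(const struct tre_model *tres, size_t n)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (!tres[i].chain)
			return i + 1;	/* TD boundary reached */
	}
	return n;
}
```

This is the same rule mhi_ep_check_tre_bytes_left() applies per element: EOB completion events may fire on chained TREs, while the EOT event and the TD-done condition belong to the unchained final TRE.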
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/internal.h | 2 +
drivers/bus/mhi/ep/main.c | 350 ++++++++++++++++++++++++++++++++++
drivers/bus/mhi/ep/ring.c | 2 +
include/linux/mhi_ep.h | 9 +
4 files changed, 363 insertions(+)
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index 5c6b622482c9..70626ef3799d 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -188,6 +188,8 @@ int mhi_ep_process_ring(struct mhi_ep_ring *ring);
int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *element,
int evt_offset);
void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring);
+int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
+int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
/* MMIO related functions */
void mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 *regval);
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 088eac0808d1..26d551eb63ce 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -21,6 +21,7 @@
static DEFINE_IDA(mhi_ep_cntrl_ida);
+static int mhi_ep_create_device(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id);
static int mhi_ep_destroy_device(struct device *dev, void *data);
static int mhi_ep_send_event(struct mhi_ep_cntrl *mhi_cntrl, u32 event_ring,
@@ -172,6 +173,355 @@ void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl)
dev_err(dev, "Failed sending SYS_ERR state change event: %d\n", ret);
}
+int mhi_ep_process_cmd_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el)
+{
+ struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ struct mhi_ep_ring *ch_ring, *event_ring;
+ union mhi_ep_ring_ctx *event_ctx;
+ struct mhi_result result = {};
+ struct mhi_ep_chan *mhi_chan;
+ u32 event_ring_idx, tmp;
+ u32 ch_id;
+ int ret;
+
+ ch_id = MHI_TRE_GET_CMD_CHID(el);
+ mhi_chan = &mhi_cntrl->mhi_chan[ch_id];
+ ch_ring = &mhi_cntrl->mhi_chan[ch_id].ring;
+
+ switch (MHI_TRE_GET_CMD_TYPE(el)) {
+ case MHI_PKT_TYPE_START_CHAN_CMD:
+ dev_dbg(dev, "Received START command for channel (%d)\n", ch_id);
+
+ mutex_lock(&mhi_chan->lock);
+ /* Initialize and configure the corresponding channel ring */
+ if (ch_ring->state == RING_STATE_UINT) {
+ ret = mhi_ep_ring_start(mhi_cntrl, ch_ring,
+ (union mhi_ep_ring_ctx *)&mhi_cntrl->ch_ctx_cache[ch_id]);
+ if (ret) {
+ dev_err(dev, "Failed to start ring for channel (%d)\n", ch_id);
+ ret = mhi_ep_send_cmd_comp_event(mhi_cntrl,
+ MHI_EV_CC_UNDEFINED_ERR);
+ if (ret)
+ dev_err(dev, "Error sending completion event: %d\n",
+ MHI_EV_CC_UNDEFINED_ERR);
+
+ goto err_unlock;
+ }
+ }
+
+ /* Enable DB for the channel */
+ mhi_ep_mmio_enable_chdb_a7(mhi_cntrl, ch_id);
+
+ mutex_lock(&mhi_cntrl->event_lock);
+ event_ring_idx = mhi_cntrl->ch_ctx_cache[ch_id].erindex;
+ event_ring = &mhi_cntrl->mhi_event[event_ring_idx].ring;
+ event_ctx = (union mhi_ep_ring_ctx *)&mhi_cntrl->ev_ctx_cache[event_ring_idx];
+ if (event_ring->state == RING_STATE_UINT) {
+ ret = mhi_ep_ring_start(mhi_cntrl, event_ring, event_ctx);
+ if (ret) {
+ dev_err(dev, "Error starting event ring: %d\n",
+ mhi_cntrl->ch_ctx_cache[ch_id].erindex);
+ mutex_unlock(&mhi_cntrl->event_lock);
+ goto err_unlock;
+ }
+ }
+
+ mutex_unlock(&mhi_cntrl->event_lock);
+
+ /* Set channel state to RUNNING */
+ mhi_chan->state = MHI_CH_STATE_RUNNING;
+ tmp = mhi_cntrl->ch_ctx_cache[ch_id].chcfg;
+ tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ tmp |= (MHI_CH_STATE_RUNNING << CHAN_CTX_CHSTATE_SHIFT);
+ mhi_cntrl->ch_ctx_cache[ch_id].chcfg = tmp;
+
+ ret = mhi_ep_send_cmd_comp_event(mhi_cntrl, MHI_EV_CC_SUCCESS);
+ if (ret) {
+ dev_err(dev, "Error sending command completion event: %d\n",
+ MHI_EV_CC_SUCCESS);
+ goto err_unlock;
+ }
+
+ mutex_unlock(&mhi_chan->lock);
+
+ /*
+ * Create MHI device only during UL channel start. Since the MHI
+ * channels operate in a pair, we'll associate both UL and DL
+ * channels to the same device.
+ *
+ * We also need to check for mhi_dev != NULL because the host
+ * will issue START_CHAN command during resume and we don't
+ * destroy the device during suspend.
+ */
+ if (!(ch_id % 2) && !mhi_chan->mhi_dev) {
+ ret = mhi_ep_create_device(mhi_cntrl, ch_id);
+ if (ret) {
+ dev_err(dev, "Error creating device for channel (%d)\n", ch_id);
+ return ret;
+ }
+ }
+
+ break;
+ case MHI_PKT_TYPE_STOP_CHAN_CMD:
+ dev_dbg(dev, "Received STOP command for channel (%d)\n", ch_id);
+ if (ch_ring->state == RING_STATE_UINT) {
+ dev_err(dev, "Channel (%d) not opened\n", ch_id);
+ return -ENODEV;
+ }
+
+ mutex_lock(&mhi_chan->lock);
+ /* Disable DB for the channel */
+ mhi_ep_mmio_disable_chdb_a7(mhi_cntrl, ch_id);
+
+ /* Set the local value of the transfer ring read pointer to the channel context */
+ ch_ring->rd_offset = mhi_ep_ring_addr2offset(ch_ring,
+ ch_ring->ring_ctx->generic.rp);
+
+ /* Send channel disconnect status to client drivers */
+ result.transaction_status = -ENOTCONN;
+ result.bytes_xferd = 0;
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+
+ /* Set channel state to STOP */
+ mhi_chan->state = MHI_CH_STATE_STOP;
+ tmp = mhi_cntrl->ch_ctx_cache[ch_id].chcfg;
+ tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ tmp |= (MHI_CH_STATE_STOP << CHAN_CTX_CHSTATE_SHIFT);
+ mhi_cntrl->ch_ctx_cache[ch_id].chcfg = tmp;
+
+ ret = mhi_ep_send_cmd_comp_event(mhi_cntrl, MHI_EV_CC_SUCCESS);
+ if (ret) {
+ dev_err(dev, "Error sending command completion event: %d\n",
+ MHI_EV_CC_SUCCESS);
+ goto err_unlock;
+ }
+
+ mutex_unlock(&mhi_chan->lock);
+ break;
+ case MHI_PKT_TYPE_RESET_CHAN_CMD:
+ dev_dbg(dev, "Received RESET command for channel (%d)\n", ch_id);
+ if (ch_ring->state == RING_STATE_UINT) {
+ dev_err(dev, "Channel (%d) not opened\n", ch_id);
+ return -ENODEV;
+ }
+
+ mutex_lock(&mhi_chan->lock);
+ /* Stop and reset the transfer ring */
+ mhi_ep_ring_stop(mhi_cntrl, ch_ring);
+
+ /* Send channel disconnect status to client driver */
+ result.transaction_status = -ENOTCONN;
+ result.bytes_xferd = 0;
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+
+ /* Set channel state to DISABLED */
+ mhi_chan->state = MHI_CH_STATE_DISABLED;
+ tmp = mhi_cntrl->ch_ctx_cache[ch_id].chcfg;
+ tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ tmp |= (MHI_CH_STATE_DISABLED << CHAN_CTX_CHSTATE_SHIFT);
+ mhi_cntrl->ch_ctx_cache[ch_id].chcfg = tmp;
+
+ ret = mhi_ep_send_cmd_comp_event(mhi_cntrl, MHI_EV_CC_SUCCESS);
+ if (ret) {
+ dev_err(dev, "Error sending command completion event: %d\n",
+ MHI_EV_CC_SUCCESS);
+ goto err_unlock;
+ }
+ mutex_unlock(&mhi_chan->lock);
+ break;
+ default:
+ dev_err(dev, "Invalid command received: %d for channel (%d)\n",
+ MHI_TRE_GET_CMD_TYPE(el), ch_id);
+ return -EINVAL;
+ }
+
+ return 0;
+
+err_unlock:
+ mutex_unlock(&mhi_chan->lock);
+
+ return ret;
+}
+
+static int mhi_ep_check_tre_bytes_left(struct mhi_ep_cntrl *mhi_cntrl,
+ struct mhi_ep_ring *ring,
+ struct mhi_ep_ring_element *el)
+{
+ struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ring->ch_id];
+ bool td_done = false;
+
+ /* A full TRE worth of data was consumed. Check if we are at a TD boundary */
+ if (mhi_chan->tre_bytes_left == 0) {
+ if (MHI_EP_TRE_GET_CHAIN(el)) {
+ if (MHI_EP_TRE_GET_IEOB(el))
+ mhi_ep_send_completion_event(mhi_cntrl,
+ ring, MHI_EP_TRE_GET_LEN(el), MHI_EV_CC_EOB);
+ } else {
+ if (MHI_EP_TRE_GET_IEOT(el))
+ mhi_ep_send_completion_event(mhi_cntrl,
+ ring, MHI_EP_TRE_GET_LEN(el), MHI_EV_CC_EOT);
+ td_done = true;
+ }
+
+ mhi_ep_ring_inc_index(ring);
+ mhi_chan->tre_bytes_left = 0;
+ mhi_chan->tre_loc = 0;
+ }
+
+ return td_done;
+}
+
+bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir)
+{
+ struct mhi_ep_chan *mhi_chan = (dir == DMA_FROM_DEVICE) ? mhi_dev->dl_chan :
+ mhi_dev->ul_chan;
+ struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_ep_ring *ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
+
+ return !!(ring->rd_offset == ring->wr_offset);
+}
+EXPORT_SYMBOL_GPL(mhi_ep_queue_is_empty);
+
+static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
+ struct mhi_ep_ring *ring,
+ struct mhi_result *result,
+ u32 len)
+{
+ struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ring->ch_id];
+ struct device *dev = &mhi_cntrl->mhi_dev->dev;
+ size_t bytes_to_read, addr_offset;
+ struct mhi_ep_ring_element *el;
+ ssize_t bytes_read = 0;
+ u32 buf_remaining;
+ void __iomem *tre_buf;
+ phys_addr_t tre_phys;
+ void *write_to_loc;
+ u64 read_from_loc;
+ bool td_done = false;
+ int ret;
+
+ buf_remaining = len;
+
+ do {
+ /* Don't process the transfer ring if the channel is not in RUNNING state */
+ if (mhi_chan->state != MHI_CH_STATE_RUNNING)
+ return -ENODEV;
+
+ el = &ring->ring_cache[ring->rd_offset];
+
+ if (mhi_chan->tre_loc) {
+ bytes_to_read = min(buf_remaining,
+ mhi_chan->tre_bytes_left);
+ dev_dbg(dev, "TRE bytes remaining: %d", mhi_chan->tre_bytes_left);
+ } else {
+ if (mhi_ep_queue_is_empty(mhi_chan->mhi_dev, DMA_TO_DEVICE))
+ /* Nothing to do */
+ return 0;
+
+ mhi_chan->tre_loc = MHI_EP_TRE_GET_PTR(el);
+ mhi_chan->tre_size = MHI_EP_TRE_GET_LEN(el);
+ mhi_chan->tre_bytes_left = mhi_chan->tre_size;
+
+ bytes_to_read = min(buf_remaining, mhi_chan->tre_size);
+ }
+
+ bytes_read += bytes_to_read;
+ addr_offset = mhi_chan->tre_size - mhi_chan->tre_bytes_left;
+ read_from_loc = mhi_chan->tre_loc + addr_offset;
+ write_to_loc = result->buf_addr + (len - buf_remaining);
+ mhi_chan->tre_bytes_left -= bytes_to_read;
+
+ tre_buf = mhi_cntrl->alloc_addr(mhi_cntrl, &tre_phys, bytes_to_read);
+ if (!tre_buf) {
+ dev_err(dev, "Failed to allocate TRE buffer\n");
+ return -ENOMEM;
+ }
+
+ ret = mhi_cntrl->map_addr(mhi_cntrl, tre_phys, read_from_loc, bytes_to_read);
+ if (ret) {
+ dev_err(dev, "Failed to map TRE buffer\n");
+ goto err_tre_free;
+ }
+
+ dev_dbg(&mhi_chan->mhi_dev->dev, "Reading %zu bytes", bytes_to_read);
+ memcpy_fromio(write_to_loc, tre_buf, bytes_to_read);
+
+ mhi_cntrl->unmap_addr(mhi_cntrl, tre_phys);
+ mhi_cntrl->free_addr(mhi_cntrl, tre_phys, tre_buf, bytes_to_read);
+
+ buf_remaining -= bytes_to_read;
+ td_done = mhi_ep_check_tre_bytes_left(mhi_cntrl, ring, el);
+ } while (buf_remaining && !td_done);
+
+ result->bytes_xferd = bytes_read;
+
+ return bytes_read;
+
+err_tre_free:
+ mhi_cntrl->free_addr(mhi_cntrl, tre_phys, tre_buf, bytes_to_read);
+
+ return ret;
+}
+
+int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el)
+{
+ struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
+ struct mhi_result result = {};
+ u32 len = MHI_EP_DEFAULT_MTU;
+ struct mhi_ep_chan *mhi_chan;
+ int ret = 0;
+
+ mhi_chan = &mhi_cntrl->mhi_chan[ring->ch_id];
+
+ /*
+ * Bail out if transfer callback is not registered for the channel.
+ * This is most likely because the client driver is not loaded at this point.
+ */
+ if (!mhi_chan->xfer_cb) {
+ dev_err(&mhi_chan->mhi_dev->dev, "Client driver not available\n");
+ return -ENODEV;
+ }
+
+ dev_dbg(&mhi_chan->mhi_dev->dev, "Processing TRE ring\n");
+
+ mutex_lock(&mhi_chan->lock);
+ if (ring->ch_id % 2) {
+ /* DL channel */
+ result.dir = mhi_chan->dir;
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+ } else {
+ /* UL channel */
+ while (1) {
+ result.buf_addr = kzalloc(len, GFP_KERNEL);
+ if (!result.buf_addr) {
+ ret = -ENOMEM;
+ goto err_unlock;
+ }
+
+ ret = mhi_ep_read_channel(mhi_cntrl, ring, &result, len);
+ if (ret < 0) {
+ dev_err(&mhi_chan->mhi_dev->dev, "Failed to read channel");
+ kfree(result.buf_addr);
+ break;
+ } else if (ret == 0) {
+ /* No more data to read */
+ kfree(result.buf_addr);
+ break;
+ }
+
+ result.dir = mhi_chan->dir;
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+ kfree(result.buf_addr);
+ }
+ }
+
+err_unlock:
+ mutex_unlock(&mhi_chan->lock);
+
+ return ret;
+}
+
static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
diff --git a/drivers/bus/mhi/ep/ring.c b/drivers/bus/mhi/ep/ring.c
index 763b8506d309..11adfb659f16 100644
--- a/drivers/bus/mhi/ep/ring.c
+++ b/drivers/bus/mhi/ep/ring.c
@@ -264,9 +264,11 @@ void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32
ring->state = RING_STATE_UINT;
ring->type = type;
if (ring->type == RING_TYPE_CMD) {
+ ring->ring_cb = mhi_ep_process_cmd_ring;
ring->db_offset_h = CRDB_HIGHER;
ring->db_offset_l = CRDB_LOWER;
} else if (ring->type == RING_TYPE_CH) {
+ ring->ring_cb = mhi_ep_process_tre_ring;
ring->db_offset_h = CHDB_HIGHER_n(id);
ring->db_offset_l = CHDB_LOWER_n(id);
ring->ch_id = id;
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 6482b0c91865..260a181d3fab 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -265,4 +265,13 @@ int mhi_ep_power_up(struct mhi_ep_cntrl *mhi_cntrl);
*/
void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
+/**
+ * mhi_ep_queue_is_empty - Determine whether the transfer queue is empty
+ * @mhi_dev: Device associated with the channels
+ * @dir: DMA direction for the channel
+ *
+ * Return: true if the queue is empty, false otherwise.
+ */
+bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir);
+
#endif
--
2.25.1
Add support for queueing SKBs to the host over the MHI bus in the MHI
endpoint stack.
The mhi_ep_queue_skb() API will be used by client networking drivers
to queue SKBs to the host over MHI.
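When an SKB is split across host TREs, the completion code reported per TRE depends on the TRE flags and on whether payload remains. A simplified userspace model of that selection (the hunk below is truncated mid-loop in this series, so the no-bytes-remaining branch here assumes the usual MHI convention of signalling EOT on the final TRE when IEOT is set; names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

enum cc_model { CC_INVALID, CC_EOT, CC_EOB, CC_OVERFLOW };

/*
 * Completion code chosen after copying part of an SKB into one TRE,
 * modelled on mhi_ep_queue_skb(): if payload remains but the TRE is
 * not chained, the host-provided TD was too small (OVERFLOW); if it
 * is chained with IEOB set, report end-of-block; once nothing
 * remains, report end-of-transfer if IEOT was requested.
 */
static enum cc_model completion_code(bool bytes_remaining, bool chain,
				     bool ieob, bool ieot)
{
	if (bytes_remaining) {
		if (!chain)
			return CC_OVERFLOW;	/* TD ended before the SKB */
		if (ieob)
			return CC_EOB;
		return CC_INVALID;	/* keep filling the chained TD */
	}
	return ieot ? CC_EOT : CC_INVALID;
}
```

The OVERFLOW path lets the host learn that its buffer was undersized rather than silently truncating the packet.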
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/main.c | 132 ++++++++++++++++++++++++++++++++++++++
include/linux/mhi_ep.h | 12 ++++
2 files changed, 144 insertions(+)
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 26d551eb63ce..cc3da846ed36 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -522,6 +522,138 @@ int mhi_ep_process_tre_ring(struct mhi_ep_ring *ring, struct mhi_ep_ring_element
return ret;
}
+static void skip_to_next_td(struct mhi_ep_chan *mhi_chan, struct mhi_ep_ring *ring)
+{
+ struct mhi_ep_ring_element *el;
+ u32 td_boundary_reached = 0;
+
+ mhi_chan->skip_td = 1;
+ el = &ring->ring_cache[ring->rd_offset];
+ while (ring->rd_offset != ring->wr_offset) {
+ if (td_boundary_reached) {
+ mhi_chan->skip_td = 0;
+ break;
+ }
+
+ if (!MHI_EP_TRE_GET_CHAIN(el))
+ td_boundary_reached = 1;
+
+ mhi_ep_ring_inc_index(ring);
+ el = &ring->ring_cache[ring->rd_offset];
+ }
+}
+
+int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir,
+ struct sk_buff *skb, size_t len, enum mhi_flags mflags)
+{
+ struct mhi_ep_chan *mhi_chan = (dir == DMA_FROM_DEVICE) ? mhi_dev->dl_chan :
+ mhi_dev->ul_chan;
+ struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
+ enum mhi_ev_ccs code = MHI_EV_CC_INVALID;
+ struct mhi_ep_ring_element *el;
+ u64 write_to_loc, skip_tre = 0;
+ struct mhi_ep_ring *ring;
+ size_t bytes_to_write;
+ void __iomem *tre_buf;
+ phys_addr_t tre_phys;
+ void *read_from_loc;
+ u32 buf_remaining;
+ u32 tre_len;
+ int ret = 0;
+
+ if (dir == DMA_TO_DEVICE)
+ return -EINVAL;
+
+ buf_remaining = len;
+ ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
+
+ mutex_lock(&mhi_chan->lock);
+ if (mhi_chan->skip_td)
+ skip_to_next_td(mhi_chan, ring);
+
+ do {
+ /* Don't process the transfer ring if the channel is not in RUNNING state */
+ if (mhi_chan->state != MHI_CH_STATE_RUNNING) {
+ dev_err(&mhi_chan->mhi_dev->dev, "Channel not available\n");
+ ret = -ENODEV;
+ goto err_exit;
+ }
+
+ if (mhi_ep_queue_is_empty(mhi_dev, dir)) {
+ dev_err(&mhi_chan->mhi_dev->dev, "TRE not available!\n");
+ ret = -EINVAL;
+ goto err_exit;
+ }
+
+ el = &ring->ring_cache[ring->rd_offset];
+ tre_len = MHI_EP_TRE_GET_LEN(el);
+ if (skb->len > tre_len) {
+ dev_err(&mhi_chan->mhi_dev->dev, "Buffer size (%u) is too large!\n",
+ skb->len);
+ ret = -ENOMEM;
+ goto err_exit;
+ }
+
+ bytes_to_write = min(buf_remaining, tre_len);
+ read_from_loc = skb->data;
+ write_to_loc = MHI_EP_TRE_GET_PTR(el);
+
+ tre_buf = mhi_cntrl->alloc_addr(mhi_cntrl, &tre_phys, bytes_to_write);
+ if (!tre_buf) {
+ dev_err(&mhi_chan->mhi_dev->dev, "Failed to allocate TRE buffer\n");
+ ret = -ENOMEM;
+ goto err_exit;
+ }
+
+ ret = mhi_cntrl->map_addr(mhi_cntrl, tre_phys, write_to_loc, bytes_to_write);
+ if (ret) {
+ dev_err(&mhi_chan->mhi_dev->dev, "Failed to map TRE buffer\n");
+ goto err_tre_free;
+ }
+
+ dev_dbg(&mhi_chan->mhi_dev->dev, "Writing %zu bytes\n", bytes_to_write);
+ memcpy_toio(tre_buf, read_from_loc, bytes_to_write);
+
+ mhi_cntrl->unmap_addr(mhi_cntrl, tre_phys);
+ mhi_cntrl->free_addr(mhi_cntrl, tre_phys, tre_buf, bytes_to_write);
+
+ buf_remaining -= bytes_to_write;
+ if (buf_remaining) {
+ if (!MHI_EP_TRE_GET_CHAIN(el))
+ code = MHI_EV_CC_OVERFLOW;
+ else if (MHI_EP_TRE_GET_IEOB(el))
+ code = MHI_EV_CC_EOB;
+ } else {
+ if (MHI_EP_TRE_GET_CHAIN(el))
+ skip_tre = 1;
+ code = MHI_EV_CC_EOT;
+ }
+
+ ret = mhi_ep_send_completion_event(mhi_cntrl, ring, bytes_to_write, code);
+ if (ret) {
+ dev_err(&mhi_chan->mhi_dev->dev, "Error sending completion event\n");
+ goto err_exit;
+ }
+
+ mhi_ep_ring_inc_index(ring);
+ } while (!skip_tre && buf_remaining);
+
+ if (skip_tre)
+ skip_to_next_td(mhi_chan, ring);
+
+ mutex_unlock(&mhi_chan->lock);
+
+ return 0;
+
+err_tre_free:
+ mhi_cntrl->free_addr(mhi_cntrl, tre_phys, tre_buf, bytes_to_write);
+err_exit:
+ mutex_unlock(&mhi_chan->lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_ep_queue_skb);
+
static int mhi_ep_cache_host_cfg(struct mhi_ep_cntrl *mhi_cntrl)
{
struct device *dev = &mhi_cntrl->mhi_dev->dev;
diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
index 260a181d3fab..a7715f8066ed 100644
--- a/include/linux/mhi_ep.h
+++ b/include/linux/mhi_ep.h
@@ -274,4 +274,16 @@ void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl);
*/
bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir);
+/**
+ * mhi_ep_queue_skb - Send SKBs to host over MHI Endpoint
+ * @mhi_dev: Device associated with the channels
+ * @dir: DMA direction for the channel
+ * @skb: SKB to be sent to the host
+ * @len: Buffer length
+ * @mflags: MHI Endpoint transfer flags used for the transfer
+ *
+ * Return: 0 if the SKB has been sent successfully, a negative error code otherwise.
+ */
+int mhi_ep_queue_skb(struct mhi_ep_device *mhi_dev, enum dma_data_direction dir,
+ struct sk_buff *skb, size_t len, enum mhi_flags mflags);
#endif
Add support for suspending and resuming the channels in the MHI endpoint
stack. The channels will be moved to the suspended state during the M3
state transition and resumed during the M0 transition.
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/internal.h | 2 ++
drivers/bus/mhi/ep/main.c | 58 +++++++++++++++++++++++++++++++++++
drivers/bus/mhi/ep/sm.c | 4 +++
3 files changed, 64 insertions(+)
diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
index 70626ef3799d..e0c36346c5b8 100644
--- a/drivers/bus/mhi/ep/internal.h
+++ b/drivers/bus/mhi/ep/internal.h
@@ -231,5 +231,7 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl);
int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl);
int mhi_ep_set_ready_state(struct mhi_ep_cntrl *mhi_cntrl);
void mhi_ep_handle_syserr(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_resume_channels(struct mhi_ep_cntrl *mhi_cntrl);
+void mhi_ep_suspend_channels(struct mhi_ep_cntrl *mhi_cntrl);
#endif
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index cc3da846ed36..930b5c2005d0 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -1176,6 +1176,64 @@ void mhi_ep_power_down(struct mhi_ep_cntrl *mhi_cntrl)
}
EXPORT_SYMBOL_GPL(mhi_ep_power_down);
+void mhi_ep_suspend_channels(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ struct mhi_ep_chan *mhi_chan;
+ u32 tmp;
+ int i;
+
+ for (i = 0; i < mhi_cntrl->max_chan; i++) {
+ mhi_chan = &mhi_cntrl->mhi_chan[i];
+
+ if (!mhi_chan->mhi_dev)
+ continue;
+
+ mutex_lock(&mhi_chan->lock);
+ /* Skip if the channel is not currently running */
+ tmp = mhi_cntrl->ch_ctx_cache[i].chcfg;
+ if (FIELD_GET(CHAN_CTX_CHSTATE_MASK, tmp) != MHI_CH_STATE_RUNNING) {
+ mutex_unlock(&mhi_chan->lock);
+ continue;
+ }
+
+ dev_dbg(&mhi_chan->mhi_dev->dev, "Suspending channel\n");
+ /* Set channel state to SUSPENDED */
+ tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ tmp |= (MHI_CH_STATE_SUSPENDED << CHAN_CTX_CHSTATE_SHIFT);
+ mhi_cntrl->ch_ctx_cache[i].chcfg = tmp;
+ mutex_unlock(&mhi_chan->lock);
+ }
+}
+
+void mhi_ep_resume_channels(struct mhi_ep_cntrl *mhi_cntrl)
+{
+ struct mhi_ep_chan *mhi_chan;
+ u32 tmp;
+ int i;
+
+ for (i = 0; i < mhi_cntrl->max_chan; i++) {
+ mhi_chan = &mhi_cntrl->mhi_chan[i];
+
+ if (!mhi_chan->mhi_dev)
+ continue;
+
+ mutex_lock(&mhi_chan->lock);
+ /* Skip if the channel is not currently suspended */
+ tmp = mhi_cntrl->ch_ctx_cache[i].chcfg;
+ if (FIELD_GET(CHAN_CTX_CHSTATE_MASK, tmp) != MHI_CH_STATE_SUSPENDED) {
+ mutex_unlock(&mhi_chan->lock);
+ continue;
+ }
+
+ dev_dbg(&mhi_chan->mhi_dev->dev, "Resuming channel\n");
+ /* Set channel state to RUNNING */
+ tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ tmp |= (MHI_CH_STATE_RUNNING << CHAN_CTX_CHSTATE_SHIFT);
+ mhi_cntrl->ch_ctx_cache[i].chcfg = tmp;
+ mutex_unlock(&mhi_chan->lock);
+ }
+}
+
static void mhi_ep_release_device(struct device *dev)
{
struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
diff --git a/drivers/bus/mhi/ep/sm.c b/drivers/bus/mhi/ep/sm.c
index 50378b9f7300..0bef5d808195 100644
--- a/drivers/bus/mhi/ep/sm.c
+++ b/drivers/bus/mhi/ep/sm.c
@@ -93,8 +93,11 @@ int mhi_ep_set_m0_state(struct mhi_ep_cntrl *mhi_cntrl)
enum mhi_state old_state;
int ret;
+ /* If MHI is in M3, resume suspended channels */
spin_lock_bh(&mhi_cntrl->state_lock);
old_state = mhi_cntrl->mhi_state;
+ if (old_state == MHI_STATE_M3)
+ mhi_ep_resume_channels(mhi_cntrl);
ret = mhi_ep_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
if (ret) {
@@ -140,6 +143,7 @@ int mhi_ep_set_m3_state(struct mhi_ep_cntrl *mhi_cntrl)
}
spin_unlock_bh(&mhi_cntrl->state_lock);
+ mhi_ep_suspend_channels(mhi_cntrl);
/* Signal host that the device moved to M3 */
ret = mhi_ep_send_state_change_event(mhi_cntrl, MHI_STATE_M3);
Add uevent support to the MHI endpoint bus so that the client drivers can be
autoloaded by udev when the MHI endpoint devices get created. The client
drivers are expected to provide MODULE_DEVICE_TABLE with the MHI id_table
struct so that the alias can be exported.
The MHI endpoint bus reuses the mhi_device_id structure of the MHI host bus.
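For illustration, here is a userspace sketch of how a hypothetical client
driver's id table maps to the exported alias (names such as mhi_ep_net_ids
and the IP_SW0 channel entry are examples, not from this series; the real
struct mhi_device_id lives in <linux/mod_devicetable.h>):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define MHI_NAME_SIZE			32
#define MHI_EP_DEVICE_MODALIAS_FMT	"mhi_ep:%s"

/* Trimmed-down copy of the id structure for this sketch */
struct mhi_device_id {
	char chan[MHI_NAME_SIZE];
};

/* Hypothetical client driver table; with MODULE_DEVICE_TABLE(mhi_ep, ...)
 * file2alias would emit one "mhi_ep:<chan>" alias per entry like this. */
static const struct mhi_device_id mhi_ep_net_ids[] = {
	{ .chan = "IP_SW0" },
	{},
};

/* Build the alias the same way do_mhi_ep_entry() does */
static const char *mhi_ep_modalias(const struct mhi_device_id *id)
{
	static char alias[MHI_NAME_SIZE + 8];

	snprintf(alias, sizeof(alias), MHI_EP_DEVICE_MODALIAS_FMT, id->chan);
	return alias;
}
```

udev then matches that modalias against the aliases in modules.alias to
autoload the client module.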
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/main.c | 9 +++++++++
include/linux/mod_devicetable.h | 2 ++
scripts/mod/file2alias.c | 10 ++++++++++
3 files changed, 21 insertions(+)
diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 930b5c2005d0..42470d2a82b8 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -1619,6 +1619,14 @@ void mhi_ep_driver_unregister(struct mhi_ep_driver *mhi_drv)
}
EXPORT_SYMBOL_GPL(mhi_ep_driver_unregister);
+static int mhi_ep_uevent(struct device *dev, struct kobj_uevent_env *env)
+{
+ struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
+
+ return add_uevent_var(env, "MODALIAS=" MHI_EP_DEVICE_MODALIAS_FMT,
+ mhi_dev->name);
+}
+
static int mhi_ep_match(struct device *dev, struct device_driver *drv)
{
struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
@@ -1645,6 +1653,7 @@ struct bus_type mhi_ep_bus_type = {
.name = "mhi_ep",
.dev_name = "mhi_ep",
.match = mhi_ep_match,
+ .uevent = mhi_ep_uevent,
};
static int __init mhi_ep_init(void)
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index ae2e75d15b21..a85d453ebf67 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -835,6 +835,8 @@ struct wmi_device_id {
#define MHI_DEVICE_MODALIAS_FMT "mhi:%s"
#define MHI_NAME_SIZE 32
+#define MHI_EP_DEVICE_MODALIAS_FMT "mhi_ep:%s"
+
/**
* struct mhi_device_id - MHI device identification
* @chan: MHI channel name
diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
index 49aba862073e..90cda36f3159 100644
--- a/scripts/mod/file2alias.c
+++ b/scripts/mod/file2alias.c
@@ -1380,6 +1380,15 @@ static int do_mhi_entry(const char *filename, void *symval, char *alias)
return 1;
}
+/* Looks like: mhi_ep:S */
+static int do_mhi_ep_entry(const char *filename, void *symval, char *alias)
+{
+ DEF_FIELD_ADDR(symval, mhi_device_id, chan);
+ sprintf(alias, MHI_EP_DEVICE_MODALIAS_FMT, *chan);
+
+ return 1;
+}
+
static int do_auxiliary_entry(const char *filename, void *symval, char *alias)
{
DEF_FIELD_ADDR(symval, auxiliary_device_id, name);
@@ -1496,6 +1505,7 @@ static const struct devtable devtable[] = {
{"tee", SIZE_tee_client_device_id, do_tee_entry},
{"wmi", SIZE_wmi_device_id, do_wmi_entry},
{"mhi", SIZE_mhi_device_id, do_mhi_entry},
+ {"mhi_ep", SIZE_mhi_device_id, do_mhi_ep_entry},
{"auxiliary", SIZE_auxiliary_device_id, do_auxiliary_entry},
{"ssam", SIZE_ssam_device_id, do_ssam_entry},
{"dfl", SIZE_dfl_device_id, do_dfl_entry},
On 12/2/2021 3:35 AM, Manivannan Sadhasivam wrote:
> In preparation of the endpoint MHI support, let's move the host MHI code
> to its own "host" directory and adjust the toplevel MHI Kconfig & Makefile.
>
> While at it, let's also move the "pci_generic" driver to "host" directory
> as it is a host MHI controller driver.
>
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
Reviewed-by: Hemant Kumar <[email protected]>
--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora
Forum, a Linux Foundation Collaborative Project
On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> Hello,
>
> This series adds initial support for the Qualcomm specific Modem Host Interface
> (MHI) bus in endpoint devices like SDX55 modems. The MHI bus in endpoint devices
> communicates with the MHI bus in host machines like x86 over any physical bus
> like PCIe for data connectivity. The MHI host support is already in mainline [1]
> and been used by PCIe based modems and WLAN devices running vendor code
> (downstream).
Today I'm offering some initial review comments on this series.
I told you offline I had quite a few comments. When I looked at the
code a week or two ago I looked at the end result... So my notes
didn't line up well with the way you built it up incrementally.
(Still, the way you did it is generally good for review.)
I got through patch 9 and kind of petered out. I can look at the
rest tomorrow and/or can give you a chance to update before I
review more. I'll let you decide...
-Alex
> Overview
> ========
>
> This series aims at adding the MHI support in the endpoint devices with the goal
> of getting data connectivity using the mainline kernel running on the modems.
> Modems here refer to the combination of an APPS processor (Cortex A grade) and
> a baseband processor (DSP). The MHI bus is located in the APPS processor and it
> transfers data packets from the baseband processor to the host machine.
>
> The MHI Endpoint (MHI EP) stack proposed here is inspired by the downstream
> code written by Qualcomm. But the complete stack is mostly re-written to adapt
> to the "bus" framework and made it modular so that it can work with the upstream
> subsystems like "PCI Endpoint". The code structure of the MHI endpoint stack
> follows the MHI host stack to maintain uniformity.
>
> With this initial MHI EP stack (along with few other drivers), we can establish
> the network interface between host and endpoint over the MHI software channels
> (IP_SW0) and can do things like IP forwarding, SSH, etc...
. . .
On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> In preparation of the endpoint MHI support, let's move the host MHI code
> to its own "host" directory and adjust the toplevel MHI Kconfig & Makefile.
>
> While at it, let's also move the "pci_generic" driver to "host" directory
> as it is a host MHI controller driver.
>
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
This is a fairly trivial movement of code, so I have no real feedback.
-Alex
> ---
> drivers/bus/Makefile | 2 +-
> drivers/bus/mhi/Kconfig | 27 ++------------------
> drivers/bus/mhi/Makefile | 8 ++----
> drivers/bus/mhi/host/Kconfig | 31 +++++++++++++++++++++++
> drivers/bus/mhi/{core => host}/Makefile | 4 ++-
> drivers/bus/mhi/{core => host}/boot.c | 0
> drivers/bus/mhi/{core => host}/debugfs.c | 0
> drivers/bus/mhi/{core => host}/init.c | 0
> drivers/bus/mhi/{core => host}/internal.h | 0
> drivers/bus/mhi/{core => host}/main.c | 0
> drivers/bus/mhi/{ => host}/pci_generic.c | 0
> drivers/bus/mhi/{core => host}/pm.c | 0
> 12 files changed, 39 insertions(+), 33 deletions(-)
> create mode 100644 drivers/bus/mhi/host/Kconfig
> rename drivers/bus/mhi/{core => host}/Makefile (54%)
> rename drivers/bus/mhi/{core => host}/boot.c (100%)
> rename drivers/bus/mhi/{core => host}/debugfs.c (100%)
> rename drivers/bus/mhi/{core => host}/init.c (100%)
> rename drivers/bus/mhi/{core => host}/internal.h (100%)
> rename drivers/bus/mhi/{core => host}/main.c (100%)
> rename drivers/bus/mhi/{ => host}/pci_generic.c (100%)
> rename drivers/bus/mhi/{core => host}/pm.c (100%)
>
. . .
On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> Move the common MHI definitions in host "internal.h" to "common.h" so
> that the endpoint code can make use of them. This also avoids
> duplicating the definitions in the endpoint stack.
>
> Still, the MHI register definitions are not moved since the offsets
> vary between host and endpoint.
>
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> ---
> drivers/bus/mhi/common.h | 182 ++++++++++++++++++++++++++++++++
> drivers/bus/mhi/host/internal.h | 154 +--------------------------
> 2 files changed, 183 insertions(+), 153 deletions(-)
> create mode 100644 drivers/bus/mhi/common.h
>
> diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> new file mode 100644
> index 000000000000..0f4f3b9f3027
> --- /dev/null
> +++ b/drivers/bus/mhi/common.h
> @@ -0,0 +1,182 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2021, Linaro Ltd.
> + *
> + */
> +
> +#ifndef _MHI_COMMON_H
> +#define _MHI_COMMON_H
> +
> +#include <linux/mhi.h>
> +
> +/* Command Ring Element macros */
I know that the "new" code here is basically moved here as-is. But
I'll take this opportunity to mention some things.
Command ring elements have a pretty well-defined structure. I think
the code could be more readable if you defined it that way.
/* MHI commands only use the last two bytes of the structure */
struct mhi_cmd_tre {
__le64 reserved0; /* reserved fields must be zero */
__le32 reserved1;
__le16 reserved2;
u8 command; /* bits 23:16 of dword[1] */
u8 channel_id; /* bits 31:24 of dword[1] */
};
This isn't as good an example as the TRE and event ring element
definitions are, though.
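To sanity-check such a layout against the existing macros, a quick
userspace sketch (little-endian host assumed, plain uintN_t instead of
the __le types, and an arbitrary channel id):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Structured view of a 16-byte command ring element. Field order follows
 * the existing MHI_TRE_GET_CMD_TYPE/CHID macros: command type in bits
 * 23:16 of dword[1], channel id in bits 31:24. */
struct mhi_cmd_tre {
	uint64_t reserved0;	/* reserved fields must be zero */
	uint32_t reserved1;
	uint16_t reserved2;
	uint8_t command;	/* enum mhi_cmd_type */
	uint8_t channel_id;
};

#define MHI_CMD_START_CHAN		18
#define MHI_TRE_CMD_START_DWORD1(chid)	(((chid) << 24) | (MHI_CMD_START_CHAN << 16))

/* Encode a START_CHAN element both ways and compare the last 32 bits */
static int views_agree(uint8_t chid)
{
	struct mhi_cmd_tre tre = {
		.command = MHI_CMD_START_CHAN,
		.channel_id = chid,
	};
	uint32_t dword1 = MHI_TRE_CMD_START_DWORD1(chid);
	uint32_t tail;

	/* dword[1] occupies bytes 12..15 of the 16-byte element */
	memcpy(&tail, (const uint8_t *)&tre + 12, sizeof(tail));
	return tail == dword1;	/* holds on a little-endian host */
}
```

On a big-endian host the struct view would need cpu_to_le*() conversions,
which is exactly why the kernel version should use __le types.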
> +/* No operation command */
No need for parentheses for (0).
> +#define MHI_TRE_CMD_NOOP_PTR (0)
> +#define MHI_TRE_CMD_NOOP_DWORD0 (0)
> +#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
> +
> +/* Channel reset command */
> +#define MHI_TRE_CMD_RESET_PTR (0)
> +#define MHI_TRE_CMD_RESET_DWORD0 (0)
> +#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
> + (MHI_CMD_RESET_CHAN << 16))
> +
> +/* Channel stop command */
> +#define MHI_TRE_CMD_STOP_PTR (0)
> +#define MHI_TRE_CMD_STOP_DWORD0 (0)
> +#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
> + (MHI_CMD_STOP_CHAN << 16))
> +
> +/* Channel start command */
> +#define MHI_TRE_CMD_START_PTR (0)
> +#define MHI_TRE_CMD_START_DWORD0 (0)
> +#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
> + (MHI_CMD_START_CHAN << 16))
> +
> +#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
> +#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
> +
> +/* Event descriptor macros */
> +/* Transfer completion event */
> +#define MHI_TRE_EV_PTR(ptr) (ptr)
> +#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
> +#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
An event ring element could look a bit like this:
struct mhi_event {
 __le64 tre_pointer; /* refers to channel entry */
 __le16 len;
 u8 reserved0;
 u8 completion_code; /* enum mhi_ev_ccs */
 __le16 reserved1;
 u8 event_type; /* enum mhi_pkt_type (?) */
 u8 channel_id;
};
But different events have slightly different formats (EE
events use the byte in the "completion_code" position to
hold the EE number, for example). So it could be a union of
structs. You get the idea though.
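A minimal userspace sketch of that union idea (struct names and the exact
reserved-field split are illustrative, not from the driver; field positions
follow the MHI_TRE_GET_EV_* macros):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Transfer completion event: len in dword[0] bits 15:0, completion code
 * in bits 31:24; event type and channel id in dword[1] bits 23:16/31:24. */
struct mhi_xfer_event {
	uint64_t tre_pointer;		/* refers to a channel ring entry */
	uint16_t len;
	uint8_t reserved0;
	uint8_t completion_code;	/* enum mhi_ev_ccs */
	uint16_t reserved1;
	uint8_t event_type;		/* MHI_PKT_TYPE_TX_EVENT */
	uint8_t channel_id;
};

/* EE event: the byte in the completion-code position holds the EE */
struct mhi_ee_event {
	uint64_t reserved0;
	uint8_t reserved1[3];
	uint8_t execenv;
	uint16_t reserved2;
	uint8_t event_type;		/* MHI_PKT_TYPE_EE_EVENT */
	uint8_t reserved3;
};

/* One 16-byte ring element, viewed per event type */
union mhi_ring_element {
	struct mhi_xfer_event xfer;
	struct mhi_ee_event ee;
};
```

The kernel version would use __le64/__le16 fields; the point is only that
each variant stays exactly 16 bytes with the fields where the macros expect
them.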
The following macros operate on events, not TREs, so their names
and argument names should reflect that.
> +#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
> +#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
> +#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
> +#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
> +#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
> +#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
> +#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
> +#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
> +#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
> +
> +/* State change event */
> +#define MHI_SC_EV_PTR 0
> +#define MHI_SC_EV_DWORD0(state) (state << 24)
> +#define MHI_SC_EV_DWORD1(type) (type << 16)
> +
> +/* EE event */
> +#define MHI_EE_EV_PTR 0
> +#define MHI_EE_EV_DWORD0(ee) (ee << 24)
> +#define MHI_EE_EV_DWORD1(type) (type << 16)
> +
> +/* Command Completion event */
> +#define MHI_CC_EV_PTR(ptr) (ptr)
> +#define MHI_CC_EV_DWORD0(code) (code << 24)
> +#define MHI_CC_EV_DWORD1(type) (type << 16)
> +
> +/* Transfer descriptor macros */
> +#define MHI_TRE_DATA_PTR(ptr) (ptr)
The following macro assumes MHI_MAX_MTU is a mask (2^x - 1).
> +#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
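A small userspace sketch of what that assumption means in practice
(MHI_MAX_MTU taken to be 0xffff here):

```c
#include <assert.h>
#include <stdint.h>

/* With a 2^n - 1 value, "len & MHI_MAX_MTU" is a no-op for any in-range
 * length; for an out-of-range length it silently truncates rather than
 * bounding, and for a non-mask value it would corrupt valid lengths. */
#define MHI_MAX_MTU	0xffff

#define MHI_TRE_DATA_DWORD0(len)	((len) & MHI_MAX_MTU)

static uint32_t dword0_len(uint32_t len)
{
	return MHI_TRE_DATA_DWORD0(len);
}
```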
It's not obvious the four parameters must be 0 or 1.
What does the (2 << 16) term here represent?
> +#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
> + | (ieot << 9) | (ieob << 8) | chain)
A TRE could look like this:
struct mhi_tre {
__le64 transfer_pointer; /* dma_addr_t */
__le16 len;
__le16 reserved;
__le32 flags;
};
#define MHI_TRE_FLAGS_CHAIN cpu_to_le32(BIT(0))
#define MHI_TRE_FLAGS_IEOB cpu_to_le32(BIT(8))
#define MHI_TRE_FLAGS_IEOT cpu_to_le32(BIT(9))
#define MHI_TRE_FLAGS_BEI cpu_to_le32(BIT(10))
> +
> +/* RSC transfer descriptor macros */
> +#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
> +#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
> +#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
> +
> +enum mhi_pkt_type {
> + MHI_PKT_TYPE_INVALID = 0x0,
This is a personal choice, but I like aligning the numeric
values assigned to symbols by aligning the "=" in a
column. Maybe use 0x01, 0x02 for single digits too.
> + MHI_PKT_TYPE_NOOP_CMD = 0x1,
> + MHI_PKT_TYPE_TRANSFER = 0x2,
> + MHI_PKT_TYPE_COALESCING = 0x8,
> + MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
> + MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
> + MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
> + MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
> + MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
> + MHI_PKT_TYPE_TX_EVENT = 0x22,
> + MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
> + MHI_PKT_TYPE_EE_EVENT = 0x40,
> + MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
> + MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
> + MHI_PKT_TYPE_STALE_EVENT, /* internal event */
> +};
> +
> +/* MHI transfer completion events */
> +enum mhi_ev_ccs {
> + MHI_EV_CC_INVALID = 0x0,
> + MHI_EV_CC_SUCCESS = 0x1,
> + MHI_EV_CC_EOT = 0x2, /* End of transfer event */
> + MHI_EV_CC_OVERFLOW = 0x3,
> + MHI_EV_CC_EOB = 0x4, /* End of block event */
> + MHI_EV_CC_OOB = 0x5, /* Out of block event */
> + MHI_EV_CC_DB_MODE = 0x6,
> + MHI_EV_CC_UNDEFINED_ERR = 0x10,
> + MHI_EV_CC_BAD_TRE = 0x11,
> +};
> +
> +/* Channel state */
> +enum mhi_ch_state {
> + MHI_CH_STATE_DISABLED,
> + MHI_CH_STATE_ENABLED,
> + MHI_CH_STATE_RUNNING,
> + MHI_CH_STATE_SUSPENDED,
> + MHI_CH_STATE_STOP,
> + MHI_CH_STATE_ERROR,
> +};
> +
> +enum mhi_cmd_type {
> + MHI_CMD_NOP = 1,
> + MHI_CMD_RESET_CHAN = 16,
> + MHI_CMD_STOP_CHAN = 17,
> + MHI_CMD_START_CHAN = 18,
> +};
> +
> +#define EV_CTX_RESERVED_MASK GENMASK(7, 0)
> +#define EV_CTX_INTMODC_MASK GENMASK(15, 8)
> +#define EV_CTX_INTMODC_SHIFT 8
> +#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
> +#define EV_CTX_INTMODT_SHIFT 16
> +struct mhi_event_ctxt {
These fields should all be explicitly marked as little endian.
It so happens that Intel and ARM are little-endian, but defining them as
simple unsigned values is not correct for an external interface.
This comment applies to the command and channel context structures
also.
> + __u32 intmod;
> + __u32 ertype;
> + __u32 msivec;
> +
I think you can just define the entire struct as __packed
and __aligned(4) rather than defining all of these fields
with those attributes.
> + __u64 rbase __packed __aligned(4);
> + __u64 rlen __packed __aligned(4);
> + __u64 rp __packed __aligned(4);
> + __u64 wp __packed __aligned(4);
> +};
> +
> +#define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
> +#define CHAN_CTX_CHSTATE_SHIFT 0
Please eliminate all the _SHIFT definitions like this,
where you are already defining the corresponding _MASK.
The _SHIFT is redundant (it could lead to errors and
takes up extra space).
You are using bitfield operations (like FIELD_GET()) in
at least some places already. Use them consistently
throughout the driver. Those macros simplify the code
and obviate the need for any shift definitions.
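For example, the suspend path's channel-state update can be written with
FIELD_PREP() alone, no _SHIFT constant needed (userspace sketch; GENMASK(),
FIELD_GET() and FIELD_PREP() are re-implemented here for illustration, the
kernel versions live in <linux/bits.h> and <linux/bitfield.h> and also
validate the mask at compile time):

```c
#include <assert.h>
#include <stdint.h>

/* Minimal stand-ins for the kernel helpers */
#define GENMASK(h, l)		(((~0u) << (l)) & (~0u >> (31 - (h))))
#define FIELD_GET(mask, reg)	(((reg) & (mask)) >> __builtin_ctz(mask))
#define FIELD_PREP(mask, val)	(((val) << __builtin_ctz(mask)) & (mask))

#define CHAN_CTX_CHSTATE_MASK	GENMASK(7, 0)
#define MHI_CH_STATE_SUSPENDED	3

/* Update only the channel-state field of chcfg, as the suspend path does */
static uint32_t set_ch_state(uint32_t chcfg, uint32_t state)
{
	chcfg &= ~CHAN_CTX_CHSTATE_MASK;
	return chcfg | FIELD_PREP(CHAN_CTX_CHSTATE_MASK, state);
}
```

The shift falls out of the mask, so there is exactly one definition per
field and no way for a mask and its shift to drift apart.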
-Alex
> +#define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
> +#define CHAN_CTX_BRSTMODE_SHIFT 8
> +#define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
> +#define CHAN_CTX_POLLCFG_SHIFT 10
> +#define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
> +struct mhi_chan_ctxt {
> + __u32 chcfg;
> + __u32 chtype;
> + __u32 erindex;
> +
> + __u64 rbase __packed __aligned(4);
> + __u64 rlen __packed __aligned(4);
> + __u64 rp __packed __aligned(4);
> + __u64 wp __packed __aligned(4);
> +};
> +
> +struct mhi_cmd_ctxt {
> + __u32 reserved0;
> + __u32 reserved1;
> + __u32 reserved2;
> +
> + __u64 rbase __packed __aligned(4);
> + __u64 rlen __packed __aligned(4);
> + __u64 rp __packed __aligned(4);
> + __u64 wp __packed __aligned(4);
> +};
> +
> +extern const char * const mhi_state_str[MHI_STATE_MAX];
> +#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
> + !mhi_state_str[state]) ? \
> + "INVALID_STATE" : mhi_state_str[state])
> +
> +#endif /* _MHI_COMMON_H */
> diff --git a/drivers/bus/mhi/host/internal.h b/drivers/bus/mhi/host/internal.h
> index 3a732afaf73e..a324a76684d0 100644
> --- a/drivers/bus/mhi/host/internal.h
> +++ b/drivers/bus/mhi/host/internal.h
> @@ -7,7 +7,7 @@
> #ifndef _MHI_INT_H
> #define _MHI_INT_H
>
> -#include <linux/mhi.h>
> +#include "../common.h"
>
> extern struct bus_type mhi_bus_type;
>
> @@ -203,51 +203,6 @@ extern struct bus_type mhi_bus_type;
> #define SOC_HW_VERSION_MINOR_VER_BMSK (0x000000FF)
> #define SOC_HW_VERSION_MINOR_VER_SHFT (0)
>
> -#define EV_CTX_RESERVED_MASK GENMASK(7, 0)
> -#define EV_CTX_INTMODC_MASK GENMASK(15, 8)
> -#define EV_CTX_INTMODC_SHIFT 8
> -#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
> -#define EV_CTX_INTMODT_SHIFT 16
> -struct mhi_event_ctxt {
> - __u32 intmod;
> - __u32 ertype;
> - __u32 msivec;
> -
> - __u64 rbase __packed __aligned(4);
> - __u64 rlen __packed __aligned(4);
> - __u64 rp __packed __aligned(4);
> - __u64 wp __packed __aligned(4);
> -};
> -
> -#define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
> -#define CHAN_CTX_CHSTATE_SHIFT 0
> -#define CHAN_CTX_BRSTMODE_MASK GENMASK(9, 8)
> -#define CHAN_CTX_BRSTMODE_SHIFT 8
> -#define CHAN_CTX_POLLCFG_MASK GENMASK(15, 10)
> -#define CHAN_CTX_POLLCFG_SHIFT 10
> -#define CHAN_CTX_RESERVED_MASK GENMASK(31, 16)
> -struct mhi_chan_ctxt {
> - __u32 chcfg;
> - __u32 chtype;
> - __u32 erindex;
> -
> - __u64 rbase __packed __aligned(4);
> - __u64 rlen __packed __aligned(4);
> - __u64 rp __packed __aligned(4);
> - __u64 wp __packed __aligned(4);
> -};
> -
> -struct mhi_cmd_ctxt {
> - __u32 reserved0;
> - __u32 reserved1;
> - __u32 reserved2;
> -
> - __u64 rbase __packed __aligned(4);
> - __u64 rlen __packed __aligned(4);
> - __u64 rp __packed __aligned(4);
> - __u64 wp __packed __aligned(4);
> -};
> -
> struct mhi_ctxt {
> struct mhi_event_ctxt *er_ctxt;
> struct mhi_chan_ctxt *chan_ctxt;
> @@ -267,108 +222,6 @@ struct bhi_vec_entry {
> u64 size;
> };
>
> -enum mhi_cmd_type {
> - MHI_CMD_NOP = 1,
> - MHI_CMD_RESET_CHAN = 16,
> - MHI_CMD_STOP_CHAN = 17,
> - MHI_CMD_START_CHAN = 18,
> -};
> -
> -/* No operation command */
> -#define MHI_TRE_CMD_NOOP_PTR (0)
> -#define MHI_TRE_CMD_NOOP_DWORD0 (0)
> -#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
> -
> -/* Channel reset command */
> -#define MHI_TRE_CMD_RESET_PTR (0)
> -#define MHI_TRE_CMD_RESET_DWORD0 (0)
> -#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
> - (MHI_CMD_RESET_CHAN << 16))
> -
> -/* Channel stop command */
> -#define MHI_TRE_CMD_STOP_PTR (0)
> -#define MHI_TRE_CMD_STOP_DWORD0 (0)
> -#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
> - (MHI_CMD_STOP_CHAN << 16))
> -
> -/* Channel start command */
> -#define MHI_TRE_CMD_START_PTR (0)
> -#define MHI_TRE_CMD_START_DWORD0 (0)
> -#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
> - (MHI_CMD_START_CHAN << 16))
> -
> -#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
> -#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
> -
> -/* Event descriptor macros */
> -#define MHI_TRE_EV_PTR(ptr) (ptr)
> -#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
> -#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
> -#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
> -#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
> -#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
> -#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
> -#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
> -#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
> -#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
> -#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
> -#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
> -
> -/* Transfer descriptor macros */
> -#define MHI_TRE_DATA_PTR(ptr) (ptr)
> -#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
> -#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
> - | (ieot << 9) | (ieob << 8) | chain)
> -
> -/* RSC transfer descriptor macros */
> -#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
> -#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
> -#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
> -
> -enum mhi_pkt_type {
> - MHI_PKT_TYPE_INVALID = 0x0,
> - MHI_PKT_TYPE_NOOP_CMD = 0x1,
> - MHI_PKT_TYPE_TRANSFER = 0x2,
> - MHI_PKT_TYPE_COALESCING = 0x8,
> - MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
> - MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
> - MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
> - MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
> - MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
> - MHI_PKT_TYPE_TX_EVENT = 0x22,
> - MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
> - MHI_PKT_TYPE_EE_EVENT = 0x40,
> - MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
> - MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
> - MHI_PKT_TYPE_STALE_EVENT, /* internal event */
> -};
> -
> -/* MHI transfer completion events */
> -enum mhi_ev_ccs {
> - MHI_EV_CC_INVALID = 0x0,
> - MHI_EV_CC_SUCCESS = 0x1,
> - MHI_EV_CC_EOT = 0x2, /* End of transfer event */
> - MHI_EV_CC_OVERFLOW = 0x3,
> - MHI_EV_CC_EOB = 0x4, /* End of block event */
> - MHI_EV_CC_OOB = 0x5, /* Out of block event */
> - MHI_EV_CC_DB_MODE = 0x6,
> - MHI_EV_CC_UNDEFINED_ERR = 0x10,
> - MHI_EV_CC_BAD_TRE = 0x11,
> -};
> -
> -enum mhi_ch_state {
> - MHI_CH_STATE_DISABLED = 0x0,
> - MHI_CH_STATE_ENABLED = 0x1,
> - MHI_CH_STATE_RUNNING = 0x2,
> - MHI_CH_STATE_SUSPENDED = 0x3,
> - MHI_CH_STATE_STOP = 0x4,
> - MHI_CH_STATE_ERROR = 0x5,
> -};
> -
> enum mhi_ch_state_type {
> MHI_CH_STATE_TYPE_RESET,
> MHI_CH_STATE_TYPE_STOP,
> @@ -409,11 +262,6 @@ extern const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX];
> #define TO_DEV_STATE_TRANS_STR(state) (((state) >= DEV_ST_TRANSITION_MAX) ? \
> "INVALID_STATE" : dev_state_tran_str[state])
>
> -extern const char * const mhi_state_str[MHI_STATE_MAX];
> -#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
> - !mhi_state_str[state]) ? \
> - "INVALID_STATE" : mhi_state_str[state])
> -
> /* internal power states */
> enum mhi_pm_state {
> MHI_PM_STATE_DISABLE,
>
On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> mhi_state_str[] array could be used by MHI endpoint stack also. So let's
> make the array as "static const" and move it inside the "common.h" header
> so that the endpoint stack could also make use of it. Otherwise, the
> structure definition should be present in both host and endpoint stack and
> that'll result in duplication.
>
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
This results in common source code (which is good), but it will be
duplicated in everything that includes this file.
Do you have no code common to both the endpoint and host?
You could add some (in drivers/bus/mhi/common.c, for example).
If you don't, I have a different suggestion, below. It does
basically the same thing you're doing here, but I much prefer
duplicating an inline function to a data structure.
> ---
> drivers/bus/mhi/common.h | 13 ++++++++++++-
> drivers/bus/mhi/host/init.c | 12 ------------
> 2 files changed, 12 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> index 0f4f3b9f3027..2ea438205617 100644
> --- a/drivers/bus/mhi/common.h
> +++ b/drivers/bus/mhi/common.h
> @@ -174,7 +174,18 @@ struct mhi_cmd_ctxt {
> __u64 wp __packed __aligned(4);
> };
>
> -extern const char * const mhi_state_str[MHI_STATE_MAX];
> +static const char * const mhi_state_str[MHI_STATE_MAX] = {
> + [MHI_STATE_RESET] = "RESET",
> + [MHI_STATE_READY] = "READY",
> + [MHI_STATE_M0] = "M0",
> + [MHI_STATE_M1] = "M1",
> + [MHI_STATE_M2] = "M2",
> + [MHI_STATE_M3] = "M3",
> + [MHI_STATE_M3_FAST] = "M3 FAST",
> + [MHI_STATE_BHI] = "BHI",
> + [MHI_STATE_SYS_ERR] = "SYS ERROR",
> +};
> +
> #define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
> !mhi_state_str[state]) ? \
> "INVALID_STATE" : mhi_state_str[state])
You could easily and safely define this as an inline function instead.
#define MHI_STATE_CASE(x) case MHI_STATE_ ## x: return #x
static inline const char *mhi_state_string(enum mhi_state state)
{
switch (state) {
MHI_STATE_CASE(RESET);
MHI_STATE_CASE(READY);
MHI_STATE_CASE(M0);
MHI_STATE_CASE(M1);
MHI_STATE_CASE(M2);
MHI_STATE_CASE(M3);
MHI_STATE_CASE(M3_FAST);
MHI_STATE_CASE(BHI);
MHI_STATE_CASE(SYS_ERR);
default: return "(unrecognized MHI state)";
}
}
#undef MHI_STATE_CASE
-Alex
> diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
> index 5aaca6d0f52b..fa904e7468d8 100644
> --- a/drivers/bus/mhi/host/init.c
> +++ b/drivers/bus/mhi/host/init.c
> @@ -44,18 +44,6 @@ const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
> [DEV_ST_TRANSITION_DISABLE] = "DISABLE",
> };
>
> -const char * const mhi_state_str[MHI_STATE_MAX] = {
> - [MHI_STATE_RESET] = "RESET",
> - [MHI_STATE_READY] = "READY",
> - [MHI_STATE_M0] = "M0",
> - [MHI_STATE_M1] = "M1",
> - [MHI_STATE_M2] = "M2",
> - [MHI_STATE_M3] = "M3",
> - [MHI_STATE_M3_FAST] = "M3 FAST",
> - [MHI_STATE_BHI] = "BHI",
> - [MHI_STATE_SYS_ERR] = "SYS ERROR",
> -};
> -
> const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX] = {
> [MHI_CH_STATE_TYPE_RESET] = "RESET",
> [MHI_CH_STATE_TYPE_STOP] = "STOP",
>
On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> Cleanup includes:
>
> 1. Moving the MHI register bit definitions to common.h header (only the
> register offsets differ between host and ep not the bit definitions)
The register offsets do differ, but the group of registers for the host
differs from the group of registers for the endpoint by a fixed amount.
(MHIREGLEN = 0x0000 for host, or 0x100 for endpoint; CRCBAP_LOWER is
0x0068 for host, 0x0168 for endpoint.)
In other words, can you instead use the same symbolic offsets, but
have the endpoint add 0x0100 to them all? It would make the fact
that they're both referencing the same basic in-memory structure
more obvious.
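To make the suggestion concrete, here is a rough userspace sketch of what I mean (the `mhi_ep_reg_offset()` helper and the `MHI_EP_REG_BASE` name are made up for illustration; the register offsets are the ones from your patch):

```c
#include <assert.h>

/* one shared set of symbolic offsets, relative to the MHI register block */
#define MHIREGLEN         0x0000
#define CRCBAP_LOWER      0x0068

/* the endpoint's registers sit a fixed 0x100 above the host's */
#define MHI_EP_REG_BASE   0x0100

static unsigned int mhi_ep_reg_offset(unsigned int reg)
{
	/* endpoint side applies the fixed base to the shared offset */
	return MHI_EP_REG_BASE + reg;
}
```

With that, the endpoint's CRCBAP_LOWER comes out at 0x0168 as before, but it's obvious both sides are walking the same structure.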
> 2. Using the GENMASK macro for masks
> 3. Removing brackets for single values
> 4. Using lowercase for hex values
Yay!!! For all three of the above.
More below.
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> ---
> drivers/bus/mhi/common.h | 129 ++++++++++++---
> drivers/bus/mhi/host/internal.h | 282 +++++++++++---------------------
> 2 files changed, 207 insertions(+), 204 deletions(-)
>
> diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> index 2ea438205617..c1272d61e54e 100644
> --- a/drivers/bus/mhi/common.h
> +++ b/drivers/bus/mhi/common.h
> @@ -9,32 +9,123 @@
>
> #include <linux/mhi.h>
>
> +/* MHI register bits */
> +#define MHIREGLEN_MHIREGLEN_MASK GENMASK(31, 0)
> +#define MHIREGLEN_MHIREGLEN_SHIFT 0
Again, please eliminate all _SHIFT definitions where they define
the low bit position of a mask.
Maybe you can add some underscores for readability?
Even if you don't do that, you could add a comment here or there to
explain what certain abbreviations stand for, to make it easier to
understand. E.g., CHDB = channel doorbell, CCA = channel context
array, BAP = base address pointer.
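For what it's worth, once every field has a GENMASK()-style mask, the shift can be derived from the mask itself, which is why the _SHIFT definitions are redundant. A rough userspace illustration (these are simplified stand-ins for the kernel's GENMASK()/FIELD_GET(); the kernel versions additionally require a constant mask):

```c
#include <assert.h>
#include <stdint.h>

/* simplified stand-ins for the kernel's GENMASK() and FIELD_GET() */
#define GENMASK(h, l) \
	((~0U >> (31 - (h))) & (~0U << (l)))
#define FIELD_GET(mask, reg) \
	(((reg) & (mask)) >> __builtin_ctz(mask))

/* mask from the patch; no MHICFG_NER_SHIFT needed */
#define MHICFG_NER_MASK GENMASK(23, 16)
```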
-Alex
> +#define MHIVER_MHIVER_MASK GENMASK(31, 0)
> +#define MHIVER_MHIVER_SHIFT 0
> +
> +#define MHICFG_NHWER_MASK GENMASK(31, 24)
> +#define MHICFG_NHWER_SHIFT 24
> +#define MHICFG_NER_MASK GENMASK(23, 16)
> +#define MHICFG_NER_SHIFT 16
> +#define MHICFG_NHWCH_MASK GENMASK(15, 8)
> +#define MHICFG_NHWCH_SHIFT 8
> +#define MHICFG_NCH_MASK GENMASK(7, 0)
> +#define MHICFG_NCH_SHIFT 0
. . .
On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> This commit adds support for registering MHI endpoint controller drivers
> with the MHI endpoint stack. MHI endpoint controller drivers manages
> the interaction with the host machines such as x86. They are also the
> MHI endpoint bus master in charge of managing the physical link between the
> host and endpoint device.
>
> The endpoint controller driver encloses all information about the
> underlying physical bus like PCIe. The registration process involves
> parsing the channel configuration and allocating an MHI EP device.
>
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
See below. Lots of little things, some I've said before.
> ---
> drivers/bus/mhi/Kconfig | 1 +
> drivers/bus/mhi/Makefile | 3 +
> drivers/bus/mhi/ep/Kconfig | 10 ++
> drivers/bus/mhi/ep/Makefile | 2 +
> drivers/bus/mhi/ep/internal.h | 158 +++++++++++++++++++++++
> drivers/bus/mhi/ep/main.c | 231 ++++++++++++++++++++++++++++++++++
> include/linux/mhi_ep.h | 140 +++++++++++++++++++++
> 7 files changed, 545 insertions(+)
> create mode 100644 drivers/bus/mhi/ep/Kconfig
> create mode 100644 drivers/bus/mhi/ep/Makefile
> create mode 100644 drivers/bus/mhi/ep/internal.h
> create mode 100644 drivers/bus/mhi/ep/main.c
> create mode 100644 include/linux/mhi_ep.h
>
> diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
> index 4748df7f9cd5..b39a11e6c624 100644
> --- a/drivers/bus/mhi/Kconfig
> +++ b/drivers/bus/mhi/Kconfig
> @@ -6,3 +6,4 @@
> #
>
> source "drivers/bus/mhi/host/Kconfig"
> +source "drivers/bus/mhi/ep/Kconfig"
> diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
> index 5f5708a249f5..46981331b38f 100644
> --- a/drivers/bus/mhi/Makefile
> +++ b/drivers/bus/mhi/Makefile
> @@ -1,2 +1,5 @@
> # Host MHI stack
> obj-y += host/
> +
> +# Endpoint MHI stack
> +obj-y += ep/
> diff --git a/drivers/bus/mhi/ep/Kconfig b/drivers/bus/mhi/ep/Kconfig
> new file mode 100644
> index 000000000000..229c71397b30
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/Kconfig
> @@ -0,0 +1,10 @@
> +config MHI_BUS_EP
> + tristate "Modem Host Interface (MHI) bus Endpoint implementation"
> + help
> + Bus driver for MHI protocol. Modem Host Interface (MHI) is a
> + communication protocol used by the host processors to control
> + and communicate with modem devices over a high speed peripheral
> + bus or shared memory.
> +
> + MHI_BUS_EP implements the MHI protocol for the endpoint devices
> + like SDX55 modem connected to the host machine over PCIe.
> diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> new file mode 100644
> index 000000000000..64e29252b608
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/Makefile
> @@ -0,0 +1,2 @@
> +obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> +mhi_ep-y := main.o
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> new file mode 100644
> index 000000000000..7b164daf4332
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -0,0 +1,158 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2021, Linaro Ltd.
> + *
> + */
> +
> +#ifndef _MHI_EP_INTERNAL_
> +#define _MHI_EP_INTERNAL_
> +
> +#include <linux/bitfield.h>
> +
> +#include "../common.h"
> +
> +extern struct bus_type mhi_ep_bus_type;
> +
> +/* MHI register definitions */
> +#define MHIREGLEN 0x100
I really think it would be nice if these could be common between the
host and endpoint.
> +#define MHIVER 0x108
> +#define MHICFG 0x110
> +#define CHDBOFF 0x118
> +#define ERDBOFF 0x120
> +#define BHIOFF 0x128
> +#define DEBUGOFF 0x130
> +#define MHICTRL 0x138
> +#define MHISTATUS 0x148
> +#define CCABAP_LOWER 0x158
> +#define CCABAP_HIGHER 0x15c
> +#define ECABAP_LOWER 0x160
> +#define ECABAP_HIGHER 0x164
> +#define CRCBAP_LOWER 0x168
> +#define CRCBAP_HIGHER 0x16c
> +#define CRDB_LOWER 0x170
> +#define CRDB_HIGHER 0x174
> +#define MHICTRLBASE_LOWER 0x180
> +#define MHICTRLBASE_HIGHER 0x184
> +#define MHICTRLLIMIT_LOWER 0x188
> +#define MHICTRLLIMIT_HIGHER 0x18c
> +#define MHIDATABASE_LOWER 0x198
> +#define MHIDATABASE_HIGHER 0x19c
> +#define MHIDATALIMIT_LOWER 0x1a0
> +#define MHIDATALIMIT_HIGHER 0x1a4
It wouldn't hurt to have a one or two line comment explaining how
these compute the offset for a given channel or event ring's
doorbell register.
I think you could use decimal for the multiplier (8 rather than 0x8),
but maybe you prefer not mixing that with a hex base offset.
Overall, though, take a look at the macros you define like this
and see whether you can settle on a consistent form. In some
places you use decimal, in others hex. It's not a big deal, but
consistency always helps.
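On the doorbell comment: something as small as this would do (offsets are the ones from your patch; the comment wording is just a suggestion):

```c
#include <assert.h>

/*
 * Doorbells are 64-bit registers stored as consecutive LOWER/HIGHER
 * 32-bit pairs, so channel n's doorbell starts at 0x400 + 8 * n and
 * event ring n's at 0x800 + 8 * n.
 */
#define CHDB_LOWER_n(n)  (0x400 + 0x8 * (n))
#define CHDB_HIGHER_n(n) (0x404 + 0x8 * (n))
```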
> +#define CHDB_LOWER_n(n) (0x400 + 0x8 * (n))
> +#define CHDB_HIGHER_n(n) (0x404 + 0x8 * (n))
> +#define ERDB_LOWER_n(n) (0x800 + 0x8 * (n))
> +#define ERDB_HIGHER_n(n) (0x804 + 0x8 * (n))
> +#define BHI_INTVEC 0x220
> +#define BHI_EXECENV 0x228
> +#define BHI_IMGTXDB 0x218
> +
Will the AP always be an "A7"?
> +#define MHI_CTRL_INT_STATUS_A7 0x4
> +#define MHI_CTRL_INT_STATUS_A7_MSK BIT(0)
> +#define MHI_CTRL_INT_STATUS_CRDB_MSK BIT(1)
> +#define MHI_CHDB_INT_STATUS_A7_n(n) (0x28 + 0x4 * (n))
> +#define MHI_ERDB_INT_STATUS_A7_n(n) (0x38 + 0x4 * (n))
> +
> +#define MHI_CTRL_INT_CLEAR_A7 0x4c
> +#define MHI_CTRL_INT_MMIO_WR_CLEAR BIT(2)
> +#define MHI_CTRL_INT_CRDB_CLEAR BIT(1)
> +#define MHI_CTRL_INT_CRDB_MHICTRL_CLEAR BIT(0)
> +
> +#define MHI_CHDB_INT_CLEAR_A7_n(n) (0x70 + 0x4 * (n))
> +#define MHI_CHDB_INT_CLEAR_A7_n_CLEAR_ALL GENMASK(31, 0)
> +#define MHI_ERDB_INT_CLEAR_A7_n(n) (0x80 + 0x4 * (n))
> +#define MHI_ERDB_INT_CLEAR_A7_n_CLEAR_ALL GENMASK(31, 0)
> +
The term "MASK" here might be confusing. Does a bit set in
this mask register indicate an interrupt is enabled, or
disabled (masked)? A comment (here or where used) could
clear it up without renaming the symbol.
> +#define MHI_CTRL_INT_MASK_A7 0x94
> +#define MHI_CTRL_INT_MASK_A7_MASK_MASK GENMASK(1, 0)
> +#define MHI_CTRL_MHICTRL_MASK BIT(0)
> +#define MHI_CTRL_MHICTRL_SHFT 0
> +#define MHI_CTRL_CRDB_MASK BIT(1)
> +#define MHI_CTRL_CRDB_SHFT 1
Use SHIFT or SHFT (not both), consistently. (But get rid of
this shift definition, and others like it...)
> +#define MHI_CHDB_INT_MASK_A7_n(n) (0xb8 + 0x4 * (n))
> +#define MHI_CHDB_INT_MASK_A7_n_EN_ALL GENMASK(31, 0)
> +#define MHI_ERDB_INT_MASK_A7_n(n) (0xc8 + 0x4 * (n))
> +#define MHI_ERDB_INT_MASK_A7_n_EN_ALL GENMASK(31, 0)
> +
> +#define NR_OF_CMD_RINGS 1
Is there ever any reason to believe there will be more than one
command ring for a given MHI instance? I kept seeing loops over
NR_OF_CMD_RINGS, and it just seemed silly.
> +#define MHI_MASK_ROWS_CH_EV_DB 4
> +#define MHI_MASK_CH_EV_LEN 32
> +
> +/* Generic context */
Maybe define the entire structure as packed and aligned.
> +struct mhi_generic_ctx {
> + __u32 reserved0;
> + __u32 reserved1;
> + __u32 reserved2;
> +
> + __u64 rbase __packed __aligned(4);
> + __u64 rlen __packed __aligned(4);
> + __u64 rp __packed __aligned(4);
> + __u64 wp __packed __aligned(4);
> +};
Are these structures defined separately for host and endpoint?
(I've lost track... If they are, it would be better to define
them in common.)
> +
> +enum mhi_ep_ring_state {
> + RING_STATE_UINT = 0,
I think "UINT" is a *terrible* abbreviation to represent
"uninitialized".
> + RING_STATE_IDLE,
Since there are only two states, uninitialized or idle, maybe
you can get rid of this enum definition and just define the
ring state with "bool initialized".
> +};
> +
> +enum mhi_ep_ring_type {
Is the value 0 significant to hardware? If not, there's no need
to define the numeric value on this first symbol.
> + RING_TYPE_CMD = 0,
> + RING_TYPE_ER,
> + RING_TYPE_CH,
I don't think you ever use RING_TYPE_INVALID, so it does
not need to be defined.
> + RING_TYPE_INVALID,
> +};
> +
I prefer a more meaningful structure definition than this (as
I think I mentioned on the first patch).
> +struct mhi_ep_ring_element {
> + u64 ptr;
> + u32 dword[2];
> +};
> +
> +/* Transfer ring element type */
Not transfer ring, just ring. Command, transfer, and event
ring descriptors are different things.
> +union mhi_ep_ring_ctx {
> + struct mhi_cmd_ctxt cmd;
> + struct mhi_event_ctxt ev;
> + struct mhi_chan_ctxt ch;
> + struct mhi_generic_ctx generic;
> +};
> +
> +struct mhi_ep_ring {
> + struct list_head list;
> + struct mhi_ep_cntrl *mhi_cntrl;
> + int (*ring_cb)(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el);
> + union mhi_ep_ring_ctx *ring_ctx;
> + struct mhi_ep_ring_element *ring_cache;
> + enum mhi_ep_ring_type type;
> + enum mhi_ep_ring_state state;
> + size_t rd_offset;
> + size_t wr_offset;
> + size_t ring_size;
> + u32 db_offset_h;
> + u32 db_offset_l;
> + u32 ch_id;
> +};
> +
> +struct mhi_ep_cmd {
> + struct mhi_ep_ring ring;
> +};
> +
> +struct mhi_ep_event {
> + struct mhi_ep_ring ring;
> +};
> +
> +struct mhi_ep_chan {
> + char *name;
> + struct mhi_ep_device *mhi_dev;
> + struct mhi_ep_ring ring;
> + struct mutex lock;
> + void (*xfer_cb)(struct mhi_ep_device *mhi_dev, struct mhi_result *result);
> + enum mhi_ch_state state;
> + enum dma_data_direction dir;
> + u64 tre_loc;
> + u32 tre_size;
> + u32 tre_bytes_left;
> + u32 chan;
> + bool skip_td;
> +};
> +
> +#endif
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> new file mode 100644
> index 000000000000..db664360c8ab
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -0,0 +1,231 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * MHI Bus Endpoint stack
> + *
> + * Copyright (C) 2021 Linaro Ltd.
> + * Author: Manivannan Sadhasivam <[email protected]>
> + */
> +
> +#include <linux/bitfield.h>
> +#include <linux/delay.h>
> +#include <linux/dma-direction.h>
> +#include <linux/interrupt.h>
> +#include <linux/io.h>
> +#include <linux/mhi_ep.h>
> +#include <linux/mod_devicetable.h>
> +#include <linux/module.h>
> +#include "internal.h"
> +
> +static DEFINE_IDA(mhi_ep_cntrl_ida);
> +
> +static void mhi_ep_release_device(struct device *dev)
> +{
> + struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +
> + /*
> + * We need to set the mhi_chan->mhi_dev to NULL here since the MHI
> + * devices for the channels will only get created if the mhi_dev
> + * associated with it is NULL.
Maybe say where in the code the behavior the comment above describes
actually happens.
> + */
> + if (mhi_dev->ul_chan)
> + mhi_dev->ul_chan->mhi_dev = NULL;
> +
> + if (mhi_dev->dl_chan)
> + mhi_dev->dl_chan->mhi_dev = NULL;
> +
> + kfree(mhi_dev);
> +}
> +
> +static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> + struct mhi_ep_device *mhi_dev;
> + struct device *dev;
> +
> + mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
> + if (!mhi_dev)
> + return ERR_PTR(-ENOMEM);
> +
> + dev = &mhi_dev->dev;
> + device_initialize(dev);
> + dev->bus = &mhi_ep_bus_type;
> + dev->release = mhi_ep_release_device;
> +
I think you should pass the MHI device type as argument here, and
set it within this function. Then use it in the test below, rather
than assuming the mhi_dev pointer will be NULL for the controller
only. Maybe you should set the mhi_dev pointer here as well.
> + if (mhi_cntrl->mhi_dev) {
> + /* for MHI client devices, parent is the MHI controller device */
> + dev->parent = &mhi_cntrl->mhi_dev->dev;
> + } else {
> + /* for MHI controller device, parent is the bus device (e.g. PCI EPF) */
> + dev->parent = mhi_cntrl->cntrl_dev;
> + }
> +
> + mhi_dev->mhi_cntrl = mhi_cntrl;
> +
> + return mhi_dev;
> +}
> +
> +static int parse_ch_cfg(struct mhi_ep_cntrl *mhi_cntrl,
> + const struct mhi_ep_cntrl_config *config)
> +{
> + const struct mhi_ep_channel_config *ch_cfg;
> + struct device *dev = mhi_cntrl->cntrl_dev;
> + u32 chan, i;
> + int ret = -EINVAL;
> +
> + mhi_cntrl->max_chan = config->max_channels;
> +
> + /*
> + * Allocate max_channels supported by the MHI endpoint and populate
> + * only the defined channels
> + */
> + mhi_cntrl->mhi_chan = kcalloc(mhi_cntrl->max_chan, sizeof(*mhi_cntrl->mhi_chan),
> + GFP_KERNEL);
> + if (!mhi_cntrl->mhi_chan)
> + return -ENOMEM;
> +
> + for (i = 0; i < config->num_channels; i++) {
> + struct mhi_ep_chan *mhi_chan;
> +
> + ch_cfg = &config->ch_cfg[i];
> +
> + chan = ch_cfg->num;
> + if (chan >= mhi_cntrl->max_chan) {
> + dev_err(dev, "Channel %d not available\n", chan);
> + goto error_chan_cfg;
> + }
> +
> + mhi_chan = &mhi_cntrl->mhi_chan[chan];
> + mhi_chan->name = ch_cfg->name;
> + mhi_chan->chan = chan;
> + mhi_chan->dir = ch_cfg->dir;
> + mutex_init(&mhi_chan->lock);
Move the error check below earlier, before assigning other values.
> + /* Bi-directional and direction less channels are not supported */
> + if (mhi_chan->dir == DMA_BIDIRECTIONAL || mhi_chan->dir == DMA_NONE) {
> + dev_err(dev, "Invalid channel configuration\n");
> + goto error_chan_cfg;
> + }
> + }
> +
> + return 0;
> +
> +error_chan_cfg:
> + kfree(mhi_cntrl->mhi_chan);
> +
> + return ret;
> +}
> +
> +/*
> + * Allocate channel and command rings here. Event rings will be allocated
> + * in mhi_ep_power_up() as the config comes from the host.
> + */
> +int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> + const struct mhi_ep_cntrl_config *config)
> +{
> + struct mhi_ep_device *mhi_dev;
Perhaps you could use a convention like "ep_dev" (and later, "ep_drv")
to represent an mhi_ep_device, different from "mhi_dev" representing
an mhi_device.
> + int ret;
> +
> + if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
> + return -EINVAL;
> +
> + ret = parse_ch_cfg(mhi_cntrl, config);
> + if (ret)
> + return ret;
> +
NR_OF_CMD_RINGS is 1, and I think always will be, right? This and
elsewhere could be simplified if we just accept that.
> + mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS, sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
> + if (!mhi_cntrl->mhi_cmd) {
> + ret = -ENOMEM;
> + goto err_free_ch;
> + }
> +
> + /* Set controller index */
> + mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
> + if (mhi_cntrl->index < 0) {
> + ret = mhi_cntrl->index;
> + goto err_free_cmd;
> + }
> +
> + /* Allocate the controller device */
> + mhi_dev = mhi_ep_alloc_device(mhi_cntrl);
> + if (IS_ERR(mhi_dev)) {
> + dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate controller device\n");
> + ret = PTR_ERR(mhi_dev);
> + goto err_ida_free;
> + }
> +
> + mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
> + dev_set_name(&mhi_dev->dev, "mhi_ep%d", mhi_cntrl->index);
> + mhi_dev->name = dev_name(&mhi_dev->dev);
> +
> + ret = device_add(&mhi_dev->dev);
> + if (ret)
> + goto err_release_dev;
goto err_put_device?
> +
> + mhi_cntrl->mhi_dev = mhi_dev;
> +
> + dev_dbg(&mhi_dev->dev, "MHI EP Controller registered\n");
> +
> + return 0;
> +
> +err_release_dev:
> + put_device(&mhi_dev->dev);
> +err_ida_free:
> + ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
> +err_free_cmd:
> + kfree(mhi_cntrl->mhi_cmd);
> +err_free_ch:
> + kfree(mhi_cntrl->mhi_chan);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_register_controller);
> +
> +void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
> +{
> + struct mhi_ep_device *mhi_dev = mhi_cntrl->mhi_dev;
> +
> + kfree(mhi_cntrl->mhi_cmd);
> + kfree(mhi_cntrl->mhi_chan);
> +
> + device_del(&mhi_dev->dev);
> + put_device(&mhi_dev->dev);
> +
> + ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
> +}
> +EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
> +
> +static int mhi_ep_match(struct device *dev, struct device_driver *drv)
> +{
> + struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> +
> + /*
> + * If the device is a controller type then there is no client driver
> + * associated with it
> + */
> + if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
> + return 0;
> +
> + return 0;
> +};
> +
> +struct bus_type mhi_ep_bus_type = {
> + .name = "mhi_ep",
> + .dev_name = "mhi_ep",
> + .match = mhi_ep_match,
> +};
> +
> +static int __init mhi_ep_init(void)
> +{
> + return bus_register(&mhi_ep_bus_type);
> +}
> +
> +static void __exit mhi_ep_exit(void)
> +{
> + bus_unregister(&mhi_ep_bus_type);
> +}
> +
> +postcore_initcall(mhi_ep_init);
> +module_exit(mhi_ep_exit);
> +
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("MHI Bus Endpoint stack");
> +MODULE_AUTHOR("Manivannan Sadhasivam <[email protected]>");
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> new file mode 100644
> index 000000000000..14fd40af8974
> --- /dev/null
> +++ b/include/linux/mhi_ep.h
. . .
> +/**
> + * struct mhi_ep_device - Structure representing an MHI Endpoint device that binds
> + * to channels or is associated with controllers
> + * @dev: Driver model device node for the MHI Endpoint device
> + * @mhi_cntrl: Controller the device belongs to
> + * @id: Pointer to MHI Endpoint device ID struct
> + * @name: Name of the associated MHI Endpoint device
> + * @ul_chan: UL channel for the device
> + * @dl_chan: DL channel for the device
> + * @dev_type: MHI device type
> + * @ul_chan_id: Channel id for UL transfer
> + * @dl_chan_id: Channel id for DL transfer
> + */
> +struct mhi_ep_device {
> + struct device dev;
> + struct mhi_ep_cntrl *mhi_cntrl;
> + const struct mhi_device_id *id;
> + const char *name;
> + struct mhi_ep_chan *ul_chan;
> + struct mhi_ep_chan *dl_chan;
Could the dev_type just be: bool controller?
> + enum mhi_device_type dev_type;
> + int ul_chan_id;
Can't you just use ul_chan->chan and dl_chan->chan?
In any case, I think the channel ids should be u32.
-Alex
> + int dl_chan_id;
> +};
> +
> +#define to_mhi_ep_device(dev) container_of(dev, struct mhi_ep_device, dev)
> +
> +/**
> + * mhi_ep_register_controller - Register MHI Endpoint controller
> + * @mhi_cntrl: MHI Endpoint controller to register
> + * @config: Configuration to use for the controller
> + *
> + * Return: 0 if controller registrations succeeds, a negative error code otherwise.
> + */
> +int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> + const struct mhi_ep_cntrl_config *config);
> +
> +/**
> + * mhi_ep_unregister_controller - Unregister MHI Endpoint controller
> + * @mhi_cntrl: MHI Endpoint controller to unregister
> + */
> +void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl);
> +
> +#endif
>
On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> This commit adds support for registering MHI endpoint client drivers
> with the MHI endpoint stack. MHI endpoint client drivers binds to one
> or more MHI endpoint devices inorder to send and receive the upper-layer
> protocol packets like IP packets, modem control messages, and diagnostics
> messages over MHI bus.
>
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> ---
> drivers/bus/mhi/ep/main.c | 85 +++++++++++++++++++++++++++++++++++++++
> include/linux/mhi_ep.h | 53 ++++++++++++++++++++++++
> 2 files changed, 138 insertions(+)
>
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index db664360c8ab..ce0f99f22058 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -193,9 +193,88 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
> }
> EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
>
> +static int mhi_ep_driver_probe(struct device *dev)
> +{
> + struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> + struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
> + struct mhi_ep_chan *ul_chan = mhi_dev->ul_chan;
> + struct mhi_ep_chan *dl_chan = mhi_dev->dl_chan;
> +
Either ul_chan or dl_chan must be set, right? Check this.
Otherwise I think this looks OK.
-Alex
> + if (ul_chan)
> + ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
> +
> + if (dl_chan)
> + dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
> +
> + return mhi_drv->probe(mhi_dev, mhi_dev->id);
> +}
. . .
On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> This commit adds support for creating and destroying MHI endpoint devices.
> The MHI endpoint devices binds to the MHI endpoint channels and are used
> to transfer data between MHI host and endpoint device.
>
> There is a single MHI EP device for each channel pair. The devices will be
> created when the corresponding channels has been started by the host and
> will be destroyed during MHI EP power down and reset.
>
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> ---
> drivers/bus/mhi/ep/main.c | 85 +++++++++++++++++++++++++++++++++++++++
> 1 file changed, 85 insertions(+)
>
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index ce0f99f22058..f0b5f49db95a 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -63,6 +63,91 @@ static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl)
> return mhi_dev;
> }
>
> +static int mhi_ep_create_device(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id)
> +{
> + struct mhi_ep_device *mhi_dev;
> + struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ch_id];
> + int ret;
> +
> + mhi_dev = mhi_ep_alloc_device(mhi_cntrl);
> + if (IS_ERR(mhi_dev))
> + return PTR_ERR(mhi_dev);
> +
> + mhi_dev->dev_type = MHI_DEVICE_XFER;
Elsewhere in your code (at least in mhi_ep_process_tre_ring())
you assume that the even-numbered channel is UL. Either use
that assumption throughout, or do not use it at all. (I prefer
the latter.)
I don't really like how this assumes that the channels
are defined in adjacent pairs. It assumes one is
upload and the next one is download, but doesn't
specify the order in which they're defined. If
you're going to assume they are defined in pairs, you
should be able to assume which one is defined first,
and then simplify this code (and even verify that
they are defined UL before DL, perhaps).
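If the UL-before-DL convention were enforced, the verification could be as small as the following sketch (the helper name and the stand-in direction enum are hypothetical; in the driver this would use enum dma_data_direction on adjacent mhi_chan entries):

```c
#include <assert.h>
#include <stdbool.h>

/* stand-in for the kernel's enum dma_data_direction */
enum dir { DMA_TO_DEVICE, DMA_FROM_DEVICE };

/* a channel pair is well-formed only if UL is defined before DL */
static bool mhi_ep_pair_ok(enum dir first, enum dir second)
{
	return first == DMA_TO_DEVICE && second == DMA_FROM_DEVICE;
}
```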
> + /* Configure primary channel */
> + if (mhi_chan->dir == DMA_TO_DEVICE) {
> + mhi_dev->ul_chan = mhi_chan;
> + mhi_dev->ul_chan_id = mhi_chan->chan;
> + } else {
> + mhi_dev->dl_chan = mhi_chan;
> + mhi_dev->dl_chan_id = mhi_chan->chan;
> + }
> +
> + get_device(&mhi_dev->dev);
> + mhi_chan->mhi_dev = mhi_dev;
> +
> + /* Configure secondary channel as well */
> + mhi_chan++;
> + if (mhi_chan->dir == DMA_TO_DEVICE) {
> + mhi_dev->ul_chan = mhi_chan;
> + mhi_dev->ul_chan_id = mhi_chan->chan;
> + } else {
> + mhi_dev->dl_chan = mhi_chan;
> + mhi_dev->dl_chan_id = mhi_chan->chan;
> + }
> +
> + get_device(&mhi_dev->dev);
> + mhi_chan->mhi_dev = mhi_dev;
> +
> + /* Channel name is same for both UL and DL */
You could verify the two channels indeed have the
same name.
-Alex
> + mhi_dev->name = mhi_chan->name;
> + dev_set_name(&mhi_dev->dev, "%s_%s",
> + dev_name(&mhi_cntrl->mhi_dev->dev),
> + mhi_dev->name);
> +
> + ret = device_add(&mhi_dev->dev);
> + if (ret)
> + put_device(&mhi_dev->dev);
> +
> + return ret;
> +}
> +
> +static int mhi_ep_destroy_device(struct device *dev, void *data)
> +{
> + struct mhi_ep_device *mhi_dev;
> + struct mhi_ep_cntrl *mhi_cntrl;
> + struct mhi_ep_chan *ul_chan, *dl_chan;
> +
> + if (dev->bus != &mhi_ep_bus_type)
> + return 0;
> +
> + mhi_dev = to_mhi_ep_device(dev);
> + mhi_cntrl = mhi_dev->mhi_cntrl;
> +
> + /* Only destroy devices created for channels */
> + if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
> + return 0;
> +
> + ul_chan = mhi_dev->ul_chan;
> + dl_chan = mhi_dev->dl_chan;
> +
> + if (ul_chan)
> + put_device(&ul_chan->mhi_dev->dev);
> +
> + if (dl_chan)
> + put_device(&dl_chan->mhi_dev->dev);
> +
> + dev_dbg(&mhi_cntrl->mhi_dev->dev, "Destroying device for chan:%s\n",
> + mhi_dev->name);
> +
> + /* Notify the client and remove the device from MHI bus */
> + device_del(dev);
> + put_device(dev);
> +
> + return 0;
> +}
> +
> static int parse_ch_cfg(struct mhi_ep_cntrl *mhi_cntrl,
> const struct mhi_ep_cntrl_config *config)
> {
>
On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> Add support for managing the Memory Mapped Input Output (MMIO) registers
> of the MHI bus. All MHI operations are carried out using the MMIO registers
> by both host and the endpoint device.
>
> The MMIO registers reside inside the endpoint device memory (fixed
> location based on the platform) and the address is passed by the MHI EP
> controller driver during its registration.
>
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> ---
> drivers/bus/mhi/ep/Makefile | 2 +-
> drivers/bus/mhi/ep/internal.h | 36 ++++
> drivers/bus/mhi/ep/main.c | 6 +-
> drivers/bus/mhi/ep/mmio.c | 303 ++++++++++++++++++++++++++++++++++
> include/linux/mhi_ep.h | 18 ++
> 5 files changed, 363 insertions(+), 2 deletions(-)
> create mode 100644 drivers/bus/mhi/ep/mmio.c
>
> diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> index 64e29252b608..a1555ae287ad 100644
> --- a/drivers/bus/mhi/ep/Makefile
> +++ b/drivers/bus/mhi/ep/Makefile
> @@ -1,2 +1,2 @@
> obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> -mhi_ep-y := main.o
> +mhi_ep-y := main.o mmio.o
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index 7b164daf4332..39eeb5f384e2 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -91,6 +91,12 @@ struct mhi_generic_ctx {
> __u64 wp __packed __aligned(4);
> };
>
Maybe add a comment defining SBL as "secondary boot loader" and AMSS
as "advanced modem subsystem".
> +enum mhi_ep_execenv {
> + MHI_EP_SBL_EE = 1,
> + MHI_EP_AMSS_EE = 2,
> + MHI_EP_UNRESERVED
> +};
> +
> enum mhi_ep_ring_state {
> RING_STATE_UINT = 0,
> RING_STATE_IDLE,
> @@ -155,4 +161,34 @@ struct mhi_ep_chan {
> bool skip_td;
> };
>
> +/* MMIO related functions */
I would *really* rather have the mmio_read functions *return* the read
value, rather than having the address of the location to store it passed
as argument. Your MMIO calls never fail, so there's no need to return
anything else. Returning the value also makes it more obvious that the
*result* is getting assigned (rather than sort of implying it by passing
in the address of the result). And there's no possibility of someone
passing a bad pointer that way either.
> +void mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 *regval);
In other words:
u32 mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset);
> +void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
> +void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset,
> + u32 mask, u32 shift, u32 val);
> +int mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset,
> + u32 mask, u32 shift, u32 *regval);
> +void mhi_ep_mmio_enable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_disable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_enable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_disable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_enable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id);
> +void mhi_ep_mmio_disable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id);
> +void mhi_ep_mmio_enable_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_read_chdb_status_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_mask_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_get_chc_base(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_get_erc_base(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_get_crc_base(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_get_ch_db(struct mhi_ep_ring *ring, u64 *wr_offset);
> +void mhi_ep_mmio_get_er_db(struct mhi_ep_ring *ring, u64 *wr_offset);
> +void mhi_ep_mmio_get_cmd_db(struct mhi_ep_ring *ring, u64 *wr_offset);
> +void mhi_ep_mmio_set_env(struct mhi_ep_cntrl *mhi_cntrl, u32 value);
> +void mhi_ep_mmio_clear_reset(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_reset(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
> + bool *mhi_reset);
> +void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
> +void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
> +
> #endif
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index f0b5f49db95a..fddf75dfb9c7 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -209,7 +209,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> struct mhi_ep_device *mhi_dev;
> int ret;
>
> - if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
> + if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio)
> return -EINVAL;
>
> ret = parse_ch_cfg(mhi_cntrl, config);
> @@ -222,6 +222,10 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> goto err_free_ch;
> }
>
> + /* Set MHI version and AMSS EE before enumeration */
> + mhi_ep_mmio_write(mhi_cntrl, MHIVER, config->mhi_version);
> + mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
> +
> /* Set controller index */
> mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
> if (mhi_cntrl->index < 0) {
> diff --git a/drivers/bus/mhi/ep/mmio.c b/drivers/bus/mhi/ep/mmio.c
> new file mode 100644
> index 000000000000..157ef1240f6f
> --- /dev/null
> +++ b/drivers/bus/mhi/ep/mmio.c
> @@ -0,0 +1,303 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (C) 2021 Linaro Ltd.
> + * Author: Manivannan Sadhasivam <[email protected]>
> + */
> +
> +#include <linux/bitfield.h>
> +#include <linux/io.h>
> +#include <linux/mhi_ep.h>
> +
> +#include "internal.h"
> +
> +void mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 *regval)
> +{
> + *regval = readl(mhi_cntrl->mmio + offset);
This could simply return the value, i.e. "return readl(mhi_cntrl->mmio + offset);",
instead of filling in an output parameter.
> +}
> +
> +void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val)
> +{
> + writel(val, mhi_cntrl->mmio + offset);
> +}
> +
> +void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 mask,
> + u32 shift, u32 val)
There is no need for a shift argument here; it can always be derived
from the mask. I would like to say "use the bitfield functions", but
at the moment they require the mask to be constant. You could still
do that by defining all of these as static inline functions in a
header. Maybe FIELD_GET() works here, I don't know. Either way, try
to get rid of these shifts; they shouldn't be necessary.
> +{
> + u32 regval;
> +
> + mhi_ep_mmio_read(mhi_cntrl, offset, &regval);
> + regval &= ~mask;
> + regval |= ((val << shift) & mask);
> + mhi_ep_mmio_write(mhi_cntrl, offset, regval);
> +}
> +
> +int mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset,
> + u32 mask, u32 shift, u32 *regval)
> +{
> + mhi_ep_mmio_read(dev, offset, regval);
> + *regval &= mask;
> + *regval >>= shift;
> +
> + return 0;
There is no point in returning 0 from this function.
> +}
> +
> +void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
> + bool *mhi_reset)
> +{
> + u32 regval;
> +
> + mhi_ep_mmio_read(mhi_cntrl, MHICTRL, &regval);
> + *state = FIELD_GET(MHICTRL_MHISTATE_MASK, regval);
> + *mhi_reset = !!FIELD_GET(MHICTRL_RESET_MASK, regval);
> +}
> +
> +static void mhi_ep_mmio_mask_set_chdb_int_a7(struct mhi_ep_cntrl *mhi_cntrl,
> + u32 chdb_id, bool enable)
> +{
> + u32 chid_mask, chid_idx, chid_shft, val = 0;
> +
> + chid_shft = chdb_id % 32;
> + chid_mask = BIT(chid_shft);
> + chid_idx = chdb_id / 32;
> +
> + if (chid_idx >= MHI_MASK_ROWS_CH_EV_DB)
> + return;
The above should maybe issue a warning?
> +
> + if (enable)
> + val = 1;
> +
> + mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CHDB_INT_MASK_A7_n(chid_idx),
> + chid_mask, chid_shft, val);
> + mhi_ep_mmio_read(mhi_cntrl, MHI_CHDB_INT_MASK_A7_n(chid_idx),
> + &mhi_cntrl->chdb[chid_idx].mask);
Why do you read after writing? Is this to be sure the write completes
over PCIe or something? Even then I don't think that would be needed
because the memory is on "this side" of PCIe (right?).
> +}
> +
> +void mhi_ep_mmio_enable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id)
> +{
> + mhi_ep_mmio_mask_set_chdb_int_a7(mhi_cntrl, chdb_id, true);
> +}
> +
> +void mhi_ep_mmio_disable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id)
> +{
> + mhi_ep_mmio_mask_set_chdb_int_a7(mhi_cntrl, chdb_id, false);
> +}
> +
> +static void mhi_ep_mmio_set_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl, bool enable)
> +{
> + u32 val = 0, i = 0;
No need to assign 0 to i in the declaration; the for loop below
initializes it.
-Alex
> +
> + if (enable)
> + val = MHI_CHDB_INT_MASK_A7_n_EN_ALL;
> +
> + for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
> + mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_MASK_A7_n(i), val);
> + mhi_cntrl->chdb[i].mask = val;
> + }
> +}
> +
. . .
On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> Add support for managing the MHI ring. The MHI ring is a circular queue
> of data structures used to pass the information between host and the
> endpoint.
>
> MHI support 3 types of rings:
>
> 1. Transfer ring
> 2. Event ring
> 3. Command ring
>
> All rings reside inside the host memory and the MHI EP device maps it to
> the device memory using blocks like PCIe iATU. The mapping is handled in
> the MHI EP controller driver itself.
A few more comments here. And with that, I'm done for today.
-Alex
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> ---
> drivers/bus/mhi/ep/Makefile | 2 +-
> drivers/bus/mhi/ep/internal.h | 23 +++
> drivers/bus/mhi/ep/main.c | 53 +++++-
> drivers/bus/mhi/ep/ring.c | 314 ++++++++++++++++++++++++++++++++++
> include/linux/mhi_ep.h | 11 ++
> 5 files changed, 401 insertions(+), 2 deletions(-)
> create mode 100644 drivers/bus/mhi/ep/ring.c
>
> diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> index a1555ae287ad..7ba0e04801eb 100644
> --- a/drivers/bus/mhi/ep/Makefile
> +++ b/drivers/bus/mhi/ep/Makefile
> @@ -1,2 +1,2 @@
> obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> -mhi_ep-y := main.o mmio.o
> +mhi_ep-y := main.o mmio.o ring.o
> diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> index 39eeb5f384e2..a7a4e6934f7d 100644
> --- a/drivers/bus/mhi/ep/internal.h
> +++ b/drivers/bus/mhi/ep/internal.h
> @@ -97,6 +97,18 @@ enum mhi_ep_execenv {
> MHI_EP_UNRESERVED
> };
>
> +/* Transfer Ring Element macros */
> +#define MHI_EP_TRE_PTR(ptr) (ptr)
> +#define MHI_EP_TRE_DWORD0(len) (len & MHI_MAX_MTU)
> +#define MHI_EP_TRE_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
> + | (ieot << 9) | (ieob << 8) | chain)
> +#define MHI_EP_TRE_GET_PTR(tre) ((tre)->ptr)
> +#define MHI_EP_TRE_GET_LEN(tre) ((tre)->dword[0] & 0xffff)
> +#define MHI_EP_TRE_GET_CHAIN(tre) FIELD_GET(BIT(0), (tre)->dword[1])
> +#define MHI_EP_TRE_GET_IEOB(tre) FIELD_GET(BIT(8), (tre)->dword[1])
> +#define MHI_EP_TRE_GET_IEOT(tre) FIELD_GET(BIT(9), (tre)->dword[1])
> +#define MHI_EP_TRE_GET_BEI(tre) FIELD_GET(BIT(10), (tre)->dword[1])
> +
> enum mhi_ep_ring_state {
> RING_STATE_UINT = 0,
> RING_STATE_IDLE,
> @@ -161,6 +173,17 @@ struct mhi_ep_chan {
> bool skip_td;
> };
>
> +/* MHI Ring related functions */
> +void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id);
> +void mhi_ep_ring_stop(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring);
> +size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr);
> +int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
> + union mhi_ep_ring_ctx *ctx);
> +int mhi_ep_process_ring(struct mhi_ep_ring *ring);
> +int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *element,
> + int evt_offset);
> +void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring);
> +
> /* MMIO related functions */
> void mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 *regval);
> void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
> diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> index fddf75dfb9c7..6d448d42f527 100644
> --- a/drivers/bus/mhi/ep/main.c
> +++ b/drivers/bus/mhi/ep/main.c
> @@ -18,6 +18,42 @@
>
> static DEFINE_IDA(mhi_ep_cntrl_ida);
>
> +static void mhi_ep_ring_worker(struct work_struct *work)
> +{
> + struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
> + struct mhi_ep_cntrl, ring_work);
> + struct device *dev = &mhi_cntrl->mhi_dev->dev;
> + struct mhi_ep_ring *ring;
> + struct list_head *cp, *q;
> + unsigned long flags;
> + int ret = 0;
> +
> + /* Process the command ring first */
> + ret = mhi_ep_process_ring(&mhi_cntrl->mhi_cmd->ring);
> + if (ret) {
> + dev_err(dev, "Error processing command ring\n");
> + goto err_unlock;
> + }
> +
> + spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
> + /* Process the channel rings now */
Use list_for_each_entry_safe() here.

But actually, rather than doing this, you can do the trick of
grabbing the whole list under the lock, then processing the
entries outside of it. You'll have to judge whether that can
be done, but basically:

	struct mhi_ep_ring *ring, *tmp;
	LIST_HEAD(list);

	spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
	list_splice_init(&mhi_cntrl->ch_db_list, &list);
	spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);

	list_for_each_entry_safe(ring, tmp, &list, list) {
		list_del(&ring->list);
		ret = mhi_ep_process_ring(ring);
		...
	}
> + list_for_each_safe(cp, q, &mhi_cntrl->ch_db_list) {
> + ring = list_entry(cp, struct mhi_ep_ring, list);
> + list_del(cp);
> + ret = mhi_ep_process_ring(ring);
> + if (ret) {
> + dev_err(dev, "Error processing channel ring: %d\n", ring->ch_id);
> + goto err_unlock;
> + }
> +
> + /* Re-enable channel interrupt */
> + mhi_ep_mmio_enable_chdb_a7(mhi_cntrl, ring->ch_id);
> + }
> +
> +err_unlock:
> + spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
> +}
> +
> static void mhi_ep_release_device(struct device *dev)
> {
> struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
. . .
> +void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring)
> +{
> + ring->rd_offset++;
> + if (ring->rd_offset == ring->ring_size)
> + ring->rd_offset = 0;
> +}
> +
> +static int __mhi_ep_cache_ring(struct mhi_ep_ring *ring, size_t end)
> +{
> + struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> + struct device *dev = &mhi_cntrl->mhi_dev->dev;
> + size_t start, copy_size;
> + struct mhi_ep_ring_element *ring_shadow;
> + phys_addr_t ring_shadow_phys;
> + size_t size = ring->ring_size * sizeof(struct mhi_ep_ring_element);
Do you cache the entire ring just in case you need to wrap
around the end of it?
> + int ret;
> +
> + /* No need to cache event rings */
> + if (ring->type == RING_TYPE_ER)
> + return 0;
> +
> + /* No need to cache the ring if write pointer is unmodified */
> + if (ring->wr_offset == end)
> + return 0;
> +
> + start = ring->wr_offset;
> +
> + /* Allocate memory for host ring */
> + ring_shadow = mhi_cntrl->alloc_addr(mhi_cntrl, &ring_shadow_phys,
> + size);
> + if (!ring_shadow) {
> + dev_err(dev, "Failed to allocate memory for ring_shadow\n");
> + return -ENOMEM;
> + }
> +
> + /* Map host ring */
> + ret = mhi_cntrl->map_addr(mhi_cntrl, ring_shadow_phys,
> + ring->ring_ctx->generic.rbase, size);
> + if (ret) {
> + dev_err(dev, "Failed to map ring_shadow\n\n");
> + goto err_ring_free;
> + }
> +
> + dev_dbg(dev, "Caching ring: start %d end %d size %d", start, end, copy_size);
> +
> + if (start < end) {
> + copy_size = (end - start) * sizeof(struct mhi_ep_ring_element);
> + memcpy_fromio(&ring->ring_cache[start], &ring_shadow[start], copy_size);
> + } else {
> + copy_size = (ring->ring_size - start) * sizeof(struct mhi_ep_ring_element);
> + memcpy_fromio(&ring->ring_cache[start], &ring_shadow[start], copy_size);
> + if (end)
> + memcpy_fromio(&ring->ring_cache[0], &ring_shadow[0],
> + end * sizeof(struct mhi_ep_ring_element));
> + }
> +
> + /* Now unmap and free host ring */
> + mhi_cntrl->unmap_addr(mhi_cntrl, ring_shadow_phys);
> + mhi_cntrl->free_addr(mhi_cntrl, ring_shadow_phys, ring_shadow, size);
> +
> + return 0;
> +
> +err_ring_free:
> + mhi_cntrl->free_addr(mhi_cntrl, ring_shadow_phys, &ring_shadow, size);
> +
> + return ret;
> +}
> +
. . .
> +static int mhi_ep_process_ring_element(struct mhi_ep_ring *ring, size_t offset)
> +{
> + struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> + struct device *dev = &mhi_cntrl->mhi_dev->dev;
> + struct mhi_ep_ring_element *el;
> + int ret = -ENODEV;
> +
> + /* Get the element and invoke the respective callback */
> + el = &ring->ring_cache[offset];
> +
You already know that the ring_cb function pointer is non-null (you set
it in mhi_ep_ring_init(), below). At least you *should* be able to
be sure of that...
> + if (ring->ring_cb)
> + ret = ring->ring_cb(ring, el);
> + else
> + dev_err(dev, "No callback registered for ring\n");
> +
> + return ret;
> +}
> +
> +int mhi_ep_process_ring(struct mhi_ep_ring *ring)
> +{
> + struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> + struct device *dev = &mhi_cntrl->mhi_dev->dev;
> + int ret = 0;
> +
> + /* Event rings should not be processed */
> + if (ring->type == RING_TYPE_ER)
> + return -EINVAL;
> +
> + dev_dbg(dev, "Processing ring of type: %d\n", ring->type);
> +
> + /* Update the write offset for the ring */
> + ret = mhi_ep_update_wr_offset(ring);
> + if (ret) {
> + dev_err(dev, "Error updating write offset for ring\n");
> + return ret;
> + }
> +
> + /* Sanity check to make sure there are elements in the ring */
> + if (ring->rd_offset == ring->wr_offset)
> + return 0;
> +
> + /* Process channel ring first */
> + if (ring->type == RING_TYPE_CH) {
> + ret = mhi_ep_process_ring_element(ring, ring->rd_offset);
> + if (ret)
> + dev_err(dev, "Error processing ch ring element: %d\n", ring->rd_offset);
> +
> + return ret;
> + }
> +
> + /* Process command ring now */
> + while (ring->rd_offset != ring->wr_offset) {
> + ret = mhi_ep_process_ring_element(ring, ring->rd_offset);
> + if (ret) {
> + dev_err(dev, "Error processing cmd ring element: %d\n", ring->rd_offset);
> + return ret;
> + }
> +
> + mhi_ep_ring_inc_index(ring);
> + }
> +
> + return 0;
> +}
> +
> +/* TODO: Support for adding multiple ring elements to the ring */
> +int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el, int size)
I'm pretty sure the size argument is unused, so should be eliminated.
> +{
> + struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> + struct device *dev = &mhi_cntrl->mhi_dev->dev;
> + struct mhi_ep_ring_element *ring_shadow;
> + size_t ring_size = ring->ring_size * sizeof(struct mhi_ep_ring_element);
Use sizeof(*el) in the line above.
> + phys_addr_t ring_shadow_phys;
> + size_t old_offset = 0;
> + u32 num_free_elem;
> + int ret;
> +
> + ret = mhi_ep_update_wr_offset(ring);
> + if (ret) {
> + dev_err(dev, "Error updating write pointer\n");
> + return ret;
> + }
> +
> + if (ring->rd_offset < ring->wr_offset)
> + num_free_elem = (ring->wr_offset - ring->rd_offset) - 1;
> + else
> + num_free_elem = ((ring->ring_size - ring->rd_offset) + ring->wr_offset) - 1;
> +
> + /* Check if there is space in ring for adding at least an element */
> + if (num_free_elem < 1) {
if (!num_free_elem) {
> + dev_err(dev, "No space left in the ring\n");
> + return -ENOSPC;
> + }
> +
> + old_offset = ring->rd_offset;
> + mhi_ep_ring_inc_index(ring);
> +
> + dev_dbg(dev, "Adding an element to ring at offset (%d)\n", ring->rd_offset);
> +
> + /* Update rp in ring context */
> + ring->ring_ctx->generic.rp = (ring->rd_offset * sizeof(struct mhi_ep_ring_element)) +
> + ring->ring_ctx->generic.rbase;
> +
> + /* Allocate memory for host ring */
> + ring_shadow = mhi_cntrl->alloc_addr(mhi_cntrl, &ring_shadow_phys, ring_size);
> + if (!ring_shadow) {
> + dev_err(dev, "failed to allocate ring_shadow\n");
> + return -ENOMEM;
> + }
> +
> + /* Map host ring */
> + ret = mhi_cntrl->map_addr(mhi_cntrl, ring_shadow_phys,
> + ring->ring_ctx->generic.rbase, ring_size);
> + if (ret) {
> + dev_err(dev, "failed to map ring_shadow\n\n");
> + goto err_ring_free;
> + }
> +
> + /* Copy the element to ring */
Use sizeof(*el) in the memcpy_toio() call.
> + memcpy_toio(&ring_shadow[old_offset], el, sizeof(struct mhi_ep_ring_element));
> +
> + /* Now unmap and free host ring */
> + mhi_cntrl->unmap_addr(mhi_cntrl, ring_shadow_phys);
> + mhi_cntrl->free_addr(mhi_cntrl, ring_shadow_phys, ring_shadow, ring_size);
> +
> + return 0;
> +
> +err_ring_free:
> + mhi_cntrl->free_addr(mhi_cntrl, ring_shadow_phys, ring_shadow, ring_size);
> +
> + return ret;
> +}
> +
> +void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id)
> +{
> + ring->state = RING_STATE_UINT;
> + ring->type = type;
> + if (ring->type == RING_TYPE_CMD) {
> + ring->db_offset_h = CRDB_HIGHER;
> + ring->db_offset_l = CRDB_LOWER;
> + } else if (ring->type == RING_TYPE_CH) {
> + ring->db_offset_h = CHDB_HIGHER_n(id);
> + ring->db_offset_l = CHDB_LOWER_n(id);
> + ring->ch_id = id;
> + } else if (ring->type == RING_TYPE_ER) {
> + ring->db_offset_h = ERDB_HIGHER_n(id);
> + ring->db_offset_l = ERDB_LOWER_n(id);
> + }
There is no other case, right? If you ever hit one, you should
report it. Otherwise there's not really any need for the
ring->type == RING_TYPE_ER check...
-Alex
> +}
> +
> +int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
> + union mhi_ep_ring_ctx *ctx)
> +{
> + struct device *dev = &mhi_cntrl->mhi_dev->dev;
> + int ret;
> +
> + ring->mhi_cntrl = mhi_cntrl;
> + ring->ring_ctx = ctx;
> + ring->ring_size = mhi_ep_ring_num_elems(ring);
> +
> + /* During ring init, both rp and wp are equal */
> + ring->rd_offset = mhi_ep_ring_addr2offset(ring, ring->ring_ctx->generic.rp);
> + ring->wr_offset = mhi_ep_ring_addr2offset(ring, ring->ring_ctx->generic.rp);
> + ring->state = RING_STATE_IDLE;
> +
> + /* Allocate ring cache memory for holding the copy of host ring */
> + ring->ring_cache = kcalloc(ring->ring_size, sizeof(struct mhi_ep_ring_element),
> + GFP_KERNEL);
> + if (!ring->ring_cache)
> + return -ENOMEM;
> +
> + ret = mhi_ep_cache_ring(ring, ring->ring_ctx->generic.wp);
> + if (ret) {
> + dev_err(dev, "Failed to cache ring\n");
> + kfree(ring->ring_cache);
> + return ret;
> + }
> +
> + return 0;
> +}
> +
> +void mhi_ep_ring_stop(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring)
> +{
> + ring->state = RING_STATE_UINT;
> + kfree(ring->ring_cache);
> +}
> diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> index 902c8febd856..729f4b802b74 100644
> --- a/include/linux/mhi_ep.h
> +++ b/include/linux/mhi_ep.h
> @@ -62,6 +62,11 @@ struct mhi_ep_db_info {
> * @ch_ctx_host_pa: Physical address of host channel context data structure
> * @ev_ctx_host_pa: Physical address of host event context data structure
> * @cmd_ctx_host_pa: Physical address of host command context data structure
> + * @ring_wq: Dedicated workqueue for processing MHI rings
> + * @ring_work: Ring worker
> + * @ch_db_list: List of queued channel doorbells
> + * @st_transition_list: List of state transitions
> + * @list_lock: Lock for protecting state transition and channel doorbell lists
> * @chdb: Array of channel doorbell interrupt info
> * @raise_irq: CB function for raising IRQ to the host
> * @alloc_addr: CB function for allocating memory in endpoint for storing host context
> @@ -90,6 +95,12 @@ struct mhi_ep_cntrl {
> u64 ev_ctx_host_pa;
> u64 cmd_ctx_host_pa;
>
> + struct workqueue_struct *ring_wq;
> + struct work_struct ring_work;
> +
> + struct list_head ch_db_list;
> + struct list_head st_transition_list;
> + spinlock_t list_lock;
> struct mhi_ep_db_info chdb[4];
>
> void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl);
>
On Wed, Jan 05, 2022 at 06:26:51PM -0600, Alex Elder wrote:
> On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> > This commit adds support for registering MHI endpoint controller drivers
> > with the MHI endpoint stack. MHI endpoint controller drivers manages
> > the interaction with the host machines such as x86. They are also the
> > MHI endpoint bus master in charge of managing the physical link between the
> > host and endpoint device.
> >
> > The endpoint controller driver encloses all information about the
> > underlying physical bus like PCIe. The registration process involves
> > parsing the channel configuration and allocating an MHI EP device.
> >
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
>
> See below. Lots of little things, some I've said before.
>
> > ---
> > drivers/bus/mhi/Kconfig | 1 +
> > drivers/bus/mhi/Makefile | 3 +
> > drivers/bus/mhi/ep/Kconfig | 10 ++
> > drivers/bus/mhi/ep/Makefile | 2 +
> > drivers/bus/mhi/ep/internal.h | 158 +++++++++++++++++++++++
> > drivers/bus/mhi/ep/main.c | 231 ++++++++++++++++++++++++++++++++++
> > include/linux/mhi_ep.h | 140 +++++++++++++++++++++
> > 7 files changed, 545 insertions(+)
> > create mode 100644 drivers/bus/mhi/ep/Kconfig
> > create mode 100644 drivers/bus/mhi/ep/Makefile
> > create mode 100644 drivers/bus/mhi/ep/internal.h
> > create mode 100644 drivers/bus/mhi/ep/main.c
> > create mode 100644 include/linux/mhi_ep.h
> >
[...]
> > +/* MHI register definitions */
> > +#define MHIREGLEN 0x100
>
> I really think it would be nice if these could be common between the
> host and endpoint.
>
done
> > +#define MHIVER 0x108
> > +#define MHICFG 0x110
> > +#define CHDBOFF 0x118
> > +#define ERDBOFF 0x120
> > +#define BHIOFF 0x128
> > +#define DEBUGOFF 0x130
> > +#define MHICTRL 0x138
> > +#define MHISTATUS 0x148
> > +#define CCABAP_LOWER 0x158
> > +#define CCABAP_HIGHER 0x15c
> > +#define ECABAP_LOWER 0x160
> > +#define ECABAP_HIGHER 0x164
> > +#define CRCBAP_LOWER 0x168
> > +#define CRCBAP_HIGHER 0x16c
> > +#define CRDB_LOWER 0x170
> > +#define CRDB_HIGHER 0x174
> > +#define MHICTRLBASE_LOWER 0x180
> > +#define MHICTRLBASE_HIGHER 0x184
> > +#define MHICTRLLIMIT_LOWER 0x188
> > +#define MHICTRLLIMIT_HIGHER 0x18c
> > +#define MHIDATABASE_LOWER 0x198
> > +#define MHIDATABASE_HIGHER 0x19c
> > +#define MHIDATALIMIT_LOWER 0x1a0
> > +#define MHIDATALIMIT_HIGHER 0x1a4
>
> It wouldn't hurt to have a one or two line comment explaining how
> these compute the offset for a given channel or event ring's
> doorbell register.
>
> I think you could use decimal for the multiplier (8 rather than 0x8),
> but maybe you prefer not mixing that with a hex base offset.
>
> Overall though, take a look at the macros you define like this.
> See if you can decide on whether you can settle on a consistent
> form. Some places you use decimal, others hex. It's not a
> big deal, but consistency always helps.
>
Will look.
> > +#define CHDB_LOWER_n(n) (0x400 + 0x8 * (n))
> > +#define CHDB_HIGHER_n(n) (0x404 + 0x8 * (n))
> > +#define ERDB_LOWER_n(n) (0x800 + 0x8 * (n))
> > +#define ERDB_HIGHER_n(n) (0x804 + 0x8 * (n))
> > +#define BHI_INTVEC 0x220
> > +#define BHI_EXECENV 0x228
> > +#define BHI_IMGTXDB 0x218
> > +
>
> Will the AP always be an "A7"?
>
That's the register defined in the register manual. So I'd like to keep it for
now.
> > +#define MHI_CTRL_INT_STATUS_A7 0x4
> > +#define MHI_CTRL_INT_STATUS_A7_MSK BIT(0)
> > +#define MHI_CTRL_INT_STATUS_CRDB_MSK BIT(1)
> > +#define MHI_CHDB_INT_STATUS_A7_n(n) (0x28 + 0x4 * (n))
> > +#define MHI_ERDB_INT_STATUS_A7_n(n) (0x38 + 0x4 * (n))
> > +
> > +#define MHI_CTRL_INT_CLEAR_A7 0x4c
> > +#define MHI_CTRL_INT_MMIO_WR_CLEAR BIT(2)
> > +#define MHI_CTRL_INT_CRDB_CLEAR BIT(1)
> > +#define MHI_CTRL_INT_CRDB_MHICTRL_CLEAR BIT(0)
> > +
> > +#define MHI_CHDB_INT_CLEAR_A7_n(n) (0x70 + 0x4 * (n))
> > +#define MHI_CHDB_INT_CLEAR_A7_n_CLEAR_ALL GENMASK(31, 0)
> > +#define MHI_ERDB_INT_CLEAR_A7_n(n) (0x80 + 0x4 * (n))
> > +#define MHI_ERDB_INT_CLEAR_A7_n_CLEAR_ALL GENMASK(31, 0)
> > +
>
> The term "MASK" here might be confusing. Does a bit set in
> this mask register indicate an interrupt is enabled, or
> disabled (masked)? A comment (here or where used) could
> clear it up without renaming the symbol.
>
I agree that it is confusing but that's how the platform defines it. Will add a
comment though.
> > +#define MHI_CTRL_INT_MASK_A7 0x94
> > +#define MHI_CTRL_INT_MASK_A7_MASK_MASK GENMASK(1, 0)
> > +#define MHI_CTRL_MHICTRL_MASK BIT(0)
> > +#define MHI_CTRL_MHICTRL_SHFT 0
> > +#define MHI_CTRL_CRDB_MASK BIT(1)
> > +#define MHI_CTRL_CRDB_SHFT 1
>
> Use SHIFT or SHFT (not both), consistently. (But get rid of
> this shift definition, and others like it...)
>
Done
> > +#define MHI_CHDB_INT_MASK_A7_n(n) (0xb8 + 0x4 * (n))
> > +#define MHI_CHDB_INT_MASK_A7_n_EN_ALL GENMASK(31, 0)
> > +#define MHI_ERDB_INT_MASK_A7_n(n) (0xc8 + 0x4 * (n))
> > +#define MHI_ERDB_INT_MASK_A7_n_EN_ALL GENMASK(31, 0)
> > +
> > +#define NR_OF_CMD_RINGS 1
>
> Is there ever any reason to believe there will be more than one
> command ring for a given MHI instance? I kept seeing loops over
> NR_OF_CMD_RINGS, and it just seemed silly.
>
It was added for future compatibility and the spec doesn't mention that there is
only one command ring.
> > +#define MHI_MASK_ROWS_CH_EV_DB 4
> > +#define MHI_MASK_CH_EV_LEN 32
> > +
> > +/* Generic context */
>
> Maybe define the entire structure as packed and aligned.
>
Justified in previous patch.
> > +struct mhi_generic_ctx {
> > + __u32 reserved0;
> > + __u32 reserved1;
> > + __u32 reserved2;
> > +
> > + __u64 rbase __packed __aligned(4);
> > + __u64 rlen __packed __aligned(4);
> > + __u64 rp __packed __aligned(4);
> > + __u64 wp __packed __aligned(4);
> > +};
>
> Are these structures defined separately for host and endpoint?
> (I've lost track... If they are, it would be better to define
> them in common.)
>
Channel, Event and command contexts are common and they are already defined in
common.h. This one is specific to endpoint.
> > +
> > +enum mhi_ep_ring_state {
> > + RING_STATE_UINT = 0,
>
> I think "UINT" is a *terrible* abbreviation to represent
> "uninitialized".
>
> > + RING_STATE_IDLE,
>
> Since there are only two states, uninitialized or idle, maybe
> you can get rid of this enum definition and just define the
> ring state with "bool initialized".
>
okay
> > +};
> > +
> > +enum mhi_ep_ring_type {
>
> Is the value 0 significant to hardware? If not, there's no need
> to define the numeric value on this first symbol.
>
> > + RING_TYPE_CMD = 0,
> > + RING_TYPE_ER,
> > + RING_TYPE_CH,
>
> I don't think you ever use RING_TYPE_INVALID, so it does
> not need to be defined.
>
ack
> > + RING_TYPE_INVALID,
> > +};
> > +
>
> I prefer a more meaningful structure definition than this (as
> mentioned in I think the first patch).
>
> > +struct mhi_ep_ring_element {
> > + u64 ptr;
> > + u32 dword[2];
> > +};
> > +
> > +/* Transfer ring element type */
>
> Not transfer ring, just ring. Command, transfer, and event
> ring descriptors are different things.
>
ack
> > +union mhi_ep_ring_ctx {
> > + struct mhi_cmd_ctxt cmd;
> > + struct mhi_event_ctxt ev;
> > + struct mhi_chan_ctxt ch;
> > + struct mhi_generic_ctx generic;
> > +};
> > +
[...]
> > +static void mhi_ep_release_device(struct device *dev)
> > +{
> > + struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> > +
> > + /*
> > + * We need to set the mhi_chan->mhi_dev to NULL here since the MHI
> > + * devices for the channels will only get created if the mhi_dev
> > + * associated with it is NULL.
>
> Maybe say where in the code what the comment above says happens.
>
okay
> > + */
> > + if (mhi_dev->ul_chan)
> > + mhi_dev->ul_chan->mhi_dev = NULL;
> > +
> > + if (mhi_dev->dl_chan)
> > + mhi_dev->dl_chan->mhi_dev = NULL;
> > +
> > + kfree(mhi_dev);
> > +}
> > +
> > +static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl)
> > +{
> > + struct mhi_ep_device *mhi_dev;
> > + struct device *dev;
> > +
> > + mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
> > + if (!mhi_dev)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + dev = &mhi_dev->dev;
> > + device_initialize(dev);
> > + dev->bus = &mhi_ep_bus_type;
> > + dev->release = mhi_ep_release_device;
> > +
>
> I think you should pass the MHI device type as argument here, and
> set it within this function. Then use it in the test below, rather
> than assuming the mhi_dev pointer will be NULL for the controller
> only. Maybe you should set the mhi_dev pointer here as well.
>
Makes sense.
I was trying to align with the host MHI stack, which does it the same way.
But I'll change it in EP and do the same on host later.
> > + if (mhi_cntrl->mhi_dev) {
> > + /* for MHI client devices, parent is the MHI controller device */
> > + dev->parent = &mhi_cntrl->mhi_dev->dev;
> > + } else {
> > + /* for MHI controller device, parent is the bus device (e.g. PCI EPF) */
> > + dev->parent = mhi_cntrl->cntrl_dev;
> > + }
> > +
> > + mhi_dev->mhi_cntrl = mhi_cntrl;
> > +
> > + return mhi_dev;
> > +}
> > +
> > +static int parse_ch_cfg(struct mhi_ep_cntrl *mhi_cntrl,
> > + const struct mhi_ep_cntrl_config *config)
> > +{
> > + const struct mhi_ep_channel_config *ch_cfg;
> > + struct device *dev = mhi_cntrl->cntrl_dev;
> > + u32 chan, i;
> > + int ret = -EINVAL;
> > +
> > + mhi_cntrl->max_chan = config->max_channels;
> > +
> > + /*
> > + * Allocate max_channels supported by the MHI endpoint and populate
> > + * only the defined channels
> > + */
> > + mhi_cntrl->mhi_chan = kcalloc(mhi_cntrl->max_chan, sizeof(*mhi_cntrl->mhi_chan),
> > + GFP_KERNEL);
> > + if (!mhi_cntrl->mhi_chan)
> > + return -ENOMEM;
> > +
> > + for (i = 0; i < config->num_channels; i++) {
> > + struct mhi_ep_chan *mhi_chan;
> > +
> > + ch_cfg = &config->ch_cfg[i];
> > +
> > + chan = ch_cfg->num;
> > + if (chan >= mhi_cntrl->max_chan) {
> > + dev_err(dev, "Channel %d not available\n", chan);
> > + goto error_chan_cfg;
> > + }
> > +
> > + mhi_chan = &mhi_cntrl->mhi_chan[chan];
> > + mhi_chan->name = ch_cfg->name;
> > + mhi_chan->chan = chan;
> > + mhi_chan->dir = ch_cfg->dir;
> > + mutex_init(&mhi_chan->lock);
>
> Move the error check below earlier, before assigning other values.
>
ack
> > + /* Bi-directional and direction less channels are not supported */
> > + if (mhi_chan->dir == DMA_BIDIRECTIONAL || mhi_chan->dir == DMA_NONE) {
> > + dev_err(dev, "Invalid channel configuration\n");
> > + goto error_chan_cfg;
> > + }
> > + }
> > +
> > + return 0;
> > +
> > +error_chan_cfg:
> > + kfree(mhi_cntrl->mhi_chan);
> > +
> > + return ret;
> > +}
> > +
> > +/*
> > + * Allocate channel and command rings here. Event rings will be allocated
> > + * in mhi_ep_power_up() as the config comes from the host.
> > + */
> > +int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> > + const struct mhi_ep_cntrl_config *config)
> > +{
> > + struct mhi_ep_device *mhi_dev;
>
> Perhaps you could use a convention like "ep_dev" (and later, "ep_drv")
> to represent an mhi_ep_device, different from "mhi_dev" representing
> an mhi_device.
>
This is done to align with the host MHI stack, so that it'll be easy to spot
the MHI device pointer.
> > + int ret;
> > +
> > + if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
> > + return -EINVAL;
> > +
> > + ret = parse_ch_cfg(mhi_cntrl, config);
> > + if (ret)
> > + return ret;
> > +
>
> NR_OF_CMD_RINGS is 1, and I think always will be, right? This and
> elsewhere could be simplified if we just accept that.
>
For now yes, but it could be changed in future.
> > + mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS, sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
> > + if (!mhi_cntrl->mhi_cmd) {
> > + ret = -ENOMEM;
> > + goto err_free_ch;
> > + }
> > +
> > + /* Set controller index */
> > + mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
> > + if (mhi_cntrl->index < 0) {
> > + ret = mhi_cntrl->index;
> > + goto err_free_cmd;
> > + }
> > +
> > + /* Allocate the controller device */
> > + mhi_dev = mhi_ep_alloc_device(mhi_cntrl);
> > + if (IS_ERR(mhi_dev)) {
> > + dev_err(mhi_cntrl->cntrl_dev, "Failed to allocate controller device\n");
> > + ret = PTR_ERR(mhi_dev);
> > + goto err_ida_free;
> > + }
> > +
> > + mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
> > + dev_set_name(&mhi_dev->dev, "mhi_ep%d", mhi_cntrl->index);
> > + mhi_dev->name = dev_name(&mhi_dev->dev);
> > +
> > + ret = device_add(&mhi_dev->dev);
> > + if (ret)
> > + goto err_release_dev;
>
> goto err_put_device?
>
okay
> > +
> > + mhi_cntrl->mhi_dev = mhi_dev;
> > +
> > + dev_dbg(&mhi_dev->dev, "MHI EP Controller registered\n");
> > +
> > + return 0;
> > +
> > +err_release_dev:
> > + put_device(&mhi_dev->dev);
> > +err_ida_free:
> > + ida_free(&mhi_ep_cntrl_ida, mhi_cntrl->index);
> > +err_free_cmd:
> > + kfree(mhi_cntrl->mhi_cmd);
> > +err_free_ch:
> > + kfree(mhi_cntrl->mhi_chan);
> > +
> > + return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(mhi_ep_register_controller);
> > +
[...]
> > +struct mhi_ep_device {
> > + struct device dev;
> > + struct mhi_ep_cntrl *mhi_cntrl;
> > + const struct mhi_device_id *id;
> > + const char *name;
> > + struct mhi_ep_chan *ul_chan;
> > + struct mhi_ep_chan *dl_chan;
>
> Could the dev_type just be: bool controller?
>
Again, this is done the same way as host. Will change it later if needed.
> > + enum mhi_device_type dev_type;
> > + int ul_chan_id;
>
> Can't you ust use ul_chan->chan and dl_chan->chan?
>
> In any case, I think the channel ids should be u32.
>
This is not used for now either. I'll just remove it.
Thanks,
Mani
On Wed, Jan 05, 2022 at 06:27:33PM -0600, Alex Elder wrote:
> On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> > This commit adds support for registering MHI endpoint client drivers
> > with the MHI endpoint stack. MHI endpoint client drivers bind to one
> > or more MHI endpoint devices in order to send and receive the upper-layer
> > protocol packets like IP packets, modem control messages, and diagnostic
> > messages over the MHI bus.
> >
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > ---
> > drivers/bus/mhi/ep/main.c | 85 +++++++++++++++++++++++++++++++++++++++
> > include/linux/mhi_ep.h | 53 ++++++++++++++++++++++++
> > 2 files changed, 138 insertions(+)
> >
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > index db664360c8ab..ce0f99f22058 100644
> > --- a/drivers/bus/mhi/ep/main.c
> > +++ b/drivers/bus/mhi/ep/main.c
> > @@ -193,9 +193,88 @@ void mhi_ep_unregister_controller(struct mhi_ep_cntrl *mhi_cntrl)
> > }
> > EXPORT_SYMBOL_GPL(mhi_ep_unregister_controller);
> > +static int mhi_ep_driver_probe(struct device *dev)
> > +{
> > + struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
> > + struct mhi_ep_driver *mhi_drv = to_mhi_ep_driver(dev->driver);
> > + struct mhi_ep_chan *ul_chan = mhi_dev->ul_chan;
> > + struct mhi_ep_chan *dl_chan = mhi_dev->dl_chan;
> > +
>
> Either ul_chan or dl_chan must be set, right? Check this.
> Otherwise I think this looks OK.
>
done
> -Alex
>
> > + if (ul_chan)
> > + ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
> > +
> > + if (dl_chan)
> > + dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
> > +
> > + return mhi_drv->probe(mhi_dev, mhi_dev->id);
> > +}
>
> . . .
On Wed, Jan 05, 2022 at 06:29:00PM -0600, Alex Elder wrote:
> On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> > Add support for managing the Memory Mapped Input Output (MMIO) registers
> > of the MHI bus. All MHI operations are carried out using the MMIO registers
> > by both host and the endpoint device.
> >
> > The MMIO registers reside inside the endpoint device memory (fixed
> > location based on the platform) and the address is passed by the MHI EP
> > controller driver during its registration.
> >
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > ---
> > drivers/bus/mhi/ep/Makefile | 2 +-
> > drivers/bus/mhi/ep/internal.h | 36 ++++
> > drivers/bus/mhi/ep/main.c | 6 +-
> > drivers/bus/mhi/ep/mmio.c | 303 ++++++++++++++++++++++++++++++++++
> > include/linux/mhi_ep.h | 18 ++
> > 5 files changed, 363 insertions(+), 2 deletions(-)
> > create mode 100644 drivers/bus/mhi/ep/mmio.c
> >
> > diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> > index 64e29252b608..a1555ae287ad 100644
> > --- a/drivers/bus/mhi/ep/Makefile
> > +++ b/drivers/bus/mhi/ep/Makefile
> > @@ -1,2 +1,2 @@
> > obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> > -mhi_ep-y := main.o
> > +mhi_ep-y := main.o mmio.o
> > diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> > index 7b164daf4332..39eeb5f384e2 100644
> > --- a/drivers/bus/mhi/ep/internal.h
> > +++ b/drivers/bus/mhi/ep/internal.h
> > @@ -91,6 +91,12 @@ struct mhi_generic_ctx {
> > __u64 wp __packed __aligned(4);
> > };
>
> Maybe add a comment defining SBL as "secondary boot loader" and AMSS
> as "advanced modem subsystem".
>
Sure, will add kernel doc. But in modem terms, AMSS refers to
"Advanced Mobile Subscriber Software".
> > +enum mhi_ep_execenv {
> > + MHI_EP_SBL_EE = 1,
> > + MHI_EP_AMSS_EE = 2,
> > + MHI_EP_UNRESERVED
> > +};
> > +
> > enum mhi_ep_ring_state {
> > RING_STATE_UINT = 0,
> > RING_STATE_IDLE,
> > @@ -155,4 +161,34 @@ struct mhi_ep_chan {
> > bool skip_td;
> > };
> > +/* MMIO related functions */
>
> I would *really* rather have the mmio_read functions *return* the read
> value, rather than having the address of the location to store it passed
> as argument. Your MMIO calls never fail, so there's no need to return
> anything else. Returning the value also makes it more obvious that the
> *result* is getting assigned (rather than sort of implying it by passing
> in the address of the result). And there's no possibility of someone
> passing a bad pointer that way either.
>
> > +void mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 *regval);
>
> In other words:
>
> u32 mhi_ep_mmio_read(struct mhi_ep_ctrl *mhi_ctrl, u32 offset);
>
> > +void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
> > +void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset,
> > + u32 mask, u32 shift, u32 val);
> > +int mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset,
> > + u32 mask, u32 shift, u32 *regval);
> > +void mhi_ep_mmio_enable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_mmio_disable_ctrl_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_mmio_enable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_mmio_disable_cmdb_interrupt(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_mmio_enable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id);
> > +void mhi_ep_mmio_disable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id);
> > +void mhi_ep_mmio_enable_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_mmio_read_chdb_status_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_mmio_mask_interrupts(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_mmio_get_chc_base(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_mmio_get_erc_base(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_mmio_get_crc_base(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_mmio_get_ch_db(struct mhi_ep_ring *ring, u64 *wr_offset);
> > +void mhi_ep_mmio_get_er_db(struct mhi_ep_ring *ring, u64 *wr_offset);
> > +void mhi_ep_mmio_get_cmd_db(struct mhi_ep_ring *ring, u64 *wr_offset);
> > +void mhi_ep_mmio_set_env(struct mhi_ep_cntrl *mhi_cntrl, u32 value);
> > +void mhi_ep_mmio_clear_reset(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_mmio_reset(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
> > + bool *mhi_reset);
> > +void mhi_ep_mmio_init(struct mhi_ep_cntrl *mhi_cntrl);
> > +void mhi_ep_mmio_update_ner(struct mhi_ep_cntrl *mhi_cntrl);
> > +
> > #endif
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > index f0b5f49db95a..fddf75dfb9c7 100644
> > --- a/drivers/bus/mhi/ep/main.c
> > +++ b/drivers/bus/mhi/ep/main.c
> > @@ -209,7 +209,7 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> > struct mhi_ep_device *mhi_dev;
> > int ret;
> > - if (!mhi_cntrl || !mhi_cntrl->cntrl_dev)
> > + if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio)
> > return -EINVAL;
> > ret = parse_ch_cfg(mhi_cntrl, config);
> > @@ -222,6 +222,10 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
> > goto err_free_ch;
> > }
> > + /* Set MHI version and AMSS EE before enumeration */
> > + mhi_ep_mmio_write(mhi_cntrl, MHIVER, config->mhi_version);
> > + mhi_ep_mmio_set_env(mhi_cntrl, MHI_EP_AMSS_EE);
> > +
> > /* Set controller index */
> > mhi_cntrl->index = ida_alloc(&mhi_ep_cntrl_ida, GFP_KERNEL);
> > if (mhi_cntrl->index < 0) {
> > diff --git a/drivers/bus/mhi/ep/mmio.c b/drivers/bus/mhi/ep/mmio.c
> > new file mode 100644
> > index 000000000000..157ef1240f6f
> > --- /dev/null
> > +++ b/drivers/bus/mhi/ep/mmio.c
> > @@ -0,0 +1,303 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright (C) 2021 Linaro Ltd.
> > + * Author: Manivannan Sadhasivam <[email protected]>
> > + */
> > +
> > +#include <linux/bitfield.h>
> > +#include <linux/io.h>
> > +#include <linux/mhi_ep.h>
> > +
> > +#include "internal.h"
> > +
> > +void mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 *regval)
> > +{
> > + *regval = readl(mhi_cntrl->mmio + offset);
>
> return readl(...);
>
done
> > +}
> > +
> > +void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val)
> > +{
> > + writel(val, mhi_cntrl->mmio + offset);
> > +}
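The agreed-upon accessor shape (a value-returning read) might look like the following userspace sketch. The plain `uint32_t` array and the `offset / 4` indexing here merely stand in for an ioremap'd `__iomem` region accessed through readl()/writel(); the struct and its contents are illustrative, not the actual driver's:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the controller struct: in the kernel, mmio would be an
 * ioremap'd __iomem pointer and accesses would go through readl()/writel(). */
struct mhi_ep_cntrl_sketch {
	uint32_t *mmio;
};

/* Value-returning read, as suggested in review: the assignment is explicit
 * at the call site and no output pointer can be passed incorrectly. */
static uint32_t mhi_ep_mmio_read(struct mhi_ep_cntrl_sketch *mhi_cntrl,
				 uint32_t offset)
{
	return mhi_cntrl->mmio[offset / 4];
}

static void mhi_ep_mmio_write(struct mhi_ep_cntrl_sketch *mhi_cntrl,
			      uint32_t offset, uint32_t val)
{
	mhi_cntrl->mmio[offset / 4] = val;
}
```

A caller then writes `regval = mhi_ep_mmio_read(mhi_cntrl, offset);` instead of passing `&regval` down into the accessor.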
> > +
> > +void mhi_ep_mmio_masked_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 mask,
> > + u32 shift, u32 val)
>
> There is no need for a shift argument here. I would like to say
> "use the bitfield functions" but at the moment they require the
> mask to be constant. You could still do that, by having all
> these be defined as static inline functions in a header though.
> Maybe you can use FIELD_GET() though, I don't know.
>
I've used __ffs to determine the shift.
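That shift derivation can be sketched in userspace with __builtin_ctz() standing in for the kernel's __ffs() (helper names here are illustrative; the mask must be non-zero for either):

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's __ffs(): index of the lowest set bit. */
static unsigned int my_ffs(uint32_t mask)
{
	return (unsigned int)__builtin_ctz(mask);
}

/* Shift-free masked update: the shift is derived from the mask itself,
 * so callers can no longer pass an inconsistent mask/shift pair. */
static uint32_t masked_update(uint32_t regval, uint32_t mask, uint32_t val)
{
	regval &= ~mask;
	regval |= (val << my_ffs(mask)) & mask;
	return regval;
}
```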
> Anyway, try to get rid of these shifts; they shouldn't be
> necessary.
>
> > +{
> > + u32 regval;
> > +
> > + mhi_ep_mmio_read(mhi_cntrl, offset, &regval);
> > + regval &= ~mask;
> > + regval |= ((val << shift) & mask);
> > + mhi_ep_mmio_write(mhi_cntrl, offset, regval);
> > +}
> > +
> > +int mhi_ep_mmio_masked_read(struct mhi_ep_cntrl *dev, u32 offset,
> > + u32 mask, u32 shift, u32 *regval)
> > +{
> > + mhi_ep_mmio_read(dev, offset, regval);
> > + *regval &= mask;
> > + *regval >>= shift;
> > +
> > + return 0;
>
> There is no point in returning 0 from this function.
>
> > +}
> > +
> > +void mhi_ep_mmio_get_mhi_state(struct mhi_ep_cntrl *mhi_cntrl, enum mhi_state *state,
> > + bool *mhi_reset)
> > +{
> > + u32 regval;
> > +
> > + mhi_ep_mmio_read(mhi_cntrl, MHICTRL, &regval);
> > + *state = FIELD_GET(MHICTRL_MHISTATE_MASK, regval);
> > + *mhi_reset = !!FIELD_GET(MHICTRL_RESET_MASK, regval);
> > +}
> > +
> > +static void mhi_ep_mmio_mask_set_chdb_int_a7(struct mhi_ep_cntrl *mhi_cntrl,
> > + u32 chdb_id, bool enable)
> > +{
> > + u32 chid_mask, chid_idx, chid_shft, val = 0;
> > +
> > + chid_shft = chdb_id % 32;
> > + chid_mask = BIT(chid_shft);
> > + chid_idx = chdb_id / 32;
> > +
> > + if (chid_idx >= MHI_MASK_ROWS_CH_EV_DB)
> > + return;
>
> The above should maybe issue a warning?
>
ack
> > +
> > + if (enable)
> > + val = 1;
> > +
> > + mhi_ep_mmio_masked_write(mhi_cntrl, MHI_CHDB_INT_MASK_A7_n(chid_idx),
> > + chid_mask, chid_shft, val);
> > + mhi_ep_mmio_read(mhi_cntrl, MHI_CHDB_INT_MASK_A7_n(chid_idx),
> > + &mhi_cntrl->chdb[chid_idx].mask);
>
> Why do you read after writing? Is this to be sure the write completes
> over PCIe or something? Even then I don't think that would be needed
> because the memory is on "this side" of PCIe (right?).
>
This is done to update the mask. We could also do the bit management stuff here
instead of reading from the register (I guess that'll be faster).
Thanks,
Mani
> > +}
> > +
> > +void mhi_ep_mmio_enable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id)
> > +{
> > + mhi_ep_mmio_mask_set_chdb_int_a7(mhi_cntrl, chdb_id, true);
> > +}
> > +
> > +void mhi_ep_mmio_disable_chdb_a7(struct mhi_ep_cntrl *mhi_cntrl, u32 chdb_id)
> > +{
> > + mhi_ep_mmio_mask_set_chdb_int_a7(mhi_cntrl, chdb_id, false);
> > +}
> > +
> > +static void mhi_ep_mmio_set_chdb_interrupts(struct mhi_ep_cntrl *mhi_cntrl, bool enable)
> > +{
> > + u32 val = 0, i = 0;
>
> No need for assigning 0 to i.
>
> -Alex
>
> > +
> > + if (enable)
> > + val = MHI_CHDB_INT_MASK_A7_n_EN_ALL;
> > +
> > + for (i = 0; i < MHI_MASK_ROWS_CH_EV_DB; i++) {
> > + mhi_ep_mmio_write(mhi_cntrl, MHI_CHDB_INT_MASK_A7_n(i), val);
> > + mhi_cntrl->chdb[i].mask = val;
> > + }
> > +}
> > +
>
> . . .
On Wed, Jan 05, 2022 at 06:23:52PM -0600, Alex Elder wrote:
> On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> > Cleanup includes:
> >
> > 1. Moving the MHI register bit definitions to common.h header (only the
> > register offsets differ between host and ep not the bit definitions)
>
> The register offsets do differ, but the group of registers for the host
> differs from the group of registers for the endpoint by a fixed amount.
> (MHIREGLEN = 0x0000 for host, or 0x100 for endpoint; CRCBAP_LOWER is
> 0x0068 for host, 0x0168 for endpoint.)
>
> In other words, can you instead use the same symbolic offsets, but
> have the endpoint add 0x0100 to them all? It would make the fact
> that they're both referencing the same basic in-memory structure
> more obvious.
>
Okay. I've used the REG_ prefix for the register defines in common.h and used it in
the host and ep internal.h.
> > 2. Using the GENMASK macro for masks
> > 3. Removing brackets for single values
> > 4. Using lowercase for hex values
>
> Yay!!! For all three of the above.
>
> More below.
>
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > ---
> > drivers/bus/mhi/common.h | 129 ++++++++++++---
> > drivers/bus/mhi/host/internal.h | 282 +++++++++++---------------------
> > 2 files changed, 207 insertions(+), 204 deletions(-)
> >
> > diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> > index 2ea438205617..c1272d61e54e 100644
> > --- a/drivers/bus/mhi/common.h
> > +++ b/drivers/bus/mhi/common.h
> > @@ -9,32 +9,123 @@
> > #include <linux/mhi.h>
> > +/* MHI register bits */
> > +#define MHIREGLEN_MHIREGLEN_MASK GENMASK(31, 0)
> > +#define MHIREGLEN_MHIREGLEN_SHIFT 0
>
> Again, please eliminate all _SHIFT definitions where they define
> the low bit position of a mask.
>
> Maybe you can add some underscores for readability?
>
> Even if you don't do that, you could add a comment here or there to
> explain what certain abbreviations stand for, to make it easier to
> understand. E.g., CHDB = channel doorbell, CCA = channel context
> array, BAP = base address pointer.
>
I'll check on this.
Thanks,
Mani
> -Alex
>
>
> > +#define MHIVER_MHIVER_MASK GENMASK(31, 0)
> > +#define MHIVER_MHIVER_SHIFT 0
> > +
> > +#define MHICFG_NHWER_MASK GENMASK(31, 24)
> > +#define MHICFG_NHWER_SHIFT 24
> > +#define MHICFG_NER_MASK GENMASK(23, 16)
> > +#define MHICFG_NER_SHIFT 16
> > +#define MHICFG_NHWCH_MASK GENMASK(15, 8)
> > +#define MHICFG_NHWCH_SHIFT 8
> > +#define MHICFG_NCH_MASK GENMASK(7, 0)
> > +#define MHICFG_NCH_SHIFT 0
>
> . . .
On Wed, Jan 05, 2022 at 06:30:17PM -0600, Alex Elder wrote:
> On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> > Add support for managing the MHI ring. The MHI ring is a circular queue
> > of data structures used to pass the information between host and the
> > endpoint.
> >
> > MHI supports 3 types of rings:
> >
> > 1. Transfer ring
> > 2. Event ring
> > 3. Command ring
> >
> > All rings reside inside the host memory and the MHI EP device maps them to
> > the device memory using hardware blocks like the PCIe iATU. The mapping is handled in
> > the MHI EP controller driver itself.
>
> A few more comments here. And with that, I'm done for today.
>
> -Alex
>
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > ---
> > drivers/bus/mhi/ep/Makefile | 2 +-
> > drivers/bus/mhi/ep/internal.h | 23 +++
> > drivers/bus/mhi/ep/main.c | 53 +++++-
> > drivers/bus/mhi/ep/ring.c | 314 ++++++++++++++++++++++++++++++++++
> > include/linux/mhi_ep.h | 11 ++
> > 5 files changed, 401 insertions(+), 2 deletions(-)
> > create mode 100644 drivers/bus/mhi/ep/ring.c
> >
> > diff --git a/drivers/bus/mhi/ep/Makefile b/drivers/bus/mhi/ep/Makefile
> > index a1555ae287ad..7ba0e04801eb 100644
> > --- a/drivers/bus/mhi/ep/Makefile
> > +++ b/drivers/bus/mhi/ep/Makefile
> > @@ -1,2 +1,2 @@
> > obj-$(CONFIG_MHI_BUS_EP) += mhi_ep.o
> > -mhi_ep-y := main.o mmio.o
> > +mhi_ep-y := main.o mmio.o ring.o
> > diff --git a/drivers/bus/mhi/ep/internal.h b/drivers/bus/mhi/ep/internal.h
> > index 39eeb5f384e2..a7a4e6934f7d 100644
> > --- a/drivers/bus/mhi/ep/internal.h
> > +++ b/drivers/bus/mhi/ep/internal.h
> > @@ -97,6 +97,18 @@ enum mhi_ep_execenv {
> > MHI_EP_UNRESERVED
> > };
> > +/* Transfer Ring Element macros */
> > +#define MHI_EP_TRE_PTR(ptr) (ptr)
> > +#define MHI_EP_TRE_DWORD0(len) (len & MHI_MAX_MTU)
> > +#define MHI_EP_TRE_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
> > + | (ieot << 9) | (ieob << 8) | chain)
> > +#define MHI_EP_TRE_GET_PTR(tre) ((tre)->ptr)
> > +#define MHI_EP_TRE_GET_LEN(tre) ((tre)->dword[0] & 0xffff)
> > +#define MHI_EP_TRE_GET_CHAIN(tre) FIELD_GET(BIT(0), (tre)->dword[1])
> > +#define MHI_EP_TRE_GET_IEOB(tre) FIELD_GET(BIT(8), (tre)->dword[1])
> > +#define MHI_EP_TRE_GET_IEOT(tre) FIELD_GET(BIT(9), (tre)->dword[1])
> > +#define MHI_EP_TRE_GET_BEI(tre) FIELD_GET(BIT(10), (tre)->dword[1])
> > +
> > enum mhi_ep_ring_state {
> > RING_STATE_UINT = 0,
> > RING_STATE_IDLE,
> > @@ -161,6 +173,17 @@ struct mhi_ep_chan {
> > bool skip_td;
> > };
> > +/* MHI Ring related functions */
> > +void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id);
> > +void mhi_ep_ring_stop(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring);
> > +size_t mhi_ep_ring_addr2offset(struct mhi_ep_ring *ring, u64 ptr);
> > +int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
> > + union mhi_ep_ring_ctx *ctx);
> > +int mhi_ep_process_ring(struct mhi_ep_ring *ring);
> > +int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *element,
> > + int evt_offset);
> > +void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring);
> > +
> > /* MMIO related functions */
> > void mhi_ep_mmio_read(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 *regval);
> > void mhi_ep_mmio_write(struct mhi_ep_cntrl *mhi_cntrl, u32 offset, u32 val);
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > index fddf75dfb9c7..6d448d42f527 100644
> > --- a/drivers/bus/mhi/ep/main.c
> > +++ b/drivers/bus/mhi/ep/main.c
> > @@ -18,6 +18,42 @@
> > static DEFINE_IDA(mhi_ep_cntrl_ida);
> > +static void mhi_ep_ring_worker(struct work_struct *work)
> > +{
> > + struct mhi_ep_cntrl *mhi_cntrl = container_of(work,
> > + struct mhi_ep_cntrl, ring_work);
> > + struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > + struct mhi_ep_ring *ring;
> > + struct list_head *cp, *q;
> > + unsigned long flags;
> > + int ret = 0;
> > +
> > + /* Process the command ring first */
> > + ret = mhi_ep_process_ring(&mhi_cntrl->mhi_cmd->ring);
> > + if (ret) {
> > + dev_err(dev, "Error processing command ring\n");
> > + goto err_unlock;
> > + }
> > +
> > + spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
> > + /* Process the channel rings now */
>
> Use list_for_each_entry_safe() here.
>
> But actually, rather than doing this, you can do the
> trick of grabbing the whole list under lock, then
> processing the entries outside of it. You'll
> have to judge whether that can be done, but basically:
>
> struct mhi_ep_ring *ring, *tmp;
> LIST_HEAD(list);
>
> spin_lock_irqsave(&mhi_cntrl->list_lock, flags);
>
> list_splice_init(&mhi_cntrl->ch_db_list, &list);
>
> spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
>
> list_for_each_entry_safe(ring, tmp, &list, list) {
> list_del(&ring->list);
> ret = mhi_ep_process_ring(ring);
> ...
> }
>
Yes, I made this change while doing the bringup on SM8450.
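That splice-then-process pattern can be reduced to a minimal userspace model. The list helpers below are hand-rolled stand-ins for the <linux/list.h> API, and the `ring`/`process_all` names are purely illustrative:

```c
#include <assert.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

static void list_del(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
	e->next = e->prev = NULL;
}

static int list_empty(const struct list_head *h) { return h->next == h; }

/* Move everything from @from onto @to and reinitialise @from, like the
 * kernel's list_splice_init(). */
static void list_splice_init(struct list_head *from, struct list_head *to)
{
	if (list_empty(from))
		return;
	from->next->prev = to;
	from->prev->next = to->next;
	to->next->prev = from->prev;
	to->next = from->next;
	INIT_LIST_HEAD(from);
}

struct ring { int id; struct list_head list; };

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Splice the shared doorbell list onto a private one (done under
 * list_lock in the driver), then walk the private copy safely,
 * deleting entries as we go. */
static int process_all(struct list_head *db_list, int *out, int max)
{
	struct list_head work, *pos, *tmp;
	int n = 0;

	INIT_LIST_HEAD(&work);
	list_splice_init(db_list, &work);

	for (pos = work.next, tmp = pos->next; pos != &work;
	     pos = tmp, tmp = pos->next) {
		struct ring *r = container_of(pos, struct ring, list);

		list_del(pos);
		if (n < max)
			out[n++] = r->id;
	}
	return n;
}
```

The point of the splice is that mhi_ep_process_ring() then runs without the spinlock held, so it is free to sleep or take other locks.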
> > + list_for_each_safe(cp, q, &mhi_cntrl->ch_db_list) {
> > + ring = list_entry(cp, struct mhi_ep_ring, list);
> > + list_del(cp);
> > + ret = mhi_ep_process_ring(ring);
> > + if (ret) {
> > + dev_err(dev, "Error processing channel ring: %d\n", ring->ch_id);
> > + goto err_unlock;
> > + }
> > +
> > + /* Re-enable channel interrupt */
> > + mhi_ep_mmio_enable_chdb_a7(mhi_cntrl, ring->ch_id);
> > + }
> > +
> > +err_unlock:
> > + spin_unlock_irqrestore(&mhi_cntrl->list_lock, flags);
> > +}
> > +
> > static void mhi_ep_release_device(struct device *dev)
> > {
> > struct mhi_ep_device *mhi_dev = to_mhi_ep_device(dev);
>
> . . .
>
> > +void mhi_ep_ring_inc_index(struct mhi_ep_ring *ring)
> > +{
> > + ring->rd_offset++;
> > + if (ring->rd_offset == ring->ring_size)
> > + ring->rd_offset = 0;
> > +}
> > +
> > +static int __mhi_ep_cache_ring(struct mhi_ep_ring *ring, size_t end)
> > +{
> > + struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> > + struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > + size_t start, copy_size;
> > + struct mhi_ep_ring_element *ring_shadow;
> > + phys_addr_t ring_shadow_phys;
> > + size_t size = ring->ring_size * sizeof(struct mhi_ep_ring_element);
>
> Do you cache the entire ring just in case you need to wrap
> around the end of it?
>
Actually caching the whole ring is not needed. I've now modified this code to
cache only the ring elements that need to be cached.
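The wrap-around portion of the copy logic quoted below can be isolated into a small userspace sketch, with plain memcpy() standing in for memcpy_fromio() (the function name and fixed ring size are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RING_SIZE 8

struct ring_el { uint64_t ptr; uint32_t dword[2]; };

/* Copy host ring elements in [start, end) into the local cache, handling
 * the wrap past the end of the ring the same way the quoted
 * __mhi_ep_cache_ring() does. */
static void cache_ring_range(struct ring_el *cache, const struct ring_el *host,
			     size_t start, size_t end)
{
	if (start < end) {
		memcpy(&cache[start], &host[start],
		       (end - start) * sizeof(*cache));
	} else {
		memcpy(&cache[start], &host[start],
		       (RING_SIZE - start) * sizeof(*cache));
		if (end)
			memcpy(&cache[0], &host[0], end * sizeof(*cache));
	}
}
```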
> > + int ret;
> > +
> > + /* No need to cache event rings */
> > + if (ring->type == RING_TYPE_ER)
> > + return 0;
> > +
> > + /* No need to cache the ring if write pointer is unmodified */
> > + if (ring->wr_offset == end)
> > + return 0;
> > +
> > + start = ring->wr_offset;
> > +
> > + /* Allocate memory for host ring */
> > + ring_shadow = mhi_cntrl->alloc_addr(mhi_cntrl, &ring_shadow_phys,
> > + size);
> > + if (!ring_shadow) {
> > + dev_err(dev, "Failed to allocate memory for ring_shadow\n");
> > + return -ENOMEM;
> > + }
> > +
> > + /* Map host ring */
> > + ret = mhi_cntrl->map_addr(mhi_cntrl, ring_shadow_phys,
> > + ring->ring_ctx->generic.rbase, size);
> > + if (ret) {
> > + dev_err(dev, "Failed to map ring_shadow\n\n");
> > + goto err_ring_free;
> > + }
> > +
> > + dev_dbg(dev, "Caching ring: start %d end %d size %d", start, end, copy_size);
> > +
> > + if (start < end) {
> > + copy_size = (end - start) * sizeof(struct mhi_ep_ring_element);
> > + memcpy_fromio(&ring->ring_cache[start], &ring_shadow[start], copy_size);
> > + } else {
> > + copy_size = (ring->ring_size - start) * sizeof(struct mhi_ep_ring_element);
> > + memcpy_fromio(&ring->ring_cache[start], &ring_shadow[start], copy_size);
> > + if (end)
> > + memcpy_fromio(&ring->ring_cache[0], &ring_shadow[0],
> > + end * sizeof(struct mhi_ep_ring_element));
> > + }
> > +
> > + /* Now unmap and free host ring */
> > + mhi_cntrl->unmap_addr(mhi_cntrl, ring_shadow_phys);
> > + mhi_cntrl->free_addr(mhi_cntrl, ring_shadow_phys, ring_shadow, size);
> > +
> > + return 0;
> > +
> > +err_ring_free:
> > + mhi_cntrl->free_addr(mhi_cntrl, ring_shadow_phys, &ring_shadow, size);
> > +
> > + return ret;
> > +}
> > +
>
> . . .
>
> > +static int mhi_ep_process_ring_element(struct mhi_ep_ring *ring, size_t offset)
> > +{
> > + struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> > + struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > + struct mhi_ep_ring_element *el;
> > + int ret = -ENODEV;
> > +
> > + /* Get the element and invoke the respective callback */
> > + el = &ring->ring_cache[offset];
> > +
>
> You already know that the ring_cb function pointer is non-null (you set
> it in mhi_ep_ring_init(), below). At least you *should* be able to
> be sure of that...
>
ack
> > + if (ring->ring_cb)
> > + ret = ring->ring_cb(ring, el);
> > + else
> > + dev_err(dev, "No callback registered for ring\n");
> > +
> > + return ret;
> > +}
> > +
> > +int mhi_ep_process_ring(struct mhi_ep_ring *ring)
> > +{
> > + struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> > + struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > + int ret = 0;
> > +
> > + /* Event rings should not be processed */
> > + if (ring->type == RING_TYPE_ER)
> > + return -EINVAL;
> > +
> > + dev_dbg(dev, "Processing ring of type: %d\n", ring->type);
> > +
> > + /* Update the write offset for the ring */
> > + ret = mhi_ep_update_wr_offset(ring);
> > + if (ret) {
> > + dev_err(dev, "Error updating write offset for ring\n");
> > + return ret;
> > + }
> > +
> > + /* Sanity check to make sure there are elements in the ring */
> > + if (ring->rd_offset == ring->wr_offset)
> > + return 0;
> > +
> > + /* Process channel ring first */
> > + if (ring->type == RING_TYPE_CH) {
> > + ret = mhi_ep_process_ring_element(ring, ring->rd_offset);
> > + if (ret)
> > + dev_err(dev, "Error processing ch ring element: %d\n", ring->rd_offset);
> > +
> > + return ret;
> > + }
> > +
> > + /* Process command ring now */
> > + while (ring->rd_offset != ring->wr_offset) {
> > + ret = mhi_ep_process_ring_element(ring, ring->rd_offset);
> > + if (ret) {
> > + dev_err(dev, "Error processing cmd ring element: %d\n", ring->rd_offset);
> > + return ret;
> > + }
> > +
> > + mhi_ep_ring_inc_index(ring);
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +/* TODO: Support for adding multiple ring elements to the ring */
> > +int mhi_ep_ring_add_element(struct mhi_ep_ring *ring, struct mhi_ep_ring_element *el, int size)
>
> I'm pretty sure the size argument is unused, so should be eliminated.
>
done
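As an aside, the free-element computation a few lines below can be checked in isolation. A userspace sketch mirroring that arithmetic (the helper name is illustrative; one slot is deliberately left unused so that a full ring is distinguishable from an empty one):

```c
#include <assert.h>
#include <stdint.h>

/* Free elements between the read and write offsets of a circular ring of
 * @size entries, mirroring the arithmetic in the quoted
 * mhi_ep_ring_add_element(): one element always stays unused. */
static uint32_t ring_free_elems(uint32_t size, uint32_t rd, uint32_t wr)
{
	if (rd < wr)
		return (wr - rd) - 1;
	return ((size - rd) + wr) - 1;
}
```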
> > +{
> > + struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
> > + struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > + struct mhi_ep_ring_element *ring_shadow;
> > + size_t ring_size = ring->ring_size * sizeof(struct mhi_ep_ring_element);
>
> Use sizeof(*el) in the line above.
>
> > + phys_addr_t ring_shadow_phys;
> > + size_t old_offset = 0;
> > + u32 num_free_elem;
> > + int ret;
> > +
> > + ret = mhi_ep_update_wr_offset(ring);
> > + if (ret) {
> > + dev_err(dev, "Error updating write pointer\n");
> > + return ret;
> > + }
> > +
> > + if (ring->rd_offset < ring->wr_offset)
> > + num_free_elem = (ring->wr_offset - ring->rd_offset) - 1;
> > + else
> > + num_free_elem = ((ring->ring_size - ring->rd_offset) + ring->wr_offset) - 1;
> > +
> > + /* Check if there is space in ring for adding at least an element */
> > + if (num_free_elem < 1) {
>
> if (!num_free_elem) {
>
> > + dev_err(dev, "No space left in the ring\n");
> > + return -ENOSPC;
> > + }
> > +
> > + old_offset = ring->rd_offset;
> > + mhi_ep_ring_inc_index(ring);
> > +
> > + dev_dbg(dev, "Adding an element to ring at offset (%d)\n", ring->rd_offset);
> > +
> > + /* Update rp in ring context */
> > + ring->ring_ctx->generic.rp = (ring->rd_offset * sizeof(struct mhi_ep_ring_element)) +
> > + ring->ring_ctx->generic.rbase;
> > +
> > + /* Allocate memory for host ring */
> > + ring_shadow = mhi_cntrl->alloc_addr(mhi_cntrl, &ring_shadow_phys, ring_size);
> > + if (!ring_shadow) {
> > + dev_err(dev, "failed to allocate ring_shadow\n");
> > + return -ENOMEM;
> > + }
> > +
> > + /* Map host ring */
> > + ret = mhi_cntrl->map_addr(mhi_cntrl, ring_shadow_phys,
> > + ring->ring_ctx->generic.rbase, ring_size);
> > + if (ret) {
> > + dev_err(dev, "failed to map ring_shadow\n\n");
> > + goto err_ring_free;
> > + }
> > +
> > + /* Copy the element to ring */
>
> Use sizeof(*el) in the memcpy_toio() call.
>
> > + memcpy_toio(&ring_shadow[old_offset], el, sizeof(struct mhi_ep_ring_element));
> > +
> > + /* Now unmap and free host ring */
> > + mhi_cntrl->unmap_addr(mhi_cntrl, ring_shadow_phys);
> > + mhi_cntrl->free_addr(mhi_cntrl, ring_shadow_phys, ring_shadow, ring_size);
> > +
> > + return 0;
> > +
> > +err_ring_free:
> > + mhi_cntrl->free_addr(mhi_cntrl, ring_shadow_phys, ring_shadow, ring_size);
> > +
> > + return ret;
> > +}
> > +
> > +void mhi_ep_ring_init(struct mhi_ep_ring *ring, enum mhi_ep_ring_type type, u32 id)
> > +{
> > + ring->state = RING_STATE_UINT;
> > + ring->type = type;
> > + if (ring->type == RING_TYPE_CMD) {
> > + ring->db_offset_h = CRDB_HIGHER;
> > + ring->db_offset_l = CRDB_LOWER;
> > + } else if (ring->type == RING_TYPE_CH) {
> > + ring->db_offset_h = CHDB_HIGHER_n(id);
> > + ring->db_offset_l = CHDB_LOWER_n(id);
> > + ring->ch_id = id;
> > + } else if (ring->type == RING_TYPE_ER) {
> > + ring->db_offset_h = ERDB_HIGHER_n(id);
> > + ring->db_offset_l = ERDB_LOWER_n(id);
> > + }
>
> There is no other case, right? If you hit it, you should report
> it. Otherwise there's not really any need to check for ring->type
> RING_TYPE_ER...
>
Ack.
Thanks,
Mani
> -Alex
>
> > +}
> > +
> > +int mhi_ep_ring_start(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring,
> > + union mhi_ep_ring_ctx *ctx)
> > +{
> > + struct device *dev = &mhi_cntrl->mhi_dev->dev;
> > + int ret;
> > +
> > + ring->mhi_cntrl = mhi_cntrl;
> > + ring->ring_ctx = ctx;
> > + ring->ring_size = mhi_ep_ring_num_elems(ring);
> > +
> > + /* During ring init, both rp and wp are equal */
> > + ring->rd_offset = mhi_ep_ring_addr2offset(ring, ring->ring_ctx->generic.rp);
> > + ring->wr_offset = mhi_ep_ring_addr2offset(ring, ring->ring_ctx->generic.rp);
> > + ring->state = RING_STATE_IDLE;
> > +
> > + /* Allocate ring cache memory for holding the copy of host ring */
> > + ring->ring_cache = kcalloc(ring->ring_size, sizeof(struct mhi_ep_ring_element),
> > + GFP_KERNEL);
> > + if (!ring->ring_cache)
> > + return -ENOMEM;
> > +
> > + ret = mhi_ep_cache_ring(ring, ring->ring_ctx->generic.wp);
> > + if (ret) {
> > + dev_err(dev, "Failed to cache ring\n");
> > + kfree(ring->ring_cache);
> > + return ret;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +void mhi_ep_ring_stop(struct mhi_ep_cntrl *mhi_cntrl, struct mhi_ep_ring *ring)
> > +{
> > + ring->state = RING_STATE_UINT;
> > + kfree(ring->ring_cache);
> > +}
> > diff --git a/include/linux/mhi_ep.h b/include/linux/mhi_ep.h
> > index 902c8febd856..729f4b802b74 100644
> > --- a/include/linux/mhi_ep.h
> > +++ b/include/linux/mhi_ep.h
> > @@ -62,6 +62,11 @@ struct mhi_ep_db_info {
> > * @ch_ctx_host_pa: Physical address of host channel context data structure
> > * @ev_ctx_host_pa: Physical address of host event context data structure
> > * @cmd_ctx_host_pa: Physical address of host command context data structure
> > + * @ring_wq: Dedicated workqueue for processing MHI rings
> > + * @ring_work: Ring worker
> > + * @ch_db_list: List of queued channel doorbells
> > + * @st_transition_list: List of state transitions
> > + * @list_lock: Lock for protecting state transition and channel doorbell lists
> > * @chdb: Array of channel doorbell interrupt info
> > * @raise_irq: CB function for raising IRQ to the host
> > * @alloc_addr: CB function for allocating memory in endpoint for storing host context
> > @@ -90,6 +95,12 @@ struct mhi_ep_cntrl {
> > u64 ev_ctx_host_pa;
> > u64 cmd_ctx_host_pa;
> > + struct workqueue_struct *ring_wq;
> > + struct work_struct ring_work;
> > +
> > + struct list_head ch_db_list;
> > + struct list_head st_transition_list;
> > + spinlock_t list_lock;
> > struct mhi_ep_db_info chdb[4];
> > void (*raise_irq)(struct mhi_ep_cntrl *mhi_cntrl);
> >
>
Hi Alex,
Thanks a lot for the inputs. I've incorporated a portion of your suggestions
and will address the remaining ones post upstream.
On Wed, Jan 05, 2022 at 06:22:25PM -0600, Alex Elder wrote:
> On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> > Move the common MHI definitions in host "internal.h" to "common.h" so
> > that the endpoint code can make use of them. This also avoids
> > duplicating the definitions in the endpoint stack.
> >
> > Still, the MHI register definitions are not moved since the offsets
> > vary between host and endpoint.
> >
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > ---
> > drivers/bus/mhi/common.h | 182 ++++++++++++++++++++++++++++++++
> > drivers/bus/mhi/host/internal.h | 154 +--------------------------
> > 2 files changed, 183 insertions(+), 153 deletions(-)
> > create mode 100644 drivers/bus/mhi/common.h
> >
[...]
> > +
> > +#define EV_CTX_RESERVED_MASK GENMASK(7, 0)
> > +#define EV_CTX_INTMODC_MASK GENMASK(15, 8)
> > +#define EV_CTX_INTMODC_SHIFT 8
> > +#define EV_CTX_INTMODT_MASK GENMASK(31, 16)
> > +#define EV_CTX_INTMODT_SHIFT 16
> > +struct mhi_event_ctxt {
>
> These fields should all be explicitly marked as little endian.
> It so happens Intel and ARM use that, but defining them as
> simple unsigned values is not correct for an external interface.
>
> This comment applies to the command and channel context structures
> also.
>
Ack
> > + __u32 intmod;
> > + __u32 ertype;
> > + __u32 msivec;
> > +
>
> I think you can just define the entire struct as __packed
> and __aligned(4) rather than defining all of these fields
> with those attributes.
>
This was suggested by Arnd during the MHI host review. He preferred adding the
aligned parameter only for members that require it.
> > + __u64 rbase __packed __aligned(4);
> > + __u64 rlen __packed __aligned(4);
> > + __u64 rp __packed __aligned(4);
> > + __u64 wp __packed __aligned(4);
> > +};
> > +
> > +#define CHAN_CTX_CHSTATE_MASK GENMASK(7, 0)
> > +#define CHAN_CTX_CHSTATE_SHIFT 0
>
> Please eliminate all the _SHIFT definitions like this,
> where you are already defining the corresponding _MASK.
> The _SHIFT is redundant (and could lead to error, and
> takes up extra space).
>
> You are using bitfield operations (like FIELD_GET()) in
> at least some places already. Use them consistently
> throughout the driver. Those macros simplify the code
> and obviate the need for any shift definitions.
>
Ack.
Thanks,
Mani
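What dropping the _SHIFT definitions buys can be seen with simplified userspace stand-ins for GENMASK()/FIELD_GET()/FIELD_PREP() (the kernel macros additionally do compile-time validation; the 32-bit-only versions below are for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified 32-bit userspace versions of the kernel helpers. */
#define GENMASK(h, l)	(((~0u) << (l)) & (~0u >> (31 - (h))))
#define FIELD_GET(m, v)	(((v) & (m)) >> __builtin_ctz(m))
#define FIELD_PREP(m, v) (((v) << __builtin_ctz(m)) & (m))

/* With FIELD_GET()/FIELD_PREP(), only the mask needs defining;
 * MHICFG_NER_SHIFT and friends become redundant. */
#define MHICFG_NER_MASK		GENMASK(23, 16)
#define MHICFG_NHWER_MASK	GENMASK(31, 24)
```

Extracting a field is then `FIELD_GET(MHICFG_NER_MASK, regval)` with no separately maintained shift constant to get out of sync.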
On Wed, Jan 05, 2022 at 06:22:59PM -0600, Alex Elder wrote:
> On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> > mhi_state_str[] array could be used by the MHI endpoint stack also. So let's
> > make the array "static const" and move it inside the "common.h" header
> > so that the endpoint stack could also make use of it. Otherwise, the
> > array definition would have to be present in both the host and endpoint
> > stacks, and that'll result in duplication.
> >
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
>
> This results in common source code (which is good), but it will be
> duplicated in everything that includes this file.
>
> Do you have no common code available to both the endpoint and host?
> You could have some (in drivers/bus/mhi/common.c, for example).
>
> If you don't, I have a different suggestion, below. It does
> basically the same thing you're doing here, but I much prefer
> duplicating an inline function than a data structure.
>
> > ---
> > drivers/bus/mhi/common.h | 13 ++++++++++++-
> > drivers/bus/mhi/host/init.c | 12 ------------
> > 2 files changed, 12 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/bus/mhi/common.h b/drivers/bus/mhi/common.h
> > index 0f4f3b9f3027..2ea438205617 100644
> > --- a/drivers/bus/mhi/common.h
> > +++ b/drivers/bus/mhi/common.h
> > @@ -174,7 +174,18 @@ struct mhi_cmd_ctxt {
> > __u64 wp __packed __aligned(4);
> > };
> > -extern const char * const mhi_state_str[MHI_STATE_MAX];
> > +static const char * const mhi_state_str[MHI_STATE_MAX] = {
> > + [MHI_STATE_RESET] = "RESET",
> > + [MHI_STATE_READY] = "READY",
> > + [MHI_STATE_M0] = "M0",
> > + [MHI_STATE_M1] = "M1",
> > + [MHI_STATE_M2] = "M2",
> > + [MHI_STATE_M3] = "M3",
> > + [MHI_STATE_M3_FAST] = "M3 FAST",
> > + [MHI_STATE_BHI] = "BHI",
> > + [MHI_STATE_SYS_ERR] = "SYS ERROR",
> > +};
> > +
> > #define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
> > !mhi_state_str[state]) ? \
> > "INVALID_STATE" : mhi_state_str[state])
>
> You could easily and safely define this as an inline function instead.
>
Sounds good!
> #define MHI_STATE_CASE(x) case MHI_STATE_ ## x: return #x
> static inline const char *mhi_state_string(enum mhi_state state)
> {
>         switch (state) {
>         MHI_STATE_CASE(RESET);
>         MHI_STATE_CASE(READY);
>         MHI_STATE_CASE(M0);
>         MHI_STATE_CASE(M1);
>         MHI_STATE_CASE(M2);
>         MHI_STATE_CASE(M3_FAST);
>         MHI_STATE_CASE(BHI);
>         MHI_STATE_CASE(SYS_ERR);
>         default: return "(unrecognized MHI state)";
>         }
> }
> #undef MHI_STATE_CASE
I've used the below one:
static inline const char * const mhi_state_str(enum mhi_state state)
{
        switch (state) {
        case MHI_STATE_RESET:
                return "RESET";
        case MHI_STATE_READY:
                return "READY";
        case MHI_STATE_M0:
                return "M0";
        case MHI_STATE_M1:
                return "M1";
        case MHI_STATE_M2:
                return "M2";
        case MHI_STATE_M3:
                return "M3";
        case MHI_STATE_M3_FAST:
                return "M3 FAST";
        case MHI_STATE_BHI:
                return "BHI";
        case MHI_STATE_SYS_ERR:
                return "SYS ERROR";
        default:
                return "Unknown state";
        }
}
Thanks,
Mani
>
> -Alex
>
> > diff --git a/drivers/bus/mhi/host/init.c b/drivers/bus/mhi/host/init.c
> > index 5aaca6d0f52b..fa904e7468d8 100644
> > --- a/drivers/bus/mhi/host/init.c
> > +++ b/drivers/bus/mhi/host/init.c
> > @@ -44,18 +44,6 @@ const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
> > [DEV_ST_TRANSITION_DISABLE] = "DISABLE",
> > };
> > -const char * const mhi_state_str[MHI_STATE_MAX] = {
> > - [MHI_STATE_RESET] = "RESET",
> > - [MHI_STATE_READY] = "READY",
> > - [MHI_STATE_M0] = "M0",
> > - [MHI_STATE_M1] = "M1",
> > - [MHI_STATE_M2] = "M2",
> > - [MHI_STATE_M3] = "M3",
> > - [MHI_STATE_M3_FAST] = "M3 FAST",
> > - [MHI_STATE_BHI] = "BHI",
> > - [MHI_STATE_SYS_ERR] = "SYS ERROR",
> > -};
> > -
> > const char * const mhi_ch_state_type_str[MHI_CH_STATE_TYPE_MAX] = {
> > [MHI_CH_STATE_TYPE_RESET] = "RESET",
> > [MHI_CH_STATE_TYPE_STOP] = "STOP",
> >
>
On Wed, Jan 05, 2022 at 06:28:08PM -0600, Alex Elder wrote:
> On 12/2/21 5:35 AM, Manivannan Sadhasivam wrote:
> > This commit adds support for creating and destroying MHI endpoint devices.
> > The MHI endpoint devices bind to the MHI endpoint channels and are used
> > to transfer data between the MHI host and the endpoint device.
> >
> > There is a single MHI EP device for each channel pair. The devices will be
> > created when the corresponding channels have been started by the host and
> > will be destroyed during MHI EP power down and reset.
> >
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > ---
> > drivers/bus/mhi/ep/main.c | 85 +++++++++++++++++++++++++++++++++++++++
> > 1 file changed, 85 insertions(+)
> >
> > diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
> > index ce0f99f22058..f0b5f49db95a 100644
> > --- a/drivers/bus/mhi/ep/main.c
> > +++ b/drivers/bus/mhi/ep/main.c
> > @@ -63,6 +63,91 @@ static struct mhi_ep_device *mhi_ep_alloc_device(struct mhi_ep_cntrl *mhi_cntrl)
> > return mhi_dev;
> > }
> > +static int mhi_ep_create_device(struct mhi_ep_cntrl *mhi_cntrl, u32 ch_id)
> > +{
> > + struct mhi_ep_device *mhi_dev;
> > + struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ch_id];
> > + int ret;
> > +
> > + mhi_dev = mhi_ep_alloc_device(mhi_cntrl);
> > + if (IS_ERR(mhi_dev))
> > + return PTR_ERR(mhi_dev);
> > +
> > + mhi_dev->dev_type = MHI_DEVICE_XFER;
>
> Elsewhere (at least in mhi_ep_process_tre_ring()) in your code
> you assume that the even-numbered channel is UL.
>
> I would say, either use that assumption throughout, or do
> not use that assumption at all. (I prefer the latter.)
>
> I don't really like how this assumes that the channels
> are defined in adjacent pairs. It assumes one is
> upload and the next one is download, but doesn't
> specify the order in which they're defined. If
> you're going to assume they are defined in pairs, you
> should be able to assume which one is defined first,
> and then simplify this code (and even verify that
> they are defined UL before DL, perhaps).
>
Yes, the UL channel is always even numbered and DL is odd.
I've removed the checks.
> > + /* Configure primary channel */
> > + if (mhi_chan->dir == DMA_TO_DEVICE) {
> > + mhi_dev->ul_chan = mhi_chan;
> > + mhi_dev->ul_chan_id = mhi_chan->chan;
> > + } else {
> > + mhi_dev->dl_chan = mhi_chan;
> > + mhi_dev->dl_chan_id = mhi_chan->chan;
> > + }
> > +
> > + get_device(&mhi_dev->dev);
> > + mhi_chan->mhi_dev = mhi_dev;
> > +
> > + /* Configure secondary channel as well */
> > + mhi_chan++;
> > + if (mhi_chan->dir == DMA_TO_DEVICE) {
> > + mhi_dev->ul_chan = mhi_chan;
> > + mhi_dev->ul_chan_id = mhi_chan->chan;
> > + } else {
> > + mhi_dev->dl_chan = mhi_chan;
> > + mhi_dev->dl_chan_id = mhi_chan->chan;
> > + }
> > +
> > + get_device(&mhi_dev->dev);
> > + mhi_chan->mhi_dev = mhi_dev;
> > +
> > + /* Channel name is same for both UL and DL */
>
> You could verify the two channels indeed have the
> same name.
>
done.
Thanks,
Mani