Hello,
This is the second attempt at adding the MHI (Modem Host Interface) bus
interface to the Linux kernel. MHI is a communication protocol used by
host processors to control and communicate with modems over a high-speed
peripheral bus or shared memory. The MHI protocol was designed and
developed by Qualcomm Innovation Center, Inc., for use in their modems.
The first submission was made by Sujeev Dias of Qualcomm:
https://lkml.org/lkml/2018/4/26/1159
https://lkml.org/lkml/2018/7/9/987
This series addresses most of the review comments from Greg and Arnd on
the initial patchset. Furthermore, in order to ease the review process,
I've split the patches logically and dropped a few of them which were
not required for this initial submission.
Below is the high-level changelog:
1. Removed all DT related code
2. Got rid of pci specific struct members from top level mhi structs
3. Moved device specific callbacks like ul_xfer() to the driver struct,
as suggested by Greg. It doesn't make sense to have callbacks in the
device struct
4. Used the priv data of `struct device` instead of own priv data in
`mhi_device` as suggested by Greg. This allows client drivers to use
the dev_set_drvdata()/dev_get_drvdata() APIs
5. Removed all debugfs related code
6. Changes to the APIs to look uniform
7. Converted the documentation to .rst and placed it in its own subdirectory
8. Changes to the MHI device naming
9. Converted all uppercase variable names to appropriate lowercase ones
10. Removed custom debug code and used the dev_* ones where applicable
11. Dropped timesync, DTR, UCI, and Qcom controller related code
12. Added QRTR client driver patch
13. Added modalias support to the MHI stack as well as the client driver,
so that udev can autoload the modules (client drivers) once the MHI
devices are created
This series includes the MHI stack as well as the QRTR client driver,
which falls under the networking subsystem.
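As a rough illustration of the client driver model described in items 3
and 4 above, a minimal client driver on top of this series might look
like the sketch below. This is only a sketch against the interfaces
added by the series: the my_* names are invented, the "IPCR" channel
name and the .chan member of struct mhi_device_id are assumptions, and
the driver registration helper from the client-driver patch is not
shown here.

```c
#include <linux/mhi.h>
#include <linux/module.h>
#include <linux/slab.h>

struct my_ctx {			/* illustrative per-device context */
	struct mhi_device *mhi_dev;
};

static int my_probe(struct mhi_device *mhi_dev,
		    const struct mhi_device_id *id)
{
	struct my_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

	if (!ctx)
		return -ENOMEM;

	ctx->mhi_dev = mhi_dev;
	/* Item 4: use the priv data of struct device directly */
	dev_set_drvdata(&mhi_dev->dev, ctx);

	return 0;
}

static void my_remove(struct mhi_device *mhi_dev)
{
	kfree(dev_get_drvdata(&mhi_dev->dev));
}

static void my_status_cb(struct mhi_device *mhi_dev,
			 enum mhi_callback cb_reason)
{
	/* Item 3: callbacks live in the driver struct, not the device */
}

static const struct mhi_device_id my_id_table[] = {
	{ .chan = "IPCR" },	/* assumed member name; binds by channel */
	{},
};

static struct mhi_driver my_driver = {
	.id_table = my_id_table,
	.probe = my_probe,
	.remove = my_remove,
	.status_cb = my_status_cb,
};
/*
 * Registered with the MHI core via the driver registration API added in
 * the "Add support for registering MHI client drivers" patch.
 */
```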
The following developers deserve explicit acknowledgement for their
contributions to the MHI code:
Sujeev Dias
Siddartha Mohanadoss
Hemant Kumar
Jeff Hugo
Thanks,
Mani
Manivannan Sadhasivam (16):
docs: Add documentation for MHI bus
bus: mhi: core: Add support for registering MHI controllers
bus: mhi: core: Add support for registering MHI client drivers
bus: mhi: core: Add support for creating and destroying MHI devices
bus: mhi: core: Add support for ringing channel/event ring doorbells
bus: mhi: core: Add support for PM state transitions
bus: mhi: core: Add support for basic PM operations
bus: mhi: core: Add support for downloading firmware over BHIe
bus: mhi: core: Add support for downloading RDDM image during panic
bus: mhi: core: Add support for processing events from client device
bus: mhi: core: Add support for data transfer
bus: mhi: core: Add uevent support for module autoloading
MAINTAINERS: Add entry for MHI bus
net: qrtr: Add MHI transport layer
net: qrtr: Do not depend on ARCH_QCOM
soc: qcom: Do not depend on ARCH_QCOM for QMI helpers
Documentation/index.rst | 1 +
Documentation/mhi/index.rst | 18 +
Documentation/mhi/mhi.rst | 218 ++++
Documentation/mhi/topology.rst | 60 ++
MAINTAINERS | 9 +
drivers/bus/Kconfig | 1 +
drivers/bus/Makefile | 3 +
drivers/bus/mhi/Kconfig | 14 +
drivers/bus/mhi/Makefile | 2 +
drivers/bus/mhi/core/Makefile | 3 +
drivers/bus/mhi/core/boot.c | 510 ++++++++++
drivers/bus/mhi/core/init.c | 1283 +++++++++++++++++++++++
drivers/bus/mhi/core/internal.h | 703 +++++++++++++
drivers/bus/mhi/core/main.c | 1581 +++++++++++++++++++++++++++++
drivers/bus/mhi/core/pm.c | 974 ++++++++++++++++++
drivers/soc/qcom/Kconfig | 1 -
include/linux/mhi.h | 680 +++++++++++++
include/linux/mod_devicetable.h | 13 +
net/qrtr/Kconfig | 8 +-
net/qrtr/Makefile | 2 +
net/qrtr/mhi.c | 207 ++++
scripts/mod/devicetable-offsets.c | 3 +
scripts/mod/file2alias.c | 10 +
23 files changed, 6302 insertions(+), 2 deletions(-)
create mode 100644 Documentation/mhi/index.rst
create mode 100644 Documentation/mhi/mhi.rst
create mode 100644 Documentation/mhi/topology.rst
create mode 100644 drivers/bus/mhi/Kconfig
create mode 100644 drivers/bus/mhi/Makefile
create mode 100644 drivers/bus/mhi/core/Makefile
create mode 100644 drivers/bus/mhi/core/boot.c
create mode 100644 drivers/bus/mhi/core/init.c
create mode 100644 drivers/bus/mhi/core/internal.h
create mode 100644 drivers/bus/mhi/core/main.c
create mode 100644 drivers/bus/mhi/core/pm.c
create mode 100644 include/linux/mhi.h
create mode 100644 net/qrtr/mhi.c
--
2.17.1
This commit adds support for creating and destroying MHI devices. The
MHI devices bind to the MHI channels and are used to transfer data
between the MHI host and the client device.
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/989
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[mani: split from pm patch and cleaned up for upstream]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/Makefile | 2 +-
drivers/bus/mhi/core/internal.h | 1 +
drivers/bus/mhi/core/main.c | 123 ++++++++++++++++++++++++++++++++
include/linux/mhi.h | 6 ++
4 files changed, 131 insertions(+), 1 deletion(-)
create mode 100644 drivers/bus/mhi/core/main.c
diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
index 2db32697c67f..77f7730da4bf 100644
--- a/drivers/bus/mhi/core/Makefile
+++ b/drivers/bus/mhi/core/Makefile
@@ -1,3 +1,3 @@
obj-$(CONFIG_MHI_BUS) := mhi.o
-mhi-y := init.o
+mhi-y := init.o main.o
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index 21f686d3a140..ea7f1d7b0129 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -140,6 +140,7 @@ struct mhi_chan {
bool offload_ch;
bool pre_alloc;
bool auto_start;
+ bool wake_capable;
int (*gen_tre)(struct mhi_controller *mhi_cntrl,
struct mhi_chan *mhi_chan, void *buf, void *cb,
size_t len, enum mhi_flags flags);
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
new file mode 100644
index 000000000000..216fd8691140
--- /dev/null
+++ b/drivers/bus/mhi/core/main.c
@@ -0,0 +1,123 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#define dev_fmt(fmt) "MHI: " fmt
+
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/mhi.h>
+#include <linux/module.h>
+#include <linux/skbuff.h>
+#include <linux/slab.h>
+#include "internal.h"
+
+int mhi_destroy_device(struct device *dev, void *data)
+{
+ struct mhi_device *mhi_dev;
+ struct mhi_controller *mhi_cntrl;
+
+ if (dev->bus != &mhi_bus_type)
+ return 0;
+
+ mhi_dev = to_mhi_device(dev);
+ mhi_cntrl = mhi_dev->mhi_cntrl;
+
+ /* Only destroy virtual devices attached to the bus */
+ if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+ return 0;
+
+ dev_dbg(mhi_cntrl->dev, "destroy device for chan:%s\n",
+ mhi_dev->chan_name);
+
+ /* Notify the client and remove the device from MHI bus */
+ device_del(dev);
+ put_device(dev);
+
+ return 0;
+}
+
+static void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason)
+{
+ struct mhi_driver *mhi_drv;
+
+ if (!mhi_dev->dev.driver)
+ return;
+
+ mhi_drv = to_mhi_driver(mhi_dev->dev.driver);
+
+ if (mhi_drv->status_cb)
+ mhi_drv->status_cb(mhi_dev, cb_reason);
+}
+
+/* Bind MHI channels to MHI devices */
+void mhi_create_devices(struct mhi_controller *mhi_cntrl)
+{
+ int i;
+ struct mhi_chan *mhi_chan;
+ struct mhi_device *mhi_dev;
+ int ret;
+
+ mhi_chan = mhi_cntrl->mhi_chan;
+ for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+ if (!mhi_chan->configured || mhi_chan->mhi_dev ||
+ !(mhi_chan->ee_mask & BIT(mhi_cntrl->ee)))
+ continue;
+ mhi_dev = mhi_alloc_device(mhi_cntrl);
+ if (!mhi_dev)
+ return;
+
+ mhi_dev->dev_type = MHI_DEVICE_XFER;
+ switch (mhi_chan->dir) {
+ case DMA_TO_DEVICE:
+ mhi_dev->ul_chan = mhi_chan;
+ mhi_dev->ul_chan_id = mhi_chan->chan;
+ break;
+ case DMA_FROM_DEVICE:
+ /* We use dl_chan as offload channels */
+ mhi_dev->dl_chan = mhi_chan;
+ mhi_dev->dl_chan_id = mhi_chan->chan;
+ break;
+ default:
+ dev_err(mhi_cntrl->dev, "Direction not supported\n");
+ mhi_dealloc_device(mhi_cntrl, mhi_dev);
+ return;
+ }
+
+ mhi_chan->mhi_dev = mhi_dev;
+
+ /* Check next channel if it matches */
+ if ((i + 1) < mhi_cntrl->max_chan && mhi_chan[1].configured) {
+ if (!strcmp(mhi_chan[1].name, mhi_chan->name)) {
+ i++;
+ mhi_chan++;
+ if (mhi_chan->dir == DMA_TO_DEVICE) {
+ mhi_dev->ul_chan = mhi_chan;
+ mhi_dev->ul_chan_id = mhi_chan->chan;
+ } else {
+ mhi_dev->dl_chan = mhi_chan;
+ mhi_dev->dl_chan_id = mhi_chan->chan;
+ }
+ mhi_chan->mhi_dev = mhi_dev;
+ }
+ }
+
+ /* Channel name is the same for both UL and DL */
+ mhi_dev->chan_name = mhi_chan->name;
+ dev_set_name(&mhi_dev->dev, "%04x_%s", mhi_chan->chan,
+ mhi_dev->chan_name);
+
+ /* Init wakeup source if available */
+ if (mhi_dev->dl_chan && mhi_dev->dl_chan->wake_capable)
+ device_init_wakeup(&mhi_dev->dev, true);
+
+ ret = device_add(&mhi_dev->dev);
+ if (ret)
+ mhi_dealloc_device(mhi_cntrl, mhi_dev);
+ }
+}
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 0fdad987dd70..cb6ddd23463c 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -166,6 +166,7 @@ enum mhi_db_brst_mode {
* @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition
* @auto_queue: Framework will automatically queue buffers for DL traffic
* @auto_start: Automatically start (open) this channel
+ * @wake_capable: Channel capable of waking up the system
*/
struct mhi_channel_config {
u32 num;
@@ -184,6 +185,7 @@ struct mhi_channel_config {
bool doorbell_mode_switch;
bool auto_queue;
bool auto_start;
+ bool wake_capable;
};
/**
@@ -365,6 +367,8 @@ struct mhi_controller {
* struct mhi_device - Structure representing a MHI device which binds
* to channels
* @dev: Driver model device node for the MHI device
+ * @ul_chan_id: MHI channel id for UL transfer
+ * @dl_chan_id: MHI channel id for DL transfer
* @tiocm: Device current terminal settings
* @id: Pointer to MHI device ID struct
* @chan_name: Name of the channel to which the device binds
@@ -376,6 +380,8 @@ struct mhi_controller {
*/
struct mhi_device {
struct device dev;
+ int ul_chan_id;
+ int dl_chan_id;
u32 tiocm;
const struct mhi_device_id *id;
const char *chan_name;
--
2.17.1
This commit adds support for registering MHI controller drivers with
the MHI stack. MHI controller drivers manage the interaction with the
MHI client devices such as the external modems and WiFi chipsets. They
also act as the MHI bus master, in charge of managing the physical link
between the host and client device.
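Concretely, a controller (bus glue) driver is expected to describe its
channels and event rings in struct mhi_controller_config, populate the
mandatory callbacks in struct mhi_controller, and call
mhi_register_controller(). The sketch below uses only field names from
this patch; the my_* callbacks and the specific channel/event ring
values are illustrative, not taken from any real controller:

```c
static struct mhi_channel_config my_channels[] = {
	{
		.num = 14,			/* illustrative channel number */
		.name = "IPCR",
		.num_elements = 64,
		.event_ring = 0,
		.dir = DMA_TO_DEVICE,
		.ee_mask = MHI_CH_EE_AMSS,
		.doorbell = MHI_DB_BRST_DISABLE,
	},
};

static struct mhi_event_config my_events[] = {
	{
		.num_elements = 128,
		.irq_moderation_ms = 1,
		.irq = 1,
		.channel = U32_MAX,		/* not dedicated to a channel */
		.mode = MHI_DB_BRST_DISABLE,
		.data_type = MHI_ER_CTRL,
	},
};

static struct mhi_controller_config my_config = {
	.max_channels = 128,
	.timeout_ms = 2000,
	.num_channels = ARRAY_SIZE(my_channels),
	.ch_cfg = my_channels,
	.num_events = ARRAY_SIZE(my_events),
	.event_cfg = my_events,
};

static int my_register(struct mhi_controller *mhi_cntrl)
{
	/*
	 * mhi_register_controller() rejects controllers that do not
	 * provide runtime_get/runtime_put and status_cb/link_status.
	 */
	mhi_cntrl->name = "my_modem";		/* illustrative */
	mhi_cntrl->runtime_get = my_runtime_get;
	mhi_cntrl->runtime_put = my_runtime_put;
	mhi_cntrl->status_cb = my_status_cb;
	mhi_cntrl->link_status = my_link_status;

	return mhi_register_controller(mhi_cntrl, &my_config);
}
```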
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/987
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[jhugo: added static config for controllers and fixed several bugs]
Signed-off-by: Jeffrey Hugo <[email protected]>
[mani: removed DT dependency, split and cleaned up for upstream]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/Kconfig | 1 +
drivers/bus/Makefile | 3 +
drivers/bus/mhi/Kconfig | 14 +
drivers/bus/mhi/Makefile | 2 +
drivers/bus/mhi/core/Makefile | 3 +
drivers/bus/mhi/core/init.c | 404 +++++++++++++++++++++++++++++
drivers/bus/mhi/core/internal.h | 169 ++++++++++++
include/linux/mhi.h | 438 ++++++++++++++++++++++++++++++++
include/linux/mod_devicetable.h | 12 +
9 files changed, 1046 insertions(+)
create mode 100644 drivers/bus/mhi/Kconfig
create mode 100644 drivers/bus/mhi/Makefile
create mode 100644 drivers/bus/mhi/core/Makefile
create mode 100644 drivers/bus/mhi/core/init.c
create mode 100644 drivers/bus/mhi/core/internal.h
create mode 100644 include/linux/mhi.h
diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
index 50200d1c06ea..383934e54786 100644
--- a/drivers/bus/Kconfig
+++ b/drivers/bus/Kconfig
@@ -202,5 +202,6 @@ config DA8XX_MSTPRI
peripherals.
source "drivers/bus/fsl-mc/Kconfig"
+source "drivers/bus/mhi/Kconfig"
endmenu
diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
index 1320bcf9fa9d..05f32cd694a4 100644
--- a/drivers/bus/Makefile
+++ b/drivers/bus/Makefile
@@ -34,3 +34,6 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o
obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
obj-$(CONFIG_DA8XX_MSTPRI) += da8xx-mstpri.o
+
+# MHI
+obj-$(CONFIG_MHI_BUS) += mhi/
diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
new file mode 100644
index 000000000000..a8bd9bd7db7c
--- /dev/null
+++ b/drivers/bus/mhi/Kconfig
@@ -0,0 +1,14 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# MHI bus
+#
+# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+#
+
+config MHI_BUS
+ tristate "Modem Host Interface (MHI) bus"
+ help
+ Bus driver for MHI protocol. Modem Host Interface (MHI) is a
+ communication protocol used by the host processors to control
+ and communicate with modem devices over a high speed peripheral
+ bus or shared memory.
diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
new file mode 100644
index 000000000000..19e6443b72df
--- /dev/null
+++ b/drivers/bus/mhi/Makefile
@@ -0,0 +1,2 @@
+# core layer
+obj-y += core/
diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
new file mode 100644
index 000000000000..2db32697c67f
--- /dev/null
+++ b/drivers/bus/mhi/core/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_MHI_BUS) := mhi.o
+
+mhi-y := init.o
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
new file mode 100644
index 000000000000..5b817ec250e0
--- /dev/null
+++ b/drivers/bus/mhi/core/init.c
@@ -0,0 +1,404 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#define dev_fmt(fmt) "MHI: " fmt
+
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/mhi.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/wait.h>
+#include "internal.h"
+
+static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
+ struct mhi_controller_config *config)
+{
+ int i, num;
+ struct mhi_event *mhi_event;
+ struct mhi_event_config *event_cfg;
+
+ num = config->num_events;
+ mhi_cntrl->total_ev_rings = num;
+ mhi_cntrl->mhi_event = kcalloc(num, sizeof(*mhi_cntrl->mhi_event),
+ GFP_KERNEL);
+ if (!mhi_cntrl->mhi_event)
+ return -ENOMEM;
+
+ /* Populate event ring */
+ mhi_event = mhi_cntrl->mhi_event;
+ for (i = 0; i < num; i++) {
+ event_cfg = &config->event_cfg[i];
+
+ mhi_event->er_index = i;
+ mhi_event->ring.elements = event_cfg->num_elements;
+ mhi_event->intmod = event_cfg->irq_moderation_ms;
+ mhi_event->irq = event_cfg->irq;
+
+ if (event_cfg->channel != U32_MAX) {
+ /* This event ring has a dedicated channel */
+ mhi_event->chan = event_cfg->channel;
+ if (mhi_event->chan >= mhi_cntrl->max_chan) {
+ dev_err(mhi_cntrl->dev,
+ "Event Ring channel not available\n");
+ goto error_ev_cfg;
+ }
+
+ mhi_event->mhi_chan =
+ &mhi_cntrl->mhi_chan[mhi_event->chan];
+ }
+
+ /* Priority is fixed to 1 for now */
+ mhi_event->priority = 1;
+
+ mhi_event->db_cfg.brstmode = event_cfg->mode;
+ if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode))
+ goto error_ev_cfg;
+
+ mhi_event->data_type = event_cfg->data_type;
+
+ mhi_event->hw_ring = event_cfg->hardware_event;
+ if (mhi_event->hw_ring)
+ mhi_cntrl->hw_ev_rings++;
+ else
+ mhi_cntrl->sw_ev_rings++;
+
+ mhi_event->cl_manage = event_cfg->client_managed;
+ mhi_event->offload_ev = event_cfg->offload_channel;
+ mhi_event++;
+ }
+
+ /* We need IRQ for each event ring + additional one for BHI */
+ mhi_cntrl->nr_irqs_req = mhi_cntrl->total_ev_rings + 1;
+
+ return 0;
+
+error_ev_cfg:
+
+ kfree(mhi_cntrl->mhi_event);
+ return -EINVAL;
+}
+
+static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
+ struct mhi_controller_config *config)
+{
+ int i;
+ u32 chan;
+ struct mhi_channel_config *ch_cfg;
+
+ mhi_cntrl->max_chan = config->max_channels;
+
+ /*
+ * The allocation of MHI channels can exceed 32KB in some scenarios,
+ * so to avoid any possible memory allocation failures, vzalloc is
+ * used here
+ */
+ mhi_cntrl->mhi_chan = vzalloc(mhi_cntrl->max_chan *
+ sizeof(*mhi_cntrl->mhi_chan));
+ if (!mhi_cntrl->mhi_chan)
+ return -ENOMEM;
+
+ INIT_LIST_HEAD(&mhi_cntrl->lpm_chans);
+
+ /* Populate channel configurations */
+ for (i = 0; i < config->num_channels; i++) {
+ struct mhi_chan *mhi_chan;
+
+ ch_cfg = &config->ch_cfg[i];
+
+ chan = ch_cfg->num;
+ if (chan >= mhi_cntrl->max_chan) {
+ dev_err(mhi_cntrl->dev,
+ "Channel %d not available\n", chan);
+ goto error_chan_cfg;
+ }
+
+ mhi_chan = &mhi_cntrl->mhi_chan[chan];
+ mhi_chan->name = ch_cfg->name;
+ mhi_chan->chan = chan;
+
+ mhi_chan->tre_ring.elements = ch_cfg->num_elements;
+ if (!mhi_chan->tre_ring.elements)
+ goto error_chan_cfg;
+
+ /*
+ * For some channels, the local ring length should be bigger than
+ * the transfer ring length due to internal logical channels in the
+ * device. This way, the host can queue many more buffers than the
+ * transfer ring length allows. For example, RSC channels should
+ * have a larger local channel length than the transfer ring length.
+ */
+ mhi_chan->buf_ring.elements = ch_cfg->local_elements;
+ if (!mhi_chan->buf_ring.elements)
+ mhi_chan->buf_ring.elements = mhi_chan->tre_ring.elements;
+ mhi_chan->er_index = ch_cfg->event_ring;
+ mhi_chan->dir = ch_cfg->dir;
+
+ /*
+ * For most channels, chtype is identical to the channel direction.
+ * So, if it is not defined, then assign the channel direction to
+ * chtype
+ */
+ mhi_chan->type = ch_cfg->type;
+ if (!mhi_chan->type)
+ mhi_chan->type = (enum mhi_ch_type)mhi_chan->dir;
+
+ mhi_chan->ee_mask = ch_cfg->ee_mask;
+
+ mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
+ mhi_chan->xfer_type = ch_cfg->data_type;
+
+ mhi_chan->lpm_notify = ch_cfg->lpm_notify;
+ mhi_chan->offload_ch = ch_cfg->offload_channel;
+ mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch;
+ mhi_chan->pre_alloc = ch_cfg->auto_queue;
+ mhi_chan->auto_start = ch_cfg->auto_start;
+
+ /*
+ * If MHI host allocates buffers, then the channel direction
+ * should be DMA_FROM_DEVICE and the buffer type should be
+ * MHI_BUF_RAW
+ */
+ if (mhi_chan->pre_alloc && (mhi_chan->dir != DMA_FROM_DEVICE ||
+ mhi_chan->xfer_type != MHI_BUF_RAW)) {
+ dev_err(mhi_cntrl->dev,
+ "Invalid channel configuration\n");
+ goto error_chan_cfg;
+ }
+
+ /*
+ * Bi-directional and directionless channels must be
+ * offload channels
+ */
+ if ((mhi_chan->dir == DMA_BIDIRECTIONAL ||
+ mhi_chan->dir == DMA_NONE) && !mhi_chan->offload_ch) {
+ dev_err(mhi_cntrl->dev,
+ "Invalid channel configuration\n");
+ goto error_chan_cfg;
+ }
+
+ if (!mhi_chan->offload_ch) {
+ mhi_chan->db_cfg.brstmode = ch_cfg->doorbell;
+ if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode)) {
+ dev_err(mhi_cntrl->dev,
+ "Invalid doorbell mode\n");
+ goto error_chan_cfg;
+ }
+ }
+
+ mhi_chan->configured = true;
+
+ if (mhi_chan->lpm_notify)
+ list_add_tail(&mhi_chan->node, &mhi_cntrl->lpm_chans);
+ }
+
+ return 0;
+
+error_chan_cfg:
+ vfree(mhi_cntrl->mhi_chan);
+
+ return -EINVAL;
+}
+
+static int parse_config(struct mhi_controller *mhi_cntrl,
+ struct mhi_controller_config *config)
+{
+ int ret;
+
+ /* Parse MHI channel configuration */
+ ret = parse_ch_cfg(mhi_cntrl, config);
+ if (ret)
+ return ret;
+
+ /* Parse MHI event configuration */
+ ret = parse_ev_cfg(mhi_cntrl, config);
+ if (ret)
+ goto error_ev_cfg;
+
+ mhi_cntrl->timeout_ms = config->timeout_ms;
+ if (!mhi_cntrl->timeout_ms)
+ mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS;
+
+ mhi_cntrl->bounce_buf = config->use_bounce_buf;
+ mhi_cntrl->buffer_len = config->buf_len;
+ if (!mhi_cntrl->buffer_len)
+ mhi_cntrl->buffer_len = MHI_MAX_MTU;
+
+ return 0;
+
+error_ev_cfg:
+ vfree(mhi_cntrl->mhi_chan);
+
+ return ret;
+}
+
+int mhi_register_controller(struct mhi_controller *mhi_cntrl,
+ struct mhi_controller_config *config)
+{
+ int ret;
+ int i;
+ struct mhi_event *mhi_event;
+ struct mhi_chan *mhi_chan;
+ struct mhi_cmd *mhi_cmd;
+ struct mhi_device *mhi_dev;
+
+ if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put)
+ return -EINVAL;
+
+ if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status)
+ return -EINVAL;
+
+ ret = parse_config(mhi_cntrl, config);
+ if (ret)
+ return -EINVAL;
+
+ mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS,
+ sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
+ if (!mhi_cntrl->mhi_cmd) {
+ ret = -ENOMEM;
+ goto error_alloc_cmd;
+ }
+
+ INIT_LIST_HEAD(&mhi_cntrl->transition_list);
+ spin_lock_init(&mhi_cntrl->transition_lock);
+ spin_lock_init(&mhi_cntrl->wlock);
+ init_waitqueue_head(&mhi_cntrl->state_event);
+
+ mhi_cmd = mhi_cntrl->mhi_cmd;
+ for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++)
+ spin_lock_init(&mhi_cmd->lock);
+
+ mhi_event = mhi_cntrl->mhi_event;
+ for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+ /* Skip for offload events */
+ if (mhi_event->offload_ev)
+ continue;
+
+ mhi_event->mhi_cntrl = mhi_cntrl;
+ spin_lock_init(&mhi_event->lock);
+ }
+
+ mhi_chan = mhi_cntrl->mhi_chan;
+ for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+ mutex_init(&mhi_chan->mutex);
+ init_completion(&mhi_chan->completion);
+ rwlock_init(&mhi_chan->lock);
+ }
+
+ /* Register controller with MHI bus */
+ mhi_dev = mhi_alloc_device(mhi_cntrl);
+ if (IS_ERR(mhi_dev)) {
+ dev_err(mhi_cntrl->dev, "Failed to allocate device\n");
+ ret = PTR_ERR(mhi_dev);
+ goto error_alloc_dev;
+ }
+
+ mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
+ mhi_dev->mhi_cntrl = mhi_cntrl;
+ dev_set_name(&mhi_dev->dev, "%s", mhi_cntrl->name);
+
+ /* Init wakeup source */
+ device_init_wakeup(&mhi_dev->dev, true);
+
+ ret = device_add(&mhi_dev->dev);
+ if (ret)
+ goto error_add_dev;
+
+ mhi_cntrl->mhi_dev = mhi_dev;
+
+ return 0;
+
+error_add_dev:
+ mhi_dealloc_device(mhi_cntrl, mhi_dev);
+
+error_alloc_dev:
+ kfree(mhi_cntrl->mhi_cmd);
+
+error_alloc_cmd:
+ vfree(mhi_cntrl->mhi_chan);
+ kfree(mhi_cntrl->mhi_event);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_register_controller);
+
+void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
+{
+ struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
+
+ kfree(mhi_cntrl->mhi_cmd);
+ kfree(mhi_cntrl->mhi_event);
+ vfree(mhi_cntrl->mhi_chan);
+
+ device_del(&mhi_dev->dev);
+ put_device(&mhi_dev->dev);
+}
+EXPORT_SYMBOL_GPL(mhi_unregister_controller);
+
+static void mhi_release_device(struct device *dev)
+{
+ struct mhi_device *mhi_dev = to_mhi_device(dev);
+
+ if (mhi_dev->ul_chan)
+ mhi_dev->ul_chan->mhi_dev = NULL;
+
+ if (mhi_dev->dl_chan)
+ mhi_dev->dl_chan->mhi_dev = NULL;
+
+ kfree(mhi_dev);
+}
+
+struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl)
+{
+ struct mhi_device *mhi_dev;
+ struct device *dev;
+
+ mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
+ if (!mhi_dev)
+ return ERR_PTR(-ENOMEM);
+
+ dev = &mhi_dev->dev;
+ device_initialize(dev);
+ dev->bus = &mhi_bus_type;
+ dev->release = mhi_release_device;
+ dev->parent = mhi_cntrl->dev;
+ mhi_dev->mhi_cntrl = mhi_cntrl;
+ atomic_set(&mhi_dev->dev_wake, 0);
+
+ return mhi_dev;
+}
+
+static int mhi_match(struct device *dev, struct device_driver *drv)
+{
+ return 0;
+};
+
+struct bus_type mhi_bus_type = {
+ .name = "mhi",
+ .dev_name = "mhi",
+ .match = mhi_match,
+};
+
+static int __init mhi_init(void)
+{
+ return bus_register(&mhi_bus_type);
+}
+
+static void __exit mhi_exit(void)
+{
+ bus_unregister(&mhi_bus_type);
+}
+
+postcore_initcall(mhi_init);
+module_exit(mhi_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("MHI Host Interface");
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
new file mode 100644
index 000000000000..21f686d3a140
--- /dev/null
+++ b/drivers/bus/mhi/core/internal.h
@@ -0,0 +1,169 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#ifndef _MHI_INT_H
+#define _MHI_INT_H
+
+extern struct bus_type mhi_bus_type;
+
+/* MHI transfer completion events */
+enum mhi_ev_ccs {
+ MHI_EV_CC_INVALID = 0x0,
+ MHI_EV_CC_SUCCESS = 0x1,
+ MHI_EV_CC_EOT = 0x2,
+ MHI_EV_CC_OVERFLOW = 0x3,
+ MHI_EV_CC_EOB = 0x4,
+ MHI_EV_CC_OOB = 0x5,
+ MHI_EV_CC_DB_MODE = 0x6,
+ MHI_EV_CC_UNDEFINED_ERR = 0x10,
+ MHI_EV_CC_BAD_TRE = 0x11,
+};
+
+enum mhi_ch_state {
+ MHI_CH_STATE_DISABLED = 0x0,
+ MHI_CH_STATE_ENABLED = 0x1,
+ MHI_CH_STATE_RUNNING = 0x2,
+ MHI_CH_STATE_SUSPENDED = 0x3,
+ MHI_CH_STATE_STOP = 0x4,
+ MHI_CH_STATE_ERROR = 0x5,
+};
+
+#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \
+ mode != MHI_DB_BRST_ENABLE)
+
+#define NR_OF_CMD_RINGS 1
+#define CMD_EL_PER_RING 128
+#define PRIMARY_CMD_RING 0
+#define MHI_MAX_MTU 0xffff
+
+enum mhi_er_type {
+ MHI_ER_TYPE_INVALID = 0x0,
+ MHI_ER_TYPE_VALID = 0x1,
+};
+
+enum mhi_ch_ee_mask {
+ MHI_CH_EE_PBL = BIT(MHI_EE_PBL),
+ MHI_CH_EE_SBL = BIT(MHI_EE_SBL),
+ MHI_CH_EE_AMSS = BIT(MHI_EE_AMSS),
+ MHI_CH_EE_RDDM = BIT(MHI_EE_RDDM),
+ MHI_CH_EE_PTHRU = BIT(MHI_EE_PTHRU),
+ MHI_CH_EE_WFW = BIT(MHI_EE_WFW),
+ MHI_CH_EE_EDL = BIT(MHI_EE_EDL),
+};
+
+struct db_cfg {
+ bool reset_req;
+ bool db_mode;
+ u32 pollcfg;
+ enum mhi_db_brst_mode brstmode;
+ dma_addr_t db_val;
+ void (*process_db)(struct mhi_controller *mhi_cntrl,
+ struct db_cfg *db_cfg, void __iomem *io_addr,
+ dma_addr_t db_val);
+};
+
+struct mhi_ring {
+ dma_addr_t dma_handle;
+ dma_addr_t iommu_base;
+ u64 *ctxt_wp; /* point to ctxt wp */
+ void *pre_aligned;
+ void *base;
+ void *rp;
+ void *wp;
+ size_t el_size;
+ size_t len;
+ size_t elements;
+ size_t alloc_size;
+ void __iomem *db_addr;
+};
+
+struct mhi_cmd {
+ struct mhi_ring ring;
+ spinlock_t lock;
+};
+
+struct mhi_buf_info {
+ dma_addr_t p_addr;
+ void *v_addr;
+ void *bb_addr;
+ void *wp;
+ size_t len;
+ void *cb_buf;
+ enum dma_data_direction dir;
+};
+
+struct mhi_event {
+ u32 er_index;
+ u32 intmod;
+ u32 irq;
+ int chan; /* this event ring is dedicated to a channel (optional) */
+ u32 priority;
+ enum mhi_er_data_type data_type;
+ struct mhi_ring ring;
+ struct db_cfg db_cfg;
+ bool hw_ring;
+ bool cl_manage;
+ bool offload_ev; /* managed by a device driver */
+ spinlock_t lock;
+ struct mhi_chan *mhi_chan; /* dedicated to channel */
+ struct tasklet_struct task;
+ int (*process_event)(struct mhi_controller *mhi_cntrl,
+ struct mhi_event *mhi_event,
+ u32 event_quota);
+ struct mhi_controller *mhi_cntrl;
+};
+
+struct mhi_chan {
+ u32 chan;
+ const char *name;
+ /*
+ * Important: When consuming, increment tre_ring first and when
+ * releasing, decrement buf_ring first. If tre_ring has space, buf_ring
+ * is guaranteed to have space, so we do not need to check both rings.
+ */
+ struct mhi_ring buf_ring;
+ struct mhi_ring tre_ring;
+ u32 er_index;
+ u32 intmod;
+ enum mhi_ch_type type;
+ enum dma_data_direction dir;
+ struct db_cfg db_cfg;
+ enum mhi_ch_ee_mask ee_mask;
+ enum mhi_buf_type xfer_type;
+ enum mhi_ch_state ch_state;
+ enum mhi_ev_ccs ccs;
+ bool lpm_notify;
+ bool configured;
+ bool offload_ch;
+ bool pre_alloc;
+ bool auto_start;
+ int (*gen_tre)(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan, void *buf, void *cb,
+ size_t len, enum mhi_flags flags);
+ int (*queue_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags);
+ struct mhi_device *mhi_dev;
+ void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *result);
+ struct mutex mutex;
+ struct completion completion;
+ rwlock_t lock;
+ struct list_head node;
+};
+
+/* Default MHI timeout */
+#define MHI_TIMEOUT_MS (1000)
+
+struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
+static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
+ struct mhi_device *mhi_dev)
+{
+ kfree(mhi_dev);
+}
+
+int mhi_destroy_device(struct device *dev, void *data);
+void mhi_create_devices(struct mhi_controller *mhi_cntrl);
+
+#endif /* _MHI_INT_H */
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
new file mode 100644
index 000000000000..69cf9a4b06c7
--- /dev/null
+++ b/include/linux/mhi.h
@@ -0,0 +1,438 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+ *
+ */
+#ifndef _MHI_H_
+#define _MHI_H_
+
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/mutex.h>
+#include <linux/rwlock_types.h>
+#include <linux/slab.h>
+#include <linux/spinlock_types.h>
+#include <linux/wait.h>
+#include <linux/workqueue.h>
+
+struct mhi_chan;
+struct mhi_event;
+struct mhi_ctxt;
+struct mhi_cmd;
+struct mhi_buf_info;
+
+/**
+ * enum mhi_callback - MHI callback
+ * @MHI_CB_IDLE: MHI entered idle state
+ * @MHI_CB_PENDING_DATA: New data available for client to process
+ * @MHI_CB_LPM_ENTER: MHI host entered low power mode
+ * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
+ * @MHI_CB_EE_RDDM: MHI device entered RDDM exec env
+ * @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env
+ * @MHI_CB_SYS_ERROR: MHI device entered error state (may recover)
+ * @MHI_CB_FATAL_ERROR: MHI device entered fatal error state
+ */
+enum mhi_callback {
+ MHI_CB_IDLE,
+ MHI_CB_PENDING_DATA,
+ MHI_CB_LPM_ENTER,
+ MHI_CB_LPM_EXIT,
+ MHI_CB_EE_RDDM,
+ MHI_CB_EE_MISSION_MODE,
+ MHI_CB_SYS_ERROR,
+ MHI_CB_FATAL_ERROR,
+};
+
+/**
+ * enum mhi_flags - Transfer flags
+ * @MHI_EOB: End of buffer for bulk transfer
+ * @MHI_EOT: End of transfer
+ * @MHI_CHAIN: Linked transfer
+ */
+enum mhi_flags {
+ MHI_EOB,
+ MHI_EOT,
+ MHI_CHAIN,
+};
+
+/**
+ * enum mhi_device_type - Device types
+ * @MHI_DEVICE_XFER: Handles data transfer
+ * @MHI_DEVICE_TIMESYNC: Used for the timesync feature
+ * @MHI_DEVICE_CONTROLLER: Control device
+ */
+enum mhi_device_type {
+ MHI_DEVICE_XFER,
+ MHI_DEVICE_TIMESYNC,
+ MHI_DEVICE_CONTROLLER,
+};
+
+/**
+ * enum mhi_ch_type - Channel types
+ * @MHI_CH_TYPE_INVALID: Invalid channel type
+ * @MHI_CH_TYPE_OUTBOUND: Outbound channel to the device
+ * @MHI_CH_TYPE_INBOUND: Inbound channel from the device
+ * @MHI_CH_TYPE_INBOUND_COALESCED: Coalesced channel for the device to combine
+ * multiple packets and send them as a single
+ * large packet to reduce CPU consumption
+ */
+enum mhi_ch_type {
+ MHI_CH_TYPE_INVALID = 0,
+ MHI_CH_TYPE_OUTBOUND = DMA_TO_DEVICE,
+ MHI_CH_TYPE_INBOUND = DMA_FROM_DEVICE,
+ MHI_CH_TYPE_INBOUND_COALESCED = 3,
+};
+
+/**
+ * enum mhi_ee_type - Execution environment types
+ * @MHI_EE_PBL: Primary Bootloader
+ * @MHI_EE_SBL: Secondary Bootloader
+ * @MHI_EE_AMSS: Modem, aka the primary runtime EE
+ * @MHI_EE_RDDM: Ram dump download mode
+ * @MHI_EE_WFW: WLAN firmware mode
+ * @MHI_EE_PTHRU: Passthrough
+ * @MHI_EE_EDL: Embedded downloader
+ */
+enum mhi_ee_type {
+ MHI_EE_PBL,
+ MHI_EE_SBL,
+ MHI_EE_AMSS,
+ MHI_EE_RDDM,
+ MHI_EE_WFW,
+ MHI_EE_PTHRU,
+ MHI_EE_EDL,
+ MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
+ MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
+ MHI_EE_NOT_SUPPORTED,
+ MHI_EE_MAX,
+};
+
+/**
+ * enum mhi_buf_type - Accepted buffer type for the channel
+ * @MHI_BUF_RAW: Raw buffer
+ * @MHI_BUF_SKB: SKB struct
+ * @MHI_BUF_SCLIST: Scatter-gather list
+ * @MHI_BUF_NOP: CPU offload channel, host does not accept transfer
+ * @MHI_BUF_DMA: Receive DMA address mapped by client
+ * @MHI_BUF_RSC_DMA: RSC type premapped buffer
+ */
+enum mhi_buf_type {
+ MHI_BUF_RAW,
+ MHI_BUF_SKB,
+ MHI_BUF_SCLIST,
+ MHI_BUF_NOP,
+ MHI_BUF_DMA,
+ MHI_BUF_RSC_DMA,
+};
+
+/**
+ * enum mhi_er_data_type - Event ring data types
+ * @MHI_ER_DATA: Only client data over this ring
+ * @MHI_ER_CTRL: MHI control data and client data
+ * @MHI_ER_TSYNC: Time sync events
+ */
+enum mhi_er_data_type {
+ MHI_ER_DATA,
+ MHI_ER_CTRL,
+ MHI_ER_TSYNC,
+};
+
+/**
+ * enum mhi_db_brst_mode - Doorbell mode
+ * @MHI_DB_BRST_DISABLE: Burst mode disable
+ * @MHI_DB_BRST_ENABLE: Burst mode enable
+ */
+enum mhi_db_brst_mode {
+ MHI_DB_BRST_DISABLE = 0x2,
+ MHI_DB_BRST_ENABLE = 0x3,
+};
+
+/**
+ * struct mhi_channel_config - Channel configuration structure for controller
+ * @num: The number assigned to this channel
+ * @name: The name of this channel
+ * @num_elements: The number of elements that can be queued to this channel
+ * @local_elements: The local ring length of the channel
+ * @event_ring: The event ring index that services this channel
+ * @dir: Direction that data may flow on this channel
+ * @type: Channel type
+ * @ee_mask: Execution Environment mask for this channel
+ * @pollcfg: Polling configuration for burst mode. 0 means use default.
+ * Value is in milliseconds for UL channels, multiple of 8 ring
+ * elements for DL channels
+ * @data_type: Data type accepted by this channel
+ * @doorbell: Doorbell mode
+ * @lpm_notify: The channel master requires low power mode notifications
+ * @offload_channel: The client manages the channel completely
+ * @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition
+ * @auto_queue: Framework will automatically queue buffers for DL traffic
+ * @auto_start: Automatically start (open) this channel
+ */
+struct mhi_channel_config {
+ u32 num;
+ char *name;
+ u32 num_elements;
+ u32 local_elements;
+ u32 event_ring;
+ enum dma_data_direction dir;
+ enum mhi_ch_type type;
+ u32 ee_mask;
+ u32 pollcfg;
+ enum mhi_buf_type data_type;
+ enum mhi_db_brst_mode doorbell;
+ bool lpm_notify;
+ bool offload_channel;
+ bool doorbell_mode_switch;
+ bool auto_queue;
+ bool auto_start;
+};
+
+/**
+ * struct mhi_event_config - Event ring configuration structure for controller
+ * @num_elements: The number of elements that can be queued to this ring
+ * @irq_moderation_ms: Delay irq for additional events to be aggregated
+ * @irq: IRQ associated with this ring
+ * @channel: Dedicated channel number. U32_MAX indicates a non-dedicated ring
+ * @mode: Doorbell mode
+ * @data_type: Type of data this ring will process
+ * @hardware_event: This ring is associated with hardware channels
+ * @client_managed: This ring is client managed
+ * @offload_channel: This ring is associated with an offloaded channel
+ * @priority: Priority of this ring. Use 1 for now
+ */
+struct mhi_event_config {
+ u32 num_elements;
+ u32 irq_moderation_ms;
+ u32 irq;
+ u32 channel;
+ enum mhi_db_brst_mode mode;
+ enum mhi_er_data_type data_type;
+ bool hardware_event;
+ bool client_managed;
+ bool offload_channel;
+ u32 priority;
+};
+
+/**
+ * struct mhi_controller_config - Root MHI controller configuration
+ * @max_channels: Maximum number of channels supported
+ * @timeout_ms: Timeout value for operations. 0 means use default
+ * @use_bounce_buf: Use a bounce buffer pool due to limited DDR access
+ * @m2_no_db: Host is not allowed to ring DB in M2 state
+ * @buf_len: Size of automatically allocated buffers. 0 means use default
+ * @num_channels: Number of channels defined in @ch_cfg
+ * @ch_cfg: Array of defined channels
+ * @num_events: Number of event rings defined in @event_cfg
+ * @event_cfg: Array of defined event rings
+ */
+struct mhi_controller_config {
+ u32 max_channels;
+ u32 timeout_ms;
+ bool use_bounce_buf;
+ bool m2_no_db;
+ u32 buf_len;
+ u32 num_channels;
+ struct mhi_channel_config *ch_cfg;
+ u32 num_events;
+ struct mhi_event_config *event_cfg;
+};
+
+/**
+ * struct mhi_controller - Master MHI controller structure
+ * @name: Name of the controller
+ * @dev: Driver model device node for the controller
+ * @mhi_dev: MHI device instance for the controller
+ * @dev_id: Device ID of the controller
+ * @bus_id: Physical bus instance used by the controller
+ * @regs: Base address of MHI MMIO register space
+ * @iova_start: IOMMU starting address for data
+ * @iova_stop: IOMMU stop address for data
+ * @fw_image: Firmware image name for normal booting
+ * @edl_image: Firmware image name for emergency download mode
+ * @fbc_download: MHI host needs to do complete image transfer
+ * @sbl_size: SBL image size
+ * @seg_len: BHIe vector size
+ * @max_chan: Maximum number of channels the controller supports
+ * @mhi_chan: Points to the channel configuration table
+ * @lpm_chans: List of channels that require LPM notifications
+ * @total_ev_rings: Total # of event rings allocated
+ * @hw_ev_rings: Number of hardware event rings
+ * @sw_ev_rings: Number of software event rings
+ * @nr_irqs_req: Number of IRQs required to operate
+ * @nr_irqs: Number of IRQ allocated by bus master
+ * @irq: Base IRQ # to request
+ * @mhi_event: MHI event ring configurations table
+ * @mhi_cmd: MHI command ring configurations table
+ * @mhi_ctxt: MHI device context, shared memory between host and device
+ * @timeout_ms: Timeout in ms for state transitions
+ * @pm_mutex: Mutex for suspend/resume operation
+ * @pre_init: MHI host needs to do pre-initialization before power up
+ * @pm_lock: Lock for protecting MHI power management state
+ * @pm_state: MHI power management state
+ * @db_access: DB access states
+ * @ee: MHI device execution environment
+ * @wake_set: Device wakeup set flag
+ * @dev_wake: Device wakeup count
+ * @alloc_size: Total memory allocations size of the controller
+ * @pending_pkts: Pending packets for the controller
+ * @transition_list: List of MHI state transitions
+ * @wlock: Lock for protecting device wakeup
+ * @M0: M0 state counter for debugging
+ * @M2: M2 state counter for debugging
+ * @M3: M3 state counter for debugging
+ * @M3_FAST: M3 Fast state counter for debugging
+ * @st_worker: State transition worker
+ * @fw_worker: Firmware download worker
+ * @syserr_worker: System error worker
+ * @state_event: State change event
+ * @status_cb: CB function to notify various power states to bus master
+ * @link_status: CB function to query link status of the device
+ * @wake_get: CB function to assert device wake
+ * @wake_put: CB function to de-assert device wake
+ * @wake_toggle: CB function to assert and de-assert (toggle) device wake
+ * @runtime_get: CB function to request controller runtime resume
+ * @runtime_put: CB function to decrement PM usage count
+ * @lpm_disable: CB function to request disable link level low power modes
+ * @lpm_enable: CB function to request enable link level low power modes again
+ * @bounce_buf: Use of bounce buffer
+ * @buffer_len: Bounce buffer length
+ * @priv_data: Points to bus master's private data
+ */
+struct mhi_controller {
+ const char *name;
+ struct device *dev;
+ struct mhi_device *mhi_dev;
+ u32 dev_id;
+ u32 bus_id;
+ void __iomem *regs;
+ dma_addr_t iova_start;
+ dma_addr_t iova_stop;
+ const char *fw_image;
+ const char *edl_image;
+ bool fbc_download;
+ size_t sbl_size;
+ size_t seg_len;
+ u32 max_chan;
+ struct mhi_chan *mhi_chan;
+ struct list_head lpm_chans;
+ u32 total_ev_rings;
+ u32 hw_ev_rings;
+ u32 sw_ev_rings;
+ u32 nr_irqs_req;
+ u32 nr_irqs;
+ int *irq;
+
+ struct mhi_event *mhi_event;
+ struct mhi_cmd *mhi_cmd;
+ struct mhi_ctxt *mhi_ctxt;
+
+ u32 timeout_ms;
+ struct mutex pm_mutex;
+ bool pre_init;
+ rwlock_t pm_lock;
+ u32 pm_state;
+ u32 db_access;
+ enum mhi_ee_type ee;
+ bool wake_set;
+ atomic_t dev_wake;
+ atomic_t alloc_size;
+ atomic_t pending_pkts;
+ struct list_head transition_list;
+ spinlock_t transition_lock;
+ spinlock_t wlock;
+ u32 M0, M2, M3, M3_FAST;
+ struct work_struct st_worker;
+ struct work_struct fw_worker;
+ struct work_struct syserr_worker;
+ wait_queue_head_t state_event;
+
+ void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
+ enum mhi_callback cb);
+ int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
+ void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
+ void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
+ void (*wake_toggle)(struct mhi_controller *mhi_cntrl);
+ int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
+ void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
+ void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
+ void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
+
+ bool bounce_buf;
+ size_t buffer_len;
+ void *priv_data;
+};
+
+/**
+ * struct mhi_device - Structure representing an MHI device which binds
+ * to channels
+ * @dev: Driver model device node for the MHI device
+ * @tiocm: Device current terminal settings
+ * @id: Pointer to MHI device ID struct
+ * @chan_name: Name of the channel to which the device binds
+ * @mhi_cntrl: Controller the device belongs to
+ * @ul_chan: UL channel for the device
+ * @dl_chan: DL channel for the device
+ * @dev_wake: Device wakeup counter
+ * @dev_type: MHI device type
+ */
+struct mhi_device {
+ struct device dev;
+ u32 tiocm;
+ const struct mhi_device_id *id;
+ const char *chan_name;
+ struct mhi_controller *mhi_cntrl;
+ struct mhi_chan *ul_chan;
+ struct mhi_chan *dl_chan;
+ atomic_t dev_wake;
+ enum mhi_device_type dev_type;
+};
+
+/**
+ * struct mhi_result - Completed buffer information
+ * @buf_addr: Address of data buffer
+ * @dir: Channel direction
+ * @bytes_xferd: # of bytes transferred
+ * @transaction_status: Status of last transaction
+ */
+struct mhi_result {
+ void *buf_addr;
+ enum dma_data_direction dir;
+ size_t bytes_xferd;
+ int transaction_status;
+};
+
+#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev)
+
+/**
+ * mhi_controller_set_devdata - Set MHI controller private data
+ * @mhi_cntrl: MHI controller to set data
+ */
+static inline void mhi_controller_set_devdata(struct mhi_controller *mhi_cntrl,
+ void *priv)
+{
+ mhi_cntrl->priv_data = priv;
+}
+
+/**
+ * mhi_controller_get_devdata - Get MHI controller private data
+ * @mhi_cntrl: MHI controller to get data
+ */
+static inline void *mhi_controller_get_devdata(struct mhi_controller *mhi_cntrl)
+{
+ return mhi_cntrl->priv_data;
+}
+
+/**
+ * mhi_register_controller - Register MHI controller
+ * @mhi_cntrl: MHI controller to register
+ * @config: Configuration to use for the controller
+ */
+int mhi_register_controller(struct mhi_controller *mhi_cntrl,
+ struct mhi_controller_config *config);
+
+/**
+ * mhi_unregister_controller - Unregister MHI controller
+ * @mhi_cntrl: MHI controller to unregister
+ */
+void mhi_unregister_controller(struct mhi_controller *mhi_cntrl);
+
+#endif /* _MHI_H_ */
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index e3596db077dc..be15e997fe39 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -821,4 +821,16 @@ struct wmi_device_id {
const void *context;
};
+#define MHI_NAME_SIZE 32
+
+/**
+ * struct mhi_device_id - MHI device identification
+ * @chan: MHI channel name
+ * @driver_data: Driver data
+ */
+struct mhi_device_id {
+ const char chan[MHI_NAME_SIZE];
+ kernel_ulong_t driver_data;
+};
+
#endif /* LINUX_MOD_DEVICETABLE_H */
--
2.17.1
This commit adds support for ringing channel and event ring doorbells
by the MHI host. The MHI host uses the channel and event ring doorbell
MMIO registers to notify the client device that the transfer and event
rings it has queued are ready for processing.
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/989
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[mani: splitted from pm patch and cleaned up for upstream]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/init.c | 140 ++++++++++++++++
drivers/bus/mhi/core/internal.h | 275 ++++++++++++++++++++++++++++++++
drivers/bus/mhi/core/main.c | 118 ++++++++++++++
include/linux/mhi.h | 5 +
4 files changed, 538 insertions(+)
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index 60dcf2ad3a5f..588166b588b4 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -19,6 +19,136 @@
#include <linux/wait.h>
#include "internal.h"
+int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
+{
+ u32 val;
+ int i, ret;
+ struct mhi_chan *mhi_chan;
+ struct mhi_event *mhi_event;
+ void __iomem *base = mhi_cntrl->regs;
+ struct {
+ u32 offset;
+ u32 mask;
+ u32 shift;
+ u32 val;
+ } reg_info[] = {
+ {
+ CCABAP_HIGHER, U32_MAX, 0,
+ upper_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
+ },
+ {
+ CCABAP_LOWER, U32_MAX, 0,
+ lower_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
+ },
+ {
+ ECABAP_HIGHER, U32_MAX, 0,
+ upper_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
+ },
+ {
+ ECABAP_LOWER, U32_MAX, 0,
+ lower_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
+ },
+ {
+ CRCBAP_HIGHER, U32_MAX, 0,
+ upper_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
+ },
+ {
+ CRCBAP_LOWER, U32_MAX, 0,
+ lower_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
+ },
+ {
+ MHICFG, MHICFG_NER_MASK, MHICFG_NER_SHIFT,
+ mhi_cntrl->total_ev_rings,
+ },
+ {
+ MHICFG, MHICFG_NHWER_MASK, MHICFG_NHWER_SHIFT,
+ mhi_cntrl->hw_ev_rings,
+ },
+ {
+ MHICTRLBASE_HIGHER, U32_MAX, 0,
+ upper_32_bits(mhi_cntrl->iova_start),
+ },
+ {
+ MHICTRLBASE_LOWER, U32_MAX, 0,
+ lower_32_bits(mhi_cntrl->iova_start),
+ },
+ {
+ MHIDATABASE_HIGHER, U32_MAX, 0,
+ upper_32_bits(mhi_cntrl->iova_start),
+ },
+ {
+ MHIDATABASE_LOWER, U32_MAX, 0,
+ lower_32_bits(mhi_cntrl->iova_start),
+ },
+ {
+ MHICTRLLIMIT_HIGHER, U32_MAX, 0,
+ upper_32_bits(mhi_cntrl->iova_stop),
+ },
+ {
+ MHICTRLLIMIT_LOWER, U32_MAX, 0,
+ lower_32_bits(mhi_cntrl->iova_stop),
+ },
+ {
+ MHIDATALIMIT_HIGHER, U32_MAX, 0,
+ upper_32_bits(mhi_cntrl->iova_stop),
+ },
+ {
+ MHIDATALIMIT_LOWER, U32_MAX, 0,
+ lower_32_bits(mhi_cntrl->iova_stop),
+ },
+ { 0, 0, 0 }
+ };
+
+ dev_dbg(mhi_cntrl->dev, "Initializing MHI registers\n");
+
+ /* Read channel db offset */
+ ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK,
+ CHDBOFF_CHDBOFF_SHIFT, &val);
+ if (ret) {
+ dev_err(mhi_cntrl->dev, "Unable to read CHDBOFF register\n");
+ return -EIO;
+ }
+
+ /* Setup wake db */
+ mhi_cntrl->wake_db = base + val + (8 * MHI_DEV_WAKE_DB);
+ mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 4, 0);
+ mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 0, 0);
+ mhi_cntrl->wake_set = false;
+
+ /* Setup channel db address for each channel in tre_ring */
+ mhi_chan = mhi_cntrl->mhi_chan;
+ for (i = 0; i < mhi_cntrl->max_chan; i++, val += 8, mhi_chan++)
+ mhi_chan->tre_ring.db_addr = base + val;
+
+ /* Read event ring db offset */
+ ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK,
+ ERDBOFF_ERDBOFF_SHIFT, &val);
+ if (ret) {
+ dev_err(mhi_cntrl->dev, "Unable to read ERDBOFF register\n");
+ return -EIO;
+ }
+
+ /* Setup event db address for each ev_ring */
+ mhi_event = mhi_cntrl->mhi_event;
+ for (i = 0; i < mhi_cntrl->total_ev_rings; i++, val += 8, mhi_event++) {
+ if (mhi_event->offload_ev)
+ continue;
+
+ mhi_event->ring.db_addr = base + val;
+ }
+
+ /* Setup DB register for primary CMD rings */
+ mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING].ring.db_addr = base + CRDB_LOWER;
+
+ /* Write to MMIO registers */
+ for (i = 0; reg_info[i].offset; i++)
+ mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset,
+ reg_info[i].mask, reg_info[i].shift,
+ reg_info[i].val);
+
+ return 0;
+}
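mhi_init_mmio() above lays out doorbell registers at an 8-byte stride starting from the offset read out of CHDBOFF (and ERDBOFF for event rings), with the dedicated wake doorbell at channel index 127 (MHI_DEV_WAKE_DB). The offset arithmetic in isolation, as a standalone sketch:

```c
#include <assert.h>
#include <stdint.h>

#define MHI_DEV_WAKE_DB 127

/*
 * Byte offset of channel n's doorbell from the MMIO base, given the
 * value read from CHDBOFF; each doorbell register pair is 8 bytes.
 */
static uint32_t chan_db_offset(uint32_t chdboff, uint32_t n)
{
	return chdboff + 8 * n;
}
```

This matches the `base + val + (8 * MHI_DEV_WAKE_DB)` computation for wake_db and the `val += 8` advance in the per-channel loop.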
+
static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
struct mhi_controller_config *config)
{
@@ -63,6 +193,11 @@ static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode))
goto error_ev_cfg;
+ if (mhi_event->db_cfg.brstmode == MHI_DB_BRST_ENABLE)
+ mhi_event->db_cfg.process_db = mhi_db_brstmode;
+ else
+ mhi_event->db_cfg.process_db = mhi_db_brstmode_disable;
+
mhi_event->data_type = event_cfg->data_type;
mhi_event->hw_ring = event_cfg->hardware_event;
@@ -194,6 +329,11 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
}
}
+ if (mhi_chan->db_cfg.brstmode == MHI_DB_BRST_ENABLE)
+ mhi_chan->db_cfg.process_db = mhi_db_brstmode;
+ else
+ mhi_chan->db_cfg.process_db = mhi_db_brstmode_disable;
+
mhi_chan->configured = true;
if (mhi_chan->lpm_notify)
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index ea7f1d7b0129..a4d10916984a 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -9,6 +9,255 @@
extern struct bus_type mhi_bus_type;
+/* MHI MMIO register mapping */
+#define PCI_INVALID_READ(val) (val == U32_MAX)
+
+#define MHIREGLEN (0x0)
+#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
+#define MHIREGLEN_MHIREGLEN_SHIFT (0)
+
+#define MHIVER (0x8)
+#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
+#define MHIVER_MHIVER_SHIFT (0)
+
+#define MHICFG (0x10)
+#define MHICFG_NHWER_MASK (0xFF000000)
+#define MHICFG_NHWER_SHIFT (24)
+#define MHICFG_NER_MASK (0xFF0000)
+#define MHICFG_NER_SHIFT (16)
+#define MHICFG_NHWCH_MASK (0xFF00)
+#define MHICFG_NHWCH_SHIFT (8)
+#define MHICFG_NCH_MASK (0xFF)
+#define MHICFG_NCH_SHIFT (0)
+
+#define CHDBOFF (0x18)
+#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
+#define CHDBOFF_CHDBOFF_SHIFT (0)
+
+#define ERDBOFF (0x20)
+#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
+#define ERDBOFF_ERDBOFF_SHIFT (0)
+
+#define BHIOFF (0x28)
+#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
+#define BHIOFF_BHIOFF_SHIFT (0)
+
+#define BHIEOFF (0x2C)
+#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
+#define BHIEOFF_BHIEOFF_SHIFT (0)
+
+#define DEBUGOFF (0x30)
+#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
+#define DEBUGOFF_DEBUGOFF_SHIFT (0)
+
+#define MHICTRL (0x38)
+#define MHICTRL_MHISTATE_MASK (0x0000FF00)
+#define MHICTRL_MHISTATE_SHIFT (8)
+#define MHICTRL_RESET_MASK (0x2)
+#define MHICTRL_RESET_SHIFT (1)
+
+#define MHISTATUS (0x48)
+#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
+#define MHISTATUS_MHISTATE_SHIFT (8)
+#define MHISTATUS_SYSERR_MASK (0x4)
+#define MHISTATUS_SYSERR_SHIFT (2)
+#define MHISTATUS_READY_MASK (0x1)
+#define MHISTATUS_READY_SHIFT (0)
+
+#define CCABAP_LOWER (0x58)
+#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
+#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
+
+#define CCABAP_HIGHER (0x5C)
+#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
+#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
+
+#define ECABAP_LOWER (0x60)
+#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
+#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
+
+#define ECABAP_HIGHER (0x64)
+#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
+#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
+
+#define CRCBAP_LOWER (0x68)
+#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
+#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
+
+#define CRCBAP_HIGHER (0x6C)
+#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
+#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
+
+#define CRDB_LOWER (0x70)
+#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
+#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
+
+#define CRDB_HIGHER (0x74)
+#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
+#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
+
+#define MHICTRLBASE_LOWER (0x80)
+#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
+#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
+
+#define MHICTRLBASE_HIGHER (0x84)
+#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
+#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
+
+#define MHICTRLLIMIT_LOWER (0x88)
+#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
+#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
+
+#define MHICTRLLIMIT_HIGHER (0x8C)
+#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
+#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
+
+#define MHIDATABASE_LOWER (0x98)
+#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
+#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
+
+#define MHIDATABASE_HIGHER (0x9C)
+#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
+#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
+
+#define MHIDATALIMIT_LOWER (0xA0)
+#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
+#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
+
+#define MHIDATALIMIT_HIGHER (0xA4)
+#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
+#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
+
+/* Host request register */
+#define MHI_SOC_RESET_REQ_OFFSET (0xB0)
+#define MHI_SOC_RESET_REQ BIT(0)
+
+/* MHI BHI offsets */
+#define BHI_BHIVERSION_MINOR (0x00)
+#define BHI_BHIVERSION_MAJOR (0x04)
+#define BHI_IMGADDR_LOW (0x08)
+#define BHI_IMGADDR_HIGH (0x0C)
+#define BHI_IMGSIZE (0x10)
+#define BHI_RSVD1 (0x14)
+#define BHI_IMGTXDB (0x18)
+#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHI_TXDB_SEQNUM_SHFT (0)
+#define BHI_RSVD2 (0x1C)
+#define BHI_INTVEC (0x20)
+#define BHI_RSVD3 (0x24)
+#define BHI_EXECENV (0x28)
+#define BHI_STATUS (0x2C)
+#define BHI_ERRCODE (0x30)
+#define BHI_ERRDBG1 (0x34)
+#define BHI_ERRDBG2 (0x38)
+#define BHI_ERRDBG3 (0x3C)
+#define BHI_SERIALNU (0x40)
+#define BHI_SBLANTIROLLVER (0x44)
+#define BHI_NUMSEG (0x48)
+#define BHI_MSMHWID(n) (0x4C + (0x4 * n))
+#define BHI_OEMPKHASH(n) (0x64 + (0x4 * n))
+#define BHI_RSVD5 (0xC4)
+#define BHI_STATUS_MASK (0xC0000000)
+#define BHI_STATUS_SHIFT (30)
+#define BHI_STATUS_ERROR (3)
+#define BHI_STATUS_SUCCESS (2)
+#define BHI_STATUS_RESET (0)
+
+/* MHI BHIE offsets */
+#define BHIE_MSMSOCID_OFFS (0x0000)
+#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
+#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
+#define BHIE_TXVECSIZE_OFFS (0x0034)
+#define BHIE_TXVECDB_OFFS (0x003C)
+#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHIE_TXVECDB_SEQNUM_SHFT (0)
+#define BHIE_TXVECSTATUS_OFFS (0x0044)
+#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
+#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
+#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
+#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
+#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
+#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
+#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
+#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
+#define BHIE_RXVECSIZE_OFFS (0x0068)
+#define BHIE_RXVECDB_OFFS (0x0070)
+#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHIE_RXVECDB_SEQNUM_SHFT (0)
+#define BHIE_RXVECSTATUS_OFFS (0x0078)
+#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
+#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
+#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
+#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
+#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
+#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
+#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
+
+struct mhi_event_ctxt {
+ u32 reserved : 8;
+ u32 intmodc : 8;
+ u32 intmodt : 16;
+ u32 ertype;
+ u32 msivec;
+
+ u64 rbase __packed __aligned(4);
+ u64 rlen __packed __aligned(4);
+ u64 rp __packed __aligned(4);
+ u64 wp __packed __aligned(4);
+};
+
+struct mhi_chan_ctxt {
+ u32 chstate : 8;
+ u32 brstmode : 2;
+ u32 pollcfg : 6;
+ u32 reserved : 16;
+ u32 chtype;
+ u32 erindex;
+
+ u64 rbase __packed __aligned(4);
+ u64 rlen __packed __aligned(4);
+ u64 rp __packed __aligned(4);
+ u64 wp __packed __aligned(4);
+};
+
+struct mhi_cmd_ctxt {
+ u32 reserved0;
+ u32 reserved1;
+ u32 reserved2;
+
+ u64 rbase __packed __aligned(4);
+ u64 rlen __packed __aligned(4);
+ u64 rp __packed __aligned(4);
+ u64 wp __packed __aligned(4);
+};
+
+struct mhi_ctxt {
+ struct mhi_event_ctxt *er_ctxt;
+ struct mhi_chan_ctxt *chan_ctxt;
+ struct mhi_cmd_ctxt *cmd_ctxt;
+ dma_addr_t er_ctxt_addr;
+ dma_addr_t chan_ctxt_addr;
+ dma_addr_t cmd_ctxt_addr;
+};
+
+struct mhi_tre {
+ u64 ptr;
+ u32 dword[2];
+};
+
+struct bhi_vec_entry {
+ u64 dma_addr;
+ u64 size;
+};
+
+enum mhi_cmd_type {
+ MHI_CMD_NOP = 1,
+ MHI_CMD_RESET_CHAN = 16,
+ MHI_CMD_STOP_CHAN = 17,
+ MHI_CMD_START_CHAN = 18,
+};
+
/* MHI transfer completion events */
enum mhi_ev_ccs {
MHI_EV_CC_INVALID = 0x0,
@@ -37,6 +286,7 @@ enum mhi_ch_state {
#define NR_OF_CMD_RINGS 1
#define CMD_EL_PER_RING 128
#define PRIMARY_CMD_RING 0
+#define MHI_DEV_WAKE_DB 127
#define MHI_MAX_MTU 0xffff
enum mhi_er_type {
@@ -167,4 +417,29 @@ static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
int mhi_destroy_device(struct device *dev, void *data);
void mhi_create_devices(struct mhi_controller *mhi_cntrl);
+/* Register access methods */
+void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
+ void __iomem *db_addr, dma_addr_t db_val);
+void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
+ struct db_cfg *db_mode, void __iomem *db_addr,
+ dma_addr_t db_val);
+int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
+ void __iomem *base, u32 offset, u32 *out);
+int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
+ void __iomem *base, u32 offset, u32 mask,
+ u32 shift, u32 *out);
+void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
+ u32 offset, u32 val);
+void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
+ u32 offset, u32 mask, u32 shift, u32 val);
+void mhi_ring_er_db(struct mhi_event *mhi_event);
+void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
+ dma_addr_t db_val);
+void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd);
+void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan);
+
+/* Initialization methods */
+int mhi_init_mmio(struct mhi_controller *mhi_cntrl);
+
#endif /* _MHI_INT_H */
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index 216fd8691140..134ef9b2cc78 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -17,6 +17,124 @@
#include <linux/slab.h>
#include "internal.h"
+int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
+ void __iomem *base, u32 offset, u32 *out)
+{
+ u32 tmp = readl_relaxed(base + offset);
+
+ /* If there is any unexpected value, query the link status */
+ if (PCI_INVALID_READ(tmp) &&
+ mhi_cntrl->link_status(mhi_cntrl, mhi_cntrl->priv_data))
+ return -EIO;
+
+ *out = tmp;
+
+ return 0;
+}
+
+int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
+ void __iomem *base, u32 offset,
+ u32 mask, u32 shift, u32 *out)
+{
+ u32 tmp;
+ int ret;
+
+ ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
+ if (ret)
+ return ret;
+
+ *out = (tmp & mask) >> shift;
+
+ return 0;
+}
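mhi_read_reg_field() isolates a bit field with a mask and shift. The same arithmetic can be exercised in isolation using the MHICFG NER field layout defined in internal.h (mask 0xFF0000, shift 16):

```c
#include <assert.h>
#include <stdint.h>

#define MHICFG_NER_MASK   (0xFF0000)
#define MHICFG_NER_SHIFT  (16)
#define MHICFG_NHWER_MASK (0xFF000000)
#define MHICFG_NHWER_SHIFT (24)

/* Pure version of the extraction done in mhi_read_reg_field() */
static uint32_t reg_field(uint32_t reg, uint32_t mask, uint32_t shift)
{
	return (reg & mask) >> shift;
}
```

With a raw MHICFG value of 0x02050000, the NER field (total event rings) extracts to 5 and the NHWER field (hardware event rings) to 2.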
+
+void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
+ u32 offset, u32 val)
+{
+ writel_relaxed(val, base + offset);
+}
+
+void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
+ u32 offset, u32 mask, u32 shift, u32 val)
+{
+ int ret;
+ u32 tmp;
+
+ ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
+ if (ret)
+ return;
+
+ tmp &= ~mask;
+ tmp |= (val << shift);
+ mhi_write_reg(mhi_cntrl, base, offset, tmp);
+}
+
+void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
+ dma_addr_t db_val)
+{
+ mhi_write_reg(mhi_cntrl, db_addr, 4, upper_32_bits(db_val));
+ mhi_write_reg(mhi_cntrl, db_addr, 0, lower_32_bits(db_val));
+}
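mhi_write_db() splits a 64-bit DMA address across two 32-bit MMIO writes: the upper half at doorbell offset 4, the lower half at offset 0. The kernel's upper_32_bits()/lower_32_bits() helpers reduce to the following, shown here as standalone equivalents:

```c
#include <assert.h>
#include <stdint.h>

/* Standalone equivalents of the kernel helpers used by mhi_write_db() */
static uint32_t upper_32(uint64_t v)
{
	return (uint32_t)(v >> 32);
}

static uint32_t lower_32(uint64_t v)
{
	return (uint32_t)v;
}
```

Writing the lower half last matters on devices that latch the doorbell on the low-word write, which is why mhi_write_db() orders the two writes that way.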
+
+void mhi_db_brstmode(struct mhi_controller *mhi_cntrl,
+ struct db_cfg *db_cfg,
+ void __iomem *db_addr,
+ dma_addr_t db_val)
+{
+ if (db_cfg->db_mode) {
+ db_cfg->db_val = db_val;
+ mhi_write_db(mhi_cntrl, db_addr, db_val);
+ db_cfg->db_mode = 0;
+ }
+}
+
+void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
+ struct db_cfg *db_cfg,
+ void __iomem *db_addr,
+ dma_addr_t db_val)
+{
+ db_cfg->db_val = db_val;
+ mhi_write_db(mhi_cntrl, db_addr, db_val);
+}
+
+void mhi_ring_er_db(struct mhi_event *mhi_event)
+{
+ struct mhi_ring *ring = &mhi_event->ring;
+
+ mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
+ ring->db_addr, *ring->ctxt_wp);
+}
+
+void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
+{
+ dma_addr_t db;
+ struct mhi_ring *ring = &mhi_cmd->ring;
+
+ db = ring->iommu_base + (ring->wp - ring->base);
+ *ring->ctxt_wp = db;
+ mhi_write_db(mhi_cntrl, ring->db_addr, db);
+}
+
+void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan)
+{
+ struct mhi_ring *ring = &mhi_chan->tre_ring;
+ dma_addr_t db;
+
+ db = ring->iommu_base + (ring->wp - ring->base);
+ *ring->ctxt_wp = db;
+ mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg,
+ ring->db_addr, db);
+}
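Both mhi_ring_cmd_db() and mhi_ring_chan_db() compute the doorbell value as the ring's IOMMU base plus the write pointer's byte offset into the host-side ring buffer, translating a host virtual pointer into a device-visible DMA address. A minimal sketch of that translation (names are illustrative, not part of the driver API):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for a host-side ring buffer */
static char test_ring[256];

/*
 * Translate a host write pointer into the device-visible DMA address
 * written to the doorbell: iommu_base + (wp - base).
 */
static uint64_t ring_db_val(uint64_t iommu_base, const char *ring_base,
			    const char *wp)
{
	return iommu_base + (uint64_t)(wp - ring_base);
}
```

With 16-byte ring elements (the size of struct mhi_tre), queuing three elements advances wp by 48 bytes, so the device sees iommu_base + 48.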
+
+enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl)
+{
+ u32 exec;
+ int ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_EXECENV, &exec);
+
+ return (ret) ? MHI_EE_MAX : exec;
+}
+
int mhi_destroy_device(struct device *dev, void *data)
{
struct mhi_device *mhi_dev;
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index cb6ddd23463c..d08f212cdfd0 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -246,6 +246,8 @@ struct mhi_controller_config {
* @dev_id: Device ID of the controller
* @bus_id: Physical bus instance used by the controller
* @regs: Base address of MHI MMIO register space
+ * @bhi: Points to base of MHI BHI register space
+ * @wake_db: MHI WAKE doorbell register address
* @iova_start: IOMMU starting address for data
* @iova_stop: IOMMU stop address for data
* @fw_image: Firmware image name for normal booting
@@ -306,6 +308,9 @@ struct mhi_controller {
u32 dev_id;
u32 bus_id;
void __iomem *regs;
+ void __iomem *bhi;
+ void __iomem *wake_db;
+
dma_addr_t iova_start;
dma_addr_t iova_stop;
const char *fw_image;
--
2.17.1
This commit adds support for registering MHI client drivers with the
MHI stack. MHI client drivers bind to one or more MHI devices in order
to send and receive upper-layer protocol packets like IP packets,
modem control messages, and diagnostics messages over the MHI bus.
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/987
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[mani: splitted and cleaned up for upstream]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/init.c | 149 ++++++++++++++++++++++++++++++++++++
include/linux/mhi.h | 35 +++++++++
2 files changed, 184 insertions(+)
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index 5b817ec250e0..60dcf2ad3a5f 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -376,8 +376,157 @@ struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl)
return mhi_dev;
}
+static int mhi_driver_probe(struct device *dev)
+{
+ struct mhi_device *mhi_dev = to_mhi_device(dev);
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct device_driver *drv = dev->driver;
+ struct mhi_driver *mhi_drv = to_mhi_driver(drv);
+ struct mhi_event *mhi_event;
+ struct mhi_chan *ul_chan = mhi_dev->ul_chan;
+ struct mhi_chan *dl_chan = mhi_dev->dl_chan;
+
+ if (ul_chan) {
+ /*
+ * If channel supports LPM notifications then status_cb should
+ * be provided
+ */
+ if (ul_chan->lpm_notify && !mhi_drv->status_cb)
+ return -EINVAL;
+
+ /* For non-offload channels, xfer_cb should be provided */
+ if (!ul_chan->offload_ch && !mhi_drv->ul_xfer_cb)
+ return -EINVAL;
+
+ ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
+ }
+
+ if (dl_chan) {
+ /*
+ * If channel supports LPM notifications then status_cb should
+ * be provided
+ */
+ if (dl_chan->lpm_notify && !mhi_drv->status_cb)
+ return -EINVAL;
+
+ /* For non-offload channels, xfer_cb should be provided */
+ if (!dl_chan->offload_ch && !mhi_drv->dl_xfer_cb)
+ return -EINVAL;
+
+ mhi_event = &mhi_cntrl->mhi_event[dl_chan->er_index];
+
+ /*
+ * If the channel event ring is managed by client, then
+ * status_cb must be provided so that the framework can
+ * notify pending data
+ */
+ if (mhi_event->cl_manage && !mhi_drv->status_cb)
+ return -EINVAL;
+
+ dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
+ }
+
+ /* Call the user provided probe function */
+ return mhi_drv->probe(mhi_dev, mhi_dev->id);
+}
+
+static int mhi_driver_remove(struct device *dev)
+{
+ struct mhi_device *mhi_dev = to_mhi_device(dev);
+ struct mhi_driver *mhi_drv = to_mhi_driver(dev->driver);
+ struct mhi_chan *mhi_chan;
+ enum mhi_ch_state ch_state[] = {
+ MHI_CH_STATE_DISABLED,
+ MHI_CH_STATE_DISABLED
+ };
+ int dir;
+
+ /* Skip if it is a controller device */
+ if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+ return 0;
+
+ /* Reset both channels */
+ for (dir = 0; dir < 2; dir++) {
+ mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+ if (!mhi_chan)
+ continue;
+
+ /* Wake all threads waiting for completion */
+ write_lock_irq(&mhi_chan->lock);
+ mhi_chan->ccs = MHI_EV_CC_INVALID;
+ complete_all(&mhi_chan->completion);
+ write_unlock_irq(&mhi_chan->lock);
+
+ /* Move the channel state to suspended */
+ mutex_lock(&mhi_chan->mutex);
+ write_lock_irq(&mhi_chan->lock);
+ ch_state[dir] = mhi_chan->ch_state;
+ mhi_chan->ch_state = MHI_CH_STATE_SUSPENDED;
+ write_unlock_irq(&mhi_chan->lock);
+
+ mutex_unlock(&mhi_chan->mutex);
+ }
+
+ mhi_drv->remove(mhi_dev);
+
+ /* De-init channel if it was enabled */
+ for (dir = 0; dir < 2; dir++) {
+ mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+ if (!mhi_chan)
+ continue;
+
+ mutex_lock(&mhi_chan->mutex);
+
+ mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+
+ mutex_unlock(&mhi_chan->mutex);
+ }
+
+ return 0;
+}
+
+int mhi_driver_register(struct mhi_driver *mhi_drv)
+{
+ struct device_driver *driver = &mhi_drv->driver;
+
+ if (!mhi_drv->probe || !mhi_drv->remove)
+ return -EINVAL;
+
+ driver->bus = &mhi_bus_type;
+ driver->probe = mhi_driver_probe;
+ driver->remove = mhi_driver_remove;
+
+ return driver_register(driver);
+}
+EXPORT_SYMBOL_GPL(mhi_driver_register);
+
+void mhi_driver_unregister(struct mhi_driver *mhi_drv)
+{
+ driver_unregister(&mhi_drv->driver);
+}
+EXPORT_SYMBOL_GPL(mhi_driver_unregister);
+
static int mhi_match(struct device *dev, struct device_driver *drv)
{
+ struct mhi_device *mhi_dev = to_mhi_device(dev);
+ struct mhi_driver *mhi_drv = to_mhi_driver(drv);
+ const struct mhi_device_id *id;
+
+ /*
+ * If the device is a controller type, then there is no client driver
+ * associated with it
+ */
+ if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+ return 0;
+
+ for (id = mhi_drv->id_table; id->chan[0]; id++)
+ if (!strcmp(mhi_dev->chan_name, id->chan)) {
+ mhi_dev->id = id;
+ return 1;
+ }
+
return 0;
};
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 69cf9a4b06c7..0fdad987dd70 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -400,6 +400,29 @@ struct mhi_result {
int transaction_status;
};
+/**
+ * struct mhi_driver - Structure representing an MHI client driver
+ * @id_table: Pointer to MHI client device ID table
+ * @probe: CB function for client driver probe function
+ * @remove: CB function for client driver remove function
+ * @ul_xfer_cb: CB function for UL data transfer
+ * @dl_xfer_cb: CB function for DL data transfer
+ * @status_cb: CB function for asynchronous status notifications
+ * @driver: Device driver model driver
+ */
+struct mhi_driver {
+ const struct mhi_device_id *id_table;
+ int (*probe)(struct mhi_device *mhi_dev,
+ const struct mhi_device_id *id);
+ void (*remove)(struct mhi_device *mhi_dev);
+ void (*ul_xfer_cb)(struct mhi_device *mhi_dev,
+ struct mhi_result *result);
+ void (*dl_xfer_cb)(struct mhi_device *mhi_dev,
+ struct mhi_result *result);
+ void (*status_cb)(struct mhi_device *mhi_dev, enum mhi_callback mhi_cb);
+ struct device_driver driver;
+};
+
+#define to_mhi_driver(drv) container_of(drv, struct mhi_driver, driver)
#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev)
/**
@@ -435,4 +458,16 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
*/
void mhi_unregister_controller(struct mhi_controller *mhi_cntrl);
+/**
+ * mhi_driver_register - Register driver with MHI framework
+ * @mhi_drv: Driver associated with the device
+ */
+int mhi_driver_register(struct mhi_driver *mhi_drv);
+
+/**
+ * mhi_driver_unregister - Unregister a driver for mhi_devices
+ * @mhi_drv: Driver associated with the device
+ */
+void mhi_driver_unregister(struct mhi_driver *mhi_drv);
+
#endif /* _MHI_H_ */
--
2.17.1
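For context, the bus match logic added above binds a client driver to an MHI device by comparing the device's channel name against the driver's ID table. Below is a minimal user-space sketch of that matching behavior, with hypothetical stand-in structs (the real definitions live in include/linux/mhi.h and mod_devicetable.h); it is illustrative only, not the actual driver code:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical user-space stand-ins for the kernel structures */
struct mhi_device_id {
	char chan[32];			/* empty name terminates the table */
};

struct mhi_device {
	const char *chan_name;
	const struct mhi_device_id *id;
};

/*
 * Mirrors the loop in mhi_match(): walk the driver's ID table until the
 * terminating entry (chan[0] == '\0') and bind on an exact name match.
 */
static int mhi_match_sketch(struct mhi_device *mhi_dev,
			    const struct mhi_device_id *id_table)
{
	const struct mhi_device_id *id;

	for (id = id_table; id->chan[0]; id++) {
		if (!strcmp(mhi_dev->chan_name, id->chan)) {
			mhi_dev->id = id;
			return 1;	/* match found, bind the driver */
		}
	}

	return 0;			/* no match */
}
```

The matched ID is stored in the device so that it can later be passed to the driver's probe() callback, exactly as mhi_driver_probe() does above.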
This commit adds support for transitioning the MHI states as part of
the power management operations. Helper functions are provided for the
state transitions, which will be consumed by the actual power management
routines.
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/989
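The allowed-transition check at the heart of this patch can be modeled outside the kernel. The sketch below uses illustrative MHI_PM_* values and a reduced transition table (a hypothetical subset, not the actual driver code) to show how one-hot state bits index the table and how the to_states mask gates the move:

```c
#include <assert.h>

/*
 * Illustrative one-hot PM state bits, mirroring the shape of the
 * MHI_PM_* defines in internal.h (a reduced, hypothetical subset).
 */
#define MHI_PM_DISABLE	(1u << 0)
#define MHI_PM_POR	(1u << 1)
#define MHI_PM_M0	(1u << 2)

struct mhi_pm_transitions {
	unsigned int from_state;
	unsigned int to_states;	/* mask of states reachable from from_state */
};

static const struct mhi_pm_transitions transitions[] = {
	{ MHI_PM_DISABLE, MHI_PM_POR },
	{ MHI_PM_POR, MHI_PM_POR | MHI_PM_DISABLE | MHI_PM_M0 },
	{ MHI_PM_M0, MHI_PM_M0 },
};

/*
 * Same contract as mhi_tryset_pm_state(): return the new state if the
 * transition is allowed, otherwise return the unchanged current state.
 */
static unsigned int tryset_pm_state(unsigned int cur, unsigned int next)
{
	unsigned int tmp = cur;
	int index = 0;

	while (tmp >>= 1)	/* index of the single set bit */
		index++;

	if (index >= (int)(sizeof(transitions) / sizeof(transitions[0])))
		return cur;
	if (transitions[index].from_state != cur)
		return cur;
	if (!(transitions[index].to_states & next))
		return cur;

	return next;
}
```

In the driver, the caller compares the returned state against the requested one (under pm_lock) to detect a rejected transition, as done throughout pm.c.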
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[jhugo: removed dma_zalloc_coherent() and fixed several bugs]
Signed-off-by: Jeffrey Hugo <[email protected]>
[mani: split the pm patch and cleaned up for upstream]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/Makefile | 2 +-
drivers/bus/mhi/core/init.c | 65 +++
drivers/bus/mhi/core/internal.h | 175 ++++++++
drivers/bus/mhi/core/main.c | 9 +
drivers/bus/mhi/core/pm.c | 685 ++++++++++++++++++++++++++++++++
include/linux/mhi.h | 51 +++
6 files changed, 986 insertions(+), 1 deletion(-)
create mode 100644 drivers/bus/mhi/core/pm.c
diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
index 77f7730da4bf..a0070f9cdfcd 100644
--- a/drivers/bus/mhi/core/Makefile
+++ b/drivers/bus/mhi/core/Makefile
@@ -1,3 +1,3 @@
obj-$(CONFIG_MHI_BUS) := mhi.o
-mhi-y := init.o main.o
+mhi-y := init.o main.o pm.o
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index 588166b588b4..83a03493c757 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -19,6 +19,62 @@
#include <linux/wait.h>
#include "internal.h"
+const char * const mhi_ee_str[MHI_EE_MAX] = {
+ [MHI_EE_PBL] = "PBL",
+ [MHI_EE_SBL] = "SBL",
+ [MHI_EE_AMSS] = "AMSS",
+ [MHI_EE_RDDM] = "RDDM",
+ [MHI_EE_WFW] = "WFW",
+ [MHI_EE_PTHRU] = "PASS THRU",
+ [MHI_EE_EDL] = "EDL",
+ [MHI_EE_DISABLE_TRANSITION] = "DISABLE",
+ [MHI_EE_NOT_SUPPORTED] = "NOT SUPPORTED",
+};
+
+const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX] = {
+ [DEV_ST_TRANSITION_PBL] = "PBL",
+ [DEV_ST_TRANSITION_READY] = "READY",
+ [DEV_ST_TRANSITION_SBL] = "SBL",
+ [DEV_ST_TRANSITION_MISSION_MODE] = "MISSION_MODE",
+};
+
+const char * const mhi_state_str[MHI_STATE_MAX] = {
+ [MHI_STATE_RESET] = "RESET",
+ [MHI_STATE_READY] = "READY",
+ [MHI_STATE_M0] = "M0",
+ [MHI_STATE_M1] = "M1",
+ [MHI_STATE_M2] = "M2",
+ [MHI_STATE_M3] = "M3",
+ [MHI_STATE_M3_FAST] = "M3_FAST",
+ [MHI_STATE_BHI] = "BHI",
+ [MHI_STATE_SYS_ERR] = "SYS_ERR",
+};
+
+static const char * const mhi_pm_state_str[] = {
+ [MHI_PM_STATE_DISABLE] = "DISABLE",
+ [MHI_PM_STATE_POR] = "POR",
+ [MHI_PM_STATE_M0] = "M0",
+ [MHI_PM_STATE_M2] = "M2",
+ [MHI_PM_STATE_M3_ENTER] = "M?->M3",
+ [MHI_PM_STATE_M3] = "M3",
+ [MHI_PM_STATE_M3_EXIT] = "M3->M0",
+ [MHI_PM_STATE_FW_DL_ERR] = "FW DL Error",
+ [MHI_PM_STATE_SYS_ERR_DETECT] = "SYS_ERR Detect",
+ [MHI_PM_STATE_SYS_ERR_PROCESS] = "SYS_ERR Process",
+ [MHI_PM_STATE_SHUTDOWN_PROCESS] = "SHUTDOWN Process",
+ [MHI_PM_STATE_LD_ERR_FATAL_DETECT] = "LD or Error Fatal Detect",
+};
+
+const char *to_mhi_pm_state_str(enum mhi_pm_state state)
+{
+ unsigned long pm_state = state;
+ int index = find_last_bit(&pm_state, 32);
+
+ if (index >= ARRAY_SIZE(mhi_pm_state_str))
+ return "Invalid State";
+
+ return mhi_pm_state_str[index];
+}
+
int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
{
u32 val;
@@ -372,6 +428,11 @@ static int parse_config(struct mhi_controller *mhi_cntrl,
if (!mhi_cntrl->buffer_len)
mhi_cntrl->buffer_len = MHI_MAX_MTU;
+ /* By default, host is allowed to ring DB in both M0 and M2 states */
+ mhi_cntrl->db_access = MHI_PM_M0 | MHI_PM_M2;
+ if (config->m2_no_db)
+ mhi_cntrl->db_access &= ~MHI_PM_M2;
+
return 0;
error_ev_cfg:
@@ -408,8 +469,12 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
}
INIT_LIST_HEAD(&mhi_cntrl->transition_list);
+ mutex_init(&mhi_cntrl->pm_mutex);
+ rwlock_init(&mhi_cntrl->pm_lock);
spin_lock_init(&mhi_cntrl->transition_lock);
spin_lock_init(&mhi_cntrl->wlock);
+ INIT_WORK(&mhi_cntrl->st_worker, mhi_pm_st_worker);
+ INIT_WORK(&mhi_cntrl->syserr_worker, mhi_pm_sys_err_worker);
init_waitqueue_head(&mhi_cntrl->state_event);
mhi_cmd = mhi_cntrl->mhi_cmd;
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index a4d10916984a..2dba4923482a 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -258,6 +258,79 @@ enum mhi_cmd_type {
MHI_CMD_START_CHAN = 18,
};
+/* No operation command */
+#define MHI_TRE_CMD_NOOP_PTR (0)
+#define MHI_TRE_CMD_NOOP_DWORD0 (0)
+#define MHI_TRE_CMD_NOOP_DWORD1 (MHI_CMD_NOP << 16)
+
+/* Channel reset command */
+#define MHI_TRE_CMD_RESET_PTR (0)
+#define MHI_TRE_CMD_RESET_DWORD0 (0)
+#define MHI_TRE_CMD_RESET_DWORD1(chid) ((chid << 24) | \
+ (MHI_CMD_RESET_CHAN << 16))
+
+/* Channel stop command */
+#define MHI_TRE_CMD_STOP_PTR (0)
+#define MHI_TRE_CMD_STOP_DWORD0 (0)
+#define MHI_TRE_CMD_STOP_DWORD1(chid) ((chid << 24) | \
+ (MHI_CMD_STOP_CHAN << 16))
+
+/* Channel start command */
+#define MHI_TRE_CMD_START_PTR (0)
+#define MHI_TRE_CMD_START_DWORD0 (0)
+#define MHI_TRE_CMD_START_DWORD1(chid) ((chid << 24) | \
+ (MHI_CMD_START_CHAN << 16))
+
+#define MHI_TRE_GET_CMD_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+#define MHI_TRE_GET_CMD_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+
+/* Event descriptor macros */
+#define MHI_TRE_EV_PTR(ptr) (ptr)
+#define MHI_TRE_EV_DWORD0(code, len) ((code << 24) | len)
+#define MHI_TRE_EV_DWORD1(chid, type) ((chid << 24) | (type << 16))
+#define MHI_TRE_GET_EV_PTR(tre) ((tre)->ptr)
+#define MHI_TRE_GET_EV_CODE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LEN(tre) ((tre)->dword[0] & 0xFFFF)
+#define MHI_TRE_GET_EV_CHID(tre) (((tre)->dword[1] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_TYPE(tre) (((tre)->dword[1] >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_STATE(tre) (((tre)->dword[0] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_EXECENV(tre) (((tre)->dword[0] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_SEQ(tre) ((tre)->dword[0])
+#define MHI_TRE_GET_EV_TIME(tre) ((tre)->ptr)
+#define MHI_TRE_GET_EV_COOKIE(tre) lower_32_bits((tre)->ptr)
+#define MHI_TRE_GET_EV_VEID(tre) (((tre)->dword[0] >> 16) & 0xFF)
+#define MHI_TRE_GET_EV_LINKSPEED(tre) (((tre)->dword[1] >> 24) & 0xFF)
+#define MHI_TRE_GET_EV_LINKWIDTH(tre) ((tre)->dword[0] & 0xFF)
+
+/* Transfer descriptor macros */
+#define MHI_TRE_DATA_PTR(ptr) (ptr)
+#define MHI_TRE_DATA_DWORD0(len) (len & MHI_MAX_MTU)
+#define MHI_TRE_DATA_DWORD1(bei, ieot, ieob, chain) ((2 << 16) | (bei << 10) \
+ | (ieot << 9) | (ieob << 8) | chain)
+
+/* RSC transfer descriptor macros */
+#define MHI_RSCTRE_DATA_PTR(ptr, len) (((u64)len << 48) | ptr)
+#define MHI_RSCTRE_DATA_DWORD0(cookie) (cookie)
+#define MHI_RSCTRE_DATA_DWORD1 (MHI_PKT_TYPE_COALESCING << 16)
+
+enum mhi_pkt_type {
+ MHI_PKT_TYPE_INVALID = 0x0,
+ MHI_PKT_TYPE_NOOP_CMD = 0x1,
+ MHI_PKT_TYPE_TRANSFER = 0x2,
+ MHI_PKT_TYPE_COALESCING = 0x8,
+ MHI_PKT_TYPE_RESET_CHAN_CMD = 0x10,
+ MHI_PKT_TYPE_STOP_CHAN_CMD = 0x11,
+ MHI_PKT_TYPE_START_CHAN_CMD = 0x12,
+ MHI_PKT_TYPE_STATE_CHANGE_EVENT = 0x20,
+ MHI_PKT_TYPE_CMD_COMPLETION_EVENT = 0x21,
+ MHI_PKT_TYPE_TX_EVENT = 0x22,
+ MHI_PKT_TYPE_RSC_TX_EVENT = 0x28,
+ MHI_PKT_TYPE_EE_EVENT = 0x40,
+ MHI_PKT_TYPE_TSYNC_EVENT = 0x48,
+ MHI_PKT_TYPE_BW_REQ_EVENT = 0x50,
+ MHI_PKT_TYPE_STALE_EVENT, /* internal event */
+};
+
/* MHI transfer completion events */
enum mhi_ev_ccs {
MHI_EV_CC_INVALID = 0x0,
@@ -283,6 +356,81 @@ enum mhi_ch_state {
#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \
mode != MHI_DB_BRST_ENABLE)
+extern const char * const mhi_ee_str[MHI_EE_MAX];
+#define TO_MHI_EXEC_STR(ee) (((ee) >= MHI_EE_MAX) ? \
+ "INVALID_EE" : mhi_ee_str[ee])
+
+#define MHI_IN_PBL(ee) (ee == MHI_EE_PBL || ee == MHI_EE_PTHRU || \
+ ee == MHI_EE_EDL)
+
+#define MHI_IN_MISSION_MODE(ee) (ee == MHI_EE_AMSS || ee == MHI_EE_WFW)
+
+enum dev_st_transition {
+ DEV_ST_TRANSITION_PBL,
+ DEV_ST_TRANSITION_READY,
+ DEV_ST_TRANSITION_SBL,
+ DEV_ST_TRANSITION_MISSION_MODE,
+ DEV_ST_TRANSITION_MAX,
+};
+
+extern const char * const dev_state_tran_str[DEV_ST_TRANSITION_MAX];
+#define TO_DEV_STATE_TRANS_STR(state) (((state) >= DEV_ST_TRANSITION_MAX) ? \
+ "INVALID_STATE" : dev_state_tran_str[state])
+
+extern const char * const mhi_state_str[MHI_STATE_MAX];
+#define TO_MHI_STATE_STR(state) ((state >= MHI_STATE_MAX || \
+ !mhi_state_str[state]) ? \
+ "INVALID_STATE" : mhi_state_str[state])
+
+/* internal power states */
+enum mhi_pm_state {
+ MHI_PM_STATE_DISABLE,
+ MHI_PM_STATE_POR,
+ MHI_PM_STATE_M0,
+ MHI_PM_STATE_M2,
+ MHI_PM_STATE_M3_ENTER,
+ MHI_PM_STATE_M3,
+ MHI_PM_STATE_M3_EXIT,
+ MHI_PM_STATE_FW_DL_ERR,
+ MHI_PM_STATE_SYS_ERR_DETECT,
+ MHI_PM_STATE_SYS_ERR_PROCESS,
+ MHI_PM_STATE_SHUTDOWN_PROCESS,
+ MHI_PM_STATE_LD_ERR_FATAL_DETECT,
+ MHI_PM_STATE_MAX
+};
+
+#define MHI_PM_DISABLE BIT(0)
+#define MHI_PM_POR BIT(1)
+#define MHI_PM_M0 BIT(2)
+#define MHI_PM_M2 BIT(3)
+#define MHI_PM_M3_ENTER BIT(4)
+#define MHI_PM_M3 BIT(5)
+#define MHI_PM_M3_EXIT BIT(6)
+/* firmware download failure state */
+#define MHI_PM_FW_DL_ERR BIT(7)
+#define MHI_PM_SYS_ERR_DETECT BIT(8)
+#define MHI_PM_SYS_ERR_PROCESS BIT(9)
+#define MHI_PM_SHUTDOWN_PROCESS BIT(10)
+/* link not accessible */
+#define MHI_PM_LD_ERR_FATAL_DETECT BIT(11)
+
+#define MHI_REG_ACCESS_VALID(pm_state) ((pm_state & (MHI_PM_POR | MHI_PM_M0 | \
+ MHI_PM_M2 | MHI_PM_M3_ENTER | MHI_PM_M3_EXIT | \
+ MHI_PM_SYS_ERR_DETECT | MHI_PM_SYS_ERR_PROCESS | \
+ MHI_PM_SHUTDOWN_PROCESS | MHI_PM_FW_DL_ERR)))
+#define MHI_PM_IN_ERROR_STATE(pm_state) (pm_state >= MHI_PM_FW_DL_ERR)
+#define MHI_PM_IN_FATAL_STATE(pm_state) (pm_state == MHI_PM_LD_ERR_FATAL_DETECT)
+#define MHI_DB_ACCESS_VALID(mhi_cntrl) (mhi_cntrl->pm_state & \
+ mhi_cntrl->db_access)
+#define MHI_WAKE_DB_CLEAR_VALID(pm_state) (pm_state & (MHI_PM_M0 | \
+ MHI_PM_M2 | MHI_PM_M3_EXIT))
+#define MHI_WAKE_DB_SET_VALID(pm_state) (pm_state & MHI_PM_M2)
+#define MHI_WAKE_DB_FORCE_SET_VALID(pm_state) MHI_WAKE_DB_CLEAR_VALID(pm_state)
+#define MHI_EVENT_ACCESS_INVALID(pm_state) (pm_state == MHI_PM_DISABLE || \
+ MHI_PM_IN_ERROR_STATE(pm_state))
+#define MHI_PM_IN_SUSPEND_STATE(pm_state) (pm_state & \
+ (MHI_PM_M3_ENTER | MHI_PM_M3))
+
#define NR_OF_CMD_RINGS 1
#define CMD_EL_PER_RING 128
#define PRIMARY_CMD_RING 0
@@ -315,6 +463,16 @@ struct db_cfg {
dma_addr_t db_val);
};
+struct mhi_pm_transitions {
+ enum mhi_pm_state from_state;
+ u32 to_states;
+};
+
+struct state_transition {
+ struct list_head node;
+ enum dev_st_transition state;
+};
+
struct mhi_ring {
dma_addr_t dma_handle;
dma_addr_t iommu_base;
@@ -417,6 +575,23 @@ static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
int mhi_destroy_device(struct device *dev, void *data);
void mhi_create_devices(struct mhi_controller *mhi_cntrl);
+/* Power management APIs */
+enum mhi_pm_state __must_check mhi_tryset_pm_state(
+ struct mhi_controller *mhi_cntrl,
+ enum mhi_pm_state state);
+const char *to_mhi_pm_state_str(enum mhi_pm_state state);
+enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl);
+int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
+ enum dev_st_transition state);
+void mhi_pm_st_worker(struct work_struct *work);
+void mhi_pm_sys_err_worker(struct work_struct *work);
+int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl);
+void mhi_ctrl_ev_task(unsigned long data);
+int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl);
+void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl);
+int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
+int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl);
+
/* Register access methods */
void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
void __iomem *db_addr, dma_addr_t db_val);
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index 134ef9b2cc78..df980fb3b6db 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -135,6 +135,15 @@ enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl)
return (ret) ? MHI_EE_MAX : exec;
}
+enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl)
+{
+ u32 state;
+ int ret = mhi_read_reg_field(mhi_cntrl, mhi_cntrl->regs, MHISTATUS,
+ MHISTATUS_MHISTATE_MASK,
+ MHISTATUS_MHISTATE_SHIFT, &state);
+
+ return ret ? MHI_STATE_MAX : state;
+}
+
int mhi_destroy_device(struct device *dev, void *data)
{
struct mhi_device *mhi_dev;
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
new file mode 100644
index 000000000000..75d43b1039ea
--- /dev/null
+++ b/drivers/bus/mhi/core/pm.c
@@ -0,0 +1,685 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#define dev_fmt(fmt) "MHI: " fmt
+
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/mhi.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
+#include "internal.h"
+
+/*
+ * Not all MHI state transitions are synchronous. Transitions like Linkdown,
+ * SYS_ERR, and shutdown can happen anytime asynchronously. This function will
+ * transition to a new state only if we're allowed to.
+ *
+ * Priority increases as we go down. For instance, from any state in L0, a
+ * transition can be made to states in L1, L2 and L3. A notable exception to
+ * this rule is the DISABLE state. From the DISABLE state, we can only
+ * transition to the POR state. Also, while in the L2 state, the host cannot
+ * jump back to the previous L1 or L0 states.
+ *
+ * Valid transitions:
+ * L0: DISABLE <--> POR
+ * POR <--> POR
+ * POR -> M0 -> M2 --> M0
+ * POR -> FW_DL_ERR
+ * FW_DL_ERR <--> FW_DL_ERR
+ * M0 <--> M0
+ * M0 -> FW_DL_ERR
+ * M0 -> M3_ENTER -> M3 -> M3_EXIT --> M0
+ * L1: SYS_ERR_DETECT -> SYS_ERR_PROCESS --> POR
+ * L2: SHUTDOWN_PROCESS -> DISABLE
+ * L3: LD_ERR_FATAL_DETECT <--> LD_ERR_FATAL_DETECT
+ * LD_ERR_FATAL_DETECT -> SHUTDOWN_PROCESS
+ */
+static struct mhi_pm_transitions const dev_state_transitions[] = {
+ /* L0 States */
+ {
+ MHI_PM_DISABLE,
+ MHI_PM_POR
+ },
+ {
+ MHI_PM_POR,
+ MHI_PM_POR | MHI_PM_DISABLE | MHI_PM_M0 |
+ MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+ MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_FW_DL_ERR
+ },
+ {
+ MHI_PM_M0,
+ MHI_PM_M0 | MHI_PM_M2 | MHI_PM_M3_ENTER |
+ MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+ MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_FW_DL_ERR
+ },
+ {
+ MHI_PM_M2,
+ MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+ MHI_PM_LD_ERR_FATAL_DETECT
+ },
+ {
+ MHI_PM_M3_ENTER,
+ MHI_PM_M3 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+ MHI_PM_LD_ERR_FATAL_DETECT
+ },
+ {
+ MHI_PM_M3,
+ MHI_PM_M3_EXIT | MHI_PM_SYS_ERR_DETECT |
+ MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT
+ },
+ {
+ MHI_PM_M3_EXIT,
+ MHI_PM_M0 | MHI_PM_SYS_ERR_DETECT | MHI_PM_SHUTDOWN_PROCESS |
+ MHI_PM_LD_ERR_FATAL_DETECT
+ },
+ {
+ MHI_PM_FW_DL_ERR,
+ MHI_PM_FW_DL_ERR | MHI_PM_SYS_ERR_DETECT |
+ MHI_PM_SHUTDOWN_PROCESS | MHI_PM_LD_ERR_FATAL_DETECT
+ },
+ /* L1 States */
+ {
+ MHI_PM_SYS_ERR_DETECT,
+ MHI_PM_SYS_ERR_PROCESS | MHI_PM_SHUTDOWN_PROCESS |
+ MHI_PM_LD_ERR_FATAL_DETECT
+ },
+ {
+ MHI_PM_SYS_ERR_PROCESS,
+ MHI_PM_POR | MHI_PM_SHUTDOWN_PROCESS |
+ MHI_PM_LD_ERR_FATAL_DETECT
+ },
+ /* L2 States */
+ {
+ MHI_PM_SHUTDOWN_PROCESS,
+ MHI_PM_DISABLE | MHI_PM_LD_ERR_FATAL_DETECT
+ },
+ /* L3 States */
+ {
+ MHI_PM_LD_ERR_FATAL_DETECT,
+ MHI_PM_LD_ERR_FATAL_DETECT | MHI_PM_SHUTDOWN_PROCESS
+ },
+};
+
+enum mhi_pm_state __must_check mhi_tryset_pm_state(struct mhi_controller *mhi_cntrl,
+ enum mhi_pm_state state)
+{
+ unsigned long cur_state = mhi_cntrl->pm_state;
+ int index = find_last_bit(&cur_state, 32);
+
+ if (unlikely(index >= ARRAY_SIZE(dev_state_transitions)))
+ return cur_state;
+
+ if (unlikely(dev_state_transitions[index].from_state != cur_state))
+ return cur_state;
+
+ if (unlikely(!(dev_state_transitions[index].to_states & state)))
+ return cur_state;
+
+ mhi_cntrl->pm_state = state;
+ return mhi_cntrl->pm_state;
+}
+
+void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state)
+{
+ if (state == MHI_STATE_RESET) {
+ mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
+ MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 1);
+ } else {
+ mhi_write_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL,
+ MHICTRL_MHISTATE_MASK,
+ MHICTRL_MHISTATE_SHIFT, state);
+ }
+}
+
+/* Handle device ready state transition */
+int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
+{
+ void __iomem *base = mhi_cntrl->regs;
+ u32 reset = 1, ready = 0;
+ struct mhi_event *mhi_event;
+ enum mhi_pm_state cur_state;
+ int ret, i;
+
+ /* Wait for RESET to be cleared and READY bit to be set by the device */
+ wait_event_timeout(mhi_cntrl->state_event,
+ MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state) ||
+ mhi_read_reg_field(mhi_cntrl, base, MHICTRL,
+ MHICTRL_RESET_MASK,
+ MHICTRL_RESET_SHIFT, &reset) ||
+ mhi_read_reg_field(mhi_cntrl, base, MHISTATUS,
+ MHISTATUS_READY_MASK,
+ MHISTATUS_READY_SHIFT, &ready) ||
+ (!reset && ready),
+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+ /* Check if device entered error state */
+ if (MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state)) {
+ dev_err(mhi_cntrl->dev, "Device link is not accessible\n");
+ return -EIO;
+ }
+
+ /* Timeout if device did not transition to ready state */
+ if (reset || !ready) {
+ dev_err(mhi_cntrl->dev, "Device Ready timeout\n");
+ return -ETIMEDOUT;
+ }
+
+ dev_dbg(mhi_cntrl->dev, "Device in READY State\n");
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_POR);
+ mhi_cntrl->dev_state = MHI_STATE_READY;
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+
+ if (cur_state != MHI_PM_POR) {
+ dev_err(mhi_cntrl->dev, "Error moving to state %s from %s\n",
+ to_mhi_pm_state_str(MHI_PM_POR),
+ to_mhi_pm_state_str(cur_state));
+ return -EIO;
+ }
+
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+ dev_err(mhi_cntrl->dev, "Device registers not accessible\n");
+ goto error_mmio;
+ }
+
+ /* Configure MMIO registers */
+ ret = mhi_init_mmio(mhi_cntrl);
+ if (ret) {
+ dev_err(mhi_cntrl->dev, "Error configuring MMIO registers\n");
+ goto error_mmio;
+ }
+
+ /* Add elements to all SW event rings */
+ mhi_event = mhi_cntrl->mhi_event;
+ for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+ struct mhi_ring *ring = &mhi_event->ring;
+
+ /* Skip if this is an offload or HW event */
+ if (mhi_event->offload_ev || mhi_event->hw_ring)
+ continue;
+
+ ring->wp = ring->base + ring->len - ring->el_size;
+ *ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
+ /* Update all cores */
+ smp_wmb();
+
+ /* Ring the event ring db */
+ spin_lock_irq(&mhi_event->lock);
+ mhi_ring_er_db(mhi_event);
+ spin_unlock_irq(&mhi_event->lock);
+ }
+
+ /* Set MHI to M0 state */
+ mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M0);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return 0;
+
+error_mmio:
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return -EIO;
+}
+
+int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl)
+{
+ enum mhi_pm_state cur_state;
+ struct mhi_chan *mhi_chan;
+ int i;
+
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ mhi_cntrl->dev_state = MHI_STATE_M0;
+ cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M0);
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ if (unlikely(cur_state != MHI_PM_M0)) {
+ dev_err(mhi_cntrl->dev, "Unable to transition to M0 state\n");
+ return -EIO;
+ }
+
+ mhi_cntrl->M0++;
+
+ /* Wake up the device */
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ mhi_cntrl->wake_get(mhi_cntrl, true);
+
+ /* Ring all event rings and CMD ring only if we're in mission mode */
+ if (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) {
+ struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
+ struct mhi_cmd *mhi_cmd =
+ &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+
+ for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+ if (mhi_event->offload_ev)
+ continue;
+
+ spin_lock_irq(&mhi_event->lock);
+ mhi_ring_er_db(mhi_event);
+ spin_unlock_irq(&mhi_event->lock);
+ }
+
+ /* Only ring primary cmd ring if ring is not empty */
+ spin_lock_irq(&mhi_cmd->lock);
+ if (mhi_cmd->ring.rp != mhi_cmd->ring.wp)
+ mhi_ring_cmd_db(mhi_cntrl, mhi_cmd);
+ spin_unlock_irq(&mhi_cmd->lock);
+ }
+
+ /* Ring channel DB registers */
+ mhi_chan = mhi_cntrl->mhi_chan;
+ for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+ struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+
+ write_lock_irq(&mhi_chan->lock);
+ if (mhi_chan->db_cfg.reset_req)
+ mhi_chan->db_cfg.db_mode = true;
+
+ /* Only ring DB if ring is not empty */
+ if (tre_ring->base && tre_ring->wp != tre_ring->rp)
+ mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+ write_unlock_irq(&mhi_chan->lock);
+ }
+
+ mhi_cntrl->wake_put(mhi_cntrl, false);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ wake_up_all(&mhi_cntrl->state_event);
+
+ return 0;
+}
+
+/*
+ * After receiving the MHI state change event from the device indicating the
+ * transition to M1 state, the host can transition the device to M2 state
+ * to keep it in a low power state.
+ */
+void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl)
+{
+ enum mhi_pm_state state;
+
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M2);
+ if (state == MHI_PM_M2) {
+ mhi_set_mhi_state(mhi_cntrl, MHI_STATE_M2);
+ mhi_cntrl->dev_state = MHI_STATE_M2;
+ mhi_cntrl->M2++;
+
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ wake_up_all(&mhi_cntrl->state_event);
+
+ /* If there are any pending resources, exit M2 immediately */
+ if (unlikely(atomic_read(&mhi_cntrl->pending_pkts) ||
+ atomic_read(&mhi_cntrl->dev_wake))) {
+ dev_dbg(mhi_cntrl->dev,
+ "Exiting M2, pending_pkts: %d dev_wake: %d\n",
+ atomic_read(&mhi_cntrl->pending_pkts),
+ atomic_read(&mhi_cntrl->dev_wake));
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ mhi_cntrl->wake_get(mhi_cntrl, true);
+ mhi_cntrl->wake_put(mhi_cntrl, true);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ } else {
+ mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data,
+ MHI_CB_IDLE);
+ }
+ } else {
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ }
+}
+
+/* MHI M3 completion handler */
+int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl)
+{
+ enum mhi_pm_state state;
+
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ mhi_cntrl->dev_state = MHI_STATE_M3;
+ state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_M3);
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ if (state != MHI_PM_M3) {
+ dev_err(mhi_cntrl->dev, "Unable to transition to M3 state\n");
+ return -EIO;
+ }
+
+ wake_up_all(&mhi_cntrl->state_event);
+ mhi_cntrl->M3++;
+
+ return 0;
+}
+
+/* Handle device Mission Mode transition */
+static int mhi_pm_mission_mode_transition(struct mhi_controller *mhi_cntrl)
+{
+ int i, ret;
+ struct mhi_event *mhi_event;
+
+ dev_dbg(mhi_cntrl->dev, "Processing Mission Mode transition\n");
+
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+ mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+
+ if (!MHI_IN_MISSION_MODE(mhi_cntrl->ee))
+ return -EIO;
+
+ wake_up_all(&mhi_cntrl->state_event);
+
+ mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data,
+ MHI_CB_EE_MISSION_MODE);
+
+ /* Force MHI to be in M0 state before continuing */
+ ret = __mhi_device_get_sync(mhi_cntrl);
+ if (ret)
+ return ret;
+
+ read_lock_bh(&mhi_cntrl->pm_lock);
+
+ if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+ ret = -EIO;
+ goto error_mission_mode;
+ }
+
+ /* Add elements to all HW event rings */
+ mhi_event = mhi_cntrl->mhi_event;
+ for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+ struct mhi_ring *ring = &mhi_event->ring;
+
+ if (mhi_event->offload_ev || !mhi_event->hw_ring)
+ continue;
+
+ ring->wp = ring->base + ring->len - ring->el_size;
+ *ring->ctxt_wp = ring->iommu_base + ring->len - ring->el_size;
+ /* Update to all cores */
+ smp_wmb();
+
+ spin_lock_irq(&mhi_event->lock);
+ if (MHI_DB_ACCESS_VALID(mhi_cntrl))
+ mhi_ring_er_db(mhi_event);
+ spin_unlock_irq(&mhi_event->lock);
+ }
+
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ /*
+ * The MHI devices are only created when the client device switches its
+ * Execution Environment (EE) to either the SBL or AMSS state
+ */
+ mhi_create_devices(mhi_cntrl);
+
+ read_lock_bh(&mhi_cntrl->pm_lock);
+
+error_mission_mode:
+ mhi_cntrl->wake_put(mhi_cntrl, false);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return ret;
+}
+
+/* Handle SYS_ERR and Shutdown transitions */
+static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
+ enum mhi_pm_state transition_state)
+{
+ enum mhi_pm_state cur_state, prev_state;
+ struct mhi_event *mhi_event;
+ struct mhi_cmd_ctxt *cmd_ctxt;
+ struct mhi_cmd *mhi_cmd;
+ struct mhi_event_ctxt *er_ctxt;
+ int ret, i;
+
+ dev_dbg(mhi_cntrl->dev,
+ "Transitioning from PM state: %s to: %s\n",
+ to_mhi_pm_state_str(mhi_cntrl->pm_state),
+ to_mhi_pm_state_str(transition_state));
+
+ /* We must notify the MHI controller driver so it can clean up first */
+ if (transition_state == MHI_PM_SYS_ERR_PROCESS) {
+ mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data,
+ MHI_CB_SYS_ERROR);
+ }
+
+ mutex_lock(&mhi_cntrl->pm_mutex);
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ prev_state = mhi_cntrl->pm_state;
+ cur_state = mhi_tryset_pm_state(mhi_cntrl, transition_state);
+ if (cur_state == transition_state) {
+ mhi_cntrl->ee = MHI_EE_DISABLE_TRANSITION;
+ mhi_cntrl->dev_state = MHI_STATE_RESET;
+ }
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+
+ /* Wake up threads waiting for state transition */
+ wake_up_all(&mhi_cntrl->state_event);
+
+ if (cur_state != transition_state) {
+ dev_err(mhi_cntrl->dev,
+ "Failed to transition to state: %s from: %s\n",
+ to_mhi_pm_state_str(transition_state),
+ to_mhi_pm_state_str(cur_state));
+ mutex_unlock(&mhi_cntrl->pm_mutex);
+ return;
+ }
+
+ /* Trigger MHI RESET so that the device will not access host memory */
+ if (MHI_REG_ACCESS_VALID(prev_state)) {
+ u32 in_reset = -1;
+ unsigned long timeout = msecs_to_jiffies(mhi_cntrl->timeout_ms);
+
+ dev_dbg(mhi_cntrl->dev, "Triggering MHI Reset in device\n");
+ mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET);
+
+ /* Wait for the reset bit to be cleared by the device */
+ ret = wait_event_timeout(mhi_cntrl->state_event,
+ mhi_read_reg_field(mhi_cntrl,
+ mhi_cntrl->regs,
+ MHICTRL,
+ MHICTRL_RESET_MASK,
+ MHICTRL_RESET_SHIFT,
+ &in_reset) ||
+ !in_reset, timeout);
+ if ((!ret || in_reset) && cur_state == MHI_PM_SYS_ERR_PROCESS) {
+ dev_err(mhi_cntrl->dev,
+ "Device failed to exit MHI Reset state\n");
+ mutex_unlock(&mhi_cntrl->pm_mutex);
+ return;
+ }
+
+ /*
+ * The device will clear BHI_INTVEC as part of the RESET processing,
+ * hence re-program it
+ */
+ mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
+ }
+
+ dev_dbg(mhi_cntrl->dev,
+ "Waiting for all pending event ring processing to complete\n");
+ mhi_event = mhi_cntrl->mhi_event;
+ for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+ if (mhi_event->offload_ev)
+ continue;
+ tasklet_kill(&mhi_event->task);
+ }
+
+ /* Release lock and wait for all pending threads to complete */
+ mutex_unlock(&mhi_cntrl->pm_mutex);
+ dev_dbg(mhi_cntrl->dev,
+ "Waiting for all pending threads to complete\n");
+ wake_up_all(&mhi_cntrl->state_event);
+ flush_work(&mhi_cntrl->st_worker);
+ flush_work(&mhi_cntrl->fw_worker);
+
+ dev_dbg(mhi_cntrl->dev,
+ "Reset all active channels and remove MHI devices\n");
+ device_for_each_child(mhi_cntrl->dev, NULL, mhi_destroy_device);
+
+ mutex_lock(&mhi_cntrl->pm_mutex);
+
+ WARN_ON(atomic_read(&mhi_cntrl->dev_wake));
+ WARN_ON(atomic_read(&mhi_cntrl->pending_pkts));
+
+ /* Reset the ev rings and cmd rings */
+ dev_dbg(mhi_cntrl->dev, "Resetting EV CTXT and CMD CTXT\n");
+ mhi_cmd = mhi_cntrl->mhi_cmd;
+ cmd_ctxt = mhi_cntrl->mhi_ctxt->cmd_ctxt;
+ for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
+ struct mhi_ring *ring = &mhi_cmd->ring;
+
+ ring->rp = ring->base;
+ ring->wp = ring->base;
+ cmd_ctxt->rp = cmd_ctxt->rbase;
+ cmd_ctxt->wp = cmd_ctxt->rbase;
+ }
+
+ mhi_event = mhi_cntrl->mhi_event;
+ er_ctxt = mhi_cntrl->mhi_ctxt->er_ctxt;
+ for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++,
+ mhi_event++) {
+ struct mhi_ring *ring = &mhi_event->ring;
+
+ /* Skip offload events */
+ if (mhi_event->offload_ev)
+ continue;
+
+ ring->rp = ring->base;
+ ring->wp = ring->base;
+ er_ctxt->rp = er_ctxt->rbase;
+ er_ctxt->wp = er_ctxt->rbase;
+ }
+
+ if (cur_state == MHI_PM_SYS_ERR_PROCESS) {
+ mhi_ready_state_transition(mhi_cntrl);
+ } else {
+ /* Move to disable state */
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ cur_state = mhi_tryset_pm_state(mhi_cntrl, MHI_PM_DISABLE);
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ if (unlikely(cur_state != MHI_PM_DISABLE))
+ dev_err(mhi_cntrl->dev,
+ "Error moving from PM state: %s to: %s\n",
+ to_mhi_pm_state_str(cur_state),
+ to_mhi_pm_state_str(MHI_PM_DISABLE));
+ }
+
+ dev_dbg(mhi_cntrl->dev, "Exiting with PM state: %s, MHI state: %s\n",
+ to_mhi_pm_state_str(mhi_cntrl->pm_state),
+ TO_MHI_STATE_STR(mhi_cntrl->dev_state));
+
+ mutex_unlock(&mhi_cntrl->pm_mutex);
+}
+
+/* Queue a new work item and schedule work */
+int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
+ enum dev_st_transition state)
+{
+ struct state_transition *item = kmalloc(sizeof(*item), GFP_ATOMIC);
+ unsigned long flags;
+
+ if (!item)
+ return -ENOMEM;
+
+ item->state = state;
+ spin_lock_irqsave(&mhi_cntrl->transition_lock, flags);
+ list_add_tail(&item->node, &mhi_cntrl->transition_list);
+ spin_unlock_irqrestore(&mhi_cntrl->transition_lock, flags);
+
+ schedule_work(&mhi_cntrl->st_worker);
+
+ return 0;
+}
+
+/* SYS_ERR worker */
+void mhi_pm_sys_err_worker(struct work_struct *work)
+{
+ struct mhi_controller *mhi_cntrl = container_of(work,
+ struct mhi_controller,
+ syserr_worker);
+
+ mhi_pm_disable_transition(mhi_cntrl, MHI_PM_SYS_ERR_PROCESS);
+}
+
+/* Device State Transition worker */
+void mhi_pm_st_worker(struct work_struct *work)
+{
+ struct state_transition *itr, *tmp;
+ LIST_HEAD(head);
+ struct mhi_controller *mhi_cntrl = container_of(work,
+ struct mhi_controller,
+ st_worker);
+ spin_lock_irq(&mhi_cntrl->transition_lock);
+ list_splice_tail_init(&mhi_cntrl->transition_list, &head);
+ spin_unlock_irq(&mhi_cntrl->transition_lock);
+
+ list_for_each_entry_safe(itr, tmp, &head, node) {
+ list_del(&itr->node);
+ dev_dbg(mhi_cntrl->dev, "Handling state transition: %s\n",
+ TO_DEV_STATE_TRANS_STR(itr->state));
+
+ switch (itr->state) {
+ case DEV_ST_TRANSITION_PBL:
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state))
+ mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ if (MHI_IN_PBL(mhi_cntrl->ee))
+ wake_up_all(&mhi_cntrl->state_event);
+ break;
+ case DEV_ST_TRANSITION_SBL:
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ mhi_cntrl->ee = MHI_EE_SBL;
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ /*
+ * The MHI devices are only created when the client
+ * device switches its Execution Environment (EE) to
+ * either SBL or AMSS states
+ */
+ mhi_create_devices(mhi_cntrl);
+ break;
+ case DEV_ST_TRANSITION_MISSION_MODE:
+ mhi_pm_mission_mode_transition(mhi_cntrl);
+ break;
+ case DEV_ST_TRANSITION_READY:
+ mhi_ready_state_transition(mhi_cntrl);
+ break;
+ default:
+ break;
+ }
+ kfree(itr);
+ }
+}
+
+int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl)
+{
+ int ret;
+
+ /* Wake up the device */
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ mhi_cntrl->wake_get(mhi_cntrl, true);
+ if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+ pm_wakeup_event(&mhi_cntrl->mhi_dev->dev, 0);
+ mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+ mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+ }
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ ret = wait_event_timeout(mhi_cntrl->state_event,
+ mhi_cntrl->pm_state == MHI_PM_M0 ||
+ MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+ if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ mhi_cntrl->wake_put(mhi_cntrl, false);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ return -EIO;
+ }
+
+ return 0;
+}
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index d08f212cdfd0..80633099b90d 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -107,6 +107,31 @@ enum mhi_ee_type {
MHI_EE_MAX,
};
+/**
+ * enum mhi_state - MHI states
+ * @MHI_STATE_RESET: Reset state
+ * @MHI_STATE_READY: Ready state
+ * @MHI_STATE_M0: M0 state
+ * @MHI_STATE_M1: M1 state
+ * @MHI_STATE_M2: M2 state
+ * @MHI_STATE_M3: M3 state
+ * @MHI_STATE_M3_FAST: M3 Fast state
+ * @MHI_STATE_BHI: BHI state
+ * @MHI_STATE_SYS_ERR: System Error state
+ */
+enum mhi_state {
+ MHI_STATE_RESET = 0x0,
+ MHI_STATE_READY = 0x1,
+ MHI_STATE_M0 = 0x2,
+ MHI_STATE_M1 = 0x3,
+ MHI_STATE_M2 = 0x4,
+ MHI_STATE_M3 = 0x5,
+ MHI_STATE_M3_FAST = 0x6,
+ MHI_STATE_BHI = 0x7,
+ MHI_STATE_SYS_ERR = 0xFF,
+ MHI_STATE_MAX,
+};
+
/**
* enum mhi_buf_type - Accepted buffer type for the channel
* @MHI_BUF_RAW: Raw buffer
@@ -274,6 +299,7 @@ struct mhi_controller_config {
* @pm_state: MHI power management state
* @db_access: DB access states
* @ee: MHI device execution environment
+ * @dev_state: MHI device state
* @wake_set: Device wakeup set flag
* @dev_wake: Device wakeup count
* @alloc_size: Total memory allocations size of the controller
@@ -339,6 +365,7 @@ struct mhi_controller {
u32 pm_state;
u32 db_access;
enum mhi_ee_type ee;
+ enum mhi_state dev_state;
bool wake_set;
atomic_t dev_wake;
atomic_t alloc_size;
@@ -411,6 +438,22 @@ struct mhi_result {
int transaction_status;
};
+/**
+ * struct mhi_buf - MHI Buffer description
+ * @buf: Virtual address of the buffer
+ * @dma_addr: IOMMU address of the buffer
+ * @len: # of bytes
+ * @name: Buffer label. For an offload channel, the configuration name must be:
+ * ECA - Event context array data
+ * CCA - Channel context array data
+ */
+struct mhi_buf {
+ void *buf;
+ dma_addr_t dma_addr;
+ size_t len;
+ const char *name;
+};
+
/**
* struct mhi_driver - Structure representing a MHI client driver
* @probe: CB function for client driver probe function
@@ -481,4 +524,12 @@ int mhi_driver_register(struct mhi_driver *mhi_drv);
*/
void mhi_driver_unregister(struct mhi_driver *mhi_drv);
+/**
+ * mhi_set_mhi_state - Set MHI device state
+ * @mhi_cntrl: MHI controller
+ * @state: State to set
+ */
+void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl,
+ enum mhi_state state);
+
#endif /* _MHI_H_ */
--
2.17.1
Add support for basic MHI PM operations such as
mhi_async_power_up, mhi_sync_power_up, and mhi_power_down. These
routines place the MHI bus into the respective power domain states
and call the state_transition APIs when necessary. The MHI
controller driver is expected to call these PM routines for
MHI powerup and powerdown.
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/989
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[mani: split the pm patch and cleaned up for upstream]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/Makefile | 2 +-
drivers/bus/mhi/core/boot.c | 89 +++++++++
drivers/bus/mhi/core/init.c | 313 ++++++++++++++++++++++++++++++++
drivers/bus/mhi/core/internal.h | 37 ++++
drivers/bus/mhi/core/main.c | 89 +++++++++
drivers/bus/mhi/core/pm.c | 218 ++++++++++++++++++++++
include/linux/mhi.h | 51 ++++++
7 files changed, 798 insertions(+), 1 deletion(-)
create mode 100644 drivers/bus/mhi/core/boot.c
diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
index a0070f9cdfcd..66e2700c9032 100644
--- a/drivers/bus/mhi/core/Makefile
+++ b/drivers/bus/mhi/core/Makefile
@@ -1,3 +1,3 @@
obj-$(CONFIG_MHI_BUS) := mhi.o
-mhi-y := init.o main.o pm.o
+mhi-y := init.o main.o pm.o boot.o
diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/core/boot.c
new file mode 100644
index 000000000000..0996f18c4281
--- /dev/null
+++ b/drivers/bus/mhi/core/boot.c
@@ -0,0 +1,89 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#define dev_fmt(fmt) "MHI: " fmt
+
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/firmware.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/mhi.h>
+#include <linux/module.h>
+#include <linux/random.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
+#include "internal.h"
+
+void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
+ struct image_info *image_info)
+{
+ int i;
+ struct mhi_buf *mhi_buf = image_info->mhi_buf;
+
+ for (i = 0; i < image_info->entries; i++, mhi_buf++)
+ mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
+ mhi_buf->dma_addr);
+
+ kfree(image_info->mhi_buf);
+ kfree(image_info);
+}
+
+int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
+ struct image_info **image_info,
+ size_t alloc_size)
+{
+ size_t seg_size = mhi_cntrl->seg_len;
+ int segments = DIV_ROUND_UP(alloc_size, seg_size) + 1;
+ int i;
+ struct image_info *img_info;
+ struct mhi_buf *mhi_buf;
+
+ img_info = kzalloc(sizeof(*img_info), GFP_KERNEL);
+ if (!img_info)
+ return -ENOMEM;
+
+ /* Allocate memory for entries */
+ img_info->mhi_buf = kcalloc(segments, sizeof(*img_info->mhi_buf),
+ GFP_KERNEL);
+ if (!img_info->mhi_buf)
+ goto error_alloc_mhi_buf;
+
+ /* Allocate and populate vector table */
+ mhi_buf = img_info->mhi_buf;
+ for (i = 0; i < segments; i++, mhi_buf++) {
+ size_t vec_size = seg_size;
+
+ /* Vector table is the last entry */
+ if (i == segments - 1)
+ vec_size = sizeof(struct bhi_vec_entry) * i;
+
+ mhi_buf->len = vec_size;
+ mhi_buf->buf = mhi_alloc_coherent(mhi_cntrl, vec_size,
+ &mhi_buf->dma_addr,
+ GFP_KERNEL);
+ if (!mhi_buf->buf)
+ goto error_alloc_segment;
+ }
+
+ img_info->bhi_vec = img_info->mhi_buf[segments - 1].buf;
+ img_info->entries = segments;
+ *image_info = img_info;
+
+ return 0;
+
+error_alloc_segment:
+ for (--i, --mhi_buf; i >= 0; i--, mhi_buf--)
+ mhi_free_coherent(mhi_cntrl, mhi_buf->len, mhi_buf->buf,
+ mhi_buf->dma_addr);
+
+error_alloc_mhi_buf:
+ kfree(img_info);
+
+ return -ENOMEM;
+}
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index 83a03493c757..7fff92e9661b 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -75,6 +75,284 @@ const char *to_mhi_pm_state_str(enum mhi_pm_state state)
return mhi_pm_state_str[index];
}
+/* MHI protocol requires the transfer ring to be aligned with ring length */
+static int mhi_alloc_aligned_ring(struct mhi_controller *mhi_cntrl,
+ struct mhi_ring *ring,
+ u64 len)
+{
+ ring->alloc_size = len + (len - 1);
+ ring->pre_aligned = mhi_alloc_coherent(mhi_cntrl, ring->alloc_size,
+ &ring->dma_handle, GFP_KERNEL);
+ if (!ring->pre_aligned)
+ return -ENOMEM;
+
+ ring->iommu_base = (ring->dma_handle + (len - 1)) & ~(len - 1);
+ ring->base = ring->pre_aligned + (ring->iommu_base - ring->dma_handle);
+
+ return 0;
+}
+
+void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl)
+{
+ int i;
+ struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
+
+ for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+ if (mhi_event->offload_ev)
+ continue;
+
+ free_irq(mhi_cntrl->irq[mhi_event->irq], mhi_event);
+ }
+
+ free_irq(mhi_cntrl->irq[0], mhi_cntrl);
+}
+
+int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl)
+{
+ int i;
+ int ret;
+ struct mhi_event *mhi_event = mhi_cntrl->mhi_event;
+
+ /* Setup BHI_INTVEC IRQ */
+ ret = request_threaded_irq(mhi_cntrl->irq[0], mhi_intvec_handler,
+ mhi_intvec_threaded_handler,
+ IRQF_SHARED | IRQF_NO_SUSPEND,
+ "bhi", mhi_cntrl);
+ if (ret)
+ return ret;
+
+ for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+ if (mhi_event->offload_ev)
+ continue;
+
+ ret = request_irq(mhi_cntrl->irq[mhi_event->irq],
+ mhi_irq_handler,
+ IRQF_SHARED | IRQF_NO_SUSPEND,
+ "mhi", mhi_event);
+ if (ret) {
+ dev_err(mhi_cntrl->dev,
+ "Error requesting irq:%d for ev:%d\n",
+ mhi_cntrl->irq[mhi_event->irq], i);
+ goto error_request;
+ }
+ }
+
+ return 0;
+
+error_request:
+ for (--i, --mhi_event; i >= 0; i--, mhi_event--) {
+ if (mhi_event->offload_ev)
+ continue;
+
+ free_irq(mhi_cntrl->irq[mhi_event->irq], mhi_event);
+ }
+ free_irq(mhi_cntrl->irq[0], mhi_cntrl);
+
+ return ret;
+}
+
+void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl)
+{
+ int i;
+ struct mhi_ctxt *mhi_ctxt = mhi_cntrl->mhi_ctxt;
+ struct mhi_cmd *mhi_cmd;
+ struct mhi_event *mhi_event;
+ struct mhi_ring *ring;
+
+ mhi_cmd = mhi_cntrl->mhi_cmd;
+ for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++) {
+ ring = &mhi_cmd->ring;
+ mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+ ring->pre_aligned, ring->dma_handle);
+ ring->base = NULL;
+ ring->iommu_base = 0;
+ }
+
+ mhi_free_coherent(mhi_cntrl,
+ sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS,
+ mhi_ctxt->cmd_ctxt, mhi_ctxt->cmd_ctxt_addr);
+
+ mhi_event = mhi_cntrl->mhi_event;
+ for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
+ if (mhi_event->offload_ev)
+ continue;
+
+ ring = &mhi_event->ring;
+ mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+ ring->pre_aligned, ring->dma_handle);
+ ring->base = NULL;
+ ring->iommu_base = 0;
+ }
+
+ mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->er_ctxt) *
+ mhi_cntrl->total_ev_rings, mhi_ctxt->er_ctxt,
+ mhi_ctxt->er_ctxt_addr);
+
+ mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->chan_ctxt) *
+ mhi_cntrl->max_chan, mhi_ctxt->chan_ctxt,
+ mhi_ctxt->chan_ctxt_addr);
+
+ kfree(mhi_ctxt);
+ mhi_cntrl->mhi_ctxt = NULL;
+}
+
+int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl)
+{
+ struct mhi_ctxt *mhi_ctxt;
+ struct mhi_chan_ctxt *chan_ctxt;
+ struct mhi_event_ctxt *er_ctxt;
+ struct mhi_cmd_ctxt *cmd_ctxt;
+ struct mhi_chan *mhi_chan;
+ struct mhi_event *mhi_event;
+ struct mhi_cmd *mhi_cmd;
+ int ret = -ENOMEM, i;
+
+ atomic_set(&mhi_cntrl->dev_wake, 0);
+ atomic_set(&mhi_cntrl->alloc_size, 0);
+ atomic_set(&mhi_cntrl->pending_pkts, 0);
+
+ mhi_ctxt = kzalloc(sizeof(*mhi_ctxt), GFP_KERNEL);
+ if (!mhi_ctxt)
+ return -ENOMEM;
+
+ /* Setup channel ctxt */
+ mhi_ctxt->chan_ctxt = mhi_alloc_coherent(mhi_cntrl,
+ sizeof(*mhi_ctxt->chan_ctxt) *
+ mhi_cntrl->max_chan,
+ &mhi_ctxt->chan_ctxt_addr,
+ GFP_KERNEL);
+ if (!mhi_ctxt->chan_ctxt)
+ goto error_alloc_chan_ctxt;
+
+ mhi_chan = mhi_cntrl->mhi_chan;
+ chan_ctxt = mhi_ctxt->chan_ctxt;
+ for (i = 0; i < mhi_cntrl->max_chan; i++, chan_ctxt++, mhi_chan++) {
+ /* Skip if it is an offload channel */
+ if (mhi_chan->offload_ch)
+ continue;
+
+ chan_ctxt->chstate = MHI_CH_STATE_DISABLED;
+ chan_ctxt->brstmode = mhi_chan->db_cfg.brstmode;
+ chan_ctxt->pollcfg = mhi_chan->db_cfg.pollcfg;
+ chan_ctxt->chtype = mhi_chan->type;
+ chan_ctxt->erindex = mhi_chan->er_index;
+
+ mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+ mhi_chan->tre_ring.db_addr = (void __iomem *)&chan_ctxt->wp;
+ }
+
+ /* Setup event context */
+ mhi_ctxt->er_ctxt = mhi_alloc_coherent(mhi_cntrl,
+ sizeof(*mhi_ctxt->er_ctxt) *
+ mhi_cntrl->total_ev_rings,
+ &mhi_ctxt->er_ctxt_addr,
+ GFP_KERNEL);
+ if (!mhi_ctxt->er_ctxt)
+ goto error_alloc_er_ctxt;
+
+ er_ctxt = mhi_ctxt->er_ctxt;
+ mhi_event = mhi_cntrl->mhi_event;
+ for (i = 0; i < mhi_cntrl->total_ev_rings; i++, er_ctxt++,
+ mhi_event++) {
+ struct mhi_ring *ring = &mhi_event->ring;
+
+ /* Skip if it is an offload event */
+ if (mhi_event->offload_ev)
+ continue;
+
+ er_ctxt->intmodc = 0;
+ er_ctxt->intmodt = mhi_event->intmod;
+ er_ctxt->ertype = MHI_ER_TYPE_VALID;
+ er_ctxt->msivec = mhi_event->irq;
+ mhi_event->db_cfg.db_mode = true;
+
+ ring->el_size = sizeof(struct mhi_tre);
+ ring->len = ring->el_size * ring->elements;
+ ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
+ if (ret)
+ goto error_alloc_er;
+
+ /*
+ * If the read pointer equals the write pointer, then the
+ * ring is empty
+ */
+ ring->rp = ring->wp = ring->base;
+ er_ctxt->rbase = ring->iommu_base;
+ er_ctxt->rp = er_ctxt->wp = er_ctxt->rbase;
+ er_ctxt->rlen = ring->len;
+ ring->ctxt_wp = &er_ctxt->wp;
+ }
+
+ /* Setup cmd context */
+ mhi_ctxt->cmd_ctxt = mhi_alloc_coherent(mhi_cntrl,
+ sizeof(*mhi_ctxt->cmd_ctxt) *
+ NR_OF_CMD_RINGS,
+ &mhi_ctxt->cmd_ctxt_addr,
+ GFP_KERNEL);
+ if (!mhi_ctxt->cmd_ctxt)
+ goto error_alloc_er;
+
+ mhi_cmd = mhi_cntrl->mhi_cmd;
+ cmd_ctxt = mhi_ctxt->cmd_ctxt;
+ for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++, cmd_ctxt++) {
+ struct mhi_ring *ring = &mhi_cmd->ring;
+
+ ring->el_size = sizeof(struct mhi_tre);
+ ring->elements = CMD_EL_PER_RING;
+ ring->len = ring->el_size * ring->elements;
+ ret = mhi_alloc_aligned_ring(mhi_cntrl, ring, ring->len);
+ if (ret)
+ goto error_alloc_cmd;
+
+ ring->rp = ring->wp = ring->base;
+ cmd_ctxt->rbase = ring->iommu_base;
+ cmd_ctxt->rp = cmd_ctxt->wp = cmd_ctxt->rbase;
+ cmd_ctxt->rlen = ring->len;
+ ring->ctxt_wp = &cmd_ctxt->wp;
+ }
+
+ mhi_cntrl->mhi_ctxt = mhi_ctxt;
+
+ return 0;
+
+error_alloc_cmd:
+ for (--i, --mhi_cmd; i >= 0; i--, mhi_cmd--) {
+ struct mhi_ring *ring = &mhi_cmd->ring;
+
+ mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+ ring->pre_aligned, ring->dma_handle);
+ }
+ mhi_free_coherent(mhi_cntrl,
+ sizeof(*mhi_ctxt->cmd_ctxt) * NR_OF_CMD_RINGS,
+ mhi_ctxt->cmd_ctxt, mhi_ctxt->cmd_ctxt_addr);
+ i = mhi_cntrl->total_ev_rings;
+ mhi_event = mhi_cntrl->mhi_event + i;
+
+error_alloc_er:
+ for (--i, --mhi_event; i >= 0; i--, mhi_event--) {
+ struct mhi_ring *ring = &mhi_event->ring;
+
+ if (mhi_event->offload_ev)
+ continue;
+
+ mhi_free_coherent(mhi_cntrl, ring->alloc_size,
+ ring->pre_aligned, ring->dma_handle);
+ }
+ mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->er_ctxt) *
+ mhi_cntrl->total_ev_rings, mhi_ctxt->er_ctxt,
+ mhi_ctxt->er_ctxt_addr);
+
+error_alloc_er_ctxt:
+ mhi_free_coherent(mhi_cntrl, sizeof(*mhi_ctxt->chan_ctxt) *
+ mhi_cntrl->max_chan, mhi_ctxt->chan_ctxt,
+ mhi_ctxt->chan_ctxt_addr);
+
+error_alloc_chan_ctxt:
+ kfree(mhi_ctxt);
+
+ return ret;
+}
+
int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
{
u32 val;
@@ -548,6 +826,41 @@ void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
}
EXPORT_SYMBOL_GPL(mhi_unregister_controller);
+int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
+{
+ int ret;
+
+ mutex_lock(&mhi_cntrl->pm_mutex);
+
+ ret = mhi_init_dev_ctxt(mhi_cntrl);
+ if (ret)
+ goto error_dev_ctxt;
+
+ mhi_cntrl->pre_init = true;
+
+ mutex_unlock(&mhi_cntrl->pm_mutex);
+
+ return 0;
+
+error_dev_ctxt:
+ mutex_unlock(&mhi_cntrl->pm_mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_prepare_for_power_up);
+
+void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl)
+{
+ if (mhi_cntrl->fbc_image) {
+ mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
+ mhi_cntrl->fbc_image = NULL;
+ }
+
+ mhi_deinit_dev_ctxt(mhi_cntrl);
+ mhi_cntrl->pre_init = false;
+}
+EXPORT_SYMBOL_GPL(mhi_unprepare_after_power_down);
+
static void mhi_release_device(struct device *dev)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index 2dba4923482a..d920264ded21 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -575,6 +575,11 @@ static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
int mhi_destroy_device(struct device *dev, void *data);
void mhi_create_devices(struct mhi_controller *mhi_cntrl);
+int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
+ struct image_info **image_info, size_t alloc_size);
+void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
+ struct image_info *image_info);
+
/* Power management APIs */
enum mhi_pm_state __must_check mhi_tryset_pm_state(
struct mhi_controller *mhi_cntrl,
@@ -616,5 +621,37 @@ void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
/* Initialization methods */
int mhi_init_mmio(struct mhi_controller *mhi_cntrl);
+int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl);
+void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl);
+int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
+void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
+
+/* Memory allocation methods */
+static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl,
+ size_t size,
+ dma_addr_t *dma_handle,
+ gfp_t gfp)
+{
+ void *buf = dma_alloc_coherent(mhi_cntrl->dev, size, dma_handle, gfp);
+
+ if (buf)
+ atomic_add(size, &mhi_cntrl->alloc_size);
+
+ return buf;
+}
+
+static inline void mhi_free_coherent(struct mhi_controller *mhi_cntrl,
+ size_t size,
+ void *vaddr,
+ dma_addr_t dma_handle)
+{
+ atomic_sub(size, &mhi_cntrl->alloc_size);
+ dma_free_coherent(mhi_cntrl->dev, size, vaddr, dma_handle);
+}
+
+/* ISR handlers */
+irqreturn_t mhi_irq_handler(int irq_number, void *dev);
+irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev);
+irqreturn_t mhi_intvec_handler(int irq_number, void *dev);
#endif /* _MHI_INT_H */
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index df980fb3b6db..453feba8f02a 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -144,6 +144,11 @@ enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl)
return ret ? MHI_STATE_MAX : state;
}
+static void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr)
+{
+ return (addr - ring->iommu_base) + ring->base;
+}
+
int mhi_destroy_device(struct device *dev, void *data)
{
struct mhi_device *mhi_dev;
@@ -248,3 +253,87 @@ void mhi_create_devices(struct mhi_controller *mhi_cntrl)
mhi_dealloc_device(mhi_cntrl, mhi_dev);
}
}
+
+irqreturn_t mhi_irq_handler(int irq_number, void *dev)
+{
+ struct mhi_event *mhi_event = dev;
+ struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+ struct mhi_event_ctxt *er_ctxt =
+ &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+ struct mhi_ring *ev_ring = &mhi_event->ring;
+ void *dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+
+ /* Only proceed if event ring has pending events */
+ if (ev_ring->rp == dev_rp)
+ return IRQ_HANDLED;
+
+ /* For client managed event ring, notify pending data */
+ if (mhi_event->cl_manage) {
+ struct mhi_chan *mhi_chan = mhi_event->mhi_chan;
+ struct mhi_device *mhi_dev = mhi_chan->mhi_dev;
+
+ if (mhi_dev)
+ mhi_notify(mhi_dev, MHI_CB_PENDING_DATA);
+ } else {
+ tasklet_schedule(&mhi_event->task);
+ }
+
+ return IRQ_HANDLED;
+}
+
+irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev)
+{
+ struct mhi_controller *mhi_cntrl = dev;
+ enum mhi_state state = MHI_STATE_MAX;
+ enum mhi_pm_state pm_state = 0;
+ enum mhi_ee_type ee = 0;
+
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+ state = mhi_get_mhi_state(mhi_cntrl);
+ ee = mhi_cntrl->ee;
+ mhi_cntrl->ee = mhi_get_exec_env(mhi_cntrl);
+ }
+
+ if (state == MHI_STATE_SYS_ERR) {
+ dev_dbg(mhi_cntrl->dev, "System error detected\n");
+ pm_state = mhi_tryset_pm_state(mhi_cntrl,
+ MHI_PM_SYS_ERR_DETECT);
+ }
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+
+ /* If the device is in RDDM, don't bother processing the SYS error */
+ if (mhi_cntrl->ee == MHI_EE_RDDM) {
+ if (mhi_cntrl->ee != ee) {
+ mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data,
+ MHI_CB_EE_RDDM);
+ wake_up_all(&mhi_cntrl->state_event);
+ }
+ goto exit_intvec;
+ }
+
+ if (pm_state == MHI_PM_SYS_ERR_DETECT) {
+ wake_up_all(&mhi_cntrl->state_event);
+
+ /* For fatal errors, we let controller decide next step */
+ if (MHI_IN_PBL(ee))
+ mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data,
+ MHI_CB_FATAL_ERROR);
+ else
+ schedule_work(&mhi_cntrl->syserr_worker);
+ }
+
+exit_intvec:
+
+ return IRQ_HANDLED;
+}
+
+irqreturn_t mhi_intvec_handler(int irq_number, void *dev)
+{
+ struct mhi_controller *mhi_cntrl = dev;
+
+ /* Wake up events waiting for state change */
+ wake_up_all(&mhi_cntrl->state_event);
+
+ return IRQ_WAKE_THREAD;
+}
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
index 75d43b1039ea..b67ae2455fc5 100644
--- a/drivers/bus/mhi/core/pm.c
+++ b/drivers/bus/mhi/core/pm.c
@@ -140,6 +140,17 @@ void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl, enum mhi_state state)
}
}
+/* NOP for backward compatibility, host allowed to ring DB in M2 state */
+static void mhi_toggle_dev_wake_nop(struct mhi_controller *mhi_cntrl)
+{
+}
+
+static void mhi_toggle_dev_wake(struct mhi_controller *mhi_cntrl)
+{
+ mhi_cntrl->wake_get(mhi_cntrl, false);
+ mhi_cntrl->wake_put(mhi_cntrl, true);
+}
+
/* Handle device ready state transition */
int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl)
{
@@ -683,3 +694,210 @@ int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl)
return 0;
}
+
+/* Assert device wake db */
+static void mhi_assert_dev_wake(struct mhi_controller *mhi_cntrl, bool force)
+{
+ unsigned long flags;
+
+ /*
+ * If force flag is set, then increment the wake count value and
+ * ring wake db
+ */
+ if (unlikely(force)) {
+ spin_lock_irqsave(&mhi_cntrl->wlock, flags);
+ atomic_inc(&mhi_cntrl->dev_wake);
+ if (MHI_WAKE_DB_FORCE_SET_VALID(mhi_cntrl->pm_state) &&
+ !mhi_cntrl->wake_set) {
+ mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1);
+ mhi_cntrl->wake_set = true;
+ }
+ spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
+ } else {
+ /*
+ * If resources are already requested, then just increment
+ * the wake count value and return
+ */
+ if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, 1, 0)))
+ return;
+
+ spin_lock_irqsave(&mhi_cntrl->wlock, flags);
+ if ((atomic_inc_return(&mhi_cntrl->dev_wake) == 1) &&
+ MHI_WAKE_DB_SET_VALID(mhi_cntrl->pm_state) &&
+ !mhi_cntrl->wake_set) {
+ mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 1);
+ mhi_cntrl->wake_set = true;
+ }
+ spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
+ }
+}
+
+/* De-assert device wake db */
+static void mhi_deassert_dev_wake(struct mhi_controller *mhi_cntrl,
+ bool override)
+{
+ unsigned long flags;
+
+ /*
+ * Only continue if there is a single resource, else just decrement
+ * and return
+ */
+ if (likely(atomic_add_unless(&mhi_cntrl->dev_wake, -1, 1)))
+ return;
+
+ spin_lock_irqsave(&mhi_cntrl->wlock, flags);
+ if ((atomic_dec_return(&mhi_cntrl->dev_wake) == 0) &&
+ MHI_WAKE_DB_CLEAR_VALID(mhi_cntrl->pm_state) && !override &&
+ mhi_cntrl->wake_set) {
+ mhi_write_db(mhi_cntrl, mhi_cntrl->wake_db, 0);
+ mhi_cntrl->wake_set = false;
+ }
+ spin_unlock_irqrestore(&mhi_cntrl->wlock, flags);
+}
+
+int mhi_async_power_up(struct mhi_controller *mhi_cntrl)
+{
+ int ret;
+ u32 val;
+ enum mhi_ee_type current_ee;
+ enum dev_st_transition next_state;
+
+ dev_info(mhi_cntrl->dev, "Requested to power ON\n");
+
+ if (mhi_cntrl->nr_irqs < mhi_cntrl->total_ev_rings)
+ return -EINVAL;
+
+ /* Supply default wake routines if not provided by controller driver */
+ if (!mhi_cntrl->wake_get || !mhi_cntrl->wake_put ||
+ !mhi_cntrl->wake_toggle) {
+ mhi_cntrl->wake_get = mhi_assert_dev_wake;
+ mhi_cntrl->wake_put = mhi_deassert_dev_wake;
+ mhi_cntrl->wake_toggle = (mhi_cntrl->db_access & MHI_PM_M2) ?
+ mhi_toggle_dev_wake_nop : mhi_toggle_dev_wake;
+ }
+
+ mutex_lock(&mhi_cntrl->pm_mutex);
+ mhi_cntrl->pm_state = MHI_PM_DISABLE;
+
+ if (!mhi_cntrl->pre_init) {
+ /* Setup device context */
+ ret = mhi_init_dev_ctxt(mhi_cntrl);
+ if (ret)
+ goto error_dev_ctxt;
+ }
+
+ ret = mhi_init_irq_setup(mhi_cntrl);
+ if (ret)
+ goto error_setup_irq;
+
+ /* Setup BHI offset & INTVEC */
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIOFF, &val);
+ if (ret) {
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ goto error_bhi_offset;
+ }
+
+ mhi_cntrl->bhi = mhi_cntrl->regs + val;
+
+ /* Setup BHIE offset */
+ if (mhi_cntrl->fbc_download) {
+ ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIEOFF, &val);
+ if (ret) {
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ dev_err(mhi_cntrl->dev, "Error reading BHIE offset\n");
+ goto error_bhi_offset;
+ }
+
+ mhi_cntrl->bhie = mhi_cntrl->regs + val;
+ }
+
+ mhi_write_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_INTVEC, 0);
+ mhi_cntrl->pm_state = MHI_PM_POR;
+ mhi_cntrl->ee = MHI_EE_MAX;
+ current_ee = mhi_get_exec_env(mhi_cntrl);
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+
+ /* Confirm that the device is in valid exec env */
+ if (!MHI_IN_PBL(current_ee) && current_ee != MHI_EE_AMSS) {
+ dev_err(mhi_cntrl->dev, "Not a valid EE for power on\n");
+ ret = -EIO;
+ goto error_bhi_offset;
+ }
+
+ /* Transition to next state */
+ next_state = MHI_IN_PBL(current_ee) ?
+ DEV_ST_TRANSITION_PBL : DEV_ST_TRANSITION_READY;
+
+ if (next_state == DEV_ST_TRANSITION_PBL)
+ schedule_work(&mhi_cntrl->fw_worker);
+
+ mhi_queue_state_transition(mhi_cntrl, next_state);
+
+ mutex_unlock(&mhi_cntrl->pm_mutex);
+
+ dev_info(mhi_cntrl->dev, "Power on setup success\n");
+
+ return 0;
+
+error_bhi_offset:
+ mhi_deinit_free_irq(mhi_cntrl);
+
+error_setup_irq:
+ if (!mhi_cntrl->pre_init)
+ mhi_deinit_dev_ctxt(mhi_cntrl);
+
+error_dev_ctxt:
+ mutex_unlock(&mhi_cntrl->pm_mutex);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_async_power_up);
+
+void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful)
+{
+ enum mhi_pm_state cur_state;
+
+ /* If it's not a graceful shutdown, force MHI to linkdown state */
+ if (!graceful) {
+ mutex_lock(&mhi_cntrl->pm_mutex);
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ cur_state = mhi_tryset_pm_state(mhi_cntrl,
+ MHI_PM_LD_ERR_FATAL_DETECT);
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ mutex_unlock(&mhi_cntrl->pm_mutex);
+ if (cur_state != MHI_PM_LD_ERR_FATAL_DETECT)
+ dev_dbg(mhi_cntrl->dev,
+ "Failed to move to state: %s from: %s\n",
+ to_mhi_pm_state_str(MHI_PM_LD_ERR_FATAL_DETECT),
+ to_mhi_pm_state_str(mhi_cntrl->pm_state));
+ }
+ mhi_pm_disable_transition(mhi_cntrl, MHI_PM_SHUTDOWN_PROCESS);
+ mhi_deinit_free_irq(mhi_cntrl);
+
+ if (!mhi_cntrl->pre_init) {
+ /* Free all allocated resources */
+ if (mhi_cntrl->fbc_image) {
+ mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
+ mhi_cntrl->fbc_image = NULL;
+ }
+ mhi_deinit_dev_ctxt(mhi_cntrl);
+ }
+}
+EXPORT_SYMBOL_GPL(mhi_power_down);
+
+int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
+{
+ int ret = mhi_async_power_up(mhi_cntrl);
+
+ if (ret)
+ return ret;
+
+ wait_event_timeout(mhi_cntrl->state_event,
+ MHI_IN_MISSION_MODE(mhi_cntrl->ee) ||
+ MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+ return (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 0 : -EIO;
+}
+EXPORT_SYMBOL(mhi_sync_power_up);
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 80633099b90d..04c500323214 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -83,6 +83,17 @@ enum mhi_ch_type {
MHI_CH_TYPE_INBOUND_COALESCED = 3,
};
+/**
+ * struct image_info - Firmware and RDDM table
+ * @mhi_buf: Buffer for firmware and RDDM table
+ * @entries: # of entries in table
+ */
+struct image_info {
+ struct mhi_buf *mhi_buf;
+ struct bhi_vec_entry *bhi_vec;
+ u32 entries;
+};
+
/**
* enum mhi_ee_type - Execution environment types
* @MHI_EE_PBL: Primary Bootloader
@@ -272,6 +283,7 @@ struct mhi_controller_config {
* @bus_id: Physical bus instance used by the controller
* @regs: Base address of MHI MMIO register space
* @bhi: Points to base of MHI BHI register space
+ * @bhie: Points to base of MHI BHIe register space
* @wake_db: MHI WAKE doorbell register address
* @iova_start: IOMMU starting address for data
* @iova_stop: IOMMU stop address for data
@@ -280,6 +292,7 @@ struct mhi_controller_config {
* @fbc_download: MHI host needs to do complete image transfer
* @sbl_size: SBL image size
* @seg_len: BHIe vector size
+ * @fbc_image: Points to firmware image buffer
* @max_chan: Maximum number of channels the controller supports
* @mhi_chan: Points to the channel configuration table
* @lpm_chans: List of channels that require LPM notifications
@@ -335,6 +348,7 @@ struct mhi_controller {
u32 bus_id;
void __iomem *regs;
void __iomem *bhi;
+ void __iomem *bhie;
void __iomem *wake_db;
dma_addr_t iova_start;
@@ -344,6 +358,7 @@ struct mhi_controller {
bool fbc_download;
size_t sbl_size;
size_t seg_len;
+ struct image_info *fbc_image;
u32 max_chan;
struct mhi_chan *mhi_chan;
struct list_head lpm_chans;
@@ -532,4 +547,40 @@ void mhi_driver_unregister(struct mhi_driver *mhi_drv);
void mhi_set_mhi_state(struct mhi_controller *mhi_cntrl,
enum mhi_state state);
+/**
+ * mhi_prepare_for_power_up - Do pre-initialization before power up.
+ * This is optional; call it before power up if
+ * the controller does not want the bus framework
+ * to automatically free any allocated memory
+ * during the shutdown process.
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_async_power_up - Start MHI power up sequence
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_async_power_up(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_sync_power_up - Start MHI power up sequence and wait until the
+ * device enters a valid EE state
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_sync_power_up(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_power_down - Start MHI power down sequence
+ * @mhi_cntrl: MHI controller
+ * @graceful: Link is still accessible, so do a graceful shutdown process
+ */
+void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful);
+
+/**
+ * mhi_unprepare_after_power_down - Free any allocated memory after power down
+ * @mhi_cntrl: MHI controller
+ */
+void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl);
+
#endif /* _MHI_H_ */
--
2.17.1
Add uevent support to the MHI bus so that client drivers can be autoloaded
by udev when the MHI devices get created. The client drivers are
expected to provide MODULE_DEVICE_TABLE with the MHI id_table struct so
that the alias can be exported.
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/init.c | 9 +++++++++
include/linux/mod_devicetable.h | 1 +
scripts/mod/devicetable-offsets.c | 3 +++
scripts/mod/file2alias.c | 10 ++++++++++
4 files changed, 23 insertions(+)
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index 40dcf8353f6f..152d12066bec 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -1229,6 +1229,14 @@ void mhi_driver_unregister(struct mhi_driver *mhi_drv)
}
EXPORT_SYMBOL_GPL(mhi_driver_unregister);
+static int mhi_uevent(struct device *dev, struct kobj_uevent_env *env)
+{
+ struct mhi_device *mhi_dev = to_mhi_device(dev);
+
+ return add_uevent_var(env, "MODALIAS=" MHI_DEVICE_MODALIAS_FMT,
+ mhi_dev->chan_name);
+}
+
static int mhi_match(struct device *dev, struct device_driver *drv)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
@@ -1255,6 +1263,7 @@ struct bus_type mhi_bus_type = {
.name = "mhi",
.dev_name = "mhi",
.match = mhi_match,
+ .uevent = mhi_uevent,
};
static int __init mhi_init(void)
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index be15e997fe39..f10e779a3fd0 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -821,6 +821,7 @@ struct wmi_device_id {
const void *context;
};
+#define MHI_DEVICE_MODALIAS_FMT "mhi:%s"
#define MHI_NAME_SIZE 32
/**
diff --git a/scripts/mod/devicetable-offsets.c b/scripts/mod/devicetable-offsets.c
index 054405b90ba4..fe3f4a95cb21 100644
--- a/scripts/mod/devicetable-offsets.c
+++ b/scripts/mod/devicetable-offsets.c
@@ -231,5 +231,8 @@ int main(void)
DEVID(wmi_device_id);
DEVID_FIELD(wmi_device_id, guid_string);
+ DEVID(mhi_device_id);
+ DEVID_FIELD(mhi_device_id, chan);
+
return 0;
}
diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
index c91eba751804..cae6a4e471b5 100644
--- a/scripts/mod/file2alias.c
+++ b/scripts/mod/file2alias.c
@@ -1335,6 +1335,15 @@ static int do_wmi_entry(const char *filename, void *symval, char *alias)
return 1;
}
+/* Looks like: mhi:S */
+static int do_mhi_entry(const char *filename, void *symval, char *alias)
+{
+ DEF_FIELD_ADDR(symval, mhi_device_id, chan);
+ sprintf(alias, MHI_DEVICE_MODALIAS_FMT, *chan);
+
+ return 1;
+}
+
/* Does namelen bytes of name exactly match the symbol? */
static bool sym_is(const char *name, unsigned namelen, const char *symbol)
{
@@ -1407,6 +1416,7 @@ static const struct devtable devtable[] = {
{"typec", SIZE_typec_device_id, do_typec_entry},
{"tee", SIZE_tee_client_device_id, do_tee_entry},
{"wmi", SIZE_wmi_device_id, do_wmi_entry},
+ {"mhi", SIZE_mhi_device_id, do_mhi_entry},
};
/* Create MODULE_ALIAS() statements.
--
2.17.1
This commit adds support for processing the MHI data and control
events from the client device. The client device can report various
events, such as EE and state change events, by interrupting the
host through an IRQ and adding events to the event rings allocated by
the host during initialization.
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/988
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[mani: splitted the data transfer patch and cleaned up for upstream]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/init.c | 19 ++
drivers/bus/mhi/core/internal.h | 10 +
drivers/bus/mhi/core/main.c | 471 ++++++++++++++++++++++++++++++++
include/linux/mhi.h | 15 +
4 files changed, 515 insertions(+)
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index f54429c9b7fc..c946693bdae4 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -534,6 +534,19 @@ static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
mhi_event->data_type = event_cfg->data_type;
+ switch (mhi_event->data_type) {
+ case MHI_ER_DATA:
+ mhi_event->process_event = mhi_process_data_event_ring;
+ break;
+ case MHI_ER_CTRL:
+ mhi_event->process_event = mhi_process_ctrl_ev_ring;
+ break;
+ default:
+ dev_err(mhi_cntrl->dev,
+ "Event Ring type not supported\n");
+ goto error_ev_cfg;
+ }
+
mhi_event->hw_ring = event_cfg->hardware_event;
if (mhi_event->hw_ring)
mhi_cntrl->hw_ev_rings++;
@@ -768,6 +781,12 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
mhi_event->mhi_cntrl = mhi_cntrl;
spin_lock_init(&mhi_event->lock);
+ if (mhi_event->data_type == MHI_ER_CTRL)
+ tasklet_init(&mhi_event->task, mhi_ctrl_ev_task,
+ (ulong)mhi_event);
+ else
+ tasklet_init(&mhi_event->task, mhi_ev_task,
+ (ulong)mhi_event);
}
mhi_chan = mhi_cntrl->mhi_chan;
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index 889e91bcb2f8..314d0909c372 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -500,6 +500,8 @@ struct mhi_buf_info {
void *wp;
size_t len;
void *cb_buf;
+ bool used; /* Indicates whether the buffer is used or not */
+ bool pre_mapped; /* Already pre-mapped by client */
enum dma_data_direction dir;
};
@@ -652,6 +654,14 @@ static inline void mhi_free_coherent(struct mhi_controller *mhi_cntrl,
dma_free_coherent(mhi_cntrl->dev, size, vaddr, dma_handle);
}
+/* Event processing methods */
+void mhi_ctrl_ev_task(unsigned long data);
+void mhi_ev_task(unsigned long data);
+int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+ struct mhi_event *mhi_event, u32 event_quota);
+int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+ struct mhi_event *mhi_event, u32 event_quota);
+
/* ISR handlers */
irqreturn_t mhi_irq_handler(int irq_number, void *dev);
irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev);
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index 453feba8f02a..8450c74b4525 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -149,6 +149,16 @@ static void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr)
return (addr - ring->iommu_base) + ring->base;
}
+static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
+ struct mhi_ring *ring)
+{
+ ring->rp += ring->el_size;
+ if (ring->rp >= (ring->base + ring->len))
+ ring->rp = ring->base;
+ /* Make the ring pointer update visible to all cores */
+ smp_wmb();
+}
+
int mhi_destroy_device(struct device *dev, void *data)
{
struct mhi_device *mhi_dev;
@@ -337,3 +347,464 @@ irqreturn_t mhi_intvec_handler(int irq_number, void *dev)
return IRQ_WAKE_THREAD;
}
+
+static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
+ struct mhi_ring *ring)
+{
+ dma_addr_t ctxt_wp;
+
+ /* Update the WP */
+ ring->wp += ring->el_size;
+ ctxt_wp = *ring->ctxt_wp + ring->el_size;
+
+ if (ring->wp >= (ring->base + ring->len)) {
+ ring->wp = ring->base;
+ ctxt_wp = ring->iommu_base;
+ }
+
+ *ring->ctxt_wp = ctxt_wp;
+
+ /* Update the RP */
+ ring->rp += ring->el_size;
+ if (ring->rp >= (ring->base + ring->len))
+ ring->rp = ring->base;
+
+ /* Update to all cores */
+ smp_wmb();
+}
+
+static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+ struct mhi_tre *event,
+ struct mhi_chan *mhi_chan)
+{
+ struct mhi_ring *buf_ring, *tre_ring;
+ u32 ev_code;
+ struct mhi_result result;
+ unsigned long flags = 0;
+
+ ev_code = MHI_TRE_GET_EV_CODE(event);
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+
+ result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
+ -EOVERFLOW : 0;
+
+ /*
+ * If it's a DB Event then we need to grab the lock
+ * with preemption disabled and as a write because we
+ * have to update db register and there are chances that
+ * another thread could be doing the same.
+ */
+ if (ev_code >= MHI_EV_CC_OOB)
+ write_lock_irqsave(&mhi_chan->lock, flags);
+ else
+ read_lock_bh(&mhi_chan->lock);
+
+ if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
+ goto end_process_tx_event;
+
+ switch (ev_code) {
+ case MHI_EV_CC_OVERFLOW:
+ case MHI_EV_CC_EOB:
+ case MHI_EV_CC_EOT:
+ {
+ dma_addr_t ptr = MHI_TRE_GET_EV_PTR(event);
+ struct mhi_tre *local_rp, *ev_tre;
+ void *dev_rp;
+ struct mhi_buf_info *buf_info;
+ u16 xfer_len;
+
+ /* Get the TRB this event points to */
+ ev_tre = mhi_to_virtual(tre_ring, ptr);
+
+ /* device rp after servicing the TREs */
+ dev_rp = ev_tre + 1;
+ if (dev_rp >= (tre_ring->base + tre_ring->len))
+ dev_rp = tre_ring->base;
+
+ result.dir = mhi_chan->dir;
+
+ /* local rp */
+ local_rp = tre_ring->rp;
+ while (local_rp != dev_rp) {
+ buf_info = buf_ring->rp;
+ /* If it's the last TRE, get the length from the event */
+ if (local_rp == ev_tre)
+ xfer_len = MHI_TRE_GET_EV_LEN(event);
+ else
+ xfer_len = buf_info->len;
+
+ result.buf_addr = buf_info->cb_buf;
+ result.bytes_xferd = xfer_len;
+ mhi_del_ring_element(mhi_cntrl, buf_ring);
+ mhi_del_ring_element(mhi_cntrl, tre_ring);
+ local_rp = tre_ring->rp;
+
+ /* notify client */
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+
+ if (mhi_chan->dir == DMA_TO_DEVICE)
+ atomic_dec(&mhi_cntrl->pending_pkts);
+ }
+ break;
+ } /* CC_EOT */
+ case MHI_EV_CC_OOB:
+ case MHI_EV_CC_DB_MODE:
+ {
+ unsigned long flags;
+
+ mhi_chan->db_cfg.db_mode = 1;
+ read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+ if (tre_ring->wp != tre_ring->rp &&
+ MHI_DB_ACCESS_VALID(mhi_cntrl)) {
+ mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+ }
+ read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+ break;
+ }
+ case MHI_EV_CC_BAD_TRE:
+ default:
+ dev_err(mhi_cntrl->dev, "Unknown event 0x%x\n", ev_code);
+ break;
+ } /* switch(MHI_EV_READ_CODE(EV_TRB_CODE,event)) */
+
+end_process_tx_event:
+ if (ev_code >= MHI_EV_CC_OOB)
+ write_unlock_irqrestore(&mhi_chan->lock, flags);
+ else
+ read_unlock_bh(&mhi_chan->lock);
+
+ return 0;
+}
+
+static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
+ struct mhi_tre *event,
+ struct mhi_chan *mhi_chan)
+{
+ struct mhi_ring *buf_ring, *tre_ring;
+ struct mhi_buf_info *buf_info;
+ struct mhi_result result;
+ int ev_code;
+ u32 cookie; /* offset to local descriptor */
+ u16 xfer_len;
+
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+
+ ev_code = MHI_TRE_GET_EV_CODE(event);
+ cookie = MHI_TRE_GET_EV_COOKIE(event);
+ xfer_len = MHI_TRE_GET_EV_LEN(event);
+
+ /* Received out of bound cookie */
+ WARN_ON(cookie >= buf_ring->len);
+
+ buf_info = buf_ring->base + cookie;
+
+ result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
+ -EOVERFLOW : 0;
+ result.bytes_xferd = xfer_len;
+ result.buf_addr = buf_info->cb_buf;
+ result.dir = mhi_chan->dir;
+
+ read_lock_bh(&mhi_chan->lock);
+
+ if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
+ goto end_process_rsc_event;
+
+ WARN_ON(!buf_info->used);
+
+ /* notify the client */
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+
+ /*
+ * Note: We're arbitrarily incrementing RP even though the completion
+ * packet we processed might not be the same one. We can do this
+ * because the device is guaranteed to cache descriptors in the
+ * order it receives them, so even if the completion event is for a
+ * different descriptor, we can safely re-use all descriptors in
+ * between.
+ * Example:
+ * The transfer ring has descriptors: A, B, C, D
+ * The last descriptor queued by the host is D (WP) and the first
+ * is A (RP).
+ * The completion event we just serviced is for descriptor C.
+ * Then we can safely queue descriptors to replace A, B, and C,
+ * even though the host did not receive completions for A and B.
+ */
+ mhi_del_ring_element(mhi_cntrl, tre_ring);
+ buf_info->used = false;
+
+end_process_rsc_event:
+ read_unlock_bh(&mhi_chan->lock);
+
+ return 0;
+}
+
+static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
+ struct mhi_tre *tre)
+{
+ dma_addr_t ptr = MHI_TRE_GET_EV_PTR(tre);
+ struct mhi_cmd *cmd_ring = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+ struct mhi_ring *mhi_ring = &cmd_ring->ring;
+ struct mhi_tre *cmd_pkt;
+ struct mhi_chan *mhi_chan;
+ u32 chan;
+
+ cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
+
+ chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
+ mhi_chan = &mhi_cntrl->mhi_chan[chan];
+ write_lock_bh(&mhi_chan->lock);
+ mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
+ complete(&mhi_chan->completion);
+ write_unlock_bh(&mhi_chan->lock);
+
+ mhi_del_ring_element(mhi_cntrl, mhi_ring);
+}
+
+int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+ struct mhi_event *mhi_event,
+ u32 event_quota)
+{
+ struct mhi_tre *dev_rp, *local_rp;
+ struct mhi_ring *ev_ring = &mhi_event->ring;
+ struct mhi_event_ctxt *er_ctxt =
+ &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+ int count = 0;
+ u32 chan;
+ struct mhi_chan *mhi_chan;
+
+ /*
+ * This is a quick check to avoid unnecessary event processing
+ * in case MHI is already in error state, but it's still possible
+ * to transition to error state while processing events
+ */
+ if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+ return -EIO;
+
+ dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+ local_rp = ev_ring->rp;
+
+ while (dev_rp != local_rp) {
+ enum mhi_pkt_type type = MHI_TRE_GET_EV_TYPE(local_rp);
+
+ switch (type) {
+ case MHI_PKT_TYPE_BW_REQ_EVENT:
+ {
+ struct mhi_link_info *link_info;
+
+ link_info = &mhi_cntrl->mhi_link_info;
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ link_info->target_link_speed =
+ MHI_TRE_GET_EV_LINKSPEED(local_rp);
+ link_info->target_link_width =
+ MHI_TRE_GET_EV_LINKWIDTH(local_rp);
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ dev_dbg(mhi_cntrl->dev, "Received BW_REQ event\n");
+ mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data,
+ MHI_CB_BW_REQ);
+ break;
+ }
+ case MHI_PKT_TYPE_STATE_CHANGE_EVENT:
+ {
+ enum mhi_state new_state;
+
+ new_state = MHI_TRE_GET_EV_STATE(local_rp);
+
+ dev_dbg(mhi_cntrl->dev,
+ "State change event to state: %s\n",
+ TO_MHI_STATE_STR(new_state));
+
+ switch (new_state) {
+ case MHI_STATE_M0:
+ mhi_pm_m0_transition(mhi_cntrl);
+ break;
+ case MHI_STATE_M1:
+ mhi_pm_m1_transition(mhi_cntrl);
+ break;
+ case MHI_STATE_M3:
+ mhi_pm_m3_transition(mhi_cntrl);
+ break;
+ case MHI_STATE_SYS_ERR:
+ {
+ enum mhi_pm_state new_state;
+
+ dev_dbg(mhi_cntrl->dev,
+ "System error detected\n");
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ new_state = mhi_tryset_pm_state(mhi_cntrl,
+ MHI_PM_SYS_ERR_DETECT);
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ if (new_state == MHI_PM_SYS_ERR_DETECT)
+ schedule_work(&mhi_cntrl->syserr_worker);
+ break;
+ }
+ default:
+ dev_err(mhi_cntrl->dev, "Invalid state: %s\n",
+ TO_MHI_STATE_STR(new_state));
+ }
+
+ break;
+ }
+ case MHI_PKT_TYPE_CMD_COMPLETION_EVENT:
+ mhi_process_cmd_completion(mhi_cntrl, local_rp);
+ break;
+ case MHI_PKT_TYPE_EE_EVENT:
+ {
+ enum dev_st_transition st = DEV_ST_TRANSITION_MAX;
+ enum mhi_ee_type event = MHI_TRE_GET_EV_EXECENV(local_rp);
+
+ dev_dbg(mhi_cntrl->dev, "Received EE event: %s\n",
+ TO_MHI_EXEC_STR(event));
+ switch (event) {
+ case MHI_EE_SBL:
+ st = DEV_ST_TRANSITION_SBL;
+ break;
+ case MHI_EE_WFW:
+ case MHI_EE_AMSS:
+ st = DEV_ST_TRANSITION_MISSION_MODE;
+ break;
+ case MHI_EE_RDDM:
+ mhi_cntrl->status_cb(mhi_cntrl,
+ mhi_cntrl->priv_data,
+ MHI_CB_EE_RDDM);
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ mhi_cntrl->ee = event;
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ wake_up_all(&mhi_cntrl->state_event);
+ break;
+ default:
+ dev_err(mhi_cntrl->dev,
+ "Unhandled EE event: 0x%x\n", event);
+ }
+ if (st != DEV_ST_TRANSITION_MAX)
+ mhi_queue_state_transition(mhi_cntrl, st);
+
+ break;
+ }
+ case MHI_PKT_TYPE_TX_EVENT:
+ chan = MHI_TRE_GET_EV_CHID(local_rp);
+ mhi_chan = &mhi_cntrl->mhi_chan[chan];
+ parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
+ event_quota--;
+ break;
+ default:
+ dev_err(mhi_cntrl->dev, "Unhandled event type: %d\n",
+ type);
+ break;
+ }
+
+ mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+ local_rp = ev_ring->rp;
+ dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+ count++;
+ }
+
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
+ mhi_ring_er_db(mhi_event);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return count;
+}
+
+int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+ struct mhi_event *mhi_event,
+ u32 event_quota)
+{
+ struct mhi_tre *dev_rp, *local_rp;
+ struct mhi_ring *ev_ring = &mhi_event->ring;
+ struct mhi_event_ctxt *er_ctxt =
+ &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+ int count = 0;
+ u32 chan;
+ struct mhi_chan *mhi_chan;
+
+ if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+ return -EIO;
+
+ dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+ local_rp = ev_ring->rp;
+
+ while (dev_rp != local_rp && event_quota > 0) {
+ enum mhi_pkt_type type = MHI_TRE_GET_EV_TYPE(local_rp);
+
+ chan = MHI_TRE_GET_EV_CHID(local_rp);
+ mhi_chan = &mhi_cntrl->mhi_chan[chan];
+
+ if (likely(type == MHI_PKT_TYPE_TX_EVENT)) {
+ parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
+ event_quota--;
+ } else if (type == MHI_PKT_TYPE_RSC_TX_EVENT) {
+ parse_rsc_event(mhi_cntrl, local_rp, mhi_chan);
+ event_quota--;
+ }
+
+ mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+ local_rp = ev_ring->rp;
+ dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+ count++;
+ }
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
+ mhi_ring_er_db(mhi_event);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return count;
+}
+
+void mhi_ev_task(unsigned long data)
+{
+ struct mhi_event *mhi_event = (struct mhi_event *)data;
+ struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+
+ /* process all pending events */
+ spin_lock_bh(&mhi_event->lock);
+ mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX);
+ spin_unlock_bh(&mhi_event->lock);
+}
+
+void mhi_ctrl_ev_task(unsigned long data)
+{
+ struct mhi_event *mhi_event = (struct mhi_event *)data;
+ struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+ enum mhi_state state;
+ enum mhi_pm_state pm_state = 0;
+ int ret;
+
+ /*
+ * We can check PM state w/o a lock here because there is no way
+ * the PM state can change from reg access valid to no access
+ * while this thread is being executed.
+ */
+ if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+ /*
+ * We may have a pending event but not allowed to
+ * process it since we are probably in a suspended state,
+ * so trigger a resume.
+ */
+ mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+ mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+
+ return;
+ }
+
+ /* Process ctrl events */
+ ret = mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX);
+
+ /*
+ * We received an IRQ but no events to process, maybe device went to
+ * SYS_ERR state? Check the state to confirm.
+ */
+ if (!ret) {
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ state = mhi_get_mhi_state(mhi_cntrl);
+ if (state == MHI_STATE_SYS_ERR) {
+ dev_dbg(mhi_cntrl->dev, "System error detected\n");
+ pm_state = mhi_tryset_pm_state(mhi_cntrl,
+ MHI_PM_SYS_ERR_DETECT);
+ }
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ if (pm_state == MHI_PM_SYS_ERR_DETECT)
+ schedule_work(&mhi_cntrl->syserr_worker);
+ }
+}
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 1b018e0d04f4..3e8f797c4c51 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -31,6 +31,7 @@ struct mhi_buf_info;
* @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env
* @MHI_CB_SYS_ERROR: MHI device entered error state (may recover)
* @MHI_CB_FATAL_ERROR: MHI device entered fatal error state
+ * @MHI_CB_BW_REQ: Received a bandwidth switch request from device
*/
enum mhi_callback {
MHI_CB_IDLE,
@@ -41,6 +42,7 @@ enum mhi_callback {
MHI_CB_EE_MISSION_MODE,
MHI_CB_SYS_ERROR,
MHI_CB_FATAL_ERROR,
+ MHI_CB_BW_REQ,
};
/**
@@ -94,6 +96,16 @@ struct image_info {
u32 entries;
};
+/**
+ * struct mhi_link_info - BW requirement
+ * @target_link_speed: Link speed as defined by TLS bits in LinkControl reg
+ * @target_link_width: Link width as defined by NLW bits in LinkStatus reg
+ */
+struct mhi_link_info {
+ unsigned int target_link_speed;
+ unsigned int target_link_width;
+};
+
/**
* enum mhi_ee_type - Execution environment types
* @MHI_EE_PBL: Primary Bootloader
@@ -321,6 +333,7 @@ struct mhi_controller_config {
* @pending_pkts: Pending packets for the controller
* @transition_list: List of MHI state transitions
* @wlock: Lock for protecting device wakeup
+ * @mhi_link_info: Device bandwidth info
* @M0: M0 state counter for debugging
* @M2: M2 state counter for debugging
* @M3: M3 state counter for debugging
@@ -392,6 +405,8 @@ struct mhi_controller {
struct list_head transition_list;
spinlock_t transition_lock;
spinlock_t wlock;
+ struct mhi_link_info mhi_link_info;
+
u32 M0, M2, M3, M3_FAST;
struct work_struct st_worker;
struct work_struct fw_worker;
--
2.17.1
The MHI protocol supports downloading an RDDM (RAM Dump) image from the
device through BHIe. This is useful for debugging, as the RDDM image
can capture the firmware state.
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/989
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[mani: splitted the data transfer patch and cleaned up for upstream]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/boot.c | 153 ++++++++++++++++++++++++++++++++
drivers/bus/mhi/core/init.c | 38 ++++++++
drivers/bus/mhi/core/internal.h | 2 +
drivers/bus/mhi/core/pm.c | 31 +++++++
include/linux/mhi.h | 24 +++++
5 files changed, 248 insertions(+)
diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/core/boot.c
index 36956fb6eff2..facfec26eca1 100644
--- a/drivers/bus/mhi/core/boot.c
+++ b/drivers/bus/mhi/core/boot.c
@@ -20,6 +20,159 @@
#include <linux/wait.h>
#include "internal.h"
+/* Setup RDDM vector table for RDDM transfer and program RXVEC */
+void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
+ struct image_info *img_info)
+{
+ struct mhi_buf *mhi_buf = img_info->mhi_buf;
+ struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
+ void __iomem *base = mhi_cntrl->bhie;
+ u32 sequence_id;
+ unsigned int i;
+
+ for (i = 0; i < img_info->entries - 1; i++, mhi_buf++, bhi_vec++) {
+ bhi_vec->dma_addr = mhi_buf->dma_addr;
+ bhi_vec->size = mhi_buf->len;
+ }
+
+ dev_dbg(mhi_cntrl->dev, "BHIe programming for RDDM\n");
+
+ mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_HIGH_OFFS,
+ upper_32_bits(mhi_buf->dma_addr));
+
+ mhi_write_reg(mhi_cntrl, base, BHIE_RXVECADDR_LOW_OFFS,
+ lower_32_bits(mhi_buf->dma_addr));
+
+ mhi_write_reg(mhi_cntrl, base, BHIE_RXVECSIZE_OFFS, mhi_buf->len);
+ sequence_id = prandom_u32() & BHIE_RXVECSTATUS_SEQNUM_BMSK;
+
+ if (unlikely(!sequence_id))
+ sequence_id = 1;
+
+ mhi_write_reg_field(mhi_cntrl, base, BHIE_RXVECDB_OFFS,
+ BHIE_RXVECDB_SEQNUM_BMSK, BHIE_RXVECDB_SEQNUM_SHFT,
+ sequence_id);
+
+ dev_dbg(mhi_cntrl->dev, "Address: %p and len: 0x%lx sequence: %u\n",
+ &mhi_buf->dma_addr, mhi_buf->len, sequence_id);
+}
+
+/* Collect RDDM buffer during kernel panic */
+static int __mhi_download_rddm_in_panic(struct mhi_controller *mhi_cntrl)
+{
+ int ret;
+ u32 rx_status;
+ enum mhi_ee_type ee;
+ const u32 delayus = 2000;
+ u32 retry = (mhi_cntrl->timeout_ms * 1000) / delayus;
+ const u32 rddm_timeout_us = 200000;
+ int rddm_retry = rddm_timeout_us / delayus;
+ void __iomem *base = mhi_cntrl->bhie;
+
+ dev_dbg(mhi_cntrl->dev,
+ "Entered with pm_state:%s dev_state:%s ee:%s\n",
+ to_mhi_pm_state_str(mhi_cntrl->pm_state),
+ TO_MHI_STATE_STR(mhi_cntrl->dev_state),
+ TO_MHI_EXEC_STR(mhi_cntrl->ee));
+
+ /*
+ * This should only be executing during a kernel panic, we expect all
+ * other cores to shutdown while we're collecting RDDM buffer. After
+ * returning from this function, we expect the device to reset.
+ *
+ * Normally, we read/write pm_state only after grabbing the
+ * pm_lock. Since we're in a panic, we skip it here. Also, there is
+ * no guarantee that this state change will take effect since
+ * we're setting it without grabbing the pm_lock.
+ */
+ mhi_cntrl->pm_state = MHI_PM_LD_ERR_FATAL_DETECT;
+ /* Update should take effect immediately */
+ smp_wmb();
+
+ /*
+ * Make sure device is not already in RDDM. In case the device asserts
+ * and a kernel panic follows, device will already be in RDDM.
+ * Do not trigger SYS ERR again and proceed with waiting for
+ * image download completion.
+ */
+ ee = mhi_get_exec_env(mhi_cntrl);
+ if (ee != MHI_EE_RDDM) {
+ dev_dbg(mhi_cntrl->dev,
+ "Trigger device into RDDM mode using SYS ERR\n");
+ mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
+
+ dev_dbg(mhi_cntrl->dev, "Waiting for device to enter RDDM\n");
+ while (rddm_retry--) {
+ ee = mhi_get_exec_env(mhi_cntrl);
+ if (ee == MHI_EE_RDDM)
+ break;
+
+ udelay(delayus);
+ }
+
+ if (rddm_retry <= 0) {
+ /* Hardware reset so force device to enter RDDM */
+ dev_dbg(mhi_cntrl->dev,
+ "Did not enter RDDM, do a host req reset\n");
+ mhi_write_reg(mhi_cntrl, mhi_cntrl->regs,
+ MHI_SOC_RESET_REQ_OFFSET,
+ MHI_SOC_RESET_REQ);
+ udelay(delayus);
+ }
+
+ ee = mhi_get_exec_env(mhi_cntrl);
+ }
+
+ dev_dbg(mhi_cntrl->dev,
+ "Waiting for image download completion, current EE: %s\n",
+ TO_MHI_EXEC_STR(ee));
+
+ while (retry--) {
+ ret = mhi_read_reg_field(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS,
+ BHIE_RXVECSTATUS_STATUS_BMSK,
+ BHIE_RXVECSTATUS_STATUS_SHFT,
+ &rx_status);
+ if (ret)
+ return -EIO;
+
+ if (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL)
+ return 0;
+
+ udelay(delayus);
+ }
+
+ ee = mhi_get_exec_env(mhi_cntrl);
+ ret = mhi_read_reg(mhi_cntrl, base, BHIE_RXVECSTATUS_OFFS, &rx_status);
+
+ dev_err(mhi_cntrl->dev, "Did not complete RDDM transfer\n");
+ dev_err(mhi_cntrl->dev, "Current EE: %s\n", TO_MHI_EXEC_STR(ee));
+ dev_err(mhi_cntrl->dev, "RXVEC_STATUS: 0x%x\n", rx_status);
+
+ return -EIO;
+}
+
+/* Download RDDM image from device */
+int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic)
+{
+ void __iomem *base = mhi_cntrl->bhie;
+ u32 rx_status;
+
+ if (in_panic)
+ return __mhi_download_rddm_in_panic(mhi_cntrl);
+
+ /* Wait for the image download to complete */
+ wait_event_timeout(mhi_cntrl->state_event,
+ mhi_read_reg_field(mhi_cntrl, base,
+ BHIE_RXVECSTATUS_OFFS,
+ BHIE_RXVECSTATUS_STATUS_BMSK,
+ BHIE_RXVECSTATUS_STATUS_SHFT,
+ &rx_status) || rx_status,
+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+ return (rx_status == BHIE_RXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
+}
+EXPORT_SYMBOL_GPL(mhi_download_rddm_img);
+
/* Download AMSS image to device */
static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
const struct mhi_buf *mhi_buf)
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index 2f06bf958f58..f54429c9b7fc 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -830,6 +830,7 @@ EXPORT_SYMBOL_GPL(mhi_unregister_controller);
int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
{
int ret;
+ u32 bhie_off;
mutex_lock(&mhi_cntrl->pm_mutex);
@@ -837,12 +838,44 @@ int mhi_prepare_for_power_up(struct mhi_controller *mhi_cntrl)
if (ret)
goto error_dev_ctxt;
+ /*
+ * Allocate the RDDM table if specified. This table is used for debugging.
+ */
+ if (mhi_cntrl->rddm_size) {
+ mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->rddm_image,
+ mhi_cntrl->rddm_size);
+
+ /*
+ * This controller supports RDDM, so we need to manually clear
+ * BHIE RX registers since POR values are undefined.
+ */
+ ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->regs, BHIEOFF,
+ &bhie_off);
+ if (ret) {
+ dev_err(mhi_cntrl->dev, "Error getting BHIE offset\n");
+ goto bhie_error;
+ }
+
+ memset_io(mhi_cntrl->regs + bhie_off + BHIE_RXVECADDR_LOW_OFFS,
+ 0, BHIE_RXVECSTATUS_OFFS - BHIE_RXVECADDR_LOW_OFFS +
+ 4);
+
+ if (mhi_cntrl->rddm_image)
+ mhi_rddm_prepare(mhi_cntrl, mhi_cntrl->rddm_image);
+ }
+
mhi_cntrl->pre_init = true;
mutex_unlock(&mhi_cntrl->pm_mutex);
return 0;
+bhie_error:
+ if (mhi_cntrl->rddm_image) {
+ mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->rddm_image);
+ mhi_cntrl->rddm_image = NULL;
+ }
+
error_dev_ctxt:
mutex_unlock(&mhi_cntrl->pm_mutex);
@@ -857,6 +890,11 @@ void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl)
mhi_cntrl->fbc_image = NULL;
}
+ if (mhi_cntrl->rddm_image) {
+ mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->rddm_image);
+ mhi_cntrl->rddm_image = NULL;
+ }
+
mhi_deinit_dev_ctxt(mhi_cntrl);
mhi_cntrl->pre_init = false;
}
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index eab9c051ca5e..889e91bcb2f8 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -626,6 +626,8 @@ int mhi_init_dev_ctxt(struct mhi_controller *mhi_cntrl);
void mhi_deinit_dev_ctxt(struct mhi_controller *mhi_cntrl);
int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
+void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
+ struct image_info *img_info);
/* Memory allocation methods */
static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl,
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
index b67ae2455fc5..0bdc667830f0 100644
--- a/drivers/bus/mhi/core/pm.c
+++ b/drivers/bus/mhi/core/pm.c
@@ -453,6 +453,16 @@ static void mhi_pm_disable_transition(struct mhi_controller *mhi_cntrl,
/* We must notify MHI control driver so it can clean up first */
if (transition_state == MHI_PM_SYS_ERR_PROCESS) {
+ /*
+ * If the controller supports RDDM, we do not process the
+ * SYS error state; instead, we jump directly to the
+ * RDDM state.
+ */
+ if (mhi_cntrl->rddm_image) {
+ dev_dbg(mhi_cntrl->dev,
+ "Controller supports RDDM, so skip SYS_ERR\n");
+ return;
+ }
mhi_cntrl->status_cb(mhi_cntrl, mhi_cntrl->priv_data,
MHI_CB_SYS_ERROR);
}
@@ -901,3 +911,24 @@ int mhi_sync_power_up(struct mhi_controller *mhi_cntrl)
return (MHI_IN_MISSION_MODE(mhi_cntrl->ee)) ? 0 : -EIO;
}
EXPORT_SYMBOL(mhi_sync_power_up);
+
+int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
+{
+ int ret;
+
+ /* Check if device is already in RDDM */
+ if (mhi_cntrl->ee == MHI_EE_RDDM)
+ return 0;
+
+ dev_dbg(mhi_cntrl->dev, "Triggering SYS_ERR to force RDDM state\n");
+ mhi_set_mhi_state(mhi_cntrl, MHI_STATE_SYS_ERR);
+
+ /* Wait for RDDM event */
+ ret = wait_event_timeout(mhi_cntrl->state_event,
+ mhi_cntrl->ee == MHI_EE_RDDM,
+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
+ ret = ret ? 0 : -EIO;
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_force_rddm_mode);
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 04c500323214..1b018e0d04f4 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -290,9 +290,11 @@ struct mhi_controller_config {
* @fw_image: Firmware image name for normal booting
* @edl_image: Firmware image name for emergency download mode
* @fbc_download: MHI host needs to do complete image transfer
+ * @rddm_size: RAM dump size the host should allocate for debugging purposes
* @sbl_size: SBL image size
* @seg_len: BHIe vector size
* @fbc_image: Points to firmware image buffer
+ * @rddm_image: Points to RAM dump buffer
* @max_chan: Maximum number of channels the controller supports
* @mhi_chan: Points to the channel configuration table
* @lpm_chans: List of channels that require LPM notifications
@@ -356,9 +358,11 @@ struct mhi_controller {
const char *fw_image;
const char *edl_image;
bool fbc_download;
+ size_t rddm_size;
size_t sbl_size;
size_t seg_len;
struct image_info *fbc_image;
+ struct image_info *rddm_image;
u32 max_chan;
struct mhi_chan *mhi_chan;
struct list_head lpm_chans;
@@ -583,4 +587,24 @@ void mhi_power_down(struct mhi_controller *mhi_cntrl, bool graceful);
*/
void mhi_unprepare_after_power_down(struct mhi_controller *mhi_cntrl);
+/**
+ * mhi_download_rddm_img - Download ramdump image from the device for
+ * debugging purposes.
+ * @mhi_cntrl: MHI controller
+ * @in_panic: Download rddm image during kernel panic
+ */
+int mhi_download_rddm_img(struct mhi_controller *mhi_cntrl, bool in_panic);
+
+/**
+ * mhi_force_rddm_mode - Force device into RDDM mode
+ * @mhi_cntrl: MHI controller
+ */
+int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl);
+
+/**
+ * mhi_get_mhi_state - Get MHI state of the device
+ * @mhi_cntrl: MHI controller
+ */
+enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl);
+
#endif /* _MHI_H_ */
--
2.17.1
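As a side note for reviewers: the error-handling convention in mhi_force_rddm_mode() above (short-circuit if the device is already in RDDM, otherwise trigger SYS_ERR and map the wait_event_timeout() result to 0 or -EIO) can be modeled in plain userspace C. This is an illustrative sketch only, not kernel code; the enum and function names are made up for the example, and wait_result stands in for the value returned by wait_event_timeout() (remaining timeout on success, 0 on timeout).

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-in for the device execution environment */
enum ee_state { EE_OTHER, EE_RDDM };

static int force_rddm_model(enum ee_state ee, long wait_result)
{
	/* Device already in RDDM: nothing to do */
	if (ee == EE_RDDM)
		return 0;

	/* ...trigger SYS_ERR and wait for the RDDM event here... */

	/* wait_event_timeout() returns 0 on timeout */
	return wait_result ? 0 : -EIO;
}
```

The same "remaining-time-or-zero to errno" mapping appears in several places in this series, so keeping it uniform makes the power-management paths easier to audit.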
IPC Router protocol is also used by external modems for exchanging QMI
messages. Hence, it doesn't depend solely on Qualcomm platforms. As a side
effect of removing the ARCH_QCOM dependency, the driver will lose the
COMPILE_TEST build coverage.
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
net/qrtr/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/net/qrtr/Kconfig b/net/qrtr/Kconfig
index 8eb876471564..f362ca316015 100644
--- a/net/qrtr/Kconfig
+++ b/net/qrtr/Kconfig
@@ -4,7 +4,6 @@
config QRTR
tristate "Qualcomm IPC Router support"
- depends on ARCH_QCOM || COMPILE_TEST
---help---
Say Y if you intend to use Qualcomm IPC router protocol. The
protocol is used to communicate with services provided by other
--
2.17.1
MHI (Modem Host Interface) is a communication protocol used by the
host processors to control and communicate with modems over a high
speed peripheral bus or shared memory. The MHI protocol has been
designed and developed by Qualcomm Innovation Center, Inc., for use
in their modems. This commit adds the documentation for the bus and
the implementation in Linux kernel.
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/987
Cc: Jonathan Corbet <[email protected]>
Cc: [email protected]
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[mani: converted to .rst and split the patch]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
Documentation/index.rst | 1 +
Documentation/mhi/index.rst | 18 +++
Documentation/mhi/mhi.rst | 218 +++++++++++++++++++++++++++++++++
Documentation/mhi/topology.rst | 60 +++++++++
4 files changed, 297 insertions(+)
create mode 100644 Documentation/mhi/index.rst
create mode 100644 Documentation/mhi/mhi.rst
create mode 100644 Documentation/mhi/topology.rst
diff --git a/Documentation/index.rst b/Documentation/index.rst
index e99d0bd2589d..edc9b211bbff 100644
--- a/Documentation/index.rst
+++ b/Documentation/index.rst
@@ -133,6 +133,7 @@ needed).
misc-devices/index
mic/index
scheduler/index
+ mhi/index
Architecture-agnostic documentation
-----------------------------------
diff --git a/Documentation/mhi/index.rst b/Documentation/mhi/index.rst
new file mode 100644
index 000000000000..1d8dec302780
--- /dev/null
+++ b/Documentation/mhi/index.rst
@@ -0,0 +1,18 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===
+MHI
+===
+
+.. toctree::
+ :maxdepth: 1
+
+ mhi
+ topology
+
+.. only:: subproject and html
+
+ Indices
+ =======
+
+ * :ref:`genindex`
diff --git a/Documentation/mhi/mhi.rst b/Documentation/mhi/mhi.rst
new file mode 100644
index 000000000000..718dbbdc7a04
--- /dev/null
+++ b/Documentation/mhi/mhi.rst
@@ -0,0 +1,218 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+==========================
+MHI (Modem Host Interface)
+==========================
+
+This document provides information about the MHI protocol.
+
+Overview
+========
+
+MHI is a protocol developed by Qualcomm Innovation Center, Inc. It is used
+by the host processors to control and communicate with modem devices over
+high speed peripheral buses or shared memory. Even though MHI can be easily
+adapted to any peripheral bus, it is primarily used with PCIe based devices.
+MHI provides logical channels over the physical buses and allows transporting
+the modem protocols, such as IP data packets, modem control messages, and
+diagnostics, over at least one of those logical channels. The MHI protocol
+also provides a data acknowledgment feature and manages the power state of
+the modems via one or more logical channels.
+
+MHI Internals
+=============
+
+MMIO
+----
+
+MMIO (Memory mapped IO) consists of a set of registers in the device hardware,
+which are mapped to the host memory space by the peripheral buses like PCIe.
+Following are the major components of MMIO register space:
+
+MHI control registers: Access to the MHI configuration registers
+
+MHI BHI registers: BHI (Boot Host Interface) registers are used by the host
+for downloading the firmware to the device before MHI initialization.
+
+Channel Doorbell array: Channel Doorbell (DB) registers are used by the host
+to notify the device when there is new work to do.
+
+Event Doorbell array: Associated with event context array, the Event Doorbell
+(DB) registers are used by the host to notify the device when new events are
+available.
+
+Debug registers: A set of registers and counters used by the device to expose
+debugging information like performance, functional, and stability to the host.
+
+Data structures
+---------------
+
+All data structures used by MHI are in the host system memory. The device
+accesses those data structures using the physical interface. MHI data
+structures and data buffers in the host system memory regions are mapped
+for the device.
+
+Channel context array: All channel configurations are organized in channel
+context data array.
+
+Transfer rings: Used by the host to schedule work items for a channel. The
+transfer rings are organized as a circular queue of Transfer Descriptors (TD).
+
+Event context array: All event configurations are organized in the event context
+data array.
+
+Event rings: Used by the device to send completion and state transition
+messages to the host.
+
+Command context array: All command configurations are organized in command
+context data array.
+
+Command rings: Used by the host to send MHI commands to the device. The command
+rings are organized as a circular queue of Command Descriptors (CD).
+
+Channels
+--------
+
+MHI channels are logical, unidirectional data pipes between a host and a device.
+The concept of channels in MHI is similar to endpoints in USB. MHI supports up
+to 256 channels. However, specific device implementations may support fewer
+than the maximum number of channels allowed.
+
+Two unidirectional channels with their associated transfer rings form a
+bidirectional data pipe, which can be used by the upper-layer protocols to
+transport application data packets (such as IP packets, modem control messages,
+diagnostics messages, and so on). Each channel is associated with a single
+transfer ring.
+
+Transfer rings
+--------------
+
+Transfers between the host and device are organized by channels and defined by
+Transfer Descriptors (TD). TDs are managed through transfer rings, which are
+defined for each channel between the device and host and reside in the host
+memory. TDs consist of one or more ring elements (or transfer blocks)::
+
+ [Read Pointer (RP)] ----------->[Ring Element] } TD
+ [Write Pointer (WP)]- [Ring Element]
+ - [Ring Element]
+ --------->[Ring Element]
+ [Ring Element]
+
+Below is the basic usage of transfer rings:
+
+* Host allocates memory for transfer ring.
+* Host sets the base pointer, read pointer, and write pointer in corresponding
+ channel context.
+* Ring is considered empty when RP == WP.
+* Ring is considered full when WP + 1 == RP.
+* RP indicates the next element to be serviced by the device.
+* When the host has a new buffer to send, it updates the ring element with
+ buffer information, increments the WP to the next element and rings the
+ associated channel DB.
+
+Event rings
+-----------
+
+Events from the device to host are organized in event rings and defined by Event
+Descriptors (ED). Event rings are used by the device to report events such as
+data transfer completion status, command completion status, and state changes
+to the host. An event ring is an array of EDs that resides in the host
+memory. EDs consist of one or more ring elements (or transfer blocks)::
+
+ [Read Pointer (RP)] ----------->[Ring Element] } ED
+ [Write Pointer (WP)]- [Ring Element]
+ - [Ring Element]
+ --------->[Ring Element]
+ [Ring Element]
+
+Below is the basic usage of event rings:
+
+* Host allocates memory for event ring.
+* Host sets the base pointer, read pointer, and write pointer in the
+  corresponding event context.
+* Both the host and the device have a local copy of RP and WP.
+* Ring is considered empty (no events to service) when WP + 1 == RP.
+* Ring is considered full of events when RP == WP.
+* When there is a new event the device needs to send, the device updates ED
+ pointed by RP, increments the RP to the next element and triggers the
+ interrupt.
+
+Ring Element
+------------
+
+A Ring Element is a data structure used to transfer a single block
+of data between the host and the device. Transfer ring element types contain a
+single buffer pointer, the size of the buffer, and additional control
+information. Other ring element types may only contain control and status
+information. For single buffer operations, a ring descriptor is composed of a
+single element. For large multi-buffer operations (such as scatter and gather),
+elements can be chained to form a longer descriptor.
+
+MHI Operations
+==============
+
+MHI States
+----------
+
+MHI_STATE_RESET
+~~~~~~~~~~~~~~~
+MHI is in reset state after power-up or hardware reset. The host is not allowed
+to access device MMIO register space.
+
+MHI_STATE_READY
+~~~~~~~~~~~~~~~
+MHI is ready for initialization. The host can start MHI initialization by
+programming MMIO registers.
+
+MHI_STATE_M0
+~~~~~~~~~~~~
+MHI is running and operational in the device. The host can start channels by
+issuing channel start command.
+
+MHI_STATE_M1
+~~~~~~~~~~~~
+MHI operation is suspended by the device. This state is entered when the
+device detects inactivity on the physical interface for a preset period.
+
+MHI_STATE_M2
+~~~~~~~~~~~~
+MHI is in low power state. MHI operation is suspended and the device may
+enter lower power mode.
+
+MHI_STATE_M3
+~~~~~~~~~~~~
+MHI operation stopped by the host. This state is entered when the host suspends
+MHI operation.
+
+MHI Initialization
+------------------
+
+After the system boots, the device is enumerated over the physical interface.
+In the case of PCIe, the device is enumerated and assigned BAR-0 for
+the device's MMIO register space. To initialize the MHI in a device,
+the host performs the following operations:
+
+* Allocates the MHI context for event, channel and command arrays.
+* Initializes the context array, and prepares interrupts.
+* Waits until the device enters READY state.
+* Programs MHI MMIO registers and sets device into MHI_M0 state.
+* Waits for the device to enter M0 state.
+
+MHI Data Transfer
+-----------------
+
+MHI data transfer is initiated by the host to transfer data to the device.
+The following is the sequence of operations performed by the host and the
+device during a host-to-device transfer:
+
+* Host prepares TD with buffer information.
+* Host increments the WP of the corresponding channel transfer ring.
+* Host rings the channel DB register.
+* Device wakes up to process the TD.
+* Device generates a completion event for the processed TD by updating ED.
+* Device increments the RP of the corresponding event ring.
+* Device triggers IRQ to wake up the host.
+* Host wakes up and checks the event ring for completion event.
+* Host updates the WP of the corresponding event ring to indicate that the
+ data transfer has been completed successfully.
+
diff --git a/Documentation/mhi/topology.rst b/Documentation/mhi/topology.rst
new file mode 100644
index 000000000000..90d80a7f116d
--- /dev/null
+++ b/Documentation/mhi/topology.rst
@@ -0,0 +1,60 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+============
+MHI Topology
+============
+
+This document provides information about the MHI topology modeling and
+representation in the kernel.
+
+MHI Controller
+--------------
+
+The MHI controller driver manages the interaction with the MHI client devices
+such as the external modems and WiFi chipsets. It is also the MHI bus master,
+which is in charge of managing the physical link between the host and device.
+It is, however, not involved in the actual data transfer, as the data transfer
+is taken care of by the physical bus such as PCIe. Each controller driver
+exposes channels and events based on the client device type.
+
+Below are the roles of the MHI controller driver:
+
+* Turns on the physical bus and establishes the link to the device
+* Configures IRQs, SMMU, and IOMEM
+* Allocates struct mhi_controller and registers with the MHI bus framework
+ with channel and event configurations using mhi_register_controller.
+* Initiates power on and shutdown sequence
+* Initiates suspend and resume power management operations of the device.
+
+MHI Device
+----------
+
+An MHI device is the logical device which binds to a maximum of two MHI
+channels for bi-directional communication. Once MHI is in the powered on
+state, the MHI core will create MHI devices based on the channel
+configuration exposed by the controller. There can be a single MHI device
+per channel or one device for a pair of channels.
+
+Each supported device is enumerated in::
+
+ /sys/bus/mhi/devices/
+
+MHI Driver
+----------
+
+MHI driver is the client driver which binds to one or more MHI devices. The MHI
+driver sends and receives the upper-layer protocol packets like IP packets,
+modem control messages, and diagnostics messages over MHI. The MHI core will
+bind the MHI devices to the MHI driver.
+
+Each supported driver is enumerated in::
+
+ /sys/bus/mhi/drivers/
+
+Below are the roles of the MHI driver:
+
+* Registers the driver with the MHI bus framework using mhi_driver_register.
+* Prepares the device for transfer by calling mhi_prepare_for_transfer.
+* Initiates data transfer by calling mhi_queue_transfer.
+* Once the data transfer is finished, calls mhi_unprepare_from_transfer to
+ end data transfer.
--
2.17.1
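To make the ring occupancy rules in Documentation/mhi/mhi.rst above easier to review, here is an index-based sketch of the transfer ring rules (empty when RP == WP, full when WP + 1 == RP, modulo ring size). This is an illustrative userspace model, not kernel code: the real implementation works on ring-element addresses rather than plain indices, and RING_ELEMENTS is a made-up ring size for the example.

```c
#include <assert.h>

#define RING_ELEMENTS 8	/* hypothetical ring size for illustration */

/* Transfer ring is empty when the read and write pointers coincide */
static int ring_empty(unsigned int rp, unsigned int wp)
{
	return rp == wp;
}

/* Transfer ring is full when advancing WP would catch up with RP */
static int ring_full(unsigned int rp, unsigned int wp)
{
	return (wp + 1) % RING_ELEMENTS == rp;
}
```

Note that the event ring section deliberately flips these conditions, since there the device is the producer and the host is the consumer.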
Add MAINTAINERS entry for MHI bus.
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
MAINTAINERS | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index cf6ccca6e61c..927cdd907f1f 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10777,6 +10777,15 @@ M: Vladimir Vid <[email protected]>
S: Maintained
F: arch/arm64/boot/dts/marvell/armada-3720-uDPU.dts
+MHI BUS
+M: Manivannan Sadhasivam <[email protected]>
+L: [email protected]
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi.git
+S: Maintained
+F: drivers/bus/mhi/
+F: include/linux/mhi.h
+F: Documentation/mhi/
+
MICROBLAZE ARCHITECTURE
M: Michal Simek <[email protected]>
W: http://www.monstr.eu/fdt/
--
2.17.1
MHI supports downloading the device firmware over the BHI/BHIe (Boot Host
Interface) protocol. Hence, this commit adds the necessary helpers, which
will be called during the device power up stage.
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/989
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[mani: split the data transfer patch and cleaned up for upstream]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/boot.c | 268 ++++++++++++++++++++++++++++++++
drivers/bus/mhi/core/init.c | 1 +
drivers/bus/mhi/core/internal.h | 1 +
3 files changed, 270 insertions(+)
diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/core/boot.c
index 0996f18c4281..36956fb6eff2 100644
--- a/drivers/bus/mhi/core/boot.c
+++ b/drivers/bus/mhi/core/boot.c
@@ -20,6 +20,121 @@
#include <linux/wait.h>
#include "internal.h"
+/* Download AMSS image to device */
+static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
+ const struct mhi_buf *mhi_buf)
+{
+ void __iomem *base = mhi_cntrl->bhie;
+ rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
+ u32 tx_status, sequence_id;
+
+ read_lock_bh(pm_lock);
+ if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+ read_unlock_bh(pm_lock);
+ return -EIO;
+ }
+
+ mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_HIGH_OFFS,
+ upper_32_bits(mhi_buf->dma_addr));
+
+ mhi_write_reg(mhi_cntrl, base, BHIE_TXVECADDR_LOW_OFFS,
+ lower_32_bits(mhi_buf->dma_addr));
+
+ mhi_write_reg(mhi_cntrl, base, BHIE_TXVECSIZE_OFFS, mhi_buf->len);
+
+ sequence_id = prandom_u32() & BHIE_TXVECSTATUS_SEQNUM_BMSK;
+ mhi_write_reg_field(mhi_cntrl, base, BHIE_TXVECDB_OFFS,
+ BHIE_TXVECDB_SEQNUM_BMSK, BHIE_TXVECDB_SEQNUM_SHFT,
+ sequence_id);
+ read_unlock_bh(pm_lock);
+
+ /* Wait for the image download to complete */
+ wait_event_timeout(mhi_cntrl->state_event,
+ MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
+ mhi_read_reg_field(mhi_cntrl, base,
+ BHIE_TXVECSTATUS_OFFS,
+ BHIE_TXVECSTATUS_STATUS_BMSK,
+ BHIE_TXVECSTATUS_STATUS_SHFT,
+ &tx_status) || tx_status,
+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+ if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+ return -EIO;
+
+ return (tx_status == BHIE_TXVECSTATUS_STATUS_XFER_COMPL) ? 0 : -EIO;
+}
+
+/* Download SBL image to device */
+static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
+ dma_addr_t dma_addr,
+ size_t size)
+{
+ u32 tx_status, val, session_id;
+ int i, ret;
+ void __iomem *base = mhi_cntrl->bhi;
+ rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
+ struct {
+ char *name;
+ u32 offset;
+ } error_reg[] = {
+ { "ERROR_CODE", BHI_ERRCODE },
+ { "ERROR_DBG1", BHI_ERRDBG1 },
+ { "ERROR_DBG2", BHI_ERRDBG2 },
+ { "ERROR_DBG3", BHI_ERRDBG3 },
+ { NULL },
+ };
+
+ read_lock_bh(pm_lock);
+ if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+ read_unlock_bh(pm_lock);
+ goto invalid_pm_state;
+ }
+
+ /* Start SBL download via BHI protocol */
+ mhi_write_reg(mhi_cntrl, base, BHI_STATUS, 0);
+ mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH,
+ upper_32_bits(dma_addr));
+ mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW,
+ lower_32_bits(dma_addr));
+ mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, size);
+ session_id = prandom_u32() & BHI_TXDB_SEQNUM_BMSK;
+ mhi_write_reg(mhi_cntrl, base, BHI_IMGTXDB, session_id);
+ read_unlock_bh(pm_lock);
+
+ /* Wait for the image download to complete */
+ ret = wait_event_timeout(mhi_cntrl->state_event,
+ MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state) ||
+ mhi_read_reg_field(mhi_cntrl, base, BHI_STATUS,
+ BHI_STATUS_MASK, BHI_STATUS_SHIFT,
+ &tx_status) || tx_status,
+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
+ if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))
+ goto invalid_pm_state;
+
+ if (tx_status == BHI_STATUS_ERROR) {
+ dev_err(mhi_cntrl->dev, "Image transfer failed\n");
+ read_lock_bh(pm_lock);
+ if (MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+ for (i = 0; error_reg[i].name; i++) {
+ ret = mhi_read_reg(mhi_cntrl, base,
+ error_reg[i].offset, &val);
+ if (ret)
+ break;
+ dev_err(mhi_cntrl->dev, "Reg: %s value: 0x%x\n",
+ error_reg[i].name, val);
+ }
+ }
+ read_unlock_bh(pm_lock);
+ goto invalid_pm_state;
+ }
+
+ return (!ret) ? -ETIMEDOUT : 0;
+
+invalid_pm_state:
+
+ return -EIO;
+}
+
void mhi_free_bhie_table(struct mhi_controller *mhi_cntrl,
struct image_info *image_info)
{
@@ -87,3 +202,156 @@ int mhi_alloc_bhie_table(struct mhi_controller *mhi_cntrl,
return -ENOMEM;
}
+
+static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
+ const struct firmware *firmware,
+ struct image_info *img_info)
+{
+ size_t remainder = firmware->size;
+ size_t to_cpy;
+ const u8 *buf = firmware->data;
+ int i = 0;
+ struct mhi_buf *mhi_buf = img_info->mhi_buf;
+ struct bhi_vec_entry *bhi_vec = img_info->bhi_vec;
+
+ while (remainder) {
+ to_cpy = min(remainder, mhi_buf->len);
+ memcpy(mhi_buf->buf, buf, to_cpy);
+ bhi_vec->dma_addr = mhi_buf->dma_addr;
+ bhi_vec->size = to_cpy;
+
+ buf += to_cpy;
+ remainder -= to_cpy;
+ i++;
+ bhi_vec++;
+ mhi_buf++;
+ }
+}
+
+void mhi_fw_load_worker(struct work_struct *work)
+{
+ int ret;
+ struct mhi_controller *mhi_cntrl;
+ const char *fw_name;
+ const struct firmware *firmware = NULL;
+ struct image_info *image_info;
+ void *buf;
+ dma_addr_t dma_addr;
+ size_t size;
+
+ mhi_cntrl = container_of(work, struct mhi_controller, fw_worker);
+
+ dev_dbg(mhi_cntrl->dev, "Waiting for device to enter PBL from: %s\n",
+ TO_MHI_EXEC_STR(mhi_cntrl->ee));
+
+ ret = wait_event_timeout(mhi_cntrl->state_event,
+ MHI_IN_PBL(mhi_cntrl->ee) ||
+ MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+ if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+ dev_err(mhi_cntrl->dev, "Device MHI is not in valid state\n");
+ return;
+ }
+
+ /* If device is in pass through, do reset to ready state transition */
+ if (mhi_cntrl->ee == MHI_EE_PTHRU)
+ goto fw_load_ee_pthru;
+
+ fw_name = (mhi_cntrl->ee == MHI_EE_EDL) ?
+ mhi_cntrl->edl_image : mhi_cntrl->fw_image;
+
+ if (!fw_name || (mhi_cntrl->fbc_download && (!mhi_cntrl->sbl_size ||
+ !mhi_cntrl->seg_len))) {
+ dev_err(mhi_cntrl->dev,
+ "No firmware image defined or !sbl_size || !seg_len\n");
+ return;
+ }
+
+ ret = request_firmware(&firmware, fw_name, mhi_cntrl->dev);
+ if (ret) {
+ dev_err(mhi_cntrl->dev, "Error loading firmware: %d\n",
+ ret);
+ return;
+ }
+
+ size = (mhi_cntrl->fbc_download) ? mhi_cntrl->sbl_size : firmware->size;
+
+ /* SBL size provided is maximum size, not necessarily the image size */
+ if (size > firmware->size)
+ size = firmware->size;
+
+ buf = mhi_alloc_coherent(mhi_cntrl, size, &dma_addr, GFP_KERNEL);
+ if (!buf) {
+ release_firmware(firmware);
+ return;
+ }
+
+ /* Download SBL image */
+ memcpy(buf, firmware->data, size);
+ ret = mhi_fw_load_sbl(mhi_cntrl, dma_addr, size);
+ mhi_free_coherent(mhi_cntrl, size, buf, dma_addr);
+
+ if (!mhi_cntrl->fbc_download || ret || mhi_cntrl->ee == MHI_EE_EDL)
+ release_firmware(firmware);
+
+ /* Error or in EDL mode, we're done */
+ if (ret || mhi_cntrl->ee == MHI_EE_EDL)
+ return;
+
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ mhi_cntrl->dev_state = MHI_STATE_RESET;
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+
+ /*
+ * If we're doing fbc, populate vector tables while
+ * device transitioning into MHI READY state
+ */
+ if (mhi_cntrl->fbc_download) {
+ ret = mhi_alloc_bhie_table(mhi_cntrl, &mhi_cntrl->fbc_image,
+ firmware->size);
+ if (ret)
+ goto error_alloc_fw_table;
+
+ /* Load the firmware into BHIE vec table */
+ mhi_firmware_copy(mhi_cntrl, firmware, mhi_cntrl->fbc_image);
+ }
+
+fw_load_ee_pthru:
+ /* Transitioning into MHI RESET->READY state */
+ ret = mhi_ready_state_transition(mhi_cntrl);
+
+ if (!mhi_cntrl->fbc_download)
+ return;
+
+ if (ret)
+ goto error_read;
+
+ /* Wait for the SBL event */
+ ret = wait_event_timeout(mhi_cntrl->state_event,
+ mhi_cntrl->ee == MHI_EE_SBL ||
+ MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state),
+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
+
+ if (!ret || MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+ dev_err(mhi_cntrl->dev, "MHI did not enter SBL\n");
+ goto error_read;
+ }
+
+ /* Start full firmware image download */
+ image_info = mhi_cntrl->fbc_image;
+ ret = mhi_fw_load_amss(mhi_cntrl,
+ /* Vector table is the last entry */
+ &image_info->mhi_buf[image_info->entries - 1]);
+
+ release_firmware(firmware);
+
+ return;
+
+error_read:
+ mhi_free_bhie_table(mhi_cntrl, mhi_cntrl->fbc_image);
+ mhi_cntrl->fbc_image = NULL;
+
+error_alloc_fw_table:
+ release_firmware(firmware);
+}
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index 7fff92e9661b..2f06bf958f58 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -753,6 +753,7 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
spin_lock_init(&mhi_cntrl->wlock);
INIT_WORK(&mhi_cntrl->st_worker, mhi_pm_st_worker);
INIT_WORK(&mhi_cntrl->syserr_worker, mhi_pm_sys_err_worker);
+ INIT_WORK(&mhi_cntrl->fw_worker, mhi_fw_load_worker);
init_waitqueue_head(&mhi_cntrl->state_event);
mhi_cmd = mhi_cntrl->mhi_cmd;
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index d920264ded21..eab9c051ca5e 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -590,6 +590,7 @@ int mhi_queue_state_transition(struct mhi_controller *mhi_cntrl,
enum dev_st_transition state);
void mhi_pm_st_worker(struct work_struct *work);
void mhi_pm_sys_err_worker(struct work_struct *work);
+void mhi_fw_load_worker(struct work_struct *work);
int mhi_ready_state_transition(struct mhi_controller *mhi_cntrl);
void mhi_ctrl_ev_task(unsigned long data);
int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl);
--
2.17.1
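For reviewers of the boot patch above: the segmentation loop in mhi_firmware_copy() spreads the firmware blob across fixed-size BHIe segments, with the last segment holding only the remainder. The following is an illustrative userspace sketch of that loop, not kernel code; seg_len stands in for the per-segment buffer length (mhi_buf->len, derived from mhi_cntrl->seg_len), and the memcpy and vector-table bookkeeping are elided.

```c
#include <assert.h>
#include <stddef.h>

/* Count how many fixed-size segments a firmware blob occupies,
 * mirroring the while (remainder) loop in mhi_firmware_copy() */
static size_t count_segments(size_t fw_size, size_t seg_len)
{
	size_t remainder = fw_size;
	size_t segs = 0;

	while (remainder) {
		size_t to_cpy = remainder < seg_len ? remainder : seg_len;

		/* memcpy(mhi_buf->buf, buf, to_cpy) happens here,
		 * along with recording dma_addr/size in the BHIe vector */
		remainder -= to_cpy;
		segs++;
	}

	return segs;
}
```

This also shows why mhi_alloc_bhie_table() must size the vector table from firmware->size up front: the number of segments is fixed once seg_len and the image size are known.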
Add support for transferring data between the external modem and the host
processor using the MHI protocol.
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/988
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[mani: split the data transfer patch and cleaned up for upstream]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/init.c | 157 ++++++-
drivers/bus/mhi/core/internal.h | 33 ++
drivers/bus/mhi/core/main.c | 777 +++++++++++++++++++++++++++++++-
drivers/bus/mhi/core/pm.c | 40 ++
include/linux/mhi.h | 55 +++
5 files changed, 1053 insertions(+), 9 deletions(-)
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index c946693bdae4..40dcf8353f6f 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -483,6 +483,68 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
return 0;
}
+void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan)
+{
+ struct mhi_ring *buf_ring;
+ struct mhi_ring *tre_ring;
+ struct mhi_chan_ctxt *chan_ctxt;
+
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+ chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
+
+ mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size,
+ tre_ring->pre_aligned, tre_ring->dma_handle);
+ vfree(buf_ring->base);
+
+ buf_ring->base = tre_ring->base = NULL;
+ chan_ctxt->rbase = 0;
+}
+
+int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan)
+{
+ struct mhi_ring *buf_ring;
+ struct mhi_ring *tre_ring;
+ struct mhi_chan_ctxt *chan_ctxt;
+ int ret;
+
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+ tre_ring->el_size = sizeof(struct mhi_tre);
+ tre_ring->len = tre_ring->el_size * tre_ring->elements;
+ chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
+ ret = mhi_alloc_aligned_ring(mhi_cntrl, tre_ring, tre_ring->len);
+ if (ret)
+ return -ENOMEM;
+
+ buf_ring->el_size = sizeof(struct mhi_buf_info);
+ buf_ring->len = buf_ring->el_size * buf_ring->elements;
+ buf_ring->base = vzalloc(buf_ring->len);
+
+ if (!buf_ring->base) {
+ mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size,
+ tre_ring->pre_aligned, tre_ring->dma_handle);
+ return -ENOMEM;
+ }
+
+ chan_ctxt->chstate = MHI_CH_STATE_ENABLED;
+ chan_ctxt->rbase = tre_ring->iommu_base;
+ chan_ctxt->rp = chan_ctxt->wp = chan_ctxt->rbase;
+ chan_ctxt->rlen = tre_ring->len;
+ tre_ring->ctxt_wp = &chan_ctxt->wp;
+
+ tre_ring->rp = tre_ring->wp = tre_ring->base;
+ buf_ring->rp = buf_ring->wp = buf_ring->base;
+ mhi_chan->db_cfg.db_mode = 1;
+
+ /* Update to all cores */
+ smp_wmb();
+
+ return 0;
+}
+
static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
struct mhi_controller_config *config)
{
@@ -638,6 +700,31 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
mhi_chan->xfer_type = ch_cfg->data_type;
+ switch (mhi_chan->xfer_type) {
+ case MHI_BUF_RAW:
+ mhi_chan->gen_tre = mhi_gen_tre;
+ mhi_chan->queue_xfer = mhi_queue_buf;
+ break;
+ case MHI_BUF_SKB:
+ mhi_chan->queue_xfer = mhi_queue_skb;
+ break;
+ case MHI_BUF_SCLIST:
+ mhi_chan->gen_tre = mhi_gen_tre;
+ mhi_chan->queue_xfer = mhi_queue_sclist;
+ break;
+ case MHI_BUF_NOP:
+ mhi_chan->queue_xfer = mhi_queue_nop;
+ break;
+ case MHI_BUF_DMA:
+ case MHI_BUF_RSC_DMA:
+ mhi_chan->queue_xfer = mhi_queue_dma;
+ break;
+ default:
+ dev_err(mhi_cntrl->dev,
+ "Channel datatype not supported\n");
+ goto error_chan_cfg;
+ }
+
mhi_chan->lpm_notify = ch_cfg->lpm_notify;
mhi_chan->offload_ch = ch_cfg->offload_channel;
mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch;
@@ -667,6 +754,13 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
goto error_chan_cfg;
}
+ /*
+ * If the MHI host pre-allocates buffers then client drivers
+ * cannot queue transfers themselves
+ */
+ if (mhi_chan->pre_alloc)
+ mhi_chan->queue_xfer = mhi_queue_nop;
+
if (!mhi_chan->offload_ch) {
mhi_chan->db_cfg.brstmode = ch_cfg->doorbell;
if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode)) {
@@ -796,6 +890,14 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
rwlock_init(&mhi_chan->lock);
}
+ if (mhi_cntrl->bounce_buf) {
+ mhi_cntrl->map_single = mhi_map_single_use_bb;
+ mhi_cntrl->unmap_single = mhi_unmap_single_use_bb;
+ } else {
+ mhi_cntrl->map_single = mhi_map_single_no_bb;
+ mhi_cntrl->unmap_single = mhi_unmap_single_no_bb;
+ }
+
/* Register controller with MHI bus */
mhi_dev = mhi_alloc_device(mhi_cntrl);
if (IS_ERR(mhi_dev)) {
@@ -961,6 +1063,14 @@ static int mhi_driver_probe(struct device *dev)
struct mhi_event *mhi_event;
struct mhi_chan *ul_chan = mhi_dev->ul_chan;
struct mhi_chan *dl_chan = mhi_dev->dl_chan;
+ int ret;
+
+ /* Bring device out of LPM */
+ ret = mhi_device_get_sync(mhi_dev);
+ if (ret)
+ return ret;
+
+ ret = -EINVAL;
if (ul_chan) {
/*
@@ -968,13 +1078,18 @@ static int mhi_driver_probe(struct device *dev)
* be provided
*/
if (ul_chan->lpm_notify && !mhi_drv->status_cb)
- return -EINVAL;
+ goto exit_probe;
/* For non-offload channels then xfer_cb should be provided */
if (!ul_chan->offload_ch && !mhi_drv->ul_xfer_cb)
- return -EINVAL;
+ goto exit_probe;
ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
+ if (ul_chan->auto_start) {
+ ret = mhi_prepare_channel(mhi_cntrl, ul_chan);
+ if (ret)
+ goto exit_probe;
+ }
}
if (dl_chan) {
@@ -983,11 +1098,11 @@ static int mhi_driver_probe(struct device *dev)
* be provided
*/
if (dl_chan->lpm_notify && !mhi_drv->status_cb)
- return -EINVAL;
+ goto exit_probe;
/* For non-offload channels then xfer_cb should be provided */
if (!dl_chan->offload_ch && !mhi_drv->dl_xfer_cb)
- return -EINVAL;
+ goto exit_probe;
mhi_event = &mhi_cntrl->mhi_event[dl_chan->er_index];
@@ -997,19 +1112,36 @@ static int mhi_driver_probe(struct device *dev)
* notify pending data
*/
if (mhi_event->cl_manage && !mhi_drv->status_cb)
- return -EINVAL;
+ goto exit_probe;
dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
}
/* Call the user provided probe function */
- return mhi_drv->probe(mhi_dev, mhi_dev->id);
+ ret = mhi_drv->probe(mhi_dev, mhi_dev->id);
+ if (ret)
+ goto exit_probe;
+
+ if (dl_chan && dl_chan->auto_start)
+ mhi_prepare_channel(mhi_cntrl, dl_chan);
+
+ mhi_device_put(mhi_dev);
+
+ return ret;
+
+exit_probe:
+ mhi_unprepare_from_transfer(mhi_dev);
+
+ mhi_device_put(mhi_dev);
+
+ return ret;
}
static int mhi_driver_remove(struct device *dev)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_driver *mhi_drv = to_mhi_driver(dev->driver);
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
struct mhi_chan *mhi_chan;
enum mhi_ch_state ch_state[] = {
MHI_CH_STATE_DISABLED,
@@ -1041,6 +1173,10 @@ static int mhi_driver_remove(struct device *dev)
mhi_chan->ch_state = MHI_CH_STATE_SUSPENDED;
write_unlock_irq(&mhi_chan->lock);
+ /* Reset the non-offload channel */
+ if (!mhi_chan->offload_ch)
+ mhi_reset_chan(mhi_cntrl, mhi_chan);
+
mutex_unlock(&mhi_chan->mutex);
}
@@ -1055,11 +1191,20 @@ static int mhi_driver_remove(struct device *dev)
mutex_lock(&mhi_chan->mutex);
+ if (ch_state[dir] == MHI_CH_STATE_ENABLED &&
+ !mhi_chan->offload_ch)
+ mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+
mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
mutex_unlock(&mhi_chan->mutex);
}
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ while (atomic_read(&mhi_dev->dev_wake))
+ mhi_device_put(mhi_dev);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
return 0;
}
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index 314d0909c372..93003295405b 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -599,6 +599,8 @@ int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl);
void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl);
int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl);
+int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+ enum mhi_cmd_type cmd);
/* Register access methods */
void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
@@ -630,6 +632,14 @@ int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info);
+int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan);
+int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan);
+void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan);
+void mhi_reset_chan(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan);
/* Memory allocation methods */
static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl,
@@ -667,4 +677,27 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev);
irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev);
irqreturn_t mhi_intvec_handler(int irq_number, void *dev);
+/* Queue transfer methods */
+int mhi_queue_dma(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags);
+int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+ void *buf, void *cb, size_t buf_len, enum mhi_flags flags);
+int mhi_queue_buf(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags);
+int mhi_queue_skb(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags);
+int mhi_queue_sclist(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags);
+int mhi_queue_nop(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags);
+
+int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info);
+int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info);
+void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info);
+void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info);
+
#endif /* _MHI_INT_H */
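For reviewers unfamiliar with MHI bounce buffering: the mhi_map_single_use_bb()
and mhi_unmap_single_use_bb() helpers declared above copy TX data into a
DMA-coherent bounce buffer before the transfer and copy RX data back out
afterwards. Below is a minimal userspace sketch of that copy-in/copy-out
contract only; it uses malloc() in place of mhi_alloc_coherent(), a plain -1
instead of -ENOMEM, and a cut-down stand-in for struct mhi_buf_info (the real
helpers also track the DMA address p_addr, omitted here):

```c
#include <stdlib.h>
#include <string.h>

/* Cut-down stand-in for struct mhi_buf_info: only the fields the
 * bounce-buffer helpers touch. */
struct buf_info {
	void *v_addr;   /* client buffer */
	void *bb_addr;  /* bounce buffer (kernel: dma_alloc_coherent) */
	size_t len;
	int to_device;  /* models dir == DMA_TO_DEVICE */
};

/* Models mhi_map_single_use_bb(): allocate a bounce buffer and, for
 * TX, copy the client data into it before the device reads it. */
static int map_use_bb(struct buf_info *b)
{
	b->bb_addr = malloc(b->len);
	if (!b->bb_addr)
		return -1;	/* kernel returns -ENOMEM */
	if (b->to_device)
		memcpy(b->bb_addr, b->v_addr, b->len);
	return 0;
}

/* Models mhi_unmap_single_use_bb(): for RX, copy what the device wrote
 * into the bounce buffer back to the client buffer, then free it. */
static void unmap_use_bb(struct buf_info *b)
{
	if (!b->to_device)
		memcpy(b->v_addr, b->bb_addr, b->len);
	free(b->bb_addr);
	b->bb_addr = NULL;
}
```

This is only the data-movement half of the helpers; the real code additionally
fills buf_info->p_addr so the TRE can point the device at the bounce buffer.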
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index 8450c74b4525..89632e4920c1 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -144,11 +144,82 @@ enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl)
return ret ? MHI_STATE_MAX : state;
}
+int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info)
+{
+ buf_info->p_addr = dma_map_single(mhi_cntrl->dev, buf_info->v_addr,
+ buf_info->len, buf_info->dir);
+ if (dma_mapping_error(mhi_cntrl->dev, buf_info->p_addr))
+ return -ENOMEM;
+
+ return 0;
+}
+
+int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info)
+{
+ void *buf = mhi_alloc_coherent(mhi_cntrl, buf_info->len,
+ &buf_info->p_addr, GFP_ATOMIC);
+
+ if (!buf)
+ return -ENOMEM;
+
+ if (buf_info->dir == DMA_TO_DEVICE)
+ memcpy(buf, buf_info->v_addr, buf_info->len);
+
+ buf_info->bb_addr = buf;
+
+ return 0;
+}
+
+void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info)
+{
+ dma_unmap_single(mhi_cntrl->dev, buf_info->p_addr, buf_info->len,
+ buf_info->dir);
+}
+
+void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info)
+{
+ if (buf_info->dir == DMA_FROM_DEVICE)
+ memcpy(buf_info->v_addr, buf_info->bb_addr, buf_info->len);
+
+ mhi_free_coherent(mhi_cntrl, buf_info->len, buf_info->bb_addr,
+ buf_info->p_addr);
+}
+
+static int get_nr_avail_ring_elements(struct mhi_controller *mhi_cntrl,
+ struct mhi_ring *ring)
+{
+ int nr_el;
+
+ if (ring->wp < ring->rp) {
+ nr_el = ((ring->rp - ring->wp) / ring->el_size) - 1;
+ } else {
+ nr_el = (ring->rp - ring->base) / ring->el_size;
+ nr_el += ((ring->base + ring->len - ring->wp) /
+ ring->el_size) - 1;
+ }
+
+ return nr_el;
+}
+
static void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr)
{
return (addr - ring->iommu_base) + ring->base;
}
+static void mhi_add_ring_element(struct mhi_controller *mhi_cntrl,
+ struct mhi_ring *ring)
+{
+ ring->wp += ring->el_size;
+ if (ring->wp >= (ring->base + ring->len))
+ ring->wp = ring->base;
+ /* Ensure the write pointer update is visible to other CPUs */
+ smp_wmb();
+}
+
static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
struct mhi_ring *ring)
{
@@ -417,23 +488,25 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
/* Get the TRB this event points to */
ev_tre = mhi_to_virtual(tre_ring, ptr);
- /* device rp after servicing the TREs */
dev_rp = ev_tre + 1;
if (dev_rp >= (tre_ring->base + tre_ring->len))
dev_rp = tre_ring->base;
result.dir = mhi_chan->dir;
- /* local rp */
local_rp = tre_ring->rp;
while (local_rp != dev_rp) {
buf_info = buf_ring->rp;
- /* if it's last tre get len from the event */
+ /* If it's the last TRE, get length from the event */
if (local_rp == ev_tre)
xfer_len = MHI_TRE_GET_EV_LEN(event);
else
xfer_len = buf_info->len;
+ /* Unmap if it's not pre-mapped by client */
+ if (likely(!buf_info->pre_mapped))
+ mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
+
result.buf_addr = buf_info->cb_buf;
result.bytes_xferd = xfer_len;
mhi_del_ring_element(mhi_cntrl, buf_ring);
@@ -445,6 +518,22 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
if (mhi_chan->dir == DMA_TO_DEVICE)
atomic_dec(&mhi_cntrl->pending_pkts);
+
+ /*
+ * Recycle the buffer if it is pre-allocated. If there
+ * is an error, not much can be done apart from
+ * dropping the packet.
+ */
+ if (mhi_chan->pre_alloc) {
+ if (mhi_queue_buf(mhi_chan->mhi_dev, mhi_chan,
+ buf_info->cb_buf,
+ buf_info->len, MHI_EOT)) {
+ dev_err(mhi_cntrl->dev,
+ "Error recycling buffer for chan:%d\n",
+ mhi_chan->chan);
+ kfree(buf_info->cb_buf);
+ }
+ }
}
break;
} /* CC_EOT */
@@ -808,3 +897,685 @@ void mhi_ctrl_ev_task(unsigned long data)
schedule_work(&mhi_cntrl->syserr_worker);
}
}
+
+static bool mhi_is_ring_full(struct mhi_controller *mhi_cntrl,
+ struct mhi_ring *ring)
+{
+ void *tmp = ring->wp + ring->el_size;
+
+ if (tmp >= (ring->base + ring->len))
+ tmp = ring->base;
+
+ return (tmp == ring->rp);
+}
+
+/* TODO: Scatter-Gather transfer not implemented */
+int mhi_queue_sclist(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags)
+{
+ return -EINVAL;
+}
+
+/*
+ * MHI client drivers are not allowed to queue buffers for pre-allocated
+ * channels. Hence, this function just returns -EINVAL.
+ */
+int mhi_queue_nop(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags)
+{
+ return -EINVAL;
+}
+
+int mhi_queue_skb(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags)
+{
+ struct sk_buff *skb = buf;
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+ struct mhi_ring *buf_ring = &mhi_chan->buf_ring;
+ struct mhi_buf_info *buf_info;
+ struct mhi_tre *mhi_tre;
+ int ret;
+
+ if (mhi_is_ring_full(mhi_cntrl, tre_ring))
+ return -ENOMEM;
+
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ return -EIO;
+ }
+
+ /* we're in M3 or transitioning to M3 */
+ if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+ mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+ mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+ }
+
+ /* Toggle wake to exit out of M2 */
+ mhi_cntrl->wake_toggle(mhi_cntrl);
+
+ /* Generate the TRE */
+ buf_info = buf_ring->wp;
+
+ buf_info->v_addr = skb->data;
+ buf_info->cb_buf = skb;
+ buf_info->wp = tre_ring->wp;
+ buf_info->dir = mhi_chan->dir;
+ buf_info->len = len;
+ ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
+ if (ret)
+ goto map_error;
+
+ mhi_tre = tre_ring->wp;
+
+ mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
+ mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(buf_info->len);
+ mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(1, 1, 0, 0);
+
+ /* increment WP */
+ mhi_add_ring_element(mhi_cntrl, tre_ring);
+ mhi_add_ring_element(mhi_cntrl, buf_ring);
+
+ if (mhi_chan->dir == DMA_TO_DEVICE)
+ atomic_inc(&mhi_cntrl->pending_pkts);
+
+ if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
+ read_lock_bh(&mhi_chan->lock);
+ mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+ read_unlock_bh(&mhi_chan->lock);
+ }
+
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return 0;
+
+map_error:
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return ret;
+}
+
+int mhi_queue_dma(struct mhi_device *mhi_dev,
+ struct mhi_chan *mhi_chan,
+ void *buf,
+ size_t len,
+ enum mhi_flags mflags)
+{
+ struct mhi_buf *mhi_buf = buf;
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+ struct mhi_ring *buf_ring = &mhi_chan->buf_ring;
+ struct mhi_buf_info *buf_info;
+ struct mhi_tre *mhi_tre;
+
+ if (mhi_is_ring_full(mhi_cntrl, tre_ring))
+ return -ENOMEM;
+
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
+ dev_err(mhi_cntrl->dev,
+ "MHI is not in active state, PM state: %s\n",
+ to_mhi_pm_state_str(mhi_cntrl->pm_state));
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return -EIO;
+ }
+
+ /* we're in M3 or transitioning to M3 */
+ if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+ mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+ mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+ }
+
+ /* Toggle wake to exit out of M2 */
+ mhi_cntrl->wake_toggle(mhi_cntrl);
+
+ /* Generate the TRE */
+ buf_info = buf_ring->wp;
+ WARN_ON(buf_info->used);
+ buf_info->p_addr = mhi_buf->dma_addr;
+ buf_info->pre_mapped = true;
+ buf_info->cb_buf = mhi_buf;
+ buf_info->wp = tre_ring->wp;
+ buf_info->dir = mhi_chan->dir;
+ buf_info->len = len;
+
+ mhi_tre = tre_ring->wp;
+
+ if (mhi_chan->xfer_type == MHI_BUF_RSC_DMA) {
+ buf_info->used = true;
+ mhi_tre->ptr =
+ MHI_RSCTRE_DATA_PTR(buf_info->p_addr, buf_info->len);
+ mhi_tre->dword[0] =
+ MHI_RSCTRE_DATA_DWORD0(buf_ring->wp - buf_ring->base);
+ mhi_tre->dword[1] = MHI_RSCTRE_DATA_DWORD1;
+ } else {
+ mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
+ mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(buf_info->len);
+ mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(1, 1, 0, 0);
+ }
+
+ /* increment WP */
+ mhi_add_ring_element(mhi_cntrl, tre_ring);
+ mhi_add_ring_element(mhi_cntrl, buf_ring);
+
+ if (mhi_chan->dir == DMA_TO_DEVICE)
+ atomic_inc(&mhi_cntrl->pending_pkts);
+
+ if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
+ read_lock_bh(&mhi_chan->lock);
+ mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+ read_unlock_bh(&mhi_chan->lock);
+ }
+
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return 0;
+}
+
+int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+ void *buf, void *cb, size_t buf_len, enum mhi_flags flags)
+{
+ struct mhi_ring *buf_ring, *tre_ring;
+ struct mhi_tre *mhi_tre;
+ struct mhi_buf_info *buf_info;
+ int eot, eob, chain, bei;
+ int ret;
+
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+
+ buf_info = buf_ring->wp;
+ buf_info->v_addr = buf;
+ buf_info->cb_buf = cb;
+ buf_info->wp = tre_ring->wp;
+ buf_info->dir = mhi_chan->dir;
+ buf_info->len = buf_len;
+
+ ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
+ if (ret)
+ return ret;
+
+ eob = !!(flags & MHI_EOB);
+ eot = !!(flags & MHI_EOT);
+ chain = !!(flags & MHI_CHAIN);
+ bei = !!(mhi_chan->intmod);
+
+ mhi_tre = tre_ring->wp;
+ mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
+ mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(buf_len);
+ mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(bei, eot, eob, chain);
+
+ /* increment WP */
+ mhi_add_ring_element(mhi_cntrl, tre_ring);
+ mhi_add_ring_element(mhi_cntrl, buf_ring);
+
+ return 0;
+}
+
+int mhi_queue_transfer(struct mhi_device *mhi_dev,
+ enum dma_data_direction dir, void *buf, size_t len,
+ enum mhi_flags mflags)
+{
+ if (dir == DMA_TO_DEVICE)
+ return mhi_dev->ul_chan->queue_xfer(mhi_dev, mhi_dev->ul_chan,
+ buf, len, mflags);
+ else
+ return mhi_dev->dl_chan->queue_xfer(mhi_dev, mhi_dev->dl_chan,
+ buf, len, mflags);
+}
+EXPORT_SYMBOL_GPL(mhi_queue_transfer);
+
+int mhi_queue_buf(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags)
+{
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_ring *tre_ring;
+ unsigned long flags;
+ int ret;
+
+ /*
+ * This check is only a guard: MHI can still enter an error
+ * state while the rest of the function executes. That is not
+ * fatal, so there is no need to hold pm_lock here.
+ */
+ if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
+ return -EIO;
+
+ tre_ring = &mhi_chan->tre_ring;
+ if (mhi_is_ring_full(mhi_cntrl, tre_ring))
+ return -ENOMEM;
+
+ ret = mhi_chan->gen_tre(mhi_cntrl, mhi_chan, buf, buf, len, mflags);
+ if (unlikely(ret))
+ return ret;
+
+ read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+
+ /* we're in M3 or transitioning to M3 */
+ if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+ mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+ mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+ }
+
+ /* Toggle wake to exit out of M2 */
+ mhi_cntrl->wake_toggle(mhi_cntrl);
+
+ if (mhi_chan->dir == DMA_TO_DEVICE)
+ atomic_inc(&mhi_cntrl->pending_pkts);
+
+ if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
+ unsigned long flags;
+
+ read_lock_irqsave(&mhi_chan->lock, flags);
+ mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+ read_unlock_irqrestore(&mhi_chan->lock, flags);
+ }
+
+ read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+
+ return 0;
+}
+
+int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan,
+ enum mhi_cmd_type cmd)
+{
+ struct mhi_tre *cmd_tre = NULL;
+ struct mhi_cmd *mhi_cmd = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+ struct mhi_ring *ring = &mhi_cmd->ring;
+ int chan = 0;
+
+ if (mhi_chan)
+ chan = mhi_chan->chan;
+
+ spin_lock_bh(&mhi_cmd->lock);
+ if (!get_nr_avail_ring_elements(mhi_cntrl, ring)) {
+ spin_unlock_bh(&mhi_cmd->lock);
+ return -ENOMEM;
+ }
+
+ /* prepare the cmd tre */
+ cmd_tre = ring->wp;
+ switch (cmd) {
+ case MHI_CMD_RESET_CHAN:
+ cmd_tre->ptr = MHI_TRE_CMD_RESET_PTR;
+ cmd_tre->dword[0] = MHI_TRE_CMD_RESET_DWORD0;
+ cmd_tre->dword[1] = MHI_TRE_CMD_RESET_DWORD1(chan);
+ break;
+ case MHI_CMD_START_CHAN:
+ cmd_tre->ptr = MHI_TRE_CMD_START_PTR;
+ cmd_tre->dword[0] = MHI_TRE_CMD_START_DWORD0;
+ cmd_tre->dword[1] = MHI_TRE_CMD_START_DWORD1(chan);
+ break;
+ default:
+ dev_err(mhi_cntrl->dev, "Command not supported\n");
+ break;
+ }
+
+ /* queue to hardware */
+ mhi_add_ring_element(mhi_cntrl, ring);
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
+ mhi_ring_cmd_db(mhi_cntrl, mhi_cmd);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ spin_unlock_bh(&mhi_cmd->lock);
+
+ return 0;
+}
+
+static void __mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan)
+{
+ int ret;
+
+ dev_dbg(mhi_cntrl->dev, "Entered: unprepare channel:%d\n",
+ mhi_chan->chan);
+
+ /* no more processing events for this channel */
+ mutex_lock(&mhi_chan->mutex);
+ write_lock_irq(&mhi_chan->lock);
+ if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
+ write_unlock_irq(&mhi_chan->lock);
+ mutex_unlock(&mhi_chan->mutex);
+ return;
+ }
+
+ mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+ write_unlock_irq(&mhi_chan->lock);
+
+ reinit_completion(&mhi_chan->completion);
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ goto error_invalid_state;
+ }
+
+ mhi_cntrl->wake_toggle(mhi_cntrl);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+ mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+ ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_RESET_CHAN);
+ if (ret)
+ goto error_invalid_state;
+
+ /* Even if the command fails, we still go on to reset the channel */
+ ret = wait_for_completion_timeout(&mhi_chan->completion,
+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
+ if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS)
+ dev_err(mhi_cntrl->dev,
+ "Failed to receive cmd completion, still resetting\n");
+
+error_invalid_state:
+ if (!mhi_chan->offload_ch) {
+ mhi_reset_chan(mhi_cntrl, mhi_chan);
+ mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+ }
+ dev_dbg(mhi_cntrl->dev, "chan:%d successfully reset\n",
+ mhi_chan->chan);
+ mutex_unlock(&mhi_chan->mutex);
+}
+
+int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan)
+{
+ int ret = 0;
+
+ dev_dbg(mhi_cntrl->dev, "Preparing channel: %d\n",
+ mhi_chan->chan);
+
+ if (!(BIT(mhi_cntrl->ee) & mhi_chan->ee_mask)) {
+ dev_err(mhi_cntrl->dev,
+ "Current EE: %s Required EE Mask: 0x%x for chan: %s\n",
+ TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask,
+ mhi_chan->name);
+ return -ENOTCONN;
+ }
+
+ mutex_lock(&mhi_chan->mutex);
+
+ /* If channel is not in disabled state, do not allow it to start */
+ if (mhi_chan->ch_state != MHI_CH_STATE_DISABLED) {
+ ret = -EIO;
+ dev_dbg(mhi_cntrl->dev,
+ "channel: %d is not in disabled state\n",
+ mhi_chan->chan);
+ goto error_init_chan;
+ }
+
+ /* Check if client manages channel context for offload channels */
+ if (!mhi_chan->offload_ch) {
+ ret = mhi_init_chan_ctxt(mhi_cntrl, mhi_chan);
+ if (ret)
+ goto error_init_chan;
+ }
+
+ reinit_completion(&mhi_chan->completion);
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ ret = -EIO;
+ goto error_pm_state;
+ }
+
+ mhi_cntrl->wake_toggle(mhi_cntrl);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+ mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+
+ ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_START_CHAN);
+ if (ret)
+ goto error_pm_state;
+
+ ret = wait_for_completion_timeout(&mhi_chan->completion,
+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
+ if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) {
+ ret = -EIO;
+ goto error_pm_state;
+ }
+
+ write_lock_irq(&mhi_chan->lock);
+ mhi_chan->ch_state = MHI_CH_STATE_ENABLED;
+ write_unlock_irq(&mhi_chan->lock);
+
+ /* Pre-allocate buffer for xfer ring */
+ if (mhi_chan->pre_alloc) {
+ int nr_el = get_nr_avail_ring_elements(mhi_cntrl,
+ &mhi_chan->tre_ring);
+ size_t len = mhi_cntrl->buffer_len;
+
+ while (nr_el--) {
+ void *buf;
+
+ buf = kmalloc(len, GFP_KERNEL);
+ if (!buf) {
+ ret = -ENOMEM;
+ goto error_pre_alloc;
+ }
+
+ /* Prepare transfer descriptors */
+ ret = mhi_chan->gen_tre(mhi_cntrl, mhi_chan, buf, buf,
+ len, MHI_EOT);
+ if (ret) {
+ kfree(buf);
+ goto error_pre_alloc;
+ }
+ }
+
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (MHI_DB_ACCESS_VALID(mhi_cntrl)) {
+ read_lock_irq(&mhi_chan->lock);
+ mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+ read_unlock_irq(&mhi_chan->lock);
+ }
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ }
+
+ mutex_unlock(&mhi_chan->mutex);
+
+ dev_dbg(mhi_cntrl->dev, "Chan: %d successfully moved to start state\n",
+ mhi_chan->chan);
+
+ return 0;
+
+error_pm_state:
+ if (!mhi_chan->offload_ch)
+ mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+
+error_init_chan:
+ mutex_unlock(&mhi_chan->mutex);
+
+ return ret;
+
+error_pre_alloc:
+ mutex_unlock(&mhi_chan->mutex);
+ __mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+
+ return ret;
+}
+
+static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
+ struct mhi_event *mhi_event,
+ struct mhi_event_ctxt *er_ctxt,
+ int chan)
+{
+ struct mhi_tre *dev_rp, *local_rp;
+ struct mhi_ring *ev_ring;
+ unsigned long flags;
+
+ dev_dbg(mhi_cntrl->dev,
+ "Marking all events for chan: %d as stale\n", chan);
+
+ ev_ring = &mhi_event->ring;
+
+ /* Mark all pending events related to the channel as stale */
+ spin_lock_irqsave(&mhi_event->lock, flags);
+ dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+
+ local_rp = ev_ring->rp;
+ while (dev_rp != local_rp) {
+ if (MHI_TRE_GET_EV_TYPE(local_rp) ==
+ MHI_PKT_TYPE_TX_EVENT &&
+ chan == MHI_TRE_GET_EV_CHID(local_rp))
+ local_rp->dword[1] = MHI_TRE_EV_DWORD1(chan,
+ MHI_PKT_TYPE_STALE_EVENT);
+ local_rp++;
+ if (local_rp == (ev_ring->base + ev_ring->len))
+ local_rp = ev_ring->base;
+ }
+
+ dev_dbg(mhi_cntrl->dev,
+ "Finished marking events as stale\n");
+ spin_unlock_irqrestore(&mhi_event->lock, flags);
+}
+
+static void mhi_reset_data_chan(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan)
+{
+ struct mhi_ring *buf_ring, *tre_ring;
+ struct mhi_result result;
+
+ /* Reset any pending buffers */
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+ result.transaction_status = -ENOTCONN;
+ result.bytes_xferd = 0;
+ while (tre_ring->rp != tre_ring->wp) {
+ struct mhi_buf_info *buf_info = buf_ring->rp;
+
+ if (mhi_chan->dir == DMA_TO_DEVICE)
+ atomic_dec(&mhi_cntrl->pending_pkts);
+
+ if (!buf_info->pre_mapped)
+ mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
+
+ mhi_del_ring_element(mhi_cntrl, buf_ring);
+ mhi_del_ring_element(mhi_cntrl, tre_ring);
+
+ if (mhi_chan->pre_alloc) {
+ kfree(buf_info->cb_buf);
+ } else {
+ result.buf_addr = buf_info->cb_buf;
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+ }
+ }
+}
+
+static void mhi_reset_rsc_chan(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan)
+{
+ struct mhi_ring *buf_ring, *tre_ring;
+ struct mhi_result result;
+ struct mhi_buf_info *buf_info;
+
+ /* Reset any pending buffers */
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+ result.transaction_status = -ENOTCONN;
+ result.bytes_xferd = 0;
+
+ buf_info = buf_ring->base;
+ for (; (void *)buf_info < buf_ring->base + buf_ring->len; buf_info++) {
+ if (!buf_info->used)
+ continue;
+
+ result.buf_addr = buf_info->cb_buf;
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+ buf_info->used = false;
+ }
+}
+
+void mhi_reset_chan(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan)
+{
+ struct mhi_event *mhi_event;
+ struct mhi_event_ctxt *er_ctxt;
+ int chan = mhi_chan->chan;
+
+ /* Nothing to reset, client doesn't queue buffers */
+ if (mhi_chan->offload_ch)
+ return;
+
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
+ er_ctxt = &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_chan->er_index];
+
+ mhi_mark_stale_events(mhi_cntrl, mhi_event, er_ctxt, chan);
+
+ if (mhi_chan->xfer_type == MHI_BUF_RSC_DMA)
+ mhi_reset_rsc_chan(mhi_cntrl, mhi_chan);
+ else
+ mhi_reset_data_chan(mhi_cntrl, mhi_chan);
+
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+}
+
+/* Move channel to start state */
+int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
+{
+ int ret, dir;
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_chan *mhi_chan;
+
+ for (dir = 0; dir < 2; dir++) {
+ mhi_chan = dir ? mhi_dev->dl_chan : mhi_dev->ul_chan;
+
+ if (!mhi_chan)
+ continue;
+
+ ret = mhi_prepare_channel(mhi_cntrl, mhi_chan);
+ if (ret)
+ goto error_open_chan;
+ }
+
+ return 0;
+
+error_open_chan:
+ for (--dir; dir >= 0; dir--) {
+ mhi_chan = dir ? mhi_dev->dl_chan : mhi_dev->ul_chan;
+
+ if (!mhi_chan)
+ continue;
+
+ __mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_prepare_for_transfer);
+
+void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev)
+{
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_chan *mhi_chan;
+ int dir;
+
+ for (dir = 0; dir < 2; dir++) {
+ mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+ if (!mhi_chan)
+ continue;
+
+ __mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+ }
+}
+EXPORT_SYMBOL_GPL(mhi_unprepare_from_transfer);
+
+int mhi_poll(struct mhi_device *mhi_dev, u32 budget)
+{
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_chan *mhi_chan = mhi_dev->dl_chan;
+ struct mhi_event *mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
+ int ret;
+
+ spin_lock_bh(&mhi_event->lock);
+ ret = mhi_event->process_event(mhi_cntrl, mhi_event, budget);
+ spin_unlock_bh(&mhi_event->lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_poll);
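To help reviewers sanity-check the ring arithmetic in
get_nr_avail_ring_elements() and mhi_is_ring_full() above: both rely on
keeping one element permanently unused, so wp == rp can only ever mean
"empty", never "full". The following standalone model repeats the same
pointer arithmetic with the struct trimmed to just the fields the math needs
(names here are illustrative, not the kernel struct):

```c
#include <stddef.h>

/* Minimal model of struct mhi_ring: a circular buffer of fixed-size
 * elements with one slot kept unused to disambiguate full from empty. */
struct ring {
	char *base;
	size_t len;	/* total bytes */
	size_t el_size;	/* bytes per element */
	char *rp;	/* read pointer */
	char *wp;	/* write pointer */
};

/* Same arithmetic as get_nr_avail_ring_elements() above. */
static int ring_free_elements(struct ring *r)
{
	int nr_el;

	if (r->wp < r->rp) {
		nr_el = ((r->rp - r->wp) / r->el_size) - 1;
	} else {
		nr_el = (r->rp - r->base) / r->el_size;
		nr_el += ((r->base + r->len - r->wp) /
			  r->el_size) - 1;
	}
	return nr_el;
}

/* Same check as mhi_is_ring_full(): full when advancing wp by one
 * element (with wrap-around) would land on rp. */
static int ring_is_full(struct ring *r)
{
	char *tmp = r->wp + r->el_size;

	if (tmp >= r->base + r->len)
		tmp = r->base;
	return tmp == r->rp;
}
```

For a 4-element ring this gives a maximum of 3 free elements when empty and
0 free elements exactly when ring_is_full() reports true, which is the
invariant the queue paths above depend on.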
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
index 0bdc667830f0..a0ec76c56c6b 100644
--- a/drivers/bus/mhi/core/pm.c
+++ b/drivers/bus/mhi/core/pm.c
@@ -932,3 +932,43 @@ int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
return ret;
}
EXPORT_SYMBOL_GPL(mhi_force_rddm_mode);
+
+void mhi_device_get(struct mhi_device *mhi_dev)
+{
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+
+ atomic_inc(&mhi_dev->dev_wake);
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ mhi_cntrl->wake_get(mhi_cntrl, true);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+}
+EXPORT_SYMBOL_GPL(mhi_device_get);
+
+int mhi_device_get_sync(struct mhi_device *mhi_dev)
+{
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ int ret;
+
+ ret = __mhi_device_get_sync(mhi_cntrl);
+ if (!ret)
+ atomic_inc(&mhi_dev->dev_wake);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_device_get_sync);
+
+void mhi_device_put(struct mhi_device *mhi_dev)
+{
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+
+ atomic_dec(&mhi_dev->dev_wake);
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+ mhi_cntrl->runtime_get(mhi_cntrl, mhi_cntrl->priv_data);
+ mhi_cntrl->runtime_put(mhi_cntrl, mhi_cntrl->priv_data);
+ }
+
+ mhi_cntrl->wake_put(mhi_cntrl, false);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+}
+EXPORT_SYMBOL_GPL(mhi_device_put);
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 3e8f797c4c51..33ac04ff2ab4 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -351,6 +351,8 @@ struct mhi_controller_config {
* @runtime_put: CB function to decrement pm usage
* @lpm_disable: CB function to request disable link level low power modes
* @lpm_enable: CB function to request enable link level low power modes again
+ * @map_single: CB function to create TRE buffer
+ * @unmap_single: CB function to destroy TRE buffer
* @bounce_buf: Use of bounce buffer
* @buffer_len: Bounce buffer length
* @priv_data: Points to bus master's private data
@@ -423,6 +425,10 @@ struct mhi_controller {
void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
+ int (*map_single)(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf);
+ void (*unmap_single)(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf);
bool bounce_buf;
size_t buffer_len;
@@ -622,4 +628,53 @@ int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl);
*/
enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl);
+/**
+ * mhi_device_get - Disable device low power mode
+ * @mhi_dev: Device associated with the channel
+ */
+void mhi_device_get(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_device_get_sync - Disable device low power mode. Synchronously
+ * take the controller out of suspended state
+ * @mhi_dev: Device associated with the channel
+ */
+int mhi_device_get_sync(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_device_put - Re-enable device low power mode
+ * @mhi_dev: Device associated with the channel
+ */
+void mhi_device_put(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_prepare_for_transfer - Setup channel for data transfer
+ * @mhi_dev: Device associated with the channels
+ */
+int mhi_prepare_for_transfer(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_unprepare_from_transfer - Unprepare the channels
+ * @mhi_dev: Device associated with the channels
+ */
+void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_poll - Poll for any available data in DL direction
+ * @mhi_dev: Device associated with the channels
+ * @budget: # of events to process
+ */
+int mhi_poll(struct mhi_device *mhi_dev, u32 budget);
+
+/**
+ * mhi_queue_transfer - Send or receive data from client device over MHI channel
+ * @mhi_dev: Device associated with the channels
+ * @dir: DMA direction for the channel
+ * @buf: Buffer for holding the data
+ * @len: Buffer length
+ * @mflags: MHI transfer flags used for the transfer
+ */
+int mhi_queue_transfer(struct mhi_device *mhi_dev, enum dma_data_direction dir,
+ void *buf, size_t len, enum mhi_flags mflags);
+
#endif /* _MHI_H_ */
--
2.17.1
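One subtlety worth calling out in the pm.c changes above: mhi_device_get()
and mhi_device_put() are vote counters against device low power mode, and
mhi_driver_remove() drains any votes a client driver forgot to drop. A toy
model of that accounting follows; it uses a plain int instead of atomic_t,
has no locking, and all names are illustrative only:

```c
/* Toy model of the dev_wake voting in pm.c: each get() is one vote
 * against low power mode, mirrored into a controller-level count. */
struct dev_model {
	int dev_wake;	/* models mhi_dev->dev_wake (atomic_t) */
	int cntrl_wake;	/* models the controller wake count */
};

static void device_get(struct dev_model *d)
{
	d->dev_wake++;
	d->cntrl_wake++;	/* mhi_cntrl->wake_get() */
}

static void device_put(struct dev_model *d)
{
	d->dev_wake--;
	d->cntrl_wake--;	/* mhi_cntrl->wake_put() */
}

/* Models the drain loop at the end of mhi_driver_remove(): any votes
 * the client left behind are released before the device goes away. */
static void driver_remove(struct dev_model *d)
{
	while (d->dev_wake)
		device_put(d);
}
```

The drain loop is what keeps an unbalanced client from pinning the
controller awake forever after its driver is unbound.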
QMI helpers are not used exclusively by Qualcomm platforms. One of the
exceptions is the external modems becoming available in the near future.
As a side effect of removing the ARCH_QCOM dependency, the option also
loses its COMPILE_TEST build coverage.
Cc: Andy Gross <[email protected]>
Cc: Bjorn Andersson <[email protected]>
Cc: [email protected]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/soc/qcom/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index 79d826553ac8..ca057bc9aae6 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -88,7 +88,6 @@ config QCOM_PM
config QCOM_QMI_HELPERS
tristate
- depends on ARCH_QCOM || COMPILE_TEST
depends on NET
config QCOM_RMTFS_MEM
--
2.17.1
MHI is the transport layer used for communicating with external modems.
Hence, this commit adds MHI transport support to QRTR for transferring
QMI messages over the IPC Router.
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
net/qrtr/Kconfig | 7 ++
net/qrtr/Makefile | 2 +
net/qrtr/mhi.c | 207 ++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 216 insertions(+)
create mode 100644 net/qrtr/mhi.c
diff --git a/net/qrtr/Kconfig b/net/qrtr/Kconfig
index 63f89cc6e82c..8eb876471564 100644
--- a/net/qrtr/Kconfig
+++ b/net/qrtr/Kconfig
@@ -29,4 +29,11 @@ config QRTR_TUN
implement endpoints of QRTR, for purpose of tunneling data to other
hosts or testing purposes.
+config QRTR_MHI
+ tristate "MHI IPC Router channels"
+ depends on MHI_BUS
+ help
+ Say Y here to support MHI based IPC Router channels. MHI is the
+ transport used for communicating with external modems.
+
endif # QRTR
diff --git a/net/qrtr/Makefile b/net/qrtr/Makefile
index 1c6d6c120fb7..3dc0a7c9d455 100644
--- a/net/qrtr/Makefile
+++ b/net/qrtr/Makefile
@@ -5,3 +5,5 @@ obj-$(CONFIG_QRTR_SMD) += qrtr-smd.o
qrtr-smd-y := smd.o
obj-$(CONFIG_QRTR_TUN) += qrtr-tun.o
qrtr-tun-y := tun.o
+obj-$(CONFIG_QRTR_MHI) += qrtr-mhi.o
+qrtr-mhi-y := mhi.o
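The new net/qrtr/mhi.c below takes two references on each qrtr_mhi_pkt: one
from kref_init() owned by the sender and one from kref_get() owned by the MHI
completion callback, so whichever side finishes last frees the packet. A
simplified model of that ownership scheme (plain int instead of struct kref,
not thread-safe, names illustrative):

```c
#include <stdlib.h>

/* Sketch of the qrtr_mhi_pkt refcounting: two owners, last one out
 * triggers the release path. */
struct pkt {
	int refcount;
	int released;	/* set when the release path would run */
};

static struct pkt *pkt_alloc(void)
{
	struct pkt *p = calloc(1, sizeof(*p));

	if (!p)
		return NULL;
	p->refcount = 1;	/* kref_init(): sender's reference */
	p->refcount++;		/* kref_get(): callback's reference */
	return p;
}

static void pkt_put(struct pkt *p)
{
	if (--p->refcount == 0)
		p->released = 1;	/* qrtr_mhi_pkt_release() runs here */
}
```

In the driver below, qcom_mhi_qrtr_ul_callback() drops the callback's
reference once the transfer completes, and the sender drops its own after
waiting on pkt->done; either order is safe with this scheme.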
diff --git a/net/qrtr/mhi.c b/net/qrtr/mhi.c
new file mode 100644
index 000000000000..c85041a22f85
--- /dev/null
+++ b/net/qrtr/mhi.c
@@ -0,0 +1,207 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+ */
+
+#include <linux/mhi.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include <linux/skbuff.h>
+#include <net/sock.h>
+
+#include "qrtr.h"
+
+struct qrtr_mhi_dev {
+ struct qrtr_endpoint ep;
+ struct mhi_device *mhi_dev;
+ struct device *dev;
+ spinlock_t ul_lock; /* lock to protect ul_pkts */
+ struct list_head ul_pkts;
+ atomic_t in_reset;
+};
+
+struct qrtr_mhi_pkt {
+ struct list_head node;
+ struct sk_buff *skb;
+ struct kref refcount;
+ struct completion done;
+};
+
+static void qrtr_mhi_pkt_release(struct kref *ref)
+{
+ struct qrtr_mhi_pkt *pkt = container_of(ref, struct qrtr_mhi_pkt,
+ refcount);
+ struct sock *sk = pkt->skb->sk;
+
+ consume_skb(pkt->skb);
+ if (sk)
+ sock_put(sk);
+
+ kfree(pkt);
+}
+
+/* From MHI to QRTR */
+static void qcom_mhi_qrtr_dl_callback(struct mhi_device *mhi_dev,
+ struct mhi_result *mhi_res)
+{
+ struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev);
+ int rc;
+
+ if (!qdev || mhi_res->transaction_status)
+ return;
+
+ rc = qrtr_endpoint_post(&qdev->ep, mhi_res->buf_addr,
+ mhi_res->bytes_xferd);
+ if (rc == -EINVAL)
+ dev_err(qdev->dev, "invalid ipcrouter packet\n");
+}
+
+/* From QRTR to MHI */
+static void qcom_mhi_qrtr_ul_callback(struct mhi_device *mhi_dev,
+ struct mhi_result *mhi_res)
+{
+ struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev);
+ struct qrtr_mhi_pkt *pkt;
+ unsigned long flags;
+
+ spin_lock_irqsave(&qdev->ul_lock, flags);
+ pkt = list_first_entry(&qdev->ul_pkts, struct qrtr_mhi_pkt, node);
+ list_del(&pkt->node);
+ complete_all(&pkt->done);
+
+ kref_put(&pkt->refcount, qrtr_mhi_pkt_release);
+ spin_unlock_irqrestore(&qdev->ul_lock, flags);
+}
+
+static void qcom_mhi_qrtr_status_callback(struct mhi_device *mhi_dev,
+ enum mhi_callback mhi_cb)
+{
+ struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev);
+ struct qrtr_mhi_pkt *pkt;
+ unsigned long flags;
+
+ if (mhi_cb != MHI_CB_FATAL_ERROR)
+ return;
+
+ atomic_inc(&qdev->in_reset);
+ spin_lock_irqsave(&qdev->ul_lock, flags);
+ list_for_each_entry(pkt, &qdev->ul_pkts, node)
+ complete_all(&pkt->done);
+ spin_unlock_irqrestore(&qdev->ul_lock, flags);
+}
+
+/* Send data over MHI */
+static int qcom_mhi_qrtr_send(struct qrtr_endpoint *ep, struct sk_buff *skb)
+{
+ struct qrtr_mhi_dev *qdev = container_of(ep, struct qrtr_mhi_dev, ep);
+ struct qrtr_mhi_pkt *pkt;
+ int rc;
+
+ rc = skb_linearize(skb);
+ if (rc) {
+ kfree_skb(skb);
+ return rc;
+ }
+
+ pkt = kzalloc(sizeof(*pkt), GFP_KERNEL);
+ if (!pkt) {
+ kfree_skb(skb);
+ return -ENOMEM;
+ }
+
+ init_completion(&pkt->done);
+ kref_init(&pkt->refcount);
+ kref_get(&pkt->refcount);
+ pkt->skb = skb;
+
+ spin_lock_bh(&qdev->ul_lock);
+ list_add_tail(&pkt->node, &qdev->ul_pkts);
+ rc = mhi_queue_transfer(qdev->mhi_dev, DMA_TO_DEVICE, skb, skb->len,
+ MHI_EOT);
+ if (rc) {
+ list_del(&pkt->node);
+ kfree_skb(skb);
+ kfree(pkt);
+ spin_unlock_bh(&qdev->ul_lock);
+ return rc;
+ }
+
+ spin_unlock_bh(&qdev->ul_lock);
+ if (skb->sk)
+ sock_hold(skb->sk);
+
+ rc = wait_for_completion_interruptible_timeout(&pkt->done, HZ * 5);
+ if (atomic_read(&qdev->in_reset))
+ rc = -ECONNRESET;
+ else if (rc == 0)
+ rc = -ETIMEDOUT;
+ else if (rc > 0)
+ rc = 0;
+
+ kref_put(&pkt->refcount, qrtr_mhi_pkt_release);
+
+ return rc;
+}
+
+static int qcom_mhi_qrtr_probe(struct mhi_device *mhi_dev,
+ const struct mhi_device_id *id)
+{
+ struct qrtr_mhi_dev *qdev;
+ u32 net_id;
+ int rc;
+
+ qdev = devm_kzalloc(&mhi_dev->dev, sizeof(*qdev), GFP_KERNEL);
+ if (!qdev)
+ return -ENOMEM;
+
+ qdev->mhi_dev = mhi_dev;
+ qdev->dev = &mhi_dev->dev;
+ qdev->ep.xmit = qcom_mhi_qrtr_send;
+ atomic_set(&qdev->in_reset, 0);
+
+ net_id = QRTR_EP_NID_AUTO;
+
+ INIT_LIST_HEAD(&qdev->ul_pkts);
+ spin_lock_init(&qdev->ul_lock);
+
+ dev_set_drvdata(&mhi_dev->dev, qdev);
+ rc = qrtr_endpoint_register(&qdev->ep, net_id);
+ if (rc)
+ return rc;
+
+ dev_dbg(qdev->dev, "Qualcomm MHI QRTR driver probed\n");
+
+ return 0;
+}
+
+static void qcom_mhi_qrtr_remove(struct mhi_device *mhi_dev)
+{
+ struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev);
+
+ qrtr_endpoint_unregister(&qdev->ep);
+ dev_set_drvdata(&mhi_dev->dev, NULL);
+}
+
+static const struct mhi_device_id qcom_mhi_qrtr_id_table[] = {
+ { .chan = "IPCR" },
+ {}
+};
+MODULE_DEVICE_TABLE(mhi, qcom_mhi_qrtr_id_table);
+
+static struct mhi_driver qcom_mhi_qrtr_driver = {
+ .probe = qcom_mhi_qrtr_probe,
+ .remove = qcom_mhi_qrtr_remove,
+ .dl_xfer_cb = qcom_mhi_qrtr_dl_callback,
+ .ul_xfer_cb = qcom_mhi_qrtr_ul_callback,
+ .status_cb = qcom_mhi_qrtr_status_callback,
+ .id_table = qcom_mhi_qrtr_id_table,
+ .driver = {
+ .name = "qcom_mhi_qrtr",
+ },
+};
+
+module_driver(qcom_mhi_qrtr_driver, mhi_driver_register,
+ mhi_driver_unregister);
+
+MODULE_DESCRIPTION("Qualcomm IPC-Router MHI interface driver");
+MODULE_LICENSE("GPL v2");
--
2.17.1
On Thu, Jan 23, 2020 at 04:48:22PM +0530, Manivannan Sadhasivam wrote:
> +static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
> + struct mhi_device *mhi_dev)
> +{
> + kfree(mhi_dev);
> +}
You just leaked memory, please read the documentation for
device_initialize().
:(
On Thu, Jan 23, 2020 at 12:19 PM Manivannan Sadhasivam
<[email protected]> wrote:
> +int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
> + void __iomem *base, u32 offset, u32 *out)
> +{
> + u32 tmp = readl_relaxed(base + offset);
....
> +void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
> + u32 offset, u32 val)
> +{
> + writel_relaxed(val, base + offset);
Please avoid using _relaxed accessors by default, and use the regular
ones instead. There are a number of things that can go wrong with
the relaxed version, so ideally each caller should have a comment
explaining why this instance is safe without the barriers and why it
matters to not have it.
If there are performance critical callers of mhi_read_reg/mhi_write_reg,
you could add mhi_read_reg_relaxed/mhi_write_reg_relaxed for those
and apply the same rules there.
Usually most mmio accesses are only needed for reconfiguration or
other slow paths.
Arnd
On Thu, Jan 23, 2020 at 12:19 PM Manivannan Sadhasivam
<[email protected]> wrote:
>
> QMI helpers are not always used by Qualcomm platforms. One of the
> exceptions is the external modems arriving in the near future. As a
> side effect of removing the dependency, it is also going to lose
> COMPILE_TEST build coverage.
>
> Cc: Andy Gross <[email protected]>
> Cc: Bjorn Andersson <[email protected]>
> Cc: [email protected]
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> ---
> drivers/soc/qcom/Kconfig | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
> index 79d826553ac8..ca057bc9aae6 100644
> --- a/drivers/soc/qcom/Kconfig
> +++ b/drivers/soc/qcom/Kconfig
> @@ -88,7 +88,6 @@ config QCOM_PM
>
> config QCOM_QMI_HELPERS
> tristate
> - depends on ARCH_QCOM || COMPILE_TEST
> depends on NET
Should this be moved out of drivers/soc/ then?
Arnd
Hi Greg,
On Thu, Jan 23, 2020 at 12:33:42PM +0100, Greg KH wrote:
> On Thu, Jan 23, 2020 at 04:48:22PM +0530, Manivannan Sadhasivam wrote:
> > +static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
> > + struct mhi_device *mhi_dev)
> > +{
> > + kfree(mhi_dev);
> > +}
>
> You just leaked memory, please read the documentation for
> device_initialize().
>
> :(
Ah, okay. My bad. I should've used put_device(&mhi_dev->dev) here as well to
drop the ref count. Will add it in the next iteration.
Thanks,
Mani
Hi Arnd,
On Thu, Jan 23, 2020 at 12:39:06PM +0100, Arnd Bergmann wrote:
> On Thu, Jan 23, 2020 at 12:19 PM Manivannan Sadhasivam
> <[email protected]> wrote:
>
> > +int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
> > + void __iomem *base, u32 offset, u32 *out)
> > +{
> > + u32 tmp = readl_relaxed(base + offset);
> ....
> > +void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
> > + u32 offset, u32 val)
> > +{
> > + writel_relaxed(val, base + offset);
>
> Please avoid using _relaxed accessors by default, and use the regular
> ones instead. There are a number of things that can go wrong with
> the relaxed version, so ideally each caller should have a comment
> explaining why this instance is safe without the barriers and why it
> matters to not have it.
>
> If there are performance critical callers of mhi_read_reg/mhi_write_reg,
> you could add mhi_read_reg_relaxed/mhi_write_reg_relaxed for those
> and apply the same rules there.
>
> Usually most mmio accesses are only needed for reconfiguration or
> other slow paths.
>
Fair point. I'll defer to readl/writel APIs and I also need to add
le32_to_cpu/cpu_to_le32 to them.
Thanks,
Mani
> Arnd
Hi Arnd,
On Thu, Jan 23, 2020 at 12:45:32PM +0100, Arnd Bergmann wrote:
> On Thu, Jan 23, 2020 at 12:19 PM Manivannan Sadhasivam
> <[email protected]> wrote:
> >
> > QMI helpers are not always used by Qualcomm platforms. One of the
> > exceptions is the external modems arriving in the near future. As a
> > side effect of removing the dependency, it is also going to lose
> > COMPILE_TEST build coverage.
> >
> > Cc: Andy Gross <[email protected]>
> > Cc: Bjorn Andersson <[email protected]>
> > Cc: [email protected]
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > ---
> > drivers/soc/qcom/Kconfig | 1 -
> > 1 file changed, 1 deletion(-)
> >
> > diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
> > index 79d826553ac8..ca057bc9aae6 100644
> > --- a/drivers/soc/qcom/Kconfig
> > +++ b/drivers/soc/qcom/Kconfig
> > @@ -88,7 +88,6 @@ config QCOM_PM
> >
> > config QCOM_QMI_HELPERS
> > tristate
> > - depends on ARCH_QCOM || COMPILE_TEST
> > depends on NET
>
> Should this be moved out of drivers/soc/ then?
>
Good question. I thought this change would trigger that question anyway ;)
We'll need to hear from Bjorn on this. I agree that it should be moved out
of drivers/soc!
Thanks,
Mani
> Arnd
On Thu, Jan 23, 2020 at 1:01 PM Manivannan Sadhasivam
<[email protected]> wrote:
> On Thu, Jan 23, 2020 at 12:39:06PM +0100, Arnd Bergmann wrote:
> > On Thu, Jan 23, 2020 at 12:19 PM Manivannan Sadhasivam
> > <[email protected]> wrote:
> >
> > > +int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
> > > + void __iomem *base, u32 offset, u32 *out)
> > > +{
> > > + u32 tmp = readl_relaxed(base + offset);
> > ....
> > > +void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
> > > + u32 offset, u32 val)
> > > +{
> > > + writel_relaxed(val, base + offset);
> >
> > Please avoid using _relaxed accessors by default, and use the regular
> > ones instead. There are a number of things that can go wrong with
> > the relaxed version, so ideally each caller should have a comment
> > explaining why this instance is safe without the barriers and why it
> > matters to not have it.
> >
> > If there are performance critical callers of mhi_read_reg/mhi_write_reg,
> > you could add mhi_read_reg_relaxed/mhi_write_reg_relaxed for those
> > and apply the same rules there.
> >
> > Usually most mmio accesses are only needed for reconfiguration or
> > other slow paths.
> >
>
> Fair point. I'll defer to readl/writel APIs and I also need to add
> le32_to_cpu/cpu_to_le32 to them.
What do you need the byteswap for? All of the above already
assume that the registers are little-endian.
Arnd
On Thu, Jan 23, 2020 at 12:18 PM Manivannan Sadhasivam
<[email protected]> wrote:
> +============
> +MHI Topology
> +============
> +
> +This document provides information about the MHI topology modeling and
> +representation in the kernel.
> +
> +MHI Controller
> +--------------
> +
> +MHI controller driver manages the interaction with the MHI client devices
> +such as the external modems and WiFi chipsets. It is also the MHI bus master
> +which is in charge of managing the physical link between the host and device.
> +It is however not involved in the actual data transfer as the data transfer
> +is taken care by the physical bus such as PCIe. Each controller driver exposes
> +channels and events based on the client device type.
> +
> +Below are the roles of the MHI controller driver:
> +
> +* Turns on the physical bus and establishes the link to the device
> +* Configures IRQs, SMMU, and IOMEM
> +* Allocates struct mhi_controller and registers with the MHI bus framework
> + with channel and event configurations using mhi_register_controller.
> +* Initiates power on and shutdown sequence
> +* Initiates suspend and resume power management operations of the device.
I don't see any callers of mhi_register_controller(). Did I just miss it or did
you not post one? I'm particularly interested in where the configuration comes
from, is this hardcoded in the driver, or parsed from firmware or from registers
in the hardware itself?
Arnd
On Thu, Jan 23, 2020 at 01:44:32PM +0100, Arnd Bergmann wrote:
> On Thu, Jan 23, 2020 at 1:01 PM Manivannan Sadhasivam
> <[email protected]> wrote:
> > On Thu, Jan 23, 2020 at 12:39:06PM +0100, Arnd Bergmann wrote:
> > > On Thu, Jan 23, 2020 at 12:19 PM Manivannan Sadhasivam
> > > <[email protected]> wrote:
> > >
> > > > +int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
> > > > + void __iomem *base, u32 offset, u32 *out)
> > > > +{
> > > > + u32 tmp = readl_relaxed(base + offset);
> > > ....
> > > > +void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
> > > > + u32 offset, u32 val)
> > > > +{
> > > > + writel_relaxed(val, base + offset);
> > >
> > > Please avoid using _relaxed accessors by default, and use the regular
> > > ones instead. There are a number of things that can go wrong with
> > > the relaxed version, so ideally each caller should have a comment
> > > explaining why this instance is safe without the barriers and why it
> > > matters to not have it.
> > >
> > > If there are performance critical callers of mhi_read_reg/mhi_write_reg,
> > > you could add mhi_read_reg_relaxed/mhi_write_reg_relaxed for those
> > > and apply the same rules there.
> > >
> > > Usually most mmio accesses are only needed for reconfiguration or
> > > other slow paths.
> > >
> >
> > Fair point. I'll defer to readl/writel APIs and I also need to add
> > le32_to_cpu/cpu_to_le32 to them.
>
> What do you need the byteswap for? All of the above already
> assume that the registers are little-endian.
>
I thought readl/writel were native endian... Now that I've read the macro
definitions again, it looks like these APIs are little-endian on all arches,
so the byteswap is not needed. Sorry for the confusion.
Thanks,
Mani
> Arnd
On Thu, Jan 23, 2020 at 01:58:22PM +0100, Arnd Bergmann wrote:
> On Thu, Jan 23, 2020 at 12:18 PM Manivannan Sadhasivam
> <[email protected]> wrote:
> > +============
> > +MHI Topology
> > +============
> > +
> > +This document provides information about the MHI topology modeling and
> > +representation in the kernel.
> > +
> > +MHI Controller
> > +--------------
> > +
> > +MHI controller driver manages the interaction with the MHI client devices
> > +such as the external modems and WiFi chipsets. It is also the MHI bus master
> > +which is in charge of managing the physical link between the host and device.
> > +It is however not involved in the actual data transfer as the data transfer
> > +is taken care by the physical bus such as PCIe. Each controller driver exposes
> > +channels and events based on the client device type.
> > +
> > +Below are the roles of the MHI controller driver:
> > +
> > +* Turns on the physical bus and establishes the link to the device
> > +* Configures IRQs, SMMU, and IOMEM
> > +* Allocates struct mhi_controller and registers with the MHI bus framework
> > + with channel and event configurations using mhi_register_controller.
> > +* Initiates power on and shutdown sequence
> > +* Initiates suspend and resume power management operations of the device.
>
> I don't see any callers of mhi_register_controller(). Did I just miss it or did
> you not post one? I'm particularly interested in where the configuration comes
> from, is this hardcoded in the driver, or parsed from firmware or from registers
> in the hardware itself?
>
I have not included the controller driver in this patchset. But you can take a
look at the ath11k controller driver here:
https://git.linaro.org/people/manivannan.sadhasivam/linux.git/tree/drivers/net/wireless/ath/ath11k/mhi.c?h=ath11k-qca6390-mhi#n13
So the configuration comes from the static structures defined in the controller
driver. An earlier revision derived the configuration from devicetree, but there
are many cases where this MHI bus is used in non-DT environments like x86.
So in order to be platform agnostic, we chose the static declaration method.
In future we can add DT/ACPI support for the applicable parameters.
I will include the link to this controller driver in the cover letter of future
iterations.
Thanks,
Mani
> Arnd
On Thu, Jan 23, 2020 at 2:10 PM Manivannan Sadhasivam
<[email protected]> wrote:
>
> On Thu, Jan 23, 2020 at 01:58:22PM +0100, Arnd Bergmann wrote:
> > On Thu, Jan 23, 2020 at 12:18 PM Manivannan Sadhasivam <[email protected]> wrote:
> >
> > I don't see any callers of mhi_register_controller(). Did I just miss it or did
> > you not post one? I'm particularly interested in where the configuration comes
> > from, is this hardcoded in the driver, or parsed from firmware or from registers
> > in the hardware itself?
> >
>
> I have not included the controller driver in this patchset. But you can take a
> look at the ath11k controller driver here:
> https://git.linaro.org/people/manivannan.sadhasivam/linux.git/tree/drivers/net/wireless/ath/ath11k/mhi.c?h=ath11k-qca6390-mhi#n13
>
> So the configuration comes from the static structures defined in the controller
> driver. An earlier revision derived the configuration from devicetree, but there
> are many cases where this MHI bus is used in non-DT environments like x86.
> So in order to be platform agnostic, we chose the static declaration method.
>
> In future we can add DT/ACPI support for the applicable parameters.
What determines the configuration? Is this always something that is fixed
in hardware, or can some of the properties be changed based on what
firmware runs the device?
If this is determined by the firmware, maybe the configuration would also
need to be loaded from the file that contains the firmware, which in turn
could be a blob in DT.
Arnd
On Thu, Jan 23, 2020 at 02:19:51PM +0100, Arnd Bergmann wrote:
> On Thu, Jan 23, 2020 at 2:10 PM Manivannan Sadhasivam
> <[email protected]> wrote:
> >
> > On Thu, Jan 23, 2020 at 01:58:22PM +0100, Arnd Bergmann wrote:
> > > On Thu, Jan 23, 2020 at 12:18 PM Manivannan Sadhasivam <[email protected]> wrote:
> > >
> > > I don't see any callers of mhi_register_controller(). Did I just miss it or did
> > > you not post one? I'm particularly interested in where the configuration comes
> > > from, is this hardcoded in the driver, or parsed from firmware or from registers
> > > in the hardware itself?
> > >
> >
> > I have not included the controller driver in this patchset. But you can take a
> > look at the ath11k controller driver here:
> > https://git.linaro.org/people/manivannan.sadhasivam/linux.git/tree/drivers/net/wireless/ath/ath11k/mhi.c?h=ath11k-qca6390-mhi#n13
> >
> > So the configuration comes from the static structures defined in the controller
> > driver. An earlier revision derived the configuration from devicetree, but there
> > are many cases where this MHI bus is used in non-DT environments like x86.
> > So in order to be platform agnostic, we chose the static declaration method.
> >
> > In future we can add DT/ACPI support for the applicable parameters.
>
> What determines the configuration? Is this always something that is fixed
> in hardware, or can some of the properties be changed based on what
> firmware runs the device?
>
AFAIK, these configurations are fixed in hardware (they could come from
the firmware, I'm not sure, but they don't change with firmware revisions
for sure).
Defining them in the driver itself implies that they don't change. But I'll
confirm this with the Qcom folks.
Thanks,
Mani
> If this is determined by the firmware, maybe the configuration would also
> need to be loaded from the file that contains the firmware, which in turn
> could be a blob in DT.
>
> Arnd
On 1/23/2020 5:00 AM, Manivannan Sadhasivam wrote:
> Hi Arnd,
>
> On Thu, Jan 23, 2020 at 12:39:06PM +0100, Arnd Bergmann wrote:
>> On Thu, Jan 23, 2020 at 12:19 PM Manivannan Sadhasivam
>> <[email protected]> wrote:
>>
>>> +int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
>>> + void __iomem *base, u32 offset, u32 *out)
>>> +{
>>> + u32 tmp = readl_relaxed(base + offset);
>> ....
>>> +void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
>>> + u32 offset, u32 val)
>>> +{
>>> + writel_relaxed(val, base + offset);
>>
>> Please avoid using _relaxed accessors by default, and use the regular
>> ones instead. There are a number of things that can go wrong with
>> the relaxed version, so ideally each caller should have a comment
>> explaining why this instance is safe without the barriers and why it
>> matters to not have it.
>>
>> If there are performance critical callers of mhi_read_reg/mhi_write_reg,
>> you could add mhi_read_reg_relaxed/mhi_write_reg_relaxed for those
>> and apply the same rules there.
>>
>> Usually most mmio accesses are only needed for reconfiguration or
>> other slow paths.
>>
>
> Fair point. I'll defer to readl/writel APIs and I also need to add
> le32_to_cpu/cpu_to_le32 to them.
I would expect we would be using these in the "hot" path.
I'm a bit confused: I thought the convention was to put a comment explaining
why a barrier was necessary, but now we should be putting a comment explaining
why a barrier is not necessary?
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
On 1/23/2020 6:30 AM, Manivannan Sadhasivam wrote:
> On Thu, Jan 23, 2020 at 02:19:51PM +0100, Arnd Bergmann wrote:
>> On Thu, Jan 23, 2020 at 2:10 PM Manivannan Sadhasivam
>> <[email protected]> wrote:
>>>
>>> On Thu, Jan 23, 2020 at 01:58:22PM +0100, Arnd Bergmann wrote:
>>>> On Thu, Jan 23, 2020 at 12:18 PM Manivannan Sadhasivam <[email protected]> wrote:
>>>>
>>>> I don't see any callers of mhi_register_controller(). Did I just miss it or did
>>>> you not post one? I'm particularly interested in where the configuration comes
>>>> from, is this hardcoded in the driver, or parsed from firmware or from registers
>>>> in the hardware itself?
>>>>
>>>
>>> I have not included the controller driver in this patchset. But you can take a
>>> look at the ath11k controller driver here:
>>> https://git.linaro.org/people/manivannan.sadhasivam/linux.git/tree/drivers/net/wireless/ath/ath11k/mhi.c?h=ath11k-qca6390-mhi#n13
>>>
>>> So the configuration comes from the static structures defined in the controller
>>> driver. An earlier revision derived the configuration from devicetree, but there
>>> are many cases where this MHI bus is used in non-DT environments like x86.
>>> So in order to be platform agnostic, we chose the static declaration method.
>>>
>>> In future we can add DT/ACPI support for the applicable parameters.
>>
>> What determines the configuration? Is this always something that is fixed
>> in hardware, or can some of the properties be changed based on what
>> firmware runs the device?
>>
>
> AFAIK, these configurations are fixed in hardware (they could come from
> the firmware, I'm not sure, but they don't change with firmware revisions
> for sure).
>
> Defining them in the driver itself implies that they don't change. But I'll
> confirm this with the Qcom folks.
>
> Thanks,
> Mani
>
>> If this is determined by the firmware, maybe the configuration would also
>> need to be loaded from the file that contains the firmware, which in turn
>> could be a blob in DT.
>>
>> Arnd
We can't derive the configuration from hardware, and it's something that
is currently known a priori, since the host (Linux) needs to initialize
the hardware with the configuration before it can communicate with the
device (i.e. the on-device FW).
99% of the time the configuration is fixed; however, there have been
instances where features have been added on the device, which result in
new channels, which then impact the configuration. In the cases I'm
aware of, both sides were updated in lockstep. I don't know how
upstream would handle it. I'm thinking we can ignore that case until it
comes up.
DT/ACPI is tricky, since the cases where we want this currently are
essentially standalone PCI(e) cards. Those are likely to be on systems
which don't support DT (i.e. x86), and there really isn't a place in ACPI
to put PCI(e) device configuration information, since it's supposed to be
a discoverable bus.
There are hardware limitations to the configuration, and that varies
from device to device. Since the host (Linux) programs the
configuration into the hardware, it's possible for an invalid
configuration to be programmed, but I would expect that in the majority
of cases (i.e. programming a channel that the device FW doesn't know
about), there is no adverse impact.
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
On 1/23/2020 4:18 AM, Manivannan Sadhasivam wrote:
> MHI (Modem Host Interface) is a communication protocol used by the
> host processors to control and communicate with modems over a high
> speed peripheral bus or shared memory. The MHI protocol has been
> designed and developed by Qualcomm Innovation Center, Inc., for use
> in their modems. This commit adds the documentation for the bus and
> the implementation in Linux kernel.
>
> This is based on the patch submitted by Sujeev Dias:
> https://lkml.org/lkml/2018/7/9/987
>
> Cc: Jonathan Corbet <[email protected]>
> Cc: [email protected]
> Signed-off-by: Sujeev Dias <[email protected]>
> Signed-off-by: Siddartha Mohanadoss <[email protected]>
> [mani: converted to .rst and splitted the patch]
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> ---
> Documentation/index.rst | 1 +
> Documentation/mhi/index.rst | 18 +++
> Documentation/mhi/mhi.rst | 218 +++++++++++++++++++++++++++++++++
> Documentation/mhi/topology.rst | 60 +++++++++
> 4 files changed, 297 insertions(+)
> create mode 100644 Documentation/mhi/index.rst
> create mode 100644 Documentation/mhi/mhi.rst
> create mode 100644 Documentation/mhi/topology.rst
>
> diff --git a/Documentation/index.rst b/Documentation/index.rst
> index e99d0bd2589d..edc9b211bbff 100644
> --- a/Documentation/index.rst
> +++ b/Documentation/index.rst
> @@ -133,6 +133,7 @@ needed).
> misc-devices/index
> mic/index
> scheduler/index
> + mhi/index
>
> Architecture-agnostic documentation
> -----------------------------------
> diff --git a/Documentation/mhi/index.rst b/Documentation/mhi/index.rst
> new file mode 100644
> index 000000000000..1d8dec302780
> --- /dev/null
> +++ b/Documentation/mhi/index.rst
> @@ -0,0 +1,18 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +===
> +MHI
> +===
> +
> +.. toctree::
> + :maxdepth: 1
> +
> + mhi
> + topology
> +
> +.. only:: subproject and html
> +
> + Indices
> + =======
> +
> + * :ref:`genindex`
> diff --git a/Documentation/mhi/mhi.rst b/Documentation/mhi/mhi.rst
> new file mode 100644
> index 000000000000..718dbbdc7a04
> --- /dev/null
> +++ b/Documentation/mhi/mhi.rst
> @@ -0,0 +1,218 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +==========================
> +MHI (Modem Host Interface)
> +==========================
> +
> +This document provides information about the MHI protocol.
> +
> +Overview
> +========
> +
> +MHI is a protocol developed by Qualcomm Innovation Center, Inc., It is used
The "," suggests the sentence is going to continue, yet "It" has a
capital "I" as if it's the start of a new sentence. This seems wrong to
me. Perhaps drop that final comma?
> +by the host processors to control and communicate with modem devices over high
> +speed peripheral buses or shared memory. Even though MHI can be easily adapted
> +to any peripheral buses, it is primarily used with PCIe based devices. MHI
> +provides logical channels over the physical buses and allows transporting the
> +modem protocols, such as IP data packets, modem control messages, and
> +diagnostics over at least one of those logical channels. Also, the MHI
> +protocol provides data acknowledgment feature and manages the power state of the
> +modems via one or more logical channels.
> +
> +MHI Internals
> +=============
> +
> +MMIO
> +----
> +
> +MMIO (Memory mapped IO) consists of a set of registers in the device hardware,
> +which are mapped to the host memory space by the peripheral buses like PCIe.
> +Following are the major components of MMIO register space:
> +
> +MHI control registers: Access to MHI configurations registers
> +
> +MHI BHI registers: BHI (Boot Host Interface) registers are used by the host
> +for downloading the firmware to the device before MHI initialization.
> +
> +Channel Doorbell array: Channel Doorbell (DB) registers used by the host to
> +notify the device when there is new work to do.
> +
> +Event Doorbell array: Associated with event context array, the Event Doorbell
> +(DB) registers are used by the host to notify the device when new events are
> +available.
> +
> +Debug registers: A set of registers and counters used by the device to expose
> +debugging information like performance, functional, and stability to the host.
> +
> +Data structures
> +---------------
> +
> +All data structures used by MHI are in the host system memory. Using the
> +physical interface, the device accesses those data structures. MHI data
> +structures and data buffers in the host system memory regions are mapped for
> +the device.
> +
> +Channel context array: All channel configurations are organized in channel
> +context data array.
> +
> +Transfer rings: Used by the host to schedule work items for a channel. The
> +transfer rings are organized as a circular queue of Transfer Descriptors (TD).
> +
> +Event context array: All event configurations are organized in the event context
> +data array.
> +
> +Event rings: Used by the device to send completion and state transition messages
> +to the host
> +
> +Command context array: All command configurations are organized in command
> +context data array.
> +
> +Command rings: Used by the host to send MHI commands to the device. The command
> +rings are organized as a circular queue of Command Descriptors (CD).
> +
> +Channels
> +--------
> +
> +MHI channels are logical, unidirectional data pipes between a host and a device.
> +The concept of channels in MHI is similar to endpoints in USB. MHI supports up
> +to 256 channels. However, specific device implementations may support less than
> +the maximum number of channels allowed.
> +
> +Two unidirectional channels with their associated transfer rings form a
> +bidirectional data pipe, which can be used by the upper-layer protocols to
> +transport application data packets (such as IP packets, modem control messages,
> +diagnostics messages, and so on). Each channel is associated with a single
> +transfer ring.
> +
> +Transfer rings
> +--------------
> +
> +Transfers between the host and device are organized by channels and defined by
> +Transfer Descriptors (TD). TDs are managed through transfer rings, which are
> +defined for each channel between the device and host and reside in the host
> +memory. TDs consist of one or more ring elements (or transfer blocks)::
> +
> + [Read Pointer (RP)] ----------->[Ring Element] } TD
> + [Write Pointer (WP)]- [Ring Element]
> + - [Ring Element]
> + --------->[Ring Element]
> + [Ring Element]
> +
> +Below is the basic usage of transfer rings:
> +
> +* Host allocates memory for transfer ring.
> +* Host sets the base pointer, read pointer, and write pointer in corresponding
> + channel context.
> +* Ring is considered empty when RP == WP.
> +* Ring is considered full when WP + 1 == RP.
> +* RP indicates the next element to be serviced by the device.
> +* When the host has a new buffer to send, it updates the ring element with
> + buffer information, increments the WP to the next element and rings the
> + associated channel DB.
> +
> +Event rings
> +-----------
> +
> +Events from the device to host are organized in event rings and defined by Event
> +Descriptors (ED). Event rings are used by the device to report events such as
> +data transfer completion status, command completion status, and state changes
> +to the host. Event rings are the array of EDs that resides in the host
> +memory. EDs consist of one or more ring elements (or transfer blocks)::
> +
> + [Read Pointer (RP)] ----------->[Ring Element] } ED
> + [Write Pointer (WP)]- [Ring Element]
> + - [Ring Element]
> + --------->[Ring Element]
> + [Ring Element]
> +
> +Below is the basic usage of event rings:
> +
> +* Host allocates memory for event ring.
> +* Host sets the base pointer, read pointer, and write pointer in corresponding
> + event context.
> +* Both the host and device have a local copy of the RP and WP.
> +* Ring is considered empty (no events to service) when WP + 1 == RP.
> +* Ring is considered full of events when RP == WP.
> +* When there is a new event the device needs to send, the device updates the
> + ED pointed to by the RP, increments the RP to the next element and triggers
> + the interrupt.
> +
> +Ring Element
> +------------
> +
> +A Ring Element is a data structure used to transfer a single block
> +of data between the host and the device. Transfer ring element types contain a
> +single buffer pointer, the size of the buffer, and additional control
> +information. Other ring element types may only contain control and status
> +information. For single buffer operations, a ring descriptor is composed of a
> +single element. For large multi-buffer operations (such as scatter and gather),
> +elements can be chained to form a longer descriptor.
> +
> +MHI Operations
> +==============
> +
> +MHI States
> +----------
> +
> +MHI_STATE_RESET
> +~~~~~~~~~~~~~~~
> +MHI is in reset state after power-up or hardware reset. The host is not allowed
> +to access device MMIO register space.
> +
> +MHI_STATE_READY
> +~~~~~~~~~~~~~~~
> +MHI is ready for initialization. The host can start MHI initialization by
> +programming MMIO registers.
> +
> +MHI_STATE_M0
> +~~~~~~~~~~~~
> +MHI is running and operational in the device. The host can start channels by
> +issuing channel start command.
> +
> +MHI_STATE_M1
> +~~~~~~~~~~~~
> +MHI operation is suspended by the device. This state is entered when the
> +device detects inactivity on the physical interface for a preset time.
> +
> +MHI_STATE_M2
> +~~~~~~~~~~~~
> +MHI is in low power state. MHI operation is suspended and the device may
> +enter lower power mode.
> +
> +MHI_STATE_M3
> +~~~~~~~~~~~~
> +MHI operation stopped by the host. This state is entered when the host suspends
> +MHI operation.
> +
> +MHI Initialization
> +------------------
> +
> +After system boots, the device is enumerated over the physical interface.
> +In the case of PCIe, the device is enumerated and assigned BAR-0 for
> +the device's MMIO register space. To initialize the MHI in a device,
> +the host performs the following operations:
> +
> +* Allocates the MHI context for event, channel and command arrays.
> +* Initializes the context array, and prepares interrupts.
> +* Waits until the device enters READY state.
> +* Programs MHI MMIO registers and sets device into MHI_M0 state.
> +* Waits for the device to enter M0 state.
> +
> +MHI Data Transfer
> +-----------------
> +
> +MHI data transfer is initiated by the host to transfer data to the device.
> +The following sequence of operations is performed by the host to transfer
> +data to the device:
> +
> +* Host prepares TD with buffer information.
> +* Host increments the WP of the corresponding channel transfer ring.
> +* Host rings the channel DB register.
> +* Device wakes up to process the TD.
> +* Device generates a completion event for the processed TD by updating ED.
> +* Device increments the RP of the corresponding event ring.
> +* Device triggers IRQ to wake up the host.
> +* Host wakes up and checks the event ring for completion event.
> +* Host updates the WP of the corresponding event ring to indicate that the
> + data transfer has been completed successfully.
> +
> diff --git a/Documentation/mhi/topology.rst b/Documentation/mhi/topology.rst
> new file mode 100644
> index 000000000000..90d80a7f116d
> --- /dev/null
> +++ b/Documentation/mhi/topology.rst
> @@ -0,0 +1,60 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +============
> +MHI Topology
> +============
> +
> +This document provides information about the MHI topology modeling and
> +representation in the kernel.
> +
> +MHI Controller
> +--------------
> +
> +MHI controller driver manages the interaction with the MHI client devices
> +such as the external modems and WiFi chipsets. It is also the MHI bus master
> +which is in charge of managing the physical link between the host and device.
> +It is, however, not involved in the actual data transfer, as that is taken
> +care of by the underlying physical bus such as PCIe. Each controller driver exposes
> +channels and events based on the client device type.
> +
> +Below are the roles of the MHI controller driver:
> +
> +* Turns on the physical bus and establishes the link to the device
> +* Configures IRQs, SMMU, and IOMEM
I'd recommend changing SMMU to IOMMU. SMMU tends to be an ARM-specific
term, while IOMMU is the generic term, in my experience.
> +* Allocates struct mhi_controller and registers with the MHI bus framework
> + with channel and event configurations using mhi_register_controller.
> +* Initiates power on and shutdown sequence
> +* Initiates suspend and resume power management operations of the device.
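For illustration, these roles come together in a controller driver's probe roughly as in the sketch below. `mhi_register_controller` and the mandatory callback checks are from this patch; the `my_runtime_get`/`my_runtime_put`/`my_status_cb`/`my_link_status` callbacks and the `my_config` definition are hypothetical, controller-specific pieces:

```c
/* Hypothetical PCIe controller probe; error handling trimmed. */
static int my_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct mhi_controller *mhi_cntrl;

	mhi_cntrl = devm_kzalloc(&pdev->dev, sizeof(*mhi_cntrl), GFP_KERNEL);
	if (!mhi_cntrl)
		return -ENOMEM;

	mhi_cntrl->dev = &pdev->dev;

	/*
	 * mhi_register_controller() rejects a controller that does not
	 * provide these four callbacks.
	 */
	mhi_cntrl->runtime_get = my_runtime_get;
	mhi_cntrl->runtime_put = my_runtime_put;
	mhi_cntrl->status_cb = my_status_cb;
	mhi_cntrl->link_status = my_link_status;

	/* my_config carries the static channel/event ring layout */
	return mhi_register_controller(mhi_cntrl, &my_config);
}
```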
> +
> +MHI Device
> +----------
> +
> +MHI device is the logical device which binds to a maximum of two MHI channels
> +for bi-directional communication. Once MHI is in powered on state, the MHI
> +core will create MHI devices based on the channel configuration exposed
> +by the controller. There can be one MHI device per channel, or a single
> +device handling a pair of channels.
> +
> +Each supported device is enumerated in::
> +
> + /sys/bus/mhi/devices/
> +
> +MHI Driver
> +----------
> +
> +MHI driver is the client driver which binds to one or more MHI devices. The MHI
> +driver sends and receives the upper-layer protocol packets like IP packets,
> +modem control messages, and diagnostics messages over MHI. The MHI core will
> +bind the MHI devices to the MHI driver.
> +
> +Each supported driver is enumerated in::
> +
> + /sys/bus/mhi/drivers/
> +
> +Below are the roles of the MHI driver:
> +
> +* Registers the driver with the MHI bus framework using mhi_driver_register.
> +* Prepares the device for transfer by calling mhi_prepare_for_transfer.
> +* Initiates data transfer by calling mhi_queue_transfer.
> +* Once the data transfer is finished, calls mhi_unprepare_from_transfer to
> + end data transfer.
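To make the flow concrete, a minimal client driver following these steps might look like the sketch below. `mhi_driver_register`, `mhi_prepare_for_transfer` and `mhi_unprepare_from_transfer` are the APIs named above; the `my_*` names, the `dl_xfer_cb` member and the exact layout of `struct mhi_driver` are assumptions here, since that structure is defined in a later patch of the series:

```c
/* Hypothetical MHI client driver skeleton. */
static int my_probe(struct mhi_device *mhi_dev,
		    const struct mhi_device_id *id)
{
	/* Start the channels so buffers can be queued */
	return mhi_prepare_for_transfer(mhi_dev);
}

static void my_remove(struct mhi_device *mhi_dev)
{
	mhi_unprepare_from_transfer(mhi_dev);
}

/* Invoked by the MHI core when a downlink transfer completes */
static void my_dl_xfer_cb(struct mhi_device *mhi_dev,
			  struct mhi_result *result)
{
	/* Hand the received buffer to the upper-layer protocol here */
}

static const struct mhi_device_id my_id_table[] = {
	{ .chan = "LOOPBACK" },	/* bind by channel name */
	{},
};

static struct mhi_driver my_driver = {
	.id_table = my_id_table,
	.probe = my_probe,
	.remove = my_remove,
	.dl_xfer_cb = my_dl_xfer_cb,
	.driver = {
		.name = "my_mhi_client",
	},
};
```

The driver would then be registered from module init with `mhi_driver_register(&my_driver)`.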
>
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
On 1/23/2020 4:18 AM, Manivannan Sadhasivam wrote:
> This commit adds support for registering MHI controller drivers with
> +the MHI stack. MHI controller drivers manage the interaction with the
> MHI client devices such as the external modems and WiFi chipsets. They
> are also the MHI bus master in charge of managing the physical link
> between the host and client device.
>
> This is based on the patch submitted by Sujeev Dias:
> https://lkml.org/lkml/2018/7/9/987
>
> Signed-off-by: Sujeev Dias <[email protected]>
> Signed-off-by: Siddartha Mohanadoss <[email protected]>
> [jhugo: added static config for controllers and fixed several bugs]
> Signed-off-by: Jeffrey Hugo <[email protected]>
> [mani: removed DT dependency, split and cleaned up for upstream]
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> ---
> drivers/bus/Kconfig | 1 +
> drivers/bus/Makefile | 3 +
> drivers/bus/mhi/Kconfig | 14 +
> drivers/bus/mhi/Makefile | 2 +
> drivers/bus/mhi/core/Makefile | 3 +
> drivers/bus/mhi/core/init.c | 404 +++++++++++++++++++++++++++++
> drivers/bus/mhi/core/internal.h | 169 ++++++++++++
> include/linux/mhi.h | 438 ++++++++++++++++++++++++++++++++
> include/linux/mod_devicetable.h | 12 +
> 9 files changed, 1046 insertions(+)
> create mode 100644 drivers/bus/mhi/Kconfig
> create mode 100644 drivers/bus/mhi/Makefile
> create mode 100644 drivers/bus/mhi/core/Makefile
> create mode 100644 drivers/bus/mhi/core/init.c
> create mode 100644 drivers/bus/mhi/core/internal.h
> create mode 100644 include/linux/mhi.h
>
> diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
> index 50200d1c06ea..383934e54786 100644
> --- a/drivers/bus/Kconfig
> +++ b/drivers/bus/Kconfig
> @@ -202,5 +202,6 @@ config DA8XX_MSTPRI
> peripherals.
>
> source "drivers/bus/fsl-mc/Kconfig"
> +source "drivers/bus/mhi/Kconfig"
>
> endmenu
> diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
> index 1320bcf9fa9d..05f32cd694a4 100644
> --- a/drivers/bus/Makefile
> +++ b/drivers/bus/Makefile
> @@ -34,3 +34,6 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o
> obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
>
> obj-$(CONFIG_DA8XX_MSTPRI) += da8xx-mstpri.o
> +
> +# MHI
> +obj-$(CONFIG_MHI_BUS) += mhi/
> diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
> new file mode 100644
> index 000000000000..a8bd9bd7db7c
> --- /dev/null
> +++ b/drivers/bus/mhi/Kconfig
> @@ -0,0 +1,14 @@
> +# SPDX-License-Identifier: GPL-2.0
First time I noticed this, although I suspect this will need to be
corrected "everywhere" -
Per the SPDX website, the "GPL-2.0" label is deprecated. Its
replacement is "GPL-2.0-only".
I think all instances should be updated to "GPL-2.0-only".
> +#
> +# MHI bus
> +#
> +# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> +#
> +
> +config MHI_BUS
> + tristate "Modem Host Interface (MHI) bus"
> + help
> + Bus driver for MHI protocol. Modem Host Interface (MHI) is a
> + communication protocol used by the host processors to control
> + and communicate with modem devices over a high speed peripheral
> + bus or shared memory.
> diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
> new file mode 100644
> index 000000000000..19e6443b72df
> --- /dev/null
> +++ b/drivers/bus/mhi/Makefile
> @@ -0,0 +1,2 @@
> +# core layer
> +obj-y += core/
> diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
> new file mode 100644
> index 000000000000..2db32697c67f
> --- /dev/null
> +++ b/drivers/bus/mhi/core/Makefile
> @@ -0,0 +1,3 @@
> +obj-$(CONFIG_MHI_BUS) := mhi.o
> +
> +mhi-y := init.o
> diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
> new file mode 100644
> index 000000000000..5b817ec250e0
> --- /dev/null
> +++ b/drivers/bus/mhi/core/init.c
> @@ -0,0 +1,404 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> + *
> + */
> +
> +#define dev_fmt(fmt) "MHI: " fmt
> +
> +#include <linux/device.h>
> +#include <linux/dma-direction.h>
> +#include <linux/dma-mapping.h>
> +#include <linux/interrupt.h>
> +#include <linux/list.h>
> +#include <linux/mhi.h>
> +#include <linux/mod_devicetable.h>
> +#include <linux/module.h>
> +#include <linux/slab.h>
> +#include <linux/vmalloc.h>
> +#include <linux/wait.h>
> +#include "internal.h"
> +
> +static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
> + struct mhi_controller_config *config)
> +{
> + int i, num;
> + struct mhi_event *mhi_event;
> + struct mhi_event_config *event_cfg;
> +
> + num = config->num_events;
> + mhi_cntrl->total_ev_rings = num;
> + mhi_cntrl->mhi_event = kcalloc(num, sizeof(*mhi_cntrl->mhi_event),
> + GFP_KERNEL);
> + if (!mhi_cntrl->mhi_event)
> + return -ENOMEM;
> +
> + /* Populate event ring */
> + mhi_event = mhi_cntrl->mhi_event;
> + for (i = 0; i < num; i++) {
> + event_cfg = &config->event_cfg[i];
> +
> + mhi_event->er_index = i;
> + mhi_event->ring.elements = event_cfg->num_elements;
> + mhi_event->intmod = event_cfg->irq_moderation_ms;
> + mhi_event->irq = event_cfg->irq;
> +
> + if (event_cfg->channel != U32_MAX) {
> + /* This event ring has a dedicated channel */
> + mhi_event->chan = event_cfg->channel;
> + if (mhi_event->chan >= mhi_cntrl->max_chan) {
> + dev_err(mhi_cntrl->dev,
> + "Event Ring channel not available\n");
> + goto error_ev_cfg;
> + }
> +
> + mhi_event->mhi_chan =
> + &mhi_cntrl->mhi_chan[mhi_event->chan];
> + }
> +
> + /* Priority is fixed to 1 for now */
> + mhi_event->priority = 1;
> +
> + mhi_event->db_cfg.brstmode = event_cfg->mode;
> + if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode))
> + goto error_ev_cfg;
> +
> + mhi_event->data_type = event_cfg->data_type;
> +
> + mhi_event->hw_ring = event_cfg->hardware_event;
> + if (mhi_event->hw_ring)
> + mhi_cntrl->hw_ev_rings++;
> + else
> + mhi_cntrl->sw_ev_rings++;
> +
> + mhi_event->cl_manage = event_cfg->client_managed;
> + mhi_event->offload_ev = event_cfg->offload_channel;
> + mhi_event++;
> + }
> +
> + /* We need IRQ for each event ring + additional one for BHI */
> + mhi_cntrl->nr_irqs_req = mhi_cntrl->total_ev_rings + 1;
> +
> + return 0;
> +
> +error_ev_cfg:
> +
> + kfree(mhi_cntrl->mhi_event);
> + return -EINVAL;
> +}
> +
> +static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
> + struct mhi_controller_config *config)
> +{
> + int i;
> + u32 chan;
> + struct mhi_channel_config *ch_cfg;
> +
> + mhi_cntrl->max_chan = config->max_channels;
> +
> + /*
> + * The allocation of MHI channels can exceed 32KB in some scenarios,
> + * so to avoid any possible memory allocation failures, vzalloc is
> + * used here
> + */
> + mhi_cntrl->mhi_chan = vzalloc(mhi_cntrl->max_chan *
> + sizeof(*mhi_cntrl->mhi_chan));
> + if (!mhi_cntrl->mhi_chan)
> + return -ENOMEM;
> +
> + INIT_LIST_HEAD(&mhi_cntrl->lpm_chans);
> +
> + /* Populate channel configurations */
> + for (i = 0; i < config->num_channels; i++) {
> + struct mhi_chan *mhi_chan;
> +
> + ch_cfg = &config->ch_cfg[i];
> +
> + chan = ch_cfg->num;
> + if (chan >= mhi_cntrl->max_chan) {
> + dev_err(mhi_cntrl->dev,
> + "Channel %d not available\n", chan);
> + goto error_chan_cfg;
> + }
> +
> + mhi_chan = &mhi_cntrl->mhi_chan[chan];
> + mhi_chan->name = ch_cfg->name;
> + mhi_chan->chan = chan;
> +
> + mhi_chan->tre_ring.elements = ch_cfg->num_elements;
> + if (!mhi_chan->tre_ring.elements)
> + goto error_chan_cfg;
> +
> + /*
> + * For some channels, the local ring length should be bigger than
> + * the transfer ring length due to internal logical channels in the
> + * device, so the host can queue many more buffers than the transfer
> + * ring length. For example, RSC channels should have a larger local
> + * channel length than the transfer ring length.
> + */
> + mhi_chan->buf_ring.elements = ch_cfg->local_elements;
> + if (!mhi_chan->buf_ring.elements)
> + mhi_chan->buf_ring.elements = mhi_chan->tre_ring.elements;
> + mhi_chan->er_index = ch_cfg->event_ring;
> + mhi_chan->dir = ch_cfg->dir;
> +
> + /*
> + * For most channels, chtype is identical to channel directions.
> + * So, if it is not defined then assign channel direction to
> + * chtype
> + */
> + mhi_chan->type = ch_cfg->type;
> + if (!mhi_chan->type)
> + mhi_chan->type = (enum mhi_ch_type)mhi_chan->dir;
> +
> + mhi_chan->ee_mask = ch_cfg->ee_mask;
> +
> + mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
> + mhi_chan->xfer_type = ch_cfg->data_type;
> +
> + mhi_chan->lpm_notify = ch_cfg->lpm_notify;
> + mhi_chan->offload_ch = ch_cfg->offload_channel;
> + mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch;
> + mhi_chan->pre_alloc = ch_cfg->auto_queue;
> + mhi_chan->auto_start = ch_cfg->auto_start;
> +
> + /*
> + * If MHI host allocates buffers, then the channel direction
> + * should be DMA_FROM_DEVICE and the buffer type should be
> + * MHI_BUF_RAW
> + */
> + if (mhi_chan->pre_alloc && (mhi_chan->dir != DMA_FROM_DEVICE ||
> + mhi_chan->xfer_type != MHI_BUF_RAW)) {
> + dev_err(mhi_cntrl->dev,
> + "Invalid channel configuration\n");
> + goto error_chan_cfg;
> + }
> +
> + /*
> + * Bi-directional and direction less channel must be an
> + * offload channel
> + */
> + if ((mhi_chan->dir == DMA_BIDIRECTIONAL ||
> + mhi_chan->dir == DMA_NONE) && !mhi_chan->offload_ch) {
> + dev_err(mhi_cntrl->dev,
> + "Invalid channel configuration\n");
> + goto error_chan_cfg;
> + }
> +
> + if (!mhi_chan->offload_ch) {
> + mhi_chan->db_cfg.brstmode = ch_cfg->doorbell;
> + if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode)) {
> + dev_err(mhi_cntrl->dev,
> + "Invalid Door bell mode\n");
> + goto error_chan_cfg;
> + }
> + }
> +
> + mhi_chan->configured = true;
> +
> + if (mhi_chan->lpm_notify)
> + list_add_tail(&mhi_chan->node, &mhi_cntrl->lpm_chans);
> + }
> +
> + return 0;
> +
> +error_chan_cfg:
> + vfree(mhi_cntrl->mhi_chan);
> +
> + return -EINVAL;
> +}
> +
> +static int parse_config(struct mhi_controller *mhi_cntrl,
> + struct mhi_controller_config *config)
> +{
> + int ret;
> +
> + /* Parse MHI channel configuration */
> + ret = parse_ch_cfg(mhi_cntrl, config);
> + if (ret)
> + return ret;
> +
> + /* Parse MHI event configuration */
> + ret = parse_ev_cfg(mhi_cntrl, config);
> + if (ret)
> + goto error_ev_cfg;
> +
> + mhi_cntrl->timeout_ms = config->timeout_ms;
> + if (!mhi_cntrl->timeout_ms)
> + mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS;
> +
> + mhi_cntrl->bounce_buf = config->use_bounce_buf;
> + mhi_cntrl->buffer_len = config->buf_len;
> + if (!mhi_cntrl->buffer_len)
> + mhi_cntrl->buffer_len = MHI_MAX_MTU;
> +
> + return 0;
> +
> +error_ev_cfg:
> + vfree(mhi_cntrl->mhi_chan);
> +
> + return ret;
> +}
> +
> +int mhi_register_controller(struct mhi_controller *mhi_cntrl,
> + struct mhi_controller_config *config)
> +{
> + int ret;
> + int i;
> + struct mhi_event *mhi_event;
> + struct mhi_chan *mhi_chan;
> + struct mhi_cmd *mhi_cmd;
> + struct mhi_device *mhi_dev;
> +
> + if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put)
> + return -EINVAL;
> +
> + if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status)
> + return -EINVAL;
> +
> + ret = parse_config(mhi_cntrl, config);
> + if (ret)
> + return -EINVAL;
> +
> + mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS,
> + sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
> + if (!mhi_cntrl->mhi_cmd) {
> + ret = -ENOMEM;
> + goto error_alloc_cmd;
> + }
> +
> + INIT_LIST_HEAD(&mhi_cntrl->transition_list);
> + spin_lock_init(&mhi_cntrl->transition_lock);
> + spin_lock_init(&mhi_cntrl->wlock);
> + init_waitqueue_head(&mhi_cntrl->state_event);
> +
> + mhi_cmd = mhi_cntrl->mhi_cmd;
> + for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++)
> + spin_lock_init(&mhi_cmd->lock);
> +
> + mhi_event = mhi_cntrl->mhi_event;
> + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
> + /* Skip for offload events */
> + if (mhi_event->offload_ev)
> + continue;
> +
> + mhi_event->mhi_cntrl = mhi_cntrl;
> + spin_lock_init(&mhi_event->lock);
> + }
> +
> + mhi_chan = mhi_cntrl->mhi_chan;
> + for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
> + mutex_init(&mhi_chan->mutex);
> + init_completion(&mhi_chan->completion);
> + rwlock_init(&mhi_chan->lock);
> + }
> +
> + /* Register controller with MHI bus */
> + mhi_dev = mhi_alloc_device(mhi_cntrl);
> + if (IS_ERR(mhi_dev)) {
> + dev_err(mhi_cntrl->dev, "Failed to allocate device\n");
> + ret = PTR_ERR(mhi_dev);
> + goto error_alloc_dev;
> + }
> +
> + mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
> + mhi_dev->mhi_cntrl = mhi_cntrl;
> + dev_set_name(&mhi_dev->dev, "%s", mhi_cntrl->name);
> +
> + /* Init wakeup source */
> + device_init_wakeup(&mhi_dev->dev, true);
> +
> + ret = device_add(&mhi_dev->dev);
> + if (ret)
> + goto error_add_dev;
> +
> + mhi_cntrl->mhi_dev = mhi_dev;
> +
> + return 0;
> +
> +error_add_dev:
> + mhi_dealloc_device(mhi_cntrl, mhi_dev);
> +
> +error_alloc_dev:
> + kfree(mhi_cntrl->mhi_cmd);
> +
> +error_alloc_cmd:
> + vfree(mhi_cntrl->mhi_chan);
> + kfree(mhi_cntrl->mhi_event);
> +
> + return ret;
> +}
> +EXPORT_SYMBOL_GPL(mhi_register_controller);
> +
> +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
> +{
> + struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
> +
> + kfree(mhi_cntrl->mhi_cmd);
> + kfree(mhi_cntrl->mhi_event);
> + vfree(mhi_cntrl->mhi_chan);
> +
> + device_del(&mhi_dev->dev);
> + put_device(&mhi_dev->dev);
> +}
> +EXPORT_SYMBOL_GPL(mhi_unregister_controller);
> +
> +static void mhi_release_device(struct device *dev)
> +{
> + struct mhi_device *mhi_dev = to_mhi_device(dev);
> +
> + if (mhi_dev->ul_chan)
> + mhi_dev->ul_chan->mhi_dev = NULL;
> +
> + if (mhi_dev->dl_chan)
> + mhi_dev->dl_chan->mhi_dev = NULL;
> +
> + kfree(mhi_dev);
> +}
> +
> +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl)
> +{
> + struct mhi_device *mhi_dev;
> + struct device *dev;
> +
> + mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
> + if (!mhi_dev)
> + return ERR_PTR(-ENOMEM);
> +
> + dev = &mhi_dev->dev;
> + device_initialize(dev);
> + dev->bus = &mhi_bus_type;
> + dev->release = mhi_release_device;
> + dev->parent = mhi_cntrl->dev;
> + mhi_dev->mhi_cntrl = mhi_cntrl;
> + atomic_set(&mhi_dev->dev_wake, 0);
> +
> + return mhi_dev;
> +}
> +
> +static int mhi_match(struct device *dev, struct device_driver *drv)
> +{
> + return 0;
> +};
> +
> +struct bus_type mhi_bus_type = {
> + .name = "mhi",
> + .dev_name = "mhi",
> + .match = mhi_match,
> +};
> +
> +static int __init mhi_init(void)
> +{
> + return bus_register(&mhi_bus_type);
> +}
> +
> +static void __exit mhi_exit(void)
> +{
> + bus_unregister(&mhi_bus_type);
> +}
> +
> +postcore_initcall(mhi_init);
> +module_exit(mhi_exit);
> +
> +MODULE_LICENSE("GPL v2");
> +MODULE_DESCRIPTION("MHI Host Interface");
> diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
> new file mode 100644
> index 000000000000..21f686d3a140
> --- /dev/null
> +++ b/drivers/bus/mhi/core/internal.h
> @@ -0,0 +1,169 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> + *
> + */
> +
> +#ifndef _MHI_INT_H
> +#define _MHI_INT_H
> +
> +extern struct bus_type mhi_bus_type;
> +
> +/* MHI transfer completion events */
> +enum mhi_ev_ccs {
> + MHI_EV_CC_INVALID = 0x0,
> + MHI_EV_CC_SUCCESS = 0x1,
> + MHI_EV_CC_EOT = 0x2,
> + MHI_EV_CC_OVERFLOW = 0x3,
> + MHI_EV_CC_EOB = 0x4,
> + MHI_EV_CC_OOB = 0x5,
> + MHI_EV_CC_DB_MODE = 0x6,
> + MHI_EV_CC_UNDEFINED_ERR = 0x10,
> + MHI_EV_CC_BAD_TRE = 0x11,
Perhaps a quick comment expanding the "EOT", "EOB", "OOB" acronyms? I
feel like those might not be obvious to someone not familiar with the
protocol.
> +};
> +
> +enum mhi_ch_state {
> + MHI_CH_STATE_DISABLED = 0x0,
> + MHI_CH_STATE_ENABLED = 0x1,
> + MHI_CH_STATE_RUNNING = 0x2,
> + MHI_CH_STATE_SUSPENDED = 0x3,
> + MHI_CH_STATE_STOP = 0x4,
> + MHI_CH_STATE_ERROR = 0x5,
> +};
> +
> +#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \
> + mode != MHI_DB_BRST_ENABLE)
> +
> +#define NR_OF_CMD_RINGS 1
> +#define CMD_EL_PER_RING 128
> +#define PRIMARY_CMD_RING 0
> +#define MHI_MAX_MTU 0xffff
> +
> +enum mhi_er_type {
> + MHI_ER_TYPE_INVALID = 0x0,
> + MHI_ER_TYPE_VALID = 0x1,
> +};
> +
> +enum mhi_ch_ee_mask {
> + MHI_CH_EE_PBL = BIT(MHI_EE_PBL),
MHI_EE_PBL does not appear to be defined. Are you perhaps missing an
include?
> + MHI_CH_EE_SBL = BIT(MHI_EE_SBL),
> + MHI_CH_EE_AMSS = BIT(MHI_EE_AMSS),
> + MHI_CH_EE_RDDM = BIT(MHI_EE_RDDM),
> + MHI_CH_EE_PTHRU = BIT(MHI_EE_PTHRU),
> + MHI_CH_EE_WFW = BIT(MHI_EE_WFW),
> + MHI_CH_EE_EDL = BIT(MHI_EE_EDL),
> +};
> +
> +struct db_cfg {
> + bool reset_req;
> + bool db_mode;
> + u32 pollcfg;
> + enum mhi_db_brst_mode brstmode;
> + dma_addr_t db_val;
> + void (*process_db)(struct mhi_controller *mhi_cntrl,
> + struct db_cfg *db_cfg, void __iomem *io_addr,
> + dma_addr_t db_val);
> +};
> +
> +struct mhi_ring {
> + dma_addr_t dma_handle;
> + dma_addr_t iommu_base;
> + u64 *ctxt_wp; /* point to ctxt wp */
> + void *pre_aligned;
> + void *base;
> + void *rp;
> + void *wp;
> + size_t el_size;
> + size_t len;
> + size_t elements;
> + size_t alloc_size;
> + void __iomem *db_addr;
> +};
> +
> +struct mhi_cmd {
> + struct mhi_ring ring;
> + spinlock_t lock;
> +};
> +
> +struct mhi_buf_info {
> + dma_addr_t p_addr;
> + void *v_addr;
> + void *bb_addr;
> + void *wp;
> + size_t len;
> + void *cb_buf;
> + enum dma_data_direction dir;
> +};
> +
> +struct mhi_event {
> + u32 er_index;
> + u32 intmod;
> + u32 irq;
> + int chan; /* this event ring is dedicated to a channel (optional) */
> + u32 priority;
> + enum mhi_er_data_type data_type;
> + struct mhi_ring ring;
> + struct db_cfg db_cfg;
> + bool hw_ring;
> + bool cl_manage;
> + bool offload_ev; /* managed by a device driver */
> + spinlock_t lock;
> + struct mhi_chan *mhi_chan; /* dedicated to channel */
> + struct tasklet_struct task;
> + int (*process_event)(struct mhi_controller *mhi_cntrl,
> + struct mhi_event *mhi_event,
> + u32 event_quota);
> + struct mhi_controller *mhi_cntrl;
> +};
> +
> +struct mhi_chan {
> + u32 chan;
> + const char *name;
> + /*
> + * Important: When consuming, increment tre_ring first and when
> + * releasing, decrement buf_ring first. If tre_ring has space, buf_ring
> + * is guaranteed to have space so we do not need to check both rings.
> + */
> + struct mhi_ring buf_ring;
> + struct mhi_ring tre_ring;
> + u32 er_index;
> + u32 intmod;
> + enum mhi_ch_type type;
> + enum dma_data_direction dir;
> + struct db_cfg db_cfg;
> + enum mhi_ch_ee_mask ee_mask;
> + enum mhi_buf_type xfer_type;
> + enum mhi_ch_state ch_state;
> + enum mhi_ev_ccs ccs;
> + bool lpm_notify;
> + bool configured;
> + bool offload_ch;
> + bool pre_alloc;
> + bool auto_start;
> + int (*gen_tre)(struct mhi_controller *mhi_cntrl,
> + struct mhi_chan *mhi_chan, void *buf, void *cb,
> + size_t len, enum mhi_flags flags);
> + int (*queue_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
> + void *buf, size_t len, enum mhi_flags mflags);
> + struct mhi_device *mhi_dev;
> + void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *result);
> + struct mutex mutex;
> + struct completion completion;
> + rwlock_t lock;
> + struct list_head node;
> +};
> +
> +/* Default MHI timeout */
> +#define MHI_TIMEOUT_MS (1000)
> +
> +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
> +static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
> + struct mhi_device *mhi_dev)
> +{
> + kfree(mhi_dev);
> +}
> +
> +int mhi_destroy_device(struct device *dev, void *data);
> +void mhi_create_devices(struct mhi_controller *mhi_cntrl);
> +
> +#endif /* _MHI_INT_H */
> diff --git a/include/linux/mhi.h b/include/linux/mhi.h
> new file mode 100644
> index 000000000000..69cf9a4b06c7
> --- /dev/null
> +++ b/include/linux/mhi.h
> @@ -0,0 +1,438 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> + *
> + */
> +#ifndef _MHI_H_
> +#define _MHI_H_
> +
> +#include <linux/device.h>
> +#include <linux/dma-direction.h>
> +#include <linux/mutex.h>
> +#include <linux/rwlock_types.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock_types.h>
> +#include <linux/wait.h>
> +#include <linux/workqueue.h>
> +
> +struct mhi_chan;
> +struct mhi_event;
> +struct mhi_ctxt;
> +struct mhi_cmd;
> +struct mhi_buf_info;
> +
> +/**
> + * enum mhi_callback - MHI callback
> + * @MHI_CB_IDLE: MHI entered idle state
> + * @MHI_CB_PENDING_DATA: New data available for client to process
> + * @MHI_CB_LPM_ENTER: MHI host entered low power mode
> + * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
> + * @MHI_CB_EE_RDDM: MHI device entered RDDM exec env
> + * @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env
> + * @MHI_CB_SYS_ERROR: MHI device entered error state (may recover)
> + * @MHI_CB_FATAL_ERROR: MHI device entered fatal error state
> + */
> +enum mhi_callback {
> + MHI_CB_IDLE,
> + MHI_CB_PENDING_DATA,
> + MHI_CB_LPM_ENTER,
> + MHI_CB_LPM_EXIT,
> + MHI_CB_EE_RDDM,
> + MHI_CB_EE_MISSION_MODE,
> + MHI_CB_SYS_ERROR,
> + MHI_CB_FATAL_ERROR,
> +};
> +
> +/**
> + * enum mhi_flags - Transfer flags
> + * @MHI_EOB: End of buffer for bulk transfer
> + * @MHI_EOT: End of transfer
> + * @MHI_CHAIN: Linked transfer
> + */
> +enum mhi_flags {
> + MHI_EOB,
> + MHI_EOT,
> + MHI_CHAIN,
> +};
> +
> +/**
> + * enum mhi_device_type - Device types
> + * @MHI_DEVICE_XFER: Handles data transfer
> + * @MHI_DEVICE_TIMESYNC: Use for timesync feature
> + * @MHI_DEVICE_CONTROLLER: Control device
> + */
> +enum mhi_device_type {
> + MHI_DEVICE_XFER,
> + MHI_DEVICE_TIMESYNC,
> + MHI_DEVICE_CONTROLLER,
> +};
> +
> +/**
> + * enum mhi_ch_type - Channel types
> + * @MHI_CH_TYPE_INVALID: Invalid channel type
> + * @MHI_CH_TYPE_OUTBOUND: Outbound channel to the device
> + * @MHI_CH_TYPE_INBOUND: Inbound channel from the device
> + * @MHI_CH_TYPE_INBOUND_COALESCED: Coalesced channel for the device to combine
> + * multiple packets and send them as a single
> + * large packet to reduce CPU consumption
> + */
> +enum mhi_ch_type {
> + MHI_CH_TYPE_INVALID = 0,
> + MHI_CH_TYPE_OUTBOUND = DMA_TO_DEVICE,
> + MHI_CH_TYPE_INBOUND = DMA_FROM_DEVICE,
> + MHI_CH_TYPE_INBOUND_COALESCED = 3,
> +};
> +
> +/**
> + * enum mhi_ee_type - Execution environment types
> + * @MHI_EE_PBL: Primary Bootloader
> + * @MHI_EE_SBL: Secondary Bootloader
> + * @MHI_EE_AMSS: Modem, aka the primary runtime EE
> + * @MHI_EE_RDDM: Ram dump download mode
> + * @MHI_EE_WFW: WLAN firmware mode
> + * @MHI_EE_PTHRU: Passthrough
> + * @MHI_EE_EDL: Embedded downloader
> + */
> +enum mhi_ee_type {
> + MHI_EE_PBL,
> + MHI_EE_SBL,
> + MHI_EE_AMSS,
> + MHI_EE_RDDM,
> + MHI_EE_WFW,
> + MHI_EE_PTHRU,
> + MHI_EE_EDL,
> + MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
> + MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
> + MHI_EE_NOT_SUPPORTED,
> + MHI_EE_MAX,
> +};
> +
> +/**
> + * enum mhi_buf_type - Accepted buffer type for the channel
> + * @MHI_BUF_RAW: Raw buffer
> + * @MHI_BUF_SKB: SKB struct
> + * @MHI_BUF_SCLIST: Scatter-gather list
> + * @MHI_BUF_NOP: CPU offload channel, host does not accept transfer
> + * @MHI_BUF_DMA: Receive DMA address mapped by client
> + * @MHI_BUF_RSC_DMA: RSC type premapped buffer
Maybe it's just me, but what is "RSC"?
> + */
> +enum mhi_buf_type {
> + MHI_BUF_RAW,
> + MHI_BUF_SKB,
> + MHI_BUF_SCLIST,
> + MHI_BUF_NOP,
> + MHI_BUF_DMA,
> + MHI_BUF_RSC_DMA,
> +};
> +
> +/**
> + * enum mhi_er_data_type - Event ring data types
> + * @MHI_ER_DATA: Only client data over this ring
> + * @MHI_ER_CTRL: MHI control data and client data
> + * @MHI_ER_TSYNC: Time sync events
> + */
> +enum mhi_er_data_type {
> + MHI_ER_DATA,
> + MHI_ER_CTRL,
> + MHI_ER_TSYNC,
> +};
> +
> +/**
> + * enum mhi_db_brst_mode - Doorbell mode
> + * @MHI_DB_BRST_DISABLE: Burst mode disable
> + * @MHI_DB_BRST_ENABLE: Burst mode enable
> + */
> +enum mhi_db_brst_mode {
> + MHI_DB_BRST_DISABLE = 0x2,
> + MHI_DB_BRST_ENABLE = 0x3,
> +};
> +
> +/**
> + * struct mhi_channel_config - Channel configuration structure for controller
> + * @num: The number assigned to this channel
> + * @name: The name of this channel
> + * @num_elements: The number of elements that can be queued to this channel
> + * @local_elements: The local ring length of the channel
> + * @event_ring: The event ring index that services this channel
> + * @dir: Direction that data may flow on this channel
> + * @type: Channel type
> + * @ee_mask: Execution Environment mask for this channel
But the mask defines are in internal.h, so how is a client supposed to
know what they are?
> + * @pollcfg: Polling configuration for burst mode. 0 is default. milliseconds
> + for UL channels, multiple of 8 ring elements for DL channels
> + * @data_type: Data type accepted by this channel
> + * @doorbell: Doorbell mode
> + * @lpm_notify: The channel master requires low power mode notifications
> + * @offload_channel: The client manages the channel completely
> + * @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition
> + * @auto_queue: Framework will automatically queue buffers for DL traffic
> + * @auto_start: Automatically start (open) this channel
> + */
> +struct mhi_channel_config {
> + u32 num;
> + char *name;
> + u32 num_elements;
> + u32 local_elements;
> + u32 event_ring;
> + enum dma_data_direction dir;
> + enum mhi_ch_type type;
> + u32 ee_mask;
> + u32 pollcfg;
> + enum mhi_buf_type data_type;
> + enum mhi_db_brst_mode doorbell;
> + bool lpm_notify;
> + bool offload_channel;
> + bool doorbell_mode_switch;
> + bool auto_queue;
> + bool auto_start;
> +};
> +
> +/**
> + * struct mhi_event_config - Event ring configuration structure for controller
> + * @num_elements: The number of elements that can be queued to this ring
> + * @irq_moderation_ms: Delay irq for additional events to be aggregated
> + * @irq: IRQ associated with this ring
> + * @channel: Dedicated channel number. U32_MAX indicates a non-dedicated ring
> + * @mode: Doorbell mode
> + * @data_type: Type of data this ring will process
> + * @hardware_event: This ring is associated with hardware channels
> + * @client_managed: This ring is client managed
> + * @offload_channel: This ring is associated with an offloaded channel
> + * @priority: Priority of this ring. Use 1 for now
> + */
> +struct mhi_event_config {
> + u32 num_elements;
> + u32 irq_moderation_ms;
> + u32 irq;
> + u32 channel;
> + enum mhi_db_brst_mode mode;
> + enum mhi_er_data_type data_type;
> + bool hardware_event;
> + bool client_managed;
> + bool offload_channel;
> + u32 priority;
> +};
> +
> +/**
> + * struct mhi_controller_config - Root MHI controller configuration
> + * @max_channels: Maximum number of channels supported
> + * @timeout_ms: Timeout value for operations. 0 means use default
> + * @use_bounce_buf: Use a bounce buffer pool due to limited DDR access
> + * @m2_no_db: Host is not allowed to ring DB in M2 state
> + * @buf_len: Size of automatically allocated buffers. 0 means use default
> + * @num_channels: Number of channels defined in @ch_cfg
> + * @ch_cfg: Array of defined channels
> + * @num_events: Number of event rings defined in @event_cfg
> + * @event_cfg: Array of defined event rings
> + */
> +struct mhi_controller_config {
> + u32 max_channels;
> + u32 timeout_ms;
> + bool use_bounce_buf;
> + bool m2_no_db;
> + u32 buf_len;
> + u32 num_channels;
> + struct mhi_channel_config *ch_cfg;
> + u32 num_events;
> + struct mhi_event_config *event_cfg;
> +};
> +
> +/**
> + * struct mhi_controller - Master MHI controller structure
> + * @name: Name of the controller
> + * @dev: Driver model device node for the controller
> + * @mhi_dev: MHI device instance for the controller
> + * @dev_id: Device ID of the controller
> + * @bus_id: Physical bus instance used by the controller
> + * @regs: Base address of MHI MMIO register space
> + * @iova_start: IOMMU starting address for data
> + * @iova_stop: IOMMU stop address for data
> + * @fw_image: Firmware image name for normal booting
> + * @edl_image: Firmware image name for emergency download mode
> + * @fbc_download: MHI host needs to do complete image transfer
> + * @sbl_size: SBL image size
> + * @seg_len: BHIe vector size
> + * @max_chan: Maximum number of channels the controller supports
> + * @mhi_chan: Points to the channel configuration table
> + * @lpm_chans: List of channels that require LPM notifications
> + * @total_ev_rings: Total # of event rings allocated
> + * @hw_ev_rings: Number of hardware event rings
> + * @sw_ev_rings: Number of software event rings
> + * @nr_irqs_req: Number of IRQs required to operate
> + * @nr_irqs: Number of IRQ allocated by bus master
> + * @irq: Base IRQ number to request
> + * @mhi_event: MHI event ring configurations table
> + * @mhi_cmd: MHI command ring configurations table
> + * @mhi_ctxt: MHI device context, shared memory between host and device
> + * @timeout_ms: Timeout in ms for state transitions
> + * @pm_mutex: Mutex for suspend/resume operation
> + * @pre_init: MHI host needs to do pre-initialization before power up
> + * @pm_lock: Lock for protecting MHI power management state
> + * @pm_state: MHI power management state
> + * @db_access: DB access states
> + * @ee: MHI device execution environment
> + * @wake_set: Device wakeup set flag
> + * @dev_wake: Device wakeup count
> + * @alloc_size: Total memory allocations size of the controller
> + * @pending_pkts: Pending packets for the controller
> + * @transition_list: List of MHI state transitions
> + * @wlock: Lock for protecting device wakeup
> + * @M0: M0 state counter for debugging
> + * @M2: M2 state counter for debugging
> + * @M3: M3 state counter for debugging
> + * @M3_FAST: M3 Fast state counter for debugging
> + * @st_worker: State transition worker
> + * @fw_worker: Firmware download worker
> + * @syserr_worker: System error worker
> + * @state_event: State change event
> + * @status_cb: CB function to notify various power states to bus master
> + * @link_status: CB function to query link status of the device
> + * @wake_get: CB function to assert device wake
> + * @wake_put: CB function to de-assert device wake
> + * @wake_toggle: CB function to assert and de-assert (toggle) device wake
> + * @runtime_get: CB function to request controller runtime resume
> + * @runtime_put: CB function to decrement PM usage count
> + * @lpm_disable: CB function to request disabling of link-level low power modes
> + * @lpm_enable: CB function to request re-enabling of link-level low power modes
> + * @bounce_buf: Use of bounce buffer
> + * @buffer_len: Bounce buffer length
> + * @priv_data: Points to bus master's private data
> + */
> +struct mhi_controller {
> + const char *name;
> + struct device *dev;
> + struct mhi_device *mhi_dev;
> + u32 dev_id;
> + u32 bus_id;
> + void __iomem *regs;
> + dma_addr_t iova_start;
> + dma_addr_t iova_stop;
> + const char *fw_image;
> + const char *edl_image;
> + bool fbc_download;
> + size_t sbl_size;
> + size_t seg_len;
> + u32 max_chan;
> + struct mhi_chan *mhi_chan;
> + struct list_head lpm_chans;
> + u32 total_ev_rings;
> + u32 hw_ev_rings;
> + u32 sw_ev_rings;
> + u32 nr_irqs_req;
> + u32 nr_irqs;
> + int *irq;
> +
> + struct mhi_event *mhi_event;
> + struct mhi_cmd *mhi_cmd;
> + struct mhi_ctxt *mhi_ctxt;
> +
> + u32 timeout_ms;
> + struct mutex pm_mutex;
> + bool pre_init;
> + rwlock_t pm_lock;
> + u32 pm_state;
> + u32 db_access;
> + enum mhi_ee_type ee;
> + bool wake_set;
> + atomic_t dev_wake;
> + atomic_t alloc_size;
> + atomic_t pending_pkts;
> + struct list_head transition_list;
> + spinlock_t transition_lock;
> + spinlock_t wlock;
> + u32 M0, M2, M3, M3_FAST;
> + struct work_struct st_worker;
> + struct work_struct fw_worker;
> + struct work_struct syserr_worker;
> + wait_queue_head_t state_event;
> +
> + void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
> + enum mhi_callback cb);
> + int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
> + void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
> + void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
> + void (*wake_toggle)(struct mhi_controller *mhi_cntrl);
> + int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
> + void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
> + void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
> + void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
> +
> + bool bounce_buf;
> + size_t buffer_len;
> + void *priv_data;
> +};
> +
> +/**
> + * struct mhi_device - Structure representing a MHI device which binds
> + * to channels
> + * @dev: Driver model device node for the MHI device
> + * @tiocm: Device current terminal settings
> + * @id: Pointer to MHI device ID struct
> + * @chan_name: Name of the channel to which the device binds
> + * @mhi_cntrl: Controller the device belongs to
> + * @ul_chan: UL channel for the device
> + * @dl_chan: DL channel for the device
> + * @dev_wake: Device wakeup counter
> + * @dev_type: MHI device type
> + */
> +struct mhi_device {
> + struct device dev;
> + u32 tiocm;
> + const struct mhi_device_id *id;
> + const char *chan_name;
> + struct mhi_controller *mhi_cntrl;
> + struct mhi_chan *ul_chan;
> + struct mhi_chan *dl_chan;
> + atomic_t dev_wake;
> + enum mhi_device_type dev_type;
> +};
> +
> +/**
> + * struct mhi_result - Completed buffer information
> + * @buf_addr: Address of data buffer
> + * @dir: Channel direction
> + * @bytes_xfer: # of bytes transferred
> + * @transaction_status: Status of last transaction
> + */
> +struct mhi_result {
> + void *buf_addr;
> + enum dma_data_direction dir;
> + size_t bytes_xferd;
Description says this is named "bytes_xfer"
> + int transaction_status;
> +};
> +
> +#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev)
> +
> +/**
> + * mhi_controller_set_devdata - Set MHI controller private data
> + * @mhi_cntrl: MHI controller to set data
> + */
> +static inline void mhi_controller_set_devdata(struct mhi_controller *mhi_cntrl,
> + void *priv)
> +{
> + mhi_cntrl->priv_data = priv;
> +}
> +
> +/**
> + * mhi_controller_get_devdata - Get MHI controller private data
> + * @mhi_cntrl: MHI controller to get data
> + */
> +static inline void *mhi_controller_get_devdata(struct mhi_controller *mhi_cntrl)
> +{
> + return mhi_cntrl->priv_data;
> +}
> +
> +/**
> + * mhi_register_controller - Register MHI controller
> + * @mhi_cntrl: MHI controller to register
> + * @config: Configuration to use for the controller
> + */
> +int mhi_register_controller(struct mhi_controller *mhi_cntrl,
> + struct mhi_controller_config *config);
> +
> +/**
> + * mhi_unregister_controller - Unregister MHI controller
> + * @mhi_cntrl: MHI controller to unregister
> + */
> +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl);
> +
> +#endif /* _MHI_H_ */
> diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
> index e3596db077dc..be15e997fe39 100644
> --- a/include/linux/mod_devicetable.h
> +++ b/include/linux/mod_devicetable.h
> @@ -821,4 +821,16 @@ struct wmi_device_id {
> const void *context;
> };
>
> +#define MHI_NAME_SIZE 32
> +
> +/**
> + * struct mhi_device_id - MHI device identification
> + * @chan: MHI channel name
> + * @driver_data: driver data;
> + */
> +struct mhi_device_id {
> + const char chan[MHI_NAME_SIZE];
> + kernel_ulong_t driver_data;
> +};
> +
> #endif /* LINUX_MOD_DEVICETABLE_H */
>
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
On Thu, Jan 23, 2020 at 10:05:50AM -0700, Jeffrey Hugo wrote:
> On 1/23/2020 4:18 AM, Manivannan Sadhasivam wrote:
> > This commit adds support for registering MHI controller drivers with
> > the MHI stack. MHI controller drivers manage the interaction with
> > MHI client devices such as external modems and WiFi chipsets. They
> > are also the MHI bus masters in charge of managing the physical link
> > between the host and client device.
> >
> > This is based on the patch submitted by Sujeev Dias:
> > https://lkml.org/lkml/2018/7/9/987
> >
> > Signed-off-by: Sujeev Dias <[email protected]>
> > Signed-off-by: Siddartha Mohanadoss <[email protected]>
> > [jhugo: added static config for controllers and fixed several bugs]
> > Signed-off-by: Jeffrey Hugo <[email protected]>
> > [mani: removed DT dependency, split and cleaned up for upstream]
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > ---
> > drivers/bus/Kconfig | 1 +
> > drivers/bus/Makefile | 3 +
> > drivers/bus/mhi/Kconfig | 14 +
> > drivers/bus/mhi/Makefile | 2 +
> > drivers/bus/mhi/core/Makefile | 3 +
> > drivers/bus/mhi/core/init.c | 404 +++++++++++++++++++++++++++++
> > drivers/bus/mhi/core/internal.h | 169 ++++++++++++
> > include/linux/mhi.h | 438 ++++++++++++++++++++++++++++++++
> > include/linux/mod_devicetable.h | 12 +
> > 9 files changed, 1046 insertions(+)
> > create mode 100644 drivers/bus/mhi/Kconfig
> > create mode 100644 drivers/bus/mhi/Makefile
> > create mode 100644 drivers/bus/mhi/core/Makefile
> > create mode 100644 drivers/bus/mhi/core/init.c
> > create mode 100644 drivers/bus/mhi/core/internal.h
> > create mode 100644 include/linux/mhi.h
> >
> > diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
> > index 50200d1c06ea..383934e54786 100644
> > --- a/drivers/bus/Kconfig
> > +++ b/drivers/bus/Kconfig
> > @@ -202,5 +202,6 @@ config DA8XX_MSTPRI
> > peripherals.
> > source "drivers/bus/fsl-mc/Kconfig"
> > +source "drivers/bus/mhi/Kconfig"
> > endmenu
> > diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
> > index 1320bcf9fa9d..05f32cd694a4 100644
> > --- a/drivers/bus/Makefile
> > +++ b/drivers/bus/Makefile
> > @@ -34,3 +34,6 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o
> > obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
> > obj-$(CONFIG_DA8XX_MSTPRI) += da8xx-mstpri.o
> > +
> > +# MHI
> > +obj-$(CONFIG_MHI_BUS) += mhi/
> > diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
> > new file mode 100644
> > index 000000000000..a8bd9bd7db7c
> > --- /dev/null
> > +++ b/drivers/bus/mhi/Kconfig
> > @@ -0,0 +1,14 @@
> > +# SPDX-License-Identifier: GPL-2.0
>
> first time I noticed this, although I suspect this will need to be corrected
> "everywhere" -
> Per the SPDX website, the "GPL-2.0" label is deprecated. It's replacement
> is "GPL-2.0-only".
> I think all instances should be updated to "GPL-2.0-only"
No, it is fine, please read Documentation/process/license-rules.rst
thanks,
greg k-h
On Thu, Jan 23, 2020 at 04:48:22PM +0530, Manivannan Sadhasivam wrote:
> --- /dev/null
> +++ b/include/linux/mhi.h
> @@ -0,0 +1,438 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> + *
> + */
> +#ifndef _MHI_H_
> +#define _MHI_H_
> +
> +#include <linux/device.h>
> +#include <linux/dma-direction.h>
> +#include <linux/mutex.h>
> +#include <linux/rwlock_types.h>
> +#include <linux/slab.h>
> +#include <linux/spinlock_types.h>
> +#include <linux/wait.h>
> +#include <linux/workqueue.h>
> +
> +struct mhi_chan;
> +struct mhi_event;
> +struct mhi_ctxt;
> +struct mhi_cmd;
> +struct mhi_buf_info;
> +
> +/**
> + * enum mhi_callback - MHI callback
> + * @MHI_CB_IDLE: MHI entered idle state
> + * @MHI_CB_PENDING_DATA: New data available for client to process
> + * @MHI_CB_LPM_ENTER: MHI host entered low power mode
> + * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
> + * @MHI_CB_EE_RDDM: MHI device entered RDDM exec env
> + * @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env
> + * @MHI_CB_SYS_ERROR: MHI device entered error state (may recover)
> + * @MHI_CB_FATAL_ERROR: MHI device entered fatal error state
> + */
> +enum mhi_callback {
> + MHI_CB_IDLE,
> + MHI_CB_PENDING_DATA,
> + MHI_CB_LPM_ENTER,
> + MHI_CB_LPM_EXIT,
> + MHI_CB_EE_RDDM,
> + MHI_CB_EE_MISSION_MODE,
> + MHI_CB_SYS_ERROR,
> + MHI_CB_FATAL_ERROR,
> +};
> +
> +/**
> + * enum mhi_flags - Transfer flags
> + * @MHI_EOB: End of buffer for bulk transfer
> + * @MHI_EOT: End of transfer
> + * @MHI_CHAIN: Linked transfer
> + */
> +enum mhi_flags {
> + MHI_EOB,
> + MHI_EOT,
> + MHI_CHAIN,
> +};
> +
> +/**
> + * enum mhi_device_type - Device types
> + * @MHI_DEVICE_XFER: Handles data transfer
> + * @MHI_DEVICE_TIMESYNC: Use for timesync feature
> + * @MHI_DEVICE_CONTROLLER: Control device
> + */
> +enum mhi_device_type {
> + MHI_DEVICE_XFER,
> + MHI_DEVICE_TIMESYNC,
> + MHI_DEVICE_CONTROLLER,
> +};
> +
> +/**
> + * enum mhi_ch_type - Channel types
> + * @MHI_CH_TYPE_INVALID: Invalid channel type
> + * @MHI_CH_TYPE_OUTBOUND: Outbound channel to the device
> + * @MHI_CH_TYPE_INBOUND: Inbound channel from the device
> + * @MHI_CH_TYPE_INBOUND_COALESCED: Coalesced channel for the device to combine
> + * multiple packets and send them as a single
> + * large packet to reduce CPU consumption
> + */
> +enum mhi_ch_type {
> + MHI_CH_TYPE_INVALID = 0,
> + MHI_CH_TYPE_OUTBOUND = DMA_TO_DEVICE,
> + MHI_CH_TYPE_INBOUND = DMA_FROM_DEVICE,
> + MHI_CH_TYPE_INBOUND_COALESCED = 3,
> +};
> +
> +/**
> + * enum mhi_ee_type - Execution environment types
> + * @MHI_EE_PBL: Primary Bootloader
> + * @MHI_EE_SBL: Secondary Bootloader
> + * @MHI_EE_AMSS: Modem, aka the primary runtime EE
> + * @MHI_EE_RDDM: Ram dump download mode
> + * @MHI_EE_WFW: WLAN firmware mode
> + * @MHI_EE_PTHRU: Passthrough
> + * @MHI_EE_EDL: Embedded downloader
> + */
> +enum mhi_ee_type {
> + MHI_EE_PBL,
> + MHI_EE_SBL,
> + MHI_EE_AMSS,
> + MHI_EE_RDDM,
> + MHI_EE_WFW,
> + MHI_EE_PTHRU,
> + MHI_EE_EDL,
> + MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
> + MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
> + MHI_EE_NOT_SUPPORTED,
> + MHI_EE_MAX,
> +};
> +
> +/**
> + * enum mhi_buf_type - Accepted buffer type for the channel
> + * @MHI_BUF_RAW: Raw buffer
> + * @MHI_BUF_SKB: SKB struct
> + * @MHI_BUF_SCLIST: Scatter-gather list
> + * @MHI_BUF_NOP: CPU offload channel, host does not accept transfer
> + * @MHI_BUF_DMA: Receive DMA address mapped by client
> + * @MHI_BUF_RSC_DMA: RSC type premapped buffer
> + */
> +enum mhi_buf_type {
> + MHI_BUF_RAW,
> + MHI_BUF_SKB,
> + MHI_BUF_SCLIST,
> + MHI_BUF_NOP,
> + MHI_BUF_DMA,
> + MHI_BUF_RSC_DMA,
> +};
> +
> +/**
> + * enum mhi_er_data_type - Event ring data types
> + * @MHI_ER_DATA: Only client data over this ring
> + * @MHI_ER_CTRL: MHI control data and client data
> + * @MHI_ER_TSYNC: Time sync events
> + */
> +enum mhi_er_data_type {
> + MHI_ER_DATA,
> + MHI_ER_CTRL,
> + MHI_ER_TSYNC,
> +};
> +
> +/**
> + * enum mhi_db_brst_mode - Doorbell mode
> + * @MHI_DB_BRST_DISABLE: Burst mode disable
> + * @MHI_DB_BRST_ENABLE: Burst mode enable
> + */
> +enum mhi_db_brst_mode {
> + MHI_DB_BRST_DISABLE = 0x2,
> + MHI_DB_BRST_ENABLE = 0x3,
> +};
> +
> +/**
> + * struct mhi_channel_config - Channel configuration structure for controller
> + * @num: The number assigned to this channel
> + * @name: The name of this channel
> + * @num_elements: The number of elements that can be queued to this channel
> + * @local_elements: The local ring length of the channel
> + * @event_ring: The event ring index that services this channel
> + * @dir: Direction that data may flow on this channel
> + * @type: Channel type
> + * @ee_mask: Execution Environment mask for this channel
> + * @pollcfg: Polling configuration for burst mode. 0 is default. milliseconds
> + for UL channels, multiple of 8 ring elements for DL channels
> + * @data_type: Data type accepted by this channel
> + * @doorbell: Doorbell mode
> + * @lpm_notify: The channel master requires low power mode notifications
> + * @offload_channel: The client manages the channel completely
> + * @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition
> + * @auto_queue: Framework will automatically queue buffers for DL traffic
> + * @auto_start: Automatically start (open) this channel
> + */
> +struct mhi_channel_config {
> + u32 num;
> + char *name;
If you have a "name" for your configuration, shouldn't this be a struct
device so you see that in sysfs? If not, what is the name for?
> + u32 num_elements;
> + u32 local_elements;
> + u32 event_ring;
> + enum dma_data_direction dir;
> + enum mhi_ch_type type;
> + u32 ee_mask;
> + u32 pollcfg;
> + enum mhi_buf_type data_type;
> + enum mhi_db_brst_mode doorbell;
> + bool lpm_notify;
> + bool offload_channel;
> + bool doorbell_mode_switch;
> + bool auto_queue;
> + bool auto_start;
> +};
> +
> +/**
> + * struct mhi_event_config - Event ring configuration structure for controller
> + * @num_elements: The number of elements that can be queued to this ring
> + * @irq_moderation_ms: Delay irq for additional events to be aggregated
> + * @irq: IRQ associated with this ring
> + * @channel: Dedicated channel number. U32_MAX indicates a non-dedicated ring
> + * @mode: Doorbell mode
> + * @data_type: Type of data this ring will process
> + * @hardware_event: This ring is associated with hardware channels
> + * @client_managed: This ring is client managed
> + * @offload_channel: This ring is associated with an offloaded channel
> + * @priority: Priority of this ring. Use 1 for now
> + */
> +struct mhi_event_config {
> + u32 num_elements;
> + u32 irq_moderation_ms;
> + u32 irq;
> + u32 channel;
> + enum mhi_db_brst_mode mode;
> + enum mhi_er_data_type data_type;
> + bool hardware_event;
> + bool client_managed;
> + bool offload_channel;
> + u32 priority;
> +};
> +
> +/**
> + * struct mhi_controller_config - Root MHI controller configuration
> + * @max_channels: Maximum number of channels supported
> + * @timeout_ms: Timeout value for operations. 0 means use default
> + * @use_bounce_buf: Use a bounce buffer pool due to limited DDR access
> + * @m2_no_db: Host is not allowed to ring DB in M2 state
> + * @buf_len: Size of automatically allocated buffers. 0 means use default
> + * @num_channels: Number of channels defined in @ch_cfg
> + * @ch_cfg: Array of defined channels
> + * @num_events: Number of event rings defined in @event_cfg
> + * @event_cfg: Array of defined event rings
> + */
> +struct mhi_controller_config {
> + u32 max_channels;
> + u32 timeout_ms;
> + bool use_bounce_buf;
> + bool m2_no_db;
> + u32 buf_len;
> + u32 num_channels;
> + struct mhi_channel_config *ch_cfg;
> + u32 num_events;
> + struct mhi_event_config *event_cfg;
You really should run pahole on this file to see how badly packed these
structures are :(
> +};
> +
> +/**
> + * struct mhi_controller - Master MHI controller structure
> + * @name: Name of the controller
> + * @dev: Driver model device node for the controller
> + * @mhi_dev: MHI device instance for the controller
> + * @dev_id: Device ID of the controller
> + * @bus_id: Physical bus instance used by the controller
> + * @regs: Base address of MHI MMIO register space
> + * @iova_start: IOMMU starting address for data
> + * @iova_stop: IOMMU stop address for data
> + * @fw_image: Firmware image name for normal booting
> + * @edl_image: Firmware image name for emergency download mode
> + * @fbc_download: MHI host needs to do complete image transfer
> + * @sbl_size: SBL image size
> + * @seg_len: BHIe vector size
> + * @max_chan: Maximum number of channels the controller supports
> + * @mhi_chan: Points to the channel configuration table
> + * @lpm_chans: List of channels that require LPM notifications
> + * @total_ev_rings: Total # of event rings allocated
> + * @hw_ev_rings: Number of hardware event rings
> + * @sw_ev_rings: Number of software event rings
> + * @nr_irqs_req: Number of IRQs required to operate
> + * @nr_irqs: Number of IRQ allocated by bus master
> + * @irq: Base IRQ number to request
> + * @mhi_event: MHI event ring configurations table
> + * @mhi_cmd: MHI command ring configurations table
> + * @mhi_ctxt: MHI device context, shared memory between host and device
> + * @timeout_ms: Timeout in ms for state transitions
> + * @pm_mutex: Mutex for suspend/resume operation
> + * @pre_init: MHI host needs to do pre-initialization before power up
> + * @pm_lock: Lock for protecting MHI power management state
> + * @pm_state: MHI power management state
> + * @db_access: DB access states
> + * @ee: MHI device execution environment
> + * @wake_set: Device wakeup set flag
> + * @dev_wake: Device wakeup count
> + * @alloc_size: Total memory allocations size of the controller
> + * @pending_pkts: Pending packets for the controller
> + * @transition_list: List of MHI state transitions
> + * @wlock: Lock for protecting device wakeup
> + * @M0: M0 state counter for debugging
> + * @M2: M2 state counter for debugging
> + * @M3: M3 state counter for debugging
> + * @M3_FAST: M3 Fast state counter for debugging
> + * @st_worker: State transition worker
> + * @fw_worker: Firmware download worker
> + * @syserr_worker: System error worker
> + * @state_event: State change event
> + * @status_cb: CB function to notify various power states to bus master
> + * @link_status: CB function to query link status of the device
> + * @wake_get: CB function to assert device wake
> + * @wake_put: CB function to de-assert device wake
> + * @wake_toggle: CB function to assert and de-assert (toggle) device wake
> + * @runtime_get: CB function to request controller runtime resume
> + * @runtime_put: CB function to decrement PM usage count
> + * @lpm_disable: CB function to request disabling of link-level low power modes
> + * @lpm_enable: CB function to request re-enabling of link-level low power modes
> + * @bounce_buf: Use of bounce buffer
> + * @buffer_len: Bounce buffer length
> + * @priv_data: Points to bus master's private data
> + */
> +struct mhi_controller {
> + const char *name;
> + struct device *dev;
Why isn't this a struct device directly? Why a pointer?
And why don't you use the name in the struct device?
> + struct mhi_device *mhi_dev;
> + u32 dev_id;
> + u32 bus_id;
Shouldn't the bus id come from the bus it is assigned to? Why store it
again?
> + void __iomem *regs;
> + dma_addr_t iova_start;
> + dma_addr_t iova_stop;
> + const char *fw_image;
> + const char *edl_image;
> + bool fbc_download;
> + size_t sbl_size;
> + size_t seg_len;
> + u32 max_chan;
> + struct mhi_chan *mhi_chan;
> + struct list_head lpm_chans;
> + u32 total_ev_rings;
> + u32 hw_ev_rings;
> + u32 sw_ev_rings;
> + u32 nr_irqs_req;
> + u32 nr_irqs;
> + int *irq;
> +
> + struct mhi_event *mhi_event;
> + struct mhi_cmd *mhi_cmd;
> + struct mhi_ctxt *mhi_ctxt;
> +
> + u32 timeout_ms;
> + struct mutex pm_mutex;
> + bool pre_init;
> + rwlock_t pm_lock;
> + u32 pm_state;
> + u32 db_access;
> + enum mhi_ee_type ee;
> + bool wake_set;
> + atomic_t dev_wake;
> + atomic_t alloc_size;
> + atomic_t pending_pkts;
Why a bunch of atomic variables when you already have a lock?
> + struct list_head transition_list;
> + spinlock_t transition_lock;
You don't document this lock.
> + spinlock_t wlock;
Why have 2 locks?
> + u32 M0, M2, M3, M3_FAST;
> + struct work_struct st_worker;
> + struct work_struct fw_worker;
> + struct work_struct syserr_worker;
> + wait_queue_head_t state_event;
> +
> + void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
> + enum mhi_callback cb);
> + int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
> + void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
> + void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
> + void (*wake_toggle)(struct mhi_controller *mhi_cntrl);
> + int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
> + void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
> + void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
> + void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
Shouldn't all of these be part of the bus or driver assigned to this
device and not in the device itself? This feels odd as-is.
> +
> + bool bounce_buf;
> + size_t buffer_len;
> + void *priv_data;
Why can't you use the private pointer in struct device?
> +};
> +
> +/**
> + * struct mhi_device - Structure representing a MHI device which binds
> + * to channels
> + * @dev: Driver model device node for the MHI device
> + * @tiocm: Device current terminal settings
> + * @id: Pointer to MHI device ID struct
> + * @chan_name: Name of the channel to which the device binds
> + * @mhi_cntrl: Controller the device belongs to
> + * @ul_chan: UL channel for the device
> + * @dl_chan: DL channel for the device
> + * @dev_wake: Device wakeup counter
> + * @dev_type: MHI device type
> + */
> +struct mhi_device {
> + struct device dev;
> + u32 tiocm;
> + const struct mhi_device_id *id;
> + const char *chan_name;
> + struct mhi_controller *mhi_cntrl;
> + struct mhi_chan *ul_chan;
> + struct mhi_chan *dl_chan;
> + atomic_t dev_wake;
Why does this have to be atomic?
> + enum mhi_device_type dev_type;
> +};
> +
> +/**
> + * struct mhi_result - Completed buffer information
> + * @buf_addr: Address of data buffer
> + * @dir: Channel direction
> + * @bytes_xfer: # of bytes transferred
> + * @transaction_status: Status of last transaction
> + */
> +struct mhi_result {
> + void *buf_addr;
Why void *?
> + enum dma_data_direction dir;
> + size_t bytes_xferd;
> + int transaction_status;
> +};
> +
thanks,
greg k-h
On 1/24/2020 1:29 AM, Greg KH wrote:
> On Thu, Jan 23, 2020 at 04:48:22PM +0530, Manivannan Sadhasivam wrote:
>> --- /dev/null
>> +++ b/include/linux/mhi.h
>> @@ -0,0 +1,438 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
>> + *
>> + */
>> +#ifndef _MHI_H_
>> +#define _MHI_H_
>> +
>> +#include <linux/device.h>
>> +#include <linux/dma-direction.h>
>> +#include <linux/mutex.h>
>> +#include <linux/rwlock_types.h>
>> +#include <linux/slab.h>
>> +#include <linux/spinlock_types.h>
>> +#include <linux/wait.h>
>> +#include <linux/workqueue.h>
>> +
>> +struct mhi_chan;
>> +struct mhi_event;
>> +struct mhi_ctxt;
>> +struct mhi_cmd;
>> +struct mhi_buf_info;
>> +
>> +/**
>> + * enum mhi_callback - MHI callback
>> + * @MHI_CB_IDLE: MHI entered idle state
>> + * @MHI_CB_PENDING_DATA: New data available for client to process
>> + * @MHI_CB_LPM_ENTER: MHI host entered low power mode
>> + * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
>> + * @MHI_CB_EE_RDDM: MHI device entered RDDM exec env
>> + * @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env
>> + * @MHI_CB_SYS_ERROR: MHI device entered error state (may recover)
>> + * @MHI_CB_FATAL_ERROR: MHI device entered fatal error state
>> + */
>> +enum mhi_callback {
>> + MHI_CB_IDLE,
>> + MHI_CB_PENDING_DATA,
>> + MHI_CB_LPM_ENTER,
>> + MHI_CB_LPM_EXIT,
>> + MHI_CB_EE_RDDM,
>> + MHI_CB_EE_MISSION_MODE,
>> + MHI_CB_SYS_ERROR,
>> + MHI_CB_FATAL_ERROR,
>> +};
>> +
>> +/**
>> + * enum mhi_flags - Transfer flags
>> + * @MHI_EOB: End of buffer for bulk transfer
>> + * @MHI_EOT: End of transfer
>> + * @MHI_CHAIN: Linked transfer
>> + */
>> +enum mhi_flags {
>> + MHI_EOB,
>> + MHI_EOT,
>> + MHI_CHAIN,
>> +};
>> +
>> +/**
>> + * enum mhi_device_type - Device types
>> + * @MHI_DEVICE_XFER: Handles data transfer
>> + * @MHI_DEVICE_TIMESYNC: Use for timesync feature
>> + * @MHI_DEVICE_CONTROLLER: Control device
>> + */
>> +enum mhi_device_type {
>> + MHI_DEVICE_XFER,
>> + MHI_DEVICE_TIMESYNC,
>> + MHI_DEVICE_CONTROLLER,
>> +};
>> +
>> +/**
>> + * enum mhi_ch_type - Channel types
>> + * @MHI_CH_TYPE_INVALID: Invalid channel type
>> + * @MHI_CH_TYPE_OUTBOUND: Outbound channel to the device
>> + * @MHI_CH_TYPE_INBOUND: Inbound channel from the device
>> + * @MHI_CH_TYPE_INBOUND_COALESCED: Coalesced channel for the device to combine
>> + * multiple packets and send them as a single
>> + * large packet to reduce CPU consumption
>> + */
>> +enum mhi_ch_type {
>> + MHI_CH_TYPE_INVALID = 0,
>> + MHI_CH_TYPE_OUTBOUND = DMA_TO_DEVICE,
>> + MHI_CH_TYPE_INBOUND = DMA_FROM_DEVICE,
>> + MHI_CH_TYPE_INBOUND_COALESCED = 3,
>> +};
>> +
>> +/**
>> + * enum mhi_ee_type - Execution environment types
>> + * @MHI_EE_PBL: Primary Bootloader
>> + * @MHI_EE_SBL: Secondary Bootloader
>> + * @MHI_EE_AMSS: Modem, aka the primary runtime EE
>> + * @MHI_EE_RDDM: Ram dump download mode
>> + * @MHI_EE_WFW: WLAN firmware mode
>> + * @MHI_EE_PTHRU: Passthrough
>> + * @MHI_EE_EDL: Embedded downloader
>> + */
>> +enum mhi_ee_type {
>> + MHI_EE_PBL,
>> + MHI_EE_SBL,
>> + MHI_EE_AMSS,
>> + MHI_EE_RDDM,
>> + MHI_EE_WFW,
>> + MHI_EE_PTHRU,
>> + MHI_EE_EDL,
>> + MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
>> + MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
>> + MHI_EE_NOT_SUPPORTED,
>> + MHI_EE_MAX,
>> +};
>> +
>> +/**
>> + * enum mhi_buf_type - Accepted buffer type for the channel
>> + * @MHI_BUF_RAW: Raw buffer
>> + * @MHI_BUF_SKB: SKB struct
>> + * @MHI_BUF_SCLIST: Scatter-gather list
>> + * @MHI_BUF_NOP: CPU offload channel, host does not accept transfer
>> + * @MHI_BUF_DMA: Receive DMA address mapped by client
>> + * @MHI_BUF_RSC_DMA: RSC type premapped buffer
>> + */
>> +enum mhi_buf_type {
>> + MHI_BUF_RAW,
>> + MHI_BUF_SKB,
>> + MHI_BUF_SCLIST,
>> + MHI_BUF_NOP,
>> + MHI_BUF_DMA,
>> + MHI_BUF_RSC_DMA,
>> +};
>> +
>> +/**
>> + * enum mhi_er_data_type - Event ring data types
>> + * @MHI_ER_DATA: Only client data over this ring
>> + * @MHI_ER_CTRL: MHI control data and client data
>> + * @MHI_ER_TSYNC: Time sync events
>> + */
>> +enum mhi_er_data_type {
>> + MHI_ER_DATA,
>> + MHI_ER_CTRL,
>> + MHI_ER_TSYNC,
>> +};
>> +
>> +/**
>> + * enum mhi_db_brst_mode - Doorbell mode
>> + * @MHI_DB_BRST_DISABLE: Burst mode disable
>> + * @MHI_DB_BRST_ENABLE: Burst mode enable
>> + */
>> +enum mhi_db_brst_mode {
>> + MHI_DB_BRST_DISABLE = 0x2,
>> + MHI_DB_BRST_ENABLE = 0x3,
>> +};
>> +
>> +/**
>> + * struct mhi_channel_config - Channel configuration structure for controller
>> + * @num: The number assigned to this channel
>> + * @name: The name of this channel
>> + * @num_elements: The number of elements that can be queued to this channel
>> + * @local_elements: The local ring length of the channel
>> + * @event_ring: The event ring index that services this channel
>> + * @dir: Direction that data may flow on this channel
>> + * @type: Channel type
>> + * @ee_mask: Execution Environment mask for this channel
>> + * @pollcfg: Polling configuration for burst mode. 0 is default; in
>> + *           milliseconds for UL channels, in multiples of 8 ring elements
>> + *           for DL channels
>> + * @data_type: Data type accepted by this channel
>> + * @doorbell: Doorbell mode
>> + * @lpm_notify: The channel master requires low power mode notifications
>> + * @offload_channel: The client manages the channel completely
>> + * @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition
>> + * @auto_queue: Framework will automatically queue buffers for DL traffic
>> + * @auto_start: Automatically start (open) this channel
>> + */
>> +struct mhi_channel_config {
>> + u32 num;
>> + char *name;
>
> If you have a "name" for your configuration, shouldn't this be a struct
> device so you see that in sysfs? If not, what is the name for?
The config struct is used to create the channel device, but is not the
channel device. Eventually a struct mhi_device is created from this
information.
>
>> + u32 num_elements;
>> + u32 local_elements;
>> + u32 event_ring;
>> + enum dma_data_direction dir;
>> + enum mhi_ch_type type;
>> + u32 ee_mask;
>> + u32 pollcfg;
>> + enum mhi_buf_type data_type;
>> + enum mhi_db_brst_mode doorbell;
>> + bool lpm_notify;
>> + bool offload_channel;
>> + bool doorbell_mode_switch;
>> + bool auto_queue;
>> + bool auto_start;
>> +};
>> +
>> +/**
>> + * struct mhi_event_config - Event ring configuration structure for controller
>> + * @num_elements: The number of elements that can be queued to this ring
>> + * @irq_moderation_ms: Delay irq for additional events to be aggregated
>> + * @irq: IRQ associated with this ring
>> + * @channel: Dedicated channel number. U32_MAX indicates a non-dedicated ring
>> + * @mode: Doorbell mode
>> + * @data_type: Type of data this ring will process
>> + * @hardware_event: This ring is associated with hardware channels
>> + * @client_managed: This ring is client managed
>> + * @offload_channel: This ring is associated with an offloaded channel
>> + * @priority: Priority of this ring. Use 1 for now
>> + */
>> +struct mhi_event_config {
>> + u32 num_elements;
>> + u32 irq_moderation_ms;
>> + u32 irq;
>> + u32 channel;
>> + enum mhi_db_brst_mode mode;
>> + enum mhi_er_data_type data_type;
>> + bool hardware_event;
>> + bool client_managed;
>> + bool offload_channel;
>> + u32 priority;
>> +};
>> +
>> +/**
>> + * struct mhi_controller_config - Root MHI controller configuration
>> + * @max_channels: Maximum number of channels supported
>> + * @timeout_ms: Timeout value for operations. 0 means use default
>> + * @use_bounce_buf: Use a bounce buffer pool due to limited DDR access
>> + * @m2_no_db: Host is not allowed to ring DB in M2 state
>> + * @buf_len: Size of automatically allocated buffers. 0 means use default
>> + * @num_channels: Number of channels defined in @ch_cfg
>> + * @ch_cfg: Array of defined channels
>> + * @num_events: Number of event rings defined in @event_cfg
>> + * @event_cfg: Array of defined event rings
>> + */
>> +struct mhi_controller_config {
>> + u32 max_channels;
>> + u32 timeout_ms;
>> + bool use_bounce_buf;
>> + bool m2_no_db;
>> + u32 buf_len;
>> + u32 num_channels;
>> + struct mhi_channel_config *ch_cfg;
>> + u32 num_events;
>> + struct mhi_event_config *event_cfg;
>
> You really should run pahole on this file to see how badly packed these
> structures are :(
>
>> +};
>> +
>> +/**
>> + * struct mhi_controller - Master MHI controller structure
>> + * @name: Name of the controller
>> + * @dev: Driver model device node for the controller
>> + * @mhi_dev: MHI device instance for the controller
>> + * @dev_id: Device ID of the controller
>> + * @bus_id: Physical bus instance used by the controller
>> + * @regs: Base address of MHI MMIO register space
>> + * @iova_start: IOMMU starting address for data
>> + * @iova_stop: IOMMU stop address for data
>> + * @fw_image: Firmware image name for normal booting
>> + * @edl_image: Firmware image name for emergency download mode
>> + * @fbc_download: MHI host needs to do complete image transfer
>> + * @sbl_size: SBL image size
>> + * @seg_len: BHIe vector size
>> + * @max_chan: Maximum number of channels the controller supports
>> + * @mhi_chan: Points to the channel configuration table
>> + * @lpm_chans: List of channels that require LPM notifications
>> + * @total_ev_rings: Total # of event rings allocated
>> + * @hw_ev_rings: Number of hardware event rings
>> + * @sw_ev_rings: Number of software event rings
>> + * @nr_irqs_req: Number of IRQs required to operate
>> + * @nr_irqs: Number of IRQ allocated by bus master
>> + * @irq: base irq # to request
>> + * @mhi_event: MHI event ring configurations table
>> + * @mhi_cmd: MHI command ring configurations table
>> + * @mhi_ctxt: MHI device context, shared memory between host and device
>> + * @timeout_ms: Timeout in ms for state transitions
>> + * @pm_mutex: Mutex for suspend/resume operation
>> + * @pre_init: MHI host needs to do pre-initialization before power up
>> + * @pm_lock: Lock for protecting MHI power management state
>> + * @pm_state: MHI power management state
>> + * @db_access: DB access states
>> + * @ee: MHI device execution environment
>> + * @wake_set: Device wakeup set flag
>> + * @dev_wake: Device wakeup count
>> + * @alloc_size: Total memory allocations size of the controller
>> + * @pending_pkts: Pending packets for the controller
>> + * @transition_list: List of MHI state transitions
>> + * @wlock: Lock for protecting device wakeup
>> + * @M0: M0 state counter for debugging
>> + * @M2: M2 state counter for debugging
>> + * @M3: M3 state counter for debugging
>> + * @M3_FAST: M3 Fast state counter for debugging
>> + * @st_worker: State transition worker
>> + * @fw_worker: Firmware download worker
>> + * @syserr_worker: System error worker
>> + * @state_event: State change event
>> + * @status_cb: CB function to notify the bus master of various power states
>> + * @link_status: CB function to query link status of the device
>> + * @wake_get: CB function to assert device wake
>> + * @wake_put: CB function to de-assert device wake
>> + * @wake_toggle: CB function to assert and de-assert (toggle) device wake
>> + * @runtime_get: CB function to request a runtime resume of the controller
>> + * @runtime_put: CB function to decrement pm usage
>> + * @lpm_disable: CB function to request disabling of link-level low power modes
>> + * @lpm_enable: CB function to request re-enabling of link-level low power modes
>> + * @bounce_buf: Use of bounce buffer
>> + * @buffer_len: Bounce buffer length
>> + * @priv_data: Points to bus master's private data
>> + */
>> +struct mhi_controller {
>> + const char *name;
>> + struct device *dev;
>
> Why isn't this a struct device directly? Why a pointer?
>
> And why don't you use the name in the struct device?
>
>> + struct mhi_device *mhi_dev;
>> + u32 dev_id;
>> + u32 bus_id;
>
> Shouldn't the bus id come from the bus it is assigned to? Why store it
> again?
>
>> + void __iomem *regs;
>> + dma_addr_t iova_start;
>> + dma_addr_t iova_stop;
>> + const char *fw_image;
>> + const char *edl_image;
>> + bool fbc_download;
>> + size_t sbl_size;
>> + size_t seg_len;
>> + u32 max_chan;
>> + struct mhi_chan *mhi_chan;
>> + struct list_head lpm_chans;
>> + u32 total_ev_rings;
>> + u32 hw_ev_rings;
>> + u32 sw_ev_rings;
>> + u32 nr_irqs_req;
>> + u32 nr_irqs;
>> + int *irq;
>> +
>> + struct mhi_event *mhi_event;
>> + struct mhi_cmd *mhi_cmd;
>> + struct mhi_ctxt *mhi_ctxt;
>> +
>> + u32 timeout_ms;
>> + struct mutex pm_mutex;
>> + bool pre_init;
>> + rwlock_t pm_lock;
>> + u32 pm_state;
>> + u32 db_access;
>> + enum mhi_ee_type ee;
>> + bool wake_set;
>> + atomic_t dev_wake;
>> + atomic_t alloc_size;
>> + atomic_t pending_pkts;
>
> Why a bunch of atomic variables when you already have a lock?
>
>> + struct list_head transition_list;
>> + spinlock_t transition_lock;
>
> You don't document this lock.
>
>> + spinlock_t wlock;
>
> Why have 2 locks?
>
>> + u32 M0, M2, M3, M3_FAST;
>> + struct work_struct st_worker;
>> + struct work_struct fw_worker;
>> + struct work_struct syserr_worker;
>> + wait_queue_head_t state_event;
>> +
>> + void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
>> + enum mhi_callback cb);
>> + int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
>> + void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
>> + void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
>> + void (*wake_toggle)(struct mhi_controller *mhi_cntrl);
>> + int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
>> + void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
>> + void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
>> + void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
>
> Shouldn't all of these be part of the bus or driver assigned to this
> device and not in the device itself? This feels odd as-is.
>
>> +
>> + bool bounce_buf;
>> + size_t buffer_len;
>> + void *priv_data;
>
> Why can't you use the private pointer in struct device?
>
>> +};
>> +
>> +/**
>> + * struct mhi_device - Structure representing an MHI device which binds
>> + * to channels
>> + * @dev: Driver model device node for the MHI device
>> + * @tiocm: Device current terminal settings
>> + * @id: Pointer to MHI device ID struct
>> + * @chan_name: Name of the channel to which the device binds
>> + * @mhi_cntrl: Controller the device belongs to
>> + * @ul_chan: UL channel for the device
>> + * @dl_chan: DL channel for the device
>> + * @dev_wake: Device wakeup counter
>> + * @dev_type: MHI device type
>> + */
>> +struct mhi_device {
>> + struct device dev;
>> + u32 tiocm;
>> + const struct mhi_device_id *id;
>> + const char *chan_name;
>> + struct mhi_controller *mhi_cntrl;
>> + struct mhi_chan *ul_chan;
>> + struct mhi_chan *dl_chan;
>> + atomic_t dev_wake;
>
> Why does this have to be atomic?
>
>> + enum mhi_device_type dev_type;
>> +};
>> +
>> +/**
>> + * struct mhi_result - Completed buffer information
>> + * @buf_addr: Address of data buffer
>> + * @dir: Channel direction
>> + * @bytes_xferd: # of bytes transferred
>> + * @transaction_status: Status of last transaction
>> + */
>> +struct mhi_result {
>> + void *buf_addr;
>
> Why void *?
Because it's not possible to resolve this more clearly. The client
provides the buffer and knows what the structure is. The bus does not.
It's just an opaque pointer (hence void *) to the bus, and the client
needs to decode it. This is the struct that is handed to the client to
allow them to decode the activity (either a received buf, or a
confirmation that a transmitted buf has been consumed).
>
>> + enum dma_data_direction dir;
>> + size_t bytes_xferd;
>> + int transaction_status;
>> +};
>> +
>
> thanks,
>
> greg k-h
>
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
On Fri, Jan 24, 2020 at 07:24:43AM -0700, Jeffrey Hugo wrote:
> > > +/**
> > > + * struct mhi_result - Completed buffer information
> > > + * @buf_addr: Address of data buffer
> > > + * @dir: Channel direction
> > > + * @bytes_xferd: # of bytes transferred
> > > + * @transaction_status: Status of last transaction
> > > + */
> > > +struct mhi_result {
> > > + void *buf_addr;
> >
> > Why void *?
>
> Because it's not possible to resolve this more clearly. The client provides
> the buffer and knows what the structure is. The bus does not. It's just an
> opaque pointer (hence void *) to the bus, and the client needs to decode it.
> This is the struct that is handed to the client to allow them to decode the
> activity (either a received buf, or a confirmation that a transmitted buf
> has been consumed).
Then shouldn't this be a "u8 *" instead as you are saying how many bytes
are here?
thanks,
greg k-h
On 1/24/2020 10:47 AM, Greg KH wrote:
> On Fri, Jan 24, 2020 at 07:24:43AM -0700, Jeffrey Hugo wrote:
>>>> +/**
>>>> + * struct mhi_result - Completed buffer information
>>>> + * @buf_addr: Address of data buffer
>>>> + * @dir: Channel direction
>>>> + * @bytes_xferd: # of bytes transferred
>>>> + * @transaction_status: Status of last transaction
>>>> + */
>>>> +struct mhi_result {
>>>> + void *buf_addr;
>>>
>>> Why void *?
>>
>> Because it's not possible to resolve this more clearly. The client provides
>> the buffer and knows what the structure is. The bus does not. It's just an
>> opaque pointer (hence void *) to the bus, and the client needs to decode it.
>> This is the struct that is handed to the client to allow them to decode the
>> activity (either a received buf, or a confirmation that a transmitted buf
>> has been consumed).
>
> Then shouldn't this be a "u8 *" instead as you are saying how many bytes
> are here?
I'm sorry, I don't see the benefit of that. Can you elaborate on why
you think that u8 * is a better type?
Sure, it's an arbitrary byte stream from the perspective of the bus, but
to the client, 99% of the time it's going to have some structure.
In the call back, the first thing the client is likely to do is:
struct my_struct *s = buf_addr;
This works great when it's a void *. If buf_addr is a u8 *, that's not
valid, and will result in a compiler error (at least per gcc 5.4.0).
With u8 *, the client has to do:
struct my_struct *s = (struct my_struct *)buf_addr;
I don't see a benefit to u8 * over void * in this case.
The only possible benefit I might see is if the client wants to use
buf_addr as an array to poke into it and maybe check a magic number, but
that assumes said magic number is a u8. Otherwise the client has to do
an explicit cast. It seems like such a small amount of the time that
use case would be valid, that it's not worth it to cater to it.
rpmsg, as one example, does the exact same thing where the received
buffer is a void *, and there is a size parameter.
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
On 1/23/2020 4:18 AM, Manivannan Sadhasivam wrote:
> This commit adds support for ringing channel and event ring doorbells
> by MHI host. The MHI host can use the channel and event ring doorbells
> for notifying the client device about processing transfer and event
> rings which it has queued using MMIO registers.
>
> This is based on the patch submitted by Sujeev Dias:
> https://lkml.org/lkml/2018/7/9/989
>
> Signed-off-by: Sujeev Dias <[email protected]>
> Signed-off-by: Siddartha Mohanadoss <[email protected]>
> [mani: split from pm patch and cleaned up for upstream]
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> ---
> drivers/bus/mhi/core/init.c | 140 ++++++++++++++++
> drivers/bus/mhi/core/internal.h | 275 ++++++++++++++++++++++++++++++++
> drivers/bus/mhi/core/main.c | 118 ++++++++++++++
> include/linux/mhi.h | 5 +
> 4 files changed, 538 insertions(+)
>
> diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
> index 60dcf2ad3a5f..588166b588b4 100644
> --- a/drivers/bus/mhi/core/init.c
> +++ b/drivers/bus/mhi/core/init.c
> @@ -19,6 +19,136 @@
> #include <linux/wait.h>
> #include "internal.h"
>
> +int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
> +{
> + u32 val;
> + int i, ret;
> + struct mhi_chan *mhi_chan;
> + struct mhi_event *mhi_event;
> + void __iomem *base = mhi_cntrl->regs;
> + struct {
> + u32 offset;
> + u32 mask;
> + u32 shift;
> + u32 val;
> + } reg_info[] = {
> + {
> + CCABAP_HIGHER, U32_MAX, 0,
> + upper_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
> + },
> + {
> + CCABAP_LOWER, U32_MAX, 0,
> + lower_32_bits(mhi_cntrl->mhi_ctxt->chan_ctxt_addr),
> + },
> + {
> + ECABAP_HIGHER, U32_MAX, 0,
> + upper_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
> + },
> + {
> + ECABAP_LOWER, U32_MAX, 0,
> + lower_32_bits(mhi_cntrl->mhi_ctxt->er_ctxt_addr),
> + },
> + {
> + CRCBAP_HIGHER, U32_MAX, 0,
> + upper_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
> + },
> + {
> + CRCBAP_LOWER, U32_MAX, 0,
> + lower_32_bits(mhi_cntrl->mhi_ctxt->cmd_ctxt_addr),
> + },
> + {
> + MHICFG, MHICFG_NER_MASK, MHICFG_NER_SHIFT,
> + mhi_cntrl->total_ev_rings,
> + },
> + {
> + MHICFG, MHICFG_NHWER_MASK, MHICFG_NHWER_SHIFT,
> + mhi_cntrl->hw_ev_rings,
> + },
> + {
> + MHICTRLBASE_HIGHER, U32_MAX, 0,
> + upper_32_bits(mhi_cntrl->iova_start),
> + },
> + {
> + MHICTRLBASE_LOWER, U32_MAX, 0,
> + lower_32_bits(mhi_cntrl->iova_start),
> + },
> + {
> + MHIDATABASE_HIGHER, U32_MAX, 0,
> + upper_32_bits(mhi_cntrl->iova_start),
> + },
> + {
> + MHIDATABASE_LOWER, U32_MAX, 0,
> + lower_32_bits(mhi_cntrl->iova_start),
> + },
> + {
> + MHICTRLLIMIT_HIGHER, U32_MAX, 0,
> + upper_32_bits(mhi_cntrl->iova_stop),
> + },
> + {
> + MHICTRLLIMIT_LOWER, U32_MAX, 0,
> + lower_32_bits(mhi_cntrl->iova_stop),
> + },
> + {
> + MHIDATALIMIT_HIGHER, U32_MAX, 0,
> + upper_32_bits(mhi_cntrl->iova_stop),
> + },
> + {
> + MHIDATALIMIT_LOWER, U32_MAX, 0,
> + lower_32_bits(mhi_cntrl->iova_stop),
> + },
> + { 0, 0, 0 }
> + };
> +
> + dev_dbg(mhi_cntrl->dev, "Initializing MHI registers\n");
> +
> + /* Read channel db offset */
> + ret = mhi_read_reg_field(mhi_cntrl, base, CHDBOFF, CHDBOFF_CHDBOFF_MASK,
> + CHDBOFF_CHDBOFF_SHIFT, &val);
> + if (ret) {
> + dev_err(mhi_cntrl->dev, "Unable to read CHDBOFF register\n");
> + return -EIO;
> + }
> +
> + /* Setup wake db */
> + mhi_cntrl->wake_db = base + val + (8 * MHI_DEV_WAKE_DB);
> + mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 4, 0);
> + mhi_write_reg(mhi_cntrl, mhi_cntrl->wake_db, 0, 0);
> + mhi_cntrl->wake_set = false;
> +
> + /* Setup channel db address for each channel in tre_ring */
> + mhi_chan = mhi_cntrl->mhi_chan;
> + for (i = 0; i < mhi_cntrl->max_chan; i++, val += 8, mhi_chan++)
> + mhi_chan->tre_ring.db_addr = base + val;
> +
> + /* Read event ring db offset */
> + ret = mhi_read_reg_field(mhi_cntrl, base, ERDBOFF, ERDBOFF_ERDBOFF_MASK,
> + ERDBOFF_ERDBOFF_SHIFT, &val);
> + if (ret) {
> + dev_err(mhi_cntrl->dev, "Unable to read ERDBOFF register\n");
> + return -EIO;
> + }
> +
> + /* Setup event db address for each ev_ring */
> + mhi_event = mhi_cntrl->mhi_event;
> + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, val += 8, mhi_event++) {
> + if (mhi_event->offload_ev)
> + continue;
> +
> + mhi_event->ring.db_addr = base + val;
> + }
> +
> + /* Setup DB register for primary CMD rings */
> + mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING].ring.db_addr = base + CRDB_LOWER;
> +
> + /* Write to MMIO registers */
> + for (i = 0; reg_info[i].offset; i++)
> + mhi_write_reg_field(mhi_cntrl, base, reg_info[i].offset,
> + reg_info[i].mask, reg_info[i].shift,
> + reg_info[i].val);
> +
> + return 0;
> +}
> +
> static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
> struct mhi_controller_config *config)
> {
> @@ -63,6 +193,11 @@ static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
> if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode))
> goto error_ev_cfg;
>
> + if (mhi_event->db_cfg.brstmode == MHI_DB_BRST_ENABLE)
> + mhi_event->db_cfg.process_db = mhi_db_brstmode;
> + else
> + mhi_event->db_cfg.process_db = mhi_db_brstmode_disable;
> +
> mhi_event->data_type = event_cfg->data_type;
>
> mhi_event->hw_ring = event_cfg->hardware_event;
> @@ -194,6 +329,11 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
> }
> }
>
> + if (mhi_chan->db_cfg.brstmode == MHI_DB_BRST_ENABLE)
> + mhi_chan->db_cfg.process_db = mhi_db_brstmode;
> + else
> + mhi_chan->db_cfg.process_db = mhi_db_brstmode_disable;
> +
> mhi_chan->configured = true;
>
> if (mhi_chan->lpm_notify)
> diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
> index ea7f1d7b0129..a4d10916984a 100644
> --- a/drivers/bus/mhi/core/internal.h
> +++ b/drivers/bus/mhi/core/internal.h
> @@ -9,6 +9,255 @@
>
> extern struct bus_type mhi_bus_type;
>
> +/* MHI MMIO register mapping */
> +#define PCI_INVALID_READ(val) (val == U32_MAX)
> +
> +#define MHIREGLEN (0x0)
> +#define MHIREGLEN_MHIREGLEN_MASK (0xFFFFFFFF)
> +#define MHIREGLEN_MHIREGLEN_SHIFT (0)
> +
> +#define MHIVER (0x8)
> +#define MHIVER_MHIVER_MASK (0xFFFFFFFF)
> +#define MHIVER_MHIVER_SHIFT (0)
> +
> +#define MHICFG (0x10)
> +#define MHICFG_NHWER_MASK (0xFF000000)
> +#define MHICFG_NHWER_SHIFT (24)
> +#define MHICFG_NER_MASK (0xFF0000)
> +#define MHICFG_NER_SHIFT (16)
> +#define MHICFG_NHWCH_MASK (0xFF00)
> +#define MHICFG_NHWCH_SHIFT (8)
> +#define MHICFG_NCH_MASK (0xFF)
> +#define MHICFG_NCH_SHIFT (0)
> +
> +#define CHDBOFF (0x18)
> +#define CHDBOFF_CHDBOFF_MASK (0xFFFFFFFF)
> +#define CHDBOFF_CHDBOFF_SHIFT (0)
> +
> +#define ERDBOFF (0x20)
> +#define ERDBOFF_ERDBOFF_MASK (0xFFFFFFFF)
> +#define ERDBOFF_ERDBOFF_SHIFT (0)
> +
> +#define BHIOFF (0x28)
> +#define BHIOFF_BHIOFF_MASK (0xFFFFFFFF)
> +#define BHIOFF_BHIOFF_SHIFT (0)
> +
> +#define BHIEOFF (0x2C)
> +#define BHIEOFF_BHIEOFF_MASK (0xFFFFFFFF)
> +#define BHIEOFF_BHIEOFF_SHIFT (0)
> +
> +#define DEBUGOFF (0x30)
> +#define DEBUGOFF_DEBUGOFF_MASK (0xFFFFFFFF)
> +#define DEBUGOFF_DEBUGOFF_SHIFT (0)
> +
> +#define MHICTRL (0x38)
> +#define MHICTRL_MHISTATE_MASK (0x0000FF00)
> +#define MHICTRL_MHISTATE_SHIFT (8)
> +#define MHICTRL_RESET_MASK (0x2)
> +#define MHICTRL_RESET_SHIFT (1)
> +
> +#define MHISTATUS (0x48)
> +#define MHISTATUS_MHISTATE_MASK (0x0000FF00)
> +#define MHISTATUS_MHISTATE_SHIFT (8)
> +#define MHISTATUS_SYSERR_MASK (0x4)
> +#define MHISTATUS_SYSERR_SHIFT (2)
> +#define MHISTATUS_READY_MASK (0x1)
> +#define MHISTATUS_READY_SHIFT (0)
> +
> +#define CCABAP_LOWER (0x58)
> +#define CCABAP_LOWER_CCABAP_LOWER_MASK (0xFFFFFFFF)
> +#define CCABAP_LOWER_CCABAP_LOWER_SHIFT (0)
> +
> +#define CCABAP_HIGHER (0x5C)
> +#define CCABAP_HIGHER_CCABAP_HIGHER_MASK (0xFFFFFFFF)
> +#define CCABAP_HIGHER_CCABAP_HIGHER_SHIFT (0)
> +
> +#define ECABAP_LOWER (0x60)
> +#define ECABAP_LOWER_ECABAP_LOWER_MASK (0xFFFFFFFF)
> +#define ECABAP_LOWER_ECABAP_LOWER_SHIFT (0)
> +
> +#define ECABAP_HIGHER (0x64)
> +#define ECABAP_HIGHER_ECABAP_HIGHER_MASK (0xFFFFFFFF)
> +#define ECABAP_HIGHER_ECABAP_HIGHER_SHIFT (0)
> +
> +#define CRCBAP_LOWER (0x68)
> +#define CRCBAP_LOWER_CRCBAP_LOWER_MASK (0xFFFFFFFF)
> +#define CRCBAP_LOWER_CRCBAP_LOWER_SHIFT (0)
> +
> +#define CRCBAP_HIGHER (0x6C)
> +#define CRCBAP_HIGHER_CRCBAP_HIGHER_MASK (0xFFFFFFFF)
> +#define CRCBAP_HIGHER_CRCBAP_HIGHER_SHIFT (0)
> +
> +#define CRDB_LOWER (0x70)
> +#define CRDB_LOWER_CRDB_LOWER_MASK (0xFFFFFFFF)
> +#define CRDB_LOWER_CRDB_LOWER_SHIFT (0)
> +
> +#define CRDB_HIGHER (0x74)
> +#define CRDB_HIGHER_CRDB_HIGHER_MASK (0xFFFFFFFF)
> +#define CRDB_HIGHER_CRDB_HIGHER_SHIFT (0)
> +
> +#define MHICTRLBASE_LOWER (0x80)
> +#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_MASK (0xFFFFFFFF)
> +#define MHICTRLBASE_LOWER_MHICTRLBASE_LOWER_SHIFT (0)
> +
> +#define MHICTRLBASE_HIGHER (0x84)
> +#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_MASK (0xFFFFFFFF)
> +#define MHICTRLBASE_HIGHER_MHICTRLBASE_HIGHER_SHIFT (0)
> +
> +#define MHICTRLLIMIT_LOWER (0x88)
> +#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_MASK (0xFFFFFFFF)
> +#define MHICTRLLIMIT_LOWER_MHICTRLLIMIT_LOWER_SHIFT (0)
> +
> +#define MHICTRLLIMIT_HIGHER (0x8C)
> +#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_MASK (0xFFFFFFFF)
> +#define MHICTRLLIMIT_HIGHER_MHICTRLLIMIT_HIGHER_SHIFT (0)
> +
> +#define MHIDATABASE_LOWER (0x98)
> +#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_MASK (0xFFFFFFFF)
> +#define MHIDATABASE_LOWER_MHIDATABASE_LOWER_SHIFT (0)
> +
> +#define MHIDATABASE_HIGHER (0x9C)
> +#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_MASK (0xFFFFFFFF)
> +#define MHIDATABASE_HIGHER_MHIDATABASE_HIGHER_SHIFT (0)
> +
> +#define MHIDATALIMIT_LOWER (0xA0)
> +#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_MASK (0xFFFFFFFF)
> +#define MHIDATALIMIT_LOWER_MHIDATALIMIT_LOWER_SHIFT (0)
> +
> +#define MHIDATALIMIT_HIGHER (0xA4)
> +#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_MASK (0xFFFFFFFF)
> +#define MHIDATALIMIT_HIGHER_MHIDATALIMIT_HIGHER_SHIFT (0)
> +
> +/* Host request register */
> +#define MHI_SOC_RESET_REQ_OFFSET (0xB0)
> +#define MHI_SOC_RESET_REQ BIT(0)
> +
> +/* MHI BHI offsets */
> +#define BHI_BHIVERSION_MINOR (0x00)
> +#define BHI_BHIVERSION_MAJOR (0x04)
> +#define BHI_IMGADDR_LOW (0x08)
> +#define BHI_IMGADDR_HIGH (0x0C)
> +#define BHI_IMGSIZE (0x10)
> +#define BHI_RSVD1 (0x14)
> +#define BHI_IMGTXDB (0x18)
> +#define BHI_TXDB_SEQNUM_BMSK (0x3FFFFFFF)
> +#define BHI_TXDB_SEQNUM_SHFT (0)
> +#define BHI_RSVD2 (0x1C)
> +#define BHI_INTVEC (0x20)
> +#define BHI_RSVD3 (0x24)
> +#define BHI_EXECENV (0x28)
> +#define BHI_STATUS (0x2C)
> +#define BHI_ERRCODE (0x30)
> +#define BHI_ERRDBG1 (0x34)
> +#define BHI_ERRDBG2 (0x38)
> +#define BHI_ERRDBG3 (0x3C)
> +#define BHI_SERIALNU (0x40)
> +#define BHI_SBLANTIROLLVER (0x44)
> +#define BHI_NUMSEG (0x48)
> +#define BHI_MSMHWID(n) (0x4C + (0x4 * n))
> +#define BHI_OEMPKHASH(n) (0x64 + (0x4 * n))
> +#define BHI_RSVD5 (0xC4)
> +#define BHI_STATUS_MASK (0xC0000000)
> +#define BHI_STATUS_SHIFT (30)
> +#define BHI_STATUS_ERROR (3)
> +#define BHI_STATUS_SUCCESS (2)
> +#define BHI_STATUS_RESET (0)
> +
> +/* MHI BHIE offsets */
> +#define BHIE_MSMSOCID_OFFS (0x0000)
> +#define BHIE_TXVECADDR_LOW_OFFS (0x002C)
> +#define BHIE_TXVECADDR_HIGH_OFFS (0x0030)
> +#define BHIE_TXVECSIZE_OFFS (0x0034)
> +#define BHIE_TXVECDB_OFFS (0x003C)
> +#define BHIE_TXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
> +#define BHIE_TXVECDB_SEQNUM_SHFT (0)
> +#define BHIE_TXVECSTATUS_OFFS (0x0044)
> +#define BHIE_TXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
> +#define BHIE_TXVECSTATUS_SEQNUM_SHFT (0)
> +#define BHIE_TXVECSTATUS_STATUS_BMSK (0xC0000000)
> +#define BHIE_TXVECSTATUS_STATUS_SHFT (30)
> +#define BHIE_TXVECSTATUS_STATUS_RESET (0x00)
> +#define BHIE_TXVECSTATUS_STATUS_XFER_COMPL (0x02)
> +#define BHIE_TXVECSTATUS_STATUS_ERROR (0x03)
> +#define BHIE_RXVECADDR_LOW_OFFS (0x0060)
> +#define BHIE_RXVECADDR_HIGH_OFFS (0x0064)
> +#define BHIE_RXVECSIZE_OFFS (0x0068)
> +#define BHIE_RXVECDB_OFFS (0x0070)
> +#define BHIE_RXVECDB_SEQNUM_BMSK (0x3FFFFFFF)
> +#define BHIE_RXVECDB_SEQNUM_SHFT (0)
> +#define BHIE_RXVECSTATUS_OFFS (0x0078)
> +#define BHIE_RXVECSTATUS_SEQNUM_BMSK (0x3FFFFFFF)
> +#define BHIE_RXVECSTATUS_SEQNUM_SHFT (0)
> +#define BHIE_RXVECSTATUS_STATUS_BMSK (0xC0000000)
> +#define BHIE_RXVECSTATUS_STATUS_SHFT (30)
> +#define BHIE_RXVECSTATUS_STATUS_RESET (0x00)
> +#define BHIE_RXVECSTATUS_STATUS_XFER_COMPL (0x02)
> +#define BHIE_RXVECSTATUS_STATUS_ERROR (0x03)
> +
> +struct mhi_event_ctxt {
> + u32 reserved : 8;
> + u32 intmodc : 8;
> + u32 intmodt : 16;
> + u32 ertype;
> + u32 msivec;
> +
> + u64 rbase __packed __aligned(4);
> + u64 rlen __packed __aligned(4);
> + u64 rp __packed __aligned(4);
> + u64 wp __packed __aligned(4);
> +};
This is the struct that is shared with the device, correct? Surely it
needs to be packed then? Seems like you'd expect some padding between
msivec and rbase on a 64-bit system otherwise, which is probably not
intended.
Also, I strongly dislike bitfields in structures which are shared with
another system, since the C specification doesn't define how they are
laid out; different compilers may implement the actual backing memory
differently. I know it's less convenient, but I would prefer the use of
bitmasks for these fields.
Same comments for the next two structs following this.
> +
> +struct mhi_chan_ctxt {
> + u32 chstate : 8;
> + u32 brstmode : 2;
> + u32 pollcfg : 6;
> + u32 reserved : 16;
> + u32 chtype;
> + u32 erindex;
> +
> + u64 rbase __packed __aligned(4);
> + u64 rlen __packed __aligned(4);
> + u64 rp __packed __aligned(4);
> + u64 wp __packed __aligned(4);
> +};
> +
> +struct mhi_cmd_ctxt {
> + u32 reserved0;
> + u32 reserved1;
> + u32 reserved2;
> +
> + u64 rbase __packed __aligned(4);
> + u64 rlen __packed __aligned(4);
> + u64 rp __packed __aligned(4);
> + u64 wp __packed __aligned(4);
> +};
> +
> +struct mhi_ctxt {
> + struct mhi_event_ctxt *er_ctxt;
> + struct mhi_chan_ctxt *chan_ctxt;
> + struct mhi_cmd_ctxt *cmd_ctxt;
> + dma_addr_t er_ctxt_addr;
> + dma_addr_t chan_ctxt_addr;
> + dma_addr_t cmd_ctxt_addr;
> +};
> +
> +struct mhi_tre {
> + u64 ptr;
> + u32 dword[2];
> +};
> +
> +struct bhi_vec_entry {
> + u64 dma_addr;
> + u64 size;
> +};
> +
> +enum mhi_cmd_type {
> + MHI_CMD_NOP = 1,
> + MHI_CMD_RESET_CHAN = 16,
> + MHI_CMD_STOP_CHAN = 17,
> + MHI_CMD_START_CHAN = 18,
> +};
> +
> /* MHI transfer completion events */
> enum mhi_ev_ccs {
> MHI_EV_CC_INVALID = 0x0,
> @@ -37,6 +286,7 @@ enum mhi_ch_state {
> #define NR_OF_CMD_RINGS 1
> #define CMD_EL_PER_RING 128
> #define PRIMARY_CMD_RING 0
> +#define MHI_DEV_WAKE_DB 127
> #define MHI_MAX_MTU 0xffff
>
> enum mhi_er_type {
> @@ -167,4 +417,29 @@ static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
> int mhi_destroy_device(struct device *dev, void *data);
> void mhi_create_devices(struct mhi_controller *mhi_cntrl);
>
> +/* Register access methods */
> +void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
> + void __iomem *db_addr, dma_addr_t db_val);
> +void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
> + struct db_cfg *db_mode, void __iomem *db_addr,
> + dma_addr_t db_val);
> +int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
> + void __iomem *base, u32 offset, u32 *out);
> +int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
> + void __iomem *base, u32 offset, u32 mask,
> + u32 shift, u32 *out);
> +void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
> + u32 offset, u32 val);
> +void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
> + u32 offset, u32 mask, u32 shift, u32 val);
> +void mhi_ring_er_db(struct mhi_event *mhi_event);
> +void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
> + dma_addr_t db_val);
> +void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd);
> +void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
> + struct mhi_chan *mhi_chan);
> +
> +/* Initialization methods */
> +int mhi_init_mmio(struct mhi_controller *mhi_cntrl);
> +
> #endif /* _MHI_INT_H */
> diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
> index 216fd8691140..134ef9b2cc78 100644
> --- a/drivers/bus/mhi/core/main.c
> +++ b/drivers/bus/mhi/core/main.c
> @@ -17,6 +17,124 @@
> #include <linux/slab.h>
> #include "internal.h"
>
> +int __must_check mhi_read_reg(struct mhi_controller *mhi_cntrl,
> +                              void __iomem *base, u32 offset, u32 *out)
> +{
> +        u32 tmp = readl_relaxed(base + offset);
> +
> +        /* If there is any unexpected value, query the link status */
> +        if (PCI_INVALID_READ(tmp) &&
> +            mhi_cntrl->link_status(mhi_cntrl, mhi_cntrl->priv_data))
> +                return -EIO;
> +
> +        *out = tmp;
> +
> +        return 0;
> +}
> +
> +int __must_check mhi_read_reg_field(struct mhi_controller *mhi_cntrl,
> +                                    void __iomem *base, u32 offset,
> +                                    u32 mask, u32 shift, u32 *out)
> +{
> +        u32 tmp;
> +        int ret;
> +
> +        ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
> +        if (ret)
> +                return ret;
> +
> +        *out = (tmp & mask) >> shift;
> +
> +        return 0;
> +}
> +
> +void mhi_write_reg(struct mhi_controller *mhi_cntrl, void __iomem *base,
> +                   u32 offset, u32 val)
> +{
> +        writel_relaxed(val, base + offset);
> +}
> +
> +void mhi_write_reg_field(struct mhi_controller *mhi_cntrl, void __iomem *base,
> +                         u32 offset, u32 mask, u32 shift, u32 val)
> +{
> +        int ret;
> +        u32 tmp;
> +
> +        ret = mhi_read_reg(mhi_cntrl, base, offset, &tmp);
> +        if (ret)
> +                return;
> +
> +        tmp &= ~mask;
> +        tmp |= (val << shift);
> +        mhi_write_reg(mhi_cntrl, base, offset, tmp);
> +}
> +
> +void mhi_write_db(struct mhi_controller *mhi_cntrl, void __iomem *db_addr,
> +                  dma_addr_t db_val)
> +{
> +        mhi_write_reg(mhi_cntrl, db_addr, 4, upper_32_bits(db_val));
> +        mhi_write_reg(mhi_cntrl, db_addr, 0, lower_32_bits(db_val));
> +}
> +
> +void mhi_db_brstmode(struct mhi_controller *mhi_cntrl,
> +                     struct db_cfg *db_cfg,
> +                     void __iomem *db_addr,
> +                     dma_addr_t db_val)
> +{
> +        if (db_cfg->db_mode) {
> +                db_cfg->db_val = db_val;
> +                mhi_write_db(mhi_cntrl, db_addr, db_val);
> +                db_cfg->db_mode = 0;
> +        }
> +}
> +
> +void mhi_db_brstmode_disable(struct mhi_controller *mhi_cntrl,
> +                             struct db_cfg *db_cfg,
> +                             void __iomem *db_addr,
> +                             dma_addr_t db_val)
> +{
> +        db_cfg->db_val = db_val;
> +        mhi_write_db(mhi_cntrl, db_addr, db_val);
> +}
> +
> +void mhi_ring_er_db(struct mhi_event *mhi_event)
> +{
> +        struct mhi_ring *ring = &mhi_event->ring;
> +
> +        mhi_event->db_cfg.process_db(mhi_event->mhi_cntrl, &mhi_event->db_cfg,
> +                                     ring->db_addr, *ring->ctxt_wp);
> +}
> +
> +void mhi_ring_cmd_db(struct mhi_controller *mhi_cntrl, struct mhi_cmd *mhi_cmd)
> +{
> +        dma_addr_t db;
> +        struct mhi_ring *ring = &mhi_cmd->ring;
> +
> +        db = ring->iommu_base + (ring->wp - ring->base);
> +        *ring->ctxt_wp = db;
> +        mhi_write_db(mhi_cntrl, ring->db_addr, db);
> +}
> +
> +void mhi_ring_chan_db(struct mhi_controller *mhi_cntrl,
> +                      struct mhi_chan *mhi_chan)
> +{
> +        struct mhi_ring *ring = &mhi_chan->tre_ring;
> +        dma_addr_t db;
> +
> +        db = ring->iommu_base + (ring->wp - ring->base);
> +        *ring->ctxt_wp = db;
> +        mhi_chan->db_cfg.process_db(mhi_cntrl, &mhi_chan->db_cfg,
> +                                    ring->db_addr, db);
> +}
> +
> +enum mhi_ee_type mhi_get_exec_env(struct mhi_controller *mhi_cntrl)
> +{
> +        u32 exec;
> +        int ret = mhi_read_reg(mhi_cntrl, mhi_cntrl->bhi, BHI_EXECENV, &exec);
> +
> +        return (ret) ? MHI_EE_MAX : exec;
> +}
> +
> int mhi_destroy_device(struct device *dev, void *data)
> {
> struct mhi_device *mhi_dev;
> diff --git a/include/linux/mhi.h b/include/linux/mhi.h
> index cb6ddd23463c..d08f212cdfd0 100644
> --- a/include/linux/mhi.h
> +++ b/include/linux/mhi.h
> @@ -246,6 +246,8 @@ struct mhi_controller_config {
>  * @dev_id: Device ID of the controller
>  * @bus_id: Physical bus instance used by the controller
>  * @regs: Base address of MHI MMIO register space
> + * @bhi: Base address of MHI BHI register space
> + * @wake_db: MHI WAKE doorbell register address
>  * @iova_start: IOMMU starting address for data
>  * @iova_stop: IOMMU stop address for data
>  * @fw_image: Firmware image name for normal booting
> @@ -306,6 +308,9 @@ struct mhi_controller {
>          u32 dev_id;
>          u32 bus_id;
>          void __iomem *regs;
> +        void __iomem *bhi;
> +        void __iomem *wake_db;
> +
>          dma_addr_t iova_start;
>          dma_addr_t iova_stop;
>          const char *fw_image;
>
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
On Fri, Jan 24, 2020 at 11:12:57AM -0700, Jeffrey Hugo wrote:
> On 1/24/2020 10:47 AM, Greg KH wrote:
> > On Fri, Jan 24, 2020 at 07:24:43AM -0700, Jeffrey Hugo wrote:
> > > > > +/**
> > > > > + * struct mhi_result - Completed buffer information
> > > > > + * @buf_addr: Address of data buffer
> > > > > + * @dir: Channel direction
> > > > > + * @bytes_xfer: # of bytes transferred
> > > > > + * @transaction_status: Status of last transaction
> > > > > + */
> > > > > +struct mhi_result {
> > > > > + void *buf_addr;
> > > >
> > > > Why void *?
> > >
> > > Because it's not possible to resolve this more clearly. The client provides
> > > the buffer and knows what the structure is. The bus does not. It's just an
> > > opaque pointer (hence void *) to the bus, and the client needs to decode it.
> > > This is the struct that is handed to the client to allow them to decode the
> > > activity (either a received buf, or a confirmation that a transmitted buf
> > > has been consumed).
> >
> > Then shouldn't this be a "u8 *" instead as you are saying how many bytes
> > are here?
>
> I'm sorry, I don't see the benefit of that. Can you elaborate on why you
> think that u8 * is a better type?
>
> Sure, it's an arbitrary byte stream from the perspective of the bus, but to
> the client, 99% of the time it's going to have some structure.
So which side is in control here, the "bus" or the "client"? For the
bus to care, it's a bytestream and should be represented as such (like
you have) with a number of bytes in the "packet".
If you already know the structure types, just make a union of all of the
valid ones and be done with it. In other words, try to avoid using void
* as much as is ever possible please.
thanks,
greg k-h
On Fri, Jan 24, 2020 at 03:51:12PM -0700, Jeffrey Hugo wrote:
> > +struct mhi_event_ctxt {
> > + u32 reserved : 8;
> > + u32 intmodc : 8;
> > + u32 intmodt : 16;
> > + u32 ertype;
> > + u32 msivec;
> > +
> > + u64 rbase __packed __aligned(4);
> > + u64 rlen __packed __aligned(4);
> > + u64 rp __packed __aligned(4);
> > + u64 wp __packed __aligned(4);
> > +};
>
> This is the struct that is shared with the device, correct? Surely it needs
> to be packed then? Seems like you'd expect some padding between msivec and
> rbase on a 64-bit system otherwise, which is probably not intended.
>
> Also I strongly dislike bitfields in structures which are shared with
> another system since the C specification doesn't define how they are
> implemented, therefore you can run into issues where different compilers
> decide to implement the actual backing memory differently. I know it's less
> convenient, but I would prefer the use of bitmasks for these fields.
You have to use bitmasks in order for CPUs of all endiannesses to work
properly here, so that needs to be fixed.
Oh, and if these values are in hardware, then the correct types also
need to be used (i.e. __u32 and __u64).
good catch!
greg k-h
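Greg's bitmask suggestion can be sketched in plain C. This is a hedged illustration, not the actual MHI definitions: the layout (reserved in bits 0-7, intmodc in bits 8-15, intmodt in bits 16-31) matches what a typical little-endian compiler would produce for the bitfields quoted above, and the macro and helper names are invented for the example.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative layout of the first word of mhi_event_ctxt:
 * bits 0-7 reserved, bits 8-15 intmodc, bits 16-31 intmodt.
 */
#define EV_CTX_INTMODC_SHIFT    8
#define EV_CTX_INTMODC_MASK     (0xFFu << EV_CTX_INTMODC_SHIFT)
#define EV_CTX_INTMODT_SHIFT    16
#define EV_CTX_INTMODT_MASK     (0xFFFFu << EV_CTX_INTMODT_SHIFT)

/* Set the intmodt field without disturbing the neighboring bits */
static uint32_t ev_ctx_set_intmodt(uint32_t word, uint32_t intmodt)
{
        word &= ~EV_CTX_INTMODT_MASK;
        word |= (intmodt << EV_CTX_INTMODT_SHIFT) & EV_CTX_INTMODT_MASK;
        return word;
}

/* Extract the intmodt field from the packed word */
static uint32_t ev_ctx_get_intmodt(uint32_t word)
{
        return (word & EV_CTX_INTMODT_MASK) >> EV_CTX_INTMODT_SHIFT;
}
```

Unlike bitfields, these shift/mask operations are fully specified by the C standard, and the word itself can then be declared as a fixed-endianness type (e.g. __le32 in the kernel) so both ends agree on the layout.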
On 1/25/2020 6:26 AM, Greg KH wrote:
> On Fri, Jan 24, 2020 at 11:12:57AM -0700, Jeffrey Hugo wrote:
>> On 1/24/2020 10:47 AM, Greg KH wrote:
>>> On Fri, Jan 24, 2020 at 07:24:43AM -0700, Jeffrey Hugo wrote:
>>>>>> +/**
>>>>>> + * struct mhi_result - Completed buffer information
>>>>>> + * @buf_addr: Address of data buffer
>>>>>> + * @dir: Channel direction
>>>>>> + * @bytes_xfer: # of bytes transferred
>>>>>> + * @transaction_status: Status of last transaction
>>>>>> + */
>>>>>> +struct mhi_result {
>>>>>> + void *buf_addr;
>>>>>
>>>>> Why void *?
>>>>
>>>> Because it's not possible to resolve this more clearly. The client provides
>>>> the buffer and knows what the structure is. The bus does not. It's just an
>>>> opaque pointer (hence void *) to the bus, and the client needs to decode it.
>>>> This is the struct that is handed to the client to allow them to decode the
>>>> activity (either a received buf, or a confirmation that a transmitted buf
>>>> has been consumed).
>>>
>>> Then shouldn't this be a "u8 *" instead as you are saying how many bytes
>>> are here?
>>
>> I'm sorry, I don't see the benefit of that. Can you elaborate on why you
>> think that u8 * is a better type?
>>
>> Sure, it's an arbitrary byte stream from the perspective of the bus, but to
>> the client, 99% of the time it's going to have some structure.
>
> So which side is in control here, the "bus" or the "client"? For the
> bus to care, it's a bytestream and should be represented as such (like
> you have) with a number of bytes in the "packet".
>
> If you already know the structure types, just make a union of all of the
> valid ones and be done with it. In other words, try to avoid using void
> * as much as is ever possible please.
The client is in control. Perhaps if you think of this like a NIC - the
NIC is a dumb pipe that you shove bytes into and get bytes out of. The
NIC doesn't know or care what the bytes are, only that it performs its
responsibilities of successfully moving those bytes through the pipe.
The bytes could be a TCP packet, UDP packet, raw IP packet, or something
entirely different. The NIC doesn't need to know, nor care.
MHI is a little one-sided because it's designed so that the Host is in
control for the most part.
In the transmit path, the client on the Host gives the bus a stream of
bytes. The DMA-able address of that stream of bytes is put into the bus
structures, and the doorbell is rung. Then the device pulls in the data.
In the receive path, the client on the host gives the bus a receive
buffer. The DMA-able address of that buffer is put into the bus
structures. When the device wants to send data to the host, it picks up
the buffer address, copies the data into it, and then flags an event to
the Host.
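The doorbell step described above can be sketched in plain C. The arithmetic mirrors mhi_ring_chan_db() in the patch (the doorbell value is the device-visible ring base plus the host write-pointer offset), but the `ring` struct and `ring_db_value()` helper here are simplified stand-ins, not kernel code.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Simplified host-side ring bookkeeping. Field names follow
 * struct mhi_ring from the patch. */
struct ring {
        uint64_t iommu_base; /* ring base in device (IOMMU) address space */
        char *base;          /* ring base in host virtual memory */
        char *wp;            /* host write pointer, advanced per element */
};

/* The value written to the doorbell register: the device-visible
 * address corresponding to the host's current write pointer. */
static uint64_t ring_db_value(const struct ring *r)
{
        return r->iommu_base + (uint64_t)(r->wp - r->base);
}
```

The device only ever sees IOMMU addresses, so the host translates its virtual write pointer into that address space before ringing the doorbell.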
This structure we are discussing is used in the callback from the bus to
the client to either signal that the TX buffer has been consumed by the
device and is now back under the control of the client, or that the
device has consumed a queued RX buffer, and now the buffer is back under
the control of the client and can be read to determine what data the
device sent.
In both cases, it's impossible for the bus to know the structure or
content of the data. All the bus knows or cares about is the location
and size of the buffer. It's entirely under the control of the client.
The client could be the network stack, in which case the data is
probably an IP packet. The client could be something else entirely,
where the protocol running over MHI is unique to that client.
Since MHI supports arbitrary clients, it's impossible to come up with
some kind of union that describes every possible client's structure
definitions from now until the end of time.
void * is the only type that makes realistic sense.
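The callback contract described above can be sketched in plain C. This is an illustrative sketch only: `struct result` mirrors the shape of `mhi_result` from the patch, while `client_msg` and `client_decode_msg_id` are hypothetical client-side names invented for the example.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Bus-level view: an opaque buffer plus a byte count (mirrors mhi_result). */
struct result {
        void *buf_addr;
        size_t bytes_xfer;
        int transaction_status;
};

/* One hypothetical client's wire format; the bus never sees this type. */
struct client_msg {
        unsigned int msg_id;
        unsigned int payload_len;
};

/* Client RX callback: only the client knows how to decode the bytes. */
static unsigned int client_decode_msg_id(const struct result *res)
{
        struct client_msg msg;

        /* Reject failed transfers or buffers too short to hold a header */
        if (res->transaction_status || res->bytes_xfer < sizeof(msg))
                return 0;

        memcpy(&msg, res->buf_addr, sizeof(msg));
        return msg.msg_id;
}
```

The bus hands the client an opaque pointer and a length; the cast (here, a memcpy into the client's own layout) happens entirely on the client side, which is exactly why the bus-facing type stays void *.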
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
On 1/23/2020 10:05 AM, Jeffrey Hugo wrote:
> On 1/23/2020 4:18 AM, Manivannan Sadhasivam wrote:
>> This commit adds support for registering MHI controller drivers with
>> the MHI stack. MHI controller drivers manage the interaction with the
>> MHI client devices such as the external modems and WiFi chipsets. They
>> are also the MHI bus master in charge of managing the physical link
>> between the host and client device.
>>
>> This is based on the patch submitted by Sujeev Dias:
>> https://lkml.org/lkml/2018/7/9/987
>>
>> Signed-off-by: Sujeev Dias <[email protected]>
>> Signed-off-by: Siddartha Mohanadoss <[email protected]>
>> [jhugo: added static config for controllers and fixed several bugs]
>> Signed-off-by: Jeffrey Hugo <[email protected]>
>> [mani: removed DT dependency, split and cleaned up for upstream]
>> Signed-off-by: Manivannan Sadhasivam <[email protected]>
>> ---
>>  drivers/bus/Kconfig             |   1 +
>>  drivers/bus/Makefile            |   3 +
>>  drivers/bus/mhi/Kconfig         |  14 +
>>  drivers/bus/mhi/Makefile        |   2 +
>>  drivers/bus/mhi/core/Makefile   |   3 +
>>  drivers/bus/mhi/core/init.c     | 404 +++++++++++++++++++++++++++++
>>  drivers/bus/mhi/core/internal.h | 169 ++++++++++++
>>  include/linux/mhi.h             | 438 ++++++++++++++++++++++++++++++++
>>  include/linux/mod_devicetable.h |  12 +
>>  9 files changed, 1046 insertions(+)
>>  create mode 100644 drivers/bus/mhi/Kconfig
>>  create mode 100644 drivers/bus/mhi/Makefile
>>  create mode 100644 drivers/bus/mhi/core/Makefile
>>  create mode 100644 drivers/bus/mhi/core/init.c
>>  create mode 100644 drivers/bus/mhi/core/internal.h
>>  create mode 100644 include/linux/mhi.h
>>
>> diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
>> index 50200d1c06ea..383934e54786 100644
>> --- a/drivers/bus/Kconfig
>> +++ b/drivers/bus/Kconfig
>> @@ -202,5 +202,6 @@ config DA8XX_MSTPRI
>>           peripherals.
>>  source "drivers/bus/fsl-mc/Kconfig"
>> +source "drivers/bus/mhi/Kconfig"
>>  endmenu
>> diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
>> index 1320bcf9fa9d..05f32cd694a4 100644
>> --- a/drivers/bus/Makefile
>> +++ b/drivers/bus/Makefile
>> @@ -34,3 +34,6 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS)       += uniphier-system-bus.o
>>  obj-$(CONFIG_VEXPRESS_CONFIG)   += vexpress-config.o
>>  obj-$(CONFIG_DA8XX_MSTPRI)      += da8xx-mstpri.o
>> +
>> +# MHI
>> +obj-$(CONFIG_MHI_BUS)           += mhi/
>> diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
>> new file mode 100644
>> index 000000000000..a8bd9bd7db7c
>> --- /dev/null
>> +++ b/drivers/bus/mhi/Kconfig
>> @@ -0,0 +1,14 @@
>> +# SPDX-License-Identifier: GPL-2.0
>
> first time I noticed this, although I suspect this will need to be
> corrected "everywhere" -
> Per the SPDX website, the "GPL-2.0" label is deprecated. Its
> replacement is "GPL-2.0-only".
> I think all instances should be updated to "GPL-2.0-only"
>
>> +#
>> +# MHI bus
>> +#
>> +# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
>> +#
>> +
>> +config MHI_BUS
>> +        tristate "Modem Host Interface (MHI) bus"
>> +        help
>> +          Bus driver for MHI protocol. Modem Host Interface (MHI) is a
>> +          communication protocol used by the host processors to control
>> +          and communicate with modem devices over a high speed peripheral
>> +          bus or shared memory.
>> diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
>> new file mode 100644
>> index 000000000000..19e6443b72df
>> --- /dev/null
>> +++ b/drivers/bus/mhi/Makefile
>> @@ -0,0 +1,2 @@
>> +# core layer
>> +obj-y += core/
>> diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
>> new file mode 100644
>> index 000000000000..2db32697c67f
>> --- /dev/null
>> +++ b/drivers/bus/mhi/core/Makefile
>> @@ -0,0 +1,3 @@
>> +obj-$(CONFIG_MHI_BUS) := mhi.o
>> +
>> +mhi-y := init.o
>> diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
>> new file mode 100644
>> index 000000000000..5b817ec250e0
>> --- /dev/null
>> +++ b/drivers/bus/mhi/core/init.c
>> @@ -0,0 +1,404 @@
>> +// SPDX-License-Identifier: GPL-2.0
>> +/*
>> + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
>> + *
>> + */
>> +
>> +#define dev_fmt(fmt) "MHI: " fmt
>> +
>> +#include <linux/device.h>
>> +#include <linux/dma-direction.h>
>> +#include <linux/dma-mapping.h>
>> +#include <linux/interrupt.h>
>> +#include <linux/list.h>
>> +#include <linux/mhi.h>
>> +#include <linux/mod_devicetable.h>
>> +#include <linux/module.h>
>> +#include <linux/slab.h>
>> +#include <linux/vmalloc.h>
>> +#include <linux/wait.h>
>> +#include "internal.h"
>> +
>> +static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
>> +                        struct mhi_controller_config *config)
>> +{
>> +        int i, num;
>> +        struct mhi_event *mhi_event;
>> +        struct mhi_event_config *event_cfg;
>> +
>> +        num = config->num_events;
>> +        mhi_cntrl->total_ev_rings = num;
>> +        mhi_cntrl->mhi_event = kcalloc(num, sizeof(*mhi_cntrl->mhi_event),
>> +                                       GFP_KERNEL);
>> +        if (!mhi_cntrl->mhi_event)
>> +                return -ENOMEM;
>> +
>> +        /* Populate event ring */
>> +        mhi_event = mhi_cntrl->mhi_event;
>> +        for (i = 0; i < num; i++) {
>> +                event_cfg = &config->event_cfg[i];
>> +
>> +                mhi_event->er_index = i;
>> +                mhi_event->ring.elements = event_cfg->num_elements;
>> +                mhi_event->intmod = event_cfg->irq_moderation_ms;
>> +                mhi_event->irq = event_cfg->irq;
>> +
>> +                if (event_cfg->channel != U32_MAX) {
>> +                        /* This event ring has a dedicated channel */
>> +                        mhi_event->chan = event_cfg->channel;
>> +                        if (mhi_event->chan >= mhi_cntrl->max_chan) {
>> +                                dev_err(mhi_cntrl->dev,
>> +                                        "Event Ring channel not available\n");
>> +                                goto error_ev_cfg;
>> +                        }
>> +
>> +                        mhi_event->mhi_chan =
>> +                                &mhi_cntrl->mhi_chan[mhi_event->chan];
>> +                }
>> +
>> +                /* Priority is fixed to 1 for now */
>> +                mhi_event->priority = 1;
>> +
>> +                mhi_event->db_cfg.brstmode = event_cfg->mode;
>> +                if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode))
>> +                        goto error_ev_cfg;
>> +
>> +                mhi_event->data_type = event_cfg->data_type;
>> +
>> +                mhi_event->hw_ring = event_cfg->hardware_event;
>> +                if (mhi_event->hw_ring)
>> +                        mhi_cntrl->hw_ev_rings++;
>> +                else
>> +                        mhi_cntrl->sw_ev_rings++;
>> +
>> +                mhi_event->cl_manage = event_cfg->client_managed;
>> +                mhi_event->offload_ev = event_cfg->offload_channel;
>> +                mhi_event++;
>> +        }
>> +
>> +        /* We need IRQ for each event ring + additional one for BHI */
>> +        mhi_cntrl->nr_irqs_req = mhi_cntrl->total_ev_rings + 1;
>> +
>> +        return 0;
>> +
>> +error_ev_cfg:
>> +
>> +        kfree(mhi_cntrl->mhi_event);
>> +        return -EINVAL;
>> +}
>> +
>> +static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
>> +                        struct mhi_controller_config *config)
>> +{
>> +        int i;
>> +        u32 chan;
>> +        struct mhi_channel_config *ch_cfg;
>> +
>> +        mhi_cntrl->max_chan = config->max_channels;
>> +
>> +        /*
>> +         * The allocation of MHI channels can exceed 32KB in some scenarios,
>> +         * so to avoid any possible memory allocation failures, vzalloc is
>> +         * used here
>> +         */
>> +        mhi_cntrl->mhi_chan = vzalloc(mhi_cntrl->max_chan *
>> +                                      sizeof(*mhi_cntrl->mhi_chan));
>> +        if (!mhi_cntrl->mhi_chan)
>> +                return -ENOMEM;
>> +
>> +        INIT_LIST_HEAD(&mhi_cntrl->lpm_chans);
>> +
>> +        /* Populate channel configurations */
>> +        for (i = 0; i < config->num_channels; i++) {
>> +                struct mhi_chan *mhi_chan;
>> +
>> +                ch_cfg = &config->ch_cfg[i];
>> +
>> +                chan = ch_cfg->num;
>> +                if (chan >= mhi_cntrl->max_chan) {
>> +                        dev_err(mhi_cntrl->dev,
>> +                                "Channel %d not available\n", chan);
>> +                        goto error_chan_cfg;
>> +                }
>> +
>> +                mhi_chan = &mhi_cntrl->mhi_chan[chan];
>> +                mhi_chan->name = ch_cfg->name;
>> +                mhi_chan->chan = chan;
>> +
>> +                mhi_chan->tre_ring.elements = ch_cfg->num_elements;
>> +                if (!mhi_chan->tre_ring.elements)
>> +                        goto error_chan_cfg;
>> +
>> +                /*
>> +                 * For some channels, the local ring length should be bigger
>> +                 * than the transfer ring length due to internal logical
>> +                 * channels in the device, so that the host can queue more
>> +                 * buffers than the transfer ring length. For example, RSC
>> +                 * channels should have a larger local channel length than
>> +                 * the transfer ring length.
>> +                 */
>> +                mhi_chan->buf_ring.elements = ch_cfg->local_elements;
>> +                if (!mhi_chan->buf_ring.elements)
>> +                        mhi_chan->buf_ring.elements = mhi_chan->tre_ring.elements;
>> +                mhi_chan->er_index = ch_cfg->event_ring;
>> +                mhi_chan->dir = ch_cfg->dir;
>> +
>> +                /*
>> +                 * For most channels, chtype is identical to the channel
>> +                 * direction. So, if it is not defined then assign the
>> +                 * channel direction to chtype
>> +                 */
>> +                mhi_chan->type = ch_cfg->type;
>> +                if (!mhi_chan->type)
>> +                        mhi_chan->type = (enum mhi_ch_type)mhi_chan->dir;
>> +
>> +                mhi_chan->ee_mask = ch_cfg->ee_mask;
>> +
>> +                mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
>> +                mhi_chan->xfer_type = ch_cfg->data_type;
>> +
>> +                mhi_chan->lpm_notify = ch_cfg->lpm_notify;
>> +                mhi_chan->offload_ch = ch_cfg->offload_channel;
>> +                mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch;
>> +                mhi_chan->pre_alloc = ch_cfg->auto_queue;
>> +                mhi_chan->auto_start = ch_cfg->auto_start;
>> +
>> +                /*
>> +                 * If the MHI host allocates buffers, then the channel
>> +                 * direction should be DMA_FROM_DEVICE and the buffer type
>> +                 * should be MHI_BUF_RAW
>> +                 */
>> +                if (mhi_chan->pre_alloc && (mhi_chan->dir != DMA_FROM_DEVICE ||
>> +                                            mhi_chan->xfer_type != MHI_BUF_RAW)) {
>> +                        dev_err(mhi_cntrl->dev,
>> +                                "Invalid channel configuration\n");
>> +                        goto error_chan_cfg;
>> +                }
>> +
>> +                /*
>> +                 * Bi-directional and directionless channels must be
>> +                 * offload channels
>> +                 */
>> +                if ((mhi_chan->dir == DMA_BIDIRECTIONAL ||
>> +                     mhi_chan->dir == DMA_NONE) && !mhi_chan->offload_ch) {
>> +                        dev_err(mhi_cntrl->dev,
>> +                                "Invalid channel configuration\n");
>> +                        goto error_chan_cfg;
>> +                }
>> +
>> +                if (!mhi_chan->offload_ch) {
>> +                        mhi_chan->db_cfg.brstmode = ch_cfg->doorbell;
>> +                        if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode)) {
>> +                                dev_err(mhi_cntrl->dev,
>> +                                        "Invalid doorbell mode\n");
>> +                                goto error_chan_cfg;
>> +                        }
>> +                }
>> +
>> +                mhi_chan->configured = true;
>> +
>> +                if (mhi_chan->lpm_notify)
>> +                        list_add_tail(&mhi_chan->node, &mhi_cntrl->lpm_chans);
>> +        }
>> +
>> +        return 0;
>> +
>> +error_chan_cfg:
>> +        vfree(mhi_cntrl->mhi_chan);
>> +
>> +        return -EINVAL;
>> +}
>> +
>> +static int parse_config(struct mhi_controller *mhi_cntrl,
>> +                        struct mhi_controller_config *config)
>> +{
>> +        int ret;
>> +
>> +        /* Parse MHI channel configuration */
>> +        ret = parse_ch_cfg(mhi_cntrl, config);
>> +        if (ret)
>> +                return ret;
>> +
>> +        /* Parse MHI event configuration */
>> +        ret = parse_ev_cfg(mhi_cntrl, config);
>> +        if (ret)
>> +                goto error_ev_cfg;
>> +
>> +        mhi_cntrl->timeout_ms = config->timeout_ms;
>> +        if (!mhi_cntrl->timeout_ms)
>> +                mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS;
>> +
>> +        mhi_cntrl->bounce_buf = config->use_bounce_buf;
>> +        mhi_cntrl->buffer_len = config->buf_len;
>> +        if (!mhi_cntrl->buffer_len)
>> +                mhi_cntrl->buffer_len = MHI_MAX_MTU;
>> +
>> +        return 0;
>> +
>> +error_ev_cfg:
>> +        vfree(mhi_cntrl->mhi_chan);
>> +
>> +        return ret;
>> +}
>> +
>> +int mhi_register_controller(struct mhi_controller *mhi_cntrl,
>> +                            struct mhi_controller_config *config)
>> +{
>> +        int ret;
>> +        int i;
>> +        struct mhi_event *mhi_event;
>> +        struct mhi_chan *mhi_chan;
>> +        struct mhi_cmd *mhi_cmd;
>> +        struct mhi_device *mhi_dev;
>> +
You need a null check on mhi_cntrl right here, otherwise you could cause
a panic with the following if.
>> +        if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put)
>> +                return -EINVAL;
>> +
>> +        if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status)
>> +                return -EINVAL;
>> +
>> +        ret = parse_config(mhi_cntrl, config);
>> +        if (ret)
>> +                return -EINVAL;
>> +
>> +        mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS,
>> +                                     sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
>> +        if (!mhi_cntrl->mhi_cmd) {
>> +                ret = -ENOMEM;
>> +                goto error_alloc_cmd;
>> +        }
>> +
>> +        INIT_LIST_HEAD(&mhi_cntrl->transition_list);
>> +        spin_lock_init(&mhi_cntrl->transition_lock);
>> +        spin_lock_init(&mhi_cntrl->wlock);
>> +        init_waitqueue_head(&mhi_cntrl->state_event);
>> +
>> +        mhi_cmd = mhi_cntrl->mhi_cmd;
>> +        for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++)
>> +                spin_lock_init(&mhi_cmd->lock);
>> +
>> +        mhi_event = mhi_cntrl->mhi_event;
>> +        for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
>> +                /* Skip for offload events */
>> +                if (mhi_event->offload_ev)
>> +                        continue;
>> +
>> +                mhi_event->mhi_cntrl = mhi_cntrl;
>> +                spin_lock_init(&mhi_event->lock);
>> +        }
>> +
>> +        mhi_chan = mhi_cntrl->mhi_chan;
>> +        for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
>> +                mutex_init(&mhi_chan->mutex);
>> +                init_completion(&mhi_chan->completion);
>> +                rwlock_init(&mhi_chan->lock);
>> +        }
>> +
>> +        /* Register controller with MHI bus */
>> +        mhi_dev = mhi_alloc_device(mhi_cntrl);
>> +        if (IS_ERR(mhi_dev)) {
>> +                dev_err(mhi_cntrl->dev, "Failed to allocate device\n");
>> +                ret = PTR_ERR(mhi_dev);
>> +                goto error_alloc_dev;
>> +        }
>> +
>> +        mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
>> +        mhi_dev->mhi_cntrl = mhi_cntrl;
>> +        dev_set_name(&mhi_dev->dev, "%s", mhi_cntrl->name);
>> +
>> +        /* Init wakeup source */
>> +        device_init_wakeup(&mhi_dev->dev, true);
>> +
>> +        ret = device_add(&mhi_dev->dev);
>> +        if (ret)
>> +                goto error_add_dev;
>> +
>> +        mhi_cntrl->mhi_dev = mhi_dev;
>> +
>> +        return 0;
>> +
>> +error_add_dev:
>> +        mhi_dealloc_device(mhi_cntrl, mhi_dev);
>> +
>> +error_alloc_dev:
>> +        kfree(mhi_cntrl->mhi_cmd);
>> +
>> +error_alloc_cmd:
>> +        vfree(mhi_cntrl->mhi_chan);
>> +        kfree(mhi_cntrl->mhi_event);
>> +
>> +        return ret;
>> +}
>> +EXPORT_SYMBOL_GPL(mhi_register_controller);
>> +
>> +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
>> +{
>> +        struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
>> +
>> +        kfree(mhi_cntrl->mhi_cmd);
>> +        kfree(mhi_cntrl->mhi_event);
>> +        vfree(mhi_cntrl->mhi_chan);
>> +
>> +        device_del(&mhi_dev->dev);
>> +        put_device(&mhi_dev->dev);
>> +}
>> +EXPORT_SYMBOL_GPL(mhi_unregister_controller);
>> +
>> +static void mhi_release_device(struct device *dev)
>> +{
>> +        struct mhi_device *mhi_dev = to_mhi_device(dev);
>> +
>> +        if (mhi_dev->ul_chan)
>> +                mhi_dev->ul_chan->mhi_dev = NULL;
>> +
>> +        if (mhi_dev->dl_chan)
>> +                mhi_dev->dl_chan->mhi_dev = NULL;
>> +
>> +        kfree(mhi_dev);
>> +}
>> +
>> +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl)
>> +{
>> +        struct mhi_device *mhi_dev;
>> +        struct device *dev;
>> +
>> +        mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
>> +        if (!mhi_dev)
>> +                return ERR_PTR(-ENOMEM);
>> +
>> +        dev = &mhi_dev->dev;
>> +        device_initialize(dev);
>> +        dev->bus = &mhi_bus_type;
>> +        dev->release = mhi_release_device;
>> +        dev->parent = mhi_cntrl->dev;
>> +        mhi_dev->mhi_cntrl = mhi_cntrl;
>> +        atomic_set(&mhi_dev->dev_wake, 0);
>> +
>> +        return mhi_dev;
>> +}
>> +
>> +static int mhi_match(struct device *dev, struct device_driver *drv)
>> +{
>> +        return 0;
>> +};
>> +
>> +struct bus_type mhi_bus_type = {
>> +        .name = "mhi",
>> +        .dev_name = "mhi",
>> +        .match = mhi_match,
>> +};
>> +
>> +static int __init mhi_init(void)
>> +{
>> +        return bus_register(&mhi_bus_type);
>> +}
>> +
>> +static void __exit mhi_exit(void)
>> +{
>> +        bus_unregister(&mhi_bus_type);
>> +}
>> +
>> +postcore_initcall(mhi_init);
>> +module_exit(mhi_exit);
>> +
>> +MODULE_LICENSE("GPL v2");
>> +MODULE_DESCRIPTION("MHI Host Interface");
>> diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
>> new file mode 100644
>> index 000000000000..21f686d3a140
>> --- /dev/null
>> +++ b/drivers/bus/mhi/core/internal.h
>> @@ -0,0 +1,169 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
>> + *
>> + */
>> +
>> +#ifndef _MHI_INT_H
>> +#define _MHI_INT_H
>> +
>> +extern struct bus_type mhi_bus_type;
>> +
>> +/* MHI transfer completion events */
>> +enum mhi_ev_ccs {
>> +        MHI_EV_CC_INVALID = 0x0,
>> +        MHI_EV_CC_SUCCESS = 0x1,
>> +        MHI_EV_CC_EOT = 0x2,
>> +        MHI_EV_CC_OVERFLOW = 0x3,
>> +        MHI_EV_CC_EOB = 0x4,
>> +        MHI_EV_CC_OOB = 0x5,
>> +        MHI_EV_CC_DB_MODE = 0x6,
>> +        MHI_EV_CC_UNDEFINED_ERR = 0x10,
>> +        MHI_EV_CC_BAD_TRE = 0x11,
>
> Perhaps a quick comment expanding the "EOT", "EOB", "OOB" acronyms? I
> feel like those might not be obvious to someone not familiar with the
> protocol.
>
>> +};
>> +
>> +enum mhi_ch_state {
>> +        MHI_CH_STATE_DISABLED = 0x0,
>> +        MHI_CH_STATE_ENABLED = 0x1,
>> +        MHI_CH_STATE_RUNNING = 0x2,
>> +        MHI_CH_STATE_SUSPENDED = 0x3,
>> +        MHI_CH_STATE_STOP = 0x4,
>> +        MHI_CH_STATE_ERROR = 0x5,
>> +};
>> +
>> +#define MHI_INVALID_BRSTMODE(mode)      (mode != MHI_DB_BRST_DISABLE && \
>> +                                         mode != MHI_DB_BRST_ENABLE)
>> +
>> +#define NR_OF_CMD_RINGS                 1
>> +#define CMD_EL_PER_RING                 128
>> +#define PRIMARY_CMD_RING                0
>> +#define MHI_MAX_MTU                     0xffff
>> +
>> +enum mhi_er_type {
>> +        MHI_ER_TYPE_INVALID = 0x0,
>> +        MHI_ER_TYPE_VALID = 0x1,
>> +};
>> +
>> +enum mhi_ch_ee_mask {
>> +        MHI_CH_EE_PBL = BIT(MHI_EE_PBL),
>
> MHI_EE_PBL does not appear to be defined. Are you perhaps missing an
> include?
>
>> +        MHI_CH_EE_SBL = BIT(MHI_EE_SBL),
>> +        MHI_CH_EE_AMSS = BIT(MHI_EE_AMSS),
>> +        MHI_CH_EE_RDDM = BIT(MHI_EE_RDDM),
>> +        MHI_CH_EE_PTHRU = BIT(MHI_EE_PTHRU),
>> +        MHI_CH_EE_WFW = BIT(MHI_EE_WFW),
>> +        MHI_CH_EE_EDL = BIT(MHI_EE_EDL),
>> +};
>> +
>> +struct db_cfg {
>> +        bool reset_req;
>> +        bool db_mode;
>> +        u32 pollcfg;
>> +        enum mhi_db_brst_mode brstmode;
>> +        dma_addr_t db_val;
>> +        void (*process_db)(struct mhi_controller *mhi_cntrl,
>> +                           struct db_cfg *db_cfg, void __iomem *io_addr,
>> +                           dma_addr_t db_val);
>> +};
>> +
>> +struct mhi_ring {
>> +        dma_addr_t dma_handle;
>> +        dma_addr_t iommu_base;
>> +        u64 *ctxt_wp; /* point to ctxt wp */
>> +        void *pre_aligned;
>> +        void *base;
>> +        void *rp;
>> +        void *wp;
>> +        size_t el_size;
>> +        size_t len;
>> +        size_t elements;
>> +        size_t alloc_size;
>> +        void __iomem *db_addr;
>> +};
>> +
>> +struct mhi_cmd {
>> +        struct mhi_ring ring;
>> +        spinlock_t lock;
>> +};
>> +
>> +struct mhi_buf_info {
>> +        dma_addr_t p_addr;
>> +        void *v_addr;
>> +        void *bb_addr;
>> +        void *wp;
>> +        size_t len;
>> +        void *cb_buf;
>> +        enum dma_data_direction dir;
>> +};
>> +
>> +struct mhi_event {
>> +        u32 er_index;
>> +        u32 intmod;
>> +        u32 irq;
>> +        int chan; /* this event ring is dedicated to a channel (optional) */
>> +        u32 priority;
>> +        enum mhi_er_data_type data_type;
>> +        struct mhi_ring ring;
>> +        struct db_cfg db_cfg;
>> +        bool hw_ring;
>> +        bool cl_manage;
>> +        bool offload_ev; /* managed by a device driver */
>> +        spinlock_t lock;
>> +        struct mhi_chan *mhi_chan; /* dedicated to channel */
>> +        struct tasklet_struct task;
>> +        int (*process_event)(struct mhi_controller *mhi_cntrl,
>> +                             struct mhi_event *mhi_event,
>> +                             u32 event_quota);
>> +        struct mhi_controller *mhi_cntrl;
>> +};
>> +
>> +struct mhi_chan {
>> +        u32 chan;
>> +        const char *name;
>> +        /*
>> +         * Important: When consuming, increment tre_ring first and when
>> +         * releasing, decrement buf_ring first. If tre_ring has space,
>> +         * buf_ring is guaranteed to have space so we do not need to
>> +         * check both rings.
>> +         */
>> +        struct mhi_ring buf_ring;
>> +        struct mhi_ring tre_ring;
>> +        u32 er_index;
>> +        u32 intmod;
>> +        enum mhi_ch_type type;
>> +        enum dma_data_direction dir;
>> +        struct db_cfg db_cfg;
>> +        enum mhi_ch_ee_mask ee_mask;
>> +        enum mhi_buf_type xfer_type;
>> +        enum mhi_ch_state ch_state;
>> +        enum mhi_ev_ccs ccs;
>> +        bool lpm_notify;
>> +        bool configured;
>> +        bool offload_ch;
>> +        bool pre_alloc;
>> +        bool auto_start;
>> +        int (*gen_tre)(struct mhi_controller *mhi_cntrl,
>> +                       struct mhi_chan *mhi_chan, void *buf, void *cb,
>> +                       size_t len, enum mhi_flags flags);
>> +        int (*queue_xfer)(struct mhi_device *mhi_dev,
>> +                          struct mhi_chan *mhi_chan,
>> +                          void *buf, size_t len, enum mhi_flags mflags);
>> +        struct mhi_device *mhi_dev;
>> +        void (*xfer_cb)(struct mhi_device *mhi_dev,
>> +                        struct mhi_result *result);
>> +        struct mutex mutex;
>> +        struct completion completion;
>> +        rwlock_t lock;
>> +        struct list_head node;
>> +};
>> +
>> +/* Default MHI timeout */
>> +#define MHI_TIMEOUT_MS (1000)
>> +
>> +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
>> +static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
>> +Â Â Â Â Â Â Â Â Â Â Â Â Â Â Â Â Â Â Â Â Â struct mhi_device *mhi_dev)
>> +{
>> +Â Â Â kfree(mhi_dev);
>> +}
>> +
>> +int mhi_destroy_device(struct device *dev, void *data);
>> +void mhi_create_devices(struct mhi_controller *mhi_cntrl);
>> +
>> +#endif /* _MHI_INT_H */
>> diff --git a/include/linux/mhi.h b/include/linux/mhi.h
>> new file mode 100644
>> index 000000000000..69cf9a4b06c7
>> --- /dev/null
>> +++ b/include/linux/mhi.h
>> @@ -0,0 +1,438 @@
>> +/* SPDX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
>> + *
>> + */
>> +#ifndef _MHI_H_
>> +#define _MHI_H_
>> +
>> +#include <linux/device.h>
>> +#include <linux/dma-direction.h>
>> +#include <linux/mutex.h>
>> +#include <linux/rwlock_types.h>
>> +#include <linux/slab.h>
>> +#include <linux/spinlock_types.h>
>> +#include <linux/wait.h>
>> +#include <linux/workqueue.h>
>> +
>> +struct mhi_chan;
>> +struct mhi_event;
>> +struct mhi_ctxt;
>> +struct mhi_cmd;
>> +struct mhi_buf_info;
>> +
>> +/**
>> + * enum mhi_callback - MHI callback
>> + * @MHI_CB_IDLE: MHI entered idle state
>> + * @MHI_CB_PENDING_DATA: New data available for client to process
>> + * @MHI_CB_LPM_ENTER: MHI host entered low power mode
>> + * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
>> + * @MHI_CB_EE_RDDM: MHI device entered RDDM exec env
>> + * @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env
>> + * @MHI_CB_SYS_ERROR: MHI device entered error state (may recover)
>> + * @MHI_CB_FATAL_ERROR: MHI device entered fatal error state
>> + */
>> +enum mhi_callback {
>> +        MHI_CB_IDLE,
>> +        MHI_CB_PENDING_DATA,
>> +        MHI_CB_LPM_ENTER,
>> +        MHI_CB_LPM_EXIT,
>> +        MHI_CB_EE_RDDM,
>> +        MHI_CB_EE_MISSION_MODE,
>> +        MHI_CB_SYS_ERROR,
>> +        MHI_CB_FATAL_ERROR,
>> +};
>> +
>> +/**
>> + * enum mhi_flags - Transfer flags
>> + * @MHI_EOB: End of buffer for bulk transfer
>> + * @MHI_EOT: End of transfer
>> + * @MHI_CHAIN: Linked transfer
>> + */
>> +enum mhi_flags {
>> +        MHI_EOB,
>> +        MHI_EOT,
>> +        MHI_CHAIN,
>> +};
>> +
>> +/**
>> + * enum mhi_device_type - Device types
>> + * @MHI_DEVICE_XFER: Handles data transfer
>> + * @MHI_DEVICE_TIMESYNC: Use for timesync feature
>> + * @MHI_DEVICE_CONTROLLER: Control device
>> + */
>> +enum mhi_device_type {
>> +        MHI_DEVICE_XFER,
>> +        MHI_DEVICE_TIMESYNC,
>> +        MHI_DEVICE_CONTROLLER,
>> +};
>> +
>> +/**
>> + * enum mhi_ch_type - Channel types
>> + * @MHI_CH_TYPE_INVALID: Invalid channel type
>> + * @MHI_CH_TYPE_OUTBOUND: Outbound channel to the device
>> + * @MHI_CH_TYPE_INBOUND: Inbound channel from the device
>> + * @MHI_CH_TYPE_INBOUND_COALESCED: Coalesced channel for the device to combine
>> + *                                 multiple packets and send them as a single
>> + *                                 large packet to reduce CPU consumption
>> + */
>> +enum mhi_ch_type {
>> +        MHI_CH_TYPE_INVALID = 0,
>> +        MHI_CH_TYPE_OUTBOUND = DMA_TO_DEVICE,
>> +        MHI_CH_TYPE_INBOUND = DMA_FROM_DEVICE,
>> +        MHI_CH_TYPE_INBOUND_COALESCED = 3,
>> +};
>> +
>> +/**
>> + * enum mhi_ee_type - Execution environment types
>> + * @MHI_EE_PBL: Primary Bootloader
>> + * @MHI_EE_SBL: Secondary Bootloader
>> + * @MHI_EE_AMSS: Modem, aka the primary runtime EE
>> + * @MHI_EE_RDDM: Ram dump download mode
>> + * @MHI_EE_WFW: WLAN firmware mode
>> + * @MHI_EE_PTHRU: Passthrough
>> + * @MHI_EE_EDL: Embedded downloader
>> + */
>> +enum mhi_ee_type {
>> +        MHI_EE_PBL,
>> +        MHI_EE_SBL,
>> +        MHI_EE_AMSS,
>> +        MHI_EE_RDDM,
>> +        MHI_EE_WFW,
>> +        MHI_EE_PTHRU,
>> +        MHI_EE_EDL,
>> +        MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
>> +        MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
>> +        MHI_EE_NOT_SUPPORTED,
>> +        MHI_EE_MAX,
>> +};
>> +
>> +/**
>> + * enum mhi_buf_type - Accepted buffer type for the channel
>> + * @MHI_BUF_RAW: Raw buffer
>> + * @MHI_BUF_SKB: SKB struct
>> + * @MHI_BUF_SCLIST: Scatter-gather list
>> + * @MHI_BUF_NOP: CPU offload channel, host does not accept transfer
>> + * @MHI_BUF_DMA: Receive DMA address mapped by client
>> + * @MHI_BUF_RSC_DMA: RSC type premapped buffer
>
> Maybe it's just me, but what is "RSC"?
>
>> + */
>> +enum mhi_buf_type {
>> +        MHI_BUF_RAW,
>> +        MHI_BUF_SKB,
>> +        MHI_BUF_SCLIST,
>> +        MHI_BUF_NOP,
>> +        MHI_BUF_DMA,
>> +        MHI_BUF_RSC_DMA,
>> +};
>> +
>> +/**
>> + * enum mhi_er_data_type - Event ring data types
>> + * @MHI_ER_DATA: Only client data over this ring
>> + * @MHI_ER_CTRL: MHI control data and client data
>> + * @MHI_ER_TSYNC: Time sync events
>> + */
>> +enum mhi_er_data_type {
>> +        MHI_ER_DATA,
>> +        MHI_ER_CTRL,
>> +        MHI_ER_TSYNC,
>> +};
>> +
>> +/**
>> + * enum mhi_db_brst_mode - Doorbell mode
>> + * @MHI_DB_BRST_DISABLE: Burst mode disable
>> + * @MHI_DB_BRST_ENABLE: Burst mode enable
>> + */
>> +enum mhi_db_brst_mode {
>> +        MHI_DB_BRST_DISABLE = 0x2,
>> +        MHI_DB_BRST_ENABLE = 0x3,
>> +};
>> +
>> +/**
>> + * struct mhi_channel_config - Channel configuration structure for controller
>> + * @num: The number assigned to this channel
>> + * @name: The name of this channel
>> + * @num_elements: The number of elements that can be queued to this channel
>> + * @local_elements: The local ring length of the channel
>> + * @event_ring: The event ring index that services this channel
>> + * @dir: Direction that data may flow on this channel
>> + * @type: Channel type
>> + * @ee_mask: Execution Environment mask for this channel
>
> But the mask defines are in internal.h, so how is a client supposed to
> know what they are?
>
>> + * @pollcfg: Polling configuration for burst mode. 0 is default. Milliseconds
>> + *           for UL channels, multiple of 8 ring elements for DL channels
>> + * @data_type: Data type accepted by this channel
>> + * @doorbell: Doorbell mode
>> + * @lpm_notify: The channel master requires low power mode notifications
>> + * @offload_channel: The client manages the channel completely
>> + * @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition
>> + * @auto_queue: Framework will automatically queue buffers for DL traffic
>> + * @auto_start: Automatically start (open) this channel
>> + */
>> +struct mhi_channel_config {
>> +        u32 num;
>> +        char *name;
>> +        u32 num_elements;
>> +        u32 local_elements;
>> +        u32 event_ring;
>> +        enum dma_data_direction dir;
>> +        enum mhi_ch_type type;
Why do we have "dir" and "type" when they are the same thing?
>> +        u32 ee_mask;
>> +        u32 pollcfg;
>> +        enum mhi_buf_type data_type;
>> +        enum mhi_db_brst_mode doorbell;
>> +        bool lpm_notify;
>> +        bool offload_channel;
>> +        bool doorbell_mode_switch;
>> +        bool auto_queue;
>> +        bool auto_start;
>> +};
>> +
>> +/**
>> + * struct mhi_event_config - Event ring configuration structure for controller
>> + * @num_elements: The number of elements that can be queued to this ring
>> + * @irq_moderation_ms: Delay irq for additional events to be aggregated
>> + * @irq: IRQ associated with this ring
>> + * @channel: Dedicated channel number. U32_MAX indicates a non-dedicated ring
>> + * @mode: Doorbell mode
>> + * @data_type: Type of data this ring will process
>> + * @hardware_event: This ring is associated with hardware channels
>> + * @client_managed: This ring is client managed
>> + * @offload_channel: This ring is associated with an offloaded channel
>> + * @priority: Priority of this ring. Use 1 for now
>> + */
>> +struct mhi_event_config {
>> +        u32 num_elements;
>> +        u32 irq_moderation_ms;
>> +        u32 irq;
>> +        u32 channel;
>> +        enum mhi_db_brst_mode mode;
>> +        enum mhi_er_data_type data_type;
>> +        bool hardware_event;
>> +        bool client_managed;
>> +        bool offload_channel;
>> +        u32 priority;
>> +};
>> +
>> +/**
>> + * struct mhi_controller_config - Root MHI controller configuration
>> + * @max_channels: Maximum number of channels supported
>> + * @timeout_ms: Timeout value for operations. 0 means use default
>> + * @use_bounce_buf: Use a bounce buffer pool due to limited DDR access
>> + * @m2_no_db: Host is not allowed to ring DB in M2 state
>> + * @buf_len: Size of automatically allocated buffers. 0 means use default
>> + * @num_channels: Number of channels defined in @ch_cfg
>> + * @ch_cfg: Array of defined channels
>> + * @num_events: Number of event rings defined in @event_cfg
>> + * @event_cfg: Array of defined event rings
>> + */
>> +struct mhi_controller_config {
>> +        u32 max_channels;
>> +        u32 timeout_ms;
>> +        bool use_bounce_buf;
>> +        bool m2_no_db;
>> +        u32 buf_len;
>> +        u32 num_channels;
>> +        struct mhi_channel_config *ch_cfg;
>> +        u32 num_events;
>> +        struct mhi_event_config *event_cfg;
>> +};
>> +
>> +/**
>> + * struct mhi_controller - Master MHI controller structure
Quite a bit of this needs to be initialized by the entity calling
mhi_register_controller(), but it's not clear what. I'm thinking that
since we have a config structure, all of that should be copied/moved
into the config so that the caller of mhi_register_controller() provides
an empty mhi_controller struct / a populated config struct and receives
an initialized mhi_controller instance.
>> + * @name: Name of the controller
>> + * @dev: Driver model device node for the controller
>> + * @mhi_dev: MHI device instance for the controller
>> + * @dev_id: Device ID of the controller
>> + * @bus_id: Physical bus instance used by the controller
>> + * @regs: Base address of MHI MMIO register space
>> + * @iova_start: IOMMU starting address for data
>> + * @iova_stop: IOMMU stop address for data
>> + * @fw_image: Firmware image name for normal booting
>> + * @edl_image: Firmware image name for emergency download mode
>> + * @fbc_download: MHI host needs to do complete image transfer
>> + * @sbl_size: SBL image size
>> + * @seg_len: BHIe vector size
>> + * @max_chan: Maximum number of channels the controller supports
>> + * @mhi_chan: Points to the channel configuration table
>> + * @lpm_chans: List of channels that require LPM notifications
>> + * @total_ev_rings: Total # of event rings allocated
>> + * @hw_ev_rings: Number of hardware event rings
>> + * @sw_ev_rings: Number of software event rings
>> + * @nr_irqs_req: Number of IRQs required to operate
>> + * @nr_irqs: Number of IRQ allocated by bus master
>> + * @irq: base irq # to request
>> + * @mhi_event: MHI event ring configurations table
>> + * @mhi_cmd: MHI command ring configurations table
>> + * @mhi_ctxt: MHI device context, shared memory between host and device
>> + * @timeout_ms: Timeout in ms for state transitions
>> + * @pm_mutex: Mutex for suspend/resume operation
>> + * @pre_init: MHI host needs to do pre-initialization before power up
>> + * @pm_lock: Lock for protecting MHI power management state
>> + * @pm_state: MHI power management state
>> + * @db_access: DB access states
>> + * @ee: MHI device execution environment
>> + * @wake_set: Device wakeup set flag
>> + * @dev_wake: Device wakeup count
>> + * @alloc_size: Total memory allocations size of the controller
>> + * @pending_pkts: Pending packets for the controller
>> + * @transition_list: List of MHI state transitions
>> + * @wlock: Lock for protecting device wakeup
>> + * @M0: M0 state counter for debugging
>> + * @M2: M2 state counter for debugging
>> + * @M3: M3 state counter for debugging
>> + * @M3_FAST: M3 Fast state counter for debugging
>> + * @st_worker: State transition worker
>> + * @fw_worker: Firmware download worker
>> + * @syserr_worker: System error worker
>> + * @state_event: State change event
>> + * @status_cb: CB function to notify various power states to bus master
>> + * @link_status: CB function to query link status of the device
>> + * @wake_get: CB function to assert device wake
>> + * @wake_put: CB function to de-assert device wake
>> + * @wake_toggle: CB function to assert and deassert (toggle) device wake
>> + * @runtime_get: CB function to request controller runtime resume
>> + * @runtime_put: CB function to decrement pm usage count
>> + * @lpm_disable: CB function to request disabling link level low power modes
>> + * @lpm_enable: CB function to request re-enabling link level low power modes
>> + * @bounce_buf: Use of bounce buffer
>> + * @buffer_len: Bounce buffer length
>> + * @priv_data: Points to bus master's private data
>> + */
>> +struct mhi_controller {
>> +        const char *name;
>> +        struct device *dev;
>> +        struct mhi_device *mhi_dev;
>> +        u32 dev_id;
>> +        u32 bus_id;
>> +        void __iomem *regs;
>> +        dma_addr_t iova_start;
>> +        dma_addr_t iova_stop;
>> +        const char *fw_image;
>> +        const char *edl_image;
>> +        bool fbc_download;
>> +        size_t sbl_size;
>> +        size_t seg_len;
>> +        u32 max_chan;
>> +        struct mhi_chan *mhi_chan;
>> +        struct list_head lpm_chans;
>> +        u32 total_ev_rings;
>> +        u32 hw_ev_rings;
>> +        u32 sw_ev_rings;
>> +        u32 nr_irqs_req;
>> +        u32 nr_irqs;
>> +        int *irq;
>> +
>> +        struct mhi_event *mhi_event;
>> +        struct mhi_cmd *mhi_cmd;
>> +        struct mhi_ctxt *mhi_ctxt;
>> +
>> +        u32 timeout_ms;
>> +        struct mutex pm_mutex;
>> +        bool pre_init;
>> +        rwlock_t pm_lock;
>> +        u32 pm_state;
>> +        u32 db_access;
>> +        enum mhi_ee_type ee;
>> +        bool wake_set;
>> +        atomic_t dev_wake;
>> +        atomic_t alloc_size;
>> +        atomic_t pending_pkts;
>> +        struct list_head transition_list;
>> +        spinlock_t transition_lock;
>> +        spinlock_t wlock;
>> +        u32 M0, M2, M3, M3_FAST;
>> +        struct work_struct st_worker;
>> +        struct work_struct fw_worker;
>> +        struct work_struct syserr_worker;
>> +        wait_queue_head_t state_event;
>> +
>> +        void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
>> +                          enum mhi_callback cb);
>> +        int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
>> +        void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
>> +        void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
>> +        void (*wake_toggle)(struct mhi_controller *mhi_cntrl);
>> +        int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
>> +        void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
>> +        void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
>> +        void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
>> +
>> +        bool bounce_buf;
>> +        size_t buffer_len;
>> +        void *priv_data;
>> +};
>> +
>> +/**
>> + * struct mhi_device - Structure representing an MHI device which binds
>> + *                     to channels
>> + * @dev: Driver model device node for the MHI device
>> + * @tiocm: Device current terminal settings
>> + * @id: Pointer to MHI device ID struct
>> + * @chan_name: Name of the channel to which the device binds
>> + * @mhi_cntrl: Controller the device belongs to
>> + * @ul_chan: UL channel for the device
>> + * @dl_chan: DL channel for the device
>> + * @dev_wake: Device wakeup counter
>> + * @dev_type: MHI device type
>> + */
>> +struct mhi_device {
>> +        struct device dev;
>> +        u32 tiocm;
>> +        const struct mhi_device_id *id;
>> +        const char *chan_name;
>> +        struct mhi_controller *mhi_cntrl;
>> +        struct mhi_chan *ul_chan;
>> +        struct mhi_chan *dl_chan;
>> +        atomic_t dev_wake;
>> +        enum mhi_device_type dev_type;
>> +};
>> +
>> +/**
>> + * struct mhi_result - Completed buffer information
>> + * @buf_addr: Address of data buffer
>> + * @dir: Channel direction
>> + * @bytes_xfer: # of bytes transferred
>> + * @transaction_status: Status of last transaction
>> + */
>> +struct mhi_result {
>> +        void *buf_addr;
>> +        enum dma_data_direction dir;
>> +        size_t bytes_xferd;
>
> Description says this is named "bytes_xfer"
>
>> +        int transaction_status;
>> +};
>> +
>> +#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev)
>> +
>> +/**
>> + * mhi_controller_set_devdata - Set MHI controller private data
>> + * @mhi_cntrl: MHI controller to set data
>> + */
>> +static inline void mhi_controller_set_devdata(struct mhi_controller *mhi_cntrl,
>> +                                              void *priv)
>> +{
>> +        mhi_cntrl->priv_data = priv;
>> +}
>> +
>> +/**
>> + * mhi_controller_get_devdata - Get MHI controller private data
>> + * @mhi_cntrl: MHI controller to get data
>> + */
>> +static inline void *mhi_controller_get_devdata(struct mhi_controller *mhi_cntrl)
>> +{
>> +        return mhi_cntrl->priv_data;
>> +}
>> +
>> +/**
>> + * mhi_register_controller - Register MHI controller
>> + * @mhi_cntrl: MHI controller to register
>> + * @config: Configuration to use for the controller
>> + */
>> +int mhi_register_controller(struct mhi_controller *mhi_cntrl,
>> +                            struct mhi_controller_config *config);
>> +
>> +/**
>> + * mhi_unregister_controller - Unregister MHI controller
>> + * @mhi_cntrl: MHI controller to unregister
>> + */
>> +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl);
>> +
>> +#endif /* _MHI_H_ */
>> diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
>> index e3596db077dc..be15e997fe39 100644
>> --- a/include/linux/mod_devicetable.h
>> +++ b/include/linux/mod_devicetable.h
>> @@ -821,4 +821,16 @@ struct wmi_device_id {
>>          const void *context;
>>  };
>> +#define MHI_NAME_SIZE 32
>> +
>> +/**
>> + * struct mhi_device_id - MHI device identification
>> + * @chan: MHI channel name
>> + * @driver_data: Driver data
>> + */
>> +struct mhi_device_id {
>> +        const char chan[MHI_NAME_SIZE];
>> +        kernel_ulong_t driver_data;
>> +};
>> +
>>  #endif /* LINUX_MOD_DEVICETABLE_H */
>>
>
>
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
Hi Greg,
[On top of Jeff's reply]
On Fri, Jan 24, 2020 at 07:24:43AM -0700, Jeffrey Hugo wrote:
> On 1/24/2020 1:29 AM, Greg KH wrote:
> > On Thu, Jan 23, 2020 at 04:48:22PM +0530, Manivannan Sadhasivam wrote:
> > > --- /dev/null
> > > +++ b/include/linux/mhi.h
> > > @@ -0,0 +1,438 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +/*
> > > + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> > > + *
> > > + */
> > > +#ifndef _MHI_H_
> > > +#define _MHI_H_
> > > +
> > > +#include <linux/device.h>
> > > +#include <linux/dma-direction.h>
> > > +#include <linux/mutex.h>
> > > +#include <linux/rwlock_types.h>
> > > +#include <linux/slab.h>
> > > +#include <linux/spinlock_types.h>
> > > +#include <linux/wait.h>
> > > +#include <linux/workqueue.h>
> > > +
> > > +struct mhi_chan;
> > > +struct mhi_event;
> > > +struct mhi_ctxt;
> > > +struct mhi_cmd;
> > > +struct mhi_buf_info;
> > > +
> > > +/**
> > > + * enum mhi_callback - MHI callback
> > > + * @MHI_CB_IDLE: MHI entered idle state
> > > + * @MHI_CB_PENDING_DATA: New data available for client to process
> > > + * @MHI_CB_LPM_ENTER: MHI host entered low power mode
> > > + * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
> > > + * @MHI_CB_EE_RDDM: MHI device entered RDDM exec env
> > > + * @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env
> > > + * @MHI_CB_SYS_ERROR: MHI device entered error state (may recover)
> > > + * @MHI_CB_FATAL_ERROR: MHI device entered fatal error state
> > > + */
> > > +enum mhi_callback {
> > > + MHI_CB_IDLE,
> > > + MHI_CB_PENDING_DATA,
> > > + MHI_CB_LPM_ENTER,
> > > + MHI_CB_LPM_EXIT,
> > > + MHI_CB_EE_RDDM,
> > > + MHI_CB_EE_MISSION_MODE,
> > > + MHI_CB_SYS_ERROR,
> > > + MHI_CB_FATAL_ERROR,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_flags - Transfer flags
> > > + * @MHI_EOB: End of buffer for bulk transfer
> > > + * @MHI_EOT: End of transfer
> > > + * @MHI_CHAIN: Linked transfer
> > > + */
> > > +enum mhi_flags {
> > > + MHI_EOB,
> > > + MHI_EOT,
> > > + MHI_CHAIN,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_device_type - Device types
> > > + * @MHI_DEVICE_XFER: Handles data transfer
> > > + * @MHI_DEVICE_TIMESYNC: Use for timesync feature
> > > + * @MHI_DEVICE_CONTROLLER: Control device
> > > + */
> > > +enum mhi_device_type {
> > > + MHI_DEVICE_XFER,
> > > + MHI_DEVICE_TIMESYNC,
> > > + MHI_DEVICE_CONTROLLER,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_ch_type - Channel types
> > > + * @MHI_CH_TYPE_INVALID: Invalid channel type
> > > + * @MHI_CH_TYPE_OUTBOUND: Outbound channel to the device
> > > + * @MHI_CH_TYPE_INBOUND: Inbound channel from the device
> > > + * @MHI_CH_TYPE_INBOUND_COALESCED: Coalesced channel for the device to combine
> > > + * multiple packets and send them as a single
> > > + * large packet to reduce CPU consumption
> > > + */
> > > +enum mhi_ch_type {
> > > + MHI_CH_TYPE_INVALID = 0,
> > > + MHI_CH_TYPE_OUTBOUND = DMA_TO_DEVICE,
> > > + MHI_CH_TYPE_INBOUND = DMA_FROM_DEVICE,
> > > + MHI_CH_TYPE_INBOUND_COALESCED = 3,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_ee_type - Execution environment types
> > > + * @MHI_EE_PBL: Primary Bootloader
> > > + * @MHI_EE_SBL: Secondary Bootloader
> > > + * @MHI_EE_AMSS: Modem, aka the primary runtime EE
> > > + * @MHI_EE_RDDM: Ram dump download mode
> > > + * @MHI_EE_WFW: WLAN firmware mode
> > > + * @MHI_EE_PTHRU: Passthrough
> > > + * @MHI_EE_EDL: Embedded downloader
> > > + */
> > > +enum mhi_ee_type {
> > > + MHI_EE_PBL,
> > > + MHI_EE_SBL,
> > > + MHI_EE_AMSS,
> > > + MHI_EE_RDDM,
> > > + MHI_EE_WFW,
> > > + MHI_EE_PTHRU,
> > > + MHI_EE_EDL,
> > > + MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
> > > + MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
> > > + MHI_EE_NOT_SUPPORTED,
> > > + MHI_EE_MAX,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_buf_type - Accepted buffer type for the channel
> > > + * @MHI_BUF_RAW: Raw buffer
> > > + * @MHI_BUF_SKB: SKB struct
> > > + * @MHI_BUF_SCLIST: Scatter-gather list
> > > + * @MHI_BUF_NOP: CPU offload channel, host does not accept transfer
> > > + * @MHI_BUF_DMA: Receive DMA address mapped by client
> > > + * @MHI_BUF_RSC_DMA: RSC type premapped buffer
> > > + */
> > > +enum mhi_buf_type {
> > > + MHI_BUF_RAW,
> > > + MHI_BUF_SKB,
> > > + MHI_BUF_SCLIST,
> > > + MHI_BUF_NOP,
> > > + MHI_BUF_DMA,
> > > + MHI_BUF_RSC_DMA,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_er_data_type - Event ring data types
> > > + * @MHI_ER_DATA: Only client data over this ring
> > > + * @MHI_ER_CTRL: MHI control data and client data
> > > + * @MHI_ER_TSYNC: Time sync events
> > > + */
> > > +enum mhi_er_data_type {
> > > + MHI_ER_DATA,
> > > + MHI_ER_CTRL,
> > > + MHI_ER_TSYNC,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_db_brst_mode - Doorbell mode
> > > + * @MHI_DB_BRST_DISABLE: Burst mode disable
> > > + * @MHI_DB_BRST_ENABLE: Burst mode enable
> > > + */
> > > +enum mhi_db_brst_mode {
> > > + MHI_DB_BRST_DISABLE = 0x2,
> > > + MHI_DB_BRST_ENABLE = 0x3,
> > > +};
> > > +
> > > +/**
> > > + * struct mhi_channel_config - Channel configuration structure for controller
> > > + * @num: The number assigned to this channel
> > > + * @name: The name of this channel
> > > + * @num_elements: The number of elements that can be queued to this channel
> > > + * @local_elements: The local ring length of the channel
> > > + * @event_ring: The event ring index that services this channel
> > > + * @dir: Direction that data may flow on this channel
> > > + * @type: Channel type
> > > + * @ee_mask: Execution Environment mask for this channel
> > > + * @pollcfg: Polling configuration for burst mode. 0 is default. milliseconds
> > > + for UL channels, multiple of 8 ring elements for DL channels
> > > + * @data_type: Data type accepted by this channel
> > > + * @doorbell: Doorbell mode
> > > + * @lpm_notify: The channel master requires low power mode notifications
> > > + * @offload_channel: The client manages the channel completely
> > > + * @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition
> > > + * @auto_queue: Framework will automatically queue buffers for DL traffic
> > > + * @auto_start: Automatically start (open) this channel
> > > + */
> > > +struct mhi_channel_config {
> > > + u32 num;
> > > + char *name;
> >
> > If you have a "name" for your configuration, shouldn't this be a struct
> > device so you see that in sysfs? If not, what is the name for?
>
> The config struct is used to create the channel device, but is not the
> channel device. Eventually a struct mhi_device is created from this
> information.
>
Yes. This struct is used to get the channel information from the controller
driver and gets passed to the MHI stack, which will in turn create the MHI device.
> >
> > > + u32 num_elements;
> > > + u32 local_elements;
> > > + u32 event_ring;
> > > + enum dma_data_direction dir;
> > > + enum mhi_ch_type type;
> > > + u32 ee_mask;
> > > + u32 pollcfg;
> > > + enum mhi_buf_type data_type;
> > > + enum mhi_db_brst_mode doorbell;
> > > + bool lpm_notify;
> > > + bool offload_channel;
> > > + bool doorbell_mode_switch;
> > > + bool auto_queue;
> > > + bool auto_start;
> > > +};
> > > +
> > > +/**
> > > + * struct mhi_event_config - Event ring configuration structure for controller
> > > + * @num_elements: The number of elements that can be queued to this ring
> > > + * @irq_moderation_ms: Delay irq for additional events to be aggregated
> > > + * @irq: IRQ associated with this ring
> > > + * @channel: Dedicated channel number. U32_MAX indicates a non-dedicated ring
> > > + * @mode: Doorbell mode
> > > + * @data_type: Type of data this ring will process
> > > + * @hardware_event: This ring is associated with hardware channels
> > > + * @client_managed: This ring is client managed
> > > + * @offload_channel: This ring is associated with an offloaded channel
> > > + * @priority: Priority of this ring. Use 1 for now
> > > + */
> > > +struct mhi_event_config {
> > > + u32 num_elements;
> > > + u32 irq_moderation_ms;
> > > + u32 irq;
> > > + u32 channel;
> > > + enum mhi_db_brst_mode mode;
> > > + enum mhi_er_data_type data_type;
> > > + bool hardware_event;
> > > + bool client_managed;
> > > + bool offload_channel;
> > > + u32 priority;
> > > +};
> > > +
> > > +/**
> > > + * struct mhi_controller_config - Root MHI controller configuration
> > > + * @max_channels: Maximum number of channels supported
> > > + * @timeout_ms: Timeout value for operations. 0 means use default
> > > + * @use_bounce_buf: Use a bounce buffer pool due to limited DDR access
> > > + * @m2_no_db: Host is not allowed to ring DB in M2 state
> > > + * @buf_len: Size of automatically allocated buffers. 0 means use default
> > > + * @num_channels: Number of channels defined in @ch_cfg
> > > + * @ch_cfg: Array of defined channels
> > > + * @num_events: Number of event rings defined in @event_cfg
> > > + * @event_cfg: Array of defined event rings
> > > + */
> > > +struct mhi_controller_config {
> > > + u32 max_channels;
> > > + u32 timeout_ms;
> > > + bool use_bounce_buf;
> > > + bool m2_no_db;
> > > + u32 buf_len;
> > > + u32 num_channels;
> > > + struct mhi_channel_config *ch_cfg;
> > > + u32 num_events;
> > > + struct mhi_event_config *event_cfg;
> >
> > You really should run pahole on this file to see how badly packed these
> > structures are :(
> >
Right. I was more interested in grouping the members logically so that they
read in a logical order, but I can rework these to fill the holes.
Thanks for the pahole pointer :)
> > > +};
> > > +
> > > +/**
> > > + * struct mhi_controller - Master MHI controller structure
> > > + * @name: Name of the controller
> > > + * @dev: Driver model device node for the controller
> > > + * @mhi_dev: MHI device instance for the controller
> > > + * @dev_id: Device ID of the controller
> > > + * @bus_id: Physical bus instance used by the controller
> > > + * @regs: Base address of MHI MMIO register space
> > > + * @iova_start: IOMMU starting address for data
> > > + * @iova_stop: IOMMU stop address for data
> > > + * @fw_image: Firmware image name for normal booting
> > > + * @edl_image: Firmware image name for emergency download mode
> > > + * @fbc_download: MHI host needs to do complete image transfer
> > > + * @sbl_size: SBL image size
> > > + * @seg_len: BHIe vector size
> > > + * @max_chan: Maximum number of channels the controller supports
> > > + * @mhi_chan: Points to the channel configuration table
> > > + * @lpm_chans: List of channels that require LPM notifications
> > > + * @total_ev_rings: Total # of event rings allocated
> > > + * @hw_ev_rings: Number of hardware event rings
> > > + * @sw_ev_rings: Number of software event rings
> > > + * @nr_irqs_req: Number of IRQs required to operate
> > > + * @nr_irqs: Number of IRQ allocated by bus master
> > > + * @irq: base irq # to request
> > > + * @mhi_event: MHI event ring configurations table
> > > + * @mhi_cmd: MHI command ring configurations table
> > > + * @mhi_ctxt: MHI device context, shared memory between host and device
> > > + * @timeout_ms: Timeout in ms for state transitions
> > > + * @pm_mutex: Mutex for suspend/resume operation
> > > + * @pre_init: MHI host needs to do pre-initialization before power up
> > > + * @pm_lock: Lock for protecting MHI power management state
> > > + * @pm_state: MHI power management state
> > > + * @db_access: DB access states
> > > + * @ee: MHI device execution environment
> > > + * @wake_set: Device wakeup set flag
> > > + * @dev_wake: Device wakeup count
> > > + * @alloc_size: Total memory allocations size of the controller
> > > + * @pending_pkts: Pending packets for the controller
> > > + * @transition_list: List of MHI state transitions
> > > + * @wlock: Lock for protecting device wakeup
> > > + * @M0: M0 state counter for debugging
> > > + * @M2: M2 state counter for debugging
> > > + * @M3: M3 state counter for debugging
> > > + * @M3_FAST: M3 Fast state counter for debugging
> > > + * @st_worker: State transition worker
> > > + * @fw_worker: Firmware download worker
> > > + * @syserr_worker: System error worker
> > > + * @state_event: State change event
> > > + * @status_cb: CB function to notify various power states to bus master
> > > + * @link_status: CB function to query link status of the device
> > > + * @wake_get: CB function to assert device wake
> > > + * @wake_put: CB function to de-assert device wake
> > > + * @wake_toggle: CB function to assert and deassert (toggle) device wake
> > > + * @runtime_get: CB function to controller runtime resume
> > > + * @runtimet_put: CB function to decrement pm usage
> > > + * @lpm_disable: CB function to request disable link level low power modes
> > > + * @lpm_enable: CB function to request enable link level low power modes again
> > > + * @bounce_buf: Use of bounce buffer
> > > + * @buffer_len: Bounce buffer length
> > > + * @priv_data: Points to bus master's private data
> > > + */
> > > +struct mhi_controller {
> > > + const char *name;
> > > + struct device *dev;
> >
> > Why isn't this a struct device directly? Why a pointer?
> >
As mentioned above, this structure is used by the controller driver to
pass information to the MHI stack. So it will pass the struct device
of the actual transport layer (like PCIe). This is required since the MHI
stack itself is not involved in the data transport.
Reference controller implementation can be found here:
https://git.linaro.org/people/manivannan.sadhasivam/linux.git/tree/drivers/net/wireless/ath/ath11k/mhi.c?h=ath11k-qca6390-mhi#n207
Moreover, I should've marked the members of this struct which are required
to be filled in by the controller drivers. This was already pointed out by
Jeff during internal review, but somehow the change I made to address it
didn't get included here :/ Will incorporate it in the next iteration.
> > And why don't you use the name in the struct device?
> >
Right, a separate name is not needed. Will fix it.
> > > + struct mhi_device *mhi_dev;
> > > + u32 dev_id;
> > > + u32 bus_id;
> >
> > Shouldn't the bus id come from the bus it is assigned to? Why store it
> > again?
> >
Ah, I got rid of the use case for these but forgot to remove the variables.
Will remove them.
> > > + void __iomem *regs;
> > > + dma_addr_t iova_start;
> > > + dma_addr_t iova_stop;
> > > + const char *fw_image;
> > > + const char *edl_image;
> > > + bool fbc_download;
> > > + size_t sbl_size;
> > > + size_t seg_len;
> > > + u32 max_chan;
> > > + struct mhi_chan *mhi_chan;
> > > + struct list_head lpm_chans;
> > > + u32 total_ev_rings;
> > > + u32 hw_ev_rings;
> > > + u32 sw_ev_rings;
> > > + u32 nr_irqs_req;
> > > + u32 nr_irqs;
> > > + int *irq;
> > > +
> > > + struct mhi_event *mhi_event;
> > > + struct mhi_cmd *mhi_cmd;
> > > + struct mhi_ctxt *mhi_ctxt;
> > > +
> > > + u32 timeout_ms;
> > > + struct mutex pm_mutex;
> > > + bool pre_init;
> > > + rwlock_t pm_lock;
> > > + u32 pm_state;
> > > + u32 db_access;
> > > + enum mhi_ee_type ee;
> > > + bool wake_set;
> > > + atomic_t dev_wake;
> > > + atomic_t alloc_size;
> > > + atomic_t pending_pkts;
> >
> > Why a bunch of atomic variables when you already have a lock?
> >
So there are multiple locks used throughout the MHI stack and each one
serves its own purpose. For instance, pm_lock protects against concurrent
access to the PM state, transition_lock protects against concurrent access
to the state transition list, and wlock protects against concurrent access
to the device wake state. Since there are multiple worker threads, each
trying to update these variables, we did our best to protect against race
conditions by having all these locks.
And then there are these atomic variables, which are again shared with the
threads holding the above locks, more precisely with threads holding read
locks. So it becomes convenient to just use the atomic_* APIs to update them.
> > > + struct list_head transition_list;
> > > + spinlock_t transition_lock;
> >
> > You don't document this lock.
> >
Right, will do.
> > > + spinlock_t wlock;
> >
> > Why have 2 locks?
> >
Gave the justification above.
> > > + u32 M0, M2, M3, M3_FAST;
> > > + struct work_struct st_worker;
> > > + struct work_struct fw_worker;
> > > + struct work_struct syserr_worker;
> > > + wait_queue_head_t state_event;
> > > +
> > > + void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
> > > + enum mhi_callback cb);
> > > + int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
> > > + void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
> > > + void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
> > > + void (*wake_toggle)(struct mhi_controller *mhi_cntrl);
> > > + int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
> > > + void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
> > > + void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
> > > + void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
> >
> > Shouldn't all of these be part of the bus or driver assigned to this
> > device and not in the device itself? This feels odd as-is.
> >
These are controller-specific callbacks which need to be provided by the
controller drivers. The MHI stack creates a device during controller
registration, so there is essentially no further client driver to bind to
this device. Only the devices created for each channel have client
drivers which bind to them. So these callbacks should live here.
> > > +
> > > + bool bounce_buf;
> > > + size_t buffer_len;
> > > + void *priv_data;
> >
> > Why can't you use the private pointer in struct device?
> >
Again, I got rid of the use case for this but forgot to remove it here.
> > > +};
> > > +
> > > +/**
> > > + * struct mhi_device - Structure representing a MHI device which binds
> > > + * to channels
> > > + * @dev: Driver model device node for the MHI device
> > > + * @tiocm: Device current terminal settings
> > > + * @id: Pointer to MHI device ID struct
> > > + * @chan_name: Name of the channel to which the device binds
> > > + * @mhi_cntrl: Controller the device belongs to
> > > + * @ul_chan: UL channel for the device
> > > + * @dl_chan: DL channel for the device
> > > + * @dev_wake: Device wakeup counter
> > > + * @dev_type: MHI device type
> > > + */
> > > +struct mhi_device {
> > > + struct device dev;
> > > + u32 tiocm;
> > > + const struct mhi_device_id *id;
> > > + const char *chan_name;
> > > + struct mhi_controller *mhi_cntrl;
> > > + struct mhi_chan *ul_chan;
> > > + struct mhi_chan *dl_chan;
> > > + atomic_t dev_wake;
> >
> > Why does this have to be atomic?
> >
Having a closer look at this, it seems the atomicity is not required
here. Will change it.
> > > + enum mhi_device_type dev_type;
> > > +};
> > > +
> > > +/**
> > > + * struct mhi_result - Completed buffer information
> > > + * @buf_addr: Address of data buffer
> > > + * @dir: Channel direction
> > > + * @bytes_xfer: # of bytes transferred
> > > + * @transaction_status: Status of last transaction
> > > + */
> > > +struct mhi_result {
> > > + void *buf_addr;
> >
> > Why void *?
>
> Because it's not possible to resolve this more clearly. The client provides
> the buffer and knows what the structure is. The bus does not. It's just an
> opaque pointer (hence void *) to the bus, and the client needs to decode it.
> This is the struct that is handed to the client to allow them to decode the
> activity (either a received buf, or a confirmation that a transmitted buf
> has been consumed).
>
Jeff gave a reasonable justification in further threads also!
Thanks,
Mani
> >
> > > + enum dma_data_direction dir;
> > > + size_t bytes_xferd;
> > > + int transaction_status;
> > > +};
> > > +
> >
> > thanks,
> >
> > greg k-h
> >
>
>
> --
> Jeffrey Hugo
> Qualcomm Technologies, Inc. is a member of the
> Code Aurora Forum, a Linux Foundation Collaborative Project.
On Sat, Jan 25, 2020 at 02:46:31PM +0100, Greg KH wrote:
> On Fri, Jan 24, 2020 at 03:51:12PM -0700, Jeffrey Hugo wrote:
> > > +struct mhi_event_ctxt {
> > > + u32 reserved : 8;
> > > + u32 intmodc : 8;
> > > + u32 intmodt : 16;
> > > + u32 ertype;
> > > + u32 msivec;
> > > +
> > > + u64 rbase __packed __aligned(4);
> > > + u64 rlen __packed __aligned(4);
> > > + u64 rp __packed __aligned(4);
> > > + u64 wp __packed __aligned(4);
> > > +};
> >
> > This is the struct that is shared with the device, correct? Surely it needs
> > to be packed then? Seems like you'd expect some padding between msivec and
> > rbase on a 64-bit system otherwise, which is probably not intended.
> >
> > Also I strongly dislike bitfields in structures which are shared with
> > another system since the C specification doesn't define how they are
> > implemented, therefore you can run into issues where different compilers
> > decide to implement the actual backing memory differently. I know it's less
> > convenient, but I would prefer the use of bitmasks for these fields.
>
> You have to use bitmasks in order for all endian cpus to work properly
> here, so that needs to be fixed.
>
Okay.
> Oh, and if these values are in hardware, then the correct types also
> need to be used (i.e. __u32 and __u64).
>
I thought the __* prefix types are only for sharing with userspace...
Could you please clarify why it is needed here?
Thanks,
Mani
> good catch!
>
> greg k-h
On Mon, Jan 27, 2020 at 12:32:52PM +0530, Manivannan Sadhasivam wrote:
> > > > + void __iomem *regs;
> > > > + dma_addr_t iova_start;
> > > > + dma_addr_t iova_stop;
> > > > + const char *fw_image;
> > > > + const char *edl_image;
> > > > + bool fbc_download;
> > > > + size_t sbl_size;
> > > > + size_t seg_len;
> > > > + u32 max_chan;
> > > > + struct mhi_chan *mhi_chan;
> > > > + struct list_head lpm_chans;
> > > > + u32 total_ev_rings;
> > > > + u32 hw_ev_rings;
> > > > + u32 sw_ev_rings;
> > > > + u32 nr_irqs_req;
> > > > + u32 nr_irqs;
> > > > + int *irq;
> > > > +
> > > > + struct mhi_event *mhi_event;
> > > > + struct mhi_cmd *mhi_cmd;
> > > > + struct mhi_ctxt *mhi_ctxt;
> > > > +
> > > > + u32 timeout_ms;
> > > > + struct mutex pm_mutex;
> > > > + bool pre_init;
> > > > + rwlock_t pm_lock;
> > > > + u32 pm_state;
> > > > + u32 db_access;
> > > > + enum mhi_ee_type ee;
> > > > + bool wake_set;
> > > > + atomic_t dev_wake;
> > > > + atomic_t alloc_size;
> > > > + atomic_t pending_pkts;
> > >
> > > Why a bunch of atomic variables when you already have a lock?
> > >
>
> So there are multiple locks used throughout the MHI stack and each one
> serves its own purpose. For instance, pm_lock protects against concurrent
> access to the PM state, transition_lock protects against concurrent access
> to the state transition list, and wlock protects against concurrent access
> to the device wake state. Since there are multiple worker threads, each
> trying to update these variables, we did our best to protect against race
> conditions by having all these locks.
>
> And then there are these atomic variables, which are again shared with the
> threads holding the above locks, more precisely with threads holding read
> locks. So it becomes convenient to just use the atomic_* APIs to update them.
An atomic_* API is almost as heavy as a "normal" lock, so while you might
think it is convenient, it's odd that you feel it is needed. As an
example, "wake_set" and "dev_wake" look like they are happening at the
same time, yet one is going to be held with a lock and the other one
updated without one?
Anyway, I'll leave this alone, let's see what your next round looks
like...
greg k-h
On Mon, Jan 27, 2020 at 12:40:52PM +0530, Manivannan Sadhasivam wrote:
> On Sat, Jan 25, 2020 at 02:46:31PM +0100, Greg KH wrote:
> > On Fri, Jan 24, 2020 at 03:51:12PM -0700, Jeffrey Hugo wrote:
> > > > +struct mhi_event_ctxt {
> > > > + u32 reserved : 8;
> > > > + u32 intmodc : 8;
> > > > + u32 intmodt : 16;
> > > > + u32 ertype;
> > > > + u32 msivec;
> > > > +
> > > > + u64 rbase __packed __aligned(4);
> > > > + u64 rlen __packed __aligned(4);
> > > > + u64 rp __packed __aligned(4);
> > > > + u64 wp __packed __aligned(4);
> > > > +};
> > >
> > > This is the struct that is shared with the device, correct? Surely it needs
> > > to be packed then? Seems like you'd expect some padding between msivec and
> > > rbase on a 64-bit system otherwise, which is probably not intended.
> > >
> > > Also I strongly dislike bitfields in structures which are shared with
> > > another system since the C specification doesn't define how they are
> > > implemented, therefore you can run into issues where different compilers
> > > decide to implement the actual backing memory differently. I know it's less
> > > convenient, but I would prefer the use of bitmasks for these fields.
> >
> > You have to use bitmasks in order for all endian cpus to work properly
> > here, so that needs to be fixed.
> >
>
> Okay.
>
> > Oh, and if these values are in hardware, then the correct types also
> > need to be used (i.e. __u32 and __u64).
> >
>
> I thought the __* prefix types are only for sharing with userspace...
> Could you please clarify why it is needed here?
It crosses the kernel boundary, so it needs to use those types. This is
not a new requirement; it has been there for decades.
greg k-h
Hi Jeff,
On Thu, Jan 23, 2020 at 10:05:50AM -0700, Jeffrey Hugo wrote:
> On 1/23/2020 4:18 AM, Manivannan Sadhasivam wrote:
> > This commit adds support for registering MHI controller drivers with
> > the MHI stack. MHI controller drivers manages the interaction with the
> > MHI client devices such as the external modems and WiFi chipsets. They
> > are also the MHI bus master in charge of managing the physical link
> > between the host and client device.
> >
> > This is based on the patch submitted by Sujeev Dias:
> > https://lkml.org/lkml/2018/7/9/987
> >
> > Signed-off-by: Sujeev Dias <[email protected]>
> > Signed-off-by: Siddartha Mohanadoss <[email protected]>
> > [jhugo: added static config for controllers and fixed several bugs]
> > Signed-off-by: Jeffrey Hugo <[email protected]>
> > [mani: removed DT dependency, split and cleaned up for upstream]
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > ---
> > drivers/bus/Kconfig | 1 +
> > drivers/bus/Makefile | 3 +
> > drivers/bus/mhi/Kconfig | 14 +
> > drivers/bus/mhi/Makefile | 2 +
> > drivers/bus/mhi/core/Makefile | 3 +
> > drivers/bus/mhi/core/init.c | 404 +++++++++++++++++++++++++++++
> > drivers/bus/mhi/core/internal.h | 169 ++++++++++++
> > include/linux/mhi.h | 438 ++++++++++++++++++++++++++++++++
> > include/linux/mod_devicetable.h | 12 +
> > 9 files changed, 1046 insertions(+)
> > create mode 100644 drivers/bus/mhi/Kconfig
> > create mode 100644 drivers/bus/mhi/Makefile
> > create mode 100644 drivers/bus/mhi/core/Makefile
> > create mode 100644 drivers/bus/mhi/core/init.c
> > create mode 100644 drivers/bus/mhi/core/internal.h
> > create mode 100644 include/linux/mhi.h
> >
> > diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
> > index 50200d1c06ea..383934e54786 100644
> > --- a/drivers/bus/Kconfig
> > +++ b/drivers/bus/Kconfig
> > @@ -202,5 +202,6 @@ config DA8XX_MSTPRI
> > peripherals.
> > source "drivers/bus/fsl-mc/Kconfig"
> > +source "drivers/bus/mhi/Kconfig"
> > endmenu
> > diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
> > index 1320bcf9fa9d..05f32cd694a4 100644
> > --- a/drivers/bus/Makefile
> > +++ b/drivers/bus/Makefile
> > @@ -34,3 +34,6 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o
> > obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
> > obj-$(CONFIG_DA8XX_MSTPRI) += da8xx-mstpri.o
> > +
> > +# MHI
> > +obj-$(CONFIG_MHI_BUS) += mhi/
> > diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
> > new file mode 100644
> > index 000000000000..a8bd9bd7db7c
> > --- /dev/null
> > +++ b/drivers/bus/mhi/Kconfig
> > @@ -0,0 +1,14 @@
> > +# SPDX-License-Identifier: GPL-2.0
>
> first time I noticed this, although I suspect this will need to be corrected
> "everywhere" -
> Per the SPDX website, the "GPL-2.0" label is deprecated. Its replacement
> is "GPL-2.0-only".
> I think all instances should be updated to "GPL-2.0-only"
>
> > +#
> > +# MHI bus
> > +#
> > +# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> > +#
> > +
> > +config MHI_BUS
> > + tristate "Modem Host Interface (MHI) bus"
> > + help
> > + Bus driver for MHI protocol. Modem Host Interface (MHI) is a
> > + communication protocol used by the host processors to control
> > + and communicate with modem devices over a high speed peripheral
> > + bus or shared memory.
> > diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
> > new file mode 100644
> > index 000000000000..19e6443b72df
> > --- /dev/null
> > +++ b/drivers/bus/mhi/Makefile
> > @@ -0,0 +1,2 @@
> > +# core layer
> > +obj-y += core/
> > diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
> > new file mode 100644
> > index 000000000000..2db32697c67f
> > --- /dev/null
> > +++ b/drivers/bus/mhi/core/Makefile
> > @@ -0,0 +1,3 @@
> > +obj-$(CONFIG_MHI_BUS) := mhi.o
> > +
> > +mhi-y := init.o
> > diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
> > new file mode 100644
> > index 000000000000..5b817ec250e0
> > --- /dev/null
> > +++ b/drivers/bus/mhi/core/init.c
> > @@ -0,0 +1,404 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> > + *
> > + */
> > +
> > +#define dev_fmt(fmt) "MHI: " fmt
> > +
> > +#include <linux/device.h>
> > +#include <linux/dma-direction.h>
> > +#include <linux/dma-mapping.h>
> > +#include <linux/interrupt.h>
> > +#include <linux/list.h>
> > +#include <linux/mhi.h>
> > +#include <linux/mod_devicetable.h>
> > +#include <linux/module.h>
> > +#include <linux/slab.h>
> > +#include <linux/vmalloc.h>
> > +#include <linux/wait.h>
> > +#include "internal.h"
> > +
> > +static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
> > + struct mhi_controller_config *config)
> > +{
> > + int i, num;
> > + struct mhi_event *mhi_event;
> > + struct mhi_event_config *event_cfg;
> > +
> > + num = config->num_events;
> > + mhi_cntrl->total_ev_rings = num;
> > + mhi_cntrl->mhi_event = kcalloc(num, sizeof(*mhi_cntrl->mhi_event),
> > + GFP_KERNEL);
> > + if (!mhi_cntrl->mhi_event)
> > + return -ENOMEM;
> > +
> > + /* Populate event ring */
> > + mhi_event = mhi_cntrl->mhi_event;
> > + for (i = 0; i < num; i++) {
> > + event_cfg = &config->event_cfg[i];
> > +
> > + mhi_event->er_index = i;
> > + mhi_event->ring.elements = event_cfg->num_elements;
> > + mhi_event->intmod = event_cfg->irq_moderation_ms;
> > + mhi_event->irq = event_cfg->irq;
> > +
> > + if (event_cfg->channel != U32_MAX) {
> > + /* This event ring has a dedicated channel */
> > + mhi_event->chan = event_cfg->channel;
> > + if (mhi_event->chan >= mhi_cntrl->max_chan) {
> > + dev_err(mhi_cntrl->dev,
> > + "Event Ring channel not available\n");
> > + goto error_ev_cfg;
> > + }
> > +
> > + mhi_event->mhi_chan =
> > + &mhi_cntrl->mhi_chan[mhi_event->chan];
> > + }
> > +
> > + /* Priority is fixed to 1 for now */
> > + mhi_event->priority = 1;
> > +
> > + mhi_event->db_cfg.brstmode = event_cfg->mode;
> > + if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode))
> > + goto error_ev_cfg;
> > +
> > + mhi_event->data_type = event_cfg->data_type;
> > +
> > + mhi_event->hw_ring = event_cfg->hardware_event;
> > + if (mhi_event->hw_ring)
> > + mhi_cntrl->hw_ev_rings++;
> > + else
> > + mhi_cntrl->sw_ev_rings++;
> > +
> > + mhi_event->cl_manage = event_cfg->client_managed;
> > + mhi_event->offload_ev = event_cfg->offload_channel;
> > + mhi_event++;
> > + }
> > +
> > + /* We need IRQ for each event ring + additional one for BHI */
> > + mhi_cntrl->nr_irqs_req = mhi_cntrl->total_ev_rings + 1;
> > +
> > + return 0;
> > +
> > +error_ev_cfg:
> > +
> > + kfree(mhi_cntrl->mhi_event);
> > + return -EINVAL;
> > +}
> > +
> > +static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
> > + struct mhi_controller_config *config)
> > +{
> > + int i;
> > + u32 chan;
> > + struct mhi_channel_config *ch_cfg;
> > +
> > + mhi_cntrl->max_chan = config->max_channels;
> > +
> > + /*
> > + * The allocation of MHI channels can exceed 32KB in some scenarios,
> > > + * so to avoid any possible memory allocation failures, vzalloc is
> > + * used here
> > + */
> > + mhi_cntrl->mhi_chan = vzalloc(mhi_cntrl->max_chan *
> > + sizeof(*mhi_cntrl->mhi_chan));
> > + if (!mhi_cntrl->mhi_chan)
> > + return -ENOMEM;
> > +
> > + INIT_LIST_HEAD(&mhi_cntrl->lpm_chans);
> > +
> > + /* Populate channel configurations */
> > + for (i = 0; i < config->num_channels; i++) {
> > + struct mhi_chan *mhi_chan;
> > +
> > + ch_cfg = &config->ch_cfg[i];
> > +
> > + chan = ch_cfg->num;
> > + if (chan >= mhi_cntrl->max_chan) {
> > + dev_err(mhi_cntrl->dev,
> > + "Channel %d not available\n", chan);
> > + goto error_chan_cfg;
> > + }
> > +
> > + mhi_chan = &mhi_cntrl->mhi_chan[chan];
> > + mhi_chan->name = ch_cfg->name;
> > + mhi_chan->chan = chan;
> > +
> > + mhi_chan->tre_ring.elements = ch_cfg->num_elements;
> > + if (!mhi_chan->tre_ring.elements)
> > + goto error_chan_cfg;
> > +
> > + /*
> > + * For some channels, local ring length should be bigger than
> > + * the transfer ring length due to internal logical channels
> > + * in device. So host can queue much more buffers than transfer
> > + * ring length. Example, RSC channels should have a larger local
> > + * channel length than transfer ring length.
> > + */
> > + mhi_chan->buf_ring.elements = ch_cfg->local_elements;
> > + if (!mhi_chan->buf_ring.elements)
> > + mhi_chan->buf_ring.elements = mhi_chan->tre_ring.elements;
> > + mhi_chan->er_index = ch_cfg->event_ring;
> > + mhi_chan->dir = ch_cfg->dir;
> > +
> > + /*
> > + * For most channels, chtype is identical to channel directions.
> > + * So, if it is not defined then assign channel direction to
> > + * chtype
> > + */
> > + mhi_chan->type = ch_cfg->type;
> > + if (!mhi_chan->type)
> > + mhi_chan->type = (enum mhi_ch_type)mhi_chan->dir;
> > +
> > + mhi_chan->ee_mask = ch_cfg->ee_mask;
> > +
> > + mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
> > + mhi_chan->xfer_type = ch_cfg->data_type;
> > +
> > + mhi_chan->lpm_notify = ch_cfg->lpm_notify;
> > + mhi_chan->offload_ch = ch_cfg->offload_channel;
> > + mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch;
> > + mhi_chan->pre_alloc = ch_cfg->auto_queue;
> > + mhi_chan->auto_start = ch_cfg->auto_start;
> > +
> > + /*
> > + * If MHI host allocates buffers, then the channel direction
> > + * should be DMA_FROM_DEVICE and the buffer type should be
> > + * MHI_BUF_RAW
> > + */
> > + if (mhi_chan->pre_alloc && (mhi_chan->dir != DMA_FROM_DEVICE ||
> > + mhi_chan->xfer_type != MHI_BUF_RAW)) {
> > + dev_err(mhi_cntrl->dev,
> > + "Invalid channel configuration\n");
> > + goto error_chan_cfg;
> > + }
> > +
> > + /*
> > + * Bi-directional and direction less channel must be an
> > + * offload channel
> > + */
> > + if ((mhi_chan->dir == DMA_BIDIRECTIONAL ||
> > + mhi_chan->dir == DMA_NONE) && !mhi_chan->offload_ch) {
> > + dev_err(mhi_cntrl->dev,
> > + "Invalid channel configuration\n");
> > + goto error_chan_cfg;
> > + }
> > +
> > + if (!mhi_chan->offload_ch) {
> > + mhi_chan->db_cfg.brstmode = ch_cfg->doorbell;
> > + if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode)) {
> > + dev_err(mhi_cntrl->dev,
> > + "Invalid Door bell mode\n");
> > + goto error_chan_cfg;
> > + }
> > + }
> > +
> > + mhi_chan->configured = true;
> > +
> > + if (mhi_chan->lpm_notify)
> > + list_add_tail(&mhi_chan->node, &mhi_cntrl->lpm_chans);
> > + }
> > +
> > + return 0;
> > +
> > +error_chan_cfg:
> > + vfree(mhi_cntrl->mhi_chan);
> > +
> > + return -EINVAL;
> > +}
> > +
> > +static int parse_config(struct mhi_controller *mhi_cntrl,
> > + struct mhi_controller_config *config)
> > +{
> > + int ret;
> > +
> > + /* Parse MHI channel configuration */
> > + ret = parse_ch_cfg(mhi_cntrl, config);
> > + if (ret)
> > + return ret;
> > +
> > + /* Parse MHI event configuration */
> > + ret = parse_ev_cfg(mhi_cntrl, config);
> > + if (ret)
> > + goto error_ev_cfg;
> > +
> > + mhi_cntrl->timeout_ms = config->timeout_ms;
> > + if (!mhi_cntrl->timeout_ms)
> > + mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS;
> > +
> > + mhi_cntrl->bounce_buf = config->use_bounce_buf;
> > + mhi_cntrl->buffer_len = config->buf_len;
> > + if (!mhi_cntrl->buffer_len)
> > + mhi_cntrl->buffer_len = MHI_MAX_MTU;
> > +
> > + return 0;
> > +
> > +error_ev_cfg:
> > + vfree(mhi_cntrl->mhi_chan);
> > +
> > + return ret;
> > +}
> > +
> > +int mhi_register_controller(struct mhi_controller *mhi_cntrl,
> > + struct mhi_controller_config *config)
> > +{
> > + int ret;
> > + int i;
> > + struct mhi_event *mhi_event;
> > + struct mhi_chan *mhi_chan;
> > + struct mhi_cmd *mhi_cmd;
> > + struct mhi_device *mhi_dev;
> > +
> > + if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put)
> > + return -EINVAL;
> > +
> > + if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status)
> > + return -EINVAL;
> > +
> > + ret = parse_config(mhi_cntrl, config);
> > + if (ret)
> > + return -EINVAL;
> > +
> > + mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS,
> > + sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
> > + if (!mhi_cntrl->mhi_cmd) {
> > + ret = -ENOMEM;
> > + goto error_alloc_cmd;
> > + }
> > +
> > + INIT_LIST_HEAD(&mhi_cntrl->transition_list);
> > + spin_lock_init(&mhi_cntrl->transition_lock);
> > + spin_lock_init(&mhi_cntrl->wlock);
> > + init_waitqueue_head(&mhi_cntrl->state_event);
> > +
> > + mhi_cmd = mhi_cntrl->mhi_cmd;
> > + for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++)
> > + spin_lock_init(&mhi_cmd->lock);
> > +
> > + mhi_event = mhi_cntrl->mhi_event;
> > + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
> > + /* Skip for offload events */
> > + if (mhi_event->offload_ev)
> > + continue;
> > +
> > + mhi_event->mhi_cntrl = mhi_cntrl;
> > + spin_lock_init(&mhi_event->lock);
> > + }
> > +
> > + mhi_chan = mhi_cntrl->mhi_chan;
> > + for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
> > + mutex_init(&mhi_chan->mutex);
> > + init_completion(&mhi_chan->completion);
> > + rwlock_init(&mhi_chan->lock);
> > + }
> > +
> > + /* Register controller with MHI bus */
> > + mhi_dev = mhi_alloc_device(mhi_cntrl);
> > + if (IS_ERR(mhi_dev)) {
> > + dev_err(mhi_cntrl->dev, "Failed to allocate device\n");
> > + ret = PTR_ERR(mhi_dev);
> > + goto error_alloc_dev;
> > + }
> > +
> > + mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
> > + mhi_dev->mhi_cntrl = mhi_cntrl;
> > + dev_set_name(&mhi_dev->dev, "%s", mhi_cntrl->name);
> > +
> > + /* Init wakeup source */
> > + device_init_wakeup(&mhi_dev->dev, true);
> > +
> > + ret = device_add(&mhi_dev->dev);
> > + if (ret)
> > + goto error_add_dev;
> > +
> > + mhi_cntrl->mhi_dev = mhi_dev;
> > +
> > + return 0;
> > +
> > +error_add_dev:
> > + mhi_dealloc_device(mhi_cntrl, mhi_dev);
> > +
> > +error_alloc_dev:
> > + kfree(mhi_cntrl->mhi_cmd);
> > +
> > +error_alloc_cmd:
> > + vfree(mhi_cntrl->mhi_chan);
> > + kfree(mhi_cntrl->mhi_event);
> > +
> > + return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(mhi_register_controller);
> > +
> > +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
> > +{
> > + struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
> > +
> > + kfree(mhi_cntrl->mhi_cmd);
> > + kfree(mhi_cntrl->mhi_event);
> > + vfree(mhi_cntrl->mhi_chan);
> > +
> > + device_del(&mhi_dev->dev);
> > + put_device(&mhi_dev->dev);
> > +}
> > +EXPORT_SYMBOL_GPL(mhi_unregister_controller);
> > +
> > +static void mhi_release_device(struct device *dev)
> > +{
> > + struct mhi_device *mhi_dev = to_mhi_device(dev);
> > +
> > + if (mhi_dev->ul_chan)
> > + mhi_dev->ul_chan->mhi_dev = NULL;
> > +
> > + if (mhi_dev->dl_chan)
> > + mhi_dev->dl_chan->mhi_dev = NULL;
> > +
> > + kfree(mhi_dev);
> > +}
> > +
> > +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl)
> > +{
> > + struct mhi_device *mhi_dev;
> > + struct device *dev;
> > +
> > + mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
> > + if (!mhi_dev)
> > + return ERR_PTR(-ENOMEM);
> > +
> > + dev = &mhi_dev->dev;
> > + device_initialize(dev);
> > + dev->bus = &mhi_bus_type;
> > + dev->release = mhi_release_device;
> > + dev->parent = mhi_cntrl->dev;
> > + mhi_dev->mhi_cntrl = mhi_cntrl;
> > + atomic_set(&mhi_dev->dev_wake, 0);
> > +
> > + return mhi_dev;
> > +}
> > +
> > +static int mhi_match(struct device *dev, struct device_driver *drv)
> > +{
> > + return 0;
> > +};
> > +
> > +struct bus_type mhi_bus_type = {
> > + .name = "mhi",
> > + .dev_name = "mhi",
> > + .match = mhi_match,
> > +};
> > +
> > +static int __init mhi_init(void)
> > +{
> > + return bus_register(&mhi_bus_type);
> > +}
> > +
> > +static void __exit mhi_exit(void)
> > +{
> > + bus_unregister(&mhi_bus_type);
> > +}
> > +
> > +postcore_initcall(mhi_init);
> > +module_exit(mhi_exit);
> > +
> > +MODULE_LICENSE("GPL v2");
> > +MODULE_DESCRIPTION("MHI Host Interface");
> > diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
> > new file mode 100644
> > index 000000000000..21f686d3a140
> > --- /dev/null
> > +++ b/drivers/bus/mhi/core/internal.h
> > @@ -0,0 +1,169 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> > + *
> > + */
> > +
> > +#ifndef _MHI_INT_H
> > +#define _MHI_INT_H
> > +
> > +extern struct bus_type mhi_bus_type;
> > +
> > +/* MHI transfer completion events */
> > +enum mhi_ev_ccs {
> > + MHI_EV_CC_INVALID = 0x0,
> > + MHI_EV_CC_SUCCESS = 0x1,
> > + MHI_EV_CC_EOT = 0x2,
> > + MHI_EV_CC_OVERFLOW = 0x3,
> > + MHI_EV_CC_EOB = 0x4,
> > + MHI_EV_CC_OOB = 0x5,
> > + MHI_EV_CC_DB_MODE = 0x6,
> > + MHI_EV_CC_UNDEFINED_ERR = 0x10,
> > + MHI_EV_CC_BAD_TRE = 0x11,
>
> Perhaps a quick comment expanding the "EOT", "EOB", "OOB" acronyms? I feel
> like those might not be obvious to someone not familiar with the protocol.
>
Sure
> > +};
> > +
> > +enum mhi_ch_state {
> > + MHI_CH_STATE_DISABLED = 0x0,
> > + MHI_CH_STATE_ENABLED = 0x1,
> > + MHI_CH_STATE_RUNNING = 0x2,
> > + MHI_CH_STATE_SUSPENDED = 0x3,
> > + MHI_CH_STATE_STOP = 0x4,
> > + MHI_CH_STATE_ERROR = 0x5,
> > +};
> > +
> > +#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \
> > + mode != MHI_DB_BRST_ENABLE)
> > +
> > +#define NR_OF_CMD_RINGS 1
> > +#define CMD_EL_PER_RING 128
> > +#define PRIMARY_CMD_RING 0
> > +#define MHI_MAX_MTU 0xffff
> > +
> > +enum mhi_er_type {
> > + MHI_ER_TYPE_INVALID = 0x0,
> > + MHI_ER_TYPE_VALID = 0x1,
> > +};
> > +
> > +enum mhi_ch_ee_mask {
> > + MHI_CH_EE_PBL = BIT(MHI_EE_PBL),
>
> MHI_EE_PBL does not appear to be defined. Are you perhaps missing an
> include?
>
It is defined in mhi.h as mhi_ee_type.
> > + MHI_CH_EE_SBL = BIT(MHI_EE_SBL),
> > + MHI_CH_EE_AMSS = BIT(MHI_EE_AMSS),
> > + MHI_CH_EE_RDDM = BIT(MHI_EE_RDDM),
> > + MHI_CH_EE_PTHRU = BIT(MHI_EE_PTHRU),
> > + MHI_CH_EE_WFW = BIT(MHI_EE_WFW),
> > + MHI_CH_EE_EDL = BIT(MHI_EE_EDL),
> > +};
> > +
> > +struct db_cfg {
> > + bool reset_req;
> > + bool db_mode;
> > + u32 pollcfg;
> > + enum mhi_db_brst_mode brstmode;
> > + dma_addr_t db_val;
> > + void (*process_db)(struct mhi_controller *mhi_cntrl,
> > + struct db_cfg *db_cfg, void __iomem *io_addr,
> > + dma_addr_t db_val);
> > +};
> > +
> > +struct mhi_ring {
> > + dma_addr_t dma_handle;
> > + dma_addr_t iommu_base;
> > + u64 *ctxt_wp; /* point to ctxt wp */
> > + void *pre_aligned;
> > + void *base;
> > + void *rp;
> > + void *wp;
> > + size_t el_size;
> > + size_t len;
> > + size_t elements;
> > + size_t alloc_size;
> > + void __iomem *db_addr;
> > +};
> > +
> > +struct mhi_cmd {
> > + struct mhi_ring ring;
> > + spinlock_t lock;
> > +};
> > +
> > +struct mhi_buf_info {
> > + dma_addr_t p_addr;
> > + void *v_addr;
> > + void *bb_addr;
> > + void *wp;
> > + size_t len;
> > + void *cb_buf;
> > + enum dma_data_direction dir;
> > +};
> > +
> > +struct mhi_event {
> > + u32 er_index;
> > + u32 intmod;
> > + u32 irq;
> > + int chan; /* this event ring is dedicated to a channel (optional) */
> > + u32 priority;
> > + enum mhi_er_data_type data_type;
> > + struct mhi_ring ring;
> > + struct db_cfg db_cfg;
> > + bool hw_ring;
> > + bool cl_manage;
> > + bool offload_ev; /* managed by a device driver */
> > + spinlock_t lock;
> > + struct mhi_chan *mhi_chan; /* dedicated to channel */
> > + struct tasklet_struct task;
> > + int (*process_event)(struct mhi_controller *mhi_cntrl,
> > + struct mhi_event *mhi_event,
> > + u32 event_quota);
> > + struct mhi_controller *mhi_cntrl;
> > +};
> > +
> > +struct mhi_chan {
> > + u32 chan;
> > + const char *name;
> > + /*
> > + * Important: When consuming, increment tre_ring first and when
> > + * releasing, decrement buf_ring first. If tre_ring has space, buf_ring
> > + * is guaranteed to have space so we do not need to check both rings.
> > + */
> > + struct mhi_ring buf_ring;
> > + struct mhi_ring tre_ring;
> > + u32 er_index;
> > + u32 intmod;
> > + enum mhi_ch_type type;
> > + enum dma_data_direction dir;
> > + struct db_cfg db_cfg;
> > + enum mhi_ch_ee_mask ee_mask;
> > + enum mhi_buf_type xfer_type;
> > + enum mhi_ch_state ch_state;
> > + enum mhi_ev_ccs ccs;
> > + bool lpm_notify;
> > + bool configured;
> > + bool offload_ch;
> > + bool pre_alloc;
> > + bool auto_start;
> > + int (*gen_tre)(struct mhi_controller *mhi_cntrl,
> > + struct mhi_chan *mhi_chan, void *buf, void *cb,
> > + size_t len, enum mhi_flags flags);
> > + int (*queue_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
> > + void *buf, size_t len, enum mhi_flags mflags);
> > + struct mhi_device *mhi_dev;
> > + void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *result);
> > + struct mutex mutex;
> > + struct completion completion;
> > + rwlock_t lock;
> > + struct list_head node;
> > +};
> > +
> > +/* Default MHI timeout */
> > +#define MHI_TIMEOUT_MS (1000)
> > +
> > +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
> > +static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
> > + struct mhi_device *mhi_dev)
> > +{
> > + kfree(mhi_dev);
> > +}
> > +
> > +int mhi_destroy_device(struct device *dev, void *data);
> > +void mhi_create_devices(struct mhi_controller *mhi_cntrl);
> > +
> > +#endif /* _MHI_INT_H */
> > diff --git a/include/linux/mhi.h b/include/linux/mhi.h
> > new file mode 100644
> > index 000000000000..69cf9a4b06c7
> > --- /dev/null
> > +++ b/include/linux/mhi.h
> > @@ -0,0 +1,438 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/*
> > + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> > + *
> > + */
> > +#ifndef _MHI_H_
> > +#define _MHI_H_
> > +
> > +#include <linux/device.h>
> > +#include <linux/dma-direction.h>
> > +#include <linux/mutex.h>
> > +#include <linux/rwlock_types.h>
> > +#include <linux/slab.h>
> > +#include <linux/spinlock_types.h>
> > +#include <linux/wait.h>
> > +#include <linux/workqueue.h>
> > +
> > +struct mhi_chan;
> > +struct mhi_event;
> > +struct mhi_ctxt;
> > +struct mhi_cmd;
> > +struct mhi_buf_info;
> > +
> > +/**
> > + * enum mhi_callback - MHI callback
> > + * @MHI_CB_IDLE: MHI entered idle state
> > + * @MHI_CB_PENDING_DATA: New data available for client to process
> > + * @MHI_CB_LPM_ENTER: MHI host entered low power mode
> > + * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
> > + * @MHI_CB_EE_RDDM: MHI device entered RDDM exec env
> > + * @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env
> > + * @MHI_CB_SYS_ERROR: MHI device entered error state (may recover)
> > + * @MHI_CB_FATAL_ERROR: MHI device entered fatal error state
> > + */
> > +enum mhi_callback {
> > + MHI_CB_IDLE,
> > + MHI_CB_PENDING_DATA,
> > + MHI_CB_LPM_ENTER,
> > + MHI_CB_LPM_EXIT,
> > + MHI_CB_EE_RDDM,
> > + MHI_CB_EE_MISSION_MODE,
> > + MHI_CB_SYS_ERROR,
> > + MHI_CB_FATAL_ERROR,
> > +};
> > +
> > +/**
> > + * enum mhi_flags - Transfer flags
> > + * @MHI_EOB: End of buffer for bulk transfer
> > + * @MHI_EOT: End of transfer
> > + * @MHI_CHAIN: Linked transfer
> > + */
> > +enum mhi_flags {
> > + MHI_EOB,
> > + MHI_EOT,
> > + MHI_CHAIN,
> > +};
> > +
> > +/**
> > + * enum mhi_device_type - Device types
> > + * @MHI_DEVICE_XFER: Handles data transfer
> > + * @MHI_DEVICE_TIMESYNC: Use for timesync feature
> > + * @MHI_DEVICE_CONTROLLER: Control device
> > + */
> > +enum mhi_device_type {
> > + MHI_DEVICE_XFER,
> > + MHI_DEVICE_TIMESYNC,
> > + MHI_DEVICE_CONTROLLER,
> > +};
> > +
> > +/**
> > + * enum mhi_ch_type - Channel types
> > + * @MHI_CH_TYPE_INVALID: Invalid channel type
> > + * @MHI_CH_TYPE_OUTBOUND: Outbound channel to the device
> > + * @MHI_CH_TYPE_INBOUND: Inbound channel from the device
> > + * @MHI_CH_TYPE_INBOUND_COALESCED: Coalesced channel for the device to combine
> > + * multiple packets and send them as a single
> > + * large packet to reduce CPU consumption
> > + */
> > +enum mhi_ch_type {
> > + MHI_CH_TYPE_INVALID = 0,
> > + MHI_CH_TYPE_OUTBOUND = DMA_TO_DEVICE,
> > + MHI_CH_TYPE_INBOUND = DMA_FROM_DEVICE,
> > + MHI_CH_TYPE_INBOUND_COALESCED = 3,
> > +};
> > +
> > +/**
> > + * enum mhi_ee_type - Execution environment types
> > + * @MHI_EE_PBL: Primary Bootloader
> > + * @MHI_EE_SBL: Secondary Bootloader
> > + * @MHI_EE_AMSS: Modem, aka the primary runtime EE
> > + * @MHI_EE_RDDM: Ram dump download mode
> > + * @MHI_EE_WFW: WLAN firmware mode
> > + * @MHI_EE_PTHRU: Passthrough
> > + * @MHI_EE_EDL: Embedded downloader
> > + */
> > +enum mhi_ee_type {
> > + MHI_EE_PBL,
> > + MHI_EE_SBL,
> > + MHI_EE_AMSS,
> > + MHI_EE_RDDM,
> > + MHI_EE_WFW,
> > + MHI_EE_PTHRU,
> > + MHI_EE_EDL,
> > + MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
> > + MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
> > + MHI_EE_NOT_SUPPORTED,
> > + MHI_EE_MAX,
> > +};
> > +
> > +/**
> > + * enum mhi_buf_type - Accepted buffer type for the channel
> > + * @MHI_BUF_RAW: Raw buffer
> > + * @MHI_BUF_SKB: SKB struct
> > + * @MHI_BUF_SCLIST: Scatter-gather list
> > + * @MHI_BUF_NOP: CPU offload channel, host does not accept transfer
> > + * @MHI_BUF_DMA: Receive DMA address mapped by client
> > + * @MHI_BUF_RSC_DMA: RSC type premapped buffer
>
> Maybe it's just me, but what is "RSC"?
>
Reserved Side Coalesced. I thought I had mentioned it somewhere, but apparently not. Will do.
> > + */
> > +enum mhi_buf_type {
> > + MHI_BUF_RAW,
> > + MHI_BUF_SKB,
> > + MHI_BUF_SCLIST,
> > + MHI_BUF_NOP,
> > + MHI_BUF_DMA,
> > + MHI_BUF_RSC_DMA,
> > +};
> > +
> > +/**
> > + * enum mhi_er_data_type - Event ring data types
> > + * @MHI_ER_DATA: Only client data over this ring
> > + * @MHI_ER_CTRL: MHI control data and client data
> > + * @MHI_ER_TSYNC: Time sync events
> > + */
> > +enum mhi_er_data_type {
> > + MHI_ER_DATA,
> > + MHI_ER_CTRL,
> > + MHI_ER_TSYNC,
> > +};
> > +
> > +/**
> > + * enum mhi_db_brst_mode - Doorbell mode
> > + * @MHI_DB_BRST_DISABLE: Burst mode disable
> > + * @MHI_DB_BRST_ENABLE: Burst mode enable
> > + */
> > +enum mhi_db_brst_mode {
> > + MHI_DB_BRST_DISABLE = 0x2,
> > + MHI_DB_BRST_ENABLE = 0x3,
> > +};
> > +
> > +/**
> > + * struct mhi_channel_config - Channel configuration structure for controller
> > + * @num: The number assigned to this channel
> > + * @name: The name of this channel
> > + * @num_elements: The number of elements that can be queued to this channel
> > + * @local_elements: The local ring length of the channel
> > + * @event_ring: The event ring index that services this channel
> > + * @dir: Direction that data may flow on this channel
> > + * @type: Channel type
> > + * @ee_mask: Execution Environment mask for this channel
>
> But the mask defines are in internal.h, so how is a client suposed to know
> what they are?
>
Again, I missed the whole set of changes addressing your internal review (this is one
of them). I have defined the masks in mhi.h. Will add it in the next iteration.
> > + * @pollcfg: Polling configuration for burst mode. 0 is default. milliseconds
> > + for UL channels, multiple of 8 ring elements for DL channels
> > + * @data_type: Data type accepted by this channel
> > + * @doorbell: Doorbell mode
> > + * @lpm_notify: The channel master requires low power mode notifications
> > + * @offload_channel: The client manages the channel completely
> > + * @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition
> > + * @auto_queue: Framework will automatically queue buffers for DL traffic
> > + * @auto_start: Automatically start (open) this channel
> > + */
> > +struct mhi_channel_config {
> > + u32 num;
> > + char *name;
> > + u32 num_elements;
> > + u32 local_elements;
> > + u32 event_ring;
> > + enum dma_data_direction dir;
> > + enum mhi_ch_type type;
> > + u32 ee_mask;
> > + u32 pollcfg;
> > + enum mhi_buf_type data_type;
> > + enum mhi_db_brst_mode doorbell;
> > + bool lpm_notify;
> > + bool offload_channel;
> > + bool doorbell_mode_switch;
> > + bool auto_queue;
> > + bool auto_start;
> > +};
> > +
> > +/**
> > + * struct mhi_event_config - Event ring configuration structure for controller
> > + * @num_elements: The number of elements that can be queued to this ring
> > + * @irq_moderation_ms: Delay irq for additional events to be aggregated
> > + * @irq: IRQ associated with this ring
> > + * @channel: Dedicated channel number. U32_MAX indicates a non-dedicated ring
> > + * @mode: Doorbell mode
> > + * @data_type: Type of data this ring will process
> > + * @hardware_event: This ring is associated with hardware channels
> > + * @client_managed: This ring is client managed
> > + * @offload_channel: This ring is associated with an offloaded channel
> > + * @priority: Priority of this ring. Use 1 for now
> > + */
> > +struct mhi_event_config {
> > + u32 num_elements;
> > + u32 irq_moderation_ms;
> > + u32 irq;
> > + u32 channel;
> > + enum mhi_db_brst_mode mode;
> > + enum mhi_er_data_type data_type;
> > + bool hardware_event;
> > + bool client_managed;
> > + bool offload_channel;
> > + u32 priority;
> > +};
> > +
> > +/**
> > + * struct mhi_controller_config - Root MHI controller configuration
> > + * @max_channels: Maximum number of channels supported
> > + * @timeout_ms: Timeout value for operations. 0 means use default
> > + * @use_bounce_buf: Use a bounce buffer pool due to limited DDR access
> > + * @m2_no_db: Host is not allowed to ring DB in M2 state
> > + * @buf_len: Size of automatically allocated buffers. 0 means use default
> > + * @num_channels: Number of channels defined in @ch_cfg
> > + * @ch_cfg: Array of defined channels
> > + * @num_events: Number of event rings defined in @event_cfg
> > + * @event_cfg: Array of defined event rings
> > + */
> > +struct mhi_controller_config {
> > + u32 max_channels;
> > + u32 timeout_ms;
> > + bool use_bounce_buf;
> > + bool m2_no_db;
> > + u32 buf_len;
> > + u32 num_channels;
> > + struct mhi_channel_config *ch_cfg;
> > + u32 num_events;
> > + struct mhi_event_config *event_cfg;
> > +};
> > +
> > +/**
> > + * struct mhi_controller - Master MHI controller structure
> > + * @name: Name of the controller
> > + * @dev: Driver model device node for the controller
> > + * @mhi_dev: MHI device instance for the controller
> > + * @dev_id: Device ID of the controller
> > + * @bus_id: Physical bus instance used by the controller
> > + * @regs: Base address of MHI MMIO register space
> > + * @iova_start: IOMMU starting address for data
> > + * @iova_stop: IOMMU stop address for data
> > + * @fw_image: Firmware image name for normal booting
> > + * @edl_image: Firmware image name for emergency download mode
> > + * @fbc_download: MHI host needs to do complete image transfer
> > + * @sbl_size: SBL image size
> > + * @seg_len: BHIe vector size
> > + * @max_chan: Maximum number of channels the controller supports
> > + * @mhi_chan: Points to the channel configuration table
> > + * @lpm_chans: List of channels that require LPM notifications
> > + * @total_ev_rings: Total # of event rings allocated
> > + * @hw_ev_rings: Number of hardware event rings
> > + * @sw_ev_rings: Number of software event rings
> > + * @nr_irqs_req: Number of IRQs required to operate
> > + * @nr_irqs: Number of IRQ allocated by bus master
> > + * @irq: base irq # to request
> > + * @mhi_event: MHI event ring configurations table
> > + * @mhi_cmd: MHI command ring configurations table
> > + * @mhi_ctxt: MHI device context, shared memory between host and device
> > + * @timeout_ms: Timeout in ms for state transitions
> > + * @pm_mutex: Mutex for suspend/resume operation
> > + * @pre_init: MHI host needs to do pre-initialization before power up
> > + * @pm_lock: Lock for protecting MHI power management state
> > + * @pm_state: MHI power management state
> > + * @db_access: DB access states
> > + * @ee: MHI device execution environment
> > + * @wake_set: Device wakeup set flag
> > + * @dev_wake: Device wakeup count
> > + * @alloc_size: Total memory allocations size of the controller
> > + * @pending_pkts: Pending packets for the controller
> > + * @transition_list: List of MHI state transitions
> > + * @wlock: Lock for protecting device wakeup
> > + * @M0: M0 state counter for debugging
> > + * @M2: M2 state counter for debugging
> > + * @M3: M3 state counter for debugging
> > + * @M3_FAST: M3 Fast state counter for debugging
> > + * @st_worker: State transition worker
> > + * @fw_worker: Firmware download worker
> > + * @syserr_worker: System error worker
> > + * @state_event: State change event
> > + * @status_cb: CB function to notify various power states to bus master
> > + * @link_status: CB function to query link status of the device
> > + * @wake_get: CB function to assert device wake
> > + * @wake_put: CB function to de-assert device wake
> > + * @wake_toggle: CB function to assert and de-assert (toggle) device wake
> > + * @runtime_get: CB function to controller runtime resume
> > + * @runtime_put: CB function to decrement pm usage
> > + * @lpm_disable: CB function to request disable link level low power modes
> > + * @lpm_enable: CB function to request enable link level low power modes again
> > + * @bounce_buf: Use of bounce buffer
> > + * @buffer_len: Bounce buffer length
> > + * @priv_data: Points to bus master's private data
> > + */
> > +struct mhi_controller {
> > + const char *name;
> > + struct device *dev;
> > + struct mhi_device *mhi_dev;
> > + u32 dev_id;
> > + u32 bus_id;
> > + void __iomem *regs;
> > + dma_addr_t iova_start;
> > + dma_addr_t iova_stop;
> > + const char *fw_image;
> > + const char *edl_image;
> > + bool fbc_download;
> > + size_t sbl_size;
> > + size_t seg_len;
> > + u32 max_chan;
> > + struct mhi_chan *mhi_chan;
> > + struct list_head lpm_chans;
> > + u32 total_ev_rings;
> > + u32 hw_ev_rings;
> > + u32 sw_ev_rings;
> > + u32 nr_irqs_req;
> > + u32 nr_irqs;
> > + int *irq;
> > +
> > + struct mhi_event *mhi_event;
> > + struct mhi_cmd *mhi_cmd;
> > + struct mhi_ctxt *mhi_ctxt;
> > +
> > + u32 timeout_ms;
> > + struct mutex pm_mutex;
> > + bool pre_init;
> > + rwlock_t pm_lock;
> > + u32 pm_state;
> > + u32 db_access;
> > + enum mhi_ee_type ee;
> > + bool wake_set;
> > + atomic_t dev_wake;
> > + atomic_t alloc_size;
> > + atomic_t pending_pkts;
> > + struct list_head transition_list;
> > + spinlock_t transition_lock;
> > + spinlock_t wlock;
> > + u32 M0, M2, M3, M3_FAST;
> > + struct work_struct st_worker;
> > + struct work_struct fw_worker;
> > + struct work_struct syserr_worker;
> > + wait_queue_head_t state_event;
> > +
> > + void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
> > + enum mhi_callback cb);
> > + int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
> > + void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
> > + void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
> > + void (*wake_toggle)(struct mhi_controller *mhi_cntrl);
> > + int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
> > + void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
> > + void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
> > + void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
> > +
> > + bool bounce_buf;
> > + size_t buffer_len;
> > + void *priv_data;
> > +};
> > +
> > +/**
> > + * struct mhi_device - Structure representing a MHI device which binds
> > + * to channels
> > + * @dev: Driver model device node for the MHI device
> > + * @tiocm: Device current terminal settings
> > + * @id: Pointer to MHI device ID struct
> > + * @chan_name: Name of the channel to which the device binds
> > + * @mhi_cntrl: Controller the device belongs to
> > + * @ul_chan: UL channel for the device
> > + * @dl_chan: DL channel for the device
> > + * @dev_wake: Device wakeup counter
> > + * @dev_type: MHI device type
> > + */
> > +struct mhi_device {
> > + struct device dev;
> > + u32 tiocm;
> > + const struct mhi_device_id *id;
> > + const char *chan_name;
> > + struct mhi_controller *mhi_cntrl;
> > + struct mhi_chan *ul_chan;
> > + struct mhi_chan *dl_chan;
> > + atomic_t dev_wake;
> > + enum mhi_device_type dev_type;
> > +};
> > +
> > +/**
> > + * struct mhi_result - Completed buffer information
> > + * @buf_addr: Address of data buffer
> > + * @dir: Channel direction
> > + * @bytes_xfer: # of bytes transferred
> > + * @transaction_status: Status of last transaction
> > + */
> > +struct mhi_result {
> > + void *buf_addr;
> > + enum dma_data_direction dir;
> > + size_t bytes_xferd;
>
> Description says this is named "bytes_xfer"
>
Ah yes, that's a typo; it should be bytes_xferd. Will fix it.
Thanks,
Mani
> > + int transaction_status;
> > +};
> > +
> > +#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev)
> > +
> > +/**
> > + * mhi_controller_set_devdata - Set MHI controller private data
> > + * @mhi_cntrl: MHI controller to set data
> > + */
> > +static inline void mhi_controller_set_devdata(struct mhi_controller *mhi_cntrl,
> > + void *priv)
> > +{
> > + mhi_cntrl->priv_data = priv;
> > +}
> > +
> > +/**
> > + * mhi_controller_get_devdata - Get MHI controller private data
> > + * @mhi_cntrl: MHI controller to get data
> > + */
> > +static inline void *mhi_controller_get_devdata(struct mhi_controller *mhi_cntrl)
> > +{
> > + return mhi_cntrl->priv_data;
> > +}
> > +
> > +/**
> > + * mhi_register_controller - Register MHI controller
> > + * @mhi_cntrl: MHI controller to register
> > + * @config: Configuration to use for the controller
> > + */
> > +int mhi_register_controller(struct mhi_controller *mhi_cntrl,
> > + struct mhi_controller_config *config);
> > +
> > +/**
> > + * mhi_unregister_controller - Unregister MHI controller
> > + * @mhi_cntrl: MHI controller to unregister
> > + */
> > +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl);
> > +
> > +#endif /* _MHI_H_ */
> > diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
> > index e3596db077dc..be15e997fe39 100644
> > --- a/include/linux/mod_devicetable.h
> > +++ b/include/linux/mod_devicetable.h
> > @@ -821,4 +821,16 @@ struct wmi_device_id {
> > const void *context;
> > };
> > +#define MHI_NAME_SIZE 32
> > +
> > +/**
> > + * struct mhi_device_id - MHI device identification
> > + * @chan: MHI channel name
> > + * @driver_data: driver data;
> > + */
> > +struct mhi_device_id {
> > + const char chan[MHI_NAME_SIZE];
> > + kernel_ulong_t driver_data;
> > +};
> > +
> > #endif /* LINUX_MOD_DEVICETABLE_H */
> >
>
>
> --
> Jeffrey Hugo
> Qualcomm Technologies, Inc. is a member of the
> Code Aurora Forum, a Linux Foundation Collaborative Project.
On Thu, Jan 23, 2020 at 09:41:42AM -0700, Jeffrey Hugo wrote:
> On 1/23/2020 4:18 AM, Manivannan Sadhasivam wrote:
> > MHI (Modem Host Interface) is a communication protocol used by the
> > host processors to control and communicate with modems over a high
> > speed peripheral bus or shared memory. The MHI protocol has been
> > designed and developed by Qualcomm Innovation Center, Inc., for use
> > in their modems. This commit adds the documentation for the bus and
> > the implementation in Linux kernel.
> >
> > This is based on the patch submitted by Sujeev Dias:
> > https://lkml.org/lkml/2018/7/9/987
> >
> > Cc: Jonathan Corbet <[email protected]>
> > Cc: [email protected]
> > Signed-off-by: Sujeev Dias <[email protected]>
> > Signed-off-by: Siddartha Mohanadoss <[email protected]>
> > [mani: converted to .rst and split the patch]
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > ---
> > Documentation/index.rst | 1 +
> > Documentation/mhi/index.rst | 18 +++
> > Documentation/mhi/mhi.rst | 218 +++++++++++++++++++++++++++++++++
> > Documentation/mhi/topology.rst | 60 +++++++++
> > 4 files changed, 297 insertions(+)
> > create mode 100644 Documentation/mhi/index.rst
> > create mode 100644 Documentation/mhi/mhi.rst
> > create mode 100644 Documentation/mhi/topology.rst
> >
> > diff --git a/Documentation/index.rst b/Documentation/index.rst
> > index e99d0bd2589d..edc9b211bbff 100644
> > --- a/Documentation/index.rst
> > +++ b/Documentation/index.rst
> > @@ -133,6 +133,7 @@ needed).
> > misc-devices/index
> > mic/index
> > scheduler/index
> > + mhi/index
> > Architecture-agnostic documentation
> > -----------------------------------
> > diff --git a/Documentation/mhi/index.rst b/Documentation/mhi/index.rst
> > new file mode 100644
> > index 000000000000..1d8dec302780
> > --- /dev/null
> > +++ b/Documentation/mhi/index.rst
> > @@ -0,0 +1,18 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +===
> > +MHI
> > +===
> > +
> > +.. toctree::
> > + :maxdepth: 1
> > +
> > + mhi
> > + topology
> > +
> > +.. only:: subproject and html
> > +
> > + Indices
> > + =======
> > +
> > + * :ref:`genindex`
> > diff --git a/Documentation/mhi/mhi.rst b/Documentation/mhi/mhi.rst
> > new file mode 100644
> > index 000000000000..718dbbdc7a04
> > --- /dev/null
> > +++ b/Documentation/mhi/mhi.rst
> > @@ -0,0 +1,218 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +==========================
> > +MHI (Modem Host Interface)
> > +==========================
> > +
> > +This document provides information about the MHI protocol.
> > +
> > +Overview
> > +========
> > +
> > +MHI is a protocol developed by Qualcomm Innovation Center, Inc., It is used
>
> The "," suggests the sentence is going to continue, yet "it" has a capitol
> "I" like its the start of a new sentence. This seems wrong to me. Perhaps
> drop that final comma?
>
Interesting find... Yes, the comma needs to be dropped :)
> > +by the host processors to control and communicate with modem devices over high
> > +speed peripheral buses or shared memory. Even though MHI can be easily adapted
> > +to any peripheral buses, it is primarily used with PCIe based devices. MHI
> > +provides logical channels over the physical buses and allows transporting the
> > +modem protocols, such as IP data packets, modem control messages, and
> > +diagnostics over at least one of those logical channels. Also, the MHI
> > +protocol provides a data acknowledgment feature and manages the power state of the
> > +modems via one or more logical channels.
> > +
> > +MHI Internals
> > +=============
> > +
> > +MMIO
> > +----
> > +
> > +MMIO (Memory mapped IO) consists of a set of registers in the device hardware,
> > +which are mapped to the host memory space by the peripheral buses like PCIe.
> > +Following are the major components of MMIO register space:
> > +
> > +MHI control registers: Access to the MHI configuration registers
> > +
> > +MHI BHI registers: BHI (Boot Host Interface) registers are used by the host
> > +for downloading the firmware to the device before MHI initialization.
> > +
> > +Channel Doorbell array: Channel Doorbell (DB) registers used by the host to
> > +notify the device when there is new work to do.
> > +
> > +Event Doorbell array: Associated with event context array, the Event Doorbell
> > +(DB) registers are used by the host to notify the device when new events are
> > +available.
> > +
> > +Debug registers: A set of registers and counters used by the device to expose
> > +debugging information like performance, functional, and stability to the host.
> > +
> > +Data structures
> > +---------------
> > +
> > +All data structures used by MHI are in the host system memory. Using the
> > +physical interface, the device accesses those data structures. MHI data
> > +structures and data buffers in the host system memory regions are mapped for
> > +the device.
> > +
> > +Channel context array: All channel configurations are organized in the
> > +channel context data array.
> > +
> > +Transfer rings: Used by the host to schedule work items for a channel. The
> > +transfer rings are organized as a circular queue of Transfer Descriptors (TD).
> > +
> > +Event context array: All event configurations are organized in the event context
> > +data array.
> > +
> > +Event rings: Used by the device to send completion and state transition messages
> > +to the host.
> > +
> > +Command context array: All command configurations are organized in the
> > +command context data array.
> > +
> > +Command rings: Used by the host to send MHI commands to the device. The command
> > +rings are organized as a circular queue of Command Descriptors (CD).
> > +
> > +Channels
> > +--------
> > +
> > +MHI channels are logical, unidirectional data pipes between a host and a device.
> > +The concept of channels in MHI is similar to endpoints in USB. MHI supports up
> > +to 256 channels. However, specific device implementations may support less than
> > +the maximum number of channels allowed.
> > +
> > +Two unidirectional channels with their associated transfer rings form a
> > +bidirectional data pipe, which can be used by the upper-layer protocols to
> > +transport application data packets (such as IP packets, modem control messages,
> > +diagnostics messages, and so on). Each channel is associated with a single
> > +transfer ring.
> > +
> > +Transfer rings
> > +--------------
> > +
> > +Transfers between the host and device are organized by channels and defined by
> > +Transfer Descriptors (TD). TDs are managed through transfer rings, which are
> > +defined for each channel between the device and host and reside in the host
> > +memory. TDs consist of one or more ring elements (or transfer blocks)::
> > +
> > + [Read Pointer (RP)] ----------->[Ring Element] } TD
> > + [Write Pointer (WP)]- [Ring Element]
> > + - [Ring Element]
> > + --------->[Ring Element]
> > + [Ring Element]
> > +
> > +Below is the basic usage of transfer rings:
> > +
> > +* Host allocates memory for transfer ring.
> > +* Host sets the base pointer, read pointer, and write pointer in corresponding
> > + channel context.
> > +* Ring is considered empty when RP == WP.
> > +* Ring is considered full when WP + 1 == RP.
> > +* RP indicates the next element to be serviced by the device.
> > +* When the host has a new buffer to send, it updates the ring element with
> > + buffer information, increments the WP to the next element and rings the
> > + associated channel DB.
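As an aside for readers of the archive: the empty/full rules in the list above reduce to a few lines of index arithmetic. Below is a user-space sketch; the struct, the names, and the modulo math are illustrative only and are not taken from the patch (the patch tracks ring positions with pointers into host memory).

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative only: a transfer ring tracked by element indices
 * rather than by pointers, to keep the occupancy math obvious. */
struct xfer_ring {
	size_t elements; /* number of ring elements */
	size_t rp;       /* read pointer: next element the device services */
	size_t wp;       /* write pointer: next element the host fills */
};

/* Ring is considered empty when RP == WP. */
static bool ring_empty(const struct xfer_ring *r)
{
	return r->rp == r->wp;
}

/* Ring is considered full when WP + 1 == RP (modulo the ring size). */
static bool ring_full(const struct xfer_ring *r)
{
	return (r->wp + 1) % r->elements == r->rp;
}

/* Host enqueues one TD: fill the element at WP, then advance WP.
 * Returns false when there is no room left in the ring. */
static bool ring_queue_td(struct xfer_ring *r)
{
	if (ring_full(r))
		return false;
	/* ...write buffer address/length into element r->wp here... */
	r->wp = (r->wp + 1) % r->elements;
	/* ...then ring the associated channel doorbell... */
	return true;
}
```

Note that with this convention one element is always sacrificed to distinguish full from empty, which is why a 4-element ring holds at most 3 outstanding TDs.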
> > +
> > +Event rings
> > +-----------
> > +
> > +Events from the device to host are organized in event rings and defined by Event
> > +Descriptors (ED). Event rings are used by the device to report events such as
> > +data transfer completion status, command completion status, and state changes
> > +to the host. Event rings are arrays of EDs that reside in the host
> > +memory. EDs consist of one or more ring elements (or transfer blocks)::
> > +
> > + [Read Pointer (RP)] ----------->[Ring Element] } ED
> > + [Write Pointer (WP)]- [Ring Element]
> > + - [Ring Element]
> > + --------->[Ring Element]
> > + [Ring Element]
> > +
> > +Below is the basic usage of event rings:
> > +
> > +* Host allocates memory for event ring.
> > +* Host sets the base pointer, read pointer, and write pointer in corresponding
> > + channel context.
> > +* Both host and device have a local copy of the RP and WP.
> > +* Ring is considered empty (no events to service) when WP + 1 == RP.
> > +* Ring is considered full of events when RP == WP.
> > +* When there is a new event the device needs to send, the device updates ED
> > + pointed by RP, increments the RP to the next element and triggers the
> > + interrupt.
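Aside: note that the event ring occupancy convention described above is the inverse of the transfer ring one, because here the device is the producer and the host the consumer. A hypothetical sketch of just the two checks (names and index math are mine, not the patch's):

```c
#include <stdbool.h>
#include <stddef.h>

/* Illustrative only: event ring occupancy as described in the text.
 * The device writes EDs and advances RP; the host consumes events
 * and advances WP. */
struct ev_ring {
	size_t elements;
	size_t rp; /* device writes the ED here, then advances */
	size_t wp; /* host advances after consuming an event */
};

/* No events to service when WP + 1 == RP (modulo the ring size). */
static bool ev_ring_empty(const struct ev_ring *r)
{
	return (r->wp + 1) % r->elements == r->rp;
}

/* Ring is full of events when RP == WP. */
static bool ev_ring_full(const struct ev_ring *r)
{
	return r->rp == r->wp;
}
```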
> > +
> > +Ring Element
> > +------------
> > +
> > +A Ring Element is a data structure used to transfer a single block
> > +of data between the host and the device. Transfer ring element types contain a
> > +single buffer pointer, the size of the buffer, and additional control
> > +information. Other ring element types may only contain control and status
> > +information. For single buffer operations, a ring descriptor is composed of a
> > +single element. For large multi-buffer operations (such as scatter and gather),
> > +elements can be chained to form a longer descriptor.
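Aside: the chaining described above can be pictured with a toy element type. The field names and the chain flag below are illustrative; the real TRE layout is defined by the MHI specification, not by this sketch.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Toy ring element: one buffer pointer, its size, and a flag saying
 * "the next element belongs to the same descriptor". */
struct toy_elem {
	uint64_t buf_addr;
	uint32_t len;
	bool chain;
};

/* Count the elements that make up the descriptor starting at 'first'.
 * A single-buffer TD is one element; a scatter-gather TD chains
 * several elements into one longer descriptor. */
static size_t descriptor_elements(const struct toy_elem *ring,
				  size_t ring_len, size_t first)
{
	size_t n = 1;

	while (ring[(first + n - 1) % ring_len].chain)
		n++;
	return n;
}
```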
> > +
> > +MHI Operations
> > +==============
> > +
> > +MHI States
> > +----------
> > +
> > +MHI_STATE_RESET
> > +~~~~~~~~~~~~~~~
> > +MHI is in reset state after power-up or hardware reset. The host is not allowed
> > +to access device MMIO register space.
> > +
> > +MHI_STATE_READY
> > +~~~~~~~~~~~~~~~
> > +MHI is ready for initialization. The host can start MHI initialization by
> > +programming MMIO registers.
> > +
> > +MHI_STATE_M0
> > +~~~~~~~~~~~~
> > +MHI is running and operational in the device. The host can start channels by
> > +issuing channel start command.
> > +
> > +MHI_STATE_M1
> > +~~~~~~~~~~~~
> > +MHI operation is suspended by the device. This state is entered when the
> > +device detects inactivity at the physical interface within a preset time.
> > +
> > +MHI_STATE_M2
> > +~~~~~~~~~~~~
> > +MHI is in low power state. MHI operation is suspended and the device may
> > +enter lower power mode.
> > +
> > +MHI_STATE_M3
> > +~~~~~~~~~~~~
> > +MHI operation stopped by the host. This state is entered when the host suspends
> > +MHI operation.
> > +
> > +MHI Initialization
> > +------------------
> > +
> > +After system boots, the device is enumerated over the physical interface.
> > +In the case of PCIe, the device is enumerated and assigned BAR-0 for
> > +the device's MMIO register space. To initialize the MHI in a device,
> > +the host performs the following operations:
> > +
> > +* Allocates the MHI context for event, channel and command arrays.
> > +* Initializes the context array, and prepares interrupts.
> > +* Waits until the device enters READY state.
> > +* Programs MHI MMIO registers and sets device into MHI_M0 state.
> > +* Waits for the device to enter M0 state.
> > +
> > +MHI Data Transfer
> > +-----------------
> > +
> > +MHI data transfer is initiated by the host to transfer data to the device.
> > +Following is the sequence of operations performed by the host to transfer
> > +data to the device:
> > +
> > +* Host prepares TD with buffer information.
> > +* Host increments the WP of the corresponding channel transfer ring.
> > +* Host rings the channel DB register.
> > +* Device wakes up to process the TD.
> > +* Device generates a completion event for the processed TD by updating ED.
> > +* Device increments the RP of the corresponding event ring.
> > +* Device triggers IRQ to wake up the host.
> > +* Host wakes up and checks the event ring for completion event.
> > +* Host updates the WP of the corresponding event ring to indicate that the
> > + data transfer has been completed successfully.
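Aside: the steps above can be walked end to end with a toy model. Everything here (the names, the counters, the direct function calls) is illustrative; real MHI communicates through shared-memory rings and doorbell registers rather than function calls.

```c
/* Toy end-to-end walk of the data transfer sequence: one TD travels
 * from the host's transfer ring to a completion event and back. */
struct toy_link {
	int tds_queued;     /* TDs waiting in the transfer ring */
	int events_pending; /* completion EDs waiting in the event ring */
	int completed;      /* transfers the host has acknowledged */
};

/* Host prepares a TD, increments the channel WP, rings the DB. */
static void host_queue_td(struct toy_link *l)
{
	l->tds_queued++;
}

/* Device wakes up, services the TDs, writes completion EDs,
 * increments the event ring RP and triggers the IRQ. */
static void device_service(struct toy_link *l)
{
	while (l->tds_queued > 0) {
		l->tds_queued--;
		l->events_pending++;
	}
}

/* Host wakes up, drains the event ring and updates its WP. */
static void host_irq_handler(struct toy_link *l)
{
	while (l->events_pending > 0) {
		l->events_pending--;
		l->completed++;
	}
}
```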
> > +
> > diff --git a/Documentation/mhi/topology.rst b/Documentation/mhi/topology.rst
> > new file mode 100644
> > index 000000000000..90d80a7f116d
> > --- /dev/null
> > +++ b/Documentation/mhi/topology.rst
> > @@ -0,0 +1,60 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +============
> > +MHI Topology
> > +============
> > +
> > +This document provides information about the MHI topology modeling and
> > +representation in the kernel.
> > +
> > +MHI Controller
> > +--------------
> > +
> > +The MHI controller driver manages the interaction with the MHI client devices
> > +such as the external modems and WiFi chipsets. It is also the MHI bus master
> > +which is in charge of managing the physical link between the host and device.
> > +It is however not involved in the actual data transfer as the data transfer
> > +is taken care of by the physical bus such as PCIe. Each controller driver exposes
> > +channels and events based on the client device type.
> > +
> > +Below are the roles of the MHI controller driver:
> > +
> > +* Turns on the physical bus and establishes the link to the device
> > +* Configures IRQs, SMMU, and IOMEM
>
> I'd recommend changing SMMU to IOMMU. SMMU tends to be an ARM specific
> term, while IOMMU is the generic term, in my experience.
>
Ack.
Thanks a lot for the review!
Regards,
Mani
> > +* Allocates struct mhi_controller and registers with the MHI bus framework
> > + with channel and event configurations using mhi_register_controller.
> > +* Initiates power on and shutdown sequence
> > +* Initiates suspend and resume power management operations of the device.
> > +
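To make the controller roles above concrete, registration might look roughly like the sketch below. This is illustrative only: every `my_`-prefixed symbol is a placeholder, and the mandatory callbacks mirror the checks in mhi_register_controller() from this series.

```c
#include <linux/mhi.h>

static struct mhi_channel_config my_channels[] = {
	{
		.num = 0,
		.name = "LOOPBACK",
		.num_elements = 64,
		.event_ring = 0,
		.dir = DMA_TO_DEVICE,
		.ee_mask = BIT(MHI_EE_AMSS),
		.data_type = MHI_BUF_RAW,
		.doorbell = MHI_DB_BRST_DISABLE,
	},
};

static struct mhi_event_config my_events[] = {
	{
		.num_elements = 32,
		.irq = 1,
		.channel = U32_MAX,	/* not dedicated to one channel */
		.mode = MHI_DB_BRST_DISABLE,
		.data_type = MHI_ER_CTRL,
		.priority = 1,
	},
};

static struct mhi_controller_config my_config = {
	.max_channels = 128,
	.timeout_ms = 2000,
	.num_channels = ARRAY_SIZE(my_channels),
	.ch_cfg = my_channels,
	.num_events = ARRAY_SIZE(my_events),
	.event_cfg = my_events,
};

static int my_register(struct device *dev)
{
	struct mhi_controller *mhi_cntrl;

	mhi_cntrl = kzalloc(sizeof(*mhi_cntrl), GFP_KERNEL);
	if (!mhi_cntrl)
		return -ENOMEM;

	mhi_cntrl->name = "my_modem";
	mhi_cntrl->dev = dev;
	/* These four callbacks are mandatory per mhi_register_controller() */
	mhi_cntrl->runtime_get = my_runtime_get;
	mhi_cntrl->runtime_put = my_runtime_put;
	mhi_cntrl->status_cb = my_status_cb;
	mhi_cntrl->link_status = my_link_status;

	return mhi_register_controller(mhi_cntrl, &my_config);
}
```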
> > +MHI Device
> > +----------
> > +
> > +An MHI device is the logical device which binds to a maximum of two MHI
> > +channels for bi-directional communication. Once MHI is in the powered-on
> > +state, the MHI core will create MHI devices based on the channel
> > +configuration exposed by the controller. There can be a single MHI device
> > +for each channel, or one device for a pair of channels.
> > +
> > +Each supported device is enumerated in::
> > +
> > + /sys/bus/mhi/devices/
> > +
> > +MHI Driver
> > +----------
> > +
> > +An MHI driver is a client driver which binds to one or more MHI devices.
> > +The MHI driver sends and receives upper-layer protocol packets such as IP
> > +packets, modem control messages, and diagnostics messages over MHI. The
> > +MHI core binds the MHI devices to the MHI drivers.
> > +
> > +Each supported driver is enumerated in::
> > +
> > + /sys/bus/mhi/drivers/
> > +
> > +Below are the roles of the MHI driver:
> > +
> > +* Registers the driver with the MHI bus framework using mhi_driver_register.
> > +* Prepares the device for transfer by calling mhi_prepare_for_transfer.
> > +* Initiates data transfer by calling mhi_queue_transfer.
> > +* Once the data transfer is finished, calls mhi_unprepare_from_transfer to
> > + end data transfer.
> >
>
>
> --
> Jeffrey Hugo
> Qualcomm Technologies, Inc. is a member of the
> Code Aurora Forum, a Linux Foundation Collaborative Project.
On 1/27/2020 4:56 AM, Manivannan Sadhasivam wrote:
> Hi Jeff,
>
> On Thu, Jan 23, 2020 at 10:05:50AM -0700, Jeffrey Hugo wrote:
>> On 1/23/2020 4:18 AM, Manivannan Sadhasivam wrote:
>>> This commit adds support for registering MHI controller drivers with
>>> the MHI stack. MHI controller drivers manage the interaction with the
>>> MHI client devices such as external modems and WiFi chipsets. They
>>> are also the MHI bus masters, in charge of managing the physical link
>>> between the host and client device.
>>>
>>> This is based on the patch submitted by Sujeev Dias:
>>> https://lkml.org/lkml/2018/7/9/987
>>>
>>> Signed-off-by: Sujeev Dias <[email protected]>
>>> Signed-off-by: Siddartha Mohanadoss <[email protected]>
>>> [jhugo: added static config for controllers and fixed several bugs]
>>> Signed-off-by: Jeffrey Hugo <[email protected]>
>>> [mani: removed DT dependency, split and cleaned up for upstream]
>>> Signed-off-by: Manivannan Sadhasivam <[email protected]>
>>> ---
>>> drivers/bus/Kconfig | 1 +
>>> drivers/bus/Makefile | 3 +
>>> drivers/bus/mhi/Kconfig | 14 +
>>> drivers/bus/mhi/Makefile | 2 +
>>> drivers/bus/mhi/core/Makefile | 3 +
>>> drivers/bus/mhi/core/init.c | 404 +++++++++++++++++++++++++++++
>>> drivers/bus/mhi/core/internal.h | 169 ++++++++++++
>>> include/linux/mhi.h | 438 ++++++++++++++++++++++++++++++++
>>> include/linux/mod_devicetable.h | 12 +
>>> 9 files changed, 1046 insertions(+)
>>> create mode 100644 drivers/bus/mhi/Kconfig
>>> create mode 100644 drivers/bus/mhi/Makefile
>>> create mode 100644 drivers/bus/mhi/core/Makefile
>>> create mode 100644 drivers/bus/mhi/core/init.c
>>> create mode 100644 drivers/bus/mhi/core/internal.h
>>> create mode 100644 include/linux/mhi.h
>>>
>>> diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
>>> index 50200d1c06ea..383934e54786 100644
>>> --- a/drivers/bus/Kconfig
>>> +++ b/drivers/bus/Kconfig
>>> @@ -202,5 +202,6 @@ config DA8XX_MSTPRI
>>> peripherals.
>>> source "drivers/bus/fsl-mc/Kconfig"
>>> +source "drivers/bus/mhi/Kconfig"
>>> endmenu
>>> diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
>>> index 1320bcf9fa9d..05f32cd694a4 100644
>>> --- a/drivers/bus/Makefile
>>> +++ b/drivers/bus/Makefile
>>> @@ -34,3 +34,6 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o
>>> obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
>>> obj-$(CONFIG_DA8XX_MSTPRI) += da8xx-mstpri.o
>>> +
>>> +# MHI
>>> +obj-$(CONFIG_MHI_BUS) += mhi/
>>> diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
>>> new file mode 100644
>>> index 000000000000..a8bd9bd7db7c
>>> --- /dev/null
>>> +++ b/drivers/bus/mhi/Kconfig
>>> @@ -0,0 +1,14 @@
>>> +# SPDX-License-Identifier: GPL-2.0
>>
>> first time I noticed this, although I suspect this will need to be corrected
>> "everywhere" -
>> Per the SPDX website, the "GPL-2.0" label is deprecated. It's replacement
>> is "GPL-2.0-only".
>> I think all instances should be updated to "GPL-2.0-only"
>>
>>> +#
>>> +# MHI bus
>>> +#
>>> +# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
>>> +#
>>> +
>>> +config MHI_BUS
>>> + tristate "Modem Host Interface (MHI) bus"
>>> + help
>>> + Bus driver for MHI protocol. Modem Host Interface (MHI) is a
>>> + communication protocol used by the host processors to control
>>> + and communicate with modem devices over a high speed peripheral
>>> + bus or shared memory.
>>> diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
>>> new file mode 100644
>>> index 000000000000..19e6443b72df
>>> --- /dev/null
>>> +++ b/drivers/bus/mhi/Makefile
>>> @@ -0,0 +1,2 @@
>>> +# core layer
>>> +obj-y += core/
>>> diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
>>> new file mode 100644
>>> index 000000000000..2db32697c67f
>>> --- /dev/null
>>> +++ b/drivers/bus/mhi/core/Makefile
>>> @@ -0,0 +1,3 @@
>>> +obj-$(CONFIG_MHI_BUS) := mhi.o
>>> +
>>> +mhi-y := init.o
>>> diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
>>> new file mode 100644
>>> index 000000000000..5b817ec250e0
>>> --- /dev/null
>>> +++ b/drivers/bus/mhi/core/init.c
>>> @@ -0,0 +1,404 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +/*
>>> + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
>>> + *
>>> + */
>>> +
>>> +#define dev_fmt(fmt) "MHI: " fmt
>>> +
>>> +#include <linux/device.h>
>>> +#include <linux/dma-direction.h>
>>> +#include <linux/dma-mapping.h>
>>> +#include <linux/interrupt.h>
>>> +#include <linux/list.h>
>>> +#include <linux/mhi.h>
>>> +#include <linux/mod_devicetable.h>
>>> +#include <linux/module.h>
>>> +#include <linux/slab.h>
>>> +#include <linux/vmalloc.h>
>>> +#include <linux/wait.h>
>>> +#include "internal.h"
>>> +
>>> +static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
>>> + struct mhi_controller_config *config)
>>> +{
>>> + int i, num;
>>> + struct mhi_event *mhi_event;
>>> + struct mhi_event_config *event_cfg;
>>> +
>>> + num = config->num_events;
>>> + mhi_cntrl->total_ev_rings = num;
>>> + mhi_cntrl->mhi_event = kcalloc(num, sizeof(*mhi_cntrl->mhi_event),
>>> + GFP_KERNEL);
>>> + if (!mhi_cntrl->mhi_event)
>>> + return -ENOMEM;
>>> +
>>> + /* Populate event ring */
>>> + mhi_event = mhi_cntrl->mhi_event;
>>> + for (i = 0; i < num; i++) {
>>> + event_cfg = &config->event_cfg[i];
>>> +
>>> + mhi_event->er_index = i;
>>> + mhi_event->ring.elements = event_cfg->num_elements;
>>> + mhi_event->intmod = event_cfg->irq_moderation_ms;
>>> + mhi_event->irq = event_cfg->irq;
>>> +
>>> + if (event_cfg->channel != U32_MAX) {
>>> + /* This event ring has a dedicated channel */
>>> + mhi_event->chan = event_cfg->channel;
>>> + if (mhi_event->chan >= mhi_cntrl->max_chan) {
>>> + dev_err(mhi_cntrl->dev,
>>> + "Event Ring channel not available\n");
>>> + goto error_ev_cfg;
>>> + }
>>> +
>>> + mhi_event->mhi_chan =
>>> + &mhi_cntrl->mhi_chan[mhi_event->chan];
>>> + }
>>> +
>>> + /* Priority is fixed to 1 for now */
>>> + mhi_event->priority = 1;
>>> +
>>> + mhi_event->db_cfg.brstmode = event_cfg->mode;
>>> + if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode))
>>> + goto error_ev_cfg;
>>> +
>>> + mhi_event->data_type = event_cfg->data_type;
>>> +
>>> + mhi_event->hw_ring = event_cfg->hardware_event;
>>> + if (mhi_event->hw_ring)
>>> + mhi_cntrl->hw_ev_rings++;
>>> + else
>>> + mhi_cntrl->sw_ev_rings++;
>>> +
>>> + mhi_event->cl_manage = event_cfg->client_managed;
>>> + mhi_event->offload_ev = event_cfg->offload_channel;
>>> + mhi_event++;
>>> + }
>>> +
>>> + /* We need IRQ for each event ring + additional one for BHI */
>>> + mhi_cntrl->nr_irqs_req = mhi_cntrl->total_ev_rings + 1;
>>> +
>>> + return 0;
>>> +
>>> +error_ev_cfg:
>>> +
>>> + kfree(mhi_cntrl->mhi_event);
>>> + return -EINVAL;
>>> +}
>>> +
>>> +static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
>>> + struct mhi_controller_config *config)
>>> +{
>>> + int i;
>>> + u32 chan;
>>> + struct mhi_channel_config *ch_cfg;
>>> +
>>> + mhi_cntrl->max_chan = config->max_channels;
>>> +
>>> + /*
>>> + * The allocation of MHI channels can exceed 32KB in some scenarios,
>>> + * so to avoid any possible memory allocation failures, vzalloc is
>>> + * used here
>>> + */
>>> + mhi_cntrl->mhi_chan = vzalloc(mhi_cntrl->max_chan *
>>> + sizeof(*mhi_cntrl->mhi_chan));
>>> + if (!mhi_cntrl->mhi_chan)
>>> + return -ENOMEM;
>>> +
>>> + INIT_LIST_HEAD(&mhi_cntrl->lpm_chans);
>>> +
>>> + /* Populate channel configurations */
>>> + for (i = 0; i < config->num_channels; i++) {
>>> + struct mhi_chan *mhi_chan;
>>> +
>>> + ch_cfg = &config->ch_cfg[i];
>>> +
>>> + chan = ch_cfg->num;
>>> + if (chan >= mhi_cntrl->max_chan) {
>>> + dev_err(mhi_cntrl->dev,
>>> + "Channel %d not available\n", chan);
>>> + goto error_chan_cfg;
>>> + }
>>> +
>>> + mhi_chan = &mhi_cntrl->mhi_chan[chan];
>>> + mhi_chan->name = ch_cfg->name;
>>> + mhi_chan->chan = chan;
>>> +
>>> + mhi_chan->tre_ring.elements = ch_cfg->num_elements;
>>> + if (!mhi_chan->tre_ring.elements)
>>> + goto error_chan_cfg;
>>> +
>>> + /*
>>> + * For some channels, the local ring length should be bigger than
>>> + * the transfer ring length due to internal logical channels in the
>>> + * device, so that the host can queue more buffers than the transfer
>>> + * ring length. For example, RSC channels should have a larger local
>>> + * channel length than transfer ring length.
>>> + */
>>> + mhi_chan->buf_ring.elements = ch_cfg->local_elements;
>>> + if (!mhi_chan->buf_ring.elements)
>>> + mhi_chan->buf_ring.elements = mhi_chan->tre_ring.elements;
>>> + mhi_chan->er_index = ch_cfg->event_ring;
>>> + mhi_chan->dir = ch_cfg->dir;
>>> +
>>> + /*
>>> + * For most channels, chtype is identical to the channel direction.
>>> + * So, if it is not defined, then assign the channel direction to
>>> + * chtype
>>> + */
>>> + mhi_chan->type = ch_cfg->type;
>>> + if (!mhi_chan->type)
>>> + mhi_chan->type = (enum mhi_ch_type)mhi_chan->dir;
>>> +
>>> + mhi_chan->ee_mask = ch_cfg->ee_mask;
>>> +
>>> + mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
>>> + mhi_chan->xfer_type = ch_cfg->data_type;
>>> +
>>> + mhi_chan->lpm_notify = ch_cfg->lpm_notify;
>>> + mhi_chan->offload_ch = ch_cfg->offload_channel;
>>> + mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch;
>>> + mhi_chan->pre_alloc = ch_cfg->auto_queue;
>>> + mhi_chan->auto_start = ch_cfg->auto_start;
>>> +
>>> + /*
>>> + * If MHI host allocates buffers, then the channel direction
>>> + * should be DMA_FROM_DEVICE and the buffer type should be
>>> + * MHI_BUF_RAW
>>> + */
>>> + if (mhi_chan->pre_alloc && (mhi_chan->dir != DMA_FROM_DEVICE ||
>>> + mhi_chan->xfer_type != MHI_BUF_RAW)) {
>>> + dev_err(mhi_cntrl->dev,
>>> + "Invalid channel configuration\n");
>>> + goto error_chan_cfg;
>>> + }
>>> +
>>> + /*
>>> + * Bi-directional and directionless channels must be
>>> + * offload channels
>>> + */
>>> + if ((mhi_chan->dir == DMA_BIDIRECTIONAL ||
>>> + mhi_chan->dir == DMA_NONE) && !mhi_chan->offload_ch) {
>>> + dev_err(mhi_cntrl->dev,
>>> + "Invalid channel configuration\n");
>>> + goto error_chan_cfg;
>>> + }
>>> +
>>> + if (!mhi_chan->offload_ch) {
>>> + mhi_chan->db_cfg.brstmode = ch_cfg->doorbell;
>>> + if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode)) {
>>> + dev_err(mhi_cntrl->dev,
>>> + "Invalid doorbell mode\n");
>>> + goto error_chan_cfg;
>>> + }
>>> + }
>>> +
>>> + mhi_chan->configured = true;
>>> +
>>> + if (mhi_chan->lpm_notify)
>>> + list_add_tail(&mhi_chan->node, &mhi_cntrl->lpm_chans);
>>> + }
>>> +
>>> + return 0;
>>> +
>>> +error_chan_cfg:
>>> + vfree(mhi_cntrl->mhi_chan);
>>> +
>>> + return -EINVAL;
>>> +}
>>> +
>>> +static int parse_config(struct mhi_controller *mhi_cntrl,
>>> + struct mhi_controller_config *config)
>>> +{
>>> + int ret;
>>> +
>>> + /* Parse MHI channel configuration */
>>> + ret = parse_ch_cfg(mhi_cntrl, config);
>>> + if (ret)
>>> + return ret;
>>> +
>>> + /* Parse MHI event configuration */
>>> + ret = parse_ev_cfg(mhi_cntrl, config);
>>> + if (ret)
>>> + goto error_ev_cfg;
>>> +
>>> + mhi_cntrl->timeout_ms = config->timeout_ms;
>>> + if (!mhi_cntrl->timeout_ms)
>>> + mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS;
>>> +
>>> + mhi_cntrl->bounce_buf = config->use_bounce_buf;
>>> + mhi_cntrl->buffer_len = config->buf_len;
>>> + if (!mhi_cntrl->buffer_len)
>>> + mhi_cntrl->buffer_len = MHI_MAX_MTU;
>>> +
>>> + return 0;
>>> +
>>> +error_ev_cfg:
>>> + vfree(mhi_cntrl->mhi_chan);
>>> +
>>> + return ret;
>>> +}
>>> +
>>> +int mhi_register_controller(struct mhi_controller *mhi_cntrl,
>>> + struct mhi_controller_config *config)
>>> +{
>>> + int ret;
>>> + int i;
>>> + struct mhi_event *mhi_event;
>>> + struct mhi_chan *mhi_chan;
>>> + struct mhi_cmd *mhi_cmd;
>>> + struct mhi_device *mhi_dev;
>>> +
>>> + if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put)
>>> + return -EINVAL;
>>> +
>>> + if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status)
>>> + return -EINVAL;
>>> +
>>> + ret = parse_config(mhi_cntrl, config);
>>> + if (ret)
>>> + return -EINVAL;
>>> +
>>> + mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS,
>>> + sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
>>> + if (!mhi_cntrl->mhi_cmd) {
>>> + ret = -ENOMEM;
>>> + goto error_alloc_cmd;
>>> + }
>>> +
>>> + INIT_LIST_HEAD(&mhi_cntrl->transition_list);
>>> + spin_lock_init(&mhi_cntrl->transition_lock);
>>> + spin_lock_init(&mhi_cntrl->wlock);
>>> + init_waitqueue_head(&mhi_cntrl->state_event);
>>> +
>>> + mhi_cmd = mhi_cntrl->mhi_cmd;
>>> + for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++)
>>> + spin_lock_init(&mhi_cmd->lock);
>>> +
>>> + mhi_event = mhi_cntrl->mhi_event;
>>> + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
>>> + /* Skip for offload events */
>>> + if (mhi_event->offload_ev)
>>> + continue;
>>> +
>>> + mhi_event->mhi_cntrl = mhi_cntrl;
>>> + spin_lock_init(&mhi_event->lock);
>>> + }
>>> +
>>> + mhi_chan = mhi_cntrl->mhi_chan;
>>> + for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
>>> + mutex_init(&mhi_chan->mutex);
>>> + init_completion(&mhi_chan->completion);
>>> + rwlock_init(&mhi_chan->lock);
>>> + }
>>> +
>>> + /* Register controller with MHI bus */
>>> + mhi_dev = mhi_alloc_device(mhi_cntrl);
>>> + if (IS_ERR(mhi_dev)) {
>>> + dev_err(mhi_cntrl->dev, "Failed to allocate device\n");
>>> + ret = PTR_ERR(mhi_dev);
>>> + goto error_alloc_dev;
>>> + }
>>> +
>>> + mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
>>> + mhi_dev->mhi_cntrl = mhi_cntrl;
>>> + dev_set_name(&mhi_dev->dev, "%s", mhi_cntrl->name);
>>> +
>>> + /* Init wakeup source */
>>> + device_init_wakeup(&mhi_dev->dev, true);
>>> +
>>> + ret = device_add(&mhi_dev->dev);
>>> + if (ret)
>>> + goto error_add_dev;
>>> +
>>> + mhi_cntrl->mhi_dev = mhi_dev;
>>> +
>>> + return 0;
>>> +
>>> +error_add_dev:
>>> + mhi_dealloc_device(mhi_cntrl, mhi_dev);
>>> +
>>> +error_alloc_dev:
>>> + kfree(mhi_cntrl->mhi_cmd);
>>> +
>>> +error_alloc_cmd:
>>> + vfree(mhi_cntrl->mhi_chan);
>>> + kfree(mhi_cntrl->mhi_event);
>>> +
>>> + return ret;
>>> +}
>>> +EXPORT_SYMBOL_GPL(mhi_register_controller);
>>> +
>>> +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
>>> +{
>>> + struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
>>> +
>>> + kfree(mhi_cntrl->mhi_cmd);
>>> + kfree(mhi_cntrl->mhi_event);
>>> + vfree(mhi_cntrl->mhi_chan);
>>> +
>>> + device_del(&mhi_dev->dev);
>>> + put_device(&mhi_dev->dev);
>>> +}
>>> +EXPORT_SYMBOL_GPL(mhi_unregister_controller);
>>> +
>>> +static void mhi_release_device(struct device *dev)
>>> +{
>>> + struct mhi_device *mhi_dev = to_mhi_device(dev);
>>> +
>>> + if (mhi_dev->ul_chan)
>>> + mhi_dev->ul_chan->mhi_dev = NULL;
>>> +
>>> + if (mhi_dev->dl_chan)
>>> + mhi_dev->dl_chan->mhi_dev = NULL;
>>> +
>>> + kfree(mhi_dev);
>>> +}
>>> +
>>> +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl)
>>> +{
>>> + struct mhi_device *mhi_dev;
>>> + struct device *dev;
>>> +
>>> + mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
>>> + if (!mhi_dev)
>>> + return ERR_PTR(-ENOMEM);
>>> +
>>> + dev = &mhi_dev->dev;
>>> + device_initialize(dev);
>>> + dev->bus = &mhi_bus_type;
>>> + dev->release = mhi_release_device;
>>> + dev->parent = mhi_cntrl->dev;
>>> + mhi_dev->mhi_cntrl = mhi_cntrl;
>>> + atomic_set(&mhi_dev->dev_wake, 0);
>>> +
>>> + return mhi_dev;
>>> +}
>>> +
>>> +static int mhi_match(struct device *dev, struct device_driver *drv)
>>> +{
>>> + return 0;
>>> +};
>>> +
>>> +struct bus_type mhi_bus_type = {
>>> + .name = "mhi",
>>> + .dev_name = "mhi",
>>> + .match = mhi_match,
>>> +};
>>> +
>>> +static int __init mhi_init(void)
>>> +{
>>> + return bus_register(&mhi_bus_type);
>>> +}
>>> +
>>> +static void __exit mhi_exit(void)
>>> +{
>>> + bus_unregister(&mhi_bus_type);
>>> +}
>>> +
>>> +postcore_initcall(mhi_init);
>>> +module_exit(mhi_exit);
>>> +
>>> +MODULE_LICENSE("GPL v2");
>>> +MODULE_DESCRIPTION("MHI Host Interface");
>>> diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
>>> new file mode 100644
>>> index 000000000000..21f686d3a140
>>> --- /dev/null
>>> +++ b/drivers/bus/mhi/core/internal.h
>>> @@ -0,0 +1,169 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
>>> + *
>>> + */
>>> +
>>> +#ifndef _MHI_INT_H
>>> +#define _MHI_INT_H
>>> +
>>> +extern struct bus_type mhi_bus_type;
>>> +
>>> +/* MHI transfer completion events */
>>> +enum mhi_ev_ccs {
>>> + MHI_EV_CC_INVALID = 0x0,
>>> + MHI_EV_CC_SUCCESS = 0x1,
>>> + MHI_EV_CC_EOT = 0x2,
>>> + MHI_EV_CC_OVERFLOW = 0x3,
>>> + MHI_EV_CC_EOB = 0x4,
>>> + MHI_EV_CC_OOB = 0x5,
>>> + MHI_EV_CC_DB_MODE = 0x6,
>>> + MHI_EV_CC_UNDEFINED_ERR = 0x10,
>>> + MHI_EV_CC_BAD_TRE = 0x11,
>>
>> Perhaps a quick comment expanding the "EOT", "EOB", "OOB" acronyms? I feel
>> like those might not be obvious to someone not familiar with the protocol.
>>
>
> Sure
>
>>> +};
>>> +
>>> +enum mhi_ch_state {
>>> + MHI_CH_STATE_DISABLED = 0x0,
>>> + MHI_CH_STATE_ENABLED = 0x1,
>>> + MHI_CH_STATE_RUNNING = 0x2,
>>> + MHI_CH_STATE_SUSPENDED = 0x3,
>>> + MHI_CH_STATE_STOP = 0x4,
>>> + MHI_CH_STATE_ERROR = 0x5,
>>> +};
>>> +
>>> +#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \
>>> + mode != MHI_DB_BRST_ENABLE)
>>> +
>>> +#define NR_OF_CMD_RINGS 1
>>> +#define CMD_EL_PER_RING 128
>>> +#define PRIMARY_CMD_RING 0
>>> +#define MHI_MAX_MTU 0xffff
>>> +
>>> +enum mhi_er_type {
>>> + MHI_ER_TYPE_INVALID = 0x0,
>>> + MHI_ER_TYPE_VALID = 0x1,
>>> +};
>>> +
>>> +enum mhi_ch_ee_mask {
>>> + MHI_CH_EE_PBL = BIT(MHI_EE_PBL),
>>
>> MHI_EE_PBL does not appear to be defined. Are you perhaps missing an
>> include?
>>
>
> It is defined in mhi.h as mhi_ee_type.
mhi.h isn't included here. You are relying on the users of this file to
have included that, in particular to have included it before this file.
That tends to result in really weird errors later on. It would be best
to include mhi.h here if you need these definitions.
Although, I suspect this struct should be moved out of internal.h and
into mhi.h since clients need to know this, so perhaps this issue is moot.
>
>>> + MHI_CH_EE_SBL = BIT(MHI_EE_SBL),
>>> + MHI_CH_EE_AMSS = BIT(MHI_EE_AMSS),
>>> + MHI_CH_EE_RDDM = BIT(MHI_EE_RDDM),
>>> + MHI_CH_EE_PTHRU = BIT(MHI_EE_PTHRU),
>>> + MHI_CH_EE_WFW = BIT(MHI_EE_WFW),
>>> + MHI_CH_EE_EDL = BIT(MHI_EE_EDL),
>>> +};
>>> +
>>> +struct db_cfg {
>>> + bool reset_req;
>>> + bool db_mode;
>>> + u32 pollcfg;
>>> + enum mhi_db_brst_mode brstmode;
>>> + dma_addr_t db_val;
>>> + void (*process_db)(struct mhi_controller *mhi_cntrl,
>>> + struct db_cfg *db_cfg, void __iomem *io_addr,
>>> + dma_addr_t db_val);
>>> +};
>>> +
>>> +struct mhi_ring {
>>> + dma_addr_t dma_handle;
>>> + dma_addr_t iommu_base;
>>> + u64 *ctxt_wp; /* point to ctxt wp */
>>> + void *pre_aligned;
>>> + void *base;
>>> + void *rp;
>>> + void *wp;
>>> + size_t el_size;
>>> + size_t len;
>>> + size_t elements;
>>> + size_t alloc_size;
>>> + void __iomem *db_addr;
>>> +};
>>> +
>>> +struct mhi_cmd {
>>> + struct mhi_ring ring;
>>> + spinlock_t lock;
>>> +};
>>> +
>>> +struct mhi_buf_info {
>>> + dma_addr_t p_addr;
>>> + void *v_addr;
>>> + void *bb_addr;
>>> + void *wp;
>>> + size_t len;
>>> + void *cb_buf;
>>> + enum dma_data_direction dir;
>>> +};
>>> +
>>> +struct mhi_event {
>>> + u32 er_index;
>>> + u32 intmod;
>>> + u32 irq;
>>> + int chan; /* this event ring is dedicated to a channel (optional) */
>>> + u32 priority;
>>> + enum mhi_er_data_type data_type;
>>> + struct mhi_ring ring;
>>> + struct db_cfg db_cfg;
>>> + bool hw_ring;
>>> + bool cl_manage;
>>> + bool offload_ev; /* managed by a device driver */
>>> + spinlock_t lock;
>>> + struct mhi_chan *mhi_chan; /* dedicated to channel */
>>> + struct tasklet_struct task;
>>> + int (*process_event)(struct mhi_controller *mhi_cntrl,
>>> + struct mhi_event *mhi_event,
>>> + u32 event_quota);
>>> + struct mhi_controller *mhi_cntrl;
>>> +};
>>> +
>>> +struct mhi_chan {
>>> + u32 chan;
>>> + const char *name;
>>> + /*
>>> + * Important: When consuming, increment tre_ring first and when
>>> + * releasing, decrement buf_ring first. If tre_ring has space, buf_ring
>>> + * is guaranteed to have space so we do not need to check both rings.
>>> + */
>>> + struct mhi_ring buf_ring;
>>> + struct mhi_ring tre_ring;
>>> + u32 er_index;
>>> + u32 intmod;
>>> + enum mhi_ch_type type;
>>> + enum dma_data_direction dir;
>>> + struct db_cfg db_cfg;
>>> + enum mhi_ch_ee_mask ee_mask;
>>> + enum mhi_buf_type xfer_type;
>>> + enum mhi_ch_state ch_state;
>>> + enum mhi_ev_ccs ccs;
>>> + bool lpm_notify;
>>> + bool configured;
>>> + bool offload_ch;
>>> + bool pre_alloc;
>>> + bool auto_start;
>>> + int (*gen_tre)(struct mhi_controller *mhi_cntrl,
>>> + struct mhi_chan *mhi_chan, void *buf, void *cb,
>>> + size_t len, enum mhi_flags flags);
>>> + int (*queue_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
>>> + void *buf, size_t len, enum mhi_flags mflags);
>>> + struct mhi_device *mhi_dev;
>>> + void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *result);
>>> + struct mutex mutex;
>>> + struct completion completion;
>>> + rwlock_t lock;
>>> + struct list_head node;
>>> +};
>>> +
>>> +/* Default MHI timeout */
>>> +#define MHI_TIMEOUT_MS (1000)
>>> +
>>> +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
>>> +static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
>>> + struct mhi_device *mhi_dev)
>>> +{
>>> + kfree(mhi_dev);
>>> +}
>>> +
>>> +int mhi_destroy_device(struct device *dev, void *data);
>>> +void mhi_create_devices(struct mhi_controller *mhi_cntrl);
>>> +
>>> +#endif /* _MHI_INT_H */
>>> diff --git a/include/linux/mhi.h b/include/linux/mhi.h
>>> new file mode 100644
>>> index 000000000000..69cf9a4b06c7
>>> --- /dev/null
>>> +++ b/include/linux/mhi.h
>>> @@ -0,0 +1,438 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/*
>>> + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
>>> + *
>>> + */
>>> +#ifndef _MHI_H_
>>> +#define _MHI_H_
>>> +
>>> +#include <linux/device.h>
>>> +#include <linux/dma-direction.h>
>>> +#include <linux/mutex.h>
>>> +#include <linux/rwlock_types.h>
>>> +#include <linux/slab.h>
>>> +#include <linux/spinlock_types.h>
>>> +#include <linux/wait.h>
>>> +#include <linux/workqueue.h>
>>> +
>>> +struct mhi_chan;
>>> +struct mhi_event;
>>> +struct mhi_ctxt;
>>> +struct mhi_cmd;
>>> +struct mhi_buf_info;
>>> +
>>> +/**
>>> + * enum mhi_callback - MHI callback
>>> + * @MHI_CB_IDLE: MHI entered idle state
>>> + * @MHI_CB_PENDING_DATA: New data available for client to process
>>> + * @MHI_CB_LPM_ENTER: MHI host entered low power mode
>>> + * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
>>> + * @MHI_CB_EE_RDDM: MHI device entered RDDM exec env
>>> + * @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env
>>> + * @MHI_CB_SYS_ERROR: MHI device entered error state (may recover)
>>> + * @MHI_CB_FATAL_ERROR: MHI device entered fatal error state
>>> + */
>>> +enum mhi_callback {
>>> + MHI_CB_IDLE,
>>> + MHI_CB_PENDING_DATA,
>>> + MHI_CB_LPM_ENTER,
>>> + MHI_CB_LPM_EXIT,
>>> + MHI_CB_EE_RDDM,
>>> + MHI_CB_EE_MISSION_MODE,
>>> + MHI_CB_SYS_ERROR,
>>> + MHI_CB_FATAL_ERROR,
>>> +};
>>> +
>>> +/**
>>> + * enum mhi_flags - Transfer flags
>>> + * @MHI_EOB: End of buffer for bulk transfer
>>> + * @MHI_EOT: End of transfer
>>> + * @MHI_CHAIN: Linked transfer
>>> + */
>>> +enum mhi_flags {
>>> + MHI_EOB,
>>> + MHI_EOT,
>>> + MHI_CHAIN,
>>> +};
>>> +
>>> +/**
>>> + * enum mhi_device_type - Device types
>>> + * @MHI_DEVICE_XFER: Handles data transfer
>>> + * @MHI_DEVICE_TIMESYNC: Use for timesync feature
>>> + * @MHI_DEVICE_CONTROLLER: Control device
>>> + */
>>> +enum mhi_device_type {
>>> + MHI_DEVICE_XFER,
>>> + MHI_DEVICE_TIMESYNC,
>>> + MHI_DEVICE_CONTROLLER,
>>> +};
>>> +
>>> +/**
>>> + * enum mhi_ch_type - Channel types
>>> + * @MHI_CH_TYPE_INVALID: Invalid channel type
>>> + * @MHI_CH_TYPE_OUTBOUND: Outbound channel to the device
>>> + * @MHI_CH_TYPE_INBOUND: Inbound channel from the device
>>> + * @MHI_CH_TYPE_INBOUND_COALESCED: Coalesced channel for the device to combine
>>> + * multiple packets and send them as a single
>>> + * large packet to reduce CPU consumption
>>> + */
>>> +enum mhi_ch_type {
>>> + MHI_CH_TYPE_INVALID = 0,
>>> + MHI_CH_TYPE_OUTBOUND = DMA_TO_DEVICE,
>>> + MHI_CH_TYPE_INBOUND = DMA_FROM_DEVICE,
>>> + MHI_CH_TYPE_INBOUND_COALESCED = 3,
>>> +};
>>> +
>>> +/**
>>> + * enum mhi_ee_type - Execution environment types
>>> + * @MHI_EE_PBL: Primary Bootloader
>>> + * @MHI_EE_SBL: Secondary Bootloader
>>> + * @MHI_EE_AMSS: Modem, aka the primary runtime EE
>>> + * @MHI_EE_RDDM: RAM dump download mode
>>> + * @MHI_EE_WFW: WLAN firmware mode
>>> + * @MHI_EE_PTHRU: Passthrough
>>> + * @MHI_EE_EDL: Embedded downloader
>>> + */
>>> +enum mhi_ee_type {
>>> + MHI_EE_PBL,
>>> + MHI_EE_SBL,
>>> + MHI_EE_AMSS,
>>> + MHI_EE_RDDM,
>>> + MHI_EE_WFW,
>>> + MHI_EE_PTHRU,
>>> + MHI_EE_EDL,
>>> + MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
>>> + MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
>>> + MHI_EE_NOT_SUPPORTED,
>>> + MHI_EE_MAX,
>>> +};
>>> +
>>> +/**
>>> + * enum mhi_buf_type - Accepted buffer type for the channel
>>> + * @MHI_BUF_RAW: Raw buffer
>>> + * @MHI_BUF_SKB: SKB struct
>>> + * @MHI_BUF_SCLIST: Scatter-gather list
>>> + * @MHI_BUF_NOP: CPU offload channel, host does not accept transfer
>>> + * @MHI_BUF_DMA: Receive DMA address mapped by client
>>> + * @MHI_BUF_RSC_DMA: RSC type premapped buffer
>>
>> Maybe its just me, but what is "RSC"?
>>
>
> Reserved Side Coalesced. I thought I mentioned it somewhere but not...Will do.
>
>>> + */
>>> +enum mhi_buf_type {
>>> + MHI_BUF_RAW,
>>> + MHI_BUF_SKB,
>>> + MHI_BUF_SCLIST,
>>> + MHI_BUF_NOP,
>>> + MHI_BUF_DMA,
>>> + MHI_BUF_RSC_DMA,
>>> +};
>>> +
>>> +/**
>>> + * enum mhi_er_data_type - Event ring data types
>>> + * @MHI_ER_DATA: Only client data over this ring
>>> + * @MHI_ER_CTRL: MHI control data and client data
>>> + * @MHI_ER_TSYNC: Time sync events
>>> + */
>>> +enum mhi_er_data_type {
>>> + MHI_ER_DATA,
>>> + MHI_ER_CTRL,
>>> + MHI_ER_TSYNC,
>>> +};
>>> +
>>> +/**
>>> + * enum mhi_db_brst_mode - Doorbell mode
>>> + * @MHI_DB_BRST_DISABLE: Burst mode disable
>>> + * @MHI_DB_BRST_ENABLE: Burst mode enable
>>> + */
>>> +enum mhi_db_brst_mode {
>>> + MHI_DB_BRST_DISABLE = 0x2,
>>> + MHI_DB_BRST_ENABLE = 0x3,
>>> +};
>>> +
>>> +/**
>>> + * struct mhi_channel_config - Channel configuration structure for controller
>>> + * @num: The number assigned to this channel
>>> + * @name: The name of this channel
>>> + * @num_elements: The number of elements that can be queued to this channel
>>> + * @local_elements: The local ring length of the channel
>>> + * @event_ring: The event ring index that services this channel
>>> + * @dir: Direction that data may flow on this channel
>>> + * @type: Channel type
>>> + * @ee_mask: Execution Environment mask for this channel
>>
>> But the mask defines are in internal.h, so how is a client suposed to know
>> what they are?
>>
>
> Again, I missed the whole change addressing your internal review (It is one
> them). I defined the masks in mhi.h. Will add it in next iteration.
>
>>> + * @pollcfg: Polling configuration for burst mode. 0 is default. Milliseconds
>>> + *           for UL channels, multiples of 8 ring elements for DL channels
>>> + * @data_type: Data type accepted by this channel
>>> + * @doorbell: Doorbell mode
>>> + * @lpm_notify: The channel master requires low power mode notifications
>>> + * @offload_channel: The client manages the channel completely
>>> + * @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition
>>> + * @auto_queue: Framework will automatically queue buffers for DL traffic
>>> + * @auto_start: Automatically start (open) this channel
>>> + */
>>> +struct mhi_channel_config {
>>> + u32 num;
>>> + char *name;
>>> + u32 num_elements;
>>> + u32 local_elements;
>>> + u32 event_ring;
>>> + enum dma_data_direction dir;
>>> + enum mhi_ch_type type;
>>> + u32 ee_mask;
>>> + u32 pollcfg;
>>> + enum mhi_buf_type data_type;
>>> + enum mhi_db_brst_mode doorbell;
>>> + bool lpm_notify;
>>> + bool offload_channel;
>>> + bool doorbell_mode_switch;
>>> + bool auto_queue;
>>> + bool auto_start;
>>> +};
>>> +
>>> +/**
>>> + * struct mhi_event_config - Event ring configuration structure for controller
>>> + * @num_elements: The number of elements that can be queued to this ring
>>> + * @irq_moderation_ms: Delay irq for additional events to be aggregated
>>> + * @irq: IRQ associated with this ring
>>> + * @channel: Dedicated channel number. U32_MAX indicates a non-dedicated ring
>>> + * @mode: Doorbell mode
>>> + * @data_type: Type of data this ring will process
>>> + * @hardware_event: This ring is associated with hardware channels
>>> + * @client_managed: This ring is client managed
>>> + * @offload_channel: This ring is associated with an offloaded channel
>>> + * @priority: Priority of this ring. Use 1 for now
>>> + */
>>> +struct mhi_event_config {
>>> + u32 num_elements;
>>> + u32 irq_moderation_ms;
>>> + u32 irq;
>>> + u32 channel;
>>> + enum mhi_db_brst_mode mode;
>>> + enum mhi_er_data_type data_type;
>>> + bool hardware_event;
>>> + bool client_managed;
>>> + bool offload_channel;
>>> + u32 priority;
>>> +};
>>> +
>>> +/**
>>> + * struct mhi_controller_config - Root MHI controller configuration
>>> + * @max_channels: Maximum number of channels supported
>>> + * @timeout_ms: Timeout value for operations. 0 means use default
>>> + * @use_bounce_buf: Use a bounce buffer pool due to limited DDR access
>>> + * @m2_no_db: Host is not allowed to ring DB in M2 state
>>> + * @buf_len: Size of automatically allocated buffers. 0 means use default
>>> + * @num_channels: Number of channels defined in @ch_cfg
>>> + * @ch_cfg: Array of defined channels
>>> + * @num_events: Number of event rings defined in @event_cfg
>>> + * @event_cfg: Array of defined event rings
>>> + */
>>> +struct mhi_controller_config {
>>> + u32 max_channels;
>>> + u32 timeout_ms;
>>> + bool use_bounce_buf;
>>> + bool m2_no_db;
>>> + u32 buf_len;
>>> + u32 num_channels;
>>> + struct mhi_channel_config *ch_cfg;
>>> + u32 num_events;
>>> + struct mhi_event_config *event_cfg;
>>> +};
>>> +
>>> +/**
>>> + * struct mhi_controller - Master MHI controller structure
>>> + * @name: Name of the controller
>>> + * @dev: Driver model device node for the controller
>>> + * @mhi_dev: MHI device instance for the controller
>>> + * @dev_id: Device ID of the controller
>>> + * @bus_id: Physical bus instance used by the controller
>>> + * @regs: Base address of MHI MMIO register space
>>> + * @iova_start: IOMMU starting address for data
>>> + * @iova_stop: IOMMU stop address for data
>>> + * @fw_image: Firmware image name for normal booting
>>> + * @edl_image: Firmware image name for emergency download mode
>>> + * @fbc_download: MHI host needs to do complete image transfer
>>> + * @sbl_size: SBL image size
>>> + * @seg_len: BHIe vector size
>>> + * @max_chan: Maximum number of channels the controller supports
>>> + * @mhi_chan: Points to the channel configuration table
>>> + * @lpm_chans: List of channels that require LPM notifications
>>> + * @total_ev_rings: Total # of event rings allocated
>>> + * @hw_ev_rings: Number of hardware event rings
>>> + * @sw_ev_rings: Number of software event rings
>>> + * @nr_irqs_req: Number of IRQs required to operate
>>> + * @nr_irqs: Number of IRQs allocated by the bus master
>>> + * @irq: Base IRQ number to request
>>> + * @mhi_event: MHI event ring configurations table
>>> + * @mhi_cmd: MHI command ring configurations table
>>> + * @mhi_ctxt: MHI device context, shared memory between host and device
>>> + * @timeout_ms: Timeout in ms for state transitions
>>> + * @pm_mutex: Mutex for suspend/resume operation
>>> + * @pre_init: MHI host needs to do pre-initialization before power up
>>> + * @pm_lock: Lock for protecting MHI power management state
>>> + * @pm_state: MHI power management state
>>> + * @db_access: DB access states
>>> + * @ee: MHI device execution environment
>>> + * @wake_set: Device wakeup set flag
>>> + * @dev_wake: Device wakeup count
>>> + * @alloc_size: Total memory allocation size of the controller
>>> + * @pending_pkts: Pending packets for the controller
>>> + * @transition_list: List of MHI state transitions
>>> + * @transition_lock: Lock for protecting the state transition list
>>> + * @wlock: Lock for protecting device wakeup
>>> + * @M0: M0 state counter for debugging
>>> + * @M2: M2 state counter for debugging
>>> + * @M3: M3 state counter for debugging
>>> + * @M3_FAST: M3 Fast state counter for debugging
>>> + * @st_worker: State transition worker
>>> + * @fw_worker: Firmware download worker
>>> + * @syserr_worker: System error worker
>>> + * @state_event: State change event
>>> + * @status_cb: CB function to notify various power states to bus master
>>> + * @link_status: CB function to query link status of the device
>>> + * @wake_get: CB function to assert device wake
>>> + * @wake_put: CB function to de-assert device wake
>>> + * @wake_toggle: CB function to assert and de-assert (toggle) device wake
>>> + * @runtime_get: CB function to runtime resume the controller
>>> + * @runtime_put: CB function to decrement the PM usage count
>>> + * @lpm_disable: CB function to request disabling of link-level low power modes
>>> + * @lpm_enable: CB function to request re-enabling of link-level low power modes
>>> + * @bounce_buf: Use of bounce buffer
>>> + * @buffer_len: Bounce buffer length
>>> + * @priv_data: Points to bus master's private data
>>> + */
>>> +struct mhi_controller {
>>> + const char *name;
>>> + struct device *dev;
>>> + struct mhi_device *mhi_dev;
>>> + u32 dev_id;
>>> + u32 bus_id;
>>> + void __iomem *regs;
>>> + dma_addr_t iova_start;
>>> + dma_addr_t iova_stop;
>>> + const char *fw_image;
>>> + const char *edl_image;
>>> + bool fbc_download;
>>> + size_t sbl_size;
>>> + size_t seg_len;
>>> + u32 max_chan;
>>> + struct mhi_chan *mhi_chan;
>>> + struct list_head lpm_chans;
>>> + u32 total_ev_rings;
>>> + u32 hw_ev_rings;
>>> + u32 sw_ev_rings;
>>> + u32 nr_irqs_req;
>>> + u32 nr_irqs;
>>> + int *irq;
>>> +
>>> + struct mhi_event *mhi_event;
>>> + struct mhi_cmd *mhi_cmd;
>>> + struct mhi_ctxt *mhi_ctxt;
>>> +
>>> + u32 timeout_ms;
>>> + struct mutex pm_mutex;
>>> + bool pre_init;
>>> + rwlock_t pm_lock;
>>> + u32 pm_state;
>>> + u32 db_access;
>>> + enum mhi_ee_type ee;
>>> + bool wake_set;
>>> + atomic_t dev_wake;
>>> + atomic_t alloc_size;
>>> + atomic_t pending_pkts;
>>> + struct list_head transition_list;
>>> + spinlock_t transition_lock;
>>> + spinlock_t wlock;
>>> + u32 M0, M2, M3, M3_FAST;
>>> + struct work_struct st_worker;
>>> + struct work_struct fw_worker;
>>> + struct work_struct syserr_worker;
>>> + wait_queue_head_t state_event;
>>> +
>>> + void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
>>> + enum mhi_callback cb);
>>> + int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
>>> + void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
>>> + void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
>>> + void (*wake_toggle)(struct mhi_controller *mhi_cntrl);
>>> + int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
>>> + void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
>>> + void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
>>> + void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
>>> +
>>> + bool bounce_buf;
>>> + size_t buffer_len;
>>> + void *priv_data;
>>> +};
>>> +
>>> +/**
>>> + * struct mhi_device - Structure representing an MHI device which binds
>>> + * to channels
>>> + * @dev: Driver model device node for the MHI device
>>> + * @tiocm: Device current terminal settings
>>> + * @id: Pointer to MHI device ID struct
>>> + * @chan_name: Name of the channel to which the device binds
>>> + * @mhi_cntrl: Controller the device belongs to
>>> + * @ul_chan: UL channel for the device
>>> + * @dl_chan: DL channel for the device
>>> + * @dev_wake: Device wakeup counter
>>> + * @dev_type: MHI device type
>>> + */
>>> +struct mhi_device {
>>> + struct device dev;
>>> + u32 tiocm;
>>> + const struct mhi_device_id *id;
>>> + const char *chan_name;
>>> + struct mhi_controller *mhi_cntrl;
>>> + struct mhi_chan *ul_chan;
>>> + struct mhi_chan *dl_chan;
>>> + atomic_t dev_wake;
>>> + enum mhi_device_type dev_type;
>>> +};
>>> +
>>> +/**
>>> + * struct mhi_result - Completed buffer information
>>> + * @buf_addr: Address of data buffer
>>> + * @dir: Channel direction
>>> + * @bytes_xfer: # of bytes transferred
>>> + * @transaction_status: Status of last transaction
>>> + */
>>> +struct mhi_result {
>>> + void *buf_addr;
>>> + enum dma_data_direction dir;
>>> + size_t bytes_xferd;
>>
>> Description says this is named "bytes_xfer"
>>
>
> Ah yes typo, it is bytes_xferd only. Will fix it.
>
> Thanks,
> Mani
>
>>> + int transaction_status;
>>> +};
>>> +
>>> +#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev)
>>> +
>>> +/**
>>> + * mhi_controller_set_devdata - Set MHI controller private data
>>> + * @mhi_cntrl: MHI controller to set data
>>> + */
>>> +static inline void mhi_controller_set_devdata(struct mhi_controller *mhi_cntrl,
>>> + void *priv)
>>> +{
>>> + mhi_cntrl->priv_data = priv;
>>> +}
>>> +
>>> +/**
>>> + * mhi_controller_get_devdata - Get MHI controller private data
>>> + * @mhi_cntrl: MHI controller to get data
>>> + */
>>> +static inline void *mhi_controller_get_devdata(struct mhi_controller *mhi_cntrl)
>>> +{
>>> + return mhi_cntrl->priv_data;
>>> +}
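[Editor's note] The accessor pair above can be exercised in isolation. The following is a minimal userspace sketch, not the real kernel API: `mock_mhi_controller`, `mock_set_devdata()`, `mock_get_devdata()` and `mock_pci_context` are hypothetical stand-ins that only mirror the `priv_data` handling shown in the patch.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical cut-down stand-in for struct mhi_controller: only the
 * priv_data member matters for these accessors. */
struct mock_mhi_controller {
	void *priv_data;
};

/* Mirrors mhi_controller_set_devdata() from the patch above. */
static inline void mock_set_devdata(struct mock_mhi_controller *mhi_cntrl,
				    void *priv)
{
	mhi_cntrl->priv_data = priv;
}

/* Mirrors mhi_controller_get_devdata() from the patch above. */
static inline void *mock_get_devdata(struct mock_mhi_controller *mhi_cntrl)
{
	return mhi_cntrl->priv_data;
}

/* A bus master (e.g. a PCI glue driver) would typically stash its own
 * context struct here; this one is invented for the example. */
struct mock_pci_context {
	int irq;
};
```

A bus master sets its context once at probe time and retrieves it later from any callback that receives the controller pointer.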
>>> +
>>> +/**
>>> + * mhi_register_controller - Register MHI controller
>>> + * @mhi_cntrl: MHI controller to register
>>> + * @config: Configuration to use for the controller
>>> + */
>>> +int mhi_register_controller(struct mhi_controller *mhi_cntrl,
>>> + struct mhi_controller_config *config);
>>> +
>>> +/**
>>> + * mhi_unregister_controller - Unregister MHI controller
>>> + * @mhi_cntrl: MHI controller to unregister
>>> + */
>>> +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl);
>>> +
>>> +#endif /* _MHI_H_ */
>>> diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
>>> index e3596db077dc..be15e997fe39 100644
>>> --- a/include/linux/mod_devicetable.h
>>> +++ b/include/linux/mod_devicetable.h
>>> @@ -821,4 +821,16 @@ struct wmi_device_id {
>>> const void *context;
>>> };
>>> +#define MHI_NAME_SIZE 32
>>> +
>>> +/**
>>> + * struct mhi_device_id - MHI device identification
>>> + * @chan: MHI channel name
>>> + * @driver_data: Driver data
>>> + */
>>> +struct mhi_device_id {
>>> + const char chan[MHI_NAME_SIZE];
>>> + kernel_ulong_t driver_data;
>>> +};
>>> +
>>> #endif /* LINUX_MOD_DEVICETABLE_H */
>>>
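[Editor's note] The `mhi_device_id` table above is what the bus `->match()` callback (still a stub returning 0 in this patch) would presumably compare against a device's channel name. A hypothetical userspace sketch of that matching contract, with mocked-up types rather than the kernel ones:

```c
#include <assert.h>
#include <string.h>

#define MOCK_MHI_NAME_SIZE 32

/* Cut-down stand-in mirroring the struct mhi_device_id addition to
 * mod_devicetable.h above. */
struct mock_mhi_device_id {
	const char chan[MOCK_MHI_NAME_SIZE];
	unsigned long driver_data;
};

/* Walk an ID table terminated by an empty channel name and return the
 * entry whose channel name matches, as a bus ->match() might do. */
static const struct mock_mhi_device_id *
mock_mhi_match(const struct mock_mhi_device_id *table, const char *chan_name)
{
	for (; table->chan[0]; table++)
		if (!strncmp(table->chan, chan_name, MOCK_MHI_NAME_SIZE))
			return table;
	return NULL;
}
```

This is also the shape the modalias support mentioned in the cover letter relies on: udev can autoload a client driver from the channel name alone.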
>>
>>
>> --
>> Jeffrey Hugo
>> Qualcomm Technologies, Inc. is a member of the
>> Code Aurora Forum, a Linux Foundation Collaborative Project.
On Mon, Jan 27, 2020 at 07:52:13AM -0700, Jeffrey Hugo wrote:
> On 1/27/2020 4:56 AM, Manivannan Sadhasivam wrote:
> > Hi Jeff,
> >
> > On Thu, Jan 23, 2020 at 10:05:50AM -0700, Jeffrey Hugo wrote:
> > > On 1/23/2020 4:18 AM, Manivannan Sadhasivam wrote:
> > > > This commit adds support for registering MHI controller drivers with
> > > > the MHI stack. MHI controller drivers manage the interaction with the
> > > > MHI client devices such as the external modems and WiFi chipsets. They
> > > > are also the MHI bus master in charge of managing the physical link
> > > > between the host and client device.
> > > >
> > > > This is based on the patch submitted by Sujeev Dias:
> > > > https://lkml.org/lkml/2018/7/9/987
> > > >
> > > > Signed-off-by: Sujeev Dias <[email protected]>
> > > > Signed-off-by: Siddartha Mohanadoss <[email protected]>
> > > > [jhugo: added static config for controllers and fixed several bugs]
> > > > Signed-off-by: Jeffrey Hugo <[email protected]>
> > > > [mani: removed DT dependency, split and cleaned up for upstream]
> > > > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > > > ---
> > > > drivers/bus/Kconfig | 1 +
> > > > drivers/bus/Makefile | 3 +
> > > > drivers/bus/mhi/Kconfig | 14 +
> > > > drivers/bus/mhi/Makefile | 2 +
> > > > drivers/bus/mhi/core/Makefile | 3 +
> > > > drivers/bus/mhi/core/init.c | 404 +++++++++++++++++++++++++++++
> > > > drivers/bus/mhi/core/internal.h | 169 ++++++++++++
> > > > include/linux/mhi.h | 438 ++++++++++++++++++++++++++++++++
> > > > include/linux/mod_devicetable.h | 12 +
> > > > 9 files changed, 1046 insertions(+)
> > > > create mode 100644 drivers/bus/mhi/Kconfig
> > > > create mode 100644 drivers/bus/mhi/Makefile
> > > > create mode 100644 drivers/bus/mhi/core/Makefile
> > > > create mode 100644 drivers/bus/mhi/core/init.c
> > > > create mode 100644 drivers/bus/mhi/core/internal.h
> > > > create mode 100644 include/linux/mhi.h
> > > >
> > > > diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
> > > > index 50200d1c06ea..383934e54786 100644
> > > > --- a/drivers/bus/Kconfig
> > > > +++ b/drivers/bus/Kconfig
> > > > @@ -202,5 +202,6 @@ config DA8XX_MSTPRI
> > > > peripherals.
> > > > source "drivers/bus/fsl-mc/Kconfig"
> > > > +source "drivers/bus/mhi/Kconfig"
> > > > endmenu
> > > > diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
> > > > index 1320bcf9fa9d..05f32cd694a4 100644
> > > > --- a/drivers/bus/Makefile
> > > > +++ b/drivers/bus/Makefile
> > > > @@ -34,3 +34,6 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS) += uniphier-system-bus.o
> > > > obj-$(CONFIG_VEXPRESS_CONFIG) += vexpress-config.o
> > > > obj-$(CONFIG_DA8XX_MSTPRI) += da8xx-mstpri.o
> > > > +
> > > > +# MHI
> > > > +obj-$(CONFIG_MHI_BUS) += mhi/
> > > > diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
> > > > new file mode 100644
> > > > index 000000000000..a8bd9bd7db7c
> > > > --- /dev/null
> > > > +++ b/drivers/bus/mhi/Kconfig
> > > > @@ -0,0 +1,14 @@
> > > > +# SPDX-License-Identifier: GPL-2.0
> > >
> > > first time I noticed this, although I suspect this will need to be corrected
> > > "everywhere" -
> > > Per the SPDX website, the "GPL-2.0" label is deprecated. It's replacement
> > > is "GPL-2.0-only".
> > > I think all instances should be updated to "GPL-2.0-only"
> > >
> > > > +#
> > > > +# MHI bus
> > > > +#
> > > > +# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> > > > +#
> > > > +
> > > > +config MHI_BUS
> > > > + tristate "Modem Host Interface (MHI) bus"
> > > > + help
> > > > + Bus driver for MHI protocol. Modem Host Interface (MHI) is a
> > > > + communication protocol used by the host processors to control
> > > > + and communicate with modem devices over a high speed peripheral
> > > > + bus or shared memory.
> > > > diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
> > > > new file mode 100644
> > > > index 000000000000..19e6443b72df
> > > > --- /dev/null
> > > > +++ b/drivers/bus/mhi/Makefile
> > > > @@ -0,0 +1,2 @@
> > > > +# core layer
> > > > +obj-y += core/
> > > > diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
> > > > new file mode 100644
> > > > index 000000000000..2db32697c67f
> > > > --- /dev/null
> > > > +++ b/drivers/bus/mhi/core/Makefile
> > > > @@ -0,0 +1,3 @@
> > > > +obj-$(CONFIG_MHI_BUS) := mhi.o
> > > > +
> > > > +mhi-y := init.o
> > > > diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
> > > > new file mode 100644
> > > > index 000000000000..5b817ec250e0
> > > > --- /dev/null
> > > > +++ b/drivers/bus/mhi/core/init.c
> > > > @@ -0,0 +1,404 @@
> > > > +// SPDX-License-Identifier: GPL-2.0
> > > > +/*
> > > > + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> > > > + *
> > > > + */
> > > > +
> > > > +#define dev_fmt(fmt) "MHI: " fmt
> > > > +
> > > > +#include <linux/device.h>
> > > > +#include <linux/dma-direction.h>
> > > > +#include <linux/dma-mapping.h>
> > > > +#include <linux/interrupt.h>
> > > > +#include <linux/list.h>
> > > > +#include <linux/mhi.h>
> > > > +#include <linux/mod_devicetable.h>
> > > > +#include <linux/module.h>
> > > > +#include <linux/slab.h>
> > > > +#include <linux/vmalloc.h>
> > > > +#include <linux/wait.h>
> > > > +#include "internal.h"
> > > > +
> > > > +static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
> > > > + struct mhi_controller_config *config)
> > > > +{
> > > > + int i, num;
> > > > + struct mhi_event *mhi_event;
> > > > + struct mhi_event_config *event_cfg;
> > > > +
> > > > + num = config->num_events;
> > > > + mhi_cntrl->total_ev_rings = num;
> > > > + mhi_cntrl->mhi_event = kcalloc(num, sizeof(*mhi_cntrl->mhi_event),
> > > > + GFP_KERNEL);
> > > > + if (!mhi_cntrl->mhi_event)
> > > > + return -ENOMEM;
> > > > +
> > > > + /* Populate event ring */
> > > > + mhi_event = mhi_cntrl->mhi_event;
> > > > + for (i = 0; i < num; i++) {
> > > > + event_cfg = &config->event_cfg[i];
> > > > +
> > > > + mhi_event->er_index = i;
> > > > + mhi_event->ring.elements = event_cfg->num_elements;
> > > > + mhi_event->intmod = event_cfg->irq_moderation_ms;
> > > > + mhi_event->irq = event_cfg->irq;
> > > > +
> > > > + if (event_cfg->channel != U32_MAX) {
> > > > + /* This event ring has a dedicated channel */
> > > > + mhi_event->chan = event_cfg->channel;
> > > > + if (mhi_event->chan >= mhi_cntrl->max_chan) {
> > > > + dev_err(mhi_cntrl->dev,
> > > > + "Event Ring channel not available\n");
> > > > + goto error_ev_cfg;
> > > > + }
> > > > +
> > > > + mhi_event->mhi_chan =
> > > > + &mhi_cntrl->mhi_chan[mhi_event->chan];
> > > > + }
> > > > +
> > > > + /* Priority is fixed to 1 for now */
> > > > + mhi_event->priority = 1;
> > > > +
> > > > + mhi_event->db_cfg.brstmode = event_cfg->mode;
> > > > + if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode))
> > > > + goto error_ev_cfg;
> > > > +
> > > > + mhi_event->data_type = event_cfg->data_type;
> > > > +
> > > > + mhi_event->hw_ring = event_cfg->hardware_event;
> > > > + if (mhi_event->hw_ring)
> > > > + mhi_cntrl->hw_ev_rings++;
> > > > + else
> > > > + mhi_cntrl->sw_ev_rings++;
> > > > +
> > > > + mhi_event->cl_manage = event_cfg->client_managed;
> > > > + mhi_event->offload_ev = event_cfg->offload_channel;
> > > > + mhi_event++;
> > > > + }
> > > > +
> > > > + /* We need IRQ for each event ring + additional one for BHI */
> > > > + mhi_cntrl->nr_irqs_req = mhi_cntrl->total_ev_rings + 1;
> > > > +
> > > > + return 0;
> > > > +
> > > > +error_ev_cfg:
> > > > +
> > > > + kfree(mhi_cntrl->mhi_event);
> > > > + return -EINVAL;
> > > > +}
> > > > +
> > > > +static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
> > > > + struct mhi_controller_config *config)
> > > > +{
> > > > + int i;
> > > > + u32 chan;
> > > > + struct mhi_channel_config *ch_cfg;
> > > > +
> > > > + mhi_cntrl->max_chan = config->max_channels;
> > > > +
> > > > + /*
> > > > + * The allocation of MHI channels can exceed 32KB in some scenarios,
> > > > + * so to avoid any possible memory allocation failures, vzalloc is
> > > > + * used here
> > > > + */
> > > > + mhi_cntrl->mhi_chan = vzalloc(mhi_cntrl->max_chan *
> > > > + sizeof(*mhi_cntrl->mhi_chan));
> > > > + if (!mhi_cntrl->mhi_chan)
> > > > + return -ENOMEM;
> > > > +
> > > > + INIT_LIST_HEAD(&mhi_cntrl->lpm_chans);
> > > > +
> > > > + /* Populate channel configurations */
> > > > + for (i = 0; i < config->num_channels; i++) {
> > > > + struct mhi_chan *mhi_chan;
> > > > +
> > > > + ch_cfg = &config->ch_cfg[i];
> > > > +
> > > > + chan = ch_cfg->num;
> > > > + if (chan >= mhi_cntrl->max_chan) {
> > > > + dev_err(mhi_cntrl->dev,
> > > > + "Channel %d not available\n", chan);
> > > > + goto error_chan_cfg;
> > > > + }
> > > > +
> > > > + mhi_chan = &mhi_cntrl->mhi_chan[chan];
> > > > + mhi_chan->name = ch_cfg->name;
> > > > + mhi_chan->chan = chan;
> > > > +
> > > > + mhi_chan->tre_ring.elements = ch_cfg->num_elements;
> > > > + if (!mhi_chan->tre_ring.elements)
> > > > + goto error_chan_cfg;
> > > > +
> > > > + /*
> > > > + * For some channels, the local ring length should be bigger
> > > > + * than the transfer ring length due to internal logical
> > > > + * channels in the device, so that the host can queue more
> > > > + * buffers than the transfer ring length. For example, RSC
> > > > + * channels should have a larger local channel length than
> > > > + * the transfer ring length.
> > > > + */
> > > > + mhi_chan->buf_ring.elements = ch_cfg->local_elements;
> > > > + if (!mhi_chan->buf_ring.elements)
> > > > + mhi_chan->buf_ring.elements = mhi_chan->tre_ring.elements;
> > > > + mhi_chan->er_index = ch_cfg->event_ring;
> > > > + mhi_chan->dir = ch_cfg->dir;
> > > > +
> > > > + /*
> > > > + * For most channels, chtype is identical to the channel
> > > > + * direction. So, if it is not defined, assign the channel
> > > > + * direction to chtype
> > > > + */
> > > > + mhi_chan->type = ch_cfg->type;
> > > > + if (!mhi_chan->type)
> > > > + mhi_chan->type = (enum mhi_ch_type)mhi_chan->dir;
> > > > +
> > > > + mhi_chan->ee_mask = ch_cfg->ee_mask;
> > > > +
> > > > + mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
> > > > + mhi_chan->xfer_type = ch_cfg->data_type;
> > > > +
> > > > + mhi_chan->lpm_notify = ch_cfg->lpm_notify;
> > > > + mhi_chan->offload_ch = ch_cfg->offload_channel;
> > > > + mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch;
> > > > + mhi_chan->pre_alloc = ch_cfg->auto_queue;
> > > > + mhi_chan->auto_start = ch_cfg->auto_start;
> > > > +
> > > > + /*
> > > > + * If MHI host allocates buffers, then the channel direction
> > > > + * should be DMA_FROM_DEVICE and the buffer type should be
> > > > + * MHI_BUF_RAW
> > > > + */
> > > > + if (mhi_chan->pre_alloc && (mhi_chan->dir != DMA_FROM_DEVICE ||
> > > > + mhi_chan->xfer_type != MHI_BUF_RAW)) {
> > > > + dev_err(mhi_cntrl->dev,
> > > > + "Invalid channel configuration\n");
> > > > + goto error_chan_cfg;
> > > > + }
> > > > +
> > > > + /*
> > > > + * Bi-directional and directionless channels must be
> > > > + * offload channels
> > > > + */
> > > > + if ((mhi_chan->dir == DMA_BIDIRECTIONAL ||
> > > > + mhi_chan->dir == DMA_NONE) && !mhi_chan->offload_ch) {
> > > > + dev_err(mhi_cntrl->dev,
> > > > + "Invalid channel configuration\n");
> > > > + goto error_chan_cfg;
> > > > + }
> > > > +
> > > > + if (!mhi_chan->offload_ch) {
> > > > + mhi_chan->db_cfg.brstmode = ch_cfg->doorbell;
> > > > + if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode)) {
> > > > + dev_err(mhi_cntrl->dev,
> > > > + "Invalid doorbell mode\n");
> > > > + goto error_chan_cfg;
> > > > + }
> > > > + }
> > > > +
> > > > + mhi_chan->configured = true;
> > > > +
> > > > + if (mhi_chan->lpm_notify)
> > > > + list_add_tail(&mhi_chan->node, &mhi_cntrl->lpm_chans);
> > > > + }
> > > > +
> > > > + return 0;
> > > > +
> > > > +error_chan_cfg:
> > > > + vfree(mhi_cntrl->mhi_chan);
> > > > +
> > > > + return -EINVAL;
> > > > +}
> > > > +
> > > > +static int parse_config(struct mhi_controller *mhi_cntrl,
> > > > + struct mhi_controller_config *config)
> > > > +{
> > > > + int ret;
> > > > +
> > > > + /* Parse MHI channel configuration */
> > > > + ret = parse_ch_cfg(mhi_cntrl, config);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + /* Parse MHI event configuration */
> > > > + ret = parse_ev_cfg(mhi_cntrl, config);
> > > > + if (ret)
> > > > + goto error_ev_cfg;
> > > > +
> > > > + mhi_cntrl->timeout_ms = config->timeout_ms;
> > > > + if (!mhi_cntrl->timeout_ms)
> > > > + mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS;
> > > > +
> > > > + mhi_cntrl->bounce_buf = config->use_bounce_buf;
> > > > + mhi_cntrl->buffer_len = config->buf_len;
> > > > + if (!mhi_cntrl->buffer_len)
> > > > + mhi_cntrl->buffer_len = MHI_MAX_MTU;
> > > > +
> > > > + return 0;
> > > > +
> > > > +error_ev_cfg:
> > > > + vfree(mhi_cntrl->mhi_chan);
> > > > +
> > > > + return ret;
> > > > +}
> > > > +
> > > > +int mhi_register_controller(struct mhi_controller *mhi_cntrl,
> > > > + struct mhi_controller_config *config)
> > > > +{
> > > > + int ret;
> > > > + int i;
> > > > + struct mhi_event *mhi_event;
> > > > + struct mhi_chan *mhi_chan;
> > > > + struct mhi_cmd *mhi_cmd;
> > > > + struct mhi_device *mhi_dev;
> > > > +
> > > > + if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put)
> > > > + return -EINVAL;
> > > > +
> > > > + if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status)
> > > > + return -EINVAL;
> > > > +
> > > > + ret = parse_config(mhi_cntrl, config);
> > > > + if (ret)
> > > > + return -EINVAL;
> > > > +
> > > > + mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS,
> > > > + sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
> > > > + if (!mhi_cntrl->mhi_cmd) {
> > > > + ret = -ENOMEM;
> > > > + goto error_alloc_cmd;
> > > > + }
> > > > +
> > > > + INIT_LIST_HEAD(&mhi_cntrl->transition_list);
> > > > + spin_lock_init(&mhi_cntrl->transition_lock);
> > > > + spin_lock_init(&mhi_cntrl->wlock);
> > > > + init_waitqueue_head(&mhi_cntrl->state_event);
> > > > +
> > > > + mhi_cmd = mhi_cntrl->mhi_cmd;
> > > > + for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++)
> > > > + spin_lock_init(&mhi_cmd->lock);
> > > > +
> > > > + mhi_event = mhi_cntrl->mhi_event;
> > > > + for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
> > > > + /* Skip for offload events */
> > > > + if (mhi_event->offload_ev)
> > > > + continue;
> > > > +
> > > > + mhi_event->mhi_cntrl = mhi_cntrl;
> > > > + spin_lock_init(&mhi_event->lock);
> > > > + }
> > > > +
> > > > + mhi_chan = mhi_cntrl->mhi_chan;
> > > > + for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
> > > > + mutex_init(&mhi_chan->mutex);
> > > > + init_completion(&mhi_chan->completion);
> > > > + rwlock_init(&mhi_chan->lock);
> > > > + }
> > > > +
> > > > + /* Register controller with MHI bus */
> > > > + mhi_dev = mhi_alloc_device(mhi_cntrl);
> > > > + if (IS_ERR(mhi_dev)) {
> > > > + dev_err(mhi_cntrl->dev, "Failed to allocate device\n");
> > > > + ret = PTR_ERR(mhi_dev);
> > > > + goto error_alloc_dev;
> > > > + }
> > > > +
> > > > + mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
> > > > + mhi_dev->mhi_cntrl = mhi_cntrl;
> > > > + dev_set_name(&mhi_dev->dev, "%s", mhi_cntrl->name);
> > > > +
> > > > + /* Init wakeup source */
> > > > + device_init_wakeup(&mhi_dev->dev, true);
> > > > +
> > > > + ret = device_add(&mhi_dev->dev);
> > > > + if (ret)
> > > > + goto error_add_dev;
> > > > +
> > > > + mhi_cntrl->mhi_dev = mhi_dev;
> > > > +
> > > > + return 0;
> > > > +
> > > > +error_add_dev:
> > > > + mhi_dealloc_device(mhi_cntrl, mhi_dev);
> > > > +
> > > > +error_alloc_dev:
> > > > + kfree(mhi_cntrl->mhi_cmd);
> > > > +
> > > > +error_alloc_cmd:
> > > > + vfree(mhi_cntrl->mhi_chan);
> > > > + kfree(mhi_cntrl->mhi_event);
> > > > +
> > > > + return ret;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(mhi_register_controller);
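[Editor's note] The checks at the top of mhi_register_controller() encode the bus-master contract: runtime_get/runtime_put and status_cb/link_status are all mandatory. A hypothetical userspace sketch of just that validation step, using mocked types (`mock_mhi_controller`, `MOCK_EINVAL`) rather than the kernel ones:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical cut-down controller carrying only the callbacks that
 * mhi_register_controller() validates in the patch above. */
struct mock_mhi_controller {
	int  (*runtime_get)(void *priv);
	void (*runtime_put)(void *priv);
	void (*status_cb)(void *priv, int cb);
	int  (*link_status)(void *priv);
};

#define MOCK_EINVAL 22	/* stand-in for the kernel's EINVAL */

/* Mirror of the early sanity checks: every callback is mandatory. */
static int mock_register_controller(const struct mock_mhi_controller *c)
{
	if (!c->runtime_get || !c->runtime_put)
		return -MOCK_EINVAL;
	if (!c->status_cb || !c->link_status)
		return -MOCK_EINVAL;
	return 0;
}
```

Failing fast here means a half-initialized controller can never reach the state machine, which keeps the later error paths simpler.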
> > > > +
> > > > +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
> > > > +{
> > > > + struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
> > > > +
> > > > + kfree(mhi_cntrl->mhi_cmd);
> > > > + kfree(mhi_cntrl->mhi_event);
> > > > + vfree(mhi_cntrl->mhi_chan);
> > > > +
> > > > + device_del(&mhi_dev->dev);
> > > > + put_device(&mhi_dev->dev);
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(mhi_unregister_controller);
> > > > +
> > > > +static void mhi_release_device(struct device *dev)
> > > > +{
> > > > + struct mhi_device *mhi_dev = to_mhi_device(dev);
> > > > +
> > > > + if (mhi_dev->ul_chan)
> > > > + mhi_dev->ul_chan->mhi_dev = NULL;
> > > > +
> > > > + if (mhi_dev->dl_chan)
> > > > + mhi_dev->dl_chan->mhi_dev = NULL;
> > > > +
> > > > + kfree(mhi_dev);
> > > > +}
> > > > +
> > > > +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl)
> > > > +{
> > > > + struct mhi_device *mhi_dev;
> > > > + struct device *dev;
> > > > +
> > > > + mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
> > > > + if (!mhi_dev)
> > > > + return ERR_PTR(-ENOMEM);
> > > > +
> > > > + dev = &mhi_dev->dev;
> > > > + device_initialize(dev);
> > > > + dev->bus = &mhi_bus_type;
> > > > + dev->release = mhi_release_device;
> > > > + dev->parent = mhi_cntrl->dev;
> > > > + mhi_dev->mhi_cntrl = mhi_cntrl;
> > > > + atomic_set(&mhi_dev->dev_wake, 0);
> > > > +
> > > > + return mhi_dev;
> > > > +}
> > > > +
> > > > +static int mhi_match(struct device *dev, struct device_driver *drv)
> > > > +{
> > > > + return 0;
> > > > +};
> > > > +
> > > > +struct bus_type mhi_bus_type = {
> > > > + .name = "mhi",
> > > > + .dev_name = "mhi",
> > > > + .match = mhi_match,
> > > > +};
> > > > +
> > > > +static int __init mhi_init(void)
> > > > +{
> > > > + return bus_register(&mhi_bus_type);
> > > > +}
> > > > +
> > > > +static void __exit mhi_exit(void)
> > > > +{
> > > > + bus_unregister(&mhi_bus_type);
> > > > +}
> > > > +
> > > > +postcore_initcall(mhi_init);
> > > > +module_exit(mhi_exit);
> > > > +
> > > > +MODULE_LICENSE("GPL v2");
> > > > +MODULE_DESCRIPTION("MHI Host Interface");
> > > > diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
> > > > new file mode 100644
> > > > index 000000000000..21f686d3a140
> > > > --- /dev/null
> > > > +++ b/drivers/bus/mhi/core/internal.h
> > > > @@ -0,0 +1,169 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +/*
> > > > + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> > > > + *
> > > > + */
> > > > +
> > > > +#ifndef _MHI_INT_H
> > > > +#define _MHI_INT_H
> > > > +
> > > > +extern struct bus_type mhi_bus_type;
> > > > +
> > > > +/* MHI transfer completion events */
> > > > +enum mhi_ev_ccs {
> > > > + MHI_EV_CC_INVALID = 0x0,
> > > > + MHI_EV_CC_SUCCESS = 0x1,
> > > > + MHI_EV_CC_EOT = 0x2,
> > > > + MHI_EV_CC_OVERFLOW = 0x3,
> > > > + MHI_EV_CC_EOB = 0x4,
> > > > + MHI_EV_CC_OOB = 0x5,
> > > > + MHI_EV_CC_DB_MODE = 0x6,
> > > > + MHI_EV_CC_UNDEFINED_ERR = 0x10,
> > > > + MHI_EV_CC_BAD_TRE = 0x11,
> > >
> > > Perhaps a quick comment expanding the "EOT", "EOB", "OOB" acronyms? I feel
> > > like those might not be obvious to someone not familiar with the protocol.
> > >
> >
> > Sure
> >
> > > > +};
> > > > +
> > > > +enum mhi_ch_state {
> > > > + MHI_CH_STATE_DISABLED = 0x0,
> > > > + MHI_CH_STATE_ENABLED = 0x1,
> > > > + MHI_CH_STATE_RUNNING = 0x2,
> > > > + MHI_CH_STATE_SUSPENDED = 0x3,
> > > > + MHI_CH_STATE_STOP = 0x4,
> > > > + MHI_CH_STATE_ERROR = 0x5,
> > > > +};
> > > > +
> > > > +#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \
> > > > + mode != MHI_DB_BRST_ENABLE)
> > > > +
> > > > +#define NR_OF_CMD_RINGS 1
> > > > +#define CMD_EL_PER_RING 128
> > > > +#define PRIMARY_CMD_RING 0
> > > > +#define MHI_MAX_MTU 0xffff
> > > > +
> > > > +enum mhi_er_type {
> > > > + MHI_ER_TYPE_INVALID = 0x0,
> > > > + MHI_ER_TYPE_VALID = 0x1,
> > > > +};
> > > > +
> > > > +enum mhi_ch_ee_mask {
> > > > + MHI_CH_EE_PBL = BIT(MHI_EE_PBL),
> > >
> > > MHI_EE_PBL does not appear to be defined. Are you perhaps missing an
> > > include?
> > >
> >
> > It is defined in mhi.h as mhi_ee_type.
>
> mhi.h isn't included here. You are relying on the users of this file to
> have included that, in particular to have included it before this file. That
> tends to result in really weird errors later on. It would be best to
> include mhi.h here if you need these definitions.
>
> Although, I suspect this struct should be moved out of internal.h and into
> mhi.h since clients need to know this, so perhaps this issue is moot.
>
Yep. I've moved this enum to mhi.h since it will be used by controller drivers.
You can find this change in next iteration.
Thanks,
Mani
> >
> > > > + MHI_CH_EE_SBL = BIT(MHI_EE_SBL),
> > > > + MHI_CH_EE_AMSS = BIT(MHI_EE_AMSS),
> > > > + MHI_CH_EE_RDDM = BIT(MHI_EE_RDDM),
> > > > + MHI_CH_EE_PTHRU = BIT(MHI_EE_PTHRU),
> > > > + MHI_CH_EE_WFW = BIT(MHI_EE_WFW),
> > > > + MHI_CH_EE_EDL = BIT(MHI_EE_EDL),
> > > > +};
> > > > +
> > > > +struct db_cfg {
> > > > + bool reset_req;
> > > > + bool db_mode;
> > > > + u32 pollcfg;
> > > > + enum mhi_db_brst_mode brstmode;
> > > > + dma_addr_t db_val;
> > > > + void (*process_db)(struct mhi_controller *mhi_cntrl,
> > > > + struct db_cfg *db_cfg, void __iomem *io_addr,
> > > > + dma_addr_t db_val);
> > > > +};
> > > > +
> > > > +struct mhi_ring {
> > > > + dma_addr_t dma_handle;
> > > > + dma_addr_t iommu_base;
> > > > + u64 *ctxt_wp; /* point to ctxt wp */
> > > > + void *pre_aligned;
> > > > + void *base;
> > > > + void *rp;
> > > > + void *wp;
> > > > + size_t el_size;
> > > > + size_t len;
> > > > + size_t elements;
> > > > + size_t alloc_size;
> > > > + void __iomem *db_addr;
> > > > +};
> > > > +
> > > > +struct mhi_cmd {
> > > > + struct mhi_ring ring;
> > > > + spinlock_t lock;
> > > > +};
> > > > +
> > > > +struct mhi_buf_info {
> > > > + dma_addr_t p_addr;
> > > > + void *v_addr;
> > > > + void *bb_addr;
> > > > + void *wp;
> > > > + size_t len;
> > > > + void *cb_buf;
> > > > + enum dma_data_direction dir;
> > > > +};
> > > > +
> > > > +struct mhi_event {
> > > > + u32 er_index;
> > > > + u32 intmod;
> > > > + u32 irq;
> > > > + int chan; /* this event ring is dedicated to a channel (optional) */
> > > > + u32 priority;
> > > > + enum mhi_er_data_type data_type;
> > > > + struct mhi_ring ring;
> > > > + struct db_cfg db_cfg;
> > > > + bool hw_ring;
> > > > + bool cl_manage;
> > > > + bool offload_ev; /* managed by a device driver */
> > > > + spinlock_t lock;
> > > > + struct mhi_chan *mhi_chan; /* dedicated to channel */
> > > > + struct tasklet_struct task;
> > > > + int (*process_event)(struct mhi_controller *mhi_cntrl,
> > > > + struct mhi_event *mhi_event,
> > > > + u32 event_quota);
> > > > + struct mhi_controller *mhi_cntrl;
> > > > +};
> > > > +
> > > > +struct mhi_chan {
> > > > + u32 chan;
> > > > + const char *name;
> > > > + /*
> > > > + * Important: When consuming, increment tre_ring first and when
> > > > + * releasing, decrement buf_ring first. If tre_ring has space, buf_ring
> > > > + * is guaranteed to have space so we do not need to check both rings.
> > > > + */
> > > > + struct mhi_ring buf_ring;
> > > > + struct mhi_ring tre_ring;
> > > > + u32 er_index;
> > > > + u32 intmod;
> > > > + enum mhi_ch_type type;
> > > > + enum dma_data_direction dir;
> > > > + struct db_cfg db_cfg;
> > > > + enum mhi_ch_ee_mask ee_mask;
> > > > + enum mhi_buf_type xfer_type;
> > > > + enum mhi_ch_state ch_state;
> > > > + enum mhi_ev_ccs ccs;
> > > > + bool lpm_notify;
> > > > + bool configured;
> > > > + bool offload_ch;
> > > > + bool pre_alloc;
> > > > + bool auto_start;
> > > > + int (*gen_tre)(struct mhi_controller *mhi_cntrl,
> > > > + struct mhi_chan *mhi_chan, void *buf, void *cb,
> > > > + size_t len, enum mhi_flags flags);
> > > > + int (*queue_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
> > > > + void *buf, size_t len, enum mhi_flags mflags);
> > > > + struct mhi_device *mhi_dev;
> > > > + void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *result);
> > > > + struct mutex mutex;
> > > > + struct completion completion;
> > > > + rwlock_t lock;
> > > > + struct list_head node;
> > > > +};
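The ordering comment in struct mhi_chan above encodes a real invariant worth spelling out: because buf_ring is at least as long as tre_ring (see parse_ch_cfg later in this patch) and the two occupancy counts move in lockstep, a producer only ever needs to check the transfer ring for space. A minimal userspace sketch of that accounting (illustrative only, not kernel code; all names are stand-ins):

```c
#include <assert.h>

/* Stand-in for a ring: how many elements are in use out of its size. */
struct ring { unsigned int used, size; };

/* Queue one element; checks only the tre ring, per the invariant. */
static int ring_consume(struct ring *tre, struct ring *buf)
{
	if (tre->used == tre->size)
		return -1;		/* transfer ring full */
	tre->used++;			/* increment tre_ring first */
	buf->used++;
	assert(buf->used <= buf->size);	/* holds whenever buf->size >= tre->size */
	return 0;
}

/* Complete one element. */
static void ring_release(struct ring *tre, struct ring *buf)
{
	buf->used--;			/* decrement buf_ring first */
	tre->used--;
}
```
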
> > > > +
> > > > +/* Default MHI timeout */
> > > > +#define MHI_TIMEOUT_MS (1000)
> > > > +
> > > > +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
> > > > +static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
> > > > + struct mhi_device *mhi_dev)
> > > > +{
> > > > + kfree(mhi_dev);
> > > > +}
> > > > +
> > > > +int mhi_destroy_device(struct device *dev, void *data);
> > > > +void mhi_create_devices(struct mhi_controller *mhi_cntrl);
> > > > +
> > > > +#endif /* _MHI_INT_H */
> > > > diff --git a/include/linux/mhi.h b/include/linux/mhi.h
> > > > new file mode 100644
> > > > index 000000000000..69cf9a4b06c7
> > > > --- /dev/null
> > > > +++ b/include/linux/mhi.h
> > > > @@ -0,0 +1,438 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +/*
> > > > + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> > > > + *
> > > > + */
> > > > +#ifndef _MHI_H_
> > > > +#define _MHI_H_
> > > > +
> > > > +#include <linux/device.h>
> > > > +#include <linux/dma-direction.h>
> > > > +#include <linux/mutex.h>
> > > > +#include <linux/rwlock_types.h>
> > > > +#include <linux/slab.h>
> > > > +#include <linux/spinlock_types.h>
> > > > +#include <linux/wait.h>
> > > > +#include <linux/workqueue.h>
> > > > +
> > > > +struct mhi_chan;
> > > > +struct mhi_event;
> > > > +struct mhi_ctxt;
> > > > +struct mhi_cmd;
> > > > +struct mhi_buf_info;
> > > > +
> > > > +/**
> > > > + * enum mhi_callback - MHI callback
> > > > + * @MHI_CB_IDLE: MHI entered idle state
> > > > + * @MHI_CB_PENDING_DATA: New data available for client to process
> > > > + * @MHI_CB_LPM_ENTER: MHI host entered low power mode
> > > > + * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
> > > > + * @MHI_CB_EE_RDDM: MHI device entered RDDM exec env
> > > > + * @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env
> > > > + * @MHI_CB_SYS_ERROR: MHI device entered error state (may recover)
> > > > + * @MHI_CB_FATAL_ERROR: MHI device entered fatal error state
> > > > + */
> > > > +enum mhi_callback {
> > > > + MHI_CB_IDLE,
> > > > + MHI_CB_PENDING_DATA,
> > > > + MHI_CB_LPM_ENTER,
> > > > + MHI_CB_LPM_EXIT,
> > > > + MHI_CB_EE_RDDM,
> > > > + MHI_CB_EE_MISSION_MODE,
> > > > + MHI_CB_SYS_ERROR,
> > > > + MHI_CB_FATAL_ERROR,
> > > > +};
> > > > +
> > > > +/**
> > > > + * enum mhi_flags - Transfer flags
> > > > + * @MHI_EOB: End of buffer for bulk transfer
> > > > + * @MHI_EOT: End of transfer
> > > > + * @MHI_CHAIN: Linked transfer
> > > > + */
> > > > +enum mhi_flags {
> > > > + MHI_EOB,
> > > > + MHI_EOT,
> > > > + MHI_CHAIN,
> > > > +};
> > > > +
> > > > +/**
> > > > + * enum mhi_device_type - Device types
> > > > + * @MHI_DEVICE_XFER: Handles data transfer
> > > > + * @MHI_DEVICE_TIMESYNC: Use for timesync feature
> > > > + * @MHI_DEVICE_CONTROLLER: Control device
> > > > + */
> > > > +enum mhi_device_type {
> > > > + MHI_DEVICE_XFER,
> > > > + MHI_DEVICE_TIMESYNC,
> > > > + MHI_DEVICE_CONTROLLER,
> > > > +};
> > > > +
> > > > +/**
> > > > + * enum mhi_ch_type - Channel types
> > > > + * @MHI_CH_TYPE_INVALID: Invalid channel type
> > > > + * @MHI_CH_TYPE_OUTBOUND: Outbound channel to the device
> > > > + * @MHI_CH_TYPE_INBOUND: Inbound channel from the device
> > > > + * @MHI_CH_TYPE_INBOUND_COALESCED: Coalesced channel for the device to combine
> > > > + * multiple packets and send them as a single
> > > > + * large packet to reduce CPU consumption
> > > > + */
> > > > +enum mhi_ch_type {
> > > > + MHI_CH_TYPE_INVALID = 0,
> > > > + MHI_CH_TYPE_OUTBOUND = DMA_TO_DEVICE,
> > > > + MHI_CH_TYPE_INBOUND = DMA_FROM_DEVICE,
> > > > + MHI_CH_TYPE_INBOUND_COALESCED = 3,
> > > > +};
> > > > +
> > > > +/**
> > > > + * enum mhi_ee_type - Execution environment types
> > > > + * @MHI_EE_PBL: Primary Bootloader
> > > > + * @MHI_EE_SBL: Secondary Bootloader
> > > > + * @MHI_EE_AMSS: Modem, aka the primary runtime EE
> > > > + * @MHI_EE_RDDM: Ram dump download mode
> > > > + * @MHI_EE_WFW: WLAN firmware mode
> > > > + * @MHI_EE_PTHRU: Passthrough
> > > > + * @MHI_EE_EDL: Embedded downloader
> > > > + */
> > > > +enum mhi_ee_type {
> > > > + MHI_EE_PBL,
> > > > + MHI_EE_SBL,
> > > > + MHI_EE_AMSS,
> > > > + MHI_EE_RDDM,
> > > > + MHI_EE_WFW,
> > > > + MHI_EE_PTHRU,
> > > > + MHI_EE_EDL,
> > > > + MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
> > > > + MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
> > > > + MHI_EE_NOT_SUPPORTED,
> > > > + MHI_EE_MAX,
> > > > +};
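The @ee_mask field used elsewhere in this patch is a bitmask over these enum values. A hedged userspace sketch of the intended check (EE_BIT() is a stand-in for the kernel's BIT() macro, and the enum is trimmed for brevity):

```c
/* Trimmed stand-in for enum mhi_ee_type above. */
enum mhi_ee_type { MHI_EE_PBL, MHI_EE_SBL, MHI_EE_AMSS, MHI_EE_RDDM };

#define EE_BIT(ee)	(1U << (ee))

/* Is a channel with this ee_mask usable in execution environment ee? */
static int chan_valid_in_ee(unsigned int ee_mask, enum mhi_ee_type ee)
{
	return !!(ee_mask & EE_BIT(ee));
}
```
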
> > > > +
> > > > +/**
> > > > + * enum mhi_buf_type - Accepted buffer type for the channel
> > > > + * @MHI_BUF_RAW: Raw buffer
> > > > + * @MHI_BUF_SKB: SKB struct
> > > > + * @MHI_BUF_SCLIST: Scatter-gather list
> > > > + * @MHI_BUF_NOP: CPU offload channel, host does not accept transfer
> > > > + * @MHI_BUF_DMA: Receive DMA address mapped by client
> > > > + * @MHI_BUF_RSC_DMA: RSC type premapped buffer
> > >
> > > Maybe it's just me, but what is "RSC"?
> > >
> >
> > Reserved Side Coalesced. I thought I mentioned it somewhere, but apparently not... Will do.
> >
> > > > + */
> > > > +enum mhi_buf_type {
> > > > + MHI_BUF_RAW,
> > > > + MHI_BUF_SKB,
> > > > + MHI_BUF_SCLIST,
> > > > + MHI_BUF_NOP,
> > > > + MHI_BUF_DMA,
> > > > + MHI_BUF_RSC_DMA,
> > > > +};
> > > > +
> > > > +/**
> > > > + * enum mhi_er_data_type - Event ring data types
> > > > + * @MHI_ER_DATA: Only client data over this ring
> > > > + * @MHI_ER_CTRL: MHI control data and client data
> > > > + * @MHI_ER_TSYNC: Time sync events
> > > > + */
> > > > +enum mhi_er_data_type {
> > > > + MHI_ER_DATA,
> > > > + MHI_ER_CTRL,
> > > > + MHI_ER_TSYNC,
> > > > +};
> > > > +
> > > > +/**
> > > > + * enum mhi_db_brst_mode - Doorbell mode
> > > > + * @MHI_DB_BRST_DISABLE: Burst mode disable
> > > > + * @MHI_DB_BRST_ENABLE: Burst mode enable
> > > > + */
> > > > +enum mhi_db_brst_mode {
> > > > + MHI_DB_BRST_DISABLE = 0x2,
> > > > + MHI_DB_BRST_ENABLE = 0x3,
> > > > +};
> > > > +
> > > > +/**
> > > > + * struct mhi_channel_config - Channel configuration structure for controller
> > > > + * @num: The number assigned to this channel
> > > > + * @name: The name of this channel
> > > > + * @num_elements: The number of elements that can be queued to this channel
> > > > + * @local_elements: The local ring length of the channel
> > > > + * @event_ring: The event ring index that services this channel
> > > > + * @dir: Direction that data may flow on this channel
> > > > + * @type: Channel type
> > > > + * @ee_mask: Execution Environment mask for this channel
> > >
> > > But the mask defines are in internal.h, so how is a client supposed to
> > > know what they are?
> > >
> >
> > Again, I missed the whole change addressing your internal review (it is one
> > of them). I defined the masks in mhi.h. Will add it in the next iteration.
> >
> > > > + * @pollcfg: Polling configuration for burst mode. 0 is default. Milliseconds
> > > > + *	     for UL channels, multiple of 8 ring elements for DL channels
> > > > + * @data_type: Data type accepted by this channel
> > > > + * @doorbell: Doorbell mode
> > > > + * @lpm_notify: The channel master requires low power mode notifications
> > > > + * @offload_channel: The client manages the channel completely
> > > > + * @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition
> > > > + * @auto_queue: Framework will automatically queue buffers for DL traffic
> > > > + * @auto_start: Automatically start (open) this channel
> > > > + */
> > > > +struct mhi_channel_config {
> > > > + u32 num;
> > > > + char *name;
> > > > + u32 num_elements;
> > > > + u32 local_elements;
> > > > + u32 event_ring;
> > > > + enum dma_data_direction dir;
> > > > + enum mhi_ch_type type;
> > > > + u32 ee_mask;
> > > > + u32 pollcfg;
> > > > + enum mhi_buf_type data_type;
> > > > + enum mhi_db_brst_mode doorbell;
> > > > + bool lpm_notify;
> > > > + bool offload_channel;
> > > > + bool doorbell_mode_switch;
> > > > + bool auto_queue;
> > > > + bool auto_start;
> > > > +};
> > > > +
> > > > +/**
> > > > + * struct mhi_event_config - Event ring configuration structure for controller
> > > > + * @num_elements: The number of elements that can be queued to this ring
> > > > + * @irq_moderation_ms: Delay irq for additional events to be aggregated
> > > > + * @irq: IRQ associated with this ring
> > > > + * @channel: Dedicated channel number. U32_MAX indicates a non-dedicated ring
> > > > + * @mode: Doorbell mode
> > > > + * @data_type: Type of data this ring will process
> > > > + * @hardware_event: This ring is associated with hardware channels
> > > > + * @client_managed: This ring is client managed
> > > > + * @offload_channel: This ring is associated with an offloaded channel
> > > > + * @priority: Priority of this ring. Use 1 for now
> > > > + */
> > > > +struct mhi_event_config {
> > > > + u32 num_elements;
> > > > + u32 irq_moderation_ms;
> > > > + u32 irq;
> > > > + u32 channel;
> > > > + enum mhi_db_brst_mode mode;
> > > > + enum mhi_er_data_type data_type;
> > > > + bool hardware_event;
> > > > + bool client_managed;
> > > > + bool offload_channel;
> > > > + u32 priority;
> > > > +};
> > > > +
> > > > +/**
> > > > + * struct mhi_controller_config - Root MHI controller configuration
> > > > + * @max_channels: Maximum number of channels supported
> > > > + * @timeout_ms: Timeout value for operations. 0 means use default
> > > > + * @use_bounce_buf: Use a bounce buffer pool due to limited DDR access
> > > > + * @m2_no_db: Host is not allowed to ring DB in M2 state
> > > > + * @buf_len: Size of automatically allocated buffers. 0 means use default
> > > > + * @num_channels: Number of channels defined in @ch_cfg
> > > > + * @ch_cfg: Array of defined channels
> > > > + * @num_events: Number of event rings defined in @event_cfg
> > > > + * @event_cfg: Array of defined event rings
> > > > + */
> > > > +struct mhi_controller_config {
> > > > + u32 max_channels;
> > > > + u32 timeout_ms;
> > > > + bool use_bounce_buf;
> > > > + bool m2_no_db;
> > > > + u32 buf_len;
> > > > + u32 num_channels;
> > > > + struct mhi_channel_config *ch_cfg;
> > > > + u32 num_events;
> > > > + struct mhi_event_config *event_cfg;
> > > > +};
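To make these configuration structures concrete, here is a trimmed userspace sketch of how a controller driver might fill them in statically. The stand-in struct definitions keep only a subset of the fields above, and the channel numbers/names are hypothetical examples, not taken from a real modem:

```c
/* Trimmed stand-ins for the config structures above. */
struct channel_config {
	unsigned int num;
	const char *name;
	unsigned int num_elements;
	unsigned int event_ring;
};

struct event_config {
	unsigned int num_elements;
	unsigned int irq;
	unsigned int channel;	/* 0xFFFFFFFF (U32_MAX): no dedicated channel */
};

struct controller_config {
	unsigned int max_channels, num_channels, num_events;
	const struct channel_config *ch_cfg;
	const struct event_config *event_cfg;
};

/* Hypothetical pair of loopback channels sharing one event ring. */
static const struct channel_config example_channels[] = {
	{ .num = 0, .name = "LOOPBACK", .num_elements = 32, .event_ring = 1 },
	{ .num = 1, .name = "LOOPBACK", .num_elements = 32, .event_ring = 1 },
};

static const struct event_config example_events[] = {
	{ .num_elements = 32,  .irq = 1, .channel = 0xFFFFFFFFu },
	{ .num_elements = 256, .irq = 2, .channel = 0xFFFFFFFFu },
};

static const struct controller_config example_config = {
	.max_channels = 128,
	.num_channels = 2,
	.ch_cfg = example_channels,
	.num_events = 2,
	.event_cfg = example_events,
};
```
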
> > > > +
> > > > +/**
> > > > + * struct mhi_controller - Master MHI controller structure
> > > > + * @name: Name of the controller
> > > > + * @dev: Driver model device node for the controller
> > > > + * @mhi_dev: MHI device instance for the controller
> > > > + * @dev_id: Device ID of the controller
> > > > + * @bus_id: Physical bus instance used by the controller
> > > > + * @regs: Base address of MHI MMIO register space
> > > > + * @iova_start: IOMMU starting address for data
> > > > + * @iova_stop: IOMMU stop address for data
> > > > + * @fw_image: Firmware image name for normal booting
> > > > + * @edl_image: Firmware image name for emergency download mode
> > > > + * @fbc_download: MHI host needs to do complete image transfer
> > > > + * @sbl_size: SBL image size
> > > > + * @seg_len: BHIe vector size
> > > > + * @max_chan: Maximum number of channels the controller supports
> > > > + * @mhi_chan: Points to the channel configuration table
> > > > + * @lpm_chans: List of channels that require LPM notifications
> > > > + * @total_ev_rings: Total # of event rings allocated
> > > > + * @hw_ev_rings: Number of hardware event rings
> > > > + * @sw_ev_rings: Number of software event rings
> > > > + * @nr_irqs_req: Number of IRQs required to operate
> > > > + * @nr_irqs: Number of IRQ allocated by bus master
> > > > + * @irq: Base IRQ number to request
> > > > + * @mhi_event: MHI event ring configurations table
> > > > + * @mhi_cmd: MHI command ring configurations table
> > > > + * @mhi_ctxt: MHI device context, shared memory between host and device
> > > > + * @timeout_ms: Timeout in ms for state transitions
> > > > + * @pm_mutex: Mutex for suspend/resume operation
> > > > + * @pre_init: MHI host needs to do pre-initialization before power up
> > > > + * @pm_lock: Lock for protecting MHI power management state
> > > > + * @pm_state: MHI power management state
> > > > + * @db_access: DB access states
> > > > + * @ee: MHI device execution environment
> > > > + * @wake_set: Device wakeup set flag
> > > > + * @dev_wake: Device wakeup count
> > > > + * @alloc_size: Total memory allocations size of the controller
> > > > + * @pending_pkts: Pending packets for the controller
> > > > + * @transition_list: List of MHI state transitions
> > > > + * @wlock: Lock for protecting device wakeup
> > > > + * @M0: M0 state counter for debugging
> > > > + * @M2: M2 state counter for debugging
> > > > + * @M3: M3 state counter for debugging
> > > > + * @M3_FAST: M3 Fast state counter for debugging
> > > > + * @st_worker: State transition worker
> > > > + * @fw_worker: Firmware download worker
> > > > + * @syserr_worker: System error worker
> > > > + * @state_event: State change event
> > > > + * @status_cb: CB function to notify various power states to bus master
> > > > + * @link_status: CB function to query link status of the device
> > > > + * @wake_get: CB function to assert device wake
> > > > + * @wake_put: CB function to de-assert device wake
> > > > + * @wake_toggle: CB function to assert and deassert (toggle) device wake
> > > > + * @runtime_get: CB function to request a controller runtime resume
> > > > + * @runtime_put: CB function to decrement pm usage
> > > > + * @lpm_disable: CB function to request disable link level low power modes
> > > > + * @lpm_enable: CB function to request enable link level low power modes again
> > > > + * @bounce_buf: Use of bounce buffer
> > > > + * @buffer_len: Bounce buffer length
> > > > + * @priv_data: Points to bus master's private data
> > > > + */
> > > > +struct mhi_controller {
> > > > + const char *name;
> > > > + struct device *dev;
> > > > + struct mhi_device *mhi_dev;
> > > > + u32 dev_id;
> > > > + u32 bus_id;
> > > > + void __iomem *regs;
> > > > + dma_addr_t iova_start;
> > > > + dma_addr_t iova_stop;
> > > > + const char *fw_image;
> > > > + const char *edl_image;
> > > > + bool fbc_download;
> > > > + size_t sbl_size;
> > > > + size_t seg_len;
> > > > + u32 max_chan;
> > > > + struct mhi_chan *mhi_chan;
> > > > + struct list_head lpm_chans;
> > > > + u32 total_ev_rings;
> > > > + u32 hw_ev_rings;
> > > > + u32 sw_ev_rings;
> > > > + u32 nr_irqs_req;
> > > > + u32 nr_irqs;
> > > > + int *irq;
> > > > +
> > > > + struct mhi_event *mhi_event;
> > > > + struct mhi_cmd *mhi_cmd;
> > > > + struct mhi_ctxt *mhi_ctxt;
> > > > +
> > > > + u32 timeout_ms;
> > > > + struct mutex pm_mutex;
> > > > + bool pre_init;
> > > > + rwlock_t pm_lock;
> > > > + u32 pm_state;
> > > > + u32 db_access;
> > > > + enum mhi_ee_type ee;
> > > > + bool wake_set;
> > > > + atomic_t dev_wake;
> > > > + atomic_t alloc_size;
> > > > + atomic_t pending_pkts;
> > > > + struct list_head transition_list;
> > > > + spinlock_t transition_lock;
> > > > + spinlock_t wlock;
> > > > + u32 M0, M2, M3, M3_FAST;
> > > > + struct work_struct st_worker;
> > > > + struct work_struct fw_worker;
> > > > + struct work_struct syserr_worker;
> > > > + wait_queue_head_t state_event;
> > > > +
> > > > + void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
> > > > + enum mhi_callback cb);
> > > > + int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
> > > > + void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
> > > > + void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
> > > > + void (*wake_toggle)(struct mhi_controller *mhi_cntrl);
> > > > + int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
> > > > + void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
> > > > + void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
> > > > + void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
> > > > +
> > > > + bool bounce_buf;
> > > > + size_t buffer_len;
> > > > + void *priv_data;
> > > > +};
> > > > +
> > > > +/**
> > > > + * struct mhi_device - Structure representing a MHI device which binds
> > > > + * to channels
> > > > + * @dev: Driver model device node for the MHI device
> > > > + * @tiocm: Device current terminal settings
> > > > + * @id: Pointer to MHI device ID struct
> > > > + * @chan_name: Name of the channel to which the device binds
> > > > + * @mhi_cntrl: Controller the device belongs to
> > > > + * @ul_chan: UL channel for the device
> > > > + * @dl_chan: DL channel for the device
> > > > + * @dev_wake: Device wakeup counter
> > > > + * @dev_type: MHI device type
> > > > + */
> > > > +struct mhi_device {
> > > > + struct device dev;
> > > > + u32 tiocm;
> > > > + const struct mhi_device_id *id;
> > > > + const char *chan_name;
> > > > + struct mhi_controller *mhi_cntrl;
> > > > + struct mhi_chan *ul_chan;
> > > > + struct mhi_chan *dl_chan;
> > > > + atomic_t dev_wake;
> > > > + enum mhi_device_type dev_type;
> > > > +};
> > > > +
> > > > +/**
> > > > + * struct mhi_result - Completed buffer information
> > > > + * @buf_addr: Address of data buffer
> > > > + * @dir: Channel direction
> > > > + * @bytes_xfer: # of bytes transferred
> > > > + * @transaction_status: Status of last transaction
> > > > + */
> > > > +struct mhi_result {
> > > > + void *buf_addr;
> > > > + enum dma_data_direction dir;
> > > > + size_t bytes_xferd;
> > >
> > > Description says this is named "bytes_xfer"
> > >
> >
> > Ah yes, that's a typo; it should be bytes_xferd. Will fix it.
> >
> > Thanks,
> > Mani
> >
> > > > + int transaction_status;
> > > > +};
> > > > +
> > > > +#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev)
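to_mhi_device() is the standard container_of() pattern: given a pointer to the embedded struct device, recover the enclosing struct mhi_device. A self-contained userspace illustration with stand-in types (the kernel's container_of() additionally type-checks the member, which this simplified version omits):

```c
#include <stddef.h>

/* Stand-in types; the real ones are far larger. */
struct device { int dummy; };
struct mhi_device {
	unsigned int tiocm;
	struct device dev;	/* embedded, not a pointer */
};

/* Recover the enclosing structure from a pointer to its member. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

#define to_mhi_device(d) container_of(d, struct mhi_device, dev)
```
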
> > > > +
> > > > +/**
> > > > + * mhi_controller_set_devdata - Set MHI controller private data
> > > > + * @mhi_cntrl: MHI controller to set data
> > > > + */
> > > > +static inline void mhi_controller_set_devdata(struct mhi_controller *mhi_cntrl,
> > > > + void *priv)
> > > > +{
> > > > + mhi_cntrl->priv_data = priv;
> > > > +}
> > > > +
> > > > +/**
> > > > + * mhi_controller_get_devdata - Get MHI controller private data
> > > > + * @mhi_cntrl: MHI controller to get data
> > > > + */
> > > > +static inline void *mhi_controller_get_devdata(struct mhi_controller *mhi_cntrl)
> > > > +{
> > > > + return mhi_cntrl->priv_data;
> > > > +}
> > > > +
> > > > +/**
> > > > + * mhi_register_controller - Register MHI controller
> > > > + * @mhi_cntrl: MHI controller to register
> > > > + * @config: Configuration to use for the controller
> > > > + */
> > > > +int mhi_register_controller(struct mhi_controller *mhi_cntrl,
> > > > + struct mhi_controller_config *config);
> > > > +
> > > > +/**
> > > > + * mhi_unregister_controller - Unregister MHI controller
> > > > + * @mhi_cntrl: MHI controller to unregister
> > > > + */
> > > > +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl);
> > > > +
> > > > +#endif /* _MHI_H_ */
> > > > diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
> > > > index e3596db077dc..be15e997fe39 100644
> > > > --- a/include/linux/mod_devicetable.h
> > > > +++ b/include/linux/mod_devicetable.h
> > > > @@ -821,4 +821,16 @@ struct wmi_device_id {
> > > > const void *context;
> > > > };
> > > > +#define MHI_NAME_SIZE 32
> > > > +
> > > > +/**
> > > > + * struct mhi_device_id - MHI device identification
> > > > + * @chan: MHI channel name
> > > > + * @driver_data: Driver data
> > > > + */
> > > > +struct mhi_device_id {
> > > > + const char chan[MHI_NAME_SIZE];
> > > > + kernel_ulong_t driver_data;
> > > > +};
> > > > +
> > > > #endif /* LINUX_MOD_DEVICETABLE_H */
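With chan holding the channel name, bus-level device/driver matching (and the modalias support mentioned in the cover letter) reduces to string comparison against a zero-terminated id table. A userspace sketch under that assumption (mhi_match_id is a stand-in name, and unsigned long replaces kernel_ulong_t):

```c
#include <string.h>

#define MHI_NAME_SIZE 32

struct mhi_device_id {
	const char chan[MHI_NAME_SIZE];
	unsigned long driver_data;
};

/* Walk a table terminated by an empty chan name; return the match or NULL. */
static const struct mhi_device_id *
mhi_match_id(const struct mhi_device_id *id, const char *chan_name)
{
	for (; id->chan[0]; id++)
		if (!strcmp(id->chan, chan_name))
			return id;
	return NULL;
}
```
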
> > > >
> > >
> > >
> > > --
> > > Jeffrey Hugo
> > > Qualcomm Technologies, Inc. is a member of the
> > > Code Aurora Forum, a Linux Foundation Collaborative Project.
>
On Sun, Jan 26, 2020 at 04:58:49PM -0700, Jeffrey Hugo wrote:
> On 1/23/2020 10:05 AM, Jeffrey Hugo wrote:
> > On 1/23/2020 4:18 AM, Manivannan Sadhasivam wrote:
> > > This commit adds support for registering MHI controller drivers with
> > > the MHI stack. MHI controller drivers manage the interaction with the
> > > MHI client devices such as external modems and WiFi chipsets. They
> > > are also the MHI bus masters in charge of managing the physical link
> > > between the host and client device.
> > >
> > > This is based on the patch submitted by Sujeev Dias:
> > > https://lkml.org/lkml/2018/7/9/987
> > >
> > > Signed-off-by: Sujeev Dias <[email protected]>
> > > Signed-off-by: Siddartha Mohanadoss <[email protected]>
> > > [jhugo: added static config for controllers and fixed several bugs]
> > > Signed-off-by: Jeffrey Hugo <[email protected]>
> > > [mani: removed DT dependency, splitted and cleaned up for upstream]
> > > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > > ---
> > >  drivers/bus/Kconfig             |   1 +
> > >  drivers/bus/Makefile            |   3 +
> > >  drivers/bus/mhi/Kconfig         |  14 +
> > >  drivers/bus/mhi/Makefile        |   2 +
> > >  drivers/bus/mhi/core/Makefile   |   3 +
> > >  drivers/bus/mhi/core/init.c     | 404 +++++++++++++++++++++++++++++
> > >  drivers/bus/mhi/core/internal.h | 169 ++++++++++++
> > >  include/linux/mhi.h             | 438 ++++++++++++++++++++++++++++++++
> > >  include/linux/mod_devicetable.h |  12 +
> > >  9 files changed, 1046 insertions(+)
> > >  create mode 100644 drivers/bus/mhi/Kconfig
> > >  create mode 100644 drivers/bus/mhi/Makefile
> > >  create mode 100644 drivers/bus/mhi/core/Makefile
> > >  create mode 100644 drivers/bus/mhi/core/init.c
> > >  create mode 100644 drivers/bus/mhi/core/internal.h
> > >  create mode 100644 include/linux/mhi.h
> > >
> > > diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
> > > index 50200d1c06ea..383934e54786 100644
> > > --- a/drivers/bus/Kconfig
> > > +++ b/drivers/bus/Kconfig
> > > @@ -202,5 +202,6 @@ config DA8XX_MSTPRI
> > >  	  peripherals.
> > >  source "drivers/bus/fsl-mc/Kconfig"
> > > +source "drivers/bus/mhi/Kconfig"
> > >  endmenu
> > > diff --git a/drivers/bus/Makefile b/drivers/bus/Makefile
> > > index 1320bcf9fa9d..05f32cd694a4 100644
> > > --- a/drivers/bus/Makefile
> > > +++ b/drivers/bus/Makefile
> > > @@ -34,3 +34,6 @@ obj-$(CONFIG_UNIPHIER_SYSTEM_BUS)	+=
> > > uniphier-system-bus.o
> > >  obj-$(CONFIG_VEXPRESS_CONFIG)	+= vexpress-config.o
> > >  obj-$(CONFIG_DA8XX_MSTPRI)	+= da8xx-mstpri.o
> > > +
> > > +# MHI
> > > +obj-$(CONFIG_MHI_BUS)		+= mhi/
> > > diff --git a/drivers/bus/mhi/Kconfig b/drivers/bus/mhi/Kconfig
> > > new file mode 100644
> > > index 000000000000..a8bd9bd7db7c
> > > --- /dev/null
> > > +++ b/drivers/bus/mhi/Kconfig
> > > @@ -0,0 +1,14 @@
> > > +# SPDX-License-Identifier: GPL-2.0
> >
> > first time I noticed this, although I suspect this will need to be
> > corrected "everywhere" -
> > Per the SPDX website, the "GPL-2.0" label is deprecated. Its
> > replacement is "GPL-2.0-only".
> > I think all instances should be updated to "GPL-2.0-only"
> >
> > > +#
> > > +# MHI bus
> > > +#
> > > +# Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> > > +#
> > > +
> > > +config MHI_BUS
> > > +	tristate "Modem Host Interface (MHI) bus"
> > > +	help
> > > +	  Bus driver for MHI protocol. Modem Host Interface (MHI) is a
> > > +	  communication protocol used by the host processors to control
> > > +	  and communicate with modem devices over a high speed peripheral
> > > +	  bus or shared memory.
> > > diff --git a/drivers/bus/mhi/Makefile b/drivers/bus/mhi/Makefile
> > > new file mode 100644
> > > index 000000000000..19e6443b72df
> > > --- /dev/null
> > > +++ b/drivers/bus/mhi/Makefile
> > > @@ -0,0 +1,2 @@
> > > +# core layer
> > > +obj-y += core/
> > > diff --git a/drivers/bus/mhi/core/Makefile
> > > b/drivers/bus/mhi/core/Makefile
> > > new file mode 100644
> > > index 000000000000..2db32697c67f
> > > --- /dev/null
> > > +++ b/drivers/bus/mhi/core/Makefile
> > > @@ -0,0 +1,3 @@
> > > +obj-$(CONFIG_MHI_BUS) := mhi.o
> > > +
> > > +mhi-y := init.o
> > > diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
> > > new file mode 100644
> > > index 000000000000..5b817ec250e0
> > > --- /dev/null
> > > +++ b/drivers/bus/mhi/core/init.c
> > > @@ -0,0 +1,404 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +/*
> > > + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> > > + *
> > > + */
> > > +
> > > +#define dev_fmt(fmt) "MHI: " fmt
> > > +
> > > +#include <linux/device.h>
> > > +#include <linux/dma-direction.h>
> > > +#include <linux/dma-mapping.h>
> > > +#include <linux/interrupt.h>
> > > +#include <linux/list.h>
> > > +#include <linux/mhi.h>
> > > +#include <linux/mod_devicetable.h>
> > > +#include <linux/module.h>
> > > +#include <linux/slab.h>
> > > +#include <linux/vmalloc.h>
> > > +#include <linux/wait.h>
> > > +#include "internal.h"
> > > +
> > > +static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
> > > +			struct mhi_controller_config *config)
> > > +{
> > > +	int i, num;
> > > +	struct mhi_event *mhi_event;
> > > +	struct mhi_event_config *event_cfg;
> > > +
> > > +	num = config->num_events;
> > > +	mhi_cntrl->total_ev_rings = num;
> > > +	mhi_cntrl->mhi_event = kcalloc(num, sizeof(*mhi_cntrl->mhi_event),
> > > +				       GFP_KERNEL);
> > > +	if (!mhi_cntrl->mhi_event)
> > > +		return -ENOMEM;
> > > +
> > > +	/* Populate event ring */
> > > +	mhi_event = mhi_cntrl->mhi_event;
> > > +	for (i = 0; i < num; i++) {
> > > +		event_cfg = &config->event_cfg[i];
> > > +
> > > +		mhi_event->er_index = i;
> > > +		mhi_event->ring.elements = event_cfg->num_elements;
> > > +		mhi_event->intmod = event_cfg->irq_moderation_ms;
> > > +		mhi_event->irq = event_cfg->irq;
> > > +
> > > +		if (event_cfg->channel != U32_MAX) {
> > > +			/* This event ring has a dedicated channel */
> > > +			mhi_event->chan = event_cfg->channel;
> > > +			if (mhi_event->chan >= mhi_cntrl->max_chan) {
> > > +				dev_err(mhi_cntrl->dev,
> > > +					"Event Ring channel not available\n");
> > > +				goto error_ev_cfg;
> > > +			}
> > > +
> > > +			mhi_event->mhi_chan =
> > > +				&mhi_cntrl->mhi_chan[mhi_event->chan];
> > > +		}
> > > +
> > > +		/* Priority is fixed to 1 for now */
> > > +		mhi_event->priority = 1;
> > > +
> > > +		mhi_event->db_cfg.brstmode = event_cfg->mode;
> > > +		if (MHI_INVALID_BRSTMODE(mhi_event->db_cfg.brstmode))
> > > +			goto error_ev_cfg;
> > > +
> > > +		mhi_event->data_type = event_cfg->data_type;
> > > +
> > > +		mhi_event->hw_ring = event_cfg->hardware_event;
> > > +		if (mhi_event->hw_ring)
> > > +			mhi_cntrl->hw_ev_rings++;
> > > +		else
> > > +			mhi_cntrl->sw_ev_rings++;
> > > +
> > > +		mhi_event->cl_manage = event_cfg->client_managed;
> > > +		mhi_event->offload_ev = event_cfg->offload_channel;
> > > +		mhi_event++;
> > > +	}
> > > +
> > > +	/* We need IRQ for each event ring + additional one for BHI */
> > > +	mhi_cntrl->nr_irqs_req = mhi_cntrl->total_ev_rings + 1;
> > > +
> > > +	return 0;
> > > +
> > > +error_ev_cfg:
> > > +
> > > +	kfree(mhi_cntrl->mhi_event);
> > > +	return -EINVAL;
> > > +}
> > > +
> > > +static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
> > > +			struct mhi_controller_config *config)
> > > +{
> > > +	int i;
> > > +	u32 chan;
> > > +	struct mhi_channel_config *ch_cfg;
> > > +
> > > +	mhi_cntrl->max_chan = config->max_channels;
> > > +
> > > +	/*
> > > +	 * The allocation of MHI channels can exceed 32KB in some scenarios,
> > > +	 * so to avoid any possible memory allocation failures, vzalloc is
> > > +	 * used here
> > > +	 */
> > > +	mhi_cntrl->mhi_chan = vzalloc(mhi_cntrl->max_chan *
> > > +				      sizeof(*mhi_cntrl->mhi_chan));
> > > +	if (!mhi_cntrl->mhi_chan)
> > > +		return -ENOMEM;
> > > +
> > > +	INIT_LIST_HEAD(&mhi_cntrl->lpm_chans);
> > > +
> > > +	/* Populate channel configurations */
> > > +	for (i = 0; i < config->num_channels; i++) {
> > > +		struct mhi_chan *mhi_chan;
> > > +
> > > +		ch_cfg = &config->ch_cfg[i];
> > > +
> > > +		chan = ch_cfg->num;
> > > +		if (chan >= mhi_cntrl->max_chan) {
> > > +			dev_err(mhi_cntrl->dev,
> > > +				"Channel %d not available\n", chan);
> > > +			goto error_chan_cfg;
> > > +		}
> > > +
> > > +		mhi_chan = &mhi_cntrl->mhi_chan[chan];
> > > +		mhi_chan->name = ch_cfg->name;
> > > +		mhi_chan->chan = chan;
> > > +
> > > +		mhi_chan->tre_ring.elements = ch_cfg->num_elements;
> > > +		if (!mhi_chan->tre_ring.elements)
> > > +			goto error_chan_cfg;
> > > +
> > > +		/*
> > > +		 * For some channels, the local ring length should be bigger
> > > +		 * than the transfer ring length due to internal logical
> > > +		 * channels in the device, so the host can queue many more
> > > +		 * buffers than the transfer ring length. For example, RSC
> > > +		 * channels should have a larger local channel length than
> > > +		 * transfer ring length.
> > > +		 */
> > > +		mhi_chan->buf_ring.elements = ch_cfg->local_elements;
> > > +		if (!mhi_chan->buf_ring.elements)
> > > +			mhi_chan->buf_ring.elements = mhi_chan->tre_ring.elements;
> > > +		mhi_chan->er_index = ch_cfg->event_ring;
> > > +		mhi_chan->dir = ch_cfg->dir;
> > > +
> > > +		/*
> > > +		 * For most channels, chtype is identical to channel direction.
> > > +		 * So if it is not defined, assign channel direction to chtype
> > > +		 */
> > > +		mhi_chan->type = ch_cfg->type;
> > > +		if (!mhi_chan->type)
> > > +			mhi_chan->type = (enum mhi_ch_type)mhi_chan->dir;
> > > +
> > > +		mhi_chan->ee_mask = ch_cfg->ee_mask;
> > > +
> > > +		mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
> > > +		mhi_chan->xfer_type = ch_cfg->data_type;
> > > +
> > > +		mhi_chan->lpm_notify = ch_cfg->lpm_notify;
> > > +		mhi_chan->offload_ch = ch_cfg->offload_channel;
> > > +		mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch;
> > > +		mhi_chan->pre_alloc = ch_cfg->auto_queue;
> > > +		mhi_chan->auto_start = ch_cfg->auto_start;
> > > +
> > > +		/*
> > > +		 * If the MHI host allocates buffers, then the channel direction
> > > +		 * should be DMA_FROM_DEVICE and the buffer type should be
> > > +		 * MHI_BUF_RAW
> > > +		 */
> > > +		if (mhi_chan->pre_alloc && (mhi_chan->dir != DMA_FROM_DEVICE ||
> > > +				mhi_chan->xfer_type != MHI_BUF_RAW)) {
> > > +			dev_err(mhi_cntrl->dev,
> > > +				"Invalid channel configuration\n");
> > > +			goto error_chan_cfg;
> > > +		}
> > > +
> > > +		/*
> > > +		 * Bi-directional and direction-less channels must be
> > > +		 * offload channels
> > > +		 */
> > > +		if ((mhi_chan->dir == DMA_BIDIRECTIONAL ||
> > > +		     mhi_chan->dir == DMA_NONE) && !mhi_chan->offload_ch) {
> > > +			dev_err(mhi_cntrl->dev,
> > > +				"Invalid channel configuration\n");
> > > +			goto error_chan_cfg;
> > > +		}
> > > +
> > > +		if (!mhi_chan->offload_ch) {
> > > +			mhi_chan->db_cfg.brstmode = ch_cfg->doorbell;
> > > +			if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode)) {
> > > +				dev_err(mhi_cntrl->dev,
> > > +					"Invalid doorbell mode\n");
> > > +				goto error_chan_cfg;
> > > +			}
> > > +		}
> > > +
> > > +		mhi_chan->configured = true;
> > > +
> > > +		if (mhi_chan->lpm_notify)
> > > +			list_add_tail(&mhi_chan->node, &mhi_cntrl->lpm_chans);
> > > +	}
> > > +
> > > +	return 0;
> > > +
> > > +error_chan_cfg:
> > > +	vfree(mhi_cntrl->mhi_chan);
> > > +
> > > +	return -EINVAL;
> > > +}
> > > +
> > > +static int parse_config(struct mhi_controller *mhi_cntrl,
> > > +??????????? struct mhi_controller_config *config)
> > > +{
> > > +??? int ret;
> > > +
> > > +??? /* Parse MHI channel configuration */
> > > +??? ret = parse_ch_cfg(mhi_cntrl, config);
> > > +??? if (ret)
> > > +??????? return ret;
> > > +
> > > +??? /* Parse MHI event configuration */
> > > +??? ret = parse_ev_cfg(mhi_cntrl, config);
> > > +??? if (ret)
> > > +??????? goto error_ev_cfg;
> > > +
> > > +??? mhi_cntrl->timeout_ms = config->timeout_ms;
> > > +??? if (!mhi_cntrl->timeout_ms)
> > > +??????? mhi_cntrl->timeout_ms = MHI_TIMEOUT_MS;
> > > +
> > > +??? mhi_cntrl->bounce_buf = config->use_bounce_buf;
> > > +??? mhi_cntrl->buffer_len = config->buf_len;
> > > +??? if (!mhi_cntrl->buffer_len)
> > > +??????? mhi_cntrl->buffer_len = MHI_MAX_MTU;
> > > +
> > > +??? return 0;
> > > +
> > > +error_ev_cfg:
> > > +??? vfree(mhi_cntrl->mhi_chan);
> > > +
> > > +??? return ret;
> > > +}
> > > +
> > > +int mhi_register_controller(struct mhi_controller *mhi_cntrl,
> > > +??????????????? struct mhi_controller_config *config)
> > > +{
> > > +??? int ret;
> > > +??? int i;
> > > +??? struct mhi_event *mhi_event;
> > > +??? struct mhi_chan *mhi_chan;
> > > +??? struct mhi_cmd *mhi_cmd;
> > > +??? struct mhi_device *mhi_dev;
> > > +
>
> You need a null check on mhi_cntrl right here, otherwise you could cause a
> panic with the following if.
>
Ack.
> > > +	if (!mhi_cntrl->runtime_get || !mhi_cntrl->runtime_put)
> > > +		return -EINVAL;
> > > +
> > > +	if (!mhi_cntrl->status_cb || !mhi_cntrl->link_status)
> > > +		return -EINVAL;
> > > +
> > > +	ret = parse_config(mhi_cntrl, config);
> > > +	if (ret)
> > > +		return -EINVAL;
> > > +
> > > +	mhi_cntrl->mhi_cmd = kcalloc(NR_OF_CMD_RINGS,
> > > +				     sizeof(*mhi_cntrl->mhi_cmd), GFP_KERNEL);
> > > +	if (!mhi_cntrl->mhi_cmd) {
> > > +		ret = -ENOMEM;
> > > +		goto error_alloc_cmd;
> > > +	}
> > > +
> > > +	INIT_LIST_HEAD(&mhi_cntrl->transition_list);
> > > +	spin_lock_init(&mhi_cntrl->transition_lock);
> > > +	spin_lock_init(&mhi_cntrl->wlock);
> > > +	init_waitqueue_head(&mhi_cntrl->state_event);
> > > +
> > > +	mhi_cmd = mhi_cntrl->mhi_cmd;
> > > +	for (i = 0; i < NR_OF_CMD_RINGS; i++, mhi_cmd++)
> > > +		spin_lock_init(&mhi_cmd->lock);
> > > +
> > > +	mhi_event = mhi_cntrl->mhi_event;
> > > +	for (i = 0; i < mhi_cntrl->total_ev_rings; i++, mhi_event++) {
> > > +		/* Skip for offload events */
> > > +		if (mhi_event->offload_ev)
> > > +			continue;
> > > +
> > > +		mhi_event->mhi_cntrl = mhi_cntrl;
> > > +		spin_lock_init(&mhi_event->lock);
> > > +	}
> > > +
> > > +	mhi_chan = mhi_cntrl->mhi_chan;
> > > +	for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
> > > +		mutex_init(&mhi_chan->mutex);
> > > +		init_completion(&mhi_chan->completion);
> > > +		rwlock_init(&mhi_chan->lock);
> > > +	}
> > > +
> > > +	/* Register controller with MHI bus */
> > > +	mhi_dev = mhi_alloc_device(mhi_cntrl);
> > > +	if (IS_ERR(mhi_dev)) {
> > > +		dev_err(mhi_cntrl->dev, "Failed to allocate device\n");
> > > +		ret = PTR_ERR(mhi_dev);
> > > +		goto error_alloc_dev;
> > > +	}
> > > +
> > > +	mhi_dev->dev_type = MHI_DEVICE_CONTROLLER;
> > > +	mhi_dev->mhi_cntrl = mhi_cntrl;
> > > +	dev_set_name(&mhi_dev->dev, "%s", mhi_cntrl->name);
> > > +
> > > +	/* Init wakeup source */
> > > +	device_init_wakeup(&mhi_dev->dev, true);
> > > +
> > > +	ret = device_add(&mhi_dev->dev);
> > > +	if (ret)
> > > +		goto error_add_dev;
> > > +
> > > +	mhi_cntrl->mhi_dev = mhi_dev;
> > > +
> > > +	return 0;
> > > +
> > > +error_add_dev:
> > > +	mhi_dealloc_device(mhi_cntrl, mhi_dev);
> > > +
> > > +error_alloc_dev:
> > > +	kfree(mhi_cntrl->mhi_cmd);
> > > +
> > > +error_alloc_cmd:
> > > +	vfree(mhi_cntrl->mhi_chan);
> > > +	kfree(mhi_cntrl->mhi_event);
> > > +
> > > +	return ret;
> > > +}
> > > +EXPORT_SYMBOL_GPL(mhi_register_controller);
> > > +
> > > +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl)
> > > +{
> > > +	struct mhi_device *mhi_dev = mhi_cntrl->mhi_dev;
> > > +
> > > +	kfree(mhi_cntrl->mhi_cmd);
> > > +	kfree(mhi_cntrl->mhi_event);
> > > +	vfree(mhi_cntrl->mhi_chan);
> > > +
> > > +	device_del(&mhi_dev->dev);
> > > +	put_device(&mhi_dev->dev);
> > > +}
> > > +EXPORT_SYMBOL_GPL(mhi_unregister_controller);
> > > +
> > > +static void mhi_release_device(struct device *dev)
> > > +{
> > > +	struct mhi_device *mhi_dev = to_mhi_device(dev);
> > > +
> > > +	if (mhi_dev->ul_chan)
> > > +		mhi_dev->ul_chan->mhi_dev = NULL;
> > > +
> > > +	if (mhi_dev->dl_chan)
> > > +		mhi_dev->dl_chan->mhi_dev = NULL;
> > > +
> > > +	kfree(mhi_dev);
> > > +}
> > > +
> > > +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl)
> > > +{
> > > +	struct mhi_device *mhi_dev;
> > > +	struct device *dev;
> > > +
> > > +	mhi_dev = kzalloc(sizeof(*mhi_dev), GFP_KERNEL);
> > > +	if (!mhi_dev)
> > > +		return ERR_PTR(-ENOMEM);
> > > +
> > > +	dev = &mhi_dev->dev;
> > > +	device_initialize(dev);
> > > +	dev->bus = &mhi_bus_type;
> > > +	dev->release = mhi_release_device;
> > > +	dev->parent = mhi_cntrl->dev;
> > > +	mhi_dev->mhi_cntrl = mhi_cntrl;
> > > +	atomic_set(&mhi_dev->dev_wake, 0);
> > > +
> > > +	return mhi_dev;
> > > +}
> > > +
> > > +static int mhi_match(struct device *dev, struct device_driver *drv)
> > > +{
> > > +	return 0;
> > > +}
> > > +
> > > +struct bus_type mhi_bus_type = {
> > > +	.name = "mhi",
> > > +	.dev_name = "mhi",
> > > +	.match = mhi_match,
> > > +};
> > > +
> > > +static int __init mhi_init(void)
> > > +{
> > > +	return bus_register(&mhi_bus_type);
> > > +}
> > > +
> > > +static void __exit mhi_exit(void)
> > > +{
> > > +	bus_unregister(&mhi_bus_type);
> > > +}
> > > +
> > > +postcore_initcall(mhi_init);
> > > +module_exit(mhi_exit);
> > > +
> > > +MODULE_LICENSE("GPL v2");
> > > +MODULE_DESCRIPTION("MHI Host Interface");
> > > diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
> > > new file mode 100644
> > > index 000000000000..21f686d3a140
> > > --- /dev/null
> > > +++ b/drivers/bus/mhi/core/internal.h
> > > @@ -0,0 +1,169 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +/*
> > > + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> > > + *
> > > + */
> > > +
> > > +#ifndef _MHI_INT_H
> > > +#define _MHI_INT_H
> > > +
> > > +extern struct bus_type mhi_bus_type;
> > > +
> > > +/* MHI transfer completion events */
> > > +enum mhi_ev_ccs {
> > > +	MHI_EV_CC_INVALID = 0x0,
> > > +	MHI_EV_CC_SUCCESS = 0x1,
> > > +	MHI_EV_CC_EOT = 0x2,
> > > +	MHI_EV_CC_OVERFLOW = 0x3,
> > > +	MHI_EV_CC_EOB = 0x4,
> > > +	MHI_EV_CC_OOB = 0x5,
> > > +	MHI_EV_CC_DB_MODE = 0x6,
> > > +	MHI_EV_CC_UNDEFINED_ERR = 0x10,
> > > +	MHI_EV_CC_BAD_TRE = 0x11,
> >
> > Perhaps a quick comment expanding the "EOT", "EOB", "OOB" acronyms? I
> > feel like those might not be obvious to someone not familiar with the
> > protocol.
> >
> > > +};
> > > +
> > > +enum mhi_ch_state {
> > > +	MHI_CH_STATE_DISABLED = 0x0,
> > > +	MHI_CH_STATE_ENABLED = 0x1,
> > > +	MHI_CH_STATE_RUNNING = 0x2,
> > > +	MHI_CH_STATE_SUSPENDED = 0x3,
> > > +	MHI_CH_STATE_STOP = 0x4,
> > > +	MHI_CH_STATE_ERROR = 0x5,
> > > +};
> > > +
> > > +#define MHI_INVALID_BRSTMODE(mode) (mode != MHI_DB_BRST_DISABLE && \
> > > +				    mode != MHI_DB_BRST_ENABLE)
> > > +
> > > +#define NR_OF_CMD_RINGS		1
> > > +#define CMD_EL_PER_RING		128
> > > +#define PRIMARY_CMD_RING	0
> > > +#define MHI_MAX_MTU		0xffff
> > > +
> > > +enum mhi_er_type {
> > > +	MHI_ER_TYPE_INVALID = 0x0,
> > > +	MHI_ER_TYPE_VALID = 0x1,
> > > +};
> > > +
> > > +enum mhi_ch_ee_mask {
> > > +	MHI_CH_EE_PBL = BIT(MHI_EE_PBL),
> >
> > MHI_EE_PBL does not appear to be defined. Are you perhaps missing an
> > include?
> >
Answered in other thread.
> > > +	MHI_CH_EE_SBL = BIT(MHI_EE_SBL),
> > > +	MHI_CH_EE_AMSS = BIT(MHI_EE_AMSS),
> > > +	MHI_CH_EE_RDDM = BIT(MHI_EE_RDDM),
> > > +	MHI_CH_EE_PTHRU = BIT(MHI_EE_PTHRU),
> > > +	MHI_CH_EE_WFW = BIT(MHI_EE_WFW),
> > > +	MHI_CH_EE_EDL = BIT(MHI_EE_EDL),
> > > +};
> > > +
> > > +struct db_cfg {
> > > +	bool reset_req;
> > > +	bool db_mode;
> > > +	u32 pollcfg;
> > > +	enum mhi_db_brst_mode brstmode;
> > > +	dma_addr_t db_val;
> > > +	void (*process_db)(struct mhi_controller *mhi_cntrl,
> > > +			   struct db_cfg *db_cfg, void __iomem *io_addr,
> > > +			   dma_addr_t db_val);
> > > +};
> > > +
> > > +struct mhi_ring {
> > > +	dma_addr_t dma_handle;
> > > +	dma_addr_t iommu_base;
> > > +	u64 *ctxt_wp; /* point to ctxt wp */
> > > +	void *pre_aligned;
> > > +	void *base;
> > > +	void *rp;
> > > +	void *wp;
> > > +	size_t el_size;
> > > +	size_t len;
> > > +	size_t elements;
> > > +	size_t alloc_size;
> > > +	void __iomem *db_addr;
> > > +};
> > > +
> > > +struct mhi_cmd {
> > > +	struct mhi_ring ring;
> > > +	spinlock_t lock;
> > > +};
> > > +
> > > +struct mhi_buf_info {
> > > +	dma_addr_t p_addr;
> > > +	void *v_addr;
> > > +	void *bb_addr;
> > > +	void *wp;
> > > +	size_t len;
> > > +	void *cb_buf;
> > > +	enum dma_data_direction dir;
> > > +};
> > > +
> > > +struct mhi_event {
> > > +	u32 er_index;
> > > +	u32 intmod;
> > > +	u32 irq;
> > > +	int chan; /* this event ring is dedicated to a channel (optional) */
> > > +	u32 priority;
> > > +	enum mhi_er_data_type data_type;
> > > +	struct mhi_ring ring;
> > > +	struct db_cfg db_cfg;
> > > +	bool hw_ring;
> > > +	bool cl_manage;
> > > +	bool offload_ev; /* managed by a device driver */
> > > +	spinlock_t lock;
> > > +	struct mhi_chan *mhi_chan; /* dedicated to channel */
> > > +	struct tasklet_struct task;
> > > +	int (*process_event)(struct mhi_controller *mhi_cntrl,
> > > +			     struct mhi_event *mhi_event,
> > > +			     u32 event_quota);
> > > +	struct mhi_controller *mhi_cntrl;
> > > +};
> > > +
> > > +struct mhi_chan {
> > > +	u32 chan;
> > > +	const char *name;
> > > +	/*
> > > +	 * Important: When consuming, increment tre_ring first and when
> > > +	 * releasing, decrement buf_ring first. If tre_ring has space, buf_ring
> > > +	 * is guaranteed to have space so we do not need to check both rings.
> > > +	 */
> > > +	struct mhi_ring buf_ring;
> > > +	struct mhi_ring tre_ring;
> > > +	u32 er_index;
> > > +	u32 intmod;
> > > +	enum mhi_ch_type type;
> > > +	enum dma_data_direction dir;
> > > +	struct db_cfg db_cfg;
> > > +	enum mhi_ch_ee_mask ee_mask;
> > > +	enum mhi_buf_type xfer_type;
> > > +	enum mhi_ch_state ch_state;
> > > +	enum mhi_ev_ccs ccs;
> > > +	bool lpm_notify;
> > > +	bool configured;
> > > +	bool offload_ch;
> > > +	bool pre_alloc;
> > > +	bool auto_start;
> > > +	int (*gen_tre)(struct mhi_controller *mhi_cntrl,
> > > +		       struct mhi_chan *mhi_chan, void *buf, void *cb,
> > > +		       size_t len, enum mhi_flags flags);
> > > +	int (*queue_xfer)(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
> > > +			  void *buf, size_t len, enum mhi_flags mflags);
> > > +	struct mhi_device *mhi_dev;
> > > +	void (*xfer_cb)(struct mhi_device *mhi_dev, struct mhi_result *result);
> > > +	struct mutex mutex;
> > > +	struct completion completion;
> > > +	rwlock_t lock;
> > > +	struct list_head node;
> > > +};
> > > +
> > > +/* Default MHI timeout */
> > > +#define MHI_TIMEOUT_MS (1000)
> > > +
> > > +struct mhi_device *mhi_alloc_device(struct mhi_controller *mhi_cntrl);
> > > +static inline void mhi_dealloc_device(struct mhi_controller *mhi_cntrl,
> > > +				      struct mhi_device *mhi_dev)
> > > +{
> > > +	kfree(mhi_dev);
> > > +}
> > > +
> > > +int mhi_destroy_device(struct device *dev, void *data);
> > > +void mhi_create_devices(struct mhi_controller *mhi_cntrl);
> > > +
> > > +#endif /* _MHI_INT_H */
> > > diff --git a/include/linux/mhi.h b/include/linux/mhi.h
> > > new file mode 100644
> > > index 000000000000..69cf9a4b06c7
> > > --- /dev/null
> > > +++ b/include/linux/mhi.h
> > > @@ -0,0 +1,438 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +/*
> > > + * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
> > > + *
> > > + */
> > > +#ifndef _MHI_H_
> > > +#define _MHI_H_
> > > +
> > > +#include <linux/device.h>
> > > +#include <linux/dma-direction.h>
> > > +#include <linux/mutex.h>
> > > +#include <linux/rwlock_types.h>
> > > +#include <linux/slab.h>
> > > +#include <linux/spinlock_types.h>
> > > +#include <linux/wait.h>
> > > +#include <linux/workqueue.h>
> > > +
> > > +struct mhi_chan;
> > > +struct mhi_event;
> > > +struct mhi_ctxt;
> > > +struct mhi_cmd;
> > > +struct mhi_buf_info;
> > > +
> > > +/**
> > > + * enum mhi_callback - MHI callback
> > > + * @MHI_CB_IDLE: MHI entered idle state
> > > + * @MHI_CB_PENDING_DATA: New data available for client to process
> > > + * @MHI_CB_LPM_ENTER: MHI host entered low power mode
> > > + * @MHI_CB_LPM_EXIT: MHI host about to exit low power mode
> > > + * @MHI_CB_EE_RDDM: MHI device entered RDDM exec env
> > > + * @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env
> > > + * @MHI_CB_SYS_ERROR: MHI device entered error state (may recover)
> > > + * @MHI_CB_FATAL_ERROR: MHI device entered fatal error state
> > > + */
> > > +enum mhi_callback {
> > > +	MHI_CB_IDLE,
> > > +	MHI_CB_PENDING_DATA,
> > > +	MHI_CB_LPM_ENTER,
> > > +	MHI_CB_LPM_EXIT,
> > > +	MHI_CB_EE_RDDM,
> > > +	MHI_CB_EE_MISSION_MODE,
> > > +	MHI_CB_SYS_ERROR,
> > > +	MHI_CB_FATAL_ERROR,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_flags - Transfer flags
> > > + * @MHI_EOB: End of buffer for bulk transfer
> > > + * @MHI_EOT: End of transfer
> > > + * @MHI_CHAIN: Linked transfer
> > > + */
> > > +enum mhi_flags {
> > > +	MHI_EOB,
> > > +	MHI_EOT,
> > > +	MHI_CHAIN,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_device_type - Device types
> > > + * @MHI_DEVICE_XFER: Handles data transfer
> > > + * @MHI_DEVICE_TIMESYNC: Use for timesync feature
> > > + * @MHI_DEVICE_CONTROLLER: Control device
> > > + */
> > > +enum mhi_device_type {
> > > +	MHI_DEVICE_XFER,
> > > +	MHI_DEVICE_TIMESYNC,
> > > +	MHI_DEVICE_CONTROLLER,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_ch_type - Channel types
> > > + * @MHI_CH_TYPE_INVALID: Invalid channel type
> > > + * @MHI_CH_TYPE_OUTBOUND: Outbound channel to the device
> > > + * @MHI_CH_TYPE_INBOUND: Inbound channel from the device
> > > + * @MHI_CH_TYPE_INBOUND_COALESCED: Coalesced channel for the device to combine
> > > + *				   multiple packets and send them as a single
> > > + *				   large packet to reduce CPU consumption
> > > + */
> > > +enum mhi_ch_type {
> > > +	MHI_CH_TYPE_INVALID = 0,
> > > +	MHI_CH_TYPE_OUTBOUND = DMA_TO_DEVICE,
> > > +	MHI_CH_TYPE_INBOUND = DMA_FROM_DEVICE,
> > > +	MHI_CH_TYPE_INBOUND_COALESCED = 3,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_ee_type - Execution environment types
> > > + * @MHI_EE_PBL: Primary Bootloader
> > > + * @MHI_EE_SBL: Secondary Bootloader
> > > + * @MHI_EE_AMSS: Modem, aka the primary runtime EE
> > > + * @MHI_EE_RDDM: RAM dump download mode
> > > + * @MHI_EE_WFW: WLAN firmware mode
> > > + * @MHI_EE_PTHRU: Passthrough
> > > + * @MHI_EE_EDL: Embedded downloader
> > > + */
> > > +enum mhi_ee_type {
> > > +	MHI_EE_PBL,
> > > +	MHI_EE_SBL,
> > > +	MHI_EE_AMSS,
> > > +	MHI_EE_RDDM,
> > > +	MHI_EE_WFW,
> > > +	MHI_EE_PTHRU,
> > > +	MHI_EE_EDL,
> > > +	MHI_EE_MAX_SUPPORTED = MHI_EE_EDL,
> > > +	MHI_EE_DISABLE_TRANSITION, /* local EE, not related to mhi spec */
> > > +	MHI_EE_NOT_SUPPORTED,
> > > +	MHI_EE_MAX,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_buf_type - Accepted buffer type for the channel
> > > + * @MHI_BUF_RAW: Raw buffer
> > > + * @MHI_BUF_SKB: SKB struct
> > > + * @MHI_BUF_SCLIST: Scatter-gather list
> > > + * @MHI_BUF_NOP: CPU offload channel, host does not accept transfer
> > > + * @MHI_BUF_DMA: Receive DMA address mapped by client
> > > + * @MHI_BUF_RSC_DMA: RSC type premapped buffer
> >
> > Maybe it's just me, but what is "RSC"?
> >
> > > + */
> > > +enum mhi_buf_type {
> > > +	MHI_BUF_RAW,
> > > +	MHI_BUF_SKB,
> > > +	MHI_BUF_SCLIST,
> > > +	MHI_BUF_NOP,
> > > +	MHI_BUF_DMA,
> > > +	MHI_BUF_RSC_DMA,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_er_data_type - Event ring data types
> > > + * @MHI_ER_DATA: Only client data over this ring
> > > + * @MHI_ER_CTRL: MHI control data and client data
> > > + * @MHI_ER_TSYNC: Time sync events
> > > + */
> > > +enum mhi_er_data_type {
> > > +	MHI_ER_DATA,
> > > +	MHI_ER_CTRL,
> > > +	MHI_ER_TSYNC,
> > > +};
> > > +
> > > +/**
> > > + * enum mhi_db_brst_mode - Doorbell mode
> > > + * @MHI_DB_BRST_DISABLE: Burst mode disable
> > > + * @MHI_DB_BRST_ENABLE: Burst mode enable
> > > + */
> > > +enum mhi_db_brst_mode {
> > > +	MHI_DB_BRST_DISABLE = 0x2,
> > > +	MHI_DB_BRST_ENABLE = 0x3,
> > > +};
> > > +
> > > +/**
> > > + * struct mhi_channel_config - Channel configuration structure for controller
> > > + * @num: The number assigned to this channel
> > > + * @name: The name of this channel
> > > + * @num_elements: The number of elements that can be queued to this channel
> > > + * @local_elements: The local ring length of the channel
> > > + * @event_ring: The event ring index that services this channel
> > > + * @dir: Direction that data may flow on this channel
> > > + * @type: Channel type
> > > + * @ee_mask: Execution Environment mask for this channel
> >
> > But the mask defines are in internal.h, so how is a client supposed to
> > know what they are?
> >
> > > + * @pollcfg: Polling configuration for burst mode. 0 is default. Milliseconds
> > > + *	     for UL channels, multiple of 8 ring elements for DL channels
> > > + * @data_type: Data type accepted by this channel
> > > + * @doorbell: Doorbell mode
> > > + * @lpm_notify: The channel master requires low power mode notifications
> > > + * @offload_channel: The client manages the channel completely
> > > + * @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition
> > > + * @auto_queue: Framework will automatically queue buffers for DL traffic
> > > + * @auto_start: Automatically start (open) this channel
> > > + */
> > > +struct mhi_channel_config {
> > > +	u32 num;
> > > +	char *name;
> > > +	u32 num_elements;
> > > +	u32 local_elements;
> > > +	u32 event_ring;
> > > +	enum dma_data_direction dir;
> > > +	enum mhi_ch_type type;
>
> Why do we have "dir" and "type" when they are the same thing?
>
The motive of mhi_ch_type is to define additional channel-specific
properties beyond the channel direction. Right now we don't make
full use of it, but it is kept for future use.
> > > +	u32 ee_mask;
> > > +	u32 pollcfg;
> > > +	enum mhi_buf_type data_type;
> > > +	enum mhi_db_brst_mode doorbell;
> > > +	bool lpm_notify;
> > > +	bool offload_channel;
> > > +	bool doorbell_mode_switch;
> > > +	bool auto_queue;
> > > +	bool auto_start;
> > > +};
> > > +
> > > +/**
> > > + * struct mhi_event_config - Event ring configuration structure for controller
> > > + * @num_elements: The number of elements that can be queued to this ring
> > > + * @irq_moderation_ms: Delay irq for additional events to be aggregated
> > > + * @irq: IRQ associated with this ring
> > > + * @channel: Dedicated channel number. U32_MAX indicates a non-dedicated ring
> > > + * @mode: Doorbell mode
> > > + * @data_type: Type of data this ring will process
> > > + * @hardware_event: This ring is associated with hardware channels
> > > + * @client_managed: This ring is client managed
> > > + * @offload_channel: This ring is associated with an offloaded channel
> > > + * @priority: Priority of this ring. Use 1 for now
> > > + */
> > > +struct mhi_event_config {
> > > +	u32 num_elements;
> > > +	u32 irq_moderation_ms;
> > > +	u32 irq;
> > > +	u32 channel;
> > > +	enum mhi_db_brst_mode mode;
> > > +	enum mhi_er_data_type data_type;
> > > +	bool hardware_event;
> > > +	bool client_managed;
> > > +	bool offload_channel;
> > > +	u32 priority;
> > > +};
> > > +
> > > +/**
> > > + * struct mhi_controller_config - Root MHI controller configuration
> > > + * @max_channels: Maximum number of channels supported
> > > + * @timeout_ms: Timeout value for operations. 0 means use default
> > > + * @use_bounce_buf: Use a bounce buffer pool due to limited DDR access
> > > + * @m2_no_db: Host is not allowed to ring DB in M2 state
> > > + * @buf_len: Size of automatically allocated buffers. 0 means use default
> > > + * @num_channels: Number of channels defined in @ch_cfg
> > > + * @ch_cfg: Array of defined channels
> > > + * @num_events: Number of event rings defined in @event_cfg
> > > + * @event_cfg: Array of defined event rings
> > > + */
> > > +struct mhi_controller_config {
> > > +	u32 max_channels;
> > > +	u32 timeout_ms;
> > > +	bool use_bounce_buf;
> > > +	bool m2_no_db;
> > > +	u32 buf_len;
> > > +	u32 num_channels;
> > > +	struct mhi_channel_config *ch_cfg;
> > > +	u32 num_events;
> > > +	struct mhi_event_config *event_cfg;
> > > +};
> > > +
> > > +/**
> > > + * struct mhi_controller - Master MHI controller structure
>
> Quite a bit of this needs to be initialized by the entity calling
> mhi_register_controller(), but it's not clear what. I'm thinking that since
> we have a config structure, all of that should be copied/moved into the
> config so that the caller of mhi_register_controller() provides an empty
> mhi_controller struct / a populated config struct and receives an
> initialized mhi_controller instance.
>
Agree. The problem with moving these configurable fields to a separate
struct is the usage of `dev` pointer. It doesn't make sense to have the
dev pointer in the config structure. So, I've decided to mark the members
which were required/optional to be provided by the controller driver.
The change is here FYI: https://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi.git/commit/?h=mhi-v1&id=997dc05ae2ffa787e3948c34c50f3ea7b6cecb67
Thanks,
Mani
> > > + * @name: Name of the controller
> > > + * @dev: Driver model device node for the controller
> > > + * @mhi_dev: MHI device instance for the controller
> > > + * @dev_id: Device ID of the controller
> > > + * @bus_id: Physical bus instance used by the controller
> > > + * @regs: Base address of MHI MMIO register space
> > > + * @iova_start: IOMMU starting address for data
> > > + * @iova_stop: IOMMU stop address for data
> > > + * @fw_image: Firmware image name for normal booting
> > > + * @edl_image: Firmware image name for emergency download mode
> > > + * @fbc_download: MHI host needs to do complete image transfer
> > > + * @sbl_size: SBL image size
> > > + * @seg_len: BHIe vector size
> > > + * @max_chan: Maximum number of channels the controller supports
> > > + * @mhi_chan: Points to the channel configuration table
> > > + * @lpm_chans: List of channels that require LPM notifications
> > > + * @total_ev_rings: Total # of event rings allocated
> > > + * @hw_ev_rings: Number of hardware event rings
> > > + * @sw_ev_rings: Number of software event rings
> > > + * @nr_irqs_req: Number of IRQs required to operate
> > > + * @nr_irqs: Number of IRQ allocated by bus master
> > > + * @irq: base irq # to request
> > > + * @mhi_event: MHI event ring configurations table
> > > + * @mhi_cmd: MHI command ring configurations table
> > > + * @mhi_ctxt: MHI device context, shared memory between host and device
> > > + * @timeout_ms: Timeout in ms for state transitions
> > > + * @pm_mutex: Mutex for suspend/resume operation
> > > + * @pre_init: MHI host needs to do pre-initialization before power up
> > > + * @pm_lock: Lock for protecting MHI power management state
> > > + * @pm_state: MHI power management state
> > > + * @db_access: DB access states
> > > + * @ee: MHI device execution environment
> > > + * @wake_set: Device wakeup set flag
> > > + * @dev_wake: Device wakeup count
> > > + * @alloc_size: Total memory allocations size of the controller
> > > + * @pending_pkts: Pending packets for the controller
> > > + * @transition_list: List of MHI state transitions
> > > + * @wlock: Lock for protecting device wakeup
> > > + * @M0: M0 state counter for debugging
> > > + * @M2: M2 state counter for debugging
> > > + * @M3: M3 state counter for debugging
> > > + * @M3_FAST: M3 Fast state counter for debugging
> > > + * @st_worker: State transition worker
> > > + * @fw_worker: Firmware download worker
> > > + * @syserr_worker: System error worker
> > > + * @state_event: State change event
> > > + * @status_cb: CB function to notify various power states to bus master
> > > + * @link_status: CB function to query link status of the device
> > > + * @wake_get: CB function to assert device wake
> > > + * @wake_put: CB function to de-assert device wake
> > > + * @wake_toggle: CB function to assert and deassert (toggle) device wake
> > > + * @runtime_get: CB function for controller runtime resume
> > > + * @runtime_put: CB function to decrement pm usage
> > > + * @lpm_disable: CB function to request disable link level low power modes
> > > + * @lpm_enable: CB function to request enable link level low power modes again
> > > + * @bounce_buf: Use of bounce buffer
> > > + * @buffer_len: Bounce buffer length
> > > + * @priv_data: Points to bus master's private data
> > > + */
> > > +struct mhi_controller {
> > > +	const char *name;
> > > +	struct device *dev;
> > > +	struct mhi_device *mhi_dev;
> > > +	u32 dev_id;
> > > +	u32 bus_id;
> > > +	void __iomem *regs;
> > > +	dma_addr_t iova_start;
> > > +	dma_addr_t iova_stop;
> > > +	const char *fw_image;
> > > +	const char *edl_image;
> > > +	bool fbc_download;
> > > +	size_t sbl_size;
> > > +	size_t seg_len;
> > > +	u32 max_chan;
> > > +	struct mhi_chan *mhi_chan;
> > > +	struct list_head lpm_chans;
> > > +	u32 total_ev_rings;
> > > +	u32 hw_ev_rings;
> > > +	u32 sw_ev_rings;
> > > +	u32 nr_irqs_req;
> > > +	u32 nr_irqs;
> > > +	int *irq;
> > > +
> > > +	struct mhi_event *mhi_event;
> > > +	struct mhi_cmd *mhi_cmd;
> > > +	struct mhi_ctxt *mhi_ctxt;
> > > +
> > > +	u32 timeout_ms;
> > > +	struct mutex pm_mutex;
> > > +	bool pre_init;
> > > +	rwlock_t pm_lock;
> > > +	u32 pm_state;
> > > +	u32 db_access;
> > > +	enum mhi_ee_type ee;
> > > +	bool wake_set;
> > > +	atomic_t dev_wake;
> > > +	atomic_t alloc_size;
> > > +	atomic_t pending_pkts;
> > > +	struct list_head transition_list;
> > > +	spinlock_t transition_lock;
> > > +	spinlock_t wlock;
> > > +	u32 M0, M2, M3, M3_FAST;
> > > +	struct work_struct st_worker;
> > > +	struct work_struct fw_worker;
> > > +	struct work_struct syserr_worker;
> > > +	wait_queue_head_t state_event;
> > > +
> > > +	void (*status_cb)(struct mhi_controller *mhi_cntrl, void *priv,
> > > +			  enum mhi_callback cb);
> > > +	int (*link_status)(struct mhi_controller *mhi_cntrl, void *priv);
> > > +	void (*wake_get)(struct mhi_controller *mhi_cntrl, bool override);
> > > +	void (*wake_put)(struct mhi_controller *mhi_cntrl, bool override);
> > > +	void (*wake_toggle)(struct mhi_controller *mhi_cntrl);
> > > +	int (*runtime_get)(struct mhi_controller *mhi_cntrl, void *priv);
> > > +	void (*runtime_put)(struct mhi_controller *mhi_cntrl, void *priv);
> > > +	void (*lpm_disable)(struct mhi_controller *mhi_cntrl, void *priv);
> > > +	void (*lpm_enable)(struct mhi_controller *mhi_cntrl, void *priv);
> > > +
> > > +	bool bounce_buf;
> > > +	size_t buffer_len;
> > > +	void *priv_data;
> > > +};
> > > +
> > > +/**
> > > + * struct mhi_device - Structure representing a MHI device which binds
> > > + *			to channels
> > > + * @dev: Driver model device node for the MHI device
> > > + * @tiocm: Device current terminal settings
> > > + * @id: Pointer to MHI device ID struct
> > > + * @chan_name: Name of the channel to which the device binds
> > > + * @mhi_cntrl: Controller the device belongs to
> > > + * @ul_chan: UL channel for the device
> > > + * @dl_chan: DL channel for the device
> > > + * @dev_wake: Device wakeup counter
> > > + * @dev_type: MHI device type
> > > + */
> > > +struct mhi_device {
> > > +	struct device dev;
> > > +	u32 tiocm;
> > > +	const struct mhi_device_id *id;
> > > +	const char *chan_name;
> > > +	struct mhi_controller *mhi_cntrl;
> > > +	struct mhi_chan *ul_chan;
> > > +	struct mhi_chan *dl_chan;
> > > +	atomic_t dev_wake;
> > > +	enum mhi_device_type dev_type;
> > > +};
> > > +
> > > +/**
> > > + * struct mhi_result - Completed buffer information
> > > + * @buf_addr: Address of data buffer
> > > + * @dir: Channel direction
> > > + * @bytes_xfer: # of bytes transferred
> > > + * @transaction_status: Status of last transaction
> > > + */
> > > +struct mhi_result {
> > > +	void *buf_addr;
> > > +	enum dma_data_direction dir;
> > > +	size_t bytes_xferd;
> >
> > Description says this is named "bytes_xfer"
> >
> > > +??? int transaction_status;
> > > +};
> > > +
> > > +#define to_mhi_device(dev) container_of(dev, struct mhi_device, dev)
> > > +
> > > +/**
> > > + * mhi_controller_set_devdata - Set MHI controller private data
> > > + * @mhi_cntrl: MHI controller to set data
> > > + */
> > > +static inline void mhi_controller_set_devdata(struct mhi_controller
> > > *mhi_cntrl,
> > > +???????????????????? void *priv)
> > > +{
> > > +??? mhi_cntrl->priv_data = priv;
> > > +}
> > > +
> > > +/**
> > > + * mhi_controller_get_devdata - Get MHI controller private data
> > > + * @mhi_cntrl: MHI controller to get data
> > > + */
> > > +static inline void *mhi_controller_get_devdata(struct
> > > mhi_controller *mhi_cntrl)
> > > +{
> > > +??? return mhi_cntrl->priv_data;
> > > +}
> > > +
> > > +/**
> > > + * mhi_register_controller - Register MHI controller
> > > + * @mhi_cntrl: MHI controller to register
> > > + * @config: Configuration to use for the controller
> > > + */
> > > +int mhi_register_controller(struct mhi_controller *mhi_cntrl,
> > > +??????????????? struct mhi_controller_config *config);
> > > +
> > > +/**
> > > + * mhi_unregister_controller - Unregister MHI controller
> > > + * @mhi_cntrl: MHI controller to unregister
> > > + */
> > > +void mhi_unregister_controller(struct mhi_controller *mhi_cntrl);
> > > +
> > > +#endif /* _MHI_H_ */
> > > diff --git a/include/linux/mod_devicetable.h
> > > b/include/linux/mod_devicetable.h
> > > index e3596db077dc..be15e997fe39 100644
> > > --- a/include/linux/mod_devicetable.h
> > > +++ b/include/linux/mod_devicetable.h
> > > @@ -821,4 +821,16 @@ struct wmi_device_id {
> > > ????? const void *context;
> > > ? };
> > > +#define MHI_NAME_SIZE 32
> > > +
> > > +/**
> > > + * struct mhi_device_id - MHI device identification
> > > + * @chan: MHI channel name
> > > + * @driver_data: Driver data
> > > + */
> > > +struct mhi_device_id {
> > > +	const char chan[MHI_NAME_SIZE];
> > > +	kernel_ulong_t driver_data;
> > > +};
> > > +
> > >  #endif /* LINUX_MOD_DEVICETABLE_H */
> > >
> >
> >
>
>
> --
> Jeffrey Hugo
> Qualcomm Technologies, Inc. is a member of the
> Code Aurora Forum, a Linux Foundation Collaborative Project.
On Tue, Jan 28, 2020 at 12:07:57PM +0530, Manivannan Sadhasivam wrote:
> On Mon, Jan 27, 2020 at 07:52:13AM -0700, Jeffrey Hugo wrote:
> > > > MHI_EE_PBL does not appear to be defined. Are you perhaps missing an
> > > > include?
> > > >
> > >
> > > It is defined in mhi.h as mhi_ee_type.
> >
> > mhi.h isn't included here. You are relying on the users of this file to
> > have included that, in particular to have included it before this file. That
> > tends to result in really weird errors later on. It would be best to
> > include mhi.h here if you need these definitions.
> >
> > Although, I suspect this struct should be moved out of internal.h and into
> > mhi.h since clients need to know this, so perhaps this issue is moot.
> >
>
> Yep. I've moved this enum to mhi.h since it will be used by controller drivers.
> You can find this change in next iteration.
Both of you please learn to properly trim emails, digging through 1200
lines to try to find 2 new lines in the middle is unworkable.
greg k-h
On Tue, Jan 28, 2020 at 08:24:53AM +0100, Greg KH wrote:
> On Tue, Jan 28, 2020 at 12:07:57PM +0530, Manivannan Sadhasivam wrote:
> > On Mon, Jan 27, 2020 at 07:52:13AM -0700, Jeffrey Hugo wrote:
> > > > > MHI_EE_PBL does not appear to be defined. Are you perhaps missing an
> > > > > include?
> > > > >
> > > >
> > > > It is defined in mhi.h as mhi_ee_type.
> > >
> > > mhi.h isn't included here. You are relying on the users of this file to
> > > have included that, in particular to have included it before this file. That
> > > tends to result in really weird errors later on. It would be best to
> > > include mhi.h here if you need these definitions.
> > >
> > > Although, I suspect this struct should be moved out of internal.h and into
> > > mhi.h since clients need to know this, so perhaps this issue is moot.
> > >
> >
> > Yep. I've moved this enum to mhi.h since it will be used by controller drivers.
> > You can find this change in next iteration.
>
> Both of you please learn to properly trim emails, digging through 1200
> lines to try to find 2 new lines in the middle is unworkable.
>
Oops. Sorry! Will do it for future replies.
Thanks,
Mani
> greg k-h
On 1/23/2020 4:18 AM, Manivannan Sadhasivam wrote:
> MHI supports downloading the device firmware over BHI/BHIe (Boot Host
> Interface) protocol. Hence, this commit adds necessary helpers, which
> will be called during device power up stage.
>
> This is based on the patch submitted by Sujeev Dias:
> https://lkml.org/lkml/2018/7/9/989
>
> Signed-off-by: Sujeev Dias <[email protected]>
> Signed-off-by: Siddartha Mohanadoss <[email protected]>
> [mani: split the data transfer patch and cleaned up for upstream]
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> ---
> drivers/bus/mhi/core/boot.c | 268 ++++++++++++++++++++++++++++++++
> drivers/bus/mhi/core/init.c | 1 +
> drivers/bus/mhi/core/internal.h | 1 +
> 3 files changed, 270 insertions(+)
>
> diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/core/boot.c
> index 0996f18c4281..36956fb6eff2 100644
> --- a/drivers/bus/mhi/core/boot.c
> +++ b/drivers/bus/mhi/core/boot.c
> @@ -20,6 +20,121 @@
> #include <linux/wait.h>
> #include "internal.h"
>
> +/* Download AMSS image to device */
Nit: I don't feel like this comment really adds any value. I feel like
it either should have more content, or be removed. What do you think?
> +static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
> + const struct mhi_buf *mhi_buf)
> +{
> +/* Download SBL image to device */
Same here. Comment seems self evident from the function name.
> +static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
> + dma_addr_t dma_addr,
> + size_t size)
> +{
> + u32 tx_status, val, session_id;
> + int i, ret;
> + void __iomem *base = mhi_cntrl->bhi;
> + rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
> + struct {
> + char *name;
> + u32 offset;
> + } error_reg[] = {
> + { "ERROR_CODE", BHI_ERRCODE },
> + { "ERROR_DBG1", BHI_ERRDBG1 },
> + { "ERROR_DBG2", BHI_ERRDBG2 },
> + { "ERROR_DBG3", BHI_ERRDBG3 },
> + { NULL },
> + };
> +
> + read_lock_bh(pm_lock);
> + if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
> + read_unlock_bh(pm_lock);
> + goto invalid_pm_state;
> + }
> +
> + /* Start SBL download via BHI protocol */
I'm wondering, what do you think about having a debug level message here
that SBL is being loaded? I think it would be handy for looking into
the device state.
> + mhi_write_reg(mhi_cntrl, base, BHI_STATUS, 0);
> + mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH,
> + upper_32_bits(dma_addr));
> + mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW,
> + lower_32_bits(dma_addr));
> + mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, size);
> + session_id = prandom_u32() & BHI_TXDB_SEQNUM_BMSK;
> + mhi_write_reg(mhi_cntrl, base, BHI_IMGTXDB, session_id);
> + read_unlock_bh(pm_lock);
> +
> +
> +static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
> + const struct firmware *firmware,
> + struct image_info *img_info)
Perhaps it's just me, but the parameters on the second and third lines do
not look aligned in the style used in the rest of the file.
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
On Tue, Jan 28, 2020 at 12:36:04PM -0700, Jeffrey Hugo wrote:
> On 1/23/2020 4:18 AM, Manivannan Sadhasivam wrote:
> > MHI supports downloading the device firmware over BHI/BHIe (Boot Host
> > Interface) protocol. Hence, this commit adds necessary helpers, which
> > will be called during device power up stage.
> >
> > This is based on the patch submitted by Sujeev Dias:
> > https://lkml.org/lkml/2018/7/9/989
> >
> > Signed-off-by: Sujeev Dias <[email protected]>
> > Signed-off-by: Siddartha Mohanadoss <[email protected]>
> > [mani: split the data transfer patch and cleaned up for upstream]
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > ---
> > drivers/bus/mhi/core/boot.c | 268 ++++++++++++++++++++++++++++++++
> > drivers/bus/mhi/core/init.c | 1 +
> > drivers/bus/mhi/core/internal.h | 1 +
> > 3 files changed, 270 insertions(+)
> >
> > diff --git a/drivers/bus/mhi/core/boot.c b/drivers/bus/mhi/core/boot.c
> > index 0996f18c4281..36956fb6eff2 100644
> > --- a/drivers/bus/mhi/core/boot.c
> > +++ b/drivers/bus/mhi/core/boot.c
> > @@ -20,6 +20,121 @@
> > #include <linux/wait.h>
> > #include "internal.h"
> > +/* Download AMSS image to device */
>
> Nit: I don't feel like this comment really adds any value. I feel like it
> either should have more content, or be removed. What do you think?
Okay, I think it can be removed.
> > +static int mhi_fw_load_amss(struct mhi_controller *mhi_cntrl,
> > + const struct mhi_buf *mhi_buf)
> > +{
>
>
> > +/* Download SBL image to device */
>
> Same here. Comment seems self evident from the function name.
Ack.
> > +static int mhi_fw_load_sbl(struct mhi_controller *mhi_cntrl,
> > + dma_addr_t dma_addr,
> > + size_t size)
> > +{
> > + u32 tx_status, val, session_id;
> > + int i, ret;
> > + void __iomem *base = mhi_cntrl->bhi;
> > + rwlock_t *pm_lock = &mhi_cntrl->pm_lock;
> > + struct {
> > + char *name;
> > + u32 offset;
> > + } error_reg[] = {
> > + { "ERROR_CODE", BHI_ERRCODE },
> > + { "ERROR_DBG1", BHI_ERRDBG1 },
> > + { "ERROR_DBG2", BHI_ERRDBG2 },
> > + { "ERROR_DBG3", BHI_ERRDBG3 },
> > + { NULL },
> > + };
> > +
> > + read_lock_bh(pm_lock);
> > + if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
> > + read_unlock_bh(pm_lock);
> > + goto invalid_pm_state;
> > + }
> > +
> > + /* Start SBL download via BHI protocol */
>
> I'm wondering, what do you think about having a debug level message here
> that SBL is being loaded? I think it would be handy for looking into the
> device state.
>
Agree. will add.
> > + mhi_write_reg(mhi_cntrl, base, BHI_STATUS, 0);
> > + mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_HIGH,
> > + upper_32_bits(dma_addr));
> > + mhi_write_reg(mhi_cntrl, base, BHI_IMGADDR_LOW,
> > + lower_32_bits(dma_addr));
> > + mhi_write_reg(mhi_cntrl, base, BHI_IMGSIZE, size);
> > + session_id = prandom_u32() & BHI_TXDB_SEQNUM_BMSK;
> > + mhi_write_reg(mhi_cntrl, base, BHI_IMGTXDB, session_id);
> > + read_unlock_bh(pm_lock);
> > +
> > +
> > +static void mhi_firmware_copy(struct mhi_controller *mhi_cntrl,
> > + const struct firmware *firmware,
> > + struct image_info *img_info)
>
> Perhaps it's just me, but the parameters on the second and third lines do not
> look aligned in the style used in the rest of the file.
>
Yep, will fix it.
Thanks,
Mani
>
> --
> Jeffrey Hugo
> Qualcomm Technologies, Inc. is a member of the
> Code Aurora Forum, a Linux Foundation Collaborative Project.