Hello,
This is the second attempt at adding the MHI (Modem Host Interface) bus
interface to the Linux kernel. MHI is a communication protocol used by
host processors to control and communicate with modems over a high-speed
peripheral bus or shared memory. The MHI protocol has been designed and
developed by Qualcomm Innovation Center, Inc., for use in their modems.
The first submission was made by Sujeev Dias of Qualcomm:
https://lkml.org/lkml/2018/4/26/1159
https://lkml.org/lkml/2018/7/9/987
This series addresses most of the review comments from Greg and Arnd on
the initial patchset. Furthermore, in order to ease the review process,
I've split the patches logically and dropped a few of them which were
not required for this initial submission.
Below is the high-level changelog:
1. Removed all DT related code
2. Got rid of PCI-specific struct members from the top-level MHI structs
3. Moved device-specific callbacks like ul_xfer() to the driver struct,
as suggested by Greg; it doesn't make sense to have callbacks in the
device struct
4. Used priv data of `struct device` instead of own priv data in
`mhi_device` as suggested by Greg. This will allow us to use
dev_set{get}_drvdata() APIs in client drivers (see the sketch after
this list)
5. Removed all debugfs related code
6. Changes to the APIs to look uniform
7. Converted the documentation to .rst and placed in its own subdirectory
8. Changes to the MHI device naming
9. Converted all uppercase variable names to appropriate lowercase ones
10. Removed custom debug code and used the dev_* ones where applicable
11. Dropped timesync, DTR, UCI, and Qcom controller related code
12. Added QRTR client driver patch
13. Added modalias support to the MHI stack as well as the client driver
for autoloading of modules (client drivers) by udev once the MHI devices
are created
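As a quick illustration of item 4, client drivers can use the standard
driver-data helpers on the MHI device (a minimal hedged sketch; the
sample_ctx name is made up):

	/* In the client driver's probe() */
	dev_set_drvdata(&mhi_dev->dev, ctx);

	/* Later, e.g. in a transfer callback */
	struct sample_ctx *ctx = dev_get_drvdata(&mhi_dev->dev);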
This series includes the MHI stack as well as the QRTR client driver,
which falls under the networking subsystem.
The reference controller implementation is here:
https://git.linaro.org/people/manivannan.sadhasivam/linux.git/tree/drivers/net/wireless/ath/ath11k/mhi.c?h=ath11k-qca6390-mhi
It will be submitted later along with the ath11k patches.
The following developers deserve explicit acknowledgement for their
contributions to the MHI code:
Sujeev Dias
Siddartha Mohanadoss
Hemant Kumar
Jeff Hugo
Thanks,
Mani
Changes in v2:
* Added put_device to mhi_dealloc_device
* Removed unused members from struct mhi_controller
* Removed the atomicity of dev_wake in struct mhi_device as it is not required
* Reordered MHI structs to avoid holes
* Used struct device name for the controller device
* Marked the required and optional mhi_controller members to help with
the controller driver implementation
* Cleanups to the MHI doc
* Removed _relaxed variants and used readl/writel
* Added comments for MHI specific acronyms
* Removed the usage of bitfields and used bitmasks for mhi_event_ctxt and
mhi_chan_ctxt
* Used __u64/__u32 types for structures representing hw states
* Added Hemant as a co-maintainer of the MHI bus. He is from the MHI team
at Qualcomm and will take up reviews and, if possible, maintainership in
the future.
Manivannan Sadhasivam (16):
docs: Add documentation for MHI bus
bus: mhi: core: Add support for registering MHI controllers
bus: mhi: core: Add support for registering MHI client drivers
bus: mhi: core: Add support for creating and destroying MHI devices
bus: mhi: core: Add support for ringing channel/event ring doorbells
bus: mhi: core: Add support for PM state transitions
bus: mhi: core: Add support for basic PM operations
bus: mhi: core: Add support for downloading firmware over BHIe
bus: mhi: core: Add support for downloading RDDM image during panic
bus: mhi: core: Add support for processing events from client device
bus: mhi: core: Add support for data transfer
bus: mhi: core: Add uevent support for module autoloading
MAINTAINERS: Add entry for MHI bus
net: qrtr: Add MHI transport layer
net: qrtr: Do not depend on ARCH_QCOM
soc: qcom: Do not depend on ARCH_QCOM for QMI helpers
Documentation/index.rst | 1 +
Documentation/mhi/index.rst | 18 +
Documentation/mhi/mhi.rst | 218 ++++
Documentation/mhi/topology.rst | 60 ++
MAINTAINERS | 10 +
drivers/bus/Kconfig | 1 +
drivers/bus/Makefile | 3 +
drivers/bus/mhi/Kconfig | 14 +
drivers/bus/mhi/Makefile | 2 +
drivers/bus/mhi/core/Makefile | 3 +
drivers/bus/mhi/core/boot.c | 508 ++++++++++
drivers/bus/mhi/core/init.c | 1301 ++++++++++++++++++++++++
drivers/bus/mhi/core/internal.h | 699 +++++++++++++
drivers/bus/mhi/core/main.c | 1576 +++++++++++++++++++++++++++++
drivers/bus/mhi/core/pm.c | 967 ++++++++++++++++++
drivers/soc/qcom/Kconfig | 1 -
include/linux/mhi.h | 661 ++++++++++++
include/linux/mod_devicetable.h | 13 +
net/qrtr/Kconfig | 8 +-
net/qrtr/Makefile | 2 +
net/qrtr/mhi.c | 207 ++++
scripts/mod/devicetable-offsets.c | 3 +
scripts/mod/file2alias.c | 10 +
23 files changed, 6284 insertions(+), 2 deletions(-)
create mode 100644 Documentation/mhi/index.rst
create mode 100644 Documentation/mhi/mhi.rst
create mode 100644 Documentation/mhi/topology.rst
create mode 100644 drivers/bus/mhi/Kconfig
create mode 100644 drivers/bus/mhi/Makefile
create mode 100644 drivers/bus/mhi/core/Makefile
create mode 100644 drivers/bus/mhi/core/boot.c
create mode 100644 drivers/bus/mhi/core/init.c
create mode 100644 drivers/bus/mhi/core/internal.h
create mode 100644 drivers/bus/mhi/core/main.c
create mode 100644 drivers/bus/mhi/core/pm.c
create mode 100644 include/linux/mhi.h
create mode 100644 net/qrtr/mhi.c
--
2.17.1
Add uevent support to the MHI bus so that client drivers can be autoloaded
by udev when MHI devices get created. The client drivers are expected to
provide MODULE_DEVICE_TABLE with the MHI id_table struct so that the
alias can be exported.
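For illustration, a client driver would export the channel name it serves
roughly like below (a minimal sketch; "SAMPLE" is a made-up channel name,
not one defined by this series):

	static const struct mhi_device_id sample_mhi_id_table[] = {
		{ .chan = "SAMPLE" },
		{}
	};
	MODULE_DEVICE_TABLE(mhi, sample_mhi_id_table);

With this table, file2alias emits an "mhi:SAMPLE" alias matching the
MODALIAS uevent added below, so udev can load the module when the
corresponding MHI device is created.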
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/init.c | 9 +++++++++
include/linux/mod_devicetable.h | 1 +
scripts/mod/devicetable-offsets.c | 3 +++
scripts/mod/file2alias.c | 10 ++++++++++
4 files changed, 23 insertions(+)
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index 4d0e5bbb3389..711ddc661530 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -1247,6 +1247,14 @@ void mhi_driver_unregister(struct mhi_driver *mhi_drv)
}
EXPORT_SYMBOL_GPL(mhi_driver_unregister);
+static int mhi_uevent(struct device *dev, struct kobj_uevent_env *env)
+{
+ struct mhi_device *mhi_dev = to_mhi_device(dev);
+
+ return add_uevent_var(env, "MODALIAS=" MHI_DEVICE_MODALIAS_FMT,
+ mhi_dev->chan_name);
+}
+
static int mhi_match(struct device *dev, struct device_driver *drv)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
@@ -1273,6 +1281,7 @@ struct bus_type mhi_bus_type = {
.name = "mhi",
.dev_name = "mhi",
.match = mhi_match,
+ .uevent = mhi_uevent,
};
static int __init mhi_init(void)
diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
index be15e997fe39..f10e779a3fd0 100644
--- a/include/linux/mod_devicetable.h
+++ b/include/linux/mod_devicetable.h
@@ -821,6 +821,7 @@ struct wmi_device_id {
const void *context;
};
+#define MHI_DEVICE_MODALIAS_FMT "mhi:%s"
#define MHI_NAME_SIZE 32
/**
diff --git a/scripts/mod/devicetable-offsets.c b/scripts/mod/devicetable-offsets.c
index 054405b90ba4..fe3f4a95cb21 100644
--- a/scripts/mod/devicetable-offsets.c
+++ b/scripts/mod/devicetable-offsets.c
@@ -231,5 +231,8 @@ int main(void)
DEVID(wmi_device_id);
DEVID_FIELD(wmi_device_id, guid_string);
+ DEVID(mhi_device_id);
+ DEVID_FIELD(mhi_device_id, chan);
+
return 0;
}
diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
index c91eba751804..cae6a4e471b5 100644
--- a/scripts/mod/file2alias.c
+++ b/scripts/mod/file2alias.c
@@ -1335,6 +1335,15 @@ static int do_wmi_entry(const char *filename, void *symval, char *alias)
return 1;
}
+/* Looks like: mhi:S */
+static int do_mhi_entry(const char *filename, void *symval, char *alias)
+{
+ DEF_FIELD_ADDR(symval, mhi_device_id, chan);
+ sprintf(alias, MHI_DEVICE_MODALIAS_FMT, *chan);
+
+ return 1;
+}
+
/* Does namelen bytes of name exactly match the symbol? */
static bool sym_is(const char *name, unsigned namelen, const char *symbol)
{
@@ -1407,6 +1416,7 @@ static const struct devtable devtable[] = {
{"typec", SIZE_typec_device_id, do_typec_entry},
{"tee", SIZE_tee_client_device_id, do_tee_entry},
{"wmi", SIZE_wmi_device_id, do_wmi_entry},
+ {"mhi", SIZE_mhi_device_id, do_mhi_entry},
};
/* Create MODULE_ALIAS() statements.
--
2.17.1
Add support for transferring data between the external modem and the host
processor using the MHI protocol.
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/988
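As a rough usage sketch from a client driver's perspective (assuming the
channels were already started via mhi_prepare_for_transfer() and error
handling is trimmed):

	/* Queue an outbound (host-to-device) buffer on the UL channel */
	ret = mhi_queue_transfer(mhi_dev, DMA_TO_DEVICE, buf, len, MHI_EOT);
	if (ret)
		return ret; /* e.g. -ENOMEM if the ring is full, -EIO on error state */

Completion is then reported asynchronously through the driver's
ul_xfer_cb/dl_xfer_cb once the device consumes the transfer ring element.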
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[mani: split the data transfer patch and cleaned up for upstream]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/init.c | 162 ++++++-
drivers/bus/mhi/core/internal.h | 33 ++
drivers/bus/mhi/core/main.c | 777 +++++++++++++++++++++++++++++++-
drivers/bus/mhi/core/pm.c | 40 ++
include/linux/mhi.h | 55 +++
5 files changed, 1058 insertions(+), 9 deletions(-)
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index 1b5169e20e2b..4d0e5bbb3389 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -493,6 +493,73 @@ int mhi_init_mmio(struct mhi_controller *mhi_cntrl)
return 0;
}
+void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan)
+{
+ struct mhi_ring *buf_ring;
+ struct mhi_ring *tre_ring;
+ struct mhi_chan_ctxt *chan_ctxt;
+
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+ chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
+
+ mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size,
+ tre_ring->pre_aligned, tre_ring->dma_handle);
+ vfree(buf_ring->base);
+
+ buf_ring->base = tre_ring->base = NULL;
+ chan_ctxt->rbase = 0;
+}
+
+int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan)
+{
+ struct mhi_ring *buf_ring;
+ struct mhi_ring *tre_ring;
+ struct mhi_chan_ctxt *chan_ctxt;
+ u32 tmp;
+ int ret;
+
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+ tre_ring->el_size = sizeof(struct mhi_tre);
+ tre_ring->len = tre_ring->el_size * tre_ring->elements;
+ chan_ctxt = &mhi_cntrl->mhi_ctxt->chan_ctxt[mhi_chan->chan];
+ ret = mhi_alloc_aligned_ring(mhi_cntrl, tre_ring, tre_ring->len);
+ if (ret)
+ return -ENOMEM;
+
+ buf_ring->el_size = sizeof(struct mhi_buf_info);
+ buf_ring->len = buf_ring->el_size * buf_ring->elements;
+ buf_ring->base = vzalloc(buf_ring->len);
+
+ if (!buf_ring->base) {
+ mhi_free_coherent(mhi_cntrl, tre_ring->alloc_size,
+ tre_ring->pre_aligned, tre_ring->dma_handle);
+ return -ENOMEM;
+ }
+
+ tmp = chan_ctxt->chcfg;
+ tmp &= ~CHAN_CTX_CHSTATE_MASK;
+ tmp |= (MHI_CH_STATE_ENABLED << CHAN_CTX_CHSTATE_SHIFT);
+ chan_ctxt->chcfg = tmp;
+
+ chan_ctxt->rbase = tre_ring->iommu_base;
+ chan_ctxt->rp = chan_ctxt->wp = chan_ctxt->rbase;
+ chan_ctxt->rlen = tre_ring->len;
+ tre_ring->ctxt_wp = &chan_ctxt->wp;
+
+ tre_ring->rp = tre_ring->wp = tre_ring->base;
+ buf_ring->rp = buf_ring->wp = buf_ring->base;
+ mhi_chan->db_cfg.db_mode = 1;
+
+ /* Update to all cores */
+ smp_wmb();
+
+ return 0;
+}
+
static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
struct mhi_controller_config *config)
{
@@ -648,6 +715,31 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
mhi_chan->xfer_type = ch_cfg->data_type;
+ switch (mhi_chan->xfer_type) {
+ case MHI_BUF_RAW:
+ mhi_chan->gen_tre = mhi_gen_tre;
+ mhi_chan->queue_xfer = mhi_queue_buf;
+ break;
+ case MHI_BUF_SKB:
+ mhi_chan->queue_xfer = mhi_queue_skb;
+ break;
+ case MHI_BUF_SCLIST:
+ mhi_chan->gen_tre = mhi_gen_tre;
+ mhi_chan->queue_xfer = mhi_queue_sclist;
+ break;
+ case MHI_BUF_NOP:
+ mhi_chan->queue_xfer = mhi_queue_nop;
+ break;
+ case MHI_BUF_DMA:
+ case MHI_BUF_RSC_DMA:
+ mhi_chan->queue_xfer = mhi_queue_dma;
+ break;
+ default:
+ dev_err(mhi_cntrl->dev,
+ "Channel datatype not supported\n");
+ goto error_chan_cfg;
+ }
+
mhi_chan->lpm_notify = ch_cfg->lpm_notify;
mhi_chan->offload_ch = ch_cfg->offload_channel;
mhi_chan->db_cfg.reset_req = ch_cfg->doorbell_mode_switch;
@@ -677,6 +769,13 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
goto error_chan_cfg;
}
+ /*
+ * If MHI host pre-allocates buffers then client drivers
+ * cannot queue
+ */
+ if (mhi_chan->pre_alloc)
+ mhi_chan->queue_xfer = mhi_queue_nop;
+
if (!mhi_chan->offload_ch) {
mhi_chan->db_cfg.brstmode = ch_cfg->doorbell;
if (MHI_INVALID_BRSTMODE(mhi_chan->db_cfg.brstmode)) {
@@ -809,6 +908,14 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
rwlock_init(&mhi_chan->lock);
}
+ if (mhi_cntrl->bounce_buf) {
+ mhi_cntrl->map_single = mhi_map_single_use_bb;
+ mhi_cntrl->unmap_single = mhi_unmap_single_use_bb;
+ } else {
+ mhi_cntrl->map_single = mhi_map_single_no_bb;
+ mhi_cntrl->unmap_single = mhi_unmap_single_no_bb;
+ }
+
/* Register controller with MHI bus */
mhi_dev = mhi_alloc_device(mhi_cntrl);
if (IS_ERR(mhi_dev)) {
@@ -974,6 +1081,14 @@ static int mhi_driver_probe(struct device *dev)
struct mhi_event *mhi_event;
struct mhi_chan *ul_chan = mhi_dev->ul_chan;
struct mhi_chan *dl_chan = mhi_dev->dl_chan;
+ int ret;
+
+ /* Bring device out of LPM */
+ ret = mhi_device_get_sync(mhi_dev);
+ if (ret)
+ return ret;
+
+ ret = -EINVAL;
if (ul_chan) {
/*
@@ -981,13 +1096,18 @@ static int mhi_driver_probe(struct device *dev)
* be provided
*/
if (ul_chan->lpm_notify && !mhi_drv->status_cb)
- return -EINVAL;
+ goto exit_probe;
/* For non-offload channels then xfer_cb should be provided */
if (!ul_chan->offload_ch && !mhi_drv->ul_xfer_cb)
- return -EINVAL;
+ goto exit_probe;
ul_chan->xfer_cb = mhi_drv->ul_xfer_cb;
+ if (ul_chan->auto_start) {
+ ret = mhi_prepare_channel(mhi_cntrl, ul_chan);
+ if (ret)
+ goto exit_probe;
+ }
}
if (dl_chan) {
@@ -996,11 +1116,11 @@ static int mhi_driver_probe(struct device *dev)
* be provided
*/
if (dl_chan->lpm_notify && !mhi_drv->status_cb)
- return -EINVAL;
+ goto exit_probe;
/* For non-offload channels then xfer_cb should be provided */
if (!dl_chan->offload_ch && !mhi_drv->dl_xfer_cb)
- return -EINVAL;
+ goto exit_probe;
mhi_event = &mhi_cntrl->mhi_event[dl_chan->er_index];
@@ -1010,19 +1130,36 @@ static int mhi_driver_probe(struct device *dev)
* notify pending data
*/
if (mhi_event->cl_manage && !mhi_drv->status_cb)
- return -EINVAL;
+ goto exit_probe;
dl_chan->xfer_cb = mhi_drv->dl_xfer_cb;
}
/* Call the user provided probe function */
- return mhi_drv->probe(mhi_dev, mhi_dev->id);
+ ret = mhi_drv->probe(mhi_dev, mhi_dev->id);
+ if (ret)
+ goto exit_probe;
+
+ if (dl_chan && dl_chan->auto_start)
+ mhi_prepare_channel(mhi_cntrl, dl_chan);
+
+ mhi_device_put(mhi_dev);
+
+ return ret;
+
+exit_probe:
+ mhi_unprepare_from_transfer(mhi_dev);
+
+ mhi_device_put(mhi_dev);
+
+ return ret;
}
static int mhi_driver_remove(struct device *dev)
{
struct mhi_device *mhi_dev = to_mhi_device(dev);
struct mhi_driver *mhi_drv = to_mhi_driver(dev->driver);
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
struct mhi_chan *mhi_chan;
enum mhi_ch_state ch_state[] = {
MHI_CH_STATE_DISABLED,
@@ -1054,6 +1191,10 @@ static int mhi_driver_remove(struct device *dev)
mhi_chan->ch_state = MHI_CH_STATE_SUSPENDED;
write_unlock_irq(&mhi_chan->lock);
+ /* Reset the non-offload channel */
+ if (!mhi_chan->offload_ch)
+ mhi_reset_chan(mhi_cntrl, mhi_chan);
+
mutex_unlock(&mhi_chan->mutex);
}
@@ -1068,11 +1209,20 @@ static int mhi_driver_remove(struct device *dev)
mutex_lock(&mhi_chan->mutex);
+ if (ch_state[dir] == MHI_CH_STATE_ENABLED &&
+ !mhi_chan->offload_ch)
+ mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+
mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
mutex_unlock(&mhi_chan->mutex);
}
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ while (mhi_dev->dev_wake)
+ mhi_device_put(mhi_dev);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
return 0;
}
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index aa8a3433afae..0b95a1ab6dfd 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -599,6 +599,8 @@ int mhi_pm_m0_transition(struct mhi_controller *mhi_cntrl);
void mhi_pm_m1_transition(struct mhi_controller *mhi_cntrl);
int mhi_pm_m3_transition(struct mhi_controller *mhi_cntrl);
int __mhi_device_get_sync(struct mhi_controller *mhi_cntrl);
+int mhi_send_cmd(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+ enum mhi_cmd_type cmd);
/* Register access methods */
void mhi_db_brstmode(struct mhi_controller *mhi_cntrl, struct db_cfg *db_cfg,
@@ -630,6 +632,14 @@ int mhi_init_irq_setup(struct mhi_controller *mhi_cntrl);
void mhi_deinit_free_irq(struct mhi_controller *mhi_cntrl);
void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
struct image_info *img_info);
+int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan);
+int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan);
+void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan);
+void mhi_reset_chan(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan);
/* Memory allocation methods */
static inline void *mhi_alloc_coherent(struct mhi_controller *mhi_cntrl,
@@ -663,4 +673,27 @@ irqreturn_t mhi_irq_handler(int irq_number, void *dev);
irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev);
irqreturn_t mhi_intvec_handler(int irq_number, void *dev);
+/* Queue transfer methods */
+int mhi_queue_dma(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags);
+int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+ void *buf, void *cb, size_t buf_len, enum mhi_flags flags);
+int mhi_queue_buf(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags);
+int mhi_queue_skb(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags);
+int mhi_queue_sclist(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags);
+int mhi_queue_nop(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags);
+
+int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info);
+int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info);
+void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info);
+void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info);
+
#endif /* _MHI_INT_H */
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index 775d4a8fdefd..9891a5c0ce2c 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -144,11 +144,82 @@ enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl)
return ret ? MHI_STATE_MAX : state;
}
+int mhi_map_single_no_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info)
+{
+ buf_info->p_addr = dma_map_single(mhi_cntrl->dev, buf_info->v_addr,
+ buf_info->len, buf_info->dir);
+ if (dma_mapping_error(mhi_cntrl->dev, buf_info->p_addr))
+ return -ENOMEM;
+
+ return 0;
+}
+
+int mhi_map_single_use_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info)
+{
+ void *buf = mhi_alloc_coherent(mhi_cntrl, buf_info->len,
+ &buf_info->p_addr, GFP_ATOMIC);
+
+ if (!buf)
+ return -ENOMEM;
+
+ if (buf_info->dir == DMA_TO_DEVICE)
+ memcpy(buf, buf_info->v_addr, buf_info->len);
+
+ buf_info->bb_addr = buf;
+
+ return 0;
+}
+
+void mhi_unmap_single_no_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info)
+{
+ dma_unmap_single(mhi_cntrl->dev, buf_info->p_addr, buf_info->len,
+ buf_info->dir);
+}
+
+void mhi_unmap_single_use_bb(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf_info)
+{
+ if (buf_info->dir == DMA_FROM_DEVICE)
+ memcpy(buf_info->v_addr, buf_info->bb_addr, buf_info->len);
+
+ mhi_free_coherent(mhi_cntrl, buf_info->len, buf_info->bb_addr,
+ buf_info->p_addr);
+}
+
+static int get_nr_avail_ring_elements(struct mhi_controller *mhi_cntrl,
+ struct mhi_ring *ring)
+{
+ int nr_el;
+
+ if (ring->wp < ring->rp) {
+ nr_el = ((ring->rp - ring->wp) / ring->el_size) - 1;
+ } else {
+ nr_el = (ring->rp - ring->base) / ring->el_size;
+ nr_el += ((ring->base + ring->len - ring->wp) /
+ ring->el_size) - 1;
+ }
+
+ return nr_el;
+}
+
static void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr)
{
return (addr - ring->iommu_base) + ring->base;
}
+static void mhi_add_ring_element(struct mhi_controller *mhi_cntrl,
+ struct mhi_ring *ring)
+{
+ ring->wp += ring->el_size;
+ if (ring->wp >= (ring->base + ring->len))
+ ring->wp = ring->base;
+ /* smp update */
+ smp_wmb();
+}
+
static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
struct mhi_ring *ring)
{
@@ -415,23 +486,25 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
/* Get the TRB this event points to */
ev_tre = mhi_to_virtual(tre_ring, ptr);
- /* device rp after servicing the TREs */
dev_rp = ev_tre + 1;
if (dev_rp >= (tre_ring->base + tre_ring->len))
dev_rp = tre_ring->base;
result.dir = mhi_chan->dir;
- /* local rp */
local_rp = tre_ring->rp;
while (local_rp != dev_rp) {
buf_info = buf_ring->rp;
- /* if it's last tre get len from the event */
+ /* If it's the last TRE, get length from the event */
if (local_rp == ev_tre)
xfer_len = MHI_TRE_GET_EV_LEN(event);
else
xfer_len = buf_info->len;
+ /* Unmap if it's not pre-mapped by client */
+ if (likely(!buf_info->pre_mapped))
+ mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
+
result.buf_addr = buf_info->cb_buf;
result.bytes_xferd = xfer_len;
mhi_del_ring_element(mhi_cntrl, buf_ring);
@@ -443,6 +516,22 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
if (mhi_chan->dir == DMA_TO_DEVICE)
atomic_dec(&mhi_cntrl->pending_pkts);
+
+ /*
+ * Recycle the buffer if buffer is pre-allocated,
+ * if there is an error, not much we can do apart
+ * from dropping the packet
+ */
+ if (mhi_chan->pre_alloc) {
+ if (mhi_queue_buf(mhi_chan->mhi_dev, mhi_chan,
+ buf_info->cb_buf,
+ buf_info->len, MHI_EOT)) {
+ dev_err(mhi_cntrl->dev,
+ "Error recycling buffer for chan:%d\n",
+ mhi_chan->chan);
+ kfree(buf_info->cb_buf);
+ }
+ }
}
break;
} /* CC_EOT */
@@ -803,3 +892,685 @@ void mhi_ctrl_ev_task(unsigned long data)
schedule_work(&mhi_cntrl->syserr_worker);
}
}
+
+static bool mhi_is_ring_full(struct mhi_controller *mhi_cntrl,
+ struct mhi_ring *ring)
+{
+ void *tmp = ring->wp + ring->el_size;
+
+ if (tmp >= (ring->base + ring->len))
+ tmp = ring->base;
+
+ return (tmp == ring->rp);
+}
+
+/* TODO: Scatter-Gather transfer not implemented */
+int mhi_queue_sclist(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags)
+{
+ return -EINVAL;
+}
+
+/*
+ * MHI client drivers are not allowed to queue buffers for pre-allocated
+ * channels. Hence, this function just returns -EINVAL.
+ */
+int mhi_queue_nop(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags)
+{
+ return -EINVAL;
+}
+
+int mhi_queue_skb(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags)
+{
+ struct sk_buff *skb = buf;
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+ struct mhi_ring *buf_ring = &mhi_chan->buf_ring;
+ struct mhi_buf_info *buf_info;
+ struct mhi_tre *mhi_tre;
+ int ret;
+
+ if (mhi_is_ring_full(mhi_cntrl, tre_ring))
+ return -ENOMEM;
+
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ return -EIO;
+ }
+
+ /* we're in M3 or transitioning to M3 */
+ if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+ mhi_cntrl->runtime_get(mhi_cntrl);
+ mhi_cntrl->runtime_put(mhi_cntrl);
+ }
+
+ /* Toggle wake to exit out of M2 */
+ mhi_cntrl->wake_toggle(mhi_cntrl);
+
+ /* Generate the TRE */
+ buf_info = buf_ring->wp;
+
+ buf_info->v_addr = skb->data;
+ buf_info->cb_buf = skb;
+ buf_info->wp = tre_ring->wp;
+ buf_info->dir = mhi_chan->dir;
+ buf_info->len = len;
+ ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
+ if (ret)
+ goto map_error;
+
+ mhi_tre = tre_ring->wp;
+
+ mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
+ mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(buf_info->len);
+ mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(1, 1, 0, 0);
+
+ /* increment WP */
+ mhi_add_ring_element(mhi_cntrl, tre_ring);
+ mhi_add_ring_element(mhi_cntrl, buf_ring);
+
+ if (mhi_chan->dir == DMA_TO_DEVICE)
+ atomic_inc(&mhi_cntrl->pending_pkts);
+
+ if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
+ read_lock_bh(&mhi_chan->lock);
+ mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+ read_unlock_bh(&mhi_chan->lock);
+ }
+
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return 0;
+
+map_error:
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return ret;
+}
+
+int mhi_queue_dma(struct mhi_device *mhi_dev,
+ struct mhi_chan *mhi_chan,
+ void *buf,
+ size_t len,
+ enum mhi_flags mflags)
+{
+ struct mhi_buf *mhi_buf = buf;
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_ring *tre_ring = &mhi_chan->tre_ring;
+ struct mhi_ring *buf_ring = &mhi_chan->buf_ring;
+ struct mhi_buf_info *buf_info;
+ struct mhi_tre *mhi_tre;
+
+ if (mhi_is_ring_full(mhi_cntrl, tre_ring))
+ return -ENOMEM;
+
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state))) {
+ dev_err(mhi_cntrl->dev,
+ "MHI is not in activate state, PM state: %s\n",
+ to_mhi_pm_state_str(mhi_cntrl->pm_state));
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return -EIO;
+ }
+
+ /* we're in M3 or transitioning to M3 */
+ if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+ mhi_cntrl->runtime_get(mhi_cntrl);
+ mhi_cntrl->runtime_put(mhi_cntrl);
+ }
+
+ /* Toggle wake to exit out of M2 */
+ mhi_cntrl->wake_toggle(mhi_cntrl);
+
+ /* Generate the TRE */
+ buf_info = buf_ring->wp;
+ WARN_ON(buf_info->used);
+ buf_info->p_addr = mhi_buf->dma_addr;
+ buf_info->pre_mapped = true;
+ buf_info->cb_buf = mhi_buf;
+ buf_info->wp = tre_ring->wp;
+ buf_info->dir = mhi_chan->dir;
+ buf_info->len = len;
+
+ mhi_tre = tre_ring->wp;
+
+ if (mhi_chan->xfer_type == MHI_BUF_RSC_DMA) {
+ buf_info->used = true;
+ mhi_tre->ptr =
+ MHI_RSCTRE_DATA_PTR(buf_info->p_addr, buf_info->len);
+ mhi_tre->dword[0] =
+ MHI_RSCTRE_DATA_DWORD0(buf_ring->wp - buf_ring->base);
+ mhi_tre->dword[1] = MHI_RSCTRE_DATA_DWORD1;
+ } else {
+ mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
+ mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(buf_info->len);
+ mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(1, 1, 0, 0);
+ }
+
+ /* increment WP */
+ mhi_add_ring_element(mhi_cntrl, tre_ring);
+ mhi_add_ring_element(mhi_cntrl, buf_ring);
+
+ if (mhi_chan->dir == DMA_TO_DEVICE)
+ atomic_inc(&mhi_cntrl->pending_pkts);
+
+ if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
+ read_lock_bh(&mhi_chan->lock);
+ mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+ read_unlock_bh(&mhi_chan->lock);
+ }
+
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return 0;
+}
+
+int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+ void *buf, void *cb, size_t buf_len, enum mhi_flags flags)
+{
+ struct mhi_ring *buf_ring, *tre_ring;
+ struct mhi_tre *mhi_tre;
+ struct mhi_buf_info *buf_info;
+ int eot, eob, chain, bei;
+ int ret;
+
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+
+ buf_info = buf_ring->wp;
+ buf_info->v_addr = buf;
+ buf_info->cb_buf = cb;
+ buf_info->wp = tre_ring->wp;
+ buf_info->dir = mhi_chan->dir;
+ buf_info->len = buf_len;
+
+ ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
+ if (ret)
+ return ret;
+
+ eob = !!(flags & MHI_EOB);
+ eot = !!(flags & MHI_EOT);
+ chain = !!(flags & MHI_CHAIN);
+ bei = !!(mhi_chan->intmod);
+
+ mhi_tre = tre_ring->wp;
+ mhi_tre->ptr = MHI_TRE_DATA_PTR(buf_info->p_addr);
+ mhi_tre->dword[0] = MHI_TRE_DATA_DWORD0(buf_len);
+ mhi_tre->dword[1] = MHI_TRE_DATA_DWORD1(bei, eot, eob, chain);
+
+ /* increment WP */
+ mhi_add_ring_element(mhi_cntrl, tre_ring);
+ mhi_add_ring_element(mhi_cntrl, buf_ring);
+
+ return 0;
+}
+
+int mhi_queue_transfer(struct mhi_device *mhi_dev,
+ enum dma_data_direction dir, void *buf, size_t len,
+ enum mhi_flags mflags)
+{
+ if (dir == DMA_TO_DEVICE)
+ return mhi_dev->ul_chan->queue_xfer(mhi_dev, mhi_dev->ul_chan,
+ buf, len, mflags);
+ else
+ return mhi_dev->dl_chan->queue_xfer(mhi_dev, mhi_dev->dl_chan,
+ buf, len, mflags);
+}
+EXPORT_SYMBOL_GPL(mhi_queue_transfer);
+
+int mhi_queue_buf(struct mhi_device *mhi_dev, struct mhi_chan *mhi_chan,
+ void *buf, size_t len, enum mhi_flags mflags)
+{
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_ring *tre_ring;
+ unsigned long flags;
+ int ret;
+
+	/*
+	 * This check is only a guard; it is always possible for MHI to
+	 * enter an error state while the rest of the function executes,
+	 * which is not fatal, so we do not need to hold pm_lock here.
+	 */
+ if (unlikely(MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)))
+ return -EIO;
+
+ tre_ring = &mhi_chan->tre_ring;
+ if (mhi_is_ring_full(mhi_cntrl, tre_ring))
+ return -ENOMEM;
+
+ ret = mhi_chan->gen_tre(mhi_cntrl, mhi_chan, buf, buf, len, mflags);
+ if (unlikely(ret))
+ return ret;
+
+ read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+
+ /* we're in M3 or transitioning to M3 */
+ if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+ mhi_cntrl->runtime_get(mhi_cntrl);
+ mhi_cntrl->runtime_put(mhi_cntrl);
+ }
+
+ /* Toggle wake to exit out of M2 */
+ mhi_cntrl->wake_toggle(mhi_cntrl);
+
+ if (mhi_chan->dir == DMA_TO_DEVICE)
+ atomic_inc(&mhi_cntrl->pending_pkts);
+
+ if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl))) {
+ unsigned long flags;
+
+ read_lock_irqsave(&mhi_chan->lock, flags);
+ mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+ read_unlock_irqrestore(&mhi_chan->lock, flags);
+ }
+
+ read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+
+ return 0;
+}
+
+int mhi_send_cmd(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan,
+ enum mhi_cmd_type cmd)
+{
+ struct mhi_tre *cmd_tre = NULL;
+ struct mhi_cmd *mhi_cmd = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+ struct mhi_ring *ring = &mhi_cmd->ring;
+ int chan = 0;
+
+ if (mhi_chan)
+ chan = mhi_chan->chan;
+
+ spin_lock_bh(&mhi_cmd->lock);
+ if (!get_nr_avail_ring_elements(mhi_cntrl, ring)) {
+ spin_unlock_bh(&mhi_cmd->lock);
+ return -ENOMEM;
+ }
+
+ /* prepare the cmd tre */
+ cmd_tre = ring->wp;
+ switch (cmd) {
+ case MHI_CMD_RESET_CHAN:
+ cmd_tre->ptr = MHI_TRE_CMD_RESET_PTR;
+ cmd_tre->dword[0] = MHI_TRE_CMD_RESET_DWORD0;
+ cmd_tre->dword[1] = MHI_TRE_CMD_RESET_DWORD1(chan);
+ break;
+ case MHI_CMD_START_CHAN:
+ cmd_tre->ptr = MHI_TRE_CMD_START_PTR;
+ cmd_tre->dword[0] = MHI_TRE_CMD_START_DWORD0;
+ cmd_tre->dword[1] = MHI_TRE_CMD_START_DWORD1(chan);
+ break;
+ default:
+ dev_err(mhi_cntrl->dev, "Command not supported\n");
+ break;
+ }
+
+ /* queue to hardware */
+ mhi_add_ring_element(mhi_cntrl, ring);
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
+ mhi_ring_cmd_db(mhi_cntrl, mhi_cmd);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ spin_unlock_bh(&mhi_cmd->lock);
+
+ return 0;
+}
+
+static void __mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan)
+{
+ int ret;
+
+ dev_dbg(mhi_cntrl->dev, "Entered: unprepare channel:%d\n",
+ mhi_chan->chan);
+
+ /* no more processing events for this channel */
+ mutex_lock(&mhi_chan->mutex);
+ write_lock_irq(&mhi_chan->lock);
+ if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
+ write_unlock_irq(&mhi_chan->lock);
+ mutex_unlock(&mhi_chan->mutex);
+ return;
+ }
+
+ mhi_chan->ch_state = MHI_CH_STATE_DISABLED;
+ write_unlock_irq(&mhi_chan->lock);
+
+ reinit_completion(&mhi_chan->completion);
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ goto error_invalid_state;
+ }
+
+ mhi_cntrl->wake_toggle(mhi_cntrl);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ mhi_cntrl->runtime_get(mhi_cntrl);
+ mhi_cntrl->runtime_put(mhi_cntrl);
+ ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_RESET_CHAN);
+ if (ret)
+ goto error_invalid_state;
+
+ /* even if it fails we will still reset */
+ ret = wait_for_completion_timeout(&mhi_chan->completion,
+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
+ if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS)
+ dev_err(mhi_cntrl->dev,
+ "Failed to receive cmd completion, still resetting\n");
+
+error_invalid_state:
+ if (!mhi_chan->offload_ch) {
+ mhi_reset_chan(mhi_cntrl, mhi_chan);
+ mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+ }
+	dev_dbg(mhi_cntrl->dev, "chan:%d successfully reset\n",
+		mhi_chan->chan);
+ mutex_unlock(&mhi_chan->mutex);
+}
+
+int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan)
+{
+ int ret = 0;
+
+ dev_dbg(mhi_cntrl->dev, "Preparing channel: %d\n",
+ mhi_chan->chan);
+
+ if (!(BIT(mhi_cntrl->ee) & mhi_chan->ee_mask)) {
+ dev_err(mhi_cntrl->dev,
+ "Current EE: %s Required EE Mask: 0x%x for chan: %s\n",
+ TO_MHI_EXEC_STR(mhi_cntrl->ee), mhi_chan->ee_mask,
+ mhi_chan->name);
+ return -ENOTCONN;
+ }
+
+ mutex_lock(&mhi_chan->mutex);
+
+ /* If channel is not in disable state, do not allow it to start */
+ if (mhi_chan->ch_state != MHI_CH_STATE_DISABLED) {
+ ret = -EIO;
+ dev_dbg(mhi_cntrl->dev,
+ "channel: %d is not in disabled state\n",
+ mhi_chan->chan);
+ goto error_init_chan;
+ }
+
+	/* The client manages the channel context for offload channels */
+ if (!mhi_chan->offload_ch) {
+ ret = mhi_init_chan_ctxt(mhi_cntrl, mhi_chan);
+ if (ret)
+ goto error_init_chan;
+ }
+
+ reinit_completion(&mhi_chan->completion);
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state)) {
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ ret = -EIO;
+ goto error_pm_state;
+ }
+
+ mhi_cntrl->wake_toggle(mhi_cntrl);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ mhi_cntrl->runtime_get(mhi_cntrl);
+ mhi_cntrl->runtime_put(mhi_cntrl);
+
+ ret = mhi_send_cmd(mhi_cntrl, mhi_chan, MHI_CMD_START_CHAN);
+ if (ret)
+ goto error_pm_state;
+
+ ret = wait_for_completion_timeout(&mhi_chan->completion,
+ msecs_to_jiffies(mhi_cntrl->timeout_ms));
+ if (!ret || mhi_chan->ccs != MHI_EV_CC_SUCCESS) {
+ ret = -EIO;
+ goto error_pm_state;
+ }
+
+ write_lock_irq(&mhi_chan->lock);
+ mhi_chan->ch_state = MHI_CH_STATE_ENABLED;
+ write_unlock_irq(&mhi_chan->lock);
+
+ /* Pre-allocate buffer for xfer ring */
+ if (mhi_chan->pre_alloc) {
+ int nr_el = get_nr_avail_ring_elements(mhi_cntrl,
+ &mhi_chan->tre_ring);
+ size_t len = mhi_cntrl->buffer_len;
+
+ while (nr_el--) {
+ void *buf;
+
+ buf = kmalloc(len, GFP_KERNEL);
+ if (!buf) {
+ ret = -ENOMEM;
+ goto error_pre_alloc;
+ }
+
+ /* Prepare transfer descriptors */
+ ret = mhi_chan->gen_tre(mhi_cntrl, mhi_chan, buf, buf,
+ len, MHI_EOT);
+ if (ret) {
+ kfree(buf);
+ goto error_pre_alloc;
+ }
+ }
+
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (MHI_DB_ACCESS_VALID(mhi_cntrl)) {
+ read_lock_irq(&mhi_chan->lock);
+ mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+ read_unlock_irq(&mhi_chan->lock);
+ }
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+ }
+
+ mutex_unlock(&mhi_chan->mutex);
+
+ dev_dbg(mhi_cntrl->dev, "Chan: %d successfully moved to start state\n",
+ mhi_chan->chan);
+
+ return 0;
+
+error_pm_state:
+ if (!mhi_chan->offload_ch)
+ mhi_deinit_chan_ctxt(mhi_cntrl, mhi_chan);
+
+error_init_chan:
+ mutex_unlock(&mhi_chan->mutex);
+
+ return ret;
+
+error_pre_alloc:
+ mutex_unlock(&mhi_chan->mutex);
+ __mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+
+ return ret;
+}
+
+static void mhi_mark_stale_events(struct mhi_controller *mhi_cntrl,
+ struct mhi_event *mhi_event,
+ struct mhi_event_ctxt *er_ctxt,
+ int chan)
+
+{
+ struct mhi_tre *dev_rp, *local_rp;
+ struct mhi_ring *ev_ring;
+ unsigned long flags;
+
+ dev_dbg(mhi_cntrl->dev,
+ "Marking all events for chan: %d as stale\n", chan);
+
+ ev_ring = &mhi_event->ring;
+
+ /* mark all stale events related to channel as STALE event */
+ spin_lock_irqsave(&mhi_event->lock, flags);
+ dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+
+ local_rp = ev_ring->rp;
+ while (dev_rp != local_rp) {
+ if (MHI_TRE_GET_EV_TYPE(local_rp) ==
+ MHI_PKT_TYPE_TX_EVENT &&
+ chan == MHI_TRE_GET_EV_CHID(local_rp))
+ local_rp->dword[1] = MHI_TRE_EV_DWORD1(chan,
+ MHI_PKT_TYPE_STALE_EVENT);
+ local_rp++;
+ if (local_rp == (ev_ring->base + ev_ring->len))
+ local_rp = ev_ring->base;
+ }
+
+
+ dev_dbg(mhi_cntrl->dev,
+ "Finished marking events as stale events\n");
+ spin_unlock_irqrestore(&mhi_event->lock, flags);
+}
+
+static void mhi_reset_data_chan(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan)
+{
+ struct mhi_ring *buf_ring, *tre_ring;
+ struct mhi_result result;
+
+ /* Reset any pending buffers */
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+ result.transaction_status = -ENOTCONN;
+ result.bytes_xferd = 0;
+ while (tre_ring->rp != tre_ring->wp) {
+ struct mhi_buf_info *buf_info = buf_ring->rp;
+
+ if (mhi_chan->dir == DMA_TO_DEVICE)
+ atomic_dec(&mhi_cntrl->pending_pkts);
+
+ if (!buf_info->pre_mapped)
+ mhi_cntrl->unmap_single(mhi_cntrl, buf_info);
+
+ mhi_del_ring_element(mhi_cntrl, buf_ring);
+ mhi_del_ring_element(mhi_cntrl, tre_ring);
+
+ if (mhi_chan->pre_alloc) {
+ kfree(buf_info->cb_buf);
+ } else {
+ result.buf_addr = buf_info->cb_buf;
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+ }
+ }
+}
+
+static void mhi_reset_rsc_chan(struct mhi_controller *mhi_cntrl,
+ struct mhi_chan *mhi_chan)
+{
+ struct mhi_ring *buf_ring, *tre_ring;
+ struct mhi_result result;
+ struct mhi_buf_info *buf_info;
+
+ /* Reset any pending buffers */
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+ result.transaction_status = -ENOTCONN;
+ result.bytes_xferd = 0;
+
+ buf_info = buf_ring->base;
+ for (; (void *)buf_info < buf_ring->base + buf_ring->len; buf_info++) {
+ if (!buf_info->used)
+ continue;
+
+ result.buf_addr = buf_info->cb_buf;
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+ buf_info->used = false;
+ }
+}
+
+void mhi_reset_chan(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan)
+{
+
+ struct mhi_event *mhi_event;
+ struct mhi_event_ctxt *er_ctxt;
+ int chan = mhi_chan->chan;
+
+ /* Nothing to reset, client doesn't queue buffers */
+ if (mhi_chan->offload_ch)
+ return;
+
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
+ er_ctxt = &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_chan->er_index];
+
+ mhi_mark_stale_events(mhi_cntrl, mhi_event, er_ctxt, chan);
+
+ if (mhi_chan->xfer_type == MHI_BUF_RSC_DMA)
+ mhi_reset_rsc_chan(mhi_cntrl, mhi_chan);
+ else
+ mhi_reset_data_chan(mhi_cntrl, mhi_chan);
+
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+}
+
+/* Move channel to start state */
+int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
+{
+ int ret, dir;
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_chan *mhi_chan;
+
+ for (dir = 0; dir < 2; dir++) {
+ mhi_chan = dir ? mhi_dev->dl_chan : mhi_dev->ul_chan;
+
+ if (!mhi_chan)
+ continue;
+
+ ret = mhi_prepare_channel(mhi_cntrl, mhi_chan);
+ if (ret)
+ goto error_open_chan;
+ }
+
+ return 0;
+
+error_open_chan:
+ for (--dir; dir >= 0; dir--) {
+ mhi_chan = dir ? mhi_dev->dl_chan : mhi_dev->ul_chan;
+
+ if (!mhi_chan)
+ continue;
+
+ __mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_prepare_for_transfer);
+
+void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev)
+{
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_chan *mhi_chan;
+ int dir;
+
+ for (dir = 0; dir < 2; dir++) {
+ mhi_chan = dir ? mhi_dev->ul_chan : mhi_dev->dl_chan;
+
+ if (!mhi_chan)
+ continue;
+
+ __mhi_unprepare_channel(mhi_cntrl, mhi_chan);
+ }
+}
+EXPORT_SYMBOL_GPL(mhi_unprepare_from_transfer);
+
+int mhi_poll(struct mhi_device *mhi_dev, u32 budget)
+{
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_chan *mhi_chan = mhi_dev->dl_chan;
+ struct mhi_event *mhi_event = &mhi_cntrl->mhi_event[mhi_chan->er_index];
+ int ret;
+
+ spin_lock_bh(&mhi_event->lock);
+ ret = mhi_event->process_event(mhi_cntrl, mhi_event, budget);
+ spin_unlock_bh(&mhi_event->lock);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_poll);
diff --git a/drivers/bus/mhi/core/pm.c b/drivers/bus/mhi/core/pm.c
index 19987068b475..e37678037cde 100644
--- a/drivers/bus/mhi/core/pm.c
+++ b/drivers/bus/mhi/core/pm.c
@@ -925,3 +925,43 @@ int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl)
return ret;
}
EXPORT_SYMBOL_GPL(mhi_force_rddm_mode);
+
+void mhi_device_get(struct mhi_device *mhi_dev)
+{
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+
+ mhi_dev->dev_wake++;
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ mhi_cntrl->wake_get(mhi_cntrl, true);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+}
+EXPORT_SYMBOL_GPL(mhi_device_get);
+
+int mhi_device_get_sync(struct mhi_device *mhi_dev)
+{
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+ int ret;
+
+ ret = __mhi_device_get_sync(mhi_cntrl);
+ if (!ret)
+ mhi_dev->dev_wake++;
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(mhi_device_get_sync);
+
+void mhi_device_put(struct mhi_device *mhi_dev)
+{
+ struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
+
+ mhi_dev->dev_wake--;
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (MHI_PM_IN_SUSPEND_STATE(mhi_cntrl->pm_state)) {
+ mhi_cntrl->runtime_get(mhi_cntrl);
+ mhi_cntrl->runtime_put(mhi_cntrl);
+ }
+
+ mhi_cntrl->wake_put(mhi_cntrl, false);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+}
+EXPORT_SYMBOL_GPL(mhi_device_put);
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 2ea2deb922a1..a32548ac3425 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -359,6 +359,8 @@ struct mhi_controller_config {
* @wake_toggle: CB function to assert and de-assert device wake (optional)
* @runtime_get: CB function to controller runtime resume (required)
* @runtimet_put: CB function to decrement pm usage (required)
+ * @map_single: CB function to create TRE buffer
+ * @unmap_single: CB function to destroy TRE buffer
* @buffer_len: Bounce buffer length
* @bounce_buf: Use of bounce buffer
* @fbc_download: MHI host needs to do complete image transfer (optional)
@@ -421,6 +423,10 @@ struct mhi_controller {
void (*wake_toggle)(struct mhi_controller *mhi_cntrl);
int (*runtime_get)(struct mhi_controller *mhi_cntrl);
void (*runtime_put)(struct mhi_controller *mhi_cntrl);
+ int (*map_single)(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf);
+ void (*unmap_single)(struct mhi_controller *mhi_cntrl,
+ struct mhi_buf_info *buf);
size_t buffer_len;
bool bounce_buf;
@@ -603,4 +609,53 @@ int mhi_force_rddm_mode(struct mhi_controller *mhi_cntrl);
*/
enum mhi_state mhi_get_mhi_state(struct mhi_controller *mhi_cntrl);
+/**
+ * mhi_device_get - Disable device low power mode
+ * @mhi_dev: Device associated with the channel
+ */
+void mhi_device_get(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_device_get_sync - Disable device low power mode. Synchronously
+ * take the controller out of suspended state
+ * @mhi_dev: Device associated with the channel
+ */
+int mhi_device_get_sync(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_device_put - Re-enable device low power mode
+ * @mhi_dev: Device associated with the channel
+ */
+void mhi_device_put(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_prepare_for_transfer - Setup channel for data transfer
+ * @mhi_dev: Device associated with the channels
+ */
+int mhi_prepare_for_transfer(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_unprepare_from_transfer - Unprepare the channels
+ * @mhi_dev: Device associated with the channels
+ */
+void mhi_unprepare_from_transfer(struct mhi_device *mhi_dev);
+
+/**
+ * mhi_poll - Poll for any available data in DL direction
+ * @mhi_dev: Device associated with the channels
+ * @budget: # of events to process
+ */
+int mhi_poll(struct mhi_device *mhi_dev, u32 budget);
+
+/**
+ * mhi_queue_transfer - Send or receive data from client device over MHI channel
+ * @mhi_dev: Device associated with the channels
+ * @dir: DMA direction for the channel
+ * @buf: Buffer for holding the data
+ * @len: Buffer length
+ * @mflags: MHI transfer flags used for the transfer
+ */
+int mhi_queue_transfer(struct mhi_device *mhi_dev, enum dma_data_direction dir,
+ void *buf, size_t len, enum mhi_flags mflags);
+
#endif /* _MHI_H_ */
--
2.17.1
MHI is the transport layer used for communicating with external modems.
Hence, this commit adds MHI transport layer support to QRTR for
transferring QMI messages over IPC Router.
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
net/qrtr/Kconfig | 7 ++
net/qrtr/Makefile | 2 +
net/qrtr/mhi.c | 207 ++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 216 insertions(+)
create mode 100644 net/qrtr/mhi.c
diff --git a/net/qrtr/Kconfig b/net/qrtr/Kconfig
index 63f89cc6e82c..8eb876471564 100644
--- a/net/qrtr/Kconfig
+++ b/net/qrtr/Kconfig
@@ -29,4 +29,11 @@ config QRTR_TUN
implement endpoints of QRTR, for purpose of tunneling data to other
hosts or testing purposes.
+config QRTR_MHI
+ tristate "MHI IPC Router channels"
+ depends on MHI_BUS
+ help
+	  Say Y here to support MHI-based IPC Router channels. MHI is the
+	  transport used for communicating with external modems.
+
endif # QRTR
diff --git a/net/qrtr/Makefile b/net/qrtr/Makefile
index 1c6d6c120fb7..3dc0a7c9d455 100644
--- a/net/qrtr/Makefile
+++ b/net/qrtr/Makefile
@@ -5,3 +5,5 @@ obj-$(CONFIG_QRTR_SMD) += qrtr-smd.o
qrtr-smd-y := smd.o
obj-$(CONFIG_QRTR_TUN) += qrtr-tun.o
qrtr-tun-y := tun.o
+obj-$(CONFIG_QRTR_MHI) += qrtr-mhi.o
+qrtr-mhi-y := mhi.o
diff --git a/net/qrtr/mhi.c b/net/qrtr/mhi.c
new file mode 100644
index 000000000000..c85041a22f85
--- /dev/null
+++ b/net/qrtr/mhi.c
@@ -0,0 +1,207 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+ */
+
+#include <linux/mhi.h>
+#include <linux/mod_devicetable.h>
+#include <linux/module.h>
+#include <linux/skbuff.h>
+#include <net/sock.h>
+
+#include "qrtr.h"
+
+struct qrtr_mhi_dev {
+ struct qrtr_endpoint ep;
+ struct mhi_device *mhi_dev;
+ struct device *dev;
+ spinlock_t ul_lock; /* lock to protect ul_pkts */
+ struct list_head ul_pkts;
+ atomic_t in_reset;
+};
+
+struct qrtr_mhi_pkt {
+ struct list_head node;
+ struct sk_buff *skb;
+ struct kref refcount;
+ struct completion done;
+};
+
+static void qrtr_mhi_pkt_release(struct kref *ref)
+{
+ struct qrtr_mhi_pkt *pkt = container_of(ref, struct qrtr_mhi_pkt,
+ refcount);
+ struct sock *sk = pkt->skb->sk;
+
+ consume_skb(pkt->skb);
+ if (sk)
+ sock_put(sk);
+
+ kfree(pkt);
+}
+
+/* From MHI to QRTR */
+static void qcom_mhi_qrtr_dl_callback(struct mhi_device *mhi_dev,
+ struct mhi_result *mhi_res)
+{
+ struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev);
+ int rc;
+
+ if (!qdev || mhi_res->transaction_status)
+ return;
+
+ rc = qrtr_endpoint_post(&qdev->ep, mhi_res->buf_addr,
+ mhi_res->bytes_xferd);
+ if (rc == -EINVAL)
+ dev_err(qdev->dev, "invalid ipcrouter packet\n");
+}
+
+/* From QRTR to MHI */
+static void qcom_mhi_qrtr_ul_callback(struct mhi_device *mhi_dev,
+ struct mhi_result *mhi_res)
+{
+ struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev);
+ struct qrtr_mhi_pkt *pkt;
+ unsigned long flags;
+
+ spin_lock_irqsave(&qdev->ul_lock, flags);
+ pkt = list_first_entry(&qdev->ul_pkts, struct qrtr_mhi_pkt, node);
+ list_del(&pkt->node);
+ complete_all(&pkt->done);
+
+ kref_put(&pkt->refcount, qrtr_mhi_pkt_release);
+ spin_unlock_irqrestore(&qdev->ul_lock, flags);
+}
+
+static void qcom_mhi_qrtr_status_callback(struct mhi_device *mhi_dev,
+ enum mhi_callback mhi_cb)
+{
+ struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev);
+ struct qrtr_mhi_pkt *pkt;
+ unsigned long flags;
+
+ if (mhi_cb != MHI_CB_FATAL_ERROR)
+ return;
+
+ atomic_inc(&qdev->in_reset);
+ spin_lock_irqsave(&qdev->ul_lock, flags);
+ list_for_each_entry(pkt, &qdev->ul_pkts, node)
+ complete_all(&pkt->done);
+ spin_unlock_irqrestore(&qdev->ul_lock, flags);
+}
+
+/* Send data over MHI */
+static int qcom_mhi_qrtr_send(struct qrtr_endpoint *ep, struct sk_buff *skb)
+{
+ struct qrtr_mhi_dev *qdev = container_of(ep, struct qrtr_mhi_dev, ep);
+ struct qrtr_mhi_pkt *pkt;
+ int rc;
+
+ rc = skb_linearize(skb);
+ if (rc) {
+ kfree_skb(skb);
+ return rc;
+ }
+
+ pkt = kzalloc(sizeof(*pkt), GFP_KERNEL);
+ if (!pkt) {
+ kfree_skb(skb);
+ return -ENOMEM;
+ }
+
+ init_completion(&pkt->done);
+ kref_init(&pkt->refcount);
+ kref_get(&pkt->refcount);
+ pkt->skb = skb;
+
+ spin_lock_bh(&qdev->ul_lock);
+ list_add_tail(&pkt->node, &qdev->ul_pkts);
+ rc = mhi_queue_transfer(qdev->mhi_dev, DMA_TO_DEVICE, skb, skb->len,
+ MHI_EOT);
+ if (rc) {
+ list_del(&pkt->node);
+ kfree_skb(skb);
+ kfree(pkt);
+ spin_unlock_bh(&qdev->ul_lock);
+ return rc;
+ }
+
+ spin_unlock_bh(&qdev->ul_lock);
+ if (skb->sk)
+ sock_hold(skb->sk);
+
+ rc = wait_for_completion_interruptible_timeout(&pkt->done, HZ * 5);
+ if (atomic_read(&qdev->in_reset))
+ rc = -ECONNRESET;
+ else if (rc == 0)
+ rc = -ETIMEDOUT;
+ else if (rc > 0)
+ rc = 0;
+
+ kref_put(&pkt->refcount, qrtr_mhi_pkt_release);
+
+ return rc;
+}
+
+static int qcom_mhi_qrtr_probe(struct mhi_device *mhi_dev,
+ const struct mhi_device_id *id)
+{
+ struct qrtr_mhi_dev *qdev;
+ u32 net_id;
+ int rc;
+
+ qdev = devm_kzalloc(&mhi_dev->dev, sizeof(*qdev), GFP_KERNEL);
+ if (!qdev)
+ return -ENOMEM;
+
+ qdev->mhi_dev = mhi_dev;
+ qdev->dev = &mhi_dev->dev;
+ qdev->ep.xmit = qcom_mhi_qrtr_send;
+ atomic_set(&qdev->in_reset, 0);
+
+ net_id = QRTR_EP_NID_AUTO;
+
+ INIT_LIST_HEAD(&qdev->ul_pkts);
+ spin_lock_init(&qdev->ul_lock);
+
+ dev_set_drvdata(&mhi_dev->dev, qdev);
+ rc = qrtr_endpoint_register(&qdev->ep, net_id);
+ if (rc)
+ return rc;
+
+ dev_dbg(qdev->dev, "Qualcomm MHI QRTR driver probed\n");
+
+ return 0;
+}
+
+static void qcom_mhi_qrtr_remove(struct mhi_device *mhi_dev)
+{
+ struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev);
+
+ qrtr_endpoint_unregister(&qdev->ep);
+ dev_set_drvdata(&mhi_dev->dev, NULL);
+}
+
+static const struct mhi_device_id qcom_mhi_qrtr_id_table[] = {
+ { .chan = "IPCR" },
+ {}
+};
+MODULE_DEVICE_TABLE(mhi, qcom_mhi_qrtr_id_table);
+
+static struct mhi_driver qcom_mhi_qrtr_driver = {
+ .probe = qcom_mhi_qrtr_probe,
+ .remove = qcom_mhi_qrtr_remove,
+ .dl_xfer_cb = qcom_mhi_qrtr_dl_callback,
+ .ul_xfer_cb = qcom_mhi_qrtr_ul_callback,
+ .status_cb = qcom_mhi_qrtr_status_callback,
+ .id_table = qcom_mhi_qrtr_id_table,
+ .driver = {
+ .name = "qcom_mhi_qrtr",
+ },
+};
+
+module_driver(qcom_mhi_qrtr_driver, mhi_driver_register,
+ mhi_driver_unregister);
+
+MODULE_DESCRIPTION("Qualcomm IPC-Router MHI interface driver");
+MODULE_LICENSE("GPL v2");
--
2.17.1
This commit adds support for creating and destroying MHI devices. The
MHI devices bind to the MHI channels and are used to transfer data
between the MHI host and the client device.
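For example, with the "%04x_%s" device naming used in this patch, an
"IPCR" channel pair at channel number 20 would show up on the MHI bus as
device "0014_IPCR" (illustrative channel number).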
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/989
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[mani: split from pm patch and cleaned up for upstream]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/Makefile | 2 +-
drivers/bus/mhi/core/main.c | 123 ++++++++++++++++++++++++++++++++++
include/linux/mhi.h | 2 +
3 files changed, 126 insertions(+), 1 deletion(-)
create mode 100644 drivers/bus/mhi/core/main.c
diff --git a/drivers/bus/mhi/core/Makefile b/drivers/bus/mhi/core/Makefile
index 2db32697c67f..77f7730da4bf 100644
--- a/drivers/bus/mhi/core/Makefile
+++ b/drivers/bus/mhi/core/Makefile
@@ -1,3 +1,3 @@
obj-$(CONFIG_MHI_BUS) := mhi.o
-mhi-y := init.o
+mhi-y := init.o main.o
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
new file mode 100644
index 000000000000..216fd8691140
--- /dev/null
+++ b/drivers/bus/mhi/core/main.c
@@ -0,0 +1,123 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2018-2020, The Linux Foundation. All rights reserved.
+ *
+ */
+
+#define dev_fmt(fmt) "MHI: " fmt
+
+#include <linux/device.h>
+#include <linux/dma-direction.h>
+#include <linux/dma-mapping.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/mhi.h>
+#include <linux/module.h>
+#include <linux/skbuff.h>
+#include <linux/slab.h>
+#include "internal.h"
+
+int mhi_destroy_device(struct device *dev, void *data)
+{
+ struct mhi_device *mhi_dev;
+ struct mhi_controller *mhi_cntrl;
+
+ if (dev->bus != &mhi_bus_type)
+ return 0;
+
+ mhi_dev = to_mhi_device(dev);
+ mhi_cntrl = mhi_dev->mhi_cntrl;
+
+	/* Only destroy virtual devices that are attached to the bus */
+ if (mhi_dev->dev_type == MHI_DEVICE_CONTROLLER)
+ return 0;
+
+ dev_dbg(mhi_cntrl->dev, "destroy device for chan:%s\n",
+ mhi_dev->chan_name);
+
+ /* Notify the client and remove the device from MHI bus */
+ device_del(dev);
+ put_device(dev);
+
+ return 0;
+}
+
+static void mhi_notify(struct mhi_device *mhi_dev, enum mhi_callback cb_reason)
+{
+ struct mhi_driver *mhi_drv;
+
+ if (!mhi_dev->dev.driver)
+ return;
+
+ mhi_drv = to_mhi_driver(mhi_dev->dev.driver);
+
+ if (mhi_drv->status_cb)
+ mhi_drv->status_cb(mhi_dev, cb_reason);
+}
+
+/* Bind MHI channels to MHI devices */
+void mhi_create_devices(struct mhi_controller *mhi_cntrl)
+{
+ int i;
+ struct mhi_chan *mhi_chan;
+ struct mhi_device *mhi_dev;
+ int ret;
+
+ mhi_chan = mhi_cntrl->mhi_chan;
+ for (i = 0; i < mhi_cntrl->max_chan; i++, mhi_chan++) {
+ if (!mhi_chan->configured || mhi_chan->mhi_dev ||
+ !(mhi_chan->ee_mask & BIT(mhi_cntrl->ee)))
+ continue;
+ mhi_dev = mhi_alloc_device(mhi_cntrl);
+ if (!mhi_dev)
+ return;
+
+ mhi_dev->dev_type = MHI_DEVICE_XFER;
+ switch (mhi_chan->dir) {
+ case DMA_TO_DEVICE:
+ mhi_dev->ul_chan = mhi_chan;
+ mhi_dev->ul_chan_id = mhi_chan->chan;
+ break;
+ case DMA_FROM_DEVICE:
+ /* We use dl_chan as offload channels */
+ mhi_dev->dl_chan = mhi_chan;
+ mhi_dev->dl_chan_id = mhi_chan->chan;
+ break;
+ default:
+ dev_err(mhi_cntrl->dev, "Direction not supported\n");
+ mhi_dealloc_device(mhi_cntrl, mhi_dev);
+ return;
+ }
+
+ mhi_chan->mhi_dev = mhi_dev;
+
+ /* Check next channel if it matches */
+ if ((i + 1) < mhi_cntrl->max_chan && mhi_chan[1].configured) {
+ if (!strcmp(mhi_chan[1].name, mhi_chan->name)) {
+ i++;
+ mhi_chan++;
+ if (mhi_chan->dir == DMA_TO_DEVICE) {
+ mhi_dev->ul_chan = mhi_chan;
+ mhi_dev->ul_chan_id = mhi_chan->chan;
+ } else {
+ mhi_dev->dl_chan = mhi_chan;
+ mhi_dev->dl_chan_id = mhi_chan->chan;
+ }
+ mhi_chan->mhi_dev = mhi_dev;
+ }
+ }
+
+ /* Channel name is same for both UL and DL */
+ mhi_dev->chan_name = mhi_chan->name;
+ dev_set_name(&mhi_dev->dev, "%04x_%s", mhi_chan->chan,
+ mhi_dev->chan_name);
+
+ /* Init wakeup source if available */
+ if (mhi_dev->dl_chan && mhi_dev->dl_chan->wake_capable)
+ device_init_wakeup(&mhi_dev->dev, true);
+
+ ret = device_add(&mhi_dev->dev);
+ if (ret)
+ mhi_dealloc_device(mhi_cntrl, mhi_dev);
+ }
+}
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 137f1891701f..5cbc1e33829f 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -186,6 +186,7 @@ enum mhi_db_brst_mode {
* @doorbell_mode_switch: Channel switches to doorbell mode on M0 transition
* @auto_queue: Framework will automatically queue buffers for DL traffic
* @auto_start: Automatically start (open) this channel
+ * @wake-capable: Channel capable of waking up the system
*/
struct mhi_channel_config {
char *name;
@@ -204,6 +205,7 @@ struct mhi_channel_config {
bool doorbell_mode_switch;
bool auto_queue;
bool auto_start;
+ bool wake_capable;
};
/**
--
2.17.1
Add MAINTAINERS entry for MHI bus.
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
MAINTAINERS | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index cf6ccca6e61c..9213d150041c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10777,6 +10777,16 @@ M: Vladimir Vid <[email protected]>
S: Maintained
F: arch/arm64/boot/dts/marvell/armada-3720-uDPU.dts
+MHI BUS
+M: Manivannan Sadhasivam <[email protected]>
+M: Hemant Kumar <[email protected]>
+L: [email protected]
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi.git
+S: Maintained
+F: drivers/bus/mhi/
+F: include/linux/mhi.h
+F: Documentation/mhi/
+
MICROBLAZE ARCHITECTURE
M: Michal Simek <[email protected]>
W: http://www.monstr.eu/fdt/
--
2.17.1
The IPC Router protocol is also used by external modems for exchanging QMI
messages. Hence, it doesn't always depend on Qualcomm platforms. As a side
effect of removing the ARCH_QCOM dependency, the explicit COMPILE_TEST
build coverage is also dropped.
Cc: "David S. Miller" <[email protected]>
Cc: [email protected]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
net/qrtr/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/net/qrtr/Kconfig b/net/qrtr/Kconfig
index 8eb876471564..f362ca316015 100644
--- a/net/qrtr/Kconfig
+++ b/net/qrtr/Kconfig
@@ -4,7 +4,6 @@
config QRTR
tristate "Qualcomm IPC Router support"
- depends on ARCH_QCOM || COMPILE_TEST
---help---
Say Y if you intend to use Qualcomm IPC router protocol. The
protocol is used to communicate with services provided by other
--
2.17.1
This commit adds support for processing the MHI data and control
events from the client device. The client device can report various
events such as EE events, state change events by interrupting the
host through IRQ and adding events to the event rings allocated by
the host during initialization.
This is based on the patch submitted by Sujeev Dias:
https://lkml.org/lkml/2018/7/9/988
Signed-off-by: Sujeev Dias <[email protected]>
Signed-off-by: Siddartha Mohanadoss <[email protected]>
[mani: splitted the data transfer patch and cleaned up for upstream]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/core/init.c | 19 ++
drivers/bus/mhi/core/internal.h | 10 +
drivers/bus/mhi/core/main.c | 468 ++++++++++++++++++++++++++++++++
include/linux/mhi.h | 14 +
4 files changed, 511 insertions(+)
diff --git a/drivers/bus/mhi/core/init.c b/drivers/bus/mhi/core/init.c
index d0886de1c8bb..1b5169e20e2b 100644
--- a/drivers/bus/mhi/core/init.c
+++ b/drivers/bus/mhi/core/init.c
@@ -544,6 +544,19 @@ static int parse_ev_cfg(struct mhi_controller *mhi_cntrl,
mhi_event->data_type = event_cfg->data_type;
+ switch (mhi_event->data_type) {
+ case MHI_ER_DATA:
+ mhi_event->process_event = mhi_process_data_event_ring;
+ break;
+ case MHI_ER_CTRL:
+ mhi_event->process_event = mhi_process_ctrl_ev_ring;
+ break;
+ default:
+ dev_err(mhi_cntrl->dev,
+ "Event Ring type not supported\n");
+ goto error_ev_cfg;
+ }
+
mhi_event->hw_ring = event_cfg->hardware_event;
if (mhi_event->hw_ring)
mhi_cntrl->hw_ev_rings++;
@@ -781,6 +794,12 @@ int mhi_register_controller(struct mhi_controller *mhi_cntrl,
mhi_event->mhi_cntrl = mhi_cntrl;
spin_lock_init(&mhi_event->lock);
+ if (mhi_event->data_type == MHI_ER_CTRL)
+ tasklet_init(&mhi_event->task, mhi_ctrl_ev_task,
+ (ulong)mhi_event);
+ else
+ tasklet_init(&mhi_event->task, mhi_ev_task,
+ (ulong)mhi_event);
}
mhi_chan = mhi_cntrl->mhi_chan;
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index 8ce1f11b8074..aa8a3433afae 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -500,6 +500,8 @@ struct mhi_buf_info {
dma_addr_t p_addr;
size_t len;
enum dma_data_direction dir;
+ bool used; /* Indicates whether the buffer is used or not */
+ bool pre_mapped; /* Already pre-mapped by client */
};
struct mhi_event {
@@ -648,6 +650,14 @@ static inline void mhi_free_coherent(struct mhi_controller *mhi_cntrl,
dma_free_coherent(mhi_cntrl->dev, size, vaddr, dma_handle);
}
+/* Event processing methods */
+void mhi_ctrl_ev_task(unsigned long data);
+void mhi_ev_task(unsigned long data);
+int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+ struct mhi_event *mhi_event, u32 event_quota);
+int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+ struct mhi_event *mhi_event, u32 event_quota);
+
/* ISR handlers */
irqreturn_t mhi_irq_handler(int irq_number, void *dev);
irqreturn_t mhi_intvec_threaded_handler(int irq_number, void *dev);
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index 4d767dd97956..775d4a8fdefd 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -149,6 +149,16 @@ static void *mhi_to_virtual(struct mhi_ring *ring, dma_addr_t addr)
return (addr - ring->iommu_base) + ring->base;
}
+static void mhi_del_ring_element(struct mhi_controller *mhi_cntrl,
+ struct mhi_ring *ring)
+{
+ ring->rp += ring->el_size;
+ if (ring->rp >= (ring->base + ring->len))
+ ring->rp = ring->base;
+ /* Make sure the ring pointer update is visible to all CPUs */
+ smp_wmb();
+}
+
int mhi_destroy_device(struct device *dev, void *data)
{
struct mhi_device *mhi_dev;
@@ -335,3 +345,461 @@ irqreturn_t mhi_intvec_handler(int irq_number, void *dev)
return IRQ_WAKE_THREAD;
}
+
+static void mhi_recycle_ev_ring_element(struct mhi_controller *mhi_cntrl,
+ struct mhi_ring *ring)
+{
+ dma_addr_t ctxt_wp;
+
+ /* Update the WP */
+ ring->wp += ring->el_size;
+ ctxt_wp = *ring->ctxt_wp + ring->el_size;
+
+ if (ring->wp >= (ring->base + ring->len)) {
+ ring->wp = ring->base;
+ ctxt_wp = ring->iommu_base;
+ }
+
+ *ring->ctxt_wp = ctxt_wp;
+
+ /* Update the RP */
+ ring->rp += ring->el_size;
+ if (ring->rp >= (ring->base + ring->len))
+ ring->rp = ring->base;
+
+ /* Make sure the ring pointer updates are visible to all CPUs */
+ smp_wmb();
+}
+
+static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
+ struct mhi_tre *event,
+ struct mhi_chan *mhi_chan)
+{
+ struct mhi_ring *buf_ring, *tre_ring;
+ u32 ev_code;
+ struct mhi_result result;
+ unsigned long flags = 0;
+
+ ev_code = MHI_TRE_GET_EV_CODE(event);
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+
+ result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
+ -EOVERFLOW : 0;
+
+ /*
+ * If it's a DB Event, grab the channel lock as a writer with
+ * interrupts disabled, because the doorbell register has to be
+ * updated and another thread could be doing the same.
+ */
+ if (ev_code >= MHI_EV_CC_OOB)
+ write_lock_irqsave(&mhi_chan->lock, flags);
+ else
+ read_lock_bh(&mhi_chan->lock);
+
+ if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
+ goto end_process_tx_event;
+
+ switch (ev_code) {
+ case MHI_EV_CC_OVERFLOW:
+ case MHI_EV_CC_EOB:
+ case MHI_EV_CC_EOT:
+ {
+ dma_addr_t ptr = MHI_TRE_GET_EV_PTR(event);
+ struct mhi_tre *local_rp, *ev_tre;
+ void *dev_rp;
+ struct mhi_buf_info *buf_info;
+ u16 xfer_len;
+
+ /* Get the TRB this event points to */
+ ev_tre = mhi_to_virtual(tre_ring, ptr);
+
+ /* device rp after servicing the TREs */
+ dev_rp = ev_tre + 1;
+ if (dev_rp >= (tre_ring->base + tre_ring->len))
+ dev_rp = tre_ring->base;
+
+ result.dir = mhi_chan->dir;
+
+ /* local rp */
+ local_rp = tre_ring->rp;
+ while (local_rp != dev_rp) {
+ buf_info = buf_ring->rp;
+ /* If it's the last TRE, get the length from the event */
+ if (local_rp == ev_tre)
+ xfer_len = MHI_TRE_GET_EV_LEN(event);
+ else
+ xfer_len = buf_info->len;
+
+ result.buf_addr = buf_info->cb_buf;
+ result.bytes_xferd = xfer_len;
+ mhi_del_ring_element(mhi_cntrl, buf_ring);
+ mhi_del_ring_element(mhi_cntrl, tre_ring);
+ local_rp = tre_ring->rp;
+
+ /* notify client */
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+
+ if (mhi_chan->dir == DMA_TO_DEVICE)
+ atomic_dec(&mhi_cntrl->pending_pkts);
+ }
+ break;
+ } /* CC_EOT */
+ case MHI_EV_CC_OOB:
+ case MHI_EV_CC_DB_MODE:
+ {
+ unsigned long flags;
+
+ mhi_chan->db_cfg.db_mode = 1;
+ read_lock_irqsave(&mhi_cntrl->pm_lock, flags);
+ if (tre_ring->wp != tre_ring->rp &&
+ MHI_DB_ACCESS_VALID(mhi_cntrl)) {
+ mhi_ring_chan_db(mhi_cntrl, mhi_chan);
+ }
+ read_unlock_irqrestore(&mhi_cntrl->pm_lock, flags);
+ break;
+ }
+ case MHI_EV_CC_BAD_TRE:
+ default:
+ dev_err(mhi_cntrl->dev, "Unhandled event code: 0x%x\n", ev_code);
+ break;
+ } /* switch (ev_code) */
+
+end_process_tx_event:
+ if (ev_code >= MHI_EV_CC_OOB)
+ write_unlock_irqrestore(&mhi_chan->lock, flags);
+ else
+ read_unlock_bh(&mhi_chan->lock);
+
+ return 0;
+}
+
+static int parse_rsc_event(struct mhi_controller *mhi_cntrl,
+ struct mhi_tre *event,
+ struct mhi_chan *mhi_chan)
+{
+ struct mhi_ring *buf_ring, *tre_ring;
+ struct mhi_buf_info *buf_info;
+ struct mhi_result result;
+ int ev_code;
+ u32 cookie; /* offset to local descriptor */
+ u16 xfer_len;
+
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+
+ ev_code = MHI_TRE_GET_EV_CODE(event);
+ cookie = MHI_TRE_GET_EV_COOKIE(event);
+ xfer_len = MHI_TRE_GET_EV_LEN(event);
+
+ /* Warn on an out-of-bounds cookie */
+ WARN_ON(cookie >= buf_ring->len);
+
+ buf_info = buf_ring->base + cookie;
+
+ result.transaction_status = (ev_code == MHI_EV_CC_OVERFLOW) ?
+ -EOVERFLOW : 0;
+ result.bytes_xferd = xfer_len;
+ result.buf_addr = buf_info->cb_buf;
+ result.dir = mhi_chan->dir;
+
+ read_lock_bh(&mhi_chan->lock);
+
+ if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
+ goto end_process_rsc_event;
+
+ WARN_ON(!buf_info->used);
+
+ /* notify the client */
+ mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
+
+ /*
+ * Note: We're arbitrarily incrementing RP even though the completion
+ * packet we just processed might not correspond to it. We can do this
+ * because the device is guaranteed to cache descriptors in the order
+ * it receives them, so even though this completion event is for a
+ * different descriptor, we can reuse all descriptors in between.
+ * Example:
+ * The transfer ring has descriptors: A, B, C, D
+ * The last descriptor the host queued is D (WP) and the first
+ * descriptor is A (RP).
+ * The completion event we just serviced is for descriptor C.
+ * Then we can safely queue descriptors to replace A, B, and C
+ * even though the host did not receive completions for A and B.
+ */
+ mhi_del_ring_element(mhi_cntrl, tre_ring);
+ buf_info->used = false;
+
+end_process_rsc_event:
+ read_unlock_bh(&mhi_chan->lock);
+
+ return 0;
+}
+
+static void mhi_process_cmd_completion(struct mhi_controller *mhi_cntrl,
+ struct mhi_tre *tre)
+{
+ dma_addr_t ptr = MHI_TRE_GET_EV_PTR(tre);
+ struct mhi_cmd *cmd_ring = &mhi_cntrl->mhi_cmd[PRIMARY_CMD_RING];
+ struct mhi_ring *mhi_ring = &cmd_ring->ring;
+ struct mhi_tre *cmd_pkt;
+ struct mhi_chan *mhi_chan;
+ u32 chan;
+
+ cmd_pkt = mhi_to_virtual(mhi_ring, ptr);
+
+ chan = MHI_TRE_GET_CMD_CHID(cmd_pkt);
+ mhi_chan = &mhi_cntrl->mhi_chan[chan];
+ write_lock_bh(&mhi_chan->lock);
+ mhi_chan->ccs = MHI_TRE_GET_EV_CODE(tre);
+ complete(&mhi_chan->completion);
+ write_unlock_bh(&mhi_chan->lock);
+
+ mhi_del_ring_element(mhi_cntrl, mhi_ring);
+}
+
+int mhi_process_ctrl_ev_ring(struct mhi_controller *mhi_cntrl,
+ struct mhi_event *mhi_event,
+ u32 event_quota)
+{
+ struct mhi_tre *dev_rp, *local_rp;
+ struct mhi_ring *ev_ring = &mhi_event->ring;
+ struct mhi_event_ctxt *er_ctxt =
+ &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+ int count = 0;
+ u32 chan;
+ struct mhi_chan *mhi_chan;
+
+ /*
+ * This is a quick check to avoid unnecessary event processing
+ * in case MHI is already in error state, but it's still possible
+ * to transition to error state while processing events
+ */
+ if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+ return -EIO;
+
+ dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+ local_rp = ev_ring->rp;
+
+ while (dev_rp != local_rp) {
+ enum mhi_pkt_type type = MHI_TRE_GET_EV_TYPE(local_rp);
+
+ switch (type) {
+ case MHI_PKT_TYPE_BW_REQ_EVENT:
+ {
+ struct mhi_link_info *link_info;
+
+ link_info = &mhi_cntrl->mhi_link_info;
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ link_info->target_link_speed =
+ MHI_TRE_GET_EV_LINKSPEED(local_rp);
+ link_info->target_link_width =
+ MHI_TRE_GET_EV_LINKWIDTH(local_rp);
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ dev_dbg(mhi_cntrl->dev, "Received BW_REQ event\n");
+ mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_BW_REQ);
+ break;
+ }
+ case MHI_PKT_TYPE_STATE_CHANGE_EVENT:
+ {
+ enum mhi_state new_state;
+
+ new_state = MHI_TRE_GET_EV_STATE(local_rp);
+
+ dev_dbg(mhi_cntrl->dev,
+ "State change event to state: %s\n",
+ TO_MHI_STATE_STR(new_state));
+
+ switch (new_state) {
+ case MHI_STATE_M0:
+ mhi_pm_m0_transition(mhi_cntrl);
+ break;
+ case MHI_STATE_M1:
+ mhi_pm_m1_transition(mhi_cntrl);
+ break;
+ case MHI_STATE_M3:
+ mhi_pm_m3_transition(mhi_cntrl);
+ break;
+ case MHI_STATE_SYS_ERR:
+ {
+ enum mhi_pm_state pm_state;
+
+ dev_dbg(mhi_cntrl->dev,
+ "System error detected\n");
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ pm_state = mhi_tryset_pm_state(mhi_cntrl,
+ MHI_PM_SYS_ERR_DETECT);
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ if (pm_state == MHI_PM_SYS_ERR_DETECT)
+ schedule_work(&mhi_cntrl->syserr_worker);
+ break;
+ }
+ default:
+ dev_err(mhi_cntrl->dev, "Invalid state: %s\n",
+ TO_MHI_STATE_STR(new_state));
+ }
+
+ break;
+ }
+ case MHI_PKT_TYPE_CMD_COMPLETION_EVENT:
+ mhi_process_cmd_completion(mhi_cntrl, local_rp);
+ break;
+ case MHI_PKT_TYPE_EE_EVENT:
+ {
+ enum dev_st_transition st = DEV_ST_TRANSITION_MAX;
+ enum mhi_ee_type event = MHI_TRE_GET_EV_EXECENV(local_rp);
+
+ dev_dbg(mhi_cntrl->dev, "Received EE event: %s\n",
+ TO_MHI_EXEC_STR(event));
+ switch (event) {
+ case MHI_EE_SBL:
+ st = DEV_ST_TRANSITION_SBL;
+ break;
+ case MHI_EE_WFW:
+ case MHI_EE_AMSS:
+ st = DEV_ST_TRANSITION_MISSION_MODE;
+ break;
+ case MHI_EE_RDDM:
+ mhi_cntrl->status_cb(mhi_cntrl, MHI_CB_EE_RDDM);
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ mhi_cntrl->ee = event;
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ wake_up_all(&mhi_cntrl->state_event);
+ break;
+ default:
+ dev_err(mhi_cntrl->dev,
+ "Unhandled EE event: 0x%x\n", event);
+ }
+ if (st != DEV_ST_TRANSITION_MAX)
+ mhi_queue_state_transition(mhi_cntrl, st);
+
+ break;
+ }
+ case MHI_PKT_TYPE_TX_EVENT:
+ chan = MHI_TRE_GET_EV_CHID(local_rp);
+ mhi_chan = &mhi_cntrl->mhi_chan[chan];
+ parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
+ event_quota--;
+ break;
+ default:
+ dev_err(mhi_cntrl->dev, "Unhandled event type: %d\n",
+ type);
+ break;
+ }
+
+ mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+ local_rp = ev_ring->rp;
+ dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+ count++;
+ }
+
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
+ mhi_ring_er_db(mhi_event);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return count;
+}
+
+int mhi_process_data_event_ring(struct mhi_controller *mhi_cntrl,
+ struct mhi_event *mhi_event,
+ u32 event_quota)
+{
+ struct mhi_tre *dev_rp, *local_rp;
+ struct mhi_ring *ev_ring = &mhi_event->ring;
+ struct mhi_event_ctxt *er_ctxt =
+ &mhi_cntrl->mhi_ctxt->er_ctxt[mhi_event->er_index];
+ int count = 0;
+ u32 chan;
+ struct mhi_chan *mhi_chan;
+
+ if (unlikely(MHI_EVENT_ACCESS_INVALID(mhi_cntrl->pm_state)))
+ return -EIO;
+
+ dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+ local_rp = ev_ring->rp;
+
+ while (dev_rp != local_rp && event_quota > 0) {
+ enum mhi_pkt_type type = MHI_TRE_GET_EV_TYPE(local_rp);
+
+ chan = MHI_TRE_GET_EV_CHID(local_rp);
+ mhi_chan = &mhi_cntrl->mhi_chan[chan];
+
+ if (likely(type == MHI_PKT_TYPE_TX_EVENT)) {
+ parse_xfer_event(mhi_cntrl, local_rp, mhi_chan);
+ event_quota--;
+ } else if (type == MHI_PKT_TYPE_RSC_TX_EVENT) {
+ parse_rsc_event(mhi_cntrl, local_rp, mhi_chan);
+ event_quota--;
+ }
+
+ mhi_recycle_ev_ring_element(mhi_cntrl, ev_ring);
+ local_rp = ev_ring->rp;
+ dev_rp = mhi_to_virtual(ev_ring, er_ctxt->rp);
+ count++;
+ }
+ read_lock_bh(&mhi_cntrl->pm_lock);
+ if (likely(MHI_DB_ACCESS_VALID(mhi_cntrl)))
+ mhi_ring_er_db(mhi_event);
+ read_unlock_bh(&mhi_cntrl->pm_lock);
+
+ return count;
+}
+
+void mhi_ev_task(unsigned long data)
+{
+ struct mhi_event *mhi_event = (struct mhi_event *)data;
+ struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+
+ /* process all pending events */
+ spin_lock_bh(&mhi_event->lock);
+ mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX);
+ spin_unlock_bh(&mhi_event->lock);
+}
+
+void mhi_ctrl_ev_task(unsigned long data)
+{
+ struct mhi_event *mhi_event = (struct mhi_event *)data;
+ struct mhi_controller *mhi_cntrl = mhi_event->mhi_cntrl;
+ enum mhi_state state;
+ enum mhi_pm_state pm_state = 0;
+ int ret;
+
+ /*
+ * We can check the PM state without a lock here because there is no
+ * way the PM state can change from reg access valid to no access
+ * while this thread is executing.
+ */
+ if (!MHI_REG_ACCESS_VALID(mhi_cntrl->pm_state)) {
+ /*
+ * We may have a pending event but not allowed to
+ * process it since we are probably in a suspended state,
+ * so trigger a resume.
+ */
+ mhi_cntrl->runtime_get(mhi_cntrl);
+ mhi_cntrl->runtime_put(mhi_cntrl);
+
+ return;
+ }
+
+ /* Process ctrl events */
+ ret = mhi_event->process_event(mhi_cntrl, mhi_event, U32_MAX);
+
+ /*
+ * We received an IRQ but there are no events to process; maybe the
+ * device went to SYS_ERR state. Check the state to confirm.
+ */
+ if (!ret) {
+ write_lock_irq(&mhi_cntrl->pm_lock);
+ state = mhi_get_mhi_state(mhi_cntrl);
+ if (state == MHI_STATE_SYS_ERR) {
+ dev_dbg(mhi_cntrl->dev, "System error detected\n");
+ pm_state = mhi_tryset_pm_state(mhi_cntrl,
+ MHI_PM_SYS_ERR_DETECT);
+ }
+ write_unlock_irq(&mhi_cntrl->pm_lock);
+ if (pm_state == MHI_PM_SYS_ERR_DETECT)
+ schedule_work(&mhi_cntrl->syserr_worker);
+ }
+}
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index 5ce99939a624..2ea2deb922a1 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -31,6 +31,7 @@ struct mhi_buf_info;
* @MHI_CB_EE_MISSION_MODE: MHI device entered Mission Mode exec env
* @MHI_CB_SYS_ERROR: MHI device entered error state (may recover)
* @MHI_CB_FATAL_ERROR: MHI device entered fatal error state
+ * @MHI_CB_BW_REQ: Received a bandwidth switch request from device
*/
enum mhi_callback {
MHI_CB_IDLE,
@@ -41,6 +42,7 @@ enum mhi_callback {
MHI_CB_EE_MISSION_MODE,
MHI_CB_SYS_ERROR,
MHI_CB_FATAL_ERROR,
+ MHI_CB_BW_REQ,
};
/**
@@ -94,6 +96,16 @@ struct image_info {
u32 entries;
};
+/**
+ * struct mhi_link_info - BW requirement
+ * @target_link_speed: Link speed as defined by TLS bits in LinkControl reg
+ * @target_link_width: Link width as defined by NLW bits in LinkStatus reg
+ */
+struct mhi_link_info {
+ unsigned int target_link_speed;
+ unsigned int target_link_width;
+};
+
/**
* enum mhi_ee_type - Execution environment types
* @MHI_EE_PBL: Primary Bootloader
@@ -335,6 +347,7 @@ struct mhi_controller_config {
* @transition_list: List of MHI state transitions
* @transition_lock: Lock for protecting MHI state transition list
* @wlock: Lock for protecting device wakeup
+ * @mhi_link_info: Device bandwidth info
* @st_worker: State transition worker
* @fw_worker: Firmware download worker
* @syserr_worker: System error worker
@@ -395,6 +408,7 @@ struct mhi_controller {
struct list_head transition_list;
spinlock_t transition_lock;
spinlock_t wlock;
+ struct mhi_link_info mhi_link_info;
struct work_struct st_worker;
struct work_struct fw_worker;
struct work_struct syserr_worker;
--
2.17.1
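To connect the event processing above with the client side: for every completed transfer ring element, parse_xfer_event() fills a struct mhi_result and invokes the channel's xfer_cb. A client callback could consume it roughly as follows (hedged sketch; example_consume() is a made-up helper, not part of this series):
```
/* Hedged sketch of a client downlink transfer callback;
 * parse_xfer_event() fills mhi_result before invoking it.
 */
static void example_consume(void *buf, size_t len);	/* hypothetical consumer */

static void example_dl_xfer_cb(struct mhi_device *mhi_dev,
			       struct mhi_result *mhi_res)
{
	/* transaction_status is -EOVERFLOW for an overflow completion,
	 * 0 otherwise (per parse_xfer_event() above)
	 */
	if (mhi_res->transaction_status) {
		dev_err(&mhi_dev->dev, "DL transfer failed: %d\n",
			mhi_res->transaction_status);
		return;
	}

	/* buf_addr is the cb_buf the client queued earlier; bytes_xferd
	 * comes from the completion event's length field
	 */
	example_consume(mhi_res->buf_addr, mhi_res->bytes_xferd);
}
```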
QMI helpers are not used by Qualcomm platforms alone. One of the
exceptions is the external modems becoming available in the near
future. As a side effect of removing the dependency, the option is
also going to lose the COMPILE_TEST build coverage.
Cc: Andy Gross <[email protected]>
Cc: Bjorn Andersson <[email protected]>
Cc: [email protected]
Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/soc/qcom/Kconfig | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/soc/qcom/Kconfig b/drivers/soc/qcom/Kconfig
index 79d826553ac8..ca057bc9aae6 100644
--- a/drivers/soc/qcom/Kconfig
+++ b/drivers/soc/qcom/Kconfig
@@ -88,7 +88,6 @@ config QCOM_PM
config QCOM_QMI_HELPERS
tristate
- depends on ARCH_QCOM || COMPILE_TEST
depends on NET
config QCOM_RMTFS_MEM
--
2.17.1
On Fri, Jan 31, 2020 at 3:53 PM Manivannan Sadhasivam
<[email protected]> wrote:
>
> Add MAINTAINERS entry for MHI bus.
> +MHI BUS
> +M: Manivannan Sadhasivam <[email protected]>
> +M: Hemant Kumar <[email protected]>
> +L: [email protected]
> +T: git git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi.git
> +S: Maintained
> +F: drivers/bus/mhi/
> +F: include/linux/mhi.h
> +F: Documentation/mhi/
Had you run parse-maintainers.pl afterwards to see if everything is okay?
--
With Best Regards,
Andy Shevchenko
On Fri, 31 Jan 2020 19:20:07 +0530, Manivannan Sadhasivam wrote:
> +/* From QRTR to MHI */
> +static void qcom_mhi_qrtr_ul_callback(struct mhi_device *mhi_dev,
> + struct mhi_result *mhi_res)
> +{
> + struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev);
> + struct qrtr_mhi_pkt *pkt;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&qdev->ul_lock, flags);
> + pkt = list_first_entry(&qdev->ul_pkts, struct qrtr_mhi_pkt, node);
> + list_del(&pkt->node);
> + complete_all(&pkt->done);
> +
> + kref_put(&pkt->refcount, qrtr_mhi_pkt_release);
Which kref_get() does this pair with?
Looks like qcom_mhi_qrtr_send() will release a reference after
completion, too.
> + spin_unlock_irqrestore(&qdev->ul_lock, flags);
> +}
Hi Andy,
On Mon, Feb 03, 2020 at 12:16:16PM +0200, Andy Shevchenko wrote:
> On Fri, Jan 31, 2020 at 3:53 PM Manivannan Sadhasivam
> <[email protected]> wrote:
> >
> > Add MAINTAINERS entry for MHI bus.
>
> > +MHI BUS
> > +M: Manivannan Sadhasivam <[email protected]>
> > +M: Hemant Kumar <[email protected]>
> > +L: [email protected]
> > +T: git git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi.git
> > +S: Maintained
> > +F: drivers/bus/mhi/
> > +F: include/linux/mhi.h
> > +F: Documentation/mhi/
>
> Had you run parse-maintainers.pl afterwards to see if everything is okay?
>
Hmm. I didn't run it before, but now it looks like the entries need to be
sorted as below:
M: Manivannan Sadhasivam <[email protected]>
M: Hemant Kumar <[email protected]>
L: [email protected]
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi.git
F: Documentation/mhi/
F: drivers/bus/mhi/
F: include/linux/mhi.h
Will address this in next revision.
Thanks,
Mani
> --
> With Best Regards,
> Andy Shevchenko
Hi Jakub,
On Mon, Feb 03, 2020 at 10:12:25AM -0800, Jakub Kicinski wrote:
> On Fri, 31 Jan 2020 19:20:07 +0530, Manivannan Sadhasivam wrote:
> > +/* From QRTR to MHI */
> > +static void qcom_mhi_qrtr_ul_callback(struct mhi_device *mhi_dev,
> > + struct mhi_result *mhi_res)
> > +{
> > + struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev);
> > + struct qrtr_mhi_pkt *pkt;
> > + unsigned long flags;
> > +
> > + spin_lock_irqsave(&qdev->ul_lock, flags);
> > + pkt = list_first_entry(&qdev->ul_pkts, struct qrtr_mhi_pkt, node);
> > + list_del(&pkt->node);
> > + complete_all(&pkt->done);
> > +
> > + kref_put(&pkt->refcount, qrtr_mhi_pkt_release);
>
> Which kref_get() does this pair with?
>
> Looks like qcom_mhi_qrtr_send() will release a reference after
> completion, too.
>
Yikes, there is some issue here...
Actually, the issue is not in what you referred to above but in the overall
kref handling itself. Please see below.
kref_put() should be present in qcom_mhi_qrtr_ul_callback() as it will
decrement the refcount which got incremented in qcom_mhi_qrtr_send(). It
should be noted that kref_init() will set the refcount to 1 and kref_get() will
increment it to 2. So to properly release the refcount to 0, we need to call
kref_put() twice.
So if all goes well, the refcount will get decremented twice, in
qcom_mhi_qrtr_ul_callback() as well as in qcom_mhi_qrtr_send(), and we are good.
But if the transfer has failed, i.e., when qcom_mhi_qrtr_ul_callback() doesn't
get called, then we are leaking a reference. I need to rework the kref handling
code in the next iteration.
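To spell out the lifecycle, here is a simplified sketch of the send path as I read it from the posted driver (locking and skb handling elided; qdev->mhi_dev and the MHI_EOT flag are assumed from the series):
```
/* Simplified sketch of qcom_mhi_qrtr_send(); locking and skb handling
 * elided. Two references exist per packet: one from kref_init() and
 * one from kref_get().
 */
static int qcom_mhi_qrtr_send_sketch(struct qrtr_mhi_dev *qdev,
				     struct qrtr_mhi_pkt *pkt,
				     struct sk_buff *skb)
{
	int rc;

	kref_init(&pkt->refcount);		/* refcount = 1 */
	kref_get(&pkt->refcount);		/* refcount = 2 */
	list_add_tail(&pkt->node, &qdev->ul_pkts);

	rc = mhi_queue_transfer(qdev->mhi_dev, DMA_TO_DEVICE, skb,
				skb->len, MHI_EOT);
	if (rc)
		return rc;	/* leak: neither reference gets dropped */

	/* qcom_mhi_qrtr_ul_callback() drops one reference on completion */
	wait_for_completion(&pkt->done);

	/* ... and the sender drops the other one here */
	kref_put(&pkt->refcount, qrtr_mhi_pkt_release);
	return 0;
}
```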
Thanks for triggering this!
Regards,
Mani
> > + spin_unlock_irqrestore(&qdev->ul_lock, flags);
> > +}
On 1/31/2020 6:49 AM, Manivannan Sadhasivam wrote:
> This commit adds support for creating and destroying MHI devices. The
> MHI devices binds to the MHI channels and are used to transfer data
> between MHI host and client device.
>
> This is based on the patch submitted by Sujeev Dias:
> https://lkml.org/lkml/2018/7/9/989
>
> Signed-off-by: Sujeev Dias <[email protected]>
> Signed-off-by: Siddartha Mohanadoss <[email protected]>
> [mani: splitted from pm patch and cleaned up for upstream]
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
Reviewed-by: Jeffrey Hugo <[email protected]>
Tested-by: Jeffrey Hugo <[email protected]>
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
On 1/31/2020 6:50 AM, Manivannan Sadhasivam wrote:
> Add support for transferring data between external modem and host
> processor using MHI protocol.
>
> This is based on the patch submitted by Sujeev Dias:
> https://lkml.org/lkml/2018/7/9/988
>
> Signed-off-by: Sujeev Dias <[email protected]>
> Signed-off-by: Siddartha Mohanadoss <[email protected]>
> [mani: splitted the data transfer patch and cleaned up for upstream]
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
Reviewed-by: Jeffrey Hugo <[email protected]>
Tested-by: Jeffrey Hugo <[email protected]>
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
On 1/31/2020 6:50 AM, Manivannan Sadhasivam wrote:
> This commit adds support for processing the MHI data and control
> events from the client device. The client device can report various
> events such as EE events, state change events by interrupting the
> host through IRQ and adding events to the event rings allocated by
> the host during initialization.
>
> This is based on the patch submitted by Sujeev Dias:
> https://lkml.org/lkml/2018/7/9/988
>
> Signed-off-by: Sujeev Dias <[email protected]>
> Signed-off-by: Siddartha Mohanadoss <[email protected]>
> [mani: splitted the data transfer patch and cleaned up for upstream]
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
Reviewed-by: Jeffrey Hugo <[email protected]>
Tested-by: Jeffrey Hugo <[email protected]>
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
On 1/31/2020 6:50 AM, Manivannan Sadhasivam wrote:
> Add uevent support to MHI bus so that the client drivers can be autoloaded
> by udev when the MHI devices gets created. The client drivers are
> expected to provide MODULE_DEVICE_TABLE with the MHI id_table struct so
> that the alias can be exported.
>
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
Reviewed-by: Jeffrey Hugo <[email protected]>
Tested-by: Jeffrey Hugo <[email protected]>
--
Jeffrey Hugo
Qualcomm Technologies, Inc. is a member of the
Code Aurora Forum, a Linux Foundation Collaborative Project.
On 2/4/2020 12:19 AM, Manivannan Sadhasivam wrote:
> Hi Jakub,
>
> On Mon, Feb 03, 2020 at 10:12:25AM -0800, Jakub Kicinski wrote:
>> On Fri, 31 Jan 2020 19:20:07 +0530, Manivannan Sadhasivam wrote:
>>> +/* From QRTR to MHI */
>>> +static void qcom_mhi_qrtr_ul_callback(struct mhi_device *mhi_dev,
>>> + struct mhi_result *mhi_res)
>>> +{
>>> + struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev);
>>> + struct qrtr_mhi_pkt *pkt;
>>> + unsigned long flags;
>>> +
>>> + spin_lock_irqsave(&qdev->ul_lock, flags);
>>> + pkt = list_first_entry(&qdev->ul_pkts, struct qrtr_mhi_pkt, node);
>>> + list_del(&pkt->node);
>>> + complete_all(&pkt->done);
>>> +
>>> + kref_put(&pkt->refcount, qrtr_mhi_pkt_release);
>> Which kref_get() does this pair with?
>>
>> Looks like qcom_mhi_qrtr_send() will release a reference after
>> completion, too.
>>
> Yikes, there is some issue here...
>
> Actually, the issue is not in what you referred to above but in the overall
> kref handling itself. Please see below.
>
> kref_put() should be present in qcom_mhi_qrtr_ul_callback() as it will
> decrement the refcount which got incremented in qcom_mhi_qrtr_send(). It
> should be noted that kref_init() will set the refcount to 1 and kref_get() will
> increment it to 2. So to properly release the refcount to 0, we need to call
> kref_put() twice.
>
> So if all goes well, the refcount will get decremented twice, in
> qcom_mhi_qrtr_ul_callback() as well as in qcom_mhi_qrtr_send(), and we are good.
>
> But if the transfer has failed, i.e., when qcom_mhi_qrtr_ul_callback() doesn't
> get called, then we are leaking a reference. I need to rework the kref handling
> code in the next iteration.
>
> Thanks for triggering this!
>
> Regards,
> Mani
>
>>> + spin_unlock_irqrestore(&qdev->ul_lock, flags);
>>> +}
Hi Mani,
I'm not sure if this was changed in your patches, but MHI is supposed to give a
ul_callback() for any packet that is successfully queued. In the case of the
transfer failing, the ul_callback() should still be called, so there should
be no refcount leak. It is an essential assumption I made; if that no longer
holds true, then the entire driver needs to be reworked.
Thanks,
Chris
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project
Hi Chris,
On Thu, Feb 06, 2020 at 04:14:19PM -0800, Chris Lew wrote:
>
> On 2/4/2020 12:19 AM, Manivannan Sadhasivam wrote:
> > Hi Jakub,
> >
> > On Mon, Feb 03, 2020 at 10:12:25AM -0800, Jakub Kicinski wrote:
> > > On Fri, 31 Jan 2020 19:20:07 +0530, Manivannan Sadhasivam wrote:
> > > > +/* From QRTR to MHI */
> > > > +static void qcom_mhi_qrtr_ul_callback(struct mhi_device *mhi_dev,
> > > > + struct mhi_result *mhi_res)
> > > > +{
> > > > + struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev);
> > > > + struct qrtr_mhi_pkt *pkt;
> > > > + unsigned long flags;
> > > > +
> > > > + spin_lock_irqsave(&qdev->ul_lock, flags);
> > > > + pkt = list_first_entry(&qdev->ul_pkts, struct qrtr_mhi_pkt, node);
> > > > + list_del(&pkt->node);
> > > > + complete_all(&pkt->done);
> > > > +
> > > > + kref_put(&pkt->refcount, qrtr_mhi_pkt_release);
> > > Which kref_get() does this pair with?
> > >
> > > Looks like qcom_mhi_qrtr_send() will release a reference after
> > > completion, too.
> > >
> > Yikes, there is some issue here...
> >
> > Actually, the issue is not in what you referred to above but in the overall
> > kref handling itself. Please see below.
> >
> > kref_put() should be present in qcom_mhi_qrtr_ul_callback() as it will
> > decrement the refcount which got incremented in qcom_mhi_qrtr_send(). It
> > should be noted that kref_init() will set the refcount to 1 and kref_get() will
> > increment it to 2. So to properly release the refcount to 0, we need to call
> > kref_put() twice.
> >
> > So if all goes well, the refcount will get decremented twice, in
> > qcom_mhi_qrtr_ul_callback() as well as in qcom_mhi_qrtr_send(), and we are good.
> >
> > But if the transfer has failed, i.e., when qcom_mhi_qrtr_ul_callback() doesn't
> > get called, then we are leaking a reference. I need to rework the kref handling
> > code in the next iteration.
> >
> > Thanks for triggering this!
> >
> > Regards,
> > Mani
> >
> > > > + spin_unlock_irqrestore(&qdev->ul_lock, flags);
> > > > +}
>
> Hi Mani,
>
> I'm not sure if this was changed in your patches, but MHI is supposed to give a
> ul_callback() for any packet that is successfully queued. In the case of the
> transfer failing, the ul_callback() should still be called, so there should
> be no refcount leak. It is an essential assumption I made; if that no longer
> holds true, then the entire driver needs to be reworked.
>
Your assumption is correct. The ul_xfer_cb will be called, irrespective of the
transfer state (success or failure), only when the packet actually gets queued
into the transfer ring. But when mhi_queue_transfer() returns an error before
queuing the packet, we need to drop the reference in the error path ourselves.
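Concretely, something like this in the error leg of the send path should plug the leak (sketch only; locking elided):
```
	rc = mhi_queue_transfer(qdev->mhi_dev, DMA_TO_DEVICE, skb,
				skb->len, MHI_EOT);
	if (rc) {
		/* ul_callback() will never run for this packet, so drop
		 * both the kref_init() and kref_get() references here
		 */
		list_del(&pkt->node);
		kref_put(&pkt->refcount, qrtr_mhi_pkt_release);
		kref_put(&pkt->refcount, qrtr_mhi_pkt_release);
		return rc;
	}
```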
Please correct me if I'm wrong.
Thanks,
Mani
> Thanks,
> Chris
>
> --
>
> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project
On 2/10/2020 7:50 PM, Manivannan Sadhasivam wrote:
> Hi Chris,
>
> On Thu, Feb 06, 2020 at 04:14:19PM -0800, Chris Lew wrote:
>> On 2/4/2020 12:19 AM, Manivannan Sadhasivam wrote:
>>> Hi Jakub,
>>>
>>> On Mon, Feb 03, 2020 at 10:12:25AM -0800, Jakub Kicinski wrote:
>>>> On Fri, 31 Jan 2020 19:20:07 +0530, Manivannan Sadhasivam wrote:
>>>>> +/* From QRTR to MHI */
>>>>> +static void qcom_mhi_qrtr_ul_callback(struct mhi_device *mhi_dev,
>>>>> + struct mhi_result *mhi_res)
>>>>> +{
>>>>> + struct qrtr_mhi_dev *qdev = dev_get_drvdata(&mhi_dev->dev);
>>>>> + struct qrtr_mhi_pkt *pkt;
>>>>> + unsigned long flags;
>>>>> +
>>>>> + spin_lock_irqsave(&qdev->ul_lock, flags);
>>>>> + pkt = list_first_entry(&qdev->ul_pkts, struct qrtr_mhi_pkt, node);
>>>>> + list_del(&pkt->node);
>>>>> + complete_all(&pkt->done);
>>>>> +
>>>>> + kref_put(&pkt->refcount, qrtr_mhi_pkt_release);
>>>> Which kref_get() does this pair with?
>>>>
>>>> Looks like qcom_mhi_qrtr_send() will release a reference after
>>>> completion, too.
>>>>
>>> Yikes, there is some issue here...
>>>
>>> Actually, the issue is not in what you referred to above but in the overall
>>> kref handling itself. Please see below.
>>>
>>> kref_put() should be present in qcom_mhi_qrtr_ul_callback() as it will
>>> decrement the refcount which got incremented in qcom_mhi_qrtr_send(). It
>>> should be noted that kref_init() will set the refcount to 1 and kref_get() will
>>> increment it to 2. So to properly release the refcount to 0, we need to call
>>> kref_put() twice.
>>>
>>> So if all goes well, the refcount will get decremented twice, in
>>> qcom_mhi_qrtr_ul_callback() as well as in qcom_mhi_qrtr_send(), and we are good.
>>>
>>> But if the transfer has failed, i.e., when qcom_mhi_qrtr_ul_callback() doesn't
>>> get called, then we are leaking a reference. I need to rework the kref handling
>>> code in the next iteration.
>>>
>>> Thanks for triggering this!
>>>
>>> Regards,
>>> Mani
>>>
>>>>> + spin_unlock_irqrestore(&qdev->ul_lock, flags);
>>>>> +}
>> Hi Mani,
>>
>> I'm not sure if this was changed in your patches, but MHI is supposed to give a
>> ul_callback() for any packet that is successfully queued. In the case of the
>> transfer failing, the ul_callback() should still be called, so there should
>> be no refcount leak. It is an essential assumption I made; if that no longer
>> holds true, then the entire driver needs to be reworked.
>>
> Your assumption is correct. The ul_xfer_cb will be called, irrespective of the
> transfer state (success or failure), only when the packet actually gets queued
> into the transfer ring. But when mhi_queue_transfer() returns an error before
> queuing the packet, we need to drop the reference in the error path ourselves.
>
> Please correct me if I'm wrong.
The error path for mhi_queue_transfer directly frees the packet structure since no
other context has a reference to those structs. If you wanted to clean it up and
converge on using the kref release to free, I think that would work. There are some
things you'll have to rearrange, like at what point the packet is added to the ul_pkts list.
> Thanks,
> Mani
>
>> Thanks,
>> Chris
>>
>> --
>>
>> Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project
--
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, a Linux Foundation Collaborative Project
On Fri, Jan 31, 2020 at 2:51 PM Manivannan Sadhasivam
<[email protected]> wrote:
> @@ -648,6 +715,31 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
> mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
> mhi_chan->xfer_type = ch_cfg->data_type;
>
> + switch (mhi_chan->xfer_type) {
> + case MHI_BUF_RAW:
> + mhi_chan->gen_tre = mhi_gen_tre;
> + mhi_chan->queue_xfer = mhi_queue_buf;
> + break;
> + case MHI_BUF_SKB:
> + mhi_chan->queue_xfer = mhi_queue_skb;
> + break;
> + case MHI_BUF_SCLIST:
> + mhi_chan->gen_tre = mhi_gen_tre;
> + mhi_chan->queue_xfer = mhi_queue_sclist;
> + break;
> + case MHI_BUF_NOP:
> + mhi_chan->queue_xfer = mhi_queue_nop;
> + break;
> + case MHI_BUF_DMA:
> + case MHI_BUF_RSC_DMA:
> + mhi_chan->queue_xfer = mhi_queue_dma;
> + break;
> + default:
> + dev_err(mhi_cntrl->dev,
> + "Channel datatype not supported\n");
> + goto error_chan_cfg;
> + }
> +
While looking through the driver to see how the DMA gets handled, I came
across the multitude of mhi_queue_* functions, which seems like a
layering violation to me, given that they are all implemented by the
core code as well, and the client driver needs to be aware of
which one to call. Are you able to lift these out of the common interface
and make the client driver call these directly, or maybe provide a direct
interface based on mhi_buf_info to replace these?
Arnd
Hi Arnd,
On Mon, Feb 17, 2020 at 05:13:37PM +0100, Arnd Bergmann wrote:
> On Fri, Jan 31, 2020 at 2:51 PM Manivannan Sadhasivam
> <[email protected]> wrote:
>
> > @@ -648,6 +715,31 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
> > mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
> > mhi_chan->xfer_type = ch_cfg->data_type;
> >
> > + switch (mhi_chan->xfer_type) {
> > + case MHI_BUF_RAW:
> > + mhi_chan->gen_tre = mhi_gen_tre;
> > + mhi_chan->queue_xfer = mhi_queue_buf;
> > + break;
> > + case MHI_BUF_SKB:
> > + mhi_chan->queue_xfer = mhi_queue_skb;
> > + break;
> > + case MHI_BUF_SCLIST:
> > + mhi_chan->gen_tre = mhi_gen_tre;
> > + mhi_chan->queue_xfer = mhi_queue_sclist;
> > + break;
> > + case MHI_BUF_NOP:
> > + mhi_chan->queue_xfer = mhi_queue_nop;
> > + break;
> > + case MHI_BUF_DMA:
> > + case MHI_BUF_RSC_DMA:
> > + mhi_chan->queue_xfer = mhi_queue_dma;
> > + break;
> > + default:
> > + dev_err(mhi_cntrl->dev,
> > + "Channel datatype not supported\n");
> > + goto error_chan_cfg;
> > + }
> > +
>
> While looking through the driver to see how the DMA gets handled, I came
> across the multitude of mhi_queue_* functions, which seems like a
> layering violation to me, given that they are all implemented by the
> core code as well, and the client driver needs to be aware of
> which one to call. Are you able to lift these out of the common interface
> and make the client driver call these directly, or maybe provide a direct
> interface based on mhi_buf_info to replace these?
>
It sounds reasonable to me. Let me discuss this internally with Qcom guys to
see if they have any objections.
Thanks,
Mani
> Arnd
Hi Arnd,
On Mon, Feb 17, 2020 at 10:17:51PM +0530, Manivannan Sadhasivam wrote:
> Hi Arnd,
>
> On Mon, Feb 17, 2020 at 05:13:37PM +0100, Arnd Bergmann wrote:
> > On Fri, Jan 31, 2020 at 2:51 PM Manivannan Sadhasivam
> > <[email protected]> wrote:
> >
> > > @@ -648,6 +715,31 @@ static int parse_ch_cfg(struct mhi_controller *mhi_cntrl,
> > > mhi_chan->db_cfg.pollcfg = ch_cfg->pollcfg;
> > > mhi_chan->xfer_type = ch_cfg->data_type;
> > >
> > > + switch (mhi_chan->xfer_type) {
> > > + case MHI_BUF_RAW:
> > > + mhi_chan->gen_tre = mhi_gen_tre;
> > > + mhi_chan->queue_xfer = mhi_queue_buf;
> > > + break;
> > > + case MHI_BUF_SKB:
> > > + mhi_chan->queue_xfer = mhi_queue_skb;
> > > + break;
> > > + case MHI_BUF_SCLIST:
> > > + mhi_chan->gen_tre = mhi_gen_tre;
> > > + mhi_chan->queue_xfer = mhi_queue_sclist;
> > > + break;
> > > + case MHI_BUF_NOP:
> > > + mhi_chan->queue_xfer = mhi_queue_nop;
> > > + break;
> > > + case MHI_BUF_DMA:
> > > + case MHI_BUF_RSC_DMA:
> > > + mhi_chan->queue_xfer = mhi_queue_dma;
> > > + break;
> > > + default:
> > > + dev_err(mhi_cntrl->dev,
> > > + "Channel datatype not supported\n");
> > > + goto error_chan_cfg;
> > > + }
> > > +
> >
> > While looking through the driver to see how the DMA gets handled, I came
> > across the multitude of mhi_queue_* functions, which seems like a
> > layering violation to me, given that they are all implemented by the
> > core code as well, and the client driver needs to be aware of
> > which one to call. Are you able to lift these out of the common interface
> > and make the client driver call these directly, or maybe provide a direct
> > interface based on mhi_buf_info to replace these?
> >
>
> It sounds reasonable to me. Let me discuss this internally with Qcom guys to
> see if they have any objections.
>
I looked into this in detail and found that the queue_xfer callbacks are tied
to the MHI channels. For instance, the callback gets attached to ul_chan (uplink
channel) and dl_chan (downlink channel) during controller registration. And
when the device driver calls mhi_queue_transfer(), the MHI stack dispatches
to the respective queue function for the relevant channel. For instance,
```
int mhi_queue_transfer(struct mhi_device *mhi_dev,
enum dma_data_direction dir, void *buf, size_t len,
enum mhi_flags mflags)
{
if (dir == DMA_TO_DEVICE)
return mhi_dev->ul_chan->queue_xfer(mhi_dev, mhi_dev->ul_chan,
buf, len, mflags);
else
return mhi_dev->dl_chan->queue_xfer(mhi_dev, mhi_dev->dl_chan,
buf, len, mflags);
}
```
If we use the direct queue APIs, this will become hard to handle. So, I'll keep
it as it is.
Thanks,
Mani
> Thanks,
> Mani
>
> > Arnd
On Tue, Feb 18, 2020 at 6:51 AM Manivannan Sadhasivam
<[email protected]> wrote:
> On Mon, Feb 17, 2020 at 10:17:51PM +0530, Manivannan Sadhasivam wrote:
> > On Mon, Feb 17, 2020 at 05:13:37PM +0100, Arnd Bergmann wrote:
> > > On Fri, Jan 31, 2020 at 2:51 PM Manivannan Sadhasivam
> > > <[email protected]> wrote:
> > > While looking through the driver to see how the DMA gets handled, I came
> > > across the multitude of mhi_queue_* functions, which seems like a
> > > layering violation to me, given that they are all implemented by the
> > > core code as well, and the client driver needs to be aware of
> > > which one to call. Are you able to lift these out of the common interface
> > > and make the client driver call these directly, or maybe provide a direct
> > > interface based on mhi_buf_info to replace these?
> > >
> >
> > It sounds reasonable to me. Let me discuss this internally with Qcom guys to
> > see if they have any objections.
> >
>
> I looked into this in detail and found that the queue_xfer callbacks are tied
> to the MHI channels. For instance, the callback gets attached to ul_chan (uplink
> channel) and dl_chan (downlink channel) during controller registration. And
> when the device driver calls mhi_queue_transfer(), the MHI stack dispatches
> to the respective queue function for the relevant channel. For instance,
>
> ```
> int mhi_queue_transfer(struct mhi_device *mhi_dev,
> enum dma_data_direction dir, void *buf, size_t len,
> enum mhi_flags mflags)
> {
> if (dir == DMA_TO_DEVICE)
> return mhi_dev->ul_chan->queue_xfer(mhi_dev, mhi_dev->ul_chan,
> buf, len, mflags);
> else
> return mhi_dev->dl_chan->queue_xfer(mhi_dev, mhi_dev->dl_chan,
> buf, len, mflags);
> }
> ```
>
> If we use the direct queue APIs, this will become hard to handle. So, I'll keep
> it as it is.
Please have another look; this is exactly the part of the subsystem
that I think should be replaced. For the caller, there should not be much
difference between passing ul_chan/dl_chan or
DMA_TO_DEVICE/DMA_FROM_DEVICE, so that too can be lifted
to a higher level.
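For example, an untested sketch of the kind of interface I mean, with the dispatch lifted out of the per-channel callback (function names are just placeholders):
```
/* Untested sketch: expose typed queue helpers directly and let the
 * client pass the direction; the core picks ul_chan or dl_chan.
 */
int mhi_queue_buf(struct mhi_device *mhi_dev, enum dma_data_direction dir,
		  void *buf, size_t len, enum mhi_flags mflags);
int mhi_queue_skb(struct mhi_device *mhi_dev, enum dma_data_direction dir,
		  struct sk_buff *skb, size_t len, enum mhi_flags mflags);

/* A client would then simply call, e.g.: */
ret = mhi_queue_skb(mhi_dev, DMA_TO_DEVICE, skb, skb->len, MHI_EOT);
```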
Arnd