2023-11-27 12:47:20

by Manivannan Sadhasivam

Subject: [PATCH 0/9] bus: mhi: ep: Add async read/write support

Hi,

This series adds async read/write support for the MHI endpoint stack by
modifying the MHI ep stack and the MHI EPF (controller) driver.

Currently, only sync read/write operations are supported by the stack,
resulting in poor data throughput as the transfer is halted until the DMA
completion is received. So this series adds async support such that MHI
transfers can continue without waiting for the transfer completion. Once
the completion happens, the host is notified by sending the transfer
completion event.

This series brings iperf throughput up to ~4 Gbps on the SM8450 based dev
platform, where previously only 1.6 Gbps was achieved with sync operations.

- Mani

Manivannan Sadhasivam (9):
bus: mhi: ep: Pass mhi_ep_buf_info struct to read/write APIs
bus: mhi: ep: Rename read_from_host() and write_to_host() APIs
bus: mhi: ep: Introduce async read/write callbacks
PCI: epf-mhi: Simulate async read/write using iATU
PCI: epf-mhi: Add support for DMA async read/write operation
PCI: epf-mhi: Enable MHI async read/write support
bus: mhi: ep: Add support for async DMA write operation
bus: mhi: ep: Add support for async DMA read operation
bus: mhi: ep: Add checks for read/write callbacks while registering
controllers

 drivers/bus/mhi/ep/internal.h                |   1 +
 drivers/bus/mhi/ep/main.c                    | 256 +++++++++------
 drivers/bus/mhi/ep/ring.c                    |  41 +--
 drivers/pci/endpoint/functions/pci-epf-mhi.c | 314 ++++++++++++++++---
 include/linux/mhi_ep.h                       |  33 +-
 5 files changed, 485 insertions(+), 160 deletions(-)

--
2.25.1


2023-11-27 12:47:20

by Manivannan Sadhasivam

Subject: [PATCH 6/9] PCI: epf-mhi: Enable MHI async read/write support

Now that both eDMA and iATU are prepared to support async transfer, let's
enable MHI async read/write by supplying the relevant callbacks.

In the absence of eDMA, iATU will be used for both sync and async
operations.
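
The ordering matters here: the iATU callbacks are installed first as the
defaults for all four operations, and the eDMA callbacks then override all
four when MHI_EPF_USE_DMA is set. In other words (mirroring the hunk below;
note that the else branch goes away):

	mhi_cntrl->read_sync = mhi_cntrl->read_async = pci_epf_mhi_iatu_read;
	mhi_cntrl->write_sync = mhi_cntrl->write_async = pci_epf_mhi_iatu_write;
	if (info->flags & MHI_EPF_USE_DMA) {
		mhi_cntrl->read_sync = pci_epf_mhi_edma_read;
		mhi_cntrl->write_sync = pci_epf_mhi_edma_write;
		mhi_cntrl->read_async = pci_epf_mhi_edma_read_async;
		mhi_cntrl->write_async = pci_epf_mhi_edma_write_async;
	}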

Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/pci/endpoint/functions/pci-epf-mhi.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/pci/endpoint/functions/pci-epf-mhi.c b/drivers/pci/endpoint/functions/pci-epf-mhi.c
index 3d09a37e5f7c..d3d6a1054036 100644
--- a/drivers/pci/endpoint/functions/pci-epf-mhi.c
+++ b/drivers/pci/endpoint/functions/pci-epf-mhi.c
@@ -766,12 +766,13 @@ static int pci_epf_mhi_link_up(struct pci_epf *epf)
mhi_cntrl->raise_irq = pci_epf_mhi_raise_irq;
mhi_cntrl->alloc_map = pci_epf_mhi_alloc_map;
mhi_cntrl->unmap_free = pci_epf_mhi_unmap_free;
+ mhi_cntrl->read_sync = mhi_cntrl->read_async = pci_epf_mhi_iatu_read;
+ mhi_cntrl->write_sync = mhi_cntrl->write_async = pci_epf_mhi_iatu_write;
if (info->flags & MHI_EPF_USE_DMA) {
mhi_cntrl->read_sync = pci_epf_mhi_edma_read;
mhi_cntrl->write_sync = pci_epf_mhi_edma_write;
- } else {
- mhi_cntrl->read_sync = pci_epf_mhi_iatu_read;
- mhi_cntrl->write_sync = pci_epf_mhi_iatu_write;
+ mhi_cntrl->read_async = pci_epf_mhi_edma_read_async;
+ mhi_cntrl->write_async = pci_epf_mhi_edma_write_async;
}

/* Register the MHI EP controller */
--
2.25.1

2023-11-27 12:48:02

by Manivannan Sadhasivam

Subject: [PATCH 5/9] PCI: epf-mhi: Add support for DMA async read/write operation

The driver currently supports only sync read/write operations, i.e., it
waits for the DMA transfer to complete before returning to the caller
(MHI stack). But this is sub-optimal and defeats the actual purpose of
using DMA.

So let's add support for DMA async read/write operations by skipping the
wait for the DMA transfer completion and returning to the caller
immediately. When the completion actually happens later, the driver will
be notified via the DMA completion handler and in turn it will notify the
caller using the newly introduced callback in "struct mhi_ep_buf_info".

Since the DMA completion handler is invoked from interrupt context, a
separate workqueue (epf_mhi->dma_wq) is used to notify the caller about
the completion of the transfer.
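
For context, the MHI EP stack (in the later patches of this series) ends up
driving these operations roughly as follows; a simplified sketch based on
the read path from patch 8, not the exact stack code:

	struct mhi_ep_buf_info buf_info = {};

	buf_info.host_addr = mhi_chan->tre_loc + read_offset;
	buf_info.dev_addr = buf_addr;
	buf_info.size = tr_len;
	buf_info.cb = mhi_ep_read_completion;	/* invoked once the DMA completes */
	buf_info.cb_buf = buf_addr;

	ret = mhi_cntrl->read_async(mhi_cntrl, &buf_info);	/* non-blocking */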

Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/pci/endpoint/functions/pci-epf-mhi.c | 231 ++++++++++++++++++-
1 file changed, 228 insertions(+), 3 deletions(-)

diff --git a/drivers/pci/endpoint/functions/pci-epf-mhi.c b/drivers/pci/endpoint/functions/pci-epf-mhi.c
index 7214f4da733b..3d09a37e5f7c 100644
--- a/drivers/pci/endpoint/functions/pci-epf-mhi.c
+++ b/drivers/pci/endpoint/functions/pci-epf-mhi.c
@@ -21,6 +21,15 @@
/* Platform specific flags */
#define MHI_EPF_USE_DMA BIT(0)

+struct pci_epf_mhi_dma_transfer {
+ struct pci_epf_mhi *epf_mhi;
+ struct mhi_ep_buf_info buf_info;
+ struct list_head node;
+ dma_addr_t paddr;
+ enum dma_data_direction dir;
+ size_t size;
+};
+
struct pci_epf_mhi_ep_info {
const struct mhi_ep_cntrl_config *config;
struct pci_epf_header *epf_header;
@@ -124,6 +133,10 @@ struct pci_epf_mhi {
resource_size_t mmio_phys;
struct dma_chan *dma_chan_tx;
struct dma_chan *dma_chan_rx;
+ struct workqueue_struct *dma_wq;
+ struct work_struct dma_work;
+ struct list_head dma_list;
+ spinlock_t list_lock;
u32 mmio_size;
int irq;
};
@@ -418,6 +431,198 @@ static int pci_epf_mhi_edma_write(struct mhi_ep_cntrl *mhi_cntrl,
return ret;
}

+static void pci_epf_mhi_dma_worker(struct work_struct *work)
+{
+ struct pci_epf_mhi *epf_mhi = container_of(work, struct pci_epf_mhi, dma_work);
+ struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
+ struct pci_epf_mhi_dma_transfer *itr, *tmp;
+ struct mhi_ep_buf_info *buf_info;
+ unsigned long flags;
+ LIST_HEAD(head);
+
+ spin_lock_irqsave(&epf_mhi->list_lock, flags);
+ list_splice_tail_init(&epf_mhi->dma_list, &head);
+ spin_unlock_irqrestore(&epf_mhi->list_lock, flags);
+
+ list_for_each_entry_safe(itr, tmp, &head, node) {
+ list_del(&itr->node);
+ dma_unmap_single(dma_dev, itr->paddr, itr->size, itr->dir);
+ buf_info = &itr->buf_info;
+ buf_info->cb(buf_info);
+ kfree(itr);
+ }
+}
+
+static void pci_epf_mhi_dma_async_callback(void *param)
+{
+ struct pci_epf_mhi_dma_transfer *transfer = param;
+ struct pci_epf_mhi *epf_mhi = transfer->epf_mhi;
+
+ spin_lock(&epf_mhi->list_lock);
+ list_add_tail(&transfer->node, &epf_mhi->dma_list);
+ spin_unlock(&epf_mhi->list_lock);
+
+ queue_work(epf_mhi->dma_wq, &epf_mhi->dma_work);
+}
+
+static int pci_epf_mhi_edma_read_async(struct mhi_ep_cntrl *mhi_cntrl,
+ struct mhi_ep_buf_info *buf_info)
+{
+ struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+ struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
+ struct pci_epf_mhi_dma_transfer *transfer = NULL;
+ struct dma_chan *chan = epf_mhi->dma_chan_rx;
+ struct device *dev = &epf_mhi->epf->dev;
+ DECLARE_COMPLETION_ONSTACK(complete);
+ struct dma_async_tx_descriptor *desc;
+ struct dma_slave_config config = {};
+ dma_cookie_t cookie;
+ dma_addr_t dst_addr;
+ int ret;
+
+ mutex_lock(&epf_mhi->lock);
+
+ config.direction = DMA_DEV_TO_MEM;
+ config.src_addr = buf_info->host_addr;
+
+ ret = dmaengine_slave_config(chan, &config);
+ if (ret) {
+ dev_err(dev, "Failed to configure DMA channel\n");
+ goto err_unlock;
+ }
+
+ dst_addr = dma_map_single(dma_dev, buf_info->dev_addr, buf_info->size,
+ DMA_FROM_DEVICE);
+ ret = dma_mapping_error(dma_dev, dst_addr);
+ if (ret) {
+ dev_err(dev, "Failed to map remote memory\n");
+ goto err_unlock;
+ }
+
+ desc = dmaengine_prep_slave_single(chan, dst_addr, buf_info->size,
+ DMA_DEV_TO_MEM,
+ DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
+ if (!desc) {
+ dev_err(dev, "Failed to prepare DMA\n");
+ ret = -EIO;
+ goto err_unmap;
+ }
+
+ transfer = kzalloc(sizeof(*transfer), GFP_KERNEL);
+ if (!transfer) {
+ ret = -ENOMEM;
+ goto err_unmap;
+ }
+
+ transfer->epf_mhi = epf_mhi;
+ transfer->paddr = dst_addr;
+ transfer->size = buf_info->size;
+ transfer->dir = DMA_FROM_DEVICE;
+ memcpy(&transfer->buf_info, buf_info, sizeof(*buf_info));
+
+ desc->callback = pci_epf_mhi_dma_async_callback;
+ desc->callback_param = transfer;
+
+ cookie = dmaengine_submit(desc);
+ ret = dma_submit_error(cookie);
+ if (ret) {
+ dev_err(dev, "Failed to do DMA submit\n");
+ goto err_free_transfer;
+ }
+
+ dma_async_issue_pending(chan);
+
+ goto err_unlock;
+
+err_free_transfer:
+ kfree(transfer);
+err_unmap:
+ dma_unmap_single(dma_dev, dst_addr, buf_info->size, DMA_FROM_DEVICE);
+err_unlock:
+ mutex_unlock(&epf_mhi->lock);
+
+ return ret;
+}
+
+static int pci_epf_mhi_edma_write_async(struct mhi_ep_cntrl *mhi_cntrl,
+ struct mhi_ep_buf_info *buf_info)
+{
+ struct pci_epf_mhi *epf_mhi = to_epf_mhi(mhi_cntrl);
+ struct device *dma_dev = epf_mhi->epf->epc->dev.parent;
+ struct pci_epf_mhi_dma_transfer *transfer = NULL;
+ struct dma_chan *chan = epf_mhi->dma_chan_tx;
+ struct device *dev = &epf_mhi->epf->dev;
+ DECLARE_COMPLETION_ONSTACK(complete);
+ struct dma_async_tx_descriptor *desc;
+ struct dma_slave_config config = {};
+ dma_cookie_t cookie;
+ dma_addr_t src_addr;
+ int ret;
+
+ mutex_lock(&epf_mhi->lock);
+
+ config.direction = DMA_MEM_TO_DEV;
+ config.dst_addr = buf_info->host_addr;
+
+ ret = dmaengine_slave_config(chan, &config);
+ if (ret) {
+ dev_err(dev, "Failed to configure DMA channel\n");
+ goto err_unlock;
+ }
+
+ src_addr = dma_map_single(dma_dev, buf_info->dev_addr, buf_info->size,
+ DMA_TO_DEVICE);
+ ret = dma_mapping_error(dma_dev, src_addr);
+ if (ret) {
+ dev_err(dev, "Failed to map remote memory\n");
+ goto err_unlock;
+ }
+
+ desc = dmaengine_prep_slave_single(chan, src_addr, buf_info->size,
+ DMA_MEM_TO_DEV,
+ DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
+ if (!desc) {
+ dev_err(dev, "Failed to prepare DMA\n");
+ ret = -EIO;
+ goto err_unmap;
+ }
+
+ transfer = kzalloc(sizeof(*transfer), GFP_KERNEL);
+ if (!transfer) {
+ ret = -ENOMEM;
+ goto err_unmap;
+ }
+
+ transfer->epf_mhi = epf_mhi;
+ transfer->paddr = src_addr;
+ transfer->size = buf_info->size;
+ transfer->dir = DMA_TO_DEVICE;
+ memcpy(&transfer->buf_info, buf_info, sizeof(*buf_info));
+
+ desc->callback = pci_epf_mhi_dma_async_callback;
+ desc->callback_param = transfer;
+
+ cookie = dmaengine_submit(desc);
+ ret = dma_submit_error(cookie);
+ if (ret) {
+ dev_err(dev, "Failed to do DMA submit\n");
+ goto err_free_transfer;
+ }
+
+ dma_async_issue_pending(chan);
+
+ goto err_unlock;
+
+err_free_transfer:
+ kfree(transfer);
+err_unmap:
+ dma_unmap_single(dma_dev, src_addr, buf_info->size, DMA_TO_DEVICE);
+err_unlock:
+ mutex_unlock(&epf_mhi->lock);
+
+ return ret;
+}
+
struct epf_dma_filter {
struct device *dev;
u32 dma_mask;
@@ -441,6 +646,7 @@ static int pci_epf_mhi_dma_init(struct pci_epf_mhi *epf_mhi)
struct device *dev = &epf_mhi->epf->dev;
struct epf_dma_filter filter;
dma_cap_mask_t mask;
+ int ret;

dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
@@ -459,16 +665,35 @@ static int pci_epf_mhi_dma_init(struct pci_epf_mhi *epf_mhi)
&filter);
if (IS_ERR_OR_NULL(epf_mhi->dma_chan_rx)) {
dev_err(dev, "Failed to request rx channel\n");
- dma_release_channel(epf_mhi->dma_chan_tx);
- epf_mhi->dma_chan_tx = NULL;
- return -ENODEV;
+ ret = -ENODEV;
+ goto err_release_tx;
+ }
+
+ epf_mhi->dma_wq = alloc_workqueue("pci_epf_mhi_dma_wq", 0, 0);
+ if (!epf_mhi->dma_wq) {
+ ret = -ENOMEM;
+ goto err_release_rx;
}

+ INIT_LIST_HEAD(&epf_mhi->dma_list);
+ INIT_WORK(&epf_mhi->dma_work, pci_epf_mhi_dma_worker);
+ spin_lock_init(&epf_mhi->list_lock);
+
return 0;
+
+err_release_rx:
+ dma_release_channel(epf_mhi->dma_chan_rx);
+ epf_mhi->dma_chan_rx = NULL;
+err_release_tx:
+ dma_release_channel(epf_mhi->dma_chan_tx);
+ epf_mhi->dma_chan_tx = NULL;
+
+ return ret;
}

static void pci_epf_mhi_dma_deinit(struct pci_epf_mhi *epf_mhi)
{
+ destroy_workqueue(epf_mhi->dma_wq);
dma_release_channel(epf_mhi->dma_chan_tx);
dma_release_channel(epf_mhi->dma_chan_rx);
epf_mhi->dma_chan_tx = NULL;
--
2.25.1

2023-11-27 12:48:03

by Manivannan Sadhasivam

Subject: [PATCH 9/9] bus: mhi: ep: Add checks for read/write callbacks while registering controllers

The MHI EP controller drivers have to support both sync and async read/write
callbacks. Hence, add checks for them while registering controllers.

Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/main.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 3e599d9640f5..6b84aeeb247a 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -1471,6 +1471,10 @@ int mhi_ep_register_controller(struct mhi_ep_cntrl *mhi_cntrl,
if (!mhi_cntrl || !mhi_cntrl->cntrl_dev || !mhi_cntrl->mmio || !mhi_cntrl->irq)
return -EINVAL;

+ if (!mhi_cntrl->read_sync || !mhi_cntrl->write_sync ||
+ !mhi_cntrl->read_async || !mhi_cntrl->write_async)
+ return -EINVAL;
+
ret = mhi_ep_chan_init(mhi_cntrl, config);
if (ret)
return ret;
--
2.25.1

2023-11-27 12:48:03

by Manivannan Sadhasivam

Subject: [PATCH 8/9] bus: mhi: ep: Add support for async DMA read operation

Like the async DMA write operation, let's add support for the async DMA
read operation. In the async path, the data will be read from the transfer
ring continuously, and when the controller driver notifies the stack via
the completion callback (mhi_ep_read_completion), the client driver will be
notified with the read data and the completion event will be sent to the
host for the respective ring element (if requested by the host).
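
Put differently, the old synchronous sequence "read the TRE, call xfer_cb(),
send the completion event" is now split across two contexts. A condensed
sketch of the flow implemented below (error handling and the exact
CHAIN/IEOB/IEOT decision tree omitted; see the patch body for those):

	/* Submit context: mhi_ep_read_channel() */
	buf_addr = kmem_cache_zalloc(mhi_cntrl->tre_buf_cache, GFP_KERNEL | GFP_DMA);
	buf_info.cb = mhi_ep_read_completion;
	buf_info.cb_buf = buf_addr;
	ret = mhi_cntrl->read_async(mhi_cntrl, &buf_info);	/* returns early */

	/* Completion context: mhi_ep_read_completion() */
	mhi_chan->xfer_cb(mhi_dev, &result);	/* hand the data to the client */
	if (MHI_TRE_DATA_GET_IEOB(el) || MHI_TRE_DATA_GET_IEOT(el))
		mhi_ep_send_completion_event(mhi_cntrl, ring, el,
					     MHI_TRE_DATA_GET_LEN(el),
					     MHI_TRE_DATA_GET_CHAIN(el) ?
					     MHI_EV_CC_EOB : MHI_EV_CC_EOT);
	mhi_ep_ring_inc_index(ring);
	kmem_cache_free(mhi_cntrl->tre_buf_cache, buf_info->cb_buf);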

Signed-off-by: Manivannan Sadhasivam <[email protected]>
---
drivers/bus/mhi/ep/main.c | 162 +++++++++++++++++++++-----------------
1 file changed, 89 insertions(+), 73 deletions(-)

diff --git a/drivers/bus/mhi/ep/main.c b/drivers/bus/mhi/ep/main.c
index 81d693433a5f..3e599d9640f5 100644
--- a/drivers/bus/mhi/ep/main.c
+++ b/drivers/bus/mhi/ep/main.c
@@ -338,17 +338,81 @@ bool mhi_ep_queue_is_empty(struct mhi_ep_device *mhi_dev, enum dma_data_directio
}
EXPORT_SYMBOL_GPL(mhi_ep_queue_is_empty);

+static void mhi_ep_read_completion(struct mhi_ep_buf_info *buf_info)
+{
+ struct mhi_ep_device *mhi_dev = buf_info->mhi_dev;
+ struct mhi_ep_cntrl *mhi_cntrl = mhi_dev->mhi_cntrl;
+ struct mhi_ep_chan *mhi_chan = mhi_dev->ul_chan;
+ struct mhi_ep_ring *ring = &mhi_cntrl->mhi_chan[mhi_chan->chan].ring;
+ struct mhi_ring_element *el = &ring->ring_cache[ring->rd_offset];
+ struct mhi_result result = {};
+ int ret;
+
+ if (mhi_chan->xfer_cb) {
+ result.buf_addr = buf_info->cb_buf;
+ result.dir = mhi_chan->dir;
+ result.bytes_xferd = buf_info->size;
+
+ mhi_chan->xfer_cb(mhi_dev, &result);
+ }
+
+ /*
+ * The host will split the data packet into multiple TREs if it can't fit
+ * the packet in a single TRE. In that case, CHAIN flag will be set by the
+ * host for all TREs except the last one.
+ */
+ if (buf_info->code != MHI_EV_CC_OVERFLOW) {
+ if (MHI_TRE_DATA_GET_CHAIN(el)) {
+ /*
+ * IEOB (Interrupt on End of Block) flag will be set by the host if
+ * it expects the completion event for all TREs of a TD.
+ */
+ if (MHI_TRE_DATA_GET_IEOB(el)) {
+ ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el,
+ MHI_TRE_DATA_GET_LEN(el),
+ MHI_EV_CC_EOB);
+ if (ret < 0) {
+ dev_err(&mhi_chan->mhi_dev->dev,
+ "Error sending transfer compl. event\n");
+ goto err_free_tre_buf;
+ }
+ }
+ } else {
+ /*
+ * IEOT (Interrupt on End of Transfer) flag will be set by the host
+ * for the last TRE of the TD and expects the completion event for
+ * the same.
+ */
+ if (MHI_TRE_DATA_GET_IEOT(el)) {
+ ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el,
+ MHI_TRE_DATA_GET_LEN(el),
+ MHI_EV_CC_EOT);
+ if (ret < 0) {
+ dev_err(&mhi_chan->mhi_dev->dev,
+ "Error sending transfer compl. event\n");
+ goto err_free_tre_buf;
+ }
+ }
+ }
+ }
+
+ mhi_ep_ring_inc_index(ring);
+
+err_free_tre_buf:
+ kmem_cache_free(mhi_cntrl->tre_buf_cache, buf_info->cb_buf);
+}
+
static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
- struct mhi_ep_ring *ring,
- struct mhi_result *result,
- u32 len)
+ struct mhi_ep_ring *ring)
{
struct mhi_ep_chan *mhi_chan = &mhi_cntrl->mhi_chan[ring->ch_id];
struct device *dev = &mhi_cntrl->mhi_dev->dev;
size_t tr_len, read_offset, write_offset;
struct mhi_ep_buf_info buf_info = {};
+ u32 len = MHI_EP_DEFAULT_MTU;
struct mhi_ring_element *el;
bool tr_done = false;
+ void *buf_addr;
u32 buf_left;
int ret;

@@ -378,83 +442,50 @@ static int mhi_ep_read_channel(struct mhi_ep_cntrl *mhi_cntrl,
read_offset = mhi_chan->tre_size - mhi_chan->tre_bytes_left;
write_offset = len - buf_left;

+ buf_addr = kmem_cache_zalloc(mhi_cntrl->tre_buf_cache, GFP_KERNEL | GFP_DMA);
+ if (!buf_addr)
+ return -ENOMEM;
+
buf_info.host_addr = mhi_chan->tre_loc + read_offset;
- buf_info.dev_addr = result->buf_addr + write_offset;
+ buf_info.dev_addr = buf_addr + write_offset;
buf_info.size = tr_len;
+ buf_info.cb = mhi_ep_read_completion;
+ buf_info.cb_buf = buf_addr;
+ buf_info.mhi_dev = mhi_chan->mhi_dev;
+
+ if (mhi_chan->tre_bytes_left - tr_len)
+ buf_info.code = MHI_EV_CC_OVERFLOW;

dev_dbg(dev, "Reading %zd bytes from channel (%u)\n", tr_len, ring->ch_id);
- ret = mhi_cntrl->read_sync(mhi_cntrl, &buf_info);
+ ret = mhi_cntrl->read_async(mhi_cntrl, &buf_info);
if (ret < 0) {
dev_err(&mhi_chan->mhi_dev->dev, "Error reading from channel\n");
- return ret;
+ goto err_free_buf_addr;
}

buf_left -= tr_len;
mhi_chan->tre_bytes_left -= tr_len;

- /*
- * Once the TRE (Transfer Ring Element) of a TD (Transfer Descriptor) has been
- * read completely:
- *
- * 1. Send completion event to the host based on the flags set in TRE.
- * 2. Increment the local read offset of the transfer ring.
- */
if (!mhi_chan->tre_bytes_left) {
- /*
- * The host will split the data packet into multiple TREs if it can't fit
- * the packet in a single TRE. In that case, CHAIN flag will be set by the
- * host for all TREs except the last one.
- */
- if (MHI_TRE_DATA_GET_CHAIN(el)) {
- /*
- * IEOB (Interrupt on End of Block) flag will be set by the host if
- * it expects the completion event for all TREs of a TD.
- */
- if (MHI_TRE_DATA_GET_IEOB(el)) {
- ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el,
- MHI_TRE_DATA_GET_LEN(el),
- MHI_EV_CC_EOB);
- if (ret < 0) {
- dev_err(&mhi_chan->mhi_dev->dev,
- "Error sending transfer compl. event\n");
- return ret;
- }
- }
- } else {
- /*
- * IEOT (Interrupt on End of Transfer) flag will be set by the host
- * for the last TRE of the TD and expects the completion event for
- * the same.
- */
- if (MHI_TRE_DATA_GET_IEOT(el)) {
- ret = mhi_ep_send_completion_event(mhi_cntrl, ring, el,
- MHI_TRE_DATA_GET_LEN(el),
- MHI_EV_CC_EOT);
- if (ret < 0) {
- dev_err(&mhi_chan->mhi_dev->dev,
- "Error sending transfer compl. event\n");
- return ret;
- }
- }
-
+ if (MHI_TRE_DATA_GET_IEOT(el))
tr_done = true;
- }

mhi_chan->rd_offset = (mhi_chan->rd_offset + 1) % ring->ring_size;
- mhi_ep_ring_inc_index(ring);
}
-
- result->bytes_xferd += tr_len;
} while (buf_left && !tr_done);

return 0;
+
+err_free_buf_addr:
+ kmem_cache_free(mhi_cntrl->tre_buf_cache, buf_addr);
+
+ return ret;
}

-static int mhi_ep_process_ch_ring(struct mhi_ep_ring *ring, struct mhi_ring_element *el)
+static int mhi_ep_process_ch_ring(struct mhi_ep_ring *ring)
{
struct mhi_ep_cntrl *mhi_cntrl = ring->mhi_cntrl;
struct mhi_result result = {};
- u32 len = MHI_EP_DEFAULT_MTU;
struct mhi_ep_chan *mhi_chan;
int ret;

@@ -475,27 +506,15 @@ static int mhi_ep_process_ch_ring(struct mhi_ep_ring *ring, struct mhi_ring_elem
mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
} else {
/* UL channel */
- result.buf_addr = kmem_cache_zalloc(mhi_cntrl->tre_buf_cache, GFP_KERNEL | GFP_DMA);
- if (!result.buf_addr)
- return -ENOMEM;
-
do {
- ret = mhi_ep_read_channel(mhi_cntrl, ring, &result, len);
+ ret = mhi_ep_read_channel(mhi_cntrl, ring);
if (ret < 0) {
dev_err(&mhi_chan->mhi_dev->dev, "Failed to read channel\n");
- kmem_cache_free(mhi_cntrl->tre_buf_cache, result.buf_addr);
return ret;
}

- result.dir = mhi_chan->dir;
- mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
- result.bytes_xferd = 0;
- memset(result.buf_addr, 0, len);
-
/* Read until the ring becomes empty */
} while (!mhi_ep_queue_is_empty(mhi_chan->mhi_dev, DMA_TO_DEVICE));
-
- kmem_cache_free(mhi_cntrl->tre_buf_cache, result.buf_addr);
}

return 0;
@@ -804,7 +823,6 @@ static void mhi_ep_ch_ring_worker(struct work_struct *work)
struct mhi_ep_cntrl *mhi_cntrl = container_of(work, struct mhi_ep_cntrl, ch_ring_work);
struct device *dev = &mhi_cntrl->mhi_dev->dev;
struct mhi_ep_ring_item *itr, *tmp;
- struct mhi_ring_element *el;
struct mhi_ep_ring *ring;
struct mhi_ep_chan *chan;
unsigned long flags;
@@ -849,10 +867,8 @@ static void mhi_ep_ch_ring_worker(struct work_struct *work)
continue;
}

- el = &ring->ring_cache[ring->rd_offset];
-
dev_dbg(dev, "Processing the ring for channel (%u)\n", ring->ch_id);
- ret = mhi_ep_process_ch_ring(ring, el);
+ ret = mhi_ep_process_ch_ring(ring);
if (ret) {
dev_err(dev, "Error processing ring for channel (%u): %d\n",
ring->ch_id, ret);
--
2.25.1

2023-12-13 18:48:38

by Krzysztof Wilczyński

Subject: Re: [PATCH 6/9] PCI: epf-mhi: Enable MHI async read/write support

Hello,

Manivannan, you will be taking this through the MHI tree, correct?

> Now that both eDMA and iATU are prepared to support async transfer, let's
> enable MHI async read/write by supplying the relevant callbacks.
>
> In the absence of eDMA, iATU will be used for both sync and async
> operations.
>
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> [...]
> @@ -766,12 +766,13 @@ static int pci_epf_mhi_link_up(struct pci_epf *epf)
> mhi_cntrl->raise_irq = pci_epf_mhi_raise_irq;
> mhi_cntrl->alloc_map = pci_epf_mhi_alloc_map;
> mhi_cntrl->unmap_free = pci_epf_mhi_unmap_free;
> + mhi_cntrl->read_sync = mhi_cntrl->read_async = pci_epf_mhi_iatu_read;
> + mhi_cntrl->write_sync = mhi_cntrl->write_async = pci_epf_mhi_iatu_write;
> if (info->flags & MHI_EPF_USE_DMA) {
> mhi_cntrl->read_sync = pci_epf_mhi_edma_read;
> mhi_cntrl->write_sync = pci_epf_mhi_edma_write;
> - } else {
> - mhi_cntrl->read_sync = pci_epf_mhi_iatu_read;
> - mhi_cntrl->write_sync = pci_epf_mhi_iatu_write;
> + mhi_cntrl->read_async = pci_epf_mhi_edma_read_async;
> + mhi_cntrl->write_async = pci_epf_mhi_edma_write_async;
> }

Looks good!

Reviewed-by: Krzysztof Wilczyński <[email protected]>

Krzysztof

2023-12-13 18:50:52

by Krzysztof Wilczyński

Subject: Re: [PATCH 5/9] PCI: epf-mhi: Add support for DMA async read/write operation

Hello,

> The driver currently supports only sync read/write operations, i.e., it
> waits for the DMA transfer to complete before returning to the caller
> (MHI stack). But this is sub-optimal and defeats the actual purpose of
> using DMA.
>
> So let's add support for DMA async read/write operations by skipping the
> wait for the DMA transfer completion and returning to the caller
> immediately. When the completion actually happens later, the driver will
> be notified via the DMA completion handler and in turn it will notify the
> caller using the newly introduced callback in "struct mhi_ep_buf_info".
>
> Since the DMA completion handler is invoked from interrupt context, a
> separate workqueue (epf_mhi->dma_wq) is used to notify the caller about
> the completion of the transfer.
>
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> [...]

Looks good!

Reviewed-by: Krzysztof Wilczyński <[email protected]>

Krzysztof

2023-12-13 19:31:55

by Bjorn Helgaas

Subject: Re: [PATCH 0/9] bus: mhi: ep: Add async read/write support

On Mon, Nov 27, 2023 at 06:15:20PM +0530, Manivannan Sadhasivam wrote:
> Hi,
>
> This series adds async read/write support for the MHI endpoint stack by
> modifying the MHI ep stack and the MHI EPF (controller) driver.
>
> Currently, only sync read/write operations are supported by the stack,
> resulting in poor data throughput as the transfer is halted until the DMA
> completion is received. So this series adds async support such that MHI
> transfers can continue without waiting for the transfer completion. Once
> the completion happens, the host is notified by sending the transfer
> completion event.
>
> This series brings iperf throughput up to ~4 Gbps on the SM8450 based dev
> platform, where previously only 1.6 Gbps was achieved with sync operations.
>
> - Mani
>
> Manivannan Sadhasivam (9):
> bus: mhi: ep: Pass mhi_ep_buf_info struct to read/write APIs
> bus: mhi: ep: Rename read_from_host() and write_to_host() APIs
> bus: mhi: ep: Introduce async read/write callbacks
> PCI: epf-mhi: Simulate async read/write using iATU
> PCI: epf-mhi: Add support for DMA async read/write operation
> PCI: epf-mhi: Enable MHI async read/write support
> bus: mhi: ep: Add support for async DMA write operation
> bus: mhi: ep: Add support for async DMA read operation
> bus: mhi: ep: Add checks for read/write callbacks while registering
> controllers
>
>  drivers/bus/mhi/ep/internal.h                |   1 +
>  drivers/bus/mhi/ep/main.c                    | 256 +++++++++------
>  drivers/bus/mhi/ep/ring.c                    |  41 +--
>  drivers/pci/endpoint/functions/pci-epf-mhi.c | 314 ++++++++++++++++---
>  include/linux/mhi_ep.h                       |  33 +-
>  5 files changed, 485 insertions(+), 160 deletions(-)

Mani, do you want to merge this via your MHI tree? If so, you can
include Krzysztof's Reviewed-by tags and my:

Acked-by: Bjorn Helgaas <[email protected]>

If you think it'd be better via the PCI tree, let me know and we can
do that, too.

Bjorn

2023-12-14 05:19:41

by Manivannan Sadhasivam

Subject: Re: [PATCH 6/9] PCI: epf-mhi: Enable MHI async read/write support

On Thu, Dec 14, 2023 at 03:48:29AM +0900, Krzysztof Wilczyński wrote:
> Hello,
>
> Manivannan, you will be taking this through the MHI tree, correct?
>

Yes, to avoid conflicts with other MHI patches, I'm taking this series
through the MHI tree.

> > Now that both eDMA and iATU are prepared to support async transfer, let's
> > enable MHI async read/write by supplying the relevant callbacks.
> >
> > In the absence of eDMA, iATU will be used for both sync and async
> > operations.
> >
> > Signed-off-by: Manivannan Sadhasivam <[email protected]>
> > [...]
>
> Looks good!
>
> Reviewed-by: Krzysztof Wilczyński <[email protected]>
>

Thanks!

- Mani


--
மணிவண்ணன் சதாசிவம்

2023-12-14 05:21:47

by Manivannan Sadhasivam

Subject: Re: [PATCH 0/9] bus: mhi: ep: Add async read/write support

On Wed, Dec 13, 2023 at 01:31:03PM -0600, Bjorn Helgaas wrote:
> On Mon, Nov 27, 2023 at 06:15:20PM +0530, Manivannan Sadhasivam wrote:
> > Hi,
> >
> > This series adds async read/write support for the MHI endpoint stack by
> > modifying the MHI ep stack and the MHI EPF (controller) driver.
> >
> > Currently, only sync read/write operations are supported by the stack,
> > resulting in poor data throughput as the transfer is halted until the DMA
> > completion is received. So this series adds async support such that MHI
> > transfers can continue without waiting for the transfer completion. Once
> > the completion happens, the host is notified by sending the transfer
> > completion event.
> >
> > This series brings iperf throughput up to ~4 Gbps on the SM8450 based dev
> > platform, where previously only 1.6 Gbps was achieved with sync operations.
> >
> > - Mani
> >
> > Manivannan Sadhasivam (9):
> > bus: mhi: ep: Pass mhi_ep_buf_info struct to read/write APIs
> > bus: mhi: ep: Rename read_from_host() and write_to_host() APIs
> > bus: mhi: ep: Introduce async read/write callbacks
> > PCI: epf-mhi: Simulate async read/write using iATU
> > PCI: epf-mhi: Add support for DMA async read/write operation
> > PCI: epf-mhi: Enable MHI async read/write support
> > bus: mhi: ep: Add support for async DMA write operation
> > bus: mhi: ep: Add support for async DMA read operation
> > bus: mhi: ep: Add checks for read/write callbacks while registering
> > controllers
> >
> > drivers/bus/mhi/ep/internal.h | 1 +
> > drivers/bus/mhi/ep/main.c | 256 +++++++++------
> > drivers/bus/mhi/ep/ring.c | 41 +--
> > drivers/pci/endpoint/functions/pci-epf-mhi.c | 314 ++++++++++++++++---
> > include/linux/mhi_ep.h | 33 +-
> > 5 files changed, 485 insertions(+), 160 deletions(-)
>
> Mani, do you want to merge this via your MHI tree? If so, you can
> include Krzysztof's Reviewed-by tags and my:
>
> Acked-by: Bjorn Helgaas <[email protected]>
>
> If you think it'd be better via the PCI tree, let me know and we can
> do that, too.
>

Thanks, Bjorn! Yes, to avoid possible conflicts with other MHI patches, I
need to take this series via the MHI tree.

- Mani


--
மணிவண்ணன் சதாசிவம்

by Krishna Chaitanya Chundru

Subject: Re: [PATCH 6/9] PCI: epf-mhi: Enable MHI async read/write support


On 11/27/2023 6:15 PM, Manivannan Sadhasivam wrote:
> Now that both eDMA and iATU are prepared to support async transfer, let's
> enable MHI async read/write by supplying the relevant callbacks.
>
> In the absence of eDMA, iATU will be used for both sync and async
> operations.
>
> Signed-off-by: Manivannan Sadhasivam <[email protected]>
> [...]
> @@ -766,12 +766,13 @@ static int pci_epf_mhi_link_up(struct pci_epf *epf)
> mhi_cntrl->raise_irq = pci_epf_mhi_raise_irq;
> mhi_cntrl->alloc_map = pci_epf_mhi_alloc_map;
> mhi_cntrl->unmap_free = pci_epf_mhi_unmap_free;
> + mhi_cntrl->read_sync = mhi_cntrl->read_async = pci_epf_mhi_iatu_read;
> + mhi_cntrl->write_sync = mhi_cntrl->write_async = pci_epf_mhi_iatu_write;
> if (info->flags & MHI_EPF_USE_DMA) {
> mhi_cntrl->read_sync = pci_epf_mhi_edma_read;
> mhi_cntrl->write_sync = pci_epf_mhi_edma_write;
> - } else {
> - mhi_cntrl->read_sync = pci_epf_mhi_iatu_read;
> - mhi_cntrl->write_sync = pci_epf_mhi_iatu_write;
> + mhi_cntrl->read_async = pci_epf_mhi_edma_read_async;
> + mhi_cntrl->write_async = pci_epf_mhi_edma_write_async;

I think the read_async & write_async callbacks should be updated inside the
if condition where the MHI_EPF_USE_DMA flag is set.

- Krishna Chaitanya.


2023-12-14 10:10:02

by Manivannan Sadhasivam

Subject: Re: [PATCH 6/9] PCI: epf-mhi: Enable MHI async read/write support

On Thu, Dec 14, 2023 at 03:10:01PM +0530, Krishna Chaitanya Chundru wrote:
>
> On 11/27/2023 6:15 PM, Manivannan Sadhasivam wrote:
> > [...]
>
> I think the read_async & write_async callbacks should be updated inside
> the if condition where the MHI_EPF_USE_DMA flag is set.
>

That's what's being done here. Am I missing anything?

- Mani


--
மணிவண்ணன் சதாசிவம்

by Krishna Chaitanya Chundru

Subject: Re: [PATCH 6/9] PCI: epf-mhi: Enable MHI async read/write support


On 12/14/2023 3:39 PM, Manivannan Sadhasivam wrote:
> On Thu, Dec 14, 2023 at 03:10:01PM +0530, Krishna Chaitanya Chundru wrote:
>> On 11/27/2023 6:15 PM, Manivannan Sadhasivam wrote:
>>> [...]
>> I think the read_async & write_async callbacks should be updated inside
>> the if condition where the MHI_EPF_USE_DMA flag is set.
>>
> That's what's being done here. Am I missing anything?
>
> - Mani

It should be like this, as the eDMA sync & async read/write callbacks
should be updated only if DMA is supported. In the patch, I see the async
function pointers being updated with the eDMA function pointers for iATU
operations.

	if (info->flags & MHI_EPF_USE_DMA) {
		mhi_cntrl->read_sync = pci_epf_mhi_edma_read;
		mhi_cntrl->write_sync = pci_epf_mhi_edma_write;
		mhi_cntrl->read_async = pci_epf_mhi_edma_read_async;
		mhi_cntrl->write_async = pci_epf_mhi_edma_write_async;
	}
- Krishna Chaitanya.


2023-12-14 10:47:45

by Manivannan Sadhasivam

Subject: Re: [PATCH 6/9] PCI: epf-mhi: Enable MHI async read/write support

On Thu, Dec 14, 2023 at 03:44:21PM +0530, Krishna Chaitanya Chundru wrote:
>
> On 12/14/2023 3:39 PM, Manivannan Sadhasivam wrote:
> > On Thu, Dec 14, 2023 at 03:10:01PM +0530, Krishna Chaitanya Chundru wrote:
> > > On 11/27/2023 6:15 PM, Manivannan Sadhasivam wrote:
> > > > [...]
> > > I think the read_async & write_async callbacks should be updated inside
> > > the if condition where the MHI_EPF_USE_DMA flag is set.
> > >
> > That's what's being done here. Am I missing anything?
> >
> > - Mani
>
> It should be like this, as the eDMA sync & async read/write callbacks
> should be updated only if DMA is supported. In the patch, I see the async
> function pointers being updated with the eDMA function pointers for iATU
> operations.
>
>	if (info->flags & MHI_EPF_USE_DMA) {
>		mhi_cntrl->read_sync = pci_epf_mhi_edma_read;
>		mhi_cntrl->write_sync = pci_epf_mhi_edma_write;
>		mhi_cntrl->read_async = pci_epf_mhi_edma_read_async;
>		mhi_cntrl->write_async = pci_epf_mhi_edma_write_async;
>	}

Are you reading the patch correctly? Please take a look at this commit:
https://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi.git/tree/drivers/pci/endpoint/functions/pci-epf-mhi.c?h=mhi-next&id=d1c6f4ba4746ed41fde8269cb5fea88bddb60504#n771

- Mani


--
மணிவண்ணன் சதாசிவம்

by Krishna Chaitanya Chundru

Subject: Re: [PATCH 6/9] PCI: epf-mhi: Enable MHI async read/write support


On 12/14/2023 4:17 PM, Manivannan Sadhasivam wrote:
> On Thu, Dec 14, 2023 at 03:44:21PM +0530, Krishna Chaitanya Chundru wrote:
>> On 12/14/2023 3:39 PM, Manivannan Sadhasivam wrote:
>>> On Thu, Dec 14, 2023 at 03:10:01PM +0530, Krishna Chaitanya Chundru wrote:
>>>> On 11/27/2023 6:15 PM, Manivannan Sadhasivam wrote:
>>>>> [...]
>>>> I think the read_async & write_async callbacks should be updated inside
>>>> the if condition where the MHI_EPF_USE_DMA flag is set.
>>>>
>>> That's what's being done here. Am I missing anything?
>>>
>>> - Mani
>> It should be like this, as the eDMA sync & async read/write callbacks
>> should be updated only if DMA is supported. In the patch, I see the async
>> function pointers being updated with the eDMA function pointers for iATU
>> operations.
>>
>>	if (info->flags & MHI_EPF_USE_DMA) {
>>		mhi_cntrl->read_sync = pci_epf_mhi_edma_read;
>>		mhi_cntrl->write_sync = pci_epf_mhi_edma_write;
>>		mhi_cntrl->read_async = pci_epf_mhi_edma_read_async;
>>		mhi_cntrl->write_async = pci_epf_mhi_edma_write_async;
>>	}
> Are you reading the patch correctly? Please take a look at this commit:
> https://git.kernel.org/pub/scm/linux/kernel/git/mani/mhi.git/tree/drivers/pci/endpoint/functions/pci-epf-mhi.c?h=mhi-next&id=d1c6f4ba4746ed41fde8269cb5fea88bddb60504#n771
>
> - Mani

Sorry for the noise, I didn't notice that the else branch was also removed.

- Krishna Chaitanya.


2023-12-14 10:56:59

by Manivannan Sadhasivam

Subject: Re: [PATCH 0/9] bus: mhi: ep: Add async read/write support

On Mon, Nov 27, 2023 at 06:15:20PM +0530, Manivannan Sadhasivam wrote:
> Hi,
>
> This series adds async read/write support for the MHI endpoint stack by
> modifying the MHI ep stack and the MHI EPF (controller) driver.
>
> Currently, only sync read/write operations are supported by the stack,
> resulting in poor data throughput as the transfer is halted until the DMA
> completion is received. So this series adds async support such that MHI
> transfers can continue without waiting for the transfer completion. Once
> the completion happens, the host is notified by sending the transfer
> completion event.
>
> This series brings iperf throughput up to ~4 Gbps on the SM8450 based dev
> platform, where previously only 1.6 Gbps was achieved with sync operations.
>

Applied to mhi-next with the review tags from Bjorn and Krzysztof for the
PCI EPF patches.

- Mani


--
மணிவண்ணன் சதாசிவம்