2022-12-19 15:56:47

by Serge Semin

Subject: [PATCH v8 00/26] dmaengine: dw-edma: Add RP/EP local DMA controllers support

This is the final patchset in the series created as part of my
Baikal-T1 PCIe/eDMA-related work:

[1: Done v5] PCI: dwc: Various fixes and cleanups
Link: https://lore.kernel.org/linux-pci/[email protected]/
Merged: kernel 6.0-rc1
[2: Done v4] PCI: dwc: Add hw version and dma-ranges support
Link: https://lore.kernel.org/linux-pci/[email protected]/
Merged: kernel 6.0-rc1
[3: Done v7] PCI: dwc: Add generic resources and Baikal-T1 support
Link: https://lore.kernel.org/linux-pci/[email protected]/
Merged: kernel 6.2-rc1
[4: In-review v8] dmaengine: dw-edma: Add RP/EP local DMA support
Link: ---you are looking at it---

Note it is strongly recommended to merge the patchsets in the order they
are listed above so that they apply smoothly. Since patchsets 1-3 have
already been merged into the mainline kernel, this series can be applied
via either the DMA-engine or the PCI repository.

Here is a short summary of this patchset. The series starts with fix
patches. We discovered that the dw-edma-pcie.c driver incorrectly
initializes the LL/DT base addresses on platforms with non-matching CPU
and PCIe memory spaces. That is fixed by using the pci_bus_address()
method to get the correct base address. After that you can find a series
of interleaved-transfer fixes. It turned out the interleaved transfers
implementation hadn't worked correctly from the very beginning, for
instance due to the missing src/dst address initialization, etc.

In the next two patches we suggest adding a new platform-specific
callback - pci_address() - and using it to convert CPU addresses to PCIe
bus addresses. It is required at least for the DW eDMA remote endpoint
setup on platforms with non-matching CPU/PCIe address spaces. In case of
the DW eDMA local RP/EP setup the conversion will be done automatically
by the outbound iATU (as long as no DMA-bypass flag is specified for the
corresponding iATU window).

Then we introduce a set of patches making the DebugFS part of the code
support multi-eDMA-controller platforms. It starts with several cleanup
patches and is concluded by joining the Read/Write channels into a single
DMA device, as they should have been from the start. After that you can
find the patches adding the non-atomic io-64 methods usage, dropping the
DT-region descriptors allocation and replacing the chip ID number with
the device name. In addition, in order to support the eDMA embedded into
the DW PCIe RP/EP, the dma-ranges-based memory ranges mapping had to be
bypassed, since in case of the Root Port DT node it is applicable to the
peripheral PCIe devices only (as of v7 this is handled in the OF core
instead, see the changelog below). Finally, at the series closure we
introduce generic DW eDMA controller support in the DW PCIe Root
Port/Endpoint driver.
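
For illustration only, here is a minimal sketch (not part of the series;
the helper name, BAR index and offset are assumptions) of how a PCIe glue
driver can derive a device-visible base address on platforms where the
CPU and PCIe address spaces don't match:

  #include <linux/pci.h>

  /* Hypothetical helper: translate a region located 'off' bytes into the
   * given BAR to the address the eDMA engine has to be programmed with.
   * pci_bus_address() returns the PCI bus address of the BAR, which may
   * differ from pci_resource_start() (the CPU physical address) on hosts
   * with non-1:1 bridge windows.
   */
  static pci_bus_addr_t example_edma_ll_bus_addr(struct pci_dev *pdev,
                                                 int bar, u64 off)
  {
          return pci_bus_address(pdev, bar) + off;
  }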

Link: https://lore.kernel.org/linux-pci/[email protected]/
Changelog v2:
- Drop the patches:
[PATCH 1/25] dmaengine: dw-edma: Drop dma_slave_config.direction field usage
[PATCH 2/25] dmaengine: dw-edma: Fix eDMA Rd/Wr-channels and DMA-direction semantics
since they are going to be merged in as part of Frank's patchset.
- Add a new patch: "dmaengine: dw-edma: Release requested IRQs on
failure."
- Drop __iomem qualifier from the struct dw_edma_debugfs_entry instance
definition in the dw_edma_debugfs_u32_get() method. (@Manivannan)
- Add a new patch: "dmaengine: dw-edma: Rename DebugFS dentry variables to
'dent'." (@Manivannan)
- Slightly extend the eDMA name array size. (@Manivannan)
- Change the specific DMA mapping comment a bit to being
clearer. (@Manivannan)
- Add a new patch: "PCI: dwc: Add generic iATU/eDMA CSRs space detection
method."
- Don't fail eDMA detection procedure if the DW eDMA driver couldn't probe
device. That happens if the driver is disabled. (@Manivannan)
- Add "dma" registers resource mapping procedure. (@Manivannan)
- Move the eDMA CSRs space detection into the dw_pcie_map_detect() method.
- Remove eDMA on the dw_pcie_ep_init() internal errors. (@Manivannan)
- Remove eDMA in the dw_pcie_ep_exit() method.
- Move the dw_pcie_edma_detect() method execution to the tail of the
dw_pcie_ep_init() function.

Link: https://lore.kernel.org/linux-pci/[email protected]/
Changelog v3:
- Conditionally set dchan->dev->device.dma_coherent field since it can
be missing on some platforms. (@Manivannan)
- Drop the patch: "PCI: dwc: Add generic iATU/eDMA CSRs space detection
method". A similar modification has been done in another patchset.
- Add a more comprehensive and less regression-prone eDMA block detection
procedure.
- Drop the patch: "dma-direct: take dma-ranges/offsets into account in
resource mapping". It will be separately reviewed.
- Remove Manivannan's Tested-by tag from the modified patches.
- Rebase onto the kernel v5.18.

Link: https://lore.kernel.org/linux-pci/[email protected]
Changelog v4:
- Rebase onto the latest Frank Li series:
Link: https://lore.kernel.org/all/[email protected]/
- Add Vinod's Acked-by tag.
- Rebase onto the kernel v5.19-rcX.

Link: https://lore.kernel.org/linux-pci/[email protected]
Changelog v5:
- Just resend.
- Rebase onto the kernel v6.0-rc2.

Link: https://lore.kernel.org/linux-pci/[email protected]
Changelog v6:
- Fix some patch log and in-line comment misspellings. (@Bjorn)
- Directly call *_dma_configure() method on the DW eDMA channel child
device used for the DMA buffers mapping. (@Robin)
- Explicitly set the DMA-mask of the child device in the channel
allocation procedure. (@Robin)
- Rebase onto the kernel v6.1-rc3.

Link: https://lore.kernel.org/linux-pci/[email protected]/
Changelog v7:
- Activate the mapping auto-detection procedure for IP-cores older than
5.40a. The viewport-based access has been removed since that
version. (@Yoshihiro)
- Drop the patch
[PATCH v6 22/24] dmaengine: dw-edma: Bypass dma-ranges mapping for the local setup
since the problem has been fixed in the commit f1ad5338a4d5 ("of: Fix
"dma-ranges" handling for bus controllers"). (@Robin)
- Add a new patch:
[PATCH v7 23/25] PCI: dwc: Restore DMA-mask after MSI-data allocation
(@Robin)
- Add a new patch:
[PATCH v7 24/25] PCI: bt1: Set 64-bit DMA-mask
(@Robin)

Link: https://lore.kernel.org/linux-pci/[email protected]/
Changelog v8:
- Add a new patch:
[PATCH v8 23/26] dmaengine: dw-edma: Relax driver config settings
(@tbot)
- Replace the patch
[PATCH v7 23/25] PCI: dwc: Restore DMA-mask after MSI-data allocation
with a new one:
[PATCH v8 24/26] PCI: dwc: Set coherent DMA-mask on MSI-address allocation
(@Robin, @Christoph)

Signed-off-by: Serge Semin <[email protected]>
Cc: Alexey Malahov <[email protected]>
Cc: Pavel Parkhomenko <[email protected]>
Cc: "Krzysztof WilczyƄski" <[email protected]>
Cc: caihuoqing <[email protected]>
Cc: Yoshihiro Shimoda <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]

Serge Semin (26):
dmaengine: Fix dma_slave_config.dst_addr description
dmaengine: dw-edma: Release requested IRQs on failure
dmaengine: dw-edma: Convert ll/dt phys-address to PCIe bus/DMA address
dmaengine: dw-edma: Fix missing src/dst address of the interleaved
xfers
dmaengine: dw-edma: Don't permit non-inc interleaved xfers
dmaengine: dw-edma: Fix invalid interleaved xfers semantics
dmaengine: dw-edma: Add CPU to PCIe bus address translation
dmaengine: dw-edma: Add PCIe bus address getter to the remote EP
glue-driver
dmaengine: dw-edma: Drop chancnt initialization
dmaengine: dw-edma: Fix DebugFS reg entry type
dmaengine: dw-edma: Stop checking debugfs_create_*() return value
dmaengine: dw-edma: Add dw_edma prefix to the DebugFS nodes descriptor
dmaengine: dw-edma: Convert DebugFS descs to being kz-allocated
dmaengine: dw-edma: Rename DebugFS dentry variables to 'dent'
dmaengine: dw-edma: Simplify the DebugFS context CSRs init procedure
dmaengine: dw-edma: Move eDMA data pointer to DebugFS node descriptor
dmaengine: dw-edma: Join Write/Read channels into a single device
dmaengine: dw-edma: Use DMA-engine device DebugFS subdirectory
dmaengine: dw-edma: Use non-atomic io-64 methods
dmaengine: dw-edma: Drop DT-region allocation
dmaengine: dw-edma: Replace chip ID number with device name
dmaengine: dw-edma: Skip cleanup procedure if no private data found
dmaengine: dw-edma: Relax driver config settings
PCI: dwc: Set coherent DMA-mask on MSI-address allocation
PCI: bt1: Set 64-bit DMA-mask
PCI: dwc: Add DW eDMA engine support

drivers/dma/dw-edma/Kconfig | 5 +-
drivers/dma/dw-edma/dw-edma-core.c | 196 ++++-----
drivers/dma/dw-edma/dw-edma-core.h | 10 +-
drivers/dma/dw-edma/dw-edma-pcie.c | 24 +-
drivers/dma/dw-edma/dw-edma-v0-core.c | 60 +--
drivers/dma/dw-edma/dw-edma-v0-core.h | 1 -
drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 372 ++++++++----------
drivers/dma/dw-edma/dw-edma-v0-debugfs.h | 5 -
drivers/pci/controller/dwc/pcie-bt1.c | 4 +
.../pci/controller/dwc/pcie-designware-ep.c | 12 +-
.../pci/controller/dwc/pcie-designware-host.c | 24 +-
drivers/pci/controller/dwc/pcie-designware.c | 195 +++++++++
drivers/pci/controller/dwc/pcie-designware.h | 21 +
include/linux/dma/edma.h | 20 +-
include/linux/dmaengine.h | 2 +-
15 files changed, 592 insertions(+), 359 deletions(-)

--
2.38.1



2022-12-19 15:57:00

by Serge Semin

Subject: [PATCH v8 09/26] dmaengine: dw-edma: Drop chancnt initialization

DMA device drivers aren't supposed to initialize the dma_device.chancnt
field. It is set by the DMA-engine core in accordance with the number of
added virtual DMA channels. Pre-initializing it with some value causes a
wrong number of channels to be printed in the device summary.

Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver")
Signed-off-by: Serge Semin <[email protected]>
Reviewed-by: Manivannan Sadhasivam <[email protected]>
Tested-by: Manivannan Sadhasivam <[email protected]>
Acked-by: Vinod Koul <[email protected]>
---
drivers/dma/dw-edma/dw-edma-core.c | 1 -
1 file changed, 1 deletion(-)

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index 6c9f95a8e397..ecd3e8f7ac5d 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -817,7 +817,6 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write,
dma->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
dma->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
dma->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
- dma->chancnt = cnt;

/* Set DMA channel callbacks */
dma->dev = chip->dev;
--
2.38.1


2022-12-19 15:57:03

by Serge Semin

Subject: [PATCH v8 17/26] dmaengine: dw-edma: Join Write/Read channels into a single device

Indeed there is no point in such a split-up, for multiple reasons. First
of all, the eDMA read and write channels belong to one physical
controller, so splitting them up is illogical. Secondly, the channels can
be differentiated by means of filtering and the dma_get_slave_caps()
method. Finally, having these channels handled separately not only
needlessly complicates the code, but also causes the following DebugFS
error to be printed to the console:

>> Debugfs: Directory '1f052000.pcie' with parent 'dmaengine' already present!

So let's join the read/write channels into a single DMA device. The
client drivers will be able to choose a channel with the required
capability by getting the DMA slave direction setting. Its default value
is overridden by the dw_edma_device_caps() callback in accordance with
the channel nature.
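
As a hypothetical illustration (the function name below is made up and
not part of the patch), a client driver could then pick a channel of the
required direction from the joined device roughly like this:

  #include <linux/dmaengine.h>

  /* Filter callback for dma_request_channel(): accept only channels able
   * to serve device-to-memory transfers, i.e. the eDMA "read" channels
   * on a local setup (dw_edma_device_caps() fills in the directions).
   */
  static bool example_edma_dev_to_mem_filter(struct dma_chan *chan,
                                             void *param)
  {
          struct dma_slave_caps caps;

          if (dma_get_slave_caps(chan, &caps))
                  return false;

          return caps.directions & BIT(DMA_DEV_TO_MEM);
  }

dma_request_channel(mask, example_edma_dev_to_mem_filter, NULL) would
then return a matching channel, assuming DMA_SLAVE/DMA_PRIVATE are set in
'mask'.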

Signed-off-by: Serge Semin <[email protected]>
Reviewed-by: Manivannan Sadhasivam <[email protected]>
Tested-by: Manivannan Sadhasivam <[email protected]>
Acked-by: Vinod Koul <[email protected]>
---
drivers/dma/dw-edma/dw-edma-core.c | 116 +++++++++++++++--------------
drivers/dma/dw-edma/dw-edma-core.h | 5 +-
2 files changed, 61 insertions(+), 60 deletions(-)

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index ecd3e8f7ac5d..c3ecae4287d0 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -208,6 +208,24 @@ static void dw_edma_start_transfer(struct dw_edma_chan *chan)
desc->chunks_alloc--;
}

+static void dw_edma_device_caps(struct dma_chan *dchan,
+ struct dma_slave_caps *caps)
+{
+ struct dw_edma_chan *chan = dchan2dw_edma_chan(dchan);
+
+ if (chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL) {
+ if (chan->dir == EDMA_DIR_READ)
+ caps->directions = BIT(DMA_DEV_TO_MEM);
+ else
+ caps->directions = BIT(DMA_MEM_TO_DEV);
+ } else {
+ if (chan->dir == EDMA_DIR_WRITE)
+ caps->directions = BIT(DMA_DEV_TO_MEM);
+ else
+ caps->directions = BIT(DMA_MEM_TO_DEV);
+ }
+}
+
static int dw_edma_device_config(struct dma_chan *dchan,
struct dma_slave_config *config)
{
@@ -717,8 +735,7 @@ static void dw_edma_free_chan_resources(struct dma_chan *dchan)
}
}

-static int dw_edma_channel_setup(struct dw_edma *dw, bool write,
- u32 wr_alloc, u32 rd_alloc)
+static int dw_edma_channel_setup(struct dw_edma *dw, u32 wr_alloc, u32 rd_alloc)
{
struct dw_edma_chip *chip = dw->chip;
struct dw_edma_region *dt_region;
@@ -726,27 +743,15 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write,
struct dw_edma_chan *chan;
struct dw_edma_irq *irq;
struct dma_device *dma;
- u32 alloc, off_alloc;
- u32 i, j, cnt;
- int err = 0;
+ u32 i, ch_cnt;
u32 pos;

- if (write) {
- i = 0;
- cnt = dw->wr_ch_cnt;
- dma = &dw->wr_edma;
- alloc = wr_alloc;
- off_alloc = 0;
- } else {
- i = dw->wr_ch_cnt;
- cnt = dw->rd_ch_cnt;
- dma = &dw->rd_edma;
- alloc = rd_alloc;
- off_alloc = wr_alloc;
- }
+ ch_cnt = dw->wr_ch_cnt + dw->rd_ch_cnt;
+ dma = &dw->dma;

INIT_LIST_HEAD(&dma->channels);
- for (j = 0; (alloc || dw->nr_irqs == 1) && j < cnt; j++, i++) {
+
+ for (i = 0; i < ch_cnt; i++) {
chan = &dw->chan[i];

dt_region = devm_kzalloc(dev, sizeof(*dt_region), GFP_KERNEL);
@@ -756,52 +761,62 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write,
chan->vc.chan.private = dt_region;

chan->dw = dw;
- chan->id = j;
- chan->dir = write ? EDMA_DIR_WRITE : EDMA_DIR_READ;
+
+ if (i < dw->wr_ch_cnt) {
+ chan->id = i;
+ chan->dir = EDMA_DIR_WRITE;
+ } else {
+ chan->id = i - dw->wr_ch_cnt;
+ chan->dir = EDMA_DIR_READ;
+ }
+
chan->configured = false;
chan->request = EDMA_REQ_NONE;
chan->status = EDMA_ST_IDLE;

- if (write)
- chan->ll_max = (chip->ll_region_wr[j].sz / EDMA_LL_SZ);
+ if (chan->dir == EDMA_DIR_WRITE)
+ chan->ll_max = (chip->ll_region_wr[chan->id].sz / EDMA_LL_SZ);
else
- chan->ll_max = (chip->ll_region_rd[j].sz / EDMA_LL_SZ);
+ chan->ll_max = (chip->ll_region_rd[chan->id].sz / EDMA_LL_SZ);
chan->ll_max -= 1;

dev_vdbg(dev, "L. List:\tChannel %s[%u] max_cnt=%u\n",
- write ? "write" : "read", j, chan->ll_max);
+ chan->dir == EDMA_DIR_WRITE ? "write" : "read",
+ chan->id, chan->ll_max);

if (dw->nr_irqs == 1)
pos = 0;
+ else if (chan->dir == EDMA_DIR_WRITE)
+ pos = chan->id % wr_alloc;
else
- pos = off_alloc + (j % alloc);
+ pos = wr_alloc + chan->id % rd_alloc;

irq = &dw->irq[pos];

- if (write)
- irq->wr_mask |= BIT(j);
+ if (chan->dir == EDMA_DIR_WRITE)
+ irq->wr_mask |= BIT(chan->id);
else
- irq->rd_mask |= BIT(j);
+ irq->rd_mask |= BIT(chan->id);

irq->dw = dw;
memcpy(&chan->msi, &irq->msi, sizeof(chan->msi));

dev_vdbg(dev, "MSI:\t\tChannel %s[%u] addr=0x%.8x%.8x, data=0x%.8x\n",
- write ? "write" : "read", j,
+ chan->dir == EDMA_DIR_WRITE ? "write" : "read", chan->id,
chan->msi.address_hi, chan->msi.address_lo,
chan->msi.data);

chan->vc.desc_free = vchan_free_desc;
vchan_init(&chan->vc, dma);

- if (write) {
- dt_region->paddr = chip->dt_region_wr[j].paddr;
- dt_region->vaddr = chip->dt_region_wr[j].vaddr;
- dt_region->sz = chip->dt_region_wr[j].sz;
+ if (chan->dir == EDMA_DIR_WRITE) {
+ dt_region->paddr = chip->dt_region_wr[chan->id].paddr;
+ dt_region->vaddr = chip->dt_region_wr[chan->id].vaddr;
+ dt_region->sz = chip->dt_region_wr[chan->id].sz;
} else {
- dt_region->paddr = chip->dt_region_rd[j].paddr;
- dt_region->vaddr = chip->dt_region_rd[j].vaddr;
- dt_region->sz = chip->dt_region_rd[j].sz;
+ dt_region->paddr = chip->dt_region_rd[chan->id].paddr;
+ dt_region->vaddr = chip->dt_region_rd[chan->id].vaddr;
+ dt_region->sz = chip->dt_region_rd[chan->id].sz;
}

dw_edma_v0_core_device_config(chan);
@@ -813,7 +828,7 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write,
dma_cap_set(DMA_CYCLIC, dma->cap_mask);
dma_cap_set(DMA_PRIVATE, dma->cap_mask);
dma_cap_set(DMA_INTERLEAVE, dma->cap_mask);
- dma->directions = BIT(write ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV);
+ dma->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
dma->src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
dma->dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
dma->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
@@ -822,6 +837,7 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write,
dma->dev = chip->dev;
dma->device_alloc_chan_resources = dw_edma_alloc_chan_resources;
dma->device_free_chan_resources = dw_edma_free_chan_resources;
+ dma->device_caps = dw_edma_device_caps;
dma->device_config = dw_edma_device_config;
dma->device_pause = dw_edma_device_pause;
dma->device_resume = dw_edma_device_resume;
@@ -835,9 +851,7 @@ static int dw_edma_channel_setup(struct dw_edma *dw, bool write,
dma_set_max_seg_size(dma->dev, U32_MAX);

/* Register DMA device */
- err = dma_async_device_register(dma);
-
- return err;
+ return dma_async_device_register(dma);
}

static inline void dw_edma_dec_irq_alloc(int *nr_irqs, u32 *alloc, u16 cnt)
@@ -982,13 +996,8 @@ int dw_edma_probe(struct dw_edma_chip *chip)
if (err)
return err;

- /* Setup write channels */
- err = dw_edma_channel_setup(dw, true, wr_alloc, rd_alloc);
- if (err)
- goto err_irq_free;
-
- /* Setup read channels */
- err = dw_edma_channel_setup(dw, false, wr_alloc, rd_alloc);
+ /* Setup write/read channels */
+ err = dw_edma_channel_setup(dw, wr_alloc, rd_alloc);
if (err)
goto err_irq_free;

@@ -1022,15 +1031,8 @@ int dw_edma_remove(struct dw_edma_chip *chip)
free_irq(chip->ops->irq_vector(dev, i), &dw->irq[i]);

/* Deregister eDMA device */
- dma_async_device_unregister(&dw->wr_edma);
- list_for_each_entry_safe(chan, _chan, &dw->wr_edma.channels,
- vc.chan.device_node) {
- tasklet_kill(&chan->vc.task);
- list_del(&chan->vc.chan.device_node);
- }
-
- dma_async_device_unregister(&dw->rd_edma);
- list_for_each_entry_safe(chan, _chan, &dw->rd_edma.channels,
+ dma_async_device_unregister(&dw->dma);
+ list_for_each_entry_safe(chan, _chan, &dw->dma.channels,
vc.chan.device_node) {
tasklet_kill(&chan->vc.task);
list_del(&chan->vc.chan.device_node);
diff --git a/drivers/dma/dw-edma/dw-edma-core.h b/drivers/dma/dw-edma/dw-edma-core.h
index 85df2d511907..b576a8fff45a 100644
--- a/drivers/dma/dw-edma/dw-edma-core.h
+++ b/drivers/dma/dw-edma/dw-edma-core.h
@@ -98,10 +98,9 @@ struct dw_edma_irq {
struct dw_edma {
char name[20];

- struct dma_device wr_edma;
- u16 wr_ch_cnt;
+ struct dma_device dma;

- struct dma_device rd_edma;
+ u16 wr_ch_cnt;
u16 rd_ch_cnt;

struct dw_edma_irq *irq;
--
2.38.1


2022-12-19 15:57:16

by Serge Semin

Subject: [PATCH v8 05/26] dmaengine: dw-edma: Don't permit non-inc interleaved xfers

The DW eDMA controller always increments both the source and destination
addresses. Permitting interleaved DMA transfers with the src_inc/dst_inc
flags unset may lead to unexpected behaviour for device users. Let's fix
that by rejecting interleaved transfers if at least one of the
dma_interleaved_template.{src_inc,dst_inc} flags is false. Note that in
addition to that we need to advance the source and destination addresses
accordingly after each iteration.
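
For context, here is a minimal sketch of a conforming interleaved
template a client would build after this change (the helper name and
parameters are hypothetical and not taken from the patch):

  #include <linux/dmaengine.h>
  #include <linux/slab.h>

  static struct dma_async_tx_descriptor *
  example_prep_interleaved(struct dma_chan *chan, dma_addr_t src,
                           dma_addr_t dst, size_t len)
  {
          struct dma_async_tx_descriptor *desc;
          struct dma_interleaved_template *xt;

          xt = kzalloc(struct_size(xt, sgl, 1), GFP_KERNEL);
          if (!xt)
                  return NULL;

          xt->dir = DMA_MEM_TO_DEV;
          xt->src_start = src;
          xt->dst_start = dst;
          xt->src_inc = true;     /* both flags must be set now */
          xt->dst_inc = true;
          xt->numf = 0;           /* dw-edma takes either numf or frame_size */
          xt->frame_size = 1;     /* a single data chunk */
          xt->sgl[0].size = len;
          xt->sgl[0].icg = 0;

          desc = dmaengine_prep_interleaved_dma(chan, xt, DMA_PREP_INTERRUPT);
          kfree(xt);              /* template is consumed during prep */

          return desc;
  }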

Fixes: 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support")
Signed-off-by: Serge Semin <[email protected]>
Reviewed-by: Manivannan Sadhasivam <[email protected]>
Tested-by: Manivannan Sadhasivam <[email protected]>
Acked-by: Vinod Koul <[email protected]>
---
drivers/dma/dw-edma/dw-edma-core.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index 778d91d9fc1b..35588e14f79a 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -385,6 +385,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
return NULL;
if (xfer->xfer.il->numf > 0 && xfer->xfer.il->frame_size > 0)
return NULL;
+ if (!xfer->xfer.il->src_inc || !xfer->xfer.il->dst_inc)
+ return NULL;
} else {
return NULL;
}
@@ -484,15 +486,13 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
struct dma_interleaved_template *il = xfer->xfer.il;
struct data_chunk *dc = &il->sgl[i];

- if (il->src_sgl) {
- src_addr += burst->sz;
+ src_addr += burst->sz;
+ if (il->src_sgl)
src_addr += dmaengine_get_src_icg(il, dc);
- }

- if (il->dst_sgl) {
- dst_addr += burst->sz;
+ dst_addr += burst->sz;
+ if (il->dst_sgl)
dst_addr += dmaengine_get_dst_icg(il, dc);
- }
}
}

--
2.38.1


2022-12-19 15:57:18

by Serge Semin

Subject: [PATCH v8 11/26] dmaengine: dw-edma: Stop checking debugfs_create_*() return value

First of all, these methods never return NULL, so checking their return
value for being non-NULL is pointless. Secondly, the DebugFS subsystem is
designed to be as simple to use as possible: if one of the
debugfs_create_*() methods in a hierarchy fails, the subsequent methods
will just silently accept the passed erroneous parental dentry. Finally,
the code is supposed to keep working no matter whether anything
DebugFS-related fails. So in order to make the code simpler and
DebugFS-independent, let's drop the debugfs_create_*() return-value
checks, as most kernel drivers do.

Note in order to save some memory we suggest skipping the DebugFS nodes
initialization if the file system is unavailable.
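
A minimal sketch of the resulting pattern (hypothetical names, not taken
from the driver):

  #include <linux/debugfs.h>

  static struct dentry *example_dir;
  static u32 example_val;

  static void example_debugfs_init(void)
  {
          /* Skip the node creation entirely if DebugFS is unavailable */
          if (!debugfs_initialized())
                  return;

          /* No return-value checks: on failure an error pointer is
           * returned, which the subsequent calls silently accept.
           */
          example_dir = debugfs_create_dir("example", NULL);
          debugfs_create_u32("val", 0444, example_dir, &example_val);
  }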

Signed-off-by: Serge Semin <[email protected]>
Reviewed-by: Manivannan Sadhasivam <[email protected]>
Tested-by: Manivannan Sadhasivam <[email protected]>
Acked-by: Vinod Koul <[email protected]>
---
drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 20 +++++---------------
1 file changed, 5 insertions(+), 15 deletions(-)

diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c
index 8e61810dea4b..6e7f3ef60ca7 100644
--- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c
+++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c
@@ -100,9 +100,8 @@ static void dw_edma_debugfs_create_x32(const struct debugfs_entries entries[],
int i;

for (i = 0; i < nr_entries; i++) {
- if (!debugfs_create_file_unsafe(entries[i].name, 0444, dir,
- entries[i].reg, &fops_x32))
- break;
+ debugfs_create_file_unsafe(entries[i].name, 0444, dir,
+ entries[i].reg, &fops_x32);
}
}

@@ -168,8 +167,6 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir)
char name[16];

regs_dir = debugfs_create_dir(WRITE_STR, dir);
- if (!regs_dir)
- return;

nr_entries = ARRAY_SIZE(debugfs_regs);
dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir);
@@ -184,8 +181,6 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir)
snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i);

ch_dir = debugfs_create_dir(name, regs_dir);
- if (!ch_dir)
- return;

dw_edma_debugfs_regs_ch(&regs->type.unroll.ch[i].wr, ch_dir);

@@ -237,8 +232,6 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir)
char name[16];

regs_dir = debugfs_create_dir(READ_STR, dir);
- if (!regs_dir)
- return;

nr_entries = ARRAY_SIZE(debugfs_regs);
dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir);
@@ -253,8 +246,6 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir)
snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i);

ch_dir = debugfs_create_dir(name, regs_dir);
- if (!ch_dir)
- return;

dw_edma_debugfs_regs_ch(&regs->type.unroll.ch[i].rd, ch_dir);

@@ -273,8 +264,6 @@ static void dw_edma_debugfs_regs(void)
int nr_entries;

regs_dir = debugfs_create_dir(REGISTERS_STR, dw->debugfs);
- if (!regs_dir)
- return;

nr_entries = ARRAY_SIZE(debugfs_regs);
dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir);
@@ -285,6 +274,9 @@ static void dw_edma_debugfs_regs(void)

void dw_edma_v0_debugfs_on(struct dw_edma *_dw)
{
+ if (!debugfs_initialized())
+ return;
+
dw = _dw;
if (!dw)
return;
@@ -294,8 +286,6 @@ void dw_edma_v0_debugfs_on(struct dw_edma *_dw)
return;

dw->debugfs = debugfs_create_dir(dw->name, NULL);
- if (!dw->debugfs)
- return;

debugfs_create_u32("mf", 0444, dw->debugfs, &dw->chip->mf);
debugfs_create_u16("wr_ch_cnt", 0444, dw->debugfs, &dw->wr_ch_cnt);
--
2.38.1


2022-12-19 15:57:40

by Serge Semin

Subject: [PATCH v8 14/26] dmaengine: dw-edma: Rename DebugFS dentry variables to 'dent'

Since we are about to add eDMA channel direction support to the debugfs
module, it would be confusing to have both the DebugFS directory and the
channel direction referred to by the same 'dir' short name in the same
code. As a preparation patch, let's rename the DebugFS dentry 'dir'
variables to 'dent' to prevent the confusion.

Suggested-by: Manivannan Sadhasivam <[email protected]>
Signed-off-by: Serge Semin <[email protected]>
Reviewed-by: Manivannan Sadhasivam <[email protected]>
Tested-by: Manivannan Sadhasivam <[email protected]>
Acked-by: Vinod Koul <[email protected]>

---

Changelog v2:
- This is a new patch added in v2. (@Manivannan)
---
drivers/dma/dw-edma/dw-edma-v0-debugfs.c | 46 ++++++++++++------------
1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c
index 78f15e4b07ac..7bb3363b40e4 100644
--- a/drivers/dma/dw-edma/dw-edma-v0-debugfs.c
+++ b/drivers/dma/dw-edma/dw-edma-v0-debugfs.c
@@ -96,7 +96,7 @@ static int dw_edma_debugfs_u32_get(void *data, u64 *val)
DEFINE_DEBUGFS_ATTRIBUTE(fops_x32, dw_edma_debugfs_u32_get, NULL, "0x%08llx\n");

static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[],
- int nr_entries, struct dentry *dir)
+ int nr_entries, struct dentry *dent)
{
struct dw_edma_debugfs_entry *entries;
int i;
@@ -109,13 +109,13 @@ static void dw_edma_debugfs_create_x32(const struct dw_edma_debugfs_entry ini[],
for (i = 0; i < nr_entries; i++) {
entries[i] = ini[i];

- debugfs_create_file_unsafe(entries[i].name, 0444, dir,
+ debugfs_create_file_unsafe(entries[i].name, 0444, dent,
&entries[i], &fops_x32);
}
}

static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs,
- struct dentry *dir)
+ struct dentry *dent)
{
const struct dw_edma_debugfs_entry debugfs_regs[] = {
REGISTER(ch_control1),
@@ -131,10 +131,10 @@ static void dw_edma_debugfs_regs_ch(struct dw_edma_v0_ch_regs __iomem *regs,
int nr_entries;

nr_entries = ARRAY_SIZE(debugfs_regs);
- dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, dir);
+ dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, dent);
}

-static void dw_edma_debugfs_regs_wr(struct dentry *dir)
+static void dw_edma_debugfs_regs_wr(struct dentry *dent)
{
const struct dw_edma_debugfs_entry debugfs_regs[] = {
/* eDMA global registers */
@@ -171,34 +171,34 @@ static void dw_edma_debugfs_regs_wr(struct dentry *dir)
WR_REGISTER_UNROLL(ch6_pwr_en),
WR_REGISTER_UNROLL(ch7_pwr_en),
};
- struct dentry *regs_dir, *ch_dir;
+ struct dentry *regs_dent, *ch_dent;
int nr_entries, i;
char name[16];

- regs_dir = debugfs_create_dir(WRITE_STR, dir);
+ regs_dent = debugfs_create_dir(WRITE_STR, dent);

nr_entries = ARRAY_SIZE(debugfs_regs);
- dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir);
+ dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent);

if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) {
nr_entries = ARRAY_SIZE(debugfs_unroll_regs);
dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries,
- regs_dir);
+ regs_dent);
}

for (i = 0; i < dw->wr_ch_cnt; i++) {
snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i);

- ch_dir = debugfs_create_dir(name, regs_dir);
+ ch_dent = debugfs_create_dir(name, regs_dent);

- dw_edma_debugfs_regs_ch(&regs->type.unroll.ch[i].wr, ch_dir);
+ dw_edma_debugfs_regs_ch(&regs->type.unroll.ch[i].wr, ch_dent);

lim[0][i].start = &regs->type.unroll.ch[i].wr;
lim[0][i].end = &regs->type.unroll.ch[i].padding_1[0];
}
}

-static void dw_edma_debugfs_regs_rd(struct dentry *dir)
+static void dw_edma_debugfs_regs_rd(struct dentry *dent)
{
const struct dw_edma_debugfs_entry debugfs_regs[] = {
/* eDMA global registers */
@@ -236,27 +236,27 @@ static void dw_edma_debugfs_regs_rd(struct dentry *dir)
RD_REGISTER_UNROLL(ch6_pwr_en),
RD_REGISTER_UNROLL(ch7_pwr_en),
};
- struct dentry *regs_dir, *ch_dir;
+ struct dentry *regs_dent, *ch_dent;
int nr_entries, i;
char name[16];

- regs_dir = debugfs_create_dir(READ_STR, dir);
+ regs_dent = debugfs_create_dir(READ_STR, dent);

nr_entries = ARRAY_SIZE(debugfs_regs);
- dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir);
+ dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent);

if (dw->chip->mf == EDMA_MF_HDMA_COMPAT) {
nr_entries = ARRAY_SIZE(debugfs_unroll_regs);
dw_edma_debugfs_create_x32(debugfs_unroll_regs, nr_entries,
- regs_dir);
+ regs_dent);
}

for (i = 0; i < dw->rd_ch_cnt; i++) {
snprintf(name, sizeof(name), "%s:%d", CHANNEL_STR, i);

- ch_dir = debugfs_create_dir(name, regs_dir);
+ ch_dent = debugfs_create_dir(name, regs_dent);

- dw_edma_debugfs_regs_ch(&regs->type.unroll.ch[i].rd, ch_dir);
+ dw_edma_debugfs_regs_ch(&regs->type.unroll.ch[i].rd, ch_dent);

lim[1][i].start = &regs->type.unroll.ch[i].rd;
lim[1][i].end = &regs->type.unroll.ch[i].padding_2[0];
@@ -269,16 +269,16 @@ static void dw_edma_debugfs_regs(void)
REGISTER(ctrl_data_arb_prior),
REGISTER(ctrl),
};
- struct dentry *regs_dir;
+ struct dentry *regs_dent;
int nr_entries;

- regs_dir = debugfs_create_dir(REGISTERS_STR, dw->debugfs);
+ regs_dent = debugfs_create_dir(REGISTERS_STR, dw->debugfs);

nr_entries = ARRAY_SIZE(debugfs_regs);
- dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dir);
+ dw_edma_debugfs_create_x32(debugfs_regs, nr_entries, regs_dent);

- dw_edma_debugfs_regs_wr(regs_dir);
- dw_edma_debugfs_regs_rd(regs_dir);
+ dw_edma_debugfs_regs_wr(regs_dent);
+ dw_edma_debugfs_regs_rd(regs_dent);
}

void dw_edma_v0_debugfs_on(struct dw_edma *_dw)
--
2.38.1


2022-12-19 15:57:53

by Serge Semin

Subject: [PATCH v8 25/26] PCI: bt1: Set 64-bit DMA-mask

The DW PCIe RC IP-core is synthesized with a 64-bit AXI address bus.
Since the device is also equipped with the eDMA engine, we need to
explicitly set the device DMA-mask so the DMA-engine clients will be able
to allocate data buffers from anywhere in the DMA-able memory space.

Signed-off-by: Serge Semin <[email protected]>

---

Changelog v7:
- This is a new patch added on v7 stage of the series. (@Robin)
---
drivers/pci/controller/dwc/pcie-bt1.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/drivers/pci/controller/dwc/pcie-bt1.c b/drivers/pci/controller/dwc/pcie-bt1.c
index f6cdb35b4cde..3b95dadf5176 100644
--- a/drivers/pci/controller/dwc/pcie-bt1.c
+++ b/drivers/pci/controller/dwc/pcie-bt1.c
@@ -583,6 +583,10 @@ static int bt1_pcie_add_port(struct bt1_pcie *btpci)
struct device *dev = &btpci->pdev->dev;
int ret;

+ ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
+ if (ret)
+ return ret;
+
btpci->dw.version = DW_PCIE_VER_460A;
btpci->dw.dev = dev;
btpci->dw.ops = &bt1_pcie_ops;
--
2.38.1


2022-12-19 15:58:17

by Serge Semin

Subject: [PATCH v8 24/26] PCI: dwc: Set coherent DMA-mask on MSI-address allocation

The MSI target address needs to be reserved within the lowest 4GB of
memory in order to support PCIe peripherals without 64-bit MSI TLP
support. Since the allocation is done from DMA-coherent memory, let's
modify the allocation procedure to set the coherent DMA-mask only,
leaving the streaming DMA-mask untouched. Thus at least the streaming DMA
operations will work with no artificial limitations. This will be
particularly useful for the eDMA-capable controllers, since the
corresponding DMA-engine clients will be able to map DMA buffers located
above the 4GB boundary without SWIOTLB intervention.

While at it, let's add a brief comment about the reason the MSI target
address is allocated from DMA-coherent memory limited by the 4GB upper
bound.

Signed-off-by: Serge Semin <[email protected]>

---

Changelog v8:
- This is a new patch added on v8 stage of the series.
(@Robin, @Christoph)
---
drivers/pci/controller/dwc/pcie-designware-host.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
index 3ab6ae3712c4..e10608af39b4 100644
--- a/drivers/pci/controller/dwc/pcie-designware-host.c
+++ b/drivers/pci/controller/dwc/pcie-designware-host.c
@@ -366,7 +366,16 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
dw_chained_msi_isr, pp);
}

- ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+ /*
+ * Even though the iMSI-RX Module supports 64-bit addresses some
+ * peripheral PCIe devices may lack the 64-bit messages support. In
+ * order not to miss MSI TLPs from those devices the MSI target address
+ * has to be reserved within the lowest 4GB.
+ * Note until there is a better alternative found the reservation is
+ * done by allocating from the artificially limited DMA-coherent
+ * memory.
+ */
+ ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
if (ret)
dev_warn(dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n");

--
2.38.1


2022-12-19 15:59:00

by Serge Semin

Subject: [PATCH v8 04/26] dmaengine: dw-edma: Fix missing src/dst address of the interleaved xfers

Interleaved DMA transfer support was added in commit 85e7518f42c8
("dmaengine: dw-edma: Add device_prep_interleave_dma() support"). It
seems the support was broken from the very beginning: depending on the
selected channel, either the source or the destination address is left
uninitialized, which is obviously wrong. I don't really know how the
original modification worked for the commit author. Anyway, let's fix it
by initializing the destination address of the eDMA burst descriptors for
the DEV_TO_MEM interleaved operations and the source address of the eDMA
burst descriptors for the MEM_TO_DEV interleaved operations.

Fixes: 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support")
Signed-off-by: Serge Semin <[email protected]>
Reviewed-by: Manivannan Sadhasivam <[email protected]>
Tested-by: Manivannan Sadhasivam <[email protected]>
Acked-by: Vinod Koul <[email protected]>
---
drivers/dma/dw-edma/dw-edma-core.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index a8c1bd9c7ae9..778d91d9fc1b 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -455,6 +455,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
* and destination addresses are increased
* by the same portion (data length)
*/
+ } else if (xfer->type == EDMA_XFER_INTERLEAVED) {
+ burst->dar = dst_addr;
}
} else {
burst->dar = dst_addr;
@@ -470,6 +472,8 @@ dw_edma_device_transfer(struct dw_edma_transfer *xfer)
* and destination addresses are increased
* by the same portion (data length)
*/
+ } else if (xfer->type == EDMA_XFER_INTERLEAVED) {
+ burst->sar = src_addr;
}
}

--
2.38.1


2022-12-19 16:05:45

by Serge Semin

Subject: [PATCH v8 22/26] dmaengine: dw-edma: Skip cleanup procedure if no private data found

The DW eDMA driver private data is preserved in the passed DW eDMA chip
info structure. If the probe procedure failed, or the passed info object
doesn't have its private data pointer initialized for some reason, we
need to halt the DMA device cleanup procedure in order to prevent
possible system crashes.

Signed-off-by: Serge Semin <[email protected]>
Reviewed-by: Manivannan Sadhasivam <[email protected]>
Tested-by: Manivannan Sadhasivam <[email protected]>
Acked-by: Vinod Koul <[email protected]>
---
drivers/dma/dw-edma/dw-edma-core.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/drivers/dma/dw-edma/dw-edma-core.c b/drivers/dma/dw-edma/dw-edma-core.c
index e3671bfbe186..1906a836f0aa 100644
--- a/drivers/dma/dw-edma/dw-edma-core.c
+++ b/drivers/dma/dw-edma/dw-edma-core.c
@@ -1011,6 +1011,10 @@ int dw_edma_remove(struct dw_edma_chip *chip)
struct dw_edma *dw = chip->dw;
int i;

+ /* Skip removal if no private data found */
+ if (!dw)
+ return -ENODEV;
+
/* Disable eDMA */
dw_edma_v0_core_off(dw);

--
2.38.1


2023-01-11 13:49:59

by Robin Murphy

Subject: Re: [PATCH v8 24/26] PCI: dwc: Set coherent DMA-mask on MSI-address allocation

On 2022-12-19 14:46, Serge Semin wrote:
> The MSI target address requires to be reserved within the lowest 4GB
> memory in order to support the PCIe peripherals with no 64-bit MSI TLPs
> support. Since the allocation is done from the DMA-coherent memory let's
> modify the allocation procedure to setting the coherent DMA-mask only and
> avoiding the streaming DMA-mask modification. Thus at least the streaming
> DMA operations would work with no artificial limitations. It will be
> specifically useful for the eDMA-capable controllers so the corresponding
> DMA-engine clients would map the DMA buffers with no need in the SWIOTLB
> intervention for the buffers allocated above the 4GB memory region.
>
> While at it let's add a brief comment about the reason of having the MSI
> target address allocated from the DMA-coherent memory limited with the 4GB
> upper bound.

Reviewed-by: Robin Murphy <[email protected]>

> Signed-off-by: Serge Semin <[email protected]>
>
> ---
>
> Changelog v8:
> - This is a new patch added on v8 stage of the series.
> (@Robin, @Christoph)
> ---
> drivers/pci/controller/dwc/pcie-designware-host.c | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> index 3ab6ae3712c4..e10608af39b4 100644
> --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> @@ -366,7 +366,16 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
> dw_chained_msi_isr, pp);
> }
>
> - ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
> + /*
> + * Even though the iMSI-RX Module supports 64-bit addresses some
> + * peripheral PCIe devices may lack the 64-bit messages support. In
> + * order not to miss MSI TLPs from those devices the MSI target address
> + * has to be reserved within the lowest 4GB.
> + * Note until there is a better alternative found the reservation is
> + * done by allocating from the artificially limited DMA-coherent
> + * memory.
> + */
> + ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
> if (ret)
> dev_warn(dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n");
>

2023-01-12 19:27:26

by Serge Semin

Subject: Re: [PATCH v8 24/26] PCI: dwc: Set coherent DMA-mask on MSI-address allocation

On Wed, Jan 11, 2023 at 01:39:35PM +0000, Robin Murphy wrote:
> On 2022-12-19 14:46, Serge Semin wrote:
> > The MSI target address requires to be reserved within the lowest 4GB
> > memory in order to support the PCIe peripherals with no 64-bit MSI TLPs
> > support. Since the allocation is done from the DMA-coherent memory let's
> > modify the allocation procedure to setting the coherent DMA-mask only and
> > avoiding the streaming DMA-mask modification. Thus at least the streaming
> > DMA operations would work with no artificial limitations. It will be
> > specifically useful for the eDMA-capable controllers so the corresponding
> > DMA-engine clients would map the DMA buffers with no need in the SWIOTLB
> > intervention for the buffers allocated above the 4GB memory region.
> >
> > While at it let's add a brief comment about the reason of having the MSI
> > target address allocated from the DMA-coherent memory limited with the 4GB
> > upper bound.
>
> Reviewed-by: Robin Murphy <[email protected]>

Great! Thanks.

-Serge(y)

>
> > Signed-off-by: Serge Semin <[email protected]>
> >
> > ---
> >
> > Changelog v8:
> > - This is a new patch added on v8 stage of the series.
> > (@Robin, @Christoph)
> > ---
> > drivers/pci/controller/dwc/pcie-designware-host.c | 11 ++++++++++-
> > 1 file changed, 10 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
> > index 3ab6ae3712c4..e10608af39b4 100644
> > --- a/drivers/pci/controller/dwc/pcie-designware-host.c
> > +++ b/drivers/pci/controller/dwc/pcie-designware-host.c
> > @@ -366,7 +366,16 @@ static int dw_pcie_msi_host_init(struct dw_pcie_rp *pp)
> > dw_chained_msi_isr, pp);
> > }
> > - ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
> > + /*
> > + * Even though the iMSI-RX Module supports 64-bit addresses some
> > + * peripheral PCIe devices may lack the 64-bit messages support. In
> > + * order not to miss MSI TLPs from those devices the MSI target address
> > + * has to be reserved within the lowest 4GB.
> > + * Note until there is a better alternative found the reservation is
> > + * done by allocating from the artificially limited DMA-coherent
> > + * memory.
> > + */
> > + ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
> > if (ret)
> > dev_warn(dev, "Failed to set DMA mask to 32-bit. Devices with only 32-bit MSI support may not work properly\n");