From: Kory Maincent <[email protected]>
The linked list elements and pointers are not stored in the same memory as
the HDMA controller registers. If the doorbell register is toggled before
the linked list has been fully written, a race condition error can appear.
In a remote setup we can only use a readl to that memory to ensure the full
write has occurred.
Fixes: e74c39573d35 ("dmaengine: dw-edma: Add support for native HDMA")
Signed-off-by: Kory Maincent <[email protected]>
---
This patch fixes a commit which is only in the dmaengine tree and has not
been merged mainline yet.
---
drivers/dma/dw-edma/dw-hdma-v0-core.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/drivers/dma/dw-edma/dw-hdma-v0-core.c b/drivers/dma/dw-edma/dw-hdma-v0-core.c
index 7bd1a0f742be..0b77ddbe91b5 100644
--- a/drivers/dma/dw-edma/dw-hdma-v0-core.c
+++ b/drivers/dma/dw-edma/dw-hdma-v0-core.c
@@ -247,6 +247,15 @@ static void dw_hdma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
/* Set consumer cycle */
SET_CH_32(dw, chan->dir, chan->id, cycle_sync,
HDMA_V0_CONSUMER_CYCLE_STAT | HDMA_V0_CONSUMER_CYCLE_BIT);
+
+ if (!(chunk->chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL))
+ /* Make sure Linked List has been written.
+ * Linux memory barriers don't cater for what's required here.
+ * What's required is what's here - a read of the linked
+ * list region.
+ */
+ readl(chunk->ll_region.vaddr.io);
+
/* Doorbell */
SET_CH_32(dw, chan->dir, chan->id, doorbell, HDMA_V0_DOORBELL_START);
}
--
2.25.1
On Fri, Jun 09, 2023 at 10:16:49AM +0200, Köry Maincent wrote:
> From: Kory Maincent <[email protected]>
>
> The Linked list element and pointer are not stored in the same memory as
> the HDMA controller register. If the doorbell register is toggled before
> the full write of the linked list a race condition error can appears.
> In remote setup we can only use a readl to the memory to assured the full
> write has occurred.
>
> Fixes: e74c39573d35 ("dmaengine: dw-edma: Add support for native HDMA")
> Signed-off-by: Kory Maincent <[email protected]>
Is this a hypothetical bug? Have you actually experienced the
described problem? If so are you sure that it's supposed to be fixed
as you suggest?
I am asking because based on the kernel doc (Documentation/memory-barriers.txt):
* 1. All readX() and writeX() accesses to the same peripheral are ordered
* with respect to each other. This ensures that MMIO register accesses
* by the same CPU thread to a particular device will arrive in program
* order.
* ...
* The ordering properties of __iomem pointers obtained with non-default
* attributes (e.g. those returned by ioremap_wc()) are specific to the
* underlying architecture and therefore the guarantees listed above cannot
* generally be relied upon for accesses to these types of mappings.
the IOs performed by the accessors are supposed to arrive in program
order. Thus performing SET_CH_32(..., HDMA_V0_DOORBELL_START) after all
the previous SET_CH_32(...) calls looks correct, with no need for
additional barriers. The results of the preceding operations are
supposed to be seen by the device (in our case a remote DW eDMA
controller) before the doorbell update. From that perspective your
problem looks as if the IO operations preceding the doorbell CSR update
aren't finished yet. So are you sure that the LL memory is mapped with
no additional flags like Write-Combine or some caching optimizations?
Are you sure that the PCIe IOs are correctly implemented in your
platform?
I do understand that the eDMA CSRs and the LL memory are mapped by
different BARs in the remote eDMA setup. But they still belong to the
same device. So the IO accessor semantics described in the kernel doc
imply no need for an additional barrier.
-Serge(y)
> ---
>
> This patch is fixing a commit which is only in dmaengine tree and not
> merged mainline.
> ---
> drivers/dma/dw-edma/dw-hdma-v0-core.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>
> diff --git a/drivers/dma/dw-edma/dw-hdma-v0-core.c b/drivers/dma/dw-edma/dw-hdma-v0-core.c
> index 7bd1a0f742be..0b77ddbe91b5 100644
> --- a/drivers/dma/dw-edma/dw-hdma-v0-core.c
> +++ b/drivers/dma/dw-edma/dw-hdma-v0-core.c
> @@ -247,6 +247,15 @@ static void dw_hdma_v0_core_start(struct dw_edma_chunk *chunk, bool first)
> /* Set consumer cycle */
> SET_CH_32(dw, chan->dir, chan->id, cycle_sync,
> HDMA_V0_CONSUMER_CYCLE_STAT | HDMA_V0_CONSUMER_CYCLE_BIT);
> +
> + if (!(chunk->chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL))
> + /* Make sure Linked List has been written.
> + * Linux memory barriers don't cater for what's required here.
> + * What's required is what's here - a read of the linked
> + * list region.
> + */
> + readl(chunk->ll_region.vaddr.io);
> +
> /* Doorbell */
> SET_CH_32(dw, chan->dir, chan->id, doorbell, HDMA_V0_DOORBELL_START);
> }
> --
> 2.25.1
>
On Mon, 19 Jun 2023 20:02:01 +0300
Serge Semin <[email protected]> wrote:
> On Fri, Jun 09, 2023 at 10:16:49AM +0200, Köry Maincent wrote:
> > From: Kory Maincent <[email protected]>
> >
>
> > The Linked list element and pointer are not stored in the same memory as
> > the HDMA controller register. If the doorbell register is toggled before
> > the full write of the linked list a race condition error can appears.
> > In remote setup we can only use a readl to the memory to assured the full
> > write has occurred.
> >
> > Fixes: e74c39573d35 ("dmaengine: dw-edma: Add support for native HDMA")
> > Signed-off-by: Kory Maincent <[email protected]>
>
> Is this a hypothetical bug? Have you actually experienced the
> described problem? If so are you sure that it's supposed to be fixed
> as you suggest?
I did experience this problem and this patch fixed it.
>
> I am asking because based on the kernel doc
> (Documentation/memory-barriers.txt):
>
> * 1. All readX() and writeX() accesses to the same peripheral are ordered
> * with respect to each other. This ensures that MMIO register accesses
> * by the same CPU thread to a particular device will arrive in program
> * order.
> * ...
> * The ordering properties of __iomem pointers obtained with non-default
> * attributes (e.g. those returned by ioremap_wc()) are specific to the
> * underlying architecture and therefore the guarantees listed above cannot
> * generally be relied upon for accesses to these types of mappings.
>
> the IOs performed by the accessors are supposed to arrive in the
> program order. Thus SET_CH_32(..., HDMA_V0_DOORBELL_START) performed
> after all the previous SET_CH_32(...) are finished looks correct with
> no need in additional barriers. The results of the later operations
> are supposed to be seen by the device (in our case it's a remote DW
> eDMA controller) before the doorbell update from scratch. From that
> perspective your problem looks as if the IO operations preceding the
> doorbell CSR update aren't finished yet. So you are sure that the LL
> memory is mapped with no additional flags like Write-Combine or some
> caching optimizations? Are you sure that the PCIe IOs are correctly
> implemented in your platform?
No, I don't know if there are extra flags or optimizations.
>
> I do understand that the eDMA CSRs and the LL memory are mapped by
> different BARs in the remote eDMA setup. But they still belong to the
> same device. So the IO accessors semantic described in the kernel doc
> implies no need in additional barrier.
Even if they are on the same device, they are two types of memory.
I am not a PCIe expert, but I suppose the PCIe controller of the board is
sending to both memories, and if one of them (the LL memory here) is slower in
the write process then we face this race issue. We cannot find out whether the
write to the LL memory has finished before the write to the CSRs, even if the
write command was sent earlier.
Köry,
On Mon, Jun 19, 2023 at 08:32:07PM +0200, Köry Maincent wrote:
> On Mon, 19 Jun 2023 20:02:01 +0300
> Serge Semin <[email protected]> wrote:
>
> > On Fri, Jun 09, 2023 at 10:16:49AM +0200, Köry Maincent wrote:
> > > From: Kory Maincent <[email protected]>
> > >
> >
> > > The Linked list element and pointer are not stored in the same memory as
> > > the HDMA controller register. If the doorbell register is toggled before
> > > the full write of the linked list a race condition error can appears.
> > > In remote setup we can only use a readl to the memory to assured the full
> > > write has occurred.
> > >
> > > Fixes: e74c39573d35 ("dmaengine: dw-edma: Add support for native HDMA")
> > > Signed-off-by: Kory Maincent <[email protected]>
> >
> > Is this a hypothetical bug? Have you actually experienced the
> > described problem? If so are you sure that it's supposed to be fixed
> > as you suggest?
>
> I do experienced this problem and this patch fixed it.
Could you give more details on how often it happens? Is it stably
reproducible or does it happen only on very rare occasions?
>
> >
> > I am asking because based on the kernel doc
> > (Documentation/memory-barriers.txt):
> >
> > * 1. All readX() and writeX() accesses to the same peripheral are ordered
> > * with respect to each other. This ensures that MMIO register accesses
> > * by the same CPU thread to a particular device will arrive in program
> > * order.
> > * ...
> > * The ordering properties of __iomem pointers obtained with non-default
> > * attributes (e.g. those returned by ioremap_wc()) are specific to the
> > * underlying architecture and therefore the guarantees listed above cannot
> > * generally be relied upon for accesses to these types of mappings.
> >
> > the IOs performed by the accessors are supposed to arrive in the
> > program order. Thus SET_CH_32(..., HDMA_V0_DOORBELL_START) performed
> > after all the previous SET_CH_32(...) are finished looks correct with
> > no need in additional barriers. The results of the later operations
> > are supposed to be seen by the device (in our case it's a remote DW
> > eDMA controller) before the doorbell update from scratch. From that
> > perspective your problem looks as if the IO operations preceding the
> > doorbell CSR update aren't finished yet. So you are sure that the LL
> > memory is mapped with no additional flags like Write-Combine or some
> > caching optimizations? Are you sure that the PCIe IOs are correctly
> > implemented in your platform?
>
> No, I don't know if there is extra flags or optimizations.
Well, I can't know that either.) The only one who can figure it out is
you, at least at this stage (I doubt Gustavo will ever get back to
reviewing and testing the driver on his remote eDMA device). I can
help if you provide some more details about the platform you are
using and about the low-level driver (is it
drivers/dma/dw-edma/dw-edma-pcie.o?) which detects the remote DW eDMA
device and probes it via the DW eDMA core.
* Though I don't have hardware with the remote DW eDMA setup to try to
reproduce and debug the problem you discovered.
>
> >
> > I do understand that the eDMA CSRs and the LL memory are mapped by
> > different BARs in the remote eDMA setup. But they still belong to the
> > same device. So the IO accessors semantic described in the kernel doc
> > implies no need in additional barrier.
>
> Even if they are on the same device it is two type of memory.
What do you mean by "two types of memory"? From the CPU perspective
they are the same. Both are mapped via MMIO by means of a PCIe Root
Port outbound memory window.
> I am not an PCIe expert but I suppose the PCIe controller of the board is
> sending to both memory and if one of them (LL memory here) is slower in the
> write process then we faced this race issue. We can not find out that the write
> to LL memory has finished before the CSRs even if the write command has been
> sent earlier.
From your description there is no guarantee that reading from the
remote device solves the race for sure. If writes have been collected
in a cache, then the intermediate read will return data from the
cache with no data being flushed to the device memory. It might be
possible that in your case the read just adds enough delay for
some independent activity to flush the cache. Thus the problem you
discovered may come back in some other circumstances. Moreover, based on
the PCI Express specification "A Posted Request must not pass another
Posted Request unless a TLP has RO (Relaxed ordering) or IDO (ID-based
ordering) flag set." So neither intermediate PCIe switches nor the
PCIe host controller is supposed to re-order simple writes unless the
Root Port outbound MW is configured to set the denoted flags. In any case
all of that is platform specific. So in order to have it figured out
we need more details about the platform from you.
Meanwhile:
Q1 are you sure that neither dma_wmb() nor io_stop_wc() helps to solve
the problem in your case?
Q2 Does specifying a delay instead of the dummy read before the
doorbell update solve the problem?
-Serge(y)
>
> Köry,
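For reference, here is a minimal sketch of the experiment Q1 above asks about, with the two barriers placed between the linked-list writes and the doorbell update in dw_hdma_v0_core_start(); the exact placement is an assumption for illustration, not tested code:

        /* ... after the linked-list chunk has been written ... */
        dma_wmb();      /* order the preceding LL stores before the doorbell store */
        io_stop_wc();   /* flush write-combining buffers (a no-op everywhere but arm64) */
        /* Doorbell */
        SET_CH_32(dw, chan->dir, chan->id, doorbell, HDMA_V0_DOORBELL_START);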
On Tue, 20 Jun 2023 14:45:40 +0300
Serge Semin <[email protected]> wrote:
> On Mon, Jun 19, 2023 at 08:32:07PM +0200, Köry Maincent wrote:
> > On Mon, 19 Jun 2023 20:02:01 +0300
> > Serge Semin <[email protected]> wrote:
> >
> > > On Fri, Jun 09, 2023 at 10:16:49AM +0200, Köry Maincent wrote:
> > > > From: Kory Maincent <[email protected]>
> > > >
> > >
> > > > The Linked list element and pointer are not stored in the same memory as
> > > > the HDMA controller register. If the doorbell register is toggled before
> > > > the full write of the linked list a race condition error can appears.
> > > > In remote setup we can only use a readl to the memory to assured the
> > > > full write has occurred.
> > > >
> > > > Fixes: e74c39573d35 ("dmaengine: dw-edma: Add support for native HDMA")
> > > > Signed-off-by: Kory Maincent <[email protected]>
> > >
> > > Is this a hypothetical bug? Have you actually experienced the
> > > described problem? If so are you sure that it's supposed to be fixed
> > > as you suggest?
> >
>
> > I do experienced this problem and this patch fixed it.
>
> Could you give more details of how often does it happen? Is it stably
> reproducible or does it happen at very rare occasion?
I have a test example that runs DMA stress transfers in 3 threads, and
the issue appears in only a few transfers, but it does every time I run my test.
> > > I am asking because based on the kernel doc
> > > (Documentation/memory-barriers.txt):
> > >
> > > * 1. All readX() and writeX() accesses to the same peripheral are
> > > ordered
> > > * with respect to each other. This ensures that MMIO register
> > > accesses
> > > * by the same CPU thread to a particular device will arrive in
> > > program
> > > * order.
> > > * ...
> > > * The ordering properties of __iomem pointers obtained with non-default
> > > * attributes (e.g. those returned by ioremap_wc()) are specific to the
> > > * underlying architecture and therefore the guarantees listed above cannot
> > > * generally be relied upon for accesses to these types of mappings.
> > >
> > > the IOs performed by the accessors are supposed to arrive in the
> > > program order. Thus SET_CH_32(..., HDMA_V0_DOORBELL_START) performed
> > > after all the previous SET_CH_32(...) are finished looks correct with
> > > no need in additional barriers. The results of the later operations
> > > are supposed to be seen by the device (in our case it's a remote DW
> > > eDMA controller) before the doorbell update from scratch. From that
> > > perspective your problem looks as if the IO operations preceding the
> > > doorbell CSR update aren't finished yet. So you are sure that the LL
> > > memory is mapped with no additional flags like Write-Combine or some
> > > caching optimizations? Are you sure that the PCIe IOs are correctly
> > > implemented in your platform?
> >
> > No, I don't know if there is extra flags or optimizations.
>
> Well, I can't know that either.) The only one who can figure it out is
> you, at least at this stage (I doubt Gustavo will ever get back to
> reviewing and testing the driver on his remote eDMA device). I can
> help if you provide some more details about the platform you are
> using, about the low-level driver (is it
> drivers/dma/dw-edma/dw-edma-pcie.o?) which gets to detect the DW eDMA
> remote device and probes it by the DW eDMA core.
No, it is another custom driver, but it also communicates through PCIe. In fact I
have a contact with the FPGA designer; I will ask them.
>
> * Though I don't have hardware with the remote DW eDMA setup to try to
> reproduce and debug the problem discovered by you.
>
> >
> > >
> > > I do understand that the eDMA CSRs and the LL memory are mapped by
> > > different BARs in the remote eDMA setup. But they still belong to the
> > > same device. So the IO accessors semantic described in the kernel doc
> > > implies no need in additional barrier.
> >
> > Even if they are on the same device it is two type of memory.
>
> What do you mean by "two types of memory"? From the CPU perspective
> they are the same. Both are mapped via MMIO by means of a PCIe Root
> Port outbound memory window.
I meant hardware memory. Yes, they are mapped via MMIO, but they are
mapped through two different BARs, one mapping the CSRs and the other the memory
where the LLs are stored. According to you the writes should be ordered, but is
there a way to know that a write has succeeded?
> > I am not an PCIe expert but I suppose the PCIe controller of the board is
> > sending to both memory and if one of them (LL memory here) is slower in the
> > write process then we faced this race issue. We can not find out that the
> > write to LL memory has finished before the CSRs even if the write command
> > has been sent earlier.
>
> From your description there is no guarantee that reading from the
> remote device solves the race for sure. If writes have been collected
> in a cache, then the intermediate read shall return a data from the
> cache with no data being flushed to the device memory. It might be
> possible that in your case the read just adds some delay enough for
> some independent activity to flush the cache. Thus the problem you
> discovered may get back in some other circumstance. Moreover based on
> the PCI Express specification "A Posted Request must not pass another
> Posted Request unless a TLP has RO (Relaxed ordering) or IDO (ID-based
> ordering) flag set." So neither intermediate PCIe switches nor the
> PCIe host controller is supposed to re-order simple writes unless the
> Root Port outbound MW is configure to set the denoted flags. In anyway
> all of that is platform specific. So in order to have it figured out
> we need more details from the platform from you.
I thought that using a read would solve the issue, like the gpio_nand driver
does (gpio_nand_dosync), but I didn't think of a cache that could return the value
of the read even if the write hasn't fully happened. In the case of a cache, how
could we know that the write is done without using a delay?
>
> Meanwhile:
>
> Q1 are you sure that neither dma_wmb() nor io_stop_wc() help to solve
> the problem in your case?
dma_wmb() is like wmb() and is already called by the writel() of the doorbell.
io_stop_wc() does nothing except on arm64.
Neither of these functions changes anything.
> Q2 Does specifying a delay instead of the dummy read before the
> doorbell update solve the problem?
Delaying for at least 4 us before toggling the doorbell also solves the issue.
That seems long for an equivalent of the readl() call, right?
Wouldn't using a read after the write ask the PCIe controller to check that the
write has happened? It should be written in the PCIe protocol, but I'm not sure
I want to open the full protocol description document.
Köry
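Putting the two observations above side by side, here is a minimal sketch of the dummy-read and fixed-delay experiments, both placed just before the doorbell write in dw_hdma_v0_core_start(); the variant split and the 4 us value are assumptions taken from this thread, not a proposed fix:

        if (!(chunk->chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL)) {
                /*
                 * Variant A: dummy read. PCIe memory reads are non-posted,
                 * so this readl() completes only once the earlier posted
                 * LL writes have reached the endpoint.
                 */
                readl(chunk->ll_region.vaddr.io);

                /*
                 * Variant B, tried as an alternative above: a plain delay,
                 * ~4 us was reported as enough (needs <linux/delay.h>).
                 */
                /* udelay(4); */
        }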
On Tue, Jun 20, 2023 at 03:30:06PM +0200, Köry Maincent wrote:
> On Tue, 20 Jun 2023 14:45:40 +0300
> Serge Semin <[email protected]> wrote:
>
> > On Mon, Jun 19, 2023 at 08:32:07PM +0200, Köry Maincent wrote:
> > > On Mon, 19 Jun 2023 20:02:01 +0300
> > > Serge Semin <[email protected]> wrote:
> > >
> > > > On Fri, Jun 09, 2023 at 10:16:49AM +0200, Köry Maincent wrote:
> > > > > From: Kory Maincent <[email protected]>
> > > > >
> > > >
> > > > > The Linked list element and pointer are not stored in the same memory as
> > > > > the HDMA controller register. If the doorbell register is toggled before
> > > > > the full write of the linked list a race condition error can appears.
> > > > > In remote setup we can only use a readl to the memory to assured the
> > > > > full write has occurred.
> > > > >
> > > > > Fixes: e74c39573d35 ("dmaengine: dw-edma: Add support for native HDMA")
> > > > > Signed-off-by: Kory Maincent <[email protected]>
> > > >
> > > > Is this a hypothetical bug? Have you actually experienced the
> > > > described problem? If so are you sure that it's supposed to be fixed
> > > > as you suggest?
> > >
> >
> > > I do experienced this problem and this patch fixed it.
> >
> > Could you give more details of how often does it happen? Is it stably
> > reproducible or does it happen at very rare occasion?
>
> I have a test example that run DMA stress transfer in 3 threads and
> the issue appear in only few transfers but each time I run my test.
>
>
> > > > I am asking because based on the kernel doc
> > > > (Documentation/memory-barriers.txt):
> > > >
> > > > * 1. All readX() and writeX() accesses to the same peripheral are
> > > > ordered
> > > > * with respect to each other. This ensures that MMIO register
> > > > accesses
> > > > * by the same CPU thread to a particular device will arrive in
> > > > program
> > > > * order.
> > > > * ...
> > > > * The ordering properties of __iomem pointers obtained with non-default
> > > > * attributes (e.g. those returned by ioremap_wc()) are specific to the
> > > > * underlying architecture and therefore the guarantees listed above cannot
> > > > * generally be relied upon for accesses to these types of mappings.
> > > >
> > > > the IOs performed by the accessors are supposed to arrive in the
> > > > program order. Thus SET_CH_32(..., HDMA_V0_DOORBELL_START) performed
> > > > after all the previous SET_CH_32(...) are finished looks correct with
> > > > no need in additional barriers. The results of the later operations
> > > > are supposed to be seen by the device (in our case it's a remote DW
> > > > eDMA controller) before the doorbell update from scratch. From that
> > > > perspective your problem looks as if the IO operations preceding the
> > > > doorbell CSR update aren't finished yet. So you are sure that the LL
> > > > memory is mapped with no additional flags like Write-Combine or some
> > > > caching optimizations? Are you sure that the PCIe IOs are correctly
> > > > implemented in your platform?
> > >
> > > No, I don't know if there is extra flags or optimizations.
> >
> > Well, I can't know that either.) The only one who can figure it out is
> > you, at least at this stage (I doubt Gustavo will ever get back to
> > reviewing and testing the driver on his remote eDMA device). I can
> > help if you provide some more details about the platform you are
> > using, about the low-level driver (is it
> > drivers/dma/dw-edma/dw-edma-pcie.o?) which gets to detect the DW eDMA
> > remote device and probes it by the DW eDMA core.
>
> No it is another custom driver but also communicating through PCIe. In fact I
> have a contact to the FPGA designer, I will ask them.
Then if I were you I would first check the way the driver maps the
BARs. In addition I would ask the FPGA designer whether
it's possible to have writes re-ordered, or some significant
delays during the LL BAR writes, inside the eDMA end-point itself.
>
> >
> > * Though I don't have hardware with the remote DW eDMA setup to try to
> > reproduce and debug the problem discovered by you.
> >
> > >
> > > >
> > > > I do understand that the eDMA CSRs and the LL memory are mapped by
> > > > different BARs in the remote eDMA setup. But they still belong to the
> > > > same device. So the IO accessors semantic described in the kernel doc
> > > > implies no need in additional barrier.
> > >
> > > Even if they are on the same device it is two type of memory.
> >
> > What do you mean by "two types of memory"? From the CPU perspective
> > they are the same. Both are mapped via MMIO by means of a PCIe Root
> > Port outbound memory window.
>
> I was meaning hardware memory. Yes they are mapped via MMIO, but they are
> mapped to two different BAR which may map the CSRs or the memory where LL
> are stored. According to you the write should be ordered but is there a way
> to know that the write has succeed?
No. Normal writes are posted on the PCIe bus. That is "send and forget".
>
>
> > > I am not an PCIe expert but I suppose the PCIe controller of the board is
> > > sending to both memory and if one of them (LL memory here) is slower in the
> > > write process then we faced this race issue. We can not find out that the
> > > write to LL memory has finished before the CSRs even if the write command
> > > has been sent earlier.
> >
> > From your description there is no guarantee that reading from the
> > remote device solves the race for sure. If writes have been collected
> > in a cache, then the intermediate read shall return a data from the
> > cache with no data being flushed to the device memory. It might be
> > possible that in your case the read just adds some delay enough for
> > some independent activity to flush the cache. Thus the problem you
> > discovered may get back in some other circumstance. Moreover based on
> > the PCI Express specification "A Posted Request must not pass another
> > Posted Request unless a TLP has RO (Relaxed ordering) or IDO (ID-based
> > ordering) flag set." So neither intermediate PCIe switches nor the
> > PCIe host controller is supposed to re-order simple writes unless the
> > Root Port outbound MW is configure to set the denoted flags. In anyway
> > all of that is platform specific. So in order to have it figured out
> > we need more details from the platform from you.
>
> I thought that using a read will solve the issue like the gpio_nand driver
> (gpio_nand_dosync)
AFAICS the io_sync dummy-read there is a workaround for bus reordering
within the SoC bus. In this case we have a PCIe bus, which is supposed
to guarantee strong ordering, with the exception I described above,
unless there is a bug someplace in the PCIe fabric.
> but I didn't thought of a cache that could return the value
> of the read even if the write doesn't fully happen. In the case of a cache how
> could we know that the write is done without using a delay?
MMIO mapping is platform dependent and low-level driver dependent.
That's why I asked many times about the platform you are using and the
low-level driver that probes the eDMA engine. It would also be useful
to know what PCIe host controller is utilized.
Normally MMIO spaces are mapped in a way that bypasses caching. But in
some cases it might be useful to map an MMIO space with additional
optimizations like Write-combining. For instance it could be
effectively done for the eDMA linked-list BAR mapping. Indeed, why
would you need to send each linked-list byte/word/dword right away to
the device when you can combine them, send them all together, then
flush the cache and only after that start the DMA transfer? Another
possible reason for the write reordering could be the way the PCIe
host outbound memory window (a memory region whose accesses are
translated to PCIe bus transfers) is configured. For instance the DW
PCIe host controller outbound MW config CSR has a special flag which
enables setting a custom attribute on the PCIe bus TLPs (packets). As I
mentioned above, that attribute can affect the TLP ordering: make it
relaxed or ID-based.
Of course we can't rule out the possibility of some delays hidden
inside your device which may cause the writes to the internal memory
to land after the writes to the CSRs. But that seems too exotic to be
considered the real cause for sure until the alternatives are
thoroughly checked.
What I was trying to say is that your problem can be caused by some much
more commonly met reason. If I were you I would check those first and
only then consider a workaround like the one you suggest.
-Serge(y)
>
> >
> > Meanwhile:
> >
> > Q1 are you sure that neither dma_wmb() nor io_stop_wc() help to solve
> > the problem in your case?
>
> dma_wmb is like wmb and is called by the writel function of the doorbell.
> io_stop_wc is doing nothing except or arm64.
> Both of these function won't change anything.
>
> > Q2 Does specifying a delay instead of the dummy read before the
> > doorbell update solve the problem?
>
> Delaying it for at least 4 us before toggling doorbell solves also the issue.
> This seems long for an equivalent of the readl function right?
> Wouldn't using a read after the write ask to the PCIe controller to check the
> write has happen? It should be written in the PCIe protocol but not sure I want
> to open the full protocol description document.
>
> Köry
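To illustrate the write-combining scenario described above, a low-level driver could in principle map the linked-list BAR as in the sketch below; this is not what the Akida driver does (it relies on pcim_iomap_regions(), i.e. a plain uncached mapping), and the helper name is made up:

        /*
         * Hypothetical write-combined mapping of the LL BAR. With such a
         * mapping the CPU may merge and delay the LL stores, so the caller
         * would have to flush them (e.g. io_stop_wc() on arm64) before
         * ringing the doorbell.
         */
        static void __iomem *akida_map_ll_bar_wc(struct pci_dev *pdev, int bar)
        {
                return ioremap_wc(pci_resource_start(pdev, bar),
                                  pci_resource_len(pdev, bar));
        }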
On Wed, 21 Jun 2023 12:45:35 +0300
Serge Semin <[email protected]> wrote:
> > I thought that using a read will solve the issue like the gpio_nand driver
> > (gpio_nand_dosync)
>
> AFAICS The io_sync dummy-read there is a workaround to fix the
> bus-reordering within the SoC bus. In this case we have a PCIe bus
> which is supposed to guarantee the strong order with the exception I
> described above or unless there is a bug someplace in the PCIe fabric.
>
> > but I didn't thought of a cache that could return the value
> > of the read even if the write doesn't fully happen. In the case of a cache
> > how could we know that the write is done without using a delay?
>
> MMIO mapping is platform dependent and low-level driver dependent.
> That's why I asked many times about the platform you are using and the
> low-level driver that probes the eDMA engine. It would be also useful
> to know what PCIe host controller is utilized too.
>
> Mainly MMIO spaces are mapped in a way to bypass the caching. But in
> some cases it might be useful to map an MMIO space with additional
> optimizations like Write-combining. For instance it could be
> effectively done for the eDMA linked-list BAR mapping. Indeed why
> would you need to send each linked-list byte/word/dword right away to
> the device while you can combine them and send all together, then
> flush the cache and only after that start the DMA transfer? Another
> possible reason of the writes reordering could be in a way the PCIe
> host outbound memory window (a memory region accesses to which are
> translated to the PCIe bus transfers) is configured. For instance DW
> PCIe Host controller outbound MW config CSR has a special flag which
> enables setting a custom PCIe bus TLPs (packets) attribute. As I
> mentioned above that attribute can affect the TLPs order: make it
> relaxed or ID-based.
>
> Of course we can't reject a possibility of having some delays hidden
> inside your device which may cause writes to the internal memory
> landing after the writes to the CSRs. But that seems too exotic to be
> considered as the real one for sure until the alternatives are
> thoroughly checked.
>
> What I was trying to say that your problem can be caused by some much
> more frequently met reason. If I were you I would have checked them
> first and only then considered a workaround like you suggest.
Thanks for your detailed answer, this was instructive.
I will come back with more information on whether the TLP flags are set.
FYI the PCIe board I am currently working with is the one from Brainchip.
Here is the driver:
https://github.com/Brainchip-Inc/akida_dw_edma
Köry
On Wed, Jun 21, 2023 at 03:19:48PM +0200, Köry Maincent wrote:
> On Wed, 21 Jun 2023 12:45:35 +0300
> Serge Semin <[email protected]> wrote:
>
> > > I thought that using a read will solve the issue like the gpio_nand driver
> > > (gpio_nand_dosync)
> >
> > AFAICS The io_sync dummy-read there is a workaround to fix the
> > bus-reordering within the SoC bus. In this case we have a PCIe bus
> > which is supposed to guarantee the strong order with the exception I
> > described above or unless there is a bug someplace in the PCIe fabric.
> >
> > > but I didn't thought of a cache that could return the value
> > > of the read even if the write doesn't fully happen. In the case of a cache
> > > how could we know that the write is done without using a delay?
> >
> > MMIO mapping is platform dependent and low-level driver dependent.
> > That's why I asked many times about the platform you are using and the
> > low-level driver that probes the eDMA engine. It would be also useful
> > to know what PCIe host controller is utilized too.
> >
> > Mainly MMIO spaces are mapped in a way to bypass the caching. But in
> > some cases it might be useful to map an MMIO space with additional
> > optimizations like Write-combining. For instance it could be
> > effectively done for the eDMA linked-list BAR mapping. Indeed why
> > would you need to send each linked-list byte/word/dword right away to
> > the device while you can combine them and send all together, then
> > flush the cache and only after that start the DMA transfer? Another
> > possible reason of the writes reordering could be in a way the PCIe
> > host outbound memory window (a memory region accesses to which are
> > translated to the PCIe bus transfers) is configured. For instance DW
> > PCIe Host controller outbound MW config CSR has a special flag which
> > enables setting a custom PCIe bus TLPs (packets) attribute. As I
> > mentioned above that attribute can affect the TLPs order: make it
> > relaxed or ID-based.
> >
> > Of course we can't reject a possibility of having some delays hidden
> > inside your device which may cause writes to the internal memory
> > landing after the writes to the CSRs. But that seems too exotic to be
> > considered as the real one for sure until the alternatives are
> > thoroughly checked.
> >
> > What I was trying to say that your problem can be caused by some much
> > more frequently met reason. If I were you I would have checked them
> > first and only then considered a workaround like you suggest.
>
> Thanks for you detailed answer, this was instructive.
> I will come back with more information if TLP flags are set.
> FYI the PCIe board I am currently working with is the one from Brainchip:
> Here is the driver:
> https://github.com/Brainchip-Inc/akida_dw_edma
I've glanced at the driver a bit:
1. I've noticed nothing criminal in the way the BARs are mapped. It's
done as it's normally done. pcim_iomap_regions() is supposed to map
with no additional optimizations. So caching seems irrelevant
in this case.
2. The probe() method performs some device iATU config:
akida_1500_setup_iatu() and akida_1000_setup_iatu(). I would have a
closer look at the way the inbound MW setup is done.
3. akida_1000_iatu_conf_table contains comments about the APB bus. If
it's an internal device bus and both the LPDDR and the eDMA are accessible
over the same bus, then the re-ordering may happen there. If APB means
the well-known Advanced Peripheral Bus, then it's quite a slow bus
with respect to the system interconnect and PCIe buses. If the eDMA regs
and LL-memory buses are different, then the last write to the LL memory
might indeed still be pending while the doorbell update arrives.
Sending a dummy read to the LL memory stalls the program execution
until a response arrives (PCIe MRd TLPs are non-posted - "send and wait
for response"), which happens only after the last write to the
LL memory finishes. That's probably why your fix with the dummy read
works and why the delay you noticed is quite significant (4 us).
Though it looks quite strange to put LPDDR on such a slow bus.
4. I would also have a closer look at the way the outbound MW is
configured in your PCIe host controller (whether it enables some
optimizations like Relaxed ordering and ID-based ordering).
In any case I would get in touch with the FPGA designers to check whether
any of my suppositions are correct (especially regarding 3.).
-Serge(y)
>
> Köry
On Wed, 21 Jun 2023 18:56:49 +0300
Serge Semin <[email protected]> wrote:
> > Thanks for you detailed answer, this was instructive.
> > I will come back with more information if TLP flags are set.
> > FYI the PCIe board I am currently working with is the one from Brainchip:
> > Here is the driver:
> > https://github.com/Brainchip-Inc/akida_dw_edma
>
> I've glanced at the driver a bit:
>
> 1. Nothing criminal I've noticed in the way the BARs are mapped. It's
> done as it's normally done. pcim_iomap_regions() is supposed to map
> with no additional optimization. So the caching seems irrelevant
> in this case.
>
> 2. The probe() method performs some device iATU config:
> akida_1500_setup_iatu() and akida_1000_setup_iatu(). I would have a
> closer look at the way the inbound MWs setup is done.
>
> 3. akida_1000_iatu_conf_table contains comments about the APB bus. If
> it's an internal device bus and both LPDDR and eDMA are accessible
> over the same bus, then the re-ordering may happen there. If APB means
> the well known Advanced Peripheral Bus, then it's a quite slow bus
> with respect to the system interconnect and PCIe buses. If eDMA regs
> and LL-memory buses are different then the last write to the LL-memory
> might be indeed still pending while the doorbells update arrives.
> Sending a dummy read to the LL-memory stalls the program execution
> until a response arrive (PCIe MRd TLPs are non-posted - "send and wait
> for response") which happens only after the last write to the
> LL-memory finishes. That's probably why your fix with the dummy-read
> works and why the delay you noticed is quite significant (4us).
> Though it looks quite strange to put LPDDR on such slow bus.
>
> 4. I would have also had a closer look at the way the outbound MW is
> configured in your PCIe host controller (whether it enables some
> optimizations like Relaxed ordering and ID-based ordering).
>
> In anyway I would have got in touch with the FPGA designers whether
> any of my suppositions correct (especially regarding 3.).
Alright, thanks for your instructive review!
From the HDMA driver's point of view we cannot know whether the eDMA regs and the
LL memory will be on the same bus in any future implementation. Of course it
is the hardware designers who should be careful to provide a fast bus and
memory for the LL, but wouldn't it be more cautious to have this read?
Just a small thought!
Köry
On Thu, Jun 22, 2023 at 05:12:03PM +0200, Köry Maincent wrote:
> On Wed, 21 Jun 2023 18:56:49 +0300
> Serge Semin <[email protected]> wrote:
>
> > > Thanks for you detailed answer, this was instructive.
> > > I will come back with more information if TLP flags are set.
> > > FYI the PCIe board I am currently working with is the one from Brainchip:
> > > Here is the driver:
> > > https://github.com/Brainchip-Inc/akida_dw_edma
> >
> > I've glanced at the driver a bit:
> >
> > 1. Nothing criminal I've noticed in the way the BARs are mapped. It's
> > done as it's normally done. pcim_iomap_regions() is supposed to map
> > with no additional optimization. So the caching seems irrelevant
> > in this case.
> >
> > 2. The probe() method performs some device iATU config:
> > akida_1500_setup_iatu() and akida_1000_setup_iatu(). I would have a
> > closer look at the way the inbound MWs setup is done.
> >
> > 3. akida_1000_iatu_conf_table contains comments about the APB bus. If
> > it's an internal device bus and both LPDDR and eDMA are accessible
> > over the same bus, then the re-ordering may happen there. If APB means
> > the well known Advanced Peripheral Bus, then it's a quite slow bus
> > with respect to the system interconnect and PCIe buses. If eDMA regs
> > and LL-memory buses are different then the last write to the LL-memory
> > might be indeed still pending while the doorbells update arrives.
> > Sending a dummy read to the LL-memory stalls the program execution
> > until a response arrive (PCIe MRd TLPs are non-posted - "send and wait
> > for response") which happens only after the last write to the
> > LL-memory finishes. That's probably why your fix with the dummy-read
> > works and why the delay you noticed is quite significant (4us).
> > Though it looks quite strange to put LPDDR on such slow bus.
> >
> > 4. I would have also had a closer look at the way the outbound MW is
> > configured in your PCIe host controller (whether it enables some
> > optimizations like Relaxed ordering and ID-based ordering).
> >
> > In anyway I would have got in touch with the FPGA designers whether
> > any of my suppositions correct (especially regarding 3.).
>
> Alright, thanks for your instructive review!
>
> In the HDMA driver point of view we can not know if the eDMA regs and the
> LL-memory will be in same bus in whatever future implementation. Of course it
> is the hardware designers who should be careful about having a fast bus and
> memory for the LL, but wouldn't it be more cautious to have this read?
> Just a small thought!
If we get assured that hardware with such a problem exists (if you get
confirmation of supposition 3. above) then we'll need to
activate your trick for that hardware only. Adding dummy reads for all
the remote eDMA setups doesn't look correct since it adds an additional
delay to the execution path, especially seeing that nobody has noticed
and reported such a problem so far (for instance Gustavo didn't see the
problem on his device, otherwise he would have fixed it).
So if assumption 3. is correct then I'd suggest the following
implementation: define a new dw_edma_chip_flags flag (e.g.
DW_EDMA_SLOW_MEM), have it specified via the dw_edma_chip.flags field
in the Akida device probe() method and activate your trick only if
that flag is set.
-Serge(y)
>
> Köry
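A rough sketch of what that suggestion could look like; the flag value and the exact placement are assumptions (and the thread later settles on keying off DW_EDMA_CHIP_LOCAL instead):

        /* include/linux/dma/edma.h - hypothetical additional chip flag */
        enum dw_edma_chip_flags {
                DW_EDMA_CHIP_LOCAL      = BIT(0),
                DW_EDMA_SLOW_MEM        = BIT(1),  /* LL writes may lag behind CSR writes */
        };

        /* Akida probe(): opt in to the workaround for this device only */
        chip->flags |= DW_EDMA_SLOW_MEM;

        /* dw-hdma-v0-core.c, dw_hdma_v0_core_start(), before the doorbell */
        if (chunk->chan->dw->chip->flags & DW_EDMA_SLOW_MEM)
                readl(chunk->ll_region.vaddr.io);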
Hello Serge,
I am back with a hardware design answer:
> "Even though the PCIe itself respects the transactions ordering, the
> AXI bus does not have an end-to-end completion acknowledgement (it
> terminates at the PCIe EP boundary with bus), and does not guaranteed
> ordering if accessing different destinations on the Bus. So, an access to LL
> could be declared complete even though the transactions is still being
> pipelined in the AXI Bus. (a dozen or so clocks, I can give an accurate
> number if needed)
>
> The access to DMA registers is done through BAR0 “rolling”
> so the transaction does not actually go out on the AXI bus and
> looped-back to PCIe DMA, rather it stays inside the PCIe EP.
>
> For the above reasons, hypothetically, there’s a chance that even if the DMA
> LL is accessed before the DM DB from PCIe RC side, the DB could be updated
> before the LL in local memory."
On Thu, 22 Jun 2023 19:22:20 +0300
Serge Semin <[email protected]> wrote:
> If we get assured that hardware with such problem exists (if you'll get
> confirmation about the supposition 3. above) then we'll need to
> activate your trick for that hardware only. Adding dummy reads for all
> the remote eDMA setups doesn't look correct since it adds additional
> delay to the execution path and especially seeing nobody has noticed
> and reported such problem so far (for instance Gustavo didn't see the
> problem on his device otherwise he would have fixed it).
>
> So if assumption 3. is correct then I'd suggest the next
> implementation: add a new dw_edma_chip_flags flag defined (a.k.a
> DW_EDMA_SLOW_MEM), have it specified via the dw_edma_chip.flags field
> in the Akida device probe() method and activate your trick only if
> that flag is set.
The flag you suggested is about slow memory writes, but as said above the issue
comes from the AXI bus and not the memory. I am wondering why you don't see
this issue. If I understand correctly it should be present on all IPs, as the
DMA registers are internal to the IP and the LL memory is external, reached
through the AXI bus. Did you stress your IP? On my side it appears with lots of
operations using several (at least 3) threads through 2 DMA channels.
Köry
Hi Köry
On Tue, Sep 12, 2023 at 10:52:10AM +0200, Köry Maincent wrote:
> Hello Serge,
>
> I am back with an hardware design answer:
> > "Even though the PCIe itself respects the transactions ordering, the
> > AXI bus does not have an end-to-end completion acknowledgement (it
> > terminates at the PCIe EP boundary with bus), and does not guaranteed
> > ordering if accessing different destinations on the Bus. So, an access to LL
> > could be declared complete even though the transactions is still being
> > pipelined in the AXI Bus. (a dozen or so clocks, I can give an accurate
> > number if needed)
> >
> > The access to DMA registers is done through BAR0 “rolling”
> > so the transaction does not actually go out on the AXI bus and
> > looped-back to PCIe DMA, rather it stays inside the PCIe EP.
> >
> > For the above reasons, hypothetically, there’s a chance that even if the DMA
> > LL is accessed before the DM DB from PCIe RC side, the DB could be updated
> > before the LL in local memory."
Thanks for the detailed explanation. It doesn't firmly point to
the root cause of the problem, but it mainly confirms a possible race
condition inside the remote PCIe device itself. That's what I meant in
my suggestion 3.
>
> On Thu, 22 Jun 2023 19:22:20 +0300
> Serge Semin <[email protected]> wrote:
>
> > If we get assured that hardware with such problem exists (if you'll get
> > confirmation about the supposition 3. above) then we'll need to
> > activate your trick for that hardware only. Adding dummy reads for all
> > the remote eDMA setups doesn't look correct since it adds additional
> > delay to the execution path and especially seeing nobody has noticed
> > and reported such problem so far (for instance Gustavo didn't see the
> > problem on his device otherwise he would have fixed it).
> >
> > So if assumption 3. is correct then I'd suggest the next
> > implementation: add a new dw_edma_chip_flags flag defined (a.k.a
> > DW_EDMA_SLOW_MEM), have it specified via the dw_edma_chip.flags field
> > in the Akida device probe() method and activate your trick only if
> > that flag is set.
>
> The flag you suggested is about slow memory write but as said above the issue
> comes from the AXI bus and not the memory.
The AXI bus is the bus utilized to access the LL memory in your
case. From the CPU perspective it's the same, since the access time
depends on the performance of both parts.
> I am wondering why you don't see
> this issue.
Well, in my case the DW PCIe eDMA controller is _locally_ implemented.
So its CSRs and the LL memory are accessible from the CPU side over a
system interconnect. The LL memory is allocated from the system RAM
(CPU<->_AXI IC_<->AXI<->DDR<->RAM), while the DW PCIe CSRs are just
memory-mapped IO space (CPU<->_AXI IC_<->APB<->AXI<->DBI<->CDM).
So in my case:
1. The APB is too slow to be updated before the Linked-List data.
2. MMIO accessors (writel()/readl()/etc.) are defined in such a way that all
the memory updates (normal memory writes and reads) are supposed to be
completed before any further MMIO accesses.
So the ordering is mainly assured by 2. in the case of the local DW PCIe
eDMA implementation.
Your configuration is different. You have the DW PCIe eDMA controller
implemented remotely. In that case you have both CSRs and Linked-list
memory accessible over a chain like:
  CPU<->_Some IC_<->AXI/Native<->Some PCIe Host<->... PCIe bus ...<-+
                                                                    |
  +-----------------------------------------------------------------+
  |
  +-> DW eDMA
        +-> BARx<->CDM CSRs
        +-> BARy<->AHB/AXI/APB/etc<->Some SRAM
                         ^
                         |
            this bus ----+
AFAICS a race condition happens due to this bus (marked above) being too
slow. So if the LL and CSR IO writes are performed right one after
another with no additional delays or syncs in between them, then
indeed the later one can finish earlier than the former one.
> If I understand well it should be present on all IP as the DMA
> register is internal to the IP and the LL memory is external through AXI bus.
> Did you stress your IP? On my side it appears with lots of operation using
> several (at least 3) thread through 2 DMA channels.
I didn't stress it with such a test. But AFAICS from normal system
implementations, the problem isn't relevant for locally accessible
DW PCIe eDMA controllers, otherwise at the very least it would have
popped up in many other places in the kernel.
What I meant in my previous message was that it was strange Gustavo
(the original driver developer) didn't spot the problem you were
referring to. He was the only one with remote DW eDMA hardware
at hand to perform such tests. Anyway, seeing that we've got to some
understanding of the problem, and since based on the DW PCIe RP/EP
internals the CSRs and application memory are indeed normally accessed
over different buses, let's fix the problem as you suggest with
just using the DW_EDMA_CHIP_LOCAL flag. But please:
1. Fix it for both HDMA and EDMA controllers.
2. Create functions like dw_edma_v0_sync_ll_data() and
dw_hdma_v0_sync_ll_data() between the dw_Xdma_v0_core_write_chunk()
and dw_Xdma_v0_core_start() methods, which would perform the
dummy-read from the passed LL-chunk in order to sync the remote memory
writes.
3. Based on all our discussions add a saner comment to these methods
about why the dummy-read is needed for the remote DW eDMA setups.
-Serge(y)
>
> Köry
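For reference, a minimal sketch of what item 2. could look like on the HDMA side (the eDMA variant would mirror it); this is only an illustration of the request above, not the final patch:

        /* drivers/dma/dw-edma/dw-hdma-v0-core.c - sketch of the suggested helper */
        static void dw_hdma_v0_sync_ll_data(struct dw_edma_chunk *chunk)
        {
                /*
                 * In a remote setup the LL memory and the CSRs are reached over
                 * different buses inside the endpoint, so the LL writes may still
                 * be in flight when the doorbell CSR write lands. A read of the
                 * LL region is non-posted and therefore completes only after the
                 * preceding posted LL writes have finished.
                 */
                if (!(chunk->chan->dw->chip->flags & DW_EDMA_CHIP_LOCAL))
                        readl(chunk->ll_region.vaddr.io);
        }

It would be called from dw_hdma_v0_core_start() right before the doorbell SET_CH_32(), replacing the open-coded readl() from the patch at the top of this thread, with an equivalent dw_edma_v0_sync_ll_data() on the eDMA side.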