2023-02-21 16:54:19

by Keith Busch

Subject: [PATCH] dmapool: push new blocks in ascending order

From: Keith Busch <[email protected]>

Some users of the dmapool need their allocations to happen in ascending
order. The recent optimizations pushed the blocks in reverse order, so
restore the previous behavior by linking the next available block from
low-to-high.

Fixes: ced6d06a81fb69 ("dmapool: link blocks across pages")
Reported-by: Bryan O'Donoghue <[email protected]>
Signed-off-by: Keith Busch <[email protected]>
---
mm/dmapool.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index 1920890ff8d3d..a151a21e571b7 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -300,7 +300,7 @@ EXPORT_SYMBOL(dma_pool_create);
static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
{
unsigned int next_boundary = pool->boundary, offset = 0;
- struct dma_block *block;
+ struct dma_block *block, *first = NULL, *last = NULL;

pool_init_page(pool, page);
while (offset + pool->size <= pool->allocation) {
@@ -311,11 +311,22 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
}

block = page->vaddr + offset;
- pool_block_push(pool, block, page->dma + offset);
+ block->dma = page->dma + offset;
+ block->next_block = NULL;
+
+ if (last)
+ last->next_block = block;
+ else
+ first = block;
+ last = block;
+
offset += pool->size;
pool->nr_blocks++;
}

+ last->next_block = pool->next_block;
+ pool->next_block = first;
+
list_add(&page->page_list, &pool->page_list);
pool->nr_pages++;
}
--
2.30.2
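The list manipulation in the patch can be modeled in plain userspace C. This is a sketch with hypothetical names (`struct blk`, `init_push`, `init_ascending`); the real code in mm/dmapool.c uses `struct dma_block` and `pool_block_push()`. The old loop pushed each new block onto the head of the free list, leaving the highest block at the front; the fix links blocks first-to-last while walking the page, then splices the ascending chain in front of the pool's existing list.

```c
#include <assert.h>
#include <stddef.h>

struct blk { struct blk *next; };

/* Old behavior: pushing each new block onto the list head (as
 * pool_block_push() did per block) leaves the last-initialized,
 * highest block at the front, so allocations pop out high-to-low. */
struct blk *init_push(struct blk *blks, size_t n, struct blk *head)
{
	for (size_t i = 0; i < n; i++) {
		blks[i].next = head;
		head = &blks[i];
	}
	return head;
}

/* Fixed behavior: chain blocks first-to-last, then splice the
 * ascending chain in front of the existing free list, so the lowest
 * block is allocated first. Like the patch, this assumes the page
 * holds at least one block (n >= 1), so 'last' is non-NULL. */
struct blk *init_ascending(struct blk *blks, size_t n, struct blk *head)
{
	struct blk *first = NULL, *last = NULL;

	for (size_t i = 0; i < n; i++) {
		blks[i].next = NULL;
		if (last)
			last->next = &blks[i];
		else
			first = &blks[i];
		last = &blks[i];
	}
	last->next = head;	/* splice in front of the old list */
	return first;
}
```

Since allocation pops from the head of the list, `init_ascending()` hands blocks back in the same order they sit in the page, which is the low-to-high property the commit message restores.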



2023-02-21 17:20:18

by Bryan O'Donoghue

Subject: Re: [PATCH] dmapool: push new blocks in ascending order

On 21/02/2023 16:54, Keith Busch wrote:
> From: Keith Busch <[email protected]>
>
> Some users of the dmapool need their allocations to happen in ascending
> order. The recent optimizations pushed the blocks in reverse order, so
> restore the previous behavior by linking the next available block from
> low-to-high.
>
> Fixes: ced6d06a81fb69 ("dmapool: link blocks across pages")
> Reported-by: Bryan O'Donoghue <[email protected]>
> Signed-off-by: Keith Busch <[email protected]>

Tested-by: Bryan O'Donoghue <[email protected]>


2023-02-21 18:02:42

by Christoph Hellwig

Subject: Re: [PATCH] dmapool: push new blocks in ascending order

On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
> From: Keith Busch <[email protected]>
>
> Some users of the dmapool need their allocations to happen in ascending
> order. The recent optimizations pushed the blocks in reverse order, so
> restore the previous behavior by linking the next available block from
> low-to-high.

Who are those users?

Also should we document this behavior somewhere so that it isn't
accidentally changed again some time in the future?

2023-02-21 18:08:04

by Keith Busch

Subject: Re: [PATCH] dmapool: push new blocks in ascending order

On Tue, Feb 21, 2023 at 10:02:34AM -0800, Christoph Hellwig wrote:
> On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
> > From: Keith Busch <[email protected]>
> >
> > Some users of the dmapool need their allocations to happen in ascending
> > order. The recent optimizations pushed the blocks in reverse order, so
> > restore the previous behavior by linking the next available block from
> > low-to-high.
>
> Who are those users?
>
> Also should we document this behavior somewhere so that it isn't
> accidentally changed again some time in the future?

usb/chipidea/udc.c's qh_pool, called "ci_hw_qh". My initial thought was that dmapool
isn't the right API if you need a specific order when allocating from it, but I
can't readily test any changes to that driver. Restoring the previous behavior
is easy enough.

2023-02-23 20:41:49

by Andrew Morton

Subject: Re: [PATCH] dmapool: push new blocks in ascending order

On Tue, 21 Feb 2023 11:07:32 -0700 Keith Busch <[email protected]> wrote:

> On Tue, Feb 21, 2023 at 10:02:34AM -0800, Christoph Hellwig wrote:
> > On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
> > > From: Keith Busch <[email protected]>
> > >
> > > Some users of the dmapool need their allocations to happen in ascending
> > > order. The recent optimizations pushed the blocks in reverse order, so
> > > restore the previous behavior by linking the next available block from
> > > low-to-high.
> >
> > Who are those users?
> >
> > Also should we document this behavior somewhere so that it isn't
> > accidentally changed again some time in the future?
>
> usb/chipidea/udc.c qh_pool called "ci_hw_qh".

It would be helpful to know why these users need this side-effect. Did
the drivers break? Or just get slower?

Are those drivers misbehaving by assuming this behavior? Should we
require that they be altered instead of forever constraining the dmapool
implementation in this fashion?

2023-02-24 18:24:30

by Keith Busch

Subject: Re: [PATCH] dmapool: push new blocks in ascending order

On Thu, Feb 23, 2023 at 12:41:37PM -0800, Andrew Morton wrote:
> On Tue, 21 Feb 2023 11:07:32 -0700 Keith Busch <[email protected]> wrote:
>
> > On Tue, Feb 21, 2023 at 10:02:34AM -0800, Christoph Hellwig wrote:
> > > On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
> > > > From: Keith Busch <[email protected]>
> > > >
> > > > Some users of the dmapool need their allocations to happen in ascending
> > > > order. The recent optimizations pushed the blocks in reverse order, so
> > > > restore the previous behavior by linking the next available block from
> > > > low-to-high.
> > >
> > > Who are those users?
> > >
> > > Also should we document this behavior somewhere so that it isn't
> > > accidentally changed again some time in the future?
> >
> > usb/chipidea/udc.c qh_pool called "ci_hw_qh".
>
> It would be helpful to know why these users need this side-effect. Did
> the drivers break? Or just get slower?

The affected driver was reported to be unusable without this behavior.

> Are those drivers misbehaving by assuming this behavior? Should we

I do think they're using the wrong API. You shouldn't use the dmapool if
your blocks need to be arranged in a contiguous address order. They should just
directly use dma_alloc_coherent() instead.

> require that they be altered instead of forever constraining the dmapool
> implementation in this fashion?

This change isn't really constraining dmapool where it matters. It's just an
unexpected one-time initialization thing.

As far as altering those drivers, I'll reach out to someone on that side for
comment (I'm currently not familiar with the affected subsystem).

2023-02-24 22:28:37

by Bryan O'Donoghue

Subject: Re: [PATCH] dmapool: push new blocks in ascending order

On 24/02/2023 18:24, Keith Busch wrote:
> On Thu, Feb 23, 2023 at 12:41:37PM -0800, Andrew Morton wrote:
>> On Tue, 21 Feb 2023 11:07:32 -0700 Keith Busch <[email protected]> wrote:
>>
>>> On Tue, Feb 21, 2023 at 10:02:34AM -0800, Christoph Hellwig wrote:
>>>> On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
>>>>> From: Keith Busch <[email protected]>
>>>>>
>>>>> Some users of the dmapool need their allocations to happen in ascending
>>>>> order. The recent optimizations pushed the blocks in reverse order, so
>>>>> restore the previous behavior by linking the next available block from
>>>>> low-to-high.
>>>>
>>>> Who are those users?
>>>>
>>>> Also should we document this behavior somewhere so that it isn't
>>>> accidentally changed again some time in the future?
>>>
>>> usb/chipidea/udc.c qh_pool called "ci_hw_qh".
>>
>> It would be helpful to know why these users need this side-effect. Did
>> the drivers break? Or just get slower?
>
> The affected driver was reported to be unusable without this behavior.
>
>> Are those drivers misbehaving by assuming this behavior? Should we
>
> I do think they're using the wrong API. You shouldn't use the dmapool if
> your blocks need to be arranged in a contiguous address order. They should just
> directly use dma_alloc_coherent() instead.
>
>> require that they be altered instead of forever constraining the dmapool
>> implementation in this fashion?
>
> This change isn't really constraining dmapool where it matters. It's just an
> unexpected one-time initialization thing.
>
> As far as altering those drivers, I'll reach out to someone on that side for
> comment (I'm currently not familiar with the affected subsystem).

We can always change this driver; I'm fine to do that in parallel, or instead.

The symptom we have is a silent failure absent this change, so I just
wonder: are we really the _only_ code path that would be affected absent
the change in this patch?

---
bod



2023-02-26 04:42:48

by Andrew Morton

Subject: Re: [PATCH] dmapool: push new blocks in ascending order

On Tue, 21 Feb 2023 08:54:00 -0800 Keith Busch <[email protected]> wrote:

> Some users of the dmapool need their allocations to happen in ascending
> order. The recent optimizations pushed the blocks in reverse order, so
> restore the previous behavior by linking the next available block from
> low-to-high.

As I understand it, this fixes the only known issues with patch series
"dmapool enhancements", v4. So we're good for a merge before 6.3-rc1,
yes?

2023-02-27 17:20:28

by Keith Busch

Subject: Re: [PATCH] dmapool: push new blocks in ascending order

On Sat, Feb 25, 2023 at 08:42:39PM -0800, Andrew Morton wrote:
> On Tue, 21 Feb 2023 08:54:00 -0800 Keith Busch <[email protected]> wrote:
>
> > Some users of the dmapool need their allocations to happen in ascending
> > order. The recent optimizations pushed the blocks in reverse order, so
> > restore the previous behavior by linking the next available block from
> > low-to-high.
>
> As I understand it, this fixes the only known issues with patch series
> "dmapool enhancements", v4. So we're good for a merge before 6.3-rc1,
> yes?

I was going to say "yes", but Guenter is reporting a new error with the
original series. I'm working on that right now.

2023-02-28 01:25:53

by Keith Busch

Subject: Re: [PATCH] dmapool: push new blocks in ascending order

On Sat, Feb 25, 2023 at 08:42:39PM -0800, Andrew Morton wrote:
> On Tue, 21 Feb 2023 08:54:00 -0800 Keith Busch <[email protected]> wrote:
>
> > Some users of the dmapool need their allocations to happen in ascending
> > order. The recent optimizations pushed the blocks in reverse order, so
> > restore the previous behavior by linking the next available block from
> > low-to-high.
>
> As I understand it, this fixes the only known issues with patch series
> "dmapool enhancements", v4. So we're good for a merge before 6.3-rc1,
> yes?

Okay, I think this is good to merge now. My local testing also shows this
fixes the megaraid issue that Guenter reported on the other thread, so I
believe this does indeed fix the only reported issues with the dmapool
enhancements.

2023-02-28 02:14:33

by Guenter Roeck

Subject: Re: [PATCH] dmapool: push new blocks in ascending order

On Tue, Feb 21, 2023 at 08:54:00AM -0800, Keith Busch wrote:
> From: Keith Busch <[email protected]>
>
> Some users of the dmapool need their allocations to happen in ascending
> order. The recent optimizations pushed the blocks in reverse order, so
> restore the previous behavior by linking the next available block from
> low-to-high.
>
> Fixes: ced6d06a81fb69 ("dmapool: link blocks across pages")
> Reported-by: Bryan O'Donoghue <[email protected]>
> Signed-off-by: Keith Busch <[email protected]>

This patch fixes the problem I had observed when trying to boot from
the megasas SCSI controller on powernv.

Tested-by: Guenter Roeck <[email protected]>

Thanks,
Guenter

> ---
> mm/dmapool.c | 15 +++++++++++++--
> 1 file changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/mm/dmapool.c b/mm/dmapool.c
> index 1920890ff8d3d..a151a21e571b7 100644
> --- a/mm/dmapool.c
> +++ b/mm/dmapool.c
> @@ -300,7 +300,7 @@ EXPORT_SYMBOL(dma_pool_create);
> static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
> {
> unsigned int next_boundary = pool->boundary, offset = 0;
> - struct dma_block *block;
> + struct dma_block *block, *first = NULL, *last = NULL;
>
> pool_init_page(pool, page);
> while (offset + pool->size <= pool->allocation) {
> @@ -311,11 +311,22 @@ static void pool_initialise_page(struct dma_pool *pool, struct dma_page *page)
> }
>
> block = page->vaddr + offset;
> - pool_block_push(pool, block, page->dma + offset);
> + block->dma = page->dma + offset;
> + block->next_block = NULL;
> +
> + if (last)
> + last->next_block = block;
> + else
> + first = block;
> + last = block;
> +
> offset += pool->size;
> pool->nr_blocks++;
> }
>
> + last->next_block = pool->next_block;
> + pool->next_block = first;
> +
> list_add(&page->page_list, &pool->page_list);
> pool->nr_pages++;
> }
> --
> 2.30.2
>
>