2020-06-08 14:04:05

by Geert Uytterhoeven

Subject: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems

On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
memory pools are much larger than intended (e.g. 2 MiB instead of 128
KiB on a 256 MiB system).

Fix this by correcting the calculation of the number of GiBs of RAM in
the system. Invert the order of the min/max operations, to keep on
calculating in pages until the last step, which aids readability.
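
For concreteness, assuming 4 KiB pages (PAGE_SHIFT == 12) on the 256 MiB
example system: totalram_pages() is 65536, so the old code computed

    max(65536 >> 12, 1UL) * SZ_128K = 16 * 128 KiB = 2 MiB,

shifting the page count by PAGE_SHIFT and thereby counting 16 MiB units of
RAM instead of GiB. The corrected code computes

    65536 / (SZ_1G / SZ_128K) = 65536 / 8192 = 8 pages = 32 KiB,

which the final max_t() then clamps to the intended 128 KiB minimum.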

Fixes: 1d659236fb43c4d2 ("dma-pool: scale the default DMA coherent pool size with memory capacity")
Signed-off-by: Geert Uytterhoeven <[email protected]>
---
v2:
  - Improve readability:
      - Divide by (SZ_1G / SZ_128K) instead of using a shift,
      - Invert min/max order to keep calculating in pages until the last
        step.
---
kernel/dma/pool.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
index 35bb51c31fff370f..8cfa01243ed27b6f 100644
--- a/kernel/dma/pool.c
+++ b/kernel/dma/pool.c
@@ -175,10 +175,9 @@ static int __init dma_atomic_pool_init(void)
 	 * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER-1.
 	 */
 	if (!atomic_pool_size) {
-		atomic_pool_size = max(totalram_pages() >> PAGE_SHIFT, 1UL) *
-				   SZ_128K;
-		atomic_pool_size = min_t(size_t, atomic_pool_size,
-					 1 << (PAGE_SHIFT + MAX_ORDER-1));
+		unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K);
+		pages = min_t(unsigned long, pages, MAX_ORDER_NR_PAGES);
+		atomic_pool_size = max_t(size_t, pages << PAGE_SHIFT, SZ_128K);
 	}
 	INIT_WORK(&atomic_pool_work, atomic_pool_work_fn);

--
2.17.1


2020-06-08 21:08:54

by David Rientjes

Subject: Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems

On Mon, 8 Jun 2020, Geert Uytterhoeven wrote:

> On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
> memory pools are much larger than intended (e.g. 2 MiB instead of 128
> KiB on a 256 MiB system).
>
> Fix this by correcting the calculation of the number of GiBs of RAM in
> the system. Invert the order of the min/max operations, to keep on
> calculating in pages until the last step, which aids readability.
>
> Fixes: 1d659236fb43c4d2 ("dma-pool: scale the default DMA coherent pool size with memory capacity")
> Signed-off-by: Geert Uytterhoeven <[email protected]>

This works as well and is much more readable. Thanks Geert!

Acked-by: David Rientjes <[email protected]>

2020-06-09 13:28:30

by Christoph Hellwig

Subject: Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems

Thanks,

applied to the dma-mapping tree for 5.8.

2020-06-20 20:12:39

by Guenter Roeck

Subject: Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems

On Mon, Jun 08, 2020 at 03:22:17PM +0200, Geert Uytterhoeven wrote:
> On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
> memory pools are much larger than intended (e.g. 2 MiB instead of 128
> KiB on a 256 MiB system).
>
> Fix this by correcting the calculation of the number of GiBs of RAM in
> the system. Invert the order of the min/max operations, to keep on
> calculating in pages until the last step, which aids readability.
>
> Fixes: 1d659236fb43c4d2 ("dma-pool: scale the default DMA coherent pool size with memory capacity")
> Signed-off-by: Geert Uytterhoeven <[email protected]>
> Acked-by: David Rientjes <[email protected]>

This patch results in a boot failure in some of my powerpc boot tests,
specifically those testing boots from mptsas1068 devices. Error message:

mptsas 0000:00:02.0: enabling device (0000 -> 0002)
mptbase: ioc0: Initiating bringup
ioc0: LSISAS1068 A0: Capabilities={Initiator}
mptbase: ioc0: ERROR - Unable to allocate Reply, Request, Chain Buffers!
mptbase: ioc0: ERROR - didn't initialize properly! (-3)
mptsas: probe of 0000:00:02.0 failed with error -3

Configuration is bamboo:44x/bamboo_defconfig plus various added drivers.
Qemu command line is

qemu-system-ppc -kernel vmlinux -M bamboo \
-m 256 -no-reboot -snapshot -device mptsas1068,id=scsi \
-device scsi-hd,bus=scsi.0,drive=d0,wwn=0x5000c50015ea71ac -drive \
file=rootfs.ext2,format=raw,if=none,id=d0 \
--append "panic=-1 slub_debug=FZPUA root=/dev/sda mem=256M console=ttyS0" \
-monitor none -nographic

canyonlands_defconfig with sam460ex machine and otherwise similar command line
fails as well.

Reverting this patch fixes the problem.

Guenter

2020-06-21 08:38:11

by Geert Uytterhoeven

Subject: Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems

Hi Günter,

On Sat, Jun 20, 2020 at 10:09 PM Guenter Roeck <[email protected]> wrote:
> On Mon, Jun 08, 2020 at 03:22:17PM +0200, Geert Uytterhoeven wrote:
> > On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
> > memory pools are much larger than intended (e.g. 2 MiB instead of 128
> > KiB on a 256 MiB system).
> >
> > Fix this by correcting the calculation of the number of GiBs of RAM in
> > the system. Invert the order of the min/max operations, to keep on
> > calculating in pages until the last step, which aids readability.
> >
> > Fixes: 1d659236fb43c4d2 ("dma-pool: scale the default DMA coherent pool size with memory capacity")
> > Signed-off-by: Geert Uytterhoeven <[email protected]>
> > Acked-by: David Rientjes <[email protected]>
>
> This patch results in a boot failure in some of my powerpc boot tests,
> specifically those testing boots from mptsas1068 devices. Error message:
>
> mptsas 0000:00:02.0: enabling device (0000 -> 0002)
> mptbase: ioc0: Initiating bringup
> ioc0: LSISAS1068 A0: Capabilities={Initiator}
> mptbase: ioc0: ERROR - Unable to allocate Reply, Request, Chain Buffers!
> mptbase: ioc0: ERROR - didn't initialize properly! (-3)
> mptsas: probe of 0000:00:02.0 failed with error -3
>
> Configuration is bamboo:44x/bamboo_defconfig plus various added drivers.
> Qemu command line is
>
> qemu-system-ppc -kernel vmlinux -M bamboo \
> -m 256 -no-reboot -snapshot -device mptsas1068,id=scsi \
> -device scsi-hd,bus=scsi.0,drive=d0,wwn=0x5000c50015ea71ac -drive \
> file=rootfs.ext2,format=raw,if=none,id=d0 \
> --append "panic=-1 slub_debug=FZPUA root=/dev/sda mem=256M console=ttyS0" \
> -monitor none -nographic
>
> canyonlands_defconfig with sam460ex machine and otherwise similar command line
> fails as well.
>
> Reverting this patch fixes the problem.

This looks like the minimum value of 128 KiB is not sufficient, and the
bug is in the intention of 1d659236fb43c4d2 ("dma-pool: scale the
default DMA coherent pool size with memory capacity")?
Before, there was a single pool of (fixed) 256 KiB size, now there are
up to three coherent pools (DMA, DMA32, and kernel), albeit of smaller
size (128 KiB each).

Can you print the requested size in drivers/message/fusion/mptbase.c:
PrimeIocFifos()?
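
E.g. something like this (an untested sketch; total_size is the running
total already accumulated in that function):

	printk(MYIOC_s_INFO_FMT "PrimeIocFifos: requesting %d bytes\n",
	    ioc->name, total_size);
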
Does replacing all SZ_128K by SZ_256K in my patch help?
That would waste^H^H^H^H^Hallocate 256 KiB or 512 KiB more, though.

Thanks!

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [email protected]

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds

2020-06-21 13:14:50

by Guenter Roeck

Subject: Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems

On 6/21/20 1:35 AM, Geert Uytterhoeven wrote:
> Hi Günter,
>
> On Sat, Jun 20, 2020 at 10:09 PM Guenter Roeck <[email protected]> wrote:
>> On Mon, Jun 08, 2020 at 03:22:17PM +0200, Geert Uytterhoeven wrote:
>>> On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA
>>> memory pools are much larger than intended (e.g. 2 MiB instead of 128
>>> KiB on a 256 MiB system).
>>>
>>> Fix this by correcting the calculation of the number of GiBs of RAM in
>>> the system. Invert the order of the min/max operations, to keep on
>>> calculating in pages until the last step, which aids readability.
>>>
>>> Fixes: 1d659236fb43c4d2 ("dma-pool: scale the default DMA coherent pool size with memory capacity")
>>> Signed-off-by: Geert Uytterhoeven <[email protected]>
>>> Acked-by: David Rientjes <[email protected]>
>>
>> This patch results in a boot failure in some of my powerpc boot tests,
>> specifically those testing boots from mptsas1068 devices. Error message:
>>
>> mptsas 0000:00:02.0: enabling device (0000 -> 0002)
>> mptbase: ioc0: Initiating bringup
>> ioc0: LSISAS1068 A0: Capabilities={Initiator}
>> mptbase: ioc0: ERROR - Unable to allocate Reply, Request, Chain Buffers!
>> mptbase: ioc0: ERROR - didn't initialize properly! (-3)
>> mptsas: probe of 0000:00:02.0 failed with error -3
>>
>> Configuration is bamboo:44x/bamboo_defconfig plus various added drivers.
>> Qemu command line is
>>
>> qemu-system-ppc -kernel vmlinux -M bamboo \
>> -m 256 -no-reboot -snapshot -device mptsas1068,id=scsi \
>> -device scsi-hd,bus=scsi.0,drive=d0,wwn=0x5000c50015ea71ac -drive \
>> file=rootfs.ext2,format=raw,if=none,id=d0 \
>> --append "panic=-1 slub_debug=FZPUA root=/dev/sda mem=256M console=ttyS0" \
>> -monitor none -nographic
>>
>> canyonlands_defconfig with sam460ex machine and otherwise similar command line
>> fails as well.
>>
>> Reverting this patch fixes the problem.
>
> This looks like the minimum value of 128 KiB is not sufficient, and the
> bug is in the intention of 1d659236fb43c4d2 ("dma-pool: scale the
> default DMA coherent pool size with memory capacity")?
> Before, there was a single pool of (fixed) 256 KiB size, now there are
> up to three coherent pools (DMA, DMA32, and kernel), albeit of smaller
> size (128 KiB each).
>
> Can you print the requested size in drivers/message/fusion/mptbase.c:
> PrimeIocFifos()?

172928 bytes (i.e. more than the 128 KiB = 131072-byte pool, but less than
256 KiB)

> Does replacing all SZ_128K by SZ_256K in my patch help?

Yes, it does.

Guenter

2020-06-21 20:23:20

by David Rientjes

Subject: Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems

On Sun, 21 Jun 2020, Guenter Roeck wrote:

> >> This patch results in a boot failure in some of my powerpc boot tests,
> >> specifically those testing boots from mptsas1068 devices. Error message:
> >>
> >> mptsas 0000:00:02.0: enabling device (0000 -> 0002)
> >> mptbase: ioc0: Initiating bringup
> >> ioc0: LSISAS1068 A0: Capabilities={Initiator}
> >> mptbase: ioc0: ERROR - Unable to allocate Reply, Request, Chain Buffers!
> >> mptbase: ioc0: ERROR - didn't initialize properly! (-3)
> >> mptsas: probe of 0000:00:02.0 failed with error -3
> >>
> >> Configuration is bamboo:44x/bamboo_defconfig plus various added drivers.
> >> Qemu command line is
> >>
> >> qemu-system-ppc -kernel vmlinux -M bamboo \
> >> -m 256 -no-reboot -snapshot -device mptsas1068,id=scsi \
> >> -device scsi-hd,bus=scsi.0,drive=d0,wwn=0x5000c50015ea71ac -drive \
> >> file=rootfs.ext2,format=raw,if=none,id=d0 \
> >> --append "panic=-1 slub_debug=FZPUA root=/dev/sda mem=256M console=ttyS0" \
> >> -monitor none -nographic
> >>
> >> canyonlands_defconfig with sam460ex machine and otherwise similar command line
> >> fails as well.
> >>
> >> Reverting this patch fixes the problem.
> >
> > This looks like the minimum value of 128 KiB is not sufficient, and the
> > bug is in the intention of 1d659236fb43c4d2 ("dma-pool: scale the
> > default DMA coherent pool size with memory capacity")?
> > Before, there was a single pool of (fixed) 256 KiB size, now there are
> > up to three coherent pools (DMA, DMA32, and kernel), albeit of smaller
> > size (128 KiB each).
> >
> > Can you print the requested size in drivers/message/fusion/mptbase.c:
> > PrimeIocFifos()?
>
> 172928 bytes
>
> > Does replacing all SZ_128K by SZ_256K in my patch help?
>
> Yes, it does.
>

The new coherent pools should auto-expand when they are close to being
depleted, but there's no guarantee that this can happen fast enough.
Switching the min size back to the previous min size (256KB) seems like the
best option, and it matches what
Documentation/admin-guide/kernel-parameters.txt still says.
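
(In the meantime, affected systems can also force a larger pool on the
kernel command line, e.g. booting with

    coherent_pool=256K

which is the override that document describes, and which the
"if (!atomic_pool_size)" check in dma_atomic_pool_init() honors.)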

I'll also send a patch to point users in the right direction when this
happens.

2020-06-22 16:12:14

by Robin Murphy

Subject: Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems

On 2020-06-21 21:20, David Rientjes wrote:
> On Sun, 21 Jun 2020, Guenter Roeck wrote:
>
>>>> This patch results in a boot failure in some of my powerpc boot tests,
>>>> specifically those testing boots from mptsas1068 devices. Error message:
>>>>
>>>> mptsas 0000:00:02.0: enabling device (0000 -> 0002)
>>>> mptbase: ioc0: Initiating bringup
>>>> ioc0: LSISAS1068 A0: Capabilities={Initiator}
>>>> mptbase: ioc0: ERROR - Unable to allocate Reply, Request, Chain Buffers!
>>>> mptbase: ioc0: ERROR - didn't initialize properly! (-3)
>>>> mptsas: probe of 0000:00:02.0 failed with error -3
>>>>
>>>> Configuration is bamboo:44x/bamboo_defconfig plus various added drivers.
>>>> Qemu command line is
>>>>
>>>> qemu-system-ppc -kernel vmlinux -M bamboo \
>>>> -m 256 -no-reboot -snapshot -device mptsas1068,id=scsi \
>>>> -device scsi-hd,bus=scsi.0,drive=d0,wwn=0x5000c50015ea71ac -drive \
>>>> file=rootfs.ext2,format=raw,if=none,id=d0 \
>>>> --append "panic=-1 slub_debug=FZPUA root=/dev/sda mem=256M console=ttyS0" \
>>>> -monitor none -nographic
>>>>
>>>> canyonlands_defconfig with sam460ex machine and otherwise similar command line
>>>> fails as well.
>>>>
>>>> Reverting this patch fixes the problem.
>>>
>>> This looks like the minimum value of 128 KiB is not sufficient, and the
>>> bug is in the intention of 1d659236fb43c4d2 ("dma-pool: scale the
>>> default DMA coherent pool size with memory capacity")?
>>> Before, there was a single pool of (fixed) 256 KiB size, now there are
>>> up to three coherent pools (DMA, DMA32, and kernel), albeit of smaller
>>> size (128 KiB each).
>>>
>>> Can you print the requested size in drivers/message/fusion/mptbase.c:
>>> PrimeIocFifos()?
>>
>> 172928 bytes
>>
>>> Does replacing all SZ_128K by SZ_256K in my patch help?
>>
>> Yes, it does.
>>
>
> The new coherent pools should auto expand when they are close to being
> depleted but there's no guarantee that it can be done fast enough.

More to the point, it's never going to help if the pool is empty and one
allocation is simply larger than the entire thing ;)

Another angle, though, is to question why this driver is making such a
large allocation with GFP_ATOMIC in the first place. At a glance it
looks like there's no reason at all other than that it's still using the
legacy pci_alloc_consistent() API, since every path to that appears to
have CAN_SLEEP passed as its flag - modernising that would arguably be
an even better long-term win.
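
(For reference, the wrapper in include/linux/pci-dma-compat.h essentially
boils down to the following, modulo a NULL-hwdev check:

	static inline void *
	pci_alloc_consistent(struct pci_dev *hwdev, size_t size,
			     dma_addr_t *dma_handle)
	{
		return dma_alloc_coherent(&hwdev->dev, size, dma_handle,
					  GFP_ATOMIC);
	}

so every caller pays for GFP_ATOMIC whether it needs it or not.)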

Robin.

> Switching the min size back to the previous min size (256KB) seems like the
> best option, and it matches what
> Documentation/admin-guide/kernel-parameters.txt still says.
>
> I'll also send a patch to point in the right direction when this happens.
>

2020-06-22 17:34:11

by Christoph Hellwig

Subject: Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems

On Mon, Jun 22, 2020 at 05:07:55PM +0100, Robin Murphy wrote:
> Another angle, though, is to question why this driver is making such a
> large allocation with GFP_ATOMIC in the first place. At a glance it looks
> like there's no reason at all other than that it's still using the legacy
> pci_alloc_consistent() API, since every path to that appears to have
> CAN_SLEEP passed as its flag - modernising that would arguably be an even
> better long-term win.

Maybe we can just try that for now? If other problems show up we
can still increase the initial pool size later in this cycle.

I'll try to cook up a patch.

2020-06-24 07:39:24

by Christoph Hellwig

Subject: Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems

Hi Guenter,

can you try the patch below? This just converts the huge allocations
in mptbase to use GFP_KERNEL. Christophe (added to Cc) actually has
a scripted conversion for the rest that he hasn't posted yet, so I'll
aim for the minimal version here.


diff --git a/drivers/message/fusion/mptbase.c b/drivers/message/fusion/mptbase.c
index 68aea22f2b8978..5216487db4fbea 100644
--- a/drivers/message/fusion/mptbase.c
+++ b/drivers/message/fusion/mptbase.c
@@ -1324,13 +1324,13 @@ mpt_host_page_alloc(MPT_ADAPTER *ioc, pIOCInit_t ioc_init)
 		return 0; /* fw doesn't need any host buffers */
 
 	/* spin till we get enough memory */
-	while(host_page_buffer_sz > 0) {
-
-		if((ioc->HostPageBuffer = pci_alloc_consistent(
-		    ioc->pcidev,
-		    host_page_buffer_sz,
-		    &ioc->HostPageBuffer_dma)) != NULL) {
-
+	while (host_page_buffer_sz > 0) {
+		ioc->HostPageBuffer =
+			dma_alloc_coherent(&ioc->pcidev->dev,
+					host_page_buffer_sz,
+					&ioc->HostPageBuffer_dma,
+					GFP_KERNEL);
+		if (ioc->HostPageBuffer) {
 			dinitprintk(ioc, printk(MYIOC_s_DEBUG_FMT
 				"host_page_buffer @ %p, dma @ %x, sz=%d bytes\n",
 				ioc->name, ioc->HostPageBuffer,
@@ -2741,8 +2741,8 @@ mpt_adapter_disable(MPT_ADAPTER *ioc)
 		sz = ioc->alloc_sz;
 		dexitprintk(ioc, printk(MYIOC_s_INFO_FMT "free @ %p, sz=%d bytes\n",
 			ioc->name, ioc->alloc, ioc->alloc_sz));
-		pci_free_consistent(ioc->pcidev, sz,
-				ioc->alloc, ioc->alloc_dma);
+		dma_free_coherent(&ioc->pcidev->dev, sz, ioc->alloc,
+				ioc->alloc_dma);
 		ioc->reply_frames = NULL;
 		ioc->req_frames = NULL;
 		ioc->alloc = NULL;
@@ -2751,8 +2751,8 @@ mpt_adapter_disable(MPT_ADAPTER *ioc)
 
 	if (ioc->sense_buf_pool != NULL) {
 		sz = (ioc->req_depth * MPT_SENSE_BUFFER_ALLOC);
-		pci_free_consistent(ioc->pcidev, sz,
-				ioc->sense_buf_pool, ioc->sense_buf_pool_dma);
+		dma_free_coherent(&ioc->pcidev->dev, sz, ioc->sense_buf_pool,
+				ioc->sense_buf_pool_dma);
 		ioc->sense_buf_pool = NULL;
 		ioc->alloc_total -= sz;
 	}
@@ -2802,7 +2802,7 @@ mpt_adapter_disable(MPT_ADAPTER *ioc)
 			"HostPageBuffer free @ %p, sz=%d bytes\n",
 			ioc->name, ioc->HostPageBuffer,
 			ioc->HostPageBuffer_sz));
-		pci_free_consistent(ioc->pcidev, ioc->HostPageBuffer_sz,
+		dma_free_coherent(&ioc->pcidev->dev, ioc->HostPageBuffer_sz,
 			ioc->HostPageBuffer, ioc->HostPageBuffer_dma);
 		ioc->HostPageBuffer = NULL;
 		ioc->HostPageBuffer_sz = 0;
@@ -4497,7 +4497,8 @@ PrimeIocFifos(MPT_ADAPTER *ioc)
 		ioc->name, sz, sz, num_chain));
 
 	total_size += sz;
-	mem = pci_alloc_consistent(ioc->pcidev, total_size, &alloc_dma);
+	mem = dma_alloc_coherent(&ioc->pcidev->dev, total_size,
+			&alloc_dma, GFP_KERNEL);
 	if (mem == NULL) {
 		printk(MYIOC_s_ERR_FMT "Unable to allocate Reply, Request, Chain Buffers!\n",
 			ioc->name);
@@ -4574,8 +4575,8 @@ PrimeIocFifos(MPT_ADAPTER *ioc)
 	spin_unlock_irqrestore(&ioc->FreeQlock, flags);
 
 	sz = (ioc->req_depth * MPT_SENSE_BUFFER_ALLOC);
-	ioc->sense_buf_pool =
-		pci_alloc_consistent(ioc->pcidev, sz, &ioc->sense_buf_pool_dma);
+	ioc->sense_buf_pool = dma_alloc_coherent(&ioc->pcidev->dev, sz,
+			&ioc->sense_buf_pool_dma, GFP_KERNEL);
 	if (ioc->sense_buf_pool == NULL) {
 		printk(MYIOC_s_ERR_FMT "Unable to allocate Sense Buffers!\n",
 			ioc->name);
@@ -4613,18 +4614,16 @@ PrimeIocFifos(MPT_ADAPTER *ioc)
 
 	if (ioc->alloc != NULL) {
 		sz = ioc->alloc_sz;
-		pci_free_consistent(ioc->pcidev,
-				sz,
-				ioc->alloc, ioc->alloc_dma);
+		dma_free_coherent(&ioc->pcidev->dev, sz, ioc->alloc,
+				ioc->alloc_dma);
 		ioc->reply_frames = NULL;
 		ioc->req_frames = NULL;
 		ioc->alloc_total -= sz;
 	}
 	if (ioc->sense_buf_pool != NULL) {
 		sz = (ioc->req_depth * MPT_SENSE_BUFFER_ALLOC);
-		pci_free_consistent(ioc->pcidev,
-				sz,
-				ioc->sense_buf_pool, ioc->sense_buf_pool_dma);
+		dma_free_coherent(&ioc->pcidev->dev, sz, ioc->sense_buf_pool,
+				ioc->sense_buf_pool_dma);
 		ioc->sense_buf_pool = NULL;
 	}
 
2020-06-24 16:21:41

by Guenter Roeck

Subject: Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems

On Wed, Jun 24, 2020 at 09:38:15AM +0200, Christoph Hellwig wrote:
> Hi Guenter,
>
> can you try the patch below? This just converts the huge allocations
> in mptbase to use GFP_KERNEL. Christophe (added to Cc) actually has
> a scripted conversion for the rest that he hasn't posted yet, so I'll
> aim for the minimal version here.
>

The previously failing test passes with this patch applied on top of the
mainline kernel.

Guenter


2020-06-27 16:16:41

by Christophe JAILLET

Subject: Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems


On 24/06/2020 at 09:38, Christoph Hellwig wrote:
> Hi Guenter,
>
> can you try the patch below? This just converts the huge allocations
> in mptbase to use GFP_KERNEL. Christophe (added to Cc) actually has
> a scripted conversion for the rest that he hasn't posted yet, so I'll
> aim for the minimal version here.

Hi,

I'm sorry, but I will not send the pci_ --> dma_ conversion for this driver.
I'm a bit puzzled by some of the GFP_KERNEL and GFP_ATOMIC choices, which are
not all that obvious to me.

I'll try to send some patches for other easier drivers in the coming weeks.

Best regards,

CJ

2020-06-29 18:52:09

by Christoph Hellwig

Subject: Re: [PATCH v2] dma-pool: Fix too large DMA pools on medium systems

On Sat, Jun 27, 2020 at 06:13:43PM +0200, Marion & Christophe JAILLET wrote:
> I'm sorry, but I will not send the pci_ --> dma_ conversion for this driver.
> I'm a bit puzzled by some of the GFP_KERNEL and GFP_ATOMIC choices, which
> are not all that obvious to me.
>
> I'll try to send some patches for other easier drivers in the coming weeks.

No problem, I sent a patch for the conversion of the allocations that
caused problems.