2024-04-04 16:53:51

by Frank van der Linden

Subject: [PATCH 1/2] mm/cma: drop incorrect alignment check in cma_init_reserved_mem

cma_init_reserved_mem uses IS_ALIGNED to check if the size
represented by one bit in the cma allocation bitmask is
aligned with CMA_MIN_ALIGNMENT_BYTES (pageblock size).

However, this is too strict, as this will fail if
order_per_bit > pageblock_order, which is a valid configuration.

We could check IS_ALIGNED both ways, but since both numbers are
powers of two, no check is needed at all.
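
For illustration only (a userspace sketch, not part of the patch, with an
assumed pageblock_order of 9): the dropped check rejects exactly the
order_per_bit > pageblock_order case, which is the configuration we want
to allow.

#include <stdio.h>

#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

int main(void)
{
	unsigned int pageblock_order = 9;	/* example: 2MB pageblocks with 4K pages */
	unsigned long cma_min_alignment_pages = 1UL << pageblock_order;
	unsigned int order_per_bit;

	for (order_per_bit = 0; order_per_bit <= 12; order_per_bit++) {
		/* The dropped check: fails for every order_per_bit > pageblock_order. */
		int ok = IS_ALIGNED(cma_min_alignment_pages, 1UL << order_per_bit);

		printf("order_per_bit=%2u -> %s\n", order_per_bit,
		       ok ? "accepted" : "rejected (-EINVAL)");
	}
	return 0;
}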

Signed-off-by: Frank van der Linden <[email protected]>
Cc: Marek Szyprowski <[email protected]>
Cc: David Hildenbrand <[email protected]>
Fixes: de9e14eebf33 ("drivers: dma-contiguous: add initialization from device tree")
---
mm/cma.c | 4 ----
1 file changed, 4 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 01f5a8f71ddf..3e9724716bad 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -182,10 +182,6 @@ int __init cma_init_reserved_mem(phys_addr_t base, phys_addr_t size,
if (!size || !memblock_is_region_reserved(base, size))
return -EINVAL;

- /* alignment should be aligned with order_per_bit */
- if (!IS_ALIGNED(CMA_MIN_ALIGNMENT_PAGES, 1 << order_per_bit))
- return -EINVAL;
-
/* ensure minimal alignment required by mm core */
if (!IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES))
return -EINVAL;
--
2.44.0.478.gd926399ef9-goog



2024-04-04 20:05:42

by David Hildenbrand

Subject: Re: [PATCH 1/2] mm/cma: drop incorrect alignment check in cma_init_reserved_mem

On 04.04.24 18:25, Frank van der Linden wrote:
> cma_init_reserved_mem uses IS_ALIGNED to check if the size
> represented by one bit in the cma allocation bitmask is
> aligned with CMA_MIN_ALIGNMENT_BYTES (pageblock size).

I recall the important part is that our area always covers full
pageblocks (CMA_MIN_ALIGNMENT_BYTES), because we cannot have "partial
CMA" pageblocks.

Internally, allocating from multiple pageblocks should just work.
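
A minimal userspace sketch of that invariant, with an assumed 2MB pageblock
size (example value only, not the in-tree constant): the base|size check the
patch keeps has the same form as the one in cma_init_reserved_mem() and is
what guarantees full-pageblock coverage.

#include <stdbool.h>
#include <stdint.h>

#define IS_ALIGNED(x, a)		(((x) & ((a) - 1)) == 0)
#define CMA_MIN_ALIGNMENT_BYTES		(1UL << 21)	/* example: 2MB pageblocks */

/* True iff the reserved area starts and ends on pageblock boundaries. */
static bool covers_full_pageblocks(uint64_t base, uint64_t size)
{
	/* Same form as the check kept in cma_init_reserved_mem(). */
	return IS_ALIGNED(base | size, CMA_MIN_ALIGNMENT_BYTES);
}

int main(void)
{
	/* Example: a 32MB region at 512MB is pageblock-aligned, so acceptable. */
	return covers_full_pageblocks(512UL << 20, 32UL << 20) ? 0 : 1;
}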

It's late in Germany; hopefully I am not missing something.

Acked-by: David Hildenbrand <[email protected]>

>
> However, this is too strict, as this will fail if
> order_per_bit > pageblock_order, which is a valid configuration.
>
> We could check IS_ALIGNED both ways, but since both numbers are
> powers of two, no check is needed at all.
>
> Signed-off-by: Frank van der Linden <[email protected]>
> Cc: Marek Szyprowski <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Fixes: de9e14eebf33 ("drivers: dma-contiguous: add initialization from device tree")

Is there a real setup/BUG we are fixing? Why did we not stumble over
that earlier?

If so, please describe that in the patch description.

--
Cheers,

David / dhildenb


2024-04-04 20:15:58

by Andrew Morton

Subject: Re: [PATCH 1/2] mm/cma: drop incorrect alignment check in cma_init_reserved_mem

On Thu, 4 Apr 2024 16:25:14 +0000 Frank van der Linden <[email protected]> wrote:

> cma_init_reserved_mem uses IS_ALIGNED to check if the size
> represented by one bit in the cma allocation bitmask is
> aligned with CMA_MIN_ALIGNMENT_BYTES (pageblock size).
>
> However, this is too strict, as this will fail if
> order_per_bit > pageblock_order, which is a valid configuration.
>
> We could check IS_ALIGNED both ways, but since both numbers are
> powers of two, no check is needed at all.

What are the userspace visible effects of this bug?

2024-04-04 20:49:07

by Frank van der Linden

Subject: Re: [PATCH 1/2] mm/cma: drop incorrect alignment check in cma_init_reserved_mem

On Thu, Apr 4, 2024 at 1:05 PM David Hildenbrand <[email protected]> wrote:
>
> On 04.04.24 18:25, Frank van der Linden wrote:
> > cma_init_reserved_mem uses IS_ALIGNED to check if the size
> > represented by one bit in the cma allocation bitmask is
> > aligned with CMA_MIN_ALIGNMENT_BYTES (pageblock size).
>
> I recall the important part is that our area always covers full
> pageblocks (CMA_MIN_ALIGNMENT_BYTES), because we cannot have "partial
> CMA" pageblocks.
>
> Internally, allocating from multiple pageblocks should just work.
>
> It's late in Germany; hopefully I am not missing something.
>
> Acked-by: David Hildenbrand <[email protected]>
>
> >
> > However, this is too strict, as this will fail if
> > order_per_bit > pageblock_order, which is a valid configuration.
> >
> > We could check IS_ALIGNED both ways, but since both numbers are
> > powers of two, no check is needed at all.
> >
> > Signed-off-by: Frank van der Linden <[email protected]>
> > Cc: Marek Szyprowski <[email protected]>
> > Cc: David Hildenbrand <[email protected]>
> > Fixes: de9e14eebf33 ("drivers: dma-contiguous: add initialization from device tree")
>
> Is there a real setup/BUG we are fixing? Why did we not stumble over
> that earlier?
>
> If so, please describe that in the patch description.

Nobody stumbled over it because the only user of CMA that should have
passed in an order_per_bit large enough to trigger this was
hugetlb_cma. However, because of a bug, it didn't. :) When I fixed
that, I noticed that this check fired.

- Frank

2024-04-04 21:03:10

by Frank van der Linden

Subject: Re: [PATCH 1/2] mm/cma: drop incorrect alignment check in cma_init_reserved_mem

On Thu, Apr 4, 2024 at 1:15 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Thu, 4 Apr 2024 16:25:14 +0000 Frank van der Linden <[email protected]> wrote:
>
> > cma_init_reserved_mem uses IS_ALIGNED to check if the size
> > represented by one bit in the cma allocation bitmask is
> > aligned with CMA_MIN_ALIGNMENT_BYTES (pageblock size).
> >
> > However, this is too strict, as this will fail if
> > order_per_bit > pageblock_order, which is a valid configuration.
> >
> > We could check IS_ALIGNED both ways, but since both numbers are
> > powers of two, no check is needed at all.
>
> What are the userspace visible effects of this bug?

None that I know of. This bug was exposed because I made the hugetlb
code correctly pass the right order_per_bit argument (see the
accompanying hugetlb CMA fix). That tripped this check when I
backported the fix to an older kernel, which passed an order of 30
(1G hugetlb page) as order_per_bit. This actually won't happen for
6.9-rc, since the (intended) order_per_bit was reduced to
HUGETLB_PAGE_ORDER because of hugetlb page demotion.
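
To make that concrete, a rough userspace sketch; all values
(pageblock_order, order_per_bit, region size) are illustrative assumptions,
not taken from the hugetlb code. With order_per_bit larger than
pageblock_order, each bitmap bit simply covers several pageblocks, and the
only thing that fails is the check this patch removes.

#include <stdio.h>

#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

int main(void)
{
	unsigned int pageblock_order = 9;	/* example: 2MB pageblocks, 4K pages */
	unsigned int order_per_bit = 18;	/* example: one bit per 1G worth of 4K pages */
	unsigned long region_pages = 4UL << 18;	/* example: a 4GB reservation */

	printf("bitmap bits        : %lu\n", region_pages >> order_per_bit);
	printf("pageblocks per bit : %lu\n",
	       1UL << (order_per_bit - pageblock_order));
	printf("removed check      : %s\n",
	       IS_ALIGNED(1UL << pageblock_order, 1UL << order_per_bit) ?
	       "passes" : "fails (-EINVAL)");
	return 0;
}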

So, no user visible effects. However, if the other fix is going to be
backported, this one is a prereq.

- Frank