2017-06-28 17:08:30

by Doug Berger

Subject: [PATCH] cma: fix calculation of aligned offset

The align_offset parameter is used by bitmap_find_next_zero_area_off()
to represent the offset of the map's base from the previous alignment
boundary; the function ensures that the returned index, plus the
align_offset, honors the specified align_mask.

The logic introduced by commit b5be83e308f7 ("mm: cma: align to
physical address, not CMA region position") has the cma driver
calculate the offset to the *next* alignment boundary. In most cases,
the base alignment is greater than that specified when making
allocations, resulting in a zero offset whether we align up or down.
In the example given with the commit, the base alignment (8MB) was
half the requested alignment (16MB) so the math also happened to work
since the offset is 8MB in both directions. However, when requesting
allocations with an alignment greater than twice that of the base,
the returned index would not be correctly aligned.

Also, the align_order arguments of cma_bitmap_aligned_mask() and
cma_bitmap_aligned_offset() should not be negative so the argument
type was made unsigned.

Fixes: b5be83e308f7 ("mm: cma: align to physical address, not CMA region position")
Signed-off-by: Angus Clark <[email protected]>
Signed-off-by: Doug Berger <[email protected]>
---
mm/cma.c | 15 ++++++---------
1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 978b4a1441ef..56a388eb0242 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -59,7 +59,7 @@ const char *cma_get_name(const struct cma *cma)
}

static unsigned long cma_bitmap_aligned_mask(const struct cma *cma,
- int align_order)
+ unsigned int align_order)
{
if (align_order <= cma->order_per_bit)
return 0;
@@ -67,17 +67,14 @@ static unsigned long cma_bitmap_aligned_mask(const struct cma *cma,
}

/*
- * Find a PFN aligned to the specified order and return an offset represented in
- * order_per_bits.
+ * Find the offset of the base PFN from the specified align_order.
+ * The value returned is represented in order_per_bits.
*/
static unsigned long cma_bitmap_aligned_offset(const struct cma *cma,
- int align_order)
+ unsigned int align_order)
{
- if (align_order <= cma->order_per_bit)
- return 0;
-
- return (ALIGN(cma->base_pfn, (1UL << align_order))
- - cma->base_pfn) >> cma->order_per_bit;
+ return (cma->base_pfn & ((1UL << align_order) - 1))
+ >> cma->order_per_bit;
}

static unsigned long cma_bitmap_pages_to_bits(const struct cma *cma,
--
2.13.0
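The before-and-after offset math in the patch above can be sketched outside the kernel. The following is a standalone model (plain Python, byte addresses instead of PFNs and bitmap indices), not kernel code:

```python
# Standalone model (not kernel code) of cma_bitmap_aligned_offset(),
# using byte addresses for clarity; the kernel works in PFNs and
# bitmap indices, but the arithmetic is the same.

MB = 1 << 20

def align_up(x, a):
    """Round x up to the next multiple of a (a is a power of two)."""
    return (x + a - 1) & ~(a - 1)

def offset_next(base, align):
    """Old math: distance from base up to the *next* alignment boundary."""
    return align_up(base, align) - base

def offset_prev(base, align):
    """New math: distance from the *previous* boundary down to base."""
    return base & (align - 1)

# The example from commit b5be83e308f7: 8MB-aligned base, 16MB request.
# Both offsets are 8MB, so the old math happened to give the right answer.
assert offset_next(8 * MB, 16 * MB) == offset_prev(8 * MB, 16 * MB) == 8 * MB

# Base aligned to only 4MB, allocation wanting 16MB alignment (more than
# twice the base alignment): the two offsets now disagree.
base, align = 4 * MB, 16 * MB
assert offset_next(base, align) == 12 * MB
assert offset_prev(base, align) == 4 * MB

# bitmap_find_next_zero_area_off() returns the first free index where
# (index + align_offset) is a multiple of the alignment.  Model that
# with an empty bitmap, where the first qualifying index wins:
index_old = (-offset_next(base, align)) % align   # 4MB
index_new = (-offset_prev(base, align)) % align   # 12MB

assert (base + index_old) % align != 0   # old offset: 8MB, misaligned
assert (base + index_new) % align == 0   # new offset: 16MB, aligned
```

With the from-previous-boundary offset, base + index always lands on a requested-alignment boundary; the to-next-boundary offset only does so when the two offsets happen to coincide, as in the 8MB/16MB example from the original commit.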


2017-06-29 06:24:31

by Gregory Fong

Subject: Re: [PATCH] cma: fix calculation of aligned offset

On Wed, Jun 28, 2017 at 10:07 AM, Doug Berger <[email protected]> wrote:
> The align_offset parameter is used by bitmap_find_next_zero_area_off()
> to represent the offset of map's base from the previous alignment
> boundary; the function ensures that the returned index, plus the
> align_offset, honors the specified align_mask.
>
> The logic introduced by commit b5be83e308f7 ("mm: cma: align to
> physical address, not CMA region position") has the cma driver
> calculate the offset to the *next* alignment boundary.

Wow, I had that completely backward, nice catch.

> In most cases,
> the base alignment is greater than that specified when making
> allocations, resulting in a zero offset whether we align up or down.
> In the example given with the commit, the base alignment (8MB) was
> half the requested alignment (16MB) so the math also happened to work
> since the offset is 8MB in both directions. However, when requesting
> allocations with an alignment greater than twice that of the base,
> the returned index would not be correctly aligned.

It may be worth explaining what impact incorrect alignment has for an
end user, and then considering this patch for inclusion in stable.

>
> Also, the align_order arguments of cma_bitmap_aligned_mask() and
> cma_bitmap_aligned_offset() should not be negative so the argument
> type was made unsigned.
>
> Fixes: b5be83e308f7 ("mm: cma: align to physical address, not CMA region position")
> Signed-off-by: Angus Clark <[email protected]>
> Signed-off-by: Doug Berger <[email protected]>

Acked-by: Gregory Fong <[email protected]>

2017-06-29 16:55:02

by Doug Berger

Subject: Re: [PATCH] cma: fix calculation of aligned offset

On 06/28/2017 11:23 PM, Gregory Fong wrote:
> On Wed, Jun 28, 2017 at 10:07 AM, Doug Berger <[email protected]> wrote:
>> The align_offset parameter is used by bitmap_find_next_zero_area_off()
>> to represent the offset of map's base from the previous alignment
>> boundary; the function ensures that the returned index, plus the
>> align_offset, honors the specified align_mask.
>>
>> The logic introduced by commit b5be83e308f7 ("mm: cma: align to
>> physical address, not CMA region position") has the cma driver
>> calculate the offset to the *next* alignment boundary.
>
> Wow, I had that completely backward, nice catch.
Thanks go to Angus for that!

>> In most cases,
>> the base alignment is greater than that specified when making
>> allocations, resulting in a zero offset whether we align up or down.
>> In the example given with the commit, the base alignment (8MB) was
>> half the requested alignment (16MB) so the math also happened to work
>> since the offset is 8MB in both directions. However, when requesting
>> allocations with an alignment greater than twice that of the base,
>> the returned index would not be correctly aligned.
>
> It may be worth explaining what impact incorrect alignment has for an
> end user, then considering for inclusion in stable.
It would be difficult to explain in a general way, since the end user
requests the alignment and only she knows what the consequences of
insufficient alignment would be.

In general, I assume that with CMA it is most likely a DMA constraint.
However, in our particular case the problem affected an allocation used
by a co-processor. The larger CONFIG_CMA_ALIGNMENT is, the less likely
users are to run into this bug. We encountered it after reducing our
default CONFIG_CMA_ALIGNMENT.

I agree that it should be considered for stable.

>>
>> Also, the align_order arguments of cma_bitmap_aligned_mask() and
>> cma_bitmap_aligned_offset() should not be negative so the argument
>> type was made unsigned.
>>
>> Fixes: b5be83e308f7 ("mm: cma: align to physical address, not CMA region position")
>> Signed-off-by: Angus Clark <[email protected]>
>> Signed-off-by: Doug Berger <[email protected]>
>
> Acked-by: Gregory Fong <[email protected]>
>

2017-06-29 20:48:13

by Andrew Morton

Subject: Re: [PATCH] cma: fix calculation of aligned offset

On Wed, 28 Jun 2017 10:07:41 -0700 Doug Berger <[email protected]> wrote:

> The align_offset parameter is used by bitmap_find_next_zero_area_off()
> to represent the offset of map's base from the previous alignment
> boundary; the function ensures that the returned index, plus the
> align_offset, honors the specified align_mask.
>
> The logic introduced by commit b5be83e308f7 ("mm: cma: align to
> physical address, not CMA region position") has the cma driver
> calculate the offset to the *next* alignment boundary. In most cases,
> the base alignment is greater than that specified when making
> allocations, resulting in a zero offset whether we align up or down.
> In the example given with the commit, the base alignment (8MB) was
> half the requested alignment (16MB) so the math also happened to work
> since the offset is 8MB in both directions. However, when requesting
> allocations with an alignment greater than twice that of the base,
> the returned index would not be correctly aligned.
>
> Also, the align_order arguments of cma_bitmap_aligned_mask() and
> cma_bitmap_aligned_offset() should not be negative so the argument
> type was made unsigned.

The changelog doesn't describe the user-visible effects of the bug. It
should do so please, so that others can decide which kernel(s) need the fix.

Since the bug has been there for three years, I'll assume that -stable
backporting is not needed.

2017-06-30 00:43:23

by Doug Berger

Subject: Re: [PATCH] cma: fix calculation of aligned offset

On 06/29/2017 01:48 PM, Andrew Morton wrote:
> On Wed, 28 Jun 2017 10:07:41 -0700 Doug Berger <[email protected]> wrote:
>
>> The align_offset parameter is used by bitmap_find_next_zero_area_off()
>> to represent the offset of map's base from the previous alignment
>> boundary; the function ensures that the returned index, plus the
>> align_offset, honors the specified align_mask.
>>
>> The logic introduced by commit b5be83e308f7 ("mm: cma: align to
>> physical address, not CMA region position") has the cma driver
>> calculate the offset to the *next* alignment boundary. In most cases,
>> the base alignment is greater than that specified when making
>> allocations, resulting in a zero offset whether we align up or down.
>> In the example given with the commit, the base alignment (8MB) was
>> half the requested alignment (16MB) so the math also happened to work
>> since the offset is 8MB in both directions. However, when requesting
>> allocations with an alignment greater than twice that of the base,
>> the returned index would not be correctly aligned.
>>
>> Also, the align_order arguments of cma_bitmap_aligned_mask() and
>> cma_bitmap_aligned_offset() should not be negative so the argument
>> type was made unsigned.
>
> The changelog doesn't describe the user-visible effects of the bug. It
> should do so please, so that others can decide which kernel(s) need the fix.
>
> Since the bug has been there for three years, I'll assume that -stable
> backporting is not needed.
>
I'm afraid I'm confused by what you are asking me to do, since it
appears that you have already signed off on this patch.

The direct user-visible effect of the bug is that if the user requests a
CMA allocation aligned to a granule that is more than twice the base
alignment of the CMA region, she will receive an allocation that does
not have that alignment.

As I indicated to Gregory, the follow-on consequences of the address not
satisfying the required alignment depend on why the alignment was
requested. In our case it was a system crash, but it could also
manifest as data corruption on a network interface, for example.

In general I would expect it to be unusual for anyone to request an
allocation alignment that is larger than the CMA base alignment which is
probably why the bug has been hiding for three years.

Thanks for your support with this and let me know what more you would
like from me.

-Doug

2017-06-30 00:49:43

by Andrew Morton

Subject: Re: [PATCH] cma: fix calculation of aligned offset

On Thu, 29 Jun 2017 17:43:18 -0700 Doug Berger <[email protected]> wrote:

> On 06/29/2017 01:48 PM, Andrew Morton wrote:
> > On Wed, 28 Jun 2017 10:07:41 -0700 Doug Berger <[email protected]> wrote:
> >
> >> The align_offset parameter is used by bitmap_find_next_zero_area_off()
> >> to represent the offset of map's base from the previous alignment
> >> boundary; the function ensures that the returned index, plus the
> >> align_offset, honors the specified align_mask.
> >>
> >> The logic introduced by commit b5be83e308f7 ("mm: cma: align to
> >> physical address, not CMA region position") has the cma driver
> >> calculate the offset to the *next* alignment boundary. In most cases,
> >> the base alignment is greater than that specified when making
> >> allocations, resulting in a zero offset whether we align up or down.
> >> In the example given with the commit, the base alignment (8MB) was
> >> half the requested alignment (16MB) so the math also happened to work
> >> since the offset is 8MB in both directions. However, when requesting
> >> allocations with an alignment greater than twice that of the base,
> >> the returned index would not be correctly aligned.
> >>
> >> Also, the align_order arguments of cma_bitmap_aligned_mask() and
> >> cma_bitmap_aligned_offset() should not be negative so the argument
> >> type was made unsigned.
> >
> > The changelog doesn't describe the user-visible effects of the bug. It
> > should do so please, so that others can decide which kernel(s) need the fix.
> >
> > Since the bug has been there for three years, I'll assume that -stable
> > backporting is not needed.
> >
> I'm afraid I'm confused by what you are asking me to do since it appears
> that you have already signed-off on this patch.
>
> The direct user-visible effect of the bug is that if the user requests a
> CMA allocation that is aligned with a granule that is more than twice
> the base alignment of the CMA region she will receive an allocation that
> does not have that alignment.
>
> As I indicated to Gregory, the follow-on consequences of the address not
> satisfying the required alignment depend on why the alignment was
> requested. In our case it was a system crash, but it could also
> manifest as data corruption on a network interface for example.
>
> In general I would expect it to be unusual for anyone to request an
> allocation alignment that is larger than the CMA base alignment which is
> probably why the bug has been hiding for three years.
>

OK, it sounds like it isn't very critical so I'll remove the cc:stable
and the patch will appear in 4.12 and no earlier kernels.