2014-10-10 02:17:29

by Weijie Yang

[permalink] [raw]
Subject: [PATCH] mm/cma: fix cma bitmap aligned mask computing

The current computation of the cma bitmap aligned mask is incorrect: it
can cause an unexpected alignment from cma_alloc() when the requested
align order is larger than cma->order_per_bit.

Take kvm for example (PAGE_SHIFT = 12): kvm_cma->order_per_bit is set to 6.
When kvm_alloc_rma() tries to allocate kvm_rma_pages, it passes 15 as the
expected align order, but with the current computation the cma bitmap
aligned mask comes out as 0 instead of 511.

This patch fixes the cma bitmap aligned mask computation.

Signed-off-by: Weijie Yang <[email protected]>
---
mm/cma.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/cma.c b/mm/cma.c
index c17751c..f6207ef 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -57,7 +57,10 @@ unsigned long cma_get_size(struct cma *cma)

static unsigned long cma_bitmap_aligned_mask(struct cma *cma, int align_order)
{
- return (1UL << (align_order >> cma->order_per_bit)) - 1;
+ if (align_order <= cma->order_per_bit)
+ return 0;
+ else
+ return (1UL << (align_order - cma->order_per_bit)) - 1;
}

static unsigned long cma_bitmap_maxno(struct cma *cma)
--
1.7.10.4


2014-10-10 14:19:05

by Michal Nazarewicz

[permalink] [raw]
Subject: Re: [PATCH] mm/cma: fix cma bitmap aligned mask computing

On Fri, Oct 10 2014, Weijie Yang wrote:
> The current computation of the cma bitmap aligned mask is incorrect: it
> can cause an unexpected alignment from cma_alloc() when the requested
> align order is larger than cma->order_per_bit.
>
> Take kvm for example (PAGE_SHIFT = 12): kvm_cma->order_per_bit is set to 6.
> When kvm_alloc_rma() tries to allocate kvm_rma_pages, it passes 15 as the
> expected align order, but with the current computation the cma bitmap
> aligned mask comes out as 0 instead of 511.
>
> This patch fixes the cma bitmap aligned mask computation.
>
> Signed-off-by: Weijie Yang <[email protected]>

Acked-by: Michal Nazarewicz <[email protected]>

Should that also get:

Cc: <[email protected]> # v3.17

> ---
> mm/cma.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index c17751c..f6207ef 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -57,7 +57,10 @@ unsigned long cma_get_size(struct cma *cma)
>
> static unsigned long cma_bitmap_aligned_mask(struct cma *cma, int align_order)
> {
> - return (1UL << (align_order >> cma->order_per_bit)) - 1;
> + if (align_order <= cma->order_per_bit)
> + return 0;
> + else
> + return (1UL << (align_order - cma->order_per_bit)) - 1;
> }
>
> static unsigned long cma_bitmap_maxno(struct cma *cma)
> --
> 1.7.10.4
>
>

--
Best regards, _ _
.o. | Liege of Serenely Enlightened Majesty of o' \,=./ `o
..o | Computer Science, Michał “mina86” Nazarewicz (o o)
ooo +--<[email protected]>--<xmpp:[email protected]>--ooO--(_)--Ooo--