2014-10-23 14:33:58

by Laurent Pinchart

Subject: [PATCH 0/4] Low/high memory CMA reservation fixes

Hello,

This patch set fixes an issue introduced by commits 95b0e655f914 ("ARM: mm:
don't limit default CMA region only to low memory") and f7426b983a6a ("mm:
cma: adjust address limit to avoid hitting low/high memory boundary")
resulting in reserved areas crossing the low/high memory boundary.

Patches 1/4 and 2/4 fix side issues, with the bulk of the work in patch 3/4.
Patch 4/4 then fixes a printk issue that left me puzzled, wondering why memory
reported as being below the lowmem limit was actually highmem.

This series fixes a v3.18-rc1 regression that broke boot on the Renesas
Koelsch board when CMA is enabled.

Laurent Pinchart (4):
mm: cma: Don't crash on allocation if CMA area can't be activated
mm: cma: Always consider a 0 base address reservation as dynamic
mm: cma: Ensure that reservations never cross the low/high mem
boundary
mm: cma: Use %pa to print physical addresses

mm/cma.c | 93 +++++++++++++++++++++++++++++++++++++++++-----------------------
1 file changed, 60 insertions(+), 33 deletions(-)

--
Regards,

Laurent Pinchart


2014-10-23 14:34:01

by Laurent Pinchart

Subject: [PATCH 4/4] mm: cma: Use %pa to print physical addresses

Casting physical addresses to unsigned long and using %lx truncates the
values on systems where physical addresses are larger than 32 bits. Use
%pa and get rid of the casts instead.
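
For background, a minimal userspace sketch (not part of the patch) of the
truncation; the uint32_t cast stands in for a 32-bit unsigned long:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* An LPAE-style physical address above the 4 GiB mark. */
	uint64_t phys = UINT64_C(0x100001000);

	/* With a 32-bit unsigned long the cast silently drops the high
	 * bits, so this prints "00001000". */
	printf("truncated: %08lx\n", (unsigned long)(uint32_t)phys);

	/* Printing the full width keeps all bits; the kernel's %pa
	 * specifier does the equivalent for a phys_addr_t. */
	printf("full:      %" PRIx64 "\n", phys);
	return 0;
}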

Signed-off-by: Laurent Pinchart <[email protected]>
---
mm/cma.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index b83597b..741c7ec 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -212,9 +212,8 @@ int __init cma_declare_contiguous(phys_addr_t base,
phys_addr_t highmem_start = __pa(high_memory);
int ret = 0;

- pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
- __func__, (unsigned long)size, (unsigned long)base,
- (unsigned long)limit, (unsigned long)alignment);
+ pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
+ __func__, &size, &base, &limit, &alignment);

if (cma_area_count == ARRAY_SIZE(cma_areas)) {
pr_err("Not enough slots for CMA reserved regions!\n");
@@ -257,8 +256,8 @@ int __init cma_declare_contiguous(phys_addr_t base,
*/
if (fixed && base < highmem_start && base + size > highmem_start) {
ret = -EINVAL;
- pr_err("Region at %08lx defined on low/high memory boundary (%08lx)\n",
- (unsigned long)base, (unsigned long)highmem_start);
+ pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
+ &base, &highmem_start);
goto err;
}

@@ -316,8 +315,8 @@ int __init cma_declare_contiguous(phys_addr_t base,
if (ret)
goto err;

- pr_info("Reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
- (unsigned long)base);
+ pr_info("Reserved %ld MiB at %pa\n", (unsigned long)size / SZ_1M,
+ &base);
return 0;

err:
--
2.0.4

2014-10-23 14:34:45

by Laurent Pinchart

Subject: [PATCH 3/4] mm: cma: Ensure that reservations never cross the low/high mem boundary

Commit 95b0e655f914 ("ARM: mm: don't limit default CMA region only to
low memory") extended CMA memory reservation to allow usage of high
memory. It relied on commit f7426b983a6a ("mm: cma: adjust address limit
to avoid hitting low/high memory boundary") to ensure that the reserved
block never crossed the low/high memory boundary. While the
implementation correctly lowered the limit, it failed to consider the
case where the base..limit range crossed the low/high memory boundary
with enough space on each side to reserve the requested size on either
low or high memory.

Rework the base and limit adjustment to fix the problem. The function
now starts by rejecting the reservation altogether for fixed
reservations that cross the boundary, then adjusts the limit when
reservation from high memory is impossible, and finally tries to
reserve from high memory first, falling back to low memory.
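
Schematically, the reworked flow is as follows (a condensed sketch of the
diff below; error handling and the fixed-reservation memblock_reserve()
path are elided):

/* A fixed region must not cross the low/high memory boundary. */
if (fixed && base < highmem_start && base + size > highmem_start)
	return -EINVAL;

/* Normalize the limit so the checks below see a real bound. */
if (limit == 0 || limit > memblock_end)
	limit = memblock_end;

/* Dynamic range crossing the boundary: try highmem first, then
 * restrict the fallback allocation to lowmem. */
if (!fixed && base < highmem_start && limit > highmem_start) {
	addr = memblock_alloc_range(size, alignment, highmem_start, limit);
	limit = highmem_start;
}
if (!fixed && !addr)
	addr = memblock_alloc_range(size, alignment, base, limit);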

Signed-off-by: Laurent Pinchart <[email protected]>
---
mm/cma.c | 58 ++++++++++++++++++++++++++++++++++++++++++++--------------
1 file changed, 44 insertions(+), 14 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 6b14346..b83597b 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -247,23 +247,38 @@ int __init cma_declare_contiguous(phys_addr_t base,
return -EINVAL;

/*
- * adjust limit to avoid crossing low/high memory boundary for
+ * Adjust limit and base to avoid crossing low/high memory boundary for
* automatically allocated regions
*/
- if (((limit == 0 || limit > memblock_end) &&
- (memblock_end - size < highmem_start &&
- memblock_end > highmem_start)) ||
- (!fixed && limit > highmem_start && limit - size < highmem_start)) {
- limit = highmem_start;
- }

- if (fixed && base < highmem_start && base+size > highmem_start) {
+ /*
+ * If allocating at a fixed base the request region must not cross the
+ * low/high memory boundary.
+ */
+ if (fixed && base < highmem_start && base + size > highmem_start) {
ret = -EINVAL;
pr_err("Region at %08lx defined on low/high memory boundary (%08lx)\n",
(unsigned long)base, (unsigned long)highmem_start);
goto err;
}

+ /*
+ * If the limit is unspecified or above the memblock end, its effective
+ * value will be the memblock end. Set it explicitly to simplify further
+ * checks.
+ */
+ if (limit == 0 || limit > memblock_end)
+ limit = memblock_end;
+
+ /*
+ * If the limit is above the highmem start by less than the reserved
+ * size allocation in highmem won't be possible. Lower the limit to the
+ * lowmem end.
+ */
+ if (limit > highmem_start && limit - size < highmem_start)
+ limit = highmem_start;
+
+
/* Reserve memory */
if (fixed) {
if (memblock_is_region_reserved(base, size) ||
@@ -272,14 +287,29 @@ int __init cma_declare_contiguous(phys_addr_t base,
goto err;
}
} else {
- phys_addr_t addr = memblock_alloc_range(size, alignment, base,
- limit);
+ phys_addr_t addr = 0;
+
+ /*
+ * If the requested region crosses the low/high memory boundary,
+ * try allocating from high memory first and fall back to low
+ * memory in case of failure.
+ */
+ if (base < highmem_start && limit > highmem_start) {
+ addr = memblock_alloc_range(size, alignment,
+ highmem_start, limit);
+ limit = highmem_start;
+ }
+
if (!addr) {
- ret = -ENOMEM;
- goto err;
- } else {
- base = addr;
+ addr = memblock_alloc_range(size, alignment, base,
+ limit);
+ if (!addr) {
+ ret = -ENOMEM;
+ goto err;
+ }
}
+
+ base = addr;
}

ret = cma_init_reserved_mem(base, size, order_per_bit, res_cma);
--
2.0.4

2014-10-23 14:35:07

by Laurent Pinchart

Subject: [PATCH 2/4] mm: cma: Always consider a 0 base address reservation as dynamic

The fixed parameter to cma_declare_contiguous() tells the function
whether the given base address must be honoured or should be considered
as a hint only. The API considers a zero base address as meaning any
base address, which must never be considered as a fixed value.

Part of the implementation correctly checks both fixed and base != 0,
but two locations check the fixed value only. Set fixed to false when
base is 0 to fix that and simplify the code.
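
For illustration, two hypothetical call sites (assuming the mm/cma.c
argument order of base, size, limit, alignment, order_per_bit, fixed,
res_cma):

/* "Reserve 64 MiB anywhere": a 0 base can never be fixed. */
cma_declare_contiguous(0, SZ_64M, 0, 0, 0, false, &cma);

/* "Reserve 64 MiB exactly at 0x20000000, or fail." */
cma_declare_contiguous(0x20000000, SZ_64M, 0, 0, 0, true, &cma);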

Signed-off-by: Laurent Pinchart <[email protected]>
---
mm/cma.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/cma.c b/mm/cma.c
index 16c6650..6b14346 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -239,6 +239,9 @@ int __init cma_declare_contiguous(phys_addr_t base,
size = ALIGN(size, alignment);
limit &= ~(alignment - 1);

+ if (!base)
+ fixed = false;
+
/* size should be aligned with order_per_bit */
if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
return -EINVAL;
@@ -262,7 +265,7 @@ int __init cma_declare_contiguous(phys_addr_t base,
}

/* Reserve memory */
- if (base && fixed) {
+ if (fixed) {
if (memblock_is_region_reserved(base, size) ||
memblock_reserve(base, size) < 0) {
ret = -EBUSY;
--
2.0.4

2014-10-23 14:35:42

by Laurent Pinchart

Subject: [PATCH 1/4] mm: cma: Don't crash on allocation if CMA area can't be activated

If activation of the CMA area fails its mutex won't be initialized,
leading to an oops at allocation time when trying to lock the mutex. Fix
this by failing allocation if the area hasn't been successfully activated,
and detect that condition by moving the CMA bitmap allocation after page
block reservation completion.
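
With the bitmap allocated last, a non-NULL cma->bitmap doubles as an
"activation succeeded" flag, so cma_alloc() can simply bail out early
(sketch of the guard added in the diff below):

if (!cma || !cma->count || !cma->bitmap)
	return NULL;	/* no area, empty area, or activation failed */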

Signed-off-by: Laurent Pinchart <[email protected]>
---
mm/cma.c | 17 ++++++-----------
1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/mm/cma.c b/mm/cma.c
index 963bc4a..16c6650 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -93,11 +93,6 @@ static int __init cma_activate_area(struct cma *cma)
unsigned i = cma->count >> pageblock_order;
struct zone *zone;

- cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
-
- if (!cma->bitmap)
- return -ENOMEM;
-
WARN_ON_ONCE(!pfn_valid(pfn));
zone = page_zone(pfn_to_page(pfn));

@@ -114,17 +109,17 @@ static int __init cma_activate_area(struct cma *cma)
* to be in the same zone.
*/
if (page_zone(pfn_to_page(pfn)) != zone)
- goto err;
+ return -EINVAL;
}
init_cma_reserved_pageblock(pfn_to_page(base_pfn));
} while (--i);

+ cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
+ if (!cma->bitmap)
+ return -ENOMEM;
+
mutex_init(&cma->lock);
return 0;
-
-err:
- kfree(cma->bitmap);
- return -EINVAL;
}

static int __init cma_init_reserved_areas(void)
@@ -313,7 +308,7 @@ struct page *cma_alloc(struct cma *cma, int count, unsigned int align)
struct page *page = NULL;
int ret;

- if (!cma || !cma->count)
+ if (!cma || !cma->count || !cma->bitmap)
return NULL;

pr_debug("%s(cma %p, count %d, align %d)\n", __func__, (void *)cma,
--
2.0.4

2014-10-23 16:53:45

by Michal Nazarewicz

Subject: Re: [PATCH 1/4] mm: cma: Don't crash on allocation if CMA area can't be activated

On Thu, Oct 23 2014, Laurent Pinchart wrote:
> If activation of the CMA area fails its mutex won't be initialized,
> leading to an oops at allocation time when trying to lock the mutex. Fix
> this by failing allocation if the area hasn't been successfully activated,
> and detect that condition by moving the CMA bitmap allocation after page
> block reservation completion.
>
> Signed-off-by: Laurent Pinchart <[email protected]>

Cc: <[email protected]> # v3.17
Acked-by: Michal Nazarewicz <[email protected]>

As a matter of fact, this is present in kernels earlier than 3.17, but in
3.17 the code was moved from drivers/base/dma-contiguous.c to
mm/cma.c, so this might require a separate stable patch. I can track this
and prepare a patch if you want.

> ---
> mm/cma.c | 17 ++++++-----------
> 1 file changed, 6 insertions(+), 11 deletions(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index 963bc4a..16c6650 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -93,11 +93,6 @@ static int __init cma_activate_area(struct cma *cma)
> unsigned i = cma->count >> pageblock_order;
> struct zone *zone;
>
> - cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> -
> - if (!cma->bitmap)
> - return -ENOMEM;
> -
> WARN_ON_ONCE(!pfn_valid(pfn));
> zone = page_zone(pfn_to_page(pfn));
>
> @@ -114,17 +109,17 @@ static int __init cma_activate_area(struct cma *cma)
> * to be in the same zone.
> */
> if (page_zone(pfn_to_page(pfn)) != zone)
> - goto err;
> + return -EINVAL;
> }
> init_cma_reserved_pageblock(pfn_to_page(base_pfn));
> } while (--i);
>
> + cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> + if (!cma->bitmap)
> + return -ENOMEM;
> +
> mutex_init(&cma->lock);
> return 0;
> -
> -err:
> - kfree(cma->bitmap);
> - return -EINVAL;
> }
>
> static int __init cma_init_reserved_areas(void)
> @@ -313,7 +308,7 @@ struct page *cma_alloc(struct cma *cma, int count, unsigned int align)
> struct page *page = NULL;
> int ret;
>
> - if (!cma || !cma->count)
> + if (!cma || !cma->count || !cma->bitmap)
> return NULL;
>
> pr_debug("%s(cma %p, count %d, align %d)\n", __func__, (void *)cma,
> --
> 2.0.4
>

--
Best regards, _ _
.o. | Liege of Serenely Enlightened Majesty of o' \,=./ `o
..o | Computer Science, Michał “mina86” Nazarewicz (o o)
ooo +--<[email protected]>--<xmpp:[email protected]>--ooO--(_)--Ooo--

2014-10-23 16:55:28

by Michal Nazarewicz

Subject: Re: [PATCH 2/4] mm: cma: Always consider a 0 base address reservation as dynamic

On Thu, Oct 23 2014, Laurent Pinchart wrote:
> The fixed parameter to cma_declare_contiguous() tells the function
> whether the given base address must be honoured or should be considered
> as a hint only. The API considers a zero base address as meaning any
> base address, which must never be considered as a fixed value.
>
> Part of the implementation correctly checks both fixed and base != 0,
> but two locations check the fixed value only. Set fixed to false when
> base is 0 to fix that and simplify the code.
>
> Signed-off-by: Laurent Pinchart <[email protected]>

Acked-by: Michal Nazarewicz <[email protected]>

And like before, this should probably also go to stable.

> ---
> mm/cma.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index 16c6650..6b14346 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -239,6 +239,9 @@ int __init cma_declare_contiguous(phys_addr_t base,
> size = ALIGN(size, alignment);
> limit &= ~(alignment - 1);
>
> + if (!base)
> + fixed = false;
> +
> /* size should be aligned with order_per_bit */
> if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
> return -EINVAL;
> @@ -262,7 +265,7 @@ int __init cma_declare_contiguous(phys_addr_t base,
> }
>
> /* Reserve memory */
> - if (base && fixed) {
> + if (fixed) {
> if (memblock_is_region_reserved(base, size) ||
> memblock_reserve(base, size) < 0) {
> ret = -EBUSY;
> --
> 2.0.4
>

--
Best regards, _ _
.o. | Liege of Serenely Enlightened Majesty of o' \,=./ `o
..o | Computer Science, Michał “mina86” Nazarewicz (o o)
ooo +--<[email protected]>--<xmpp:[email protected]>--ooO--(_)--Ooo--

2014-10-23 16:56:24

by Michal Nazarewicz

Subject: Re: [PATCH 4/4] mm: cma: Use %pa to print physical addresses

On Thu, Oct 23 2014, Laurent Pinchart wrote:
> Casting physical addresses to unsigned long and using %lx truncates the
> values on systems where physical addresses are larger than 32 bits. Use
> %pa and get rid of the casts instead.
>
> Signed-off-by: Laurent Pinchart <[email protected]>

Acked-by: Michal Nazarewicz <[email protected]>

> ---
> mm/cma.c | 13 ++++++-------
> 1 file changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index b83597b..741c7ec 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -212,9 +212,8 @@ int __init cma_declare_contiguous(phys_addr_t base,
> phys_addr_t highmem_start = __pa(high_memory);
> int ret = 0;
>
> - pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
> - __func__, (unsigned long)size, (unsigned long)base,
> - (unsigned long)limit, (unsigned long)alignment);
> + pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
> + __func__, &size, &base, &limit, &alignment);
>
> if (cma_area_count == ARRAY_SIZE(cma_areas)) {
> pr_err("Not enough slots for CMA reserved regions!\n");
> @@ -257,8 +256,8 @@ int __init cma_declare_contiguous(phys_addr_t base,
> */
> if (fixed && base < highmem_start && base + size > highmem_start) {
> ret = -EINVAL;
> - pr_err("Region at %08lx defined on low/high memory boundary (%08lx)\n",
> - (unsigned long)base, (unsigned long)highmem_start);
> + pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
> + &base, &highmem_start);
> goto err;
> }
>
> @@ -316,8 +315,8 @@ int __init cma_declare_contiguous(phys_addr_t base,
> if (ret)
> goto err;
>
> - pr_info("Reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
> - (unsigned long)base);
> + pr_info("Reserved %ld MiB at %pa\n", (unsigned long)size / SZ_1M,
> + &base);
> return 0;
>
> err:
> --
> 2.0.4
>

--
Best regards, _ _
.o. | Liege of Serenely Enlightened Majesty of o' \,=./ `o
..o | Computer Science, Michał “mina86” Nazarewicz (o o)
ooo +--<[email protected]>--<xmpp:[email protected]>--ooO--(_)--Ooo--

2014-10-23 17:51:18

by Geert Uytterhoeven

Subject: Re: [PATCH 4/4] mm: cma: Use %pa to print physical addresses

On Thu, Oct 23, 2014 at 4:33 PM, Laurent Pinchart
<[email protected]> wrote:
> Casting physical addresses to unsigned long and using %lx truncates the
> values on systems where physical addresses are larger than 32 bits. Use
> %pa and get rid of the casts instead.
>
> Signed-off-by: Laurent Pinchart <[email protected]>

Acked-by: Geert Uytterhoeven <[email protected]>

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- [email protected]

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds

2014-10-23 23:42:16

by Laurent Pinchart

Subject: Re: [PATCH 1/4] mm: cma: Don't crash on allocation if CMA area can't be activated

Hi Michal,

On Thursday 23 October 2014 18:53:36 Michal Nazarewicz wrote:
> On Thu, Oct 23 2014, Laurent Pinchart wrote:
> > If activation of the CMA area fails its mutex won't be initialized,
> > leading to an oops at allocation time when trying to lock the mutex. Fix
> > this by failing allocation if the area hasn't been successfully activated,
> > and detect that condition by moving the CMA bitmap allocation after page
> > block reservation completion.
> >
> > Signed-off-by: Laurent Pinchart <[email protected]>
>
> Cc: <[email protected]> # v3.17
> Acked-by: Michal Nazarewicz <[email protected]>
>
> As a matter of fact, this is present in kernels earlier than 3.17, but in
> 3.17 the code was moved from drivers/base/dma-contiguous.c to
> mm/cma.c, so this might require a separate stable patch. I can track this
> and prepare a patch if you want.

That could be done, but I'm not sure if it's really worth it. The bug only
occurs when the CMA area activation fails. I've run into that case due to a
bug introduced in v3.18-rc1, but this shouldn't be the case for older kernel
versions.

If you think the fix should be backported to stable kernels older than v3.17
please feel free to cook up a patch.

> > ---
> >
> > mm/cma.c | 17 ++++++-----------
> > 1 file changed, 6 insertions(+), 11 deletions(-)
> >
> > diff --git a/mm/cma.c b/mm/cma.c
> > index 963bc4a..16c6650 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -93,11 +93,6 @@ static int __init cma_activate_area(struct cma *cma)
> > unsigned i = cma->count >> pageblock_order;
> > struct zone *zone;
> >
> > - cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> > -
> > - if (!cma->bitmap)
> > - return -ENOMEM;
> > -
> > WARN_ON_ONCE(!pfn_valid(pfn));
> > zone = page_zone(pfn_to_page(pfn));
> >
> > @@ -114,17 +109,17 @@ static int __init cma_activate_area(struct cma *cma)
> > * to be in the same zone.
> > */
> > if (page_zone(pfn_to_page(pfn)) != zone)
> > - goto err;
> > + return -EINVAL;
> > }
> > init_cma_reserved_pageblock(pfn_to_page(base_pfn));
> > } while (--i);
> >
> > + cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
> > + if (!cma->bitmap)
> > + return -ENOMEM;
> > +
> > mutex_init(&cma->lock);
> > return 0;
> > -
> > -err:
> > - kfree(cma->bitmap);
> > - return -EINVAL;
> > }
> >
> > static int __init cma_init_reserved_areas(void)
> > @@ -313,7 +308,7 @@ struct page *cma_alloc(struct cma *cma, int count, unsigned int align)
> > struct page *page = NULL;
> > int ret;
> >
> > - if (!cma || !cma->count)
> > + if (!cma || !cma->count || !cma->bitmap)
> > return NULL;
> >
> > pr_debug("%s(cma %p, count %d, align %d)\n", __func__, (void *)cma,

--
Regards,

Laurent Pinchart

2014-10-23 23:50:45

by Laurent Pinchart

Subject: Re: [PATCH 2/4] mm: cma: Always consider a 0 base address reservation as dynamic

Hi Michal,

On Thursday 23 October 2014 18:55:20 Michal Nazarewicz wrote:
> On Thu, Oct 23 2014, Laurent Pinchart wrote:
> > The fixed parameter to cma_declare_contiguous() tells the function
> > whether the given base address must be honoured or should be considered
> > as a hint only. The API considers a zero base address as meaning any
> > base address, which must never be considered as a fixed value.
> >
> > Part of the implementation correctly checks both fixed and base != 0,
> > but two locations check the fixed value only. Set fixed to false when
> > base is 0 to fix that and simplify the code.
> >
> > Signed-off-by: Laurent Pinchart <[email protected]>
>
> Acked-by: Michal Nazarewicz <[email protected]>
>
And like before, this should probably also go to stable.

v3.17 and older don't have the extra fixed checks, so I don't think there's a
need to Cc stable.

> > ---
> >
> > mm/cma.c | 5 ++++-
> > 1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/cma.c b/mm/cma.c
> > index 16c6650..6b14346 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -239,6 +239,9 @@ int __init cma_declare_contiguous(phys_addr_t base,
> > size = ALIGN(size, alignment);
> > limit &= ~(alignment - 1);
> >
> > + if (!base)
> > + fixed = false;
> > +
> > /* size should be aligned with order_per_bit */
> > if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
> > return -EINVAL;
> > @@ -262,7 +265,7 @@ int __init cma_declare_contiguous(phys_addr_t base,
> > }
> >
> > /* Reserve memory */
> > - if (base && fixed) {
> > + if (fixed) {
> > if (memblock_is_region_reserved(base, size) ||
> > memblock_reserve(base, size) < 0) {
> > ret = -EBUSY;

--
Regards,

Laurent Pinchart

2014-10-24 02:02:52

by Weijie Yang

Subject: Re: [PATCH 1/4] mm: cma: Don't crash on allocation if CMA area can't be activated

On Fri, Oct 24, 2014 at 7:42 AM, Laurent Pinchart
<[email protected]> wrote:
> Hi Michal,
>
> On Thursday 23 October 2014 18:53:36 Michal Nazarewicz wrote:
>> On Thu, Oct 23 2014, Laurent Pinchart wrote:
>> > If activation of the CMA area fails its mutex won't be initialized,
>> > leading to an oops at allocation time when trying to lock the mutex. Fix
>> > this by failing allocation if the area hasn't been successfully activated,
>> > and detect that condition by moving the CMA bitmap allocation after page
>> > block reservation completion.
>> >
>> > Signed-off-by: Laurent Pinchart <[email protected]>
>>
>> Cc: <[email protected]> # v3.17
>> Acked-by: Michal Nazarewicz <[email protected]>

This patch is good, but how about adding an active field to the cma struct?
We could use cma->active to check whether the area was activated successfully.
I think it would make the code clearer and more readable.
Just my little opinion.
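
For illustration, that would look something like the following (struct
layout abridged from mm/cma.c of that time, the active field being the
addition):

struct cma {
	unsigned long	base_pfn;
	unsigned long	count;
	unsigned long	*bitmap;
	unsigned int	order_per_bit;
	struct mutex	lock;
	bool		active;	/* set once cma_activate_area() succeeds */
};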


>> As a matter of fact, this is present in kernels earlier than 3.17, but in
>> 3.17 the code was moved from drivers/base/dma-contiguous.c to
>> mm/cma.c, so this might require a separate stable patch. I can track this
>> and prepare a patch if you want.
>
> That could be done, but I'm not sure if it's really worth it. The bug only
> occurs when the CMA area activation fails. I've run into that case due to a
> bug introduced in v3.18-rc1, but this shouldn't be the case for older kernel
> versions.
>
> If you think the fix should be backported to stable kernels older than v3.17
> please feel free to cook up a patch.
>
>> > ---
>> >
>> > mm/cma.c | 17 ++++++-----------
>> > 1 file changed, 6 insertions(+), 11 deletions(-)
>> >
>> > diff --git a/mm/cma.c b/mm/cma.c
>> > index 963bc4a..16c6650 100644
>> > --- a/mm/cma.c
>> > +++ b/mm/cma.c
>> > @@ -93,11 +93,6 @@ static int __init cma_activate_area(struct cma *cma)
>> > unsigned i = cma->count >> pageblock_order;
>> > struct zone *zone;
>> >
>> > - cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
>> > -
>> > - if (!cma->bitmap)
>> > - return -ENOMEM;
>> > -
>> > WARN_ON_ONCE(!pfn_valid(pfn));
>> > zone = page_zone(pfn_to_page(pfn));
>> >
>> > @@ -114,17 +109,17 @@ static int __init cma_activate_area(struct cma *cma)
>> > * to be in the same zone.
>> > */
>> > if (page_zone(pfn_to_page(pfn)) != zone)
>> > - goto err;
>> > + return -EINVAL;
>> > }
>> > init_cma_reserved_pageblock(pfn_to_page(base_pfn));
>> > } while (--i);
>> >
>> > + cma->bitmap = kzalloc(bitmap_size, GFP_KERNEL);
>> > + if (!cma->bitmap)
>> > + return -ENOMEM;
>> > +
>> > mutex_init(&cma->lock);
>> > return 0;
>> > -
>> > -err:
>> > - kfree(cma->bitmap);
>> > - return -EINVAL;
>> > }
>> >
>> > static int __init cma_init_reserved_areas(void)
>> > @@ -313,7 +308,7 @@ struct page *cma_alloc(struct cma *cma, int count, unsigned int align)
>> > struct page *page = NULL;
>> > int ret;
>> >
>> > - if (!cma || !cma->count)
>> > + if (!cma || !cma->count || !cma->bitmap)
>> > return NULL;
>> >
>> > pr_debug("%s(cma %p, count %d, align %d)\n", __func__, (void *)cma,
>
> --
> Regards,
>
> Laurent Pinchart
>

2014-10-24 02:49:13

by Joonsoo Kim

Subject: Re: [PATCH 1/4] mm: cma: Don't crash on allocation if CMA area can't be activated

On Fri, Oct 24, 2014 at 10:02:49AM +0800, Weijie Yang wrote:
> On Fri, Oct 24, 2014 at 7:42 AM, Laurent Pinchart
> <[email protected]> wrote:
> > Hi Michal,
> >
> > On Thursday 23 October 2014 18:53:36 Michal Nazarewicz wrote:
> >> On Thu, Oct 23 2014, Laurent Pinchart wrote:
> >> > If activation of the CMA area fails its mutex won't be initialized,
> >> > leading to an oops at allocation time when trying to lock the mutex. Fix
> >> > this by failing allocation if the area hasn't been successfully activated,
> >> > and detect that condition by moving the CMA bitmap allocation after page
> >> > block reservation completion.
> >> >
> >> > Signed-off-by: Laurent Pinchart <[email protected]>
> >>
> >> Cc: <[email protected]> # v3.17
> >> Acked-by: Michal Nazarewicz <[email protected]>
>
> This patch is good, but how about adding an active field to the cma struct?
> We could use cma->active to check whether the area was activated successfully.
> I think it would make the code clearer and more readable.
> Just my little opinion.
>

Or just setting cma->count to 0 would work fine.
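
For illustration, the error paths in cma_activate_area() would then do
something like:

	cma->count = 0;	/* mark the area as unusable */
	return -EINVAL;

so that the existing !cma->count check in cma_alloc() also covers a
failed activation.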

Thanks.

2014-10-24 02:52:24

by Joonsoo Kim

[permalink] [raw]
Subject: Re: [PATCH 3/4] mm: cma: Ensure that reservations never cross the low/high mem boundary

On Thu, Oct 23, 2014 at 05:33:47PM +0300, Laurent Pinchart wrote:
> Commit 95b0e655f914 ("ARM: mm: don't limit default CMA region only to
> low memory") extended CMA memory reservation to allow usage of high
> memory. It relied on commit f7426b983a6a ("mm: cma: adjust address limit
> to avoid hitting low/high memory boundary") to ensure that the reserved
> block never crossed the low/high memory boundary. While the
> implementation correctly lowered the limit, it failed to consider the
> case where the base..limit range crossed the low/high memory boundary
> with enough space on each side to reserve the requested size on either
> low or high memory.
>
> Rework the base and limit adjustment to fix the problem. The function
> now starts by rejecting the reservation altogether for fixed
> reservations that cross the boundary, then adjusts the limit when
> reservation from high memory is impossible, and finally tries to
> reserve from high memory first, falling back to low memory.
>
> Signed-off-by: Laurent Pinchart <[email protected]>
> ---
> mm/cma.c | 58 ++++++++++++++++++++++++++++++++++++++++++++--------------
> 1 file changed, 44 insertions(+), 14 deletions(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index 6b14346..b83597b 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -247,23 +247,38 @@ int __init cma_declare_contiguous(phys_addr_t base,
> return -EINVAL;
>
> /*
> - * adjust limit to avoid crossing low/high memory boundary for
> + * Adjust limit and base to avoid crossing low/high memory boundary for
> * automatically allocated regions
> */
> - if (((limit == 0 || limit > memblock_end) &&
> - (memblock_end - size < highmem_start &&
> - memblock_end > highmem_start)) ||
> - (!fixed && limit > highmem_start && limit - size < highmem_start)) {
> - limit = highmem_start;
> - }
>
> - if (fixed && base < highmem_start && base+size > highmem_start) {
> + /*
> + * If allocating at a fixed base the request region must not cross the
> + * low/high memory boundary.
> + */
> + if (fixed && base < highmem_start && base + size > highmem_start) {
> ret = -EINVAL;
> pr_err("Region at %08lx defined on low/high memory boundary (%08lx)\n",
> (unsigned long)base, (unsigned long)highmem_start);
> goto err;
> }
>
> + /*
> + * If the limit is unspecified or above the memblock end, its effective
> + * value will be the memblock end. Set it explicitly to simplify further
> + * checks.
> + */
> + if (limit == 0 || limit > memblock_end)
> + limit = memblock_end;
> +
> + /*
> + * If the limit is above the highmem start by less than the reserved
> + * size allocation in highmem won't be possible. Lower the limit to the
> + * lowmem end.
> + */
> + if (limit > highmem_start && limit - size < highmem_start)
> + limit = highmem_start;
> +
> +

How about removing this check?
Without this check, memblock_alloc_range() would just fail and we would
fall back correctly. So this check is redundant, IMO.

Thanks.

2014-10-24 09:54:12

by Laurent Pinchart

Subject: Re: [PATCH 1/4] mm: cma: Don't crash on allocation if CMA area can't be activated

Hello,

On Friday 24 October 2014 11:50:14 Joonsoo Kim wrote:
> On Fri, Oct 24, 2014 at 10:02:49AM +0800, Weijie Yang wrote:
> > On Fri, Oct 24, 2014 at 7:42 AM, Laurent Pinchart wrote:
> > > On Thursday 23 October 2014 18:53:36 Michal Nazarewicz wrote:
> > >> On Thu, Oct 23 2014, Laurent Pinchart wrote:
> > >> > If activation of the CMA area fails its mutex won't be initialized,
> > >> > leading to an oops at allocation time when trying to lock the mutex.
> > >> > Fix this by failing allocation if the area hasn't been successfully
> > >> > activated, and detect that condition by moving the CMA bitmap
> > >> > allocation after page block reservation completion.
> > >> >
> > >> > Signed-off-by: Laurent Pinchart <[email protected]>
> > >>
> > >> Cc: <[email protected]> # v3.17
> > >> Acked-by: Michal Nazarewicz <[email protected]>
> >
> > This patch is good, but how about adding an active field to the cma struct?
> > We could use cma->active to check whether the area was activated successfully.
> > I think it would make the code clearer and more readable.
> > Just my little opinion.
>
> Or just setting cma->count to 0 would work fine.

I would prefer setting cma->count to 0 to avoid the extra field. I'll modify
the patch accordingly.

--
Regards,

Laurent Pinchart

2014-10-24 10:00:15

by Laurent Pinchart

Subject: Re: [PATCH 3/4] mm: cma: Ensure that reservations never cross the low/high mem boundary

Hi Joonsoo,

Thank you for the review.

On Friday 24 October 2014 11:53:25 Joonsoo Kim wrote:
> On Thu, Oct 23, 2014 at 05:33:47PM +0300, Laurent Pinchart wrote:
> > Commit 95b0e655f914 ("ARM: mm: don't limit default CMA region only to
> > low memory") extended CMA memory reservation to allow usage of high
> > memory. It relied on commit f7426b983a6a ("mm: cma: adjust address limit
> > to avoid hitting low/high memory boundary") to ensure that the reserved
> > block never crossed the low/high memory boundary. While the
> > implementation correctly lowered the limit, it failed to consider the
> > case where the base..limit range crossed the low/high memory boundary
> > with enough space on each side to reserve the requested size on either
> > low or high memory.
> >
> > Rework the base and limit adjustment to fix the problem. The function
> > now starts by rejecting the reservation altogether for fixed
> > reservations that cross the boundary, then adjusts the limit when
> > reservation from high memory is impossible, and finally tries to
> > reserve from high memory first, falling back to low memory.
> >
> > Signed-off-by: Laurent Pinchart <[email protected]>
> > ---
> >
> > mm/cma.c | 58 ++++++++++++++++++++++++++++++++++++++++++++--------------
> > 1 file changed, 44 insertions(+), 14 deletions(-)
> >
> > diff --git a/mm/cma.c b/mm/cma.c
> > index 6b14346..b83597b 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -247,23 +247,38 @@ int __init cma_declare_contiguous(phys_addr_t base,
> > return -EINVAL;
> >
> > /*
> > - * adjust limit to avoid crossing low/high memory boundary for
> > + * Adjust limit and base to avoid crossing low/high memory boundary for
> > * automatically allocated regions
> > */
> >
> > - if (((limit == 0 || limit > memblock_end) &&
> > - (memblock_end - size < highmem_start &&
> > - memblock_end > highmem_start)) ||
> > - (!fixed && limit > highmem_start && limit - size < highmem_start)) {
> > - limit = highmem_start;
> > - }
> >
> > - if (fixed && base < highmem_start && base+size > highmem_start) {
> > + /*
> > + * If allocating at a fixed base the request region must not cross the
> > + * low/high memory boundary.
> > + */
> > + if (fixed && base < highmem_start && base + size > highmem_start) {
> > ret = -EINVAL;
> > pr_err("Region at %08lx defined on low/high memory boundary (%08lx)\n",
> > (unsigned long)base, (unsigned long)highmem_start);
> > goto err;
> > }
> >
> > + /*
> > + * If the limit is unspecified or above the memblock end, its effective
> > + * value will be the memblock end. Set it explicitly to simplify further
> > + * checks.
> > + */
> > + if (limit == 0 || limit > memblock_end)
> > + limit = memblock_end;
> > +
> > + /*
> > + * If the limit is above the highmem start by less than the reserved
> > + * size allocation in highmem won't be possible. Lower the limit to the
> > + * lowmem end.
> > + */
> > + if (limit > highmem_start && limit - size < highmem_start)
> > + limit = highmem_start;
> > +
>
> How about removing this check?
> Without this check, memblock_alloc_range() would just fail and we would
> fall back correctly. So this check is redundant, IMO.

Good point. I'll remove the check in v2.

--
Regards,

Laurent Pinchart

2014-10-24 16:34:54

by Michal Nazarewicz

Subject: Re: [PATCH 1/4] mm: cma: Don't crash on allocation if CMA area can't be activated

> On Thursday 23 October 2014 18:53:36 Michal Nazarewicz wrote:
>> As a matter of fact, this is present in kernels earlier than 3.17, but in
>> 3.17 the code was moved from drivers/base/dma-contiguous.c to
>> mm/cma.c, so this might require a separate stable patch.

On Fri, Oct 24 2014, Laurent Pinchart <[email protected]> wrote:
> That could be done, but I'm not sure if it's really worth it. The bug only
> occurs when the CMA area activation fails. I've run into that case due to a
> bug introduced in v3.18-rc1, but this shouldn't be the case for older kernel
> versions.

Fair enough.

--
Best regards, _ _
.o. | Liege of Serenely Enlightened Majesty of o' \,=./ `o
..o | Computer Science, Michał “mina86” Nazarewicz (o o)
ooo +--<[email protected]>--<xmpp:[email protected]>--ooO--(_)--Ooo--