Hello,
This patch set fixes an issue introduced by commits 95b0e655f914 ("ARM: mm:
don't limit default CMA region only to low memory") and f7426b983a6a ("mm:
cma: adjust address limit to avoid hitting low/high memory boundary")
resulting in reserved areas crossing the low/high memory boundary.
Patches 1/4 and 2/4 fix side issues, with the bulk of the work in patch 3/4.
Patch 4/4 then fixes a printk issue that left me puzzled, wondering why memory
reported as being below the lowmem limit was actually highmem.
This series fixes a v3.18-rc1 regression causing Renesas Koelsch boot
breakages when CMA is enabled.
Changes since v1:
- Use the cma count field to detect non-activated reservations
- Remove the redundant limit adjustment
Laurent Pinchart (4):
mm: cma: Don't crash on allocation if CMA area can't be activated
mm: cma: Always consider a 0 base address reservation as dynamic
mm: cma: Ensure that reservations never cross the low/high mem
boundary
mm: cma: Use %pa to print physical addresses
mm/cma.c | 68 +++++++++++++++++++++++++++++++++++++++++-----------------------
1 file changed, 44 insertions(+), 24 deletions(-)
--
Regards,
Laurent Pinchart
Casting physical addresses to unsigned long and using %lu truncates the
values on systems where physical addresses are larger than 32 bits. Use
%pa and get rid of the cast instead.
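As a quick illustration (hypothetical values, assuming a 32-bit kernel with
a 64-bit phys_addr_t, e.g. ARM LPAE):

	phys_addr_t base = 0x200000000ULL;	/* above the 32-bit range */

	/* The cast truncates the value, so this prints "base 00000000". */
	pr_info("base %08lx\n", (unsigned long)base);

	/* %pa takes a pointer and prints the full phys_addr_t value. */
	pr_info("base %pa\n", &base);

That kind of truncation is what made highmem addresses look like lowmem
addresses in the first place.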
Signed-off-by: Laurent Pinchart <[email protected]>
Acked-by: Michal Nazarewicz <[email protected]>
Acked-by: Geert Uytterhoeven <[email protected]>
---
mm/cma.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/mm/cma.c b/mm/cma.c
index c30a6edee65c..fde706e1284f 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -218,9 +218,8 @@ int __init cma_declare_contiguous(phys_addr_t base,
phys_addr_t highmem_start = __pa(high_memory);
int ret = 0;
- pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n",
- __func__, (unsigned long)size, (unsigned long)base,
- (unsigned long)limit, (unsigned long)alignment);
+ pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n",
+ __func__, &size, &base, &limit, &alignment);
if (cma_area_count == ARRAY_SIZE(cma_areas)) {
pr_err("Not enough slots for CMA reserved regions!\n");
@@ -258,8 +257,8 @@ int __init cma_declare_contiguous(phys_addr_t base,
*/
if (fixed && base < highmem_start && base + size > highmem_start) {
ret = -EINVAL;
- pr_err("Region at %08lx defined on low/high memory boundary (%08lx)\n",
- (unsigned long)base, (unsigned long)highmem_start);
+ pr_err("Region at %pa defined on low/high memory boundary (%pa)\n",
+ &base, &highmem_start);
goto err;
}
@@ -309,8 +308,8 @@ int __init cma_declare_contiguous(phys_addr_t base,
if (ret)
goto err;
- pr_info("Reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M,
- (unsigned long)base);
+ pr_info("Reserved %ld MiB at %pa\n", (unsigned long)size / SZ_1M,
+ &base);
return 0;
err:
--
2.0.4
If activation of the CMA area fails, its mutex won't be initialized,
leading to an oops at allocation time when trying to lock the mutex. Fix
this by setting the cma area count field to 0 when activation fails, so
that allocation attempts return NULL immediately.
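For context, the allocation path already treats a zero count as "nothing to
allocate"; the entry check in cma_alloc() is roughly

	if (!cma || !cma->count)
		return NULL;

so clearing count on activation failure makes later allocations fail cleanly
instead of locking an uninitialized mutex.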
Cc: <[email protected]> # v3.17
Signed-off-by: Laurent Pinchart <[email protected]>
Acked-by: Michal Nazarewicz <[email protected]>
---
mm/cma.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/cma.c b/mm/cma.c
index 963bc4add9af..5aa1a6f74dec 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -124,6 +124,7 @@ static int __init cma_activate_area(struct cma *cma)
err:
kfree(cma->bitmap);
+ cma->count = 0;
return -EINVAL;
}
--
2.0.4
The fixed parameter to cma_declare_contiguous() tells the function
whether the given base address must be honoured or should be treated as
a hint only. The API considers a zero base address as meaning any base
address, which must therefore never be treated as a fixed value.
Part of the implementation correctly checks both fixed and base != 0,
but two locations check the fixed value only. Set fixed to false when
base is 0 to fix that and simplify the code.
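For illustration, the two usage styles look roughly as follows (made-up
sizes, assuming the parameter order base, size, limit, alignment,
order_per_bit, fixed, res_cma used by this function):

	/* Dynamic: any suitable base address, so fixed must be false. */
	cma_declare_contiguous(0, SZ_64M, 0, 0, 0, false, &cma);

	/* Fixed: the area must start exactly at base. */
	cma_declare_contiguous(base, SZ_64M, 0, 0, 0, true, &cma);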
Signed-off-by: Laurent Pinchart <[email protected]>
Acked-by: Michal Nazarewicz <[email protected]>
---
mm/cma.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/cma.c b/mm/cma.c
index 5aa1a6f74dec..62a5dccc3fb8 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -245,6 +245,9 @@ int __init cma_declare_contiguous(phys_addr_t base,
size = ALIGN(size, alignment);
limit &= ~(alignment - 1);
+ if (!base)
+ fixed = false;
+
/* size should be aligned with order_per_bit */
if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit))
return -EINVAL;
@@ -268,7 +271,7 @@ int __init cma_declare_contiguous(phys_addr_t base,
}
/* Reserve memory */
- if (base && fixed) {
+ if (fixed) {
if (memblock_is_region_reserved(base, size) ||
memblock_reserve(base, size) < 0) {
ret = -EBUSY;
--
2.0.4
Commit 95b0e655f914 ("ARM: mm: don't limit default CMA region only to
low memory") extended CMA memory reservation to allow usage of high
memory. It relied on commit f7426b983a6a ("mm: cma: adjust address limit
to avoid hitting low/high memory boundary") to ensure that the reserved
block never crossed the low/high memory boundary. While the
implementation correctly lowered the limit, it failed to consider the
case where the base..limit range crossed the low/high memory boundary
with enough space on each side to reserve the requested size in either
low or high memory.
Rework the base and limit adjustment to fix the problem. The function
now rejects fixed reservations that cross the boundary outright and, for
dynamic reservations, tries to reserve from high memory first before
falling back to low memory.
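As a concrete illustration (hypothetical addresses): with base = 0, limit = 0,
highmem_start = 0x40000000 and memory ending at 0x80000000, a dynamic
reservation is now attempted as

	/* First try entirely above the low/high boundary... */
	addr = memblock_alloc_range(size, alignment, 0x40000000, 0x80000000);

	/* ...and only fall back to low memory if that fails. */
	if (!addr)
		addr = memblock_alloc_range(size, alignment, 0, 0x40000000);

so the reserved block always ends up entirely on one side of the boundary.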
Signed-off-by: Laurent Pinchart <[email protected]>
---
mm/cma.c | 49 +++++++++++++++++++++++++++++++++----------------
1 file changed, 33 insertions(+), 16 deletions(-)
diff --git a/mm/cma.c b/mm/cma.c
index 62a5dccc3fb8..c30a6edee65c 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -253,23 +253,24 @@ int __init cma_declare_contiguous(phys_addr_t base,
return -EINVAL;
/*
- * adjust limit to avoid crossing low/high memory boundary for
- * automatically allocated regions
+ * If allocating at a fixed base the request region must not cross the
+ * low/high memory boundary.
*/
- if (((limit == 0 || limit > memblock_end) &&
- (memblock_end - size < highmem_start &&
- memblock_end > highmem_start)) ||
- (!fixed && limit > highmem_start && limit - size < highmem_start)) {
- limit = highmem_start;
- }
-
- if (fixed && base < highmem_start && base+size > highmem_start) {
+ if (fixed && base < highmem_start && base + size > highmem_start) {
ret = -EINVAL;
pr_err("Region at %08lx defined on low/high memory boundary (%08lx)\n",
(unsigned long)base, (unsigned long)highmem_start);
goto err;
}
+ /*
+ * If the limit is unspecified or above the memblock end, its effective
+ * value will be the memblock end. Set it explicitly to simplify further
+ * checks.
+ */
+ if (limit == 0 || limit > memblock_end)
+ limit = memblock_end;
+
/* Reserve memory */
if (fixed) {
if (memblock_is_region_reserved(base, size) ||
@@ -278,14 +279,30 @@ int __init cma_declare_contiguous(phys_addr_t base,
goto err;
}
} else {
- phys_addr_t addr = memblock_alloc_range(size, alignment, base,
- limit);
+ phys_addr_t addr = 0;
+
+ /*
+ * All pages in the reserved area must come from the same zone.
+ * If the requested region crosses the low/high memory boundary,
+ * try allocating from high memory first and fall back to low
+ * memory in case of failure.
+ */
+ if (base < highmem_start && limit > highmem_start) {
+ addr = memblock_alloc_range(size, alignment,
+ highmem_start, limit);
+ limit = highmem_start;
+ }
+
if (!addr) {
- ret = -ENOMEM;
- goto err;
- } else {
- base = addr;
+ addr = memblock_alloc_range(size, alignment, base,
+ limit);
+ if (!addr) {
+ ret = -ENOMEM;
+ goto err;
+ }
}
+
+ base = addr;
}
ret = cma_init_reserved_mem(base, size, order_per_bit, res_cma);
--
2.0.4
On Fri, Oct 24 2014, Laurent Pinchart <[email protected]> wrote:
> Commit 95b0e655f914 ("ARM: mm: don't limit default CMA region only to
> low memory") extended CMA memory reservation to allow usage of high
> memory. It relied on commit f7426b983a6a ("mm: cma: adjust address limit
> to avoid hitting low/high memory boundary") to ensure that the reserved
> block never crossed the low/high memory boundary. While the
> implementation correctly lowered the limit, it failed to consider the
> case where the base..limit range crossed the low/high memory boundary
> with enough space on each side to reserve the requested size in either
> low or high memory.
>
> Rework the base and limit adjustment to fix the problem. The function
> now rejects fixed reservations that cross the boundary outright and, for
> dynamic reservations, tries to reserve from high memory first before
> falling back to low memory.
>
> Signed-off-by: Laurent Pinchart <[email protected]>
Acked-by: Michal Nazarewicz <[email protected]>
On Friday 24 October 2014 18:26:58 Michal Nazarewicz wrote:
> On Fri, Oct 24 2014, Laurent Pinchart wrote:
> > Commit 95b0e655f914 ("ARM: mm: don't limit default CMA region only to
> > low memory") extended CMA memory reservation to allow usage of high
> > memory. It relied on commit f7426b983a6a ("mm: cma: adjust address limit
> > to avoid hitting low/high memory boundary") to ensure that the reserved
> > block never crossed the low/high memory boundary. While the
> > implementation correctly lowered the limit, it failed to consider the
> > case where the base..limit range crossed the low/high memory boundary
> > with enough space on each side to reserve the requested size in either
> > low or high memory.
> >
> > Rework the base and limit adjustment to fix the problem. The function
> > now rejects fixed reservations that cross the boundary outright and, for
> > dynamic reservations, tries to reserve from high memory first before
> > falling back to low memory.
> >
> > Signed-off-by: Laurent Pinchart
> > <[email protected]>
>
> Acked-by: Michal Nazarewicz <[email protected]>
Thank you. Can we get this series merged in v3.18-rc?
--
Regards,
Laurent Pinchart
On Sun, Oct 26, 2014 at 02:43:52PM +0200, Laurent Pinchart wrote:
> On Friday 24 October 2014 18:26:58 Michal Nazarewicz wrote:
> > On Fri, Oct 24 2014, Laurent Pinchart wrote:
> > > Commit 95b0e655f914 ("ARM: mm: don't limit default CMA region only to
> > > low memory") extended CMA memory reservation to allow usage of high
> > > memory. It relied on commit f7426b983a6a ("mm: cma: adjust address limit
> > > to avoid hitting low/high memory boundary") to ensure that the reserved
> > > block never crossed the low/high memory boundary. While the
> > > implementation correctly lowered the limit, it failed to consider the
> > > case where the base..limit range crossed the low/high memory boundary
> > > with enough space on each side to reserve the requested size in either
> > > low or high memory.
> > >
> > > Rework the base and limit adjustment to fix the problem. The function
> > > now rejects fixed reservations that cross the boundary outright and, for
> > > dynamic reservations, tries to reserve from high memory first before
> > > falling back to low memory.
> > >
> > > Signed-off-by: Laurent Pinchart
> > > <[email protected]>
> >
> > Acked-by: Michal Nazarewicz <[email protected]>
>
> Thank you. Can we get this series merged in v3.18-rc?
Hello,
You'd better resend the whole series to Andrew.
Thanks.
Hello,
On 2014-10-24 12:18, Laurent Pinchart wrote:
> Hello,
>
> This patch set fixes an issue introduced by commits 95b0e655f914 ("ARM: mm:
> don't limit default CMA region only to low memory") and f7426b983a6a ("mm:
> cma: adjust address limit to avoid hitting low/high memory boundary")
> resulting in reserved areas crossing the low/high memory boundary.
>
> Patches 1/4 and 2/4 fix side issues, with the bulk of the work in patch 3/4.
> Patch 4/4 then fixes a printk issue that left me puzzled, wondering why memory
> reported as being below the lowmem limit was actually highmem.
>
> This series fixes a v3.18-rc1 regression causing Renesas Koelsch boot
> breakages when CMA is enabled.
I've applied the whole series to my fixes-for-v3.18 branch.
> Changes since v1:
>
> - Use the cma count field to detect non-activated reservations
> - Remove the redundant limit adjustment
>
> Laurent Pinchart (4):
> mm: cma: Don't crash on allocation if CMA area can't be activated
> mm: cma: Always consider a 0 base address reservation as dynamic
> mm: cma: Ensure that reservations never cross the low/high mem
> boundary
> mm: cma: Use %pa to print physical addresses
>
> mm/cma.c | 68 +++++++++++++++++++++++++++++++++++++++++-----------------------
> 1 file changed, 44 insertions(+), 24 deletions(-)
>
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland