Hi,
The current memblock API is quite extensive and, more annoyingly,
duplicated. Apart from the low-level functions that allow searching for a
free memory region and marking it as reserved, memblock provides three
(well, two and a half) sets of functions to allocate memory: there are
several overlapping functions that return a physical address and there are
functions that return a virtual address. Those that return a virtual
address may also clear the allocated memory. And, on top of all that, some
allocators panic and some return NULL in case of error.
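To illustrate the duplication (signatures as in the current
include/linux/memblock.h), the same early allocation can be spelled in at
least four ways:

	/* physical address, panics on failure */
	paddr = memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ACCESSIBLE);

	/* physical address, returns 0 on failure */
	paddr = __memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ACCESSIBLE);

	/* virtual address, zeroed, panics on failure */
	ptr = memblock_alloc(size, align);

	/* virtual address, zeroed, returns NULL on failure */
	ptr = memblock_alloc_nopanic(size, align);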
This set tries to reduce the mess and trim down the number of memblock
allocation methods.
Patches 1-10 consolidate the functions that return the physical address of
the allocated memory.
Patches 11-13 are some trivial cleanups.
Patches 14-19 add checks for the return value of memblock_alloc*() and
panic in case of error. Patches 14-18 include some minor refactoring for
better readability of the resulting code, and patch 19 is a mechanical
addition of

	if (!ptr)
		panic();

after memblock_alloc*() calls.
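In the treewide patch the added check typically looks like

	ptr = memblock_alloc(size, align);
	if (!ptr)
		panic("%s: Failed to allocate %zu bytes\n", __func__, size);

with the align, nid and address range constraints added to the message
where they apply.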
And, finally, patches 20 and 21 remove the panic() calls from memblock and
drop the _nopanic variants.
v2 changes:
* replace some more %lu with %zu
* remove panics where they are not needed in s390 and in printk
* collect Acked-by and Reviewed-by.
Christophe Leroy (1):
powerpc: use memblock functions returning virtual address
Mike Rapoport (20):
openrisc: prefer memblock APIs returning virtual address
memblock: replace memblock_alloc_base(ANYWHERE) with memblock_phys_alloc
memblock: drop memblock_alloc_base_nid()
memblock: emphasize that memblock_alloc_range() returns a physical address
memblock: memblock_phys_alloc_try_nid(): don't panic
memblock: memblock_phys_alloc(): don't panic
memblock: drop __memblock_alloc_base()
memblock: drop memblock_alloc_base()
memblock: refactor internal allocation functions
memblock: make memblock_find_in_range_node() and choose_memblock_flags() static
arch: use memblock_alloc() instead of memblock_alloc_from(size, align, 0)
arch: don't memset(0) memory returned by memblock_alloc()
ia64: add checks for the return value of memblock_alloc*()
sparc: add checks for the return value of memblock_alloc*()
mm/percpu: add checks for the return value of memblock_alloc*()
init/main: add checks for the return value of memblock_alloc*()
swiotlb: add checks for the return value of memblock_alloc*()
treewide: add checks for the return value of memblock_alloc*()
memblock: memblock_alloc_try_nid: don't panic
memblock: drop memblock_alloc_*_nopanic() variants
arch/alpha/kernel/core_cia.c | 5 +-
arch/alpha/kernel/core_marvel.c | 6 +
arch/alpha/kernel/pci-noop.c | 13 +-
arch/alpha/kernel/pci.c | 11 +-
arch/alpha/kernel/pci_iommu.c | 16 +-
arch/alpha/kernel/setup.c | 2 +-
arch/arc/kernel/unwind.c | 3 +-
arch/arc/mm/highmem.c | 4 +
arch/arm/kernel/setup.c | 6 +
arch/arm/mm/init.c | 6 +-
arch/arm/mm/mmu.c | 14 +-
arch/arm64/kernel/setup.c | 8 +-
arch/arm64/mm/kasan_init.c | 10 ++
arch/arm64/mm/mmu.c | 2 +
arch/arm64/mm/numa.c | 4 +
arch/c6x/mm/dma-coherent.c | 4 +
arch/c6x/mm/init.c | 4 +-
arch/csky/mm/highmem.c | 5 +
arch/h8300/mm/init.c | 4 +-
arch/ia64/kernel/mca.c | 25 +--
arch/ia64/mm/contig.c | 8 +-
arch/ia64/mm/discontig.c | 4 +
arch/ia64/mm/init.c | 38 ++++-
arch/ia64/mm/tlb.c | 6 +
arch/ia64/sn/kernel/io_common.c | 3 +
arch/ia64/sn/kernel/setup.c | 12 +-
arch/m68k/atari/stram.c | 4 +
arch/m68k/mm/init.c | 3 +
arch/m68k/mm/mcfmmu.c | 7 +-
arch/m68k/mm/motorola.c | 9 ++
arch/m68k/mm/sun3mmu.c | 6 +
arch/m68k/sun3/sun3dvma.c | 3 +
arch/microblaze/mm/init.c | 10 +-
arch/mips/cavium-octeon/dma-octeon.c | 3 +
arch/mips/kernel/setup.c | 3 +
arch/mips/kernel/traps.c | 5 +-
arch/mips/mm/init.c | 5 +
arch/nds32/mm/init.c | 12 ++
arch/openrisc/mm/init.c | 5 +-
arch/openrisc/mm/ioremap.c | 8 +-
arch/powerpc/kernel/dt_cpu_ftrs.c | 8 +-
arch/powerpc/kernel/irq.c | 5 -
arch/powerpc/kernel/paca.c | 6 +-
arch/powerpc/kernel/pci_32.c | 3 +
arch/powerpc/kernel/prom.c | 5 +-
arch/powerpc/kernel/rtas.c | 6 +-
arch/powerpc/kernel/setup-common.c | 3 +
arch/powerpc/kernel/setup_32.c | 26 ++--
arch/powerpc/kernel/setup_64.c | 4 +
arch/powerpc/lib/alloc.c | 3 +
arch/powerpc/mm/hash_utils_64.c | 11 +-
arch/powerpc/mm/mmu_context_nohash.c | 9 ++
arch/powerpc/mm/numa.c | 4 +
arch/powerpc/mm/pgtable-book3e.c | 12 +-
arch/powerpc/mm/pgtable-book3s64.c | 3 +
arch/powerpc/mm/pgtable-radix.c | 9 +-
arch/powerpc/mm/ppc_mmu_32.c | 3 +
arch/powerpc/platforms/pasemi/iommu.c | 3 +
arch/powerpc/platforms/powermac/nvram.c | 3 +
arch/powerpc/platforms/powernv/opal.c | 3 +
arch/powerpc/platforms/powernv/pci-ioda.c | 8 +
arch/powerpc/platforms/ps3/setup.c | 3 +
arch/powerpc/sysdev/dart_iommu.c | 3 +
arch/powerpc/sysdev/msi_bitmap.c | 3 +
arch/s390/kernel/crash_dump.c | 3 +
arch/s390/kernel/setup.c | 16 ++
arch/s390/kernel/smp.c | 9 +-
arch/s390/kernel/topology.c | 6 +
arch/s390/numa/mode_emu.c | 3 +
arch/s390/numa/numa.c | 6 +-
arch/sh/boards/mach-ap325rxa/setup.c | 5 +-
arch/sh/boards/mach-ecovec24/setup.c | 10 +-
arch/sh/boards/mach-kfr2r09/setup.c | 5 +-
arch/sh/boards/mach-migor/setup.c | 5 +-
arch/sh/boards/mach-se/7724/setup.c | 10 +-
arch/sh/kernel/machine_kexec.c | 3 +-
arch/sh/mm/init.c | 8 +-
arch/sh/mm/numa.c | 4 +
arch/sparc/kernel/prom_32.c | 6 +-
arch/sparc/kernel/setup_64.c | 6 +
arch/sparc/kernel/smp_64.c | 12 ++
arch/sparc/mm/init_32.c | 2 +-
arch/sparc/mm/init_64.c | 11 ++
arch/sparc/mm/srmmu.c | 18 ++-
arch/um/drivers/net_kern.c | 3 +
arch/um/drivers/vector_kern.c | 3 +
arch/um/kernel/initrd.c | 2 +
arch/um/kernel/mem.c | 16 ++
arch/unicore32/kernel/setup.c | 4 +
arch/unicore32/mm/mmu.c | 15 +-
arch/x86/kernel/acpi/boot.c | 3 +
arch/x86/kernel/apic/io_apic.c | 5 +
arch/x86/kernel/e820.c | 5 +-
arch/x86/kernel/setup_percpu.c | 10 +-
arch/x86/mm/kasan_init_64.c | 14 +-
arch/x86/mm/numa.c | 12 +-
arch/x86/platform/olpc/olpc_dt.c | 3 +
arch/x86/xen/p2m.c | 11 +-
arch/xtensa/mm/kasan_init.c | 10 +-
arch/xtensa/mm/mmu.c | 3 +
drivers/clk/ti/clk.c | 3 +
drivers/firmware/memmap.c | 2 +-
drivers/macintosh/smu.c | 5 +-
drivers/of/fdt.c | 8 +-
drivers/of/of_reserved_mem.c | 7 +-
drivers/of/unittest.c | 8 +-
drivers/usb/early/xhci-dbc.c | 2 +-
drivers/xen/swiotlb-xen.c | 7 +-
include/linux/memblock.h | 59 +------
init/main.c | 26 +++-
kernel/dma/swiotlb.c | 21 ++-
kernel/power/snapshot.c | 3 +
kernel/printk/printk.c | 9 +-
lib/cpumask.c | 3 +
mm/cma.c | 10 +-
mm/kasan/init.c | 10 +-
mm/memblock.c | 249 ++++++++++--------------------
mm/page_alloc.c | 10 +-
mm/page_ext.c | 2 +-
mm/percpu.c | 84 +++++++---
mm/sparse.c | 25 ++-
121 files changed, 860 insertions(+), 412 deletions(-)
--
2.7.4
Rename memblock_alloc_range() to memblock_phys_alloc_range() to emphasize
that it returns a physical address.
While at it, remove the 'enum memblock_flags' parameter from this function,
as its only user sets it to MEMBLOCK_NONE anyway, which is the default for
most memblock allocations.
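The caller-side change is then mechanical, e.g. for mm/cma.c below:

	/* before */
	addr = memblock_alloc_range(size, alignment, base, limit,
				    MEMBLOCK_NONE);
	/* after */
	addr = memblock_phys_alloc_range(size, alignment, base, limit);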
Signed-off-by: Mike Rapoport <[email protected]>
---
include/linux/memblock.h | 5 ++---
mm/cma.c | 10 ++++------
mm/memblock.c | 9 +++++----
3 files changed, 11 insertions(+), 13 deletions(-)
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index f7ef313..66dfdb3 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -369,6 +369,8 @@ static inline int memblock_get_region_node(const struct memblock_region *r)
#define ARCH_LOW_ADDRESS_LIMIT 0xffffffffUL
#endif
+phys_addr_t memblock_phys_alloc_range(phys_addr_t size, phys_addr_t align,
+ phys_addr_t start, phys_addr_t end);
phys_addr_t memblock_phys_alloc_nid(phys_addr_t size, phys_addr_t align, int nid);
phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid);
@@ -487,9 +489,6 @@ static inline bool memblock_bottom_up(void)
return memblock.bottom_up;
}
-phys_addr_t __init memblock_alloc_range(phys_addr_t size, phys_addr_t align,
- phys_addr_t start, phys_addr_t end,
- enum memblock_flags flags);
phys_addr_t memblock_alloc_base(phys_addr_t size, phys_addr_t align,
phys_addr_t max_addr);
phys_addr_t __memblock_alloc_base(phys_addr_t size, phys_addr_t align,
diff --git a/mm/cma.c b/mm/cma.c
index c7b39dd..e4530ae 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -327,16 +327,14 @@ int __init cma_declare_contiguous(phys_addr_t base,
* memory in case of failure.
*/
if (base < highmem_start && limit > highmem_start) {
- addr = memblock_alloc_range(size, alignment,
- highmem_start, limit,
- MEMBLOCK_NONE);
+ addr = memblock_phys_alloc_range(size, alignment,
+ highmem_start, limit);
limit = highmem_start;
}
if (!addr) {
- addr = memblock_alloc_range(size, alignment, base,
- limit,
- MEMBLOCK_NONE);
+ addr = memblock_phys_alloc_range(size, alignment, base,
+ limit);
if (!addr) {
ret = -ENOMEM;
goto err;
diff --git a/mm/memblock.c b/mm/memblock.c
index c80029e..f019aee 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1338,12 +1338,13 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
return 0;
}
-phys_addr_t __init memblock_alloc_range(phys_addr_t size, phys_addr_t align,
- phys_addr_t start, phys_addr_t end,
- enum memblock_flags flags)
+phys_addr_t __init memblock_phys_alloc_range(phys_addr_t size,
+ phys_addr_t align,
+ phys_addr_t start,
+ phys_addr_t end)
{
return memblock_alloc_range_nid(size, align, start, end, NUMA_NO_NODE,
- flags);
+ MEMBLOCK_NONE);
}
phys_addr_t __init memblock_phys_alloc_nid(phys_addr_t size, phys_addr_t align, int nid)
--
2.7.4
From: Christophe Leroy <[email protected]>
Since only the virtual address of the allocated blocks is used, let's use
the functions that directly return a virtual address. Those functions have
the advantage of also zeroing the allocated block.
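The resulting pattern: call sites of the form

	ptr = __va(memblock_phys_alloc(size, align));
	memset(ptr, 0, size);

collapse into

	ptr = memblock_alloc(size, align);

or, where the memory is initialized by hand anyway (as in
allocate_paca_ptrs()), into memblock_alloc_raw() followed by an explicit
NULL check and panic().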
[ MR:
- updated error message in alloc_stack() to be more verbose
- converted several additional call sites ]
Signed-off-by: Christophe Leroy <[email protected]>
Signed-off-by: Mike Rapoport <[email protected]>
---
arch/powerpc/kernel/dt_cpu_ftrs.c | 3 +--
arch/powerpc/kernel/irq.c | 5 -----
arch/powerpc/kernel/paca.c | 6 +++++-
arch/powerpc/kernel/prom.c | 5 ++++-
arch/powerpc/kernel/setup_32.c | 26 ++++++++++++++++----------
5 files changed, 26 insertions(+), 19 deletions(-)
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
index 8be3721..2554824 100644
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
+++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -813,7 +813,6 @@ static int __init process_cpufeatures_node(unsigned long node,
int len;
f = &dt_cpu_features[i];
- memset(f, 0, sizeof(struct dt_cpu_feature));
f->node = node;
@@ -1008,7 +1007,7 @@ static int __init dt_cpu_ftrs_scan_callback(unsigned long node, const char
/* Count and allocate space for cpu features */
of_scan_flat_dt_subnodes(node, count_cpufeatures_subnodes,
&nr_dt_cpu_features);
- dt_cpu_features = __va(memblock_phys_alloc(sizeof(struct dt_cpu_feature) * nr_dt_cpu_features, PAGE_SIZE));
+ dt_cpu_features = memblock_alloc(sizeof(struct dt_cpu_feature) * nr_dt_cpu_features, PAGE_SIZE);
cpufeatures_setup_start(isa);
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 916ddc4..4a44bc3 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -725,18 +725,15 @@ void exc_lvl_ctx_init(void)
#endif
#endif
- memset((void *)critirq_ctx[cpu_nr], 0, THREAD_SIZE);
tp = critirq_ctx[cpu_nr];
tp->cpu = cpu_nr;
tp->preempt_count = 0;
#ifdef CONFIG_BOOKE
- memset((void *)dbgirq_ctx[cpu_nr], 0, THREAD_SIZE);
tp = dbgirq_ctx[cpu_nr];
tp->cpu = cpu_nr;
tp->preempt_count = 0;
- memset((void *)mcheckirq_ctx[cpu_nr], 0, THREAD_SIZE);
tp = mcheckirq_ctx[cpu_nr];
tp->cpu = cpu_nr;
tp->preempt_count = HARDIRQ_OFFSET;
@@ -754,12 +751,10 @@ void irq_ctx_init(void)
int i;
for_each_possible_cpu(i) {
- memset((void *)softirq_ctx[i], 0, THREAD_SIZE);
tp = softirq_ctx[i];
tp->cpu = i;
klp_init_thread_info(tp);
- memset((void *)hardirq_ctx[i], 0, THREAD_SIZE);
tp = hardirq_ctx[i];
tp->cpu = i;
klp_init_thread_info(tp);
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index 8c890c6..e7382ab 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -196,7 +196,11 @@ void __init allocate_paca_ptrs(void)
paca_nr_cpu_ids = nr_cpu_ids;
paca_ptrs_size = sizeof(struct paca_struct *) * nr_cpu_ids;
- paca_ptrs = __va(memblock_phys_alloc(paca_ptrs_size, SMP_CACHE_BYTES));
+ paca_ptrs = memblock_alloc_raw(paca_ptrs_size, SMP_CACHE_BYTES);
+ if (!paca_ptrs)
+ panic("Failed to allocate %d bytes for paca pointers\n",
+ paca_ptrs_size);
+
memset(paca_ptrs, 0x88, paca_ptrs_size);
}
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index e97aaf2..c0ed4fa 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -127,7 +127,10 @@ static void __init move_device_tree(void)
if ((memory_limit && (start + size) > PHYSICAL_START + memory_limit) ||
!memblock_is_memory(start + size - 1) ||
overlaps_crashkernel(start, size) || overlaps_initrd(start, size)) {
- p = __va(memblock_phys_alloc(size, PAGE_SIZE));
+ p = memblock_alloc_raw(size, PAGE_SIZE);
+ if (!p)
+ panic("Failed to allocate %lu bytes to move device tree\n",
+ size);
memcpy(p, initial_boot_params, size);
initial_boot_params = p;
DBG("Moved device tree to 0x%px\n", p);
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index 947f904..1f0b762 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -196,6 +196,17 @@ static int __init ppc_init(void)
}
arch_initcall(ppc_init);
+static void *__init alloc_stack(void)
+{
+ void *ptr = memblock_alloc(THREAD_SIZE, THREAD_SIZE);
+
+ if (!ptr)
+ panic("cannot allocate %d bytes for stack at %pS\n",
+ THREAD_SIZE, (void *)_RET_IP_);
+
+ return ptr;
+}
+
void __init irqstack_early_init(void)
{
unsigned int i;
@@ -203,10 +214,8 @@ void __init irqstack_early_init(void)
/* interrupt stacks must be in lowmem, we get that for free on ppc32
* as the memblock is limited to lowmem by default */
for_each_possible_cpu(i) {
- softirq_ctx[i] = (struct thread_info *)
- __va(memblock_phys_alloc(THREAD_SIZE, THREAD_SIZE));
- hardirq_ctx[i] = (struct thread_info *)
- __va(memblock_phys_alloc(THREAD_SIZE, THREAD_SIZE));
+ softirq_ctx[i] = alloc_stack();
+ hardirq_ctx[i] = alloc_stack();
}
}
@@ -224,13 +233,10 @@ void __init exc_lvl_early_init(void)
hw_cpu = 0;
#endif
- critirq_ctx[hw_cpu] = (struct thread_info *)
- __va(memblock_phys_alloc(THREAD_SIZE, THREAD_SIZE));
+ critirq_ctx[hw_cpu] = alloc_stack();
#ifdef CONFIG_BOOKE
- dbgirq_ctx[hw_cpu] = (struct thread_info *)
- __va(memblock_phys_alloc(THREAD_SIZE, THREAD_SIZE));
- mcheckirq_ctx[hw_cpu] = (struct thread_info *)
- __va(memblock_phys_alloc(THREAD_SIZE, THREAD_SIZE));
+ dbgirq_ctx[hw_cpu] = alloc_stack();
+ mcheckirq_ctx[hw_cpu] = alloc_stack();
#endif
}
}
--
2.7.4
The allocation of the page table memory in openrisc uses
memblock_phys_alloc() and then converts the returned physical address to a
virtual one. Use memblock_alloc_raw() instead and add a panic() if the
allocation fails.
Signed-off-by: Mike Rapoport <[email protected]>
---
arch/openrisc/mm/init.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
index d157310..caeb418 100644
--- a/arch/openrisc/mm/init.c
+++ b/arch/openrisc/mm/init.c
@@ -105,7 +105,10 @@ static void __init map_ram(void)
}
/* Alloc one page for holding PTE's... */
- pte = (pte_t *) __va(memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE));
+ pte = memblock_alloc_raw(PAGE_SIZE, PAGE_SIZE);
+ if (!pte)
+ panic("%s: Failed to allocate page for PTEs\n",
+ __func__);
set_pmd(pme, __pmd(_KERNPG_TABLE + __pa(pte)));
/* Fill the newly allocated page with PTE'S */
--
2.7.4
The __memblock_alloc_base() function tries to allocate memory up to the
limit specified by its max_addr parameter. Depending on the value of this
parameter, each call to __memblock_alloc_base() can be replaced with the
appropriate memblock_phys_alloc*() variant.
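Concretely:

	/* max_addr == MEMBLOCK_ALLOC_ACCESSIBLE */
	addr = __memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ACCESSIBLE);
	/* becomes */
	addr = memblock_phys_alloc(size, align);

	/* max_addr is an explicit upper limit */
	addr = __memblock_alloc_base(size, align, max);
	/* becomes */
	addr = memblock_phys_alloc_range(size, align, 0, max);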
Signed-off-by: Mike Rapoport <[email protected]>
Acked-by: Rob Herring <[email protected]>
---
arch/sh/kernel/machine_kexec.c | 3 ++-
arch/x86/kernel/e820.c | 2 +-
arch/x86/mm/numa.c | 12 ++++--------
drivers/of/of_reserved_mem.c | 7 ++-----
include/linux/memblock.h | 2 --
mm/memblock.c | 9 ++-------
6 files changed, 11 insertions(+), 24 deletions(-)
diff --git a/arch/sh/kernel/machine_kexec.c b/arch/sh/kernel/machine_kexec.c
index b9f9f1a..63d63a3 100644
--- a/arch/sh/kernel/machine_kexec.c
+++ b/arch/sh/kernel/machine_kexec.c
@@ -168,7 +168,8 @@ void __init reserve_crashkernel(void)
crash_size = PAGE_ALIGN(resource_size(&crashk_res));
if (!crashk_res.start) {
unsigned long max = memblock_end_of_DRAM() - memory_limit;
- crashk_res.start = __memblock_alloc_base(crash_size, PAGE_SIZE, max);
+ crashk_res.start = memblock_phys_alloc_range(crash_size,
+ PAGE_SIZE, 0, max);
if (!crashk_res.start) {
pr_err("crashkernel allocation failed\n");
goto disable;
diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index 50895c2..9c0eb54 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -778,7 +778,7 @@ u64 __init e820__memblock_alloc_reserved(u64 size, u64 align)
{
u64 addr;
- addr = __memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ACCESSIBLE);
+ addr = memblock_phys_alloc(size, align);
if (addr) {
e820__range_update_kexec(addr, size, E820_TYPE_RAM, E820_TYPE_RESERVED);
pr_info("update e820_table_kexec for e820__memblock_alloc_reserved()\n");
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index 1308f54..f85ae42 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -195,15 +195,11 @@ static void __init alloc_node_data(int nid)
* Allocate node data. Try node-local memory and then any node.
* Never allocate in DMA zone.
*/
- nd_pa = memblock_phys_alloc_nid(nd_size, SMP_CACHE_BYTES, nid);
+ nd_pa = memblock_phys_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
if (!nd_pa) {
- nd_pa = __memblock_alloc_base(nd_size, SMP_CACHE_BYTES,
- MEMBLOCK_ALLOC_ACCESSIBLE);
- if (!nd_pa) {
- pr_err("Cannot find %zu bytes in any node (initial node: %d)\n",
- nd_size, nid);
- return;
- }
+ pr_err("Cannot find %zu bytes in any node (initial node: %d)\n",
+ nd_size, nid);
+ return;
}
nd = __va(nd_pa);
diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
index 1977ee0..499f16d 100644
--- a/drivers/of/of_reserved_mem.c
+++ b/drivers/of/of_reserved_mem.c
@@ -31,13 +31,10 @@ int __init __weak early_init_dt_alloc_reserved_memory_arch(phys_addr_t size,
phys_addr_t *res_base)
{
phys_addr_t base;
- /*
- * We use __memblock_alloc_base() because memblock_alloc_base()
- * panic()s on allocation failure.
- */
+
end = !end ? MEMBLOCK_ALLOC_ANYWHERE : end;
align = !align ? SMP_CACHE_BYTES : align;
- base = __memblock_alloc_base(size, align, end);
+ base = memblock_phys_alloc_range(size, align, 0, end);
if (!base)
return -ENOMEM;
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 7883c74..768e2b4 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -496,8 +496,6 @@ static inline bool memblock_bottom_up(void)
phys_addr_t memblock_alloc_base(phys_addr_t size, phys_addr_t align,
phys_addr_t max_addr);
-phys_addr_t __memblock_alloc_base(phys_addr_t size, phys_addr_t align,
- phys_addr_t max_addr);
phys_addr_t memblock_phys_mem_size(void);
phys_addr_t memblock_reserved_size(void);
phys_addr_t memblock_mem_size(unsigned long limit_pfn);
diff --git a/mm/memblock.c b/mm/memblock.c
index 461e40a3..e5ffdcd 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1363,17 +1363,12 @@ phys_addr_t __init memblock_phys_alloc_nid(phys_addr_t size, phys_addr_t align,
return ret;
}
-phys_addr_t __init __memblock_alloc_base(phys_addr_t size, phys_addr_t align, phys_addr_t max_addr)
-{
- return memblock_alloc_range_nid(size, align, 0, max_addr, NUMA_NO_NODE,
- MEMBLOCK_NONE);
-}
-
phys_addr_t __init memblock_alloc_base(phys_addr_t size, phys_addr_t align, phys_addr_t max_addr)
{
phys_addr_t alloc;
- alloc = __memblock_alloc_base(size, align, max_addr);
+ alloc = memblock_alloc_range_nid(size, align, 0, max_addr, NUMA_NO_NODE,
+ MEMBLOCK_NONE);
if (alloc == 0)
panic("ERROR: Failed to allocate %pa bytes below %pa.\n",
--
2.7.4
Currently, memblock has several internal functions with overlapping
functionality. They all call memblock_find_in_range_node() to find free
memory and then reserve the allocated range and mark it with kmemleak.
However, there are differences in the allocation constraints and in the
fallback strategies.

The allocations returning a physical address first attempt to find free
memory on the specified node within mirrored memory regions, then retry on
the same node without the requirement for memory mirroring and finally fall
back to all available memory.

The allocations returning a virtual address start with clamping the allowed
range to memblock.current_limit, then attempt to allocate from the
specified node from regions with mirroring and above a user-defined minimal
address. If such an allocation fails, the next attempt is made with the
node restriction lifted. Next, the allocation is retried with the minimal
address reset to zero and, at last, without the requirement for mirrored
regions.

Let's consolidate the various fallback handling and make it consistent for
the physical and virtual variants. Most of the fallback handling is moved
to memblock_alloc_range_nid(), which now handles the node and mirror
fallbacks.

The memblock_alloc_internal() uses memblock_alloc_range_nid() to get a
physical address of the allocated range and converts it to a virtual
address.

The fallback for allocation below the specified minimal address remains in
memblock_alloc_internal() because memblock_alloc_range_nid() is used by CMA
with an exact requirement for the lower bound.

The memblock_phys_alloc_nid() function is completely dropped as it is not
used anywhere outside memblock and its only usage can be replaced by a call
to memblock_alloc_range_nid().
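Schematically, with try_alloc() standing in for the
memblock_find_in_range_node() + memblock_reserve() pair, the consolidated
fallback logic in memblock_alloc_range_nid() is:

	again:
		/* 1: the requested node, honoring the mirroring flags */
		found = try_alloc(size, align, start, end, nid, flags);

		/* 2: any node, still honoring the mirroring flags */
		if (!found && nid != NUMA_NO_NODE)
			found = try_alloc(size, align, start, end,
					  NUMA_NO_NODE, flags);

		/* 3: drop the mirroring requirement and start over */
		if (!found && (flags & MEMBLOCK_MIRROR)) {
			flags &= ~MEMBLOCK_MIRROR;
			goto again;
		}

and memblock_alloc_internal() adds one more retry on top of this, with the
caller's min_addr reset to zero.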
Signed-off-by: Mike Rapoport <[email protected]>
---
include/linux/memblock.h | 1 -
mm/memblock.c | 173 +++++++++++++++++++++--------------------------
2 files changed, 78 insertions(+), 96 deletions(-)
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 6874fdc..cf4cd9c 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -371,7 +371,6 @@ static inline int memblock_get_region_node(const struct memblock_region *r)
phys_addr_t memblock_phys_alloc_range(phys_addr_t size, phys_addr_t align,
phys_addr_t start, phys_addr_t end);
-phys_addr_t memblock_phys_alloc_nid(phys_addr_t size, phys_addr_t align, int nid);
phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid);
static inline phys_addr_t memblock_phys_alloc(phys_addr_t size,
diff --git a/mm/memblock.c b/mm/memblock.c
index 531fa77..739f769 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1312,30 +1312,84 @@ __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
#endif /* CONFIG_DEFERRED_STRUCT_PAGE_INIT */
+/**
+ * memblock_alloc_range_nid - allocate boot memory block
+ * @size: size of memory block to be allocated in bytes
+ * @align: alignment of the region and block's size
+ * @start: the lower bound of the memory region to allocate (phys address)
+ * @end: the upper bound of the memory region to allocate (phys address)
+ * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
+ *
+ * The allocation is performed from memory region limited by
+ * memblock.current_limit if @end == %MEMBLOCK_ALLOC_ACCESSIBLE.
+ *
+ * If the specified node can not hold the requested memory the
+ * allocation falls back to any node in the system
+ *
+ * For systems with memory mirroring, the allocation is attempted first
+ * from the regions with mirroring enabled and then retried from any
+ * memory region.
+ *
+ * In addition, function sets the min_count to 0 using kmemleak_alloc_phys for
+ * allocated boot memory block, so that it is never reported as leaks.
+ *
+ * Return:
+ * Physical address of allocated memory block on success, %0 on failure.
+ */
static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
phys_addr_t align, phys_addr_t start,
- phys_addr_t end, int nid,
- enum memblock_flags flags)
+ phys_addr_t end, int nid)
{
+ enum memblock_flags flags = choose_memblock_flags();
phys_addr_t found;
+ if (WARN_ONCE(nid == MAX_NUMNODES, "Usage of MAX_NUMNODES is deprecated. Use NUMA_NO_NODE instead\n"))
+ nid = NUMA_NO_NODE;
+
if (!align) {
/* Can't use WARNs this early in boot on powerpc */
dump_stack();
align = SMP_CACHE_BYTES;
}
+ if (end > memblock.current_limit)
+ end = memblock.current_limit;
+
+again:
found = memblock_find_in_range_node(size, align, start, end, nid,
flags);
- if (found && !memblock_reserve(found, size)) {
+ if (found && !memblock_reserve(found, size))
+ goto done;
+
+ if (nid != NUMA_NO_NODE) {
+ found = memblock_find_in_range_node(size, align, start,
+ end, NUMA_NO_NODE,
+ flags);
+ if (found && !memblock_reserve(found, size))
+ goto done;
+ }
+
+ if (flags & MEMBLOCK_MIRROR) {
+ flags &= ~MEMBLOCK_MIRROR;
+ pr_warn("Could not allocate %pap bytes of mirrored memory\n",
+ &size);
+ goto again;
+ }
+
+ return 0;
+
+done:
+ /* Skip kmemleak for kasan_init() due to high volume. */
+ if (end != MEMBLOCK_ALLOC_KASAN)
/*
- * The min_count is set to 0 so that memblock allocations are
- * never reported as leaks.
+ * The min_count is set to 0 so that memblock allocated
+ * blocks are never reported as leaks. This is because many
+ * of these blocks are only referred via the physical
+ * address which is not looked up by kmemleak.
*/
kmemleak_alloc_phys(found, size, 0, 0);
- return found;
- }
- return 0;
+
+ return found;
}
phys_addr_t __init memblock_phys_alloc_range(phys_addr_t size,
@@ -1343,35 +1397,13 @@ phys_addr_t __init memblock_phys_alloc_range(phys_addr_t size,
phys_addr_t start,
phys_addr_t end)
{
- return memblock_alloc_range_nid(size, align, start, end, NUMA_NO_NODE,
- MEMBLOCK_NONE);
-}
-
-phys_addr_t __init memblock_phys_alloc_nid(phys_addr_t size, phys_addr_t align, int nid)
-{
- enum memblock_flags flags = choose_memblock_flags();
- phys_addr_t ret;
-
-again:
- ret = memblock_alloc_range_nid(size, align, 0,
- MEMBLOCK_ALLOC_ACCESSIBLE, nid, flags);
-
- if (!ret && (flags & MEMBLOCK_MIRROR)) {
- flags &= ~MEMBLOCK_MIRROR;
- goto again;
- }
- return ret;
+ return memblock_alloc_range_nid(size, align, start, end, NUMA_NO_NODE);
}
phys_addr_t __init memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid)
{
- phys_addr_t res = memblock_phys_alloc_nid(size, align, nid);
-
- if (res)
- return res;
- return memblock_alloc_range_nid(size, align, 0,
- MEMBLOCK_ALLOC_ACCESSIBLE,
- NUMA_NO_NODE, MEMBLOCK_NONE);
+ return memblock_alloc_range_nid(size, align, 0,
+ MEMBLOCK_ALLOC_ACCESSIBLE, nid);
}
/**
@@ -1382,19 +1414,13 @@ phys_addr_t __init memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t ali
* @max_addr: the upper bound of the memory region to allocate (phys address)
* @nid: nid of the free area to find, %NUMA_NO_NODE for any node
*
- * The @min_addr limit is dropped if it can not be satisfied and the allocation
- * will fall back to memory below @min_addr. Also, allocation may fall back
- * to any node in the system if the specified node can not
- * hold the requested memory.
- *
- * The allocation is performed from memory region limited by
- * memblock.current_limit if @max_addr == %MEMBLOCK_ALLOC_ACCESSIBLE.
- *
- * The phys address of allocated boot memory block is converted to virtual and
- * allocated memory is reset to 0.
+ * Allocates memory block using memblock_alloc_range_nid() and
+ * converts the returned physical address to virtual.
*
- * In addition, function sets the min_count to 0 using kmemleak_alloc for
- * allocated boot memory block, so that it is never reported as leaks.
+ * The @min_addr limit is dropped if it can not be satisfied and the allocation
+ * will fall back to memory below @min_addr. Other constraints, such
+ * as node and mirrored memory will be handled again in
+ * memblock_alloc_range_nid().
*
* Return:
* Virtual address of allocated memory block on success, NULL on failure.
@@ -1405,11 +1431,6 @@ static void * __init memblock_alloc_internal(
int nid)
{
phys_addr_t alloc;
- void *ptr;
- enum memblock_flags flags = choose_memblock_flags();
-
- if (WARN_ONCE(nid == MAX_NUMNODES, "Usage of MAX_NUMNODES is deprecated. Use NUMA_NO_NODE instead\n"))
- nid = NUMA_NO_NODE;
/*
* Detect any accidental use of these APIs after slab is ready, as at
@@ -1419,54 +1440,16 @@ static void * __init memblock_alloc_internal(
if (WARN_ON_ONCE(slab_is_available()))
return kzalloc_node(size, GFP_NOWAIT, nid);
- if (!align) {
- dump_stack();
- align = SMP_CACHE_BYTES;
- }
-
- if (max_addr > memblock.current_limit)
- max_addr = memblock.current_limit;
-again:
- alloc = memblock_find_in_range_node(size, align, min_addr, max_addr,
- nid, flags);
- if (alloc && !memblock_reserve(alloc, size))
- goto done;
-
- if (nid != NUMA_NO_NODE) {
- alloc = memblock_find_in_range_node(size, align, min_addr,
- max_addr, NUMA_NO_NODE,
- flags);
- if (alloc && !memblock_reserve(alloc, size))
- goto done;
- }
-
- if (min_addr) {
- min_addr = 0;
- goto again;
- }
-
- if (flags & MEMBLOCK_MIRROR) {
- flags &= ~MEMBLOCK_MIRROR;
- pr_warn("Could not allocate %pap bytes of mirrored memory\n",
- &size);
- goto again;
- }
+ alloc = memblock_alloc_range_nid(size, align, min_addr, max_addr, nid);
- return NULL;
-done:
- ptr = phys_to_virt(alloc);
+ /* retry allocation without lower limit */
+ if (!alloc && min_addr)
+ alloc = memblock_alloc_range_nid(size, align, 0, max_addr, nid);
- /* Skip kmemleak for kasan_init() due to high volume. */
- if (max_addr != MEMBLOCK_ALLOC_KASAN)
- /*
- * The min_count is set to 0 so that bootmem allocated
- * blocks are never reported as leaks. This is because many
- * of these blocks are only referred via the physical
- * address which is not looked up by kmemleak.
- */
- kmemleak_alloc(ptr, size, 0, 0);
+ if (!alloc)
+ return NULL;
- return ptr;
+ return phys_to_virt(alloc);
}
/**
--
2.7.4
The memblock_phys_alloc_try_nid() function tries to allocate memory from
the requested node and then falls back to allocation from any node in the
system. The memblock_alloc_base() fallback used by this function panics if
the allocation fails.
Replace the memblock_alloc_base() fallback with a direct call to
memblock_alloc_range_nid() and update the memblock_phys_alloc_try_nid()
callers to check the returned value and panic in case of error.
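For instance, the NUMA node data setup now open-codes the panic(), as in
the arm64 and powerpc hunks below:

	nd_pa = memblock_phys_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
	if (!nd_pa)
		panic("Cannot allocate %zu bytes for node %d data\n",
		      nd_size, nid);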
Signed-off-by: Mike Rapoport <[email protected]>
---
arch/arm64/mm/numa.c | 4 ++++
arch/powerpc/mm/numa.c | 4 ++++
mm/memblock.c | 4 +++-
3 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
index ae34e3a..2c61ea4 100644
--- a/arch/arm64/mm/numa.c
+++ b/arch/arm64/mm/numa.c
@@ -237,6 +237,10 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
pr_info("Initmem setup node %d [<memory-less node>]\n", nid);
nd_pa = memblock_phys_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
+ if (!nd_pa)
+ panic("Cannot allocate %zu bytes for node %d data\n",
+ nd_size, nid);
+
nd = __va(nd_pa);
/* report and initialize */
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 270cefb..8f2bbe1 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -788,6 +788,10 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
int tnid;
nd_pa = memblock_phys_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
+ if (!nd_pa)
+ panic("Cannot allocate %zu bytes for node %d data\n",
+ nd_size, nid);
+
nd = __va(nd_pa);
/* report and initialize */
diff --git a/mm/memblock.c b/mm/memblock.c
index f019aee..8aabb1b 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1393,7 +1393,9 @@ phys_addr_t __init memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t ali
if (res)
return res;
- return memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ACCESSIBLE);
+ return memblock_alloc_range_nid(size, align, 0,
+ MEMBLOCK_ALLOC_ACCESSIBLE,
+ NUMA_NO_NODE, MEMBLOCK_NONE);
}
/**
--
2.7.4
The last parameter of memblock_alloc_from() is the lower limit for the
memory allocation. When it is 0, the call is equivalent to
memblock_alloc().
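Given the memblock_alloc_from() definition in include/linux/memblock.h,
both forms boil down to

	memblock_alloc_try_nid(size, align, 0 /* min_addr */,
			       MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);

since MEMBLOCK_LOW_LIMIT, used by memblock_alloc(), is 0 as well.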
Signed-off-by: Mike Rapoport <[email protected]>
Acked-by: Paul Burton <[email protected]> # MIPS part
---
arch/alpha/kernel/core_cia.c | 2 +-
arch/alpha/kernel/pci_iommu.c | 4 ++--
arch/alpha/kernel/setup.c | 2 +-
arch/ia64/kernel/mca.c | 3 +--
arch/mips/kernel/traps.c | 2 +-
arch/sparc/kernel/prom_32.c | 2 +-
arch/sparc/mm/init_32.c | 2 +-
arch/sparc/mm/srmmu.c | 10 +++++-----
8 files changed, 13 insertions(+), 14 deletions(-)
diff --git a/arch/alpha/kernel/core_cia.c b/arch/alpha/kernel/core_cia.c
index 867e873..466cd44 100644
--- a/arch/alpha/kernel/core_cia.c
+++ b/arch/alpha/kernel/core_cia.c
@@ -331,7 +331,7 @@ cia_prepare_tbia_workaround(int window)
long i;
/* Use minimal 1K map. */
- ppte = memblock_alloc_from(CIA_BROKEN_TBIA_SIZE, 32768, 0);
+ ppte = memblock_alloc(CIA_BROKEN_TBIA_SIZE, 32768);
pte = (virt_to_phys(ppte) >> (PAGE_SHIFT - 1)) | 1;
for (i = 0; i < CIA_BROKEN_TBIA_SIZE / sizeof(unsigned long); ++i)
diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c
index aa0f50d..e4cf77b 100644
--- a/arch/alpha/kernel/pci_iommu.c
+++ b/arch/alpha/kernel/pci_iommu.c
@@ -87,13 +87,13 @@ iommu_arena_new_node(int nid, struct pci_controller *hose, dma_addr_t base,
printk("%s: couldn't allocate arena ptes from node %d\n"
" falling back to system-wide allocation\n",
__func__, nid);
- arena->ptes = memblock_alloc_from(mem_size, align, 0);
+ arena->ptes = memblock_alloc(mem_size, align);
}
#else /* CONFIG_DISCONTIGMEM */
arena = memblock_alloc(sizeof(*arena), SMP_CACHE_BYTES);
- arena->ptes = memblock_alloc_from(mem_size, align, 0);
+ arena->ptes = memblock_alloc(mem_size, align);
#endif /* CONFIG_DISCONTIGMEM */
diff --git a/arch/alpha/kernel/setup.c b/arch/alpha/kernel/setup.c
index 4b5b1b2..5d4c76a 100644
--- a/arch/alpha/kernel/setup.c
+++ b/arch/alpha/kernel/setup.c
@@ -293,7 +293,7 @@ move_initrd(unsigned long mem_limit)
unsigned long size;
size = initrd_end - initrd_start;
- start = memblock_alloc_from(PAGE_ALIGN(size), PAGE_SIZE, 0);
+ start = memblock_alloc(PAGE_ALIGN(size), PAGE_SIZE);
if (!start || __pa(start) + size > mem_limit) {
initrd_start = initrd_end = 0;
return NULL;
diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c
index 91bd1e1..74d148b 100644
--- a/arch/ia64/kernel/mca.c
+++ b/arch/ia64/kernel/mca.c
@@ -1835,8 +1835,7 @@ format_mca_init_stack(void *mca_data, unsigned long offset,
/* Caller prevents this from being called after init */
static void * __ref mca_bootmem(void)
{
- return memblock_alloc_from(sizeof(struct ia64_mca_cpu),
- KERNEL_STACK_SIZE, 0);
+ return memblock_alloc(sizeof(struct ia64_mca_cpu), KERNEL_STACK_SIZE);
}
/* Do per-CPU MCA-related initialization. */
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index c91097f..2bbdee5 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -2291,7 +2291,7 @@ void __init trap_init(void)
phys_addr_t ebase_pa;
ebase = (unsigned long)
- memblock_alloc_from(size, 1 << fls(size), 0);
+ memblock_alloc(size, 1 << fls(size));
/*
* Try to ensure ebase resides in KSeg0 if possible.
diff --git a/arch/sparc/kernel/prom_32.c b/arch/sparc/kernel/prom_32.c
index 42d7f2a..38940af 100644
--- a/arch/sparc/kernel/prom_32.c
+++ b/arch/sparc/kernel/prom_32.c
@@ -32,7 +32,7 @@ void * __init prom_early_alloc(unsigned long size)
{
void *ret;
- ret = memblock_alloc_from(size, SMP_CACHE_BYTES, 0UL);
+ ret = memblock_alloc(size, SMP_CACHE_BYTES);
if (ret != NULL)
memset(ret, 0, size);
diff --git a/arch/sparc/mm/init_32.c b/arch/sparc/mm/init_32.c
index d900952..a8ff298 100644
--- a/arch/sparc/mm/init_32.c
+++ b/arch/sparc/mm/init_32.c
@@ -264,7 +264,7 @@ void __init mem_init(void)
i = last_valid_pfn >> ((20 - PAGE_SHIFT) + 5);
i += 1;
sparc_valid_addr_bitmap = (unsigned long *)
- memblock_alloc_from(i << 2, SMP_CACHE_BYTES, 0UL);
+ memblock_alloc(i << 2, SMP_CACHE_BYTES);
if (sparc_valid_addr_bitmap == NULL) {
prom_printf("mem_init: Cannot alloc valid_addr_bitmap.\n");
diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index b609362..a400ec3 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -303,13 +303,13 @@ static void __init srmmu_nocache_init(void)
bitmap_bits = srmmu_nocache_size >> SRMMU_NOCACHE_BITMAP_SHIFT;
- srmmu_nocache_pool = memblock_alloc_from(srmmu_nocache_size,
- SRMMU_NOCACHE_ALIGN_MAX, 0UL);
+ srmmu_nocache_pool = memblock_alloc(srmmu_nocache_size,
+ SRMMU_NOCACHE_ALIGN_MAX);
memset(srmmu_nocache_pool, 0, srmmu_nocache_size);
srmmu_nocache_bitmap =
- memblock_alloc_from(BITS_TO_LONGS(bitmap_bits) * sizeof(long),
- SMP_CACHE_BYTES, 0UL);
+ memblock_alloc(BITS_TO_LONGS(bitmap_bits) * sizeof(long),
+ SMP_CACHE_BYTES);
bit_map_init(&srmmu_nocache_map, srmmu_nocache_bitmap, bitmap_bits);
srmmu_swapper_pg_dir = __srmmu_get_nocache(SRMMU_PGD_TABLE_SIZE, SRMMU_PGD_TABLE_SIZE);
@@ -467,7 +467,7 @@ static void __init sparc_context_init(int numctx)
unsigned long size;
size = numctx * sizeof(struct ctx_list);
- ctx_list_pool = memblock_alloc_from(size, SMP_CACHE_BYTES, 0UL);
+ ctx_list_pool = memblock_alloc(size, SMP_CACHE_BYTES);
for (ctx = 0; ctx < numctx; ctx++) {
struct ctx_list *clist;
--
2.7.4
Add panic() calls if memblock_alloc() returns NULL.
The panic() format duplicates the one used by memblock itself and, to
avoid an explosion of long parameter lists, the open-coded allocation size
calculations are replaced with a local variable.
Signed-off-by: Mike Rapoport <[email protected]>
---
kernel/dma/swiotlb.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index d636177..e78835c8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -191,6 +191,7 @@ void __init swiotlb_update_mem_attributes(void)
int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
{
unsigned long i, bytes;
+ size_t alloc_size;
bytes = nslabs << IO_TLB_SHIFT;
@@ -203,12 +204,18 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
* to find contiguous free memory regions of size up to IO_TLB_SEGSIZE
* between io_tlb_start and io_tlb_end.
*/
- io_tlb_list = memblock_alloc(
- PAGE_ALIGN(io_tlb_nslabs * sizeof(int)),
- PAGE_SIZE);
- io_tlb_orig_addr = memblock_alloc(
- PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t)),
- PAGE_SIZE);
+ alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(int));
+ io_tlb_list = memblock_alloc(alloc_size, PAGE_SIZE);
+ if (!io_tlb_list)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, alloc_size, PAGE_SIZE);
+
+ alloc_size = PAGE_ALIGN(io_tlb_nslabs * sizeof(phys_addr_t));
+ io_tlb_orig_addr = memblock_alloc(alloc_size, PAGE_SIZE);
+ if (!io_tlb_orig_addr)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, alloc_size, PAGE_SIZE);
+
for (i = 0; i < io_tlb_nslabs; i++) {
io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
--
2.7.4
As all the memblock_alloc*() users now check the return value and
panic() in case of error, the panic() call can be removed from the core
memblock allocator, namely memblock_alloc_try_nid().
Signed-off-by: Mike Rapoport <[email protected]>
---
mm/memblock.c | 15 +++++----------
1 file changed, 5 insertions(+), 10 deletions(-)
diff --git a/mm/memblock.c b/mm/memblock.c
index 03b3929..7164275 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1526,7 +1526,7 @@ void * __init memblock_alloc_try_nid_nopanic(
}
/**
- * memblock_alloc_try_nid - allocate boot memory block with panicking
+ * memblock_alloc_try_nid - allocate boot memory block
* @size: size of memory block to be allocated in bytes
* @align: alignment of the region and block's size
* @min_addr: the lower bound of the memory region from where the allocation
@@ -1536,9 +1536,8 @@ void * __init memblock_alloc_try_nid_nopanic(
* allocate only from memory limited by memblock.current_limit value
* @nid: nid of the free area to find, %NUMA_NO_NODE for any node
*
- * Public panicking version of memblock_alloc_try_nid_nopanic()
- * which provides debug information (including caller info), if enabled,
- * and panics if the request can not be satisfied.
+ * Public function, provides additional debug information (including caller
+ * info), if enabled. This function zeroes the allocated memory.
*
* Return:
* Virtual address of allocated memory block on success, NULL on failure.
@@ -1555,14 +1554,10 @@ void * __init memblock_alloc_try_nid(
&max_addr, (void *)_RET_IP_);
ptr = memblock_alloc_internal(size, align,
min_addr, max_addr, nid);
- if (ptr) {
+ if (ptr)
memset(ptr, 0, size);
- return ptr;
- }
- panic("%s: Failed to allocate %llu bytes align=0x%llx nid=%d from=%pa max_addr=%pa\n",
- __func__, (u64)size, (u64)align, nid, &min_addr, &max_addr);
- return NULL;
+ return ptr;
}
/**
--
2.7.4
As all the memblock allocation functions return NULL in case of error
rather than panic(), the duplicates with the _nopanic suffix can be removed.
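For callers that already coped with a NULL return, the conversion is a
plain rename, e.g. in drivers/firmware/memmap.c below:

	entry = memblock_alloc(sizeof(struct firmware_map_entry),
			       SMP_CACHE_BYTES);
	if (WARN_ON(!entry))
		return -ENOMEM;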
Signed-off-by: Mike Rapoport <[email protected]>
Acked-by: Greg Kroah-Hartman <[email protected]>
---
arch/arc/kernel/unwind.c | 3 +--
arch/sh/mm/init.c | 2 +-
arch/x86/kernel/setup_percpu.c | 10 +++++-----
arch/x86/mm/kasan_init_64.c | 14 ++++++++------
drivers/firmware/memmap.c | 2 +-
drivers/usb/early/xhci-dbc.c | 2 +-
include/linux/memblock.h | 35 -----------------------------------
kernel/dma/swiotlb.c | 2 +-
kernel/printk/printk.c | 9 +--------
mm/memblock.c | 35 -----------------------------------
mm/page_alloc.c | 10 +++++-----
mm/page_ext.c | 2 +-
mm/percpu.c | 11 ++++-------
mm/sparse.c | 6 ++----
14 files changed, 31 insertions(+), 112 deletions(-)
diff --git a/arch/arc/kernel/unwind.c b/arch/arc/kernel/unwind.c
index d34f69e..271e9fa 100644
--- a/arch/arc/kernel/unwind.c
+++ b/arch/arc/kernel/unwind.c
@@ -181,8 +181,7 @@ static void init_unwind_hdr(struct unwind_table *table,
*/
static void *__init unw_hdr_alloc_early(unsigned long sz)
{
- return memblock_alloc_from_nopanic(sz, sizeof(unsigned int),
- MAX_DMA_ADDRESS);
+ return memblock_alloc_from(sz, sizeof(unsigned int), MAX_DMA_ADDRESS);
}
static void *unw_hdr_alloc(unsigned long sz)
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index fceefd9..7062132 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -202,7 +202,7 @@ void __init allocate_pgdat(unsigned int nid)
get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
#ifdef CONFIG_NEED_MULTIPLE_NODES
- NODE_DATA(nid) = memblock_alloc_try_nid_nopanic(
+ NODE_DATA(nid) = memblock_alloc_try_nid(
sizeof(struct pglist_data),
SMP_CACHE_BYTES, MEMBLOCK_LOW_LIMIT,
MEMBLOCK_ALLOC_ACCESSIBLE, nid);
diff --git a/arch/x86/kernel/setup_percpu.c b/arch/x86/kernel/setup_percpu.c
index e8796fc..0c5e9bf 100644
--- a/arch/x86/kernel/setup_percpu.c
+++ b/arch/x86/kernel/setup_percpu.c
@@ -106,22 +106,22 @@ static void * __init pcpu_alloc_bootmem(unsigned int cpu, unsigned long size,
void *ptr;
if (!node_online(node) || !NODE_DATA(node)) {
- ptr = memblock_alloc_from_nopanic(size, align, goal);
+ ptr = memblock_alloc_from(size, align, goal);
pr_info("cpu %d has no node %d or node-local memory\n",
cpu, node);
pr_debug("per cpu data for cpu%d %lu bytes at %016lx\n",
cpu, size, __pa(ptr));
} else {
- ptr = memblock_alloc_try_nid_nopanic(size, align, goal,
- MEMBLOCK_ALLOC_ACCESSIBLE,
- node);
+ ptr = memblock_alloc_try_nid(size, align, goal,
+ MEMBLOCK_ALLOC_ACCESSIBLE,
+ node);
pr_debug("per cpu data for cpu%d %lu bytes on node%d at %016lx\n",
cpu, size, node, __pa(ptr));
}
return ptr;
#else
- return memblock_alloc_from_nopanic(size, align, goal);
+ return memblock_alloc_from(size, align, goal);
#endif
}
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 462fde8..8dc0fc0 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -24,14 +24,16 @@ extern struct range pfn_mapped[E820_MAX_ENTRIES];
static p4d_t tmp_p4d_table[MAX_PTRS_PER_P4D] __initdata __aligned(PAGE_SIZE);
-static __init void *early_alloc(size_t size, int nid, bool panic)
+static __init void *early_alloc(size_t size, int nid, bool should_panic)
{
- if (panic)
- return memblock_alloc_try_nid(size, size,
- __pa(MAX_DMA_ADDRESS), MEMBLOCK_ALLOC_ACCESSIBLE, nid);
- else
- return memblock_alloc_try_nid_nopanic(size, size,
+ void *ptr = memblock_alloc_try_nid(size, size,
__pa(MAX_DMA_ADDRESS), MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+
+ if (!ptr && should_panic)
+ panic("%pS: Failed to allocate page, nid=%d from=%lx\n",
+ (void *)_RET_IP_, nid, __pa(MAX_DMA_ADDRESS));
+
+ return ptr;
}
static void __init kasan_populate_pmd(pmd_t *pmd, unsigned long addr,
diff --git a/drivers/firmware/memmap.c b/drivers/firmware/memmap.c
index ec4fd25..d168c87 100644
--- a/drivers/firmware/memmap.c
+++ b/drivers/firmware/memmap.c
@@ -333,7 +333,7 @@ int __init firmware_map_add_early(u64 start, u64 end, const char *type)
{
struct firmware_map_entry *entry;
- entry = memblock_alloc_nopanic(sizeof(struct firmware_map_entry),
+ entry = memblock_alloc(sizeof(struct firmware_map_entry),
SMP_CACHE_BYTES);
if (WARN_ON(!entry))
return -ENOMEM;
diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
index d2652dc..c9cfb10 100644
--- a/drivers/usb/early/xhci-dbc.c
+++ b/drivers/usb/early/xhci-dbc.c
@@ -94,7 +94,7 @@ static void * __init xdbc_get_page(dma_addr_t *dma_addr)
{
void *virt;
- virt = memblock_alloc_nopanic(PAGE_SIZE, PAGE_SIZE);
+ virt = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
if (!virt)
return NULL;
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index f5a83a1..71c9e32 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -379,9 +379,6 @@ static inline phys_addr_t memblock_phys_alloc(phys_addr_t size,
void *memblock_alloc_try_nid_raw(phys_addr_t size, phys_addr_t align,
phys_addr_t min_addr, phys_addr_t max_addr,
int nid);
-void *memblock_alloc_try_nid_nopanic(phys_addr_t size, phys_addr_t align,
- phys_addr_t min_addr, phys_addr_t max_addr,
- int nid);
void *memblock_alloc_try_nid(phys_addr_t size, phys_addr_t align,
phys_addr_t min_addr, phys_addr_t max_addr,
int nid);
@@ -408,36 +405,12 @@ static inline void * __init memblock_alloc_from(phys_addr_t size,
MEMBLOCK_ALLOC_ACCESSIBLE, NUMA_NO_NODE);
}
-static inline void * __init memblock_alloc_nopanic(phys_addr_t size,
- phys_addr_t align)
-{
- return memblock_alloc_try_nid_nopanic(size, align, MEMBLOCK_LOW_LIMIT,
- MEMBLOCK_ALLOC_ACCESSIBLE,
- NUMA_NO_NODE);
-}
-
static inline void * __init memblock_alloc_low(phys_addr_t size,
phys_addr_t align)
{
return memblock_alloc_try_nid(size, align, MEMBLOCK_LOW_LIMIT,
ARCH_LOW_ADDRESS_LIMIT, NUMA_NO_NODE);
}
-static inline void * __init memblock_alloc_low_nopanic(phys_addr_t size,
- phys_addr_t align)
-{
- return memblock_alloc_try_nid_nopanic(size, align, MEMBLOCK_LOW_LIMIT,
- ARCH_LOW_ADDRESS_LIMIT,
- NUMA_NO_NODE);
-}
-
-static inline void * __init memblock_alloc_from_nopanic(phys_addr_t size,
- phys_addr_t align,
- phys_addr_t min_addr)
-{
- return memblock_alloc_try_nid_nopanic(size, align, min_addr,
- MEMBLOCK_ALLOC_ACCESSIBLE,
- NUMA_NO_NODE);
-}
static inline void * __init memblock_alloc_node(phys_addr_t size,
phys_addr_t align, int nid)
@@ -446,14 +419,6 @@ static inline void * __init memblock_alloc_node(phys_addr_t size,
MEMBLOCK_ALLOC_ACCESSIBLE, nid);
}
-static inline void * __init memblock_alloc_node_nopanic(phys_addr_t size,
- int nid)
-{
- return memblock_alloc_try_nid_nopanic(size, SMP_CACHE_BYTES,
- MEMBLOCK_LOW_LIMIT,
- MEMBLOCK_ALLOC_ACCESSIBLE, nid);
-}
-
static inline void __init memblock_free_early(phys_addr_t base,
phys_addr_t size)
{
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index e78835c8..659fc2a5 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -248,7 +248,7 @@ swiotlb_init(int verbose)
bytes = io_tlb_nslabs << IO_TLB_SHIFT;
/* Get IO TLB memory from the low pages */
- vstart = memblock_alloc_low_nopanic(PAGE_ALIGN(bytes), PAGE_SIZE);
+ vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
return;
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index c4f0a41..35cb48b5 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1147,14 +1147,7 @@ void __init setup_log_buf(int early)
if (!new_log_buf_len)
return;
- if (early) {
- new_log_buf =
- memblock_alloc(new_log_buf_len, LOG_ALIGN);
- } else {
- new_log_buf = memblock_alloc_nopanic(new_log_buf_len,
- LOG_ALIGN);
- }
-
+ new_log_buf = memblock_alloc(new_log_buf_len, LOG_ALIGN);
if (unlikely(!new_log_buf)) {
pr_err("log_buf_len: %lu bytes not available\n",
new_log_buf_len);
diff --git a/mm/memblock.c b/mm/memblock.c
index 7164275..522a44e 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1491,41 +1491,6 @@ void * __init memblock_alloc_try_nid_raw(
}
/**
- * memblock_alloc_try_nid_nopanic - allocate boot memory block
- * @size: size of memory block to be allocated in bytes
- * @align: alignment of the region and block's size
- * @min_addr: the lower bound of the memory region from where the allocation
- * is preferred (phys address)
- * @max_addr: the upper bound of the memory region from where the allocation
- * is preferred (phys address), or %MEMBLOCK_ALLOC_ACCESSIBLE to
- * allocate only from memory limited by memblock.current_limit value
- * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
- *
- * Public function, provides additional debug information (including caller
- * info), if enabled. This function zeroes the allocated memory.
- *
- * Return:
- * Virtual address of allocated memory block on success, NULL on failure.
- */
-void * __init memblock_alloc_try_nid_nopanic(
- phys_addr_t size, phys_addr_t align,
- phys_addr_t min_addr, phys_addr_t max_addr,
- int nid)
-{
- void *ptr;
-
- memblock_dbg("%s: %llu bytes align=0x%llx nid=%d from=%pa max_addr=%pa %pF\n",
- __func__, (u64)size, (u64)align, nid, &min_addr,
- &max_addr, (void *)_RET_IP_);
-
- ptr = memblock_alloc_internal(size, align,
- min_addr, max_addr, nid);
- if (ptr)
- memset(ptr, 0, size);
- return ptr;
-}
-
-/**
* memblock_alloc_try_nid - allocate boot memory block
* @size: size of memory block to be allocated in bytes
* @align: alignment of the region and block's size
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d7a5219..cd5c593 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6556,8 +6556,8 @@ static void __ref setup_usemap(struct pglist_data *pgdat,
zone->pageblock_flags = NULL;
if (usemapsize)
zone->pageblock_flags =
- memblock_alloc_node_nopanic(usemapsize,
- pgdat->node_id);
+ memblock_alloc_node(usemapsize, SMP_CACHE_BYTES,
+ pgdat->node_id);
}
#else
static inline void setup_usemap(struct pglist_data *pgdat, struct zone *zone,
@@ -6786,7 +6786,8 @@ static void __ref alloc_node_mem_map(struct pglist_data *pgdat)
end = pgdat_end_pfn(pgdat);
end = ALIGN(end, MAX_ORDER_NR_PAGES);
size = (end - start) * sizeof(struct page);
- map = memblock_alloc_node_nopanic(size, pgdat->node_id);
+ map = memblock_alloc_node(size, SMP_CACHE_BYTES,
+ pgdat->node_id);
pgdat->node_mem_map = map + offset;
}
pr_debug("%s: node %d, pgdat %08lx, node_mem_map %08lx\n",
@@ -8064,8 +8065,7 @@ void *__init alloc_large_system_hash(const char *tablename,
size = bucketsize << log2qty;
if (flags & HASH_EARLY) {
if (flags & HASH_ZERO)
- table = memblock_alloc_nopanic(size,
- SMP_CACHE_BYTES);
+ table = memblock_alloc(size, SMP_CACHE_BYTES);
else
table = memblock_alloc_raw(size,
SMP_CACHE_BYTES);
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 0cfaa06..a3db109 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -161,7 +161,7 @@ static int __init alloc_node_page_ext(int nid)
table_size = get_entry_size() * nr_pages;
- base = memblock_alloc_try_nid_nopanic(
+ base = memblock_alloc_try_nid(
table_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
MEMBLOCK_ALLOC_ACCESSIBLE, nid);
if (!base)
diff --git a/mm/percpu.c b/mm/percpu.c
index 5998b03..e302b81 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1905,7 +1905,7 @@ struct pcpu_alloc_info * __init pcpu_alloc_alloc_info(int nr_groups,
__alignof__(ai->groups[0].cpu_map[0]));
ai_size = base_size + nr_units * sizeof(ai->groups[0].cpu_map[0]);
- ptr = memblock_alloc_nopanic(PFN_ALIGN(ai_size), PAGE_SIZE);
+ ptr = memblock_alloc(PFN_ALIGN(ai_size), PAGE_SIZE);
if (!ptr)
return NULL;
ai = ptr;
@@ -2496,7 +2496,7 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
size_sum = ai->static_size + ai->reserved_size + ai->dyn_size;
areas_size = PFN_ALIGN(ai->nr_groups * sizeof(void *));
- areas = memblock_alloc_nopanic(areas_size, SMP_CACHE_BYTES);
+ areas = memblock_alloc(areas_size, SMP_CACHE_BYTES);
if (!areas) {
rc = -ENOMEM;
goto out_free;
@@ -2729,8 +2729,7 @@ EXPORT_SYMBOL(__per_cpu_offset);
static void * __init pcpu_dfl_fc_alloc(unsigned int cpu, size_t size,
size_t align)
{
- return memblock_alloc_from_nopanic(
- size, align, __pa(MAX_DMA_ADDRESS));
+ return memblock_alloc_from(size, align, __pa(MAX_DMA_ADDRESS));
}
static void __init pcpu_dfl_fc_free(void *ptr, size_t size)
@@ -2778,9 +2777,7 @@ void __init setup_per_cpu_areas(void)
void *fc;
ai = pcpu_alloc_alloc_info(1, 1);
- fc = memblock_alloc_from_nopanic(unit_size,
- PAGE_SIZE,
- __pa(MAX_DMA_ADDRESS));
+ fc = memblock_alloc_from(unit_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
if (!ai || !fc)
panic("Failed to allocate memory for percpu areas.");
/* kmemleak tracks the percpu allocations separately */
diff --git a/mm/sparse.c b/mm/sparse.c
index ad94242..1471f06 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -330,9 +330,7 @@ sparse_early_usemaps_alloc_pgdat_section(struct pglist_data *pgdat,
limit = goal + (1UL << PA_SECTION_SHIFT);
nid = early_pfn_to_nid(goal >> PAGE_SHIFT);
again:
- p = memblock_alloc_try_nid_nopanic(size,
- SMP_CACHE_BYTES, goal, limit,
- nid);
+ p = memblock_alloc_try_nid(size, SMP_CACHE_BYTES, goal, limit, nid);
if (!p && limit) {
limit = 0;
goto again;
@@ -386,7 +384,7 @@ static unsigned long * __init
sparse_early_usemaps_alloc_pgdat_section(struct pglist_data *pgdat,
unsigned long size)
{
- return memblock_alloc_node_nopanic(size, pgdat->node_id);
+ return memblock_alloc_node(size, SMP_CACHE_BYTES, pgdat->node_id);
}
static void __init check_usemap_section_nr(int nid, unsigned long *usemap)
--
2.7.4
The memblock_alloc_base() function tries to allocate memory up to the
limit specified by its max_addr parameter and panics if the allocation
fails. Replace its usage with memblock_phys_alloc_range() and make the
callers check the return value and panic in case of error.
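The conversion pattern is the same as for __memblock_alloc_base(): the
limit becomes an explicit range and the panic() is open-coded, e.g.
(schematically):

	paddr = memblock_phys_alloc_range(size, align, 0, max_addr);
	if (!paddr)
		panic("ERROR: Failed to allocate %pa bytes below %pa\n",
		      &size, &max_addr);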
Signed-off-by: Mike Rapoport <[email protected]>
---
arch/powerpc/kernel/rtas.c | 6 +++++-
arch/powerpc/mm/hash_utils_64.c | 8 ++++++--
arch/s390/kernel/smp.c | 6 +++++-
drivers/macintosh/smu.c | 2 +-
include/linux/memblock.h | 2 --
mm/memblock.c | 14 --------------
6 files changed, 17 insertions(+), 21 deletions(-)
diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
index de35bd8f..fbc6761 100644
--- a/arch/powerpc/kernel/rtas.c
+++ b/arch/powerpc/kernel/rtas.c
@@ -1187,7 +1187,11 @@ void __init rtas_initialize(void)
ibm_suspend_me_token = rtas_token("ibm,suspend-me");
}
#endif
- rtas_rmo_buf = memblock_alloc_base(RTAS_RMOBUF_MAX, PAGE_SIZE, rtas_region);
+ rtas_rmo_buf = memblock_phys_alloc_range(RTAS_RMOBUF_MAX, PAGE_SIZE,
+ 0, rtas_region);
+ if (!rtas_rmo_buf)
+ panic("ERROR: RTAS: Failed to allocate %lx bytes below %pa\n",
+ PAGE_SIZE, &rtas_region);
#ifdef CONFIG_RTAS_ERROR_LOGGING
rtas_last_error_token = rtas_token("rtas-last-error");
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index bc6be44..c7d5f48 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -882,8 +882,12 @@ static void __init htab_initialize(void)
}
#endif /* CONFIG_PPC_CELL */
- table = memblock_alloc_base(htab_size_bytes, htab_size_bytes,
- limit);
+ table = memblock_phys_alloc_range(htab_size_bytes,
+ htab_size_bytes,
+ 0, limit);
+ if (!table)
+ panic("ERROR: Failed to allocate %pa bytes below %pa\n",
+ &htab_size_bytes, &limit);
DBG("Hash table allocated at %lx, size: %lx\n", table,
htab_size_bytes);
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index f82b3d3..9061597 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -651,7 +651,11 @@ void __init smp_save_dump_cpus(void)
/* No previous system present, normal boot. */
return;
/* Allocate a page as dumping area for the store status sigps */
- page = memblock_alloc_base(PAGE_SIZE, PAGE_SIZE, 1UL << 31);
+ page = memblock_phys_alloc_range(PAGE_SIZE, PAGE_SIZE, 0, 1UL << 31);
+ if (!page)
+ panic("ERROR: Failed to allocate %x bytes below %lx\n",
+ PAGE_SIZE, 1UL << 31);
+
/* Set multi-threading state to the previous system. */
pcpu_set_smt(sclp.mtid_prev);
boot_cpu_addr = stap();
diff --git a/drivers/macintosh/smu.c b/drivers/macintosh/smu.c
index 0a0b8e1..42cf68d 100644
--- a/drivers/macintosh/smu.c
+++ b/drivers/macintosh/smu.c
@@ -485,7 +485,7 @@ int __init smu_init (void)
* SMU based G5s need some memory below 2Gb. Thankfully this is
* called at a time where memblock is still available.
*/
- smu_cmdbuf_abs = memblock_alloc_base(4096, 4096, 0x80000000UL);
+ smu_cmdbuf_abs = memblock_phys_alloc_range(4096, 4096, 0, 0x80000000UL);
if (smu_cmdbuf_abs == 0) {
printk(KERN_ERR "SMU: Command buffer allocation failed !\n");
ret = -EINVAL;
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 768e2b4..6874fdc 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -494,8 +494,6 @@ static inline bool memblock_bottom_up(void)
return memblock.bottom_up;
}
-phys_addr_t memblock_alloc_base(phys_addr_t size, phys_addr_t align,
- phys_addr_t max_addr);
phys_addr_t memblock_phys_mem_size(void);
phys_addr_t memblock_reserved_size(void);
phys_addr_t memblock_mem_size(unsigned long limit_pfn);
diff --git a/mm/memblock.c b/mm/memblock.c
index e5ffdcd..531fa77 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1363,20 +1363,6 @@ phys_addr_t __init memblock_phys_alloc_nid(phys_addr_t size, phys_addr_t align,
return ret;
}
-phys_addr_t __init memblock_alloc_base(phys_addr_t size, phys_addr_t align, phys_addr_t max_addr)
-{
- phys_addr_t alloc;
-
- alloc = memblock_alloc_range_nid(size, align, 0, max_addr, NUMA_NO_NODE,
- MEMBLOCK_NONE);
-
- if (alloc == 0)
- panic("ERROR: Failed to allocate %pa bytes below %pa.\n",
- &size, &max_addr);
-
- return alloc;
-}
-
phys_addr_t __init memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid)
{
phys_addr_t res = memblock_phys_alloc_nid(size, align, nid);
--
2.7.4
Add panic() calls if memblock_alloc*() returns NULL.
Most of the changes are a simple addition of
if (!ptr)
panic();
statements after the calls to memblock_alloc*() variants.
The exceptions are pcpu_populate_pte() and kernel_map_range(), which were
slightly refactored to accommodate the change, as sketched below.
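The refactored functions funnel every allocation failure to a single
panic() at the end, roughly (a condensed sketch of the pcpu_populate_pte()
change; the per-level page table walk is elided):
	new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
	if (!new)
		goto err_alloc;
	/* ... the same check after each page table level allocation ... */
	return;
err_alloc:
	panic("%s: Failed to allocate %lu bytes align=%lx from=%lx\n",
	      __func__, PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);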
Signed-off-by: Mike Rapoport <[email protected]>
Acked-by: David S. Miller <[email protected]>
---
arch/sparc/kernel/prom_32.c | 2 ++
arch/sparc/kernel/setup_64.c | 6 ++++++
arch/sparc/kernel/smp_64.c | 12 ++++++++++++
arch/sparc/mm/init_64.c | 11 +++++++++++
arch/sparc/mm/srmmu.c | 8 ++++++++
5 files changed, 39 insertions(+)
diff --git a/arch/sparc/kernel/prom_32.c b/arch/sparc/kernel/prom_32.c
index e7126ca..869b16c 100644
--- a/arch/sparc/kernel/prom_32.c
+++ b/arch/sparc/kernel/prom_32.c
@@ -33,6 +33,8 @@ void * __init prom_early_alloc(unsigned long size)
void *ret;
ret = memblock_alloc(size, SMP_CACHE_BYTES);
+ if (!ret)
+ panic("%s: Failed to allocate %lu bytes\n", __func__, size);
prom_early_allocated += size;
diff --git a/arch/sparc/kernel/setup_64.c b/arch/sparc/kernel/setup_64.c
index 51c4d12..fd2182a 100644
--- a/arch/sparc/kernel/setup_64.c
+++ b/arch/sparc/kernel/setup_64.c
@@ -624,8 +624,14 @@ void __init alloc_irqstack_bootmem(void)
softirq_stack[i] = memblock_alloc_node(THREAD_SIZE,
THREAD_SIZE, node);
+ if (!softirq_stack[i])
+ panic("%s: Failed to allocate %lu bytes align=%lx nid=%d\n",
+ __func__, THREAD_SIZE, THREAD_SIZE, node);
hardirq_stack[i] = memblock_alloc_node(THREAD_SIZE,
THREAD_SIZE, node);
+ if (!hardirq_stack[i])
+ panic("%s: Failed to allocate %lu bytes align=%lx nid=%d\n",
+ __func__, THREAD_SIZE, THREAD_SIZE, node);
}
}
diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c
index f45d876..a8275fe 100644
--- a/arch/sparc/kernel/smp_64.c
+++ b/arch/sparc/kernel/smp_64.c
@@ -1628,6 +1628,8 @@ static void __init pcpu_populate_pte(unsigned long addr)
pud_t *new;
new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
+ if (!new)
+ goto err_alloc;
pgd_populate(&init_mm, pgd, new);
}
@@ -1636,6 +1638,8 @@ static void __init pcpu_populate_pte(unsigned long addr)
pmd_t *new;
new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
+ if (!new)
+ goto err_alloc;
pud_populate(&init_mm, pud, new);
}
@@ -1644,8 +1648,16 @@ static void __init pcpu_populate_pte(unsigned long addr)
pte_t *new;
new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
+ if (!new)
+ goto err_alloc;
pmd_populate_kernel(&init_mm, pmd, new);
}
+
+ return;
+
+err_alloc:
+ panic("%s: Failed to allocate %lu bytes align=%lx from=%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
}
void __init setup_per_cpu_areas(void)
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index ef340e8..f2d70ff 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -1809,6 +1809,8 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,
new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
PAGE_SIZE);
+ if (!new)
+ goto err_alloc;
alloc_bytes += PAGE_SIZE;
pgd_populate(&init_mm, pgd, new);
}
@@ -1822,6 +1824,8 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,
}
new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
PAGE_SIZE);
+ if (!new)
+ goto err_alloc;
alloc_bytes += PAGE_SIZE;
pud_populate(&init_mm, pud, new);
}
@@ -1836,6 +1840,8 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,
}
new = memblock_alloc_from(PAGE_SIZE, PAGE_SIZE,
PAGE_SIZE);
+ if (!new)
+ goto err_alloc;
alloc_bytes += PAGE_SIZE;
pmd_populate_kernel(&init_mm, pmd, new);
}
@@ -1855,6 +1861,11 @@ static unsigned long __ref kernel_map_range(unsigned long pstart,
}
return alloc_bytes;
+
+err_alloc:
+ panic("%s: Failed to allocate %lu bytes align=%lx from=%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE, PAGE_SIZE);
+ return -ENOMEM;
}
static void __init flush_all_kernel_tsbs(void)
diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index a400ec3..aaebbc0 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -305,11 +305,17 @@ static void __init srmmu_nocache_init(void)
srmmu_nocache_pool = memblock_alloc(srmmu_nocache_size,
SRMMU_NOCACHE_ALIGN_MAX);
+ if (!srmmu_nocache_pool)
+ panic("%s: Failed to allocate %lu bytes align=0x%x\n",
+ __func__, srmmu_nocache_size, SRMMU_NOCACHE_ALIGN_MAX);
memset(srmmu_nocache_pool, 0, srmmu_nocache_size);
srmmu_nocache_bitmap =
memblock_alloc(BITS_TO_LONGS(bitmap_bits) * sizeof(long),
SMP_CACHE_BYTES);
+ if (!srmmu_nocache_bitmap)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ BITS_TO_LONGS(bitmap_bits) * sizeof(long));
bit_map_init(&srmmu_nocache_map, srmmu_nocache_bitmap, bitmap_bits);
srmmu_swapper_pg_dir = __srmmu_get_nocache(SRMMU_PGD_TABLE_SIZE, SRMMU_PGD_TABLE_SIZE);
@@ -468,6 +474,8 @@ static void __init sparc_context_init(int numctx)
size = numctx * sizeof(struct ctx_list);
ctx_list_pool = memblock_alloc(size, SMP_CACHE_BYTES);
+ if (!ctx_list_pool)
+ panic("%s: Failed to allocate %lu bytes\n", __func__, size);
for (ctx = 0; ctx < numctx; ctx++) {
struct ctx_list *clist;
--
2.7.4
Add checks for the return value of the memblock_alloc*() functions and
call panic() in case of error.
The panic message repeats the one used by the panicking memblock
allocators, with the parameters adjusted to include only the relevant
ones.
The replacement was mostly automated with semantic patches like the one
below, with manual massaging of the format strings.
@@
expression ptr, size, align;
@@
ptr = memblock_alloc(size, align);
+ if (!ptr)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__,
+ size, align);
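A typical converted call site then looks like (a sketch; here size is a
size_t, one of the cases where the format string was massaged to %zu):
	ptr = memblock_alloc(size, SMP_CACHE_BYTES);
	if (!ptr)
		panic("%s: Failed to allocate %zu bytes\n", __func__, size);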
Signed-off-by: Mike Rapoport <[email protected]>
Reviewed-by: Guo Ren <[email protected]> # c-sky
Acked-by: Paul Burton <[email protected]> # MIPS
Acked-by: Heiko Carstens <[email protected]> # s390
Reviewed-by: Juergen Gross <[email protected]> # Xen
---
arch/alpha/kernel/core_cia.c | 3 +++
arch/alpha/kernel/core_marvel.c | 6 ++++++
arch/alpha/kernel/pci-noop.c | 13 +++++++++++--
arch/alpha/kernel/pci.c | 11 ++++++++++-
arch/alpha/kernel/pci_iommu.c | 12 ++++++++++++
arch/arc/mm/highmem.c | 4 ++++
arch/arm/kernel/setup.c | 6 ++++++
arch/arm/mm/mmu.c | 14 +++++++++++++-
arch/arm64/kernel/setup.c | 8 +++++---
arch/arm64/mm/kasan_init.c | 10 ++++++++++
arch/c6x/mm/dma-coherent.c | 4 ++++
arch/c6x/mm/init.c | 3 +++
arch/csky/mm/highmem.c | 5 +++++
arch/h8300/mm/init.c | 3 +++
arch/m68k/atari/stram.c | 4 ++++
arch/m68k/mm/init.c | 3 +++
arch/m68k/mm/mcfmmu.c | 6 ++++++
arch/m68k/mm/motorola.c | 9 +++++++++
arch/m68k/mm/sun3mmu.c | 6 ++++++
arch/m68k/sun3/sun3dvma.c | 3 +++
arch/microblaze/mm/init.c | 8 ++++++--
arch/mips/cavium-octeon/dma-octeon.c | 3 +++
arch/mips/kernel/setup.c | 3 +++
arch/mips/kernel/traps.c | 3 +++
arch/mips/mm/init.c | 5 +++++
arch/nds32/mm/init.c | 12 ++++++++++++
arch/openrisc/mm/ioremap.c | 8 ++++++--
arch/powerpc/kernel/dt_cpu_ftrs.c | 5 +++++
arch/powerpc/kernel/pci_32.c | 3 +++
arch/powerpc/kernel/setup-common.c | 3 +++
arch/powerpc/kernel/setup_64.c | 4 ++++
arch/powerpc/lib/alloc.c | 3 +++
arch/powerpc/mm/hash_utils_64.c | 3 +++
arch/powerpc/mm/mmu_context_nohash.c | 9 +++++++++
arch/powerpc/mm/pgtable-book3e.c | 12 ++++++++++--
arch/powerpc/mm/pgtable-book3s64.c | 3 +++
arch/powerpc/mm/pgtable-radix.c | 9 ++++++++-
arch/powerpc/mm/ppc_mmu_32.c | 3 +++
arch/powerpc/platforms/pasemi/iommu.c | 3 +++
arch/powerpc/platforms/powermac/nvram.c | 3 +++
arch/powerpc/platforms/powernv/opal.c | 3 +++
arch/powerpc/platforms/powernv/pci-ioda.c | 8 ++++++++
arch/powerpc/platforms/ps3/setup.c | 3 +++
arch/powerpc/sysdev/msi_bitmap.c | 3 +++
arch/s390/kernel/setup.c | 13 +++++++++++++
arch/s390/kernel/smp.c | 5 ++++-
arch/s390/kernel/topology.c | 6 ++++++
arch/s390/numa/mode_emu.c | 3 +++
arch/s390/numa/numa.c | 6 +++++-
arch/sh/mm/init.c | 6 ++++++
arch/sh/mm/numa.c | 4 ++++
arch/um/drivers/net_kern.c | 3 +++
arch/um/drivers/vector_kern.c | 3 +++
arch/um/kernel/initrd.c | 2 ++
arch/um/kernel/mem.c | 16 ++++++++++++++++
arch/unicore32/kernel/setup.c | 4 ++++
arch/unicore32/mm/mmu.c | 15 +++++++++++++--
arch/x86/kernel/acpi/boot.c | 3 +++
arch/x86/kernel/apic/io_apic.c | 5 +++++
arch/x86/kernel/e820.c | 3 +++
arch/x86/platform/olpc/olpc_dt.c | 3 +++
arch/x86/xen/p2m.c | 11 +++++++++--
arch/xtensa/mm/kasan_init.c | 4 ++++
arch/xtensa/mm/mmu.c | 3 +++
drivers/clk/ti/clk.c | 3 +++
drivers/macintosh/smu.c | 3 +++
drivers/of/fdt.c | 8 +++++++-
drivers/of/unittest.c | 8 +++++++-
drivers/xen/swiotlb-xen.c | 7 +++++--
kernel/power/snapshot.c | 3 +++
lib/cpumask.c | 3 +++
mm/kasan/init.c | 10 ++++++++--
mm/sparse.c | 19 +++++++++++++++++--
73 files changed, 409 insertions(+), 28 deletions(-)
diff --git a/arch/alpha/kernel/core_cia.c b/arch/alpha/kernel/core_cia.c
index 466cd44..f489170 100644
--- a/arch/alpha/kernel/core_cia.c
+++ b/arch/alpha/kernel/core_cia.c
@@ -332,6 +332,9 @@ cia_prepare_tbia_workaround(int window)
/* Use minimal 1K map. */
ppte = memblock_alloc(CIA_BROKEN_TBIA_SIZE, 32768);
+ if (!ppte)
+ panic("%s: Failed to allocate %u bytes align=0x%x\n",
+ __func__, CIA_BROKEN_TBIA_SIZE, 32768);
pte = (virt_to_phys(ppte) >> (PAGE_SHIFT - 1)) | 1;
for (i = 0; i < CIA_BROKEN_TBIA_SIZE / sizeof(unsigned long); ++i)
diff --git a/arch/alpha/kernel/core_marvel.c b/arch/alpha/kernel/core_marvel.c
index c1d0c18..1db9d0e 100644
--- a/arch/alpha/kernel/core_marvel.c
+++ b/arch/alpha/kernel/core_marvel.c
@@ -83,6 +83,9 @@ mk_resource_name(int pe, int port, char *str)
sprintf(tmp, "PCI %s PE %d PORT %d", str, pe, port);
name = memblock_alloc(strlen(tmp) + 1, SMP_CACHE_BYTES);
+ if (!name)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ strlen(tmp) + 1);
strcpy(name, tmp);
return name;
@@ -118,6 +121,9 @@ alloc_io7(unsigned int pe)
}
io7 = memblock_alloc(sizeof(*io7), SMP_CACHE_BYTES);
+ if (!io7)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(*io7));
io7->pe = pe;
raw_spin_lock_init(&io7->irq_lock);
diff --git a/arch/alpha/kernel/pci-noop.c b/arch/alpha/kernel/pci-noop.c
index 091cff3..ae82061 100644
--- a/arch/alpha/kernel/pci-noop.c
+++ b/arch/alpha/kernel/pci-noop.c
@@ -34,6 +34,9 @@ alloc_pci_controller(void)
struct pci_controller *hose;
hose = memblock_alloc(sizeof(*hose), SMP_CACHE_BYTES);
+ if (!hose)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(*hose));
*hose_tail = hose;
hose_tail = &hose->next;
@@ -44,7 +47,13 @@ alloc_pci_controller(void)
struct resource * __init
alloc_resource(void)
{
- return memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
+ void *ptr = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
+
+ if (!ptr)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(struct resource));
+
+ return ptr;
}
SYSCALL_DEFINE3(pciconfig_iobase, long, which, unsigned long, bus,
@@ -54,7 +63,7 @@ SYSCALL_DEFINE3(pciconfig_iobase, long, which, unsigned long, bus,
/* from hose or from bus.devfn */
if (which & IOBASE_FROM_HOSE) {
- for (hose = hose_head; hose; hose = hose->next)
+ for (hose = hose_head; hose; hose = hose->next)
if (hose->index == bus)
break;
if (!hose)
diff --git a/arch/alpha/kernel/pci.c b/arch/alpha/kernel/pci.c
index 9709812..64fbfb0 100644
--- a/arch/alpha/kernel/pci.c
+++ b/arch/alpha/kernel/pci.c
@@ -393,6 +393,9 @@ alloc_pci_controller(void)
struct pci_controller *hose;
hose = memblock_alloc(sizeof(*hose), SMP_CACHE_BYTES);
+ if (!hose)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(*hose));
*hose_tail = hose;
hose_tail = &hose->next;
@@ -403,7 +406,13 @@ alloc_pci_controller(void)
struct resource * __init
alloc_resource(void)
{
- return memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
+ void *ptr = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
+
+ if (!ptr)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(struct resource));
+
+ return ptr;
}
diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c
index e4cf77b..3034d6d 100644
--- a/arch/alpha/kernel/pci_iommu.c
+++ b/arch/alpha/kernel/pci_iommu.c
@@ -80,6 +80,9 @@ iommu_arena_new_node(int nid, struct pci_controller *hose, dma_addr_t base,
" falling back to system-wide allocation\n",
__func__, nid);
arena = memblock_alloc(sizeof(*arena), SMP_CACHE_BYTES);
+ if (!arena)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(*arena));
}
arena->ptes = memblock_alloc_node(sizeof(*arena), align, nid);
@@ -88,12 +91,21 @@ iommu_arena_new_node(int nid, struct pci_controller *hose, dma_addr_t base,
" falling back to system-wide allocation\n",
__func__, nid);
arena->ptes = memblock_alloc(mem_size, align);
+ if (!arena->ptes)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, mem_size, align);
}
#else /* CONFIG_DISCONTIGMEM */
arena = memblock_alloc(sizeof(*arena), SMP_CACHE_BYTES);
+ if (!arena)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(*arena));
arena->ptes = memblock_alloc(mem_size, align);
+ if (!arena->ptes)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, mem_size, align);
#endif /* CONFIG_DISCONTIGMEM */
diff --git a/arch/arc/mm/highmem.c b/arch/arc/mm/highmem.c
index 48e7001..11f57e2 100644
--- a/arch/arc/mm/highmem.c
+++ b/arch/arc/mm/highmem.c
@@ -124,6 +124,10 @@ static noinline pte_t * __init alloc_kmap_pgtable(unsigned long kvaddr)
pmd_k = pmd_offset(pud_k, kvaddr);
pte_k = (pte_t *)memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
+ if (!pte_k)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
+
pmd_populate_kernel(&init_mm, pmd_k, pte_k);
return pte_k;
}
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 375b13f..5d78b6a 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -867,6 +867,9 @@ static void __init request_standard_resources(const struct machine_desc *mdesc)
boot_alias_start = phys_to_idmap(start);
if (arm_has_idmap_alias() && boot_alias_start != IDMAP_INVALID_ADDR) {
res = memblock_alloc(sizeof(*res), SMP_CACHE_BYTES);
+ if (!res)
+ panic("%s: Failed to allocate %zu bytes\n",
+ __func__, sizeof(*res));
res->name = "System RAM (boot alias)";
res->start = boot_alias_start;
res->end = phys_to_idmap(end);
@@ -875,6 +878,9 @@ static void __init request_standard_resources(const struct machine_desc *mdesc)
}
res = memblock_alloc(sizeof(*res), SMP_CACHE_BYTES);
+ if (!res)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(*res));
res->name = "System RAM";
res->start = start;
res->end = end;
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index 57de0dd..f3ce341 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -721,7 +721,13 @@ EXPORT_SYMBOL(phys_mem_access_prot);
static void __init *early_alloc(unsigned long sz)
{
- return memblock_alloc(sz, sz);
+ void *ptr = memblock_alloc(sz, sz);
+
+ if (!ptr)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, sz, sz);
+
+ return ptr;
}
static void *__init late_alloc(unsigned long sz)
@@ -994,6 +1000,9 @@ void __init iotable_init(struct map_desc *io_desc, int nr)
return;
svm = memblock_alloc(sizeof(*svm) * nr, __alignof__(*svm));
+ if (!svm)
+ panic("%s: Failed to allocate %zu bytes align=0x%zx\n",
+ __func__, sizeof(*svm) * nr, __alignof__(*svm));
for (md = io_desc; nr; md++, nr--) {
create_mapping(md);
@@ -1016,6 +1025,9 @@ void __init vm_reserve_area_early(unsigned long addr, unsigned long size,
struct static_vm *svm;
svm = memblock_alloc(sizeof(*svm), __alignof__(*svm));
+ if (!svm)
+ panic("%s: Failed to allocate %zu bytes align=0x%zx\n",
+ __func__, sizeof(*svm), __alignof__(*svm));
vm = &svm->vm;
vm->addr = (void *)addr;
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 4b0e123..5c5401f 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -209,6 +209,7 @@ static void __init request_standard_resources(void)
struct memblock_region *region;
struct resource *res;
unsigned long i = 0;
+ size_t res_size;
kernel_code.start = __pa_symbol(_text);
kernel_code.end = __pa_symbol(__init_begin - 1);
@@ -216,9 +217,10 @@ static void __init request_standard_resources(void)
kernel_data.end = __pa_symbol(_end - 1);
num_standard_resources = memblock.memory.cnt;
- standard_resources = memblock_alloc_low(num_standard_resources *
- sizeof(*standard_resources),
- SMP_CACHE_BYTES);
+ res_size = num_standard_resources * sizeof(*standard_resources);
+ standard_resources = memblock_alloc_low(res_size, SMP_CACHE_BYTES);
+ if (!standard_resources)
+ panic("%s: Failed to allocate %zu bytes\n", __func__, res_size);
for_each_memblock(memory, region) {
res = &standard_resources[i++];
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 4b55b15..43d13c7 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -40,6 +40,11 @@ static phys_addr_t __init kasan_alloc_zeroed_page(int node)
void *p = memblock_alloc_try_nid(PAGE_SIZE, PAGE_SIZE,
__pa(MAX_DMA_ADDRESS),
MEMBLOCK_ALLOC_KASAN, node);
+ if (!p)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%llx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE, node,
+ __pa(MAX_DMA_ADDRESS));
+
return __pa(p);
}
@@ -48,6 +53,11 @@ static phys_addr_t __init kasan_alloc_raw_page(int node)
void *p = memblock_alloc_try_nid_raw(PAGE_SIZE, PAGE_SIZE,
__pa(MAX_DMA_ADDRESS),
MEMBLOCK_ALLOC_KASAN, node);
+ if (!p)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%llx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE, node,
+ __pa(MAX_DMA_ADDRESS));
+
return __pa(p);
}
diff --git a/arch/c6x/mm/dma-coherent.c b/arch/c6x/mm/dma-coherent.c
index 0be2898..0d3701b 100644
--- a/arch/c6x/mm/dma-coherent.c
+++ b/arch/c6x/mm/dma-coherent.c
@@ -138,6 +138,10 @@ void __init coherent_mem_init(phys_addr_t start, u32 size)
dma_bitmap = memblock_alloc(BITS_TO_LONGS(dma_pages) * sizeof(long),
sizeof(long));
+ if (!dma_bitmap)
+ panic("%s: Failed to allocate %zu bytes align=0x%zx\n",
+ __func__, BITS_TO_LONGS(dma_pages) * sizeof(long),
+ sizeof(long));
}
static void c6x_dma_sync(struct device *dev, phys_addr_t paddr, size_t size,
diff --git a/arch/c6x/mm/init.c b/arch/c6x/mm/init.c
index e83c046..fe582c3 100644
--- a/arch/c6x/mm/init.c
+++ b/arch/c6x/mm/init.c
@@ -40,6 +40,9 @@ void __init paging_init(void)
empty_zero_page = (unsigned long) memblock_alloc(PAGE_SIZE,
PAGE_SIZE);
+ if (!empty_zero_page)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
/*
* Set up user data space
diff --git a/arch/csky/mm/highmem.c b/arch/csky/mm/highmem.c
index 53b1bfa..3317b774 100644
--- a/arch/csky/mm/highmem.c
+++ b/arch/csky/mm/highmem.c
@@ -141,6 +141,11 @@ static void __init fixrange_init(unsigned long start, unsigned long end,
for (; (k < PTRS_PER_PMD) && (vaddr != end); pmd++, k++) {
if (pmd_none(*pmd)) {
pte = (pte_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
+ if (!pte)
+ panic("%s: Failed to allocate %lu bytes align=%lx\n",
+ __func__, PAGE_SIZE,
+ PAGE_SIZE);
+
set_pmd(pmd, __pmd(__pa(pte)));
BUG_ON(pte != pte_offset_kernel(pmd, 0));
}
diff --git a/arch/h8300/mm/init.c b/arch/h8300/mm/init.c
index a157890..0f04a5e 100644
--- a/arch/h8300/mm/init.c
+++ b/arch/h8300/mm/init.c
@@ -68,6 +68,9 @@ void __init paging_init(void)
* to a couple of allocated pages.
*/
empty_zero_page = (unsigned long)memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!empty_zero_page)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
/*
* Set up SFC/DFC registers (user data space).
diff --git a/arch/m68k/atari/stram.c b/arch/m68k/atari/stram.c
index 6ffc204..6152f9f 100644
--- a/arch/m68k/atari/stram.c
+++ b/arch/m68k/atari/stram.c
@@ -97,6 +97,10 @@ void __init atari_stram_reserve_pages(void *start_mem)
pr_debug("atari_stram pool: kernel in ST-RAM, using alloc_bootmem!\n");
stram_pool.start = (resource_size_t)memblock_alloc_low(pool_size,
PAGE_SIZE);
+ if (!stram_pool.start)
+ panic("%s: Failed to allocate %lu bytes align=%lx\n",
+ __func__, pool_size, PAGE_SIZE);
+
stram_pool.end = stram_pool.start + pool_size - 1;
request_resource(&iomem_resource, &stram_pool);
stram_virt_offset = 0;
diff --git a/arch/m68k/mm/init.c b/arch/m68k/mm/init.c
index 933c33e..8868a4c 100644
--- a/arch/m68k/mm/init.c
+++ b/arch/m68k/mm/init.c
@@ -94,6 +94,9 @@ void __init paging_init(void)
high_memory = (void *) end_mem;
empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!empty_zero_page)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
/*
* Set up SFC/DFC registers (user data space).
diff --git a/arch/m68k/mm/mcfmmu.c b/arch/m68k/mm/mcfmmu.c
index 492f953..6cb1e41 100644
--- a/arch/m68k/mm/mcfmmu.c
+++ b/arch/m68k/mm/mcfmmu.c
@@ -44,6 +44,9 @@ void __init paging_init(void)
int i;
empty_zero_page = (void *) memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!empty_zero_page)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
pg_dir = swapper_pg_dir;
memset(swapper_pg_dir, 0, sizeof(swapper_pg_dir));
@@ -51,6 +54,9 @@ void __init paging_init(void)
size = num_pages * sizeof(pte_t);
size = (size + PAGE_SIZE) & ~(PAGE_SIZE-1);
next_pgtable = (unsigned long) memblock_alloc(size, PAGE_SIZE);
+ if (!next_pgtable)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, size, PAGE_SIZE);
bootmem_end = (next_pgtable + size + PAGE_SIZE) & PAGE_MASK;
pg_dir += PAGE_OFFSET >> PGDIR_SHIFT;
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 3f3d0bf..356601b 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -55,6 +55,9 @@ static pte_t * __init kernel_page_table(void)
pte_t *ptablep;
ptablep = (pte_t *)memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
+ if (!ptablep)
+ panic("%s: Failed to allocate %lu bytes align=%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
clear_page(ptablep);
__flush_page_to_ram(ptablep);
@@ -96,6 +99,9 @@ static pmd_t * __init kernel_ptr_table(void)
if (((unsigned long)last_pgtable & ~PAGE_MASK) == 0) {
last_pgtable = (pmd_t *)memblock_alloc_low(PAGE_SIZE,
PAGE_SIZE);
+ if (!last_pgtable)
+ panic("%s: Failed to allocate %lu bytes align=%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
clear_page(last_pgtable);
__flush_page_to_ram(last_pgtable);
@@ -278,6 +284,9 @@ void __init paging_init(void)
* to a couple of allocated pages
*/
empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!empty_zero_page)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
/*
* Set up SFC/DFC registers
diff --git a/arch/m68k/mm/sun3mmu.c b/arch/m68k/mm/sun3mmu.c
index f736db4..eca1c46 100644
--- a/arch/m68k/mm/sun3mmu.c
+++ b/arch/m68k/mm/sun3mmu.c
@@ -46,6 +46,9 @@ void __init paging_init(void)
unsigned long size;
empty_zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!empty_zero_page)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
address = PAGE_OFFSET;
pg_dir = swapper_pg_dir;
@@ -56,6 +59,9 @@ void __init paging_init(void)
size = (size + PAGE_SIZE) & ~(PAGE_SIZE-1);
next_pgtable = (unsigned long)memblock_alloc(size, PAGE_SIZE);
+ if (!next_pgtable)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, size, PAGE_SIZE);
bootmem_end = (next_pgtable + size + PAGE_SIZE) & PAGE_MASK;
/* Map whole memory from PAGE_OFFSET (0x0E000000) */
diff --git a/arch/m68k/sun3/sun3dvma.c b/arch/m68k/sun3/sun3dvma.c
index 4d64711..399f3d0 100644
--- a/arch/m68k/sun3/sun3dvma.c
+++ b/arch/m68k/sun3/sun3dvma.c
@@ -269,6 +269,9 @@ void __init dvma_init(void)
iommu_use = memblock_alloc(IOMMU_TOTAL_ENTRIES * sizeof(unsigned long),
SMP_CACHE_BYTES);
+ if (!iommu_use)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ IOMMU_TOTAL_ENTRIES * sizeof(unsigned long));
dvma_unmap_iommu(DVMA_START, DVMA_SIZE);
diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
index bd1cd4b..7e97d44 100644
--- a/arch/microblaze/mm/init.c
+++ b/arch/microblaze/mm/init.c
@@ -374,10 +374,14 @@ void * __ref zalloc_maybe_bootmem(size_t size, gfp_t mask)
{
void *p;
- if (mem_init_done)
+ if (mem_init_done) {
p = kzalloc(size, mask);
- else
+ } else {
p = memblock_alloc(size, SMP_CACHE_BYTES);
+ if (!p)
+ panic("%s: Failed to allocate %zu bytes\n",
+ __func__, size);
+ }
return p;
}
diff --git a/arch/mips/cavium-octeon/dma-octeon.c b/arch/mips/cavium-octeon/dma-octeon.c
index e8eb60e..11d5a4e 100644
--- a/arch/mips/cavium-octeon/dma-octeon.c
+++ b/arch/mips/cavium-octeon/dma-octeon.c
@@ -245,6 +245,9 @@ void __init plat_swiotlb_setup(void)
swiotlbsize = swiotlb_nslabs << IO_TLB_SHIFT;
octeon_swiotlb = memblock_alloc_low(swiotlbsize, PAGE_SIZE);
+ if (!octeon_swiotlb)
+ panic("%s: Failed to allocate %zu bytes align=%lx\n",
+ __func__, swiotlbsize, PAGE_SIZE);
if (swiotlb_init_with_tbl(octeon_swiotlb, swiotlb_nslabs, 1) == -ENOMEM)
panic("Cannot allocate SWIOTLB buffer");
diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index 8c6c48ed..91bc962 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -918,6 +918,9 @@ static void __init resource_init(void)
end = HIGHMEM_START - 1;
res = memblock_alloc(sizeof(struct resource), SMP_CACHE_BYTES);
+ if (!res)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(struct resource));
res->start = start;
res->end = end;
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index 2bbdee5..64b541a 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -2292,6 +2292,9 @@ void __init trap_init(void)
ebase = (unsigned long)
memblock_alloc(size, 1 << fls(size));
+ if (!ebase)
+ panic("%s: Failed to allocate %lu bytes align=0x%x\n",
+ __func__, size, 1 << fls(size));
/*
* Try to ensure ebase resides in KSeg0 if possible.
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index b521d8e..89e2afc 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -245,6 +245,11 @@ void __init fixrange_init(unsigned long start, unsigned long end,
if (pmd_none(*pmd)) {
pte = (pte_t *) memblock_alloc_low(PAGE_SIZE,
PAGE_SIZE);
+ if (!pte)
+ panic("%s: Failed to allocate %lu bytes align=%lx\n",
+ __func__, PAGE_SIZE,
+ PAGE_SIZE);
+
set_pmd(pmd, __pmd((unsigned long)pte));
BUG_ON(pte != pte_offset_kernel(pmd, 0));
}
diff --git a/arch/nds32/mm/init.c b/arch/nds32/mm/init.c
index d1e521c..1d03633 100644
--- a/arch/nds32/mm/init.c
+++ b/arch/nds32/mm/init.c
@@ -79,6 +79,9 @@ static void __init map_ram(void)
/* Alloc one page for holding PTE's... */
pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!pte)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
set_pmd(pme, __pmd(__pa(pte) + _PAGE_KERNEL_TABLE));
/* Fill the newly allocated page with PTE'S */
@@ -111,6 +114,9 @@ static void __init fixedrange_init(void)
pud = pud_offset(pgd, vaddr);
pmd = pmd_offset(pud, vaddr);
fixmap_pmd_p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!fixmap_pmd_p)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
set_pmd(pmd, __pmd(__pa(fixmap_pmd_p) + _PAGE_KERNEL_TABLE));
#ifdef CONFIG_HIGHMEM
@@ -123,6 +129,9 @@ static void __init fixedrange_init(void)
pud = pud_offset(pgd, vaddr);
pmd = pmd_offset(pud, vaddr);
pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!pte)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
set_pmd(pmd, __pmd(__pa(pte) + _PAGE_KERNEL_TABLE));
pkmap_page_table = pte;
#endif /* CONFIG_HIGHMEM */
@@ -148,6 +157,9 @@ void __init paging_init(void)
/* allocate space for empty_zero_page */
zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!zero_page)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
zone_sizes_init();
empty_zero_page = virt_to_page(zero_page);
diff --git a/arch/openrisc/mm/ioremap.c b/arch/openrisc/mm/ioremap.c
index 051bcb4..a850995 100644
--- a/arch/openrisc/mm/ioremap.c
+++ b/arch/openrisc/mm/ioremap.c
@@ -122,10 +122,14 @@ pte_t __ref *pte_alloc_one_kernel(struct mm_struct *mm)
{
pte_t *pte;
- if (likely(mem_init_done))
+ if (likely(mem_init_done)) {
pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
- else
+ } else {
pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!pte)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
+ }
return pte;
}
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
index 2554824..af6814a 100644
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
+++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -1008,6 +1008,11 @@ static int __init dt_cpu_ftrs_scan_callback(unsigned long node, const char
of_scan_flat_dt_subnodes(node, count_cpufeatures_subnodes,
&nr_dt_cpu_features);
dt_cpu_features = memblock_alloc(sizeof(struct dt_cpu_feature) * nr_dt_cpu_features, PAGE_SIZE);
+ if (!dt_cpu_features)
+ panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
+ __func__,
+ sizeof(struct dt_cpu_feature) * nr_dt_cpu_features,
+ PAGE_SIZE);
cpufeatures_setup_start(isa);
diff --git a/arch/powerpc/kernel/pci_32.c b/arch/powerpc/kernel/pci_32.c
index d3f04f2..0417fda 100644
--- a/arch/powerpc/kernel/pci_32.c
+++ b/arch/powerpc/kernel/pci_32.c
@@ -205,6 +205,9 @@ pci_create_OF_bus_map(void)
of_prop = memblock_alloc(sizeof(struct property) + 256,
SMP_CACHE_BYTES);
+ if (!of_prop)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(struct property) + 256);
dn = of_find_node_by_path("/");
if (dn) {
memset(of_prop, -1, sizeof(struct property) + 256);
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 82be48c..1810f09 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -461,6 +461,9 @@ void __init smp_setup_cpu_maps(void)
cpu_to_phys_id = memblock_alloc(nr_cpu_ids * sizeof(u32),
__alignof__(u32));
+ if (!cpu_to_phys_id)
+ panic("%s: Failed to allocate %zu bytes align=0x%zx\n",
+ __func__, nr_cpu_ids * sizeof(u32), __alignof__(u32));
for_each_node_by_type(dn, "cpu") {
const __be32 *intserv;
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 3dcd779..dd62b05 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -931,6 +931,10 @@ static void __ref init_fallback_flush(void)
l1d_flush_fallback_area = memblock_alloc_try_nid(l1d_size * 2,
l1d_size, MEMBLOCK_LOW_LIMIT,
limit, NUMA_NO_NODE);
+ if (!l1d_flush_fallback_area)
+ panic("%s: Failed to allocate %llu bytes align=0x%llx max_addr=%pa\n",
+ __func__, l1d_size * 2, l1d_size, &limit);
+
for_each_possible_cpu(cpu) {
struct paca_struct *paca = paca_ptrs[cpu];
diff --git a/arch/powerpc/lib/alloc.c b/arch/powerpc/lib/alloc.c
index dedf88a..ce18087 100644
--- a/arch/powerpc/lib/alloc.c
+++ b/arch/powerpc/lib/alloc.c
@@ -15,6 +15,9 @@ void * __ref zalloc_maybe_bootmem(size_t size, gfp_t mask)
p = kzalloc(size, mask);
else {
p = memblock_alloc(size, SMP_CACHE_BYTES);
+ if (!p)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ size);
}
return p;
}
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index c7d5f48..ddf3b9c 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -915,6 +915,9 @@ static void __init htab_initialize(void)
linear_map_hash_slots = memblock_alloc_try_nid(
linear_map_hash_count, 1, MEMBLOCK_LOW_LIMIT,
ppc64_rma_size, NUMA_NO_NODE);
+ if (!linear_map_hash_slots)
+ panic("%s: Failed to allocate %lu bytes max_addr=%pa\n",
+ __func__, linear_map_hash_count, &ppc64_rma_size);
}
#endif /* CONFIG_DEBUG_PAGEALLOC */
diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c
index 22d71a58..1945c5f 100644
--- a/arch/powerpc/mm/mmu_context_nohash.c
+++ b/arch/powerpc/mm/mmu_context_nohash.c
@@ -461,10 +461,19 @@ void __init mmu_context_init(void)
* Allocate the maps used by context management
*/
context_map = memblock_alloc(CTX_MAP_SIZE, SMP_CACHE_BYTES);
+ if (!context_map)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ CTX_MAP_SIZE);
context_mm = memblock_alloc(sizeof(void *) * (LAST_CONTEXT + 1),
SMP_CACHE_BYTES);
+ if (!context_mm)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(void *) * (LAST_CONTEXT + 1));
#ifdef CONFIG_SMP
stale_map[boot_cpuid] = memblock_alloc(CTX_MAP_SIZE, SMP_CACHE_BYTES);
+ if (!stale_map[boot_cpuid])
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ CTX_MAP_SIZE);
cpuhp_setup_state_nocalls(CPUHP_POWERPC_MMU_CTX_PREPARE,
"powerpc/mmu/ctx:prepare",
diff --git a/arch/powerpc/mm/pgtable-book3e.c b/arch/powerpc/mm/pgtable-book3e.c
index 53cbc7d..1032ef7 100644
--- a/arch/powerpc/mm/pgtable-book3e.c
+++ b/arch/powerpc/mm/pgtable-book3e.c
@@ -57,8 +57,16 @@ void vmemmap_remove_mapping(unsigned long start,
static __ref void *early_alloc_pgtable(unsigned long size)
{
- return memblock_alloc_try_nid(size, size, MEMBLOCK_LOW_LIMIT,
- __pa(MAX_DMA_ADDRESS), NUMA_NO_NODE);
+ void *ptr;
+
+ ptr = memblock_alloc_try_nid(size, size, MEMBLOCK_LOW_LIMIT,
+ __pa(MAX_DMA_ADDRESS), NUMA_NO_NODE);
+
+ if (!ptr)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx max_addr=%lx\n",
+ __func__, size, size, __pa(MAX_DMA_ADDRESS));
+
+ return ptr;
}
/*
diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
index 55876b7..68e95f8 100644
--- a/arch/powerpc/mm/pgtable-book3s64.c
+++ b/arch/powerpc/mm/pgtable-book3s64.c
@@ -197,6 +197,9 @@ void __init mmu_partition_table_init(void)
BUILD_BUG_ON_MSG((PATB_SIZE_SHIFT > 36), "Partition table size too large.");
/* Initialize the Partition Table with no entries */
partition_tb = memblock_alloc(patb_size, patb_size);
+ if (!partition_tb)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, patb_size, patb_size);
/*
* update partition table control register,
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 29bcea5..6fc05fd 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -53,13 +53,20 @@ static __ref void *early_alloc_pgtable(unsigned long size, int nid,
{
phys_addr_t min_addr = MEMBLOCK_LOW_LIMIT;
phys_addr_t max_addr = MEMBLOCK_ALLOC_ANYWHERE;
+ void *ptr;
if (region_start)
min_addr = region_start;
if (region_end)
max_addr = region_end;
- return memblock_alloc_try_nid(size, size, min_addr, max_addr, nid);
+ ptr = memblock_alloc_try_nid(size, size, min_addr, max_addr, nid);
+
+ if (!ptr)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%pa max_addr=%pa\n",
+ __func__, size, size, nid, &min_addr, &max_addr);
+
+ return ptr;
}
static int early_map_kernel_page(unsigned long ea, unsigned long pa,
diff --git a/arch/powerpc/mm/ppc_mmu_32.c b/arch/powerpc/mm/ppc_mmu_32.c
index 36a664f..a85b2f4 100644
--- a/arch/powerpc/mm/ppc_mmu_32.c
+++ b/arch/powerpc/mm/ppc_mmu_32.c
@@ -212,6 +212,9 @@ void __init MMU_init_hw(void)
*/
if ( ppc_md.progress ) ppc_md.progress("hash:find piece", 0x322);
Hash = memblock_alloc(Hash_size, Hash_size);
+ if (!Hash)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, Hash_size, Hash_size);
_SDR1 = __pa(Hash) | SDR1_LOW_BITS;
Hash_end = (struct hash_pte *) ((unsigned long)Hash + Hash_size);
diff --git a/arch/powerpc/platforms/pasemi/iommu.c b/arch/powerpc/platforms/pasemi/iommu.c
index f62930f..ab75e70 100644
--- a/arch/powerpc/platforms/pasemi/iommu.c
+++ b/arch/powerpc/platforms/pasemi/iommu.c
@@ -211,6 +211,9 @@ static int __init iob_init(struct device_node *dn)
iob_l2_base = memblock_alloc_try_nid_raw(1UL << 21, 1UL << 21,
MEMBLOCK_LOW_LIMIT, 0x80000000,
NUMA_NO_NODE);
+ if (!iob_l2_base)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx max_addr=%x\n",
+ __func__, 1UL << 21, 1UL << 21, 0x80000000);
pr_info("IOBMAP L2 allocated at: %p\n", iob_l2_base);
diff --git a/arch/powerpc/platforms/powermac/nvram.c b/arch/powerpc/platforms/powermac/nvram.c
index ae54d7f..e0a1d15 100644
--- a/arch/powerpc/platforms/powermac/nvram.c
+++ b/arch/powerpc/platforms/powermac/nvram.c
@@ -514,6 +514,9 @@ static int __init core99_nvram_setup(struct device_node *dp, unsigned long addr)
return -EINVAL;
}
nvram_image = memblock_alloc(NVRAM_SIZE, SMP_CACHE_BYTES);
+ if (!nvram_image)
+ panic("%s: Failed to allocate %u bytes\n", __func__,
+ NVRAM_SIZE);
nvram_data = ioremap(addr, NVRAM_SIZE*2);
nvram_naddrs = 1; /* Make sure we get the correct case */
diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
index 8e157f9..38fb678 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -172,6 +172,9 @@ int __init early_init_dt_scan_recoverable_ranges(unsigned long node,
* Allocate a buffer to hold the MC recoverable ranges.
*/
mc_recoverable_range = memblock_alloc(size, __alignof__(u64));
+ if (!mc_recoverable_range)
+ panic("%s: Failed to allocate %u bytes align=0x%lx\n",
+ __func__, size, __alignof__(u64));
for (i = 0; i < mc_recoverable_range_len; i++) {
mc_recoverable_range[i].start_addr =
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 1d6406a..4817f15 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -3727,6 +3727,9 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
pr_debug(" PHB-ID : 0x%016llx\n", phb_id);
phb = memblock_alloc(sizeof(*phb), SMP_CACHE_BYTES);
+ if (!phb)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(*phb));
/* Allocate PCI controller */
phb->hose = hose = pcibios_alloc_controller(np);
@@ -3773,6 +3776,9 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
phb->diag_data_size = PNV_PCI_DIAG_BUF_SIZE;
phb->diag_data = memblock_alloc(phb->diag_data_size, SMP_CACHE_BYTES);
+ if (!phb->diag_data)
+ panic("%s: Failed to allocate %u bytes\n", __func__,
+ phb->diag_data_size);
/* Parse 32-bit and IO ranges (if any) */
pci_process_bridge_OF_ranges(hose, np, !hose->global_number);
@@ -3832,6 +3838,8 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
pemap_off = size;
size += phb->ioda.total_pe_num * sizeof(struct pnv_ioda_pe);
aux = memblock_alloc(size, SMP_CACHE_BYTES);
+ if (!aux)
+ panic("%s: Failed to allocate %lu bytes\n", __func__, size);
phb->ioda.pe_alloc = aux;
phb->ioda.m64_segmap = aux + m64map_off;
phb->ioda.m32_segmap = aux + m32map_off;
diff --git a/arch/powerpc/platforms/ps3/setup.c b/arch/powerpc/platforms/ps3/setup.c
index 658bfab..4ce5458 100644
--- a/arch/powerpc/platforms/ps3/setup.c
+++ b/arch/powerpc/platforms/ps3/setup.c
@@ -127,6 +127,9 @@ static void __init prealloc(struct ps3_prealloc *p)
return;
p->address = memblock_alloc(p->size, p->align);
+ if (!p->address)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, p->size, p->align);
printk(KERN_INFO "%s: %lu bytes at %p\n", p->name, p->size,
p->address);
diff --git a/arch/powerpc/sysdev/msi_bitmap.c b/arch/powerpc/sysdev/msi_bitmap.c
index d45450f..51a679a 100644
--- a/arch/powerpc/sysdev/msi_bitmap.c
+++ b/arch/powerpc/sysdev/msi_bitmap.c
@@ -129,6 +129,9 @@ int __ref msi_bitmap_alloc(struct msi_bitmap *bmp, unsigned int irq_count,
bmp->bitmap = kzalloc(size, GFP_KERNEL);
else {
bmp->bitmap = memblock_alloc(size, SMP_CACHE_BYTES);
+ if (!bmp->bitmap)
+ panic("%s: Failed to allocate %u bytes\n", __func__,
+ size);
/* the bitmap won't be freed from memblock allocator */
kmemleak_not_leak(bmp->bitmap);
}
diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
index da48397..8f9e7f6 100644
--- a/arch/s390/kernel/setup.c
+++ b/arch/s390/kernel/setup.c
@@ -378,6 +378,10 @@ static void __init setup_lowcore(void)
*/
BUILD_BUG_ON(sizeof(struct lowcore) != LC_PAGES * PAGE_SIZE);
lc = memblock_alloc_low(sizeof(*lc), sizeof(*lc));
+ if (!lc)
+ panic("%s: Failed to allocate %zu bytes align=%zx\n",
+ __func__, sizeof(*lc), sizeof(*lc));
+
lc->restart_psw.mask = PSW_KERNEL_BITS;
lc->restart_psw.addr = (unsigned long) restart_int_handler;
lc->external_new_psw.mask = PSW_KERNEL_BITS |
@@ -422,6 +426,9 @@ static void __init setup_lowcore(void)
* all CPUs in case *one* of them does a PSW restart.
*/
restart_stack = memblock_alloc(THREAD_SIZE, THREAD_SIZE);
+ if (!restart_stack)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, THREAD_SIZE, THREAD_SIZE);
restart_stack += STACK_INIT_OFFSET;
/*
@@ -488,6 +495,9 @@ static void __init setup_resources(void)
for_each_memblock(memory, reg) {
res = memblock_alloc(sizeof(*res), 8);
+ if (!res)
+ panic("%s: Failed to allocate %zu bytes align=0x%x\n",
+ __func__, sizeof(*res), 8);
res->flags = IORESOURCE_BUSY | IORESOURCE_SYSTEM_RAM;
res->name = "System RAM";
@@ -502,6 +512,9 @@ static void __init setup_resources(void)
continue;
if (std_res->end > res->end) {
sub_res = memblock_alloc(sizeof(*sub_res), 8);
+ if (!sub_res)
+ panic("%s: Failed to allocate %zu bytes align=0x%x\n",
+ __func__, sizeof(*sub_res), 8);
*sub_res = *std_res;
sub_res->end = res->end;
std_res->start = res->end + 1;
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index 9061597..17c626e2 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -653,7 +653,7 @@ void __init smp_save_dump_cpus(void)
/* Allocate a page as dumping area for the store status sigps */
page = memblock_phys_alloc_range(PAGE_SIZE, PAGE_SIZE, 0, 1UL << 31);
if (!page)
- panic("ERROR: Failed to allocate %x bytes below %lx\n",
+ panic("ERROR: Failed to allocate %lx bytes below %lx\n",
PAGE_SIZE, 1UL << 31);
/* Set multi-threading state to the previous system. */
@@ -765,6 +765,9 @@ void __init smp_detect_cpus(void)
/* Get CPU information */
info = memblock_alloc(sizeof(*info), 8);
+ if (!info)
+ panic("%s: Failed to allocate %zu bytes align=0x%x\n",
+ __func__, sizeof(*info), 8);
smp_get_core_info(info, 1);
/* Find boot CPU type */
if (sclp.has_core_type) {
diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c
index 8992b04..8964a3f 100644
--- a/arch/s390/kernel/topology.c
+++ b/arch/s390/kernel/topology.c
@@ -520,6 +520,9 @@ static void __init alloc_masks(struct sysinfo_15_1_x *info,
nr_masks = max(nr_masks, 1);
for (i = 0; i < nr_masks; i++) {
mask->next = memblock_alloc(sizeof(*mask->next), 8);
+ if (!mask->next)
+ panic("%s: Failed to allocate %zu bytes align=0x%x\n",
+ __func__, sizeof(*mask->next), 8);
mask = mask->next;
}
}
@@ -538,6 +541,9 @@ void __init topology_init_early(void)
if (!MACHINE_HAS_TOPOLOGY)
goto out;
tl_info = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!tl_info)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
info = tl_info;
store_topology(info);
pr_info("The CPU configuration topology of the machine is: %d %d %d %d %d %d / %d\n",
diff --git a/arch/s390/numa/mode_emu.c b/arch/s390/numa/mode_emu.c
index bfba273..71a12a4 100644
--- a/arch/s390/numa/mode_emu.c
+++ b/arch/s390/numa/mode_emu.c
@@ -313,6 +313,9 @@ static void __ref create_core_to_node_map(void)
int i;
emu_cores = memblock_alloc(sizeof(*emu_cores), 8);
+ if (!emu_cores)
+ panic("%s: Failed to allocate %zu bytes align=0x%x\n",
+ __func__, sizeof(*emu_cores), 8);
for (i = 0; i < ARRAY_SIZE(emu_cores->to_node_id); i++)
emu_cores->to_node_id[i] = NODE_ID_FREE;
}
diff --git a/arch/s390/numa/numa.c b/arch/s390/numa/numa.c
index 2d1271e..8eb9e97 100644
--- a/arch/s390/numa/numa.c
+++ b/arch/s390/numa/numa.c
@@ -92,8 +92,12 @@ static void __init numa_setup_memory(void)
} while (cur_base < end_of_dram);
/* Allocate and fill out node_data */
- for (nid = 0; nid < MAX_NUMNODES; nid++)
+ for (nid = 0; nid < MAX_NUMNODES; nid++) {
NODE_DATA(nid) = memblock_alloc(sizeof(pg_data_t), 8);
+ if (!NODE_DATA(nid))
+ panic("%s: Failed to allocate %zu bytes align=0x%x\n",
+ __func__, sizeof(pg_data_t), 8);
+ }
for_each_online_node(nid) {
unsigned long start_pfn, end_pfn;
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index a0fa4de..fceefd9 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -128,6 +128,9 @@ static pmd_t * __init one_md_table_init(pud_t *pud)
pmd_t *pmd;
pmd = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!pmd)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
pud_populate(&init_mm, pud, pmd);
BUG_ON(pmd != pmd_offset(pud, 0));
}
@@ -141,6 +144,9 @@ static pte_t * __init one_page_table_init(pmd_t *pmd)
pte_t *pte;
pte = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!pte)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
pmd_populate_kernel(&init_mm, pmd, pte);
BUG_ON(pte != pte_offset_kernel(pmd, 0));
}
diff --git a/arch/sh/mm/numa.c b/arch/sh/mm/numa.c
index c4bde61..f7e4439 100644
--- a/arch/sh/mm/numa.c
+++ b/arch/sh/mm/numa.c
@@ -43,6 +43,10 @@ void __init setup_bootmem_node(int nid, unsigned long start, unsigned long end)
/* Node-local pgdat */
NODE_DATA(nid) = memblock_alloc_node(sizeof(struct pglist_data),
SMP_CACHE_BYTES, nid);
+ if (!NODE_DATA(nid))
+ panic("%s: Failed to allocate %zu bytes align=0x%x nid=%d\n",
+ __func__, sizeof(struct pglist_data), SMP_CACHE_BYTES,
+ nid);
NODE_DATA(nid)->node_start_pfn = start_pfn;
NODE_DATA(nid)->node_spanned_pages = end_pfn - start_pfn;
diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c
index d80cfb1..6e5be5f 100644
--- a/arch/um/drivers/net_kern.c
+++ b/arch/um/drivers/net_kern.c
@@ -649,6 +649,9 @@ static int __init eth_setup(char *str)
}
new = memblock_alloc(sizeof(*new), SMP_CACHE_BYTES);
+ if (!new)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(*new));
INIT_LIST_HEAD(&new->list);
new->index = n;
diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
index 046fa9e..596e705 100644
--- a/arch/um/drivers/vector_kern.c
+++ b/arch/um/drivers/vector_kern.c
@@ -1576,6 +1576,9 @@ static int __init vector_setup(char *str)
return 1;
}
new = memblock_alloc(sizeof(*new), SMP_CACHE_BYTES);
+ if (!new)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(*new));
INIT_LIST_HEAD(&new->list);
new->unit = n;
new->arguments = str;
diff --git a/arch/um/kernel/initrd.c b/arch/um/kernel/initrd.c
index ce169ea..1dcd310 100644
--- a/arch/um/kernel/initrd.c
+++ b/arch/um/kernel/initrd.c
@@ -37,6 +37,8 @@ int __init read_initrd(void)
}
area = memblock_alloc(size, SMP_CACHE_BYTES);
+ if (!area)
+ panic("%s: Failed to allocate %llu bytes\n", __func__, size);
if (load_initrd(initrd, area, size) == -1)
return 0;
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 799b571..99aa11b 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -66,6 +66,10 @@ static void __init one_page_table_init(pmd_t *pmd)
if (pmd_none(*pmd)) {
pte_t *pte = (pte_t *) memblock_alloc_low(PAGE_SIZE,
PAGE_SIZE);
+ if (!pte)
+ panic("%s: Failed to allocate %lu bytes align=%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
+
set_pmd(pmd, __pmd(_KERNPG_TABLE +
(unsigned long) __pa(pte)));
if (pte != pte_offset_kernel(pmd, 0))
@@ -77,6 +81,10 @@ static void __init one_md_table_init(pud_t *pud)
{
#ifdef CONFIG_3_LEVEL_PGTABLES
pmd_t *pmd_table = (pmd_t *) memblock_alloc_low(PAGE_SIZE, PAGE_SIZE);
+ if (!pmd_table)
+ panic("%s: Failed to allocate %lu bytes align=%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
+
set_pud(pud, __pud(_KERNPG_TABLE + (unsigned long) __pa(pmd_table)));
if (pmd_table != pmd_offset(pud, 0))
BUG();
@@ -126,6 +134,10 @@ static void __init fixaddr_user_init( void)
fixrange_init( FIXADDR_USER_START, FIXADDR_USER_END, swapper_pg_dir);
v = (unsigned long) memblock_alloc_low(size, PAGE_SIZE);
+ if (!v)
+ panic("%s: Failed to allocate %lu bytes align=%lx\n",
+ __func__, size, PAGE_SIZE);
+
memcpy((void *) v , (void *) FIXADDR_USER_START, size);
p = __pa(v);
for ( ; size > 0; size -= PAGE_SIZE, vaddr += PAGE_SIZE,
@@ -146,6 +158,10 @@ void __init paging_init(void)
empty_zero_page = (unsigned long *) memblock_alloc_low(PAGE_SIZE,
PAGE_SIZE);
+ if (!empty_zero_page)
+ panic("%s: Failed to allocate %lu bytes align=%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
+
for (i = 0; i < ARRAY_SIZE(zones_size); i++)
zones_size[i] = 0;
diff --git a/arch/unicore32/kernel/setup.c b/arch/unicore32/kernel/setup.c
index 4b0cb68..d3239cf 100644
--- a/arch/unicore32/kernel/setup.c
+++ b/arch/unicore32/kernel/setup.c
@@ -207,6 +207,10 @@ request_standard_resources(struct meminfo *mi)
continue;
res = memblock_alloc_low(sizeof(*res), SMP_CACHE_BYTES);
+ if (!res)
+ panic("%s: Failed to allocate %zu bytes align=%x\n",
+ __func__, sizeof(*res), SMP_CACHE_BYTES);
+
res->name = "System RAM";
res->start = mi->bank[i].start;
res->end = mi->bank[i].start + mi->bank[i].size - 1;
diff --git a/arch/unicore32/mm/mmu.c b/arch/unicore32/mm/mmu.c
index a402192..aa2060b 100644
--- a/arch/unicore32/mm/mmu.c
+++ b/arch/unicore32/mm/mmu.c
@@ -145,8 +145,13 @@ static pte_t * __init early_pte_alloc(pmd_t *pmd, unsigned long addr,
unsigned long prot)
{
if (pmd_none(*pmd)) {
- pte_t *pte = memblock_alloc(PTRS_PER_PTE * sizeof(pte_t),
- PTRS_PER_PTE * sizeof(pte_t));
+ size_t size = PTRS_PER_PTE * sizeof(pte_t);
+ pte_t *pte = memblock_alloc(size, size);
+
+ if (!pte)
+ panic("%s: Failed to allocate %zu bytes align=%zx\n",
+ __func__, size, size);
+
__pmd_populate(pmd, __pa(pte) | prot);
}
BUG_ON(pmd_bad(*pmd));
@@ -349,6 +354,9 @@ static void __init devicemaps_init(void)
* Allocate the vector page early.
*/
vectors = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!vectors)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
for (addr = VMALLOC_END; addr; addr += PGDIR_SIZE)
pmd_clear(pmd_off_k(addr));
@@ -426,6 +434,9 @@ void __init paging_init(void)
/* allocate the zero page. */
zero_page = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!zero_page)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
bootmem_init();
diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
index 2624de1..8dcbf68 100644
--- a/arch/x86/kernel/acpi/boot.c
+++ b/arch/x86/kernel/acpi/boot.c
@@ -935,6 +935,9 @@ static int __init acpi_parse_hpet(struct acpi_table_header *table)
#define HPET_RESOURCE_NAME_SIZE 9
hpet_res = memblock_alloc(sizeof(*hpet_res) + HPET_RESOURCE_NAME_SIZE,
SMP_CACHE_BYTES);
+ if (!hpet_res)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(*hpet_res) + HPET_RESOURCE_NAME_SIZE);
hpet_res->name = (void *)&hpet_res[1];
hpet_res->flags = IORESOURCE_MEM;
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index 2953bbf..397bfc8 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -2579,6 +2579,8 @@ static struct resource * __init ioapic_setup_resources(void)
n *= nr_ioapics;
mem = memblock_alloc(n, SMP_CACHE_BYTES);
+ if (!mem)
+ panic("%s: Failed to allocate %lu bytes\n", __func__, n);
res = (void *)mem;
mem += sizeof(struct resource) * nr_ioapics;
@@ -2623,6 +2625,9 @@ void __init io_apic_init_mappings(void)
#endif
ioapic_phys = (unsigned long)memblock_alloc(PAGE_SIZE,
PAGE_SIZE);
+ if (!ioapic_phys)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
ioapic_phys = __pa(ioapic_phys);
}
set_fixmap_nocache(idx, ioapic_phys);
diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
index 9c0eb54..1f18ec0 100644
--- a/arch/x86/kernel/e820.c
+++ b/arch/x86/kernel/e820.c
@@ -1095,6 +1095,9 @@ void __init e820__reserve_resources(void)
res = memblock_alloc(sizeof(*res) * e820_table->nr_entries,
SMP_CACHE_BYTES);
+ if (!res)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(*res) * e820_table->nr_entries);
e820_res = res;
for (i = 0; i < e820_table->nr_entries; i++) {
diff --git a/arch/x86/platform/olpc/olpc_dt.c b/arch/x86/platform/olpc/olpc_dt.c
index b4ab779..dad3b60 100644
--- a/arch/x86/platform/olpc/olpc_dt.c
+++ b/arch/x86/platform/olpc/olpc_dt.c
@@ -141,6 +141,9 @@ void * __init prom_early_alloc(unsigned long size)
* wasted bootmem) and hand off chunks of it to callers.
*/
res = memblock_alloc(chunk_size, SMP_CACHE_BYTES);
+ if (!res)
+ panic("%s: Failed to allocate %lu bytes\n", __func__,
+ chunk_size);
BUG_ON(!res);
prom_early_allocated += chunk_size;
memset(res, 0, chunk_size);
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 055e37e..95ce9b5 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -181,8 +181,15 @@ static void p2m_init_identity(unsigned long *p2m, unsigned long pfn)
static void * __ref alloc_p2m_page(void)
{
- if (unlikely(!slab_is_available()))
- return memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (unlikely(!slab_is_available())) {
+ void *ptr = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+
+ if (!ptr)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_SIZE, PAGE_SIZE);
+
+ return ptr;
+ }
return (void *)__get_free_page(GFP_KERNEL);
}
diff --git a/arch/xtensa/mm/kasan_init.c b/arch/xtensa/mm/kasan_init.c
index 4852848..41047ca 100644
--- a/arch/xtensa/mm/kasan_init.c
+++ b/arch/xtensa/mm/kasan_init.c
@@ -45,6 +45,10 @@ static void __init populate(void *start, void *end)
pmd_t *pmd = pmd_offset(pgd, vaddr);
pte_t *pte = memblock_alloc(n_pages * sizeof(pte_t), PAGE_SIZE);
+ if (!pte)
+ panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
+ __func__, n_pages * sizeof(pte_t), PAGE_SIZE);
+
pr_debug("%s: %p - %p\n", __func__, start, end);
for (i = j = 0; i < n_pmds; ++i) {
diff --git a/arch/xtensa/mm/mmu.c b/arch/xtensa/mm/mmu.c
index a4dcfd3..2fb7d11 100644
--- a/arch/xtensa/mm/mmu.c
+++ b/arch/xtensa/mm/mmu.c
@@ -32,6 +32,9 @@ static void * __init init_pmd(unsigned long vaddr, unsigned long n_pages)
__func__, vaddr, n_pages);
pte = memblock_alloc_low(n_pages * sizeof(pte_t), PAGE_SIZE);
+ if (!pte)
+ panic("%s: Failed to allocate %zu bytes align=%lx\n",
+ __func__, n_pages * sizeof(pte_t), PAGE_SIZE);
for (i = 0; i < n_pages; ++i)
pte_clear(NULL, 0, pte + i);
diff --git a/drivers/clk/ti/clk.c b/drivers/clk/ti/clk.c
index d0cd585..5d7fb2e 100644
--- a/drivers/clk/ti/clk.c
+++ b/drivers/clk/ti/clk.c
@@ -351,6 +351,9 @@ void __init omap2_clk_legacy_provider_init(int index, void __iomem *mem)
struct clk_iomap *io;
io = memblock_alloc(sizeof(*io), SMP_CACHE_BYTES);
+ if (!io)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(*io));
io->mem = mem;
diff --git a/drivers/macintosh/smu.c b/drivers/macintosh/smu.c
index 42cf68d..6a84412 100644
--- a/drivers/macintosh/smu.c
+++ b/drivers/macintosh/smu.c
@@ -493,6 +493,9 @@ int __init smu_init (void)
}
smu = memblock_alloc(sizeof(struct smu_device), SMP_CACHE_BYTES);
+ if (!smu)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(struct smu_device));
spin_lock_init(&smu->lock);
INIT_LIST_HEAD(&smu->cmd_list);
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index 7099c65..796460c 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -1185,7 +1185,13 @@ int __init __weak early_init_dt_reserve_memory_arch(phys_addr_t base,
static void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)
{
- return memblock_alloc(size, align);
+ void *ptr = memblock_alloc(size, align);
+
+ if (!ptr)
+ panic("%s: Failed to allocate %llu bytes align=0x%llx\n",
+ __func__, size, align);
+
+ return ptr;
}
bool __init early_init_dt_verify(void *params)
diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
index 8442738..10f6599 100644
--- a/drivers/of/unittest.c
+++ b/drivers/of/unittest.c
@@ -2234,7 +2234,13 @@ static struct device_node *overlay_base_root;
static void * __init dt_alloc_memory(u64 size, u64 align)
{
- return memblock_alloc(size, align);
+ void *ptr = memblock_alloc(size, align);
+
+ if (!ptr)
+ panic("%s: Failed to allocate %llu bytes align=0x%llx\n",
+ __func__, size, align);
+
+ return ptr;
}
/*
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 989cf87..d3e49f1 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -214,10 +214,13 @@ int __ref xen_swiotlb_init(int verbose, bool early)
/*
* Get IO TLB memory from any location.
*/
- if (early)
+ if (early) {
xen_io_tlb_start = memblock_alloc(PAGE_ALIGN(bytes),
PAGE_SIZE);
- else {
+ if (!xen_io_tlb_start)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, PAGE_ALIGN(bytes), PAGE_SIZE);
+ } else {
#define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT))
#define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT)
while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 4802b03..f08a1e4 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -965,6 +965,9 @@ void __init __register_nosave_region(unsigned long start_pfn,
/* This allocation cannot fail */
region = memblock_alloc(sizeof(struct nosave_region),
SMP_CACHE_BYTES);
+ if (!region)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ sizeof(struct nosave_region));
}
region->start_pfn = start_pfn;
region->end_pfn = end_pfn;
diff --git a/lib/cpumask.c b/lib/cpumask.c
index 087a3e9..0cb672e 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -165,6 +165,9 @@ EXPORT_SYMBOL(zalloc_cpumask_var);
void __init alloc_bootmem_cpumask_var(cpumask_var_t *mask)
{
*mask = memblock_alloc(cpumask_size(), SMP_CACHE_BYTES);
+ if (!*mask)
+ panic("%s: Failed to allocate %u bytes\n", __func__,
+ cpumask_size());
}
/**
diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index 45a1b5e..af907d8 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -83,8 +83,14 @@ static inline bool kasan_early_shadow_page_entry(pte_t pte)
static __init void *early_alloc(size_t size, int node)
{
- return memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
- MEMBLOCK_ALLOC_ACCESSIBLE, node);
+ void *ptr = memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS),
+ MEMBLOCK_ALLOC_ACCESSIBLE, node);
+
+ if (!ptr)
+ panic("%s: Failed to allocate %zu bytes align=%zx nid=%d from=%llx\n",
+ __func__, size, size, node, (u64)__pa(MAX_DMA_ADDRESS));
+
+ return ptr;
}
static void __ref zero_pte_populate(pmd_t *pmd, unsigned long addr,
diff --git a/mm/sparse.c b/mm/sparse.c
index 7ea5dc6..ad94242 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -65,11 +65,15 @@ static noinline struct mem_section __ref *sparse_index_alloc(int nid)
unsigned long array_size = SECTIONS_PER_ROOT *
sizeof(struct mem_section);
- if (slab_is_available())
+ if (slab_is_available()) {
section = kzalloc_node(array_size, GFP_KERNEL, nid);
- else
+ } else {
section = memblock_alloc_node(array_size, SMP_CACHE_BYTES,
nid);
+ if (!section)
+ panic("%s: Failed to allocate %lu bytes nid=%d\n",
+ __func__, array_size, nid);
+ }
return section;
}
@@ -218,6 +222,9 @@ void __init memory_present(int nid, unsigned long start, unsigned long end)
size = sizeof(struct mem_section*) * NR_SECTION_ROOTS;
align = 1 << (INTERNODE_CACHE_SHIFT);
mem_section = memblock_alloc(size, align);
+ if (!mem_section)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n",
+ __func__, size, align);
}
#endif
@@ -411,6 +418,10 @@ struct page __init *sparse_mem_map_populate(unsigned long pnum, int nid,
map = memblock_alloc_try_nid(size,
PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+ if (!map)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%lx\n",
+ __func__, size, PAGE_SIZE, nid, __pa(MAX_DMA_ADDRESS));
+
return map;
}
#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
@@ -425,6 +436,10 @@ static void __init sparse_buffer_init(unsigned long size, int nid)
memblock_alloc_try_nid_raw(size, PAGE_SIZE,
__pa(MAX_DMA_ADDRESS),
MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+ if (!sparsemap_buf)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%lx\n",
+ __func__, size, PAGE_SIZE, nid, __pa(MAX_DMA_ADDRESS));
+
sparsemap_buf_end = sparsemap_buf + size;
}
--
2.7.4
Add panic() calls if memblock_alloc() returns NULL.
The panic() format duplicates the one used by memblock itself and, in order
to avoid an explosion of long parameter lists, the open-coded allocation
size calculations are replaced with a local variable.
Signed-off-by: Mike Rapoport <[email protected]>
---
init/main.c | 26 ++++++++++++++++++++------
1 file changed, 20 insertions(+), 6 deletions(-)
diff --git a/init/main.c b/init/main.c
index a56f65a..d58a365 100644
--- a/init/main.c
+++ b/init/main.c
@@ -373,12 +373,20 @@ static inline void smp_prepare_cpus(unsigned int maxcpus) { }
*/
static void __init setup_command_line(char *command_line)
{
- saved_command_line =
- memblock_alloc(strlen(boot_command_line) + 1, SMP_CACHE_BYTES);
- initcall_command_line =
- memblock_alloc(strlen(boot_command_line) + 1, SMP_CACHE_BYTES);
- static_command_line = memblock_alloc(strlen(command_line) + 1,
- SMP_CACHE_BYTES);
+ size_t len = strlen(boot_command_line) + 1;
+
+ saved_command_line = memblock_alloc(len, SMP_CACHE_BYTES);
+ if (!saved_command_line)
+ panic("%s: Failed to allocate %zu bytes\n", __func__, len);
+
+ initcall_command_line = memblock_alloc(len, SMP_CACHE_BYTES);
+ if (!initcall_command_line)
+ panic("%s: Failed to allocate %zu bytes\n", __func__, len);
+
+ static_command_line = memblock_alloc(len, SMP_CACHE_BYTES);
+ if (!static_command_line)
+ panic("%s: Failed to allocate %zu bytes\n", __func__, len);
+
strcpy(saved_command_line, boot_command_line);
strcpy(static_command_line, command_line);
}
@@ -773,8 +781,14 @@ static int __init initcall_blacklist(char *str)
pr_debug("blacklisting initcall %s\n", str_entry);
entry = memblock_alloc(sizeof(*entry),
SMP_CACHE_BYTES);
+ if (!entry)
+ panic("%s: Failed to allocate %zu bytes\n",
+ __func__, sizeof(*entry));
entry->buf = memblock_alloc(strlen(str_entry) + 1,
SMP_CACHE_BYTES);
+ if (!entry->buf)
+ panic("%s: Failed to allocate %zu bytes\n",
+ __func__, strlen(str_entry) + 1);
strcpy(entry->buf, str_entry);
list_add(&entry->next, &blacklisted_initcalls);
}
--
2.7.4
Add panic() calls if memblock_alloc() returns NULL.
The panic() format duplicates the one used by memblock itself and, in order
to avoid an explosion of long parameter lists, the open-coded allocation
size calculations are replaced with a local variable.
Signed-off-by: Mike Rapoport <[email protected]>
---
mm/percpu.c | 73 +++++++++++++++++++++++++++++++++++++++++++++++--------------
1 file changed, 56 insertions(+), 17 deletions(-)
diff --git a/mm/percpu.c b/mm/percpu.c
index db86282..5998b03 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1086,6 +1086,7 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr,
struct pcpu_chunk *chunk;
unsigned long aligned_addr, lcm_align;
int start_offset, offset_bits, region_size, region_bits;
+ size_t alloc_size;
/* region calculations */
aligned_addr = tmp_addr & PAGE_MASK;
@@ -1101,9 +1102,12 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr,
region_size = ALIGN(start_offset + map_size, lcm_align);
/* allocate chunk */
- chunk = memblock_alloc(sizeof(struct pcpu_chunk) +
- BITS_TO_LONGS(region_size >> PAGE_SHIFT),
- SMP_CACHE_BYTES);
+ alloc_size = sizeof(struct pcpu_chunk) +
+ BITS_TO_LONGS(region_size >> PAGE_SHIFT);
+ chunk = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
+ if (!chunk)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ alloc_size);
INIT_LIST_HEAD(&chunk->list);
@@ -1114,12 +1118,25 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr,
chunk->nr_pages = region_size >> PAGE_SHIFT;
region_bits = pcpu_chunk_map_bits(chunk);
- chunk->alloc_map = memblock_alloc(BITS_TO_LONGS(region_bits) * sizeof(chunk->alloc_map[0]),
- SMP_CACHE_BYTES);
- chunk->bound_map = memblock_alloc(BITS_TO_LONGS(region_bits + 1) * sizeof(chunk->bound_map[0]),
- SMP_CACHE_BYTES);
- chunk->md_blocks = memblock_alloc(pcpu_chunk_nr_blocks(chunk) * sizeof(chunk->md_blocks[0]),
- SMP_CACHE_BYTES);
+ alloc_size = BITS_TO_LONGS(region_bits) * sizeof(chunk->alloc_map[0]);
+ chunk->alloc_map = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
+ if (!chunk->alloc_map)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ alloc_size);
+
+ alloc_size =
+ BITS_TO_LONGS(region_bits + 1) * sizeof(chunk->bound_map[0]);
+ chunk->bound_map = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
+ if (!chunk->bound_map)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ alloc_size);
+
+ alloc_size = pcpu_chunk_nr_blocks(chunk) * sizeof(chunk->md_blocks[0]);
+ chunk->md_blocks = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
+ if (!chunk->md_blocks)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ alloc_size);
+
pcpu_init_md_blocks(chunk);
/* manage populated page bitmap */
@@ -2044,6 +2061,7 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
int group, unit, i;
int map_size;
unsigned long tmp_addr;
+ size_t alloc_size;
#define PCPU_SETUP_BUG_ON(cond) do { \
if (unlikely(cond)) { \
@@ -2075,14 +2093,29 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
PCPU_SETUP_BUG_ON(pcpu_verify_alloc_info(ai) < 0);
/* process group information and build config tables accordingly */
- group_offsets = memblock_alloc(ai->nr_groups * sizeof(group_offsets[0]),
- SMP_CACHE_BYTES);
- group_sizes = memblock_alloc(ai->nr_groups * sizeof(group_sizes[0]),
- SMP_CACHE_BYTES);
- unit_map = memblock_alloc(nr_cpu_ids * sizeof(unit_map[0]),
- SMP_CACHE_BYTES);
- unit_off = memblock_alloc(nr_cpu_ids * sizeof(unit_off[0]),
- SMP_CACHE_BYTES);
+ alloc_size = ai->nr_groups * sizeof(group_offsets[0]);
+ group_offsets = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
+ if (!group_offsets)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ alloc_size);
+
+ alloc_size = ai->nr_groups * sizeof(group_sizes[0]);
+ group_sizes = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
+ if (!group_sizes)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ alloc_size);
+
+ alloc_size = nr_cpu_ids * sizeof(unit_map[0]);
+ unit_map = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
+ if (!unit_map)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ alloc_size);
+
+ alloc_size = nr_cpu_ids * sizeof(unit_off[0]);
+ unit_off = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
+ if (!unit_off)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ alloc_size);
for (cpu = 0; cpu < nr_cpu_ids; cpu++)
unit_map[cpu] = UINT_MAX;
@@ -2148,6 +2181,9 @@ int __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
pcpu_nr_slots = __pcpu_size_to_slot(pcpu_unit_size) + 2;
pcpu_slot = memblock_alloc(pcpu_nr_slots * sizeof(pcpu_slot[0]),
SMP_CACHE_BYTES);
+ if (!pcpu_slot)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ pcpu_nr_slots * sizeof(pcpu_slot[0]));
for (i = 0; i < pcpu_nr_slots; i++)
INIT_LIST_HEAD(&pcpu_slot[i]);
@@ -2602,6 +2638,9 @@ int __init pcpu_page_first_chunk(size_t reserved_size,
pages_size = PFN_ALIGN(unit_pages * num_possible_cpus() *
sizeof(pages[0]));
pages = memblock_alloc(pages_size, SMP_CACHE_BYTES);
+ if (!pages)
+ panic("%s: Failed to allocate %zu bytes\n", __func__,
+ pages_size);
/* allocate pages */
j = 0;
--
2.7.4
Add panic() calls if memblock_alloc*() returns NULL.
Most of the changes are simply the addition of
if (!ptr)
	panic();
statements after the calls to memblock_alloc*() variants.
The exceptions are create_mem_map_page_table() and ia64_log_init(), which
were slightly refactored to accommodate the change.
Signed-off-by: Mike Rapoport <[email protected]>
---
arch/ia64/kernel/mca.c | 20 ++++++++++++++------
arch/ia64/mm/contig.c | 8 ++++++--
arch/ia64/mm/discontig.c | 4 ++++
arch/ia64/mm/init.c | 38 ++++++++++++++++++++++++++++++--------
arch/ia64/mm/tlb.c | 6 ++++++
arch/ia64/sn/kernel/io_common.c | 3 +++
arch/ia64/sn/kernel/setup.c | 12 +++++++++++-
7 files changed, 74 insertions(+), 17 deletions(-)
diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c
index 370bc34..5cabb3f 100644
--- a/arch/ia64/kernel/mca.c
+++ b/arch/ia64/kernel/mca.c
@@ -359,11 +359,6 @@ typedef struct ia64_state_log_s
static ia64_state_log_t ia64_state_log[IA64_MAX_LOG_TYPES];
-#define IA64_LOG_ALLOCATE(it, size) \
- {ia64_state_log[it].isl_log[IA64_LOG_CURR_INDEX(it)] = \
- (ia64_err_rec_t *)memblock_alloc(size, SMP_CACHE_BYTES); \
- ia64_state_log[it].isl_log[IA64_LOG_NEXT_INDEX(it)] = \
- (ia64_err_rec_t *)memblock_alloc(size, SMP_CACHE_BYTES);}
#define IA64_LOG_LOCK_INIT(it) spin_lock_init(&ia64_state_log[it].isl_lock)
#define IA64_LOG_LOCK(it) spin_lock_irqsave(&ia64_state_log[it].isl_lock, s)
#define IA64_LOG_UNLOCK(it) spin_unlock_irqrestore(&ia64_state_log[it].isl_lock,s)
@@ -378,6 +373,19 @@ static ia64_state_log_t ia64_state_log[IA64_MAX_LOG_TYPES];
#define IA64_LOG_CURR_BUFFER(it) (void *)((ia64_state_log[it].isl_log[IA64_LOG_CURR_INDEX(it)]))
#define IA64_LOG_COUNT(it) ia64_state_log[it].isl_count
+static inline void ia64_log_allocate(int it, u64 size)
+{
+ ia64_state_log[it].isl_log[IA64_LOG_CURR_INDEX(it)] =
+ (ia64_err_rec_t *)memblock_alloc(size, SMP_CACHE_BYTES);
+ if (!ia64_state_log[it].isl_log[IA64_LOG_CURR_INDEX(it)])
+ panic("%s: Failed to allocate %llu bytes\n", __func__, size);
+
+ ia64_state_log[it].isl_log[IA64_LOG_NEXT_INDEX(it)] =
+ (ia64_err_rec_t *)memblock_alloc(size, SMP_CACHE_BYTES);
+ if (!ia64_state_log[it].isl_log[IA64_LOG_NEXT_INDEX(it)])
+ panic("%s: Failed to allocate %llu bytes\n", __func__, size);
+}
+
/*
* ia64_log_init
* Reset the OS ia64 log buffer
@@ -399,7 +407,7 @@ ia64_log_init(int sal_info_type)
return;
// set up OS data structures to hold error info
- IA64_LOG_ALLOCATE(sal_info_type, max_size);
+ ia64_log_allocate(sal_info_type, max_size);
}
/*
diff --git a/arch/ia64/mm/contig.c b/arch/ia64/mm/contig.c
index 6e44723..d29fb6b 100644
--- a/arch/ia64/mm/contig.c
+++ b/arch/ia64/mm/contig.c
@@ -84,9 +84,13 @@ void *per_cpu_init(void)
static inline void
alloc_per_cpu_data(void)
{
- cpu_data = memblock_alloc_from(PERCPU_PAGE_SIZE * num_possible_cpus(),
- PERCPU_PAGE_SIZE,
+ size_t size = PERCPU_PAGE_SIZE * num_possible_cpus();
+
+ cpu_data = memblock_alloc_from(size, PERCPU_PAGE_SIZE,
__pa(MAX_DMA_ADDRESS));
+ if (!cpu_data)
+ panic("%s: Failed to allocate %lu bytes align=%lx from=%lx\n",
+ __func__, size, PERCPU_PAGE_SIZE, __pa(MAX_DMA_ADDRESS));
}
/**
diff --git a/arch/ia64/mm/discontig.c b/arch/ia64/mm/discontig.c
index f9c3675..05490dd 100644
--- a/arch/ia64/mm/discontig.c
+++ b/arch/ia64/mm/discontig.c
@@ -454,6 +454,10 @@ static void __init *memory_less_node_alloc(int nid, unsigned long pernodesize)
__pa(MAX_DMA_ADDRESS),
MEMBLOCK_ALLOC_ACCESSIBLE,
bestnode);
+ if (!ptr)
+ panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%lx\n",
+ __func__, pernodesize, PERCPU_PAGE_SIZE, bestnode,
+ __pa(MAX_DMA_ADDRESS));
return ptr;
}
diff --git a/arch/ia64/mm/init.c b/arch/ia64/mm/init.c
index 29d8415..e49200e 100644
--- a/arch/ia64/mm/init.c
+++ b/arch/ia64/mm/init.c
@@ -444,23 +444,45 @@ int __init create_mem_map_page_table(u64 start, u64 end, void *arg)
for (address = start_page; address < end_page; address += PAGE_SIZE) {
pgd = pgd_offset_k(address);
- if (pgd_none(*pgd))
- pgd_populate(&init_mm, pgd, memblock_alloc_node(PAGE_SIZE, PAGE_SIZE, node));
+ if (pgd_none(*pgd)) {
+ pud = memblock_alloc_node(PAGE_SIZE, PAGE_SIZE, node);
+ if (!pud)
+ goto err_alloc;
+ pgd_populate(&init_mm, pgd, pud);
+ }
pud = pud_offset(pgd, address);
- if (pud_none(*pud))
- pud_populate(&init_mm, pud, memblock_alloc_node(PAGE_SIZE, PAGE_SIZE, node));
+ if (pud_none(*pud)) {
+ pmd = memblock_alloc_node(PAGE_SIZE, PAGE_SIZE, node);
+ if (!pmd)
+ goto err_alloc;
+ pud_populate(&init_mm, pud, pmd);
+ }
pmd = pmd_offset(pud, address);
- if (pmd_none(*pmd))
- pmd_populate_kernel(&init_mm, pmd, memblock_alloc_node(PAGE_SIZE, PAGE_SIZE, node));
+ if (pmd_none(*pmd)) {
+ pte = memblock_alloc_node(PAGE_SIZE, PAGE_SIZE, node);
+ if (!pte)
+ goto err_alloc;
+ pmd_populate_kernel(&init_mm, pmd, pte);
+ }
pte = pte_offset_kernel(pmd, address);
- if (pte_none(*pte))
- set_pte(pte, pfn_pte(__pa(memblock_alloc_node(PAGE_SIZE, PAGE_SIZE, node)) >> PAGE_SHIFT,
+ if (pte_none(*pte)) {
+ void *page = memblock_alloc_node(PAGE_SIZE, PAGE_SIZE,
+ node);
+ if (!page)
+ goto err_alloc;
+ set_pte(pte, pfn_pte(__pa(page) >> PAGE_SHIFT,
PAGE_KERNEL));
+ }
}
return 0;
+
+err_alloc:
+ panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d\n",
+ __func__, PAGE_SIZE, PAGE_SIZE, node);
+ return -ENOMEM;
}
struct memmap_init_callback_data {
diff --git a/arch/ia64/mm/tlb.c b/arch/ia64/mm/tlb.c
index 9340bcb..5fc89aa 100644
--- a/arch/ia64/mm/tlb.c
+++ b/arch/ia64/mm/tlb.c
@@ -61,8 +61,14 @@ mmu_context_init (void)
{
ia64_ctx.bitmap = memblock_alloc((ia64_ctx.max_ctx + 1) >> 3,
SMP_CACHE_BYTES);
+ if (!ia64_ctx.bitmap)
+ panic("%s: Failed to allocate %u bytes\n", __func__,
+ (ia64_ctx.max_ctx + 1) >> 3);
ia64_ctx.flushmap = memblock_alloc((ia64_ctx.max_ctx + 1) >> 3,
SMP_CACHE_BYTES);
+ if (!ia64_ctx.flushmap)
+ panic("%s: Failed to allocate %u bytes\n", __func__,
+ (ia64_ctx.max_ctx + 1) >> 3);
}
/*
diff --git a/arch/ia64/sn/kernel/io_common.c b/arch/ia64/sn/kernel/io_common.c
index 8df13d0..d468473 100644
--- a/arch/ia64/sn/kernel/io_common.c
+++ b/arch/ia64/sn/kernel/io_common.c
@@ -394,6 +394,9 @@ void __init hubdev_init_node(nodepda_t * npda, cnodeid_t node)
hubdev_info = (struct hubdev_info *)memblock_alloc_node(size,
SMP_CACHE_BYTES,
node);
+ if (!hubdev_info)
+ panic("%s: Failed to allocate %d bytes align=0x%x nid=%d\n",
+ __func__, size, SMP_CACHE_BYTES, node);
npda->pdinfo = (void *)hubdev_info;
}
diff --git a/arch/ia64/sn/kernel/setup.c b/arch/ia64/sn/kernel/setup.c
index a6d40a2..e6a5049 100644
--- a/arch/ia64/sn/kernel/setup.c
+++ b/arch/ia64/sn/kernel/setup.c
@@ -513,6 +513,10 @@ static void __init sn_init_pdas(char **cmdline_p)
nodepdaindr[cnode] =
memblock_alloc_node(sizeof(nodepda_t), SMP_CACHE_BYTES,
cnode);
+ if (!nodepdaindr[cnode])
+ panic("%s: Failed to allocate %lu bytes align=0x%x nid=%d\n",
+ __func__, sizeof(nodepda_t), SMP_CACHE_BYTES,
+ cnode);
memset(nodepdaindr[cnode]->phys_cpuid, -1,
sizeof(nodepdaindr[cnode]->phys_cpuid));
spin_lock_init(&nodepdaindr[cnode]->ptc_lock);
@@ -521,9 +525,15 @@ static void __init sn_init_pdas(char **cmdline_p)
/*
* Allocate & initialize nodepda for TIOs. For now, put them on node 0.
*/
- for (cnode = num_online_nodes(); cnode < num_cnodes; cnode++)
+ for (cnode = num_online_nodes(); cnode < num_cnodes; cnode++) {
nodepdaindr[cnode] =
memblock_alloc_node(sizeof(nodepda_t), SMP_CACHE_BYTES, 0);
+ if (!nodepdaindr[cnode])
+ panic("%s: Failed to allocate %lu bytes align=0x%x nid=%d\n",
+ __func__, sizeof(nodepda_t), SMP_CACHE_BYTES,
+ cnode);
+ }
+
/*
* Now copy the array of nodepda pointers to each nodepda.
--
2.7.4
memblock_alloc() already clears the allocated memory; there is no point in
doing it twice.
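For reference, a simplified sketch of the zeroing path (an illustration of
the memblock internals around this point in the series; function names are
approximate, not verbatim kernel code):

static void *memblock_alloc_sketch(phys_addr_t size, phys_addr_t align)
{
	/*
	 * memblock_alloc() is a thin wrapper that ends up in
	 * memblock_alloc_try_nid(), which zeroes the range it returns.
	 */
	void *ptr = memblock_alloc_internal(size, align, 0,
					    MEMBLOCK_ALLOC_ACCESSIBLE,
					    NUMA_NO_NODE);
	if (ptr)
		memset(ptr, 0, size);	/* callers always get zeroed memory */
	return ptr;
}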
Signed-off-by: Mike Rapoport <[email protected]>
Acked-by: Geert Uytterhoeven <[email protected]> # m68k
---
arch/c6x/mm/init.c | 1 -
arch/h8300/mm/init.c | 1 -
arch/ia64/kernel/mca.c | 2 --
arch/m68k/mm/mcfmmu.c | 1 -
arch/microblaze/mm/init.c | 6 ++----
arch/sparc/kernel/prom_32.c | 2 --
6 files changed, 2 insertions(+), 11 deletions(-)
diff --git a/arch/c6x/mm/init.c b/arch/c6x/mm/init.c
index af5ada0..e83c046 100644
--- a/arch/c6x/mm/init.c
+++ b/arch/c6x/mm/init.c
@@ -40,7 +40,6 @@ void __init paging_init(void)
empty_zero_page = (unsigned long) memblock_alloc(PAGE_SIZE,
PAGE_SIZE);
- memset((void *)empty_zero_page, 0, PAGE_SIZE);
/*
* Set up user data space
diff --git a/arch/h8300/mm/init.c b/arch/h8300/mm/init.c
index 6519252..a157890 100644
--- a/arch/h8300/mm/init.c
+++ b/arch/h8300/mm/init.c
@@ -68,7 +68,6 @@ void __init paging_init(void)
* to a couple of allocated pages.
*/
empty_zero_page = (unsigned long)memblock_alloc(PAGE_SIZE, PAGE_SIZE);
- memset((void *)empty_zero_page, 0, PAGE_SIZE);
/*
* Set up SFC/DFC registers (user data space).
diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c
index 74d148b..370bc34 100644
--- a/arch/ia64/kernel/mca.c
+++ b/arch/ia64/kernel/mca.c
@@ -400,8 +400,6 @@ ia64_log_init(int sal_info_type)
// set up OS data structures to hold error info
IA64_LOG_ALLOCATE(sal_info_type, max_size);
- memset(IA64_LOG_CURR_BUFFER(sal_info_type), 0, max_size);
- memset(IA64_LOG_NEXT_BUFFER(sal_info_type), 0, max_size);
}
/*
diff --git a/arch/m68k/mm/mcfmmu.c b/arch/m68k/mm/mcfmmu.c
index 0de4999..492f953 100644
--- a/arch/m68k/mm/mcfmmu.c
+++ b/arch/m68k/mm/mcfmmu.c
@@ -44,7 +44,6 @@ void __init paging_init(void)
int i;
empty_zero_page = (void *) memblock_alloc(PAGE_SIZE, PAGE_SIZE);
- memset((void *) empty_zero_page, 0, PAGE_SIZE);
pg_dir = swapper_pg_dir;
memset(swapper_pg_dir, 0, sizeof(swapper_pg_dir));
diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
index 44f4b89..bd1cd4b 100644
--- a/arch/microblaze/mm/init.c
+++ b/arch/microblaze/mm/init.c
@@ -376,10 +376,8 @@ void * __ref zalloc_maybe_bootmem(size_t size, gfp_t mask)
if (mem_init_done)
p = kzalloc(size, mask);
- else {
+ else
p = memblock_alloc(size, SMP_CACHE_BYTES);
- if (p)
- memset(p, 0, size);
- }
+
return p;
}
diff --git a/arch/sparc/kernel/prom_32.c b/arch/sparc/kernel/prom_32.c
index 38940af..e7126ca 100644
--- a/arch/sparc/kernel/prom_32.c
+++ b/arch/sparc/kernel/prom_32.c
@@ -33,8 +33,6 @@ void * __init prom_early_alloc(unsigned long size)
void *ret;
ret = memblock_alloc(size, SMP_CACHE_BYTES);
- if (ret != NULL)
- memset(ret, 0, size);
prom_early_allocated += size;
--
2.7.4
These functions are not used outside memblock. Make them static.
Signed-off-by: Mike Rapoport <[email protected]>
---
include/linux/memblock.h | 4 ----
mm/memblock.c | 4 ++--
2 files changed, 2 insertions(+), 6 deletions(-)
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index cf4cd9c..f5a83a1 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -111,9 +111,6 @@ void memblock_discard(void);
#define memblock_dbg(fmt, ...) \
if (memblock_debug) printk(KERN_INFO pr_fmt(fmt), ##__VA_ARGS__)
-phys_addr_t memblock_find_in_range_node(phys_addr_t size, phys_addr_t align,
- phys_addr_t start, phys_addr_t end,
- int nid, enum memblock_flags flags);
phys_addr_t memblock_find_in_range(phys_addr_t start, phys_addr_t end,
phys_addr_t size, phys_addr_t align);
void memblock_allow_resize(void);
@@ -130,7 +127,6 @@ int memblock_clear_hotplug(phys_addr_t base, phys_addr_t size);
int memblock_mark_mirror(phys_addr_t base, phys_addr_t size);
int memblock_mark_nomap(phys_addr_t base, phys_addr_t size);
int memblock_clear_nomap(phys_addr_t base, phys_addr_t size);
-enum memblock_flags choose_memblock_flags(void);
unsigned long memblock_free_all(void);
void reset_node_managed_pages(pg_data_t *pgdat);
diff --git a/mm/memblock.c b/mm/memblock.c
index 739f769..03b3929 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -125,7 +125,7 @@ static int memblock_can_resize __initdata_memblock;
static int memblock_memory_in_slab __initdata_memblock = 0;
static int memblock_reserved_in_slab __initdata_memblock = 0;
-enum memblock_flags __init_memblock choose_memblock_flags(void)
+static enum memblock_flags __init_memblock choose_memblock_flags(void)
{
return system_has_some_mirror ? MEMBLOCK_MIRROR : MEMBLOCK_NONE;
}
@@ -254,7 +254,7 @@ __memblock_find_range_top_down(phys_addr_t start, phys_addr_t end,
* Return:
* Found address on success, 0 on failure.
*/
-phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
+static phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
phys_addr_t align, phys_addr_t start,
phys_addr_t end, int nid,
enum memblock_flags flags)
--
2.7.4
Make the memblock_phys_alloc() function an inline wrapper for
memblock_phys_alloc_range() and update the memblock_phys_alloc() callers to
check the returned value and panic in case of error.
Signed-off-by: Mike Rapoport <[email protected]>
---
arch/arm/mm/init.c | 4 ++++
arch/arm64/mm/mmu.c | 2 ++
arch/powerpc/sysdev/dart_iommu.c | 3 +++
arch/s390/kernel/crash_dump.c | 3 +++
arch/s390/kernel/setup.c | 3 +++
arch/sh/boards/mach-ap325rxa/setup.c | 3 +++
arch/sh/boards/mach-ecovec24/setup.c | 6 ++++++
arch/sh/boards/mach-kfr2r09/setup.c | 3 +++
arch/sh/boards/mach-migor/setup.c | 3 +++
arch/sh/boards/mach-se/7724/setup.c | 6 ++++++
arch/xtensa/mm/kasan_init.c | 3 +++
include/linux/memblock.h | 7 ++++++-
mm/memblock.c | 5 -----
13 files changed, 45 insertions(+), 6 deletions(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index b76b90e..15dddfe 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -206,6 +206,10 @@ phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align)
BUG_ON(!arm_memblock_steal_permitted);
phys = memblock_phys_alloc(size, align);
+ if (!phys)
+ panic("Failed to steal %pa bytes at %pS\n",
+ &size, (void *)_RET_IP_);
+
memblock_free(phys, size);
memblock_remove(phys, size);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b6f5aa5..a74e4be 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -104,6 +104,8 @@ static phys_addr_t __init early_pgtable_alloc(void)
void *ptr;
phys = memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!phys)
+ panic("Failed to allocate page table page\n");
/*
* The FIX_{PGD,PUD,PMD} slots may be in active use, but the FIX_PTE
diff --git a/arch/powerpc/sysdev/dart_iommu.c b/arch/powerpc/sysdev/dart_iommu.c
index 25bc25f..b82c9ff 100644
--- a/arch/powerpc/sysdev/dart_iommu.c
+++ b/arch/powerpc/sysdev/dart_iommu.c
@@ -265,6 +265,9 @@ static void allocate_dart(void)
* prefetching into invalid pages and corrupting data
*/
tmp = memblock_phys_alloc(DART_PAGE_SIZE, DART_PAGE_SIZE);
+ if (!tmp)
+ panic("DART: table allocation failed\n");
+
dart_emptyval = DARTMAP_VALID | ((tmp >> DART_PAGE_SHIFT) &
DARTMAP_RPNMASK);
diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
index 97eae38..f96a585 100644
--- a/arch/s390/kernel/crash_dump.c
+++ b/arch/s390/kernel/crash_dump.c
@@ -61,6 +61,9 @@ struct save_area * __init save_area_alloc(bool is_boot_cpu)
struct save_area *sa;
sa = (void *) memblock_phys_alloc(sizeof(*sa), 8);
+ if (!sa)
+ panic("Failed to allocate save area\n");
+
if (is_boot_cpu)
list_add(&sa->list, &dump_save_areas);
else
diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
index 72dd23e..da48397 100644
--- a/arch/s390/kernel/setup.c
+++ b/arch/s390/kernel/setup.c
@@ -968,6 +968,9 @@ static void __init setup_randomness(void)
vmms = (struct sysinfo_3_2_2 *) memblock_phys_alloc(PAGE_SIZE,
PAGE_SIZE);
+ if (!vmms)
+ panic("Failed to allocate memory for sysinfo structure\n");
+
if (stsi(vmms, 3, 2, 2) == 0 && vmms->count)
add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count);
memblock_free((unsigned long) vmms, PAGE_SIZE);
diff --git a/arch/sh/boards/mach-ap325rxa/setup.c b/arch/sh/boards/mach-ap325rxa/setup.c
index d7ceab6..08a0cc9 100644
--- a/arch/sh/boards/mach-ap325rxa/setup.c
+++ b/arch/sh/boards/mach-ap325rxa/setup.c
@@ -558,6 +558,9 @@ static void __init ap325rxa_mv_mem_reserve(void)
phys_addr_t size = CEU_BUFFER_MEMORY_SIZE;
phys = memblock_phys_alloc(size, PAGE_SIZE);
+ if (!phys)
+ panic("Failed to allocate CEU memory\n");
+
memblock_free(phys, size);
memblock_remove(phys, size);
diff --git a/arch/sh/boards/mach-ecovec24/setup.c b/arch/sh/boards/mach-ecovec24/setup.c
index a3901806..fd264a6 100644
--- a/arch/sh/boards/mach-ecovec24/setup.c
+++ b/arch/sh/boards/mach-ecovec24/setup.c
@@ -1481,11 +1481,17 @@ static void __init ecovec_mv_mem_reserve(void)
phys_addr_t size = CEU_BUFFER_MEMORY_SIZE;
phys = memblock_phys_alloc(size, PAGE_SIZE);
+ if (!phys)
+ panic("Failed to allocate CEU0 memory\n");
+
memblock_free(phys, size);
memblock_remove(phys, size);
ceu0_dma_membase = phys;
phys = memblock_phys_alloc(size, PAGE_SIZE);
+ if (!phys)
+ panic("Failed to allocate CEU1 memory\n");
+
memblock_free(phys, size);
memblock_remove(phys, size);
ceu1_dma_membase = phys;
diff --git a/arch/sh/boards/mach-kfr2r09/setup.c b/arch/sh/boards/mach-kfr2r09/setup.c
index 55bdf4a..ebe90d06 100644
--- a/arch/sh/boards/mach-kfr2r09/setup.c
+++ b/arch/sh/boards/mach-kfr2r09/setup.c
@@ -632,6 +632,9 @@ static void __init kfr2r09_mv_mem_reserve(void)
phys_addr_t size = CEU_BUFFER_MEMORY_SIZE;
phys = memblock_phys_alloc(size, PAGE_SIZE);
+ if (!phys)
+ panic("Failed to allocate CEU memory\n");
+
memblock_free(phys, size);
memblock_remove(phys, size);
diff --git a/arch/sh/boards/mach-migor/setup.c b/arch/sh/boards/mach-migor/setup.c
index ba7eee6..1adff09 100644
--- a/arch/sh/boards/mach-migor/setup.c
+++ b/arch/sh/boards/mach-migor/setup.c
@@ -631,6 +631,9 @@ static void __init migor_mv_mem_reserve(void)
phys_addr_t size = CEU_BUFFER_MEMORY_SIZE;
phys = memblock_phys_alloc(size, PAGE_SIZE);
+ if (!phys)
+ panic("Failed to allocate CEU memory\n");
+
memblock_free(phys, size);
memblock_remove(phys, size);
diff --git a/arch/sh/boards/mach-se/7724/setup.c b/arch/sh/boards/mach-se/7724/setup.c
index 4696e10..201631a 100644
--- a/arch/sh/boards/mach-se/7724/setup.c
+++ b/arch/sh/boards/mach-se/7724/setup.c
@@ -966,11 +966,17 @@ static void __init ms7724se_mv_mem_reserve(void)
phys_addr_t size = CEU_BUFFER_MEMORY_SIZE;
phys = memblock_phys_alloc(size, PAGE_SIZE);
+ if (!phys)
+ panic("Failed to allocate CEU0 memory\n");
+
memblock_free(phys, size);
memblock_remove(phys, size);
ceu0_dma_membase = phys;
phys = memblock_phys_alloc(size, PAGE_SIZE);
+ if (!phys)
+ panic("Failed to allocate CEU1 memory\n");
+
memblock_free(phys, size);
memblock_remove(phys, size);
ceu1_dma_membase = phys;
diff --git a/arch/xtensa/mm/kasan_init.c b/arch/xtensa/mm/kasan_init.c
index 48dbb03..4852848 100644
--- a/arch/xtensa/mm/kasan_init.c
+++ b/arch/xtensa/mm/kasan_init.c
@@ -54,6 +54,9 @@ static void __init populate(void *start, void *end)
phys_addr_t phys =
memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
+ if (!phys)
+ panic("Failed to allocate page table page\n");
+
set_pte(pte + j, pfn_pte(PHYS_PFN(phys), PAGE_KERNEL));
}
}
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 66dfdb3..7883c74 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -374,7 +374,12 @@ phys_addr_t memblock_phys_alloc_range(phys_addr_t size, phys_addr_t align,
phys_addr_t memblock_phys_alloc_nid(phys_addr_t size, phys_addr_t align, int nid);
phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid);
-phys_addr_t memblock_phys_alloc(phys_addr_t size, phys_addr_t align);
+static inline phys_addr_t memblock_phys_alloc(phys_addr_t size,
+ phys_addr_t align)
+{
+ return memblock_phys_alloc_range(size, align, 0,
+ MEMBLOCK_ALLOC_ACCESSIBLE);
+}
void *memblock_alloc_try_nid_raw(phys_addr_t size, phys_addr_t align,
phys_addr_t min_addr, phys_addr_t max_addr,
diff --git a/mm/memblock.c b/mm/memblock.c
index 8aabb1b..461e40a3 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1382,11 +1382,6 @@ phys_addr_t __init memblock_alloc_base(phys_addr_t size, phys_addr_t align, phys
return alloc;
}
-phys_addr_t __init memblock_phys_alloc(phys_addr_t size, phys_addr_t align)
-{
- return memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ACCESSIBLE);
-}
-
phys_addr_t __init memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid)
{
phys_addr_t res = memblock_phys_alloc_nid(size, align, nid);
--
2.7.4
memblock_alloc_base_nid() is a one-line wrapper for
memblock_alloc_range_nid() without any side effects.
Replace its usage with direct calls to memblock_alloc_range_nid().
Signed-off-by: Mike Rapoport <[email protected]>
---
include/linux/memblock.h | 3 ---
mm/memblock.c | 15 ++++-----------
2 files changed, 4 insertions(+), 14 deletions(-)
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index 60e100f..f7ef313 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -490,9 +490,6 @@ static inline bool memblock_bottom_up(void)
phys_addr_t __init memblock_alloc_range(phys_addr_t size, phys_addr_t align,
phys_addr_t start, phys_addr_t end,
enum memblock_flags flags);
-phys_addr_t memblock_alloc_base_nid(phys_addr_t size,
- phys_addr_t align, phys_addr_t max_addr,
- int nid, enum memblock_flags flags);
phys_addr_t memblock_alloc_base(phys_addr_t size, phys_addr_t align,
phys_addr_t max_addr);
phys_addr_t __memblock_alloc_base(phys_addr_t size, phys_addr_t align,
diff --git a/mm/memblock.c b/mm/memblock.c
index a32db30..c80029e 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1346,21 +1346,14 @@ phys_addr_t __init memblock_alloc_range(phys_addr_t size, phys_addr_t align,
flags);
}
-phys_addr_t __init memblock_alloc_base_nid(phys_addr_t size,
- phys_addr_t align, phys_addr_t max_addr,
- int nid, enum memblock_flags flags)
-{
- return memblock_alloc_range_nid(size, align, 0, max_addr, nid, flags);
-}
-
phys_addr_t __init memblock_phys_alloc_nid(phys_addr_t size, phys_addr_t align, int nid)
{
enum memblock_flags flags = choose_memblock_flags();
phys_addr_t ret;
again:
- ret = memblock_alloc_base_nid(size, align, MEMBLOCK_ALLOC_ACCESSIBLE,
- nid, flags);
+ ret = memblock_alloc_range_nid(size, align, 0,
+ MEMBLOCK_ALLOC_ACCESSIBLE, nid, flags);
if (!ret && (flags & MEMBLOCK_MIRROR)) {
flags &= ~MEMBLOCK_MIRROR;
@@ -1371,8 +1364,8 @@ phys_addr_t __init memblock_phys_alloc_nid(phys_addr_t size, phys_addr_t align,
phys_addr_t __init __memblock_alloc_base(phys_addr_t size, phys_addr_t align, phys_addr_t max_addr)
{
- return memblock_alloc_base_nid(size, align, max_addr, NUMA_NO_NODE,
- MEMBLOCK_NONE);
+ return memblock_alloc_range_nid(size, align, 0, max_addr, NUMA_NO_NODE,
+ MEMBLOCK_NONE);
}
phys_addr_t __init memblock_alloc_base(phys_addr_t size, phys_addr_t align, phys_addr_t max_addr)
--
2.7.4
The calls to memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ANYWHERE) and
memblock_phys_alloc(size, align) are equivalent, as both try to allocate
'size' bytes with 'align' alignment anywhere in memory and panic if the
allocation fails.
The conversion is done using the following semantic patch:
@@
expression size, align;
@@
- memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ANYWHERE)
+ memblock_phys_alloc(size, align)
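(Such a semantic patch can be applied with Coccinelle, e.g. with something
like 'spatch --sp-file phys_alloc.cocci --in-place --dir arch/'; the .cocci
file name here is hypothetical.)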
Signed-off-by: Mike Rapoport <[email protected]>
---
arch/arm/mm/init.c | 2 +-
arch/sh/boards/mach-ap325rxa/setup.c | 2 +-
arch/sh/boards/mach-ecovec24/setup.c | 4 ++--
arch/sh/boards/mach-kfr2r09/setup.c | 2 +-
arch/sh/boards/mach-migor/setup.c | 2 +-
arch/sh/boards/mach-se/7724/setup.c | 4 ++--
arch/xtensa/mm/kasan_init.c | 3 +--
7 files changed, 9 insertions(+), 10 deletions(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 478ea8b..b76b90e 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -205,7 +205,7 @@ phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align)
BUG_ON(!arm_memblock_steal_permitted);
- phys = memblock_alloc_base(size, align, MEMBLOCK_ALLOC_ANYWHERE);
+ phys = memblock_phys_alloc(size, align);
memblock_free(phys, size);
memblock_remove(phys, size);
diff --git a/arch/sh/boards/mach-ap325rxa/setup.c b/arch/sh/boards/mach-ap325rxa/setup.c
index 8f234d04..d7ceab6 100644
--- a/arch/sh/boards/mach-ap325rxa/setup.c
+++ b/arch/sh/boards/mach-ap325rxa/setup.c
@@ -557,7 +557,7 @@ static void __init ap325rxa_mv_mem_reserve(void)
phys_addr_t phys;
phys_addr_t size = CEU_BUFFER_MEMORY_SIZE;
- phys = memblock_alloc_base(size, PAGE_SIZE, MEMBLOCK_ALLOC_ANYWHERE);
+ phys = memblock_phys_alloc(size, PAGE_SIZE);
memblock_free(phys, size);
memblock_remove(phys, size);
diff --git a/arch/sh/boards/mach-ecovec24/setup.c b/arch/sh/boards/mach-ecovec24/setup.c
index 22b4106..a3901806 100644
--- a/arch/sh/boards/mach-ecovec24/setup.c
+++ b/arch/sh/boards/mach-ecovec24/setup.c
@@ -1480,12 +1480,12 @@ static void __init ecovec_mv_mem_reserve(void)
phys_addr_t phys;
phys_addr_t size = CEU_BUFFER_MEMORY_SIZE;
- phys = memblock_alloc_base(size, PAGE_SIZE, MEMBLOCK_ALLOC_ANYWHERE);
+ phys = memblock_phys_alloc(size, PAGE_SIZE);
memblock_free(phys, size);
memblock_remove(phys, size);
ceu0_dma_membase = phys;
- phys = memblock_alloc_base(size, PAGE_SIZE, MEMBLOCK_ALLOC_ANYWHERE);
+ phys = memblock_phys_alloc(size, PAGE_SIZE);
memblock_free(phys, size);
memblock_remove(phys, size);
ceu1_dma_membase = phys;
diff --git a/arch/sh/boards/mach-kfr2r09/setup.c b/arch/sh/boards/mach-kfr2r09/setup.c
index 203d249..55bdf4a 100644
--- a/arch/sh/boards/mach-kfr2r09/setup.c
+++ b/arch/sh/boards/mach-kfr2r09/setup.c
@@ -631,7 +631,7 @@ static void __init kfr2r09_mv_mem_reserve(void)
phys_addr_t phys;
phys_addr_t size = CEU_BUFFER_MEMORY_SIZE;
- phys = memblock_alloc_base(size, PAGE_SIZE, MEMBLOCK_ALLOC_ANYWHERE);
+ phys = memblock_phys_alloc(size, PAGE_SIZE);
memblock_free(phys, size);
memblock_remove(phys, size);
diff --git a/arch/sh/boards/mach-migor/setup.c b/arch/sh/boards/mach-migor/setup.c
index f4ad33c..ba7eee6 100644
--- a/arch/sh/boards/mach-migor/setup.c
+++ b/arch/sh/boards/mach-migor/setup.c
@@ -630,7 +630,7 @@ static void __init migor_mv_mem_reserve(void)
phys_addr_t phys;
phys_addr_t size = CEU_BUFFER_MEMORY_SIZE;
- phys = memblock_alloc_base(size, PAGE_SIZE, MEMBLOCK_ALLOC_ANYWHERE);
+ phys = memblock_phys_alloc(size, PAGE_SIZE);
memblock_free(phys, size);
memblock_remove(phys, size);
diff --git a/arch/sh/boards/mach-se/7724/setup.c b/arch/sh/boards/mach-se/7724/setup.c
index fdbec22a..4696e10 100644
--- a/arch/sh/boards/mach-se/7724/setup.c
+++ b/arch/sh/boards/mach-se/7724/setup.c
@@ -965,12 +965,12 @@ static void __init ms7724se_mv_mem_reserve(void)
phys_addr_t phys;
phys_addr_t size = CEU_BUFFER_MEMORY_SIZE;
- phys = memblock_alloc_base(size, PAGE_SIZE, MEMBLOCK_ALLOC_ANYWHERE);
+ phys = memblock_phys_alloc(size, PAGE_SIZE);
memblock_free(phys, size);
memblock_remove(phys, size);
ceu0_dma_membase = phys;
- phys = memblock_alloc_base(size, PAGE_SIZE, MEMBLOCK_ALLOC_ANYWHERE);
+ phys = memblock_phys_alloc(size, PAGE_SIZE);
memblock_free(phys, size);
memblock_remove(phys, size);
ceu1_dma_membase = phys;
diff --git a/arch/xtensa/mm/kasan_init.c b/arch/xtensa/mm/kasan_init.c
index 1734cda..48dbb03 100644
--- a/arch/xtensa/mm/kasan_init.c
+++ b/arch/xtensa/mm/kasan_init.c
@@ -52,8 +52,7 @@ static void __init populate(void *start, void *end)
for (k = 0; k < PTRS_PER_PTE; ++k, ++j) {
phys_addr_t phys =
- memblock_alloc_base(PAGE_SIZE, PAGE_SIZE,
- MEMBLOCK_ALLOC_ANYWHERE);
+ memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE);
set_pte(pte + j, pfn_pte(PHYS_PFN(phys), PAGE_KERNEL));
}
--
2.7.4
On Mon, Jan 21, 2019 at 10:04:06AM +0200, Mike Rapoport wrote:
> Add check for the return value of memblock_alloc*() functions and call
> panic() in case of error.
> The panic message repeats the one used by panicking memblock allocators with
> adjustment of parameters to include only relevant ones.
>
> The replacement was mostly automated with semantic patches like the one
> below with manual massaging of format strings.
>
> @@
> expression ptr, size, align;
> @@
> ptr = memblock_alloc(size, align);
> + if (!ptr)
> + panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__,
> size, align);
>
> Signed-off-by: Mike Rapoport <[email protected]>
> Reviewed-by: Guo Ren <[email protected]> # c-sky
> Acked-by: Paul Burton <[email protected]> # MIPS
> Acked-by: Heiko Carstens <[email protected]> # s390
> Reviewed-by: Juergen Gross <[email protected]> # Xen
> ---
> arch/alpha/kernel/core_cia.c | 3 +++
> arch/alpha/kernel/core_marvel.c | 6 ++++++
> arch/alpha/kernel/pci-noop.c | 13 +++++++++++--
> arch/alpha/kernel/pci.c | 11 ++++++++++-
> arch/alpha/kernel/pci_iommu.c | 12 ++++++++++++
> arch/arc/mm/highmem.c | 4 ++++
> arch/arm/kernel/setup.c | 6 ++++++
> arch/arm/mm/mmu.c | 14 +++++++++++++-
> arch/arm64/kernel/setup.c | 8 +++++---
> arch/arm64/mm/kasan_init.c | 10 ++++++++++
> arch/c6x/mm/dma-coherent.c | 4 ++++
> arch/c6x/mm/init.c | 3 +++
> arch/csky/mm/highmem.c | 5 +++++
> arch/h8300/mm/init.c | 3 +++
> arch/m68k/atari/stram.c | 4 ++++
> arch/m68k/mm/init.c | 3 +++
> arch/m68k/mm/mcfmmu.c | 6 ++++++
> arch/m68k/mm/motorola.c | 9 +++++++++
> arch/m68k/mm/sun3mmu.c | 6 ++++++
> arch/m68k/sun3/sun3dvma.c | 3 +++
> arch/microblaze/mm/init.c | 8 ++++++--
> arch/mips/cavium-octeon/dma-octeon.c | 3 +++
> arch/mips/kernel/setup.c | 3 +++
> arch/mips/kernel/traps.c | 3 +++
> arch/mips/mm/init.c | 5 +++++
> arch/nds32/mm/init.c | 12 ++++++++++++
> arch/openrisc/mm/ioremap.c | 8 ++++++--
> arch/powerpc/kernel/dt_cpu_ftrs.c | 5 +++++
> arch/powerpc/kernel/pci_32.c | 3 +++
> arch/powerpc/kernel/setup-common.c | 3 +++
> arch/powerpc/kernel/setup_64.c | 4 ++++
> arch/powerpc/lib/alloc.c | 3 +++
> arch/powerpc/mm/hash_utils_64.c | 3 +++
> arch/powerpc/mm/mmu_context_nohash.c | 9 +++++++++
> arch/powerpc/mm/pgtable-book3e.c | 12 ++++++++++--
> arch/powerpc/mm/pgtable-book3s64.c | 3 +++
> arch/powerpc/mm/pgtable-radix.c | 9 ++++++++-
> arch/powerpc/mm/ppc_mmu_32.c | 3 +++
> arch/powerpc/platforms/pasemi/iommu.c | 3 +++
> arch/powerpc/platforms/powermac/nvram.c | 3 +++
> arch/powerpc/platforms/powernv/opal.c | 3 +++
> arch/powerpc/platforms/powernv/pci-ioda.c | 8 ++++++++
> arch/powerpc/platforms/ps3/setup.c | 3 +++
> arch/powerpc/sysdev/msi_bitmap.c | 3 +++
> arch/s390/kernel/setup.c | 13 +++++++++++++
> arch/s390/kernel/smp.c | 5 ++++-
> arch/s390/kernel/topology.c | 6 ++++++
> arch/s390/numa/mode_emu.c | 3 +++
> arch/s390/numa/numa.c | 6 +++++-
> arch/sh/mm/init.c | 6 ++++++
> arch/sh/mm/numa.c | 4 ++++
> arch/um/drivers/net_kern.c | 3 +++
> arch/um/drivers/vector_kern.c | 3 +++
> arch/um/kernel/initrd.c | 2 ++
> arch/um/kernel/mem.c | 16 ++++++++++++++++
> arch/unicore32/kernel/setup.c | 4 ++++
> arch/unicore32/mm/mmu.c | 15 +++++++++++++--
> arch/x86/kernel/acpi/boot.c | 3 +++
> arch/x86/kernel/apic/io_apic.c | 5 +++++
> arch/x86/kernel/e820.c | 3 +++
> arch/x86/platform/olpc/olpc_dt.c | 3 +++
> arch/x86/xen/p2m.c | 11 +++++++++--
> arch/xtensa/mm/kasan_init.c | 4 ++++
> arch/xtensa/mm/mmu.c | 3 +++
> drivers/clk/ti/clk.c | 3 +++
> drivers/macintosh/smu.c | 3 +++
> drivers/of/fdt.c | 8 +++++++-
> drivers/of/unittest.c | 8 +++++++-
Acked-by: Rob Herring <[email protected]>
> drivers/xen/swiotlb-xen.c | 7 +++++--
> kernel/power/snapshot.c | 3 +++
> lib/cpumask.c | 3 +++
> mm/kasan/init.c | 10 ++++++++--
> mm/sparse.c | 19 +++++++++++++++++--
> 73 files changed, 409 insertions(+), 28 deletions(-)
On Mon, Jan 21, 2019 at 10:03:53AM +0200, Mike Rapoport wrote:
> diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
> index ae34e3a..2c61ea4 100644
> --- a/arch/arm64/mm/numa.c
> +++ b/arch/arm64/mm/numa.c
> @@ -237,6 +237,10 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
> pr_info("Initmem setup node %d [<memory-less node>]\n", nid);
>
> nd_pa = memblock_phys_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
> + if (!nd_pa)
> + panic("Cannot allocate %zu bytes for node %d data\n",
> + nd_size, nid);
> +
> nd = __va(nd_pa);
>
> /* report and initialize */
Does it mean that memblock_phys_alloc_try_nid() never returns valid
physical memory starting at 0?
--
Catalin
On Fri, Jan 25, 2019 at 05:45:02PM +0000, Catalin Marinas wrote:
> On Mon, Jan 21, 2019 at 10:03:53AM +0200, Mike Rapoport wrote:
> > diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
> > index ae34e3a..2c61ea4 100644
> > --- a/arch/arm64/mm/numa.c
> > +++ b/arch/arm64/mm/numa.c
> > @@ -237,6 +237,10 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
> > pr_info("Initmem setup node %d [<memory-less node>]\n", nid);
> >
> > nd_pa = memblock_phys_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
> > + if (!nd_pa)
> > + panic("Cannot allocate %zu bytes for node %d data\n",
> > + nd_size, nid);
> > +
> > nd = __va(nd_pa);
> >
> > /* report and initialize */
>
> Does it mean that memblock_phys_alloc_try_nid() never returns valid
> physical memory starting at 0?
Yes, it does.
memblock_find_in_range_node(), which is used by all the allocation methods,
skips the first page [1].
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/memblock.c#n257
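For reference, the relevant part of memblock_find_in_range_node() looks
roughly like this (abridged from the code behind the link above):

static phys_addr_t __init_memblock memblock_find_in_range_node(phys_addr_t size,
					phys_addr_t align, phys_addr_t start,
					phys_addr_t end, int nid,
					enum memblock_flags flags)
{
	...
	/* avoid allocating the first page */
	start = max_t(phys_addr_t, start, PAGE_SIZE);
	...
}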
> --
> Catalin
>
--
Sincerely yours,
Mike.
On Mon, Jan 21, 2019 at 10:03:48AM +0200, Mike Rapoport wrote:
> The allocation of the page table memory in openrisc uses
> memblock_phys_alloc() and then converts the returned physical address to
> a virtual one. Use memblock_alloc_raw() and add a panic() if the allocation
> fails.
>
> Signed-off-by: Mike Rapoport <[email protected]>
> ---
> arch/openrisc/mm/init.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
> index d157310..caeb418 100644
> --- a/arch/openrisc/mm/init.c
> +++ b/arch/openrisc/mm/init.c
> @@ -105,7 +105,10 @@ static void __init map_ram(void)
> }
>
> /* Alloc one page for holding PTE's... */
> - pte = (pte_t *) __va(memblock_phys_alloc(PAGE_SIZE, PAGE_SIZE));
> + pte = memblock_alloc_raw(PAGE_SIZE, PAGE_SIZE);
> + if (!pte)
> + panic("%s: Failed to allocate page for PTEs\n",
> + __func__);
> set_pmd(pme, __pmd(_KERNPG_TABLE + __pa(pte)));
>
> /* Fill the newly allocated page with PTE'S */
This seems reasonable to me.
Acked-by: Stafford Horne <[email protected]>
Mike Rapoport <[email protected]> writes:
> From: Christophe Leroy <[email protected]>
>
> Since only the virtual address of allocated blocks is used,
> let's use the functions that return the virtual address directly.
>
> Those functions have the advantage of also zeroing the block.
>
> [ MR:
> - updated error message in alloc_stack() to be more verbose
> - converted several additional call sites ]
>
> Signed-off-by: Christophe Leroy <[email protected]>
> Signed-off-by: Mike Rapoport <[email protected]>
> ---
> arch/powerpc/kernel/dt_cpu_ftrs.c | 3 +--
> arch/powerpc/kernel/irq.c | 5 -----
> arch/powerpc/kernel/paca.c | 6 +++++-
> arch/powerpc/kernel/prom.c | 5 ++++-
> arch/powerpc/kernel/setup_32.c | 26 ++++++++++++++++----------
> 5 files changed, 26 insertions(+), 19 deletions(-)
LGTM.
Acked-by: Michael Ellerman <[email protected]>
cheers
Mike Rapoport <[email protected]> writes:
> diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
> index ae34e3a..2c61ea4 100644
> --- a/arch/arm64/mm/numa.c
> +++ b/arch/arm64/mm/numa.c
> @@ -237,6 +237,10 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
> pr_info("Initmem setup node %d [<memory-less node>]\n", nid);
>
> nd_pa = memblock_phys_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
> + if (!nd_pa)
> + panic("Cannot allocate %zu bytes for node %d data\n",
> + nd_size, nid);
> +
> nd = __va(nd_pa);
Acked-by: Michael Ellerman <[email protected]> (powerpc)
cheers
Michael Ellerman <[email protected]> writes:
> Mike Rapoport <[email protected]> writes:
>
>> diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
>> index ae34e3a..2c61ea4 100644
>> --- a/arch/arm64/mm/numa.c
>> +++ b/arch/arm64/mm/numa.c
>> @@ -237,6 +237,10 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
>> pr_info("Initmem setup node %d [<memory-less node>]\n", nid);
>>
>> nd_pa = memblock_phys_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
>> + if (!nd_pa)
>> + panic("Cannot allocate %zu bytes for node %d data\n",
>> + nd_size, nid);
>> +
>> nd = __va(nd_pa);
Wrong hunk, O_o
> Acked-by: Michael Ellerman <[email protected]> (powerpc)
You know what I mean though :)
cheers
Mike Rapoport <[email protected]> writes:
> The memblock_alloc_base() function tries to allocate memory up to the
> limit specified by its max_addr parameter and panics if the allocation
> fails. Replace its usage with memblock_phys_alloc_range() and make the
> callers check the return value and panic in case of error.
>
> Signed-off-by: Mike Rapoport <[email protected]>
> ---
> arch/powerpc/kernel/rtas.c | 6 +++++-
> arch/powerpc/mm/hash_utils_64.c | 8 ++++++--
> arch/s390/kernel/smp.c | 6 +++++-
> drivers/macintosh/smu.c | 2 +-
> include/linux/memblock.h | 2 --
> mm/memblock.c | 14 --------------
> 6 files changed, 17 insertions(+), 21 deletions(-)
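(For the callers, the conversion amounts to something like the following
sketch, with 'limit' standing for the former max_addr argument:

	addr = memblock_phys_alloc_range(size, align, 0, limit);
	if (!addr)
		panic("cannot allocate memory below %pa\n", &limit);
)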
Acked-by: Michael Ellerman <[email protected]> (powerpc)
cheers
On Mon 2019-01-21 10:04:08, Mike Rapoport wrote:
> As all the memblock allocation functions return NULL in case of error
> rather than panic(), the duplicates with _nopanic suffix can be removed.
>
> Signed-off-by: Mike Rapoport <[email protected]>
> Acked-by: Greg Kroah-Hartman <[email protected]>
> ---
> arch/arc/kernel/unwind.c | 3 +--
> arch/sh/mm/init.c | 2 +-
> arch/x86/kernel/setup_percpu.c | 10 +++++-----
> arch/x86/mm/kasan_init_64.c | 14 ++++++++------
> drivers/firmware/memmap.c | 2 +-
> drivers/usb/early/xhci-dbc.c | 2 +-
> include/linux/memblock.h | 35 -----------------------------------
> kernel/dma/swiotlb.c | 2 +-
> kernel/printk/printk.c | 9 +--------
For printk:
Reviewed-by: Petr Mladek <[email protected]>
Acked-by: Petr Mladek <[email protected]>
Best Regards,
Petr
> mm/memblock.c | 35 -----------------------------------
> mm/page_alloc.c | 10 +++++-----
> mm/page_ext.c | 2 +-
> mm/percpu.c | 11 ++++-------
> mm/sparse.c | 6 ++----
> 14 files changed, 31 insertions(+), 112 deletions(-)
>
On 21/01/2019 at 09:04, Mike Rapoport wrote:
> Add check for the return value of memblock_alloc*() functions and call
> panic() in case of error.
> The panic message repeats the one used by panicking memblock allocators with
> adjustment of parameters to include only relevant ones.
>
> The replacement was mostly automated with semantic patches like the one
> below with manual massaging of format strings.
>
> @@
> expression ptr, size, align;
> @@
> ptr = memblock_alloc(size, align);
> + if (!ptr)
> + panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__,
> size, align);
>
> Signed-off-by: Mike Rapoport <[email protected]>
> Reviewed-by: Guo Ren <[email protected]> # c-sky
> Acked-by: Paul Burton <[email protected]> # MIPS
> Acked-by: Heiko Carstens <[email protected]> # s390
> Reviewed-by: Juergen Gross <[email protected]> # Xen
> ---
[...]
> diff --git a/mm/sparse.c b/mm/sparse.c
> index 7ea5dc6..ad94242 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
[...]
> @@ -425,6 +436,10 @@ static void __init sparse_buffer_init(unsigned long size, int nid)
> memblock_alloc_try_nid_raw(size, PAGE_SIZE,
> __pa(MAX_DMA_ADDRESS),
> MEMBLOCK_ALLOC_ACCESSIBLE, nid);
> + if (!sparsemap_buf)
> + panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%lx\n",
> + __func__, size, PAGE_SIZE, nid, __pa(MAX_DMA_ADDRESS));
> +
memblock_alloc_try_nid_raw() does not panic (help explicitly says: Does
not zero allocated memory, does not panic if request cannot be satisfied.).
Stephen Rothwell reports a boot failure due to this change.
Christophe
> sparsemap_buf_end = sparsemap_buf + size;
> }
>
>
On Thu, Jan 31, 2019 at 07:07:46AM +0100, Christophe Leroy wrote:
>
>
> On 21/01/2019 at 09:04, Mike Rapoport wrote:
> >Add check for the return value of memblock_alloc*() functions and call
> >panic() in case of error.
> >The panic message repeats the one used by panicking memblock allocators with
> >adjustment of parameters to include only relevant ones.
> >
> >The replacement was mostly automated with semantic patches like the one
> >below with manual massaging of format strings.
> >
> >@@
> >expression ptr, size, align;
> >@@
> >ptr = memblock_alloc(size, align);
> >+ if (!ptr)
> >+ panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__,
> >size, align);
> >
> >Signed-off-by: Mike Rapoport <[email protected]>
> >Reviewed-by: Guo Ren <[email protected]> # c-sky
> >Acked-by: Paul Burton <[email protected]> # MIPS
> >Acked-by: Heiko Carstens <[email protected]> # s390
> >Reviewed-by: Juergen Gross <[email protected]> # Xen
> >---
>
> [...]
>
> >diff --git a/mm/sparse.c b/mm/sparse.c
> >index 7ea5dc6..ad94242 100644
> >--- a/mm/sparse.c
> >+++ b/mm/sparse.c
>
> [...]
>
> >@@ -425,6 +436,10 @@ static void __init sparse_buffer_init(unsigned long size, int nid)
> > memblock_alloc_try_nid_raw(size, PAGE_SIZE,
> > __pa(MAX_DMA_ADDRESS),
> > MEMBLOCK_ALLOC_ACCESSIBLE, nid);
> >+ if (!sparsemap_buf)
> >+ panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%lx\n",
> >+ __func__, size, PAGE_SIZE, nid, __pa(MAX_DMA_ADDRESS));
> >+
>
> memblock_alloc_try_nid_raw() does not panic (help explicitly says: Does not
> zero allocated memory, does not panic if request cannot be satisfied.).
"Does not panic" does not mean it always succeeds.
> Stephen Rothwell reports a boot failure due to this change.
Please see my reply on that thread.
> Christophe
>
> > sparsemap_buf_end = sparsemap_buf + size;
> > }
> >
>
--
Sincerely yours,
Mike.
On 31/01/2019 at 07:41, Mike Rapoport wrote:
> On Thu, Jan 31, 2019 at 07:07:46AM +0100, Christophe Leroy wrote:
>>
>>
>> On 21/01/2019 at 09:04, Mike Rapoport wrote:
>>> Add check for the return value of memblock_alloc*() functions and call
>>> panic() in case of error.
>>> The panic message repeats the one used by panicking memblock allocators with
>>> adjustment of parameters to include only relevant ones.
>>>
>>> The replacement was mostly automated with semantic patches like the one
>>> below with manual massaging of format strings.
>>>
>>> @@
>>> expression ptr, size, align;
>>> @@
>>> ptr = memblock_alloc(size, align);
>>> + if (!ptr)
>>> + panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__,
>>> size, align);
>>>
>>> Signed-off-by: Mike Rapoport <[email protected]>
>>> Reviewed-by: Guo Ren <[email protected]> # c-sky
>>> Acked-by: Paul Burton <[email protected]> # MIPS
>>> Acked-by: Heiko Carstens <[email protected]> # s390
>>> Reviewed-by: Juergen Gross <[email protected]> # Xen
>>> ---
>>
>> [...]
>>
>>> diff --git a/mm/sparse.c b/mm/sparse.c
>>> index 7ea5dc6..ad94242 100644
>>> --- a/mm/sparse.c
>>> +++ b/mm/sparse.c
>>
>> [...]
>>
>>> @@ -425,6 +436,10 @@ static void __init sparse_buffer_init(unsigned long size, int nid)
>>> memblock_alloc_try_nid_raw(size, PAGE_SIZE,
>>> __pa(MAX_DMA_ADDRESS),
>>> MEMBLOCK_ALLOC_ACCESSIBLE, nid);
>>> + if (!sparsemap_buf)
>>> + panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%lx\n",
>>> + __func__, size, PAGE_SIZE, nid, __pa(MAX_DMA_ADDRESS));
>>> +
>>
>> memblock_alloc_try_nid_raw() does not panic (help explicitly says: Does not
>> zero allocated memory, does not panic if request cannot be satisfied.).
>
> "Does not panic" does not mean it always succeeds.
I agree, but at least here you are changing the behaviour by making it
panic explicitly. Are we sure there are no cases where the system could
just continue functioning? Maybe a WARN_ON() would be enough there?
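Something along these lines, say (just a hypothetical sketch of the
suggestion, reusing the hunk above):

	sparsemap_buf =
		memblock_alloc_try_nid_raw(size, PAGE_SIZE,
					   __pa(MAX_DMA_ADDRESS),
					   MEMBLOCK_ALLOC_ACCESSIBLE, nid);
	/* all users of sparsemap_buf already check it for NULL */
	WARN_ON(!sparsemap_buf);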
Christophe
>
>> Stephen Rothwell reports a boot failure due to this change.
>
> Please see my reply on that thread.
>
>> Christophe
>>
>>> sparsemap_buf_end = sparsemap_buf + size;
>>> }
>>>
>>
>
On 31/01/2019 at 07:44, Christophe Leroy wrote:
>
>
> On 31/01/2019 at 07:41, Mike Rapoport wrote:
>> On Thu, Jan 31, 2019 at 07:07:46AM +0100, Christophe Leroy wrote:
>>>
>>>
>>> On 21/01/2019 at 09:04, Mike Rapoport wrote:
>>>> Add check for the return value of memblock_alloc*() functions and call
>>>> panic() in case of error.
>>>> The panic message repeats the one used by panicking memblock
>>>> allocators with
>>>> adjustment of parameters to include only relevant ones.
>>>>
>>>> The replacement was mostly automated with semantic patches like the one
>>>> below with manual massaging of format strings.
>>>>
>>>> @@
>>>> expression ptr, size, align;
>>>> @@
>>>> ptr = memblock_alloc(size, align);
>>>> + if (!ptr)
>>>> +     panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__,
>>>> size, align);
>>>>
>>>> Signed-off-by: Mike Rapoport <[email protected]>
>>>> Reviewed-by: Guo Ren <[email protected]> # c-sky
>>>> Acked-by: Paul Burton <[email protected]> # MIPS
>>>> Acked-by: Heiko Carstens <[email protected]> # s390
>>>> Reviewed-by: Juergen Gross <[email protected]> # Xen
>>>> ---
>>>
>>> [...]
>>>
>>>> diff --git a/mm/sparse.c b/mm/sparse.c
>>>> index 7ea5dc6..ad94242 100644
>>>> --- a/mm/sparse.c
>>>> +++ b/mm/sparse.c
>>>
>>> [...]
>>>
>>>> @@ -425,6 +436,10 @@ static void __init sparse_buffer_init(unsigned
>>>> long size, int nid)
>>>>          memblock_alloc_try_nid_raw(size, PAGE_SIZE,
>>>>                          __pa(MAX_DMA_ADDRESS),
>>>>                          MEMBLOCK_ALLOC_ACCESSIBLE, nid);
>>>> +    if (!sparsemap_buf)
>>>> +        panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d
>>>> from=%lx\n",
>>>> +              __func__, size, PAGE_SIZE, nid, __pa(MAX_DMA_ADDRESS));
>>>> +
>>>
>>> memblock_alloc_try_nid_raw() does not panic (help explicitly says: Does
>>> not zero allocated memory, does not panic if request cannot be
>>> satisfied.).
>>
>> "Does not panic" does not mean it always succeeds.
>
> I agree, but at least here you are changing the behaviour by making it
> panic explicitly. Are we sure there are no cases where the system could
> just continue functioning? Maybe a WARN_ON() would be enough there?
Looking in more detail, it looks like everything is done to live with a
NULL sparsemap_buf: all functions using it check it, so having it NULL
shouldn't imply a panic, I believe; see the code below and the sketch
that follows it.
static void *sparsemap_buf __meminitdata;
static void *sparsemap_buf_end __meminitdata;

static void __init sparse_buffer_init(unsigned long size, int nid)
{
	WARN_ON(sparsemap_buf);	/* forgot to call sparse_buffer_fini()? */
	sparsemap_buf =
		memblock_alloc_try_nid_raw(size, PAGE_SIZE,
					   __pa(MAX_DMA_ADDRESS),
					   MEMBLOCK_ALLOC_ACCESSIBLE, nid);
	sparsemap_buf_end = sparsemap_buf + size;
}

static void __init sparse_buffer_fini(void)
{
	unsigned long size = sparsemap_buf_end - sparsemap_buf;

	if (sparsemap_buf && size > 0)
		memblock_free_early(__pa(sparsemap_buf), size);
	sparsemap_buf = NULL;
}

void * __meminit sparse_buffer_alloc(unsigned long size)
{
	void *ptr = NULL;

	if (sparsemap_buf) {
		ptr = PTR_ALIGN(sparsemap_buf, size);
		if (ptr + size > sparsemap_buf_end)
			ptr = NULL;
		else
			sparsemap_buf = ptr + size;
	}
	return ptr;
}
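A minimal sketch of the WARN-based alternative suggested above (hypothetical,
not an actual patch; it relies on all users of sparsemap_buf coping with a
NULL value, as the code above shows):

	static void __init sparse_buffer_init(unsigned long size, int nid)
	{
		WARN_ON(sparsemap_buf);	/* forgot to call sparse_buffer_fini()? */
		sparsemap_buf =
			memblock_alloc_try_nid_raw(size, PAGE_SIZE,
						   __pa(MAX_DMA_ADDRESS),
						   MEMBLOCK_ALLOC_ACCESSIBLE, nid);
		/* warn instead of panicking; sparse_buffer_alloc() returns
		 * NULL in that case and its callers check for it */
		if (WARN_ON(!sparsemap_buf))
			return;
		sparsemap_buf_end = sparsemap_buf + size;
	}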
Christophe
On Thu, Jan 31, 2019 at 08:07:29AM +0100, Christophe Leroy wrote:
>
>
> On 31/01/2019 at 07:44, Christophe Leroy wrote:
> >
> >
> >On 31/01/2019 at 07:41, Mike Rapoport wrote:
> >>On Thu, Jan 31, 2019 at 07:07:46AM +0100, Christophe Leroy wrote:
> >>>
> >>>
> >>>On 21/01/2019 at 09:04, Mike Rapoport wrote:
> >>>>Add a check for the return value of the memblock_alloc*() functions
> >>>>and call panic() in case of error.
> >>>>The panic message repeats the one used by the panicking memblock
> >>>>allocators, with the parameters adjusted to include only the relevant
> >>>>ones.
> >>>>
> >>>>The replacement was mostly automated with semantic patches like the one
> >>>>below with manual massaging of format strings.
> >>>>
> >>>>@@
> >>>>expression ptr, size, align;
> >>>>@@
> >>>>ptr = memblock_alloc(size, align);
> >>>>+ if (!ptr)
> >>>>+	panic("%s: Failed to allocate %lu bytes align=0x%lx\n", __func__,
> >>>>+	      size, align);
> >>>>
> >>>>Signed-off-by: Mike Rapoport <[email protected]>
> >>>>Reviewed-by: Guo Ren <[email protected]>              # c-sky
> >>>>Acked-by: Paul Burton <[email protected]>         # MIPS
> >>>>Acked-by: Heiko Carstens <[email protected]> # s390
> >>>>Reviewed-by: Juergen Gross <[email protected]>         # Xen
> >>>>---
> >>>
> >>>[...]
> >>>
> >>>>diff --git a/mm/sparse.c b/mm/sparse.c
> >>>>index 7ea5dc6..ad94242 100644
> >>>>--- a/mm/sparse.c
> >>>>+++ b/mm/sparse.c
> >>>
> >>>[...]
> >>>
> >>>>@@ -425,6 +436,10 @@ static void __init sparse_buffer_init(unsigned
> >>>>long size, int nid)
> >>>>		memblock_alloc_try_nid_raw(size, PAGE_SIZE,
> >>>>					   __pa(MAX_DMA_ADDRESS),
> >>>>					   MEMBLOCK_ALLOC_ACCESSIBLE, nid);
> >>>>+	if (!sparsemap_buf)
> >>>>+		panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%lx\n",
> >>>>+		      __func__, size, PAGE_SIZE, nid, __pa(MAX_DMA_ADDRESS));
> >>>>+
> >>>
> >>>memblock_alloc_try_nid_raw() does not panic (help explicitly says: Does
> >>>not zero allocated memory, does not panic if request cannot be
> >>>satisfied.).
> >>
> >>"Does not panic" does not mean it always succeeds.
> >
> >I agree, but at least here you are changing the behaviour by making it
> >panic explicitly. Are we sure there are no cases where the system could
> >just continue functioning? Maybe a WARN_ON() would be enough there?
>
> Looking in more detail, it looks like everything is done to live with a
> NULL sparsemap_buf: all functions using it check it, so having it NULL
> shouldn't imply a panic, I believe; see the code below.
You are right, I'm preparing the fix right now.
> static void *sparsemap_buf __meminitdata;
> static void *sparsemap_buf_end __meminitdata;
>
> static void __init sparse_buffer_init(unsigned long size, int nid)
> {
> 	WARN_ON(sparsemap_buf);	/* forgot to call sparse_buffer_fini()? */
> 	sparsemap_buf =
> 		memblock_alloc_try_nid_raw(size, PAGE_SIZE,
> 					   __pa(MAX_DMA_ADDRESS),
> 					   MEMBLOCK_ALLOC_ACCESSIBLE, nid);
> 	sparsemap_buf_end = sparsemap_buf + size;
> }
>
> static void __init sparse_buffer_fini(void)
> {
> 	unsigned long size = sparsemap_buf_end - sparsemap_buf;
>
> 	if (sparsemap_buf && size > 0)
> 		memblock_free_early(__pa(sparsemap_buf), size);
> 	sparsemap_buf = NULL;
> }
>
> void * __meminit sparse_buffer_alloc(unsigned long size)
> {
> 	void *ptr = NULL;
>
> 	if (sparsemap_buf) {
> 		ptr = PTR_ALIGN(sparsemap_buf, size);
> 		if (ptr + size > sparsemap_buf_end)
> 			ptr = NULL;
> 		else
> 			sparsemap_buf = ptr + size;
> 	}
> 	return ptr;
> }
>
>
> Christophe
>
--
Sincerely yours,
Mike.
Mike Rapoport <[email protected]> writes:
> Currently, memblock has several internal functions with overlapping
> functionality. They all call memblock_find_in_range_node() to find free
> memory and then reserve the allocated range and mark it with kmemleak.
> However, there is a difference in the allocation constraints and in the
> fallback strategies.
>
> The allocations returning a physical address first attempt to find free
> memory on the specified node within mirrored memory regions, then retry on
> the same node without the requirement for memory mirroring and finally fall
> back to all available memory.
>
> The allocations returning a virtual address start with clamping the allowed
> range to memblock.current_limit, then attempt to allocate from the specified
> node from regions with mirroring and with a user-defined minimal address. If
> such an allocation fails, the next attempt is done with the node restriction
> lifted. Next, the allocation is retried with the minimal address reset to
> zero and at last without the requirement for mirrored regions.
>
> Let's consolidate the handling of the various fallbacks and make it more
> consistent for the physical and virtual variants. Most of the fallback
> handling is moved to memblock_alloc_range_nid(), which now handles node and
> mirror fallbacks.
>
> memblock_alloc_internal() uses memblock_alloc_range_nid() to get a physical
> address of the allocated range and converts it to a virtual address.
>
> The fallback for allocation below the specified minimal address remains in
> memblock_alloc_internal() because memblock_alloc_range_nid() is used by CMA
> with an exact requirement for the lower bound.
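For illustration, a sketch in simplified C of the consolidated fallback
order described in the quoted commit message (this assumes the structure
the refactoring introduces and is not the verbatim mm/memblock.c code;
memblock_find_in_range_node(), MEMBLOCK_MIRROR and NUMA_NO_NODE are the
existing kernel symbols):

	/* Simplified sketch of memblock_alloc_range_nid()'s fallbacks */
	static phys_addr_t __init alloc_range_nid_sketch(phys_addr_t size,
							 phys_addr_t align,
							 phys_addr_t start,
							 phys_addr_t end, int nid)
	{
		/* 1: requested node, mirrored regions only */
		phys_addr_t found = memblock_find_in_range_node(size, align,
								start, end, nid,
								MEMBLOCK_MIRROR);

		/* 2: any node, still restricted to mirrored regions */
		if (!found && nid != NUMA_NO_NODE)
			found = memblock_find_in_range_node(size, align, start,
							    end, NUMA_NO_NODE,
							    MEMBLOCK_MIRROR);
		/*
		 * 3, 4: if mirrored memory is exhausted, both steps are
		 * repeated with the mirroring requirement dropped; the
		 * caller then reserves 'found' and registers it with
		 * kmemleak.
		 */
		return found;
	}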
This is causing problems on some of my machines.
I see NODE_DATA allocations falling back to node 0 when they shouldn't,
or didn't previously.
eg, before:
57990190: (116011251): numa: NODE_DATA [mem 0xfffe4980-0xfffebfff]
58152042: (116373087): numa: NODE_DATA [mem 0x8fff90980-0x8fff97fff]
after:
16356872061562: (6296877055): numa: NODE_DATA [mem 0xfffe4980-0xfffebfff]
16356872079279: (6296894772): numa: NODE_DATA [mem 0xfffcd300-0xfffd497f]
16356872096376: (6296911869): numa: NODE_DATA(1) on node 0
On some of my other systems it does that, and then panics because it
can't allocate anything at all:
[ 0.000000] numa: NODE_DATA [mem 0x7ffcaee80-0x7ffcb3fff]
[ 0.000000] numa: NODE_DATA [mem 0x7ffc99d00-0x7ffc9ee7f]
[ 0.000000] numa: NODE_DATA(1) on node 0
[ 0.000000] Kernel panic - not syncing: Cannot allocate 20864 bytes for node 16 data
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.0.0-rc4-gccN-next-20190201-gdc4c899 #1
[ 0.000000] Call Trace:
[ 0.000000] [c0000000011cfca0] [c000000000c11044] dump_stack+0xe8/0x164 (unreliable)
[ 0.000000] [c0000000011cfcf0] [c0000000000fdd6c] panic+0x17c/0x3e0
[ 0.000000] [c0000000011cfd90] [c000000000f61bc8] initmem_init+0x128/0x260
[ 0.000000] [c0000000011cfe60] [c000000000f57940] setup_arch+0x398/0x418
[ 0.000000] [c0000000011cfee0] [c000000000f50a94] start_kernel+0xa0/0x684
[ 0.000000] [c0000000011cff90] [c00000000000af70] start_here_common+0x1c/0x52c
[ 0.000000] Rebooting in 180 seconds..
So there's something going wrong there, I haven't had time to dig into
it though (Sunday night here).
cheers
On Sun, Feb 03, 2019 at 08:39:20PM +1100, Michael Ellerman wrote:
> Mike Rapoport <[email protected]> writes:
>
> > Currently, memblock has several internal functions with overlapping
> > functionality. They all call memblock_find_in_range_node() to find free
> > memory and then reserve the allocated range and mark it with kmemleak.
> > However, there is a difference in the allocation constraints and in the
> > fallback strategies.
> >
> > The allocations returning a physical address first attempt to find free
> > memory on the specified node within mirrored memory regions, then retry on
> > the same node without the requirement for memory mirroring and finally fall
> > back to all available memory.
> >
> > The allocations returning a virtual address start with clamping the allowed
> > range to memblock.current_limit, then attempt to allocate from the specified
> > node from regions with mirroring and with a user-defined minimal address. If
> > such an allocation fails, the next attempt is done with the node restriction
> > lifted. Next, the allocation is retried with the minimal address reset to
> > zero and at last without the requirement for mirrored regions.
> >
> > Let's consolidate the handling of the various fallbacks and make it more
> > consistent for the physical and virtual variants. Most of the fallback
> > handling is moved to memblock_alloc_range_nid(), which now handles node and
> > mirror fallbacks.
> >
> > memblock_alloc_internal() uses memblock_alloc_range_nid() to get a physical
> > address of the allocated range and converts it to a virtual address.
> >
> > The fallback for allocation below the specified minimal address remains in
> > memblock_alloc_internal() because memblock_alloc_range_nid() is used by CMA
> > with an exact requirement for the lower bound.
>
> This is causing problems on some of my machines.
>
> I see NODE_DATA allocations falling back to node 0 when they shouldn't,
> or didn't previously.
>
> eg, before:
>
> 57990190: (116011251): numa: NODE_DATA [mem 0xfffe4980-0xfffebfff]
> 58152042: (116373087): numa: NODE_DATA [mem 0x8fff90980-0x8fff97fff]
>
> after:
>
> 16356872061562: (6296877055): numa: NODE_DATA [mem 0xfffe4980-0xfffebfff]
> 16356872079279: (6296894772): numa: NODE_DATA [mem 0xfffcd300-0xfffd497f]
> 16356872096376: (6296911869): numa: NODE_DATA(1) on node 0
>
>
> On some of my other systems it does that, and then panics because it
> can't allocate anything at all:
>
> [ 0.000000] numa: NODE_DATA [mem 0x7ffcaee80-0x7ffcb3fff]
> [ 0.000000] numa: NODE_DATA [mem 0x7ffc99d00-0x7ffc9ee7f]
> [ 0.000000] numa: NODE_DATA(1) on node 0
> [ 0.000000] Kernel panic - not syncing: Cannot allocate 20864 bytes for node 16 data
> [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.0.0-rc4-gccN-next-20190201-gdc4c899 #1
> [ 0.000000] Call Trace:
> [ 0.000000] [c0000000011cfca0] [c000000000c11044] dump_stack+0xe8/0x164 (unreliable)
> [ 0.000000] [c0000000011cfcf0] [c0000000000fdd6c] panic+0x17c/0x3e0
> [ 0.000000] [c0000000011cfd90] [c000000000f61bc8] initmem_init+0x128/0x260
> [ 0.000000] [c0000000011cfe60] [c000000000f57940] setup_arch+0x398/0x418
> [ 0.000000] [c0000000011cfee0] [c000000000f50a94] start_kernel+0xa0/0x684
> [ 0.000000] [c0000000011cff90] [c00000000000af70] start_here_common+0x1c/0x52c
> [ 0.000000] Rebooting in 180 seconds..
>
>
> So there's something going wrong there, I haven't had time to dig into
> it though (Sunday night here).
I'll try to see if I can reproduce it with qemu.
> cheers
>
--
Sincerely yours,
Mike.
(dropped most of the CC)
On Sun, Feb 03, 2019 at 08:39:20PM +1100, Michael Ellerman wrote:
> Mike Rapoport <[email protected]> writes:
>
> > Currently, memblock has several internal functions with overlapping
> > functionality. They all call memblock_find_in_range_node() to find free
> > memory and then reserve the allocated range and mark it with kmemleak.
> > However, there is a difference in the allocation constraints and in the
> > fallback strategies.
> >
> > The allocations returning a physical address first attempt to find free
> > memory on the specified node within mirrored memory regions, then retry on
> > the same node without the requirement for memory mirroring and finally fall
> > back to all available memory.
> >
> > The allocations returning a virtual address start with clamping the allowed
> > range to memblock.current_limit, then attempt to allocate from the specified
> > node from regions with mirroring and with a user-defined minimal address. If
> > such an allocation fails, the next attempt is done with the node restriction
> > lifted. Next, the allocation is retried with the minimal address reset to
> > zero and at last without the requirement for mirrored regions.
> >
> > Let's consolidate the handling of the various fallbacks and make it more
> > consistent for the physical and virtual variants. Most of the fallback
> > handling is moved to memblock_alloc_range_nid(), which now handles node and
> > mirror fallbacks.
> >
> > memblock_alloc_internal() uses memblock_alloc_range_nid() to get a physical
> > address of the allocated range and converts it to a virtual address.
> >
> > The fallback for allocation below the specified minimal address remains in
> > memblock_alloc_internal() because memblock_alloc_range_nid() is used by CMA
> > with an exact requirement for the lower bound.
>
> This is causing problems on some of my machines.
>
> I see NODE_DATA allocations falling back to node 0 when they shouldn't,
> or didn't previously.
>
> eg, before:
>
> 57990190: (116011251): numa: NODE_DATA [mem 0xfffe4980-0xfffebfff]
> 58152042: (116373087): numa: NODE_DATA [mem 0x8fff90980-0x8fff97fff]
>
> after:
>
> 16356872061562: (6296877055): numa: NODE_DATA [mem 0xfffe4980-0xfffebfff]
> 16356872079279: (6296894772): numa: NODE_DATA [mem 0xfffcd300-0xfffd497f]
> 16356872096376: (6296911869): numa: NODE_DATA(1) on node 0
>
>
> On some of my other systems it does that, and then panics because it
> can't allocate anything at all:
>
> [ 0.000000] numa: NODE_DATA [mem 0x7ffcaee80-0x7ffcb3fff]
> [ 0.000000] numa: NODE_DATA [mem 0x7ffc99d00-0x7ffc9ee7f]
> [ 0.000000] numa: NODE_DATA(1) on node 0
> [ 0.000000] Kernel panic - not syncing: Cannot allocate 20864 bytes for node 16 data
> [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.0.0-rc4-gccN-next-20190201-gdc4c899 #1
> [ 0.000000] Call Trace:
> [ 0.000000] [c0000000011cfca0] [c000000000c11044] dump_stack+0xe8/0x164 (unreliable)
> [ 0.000000] [c0000000011cfcf0] [c0000000000fdd6c] panic+0x17c/0x3e0
> [ 0.000000] [c0000000011cfd90] [c000000000f61bc8] initmem_init+0x128/0x260
> [ 0.000000] [c0000000011cfe60] [c000000000f57940] setup_arch+0x398/0x418
> [ 0.000000] [c0000000011cfee0] [c000000000f50a94] start_kernel+0xa0/0x684
> [ 0.000000] [c0000000011cff90] [c00000000000af70] start_here_common+0x1c/0x52c
> [ 0.000000] Rebooting in 180 seconds..
>
>
> So there's something going wrong there, I haven't had time to dig into
> it though (Sunday night here).
Yeah, I've misplaced 'nid' and 'MEMBLOCK_ALLOC_ACCESSIBLE' in
memblock_phys_alloc_try_nid() :(
Can you please check if the below patch fixes the issue on your systems?
> cheers
>
From 5875b7440e985ce551e6da3cb28aa8e9af697e10 Mon Sep 17 00:00:00 2001
From: Mike Rapoport <[email protected]>
Date: Sun, 3 Feb 2019 13:35:42 +0200
Subject: [PATCH] memblock: fix parameter order in
memblock_phys_alloc_try_nid()
The refactoring of the internal memblock allocation functions used the wrong
order of parameters in the memblock_alloc_range_nid() call from
memblock_phys_alloc_try_nid().
Fix it.
Signed-off-by: Mike Rapoport <[email protected]>
---
mm/memblock.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memblock.c b/mm/memblock.c
index e047933..0151a5b 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1402,8 +1402,8 @@ phys_addr_t __init memblock_phys_alloc_range(phys_addr_t size,
 phys_addr_t __init memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid)
 {
-	return memblock_alloc_range_nid(size, align, 0, nid,
-					MEMBLOCK_ALLOC_ACCESSIBLE);
+	return memblock_alloc_range_nid(size, align, 0,
+					MEMBLOCK_ALLOC_ACCESSIBLE, nid);
 }
/**
--
2.7.4
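For readers following along: the corrected call shows that
memblock_alloc_range_nid() takes its arguments as (size, align, start, end,
nid), with MEMBLOCK_ALLOC_ACCESSIBLE standing for "no upper limit".
Assuming MEMBLOCK_ALLOC_ACCESSIBLE is defined as 0, the swapped arguments
behaved roughly like this:

	/* buggy: 'end' becomes nid, a tiny address limit, and 'nid'
	 * becomes 0, so per-node allocations for nid > 0 fail and the
	 * callers' own fallbacks land on node 0 */
	memblock_alloc_range_nid(size, align, 0, nid,
				 MEMBLOCK_ALLOC_ACCESSIBLE);

	/* fixed: no upper limit, correct node */
	memblock_alloc_range_nid(size, align, 0,
				 MEMBLOCK_ALLOC_ACCESSIBLE, nid);

which matches the "NODE_DATA(1) on node 0" fallbacks and the panic reported
above.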
--
Sincerely yours,
Mike.
Mike Rapoport <[email protected]> writes:
> On Sun, Feb 03, 2019 at 08:39:20PM +1100, Michael Ellerman wrote:
>> Mike Rapoport <[email protected]> writes:
>> > Currently, memblock has several internal functions with overlapping
>> > functionality. They all call memblock_find_in_range_node() to find free
>> > memory and then reserve the allocated range and mark it with kmemleak.
>> > However, there is a difference in the allocation constraints and in the
>> > fallback strategies.
...
>>
>> This is causing problems on some of my machines.
...
>>
>> On some of my other systems it does that, and then panics because it
>> can't allocate anything at all:
>>
>> [ 0.000000] numa: NODE_DATA [mem 0x7ffcaee80-0x7ffcb3fff]
>> [ 0.000000] numa: NODE_DATA [mem 0x7ffc99d00-0x7ffc9ee7f]
>> [ 0.000000] numa: NODE_DATA(1) on node 0
>> [ 0.000000] Kernel panic - not syncing: Cannot allocate 20864 bytes for node 16 data
>> [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.0.0-rc4-gccN-next-20190201-gdc4c899 #1
>> [ 0.000000] Call Trace:
>> [ 0.000000] [c0000000011cfca0] [c000000000c11044] dump_stack+0xe8/0x164 (unreliable)
>> [ 0.000000] [c0000000011cfcf0] [c0000000000fdd6c] panic+0x17c/0x3e0
>> [ 0.000000] [c0000000011cfd90] [c000000000f61bc8] initmem_init+0x128/0x260
>> [ 0.000000] [c0000000011cfe60] [c000000000f57940] setup_arch+0x398/0x418
>> [ 0.000000] [c0000000011cfee0] [c000000000f50a94] start_kernel+0xa0/0x684
>> [ 0.000000] [c0000000011cff90] [c00000000000af70] start_here_common+0x1c/0x52c
>> [ 0.000000] Rebooting in 180 seconds..
>>
>>
>> So there's something going wrong there, I haven't had time to dig into
>> it though (Sunday night here).
>
> Yeah, I've misplaced 'nid' and 'MEMBLOCK_ALLOC_ACCESSIBLE' in
> memblock_phys_alloc_try_nid() :(
>
> Can you please check if the below patch fixes the issue on your systems?
Yes it does, thanks.
Tested-by: Michael Ellerman <[email protected]>
cheers
> From 5875b7440e985ce551e6da3cb28aa8e9af697e10 Mon Sep 17 00:00:00 2001
> From: Mike Rapoport <[email protected]>
> Date: Sun, 3 Feb 2019 13:35:42 +0200
> Subject: [PATCH] memblock: fix parameter order in
> memblock_phys_alloc_try_nid()
>
> The refactoring of the internal memblock allocation functions used the wrong
> order of parameters in the memblock_alloc_range_nid() call from
> memblock_phys_alloc_try_nid().
> Fix it.
>
> Signed-off-by: Mike Rapoport <[email protected]>
> ---
> mm/memblock.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/memblock.c b/mm/memblock.c
> index e047933..0151a5b 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -1402,8 +1402,8 @@ phys_addr_t __init memblock_phys_alloc_range(phys_addr_t size,
>
>  phys_addr_t __init memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid)
>  {
> -	return memblock_alloc_range_nid(size, align, 0, nid,
> -					MEMBLOCK_ALLOC_ACCESSIBLE);
> +	return memblock_alloc_range_nid(size, align, 0,
> +					MEMBLOCK_ALLOC_ACCESSIBLE, nid);
>  }
>
> /**
> --
> 2.7.4
>
>
> --
> Sincerely yours,
> Mike.
Hi all,
On Mon, 04 Feb 2019 19:45:17 +1100 Michael Ellerman <[email protected]> wrote:
>
> Mike Rapoport <[email protected]> writes:
> > On Sun, Feb 03, 2019 at 08:39:20PM +1100, Michael Ellerman wrote:
> >> Mike Rapoport <[email protected]> writes:
> >> > Currently, memblock has several internal functions with overlapping
> >> > functionality. They all call memblock_find_in_range_node() to find free
> >> > memory and then reserve the allocated range and mark it with kmemleak.
> >> > However, there is a difference in the allocation constraints and in the
> >> > fallback strategies.
> ...
> >>
> >> This is causing problems on some of my machines.
> ...
> >>
> >> On some of my other systems it does that, and then panics because it
> >> can't allocate anything at all:
> >>
> >> [ 0.000000] numa: NODE_DATA [mem 0x7ffcaee80-0x7ffcb3fff]
> >> [ 0.000000] numa: NODE_DATA [mem 0x7ffc99d00-0x7ffc9ee7f]
> >> [ 0.000000] numa: NODE_DATA(1) on node 0
> >> [ 0.000000] Kernel panic - not syncing: Cannot allocate 20864 bytes for node 16 data
> >> [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 5.0.0-rc4-gccN-next-20190201-gdc4c899 #1
> >> [ 0.000000] Call Trace:
> >> [ 0.000000] [c0000000011cfca0] [c000000000c11044] dump_stack+0xe8/0x164 (unreliable)
> >> [ 0.000000] [c0000000011cfcf0] [c0000000000fdd6c] panic+0x17c/0x3e0
> >> [ 0.000000] [c0000000011cfd90] [c000000000f61bc8] initmem_init+0x128/0x260
> >> [ 0.000000] [c0000000011cfe60] [c000000000f57940] setup_arch+0x398/0x418
> >> [ 0.000000] [c0000000011cfee0] [c000000000f50a94] start_kernel+0xa0/0x684
> >> [ 0.000000] [c0000000011cff90] [c00000000000af70] start_here_common+0x1c/0x52c
> >> [ 0.000000] Rebooting in 180 seconds..
> >>
> >>
> >> So there's something going wrong there, I haven't had time to dig into
> >> it though (Sunday night here).
> >
> > Yeah, I've misplaced 'nid' and 'MEMBLOCK_ALLOC_ACCESSIBLE' in
> > memblock_phys_alloc_try_nid() :(
> >
> > Can you please check if the below patch fixes the issue on your systems?
>
> Yes it does, thanks.
>
> Tested-by: Michael Ellerman <[email protected]>
>
> cheers
>
>
> > From 5875b7440e985ce551e6da3cb28aa8e9af697e10 Mon Sep 17 00:00:00 2001
> > From: Mike Rapoport <[email protected]>
> > Date: Sun, 3 Feb 2019 13:35:42 +0200
> > Subject: [PATCH] memblock: fix parameter order in
> > memblock_phys_alloc_try_nid()
> >
> > The refactoring of the internal memblock allocation functions used the wrong
> > order of parameters in the memblock_alloc_range_nid() call from
> > memblock_phys_alloc_try_nid().
> > Fix it.
> >
> > Signed-off-by: Mike Rapoport <[email protected]>
> > ---
> > mm/memblock.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/memblock.c b/mm/memblock.c
> > index e047933..0151a5b 100644
> > --- a/mm/memblock.c
> > +++ b/mm/memblock.c
> > @@ -1402,8 +1402,8 @@ phys_addr_t __init memblock_phys_alloc_range(phys_addr_t size,
> >
> >  phys_addr_t __init memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid)
> >  {
> > -	return memblock_alloc_range_nid(size, align, 0, nid,
> > -					MEMBLOCK_ALLOC_ACCESSIBLE);
> > +	return memblock_alloc_range_nid(size, align, 0,
> > +					MEMBLOCK_ALLOC_ACCESSIBLE, nid);
> >  }
> >
> > /**
> > --
> > 2.7.4
I have applied that patch to the akpm tree in linux-next from today.
--
Cheers,
Stephen Rothwell
(updated CC)
Hi,
On Tue, Sep 24, 2019 at 12:52:35PM -0500, Adam Ford wrote:
> On Mon, Jan 21, 2019 at 2:05 AM Mike Rapoport <[email protected]> wrote:
> >
> > Hi,
> >
> > v2 changes:
> > * replace some more %lu with %zu
> > * remove panics where they are not needed in s390 and in printk
> > * collect Acked-by and Reviewed-by.
> >
> >
> > Christophe Leroy (1):
> > powerpc: use memblock functions returning virtual address
> >
> > Mike Rapoport (20):
> > openrisc: prefer memblock APIs returning virtual address
> > memblock: replace memblock_alloc_base(ANYWHERE) with memblock_phys_alloc
> > memblock: drop memblock_alloc_base_nid()
> > memblock: emphasize that memblock_alloc_range() returns a physical address
> > memblock: memblock_phys_alloc_try_nid(): don't panic
> > memblock: memblock_phys_alloc(): don't panic
> > memblock: drop __memblock_alloc_base()
> > memblock: drop memblock_alloc_base()
> > memblock: refactor internal allocation functions
> > memblock: make memblock_find_in_range_node() and choose_memblock_flags() static
> > arch: use memblock_alloc() instead of memblock_alloc_from(size, align, 0)
> > arch: don't memset(0) memory returned by memblock_alloc()
> > ia64: add checks for the return value of memblock_alloc*()
> > sparc: add checks for the return value of memblock_alloc*()
> > mm/percpu: add checks for the return value of memblock_alloc*()
> > init/main: add checks for the return value of memblock_alloc*()
> > swiotlb: add checks for the return value of memblock_alloc*()
> > treewide: add checks for the return value of memblock_alloc*()
> > memblock: memblock_alloc_try_nid: don't panic
> > memblock: drop memblock_alloc_*_nopanic() variants
> >
> I know it's rather late, but this patch broke the Etnaviv 3D graphics
> in my i.MX6Q.
Can you identify the exact patch from the series that caused the
regression?
> When I try to use the 3D, it returns some errors and the dmesg log
> shows some memory allocation errors too:
> [ 3.682347] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
> [ 3.688669] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
> [ 3.695099] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
> [ 3.700800] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
> [ 3.723013] etnaviv-gpu 130000.gpu: command buffer outside valid memory window
> [ 3.731308] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
> [ 3.752437] etnaviv-gpu 134000.gpu: command buffer outside valid memory window
> [ 3.760583] etnaviv-gpu 2204000.gpu: model: GC355, revision: 1215
> [ 3.766766] etnaviv-gpu 2204000.gpu: Ignoring GPU with VG and FE2.0
> [ 3.776131] [drm] Initialized etnaviv 1.2.0 20151214 for etnaviv on minor 0
>
> # glmark2-es2-drm
> Error creating gpu
> Error: eglCreateWindowSurface failed with error: 0x3009
> Error: eglCreateWindowSurface failed with error: 0x3009
> Error: CanvasGeneric: Invalid EGL state
> Error: main: Could not initialize canvas
>
>
> Before this patch:
>
> [ 3.691995] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
> [ 3.698356] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
> [ 3.704792] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
> [ 3.710488] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
> [ 3.733649] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
> [ 3.756115] etnaviv-gpu 2204000.gpu: model: GC355, revision: 1215
> [ 3.762250] etnaviv-gpu 2204000.gpu: Ignoring GPU with VG and FE2.0
> [ 3.771432] [drm] Initialized etnaviv 1.2.0 20151214 for etnaviv on minor 0
>
> and the 3D demos work without this.
>
> I don't know enough about the i.MX6 or the 3D accelerator to know how
> to fix it.
> I am hoping someone in the know might have some suggestions.
Can you please add "memblock=debug" to your kernel command line and send
kernel logs for both working and failing versions?
--
Sincerely yours,
Mike.
On Mon, Jan 21, 2019 at 2:05 AM Mike Rapoport <[email protected]> wrote:
>
> Hi,
>
> Current memblock API is quite extensive and, which is more annoying,
> duplicated. Except the low-level functions that allow searching for a free
> memory region and marking it as reserved, memblock provides three (well,
> two and a half) sets of functions to allocate memory. There are several
> overlapping functions that return a physical address and there are
> functions that return virtual address. Those that return the virtual
> address may also clear the allocated memory. And, on top of all that, some
> allocators panic and some return NULL in case of error.
>
> This set tries to reduce the mess, and trim down the amount of memblock
> allocation methods.
>
> Patches 1-10 consolidate the functions that return physical address of
> the allocated memory
>
> Patches 11-13 are some trivial cleanups
>
> Patches 14-19 add checks for the return value of memblock_alloc*() and
> panics in case of errors. The patches 14-18 include some minor refactoring
> to have better readability of the resulting code and patch 19 is a
> mechanical addition of
>
> if (!ptr)
> panic();
>
> after memblock_alloc*() calls.
>
> And, finally, patches 20 and 21 remove panic() calls memblock and _nopanic
> variants from memblock.
>
> v2 changes:
> * replace some more %lu with %zu
> * remove panics where they are not needed in s390 and in printk
> * collect Acked-by and Reviewed-by.
>
>
> Christophe Leroy (1):
> powerpc: use memblock functions returning virtual address
>
> Mike Rapoport (20):
> openrisc: prefer memblock APIs returning virtual address
> memblock: replace memblock_alloc_base(ANYWHERE) with memblock_phys_alloc
> memblock: drop memblock_alloc_base_nid()
> memblock: emphasize that memblock_alloc_range() returns a physical address
> memblock: memblock_phys_alloc_try_nid(): don't panic
> memblock: memblock_phys_alloc(): don't panic
> memblock: drop __memblock_alloc_base()
> memblock: drop memblock_alloc_base()
> memblock: refactor internal allocation functions
> memblock: make memblock_find_in_range_node() and choose_memblock_flags() static
> arch: use memblock_alloc() instead of memblock_alloc_from(size, align, 0)
> arch: don't memset(0) memory returned by memblock_alloc()
> ia64: add checks for the return value of memblock_alloc*()
> sparc: add checks for the return value of memblock_alloc*()
> mm/percpu: add checks for the return value of memblock_alloc*()
> init/main: add checks for the return value of memblock_alloc*()
> swiotlb: add checks for the return value of memblock_alloc*()
> treewide: add checks for the return value of memblock_alloc*()
> memblock: memblock_alloc_try_nid: don't panic
> memblock: drop memblock_alloc_*_nopanic() variants
>
I know it's rather late, but this patch broke the Etnaviv 3D graphics
in my i.MX6Q.
When I try to use the 3D, it returns some errors and the dmesg log
shows some memory allocation errors too:
[ 3.682347] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
[ 3.688669] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
[ 3.695099] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
[ 3.700800] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
[ 3.723013] etnaviv-gpu 130000.gpu: command buffer outside valid memory window
[ 3.731308] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
[ 3.752437] etnaviv-gpu 134000.gpu: command buffer outside valid memory window
[ 3.760583] etnaviv-gpu 2204000.gpu: model: GC355, revision: 1215
[ 3.766766] etnaviv-gpu 2204000.gpu: Ignoring GPU with VG and FE2.0
[ 3.776131] [drm] Initialized etnaviv 1.2.0 20151214 for etnaviv on minor 0
# glmark2-es2-drm
Error creating gpu
Error: eglCreateWindowSurface failed with error: 0x3009
Error: eglCreateWindowSurface failed with error: 0x3009
Error: CanvasGeneric: Invalid EGL state
Error: main: Could not initialize canvas
Before this patch:
[ 3.691995] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
[ 3.698356] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
[ 3.704792] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
[ 3.710488] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
[ 3.733649] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
[ 3.756115] etnaviv-gpu 2204000.gpu: model: GC355, revision: 1215
[ 3.762250] etnaviv-gpu 2204000.gpu: Ignoring GPU with VG and FE2.0
[ 3.771432] [drm] Initialized etnaviv 1.2.0 20151214 for etnaviv on minor 0
and the 3D demos work without this.
I don't know enough about the i.MX6 or the 3D accelerator to know how
to fix it.
I am hoping someone in the know might have some suggestions.
> arch/alpha/kernel/core_cia.c | 5 +-
> arch/alpha/kernel/core_marvel.c | 6 +
> arch/alpha/kernel/pci-noop.c | 13 +-
> arch/alpha/kernel/pci.c | 11 +-
> arch/alpha/kernel/pci_iommu.c | 16 +-
> arch/alpha/kernel/setup.c | 2 +-
> arch/arc/kernel/unwind.c | 3 +-
> arch/arc/mm/highmem.c | 4 +
> arch/arm/kernel/setup.c | 6 +
> arch/arm/mm/init.c | 6 +-
> arch/arm/mm/mmu.c | 14 +-
> arch/arm64/kernel/setup.c | 8 +-
> arch/arm64/mm/kasan_init.c | 10 ++
> arch/arm64/mm/mmu.c | 2 +
> arch/arm64/mm/numa.c | 4 +
> arch/c6x/mm/dma-coherent.c | 4 +
> arch/c6x/mm/init.c | 4 +-
> arch/csky/mm/highmem.c | 5 +
> arch/h8300/mm/init.c | 4 +-
> arch/ia64/kernel/mca.c | 25 +--
> arch/ia64/mm/contig.c | 8 +-
> arch/ia64/mm/discontig.c | 4 +
> arch/ia64/mm/init.c | 38 ++++-
> arch/ia64/mm/tlb.c | 6 +
> arch/ia64/sn/kernel/io_common.c | 3 +
> arch/ia64/sn/kernel/setup.c | 12 +-
> arch/m68k/atari/stram.c | 4 +
> arch/m68k/mm/init.c | 3 +
> arch/m68k/mm/mcfmmu.c | 7 +-
> arch/m68k/mm/motorola.c | 9 ++
> arch/m68k/mm/sun3mmu.c | 6 +
> arch/m68k/sun3/sun3dvma.c | 3 +
> arch/microblaze/mm/init.c | 10 +-
> arch/mips/cavium-octeon/dma-octeon.c | 3 +
> arch/mips/kernel/setup.c | 3 +
> arch/mips/kernel/traps.c | 5 +-
> arch/mips/mm/init.c | 5 +
> arch/nds32/mm/init.c | 12 ++
> arch/openrisc/mm/init.c | 5 +-
> arch/openrisc/mm/ioremap.c | 8 +-
> arch/powerpc/kernel/dt_cpu_ftrs.c | 8 +-
> arch/powerpc/kernel/irq.c | 5 -
> arch/powerpc/kernel/paca.c | 6 +-
> arch/powerpc/kernel/pci_32.c | 3 +
> arch/powerpc/kernel/prom.c | 5 +-
> arch/powerpc/kernel/rtas.c | 6 +-
> arch/powerpc/kernel/setup-common.c | 3 +
> arch/powerpc/kernel/setup_32.c | 26 ++--
> arch/powerpc/kernel/setup_64.c | 4 +
> arch/powerpc/lib/alloc.c | 3 +
> arch/powerpc/mm/hash_utils_64.c | 11 +-
> arch/powerpc/mm/mmu_context_nohash.c | 9 ++
> arch/powerpc/mm/numa.c | 4 +
> arch/powerpc/mm/pgtable-book3e.c | 12 +-
> arch/powerpc/mm/pgtable-book3s64.c | 3 +
> arch/powerpc/mm/pgtable-radix.c | 9 +-
> arch/powerpc/mm/ppc_mmu_32.c | 3 +
> arch/powerpc/platforms/pasemi/iommu.c | 3 +
> arch/powerpc/platforms/powermac/nvram.c | 3 +
> arch/powerpc/platforms/powernv/opal.c | 3 +
> arch/powerpc/platforms/powernv/pci-ioda.c | 8 +
> arch/powerpc/platforms/ps3/setup.c | 3 +
> arch/powerpc/sysdev/dart_iommu.c | 3 +
> arch/powerpc/sysdev/msi_bitmap.c | 3 +
> arch/s390/kernel/crash_dump.c | 3 +
> arch/s390/kernel/setup.c | 16 ++
> arch/s390/kernel/smp.c | 9 +-
> arch/s390/kernel/topology.c | 6 +
> arch/s390/numa/mode_emu.c | 3 +
> arch/s390/numa/numa.c | 6 +-
> arch/sh/boards/mach-ap325rxa/setup.c | 5 +-
> arch/sh/boards/mach-ecovec24/setup.c | 10 +-
> arch/sh/boards/mach-kfr2r09/setup.c | 5 +-
> arch/sh/boards/mach-migor/setup.c | 5 +-
> arch/sh/boards/mach-se/7724/setup.c | 10 +-
> arch/sh/kernel/machine_kexec.c | 3 +-
> arch/sh/mm/init.c | 8 +-
> arch/sh/mm/numa.c | 4 +
> arch/sparc/kernel/prom_32.c | 6 +-
> arch/sparc/kernel/setup_64.c | 6 +
> arch/sparc/kernel/smp_64.c | 12 ++
> arch/sparc/mm/init_32.c | 2 +-
> arch/sparc/mm/init_64.c | 11 ++
> arch/sparc/mm/srmmu.c | 18 ++-
> arch/um/drivers/net_kern.c | 3 +
> arch/um/drivers/vector_kern.c | 3 +
> arch/um/kernel/initrd.c | 2 +
> arch/um/kernel/mem.c | 16 ++
> arch/unicore32/kernel/setup.c | 4 +
> arch/unicore32/mm/mmu.c | 15 +-
> arch/x86/kernel/acpi/boot.c | 3 +
> arch/x86/kernel/apic/io_apic.c | 5 +
> arch/x86/kernel/e820.c | 5 +-
> arch/x86/kernel/setup_percpu.c | 10 +-
> arch/x86/mm/kasan_init_64.c | 14 +-
> arch/x86/mm/numa.c | 12 +-
> arch/x86/platform/olpc/olpc_dt.c | 3 +
> arch/x86/xen/p2m.c | 11 +-
> arch/xtensa/mm/kasan_init.c | 10 +-
> arch/xtensa/mm/mmu.c | 3 +
> drivers/clk/ti/clk.c | 3 +
> drivers/firmware/memmap.c | 2 +-
> drivers/macintosh/smu.c | 5 +-
> drivers/of/fdt.c | 8 +-
> drivers/of/of_reserved_mem.c | 7 +-
> drivers/of/unittest.c | 8 +-
> drivers/usb/early/xhci-dbc.c | 2 +-
> drivers/xen/swiotlb-xen.c | 7 +-
> include/linux/memblock.h | 59 +------
> init/main.c | 26 +++-
> kernel/dma/swiotlb.c | 21 ++-
> kernel/power/snapshot.c | 3 +
> kernel/printk/printk.c | 9 +-
> lib/cpumask.c | 3 +
> mm/cma.c | 10 +-
> mm/kasan/init.c | 10 +-
> mm/memblock.c | 249 ++++++++++--------------------
> mm/page_alloc.c | 10 +-
> mm/page_ext.c | 2 +-
> mm/percpu.c | 84 +++++++---
> mm/sparse.c | 25 ++-
> 121 files changed, 860 insertions(+), 412 deletions(-)
>
> --
> 2.7.4
>
>
> _______________________________________________
> linux-arm-kernel mailing list
> [email protected]
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
(trimmed the CC)
On Wed, Oct 02, 2019 at 06:14:11AM -0500, Adam Ford wrote:
> On Wed, Oct 2, 2019 at 2:36 AM Mike Rapoport <[email protected]> wrote:
> >
>
> Before the patch:
>
> # cat /sys/kernel/debug/memblock/memory
> 0: 0x10000000..0x8fffffff
> # cat /sys/kernel/debug/memblock/reserved
> 0: 0x10004000..0x10007fff
> 34: 0x2fffff88..0x3fffffff
>
>
> After the patch:
> # cat /sys/kernel/debug/memblock/memory
> 0: 0x10000000..0x8fffffff
> # cat /sys/kernel/debug/memblock/reserved
> 0: 0x10004000..0x10007fff
> 36: 0x80000000..0x8fffffff
I'm still not convinced that the memblock refactoring didn't uncover an
issue in the etnaviv driver.
Why does moving the CMA area from 0x80000000 to 0x30000000 make it fail?
BTW, the code that complained about "command buffer outside valid memory
window" has been removed by the commit 17e4660ae3d7 ("drm/etnaviv:
implement per-process address spaces on MMUv2").
Could be that recent changes to MMU management of etnaviv resolve the
issue?
> > From 06529f861772b7dea2912fc2245debe4690139b8 Mon Sep 17 00:00:00 2001
> > From: Mike Rapoport <[email protected]>
> > Date: Wed, 2 Oct 2019 10:14:17 +0300
> > Subject: [PATCH] mm: memblock: do not enforce current limit for memblock_phys*
> > family
> >
> > Until commit 92d12f9544b7 ("memblock: refactor internal allocation
> > functions") the maximal address for memblock allocations was forced to
> > memblock.current_limit only for the allocation functions returning a
> > virtual address. The changes introduced by that commit moved the limit
> > enforcement into the allocation core and, as a result, the allocation
> > functions returning a physical address also started to limit allocations
> > to memblock.current_limit.
> >
> > This caused breakage of etnaviv GPU driver:
> >
> > [ 3.682347] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
> > [ 3.688669] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
> > [ 3.695099] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
> > [ 3.700800] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
> > [ 3.723013] etnaviv-gpu 130000.gpu: command buffer outside valid memory window
> > [ 3.731308] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
> > [ 3.752437] etnaviv-gpu 134000.gpu: command buffer outside valid memory window
> > [ 3.760583] etnaviv-gpu 2204000.gpu: model: GC355, revision: 1215
> > [ 3.766766] etnaviv-gpu 2204000.gpu: Ignoring GPU with VG and FE2.0
> >
> > Restore the behaviour of the memblock_phys* family so that these functions
> > will not enforce memblock.current_limit.
> >
>
> This fixed the issue. Thank you
>
> Tested-by: Adam Ford <[email protected]> #imx6q-logicpd
>
> > Fixes: 92d12f9544b7 ("memblock: refactor internal allocation functions")
> > Reported-by: Adam Ford <[email protected]>
> > Signed-off-by: Mike Rapoport <[email protected]>
> > ---
> > mm/memblock.c | 6 +++---
> > 1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/mm/memblock.c b/mm/memblock.c
> > index 7d4f61a..c4b16ca 100644
> > --- a/mm/memblock.c
> > +++ b/mm/memblock.c
> > @@ -1356,9 +1356,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
> >  		align = SMP_CACHE_BYTES;
> >  	}
> >
> > -	if (end > memblock.current_limit)
> > -		end = memblock.current_limit;
> > -
> >  again:
> >  	found = memblock_find_in_range_node(size, align, start, end, nid,
> >  					    flags);
> > @@ -1469,6 +1466,9 @@ static void * __init memblock_alloc_internal(
> >  	if (WARN_ON_ONCE(slab_is_available()))
> >  		return kzalloc_node(size, GFP_NOWAIT, nid);
> >
> > +	if (max_addr > memblock.current_limit)
> > +		max_addr = memblock.current_limit;
> > +
> >  	alloc = memblock_alloc_range_nid(size, align, min_addr, max_addr, nid);
> >
> > /* retry allocation without lower limit */
> > --
> > 2.7.4
> >
> >
> > > > adam
> > > >
> > > > On Sat, Sep 28, 2019 at 2:33 AM Mike Rapoport <[email protected]> wrote:
> > > > >
> > > > > On Thu, Sep 26, 2019 at 02:35:53PM -0500, Adam Ford wrote:
> > > > > > On Thu, Sep 26, 2019 at 11:04 AM Mike Rapoport <[email protected]> wrote:
> > > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > On Thu, Sep 26, 2019 at 08:09:52AM -0500, Adam Ford wrote:
> > > > > > > > On Wed, Sep 25, 2019 at 10:17 AM Fabio Estevam <[email protected]> wrote:
> > > > > > > > >
> > > > > > > > > On Wed, Sep 25, 2019 at 9:17 AM Adam Ford <[email protected]> wrote:
> > > > > > > > >
> > > > > > > > > > I tried cma=256M and noticed the cma dump at the beginning didn't
> > > > > > > > > > change. Do we need to set up a reserved-memory node like
> > > > > > > > > > imx6ul-ccimx6ulsom.dtsi did?
> > > > > > > > >
> > > > > > > > > I don't think so.
> > > > > > > > >
> > > > > > > > > Were you able to identify what was the exact commit that caused such regression?
> > > > > > > >
> > > > > > > > I was able to narrow it down to 92d12f9544b7 ("memblock: refactor
> > > > > > > > internal allocation functions") that caused the regression with
> > > > > > > > Etnaviv.
> > > > > > >
> > > > > > >
> > > > > > > Can you please test with this change:
> > > > > > >
> > > > > >
> > > > > > That appears to have fixed my issue. I am not sure what the impact
> > > > > > is, but is this a safe option?
> > > > >
> > > > > It's not really a fix, I just wanted to see how exactly 92d12f9544b7 ("memblock:
> > > > > refactor internal allocation functions") broke your setup.
> > > > >
> > > > > Can you share the dts you are using and the full kernel log?
> > > > >
> > > > > > adam
> > > > > >
> > > > > > > diff --git a/mm/memblock.c b/mm/memblock.c
> > > > > > > index 7d4f61a..1f5a0eb 100644
> > > > > > > --- a/mm/memblock.c
> > > > > > > +++ b/mm/memblock.c
> > > > > > > @@ -1356,9 +1356,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
> > > > > > > align = SMP_CACHE_BYTES;
> > > > > > > }
> > > > > > >
> > > > > > > - if (end > memblock.current_limit)
> > > > > > > - end = memblock.current_limit;
> > > > > > > -
> > > > > > > again:
> > > > > > > found = memblock_find_in_range_node(size, align, start, end, nid,
> > > > > > > flags);
> > > > > > >
> > > > > > > > I also noticed that if I create a reserved memory node as was done in
> > > > > > > > imx6ul-ccimx6ulsom.dtsi the 3D seems to work again, but without it, I
> > > > > > > > was getting errors regardless of the 'cma=256M' or not.
> > > > > > > > I don't have a problem using the reserved memory, but I guess I am not
> > > > > > > > sure what the amount should be. I know for the video decoding 1080p,
> > > > > > > > I have historically used cma=128M, but with the 3D also needing some
> > > > > > > > memory allocation, is that enough or should I use 256M?
> > > > > > > >
> > > > > > > > adam
> > > > > > >
> > > > > > > --
> > > > > > > Sincerely yours,
> > > > > > > Mike.
> > > > > > >
> > > > >
> > > > > --
> > > > > Sincerely yours,
> > > > > Mike.
> > > > >
> >
> > --
> > Sincerely yours,
> > Mike.
> >
--
Sincerely yours,
Mike.
On Thu, Oct 03, 2019 at 08:34:52AM +0300, Mike Rapoport wrote:
> (trimmed the CC)
>
> On Wed, Oct 02, 2019 at 06:14:11AM -0500, Adam Ford wrote:
> > On Wed, Oct 2, 2019 at 2:36 AM Mike Rapoport <[email protected]> wrote:
> > >
> >
> > Before the patch:
> >
> > # cat /sys/kernel/debug/memblock/memory
> > 0: 0x10000000..0x8fffffff
> > # cat /sys/kernel/debug/memblock/reserved
> > 0: 0x10004000..0x10007fff
> > 34: 0x2fffff88..0x3fffffff
> >
> >
> > After the patch:
> > # cat /sys/kernel/debug/memblock/memory
> > 0: 0x10000000..0x8fffffff
> > # cat /sys/kernel/debug/memblock/reserved
> > 0: 0x10004000..0x10007fff
> > 36: 0x80000000..0x8fffffff
>
> I'm still not convinced that the memblock refactoring didn't uncover an
> issue in the etnaviv driver.
>
> Why does moving the CMA area from 0x80000000 to 0x30000000 make it fail?
I think you have that the wrong way round.
> BTW, the code that complained about "command buffer outside valid memory
> window" has been removed by the commit 17e4660ae3d7 ("drm/etnaviv:
> implement per-process address spaces on MMUv2").
>
> Could be that recent changes to MMU management of etnaviv resolve the
> issue?
The iMX6 does not have MMUv2 hardware, it has MMUv1. With MMUv1, the
hardware requires command buffers within the first 2GiB of physical
RAM.
I've reported the problem previously but there was no resolution,
other than pointing the blame at CMA.
https://lists.freedesktop.org/archives/dri-devel/2019-June/thread.html#223516
--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up
On Thu, Oct 03, 2019 at 09:49:14AM +0100, Russell King - ARM Linux admin wrote:
> On Thu, Oct 03, 2019 at 08:34:52AM +0300, Mike Rapoport wrote:
> > (trimmed the CC)
> >
> > On Wed, Oct 02, 2019 at 06:14:11AM -0500, Adam Ford wrote:
> > > On Wed, Oct 2, 2019 at 2:36 AM Mike Rapoport <[email protected]> wrote:
> > > >
> > >
> > > Before the patch:
> > >
> > > # cat /sys/kernel/debug/memblock/memory
> > > 0: 0x10000000..0x8fffffff
> > > # cat /sys/kernel/debug/memblock/reserved
> > > 0: 0x10004000..0x10007fff
> > > 34: 0x2fffff88..0x3fffffff
> > >
> > >
> > > After the patch:
> > > # cat /sys/kernel/debug/memblock/memory
> > > 0: 0x10000000..0x8fffffff
> > > # cat /sys/kernel/debug/memblock/reserved
> > > 0: 0x10004000..0x10007fff
> > > 36: 0x80000000..0x8fffffff
> >
> > I'm still not convinced that the memblock refactoring didn't uncover an
> > issue in the etnaviv driver.
> >
> > Why does moving the CMA area from 0x80000000 to 0x30000000 make it fail?
>
> I think you have that the wrong way round.
I'm relying on Adam's reports of working and non-working versions.
According to that etnaviv works when CMA area is at 0x80000000 and does not
work when it is at 0x30000000.
He also sent logs a few days ago [1], they also confirm that.
[1] https://lore.kernel.org/linux-mm/CAHCN7xJEvS2Si=M+BYtz+kY0M4NxmqDjiX9Nwq6_3GGBh3yg=w@mail.gmail.com/
> > BTW, the code that complained about "command buffer outside valid memory
> > window" has been removed by the commit 17e4660ae3d7 ("drm/etnaviv:
> > implement per-process address spaces on MMUv2").
> >
> > Could be that recent changes to MMU management of etnaviv resolve the
> > issue?
>
> The iMX6 does not have MMUv2 hardware, it has MMUv1. With MMUv1, the
> hardware requires command buffers within the first 2GiB of physical
> RAM.
I've mentioned that patch because it removed the check for cmdbuf address
for MMUv1:
@@ -785,15 +768,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
 					  PAGE_SIZE);
 	if (ret) {
 		dev_err(gpu->dev, "could not create command buffer\n");
-		goto unmap_suballoc;
-	}
-
-	if (!(gpu->identity.minor_features1 & chipMinorFeatures1_MMU_VERSION) &&
-	    etnaviv_cmdbuf_get_va(&gpu->buffer, &gpu->cmdbuf_mapping) > 0x80000000) {
-		ret = -EINVAL;
-		dev_err(gpu->dev,
-			"command buffer outside valid memory window\n");
-		goto free_buffer;
+		goto fail;
 	}

 	/* Setup event management */
I really don't know how etnaviv works, so I hoped that people who
understand it would help.
> I've reported the problem previously but there was no resolution,
> other than pointing the blame at CMA.
>
> https://lists.freedesktop.org/archives/dri-devel/2019-June/thread.html#223516
>
> --
> RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
> FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
> According to speedtest.net: 11.9Mbps down 500kbps up
--
Sincerely yours,
Mike.
On Thursday, 03.10.2019 at 14:30 +0300, Mike Rapoport wrote:
> On Thu, Oct 03, 2019 at 09:49:14AM +0100, Russell King - ARM Linux admin wrote:
> > On Thu, Oct 03, 2019 at 08:34:52AM +0300, Mike Rapoport wrote:
> > > (trimmed the CC)
> > >
> > > On Wed, Oct 02, 2019 at 06:14:11AM -0500, Adam Ford wrote:
> > > > On Wed, Oct 2, 2019 at 2:36 AM Mike Rapoport <[email protected]> wrote:
> > > >
> > > > Before the patch:
> > > >
> > > > # cat /sys/kernel/debug/memblock/memory
> > > > 0: 0x10000000..0x8fffffff
> > > > # cat /sys/kernel/debug/memblock/reserved
> > > > 0: 0x10004000..0x10007fff
> > > > 34: 0x2fffff88..0x3fffffff
> > > >
> > > >
> > > > After the patch:
> > > > # cat /sys/kernel/debug/memblock/memory
> > > > 0: 0x10000000..0x8fffffff
> > > > # cat /sys/kernel/debug/memblock/reserved
> > > > 0: 0x10004000..0x10007fff
> > > > 36: 0x80000000..0x8fffffff
> > >
> > > I'm still not convinced that the memblock refactoring didn't uncover an
> > > issue in the etnaviv driver.
> > >
> > > Why does moving the CMA area from 0x80000000 to 0x30000000 make it fail?
> >
> > I think you have that the wrong way round.
>
> I'm relying on Adam's reports of working and non-working versions.
> According to that etnaviv works when CMA area is at 0x80000000 and does not
> work when it is at 0x30000000.
>
> He also sent logs a few days ago [1], they also confirm that.
>
> [1] https://lore.kernel.org/linux-mm/CAHCN7xJEvS2Si=M+BYtz+kY0M4NxmqDjiX9Nwq6_3GGBh3yg=w@mail.gmail.com/
To clarify: Etnaviv needs to know where the CMA area is in order to
move an aperture window to cover the CMA area so the command buffers
allocated in contig memory can be mapped through this aperture. Now the
issue is that there is currently no good API for a driver to know where
the CMA area is located, so we are trying to infer this from
dma_get_required_mask. Unfortunately this can overshoot the real DRAM
area by a bit, so combined with the fixed 2GB size of the GPU aperture
this means we are no longer able to map the command buffers through the
required aperture if the CMA area moves too far down in the physical
memory. A rough sketch of this constraint follows below.
It's really a bad interaction between etnaviv and CMA area placement,
due to insufficient APIs to communicate some crucial information. There
is nothing in the etnaviv driver or the hardware which requires the CMA
area to be at a certain place, we just need to know where it is located
exactly. So my try at fixing this [1] was by adding an API to get the
required information, but the first attempt was shot down and I haven't
had time to follow up on this yet.
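A rough illustration of the constraint described above, with hypothetical
names (this is not the actual etnaviv code):

	/*
	 * The GPU has a fixed 2 GiB linear aperture. If the aperture base
	 * inferred from dma_get_required_mask() overshoots the real DRAM
	 * area while CMA sits low in physical memory, a command buffer
	 * allocated from CMA can fall outside the window:
	 */
	static bool cmdbuf_in_aperture(phys_addr_t aperture_base,
				       phys_addr_t cmdbuf_pa)
	{
		return cmdbuf_pa >= aperture_base &&
		       cmdbuf_pa - aperture_base < SZ_2G;
	}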
Regards,
Lucas
[1] https://patchwork.kernel.org/patch/10966767/
>
> The iMX6 does not have MMUv2 hardware, it has MMUv1. With MMUv1, the
> hardware requires command buffers within the first 2GiB of physical
> RAM.
>
I thought that the i.MX6q has the MMUv1 and GC2000 GPU while the
i.MX6qp has the MMUv2 and GC3000? Meaning the i.MX6 has both MMUv1
and MMUv2 depending on which i.MX6 part we are talking about.
On Thu, Oct 03, 2019 at 07:46:06AM -0700, Chris Healy wrote:
> >
> > The iMX6 does not have MMUv2 hardware, it has MMUv1. With MMUv1, the
> > hardware requires command buffers within the first 2GiB of physical
> > RAM.
> >
> I thought that the i.MX6q has the MMUv1 and GC2000 GPU while the
> i.MX6qp has the MMUv2 and GC3000? Meaning the i.MX6 has both MMUv1
> and MMUv2 depending on which i.MX6 part we are talking about.
The report says iMX6Q with GC2000 - which is what I was referring to
here. I'm not aware of what the later SoCs use, since I've never used
them.
Thanks.
--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up
On Thu, Oct 03, 2019 at 02:30:10PM +0300, Mike Rapoport wrote:
> On Thu, Oct 03, 2019 at 09:49:14AM +0100, Russell King - ARM Linux admin wrote:
> > On Thu, Oct 03, 2019 at 08:34:52AM +0300, Mike Rapoport wrote:
> > > (trimmed the CC)
> > >
> > > On Wed, Oct 02, 2019 at 06:14:11AM -0500, Adam Ford wrote:
> > > > On Wed, Oct 2, 2019 at 2:36 AM Mike Rapoport <[email protected]> wrote:
> > > > >
> > > >
> > > > Before the patch:
> > > >
> > > > # cat /sys/kernel/debug/memblock/memory
> > > > 0: 0x10000000..0x8fffffff
> > > > # cat /sys/kernel/debug/memblock/reserved
> > > > 0: 0x10004000..0x10007fff
> > > > 34: 0x2fffff88..0x3fffffff
> > > >
> > > >
> > > > After the patch:
> > > > # cat /sys/kernel/debug/memblock/memory
> > > > 0: 0x10000000..0x8fffffff
> > > > # cat /sys/kernel/debug/memblock/reserved
> > > > 0: 0x10004000..0x10007fff
> > > > 36: 0x80000000..0x8fffffff
> > >
> > > I'm still not convinced that the memblock refactoring didn't uncover an
> > > issue in the etnaviv driver.
> > >
> > > Why does moving the CMA area from 0x80000000 to 0x30000000 make it fail?
> >
> > I think you have that the wrong way round.
>
> I'm relying on Adam's reports of working and non-working versions.
> According to that etnaviv works when CMA area is at 0x80000000 and does not
> work when it is at 0x30000000.
>
> He also sent logs a few days ago [1], they also confirm that.
>
> [1] https://lore.kernel.org/linux-mm/CAHCN7xJEvS2Si=M+BYtz+kY0M4NxmqDjiX9Nwq6_3GGBh3yg=w@mail.gmail.com/
Sorry, yes, you're right. Still, I've reported this same regression
a while back, and it's never gone away.
> > > BTW, the code that complained about "command buffer outside valid memory
> > > window" has been removed by the commit 17e4660ae3d7 ("drm/etnaviv:
> > > implement per-process address spaces on MMUv2").
> > >
> > > Could be that recent changes to MMU management of etnaviv resolve the
> > > issue?
> >
> > The iMX6 does not have MMUv2 hardware, it has MMUv1. With MMUv1, the
> > hardware requires command buffers within the first 2GiB of physical
> > RAM.
>
> I've mentioned that patch because it removed the check for cmdbuf address
> for MMUv1:
>
> @@ -785,15 +768,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
>  					  PAGE_SIZE);
>  	if (ret) {
>  		dev_err(gpu->dev, "could not create command buffer\n");
> -		goto unmap_suballoc;
> -	}
> -
> -	if (!(gpu->identity.minor_features1 & chipMinorFeatures1_MMU_VERSION) &&
> -	    etnaviv_cmdbuf_get_va(&gpu->buffer, &gpu->cmdbuf_mapping) > 0x80000000) {
> -		ret = -EINVAL;
> -		dev_err(gpu->dev,
> -			"command buffer outside valid memory window\n");
> -		goto free_buffer;
> +		goto fail;
>  	}
>
>  	/* Setup event management */
>
>
> I really don't know how etnaviv works, so I hoped that people who
> understand it would help.
From what I can see, removing that check is a completely insane thing
to do, and I note that these changes are _not_ described in the commit
message. The problem was known about _before_ (June 22) the patch was
created (July 5).
Lucas, please can you explain why removing the above check, which is
well known to correctly trigger on various platforms to prevent
incorrect GPU behaviour, is safe?
Thanks.
--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTC broadband for 0.8mile line in suburbia: sync at 12.1Mbps down 622kbps up
According to speedtest.net: 11.9Mbps down 500kbps up
On Friday, 04.10.2019 at 10:27 +0100, Russell King - ARM Linux admin wrote:
> On Thu, Oct 03, 2019 at 02:30:10PM +0300, Mike Rapoport wrote:
> > On Thu, Oct 03, 2019 at 09:49:14AM +0100, Russell King - ARM Linux
> > admin wrote:
> > > On Thu, Oct 03, 2019 at 08:34:52AM +0300, Mike Rapoport wrote:
> > > > (trimmed the CC)
> > > >
> > > > On Wed, Oct 02, 2019 at 06:14:11AM -0500, Adam Ford wrote:
> > > > > On Wed, Oct 2, 2019 at 2:36 AM Mike Rapoport <[email protected]> wrote:
> > > > >
> > > > > Before the patch:
> > > > >
> > > > > # cat /sys/kernel/debug/memblock/memory
> > > > > 0: 0x10000000..0x8fffffff
> > > > > # cat /sys/kernel/debug/memblock/reserved
> > > > > 0: 0x10004000..0x10007fff
> > > > > 34: 0x2fffff88..0x3fffffff
> > > > >
> > > > >
> > > > > After the patch:
> > > > > # cat /sys/kernel/debug/memblock/memory
> > > > > 0: 0x10000000..0x8fffffff
> > > > > # cat /sys/kernel/debug/memblock/reserved
> > > > > 0: 0x10004000..0x10007fff
> > > > > 36: 0x80000000..0x8fffffff
> > > >
> > > > I'm still not convinced that the memblock refactoring didn't
> > > > uncover an issue in the etnaviv driver.
> > > >
> > > > Why does moving the CMA area from 0x80000000 to 0x30000000 make it
> > > > fail?
> > >
> > > I think you have that the wrong way round.
> >
> > I'm relying on Adam's reports of working and non-working versions.
> > According to that etnaviv works when CMA area is at 0x80000000 and
> > does not work when it is at 0x30000000.
> >
> > He also sent logs a few days ago [1], they also confirm that.
> >
> > [1]
> > https://lore.kernel.org/linux-mm/CAHCN7xJEvS2Si=M+BYtz+kY0M4NxmqDjiX9Nwq6_3GGBh3yg=w@mail.gmail.com/
>
> Sorry, yes, you're right. Still, I've reported this same regression
> a while back, and it's never gone away.
>
> > > > BTW, the code that complained about "command buffer outside valid
> > > > memory window" has been removed by the commit 17e4660ae3d7
> > > > ("drm/etnaviv: implement per-process address spaces on MMUv2").
> > > >
> > > > Could be that recent changes to MMU management of etnaviv resolve
> > > > the issue?
> > >
> > > The iMX6 does not have MMUv2 hardware, it has MMUv1. With MMUv1, the
> > > hardware requires command buffers within the first 2GiB of physical
> > > RAM.
> >
> > I've mentioned that patch because it removed the check for cmdbuf
> > address for MMUv1:
> >
> > @@ -785,15 +768,7 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
> >  					  PAGE_SIZE);
> >  	if (ret) {
> >  		dev_err(gpu->dev, "could not create command buffer\n");
> > -		goto unmap_suballoc;
> > -	}
> > -
> > -	if (!(gpu->identity.minor_features1 & chipMinorFeatures1_MMU_VERSION) &&
> > -	    etnaviv_cmdbuf_get_va(&gpu->buffer, &gpu->cmdbuf_mapping) > 0x80000000) {
> > -		ret = -EINVAL;
> > -		dev_err(gpu->dev,
> > -			"command buffer outside valid memory window\n");
> > -		goto free_buffer;
> > +		goto fail;
> >  	}
> >
> >  	/* Setup event management */
> >
> >
> > I really don't know how etnaviv works, so I hoped that people who
> > understand it would help.
>
> From what I can see, removing that check is a completely insane thing
> to do, and I note that these changes are _not_ described in the commit
> message. The problem was known about _before_ (June 22) the patch was
> created (July 5).
>
> Lucas, please can you explain why removing the above check, which is
> well known to correctly trigger on various platforms to prevent
> incorrect GPU behaviour, is safe?
It isn't. It's a pretty big oversight in this commit to remove this
check. It can't be done at the same spot in the code anymore, as we no
longer have a mapping context at that point, but it should have been
moved into etnaviv_iommu_context_init(). I'll send a patch to fix this
up.
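
Roughly along these lines (an untested sketch; the exact placement and
plumbing inside etnaviv_iommu_context_init() still need to be worked
out, so take the names below as assumptions):

	/*
	 * Untested sketch: the MMUv1 linear window covers only the first
	 * 2GiB, so reject command buffers that end up mapped above it.
	 */
	if (!(gpu->identity.minor_features1 & chipMinorFeatures1_MMU_VERSION) &&
	    etnaviv_cmdbuf_get_va(&gpu->buffer, &gpu->cmdbuf_mapping) > 0x80000000) {
		dev_err(gpu->dev, "command buffer outside valid memory window\n");
		return -EINVAL;
	}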
Regards,
Lucas
On Fri, Oct 4, 2019 at 8:23 AM Lucas Stach <[email protected]> wrote:
>
> On Friday, 04.10.2019, 10:27 +0100, Russell King - ARM Linux admin wrote:
> > [...]
> >
> > Lucas, please can you explain why removing the above check, which is
> > well known to correctly trigger on various platforms to prevent
> > incorrect GPU behaviour, is safe?
>
> It isn't. It's a pretty big oversight in this commit to remove this
> check. It can't be done at the same spot in the code anymore, as we no
> longer have a mapping context at that point, but it should have been
> moved into etnaviv_iommu_context_init(). I'll send a patch to fix this
> up.
If you CC me, I will test it and report my findings.
adam
>
> Regards,
> Lucas
>
On Fri, Oct 04, 2019 at 03:21:03PM +0200, Lucas Stach wrote:
> On Friday, 04.10.2019, 10:27 +0100, Russell King - ARM Linux admin wrote:
> > [...]
> >
> > Lucas, please can you explain why removing the above check, which is
> > well known to correctly trigger on various platforms to prevent
> > incorrect GPU behaviour, is safe?
>
> It isn't. It's a pretty big oversight in this commit to remove this
> check. It can't be done at the same spot in the code anymore, as we no
> longer have a mapping context at that point, but it should have been
> moved into etnaviv_iommu_context_init(). I'll send a patch to fix this
> up.
Lucas, can you make the check use SZ_2G instead of 0x80000000 and add a
comment about the 2G limitation of the aperture window?
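
I.e., something like this (an illustrative fragment only, not a patch;
SZ_2G comes from <linux/sizes.h>):

	/*
	 * The MMUv1 aperture window covers only the first 2G of physical
	 * memory; command buffers mapped above it are not reachable by
	 * the GPU.
	 */
	if (!(gpu->identity.minor_features1 & chipMinorFeatures1_MMU_VERSION) &&
	    etnaviv_cmdbuf_get_va(&gpu->buffer, &gpu->cmdbuf_mapping) > SZ_2G)
		return -EINVAL;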
> Regards,
> Lucas
>
--
Sincerely yours,
Mike.
On Fri, Oct 04, 2019 at 10:27:27AM +0100, Russell King - ARM Linux admin wrote:
> [...]
>
> From what I can see, removing that check is a completely insane thing
> to do, and I note that these changes are _not_ described in the commit
> message. The problem was known about _before_ (June 22) the patch was
> created (July 5).
The memblock refactoring went into 5.1, which was released on May 5, and
it likely caused the regression.
Unless I'm missing something, before the memblock refactoring the CMA
reservation could use the entire physical memory because
memblock_phys_alloc() didn't enforce memblock.current_limit.
Since memblock's default is to allocate from the top,
cma_declare_contiguous() could grab memory close to the end of DRAM and
thus get a physical address close enough to the virtual address to fit
within the 2G limit.
When I made memblock_phys* limit the memblock allocations to
memblock.current_limit, the CMA area moved too far down and the gap
became larger than 2G.
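
To illustrate, a much simplified sketch of the effect (not the actual
memblock code; the helper below is made up for illustration):

	/*
	 * Simplified sketch: with top-down allocation the upper search
	 * bound determines where the CMA area lands.
	 */
	static phys_addr_t cma_base_sketch(phys_addr_t size, phys_addr_t align)
	{
		/* before the refactoring the search effectively covered
		 * the whole of DRAM, so the top-down allocator returned
		 * the highest fitting range (0x80000000 on Adam's board)
		 */
		phys_addr_t end = MEMBLOCK_ALLOC_ANYWHERE;

		/* after it the upper bound is clamped, so the same
		 * top-down search lands much lower (around 0x30000000)
		 */
		end = min_t(phys_addr_t, end, memblock.current_limit);

		return memblock_find_in_range(0, end, size, align);
	}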
It does not seem that dealing with this in the etnaviv driver and in the
DMA and CMA APIs will happen quickly, and the "revert" of the memblock
changes I sent earlier in this thread does fix the problem.
Andrew, would you like me to resend the patch in a separate e-mail?
> Lucas, please can you explain why removing the above check, which is
> well known to correctly trigger on various platforms to prevent
> incorrect GPU behaviour, is safe?
>
> Thanks.
>
--
Sincerely yours,
Mike.