Fix the calculation of the size of the numa_distance array. It is a
2-dimensional array. This fixes early panics seen on large NUMA systems.
Signed-off-by: Jack Steiner <[email protected]>
Index: linux/arch/x86/mm/numa_64.c
===================================================================
--- linux.orig/arch/x86/mm/numa_64.c 2011-02-23 17:46:38.000000000 -0600
+++ linux/arch/x86/mm/numa_64.c 2011-02-25 08:53:45.781921592 -0600
@@ -414,7 +414,8 @@ static int __init numa_alloc_distance(vo
for_each_node_mask(i, nodes_parsed)
cnt = i;
- size = ++cnt * sizeof(numa_distance[0]);
+ cnt++;
+ size = cnt * cnt * sizeof(numa_distance[0]);
phys = memblock_find_in_range(0, (u64)max_pfn_mapped << PAGE_SHIFT,
size, PAGE_SIZE);
* Jack Steiner <[email protected]> wrote:
> Fix the calculation of the size of the numa_distance array. It is a
> 2-dimensional array. This fixes early panics seen on large NUMA systems.
>
> Signed-off-by: Jack Steiner <[email protected]>
>
> Index: linux/arch/x86/mm/numa_64.c
> ===================================================================
> --- linux.orig/arch/x86/mm/numa_64.c 2011-02-23 17:46:38.000000000 -0600
> +++ linux/arch/x86/mm/numa_64.c 2011-02-25 08:53:45.781921592 -0600
> @@ -414,7 +414,8 @@ static int __init numa_alloc_distance(vo
>
> for_each_node_mask(i, nodes_parsed)
> cnt = i;
> - size = ++cnt * sizeof(numa_distance[0]);
> + cnt++;
> + size = cnt * cnt * sizeof(numa_distance[0]);
>
> phys = memblock_find_in_range(0, (u64)max_pfn_mapped << PAGE_SHIFT,
> size, PAGE_SIZE);
Tejun has queued up a fix for this already. Tejun, mind sending the fix to me ASAP?
Thanks,
Ingo
Please pull from the following branch to receive NUMA distance table
size fix.
git://git.kernel.org/pub/scm/linux/kernel/git/tj/misc.git x86-mm
David Rientjes (1):
x86-64, NUMA: Fix size of numa_distance array
arch/x86/mm/numa_64.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
Thanks.
--
tejun