From: Wei Yang <richard.weiyang@gmail.com>
To: tglx@linutronix.de, mingo@redhat.com, hpa@zytor.com, tj@kernel.org, bp@alien8.de
Cc: linux-kernel@vger.kernel.org, Wei Yang <richard.weiyang@gmail.com>
Subject: [Patch V2 2/2] x86/mm/numa: remove the numa_nodemask_from_meminfo()
Date: Tue, 14 Mar 2017 11:08:01 +0800
Message-Id: <20170314030801.13656-2-richard.weiyang@gmail.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20170314030801.13656-1-richard.weiyang@gmail.com>
References: <20170314030801.13656-1-richard.weiyang@gmail.com>

numa_nodemask_from_meminfo() sets bits in a nodemask according to the
memory blocks recorded in numa_meminfo. Its only two callers use it to
set bits in (a copy of) numa_nodes_parsed based on numa_meminfo. With
the current code paths, the node information in numa_meminfo is always
a subset of numa_nodes_parsed, so setting those bits again is
unnecessary.

The following code path analysis shows that the node information in
numa_meminfo is a subset of numa_nodes_parsed:

x86_numa_init()
    numa_init()
        Case 1
        acpi_numa_init()
            acpi_parse_memory_affinity()
                numa_add_memblk()
                node_set(numa_nodes_parsed)
            acpi_parse_slit()
                numa_nodemask_from_meminfo()

        Case 2
        amd_numa_init()
            numa_add_memblk()
            node_set(numa_nodes_parsed)

        Case 3
        dummy_numa_init()
            node_set(numa_nodes_parsed)
            numa_add_memblk()

        numa_register_memblks()
            numa_nodemask_from_meminfo()

From the code path analysis we can see that each time a memblk is
added, the corresponding bit is set in numa_nodes_parsed, so there is
no need to set it again via numa_nodemask_from_meminfo() on a copy of
numa_nodes_parsed. (An illustrative sketch of this invariant follows
after the patch, below.)

This patch removes numa_nodemask_from_meminfo().

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 arch/x86/mm/numa.c | 21 +--------------------
 1 file changed, 1 insertion(+), 20 deletions(-)

diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index ac632e5397aa..5ecc5a745c51 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -314,20 +314,6 @@ int __init numa_cleanup_meminfo(struct numa_meminfo *mi)
 	return 0;
 }
 
-/*
- * Set nodes, which have memory in @mi, in *@nodemask.
- */
-static void __init numa_nodemask_from_meminfo(nodemask_t *nodemask,
-					      const struct numa_meminfo *mi)
-{
-	int i;
-
-	for (i = 0; i < ARRAY_SIZE(mi->blk); i++)
-		if (mi->blk[i].start != mi->blk[i].end &&
-		    mi->blk[i].nid != NUMA_NO_NODE)
-			node_set(mi->blk[i].nid, *nodemask);
-}
-
 /**
  * numa_reset_distance - Reset NUMA distance table
  *
@@ -347,16 +333,12 @@ void __init numa_reset_distance(void)
 
 static int __init numa_alloc_distance(void)
 {
-	nodemask_t nodes_parsed;
 	size_t size;
 	int i, j, cnt = 0;
 	u64 phys;
 
 	/* size the new table and allocate it */
-	nodes_parsed = numa_nodes_parsed;
-	numa_nodemask_from_meminfo(&nodes_parsed, &numa_meminfo);
-
-	for_each_node_mask(i, nodes_parsed)
+	for_each_node_mask(i, numa_nodes_parsed)
 		cnt = i;
 	cnt++;
 	size = cnt * cnt * sizeof(numa_distance[0]);
@@ -535,7 +517,6 @@ static int __init numa_register_memblks(struct numa_meminfo *mi)
 
 	/* Account for nodes with cpus and no memory */
 	node_possible_map = numa_nodes_parsed;
-	numa_nodemask_from_meminfo(&node_possible_map, mi);
 
 	if (WARN_ON(nodes_empty(node_possible_map)))
 		return -EINVAL;
-- 
2.11.0
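
For illustration only, here is a minimal standalone userspace sketch of
the subset invariant argued above. It is not kernel code: add_memblk(),
mask_from_meminfo() and the plain unsigned long bitmask are simplified
stand-ins for numa_add_memblk(), numa_nodemask_from_meminfo() and
nodemask_t/numa_nodes_parsed. Because every path that records a memory
block also sets the node's bit in the parsed mask, recomputing a mask
from the recorded blocks can never add a new bit:

#include <assert.h>
#include <stdio.h>

#define MAX_BLKS 16

struct blk { int nid; unsigned long start, end; };

static unsigned long nodes_parsed;	/* simplified stand-in for numa_nodes_parsed */
static struct blk meminfo[MAX_BLKS];	/* simplified stand-in for numa_meminfo */
static int nr_blks;

/*
 * Mirror the pairs in the call tree above: every path that records a
 * memblk also sets the node's bit in the parsed mask.
 */
static void add_memblk(int nid, unsigned long start, unsigned long end)
{
	meminfo[nr_blks++] = (struct blk){ .nid = nid, .start = start, .end = end };
	nodes_parsed |= 1UL << nid;
}

/* What the removed helper effectively recomputed from the recorded blocks. */
static unsigned long mask_from_meminfo(void)
{
	unsigned long mask = 0;

	for (int i = 0; i < nr_blks; i++)
		mask |= 1UL << meminfo[i].nid;
	return mask;
}

int main(void)
{
	add_memblk(0, 0x0UL, 0x80000000UL);
	add_memblk(1, 0x80000000UL, 0x100000000UL);

	/* Subset check: recomputing from meminfo adds no bits beyond nodes_parsed. */
	assert((mask_from_meminfo() | nodes_parsed) == nodes_parsed);
	printf("meminfo nodes are a subset of nodes_parsed\n");
	return 0;
}

Built with e.g. "gcc -std=c99 sketch.c", the assert models why the
removed recomputation could never change node_possible_map or the
nodemask used to size the distance table.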