Date: Wed, 8 Jul 2015 16:16:23 -0700
From: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
To: Michael Ellerman
Cc: Peter Zijlstra, linux-kernel@vger.kernel.org, Paul Mackerras,
    Anton Blanchard, David Rientjes, linuxppc-dev@lists.ozlabs.org
Subject: Re: [RFC,1/2] powerpc/numa: fix cpu_to_node() usage during boot
Message-ID: <20150708231623.GB44862@linux.vnet.ibm.com>
References: <20150702230202.GA2807@linux.vnet.ibm.com>
 <20150708040056.948A1140770@ozlabs.org>
In-Reply-To: <20150708040056.948A1140770@ozlabs.org>

On 08.07.2015 [14:00:56 +1000], Michael Ellerman wrote:
> On Thu, 2015-02-07 at 23:02:02 UTC, Nishanth Aravamudan wrote:
> > Much like on x86, now that powerpc is using USE_PERCPU_NUMA_NODE_ID, we
> > have an ordering issue during boot with early calls to cpu_to_node().
>
> "now that .." implies we changed something and broke this. What commit was
> it that changed the behaviour?

Well, that's something I'm still trying to unearth. In the commits both
before and after adding USE_PERCPU_NUMA_NODE_ID (8c272261194d
"powerpc/numa: Enable USE_PERCPU_NUMA_NODE_ID"), the dmesg reports:

	pcpu-alloc: [0] 0 1 2 3 4 5 6 7

At least prior to 8c272261194d, this might have been due to the old
powerpc-specific cpu_to_node():

static inline int cpu_to_node(int cpu)
{
	int nid;

	nid = numa_cpu_lookup_table[cpu];

	/*
	 * During early boot, the numa-cpu lookup table might not have been
	 * setup for all CPUs yet. In such cases, default to node 0.
	 */
	return (nid < 0) ? 0 : nid;
}

which might imply that no one cares, or simply that no one noticed.
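For contrast, the generic helper selected by USE_PERCPU_NUMA_NODE_ID is
just a per-cpu read, so its return value is only meaningful once the
per-cpu areas (and the per-cpu numa_node variable) have been populated.
Roughly (paraphrasing include/linux/topology.h, so treat the exact form
below as a sketch rather than the literal source):

/*
 * Sketch of the CONFIG_USE_PERCPU_NUMA_NODE_ID variant: cpu_to_node()
 * reads a per-cpu variable, so callers that run before the per-cpu
 * areas are configured see whatever the bootstrap copy happens to hold.
 */
DECLARE_PER_CPU(int, numa_node);

static inline int cpu_to_node(int cpu)
{
	return per_cpu(numa_node, cpu);
}

That difference is what makes the ordering of early cpu_to_node() calls
matter on powerpc now.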
> > The value returned by those calls now depends on the per-cpu area being
> > setup, but that is not guaranteed to be the case during boot. Instead,
> > we need to add an early_cpu_to_node() which doesn't use the per-CPU area
> > and call that from certain spots that are known to invoke cpu_to_node()
> > before the per-CPU areas are configured.
> >
> > On an example 2-node NUMA system with the following topology:
> >
> > available: 2 nodes (0-1)
> > node 0 cpus: 0 1 2 3
> > node 0 size: 2029 MB
> > node 0 free: 1753 MB
> > node 1 cpus: 4 5 6 7
> > node 1 size: 2045 MB
> > node 1 free: 1945 MB
> > node distances:
> > node   0   1
> >   0:  10  40
> >   1:  40  10
> >
> > we currently emit at boot:
> >
> > [ 0.000000] pcpu-alloc: [0] 0 1 2 3 [0] 4 5 6 7
> >
> > After this commit, we correctly emit:
> >
> > [ 0.000000] pcpu-alloc: [0] 0 1 2 3 [1] 4 5 6 7
>
> So it looks fairly sane, and I guess it's a bug fix.
>
> But I'm a bit reluctant to put it in straight away without some time in next.

I'm fine with that -- it could use some more extensive testing, admittedly
(so far I have only been able to verify that the pcpu areas are being
allocated on the right node). I still need to test with hotplug and things
like that. Hence the RFC.

> It looks like the symptom is that the per-cpu areas are all allocated on
> node 0, is that all that goes wrong?

Yes, that's the symptom. I cc'd a few folks to see if they could help
spell out the performance implications of such a setup -- sorry, I should
have been more explicit about that.

Thanks,
Nish