Date: Tue, 7 Apr 2015 12:21:47 +0200
From: Peter Zijlstra
To: Nishanth Aravamudan
Cc: Ingo Molnar, linux-kernel@vger.kernel.org, Srikar Dronamraju,
	Boqun Feng, Anshuman Khandual, linuxppc-dev@lists.ozlabs.org,
	Benjamin Herrenschmidt, Anton Blanchard
Subject: Re: Topology updates and NUMA-level sched domains
Message-ID: <20150407102147.GJ23123@twins.programming.kicks-ass.net>
In-Reply-To: <20150406214558.GA38501@linux.vnet.ibm.com>

On Mon, Apr 06, 2015 at 02:45:58PM -0700, Nishanth Aravamudan wrote:
> Hi Peter,
>
> As you are very aware, I think, power has some odd NUMA topologies
> (and changes to those topologies) at run-time. In particular, we can
> see a topology at boot:
>
>   Node 0: all CPUs
>   Node 7: no CPUs
>
> Then we get a notification from the hypervisor that a core (or two)
> have moved from node 0 to node 7. This results in the:
>
> or a re-init API (which won't try to reallocate various bits), because
> the topology could be completely different now (e.g.,
> sched_domains_numa_distance will also be inaccurate now). Really, a
> topology update on power (not sure on s390x, but those are the only
> two archs that return a positive value from arch_update_cpu_topology()
> right now, afaics) is a lot like a hotplug event, and we need to
> re-initialize any dependent structures.
>
> I'm just sending out feelers, as we can limp by with the above
> warning, it seems, but it is less than ideal. Any help or insight you
> could provide would be greatly appreciated!

So I think (and ISTR having stated this before) that dynamic cpu<->node
maps are absolutely insane.

There is a ton of stuff that assumes the cpu<->node relation is a
boot-time fixed one. Userspace is one of them. Per-cpu memory is
another.

You simply cannot do this without causing massive borkage.

So please come up with a coherent plan to deal with the entire problem
of a dynamic cpu-to-memory relation and I might consider the scheduler
impact. But we're not going to hack around and maybe make it not crash
in a few corner cases while the entire thing is shite.
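
To make the "boot-time fixed" assumption concrete: below is a minimal
userspace sketch, using libnuma's numa_node_of_cpu(), of the snapshot
pattern common in NUMA-aware tools. The caching scheme here is
illustrative, not lifted from any particular program; it just shows the
kind of cached state that a hypervisor-driven node move silently
invalidates. Build with gcc -lnuma.

#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

static int *cpu_node;   /* snapshot taken once, never refreshed */
static int ncpus;

static void topo_snapshot(void)
{
        if (numa_available() < 0) {
                fprintf(stderr, "no NUMA support\n");
                exit(1);
        }
        ncpus = numa_num_configured_cpus();
        cpu_node = malloc(ncpus * sizeof(*cpu_node));
        if (!cpu_node)
                exit(1);
        for (int cpu = 0; cpu < ncpus; cpu++)
                cpu_node[cpu] = numa_node_of_cpu(cpu);
}

int main(void)
{
        topo_snapshot();
        /*
         * Everything below trusts the snapshot; if the hypervisor
         * later moves a core from node 0 to node 7, none of it
         * ever notices.
         */
        for (int cpu = 0; cpu < ncpus; cpu++)
                printf("cpu%d -> node%d\n", cpu, cpu_node[cpu]);
        free(cpu_node);
        return 0;
}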
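
The per-cpu memory point is the same pattern one layer down: per-cpu
areas are placed on the node each CPU occupies at the time they are
allocated. A loose userspace analogue using libnuma's
numa_alloc_onnode() (the kernel does this at boot with its own percpu
allocator, not this API):

#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

#define PERCPU_SZ 4096

int main(void)
{
        int ncpus;
        void **percpu;

        if (numa_available() < 0) {
                fprintf(stderr, "no NUMA support\n");
                return 1;
        }
        ncpus = numa_num_configured_cpus();
        percpu = malloc(ncpus * sizeof(*percpu));
        if (!percpu)
                return 1;

        /*
         * Place each CPU's buffer on the node that CPU occupies *now*.
         * If the cpu<->node map changes later, the buffer stays where
         * it is, and "node-local" per-cpu accesses turn remote.
         */
        for (int cpu = 0; cpu < ncpus; cpu++) {
                int node = numa_node_of_cpu(cpu);

                percpu[cpu] = numa_alloc_onnode(PERCPU_SZ,
                                                node < 0 ? 0 : node);
        }

        for (int cpu = 0; cpu < ncpus; cpu++)
                if (percpu[cpu])
                        numa_free(percpu[cpu], PERCPU_SZ);
        free(percpu);
        return 0;
}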
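
And on the sched_domains_numa_distance staleness raised in the quoted
mail: at the time, sched_init_numa() collapsed the node distance matrix
into the set of unique distances once at boot, building one NUMA
sched-domain level per distance. A standalone sketch of that one-shot
computation, with made-up SLIT-style distances standing in for the
node 0 / node 7 topology above (the real code lives in
kernel/sched/core.c):

#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 2

/* Illustrative distances; 10 = local, 40 = remote. */
static const int node_distance[NR_NODES][NR_NODES] = {
        { 10, 40 },
        { 40, 10 },
};

int main(void)
{
        int levels[NR_NODES * NR_NODES];
        int nlevels = 0;

        /* Collect the distinct distances in the matrix. */
        for (int i = 0; i < NR_NODES; i++) {
                for (int j = 0; j < NR_NODES; j++) {
                        bool seen = false;

                        for (int k = 0; k < nlevels; k++)
                                if (levels[k] == node_distance[i][j])
                                        seen = true;
                        if (!seen)
                                levels[nlevels++] = node_distance[i][j];
                }
        }

        /*
         * One NUMA sched-domain level per distinct distance; computed
         * once, so a runtime change to the matrix leaves it stale.
         */
        printf("%d distance level(s):", nlevels);
        for (int k = 0; k < nlevels; k++)
                printf(" %d", levels[k]);
        printf("\n");
        return 0;
}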