From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Michael Bringmann, Nathan Fontenot, Michael Ellerman, Sasha Levin
Subject: [PATCH 4.4 043/268] powerpc/numa: Ensure nodes initialized for hotplug
Date: Mon, 28 May 2018 12:00:17 +0200
Message-Id: <20180528100206.920580418@linuxfoundation.org>
X-Mailer: git-send-email 2.17.0
In-Reply-To: 
<20180528100202.045206534@linuxfoundation.org>
References: <20180528100202.045206534@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Michael Bringmann

[ Upstream commit ea05ba7c559c8e5a5946c3a94a2a266e9a6680a6 ]

This patch fixes problems encountered at runtime in configurations that
support memory-less nodes, or that hot-add CPUs into nodes that are
memoryless during system execution after boot.  The problems of
interest include:

* Nodes known to powerpc to be memoryless at boot, but to have CPUs in
  them, are allowed to be 'possible' and 'online'.  Memory allocations
  for those nodes are taken from another node that does have memory
  unless and until memory is hot-added to the node.

* Nodes which have no resources assigned at boot, but which may still
  be referenced subsequently by affinity or associativity attributes,
  are kept in the list of 'possible' nodes for powerpc.  Hot-add of
  memory or CPUs to the system can reference these nodes and bring them
  online instead of redirecting the references to one of the set of
  nodes known to have memory at boot.

Note that this code operates in the context of CPU hotplug.  We are not
doing memory hotplug here, but rather updating the kernel's CPU
topology (i.e. arch_update_cpu_topology / numa_update_cpu_topology).
We are initializing a node that may be used by CPUs or memory before it
can be referenced as invalid by a CPU hotplug operation.  CPU hotplug
operations are protected by a range of APIs including
cpu_maps_update_begin/cpu_maps_update_done,
cpus_read/write_lock / cpus_read/write_unlock, device locks, and more.
Memory hotplug operations, including try_online_node, are protected by
mem_hotplug_begin/mem_hotplug_done, device locks, and more.  In the
case of CPUs being hot-added to a previously memoryless node, the
try_online_node operation occurs wholly within the CPU locks with no
overlap.  Using HMC hot-add/hot-remove operations, we have been able to
add and remove CPUs to any possible node without failures.  HMC
operations involve a degree of self-serialization, though.

Signed-off-by: Michael Bringmann
Reviewed-by: Nathan Fontenot
Signed-off-by: Michael Ellerman
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 arch/powerpc/mm/numa.c |   47 +++++++++++++++++++++++++++++++++++++----------
 1 file changed, 37 insertions(+), 10 deletions(-)

--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -551,7 +551,7 @@ static int numa_setup_cpu(unsigned long
 		nid = of_node_to_nid_single(cpu);
 
 out_present:
-	if (nid < 0 || !node_online(nid))
+	if (nid < 0 || !node_possible(nid))
 		nid = first_online_node;
 
 	map_cpu_to_node(lcpu, nid);
@@ -969,10 +969,8 @@ static void __init find_possible_nodes(v
 		goto out;
 
 	for (i = 0; i < numnodes; i++) {
-		if (!node_possible(i)) {
-			setup_node_data(i, 0, 0);
+		if (!node_possible(i))
 			node_set(i, node_possible_map);
-		}
 	}
 
 out:
@@ -1335,6 +1333,40 @@ static long vphn_get_associativity(unsig
 	return rc;
 }
 
+static inline int find_and_online_cpu_nid(int cpu)
+{
+	__be32 associativity[VPHN_ASSOC_BUFSIZE] = {0};
+	int new_nid;
+
+	/* Use associativity from first thread for all siblings */
+	vphn_get_associativity(cpu, associativity);
+	new_nid = associativity_to_nid(associativity);
+	if (new_nid < 0 || !node_possible(new_nid))
+		new_nid = first_online_node;
+
+	if (NODE_DATA(new_nid) == NULL) {
+#ifdef CONFIG_MEMORY_HOTPLUG
+		/*
+		 * Need to ensure that NODE_DATA is initialized for a node from
+		 * available memory (see memblock_alloc_try_nid). If unable to
+		 * init the node, then default to nearest node that has memory
+		 * installed.
+		 */
+		if (try_online_node(new_nid))
+			new_nid = first_online_node;
+#else
+		/*
+		 * Default to using the nearest node that has memory installed.
+		 * Otherwise, it would be necessary to patch the kernel MM code
+		 * to deal with more memoryless-node error conditions.
+		 */
+		new_nid = first_online_node;
+#endif
+	}
+
+	return new_nid;
+}
+
 /*
  * Update the CPU maps and sysfs entries for a single CPU when its NUMA
  * characteristics change. This function doesn't perform any locking and is
@@ -1400,7 +1432,6 @@ int arch_update_cpu_topology(void)
 {
 	unsigned int cpu, sibling, changed = 0;
 	struct topology_update_data *updates, *ud;
-	__be32 associativity[VPHN_ASSOC_BUFSIZE] = {0};
 	cpumask_t updated_cpus;
 	struct device *dev;
 	int weight, new_nid, i = 0;
@@ -1435,11 +1466,7 @@ int arch_update_cpu_topology(void)
 			continue;
 		}
 
-		/* Use associativity from first thread for all siblings */
-		vphn_get_associativity(cpu, associativity);
-		new_nid = associativity_to_nid(associativity);
-		if (new_nid < 0 || !node_online(new_nid))
-			new_nid = first_online_node;
+		new_nid = find_and_online_cpu_nid(cpu);
 
 		if (new_nid == numa_cpu_lookup_table[cpu]) {
 			cpumask_andnot(&cpu_associativity_changes_mask,