Content-Type: text/plain; charset="UTF-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
From: Ben Hutchings
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
CC: akpm@linux-foundation.org, "Nathan Fontenot", "Tyrel Datwyler", "Michael Ellerman"
Date: Thu, 07 Jun 2018 15:05:21 +0100
X-Mailer: LinuxStableQueue (scripts by bwh)
Subject: [PATCH 3.16 219/410] powerpc/numa: Invalidate numa_cpu_lookup_table on cpu remove
X-Mailing-List: linux-kernel@vger.kernel.org

3.16.57-rc1
review patch.  If anyone has any objections, please let me know.

------------------

From: Nathan Fontenot

commit 1d9a090783bef19fe8cdec878620d22f05191316 upstream.

When DLPAR removing a CPU, the unmapping of the CPU from a node in
unmap_cpu_from_node() should also invalidate the CPU's entry in the
numa_cpu_lookup_table. There is no guarantee that on a subsequent DLPAR
add of the CPU the associativity will be the same, and thus it could be
in a different node. Invalidating the entry in the numa_cpu_lookup_table
causes the associativity to be read from the device tree at the time of
the add.

The current behavior of not invalidating the CPU's entry in the
numa_cpu_lookup_table can result in scenarios where the topology layout
of CPUs in the partition does not match the device tree or the topology
reported by the HMC.

This bug looks like it was introduced in 2004 in the commit titled
"ppc64: cpu hotplug notifier for numa", which is 6b15e4e87e32 in the
linux-fullhist tree. Hence tag it for all stable releases.
Signed-off-by: Nathan Fontenot
Reviewed-by: Tyrel Datwyler
Signed-off-by: Michael Ellerman
Signed-off-by: Ben Hutchings
---
 arch/powerpc/include/asm/topology.h          | 5 +++++
 arch/powerpc/mm/numa.c                       | 5 -----
 arch/powerpc/platforms/pseries/hotplug-cpu.c | 2 ++
 3 files changed, 7 insertions(+), 5 deletions(-)

--- a/arch/powerpc/include/asm/topology.h
+++ b/arch/powerpc/include/asm/topology.h
@@ -44,6 +44,11 @@ extern void __init dump_numa_cpu_topolog
 extern int sysfs_add_device_to_node(struct device *dev, int nid);
 extern void sysfs_remove_device_from_node(struct device *dev, int nid);
 
+static inline void update_numa_cpu_lookup_table(unsigned int cpu, int node)
+{
+	numa_cpu_lookup_table[cpu] = node;
+}
+
 static inline int early_cpu_to_node(int cpu)
 {
 	int nid;
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -162,11 +162,6 @@ static void reset_numa_cpu_lookup_table(
 		numa_cpu_lookup_table[cpu] = -1;
 }
 
-static void update_numa_cpu_lookup_table(unsigned int cpu, int node)
-{
-	numa_cpu_lookup_table[cpu] = node;
-}
-
 static void map_cpu_to_node(int cpu, int node)
 {
 	update_numa_cpu_lookup_table(cpu, node);
--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
+++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
@@ -31,6 +31,7 @@
 #include <asm/vdso_datapage.h>
 #include <asm/xics.h>
 #include <asm/plpar_wrappers.h>
+#include <asm/topology.h>
 
 #include "offline_states.h"
 
@@ -328,6 +329,7 @@ static void pseries_remove_processor(str
 		BUG_ON(cpu_online(cpu));
 		set_cpu_present(cpu, false);
 		set_hard_smp_processor_id(cpu, -1);
+		update_numa_cpu_lookup_table(cpu, -1);
 		break;
 	}
 	if (cpu >= nr_cpu_ids)