From: Mel Gorman
To: Nick Piggin, Pekka Enberg, Christoph Lameter
Cc: heiko.carstens@de.ibm.com, sachinp@in.ibm.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Mel Gorman
Subject: [PATCH 1/3] slqb: Do not use DEFINE_PER_CPU for per-node data
Date: Fri, 18 Sep 2009 20:34:09 +0100
Message-Id: <1253302451-27740-2-git-send-email-mel@csn.ul.ie>
X-Mailer: git-send-email 1.6.3.3
In-Reply-To: <1253302451-27740-1-git-send-email-mel@csn.ul.ie>
References: <1253302451-27740-1-git-send-email-mel@csn.ul.ie>

SLQB used a seemingly nice hack to allocate per-node data for the
statically initialised caches. Unfortunately, due to some unknown per-cpu
optimisation, these regions are being reused by something else as the
per-node data is getting randomly scrambled. This patch fixes the problem
but it's not fully understood *why* it fixes the problem at the moment.

Signed-off-by: Mel Gorman
---
 mm/slqb.c |   16 ++++++++--------
 1 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/slqb.c b/mm/slqb.c
index 4ca85e2..4d72be2 100644
--- a/mm/slqb.c
+++ b/mm/slqb.c
@@ -1944,16 +1944,16 @@ static void init_kmem_cache_node(struct kmem_cache *s,
 static DEFINE_PER_CPU(struct kmem_cache_cpu, kmem_cache_cpus);
 #endif
 #ifdef CONFIG_NUMA
-/* XXX: really need a DEFINE_PER_NODE for per-node data, but this is better than
- * a static array */
-static DEFINE_PER_CPU(struct kmem_cache_node, kmem_cache_nodes);
+/* XXX: really need a DEFINE_PER_NODE for per-node data because a static
+ * array is wasteful */
+static struct kmem_cache_node kmem_cache_nodes[MAX_NUMNODES];
 #endif
 
 #ifdef CONFIG_SMP
 static struct kmem_cache kmem_cpu_cache;
 static DEFINE_PER_CPU(struct kmem_cache_cpu, kmem_cpu_cpus);
 #ifdef CONFIG_NUMA
-static DEFINE_PER_CPU(struct kmem_cache_node, kmem_cpu_nodes); /* XXX per-nid */
+static struct kmem_cache_node kmem_cpu_nodes[MAX_NUMNODES]; /* XXX per-nid */
 #endif
 #endif
 
@@ -1962,7 +1962,7 @@ static struct kmem_cache kmem_node_cache;
 #ifdef CONFIG_SMP
 static DEFINE_PER_CPU(struct kmem_cache_cpu, kmem_node_cpus);
 #endif
-static DEFINE_PER_CPU(struct kmem_cache_node, kmem_node_nodes); /*XXX per-nid */
+static struct kmem_cache_node kmem_node_nodes[MAX_NUMNODES]; /*XXX per-nid */
 #endif
 
 #ifdef CONFIG_SMP
@@ -2918,15 +2918,15 @@ void __init kmem_cache_init(void)
 	for_each_node_state(i, N_NORMAL_MEMORY) {
 		struct kmem_cache_node *n;
 
-		n = &per_cpu(kmem_cache_nodes, i);
+		n = &kmem_cache_nodes[i];
 		init_kmem_cache_node(&kmem_cache_cache, n);
 		kmem_cache_cache.node_slab[i] = n;
 #ifdef CONFIG_SMP
-		n = &per_cpu(kmem_cpu_nodes, i);
+		n = &kmem_cpu_nodes[i];
 		init_kmem_cache_node(&kmem_cpu_cache, n);
 		kmem_cpu_cache.node_slab[i] = n;
 #endif
-		n = &per_cpu(kmem_node_nodes, i);
+		n = &kmem_node_nodes[i];
 		init_kmem_cache_node(&kmem_node_cache, n);
 		kmem_node_cache.node_slab[i] = n;
 	}
-- 
1.6.3.3
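
[Editorial note, not part of the patch] A minimal userspace C sketch of the
shape of the change: the old scheme pressed per-cpu style storage into
per-node service, while the patch gives each node its own slot in a plain
static array. All names and sizes below (node_stats, NR_POSSIBLE_CPUS,
MAX_NUMNODES_SKETCH) are invented for the example and only stand in for the
real kernel structures and constants. The changelog says the actual failure
mode is not fully understood, so this only illustrates one plausible hazard
of the old scheme: the CPU and node index spaces need not line up.

#include <stdio.h>

#define NR_POSSIBLE_CPUS	4	/* a per-cpu area only has slots for possible CPUs */
#define MAX_NUMNODES_SKETCH	8	/* ...while node ids may range further */

struct node_stats {
	int nid;
};

/* per-cpu style storage indexed by node id (the old scheme) */
static struct node_stats percpu_style[NR_POSSIBLE_CPUS];

/* dedicated per-node array (the scheme the patch switches to) */
static struct node_stats pernode_style[MAX_NUMNODES_SKETCH];

int main(void)
{
	int nid;

	for (nid = 0; nid < MAX_NUMNODES_SKETCH; nid++) {
		/* the dedicated per-node array always has a slot for nid */
		pernode_style[nid].nid = nid;

		/*
		 * ...while the per-cpu style storage runs out of slots once
		 * node ids exceed the number of possible CPUs, and (in the
		 * kernel) the per-cpu allocator is free to lay that area out
		 * however it likes.
		 */
		if (nid < NR_POSSIBLE_CPUS)
			percpu_style[nid].nid = nid;
	}

	printf("initialised %d per-node slots, but only %d per-cpu slots exist\n",
	       MAX_NUMNODES_SKETCH, NR_POSSIBLE_CPUS);
	return 0;
}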