Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753036AbcJFGVA (ORCPT );
	Thu, 6 Oct 2016 02:21:00 -0400
Received: from mail-pf0-f193.google.com ([209.85.192.193]:32846 "EHLO
	mail-pf0-f193.google.com" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S1752187AbcJFGU6 (ORCPT );
	Thu, 6 Oct 2016 02:20:58 -0400
From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: Christoph Lameter , Pekka Enberg , David Rientjes ,
	Johannes Weiner , Vladimir Davydov , Doug Smythies ,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Joonsoo Kim , stable@vger.kernel.org
Subject: [PATCH] mm/slab: fix kmemcg cache creation delayed issue
Date: Thu, 6 Oct 2016 15:20:55 +0900
Message-Id: <1475734855-4837-1-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 1.9.1
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 1399
Lines: 40

From: Joonsoo Kim

There is a bug report that SLAB causes an extreme load average due to
over 2000 kworker threads:

  https://bugzilla.kernel.org/show_bug.cgi?id=172981

This issue is caused by the kmemcg feature, which tries to create a new
set of kmem_caches for each memcg. Recently, kmem_cache creation has been
slowed down by synchronize_sched(), and further kmem_cache creation is
also delayed since kmem_cache creation is serialized by the global
slab_mutex lock. So, the number of kworkers trying to create a kmem_cache
increases quickly.

synchronize_sched() is needed for lockless access to a node's shared
array, but it is not needed when a new kmem_cache is created, because in
that case there is no old shared array that readers could still be using.
So, this patch rules out that case.

Fixes: 801faf0db894 ("mm/slab: lockless decision to grow cache")
Cc: stable@vger.kernel.org
Reported-by: Doug Smythies
Tested-by: Doug Smythies
Signed-off-by: Joonsoo Kim
---
 mm/slab.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slab.c b/mm/slab.c
index 6508b4d..3c83c29 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -961,7 +961,7 @@ static int setup_kmem_cache_node(struct kmem_cache *cachep,
 	 * guaranteed to be valid until irq is re-enabled, because it will be
 	 * freed after synchronize_sched().
 	 */
-	if (force_change)
+	if (old_shared && force_change)
 		synchronize_sched();
 
 fail:
-- 
1.9.1
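
For readers unfamiliar with the pattern, the following minimal userspace
sketch (not taken from mm/slab.c; all names such as replace_shared and
wait_for_readers are purely illustrative) shows why the grace-period wait
is only needed when an old shared array exists to be replaced:

/*
 * Illustrative model of the fix: a grace-period wait is only required
 * when an *old* shared array is being replaced, because lockless readers
 * can only hold a reference to something that was previously published.
 * A freshly created cache has no old array, so the wait can be skipped.
 */
#include <stdio.h>
#include <stdlib.h>

struct shared_array {
	int limit;
	/* entries would follow in the real structure */
};

struct node {
	struct shared_array *shared;	/* read locklessly by fast paths */
};

/* stand-in for synchronize_sched(): wait until no reader can still be
 * using the previously published pointer */
static void wait_for_readers(void)
{
	printf("waiting for readers (grace period)\n");
}

static void replace_shared(struct node *n, struct shared_array *new_shared,
			   int force_change)
{
	struct shared_array *old_shared = n->shared;

	n->shared = new_shared;		/* publish the new array */

	/*
	 * Only wait when there was an old array that readers might still
	 * reference; on cache creation old_shared is NULL, so the
	 * expensive wait is skipped -- the essence of the one-line fix.
	 */
	if (old_shared && force_change)
		wait_for_readers();

	free(old_shared);
}

int main(void)
{
	struct node n = { .shared = NULL };

	/* cache creation: no old array, no grace period needed */
	replace_shared(&n, calloc(1, sizeof(struct shared_array)), 1);

	/* later tuning: an old array exists, so the wait is required */
	replace_shared(&n, calloc(1, sizeof(struct shared_array)), 1);

	free(n.shared);
	return 0;
}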