From: js1304@gmail.com
X-Google-Original-From: iamjoonsoo.kim@lge.com
To: Andrew Morton
Cc: Christoph Lameter, Pekka Enberg, David Rientjes,
	Jesper Dangaard Brouer, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Joonsoo Kim
Subject: [PATCH 01/11] mm/slab: hold a slab_mutex when calling __kmem_cache_shrink()
Date: Mon, 28 Mar 2016 14:26:51 +0900
Message-Id: <1459142821-20303-2-git-send-email-iamjoonsoo.kim@lge.com>
In-Reply-To: <1459142821-20303-1-git-send-email-iamjoonsoo.kim@lge.com>
References: <1459142821-20303-1-git-send-email-iamjoonsoo.kim@lge.com>
X-Mailer: git-send-email 1.9.1
X-Mailing-List: linux-kernel@vger.kernel.org

From: Joonsoo Kim

Major kmem_cache metadata in the slab subsystem is synchronized with
slab_mutex. In SLAB, if any of it is changed, the node's shared array
cache is freed and re-populated. If __kmem_cache_shrink() is called at
the same time, it will call drain_array() with n->shared without
holding the node lock, so a problem can occur.

We could fix this small theoretical race condition by holding the node
lock in drain_array(), but holding slab_mutex in kmem_cache_shrink()
looks like the more appropriate solution because a stable state makes
things less error-prone and this is not a performance-critical path.

In addition, annotate the relevant SLAB functions.

Signed-off-by: Joonsoo Kim
---
 mm/slab.c        | 2 ++
 mm/slab_common.c | 4 ++++
 2 files changed, 6 insertions(+)

diff --git a/mm/slab.c b/mm/slab.c
index a53a0f6..043606a 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -2218,6 +2218,7 @@ static void do_drain(void *arg)
 	ac->avail = 0;
 }
 
+/* Should be called with slab_mutex held to prevent the shared array from being freed */
 static void drain_cpu_caches(struct kmem_cache *cachep)
 {
 	struct kmem_cache_node *n;
@@ -3871,6 +3872,7 @@ skip_setup:
  * Drain an array if it contains any elements taking the node lock only if
  * necessary. Note that the node listlock also protects the array_cache
  * if drain_array() is used on the shared array.
+ * Should be called with slab_mutex held to prevent the shared array from being freed.
  */
 static void drain_array(struct kmem_cache *cachep, struct kmem_cache_node *n,
 			 struct array_cache *ac, int force, int node)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index a65dad7..5bed565 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -755,7 +755,11 @@ int kmem_cache_shrink(struct kmem_cache *cachep)
 	get_online_cpus();
 	get_online_mems();
 	kasan_cache_shrink(cachep);
+
+	mutex_lock(&slab_mutex);
 	ret = __kmem_cache_shrink(cachep, false);
+	mutex_unlock(&slab_mutex);
+
 	put_online_mems();
 	put_online_cpus();
 	return ret;
--
1.9.1
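
[Editor's note: for readers outside the kernel tree, the following is a
minimal, self-contained sketch of the locking pattern this patch
establishes in kmem_cache_shrink(). It is an illustration only, not the
author's code: pthread_mutex_t stands in for the kernel's slab_mutex,
and struct kmem_cache and __kmem_cache_shrink() are simplified
placeholders for the real slab internals.]

/*
 * Sketch of the pattern: serialize the shrink path against
 * metadata changes by taking slab_mutex around __kmem_cache_shrink().
 */
#include <pthread.h>
#include <stdbool.h>

struct kmem_cache;	/* opaque placeholder; the real struct lives in mm/ */

static pthread_mutex_t slab_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Placeholder for the real __kmem_cache_shrink(), which walks
 * per-node state (including n->shared) and so must not race with
 * the shared array cache being freed and re-populated. */
static int __kmem_cache_shrink(struct kmem_cache *cachep, bool deactivate)
{
	(void)cachep;
	(void)deactivate;
	return 0;
}

int kmem_cache_shrink(struct kmem_cache *cachep)
{
	int ret;

	/* Holding slab_mutex keeps kmem_cache metadata stable, so the
	 * node's shared array cannot be freed out from under us while
	 * drain_array() runs inside the shrink. */
	pthread_mutex_lock(&slab_mutex);
	ret = __kmem_cache_shrink(cachep, false);
	pthread_mutex_unlock(&slab_mutex);

	return ret;
}

The design choice matches the commit message: rather than taking the
finer-grained node lock inside drain_array(), the whole shrink is done
under the coarser slab_mutex, trading a little concurrency on a
non-critical path for a stable, easier-to-reason-about state.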