Date: Tue, 12 Apr 2016 11:38:39 -0500 (CDT)
From: Christoph Lameter
To: js1304@gmail.com
cc: Andrew Morton, Pekka Enberg, David Rientjes, Jesper Dangaard Brouer, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Joonsoo Kim
Subject: Re: [PATCH v2 01/11] mm/slab: fix the theoretical race by holding proper lock
In-Reply-To: <1460436666-20462-2-git-send-email-iamjoonsoo.kim@lge.com>
References: <1460436666-20462-1-git-send-email-iamjoonsoo.kim@lge.com> <1460436666-20462-2-git-send-email-iamjoonsoo.kim@lge.com>

On Tue, 12 Apr 2016, js1304@gmail.com wrote:

> @@ -2222,6 +2241,7 @@ static void drain_cpu_caches(struct kmem_cache *cachep)
>  {
>  	struct kmem_cache_node *n;
>  	int node;
> +	LIST_HEAD(list);
>
>  	on_each_cpu(do_drain, cachep, 1);
>  	check_irq_on();
> @@ -2229,8 +2249,13 @@ static void drain_cpu_caches(struct kmem_cache *cachep)
>  		if (n->alien)
>  			drain_alien_cache(cachep, n->alien);
>
> -	for_each_kmem_cache_node(cachep, node, n)
> -		drain_array(cachep, n, n->shared, 1, node);
> +	for_each_kmem_cache_node(cachep, node, n) {
> +		spin_lock_irq(&n->list_lock);
> +		drain_array_locked(cachep, n->shared, node, true, &list);
> +		spin_unlock_irq(&n->list_lock);
> +
> +		slabs_destroy(cachep, &list);

Can the slabs_destroy() call be moved outside of the loop? It might be
faster that way.
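
[Editor's note: for illustration, the batched variant being asked about
might look roughly like the sketch below. This is an untested sketch,
not the posted patch; it assumes slabs gathered from different nodes can
safely accumulate on one local list until slabs_destroy() runs after the
loop.]

	static void drain_cpu_caches(struct kmem_cache *cachep)
	{
		struct kmem_cache_node *n;
		int node;
		LIST_HEAD(list);

		on_each_cpu(do_drain, cachep, 1);
		check_irq_on();
		for_each_kmem_cache_node(cachep, node, n)
			if (n->alien)
				drain_alien_cache(cachep, n->alien);

		/* Collect the drained slabs from every node on one local list. */
		for_each_kmem_cache_node(cachep, node, n) {
			spin_lock_irq(&n->list_lock);
			drain_array_locked(cachep, n->shared, node, true, &list);
			spin_unlock_irq(&n->list_lock);
		}

		/* One batched call instead of one call per node. */
		slabs_destroy(cachep, &list);
	}

[Whether this is actually safe depends on whether slabs_destroy() can
handle pages belonging to several nodes on a single list; that is the
question raised for the patch author.]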