Date: Mon, 28 Mar 2016 19:50:36 -0500 (CDT)
From: Christoph Lameter
To: js1304@gmail.com
Cc: Andrew Morton, Pekka Enberg, David Rientjes, Jesper Dangaard Brouer,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Joonsoo Kim
Subject: Re: [PATCH 01/11] mm/slab: hold a slab_mutex when calling __kmem_cache_shrink()
In-Reply-To: <1459142821-20303-2-git-send-email-iamjoonsoo.kim@lge.com>
References: <1459142821-20303-1-git-send-email-iamjoonsoo.kim@lge.com> <1459142821-20303-2-git-send-email-iamjoonsoo.kim@lge.com>

On Mon, 28 Mar 2016, js1304@gmail.com wrote:

> Major kmem_cache metadata in the slab subsystem is synchronized with
> the slab_mutex. In SLAB, when any of it changes, the node's shared
> array cache is freed and re-populated. If __kmem_cache_shrink() is
> called at the same time, it calls drain_array() with n->shared without
> holding the node lock, so a race can occur.
>
> We could fix this small, theoretical race condition by taking the node
> lock in drain_array(), but holding the slab_mutex in
> kmem_cache_shrink() looks like the more appropriate solution: a stable
> state makes things less error-prone, and this is not a
> performance-critical path.

Ummm.. the mutex is taken in common code, so this will also affect SLUB.
The patch needs to account for that. Do we want to force all allocators
to run shrinking only while holding the lock? SLUB does not need to hold
the mutex. And frankly, the mutex is for reconfiguration of metadata,
which is *not* occurring here. A shrink operation does not do that.

Can we figure out a SLAB-specific way of handling synchronization in the
strange free/realloc cycle? Taking the node lock seems to be the
appropriate level of synchronization, since the concern is with the
contents of a shared cache at that level. There is no change of metadata
that would require the mutex.
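
For reference, a minimal sketch of what the patch does: it wraps
__kmem_cache_shrink() in the slab_mutex in common code. Names follow the
4.x-era mm/slab_common.c; treat this as illustrative, not the exact diff.

/*
 * Sketch only: kmem_cache_shrink() with the patch's mutex_lock()
 * added around the allocator-specific shrink.  Because this lives
 * in common code, SLAB and SLUB both pay for the serialization.
 */
int kmem_cache_shrink(struct kmem_cache *cachep)
{
	int ret;

	get_online_cpus();
	get_online_mems();
	mutex_lock(&slab_mutex);		/* added by the patch */
	ret = __kmem_cache_shrink(cachep, false);
	mutex_unlock(&slab_mutex);
	put_online_mems();
	put_online_cpus();

	return ret;
}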
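
The node-lock alternative suggested above would instead look roughly like
this inside SLAB's drain path: snapshot n->shared while holding
n->list_lock. Field names (list_lock, shared, avail) match the era's
struct kmem_cache_node and struct array_cache, but the helper name
drain_shared_array() is hypothetical and the drain logic is elided.

/*
 * Sketch only: read n->shared under the node lock so a concurrent
 * free/realloc of the shared array cache cannot be observed
 * half-way.  No slab_mutex is needed because no cache metadata
 * is reconfigured here.
 */
static void drain_shared_array(struct kmem_cache *cachep,
			       struct kmem_cache_node *n, int node)
{
	struct array_cache *ac;

	spin_lock_irq(&n->list_lock);
	ac = n->shared;		/* stable while n->list_lock is held */
	if (ac && ac->avail) {
		/* ... return ac->avail cached objects to the slab lists ... */
		ac->avail = 0;
	}
	spin_unlock_irq(&n->list_lock);
}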