Date: Mon, 3 Oct 2016 15:35:06 +0300
From: Vladimir Davydov
To: Michal Hocko
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Christoph Lameter, David Rientjes, Johannes Weiner, Joonsoo Kim, Pekka Enberg
Subject: Re: [PATCH 1/2] mm: memcontrol: use special workqueue for creating per-memcg caches
Message-ID: <20161003123505.GA1862@esperanza>
In-Reply-To: <20161003120641.GC26768@dhcp22.suse.cz>

On Mon, Oct 03, 2016 at 02:06:42PM +0200, Michal Hocko wrote:
> On Sat 01-10-16 16:56:47, Vladimir Davydov wrote:
> > Creating a lot of cgroups at the same time might stall all worker
> > threads with kmem cache creation work items, because kmem cache
> > creation is done with the slab_mutex held. To prevent that from
> > happening, let's use a special workqueue for kmem cache creation
> > with the maximum number of in-flight work items equal to 1.
> >
> > Link: https://bugzilla.kernel.org/show_bug.cgi?id=172981
>
> This looks like a regression, but I am not really sure I understand
> what has caused it. We have had the WQ-based cache creation since
> kmem was introduced, more or less. So is it 801faf0db894 ("mm/slab:
> lockless decision to grow cache"), which was pointed to by bisection,
> that changed the timing, i.e. relaxed the cache creation to the point
> that would allow this runaway?

It is, in the case of SLAB. For SLUB the issue was caused by commit
81ae6d03952c ("mm/slub.c: replace kick_all_cpus_sync() with
synchronize_sched() in kmem_cache_shrink()").

> This would be really useful for stable backport consideration.
>
> Also, if I understand the fix correctly, we now limit the number of
> workers to 1 thread. Is this really what we want? Wouldn't it be
> possible for a few memcgs to starve others from having their caches
> created? What would be the result, missed charges?

Kmem caches are created in FIFO order, i.e. if one memcg called
kmem_cache_alloc on a non-existent cache before another, it will be
served first. Since the number of caches that can be created by a
single memcg is obviously limited, I don't see any possibility of
starvation.

Actually, this patch doesn't introduce any functional changes
regarding the order in which kmem caches are created, as the work
function holds the global slab_mutex during its whole runtime anyway.
We only avoid creating a thread per work item by making the queue
single-threaded.
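
Just to illustrate what the patch boils down to, here is a minimal
sketch of the idea (the identifiers below are made up for
illustration and are not taken from the patch itself): allocate a
workqueue with max_active == 1 and queue the cache-creation works on
it instead of the system workqueue.

#include <linux/init.h>
#include <linux/workqueue.h>

/* Illustrative names only, not the ones used in the patch. */
static struct workqueue_struct *kmem_cache_create_wq;

static int __init kmem_cache_create_wq_init(void)
{
	/*
	 * max_active == 1: at most one cache-creation work item runs
	 * at a time, so a burst of cgroup creations cannot tie up all
	 * worker threads waiting on the slab_mutex.
	 */
	kmem_cache_create_wq = alloc_workqueue("kmem_cache_create", 0, 1);
	if (!kmem_cache_create_wq)
		return -ENOMEM;
	return 0;
}

/* Cache-creation works are then queued here instead of system_wq. */
static void schedule_kmem_cache_create(struct work_struct *work)
{
	queue_work(kmem_cache_create_wq, work);
}

Since such a workqueue serializes its work items anyway, the
serialization already imposed by the slab_mutex is preserved while
the number of worker threads consumed stays bounded.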