Date: Sat, 24 Mar 2018 22:25:21 +0300
From: Vladimir Davydov
To: Kirill Tkhai
Cc: viro@zeniv.linux.org.uk, hannes@cmpxchg.org, mhocko@kernel.org,
 akpm@linux-foundation.org, tglx@linutronix.de, pombredanne@nexb.com,
 stummala@codeaurora.org, gregkh@linuxfoundation.org, sfr@canb.auug.org.au,
 guro@fb.com, mka@chromium.org, penguin-kernel@I-love.SAKURA.ne.jp,
 chris@chris-wilson.co.uk, longman@redhat.com, minchan@kernel.org,
 hillf.zj@alibaba-inc.com, ying.huang@intel.com, mgorman@techsingularity.net,
 shakeelb@google.com, jbacik@fb.com, linux@roeck-us.net,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org, willy@infradead.org
Subject: Re: [PATCH 03/10] mm: Assign memcg-aware shrinkers bitmap to memcg
Message-ID: <20180324192521.my7akysvj7wtudan@esperanza>
References: <152163840790.21546.980703278415599202.stgit@localhost.localdomain>
 <152163850081.21546.6969747084834474733.stgit@localhost.localdomain>
In-Reply-To: <152163850081.21546.6969747084834474733.stgit@localhost.localdomain>
On Wed, Mar 21, 2018 at 04:21:40PM +0300, Kirill Tkhai wrote:
> Imagine a big node with many cpus, memory cgroups and containers.
> Let we have 200 containers, every container has 10 mounts,
> and 10 cgroups. All container tasks don't touch foreign
> containers mounts. If there is intensive pages write,
> and global reclaim happens, a writing task has to iterate
> over all memcgs to shrink slab, before it's able to go
> to shrink_page_list().
>
> Iteration over all the memcg slabs is very expensive:
> the task has to visit 200 * 10 = 2000 shrinkers
> for every memcg, and since there are 2000 memcgs,
> the total calls are 2000 * 2000 = 4000000.
>
> So, the shrinker makes 4 million do_shrink_slab() calls
> just to try to isolate SWAP_CLUSTER_MAX pages in one
> of the actively writing memcg via shrink_page_list().
> I've observed a node spending almost 100% in kernel,
> making useless iteration over already shrinked slab.
>
> This patch adds bitmap of memcg-aware shrinkers to memcg.
> The size of the bitmap depends on bitmap_nr_ids, and during
> memcg life it's maintained to be enough to fit bitmap_nr_ids
> shrinkers. Every bit in the map is related to corresponding
> shrinker id.
>
> Next patches will maintain set bit only for really charged
> memcg. This will allow shrink_slab() to increase its
> performance in significant way. See the last patch for
> the numbers.
>
> Signed-off-by: Kirill Tkhai
> ---
>  include/linux/memcontrol.h |   20 ++++++++
>  mm/memcontrol.c            |    5 ++
>  mm/vmscan.c                |  117 ++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 142 insertions(+)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 4525b4404a9e..ad88a9697fb9 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -151,6 +151,11 @@ struct mem_cgroup_thresholds {
> 	struct mem_cgroup_threshold_ary *spare;
> };
>
> +struct shrinkers_map {

IMO better call it mem_cgroup_shrinker_map.
> +	struct rcu_head rcu;
> +	unsigned long *map[0];
> +};
> +
> enum memcg_kmem_state {
> 	KMEM_NONE,
> 	KMEM_ALLOCATED,
> @@ -182,6 +187,9 @@ struct mem_cgroup {
> 	unsigned long low;
> 	unsigned long high;
>
> +	/* Bitmap of shrinker ids suitable to call for this memcg */
> +	struct shrinkers_map __rcu *shrinkers_map;
> +

We keep all per-node data in mem_cgroup_per_node struct. I think this
bitmap should be defined there as well.

> 	/* Range enforcement for interrupt charges */
> 	struct work_struct high_work;
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 3801ac1fcfbc..2324577c62dc 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -4476,6 +4476,9 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
> {
> 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
>
> +	if (alloc_shrinker_maps(memcg))
> +		return -ENOMEM;
> +

This needs a comment explaining why you can't allocate the map in
css_alloc, which seems to be a better place for it.

> 	/* Online state pins memcg ID, memcg ID pins CSS */
> 	atomic_set(&memcg->id.ref, 1);
> 	css_get(css);
> @@ -4487,6 +4490,8 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
> 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
> 	struct mem_cgroup_event *event, *tmp;
>
> +	free_shrinker_maps(memcg);
> +

AFAIU this can race with shrink_slab accessing the map, resulting in
use-after-free. IMO it would be safer to free the bitmap from css_free.

> 	/*
> 	 * Unregister events and notify userspace.
> 	 * Notify userspace about cgroup removing only after rmdir of cgroup
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 97ce4f342fab..9d1df5d90eca 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -165,6 +165,10 @@ static DECLARE_RWSEM(bitmap_rwsem);
> static int bitmap_id_start;
> static int bitmap_nr_ids;
> static struct shrinker **mcg_shrinkers;
> +struct shrinkers_map *__rcu root_shrinkers_map;

Why do you need root_shrinkers_map?
AFAIR the root memory cgroup doesn't have kernel memory accounting
enabled.

> +
> +#define SHRINKERS_MAP(memcg)	\
> +	(memcg == root_mem_cgroup || !memcg ? root_shrinkers_map : memcg->shrinkers_map)
>
> static int expand_shrinkers_array(int old_nr, int nr)
> {
> @@ -188,6 +192,116 @@ static int expand_shrinkers_array(int old_nr, int nr)
> 	return 0;
> }
>
> +static void kvfree_map_rcu(struct rcu_head *head)
> +{
> +static int memcg_expand_maps(struct mem_cgroup *memcg, int size, int old_size)
> +{
> +int alloc_shrinker_maps(struct mem_cgroup *memcg)
> +{
> +void free_shrinker_maps(struct mem_cgroup *memcg)
> +{
> +static int expand_shrinker_maps(int old_id, int id)
> +{

All these functions should be defined in memcontrol.c. The only public
function should be mem_cgroup_grow_shrinker_map (I'm not insisting on
the name), which reallocates the shrinker bitmap for each cgroup so
that it can accommodate the new shrinker id. To do that, you'll
probably need to keep track of the bitmap capacity in memcontrol.c.