Date: Sun, 20 May 2018 11:00:03 +0300
From: Vladimir Davydov
To: Kirill Tkhai
Cc: akpm@linux-foundation.org, shakeelb@google.com, viro@zeniv.linux.org.uk,
    hannes@cmpxchg.org, mhocko@kernel.org, tglx@linutronix.de,
    pombredanne@nexb.com, stummala@codeaurora.org, gregkh@linuxfoundation.org,
    sfr@canb.auug.org.au, guro@fb.com, mka@chromium.org,
    penguin-kernel@I-love.SAKURA.ne.jp, chris@chris-wilson.co.uk,
    longman@redhat.com, minchan@kernel.org, ying.huang@intel.com,
    mgorman@techsingularity.net, jbacik@fb.com, linux@roeck-us.net,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, willy@infradead.org,
    lirongqing@baidu.com, aryabinin@virtuozzo.com
Subject: Re: [PATCH v6 14/17] mm: Iterate only over charged shrinkers during memcg shrink_slab()
Message-ID: <20180520080003.gfygtb6rloqpjaol@esperanza>
References: <152663268383.5308.8660992135988724014.stgit@localhost.localdomain>
 <152663304128.5308.12840831728812876902.stgit@localhost.localdomain>
In-Reply-To: <152663304128.5308.12840831728812876902.stgit@localhost.localdomain>

On Fri, May 18, 2018 at 11:44:01AM +0300, Kirill Tkhai wrote:
> Using the preparations made in previous patches, in case of memcg
> shrink, we may avoid shrinkers, which are not set in memcg's shrinkers
> bitmap. To do that, we separate iterations over memcg-aware and
> !memcg-aware shrinkers, and memcg-aware shrinkers are chosen
> via for_each_set_bit() from the bitmap. In case of big nodes,
> having many isolated environments, this gives significant
> performance growth. See next patches for the details.
>
> Note, that the patch does not respect to empty memcg shrinkers,
> since we never clear the bitmap bits after we set it once.
> Their shrinkers will be called again, with no shrinked objects
> as result. This functionality is provided by next patches.
>
> Signed-off-by: Kirill Tkhai
> ---
>  mm/vmscan.c |   87 +++++++++++++++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 78 insertions(+), 9 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index f09ea20d7270..2fbf3b476601 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -373,6 +373,20 @@ int prealloc_shrinker(struct shrinker *shrinker)
>  			goto free_deferred;
>  	}
>
> +	/*
> +	 * There is a window between prealloc_shrinker()
> +	 * and register_shrinker_prepared(). We don't want
> +	 * to clear bit of a shrinker in such the state
> +	 * in shrink_slab_memcg(), since this will impose
> +	 * restrictions on a code registering a shrinker
> +	 * (they would have to guarantee, their LRU lists
> +	 * are empty till shrinker is completely registered).
> +	 * So, we differ the situation, when 1)a shrinker
> +	 * is semi-registered (id is assigned, but it has
> +	 * not yet linked to shrinker_list) and 2)shrinker
> +	 * is not registered (id is not assigned).
> +	 */
> +	INIT_LIST_HEAD(&shrinker->list);
>  	return 0;
>
>  free_deferred:
> @@ -544,6 +558,67 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  	return freed;
>  }
>
> +#ifdef CONFIG_MEMCG_KMEM
> +static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
> +				       struct mem_cgroup *memcg, int priority)
> +{
> +	struct memcg_shrinker_map *map;
> +	unsigned long freed = 0;
> +	int ret, i;
> +
> +	if (!memcg_kmem_enabled() || !mem_cgroup_online(memcg))
> +		return 0;
> +
> +	if (!down_read_trylock(&shrinker_rwsem))
> +		return 0;
> +
> +	/*
> +	 * 1) Caller passes only alive memcg, so map can't be NULL.
> +	 * 2) shrinker_rwsem protects from maps expanding.
> +	 */
> +	map = rcu_dereference_protected(memcg->nodeinfo[nid]->shrinker_map,
> +					true);
> +	BUG_ON(!map);
> +
> +	for_each_set_bit(i, map->map, memcg_shrinker_nr_max) {
> +		struct shrink_control sc = {
> +			.gfp_mask = gfp_mask,
> +			.nid = nid,
> +			.memcg = memcg,
> +		};
> +		struct shrinker *shrinker;
> +
> +		shrinker = idr_find(&shrinker_idr, i);
> +		if (unlikely(!shrinker)) {

Nit: I don't think 'unlikely' is required here as this is definitely not
a hot path.

> +			clear_bit(i, map->map);
> +			continue;
> +		}
> +		BUG_ON(!(shrinker->flags & SHRINKER_MEMCG_AWARE));
> +
> +		/* See comment in prealloc_shrinker() */
> +		if (unlikely(list_empty(&shrinker->list)))

Ditto.

> +			continue;
> +
> +		ret = do_shrink_slab(&sc, shrinker, priority);
> +		freed += ret;
> +
> +		if (rwsem_is_contended(&shrinker_rwsem)) {
> +			freed = freed ? : 1;
> +			break;
> +		}
> +	}
> +
> +	up_read(&shrinker_rwsem);
> +	return freed;
> +}
> +#else /* CONFIG_MEMCG_KMEM */
> +static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
> +				       struct mem_cgroup *memcg, int priority)
> +{
> +	return 0;
> +}
> +#endif /* CONFIG_MEMCG_KMEM */
> +
>  /**
>   * shrink_slab - shrink slab caches
>   * @gfp_mask: allocation context
> @@ -573,8 +648,8 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>  	struct shrinker *shrinker;
>  	unsigned long freed = 0;
>
> -	if (memcg && (!memcg_kmem_enabled() || !mem_cgroup_online(memcg)))
> -		return 0;
> +	if (memcg && !mem_cgroup_is_root(memcg))
> +		return shrink_slab_memcg(gfp_mask, nid, memcg, priority);
>
>  	if (!down_read_trylock(&shrinker_rwsem))
>  		goto out;
> @@ -586,13 +661,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>  			.memcg = memcg,
>  		};
>
> -		/*
> -		 * If kernel memory accounting is disabled, we ignore
> -		 * SHRINKER_MEMCG_AWARE flag and call all shrinkers
> -		 * passing NULL for memcg.
> -		 */
> -		if (memcg_kmem_enabled() &&
> -		    !!memcg != !!(shrinker->flags & SHRINKER_MEMCG_AWARE))
> +		if (!!memcg != !!(shrinker->flags & SHRINKER_MEMCG_AWARE))
>  			continue;
>
>  		if (!(shrinker->flags & SHRINKER_NUMA_AWARE))
>
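
As a side note for anyone skimming the series without the earlier patches at
hand, the gain in this patch comes from the iteration pattern itself: walk
only the set bits of the per-memcg bitmap and resolve each bit index to a
shrinker through the id lookup, instead of scanning the whole shrinker_list.
Below is a minimal user-space sketch of that pattern; the names in it
(fake_shrinker, table, scan_charged) are made up for illustration and are not
kernel APIs, and the bit clearing only touches a local copy of the bitmap
where the kernel clears the shared map.

#include <stdio.h>

#define NR_MAX 64	/* stands in for memcg_shrinker_nr_max */

struct fake_shrinker {
	const char *name;
	unsigned long (*scan)(void);
};

static unsigned long scan_dentries(void) { return 10; }
static unsigned long scan_inodes(void)   { return 20; }

/* stands in for the id -> shrinker lookup (shrinker_idr in the patch) */
static struct fake_shrinker *table[NR_MAX];

static unsigned long scan_charged(unsigned long bitmap)
{
	unsigned long freed = 0;
	int i;

	/* analogue of for_each_set_bit(i, map->map, memcg_shrinker_nr_max) */
	for (i = 0; i < NR_MAX; i++) {
		if (!(bitmap & (1UL << i)))
			continue;		/* bit clear: never charged, skip */
		if (!table[i]) {
			bitmap &= ~(1UL << i);	/* stale bit: clear local copy */
			continue;
		}
		freed += table[i]->scan();	/* analogue of do_shrink_slab() */
	}
	return freed;
}

int main(void)
{
	struct fake_shrinker d  = { "dentry", scan_dentries };
	struct fake_shrinker in = { "inode",  scan_inodes  };

	table[3] = &d;
	table[7] = &in;

	/* only bits 3 and 7 are set, so only two callbacks run,
	 * regardless of how many slots NR_MAX allows */
	printf("freed %lu objects\n", scan_charged((1UL << 3) | (1UL << 7)));
	return 0;
}

The practical consequence is that the cost of a memcg shrink pass scales with
the number of shrinkers that ever charged objects to that memcg rather than
with the total number of registered shrinkers.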