Date: Sun, 20 May 2018 10:55:58 +0300
From: Vladimir Davydov
To: Kirill Tkhai
Cc: akpm@linux-foundation.org, shakeelb@google.com, viro@zeniv.linux.org.uk,
	hannes@cmpxchg.org, mhocko@kernel.org, tglx@linutronix.de,
	pombredanne@nexb.com, stummala@codeaurora.org, gregkh@linuxfoundation.org,
	sfr@canb.auug.org.au, guro@fb.com, mka@chromium.org,
	penguin-kernel@I-love.SAKURA.ne.jp, chris@chris-wilson.co.uk,
	longman@redhat.com, minchan@kernel.org, ying.huang@intel.com,
	mgorman@techsingularity.net, jbacik@fb.com, linux@roeck-us.net,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, willy@infradead.org,
	lirongqing@baidu.com, aryabinin@virtuozzo.com
Subject: Re: [PATCH v6 12/17] mm: Set bit in memcg shrinker bitmap on first
	list_lru item apearance
Message-ID: <20180520075558.6ls4yzrkou63orkb@esperanza>
References: <152663268383.5308.8660992135988724014.stgit@localhost.localdomain>
	<152663302275.5308.7476660277265020067.stgit@localhost.localdomain>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <152663302275.5308.7476660277265020067.stgit@localhost.localdomain>
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, May 18, 2018 at 11:43:42AM +0300, Kirill Tkhai wrote:
> Introduce set_shrinker_bit() function to set shrinker-related
> bit in memcg shrinker bitmap, and set the bit after the first
> item is added and in case of reparenting destroyed memcg's items.
>
> This will allow the next patch to call shrinkers only in case
> they have charged objects at the moment, and to improve
> shrink_slab() performance.
>
> Signed-off-by: Kirill Tkhai
> ---
>  include/linux/memcontrol.h |   14 ++++++++++++++
>  mm/list_lru.c              |   22 ++++++++++++++++++++--
>  2 files changed, 34 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index e51c6e953d7a..7ae1b94becf3 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -1275,6 +1275,18 @@ static inline int memcg_cache_id(struct mem_cgroup *memcg)
>
>  extern int memcg_expand_shrinker_maps(int new_id);
>
> +static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
> +					  int nid, int shrinker_id)
> +{
> +	if (shrinker_id >= 0 && memcg && memcg != root_mem_cgroup) {

Nit: I'd remove these checks from this function and require the caller
to check that shrinker_id >= 0 and that memcg is neither NULL nor
root_mem_cgroup. See below how the call sites would look then.
> +		struct memcg_shrinker_map *map;
> +
> +		rcu_read_lock();
> +		map = rcu_dereference(memcg->nodeinfo[nid]->shrinker_map);
> +		set_bit(shrinker_id, map->map);
> +		rcu_read_unlock();
> +	}
> +}
>  #else
>  #define for_each_memcg_cache_index(_idx)	\
>  	for (; NULL; )
> @@ -1297,6 +1309,8 @@ static inline void memcg_put_cache_ids(void)
>  {
>  }
>
> +static inline void memcg_set_shrinker_bit(struct mem_cgroup *memcg,
> +					  int nid, int shrinker_id) { }
>  #endif /* CONFIG_MEMCG_KMEM */
>
>  #endif /* _LINUX_MEMCONTROL_H */
> diff --git a/mm/list_lru.c b/mm/list_lru.c
> index cab8fad7f7e2..7df71ab0de1c 100644
> --- a/mm/list_lru.c
> +++ b/mm/list_lru.c
> @@ -31,6 +31,11 @@ static void list_lru_unregister(struct list_lru *lru)
>  	mutex_unlock(&list_lrus_mutex);
>  }
>
> +static int lru_shrinker_id(struct list_lru *lru)
> +{
> +	return lru->shrinker_id;
> +}
> +
>  static inline bool list_lru_memcg_aware(struct list_lru *lru)
>  {
>  	/*
> @@ -94,6 +99,11 @@ static void list_lru_unregister(struct list_lru *lru)
>  {
>  }
>
> +static int lru_shrinker_id(struct list_lru *lru)
> +{
> +	return -1;
> +}
> +
>  static inline bool list_lru_memcg_aware(struct list_lru *lru)
>  {
>  	return false;
> @@ -119,13 +129,17 @@ bool list_lru_add(struct list_lru *lru, struct list_head *item)
>  {
>  	int nid = page_to_nid(virt_to_page(item));
>  	struct list_lru_node *nlru = &lru->node[nid];
> +	struct mem_cgroup *memcg;
>  	struct list_lru_one *l;
>
>  	spin_lock(&nlru->lock);
>  	if (list_empty(item)) {
> -		l = list_lru_from_kmem(nlru, item, NULL);
> +		l = list_lru_from_kmem(nlru, item, &memcg);
>  		list_add_tail(item, &l->list);
> -		l->nr_items++;
> +		/* Set shrinker bit if the first element was added */
> +		if (!l->nr_items++)
> +			memcg_set_shrinker_bit(memcg, nid,
> +					       lru_shrinker_id(lru));

This would turn into

		if (!l->nr_items++ && memcg)
			memcg_set_shrinker_bit(memcg, nid,
					       lru_shrinker_id(lru));

Note, you don't need to check that lru_shrinker_id(lru) is >= 0 here
as the fact that memcg != NULL guarantees that.
Also, memcg can't be root_mem_cgroup here as kmem objects allocated
for the root cgroup go unaccounted.

>  		nlru->nr_items++;
>  		spin_unlock(&nlru->lock);
>  		return true;
> @@ -520,6 +534,7 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid,
>  	struct list_lru_node *nlru = &lru->node[nid];
>  	int dst_idx = dst_memcg->kmemcg_id;
>  	struct list_lru_one *src, *dst;
> +	bool set;
>
>  	/*
>  	 * Since list_lru_{add,del} may be called under an IRQ-safe lock,
> @@ -531,7 +546,10 @@ static void memcg_drain_list_lru_node(struct list_lru *lru, int nid,
>  	dst = list_lru_from_memcg_idx(nlru, dst_idx);
>
>  	list_splice_init(&src->list, &dst->list);
> +	set = (!dst->nr_items && src->nr_items);
>  	dst->nr_items += src->nr_items;
> +	if (set)
> +		memcg_set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));

This would turn into

	if (set && dst_idx >= 0)
		memcg_set_shrinker_bit(dst_memcg, nid, lru_shrinker_id(lru));

Again, the shrinker is guaranteed to be memcg aware in this function
and dst_memcg != NULL. IMHO such a change would make the code a bit
more straightforward.

>  	src->nr_items = 0;
>
>  	spin_unlock_irq(&nlru->lock);
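Both call sites hinge on the same invariant: the bitmap is touched only on
an empty -> non-empty transition, so steady-state adds and drains stay
cheap. Here is a standalone userspace model of that logic (plain C, no
kernel APIs; the names lru_add, lru_drain, shrinker_map, and lru_one are
made up for illustration and are not the kernel's):

```c
/* Userspace sketch of "set the shrinker bit only on the first item".
 * The bit-twiddling stands in for set_bit() on the memcg shrinker map. */

#define LONG_BITS (8 * (int)sizeof(unsigned long))

struct shrinker_map { unsigned long map[4]; };
struct lru_one { long nr_items; };

static void set_shrinker_bit(struct shrinker_map *map, int id)
{
	map->map[id / LONG_BITS] |= 1UL << (id % LONG_BITS);
}

static int test_shrinker_bit(const struct shrinker_map *map, int id)
{
	return !!(map->map[id / LONG_BITS] & (1UL << (id % LONG_BITS)));
}

/* Models list_lru_add(): set the bit only on the 0 -> 1 transition
 * of nr_items; repeated adds never touch the bitmap again. */
static void lru_add(struct lru_one *l, struct shrinker_map *map, int id)
{
	if (!l->nr_items++)
		set_shrinker_bit(map, id);
}

/* Models memcg_drain_list_lru_node(): on reparenting, set the
 * destination's bit only if it was empty and the source was not.
 * Returns 1 if the bit was set, 0 otherwise. */
static int lru_drain(struct lru_one *src, struct lru_one *dst,
		     struct shrinker_map *dst_map, int id)
{
	int set = (!dst->nr_items && src->nr_items);

	dst->nr_items += src->nr_items;
	src->nr_items = 0;
	if (set)
		set_shrinker_bit(dst_map, id);
	return set;
}
```

The model omits the locking and RCU that the real code needs (nlru->lock
around the add, rcu_read_lock() around the map dereference); it only
demonstrates the transition checks being discussed.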