Date: Tue, 15 Dec 2020 15:45:16 +0100
From: Johannes Weiner <hannes@cmpxchg.org>
To: Dave Chinner <david@fromorbit.com>
Cc: Yang Shi <shy828301@gmail.com>, guro@fb.com, ktkhai@virtuozzo.com,
	shakeelb@google.com, mhocko@suse.com, akpm@linux-foundation.org,
	linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [v2 PATCH 5/9] mm: memcontrol: add per memcg shrinker nr_deferred
Message-ID: <20201215144516.GE379720@cmpxchg.org>
References: <20201214223722.232537-1-shy828301@gmail.com>
 <20201214223722.232537-6-shy828301@gmail.com>
 <20201215022233.GL3913616@dread.disaster.area>
In-Reply-To: <20201215022233.GL3913616@dread.disaster.area>

On Tue, Dec 15, 2020 at 01:22:33PM +1100, Dave Chinner wrote:
> On Mon, Dec 14, 2020 at 02:37:18PM -0800, Yang Shi wrote:
> > Currently the number of deferred objects is per shrinker, but some
> > slabs, for example the vfs inode/dentry cache, are per memcg. This
> > results in poor isolation among memcgs.
> >
> > Deferred objects are typically generated by __GFP_NOFS allocations.
> > One memcg with excessive __GFP_NOFS allocations may blow up its
> > deferred objects, and other, innocent memcgs may then suffer
> > over-shrinking, excessive reclaim latency, etc.
> >
> > For example, say two workloads run in memcgA and memcgB respectively,
> > and the workload in B is vfs-heavy. If the workload in A generates
> > excessive deferred objects, B's vfs cache might be hit heavily (half
> > of the caches dropped) by B's limit reclaim or by global reclaim.
> >
> > We observed this in our production environment, which was running a
> > vfs-heavy workload, as shown in the tracing log below:
> >
> > <...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
> > cache items 246404277 delta 31345 total_scan 123202138
> > <...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602
> > last shrinker return val 123186855
> >
> > The vfs cache to page cache ratio was 10:1 on this machine, and half
> > of the caches were dropped. This also caused a significant amount of
> > page cache to be dropped due to inode eviction.
> >
> > Making nr_deferred per memcg for memcg-aware shrinkers would solve
> > the unfairness and bring better isolation.
> >
> > When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the
> > shrinker's own nr_deferred is used. Non-memcg-aware shrinkers use the
> > shrinker's nr_deferred all the time.
> >
> > Signed-off-by: Yang Shi <shy828301@gmail.com>
> > ---
> >  include/linux/memcontrol.h |   9 +++
> >  mm/memcontrol.c            | 110 ++++++++++++++++++++++++++++++++++++-
> >  mm/vmscan.c                |   4 ++
> >  3 files changed, 120 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 922a7f600465..1b343b268359 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -92,6 +92,13 @@ struct lruvec_stat {
> >  	long count[NR_VM_NODE_STAT_ITEMS];
> >  };
> >
> > +
> > +/* Shrinker::id indexed nr_deferred of memcg-aware shrinkers. */
> > +struct memcg_shrinker_deferred {
> > +	struct rcu_head rcu;
> > +	atomic_long_t nr_deferred[];
> > +};
>
> So you're effectively copying and pasting the memcg_shrinker_map
> infrastructure and doubling the number of allocations/frees required
> to set up/tear down a memcg? Why not add it to struct
> memcg_shrinker_map like this:
>
> 	struct memcg_shrinker_map {
> 		struct rcu_head rcu;
> 		unsigned long *map;
> 		atomic_long_t *nr_deferred;
> 	};
>
> And when you dynamically allocate the structure, set the map and
> nr_deferred pointers to the correct offsets in the allocated range.
>
> Then this patch really only changes the size of the chunk being
> allocated, sets up the pointers, and copies the relevant data from
> the old structure to the new one.

Fully agreed.
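To illustrate the single-allocation layout: a rough sketch in
kernel-style C of how the combined map/nr_deferred chunk could be
carved up (the helper name and exact layout here are assumptions for
illustration, not code from the patch):

	static struct memcg_shrinker_map *alloc_shrinker_map(int nr_items)
	{
		/* Room for the bitmap: one bit per shrinker id... */
		size_t map_size = DIV_ROUND_UP(nr_items, BITS_PER_LONG) *
				  sizeof(unsigned long);
		/* ...and one deferred count per shrinker id. */
		size_t defer_size = nr_items * sizeof(atomic_long_t);
		struct memcg_shrinker_map *info;

		/* One chunk: struct header, then bitmap, then counters. */
		info = kvzalloc(sizeof(*info) + map_size + defer_size,
				GFP_KERNEL);
		if (!info)
			return NULL;

		/* Point map and nr_deferred into the trailing storage. */
		info->map = (unsigned long *)(info + 1);
		info->nr_deferred = (atomic_long_t *)((char *)info->map +
						      map_size);
		return info;
	}

With that layout, teardown stays a single kvfree() of the chunk, and
the resize path presumably just allocates the larger chunk and copies
map_size and defer_size bytes separately from the old chunk into the
new one before the RCU switch-over.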
In the longer term, it may be nice to expand this further and make it
the generalized intersection of cgroup, node, and shrinkers. There is
a large overlap with list_lru, for example: data of identical scope
and lifetime, but duplicative callbacks and management. If we folded
list_lru_memcg into the above data structure, we could also generalize
and reuse the existing callbacks.
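As a purely hypothetical sketch of that direction (none of these
field or type names exist in the kernel), the generalized per-memcg,
per-node structure might look something like:

	struct memcg_shrinker_info {
		struct rcu_head rcu;
		unsigned long *map;		/* memcg-aware shrinker bitmap */
		atomic_long_t *nr_deferred;	/* per-shrinker deferred counts */
		struct list_lru_one *lrus;	/* folded-in list_lru_memcg lists */
	};

One set of allocation, resize, and free callbacks could then manage
all three, instead of the shrinker map, nr_deferred, and list_lru each
maintaining their own copies of that lifecycle code.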