From: Yang Shi
Date: Wed, 2 Dec 2020 20:54:50 -0800
Subject: Re: [PATCH 5/9] mm: memcontrol: add per memcg shrinker nr_deferred
To: Roman Gushchin
Cc: Kirill Tkhai, Shakeel Butt, Dave Chinner, Johannes Weiner, Michal Hocko,
    Andrew Morton, Linux MM, Linux FS-devel Mailing List,
    Linux Kernel Mailing List
In-Reply-To: <20201203030632.GG1375014@carbon.DHCP.thefacebook.com>
References: <20201202182725.265020-1-shy828301@gmail.com>
            <20201202182725.265020-6-shy828301@gmail.com>
            <20201203030632.GG1375014@carbon.DHCP.thefacebook.com>

On Wed, Dec 2, 2020 at 7:06 PM Roman Gushchin wrote:
>
> On Wed, Dec 02, 2020 at 10:27:21AM -0800, Yang Shi wrote:
> > Currently the number of deferred objects is per shrinker, but some slabs, for example
> > the vfs inode/dentry caches, are per memcg; this results in poor isolation among memcgs.
> >
> > Deferred objects are typically generated by __GFP_NOFS allocations. One memcg with
> > excessive __GFP_NOFS allocations may blow up the deferred count, and other innocent
> > memcgs may then suffer from over-shrinking, excessive reclaim latency, etc.
> >
> > For example, two workloads run in memcgA and memcgB respectively, and the workload in
> > B is vfs heavy. The workload in A generates excessive deferred objects, then B's vfs
> > cache might be hit heavily (dropping half of its caches) by B's limit reclaim or by
> > global reclaim.
> >
> > We observed this in our production environment, which was running a vfs heavy
> > workload, as shown in the tracing log below:
> >
> > <...>-409454 [016] .... 28286961.747146: mm_shrink_slab_start: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > nid: 1 objects to shrink 3641681686040 gfp_flags GFP_HIGHUSER_MOVABLE|__GFP_ZERO pgs_scanned 1 lru_pgs 15721
> > cache items 246404277 delta 31345 total_scan 123202138
> > <...>-409454 [022] .... 28287105.928018: mm_shrink_slab_end: super_cache_scan+0x0/0x1a0 ffff9a83046f3458:
> > nid: 1 unused scan count 3641681686040 new scan count 3641798379189 total_scan 602
> > last shrinker return val 123186855
> >
> > The vfs cache to page cache ratio was 10:1 on this machine, and half of the caches
> > were dropped. This also caused a significant amount of page cache to be dropped due
> > to inode eviction.
> >
> > Making nr_deferred per memcg for memcg aware shrinkers would solve the unfairness and
> > bring better isolation.
> >
> > When memcg is not enabled (!CONFIG_MEMCG or memcg disabled), the shrinker's own
> > nr_deferred is used.
> > Non memcg aware shrinkers use the shrinker's own nr_deferred all the time.
> >
> > Signed-off-by: Yang Shi
> > ---
> >  include/linux/memcontrol.h |   9 +++
> >  mm/memcontrol.c            | 112 ++++++++++++++++++++++++++++++++++++-
> >  mm/vmscan.c                |   4 ++
> >  3 files changed, 123 insertions(+), 2 deletions(-)
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 922a7f600465..1b343b268359 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -92,6 +92,13 @@ struct lruvec_stat {
> >       long count[NR_VM_NODE_STAT_ITEMS];
> >  };
> >
> > +
> > +/* Shrinker::id indexed nr_deferred of memcg-aware shrinkers. */
> > +struct memcg_shrinker_deferred {
> > +     struct rcu_head rcu;
> > +     atomic_long_t nr_deferred[];
> > +};
>
> The idea makes total sense to me. But I wonder if we can add nr_deferred to
> struct list_lru_one, instead of adding another per-memcg per-shrinker entity?
> I guess it can simplify the code quite a lot. What do you think?

Aha, actually this is exactly what I did in the first place, but Dave NAK'ed that
approach. You can find the discussion at:
https://lore.kernel.org/linux-mm/20200930073152.GH12096@dread.disaster.area/
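For readers skimming the thread: Roman's suggestion would roughly amount to hanging the
deferred count off the per-memcg, per-node list_lru_one that list_lru backed, memcg-aware
shrinkers already maintain, something like the sketch below. The added field is purely
illustrative (this variant was not taken); the existing fields are as in
include/linux/list_lru.h.

```c
/*
 * Sketch of the alternative discussed above -- NOT merged code.
 * struct list_lru_one is already per memcg and per node, so a deferred
 * count could in principle live next to nr_items; the extra field here
 * is hypothetical.
 */
struct list_lru_one {
	struct list_head	list;
	/* may become negative during memcg reparenting */
	long			nr_items;
	/* hypothetical: deferred shrinker work for this memcg/node pair */
	atomic_long_t		nr_deferred;
};
```

One practical limitation (independent of the objections in the linked thread) is that not
every memcg-aware shrinker is backed by a list_lru, e.g. the deferred split THP shrinker,
so the deferred count could not live there for all of them.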
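And to make what this series does concrete, here is a minimal sketch of how the
nr_deferred lookup could sit on top of the struct memcg_shrinker_deferred shown in the
hunk above. The helper name and the memcg field it dereferences are assumptions for
illustration only; the actual wiring happens in do_shrink_slab() in later patches of the
series and may differ in detail (for example, the array could also be kept per node).

```c
/*
 * Illustrative only -- names here are hypothetical, not the exact code
 * from this series.  Memcg-aware shrinkers working on behalf of a memcg
 * use the per-memcg slot indexed by shrinker->id; everything else
 * (!CONFIG_MEMCG, memcg disabled, non memcg aware shrinkers, or no memcg
 * in the shrink_control) keeps using the global shrinker->nr_deferred.
 */
static atomic_long_t *nr_deferred_slot(struct shrinker *shrinker,
				       struct shrink_control *sc)
{
#ifdef CONFIG_MEMCG
	if ((shrinker->flags & SHRINKER_MEMCG_AWARE) && sc->memcg) {
		struct memcg_shrinker_deferred *deferred;

		/* A "shrinker_deferred" field on struct mem_cgroup is assumed. */
		deferred = rcu_dereference_protected(sc->memcg->shrinker_deferred,
						     true);
		return &deferred->nr_deferred[shrinker->id];
	}
#endif
	return &shrinker->nr_deferred[sc->nid];
}
```

do_shrink_slab() would then read and update the deferred count through one such helper,
e.g. nr = atomic_long_xchg(nr_deferred_slot(shrinker, shrinkctl), 0); instead of always
touching shrinker->nr_deferred[nid], which is what gives each memcg its own deferral
history.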