Date: Thu, 13 Feb 2020 08:46:27 -0500
From: Johannes Weiner
To: Yafang Shao
Cc: linux-fsdevel@vger.kernel.org, Linux MM, LKML, Dave Chinner,
    Michal Hocko, Roman Gushchin, Andrew Morton, Linus Torvalds,
    Al Viro, Kernel Team
Subject: Re: [PATCH] vfs: keep inodes with page cache off the inode shrinker LRU
Message-ID: <20200213134627.GB208501@cmpxchg.org>
References: <20200211175507.178100-1-hannes@cmpxchg.org>
 <20200212164235.GB180867@cmpxchg.org>

On Thu, Feb 13, 2020 at 09:47:29AM +0800, Yafang Shao wrote:
> On Thu, Feb 13, 2020 at 12:42 AM Johannes Weiner wrote:
> >
> > On Wed, Feb 12, 2020 at 08:25:45PM +0800, Yafang Shao wrote:
> > > On Wed, Feb 12, 2020 at 1:55 AM Johannes Weiner wrote:
> > > > Another variant of this problem was recently observed, where the
> > > > kernel violates cgroups' memory.low protection settings and reclaims
> > > > page cache way beyond the configured thresholds. It was followed by a
> > > > proposal of a modified form of the reverted commit above, that
> > > > implements memory.low-sensitive shrinker skipping over populated
> > > > inodes on the LRU [1]. However, this proposal continues to run the
> > > > risk of attracting disproportionate reclaim pressure to a pool of
> > > > still-used inodes,
> > >
> > > Hi Johannes,
> > >
> > > If you really think that is a risk, what about the additional patch
> > > below to fix this risk?
> > >
> > > diff --git a/fs/inode.c b/fs/inode.c
> > > index 80dddbc..61862d9 100644
> > > --- a/fs/inode.c
> > > +++ b/fs/inode.c
> > > @@ -760,7 +760,7 @@ static bool memcg_can_reclaim_inode(struct inode *inode,
> > >                  goto out;
> > >
> > >          cgroup_size = mem_cgroup_size(memcg);
> > > -        if (inode->i_data.nrpages + protection >= cgroup_size)
> > > +        if (inode->i_data.nrpages)
> > >                  reclaimable = false;
> > >
> > > out:
> > >
> > > With this additional patch, we skip all inodes in this memcg until all
> > > its page cache pages are reclaimed.
> >
> > Well, that's something we've tried and had to revert because it caused
> > issues in slab reclaim. See the History part of my changelog.
>
> You misunderstood it.
> The reverted patch skips all inodes in the system, while this patch
> only works when you turn on memcg.{min, low} protection.
> IOW, it is not the default behavior; it only works when you want it,
> and it only affects your targeted memcg rather than the whole system.

I understand perfectly well.

Keeping unreclaimable inodes on the shrinker LRU causes the shrinker
to build up excessive pressure on all VFS objects. This is a bug.

Making it cgroup-specific doesn't make it less of a bug, it just
means you only hit the bug when you use cgroup memory protection.
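To make that pressure mechanism concrete, here is a minimal C sketch
of a list_lru isolate callback that merely rotates populated inodes.
It is loosely modeled on fs/inode.c:inode_lru_isolate(), but the
function name and comments are illustrative only, not the actual
kernel code:

static enum lru_status inode_isolate_sketch(struct list_head *item,
                                            struct list_lru_one *lru,
                                            spinlock_t *lru_lock, void *arg)
{
        struct inode *inode = container_of(item, struct inode, i_lru);

        if (inode->i_data.nrpages) {
                /*
                 * Unreclaimable right now: the inode still counts as a
                 * scanned object, but it stays on the list and will be
                 * scanned again on every subsequent shrinker pass.
                 */
                return LRU_ROTATE;
        }

        /* ... isolate and evict the empty, unused inode ... */
        list_lru_isolate(lru, item);
        return LRU_REMOVED;
}

Because rotated inodes never leave the list, they keep inflating the
object count the shrinker sizes its scan targets from, without the
shrinker ever making progress on them; that pressure then lands on
whatever reclaimable VFS objects remain.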
> > > > while not addressing the more generic reclaim
> > > > inversion problem outside of a very specific cgroup application.
> > >
> > > But I have a different understanding. This method works like a
> > > knob. If you really care about your workingset (data), you should
> > > turn it on (i.e. by using memcg protection to protect them), while
> > > if you don't care about your workingset (data) then you'd better
> > > turn it off. That would be more flexible. Regarding your case in the
> > > commit log, why not protect your linux git tree with memcg
> > > protection?
> >
> > I can't imagine a scenario where I *wouldn't* care about my
> > workingset, though. Why should it be opt-in, not the default?
>
> Because the default behavior has caused the XFS performance hit.

That means that with your proposal you cannot use cgroup memory
protection for workloads that run on xfs.

(And if I remember the bug report correctly, this wasn't just xfs. It
also caused metadata caches on other filesystems to get trashed. xfs
was just more pronounced because it does sync inode flushing from the
shrinker, adding write stalls to the mix of metadata cache misses.)

What I'm proposing is an implementation that protects hot page cache
without causing excessive shrinker pressure and rotations.
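A rough sketch of that direction, keeping populated inodes off the
shrinker LRU in the first place; the helper names below
(sketch_inode_add_lru, sketch_inode_isolate) are invented for
illustration and this is not the actual patch:

/* (1) Only queue an inode for the shrinker once its page cache is gone. */
static void sketch_inode_add_lru(struct inode *inode)
{
        if (inode->i_data.nrpages)
                return;         /* hot page cache keeps it off the LRU */
        list_lru_add(&inode->i_sb->s_inode_lru, &inode->i_lru);
}

/* (2) If an inode on the LRU has gained pages since it was queued,
 *     take it off the list instead of rotating it indefinitely.
 */
static enum lru_status sketch_inode_isolate(struct list_head *item,
                                            struct list_lru_one *lru,
                                            spinlock_t *lru_lock, void *arg)
{
        struct inode *inode = container_of(item, struct inode, i_lru);

        if (inode->i_data.nrpages) {
                list_lru_isolate(lru, item);
                return LRU_REMOVED;
        }

        /* ... normal eviction path for an empty, unused inode ... */
        list_lru_isolate(lru, item);
        return LRU_REMOVED;
}

With something along these lines the shrinker only ever walks objects
it can actually free, so the pressure it generates stays proportional
to the reclaimable slab rather than to the amount of page cache that
happens to be pinning inodes.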