From: Suren Baghdasaryan
Date: Tue, 12 Nov 2019 10:45:44 -0800
Subject: Re: [PATCH 2/3] mm: vmscan: detect file thrashing at the reclaim root
To: Johannes Weiner
Cc: Andrew Morton, Andrey Ryabinin, Shakeel Butt, Rik van Riel,
 Michal Hocko, linux-mm, cgroups mailinglist, LKML, kernel-team@fb.com
In-Reply-To: <20191112174533.GA178331@cmpxchg.org>
References: <20191107205334.158354-1-hannes@cmpxchg.org>
 <20191107205334.158354-3-hannes@cmpxchg.org>
 <20191112174533.GA178331@cmpxchg.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Nov 12, 2019 at 9:45 AM Johannes Weiner wrote:
>
> On Sun, Nov 10, 2019 at 06:01:18PM -0800, Suren Baghdasaryan wrote:
> > On Thu, Nov 7, 2019 at 12:53 PM Johannes Weiner wrote:
> > >
> > > We use refault information to determine whether the cache workingset
> > > is stable or transitioning, and dynamically adjust the inactive:active
> > > file LRU ratio so as
> > > to maximize protection from one-off cache during
> > > stable periods, and minimize IO during transitions.
> > >
> > > With cgroups and their nested LRU lists, we currently don't do this
> > > correctly. While recursive cgroup reclaim establishes a relative LRU
> > > order among the pages of all involved cgroups, refaults only affect
> > > the local LRU order in the cgroup in which they are occurring. As a
> > > result, cache transitions can take longer in a cgrouped system as the
> > > active pages of sibling cgroups aren't challenged when they should be.
> > >
> > > [ Right now, this is somewhat theoretical, because the siblings, under
> > >   continued regular reclaim pressure, should eventually run out of
> > >   inactive pages - and since inactive:active *size* balancing is also
> > >   done on a cgroup-local level, we will challenge the active pages
> > >   eventually in most cases. But the next patch will move that relative
> > >   size enforcement to the reclaim root as well, and then this patch
> > >   here will be necessary to propagate refault pressure to siblings. ]
> > >
> > > This patch moves refault detection to the root of reclaim. Instead of
> > > remembering the cgroup owner of an evicted page, remember the cgroup
> > > that caused the reclaim to happen. When refaults later occur, they'll
> > > correctly influence the cross-cgroup LRU order that reclaim follows.
> >
> > I spent some time thinking about the idea of calculating refault
> > distance using target_memcg's inactive_age and then activating the
> > refaulted page in (possibly) another memcg, and I am still having
> > trouble convincing myself that this should work correctly. However, I
> > was also unable to convince myself otherwise... We use refault
> > distance to calculate the deficit in inactive LRU space and then
> > activate the refaulted page if that distance is less than the
> > active+inactive LRU size. However, making that decision based on the
> > LRU sizes of one memcg and then activating the page in another one
> > seems very counterintuitive to me. Maybe that's just me though...
>
> It's not activating in a random, unrelated memcg - it's the parental
> relationship that makes it work.
>
> If you have a cgroup tree
>
>         root
>          |
>          A
>         / \
>       B1   B2
>
> and reclaim is driven by a limit in A, we are reclaiming the pages in
> B1 and B2 as if they were on a single LRU list A (it's approximated by
> the round-robin reclaim and has some caveats, but that's the idea).
>
> So when a page that belongs to B2 gets evicted, it gets evicted from
> virtual LRU list A. When it refaults later, we make the (in)active
> size and distance comparisons against virtual LRU list A as well.
>
> The pages on the physical LRU list B2 are not just ordered relative to
> their B2 peers, they are also ordered relative to the pages in B1. And
> that of course is necessary if we want fair competition between them
> under shared reclaim pressure from A.

Thanks for the clarification. The test case in your description, where
group B has a large inactive cache that does not get reclaimed while
its sibling group A has to drop its active cache, gave me the
impression that sibling cgroups (B1 and B2 in your reply above) can
cause memory pressure in each other. Maybe that's not a legitimate
case, and B1 would not cause pressure in B2 without causing pressure
in their shared parent A? It makes more sense to me now, and I want to
confirm that this is the case.
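
To check my understanding, here is a rough sketch of the mechanism as
described above. The names (lru_clock, sketch_eviction, sketch_refault)
are made up for illustration and are not the kernel's actual
workingset.c interfaces, which pack the eviction clock reading into a
shadow page-cache entry; the point of the patch is just that both the
clock and the LRU sizes below come from the reclaim root's lruvec
(virtual list A), not from the evicted page's own memcg (B1 or B2):

/*
 * Illustrative sketch only -- simplified stand-ins for the real
 * workingset eviction/refault code, per the discussion above.
 */
struct lru_clock {
	unsigned long inactive_age;	/* advances on eviction/activation */
	unsigned long nr_active_file;
	unsigned long nr_inactive_file;
};

/* At eviction: remember the reclaim root's clock (the shadow entry). */
static unsigned long sketch_eviction(struct lru_clock *root)
{
	return root->inactive_age++;
}

/*
 * At refault: the distance is how far the root's clock advanced since
 * the page was evicted.  If the page could have been kept by growing
 * the inactive list at the active list's expense (distance no larger
 * than the combined file LRU size, as described above), the page was
 * evicted prematurely and gets activated -- possibly in a sibling of
 * the memcg it was evicted from.
 */
static int sketch_refault(struct lru_clock *root, unsigned long shadow)
{
	unsigned long refault_distance = root->inactive_age - shadow;

	return refault_distance <= root->nr_active_file +
				   root->nr_inactive_file;
}

Since both sides of that comparison come from the same (root) lruvec, a
refaulting B2 page is measured against, and can challenge, the active
pages of B1 as well.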