Date: Tue, 28 May 2019 18:22:49 -0700 (PDT)
From: David Rientjes
To: Yang Shi
Cc: ktkhai@virtuozzo.com, hannes@cmpxchg.org, mhocko@suse.com, kirill.shutemov@linux.intel.com, hughd@google.com, shakeelb@google.com, akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 0/3] Make deferred split shrinker memcg aware
In-Reply-To: <1559047464-59838-1-git-send-email-yang.shi@linux.alibaba.com>
References: <1559047464-59838-1-git-send-email-yang.shi@linux.alibaba.com>
On Tue, 28 May 2019, Yang Shi wrote:

> I got some reports from our internal application team about memcg OOM.
> Even though the application has been killed by the oom killer, a lot of
> THPs still reside in memory, and page reclaim doesn't reclaim them at
> all.
>
> Some investigation shows they are on the deferred split queue. Memcg
> direct reclaim can't shrink them since the THP deferred split shrinker
> is not memcg aware, and this may cause premature OOM in a memcg. The
> issue can be reproduced easily by the test below:
>

Right, we've also encountered this. I talked to Kirill about it a week
or so ago where the suggestion was to split all compound pages on the
deferred split queues under any memory pressure. That breaks cgroup
isolation and perhaps unfairly penalizes workloads that are running
attached to other memcg hierarchies that are not under pressure, because
their compound pages are now split as a side effect. There is a benefit
to keeping these compound pages around while not under memory pressure
if all the pages are subsequently mapped again.

> $ cgcreate -g memory:thp
> $ echo 4G > /sys/fs/cgroup/memory/thp/memory/limit_in_bytes
> $ cgexec -g memory:thp ./transhuge-stress 4000
>
> transhuge-stress comes from the kernel selftests.
>
> It is easy to hit OOM, but there are still a lot of THPs on the
> deferred split queue; memcg direct reclaim can't touch them since the
> deferred split shrinker is not memcg aware.
>

Yes, we have seen this on at least 4.15 as well.

> Convert the deferred split shrinker to be memcg aware by introducing a
> per-memcg deferred split queue. A THP should be on either the per-node
> or the per-memcg deferred split queue if it belongs to a memcg. When a
> page is migrated to another memcg, it is moved to the target memcg's
> deferred split queue as well.
>
> Also, delete the THP from its deferred split queue during page free,
> before the memcg uncharge, so that the page's memcg information is
> still available.
>
> Reuse the second tail page's deferred_list for the per-memcg list,
> since the same THP can't be on multiple deferred split queues at the
> same time.
>
> Remove the THP-specific destructor since it is not used anymore with
> the memcg-aware THP shrinker (please see the commit log of patch 2/3
> for the details).
>
> Make the deferred split shrinker not depend on memcg kmem since it is
> not slab. It doesn't make sense to skip shrinking THPs just because
> memcg kmem is disabled.
>
> With the above changes, the test shown above doesn't trigger OOM
> anymore, even with cgroup.memory=nokmem.
>

I'm curious whether your internal applications team is also asking for
statistics on how much memory can be freed if the deferred split queues
are shrunk. We have applications that monitor their own memory usage
through memcg stats or usage and proactively try to reduce that usage
when it is growing too large. The deferred split queues have
significantly increased both memcg usage and rss when they've upgraded
kernels.

How are your applications monitoring how much memory from deferred
split queues can be freed under memory pressure? Any thoughts on
providing it as a memcg stat?

Thanks!
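
(For anyone skimming the cover letter, here is a minimal sketch of the
queue-selection idea it describes, in kernel-style C. This is not code
from the posted patches: the struct layout, the deferred_split_queue
fields, and the helper name get_deferred_split_queue() are assumptions
for illustration only.)

/*
 * Illustrative sketch only: assumes struct mem_cgroup gains a
 * deferred_split_queue that mirrors the existing per-node queue, and
 * that the per-node fields are likewise wrapped in this struct.
 */
struct deferred_split {
	spinlock_t split_queue_lock;	/* protects split_queue */
	struct list_head split_queue;	/* THPs awaiting deferred split */
	unsigned long split_queue_len;	/* for the shrinker's count_objects() */
};

static struct deferred_split *get_deferred_split_queue(struct page *page)
{
	/* page->mem_cgroup is valid here on pre-5.14 kernels with CONFIG_MEMCG */
	struct mem_cgroup *memcg = compound_head(page)->mem_cgroup;
	struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));

	/*
	 * A THP charged to a memcg sits on that memcg's queue so memcg
	 * reclaim can reach it; an uncharged THP falls back to the
	 * per-node queue as before.
	 */
	if (memcg)
		return &memcg->deferred_split_queue;
	return &pgdat->deferred_split_queue;
}

Since the second tail page's deferred_list is reused for the per-memcg
list, a THP is only ever on one queue at a time, so migrating a page
between memcgs amounts to moving its list entry from one queue to the
other under the respective queue locks.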