From: Shakeel Butt
Date: Fri, 15 May 2020 17:06:44 -0700
Subject: Re: [PATCH] memcg: expose root cgroup's memory.stat
To: Roman Gushchin
Cc: Johannes Weiner, Michal Hocko, Mel Gorman, Andrew Morton,
    Yafang Shao, Linux MM, Cgroups, LKML
In-Reply-To: <20200515180920.GC94522@carbon.DHCP.thefacebook.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, May 15, 2020 at 11:09 AM Roman Gushchin wrote:
>
> On Fri, May 15, 2020 at 10:49:22AM -0700, Shakeel Butt wrote:
> > On Fri, May 15, 2020 at 8:00 AM Roman Gushchin wrote:
> > >
> > > On Fri, May 15, 2020 at 06:44:44AM -0700, Shakeel Butt wrote:
> > > > On Fri, May 15, 2020 at 6:24 AM Johannes Weiner wrote:
> > > > >
> > > > > On Fri, May 15, 2020 at 10:29:55AM +0200, Michal Hocko wrote:
> > > > > > On Sat 09-05-20 07:06:38, Shakeel Butt wrote:
> > > > > > > On Fri, May 8, 2020 at 2:44 PM Johannes Weiner wrote:
> > > > > > > >
> > > > > > > > On Fri, May 08, 2020 at 10:06:30AM -0700, Shakeel Butt wrote:
> > > > > > > > > One way to measure the efficiency of memory reclaim is to
> > > > > > > > > look at the ratio (pgscan + pgrefill) / pgsteal. However,
> > > > > > > > > at the moment these stats are not updated consistently at
> > > > > > > > > the system level, so the ratio is not very meaningful
> > > > > > > > > there: pgsteal and pgscan are updated only for global
> > > > > > > > > reclaim, while pgrefill is updated for global as well as
> > > > > > > > > cgroup reclaim.
> > > > > > > > >
> > > > > > > > > Note that this inconsistency exists only in the
> > > > > > > > > system-level vmstats; the cgroup stats returned by
> > > > > > > > > memory.stat are consistent. A cgroup's pgsteal counts
> > > > > > > > > pages reclaimed by global as well as cgroup reclaim. So
> > > > > > > > > one way to get consistent system-level stats is to read
> > > > > > > > > them from the root's memory.stat; hence, expose
> > > > > > > > > memory.stat for the root cgroup.
> > > > > > > > >
> > > > > > > > > from Johannes Weiner:
> > > > > > > > > There are subtle differences between /proc/vmstat and
> > > > > > > > > memory.stat, and cgroup-aware code that wants to watch
> > > > > > > > > the full hierarchy currently has to know about these
> > > > > > > > > intricacies and translate semantics back and forth.
> > > > > >
> > > > > > Can we have those subtle differences documented please?
> > > > > >
> > > > > > > > > Generally, having the fully recursive memory.stat at the
> > > > > > > > > root level could help a broader range of use cases.
> > > > > > > >
> > > > > > > > The changelog begs the question why we don't just "fix" the
> > > > > > > > system-level stats. It may be useful to include the
> > > > > > > > conclusions from that discussion, and why there is value in
> > > > > > > > keeping the stats this way.
> > > > > > >
> > > > > > > Right. Andrew, can you please add the following paragraph to
> > > > > > > the changelog?
> > > > > > >
> > > > > > > Why not fix the stats by including both the global and cgroup
> > > > > > > reclaim activity instead of exposing root cgroup's
> > > > > > > memory.stat? The reason is the benefit of having metrics that
> > > > > > > expose the activity happening purely due to machine capacity,
> > > > > > > rather than localized activity happening due to the limits
> > > > > > > throughout the cgroup tree. Additionally, there are userspace
> > > > > > > tools, like sysstat (sar), which read these stats to report
> > > > > > > system-level reclaim activity, and we should not break such
> > > > > > > use cases.
> > > > > > >
> > > > > > > > > Signed-off-by: Shakeel Butt
> > > > > > > > > Suggested-by: Johannes Weiner
> > > > > > > >
> > > > > > > > Acked-by: Johannes Weiner
> > > > > > >
> > > > > > > Thanks a lot.
> > > > > >
> > > > > > I was quite surprised that the patch is so simple, TBH. For
> > > > > > some reason I still had the impression that we do not account
> > > > > > for the root memcg (likely because of the
> > > > > > mem_cgroup_is_root(memcg) bail-out in try_charge). But the
> > > > > > stats are slightly different here.
> > > > >
> > > > > Yep, we skip the page_counter for root, but keep in mind that
> > > > > cgroup1 *does* have a root-level memory.stat, so (for the most
> > > > > part) we've been keeping consumer stats for the root level the
> > > > > whole time.
> > > > >
> > > > > > counters because they are not really all the same. E.g.
> > > > > > - mem_cgroup_charge_statistics accounts for each memcg
> > > > >
> > > > > Yep, that's heritage from cgroup1.
> > > > >
> > > > > > - memcg_charge_kernel_stack relies on pages being associated
> > > > > >   with a memcg, and that in turn relies on
> > > > > >   __memcg_kmem_charge_page, which bails out on the root memcg
> > > > >
> > > > > You're right. It should only bypass the page_counter, but still
> > > > > set page->mem_cgroup = root_mem_cgroup, just like user pages.
> > >
> > > What about kernel threads? We consider them to belong to the root
> > > memory cgroup. Should their memory consumption be counted in
> > > root-level stats?
> > >
> > > I'm not sure we really want it, but I guess we need to document how
> > > kernel threads are handled.
> >
> > What will be the cons of updating root-level stats for kthreads?
>
> It makes total sense for stacks, but not so much for slab memory,
> because that is really "some part of the total slab memory, which is
> accounted at the memcg level", and it comes with some performance
> overhead.
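
(As an aside, the partial-coverage point about slab is easy to check
once root-level stats exist. A hedged sketch, assuming a cgroup2 mount
at /sys/fs/cgroup and a "slab" key in memory.stat, which compares the
memcg-accounted slab against total slab from /proc/meminfo:)

# Hedged sketch: contrast total slab from /proc/meminfo with the slab
# accounted at the memcg level (root memory.stat "slab" key). The
# cgroup2 mount point and the "slab" key name are assumptions.

def meminfo_kb(key, path="/proc/meminfo"):
    with open(path) as f:
        for line in f:
            if line.startswith(key + ":"):
                return int(line.split()[1])  # /proc/meminfo values are in kB
    return 0

def root_slab_bytes(path="/sys/fs/cgroup/memory.stat"):
    with open(path) as f:
        for line in f:
            key, value = line.split()
            if key == "slab":
                return int(value)  # memory.stat values are in bytes
    return 0

total_kb = meminfo_kb("SReclaimable") + meminfo_kb("SUnreclaim")
accounted_kb = root_slab_bytes() // 1024
print(f"total slab: {total_kb} kB, memcg-accounted slab: {accounted_kb} kB")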
>
> I'm not really opposing any solution, just saying that we need to
> document what is included in these statistics and what is not.
>

Yes, I agree. I will explore for which stats it makes sense and for
which it does not.
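
As a starting point for that exploration, here is a minimal sketch
that computes the changelog's reclaim-efficiency ratio from
/proc/vmstat. The split pgscan_*/pgsteal_* key names are assumptions
based on typical kernels of this era and may vary:

# Minimal sketch: compute the reclaim-efficiency ratio
# (pgscan + pgrefill) / pgsteal from /proc/vmstat. The key names
# (pgscan_kswapd, pgscan_direct, pgsteal_kswapd, pgsteal_direct,
# pgrefill) are assumptions and may vary by kernel version.

def read_vmstat(path="/proc/vmstat"):
    stats = {}
    with open(path) as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value)
    return stats

def reclaim_efficiency(stats):
    # /proc/vmstat splits the scan/steal counters by reclaim source;
    # pgrefill is a single counter (updated for cgroup reclaim too).
    pgscan = stats.get("pgscan_kswapd", 0) + stats.get("pgscan_direct", 0)
    pgsteal = stats.get("pgsteal_kswapd", 0) + stats.get("pgsteal_direct", 0)
    pgrefill = stats.get("pgrefill", 0)
    return (pgscan + pgrefill) / pgsteal if pgsteal else float("nan")

if __name__ == "__main__":
    ratio = reclaim_efficiency(read_vmstat())
    print(f"(pgscan + pgrefill) / pgsteal = {ratio:.2f}")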
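
And a second sketch to show the inconsistency directly: it diffs the
reclaim counters in the root cgroup's memory.stat (once this patch
exposes it) against /proc/vmstat. It assumes a cgroup2 mount at
/sys/fs/cgroup; again, the exact key names may differ:

# Sketch: compare reclaim counters between the root cgroup's
# memory.stat (which counts global + cgroup reclaim, per this patch)
# and /proc/vmstat (where pgscan/pgsteal count global reclaim only).
# The cgroup2 mount point and exact key names are assumptions.

def read_kv(path):
    stats = {}
    with open(path) as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value)
    return stats

root = read_kv("/sys/fs/cgroup/memory.stat")
vmstat = read_kv("/proc/vmstat")

for key in ("pgscan", "pgsteal", "pgrefill"):
    # memory.stat uses unsplit keys; /proc/vmstat splits scan/steal
    # counters by reclaim source (kswapd vs. direct).
    sys_val = vmstat.get(key, vmstat.get(key + "_kswapd", 0)
                              + vmstat.get(key + "_direct", 0))
    print(f"{key}: root memory.stat={root.get(key, 0)}  /proc/vmstat={sys_val}")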