Date: Tue, 30 Jan 2018 13:20:11 +0100
From: Michal Hocko
To: Roman Gushchin
Cc: Tejun Heo, Andrew Morton, David Rientjes, Vladimir Davydov,
 Johannes Weiner, Tetsuo Handa, kernel-team@fb.com,
 cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch -mm v2 2/3] mm, memcg: replace cgroup aware oom killer mount option with tunable
Message-ID: <20180130122011.GB21609@dhcp22.suse.cz>
References: <20180126143950.719912507bd993d92188877f@linux-foundation.org>
 <20180126161735.b999356fbe96c0acd33aaa66@linux-foundation.org>
 <20180129104657.GC21609@dhcp22.suse.cz>
 <20180129191139.GA1121507@devbig577.frc2.facebook.com>
 <20180130085445.GQ21609@dhcp22.suse.cz>
 <20180130115846.GA4720@castle.DHCP.thefacebook.com>
 <20180130120852.GA21609@dhcp22.suse.cz>
 <20180130121315.GA5888@castle.DHCP.thefacebook.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20180130121315.GA5888@castle.DHCP.thefacebook.com>

On Tue 30-01-18 12:13:22, Roman Gushchin wrote:
> On Tue, Jan 30, 2018 at 01:08:52PM +0100, Michal Hocko wrote:
> > On Tue 30-01-18 11:58:51, Roman Gushchin wrote:
> > > On Tue, Jan 30, 2018 at 09:54:45AM +0100, Michal Hocko wrote:
> > > > On Mon 29-01-18 11:11:39, Tejun Heo wrote:
> > >
> > > Hello, Michal!
> > >
> > > > diff --git a/Documentation/cgroup-v2.txt b/Documentation/cgroup-v2.txt
> > > > index 2eaed1e2243d..67bdf19f8e5b 100644
> > > > --- a/Documentation/cgroup-v2.txt
> > > > +++ b/Documentation/cgroup-v2.txt
> > > > @@ -1291,8 +1291,14 @@ This affects both system- and cgroup-wide OOMs. For a cgroup-wide OOM
> > > >  the memory controller considers only cgroups belonging to the sub-tree
> > > >  of the OOM'ing cgroup.
> > > >
> > > > -The root cgroup is treated as a leaf memory cgroup, so it's compared
> > > > -with other leaf memory cgroups and cgroups with oom_group option set.
> > >                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > IMO, this statement is important. Isn't it?
> > >
> > > > +Leaf cgroups are compared based on their cumulative memory usage. The
> > > > +root cgroup is treated as a leaf memory cgroup as well, so it's
> > > > +compared with other leaf memory cgroups. Due to internal implementation
> > > > +restrictions the size of the root cgroup is a cumulative sum of
> > > > +oom_badness of all its tasks (in other words oom_score_adj of each task
> > > > +is obeyed). Relying on oom_score_adj (apart from OOM_SCORE_ADJ_MIN)
> > > > +can lead to overestimating the root cgroup consumption and it is
> > >
> > > Hm, and underestimating too. Also OOM_SCORE_ADJ_MIN isn't any different
> > > in this case.
> > > Say, all tasks except a small one have OOM_SCORE_ADJ set to
> > > -999, this means the root cgroup has extremely low chances to be elected.
> > >
> > > > +therefore discouraged. This might change in the future, though.
> > >
> > > Other than that looks very good to me.
> >
> > This?
> >
> > diff --git a/Documentation/cgroup-v2.txt b/Documentation/cgroup-v2.txt
> > index 2eaed1e2243d..34ad80ee90f2 100644
> > --- a/Documentation/cgroup-v2.txt
> > +++ b/Documentation/cgroup-v2.txt
> > @@ -1291,8 +1291,15 @@ This affects both system- and cgroup-wide OOMs. For a cgroup-wide OOM
> >  the memory controller considers only cgroups belonging to the sub-tree
> >  of the OOM'ing cgroup.
> >
> > -The root cgroup is treated as a leaf memory cgroup, so it's compared
> > -with other leaf memory cgroups and cgroups with oom_group option set.
> > +Leaf cgroups and cgroups with oom_group option set are compared based
> > +on their cumulative memory usage. The root cgroup is treated as a
> > +leaf memory cgroup as well, so it's compared with other leaf memory
> > +cgroups. Due to internal implementation restrictions the size of
> > +the root cgroup is a cumulative sum of oom_badness of all its tasks
> > +(in other words oom_score_adj of each task is obeyed). Relying on
> > +oom_score_adj (apart from OOM_SCORE_ADJ_MIN) can lead to over- or
> > +underestimating the root cgroup consumption and it is therefore
> > +discouraged. This might change in the future, though.
>
> Acked-by: Roman Gushchin

Andrew?

From 361275a05ad7026b8f721f8aa756a4975a2c42b1 Mon Sep 17 00:00:00 2001
From: Michal Hocko
Date: Tue, 30 Jan 2018 09:54:15 +0100
Subject: [PATCH] oom, memcg: clarify root memcg oom accounting

David Rientjes has pointed out that the current way the root memcg is
accounted for by the cgroup aware OOM killer is undocumented. Unlike
regular cgroups there is no accounting going on in the root memcg
(mostly for performance reasons). Therefore we are summing up
oom_badness of its tasks.
This might result in over- or under-accounting because of the
oom_score_adj setting. Document this for now.

Acked-by: Roman Gushchin
Signed-off-by: Michal Hocko
---
 Documentation/cgroup-v2.txt | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/Documentation/cgroup-v2.txt b/Documentation/cgroup-v2.txt
index 2eaed1e2243d..34ad80ee90f2 100644
--- a/Documentation/cgroup-v2.txt
+++ b/Documentation/cgroup-v2.txt
@@ -1291,8 +1291,15 @@ This affects both system- and cgroup-wide OOMs. For a cgroup-wide OOM
 the memory controller considers only cgroups belonging to the sub-tree
 of the OOM'ing cgroup.
 
-The root cgroup is treated as a leaf memory cgroup, so it's compared
-with other leaf memory cgroups and cgroups with oom_group option set.
+Leaf cgroups and cgroups with oom_group option set are compared based
+on their cumulative memory usage. The root cgroup is treated as a
+leaf memory cgroup as well, so it's compared with other leaf memory
+cgroups. Due to internal implementation restrictions the size of
+the root cgroup is a cumulative sum of oom_badness of all its tasks
+(in other words oom_score_adj of each task is obeyed). Relying on
+oom_score_adj (apart from OOM_SCORE_ADJ_MIN) can lead to over- or
+underestimating the root cgroup consumption and it is therefore
+discouraged. This might change in the future, though.
 
 If there are no cgroups with the enabled memory controller,
 the OOM killer is using the "traditional" process-based approach.
-- 
2.15.1

-- 
Michal Hocko
SUSE Labs