Date: Tue, 17 Mar 2020 08:52:12 +0100
From: Michal Hocko
To: Roman Gushchin
Cc: Andrew Morton, linux-mm@kvack.org, kernel-team@fb.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: memcg: make memory.oom.group tolerable to task migration
Message-ID: <20200317075212.GC26018@dhcp22.suse.cz>
References: <20200316223510.3176148-1-guro@fb.com>
In-Reply-To: <20200316223510.3176148-1-guro@fb.com>

On Mon 16-03-20 15:35:10, Roman Gushchin wrote:
> If a task is getting moved out of the OOMing cgroup, it might
> result in unexpected OOM killings if memory.oom.group is used
> anywhere in the cgroup tree.
>
> Imagine the following example:
>
>           A (oom.group = 1)
>          / \
>   (OOM) B   C
>
> Let's say B's memory.max is exceeded and it's OOMing. The OOM killer
> selects a task in B as a victim, but someone asynchronously moves
> the task into C.

I can see a Reported-by here, does that mean that the race really
happened in real workloads? If yes, I would be really curious. Mostly
because moving tasks outside of the oom domain is quite questionable
without charge migration.

> mem_cgroup_get_oom_group() will iterate over all ancestors of C up
> to the root cgroup. In theory it was supposed to stop at the
> oom_domain level - the memory cgroup which is OOMing. But because B
> is not an ancestor of C, that's not happening. Instead it chooses A
> (because its oom.group is set), and kills all tasks in A. This
> behavior is wrong because the OOM happened in B, so there is no
> reason to kill anything outside of it.
>
> Fix this by checking if the memory cgroup to which the task belongs
> is a descendant of the oom_domain. If not, memory.oom.group should
> be ignored, and the OOM killer should kill only the victim task.

I was about to suggest storing the memcg in oom_evaluate_task but then
I realized that this would be more complex and I am not yet sure it
would be so much better after all.

The thing is that killing the selected task makes a lot of sense
because it was the largest consumer, no matter that it has run away.

On the other hand, if your B had oom.group = 1, then one could expect
that any OOM killer event in that group would result in the whole
group being torn down. This is however a gray zone because we do emit
the MEMCG_OOM event, but the MEMCG_OOM_KILL event will go to the
victim's at-the-time memcg. So an observer of B could think that the
OOM was resolved without any killing, while an observer of C would see
a kill event without an OOM.

That being said, please try to think about the above. I will give it
some more time as well. Killing the selected victim is the obviously
correct thing, and your patch does that, so it is correct in that
regard, but I believe that the group OOM behavior in the original oom
domain remains an open question.
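To make that observer split a bit more concrete, here is a condensed
sketch of where the two events are accounted (trimmed down from
mm/memcontrol.c and mm/oom_kill.c; locking, error handling and the
surrounding logic are omitted, so read it as an illustration rather
than the exact upstream code):

	/* Charge path: the memcg that hit its limit (B) records MEMCG_OOM. */
	static enum oom_status mem_cgroup_oom(struct mem_cgroup *memcg,
					      gfp_t mask, int order)
	{
		memcg_memory_event(memcg, MEMCG_OOM);	/* observers of B see an oom */
		/* ... invoke the OOM killer for this oom domain ... */
		return OOM_SUCCESS;			/* outcome handling omitted */
	}

	/* Kill path: MEMCG_OOM_KILL is accounted to the victim's current memcg. */
	static void __oom_kill_process(struct task_struct *victim, const char *message)
	{
		struct mm_struct *mm = victim->mm;	/* simplified; see find_lock_task_mm() */

		/* if the task was moved to C in the meantime, C gets the kill event */
		memcg_memory_event_mm(mm, MEMCG_OOM_KILL);
		/* ... send SIGKILL, wake the oom reaper, dump the report ... */
	}

So the two memory.events files a monitoring agent would watch (B's and
C's) end up telling two halves of the same story.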
> Fixes: 3d8b38eb81ca ("mm, oom: introduce memory.oom.group")
> Signed-off-by: Roman Gushchin
> Reported-by: Dan Schatzberg
> ---
>  mm/memcontrol.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index daa399be4688..d8c4b7aa4e73 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1930,6 +1930,14 @@ struct mem_cgroup *mem_cgroup_get_oom_group(struct task_struct *victim,
>  	if (memcg == root_mem_cgroup)
>  		goto out;
>
> +	/*
> +	 * If the victim task has been asynchronously moved to a different
> +	 * memory cgroup, we might end up killing tasks outside oom_domain.
> +	 * In this case it's better to ignore memory.group.oom.
> +	 */
> +	if (unlikely(!mem_cgroup_is_descendant(memcg, oom_domain)))
> +		goto out;
> +
>  	/*
>  	 * Traverse the memory cgroup hierarchy from the victim task's
>  	 * cgroup up to the OOMing cgroup (or root) to find the
> --
> 2.24.1

-- 
Michal Hocko
SUSE Labs
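For completeness, the escalation described in the changelog and the
effect of the added mem_cgroup_is_descendant() check can be modelled
in plain userspace C. Everything below (struct cg, is_descendant(),
get_oom_group()) is a made-up toy model that only mirrors the control
flow of the hunk quoted above; it is not the kernel implementation:

	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	struct cg {
		const char *name;
		struct cg *parent;
		bool oom_group;		/* memory.oom.group */
	};

	/* Is "cg" inside the subtree rooted at "root" (including root itself)? */
	static bool is_descendant(struct cg *cg, struct cg *root)
	{
		for (; cg; cg = cg->parent)
			if (cg == root)
				return true;
		return false;
	}

	/* Walk from the victim's cgroup up to oom_domain (or the root). */
	static struct cg *get_oom_group(struct cg *victim_cg, struct cg *oom_domain,
					bool with_fix)
	{
		struct cg *oom_group = NULL;
		struct cg *cg;

		/* The check added by the patch: bail out if the victim escaped. */
		if (with_fix && !is_descendant(victim_cg, oom_domain))
			return NULL;

		for (cg = victim_cg; cg; cg = cg->parent) {
			if (cg->oom_group)
				oom_group = cg;
			if (cg == oom_domain)	/* never true once the victim sits in C */
				break;
		}
		return oom_group;
	}

	int main(void)
	{
		struct cg A = { "A", NULL, true };	/* oom.group = 1 */
		struct cg B = { "B", &A, false };	/* the OOMing memcg */
		struct cg C = { "C", &A, false };	/* victim moved here */

		struct cg *before = get_oom_group(&C, &B, false);
		struct cg *after  = get_oom_group(&C, &B, true);

		printf("without the check: kill group %s\n", before ? before->name : "(none)");
		printf("with the check:    kill group %s\n", after ? after->name : "(none)");
		return 0;
	}

Compiled and run, the unpatched walk reports A as the group to tear
down, while the patched walk reports no group at all, which is exactly
the behavior change the hunk above is after.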