Date: Thu, 2 Aug 2018 14:14:46 +0200
From: Michal Hocko
To: Tetsuo Handa
Cc: Roman Gushchin, linux-mm@kvack.org, Johannes Weiner, David Rientjes, Tejun Heo, kernel-team@fb.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 3/3] mm, oom: introduce memory.oom.group
Message-ID: <20180802121446.GK10808@dhcp22.suse.cz>
References: <20180802003201.817-1-guro@fb.com> <20180802003201.817-4-guro@fb.com> <879f1767-8b15-4e83-d9ef-d8df0e8b4d83@i-love.sakura.ne.jp> <20180802112114.GG10808@dhcp22.suse.cz> <712a319f-c9da-230a-f2cb-af980daff704@i-love.sakura.ne.jp>
In-Reply-To: <712a319f-c9da-230a-f2cb-af980daff704@i-love.sakura.ne.jp>

On Thu 02-08-18 20:53:14, Tetsuo Handa wrote:
> On 2018/08/02 20:21, Michal Hocko wrote:
> > On Thu 02-08-18 19:53:13, Tetsuo Handa wrote:
> >> On 2018/08/02 9:32, Roman Gushchin wrote:
> > [...]
> >>> +struct mem_cgroup *mem_cgroup_get_oom_group(struct task_struct *victim,
> >>> +					       struct mem_cgroup *oom_domain)
> >>> +{
> >>> +	struct mem_cgroup *oom_group = NULL;
> >>> +	struct mem_cgroup *memcg;
> >>> +
> >>> +	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
> >>> +		return NULL;
> >>> +
> >>> +	if (!oom_domain)
> >>> +		oom_domain = root_mem_cgroup;
> >>> +
> >>> +	rcu_read_lock();
> >>> +
> >>> +	memcg = mem_cgroup_from_task(victim);
> >>
> >> Isn't this racy? I guess that the memcg of this "victim" can change
> >> from the one it belonged to when the final candidate was determined.
> >
> > How is this any different from the existing code? We select a victim and
> > then kill it. The victim might move away and won't be part of the oom
> > memcg anymore, but we will still kill it. I do not remember this ever
> > being a problem. Migration is a privileged operation. If you lose this
> > restriction you shouldn't allow moving outside of the oom domain.
>
> The existing code kills one process (plus other processes sharing its mm,
> if any). But oom_cgroup kills multiple processes. Thus, whether we made
> the decision based on the correct memcg becomes important.

Yes, but a proper configuration should already mitigate the harm, because
you shouldn't be able to migrate the task outside of the oom domain.

        A (oom.group = 1)
       / \
      B   C

Moving a task between B and C should be harmless, while moving it out of
A's subtree completely is a dubious configuration.

> >> This "victim" might have already passed exit_mm()/cgroup_exit() from do_exit().
> >
> > Why does this matter? The victim hasn't been killed yet, so if it exits
> > on its own I do not think we really have to tear the whole cgroup down.
>
> The existing code does not send SIGKILL if find_lock_task_mm() failed. Who
> can guarantee that the victim is not already inside do_exit() when this
> code is executed?

I do not follow. Why does this matter at all?
-- 
Michal Hocko
SUSE Labs
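
P.S. To make the configuration in the diagram above concrete, here is a
minimal userspace sketch, assuming the unified (v2) hierarchy is mounted
at /sys/fs/cgroup and that memory.oom.group behaves as this series
describes (a boolean file under each memcg directory). The group path
"A" is hypothetical, matching the diagram only.

/*
 * Mark cgroup A as an indivisible workload: with memory.oom.group = 1,
 * an OOM kill of any task in A's subtree (B or C above) is expected to
 * take the whole subtree down. Path and semantics as assumed above.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/fs/cgroup/A/memory.oom.group";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror(path);
		return 1;
	}
	if (write(fd, "1", 1) != 1) {
		perror("write");
		close(fd);
		return 1;
	}
	close(fd);
	return 0;
}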
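
And to make the "shouldn't be able to migrate outside of the oom domain"
point concrete, a hypothetical guard, not taken from Roman's patch, that
the lookup quoted above could grow. mem_cgroup_from_task() and
mem_cgroup_is_descendant() are existing memcg helpers; the check itself
is only an illustration of the idea.

	/*
	 * Hypothetical, for illustration only. If the victim was migrated
	 * out of the OOM domain between victim selection and this lookup,
	 * fall back to killing the single victim instead of applying the
	 * group policy of an unrelated subtree.
	 */
	rcu_read_lock();
	memcg = mem_cgroup_from_task(victim);
	if (!mem_cgroup_is_descendant(memcg, oom_domain)) {
		rcu_read_unlock();
		return NULL;
	}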