Date: Mon, 22 Oct 2018 14:03:08 +0200
From: Michal Hocko
To: Tetsuo Handa
Cc: linux-mm@kvack.org, Johannes Weiner, David Rientjes, Andrew Morton, LKML
Subject: Re: [RFC PATCH 2/2] memcg: do not report racy no-eligible OOM tasks
Message-ID: <20181022120308.GB18839@dhcp22.suse.cz>
References: <20181022071323.9550-1-mhocko@kernel.org> <20181022071323.9550-3-mhocko@kernel.org>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon 22-10-18 20:45:17, Tetsuo Handa wrote:
> On 2018/10/22 16:13, Michal Hocko wrote:
> > From: Michal Hocko
> >
> > Tetsuo has reported [1] that a single process group memcg might easily
> > swamp the log with no-eligible oom victim reports due
> > to a race between
> > the memcg charge and oom_reaper
> >
> > Thread 1                Thread2             oom_reaper
> > try_charge              try_charge
> >                           mem_cgroup_out_of_memory
> >                             mutex_lock(oom_lock)
> >   mem_cgroup_out_of_memory
> >     mutex_lock(oom_lock)
> >                             out_of_memory
> >                               select_bad_process
> >                               oom_kill_process(current)
> >                               wake_oom_reaper
> >                                                 oom_reap_task
> >                                                 MMF_OOM_SKIP->victim
> >                             mutex_unlock(oom_lock)
> >     out_of_memory
> >       select_bad_process # no task
> >
> > If Thread1 didn't race it would bail out from try_charge and force the
> > charge. We can achieve the same by checking tsk_is_oom_victim inside
> > the oom_lock and therefore close the race.
> >
> > [1] http://lkml.kernel.org/r/bb2074c0-34fe-8c2c-1c7d-db71338f1e7f@i-love.sakura.ne.jp
> > Signed-off-by: Michal Hocko
> > ---
> >  mm/memcontrol.c | 14 +++++++++++++-
> >  1 file changed, 13 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index e79cb59552d9..a9dfed29967b 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -1380,10 +1380,22 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
> >  		.gfp_mask = gfp_mask,
> >  		.order = order,
> >  	};
> > -	bool ret;
> > +	bool ret = true;
> >
> >  	mutex_lock(&oom_lock);
> > +
> > +	/*
> > +	 * multi-threaded tasks might race with oom_reaper and gain
> > +	 * MMF_OOM_SKIP before reaching out_of_memory which can lead
> > +	 * to out_of_memory failure if the task is the last one in
> > +	 * memcg which would be a false positive failure reported
> > +	 */
> > +	if (tsk_is_oom_victim(current))
> > +		goto unlock;
> > +
>
> This is not wrong but is strange. We can use mutex_lock_killable(&oom_lock)
> so that any killed threads no longer wait for oom_lock.

tsk_is_oom_victim is stronger because it doesn't depend on
fatal_signal_pending, which might be cleared during the exit process.

> Also, closing this race for only memcg OOM path is strange.
> Global OOM path
> (which are CLONE_VM without CLONE_THREAD) is still suffering this race
> (though frequency is lower than memcg OOM due to use of mutex_trylock()).
> Either checking before calling out_of_memory() or checking
> task_will_free_mem(current) inside out_of_memory() will close this race
> for both paths.

The global case is much more complicated because we know that memcg can
bypass the charge, so we do not have to care about a potential endless
loop like in the page allocator path. Moreover, I am not even sure the
race is all that interesting in the global case. I have never heard of a
premature panic due to no killable task. The racing oom task would have
to be the last eligible process in the system, and that is quite
unlikely. We can think about a more involved solution if we ever hear
about this being a real problem.

So a simple memcg-specific fix sounds like a reasonable way forward.
-- 
Michal Hocko
SUSE Labs