Date: Tue, 23 Oct 2018 14:10:55 +0200
From: Michal Hocko
To: Johannes Weiner
Cc: linux-mm@kvack.org, Tetsuo Handa, David Rientjes, Andrew Morton, LKML
Subject: Re: [RFC PATCH 2/2] memcg: do not report racy no-eligible OOM tasks
Message-ID: <20181023121055.GS18839@dhcp22.suse.cz>
References: <20181022120308.GB18839@dhcp22.suse.cz> <201810230101.w9N118i3042448@www262.sakura.ne.jp> <20181023114246.GR18839@dhcp22.suse.cz>
In-Reply-To: <20181023114246.GR18839@dhcp22.suse.cz>

On Tue 23-10-18 13:42:46, Michal Hocko wrote:
> On Tue 23-10-18 10:01:08, Tetsuo Handa wrote:
> > Michal Hocko wrote:
> > > On Mon 22-10-18 20:45:17, Tetsuo Handa wrote:
> > > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > > > index e79cb59552d9..a9dfed29967b 100644
> > > > > --- a/mm/memcontrol.c
> > > > > +++ b/mm/memcontrol.c
> > > > > @@ -1380,10 +1380,22 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
> > > > >  		.gfp_mask = gfp_mask,
> > > > >  		.order = order,
> > > > >  	};
> > > > > -	bool ret;
> > > > > +	bool ret = true;
> > > > >
> > > > >  	mutex_lock(&oom_lock);
> > > > > +
> > > > > +	/*
> > > > > +	 * multi-threaded tasks might race with oom_reaper and gain
> > > > > +	 * MMF_OOM_SKIP before reaching out_of_memory which can lead
> > > > > +	 * to out_of_memory failure if the task is the last one in
> > > > > +	 * memcg which would be a false positive failure reported
> > > > > +	 */
> > > > > +	if (tsk_is_oom_victim(current))
> > > > > +		goto unlock;
> > > > > +
> > > >
> > > > This is not wrong but is strange. We can use mutex_lock_killable(&oom_lock)
> > > > so that any killed threads no longer wait for oom_lock.
> > >
> > > tsk_is_oom_victim is stronger because it doesn't depend on
> > > fatal_signal_pending, which might be cleared throughout the exit process.
> >
> > I still want to propose this. No need to be memcg OOM specific.
>
> Well, I maintain what I've said [1] about simplicity and a specific fix
> for a specific issue. Especially in tricky code like this, all the
> consequences are far more subtle than they seem to be.
>
> This is obviously a matter of taste, but I don't see much point in
> discussing this back and forth forever. If there is a general agreement
> that the above is less appropriate, then I am willing to consider a
> different change, but I simply do not have the energy to nitpick forever.
>
> [1] http://lkml.kernel.org/r/20181022134315.GF18839@dhcp22.suse.cz

In other words, having a memcg-specific fix means, well, a memcg
maintenance burden, like any other memcg-specific OOM decisions we
already have.
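For reference, Tetsuo's mutex_lock_killable() alternative would look
roughly like the sketch below. This is an untested illustration of the
idea being debated, not a patch that was posted in this thread; the oc
initializer and the out_of_memory() call are carried over from the
quoted hunk, and the early-return value of true is an assumption about
how a killed waiter would be reported:

```
static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
				     int order)
{
	struct oom_control oc = {
		.memcg = memcg,
		.gfp_mask = gfp_mask,
		.order = order,
	};
	bool ret = true;

	/*
	 * A task with a fatal signal pending bails out here instead of
	 * blocking on oom_lock, so killed threads no longer wait for
	 * the lock holder to finish.
	 */
	if (mutex_lock_killable(&oom_lock))
		return true;

	ret = out_of_memory(&oc);
	mutex_unlock(&oom_lock);
	return ret;
}
```

As Michal notes above, this hinges on fatal_signal_pending(), which can
be cleared during exit, whereas tsk_is_oom_victim() keeps holding once
the task has been selected as a victim.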
So are you OK with that, Johannes, or would you like to see a more
generic fix, which might turn out to be more complex?
-- 
Michal Hocko
SUSE Labs