Date: Fri, 26 Oct 2018 10:25:31 -0400
From: Johannes Weiner
To: Michal Hocko
Cc: linux-mm@kvack.org, Tetsuo Handa, David Rientjes, Andrew Morton, LKML, Michal Hocko
Subject: Re: [RFC PATCH 2/2] memcg: do not report racy no-eligible OOM tasks
Message-ID: <20181026142531.GA27370@cmpxchg.org>
References: <20181022071323.9550-1-mhocko@kernel.org> <20181022071323.9550-3-mhocko@kernel.org>
In-Reply-To: <20181022071323.9550-3-mhocko@kernel.org>

On Mon, Oct 22, 2018 at 09:13:23AM +0200, Michal Hocko wrote:
> From: Michal Hocko
>
> Tetsuo has reported [1] that a single process group memcg might easily
> swamp the log with no-eligible oom victim reports due to race between
> the memcg charge and oom_reaper
>
> Thread 1                Thread2                         oom_reaper
> try_charge              try_charge
>                           mem_cgroup_out_of_memory
>                             mutex_lock(oom_lock)
>   mem_cgroup_out_of_memory
>     mutex_lock(oom_lock)
>                               out_of_memory
>                                 select_bad_process
>                                 oom_kill_process(current)
>                                 wake_oom_reaper
>                                                         oom_reap_task
>                                                         MMF_OOM_SKIP->victim
>                             mutex_unlock(oom_lock)
>     out_of_memory
>       select_bad_process # no task
>
> If Thread1 didn't race it would bail out from try_charge and force the
> charge. We can achieve the same by checking tsk_is_oom_victim inside
> the oom_lock and therefore close the race.
>
> [1] http://lkml.kernel.org/r/bb2074c0-34fe-8c2c-1c7d-db71338f1e7f@i-love.sakura.ne.jp
> Signed-off-by: Michal Hocko
> ---
>  mm/memcontrol.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index e79cb59552d9..a9dfed29967b 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1380,10 +1380,22 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
>  		.gfp_mask = gfp_mask,
>  		.order = order,
>  	};
> -	bool ret;
> +	bool ret = true;
>
>  	mutex_lock(&oom_lock);
> +
> +	/*
> +	 * multi-threaded tasks might race with oom_reaper and gain
> +	 * MMF_OOM_SKIP before reaching out_of_memory which can lead
> +	 * to out_of_memory failure if the task is the last one in
> +	 * memcg which would be a false possitive failure reported
> +	 */
> +	if (tsk_is_oom_victim(current))
> +		goto unlock;
> +
>  	ret = out_of_memory(&oc);

We already check tsk_is_oom_victim(current) in try_charge() before
looping on the OOM killer, so at most we'd have a single "no eligible
tasks" message from such a race before we force the charge - right?
While that's not perfect, I don't think it warrants complicating this
code even more.

I honestly find it near-impossible to follow the code and the possible
scenarios at this point. out_of_memory() bails on
task_will_free_mem(current), which specifically *excludes* already
reaped tasks. Why are we then adding a separate check before that to
bail on already reaped victims? Do we want to bail if current is a
reaped victim or not?
I don't see how we could skip it safely in general: the current task
might have been killed and reaped and gotten access to the memory
reserve and still fail to allocate on its way out. It then needs to
kill the next task if there is one, or warn if there isn't another
one, because at that point we are genuinely oom without reclaimable
tasks.

There is of course the scenario brought forward in this thread, where
multiple threads of a process race and the second one enters oom even
though it doesn't need to anymore. What the global case does to catch
this is to grab the oom lock and make one last allocation attempt
before declaring OOM. Should memcg take the oom_lock and try one more
time to charge the memcg?

Some simplification in this area would really be great. I'm reluctant
to ack patches like the above, even if they have some optical benefits
for the user, because the code is already too tricky for what it does.