Message-Id: <201810230101.w9N118i3042448@www262.sakura.ne.jp>
Subject: Re: [RFC PATCH 2/2] memcg: do not report racy no-eligible OOM tasks
From:
Tetsuo Handa
To: Michal Hocko
Cc: linux-mm@kvack.org, Johannes Weiner, David Rientjes, Andrew Morton, LKML
Date: Tue, 23 Oct 2018 10:01:08 +0900
References: <20181022120308.GB18839@dhcp22.suse.cz>
In-Reply-To: <20181022120308.GB18839@dhcp22.suse.cz>

Michal Hocko wrote:
> On Mon 22-10-18 20:45:17, Tetsuo Handa wrote:
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index e79cb59552d9..a9dfed29967b 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -1380,10 +1380,22 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
> > >  		.gfp_mask = gfp_mask,
> > >  		.order = order,
> > >  	};
> > > -	bool ret;
> > > +	bool ret = true;
> > >  
> > >  	mutex_lock(&oom_lock);
> > > +
> > > +	/*
> > > +	 * multi-threaded tasks might race with oom_reaper and gain
> > > +	 * MMF_OOM_SKIP before reaching out_of_memory which can lead
> > > +	 * to out_of_memory failure if the task is the last one in
> > > +	 * memcg which would be a false positive failure reported
> > > +	 */
> > > +	if (tsk_is_oom_victim(current))
> > > +		goto unlock;
> > > +
> >
> > This is not wrong but is strange. We can use mutex_lock_killable(&oom_lock)
> > so that any killed threads no longer wait for oom_lock.
>
> tsk_is_oom_victim is stronger because it doesn't depend on
> fatal_signal_pending which might be cleared throughout the exit process.

I still want to propose this. No need to be memcg OOM specific.
 mm/memcontrol.c |  3 ++-
 mm/oom_kill.c   | 10 ++++++++++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e79cb59..2c1e1ac 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1382,7 +1382,8 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	};
 	bool ret;
 
-	mutex_lock(&oom_lock);
+	if (mutex_lock_killable(&oom_lock))
+		return true;
 	ret = out_of_memory(&oc);
 	mutex_unlock(&oom_lock);
 	return ret;
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index f10aa53..e453bad 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -1055,6 +1055,16 @@ bool out_of_memory(struct oom_control *oc)
 	unsigned long freed = 0;
 	enum oom_constraint constraint = CONSTRAINT_NONE;
 
+	/*
+	 * It is possible that multi-threaded OOM victims get
+	 * task_will_free_mem(current) == false when the OOM reaper quickly
+	 * set MMF_OOM_SKIP. But since we know that tsk_is_oom_victim() == true
+	 * tasks won't loop forever (unless it is a __GFP_NOFAIL allocation
+	 * request), we don't need to select next OOM victim.
+	 */
+	if (tsk_is_oom_victim(current) && !(oc->gfp_mask & __GFP_NOFAIL))
+		return true;
+
 	if (oom_killer_disabled)
 		return false;
-- 
1.8.3.1