Subject: Re: [RFC PATCH 2/2] memcg: do not report racy no-eligible OOM tasks
From: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
To: Michal Hocko, linux-mm@kvack.org
Cc: Johannes Weiner, David Rientjes, Andrew Morton, LKML, Michal Hocko
Date: Mon, 22 Oct 2018 20:45:17 +0900
References: <20181022071323.9550-1-mhocko@kernel.org> <20181022071323.9550-3-mhocko@kernel.org>
In-Reply-To: <20181022071323.9550-3-mhocko@kernel.org>
List-ID: linux-kernel@vger.kernel.org

On 2018/10/22 16:13, Michal Hocko wrote:
> From: Michal Hocko
>
> Tetsuo has reported [1] that a single process group memcg might easily
> swamp the log with no-eligible oom victim reports due to a race between
> the memcg charge and the oom_reaper:
>
> Thread 1                    Thread 2                        oom_reaper
> try_charge                  try_charge
>                               mem_cgroup_out_of_memory
>                                 mutex_lock(oom_lock)
>   mem_cgroup_out_of_memory
>     mutex_lock(oom_lock)
>                                 out_of_memory
>                                   select_bad_process
>                                   oom_kill_process(current)
>                                   wake_oom_reaper
>                                                             oom_reap_task
>                                                             MMF_OOM_SKIP -> victim
>                                 mutex_unlock(oom_lock)
>     out_of_memory
>       select_bad_process # no task
>
> If Thread1 didn't race it would bail out from try_charge and force the
> charge. We can achieve the same by checking tsk_is_oom_victim inside
> the oom_lock and therefore close the race.
>
> [1] http://lkml.kernel.org/r/bb2074c0-34fe-8c2c-1c7d-db71338f1e7f@i-love.sakura.ne.jp
> Signed-off-by: Michal Hocko
> ---
>  mm/memcontrol.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index e79cb59552d9..a9dfed29967b 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1380,10 +1380,22 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
>  		.gfp_mask = gfp_mask,
>  		.order = order,
>  	};
> -	bool ret;
> +	bool ret = true;
>
>  	mutex_lock(&oom_lock);
> +
> +	/*
> +	 * multi-threaded tasks might race with oom_reaper and gain
> +	 * MMF_OOM_SKIP before reaching out_of_memory which can lead
> +	 * to out_of_memory failure if the task is the last one in
> +	 * memcg which would be a false positive failure reported
> +	 */
> +	if (tsk_is_oom_victim(current))
> +		goto unlock;
> +

This is not wrong, but it is strange. We can use mutex_lock_killable(&oom_lock)
so that threads which have already been killed no longer wait for oom_lock.

Also, closing this race for only the memcg OOM path is strange. The global
OOM path (tasks sharing an mm via CLONE_VM without CLONE_THREAD) still
suffers from this race, though at a lower frequency than memcg OOM because
it uses mutex_trylock(). Either checking before calling out_of_memory(), or
checking task_will_free_mem(current) inside out_of_memory(), would close
this race for both paths.

>  	ret = out_of_memory(&oc);
> +
> +unlock:
>  	mutex_unlock(&oom_lock);
>  	return ret;
>  }
>
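For concreteness, the mutex_lock_killable() alternative I am suggesting would
look roughly like the sketch below. This is against the same
mem_cgroup_out_of_memory() shown in the patch, is untested, and is only meant
to illustrate the idea: a task killed while sleeping on oom_lock bails out
immediately and lets try_charge force the charge, without ever calling
out_of_memory() and without any tsk_is_oom_victim() check under the lock.

	static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
					     int order)
	{
		struct oom_control oc = {
			.memcg = memcg,
			.gfp_mask = gfp_mask,
			.order = order,
		};
		bool ret;

		/*
		 * If a fatal signal is pending (e.g. we were selected as an
		 * OOM victim while waiting), mutex_lock_killable() returns
		 * non-zero. Report "success" so that try_charge bails out
		 * and forces the charge instead of printing a racy
		 * no-eligible-task report.
		 */
		if (mutex_lock_killable(&oom_lock))
			return true;
		ret = out_of_memory(&oc);
		mutex_unlock(&oom_lock);
		return ret;
	}

This also naturally avoids the goto/unlock dance in the proposed patch.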