Date: Wed, 7 Nov 2018 11:08:10 +0100
From: Michal Hocko
To: Johannes Weiner, Tetsuo Handa
Cc: linux-mm@kvack.org, David Rientjes, Andrew Morton, LKML
Subject: Re: [RFC PATCH 2/2] memcg: do not report racy no-eligible OOM tasks
Message-ID: <20181107100810.GA27423@dhcp22.suse.cz>
References: <20181022071323.9550-1-mhocko@kernel.org>
 <20181022071323.9550-3-mhocko@kernel.org>
 <20181026142531.GA27370@cmpxchg.org>
 <20181026192551.GC18839@dhcp22.suse.cz>
 <20181026193304.GD18839@dhcp22.suse.cz>
 <20181106124224.GM27423@dhcp22.suse.cz>
 <8725e3b3-3752-fa7f-a88f-5ff4f5b6eace@i-love.sakura.ne.jp>
In-Reply-To: <8725e3b3-3752-fa7f-a88f-5ff4f5b6eace@i-love.sakura.ne.jp>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed 07-11-18 18:45:27, Tetsuo Handa wrote:
> On 2018/11/06 21:42, Michal Hocko wrote:
> > On Tue 06-11-18 18:44:43, Tetsuo Handa wrote:
> > [...]
> >> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> >> index 6e1469b..a97648a 100644
> >> --- a/mm/memcontrol.c
> >> +++ b/mm/memcontrol.c
> >> @@ -1382,8 +1382,13 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
> >>  	};
> >>  	bool ret;
> >>
> >> -	mutex_lock(&oom_lock);
> >> -	ret = out_of_memory(&oc);
> >> +	if (mutex_lock_killable(&oom_lock))
> >> +		return true;
> >> +	/*
> >> +	 * A few threads which were not waiting at mutex_lock_killable() can
> >> +	 * fail to bail out. Therefore, check again after holding oom_lock.
> >> +	 */
> >> +	ret = fatal_signal_pending(current) || out_of_memory(&oc);
> >>  	mutex_unlock(&oom_lock);
> >>  	return ret;
> >>  }
> >
> > If we are going with a memcg-specific thingy then I really prefer the
> > tsk_is_oom_victim approach. Or is there any reason why this is not
> > suitable?
>
> Why do we need to wait for mark_oom_victim(), which is called after the
> slow printk() messages? If the current thread got Ctrl-C and can therefore
> terminate, what is the benefit of waiting for the OOM killer? What if there
> are several OOM events in multiple memcg domains all waiting for those
> printk() messages to complete? I don't see the point of waiting for
> oom_lock, since try_charge() already allows the current thread to terminate
> thanks to its fatal_signal_pending() test.

mutex_lock_killable would already take care of an exiting task. I would
still prefer to check for mark_oom_victim because that is not racy with
the exit path clearing signals. I can update my patch to use the
_killable lock variant if we are really going with the memcg-specific
fix.

Johannes?
-- 
Michal Hocko
SUSE Labs
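
[For reference, a minimal sketch of the variant discussed above: it keeps the
mutex_lock_killable() change from the quoted hunk but checks
tsk_is_oom_victim(current) under oom_lock instead of fatal_signal_pending().
This is only an illustration of the idea, not the posted patch; the
surrounding function body (the oom_control initializer and the "int order"
parameter) is assumed from the 4.19-era mm/memcontrol.c source.]

/*
 * Hypothetical sketch, not the actual patch: take oom_lock killably and,
 * once the lock is held, skip another OOM kill when the current task has
 * already been selected as an OOM victim. Per the discussion above, the
 * victim marking, unlike pending signals, is not cleared by the exit
 * path, so this check does not race the way fatal_signal_pending() can.
 */
static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
				     int order)
{
	struct oom_control oc = {
		.zonelist = NULL,
		.nodemask = NULL,
		.memcg = memcg,
		.gfp_mask = gfp_mask,
		.order = order,
	};
	bool ret;

	/* The caller is being killed: give up and pretend the charge may proceed. */
	if (mutex_lock_killable(&oom_lock))
		return true;
	/*
	 * A task that is already an OOM victim has access to memory reserves
	 * and is guaranteed to make forward progress, so do not invoke the
	 * OOM killer again on its behalf.
	 */
	ret = tsk_is_oom_victim(current) || out_of_memory(&oc);
	mutex_unlock(&oom_lock);
	return ret;
}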