Subject: Re: [PATCH memcg v3 2/3] mm, oom: do not trigger out_of_memory from the #PF
To: Vasily Averin, Michal Hocko
Cc: Roman Gushchin, Uladzislau Rezki, Vlastimil Babka, Shakeel Butt, Mel Gorman, cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel@openvz.org, Johannes Weiner, Vladimir Davydov, Andrew Morton
From: Tetsuo Handa
Date: Sun, 24 Oct 2021 00:01:07 +0900
X-Mailing-List: linux-kernel@vger.kernel.org

On 2021/10/23 22:20, Vasily Averin wrote:
>  /*
> - * The pagefault handler calls here because it is out of memory, so kill a
> - * memory-hogging task. If oom_lock is held by somebody else, a parallel oom
> - * killing is already in progress so do nothing.
> + * The pagefault handler calls here because some allocation has failed. We have
> + * to take care of the memcg OOM here because this is the only safe context without
> + * any locks held but let the oom killer triggered from the allocation context care
> + * about the global OOM.
>   */

Excuse me for a stupid question. I consider

	if (!mutex_trylock(&oom_lock))
		return;
	out_of_memory(&oc);
	mutex_unlock(&oom_lock);

here as the last resort (safeguard) when neither __alloc_pages_may_oom() nor
mem_cgroup_out_of_memory() can make progress. This patch says

	let the oom killer triggered from the allocation context care
	about the global OOM.

but what if the OOM killer cannot be invoked from the allocation context?
Is there a guarantee that all memory allocations which might result in
VM_FAULT_OOM can invoke the OOM killer?