Date: Thu, 8 Jun 2017 16:36:07 +0200
From: Michal Hocko
To: Andrew Morton
Cc: Johannes Weiner, Roman Gushchin, Tetsuo Handa, Vladimir Davydov,
 linux-mm@kvack.org, LKML
Subject: Re: [RFC PATCH 2/2] mm, oom: do not trigger out_of_memory from the #PF
Message-ID: <20170608143606.GK19866@dhcp22.suse.cz>
References: <20170519112604.29090-1-mhocko@kernel.org>
 <20170519112604.29090-3-mhocko@kernel.org>
In-Reply-To: <20170519112604.29090-3-mhocko@kernel.org>
User-Agent: Mutt/1.5.23 (2014-03-12)

Does anybody see any problem with the patch, or can I send it for inclusion?

On Fri 19-05-17 13:26:04, Michal Hocko wrote:
> From: Michal Hocko
> 
> Any allocation failure during the #PF path will return with VM_FAULT_OOM,
> which in turn results in pagefault_out_of_memory. This can happen for
> two different reasons: a) the memcg is out of memory and we rely on
> mem_cgroup_oom_synchronize to perform the memcg OOM handling, or b) a
> normal allocation fails.
> 
> The latter is quite problematic because allocation paths already trigger
> out_of_memory and the page allocator tries really hard to not fail
> allocations. Anyway, if the OOM killer has already been invoked there
> is no reason to invoke it again from the #PF path. Especially when the
> OOM condition might be gone by that time and we have no way to find out
> other than to allocate.
> 
> Moreover, if the allocation failed and the OOM killer hasn't been
> invoked, then we are unlikely to do the right thing from the #PF context
> because we have already lost the allocation context and restrictions and
> therefore might oom kill a task from a different NUMA domain.
> 
> An allocation might also fail when the current task is the oom victim
> and there are no memory reserves left; we should simply bail out
> from the #PF rather than invoking out_of_memory.
> 
> This all suggests that there is no legitimate reason to trigger
> out_of_memory from pagefault_out_of_memory, so drop it. Just to be sure
> that no #PF path returns with VM_FAULT_OOM without an allocation, print
> a warning that this is happening before we restart the #PF.
> 
> Signed-off-by: Michal Hocko
> ---
>  mm/oom_kill.c | 23 ++++++++++-------------
>  1 file changed, 10 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 04c9143a8625..0f24bdfaadfd 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -1051,25 +1051,22 @@ bool out_of_memory(struct oom_control *oc)
>  }
>  
>  /*
> - * The pagefault handler calls here because it is out of memory, so kill a
> - * memory-hogging task. If oom_lock is held by somebody else, a parallel oom
> - * killing is already in progress so do nothing.
> + * The pagefault handler calls here because some allocation has failed. We
> + * have to take care of the memcg OOM here because this is the only safe
> + * context without any locks held, but let the oom killer triggered from the
> + * allocation context care about the global OOM.
>   */
>  void pagefault_out_of_memory(void)
>  {
> -	struct oom_control oc = {
> -		.zonelist = NULL,
> -		.nodemask = NULL,
> -		.memcg = NULL,
> -		.gfp_mask = 0,
> -		.order = 0,
> -	};
> +	static DEFINE_RATELIMIT_STATE(pfoom_rs, DEFAULT_RATELIMIT_INTERVAL,
> +				      DEFAULT_RATELIMIT_BURST);
>  
>  	if (mem_cgroup_oom_synchronize(true))
>  		return;
>  
> -	if (!mutex_trylock(&oom_lock))
> +	if (fatal_signal_pending(current))
>  		return;
> -	out_of_memory(&oc);
> -	mutex_unlock(&oom_lock);
> +
> +	if (__ratelimit(&pfoom_rs))
> +		pr_warn("Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF\n");
>  }
> -- 
> 2.11.0
> 
-- 
Michal Hocko
SUSE Labs
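
For reference, with the hunk above applied, pagefault_out_of_memory() in
mm/oom_kill.c would read roughly as below. This is a sketch reconstructed from
the diff, not a verbatim copy of the resulting file; the surrounding context of
oom_kill.c is elided and the inline comments are editorial, paraphrasing the
changelog:

/*
 * The pagefault handler calls here because some allocation has failed. We
 * have to take care of the memcg OOM here because this is the only safe
 * context without any locks held, but let the oom killer triggered from the
 * allocation context care about the global OOM.
 */
void pagefault_out_of_memory(void)
{
	static DEFINE_RATELIMIT_STATE(pfoom_rs, DEFAULT_RATELIMIT_INTERVAL,
				      DEFAULT_RATELIMIT_BURST);

	/* The memcg OOM is handled here because this context holds no locks. */
	if (mem_cgroup_oom_synchronize(true))
		return;

	/* The current task is already being killed; just let the #PF bail out. */
	if (fatal_signal_pending(current))
		return;

	/* Any remaining VM_FAULT_OOM is unexpected; warn and let the #PF retry. */
	if (__ratelimit(&pfoom_rs))
		pr_warn("Huh VM_FAULT_OOM leaked out to the #PF handler. Retrying PF\n");
}

The ratelimit state keeps the warning from flooding the log if an
architecture's fault handler keeps leaking VM_FAULT_OOM while retrying the
same fault.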