Date: Thu, 21 Oct 2021 13:49:29 +0200
From: Michal Hocko
To: Vasily Averin
Cc: Johannes Weiner, Vladimir Davydov, Andrew Morton, Roman Gushchin,
    Uladzislau Rezki, Vlastimil Babka, Shakeel Butt, Mel Gorman,
    Tetsuo Handa, cgroups@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kernel@openvz.org
Subject: Re: [PATCH memcg 3/3] memcg: handle memcg oom failures

On Wed 20-10-21 18:46:56, Vasily Averin wrote:
> On 20.10.2021 16:02, Michal Hocko wrote:
> > On Wed 20-10-21 15:14:27, Vasily Averin wrote:
> >> mem_cgroup_oom() can fail if the current task was marked unkillable
> >> and the oom killer cannot find any victim.
> >>
> >> Currently we force the memcg charge for such allocations; however,
> >> this allows a memcg-limited userspace task to overuse its assigned
> >> limits and potentially trigger a global memory shortage.
> >
> > You should really go into more detail on whether that is a practical
> > problem to handle. OOM_FAILED means that the memcg oom killer couldn't
> > find any oom victim, so it cannot help with forward progress. There are
> > not that many situations when that can happen. Naming them would be
> > really useful.
>
> I've pointed this out above:
> "if current task was marked unkillable and oom killer cannot find any victim."
> This may happen when the current task cannot be oom-killed because it is
> marked unkillable, i.e. it has p->signal->oom_score_adj == OOM_SCORE_ADJ_MIN,
> and the other processes in the memcg are either dying, or are kernel threads,
> or are marked unkillable in the same way. Or when the memcg contains only
> this process.
>
> If we always approve such allocations, this can be misused.
> A process can mmap a lot of memory, then touch it, generate page faults and
> make overcharged memory allocations.
> Finally it can consume all node memory and trigger a global memory shortage
> on the host.

Yes, this is true, but a) OOM_SCORE_ADJ_MIN tasks are excluded from the
OOM handling, so they have to be careful with their memory consumption, and
b) is this a theoretical or a practical concern? This is mostly what I
wanted to make sure you describe in the changelog.
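For illustration, a minimal userspace sketch of the misuse pattern described
above (a hypothetical reproducer; it assumes the task runs in a memcg that is
already at its limit and cannot be picked by the memcg OOM killer, so every
fault is force-charged):

#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t chunk = 64UL << 20;	/* 64MB per iteration */

	for (;;) {
		/* anonymous mapping: nothing is charged until it is touched */
		char *p = mmap(NULL, chunk, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED)
			break;
		/*
		 * Touch every page: each #PF charges the memcg; with forced
		 * charging the limit is exceeded and only the node memory
		 * bounds the consumption.
		 */
		memset(p, 1, chunk);
	}
	pause();
	return 0;
}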
> >> Let's fail the memory charge in such cases.
> >>
> >> This failure should be somehow recognised in #PF context,
> >
> > explain why
>
> When a #PF cannot allocate memory (due to the reason described above),
> handle_mm_fault() returns VM_FAULT_OOM, and its caller then executes
> pagefault_out_of_memory(). If the latter cannot recognize the real reason
> for this failure, it assumes it was a global memory shortage and executes
> the global out_of_memory(), which can kill a random process or even crash
> the node if sysctl vm.panic_on_oom is set to 1.
>
> Currently pagefault_out_of_memory() knows about a possible async memcg OOM
> and handles it correctly. However, it is not aware that memcg can reject
> some other allocations, does not recognize the fault as memcg-related,
> and allows a global OOM to run.

Again, something to be added to the changelog.

> >> so let's use current->memcg_in_oom == (struct mem_cgroup *)OOM_FAILED
> >>
> >> ToDo: what is the best way to notify pagefault_out_of_memory() about
> >> mem_cgroup_out_of_memory failure?
> >
> > why don't you simply remove out_of_memory from pagefault_out_of_memory
> > and leave it only with the blocking memcg OOM handling? Wouldn't that be a
> > more generic solution? Your first patch already goes that way partially.
>
> I clearly understand that the global out_of_memory should not be triggered
> by memcg restrictions.
> I clearly understand that a dying task will release some memory soon, so we
> do not need to run a global oom if the current task is dying.
>
> However I'm not sure that I can remove out_of_memory at all. At least I do
> not have good arguments for doing it.

I do understand that handling a very specific case sounds easier, but it
would be better to have a robust fix even if that requires some more
head scratching.

So far we have collected several reasons why it is bad to trigger the
oom killer from the #PF path. There is no single argument to keep it, so
it sounds like a viable path to pursue. Maybe there are some very well
hidden reasons, but those should be documented, and this is a great
opportunity to do either of those steps.

Moreover, if it turns out that there is a regression then this can be
easily reverted and a different, maybe memcg specific, solution can be
implemented.
--
Michal Hocko
SUSE Labs
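For reference, a rough sketch of the direction suggested above: a
pagefault_out_of_memory() with the global out_of_memory() invocation dropped,
keeping only the synchronous finish of an async memcg OOM. This is an
assumption-laden sketch modelled on the mm/oom_kill.c code of that era, not
necessarily the change that eventually landed upstream:

/* mm/oom_kill.c -- sketch only */
void pagefault_out_of_memory(void)
{
	static DEFINE_RATELIMIT_STATE(pfoom_rs, DEFAULT_RATELIMIT_INTERVAL,
				      DEFAULT_RATELIMIT_BURST);

	/* Finish a memcg OOM that was set up during the failed charge. */
	if (mem_cgroup_oom_synchronize(true))
		return;

	/* A fatal signal means the task will exit and release memory soon. */
	if (fatal_signal_pending(current))
		return;

	/*
	 * No global out_of_memory() here any more: a VM_FAULT_OOM that was
	 * not caused by a memcg limit points at a problem in the allocator
	 * paths, so just warn and let the page fault be retried.
	 */
	if (__ratelimit(&pfoom_rs))
		pr_warn("VM_FAULT_OOM reached the #PF handler, retrying the fault\n");
}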