Date: Mon, 18 Oct 2021 13:53:43 +0200
From: Michal Hocko
To: Vasily Averin
Cc: Johannes Weiner, Vladimir Davydov, Andrew Morton, Roman Gushchin,
 Uladzislau Rezki,
 Vlastimil Babka, Shakeel Butt, Mel Gorman, cgroups@vger.kernel.org,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel@openvz.org
Subject: Re: [PATCH memcg 0/1] false global OOM triggered by memcg-limited task
References: <9d10df01-0127-fb40-81c3-cc53c9733c3e@virtuozzo.com>
 <6b751abe-aa52-d1d8-2631-ec471975cc3a@virtuozzo.com>
In-Reply-To: <6b751abe-aa52-d1d8-2631-ec471975cc3a@virtuozzo.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon 18-10-21 13:05:35, Vasily Averin wrote:
> On 18.10.2021 12:04, Michal Hocko wrote:
> > On Mon 18-10-21 11:13:52, Vasily Averin wrote:
> > [...]
> >> How could this happen?
> >>
> >> A user-space task inside a memcg-limited container generated a page fault;
> >> its handler do_user_addr_fault() called handle_mm_fault(), which could not
> >> allocate the page because the memcg limit was exceeded, and returned
> >> VM_FAULT_OOM. do_user_addr_fault() then called pagefault_out_of_memory(),
> >> which executed out_of_memory() without any memcg set.
> >>
> >> This problem partially depends on one of my recent patches, which disabled
> >> unlimited memory allocation for dying tasks. However, I think the problem
> >> can also happen for non-killed tasks, for example because of the kmem limit.
> >
> > Could you be more specific on how this can happen without your patch? I
> > have to say I had not realized this side effect when we discussed it.
>
> We can reach obj_cgroup_charge_pages() for example via:
>
> do_user_addr_fault
>  handle_mm_fault
>   __handle_mm_fault
>    p4d_alloc
>     __p4d_alloc
>      p4d_alloc_one
>       get_zeroed_page
>        __get_free_pages
>         alloc_pages
>          __alloc_pages
>           __memcg_kmem_charge_page
>            obj_cgroup_charge_pages
>
> Here we call try_charge_memcg(), which returns success and approves the
> allocation; however, we then hit the kmem limit and the allocation fails.
Just to make sure I understand: this would be for the v1 explicit kmem limit,
correct?

> If required, I can try to work out how try_charge_memcg() can reject a page
> allocation of a non-dying task too.

Yes.

> > I will be honest: I am not really happy about pagefault_out_of_memory.
> > I have tried to remove it in the past, without much success back then,
> > unfortunately [1]. Maybe we should get rid of it finally. The OOM is
> > always triggered from inside the allocator, where we have much more
> > information about the allocation context. A first step would be to skip
> > pagefault_out_of_memory for killed or exiting processes.
>
> I like this idea; however, it may not be enough, at least in the scenario
> described above.

My original patch removed the oom killer completely.
-- 
Michal Hocko
SUSE Labs