Date: Wed, 12 Sep 2018 14:35:34 +0200
From: Michal Hocko
To: Roman Gushchin
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@fb.com,
    Johannes Weiner, Vladimir Davydov
Subject: Re: [PATCH RFC] mm: don't raise MEMCG_OOM event due to failed high-order allocation
Message-ID: <20180912123534.GG10951@dhcp22.suse.cz>
References: <20180910215622.4428-1-guro@fb.com>
 <20180911121141.GS10951@dhcp22.suse.cz>
 <20180911152725.GA28828@tower.DHCP.thefacebook.com>
In-Reply-To: <20180911152725.GA28828@tower.DHCP.thefacebook.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Tue 11-09-18 08:27:30, Roman Gushchin wrote:
> On Tue, Sep 11, 2018 at 02:11:41PM +0200, Michal Hocko wrote:
> > On Mon 10-09-18 14:56:22, Roman Gushchin wrote:
> > > The memcg OOM killer is never invoked due to a failed high-order
> > > allocation, however the MEMCG_OOM event can be easily raised.
> > >
> > > Under some memory pressure it can happen easily because of a
> > > concurrent allocation. Let's look at try_charge(). Even if we were
> > > able to reclaim enough memory, this check can fail due to a race
> > > with another allocation:
> > >
> > >     if (mem_cgroup_margin(mem_over_limit) >= nr_pages)
> > >         goto retry;
> > >
> > > For regular pages the following condition will save us from
> > > triggering the OOM:
> > >
> > >     if (nr_reclaimed && nr_pages <= (1 << PAGE_ALLOC_COSTLY_ORDER))
> > >         goto retry;
> > >
> > > But for a high-order allocation this condition will intentionally
> > > fail. The reason behind this is that we'll likely fall back to
> > > regular pages anyway, so it's OK and even preferred to return
> > > ENOMEM.
> > >
> > > In this case the idea of raising the MEMCG_OOM event looks dubious.
> >
> > Why is this a problem though? IIRC this event was deliberately placed
> > outside of the oom path because we wanted to count allocation failures,
> > and this is also documented that way:
> >
> >   oom
> >         The number of times the cgroup's memory usage reached
> >         the limit and allocation was about to fail.
> >
> >         Depending on context, the result could be invocation of the
> >         OOM killer and retrying allocation, or failing allocation.
> >
> > One could argue that we do not apply the same logic to GFP_NOWAIT
> > requests, but in general I would like to see a good reason to change
> > the behavior, and if it is really the right thing to do then we need
> > to update the documentation as well.
>
> Right, the current behavior matches the documentation, because the
> description of the event is broad enough. My point is that the current
> behavior is not useful in my corner case.
>
> Let me explain my case in detail: I've got a report about sporadic memcg
> OOM kills on some hosts with plenty of pagecache and low memory pressure.
> You'll probably agree that raising an OOM signal in this case looks
> strange.

I am not sure I follow. So you see both OOM_KILL and OOM events and the
user misinterprets the OOM ones? My understanding was that the OOM event
should tell the admin that the limit should be increased in order to allow
more charges. Without OOM_KILL events it means that those failed charges
have some sort of fallback, so it is not a critical condition for the
workload yet. Something to watch for, though, in case of performance
degradation or potential misbehavior. Whether this is how the event is
used, I dunno.

Anyway, if you want to just move the event and make it closer to OOM_KILL,
then I strongly suspect the event is losing its relevance.
-- 
Michal Hocko
SUSE Labs
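
For reference, below is a minimal stand-alone model of the decision flow
being debated. It is not the kernel's try_charge() itself: only the two
checks quoted in the thread are taken from it, while the struct, function,
and scenario names are illustrative and chosen just to make the example
compile and run on its own.

  #include <stdbool.h>
  #include <stdio.h>

  #define PAGE_ALLOC_COSTLY_ORDER 3  /* same threshold as the quoted check */

  struct charge_attempt {
  	unsigned int order;          /* allocation order being charged */
  	bool margin_ok;              /* room left under the limit after reclaim? */
  	bool reclaim_made_progress;  /* did memcg reclaim free anything? */
  };

  /* Returns true if the modelled MEMCG_OOM event would be counted. */
  static bool raises_memcg_oom(const struct charge_attempt *a)
  {
  	unsigned int nr_pages = 1u << a->order;

  	/*
  	 * Racy margin check: a concurrent charge can consume the memory
  	 * we just reclaimed, so this may fail even after successful reclaim.
  	 */
  	if (a->margin_ok)
  		return false;  /* "goto retry" in the real code */

  	/* Regular (non-costly) requests keep retrying while reclaim works. */
  	if (a->reclaim_made_progress &&
  	    nr_pages <= (1u << PAGE_ALLOC_COSTLY_ORDER))
  		return false;  /* "goto retry" in the real code */

  	/*
  	 * High-order requests fall through: the caller is expected to fall
  	 * back to order-0 pages, yet the OOM event is still counted before
  	 * ENOMEM is returned, which is what the patch objects to.
  	 */
  	return true;
  }

  int main(void)
  {
  	struct charge_attempt high_order = {
  		.order = 4, .margin_ok = false, .reclaim_made_progress = true,
  	};
  	struct charge_attempt regular = {
  		.order = 0, .margin_ok = false, .reclaim_made_progress = true,
  	};

  	printf("order-4 charge counts MEMCG_OOM: %s\n",
  	       raises_memcg_oom(&high_order) ? "yes" : "no");
  	printf("order-0 charge counts MEMCG_OOM: %s\n",
  	       raises_memcg_oom(&regular) ? "yes" : "no");
  	return 0;
  }

Built as plain C, this prints "yes" for the order-4 case and "no" for the
order-0 case, which mirrors the asymmetry Roman describes: after a racy
margin check failure, only the costly request counts an OOM event even
though no OOM kill will follow.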