Date: Tue, 19 Oct 2021 16:13:18 +0200
From: Michal Hocko
To: Vasily Averin
Cc: Johannes Weiner, Vladimir Davydov, Andrew Morton, Roman Gushchin, Uladzislau Rezki
, Vlastimil Babka, Shakeel Butt, Mel Gorman, cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel@openvz.org
Subject: Re: [PATCH memcg 0/1] false global OOM triggered by memcg-limited task
References: <9d10df01-0127-fb40-81c3-cc53c9733c3e@virtuozzo.com>
 <6b751abe-aa52-d1d8-2631-ec471975cc3a@virtuozzo.com>
 <339ae4b5-6efd-8fc2-33f1-2eb3aee71cb2@virtuozzo.com>
 <687bf489-f7a7-5604-25c5-0c1a09e0905b@virtuozzo.com>
 <6c422150-593f-f601-8f91-914c6c5e82f4@virtuozzo.com>
In-Reply-To: <6c422150-593f-f601-8f91-914c6c5e82f4@virtuozzo.com>

On Tue 19-10-21 16:26:50, Vasily Averin wrote:
> On 19.10.2021 15:04, Michal Hocko wrote:
> > On Tue 19-10-21 13:54:42, Michal Hocko wrote:
> >> On Tue 19-10-21 13:30:06, Vasily Averin wrote:
> >>> On 19.10.2021 11:49, Michal Hocko wrote:
> >>>> On Tue 19-10-21 09:30:18, Vasily Averin wrote:
> >>>> [...]
> >>>>> With my patch ("memcg: prohibit unconditional exceeding the limit of dying tasks") try_charge_memcg() can fail:
> >>>>> a) due to a fatal signal
> >>>>> b) when mem_cgroup_oom -> mem_cgroup_out_of_memory -> out_of_memory() returns false (when select_bad_process() finds nothing)
> >>>>>
> >>>>> To handle a) we can follow your suggestion and skip execution of out_of_memory() in pagefault_out_of_memory().
> >>>>> To handle b) we can go to retry: if mem_cgroup_oom() returns OOM_FAILED.
> >>>
> >>>> How is b) possible without current being killed? Do we allow remote
> >>>> charging?
> >>>
> >>> out_of_memory for memcg_oom
> >>>  select_bad_process
> >>>   mem_cgroup_scan_tasks
> >>>    oom_evaluate_task
> >>>     oom_badness
> >>>
> >>> 	/*
> >>> 	 * Do not even consider tasks which are explicitly marked oom
> >>> 	 * unkillable or have been already oom reaped or they are in
> >>> 	 * the middle of vfork
> >>> 	 */
> >>> 	adj = (long)p->signal->oom_score_adj;
> >>> 	if (adj == OOM_SCORE_ADJ_MIN ||
> >>> 			test_bit(MMF_OOM_SKIP, &p->mm->flags) ||
> >>> 			in_vfork(p)) {
> >>> 		task_unlock(p);
> >>> 		return LONG_MIN;
> >>> 	}
> >>>
> >>> This time we are handling a userspace page fault, so we cannot be a kernel thread
> >>> and cannot be in_vfork().
> >>> However, the task can be marked as oom unkillable,
> >>> i.e. have p->signal->oom_score_adj == OOM_SCORE_ADJ_MIN
> >>
> >> You are right. I am not sure there is a way out of this though. The task
> >> can only retry forever in this case. There is nothing actionable here.
> >> We cannot kill the task and there is no other way to release the memory.
> >
> > Btw. don't we force the charge in that case?
>
> We should force the charge for allocations from inside the page fault handler,
> to prevent an endless cycle of retried page faults.
> However, we should not do it for allocations from task context,
> to prevent memcg-limited vmalloc-eaters from consuming all host memory.

I don't see a big difference between those two, because the #PF could
result in the very same situation, depleting all the memory by
overcharging. A different behavior just leads to confusion and
unexpected results. E.g. in the past we only triggered the memcg OOM
killer from the #PF path and failed the charge otherwise. That is
something different, but it shows problems we hadn't anticipated and
it had user visible problems. See 29ef680ae7c2 ("memcg, oom: move
out_of_memory back to the charge path").

> Also I would like to return to the following hunk.
> @@ -1575,7 +1575,7 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
>  	 * A few threads which were not waiting at mutex_lock_killable() can
>  	 * fail to bail out. Therefore, check again after holding oom_lock.
>  	 */
> -	ret = should_force_charge() || out_of_memory(&oc);
> +	ret = task_is_dying() || out_of_memory(&oc);
>
>  unlock:
>  	mutex_unlock(&oom_lock);
>
> Now I think it's better to keep the task_is_dying() check here.
> If the task is dying, it is not necessary to push another task to free the memory.
> We have already broken the vmalloc cycle, so it looks like nothing should prevent us
> from returning to userspace, handling the fatal signal, exiting and freeing the memory.

That patch has to be discussed in its full length. There were other
details I brought up, AFAIU.
--
Michal Hocko
SUSE Labs
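
[Editorial note: the a)/b) charge-failure breakdown discussed in this thread can be modeled in plain C. This is NOT kernel code: the task state, select_bad_process(), and the OOM path are reduced to booleans purely so the control flow is inspectable in isolation, and every name below (task_model, try_charge_model, the enum values) is invented for this sketch.]

```c
#include <stdbool.h>

enum charge_result { CHARGE_OK, CHARGE_RETRY, CHARGE_FAIL };

struct task_model {
	bool fatal_signal_pending;	/* case a): the charging task is already dying */
	bool victim_available;		/* case b): select_bad_process() found a task that
					 * is not skipped (e.g. not OOM_SCORE_ADJ_MIN) */
};

/* Decision sketch matching the a)/b) breakdown quoted in the thread. */
static enum charge_result try_charge_model(const struct task_model *t)
{
	if (t->fatal_signal_pending)
		return CHARGE_FAIL;	/* dying task: fail the charge and let it exit
					 * and free its memory, rather than OOM-kill */
	if (!t->victim_available)
		return CHARGE_RETRY;	/* out_of_memory() returned false (OOM_FAILED):
					 * nothing killable, so the caller retries */
	return CHARGE_OK;		/* a victim was chosen; the charge can proceed */
}
```

The model makes the sticking point visible: when the faulting task is itself OOM_SCORE_ADJ_MIN and is the only task in the memcg, victim_available stays false and fatal_signal_pending stays false, so the only possible outcome is an endless CHARGE_RETRY loop, which is exactly the "retry forever" case conceded above.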