Date: Wed, 18 Mar 2020 14:40:45 -0700 (PDT)
From: David Rientjes
To: Michal Hocko
Cc: Andrew Morton, Tetsuo Handa, Vlastimil Babka, Robert Kolchmeyer,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch v2] mm, oom: prevent soft lockup on memcg oom for UP systems
In-Reply-To: <20200318094219.GE21362@dhcp22.suse.cz>
References: <8395df04-9b7a-0084-4bb5-e430efe18b97@i-love.sakura.ne.jp>
 <202003170318.02H3IpSx047471@www262.sakura.ne.jp>
 <20200318094219.GE21362@dhcp22.suse.cz>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)

On Wed, 18 Mar 2020, Michal Hocko wrote:

> > When a process is oom killed as a result of memcg limits and the victim
> > is waiting to exit, nothing ends up actually yielding the processor back
> > to the victim on UP systems with preemption disabled.  Instead, the
> > charging process simply loops in memcg reclaim and eventually soft
> > lockups.
>
> It seems that my request to describe the setup got ignored. Sigh.
>
> > Memory cgroup out of memory: Killed process 808 (repro) total-vm:41944kB,
> > anon-rss:35344kB, file-rss:504kB, shmem-rss:0kB, UID:0 pgtables:108kB
> > oom_score_adj:0
> > watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [repro:806]
> > CPU: 0 PID: 806 Comm: repro Not tainted 5.6.0-rc5+ #136
> > RIP: 0010:shrink_lruvec+0x4e9/0xa40
> > ...
> > Call Trace:
> >  shrink_node+0x40d/0x7d0
> >  do_try_to_free_pages+0x13f/0x470
> >  try_to_free_mem_cgroup_pages+0x16d/0x230
> >  try_charge+0x247/0xac0
> >  mem_cgroup_try_charge+0x10a/0x220
> >  mem_cgroup_try_charge_delay+0x1e/0x40
> >  handle_mm_fault+0xdf2/0x15f0
> >  do_user_addr_fault+0x21f/0x420
> >  page_fault+0x2f/0x40
> >
> > Make sure that once the oom killer has been called that we forcibly yield
> > if current is not the chosen victim regardless of priority to allow for
> > memory freeing.  The same situation can theoretically occur in the page
> > allocator, so do this after dropping oom_lock there as well.
>
> I would have prefered the cond_resched solution proposed previously but
> I can live with this as well. I would just ask to add more information
> to the changelog. E.g.

I'm still planning on sending the cond_resched() change as well, but, per
Tetsuo's feedback, not advertised as the fix for this particular issue.  I
think the reported issue showed that it's possible to loop excessively in
reclaim without a conditional yield, depending on the memcg configuration,
so the shrink_node_memcgs() cond_resched() is still appropriate: both for
interactivity and because the iteration over memcgs can be particularly
long.

> "
> We used to have a short sleep after the oom handling but 9bfe5ded054b
> ("mm, oom: remove sleep from under oom_lock") has removed it because
> sleep inside the oom_lock is dangerous. This patch restores the sleep
> outside of the lock.

Will do.

> "
>
> > Suggested-by: Tetsuo Handa
> > Tested-by: Robert Kolchmeyer
> > Cc: stable@vger.kernel.org
> > Signed-off-by: David Rientjes
> > ---
> >  mm/memcontrol.c | 2 ++
> >  mm/page_alloc.c | 2 ++
> >  2 files changed, 4 insertions(+)
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -1576,6 +1576,8 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
> >  	 */
> >  	ret = should_force_charge() || out_of_memory(&oc);
> >  	mutex_unlock(&oom_lock);
> > +	if (!fatal_signal_pending(current))
> > +		schedule_timeout_killable(1);
>
> Check for fatal_signal_pending is redundant.
>
> --
> Michal Hocko
> SUSE Labs
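
For reference, the separate cond_resched() change I mention above would go
in the memcg iteration loop of shrink_node_memcgs().  Roughly the shape,
as a simplified sketch against my reading of 5.6-rc (protection checks,
slab shrinking, and vmpressure elided; this is not the patch as it will
actually be posted):

static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
{
	struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
	struct mem_cgroup *memcg;

	memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
	do {
		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);

		/*
		 * The iteration over memcgs can be very long and nothing in
		 * this loop is otherwise guaranteed to reschedule, so yield
		 * explicitly before reclaiming from each memcg; this matters
		 * most on UP with preemption disabled.
		 */
		cond_resched();

		/* ... memory.min/low protection checks elided ... */

		shrink_lruvec(lruvec, sc);

		/* ... shrink_slab() and vmpressure() elided ... */
	} while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));
}

The point is only the placement: one explicit yield per memcg iterated, so
a single busy CPU can still hand the processor back to the victim even
without the sleep added by the patch above.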