From: Xunlei Pang <xlpang@linux.alibaba.com>
To: Johannes Weiner, Michal Hocko, Andrew Morton, Vladimir Davydov, Chris Down
Cc: Xunlei Pang, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v3] mm: memcg: Fix memcg reclaim soft lockup
Date: Thu, 27 Aug 2020 10:32:29 +0800
Message-Id: <1598495549-67324-1-git-send-email-xlpang@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
X-Mailing-List: linux-kernel@vger.kernel.org

We've hit a soft lockup with CONFIG_PREEMPT_NONE=y when the target
memcg doesn't have any reclaimable memory.
It can be easily reproduced as below:

watchdog: BUG: soft lockup - CPU#0 stuck for 111s! [memcg_test:2204]
CPU: 0 PID: 2204 Comm: memcg_test Not tainted 5.9.0-rc2+ #12
Call Trace:
 shrink_lruvec+0x49f/0x640
 shrink_node+0x2a6/0x6f0
 do_try_to_free_pages+0xe9/0x3e0
 try_to_free_mem_cgroup_pages+0xef/0x1f0
 try_charge+0x2c1/0x750
 mem_cgroup_charge+0xd7/0x240
 __add_to_page_cache_locked+0x2fd/0x370
 add_to_page_cache_lru+0x4a/0xc0
 pagecache_get_page+0x10b/0x2f0
 filemap_fault+0x661/0xad0
 ext4_filemap_fault+0x2c/0x40
 __do_fault+0x4d/0xf9
 handle_mm_fault+0x1080/0x1790

It only happens on our 1-vcpu instances, because there is no chance for
the oom reaper to run and reclaim the memory of the to-be-killed process.

Add a cond_resched() in the upper shrink_node_memcgs() loop to solve
this issue. This gives us a scheduling point for each memcg in the
reclaimed hierarchy, with no dependency on the amount of reclaimable
memory in that memcg, thus making it more predictable.

Acked-by: Chris Down
Acked-by: Michal Hocko
Suggested-by: Michal Hocko
Signed-off-by: Xunlei Pang
---
 mm/vmscan.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 99e1796..9727dd8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2615,6 +2615,14 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 		unsigned long reclaimed;
 		unsigned long scanned;
 
+		/*
+		 * This loop can become CPU-bound when target memcgs
+		 * aren't eligible for reclaim - either because they
+		 * don't have any reclaimable pages, or because their
+		 * memory is explicitly protected. Avoid soft lockups.
+		 */
+		cond_resched();
+
 		mem_cgroup_calculate_protection(target_memcg, memcg);
 
 		if (mem_cgroup_below_min(memcg)) {
-- 
1.8.3.1