From: Liu Shixin <liushixin2@huawei.com>
To: Yosry Ahmed, Huang Ying, Michal Hocko, Johannes Weiner,
	Roman Gushchin, Shakeel Butt, Muchun Song, Andrew Morton
Cc: Liu Shixin
Subject: [PATCH v3] mm: vmscan: try to reclaim swapcache pages if no swap space
Date: Sat, 26 Aug 2023 11:44:01 +0800
Message-ID: <20230826034401.640861-1-liushixin2@huawei.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 7BIT
Content-Type: text/plain;
	charset=US-ASCII
X-Mailing-List: linux-kernel@vger.kernel.org

When the space of all swap devices is exhausted, only file pages can be
reclaimed, but some swapcache pages may still remain on the anon LRU
lists. This can lead to a premature out-of-memory kill. Fix this by also
checking the number of swapcache pages in can_reclaim_anon_pages(). Add
a new bit, swapcache_only, to struct scan_control to skip isolating anon
pages that are not in the swap cache when only the swap cache can be
reclaimed.

Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
 include/linux/swap.h |  6 ++++++
 mm/memcontrol.c      |  8 ++++++++
 mm/vmscan.c          | 29 +++++++++++++++++++++++++++--
 3 files changed, 41 insertions(+), 2 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 456546443f1f..0318e918bfa4 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -669,6 +669,7 @@ static inline void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr_p
 }
 
 extern long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg);
+extern long mem_cgroup_get_nr_swapcache_pages(struct mem_cgroup *memcg);
 extern bool mem_cgroup_swap_full(struct folio *folio);
 #else
 static inline void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
@@ -691,6 +692,11 @@ static inline long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
 	return get_nr_swap_pages();
 }
 
+static inline long mem_cgroup_get_nr_swapcache_pages(struct mem_cgroup *memcg)
+{
+	return total_swapcache_pages();
+}
+
 static inline bool mem_cgroup_swap_full(struct folio *folio)
 {
 	return vm_swap_full();
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e8ca4bdcb03c..c465829db92b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -7567,6 +7567,14 @@ long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
 	return nr_swap_pages;
 }
 
+long mem_cgroup_get_nr_swapcache_pages(struct mem_cgroup *memcg)
+{
+	if (mem_cgroup_disabled())
+		return total_swapcache_pages();
+
+	return memcg_page_state(memcg, NR_SWAPCACHE);
+}
+
 bool mem_cgroup_swap_full(struct folio *folio)
 {
 	struct mem_cgroup *memcg;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7c33c5b653ef..5cb4adf6642b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -137,6 +137,9 @@ struct scan_control {
 	/* Always discard instead of demoting to lower tier memory */
 	unsigned int no_demotion:1;
 
+	/* Swap space is exhausted, only reclaim swapcache for anon LRU */
+	unsigned int swapcache_only:1;
+
 	/* Allocation order */
 	s8 order;
 
@@ -613,10 +616,20 @@ static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
 		 */
 		if (get_nr_swap_pages() > 0)
 			return true;
+		/* Is there any swapcache pages to reclaim? */
+		if (total_swapcache_pages() > 0) {
+			sc->swapcache_only = 1;
+			return true;
+		}
 	} else {
 		/* Is the memcg below its swap limit? */
 		if (mem_cgroup_get_nr_swap_pages(memcg) > 0)
 			return true;
+		/* Is there any swapcache pages in memcg to reclaim? */
+		if (mem_cgroup_get_nr_swapcache_pages(memcg) > 0) {
+			sc->swapcache_only = 1;
+			return true;
+		}
 	}
 
 	/*
@@ -2280,6 +2293,19 @@ static bool skip_cma(struct folio *folio, struct scan_control *sc)
 }
 #endif
 
+static bool skip_isolate(struct folio *folio, struct scan_control *sc,
+			 enum lru_list lru)
+{
+	if (folio_zonenum(folio) > sc->reclaim_idx)
+		return true;
+	if (skip_cma(folio, sc))
+		return true;
+	if (unlikely(sc->swapcache_only && !is_file_lru(lru) &&
+		     !folio_test_swapcache(folio)))
+		return true;
+	return false;
+}
+
 /*
  * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
 *
@@ -2326,8 +2352,7 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 		nr_pages = folio_nr_pages(folio);
 		total_scan += nr_pages;
 
-		if (folio_zonenum(folio) > sc->reclaim_idx ||
-				skip_cma(folio, sc)) {
+		if (skip_isolate(folio, sc, lru)) {
 			nr_skipped[folio_zonenum(folio)] += nr_pages;
 			move_to = &folios_skipped;
 			goto move;
-- 
2.25.1