From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Matthew Wilcox, Minchan Kim, Joonsoo Kim, Zhaoyang Huang
Subject: [PATCHv3] mm: skip CMA pages when they are not available
Date: Mon, 22 May 2023 11:08:02 +0800
Message-ID: <1684724882-22266-1-git-send-email-zhaoyang.huang@unisoc.com>
X-Mailer: git-send-email 1.9.1
X-Mailing-List: linux-kernel@vger.kernel.org

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

This patch fixes unproductive reclaiming of CMA pages by skipping them when
they are not available to the current allocation context. It arises from the
OOM issue below, which is caused by a large proportion of MIGRATE_CMA pages
among the free pages.

[   36.172486] [03-19 10:05:52.172] ActivityManager: page allocation failure: order:0, mode:0xc00(GFP_NOIO), nodemask=(null),cpuset=foreground,mems_allowed=0
[   36.189447] [03-19 10:05:52.189] DMA32: 0*4kB 447*8kB (C) 217*16kB (C) 124*32kB (C) 136*64kB (C) 70*128kB (C) 22*256kB (C) 3*512kB (C) 0*1024kB 0*2048kB 0*4096kB = 35848kB
[   36.193125] [03-19 10:05:52.193] Normal: 231*4kB (UMEH) 49*8kB (MEH) 14*16kB (H) 13*32kB (H) 8*64kB (H) 2*128kB (H) 0*256kB 1*512kB (H) 0*1024kB 0*2048kB 0*4096kB = 3236kB
...
[   36.234447] [03-19 10:05:52.234] SLUB: Unable to allocate memory on node -1, gfp=0xa20(GFP_ATOMIC)
[   36.234455] [03-19 10:05:52.234] cache: ext4_io_end, object size: 64, buffer size: 64, default order: 0, min order: 0
[   36.234459] [03-19 10:05:52.234] node 0: slabs: 53, objs: 3392, free: 0

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
v2: update commit message and fix build error when CONFIG_CMA is not set
v3: update code and comments
---
 mm/vmscan.c | 27 ++++++++++++++++++++++++---
 1 file changed, 24 insertions(+), 3 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bd6637f..17cd246 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2192,7 +2192,24 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
 	}
 }
 
-
+#ifdef CONFIG_CMA
+/*
+ * It is a waste of effort to scan and reclaim CMA pages if they are not
+ * available to the current allocation context.
+ */
+static bool skip_cma(struct page *page, struct scan_control *sc)
+{
+	if (!current_is_kswapd() && gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE
+			&& get_pageblock_migratetype(page) == MIGRATE_CMA)
+		return true;
+	return false;
+}
+#else
+static bool skip_cma(struct page *page, struct scan_control *sc)
+{
+	return false;
+}
+#endif
 /*
  * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
  *
@@ -2225,10 +2242,12 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 	unsigned long nr_skipped[MAX_NR_ZONES] = { 0, };
 	unsigned long skipped = 0;
 	unsigned long scan, total_scan, nr_pages;
+	struct page *page;
 	LIST_HEAD(folios_skipped);
 
 	total_scan = 0;
 	scan = 0;
+
 	while (scan < nr_to_scan && !list_empty(src)) {
 		struct list_head *move_to = src;
 		struct folio *folio;
@@ -2239,12 +2258,14 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 		nr_pages = folio_nr_pages(folio);
 		total_scan += nr_pages;
 
-		if (folio_zonenum(folio) > sc->reclaim_idx) {
+		page = &folio->page;
+
+		if (folio_zonenum(folio) > sc->reclaim_idx
+				|| skip_cma(page, sc)) {
 			nr_skipped[folio_zonenum(folio)] += nr_pages;
 			move_to = &folios_skipped;
 			goto move;
 		}
-
 		/*
 		 * Do not count skipped folios because that makes the function
 		 * return with no isolated folios if the LRU mostly contains
-- 
1.9.1
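
For readers less familiar with the reclaim path, below is a minimal standalone
C sketch of the decision skip_cma() makes. The names here (reclaim_ctx,
should_skip_cma_block) are simplified stand-ins for the kernel's scan_control
and migratetype machinery, not kernel API; the sketch only models the three
conditions that must all hold before a CMA pageblock is skipped during LRU
isolation.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's migratetype enum. */
enum migratetype { MIGRATE_UNMOVABLE, MIGRATE_MOVABLE, MIGRATE_RECLAIMABLE, MIGRATE_CMA };

/* Stand-in for the relevant parts of struct scan_control. */
struct reclaim_ctx {
	bool is_kswapd;              /* kswapd reclaims on behalf of all contexts */
	enum migratetype alloc_type; /* stand-in for gfp_migratetype(sc->gfp_mask) */
};

/* Models the logic of the new skip_cma() helper in the patch. */
static bool should_skip_cma_block(const struct reclaim_ctx *ctx,
				  enum migratetype block_type)
{
	return !ctx->is_kswapd &&
	       ctx->alloc_type != MIGRATE_MOVABLE &&
	       block_type == MIGRATE_CMA;
}

int main(void)
{
	/* Direct reclaim for an unmovable allocation (e.g. GFP_NOIO):
	 * freeing a CMA pageblock cannot satisfy it, so skip it. */
	struct reclaim_ctx noio = { .is_kswapd = false, .alloc_type = MIGRATE_UNMOVABLE };
	printf("unmovable direct reclaim, CMA block -> skip=%d\n",
	       should_skip_cma_block(&noio, MIGRATE_CMA));

	/* Movable allocation: CMA pages are usable, so do not skip. */
	struct reclaim_ctx movable = { .is_kswapd = false, .alloc_type = MIGRATE_MOVABLE };
	printf("movable direct reclaim, CMA block -> skip=%d\n",
	       should_skip_cma_block(&movable, MIGRATE_CMA));

	/* kswapd: reclaims for all contexts, so CMA blocks are never skipped. */
	struct reclaim_ctx kswapd = { .is_kswapd = true, .alloc_type = MIGRATE_UNMOVABLE };
	printf("kswapd, CMA block -> skip=%d\n",
	       should_skip_cma_block(&kswapd, MIGRATE_CMA));
	return 0;
}

The current_is_kswapd() check in the real helper matters for the same reason
as the is_kswapd flag above: kswapd reclaims on behalf of every allocation
context, so skipping CMA pageblocks there would under-reclaim for movable
allocations that can in fact use them.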