From: Byungchul Park
To: akpm@linux-foundation.org
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, kernel_team@skhynix.com, yuzhao@google.com, ying.huang@intel.com, hannes@cmpxchg.org
Subject: [PATCH v5] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
Date: Mon, 4 Mar 2024 11:30:18 +0900
Message-Id: <20240304023018.69705-1-byungchul@sk.com>

Sorry for the noise. I should've applied v5's change in v4.

Changes from v4:
	1. Make the other scans start with may_cache_trim_mode = 1.

Changes from v3:
	1. Update the test result in the commit message with v4.
	2. Retry the whole priority loop with cache_trim_mode off
	   again, rather than forcing the mode off at the highest
	   priority, when the mode doesn't work.
	   (feedback from Johannes Weiner)

Changes from v2:
	1. Change the condition to stop cache_trim_mode.
	   From - Stop it if it's at high scan priorities, 0 or 1.
	   To   - Stop it if it's at high scan priorities, 0 or 1,
		  and the mode didn't work in the previous turn.
	   (feedback from Huang Ying)
	2. Update the test result in the commit message after testing
	   with the new logic.

Changes from v1:
	1. Add a comment in the code describing why this change is
	   necessary, and rewrite the commit message with how to
	   reproduce the issue and what the result is, using vmstat.
	   (feedback from Andrew Morton and Yu Zhao)
	2. Change the condition to avoid cache_trim_mode from
	   'sc->priority != 1' to 'sc->priority > 1' to reflect cases
	   where the priority goes all the way to zero.
	   (feedback from Yu Zhao)

--->8---
From 58f1a0e41b9feea72d7fd4bd7bed1ace592e6e4c Mon Sep 17 00:00:00 2001
From: Byungchul Park
Date: Mon, 4 Mar 2024 11:24:40 +0900
Subject: [PATCH v5] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
pages. However, the mode should be used more carefully, because it
prevents anon pages from being reclaimed even when there is a huge
number of cold anon pages that should be reclaimed. Even worse, that
can let kswapd_failures reach MAX_RECLAIM_RETRIES and stop kswapd from
functioning until direct reclaim eventually works enough to resume it.

So kswapd needs to retry its scan priority loop with cache_trim_mode
off again when the mode doesn't work for reclaim.

The problematic behavior can be reproduced by:

   CONFIG_NUMA_BALANCING enabled
   sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
   numa node0 (8GB local memory, 16 CPUs)
   numa node1 (8GB slow tier memory, no CPUs)

   Sequence:

   1) echo 3 > /proc/sys/vm/drop_caches
   2) To emulate a system whose local DRAM is full of cold memory,
      run the following dummy program and never touch the region:

	 mmap(0, 8 * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
	      MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);

   3) Run any memory-intensive work, e.g. XSBench.
   4) Check if numa balancing is working, i.e. promotion/demotion.
   5) Iterate 1) ~ 4) until numa balancing stops.

With this, you can see that promotion/demotion stop working because
kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
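For reference, the one-line mmap() call in step 2 can be fleshed out
into a small helper (a sketch only; populate_cold_anon is a made-up
name, and a caller would pass 8GB to match the node0 size above):

```c
#define _GNU_SOURCE		/* for MAP_POPULATE on glibc */
#include <stddef.h>
#include <sys/mman.h>

/*
 * Map len bytes of anonymous memory and pre-fault every page with
 * MAP_POPULATE, so the region is resident from the start. The caller
 * then never touches it again, leaving it as cold anon memory.
 * Returns the mapping, or NULL on failure.
 */
static void *populate_cold_anon(size_t len)
{
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);

	return (p == MAP_FAILED) ? NULL : p;
}
```

The reproducer would call this with 8 * 1024 * 1024 * 1024 and then
sleep forever, keeping the cold region resident in local DRAM.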
Interesting vmstat deltas between before and after are like:

	+-----------------------+-------------+-------------+
	| interesting vmstat    | before      | after       |
	+-----------------------+-------------+-------------+
	| nr_inactive_anon      | 321935      | 1646193     |
	| nr_active_anon        | 1780700     | 456388      |
	| nr_inactive_file      | 30425       | 27836       |
	| nr_active_file        | 14961       | 1217        |
	| pgpromote_success     | 356         | 1310120     |
	| pgpromote_candidate   | 21953245    | 1736872     |
	| pgactivate            | 1844523     | 3292443     |
	| pgdeactivate          | 50634       | 1526701     |
	| pgfault               | 31100294    | 6715375     |
	| pgdemote_kswapd       | 30856       | 1954199     |
	| pgscan_kswapd         | 1861981     | 7100099     |
	| pgscan_anon           | 1822930     | 7061135     |
	| pgscan_file           | 39051       | 38964       |
	| pgsteal_anon          | 386         | 1925214     |
	| pgsteal_file          | 30470       | 28985       |
	| pageoutrun            | 30          | 500         |
	| numa_hint_faults      | 27418279    | 3090773     |
	| numa_pages_migrated   | 356         | 1310120     |
	+-----------------------+-------------+-------------+

Signed-off-by: Byungchul Park
---
 mm/vmscan.c | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index bba207f41b14..77948b0f8b5b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -108,6 +108,9 @@ struct scan_control {
 	/* Can folios be swapped as part of reclaim? */
 	unsigned int may_swap:1;
 
+	/* Can cache_trim_mode be turned on as part of reclaim? */
+	unsigned int may_cache_trim_mode:1;
+
 	/* Proactive reclaim invoked by userspace through memory.reclaim */
 	unsigned int proactive:1;
 
@@ -1500,6 +1503,7 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
 	struct scan_control sc = {
 		.gfp_mask = GFP_KERNEL,
 		.may_unmap = 1,
+		.may_cache_trim_mode = 1,
 	};
 	struct reclaim_stat stat;
 	unsigned int nr_reclaimed;
@@ -2094,6 +2098,7 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
 		.may_writepage = 1,
 		.may_unmap = 1,
 		.may_swap = 1,
+		.may_cache_trim_mode = 1,
 		.no_demotion = 1,
 	};
 
@@ -2268,7 +2273,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
 	 * anonymous pages.
 	 */
 	file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
-	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
+	if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
+	    sc->may_cache_trim_mode)
 		sc->cache_trim_mode = 1;
 	else
 		sc->cache_trim_mode = 0;
@@ -5435,6 +5441,7 @@ static ssize_t lru_gen_seq_write(struct file *file, const char __user *src,
 		.may_writepage = true,
 		.may_unmap = true,
 		.may_swap = true,
+		.may_cache_trim_mode = 1,
 		.reclaim_idx = MAX_NR_ZONES - 1,
 		.gfp_mask = GFP_KERNEL,
 	};
@@ -6394,6 +6401,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 		.may_writepage = !laptop_mode,
 		.may_unmap = 1,
 		.may_swap = 1,
+		.may_cache_trim_mode = 1,
 	};
 
 	/*
@@ -6439,6 +6447,7 @@ unsigned long mem_cgroup_shrink_node(struct mem_cgroup *memcg,
 		.may_unmap = 1,
 		.reclaim_idx = MAX_NR_ZONES - 1,
 		.may_swap = !noswap,
+		.may_cache_trim_mode = 1,
 	};
 
 	WARN_ON_ONCE(!current->reclaim_state);
@@ -6482,6 +6491,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg,
 		.may_writepage = !laptop_mode,
 		.may_unmap = 1,
 		.may_swap = !!(reclaim_options & MEMCG_RECLAIM_MAY_SWAP),
+		.may_cache_trim_mode = 1,
 		.proactive = !!(reclaim_options & MEMCG_RECLAIM_PROACTIVE),
 	};
 
 	/*
@@ -6744,6 +6754,7 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
 		.gfp_mask = GFP_KERNEL,
 		.order = order,
 		.may_unmap = 1,
+		.may_cache_trim_mode = 1,
 	};
 
 	set_task_reclaim_state(current, &sc.reclaim_state);
@@ -6898,8 +6909,14 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
 		sc.priority--;
 	} while (sc.priority >= 1);
 
-	if (!sc.nr_reclaimed)
+	if (!sc.nr_reclaimed) {
+		if (sc.may_cache_trim_mode) {
+			sc.may_cache_trim_mode = 0;
+			goto restart;
+		}
+
 		pgdat->kswapd_failures++;
+	}
 
 out:
 	clear_reclaim_active(pgdat, highest_zoneidx);
@@ -7202,6 +7219,7 @@ unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
 		.may_writepage = 1,
 		.may_unmap = 1,
 		.may_swap = 1,
+		.may_cache_trim_mode = 1,
 		.hibernation_mode = 1,
 	};
 	struct zonelist *zonelist = node_zonelist(numa_node_id(), sc.gfp_mask);
@@ -7360,6 +7378,7 @@ static int __node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned in
 		.may_writepage = !!(node_reclaim_mode & RECLAIM_WRITE),
 		.may_unmap = !!(node_reclaim_mode & RECLAIM_UNMAP),
 		.may_swap = 1,
+		.may_cache_trim_mode = 1,
 		.reclaim_idx = gfp_zone(gfp_mask),
 	};
 	unsigned long pflags;
-- 
2.17.1
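For illustration, the retry flow this patch adds to balance_pgdat()
can be modeled as a tiny user-space sketch (model_sc,
run_priority_loop, and model_balance_pgdat are hypothetical stand-ins,
not kernel code; the stub "reclaims" only when cache_trim_mode is off,
mimicking an anon-heavy workload):

```c
#include <stdbool.h>

/* Hypothetical miniature of struct scan_control. */
struct model_sc {
	bool may_cache_trim_mode;
	unsigned long nr_reclaimed;
};

/*
 * Stand-in for one full pass over kswapd's priority loop: with
 * cache_trim_mode allowed, file reclaim alone yields nothing here;
 * with it disabled, cold anon pages get reclaimed.
 */
static unsigned long run_priority_loop(const struct model_sc *sc)
{
	return sc->may_cache_trim_mode ? 0 : 128;
}

/*
 * Model of the patched balance_pgdat() tail: if a whole pass reclaims
 * nothing while cache_trim_mode was allowed, retry once with the mode
 * off before counting a kswapd failure.
 */
static unsigned long model_balance_pgdat(unsigned long *kswapd_failures)
{
	struct model_sc sc = { .may_cache_trim_mode = true };

restart:
	sc.nr_reclaimed = run_priority_loop(&sc);

	if (!sc.nr_reclaimed) {
		if (sc.may_cache_trim_mode) {
			sc.may_cache_trim_mode = false;
			goto restart;
		}
		(*kswapd_failures)++;
	}
	return sc.nr_reclaimed;
}
```

In this model the first pass reclaims nothing, the retry with
cache_trim_mode off succeeds, and kswapd_failures stays at zero, which
is exactly the behavior the patch aims for on anon-heavy systems.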