From: Illia Ostapyshyn <illia@yshyn.com>
To: Jonathan Corbet, Andrew Morton, Matthew Wilcox
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, Illia Ostapyshyn
Subject: [PATCH] mm/vmscan: Update stale references to shrink_page_list
Date: Fri, 17 May 2024 11:13:48 +0200
Message-Id: <20240517091348.1185566-1-illia@yshyn.com>
X-Mailer: git-send-email 2.39.2

Commit 49fd9b6df54e ("mm/vmscan: fix a lot of comments") renamed
shrink_page_list() to shrink_folio_list().  Fix up the remaining references
to the old name in comments and documentation.

Signed-off-by: Illia Ostapyshyn <illia@yshyn.com>
---
 Documentation/mm/unevictable-lru.rst | 10 +++++-----
 mm/memory.c                          |  2 +-
 mm/swap_state.c                      |  2 +-
 mm/truncate.c                        |  2 +-
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index b6a07a26b10d..2feb2ed51ae2 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -191,13 +191,13 @@ have become evictable again (via munlock() for example) and have been "rescued"
 from the unevictable list.  However, there may be situations where we decide,
 for the sake of expediency, to leave an unevictable folio on one of the regular
 active/inactive LRU lists for vmscan to deal with.  vmscan checks for such
-folios in all of the shrink_{active|inactive|page}_list() functions and will
+folios in all of the shrink_{active|inactive|folio}_list() functions and will
 "cull" such folios that it encounters: that is, it diverts those folios to the
 unevictable list for the memory cgroup and node being scanned.
 
 There may be situations where a folio is mapped into a VM_LOCKED VMA,
 but the folio does not have the mlocked flag set.  Such folios will make
-it all the way to shrink_active_list() or shrink_page_list() where they
+it all the way to shrink_active_list() or shrink_folio_list() where they
 will be detected when vmscan walks the reverse map in folio_referenced()
 or try_to_unmap().  The folio is culled to the unevictable list when it
 is released by the shrinker.
@@ -269,7 +269,7 @@ the LRU.  Such pages can be "noticed" by memory management in several places:
 
 (4) in the fault path and when a VM_LOCKED stack segment is expanded; or
 
-(5) as mentioned above, in vmscan:shrink_page_list() when attempting to
+(5) as mentioned above, in vmscan:shrink_folio_list() when attempting to
     reclaim a page in a VM_LOCKED VMA by folio_referenced() or try_to_unmap().
 
 mlocked pages become unlocked and rescued from the unevictable list when:
@@ -548,12 +548,12 @@ Some examples of these unevictable pages on the LRU lists are:
 (3) pages still mapped into VM_LOCKED VMAs, which should be marked mlocked,
     but events left mlock_count too low, so they were munlocked too early.
 
-vmscan's shrink_inactive_list() and shrink_page_list() also divert obviously
+vmscan's shrink_inactive_list() and shrink_folio_list() also divert obviously
 unevictable pages found on the inactive lists to the appropriate memory cgroup
 and node unevictable list.
 
 rmap's folio_referenced_one(), called via vmscan's shrink_active_list() or
-shrink_page_list(), and rmap's try_to_unmap_one() called via shrink_page_list(),
+shrink_folio_list(), and rmap's try_to_unmap_one() called via shrink_folio_list(),
 check for (3) pages still mapped into VM_LOCKED VMAs, and call mlock_vma_folio()
 to correct them.  Such pages are culled to the unevictable list when released
 by the shrinker.
diff --git a/mm/memory.c b/mm/memory.c
index 0201f50d8307..c58b3d92e6a8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4511,7 +4511,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
 	 * lock_page(B)
 	 *				lock_page(B)
 	 * pte_alloc_one
-	 *   shrink_page_list
+	 *   shrink_folio_list
 	 * wait_on_page_writeback(A)
 	 *				SetPageWriteback(B)
 	 *				unlock_page(B)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index bfc7e8c58a6d..3d163ec1364a 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -28,7 +28,7 @@
 
 /*
  * swapper_space is a fiction, retained to simplify the path through
- * vmscan's shrink_page_list.
+ * vmscan's shrink_folio_list.
  */
 static const struct address_space_operations swap_aops = {
 	.writepage	= swap_writepage,
diff --git a/mm/truncate.c b/mm/truncate.c
index 725b150e47ac..e1c352bb026b 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -554,7 +554,7 @@ EXPORT_SYMBOL(invalidate_mapping_pages);
  * This is like mapping_evict_folio(), except it ignores the folio's
  * refcount.  We do this because invalidate_inode_pages2() needs stronger
  * invalidation guarantees, and cannot afford to leave folios behind because
- * shrink_page_list() has a temp ref on them, or because they're transiently
+ * shrink_folio_list() has a temp ref on them, or because they're transiently
  * sitting in the folio_add_lru() caches.
  */
 static int invalidate_complete_folio2(struct address_space *mapping,
-- 
2.39.2