Date: Tue, 17 May 2022 19:46:26 -0600
In-Reply-To: <20220518014632.922072-1-yuzhao@google.com>
Message-Id: <20220518014632.922072-8-yuzhao@google.com>
References: <20220518014632.922072-1-yuzhao@google.com>
X-Mailer: git-send-email 2.36.0.550.gb090851708-goog
Subject: [PATCH v11 07/14] mm: multi-gen LRU: exploit locality in rmap
From: Yu Zhao <yuzhao@google.com>
To: Andrew Morton,
    linux-mm@kvack.org
Cc: Andi Kleen, Aneesh Kumar, Catalin Marinas, Dave Hansen, Hillf Danton,
    Jens Axboe, Johannes Weiner, Jonathan Corbet, Linus Torvalds,
    Matthew Wilcox, Mel Gorman, Michael Larabel, Michal Hocko,
    Mike Rapoport, Peter Zijlstra, Tejun Heo, Vlastimil Babka,
    Will Deacon, linux-arm-kernel@lists.infradead.org,
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    x86@kernel.org, page-reclaim@google.com, Yu Zhao, Brian Geffon,
    Jan Alexander Steffens, Oleksandr Natalenko, Steven Barrett,
    Suleiman Souhlal, Daniel Byrne, Donald Carr, Holger Hoffstätte,
    Konstantin Kharlamov, Shuang Zhai, Sofia Trinh, Vaibhav Jain

Searching the rmap for PTEs mapping each page on an LRU list (to test
and clear the accessed bit) can be expensive because pages from
different VMAs (PA space) are not cache friendly to the rmap (VA
space). For workloads mostly using mapped pages, the rmap has a high
CPU cost in the reclaim path.

This patch exploits spatial locality to reduce the trips into the
rmap. When shrink_page_list() walks the rmap and finds a young PTE, a
new function lru_gen_look_around() scans at most BITS_PER_LONG-1
adjacent PTEs. On finding another young PTE, it clears the accessed
bit and updates the gen counter of the page mapped by this PTE to
(max_seq%MAX_NR_GENS)+1.
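To make the windowing concrete, here is a minimal userspace model
(an editor-added sketch, not part of the patch) of how
lru_gen_look_around() clamps its scan window: take the PMD range
around the young PTE, clip it to the VMA, then shrink it to
MIN_LRU_BATCH pages biased toward the young address. clamp_window()
is a hypothetical helper, and the PAGE_SIZE/PMD_SIZE values assume
x86-64 with 4 KiB base pages:

  /*
   * Illustrative userspace model of the scan-window clamping in
   * lru_gen_look_around(); not kernel code. Constants assume x86-64
   * with 4 KiB base pages; clamp_window() is a hypothetical helper.
   */
  #include <stdio.h>

  #define PAGE_SIZE     4096UL
  #define PMD_SIZE      (512 * PAGE_SIZE)  /* one PTE page's reach: 2 MiB */
  #define PMD_MASK      (~(PMD_SIZE - 1))
  #define MIN_LRU_BATCH 64UL               /* BITS_PER_LONG on 64-bit */

  static void clamp_window(unsigned long addr, unsigned long vm_start,
                           unsigned long vm_end, unsigned long *s,
                           unsigned long *e)
  {
          /* the PMD range around the young PTE, clipped to the VMA */
          unsigned long start = addr & PMD_MASK;
          unsigned long end = (addr | (PMD_SIZE - 1)) + 1;

          if (start < vm_start)
                  start = vm_start;
          if (end > vm_end)
                  end = vm_end;

          /* shrink to MIN_LRU_BATCH pages, keeping addr in the window */
          if (end - start > MIN_LRU_BATCH * PAGE_SIZE) {
                  if (addr - start < MIN_LRU_BATCH * PAGE_SIZE / 2)
                          end = start + MIN_LRU_BATCH * PAGE_SIZE;
                  else if (end - addr < MIN_LRU_BATCH * PAGE_SIZE / 2)
                          start = end - MIN_LRU_BATCH * PAGE_SIZE;
                  else {
                          start = addr - MIN_LRU_BATCH * PAGE_SIZE / 2;
                          end = addr + MIN_LRU_BATCH * PAGE_SIZE / 2;
                  }
          }
          *s = start;
          *e = end;
  }

  int main(void)
  {
          unsigned long s, e;

          /* a young PTE well inside a large VMA: window is centered */
          clamp_window(0x7c0000, 0x200000, 0x800000, &s, &e);
          printf("window [%#lx, %#lx) = %lu pages\n",
                 s, e, (e - s) / PAGE_SIZE);  /* prints 64 pages */
          return 0;
  }

Because start and end never leave the surrounding PMD range, the
resulting window always indexes into the same page table page that
pvmw->pte points into.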
Server benchmark results:
  Single workload:
    fio (buffered I/O): no change

  Single workload:
    memcached (anon): +[5.5, 7.5]%

                 Ops/sec      KB/sec
      patch1-6:  1120643.70   43588.06
      patch1-7:  1193918.93   46438.15

  Configurations:
    no change

Client benchmark results:
  kswapd profiles:
    patch1-6
      35.99%  lzo1x_1_do_compress (real work)
      19.40%  page_vma_mapped_walk
       6.31%  _raw_spin_unlock_irq
       3.95%  do_raw_spin_lock
       2.39%  anon_vma_interval_tree_iter_first
       2.25%  ptep_clear_flush
       1.92%  __anon_vma_interval_tree_subtree_search
       1.70%  folio_referenced_one
       1.68%  __zram_bvec_write
       1.43%  anon_vma_interval_tree_iter_next

    patch1-7
      45.90%  lzo1x_1_do_compress (real work)
       9.14%  page_vma_mapped_walk
       6.81%  _raw_spin_unlock_irq
       2.80%  ptep_clear_flush
       2.34%  __zram_bvec_write
       2.29%  do_raw_spin_lock
       1.84%  lru_gen_look_around
       1.78%  memmove
       1.74%  obj_malloc
       1.50%  free_unref_page_list

  Configurations:
    no change

Signed-off-by: Yu Zhao <yuzhao@google.com>
Acked-by: Brian Geffon
Acked-by: Jan Alexander Steffens (heftig)
Acked-by: Oleksandr Natalenko
Acked-by: Steven Barrett
Acked-by: Suleiman Souhlal
Tested-by: Daniel Byrne
Tested-by: Donald Carr
Tested-by: Holger Hoffstätte
Tested-by: Konstantin Kharlamov
Tested-by: Shuang Zhai
Tested-by: Sofia Trinh
Tested-by: Vaibhav Jain
---
 include/linux/memcontrol.h |  31 ++++++++
 include/linux/mm.h         |   5 ++
 include/linux/mmzone.h     |   6 ++
 mm/internal.h              |   1 +
 mm/memcontrol.c            |   1 +
 mm/rmap.c                  |   7 ++
 mm/swap.c                  |   4 +-
 mm/vmscan.c                | 157 +++++++++++++++++++++++++++++++++++++
 8 files changed, 210 insertions(+), 2 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 89b14729d59f..2bfdcc77648a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -438,6 +438,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
  * - LRU isolation
  * - lock_page_memcg()
  * - exclusive reference
+ * - mem_cgroup_trylock_pages()
  *
  * For a kmem folio a caller should hold an rcu read lock to protect memcg
  * associated with a kmem folio from being released.
@@ -499,6 +500,7 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
  * - LRU isolation
  * - lock_page_memcg()
  * - exclusive reference
+ * - mem_cgroup_trylock_pages()
  *
  * For a kmem page a caller should hold an rcu read lock to protect memcg
  * associated with a kmem page from being released.
@@ -948,6 +950,23 @@ void unlock_page_memcg(struct page *page);
 
 void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val);
 
+/* try to stabilize folio_memcg() for all the pages in a memcg */
+static inline bool mem_cgroup_trylock_pages(struct mem_cgroup *memcg)
+{
+	rcu_read_lock();
+
+	if (mem_cgroup_disabled() || !atomic_read(&memcg->moving_account))
+		return true;
+
+	rcu_read_unlock();
+	return false;
+}
+
+static inline void mem_cgroup_unlock_pages(void)
+{
+	rcu_read_unlock();
+}
+
 /* idx can be of type enum memcg_stat_item or node_stat_item */
 static inline void mod_memcg_state(struct mem_cgroup *memcg,
 				   int idx, int val)
@@ -1386,6 +1405,18 @@ static inline void folio_memcg_unlock(struct folio *folio)
 {
 }
 
+static inline bool mem_cgroup_trylock_pages(struct mem_cgroup *memcg)
+{
+	/* to match folio_memcg_rcu() */
+	rcu_read_lock();
+	return true;
+}
+
+static inline void mem_cgroup_unlock_pages(void)
+{
+	rcu_read_unlock();
+}
+
 static inline void mem_cgroup_handle_over_high(void)
 {
 }
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 894c289c2c06..4e8ab4ad4473 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1523,6 +1523,11 @@ static inline unsigned long folio_pfn(struct folio *folio)
 	return page_to_pfn(&folio->page);
 }
 
+static inline struct folio *pfn_folio(unsigned long pfn)
+{
+	return page_folio(pfn_to_page(pfn));
+}
+
 static inline atomic_t *folio_pincount_ptr(struct folio *folio)
 {
 	return &folio_page(folio, 1)->compound_pincount;
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 2d023d243e73..f0b980362186 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -374,6 +374,7 @@ enum lruvec_flags {
 #ifndef __GENERATING_BOUNDS_H
 
 struct lruvec;
+struct page_vma_mapped_walk;
 
 #define LRU_GEN_MASK		((BIT(LRU_GEN_WIDTH) - 1) << LRU_GEN_PGOFF)
 #define LRU_REFS_MASK		((BIT(LRU_REFS_WIDTH) - 1) << LRU_REFS_PGOFF)
@@ -429,6 +430,7 @@ struct lru_gen_struct {
 };
 
 void lru_gen_init_lruvec(struct lruvec *lruvec);
+void lru_gen_look_around(struct page_vma_mapped_walk *pvmw);
 
 #ifdef CONFIG_MEMCG
 void lru_gen_init_memcg(struct mem_cgroup *memcg);
@@ -441,6 +443,10 @@ static inline void lru_gen_init_lruvec(struct lruvec *lruvec)
 {
 }
 
+static inline void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
+{
+}
+
 #ifdef CONFIG_MEMCG
 static inline void lru_gen_init_memcg(struct mem_cgroup *memcg)
 {
diff --git a/mm/internal.h b/mm/internal.h
index cf16280ce132..59d2422b647d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -68,6 +68,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf);
 void folio_rotate_reclaimable(struct folio *folio);
 bool __folio_end_writeback(struct folio *folio);
 void deactivate_file_folio(struct folio *folio);
+void folio_activate(struct folio *folio);
 
 void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 		   unsigned long floor, unsigned long ceiling);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 2ee074f80e72..98aa720ac639 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2769,6 +2769,7 @@ static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
 	 * - LRU isolation
 	 * - lock_page_memcg()
 	 * - exclusive reference
+	 * - mem_cgroup_trylock_pages()
 	 */
 	folio->memcg_data = (unsigned long)memcg;
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index fedb82371efe..7cb7ef29088a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -73,6 +73,7 @@
 #include <linux/page_idle.h>
 #include <linux/memremap.h>
 #include <linux/userfaultfd_k.h>
+#include <linux/mm_inline.h>
 
 #include <asm/tlbflush.h>
 
@@ -821,6 +822,12 @@ static bool folio_referenced_one(struct folio *folio,
 	}
 
 	if (pvmw.pte) {
+		if (lru_gen_enabled() && pte_young(*pvmw.pte) &&
+		    !(vma->vm_flags & (VM_SEQ_READ | VM_RAND_READ))) {
+			lru_gen_look_around(&pvmw);
+			referenced++;
+		}
+
 		if (ptep_clear_flush_young_notify(vma, address,
 						  pvmw.pte)) {
 			/*
diff --git a/mm/swap.c b/mm/swap.c
index a99d22308f28..0aa1d0b33d42 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -342,7 +342,7 @@ static bool need_activate_page_drain(int cpu)
 	return pagevec_count(&per_cpu(lru_pvecs.activate_page, cpu)) != 0;
 }
 
-static void folio_activate(struct folio *folio)
+void folio_activate(struct folio *folio)
 {
 	if (folio_test_lru(folio) && !folio_test_active(folio) &&
 	    !folio_test_unevictable(folio)) {
@@ -362,7 +362,7 @@ static inline void activate_page_drain(int cpu)
 {
 }
 
-static void folio_activate(struct folio *folio)
+void folio_activate(struct folio *folio)
 {
 	struct lruvec *lruvec;
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 891f0ab69b3a..cf89a28c3b0e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1554,6 +1554,11 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		if (!sc->may_unmap && page_mapped(page))
 			goto keep_locked;
 
+		/* folio_update_gen() tried to promote this page? */
+		if (lru_gen_enabled() && !ignore_references &&
+		    page_mapped(page) && PageReferenced(page))
+			goto keep_locked;
+
 		may_enter_fs = (sc->gfp_mask & __GFP_FS) ||
 			(PageSwapCache(page) && (sc->gfp_mask & __GFP_IO));
 
@@ -3137,6 +3142,28 @@ static bool positive_ctrl_err(struct ctrl_pos *sp, struct ctrl_pos *pv)
  *                          the aging
  ******************************************************************************/
 
+static int folio_update_gen(struct folio *folio, int gen)
+{
+	unsigned long new_flags, old_flags = READ_ONCE(folio->flags);
+
+	VM_WARN_ON_ONCE(gen >= MAX_NR_GENS);
+	VM_WARN_ON_ONCE(!rcu_read_lock_held());
+
+	do {
+		/* lru_gen_del_folio() has isolated this page? */
+		if (!(old_flags & LRU_GEN_MASK)) {
+			/* for shrink_page_list() */
+			new_flags = old_flags | BIT(PG_referenced);
+			continue;
+		}
+
+		new_flags = old_flags & ~(LRU_GEN_MASK | LRU_REFS_MASK | LRU_REFS_FLAGS);
+		new_flags |= (gen + 1UL) << LRU_GEN_PGOFF;
+	} while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
+
+	return ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
+}
+
 static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
 {
 	int type = folio_is_file_lru(folio);
@@ -3147,6 +3174,11 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclai
 	VM_WARN_ON_ONCE_FOLIO(!(old_flags & LRU_GEN_MASK), folio);
 
 	do {
+		new_gen = ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
+		/* folio_update_gen() has promoted this page? */
+		if (new_gen >= 0 && new_gen != old_gen)
+			return new_gen;
+
 		new_gen = (old_gen + 1) % MAX_NR_GENS;
 
 		new_flags = old_flags & ~(LRU_GEN_MASK | LRU_REFS_MASK | LRU_REFS_FLAGS);
@@ -3365,6 +3397,125 @@ static void lru_gen_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
 }
 
+/*
+ * This function exploits spatial locality when shrink_page_list() walks the
+ * rmap. It scans the adjacent PTEs of a young PTE and promotes hot pages.
+ */
+void lru_gen_look_around(struct page_vma_mapped_walk *pvmw)
+{
+	int i;
+	pte_t *pte;
+	unsigned long start;
+	unsigned long end;
+	unsigned long addr;
+	unsigned long bitmap[BITS_TO_LONGS(MIN_LRU_BATCH)] = {};
+	struct folio *folio = pfn_folio(pvmw->pfn);
+	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct pglist_data *pgdat = folio_pgdat(folio);
+	struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
+	DEFINE_MAX_SEQ(lruvec);
+	int old_gen, new_gen = lru_gen_from_seq(max_seq);
+
+	lockdep_assert_held(pvmw->ptl);
+	VM_WARN_ON_ONCE_FOLIO(folio_test_lru(folio), folio);
+
+	if (spin_is_contended(pvmw->ptl))
+		return;
+
+	start = max(pvmw->address & PMD_MASK, pvmw->vma->vm_start);
+	end = pmd_addr_end(pvmw->address, pvmw->vma->vm_end);
+
+	if (end - start > MIN_LRU_BATCH * PAGE_SIZE) {
+		if (pvmw->address - start < MIN_LRU_BATCH * PAGE_SIZE / 2)
+			end = start + MIN_LRU_BATCH * PAGE_SIZE;
+		else if (end - pvmw->address < MIN_LRU_BATCH * PAGE_SIZE / 2)
+			start = end - MIN_LRU_BATCH * PAGE_SIZE;
+		else {
+			start = pvmw->address - MIN_LRU_BATCH * PAGE_SIZE / 2;
+			end = pvmw->address + MIN_LRU_BATCH * PAGE_SIZE / 2;
+		}
+	}
+
+	pte = pvmw->pte - (pvmw->address - start) / PAGE_SIZE;
+
+	rcu_read_lock();
+	arch_enter_lazy_mmu_mode();
+
+	for (i = 0, addr = start; addr != end; i++, addr += PAGE_SIZE) {
+		unsigned long pfn = pte_pfn(pte[i]);
+
+		VM_WARN_ON_ONCE(addr < pvmw->vma->vm_start || addr >= pvmw->vma->vm_end);
+
+		if (!pte_present(pte[i]) || is_zero_pfn(pfn))
+			continue;
+
+		if (WARN_ON_ONCE(pte_devmap(pte[i]) || pte_special(pte[i])))
+			continue;
+
+		if (!pte_young(pte[i]))
+			continue;
+
+		VM_WARN_ON_ONCE(!pfn_valid(pfn));
+		if (pfn < pgdat->node_start_pfn || pfn >= pgdat_end_pfn(pgdat))
+			continue;
+
+		folio = pfn_folio(pfn);
+		if (folio_nid(folio) != pgdat->node_id)
+			continue;
+
+		if (folio_memcg_rcu(folio) != memcg)
+			continue;
+
+		if (!ptep_test_and_clear_young(pvmw->vma, addr, pte + i))
+			continue;
+
+		if (pte_dirty(pte[i]) && !folio_test_dirty(folio) &&
+		    !(folio_test_anon(folio) && folio_test_swapbacked(folio) &&
+		      !folio_test_swapcache(folio)))
+			folio_mark_dirty(folio);
+
+		old_gen = folio_lru_gen(folio);
+		if (old_gen < 0)
+			folio_set_referenced(folio);
+		else if (old_gen != new_gen)
+			__set_bit(i, bitmap);
+	}
+
+	arch_leave_lazy_mmu_mode();
+	rcu_read_unlock();
+
+	if (bitmap_weight(bitmap, MIN_LRU_BATCH) < PAGEVEC_SIZE) {
+		for_each_set_bit(i, bitmap, MIN_LRU_BATCH) {
+			folio = pfn_folio(pte_pfn(pte[i]));
+			folio_activate(folio);
+		}
+		return;
+	}
+
+	/* folio_update_gen() requires stable folio_memcg() */
+	if (!mem_cgroup_trylock_pages(memcg))
+		return;
+
+	spin_lock_irq(&lruvec->lru_lock);
+	new_gen = lru_gen_from_seq(lruvec->lrugen.max_seq);
+
+	for_each_set_bit(i, bitmap, MIN_LRU_BATCH) {
+		folio = pfn_folio(pte_pfn(pte[i]));
+		if (folio_memcg_rcu(folio) != memcg)
+			continue;
+
+		old_gen = folio_update_gen(folio, new_gen);
+		if (old_gen < 0 || old_gen == new_gen)
+			continue;
+
+		lru_gen_update_size(lruvec, folio, old_gen, new_gen);
+	}
+
+	spin_unlock_irq(&lruvec->lru_lock);
+
+	mem_cgroup_unlock_pages();
+}
+
 /******************************************************************************
  *                          the eviction
  ******************************************************************************/
@@ -3401,6 +3552,12 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, int tier_idx)
 		return true;
 	}
 
+	/* promoted */
+	if (gen != lru_gen_from_seq(lrugen->min_seq[type])) {
+		list_move(&folio->lru, &lrugen->lists[gen][type][zone]);
+		return true;
+	}
+
 	/* protected */
 	if (tier > tier_idx) {
 		int hist = lru_hist_from_seq(lrugen->min_seq[type]);
-- 
2.36.0.550.gb090851708-goog
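The heart of folio_update_gen() above is a lock-free read-modify-write
of folio->flags via try_cmpxchg(). For readers who want to see the
pattern in isolation, here is a minimal userspace model (an
editor-added sketch; GEN_PGOFF, GEN_WIDTH and REF_BIT are illustrative
stand-ins for the kernel's LRU_GEN_PGOFF, LRU_GEN_MASK and
PG_referenced, not their actual values):

  /*
   * Illustrative userspace model of the cmpxchg loop in
   * folio_update_gen(); not kernel code. Field positions are made up.
   */
  #include <stdatomic.h>
  #include <stdio.h>

  #define GEN_PGOFF 8
  #define GEN_WIDTH 3
  #define GEN_MASK  ((((unsigned long)1 << GEN_WIDTH) - 1) << GEN_PGOFF)
  #define REF_BIT   (1UL << 2)    /* stand-in for PG_referenced */

  /* returns the old gen, or -1 if the gen field was empty */
  static int update_gen(_Atomic unsigned long *flags, int gen)
  {
          unsigned long new, old = atomic_load(flags);

          do {
                  /* no gen stored: page was isolated; leave a hint */
                  if (!(old & GEN_MASK)) {
                          new = old | REF_BIT;
                          continue;  /* jumps to the cmpxchg below */
                  }
                  /* gens are stored off by one so 0 means "not on an LRU" */
                  new = (old & ~GEN_MASK) | ((gen + 1UL) << GEN_PGOFF);
          } while (!atomic_compare_exchange_weak(flags, &old, new));

          return (int)((old & GEN_MASK) >> GEN_PGOFF) - 1;
  }

  int main(void)
  {
          _Atomic unsigned long flags = 2UL << GEN_PGOFF;  /* gen 1 */

          printf("old gen: %d\n", update_gen(&flags, 3));          /* 1 */
          printf("flags now: %#lx\n",
                 (unsigned long)atomic_load(&flags));              /* 0x400 */
          return 0;
  }

As in the patch, an empty gen field means the page has been taken off
an LRU list, so the updater falls back to setting a reference hint
rather than writing a generation, and the loop retries whenever
another CPU changed the flags word in the meantime.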