Hi all,
This series adds support for reclaiming PMD-mapped THP marked as lazyfree
without needing to first split the large folio via split_huge_pmd_address().
When a user no longer requires the pages, they can call madvise(MADV_FREE)
to mark them as lazy free. Typically, such memory will not be written to
again before it is reclaimed.
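For context, here is a minimal userspace sketch of how a caller might mark a
PMD-sized anonymous region as lazyfree. The region size, flags and lack of
error handling are purely illustrative:

  #include <string.h>
  #include <sys/mman.h>

  int main(void)
  {
          size_t len = 2UL << 20;  /* one PMD-sized (2MiB) region */
          void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          madvise(buf, len, MADV_HUGEPAGE);  /* hint: back with a THP */
          memset(buf, 1, len);               /* fault the folio in */

          /*
           * The contents are no longer needed; the kernel may discard
           * the pages lazily under memory pressure, unless they are
           * written to again before reclaim gets to them.
           */
          madvise(buf, len, MADV_FREE);
          return 0;
  }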
During memory reclaim, if we detect that the large folio and its PMD are both
still clean and there are no unexpected references (such as GUP pins), then we
can simply discard the memory lazily, improving the efficiency of memory
reclamation in this case.
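As a rough sketch (not the actual patch code), the check at unmap time is
conceptually along the following lines; the helper name is made up here and
the exact reference-count accounting is simplified:

  /*
   * Illustrative only: decide whether a PMD-mapped lazyfree folio can
   * be discarded without first splitting it.  The real logic lives in
   * the try_to_unmap_one()/huge_memory.c changes of this series.
   */
  static bool can_discard_lazyfree_pmd(struct folio *folio, pmd_t pmdval)
  {
          /* Written to after MADV_FREE?  Then the data must be kept. */
          if (pmd_dirty(pmdval) || folio_test_dirty(folio))
                  return false;

          /*
           * Any reference beyond the page table mapping plus the one
           * held by the reclaim path (e.g. a GUP pin) means someone
           * may still be using the memory.
           */
          if (folio_ref_count(folio) > folio_mapcount(folio) + 1)
                  return false;

          return true;
  }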
Performance Testing
===================
On an Intel i5 CPU, reclaiming 1GiB of lazyfree THPs using
mem_cgroup_force_empty() results in the following runtimes in seconds
(shorter is better):
--------------------------------------------
|    Old     |    New     |     Change     |
--------------------------------------------
|  0.683426  |  0.049197  |     -92.80%    |
--------------------------------------------
---
Changes since v2 [2]
====================
- Update the changelog (thanks to David Hildenbrand)
- Support try_to_unmap_one() to unmap PMD-mapped folios
(thanks a lot to David Hildenbrand and Zi Yan)
Changes since v1 [1]
====================
- Update the changelog
- Follow the exact same logic as in try_to_unmap_one() (per David Hildenbrand)
- Remove the extra code from rmap.c (per Matthew Wilcox)
[1] https://lore.kernel.org/linux-mm/[email protected]
[2] https://lore.kernel.org/linux-mm/[email protected]
Lance Yang (3):
mm/rmap: remove duplicated exit code in pagewalk loop
mm/rmap: integrate PMD-mapped folio splitting into pagewalk loop
mm/vmscan: avoid split lazyfree THP during shrink_folio_list()
include/linux/huge_mm.h | 4 ++
mm/huge_memory.c | 117 +++++++++++++++++++++++++++++++++-------
mm/rmap.c | 69 +++++++++++++-----------
3 files changed, 139 insertions(+), 51 deletions(-)
--
2.33.1