2023-07-19 14:19:05

by Ryan Roberts

Subject: [PATCH v2 0/3] Optimize large folio interaction with deferred split

Hi All,

This is v2 of a small series in support of my work to enable the use of large
folios for anonymous memory (known as "FLEXIBLE_THP" or "LARGE_ANON_FOLIO") [1].
It first makes it possible to add large, non-pmd-mappable folios to the deferred
split queue. Then it modifies zap_pte_range() to batch-remove spans of
physically contiguous pages from the rmap, which means that in the common
case the folio never needs to be put on the deferred split queue at all,
reducing lock contention and improving performance.
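
To illustrate the batching idea (the helper below is a sketch invented for
this cover letter with made-up names; it is not the code in patch 3): when
zapping, count how many consecutive present PTEs map the physically
following pages of the same large anon folio, then take the whole run out
of the rmap with a single folio_remove_rmap_range() call instead of one
page_remove_rmap() call per page:

/*
 * Illustrative sketch only: given the present PTE for @page, count how
 * many of the next @max_nr PTEs map the physically following pages of
 * the same large anon folio.
 */
static int count_contig_folio_ptes(pte_t *pte, struct page *page, int max_nr)
{
	struct folio *folio = page_folio(page);
	unsigned long pfn = page_to_pfn(page);
	int nr = 1;

	if (!folio_test_large(folio) || !folio_test_anon(folio))
		return 1;

	while (nr < max_nr) {
		pte_t ptent = ptep_get(pte + nr);

		if (!pte_present(ptent) || pte_pfn(ptent) != pfn + nr)
			break;
		nr++;
	}

	/* Never run past the end of the folio. */
	return min(nr, (int)(folio_nr_pages(folio) -
			     folio_page_idx(folio, page)));
}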

This becomes more visible once we have lots of large anonymous folios in the
system, and Huang Ying has suggested that solving this should be a prerequisite for
merging the main FLEXIBLE_THP/LARGE_ANON_FOLIO work.

The series applies on top of v6.5-rc2 and a branch is available at [2].

I don't have a full test run with the latest versions of all the patches on top
of the latest baseline, so I'm not posting results formally; I can gather them
if people feel they are necessary. Anecdotally though, for the kernel
compilation workload, this series reduces kernel time by ~4% and reduces
real time by ~0.4%, compared with [1].

Changes since v1 [3]
--------------------

- patch 2: Modified doc comment for folio_remove_rmap_range()
- patch 2: Hoisted _nr_pages_mapped manipulation out of the page loop so it's now
modified once per folio_remove_rmap_range() call.
- patch 2: Added check that page range is fully contained by folio in
folio_remove_rmap_range()
- patch 2: Fixed some nits raised by Huang, Ying for folio_remove_rmap_range()
- patch 3: Support batch-zap of all anon pages, not just those in anon vmas
- patch 3: Renamed various functions to make their use clear
- patch 3: Various minor refactoring/cleanups
- Added Reviewed-By tags - thanks!

[1] https://lore.kernel.org/linux-mm/[email protected]/
[2] https://gitlab.arm.com/linux-arm/linux-rr/-/tree/features/granule_perf/deferredsplit-lkml_v2
[3] https://lore.kernel.org/linux-mm/[email protected]/

Thanks,
Ryan


Ryan Roberts (3):
mm: Allow deferred splitting of arbitrary large anon folios
mm: Implement folio_remove_rmap_range()
mm: Batch-zap large anonymous folio PTE mappings

 include/linux/rmap.h |   2 +
 mm/memory.c          | 120 +++++++++++++++++++++++++++++++++++++++++++
 mm/rmap.c            |  76 ++++++++++++++++++++++++++-
 3 files changed, 196 insertions(+), 2 deletions(-)

--
2.25.1



2023-07-19 14:21:10

by Ryan Roberts

Subject: [PATCH v2 2/3] mm: Implement folio_remove_rmap_range()

Like page_remove_rmap() but batch-removes the rmap for a range of pages
belonging to a folio. This can provide a small speedup due to less
manipulation of the various counters. But more crucially, if removing the
rmap for all pages of a folio in a batch, there is no need to
(spuriously) add it to the deferred split list, which saves significant
cost when there is contention for the split queue lock.

All contained pages are accounted using the order-0 folio (or base page)
scheme.
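
For illustration, a hypothetical call site (the names below are made up for
this example; the real user is the zap path added in the next patch) would
look roughly like:

	/* All of [page, page + nr) lie within folio and are PTE-mapped. */
	folio_remove_rmap_range(folio, page, nr, vma);

with the caller holding the PTE lock across the call and all nr pages
belonging to the same VMA and folio.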

Reviewed-by: Yin Fengwei <[email protected]>
Reviewed-by: Zi Yan <[email protected]>
Signed-off-by: Ryan Roberts <[email protected]>
---
 include/linux/rmap.h |  2 ++
 mm/rmap.c            | 72 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 74 insertions(+)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b87d01660412..f578975c12c0 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -200,6 +200,8 @@ void page_add_file_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
+void folio_remove_rmap_range(struct folio *folio, struct page *page,
+		int nr, struct vm_area_struct *vma);
 
 void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
 		unsigned long address, rmap_t flags);
diff --git a/mm/rmap.c b/mm/rmap.c
index eb0bb00dae34..4022a3ab73a8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1359,6 +1359,78 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
 	mlock_vma_folio(folio, vma, compound);
 }
 
+/**
+ * folio_remove_rmap_range - Take down PTE mappings from a range of pages.
+ * @folio: Folio containing all pages in range.
+ * @page: First page in range to unmap.
+ * @nr: Number of pages to unmap.
+ * @vma: The VM area containing the range.
+ *
+ * All pages in the range must belong to the same VMA & folio. They
+ * must be mapped with PTEs, not a PMD.
+ *
+ * Context: Caller holds the pte lock.
+ */
+void folio_remove_rmap_range(struct folio *folio, struct page *page,
+			int nr, struct vm_area_struct *vma)
+{
+	atomic_t *mapped = &folio->_nr_pages_mapped;
+	int nr_unmapped = 0;
+	int nr_mapped;
+	bool last;
+	enum node_stat_item idx;
+
+	if (unlikely(folio_test_hugetlb(folio))) {
+		VM_WARN_ON_FOLIO(1, folio);
+		return;
+	}
+
+	VM_WARN_ON_ONCE(page < &folio->page ||
+			page + nr > (&folio->page + folio_nr_pages(folio)));
+
+	if (!folio_test_large(folio)) {
+		/* Is this the page's last map to be removed? */
+		last = atomic_add_negative(-1, &page->_mapcount);
+		nr_unmapped = last;
+	} else {
+		for (; nr != 0; nr--, page++) {
+			/* Is this the page's last map to be removed? */
+			last = atomic_add_negative(-1, &page->_mapcount);
+			if (last)
+				nr_unmapped++;
+		}
+
+		/* Pages still mapped if folio mapped entirely */
+		nr_mapped = atomic_sub_return_relaxed(nr_unmapped, mapped);
+		if (nr_mapped >= COMPOUND_MAPPED)
+			nr_unmapped = 0;
+	}
+
+	if (nr_unmapped) {
+		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
+		__lruvec_stat_mod_folio(folio, idx, -nr_unmapped);
+
+		/*
+		 * Queue anon large folio for deferred split if at least one
+		 * page of the folio is unmapped and at least one page is still
+		 * mapped.
+		 */
+		if (folio_test_large(folio) &&
+		    folio_test_anon(folio) && nr_mapped)
+			deferred_split_folio(folio);
+	}
+
+	/*
+	 * It would be tidy to reset folio_test_anon mapping when fully
+	 * unmapped, but that might overwrite a racing page_add_anon_rmap
+	 * which increments mapcount after us but sets mapping before us:
+	 * so leave the reset to free_pages_prepare, and remember that
+	 * it's only reliable while mapped.
+	 */
+
+	munlock_vma_folio(folio, vma, false);
+}
+
 /**
  * page_remove_rmap - take down pte mapping from a page
  * @page: page to remove mapping from
--
2.25.1


2023-07-19 18:36:44

by Yu Zhao

Subject: Re: [PATCH v2 2/3] mm: Implement folio_remove_rmap_range()

On Wed, Jul 19, 2023 at 7:55 AM Ryan Roberts <[email protected]> wrote:
>
> Like page_remove_rmap() but batch-removes the rmap for a range of pages
> belonging to a folio. This can provide a small speedup due to less
> manipulation of the various counters. But more crucially, if removing the
> rmap for all pages of a folio in a batch, there is no need to
> (spuriously) add it to the deferred split list, which saves significant
> cost when there is contention for the split queue lock.
>
> All contained pages are accounted using the order-0 folio (or base page)
> scheme.
>
> Reviewed-by: Yin Fengwei <[email protected]>
> Reviewed-by: Zi Yan <[email protected]>
> Signed-off-by: Ryan Roberts <[email protected]>

I have asked for this before but let me be very clear this time: we
need to generalize the existing functions rather than add more
specialized functions. Otherwise it'd get even harder to maintain down
the road.

folio_remove_rmap_range() needs to replace page_remove_rmap(). IOW,
page_remove_rmap() is just a wrapper around folio_remove_rmap_range().

2023-07-19 19:32:50

by Ryan Roberts

Subject: Re: [PATCH v2 2/3] mm: Implement folio_remove_rmap_range()

On 19/07/2023 19:23, Yu Zhao wrote:
> On Wed, Jul 19, 2023 at 7:55 AM Ryan Roberts <[email protected]> wrote:
>>
>> Like page_remove_rmap() but batch-removes the rmap for a range of pages
>> belonging to a folio. This can provide a small speedup due to less
>> manipulation of the various counters. But more crucially, if removing the
>> rmap for all pages of a folio in a batch, there is no need to
>> (spuriously) add it to the deferred split list, which saves significant
>> cost when there is contention for the split queue lock.
>>
>> All contained pages are accounted using the order-0 folio (or base page)
>> scheme.
>>
>> Reviewed-by: Yin Fengwei <[email protected]>
>> Reviewed-by: Zi Yan <[email protected]>
>> Signed-off-by: Ryan Roberts <[email protected]>
>
> I have asked for this before but let me be very clear this time: we
> need to generalize the existing functions rather than add more
> specialized functions. Otherwise it'd get even harder to maintain down
> the road.

Yeah fair enough, my fault; I wrote this before I had your feedback on the other
rmap function and overlooked it when refactoring this. I'll fix it and repost.

>
> folio_remove_rmap_range() needs to replace page_remove_rmap(). IOW,
> page_remove_rmap() is just a wrapper around folio_remove_rmap_range().
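
Agreed on the shape; presumably the !compound case of page_remove_rmap()
just becomes a length-1 call into the range function, something along these
lines (untested sketch only; the PMD-mapped/compound path would still need
its own handling, or folio_remove_rmap_range() would need an extra
parameter):

void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
			bool compound)
{
	if (!compound) {
		/* A single PTE-mapped page is just a range of length 1. */
		folio_remove_rmap_range(page_folio(page), page, 1, vma);
		return;
	}

	/* ... existing PMD-mapped (compound) handling unchanged ... */
}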