2024-05-24 08:57:46

by Chengming Zhou

Subject: [PATCH 0/4] mm/ksm: cmp_and_merge_page() optimizations and cleanup

Hello,

This series mainly optimizes cmp_and_merge_page() to use a more efficient,
separate code flow for ksm pages and non-ksm anon pages.

- ksm page: obviously no need to calculate the checksum.
- anon page: no need to search the stable tree if the page is changing fast,
and try to merge with the zero page before searching for a ksm page on the
stable tree.

Please see patch-2 for details.
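
Roughly, the reworked flow looks like the sketch below (a heavily
simplified sketch: the migration/fixup handling in the ksm-page branch
is left out, see patch-2 for the real code):

    stable_node = page_stable_node(page);
    if (stable_node) {
            /* ksm page: no checksum needed, its content can't change */
            if (!is_page_sharing_candidate(stable_node))
                    max_page_sharing_bypass = true;
    } else {
            /* non-ksm anon page */
            remove_rmap_item_from_tree(rmap_item);

            /* changing fast? then don't bother with the trees */
            checksum = calc_checksum(page);
            if (rmap_item->oldchecksum != checksum) {
                    rmap_item->oldchecksum = checksum;
                    return;
            }

            /* try the zero page before the stable tree */
            if (!try_to_merge_with_zero_page(rmap_item, page))
                    return;
    }

    /* then search the stable tree, falling back to the unstable tree */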

Patch-3 is a cleanup and also a small optimization of the chain()/chain_prune()
interfaces, which had made stable_tree_search()/stable_tree_insert() overly
complex.

Patch-4 fixes the behavior of stable_tree_search() when handling a migrating
stable_node: return the migrated ksm page if no shareable ksm page is found
on the stable tree, so our rmap_item can be added to it directly.

I have done simple testing with "hackbench -g 1 -l 300000" (maybe I need
to use a better workload) on my machine, and have seen a small decrease in
ksmd CPU usage and some improvement in cmp_and_merge_page() latency:

Before:

- ksm page
[128, 256) 21 | |
[256, 512) 12509 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[512, 1K) 769 |@@@ |
[1K, 2K) 99 | |
[2K, 4K) 4 | |
[4K, 8K) 2 | |
[8K, 16K) 8 | |

- anon page
[512, 1K) 19 | |
[1K, 2K) 7160 |@@@@@@@@@@@ |
[2K, 4K) 33516 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[4K, 8K) 33172 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[8K, 16K) 11305 |@@@@@@@@@@@@@@@@@ |
[16K, 32K) 1303 |@@ |
[32K, 64K) 16 | |
[64K, 128K) 6 | |
[128K, 256K) 6 | |
[256K, 512K) 9 | |
[512K, 1M) 3 | |
[1M, 2M) 2 | |
[2M, 4M) 1 | |

After:

- ksm page
[128, 256) 9 | |
[256, 512) 915 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[512, 1K) 41 |@@ |
[1K, 2K) 1 | |
[2K, 4K) 1 | |

- anon page
[512, 1K) 374 | |
[1K, 2K) 5367 |@@@@ |
[2K, 4K) 64362 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[4K, 8K) 27721 |@@@@@@@@@@@@@@@@@@@@@@ |
[8K, 16K) 1047 | |
[16K, 32K) 63 | |
[32K, 64K) 7 | |
[64K, 128K) 6 | |
[128K, 256K) 5 | |
[256K, 512K) 3 | |
[512K, 1M) 1 | |

We can see that the latency of cmp_and_merge_page() when handling non-ksm
anon pages has improved.

Thanks for review and comments!

Signed-off-by: Chengming Zhou <[email protected]>
---
Chengming Zhou (4):
mm/ksm: refactor out try_to_merge_with_zero_page()
mm/ksm: don't waste time searching stable tree for fast changing page
mm/ksm: optimize the chain()/chain_prune() interfaces
mm/ksm: use ksm page itself if no another ksm page is found on stable tree

mm/ksm.c | 266 ++++++++++++++++++++-------------------------------------------
1 file changed, 84 insertions(+), 182 deletions(-)
---
base-commit: 2218eca02bc4203f68b8fb7e1116e5a2601506d1
change-id: 20240524-b4-ksm-scan-optimize-d2fd9401c357

Best regards,
--
Chengming Zhou <[email protected]>



2024-05-24 08:57:54

by Chengming Zhou

Subject: [PATCH 1/4] mm/ksm: refactor out try_to_merge_with_zero_page()

In preparation for later changes, refactor out a new function called
try_to_merge_with_zero_page(), which tries to merge with zero page.

Signed-off-by: Chengming Zhou <[email protected]>
---
mm/ksm.c | 67 +++++++++++++++++++++++++++++++++++-----------------------------
1 file changed, 37 insertions(+), 30 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 4dc707d175fa..cbd4ba7ea974 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1531,6 +1531,41 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
return err;
}

+/* This function returns 0 if the pages were merged, -EFAULT otherwise. */
+static int try_to_merge_with_zero_page(struct ksm_rmap_item *rmap_item,
+ struct page *page)
+{
+ struct mm_struct *mm = rmap_item->mm;
+ int err = -EFAULT;
+
+ /*
+ * Same checksum as an empty page. We attempt to merge it with the
+ * appropriate zero page if the user enabled this via sysfs.
+ */
+ if (ksm_use_zero_pages && (rmap_item->oldchecksum == zero_checksum)) {
+ struct vm_area_struct *vma;
+
+ mmap_read_lock(mm);
+ vma = find_mergeable_vma(mm, rmap_item->address);
+ if (vma) {
+ err = try_to_merge_one_page(vma, page,
+ ZERO_PAGE(rmap_item->address));
+ trace_ksm_merge_one_page(
+ page_to_pfn(ZERO_PAGE(rmap_item->address)),
+ rmap_item, mm, err);
+ } else {
+ /*
+ * If the vma is out of date, we do not need to
+ * continue.
+ */
+ err = 0;
+ }
+ mmap_read_unlock(mm);
+ }
+
+ return err;
+}
+
/*
* try_to_merge_with_ksm_page - like try_to_merge_two_pages,
* but no new kernel page is allocated: kpage must already be a ksm page.
@@ -2305,7 +2340,6 @@ static void stable_tree_append(struct ksm_rmap_item *rmap_item,
*/
static noinline void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_item)
{
- struct mm_struct *mm = rmap_item->mm;
struct ksm_rmap_item *tree_rmap_item;
struct page *tree_page = NULL;
struct ksm_stable_node *stable_node;
@@ -2374,36 +2408,9 @@ static noinline void cmp_and_merge_page(struct page *page, struct ksm_rmap_item
return;
}

- /*
- * Same checksum as an empty page. We attempt to merge it with the
- * appropriate zero page if the user enabled this via sysfs.
- */
- if (ksm_use_zero_pages && (checksum == zero_checksum)) {
- struct vm_area_struct *vma;
+ if (!try_to_merge_with_zero_page(rmap_item, page))
+ return;

- mmap_read_lock(mm);
- vma = find_mergeable_vma(mm, rmap_item->address);
- if (vma) {
- err = try_to_merge_one_page(vma, page,
- ZERO_PAGE(rmap_item->address));
- trace_ksm_merge_one_page(
- page_to_pfn(ZERO_PAGE(rmap_item->address)),
- rmap_item, mm, err);
- } else {
- /*
- * If the vma is out of date, we do not need to
- * continue.
- */
- err = 0;
- }
- mmap_read_unlock(mm);
- /*
- * In case of failure, the page was not really empty, so we
- * need to continue. Otherwise we're done.
- */
- if (!err)
- return;
- }
tree_rmap_item =
unstable_tree_search_insert(rmap_item, page, &tree_page);
if (tree_rmap_item) {

--
2.45.1


2024-05-24 08:58:08

by Chengming Zhou

Subject: [PATCH 2/4] mm/ksm: don't waste time searching stable tree for fast changing page

The code flow in cmp_and_merge_page() is suboptimal for handling the
ksm page and non-ksm page cases at the same time. For example:

- ksm page
1. Mostly we just return if this ksm page is not migrated and this
rmap_item is already on the rmap hlist; otherwise we have to fix up the
rmap_item mapping.
2. But we absolutely don't need to calculate the checksum for this ksm
page, since it can't change.

- non-ksm page
1. First, there is no need to waste time searching the stable tree if
the page is changing fast.
2. Then try to merge with the zero page before searching the stable tree.
3. Finally, search the stable tree to find a mergeable ksm page.

This patch optimizes the code flow so the different handling of ksm
pages and non-ksm pages becomes clearer and more efficient.

Signed-off-by: Chengming Zhou <[email protected]>
---
mm/ksm.c | 32 +++++++++++++++++---------------
1 file changed, 17 insertions(+), 15 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index cbd4ba7ea974..2424081f386e 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2366,6 +2366,23 @@ static noinline void cmp_and_merge_page(struct page *page, struct ksm_rmap_item
*/
if (!is_page_sharing_candidate(stable_node))
max_page_sharing_bypass = true;
+ } else {
+ remove_rmap_item_from_tree(rmap_item);
+
+ /*
+ * If the hash value of the page has changed from the last time
+ * we calculated it, this page is changing frequently: therefore we
+ * don't want to insert it in the unstable tree, and we don't want
+ * to waste our time searching for something identical to it there.
+ */
+ checksum = calc_checksum(page);
+ if (rmap_item->oldchecksum != checksum) {
+ rmap_item->oldchecksum = checksum;
+ return;
+ }
+
+ if (!try_to_merge_with_zero_page(rmap_item, page))
+ return;
}

/* We first start with searching the page inside the stable tree */
@@ -2396,21 +2413,6 @@ static noinline void cmp_and_merge_page(struct page *page, struct ksm_rmap_item
return;
}

- /*
- * If the hash value of the page has changed from the last time
- * we calculated it, this page is changing frequently: therefore we
- * don't want to insert it in the unstable tree, and we don't want
- * to waste our time searching for something identical to it there.
- */
- checksum = calc_checksum(page);
- if (rmap_item->oldchecksum != checksum) {
- rmap_item->oldchecksum = checksum;
- return;
- }
-
- if (!try_to_merge_with_zero_page(rmap_item, page))
- return;
-
tree_rmap_item =
unstable_tree_search_insert(rmap_item, page, &tree_page);
if (tree_rmap_item) {

--
2.45.1


2024-05-24 08:58:22

by Chengming Zhou

Subject: [PATCH 3/4] mm/ksm: optimize the chain()/chain_prune() interfaces

The current implementation of stable_node_dup() makes the
chain()/chain_prune() interfaces and their usage overcomplicated.

Why? stable_node_dup() only finds and returns a candidate stable_node
for sharing, so the callers have to recheck with stable_node_dup_any()
whether any non-candidate stable_node exists, and then try
ksm_get_folio() on it again.

Actually, stable_node_dup() can just return the best stable_node it can
find, and the callers can then check whether it's a candidate for
sharing or not.

The code is also simplified, with fewer corner cases: for example,
stable_node and stable_node_dup can't be NULL if the returned tree_folio
is not NULL.
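
To illustrate, the caller side after this change looks roughly like the
sketch below (abridged from the diff, not the exact code):

    tree_folio = chain_prune(&stable_node_dup, &stable_node, root);
    if (!tree_folio) {
            /* stale stable_node(s) were freed, restart the tree walk */
            goto again;
    }
    /*
     * stable_node_dup is never NULL here, so the old
     * stable_node_dup_any() fallback and its VM_BUG_ON() go away;
     * candidacy is checked explicitly where it matters:
     */
    if (!is_page_sharing_candidate(stable_node_dup)) {
            /* the best dup already reached ksm_max_page_sharing */
    }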

Signed-off-by: Chengming Zhou <[email protected]>
---
mm/ksm.c | 152 ++++++++++++---------------------------------------------------
1 file changed, 27 insertions(+), 125 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 2424081f386e..f923699452ed 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1660,7 +1660,6 @@ static struct folio *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
struct ksm_stable_node *dup, *found = NULL, *stable_node = *_stable_node;
struct hlist_node *hlist_safe;
struct folio *folio, *tree_folio = NULL;
- int nr = 0;
int found_rmap_hlist_len;

if (!prune_stale_stable_nodes ||
@@ -1687,33 +1686,26 @@ static struct folio *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
folio = ksm_get_folio(dup, KSM_GET_FOLIO_NOLOCK);
if (!folio)
continue;
- nr += 1;
- if (is_page_sharing_candidate(dup)) {
- if (!found ||
- dup->rmap_hlist_len > found_rmap_hlist_len) {
- if (found)
- folio_put(tree_folio);
- found = dup;
- found_rmap_hlist_len = found->rmap_hlist_len;
- tree_folio = folio;
-
- /* skip put_page for found dup */
- if (!prune_stale_stable_nodes)
- break;
- continue;
- }
+ /* Pick the best candidate if possible. */
+ if (!found || (is_page_sharing_candidate(dup) &&
+ (!is_page_sharing_candidate(found) ||
+ dup->rmap_hlist_len > found_rmap_hlist_len))) {
+ if (found)
+ folio_put(tree_folio);
+ found = dup;
+ found_rmap_hlist_len = found->rmap_hlist_len;
+ tree_folio = folio;
+ /* skip put_page for found candidate */
+ if (!prune_stale_stable_nodes &&
+ is_page_sharing_candidate(found))
+ break;
+ continue;
}
folio_put(folio);
}

if (found) {
- /*
- * nr is counting all dups in the chain only if
- * prune_stale_stable_nodes is true, otherwise we may
- * break the loop at nr == 1 even if there are
- * multiple entries.
- */
- if (prune_stale_stable_nodes && nr == 1) {
+ if (hlist_is_singular_node(&found->hlist_dup, &stable_node->hlist)) {
/*
* If there's not just one entry it would
* corrupt memory, better BUG_ON. In KSM
@@ -1765,25 +1757,15 @@ static struct folio *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
hlist_add_head(&found->hlist_dup,
&stable_node->hlist);
}
+ } else {
+ /* Its hlist must be empty if no one found. */
+ free_stable_node_chain(stable_node, root);
}

*_stable_node_dup = found;
return tree_folio;
}

-static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stable_node,
- struct rb_root *root)
-{
- if (!is_stable_node_chain(stable_node))
- return stable_node;
- if (hlist_empty(&stable_node->hlist)) {
- free_stable_node_chain(stable_node, root);
- return NULL;
- }
- return hlist_entry(stable_node->hlist.first,
- typeof(*stable_node), hlist_dup);
-}
-
/*
* Like for ksm_get_folio, this function can free the *_stable_node and
* *_stable_node_dup if the returned tree_page is NULL.
@@ -1804,17 +1786,10 @@ static struct folio *__stable_node_chain(struct ksm_stable_node **_stable_node_d
bool prune_stale_stable_nodes)
{
struct ksm_stable_node *stable_node = *_stable_node;
+
if (!is_stable_node_chain(stable_node)) {
- if (is_page_sharing_candidate(stable_node)) {
- *_stable_node_dup = stable_node;
- return ksm_get_folio(stable_node, KSM_GET_FOLIO_NOLOCK);
- }
- /*
- * _stable_node_dup set to NULL means the stable_node
- * reached the ksm_max_page_sharing limit.
- */
- *_stable_node_dup = NULL;
- return NULL;
+ *_stable_node_dup = stable_node;
+ return ksm_get_folio(stable_node, KSM_GET_FOLIO_NOLOCK);
}
return stable_node_dup(_stable_node_dup, _stable_node, root,
prune_stale_stable_nodes);
@@ -1828,16 +1803,10 @@ static __always_inline struct folio *chain_prune(struct ksm_stable_node **s_n_d,
}

static __always_inline struct folio *chain(struct ksm_stable_node **s_n_d,
- struct ksm_stable_node *s_n,
+ struct ksm_stable_node **s_n,
struct rb_root *root)
{
- struct ksm_stable_node *old_stable_node = s_n;
- struct folio *tree_folio;
-
- tree_folio = __stable_node_chain(s_n_d, &s_n, root, false);
- /* not pruning dups so s_n cannot have changed */
- VM_BUG_ON(s_n != old_stable_node);
- return tree_folio;
+ return __stable_node_chain(s_n_d, s_n, root, false);
}

/*
@@ -1855,7 +1824,7 @@ static struct page *stable_tree_search(struct page *page)
struct rb_root *root;
struct rb_node **new;
struct rb_node *parent;
- struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
+ struct ksm_stable_node *stable_node, *stable_node_dup;
struct ksm_stable_node *page_node;
struct folio *folio;

@@ -1879,45 +1848,7 @@ static struct page *stable_tree_search(struct page *page)

cond_resched();
stable_node = rb_entry(*new, struct ksm_stable_node, node);
- stable_node_any = NULL;
tree_folio = chain_prune(&stable_node_dup, &stable_node, root);
- /*
- * NOTE: stable_node may have been freed by
- * chain_prune() if the returned stable_node_dup is
- * not NULL. stable_node_dup may have been inserted in
- * the rbtree instead as a regular stable_node (in
- * order to collapse the stable_node chain if a single
- * stable_node dup was found in it). In such case the
- * stable_node is overwritten by the callee to point
- * to the stable_node_dup that was collapsed in the
- * stable rbtree and stable_node will be equal to
- * stable_node_dup like if the chain never existed.
- */
- if (!stable_node_dup) {
- /*
- * Either all stable_node dups were full in
- * this stable_node chain, or this chain was
- * empty and should be rb_erased.
- */
- stable_node_any = stable_node_dup_any(stable_node,
- root);
- if (!stable_node_any) {
- /* rb_erase just run */
- goto again;
- }
- /*
- * Take any of the stable_node dups page of
- * this stable_node chain to let the tree walk
- * continue. All KSM pages belonging to the
- * stable_node dups in a stable_node chain
- * have the same content and they're
- * write protected at all times. Any will work
- * fine to continue the walk.
- */
- tree_folio = ksm_get_folio(stable_node_any,
- KSM_GET_FOLIO_NOLOCK);
- }
- VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
if (!tree_folio) {
/*
* If we walked over a stale stable_node,
@@ -1955,7 +1886,7 @@ static struct page *stable_tree_search(struct page *page)
goto chain_append;
}

- if (!stable_node_dup) {
+ if (!is_page_sharing_candidate(stable_node_dup)) {
/*
* If the stable_node is a chain and
* we got a payload match in memcmp
@@ -2064,9 +1995,6 @@ static struct page *stable_tree_search(struct page *page)
return &folio->page;

chain_append:
- /* stable_node_dup could be null if it reached the limit */
- if (!stable_node_dup)
- stable_node_dup = stable_node_any;
/*
* If stable_node was a chain and chain_prune collapsed it,
* stable_node has been updated to be the new regular
@@ -2111,7 +2039,7 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
struct rb_root *root;
struct rb_node **new;
struct rb_node *parent;
- struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
+ struct ksm_stable_node *stable_node, *stable_node_dup;
bool need_chain = false;

kpfn = folio_pfn(kfolio);
@@ -2127,33 +2055,7 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)

cond_resched();
stable_node = rb_entry(*new, struct ksm_stable_node, node);
- stable_node_any = NULL;
- tree_folio = chain(&stable_node_dup, stable_node, root);
- if (!stable_node_dup) {
- /*
- * Either all stable_node dups were full in
- * this stable_node chain, or this chain was
- * empty and should be rb_erased.
- */
- stable_node_any = stable_node_dup_any(stable_node,
- root);
- if (!stable_node_any) {
- /* rb_erase just run */
- goto again;
- }
- /*
- * Take any of the stable_node dups page of
- * this stable_node chain to let the tree walk
- * continue. All KSM pages belonging to the
- * stable_node dups in a stable_node chain
- * have the same content and they're
- * write protected at all times. Any will work
- * fine to continue the walk.
- */
- tree_folio = ksm_get_folio(stable_node_any,
- KSM_GET_FOLIO_NOLOCK);
- }
- VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
+ tree_folio = chain(&stable_node_dup, &stable_node, root);
if (!tree_folio) {
/*
* If we walked over a stale stable_node,

--
2.45.1


2024-05-24 08:58:29

by Chengming Zhou

Subject: [PATCH 4/4] mm/ksm: use ksm page itself if no another ksm page is found on stable tree

It's interesting that a mapped ksm page also needs to go through
stable_tree_search(), instead of using stable_tree_insert() directly.
The reason is that we have a minor optimization for a migrated ksm page
with only one mapcount: in that case we can find another ksm page that
is already on the stable tree to replace it.

But what if we can't find another shareable candidate on the stable
tree? Obviously, we should just return the ksm page itself once it has
been inserted into the tree. We shouldn't return NULL when no other ksm
page is found on the tree: this page is still mapped as a ksm page, but
returning NULL would cause its rmap_item to be removed and inserted
into the unstable tree instead.

We can ignore the is_page_sharing_candidate() check in this case, since
max_page_sharing_bypass is set to true in cmp_and_merge_page().

Signed-off-by: Chengming Zhou <[email protected]>
---
mm/ksm.c | 19 +++++--------------
1 file changed, 5 insertions(+), 14 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index f923699452ed..6dea83998258 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1940,11 +1940,8 @@ static struct page *stable_tree_search(struct page *page)
rb_link_node(&page_node->node, parent, new);
rb_insert_color(&page_node->node, root);
out:
- if (is_page_sharing_candidate(page_node)) {
- folio_get(folio);
- return &folio->page;
- } else
- return NULL;
+ folio_get(folio);
+ return &folio->page;

replace:
/*
@@ -1966,10 +1963,7 @@ static struct page *stable_tree_search(struct page *page)
rb_replace_node(&stable_node_dup->node,
&page_node->node,
root);
- if (is_page_sharing_candidate(page_node))
- folio_get(folio);
- else
- folio = NULL;
+ folio_get(folio);
} else {
rb_erase(&stable_node_dup->node, root);
folio = NULL;
@@ -1982,10 +1976,7 @@ static struct page *stable_tree_search(struct page *page)
list_del(&page_node->list);
DO_NUMA(page_node->nid = nid);
stable_node_chain_add_dup(page_node, stable_node);
- if (is_page_sharing_candidate(page_node))
- folio_get(folio);
- else
- folio = NULL;
+ folio_get(folio);
} else {
folio = NULL;
}
@@ -2009,7 +2000,7 @@ static struct page *stable_tree_search(struct page *page)
stable_node = alloc_stable_node_chain(stable_node_dup,
root);
if (!stable_node)
- return NULL;
+ goto out;
}
/*
* Add this stable_node dup that was

--
2.45.1


2024-05-24 15:13:08

by David Hildenbrand

Subject: Re: [PATCH 1/4] mm/ksm: refactor out try_to_merge_with_zero_page()

On 24.05.24 10:56, Chengming Zhou wrote:
> In preparation for later changes, refactor out a new function called
> try_to_merge_with_zero_page(), which tries to merge with zero page.
>
> Signed-off-by: Chengming Zhou <[email protected]>
> ---
> mm/ksm.c | 67 +++++++++++++++++++++++++++++++++++-----------------------------
> 1 file changed, 37 insertions(+), 30 deletions(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 4dc707d175fa..cbd4ba7ea974 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1531,6 +1531,41 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
> return err;
> }
>
> +/* This function returns 0 if the pages were merged, -EFAULT otherwise. */

No it doesn't. Check the "err = 0" case.

--
Cheers,

David / dhildenb


2024-05-27 04:39:26

by Chengming Zhou

Subject: Re: [PATCH 1/4] mm/ksm: refactor out try_to_merge_with_zero_page()

On 2024/5/24 23:12, David Hildenbrand wrote:
> On 24.05.24 10:56, Chengming Zhou wrote:
>> In preparation for later changes, refactor out a new function called
>> try_to_merge_with_zero_page(), which tries to merge with zero page.
>>
>> Signed-off-by: Chengming Zhou <[email protected]>
>> ---
>>   mm/ksm.c | 67 +++++++++++++++++++++++++++++++++++-----------------------------
>>   1 file changed, 37 insertions(+), 30 deletions(-)
>>
>> diff --git a/mm/ksm.c b/mm/ksm.c
>> index 4dc707d175fa..cbd4ba7ea974 100644
>> --- a/mm/ksm.c
>> +++ b/mm/ksm.c
>> @@ -1531,6 +1531,41 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
>>       return err;
>>   }
>>   +/* This function returns 0 if the pages were merged, -EFAULT otherwise. */
>
> No it doesn't. Check the "err = 0" case.
>

Right, how about this: This function returns 0 if the page were merged or the vma
is out of date, which means we don't need to continue, -EFAULT otherwise.

2024-05-27 07:18:33

by David Hildenbrand

Subject: Re: [PATCH 1/4] mm/ksm: refactor out try_to_merge_with_zero_page()

Am 27.05.24 um 06:36 schrieb Chengming Zhou:
> On 2024/5/24 23:12, David Hildenbrand wrote:
>> On 24.05.24 10:56, Chengming Zhou wrote:
>>> In preparation for later changes, refactor out a new function called
>>> try_to_merge_with_zero_page(), which tries to merge with zero page.
>>>
>>> Signed-off-by: Chengming Zhou <[email protected]>
>>> ---
>>>   mm/ksm.c | 67 +++++++++++++++++++++++++++++++++++-----------------------------
>>>   1 file changed, 37 insertions(+), 30 deletions(-)
>>>
>>> diff --git a/mm/ksm.c b/mm/ksm.c
>>> index 4dc707d175fa..cbd4ba7ea974 100644
>>> --- a/mm/ksm.c
>>> +++ b/mm/ksm.c
>>> @@ -1531,6 +1531,41 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
>>>       return err;
>>>   }
>>>   +/* This function returns 0 if the pages were merged, -EFAULT otherwise. */
>>
>> No it doesn't. Check the "err = 0" case.
>>
>
> Right, how about this: This function returns 0 if the page were merged or the vma
> is out of date, which means we don't need to continue, -EFAULT otherwise.

Maybe slightly adjusted:

This function returns 0 if the pages were merged or if they are no longer
merging candidates (e.g., VMA stale), -EFAULT otherwise.

--
Thanks,

David / dhildenb


2024-05-27 07:42:14

by Chengming Zhou

Subject: Re: [PATCH 1/4] mm/ksm: refactor out try_to_merge_with_zero_page()

On 2024/5/27 15:18, David Hildenbrand wrote:
> Am 27.05.24 um 06:36 schrieb Chengming Zhou:
>> On 2024/5/24 23:12, David Hildenbrand wrote:
>>> On 24.05.24 10:56, Chengming Zhou wrote:
>>>> In preparation for later changes, refactor out a new function called
>>>> try_to_merge_with_zero_page(), which tries to merge with zero page.
>>>>
>>>> Signed-off-by: Chengming Zhou <[email protected]>
>>>> ---
>>>>    mm/ksm.c | 67 +++++++++++++++++++++++++++++++++++-----------------------------
>>>>    1 file changed, 37 insertions(+), 30 deletions(-)
>>>>
>>>> diff --git a/mm/ksm.c b/mm/ksm.c
>>>> index 4dc707d175fa..cbd4ba7ea974 100644
>>>> --- a/mm/ksm.c
>>>> +++ b/mm/ksm.c
>>>> @@ -1531,6 +1531,41 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
>>>>        return err;
>>>>    }
>>>>    +/* This function returns 0 if the pages were merged, -EFAULT otherwise. */
>>>
>>> No it doesn't. Check the "err = 0" case.
>>>
>>
>> Right, how about this: This function returns 0 if the page were merged or the vma
>> is out of date, which means we don't need to continue, -EFAULT otherwise.
>
> Maybe slightly adjusted:
>
> This function returns 0 if the pages were merged or if they are no longer merging candidates (e.g., VMA stale), -EFAULT otherwise.
>

Great, will change to this. Thanks!
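
(For reference, the agreed wording would presumably end up in the next
version as something like the sketch below; this is taken from the
discussion above, not from a posted v2:)

    /*
     * This function returns 0 if the pages were merged or if they are no
     * longer merging candidates (e.g., VMA stale), -EFAULT otherwise.
     */
    static int try_to_merge_with_zero_page(struct ksm_rmap_item *rmap_item,
                                           struct page *page)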