2023-05-24 06:22:57

by Yang Yang

Subject: [PATCH v9 0/5] ksm: support tracking KSM-placed zero-pages

From: xu xin <[email protected]>

The core idea of this patch set is to let users see the number of all
pages merged by KSM, regardless of whether the use_zero_pages switch is
turned on, so that users can know how much of the free-memory increase is
really due to their madvise(MADV_MERGEABLE) actions. The problem is that
when use_zero_pages is enabled, all empty pages are merged with the kernel
zero page instead of with each other (as they are when use_zero_pages is
disabled), and these zero pages are then no longer tracked by KSM.

The motivation for this is described at:
https://lore.kernel.org/lkml/[email protected]/

In short, we want to support tracking of KSM-placed zero pages without
affecting the use_zero_pages feature, so that application developers can
also benefit from knowing the actual KSM profit, including KSM-placed zero
pages, and eventually use it to optimize their applications even when
/sys/kernel/mm/ksm/use_zero_pages is enabled.
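
As an illustration only (not part of this series), here is a minimal
user-space sketch of how an application might estimate the total KSM
saving once the ksm_zero_pages counter introduced below is available. The
read_ksm_counter() helper is made up for the example; the file names
follow the existing /sys/kernel/mm/ksm/ layout:

#include <stdio.h>
#include <unistd.h>

/* Made-up helper: read one counter from /sys/kernel/mm/ksm/<name>. */
static unsigned long read_ksm_counter(const char *name)
{
	char path[128];
	unsigned long val = 0;
	FILE *f;

	snprintf(path, sizeof(path), "/sys/kernel/mm/ksm/%s", name);
	f = fopen(path, "r");
	if (!f)
		return 0;
	if (fscanf(f, "%lu", &val) != 1)
		val = 0;
	fclose(f);
	return val;
}

int main(void)
{
	unsigned long sharing = read_ksm_counter("pages_sharing");
	unsigned long zero = read_ksm_counter("ksm_zero_pages");
	long page_size = sysconf(_SC_PAGESIZE);

	/* With this series, pages_sharing + ksm_zero_pages approximates
	 * the number of pages actually saved by KSM. */
	printf("KSM saved roughly %lu kB\n",
	       (sharing + zero) * (unsigned long)page_size / 1024);
	return 0;
}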

Change log
----------
v8->v9:
------
(1) The previous [PATCH v8 4/6] is squashed into the current [PATCH v9 2/5].

(2) Improve the code as per David's suggestions.

v7->v8:
-------
(1) Since [1] fixed the bug where pte_mkdirty on sparc64 made the PTE
writable, we can now remove the architecture restrictions on this feature.
(2) Improve the scheme for updating ksm_zero_pages: handle the case where
khugepaged replaces a shared zeropage with a THP.

[1] https://lore.kernel.org/all/[email protected]/

v6->v7:
-------
This is an all-new version, differing from v6, which relied on KSM's
rmap_item. The patch series now relies on pte_dirty instead of rmap_item,
so the general handling of tracking KSM-placed zero pages is simplified a
lot.

For safety, we restrict this feature to the tested and known-working
architectures (ARM, ARM64, and X86) for now.

xu xin (5):
ksm: support unsharing KSM-placed zero pages
ksm: count all zero pages placed by KSM
ksm: add ksm zero pages for each process
ksm: consider KSM-placed zeropages when calculating KSM profit
selftest: add a testcase of ksm zero pages

Documentation/admin-guide/mm/ksm.rst | 25 ++++++--
fs/proc/base.c | 1 +
include/linux/ksm.h | 22 +++++++
include/linux/mm_types.h | 9 ++-
mm/khugepaged.c | 2 +
mm/ksm.c | 28 ++++++--
mm/memory.c | 5 +-
tools/testing/selftests/mm/ksm_functional_tests.c | 78 ++++++++++++++++++++++-
8 files changed, 154 insertions(+), 16 deletions(-)

--
2.15.2


2023-05-24 06:24:20

by Yang Yang

Subject: [PATCH v9 1/5] ksm: support unsharing KSM-placed zero pages

From: xu xin <[email protected]>

When use_zero_pages of KSM is enabled, madvise(addr, len, MADV_UNMERGEABLE)
and other ways of triggering unsharing (like writing 2 to
/sys/kernel/mm/ksm/run) will *not* actually unshare the shared zeropages
placed by KSM (which contradicts the MADV_UNMERGEABLE documentation). As
these KSM-placed zero pages are out of KSM's control, the related KSM page
counters do not reflect how many zero pages were placed by KSM (these
special zero pages differ from initially mapped zero pages, because zero
pages mapped in MADV_UNMERGEABLE areas are expected to be complete,
unshared pages).

To avoid blindly unsharing all shared zero pages in applicable VMAs, this
patch uses pte_mkdirty (which is architecture-dependent) to mark KSM-placed
zero pages. Thus, MADV_UNMERGEABLE will only unshare those KSM-placed zero
pages.

In addition, we'll reuse this mechanism to reliably identify KSM-placed
zero pages so they can be properly accounted for (e.g., when calculating
the KSM profit that includes zeropages) in later patches.

The patch will not degrade the performance of use_zero_pages, as it does
not change the way empty pages are merged when use_zero_pages is enabled.
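
For illustration only (not part of the patch), the following user-space
sketch shows the flow this change targets: a zero-filled region is merged
by KSM with use_zero_pages enabled, and MADV_UNMERGEABLE is then expected
to unshare the KSM-placed zeropages as well. The region size is arbitrary
and error handling is simplified:

#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define REGION_SIZE (64UL * 1024 * 1024)

int main(void)
{
	char *buf = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return EXIT_FAILURE;

	/* Populate the pages but keep them zero-filled. */
	memset(buf, 0, REGION_SIZE);

	/*
	 * Let KSM scan the region. With use_zero_pages enabled, the
	 * zero-filled pages are merged with the kernel zero page, and
	 * this patch marks the resulting PTEs dirty.
	 */
	madvise(buf, REGION_SIZE, MADV_MERGEABLE);

	/* ... ksmd scans and deduplicates the region ... */

	/*
	 * With this patch, unmerging also unshares the KSM-placed
	 * zeropages, as the MADV_UNMERGEABLE documentation promises.
	 */
	madvise(buf, REGION_SIZE, MADV_UNMERGEABLE);

	munmap(buf, REGION_SIZE);
	return EXIT_SUCCESS;
}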

Signed-off-by: xu xin <[email protected]>
Suggested-by: David Hildenbrand <[email protected]>
Cc: Claudio Imbrenda <[email protected]>
Cc: Xuexin Jiang <[email protected]>
Reviewed-by: Xiaokai Ran <[email protected]>
Reviewed-by: Yang Yang <[email protected]>
---
include/linux/ksm.h | 8 ++++++++
mm/ksm.c | 11 ++++++++---
2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 899a314bc487..4fd5f4a50bac 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -26,6 +26,12 @@ int ksm_disable(struct mm_struct *mm);

int __ksm_enter(struct mm_struct *mm);
void __ksm_exit(struct mm_struct *mm);
+/*
+ * To identify zeropages that were mapped by KSM, we reuse the dirty bit
+ * in the PTE. If the PTE is dirty, the zeropage was mapped by KSM when
+ * deduplicating memory.
+ */
+#define is_ksm_zero_pte(pte) (is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))

static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
{
@@ -95,6 +101,8 @@ static inline void ksm_exit(struct mm_struct *mm)
{
}

+#define is_ksm_zero_pte(pte) 0
+
#ifdef CONFIG_MEMORY_FAILURE
static inline void collect_procs_ksm(struct page *page,
struct list_head *to_kill, int force_early)
diff --git a/mm/ksm.c b/mm/ksm.c
index 0156bded3a66..f31c789406b1 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -447,7 +447,8 @@ static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long nex
if (is_migration_entry(entry))
page = pfn_swap_entry_to_page(entry);
}
- ret = page && PageKsm(page);
+ /* return 1 if the page is a normal ksm page or a KSM-placed zero page */
+ ret = (page && PageKsm(page)) || is_ksm_zero_pte(*pte);
pte_unmap_unlock(pte, ptl);
return ret;
}
@@ -1220,8 +1221,12 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
page_add_anon_rmap(kpage, vma, addr, RMAP_NONE);
newpte = mk_pte(kpage, vma->vm_page_prot);
} else {
- newpte = pte_mkspecial(pfn_pte(page_to_pfn(kpage),
- vma->vm_page_prot));
+ /*
+ * Use pte_mkdirty to mark the zero page mapped by KSM, and then
+ * we can easily track all KSM-placed zero pages by checking if
+ * the dirty bit in zero page's PTE is set.
+ */
+ newpte = pte_mkdirty(pte_mkspecial(pfn_pte(page_to_pfn(kpage), vma->vm_page_prot)));
/*
* We're replacing an anonymous page with a zero page, which is
* not anonymous. We need to do proper accounting otherwise we
--
2.15.2

2023-05-24 06:26:29

by Yang Yang

Subject: [PATCH v9 2/5] ksm: count all zero pages placed by KSM

From: xu xin <[email protected]>

As pages_sharing and pages_shared don't include the number of zero pages
merged by KSM, we cannot know how many zero pages were placed by KSM when
use_zero_pages is enabled, which means KSM is not transparent about all of
the pages it actually merges. In the early days of use_zero_pages, zero
pages could not be unshared by means such as MADV_UNMERGEABLE, so it was
hard to count how many times one of those zeropages was later unmerged.

But now that KSM-placed zero pages can be unshared accurately, we can
easily count both how many times a page full of zeroes was merged with the
zero page and how many times one of those pages was later unmerged. This
helps to estimate the memory demand if each and every shared page were to
be unshared.

So we add ksm_zero_pages under /sys/kernel/mm/ksm/ to show the number
of all zero pages placed by KSM. Meanwhile, we update the Documentation.
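
As a purely illustrative sketch (the real check lives in the selftest
added later in this series), user space could verify the new counter
roughly like this; read_ksm_zero_pages() is a made-up helper and the wait
for ksmd is elided:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

/* Made-up helper: read /sys/kernel/mm/ksm/ksm_zero_pages. */
static unsigned long read_ksm_zero_pages(void)
{
	unsigned long val = 0;
	FILE *f = fopen("/sys/kernel/mm/ksm/ksm_zero_pages", "r");

	if (f) {
		if (fscanf(f, "%lu", &val) != 1)
			val = 0;
		fclose(f);
	}
	return val;
}

int main(void)
{
	const size_t size = 4UL << 20;	/* 4 MiB of zero-filled pages */
	char *mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (mem == MAP_FAILED)
		return 1;

	memset(mem, 0, size);
	madvise(mem, size, MADV_MERGEABLE);
	/* ... wait for ksmd to complete a scan pass ... */
	printf("after merge:   ksm_zero_pages = %lu\n", read_ksm_zero_pages());

	madvise(mem, size, MADV_UNMERGEABLE);
	printf("after unmerge: ksm_zero_pages = %lu\n", read_ksm_zero_pages());

	munmap(mem, size);
	return 0;
}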

Signed-off-by: xu xin <[email protected]>
Suggested-by: David Hildenbrand <[email protected]>
Cc: Claudio Imbrenda <[email protected]>
Cc: Xuexin Jiang <[email protected]>
Reviewed-by: Xiaokai Ran <[email protected]>
Reviewed-by: Yang Yang <[email protected]>
---
Documentation/admin-guide/mm/ksm.rst | 7 +++++++
include/linux/ksm.h | 12 ++++++++++++
mm/khugepaged.c | 2 ++
mm/ksm.c | 12 ++++++++++++
mm/memory.c | 5 ++++-
5 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/mm/ksm.rst b/Documentation/admin-guide/mm/ksm.rst
index 7626392fe82c..6cc919dbfd55 100644
--- a/Documentation/admin-guide/mm/ksm.rst
+++ b/Documentation/admin-guide/mm/ksm.rst
@@ -173,6 +173,13 @@ stable_node_chains
the number of KSM pages that hit the ``max_page_sharing`` limit
stable_node_dups
number of duplicated KSM pages
+ksm_zero_pages
+ how many zero pages that are still mapped into processes were mapped by
+ KSM when deduplicating.
+
+When ``use_zero_pages`` is/was enabled, the sum of ``pages_sharing`` +
+``ksm_zero_pages`` represents the actual number of pages saved by KSM.
+If ``use_zero_pages`` has never been enabled, ``ksm_zero_pages`` is 0.

A high ratio of ``pages_sharing`` to ``pages_shared`` indicates good
sharing, but a high ratio of ``pages_unshared`` to ``pages_sharing``
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 4fd5f4a50bac..f2d98c53cfec 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -33,6 +33,14 @@ void __ksm_exit(struct mm_struct *mm);
*/
#define is_ksm_zero_pte(pte) (is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))

+extern unsigned long ksm_zero_pages;
+
+static inline void ksm_notify_unmap_zero_page(pte_t pte)
+{
+ if (is_ksm_zero_pte(pte))
+ ksm_zero_pages--;
+}
+
static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
{
int ret;
@@ -103,6 +111,10 @@ static inline void ksm_exit(struct mm_struct *mm)

#define is_ksm_zero_pte(pte) 0

+static inline void ksm_notify_unmap_zero_page(pte_t pte)
+{
+}
+
#ifdef CONFIG_MEMORY_FAILURE
static inline void collect_procs_ksm(struct page *page,
struct list_head *to_kill, int force_early)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 6b9d39d65b73..e417a928ef8d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -19,6 +19,7 @@
#include <linux/page_table_check.h>
#include <linux/swapops.h>
#include <linux/shmem_fs.h>
+#include <linux/ksm.h>

#include <asm/tlb.h>
#include <asm/pgalloc.h>
@@ -711,6 +712,7 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
spin_lock(ptl);
ptep_clear(vma->vm_mm, address, _pte);
spin_unlock(ptl);
+ ksm_notify_unmap_zero_page(pteval);
}
} else {
src_page = pte_page(pteval);
diff --git a/mm/ksm.c b/mm/ksm.c
index f31c789406b1..d3ed90159322 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -278,6 +278,9 @@ static unsigned int zero_checksum __read_mostly;
/* Whether to merge empty (zeroed) pages with actual zero pages */
static bool ksm_use_zero_pages __read_mostly;

+/* The number of zero pages placed by KSM */
+unsigned long ksm_zero_pages;
+
#ifdef CONFIG_NUMA
/* Zeroed when merging across nodes is not allowed */
static unsigned int ksm_merge_across_nodes = 1;
@@ -1227,6 +1230,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
* the dirty bit in zero page's PTE is set.
*/
newpte = pte_mkdirty(pte_mkspecial(pfn_pte(page_to_pfn(kpage), vma->vm_page_prot)));
+ ksm_zero_pages++;
/*
* We're replacing an anonymous page with a zero page, which is
* not anonymous. We need to do proper accounting otherwise we
@@ -3354,6 +3358,13 @@ static ssize_t pages_volatile_show(struct kobject *kobj,
}
KSM_ATTR_RO(pages_volatile);

+static ssize_t ksm_zero_pages_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sysfs_emit(buf, "%ld\n", ksm_zero_pages);
+}
+KSM_ATTR_RO(ksm_zero_pages);
+
static ssize_t general_profit_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
@@ -3421,6 +3432,7 @@ static struct attribute *ksm_attrs[] = {
&pages_sharing_attr.attr,
&pages_unshared_attr.attr,
&pages_volatile_attr.attr,
+ &ksm_zero_pages_attr.attr,
&full_scans_attr.attr,
#ifdef CONFIG_NUMA
&merge_across_nodes_attr.attr,
diff --git a/mm/memory.c b/mm/memory.c
index 8358f3b853f2..09c31160af4e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1415,8 +1415,10 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
tlb_remove_tlb_entry(tlb, pte, addr);
zap_install_uffd_wp_if_needed(vma, addr, pte, details,
ptent);
- if (unlikely(!page))
+ if (unlikely(!page)) {
+ ksm_notify_unmap_zero_page(ptent);
continue;
+ }

delay_rmap = 0;
if (!PageAnon(page)) {
@@ -3120,6 +3122,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
inc_mm_counter(mm, MM_ANONPAGES);
}
} else {
+ ksm_notify_unmap_zero_page(vmf->orig_pte);
inc_mm_counter(mm, MM_ANONPAGES);
}
flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
--
2.15.2

2023-05-24 07:27:02

by David Hildenbrand

Subject: Re: [PATCH v9 1/5] ksm: support unsharing KSM-placed zero pages

On 24.05.23 07:57, Yang Yang wrote:
> From: xu xin <[email protected]>
>
> When use_zero_pages of ksm is enabled, madvise(addr, len, MADV_UNMERGEABLE)
> and other ways (like write 2 to /sys/kernel/mm/ksm/run) to trigger
> unsharing will *not* actually unshare the shared zeropage as placed by KSM
> (which is against the MADV_UNMERGEABLE documentation). As these KSM-placed
> zero pages are out of the control of KSM, the related counts of ksm pages
> don't expose how many zero pages are placed by KSM (these special zero
> pages are different from those initially mapped zero pages, because the
> zero pages mapped to MADV_UNMERGEABLE areas are expected to be a complete
> and unshared page).
>
> To not blindly unshare all shared zero_pages in applicable VMAs, the patch
> use pte_mkdirty (related with architecture) to mark KSM-placed zero pages.
> Thus, MADV_UNMERGEABLE will only unshare those KSM-placed zero pages.
>
> In addition, we'll reuse this mechanism to reliably identify KSM-placed
> ZeroPages to properly account for them (e.g., calculating the KSM profit
> that includes zeropages) in the latter patches.
>
> The patch will not degrade the performance of use_zero_pages as it doesn't
> change the way of merging empty pages in use_zero_pages's feature.
>
> Signed-off-by: xu xin <[email protected]>
> Suggested-by: David Hildenbrand <[email protected]>
> Cc: Claudio Imbrenda <[email protected]>
> Cc: Xuexin Jiang <[email protected]>
> Reviewed-by: Xiaokai Ran <[email protected]>
> Reviewed-by: Yang Yang <[email protected]>
> ---
> include/linux/ksm.h | 8 ++++++++
> mm/ksm.c | 11 ++++++++---
> 2 files changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
> index 899a314bc487..4fd5f4a50bac 100644
> --- a/include/linux/ksm.h
> +++ b/include/linux/ksm.h
> @@ -26,6 +26,12 @@ int ksm_disable(struct mm_struct *mm);
>
> int __ksm_enter(struct mm_struct *mm);
> void __ksm_exit(struct mm_struct *mm);
> +/*
> + * To identify zeropages that were mapped by KSM, we reuse the dirty bit
> + * in the PTE. If the PTE is dirty, the zeropage was mapped by KSM when
> + * deduplicating memory.
> + */
> +#define is_ksm_zero_pte(pte) (is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
>
> static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
> {
> @@ -95,6 +101,8 @@ static inline void ksm_exit(struct mm_struct *mm)
> {
> }
>
> +#define is_ksm_zero_pte(pte) 0
> +

Not required in this patch (and AFAIKS in the others). So you can drop that.

Reviewed-by: David Hildenbrand <[email protected]>

--
Thanks,

David / dhildenb


2023-05-24 07:36:27

by David Hildenbrand

Subject: Re: [PATCH v9 2/5] ksm: count all zero pages placed by KSM

On 24.05.23 07:57, Yang Yang wrote:
> From: xu xin <[email protected]>
>
> As pages_sharing and pages_shared don't include the number of zero pages
> merged by KSM, we cannot know how many pages are zero pages placed by KSM
> when enabling use_zero_pages, which leads to KSM not being transparent with
> all actual merged pages by KSM. In the early days of use_zero_pages,
> zero-pages was unable to get unshared by the ways like MADV_UNMERGEABLE so
> it's hard to count how many times one of those zeropages was then unmerged.
>
> But now, unsharing KSM-placed zero page accurately has been achieved, so we
> can easily count both how many times a page full of zeroes was merged with
> zero-page and how many times one of those pages was then unmerged. and so,
> it helps to estimate memory demands when each and every shared page could
> get unshared.
>
> So we add ksm_zero_pages under /sys/kernel/mm/ksm/ to show the number
> of all zero pages placed by KSM. Meanwhile, we update the Documentation.
>
> Signed-off-by: xu xin <[email protected]>
> Suggested-by: David Hildenbrand <[email protected]>
> Cc: Claudio Imbrenda <[email protected]>
> Cc: Xuexin Jiang <[email protected]>
> Reviewed-by: Xiaokai Ran <[email protected]>
> Reviewed-by: Yang Yang <[email protected]>
> ---
> Documentation/admin-guide/mm/ksm.rst | 7 +++++++
> include/linux/ksm.h | 12 ++++++++++++
> mm/khugepaged.c | 2 ++
> mm/ksm.c | 12 ++++++++++++
> mm/memory.c | 5 ++++-
> 5 files changed, 37 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/admin-guide/mm/ksm.rst b/Documentation/admin-guide/mm/ksm.rst
> index 7626392fe82c..6cc919dbfd55 100644
> --- a/Documentation/admin-guide/mm/ksm.rst
> +++ b/Documentation/admin-guide/mm/ksm.rst
> @@ -173,6 +173,13 @@ stable_node_chains
> the number of KSM pages that hit the ``max_page_sharing`` limit
> stable_node_dups
> number of duplicated KSM pages
> +ksm_zero_pages
> + how many zero pages that are still mapped into processes were mapped by
> + KSM when deduplicating.
> +
> +When ``use_zero_pages`` is/was enabled, the sum of ``pages_sharing`` +
> +``ksm_zero_pages`` represents the actual number of pages saved by KSM.
> +if ``use_zero_pages`` has never been enabled, ``ksm_zero_pages`` is 0.
>
> A high ratio of ``pages_sharing`` to ``pages_shared`` indicates good
> sharing, but a high ratio of ``pages_unshared`` to ``pages_sharing``
> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
> index 4fd5f4a50bac..f2d98c53cfec 100644
> --- a/include/linux/ksm.h
> +++ b/include/linux/ksm.h
> @@ -33,6 +33,14 @@ void __ksm_exit(struct mm_struct *mm);
> */
> #define is_ksm_zero_pte(pte) (is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
>
> +extern unsigned long ksm_zero_pages;
> +
> +static inline void ksm_notify_unmap_zero_page(pte_t pte)
> +{
> + if (is_ksm_zero_pte(pte))
> + ksm_zero_pages--;
> +}
> +
> static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
> {
> int ret;
> @@ -103,6 +111,10 @@ static inline void ksm_exit(struct mm_struct *mm)
>
> #define is_ksm_zero_pte(pte) 0
>
> +static inline void ksm_notify_unmap_zero_page(pte_t pte)
> +{
> +}
> +

Having proposed that name ... I realize that we call this function
whenever there might be a zeropage mapped (when we have !page after
vm_normal_page()) -- but it could also not be the zeropage.

Not really able to come up with a better name :)

ksm_notify_maybe_unmap_zero_page ?

ksm_maybe_unmap_zero_page ?


Maybe someone else reading along has a better idea. In any case, the
logic itself LGTM

Reviewed-by: David Hildenbrand <[email protected]>

--
Thanks,

David / dhildenb


2023-05-24 08:00:04

by xu

Subject: Re: [PATCH v9 2/5] ksm: count all zero pages placed by KSM

>> +extern unsigned long ksm_zero_pages;
>> +
>> +static inline void ksm_notify_unmap_zero_page(pte_t pte)
>> +{
>> + if (is_ksm_zero_pte(pte))
>> + ksm_zero_pages--;
>> +}
>> +
>> static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
>> {
>> int ret;
>> @@ -103,6 +111,10 @@ static inline void ksm_exit(struct mm_struct *mm)
>>
>> #define is_ksm_zero_pte(pte) 0
>>
>> +static inline void ksm_notify_unmap_zero_page(pte_t pte)
>> +{
>> +}
>> +
>
>Having proposed that name ... I realize that we call this function
>whenever there might be a zeropage mapped (when we have !page after
>vm_normal_page()) -- but it could also not be the zeropage.
>
>Not really able to come up with a better name :)
>
>ksm_notify_maybe_unmap_zero_page ?
>
>ksm_maybe_unmap_zero_page ?
>

Analogous to the existing name of ksm_might_need_to_copy, so maybe we can use
'ksm_might_unmap_zero_page',

>
>Maybe someone else reading along has a better idea. In any case, the
>logic itself LGTM


2023-05-24 08:31:58

by David Hildenbrand

Subject: Re: [PATCH v9 2/5] ksm: count all zero pages placed by KSM

On 24.05.23 09:55, xu xin wrote:
>>> +extern unsigned long ksm_zero_pages;
>>> +
>>> +static inline void ksm_notify_unmap_zero_page(pte_t pte)
>>> +{
>>> + if (is_ksm_zero_pte(pte))
>>> + ksm_zero_pages--;
>>> +}
>>> +
>>> static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
>>> {
>>> int ret;
>>> @@ -103,6 +111,10 @@ static inline void ksm_exit(struct mm_struct *mm)
>>>
>>> #define is_ksm_zero_pte(pte) 0
>>>
>>> +static inline void ksm_notify_unmap_zero_page(pte_t pte)
>>> +{
>>> +}
>>> +
>>
>> Having proposed that name ... I realize that we call this function
>> whenever there might be a zeropage mapped (when we have !page after
>> vm_normal_page()) -- but it could also not be the zeropage.
>>
>> Not really able to come up with a better name :)
>>
>> ksm_notify_maybe_unmap_zero_page ?
>>
>> ksm_maybe_unmap_zero_page ?
>>
>
> Analogous to the existing name of ksm_might_need_to_copy, so maybe we can use
> 'ksm_might_unmap_zero_page',

Yes, that should work :)

--
Thanks,

David / dhildenb


2023-05-24 09:36:24

by David Hildenbrand

Subject: Re: [PATCH v9 0/5] ksm: support tracking KSM-placed zero-pages

On 24.05.23 07:51, [email protected] wrote:
> From: xu xin <[email protected]>
>
> The core idea of this patch set is to enable users to perceive the number
> of any pages merged by KSM, regardless of whether use_zero_page switch has
> been turned on, so that users can know how much free memory increase is
> really due to their madvise(MERGEABLE) actions. But the problem is, when
> enabling use_zero_pages, all empty pages will be merged with kernel zero
> pages instead of with each other as use_zero_pages is disabled, and then
> these zero-pages are no longer monitored by KSM.
>
> The motivations to do this is seen at:
> https://lore.kernel.org/lkml/[email protected]/
>
> In one word, we hope to implement the support for KSM-placed zero pages
> tracking without affecting the feature of use_zero_pages, so that app
> developer can also benefit from knowing the actual KSM profit by getting
> KSM-placed zero pages to optimize applications eventually when
> /sys/kernel/mm/ksm/use_zero_pages is enabled.
>


Ran the tests and they worked as expected. I only had some remaining
feedback for the last patch, otherwise LGTM.

--
Thanks,

David / dhildenb