2023-04-13 05:59:41

by Yang Yang

Subject: [PATCH v7 0/6] ksm: support tracking KSM-placed zero-pages

From: xu xin <[email protected]>

The core idea of this patch set is to enable users to perceive the number
of pages merged by KSM, regardless of whether the use_zero_pages switch
has been turned on, so that users can know how much of the free-memory
increase is really due to their madvise(MERGEABLE) actions. The problem is
that, when use_zero_pages is enabled, all empty pages are merged with the
kernel zero pages instead of with each other (as they would be with
use_zero_pages disabled), and these zero pages are then no longer tracked
by KSM.

The motivation for this is described at:
https://lore.kernel.org/lkml/[email protected]/

In short, we hope to support tracking of KSM-placed zero pages without
affecting the use_zero_pages feature, so that application developers can
also learn the actual KSM profit (including KSM-placed zero pages) and
eventually optimize their applications, even when
/sys/kernel/mm/ksm/use_zero_pages is enabled.

The patch set uses pte_mkdirty (architecture-dependent) to mark KSM-placed
zero pages. Some architectures (like sparc64) treat R/O dirty PTEs as
writable, which would break the KSM page state (wrprotect) and affect
KSM functionality. For safety, we restrict this feature to the tested and
known-working architectures (ARM, ARM64, and X86) for now.
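
To illustrate what this buys an administrator or developer, here is a
minimal userspace sketch (illustrative only, not part of the series; it
assumes the ksm_zero_pages knob added in patch 2/6) that sums
pages_sharing and ksm_zero_pages to obtain the real number of pages
deduplicated by KSM:

#include <stdio.h>

/* Read a single decimal counter from a sysfs file; -1 on error. */
static long read_knob(const char *path)
{
	long val = -1;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%ld", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}

int main(void)
{
	long sharing = read_knob("/sys/kernel/mm/ksm/pages_sharing");
	long zero = read_knob("/sys/kernel/mm/ksm/ksm_zero_pages");

	if (sharing < 0 || zero < 0)
		return 1;
	/* With use_zero_pages enabled, only the sum reflects all merging. */
	printf("pages merged by KSM: %ld\n", sharing + zero);
	return 0;
}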

Change log
----------
v6->v7:
This is an all-new version that differs from v6, which relied on KSM's
rmap_item. This patch series does not rely on rmap_item but on pte_dirty,
so the general handling of tracking KSM-placed zero pages is simplified a lot.

For safety, we restrict this feature to the tested and known-working
architectures (ARM, ARM64, and X86) for now.

xu xin (6):
ksm: support unsharing KSM-placed zero pages
ksm: count all zero pages placed by KSM
ksm: add ksm zero pages for each process
ksm: add documentation for ksm zero pages
ksm: update the calculation of KSM profit
selftest: add a testcase of ksm zero pages

Documentation/admin-guide/mm/ksm.rst | 26 +++++---
fs/proc/base.c | 3 +
include/linux/ksm.h | 27 ++++++++
include/linux/mm_types.h | 11 +++-
mm/Kconfig | 23 ++++++-
mm/ksm.c | 28 ++++++++-
mm/memory.c | 7 ++-
tools/testing/selftests/mm/ksm_functional_tests.c | 75 +++++++++++++++++++++++
8 files changed, 187 insertions(+), 13 deletions(-)

--
2.15.2


2023-04-13 06:01:01

by Yang Yang

Subject: [PATCH v7 2/6] ksm: count all zero pages placed by KSM

From: xu xin <[email protected]>

As pages_sharing and pages_shared don't include the number of zero pages
merged by KSM, we cannot know how many pages are zero pages placed by KSM
when use_zero_pages is enabled, which means KSM is not transparent about
all the pages it actually merges. In the early days of use_zero_pages,
zero pages could not be unshared by means like MADV_UNMERGEABLE, so it was
hard to count how many times one of those zero pages was then unmerged.

But now that KSM-placed zero pages can be unshared accurately, we can
easily count both how many times a page full of zeroes was merged with the
zero page and how many times one of those pages was then unmerged. This
helps to estimate memory demands when each and every shared page could
get unshared.

So we add ksm_zero_pages under /sys/kernel/mm/ksm/ to show the number
of all zero pages placed by KSM.

Signed-off-by: xu xin <[email protected]>
Suggested-by: David Hildenbrand <[email protected]>
Cc: Claudio Imbrenda <[email protected]>
Cc: Xuexin Jiang <[email protected]>
Reviewed-by: Xiaokai Ran <[email protected]>
Reviewed-by: Yang Yang <[email protected]>
---
include/linux/ksm.h | 16 ++++++++++++++++
mm/ksm.c | 18 ++++++++++++++++++
mm/memory.c | 7 ++++++-
3 files changed, 40 insertions(+), 1 deletion(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index f0cc085be42a..ea628d2a9105 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -99,9 +99,25 @@ static inline void folio_migrate_ksm(struct folio *newfolio, struct folio *old)
/* use pte_mkdirty to track a KSM-placed zero page */
#define set_pte_ksm_zero(pte) pte_mkdirty(pte_mkspecial(pte))
#define is_ksm_zero_pte(pte) (is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
+extern unsigned long ksm_zero_pages;
+static inline void inc_ksm_zero_pages(void)
+{
+ ksm_zero_pages++;
+}
+
+static inline void dec_ksm_zero_pages(void)
+{
+ ksm_zero_pages--;
+}
#else /* !CONFIG_KSM_ZERO_PAGES_TRACK */
#define set_pte_ksm_zero(pte) pte_mkspecial(pte)
#define is_ksm_zero_pte(pte) 0
+static inline void inc_ksm_zero_pages(void)
+{
+}
+static inline void dec_ksm_zero_pages(void)
+{
+}
#endif /* CONFIG_KSM_ZERO_PAGES_TRACK */

#endif /* __LINUX_KSM_H */
diff --git a/mm/ksm.c b/mm/ksm.c
index 1d1771a6b3fe..232680393741 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -278,6 +278,11 @@ static unsigned int zero_checksum __read_mostly;
/* Whether to merge empty (zeroed) pages with actual zero pages */
static bool ksm_use_zero_pages __read_mostly;

+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+/* The number of zero pages placed by KSM */
+unsigned long ksm_zero_pages;
+#endif
+
#ifdef CONFIG_NUMA
/* Zeroed when merging across nodes is not allowed */
static unsigned int ksm_merge_across_nodes = 1;
@@ -1243,6 +1248,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
} else {
newpte = set_pte_ksm_zero(pfn_pte(page_to_pfn(kpage),
vma->vm_page_prot));
+ inc_ksm_zero_pages();
/*
* We're replacing an anonymous page with a zero page, which is
* not anonymous. We need to do proper accounting otherwise we
@@ -3216,6 +3222,15 @@ static ssize_t pages_volatile_show(struct kobject *kobj,
}
KSM_ATTR_RO(pages_volatile);

+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+static ssize_t ksm_zero_pages_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sysfs_emit(buf, "%lu\n", ksm_zero_pages);
+}
+KSM_ATTR_RO(ksm_zero_pages);
+#endif /* CONFIG_KSM_ZERO_PAGES_TRACK */
+
static ssize_t general_profit_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
@@ -3286,6 +3301,9 @@ static struct attribute *ksm_attrs[] = {
&pages_sharing_attr.attr,
&pages_unshared_attr.attr,
&pages_volatile_attr.attr,
+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+ &ksm_zero_pages_attr.attr,
+#endif
&full_scans_attr.attr,
#ifdef CONFIG_NUMA
&merge_across_nodes_attr.attr,
diff --git a/mm/memory.c b/mm/memory.c
index 42dd1ab5e4e6..76598287280f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1416,8 +1416,11 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
tlb_remove_tlb_entry(tlb, pte, addr);
zap_install_uffd_wp_if_needed(vma, addr, pte, details,
ptent);
- if (unlikely(!page))
+ if (unlikely(!page)) {
+ if (is_ksm_zero_pte(ptent))
+ dec_ksm_zero_pages();
continue;
+ }

delay_rmap = 0;
if (!PageAnon(page)) {
@@ -3118,6 +3121,8 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
inc_mm_counter(mm, MM_ANONPAGES);
}
} else {
+ if (is_ksm_zero_pte(vmf->orig_pte))
+ dec_ksm_zero_pages();
inc_mm_counter(mm, MM_ANONPAGES);
}
flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
--
2.15.2

2023-04-13 06:01:16

by Yang Yang

Subject: [PATCH v7 3/6] ksm: add ksm zero pages for each process

From: xu xin <[email protected]>

As the number of KSM zero pages is not included in ksm_merging_pages per
process when use_zero_pages is enabled, it's unclear how many pages are
actually merged by KSM per process. To let users accurately estimate their
memory demands when unsharing KSM zero pages, it's necessary to show KSM
zero pages per process. In addition, it helps users to know the actual KSM
profit, because KSM-placed zero pages are also a benefit of KSM.

Since unsharing zero pages placed by KSM can now be done accurately,
tracking the merging and unmerging of empty pages is no longer
difficult.

Since we already have /proc/<pid>/ksm_stat, just add the information of
'ksm_zero_pages' in it.
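
For illustration, a small userspace sketch (illustrative only, not part
of the series) that extracts the new field from /proc/<pid>/ksm_stat; it
returns -1 if the kernel was built without CONFIG_KSM_ZERO_PAGES_TRACK
and the field is absent:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

/* Scan /proc/<pid>/ksm_stat for the ksm_zero_pages line. */
static long ksm_zero_pages_of(pid_t pid)
{
	char path[64], line[128];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/ksm_stat", (int)pid);
	f = fopen(path, "r");
	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "ksm_zero_pages %ld", &val) == 1)
			break;
	}
	fclose(f);
	return val;
}

int main(int argc, char **argv)
{
	pid_t pid = argc > 1 ? (pid_t)atoi(argv[1]) : getpid();

	printf("ksm_zero_pages of pid %d: %ld\n", (int)pid,
	       ksm_zero_pages_of(pid));
	return 0;
}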

Signed-off-by: xu xin <[email protected]>
Cc: Claudio Imbrenda <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Xuexin Jiang <[email protected]>
Cc: Xiaokai Ran <[email protected]>
Cc: Yang Yang <[email protected]>
---
fs/proc/base.c | 3 +++
include/linux/ksm.h | 10 ++++++----
include/linux/mm_types.h | 11 +++++++++--
mm/ksm.c | 2 +-
mm/memory.c | 4 ++--
5 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/fs/proc/base.c b/fs/proc/base.c
index ab9fa5b1b6be..235182cd143d 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -3211,6 +3211,9 @@ static int proc_pid_ksm_stat(struct seq_file *m, struct pid_namespace *ns,
seq_printf(m, "ksm_merging_pages %lu\n", mm->ksm_merging_pages);
seq_printf(m, "ksm_merge_type %s\n", ksm_merge_type(mm));
seq_printf(m, "ksm_process_profit %ld\n", ksm_process_profit(mm));
+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+ seq_printf(m, "ksm_zero_pages %lu\n", mm->ksm_zero_pages);
+#endif
mmput(mm);
}

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index ea628d2a9105..2da40af9ad4d 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -100,22 +100,24 @@ static inline void folio_migrate_ksm(struct folio *newfolio, struct folio *old)
#define set_pte_ksm_zero(pte) pte_mkdirty(pte_mkspecial(pte))
#define is_ksm_zero_pte(pte) (is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
extern unsigned long ksm_zero_pages;
-static inline void inc_ksm_zero_pages(void)
+static inline void inc_ksm_zero_pages(struct mm_struct *mm)
{
ksm_zero_pages++;
+ mm->ksm_zero_pages++;
}

-static inline void dec_ksm_zero_pages(void)
+static inline void dec_ksm_zero_pages(struct mm_struct *mm)
{
ksm_zero_pages--;
+ mm->ksm_zero_pages--;
}
#else /* !CONFIG_KSM_ZERO_PAGES_TRACK */
#define set_pte_ksm_zero(pte) pte_mkspecial(pte)
#define is_ksm_zero_pte(pte) 0
-static inline void inc_ksm_zero_pages(void)
+static inline void inc_ksm_zero_pages(struct mm_struct *mm)
{
}
-static inline void dec_ksm_zero_pages(void)
+static inline void dec_ksm_zero_pages(struct mm_struct *mm)
{
}
#endif /* CONFIG_KSM_ZERO_PAGES_TRACK */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 3fc9e680f174..2e72329ed1a2 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -766,7 +766,7 @@ struct mm_struct {
#ifdef CONFIG_KSM
/*
* Represent how many pages of this process are involved in KSM
- * merging.
+ * merging (not including ksm_zero_pages).
*/
unsigned long ksm_merging_pages;
/*
@@ -774,7 +774,14 @@ struct mm_struct {
* including merged and not merged.
*/
unsigned long ksm_rmap_items;
-#endif
+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+ /*
+ * Represent how many empty pages are merged with kernel zero
+ * pages when enabling KSM use_zero_pages.
+ */
+ unsigned long ksm_zero_pages;
+#endif /* CONFIG_KSM_ZERO_PAGES_TRACK */
+#endif /* CONFIG_KSM */
#ifdef CONFIG_LRU_GEN
struct {
/* this mm_struct is on lru_gen_mm_list */
diff --git a/mm/ksm.c b/mm/ksm.c
index 232680393741..7867fae3c61c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1248,7 +1248,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
} else {
newpte = set_pte_ksm_zero(pfn_pte(page_to_pfn(kpage),
vma->vm_page_prot));
- inc_ksm_zero_pages();
+ inc_ksm_zero_pages(mm);
/*
* We're replacing an anonymous page with a zero page, which is
* not anonymous. We need to do proper accounting otherwise we
diff --git a/mm/memory.c b/mm/memory.c
index 76598287280f..ec89b81a14fd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1418,7 +1418,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
ptent);
if (unlikely(!page)) {
if (is_ksm_zero_pte(ptent))
- dec_ksm_zero_pages();
+ dec_ksm_zero_pages(mm);
continue;
}

@@ -3122,7 +3122,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
}
} else {
if (is_ksm_zero_pte(vmf->orig_pte))
- dec_ksm_zero_pages();
+ dec_ksm_zero_pages(mm);
inc_mm_counter(mm, MM_ANONPAGES);
}
flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
--
2.15.2

2023-04-13 06:01:28

by Yang Yang

Subject: [PATCH v7 4/6] ksm: add documentation for ksm zero pages

From: xu xin <[email protected]>

Add the description of ksm_zero_pages.

When use_zero_pages is enabled, pages_sharing cannot represent how much
memory is actually saved by KSM, but the sum of ksm_zero_pages +
pages_sharing can.
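
As a quick illustration (invented numbers): with pages_sharing = 1000 and
ksm_zero_pages = 200 on a system with 4 KiB pages, KSM actually saves about
(1000 + 200) * 4 KiB = 4800 KiB, whereas pages_sharing alone would suggest
only 4000 KiB.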

Signed-off-by: xu xin <[email protected]>
Cc: Xiaokai Ran <[email protected]>
Cc: Yang Yang <[email protected]>
Cc: Jiang Xuexin <[email protected]>
Cc: Claudio Imbrenda <[email protected]>
Cc: David Hildenbrand <[email protected]>
---
Documentation/admin-guide/mm/ksm.rst | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/Documentation/admin-guide/mm/ksm.rst b/Documentation/admin-guide/mm/ksm.rst
index 60dc42b3a6a8..64e6a13bda74 100644
--- a/Documentation/admin-guide/mm/ksm.rst
+++ b/Documentation/admin-guide/mm/ksm.rst
@@ -212,6 +212,14 @@ stable_node_chains
the number of KSM pages that hit the ``max_page_sharing`` limit
stable_node_dups
number of duplicated KSM pages
+ksm_zero_pages
+ how many empty pages are sharing the kernel zero page(s) instead
+ of other user pages, as would happen normally. Only meaningful
+ when ``use_zero_pages`` is/was enabled.
+
+When ``use_zero_pages`` is/was enabled, the sum of ``pages_sharing`` +
+``ksm_zero_pages`` represents the actual number of pages saved by KSM.
+If ``use_zero_pages`` has never been enabled, ``ksm_zero_pages`` is 0.

A high ratio of ``pages_sharing`` to ``pages_shared`` indicates good
sharing, but a high ratio of ``pages_unshared`` to ``pages_sharing``
--
2.15.2

2023-04-13 06:01:31

by Yang Yang

Subject: [PATCH v7 1/6] ksm: support unsharing KSM-placed zero pages

From: xu xin <[email protected]>

When use_zero_pages of ksm is enabled, madvise(addr, len, MADV_UNMERGEABLE)
and other ways (like write 2 to /sys/kernel/mm/ksm/run) to trigger
unsharing will *not* actually unshare the shared zeropage as placed by KSM
(which is against the MADV_UNMERGEABLE documentation). As these KSM-placed
zero pages are out of the control of KSM, the related counts of KSM pages
don't expose how many zero pages are placed by KSM (these special zero
pages are different from those initially mapped zero pages, because the
zero pages mapped to MADV_UNMERGEABLE areas are expected to be complete
and unshared pages).

To avoid blindly unsharing all shared zero pages in applicable VMAs, the
patch uses pte_mkdirty (architecture-dependent) to mark KSM-placed zero
pages. Thus, MADV_UNMERGEABLE will only unshare those KSM-placed zero pages.

The architecture must guarantee that pte_mkdirty won't treat the PTE as
writable. Otherwise, it would break the KSM page state (wrprotect) and
affect KSM functionality. For safety, we restrict this feature to the
tested and known-working architectures for now.

The patch will not degrade the performance of use_zero_pages, as it
doesn't change how empty pages are merged when use_zero_pages is enabled.

Signed-off-by: xu xin <[email protected]>
Suggested-by: David Hildenbrand <[email protected]>
Cc: Claudio Imbrenda <[email protected]>
Cc: Xuexin Jiang <[email protected]>
Reviewed-by: Xiaokai Ran <[email protected]>
Reviewed-by: Yang Yang <[email protected]>
---
include/linux/ksm.h | 9 +++++++++
mm/Kconfig | 24 +++++++++++++++++++++++-
mm/ksm.c | 5 +++--
3 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index d5f69f18ee5a..f0cc085be42a 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -95,4 +95,13 @@ static inline void folio_migrate_ksm(struct folio *newfolio, struct folio *old)
#endif /* CONFIG_MMU */
#endif /* !CONFIG_KSM */

+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+/* use pte_mkdirty to track a KSM-placed zero page */
+#define set_pte_ksm_zero(pte) pte_mkdirty(pte_mkspecial(pte))
+#define is_ksm_zero_pte(pte) (is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
+#else /* !CONFIG_KSM_ZERO_PAGES_TRACK */
+#define set_pte_ksm_zero(pte) pte_mkspecial(pte)
+#define is_ksm_zero_pte(pte) 0
+#endif /* CONFIG_KSM_ZERO_PAGES_TRACK */
+
#endif /* __LINUX_KSM_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index 3894a6309c41..42f69f421a03 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -666,7 +666,7 @@ config MMU_NOTIFIER
bool
select INTERVAL_TREE

-config KSM
+menuconfig KSM
bool "Enable KSM for page merging"
depends on MMU
select XXHASH
@@ -681,6 +681,28 @@ config KSM
until a program has madvised that an area is MADV_MERGEABLE, and
root has set /sys/kernel/mm/ksm/run to 1 (if CONFIG_SYSFS is set).

+if KSM
+
+config KSM_ZERO_PAGES_TRACK
+ bool "support tracking KSM-placed zero pages"
+ depends on KSM
+ depends on ARM || ARM64 || X86
+ default y
+ help
+ This allows KSM to track KSM-placed zero pages, including supporting
+ unsharing and counting the KSM-placed zero pages. If you say N, then
+ madvise(,,UNMERGEABLE) can't unshare the KSM-placed zero pages, and
+ users can't know how many zero pages are placed by KSM. This feature
+ depends on pte_mkdirty (architecture-dependent) to mark KSM-placed
+ zero pages.
+
+ The architecture must guarantee that pte_mkdirty won't treat the pte
+ as writable. Otherwise, it will break KSM pages state (wrprotect) and
+ affect the KSM functionality. For safety, we restrict this feature only
+ to the tested and known-working architectures.
+
+endif # KSM
+
config DEFAULT_MMAP_MIN_ADDR
int "Low address space to protect from user allocation"
depends on MMU
diff --git a/mm/ksm.c b/mm/ksm.c
index 7cd7e12cd3df..1d1771a6b3fe 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -447,7 +447,8 @@ static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long nex
if (is_migration_entry(entry))
page = pfn_swap_entry_to_page(entry);
}
- ret = page && PageKsm(page);
+ /* return 1 if the page is a normal ksm page or a KSM-placed zero page */
+ ret = (page && PageKsm(page)) || is_ksm_zero_pte(*pte);
pte_unmap_unlock(pte, ptl);
return ret;
}
@@ -1240,7 +1241,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
page_add_anon_rmap(kpage, vma, addr, RMAP_NONE);
newpte = mk_pte(kpage, vma->vm_page_prot);
} else {
- newpte = pte_mkspecial(pfn_pte(page_to_pfn(kpage),
+ newpte = set_pte_ksm_zero(pfn_pte(page_to_pfn(kpage),
vma->vm_page_prot));
/*
* We're replacing an anonymous page with a zero page, which is
--
2.15.2

2023-04-13 06:09:33

by Yang Yang

Subject: [PATCH v7 6/6] selftest: add a testcase of ksm zero pages

From: xu xin <[email protected]>

Add a function test_unmerge_zero_pages() to test the functionality of
unsharing and counting KSM-placed zero pages introduced by this patch
series.

test_unmerge_zero_pages() actually contains three test objects:
(1) whether the count of KSM zero pages updates correctly after merging;
(2) whether the count of KSM zero pages updates correctly after
unmerging;
(3) whether KSM zero pages are really unmerged.

Signed-off-by: xu xin <[email protected]>
Cc: Claudio Imbrenda <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Xuexin Jiang <[email protected]>
Reviewed-by: Xiaokai Ran <[email protected]>
Reviewed-by: Yang Yang <[email protected]>
---
tools/testing/selftests/mm/ksm_functional_tests.c | 75 +++++++++++++++++++++++
1 file changed, 75 insertions(+)

diff --git a/tools/testing/selftests/mm/ksm_functional_tests.c b/tools/testing/selftests/mm/ksm_functional_tests.c
index d8b5b4930412..11f8e4726607 100644
--- a/tools/testing/selftests/mm/ksm_functional_tests.c
+++ b/tools/testing/selftests/mm/ksm_functional_tests.c
@@ -27,6 +27,8 @@

static int ksm_fd;
static int ksm_full_scans_fd;
+static int ksm_zero_pages_fd;
+static int ksm_use_zero_pages_fd;
static int pagemap_fd;
static size_t pagesize;

@@ -57,6 +59,21 @@ static bool range_maps_duplicates(char *addr, unsigned long size)
return false;
}

+static long get_ksm_zero_pages(void)
+{
+ char buf[20];
+ ssize_t read_size;
+ unsigned long ksm_zero_pages;
+
+ read_size = pread(ksm_zero_pages_fd, buf, sizeof(buf) - 1, 0);
+ if (read_size < 0)
+ return -errno;
+ buf[read_size] = 0;
+ ksm_zero_pages = strtol(buf, NULL, 10);
+
+ return ksm_zero_pages;
+}
+
static long ksm_get_full_scans(void)
{
char buf[10];
@@ -146,6 +163,61 @@ static void test_unmerge(void)
munmap(map, size);
}

+static inline unsigned long expected_ksm_pages(unsigned long mergeable_size)
+{
+ return mergeable_size / pagesize;
+}
+
+static void test_unmerge_zero_pages(void)
+{
+ const unsigned int size = 2 * MiB;
+ char *map;
+ unsigned long pages_expected;
+
+ ksft_print_msg("[RUN] %s\n", __func__);
+
+ /* Confirm the interfaces */
+ if (ksm_zero_pages_fd < 0) {
+ ksft_test_result_skip("open(\"/sys/kernel/mm/ksm/ksm_zero_pages\") failed\n");
+ return;
+ }
+ if (ksm_use_zero_pages_fd < 0) {
+ ksft_test_result_skip("open \"/sys/kernel/mm/ksm/use_zero_pages\" failed\n");
+ return;
+ }
+ if (write(ksm_use_zero_pages_fd, "1", 1) != 1) {
+ ksft_test_result_skip("write \"/sys/kernel/mm/ksm/use_zero_pages\" failed\n");
+ return;
+ }
+
+ /* Mmap zero pages */
+ map = mmap_and_merge_range(0x00, size);
+ if (map == MAP_FAILED)
+ return;
+
+ /* Check if ksm_zero_pages is updated correctly after merging */
+ pages_expected = expected_ksm_pages(size);
+ ksft_test_result(pages_expected == get_ksm_zero_pages(),
+ "The count zero_page_sharing was updated after merging\n");
+
+ /* Try to unmerge half of the region */
+ if (madvise(map, size / 2, MADV_UNMERGEABLE)) {
+ ksft_test_result_fail("MADV_UNMERGEABLE failed\n");
+ goto unmap;
+ }
+
+ /* Check if ksm_zero_pages is updated correctly after unmerging */
+ pages_expected = expected_ksm_pages(size / 2);
+ ksft_test_result(pages_expected == get_ksm_zero_pages(),
+ "The count zero_page_sharing was updated after unmerging\n");
+
+ /* Check if ksm zero pages are really unmerged */
+ ksft_test_result(!range_maps_duplicates(map, size / 2),
+ "KSM zero pages were unmerged\n");
+unmap:
+ munmap(map, size);
+}
+
static void test_unmerge_discarded(void)
{
const unsigned int size = 2 * MiB;
@@ -264,8 +336,11 @@ int main(int argc, char **argv)
pagemap_fd = open("/proc/self/pagemap", O_RDONLY);
if (pagemap_fd < 0)
ksft_exit_skip("open(\"/proc/self/pagemap\") failed\n");
+ ksm_zero_pages_fd = open("/sys/kernel/mm/ksm/ksm_zero_pages", O_RDONLY);
+ ksm_use_zero_pages_fd = open("/sys/kernel/mm/ksm/use_zero_pages", O_RDWR);

test_unmerge();
+ test_unmerge_zero_pages();
test_unmerge_discarded();
#ifdef __NR_userfaultfd
test_unmerge_uffd_wp();
--
2.15.2

2023-04-13 06:11:20

by Yang Yang

Subject: [PATCH v7 5/6] ksm: update the calculation of KSM profit

From: xu xin <[email protected]>

When use_zero_pages is enabled, the calculation of KSM profit is not
correct because KSM zero pages are not counted in. So update the
calculation of KSM profit, including the documentation.
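
As an illustration with invented numbers (sizeof(rmap_item) is roughly 64
bytes on 64-bit, so treat this as an estimate): with pages_sharing = 10000,
ksm_zero_pages = 2000, all_rmap_items = 15000 and 4 KiB pages:

  general_profit =~ (10000 + 2000) * 4096 - 15000 * 64
                  = 49152000 - 960000
                 =~ 48 MB

Without counting ksm_zero_pages, the profit would be underestimated by
about 8 MB here.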

Signed-off-by: xu xin <[email protected]>
Cc: Xiaokai Ran <[email protected]>
Cc: Yang Yang <[email protected]>
Cc: Jiang Xuexin <[email protected]>
Cc: Claudio Imbrenda <[email protected]>
Cc: David Hildenbrand <[email protected]>
---
Documentation/admin-guide/mm/ksm.rst | 18 +++++++++++-------
mm/ksm.c | 5 +++++
2 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/mm/ksm.rst b/Documentation/admin-guide/mm/ksm.rst
index 64e6a13bda74..1a0f623cd570 100644
--- a/Documentation/admin-guide/mm/ksm.rst
+++ b/Documentation/admin-guide/mm/ksm.rst
@@ -243,21 +243,25 @@ several times, which are unprofitable memory consumed.
1) How to determine whether KSM save memory or consume memory in system-wide
range? Here is a simple approximate calculation for reference::

- general_profit =~ pages_sharing * sizeof(page) - (all_rmap_items) *
+ general_profit =~ ksm_saved_pages * sizeof(page) - (all_rmap_items) *
sizeof(rmap_item);

- where all_rmap_items can be easily obtained by summing ``pages_sharing``,
- ``pages_shared``, ``pages_unshared`` and ``pages_volatile``.
+ where ksm_saved_pages equals the sum of ``pages_sharing`` +
+ ``ksm_zero_pages`` of the system, and all_rmap_items can be easily
+ obtained by summing ``pages_sharing``, ``pages_shared``, ``pages_unshared``
+ and ``pages_volatile``.

2) The KSM profit inner a single process can be similarly obtained by the
following approximate calculation::

- process_profit =~ ksm_merging_pages * sizeof(page) -
+ process_profit =~ ksm_saved_pages * sizeof(page) -
ksm_rmap_items * sizeof(rmap_item).

- where ksm_merging_pages is shown under the directory ``/proc/<pid>/``,
- and ksm_rmap_items is shown in ``/proc/<pid>/ksm_stat``. The process profit
- is also shown in ``/proc/<pid>/ksm_stat`` as ksm_process_profit.
+ where ksm_saved_pages equals the sum of ``ksm_merging_pages`` and
+ ``ksm_zero_pages``, both of which are shown under the directory
+ ``/proc/<pid>/``, and ksm_rmap_items is shown in ``/proc/<pid>/ksm_stat``.
+ The process profit is also shown in ``/proc/<pid>/ksm_stat`` as
+ ksm_process_profit.

From the perspective of application, a high ratio of ``ksm_rmap_items`` to
``ksm_merging_pages`` means a bad madvise-applied policy, so developers or
diff --git a/mm/ksm.c b/mm/ksm.c
index 7867fae3c61c..10902c8c503f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2936,8 +2936,13 @@ static void wait_while_offlining(void)
#ifdef CONFIG_PROC_FS
long ksm_process_profit(struct mm_struct *mm)
{
+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+ return (long)(mm->ksm_merging_pages + mm->ksm_zero_pages) * PAGE_SIZE -
+ mm->ksm_rmap_items * sizeof(struct ksm_rmap_item);
+#else
return (long)mm->ksm_merging_pages * PAGE_SIZE -
mm->ksm_rmap_items * sizeof(struct ksm_rmap_item);
+#endif
}

/* Return merge type name as string. */
--
2.15.2

2023-04-17 08:04:05

by David Hildenbrand

Subject: Re: [PATCH v7 0/6] ksm: support tracking KSM-placed zero-pages

On 13.04.23 07:46, [email protected] wrote:
> From: xu xin <[email protected]>
>
> The core idea of this patch set is to enable users to perceive the number
> of pages merged by KSM, regardless of whether the use_zero_pages switch
> has been turned on, so that users can know how much of the free-memory
> increase is really due to their madvise(MERGEABLE) actions. The problem is
> that, when use_zero_pages is enabled, all empty pages are merged with the
> kernel zero pages instead of with each other (as they would be with
> use_zero_pages disabled), and these zero pages are then no longer tracked
> by KSM.
>
> The motivation for this is described at:
> https://lore.kernel.org/lkml/[email protected]/
>
> In short, we hope to support tracking of KSM-placed zero pages without
> affecting the use_zero_pages feature, so that application developers can
> also learn the actual KSM profit (including KSM-placed zero pages) and
> eventually optimize their applications, even when
> /sys/kernel/mm/ksm/use_zero_pages is enabled.
>

Thanks for the update!

> The patch set uses pte_mkdirty (architecture-dependent) to mark KSM-placed
> zero pages. Some architectures (like sparc64) treat R/O dirty PTEs as
> writable, which would break the KSM page state (wrprotect) and affect

With [1] that should be resolved and we should be able to enable it
unconditionally.

Further, ideally this should be based on [2], such that we can include
the zeropages in the ksm and per-mm profit calculation.

Last but not least, I realized that we also have to handle the case where
khugepaged replaces a shared zeropage by a THP. I think that should be
easy by adjusting the counters in the is_zero_pfn() handling in
mm/khugepaged.c:__collapse_huge_page_copy().
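
A rough sketch of the kind of adjustment meant here (illustrative only;
the placement inside __collapse_huge_page_copy()'s pte loop and the local
names 'pteval' and 'mm' are assumptions, reusing the helpers from this
series):

	/* When a pte that maps the shared zeropage is replaced by the
	 * collapsed THP, the KSM zero-page accounting has to drop too. */
	if (is_ksm_zero_pte(pteval))
		dec_ksm_zero_pages(mm);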

> KSM functionality. For safety, we restrict this feature to the
> tested and known-working architectures (ARM, ARM64, and X86) for now.
>
> Change log
> ----------
> v6->v7:
> This is an all-new version that differs from v6, which relied on KSM's
> rmap_item. This patch series does not rely on rmap_item but on pte_dirty,
> so the general handling of tracking KSM-placed zero pages is simplified a lot.
>
> For safety, we restrict this feature to the tested and known-working
> architectures (ARM, ARM64, and X86) for now.

Yeah, with [1] this can be further simplified.


I'll be on vacation starting on Thursday for ~1.5 weeks, not sure if I'll
get to review before that. But it's unlikely that we'll make the
upcoming merge window, so I guess we still have time (especially for
[1] and [2] to land).


[1] https://lkml.kernel.org/r/[email protected]
[2] https://lkml.kernel.org/r/[email protected]

--
Thanks,

David / dhildenb