2022-10-08 07:05:45

by xu

Subject: [PATCH 0/5] ksm: support tracking KSM-placed zero-pages

From: xu xin <[email protected]>

Before use_zero_pages is enabled (by writing 1 to
/sys/kernel/mm/ksm/use_zero_pages), pages_sharing of KSM is basically
accurate. But once use_zero_pages is enabled, empty pages that get
merged with the kernel zero page are not counted in pages_sharing or
pages_shared: these pages are merged with the zero page and are then
no longer managed by KSM. This leads to at least two issues:

1) MADV_UNMERGEABLE and other ways to trigger unsharing will *not*
unshare the shared zeropage as placed by KSM (which is against the
MADV_UNMERGEABLE documentation at least); see the link:
https://lore.kernel.org/lkml/[email protected]/

2) we cannot know how many of the merged pages are zero pages placed
by KSM when use_zero_pages is enabled, so KSM is not transparent
about all the pages it has actually merged.

With this patch series, we can accurately unshare KSM-placed zero
pages and count them.
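
For illustration (not part of the series), a minimal userspace sketch
of the counting gap; the 64-page size and the sleep are arbitrary, and
it assumes ksmd gets to scan the area in time:

#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = 64 * getpagesize();
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 0, len);	/* fault in 64 pages full of zeroes */
	madvise(buf, len, MADV_MERGEABLE);
	/*
	 * Watch /sys/kernel/mm/ksm/pages_sharing while this sleeps:
	 * with use_zero_pages=1, none of these 64 pages appear there
	 * (or anywhere else) before this series.
	 */
	sleep(60);
	return 0;
}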


xu xin (5):
ksm: abstract the function try_to_get_old_rmap_item
ksm: support unsharing zero pages placed by KSM
ksm: count all zero pages placed by KSM
ksm: count zero pages for each process
ksm: add zero_pages_sharing documentation

Documentation/admin-guide/mm/ksm.rst | 10 +-
fs/proc/base.c | 1 +
include/linux/mm_types.h | 7 +-
mm/ksm.c | 177 +++++++++++++++++++++------
4 files changed, 157 insertions(+), 38 deletions(-)

--
2.25.1


2022-10-08 07:25:56

by xu

Subject: [PATCH 3/5] ksm: count all zero pages placed by KSM

From: xu xin <[email protected]>

As pages_sharing and pages_shared do not include the number of zero
pages merged by KSM, we cannot know how many pages are zero pages
placed by KSM when use_zero_pages is enabled, so KSM is not
transparent about all the pages it has actually merged. In the early
days of use_zero_pages, zero pages could not be unshared by means like
MADV_UNMERGEABLE, so it was hard to count how many times one of those
zero pages was then unmerged.

But now that unsharing KSM-placed zero pages accurately has been
achieved, we can easily count both how many times a page full of
zeroes was merged with the zero page and how many times one of those
pages was then unmerged. This helps to estimate memory demands when
each and every shared page could get unshared.

So we add zero_pages_sharing under /sys/kernel/mm/ksm/ to show the number
of all zero pages placed by KSM.
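
For example, the new counter can then be read like any other KSM sysfs
knob; a minimal userspace sketch (not part of the patch):

#include <stdio.h>

int main(void)
{
	unsigned long val;
	FILE *f = fopen("/sys/kernel/mm/ksm/zero_pages_sharing", "r");

	if (!f)
		return 1;	/* kernel without this patch, or no KSM */
	if (fscanf(f, "%lu", &val) == 1)
		printf("KSM-placed zero pages: %lu\n", val);
	fclose(f);
	return 0;
}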

Cc: Claudio Imbrenda <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Xuexin Jiang <[email protected]>
Cc: Xiaokai Ran <[email protected]>
Cc: Yang Yang <[email protected]>
Signed-off-by: xu xin <[email protected]>
---
mm/ksm.c | 22 +++++++++++++++++++---
1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 75978f7eeed1..9b7c28abfc89 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -275,6 +275,9 @@ static unsigned int zero_checksum __read_mostly;
/* Whether to merge empty (zeroed) pages with actual zero pages */
static bool ksm_use_zero_pages __read_mostly;

+/* The number of zero pages placed by KSM use_zero_pages */
+static unsigned long ksm_zero_pages_sharing;
+
#ifdef CONFIG_NUMA
/* Zeroed when merging across nodes is not allowed */
static unsigned int ksm_merge_across_nodes = 1;
@@ -541,8 +544,10 @@ static inline int unshare_zero_pages(struct ksm_rmap_item *rmap_item)

static inline void free_rmap_item(struct ksm_rmap_item *rmap_item)
{
- if (rmap_item->address & ZERO_PAGE_FLAG)
- unshare_zero_pages(rmap_item);
+ if (rmap_item->address & ZERO_PAGE_FLAG) {
+ if (!unshare_zero_pages(rmap_item))
+ ksm_zero_pages_sharing--;
+ }
ksm_rmap_items--;
rmap_item->mm->ksm_rmap_items--;
rmap_item->mm = NULL; /* debug safety */
@@ -2074,8 +2079,10 @@ static int try_to_merge_with_kernel_zero_page(struct mm_struct *mm,
* In case of failure, the page was not really empty, so we
* need to continue. Otherwise we're done.
*/
- if (!err)
+ if (!err) {
rmap_item->address |= ZERO_PAGE_FLAG;
+ ksm_zero_pages_sharing++;
+ }
}

return err;
@@ -2177,6 +2184,7 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
* to reset the flag and update the corresponding count.
*/
rmap_item->address &= PAGE_MASK;
+ ksm_zero_pages_sharing--;
}
}

@@ -3189,6 +3197,13 @@ static ssize_t pages_volatile_show(struct kobject *kobj,
}
KSM_ATTR_RO(pages_volatile);

+static ssize_t zero_pages_sharing_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sysfs_emit(buf, "%lu\n", ksm_zero_pages_sharing);
+}
+KSM_ATTR_RO(zero_pages_sharing);
+
static ssize_t stable_node_dups_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
@@ -3249,6 +3264,7 @@ static struct attribute *ksm_attrs[] = {
&merge_across_nodes_attr.attr,
#endif
&max_page_sharing_attr.attr,
+ &zero_pages_sharing_attr.attr,
&stable_node_chains_attr.attr,
&stable_node_dups_attr.attr,
&stable_node_chains_prune_millisecs_attr.attr,
--
2.25.1

2022-10-08 07:35:50

by xu

Subject: [PATCH 1/5] ksm: abstract the function try_to_get_old_rmap_item

From: xu xin <[email protected]>

A new function, try_to_get_old_rmap_item, is abstracted from
get_next_rmap_item. It will be reused by the subsequent patches that
count KSM zero pages.

The patch improves the readability and reusability of the KSM code.

Signed-off-by: xu xin <[email protected]>
Cc: Claudio Imbrenda <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Xuexin Jiang <[email protected]>
Cc: Xiaokai Ran <[email protected]>
Cc: Yang Yang <[email protected]>
---
mm/ksm.c | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index c19fcca9bc03..5b68482d2b3b 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2187,14 +2187,11 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
}
}

-static struct ksm_rmap_item *get_next_rmap_item(struct ksm_mm_slot *mm_slot,
- struct ksm_rmap_item **rmap_list,
- unsigned long addr)
+static struct ksm_rmap_item *try_to_get_old_rmap_item(unsigned long addr,
+ struct ksm_rmap_item **rmap_list)
{
- struct ksm_rmap_item *rmap_item;
-
while (*rmap_list) {
- rmap_item = *rmap_list;
+ struct ksm_rmap_item *rmap_item = *rmap_list;
if ((rmap_item->address & PAGE_MASK) == addr)
return rmap_item;
if (rmap_item->address > addr)
@@ -2204,6 +2201,21 @@ static struct ksm_rmap_item *get_next_rmap_item(struct ksm_mm_slot *mm_slot,
free_rmap_item(rmap_item);
}

+ return NULL;
+}
+
+static struct ksm_rmap_item *get_next_rmap_item(struct ksm_mm_slot *mm_slot,
+ struct ksm_rmap_item **rmap_list,
+ unsigned long addr)
+{
+ struct ksm_rmap_item *rmap_item;
+
+ /* Look up whether we have an old rmap_item matching the addr */
+ rmap_item = try_to_get_old_rmap_item(addr, rmap_list);
+ if (rmap_item)
+ return rmap_item;
+
+ /* Need to allocate a new rmap_item */
rmap_item = alloc_rmap_item();
if (rmap_item) {
/* It has already been zeroed */
--
2.25.1

2022-10-08 07:46:25

by xu

Subject: [PATCH 5/5] ksm: add zero_pages_sharing documentation

From: xu xin <[email protected]>

When use_zero_pages is enabled, pages_sharing alone cannot represent
how much memory is actually saved; zero_pages_sharing + pages_sharing
does. Add a description of zero_pages_sharing.
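
As a back-of-the-envelope illustration (not part of the patch), the
combined saving can be computed from the two knobs; a hypothetical
sketch, assuming 4 KiB pages:

#include <stdio.h>

static unsigned long read_knob(const char *path)
{
	unsigned long val = 0;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%lu", &val) != 1)
			val = 0;
		fclose(f);
	}
	return val;
}

int main(void)
{
	unsigned long pages =
		read_knob("/sys/kernel/mm/ksm/pages_sharing") +
		read_knob("/sys/kernel/mm/ksm/zero_pages_sharing");

	printf("~%lu KiB saved\n", pages * 4);	/* 4 KiB pages assumed */
	return 0;
}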

Cc: Xiaokai Ran <[email protected]>
Cc: Yang Yang <[email protected]>
Cc: Jiang Xuexin <[email protected]>
Cc: Claudio Imbrenda <[email protected]>
Cc: David Hildenbrand <[email protected]>
Signed-off-by: xu xin <[email protected]>
---
Documentation/admin-guide/mm/ksm.rst | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/mm/ksm.rst b/Documentation/admin-guide/mm/ksm.rst
index fb6ba2002a4b..484665aa7418 100644
--- a/Documentation/admin-guide/mm/ksm.rst
+++ b/Documentation/admin-guide/mm/ksm.rst
@@ -162,7 +162,7 @@ The effectiveness of KSM and MADV_MERGEABLE is shown in ``/sys/kernel/mm/ksm/``:
pages_shared
how many shared pages are being used
pages_sharing
- how many more sites are sharing them i.e. how much saved
+ how many more sites are sharing them
pages_unshared
how many pages unique but repeatedly checked for merging
pages_volatile
@@ -173,6 +173,14 @@ stable_node_chains
the number of KSM pages that hit the ``max_page_sharing`` limit
stable_node_dups
number of duplicated KSM pages
+zero_pages_sharing
+ how many empty pages are sharing kernel zero page(s) instead of
+ with each other as it would happen normally. Only effective when
+ the ``use_zero_pages`` knob is enabled.
+
+If ``use_zero_pages`` is 0, only ``pages_sharing`` represents how much
+is saved. Otherwise, ``pages_sharing`` + ``zero_pages_sharing``
+represents the actual saving.

A high ratio of ``pages_sharing`` to ``pages_shared`` indicates good
sharing, but a high ratio of ``pages_unshared`` to ``pages_sharing``
--
2.25.1

2022-10-08 08:13:21

by xu

Subject: [PATCH 2/5] ksm: support unsharing zero pages placed by KSM

From: xu xin <[email protected]>

After commit e86c59b1b12d ("mm/ksm: improve deduplication of zero
pages with colouring"), madvise(addr, len, MADV_UNMERGEABLE) and other
ways to trigger unsharing (like writing 2 to /sys/kernel/mm/ksm/run)
will **not** unshare the shared zeropage placed by KSM (which is
against the MADV_UNMERGEABLE documentation, at least).

To avoid blindly unsharing all shared zero pages in applicable VMAs,
this patch introduces a dedicated flag, ZERO_PAGE_FLAG, to mark the
rmap_items of those shared zero pages, and guarantees that these
rmap_items are not freed as long as their zero pages have not been
written to, so that only the *KSM-placed* zero pages are unshared.

The patch does not degrade the performance of use_zero_pages, as it
does not change the way that feature merges empty pages.
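
As an illustration only (not part of the patch), a hypothetical
userspace check of the new behaviour, assuming ksmd merges the page
during the sleep:

#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long pgsz = sysconf(_SC_PAGESIZE);
	char *p = mmap(NULL, pgsz, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	memset(p, 0, pgsz);	/* one page full of zeroes */
	madvise(p, pgsz, MADV_MERGEABLE);
	sleep(10);		/* assume ksmd merged it by now */
	/*
	 * Before this patch, the KSM-placed zero page survived the
	 * call below; with it, the mapping is broken back to a
	 * private anonymous page.
	 */
	return madvise(p, pgsz, MADV_UNMERGEABLE);
}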

Fixes: e86c59b1b12d ("mm/ksm: improve deduplication of zero pages with colouring")
Reported-by: David Hildenbrand <[email protected]>
Cc: Claudio Imbrenda <[email protected]>
Cc: Xuexin Jiang <[email protected]>
Signed-off-by: xu xin <[email protected]>
Co-developed-by: Xiaokai Ran <[email protected]>
Signed-off-by: Xiaokai Ran <[email protected]>
Co-developed-by: Yang Yang <[email protected]>
Signed-off-by: Yang Yang <[email protected]>
Signed-off-by: xu xin <[email protected]>
---
mm/ksm.c | 134 ++++++++++++++++++++++++++++++++++++++++++-------------
1 file changed, 104 insertions(+), 30 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 5b68482d2b3b..75978f7eeed1 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -213,6 +213,7 @@ struct ksm_rmap_item {
#define SEQNR_MASK 0x0ff /* low bits of unstable tree seqnr */
#define UNSTABLE_FLAG 0x100 /* is a node of the unstable tree */
#define STABLE_FLAG 0x200 /* is listed from the stable tree */
+#define ZERO_PAGE_FLAG 0x400 /* is zero page placed by KSM */

/* The stable and unstable tree heads */
static struct rb_root one_stable_tree[1] = { RB_ROOT };
@@ -381,14 +382,6 @@ static inline struct ksm_rmap_item *alloc_rmap_item(void)
return rmap_item;
}

-static inline void free_rmap_item(struct ksm_rmap_item *rmap_item)
-{
- ksm_rmap_items--;
- rmap_item->mm->ksm_rmap_items--;
- rmap_item->mm = NULL; /* debug safety */
- kmem_cache_free(rmap_item_cache, rmap_item);
-}
-
static inline struct ksm_stable_node *alloc_stable_node(void)
{
/*
@@ -434,7 +427,8 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
* of the process that owns 'vma'. We also do not want to enforce
* protection keys here anyway.
*/
-static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
+static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
+ bool ksm_check_bypass)
{
struct page *page;
vm_fault_t ret = 0;
@@ -449,6 +443,16 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
ret = handle_mm_fault(vma, addr,
FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
NULL);
+ else if (ksm_check_bypass && is_zero_pfn(page_to_pfn(page))) {
+ /*
+ * Although it's not a KSM page, it is a zero page placed by
+ * KSM's use_zero_pages feature, so we should unshare it when
+ * ksm_check_bypass is true.
+ */
+ ret = handle_mm_fault(vma, addr,
+ FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
+ NULL);
+ }
else
ret = VM_FAULT_WRITE;
put_page(page);
@@ -496,6 +500,11 @@ static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm,
return vma;
}

+/*
+ * Note: do not call break_cow() while already holding
+ * mmap_read_lock(); break_cow() takes mmap_read_lock() itself,
+ * which may deadlock.
+ */
static void break_cow(struct ksm_rmap_item *rmap_item)
{
struct mm_struct *mm = rmap_item->mm;
@@ -511,10 +520,35 @@ static void break_cow(struct ksm_rmap_item *rmap_item)
mmap_read_lock(mm);
vma = find_mergeable_vma(mm, addr);
if (vma)
- break_ksm(vma, addr);
+ break_ksm(vma, addr, false);
mmap_read_unlock(mm);
}

+/* Only called when rmap_item->address is with ZERO_PAGE_FLAG */
+static inline int unshare_zero_pages(struct ksm_rmap_item *rmap_item)
+{
+ struct mm_struct *mm = rmap_item->mm;
+ struct vm_area_struct *vma;
+ unsigned long addr = rmap_item->address;
+ int err = -EFAULT;
+
+ vma = vma_lookup(mm, addr);
+ if (vma)
+ err = break_ksm(vma, addr, true);
+
+ return err;
+}
+
+static inline void free_rmap_item(struct ksm_rmap_item *rmap_item)
+{
+ if (rmap_item->address & ZERO_PAGE_FLAG)
+ unshare_zero_pages(rmap_item);
+ ksm_rmap_items--;
+ rmap_item->mm->ksm_rmap_items--;
+ rmap_item->mm = NULL; /* debug safety */
+ kmem_cache_free(rmap_item_cache, rmap_item);
+}
+
static struct page *get_mergeable_page(struct ksm_rmap_item *rmap_item)
{
struct mm_struct *mm = rmap_item->mm;
@@ -825,7 +859,7 @@ static int unmerge_ksm_pages(struct vm_area_struct *vma,
if (signal_pending(current))
err = -ERESTARTSYS;
else
- err = break_ksm(vma, addr);
+ err = break_ksm(vma, addr, false);
}
return err;
}
@@ -2017,6 +2051,36 @@ static void stable_tree_append(struct ksm_rmap_item *rmap_item,
rmap_item->mm->ksm_merging_pages++;
}

+static int try_to_merge_with_kernel_zero_page(struct mm_struct *mm,
+ struct ksm_rmap_item *rmap_item,
+ struct page *page)
+{
+ int err = 0;
+
+ if (!(rmap_item->address & ZERO_PAGE_FLAG)) {
+ struct vm_area_struct *vma;
+
+ mmap_read_lock(mm);
+ vma = find_mergeable_vma(mm, rmap_item->address);
+ if (vma) {
+ err = try_to_merge_one_page(vma, page,
+ ZERO_PAGE(rmap_item->address));
+ } else {
+ /* If the vma is out of date, we do not need to continue. */
+ err = 0;
+ }
+ mmap_read_unlock(mm);
+ /*
+ * In case of failure, the page was not really empty, so we
+ * need to continue. Otherwise we're done.
+ */
+ if (!err)
+ rmap_item->address |= ZERO_PAGE_FLAG;
+ }
+
+ return err;
+}
+
/*
* cmp_and_merge_page - first see if page can be merged into the stable tree;
* if not, compare checksum to previous and if it's the same, see if page can
@@ -2101,29 +2165,21 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
* Same checksum as an empty page. We attempt to merge it with the
* appropriate zero page if the user enabled this via sysfs.
*/
- if (ksm_use_zero_pages && (checksum == zero_checksum)) {
- struct vm_area_struct *vma;
-
- mmap_read_lock(mm);
- vma = find_mergeable_vma(mm, rmap_item->address);
- if (vma) {
- err = try_to_merge_one_page(vma, page,
- ZERO_PAGE(rmap_item->address));
- } else {
+ if (ksm_use_zero_pages) {
+ if (checksum == zero_checksum) {
+ /* If success, just return. Otherwise, continue */
+ if (!try_to_merge_with_kernel_zero_page(mm, rmap_item, page))
+ return;
+ } else if (rmap_item->address & ZERO_PAGE_FLAG) {
/*
- * If the vma is out of date, we do not need to
- * continue.
+ * The page now is not kernel zero page (COW happens to it)
+ * but the flag of its rmap_item is still zero-page, so need
+ * to reset the flag and update the corresponding count.
*/
- err = 0;
+ rmap_item->address &= PAGE_MASK;
}
- mmap_read_unlock(mm);
- /*
- * In case of failure, the page was not really empty, so we
- * need to continue. Otherwise we're done.
- */
- if (!err)
- return;
}
+
tree_rmap_item =
unstable_tree_search_insert(rmap_item, page, &tree_page);
if (tree_rmap_item) {
@@ -2197,6 +2253,7 @@ static struct ksm_rmap_item *try_to_get_old_rmap_item(unsigned long addr,
if (rmap_item->address > addr)
break;
*rmap_list = rmap_item->rmap_list;
+ /* Reaching here indicates the page has been unmerged */
remove_rmap_item_from_tree(rmap_item);
free_rmap_item(rmap_item);
}
@@ -2336,6 +2393,23 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
mmap_read_unlock(mm);
return rmap_item;
}
+ /*
+ * Because we want to monitor KSM-placed zero pages, which
+ * are not anonymous, we must try to return the rmap_items
+ * of those kernel zero pages that replaced the original
+ * anonymous empty pages due to the use_zero_pages
+ * feature.
+ */
+ if (is_zero_pfn(page_to_pfn(*page))) {
+ rmap_item = try_to_get_old_rmap_item(ksm_scan.address,
+ ksm_scan.rmap_list);
+ if (rmap_item && (rmap_item->address & ZERO_PAGE_FLAG)) {
+ ksm_scan.rmap_list = &rmap_item->rmap_list;
+ ksm_scan.address += PAGE_SIZE;
+ mmap_read_unlock(mm);
+ return rmap_item;
+ }
+ }
next_page:
put_page(*page);
ksm_scan.address += PAGE_SIZE;
--
2.25.1

2022-10-08 08:49:07

by xu

[permalink] [raw]
Subject: [PATCH 4/5] ksm: count zero pages for each process

From: xu xin <[email protected]>

As the number of KSM zero pages is not included in ksm_merging_pages
per process when use_zero_pages is enabled, it's unclear how many
pages are actually merged by KSM per process. To let users accurately
estimate their memory demands when unsharing KSM zero pages, it's
necessary to show KSM zero pages per process.

Since KSM-placed zero pages can now be unshared accurately, tracking
the merging and unmerging of empty pages is no longer difficult.

Since we already have /proc/<pid>/ksm_stat, just add the information of
zero_pages_sharing in it.
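
For instance, the per-process value could then be read as below (a
userspace sketch, not part of the patch; field names as added above):

#include <stdio.h>

int main(void)
{
	char line[128];
	FILE *f = fopen("/proc/self/ksm_stat", "r");

	if (!f)
		return 1;
	/* prints ksm_rmap_items and zero_pages_sharing for this process */
	while (fgets(line, sizeof(line), f))
		fputs(line, stdout);
	fclose(f);
	return 0;
}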

Cc: Claudio Imbrenda <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Xuexin Jiang <[email protected]>
Cc: Xiaokai Ran <[email protected]>
Cc: Yang Yang <[email protected]>
Signed-off-by: xu xin <[email protected]>
---
fs/proc/base.c | 1 +
include/linux/mm_types.h | 7 ++++++-
mm/ksm.c | 3 +++
3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/fs/proc/base.c b/fs/proc/base.c
index 9e479d7d202b..ac9ebe972be0 100644
--- a/fs/proc/base.c
+++ b/fs/proc/base.c
@@ -3207,6 +3207,7 @@ static int proc_pid_ksm_stat(struct seq_file *m, struct pid_namespace *ns,
mm = get_task_mm(task);
if (mm) {
seq_printf(m, "ksm_rmap_items %lu\n", mm->ksm_rmap_items);
+ seq_printf(m, "zero_pages_sharing %lu\n", mm->ksm_zero_pages_sharing);
mmput(mm);
}

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 500e536796ca..78a4ee264645 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -691,7 +691,7 @@ struct mm_struct {
#ifdef CONFIG_KSM
/*
* Represent how many pages of this process are involved in KSM
- * merging.
+ * merging (not including ksm_zero_pages_sharing).
*/
unsigned long ksm_merging_pages;
/*
@@ -699,6 +699,11 @@ struct mm_struct {
* including merged and not merged.
*/
unsigned long ksm_rmap_items;
+ /*
+ * Represent how many empty pages are merged with kernel zero
+ * pages when enabling KSM use_zero_pages.
+ */
+ unsigned long ksm_zero_pages_sharing;
#endif
#ifdef CONFIG_LRU_GEN
struct {
diff --git a/mm/ksm.c b/mm/ksm.c
index 9b7c28abfc89..a889b1925f51 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -547,6 +547,7 @@ static inline void free_rmap_item(struct ksm_rmap_item *rmap_item)
if (rmap_item->address & ZERO_PAGE_FLAG) {
if (!unshare_zero_pages(rmap_item))
ksm_zero_pages_sharing--;
+ rmap_item->mm->ksm_zero_pages_sharing--;
}
ksm_rmap_items--;
rmap_item->mm->ksm_rmap_items--;
@@ -2082,6 +2083,7 @@ static int try_to_merge_with_kernel_zero_page(struct mm_struct *mm,
if (!err) {
rmap_item->address |= ZERO_PAGE_FLAG;
ksm_zero_pages_sharing++;
+ rmap_item->mm->ksm_zero_pages_sharing++;
}
}

@@ -2185,6 +2187,7 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
*/
rmap_item->address &= PAGE_MASK;
ksm_zero_pages_sharing--;
+ rmap_item->mm->ksm_zero_pages_sharing--;
}
}

--
2.25.1

2022-10-08 15:02:41

by kernel test robot

Subject: Re: [PATCH 4/5] ksm: count zero pages for each process

Hi,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on akpm-mm/mm-everything]
[also build test WARNING on next-20221007]
[cannot apply to linus/master v6.0]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/xu-xin-sc-gmail-com/ksm-support-tracking-KSM-placed-zero-pages/20221008-150936
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
config: i386-randconfig-a002-20221003
compiler: clang version 14.0.6 (https://github.com/llvm/llvm-project f28c006a5895fc0e329fe15fead81e37457cb1d1)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/73e535667af6553c5f6317da4d8034e79557417b
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review xu-xin-sc-gmail-com/ksm-support-tracking-KSM-placed-zero-pages/20221008-150936
git checkout 73e535667af6553c5f6317da4d8034e79557417b
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>

All warnings (new ones prefixed by >>):

>> mm/ksm.c:550:4: warning: misleading indentation; statement is not part of the previous 'if' [-Wmisleading-indentation]
rmap_item->mm->ksm_zero_pages_sharing--;
^
mm/ksm.c:548:3: note: previous statement is here
if (!unshare_zero_pages(rmap_item))
^
1 warning generated.


vim +/if +550 mm/ksm.c

544
545 static inline void free_rmap_item(struct ksm_rmap_item *rmap_item)
546 {
547 if (rmap_item->address & ZERO_PAGE_FLAG) {
548 if (!unshare_zero_pages(rmap_item))
549 ksm_zero_pages_sharing--;
> 550 rmap_item->mm->ksm_zero_pages_sharing--;
551 }
552 ksm_rmap_items--;
553 rmap_item->mm->ksm_rmap_items--;
554 rmap_item->mm = NULL; /* debug safety */
555 kmem_cache_free(rmap_item_cache, rmap_item);
556 }
557

--
0-DAY CI Kernel Test Service
https://01.org/lkp

