2009-06-29 14:14:38

by Hugh Dickins

Subject: KSM: current madvise rollup

Hi Izik,

Thanks a lot for giving me some space. As I proposed in private mail
last week, here is my current rollup of the madvise version of KSM.

The patch is against 2.6.31-rc1, but the work is based upon your
"RFC - ksm api change into madvise" from 14 May: omitting for now
your 4/4 to apply KSM to other processes, but including Andrea's
two rmap_item fixes from 3 June.

This is not a patch to go into any tree yet: it needs to be split
up and reviewed and argued over and parts reverted etc. But it is
good for some testing, and it is good for you to take a look at,
diff against what you have and say, perhaps: right, please split
this up into this and this and this kind of change, so we can
examine it more closely; or, perhaps, you won't like my direction
at all and want a fresh start.

The changes outside of mm/ksm.c shouldn't cause much controversy.
Perhaps we'll want to send in the arch mman.h additions, and the
madvise interface, and your mmu_notifier mods, along with a dummy
mm/ksm.c, quite early; while we continue to discuss what's in ksm.c.
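
For reference, usage from an application would be roughly the sketch
below (untested illustration only, not part of the patch; the MADV_*
values are just the ones defined further down, copied here since libc
headers won't have them yet):

	#include <stdio.h>
	#include <sys/mman.h>

	#ifndef MADV_MERGEABLE
	#define MADV_MERGEABLE   12	/* KSM may merge identical pages */
	#define MADV_UNMERGEABLE 13	/* KSM may not merge identical pages */
	#endif

	int main(void)
	{
		size_t len = 16 * 4096;
		void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
				  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (addr == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		/* advise KSM that it may merge identical pages in here */
		if (madvise(addr, len, MADV_MERGEABLE))
			perror("madvise(MADV_MERGEABLE)");

		/* ... fill the area and let ksmd find the duplicates ... */

		/* later, withdraw the advice for this range */
		if (madvise(addr, len, MADV_UNMERGEABLE))
			perror("madvise(MADV_UNMERGEABLE)");
		return 0;
	}

Nothing happens, of course, until ksmd is set going: writing suitable
pages_to_scan and sleep_millisecs values and then 1 to
/sys/kernel/mm/ksm/run, per the sysfs bits at the end of ksm.c.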

It'll be hard for you not to get irritated by all my trivial cleanups
there, sorry. I find it best when I'm working to let myself do such
tidying up, then only at the end go back over to decide whether it's
justified or not. And my correction of typos in comments etc. is
fairly random: sometimes I've just corrected one word, sometimes
I've rewritten a comment, but lots I've not read through yet.

A lot of the change came about because I couldn't run the loads
I wanted: they'd OOM because of the way KSM had a hold on any mm it
was advised of (so the mm couldn't exit and free up its pages until
KSM got there). I know you were dissatisfied with that too, but
perhaps you've solved it differently by now.

I've plenty more to do: still haven't really focussed in on mremap
move, and races when the vma we expect to be VM_MERGEABLE is actually
something else by the time we get mmap_sem for get_user_pages. But I
don't think there's any show-stopper there, just a little tightening
needed (a sketch of the kind of recheck I mean follows this paragraph).
The rollup below is a good staging post, I think, and much
better than the /dev/ksm version that used to be in mmotm.
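
To be concrete: something like the hypothetical helper below, in the
style of the helpers in ksm.c, rechecking the vma under mmap_sem just
before faulting the page in, rather than trusting what we saw when the
rmap_item was set up (illustration only, name invented, not part of
the rollup):

	static struct page *get_mergeable_page(struct mm_struct *mm,
					       unsigned long addr)
	{
		struct vm_area_struct *vma;
		struct page *page = NULL;

		down_read(&mm->mmap_sem);
		/* recheck that addr still lies within a VM_MERGEABLE vma */
		vma = find_vma(mm, addr);
		if (vma && vma->vm_start <= addr &&
		    (vma->vm_flags & VM_MERGEABLE)) {
			if (get_user_pages(current, mm, addr, 1, 0, 0,
					   &page, NULL) != 1)
				page = NULL;
		}
		up_read(&mm->mmap_sem);
		return page;
	}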

Though I haven't even begun to worry about how KSM interacts with
page migration and mem cgroups and Andi & Wu's HWPOISONous pages.

Hugh
---

arch/alpha/include/asm/mman.h | 3
arch/mips/include/asm/mman.h | 3
arch/parisc/include/asm/mman.h | 3
arch/xtensa/include/asm/mman.h | 3
include/asm-generic/mman-common.h | 3
include/linux/ksm.h | 50
include/linux/mm.h | 1
include/linux/mmu_notifier.h | 34
include/linux/sched.h | 7
kernel/fork.c | 8
mm/Kconfig | 11
mm/Makefile | 1
mm/ksm.c | 1675 ++++++++++++++++++++++++++++
mm/madvise.c | 53
mm/memory.c | 9
mm/mmap.c | 6
mm/mmu_notifier.c | 20
17 files changed, 1857 insertions(+), 33 deletions(-)

--- 2.6.31-rc1/arch/alpha/include/asm/mman.h 2008-10-09 23:13:53.000000000 +0100
+++ madv_ksm/arch/alpha/include/asm/mman.h 2009-06-29 14:10:53.000000000 +0100
@@ -48,6 +48,9 @@
#define MADV_DONTFORK 10 /* don't inherit across fork */
#define MADV_DOFORK 11 /* do inherit across fork */

+#define MADV_MERGEABLE 12 /* KSM may merge identical pages */
+#define MADV_UNMERGEABLE 13 /* KSM may not merge identical pages */
+
/* compatibility flags */
#define MAP_FILE 0

--- 2.6.31-rc1/arch/mips/include/asm/mman.h 2008-12-24 23:26:37.000000000 +0000
+++ madv_ksm/arch/mips/include/asm/mman.h 2009-06-29 14:10:54.000000000 +0100
@@ -71,6 +71,9 @@
#define MADV_DONTFORK 10 /* don't inherit across fork */
#define MADV_DOFORK 11 /* do inherit across fork */

+#define MADV_MERGEABLE 12 /* KSM may merge identical pages */
+#define MADV_UNMERGEABLE 13 /* KSM may not merge identical pages */
+
/* compatibility flags */
#define MAP_FILE 0

--- 2.6.31-rc1/arch/parisc/include/asm/mman.h 2008-12-24 23:26:37.000000000 +0000
+++ madv_ksm/arch/parisc/include/asm/mman.h 2009-06-29 14:10:54.000000000 +0100
@@ -54,6 +54,9 @@
#define MADV_16M_PAGES 24 /* Use 16 Megabyte pages */
#define MADV_64M_PAGES 26 /* Use 64 Megabyte pages */

+#define MADV_MERGEABLE 65 /* KSM may merge identical pages */
+#define MADV_UNMERGEABLE 66 /* KSM may not merge identical pages */
+
/* compatibility flags */
#define MAP_FILE 0
#define MAP_VARIABLE 0
--- 2.6.31-rc1/arch/xtensa/include/asm/mman.h 2009-03-23 23:12:14.000000000 +0000
+++ madv_ksm/arch/xtensa/include/asm/mman.h 2009-06-29 14:10:54.000000000 +0100
@@ -78,6 +78,9 @@
#define MADV_DONTFORK 10 /* don't inherit across fork */
#define MADV_DOFORK 11 /* do inherit across fork */

+#define MADV_MERGEABLE 12 /* KSM may merge identical pages */
+#define MADV_UNMERGEABLE 13 /* KSM may not merge identical pages */
+
/* compatibility flags */
#define MAP_FILE 0

--- 2.6.31-rc1/include/asm-generic/mman-common.h 2009-06-25 05:18:08.000000000 +0100
+++ madv_ksm/include/asm-generic/mman-common.h 2009-06-29 14:10:54.000000000 +0100
@@ -35,6 +35,9 @@
#define MADV_DONTFORK 10 /* don't inherit across fork */
#define MADV_DOFORK 11 /* do inherit across fork */

+#define MADV_MERGEABLE 12 /* KSM may merge identical pages */
+#define MADV_UNMERGEABLE 13 /* KSM may not merge identical pages */
+
/* compatibility flags */
#define MAP_FILE 0

--- 2.6.31-rc1/include/linux/ksm.h 1970-01-01 01:00:00.000000000 +0100
+++ madv_ksm/include/linux/ksm.h 2009-06-29 14:10:54.000000000 +0100
@@ -0,0 +1,50 @@
+#ifndef __LINUX_KSM_H
+#define __LINUX_KSM_H
+/*
+ * Memory merging support.
+ *
+ * This code enables dynamic sharing of identical pages found in different
+ * memory areas, even if they are not shared by fork().
+ */
+
+#include <linux/bitops.h>
+#include <linux/mm_types.h>
+#include <linux/sched.h>
+
+#ifdef CONFIG_KSM
+int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end, int advice, unsigned long *vm_flags);
+int __ksm_enter(struct mm_struct *mm);
+void __ksm_exit(struct mm_struct *mm);
+
+static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+{
+ if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
+ return __ksm_enter(mm);
+ return 0;
+}
+
+static inline void ksm_exit(struct mm_struct *mm)
+{
+ if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
+ __ksm_exit(mm);
+}
+#else /* !CONFIG_KSM */
+
+static inline int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end, int advice, unsigned long *vm_flags)
+{
+ return 0;
+}
+
+static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+{
+ return 0;
+}
+
+static inline void ksm_exit(struct mm_struct *mm)
+{
+}
+#endif /* !CONFIG_KSM */
+
+#endif
--- 2.6.31-rc1/include/linux/mm.h 2009-06-25 05:18:08.000000000 +0100
+++ madv_ksm/include/linux/mm.h 2009-06-29 14:10:54.000000000 +0100
@@ -105,6 +105,7 @@ extern unsigned int kobjsize(const void
#define VM_MIXEDMAP 0x10000000 /* Can contain "struct page" and pure PFN pages */
#define VM_SAO 0x20000000 /* Strong Access Ordering (powerpc) */
#define VM_PFN_AT_MMAP 0x40000000 /* PFNMAP vma that is fully mapped at mmap time */
+#define VM_MERGEABLE 0x80000000 /* KSM may merge identical pages */

#ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */
#define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
--- 2.6.31-rc1/include/linux/mmu_notifier.h 2008-10-09 23:13:53.000000000 +0100
+++ madv_ksm/include/linux/mmu_notifier.h 2009-06-29 14:10:54.000000000 +0100
@@ -62,6 +62,15 @@ struct mmu_notifier_ops {
unsigned long address);

/*
+ * change_pte is called in cases when the pte mapping a page is changed:
+ * for example, when ksm remaps a pte to point to a new shared page.
+ */
+ void (*change_pte)(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long address,
+ pte_t pte);
+
+ /*
* Before this is invoked any secondary MMU is still ok to
* read/write to the page previously pointed to by the Linux
* pte because the page hasn't been freed yet and it won't be
@@ -154,6 +163,8 @@ extern void __mmu_notifier_mm_destroy(st
extern void __mmu_notifier_release(struct mm_struct *mm);
extern int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
unsigned long address);
+extern void __mmu_notifier_change_pte(struct mm_struct *mm,
+ unsigned long address, pte_t pte);
extern void __mmu_notifier_invalidate_page(struct mm_struct *mm,
unsigned long address);
extern void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
@@ -175,6 +186,13 @@ static inline int mmu_notifier_clear_flu
return 0;
}

+static inline void mmu_notifier_change_pte(struct mm_struct *mm,
+ unsigned long address, pte_t pte)
+{
+ if (mm_has_notifiers(mm))
+ __mmu_notifier_change_pte(mm, address, pte);
+}
+
static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
unsigned long address)
{
@@ -236,6 +254,16 @@ static inline void mmu_notifier_mm_destr
__young; \
})

+#define set_pte_at_notify(__mm, __address, __ptep, __pte) \
+({ \
+ struct mm_struct *___mm = __mm; \
+ unsigned long ___address = __address; \
+ pte_t ___pte = __pte; \
+ \
+ set_pte_at(___mm, ___address, __ptep, ___pte); \
+ mmu_notifier_change_pte(___mm, ___address, ___pte); \
+})
+
#else /* CONFIG_MMU_NOTIFIER */

static inline void mmu_notifier_release(struct mm_struct *mm)
@@ -248,6 +276,11 @@ static inline int mmu_notifier_clear_flu
return 0;
}

+static inline void mmu_notifier_change_pte(struct mm_struct *mm,
+ unsigned long address, pte_t pte)
+{
+}
+
static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
unsigned long address)
{
@@ -273,6 +306,7 @@ static inline void mmu_notifier_mm_destr

#define ptep_clear_flush_young_notify ptep_clear_flush_young
#define ptep_clear_flush_notify ptep_clear_flush
+#define set_pte_at_notify set_pte_at

#endif /* CONFIG_MMU_NOTIFIER */

--- 2.6.31-rc1/include/linux/sched.h 2009-06-25 05:18:09.000000000 +0100
+++ madv_ksm/include/linux/sched.h 2009-06-29 14:10:54.000000000 +0100
@@ -419,7 +419,9 @@ extern int get_dumpable(struct mm_struct
/* dumpable bits */
#define MMF_DUMPABLE 0 /* core dump is permitted */
#define MMF_DUMP_SECURELY 1 /* core file is readable only by root */
+
#define MMF_DUMPABLE_BITS 2
+#define MMF_DUMPABLE_MASK ((1 << MMF_DUMPABLE_BITS) - 1)

/* coredump filter bits */
#define MMF_DUMP_ANON_PRIVATE 2
@@ -429,6 +431,7 @@ extern int get_dumpable(struct mm_struct
#define MMF_DUMP_ELF_HEADERS 6
#define MMF_DUMP_HUGETLB_PRIVATE 7
#define MMF_DUMP_HUGETLB_SHARED 8
+
#define MMF_DUMP_FILTER_SHIFT MMF_DUMPABLE_BITS
#define MMF_DUMP_FILTER_BITS 7
#define MMF_DUMP_FILTER_MASK \
@@ -442,6 +445,10 @@ extern int get_dumpable(struct mm_struct
#else
# define MMF_DUMP_MASK_DEFAULT_ELF 0
#endif
+ /* leave room for more dump flags */
+#define MMF_VM_MERGEABLE 16 /* KSM may merge identical pages */
+
+#define MMF_INIT_MASK (MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK)

struct sighand_struct {
atomic_t count;
--- 2.6.31-rc1/kernel/fork.c 2009-06-25 05:18:09.000000000 +0100
+++ madv_ksm/kernel/fork.c 2009-06-29 14:10:54.000000000 +0100
@@ -50,6 +50,7 @@
#include <linux/ftrace.h>
#include <linux/profile.h>
#include <linux/rmap.h>
+#include <linux/ksm.h>
#include <linux/acct.h>
#include <linux/tsacct_kern.h>
#include <linux/cn_proc.h>
@@ -290,6 +291,9 @@ static int dup_mmap(struct mm_struct *mm
rb_link = &mm->mm_rb.rb_node;
rb_parent = NULL;
pprev = &mm->mmap;
+ retval = ksm_fork(mm, oldmm);
+ if (retval)
+ goto out;

for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) {
struct file *file;
@@ -426,7 +430,8 @@ static struct mm_struct * mm_init(struct
atomic_set(&mm->mm_count, 1);
init_rwsem(&mm->mmap_sem);
INIT_LIST_HEAD(&mm->mmlist);
- mm->flags = (current->mm) ? current->mm->flags : default_dump_filter;
+ mm->flags = (current->mm) ?
+ (current->mm->flags & MMF_INIT_MASK) : default_dump_filter;
mm->core_state = NULL;
mm->nr_ptes = 0;
set_mm_counter(mm, file_rss, 0);
@@ -487,6 +492,7 @@ void mmput(struct mm_struct *mm)

if (atomic_dec_and_test(&mm->mm_users)) {
exit_aio(mm);
+ ksm_exit(mm);
exit_mmap(mm);
set_mm_exe_file(mm, NULL);
if (!list_empty(&mm->mmlist)) {
--- 2.6.31-rc1/mm/Kconfig 2009-06-25 05:18:10.000000000 +0100
+++ madv_ksm/mm/Kconfig 2009-06-29 14:10:54.000000000 +0100
@@ -214,6 +214,17 @@ config HAVE_MLOCKED_PAGE_BIT
config MMU_NOTIFIER
bool

+config KSM
+ bool "Enable KSM for page merging"
+ depends on MMU
+ help
+ Enable Kernel Samepage Merging: KSM periodically scans those areas
+ of an application's address space that an app has advised may be
+ mergeable. When it finds pages of identical content, it replaces
+ the many instances by a single resident page with that content, so
+ saving memory until one or another app needs to modify the content.
+ Recommended for use with KVM, or with other duplicative applications.
+
config DEFAULT_MMAP_MIN_ADDR
int "Low address space to protect from user allocation"
default 4096
--- 2.6.31-rc1/mm/Makefile 2009-06-25 05:18:10.000000000 +0100
+++ madv_ksm/mm/Makefile 2009-06-29 14:10:54.000000000 +0100
@@ -25,6 +25,7 @@ obj-$(CONFIG_SPARSEMEM_VMEMMAP) += spars
obj-$(CONFIG_TMPFS_POSIX_ACL) += shmem_acl.o
obj-$(CONFIG_SLOB) += slob.o
obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
+obj-$(CONFIG_KSM) += ksm.o
obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
obj-$(CONFIG_SLAB) += slab.o
obj-$(CONFIG_SLUB) += slub.o
--- 2.6.31-rc1/mm/ksm.c 1970-01-01 01:00:00.000000000 +0100
+++ madv_ksm/mm/ksm.c 2009-06-29 14:10:54.000000000 +0100
@@ -0,0 +1,1675 @@
+/*
+ * Memory merging support.
+ *
+ * This code enables dynamic sharing of identical pages found in different
+ * memory areas, even if they are not shared by fork()
+ *
+ * Copyright (C) 2008 Red Hat, Inc.
+ * Authors:
+ * Izik Eidus
+ * Andrea Arcangeli
+ * Chris Wright
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ */
+
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/mman.h>
+#include <linux/sched.h>
+#include <linux/rwsem.h>
+#include <linux/pagemap.h>
+#include <linux/hugetlb.h>
+#include <linux/rmap.h>
+#include <linux/spinlock.h>
+#include <linux/jhash.h>
+#include <linux/delay.h>
+#include <linux/kthread.h>
+#include <linux/wait.h>
+#include <linux/random.h>
+#include <linux/slab.h>
+#include <linux/rbtree.h>
+#include <linux/mmu_notifier.h>
+#include <linux/ksm.h>
+
+#include <asm/tlbflush.h>
+
+/*
+ * A few notes about the ksm scanning process,
+ * to make it easier to understand the data structures below:
+ *
+ * In order to reduce excessive scanning, ksm sorts the memory pages by their
+ * contents into a data structure that holds pointers to the pages.
+ *
+ * Since the contents of the pages may change at any moment, ksm cannot just
+ * insert the pages into a normal sorted tree and expect it to find anything.
+ *
+ * For this purpose ksm uses two data structures - the stable and unstable
+ * trees. The stable tree holds pointers to all the merged pages (KsmPages),
+ * sorted by their contents. Because each such page is write-protected,
+ * searching on this tree is fully assured to be working (nothing mutates
+ * underneath it), and therefore this tree is called the stable tree.
+ *
+ * In addition to the stable tree, ksm uses another data structure called the
+ * unstable tree: this tree holds pointers to pages that have been found to
+ * be "unchanged for a period of time". The unstable tree sorts these pages
+ * by their contents; but since they are not write-protected, ksm cannot rely
+ * upon the unstable tree to be guaranteed to work.
+ *
+ * The tree is called unstable because, if some of the pages within it were
+ * to change while in the tree, it would become corrupted.
+ * Ksm copes with this in several ways:
+ * 1) The unstable tree is flushed every time ksm finishes scanning the whole
+ * of memory, and then the tree is rebuilt from the beginning.
+ * 2) Ksm will only insert into the unstable tree those pages whose hash value
+ * has not changed during the whole progress of one circular scan of the
+ * memory.
+ * 3) The unstable tree is a red-black tree - so its balancing is based on the
+ * colors of the nodes and not on their contents: this assures that even when
+ * the tree gets "corrupted" it won't get out of balance, and scanning time
+ * remains the same; another point is that searching and inserting nodes in
+ * an rbtree use the same algorithm, so we have no overhead when we flush
+ * the tree and rebuild it.
+ * 4) Ksm never flushes the stable tree, which means that even if it takes 10
+ * attempts to find a page in the unstable tree, once it is found it is
+ * secured in the stable tree.
+ * (When we scan a new page, we first compare it against the stable tree, and
+ * then against the unstable tree.)
+ */
+
+/**
+ * struct mm_slot - ksm information per mm that is being scanned
+ * @link: link to the mm_slots hash list
+ * @rmap_list: head of this mm_slot's list of rmap_items
+ * @mm_list: link into the mm_slots list, rooted in ksm_mm_head
+ * @mm: the mm that this information is valid for
+ */
+struct mm_slot {
+ struct hlist_node link;
+ struct list_head mm_list;
+ struct list_head rmap_list;
+ struct mm_struct *mm;
+};
+
+/**
+ * struct ksm_scan - cursor for scanning
+ * @mm_slot: the current mm_slot we are scanning
+ * @address: the next address inside that to be scanned
+ * @rmap_item: the current rmap that we are scanning inside the rmap_list
+ * @seqnr: count of completed scans, for unstable_nr (and for stats)
+ *
+ * ksm uses it to know which are the next pages it needs to scan
+ */
+struct ksm_scan {
+ struct mm_slot *mm_slot;
+ unsigned long address;
+ struct rmap_item *rmap_item;
+ unsigned long seqnr;
+};
+
+/**
+ * struct rmap_item - reverse mapping item for virtual addresses
+ * @link: link into rmap_list (rmap_list is per mm)
+ * @mm: the memory structure the rmap_item is pointing into.
+ * @address: the virtual address the rmap_item is pointing to.
+ * @oldchecksum: old checksum result for the page at that virtual address
+ * @unstable_nr: tracks seqnr while in unstable tree, to help when removing
+ * @stable_tree: when 1 rmap_item is used for stable tree, 0 for unstable tree
+ * @tree_item: pointer to the stable/unstable tree node that holds the virtual
+ * address that the rmap_item is pointing to.
+ * @next: the next rmap_item hanging off the same node of the stable/unstable
+ * tree.
+ */
+struct rmap_item {
+ struct list_head link;
+ struct mm_struct *mm;
+ unsigned long address;
+ unsigned int oldchecksum;
+ unsigned char stable_tree;
+ unsigned char unstable_nr;
+ struct tree_item *tree_item;
+ struct rmap_item *next;
+ struct rmap_item *prev;
+};
+
+/*
+ * tree_item - object of the stable and unstable trees
+ */
+struct tree_item {
+ struct rb_node node;
+ struct rmap_item *rmap_item;
+};
+
+/* The stable and unstable tree heads */
+static struct rb_root root_stable_tree = RB_ROOT;
+static struct rb_root root_unstable_tree = RB_ROOT;
+
+static unsigned int nmm_slots_hash = 4096;
+static struct hlist_head *mm_slots_hash;
+
+static struct mm_slot ksm_mm_head = {
+ .mm_list = LIST_HEAD_INIT(ksm_mm_head.mm_list),
+};
+static struct ksm_scan ksm_scan = {
+ .mm_slot = &ksm_mm_head,
+};
+
+static struct kmem_cache *tree_item_cache;
+static struct kmem_cache *rmap_item_cache;
+static struct kmem_cache *mm_slot_cache;
+
+/* The number of nodes in the stable tree */
+static unsigned long ksm_kernel_pages_allocated;
+
+/* The number of page slots sharing those nodes */
+static unsigned long ksm_pages_shared;
+
+/* Limit on the number of unswappable pages used */
+static unsigned long ksm_max_kernel_pages;
+
+/* Number of pages ksmd should scan in one batch */
+static unsigned int ksm_thread_pages_to_scan;
+
+/* Milliseconds ksmd should sleep between batches */
+static unsigned int ksm_thread_sleep_millisecs;
+
+#define KSM_RUN_STOP 0
+#define KSM_RUN_MERGE 1
+#define KSM_RUN_UNMERGE 2
+static unsigned int ksm_run;
+
+static DECLARE_WAIT_QUEUE_HEAD(ksm_thread_wait);
+static DEFINE_MUTEX(ksm_thread_mutex);
+static DEFINE_SPINLOCK(ksm_mmlist_lock);
+
+#define KSM_KMEM_CACHE(__struct, __flags) kmem_cache_create("ksm_"#__struct,\
+ sizeof(struct __struct), __alignof__(struct __struct),\
+ (__flags), NULL)
+
+static int __init ksm_slab_init(void)
+{
+ int ret = -ENOMEM;
+
+ tree_item_cache = KSM_KMEM_CACHE(tree_item, 0);
+ if (!tree_item_cache)
+ goto out;
+
+ rmap_item_cache = KSM_KMEM_CACHE(rmap_item, 0);
+ if (!rmap_item_cache)
+ goto out_free;
+
+ mm_slot_cache = KSM_KMEM_CACHE(mm_slot, 0);
+ if (!mm_slot_cache)
+ goto out_free1;
+
+ return 0;
+
+out_free1:
+ kmem_cache_destroy(rmap_item_cache);
+out_free:
+ kmem_cache_destroy(tree_item_cache);
+out:
+ return ret;
+}
+
+static void __init ksm_slab_free(void)
+{
+ kmem_cache_destroy(mm_slot_cache);
+ kmem_cache_destroy(rmap_item_cache);
+ kmem_cache_destroy(tree_item_cache);
+ mm_slot_cache = NULL;
+}
+
+static inline struct tree_item *alloc_tree_item(void)
+{
+ return kmem_cache_zalloc(tree_item_cache, GFP_KERNEL);
+}
+
+static void free_tree_item(struct tree_item *tree_item)
+{
+ kmem_cache_free(tree_item_cache, tree_item);
+}
+
+static inline struct rmap_item *alloc_rmap_item(void)
+{
+ return kmem_cache_zalloc(rmap_item_cache, GFP_KERNEL);
+}
+
+static inline void free_rmap_item(struct rmap_item *rmap_item)
+{
+ rmap_item->mm = NULL; /* debug safety */
+ kmem_cache_free(rmap_item_cache, rmap_item);
+}
+
+static inline struct mm_slot *alloc_mm_slot(void)
+{
+ if (!mm_slot_cache) /* initialization failed */
+ return NULL;
+ return kmem_cache_zalloc(mm_slot_cache, GFP_KERNEL);
+}
+
+static inline void free_mm_slot(struct mm_slot *mm_slot)
+{
+ kmem_cache_free(mm_slot_cache, mm_slot);
+}
+
+static int is_present_pte(struct mm_struct *mm, unsigned long addr)
+{
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *ptep;
+ int present = 0;
+
+ pgd = pgd_offset(mm, addr);
+ if (!pgd_present(*pgd))
+ goto out;
+
+ pud = pud_offset(pgd, addr);
+ if (!pud_present(*pud) || pud_huge(*pud))
+ goto out;
+
+ pmd = pmd_offset(pud, addr);
+ if (!pmd_present(*pmd) || pmd_huge(*pmd))
+ goto out;
+
+ ptep = pte_offset_map(pmd, addr);
+ present = pte_present(*ptep);
+ pte_unmap(ptep);
+out:
+ return present;
+}
+
+/*
+ * PageKsm - these pages are the write-protected pages that ksm maps into
+ * multiple vmas: the "shared pages" or "merged pages". All user ptes
+ * pointing to them are write-protected, so their data content cannot
+ * be changed; nor can they be swapped out (at present).
+ */
+static inline int PageKsm(struct page *page)
+{
+ /*
+ * When ksm creates a new shared page, it uses an ordinary kernel page
+ * allocated with alloc_page(): therefore this page is not PageAnon,
+ * nor is it mapped from any file. So long as we only apply this test
+ * to VM_MERGEABLE areas, the check below is good to distinguish a ksm
+ * page from an anonymous page or file page in that area.
+ */
+ return page->mapping == NULL;
+}
+
+static inline void __break_cow(struct mm_struct *mm, unsigned long addr)
+{
+ struct page *page[1];
+
+ if (get_user_pages(current, mm, addr, 1, 1, 1, page, NULL) == 1)
+ put_page(page[0]);
+}
+
+static void break_cow(struct mm_struct *mm, unsigned long addr)
+{
+ down_read(&mm->mmap_sem);
+ __break_cow(mm, addr);
+ up_read(&mm->mmap_sem);
+}
+
+/*
+ * Removing rmap_item from stable or unstable tree.
+ * This function will clean the information from the stable/unstable tree
+ * and will free the tree_item if needed.
+ */
+static void remove_rmap_item_from_tree(struct rmap_item *rmap_item)
+{
+ struct tree_item *tree_item = rmap_item->tree_item;
+
+ if (rmap_item->stable_tree) {
+ ksm_pages_shared--;
+ if (rmap_item->prev) {
+ BUG_ON(rmap_item->prev->next != rmap_item);
+ rmap_item->prev->next = rmap_item->next;
+ }
+ if (rmap_item->next) {
+ BUG_ON(rmap_item->next->prev != rmap_item);
+ rmap_item->next->prev = rmap_item->prev;
+ }
+ }
+
+ if (tree_item) {
+ if (rmap_item->stable_tree) {
+ if (!rmap_item->next && !rmap_item->prev) {
+ rb_erase(&tree_item->node, &root_stable_tree);
+ free_tree_item(tree_item);
+ ksm_kernel_pages_allocated--;
+ } else if (!rmap_item->prev) {
+ BUG_ON(tree_item->rmap_item != rmap_item);
+ tree_item->rmap_item = rmap_item->next;
+ } else
+ BUG_ON(tree_item->rmap_item == rmap_item);
+ } else {
+ unsigned char age;
+ /*
+ * ksm_thread can and must skip the rb_erase, because
+ * root_unstable_tree was already reset to RB_ROOT.
+ * But __ksm_exit has to be careful: do the rb_erase
+ * if it's interrupting a scan, and this rmap_item was
+ * inserted by this scan rather than left from before.
+ *
+ * Because of the case in which remove_mm_from_lists
+ * increments seqnr before removing rmaps, unstable_nr
+ * may even be 2 behind seqnr, but should never be
+ * further behind. Yes, I did have trouble with this!
+ */
+ age = ksm_scan.seqnr - rmap_item->unstable_nr;
+ BUG_ON(age > 2);
+ if (!age)
+ rb_erase(&tree_item->node, &root_unstable_tree);
+ free_tree_item(tree_item);
+ }
+ }
+
+ rmap_item->stable_tree = 0;
+ rmap_item->tree_item = NULL;
+ rmap_item->next = NULL;
+ rmap_item->prev = NULL;
+
+ cond_resched(); /* we're called from many long loops */
+}
+
+static void remove_all_slot_rmap_items(struct mm_slot *mm_slot)
+{
+ struct rmap_item *rmap_item, *node;
+
+ list_for_each_entry_safe(rmap_item, node, &mm_slot->rmap_list, link) {
+ remove_rmap_item_from_tree(rmap_item);
+ list_del(&rmap_item->link);
+ free_rmap_item(rmap_item);
+ }
+}
+
+static void unmerge_slot_rmap_items(struct mm_slot *mm_slot,
+ unsigned long start, unsigned long end)
+{
+ struct rmap_item *rmap_item;
+
+ list_for_each_entry(rmap_item, &mm_slot->rmap_list, link) {
+ if (rmap_item->address < start)
+ continue;
+ if (rmap_item->address >= end)
+ break;
+ if (rmap_item->stable_tree)
+ __break_cow(mm_slot->mm, rmap_item->address);
+ /*
+ * madvise's down_write of mmap_sem is enough to protect
+ * us from those who list_del with down_read of mmap_sem;
+ * but we cannot safely remove_rmap_item_from_tree without
+ * ksm_thread_mutex, and we cannot acquire that whilst we
+ * hold mmap_sem: so leave the cleanup to ksmd's next pass.
+ */
+ }
+}
+
+static void unmerge_and_remove_all_rmap_items(void)
+{
+ struct mm_slot *mm_slot;
+ struct rmap_item *rmap_item, *node;
+
+ list_for_each_entry(mm_slot, &ksm_mm_head.mm_list, mm_list) {
+ down_read(&mm_slot->mm->mmap_sem);
+ list_for_each_entry_safe(rmap_item, node,
+ &mm_slot->rmap_list, link) {
+ if (rmap_item->stable_tree)
+ __break_cow(mm_slot->mm, rmap_item->address);
+ remove_rmap_item_from_tree(rmap_item);
+ list_del(&rmap_item->link);
+ free_rmap_item(rmap_item);
+ }
+ up_read(&mm_slot->mm->mmap_sem);
+ }
+
+ spin_lock(&ksm_mmlist_lock);
+ if (ksm_scan.mm_slot != &ksm_mm_head) {
+ ksm_scan.mm_slot = &ksm_mm_head;
+ ksm_scan.seqnr++;
+ }
+ spin_unlock(&ksm_mmlist_lock);
+}
+
+static int __init mm_slots_hash_init(void)
+{
+ mm_slots_hash = kzalloc(nmm_slots_hash * sizeof(struct hlist_head),
+ GFP_KERNEL);
+ if (!mm_slots_hash)
+ return -ENOMEM;
+ return 0;
+}
+
+static void __init mm_slots_hash_free(void)
+{
+ kfree(mm_slots_hash);
+}
+
+static u32 calc_checksum(struct page *page)
+{
+ u32 checksum;
+ void *addr = kmap_atomic(page, KM_USER0);
+ checksum = jhash2(addr, PAGE_SIZE / 4, 17);
+ kunmap_atomic(addr, KM_USER0);
+ return checksum;
+}
+
+static int memcmp_pages(struct page *page1, struct page *page2)
+{
+ char *addr1, *addr2;
+ int ret;
+
+ addr1 = kmap_atomic(page1, KM_USER0);
+ addr2 = kmap_atomic(page2, KM_USER1);
+ ret = memcmp(addr1, addr2, PAGE_SIZE);
+ kunmap_atomic(addr2, KM_USER1);
+ kunmap_atomic(addr1, KM_USER0);
+ return ret;
+}
+
+/*
+ * pages_identical: return 1 if identical, otherwise 0.
+ */
+static inline int pages_identical(struct page *page1, struct page *page2)
+{
+ return !memcmp_pages(page1, page2);
+}
+
+static int write_protect_page(struct page *page,
+ struct vm_area_struct *vma,
+ pte_t *orig_pte)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ unsigned long addr;
+ pte_t *ptep;
+ spinlock_t *ptl;
+ int swapped;
+ int ret = -EFAULT;
+
+ addr = page_address_in_vma(page, vma);
+ if (addr == -EFAULT)
+ goto out;
+
+ ptep = page_check_address(page, mm, addr, &ptl, 0);
+ if (!ptep)
+ goto out;
+
+ if (pte_write(*ptep)) {
+ pte_t entry;
+
+ swapped = PageSwapCache(page);
+ flush_cache_page(vma, addr, page_to_pfn(page));
+ /*
+ * Ok this is tricky: get_user_pages_fast() runs without taking
+ * any lock, so the check that we are about to make, comparing
+ * the page count against the map count, is racy and O_DIRECT
+ * can start right after the check.
+ * So we clear the pte and flush the TLB before the check;
+ * this assures us that no O_DIRECT can begin after the check
+ * or in the middle of the check.
+ */
+ entry = ptep_clear_flush(vma, addr, ptep);
+ /*
+ * Check that no O_DIRECT or similar I/O is in progress on the
+ * page
+ */
+ if ((page_mapcount(page) + 2 + swapped) != page_count(page)) {
+ set_pte_at_notify(mm, addr, ptep, entry);
+ goto out_unlock;
+ }
+ entry = pte_wrprotect(entry);
+ set_pte_at_notify(mm, addr, ptep, entry);
+ }
+ *orig_pte = *ptep;
+ ret = 0;
+
+out_unlock:
+ pte_unmap_unlock(ptep, ptl);
+out:
+ return ret;
+}
+
+/**
+ * replace_page - replace page in vma with new page
+ * @vma: vma that holds the pte pointing to oldpage.
+ * @oldpage: the page we are replacing with newpage
+ * @newpage: the page we replace oldpage with
+ * @orig_pte: the original value of the pte
+ * @prot: page protection bits
+ *
+ * Returns 0 on success, -EFAULT on failure.
+ *
+ * Note: @newpage must not be an anonymous page because replace_page() does
+ * not change the mapping of @newpage to have the same values as @oldpage.
+ * @newpage can be mapped in several vmas at different offsets (page->index).
+ */
+static int replace_page(struct vm_area_struct *vma, struct page *oldpage,
+ struct page *newpage, pte_t orig_pte, pgprot_t prot)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *ptep;
+ spinlock_t *ptl;
+ unsigned long addr;
+ int ret = -EFAULT;
+
+ BUG_ON(PageAnon(newpage));
+
+ addr = page_address_in_vma(oldpage, vma);
+ if (addr == -EFAULT)
+ goto out;
+
+ pgd = pgd_offset(mm, addr);
+ if (!pgd_present(*pgd))
+ goto out;
+
+ pud = pud_offset(pgd, addr);
+ if (!pud_present(*pud))
+ goto out;
+
+ pmd = pmd_offset(pud, addr);
+ if (!pmd_present(*pmd))
+ goto out;
+
+ ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
+ if (!pte_same(*ptep, orig_pte)) {
+ pte_unmap_unlock(ptep, ptl);
+ goto out;
+ }
+
+ ret = 0;
+ get_page(newpage);
+ page_add_file_rmap(newpage);
+
+ flush_cache_page(vma, addr, pte_pfn(*ptep));
+ ptep_clear_flush(vma, addr, ptep);
+ set_pte_at_notify(mm, addr, ptep, mk_pte(newpage, prot));
+
+ page_remove_rmap(oldpage);
+ if (PageAnon(oldpage)) {
+ dec_mm_counter(mm, anon_rss);
+ inc_mm_counter(mm, file_rss);
+ }
+ put_page(oldpage);
+
+ pte_unmap_unlock(ptep, ptl);
+out:
+ return ret;
+}
+
+/*
+ * try_to_merge_one_page - take two pages and merge them into one
+ * @mm: mm_struct that holds the vma pointing into oldpage
+ * @vma: the vma that holds the pte pointing into oldpage
+ * @oldpage: the page that we want to replace with newpage
+ * @newpage: the page that we want to map instead of oldpage
+ * @newprot: the new permission of the pte inside vma
+ * note:
+ * oldpage should be an anon page while newpage should be a file-mapped page
+ *
+ * this function returns 0 if the pages were merged, -EFAULT otherwise.
+ */
+static int try_to_merge_one_page(struct mm_struct *mm,
+ struct vm_area_struct *vma,
+ struct page *oldpage,
+ struct page *newpage,
+ pgprot_t newprot)
+{
+ int ret = -EFAULT;
+ pte_t orig_pte = __pte(0);
+
+ if (!(vma->vm_flags & VM_MERGEABLE))
+ goto out;
+
+ if (!PageAnon(oldpage))
+ goto out;
+
+ get_page(newpage);
+ get_page(oldpage);
+
+ /*
+ * We need the page lock to read a stable PageSwapCache in
+ * write_protect_page(). We use trylock_page() instead of
+ * lock_page() because we don't want to wait here - we
+ * prefer to continue scanning and merging different pages,
+ * then come back to this page when it is unlocked.
+ */
+ if (!trylock_page(oldpage))
+ goto out_putpage;
+ /*
+ * If this anonymous page is mapped only here, its pte may need
+ * to be write-protected. If it's mapped elsewhere, all of its
+ * ptes are necessarily already write-protected. But in either
+ * case, we need to lock and check page_count is not raised.
+ */
+ if (write_protect_page(oldpage, vma, &orig_pte)) {
+ unlock_page(oldpage);
+ goto out_putpage;
+ }
+ unlock_page(oldpage);
+
+ if (pages_identical(oldpage, newpage))
+ ret = replace_page(vma, oldpage, newpage, orig_pte, newprot);
+
+out_putpage:
+ put_page(oldpage);
+ put_page(newpage);
+out:
+ return ret;
+}
+
+/*
+ * try_to_merge_two_pages_alloc - take two identical pages and prepare them
+ * to be merged into one page.
+ *
+ * this function returns 0 if we successfully mapped two identical pages
+ * into one page, -EFAULT otherwise.
+ * (note that this function allocates a new kernel page: if one of the pages
+ * is already a shared page (KsmPage), then try_to_merge_two_pages_noalloc()
+ * should be called instead.)
+ */
+static int try_to_merge_two_pages_alloc(struct mm_struct *mm1,
+ struct page *page1,
+ struct mm_struct *mm2,
+ struct page *page2,
+ unsigned long addr1,
+ unsigned long addr2)
+{
+ struct vm_area_struct *vma;
+ pgprot_t prot;
+ struct page *kpage;
+ int ret = -EFAULT;
+
+ /*
+ * The number of nodes in the stable tree
+ * is the number of kernel pages that we hold.
+ */
+ if (ksm_max_kernel_pages &&
+ ksm_max_kernel_pages <= ksm_kernel_pages_allocated)
+ return ret;
+
+ kpage = alloc_page(GFP_HIGHUSER);
+ if (!kpage)
+ return ret;
+
+ down_read(&mm1->mmap_sem);
+ vma = find_vma(mm1, addr1);
+ if (!vma || vma->vm_start > addr1) {
+ put_page(kpage);
+ up_read(&mm1->mmap_sem);
+ return ret;
+ }
+
+ prot = vm_get_page_prot(vma->vm_flags & ~VM_WRITE);
+
+ copy_user_highpage(kpage, page1, addr1, vma);
+ ret = try_to_merge_one_page(mm1, vma, page1, kpage, prot);
+ up_read(&mm1->mmap_sem);
+
+ if (!ret) {
+ down_read(&mm2->mmap_sem);
+ vma = find_vma(mm2, addr2);
+ if (!vma || vma->vm_start > addr2) {
+ put_page(kpage);
+ up_read(&mm2->mmap_sem);
+ break_cow(mm1, addr1);
+ return -EFAULT;
+ }
+
+ prot = vm_get_page_prot(vma->vm_flags & ~VM_WRITE);
+
+ ret = try_to_merge_one_page(mm2, vma, page2, kpage,
+ prot);
+ up_read(&mm2->mmap_sem);
+ /*
+ * If the second try_to_merge_one_page call has failed,
+ * we are left with a KsmPage that has just one pte
+ * pointing to it: in that case we break the COW on
+ * that pte.
+ */
+ if (ret)
+ break_cow(mm1, addr1);
+ else
+ ksm_pages_shared += 2;
+ }
+
+ put_page(kpage);
+ return ret;
+}
+
+/*
+ * try_to_merge_two_pages_noalloc - the same as try_to_merge_two_pages_alloc,
+ * but no new kernel page is allocated (page2 should be KsmPage)
+ */
+static int try_to_merge_two_pages_noalloc(struct mm_struct *mm1,
+ struct page *page1,
+ struct page *page2,
+ unsigned long addr1)
+{
+ struct vm_area_struct *vma;
+ pgprot_t prot;
+ int ret = -EFAULT;
+
+ /*
+ * If page2 is shared, we can just make the pte of mm1(page1) point to
+ * page2.
+ */
+ BUG_ON(!PageKsm(page2));
+ down_read(&mm1->mmap_sem);
+ vma = find_vma(mm1, addr1);
+ if (!vma || vma->vm_start > addr1) {
+ up_read(&mm1->mmap_sem);
+ return ret;
+ }
+
+ prot = vm_get_page_prot(vma->vm_flags & ~VM_WRITE);
+
+ ret = try_to_merge_one_page(mm1, vma, page1, page2, prot);
+ up_read(&mm1->mmap_sem);
+ if (!ret)
+ ksm_pages_shared++;
+
+ return ret;
+}
+
+/*
+ * is_zapped_item - check if the page belonging to the rmap_item was zapped.
+ *
+ * This function checks whether the page that the virtual address inside
+ * rmap_item is pointing to is still a KsmPage, and therefore whether we can
+ * trust the content of this page.
+ * Since this function already calls get_user_pages, it returns the pointer
+ * to the page as an optimization.
+ */
+static int is_zapped_item(struct rmap_item *rmap_item,
+ struct page **page)
+{
+ struct vm_area_struct *vma;
+ int ret = 0;
+
+ cond_resched();
+ down_read(&rmap_item->mm->mmap_sem);
+ if (is_present_pte(rmap_item->mm, rmap_item->address)) {
+ vma = find_vma(rmap_item->mm, rmap_item->address);
+ if (vma && (vma->vm_flags & VM_MERGEABLE)) {
+ ret = get_user_pages(current, rmap_item->mm,
+ rmap_item->address,
+ 1, 0, 0, page, NULL);
+ }
+ }
+ up_read(&rmap_item->mm->mmap_sem);
+
+ if (ret != 1)
+ return 1;
+
+ if (unlikely(!PageKsm(page[0]))) {
+ put_page(page[0]);
+ return 1;
+ }
+ return 0;
+}
+
+/*
+ * stable_tree_search - search for a page inside the stable tree
+ * @page: the page that we are searching for an identical page to.
+ * @page2: used to return the identical page that we have found inside the
+ * stable tree.
+ * @rmap_item: the reverse mapping item
+ *
+ * this function checks if there is a page inside the stable tree
+ * with content identical to the page that we are scanning right now.
+ *
+ * this function returns a pointer to the rmap_item of the identical item if
+ * found, NULL otherwise.
+ */
+static struct rmap_item *stable_tree_search(struct page *page,
+ struct page **page2,
+ struct rmap_item *rmap_item)
+{
+ struct rb_node *node = root_stable_tree.rb_node;
+ struct tree_item *tree_item;
+ struct rmap_item *found_rmap_item, *next_rmap_item;
+
+ while (node) {
+ int ret;
+
+ tree_item = rb_entry(node, struct tree_item, node);
+ found_rmap_item = tree_item->rmap_item;
+ while (found_rmap_item) {
+ BUG_ON(!found_rmap_item->stable_tree);
+ BUG_ON(!found_rmap_item->tree_item);
+ if (!rmap_item ||
+ !(found_rmap_item->mm == rmap_item->mm &&
+ found_rmap_item->address == rmap_item->address)) {
+ if (!is_zapped_item(found_rmap_item, page2))
+ break;
+ next_rmap_item = found_rmap_item->next;
+ remove_rmap_item_from_tree(found_rmap_item);
+ found_rmap_item = next_rmap_item;
+ } else
+ found_rmap_item = found_rmap_item->next;
+ }
+ if (!found_rmap_item)
+ return NULL;
+
+ /*
+ * We can trust the value of the memcmp as we know the pages
+ * are write protected.
+ */
+ ret = memcmp_pages(page, page2[0]);
+
+ if (ret < 0) {
+ put_page(page2[0]);
+ node = node->rb_left;
+ } else if (ret > 0) {
+ put_page(page2[0]);
+ node = node->rb_right;
+ } else {
+ return found_rmap_item;
+ }
+ }
+
+ return NULL;
+}
+
+/*
+ * stable_tree_insert - insert into the stable tree a new rmap_item that is
+ * pointing to a new KsmPage.
+ *
+ * @page: the page that we are searching for an identical page to inside the
+ * stable tree.
+ * @new_tree_item: the new tree item we are going to link into the stable tree.
+ * @rmap_item: pointer to the reverse mapping item.
+ *
+ * this function returns 0 on success, -EFAULT otherwise.
+ */
+static int stable_tree_insert(struct page *page,
+ struct tree_item *new_tree_item,
+ struct rmap_item *rmap_item)
+{
+ struct rb_node **new = &root_stable_tree.rb_node;
+ struct rb_node *parent = NULL;
+ struct tree_item *tree_item;
+ struct page *page2[1];
+
+ while (*new) {
+ int ret;
+ struct rmap_item *insert_rmap_item, *next_rmap_item;
+
+ tree_item = rb_entry(*new, struct tree_item, node);
+ insert_rmap_item = tree_item->rmap_item;
+ while (insert_rmap_item) {
+ BUG_ON(!insert_rmap_item->stable_tree);
+ BUG_ON(!insert_rmap_item->tree_item);
+ if (!(insert_rmap_item->mm == rmap_item->mm &&
+ insert_rmap_item->address == rmap_item->address)) {
+ if (!is_zapped_item(insert_rmap_item, page2))
+ break;
+ next_rmap_item = insert_rmap_item->next;
+ remove_rmap_item_from_tree(insert_rmap_item);
+ insert_rmap_item = next_rmap_item;
+ } else
+ insert_rmap_item = insert_rmap_item->next;
+ }
+ if (!insert_rmap_item)
+ return -EFAULT;
+
+ ret = memcmp_pages(page, page2[0]);
+
+ parent = *new;
+ if (ret < 0) {
+ put_page(page2[0]);
+ new = &parent->rb_left;
+ } else if (ret > 0) {
+ put_page(page2[0]);
+ new = &parent->rb_right;
+ } else {
+ /*
+ * It isn't a bug when we are here (the fact that
+ * we didn't find the page inside the stable tree),
+ * because when we searched for the page inside the
+ * stable tree it was still not write-protected,
+ * and therefore it could have changed later.
+ */
+ return -EFAULT;
+ }
+ }
+
+ ksm_kernel_pages_allocated++;
+ rmap_item->stable_tree = 1;
+ rmap_item->tree_item = new_tree_item;
+ rb_link_node(&new_tree_item->node, parent, new);
+ rb_insert_color(&new_tree_item->node, &root_stable_tree);
+
+ return 0;
+}
+
+/*
+ * unstable_tree_search_insert - search and insert items into the unstable tree.
+ *
+ * @page: the page that we are going to search an identical page for, or to
+ * insert into the unstable tree
+ * @page2: used to return the identical page that was found inside the
+ * unstable tree
+ * @page_rmap_item: the reverse mapping item of page
+ *
+ * this function searches the unstable tree for a page identical to the page
+ * that we are scanning right now; if no page with identical content exists
+ * inside the unstable tree, we insert page_rmap_item as a new object into it.
+ *
+ * this function returns a pointer to the rmap_item of the item found to be
+ * identical to the page that we are scanning right now, NULL otherwise.
+ *
+ * (this function does both searching and inserting, because searching and
+ * inserting share the same walking algorithm in rbtrees)
+ */
+static struct rmap_item *unstable_tree_search_insert(struct page *page,
+ struct page **page2,
+ struct rmap_item *page_rmap_item)
+{
+ struct rb_node **new = &root_unstable_tree.rb_node;
+ struct rb_node *parent = NULL;
+ struct tree_item *tree_item;
+ struct tree_item *new_tree_item;
+ struct rmap_item *rmap_item;
+
+ while (*new) {
+ int ret;
+
+ tree_item = rb_entry(*new, struct tree_item, node);
+ rmap_item = tree_item->rmap_item;
+
+ down_read(&rmap_item->mm->mmap_sem);
+ /*
+ * We don't want to swap in pages
+ */
+ if (!is_present_pte(rmap_item->mm, rmap_item->address)) {
+ up_read(&rmap_item->mm->mmap_sem);
+ return NULL;
+ }
+
+ ret = get_user_pages(current, rmap_item->mm, rmap_item->address,
+ 1, 0, 0, page2, NULL);
+ up_read(&rmap_item->mm->mmap_sem);
+ if (ret != 1)
+ return NULL;
+
+ /*
+ * Don't substitute an unswappable ksm page
+ * just for one good swappable forked page.
+ */
+ if (page == page2[0]) {
+ put_page(page2[0]);
+ return NULL;
+ }
+
+ ret = memcmp_pages(page, page2[0]);
+
+ parent = *new;
+ if (ret < 0) {
+ put_page(page2[0]);
+ new = &parent->rb_left;
+ } else if (ret > 0) {
+ put_page(page2[0]);
+ new = &parent->rb_right;
+ } else {
+ return rmap_item;
+ }
+ }
+
+ if (!page_rmap_item)
+ return NULL;
+
+ new_tree_item = alloc_tree_item();
+ if (!new_tree_item)
+ return NULL;
+
+ page_rmap_item->unstable_nr = ksm_scan.seqnr; /* truncated */
+ page_rmap_item->tree_item = new_tree_item;
+ new_tree_item->rmap_item = page_rmap_item;
+ rb_link_node(&new_tree_item->node, parent, new);
+ rb_insert_color(&new_tree_item->node, &root_unstable_tree);
+
+ return NULL;
+}
+
+/*
+ * insert_to_stable_tree_list - insert another rmap_item into the linked list
+ * of rmap_items of a given node inside the stable tree.
+ */
+static void insert_to_stable_tree_list(struct rmap_item *rmap_item,
+ struct rmap_item *tree_rmap_item)
+{
+ rmap_item->next = tree_rmap_item->next;
+ rmap_item->prev = tree_rmap_item;
+
+ if (tree_rmap_item->next)
+ tree_rmap_item->next->prev = rmap_item;
+
+ tree_rmap_item->next = rmap_item;
+
+ rmap_item->stable_tree = 1;
+ rmap_item->tree_item = tree_rmap_item->tree_item;
+}
+
+/*
+ * cmp_and_merge_page - take a page, compute its hash value and check if
+ * another page with a matching hash value exists;
+ * in case we find that such a page exists, we call
+ * try_to_merge_two_pages().
+ *
+ * @page: the page that we are searching for an identical page to.
+ * @rmap_item: the reverse mapping into the virtual address of this page
+ */
+static void cmp_and_merge_page(struct page *page, struct rmap_item *rmap_item)
+{
+ struct page *page2[1];
+ struct rmap_item *tree_rmap_item;
+ unsigned int checksum;
+ int ret;
+
+ if (rmap_item->stable_tree)
+ remove_rmap_item_from_tree(rmap_item);
+
+ /* We first start with searching the page inside the stable tree */
+ tree_rmap_item = stable_tree_search(page, page2, rmap_item);
+ if (tree_rmap_item) {
+ BUG_ON(!tree_rmap_item->tree_item);
+
+ if (page == page2[0]) { /* forked */
+ ksm_pages_shared++;
+ ret = 0;
+ } else
+ ret = try_to_merge_two_pages_noalloc(rmap_item->mm,
+ page, page2[0],
+ rmap_item->address);
+ put_page(page2[0]);
+
+ if (!ret) {
+ /*
+ * The page was successfully merged, let's insert its
+ * rmap_item into the stable tree.
+ */
+ insert_to_stable_tree_list(rmap_item, tree_rmap_item);
+ }
+ return;
+ }
+
+ /*
+ * A ksm page might have got here by fork or by mremap move, but
+ * its other references have already been removed from the tree.
+ */
+ if (PageKsm(page))
+ break_cow(rmap_item->mm, rmap_item->address);
+
+ /*
+ * If the hash value of the page has changed since the last time we
+ * calculated it, this page is being changed frequently: therefore we
+ * don't want to insert it into the unstable tree, and we don't want
+ * to waste our time searching for something identical to it there.
+ */
+ checksum = calc_checksum(page);
+ if (rmap_item->oldchecksum != checksum) {
+ rmap_item->oldchecksum = checksum;
+ return;
+ }
+
+ tree_rmap_item = unstable_tree_search_insert(page, page2, rmap_item);
+ if (tree_rmap_item) {
+ struct tree_item *tree_item;
+ struct mm_struct *tree_mm;
+ unsigned long tree_addr;
+
+ tree_item = tree_rmap_item->tree_item;
+ tree_mm = tree_rmap_item->mm;
+ tree_addr = tree_rmap_item->address;
+
+ ret = try_to_merge_two_pages_alloc(rmap_item->mm, page, tree_mm,
+ page2[0], rmap_item->address,
+ tree_addr);
+ /*
+ * As soon as we successfully merge this page, we want to remove
+ * the rmap_item object of the page that we have merged with
+ * from the unstable_tree and instead insert it as a new stable
+ * tree node.
+ */
+ if (!ret) {
+ rb_erase(&tree_item->node, &root_unstable_tree);
+ /*
+ * If we fail to insert the page into the stable tree,
+ * we will have 2 virtual addresses that are pointing
+ * to a KsmPage left outside the stable tree,
+ * in which case we need to break_cow on both.
+ */
+ if (stable_tree_insert(page2[0], tree_item,
+ tree_rmap_item) == 0) {
+ insert_to_stable_tree_list(rmap_item,
+ tree_rmap_item);
+ } else {
+ free_tree_item(tree_item);
+ tree_rmap_item->tree_item = NULL;
+ break_cow(tree_mm, tree_addr);
+ break_cow(rmap_item->mm, rmap_item->address);
+ ksm_pages_shared -= 2;
+ }
+ }
+
+ put_page(page2[0]);
+ }
+}
+
+static struct mm_slot *get_mm_slot(struct mm_struct *mm)
+{
+ struct mm_slot *mm_slot;
+ struct hlist_head *bucket;
+ struct hlist_node *node;
+
+ bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
+ % nmm_slots_hash];
+ hlist_for_each_entry(mm_slot, node, bucket, link) {
+ if (mm == mm_slot->mm)
+ return mm_slot;
+ }
+ return NULL;
+}
+
+static void insert_to_mm_slots_hash(struct mm_struct *mm,
+ struct mm_slot *mm_slot)
+{
+ struct hlist_head *bucket;
+
+ bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
+ % nmm_slots_hash];
+ mm_slot->mm = mm;
+ INIT_LIST_HEAD(&mm_slot->rmap_list);
+ hlist_add_head(&mm_slot->link, bucket);
+}
+
+static void remove_mm_from_lists(struct mm_struct *mm)
+{
+ struct mm_slot *mm_slot;
+
+ spin_lock(&ksm_mmlist_lock);
+ mm_slot = get_mm_slot(mm);
+
+ /*
+ * This mm_slot is always at the scanning cursor when we're
+ * called from scan_get_next_rmap_item; but that's a special
+ * case when we're called from __ksm_exit.
+ */
+ if (ksm_scan.mm_slot == mm_slot) {
+ ksm_scan.mm_slot = list_entry(
+ mm_slot->mm_list.next, struct mm_slot, mm_list);
+ ksm_scan.address = 0;
+ ksm_scan.rmap_item = list_entry(
+ &ksm_scan.mm_slot->rmap_list, struct rmap_item, link);
+ if (ksm_scan.mm_slot == &ksm_mm_head)
+ ksm_scan.seqnr++;
+ }
+
+ hlist_del(&mm_slot->link);
+ list_del(&mm_slot->mm_list);
+ spin_unlock(&ksm_mmlist_lock);
+
+ remove_all_slot_rmap_items(mm_slot);
+ free_mm_slot(mm_slot);
+ clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+}
+
+/*
+ * update_rmap_list - nuke every rmap_item above the current rmap_item.
+ */
+static void update_rmap_list(struct list_head *head, struct list_head *cur)
+{
+ struct rmap_item *rmap_item;
+
+ cur = cur->next;
+ while (cur != head) {
+ rmap_item = list_entry(cur, struct rmap_item, link);
+ cur = cur->next;
+ remove_rmap_item_from_tree(rmap_item);
+ list_del(&rmap_item->link);
+ free_rmap_item(rmap_item);
+ }
+}
+
+static struct rmap_item *get_next_rmap_item(unsigned long addr,
+ struct mm_struct *mm,
+ struct list_head *head,
+ struct list_head *cur)
+{
+ struct rmap_item *rmap_item;
+
+ cur = cur->next;
+ while (cur != head) {
+ rmap_item = list_entry(cur, struct rmap_item, link);
+ if (rmap_item->address == addr) {
+ if (!rmap_item->stable_tree)
+ remove_rmap_item_from_tree(rmap_item);
+ return rmap_item;
+ }
+ if (rmap_item->address > addr)
+ break;
+ cur = cur->next;
+ remove_rmap_item_from_tree(rmap_item);
+ list_del(&rmap_item->link);
+ free_rmap_item(rmap_item);
+ }
+
+ rmap_item = alloc_rmap_item();
+ if (rmap_item) {
+ /* It has already been zeroed */
+ rmap_item->mm = mm;
+ rmap_item->address = addr;
+ list_add_tail(&rmap_item->link, cur);
+ }
+ return rmap_item;
+}
+
+static struct rmap_item *scan_get_next_rmap_item(void)
+{
+ struct mm_struct *mm;
+ struct mm_slot *slot;
+ struct vm_area_struct *vma;
+ struct rmap_item *rmap_item;
+
+ if (list_empty(&ksm_mm_head.mm_list))
+ return NULL;
+
+ slot = ksm_scan.mm_slot;
+ if (slot == &ksm_mm_head) {
+ root_unstable_tree = RB_ROOT;
+
+ spin_lock(&ksm_mmlist_lock);
+ slot = list_entry(slot->mm_list.next, struct mm_slot, mm_list);
+ ksm_scan.mm_slot = slot;
+ spin_unlock(&ksm_mmlist_lock);
+next_mm:
+ ksm_scan.address = 0;
+ ksm_scan.rmap_item = list_entry(&slot->rmap_list,
+ struct rmap_item, link);
+ }
+
+ mm = slot->mm;
+ down_read(&mm->mmap_sem);
+ vma = find_vma(mm, ksm_scan.address);
+ while (vma && !(vma->vm_flags & VM_MERGEABLE))
+ vma = vma->vm_next;
+
+ if (vma) {
+ if (ksm_scan.address < vma->vm_start)
+ ksm_scan.address = vma->vm_start;
+ rmap_item = get_next_rmap_item(ksm_scan.address, mm,
+ &slot->rmap_list, &ksm_scan.rmap_item->link);
+ up_read(&mm->mmap_sem);
+
+ if (rmap_item) {
+ ksm_scan.rmap_item = rmap_item;
+ ksm_scan.address += PAGE_SIZE; /* ready for next */
+ }
+ return rmap_item;
+ }
+
+ if (!ksm_scan.address) {
+ /*
+ * We've completed a full scan of all vmas, holding mmap_sem
+ * throughout, and found no VM_MERGEABLE: so do the same as
+ * __ksm_exit does to remove this mm from all our lists now.
+ */
+ remove_mm_from_lists(mm);
+ up_read(&mm->mmap_sem);
+ slot = ksm_scan.mm_slot;
+ if (slot != &ksm_mm_head)
+ goto next_mm;
+ return NULL;
+ }
+
+ /*
+ * Nuke all the rmap_items that are above this current rmap:
+ * because there were no VM_MERGEABLE vmas with such addresses.
+ */
+ update_rmap_list(&slot->rmap_list, &ksm_scan.rmap_item->link);
+ up_read(&mm->mmap_sem);
+
+ spin_lock(&ksm_mmlist_lock);
+ slot = list_entry(slot->mm_list.next, struct mm_slot, mm_list);
+ ksm_scan.mm_slot = slot;
+ spin_unlock(&ksm_mmlist_lock);
+
+ /* Repeat until we've completed scanning the whole list */
+ if (slot != &ksm_mm_head)
+ goto next_mm;
+
+ /*
+ * Bump seqnr here rather than at top, so that __ksm_exit
+ * can skip rb_erase on unstable tree until we run again.
+ */
+ ksm_scan.seqnr++;
+ return NULL;
+}
+
+/**
+ * ksm_do_scan - the ksm scanner main worker function.
+ * @scan_npages: number of pages we want to scan before we return
+ * (the scan position is kept in the global ksm_scan cursor).
+ */
+static void ksm_do_scan(unsigned int scan_npages)
+{
+ struct page *page[1];
+ struct mm_struct *mm;
+ struct rmap_item *rmap_item;
+ unsigned long addr;
+ int val;
+
+ while (scan_npages--) {
+ cond_resched();
+
+ rmap_item = scan_get_next_rmap_item();
+ if (!rmap_item)
+ return;
+
+ mm = rmap_item->mm;
+ addr = rmap_item->address;
+
+ /*
+ * If page is not present, don't waste time faulting it in.
+ */
+ down_read(&mm->mmap_sem);
+ if (is_present_pte(mm, addr)) {
+ val = get_user_pages(current, mm, addr, 1, 0, 0, page,
+ NULL);
+ up_read(&mm->mmap_sem);
+ if (val == 1) {
+ if (!PageKsm(page[0]) ||
+ !rmap_item->stable_tree)
+ cmp_and_merge_page(page[0], rmap_item);
+ put_page(page[0]);
+ }
+ } else {
+ up_read(&mm->mmap_sem);
+ }
+ }
+}
+
+static int ksm_scan_thread(void *nothing)
+{
+ while (!kthread_should_stop()) {
+ if (ksm_run & KSM_RUN_MERGE) {
+ mutex_lock(&ksm_thread_mutex);
+ ksm_do_scan(ksm_thread_pages_to_scan);
+ mutex_unlock(&ksm_thread_mutex);
+ schedule_timeout_interruptible(
+ msecs_to_jiffies(ksm_thread_sleep_millisecs));
+ } else {
+ wait_event_interruptible(ksm_thread_wait,
+ (ksm_run & KSM_RUN_MERGE) ||
+ kthread_should_stop());
+ }
+ }
+ return 0;
+}
+
+int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end, int advice, unsigned long *vm_flags)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ struct mm_slot *mm_slot;
+
+ switch (advice) {
+ case MADV_MERGEABLE:
+ /*
+ * Be somewhat over-protective for now!
+ */
+ if (*vm_flags & (VM_MERGEABLE | VM_SHARED | VM_MAYSHARE |
+ VM_PFNMAP | VM_IO | VM_DONTEXPAND |
+ VM_RESERVED | VM_HUGETLB | VM_INSERTPAGE |
+ VM_MIXEDMAP | VM_SAO))
+ return 0; /* just ignore the advice */
+
+ if (vma->vm_file || vma->vm_ops)
+ return 0; /* just ignore the advice */
+
+ if (!test_bit(MMF_VM_MERGEABLE, &mm->flags))
+ if (__ksm_enter(mm) < 0)
+ return -EAGAIN;
+
+ *vm_flags |= VM_MERGEABLE;
+ break;
+
+ case MADV_UNMERGEABLE:
+ if (!(*vm_flags & VM_MERGEABLE))
+ return 0; /* just ignore the advice */
+
+ spin_lock(&ksm_mmlist_lock);
+ mm_slot = get_mm_slot(mm);
+ spin_unlock(&ksm_mmlist_lock);
+
+ unmerge_slot_rmap_items(mm_slot, start, end);
+
+ *vm_flags &= ~VM_MERGEABLE;
+ break;
+ }
+
+ return 0;
+}
+
+int __ksm_enter(struct mm_struct *mm)
+{
+ struct mm_slot *mm_slot = alloc_mm_slot();
+ if (!mm_slot)
+ return -ENOMEM;
+
+ spin_lock(&ksm_mmlist_lock);
+ insert_to_mm_slots_hash(mm, mm_slot);
+ /*
+ * Insert just behind the scanning cursor, to let the area settle
+ * down a little; when fork is followed by immediate exec, we don't
+ * want ksmd to waste time setting up and tearing down an rmap_list.
+ */
+ list_add_tail(&mm_slot->mm_list, &ksm_scan.mm_slot->mm_list);
+ spin_unlock(&ksm_mmlist_lock);
+
+ set_bit(MMF_VM_MERGEABLE, &mm->flags);
+ return 0;
+}
+
+void __ksm_exit(struct mm_struct *mm)
+{
+ /*
+ * This process is exiting: doesn't hold and doesn't need mmap_sem;
+ * but we do need to exclude ksmd and other exiters while we modify
+ * the various lists and trees.
+ */
+ mutex_lock(&ksm_thread_mutex);
+ remove_mm_from_lists(mm);
+ mutex_unlock(&ksm_thread_mutex);
+}
+
+#define KSM_ATTR_RO(_name) \
+ static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
+#define KSM_ATTR(_name) \
+ static struct kobj_attribute _name##_attr = \
+ __ATTR(_name, 0644, _name##_show, _name##_store)
+
+static ssize_t sleep_millisecs_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%u\n", ksm_thread_sleep_millisecs);
+}
+
+static ssize_t sleep_millisecs_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned long msecs;
+ int err;
+
+ err = strict_strtoul(buf, 10, &msecs);
+ if (err || msecs > UINT_MAX)
+ return -EINVAL;
+
+ ksm_thread_sleep_millisecs = msecs;
+
+ return count;
+}
+KSM_ATTR(sleep_millisecs);
+
+static ssize_t pages_to_scan_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%u\n", ksm_thread_pages_to_scan);
+}
+
+static ssize_t pages_to_scan_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ int err;
+ unsigned long nr_pages;
+
+ err = strict_strtoul(buf, 10, &nr_pages);
+ if (err || nr_pages > UINT_MAX)
+ return -EINVAL;
+
+ ksm_thread_pages_to_scan = nr_pages;
+
+ return count;
+}
+KSM_ATTR(pages_to_scan);
+
+static ssize_t run_show(struct kobject *kobj, struct kobj_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "%u\n", ksm_run);
+}
+
+static ssize_t run_store(struct kobject *kobj, struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ int err;
+ unsigned long flags;
+
+ err = strict_strtoul(buf, 10, &flags);
+ if (err || flags > UINT_MAX)
+ return -EINVAL;
+ if (flags > KSM_RUN_UNMERGE)
+ return -EINVAL;
+
+ /*
+ * KSM_RUN_MERGE sets ksmd running, and 0 stops it running.
+ * KSM_RUN_UNMERGE stops it running and unmerges all rmap_items,
+ * breaking COW to free the kernel_pages_allocated (but leaves
+ * mm_slots on the list for when ksmd may be set running again).
+ */
+
+ mutex_lock(&ksm_thread_mutex);
+ ksm_run = flags;
+ if (flags & KSM_RUN_UNMERGE)
+ unmerge_and_remove_all_rmap_items();
+ mutex_unlock(&ksm_thread_mutex);
+
+ if (flags & KSM_RUN_MERGE)
+ wake_up_interruptible(&ksm_thread_wait);
+
+ return count;
+}
+KSM_ATTR(run);
+
+static ssize_t pages_shared_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%lu\n",
+ ksm_pages_shared - ksm_kernel_pages_allocated);
+}
+KSM_ATTR_RO(pages_shared);
+
+static ssize_t kernel_pages_allocated_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "%lu\n", ksm_kernel_pages_allocated);
+}
+KSM_ATTR_RO(kernel_pages_allocated);
+
+static ssize_t max_kernel_pages_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ int err;
+ unsigned long nr_pages;
+
+ err = strict_strtoul(buf, 10, &nr_pages);
+ if (err)
+ return -EINVAL;
+
+ ksm_max_kernel_pages = nr_pages;
+
+ return count;
+}
+
+static ssize_t max_kernel_pages_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%lu\n", ksm_max_kernel_pages);
+}
+KSM_ATTR(max_kernel_pages);
+
+static struct attribute *ksm_attrs[] = {
+ &sleep_millisecs_attr.attr,
+ &pages_to_scan_attr.attr,
+ &run_attr.attr,
+ &pages_shared_attr.attr,
+ &kernel_pages_allocated_attr.attr,
+ &max_kernel_pages_attr.attr,
+ NULL,
+};
+
+static struct attribute_group ksm_attr_group = {
+ .attrs = ksm_attrs,
+ .name = "ksm",
+};
+
+static int __init ksm_init(void)
+{
+ struct task_struct *ksm_thread;
+ int ret;
+
+ ret = ksm_slab_init();
+ if (ret)
+ goto out;
+
+ ret = mm_slots_hash_init();
+ if (ret)
+ goto out_free1;
+
+ ksm_thread = kthread_run(ksm_scan_thread, NULL, "ksmd");
+ if (IS_ERR(ksm_thread)) {
+ printk(KERN_ERR "ksm: creating kthread failed\n");
+ ret = PTR_ERR(ksm_thread);
+ goto out_free2;
+ }
+
+ ret = sysfs_create_group(mm_kobj, &ksm_attr_group);
+ if (ret) {
+ printk(KERN_ERR "ksm: register sysfs failed\n");
+ goto out_free3;
+ }
+
+ return 0;
+
+out_free3:
+ kthread_stop(ksm_thread);
+out_free2:
+ mm_slots_hash_free();
+out_free1:
+ ksm_slab_free();
+out:
+ return ret;
+}
+module_init(ksm_init)
--- 2.6.31-rc1/mm/madvise.c 2009-06-25 05:18:10.000000000 +0100
+++ madv_ksm/mm/madvise.c 2009-06-29 14:10:54.000000000 +0100
@@ -11,6 +11,7 @@
#include <linux/mempolicy.h>
#include <linux/hugetlb.h>
#include <linux/sched.h>
+#include <linux/ksm.h>

/*
* Any behaviour which results in changes to the vma->vm_flags needs to
@@ -41,7 +42,7 @@ static long madvise_behavior(struct vm_a
struct mm_struct * mm = vma->vm_mm;
int error = 0;
pgoff_t pgoff;
- int new_flags = vma->vm_flags;
+ unsigned long new_flags = vma->vm_flags;

switch (behavior) {
case MADV_NORMAL:
@@ -57,8 +58,18 @@ static long madvise_behavior(struct vm_a
new_flags |= VM_DONTCOPY;
break;
case MADV_DOFORK:
+ if (vma->vm_flags & VM_IO) {
+ error = -EINVAL;
+ goto out;
+ }
new_flags &= ~VM_DONTCOPY;
break;
+ case MADV_MERGEABLE:
+ case MADV_UNMERGEABLE:
+ error = ksm_madvise(vma, start, end, behavior, &new_flags);
+ if (error)
+ goto out;
+ break;
}

if (new_flags == vma->vm_flags) {
@@ -211,37 +222,16 @@ static long
madvise_vma(struct vm_area_struct *vma, struct vm_area_struct **prev,
unsigned long start, unsigned long end, int behavior)
{
- long error;
-
switch (behavior) {
- case MADV_DOFORK:
- if (vma->vm_flags & VM_IO) {
- error = -EINVAL;
- break;
- }
- case MADV_DONTFORK:
- case MADV_NORMAL:
- case MADV_SEQUENTIAL:
- case MADV_RANDOM:
- error = madvise_behavior(vma, prev, start, end, behavior);
- break;
case MADV_REMOVE:
- error = madvise_remove(vma, prev, start, end);
- break;
-
+ return madvise_remove(vma, prev, start, end);
case MADV_WILLNEED:
- error = madvise_willneed(vma, prev, start, end);
- break;
-
+ return madvise_willneed(vma, prev, start, end);
case MADV_DONTNEED:
- error = madvise_dontneed(vma, prev, start, end);
- break;
-
+ return madvise_dontneed(vma, prev, start, end);
default:
- BUG();
- break;
+ return madvise_behavior(vma, prev, start, end, behavior);
}
- return error;
}

static int
@@ -256,12 +246,17 @@ madvise_behavior_valid(int behavior)
case MADV_REMOVE:
case MADV_WILLNEED:
case MADV_DONTNEED:
+#ifdef CONFIG_KSM
+ case MADV_MERGEABLE:
+ case MADV_UNMERGEABLE:
+#endif
return 1;

default:
return 0;
}
}
+
/*
* The madvise(2) system call.
*
@@ -286,6 +281,12 @@ madvise_behavior_valid(int behavior)
* so the kernel can free resources associated with it.
* MADV_REMOVE - the application wants to free up the given range of
* pages and associated backing store.
+ * MADV_DONTFORK - omit this area from child's address space when forking:
+ * typically, to avoid COWing pages pinned by get_user_pages().
+ * MADV_DOFORK - cancel MADV_DONTFORK: no longer omit this area when forking.
+ * MADV_MERGEABLE - the application recommends that KSM try to merge pages in
+ * this area with pages of identical content from other such areas.
+ * MADV_UNMERGEABLE- cancel MADV_MERGEABLE: no longer merge pages with others.
*
* return values:
* zero - success
--- 2.6.31-rc1/mm/memory.c 2009-06-25 05:18:10.000000000 +0100
+++ madv_ksm/mm/memory.c 2009-06-29 14:10:54.000000000 +0100
@@ -2115,9 +2115,14 @@ gotten:
* seen in the presence of one thread doing SMC and another
* thread doing COW.
*/
- ptep_clear_flush_notify(vma, address, page_table);
+ ptep_clear_flush(vma, address, page_table);
page_add_new_anon_rmap(new_page, vma, address);
- set_pte_at(mm, address, page_table, entry);
+ /*
+ * We call the notify macro here because, when using secondary
+ * mmu page tables (such as kvm shadow page tables), we want the
+ * new page to be mapped directly into the secondary page table.
+ */
+ set_pte_at_notify(mm, address, page_table, entry);
update_mmu_cache(vma, address, entry);
if (old_page) {
/*
--- 2.6.31-rc1/mm/mmap.c 2009-06-25 05:18:10.000000000 +0100
+++ madv_ksm/mm/mmap.c 2009-06-29 14:10:54.000000000 +0100
@@ -659,9 +659,6 @@ again: remove_next = 1 + (end > next->
validate_mm(mm);
}

-/* Flags that can be inherited from an existing mapping when merging */
-#define VM_MERGEABLE_FLAGS (VM_CAN_NONLINEAR)
-
/*
* If the vma has a ->close operation then the driver probably needs to release
* per-vma resources, so we don't attempt to merge those.
@@ -669,7 +666,8 @@ again: remove_next = 1 + (end > next->
static inline int is_mergeable_vma(struct vm_area_struct *vma,
struct file *file, unsigned long vm_flags)
{
- if ((vma->vm_flags ^ vm_flags) & ~VM_MERGEABLE_FLAGS)
+ /* VM_CAN_NONLINEAR may get set later by f_op->mmap() */
+ if ((vma->vm_flags ^ vm_flags) & ~VM_CAN_NONLINEAR)
return 0;
if (vma->vm_file != file)
return 0;
--- 2.6.31-rc1/mm/mmu_notifier.c 2008-10-09 23:13:53.000000000 +0100
+++ madv_ksm/mm/mmu_notifier.c 2009-06-29 14:10:54.000000000 +0100
@@ -99,6 +99,26 @@ int __mmu_notifier_clear_flush_young(str
return young;
}

+void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
+ pte_t pte)
+{
+ struct mmu_notifier *mn;
+ struct hlist_node *n;
+
+ rcu_read_lock();
+ hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_mm->list, hlist) {
+ if (mn->ops->change_pte)
+ mn->ops->change_pte(mn, mm, address, pte);
+ /*
+ * Some drivers don't have change_pte,
+ * so we must call invalidate_page in that case.
+ */
+ else if (mn->ops->invalidate_page)
+ mn->ops->invalidate_page(mn, mm, address);
+ }
+ rcu_read_unlock();
+}
+
void __mmu_notifier_invalidate_page(struct mm_struct *mm,
unsigned long address)
{


2009-06-30 09:50:15

by Izik Eidus

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

Hugh Dickins wrote:
> Hi Izik,
>
Hello

> Thanks a lot for giving me some space. As I proposed in private mail
> last week, here is my current rollup of the madvise version of KSM.
>
> The patch is against 2.6.31-rc1, but the work is based upon your
> "RFC - ksm api change into madvise" from 14 May: omitting for now
> your 4/4 to apply KSM to other processes, but including Andrea's
> two rmap_item fixes from 3 June.
>
> This is not a patch to go into any tree yet: it needs to be split
> up and reviewed and argued over and parts reverted etc. But it is
> good for some testing, and it is good for you to take a look at,
> diff against what you have and say, perhaps: right, please split
> this up into this and this and this kind of change, so we can
> examine it more closely; or, perhaps, you won't like my direction
> at all and want a fresh start.
>
> The changes outside of mm/ksm.c shouldn't cause much controversy.
> Perhaps we'll want to send in the arch mman.h additions, and the
> madvise interface, and your mmu_notifier mods, along with a dummy
> mm/ksm.c, quite early; while we continue to discuss what's in ksm.c.
>
> It'll be hard for you not to get irritated by all my trivial cleanups
> there, sorry.

Oh, I'm actually quite happy about that - nothing I was proud of was found there :)

> I find it best when I'm working to let myself do such
> tidying up, then only at the end go back over to decide whether it's
> justified or not. And my correction of typos in comments etc. is
> fairly random: sometimes I've just corrected one word, sometimes
> I've rewritten a comment, but lots I've not read through yet.
>
> A lot of the change came about because I couldn't run the loads
> I wanted, they'd OOM because of the way KSM had a hold on any mm it
> was advised of (so the mm couldn't exit and free up its pages until
> KSM got there). I know you were dissatisfied with that too, but
> perhaps you've solved it differently by now.
>

I wanted to switch to mm_count instead of mm_users, plus some safety checks;
but your way of not taking any reference count on the mm at all is for sure
much better!

> I've plenty more to do: still haven't really focussed in on mremap
> move, and races when the vma we expect to be VM_MERGEABLE is actually
> something else by the time we get mmap_sem for get_user_pages.

Considering that madvise runs with mmap_sem held for write, isn't it
enough just to check the VM_MERGEABLE flag?

> But I
> don't think there's any show-stopper there, just a little tightening
> needed. The rollup below is a good staging post, I think, and much
> better than the /dev/ksm version that used to be in mmotm.
>

I agree, I have to admit the interface now looks much better; moreover, the
fact that it is tied to the vmas allowed the code to be somewhat simpler
(such as the rmap_items handling).
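
For reference, here is a minimal userspace sketch of the new interface
(illustrative only, not part of the patch; it assumes headers that already
define the new advice values):

#include <sys/mman.h>

/*
 * Map an anonymous region and advise the kernel that ksmd may merge
 * identical pages within it.  MADV_UNMERGEABLE would undo the advice
 * for the same range.
 */
static void *alloc_mergeable(size_t len)
{
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED)
		return NULL;
	madvise(buf, len, MADV_MERGEABLE);
	return buf;
}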

> Though I haven't even begun to worry about how KSM interacts with
> page migration and mem cgroups and Andi & Wu's HWPOISONous pages.
>

About page migration - right now it should fail when trying to migrate a
KSM page:

/* Establish migration ptes or remove ptes */
try_to_unmap(page, 1);

if (!page_mapped(page))
	rc = move_to_new_page(newpage, page);


So as I see it, the solution for this case is the same as the solution for
the swapping problem of the KSM pages: we need something such as external
rmap callbacks, to make the rmap code aware of the KSM virtual mappings of
the pages (we can use the information in KSM's own data structures, such as
the stable_tree, to track the virtual addresses that point into this page).

So for page migration I think we should add that support when we add support
for swapping, probably one release after we first get KSM merged...

And about cgroups, again, I think swapping is the main issue there; for now
we only have max_kernel_pages to control the number of unswappable pages
allocated by KSM.



About your patch:
Excellent changes! The direction you took it with madvise is much better
than the previous interface.
Moreover, all your code style changes and "clean ups" to my code are 100%
justified and very welcome!
Thanks a lot for your work on this area; I am very pleased with the results...
Just a few comments below (and there really are just a few, as I like
everything else).


> Hugh
> ---
>

--snip--

>
> + struct page *newpage, pte_t orig_pte, pgprot_t prot)
> +{
> + struct mm_struct *mm = vma->vm_mm;
> + pgd_t *pgd;
> + pud_t *pud;
> + pmd_t *pmd;
> + pte_t *ptep;
> + spinlock_t *ptl;
> + unsigned long addr;
> + int ret = -EFAULT;
> +
> + BUG_ON(PageAnon(newpage));
> +
> + addr = page_address_in_vma(oldpage, vma);
> + if (addr == -EFAULT)
> + goto out;
> +
> + pgd = pgd_offset(mm, addr);
> + if (!pgd_present(*pgd))
> + goto out;
> +
> + pud = pud_offset(pgd, addr);
> + if (!pud_present(*pud))
> + goto out;
> +
> + pmd = pmd_offset(pud, addr);
> + if (!pmd_present(*pmd))
> + goto out;
> +
> + ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
> + if (!pte_same(*ptep, orig_pte)) {
> + pte_unmap_unlock(ptep, ptl);
> + goto out;
> + }
> +
> + ret = 0;
> + get_page(newpage);
> + page_add_file_rmap(newpage);
> +
> + flush_cache_page(vma, addr, pte_pfn(*ptep));
> + ptep_clear_flush(vma, addr, ptep);
> + set_pte_at_notify(mm, addr, ptep, mk_pte(newpage, prot));
> +
> + page_remove_rmap(oldpage);
> + if (PageAnon(oldpage)) {
> + dec_mm_counter(mm, anon_rss);
> + inc_mm_counter(mm, file_rss);
> + }
>

So now that replace_page is embedded inside ksm.c, I guess we don't need
the if (PageAnon()) check...?

> + put_page(oldpage);
> +
> + pte_unmap_unlock(ptep, ptl);
> +out:
> + return ret;
> +}
> +
> +/*
>
>


-- snip --


> +static void cmp_and_merge_page(struct page *page, struct rmap_item *rmap_item)
> +{
> + struct page *page2[1];
> + struct rmap_item *tree_rmap_item;
> + unsigned int checksum;
> + int ret;
> +
> + if (rmap_item->stable_tree)
> + remove_rmap_item_from_tree(rmap_item);
> +
> + /* We first start with searching the page inside the stable tree */
> + tree_rmap_item = stable_tree_search(page, page2, rmap_item);
> + if (tree_rmap_item) {
> + BUG_ON(!tree_rmap_item->tree_item);
> +
> + if (page == page2[0]) { /* forked */
> + ksm_pages_shared++;
> + ret = 0;
>

So here we increase ksm_pages_shared, but how would we decrease it?
Shouldn't we map the rmap_item to the stable_tree item, and add this
virtual address into the linked list of the stable tree node?
(So that when remove_rmap_item() runs we will be able to decrease the
number...)


> + } else
> + ret = try_to_merge_two_pages_noalloc(rmap_item->mm,
> + page, page2[0],
> + rmap_item->address);
> + put_page(page2[0]);
> +
> + if (!ret) {
> + /*
> + * The page was successfully merged, let's insert its
> + * rmap_item into the stable tree.
> + */
> + insert_to_stable_tree_list(rmap_item, tree_rmap_item);
> + }
> + return;
> + }
> +
> + /*
> + * A ksm page might have got here by fork or by mremap move, but
> + * its other references have already been removed from the tree.
> + */
> + if (PageKsm(page))
> + break_cow(rmap_item->mm, rmap_item->address);
> +
> + /*
> + * In case the hash value of the page was changed from the last time we
> + * have calculated it, this page to be changed frequely, therefore we
> + * don't want to insert it to the unstable tree, and we don't want to
> + * waste our time to search if there is something identical to it there.
> + */
> + checksum = calc_checksum(page);
> + if (rmap_item->oldchecksum != checksum) {
> + rmap_item->oldchecksum = checksum;
> + return;
> + }
> +
> + tree_rmap_item = unstable_tree_search_insert(page, page2, rmap_item);
> + if (tree_rmap_item) {
> + struct tree_item *tree_item;
> + struct mm_struct *tree_mm;
> + unsigned long tree_addr;
> +
> + tree_item = tree_rmap_item->tree_item;
> + tree_mm = tree_rmap_item->mm;
> + tree_addr = tree_rmap_item->address;
> +
> + ret = try_to_merge_two_pages_alloc(rmap_item->mm, page, tree_mm,
> + page2[0], rmap_item->address,
> + tree_addr);
> + /*
> + * As soon as we successfully merge this page, we want to remove
> + * the rmap_item object of the page that we have merged with
> + * from the unstable_tree and instead insert it as a new stable
> + * tree node.
> + */
> + if (!ret) {
> + rb_erase(&tree_item->node, &root_unstable_tree);
> + /*
> + * If we fail to insert the page into the stable tree,
> + * we will have 2 virtual addresses that are pointing
> + * to a KsmPage left outside the stable tree,
> + * in which case we need to break_cow on both.
> + */
> + if (stable_tree_insert(page2[0], tree_item,
> + tree_rmap_item) == 0) {
> + insert_to_stable_tree_list(rmap_item,
> + tree_rmap_item);
> + } else {
> + free_tree_item(tree_item);
> + tree_rmap_item->tree_item = NULL;
> + break_cow(tree_mm, tree_addr);
> + break_cow(rmap_item->mm, rmap_item->address);
> + ksm_pages_shared -= 2;
>

Much better handling than my kpage_outside_tree!


> + }
> + }
> +
> + put_page(page2[0]);
> + }
> +}
> +
> +static struct mm_slot *get_mm_slot(struct mm_struct *mm)
> +{
> + struct mm_slot *mm_slot;
> + struct hlist_head *bucket;
> + struct hlist_node *node;
> +
> + bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
> + % nmm_slots_hash];
> + hlist_for_each_entry(mm_slot, node, bucket, link) {
> + if (mm == mm_slot->mm)
> + return mm_slot;
> + }
> + return NULL;
> +}
> +
> +s

Great / excellent work, Hugh! I really like the result; no need for you to
split it, I have just walked through the code again and I like it.
From the perspective of features, I don't think I want to change anything
for the merge release; as for migration/cgroups and all friends, I think the
swapping work that will need to be done for KSM will solve their problems as
well, at least from the infrastructure point of view.

I will run it on my server and try to heavy-load it...

2009-06-30 15:58:18

by Hugh Dickins

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

Many thanks for your speedy and generous response: I'm glad you like it.

On Tue, 30 Jun 2009, Izik Eidus wrote:
> Hugh Dickins wrote:
>
> > I've plenty more to do: still haven't really focussed in on mremap
> > move, and races when the vma we expect to be VM_MERGEABLE is actually
> > something else by the time we get mmap_sem for get_user_pages.
>
> Considering the fact that the madvise run with mmap_sem(write)
> isn't it enough just to check the VM_MERGEABLE flag?

That is most of it, yes: though really we'd want that check down
inside __get_user_pages(), so that it wouldn't proceed any further
if by then the vma is not VM_MERGEABLE. GUP's VM_IO | VM_PFNMAP check
does already keep it away from the most dangerous areas, but we also
don't want it to touch VM_HUGETLB ones (faulting in huge pages nobody
wants), nor any !VM_MERGEABLE really.

However, rather than adding another flag to get_user_pages(), we
probably want to craft KSM's own substitute for it instead: a lot
of what GUP does (find_extend_vma, in_gate_area, follow_hugetlb_page,
handle_mm_fault) is stuff you actively do NOT want - all you want,
I think, is find_vma + VM_MERGEABLE check + follow_page. And you
might have gone that way already, except follow_page wasn't EXPORTed,
and you were at that time supporting KSM as a loadable module.
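
To illustrate, a minimal sketch of the kind of helper I mean (not in the
rollup, and the name is made up; assume the caller holds mmap_sem for read):

/*
 * Sketch only: the find_vma + VM_MERGEABLE check + follow_page
 * substitute for get_user_pages() described above.
 */
static struct page *get_mergeable_page(struct mm_struct *mm, unsigned long addr)
{
	struct vm_area_struct *vma;
	struct page *page;

	vma = find_vma(mm, addr);
	if (!vma || vma->vm_start > addr)
		return NULL;
	if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
		return NULL;

	page = follow_page(vma, addr, FOLL_GET);
	if (!page)
		return NULL;
	if (!PageAnon(page)) {
		/* only anonymous pages are candidates for merging */
		put_page(page);
		return NULL;
	}
	return page;
}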

But that isn't the whole of it. I don't think there's any other way
to do it, but with mremap move you can move one VM_MERGEABLE area to
where another VM_MERGEABLE area was a moment ago, placing KSM pages
belonging to the one area where KSM pages of the other are expected.
Can't you? I don't think that would actually cause data corruption,
but it could badly poison the stable tree (memcmp'ing against a page
of data which is not what's expected at that point in the tree),
rendering KSM close to useless thereafter. I think.

The simplest answer to that is to prohibit mremap move on VM_MERGEABLE
areas. We already have atypical prohibitions in mremap e.g. it's not
allowed to span vmas; and I don't think it would limit anybody's real
life use of KSM. However, I don't like that solution because it gets
in the way of things like my testing with all areas VM_MERGEABLE, or
your applying mergeability to another process address space: if we
make some area mergeable, then later the app happens to decide it
wants to move that area, it would get an unexpected failure where
none normally occurs. So, I'll probably want to allow mremap move,
but add a ksm_mremap interface to rearrange the rmap_items at the
same time (haven't considered locking yet, that's often the problem).

> About page migration - right now it should fail when trying to migrate
> ksmpage:
>
> /* Establish migration ptes or remove ptes */
> try_to_unmap(page, 1);
>
> if (!page_mapped(page))
> rc = move_to_new_page(newpage, page);
>
>
> So as I see it, the soultion for this case is the same soultion as for the
> swapping problem of the ksm pages...:
> We need something such as extrnal rmap callbacks to make the rmap code be
> aware of the ksm virtual mappings of the pages - (we can use our data
> structures information inside ksm such as the stable_tree to track the virtual
> addresses that point into this page)
>
> So about the page migration i think we need to add support to it, when we add
> support of swapping, probably one release after we first get ksm merged...
>
> And about cgroups, again, i think swapping is main issue for this, for now we
> only have max_kernel_page_alloc to control the number of unswappable pages
> allocated by ksm.

Yes, I woke this morning thinking swapping the KSM pages should be a lot
easier than I thought originally. Not much more than plumbing in a third
alternative to try_to_unmap_file and try_to_unmap_anon, one which goes off
to KSM to run down the appropriate list hanging off the stable tree.
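
Roughly, the plumbing I have in mind looks like this (sketch only, nothing
like it is in the rollup; try_to_unmap_ksm is hypothetical, and the existing
helpers' argument lists are simplified here):

/*
 * Sketch: a third alternative in try_to_unmap(), alongside the anon and
 * file cases, handing KSM pages to ksm.c to walk the rmap_items hanging
 * off the page's stable tree node.
 */
int try_to_unmap(struct page *page, int migration)
{
	int ret;

	BUG_ON(!PageLocked(page));

	if (PageKsm(page))
		ret = try_to_unmap_ksm(page, migration);	/* hypothetical */
	else if (PageAnon(page))
		ret = try_to_unmap_anon(page, migration);
	else
		ret = try_to_unmap_file(page, migration);

	if (ret != SWAP_MLOCK && !page_mapped(page))
		ret = SWAP_SUCCESS;
	return ret;
}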

I'd intended it for other use, but I think now we'll likely give you
PAGE_MAPPING_KSM 2, so you can fit a tree_item pointer and page flag
into page_mapping there, instead of anon_vma pointer and PageAnon flag.
Or more likely, in addition to PageAnon flag.
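
For illustration, one way such a flag might look (a sketch of the idea only,
not code from this rollup; the final encoding may well differ):

/*
 * Sketch: steal a second low bit of page->mapping, next to
 * PAGE_MAPPING_ANON, to mark KSM pages; the rest of the pointer could
 * then hold a tree_item instead of an anon_vma.
 */
#define PAGE_MAPPING_KSM	2	/* proposed; PAGE_MAPPING_ANON is 1 */

static inline int PageKsm(struct page *page)
{
	return ((unsigned long)page->mapping & PAGE_MAPPING_KSM) != 0;
}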

It's very tempting to get into KSM swapping right now,
but I'll restrain myself.

> [replace_page ]
> > +
> > + BUG_ON(PageAnon(newpage));
> > ...
> > +
> > + page_remove_rmap(oldpage);
> > + if (PageAnon(oldpage)) {
> > + dec_mm_counter(mm, anon_rss);
> > + inc_mm_counter(mm, file_rss);
> > + }
> >
>
> So now that replace_page is embedded inside ksm.c, i guess we dont need the if
> (PageAnon() check...) ?

The first time I read you, I thought you were suggesting to remove the
BUG_ON(PageAnon(newpage)) at the head of replace_page: yes, now that's
static within ksm.c, it's just a waste of space, better removed.

But you're suggesting remove this check around the dec/inc_mm_counter.
Well, of course, you're absolutely right; but I'd nonetheless actually
prefer to keep that test for the moment, as a flag of something odd to
revisit. Perhaps you'd prefer a comment "/* Something odd to revisit */"
instead ;) But if so, let's keep PageAnon in the text of the comment:
I know I'm going to want to go back to check usages of PageAnon here,
and this is a block I'll want to think about.

When testing, I got quite confused for a while when /proc/meminfo
showed my file pages going up and my anon pages going down. Yes,
that block is necessary to keep the stats right for unmap and exit;
but I think, the more so when we move on to swapping, that we'd
prefer KSM-substituted-pages to remain counted as anon rather than
as file, in the bulk of the mm which is unaware of KSM pages.
Something odd to revisit.

> > +static void cmp_and_merge_page(struct page *page, struct rmap_item
> > *rmap_item)
> > +{
> > + struct page *page2[1];
> > + struct rmap_item *tree_rmap_item;
> > + unsigned int checksum;
> > + int ret;
> > +
> > + if (rmap_item->stable_tree)
> > + remove_rmap_item_from_tree(rmap_item);
> > +
> > + /* We first start with searching the page inside the stable tree */
> > + tree_rmap_item = stable_tree_search(page, page2, rmap_item);
> > + if (tree_rmap_item) {
> > + BUG_ON(!tree_rmap_item->tree_item);
> > +
> > + if (page == page2[0]) { /* forked */
> > + ksm_pages_shared++;
> > + ret = 0;
>
> So here we increase the ksm_pages_shared, but how would we decrease it?
> Shouldnt we map the rmap_item to be stable_tree item?, and add this virtual
> address into the linked list of the stable tree node?
> (so when remove_rmap_item() will run we will be able to decrease the
> number...)

You had me worried, and I wondered how the numbers had worked out
correctly in testing; and this was a rather recent mod which could
easily be wrong. But it's okay, isn't it? In the very code which
you included below, there's the insert_to_stable_tree_list which
does exactly what you want - doesn't it?

> > + } else
> > + ret = try_to_merge_two_pages_noalloc(rmap_item->mm,
> > + page, page2[0],
> > +
> > rmap_item->address);
> > + put_page(page2[0]);
> > +
> > + if (!ret) {
> > + /*
> > + * The page was successfully merged, let's insert its
> > + * rmap_item into the stable tree.
> > + */
> > + insert_to_stable_tree_list(rmap_item, tree_rmap_item);
> > + }
> > + return;
> > + }

> > + /*
> > + * If we fail to insert the page into the stable tree,
> > + * we will have 2 virtual addresses that are pointing
> > + * to a KsmPage left outside the stable tree,
> > + * in which case we need to break_cow on both.
> > + */
> > + if (stable_tree_insert(page2[0], tree_item,
> > + tree_rmap_item) == 0) {
> > + insert_to_stable_tree_list(rmap_item,
> > + tree_rmap_item);
> > + } else {
> > + free_tree_item(tree_item);
> > + tree_rmap_item->tree_item = NULL;
> > + break_cow(tree_mm, tree_addr);
> > + break_cow(rmap_item->mm, rmap_item->address);
> > + ksm_pages_shared -= 2;
> >
>
> Much better handling than my kpage_outside_tree !

You surprise me! Although I couldn't see what was wrong with doing
the break_cow()s there, I made that change pretty much as a provocation,
to force you to explain patiently why we cannot break_cow() there, but
have to go through the kpage_outside_tree, nkpage_out_tree (how I hated
that name!) dance. I thought perhaps that you'd found in testing that
fixing it all up there got ksmd into making the same "mistake" again
and again, so better to introduce a delay; then I was going to suggest
forcing checksum back to 0 instead, and adding a comment (I removed the
other, no longer stable, reason for "wait" too: thinking, by all means
reinstate, but let's have a separate patch and comment to explain it).

Surely there's some reason you did it the convoluted way originally?
Perhaps the locking was different at the time the issue first came up,
and you were not free to break_cow() here at that time? Sadly, I don't
think my testing has gone down this path even once, I hope yours does.

> Great / Excllent work Hugh!, I really like the result, no need from you to
> split it, i just have walked the code again, and i like it.
> From the perspective of features, i dont think i want to change anything for
> the merge release, about the migration/cgroup and all friends,
> I think the swapping work that will be need to be taken for ksm will solve
> their problems as well, at least from infrastructure point of view.
>
> I will run it on my server and will try to heavy load it...

Great, I'm so pleased, thank you.

Aside from the break_cows change you already noticed above, there
was one other thing I wanted to check with you, where my __ksm_enter does:
/*
* Insert just behind the scanning cursor, to let the area settle
* down a little; when fork is followed by immediate exec, we don't
* want ksmd to waste time setting up and tearing down an rmap_list.
*/
list_add_tail(&mm_slot->mm_list, &ksm_scan.mm_slot->mm_list);

Seems reasonable enough, but I've replaced the original list_add
to the head by a list_add_tail to behind the cursor. And before
I brought the cursor into it, I had a list_add_tail to the head.

Now, I prefer list_add_tail because I just find it easier to think
through the sequence using list_add_tail; but it occurred to me later
that you might have run tests which show list_add head to behave better.

For example, if you always list_add new mms to the head (and the scan
runs forwards from the head: last night I added an experimental option
to run backwards, but notice no difference), then the unstable tree
will get built up starting with the pages from the new mms, which
might jostle the tree better than always starting with the same old
mms? So I might be spoiling a careful and subtle decision you made.

Hugh

2009-06-30 18:39:19

by Izik Eidus

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

Hugh Dickins wrote:
> Many thanks for your speedy and generous response: I'm glad you like it.
>
> On Tue, 30 Jun 2009, Izik Eidus wrote:
>
>> Hugh Dickins wrote:
>>
>>
>>> I've plenty more to do: still haven't really focussed in on mremap
>>> move, and races when the vma we expect to be VM_MERGEABLE is actually
>>> something else by the time we get mmap_sem for get_user_pages.
>>>
>> Considering the fact that the madvise run with mmap_sem(write)
>> isn't it enough just to check the VM_MERGEABLE flag?
>>
>
> That is most of it, yes: though really we'd want that check down
> inside __get_user_pages(), so that it wouldn't proceed any further
> if by then the vma is not VM_MERGEABLE. GUP's VM_IO | VM_PFNMAP check
> does already keep it away from the most dangerous areas, but we also
> don't want it to touch VM_HUGETLB ones (faulting in huge pages nobody
> wants), nor any !VM_MERGEABLE really.
>

Ouch, I see what you mean! Previously the PageKsm() check made before
cmp_and_merge_page() was protecting us against everything but anonymous
pages...
I saw that you changed the condition, but I forgot about that protection!

Looking again at this line you do:
if (!PageKsm(page[0]) ||
    !rmap_item->stable_tree)
	cmp_and_merge_page(page[0], rmap_item);

How would we get into a situation where rmap_item->stable_tree is NULL,
page[0] is not an anonymous page, and we would still want to enter
cmp_and_merge_page()?

Can you explain this line some more, please? (I have looked at it before,
and thought I saw a reasonable case for it, but now looking again I can't
remember.)


> However, rather than adding another flag to get_user_pages(), we
> probably want to craft KSM's own substitute for it instead: a lot
> of what GUP does (find_extend_vma, in_gate_area, follow_hugetlb_page,
> handle_mm_fault) is stuff you actively do NOT want - all you want,
> I think, is find_vma + VM_MERGEABLE check + follow_page. And you
> might have gone that way already, except follow_page wasn't EXPORTed,
> and you were at that time supporting KSM as a loadable module.
>
> But that isn't the whole of it. I don't think there's any other way
> to do it, but with mremap move you can move one VM_MERGEABLE area to
> where another VM_MERGEABLE area was a moment ago, placing KSM pages
> belonging to the one area where KSM pages of the other are expected.
> Can't you? I don't think that would actually cause data corruption,
> but it could badly poison the stable tree (memcmp'ing against a page
> of data which is not what's expected at that point in the tree),
> rendering KSM close to useless thereafter. I think.
>
> The simplest answer to that is to prohibit mremap move on VM_MERGEABLE
> areas. We already have atypical prohibitions in mremap e.g. it's not
> allowed to span vmas; and I don't think it would limit anybody's real
> life use of KSM. However, I don't like that solution because it gets
> in the way of things like my testing with all areas VM_MERGEABLE, or
> your applying mergeability to another process address space: if we
> make some area mergeable, then later the app happens to decide it
> wants to move that area, it would get an unexpected failure where
> none normally occurs. So, I'll probably want to allow mremap move,
> but add a ksm_mremap interface to rearrange the rmap_items at the
> same time (haven't considered locking yet, that's often the problem).
>
>
>> About page migration - right now it should fail when trying to migrate
>> ksmpage:
>>
>> /* Establish migration ptes or remove ptes */
>> try_to_unmap(page, 1);
>>
>> if (!page_mapped(page))
>> rc = move_to_new_page(newpage, page);
>>
>>
>> So as I see it, the soultion for this case is the same soultion as for the
>> swapping problem of the ksm pages...:
>> We need something such as extrnal rmap callbacks to make the rmap code be
>> aware of the ksm virtual mappings of the pages - (we can use our data
>> structures information inside ksm such as the stable_tree to track the virtual
>> addresses that point into this page)
>>
>> So about the page migration i think we need to add support to it, when we add
>> support of swapping, probably one release after we first get ksm merged...
>>
>> And about cgroups, again, i think swapping is main issue for this, for now we
>> only have max_kernel_page_alloc to control the number of unswappable pages
>> allocated by ksm.
>>
>
> Yes, I woke this morning thinking swapping the KSM pages should be a lot
> easier than I thought originally. Not much more than plumbing in a third
> alternative to try_to_unmap_file and try_to_unmap_anon, one which goes off
> to KSM to run down the appropriate list hanging off the stable tree.
>
> I'd intended it for other use, but I think now we'll likely give you
> PAGE_MAPPING_KSM 2, so you can fit a tree_item pointer and page flag
> into page_mapping there, instead of anon_vma pointer and PageAnon flag.
> Or more likely, in addition to PageAnon flag.
>
> It's very tempting to get into KSM swapping right now,
> but I'll restrain myself.
>

Thank you, I really want to go step by step with this... Too many
changes would scare everyone :(

>
>> [replace_page ]
>>
>>> +
>>> + BUG_ON(PageAnon(newpage));
>>> ...
>>> +
>>> + page_remove_rmap(oldpage);
>>> + if (PageAnon(oldpage)) {
>>> + dec_mm_counter(mm, anon_rss);
>>> + inc_mm_counter(mm, file_rss);
>>> + }
>>>
>>>
>> So now that replace_page is embedded inside ksm.c, i guess we dont need the if
>> (PageAnon() check...) ?
>>
>
> The first time I read you, I thought you were suggesting to remove the
> BUG_ON(PageAnon(newpage)) at the head of replace_page: yes, now that's
> static within ksm.c, it's just a waste of space, better removed.
>
> But you're suggesting remove this check around the dec/inc_mm_counter.
> Well, of course, you're absolutely right; but I'd nonetheless actually
> prefer to keep that test for the moment, as a flag of something odd to
> revisit. Perhaps you'd prefer a comment "/* Something odd to revisit */"
> instead ;) But if so, let's keep PageAnon in the text of the comment:
> I know I'm going to want to go back to check usages of PageAnon here,
> and this is a block I'll want to think about.
>
> When testing, I got quite confused for a while when /proc/meminfo
> showed my file pages going up and my anon pages going down. Yes,
> that block is necessary to keep the stats right for unmap and exit;
> but I think, the more so when we move on to swapping, that we'd
> prefer KSM-substituted-pages to remain counted as anon rather than
> as file, in the bulk of the mm which is unaware of KSM pages.
> Something odd to revisit.
>


See my comment above: I meant that only anonymous pages should come into
this path... (I missed the fact that the || condition before the
cmp_and_merge_page() call changed the behavior and allowed file-mapped
pages to get merged.)

>
>>> +static void cmp_and_merge_page(struct page *page, struct rmap_item
>>> *rmap_item)
>>> +{
>>> + struct page *page2[1];
>>> + struct rmap_item *tree_rmap_item;
>>> + unsigned int checksum;
>>> + int ret;
>>> +
>>> + if (rmap_item->stable_tree)
>>> + remove_rmap_item_from_tree(rmap_item);
>>> +
>>> + /* We first start with searching the page inside the stable tree */
>>> + tree_rmap_item = stable_tree_search(page, page2, rmap_item);
>>> + if (tree_rmap_item) {
>>> + BUG_ON(!tree_rmap_item->tree_item);
>>> +
>>> + if (page == page2[0]) { /* forked */
>>> + ksm_pages_shared++;
>>> + ret = 0;
>>>
>> So here we increase the ksm_pages_shared, but how would we decrease it?
>> Shouldnt we map the rmap_item to be stable_tree item?, and add this virtual
>> address into the linked list of the stable tree node?
>> (so when remove_rmap_item() will run we will be able to decrease the
>> number...)
>>
>
> You had me worried, and I wondered how the numbers had worked out
> correctly in testing; and this was a rather recent mod which could
> easily be wrong. But it's okay, isn't it? In the very code which
> you included below, there's the insert_to_stable_tree_list which
> does exactly what you want - doesn't it?
>

You are right, I didn't notice that case.

>
>>> + } else
>>> + ret = try_to_merge_two_pages_noalloc(rmap_item->mm,
>>> + page, page2[0],
>>> +
>>> rmap_item->address);
>>> + put_page(page2[0]);
>>> +
>>> + if (!ret) {
>>> + /*
>>> + * The page was successfully merged, let's insert its
>>> + * rmap_item into the stable tree.
>>> + */
>>> + insert_to_stable_tree_list(rmap_item, tree_rmap_item);
>>> + }
>>> + return;
>>> + }
>>>
>
>
>>> + /*
>>> + * If we fail to insert the page into the stable tree,
>>> + * we will have 2 virtual addresses that are pointing
>>> + * to a KsmPage left outside the stable tree,
>>> + * in which case we need to break_cow on both.
>>> + */
>>> + if (stable_tree_insert(page2[0], tree_item,
>>> + tree_rmap_item) == 0) {
>>> + insert_to_stable_tree_list(rmap_item,
>>> + tree_rmap_item);
>>> + } else {
>>> + free_tree_item(tree_item);
>>> + tree_rmap_item->tree_item = NULL;
>>> + break_cow(tree_mm, tree_addr);
>>> + break_cow(rmap_item->mm, rmap_item->address);
>>> + ksm_pages_shared -= 2;
>>>
>>>
>> Much better handling than my kpage_outside_tree !
>>
>
> You surprise me! Although I couldn't see what was wrong with doing
> the break_cow()s there, I made that change pretty much as a provocation,
> to force you to explain patiently why we cannot break_cow() there, but
> have to go through the kpage_outside_tree, nkpage_out_tree (how I hated
> that name!) dance. I thought perhaps that you'd found in testing that
> fixing it all up there got ksmd into making the same "mistake" again
> and again, so better to introduce a delay; then I was going to suggest
> forcing checksum back to 0 instead, and adding a comment (I removed the
> other, no longer stable, reason for "wait" too: thinking, by all means
> reinstate, but let's have a separate patch and comment to explain it).
>
> Surely there's some reason you did it the convoluted way originally?
> Perhaps the locking was different at the time the issue first came up,
> and you were not free to break_cow() here at that time? Sadly, I don't
> think my testing has gone down this path even once, I hope yours does.
>

Well, the only reason I did it was that it looked more optimal - if we have
already found 2 identical pages and they are both shared....
The complexity it adds to the code isn't worth it; plus it might turn out to
be less effective, because, with those pages left outside the stable_tree,
fewer pages will be merged with them....

>
>> Great / Excllent work Hugh!, I really like the result, no need from you to
>> split it, i just have walked the code again, and i like it.
>> From the perspective of features, i dont think i want to change anything for
>> the merge release, about the migration/cgroup and all friends,
>> I think the swapping work that will be need to be taken for ksm will solve
>> their problems as well, at least from infrastructure point of view.
>>
>> I will run it on my server and will try to heavy load it...
>>
>
> Great, I'm so pleased, thank you.
>
> Aside from the break_cows change you already noticed above, there
> was one other thing I wanted to check with you, where my __ksm_enter does:
> /*
> * Insert just behind the scanning cursor, to let the area settle
> * down a little; when fork is followed by immediate exec, we don't
> * want ksmd to waste time setting up and tearing down an rmap_list.
> */
> list_add_tail(&mm_slot->mm_list, &ksm_scan.mm_slot->mm_list);
>
> Seems reasonable enough, but I've replaced the original list_add
> to the head by a list_add_tail to behind the cursor. And before
> I brought the cursor into it, I had a list_add_tail to the head.
>
> Now, I prefer list_add_tail because I just find it easier to think
> through the sequence using list_add_tail; but it occurred to me later
> that you might have run tests which show list_add head to behave better.
>
> For example, if you always list_add new mms to the head (and the scan
> runs forwards from the head: last night I added an experimental option
> to run backwards, but notice no difference), then the unstable tree
> will get built up starting with the pages from the new mms, which
> might jostle the tree better than always starting with the same old
> mms? So I might be spoiling a careful and subtle decision you made.
>

I saw this change; previously the order was arbitrary - meaning I never
thought about what the right order should be.
So if you feel something is better that way, it is fine (and better than
my lack of opinion on the matter).

Thanks.

> Hugh
>

2009-07-01 02:04:29

by Hugh Dickins

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

On Tue, 30 Jun 2009, Izik Eidus wrote:
> Hugh Dickins wrote:
> > On Tue, 30 Jun 2009, Izik Eidus wrote:
> > > Hugh Dickins wrote:
> > >
> > > > I've plenty more to do: still haven't really focussed in on mremap
> > > > move, and races when the vma we expect to be VM_MERGEABLE is actually
> > > > something else by the time we get mmap_sem for get_user_pages.
> > > >
> > > Considering the fact that the madvise run with mmap_sem(write)
> > > isn't it enough just to check the VM_MERGEABLE flag?
> >
> > That is most of it, yes: though really we'd want that check down
> > inside __get_user_pages(), so that it wouldn't proceed any further
> > if by then the vma is not VM_MERGEABLE. GUP's VM_IO | VM_PFNMAP check
> > does already keep it away from the most dangerous areas, but we also
> > don't want it to touch VM_HUGETLB ones (faulting in huge pages nobody
> > wants), nor any !VM_MERGEABLE really.
>
> Ouch, I see what you mean!, previously the check for PageKsm() that was made
> before cmp_and_merge_page() was protecting us against everything but anonymous
> pages...
> I saw that you change the condition, but i forgot about that protection!

No, that isn't what I meant, actually. It's a little worrying that I
didn't even consider your point when adding the || !rmap_item->stable_tree,
but on reflection I don't think that changes our protection.

My point is (well, to be honest, I'm adjusting my view as I reply) that
"PageKsm(page)" is seductive, but dangerously relative. Until we are
sure that we're in a VM_MERGEABLE area (and here we are not sure, just
know that it was when the recent get_next_rmap_item returned that item),
it means no more than the condition within that function. You're clear
about that in the comments above and within PageKsm() itself, but it's
easily forgotten in the places where it's used. Here, I'm afraid, I
read "PageKsm(page[0])" as saying that page[0] is definitely a KSM page,
but we don't actually know that much, since we're unsure of VM_MERGEABLE.

Or when you say "I saw that you change the condition", perhaps you don't
mean my added "|| !rmap_item->stable_tree" below, but my change to PageKsm
itself, changing it from !PageAnon(page) to page->mapping == NULL? I
should explain that actually my testing was with one additional patch

@@ -1433,7 +1433,8 @@ int ksm_madvise(struct vm_area_struct *v
return 0; /* just ignore the advice */

if (vma->vm_file || vma->vm_ops)
- return 0; /* just ignore the advice */
+ if (!(vma->vm_flags & VM_CAN_NONLINEAR))
+ return 0; /* just ignore the advice */

if (!test_bit(MMF_VM_MERGEABLE, &mm->flags))
if (__ksm_enter(mm) < 0)

which extends KSM from pure anonymous vmas to most private file-backed
vmas, hence I needed the test to distinguish the KSM pages from nearby
pages in there too. I left that line out of the rollup I sent, partly
to separate the extension, but mainly because I'm a bit uncomfortable
with using "VM_CAN_NONLINEAR" in that way, for two reasons: though it
is a way of saying "this is a normal kind of filesystem, not some weird
device driver", if we're going to use it in that way, then we ought to
get Nick's agreement and probably rename it VM_REGULAR (but would then
need to define exactly what a filesystem must provide to qualify: that
might take a while!); the other reason is, I've noticed in the past
that btrfs is not yet setting VM_CAN_NONLINEAR, I think that's just
an oversight (which I did once mention to Chris), but ought to check
with Nick that I've not forgotten some reason why an ordinary filesystem
might be unable to support nonlinear.

However, if we go on to use an actual bit to distinguish PageKsm, as
I think we shall probably have to to support swapping, then we won't
need the VM_CAN_NONLINEAR-like check at all: so long as KSM sticks to
merging only PageAnon pages (and I think "so long as" will be forever),
it would be safe even on most driver areas, the only exceptions being
/dev/mem and /dev/kmem (or anything like them: mmaps that can include
anonymous or KSM pages without them being anonymous or KSM pages in
the context of that mapping), which are for sure
marked VM_IO | VM_RESERVED | VM_PFNMAP.

I feel I've waved my hands in the air convincingly for several
paragraphs, and brought in enough confusions to be fairly sure
of confusing you, without quite getting to address your point.
I'm not even sure I understood your point. And whether PageKsm
means !PageAnon or !page->mapping, for different reasons those
are both pretty safe, so maybe I've overestimated the danger of
races here - though I still believe we need to protect against them.

>
> Looking again this line you do:
> if (!PageKsm(page[0]) ||
> !rmap_item->stable_tree)
> cmp_and_merge_page(page[0], rmap_item);
>
> How would we get into situation where rmap_item->stable_tree would be NULL,
> and page[0] would be not anonymous page and we would like to enter to
> cmp_and_merge_page()?
>
> Can you expline more this line please? (I have looked on it before, but i
> thought i saw reasonable case for it, But now looking again I cant remember)

We get into that situation through fork(). If a task with a
VM_MERGEABLE vma (which already contains KSM pages) does fork(),
then the child's mm will contain KSM pages. The original ksm.c
would tend to leak KSM pages that way, the KSM page count would
get decremented when they vanished from the parent's mm, but they
could be held indefinitely in the child's mm without being counted
in or out: I fixed that by fork copying MMF_VM_MERGEABLE and putting
child on mm_list, and this counting in of the KSM pages.
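
The fork hook amounts to something like this (sketch from memory; the exact
form in the rollup's ksm.h may differ slightly):

/*
 * If the parent mm was already registered with ksmd, register the child
 * mm too, so the KSM pages duplicated at fork are counted in and out.
 */
static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
{
	if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
		return __ksm_enter(mm);
	return 0;
}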

But thank you for saying "page[0] would be not anonymous page", I was
going to point out the error of reading PageKsm that way now, when I
realize I've got it wrong myself - I really ought to have changed that
!PageKsm(page[0]) over to PageAnon(page[0]), shouldn't I? Though
probably saved from serious error by try_to_merge_one_page's later
PageAnon check, haven't I been wasting a lot of time on passing down
file pages to cmp_and_merge_page() there? Ah, and you're pointing
out that they come down anyway with the ||!stable part of the test.

(Though, aside from races, they're only coming down when VM_MERGEABLE's
vm_file test is overridden, as in my own testing, but not in the rollup
I sent.)

I'll check that again in the morning: it really reinforces my point
above that "PageKsm" is too dangerously deceptive as it stands.


[ re: PageAnon test before dec/inc_mm_counter ]

> See my comment above, I meant that only Anonymous pages should come into this
> path..., (I missed the fact that the || condition before the
> cmp_and_merge_page() call change the behavior and allowed file-mapped pages to
> get merged)

If file-mapped pages get anywhere near here, it's only a bug. But I
do want anonymous COWed pages in private file-mapped areas to be able
to get here soon.

By this stage, try_to_merge_one_page's

if (!PageAnon(oldpage))
goto out;

has already come into play, so your point stands, that the PageAnon
test around the dec/inc_mm_counter is strictly unnecessary; but as
I said, I want to keep it for now as a little warning flag to me.

But put this (the funny way pages move from anon to file) together
with my PageKsm confusions above, and I think we have a clear case
for adding the PageKsm flag (in page->mapping with PageAnon) already.


[ re: kpage_outside_tree versus double break_cow ]

> > Surely there's some reason you did it the convoluted way originally?
> > Perhaps the locking was different at the time the issue first came up,
> > and you were not free to break_cow() here at that time? Sadly, I don't
> > think my testing has gone down this path even once, I hope yours does.
>
> Well the only reason that i did it was that it looked more optimal - if we
> already found 2 identical pages and they are both shared....

Ah, I see, yes, that makes some sense - though I've not fully thought
it through, and don't have a strong enough grasp on the various reasons
why stable_tree_insert can fail ...

> The complexity that it add to the code isnt worth it, plus it might turn on to
> be less effective, beacuse due to the fact that they will be outside the
> stable_tree, less pages will be merged with them....

... but agree the complexity is not worth it, or not without a stronger
real life case. Especially as I never observed it to happen anyway.
The beauty of the unstable tree is how these temporary failures
should sort themselves out in due course, without special casing.


[ re: list_add_tail ]
> >
> > Now, I prefer list_add_tail because I just find it easier to think
> > through the sequence using list_add_tail; but it occurred to me later
> > that you might have run tests which show list_add head to behave better.
> >
> > For example, if you always list_add new mms to the head (and the scan
> > runs forwards from the head: last night I added an experimental option
> > to run backwards, but notice no difference), then the unstable tree
> > will get built up starting with the pages from the new mms, which
> > might jostle the tree better than always starting with the same old
> > mms? So I might be spoiling a careful and subtle decision you made.
>
> I saw this change, previously the order was made arbitrary - meaning I never
> thought what should be the right order..
> So if you feel something is better that way - it is fine (and better than my
> unopinion for that case)

Okay, thanks. I think the fork() potential for time-wasting that I
comment upon in the code makes a good case for doing list_add_tail
behind the cursor as it stands. But we can revisit if someone comes
up with evidence for doing it differently - I think there's scope
for academic papers on the behaviour of the unstable tree.

Hugh

2009-07-01 09:48:54

by Izik Eidus

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

Hugh Dickins wrote:
> On Tue, 30 Jun 2009, Izik Eidus wrote:
>
>> Hugh Dickins wrote:
>>
>>> On Tue, 30 Jun 2009, Izik Eidus wrote:
>>>
>>>> Hugh Dickins wrote:
>>>>
>>>>
>>>>> I've plenty more to do: still haven't really focussed in on mremap
>>>>> move, and races when the vma we expect to be VM_MERGEABLE is actually
>>>>> something else by the time we get mmap_sem for get_user_pages.
>>>>>
>>>>>
>>>> Considering the fact that the madvise run with mmap_sem(write)
>>>> isn't it enough just to check the VM_MERGEABLE flag?
>>>>
>>> That is most of it, yes: though really we'd want that check down
>>> inside __get_user_pages(), so that it wouldn't proceed any further
>>> if by then the vma is not VM_MERGEABLE. GUP's VM_IO | VM_PFNMAP check
>>> does already keep it away from the most dangerous areas, but we also
>>> don't want it to touch VM_HUGETLB ones (faulting in huge pages nobody
>>> wants), nor any !VM_MERGEABLE really.
>>>
>> Ouch, I see what you mean!, previously the check for PageKsm() that was made
>> before cmp_and_merge_page() was protecting us against everything but anonymous
>> pages...
>> I saw that you change the condition, but i forgot about that protection!
>>
>
> No, that isn't what I meant, actually. It's a little worrying that I
> didn't even consider your point when adding the || !rmap_item->stable_tree,
> but on reflection I don't think that changes our protection.
>

Yes, I had forgotten about this PageAnon() check inside
try_to_merge_one_page() - we didn't always have it...
(Sorry for all the mess... ;-))

> My point is (well, to be honest, I'm adjusting my view as I reply) that
> "PageKsm(page)" is seductive, but dangerously relative. Until we are
> sure that we're in a VM_MERGEABLE area (and here we are not sure, just
> know that it was when the recent get_next_rmap_item returned that item),
> it means no more than the condition within that function. You're clear
> about that in the comments above and within PageKsm() itself, but it's
> easily forgotten in the places where it's used. Here, I'm afraid, I
> read "PageKsm(page[0])" as saying that page[0] is definitely a KSM page,
> but we don't actually know that much, since we're unsure of VM_MERGEABLE.
>
> Or when you say "I saw that you change the condition", perhaps you don't
> mean my added "|| !rmap_item->stable_tree" below, but my change to PageKsm
> itself, changing it from !PageAnon(page) to page->mapping == NULL? I
> should explain that actually my testing was with one additional patch
>
> @@ -1433,7 +1433,8 @@ int ksm_madvise(struct vm_area_struct *v
> return 0; /* just ignore the advice */
>
> if (vma->vm_file || vma->vm_ops)
> - return 0; /* just ignore the advice */
> + if (!(vma->vm_flags & VM_CAN_NONLINEAR))
> + return 0; /* just ignore the advice */
>
> if (!test_bit(MMF_VM_MERGEABLE, &mm->flags))
> if (__ksm_enter(mm) < 0)
>
> which extends KSM from pure anonymous vmas to most private file-backed
> vmas, hence I needed the test to distinguish the KSM pages from nearby
> pages in there too. I left that line out of the rollup I sent, partly
> to separate the extension, but mainly because I'm a bit uncomfortable
> with using "VM_CAN_NONLINEAR" in that way, for two reasons: though it
> is a way of saying "this is a normal kind of filesystem, not some weird
> device driver", if we're going to use it in that way, then we ought to
> get Nick's agreement and probably rename it VM_REGULAR (but would then
> need to define exactly what a filesystem must provide to qualify: that
> might take a while!); the other reason is, I've noticed in the past
> that btrfs is not yet setting VM_CAN_NONLINEAR, I think that's just
> an oversight (which I did once mention to Chris), but ought to check
> with Nick that I've not forgotten some reason why an ordinary filesystem
> might be unable to support nonlinear.
>

Considering that we will try to merge only anonymous pages, and considering
our "O_DIRECT check", I think we can allow scanning of any vma...
The thing is:
if an evil driver plays with the page, it will have to run get_user_pages(),
which increases the page count but not the mapcount, and therefore we won't
merge the page while it is being used by the driver... and if we merge the
page before the evil driver does get_user_pages(), then in case it really
wants to play with the page it will have to call get_user_pages(write),
which will break the COW; if it calls get_user_pages(read) it won't be able
to write to the page...

Unless the driver tries to do something really tricky, and does it in a
buggy way (like not checking whether the page it receives is anonymous or
not), we should be safe.

No?
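
For what it's worth, the kind of check I mean is simply this (illustrative
sketch only, the helper name is made up; it ignores the extra reference a
page in the swap cache would hold):

/*
 * A page pinned by get_user_pages() (O_DIRECT, a driver, ...) has a
 * raised page_count() without a matching raised page_mapcount(), so we
 * can refuse to merge it.  The "+ 1" is the reference the scanner
 * itself holds on the page.
 */
static int page_pinned_elsewhere(struct page *page)
{
	return page_count(page) != page_mapcount(page) + 1;
}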

> However, if we go on to use an actual bit to distinguish PageKsm, as
> I think we shall probably have to to support swapping, then we won't
> need the VM_CAN_NONLINEAR-like check at all: so long as KSM sticks to
> merging only PageAnon pages (and I think "so long as" will be forever),
> it would be safe even on most driver areas, the only exceptions being
> /dev/mem and /dev/kmem (or anything like them: mmaps that can include
> anonymous or KSM pages without them being anonymous or KSM pages in
> the context of that mapping), which are for sure
> marked VM_IO | VM_RESERVED | VM_PFNMAP.
>
> I feel I've waved my hands in the air convincingly for several
> paragraphs, and brought in enough confusions to be fairly sure
> of confusing you, without quite getting to address your point.
> I'm not even sure I understood your point. And whether PageKsm
> means !PageAnon or !page->mapping, for different reasons those
> are both pretty safe, so maybe I've overestimated the danger of
> races here - though I still believe we need to protect against them.
>
>
>> Looking again this line you do:
>> if (!PageKsm(page[0]) ||
>> !rmap_item->stable_tree)
>> cmp_and_merge_page(page[0], rmap_item);
>>
>> How would we get into situation where rmap_item->stable_tree would be NULL,
>> and page[0] would be not anonymous page and we would like to enter to
>> cmp_and_merge_page()?
>>
>> Can you expline more this line please? (I have looked on it before, but i
>> thought i saw reasonable case for it, But now looking again I cant remember)
>>
>
> We get into that situation through fork(). If a task with a
> VM_MERGEABLE vma (which already contains KSM pages) does fork(),
> then the child's mm will contain KSM pages. The original ksm.c
> would tend to leak KSM pages that way, the KSM page count would
> get decremented when they vanished from the parent's mm, but they
> could be held indefinitely in the child's mm without being counted
> in or out: I fixed that by fork copying MMF_VM_MERGEABLE and putting
> child on mm_list, and this counting in of the KSM pages.
>
> But thank you for saying "page[0] would be not anonymous page", I was
> going to point out the error of reading PageKsm that way now, when I
> realize I've got it wrong myself - I really ought to have changed that
> !PageKsm(page[0]) over to PageAnon(page[0]), shouldn't I? Though
> probably saved from serious error by try_to_merge_one_page's later
> PageAnon check, haven't I been wasting a lot of time on passing down
> file pages to cmp_and_merge_page() there? Ah, and you're pointing
> out that they come down anyway with the ||!stable part of the test.
>
> (Though, aside from races, they're only coming down when VM_MERGEABLE's
> vm_file test is overridden, as in my own testing, but not in the rollup
> I sent.)
>
> I'll check that again in the morning: it really reinforces my point
> above that "PageKsm" is too dangerously deceptive as it stands.
>

Because we have this PageAnon() check in try_to_merge_one_page(), it
should be safe to allow file-backed pages to go into
cmp_and_merge_page(), but! I don't think it does anything useful... we
can't merge those pages, so why burn cpu cycles on them?

If you feel more comfortable with PageKsm() -> !page->mapping, we can
add a PageAnon check before cmp_and_merge_page()...


>
> [ re: PageAnon test before dec/inc_mm_counter ]
>
>
>> See my comment above: I meant that only anonymous pages should come into this
>> path... (I missed the fact that the || condition before the
>> cmp_and_merge_page() call changes the behavior and allows file-mapped pages to
>> get merged)
>>
>
> If file-mapped pages get anywhere near here, it's only a bug. But I
> do want anonymous COWed pages in private file-mapped areas to be able
> to get here soon.
>
> By this stage, try_to_merge_one_page's
>
> if (!PageAnon(oldpage))
> goto out;
>
> has already come into play, so your point stands, that the PageAnon
> test around the dec/inc_mm_counter is strictly unnecessary; but as
> I said, I want to keep it for now as a little warning flag to me.
>
> But put this (the funny way pages move from anon to file) together
> with my PageKsm confusions above, and I think we have a clear case
> for adding the PageKsm flag (in page->mapping with PageAnon) already.
>

You mean to do: PageKsm()-> if !page->mapping && !PageAnon(page) ?

>
> [ re: kpage_outside_tree versus double break_cow ]
>
>
>>> Surely there's some reason you did it the convoluted way originally?
>>> Perhaps the locking was different at the time the issue first came up,
>>> and you were not free to break_cow() here at that time? Sadly, I don't
>>> think my testing has gone down this path even once, I hope yours does.
>>>
>> Well, the only reason that I did it was that it looked more optimal - if we
>> already found 2 identical pages and they are both shared....
>>
>
> Ah, I see, yes, that makes some sense - though I've not fully thought
> it through, and don't have a strong enough grasp on the various reasons
> why stable_tree_insert can fail ...
>


Honestly - that doesn't make much sense considering the complexity it
added; I have no idea what I was thinking when I wrote it!

>> The complexity that it adds to the code isn't worth it; plus it might turn out
>> to be less effective, because they will be outside the stable_tree, so fewer
>> pages will be merged with them....
>>
>
> ... but agree the complexity is not worth it, or not without a stronger
> real life case. Especially as I never observed it to happen anyway.
> The beauty of the unstable tree is how these temporary failures
> should sort themselves out in due course, without special casing.
>
>
> [ re: list_add_tail ]
>
>>> Now, I prefer list_add_tail because I just find it easier to think
>>> through the sequence using list_add_tail; but it occurred to me later
>>> that you might have run tests which show list_add head to behave better.
>>>
>>> For example, if you always list_add new mms to the head (and the scan
>>> runs forwards from the head: last night I added an experimental option
>>> to run backwards, but notice no difference), then the unstable tree
>>> will get built up starting with the pages from the new mms, which
>>> might jostle the tree better than always starting with the same old
>>> mms? So I might be spoiling a careful and subtle decision you made.
>>>
>> I saw this change; previously the order was arbitrary - meaning I never
>> thought about what the right order should be..
>> So if you feel something is better that way - that's fine (and better than my
>> having no opinion on the matter)
>>
>
> Okay, thanks. I think the fork() potential for time-wasting that I
> comment upon in the code makes a good case for doing list_add_tail
> behind the cursor as it stands. But we can revisit if someone comes
> up with evidence for doing it differently - I think there's scope
> for academic papers on the behaviour of the unstable tree.
>

Well, there is a big window for optimization in both the stable and
unstable trees...
The current code is the most naive implementation of those
stable/unstable trees...
> Hugh
>

2009-07-01 10:33:55

by Andrea Arcangeli

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

Hi Hugh!

On Wed, Jul 01, 2009 at 03:03:58AM +0100, Hugh Dickins wrote:
> up with evidence for doing it differently - I think there's scope
> for academic papers on the behaviour of the unstable tree.

Eheh, I covered some of the behaviour of the unstable tree in the KSM
paper @ LinuxSymposium (downloadable in a few weeks) and I will ""try""
to cover some of it in my presentation as well ;).

2009-07-08 21:08:18

by Hugh Dickins

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

Hi Izik,

Sorry, I've not yet replied to your response of 1 July, nor shall I
right now. Instead, more urgent to send you my current KSM rollup,
against 2.6.31-rc2, with which I'm now pretty happy - to the extent
that I've put my signoff to it below.

Though of course it's actually your and Andrea's and Chris's work,
just played around with by me; I don't know what the order of
signoffs should be in the end.

What it mainly lacks is a Documentation file, and more statistics in
sysfs: though we can already see how much is being merged, we don't
see any comparison against how much isn't.

But if you still like the patch below, let's advance to splitting
it up and getting it into mmotm: I have some opinions on the splitup,
I'll make some suggestions on that tomorrow.

You asked for a full diff against -rc2, but may want some explanation
of differences from what I sent before. The main changes are:-

A reliable PageKsm(), not dependent on the nature of the vma it's in:
it's like PageAnon, but with NULL anon_vma - needs a couple of slight
adjustments outside ksm.c.
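
As a rough illustration of that distinction (paraphrasing the
definitions of the time rather than quoting either mainline or the
patch):

	/* anon bit set, whatever else page->mapping carries */
	static inline int PageAnon(struct page *page)
	{
		return ((unsigned long)page->mapping & PAGE_MAPPING_ANON) != 0;
	}

	/* anon bit set and the anon_vma pointer part is NULL */
	static inline int PageKsm(struct page *page)
	{
		return (unsigned long)page->mapping == PAGE_MAPPING_ANON;
	}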

Consequently, no reason to go on prohibiting KSM on private anonymous
pages COWed from template file pages in file-backed vmas.

Most of what get_user_pages did for us was unhelpful: now rely on
find_vma and follow_page and handle_mm_fault directly, which allow
us to check VM_MERGEABLE and PageKsm ourselves where needed.

Which eliminates the separate is_present_pte checks, and spares us
from wasting rmap_items on absent ptes.
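
A minimal sketch of the shape that lookup now takes (the real
break_ksm()/break_cow() are in the mm/ksm.c hunk below; the retry loop
and error handling are trimmed here):

	static void break_cow_sketch(struct mm_struct *mm, unsigned long addr)
	{
		struct vm_area_struct *vma;
		struct page *page;

		down_read(&mm->mmap_sem);
		vma = find_vma(mm, addr);
		if (vma && vma->vm_start <= addr &&
		    (vma->vm_flags & VM_MERGEABLE) && vma->anon_vma) {
			page = follow_page(vma, addr, FOLL_GET);
			if (page) {
				if (PageKsm(page))
					handle_mm_fault(mm, vma, addr,
							FAULT_FLAG_WRITE);
				put_page(page);
			}
		}
		up_read(&mm->mmap_sem);
	}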

Which then drew attention to the hyperactive allocation and freeing
of tree_items, "slabinfo -AD" showing huge activity there, even when
idling. It's not much of a problem really, but might cause concern.

And revealed that really those tree_items were a waste of space, can
be packed within the rmap_items that pointed to them, while still
keeping to the nice cache-friendly 64-byte or 32-byte rmap_item.
(If another field needed later, can make rmap_list singly linked.)
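
To see why it fits: on 64-bit the fields are 16 bytes of list_head,
8 of mm, 8 of address, 8 for the checksum/next union and 24 for the
rb_node/prev union, totalling 64 bytes, and half of each on 32-bit.
A build-time check along these lines (purely illustrative, not part
of the patch) would pin that down:

	/* hypothetical check, e.g. called from ksm_slab_init() */
	static int __init rmap_item_size_check(void)
	{
		BUILD_BUG_ON(sizeof(struct rmap_item) >
			     (BITS_PER_LONG == 64 ? 64 : 32));
		return 0;
	}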

mremap move issue sorted, in simplest COW-breaking way. My previous
code to unmerge according to rmap_item->stable was racy/buggy for
two reasons: ignore rmap_items there now, just scan the ptes.

ksmd used to be running at higher priority: now nice 0.

Moved mm_slot hash functions together; made hash table smaller
now it's used less frequently than it was in your design.

More cleanup, making similar things more alike.

Signed-off-by: Hugh Dickins <[email protected]>
---

arch/alpha/include/asm/mman.h | 3
arch/mips/include/asm/mman.h | 3
arch/parisc/include/asm/mman.h | 3
arch/xtensa/include/asm/mman.h | 3
fs/proc/page.c | 5
include/asm-generic/mman-common.h | 3
include/linux/ksm.h | 79 +
include/linux/mm.h | 1
include/linux/mmu_notifier.h | 34
include/linux/rmap.h | 6
include/linux/sched.h | 7
kernel/fork.c | 8
mm/Kconfig | 11
mm/Makefile | 1
mm/ksm.c | 1542 ++++++++++++++++++++++++++++
mm/madvise.c | 53
mm/memory.c | 14
mm/mmap.c | 6
mm/mmu_notifier.c | 20
mm/mremap.c | 12
mm/rmap.c | 21
21 files changed, 1774 insertions(+), 61 deletions(-)

--- 2.6.31-rc2/arch/alpha/include/asm/mman.h 2008-10-09 23:13:53.000000000 +0100
+++ madv_ksm/arch/alpha/include/asm/mman.h 2009-07-05 00:51:29.000000000 +0100
@@ -48,6 +48,9 @@
#define MADV_DONTFORK 10 /* don't inherit across fork */
#define MADV_DOFORK 11 /* do inherit across fork */

+#define MADV_MERGEABLE 12 /* KSM may merge identical pages */
+#define MADV_UNMERGEABLE 13 /* KSM may not merge identical pages */
+
/* compatibility flags */
#define MAP_FILE 0

--- 2.6.31-rc2/arch/mips/include/asm/mman.h 2008-12-24 23:26:37.000000000 +0000
+++ madv_ksm/arch/mips/include/asm/mman.h 2009-07-05 00:51:29.000000000 +0100
@@ -71,6 +71,9 @@
#define MADV_DONTFORK 10 /* don't inherit across fork */
#define MADV_DOFORK 11 /* do inherit across fork */

+#define MADV_MERGEABLE 12 /* KSM may merge identical pages */
+#define MADV_UNMERGEABLE 13 /* KSM may not merge identical pages */
+
/* compatibility flags */
#define MAP_FILE 0

--- 2.6.31-rc2/arch/parisc/include/asm/mman.h 2008-12-24 23:26:37.000000000 +0000
+++ madv_ksm/arch/parisc/include/asm/mman.h 2009-07-05 00:51:29.000000000 +0100
@@ -54,6 +54,9 @@
#define MADV_16M_PAGES 24 /* Use 16 Megabyte pages */
#define MADV_64M_PAGES 26 /* Use 64 Megabyte pages */

+#define MADV_MERGEABLE 65 /* KSM may merge identical pages */
+#define MADV_UNMERGEABLE 66 /* KSM may not merge identical pages */
+
/* compatibility flags */
#define MAP_FILE 0
#define MAP_VARIABLE 0
--- 2.6.31-rc2/arch/xtensa/include/asm/mman.h 2009-03-23 23:12:14.000000000 +0000
+++ madv_ksm/arch/xtensa/include/asm/mman.h 2009-07-05 00:51:29.000000000 +0100
@@ -78,6 +78,9 @@
#define MADV_DONTFORK 10 /* don't inherit across fork */
#define MADV_DOFORK 11 /* do inherit across fork */

+#define MADV_MERGEABLE 12 /* KSM may merge identical pages */
+#define MADV_UNMERGEABLE 13 /* KSM may not merge identical pages */
+
/* compatibility flags */
#define MAP_FILE 0

--- 2.6.31-rc2/fs/proc/page.c 2009-06-25 05:18:07.000000000 +0100
+++ madv_ksm/fs/proc/page.c 2009-07-07 21:58:29.000000000 +0100
@@ -2,6 +2,7 @@
#include <linux/compiler.h>
#include <linux/fs.h>
#include <linux/init.h>
+#include <linux/ksm.h>
#include <linux/mm.h>
#include <linux/mmzone.h>
#include <linux/proc_fs.h>
@@ -95,6 +96,8 @@ static const struct file_operations proc
#define KPF_UNEVICTABLE 18
#define KPF_NOPAGE 20

+#define KPF_KSM 21
+
/* kernel hacking assistances
* WARNING: subject to change, never rely on them!
*/
@@ -137,6 +140,8 @@ static u64 get_uflags(struct page *page)
u |= 1 << KPF_MMAP;
if (PageAnon(page))
u |= 1 << KPF_ANON;
+ if (PageKsm(page))
+ u |= 1 << KPF_KSM;

/*
* compound pages: export both head/tail info
--- 2.6.31-rc2/include/asm-generic/mman-common.h 2009-06-25 05:18:08.000000000 +0100
+++ madv_ksm/include/asm-generic/mman-common.h 2009-07-05 00:51:29.000000000 +0100
@@ -35,6 +35,9 @@
#define MADV_DONTFORK 10 /* don't inherit across fork */
#define MADV_DOFORK 11 /* do inherit across fork */

+#define MADV_MERGEABLE 12 /* KSM may merge identical pages */
+#define MADV_UNMERGEABLE 13 /* KSM may not merge identical pages */
+
/* compatibility flags */
#define MAP_FILE 0

--- 2.6.31-rc2/include/linux/ksm.h 1970-01-01 01:00:00.000000000 +0100
+++ madv_ksm/include/linux/ksm.h 2009-07-08 16:49:33.000000000 +0100
@@ -0,0 +1,79 @@
+#ifndef __LINUX_KSM_H
+#define __LINUX_KSM_H
+/*
+ * Memory merging support.
+ *
+ * This code enables dynamic sharing of identical pages found in different
+ * memory areas, even if they are not shared by fork().
+ */
+
+#include <linux/bitops.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/vmstat.h>
+
+#ifdef CONFIG_KSM
+int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end, int advice, unsigned long *vm_flags);
+int __ksm_enter(struct mm_struct *mm);
+void __ksm_exit(struct mm_struct *mm);
+
+static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+{
+ if (test_bit(MMF_VM_MERGEABLE, &oldmm->flags))
+ return __ksm_enter(mm);
+ return 0;
+}
+
+static inline void ksm_exit(struct mm_struct *mm)
+{
+ if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
+ __ksm_exit(mm);
+}
+
+/*
+ * A KSM page is one of those write-protected "shared pages" or "merged pages"
+ * which KSM maps into multiple mms, wherever identical anonymous page content
+ * is found in VM_MERGEABLE vmas. It's a PageAnon page, with NULL anon_vma.
+ */
+static inline int PageKsm(struct page *page)
+{
+ return ((unsigned long)page->mapping == PAGE_MAPPING_ANON);
+}
+
+/*
+ * But we have to avoid the checking which page_add_anon_rmap() performs.
+ */
+static inline void page_add_ksm_rmap(struct page *page)
+{
+ if (atomic_inc_and_test(&page->_mapcount)) {
+ page->mapping = (void *) PAGE_MAPPING_ANON;
+ __inc_zone_page_state(page, NR_ANON_PAGES);
+ }
+}
+#else /* !CONFIG_KSM */
+
+static inline int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end, int advice, unsigned long *vm_flags)
+{
+ return 0;
+}
+
+static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
+{
+ return 0;
+}
+
+static inline void ksm_exit(struct mm_struct *mm)
+{
+}
+
+static inline int PageKsm(struct page *page)
+{
+ return 0;
+}
+
+/* No stub required for page_add_ksm_rmap(page) */
+#endif /* !CONFIG_KSM */
+
+#endif
--- 2.6.31-rc2/include/linux/mm.h 2009-07-04 21:26:08.000000000 +0100
+++ madv_ksm/include/linux/mm.h 2009-07-05 00:51:29.000000000 +0100
@@ -105,6 +105,7 @@ extern unsigned int kobjsize(const void
#define VM_MIXEDMAP 0x10000000 /* Can contain "struct page" and pure PFN pages */
#define VM_SAO 0x20000000 /* Strong Access Ordering (powerpc) */
#define VM_PFN_AT_MMAP 0x40000000 /* PFNMAP vma that is fully mapped at mmap time */
+#define VM_MERGEABLE 0x80000000 /* KSM may merge identical pages */

#ifndef VM_STACK_DEFAULT_FLAGS /* arch can override this */
#define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
--- 2.6.31-rc2/include/linux/mmu_notifier.h 2008-10-09 23:13:53.000000000 +0100
+++ madv_ksm/include/linux/mmu_notifier.h 2009-07-05 00:51:29.000000000 +0100
@@ -62,6 +62,15 @@ struct mmu_notifier_ops {
unsigned long address);

/*
+ * change_pte is called in cases that pte mapping to page is changed:
+ * for example, when ksm remaps pte to point to a new shared page.
+ */
+ void (*change_pte)(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long address,
+ pte_t pte);
+
+ /*
* Before this is invoked any secondary MMU is still ok to
* read/write to the page previously pointed to by the Linux
* pte because the page hasn't been freed yet and it won't be
@@ -154,6 +163,8 @@ extern void __mmu_notifier_mm_destroy(st
extern void __mmu_notifier_release(struct mm_struct *mm);
extern int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
unsigned long address);
+extern void __mmu_notifier_change_pte(struct mm_struct *mm,
+ unsigned long address, pte_t pte);
extern void __mmu_notifier_invalidate_page(struct mm_struct *mm,
unsigned long address);
extern void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
@@ -175,6 +186,13 @@ static inline int mmu_notifier_clear_flu
return 0;
}

+static inline void mmu_notifier_change_pte(struct mm_struct *mm,
+ unsigned long address, pte_t pte)
+{
+ if (mm_has_notifiers(mm))
+ __mmu_notifier_change_pte(mm, address, pte);
+}
+
static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
unsigned long address)
{
@@ -236,6 +254,16 @@ static inline void mmu_notifier_mm_destr
__young; \
})

+#define set_pte_at_notify(__mm, __address, __ptep, __pte) \
+({ \
+ struct mm_struct *___mm = __mm; \
+ unsigned long ___address = __address; \
+ pte_t ___pte = __pte; \
+ \
+ set_pte_at(___mm, ___address, __ptep, ___pte); \
+ mmu_notifier_change_pte(___mm, ___address, ___pte); \
+})
+
#else /* CONFIG_MMU_NOTIFIER */

static inline void mmu_notifier_release(struct mm_struct *mm)
@@ -248,6 +276,11 @@ static inline int mmu_notifier_clear_flu
return 0;
}

+static inline void mmu_notifier_change_pte(struct mm_struct *mm,
+ unsigned long address, pte_t pte)
+{
+}
+
static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
unsigned long address)
{
@@ -273,6 +306,7 @@ static inline void mmu_notifier_mm_destr

#define ptep_clear_flush_young_notify ptep_clear_flush_young
#define ptep_clear_flush_notify ptep_clear_flush
+#define set_pte_at_notify set_pte_at

#endif /* CONFIG_MMU_NOTIFIER */

--- 2.6.31-rc2/include/linux/rmap.h 2009-06-25 05:18:09.000000000 +0100
+++ madv_ksm/include/linux/rmap.h 2009-07-05 00:56:00.000000000 +0100
@@ -71,14 +71,10 @@ void page_add_new_anon_rmap(struct page
void page_add_file_rmap(struct page *);
void page_remove_rmap(struct page *);

-#ifdef CONFIG_DEBUG_VM
-void page_dup_rmap(struct page *page, struct vm_area_struct *vma, unsigned long address);
-#else
-static inline void page_dup_rmap(struct page *page, struct vm_area_struct *vma, unsigned long address)
+static inline void page_dup_rmap(struct page *page)
{
atomic_inc(&page->_mapcount);
}
-#endif

/*
* Called from mm/vmscan.c to handle paging out
--- 2.6.31-rc2/include/linux/sched.h 2009-07-04 21:26:08.000000000 +0100
+++ madv_ksm/include/linux/sched.h 2009-07-05 00:51:29.000000000 +0100
@@ -431,7 +431,9 @@ extern int get_dumpable(struct mm_struct
/* dumpable bits */
#define MMF_DUMPABLE 0 /* core dump is permitted */
#define MMF_DUMP_SECURELY 1 /* core file is readable only by root */
+
#define MMF_DUMPABLE_BITS 2
+#define MMF_DUMPABLE_MASK ((1 << MMF_DUMPABLE_BITS) - 1)

/* coredump filter bits */
#define MMF_DUMP_ANON_PRIVATE 2
@@ -441,6 +443,7 @@ extern int get_dumpable(struct mm_struct
#define MMF_DUMP_ELF_HEADERS 6
#define MMF_DUMP_HUGETLB_PRIVATE 7
#define MMF_DUMP_HUGETLB_SHARED 8
+
#define MMF_DUMP_FILTER_SHIFT MMF_DUMPABLE_BITS
#define MMF_DUMP_FILTER_BITS 7
#define MMF_DUMP_FILTER_MASK \
@@ -454,6 +457,10 @@ extern int get_dumpable(struct mm_struct
#else
# define MMF_DUMP_MASK_DEFAULT_ELF 0
#endif
+ /* leave room for more dump flags */
+#define MMF_VM_MERGEABLE 16 /* KSM may merge identical pages */
+
+#define MMF_INIT_MASK (MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK)

struct sighand_struct {
atomic_t count;
--- 2.6.31-rc2/kernel/fork.c 2009-06-25 05:18:09.000000000 +0100
+++ madv_ksm/kernel/fork.c 2009-07-05 00:51:29.000000000 +0100
@@ -50,6 +50,7 @@
#include <linux/ftrace.h>
#include <linux/profile.h>
#include <linux/rmap.h>
+#include <linux/ksm.h>
#include <linux/acct.h>
#include <linux/tsacct_kern.h>
#include <linux/cn_proc.h>
@@ -290,6 +291,9 @@ static int dup_mmap(struct mm_struct *mm
rb_link = &mm->mm_rb.rb_node;
rb_parent = NULL;
pprev = &mm->mmap;
+ retval = ksm_fork(mm, oldmm);
+ if (retval)
+ goto out;

for (mpnt = oldmm->mmap; mpnt; mpnt = mpnt->vm_next) {
struct file *file;
@@ -426,7 +430,8 @@ static struct mm_struct * mm_init(struct
atomic_set(&mm->mm_count, 1);
init_rwsem(&mm->mmap_sem);
INIT_LIST_HEAD(&mm->mmlist);
- mm->flags = (current->mm) ? current->mm->flags : default_dump_filter;
+ mm->flags = (current->mm) ?
+ (current->mm->flags & MMF_INIT_MASK) : default_dump_filter;
mm->core_state = NULL;
mm->nr_ptes = 0;
set_mm_counter(mm, file_rss, 0);
@@ -487,6 +492,7 @@ void mmput(struct mm_struct *mm)

if (atomic_dec_and_test(&mm->mm_users)) {
exit_aio(mm);
+ ksm_exit(mm);
exit_mmap(mm);
set_mm_exe_file(mm, NULL);
if (!list_empty(&mm->mmlist)) {
--- 2.6.31-rc2/mm/Kconfig 2009-06-25 05:18:10.000000000 +0100
+++ madv_ksm/mm/Kconfig 2009-07-05 00:51:29.000000000 +0100
@@ -214,6 +214,17 @@ config HAVE_MLOCKED_PAGE_BIT
config MMU_NOTIFIER
bool

+config KSM
+ bool "Enable KSM for page merging"
+ depends on MMU
+ help
+ Enable Kernel Samepage Merging: KSM periodically scans those areas
+ of an application's address space that an app has advised may be
+ mergeable. When it finds pages of identical content, it replaces
+ the many instances by a single resident page with that content, so
+ saving memory until one or another app needs to modify the content.
+ Recommended for use with KVM, or with other duplicative applications.
+
config DEFAULT_MMAP_MIN_ADDR
int "Low address space to protect from user allocation"
default 4096
--- 2.6.31-rc2/mm/Makefile 2009-06-25 05:18:10.000000000 +0100
+++ madv_ksm/mm/Makefile 2009-07-05 00:51:29.000000000 +0100
@@ -25,6 +25,7 @@ obj-$(CONFIG_SPARSEMEM_VMEMMAP) += spars
obj-$(CONFIG_TMPFS_POSIX_ACL) += shmem_acl.o
obj-$(CONFIG_SLOB) += slob.o
obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
+obj-$(CONFIG_KSM) += ksm.o
obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
obj-$(CONFIG_SLAB) += slab.o
obj-$(CONFIG_SLUB) += slub.o
--- 2.6.31-rc2/mm/ksm.c 1970-01-01 01:00:00.000000000 +0100
+++ madv_ksm/mm/ksm.c 2009-07-08 16:49:33.000000000 +0100
@@ -0,0 +1,1542 @@
+/*
+ * Memory merging support.
+ *
+ * This code enables dynamic sharing of identical pages found in different
+ * memory areas, even if they are not shared by fork()
+ *
+ * Copyright (C) 2008 Red Hat, Inc.
+ * Authors:
+ * Izik Eidus
+ * Andrea Arcangeli
+ * Chris Wright
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2.
+ */
+
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/mman.h>
+#include <linux/sched.h>
+#include <linux/rwsem.h>
+#include <linux/pagemap.h>
+#include <linux/rmap.h>
+#include <linux/spinlock.h>
+#include <linux/jhash.h>
+#include <linux/delay.h>
+#include <linux/kthread.h>
+#include <linux/wait.h>
+#include <linux/slab.h>
+#include <linux/rbtree.h>
+#include <linux/mmu_notifier.h>
+#include <linux/ksm.h>
+
+#include <asm/tlbflush.h>
+
+/*
+ * A few notes about the KSM scanning process,
+ * to make it easier to understand the data structures below:
+ *
+ * In order to reduce excessive scanning, KSM sorts the memory pages by their
+ * contents into a data structure that holds pointers to the pages' locations.
+ *
+ * Since the contents of the pages may change at any moment, KSM cannot just
+ * insert the pages into a normal sorted tree and expect it to find anything.
+ * Therefore KSM uses two data structures - the stable and the unstable tree.
+ *
+ * The stable tree holds pointers to all the merged pages (ksm pages), sorted
+ * by their contents. Because each such page is write-protected, searching on
+ * this tree is fully assured to be working (except when pages are unmapped),
+ * and therefore this tree is called the stable tree.
+ *
+ * In addition to the stable tree, KSM uses a second data structure called the
+ * unstable tree: this tree holds pointers to pages which have been found to
+ * be "unchanged for a period of time". The unstable tree sorts these pages
+ * by their contents, but since they are not write-protected, KSM cannot rely
+ * upon the unstable tree to work correctly - the unstable tree is liable to
+ * be corrupted as its contents are modified, and so it is called unstable.
+ *
+ * KSM solves this problem by several techniques:
+ *
+ * 1) The unstable tree is flushed every time KSM completes scanning all
+ * memory areas, and then the tree is rebuilt again from the beginning.
+ * 2) KSM will only insert into the unstable tree, pages whose hash value
+ * has not changed since the previous scan of all memory areas.
+ * 3) The unstable tree is a RedBlack Tree - so its balancing is based on the
+ * colors of the nodes and not on their contents, assuring that even when
+ * the tree gets "corrupted" it won't get out of balance, so scanning time
+ * remains the same (also, searching and inserting nodes in an rbtree uses
+ * the same algorithm, so we have no overhead when we flush and rebuild).
+ * 4) KSM never flushes the stable tree, which means that even if it were to
+ * take 10 attempts to find a page in the unstable tree, once it is found,
+ * it is secured in the stable tree. (When we scan a new page, we first
+ * compare it against the stable tree, and then against the unstable tree.)
+ */
+
+/**
+ * struct mm_slot - ksm information per mm that is being scanned
+ * @link: link to the mm_slots hash list
+ * @mm_list: link into the mm_slots list, rooted in ksm_mm_head
+ * @rmap_list: head for this mm_slot's list of rmap_items
+ * @mm: the mm that this information is valid for
+ */
+struct mm_slot {
+ struct hlist_node link;
+ struct list_head mm_list;
+ struct list_head rmap_list;
+ struct mm_struct *mm;
+};
+
+/**
+ * struct ksm_scan - cursor for scanning
+ * @mm_slot: the current mm_slot we are scanning
+ * @address: the next address inside that to be scanned
+ * @rmap_item: the current rmap that we are scanning inside the rmap_list
+ * @seqnr: count of completed full scans (needed when removing unstable node)
+ *
+ * There is only the one ksm_scan instance of this cursor structure.
+ */
+struct ksm_scan {
+ struct mm_slot *mm_slot;
+ unsigned long address;
+ struct rmap_item *rmap_item;
+ unsigned long seqnr;
+};
+
+/**
+ * struct rmap_item - reverse mapping item for virtual addresses
+ * @link: link into mm_slot's rmap_list (rmap_list is per mm)
+ * @mm: the memory structure this rmap_item is pointing into
+ * @address: the virtual address this rmap_item tracks (+ flags in low bits)
+ * @oldchecksum: previous checksum of the page at that virtual address
+ * @node: rb_node of this rmap_item in either unstable or stable tree
+ * @next: next rmap_item hanging off the same node of the stable tree
+ * @prev: previous rmap_item hanging off the same node of the stable tree
+ */
+struct rmap_item {
+ struct list_head link;
+ struct mm_struct *mm;
+ unsigned long address; /* + low bits used for flags below */
+ union {
+ unsigned int oldchecksum; /* when unstable */
+ struct rmap_item *next; /* when stable */
+ };
+ union {
+ struct rb_node node; /* when tree node */
+ struct rmap_item *prev; /* in stable list */
+ };
+};
+
+#define SEQNR_MASK 0x0ff /* low bits of unstable tree seqnr */
+#define NODE_FLAG 0x100 /* is a node of unstable or stable tree */
+#define STABLE_FLAG 0x200 /* is a node or list item of stable tree */
+
+/* The stable and unstable tree heads */
+static struct rb_root root_stable_tree = RB_ROOT;
+static struct rb_root root_unstable_tree = RB_ROOT;
+
+#define MM_SLOTS_HASH_HEADS 1024
+static struct hlist_head *mm_slots_hash;
+
+static struct mm_slot ksm_mm_head = {
+ .mm_list = LIST_HEAD_INIT(ksm_mm_head.mm_list),
+};
+static struct ksm_scan ksm_scan = {
+ .mm_slot = &ksm_mm_head,
+};
+
+static struct kmem_cache *rmap_item_cache;
+static struct kmem_cache *mm_slot_cache;
+
+/* The number of nodes in the stable tree */
+static unsigned long ksm_kernel_pages_allocated;
+
+/* The number of page slots sharing those nodes */
+static unsigned long ksm_pages_shared;
+
+/* Limit on the number of unswappable pages used */
+static unsigned long ksm_max_kernel_pages;
+
+/* Number of pages ksmd should scan in one batch */
+static unsigned int ksm_thread_pages_to_scan;
+
+/* Milliseconds ksmd should sleep between batches */
+static unsigned int ksm_thread_sleep_millisecs;
+
+#define KSM_RUN_STOP 0
+#define KSM_RUN_MERGE 1
+#define KSM_RUN_UNMERGE 2
+static unsigned int ksm_run;
+
+static DECLARE_WAIT_QUEUE_HEAD(ksm_thread_wait);
+static DEFINE_MUTEX(ksm_thread_mutex);
+static DEFINE_SPINLOCK(ksm_mmlist_lock);
+
+#define KSM_KMEM_CACHE(__struct, __flags) kmem_cache_create("ksm_"#__struct,\
+ sizeof(struct __struct), __alignof__(struct __struct),\
+ (__flags), NULL)
+
+static int __init ksm_slab_init(void)
+{
+ rmap_item_cache = KSM_KMEM_CACHE(rmap_item, 0);
+ if (!rmap_item_cache)
+ goto out;
+
+ mm_slot_cache = KSM_KMEM_CACHE(mm_slot, 0);
+ if (!mm_slot_cache)
+ goto out_free;
+
+ return 0;
+
+out_free:
+ kmem_cache_destroy(rmap_item_cache);
+out:
+ return -ENOMEM;
+}
+
+static void __init ksm_slab_free(void)
+{
+ kmem_cache_destroy(mm_slot_cache);
+ kmem_cache_destroy(rmap_item_cache);
+ mm_slot_cache = NULL;
+}
+
+static inline struct rmap_item *alloc_rmap_item(void)
+{
+ return kmem_cache_zalloc(rmap_item_cache, GFP_KERNEL);
+}
+
+static inline void free_rmap_item(struct rmap_item *rmap_item)
+{
+ rmap_item->mm = NULL; /* debug safety */
+ kmem_cache_free(rmap_item_cache, rmap_item);
+}
+
+static inline struct mm_slot *alloc_mm_slot(void)
+{
+ if (!mm_slot_cache) /* initialization failed */
+ return NULL;
+ return kmem_cache_zalloc(mm_slot_cache, GFP_KERNEL);
+}
+
+static inline void free_mm_slot(struct mm_slot *mm_slot)
+{
+ kmem_cache_free(mm_slot_cache, mm_slot);
+}
+
+static int __init mm_slots_hash_init(void)
+{
+ mm_slots_hash = kzalloc(MM_SLOTS_HASH_HEADS * sizeof(struct hlist_head),
+ GFP_KERNEL);
+ if (!mm_slots_hash)
+ return -ENOMEM;
+ return 0;
+}
+
+static void __init mm_slots_hash_free(void)
+{
+ kfree(mm_slots_hash);
+}
+
+static struct mm_slot *get_mm_slot(struct mm_struct *mm)
+{
+ struct mm_slot *mm_slot;
+ struct hlist_head *bucket;
+ struct hlist_node *node;
+
+ bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
+ % MM_SLOTS_HASH_HEADS];
+ hlist_for_each_entry(mm_slot, node, bucket, link) {
+ if (mm == mm_slot->mm)
+ return mm_slot;
+ }
+ return NULL;
+}
+
+static void insert_to_mm_slots_hash(struct mm_struct *mm,
+ struct mm_slot *mm_slot)
+{
+ struct hlist_head *bucket;
+
+ bucket = &mm_slots_hash[((unsigned long)mm / sizeof(struct mm_struct))
+ % MM_SLOTS_HASH_HEADS];
+ mm_slot->mm = mm;
+ INIT_LIST_HEAD(&mm_slot->rmap_list);
+ hlist_add_head(&mm_slot->link, bucket);
+}
+
+static inline int in_stable_tree(struct rmap_item *rmap_item)
+{
+ return rmap_item->address & STABLE_FLAG;
+}
+
+/*
+ * We use break_ksm to break COW on a ksm page: it's a stripped down
+ *
+ * if (get_user_pages(current, mm, addr, 1, 1, 1, &page, NULL) == 1)
+ * put_page(page);
+ *
+ * but taking great care only to touch a ksm page, in a VM_MERGEABLE vma,
+ * in case the application has unmapped and remapped mm,addr meanwhile.
+ * Could a ksm page appear anywhere else? Actually yes, in a VM_PFNMAP
+ * mmap of /dev/mem or /dev/kmem, where we would not want to touch it.
+ */
+static void break_ksm(struct vm_area_struct *vma, unsigned long addr)
+{
+ struct page *page;
+ int ret;
+
+ do {
+ cond_resched();
+ page = follow_page(vma, addr, FOLL_GET);
+ if (!page)
+ break;
+ if (PageKsm(page))
+ ret = handle_mm_fault(vma->vm_mm, vma, addr,
+ FAULT_FLAG_WRITE);
+ else
+ ret = VM_FAULT_WRITE;
+ put_page(page);
+ } while (!(ret & (VM_FAULT_WRITE | VM_FAULT_SIGBUS)));
+
+ /* Which leaves us looping there if VM_FAULT_OOM: hmmm... */
+}
+
+static void __break_cow(struct mm_struct *mm, unsigned long addr)
+{
+ struct vm_area_struct *vma;
+
+ vma = find_vma(mm, addr);
+ if (!vma || vma->vm_start > addr)
+ return;
+ if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
+ return;
+ break_ksm(vma, addr);
+}
+
+static void break_cow(struct mm_struct *mm, unsigned long addr)
+{
+ down_read(&mm->mmap_sem);
+ __break_cow(mm, addr);
+ up_read(&mm->mmap_sem);
+}
+
+static struct page *get_mergeable_page(struct rmap_item *rmap_item)
+{
+ struct mm_struct *mm = rmap_item->mm;
+ unsigned long addr = rmap_item->address;
+ struct vm_area_struct *vma;
+ struct page *page;
+
+ down_read(&mm->mmap_sem);
+ vma = find_vma(mm, addr);
+ if (!vma || vma->vm_start > addr)
+ goto out;
+ if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
+ goto out;
+
+ page = follow_page(vma, addr, FOLL_GET);
+ if (!page)
+ goto out;
+ if (PageAnon(page)) {
+ flush_anon_page(vma, page, addr);
+ flush_dcache_page(page);
+ } else {
+ put_page(page);
+out: page = NULL;
+ }
+ up_read(&mm->mmap_sem);
+ return page;
+}
+
+/*
+ * get_ksm_page: checks if the page at the virtual address in rmap_item
+ * is still PageKsm, in which case we can trust the content of the page,
+ * and it returns the gotten page; but NULL if the page has been zapped.
+ */
+static struct page *get_ksm_page(struct rmap_item *rmap_item)
+{
+ struct page *page;
+
+ page = get_mergeable_page(rmap_item);
+ if (page && !PageKsm(page)) {
+ put_page(page);
+ page = NULL;
+ }
+ return page;
+}
+
+/*
+ * Removing rmap_item from stable or unstable tree.
+ * This function will clean the information from the stable/unstable tree.
+ */
+static void remove_rmap_item_from_tree(struct rmap_item *rmap_item)
+{
+ if (in_stable_tree(rmap_item)) {
+ struct rmap_item *next_item = rmap_item->next;
+
+ if (rmap_item->address & NODE_FLAG) {
+ if (next_item) {
+ rb_replace_node(&rmap_item->node,
+ &next_item->node,
+ &root_stable_tree);
+ next_item->address |= NODE_FLAG;
+ } else {
+ rb_erase(&rmap_item->node, &root_stable_tree);
+ ksm_kernel_pages_allocated--;
+ }
+ } else {
+ struct rmap_item *prev_item = rmap_item->prev;
+
+ BUG_ON(prev_item->next != rmap_item);
+ prev_item->next = next_item;
+ if (next_item) {
+ BUG_ON(next_item->prev != rmap_item);
+ next_item->prev = rmap_item->prev;
+ }
+ }
+
+ rmap_item->next = NULL;
+ ksm_pages_shared--;
+
+ } else if (rmap_item->address & NODE_FLAG) {
+ unsigned char age;
+ /*
+ * ksm_thread can and must skip the rb_erase, because
+ * root_unstable_tree was already reset to RB_ROOT.
+ * But __ksm_exit has to be careful: do the rb_erase
+ * if it's interrupting a scan, and this rmap_item was
+ * inserted by this scan rather than left from before.
+ *
+ * Because of the case in which remove_mm_from_lists
+ * increments seqnr before removing rmaps, unstable_nr
+ * may even be 2 behind seqnr, but should never be
+ * further behind. Yes, I did have trouble with this!
+ */
+ age = (unsigned char)(ksm_scan.seqnr - rmap_item->address);
+ BUG_ON(age > 2);
+ if (!age)
+ rb_erase(&rmap_item->node, &root_unstable_tree);
+ }
+
+ rmap_item->address &= PAGE_MASK;
+
+ cond_resched(); /* we're called from many long loops */
+}
+
+static void remove_all_slot_rmap_items(struct mm_slot *mm_slot)
+{
+ struct rmap_item *rmap_item, *node;
+
+ list_for_each_entry_safe(rmap_item, node, &mm_slot->rmap_list, link) {
+ remove_rmap_item_from_tree(rmap_item);
+ list_del(&rmap_item->link);
+ free_rmap_item(rmap_item);
+ }
+}
+
+static void remove_trailing_rmap_items(struct mm_slot *mm_slot,
+ struct list_head *cur)
+{
+ struct rmap_item *rmap_item;
+
+ while (cur != &mm_slot->rmap_list) {
+ rmap_item = list_entry(cur, struct rmap_item, link);
+ cur = cur->next;
+ remove_rmap_item_from_tree(rmap_item);
+ list_del(&rmap_item->link);
+ free_rmap_item(rmap_item);
+ }
+}
+
+/*
+ * Though it's very tempting to unmerge in_stable_tree(rmap_item)s rather
+ * than check every pte of a given vma, the locking doesn't quite work for
+ * that - an rmap_item is assigned to the stable tree after inserting ksm
+ * page and upping mmap_sem. Nor does it fit with the way we skip dup'ing
+ * rmap_items from parent to child at fork time (so as not to waste time
+ * if exit comes before the next scan reaches it).
+ */
+static void unmerge_ksm_pages(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+ unsigned long addr;
+
+ for (addr = start; addr < end; addr += PAGE_SIZE)
+ break_ksm(vma, addr);
+}
+
+static void unmerge_and_remove_all_rmap_items(void)
+{
+ struct mm_slot *mm_slot;
+ struct mm_struct *mm;
+ struct vm_area_struct *vma;
+
+ list_for_each_entry(mm_slot, &ksm_mm_head.mm_list, mm_list) {
+ mm = mm_slot->mm;
+ down_read(&mm->mmap_sem);
+ for (vma = mm->mmap; vma; vma = vma->vm_next) {
+ if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
+ continue;
+ unmerge_ksm_pages(vma, vma->vm_start, vma->vm_end);
+ }
+ remove_all_slot_rmap_items(mm_slot);
+ up_read(&mm->mmap_sem);
+ }
+
+ spin_lock(&ksm_mmlist_lock);
+ if (ksm_scan.mm_slot != &ksm_mm_head) {
+ ksm_scan.mm_slot = &ksm_mm_head;
+ ksm_scan.seqnr++;
+ }
+ spin_unlock(&ksm_mmlist_lock);
+}
+
+static void remove_mm_from_lists(struct mm_struct *mm)
+{
+ struct mm_slot *mm_slot;
+
+ spin_lock(&ksm_mmlist_lock);
+ mm_slot = get_mm_slot(mm);
+
+ /*
+ * This mm_slot is always at the scanning cursor when we're
+ * called from scan_get_next_rmap_item; but it's a special
+ * case when we're called from __ksm_exit.
+ */
+ if (ksm_scan.mm_slot == mm_slot) {
+ ksm_scan.mm_slot = list_entry(
+ mm_slot->mm_list.next, struct mm_slot, mm_list);
+ ksm_scan.address = 0;
+ ksm_scan.rmap_item = list_entry(
+ &ksm_scan.mm_slot->rmap_list, struct rmap_item, link);
+ if (ksm_scan.mm_slot == &ksm_mm_head)
+ ksm_scan.seqnr++;
+ }
+
+ hlist_del(&mm_slot->link);
+ list_del(&mm_slot->mm_list);
+ spin_unlock(&ksm_mmlist_lock);
+
+ remove_all_slot_rmap_items(mm_slot);
+ free_mm_slot(mm_slot);
+ clear_bit(MMF_VM_MERGEABLE, &mm->flags);
+}
+
+static u32 calc_checksum(struct page *page)
+{
+ u32 checksum;
+ void *addr = kmap_atomic(page, KM_USER0);
+ checksum = jhash2(addr, PAGE_SIZE / 4, 17);
+ kunmap_atomic(addr, KM_USER0);
+ return checksum;
+}
+
+static int memcmp_pages(struct page *page1, struct page *page2)
+{
+ char *addr1, *addr2;
+ int ret;
+
+ addr1 = kmap_atomic(page1, KM_USER0);
+ addr2 = kmap_atomic(page2, KM_USER1);
+ ret = memcmp(addr1, addr2, PAGE_SIZE);
+ kunmap_atomic(addr2, KM_USER1);
+ kunmap_atomic(addr1, KM_USER0);
+ return ret;
+}
+
+static inline int pages_identical(struct page *page1, struct page *page2)
+{
+ return !memcmp_pages(page1, page2);
+}
+
+static int write_protect_page(struct vm_area_struct *vma, struct page *page,
+ pte_t *orig_pte)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ unsigned long addr;
+ pte_t *ptep;
+ spinlock_t *ptl;
+ int swapped;
+ int err = -EFAULT;
+
+ addr = page_address_in_vma(page, vma);
+ if (addr == -EFAULT)
+ goto out;
+
+ ptep = page_check_address(page, mm, addr, &ptl, 0);
+ if (!ptep)
+ goto out;
+
+ if (pte_write(*ptep)) {
+ pte_t entry;
+
+ swapped = PageSwapCache(page);
+ flush_cache_page(vma, addr, page_to_pfn(page));
+ /*
+ * Ok this is tricky: when get_user_pages_fast() runs it doesn't
+ * take any lock, therefore the check that we are about to make
+ * with the page count against the map count is racy and
+ * O_DIRECT can happen right after the check.
+ * So we clear the pte and flush the tlb before the check;
+ * this assures us that no O_DIRECT can happen after the check
+ * or in the middle of the check.
+ */
+ entry = ptep_clear_flush(vma, addr, ptep);
+ /*
+ * Check that no O_DIRECT or similar I/O is in progress on the
+ * page
+ */
+ if ((page_mapcount(page) + 2 + swapped) != page_count(page)) {
+ set_pte_at_notify(mm, addr, ptep, entry);
+ goto out_unlock;
+ }
+ entry = pte_wrprotect(entry);
+ set_pte_at_notify(mm, addr, ptep, entry);
+ }
+ *orig_pte = *ptep;
+ err = 0;
+
+out_unlock:
+ pte_unmap_unlock(ptep, ptl);
+out:
+ return err;
+}
+
+/**
+ * replace_page - replace page in vma by new ksm page
+ * @vma: vma that holds the pte pointing to oldpage
+ * @oldpage: the page we are replacing by newpage
+ * @newpage: the ksm page we replace oldpage by
+ * @orig_pte: the original value of the pte
+ *
+ * Returns 0 on success, -EFAULT on failure.
+ */
+static int replace_page(struct vm_area_struct *vma, struct page *oldpage,
+ struct page *newpage, pte_t orig_pte)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *ptep;
+ spinlock_t *ptl;
+ unsigned long addr;
+ pgprot_t prot;
+ int err = -EFAULT;
+
+ prot = vm_get_page_prot(vma->vm_flags & ~VM_WRITE);
+
+ addr = page_address_in_vma(oldpage, vma);
+ if (addr == -EFAULT)
+ goto out;
+
+ pgd = pgd_offset(mm, addr);
+ if (!pgd_present(*pgd))
+ goto out;
+
+ pud = pud_offset(pgd, addr);
+ if (!pud_present(*pud))
+ goto out;
+
+ pmd = pmd_offset(pud, addr);
+ if (!pmd_present(*pmd))
+ goto out;
+
+ ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
+ if (!pte_same(*ptep, orig_pte)) {
+ pte_unmap_unlock(ptep, ptl);
+ goto out;
+ }
+
+ get_page(newpage);
+ page_add_ksm_rmap(newpage);
+
+ flush_cache_page(vma, addr, pte_pfn(*ptep));
+ ptep_clear_flush(vma, addr, ptep);
+ set_pte_at_notify(mm, addr, ptep, mk_pte(newpage, prot));
+
+ page_remove_rmap(oldpage);
+ put_page(oldpage);
+
+ pte_unmap_unlock(ptep, ptl);
+ err = 0;
+out:
+ return err;
+}
+
+/*
+ * try_to_merge_one_page - take two pages and merge them into one
+ * @vma: the vma that holds the pte pointing into oldpage
+ * @oldpage: the page that we want to replace with newpage
+ * @newpage: the page that we want to map instead of oldpage
+ *
+ * Note:
+ * oldpage should be a PageAnon page, while newpage should be a PageKsm page,
+ * or a newly allocated kernel page which page_add_ksm_rmap will make PageKsm.
+ *
+ * This function returns 0 if the pages were merged, -EFAULT otherwise.
+ */
+static int try_to_merge_one_page(struct vm_area_struct *vma,
+ struct page *oldpage,
+ struct page *newpage)
+{
+ pte_t orig_pte = __pte(0);
+ int err = -EFAULT;
+
+ if (!(vma->vm_flags & VM_MERGEABLE))
+ goto out;
+
+ if (!PageAnon(oldpage))
+ goto out;
+
+ get_page(newpage);
+ get_page(oldpage);
+
+ /*
+ * We need the page lock to read a stable PageSwapCache in
+ * write_protect_page(). We use trylock_page() instead of
+ * lock_page() because we don't want to wait here - we
+ * prefer to continue scanning and merging different pages,
+ * then come back to this page when it is unlocked.
+ */
+ if (!trylock_page(oldpage))
+ goto out_putpage;
+ /*
+ * If this anonymous page is mapped only here, its pte may need
+ * to be write-protected. If it's mapped elsewhere, all of its
+ * ptes are necessarily already write-protected. But in either
+ * case, we need to lock and check page_count is not raised.
+ */
+ if (write_protect_page(vma, oldpage, &orig_pte)) {
+ unlock_page(oldpage);
+ goto out_putpage;
+ }
+ unlock_page(oldpage);
+
+ if (pages_identical(oldpage, newpage))
+ err = replace_page(vma, oldpage, newpage, orig_pte);
+
+out_putpage:
+ put_page(oldpage);
+ put_page(newpage);
+out:
+ return err;
+}
+
+/*
+ * try_to_merge_two_pages - take two identical pages and prepare them
+ * to be merged into one page.
+ *
+ * This function returns 0 if we successfully mapped two identical pages
+ * into one page, -EFAULT otherwise.
+ *
+ * Note that this function allocates a new kernel page: if one of the pages
+ * is already a ksm page, try_to_merge_with_ksm_page should be used.
+ */
+static int try_to_merge_two_pages(struct mm_struct *mm1, unsigned long addr1,
+ struct page *page1, struct mm_struct *mm2,
+ unsigned long addr2, struct page *page2)
+{
+ struct vm_area_struct *vma;
+ struct page *kpage;
+ int err = -EFAULT;
+
+ /*
+ * The number of nodes in the stable tree
+ * is the number of kernel pages that we hold.
+ */
+ if (ksm_max_kernel_pages &&
+ ksm_max_kernel_pages <= ksm_kernel_pages_allocated)
+ return err;
+
+ kpage = alloc_page(GFP_HIGHUSER);
+ if (!kpage)
+ return err;
+
+ down_read(&mm1->mmap_sem);
+ vma = find_vma(mm1, addr1);
+ if (!vma || vma->vm_start > addr1) {
+ put_page(kpage);
+ up_read(&mm1->mmap_sem);
+ return err;
+ }
+
+ copy_user_highpage(kpage, page1, addr1, vma);
+ err = try_to_merge_one_page(vma, page1, kpage);
+ up_read(&mm1->mmap_sem);
+
+ if (!err) {
+ down_read(&mm2->mmap_sem);
+ vma = find_vma(mm2, addr2);
+ if (!vma || vma->vm_start > addr2) {
+ put_page(kpage);
+ up_read(&mm2->mmap_sem);
+ break_cow(mm1, addr1);
+ return -EFAULT;
+ }
+
+ err = try_to_merge_one_page(vma, page2, kpage);
+ up_read(&mm2->mmap_sem);
+
+ /*
+ * If the second try_to_merge_one_page failed, we have a
+ * ksm page with just one pte pointing to it, so break it.
+ */
+ if (err)
+ break_cow(mm1, addr1);
+ else
+ ksm_pages_shared += 2;
+ }
+
+ put_page(kpage);
+ return err;
+}
+
+/*
+ * try_to_merge_with_ksm_page - like try_to_merge_two_pages,
+ * but no new kernel page is allocated: kpage must already be a ksm page.
+ */
+static int try_to_merge_with_ksm_page(struct mm_struct *mm1,
+ unsigned long addr1,
+ struct page *page1,
+ struct page *kpage)
+{
+ struct vm_area_struct *vma;
+ int err = -EFAULT;
+
+ down_read(&mm1->mmap_sem);
+ vma = find_vma(mm1, addr1);
+ if (!vma || vma->vm_start > addr1) {
+ up_read(&mm1->mmap_sem);
+ return err;
+ }
+
+ err = try_to_merge_one_page(vma, page1, kpage);
+ up_read(&mm1->mmap_sem);
+
+ if (!err)
+ ksm_pages_shared++;
+
+ return err;
+}
+
+/*
+ * stable_tree_search - search page inside the stable tree
+ * @page: the page that we are searching identical pages to.
+ * @page2: pointer into identical page that we are holding inside the stable
+ * tree that we have found.
+ * @rmap_item: the reverse mapping item
+ *
+ * This function checks if there is a page inside the stable tree
+ * with identical content to the page that we are scanning right now.
+ *
+ * This function returns the rmap_item pointer to the identical item if found,
+ * NULL otherwise.
+ */
+static struct rmap_item *stable_tree_search(struct page *page,
+ struct page **page2,
+ struct rmap_item *rmap_item)
+{
+ struct rb_node *node = root_stable_tree.rb_node;
+
+ while (node) {
+ struct rmap_item *tree_rmap_item, *next_rmap_item;
+ int ret;
+
+ tree_rmap_item = rb_entry(node, struct rmap_item, node);
+ while (tree_rmap_item) {
+ BUG_ON(!in_stable_tree(tree_rmap_item));
+ cond_resched();
+ page2[0] = get_ksm_page(tree_rmap_item);
+ if (page2[0])
+ break;
+ next_rmap_item = tree_rmap_item->next;
+ remove_rmap_item_from_tree(tree_rmap_item);
+ tree_rmap_item = next_rmap_item;
+ }
+ if (!tree_rmap_item)
+ return NULL;
+
+ /*
+ * We can trust the value of the memcmp as we know the pages
+ * are write protected.
+ */
+ ret = memcmp_pages(page, page2[0]);
+
+ if (ret < 0) {
+ put_page(page2[0]);
+ node = node->rb_left;
+ } else if (ret > 0) {
+ put_page(page2[0]);
+ node = node->rb_right;
+ } else {
+ return tree_rmap_item;
+ }
+ }
+
+ return NULL;
+}
+
+/*
+ * stable_tree_insert - insert rmap_item pointing to new ksm page
+ * into the stable tree.
+ *
+ * @page: the page that we are searching identical page to inside the stable
+ * tree.
+ * @rmap_item: pointer to the reverse mapping item.
+ *
+ * This function returns rmap_item if success, NULL otherwise.
+ */
+static struct rmap_item *stable_tree_insert(struct page *page,
+ struct rmap_item *rmap_item)
+{
+ struct rb_node **new = &root_stable_tree.rb_node;
+ struct rb_node *parent = NULL;
+ struct page *page2[1];
+
+ while (*new) {
+ struct rmap_item *tree_rmap_item, *next_rmap_item;
+ int ret;
+
+ tree_rmap_item = rb_entry(*new, struct rmap_item, node);
+ while (tree_rmap_item) {
+ BUG_ON(!in_stable_tree(tree_rmap_item));
+ cond_resched();
+ page2[0] = get_ksm_page(tree_rmap_item);
+ if (page2[0])
+ break;
+ next_rmap_item = tree_rmap_item->next;
+ remove_rmap_item_from_tree(tree_rmap_item);
+ tree_rmap_item = next_rmap_item;
+ }
+ if (!tree_rmap_item)
+ return NULL;
+
+ ret = memcmp_pages(page, page2[0]);
+
+ parent = *new;
+ if (ret < 0) {
+ put_page(page2[0]);
+ new = &parent->rb_left;
+ } else if (ret > 0) {
+ put_page(page2[0]);
+ new = &parent->rb_right;
+ } else {
+ /*
+ * It is not a bug when we come here (the fact that
+ * we didn't find the page inside the stable tree):
+ * because when we searched for the page inside the
+ * stable tree it was still not write-protected,
+ * so therefore it could have changed later.
+ */
+ return NULL;
+ }
+ }
+
+ ksm_kernel_pages_allocated++;
+
+ rmap_item->address |= NODE_FLAG | STABLE_FLAG;
+ rmap_item->next = NULL;
+ rb_link_node(&rmap_item->node, parent, new);
+ rb_insert_color(&rmap_item->node, &root_stable_tree);
+
+ return rmap_item;
+}
+
+/*
+ * unstable_tree_search_insert - search and insert items into the unstable tree.
+ *
+ * @page: the page that we are going to search for identical page or to insert
+ * into the unstable tree
+ * @page2: pointer into identical page that was found inside the unstable tree
+ * @rmap_item: the reverse mapping item of page
+ *
+ * This function searches for a page in the unstable tree identical to the
+ * page currently being scanned; and if no identical page is found in the
+ * tree, we insert rmap_item as a new object into the unstable tree.
+ *
+ * This function returns pointer to rmap_item found to be identical
+ * to the currently scanned page, NULL otherwise.
+ *
+ * This function does both searching and inserting, because they share
+ * the same walking algorithm in an rbtree.
+ */
+static struct rmap_item *unstable_tree_search_insert(struct page *page,
+ struct page **page2,
+ struct rmap_item *rmap_item)
+{
+ struct rb_node **new = &root_unstable_tree.rb_node;
+ struct rb_node *parent = NULL;
+
+ while (*new) {
+ struct rmap_item *tree_rmap_item;
+ int ret;
+
+ tree_rmap_item = rb_entry(*new, struct rmap_item, node);
+ page2[0] = get_mergeable_page(tree_rmap_item);
+ if (!page2[0])
+ return NULL;
+
+ /*
+ * Don't substitute an unswappable ksm page
+ * just for one good swappable forked page.
+ */
+ if (page == page2[0]) {
+ put_page(page2[0]);
+ return NULL;
+ }
+
+ ret = memcmp_pages(page, page2[0]);
+
+ parent = *new;
+ if (ret < 0) {
+ put_page(page2[0]);
+ new = &parent->rb_left;
+ } else if (ret > 0) {
+ put_page(page2[0]);
+ new = &parent->rb_right;
+ } else {
+ return tree_rmap_item;
+ }
+ }
+
+ rmap_item->address |= NODE_FLAG;
+ rmap_item->address |= (ksm_scan.seqnr & SEQNR_MASK);
+ rb_link_node(&rmap_item->node, parent, new);
+ rb_insert_color(&rmap_item->node, &root_unstable_tree);
+
+ return NULL;
+}
+
+/*
+ * stable_tree_append - add another rmap_item to the linked list of
+ * rmap_items hanging off a given node of the stable tree, all sharing
+ * the same ksm page.
+ */
+static void stable_tree_append(struct rmap_item *rmap_item,
+ struct rmap_item *tree_rmap_item)
+{
+ rmap_item->next = tree_rmap_item->next;
+ rmap_item->prev = tree_rmap_item;
+
+ if (tree_rmap_item->next)
+ tree_rmap_item->next->prev = rmap_item;
+
+ tree_rmap_item->next = rmap_item;
+ rmap_item->address |= STABLE_FLAG;
+}
+
+/*
+ * cmp_and_merge_page - take a page, compute its hash value and check whether
+ * a different page has a similar hash value;
+ * in case we find such a page, we call
+ * try_to_merge_two_pages().
+ *
+ * @page: the page that we are searching identical page to.
+ * @rmap_item: the reverse mapping into the virtual address of this page
+ */
+static void cmp_and_merge_page(struct page *page, struct rmap_item *rmap_item)
+{
+ struct page *page2[1];
+ struct rmap_item *tree_rmap_item;
+ unsigned int checksum;
+ int err;
+
+ if (in_stable_tree(rmap_item))
+ remove_rmap_item_from_tree(rmap_item);
+
+ /* We first start with searching the page inside the stable tree */
+ tree_rmap_item = stable_tree_search(page, page2, rmap_item);
+ if (tree_rmap_item) {
+ if (page == page2[0]) { /* forked */
+ ksm_pages_shared++;
+ err = 0;
+ } else
+ err = try_to_merge_with_ksm_page(rmap_item->mm,
+ rmap_item->address,
+ page, page2[0]);
+ put_page(page2[0]);
+
+ if (!err) {
+ /*
+ * The page was successfully merged:
+ * add its rmap_item to the stable tree.
+ */
+ stable_tree_append(rmap_item, tree_rmap_item);
+ }
+ return;
+ }
+
+ /*
+ * A ksm page might have got here by fork, but its other
+ * references have already been removed from the stable tree.
+ */
+ if (PageKsm(page))
+ break_cow(rmap_item->mm, rmap_item->address);
+
+ /*
+ * If the hash value of the page has changed from the last time we
+ * calculated it, this page is being changed frequently, therefore we
+ * don't want to insert it into the unstable tree, and we don't want to
+ * waste our time searching for something identical to it there.
+ */
+ checksum = calc_checksum(page);
+ if (rmap_item->oldchecksum != checksum) {
+ rmap_item->oldchecksum = checksum;
+ return;
+ }
+
+ tree_rmap_item = unstable_tree_search_insert(page, page2, rmap_item);
+ if (tree_rmap_item) {
+ err = try_to_merge_two_pages(rmap_item->mm,
+ rmap_item->address, page,
+ tree_rmap_item->mm,
+ tree_rmap_item->address, page2[0]);
+ /*
+ * As soon as we merge this page, we want to remove the
+ * rmap_item of the page we have merged with from the unstable
+ * tree, and insert it instead as new node in the stable tree.
+ */
+ if (!err) {
+ rb_erase(&tree_rmap_item->node, &root_unstable_tree);
+ tree_rmap_item->address &= ~NODE_FLAG;
+ /*
+ * If we fail to insert the page into the stable tree,
+ * we will have 2 virtual addresses that are pointing
+ * to a ksm page left outside the stable tree,
+ * in which case we need to break_cow on both.
+ */
+ if (stable_tree_insert(page2[0], tree_rmap_item))
+ stable_tree_append(rmap_item, tree_rmap_item);
+ else {
+ break_cow(tree_rmap_item->mm,
+ tree_rmap_item->address);
+ break_cow(rmap_item->mm, rmap_item->address);
+ ksm_pages_shared -= 2;
+ }
+ }
+
+ put_page(page2[0]);
+ }
+}
+
+static struct rmap_item *get_next_rmap_item(struct mm_slot *mm_slot,
+ struct list_head *cur,
+ unsigned long addr)
+{
+ struct rmap_item *rmap_item;
+
+ while (cur != &mm_slot->rmap_list) {
+ rmap_item = list_entry(cur, struct rmap_item, link);
+ if ((rmap_item->address & PAGE_MASK) == addr) {
+ if (!in_stable_tree(rmap_item))
+ remove_rmap_item_from_tree(rmap_item);
+ return rmap_item;
+ }
+ if (rmap_item->address > addr)
+ break;
+ cur = cur->next;
+ remove_rmap_item_from_tree(rmap_item);
+ list_del(&rmap_item->link);
+ free_rmap_item(rmap_item);
+ }
+
+ rmap_item = alloc_rmap_item();
+ if (rmap_item) {
+ /* It has already been zeroed */
+ rmap_item->mm = mm_slot->mm;
+ rmap_item->address = addr;
+ list_add_tail(&rmap_item->link, cur);
+ }
+ return rmap_item;
+}
+
+static struct rmap_item *scan_get_next_rmap_item(struct page **page)
+{
+ struct mm_struct *mm;
+ struct mm_slot *slot;
+ struct vm_area_struct *vma;
+ struct rmap_item *rmap_item;
+
+ if (list_empty(&ksm_mm_head.mm_list))
+ return NULL;
+
+ slot = ksm_scan.mm_slot;
+ if (slot == &ksm_mm_head) {
+ root_unstable_tree = RB_ROOT;
+
+ spin_lock(&ksm_mmlist_lock);
+ slot = list_entry(slot->mm_list.next, struct mm_slot, mm_list);
+ ksm_scan.mm_slot = slot;
+ spin_unlock(&ksm_mmlist_lock);
+next_mm:
+ ksm_scan.address = 0;
+ ksm_scan.rmap_item = list_entry(&slot->rmap_list,
+ struct rmap_item, link);
+ }
+
+ mm = slot->mm;
+ down_read(&mm->mmap_sem);
+ for (vma = find_vma(mm, ksm_scan.address); vma; vma = vma->vm_next) {
+ if (!(vma->vm_flags & VM_MERGEABLE))
+ continue;
+ if (ksm_scan.address < vma->vm_start)
+ ksm_scan.address = vma->vm_start;
+ if (!vma->anon_vma)
+ ksm_scan.address = vma->vm_end;
+
+ while (ksm_scan.address < vma->vm_end) {
+ *page = follow_page(vma, ksm_scan.address, FOLL_GET);
+ if (*page && PageAnon(*page)) {
+ flush_anon_page(vma, *page, ksm_scan.address);
+ flush_dcache_page(*page);
+ rmap_item = get_next_rmap_item(slot,
+ ksm_scan.rmap_item->link.next,
+ ksm_scan.address);
+ if (rmap_item) {
+ ksm_scan.rmap_item = rmap_item;
+ ksm_scan.address += PAGE_SIZE;
+ } else
+ put_page(*page);
+ up_read(&mm->mmap_sem);
+ return rmap_item;
+ }
+ if (*page)
+ put_page(*page);
+ ksm_scan.address += PAGE_SIZE;
+ cond_resched();
+ }
+ }
+
+ if (!ksm_scan.address) {
+ /*
+ * We've completed a full scan of all vmas, holding mmap_sem
+ * throughout, and found no VM_MERGEABLE: so do the same as
+ * __ksm_exit does to remove this mm from all our lists now.
+ */
+ remove_mm_from_lists(mm);
+ up_read(&mm->mmap_sem);
+ slot = ksm_scan.mm_slot;
+ if (slot != &ksm_mm_head)
+ goto next_mm;
+ return NULL;
+ }
+
+ /*
+ * Nuke all the rmap_items that are above this current rmap:
+ * because there were no VM_MERGEABLE vmas with such addresses.
+ */
+ remove_trailing_rmap_items(slot, ksm_scan.rmap_item->link.next);
+ up_read(&mm->mmap_sem);
+
+ spin_lock(&ksm_mmlist_lock);
+ slot = list_entry(slot->mm_list.next, struct mm_slot, mm_list);
+ ksm_scan.mm_slot = slot;
+ spin_unlock(&ksm_mmlist_lock);
+
+ /* Repeat until we've completed scanning the whole list */
+ if (slot != &ksm_mm_head)
+ goto next_mm;
+
+ /*
+ * Bump seqnr here rather than at top, so that __ksm_exit
+ * can skip rb_erase on unstable tree until we run again.
+ */
+ ksm_scan.seqnr++;
+ return NULL;
+}
+
+/**
+ * ksm_do_scan - the ksm scanner main worker function.
+ * @scan_npages - number of pages we want to scan before we return.
+ */
+static void ksm_do_scan(unsigned int scan_npages)
+{
+ struct rmap_item *rmap_item;
+ struct page *page;
+
+ while (scan_npages--) {
+ cond_resched();
+ rmap_item = scan_get_next_rmap_item(&page);
+ if (!rmap_item)
+ return;
+ if (!PageKsm(page) || !in_stable_tree(rmap_item))
+ cmp_and_merge_page(page, rmap_item);
+ put_page(page);
+ }
+}
+
+static int ksm_scan_thread(void *nothing)
+{
+ set_user_nice(current, 0);
+
+ while (!kthread_should_stop()) {
+ if (ksm_run & KSM_RUN_MERGE) {
+ mutex_lock(&ksm_thread_mutex);
+ ksm_do_scan(ksm_thread_pages_to_scan);
+ mutex_unlock(&ksm_thread_mutex);
+ schedule_timeout_interruptible(
+ msecs_to_jiffies(ksm_thread_sleep_millisecs));
+ } else {
+ wait_event_interruptible(ksm_thread_wait,
+ (ksm_run & KSM_RUN_MERGE) ||
+ kthread_should_stop());
+ }
+ }
+ return 0;
+}
+
+int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end, int advice, unsigned long *vm_flags)
+{
+ struct mm_struct *mm = vma->vm_mm;
+
+ switch (advice) {
+ case MADV_MERGEABLE:
+ /*
+ * Be somewhat over-protective for now!
+ */
+ if (*vm_flags & (VM_MERGEABLE | VM_SHARED | VM_MAYSHARE |
+ VM_PFNMAP | VM_IO | VM_DONTEXPAND |
+ VM_RESERVED | VM_HUGETLB | VM_INSERTPAGE |
+ VM_MIXEDMAP | VM_SAO))
+ return 0; /* just ignore the advice */
+
+ if (!test_bit(MMF_VM_MERGEABLE, &mm->flags))
+ if (__ksm_enter(mm) < 0)
+ return -EAGAIN;
+
+ *vm_flags |= VM_MERGEABLE;
+ break;
+
+ case MADV_UNMERGEABLE:
+ if (!(*vm_flags & VM_MERGEABLE))
+ return 0; /* just ignore the advice */
+
+ if (vma->anon_vma)
+ unmerge_ksm_pages(vma, start, end);
+
+ *vm_flags &= ~VM_MERGEABLE;
+ break;
+ }
+
+ return 0;
+}
+
+int __ksm_enter(struct mm_struct *mm)
+{
+ struct mm_slot *mm_slot = alloc_mm_slot();
+ if (!mm_slot)
+ return -ENOMEM;
+
+ spin_lock(&ksm_mmlist_lock);
+ insert_to_mm_slots_hash(mm, mm_slot);
+ /*
+ * Insert just behind the scanning cursor, to let the area settle
+ * down a little; when fork is followed by immediate exec, we don't
+ * want ksmd to waste time setting up and tearing down an rmap_list.
+ */
+ list_add_tail(&mm_slot->mm_list, &ksm_scan.mm_slot->mm_list);
+ spin_unlock(&ksm_mmlist_lock);
+
+ set_bit(MMF_VM_MERGEABLE, &mm->flags);
+ return 0;
+}
+
+void __ksm_exit(struct mm_struct *mm)
+{
+ /*
+ * This process is exiting: doesn't hold and doesn't need mmap_sem;
+ * but we do need to exclude ksmd and other exiters while we modify
+ * the various lists and trees.
+ */
+ mutex_lock(&ksm_thread_mutex);
+ remove_mm_from_lists(mm);
+ mutex_unlock(&ksm_thread_mutex);
+}
+
+#define KSM_ATTR_RO(_name) \
+ static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
+#define KSM_ATTR(_name) \
+ static struct kobj_attribute _name##_attr = \
+ __ATTR(_name, 0644, _name##_show, _name##_store)
+
+static ssize_t sleep_millisecs_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%u\n", ksm_thread_sleep_millisecs);
+}
+
+static ssize_t sleep_millisecs_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned long msecs;
+ int err;
+
+ err = strict_strtoul(buf, 10, &msecs);
+ if (err || msecs > UINT_MAX)
+ return -EINVAL;
+
+ ksm_thread_sleep_millisecs = msecs;
+
+ return count;
+}
+KSM_ATTR(sleep_millisecs);
+
+static ssize_t pages_to_scan_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%u\n", ksm_thread_pages_to_scan);
+}
+
+static ssize_t pages_to_scan_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ int err;
+ unsigned long nr_pages;
+
+ err = strict_strtoul(buf, 10, &nr_pages);
+ if (err || nr_pages > UINT_MAX)
+ return -EINVAL;
+
+ ksm_thread_pages_to_scan = nr_pages;
+
+ return count;
+}
+KSM_ATTR(pages_to_scan);
+
+static ssize_t run_show(struct kobject *kobj, struct kobj_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "%u\n", ksm_run);
+}
+
+static ssize_t run_store(struct kobject *kobj, struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ int err;
+ unsigned long flags;
+
+ err = strict_strtoul(buf, 10, &flags);
+ if (err || flags > UINT_MAX)
+ return -EINVAL;
+ if (flags > KSM_RUN_UNMERGE)
+ return -EINVAL;
+
+ /*
+ * KSM_RUN_MERGE sets ksmd running, and 0 stops it running.
+ * KSM_RUN_UNMERGE stops it running and unmerges all rmap_items,
+ * breaking COW to free the kernel_pages_allocated (but leaves
+ * mm_slots on the list for when ksmd may be set running again).
+ */
+
+ mutex_lock(&ksm_thread_mutex);
+ if (ksm_run != flags) {
+ ksm_run = flags;
+ if (flags & KSM_RUN_UNMERGE)
+ unmerge_and_remove_all_rmap_items();
+ }
+ mutex_unlock(&ksm_thread_mutex);
+
+ if (flags & KSM_RUN_MERGE)
+ wake_up_interruptible(&ksm_thread_wait);
+
+ return count;
+}
+KSM_ATTR(run);
+
+static ssize_t pages_shared_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%lu\n",
+ ksm_pages_shared - ksm_kernel_pages_allocated);
+}
+KSM_ATTR_RO(pages_shared);
+
+static ssize_t kernel_pages_allocated_show(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ char *buf)
+{
+ return sprintf(buf, "%lu\n", ksm_kernel_pages_allocated);
+}
+KSM_ATTR_RO(kernel_pages_allocated);
+
+static ssize_t max_kernel_pages_store(struct kobject *kobj,
+ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+{
+ int err;
+ unsigned long nr_pages;
+
+ err = strict_strtoul(buf, 10, &nr_pages);
+ if (err)
+ return -EINVAL;
+
+ ksm_max_kernel_pages = nr_pages;
+
+ return count;
+}
+
+static ssize_t max_kernel_pages_show(struct kobject *kobj,
+ struct kobj_attribute *attr, char *buf)
+{
+ return sprintf(buf, "%lu\n", ksm_max_kernel_pages);
+}
+KSM_ATTR(max_kernel_pages);
+
+static struct attribute *ksm_attrs[] = {
+ &sleep_millisecs_attr.attr,
+ &pages_to_scan_attr.attr,
+ &run_attr.attr,
+ &pages_shared_attr.attr,
+ &kernel_pages_allocated_attr.attr,
+ &max_kernel_pages_attr.attr,
+ NULL,
+};
+
+static struct attribute_group ksm_attr_group = {
+ .attrs = ksm_attrs,
+ .name = "ksm",
+};
+
+static int __init ksm_init(void)
+{
+ struct task_struct *ksm_thread;
+ int err;
+
+ err = ksm_slab_init();
+ if (err)
+ goto out;
+
+ err = mm_slots_hash_init();
+ if (err)
+ goto out_free1;
+
+ ksm_thread = kthread_run(ksm_scan_thread, NULL, "ksmd");
+ if (IS_ERR(ksm_thread)) {
+ printk(KERN_ERR "ksm: creating kthread failed\n");
+ err = PTR_ERR(ksm_thread);
+ goto out_free2;
+ }
+
+ err = sysfs_create_group(mm_kobj, &ksm_attr_group);
+ if (err) {
+ printk(KERN_ERR "ksm: register sysfs failed\n");
+ goto out_free3;
+ }
+
+ return 0;
+
+out_free3:
+ kthread_stop(ksm_thread);
+out_free2:
+ mm_slots_hash_free();
+out_free1:
+ ksm_slab_free();
+out:
+ return err;
+}
+module_init(ksm_init)
--- 2.6.31-rc2/mm/madvise.c 2009-06-25 05:18:10.000000000 +0100
+++ madv_ksm/mm/madvise.c 2009-07-05 00:51:29.000000000 +0100
@@ -11,6 +11,7 @@
#include <linux/mempolicy.h>
#include <linux/hugetlb.h>
#include <linux/sched.h>
+#include <linux/ksm.h>

/*
* Any behaviour which results in changes to the vma->vm_flags needs to
@@ -41,7 +42,7 @@ static long madvise_behavior(struct vm_a
struct mm_struct * mm = vma->vm_mm;
int error = 0;
pgoff_t pgoff;
- int new_flags = vma->vm_flags;
+ unsigned long new_flags = vma->vm_flags;

switch (behavior) {
case MADV_NORMAL:
@@ -57,8 +58,18 @@ static long madvise_behavior(struct vm_a
new_flags |= VM_DONTCOPY;
break;
case MADV_DOFORK:
+ if (vma->vm_flags & VM_IO) {
+ error = -EINVAL;
+ goto out;
+ }
new_flags &= ~VM_DONTCOPY;
break;
+ case MADV_MERGEABLE:
+ case MADV_UNMERGEABLE:
+ error = ksm_madvise(vma, start, end, behavior, &new_flags);
+ if (error)
+ goto out;
+ break;
}

if (new_flags == vma->vm_flags) {
@@ -211,37 +222,16 @@ static long
madvise_vma(struct vm_area_struct *vma, struct vm_area_struct **prev,
unsigned long start, unsigned long end, int behavior)
{
- long error;
-
switch (behavior) {
- case MADV_DOFORK:
- if (vma->vm_flags & VM_IO) {
- error = -EINVAL;
- break;
- }
- case MADV_DONTFORK:
- case MADV_NORMAL:
- case MADV_SEQUENTIAL:
- case MADV_RANDOM:
- error = madvise_behavior(vma, prev, start, end, behavior);
- break;
case MADV_REMOVE:
- error = madvise_remove(vma, prev, start, end);
- break;
-
+ return madvise_remove(vma, prev, start, end);
case MADV_WILLNEED:
- error = madvise_willneed(vma, prev, start, end);
- break;
-
+ return madvise_willneed(vma, prev, start, end);
case MADV_DONTNEED:
- error = madvise_dontneed(vma, prev, start, end);
- break;
-
+ return madvise_dontneed(vma, prev, start, end);
default:
- BUG();
- break;
+ return madvise_behavior(vma, prev, start, end, behavior);
}
- return error;
}

static int
@@ -256,12 +246,17 @@ madvise_behavior_valid(int behavior)
case MADV_REMOVE:
case MADV_WILLNEED:
case MADV_DONTNEED:
+#ifdef CONFIG_KSM
+ case MADV_MERGEABLE:
+ case MADV_UNMERGEABLE:
+#endif
return 1;

default:
return 0;
}
}
+
/*
* The madvise(2) system call.
*
@@ -286,6 +281,12 @@ madvise_behavior_valid(int behavior)
* so the kernel can free resources associated with it.
* MADV_REMOVE - the application wants to free up the given range of
* pages and associated backing store.
+ * MADV_DONTFORK - omit this area from child's address space when forking:
+ * typically, to avoid COWing pages pinned by get_user_pages().
+ * MADV_DOFORK - cancel MADV_DONTFORK: no longer omit this area when forking.
+ * MADV_MERGEABLE - the application recommends that KSM try to merge pages in
+ * this area with pages of identical content from other such areas.
+ * MADV_UNMERGEABLE- cancel MADV_MERGEABLE: no longer merge pages with others.
*
* return values:
* zero - success
--- 2.6.31-rc2/mm/memory.c 2009-07-04 21:26:08.000000000 +0100
+++ madv_ksm/mm/memory.c 2009-07-05 00:56:00.000000000 +0100
@@ -45,6 +45,7 @@
#include <linux/swap.h>
#include <linux/highmem.h>
#include <linux/pagemap.h>
+#include <linux/ksm.h>
#include <linux/rmap.h>
#include <linux/module.h>
#include <linux/delayacct.h>
@@ -595,7 +596,7 @@ copy_one_pte(struct mm_struct *dst_mm, s
page = vm_normal_page(vma, addr, pte);
if (page) {
get_page(page);
- page_dup_rmap(page, vma, addr);
+ page_dup_rmap(page);
rss[!!PageAnon(page)]++;
}

@@ -1972,7 +1973,7 @@ static int do_wp_page(struct mm_struct *
* Take out anonymous pages first, anonymous shared vmas are
* not dirty accountable.
*/
- if (PageAnon(old_page)) {
+ if (PageAnon(old_page) && !PageKsm(old_page)) {
if (!trylock_page(old_page)) {
page_cache_get(old_page);
pte_unmap_unlock(page_table, ptl);
@@ -2113,9 +2114,14 @@ gotten:
* seen in the presence of one thread doing SMC and another
* thread doing COW.
*/
- ptep_clear_flush_notify(vma, address, page_table);
+ ptep_clear_flush(vma, address, page_table);
page_add_new_anon_rmap(new_page, vma, address);
- set_pte_at(mm, address, page_table, entry);
+ /*
+ * We call the notify macro here because, when using secondary
+ * mmu page tables (such as kvm shadow page tables), we want the
+ * new page to be mapped directly into the secondary page table.
+ */
+ set_pte_at_notify(mm, address, page_table, entry);
update_mmu_cache(vma, address, entry);
if (old_page) {
/*
--- 2.6.31-rc2/mm/mmap.c 2009-06-25 05:18:10.000000000 +0100
+++ madv_ksm/mm/mmap.c 2009-07-05 00:56:00.000000000 +0100
@@ -659,9 +659,6 @@ again: remove_next = 1 + (end > next->
validate_mm(mm);
}

-/* Flags that can be inherited from an existing mapping when merging */
-#define VM_MERGEABLE_FLAGS (VM_CAN_NONLINEAR)
-
/*
* If the vma has a ->close operation then the driver probably needs to release
* per-vma resources, so we don't attempt to merge those.
@@ -669,7 +666,8 @@ again: remove_next = 1 + (end > next->
static inline int is_mergeable_vma(struct vm_area_struct *vma,
struct file *file, unsigned long vm_flags)
{
- if ((vma->vm_flags ^ vm_flags) & ~VM_MERGEABLE_FLAGS)
+ /* VM_CAN_NONLINEAR may get set later by f_op->mmap() */
+ if ((vma->vm_flags ^ vm_flags) & ~VM_CAN_NONLINEAR)
return 0;
if (vma->vm_file != file)
return 0;
--- 2.6.31-rc2/mm/mmu_notifier.c 2008-10-09 23:13:53.000000000 +0100
+++ madv_ksm/mm/mmu_notifier.c 2009-07-05 00:51:29.000000000 +0100
@@ -99,6 +99,26 @@ int __mmu_notifier_clear_flush_young(str
return young;
}

+void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
+ pte_t pte)
+{
+ struct mmu_notifier *mn;
+ struct hlist_node *n;
+
+ rcu_read_lock();
+ hlist_for_each_entry_rcu(mn, n, &mm->mmu_notifier_mm->list, hlist) {
+ if (mn->ops->change_pte)
+ mn->ops->change_pte(mn, mm, address, pte);
+ /*
+ * Some drivers don't have change_pte,
+ * so we must call invalidate_page in that case.
+ */
+ else if (mn->ops->invalidate_page)
+ mn->ops->invalidate_page(mn, mm, address);
+ }
+ rcu_read_unlock();
+}
+
void __mmu_notifier_invalidate_page(struct mm_struct *mm,
unsigned long address)
{
--- 2.6.31-rc2/mm/mremap.c 2009-03-23 23:12:14.000000000 +0000
+++ madv_ksm/mm/mremap.c 2009-07-07 21:58:29.000000000 +0100
@@ -11,6 +11,7 @@
#include <linux/hugetlb.h>
#include <linux/slab.h>
#include <linux/shm.h>
+#include <linux/ksm.h>
#include <linux/mman.h>
#include <linux/swap.h>
#include <linux/capability.h>
@@ -182,6 +183,17 @@ static unsigned long move_vma(struct vm_
if (mm->map_count >= sysctl_max_map_count - 3)
return -ENOMEM;

+ /*
+ * Advise KSM to break any KSM pages in the area to be moved:
+ * it would be confusing if they were to turn up at the new
+ * location, where they happen to coincide with different KSM
+ * pages recently unmapped. But leave vma->vm_flags as it was,
+ * so KSM can come around to merge on vma and new_vma afterwards.
+ */
+ if (ksm_madvise(vma, old_addr, old_addr + old_len,
+ MADV_UNMERGEABLE, &vm_flags))
+ return -ENOMEM;
+
new_pgoff = vma->vm_pgoff + ((old_addr - vma->vm_start) >> PAGE_SHIFT);
new_vma = copy_vma(&vma, new_addr, new_len, new_pgoff);
if (!new_vma)
--- 2.6.31-rc2/mm/rmap.c 2009-06-25 05:18:10.000000000 +0100
+++ madv_ksm/mm/rmap.c 2009-07-05 00:56:00.000000000 +0100
@@ -709,27 +709,6 @@ void page_add_file_rmap(struct page *pag
}
}

-#ifdef CONFIG_DEBUG_VM
-/**
- * page_dup_rmap - duplicate pte mapping to a page
- * @page: the page to add the mapping to
- * @vma: the vm area being duplicated
- * @address: the user virtual address mapped
- *
- * For copy_page_range only: minimal extract from page_add_file_rmap /
- * page_add_anon_rmap, avoiding unnecessary tests (already checked) so it's
- * quicker.
- *
- * The caller needs to hold the pte lock.
- */
-void page_dup_rmap(struct page *page, struct vm_area_struct *vma, unsigned long address)
-{
- if (PageAnon(page))
- __page_check_anon_rmap(page, vma, address);
- atomic_inc(&page->_mapcount);
-}
-#endif
-
/**
* page_remove_rmap - take down pte mapping from a page
* @page: page to remove mapping from

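For reference only, the interface added above can be exercised from
userspace roughly as in the sketch below. This is an illustration, not
part of the patch: it assumes the MADV_MERGEABLE/MADV_UNMERGEABLE values
from the mman.h hunks and the /sys/kernel/mm/ksm/ files registered by
ksm_init (writing "run" needs root).

/*
 * Illustrative sketch: map identical anonymous pages, mark them
 * MADV_MERGEABLE, and set ksmd running via sysfs.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#ifndef MADV_MERGEABLE
#define MADV_MERGEABLE   12	/* values assumed from the mman.h hunks */
#define MADV_UNMERGEABLE 13
#endif

int main(void)
{
	size_t len = 256 * 4096;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	FILE *f;

	if (buf == MAP_FAILED)
		return 1;
	memset(buf, 0x5a, len);			/* many identical pages */

	if (madvise(buf, len, MADV_MERGEABLE))	/* advise KSM to merge them */
		perror("madvise(MADV_MERGEABLE)");

	f = fopen("/sys/kernel/mm/ksm/run", "w");
	if (f) {
		fputs("1\n", f);		/* KSM_RUN_MERGE: start ksmd */
		fclose(f);
	}

	pause();	/* meanwhile watch pages_shared in /sys/kernel/mm/ksm/ */
	return 0;
}

While run stays at 1, pages_shared and kernel_pages_allocated in the same
sysfs directory should climb as ksmd scans and merges the region.
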
2009-07-09 15:35:19

by Izik Eidus

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

Hugh Dickins wrote:
> Hi Izik,
>
> Sorry, I've not yet replied to your response of 1 July, nor shall I
> right now. Instead, more urgent to send you my current KSM rollup,
> against 2.6.31-rc2, with which I'm now pretty happy - to the extent
> that I've put my signoff to it below.
>
> Though of course it's actually your and Andrea's and Chris's work,
> just played around with by me; I don't know what the order of
> signoffs should be in the end.
>
> What it mainly lacks is a Documentation file, and more statistics in
> sysfs: though we can already see how much is being merged, we don't
> see any comparison against how much isn't.
>
> But if you still like the patch below, let's advance to splitting
> it up and getting it into mmotm: I have some opinions on the splitup,
> I'll make some suggestions on that tomorrow.
>

I like it very much; you really cleaned it up, optimized it and made a
better interface out of it, I have to say. Thank you.
(Very high standard you have)

> You asked for a full diff against -rc2, but may want some explanation
> of differences from what I sent before. The main changes are:-
>
> A reliable PageKsm(), not dependent on the nature of the vma it's in:
> it's like PageAnon, but with NULL anon_vma - needs a couple of slight
> adjustments outside ksm.c.
>

Good change.

> Consequently, no reason to go on prohibiting KSM on private anonymous
> pages COWed from template file pages in file-backed vmas.
>

Agree.

> Most of what get_user_pages did for us was unhelpful: now rely on
> find_vma and follow_page and handle_mm_fault directly, which allow
> us to check VM_MERGEABLE and PageKsm ourselves where needed.
>
> Which eliminates the separate is_present_pte checks, and spares us
> from wasting rmap_items on absent ptes.
>

That is great, much better.
(I actually searched for where you had exported follow_page and
handle_mm_fault, and then realized that life is easier when you are not
a module anymore.)

> Which then drew attention to the hyperactive allocation and freeing
> of tree_items, "slabinfo -AD" showing huge activity there, even when
> idling. It's not much of a problem really, but might cause concern.
>
> And revealed that really those tree_items were a waste of space, can
> be packed within the rmap_items that pointed to them, while still
> keeping to the nice cache-friendly 64-byte or 32-byte rmap_item.
> (If another field needed later, can make rmap_list singly linked.)
>

That change, and the "is_stable_tree" flag embedded inside the
rmap_item address, are my favorite changes.

> mremap move issue sorted, in simplest COW-breaking way. My previous
> code to unmerge according to rmap_item->stable was racy/buggy for
> two reasons: ignore rmap_items there now, just scan the ptes.
>
> ksmd used to be running at higher priority: now nice 0.
>


That is indeed a logical change; maybe we can even punish it by another
5 points of nice...

> Moved mm_slot hash functions together; made hash table smaller
> now it's used less frequently than it was in your design.
>
> More cleanup, making similar things more alike.
>

Really quality work. (What I did was just walk from line 1 to the end of
ksm.c with my eyes; I still want to apply the patch and play with it.)

The only thing I was afraid of is whether the check inside
stable_tree_search is safe:

+ page2[0] = get_ksm_page(tree_rmap_item);
+ if (page2[0])
+ break;


But I convinced myself that it is safe, due to the fact that the page is
anonymous, so it can't get remapped by the user (to try to corrupt the
stable tree) without the page getting COW-broken.

So from my side I believe we can send it to mmotm. I still want to run
it on my machine and play with it, and to add some BUG_ONs (just for my
own testing) to see that everything is going well, but I couldn't find
one objection to any of your changes.
(And I did try hard to find at least one... I thought maybe
module_init() could be replaced with something different, but then I saw
it used in vmscan.c, so I gave up...)


What do you want to do now? Send it to mmotm, or do you want to play
with it more?


Big thanks.

2009-07-10 13:43:50

by Hugh Dickins

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

On Thu, 9 Jul 2009, Izik Eidus wrote:
> Hugh Dickins wrote:
> >
> > But if you still like the patch below, let's advance to splitting
> > it up and getting it into mmotm: I have some opinions on the splitup,
> > I'll make some suggestions on that tomorrow.
>
> I like it very much; you really cleaned it up, optimized it and made a
> better interface out of it, I have to say. Thank you.
> (Very high standard you have)

Be careful: I might think you're flattering me to get KSM in ;)

> (I actually searched for where you had exported follow_page and
> handle_mm_fault, and then realized that life is easier when you are not
> a module anymore.)

Yes, I'm glad Chris nudged us that way: your use of get_user_pages()
was great for working from the outside, but non-modular makes better
options available (options I wouldn't want a driver to be using, but
use within the mm/ directory seems fine).

> > And revealed that really those tree_items were a waste of space, can
> > be packed within the rmap_items that pointed to them, while still
> > keeping to the nice cache-friendly 64-byte or 32-byte rmap_item.
> > (If another field needed later, can make rmap_list singly linked.)
>
> That change, and the "is_stable_tree" flag embedded inside the
> rmap_item address, are my favorite changes.

Oh good. I was rather expecting you to say, let's leave that to a
separate patch afterwards - that would be reasonable, and if you'd
actually prefer it that way, just say so.

> > ksmd used to be running at higher priority: now nice 0.
>
> That is indeed a logical change; maybe we can even punish it by another
> 5 points of nice...

When I noticed it running at nice -5 in top, I felt sure that was wrong,
and I get better behaviour now that it's not preempting ordinary loads.
But as a babysitter, I didn't feel entitled to punish your baby too much:
if you think it's right to punish it more yourself, yes, go ahead.

> The only thing I was afraid of is whether the check inside
> stable_tree_search is safe:
>
> + page2[0] = get_ksm_page(tree_rmap_item);
> + if (page2[0])
> + break;
>
>
> But I convinced myself that it is safe, due to the fact that the page is
> anonymous, so it can't get remapped by the user (to try to corrupt the
> stable tree) without the page getting COW-broken.

Did I change the dynamics there? I thought I was just doing the same
as before more efficiently, while guarding against some races.

> So from my side I believe we can send it to mmotm. I still want to run it
> on my machine and play with it, and to add some BUG_ONs (just for my own
> testing) to see that everything is going well, but I couldn't find one
> objection to any of your changes. (And I did try hard to find at least
> one... I thought maybe module_init() could be replaced with something
> different, but then I saw it used in vmscan.c, so I gave up...)

I think I went round that same loop of suspicion with the module_init(),
but in fact it's a common way of saying this init has no high precedence.

If you want something to criticize harshly, look no further than
break_ksm, where at least I had the decency to leave a comment:
/* Which leaves us looping there if VM_FAULT_OOM: hmmm... */
but only realized later is unacceptable in the MADV_UNMERGEABLE or
KSM_RUN_UNMERGE cases - a single mm of one often repeated page could
balloon up into something huge there, we definitely need to do better.

But that needs a separate kind of testing, and what happens under
OOM has been changing recently I believe. I'll experiment there,
but it's no reason to delay the KSM patches now going to mmotm.

> What do you want to do now? Send it to mmotm, or do you want to
> play with it more?

And not or. What we have now should go to mmotm, it's close
enough; but there's a little more playing to do (the OOM issue,
additional stats, Documentation, madvise man page).

I've got to rush off now, will resume later in the day: but
regarding the split up, what I have in mind is about nine patches:
perhaps I send to you, then you adjust and send on to Andrew?

I'm anxious that we get Signed-off-by's from Andrea and Chris,
for the main ones at least: too much has changed for us to assume
their signoffs, but so much the same that we very much want them in.

The nine patches that I'm thinking of, concentrating reviewers on
different aspects, most of them fairly trivial:-

Your mmu notifier mods:
include/linux/mmu_notifier.h
mm/memory.c
mm/mmu_notifier.c

My madvise_behavior rationalization:
mm/madvise.c

MADV_MERGEABLE and MADV_UNMERGEABLE (or maybe 5 patches)
arch/alpha/include/asm/mman.h
arch/mips/include/asm/mman.h
arch/parisc/include/asm/mman.h
arch/xtensa/include/asm/mman.h
include/asm-generic/mman-common.h

The interface to a dummy ksm.c:
include/linux/ksm.h
include/linux/mm.h
include/linux/sched.h
kernel/fork.c
mm/Kconfig
mm/Makefile
mm/ksm.c
mm/madvise.c

My reversion of DEBUG_VM page_dup_rmap:
include/linux/rmap.h
mm/memory.c
mm/rmap.c

Introduction of PageKsm:
fs/proc/page.c
include/linux/ksm.h
mm/memory.c

The main work:
mm/ksm.c

My fix to mremap move messing up the stable tree:
mm/mremap.c

My substitution for confusing VM_MERGEABLE_FLAGS:
mm/mmap.c

Makes sense? Or split up mm/ksm.c more somehow?

Hugh

2009-07-10 22:39:34

by Izik Eidus

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

Hugh Dickins wrote:
>
>
Hey Hugh,

I started to hack the code around to make sure I understand everything,
and in addition I wanted to add a few things.

One thing that caught my eye was:
> +
> +/*
> + * cmp_and_merge_page - take a page computes its hash value and check if there
> + * is similar hash value to different page,
> + * in case we find that there is similar hash to different page we call to
> + * try_to_merge_two_pages().
> + *
> + * @page: the page that we are searching identical page to.
> + * @rmap_item: the reverse mapping into the virtual address of this page
> + */
> +static void cmp_and_merge_page(struct page *page, struct rmap_item *rmap_item)
> +{
> + struct page *page2[1];
> + struct rmap_item *tree_rmap_item;
> + unsigned int checksum;
> + int err;
> +
> + if (in_stable_tree(rmap_item))
> + remove_rmap_item_from_tree(rmap_item);
> +
>
>


So when we enter cmp_and_merge_page(), if the page is in_stable_tree()
we will remove the rmap_item from the stable tree,
and then inside ksm_do_scan() we have:

> + * ksm_do_scan - the ksm scanner main worker function.
> + * @scan_npages - number of pages we want to scan before we return.
> + */
> +static void ksm_do_scan(unsigned int scan_npages)
> +{
> + struct rmap_item *rmap_item;
> + struct page *page;
> +
> + while (scan_npages--) {
> + cond_resched();
> + rmap_item = scan_get_next_rmap_item(&page);
> + if (!rmap_item)
> + return;
> + if (!PageKsm(page) || !in_stable_tree(rmap_item))
> + cmp_and_merge_page(page, rmap_item);
> + put_page(page);
> + }
> +}
>

So this check, if (!PageKsm(page) || !in_stable_tree(rmap_item)),
will be true because of !in_stable_tree(rmap_item).

Doesn't that mean we "stop using the stable tree's help"?
It looks like every item that goes into the stable tree will get
flushed from it on the second run, which will highly increase the ksmd
cpu usage and make it find fewer pages...
Was this what you wanted to do, or am I missing anything?

Besides this, one more thing I noticed while checking this code:
because the new "KSM shared page" is not a file-backed page, it isn't
counted in top as a shared page, and I couldn't find a way to see how
many pages are shared for each application.
This is important for management tools, such as a tool that wants to
know which Virtual Machines it should migrate from the host into another
host based on the memory sharing on that specific host (meaning how much
ram they really take on that specific host).

So I started to prepare a patch that will show the merged pages count in
/proc/pid/mergedpages, but then I thought that these statistics could
lie: if we have 2 applications, application A and application B, that
share the same page, how should it look?:

cat /proc/pid_of_A/merged_pages -> 1
cat /proc/pid_of_B/merged_pages -> 1

or:

cat /proc/pid_of_A/merged_pages -> 0 (because this one was shared with
the page of B)
cat /proc/pid_of_B/merged_pages -> 1

To make the second method work as reliably as we can, we would want to
break KsmPages that have just one mapping into them...


What do you think about that? Which direction should we take for that?


(Other than this stuff, everything is running happy and nice; I think
cpu is a little bit too high because of the stable-tree removal issue)

Thanks.

2009-07-10 22:46:35

by Izik Eidus

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

Izik Eidus wrote:
> Hugh Dickins wrote:
>>
> Hey Hugh,
>
> I started to hack the code around to make sure I understand
> everything, and in addition I wanted to add a few things.
>
> One thing that caught my eye was:
Forget about that thing, it isn't true what I said there: a silly
mistake on my side. But please read the bottom part, about the number
of merged pages per process statistics.

Sorry.

2009-07-11 19:22:53

by Hugh Dickins

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

On Sat, 11 Jul 2009, Izik Eidus wrote:
>...
> Doesn't that mean we "stop using the stable tree's help"?
> It looks like every item that goes into the stable tree will get flushed
> from it on the second run, which will highly increase the ksmd cpu usage
> and make it find fewer pages...
> Was this what you wanted to do, or am I missing anything?

You sorted this one out for yourself before I got around to it.

>
> Besides this, one more thing I noticed while checking this code:
> because the new "KSM shared page" is not a file-backed page, it isn't
> counted in top as a shared page, and I couldn't find a way to see how
> many pages are shared for each application.

Hah! I can't quite call that a neat trick, but it is amusing.

Yes, checking up on where top gets SHR from, it does originate from
file_rss, and your previous definition of PageKsm was such that those
pages had to get counted as file_rss.

If I thought for a moment that these pages really are like file pages,
I'd immediately revert my PageKsm change, and instead make page->mapping
point to a fictional address_space (rather like swapper_space), so that
those pages could still be definitively identified.

But they are not file pages, they are anon pages: anon pages which are
shared (in this case shared via KSM rather than shared via fork); and
I was fixing the accounting by making them look like anon pages again.
And it'll be more important for them to look like anon pages when we
get to swapping them next time around.
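
To illustrate the idea (a sketch only, assuming the PAGE_MAPPING_ANON
bit from include/linux/mm.h, not necessarily the exact code in the
patch): an anon page has PAGE_MAPPING_ANON set in page->mapping, and a
KSM page is an anon page with no anon_vma, so the test comes down to

static inline int PageKsm(struct page *page)
{
	/* anon page (PAGE_MAPPING_ANON set) with NULL anon_vma pointer */
	return ((unsigned long)page->mapping == PAGE_MAPPING_ANON);
}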

Bundling them in with file_rss may have made some numbers stand out
more obviously to you; but it was a masquerade, they weren't really
the numbers you were wanting.

> This is important for management tools, such as a tool that wants to
> know which Virtual Machines it should migrate from the host into another
> host based on the memory sharing on that specific host (meaning how much
> ram they really take on that specific host).

Okay, I can see that you may well want such info.

>
> So I started to prepare a patch that will show the merged pages count in
> /proc/pid/mergedpages, but then I thought that these statistics could
> lie: if we have 2 applications, application A and application B, that
> share the same page, how should it look?:
>
> cat /proc/pid_of_A/merged_pages -> 1
> cat /proc/pid_of_B/merged_pages -> 1
>
> or:
>
> cat /proc/pid_of_A/merged_pages -> 0 (because this one was shared with
> the page of B)
> cat /proc/pid_of_B/merged_pages -> 1

I happen to think that the second method, plausible though it
starts out, ends up leading to more grief than the first.

But two more important things to say.

One, I'm the wrong person to be asking about this: I've little
experience to draw on here, and my interest wanes when it comes
to the number gathering end of this.

Two, I don't think you can do it with a count like that at all.
If you're thinking of migrating A away from B, or A and B together
away from the rest, don't you need to know how much they're sharing
with each other, how much they're sharing with the rest? If A and B
are different instances of the same app, they're likely to be sharing
much more with each other than with the rest as a whole: and that'll
make a huge difference to your decisions on migration.

A single number (probably of that first kind) may be a nice kind
of reassurance that things are working, and worth providing. But
for detailed migration/provisioning decisions, I'd have thought
you'd need the kernel to provide a list of "id"s of KSM-shared
pages for each process, which your management tools could then
crunch upon (observing the different sharings of ids) to try out
different splits; or else, doing it the other way around, a
representation of the stable_tree itself, with pids at the nodes.

Though once you got into that detail, I wonder if you'd find that
you need such info, not just about the KSM pages, but about the
rest as well (how much are the anon pages being shared across fork,
for example? which of the file pages are shmem/tmpfs pages needing
swap? how much swap is being used?).

I think it becomes quite a big subject, and you may be able to
excite other people with it.

>
> To make the second method work as reliably as we can, we would want to
> break KsmPages that have just one mapping into them...

We may want to do that anyway. It concerned me a lot when I was
first testing (and often saw kernel_pages_allocated greater than
pages_shared - probably because of the original KSM's eagerness to
merge forked pages, though I think there may have been more to it
than that). But seems much less of an issue now (that ratio is much
healthier), and even less of an issue once KSM pages can be swapped.
So I'm not bothering about it at the moment, but it may make sense.

>
> What do you think about that? Which direction should we take for that?

If nobody else volunteers in on that, I could perhaps make up an
incriminating list of mm people who have an interest in such things!

>
> (Other than this stuff, everything is running happy and nice;

Glad to hear it, yes, same at my end (I did have a hang in the cow
breaking the night before I sent out the rollup, but included the
fix in that, and it has stood up since).

> I think cpu is a little bit too high because of the stable-tree
> removal issue)

I think you've resolved that as a non-issue, but is cpu still looking
too high to you? It looks high to me, but then I realize that I've
tuned it to be high anyway. Do you have any comparison against the
/dev/ksm KSM, or your first madvise version?

Oh, something that might be making it higher, that I didn't highlight
(and can revert if you like, it was just more straightforward this way):
with scan_get_next_rmap skipping the non-present ptes, pages_to_scan is
currently a limit on the _present_ pages scanned in one batch.

Hugh

2009-07-11 21:19:42

by Izik Eidus

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

On Sat, 11 Jul 2009 20:22:11 +0100 (BST)
Hugh Dickins <[email protected]> wrote:


> I think it becomes quite a big subject, and you may be able to
> excite other people with it.

Yeah, I agree, I dropped this patch; I think I have an idea how to
manage it from userspace in a much better way for the kvm case.

>
> >
> > To make the second method thing work as much as reaible as we can
> > we would want to break KsmPages that have just one mapping into
> > them...
>
> We may want to do that anyway. It concerned me a lot when I was
> first testing (and often saw kernel_pages_allocated greater than
> pages_shared - probably because of the original KSM's eagerness to
> merge forked pages, though I think there may have been more to it
> than that). But seems much less of an issue now (that ratio is much
> healthier), and even less of an issue once KSM pages can be swapped.
> So I'm not bothering about it at the moment, but it may make sense.
>

We could add a patch like the one below, but I think we should leave it
as it is now, and solve it all (like you have said) with the ksm pages
swapping support in the next kernel release.
(Right now ksm can limit itself with max_kernel_pages_alloc)

diff --git a/mm/ksm.c b/mm/ksm.c
index a0fbdb2..ee80861 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1261,8 +1261,13 @@ static void ksm_do_scan(unsigned int scan_npages)
rmap_item = scan_get_next_rmap_item(&page);
if (!rmap_item)
return;
- if (!PageKsm(page) || !in_stable_tree(rmap_item))
+ if (!PageKsm(page) || !in_stable_tree(rmap_item)) {
cmp_and_merge_page(page, rmap_item);
+ } else if (page_mapcount(page) == 0) {
+ break_cow(rmap_item->mm,
+ rmap_item->address & PAGE_MASK);
+ remove_rmap_item_from_tree(rmap_item);
+ }
put_page(page);
}
}


>
> I think you've resolved that as a non-issue, but is cpu still looking
> too high to you? It looks high to me, but then I realize that I've
> tuned it to be high anyway. Do you have any comparison against the
> /dev/ksm KSM, or your first madvise version?

I think I made myself believe it is too high. I ran it scanning 250
pages every 10 milliseconds, and cpu usage was 1-4% most of the time
(except when it merged pages: then the walk over the tree is longer,
and if it is the first page, we have an additional memcpy of the page
into a newly allocated page). We can solve this issue, together with a
big list of optimizations that can go into the ksm stable/unstable
algorithm/implementation, in later releases of the kernel.
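
For the record, those settings map onto the sysfs files created by the
patch; here is a small sketch of applying them, assuming the
/sys/kernel/mm/ksm/ names registered by ksm_init and root permission to
write them:

#include <stdio.h>

static int ksm_set(const char *name, const char *val)
{
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/kernel/mm/ksm/%s", name);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	ksm_set("pages_to_scan", "250");	/* pages per batch */
	ksm_set("sleep_millisecs", "10");	/* sleep between batches */
	ksm_set("run", "1");			/* KSM_RUN_MERGE */
	return 0;
}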

>
> Oh, something that might be making it higher, that I didn't highlight
> (and can revert if you like, it was just more straightforward this
> way): with scan_get_next_rmap skipping the non-present ptes,
> pages_to_scan is currently a limit on the _present_ pages scanned in
> one batch.

You mean that now, when you say pages_to_scan = 512, it won't count the
non-present ptes as part of the counter? So if we have 500 not-present
ptes at the beginning and then 512 ptes later, before it used to call
cmp_and_merge_page() for only 12 pages, while now it will get called on
512 pages?

If yes, then I like this change; it is more logical from a cpu
consumption point of view, and in addition we have that cond_resched(),
so I don't see a problem with this.

Thanks.

>
> Hugh

2009-07-12 14:45:33

by Hugh Dickins

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

On Sun, 12 Jul 2009, Izik Eidus wrote:
> On Sat, 11 Jul 2009 20:22:11 +0100 (BST)
> Hugh Dickins <[email protected]> wrote:
> >
> > We may want to do that anyway. It concerned me a lot when I was
> > first testing (and often saw kernel_pages_allocated greater than
> > pages_shared - probably because of the original KSM's eagerness to
> > merge forked pages, though I think there may have been more to it
> > than that). But seems much less of an issue now (that ratio is much
> > healthier), and even less of an issue once KSM pages can be swapped.
> > So I'm not bothering about it at the moment, but it may make sense.

I realized since writing that with the current statistics you really
cannot tell how big an issue the orphaned (count 1) KSM pages are -
good sharing of a few will completely hide non-sharing of many.

But I've hacked in more stats (not something I'd care to share yet!),
and those confirm that for my loads at least, the orphaned KSM pages
are few compared with the shared ones.

>
> We could add a patch like the one below, but I think we should leave it
> as it is now,

I agree we should leave it as is for now. My guess is that we'll
prefer to leave them around, until approaching max_kernel_pages_alloc,
pruning them only at that stage (rather as we free swap more aggressively
when it's 50% full). There may be benefit in not removing them too soon,
there may be benefit in holding on to stable pages for longer (holding a
reference in the stable tree for a while). Or maybe not, just an idea.

> and solve it all (like you have said) with the ksm pages
> swapping support in the next kernel release.
> (Right now ksm can limit itself with max_kernel_pages_alloc)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index a0fbdb2..ee80861 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1261,8 +1261,13 @@ static void ksm_do_scan(unsigned int scan_npages)
> rmap_item = scan_get_next_rmap_item(&page);
> if (!rmap_item)
> return;
> - if (!PageKsm(page) || !in_stable_tree(rmap_item))
> + if (!PageKsm(page) || !in_stable_tree(rmap_item)) {
> cmp_and_merge_page(page, rmap_item);
> + } else if (page_mapcount(page) == 0) {

If we did that (but we agree not for now), shouldn't it be
page_mapcount(page) == 1
? The mapcount 0 ones already got freed by the zap/unmap code.

> + break_cow(rmap_item->mm,
> + rmap_item->address & PAGE_MASK);

Just a note on that " & PAGE_MASK": it's unnecessary there and
almost everywhere else. One of the pleasures of putting flags into
the bottom bits of the address, in code concerned with faulting, is
that the faulting address can be anywhere within the page, so we
don't have to bother to mask off the flags.

> + remove_rmap_item_from_tree(rmap_item);
> + }
> put_page(page);
> }
> }
>
> > Oh, something that might be making it higher, that I didn't highlight
> > (and can revert if you like, it was just more straightforward this
> > way): with scan_get_next_rmap skipping the non-present ptes,
> > pages_to_scan is currently a limit on the _present_ pages scanned in
> > one batch.
>
> You mean that now, when you say pages_to_scan = 512, it won't count the
> non-present ptes as part of the counter? So if we have 500 not-present
> ptes at the beginning and then 512 ptes later, before it used to call
> cmp_and_merge_page() for only 12 pages, while now it will get called on
> 512 pages?

If I understand you right, yes, before it would do those 500 absent then
512 present in two batches, first 512 (of which only 12 present) then 500;
whereas now it'll skip the 500 absent without counting them, and handle
the 512 present in that same one batch.

>
> If yes, then I like this change; it is more logical from a cpu
> consumption point of view,

Yes, although it does spend a little time on the absent ones, it should
be much less time than it spends comparing or checksumming on present ones.

> and in addition we have that cond_resched(),
> so I don't see a problem with this.

Right, that cond_resched() is vital in this case.

By the way, something else I didn't highlight, a significant benefit
from avoiding get_user_pages(): that was doing a mark_page_accessed()
on every present pte that it found, interfering with pageout decisions.

Hugh

2009-07-13 18:25:13

by Izik Eidus

[permalink] [raw]
Subject: Re: KSM: current madvise rollup

Hugh Dickins wrote:
> On Sun, 12 Jul 2009, Izik Eidus wrote:
>
>> On Sat, 11 Jul 2009 20:22:11 +0100 (BST)
>> Hugh Dickins <[email protected]> wrote:
>>
>>> We may want to do that anyway. It concerned me a lot when I was
>>> first testing (and often saw kernel_pages_allocated greater than
>>> pages_shared - probably because of the original KSM's eagerness to
>>> merge forked pages, though I think there may have been more to it
>>> than that). But seems much less of an issue now (that ratio is much
>>> healthier), and even less of an issue once KSM pages can be swapped.
>>> So I'm not bothering about it at the moment, but it may make sense.
>>>
>
> I realized since writing that with the current statistics you really
> cannot tell how big an issue the orphaned (count 1) KSM pages are -
> good sharing of a few will completely hide non-sharing of many.
>
> But I've hacked in more stats (not something I'd care to share yet!),
> and those confirm that for my loads at least, the orphaned KSM pages
> are few compared with the shared ones.
>
>
>> We could add a patch like the one below, but I think we should leave it
>> as it is now,
>>
>
> I agree we should leave it as is for now. My guess is that we'll
> prefer to leave them around, until approaching max_kernel_pages_alloc,
> pruning them only at that stage (rather as we free swap more aggressively
> when it's 50% full). There may be benefit in not removing them too soon,
> there may be benefit in holding on to stable pages for longer (holding a
> reference in the stable tree for a while). Or maybe not, just an idea.
>

Well, I personally don't see any real solution to this problem without
real swapping support,
so for now I don't mind not dealing with it at all (as long as the admin
can decide how many unswappable pages to allow).
But if you have a stronger opinion than me on that case, I really don't
mind changing it.

>
>> and solve it all (like you have said) with the ksm pages
>> swapping support in the next kernel release.
>> (Right now ksm can limit itself with max_kernel_pages_alloc)
>>
>> diff --git a/mm/ksm.c b/mm/ksm.c
>> index a0fbdb2..ee80861 100644
>> --- a/mm/ksm.c
>> +++ b/mm/ksm.c
>> @@ -1261,8 +1261,13 @@ static void ksm_do_scan(unsigned int scan_npages)
>> rmap_item = scan_get_next_rmap_item(&page);
>> if (!rmap_item)
>> return;
>> - if (!PageKsm(page) || !in_stable_tree(rmap_item))
>> + if (!PageKsm(page) || !in_stable_tree(rmap_item)) {
>> cmp_and_merge_page(page, rmap_item);
>> + } else if (page_mapcount(page) == 0) {
>>
>
> If we did that (but we agree not for now), shouldn't it be
> page_mapcount(page) == 1
> ? The mapcount 0 ones already got freed by the zap/unmap code.
>

Yeah, I don't know what I was thinking when I compared it to zero...
1 is the right value here indeed.

>
>> + break_cow(rmap_item->mm,
>> + rmap_item->address & PAGE_MASK);
>>
>
> Just a note on that " & PAGE_MASK": it's unnecessary there and
> almost everywhere else. One of the pleasures of putting flags into
> the bottom bits of the address, in code concerned with faulting, is
> that the faulting address can be anywhere within the page, so we
> don't have to bother to mask off the flags.
>
>
>> + remove_rmap_item_from_tree(rmap_item);
>> + }
>> put_page(page);
>> }
>> }
>>
>>
>>> Oh, something that might be making it higher, that I didn't highlight
>>> (and can revert if you like, it was just more straightforward this
>>> way): with scan_get_next_rmap skipping the non-present ptes,
>>> pages_to_scan is currently a limit on the _present_ pages scanned in
>>> one batch.
>>>
>> You mean that now, when you say pages_to_scan = 512, it won't count the
>> non-present ptes as part of the counter? So if we have 500 not-present
>> ptes at the beginning and then 512 ptes later, before it used to call
>> cmp_and_merge_page() for only 12 pages, while now it will get called on
>> 512 pages?
>>
>
> If I understand you right, yes, before it would do those 500 absent then
> 512 present in two batches, first 512 (of which only 12 present) then 500;
> whereas now it'll skip the 500 absent without counting them, and handle
> the 512 present in that same one batch.
>
>
>> If yes, then I like this change; it is more logical from a cpu
>> consumption point of view,
>>
>
> Yes, although it does spend a little time on the absent ones, it should
> be much less time than it spends comparing or checksumming on present ones.
>

Yeah, compared to the other stuff it is minor...

>
>> and in addition we have that cond_resched(),
>> so I don't see a problem with this.
>>
>
> Right, that cond_resched() is vital in this case.
>
> By the way, something else I didn't highlight, a significant benefit
> from avoiding get_user_pages(): that was doing a mark_page_accessed()
> on every present pte that it found, interfering with pageout decisions.
>

That is a great benefit, I didn't even think about that...

> Hugh
>