2008-02-15 06:50:17

by Christoph Lameter

Subject: [patch 1/6] mmu_notifier: Core code

MMU notifiers are used for hardware and software that establish
external references to pages managed by the Linux kernel. These are
page table entries, TLB entries, or something else that allows
hardware (such as DMA engines, scatter gather devices, networking,
sharing of address spaces across operating system boundaries) and
software (virtualization solutions such as KVM, Xen etc) to
access memory managed by the Linux kernel.

The MMU notifier will notify the device driver that subscribes to such
a notifier that the VM is going to do something with the memory
mapped by that device. The device must then drop references for the
indicated memory area. The references may be reestablished later.

The notification scheme is much better than the current schemes of
avoiding the danger of the VM removing pages that are externally
mapped. We currently either mlock pages used for RDMA, XPmem etc
in memory or increase the refcount to pin the pages. Increasing
the refcount makes it impossible for the VM to reclaim the page.

Mlock causes problems with reclaim and may lead to OOM if too many
pages are pinned in memory. It is also incorrect in terms of what POSIX
specifies for the role mlock should play. Mlock does *not* pin pages in
memory. Mlock just means do not allow the page to be moved to swap.

Linux can move pages in memory (for example through the page migration
mechanism). These pages can be moved even if they are mlocked(!!!!).
The current approach of page pinning in use by RDMA etc is conceptually
broken but there are currently no other easy solutions.

The alternative of increasing the page count to pin pages is also not
that enticing since there will be continual attempts to reclaim
or migrate these pages.

The solution here allows us to finally fix this issue by requiring
such devices to subscribe to a notification chain that will allow
them to work without pinning. The VM gains control of its memory again
and the memory that has external references can be managed like regular
memory.
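
As a rough, untested sketch of how a driver would subscribe (not part of
this patch; everything prefixed my_drv_ is a made-up placeholder, only the
mmu_notifier structures and mmu_notifier_register() come from the code
added below):

#include <linux/mmu_notifier.h>

struct my_drv_state {
	struct mmu_notifier notifier;	/* embedded notifier */
	/* ... driver-private external mapping state ... */
};

static void my_drv_invalidate_page(struct mmu_notifier *mn,
				   struct mm_struct *mm,
				   unsigned long address)
{
	/* Called with the pte lock held: drop the external pte/TLB entry
	 * for this address without sleeping. */
	my_drv_zap_external_pte(mn, address);		/* hypothetical */
}

static void my_drv_release(struct mmu_notifier *mn, struct mm_struct *mm)
{
	/* The process is exiting: zap all external mappings in one go. */
	my_drv_zap_all(mn);				/* hypothetical */
}

static const struct mmu_notifier_ops my_drv_ops = {
	.release	 = my_drv_release,
	.invalidate_page = my_drv_invalidate_page,
};

/* Caller must hold mmap_sem for write. */
static void my_drv_attach(struct my_drv_state *state, struct mm_struct *mm)
{
	state->notifier.ops = &my_drv_ops;
	mmu_notifier_register(&state->notifier, mm);
}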

This patch: Core portion

Signed-off-by: Christoph Lameter <[email protected]>
Signed-off-by: Andrea Arcangeli <[email protected]>

---
Documentation/mmu_notifier/README | 105 ++++++++++++++++++++++
include/linux/mm_types.h | 7 +
include/linux/mmu_notifier.h | 180 ++++++++++++++++++++++++++++++++++++++
kernel/fork.c | 2
mm/Kconfig | 4
mm/Makefile | 1
mm/mmap.c | 2
mm/mmu_notifier.c | 76 ++++++++++++++++
8 files changed, 377 insertions(+)

Index: linux-2.6/Documentation/mmu_notifier/README
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6/Documentation/mmu_notifier/README 2008-02-14 22:27:19.000000000 -0800
@@ -0,0 +1,105 @@
+Linux MMU Notifiers
+-------------------
+
+MMU notifiers are used for hardware and software that establish
+external references to pages managed by the Linux kernel. These are
+page table entries, TLB entries, or something else that allows
+hardware (such as DMA engines, scatter gather devices, networking,
+sharing of address spaces across operating system boundaries) and
+software (virtualization solutions such as KVM, Xen etc) to
+access memory managed by the Linux kernel.
+
+The MMU notifier will notify the device driver that subscribes to such
+a notifier that the VM is going to do something with the memory
+mapped by that device. The device must then drop references for the
+indicated memory area. The references may be reestablished later.
+
+The notification scheme is much better than the current schemes of
+dealing with the danger of the VM removing pages.
+We currently mlock pages used for RDMA, XPmem etc in memory or
+increase the refcount of the pages.
+
+Both cause problems with reclaim and may lead to OOM if too many
+pages are pinned in memory. Mlock is also incorrect in terms of the POSIX
+specification of the role of mlock. Mlock does *not* pin pages in
+memory. It just does not allow the page to be moved to swap.
+The page refcount is used to track current users of a page struct.
+Artificially inflating the refcount means that the VM cannot track
+down all references to a page. It will not be able to reclaim or
+move a page. However, the core code will try again and again because
+the assumption is that an elevated refcount is a temporary situation.
+
+Linux can move pages in memory (for example through the page migration
+mechanism). These pages can be moved even if they are mlocked(!!!!).
+So the current approach in use by RDMA etc etc is conceptually broken
+but there are currently no other easy solutions.
+
+The solution here allows us to finally fix this issue by requiring
+such devices to subscribe to a notification chain that will allow
+them to work without pinning.
+
+The notifier chains provide two callback mechanisms. The
+first one is required for any device that establishes external mappings.
+The second (rmap) mechanism is required if a device needs to be
+able to sleep when invalidating references. Sleeping may be necessary
+if we are mapping across a network or to different Linux instances
+in the same address space.
+
+mmu_notifier mechanism (for KVM/GRU etc)
+----------------------------------------
+Callbacks are registered with an mm_struct from a device driver using
+mmu_notifier_register(). When the VM removes pages (or changes
+permissions on pages etc) then callbacks are triggered.
+
+The invalidation function for a single page (*invalidate_page)
+is called with spinlocks (in particular the pte lock) held. This allows
+for an easy implementation of external ptes that are on the local system.
+
+The invalidation mechanism for a range (*invalidate_range_begin/end*) is
+called most of the time without any locks held. It is only called with
+locks held for file backed mappings that are truncated. A flag indicates
+in which mode we are. A driver can use that mechanism to f.e.
+delay the freeing of the pages during truncate until no locks are held.
+
+Pages must be marked dirty if dirty bits are found to be set in
+the external ptes during unmap.
+
+The *release* method is called when a Linux process exits. It is run before
+the pages and mappings of a process are torn down and gives the device driver
+a chance to zap all the external mappings in one go.
+
+An example for a code that can be used to build a notifier mechanism into
+a device driver can be found in the file
+Documentation/mmu_notifier/skeleton.c
+
+mmu_rmap_notifier mechanism (XPMEM etc)
+---------------------------------------
+The mmu_rmap_notifier allows the device driver to implement their own rmap
+and allows the device driver to sleep during page eviction. This is necessary
+for complex drivers that f.e. allow the sharing of memory between processes
+running on different Linux instances (typically over a network or in a
+partitioned NUMA system).
+
+The mmu_rmap_notifier adds another invalidate_page() callout that is called
+*before* the Linux rmaps are walked. At that point only the page lock is
+held. The invalidate_page() function must walk the driver rmaps and evict
+all the references to the page.
+
+There is no process information available before the rmaps are consulted.
+The notifier mechanism can therefore not be attached to an mm_struct. Instead
+it is a global callback list. Having to perform a callback for each and every
+page that is reclaimed would be inefficient. Therefore we add an additional
+page flag: PageRmapExternal(). Only pages that are marked with this bit can
+be exported and the rmap callbacks will only be performed for pages marked
+that way.
+
+The required additional Page flag is only available in 64 bit mode and
+therefore the mmu_rmap_notifier portion is not available on 32 bit platforms.
+
+An example of code to build a mmu_notifier mechanism with rmap capability
+can be found in Documentation/mmu_notifier/skeleton_rmap.c
+
+February 9, 2008,
+ Christoph Lameter <[email protected]>
+
Index: linux-2.6/include/linux/mm_types.h
===================================================================
--- linux-2.6.orig/include/linux/mm_types.h 2008-02-14 20:59:01.000000000 -0800
+++ linux-2.6/include/linux/mm_types.h 2008-02-14 21:17:51.000000000 -0800
@@ -159,6 +159,12 @@ struct vm_area_struct {
#endif
};

+struct mmu_notifier_head {
+#ifdef CONFIG_MMU_NOTIFIER
+ struct hlist_head head;
+#endif
+};
+
struct mm_struct {
struct vm_area_struct * mmap; /* list of VMAs */
struct rb_root mm_rb;
@@ -228,6 +234,7 @@ struct mm_struct {
#ifdef CONFIG_CGROUP_MEM_CONT
struct mem_cgroup *mem_cgroup;
#endif
+ struct mmu_notifier_head mmu_notifier; /* MMU notifier list */
};

#endif /* _LINUX_MM_TYPES_H */
Index: linux-2.6/include/linux/mmu_notifier.h
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6/include/linux/mmu_notifier.h 2008-02-14 22:42:28.000000000 -0800
@@ -0,0 +1,180 @@
+#ifndef _LINUX_MMU_NOTIFIER_H
+#define _LINUX_MMU_NOTIFIER_H
+
+/*
+ * MMU motifier
+ *
+ * Notifier functions for hardware and software that establishes external
+ * references to pages of a Linux system. The notifier calls ensure that
+ * external mappings are removed when the Linux VM removes memory ranges
+ * or individual pages from a process.
+ *
+ * These fall into two classes:
+ *
+ * 1. mmu_notifier
+ *
+ * These are callbacks registered with an mm_struct. If pages are
+ * removed from an address space then callbacks are performed.
+ *
+ * Spinlocks must be held in order to walk reverse maps. The
+ * invalidate_page() callbacks are performed with spinlocks held.
+ *
+ * The invalidate_range_start/end callbacks can be performed in contexts
+ * where sleeping is allowed or in atomic contexts. A flag is passed
+ * to indicate an atomic context.
+ *
+ * Pages must be marked dirty if dirty bits are found to be set in
+ * the external ptes.
+ */
+
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/rcupdate.h>
+#include <linux/mm_types.h>
+
+struct mmu_notifier_ops;
+
+struct mmu_notifier {
+ struct hlist_node hlist;
+ const struct mmu_notifier_ops *ops;
+};
+
+struct mmu_notifier_ops {
+ /*
+ * The release notifier is called when no other execution threads
+ * are left. Synchronization is not necessary.
+ */
+ void (*release)(struct mmu_notifier *mn,
+ struct mm_struct *mm);
+
+ /*
+ * age_page is called from contexts where the pte_lock is held
+ */
+ int (*age_page)(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long address);
+
+ /*
+ * invalidate_page is called from contexts where the pte_lock is held.
+ */
+ void (*invalidate_page)(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long address);
+
+ /*
+ * invalidate_range_begin() and invalidate_range_end() must be paired.
+ *
+ * Multiple invalidate_range_begin/ends may be nested or called
+ * concurrently. That is legit. However, no new external references
+ * may be established as long as any invalidate_xxx is running or
+ * any invalidate_range_begin() and has not been completed through a
+ * corresponding call to invalidate_range_end().
+ *
+ * Locking within the notifier needs to serialize events correspondingly.
+ *
+ * invalidate_range_begin() must clear all references in the range
+ * and stop the establishment of new references.
+ *
+ * invalidate_range_end() reenables the establishment of references.
+ *
+ * atomic indicates that the function is called in an atomic context.
+ * We can sleep if atomic == 0.
+ *
+ * invalidate_range_begin() must remove all external references.
+ * There will be no retries as with invalidate_page().
+ */
+ void (*invalidate_range_begin)(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long start, unsigned long end,
+ int atomic);
+
+ void (*invalidate_range_end)(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long start, unsigned long end,
+ int atomic);
+};
+
+#ifdef CONFIG_MMU_NOTIFIER
+
+/*
+ * Must hold the mmap_sem for write.
+ *
+ * RCU is used to traverse the list. A quiescent period needs to pass
+ * before the notifier is guaranteed to be visible to all threads
+ */
+extern void mmu_notifier_register(struct mmu_notifier *mn,
+ struct mm_struct *mm);
+
+/*
+ * Must hold mmap_sem for write.
+ *
+ * A quiescent period needs to pass before the mmu_notifier structure
+ * can be released. mmu_notifier_release() will wait for a quiescent period
+ * after calling the ->release callback. So it is safe to call
+ * mmu_notifier_unregister from the ->release function.
+ */
+extern void mmu_notifier_unregister(struct mmu_notifier *mn,
+ struct mm_struct *mm);
+
+
+extern void mmu_notifier_release(struct mm_struct *mm);
+extern int mmu_notifier_age_page(struct mm_struct *mm,
+ unsigned long address);
+
+static inline void mmu_notifier_head_init(struct mmu_notifier_head *mnh)
+{
+ INIT_HLIST_HEAD(&mnh->head);
+}
+
+#define mmu_notifier(function, mm, args...) \
+ do { \
+ struct mmu_notifier *__mn; \
+ struct hlist_node *__n; \
+ \
+ if (unlikely(!hlist_empty(&(mm)->mmu_notifier.head))) { \
+ rcu_read_lock(); \
+ hlist_for_each_entry_rcu(__mn, __n, \
+ &(mm)->mmu_notifier.head, \
+ hlist) \
+ if (__mn->ops->function) \
+ __mn->ops->function(__mn, \
+ mm, \
+ args); \
+ rcu_read_unlock(); \
+ } \
+ } while (0)
+
+#else /* CONFIG_MMU_NOTIFIER */
+
+/*
+ * Notifiers that use the parameters that they were passed so that the
+ * compiler does not complain about unused variables but does proper
+ * parameter checks even if !CONFIG_MMU_NOTIFIER.
+ * Macros generate no code.
+ */
+#define mmu_notifier(function, mm, args...) \
+ do { \
+ if (0) { \
+ struct mmu_notifier *__mn; \
+ \
+ __mn = (struct mmu_notifier *)(0x00ff); \
+ __mn->ops->function(__mn, mm, args); \
+ }; \
+ } while (0)
+
+static inline void mmu_notifier_register(struct mmu_notifier *mn,
+ struct mm_struct *mm) {}
+static inline void mmu_notifier_unregister(struct mmu_notifier *mn,
+ struct mm_struct *mm) {}
+static inline void mmu_notifier_release(struct mm_struct *mm) {}
+static inline int mmu_notifier_age_page(struct mm_struct *mm,
+ unsigned long address)
+{
+ return 0;
+}
+
+static inline void mmu_notifier_head_init(struct mmu_notifier_head *mmh) {}
+
+#endif /* CONFIG_MMU_NOTIFIER */
+
+#endif /* _LINUX_MMU_NOTIFIER_H */
Index: linux-2.6/mm/Kconfig
===================================================================
--- linux-2.6.orig/mm/Kconfig 2008-02-14 20:59:01.000000000 -0800
+++ linux-2.6/mm/Kconfig 2008-02-14 21:17:51.000000000 -0800
@@ -193,3 +193,7 @@ config NR_QUICK
config VIRT_TO_BUS
def_bool y
depends on !ARCH_NO_VIRT_TO_BUS
+
+config MMU_NOTIFIER
+ def_bool y
+ bool "MMU notifier, for paging KVM/RDMA"
Index: linux-2.6/mm/Makefile
===================================================================
--- linux-2.6.orig/mm/Makefile 2008-02-14 20:59:01.000000000 -0800
+++ linux-2.6/mm/Makefile 2008-02-14 21:17:51.000000000 -0800
@@ -33,4 +33,5 @@ obj-$(CONFIG_MIGRATION) += migrate.o
obj-$(CONFIG_SMP) += allocpercpu.o
obj-$(CONFIG_QUICKLIST) += quicklist.o
obj-$(CONFIG_CGROUP_MEM_CONT) += memcontrol.o
+obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o

Index: linux-2.6/mm/mmu_notifier.c
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-2.6/mm/mmu_notifier.c 2008-02-14 22:41:55.000000000 -0800
@@ -0,0 +1,76 @@
+/*
+ * linux/mm/mmu_notifier.c
+ *
+ * Copyright (C) 2008 Qumranet, Inc.
+ * Copyright (C) 2008 SGI
+ * Christoph Lameter <[email protected]>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/mmu_notifier.h>
+
+/*
+ * No synchronization. This function can only be called when only a single
+ * process remains that performs teardown.
+ */
+void mmu_notifier_release(struct mm_struct *mm)
+{
+ struct mmu_notifier *mn;
+ struct hlist_node *n, *t;
+
+ if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
+ hlist_for_each_entry_safe(mn, n, t,
+ &mm->mmu_notifier.head, hlist) {
+ hlist_del_init(&mn->hlist);
+ if (mn->ops->release)
+ mn->ops->release(mn, mm);
+ }
+ }
+}
+
+/*
+ * If no young bitflag is supported by the hardware, ->age_page can
+ * unmap the address and return 1 or 0 depending if the mapping previously
+ * existed or not.
+ */
+int mmu_notifier_age_page(struct mm_struct *mm, unsigned long address)
+{
+ struct mmu_notifier *mn;
+ struct hlist_node *n;
+ int young = 0;
+
+ if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
+ rcu_read_lock();
+ hlist_for_each_entry_rcu(mn, n,
+ &mm->mmu_notifier.head, hlist) {
+ if (mn->ops->age_page)
+ young |= mn->ops->age_page(mn, mm, address);
+ }
+ rcu_read_unlock();
+ }
+
+ return young;
+}
+
+/*
+ * Note that all notifiers use RCU. The updates are only guaranteed to be
+ * visible to other processes after a RCU quiescent period!
+ *
+ * Must hold mmap_sem writably when calling registration functions.
+ */
+void mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
+{
+ hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier.head);
+}
+EXPORT_SYMBOL_GPL(mmu_notifier_register);
+
+void mmu_notifier_unregister(struct mmu_notifier *mn, struct mm_struct *mm)
+{
+ hlist_del_rcu(&mn->hlist);
+}
+EXPORT_SYMBOL_GPL(mmu_notifier_unregister);
+
Index: linux-2.6/kernel/fork.c
===================================================================
--- linux-2.6.orig/kernel/fork.c 2008-02-14 20:59:01.000000000 -0800
+++ linux-2.6/kernel/fork.c 2008-02-14 21:17:51.000000000 -0800
@@ -53,6 +53,7 @@
#include <linux/tty.h>
#include <linux/proc_fs.h>
#include <linux/blkdev.h>
+#include <linux/mmu_notifier.h>

#include <asm/pgtable.h>
#include <asm/pgalloc.h>
@@ -362,6 +363,7 @@ static struct mm_struct * mm_init(struct

if (likely(!mm_alloc_pgd(mm))) {
mm->def_flags = 0;
+ mmu_notifier_head_init(&mm->mmu_notifier);
return mm;
}

Index: linux-2.6/mm/mmap.c
===================================================================
--- linux-2.6.orig/mm/mmap.c 2008-02-14 20:59:01.000000000 -0800
+++ linux-2.6/mm/mmap.c 2008-02-14 22:42:02.000000000 -0800
@@ -26,6 +26,7 @@
#include <linux/mount.h>
#include <linux/mempolicy.h>
#include <linux/rmap.h>
+#include <linux/mmu_notifier.h>

#include <asm/uaccess.h>
#include <asm/cacheflush.h>
@@ -2037,6 +2038,7 @@ void exit_mmap(struct mm_struct *mm)
unsigned long end;

/* mm's last user has gone, and its about to be pulled down */
+ mmu_notifier_release(mm);
arch_exit_mmap(mm);

lru_add_drain();

--


2008-02-16 03:38:51

by Andrew Morton

Subject: Re: [patch 1/6] mmu_notifier: Core code

On Thu, 14 Feb 2008 22:49:00 -0800 Christoph Lameter <[email protected]> wrote:

> MMU notifiers are used for hardware and software that establishes
> external references to pages managed by the Linux kernel. These are
> page table entriews or tlb entries or something else that allows
> hardware (such as DMA engines, scatter gather devices, networking,
> sharing of address spaces across operating system boundaries) and
> software (Virtualization solutions such as KVM, Xen etc) to
> access memory managed by the Linux kernel.
>
> The MMU notifier will notify the device driver that subscribes to such
> a notifier that the VM is going to do something with the memory
> mapped by that device. The device must then drop references for the
> indicated memory area. The references may be reestablished later.
>
> The notification scheme is much better than the current schemes of
> avoiding the danger of the VM removing pages that are externally
> mapped. We currently either mlock pages used for RDMA, XPmem etc
> in memory or increase the refcount to pin the pages. Increasing
> the refcount makes it impossible for the VM to reclaim the page.
>
> Mlock causes problems with reclaim and may lead to OOM if too many
> pages are pinned in memory. It is also incorrect in terms what the POSIX
> specificies for what role mlock should play. Mlock does *not* pin pages in
> memory. Mlock just means do not allow the page to be moved to swap.
>
> Linux can move pages in memory (for example through the page migration
> mechanism). These pages can be moved even if they are mlocked(!!!!).
> The current approach of page pinning in use by RDMA etc is conceptually
> broken but there are currently no other easy solutions.
>
> The alternate of increasing the page count to pin pages is also not
> that enticing since there will be continual attempts to reclaim
> or migrate these pages.
>
> The solution here allows us to finally fix this issue by requiring
> such devices to subscribe to a notification chain that will allow
> them to work without pinning. The VM gains control of its memory again
> and the memory that has external references can be managed like regular
> memory.
>
> This patch: Core portion
>

What is the status of getting infiniband to use this facility?

How important is this feature to KVM?

To xpmem?

Which other potential clients have been identified and how important is it
to those?


> Index: linux-2.6/Documentation/mmu_notifier/README
> ===================================================================
> --- /dev/null 1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6/Documentation/mmu_notifier/README 2008-02-14 22:27:19.000000000 -0800
> @@ -0,0 +1,105 @@
> +Linux MMU Notifiers
> +-------------------
> +
> +MMU notifiers are used for hardware and software that establishes
> +external references to pages managed by the Linux kernel. These are
> +page table entriews or tlb entries or something else that allows
> +hardware (such as DMA engines, scatter gather devices, networking,
> +sharing of address spaces across operating system boundaries) and
> +software (Virtualization solutions such as KVM, Xen etc) to
> +access memory managed by the Linux kernel.
> +
> +The MMU notifier will notify the device driver that subscribes to such
> +a notifier that the VM is going to do something with the memory
> +mapped by that device. The device must then drop references for the
> +indicated memory area. The references may be reestablished later.
> +
> +The notification scheme is much better than the current schemes of
> +dealing with the danger of the VM removing pages.
> +We currently mlock pages used for RDMA, XPmem etc in memory or
> +increase the refcount of the pages.
> +
> +Both cause problems with reclaim and may lead to OOM if too many
> +pages are pinned in memory. Mlock is also incorrect in terms of the POSIX
> +specification of the role of mlock. Mlock does *not* pin pages in
> +memory. It just does not allow the page to be moved to swap.
> +The page refcount is used to track current users of a page struct.
> +Artificially inflating the refcount means that the VM cannot track
> +down all references to a page. It will not be able to reclaim or
> +move a page. However, the core code will try again and again because
> +the assumption is that an elevated refcount is a temporary situation.
> +
> +Linux can move pages in memory (for example through the page migration
> +mechanism). These pages can be moved even if they are mlocked(!!!!).
> +So the current approach in use by RDMA etc etc is conceptually broken
> +but there are currently no other easy solutions.
> +
> +The solution here allows us to finally fix this issue by requiring
> +such devices to subscribe to a notification chain that will allow
> +them to work without pinning.
> +
> +The notifier chains provide two callback mechanisms. The
> +first one is required for any device that establishes external mappings.
> +The second (rmap) mechanism is required if a device needs to be
> +able to sleep when invalidating references. Sleeping may be necessary
> +if we are mapping across a network or to different Linux instances
> +in the same address space.

I'd have thought that a major reason for sleeping would be to wait for IO
to complete. Worth mentioning here?

> +mmu_notifier mechanism (for KVM/GRU etc)
> +----------------------------------------
> +Callbacks are registered with an mm_struct from a device driver using
> +mmu_notifier_register(). When the VM removes pages (or changes
> +permissions on pages etc) then callbacks are triggered.
> +
> +The invalidation function for a single page (*invalidate_page)

We already have an invalidatepage. Ho hum.

> +is called with spinlocks (in particular the pte lock) held. This allow
> +for an easy implementation of external ptes that are on the local system.
>

Why is that "easy"? I'd have thought that it would only be easy if the
driver happened to be using those same locks for its own purposes.
Otherwise it is "awkward"?

> +The invalidation mechanism for a range (*invalidate_range_begin/end*) is
> +called most of the time without any locks held. It is only called with
> +locks held for file backed mappings that are truncated. A flag indicates
> +in which mode we are. A driver can use that mechanism to f.e.
> +delay the freeing of the pages during truncate until no locks are held.

That sucks big time. What do we need to do to get the callback
functions called in non-atomic context?

> +Pages must be marked dirty if dirty bits are found to be set in
> +the external ptes during unmap.

That sentence is too vague. Define "marked dirty"?

> +The *release* method is called when a Linux process exits. It is run before

We'd conventionally use a notation such as "->release()" here, rather than
the asterisks.

> +the pages and mappings of a process are torn down and gives the device driver
> +a chance to zap all the external mappings in one go.

I assume what you mean here is that ->release() is called during exit()
when the final reference to an mm is being dropped.

> +An example for a code that can be used to build a notifier mechanism into
> +a device driver can be found in the file
> +Documentation/mmu_notifier/skeleton.c

Should that be in samples/?

> +mmu_rmap_notifier mechanism (XPMEM etc)
> +---------------------------------------
> +The mmu_rmap_notifier allows the device driver to implement their own rmap

s/their/its/

> +and allows the device driver to sleep during page eviction. This is necessary
> +for complex drivers that f.e. allow the sharing of memory between processes
> +running on different Linux instances (typically over a network or in a
> +partitioned NUMA system).
> +
> +The mmu_rmap_notifier adds another invalidate_page() callout that is called
> +*before* the Linux rmaps are walked. At that point only the page lock is
> +held. The invalidate_page() function must walk the driver rmaps and evict
> +all the references to the page.

What happens if it cannot do so?

> +There is no process information available before the rmaps are consulted.

Not sure what that sentence means. I guess "available to the core VM"?

> +The notifier mechanism can therefore not be attached to an mm_struct. Instead
> +it is a global callback list. Having to perform a callback for each and every
> +page that is reclaimed would be inefficient. Therefore we add an additional
> +page flag: PageRmapExternal().

How many page flags are left?

Is this feature important enough to justify consumption of another one?

> Only pages that are marked with this bit can
> +be exported and the rmap callbacks will only be performed for pages marked
> +that way.

"exported": new term, unclear what it means.

> +The required additional Page flag is only availabe in 64 bit mode and
> +therefore the mmu_rmap_notifier portion is not available on 32 bit platforms.

whoa. Is that good? You just made your feature unavailable on the great
majority of Linux systems.

> +An example of code to build a mmu_notifier mechanism with rmap capabilty
> +can be found in Documentation/mmu_notifier/skeleton_rmap.c
> +
> +February 9, 2008,
> + Christoph Lameter <[email protected]
> +
> +Index: linux-2.6/include/linux/mm_types.h
> Index: linux-2.6/include/linux/mm_types.h
> ===================================================================
> --- linux-2.6.orig/include/linux/mm_types.h 2008-02-14 20:59:01.000000000 -0800
> +++ linux-2.6/include/linux/mm_types.h 2008-02-14 21:17:51.000000000 -0800
> @@ -159,6 +159,12 @@ struct vm_area_struct {
> #endif
> };
>
> +struct mmu_notifier_head {
> +#ifdef CONFIG_MMU_NOTIFIER
> + struct hlist_head head;
> +#endif
> +};
> +
> struct mm_struct {
> struct vm_area_struct * mmap; /* list of VMAs */
> struct rb_root mm_rb;
> @@ -228,6 +234,7 @@ struct mm_struct {
> #ifdef CONFIG_CGROUP_MEM_CONT
> struct mem_cgroup *mem_cgroup;
> #endif
> + struct mmu_notifier_head mmu_notifier; /* MMU notifier list */
> };
>
> #endif /* _LINUX_MM_TYPES_H */
> Index: linux-2.6/include/linux/mmu_notifier.h
> ===================================================================
> --- /dev/null 1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6/include/linux/mmu_notifier.h 2008-02-14 22:42:28.000000000 -0800
> @@ -0,0 +1,180 @@
> +#ifndef _LINUX_MMU_NOTIFIER_H
> +#define _LINUX_MMU_NOTIFIER_H
> +
> +/*
> + * MMU motifier

typo

> + * Notifier functions for hardware and software that establishes external
> + * references to pages of a Linux system. The notifier calls ensure that
> + * external mappings are removed when the Linux VM removes memory ranges
> + * or individual pages from a process.

So the callee cannot fail. hm. If it can't block, it's likely screwed in
that case. In other cases it might be screwed anyway. I suspect we'll
need to be able to handle callee failure.

> + * These fall into two classes:
> + *
> + * 1. mmu_notifier
> + *
> + * These are callbacks registered with an mm_struct. If pages are
> + * removed from an address space then callbacks are performed.

"to be removed", I guess. It's called before the page is actually removed?

> + * Spinlocks must be held in order to walk reverse maps. The
> + * invalidate_page() callbacks are performed with spinlocks held.

hm, yes, problem. Permitting callee failure might be good enough.

> + * The invalidate_range_start/end callbacks can be performed in contexts
> + * where sleeping is allowed or in atomic contexts. A flag is passed
> + * to indicate an atomic context.

We generally would prefer separate callbacks, rather than a unified
callback with a mode flag.


> + * Pages must be marked dirty if dirty bits are found to be set in
> + * the external ptes.
> + */
> +
> +#include <linux/list.h>
> +#include <linux/spinlock.h>
> +#include <linux/rcupdate.h>
> +#include <linux/mm_types.h>
> +
> +struct mmu_notifier_ops;
> +
> +struct mmu_notifier {
> + struct hlist_node hlist;
> + const struct mmu_notifier_ops *ops;
> +};
> +
> +struct mmu_notifier_ops {
> + /*
> + * The release notifier is called when no other execution threads
> + * are left. Synchronization is not necessary.

"and the mm is about to be destroyed"?

> + */
> + void (*release)(struct mmu_notifier *mn,
> + struct mm_struct *mm);
> +
> + /*
> + * age_page is called from contexts where the pte_lock is held
> + */
> + int (*age_page)(struct mmu_notifier *mn,
> + struct mm_struct *mm,
> + unsigned long address);

This wasn't documented.

> + /*
> + * invalidate_page is called from contexts where the pte_lock is held.
> + */
> + void (*invalidate_page)(struct mmu_notifier *mn,
> + struct mm_struct *mm,
> + unsigned long address);
> +
> + /*
> + * invalidate_range_begin() and invalidate_range_end() must be paired.
> + *
> + * Multiple invalidate_range_begin/ends may be nested or called
> + * concurrently.

Under what circumstances would they be nested?

> That is legit. However, no new external references

references to what?

> + * may be established as long as any invalidate_xxx is running or
> + * any invalidate_range_begin() and has not been completed through a

stray "and".

> + * corresponding call to invalidate_range_end().
> + *
> + * Locking within the notifier needs to serialize events correspondingly.
> + *
> + * invalidate_range_begin() must clear all references in the range
> + * and stop the establishment of new references.

and stop the establishment of new references within the range, I assume?

If so, that's putting a heck of a lot of complexity into the driver, isn't
it? It needs to temporarily remember an arbitrarily large number of
regions in this mm against which references may not be taken?

> + * invalidate_range_end() reenables the establishment of references.

within the range?

> + * atomic indicates that the function is called in an atomic context.
> + * We can sleep if atomic == 0.
> + *
> + * invalidate_range_begin() must remove all external references.
> + * There will be no retries as with invalidate_page().
> + */
> + void (*invalidate_range_begin)(struct mmu_notifier *mn,
> + struct mm_struct *mm,
> + unsigned long start, unsigned long end,
> + int atomic);
> +
> + void (*invalidate_range_end)(struct mmu_notifier *mn,
> + struct mm_struct *mm,
> + unsigned long start, unsigned long end,
> + int atomic);
> +};
> +
> +#ifdef CONFIG_MMU_NOTIFIER
> +
> +/*
> + * Must hold the mmap_sem for write.
> + *
> + * RCU is used to traverse the list. A quiescent period needs to pass
> + * before the notifier is guaranteed to be visible to all threads
> + */
> +extern void mmu_notifier_register(struct mmu_notifier *mn,
> + struct mm_struct *mm);
> +
> +/*
> + * Must hold mmap_sem for write.
> + *
> + * A quiescent period needs to pass before the mmu_notifier structure
> + * can be released. mmu_notifier_release() will wait for a quiescent period
> + * after calling the ->release callback. So it is safe to call
> + * mmu_notifier_unregister from the ->release function.
> + */
> +extern void mmu_notifier_unregister(struct mmu_notifier *mn,
> + struct mm_struct *mm);
> +
> +
> +extern void mmu_notifier_release(struct mm_struct *mm);
> +extern int mmu_notifier_age_page(struct mm_struct *mm,
> + unsigned long address);

There's the mysterious age_page again.

> +static inline void mmu_notifier_head_init(struct mmu_notifier_head *mnh)
> +{
> + INIT_HLIST_HEAD(&mnh->head);
> +}
> +
> +#define mmu_notifier(function, mm, args...) \
> + do { \
> + struct mmu_notifier *__mn; \
> + struct hlist_node *__n; \
> + \
> + if (unlikely(!hlist_empty(&(mm)->mmu_notifier.head))) { \
> + rcu_read_lock(); \
> + hlist_for_each_entry_rcu(__mn, __n, \
> + &(mm)->mmu_notifier.head, \
> + hlist) \
> + if (__mn->ops->function) \
> + __mn->ops->function(__mn, \
> + mm, \
> + args); \
> + rcu_read_unlock(); \
> + } \
> + } while (0)

The macro references its args more than once. Anyone who does

mmu_notifier(function, some_function_which_has_side_effects())

will get a surprise. Use temporaries.

> +#else /* CONFIG_MMU_NOTIFIER */
> +
> +/*
> + * Notifiers that use the parameters that they were passed so that the
> + * compiler does not complain about unused variables but does proper
> + * parameter checks even if !CONFIG_MMU_NOTIFIER.
> + * Macros generate no code.
> + */
> +#define mmu_notifier(function, mm, args...) \
> + do { \
> + if (0) { \
> + struct mmu_notifier *__mn; \
> + \
> + __mn = (struct mmu_notifier *)(0x00ff); \
> + __mn->ops->function(__mn, mm, args); \
> + }; \
> + } while (0)

That's a bit weird. Can't we do the old

(void)function;
(void)mm;

trick? Or make it a static inline function?

> +static inline void mmu_notifier_register(struct mmu_notifier *mn,
> + struct mm_struct *mm) {}
> +static inline void mmu_notifier_unregister(struct mmu_notifier *mn,
> + struct mm_struct *mm) {}
> +static inline void mmu_notifier_release(struct mm_struct *mm) {}
> +static inline int mmu_notifier_age_page(struct mm_struct *mm,
> + unsigned long address)
> +{
> + return 0;
> +}
> +
> +static inline void mmu_notifier_head_init(struct mmu_notifier_head *mmh) {}
> +
> +#endif /* CONFIG_MMU_NOTIFIER */
> +
> +#endif /* _LINUX_MMU_NOTIFIER_H */
> Index: linux-2.6/mm/Kconfig
> ===================================================================
> --- linux-2.6.orig/mm/Kconfig 2008-02-14 20:59:01.000000000 -0800
> +++ linux-2.6/mm/Kconfig 2008-02-14 21:17:51.000000000 -0800
> @@ -193,3 +193,7 @@ config NR_QUICK
> config VIRT_TO_BUS
> def_bool y
> depends on !ARCH_NO_VIRT_TO_BUS
> +
> +config MMU_NOTIFIER
> + def_bool y
> + bool "MMU notifier, for paging KVM/RDMA"

Why is this not selectable? The help seems a bit brief.

Does this cause 32-bit systems to drag in a bunch of code they're not
allowed to ever use?

> Index: linux-2.6/mm/Makefile
> ===================================================================
> --- linux-2.6.orig/mm/Makefile 2008-02-14 20:59:01.000000000 -0800
> +++ linux-2.6/mm/Makefile 2008-02-14 21:17:51.000000000 -0800
> @@ -33,4 +33,5 @@ obj-$(CONFIG_MIGRATION) += migrate.o
> obj-$(CONFIG_SMP) += allocpercpu.o
> obj-$(CONFIG_QUICKLIST) += quicklist.o
> obj-$(CONFIG_CGROUP_MEM_CONT) += memcontrol.o
> +obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
>
> Index: linux-2.6/mm/mmu_notifier.c
> ===================================================================
> --- /dev/null 1970-01-01 00:00:00.000000000 +0000
> +++ linux-2.6/mm/mmu_notifier.c 2008-02-14 22:41:55.000000000 -0800
> @@ -0,0 +1,76 @@
> +/*
> + * linux/mm/mmu_notifier.c
> + *
> + * Copyright (C) 2008 Qumranet, Inc.
> + * Copyright (C) 2008 SGI
> + * Christoph Lameter <[email protected]>
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2. See
> + * the COPYING file in the top-level directory.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/mm.h>
> +#include <linux/mmu_notifier.h>
> +
> +/*
> + * No synchronization. This function can only be called when only a single
> + * process remains that performs teardown.
> + */
> +void mmu_notifier_release(struct mm_struct *mm)
> +{
> + struct mmu_notifier *mn;
> + struct hlist_node *n, *t;
> +
> + if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
> + hlist_for_each_entry_safe(mn, n, t,
> + &mm->mmu_notifier.head, hlist) {
> + hlist_del_init(&mn->hlist);
> + if (mn->ops->release)
> + mn->ops->release(mn, mm);

We do this a lot, but back in the old days people didn't like optional
callbacks which can be NULL. If we expect that mmu_notifier_ops.release is
usually implemented, then just unconditionally call it and require that all
clients implement it. Perhaps provide an exported-to-modules stub in core
kernel for clients which didn't want to implement ->release().

> + }
> + }
> +}
> +
> +/*
> + * If no young bitflag is supported by the hardware, ->age_page can
> + * unmap the address and return 1 or 0 depending if the mapping previously
> + * existed or not.
> + */
> +int mmu_notifier_age_page(struct mm_struct *mm, unsigned long address)
> +{
> + struct mmu_notifier *mn;
> + struct hlist_node *n;
> + int young = 0;
> +
> + if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
> + rcu_read_lock();
> + hlist_for_each_entry_rcu(mn, n,
> + &mm->mmu_notifier.head, hlist) {
> + if (mn->ops->age_page)
> + young |= mn->ops->age_page(mn, mm, address);
> + }
> + rcu_read_unlock();
> + }
> +
> + return young;
> +}

should the rcu_read_lock() cover the hlist_empty() test?

This function looks like it was tossed in at the last minute. It's
mysterious, undocumented, poorly commented, poorly named. A better name
would be one which has some correlation with the return value.

Because anyone who looks at some code which does

if (mmu_notifier_age_page(mm, address))
...

has to go and reverse-engineer the implementation of
mmu_notifier_age_page() to work out under which circumstances the "..."
will be executed. But this should be apparent just from reading the callee
implementation.

This function *really* does need some documentation. What does it *mean*
when the ->age_page() from some of the notifiers returned "1" and the
->age_page() from some other notifiers returned zero? Dunno.

2008-02-16 08:47:19

by Avi Kivity

Subject: Re: [patch 1/6] mmu_notifier: Core code

Andrew Morton wrote:
> How important is this feature to KVM?
>

Very. kvm pins pages that are referenced by the guest; a 64-bit guest
will easily pin its entire memory with the kernel map. So this is
critical for guest swapping to actually work.

Other nice features like page migration are also enabled by this patch.

--
Any sufficiently difficult bug is indistinguishable from a feature.

2008-02-16 08:58:21

by Andrew Morton

Subject: Re: [patch 1/6] mmu_notifier: Core code

On Sat, 16 Feb 2008 10:45:50 +0200 Avi Kivity <[email protected]> wrote:

> Andrew Morton wrote:
> > How important is this feature to KVM?
> >
>
> Very. kvm pins pages that are referenced by the guest;

hm. Why does it do that?

> a 64-bit guest
> will easily pin its entire memory with the kernel map.

> So this is
> critical for guest swapping to actually work.

Curious. If KVM can release guest pages at the request of this notifier so
that they can be swapped out, why can't it release them by default, and
allow swapping to proceed?

>
> Other nice features like page migration are also enabled by this patch.
>

We already have page migration. Do you mean page-migration-when-using-kvm?

2008-02-16 09:22:50

by Avi Kivity

Subject: Re: [patch 1/6] mmu_notifier: Core code

Andrew Morton wrote:



>> Very. kvm pins pages that are referenced by the guest;
>>
>
> hm. Why does it do that?
>
>

It was deemed best not to allow the guest to write to a page that has
been swapped out and assigned to an unrelated host process.

One way to view the kvm shadow page tables is as hardware dma
descriptors. kvm pins pages for the same reason that drivers pin pages
that are being dma'ed. It's also the reason why mmu notifiers are useful
for such a wide range of dma capable hardware.

>> a 64-bit guest
>> will easily pin its entire memory with the kernel map.
>>
>
>
>> So this is
>> critical for guest swapping to actually work.
>>
>
> Curious. If KVM can release guest pages at the request of this notifier so
> that they can be swapped out, why can't it release them by default, and
> allow swapping to proceed?
>
>

If kvm releases a page, it must also zap any shadow ptes pointing at the
page and flush the tlb. If you do that for all of memory you can't
reference any of it.

Releasing a page has costs, both at the time of the release and when the
guest eventually refers to the page again.

>> Other nice features like page migration are also enabled by this patch.
>>
>>
>
> We already have page migration. Do you mean page-migration-when-using-kvm?
>

Yes, I'm obviously writing from a kvm-centric point of view. This is an
important feature, as the virtualization future seems to be NUMA hosts
(2- or 4- way, 4 cores per socket) running moderately sized guests. The
ability to load-balance guests among the NUMA nodes is important for
performance.

(btw, I'm also looking forward to memory defragmentation. large pages
are important for virtualization workloads and mmu notifiers are again
critical to getting it to work while running kvm).

--
Any sufficiently difficult bug is indistinguishable from a feature.

2008-02-16 10:42:08

by Brice Goglin

Subject: Re: [patch 1/6] mmu_notifier: Core code

Andrew Morton wrote:
> What is the status of getting infiniband to use this facility?
>
> How important is this feature to KVM?
>
> To xpmem?
>
> Which other potential clients have been identified and how important it it
> to those?
>

As I said when Andrea posted the first patch series, I used something
very similar for non-RDMA-based HPC about 4 years ago. I haven't had
time yet to look in depth and try the latest proposed API but my feeling
is that it looks good.

Brice

2008-02-16 10:59:03

by Andrew Morton

Subject: Re: [patch 1/6] mmu_notifier: Core code

On Sat, 16 Feb 2008 11:41:35 +0100 Brice Goglin <[email protected]> wrote:

> Andrew Morton wrote:
> > What is the status of getting infiniband to use this facility?
> >
> > How important is this feature to KVM?
> >
> > To xpmem?
> >
> > Which other potential clients have been identified and how important it it
> > to those?
> >
>
> As I said when Andrea posted the first patch series, I used something
> very similar for non-RDMA-based HPC about 4 years ago. I haven't had
> time yet to look in depth and try the latest proposed API but my feeling
> is that it looks good.
>

"looks good" maybe. But it's in the details where I fear this will come
unstuck. The likelihood is that some callbacks really will want to be able to
block in places where this interface doesn't permit that - either to wait
for IO to complete or to wait for other threads to clear critical regions.

From that POV it doesn't look like a sufficiently general and useful
design. Looks like it was grafted onto the current VM implementation in a
way which just about suits two particular clients if they try hard enough.

Which is all perfectly understandable - it would be hard to rework core MM
to be able to make this interface more general. But I do think it's
half-baked and there is a decent risk that future (or present) code which
_could_ use something like this won't be able to use this one, and will
continue to futz with mlock, page-pinning, etc.

Not that I know what the fix to that is..

2008-02-16 19:21:23

by Christoph Lameter

Subject: Re: [patch 1/6] mmu_notifier: Core code

On Fri, 15 Feb 2008, Andrew Morton wrote:

> What is the status of getting infiniband to use this facility?

Well we are talking about this it seems.
>
> How important is this feature to KVM?

Andrea can answer this.

> To xpmem?

Without this feature we are stuck with page pinning by increasing
refcounts, which leads to endless lru scanning and other misbehavior. Also,
applications that use XPmem will not be able to swap or use things like
remap.

> Which other potential clients have been identified and how important it it
> to those?

It is likely important to various DMA engines, framebuffer devices etc.
It seems to be a generally useful feature.


> > +The notifier chains provide two callback mechanisms. The
> > +first one is required for any device that establishes external mappings.
> > +The second (rmap) mechanism is required if a device needs to be
> > +able to sleep when invalidating references. Sleeping may be necessary
> > +if we are mapping across a network or to different Linux instances
> > +in the same address space.
>
> I'd have thought that a major reason for sleeping would be to wait for IO
> to complete. Worth mentioning here?

Right.

> Why is that "easy"? I's have thought that it would only be easy if the
> driver happened to be using those same locks for its own purposes.
> Otherwise it is "awkward"?

It's relatively easy because it is tied directly to a process and can use
external tlb shootdown / external page table clearing directly. The other
method requires an rmap in the device driver where it can look up the
processes that are mapping the page.

> > +The invalidation mechanism for a range (*invalidate_range_begin/end*) is
> > +called most of the time without any locks held. It is only called with
> > +locks held for file backed mappings that are truncated. A flag indicates
> > +in which mode we are. A driver can use that mechanism to f.e.
> > +delay the freeing of the pages during truncate until no locks are held.
>
> That sucks big time. What do we need to do to make get the callback
> functions called in non-atomic context?

We would have to drop the inode_mmap_lock. Could be done with some minor
work.

> > +Pages must be marked dirty if dirty bits are found to be set in
> > +the external ptes during unmap.
>
> That sentence is too vague. Define "marked dirty"?

Call set_page_dirty().
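
(A rough fragment of the intended pattern in a driver's unmap path; the
my_drv_* helpers and the epte/page variables are made up for illustration,
only set_page_dirty() is a real kernel function:)

	/* While tearing down the external pte for this page: */
	if (my_drv_external_pte_dirty(epte))	/* hypothetical accessor */
		set_page_dirty(page);	/* transfer dirty bit to struct page */
	my_drv_clear_external_pte(epte);	/* hypothetical */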

> > +The *release* method is called when a Linux process exits. It is run before
>
> We'd conventionally use a notation such as "->release()" here, rather than
> the asterisks.

Ok.

>
> > +the pages and mappings of a process are torn down and gives the device driver
> > +a chance to zap all the external mappings in one go.
>
> I assume what you mean here is that ->release() is called during exit()
> when the final reference to an mm is being dropped.

Right.

> > +An example for a code that can be used to build a notifier mechanism into
> > +a device driver can be found in the file
> > +Documentation/mmu_notifier/skeleton.c
>
> Should that be in samples/?

Oh. We have that?

> > +The mmu_rmap_notifier adds another invalidate_page() callout that is called
> > +*before* the Linux rmaps are walked. At that point only the page lock is
> > +held. The invalidate_page() function must walk the driver rmaps and evict
> > +all the references to the page.
>
> What happens if it cannot do so?

The page is not reclaimed if we were called from try_to_unmap(). From
page_mkclean() we must always evict the page to switch off the write
protect bit.

> > +There is no process information available before the rmaps are consulted.
>
> Not sure what that sentence means. I guess "available to the core VM"?

At that point we only have the page. We do not know which processes map
the page. In order to find out we need to take a spinlock.


> > +The notifier mechanism can therefore not be attached to an mm_struct. Instead
> > +it is a global callback list. Having to perform a callback for each and every
> > +page that is reclaimed would be inefficient. Therefore we add an additional
> > +page flag: PageRmapExternal().
>
> How many page flags are left?

30 or so. It's only available on 64 bit.

> Is this feature important enough to justfy consumption of another one?
>
> > Only pages that are marked with this bit can
> > +be exported and the rmap callbacks will only be performed for pages marked
> > +that way.
>
> "exported": new term, unclear what it means.

Something external to the kernel references the page.

> > +The required additional Page flag is only availabe in 64 bit mode and
> > +therefore the mmu_rmap_notifier portion is not available on 32 bit platforms.
>
> whoa. Is that good? You just made your feature unavailable on the great
> majority of Linux systems.

rmaps are usually used by complex drivers that are typically used in large
systems.

> > + * Notifier functions for hardware and software that establishes external
> > + * references to pages of a Linux system. The notifier calls ensure that
> > + * external mappings are removed when the Linux VM removes memory ranges
> > + * or individual pages from a process.
>
> So the callee cannot fail. hm. If it can't block, it's likely screwed in
> that case. In other cases it might be screwed anyway. I suspect we'll
> need to be able to handle callee failure.

Probably.

>
> > + * These fall into two classes:
> > + *
> > + * 1. mmu_notifier
> > + *
> > + * These are callbacks registered with an mm_struct. If pages are
> > + * removed from an address space then callbacks are performed.
>
> "to be removed", I guess. It's called before the page is actually removed?

It's called after the pte was cleared, while holding the pte lock.

> > + * The invalidate_range_start/end callbacks can be performed in contexts
> > + * where sleeping is allowed or in atomic contexts. A flag is passed
> > + * to indicate an atomic context.
>
> We generally would prefer separate callbacks, rather than a unified
> callback with a mode flag.

We could drop the inode_mmap_lock when doing truncate. That would make
this work but it's a kind of invasive thing for the VM.

> > +struct mmu_notifier_ops {
> > + /*
> > + * The release notifier is called when no other execution threads
> > + * are left. Synchronization is not necessary.
>
> "and the mm is about to be destroyed"?

Right.

> > + /*
> > + * invalidate_range_begin() and invalidate_range_end() must be paired.
> > + *
> > + * Multiple invalidate_range_begin/ends may be nested or called
> > + * concurrently.
>
> Under what circumstances would they be nested?

Hmmmm.. Right they cannot be nested. Multiple processors can have
invalidates() concurrently in progress.

> > That is legit. However, no new external references
>
> references to what?

To the ranges that are in the process of being invalidated.

> > + * invalidate_range_begin() must clear all references in the range
> > + * and stop the establishment of new references.
>
> and stop the establishment of new references within the range, I assume?

Right.

> If so, that's putting a heck of a lot of complexity into the driver, isn't
> it? It needs to temporarily remember an arbitrarily large number of
> regions in this mm against which references may not be taken?

That is one implementation (XPmem does that). The other is to simply stop
all references when any invalidate_range is in progress (KVM and GRU do
that).
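
A rough, untested sketch of that second approach (the callback signatures
are the ones from this patch; the my_drv_* names and the driver-side
serialization are made up; assumes <linux/mmu_notifier.h>, <asm/atomic.h>
and <linux/errno.h>):

/* Count of range invalidations currently in flight. */
static atomic_t my_drv_invalidates = ATOMIC_INIT(0);

static void my_drv_range_begin(struct mmu_notifier *mn,
			       struct mm_struct *mm,
			       unsigned long start, unsigned long end,
			       int atomic)
{
	atomic_inc(&my_drv_invalidates);
	my_drv_zap_external_range(mn, start, end);	/* hypothetical */
}

static void my_drv_range_end(struct mmu_notifier *mn,
			     struct mm_struct *mm,
			     unsigned long start, unsigned long end,
			     int atomic)
{
	atomic_dec(&my_drv_invalidates);
}

/* In the path that would establish a new external reference; a driver
 * lock (not shown) must serialize this check against my_drv_range_begin(). */
static int my_drv_map_external(struct mmu_notifier *mn, unsigned long address)
{
	if (atomic_read(&my_drv_invalidates))
		return -EAGAIN;		/* caller retries later */
	/* ... establish the external pte for address ... */
	return 0;
}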


> > + * invalidate_range_end() reenables the establishment of references.
>
> within the range?

Right.

> > +extern void mmu_notifier_release(struct mm_struct *mm);
> > +extern int mmu_notifier_age_page(struct mm_struct *mm,
> > + unsigned long address);
>
> There's the mysterious age_page again.

Andrea put this in to check the reference status of a page. It functions
like the accessed bit.

> > +static inline void mmu_notifier_head_init(struct mmu_notifier_head *mnh)
> > +{
> > + INIT_HLIST_HEAD(&mnh->head);
> > +}
> > +
> > +#define mmu_notifier(function, mm, args...) \
> > + do { \
> > + struct mmu_notifier *__mn; \
> > + struct hlist_node *__n; \
> > + \
> > + if (unlikely(!hlist_empty(&(mm)->mmu_notifier.head))) { \
> > + rcu_read_lock(); \
> > + hlist_for_each_entry_rcu(__mn, __n, \
> > + &(mm)->mmu_notifier.head, \
> > + hlist) \
> > + if (__mn->ops->function) \
> > + __mn->ops->function(__mn, \
> > + mm, \
> > + args); \
> > + rcu_read_unlock(); \
> > + } \
> > + } while (0)
>
> The macro references its args more than once. Anyone who does
>
> mmu_notifier(function, some_function_which_has_side_effects())
>
> will get a surprise. Use temporaries.

Ok.
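
Something like this, perhaps (untested; it only captures mm in a
temporary -- the variadic args are still expanded inside the loop):

#define mmu_notifier(function, mm, args...)				\
	do {								\
		struct mm_struct *__mm = (mm);				\
		struct mmu_notifier *__mn;				\
		struct hlist_node *__n;					\
									\
		if (unlikely(!hlist_empty(&__mm->mmu_notifier.head))) {	\
			rcu_read_lock();				\
			hlist_for_each_entry_rcu(__mn, __n,		\
					&__mm->mmu_notifier.head,	\
					hlist)				\
				if (__mn->ops->function)		\
					__mn->ops->function(__mn,	\
							__mm,		\
							args);		\
			rcu_read_unlock();				\
		}							\
	} while (0)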

> > +#define mmu_notifier(function, mm, args...) \
> > + do { \
> > + if (0) { \
> > + struct mmu_notifier *__mn; \
> > + \
> > + __mn = (struct mmu_notifier *)(0x00ff); \
> > + __mn->ops->function(__mn, mm, args); \
> > + }; \
> > + } while (0)
>
> That's a bit weird. Can't we do the old
>
> (void)function;
> (void)mm;
>
> trick? Or make it a staic inline function?

Static inline won't allow the checking of the parameters.

(void) may be a good thing here.

> > +config MMU_NOTIFIER
> > + def_bool y
> > + bool "MMU notifier, for paging KVM/RDMA"
>
> Why is this not selectable? The help seems a bit brief.
>
> Does this cause 32-bit systems to drag in a bunch of code they're not
> allowed to ever use?

I have selected it a number of times. We could make the help text a bit
longer, right.


> > + if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
> > + hlist_for_each_entry_safe(mn, n, t,
> > + &mm->mmu_notifier.head, hlist) {
> > + hlist_del_init(&mn->hlist);
> > + if (mn->ops->release)
> > + mn->ops->release(mn, mm);
>
> We do this a lot, but back in the old days people didn't like optional
> callbacks which can be NULL. If we expect that mmu_notifier_ops.release is
> usually implemented, the just unconditionally call it and require that all
> clients implement it. Perhaps provide an exported-to-modules stuv in core
> kernel for clients which didn't want to implement ->release().

Ok.

> > +{
> > + struct mmu_notifier *mn;
> > + struct hlist_node *n;
> > + int young = 0;
> > +
> > + if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
> > + rcu_read_lock();
> > + hlist_for_each_entry_rcu(mn, n,
> > + &mm->mmu_notifier.head, hlist) {
> > + if (mn->ops->age_page)
> > + young |= mn->ops->age_page(mn, mm, address);
> > + }
> > + rcu_read_unlock();
> > + }
> > +
> > + return young;
> > +}
>
> should the rcu_read_lock() cover the hlist_empty() test?
>
> This function looks like it was tossed in at the last minute. It's
> mysterious, undocumented, poorly commented, poorly named. A better name
> would be one which has some correlation with the return value.
>
> Because anyone who looks at some code which does
>
> if (mmu_notifier_age_page(mm, address))
> ...
>
> has to go and reverse-engineer the implementation of
> mmu_notifier_age_page() to work out under which circumstances the "..."
> will be executed. But this should be apparent just from reading the callee
> implementation.
>
> This function *really* does need some documentation. What does it *mean*
> when the ->age_page() from some of the notifiers returned "1" and the
> ->age_page() from some other notifiers returned zero? Dunno.

Andrea: Could you provide some more detail here?

2008-02-16 19:31:21

by Christoph Lameter

Subject: Re: [patch 1/6] mmu_notifier: Core code

On Sat, 16 Feb 2008, Andrew Morton wrote:

> "looks good" maybe. But it's in the details where I fear this will come
> unstuck. The likelihood that some callbacks really will want to be able to
> block in places where this interface doesn't permit that - either to wait
> for IO to complete or to wait for other threads to clear critical regions.

We can get the invalidate_range to always be called without spinlocks if
we deal with the case of the inode_mmap_lock being held in the truncate case.

If you always want to be able to sleep, then we could drop the
invalidate_page() that is called while pte locks are held and require the use
of a device driver rmap?

> From that POV it doesn't look like a sufficiently general and useful
> design. Looks like it was grafted onto the current VM implementation in a
> way which just about suits two particular clients if they try hard enough.

You missed KVM. We did the best we could while being as minimally invasive as
possible.

> Which is all perfectly understandable - it would be hard to rework core MM
> to be able to make this interface more general. But I do think it's
> half-baked and there is a decent risk that future (or present) code which
> _could_ use something like this won't be able to use this one, and will
> continue to futz with mlock, page-pinning, etc.
>
> Not that I know what the fix to that is..

You do not see a chance of this being okay if we adopt the two measures
that I mentioned above?

2008-02-17 03:04:32

by Andrea Arcangeli

[permalink] [raw]
Subject: Re: [patch 1/6] mmu_notifier: Core code

On Sat, Feb 16, 2008 at 11:21:07AM -0800, Christoph Lameter wrote:
> On Fri, 15 Feb 2008, Andrew Morton wrote:
>
> > What is the status of getting infiniband to use this facility?
>
> Well we are talking about this it seems.

It seems the IB folks think allowing RDMA over virtual memory is not
interesting; their argument seems to be that RDMA is only interesting
on RAM (and they seem not interested in allowing RDMA over a ram+swap
backed _virtual_ memory allocation). They just have to decide whether
ram+swap allocation for RDMA is useful or not.

> > How important is this feature to KVM?
>
> Andrea can answer this.

I think I already did in separate email.

> > That sucks big time. What do we need to do to get the callback
> > functions called in non-atomic context?

I certainly agree, given that I also asked to drop the lock param and enforce
that invalidate_range_* always be called in non-atomic context.

> We would have to drop the inode_mmap_lock. Could be done with some minor
> work.

The invalidate may be deferred until after the lock is released; the lock may
not have to be dropped to clean up the API (and make xpmem's life easier).

> That is one implementation (XPmem does that). The other is to simply stop
> all references when any invalidate_range is in progress (KVM and GRU do
> that).

KVM doesn't stop new references. It doesn't need to because it holds a
reference on the page (GRU doesn't). KVM can invalidate the spte and
flush the tlb only after the linux pte has been cleared and after the
page has been released by the VM (because the page doesn't go in the
freelist and it remains pinned for a little while, until the spte is
dropped too inside invalidate_range_end). GRU has to invalidate
_before_ the linux pte is cleared so it has to stop new references
from being established in the invalidate_range_start/end critical
section.

> Andrea put this in to check the reference status of a page. It functions
> like the accessed bit.

In short, each pte can have some sptes associated with it. So whenever we
do a ptep_clear_flush protected by the PT lock, we also have to run
invalidate_page, which will internally invoke a sort-of
sptep_clear_flush protected by a kvm->mmu_lock (the equivalent of
page_table_lock/PT-lock). sptes, just like ptes, map virtual addresses
to physical addresses, so you can read/write to RAM either through a
pte or through a spte.
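
As a rough illustration of that description (not KVM's actual code; kvm_demo,
demo_drop_spte and the lock name are hypothetical), an ->invalidate_page
implementation along these lines takes only its own spinlock and never
sleeps, since it is invoked with the PT lock held:

#include <linux/mmu_notifier.h>
#include <linux/spinlock.h>
#include <linux/kernel.h>

struct kvm_demo {
	struct mmu_notifier	mn;		/* embedded notifier */
	spinlock_t		mmu_lock;	/* stand-in for kvm->mmu_lock */
};

/* Hypothetical helper: find and clear any spte shadowing 'address'. */
static void demo_drop_spte(struct kvm_demo *kvm, struct mm_struct *mm,
			   unsigned long address)
{
	/* a real implementation would walk the shadow page tables here */
}

static void demo_invalidate_page(struct mmu_notifier *mn,
				 struct mm_struct *mm,
				 unsigned long address)
{
	struct kvm_demo *kvm = container_of(mn, struct kvm_demo, mn);

	/* Called under the PT lock: take only a spinlock, never sleep. */
	spin_lock(&kvm->mmu_lock);
	demo_drop_spte(kvm, mm, address);
	/* the secondary TLB flush for this address would follow here */
	spin_unlock(&kvm->mmu_lock);
}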

Just like it would be insane to have any requirement that
ptep_clear_flush has to run in non-atomic context (forcing a
conversion of the PT lock to a mutex), it's also weird to require that
invalidate_page/age_page be able to run in non-atomic context.

All troubles start with the xpmem requirement of having to schedule
in its equivalent of the sptep_clear_flush, because it's not a
gigahertz-in-cpu thing but a gigabit thing where the network stack is
involved with its own software, Linux-driven skb memory allocations,
schedules waiting for network I/O, etc... Imagine ptes allocated on a
remote node; no surprise it brings a new set of problems (assuming it
can work reliably during oom given its memory requirements in the
try_to_unmap path: no page can ever be freed until the skbs have been
allocated and sent and allocated again to receive the ack).

Furthermore xpmem doesn't associate any pte with a spte; it associates a
page_t with certain remote references, or it would be in trouble with
invalidate_page, which corresponds to ptep_clear_flush on a virtual
address that exists thanks to the anon_vma/i_mmap lock being held (and not
thanks to the mmap_sem as in all invalidate_range calls).

Christoph's patch is a mix of two entirely separated features. KVM can
live with V7 just fine, but it's a lot more than what is needed by KVM.

I don't think that invalidate_page/age_page must be allowed to sleep
just because invalidate_range can also sleep. You just have to ask yourself
whether the VM locks shall remain spinlocks, for the VM's own good (not for
the mmu notifiers' good). It'd be bad to make the VM underperform with
mutexes protecting tiny critical sections just to please some mmu notifier
user. But if they're spinlocks, then clearly invalidate_page/age_page
based on virtual addresses can't sleep, or the virtual address wouldn't
make sense anymore by the time the spinlock is released.

> > This function looks like it was tossed in at the last minute. It's
> > mysterious, undocumented, poorly commented, poorly named. A better name
> > would be one which has some correlation with the return value.
> >
> > Because anyone who looks at some code which does
> >
> > if (mmu_notifier_age_page(mm, address))
> > ...
> >
> > has to go and reverse-engineer the implementation of
> > mmu_notifier_age_page() to work out under which circumstances the "..."
> > will be executed. But this should be apparent just from reading the callee
> > implementation.
> >
> > This function *really* does need some documentation. What does it *mean*
> > when the ->age_page() from some of the notifiers returned "1" and the
> > ->age_page() from some other notifiers returned zero? Dunno.
>
> Andrea: Could you provide some more detail here?

age_page is simply the ptep_clear_flush_young equivalent for
sptes. It's meant to provide aging for the pages mapped by secondary
mmus. Its return value is the same as that of ptep_clear_flush_young, but
it represents the sptes associated with the pte;
ptep_clear_flush_young instead only takes care of the pte itself.
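
A sketch of what such an ->age_page callback might look like, reusing the
hypothetical kvm_demo structure from the sketch above; it mirrors
ptep_clear_flush_young() by test-and-clearing the referenced state of the
sptes and returning whether any of them was young:

/* Hypothetical helper: returns 1 if an spte for 'address' was referenced
 * since the last check (and clears that state), 0 otherwise. */
static int demo_test_and_clear_spte_young(struct kvm_demo *kvm,
					  unsigned long address)
{
	return 0;	/* a real implementation would check the sptes */
}

static int demo_age_page(struct mmu_notifier *mn, struct mm_struct *mm,
			 unsigned long address)
{
	struct kvm_demo *kvm = container_of(mn, struct kvm_demo, mn);
	int young;

	/* Atomic context, just like ptep_clear_flush_young() itself. */
	spin_lock(&kvm->mmu_lock);
	young = demo_test_and_clear_spte_young(kvm, address);
	spin_unlock(&kvm->mmu_lock);

	return young;	/* folded into ptep_clear_flush_young()'s result */
}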

For KVM the below would be all that is needed; the fact that
invalidate_range can sleep and invalidate_page/age_page can't is
because their users are very different. With my approach the mmu
notifier callbacks are always protected by the PT lock (just like
ptep_clear_flush and the other pte+tlb manglings) and they're called
after the pte is cleared and before the VM reference on the page has
been dropped. That makes it safe for GRU too, so in my initial
approach _none_ of the callbacks was allowed to sleep, and that was a
feature that allowed GRU not to block its tlb miss interrupt with any
further locking (the PT lock taken by follow_page automatically
serialized the GRU interrupt against the MMU notifiers and the Linux
page fault). For KVM the invalidate_pages of my patch is converted to
invalidate_range_end because it doesn't matter for KVM if it's called
after the PT lock has been dropped. In the try_to_unmap case
invalidate_page is called in atomic context in Christoph's patch too,
because a virtual address, and in turn a pte, and in turn certain sptes,
can only exist thanks to the spinlocks taken by the VM. Changing the
VM to make mmu notifiers sleepable in the try_to_unmap path sounds bad
to me, especially given that not even xpmem needs this.

You can see how everything looks simpler and more symmetric by
assuming the secondary mmu references are established and dropped like
ptes, as in the KVM case where in fact sptes are a pure cpu thing
exactly like the ptes. XPMEM adds the requirement that sptes are in fact
remote entities that are mangled by a message passing protocol over
the network; it's the same as ptep_clear_flush being required to
schedule and send skbs to be successful and allow try_to_unmap to
do its work. Same problem. No wonder the patch gets more complicated then.

Signed-off-by: Andrea Arcangeli <[email protected]>

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -46,6 +46,7 @@
__young = ptep_test_and_clear_young(__vma, __address, __ptep); \
if (__young) \
flush_tlb_page(__vma, __address); \
+ __young |= mmu_notifier_age_page((__vma)->vm_mm, __address); \
__young; \
})
#endif
@@ -86,6 +87,7 @@ do { \
pte_t __pte; \
__pte = ptep_get_and_clear((__vma)->vm_mm, __address, __ptep); \
flush_tlb_page(__vma, __address); \
+ mmu_notifier(invalidate_page, (__vma)->vm_mm, __address); \
__pte; \
})
#endif
diff --git a/include/asm-s390/pgtable.h b/include/asm-s390/pgtable.h
--- a/include/asm-s390/pgtable.h
+++ b/include/asm-s390/pgtable.h
@@ -712,6 +712,7 @@ static inline pte_t ptep_clear_flush(str
{
pte_t pte = *ptep;
ptep_invalidate(address, ptep);
+ mmu_notifier(invalidate_page, vma->vm_mm, address);
return pte;
}

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -10,6 +10,7 @@
#include <linux/rbtree.h>
#include <linux/rwsem.h>
#include <linux/completion.h>
+#include <linux/mmu_notifier.h>
#include <asm/page.h>
#include <asm/mmu.h>

@@ -219,6 +220,8 @@ struct mm_struct {
/* aio bits */
rwlock_t ioctx_list_lock;
struct kioctx *ioctx_list;
+
+ struct mmu_notifier_head mmu_notifier; /* MMU notifier list */
};

#endif /* _LINUX_MM_TYPES_H */
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
new file mode 100644
--- /dev/null
+++ b/include/linux/mmu_notifier.h
@@ -0,0 +1,132 @@
+#ifndef _LINUX_MMU_NOTIFIER_H
+#define _LINUX_MMU_NOTIFIER_H
+
+#include <linux/list.h>
+#include <linux/spinlock.h>
+
+struct mmu_notifier;
+
+struct mmu_notifier_ops {
+ /*
+ * Called when nobody can register any more notifier in the mm
+ * and after the "mn" notifier has been disarmed already.
+ */
+ void (*release)(struct mmu_notifier *mn,
+ struct mm_struct *mm);
+
+ /*
+ * invalidate_page[s] is called in atomic context
+ * after any pte has been updated and before
+ * dropping the PT lock required to update any Linux pte.
+ * Once the PT lock will be released the pte will have its
+ * final value to export through the secondary MMU.
+ * Before this is invoked any secondary MMU is still ok
+ * to read/write to the page previously pointed by the
+ * Linux pte because the old page hasn't been freed yet.
+ * If required set_page_dirty has to be called internally
+ * to this method.
+ */
+ void (*invalidate_page)(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long address);
+ void (*invalidate_pages)(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long start, unsigned long end);
+
+ /*
+ * Age page is called in atomic context inside the PT lock
+ * right after the VM is test-and-clearing the young/accessed
+ * bitflag in the pte. This way the VM will provide proper aging
+ * to the accesses to the page through the secondary MMUs
+ * and not only to the ones through the Linux pte.
+ */
+ int (*age_page)(struct mmu_notifier *mn,
+ struct mm_struct *mm,
+ unsigned long address);
+};
+
+struct mmu_notifier {
+ struct hlist_node hlist;
+ const struct mmu_notifier_ops *ops;
+};
+
+#ifdef CONFIG_MMU_NOTIFIER
+
+struct mmu_notifier_head {
+ struct hlist_head head;
+ spinlock_t lock;
+};
+
+#include <linux/mm_types.h>
+
+/*
+ * RCU is used to traverse the list. A quiescent period needs to pass
+ * before the notifier is guaranteed to be visible to all threads.
+ */
+extern void mmu_notifier_register(struct mmu_notifier *mn,
+ struct mm_struct *mm);
+/*
+ * RCU is used to traverse the list. A quiescent period needs to pass
+ * before the "struct mmu_notifier" can be freed. Alternatively it
+ * can be synchronously freed inside ->release when the list can't
+ * change anymore and nobody could possibly walk it.
+ */
+extern void mmu_notifier_unregister(struct mmu_notifier *mn,
+ struct mm_struct *mm);
+extern void mmu_notifier_release(struct mm_struct *mm);
+extern int mmu_notifier_age_page(struct mm_struct *mm,
+ unsigned long address);
+
+static inline void mmu_notifier_head_init(struct mmu_notifier_head *mnh)
+{
+ INIT_HLIST_HEAD(&mnh->head);
+ spin_lock_init(&mnh->lock);
+}
+
+#define mmu_notifier(function, mm, args...) \
+ do { \
+ struct mmu_notifier *__mn; \
+ struct hlist_node *__n; \
+ \
+ if (unlikely(!hlist_empty(&(mm)->mmu_notifier.head))) { \
+ rcu_read_lock(); \
+ hlist_for_each_entry_rcu(__mn, __n, \
+ &(mm)->mmu_notifier.head, \
+ hlist) \
+ if (__mn->ops->function) \
+ __mn->ops->function(__mn, \
+ mm, \
+ args); \
+ rcu_read_unlock(); \
+ } \
+ } while (0)
+
+#else /* CONFIG_MMU_NOTIFIER */
+
+struct mmu_notifier_head {};
+
+#define mmu_notifier_register(mn, mm) do {} while(0)
+#define mmu_notifier_unregister(mn, mm) do {} while (0)
+#define mmu_notifier_release(mm) do {} while (0)
+#define mmu_notifier_age_page(mm, address) ({ 0; })
+#define mmu_notifier_head_init(mmh) do {} while (0)
+
+/*
+ * Notifiers that use the parameters that they were passed so that the
+ * compiler does not complain about unused variables but does proper
+ * parameter checks even if !CONFIG_MMU_NOTIFIER.
+ * Macros generate no code.
+ */
+#define mmu_notifier(function, mm, args...) \
+ do { \
+ if (0) { \
+ struct mmu_notifier *__mn; \
+ \
+ __mn = (struct mmu_notifier *)(0x00ff); \
+ __mn->ops->function(__mn, mm, args); \
+ }; \
+ } while (0)
+
+#endif /* CONFIG_MMU_NOTIFIER */
+
+#endif /* _LINUX_MMU_NOTIFIER_H */
diff --git a/kernel/fork.c b/kernel/fork.c
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -360,6 +360,7 @@ static struct mm_struct * mm_init(struct

if (likely(!mm_alloc_pgd(mm))) {
mm->def_flags = 0;
+ mmu_notifier_head_init(&mm->mmu_notifier);
return mm;
}
free_mm(mm);
diff --git a/mm/Kconfig b/mm/Kconfig
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -193,3 +193,7 @@ config VIRT_TO_BUS
config VIRT_TO_BUS
def_bool y
depends on !ARCH_NO_VIRT_TO_BUS
+
+config MMU_NOTIFIER
+ def_bool y
+ bool "MMU notifier, for paging KVM/RDMA"
diff --git a/mm/Makefile b/mm/Makefile
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -30,4 +30,5 @@ obj-$(CONFIG_MIGRATION) += migrate.o
obj-$(CONFIG_MIGRATION) += migrate.o
obj-$(CONFIG_SMP) += allocpercpu.o
obj-$(CONFIG_QUICKLIST) += quicklist.o
+obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -756,6 +756,7 @@ void __unmap_hugepage_range(struct vm_ar
if (pte_none(pte))
continue;

+ mmu_notifier(invalidate_page, mm, address);
page = pte_page(pte);
if (pte_dirty(pte))
set_page_dirty(page);
diff --git a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -494,6 +494,7 @@ static int copy_pte_range(struct mm_stru
spinlock_t *src_ptl, *dst_ptl;
int progress = 0;
int rss[2];
+ unsigned long start;

again:
rss[1] = rss[0] = 0;
@@ -505,6 +506,7 @@ again:
spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
arch_enter_lazy_mmu_mode();

+ start = addr;
do {
/*
* We are holding two locks at this point - either of them
@@ -525,6 +527,8 @@ again:
} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);

arch_leave_lazy_mmu_mode();
+ if (is_cow_mapping(vma->vm_flags))
+ mmu_notifier(invalidate_pages, vma->vm_mm, start, addr);
spin_unlock(src_ptl);
pte_unmap_nested(src_pte - 1);
add_mm_rss(dst_mm, rss[0], rss[1]);
@@ -660,6 +664,7 @@ static unsigned long zap_pte_range(struc
}
ptent = ptep_get_and_clear_full(mm, addr, pte,
tlb->fullmm);
+ mmu_notifier(invalidate_page, mm, addr);
tlb_remove_tlb_entry(tlb, pte, addr);
if (unlikely(!page))
continue;
@@ -1248,6 +1253,7 @@ static int remap_pte_range(struct mm_str
{
pte_t *pte;
spinlock_t *ptl;
+ unsigned long start = addr;

pte = pte_alloc_map_lock(mm, pmd, addr, &ptl);
if (!pte)
@@ -1259,6 +1265,7 @@ static int remap_pte_range(struct mm_str
pfn++;
} while (pte++, addr += PAGE_SIZE, addr != end);
arch_leave_lazy_mmu_mode();
+ mmu_notifier(invalidate_pages, mm, start, addr);
pte_unmap_unlock(pte - 1, ptl);
return 0;
}
diff --git a/mm/mmap.c b/mm/mmap.c
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2044,6 +2044,7 @@ void exit_mmap(struct mm_struct *mm)
vm_unacct_memory(nr_accounted);
free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);
tlb_finish_mmu(tlb, 0, end);
+ mmu_notifier_release(mm);

/*
* Walk the list again, actually closing and freeing it,
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
new file mode 100644
--- /dev/null
+++ b/mm/mmu_notifier.c
@@ -0,0 +1,73 @@
+/*
+ * linux/mm/mmu_notifier.c
+ *
+ * Copyright (C) 2008 Qumranet, Inc.
+ * Copyright (C) 2008 SGI
+ * Christoph Lameter <[email protected]>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2. See
+ * the COPYING file in the top-level directory.
+ */
+
+#include <linux/mmu_notifier.h>
+#include <linux/module.h>
+#include <linux/rcupdate.h>
+
+/*
+ * No synchronization. This function can only be called when only a single
+ * process remains that performs teardown.
+ */
+void mmu_notifier_release(struct mm_struct *mm)
+{
+ struct mmu_notifier *mn;
+ struct hlist_node *n, *tmp;
+
+ if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
+ hlist_for_each_entry_safe(mn, n, tmp,
+ &mm->mmu_notifier.head, hlist) {
+ hlist_del(&mn->hlist);
+ if (mn->ops->release)
+ mn->ops->release(mn, mm);
+ }
+ }
+}
+
+/*
+ * If no young bitflag is supported by the hardware, ->age_page can
+ * unmap the address and return 1 or 0 depending if the mapping previously
+ * existed or not.
+ */
+int mmu_notifier_age_page(struct mm_struct *mm, unsigned long address)
+{
+ struct mmu_notifier *mn;
+ struct hlist_node *n;
+ int young = 0;
+
+ if (unlikely(!hlist_empty(&mm->mmu_notifier.head))) {
+ rcu_read_lock();
+ hlist_for_each_entry_rcu(mn, n,
+ &mm->mmu_notifier.head, hlist) {
+ if (mn->ops->age_page)
+ young |= mn->ops->age_page(mn, mm, address);
+ }
+ rcu_read_unlock();
+ }
+
+ return young;
+}
+
+void mmu_notifier_register(struct mmu_notifier *mn, struct mm_struct *mm)
+{
+ spin_lock(&mm->mmu_notifier.lock);
+ hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier.head);
+ spin_unlock(&mm->mmu_notifier.lock);
+}
+EXPORT_SYMBOL_GPL(mmu_notifier_register);
+
+void mmu_notifier_unregister(struct mmu_notifier *mn, struct mm_struct *mm)
+{
+ spin_lock(&mm->mmu_notifier.lock);
+ hlist_del_rcu(&mn->hlist);
+ spin_unlock(&mm->mmu_notifier.lock);
+}
+EXPORT_SYMBOL_GPL(mmu_notifier_unregister);
diff --git a/mm/mprotect.c b/mm/mprotect.c
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -32,6 +32,7 @@ static void change_pte_range(struct mm_s
{
pte_t *pte, oldpte;
spinlock_t *ptl;
+ unsigned long start = addr;

pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
arch_enter_lazy_mmu_mode();
@@ -71,6 +72,7 @@ static void change_pte_range(struct mm_s

} while (pte++, addr += PAGE_SIZE, addr != end);
arch_leave_lazy_mmu_mode();
+ mmu_notifier(invalidate_pages, mm, start, addr);
pte_unmap_unlock(pte - 1, ptl);
}
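
For reference, a minimal hypothetical driver-side use of the API in this
patch: embed a struct mmu_notifier, point it at an ops table and register it
against current->mm (the demo_drv_* names are invented). Unregistration and
freeing must follow the RCU rules documented in the header above.

#include <linux/mmu_notifier.h>
#include <linux/sched.h>

static void demo_drv_release(struct mmu_notifier *mn, struct mm_struct *mm)
{
	/* the mm is going away; nothing left to tear down in this sketch */
}

static void demo_drv_invalidate_page(struct mmu_notifier *mn,
				     struct mm_struct *mm,
				     unsigned long address)
{
	/* drop whatever secondary-MMU mapping the driver holds for 'address' */
}

static const struct mmu_notifier_ops demo_drv_ops = {
	.release		= demo_drv_release,
	.invalidate_page	= demo_drv_invalidate_page,
};

static struct mmu_notifier demo_drv_mn = {
	.ops = &demo_drv_ops,
};

static int demo_drv_attach(void)
{
	/* called from the driver's open/attach path with current->mm valid */
	mmu_notifier_register(&demo_drv_mn, current->mm);
	return 0;
}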

2008-02-17 05:06:30

by Doug Maxey

[permalink] [raw]
Subject: Re: [patch 1/6] mmu_notifier: Core code


On Fri, 15 Feb 2008 19:37:19 PST, Andrew Morton wrote:
> Which other potential clients have been identified and how important is it
> to those?

The powerpc ehea utilizes its own mmu. Not sure about the importance
to the driver. (But will investigate :)

++doug

2008-02-17 12:32:47

by Robin Holt

[permalink] [raw]
Subject: Re: [patch 1/6] mmu_notifier: Core code

On Sun, Feb 17, 2008 at 04:01:20AM +0100, Andrea Arcangeli wrote:
> On Sat, Feb 16, 2008 at 11:21:07AM -0800, Christoph Lameter wrote:
> > On Fri, 15 Feb 2008, Andrew Morton wrote:
> >
> > > What is the status of getting infiniband to use this facility?
> >
> > Well we are talking about this it seems.
>
> It seems the IB folks think allowing RDMA over virtual memory is not
> interesting; their argument seems to be that RDMA is only interesting
> on RAM (and they seem not interested in allowing RDMA over a ram+swap
> backed _virtual_ memory allocation). They just have to decide whether
> ram+swap allocation for RDMA is useful or not.

I don't think that is a completely fair characterization. It would be
more fair to say that the changes required to their library/user api
would be too significant to allow an adaptation to any scheme which
allowed removal of physical memory below a virtual mapping.

I agree with the IB folks when they say it is impossible with their
current scheme. The fact that any consumer of their endpoint identifier
can use any identifier without notifying the kernel prior to its use
certainly makes any implementation under any scheme impossible.

I guess we could possibly make things work for IB if we did some heavy
work. Let's assume, instead of passing around the physical endpoint
identifiers, they passed around a handle. In order for any IB endpoint
to communicate, it would need to request that the kernel translate a handle
into an endpoint identifier. In order for the kernel to put a TLB
entry into the process's address space allowing the process access to
the _CARD_, it would need to ensure all the current endpoint identifiers
for this process were "active", meaning we have verified with the other
endpoint that all pages are faulted and TLB/PFN information is in the
owning card's TLB/PFN tables. Once all of a process's endpoints are
"active" we would drop the PFN for the adapter into the page tables.
Any time pages are being revoked from under an active handle, we would
shoot down the IB adapter card TLB entries for all the remote users of
this handle and quiesce the card's state to ensure transfers are either
complete or terminated. When there are no active transfers, we would
respond back to the owner and they could complete the source process
page table cleaning. Any time all of the pages for a handle cannot be
mapped from virtual to physical, the remote process would be SIGBUS'd
instead of having its IB adapter TLB entry installed.
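
Purely as an illustration of that handle scheme (none of these names exist
anywhere today), the kernel-side bookkeeping might look roughly like this:

#include <linux/types.h>
#include <linux/list.h>
#include <linux/errno.h>

/* One entry per handle handed out in place of a raw endpoint identifier. */
struct rdma_handle {
	u64		token;		/* opaque value passed between peers */
	u32		endpoint;	/* the real endpoint identifier */
	bool		active;		/* remote TLB/PFN tables populated? */
	struct list_head ranges;	/* virtual ranges in use under it */
};

/* The kernel would only reveal the endpoint once the handle is "active";
 * otherwise the caller has to fault/populate first (or get SIGBUS'd). */
static int rdma_handle_to_endpoint(struct rdma_handle *h, u32 *endpoint)
{
	if (!h->active)
		return -EAGAIN;
	*endpoint = h->endpoint;
	return 0;
}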

This is essentially how XPMEM does it except we have the benefit of
working on individual pages.

Again, not knowing what I am talking about, but under the assumption that
MPI IB use is contained to a library, I would hope the changes could be
contained under the MPI-to-IB library interface and would not need any
changes at the MPI-user library interface.

We do keep track of the virtual address ranges within a handle that
are being used. I assume the IB folks will find that helpful as well.
Otherwise, I think they could make things operate this way. XPMEM has
the advantage of not needing to have virtual-to-physical at all times,
but otherwise it is essentially the same.

Thanks,
Robin

2008-02-18 22:33:55

by Roland Dreier

[permalink] [raw]
Subject: Re: [patch 1/6] mmu_notifier: Core code

It seems that we've come up with two reasonable cases where it makes
sense to use these notifiers for InfiniBand/RDMA:

First, the ability to safely DMA to/from userspace memory with the
memory regions mlock()ed but the pages not pinned. In this case the
notifiers here would seem to suit us well:

> + void (*invalidate_range_begin)(struct mmu_notifier *mn,
> + struct mm_struct *mm,
> + unsigned long start, unsigned long end,
> + int atomic);
> +
> + void (*invalidate_range_end)(struct mmu_notifier *mn,
> + struct mm_struct *mm,
> + unsigned long start, unsigned long end,
> + int atomic);

If I understand correctly, the IB stack would have to get the hardware
driver to shoot down translation entries and suspend access to the
region when an invalidate_range_begin notifier is called, and wait for
the invalidate_range_end notifier to repopulate the adapter
translation tables. This will probably work OK as long as the
interval between the invalidate_range_begin and invalidate_range_end
calls is not "too long."
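
A rough sketch of that usage, assuming the invalidate_range_begin/end
signatures quoted above; the hca_* helpers are entirely hypothetical
placeholders for the hardware-driver work described here:

#include <linux/mmu_notifier.h>

/* Hypothetical hardware hooks: a real driver would talk to its HCA here. */
static void hca_suspend_and_shootdown(struct mmu_notifier *mn,
				      unsigned long start, unsigned long end)
{
	/* block new work requests and invalidate the adapter's
	 * translation entries covering [start, end) */
}

static void hca_repopulate_and_resume(struct mmu_notifier *mn,
				      struct mm_struct *mm,
				      unsigned long start, unsigned long end)
{
	/* re-fault the pages, reload the translation tables and let
	 * queued work requests proceed again */
}

static void ib_demo_invalidate_range_begin(struct mmu_notifier *mn,
					   struct mm_struct *mm,
					   unsigned long start,
					   unsigned long end,
					   int atomic)
{
	hca_suspend_and_shootdown(mn, start, end);
}

static void ib_demo_invalidate_range_end(struct mmu_notifier *mn,
					 struct mm_struct *mm,
					 unsigned long start,
					 unsigned long end,
					 int atomic)
{
	hca_repopulate_and_resume(mn, mm, start, end);
}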

Also, using this effectively requires us to figure out how we want to
mlock() regions that are going to be used for RDMA. We could require
userspace to do it, but it's not clear to me that we're safe in the
case where userspace decides not to... what happens if some pages get
swapped out after the invalidate_range_begin notifier?

The second case where some form of notifiers are useful is for
userspace to know when a memory registration is still valid, ie Pete
Wyckoff's work:

http://www.osc.edu/~pw/papers/wyckoff-memreg-ccgrid05.pdf
http://www.osc.edu/~pw/dreg/

however these MMU notifiers seem orthogonal to that: the registration
cache is concerned with address spaces, not page mapping, and hence
the existing vma operations seem to be a better fit.

- R.