2014-06-28 02:00:48

by Jerome Glisse

Subject: mm preparatory patches for HMM and IOMMUv2

Andrew, here is a set of mm patches that make some groundwork modifications
to core mm code. They apply on top of today's linux-next and pass
checkpatch.pl with flying colors (except patch 4, but I did not want to be
overly strict about the 80 character line limit).

Patch 1 is the mmput notifier call chain we discussed with AMD.

Patches 2, 3 and 4 are so far only useful to HMM, but I am discussing with AMD
and I believe they will be useful to them too (in the context of IOMMUv2).

Patch 2 makes it possible to differentiate a page unmap done for vmscan
reasons from one done for poisoning.

Patch 3 associates each mmu_notifier call with an event type, allowing a
listener to take different code paths inside its mmu_notifier callbacks
depending on what is currently happening to the cpu page table. There is no
functional change; it just adds a new argument to the various mmu_notifier
calls and callbacks.

Patch 4 passes along the vma in which the range invalidation is happening.
There are a few functional changes in places where
mmu_notifier_invalidate_range_start/end used [0, -1] as the range; those
places now call the notifier once for each vma, as sketched below. This might
prove to add unwanted overhead, which is why I did it as a separate patch.
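
For illustration only (a rough sketch, not code from the patches, and
assuming the invalidate wrappers gain the same vma argument as the callbacks
do in patch 4), a call site that used [0, -1] ends up doing something along
these lines, with the page table update in the middle standing in for
whatever the site actually does:

    down_read(&mm->mmap_sem);
    for (vma = mm->mmap; vma; vma = vma->vm_next) {
        mmu_notifier_invalidate_range_start(mm, vma, vma->vm_start,
                                            vma->vm_end, MMU_STATUS);
        /* ... update the cpu page table for this vma ... */
        mmu_notifier_invalidate_range_end(mm, vma, vma->vm_start,
                                          vma->vm_end, MMU_STATUS);
    }
    up_read(&mm->mmap_sem);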

I did not include the core HMM patches, but I intend to send a v4 next week.
So I really would like to see these included in the next release.

As usual, comments welcome.

Cheers,
Jérôme Glisse


2014-06-28 02:00:59

by Jerome Glisse

Subject: [PATCH 1/6] mmput: use notifier chain to call subsystem exit handler.

From: Jérôme Glisse <[email protected]>

Several subsystems require a callback when an mm struct is being destroyed
so that they can clean up their respective per-mm state. Instead of
having each subsystem add its callback to mmput, use a notifier chain
to call each of the subsystems.

This will allow new subsystems to register callbacks even if they are
modules. There should be no contention on the rw semaphore protecting
the call chain, and the impact on the code path should be low and
buried in the noise.

Note that this patch also moves the call to the cleanup functions after
exit_mmap, so that a new callback can assume that mmu_notifier_release
has already been called. This does not impact existing cleanup functions,
as they do not rely on anything that exit_mmap is freeing. khugepaged_exit
is also moved into exit_mmap so that ordering is preserved for that
function.
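
To illustrate the intended usage (a minimal sketch, not part of the patch;
the my_subsys_* names are hypothetical), a subsystem, including one built as
a module, would hook into the chain like this:

    static void my_subsys_cleanup(struct mm_struct *mm)
    {
            /* free whatever per-mm state this subsystem keeps */
    }

    static int my_subsys_mm_exit(struct notifier_block *nb,
                                 unsigned long action, void *data)
    {
            struct mm_struct *mm = data;

            /* exit_mmap and mmu_notifier_release have already run here. */
            my_subsys_cleanup(mm);
            return 0;
    }

    static struct notifier_block my_subsys_mmput_nb = {
            .notifier_call = my_subsys_mm_exit,
    };

    static int __init my_subsys_init(void)
    {
            return mmput_register_notifier(&my_subsys_mmput_nb);
    }

A module would additionally call mmput_unregister_notifier() from its exit
path.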

Signed-off-by: Jérôme Glisse <[email protected]>
---
fs/aio.c | 29 ++++++++++++++++++++++-------
include/linux/aio.h | 2 --
include/linux/ksm.h | 11 -----------
include/linux/sched.h | 5 +++++
include/linux/uprobes.h | 1 -
kernel/events/uprobes.c | 19 ++++++++++++++++---
kernel/fork.c | 22 ++++++++++++++++++----
mm/ksm.c | 26 +++++++++++++++++++++-----
mm/mmap.c | 3 +++
9 files changed, 85 insertions(+), 33 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index c1d8c48..1d06e92 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -40,6 +40,7 @@
#include <linux/ramfs.h>
#include <linux/percpu-refcount.h>
#include <linux/mount.h>
+#include <linux/notifier.h>

#include <asm/kmap_types.h>
#include <asm/uaccess.h>
@@ -774,20 +775,22 @@ ssize_t wait_on_sync_kiocb(struct kiocb *req)
EXPORT_SYMBOL(wait_on_sync_kiocb);

/*
- * exit_aio: called when the last user of mm goes away. At this point, there is
+ * aio_exit: called when the last user of mm goes away. At this point, there is
* no way for any new requests to be submited or any of the io_* syscalls to be
* called on the context.
*
* There may be outstanding kiocbs, but free_ioctx() will explicitly wait on
* them.
*/
-void exit_aio(struct mm_struct *mm)
+static int aio_exit(struct notifier_block *nb,
+ unsigned long action, void *data)
{
+ struct mm_struct *mm = data;
struct kioctx_table *table = rcu_dereference_raw(mm->ioctx_table);
int i;

if (!table)
- return;
+ return 0;

for (i = 0; i < table->nr; ++i) {
struct kioctx *ctx = table->table[i];
@@ -796,10 +799,10 @@ void exit_aio(struct mm_struct *mm)
continue;
/*
* We don't need to bother with munmap() here - exit_mmap(mm)
- * is coming and it'll unmap everything. And we simply can't,
- * this is not necessarily our ->mm.
- * Since kill_ioctx() uses non-zero ->mmap_size as indicator
- * that it needs to unmap the area, just set it to 0.
+ * has already been called and everything is unmapped by now. But
+ * to be safe set ->mmap_size to 0, since aio_free_ring() uses a
+ * non-zero ->mmap_size as an indicator that it needs to unmap the
+ * area.
*/
ctx->mmap_size = 0;
kill_ioctx(mm, ctx, NULL);
@@ -807,6 +810,7 @@ void exit_aio(struct mm_struct *mm)

RCU_INIT_POINTER(mm->ioctx_table, NULL);
kfree(table);
+ return 0;
}

static void put_reqs_available(struct kioctx *ctx, unsigned nr)
@@ -1629,3 +1633,14 @@ SYSCALL_DEFINE5(io_getevents, aio_context_t, ctx_id,
}
return ret;
}
+
+static struct notifier_block aio_mmput_nb = {
+ .notifier_call = aio_exit,
+ .priority = 1,
+};
+
+static int __init aio_init(void)
+{
+ return mmput_register_notifier(&aio_mmput_nb);
+}
+subsys_initcall(aio_init);
diff --git a/include/linux/aio.h b/include/linux/aio.h
index d9c92da..6308fac 100644
--- a/include/linux/aio.h
+++ b/include/linux/aio.h
@@ -73,7 +73,6 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
extern ssize_t wait_on_sync_kiocb(struct kiocb *iocb);
extern void aio_complete(struct kiocb *iocb, long res, long res2);
struct mm_struct;
-extern void exit_aio(struct mm_struct *mm);
extern long do_io_submit(aio_context_t ctx_id, long nr,
struct iocb __user *__user *iocbpp, bool compat);
void kiocb_set_cancel_fn(struct kiocb *req, kiocb_cancel_fn *cancel);
@@ -81,7 +80,6 @@ void kiocb_set_cancel_fn(struct kiocb *req, kiocb_cancel_fn *cancel);
static inline ssize_t wait_on_sync_kiocb(struct kiocb *iocb) { return 0; }
static inline void aio_complete(struct kiocb *iocb, long res, long res2) { }
struct mm_struct;
-static inline void exit_aio(struct mm_struct *mm) { }
static inline long do_io_submit(aio_context_t ctx_id, long nr,
struct iocb __user * __user *iocbpp,
bool compat) { return 0; }
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 3be6bb1..84c184f 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -20,7 +20,6 @@ struct mem_cgroup;
int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
unsigned long end, int advice, unsigned long *vm_flags);
int __ksm_enter(struct mm_struct *mm);
-void __ksm_exit(struct mm_struct *mm);

static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
{
@@ -29,12 +28,6 @@ static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
return 0;
}

-static inline void ksm_exit(struct mm_struct *mm)
-{
- if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
- __ksm_exit(mm);
-}
-
/*
* A KSM page is one of those write-protected "shared pages" or "merged pages"
* which KSM maps into multiple mms, wherever identical anonymous page content
@@ -83,10 +76,6 @@ static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
return 0;
}

-static inline void ksm_exit(struct mm_struct *mm)
-{
-}
-
static inline int PageKsm(struct page *page)
{
return 0;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 322d4fc..428b3cf 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2384,6 +2384,11 @@ static inline void mmdrop(struct mm_struct * mm)
__mmdrop(mm);
}

+/* mmput calls a list of notifiers; subsystems/modules can register
+ * new ones through this call.
+ */
+extern int mmput_register_notifier(struct notifier_block *nb);
+extern int mmput_unregister_notifier(struct notifier_block *nb);
/* mmput gets rid of the mappings and all user-space */
extern void mmput(struct mm_struct *);
/* Grab a reference to a task's mm, if it is not already going away */
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index 4f844c6..44e7267 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -120,7 +120,6 @@ extern int uprobe_pre_sstep_notifier(struct pt_regs *regs);
extern void uprobe_notify_resume(struct pt_regs *regs);
extern bool uprobe_deny_signal(void);
extern bool arch_uprobe_skip_sstep(struct arch_uprobe *aup, struct pt_regs *regs);
-extern void uprobe_clear_state(struct mm_struct *mm);
extern int arch_uprobe_analyze_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long addr);
extern int arch_uprobe_pre_xol(struct arch_uprobe *aup, struct pt_regs *regs);
extern int arch_uprobe_post_xol(struct arch_uprobe *aup, struct pt_regs *regs);
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 46b7c31..32b04dc 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -37,6 +37,7 @@
#include <linux/percpu-rwsem.h>
#include <linux/task_work.h>
#include <linux/shmem_fs.h>
+#include <linux/notifier.h>

#include <linux/uprobes.h>

@@ -1220,16 +1221,19 @@ static struct xol_area *get_xol_area(void)
/*
* uprobe_clear_state - Free the area allocated for slots.
*/
-void uprobe_clear_state(struct mm_struct *mm)
+static int uprobe_clear_state(struct notifier_block *nb,
+ unsigned long action, void *data)
{
+ struct mm_struct *mm = data;
struct xol_area *area = mm->uprobes_state.xol_area;

if (!area)
- return;
+ return 0;

put_page(area->page);
kfree(area->bitmap);
kfree(area);
+ return 0;
}

void uprobe_start_dup_mmap(void)
@@ -1979,9 +1983,14 @@ static struct notifier_block uprobe_exception_nb = {
.priority = INT_MAX-1, /* notified after kprobes, kgdb */
};

+static struct notifier_block uprobe_mmput_nb = {
+ .notifier_call = uprobe_clear_state,
+ .priority = 0,
+};
+
static int __init init_uprobes(void)
{
- int i;
+ int i, err;

for (i = 0; i < UPROBES_HASH_SZ; i++)
mutex_init(&uprobes_mmap_mutex[i]);
@@ -1989,6 +1998,10 @@ static int __init init_uprobes(void)
if (percpu_init_rwsem(&dup_mmap_sem))
return -ENOMEM;

+ err = mmput_register_notifier(&uprobe_mmput_nb);
+ if (err)
+ return err;
+
return register_die_notifier(&uprobe_exception_nb);
}
__initcall(init_uprobes);
diff --git a/kernel/fork.c b/kernel/fork.c
index dd8864f..b448509 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -87,6 +87,8 @@
#define CREATE_TRACE_POINTS
#include <trace/events/task.h>

+static BLOCKING_NOTIFIER_HEAD(mmput_notifier);
+
/*
* Protected counters by write_lock_irq(&tasklist_lock)
*/
@@ -623,6 +625,21 @@ void __mmdrop(struct mm_struct *mm)
EXPORT_SYMBOL_GPL(__mmdrop);

/*
+ * Register a notifier that will be called by mmput
+ */
+int mmput_register_notifier(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_register(&mmput_notifier, nb);
+}
+EXPORT_SYMBOL_GPL(mmput_register_notifier);
+
+int mmput_unregister_notifier(struct notifier_block *nb)
+{
+ return blocking_notifier_chain_unregister(&mmput_notifier, nb);
+}
+EXPORT_SYMBOL_GPL(mmput_unregister_notifier);
+
+/*
* Decrement the use count and release all resources for an mm.
*/
void mmput(struct mm_struct *mm)
@@ -630,11 +647,8 @@ void mmput(struct mm_struct *mm)
might_sleep();

if (atomic_dec_and_test(&mm->mm_users)) {
- uprobe_clear_state(mm);
- exit_aio(mm);
- ksm_exit(mm);
- khugepaged_exit(mm); /* must run before exit_mmap */
exit_mmap(mm);
+ blocking_notifier_call_chain(&mmput_notifier, 0, mm);
set_mm_exe_file(mm, NULL);
if (!list_empty(&mm->mmlist)) {
spin_lock(&mmlist_lock);
diff --git a/mm/ksm.c b/mm/ksm.c
index 346ddc9..cb1e976 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -37,6 +37,7 @@
#include <linux/freezer.h>
#include <linux/oom.h>
#include <linux/numa.h>
+#include <linux/notifier.h>

#include <asm/tlbflush.h>
#include "internal.h"
@@ -1586,7 +1587,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
ksm_scan.mm_slot = slot;
spin_unlock(&ksm_mmlist_lock);
/*
- * Although we tested list_empty() above, a racing __ksm_exit
+ * Although we tested list_empty() above, a racing ksm_exit
* of the last mm on the list may have removed it since then.
*/
if (slot == &ksm_mm_head)
@@ -1658,9 +1659,9 @@ next_mm:
/*
* We've completed a full scan of all vmas, holding mmap_sem
* throughout, and found no VM_MERGEABLE: so do the same as
- * __ksm_exit does to remove this mm from all our lists now.
- * This applies either when cleaning up after __ksm_exit
- * (but beware: we can reach here even before __ksm_exit),
+ * ksm_exit does to remove this mm from all our lists now.
+ * This applies either when cleaning up after ksm_exit
+ * (but beware: we can reach here even before ksm_exit),
* or when all VM_MERGEABLE areas have been unmapped (and
* mmap_sem then protects against race with MADV_MERGEABLE).
*/
@@ -1821,11 +1822,16 @@ int __ksm_enter(struct mm_struct *mm)
return 0;
}

-void __ksm_exit(struct mm_struct *mm)
+static int ksm_exit(struct notifier_block *nb,
+ unsigned long action, void *data)
{
+ struct mm_struct *mm = data;
struct mm_slot *mm_slot;
int easy_to_free = 0;

+ if (!test_bit(MMF_VM_MERGEABLE, &mm->flags))
+ return 0;
+
/*
* This process is exiting: if it's straightforward (as is the
* case when ksmd was never running), free mm_slot immediately.
@@ -1857,6 +1863,7 @@ void __ksm_exit(struct mm_struct *mm)
down_write(&mm->mmap_sem);
up_write(&mm->mmap_sem);
}
+ return 0;
}

struct page *ksm_might_need_to_copy(struct page *page,
@@ -2305,11 +2312,20 @@ static struct attribute_group ksm_attr_group = {
};
#endif /* CONFIG_SYSFS */

+static struct notifier_block ksm_mmput_nb = {
+ .notifier_call = ksm_exit,
+ .priority = 2,
+};
+
static int __init ksm_init(void)
{
struct task_struct *ksm_thread;
int err;

+ err = mmput_register_notifier(&ksm_mmput_nb);
+ if (err)
+ return err;
+
err = ksm_slab_init();
if (err)
goto out;
diff --git a/mm/mmap.c b/mm/mmap.c
index 61aec93..b684a21 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2775,6 +2775,9 @@ void exit_mmap(struct mm_struct *mm)
struct vm_area_struct *vma;
unsigned long nr_accounted = 0;

+ /* Important to call this first. */
+ khugepaged_exit(mm);
+
/* mm's last user has gone, and its about to be pulled down */
mmu_notifier_release(mm);

--
1.9.0

2014-06-28 02:01:09

by Jerome Glisse

Subject: [PATCH 3/6] mmu_notifier: add event information to address invalidation v2

From: Jérôme Glisse <[email protected]>

The event information will be useful for new users of the mmu_notifier API.
The event argument differentiates between a vma disappearing, a page
being write protected, or simply a page being unmapped. This allows a new
user to take a different path for each event: for instance, on unmap the
resources used to track a vma are still valid and should stay around,
while if the event says that a vma is being destroyed, any resources used
to track that vma can be freed.

Changed since v1:
- renamed action to event (updated commit message too).
- simplified the event names and clarified their intended usage,
also documenting what expectations the listener can have with
respect to each event.
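
As an illustration of what a listener can now do (a sketch only, not part of
the patch; the my_* helpers are hypothetical), a secondary page table driver
might dispatch on the event instead of always doing a full invalidation:

    static void my_invalidate_range_start(struct mmu_notifier *mn,
                                          struct mm_struct *mm,
                                          unsigned long start,
                                          unsigned long end,
                                          enum mmu_event event)
    {
            switch (event) {
            case MMU_MUNMAP:
                    /* Range is gone for good: tear down tracking too. */
                    my_free_range_tracking(mn, start, end);
                    break;
            case MMU_WB:
                    /* Writeback: only writes must stop, reads stay valid. */
                    my_write_protect_range(mn, start, end);
                    break;
            default:
                    /* MMU_MIGRATE and anything else: full invalidation. */
                    my_invalidate_range(mn, start, end);
                    break;
            }
    }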

Signed-off-by: Jérôme Glisse <[email protected]>
---
drivers/gpu/drm/i915/i915_gem_userptr.c | 3 +-
drivers/iommu/amd_iommu_v2.c | 14 ++--
drivers/misc/sgi-gru/grutlbpurge.c | 9 ++-
drivers/xen/gntdev.c | 9 ++-
fs/proc/task_mmu.c | 6 +-
include/linux/hugetlb.h | 7 +-
include/linux/mmu_notifier.h | 117 ++++++++++++++++++++++++++------
kernel/events/uprobes.c | 10 ++-
mm/filemap_xip.c | 2 +-
mm/huge_memory.c | 51 ++++++++------
mm/hugetlb.c | 25 ++++---
mm/ksm.c | 18 +++--
mm/memory.c | 27 +++++---
mm/migrate.c | 9 ++-
mm/mmu_notifier.c | 28 +++++---
mm/mprotect.c | 33 ++++++---
mm/mremap.c | 6 +-
mm/rmap.c | 24 +++++--
virt/kvm/kvm_main.c | 12 ++--
19 files changed, 291 insertions(+), 119 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 21ea928..ed6f35e 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -56,7 +56,8 @@ struct i915_mmu_object {
static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
struct mm_struct *mm,
unsigned long start,
- unsigned long end)
+ unsigned long end,
+ enum mmu_event event)
{
struct i915_mmu_notifier *mn = container_of(_mn, struct i915_mmu_notifier, mn);
struct interval_tree_node *it = NULL;
diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c
index 499b436..2bb9771 100644
--- a/drivers/iommu/amd_iommu_v2.c
+++ b/drivers/iommu/amd_iommu_v2.c
@@ -414,21 +414,25 @@ static int mn_clear_flush_young(struct mmu_notifier *mn,
static void mn_change_pte(struct mmu_notifier *mn,
struct mm_struct *mm,
unsigned long address,
- pte_t pte)
+ pte_t pte,
+ enum mmu_event event)
{
__mn_flush_page(mn, address);
}

static void mn_invalidate_page(struct mmu_notifier *mn,
struct mm_struct *mm,
- unsigned long address)
+ unsigned long address,
+ enum mmu_event event)
{
__mn_flush_page(mn, address);
}

static void mn_invalidate_range_start(struct mmu_notifier *mn,
struct mm_struct *mm,
- unsigned long start, unsigned long end)
+ unsigned long start,
+ unsigned long end,
+ enum mmu_event event)
{
struct pasid_state *pasid_state;
struct device_state *dev_state;
@@ -449,7 +453,9 @@ static void mn_invalidate_range_start(struct mmu_notifier *mn,

static void mn_invalidate_range_end(struct mmu_notifier *mn,
struct mm_struct *mm,
- unsigned long start, unsigned long end)
+ unsigned long start,
+ unsigned long end,
+ enum mmu_event event)
{
struct pasid_state *pasid_state;
struct device_state *dev_state;
diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c
index 2129274..e67fed1 100644
--- a/drivers/misc/sgi-gru/grutlbpurge.c
+++ b/drivers/misc/sgi-gru/grutlbpurge.c
@@ -221,7 +221,8 @@ void gru_flush_all_tlb(struct gru_state *gru)
*/
static void gru_invalidate_range_start(struct mmu_notifier *mn,
struct mm_struct *mm,
- unsigned long start, unsigned long end)
+ unsigned long start, unsigned long end,
+ enum mmu_event event)
{
struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct,
ms_notifier);
@@ -235,7 +236,8 @@ static void gru_invalidate_range_start(struct mmu_notifier *mn,

static void gru_invalidate_range_end(struct mmu_notifier *mn,
struct mm_struct *mm, unsigned long start,
- unsigned long end)
+ unsigned long end,
+ enum mmu_event event)
{
struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct,
ms_notifier);
@@ -248,7 +250,8 @@ static void gru_invalidate_range_end(struct mmu_notifier *mn,
}

static void gru_invalidate_page(struct mmu_notifier *mn, struct mm_struct *mm,
- unsigned long address)
+ unsigned long address,
+ enum mmu_event event)
{
struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct,
ms_notifier);
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 073b4a1..fe9da94 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -428,7 +428,9 @@ static void unmap_if_in_range(struct grant_map *map,

static void mn_invl_range_start(struct mmu_notifier *mn,
struct mm_struct *mm,
- unsigned long start, unsigned long end)
+ unsigned long start,
+ unsigned long end,
+ enum mmu_event event)
{
struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn);
struct grant_map *map;
@@ -445,9 +447,10 @@ static void mn_invl_range_start(struct mmu_notifier *mn,

static void mn_invl_page(struct mmu_notifier *mn,
struct mm_struct *mm,
- unsigned long address)
+ unsigned long address,
+ enum mmu_event event)
{
- mn_invl_range_start(mn, mm, address, address + PAGE_SIZE);
+ mn_invl_range_start(mn, mm, address, address + PAGE_SIZE, event);
}

static void mn_release(struct mmu_notifier *mn,
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index cfa63ee..e9e79f7 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -830,7 +830,8 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
};
down_read(&mm->mmap_sem);
if (type == CLEAR_REFS_SOFT_DIRTY)
- mmu_notifier_invalidate_range_start(mm, 0, -1);
+ mmu_notifier_invalidate_range_start(mm, 0,
+ -1, MMU_STATUS);
for (vma = mm->mmap; vma; vma = vma->vm_next) {
cp.vma = vma;
if (is_vm_hugetlb_page(vma))
@@ -858,7 +859,8 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
&clear_refs_walk);
}
if (type == CLEAR_REFS_SOFT_DIRTY)
- mmu_notifier_invalidate_range_end(mm, 0, -1);
+ mmu_notifier_invalidate_range_end(mm, 0,
+ -1, MMU_STATUS);
flush_tlb_mm(mm);
up_read(&mm->mmap_sem);
mmput(mm);
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 6a836ef..d7e512f 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -6,6 +6,7 @@
#include <linux/fs.h>
#include <linux/hugetlb_inline.h>
#include <linux/cgroup.h>
+#include <linux/mmu_notifier.h>
#include <linux/list.h>
#include <linux/kref.h>

@@ -103,7 +104,8 @@ struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
int pmd_huge(pmd_t pmd);
int pud_huge(pud_t pmd);
unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
- unsigned long address, unsigned long end, pgprot_t newprot);
+ unsigned long address, unsigned long end, pgprot_t newprot,
+ enum mmu_event event);

#else /* !CONFIG_HUGETLB_PAGE */

@@ -148,7 +150,8 @@ static inline bool isolate_huge_page(struct page *page, struct list_head *list)
#define is_hugepage_active(x) false

static inline unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
- unsigned long address, unsigned long end, pgprot_t newprot)
+ unsigned long address, unsigned long end, pgprot_t newprot,
+ enum mmu_event event)
{
return 0;
}
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index deca874..82e9577 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -9,6 +9,52 @@
struct mmu_notifier;
struct mmu_notifier_ops;

+/* The event reports finer information to the callback, allowing the event
+ * listener to take better action. There are only a few kinds of events:
+ *
+ * - MMU_MIGRATE memory is migrating from one page to another, thus all write
+ * access must stop after the invalidate_range_start callback returns. No
+ * read access should be allowed either, as the new page can be remapped with
+ * write access before the invalidate_range_end callback happens, and thus
+ * any read access to the old page might see outdated information. There are
+ * several sources for this event: a page moving to swap (for various
+ * reasons like page reclaim), the outcome of an mremap syscall, migration
+ * for numa reasons, balancing a memory pool, a write fault on a read-only
+ * page triggering allocation of a new page, ...
+ * - MMU_MPROT_NONE memory access protection is changing; no page in the range
+ * can be accessed in either read or write mode, but the range of addresses
+ * is still valid. All accesses are still fine until the invalidate_range_end
+ * callback returns.
+ * - MMU_MPROT_RONLY memory access protection is changing to read only.
+ * All accesses are still fine until the invalidate_range_end callback returns.
+ * - MMU_MPROT_RANDW memory access protection is changing to read and write.
+ * All accesses are still fine until the invalidate_range_end callback returns.
+ * - MMU_MPROT_WONLY memory access protection is changing to write only.
+ * All accesses are still fine until the invalidate_range_end callback returns.
+ * - MMU_MUNMAP the range is being unmapped (outcome of a munmap syscall). It
+ * is fine to still have read/write access until the invalidate_range_end
+ * callback returns. This also implies that the secondary page table can be
+ * trimmed, as the address range is no longer valid.
+ * - MMU_WB memory is being written back to disk; all write access must stop
+ * after the invalidate_range_start callback returns. Read accesses are still
+ * allowed.
+ * - MMU_STATUS memory status change, like soft dirty.
+ *
+ * When in doubt when adding a new notifier caller, use MMU_MIGRATE: it will
+ * always result in the expected behavior, but will not give the listener a
+ * chance to optimize based on the event.
+ */
+enum mmu_event {
+ MMU_MIGRATE = 0,
+ MMU_MPROT_NONE,
+ MMU_MPROT_RONLY,
+ MMU_MPROT_RANDW,
+ MMU_MPROT_WONLY,
+ MMU_MUNMAP,
+ MMU_STATUS,
+ MMU_WB,
+};
+
#ifdef CONFIG_MMU_NOTIFIER

/*
@@ -79,7 +125,8 @@ struct mmu_notifier_ops {
void (*change_pte)(struct mmu_notifier *mn,
struct mm_struct *mm,
unsigned long address,
- pte_t pte);
+ pte_t pte,
+ enum mmu_event event);

/*
* Before this is invoked any secondary MMU is still ok to
@@ -90,7 +137,8 @@ struct mmu_notifier_ops {
*/
void (*invalidate_page)(struct mmu_notifier *mn,
struct mm_struct *mm,
- unsigned long address);
+ unsigned long address,
+ enum mmu_event event);

/*
* invalidate_range_start() and invalidate_range_end() must be
@@ -137,10 +185,14 @@ struct mmu_notifier_ops {
*/
void (*invalidate_range_start)(struct mmu_notifier *mn,
struct mm_struct *mm,
- unsigned long start, unsigned long end);
+ unsigned long start,
+ unsigned long end,
+ enum mmu_event event);
void (*invalidate_range_end)(struct mmu_notifier *mn,
struct mm_struct *mm,
- unsigned long start, unsigned long end);
+ unsigned long start,
+ unsigned long end,
+ enum mmu_event event);
};

/*
@@ -177,13 +229,20 @@ extern int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
extern int __mmu_notifier_test_young(struct mm_struct *mm,
unsigned long address);
extern void __mmu_notifier_change_pte(struct mm_struct *mm,
- unsigned long address, pte_t pte);
+ unsigned long address,
+ pte_t pte,
+ enum mmu_event event);
extern void __mmu_notifier_invalidate_page(struct mm_struct *mm,
- unsigned long address);
+ unsigned long address,
+ enum mmu_event event);
extern void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
- unsigned long start, unsigned long end);
+ unsigned long start,
+ unsigned long end,
+ enum mmu_event event);
extern void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
- unsigned long start, unsigned long end);
+ unsigned long start,
+ unsigned long end,
+ enum mmu_event event);

static inline void mmu_notifier_release(struct mm_struct *mm)
{
@@ -208,31 +267,38 @@ static inline int mmu_notifier_test_young(struct mm_struct *mm,
}

static inline void mmu_notifier_change_pte(struct mm_struct *mm,
- unsigned long address, pte_t pte)
+ unsigned long address,
+ pte_t pte,
+ enum mmu_event event)
{
if (mm_has_notifiers(mm))
- __mmu_notifier_change_pte(mm, address, pte);
+ __mmu_notifier_change_pte(mm, address, pte, event);
}

static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
- unsigned long address)
+ unsigned long address,
+ enum mmu_event event)
{
if (mm_has_notifiers(mm))
- __mmu_notifier_invalidate_page(mm, address);
+ __mmu_notifier_invalidate_page(mm, address, event);
}

static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
- unsigned long start, unsigned long end)
+ unsigned long start,
+ unsigned long end,
+ enum mmu_event event)
{
if (mm_has_notifiers(mm))
- __mmu_notifier_invalidate_range_start(mm, start, end);
+ __mmu_notifier_invalidate_range_start(mm, start, end, event);
}

static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
- unsigned long start, unsigned long end)
+ unsigned long start,
+ unsigned long end,
+ enum mmu_event event)
{
if (mm_has_notifiers(mm))
- __mmu_notifier_invalidate_range_end(mm, start, end);
+ __mmu_notifier_invalidate_range_end(mm, start, end, event);
}

static inline void mmu_notifier_mm_init(struct mm_struct *mm)
@@ -278,13 +344,13 @@ static inline void mmu_notifier_mm_destroy(struct mm_struct *mm)
* old page would remain mapped readonly in the secondary MMUs after the new
* page is already writable by some CPU through the primary MMU.
*/
-#define set_pte_at_notify(__mm, __address, __ptep, __pte) \
+#define set_pte_at_notify(__mm, __address, __ptep, __pte, __event) \
({ \
struct mm_struct *___mm = __mm; \
unsigned long ___address = __address; \
pte_t ___pte = __pte; \
\
- mmu_notifier_change_pte(___mm, ___address, ___pte); \
+ mmu_notifier_change_pte(___mm, ___address, ___pte, __event); \
set_pte_at(___mm, ___address, __ptep, ___pte); \
})

@@ -307,22 +373,29 @@ static inline int mmu_notifier_test_young(struct mm_struct *mm,
}

static inline void mmu_notifier_change_pte(struct mm_struct *mm,
- unsigned long address, pte_t pte)
+ unsigned long address,
+ pte_t pte,
+ enum mmu_event event)
{
}

static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
- unsigned long address)
+ unsigned long address,
+ enum mmu_event event)
{
}

static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
- unsigned long start, unsigned long end)
+ unsigned long start,
+ unsigned long end,
+ enum mmu_event event)
{
}

static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
- unsigned long start, unsigned long end)
+ unsigned long start,
+ unsigned long end,
+ enum mmu_event event)
{
}

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 32b04dc..296f81e 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -177,7 +177,8 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
/* For try_to_free_swap() and munlock_vma_page() below */
lock_page(page);

- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
err = -EAGAIN;
ptep = page_check_address(page, mm, addr, &ptl, 0);
if (!ptep)
@@ -195,7 +196,9 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,

flush_cache_page(vma, addr, pte_pfn(*ptep));
ptep_clear_flush(vma, addr, ptep);
- set_pte_at_notify(mm, addr, ptep, mk_pte(kpage, vma->vm_page_prot));
+ set_pte_at_notify(mm, addr, ptep,
+ mk_pte(kpage, vma->vm_page_prot),
+ MMU_MIGRATE);

page_remove_rmap(page);
if (!page_mapped(page))
@@ -209,7 +212,8 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
err = 0;
unlock:
mem_cgroup_cancel_charge(kpage, memcg);
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
unlock_page(page);
return err;
}
diff --git a/mm/filemap_xip.c b/mm/filemap_xip.c
index d8d9fe3..a2b3f09 100644
--- a/mm/filemap_xip.c
+++ b/mm/filemap_xip.c
@@ -198,7 +198,7 @@ retry:
BUG_ON(pte_dirty(pteval));
pte_unmap_unlock(pte, ptl);
/* must invalidate_page _before_ freeing the page */
- mmu_notifier_invalidate_page(mm, address);
+ mmu_notifier_invalidate_page(mm, address, MMU_MIGRATE);
page_cache_release(page);
}
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5d562a9..fa30857 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1020,6 +1020,11 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
set_page_private(pages[i], (unsigned long)memcg);
}

+ mmun_start = haddr;
+ mmun_end = haddr + HPAGE_PMD_SIZE;
+ mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
+
for (i = 0; i < HPAGE_PMD_NR; i++) {
copy_user_highpage(pages[i], page + i,
haddr + PAGE_SIZE * i, vma);
@@ -1027,10 +1032,6 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
cond_resched();
}

- mmun_start = haddr;
- mmun_end = haddr + HPAGE_PMD_SIZE;
- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
-
ptl = pmd_lock(mm, pmd);
if (unlikely(!pmd_same(*pmd, orig_pmd)))
goto out_free_pages;
@@ -1063,7 +1064,8 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
page_remove_rmap(page);
spin_unlock(ptl);

- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);

ret |= VM_FAULT_WRITE;
put_page(page);
@@ -1073,7 +1075,8 @@ out:

out_free_pages:
spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
for (i = 0; i < HPAGE_PMD_NR; i++) {
memcg = (void *)page_private(pages[i]);
set_page_private(pages[i], 0);
@@ -1157,16 +1160,17 @@ alloc:

count_vm_event(THP_FAULT_ALLOC);

+ mmun_start = haddr;
+ mmun_end = haddr + HPAGE_PMD_SIZE;
+ mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
+
if (!page)
clear_huge_page(new_page, haddr, HPAGE_PMD_NR);
else
copy_user_huge_page(new_page, page, haddr, vma, HPAGE_PMD_NR);
__SetPageUptodate(new_page);

- mmun_start = haddr;
- mmun_end = haddr + HPAGE_PMD_SIZE;
- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
-
spin_lock(ptl);
if (page)
put_user_huge_page(page);
@@ -1197,7 +1201,8 @@ alloc:
}
spin_unlock(ptl);
out_mn:
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
out:
return ret;
out_unlock:
@@ -1632,7 +1637,8 @@ static int __split_huge_page_splitting(struct page *page,
const unsigned long mmun_start = address;
const unsigned long mmun_end = address + HPAGE_PMD_SIZE;

- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmun_end, MMU_STATUS);
pmd = page_check_address_pmd(page, mm, address,
PAGE_CHECK_ADDRESS_PMD_NOTSPLITTING_FLAG, &ptl);
if (pmd) {
@@ -1647,7 +1653,8 @@ static int __split_huge_page_splitting(struct page *page,
ret = 1;
spin_unlock(ptl);
}
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_STATUS);

return ret;
}
@@ -2446,7 +2453,8 @@ static void collapse_huge_page(struct mm_struct *mm,

mmun_start = address;
mmun_end = address + HPAGE_PMD_SIZE;
- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
/*
* After this gup_fast can't run anymore. This also removes
@@ -2456,7 +2464,8 @@ static void collapse_huge_page(struct mm_struct *mm,
*/
_pmd = pmdp_clear_flush(vma, address, pmd);
spin_unlock(pmd_ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);

spin_lock(pte_ptl);
isolated = __collapse_huge_page_isolate(vma, address, pte);
@@ -2845,24 +2854,28 @@ void __split_huge_page_pmd(struct vm_area_struct *vma, unsigned long address,
mmun_start = haddr;
mmun_end = haddr + HPAGE_PMD_SIZE;
again:
- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
ptl = pmd_lock(mm, pmd);
if (unlikely(!pmd_trans_huge(*pmd))) {
spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
return;
}
if (is_huge_zero_pmd(*pmd)) {
__split_huge_zero_page_pmd(vma, haddr, pmd);
spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
return;
}
page = pmd_page(*pmd);
VM_BUG_ON_PAGE(!page_count(page), page);
get_page(page);
spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);

split_huge_page(page);

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7faab71..73e1576 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2565,7 +2565,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
mmun_start = vma->vm_start;
mmun_end = vma->vm_end;
if (cow)
- mmu_notifier_invalidate_range_start(src, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_start(src, mmun_start,
+ mmun_end, MMU_MIGRATE);

for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
spinlock_t *src_ptl, *dst_ptl;
@@ -2615,7 +2616,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
}

if (cow)
- mmu_notifier_invalidate_range_end(src, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(src, mmun_start,
+ mmun_end, MMU_MIGRATE);

return ret;
}
@@ -2641,7 +2643,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
BUG_ON(end & ~huge_page_mask(h));

tlb_start_vma(tlb, vma);
- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
again:
for (address = start; address < end; address += sz) {
ptep = huge_pte_offset(mm, address);
@@ -2712,7 +2715,8 @@ unlock:
if (address < end && !ref_page)
goto again;
}
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
tlb_end_vma(tlb, vma);
}

@@ -2899,7 +2903,8 @@ retry_avoidcopy:

mmun_start = address & huge_page_mask(h);
mmun_end = mmun_start + huge_page_size(h);
- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
/*
* Retake the page table lock to check for racing updates
* before the page tables are altered
@@ -2919,7 +2924,8 @@ retry_avoidcopy:
new_page = old_page;
}
spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
page_cache_release(new_page);
page_cache_release(old_page);

@@ -3344,7 +3350,8 @@ same_page:
}

unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
- unsigned long address, unsigned long end, pgprot_t newprot)
+ unsigned long address, unsigned long end, pgprot_t newprot,
+ enum mmu_event event)
{
struct mm_struct *mm = vma->vm_mm;
unsigned long start = address;
@@ -3356,7 +3363,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
BUG_ON(address >= end);
flush_cache_range(vma, address, end);

- mmu_notifier_invalidate_range_start(mm, start, end);
+ mmu_notifier_invalidate_range_start(mm, start, end, event);
mutex_lock(&vma->vm_file->f_mapping->i_mmap_mutex);
for (; address < end; address += huge_page_size(h)) {
spinlock_t *ptl;
@@ -3386,7 +3393,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
*/
flush_tlb_range(vma, start, end);
mutex_unlock(&vma->vm_file->f_mapping->i_mmap_mutex);
- mmu_notifier_invalidate_range_end(mm, start, end);
+ mmu_notifier_invalidate_range_end(mm, start, end, event);

return pages << h->order;
}
diff --git a/mm/ksm.c b/mm/ksm.c
index cb1e976..4b659f1 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -873,7 +873,8 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,

mmun_start = addr;
mmun_end = addr + PAGE_SIZE;
- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmun_end, MMU_MPROT_RONLY);

ptep = page_check_address(page, mm, addr, &ptl, 0);
if (!ptep)
@@ -905,7 +906,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
if (pte_dirty(entry))
set_page_dirty(page);
entry = pte_mkclean(pte_wrprotect(entry));
- set_pte_at_notify(mm, addr, ptep, entry);
+ set_pte_at_notify(mm, addr, ptep, entry, MMU_MPROT_RONLY);
}
*orig_pte = *ptep;
err = 0;
@@ -913,7 +914,8 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
out_unlock:
pte_unmap_unlock(ptep, ptl);
out_mn:
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MPROT_RONLY);
out:
return err;
}
@@ -949,7 +951,8 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,

mmun_start = addr;
mmun_end = addr + PAGE_SIZE;
- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);

ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
if (!pte_same(*ptep, orig_pte)) {
@@ -962,7 +965,9 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,

flush_cache_page(vma, addr, pte_pfn(*ptep));
ptep_clear_flush(vma, addr, ptep);
- set_pte_at_notify(mm, addr, ptep, mk_pte(kpage, vma->vm_page_prot));
+ set_pte_at_notify(mm, addr, ptep,
+ mk_pte(kpage, vma->vm_page_prot),
+ MMU_MIGRATE);

page_remove_rmap(page);
if (!page_mapped(page))
@@ -972,7 +977,8 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
pte_unmap_unlock(ptep, ptl);
err = 0;
out_mn:
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
out:
return err;
}
diff --git a/mm/memory.c b/mm/memory.c
index 09e2cd0..d3908f0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1050,7 +1050,7 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
mmun_end = end;
if (is_cow)
mmu_notifier_invalidate_range_start(src_mm, mmun_start,
- mmun_end);
+ mmun_end, MMU_MIGRATE);

ret = 0;
dst_pgd = pgd_offset(dst_mm, addr);
@@ -1067,7 +1067,8 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
} while (dst_pgd++, src_pgd++, addr = next, addr != end);

if (is_cow)
- mmu_notifier_invalidate_range_end(src_mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(src_mm, mmun_start, mmun_end,
+ MMU_MIGRATE);
return ret;
}

@@ -1371,10 +1372,12 @@ void unmap_vmas(struct mmu_gather *tlb,
{
struct mm_struct *mm = vma->vm_mm;

- mmu_notifier_invalidate_range_start(mm, start_addr, end_addr);
+ mmu_notifier_invalidate_range_start(mm, start_addr,
+ end_addr, MMU_MUNMAP);
for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next)
unmap_single_vma(tlb, vma, start_addr, end_addr, NULL);
- mmu_notifier_invalidate_range_end(mm, start_addr, end_addr);
+ mmu_notifier_invalidate_range_end(mm, start_addr,
+ end_addr, MMU_MUNMAP);
}

/**
@@ -1396,10 +1399,10 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
lru_add_drain();
tlb_gather_mmu(&tlb, mm, start, end);
update_hiwater_rss(mm);
- mmu_notifier_invalidate_range_start(mm, start, end);
+ mmu_notifier_invalidate_range_start(mm, start, end, MMU_MUNMAP);
for ( ; vma && vma->vm_start < end; vma = vma->vm_next)
unmap_single_vma(&tlb, vma, start, end, details);
- mmu_notifier_invalidate_range_end(mm, start, end);
+ mmu_notifier_invalidate_range_end(mm, start, end, MMU_MUNMAP);
tlb_finish_mmu(&tlb, start, end);
}

@@ -1422,9 +1425,9 @@ static void zap_page_range_single(struct vm_area_struct *vma, unsigned long addr
lru_add_drain();
tlb_gather_mmu(&tlb, mm, address, end);
update_hiwater_rss(mm);
- mmu_notifier_invalidate_range_start(mm, address, end);
+ mmu_notifier_invalidate_range_start(mm, address, end, MMU_MUNMAP);
unmap_single_vma(&tlb, vma, address, end, details);
- mmu_notifier_invalidate_range_end(mm, address, end);
+ mmu_notifier_invalidate_range_end(mm, address, end, MMU_MUNMAP);
tlb_finish_mmu(&tlb, address, end);
}

@@ -2208,7 +2211,8 @@ gotten:

mmun_start = address & PAGE_MASK;
mmun_end = mmun_start + PAGE_SIZE;
- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);

/*
* Re-check the pte - we dropped the lock
@@ -2240,7 +2244,7 @@ gotten:
* mmu page tables (such as kvm shadow page tables), we want the
* new page to be mapped directly into the secondary page table.
*/
- set_pte_at_notify(mm, address, page_table, entry);
+ set_pte_at_notify(mm, address, page_table, entry, MMU_MIGRATE);
update_mmu_cache(vma, address, page_table);
if (old_page) {
/*
@@ -2279,7 +2283,8 @@ gotten:
unlock:
pte_unmap_unlock(page_table, ptl);
if (mmun_end > mmun_start)
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
if (old_page) {
/*
* Don't let another task, with possibly unlocked vma,
diff --git a/mm/migrate.c b/mm/migrate.c
index ab43fbf..b526c72 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1820,12 +1820,14 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
WARN_ON(PageLRU(new_page));

/* Recheck the target PMD */
- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);
ptl = pmd_lock(mm, pmd);
if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) {
fail_putback:
spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);

/* Reverse changes made by migrate_page_copy() */
if (TestClearPageActive(new_page))
@@ -1878,7 +1880,8 @@ fail_putback:
page_remove_rmap(page);

spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmun_end, MMU_MIGRATE);

/* Take an "isolate" reference and put new page on the LRU. */
get_page(new_page);
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 41cefdf..9decb88 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -122,8 +122,10 @@ int __mmu_notifier_test_young(struct mm_struct *mm,
return young;
}

-void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
- pte_t pte)
+void __mmu_notifier_change_pte(struct mm_struct *mm,
+ unsigned long address,
+ pte_t pte,
+ enum mmu_event event)
{
struct mmu_notifier *mn;
int id;
@@ -131,13 +133,14 @@ void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
id = srcu_read_lock(&srcu);
hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
if (mn->ops->change_pte)
- mn->ops->change_pte(mn, mm, address, pte);
+ mn->ops->change_pte(mn, mm, address, pte, event);
}
srcu_read_unlock(&srcu, id);
}

void __mmu_notifier_invalidate_page(struct mm_struct *mm,
- unsigned long address)
+ unsigned long address,
+ enum mmu_event event)
{
struct mmu_notifier *mn;
int id;
@@ -145,13 +148,16 @@ void __mmu_notifier_invalidate_page(struct mm_struct *mm,
id = srcu_read_lock(&srcu);
hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
if (mn->ops->invalidate_page)
- mn->ops->invalidate_page(mn, mm, address);
+ mn->ops->invalidate_page(mn, mm, address, event);
}
srcu_read_unlock(&srcu, id);
}

void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
- unsigned long start, unsigned long end)
+ unsigned long start,
+ unsigned long end,
+ enum mmu_event event)
+
{
struct mmu_notifier *mn;
int id;
@@ -159,14 +165,17 @@ void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
id = srcu_read_lock(&srcu);
hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
if (mn->ops->invalidate_range_start)
- mn->ops->invalidate_range_start(mn, mm, start, end);
+ mn->ops->invalidate_range_start(mn, mm, start,
+ end, event);
}
srcu_read_unlock(&srcu, id);
}
EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start);

void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
- unsigned long start, unsigned long end)
+ unsigned long start,
+ unsigned long end,
+ enum mmu_event event)
{
struct mmu_notifier *mn;
int id;
@@ -174,7 +183,8 @@ void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
id = srcu_read_lock(&srcu);
hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
if (mn->ops->invalidate_range_end)
- mn->ops->invalidate_range_end(mn, mm, start, end);
+ mn->ops->invalidate_range_end(mn, mm, start,
+ end, event);
}
srcu_read_unlock(&srcu, id);
}
diff --git a/mm/mprotect.c b/mm/mprotect.c
index c43d557..6ce6c23 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -137,7 +137,8 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,

static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
pud_t *pud, unsigned long addr, unsigned long end,
- pgprot_t newprot, int dirty_accountable, int prot_numa)
+ pgprot_t newprot, int dirty_accountable, int prot_numa,
+ enum mmu_event event)
{
pmd_t *pmd;
struct mm_struct *mm = vma->vm_mm;
@@ -157,7 +158,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
/* invoke the mmu notifier if the pmd is populated */
if (!mni_start) {
mni_start = addr;
- mmu_notifier_invalidate_range_start(mm, mni_start, end);
+ mmu_notifier_invalidate_range_start(mm, mni_start,
+ end, event);
}

if (pmd_trans_huge(*pmd)) {
@@ -185,7 +187,7 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
} while (pmd++, addr = next, addr != end);

if (mni_start)
- mmu_notifier_invalidate_range_end(mm, mni_start, end);
+ mmu_notifier_invalidate_range_end(mm, mni_start, end, event);

if (nr_huge_updates)
count_vm_numa_events(NUMA_HUGE_PTE_UPDATES, nr_huge_updates);
@@ -194,7 +196,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,

static inline unsigned long change_pud_range(struct vm_area_struct *vma,
pgd_t *pgd, unsigned long addr, unsigned long end,
- pgprot_t newprot, int dirty_accountable, int prot_numa)
+ pgprot_t newprot, int dirty_accountable, int prot_numa,
+ enum mmu_event event)
{
pud_t *pud;
unsigned long next;
@@ -206,7 +209,7 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,
if (pud_none_or_clear_bad(pud))
continue;
pages += change_pmd_range(vma, pud, addr, next, newprot,
- dirty_accountable, prot_numa);
+ dirty_accountable, prot_numa, event);
} while (pud++, addr = next, addr != end);

return pages;
@@ -214,7 +217,7 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,

static unsigned long change_protection_range(struct vm_area_struct *vma,
unsigned long addr, unsigned long end, pgprot_t newprot,
- int dirty_accountable, int prot_numa)
+ int dirty_accountable, int prot_numa, enum mmu_event event)
{
struct mm_struct *mm = vma->vm_mm;
pgd_t *pgd;
@@ -231,7 +234,7 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
if (pgd_none_or_clear_bad(pgd))
continue;
pages += change_pud_range(vma, pgd, addr, next, newprot,
- dirty_accountable, prot_numa);
+ dirty_accountable, prot_numa, event);
} while (pgd++, addr = next, addr != end);

/* Only flush the TLB if we actually modified any entries: */
@@ -247,11 +250,23 @@ unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
int dirty_accountable, int prot_numa)
{
unsigned long pages;
+ enum mmu_event event = MMU_MPROT_NONE;
+
+ /* At this point vm_flags has been updated. */
+ if ((vma->vm_flags & VM_READ) && (vma->vm_flags & VM_WRITE))
+ event = MMU_MPROT_RANDW;
+ else if (vma->vm_flags & VM_WRITE)
+ event = MMU_MPROT_WONLY;
+ else if (vma->vm_flags & VM_READ)
+ event = MMU_MPROT_RONLY;

if (is_vm_hugetlb_page(vma))
- pages = hugetlb_change_protection(vma, start, end, newprot);
+ pages = hugetlb_change_protection(vma, start, end,
+ newprot, event);
else
- pages = change_protection_range(vma, start, end, newprot, dirty_accountable, prot_numa);
+ pages = change_protection_range(vma, start, end, newprot,
+ dirty_accountable,
+ prot_numa, event);

return pages;
}
diff --git a/mm/mremap.c b/mm/mremap.c
index 05f1180..6827d2f 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -177,7 +177,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,

mmun_start = old_addr;
mmun_end = old_end;
- mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start,
+ mmun_end, MMU_MIGRATE);

for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
cond_resched();
@@ -228,7 +229,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
if (likely(need_flush))
flush_tlb_range(vma, old_end-len, old_addr);

- mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start,
+ mmun_end, MMU_MIGRATE);

return len + old_addr - old_end; /* how much done */
}
diff --git a/mm/rmap.c b/mm/rmap.c
index 7928ddd..bd7e6d7 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -840,7 +840,7 @@ static int page_mkclean_one(struct page *page, struct vm_area_struct *vma,
pte_unmap_unlock(pte, ptl);

if (ret) {
- mmu_notifier_invalidate_page(mm, address);
+ mmu_notifier_invalidate_page(mm, address, MMU_WB);
(*cleaned)++;
}
out:
@@ -1128,6 +1128,10 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
spinlock_t *ptl;
int ret = SWAP_AGAIN;
enum ttu_flags flags = (enum ttu_flags)arg;
+ enum mmu_event event = MMU_MIGRATE;
+
+ if (flags & TTU_MUNLOCK)
+ event = MMU_STATUS;

pte = page_check_address(page, mm, address, &ptl, 0);
if (!pte)
@@ -1233,7 +1237,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
out_unmap:
pte_unmap_unlock(pte, ptl);
if (ret != SWAP_FAIL && !(flags & TTU_MUNLOCK))
- mmu_notifier_invalidate_page(mm, address);
+ mmu_notifier_invalidate_page(mm, address, event);
out:
return ret;

@@ -1287,7 +1291,9 @@ out_mlock:
#define CLUSTER_MASK (~(CLUSTER_SIZE - 1))

static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
- struct vm_area_struct *vma, struct page *check_page)
+ struct vm_area_struct *vma,
+ struct page *check_page,
+ enum ttu_flags flags)
{
struct mm_struct *mm = vma->vm_mm;
pmd_t *pmd;
@@ -1301,6 +1307,10 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
unsigned long end;
int ret = SWAP_AGAIN;
int locked_vma = 0;
+ enum mmu_event event = MMU_MIGRATE;
+
+ if (flags & TTU_MUNLOCK)
+ event = MMU_STATUS;

address = (vma->vm_start + cursor) & CLUSTER_MASK;
end = address + CLUSTER_SIZE;
@@ -1315,7 +1325,7 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,

mmun_start = address;
mmun_end = end;
- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end, event);

/*
* If we can acquire the mmap_sem for read, and vma is VM_LOCKED,
@@ -1380,7 +1390,7 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
(*mapcount)--;
}
pte_unmap_unlock(pte - 1, ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
+ mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end, event);
if (locked_vma)
up_read(&vma->vm_mm->mmap_sem);
return ret;
@@ -1436,7 +1446,9 @@ static int try_to_unmap_nonlinear(struct page *page,
while (cursor < max_nl_cursor &&
cursor < vma->vm_end - vma->vm_start) {
if (try_to_unmap_cluster(cursor, &mapcount,
- vma, page) == SWAP_MLOCK)
+ vma, page,
+ (enum ttu_flags)arg)
+ == SWAP_MLOCK)
ret = SWAP_MLOCK;
cursor += CLUSTER_SIZE;
vma->vm_private_data = (void *) cursor;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4b6c01b..6e1992f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -262,7 +262,8 @@ static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)

static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
struct mm_struct *mm,
- unsigned long address)
+ unsigned long address,
+ enum mmu_event event)
{
struct kvm *kvm = mmu_notifier_to_kvm(mn);
int need_tlb_flush, idx;
@@ -301,7 +302,8 @@ static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
struct mm_struct *mm,
unsigned long address,
- pte_t pte)
+ pte_t pte,
+ enum mmu_event event)
{
struct kvm *kvm = mmu_notifier_to_kvm(mn);
int idx;
@@ -317,7 +319,8 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
struct mm_struct *mm,
unsigned long start,
- unsigned long end)
+ unsigned long end,
+ enum mmu_event event)
{
struct kvm *kvm = mmu_notifier_to_kvm(mn);
int need_tlb_flush = 0, idx;
@@ -343,7 +346,8 @@ static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
struct mm_struct *mm,
unsigned long start,
- unsigned long end)
+ unsigned long end,
+ enum mmu_event event)
{
struct kvm *kvm = mmu_notifier_to_kvm(mn);

--
1.9.0

2014-06-28 02:01:12

by Jerome Glisse

Subject: [PATCH 4/6] mmu_notifier: pass through vma to invalidate_range and invalidate_page

From: Jérôme Glisse <[email protected]>

New users of the mmu_notifier interface need to look up the vma in order to
perform the invalidation operation. Instead of redoing a vma lookup
inside the callback, just pass through the vma from the call site, where
it is already available.

This needs a small refactoring in memory.c to call invalidate_range on
vma boundaries; the overhead should be low enough.
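
For illustration (a sketch, not part of the patch; my_invalidate_range is a
hypothetical helper), a callback can now inspect the vma directly where it
previously would have had to redo a find_vma() under mmap_sem:

    static void my_invalidate_range_start(struct mmu_notifier *mn,
                                          struct mm_struct *mm,
                                          struct vm_area_struct *vma,
                                          unsigned long start,
                                          unsigned long end,
                                          enum mmu_event event)
    {
            /* e.g. the device may not mirror shared file mappings */
            if (vma->vm_flags & VM_SHARED)
                    return;
            my_invalidate_range(mn, start, end);
    }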

Signed-off-by: Jérôme Glisse <[email protected]>
---
drivers/gpu/drm/i915/i915_gem_userptr.c | 1 +
drivers/iommu/amd_iommu_v2.c | 3 +++
drivers/misc/sgi-gru/grutlbpurge.c | 6 ++++-
drivers/xen/gntdev.c | 4 +++-
fs/proc/task_mmu.c | 16 ++++++++-----
include/linux/mmu_notifier.h | 19 ++++++++++++---
kernel/events/uprobes.c | 4 ++--
mm/filemap_xip.c | 3 ++-
mm/huge_memory.c | 26 ++++++++++----------
mm/hugetlb.c | 16 ++++++-------
mm/ksm.c | 8 +++----
mm/memory.c | 42 +++++++++++++++++++++------------
mm/migrate.c | 6 ++---
mm/mmu_notifier.c | 9 ++++---
mm/mprotect.c | 5 ++--
mm/mremap.c | 4 ++--
mm/rmap.c | 9 +++----
virt/kvm/kvm_main.c | 3 +++
18 files changed, 116 insertions(+), 68 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index ed6f35e..191ac71 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -55,6 +55,7 @@ struct i915_mmu_object {

static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event)
diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c
index 2bb9771..9f9e706 100644
--- a/drivers/iommu/amd_iommu_v2.c
+++ b/drivers/iommu/amd_iommu_v2.c
@@ -422,6 +422,7 @@ static void mn_change_pte(struct mmu_notifier *mn,

static void mn_invalidate_page(struct mmu_notifier *mn,
struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long address,
enum mmu_event event)
{
@@ -430,6 +431,7 @@ static void mn_invalidate_page(struct mmu_notifier *mn,

static void mn_invalidate_range_start(struct mmu_notifier *mn,
struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event)
@@ -453,6 +455,7 @@ static void mn_invalidate_range_start(struct mmu_notifier *mn,

static void mn_invalidate_range_end(struct mmu_notifier *mn,
struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event)
diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c
index e67fed1..d02e4c7 100644
--- a/drivers/misc/sgi-gru/grutlbpurge.c
+++ b/drivers/misc/sgi-gru/grutlbpurge.c
@@ -221,6 +221,7 @@ void gru_flush_all_tlb(struct gru_state *gru)
*/
static void gru_invalidate_range_start(struct mmu_notifier *mn,
struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start, unsigned long end,
enum mmu_event event)
{
@@ -235,7 +236,9 @@ static void gru_invalidate_range_start(struct mmu_notifier *mn,
}

static void gru_invalidate_range_end(struct mmu_notifier *mn,
- struct mm_struct *mm, unsigned long start,
+ struct mm_struct *mm,
+ struct vm_area_struct *vma,
+ unsigned long start,
unsigned long end,
enum mmu_event event)
{
@@ -250,6 +253,7 @@ static void gru_invalidate_range_end(struct mmu_notifier *mn,
}

static void gru_invalidate_page(struct mmu_notifier *mn, struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long address,
enum mmu_event event)
{
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index fe9da94..219928b 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -428,6 +428,7 @@ static void unmap_if_in_range(struct grant_map *map,

static void mn_invl_range_start(struct mmu_notifier *mn,
struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event)
@@ -447,10 +448,11 @@ static void mn_invl_range_start(struct mmu_notifier *mn,

static void mn_invl_page(struct mmu_notifier *mn,
struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long address,
enum mmu_event event)
{
- mn_invl_range_start(mn, mm, address, address + PAGE_SIZE, event);
+ mn_invl_range_start(mn, mm, vma, address, address + PAGE_SIZE, event);
}

static void mn_release(struct mmu_notifier *mn,
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index e9e79f7..8b0f25d 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -829,13 +829,15 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
.private = &cp,
};
down_read(&mm->mmap_sem);
- if (type == CLEAR_REFS_SOFT_DIRTY)
- mmu_notifier_invalidate_range_start(mm, 0,
- -1, MMU_STATUS);
for (vma = mm->mmap; vma; vma = vma->vm_next) {
cp.vma = vma;
if (is_vm_hugetlb_page(vma))
continue;
+ if (type == CLEAR_REFS_SOFT_DIRTY)
+ mmu_notifier_invalidate_range_start(mm, vma,
+ vma->vm_start,
+ vma->vm_end,
+ MMU_STATUS);
/*
* Writing 1 to /proc/pid/clear_refs affects all pages.
*
@@ -857,10 +859,12 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
}
walk_page_range(vma->vm_start, vma->vm_end,
&clear_refs_walk);
+ if (type == CLEAR_REFS_SOFT_DIRTY)
+ mmu_notifier_invalidate_range_end(mm, vma,
+ vma->vm_start,
+ vma->vm_end,
+ MMU_STATUS);
}
- if (type == CLEAR_REFS_SOFT_DIRTY)
- mmu_notifier_invalidate_range_end(mm, 0,
- -1, MMU_STATUS);
flush_tlb_mm(mm);
up_read(&mm->mmap_sem);
mmput(mm);
diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
index 82e9577..8907e5d 100644
--- a/include/linux/mmu_notifier.h
+++ b/include/linux/mmu_notifier.h
@@ -137,6 +137,7 @@ struct mmu_notifier_ops {
*/
void (*invalidate_page)(struct mmu_notifier *mn,
struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long address,
enum mmu_event event);

@@ -185,11 +186,13 @@ struct mmu_notifier_ops {
*/
void (*invalidate_range_start)(struct mmu_notifier *mn,
struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event);
void (*invalidate_range_end)(struct mmu_notifier *mn,
struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event);
@@ -233,13 +236,16 @@ extern void __mmu_notifier_change_pte(struct mm_struct *mm,
pte_t pte,
enum mmu_event event);
extern void __mmu_notifier_invalidate_page(struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long address,
enum mmu_event event);
extern void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event);
extern void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event);
@@ -276,29 +282,33 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm,
}

static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long address,
enum mmu_event event)
{
if (mm_has_notifiers(mm))
- __mmu_notifier_invalidate_page(mm, address, event);
+ __mmu_notifier_invalidate_page(mm, vma, address, event);
}

static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event)
{
if (mm_has_notifiers(mm))
- __mmu_notifier_invalidate_range_start(mm, start, end, event);
+ __mmu_notifier_invalidate_range_start(mm, vma, start,
+ end, event);
}

static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event)
{
if (mm_has_notifiers(mm))
- __mmu_notifier_invalidate_range_end(mm, start, end, event);
+ __mmu_notifier_invalidate_range_end(mm, vma, start, end, event);
}

static inline void mmu_notifier_mm_init(struct mm_struct *mm)
@@ -380,12 +390,14 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm,
}

static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long address,
enum mmu_event event)
{
}

static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event)
@@ -393,6 +405,7 @@ static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
}

static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event)
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 296f81e..0f552bc 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -177,7 +177,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
/* For try_to_free_swap() and munlock_vma_page() below */
lock_page(page);

- mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
err = -EAGAIN;
ptep = page_check_address(page, mm, addr, &ptl, 0);
@@ -212,7 +212,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
err = 0;
unlock:
mem_cgroup_cancel_charge(kpage, memcg);
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
unlock_page(page);
return err;
diff --git a/mm/filemap_xip.c b/mm/filemap_xip.c
index a2b3f09..f0113df 100644
--- a/mm/filemap_xip.c
+++ b/mm/filemap_xip.c
@@ -198,7 +198,8 @@ retry:
BUG_ON(pte_dirty(pteval));
pte_unmap_unlock(pte, ptl);
/* must invalidate_page _before_ freeing the page */
- mmu_notifier_invalidate_page(mm, address, MMU_MIGRATE);
+ mmu_notifier_invalidate_page(mm, vma, address,
+ MMU_MIGRATE);
page_cache_release(page);
}
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fa30857..cc74b60 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1022,7 +1022,7 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,

mmun_start = haddr;
mmun_end = haddr + HPAGE_PMD_SIZE;
- mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);

for (i = 0; i < HPAGE_PMD_NR; i++) {
@@ -1064,7 +1064,7 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
page_remove_rmap(page);
spin_unlock(ptl);

- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);

ret |= VM_FAULT_WRITE;
@@ -1075,7 +1075,7 @@ out:

out_free_pages:
spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
for (i = 0; i < HPAGE_PMD_NR; i++) {
memcg = (void *)page_private(pages[i]);
@@ -1162,7 +1162,7 @@ alloc:

mmun_start = haddr;
mmun_end = haddr + HPAGE_PMD_SIZE;
- mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);

if (!page)
@@ -1201,7 +1201,7 @@ alloc:
}
spin_unlock(ptl);
out_mn:
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
out:
return ret;
@@ -1637,7 +1637,7 @@ static int __split_huge_page_splitting(struct page *page,
const unsigned long mmun_start = address;
const unsigned long mmun_end = address + HPAGE_PMD_SIZE;

- mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
mmun_end, MMU_STATUS);
pmd = page_check_address_pmd(page, mm, address,
PAGE_CHECK_ADDRESS_PMD_NOTSPLITTING_FLAG, &ptl);
@@ -1653,7 +1653,7 @@ static int __split_huge_page_splitting(struct page *page,
ret = 1;
spin_unlock(ptl);
}
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_STATUS);

return ret;
@@ -2453,7 +2453,7 @@ static void collapse_huge_page(struct mm_struct *mm,

mmun_start = address;
mmun_end = address + HPAGE_PMD_SIZE;
- mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
/*
@@ -2464,7 +2464,7 @@ static void collapse_huge_page(struct mm_struct *mm,
*/
_pmd = pmdp_clear_flush(vma, address, pmd);
spin_unlock(pmd_ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);

spin_lock(pte_ptl);
@@ -2854,19 +2854,19 @@ void __split_huge_page_pmd(struct vm_area_struct *vma, unsigned long address,
mmun_start = haddr;
mmun_end = haddr + HPAGE_PMD_SIZE;
again:
- mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
ptl = pmd_lock(mm, pmd);
if (unlikely(!pmd_trans_huge(*pmd))) {
spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
return;
}
if (is_huge_zero_pmd(*pmd)) {
__split_huge_zero_page_pmd(vma, haddr, pmd);
spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
return;
}
@@ -2874,7 +2874,7 @@ again:
VM_BUG_ON_PAGE(!page_count(page), page);
get_page(page);
spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);

split_huge_page(page);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 73e1576..15f0123 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2565,7 +2565,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
mmun_start = vma->vm_start;
mmun_end = vma->vm_end;
if (cow)
- mmu_notifier_invalidate_range_start(src, mmun_start,
+ mmu_notifier_invalidate_range_start(src, vma, mmun_start,
mmun_end, MMU_MIGRATE);

for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
@@ -2616,7 +2616,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
}

if (cow)
- mmu_notifier_invalidate_range_end(src, mmun_start,
+ mmu_notifier_invalidate_range_end(src, vma, mmun_start,
mmun_end, MMU_MIGRATE);

return ret;
@@ -2643,7 +2643,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
BUG_ON(end & ~huge_page_mask(h));

tlb_start_vma(tlb, vma);
- mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
again:
for (address = start; address < end; address += sz) {
@@ -2715,7 +2715,7 @@ unlock:
if (address < end && !ref_page)
goto again;
}
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
tlb_end_vma(tlb, vma);
}
@@ -2903,7 +2903,7 @@ retry_avoidcopy:

mmun_start = address & huge_page_mask(h);
mmun_end = mmun_start + huge_page_size(h);
- mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
/*
* Retake the page table lock to check for racing updates
@@ -2924,7 +2924,7 @@ retry_avoidcopy:
new_page = old_page;
}
spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
page_cache_release(new_page);
page_cache_release(old_page);
@@ -3363,7 +3363,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
BUG_ON(address >= end);
flush_cache_range(vma, address, end);

- mmu_notifier_invalidate_range_start(mm, start, end, event);
+ mmu_notifier_invalidate_range_start(mm, vma, start, end, event);
mutex_lock(&vma->vm_file->f_mapping->i_mmap_mutex);
for (; address < end; address += huge_page_size(h)) {
spinlock_t *ptl;
@@ -3393,7 +3393,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
*/
flush_tlb_range(vma, start, end);
mutex_unlock(&vma->vm_file->f_mapping->i_mmap_mutex);
- mmu_notifier_invalidate_range_end(mm, start, end, event);
+ mmu_notifier_invalidate_range_end(mm, vma, start, end, event);

return pages << h->order;
}
diff --git a/mm/ksm.c b/mm/ksm.c
index 4b659f1..1f3c4d7 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -873,7 +873,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,

mmun_start = addr;
mmun_end = addr + PAGE_SIZE;
- mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
mmun_end, MMU_MPROT_RONLY);

ptep = page_check_address(page, mm, addr, &ptl, 0);
@@ -914,7 +914,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
out_unlock:
pte_unmap_unlock(ptep, ptl);
out_mn:
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MPROT_RONLY);
out:
return err;
@@ -951,7 +951,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,

mmun_start = addr;
mmun_end = addr + PAGE_SIZE;
- mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);

ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
@@ -977,7 +977,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
pte_unmap_unlock(ptep, ptl);
err = 0;
out_mn:
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
out:
return err;
diff --git a/mm/memory.c b/mm/memory.c
index d3908f0..4717579 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1049,7 +1049,7 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
mmun_start = addr;
mmun_end = end;
if (is_cow)
- mmu_notifier_invalidate_range_start(src_mm, mmun_start,
+ mmu_notifier_invalidate_range_start(src_mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);

ret = 0;
@@ -1067,8 +1067,8 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
} while (dst_pgd++, src_pgd++, addr = next, addr != end);

if (is_cow)
- mmu_notifier_invalidate_range_end(src_mm, mmun_start, mmun_end,
- MMU_MIGRATE);
+ mmu_notifier_invalidate_range_end(src_mm, vma, mmun_start,
+ mmun_end, MMU_MIGRATE);
return ret;
}

@@ -1372,12 +1372,17 @@ void unmap_vmas(struct mmu_gather *tlb,
{
struct mm_struct *mm = vma->vm_mm;

- mmu_notifier_invalidate_range_start(mm, start_addr,
- end_addr, MMU_MUNMAP);
- for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next)
+ for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next) {
+ mmu_notifier_invalidate_range_start(mm, vma,
+ max(start_addr, vma->vm_start),
+ min(end_addr, vma->vm_end),
+ MMU_MUNMAP);
unmap_single_vma(tlb, vma, start_addr, end_addr, NULL);
- mmu_notifier_invalidate_range_end(mm, start_addr,
- end_addr, MMU_MUNMAP);
+ mmu_notifier_invalidate_range_end(mm, vma,
+ max(start_addr, vma->vm_start),
+ min(end_addr, vma->vm_end),
+ MMU_MUNMAP);
+ }
}

/**
@@ -1399,10 +1404,17 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
lru_add_drain();
tlb_gather_mmu(&tlb, mm, start, end);
update_hiwater_rss(mm);
- mmu_notifier_invalidate_range_start(mm, start, end, MMU_MUNMAP);
- for ( ; vma && vma->vm_start < end; vma = vma->vm_next)
+ for ( ; vma && vma->vm_start < end; vma = vma->vm_next) {
+ mmu_notifier_invalidate_range_start(mm, vma,
+ max(start, vma->vm_start),
+ min(end, vma->vm_end),
+ MMU_MUNMAP);
unmap_single_vma(&tlb, vma, start, end, details);
- mmu_notifier_invalidate_range_end(mm, start, end, MMU_MUNMAP);
+ mmu_notifier_invalidate_range_end(mm, vma,
+ max(start, vma->vm_start),
+ min(end, vma->vm_end),
+ MMU_MUNMAP);
+ }
tlb_finish_mmu(&tlb, start, end);
}

@@ -1425,9 +1437,9 @@ static void zap_page_range_single(struct vm_area_struct *vma, unsigned long addr
lru_add_drain();
tlb_gather_mmu(&tlb, mm, address, end);
update_hiwater_rss(mm);
- mmu_notifier_invalidate_range_start(mm, address, end, MMU_MUNMAP);
+ mmu_notifier_invalidate_range_start(mm, vma, address, end, MMU_MUNMAP);
unmap_single_vma(&tlb, vma, address, end, details);
- mmu_notifier_invalidate_range_end(mm, address, end, MMU_MUNMAP);
+ mmu_notifier_invalidate_range_end(mm, vma, address, end, MMU_MUNMAP);
tlb_finish_mmu(&tlb, address, end);
}

@@ -2211,7 +2223,7 @@ gotten:

mmun_start = address & PAGE_MASK;
mmun_end = mmun_start + PAGE_SIZE;
- mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);

/*
@@ -2283,7 +2295,7 @@ gotten:
unlock:
pte_unmap_unlock(page_table, ptl);
if (mmun_end > mmun_start)
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
if (old_page) {
/*
diff --git a/mm/migrate.c b/mm/migrate.c
index b526c72..0c61aa9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1820,13 +1820,13 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
WARN_ON(PageLRU(new_page));

/* Recheck the target PMD */
- mmu_notifier_invalidate_range_start(mm, mmun_start,
+ mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);
ptl = pmd_lock(mm, pmd);
if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) {
fail_putback:
spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);

/* Reverse changes made by migrate_page_copy() */
@@ -1880,7 +1880,7 @@ fail_putback:
page_remove_rmap(page);

spin_unlock(ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start,
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);

/* Take an "isolate" reference and put new page on the LRU. */
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index 9decb88..87e6bc5 100644
--- a/mm/mmu_notifier.c
+++ b/mm/mmu_notifier.c
@@ -139,6 +139,7 @@ void __mmu_notifier_change_pte(struct mm_struct *mm,
}

void __mmu_notifier_invalidate_page(struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long address,
enum mmu_event event)
{
@@ -148,12 +149,13 @@ void __mmu_notifier_invalidate_page(struct mm_struct *mm,
id = srcu_read_lock(&srcu);
hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
if (mn->ops->invalidate_page)
- mn->ops->invalidate_page(mn, mm, address, event);
+ mn->ops->invalidate_page(mn, mm, vma, address, event);
}
srcu_read_unlock(&srcu, id);
}

void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event)
@@ -165,7 +167,7 @@ void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
id = srcu_read_lock(&srcu);
hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
if (mn->ops->invalidate_range_start)
- mn->ops->invalidate_range_start(mn, mm, start,
+			mn->ops->invalidate_range_start(mn, mm, vma, start,
end, event);
}
srcu_read_unlock(&srcu, id);
@@ -173,6 +175,7 @@ void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start);

void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event)
@@ -183,7 +186,7 @@ void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
id = srcu_read_lock(&srcu);
hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
if (mn->ops->invalidate_range_end)
- mn->ops->invalidate_range_end(mn, mm, start,
+			mn->ops->invalidate_range_end(mn, mm, vma, start,
end, event);
}
srcu_read_unlock(&srcu, id);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 6ce6c23..16ce504 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -158,7 +158,7 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
/* invoke the mmu notifier if the pmd is populated */
if (!mni_start) {
mni_start = addr;
- mmu_notifier_invalidate_range_start(mm, mni_start,
+ mmu_notifier_invalidate_range_start(mm, vma, mni_start,
end, event);
}

@@ -187,7 +187,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
} while (pmd++, addr = next, addr != end);

if (mni_start)
- mmu_notifier_invalidate_range_end(mm, mni_start, end, event);
+ mmu_notifier_invalidate_range_end(mm, vma, mni_start,
+ end, event);

if (nr_huge_updates)
count_vm_numa_events(NUMA_HUGE_PTE_UPDATES, nr_huge_updates);
diff --git a/mm/mremap.c b/mm/mremap.c
index 6827d2f..9bee6de 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -177,7 +177,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,

mmun_start = old_addr;
mmun_end = old_end;
- mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start,
+ mmu_notifier_invalidate_range_start(vma->vm_mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);

for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
@@ -229,7 +229,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
if (likely(need_flush))
flush_tlb_range(vma, old_end-len, old_addr);

- mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start,
+ mmu_notifier_invalidate_range_end(vma->vm_mm, vma, mmun_start,
mmun_end, MMU_MIGRATE);

return len + old_addr - old_end; /* how much done */
diff --git a/mm/rmap.c b/mm/rmap.c
index bd7e6d7..f1be50d 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -840,7 +840,7 @@ static int page_mkclean_one(struct page *page, struct vm_area_struct *vma,
pte_unmap_unlock(pte, ptl);

if (ret) {
- mmu_notifier_invalidate_page(mm, address, MMU_WB);
+ mmu_notifier_invalidate_page(mm, vma, address, MMU_WB);
(*cleaned)++;
}
out:
@@ -1237,7 +1237,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
out_unmap:
pte_unmap_unlock(pte, ptl);
if (ret != SWAP_FAIL && !(flags & TTU_MUNLOCK))
- mmu_notifier_invalidate_page(mm, address, event);
+ mmu_notifier_invalidate_page(mm, vma, address, event);
out:
return ret;

@@ -1325,7 +1325,8 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,

mmun_start = address;
mmun_end = end;
- mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end, event);
+ mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
+ mmun_end, event);

/*
* If we can acquire the mmap_sem for read, and vma is VM_LOCKED,
@@ -1390,7 +1391,7 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
(*mapcount)--;
}
pte_unmap_unlock(pte - 1, ptl);
- mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end, event);
+ mmu_notifier_invalidate_range_end(mm, vma, mmun_start, mmun_end, event);
if (locked_vma)
up_read(&vma->vm_mm->mmap_sem);
return ret;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6e1992f..c4b7bf9 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -262,6 +262,7 @@ static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)

static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long address,
enum mmu_event event)
{
@@ -318,6 +319,7 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,

static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event)
@@ -345,6 +347,7 @@ static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,

static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
struct mm_struct *mm,
+ struct vm_area_struct *vma,
unsigned long start,
unsigned long end,
enum mmu_event event)
--
1.9.0

2014-06-28 02:01:05

by Jerome Glisse

[permalink] [raw]
Subject: [PATCH 2/6] mm: differentiate unmap for vmscan from other unmap.

From: Jérôme Glisse <[email protected]>

New code will need to be able to differentiate between a regular unmap and
an unmap triggered by vmscan, in which case we want to be as quick as possible.
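
As a sketch (hypothetical consumer, not part of this patch), code that
receives the flags can then branch on the unmap reason:

static bool unmap_is_for_vmscan(enum ttu_flags flags)
{
	/*
	 * TTU_VMSCAN marks the reclaim hot path, where we want minimal
	 * work; TTU_POISON marks memory-failure handling, where extra
	 * bookkeeping is acceptable.
	 */
	return (flags & TTU_VMSCAN) != 0;
}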

Signed-off-by: Jérôme Glisse <[email protected]>
---
include/linux/rmap.h | 15 ++++++++-------
mm/memory-failure.c | 2 +-
mm/vmscan.c | 4 ++--
3 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index be57450..eddbc07 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -72,13 +72,14 @@ struct anon_vma_chain {
};

enum ttu_flags {
- TTU_UNMAP = 1, /* unmap mode */
- TTU_MIGRATION = 2, /* migration mode */
- TTU_MUNLOCK = 4, /* munlock mode */
-
- TTU_IGNORE_MLOCK = (1 << 8), /* ignore mlock */
- TTU_IGNORE_ACCESS = (1 << 9), /* don't age */
- TTU_IGNORE_HWPOISON = (1 << 10),/* corrupted page is recoverable */
+ TTU_VMSCAN = 1, /* unmap for vmscan */
+ TTU_POISON = 2, /* unmap for poison */
+ TTU_MIGRATION = 4, /* migration mode */
+ TTU_MUNLOCK = 8, /* munlock mode */
+
+ TTU_IGNORE_MLOCK = (1 << 9), /* ignore mlock */
+ TTU_IGNORE_ACCESS = (1 << 10), /* don't age */
+ TTU_IGNORE_HWPOISON = (1 << 11),/* corrupted page is recoverable */
};

#ifdef CONFIG_MMU
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index a7a89eb..ba176c4 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -887,7 +887,7 @@ static int page_action(struct page_state *ps, struct page *p,
static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
int trapno, int flags, struct page **hpagep)
{
- enum ttu_flags ttu = TTU_UNMAP | TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;
+ enum ttu_flags ttu = TTU_POISON | TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;
struct address_space *mapping;
LIST_HEAD(tokill);
int ret;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6d24fd6..5a7d286 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1163,7 +1163,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
}

ret = shrink_page_list(&clean_pages, zone, &sc,
- TTU_UNMAP|TTU_IGNORE_ACCESS,
+ TTU_VMSCAN|TTU_IGNORE_ACCESS,
&dummy1, &dummy2, &dummy3, &dummy4, &dummy5, true);
list_splice(&clean_pages, page_list);
mod_zone_page_state(zone, NR_ISOLATED_FILE, -ret);
@@ -1518,7 +1518,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
if (nr_taken == 0)
return 0;

- nr_reclaimed = shrink_page_list(&page_list, zone, sc, TTU_UNMAP,
+ nr_reclaimed = shrink_page_list(&page_list, zone, sc, TTU_VMSCAN,
&nr_dirty, &nr_unqueued_dirty, &nr_congested,
&nr_writeback, &nr_immediate,
false);
--
1.9.0

2014-06-30 03:29:10

by John Hubbard

[permalink] [raw]
Subject: Re: [PATCH 4/6] mmu_notifier: pass through vma to invalidate_range and invalidate_page

On Fri, 27 Jun 2014, Jérôme Glisse wrote:

> From: Jérôme Glisse <[email protected]>
>
> New users of the mmu_notifier interface need to look up the vma in order
> to perform the invalidation operation. Instead of redoing a vma lookup
> inside the callback, just pass the vma through from the call site where
> it is already available.
>
> This needs a small refactoring in memory.c to call invalidate_range on
> vma boundaries; the overhead should be low enough.
>
> Signed-off-by: Jérôme Glisse <[email protected]>
> ---
> drivers/gpu/drm/i915/i915_gem_userptr.c | 1 +
> drivers/iommu/amd_iommu_v2.c | 3 +++
> drivers/misc/sgi-gru/grutlbpurge.c | 6 ++++-
> drivers/xen/gntdev.c | 4 +++-
> fs/proc/task_mmu.c | 16 ++++++++-----
> include/linux/mmu_notifier.h | 19 ++++++++++++---
> kernel/events/uprobes.c | 4 ++--
> mm/filemap_xip.c | 3 ++-
> mm/huge_memory.c | 26 ++++++++++----------
> mm/hugetlb.c | 16 ++++++-------
> mm/ksm.c | 8 +++----
> mm/memory.c | 42 +++++++++++++++++++++------------
> mm/migrate.c | 6 ++---
> mm/mmu_notifier.c | 9 ++++---
> mm/mprotect.c | 5 ++--
> mm/mremap.c | 4 ++--
> mm/rmap.c | 9 +++----
> virt/kvm/kvm_main.c | 3 +++
> 18 files changed, 116 insertions(+), 68 deletions(-)
>

Hi Jerome, considering that you have to change every call site already, it
seems to me that it would be ideal to just delete the mm argument from all
of these invalidate_range* callbacks. In other words, replace the mm
argument with the new vma argument. I don't see much point in passing
them both around, and while it would make the patch a *bit* larger, it's
mostly just an extra line or two per call site:

mm = vma->vm_mm;
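
For example (sketch only, using this series' argument order), a callback
body would then start with:

static void example_invalidate_range_start(struct mmu_notifier *mn,
					   struct vm_area_struct *vma,
					   unsigned long start,
					   unsigned long end,
					   enum mmu_event event)
{
	struct mm_struct *mm = vma->vm_mm;

	/* ... proceed exactly as before, using the derived mm ... */
}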

Also, passing the vma around really does seem like a good approach, but it
does cause a bunch of additional calls to the invalidate_range* routines,
because we generate a call per vma, instead of just one for the entire mm.
So that brings up a couple questions:

1) Is there any chance that this could cause measurable performance
regressions?

2) Should you put a little note in the commit message, mentioning this
point?

> diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
> index ed6f35e..191ac71 100644
> --- a/drivers/gpu/drm/i915/i915_gem_userptr.c
> +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
> @@ -55,6 +55,7 @@ struct i915_mmu_object {
>
> static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)

That routine has a local variable named vma, so it might be polite to
rename the local variable, to make it more obvious to the reader that they
are different. Of course, since the compiler knows which is which, feel
free to ignore this comment.

> diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c
> index 2bb9771..9f9e706 100644
> --- a/drivers/iommu/amd_iommu_v2.c
> +++ b/drivers/iommu/amd_iommu_v2.c
> @@ -422,6 +422,7 @@ static void mn_change_pte(struct mmu_notifier *mn,
>
> static void mn_invalidate_page(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long address,
> enum mmu_event event)
> {
> @@ -430,6 +431,7 @@ static void mn_invalidate_page(struct mmu_notifier *mn,
>
> static void mn_invalidate_range_start(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)
> @@ -453,6 +455,7 @@ static void mn_invalidate_range_start(struct mmu_notifier *mn,
>
> static void mn_invalidate_range_end(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)
> diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c
> index e67fed1..d02e4c7 100644
> --- a/drivers/misc/sgi-gru/grutlbpurge.c
> +++ b/drivers/misc/sgi-gru/grutlbpurge.c
> @@ -221,6 +221,7 @@ void gru_flush_all_tlb(struct gru_state *gru)
> */
> static void gru_invalidate_range_start(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start, unsigned long end,
> enum mmu_event event)
> {
> @@ -235,7 +236,9 @@ static void gru_invalidate_range_start(struct mmu_notifier *mn,
> }
>
> static void gru_invalidate_range_end(struct mmu_notifier *mn,
> - struct mm_struct *mm, unsigned long start,
> + struct mm_struct *mm,
> + struct vm_area_struct *vma,
> + unsigned long start,
> unsigned long end,
> enum mmu_event event)
> {
> @@ -250,6 +253,7 @@ static void gru_invalidate_range_end(struct mmu_notifier *mn,
> }
>
> static void gru_invalidate_page(struct mmu_notifier *mn, struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long address,
> enum mmu_event event)
> {
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index fe9da94..219928b 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -428,6 +428,7 @@ static void unmap_if_in_range(struct grant_map *map,
>
> static void mn_invl_range_start(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)
> @@ -447,10 +448,11 @@ static void mn_invl_range_start(struct mmu_notifier *mn,
>
> static void mn_invl_page(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long address,
> enum mmu_event event)
> {
> - mn_invl_range_start(mn, mm, address, address + PAGE_SIZE, event);
> + mn_invl_range_start(mn, mm, vma, address, address + PAGE_SIZE, event);
> }
>
> static void mn_release(struct mmu_notifier *mn,
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index e9e79f7..8b0f25d 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -829,13 +829,15 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
> .private = &cp,
> };
> down_read(&mm->mmap_sem);
> - if (type == CLEAR_REFS_SOFT_DIRTY)
> - mmu_notifier_invalidate_range_start(mm, 0,
> - -1, MMU_STATUS);
> for (vma = mm->mmap; vma; vma = vma->vm_next) {
> cp.vma = vma;
> if (is_vm_hugetlb_page(vma))
> continue;
> + if (type == CLEAR_REFS_SOFT_DIRTY)
> + mmu_notifier_invalidate_range_start(mm, vma,
> + vma->vm_start,
> + vma->vm_end,
> + MMU_STATUS);
> /*
> * Writing 1 to /proc/pid/clear_refs affects all pages.
> *
> @@ -857,10 +859,12 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
> }
> walk_page_range(vma->vm_start, vma->vm_end,
> &clear_refs_walk);
> + if (type == CLEAR_REFS_SOFT_DIRTY)
> + mmu_notifier_invalidate_range_end(mm, vma,
> + vma->vm_start,
> + vma->vm_end,
> + MMU_STATUS);
> }
> - if (type == CLEAR_REFS_SOFT_DIRTY)
> - mmu_notifier_invalidate_range_end(mm, 0,
> - -1, MMU_STATUS);
> flush_tlb_mm(mm);
> up_read(&mm->mmap_sem);
> mmput(mm);
> diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> index 82e9577..8907e5d 100644
> --- a/include/linux/mmu_notifier.h
> +++ b/include/linux/mmu_notifier.h
> @@ -137,6 +137,7 @@ struct mmu_notifier_ops {
> */
> void (*invalidate_page)(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long address,
> enum mmu_event event);
>
> @@ -185,11 +186,13 @@ struct mmu_notifier_ops {
> */
> void (*invalidate_range_start)(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event);
> void (*invalidate_range_end)(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event);
> @@ -233,13 +236,16 @@ extern void __mmu_notifier_change_pte(struct mm_struct *mm,
> pte_t pte,
> enum mmu_event event);
> extern void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long address,
> enum mmu_event event);
> extern void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event);
> extern void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event);
> @@ -276,29 +282,33 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm,
> }
>
> static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long address,
> enum mmu_event event)
> {
> if (mm_has_notifiers(mm))
> - __mmu_notifier_invalidate_page(mm, address, event);
> + __mmu_notifier_invalidate_page(mm, vma, address, event);
> }
>
> static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)
> {
> if (mm_has_notifiers(mm))
> - __mmu_notifier_invalidate_range_start(mm, start, end, event);
> + __mmu_notifier_invalidate_range_start(mm, vma, start,
> + end, event);
> }
>
> static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)
> {
> if (mm_has_notifiers(mm))
> - __mmu_notifier_invalidate_range_end(mm, start, end, event);
> + __mmu_notifier_invalidate_range_end(mm, vma, start, end, event);
> }
>
> static inline void mmu_notifier_mm_init(struct mm_struct *mm)
> @@ -380,12 +390,14 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm,
> }
>
> static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long address,
> enum mmu_event event)
> {
> }
>
> static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)
> @@ -393,6 +405,7 @@ static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> }
>
> static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 296f81e..0f552bc 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -177,7 +177,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
> /* For try_to_free_swap() and munlock_vma_page() below */
> lock_page(page);
>
> - mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> err = -EAGAIN;
> ptep = page_check_address(page, mm, addr, &ptl, 0);
> @@ -212,7 +212,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
> err = 0;
> unlock:
> mem_cgroup_cancel_charge(kpage, memcg);
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> unlock_page(page);
> return err;
> diff --git a/mm/filemap_xip.c b/mm/filemap_xip.c
> index a2b3f09..f0113df 100644
> --- a/mm/filemap_xip.c
> +++ b/mm/filemap_xip.c
> @@ -198,7 +198,8 @@ retry:
> BUG_ON(pte_dirty(pteval));
> pte_unmap_unlock(pte, ptl);
> /* must invalidate_page _before_ freeing the page */
> - mmu_notifier_invalidate_page(mm, address, MMU_MIGRATE);
> + mmu_notifier_invalidate_page(mm, vma, address,
> + MMU_MIGRATE);
> page_cache_release(page);
> }
> }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index fa30857..cc74b60 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1022,7 +1022,7 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
>
> mmun_start = haddr;
> mmun_end = haddr + HPAGE_PMD_SIZE;
> - mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> for (i = 0; i < HPAGE_PMD_NR; i++) {
> @@ -1064,7 +1064,7 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
> page_remove_rmap(page);
> spin_unlock(ptl);
>
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> ret |= VM_FAULT_WRITE;
> @@ -1075,7 +1075,7 @@ out:
>
> out_free_pages:
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> for (i = 0; i < HPAGE_PMD_NR; i++) {
> memcg = (void *)page_private(pages[i]);
> @@ -1162,7 +1162,7 @@ alloc:
>
> mmun_start = haddr;
> mmun_end = haddr + HPAGE_PMD_SIZE;
> - mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> if (!page)
> @@ -1201,7 +1201,7 @@ alloc:
> }
> spin_unlock(ptl);
> out_mn:
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> out:
> return ret;
> @@ -1637,7 +1637,7 @@ static int __split_huge_page_splitting(struct page *page,
> const unsigned long mmun_start = address;
> const unsigned long mmun_end = address + HPAGE_PMD_SIZE;
>
> - mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> mmun_end, MMU_STATUS);
> pmd = page_check_address_pmd(page, mm, address,
> PAGE_CHECK_ADDRESS_PMD_NOTSPLITTING_FLAG, &ptl);
> @@ -1653,7 +1653,7 @@ static int __split_huge_page_splitting(struct page *page,
> ret = 1;
> spin_unlock(ptl);
> }
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_STATUS);
>
> return ret;
> @@ -2453,7 +2453,7 @@ static void collapse_huge_page(struct mm_struct *mm,
>
> mmun_start = address;
> mmun_end = address + HPAGE_PMD_SIZE;
> - mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
> /*
> @@ -2464,7 +2464,7 @@ static void collapse_huge_page(struct mm_struct *mm,
> */
> _pmd = pmdp_clear_flush(vma, address, pmd);
> spin_unlock(pmd_ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> spin_lock(pte_ptl);
> @@ -2854,19 +2854,19 @@ void __split_huge_page_pmd(struct vm_area_struct *vma, unsigned long address,
> mmun_start = haddr;
> mmun_end = haddr + HPAGE_PMD_SIZE;
> again:
> - mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> ptl = pmd_lock(mm, pmd);
> if (unlikely(!pmd_trans_huge(*pmd))) {
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> return;
> }
> if (is_huge_zero_pmd(*pmd)) {
> __split_huge_zero_page_pmd(vma, haddr, pmd);
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> return;
> }
> @@ -2874,7 +2874,7 @@ again:
> VM_BUG_ON_PAGE(!page_count(page), page);
> get_page(page);
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> split_huge_page(page);
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 73e1576..15f0123 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2565,7 +2565,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> mmun_start = vma->vm_start;
> mmun_end = vma->vm_end;
> if (cow)
> - mmu_notifier_invalidate_range_start(src, mmun_start,
> + mmu_notifier_invalidate_range_start(src, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
> @@ -2616,7 +2616,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> }
>
> if (cow)
> - mmu_notifier_invalidate_range_end(src, mmun_start,
> + mmu_notifier_invalidate_range_end(src, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> return ret;
> @@ -2643,7 +2643,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> BUG_ON(end & ~huge_page_mask(h));
>
> tlb_start_vma(tlb, vma);
> - mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> again:
> for (address = start; address < end; address += sz) {
> @@ -2715,7 +2715,7 @@ unlock:
> if (address < end && !ref_page)
> goto again;
> }
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> tlb_end_vma(tlb, vma);
> }
> @@ -2903,7 +2903,7 @@ retry_avoidcopy:
>
> mmun_start = address & huge_page_mask(h);
> mmun_end = mmun_start + huge_page_size(h);
> - mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> /*
> * Retake the page table lock to check for racing updates
> @@ -2924,7 +2924,7 @@ retry_avoidcopy:
> new_page = old_page;
> }
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> page_cache_release(new_page);
> page_cache_release(old_page);
> @@ -3363,7 +3363,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> BUG_ON(address >= end);
> flush_cache_range(vma, address, end);
>
> - mmu_notifier_invalidate_range_start(mm, start, end, event);
> + mmu_notifier_invalidate_range_start(mm, vma, start, end, event);
> mutex_lock(&vma->vm_file->f_mapping->i_mmap_mutex);
> for (; address < end; address += huge_page_size(h)) {
> spinlock_t *ptl;
> @@ -3393,7 +3393,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> */
> flush_tlb_range(vma, start, end);
> mutex_unlock(&vma->vm_file->f_mapping->i_mmap_mutex);
> - mmu_notifier_invalidate_range_end(mm, start, end, event);
> + mmu_notifier_invalidate_range_end(mm, vma, start, end, event);
>
> return pages << h->order;
> }
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 4b659f1..1f3c4d7 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -873,7 +873,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
>
> mmun_start = addr;
> mmun_end = addr + PAGE_SIZE;
> - mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> mmun_end, MMU_MPROT_RONLY);
>
> ptep = page_check_address(page, mm, addr, &ptl, 0);
> @@ -914,7 +914,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
> out_unlock:
> pte_unmap_unlock(ptep, ptl);
> out_mn:
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MPROT_RONLY);
> out:
> return err;
> @@ -951,7 +951,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
>
> mmun_start = addr;
> mmun_end = addr + PAGE_SIZE;
> - mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
> @@ -977,7 +977,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
> pte_unmap_unlock(ptep, ptl);
> err = 0;
> out_mn:
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> out:
> return err;
> diff --git a/mm/memory.c b/mm/memory.c
> index d3908f0..4717579 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1049,7 +1049,7 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> mmun_start = addr;
> mmun_end = end;
> if (is_cow)
> - mmu_notifier_invalidate_range_start(src_mm, mmun_start,
> + mmu_notifier_invalidate_range_start(src_mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> ret = 0;
> @@ -1067,8 +1067,8 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> } while (dst_pgd++, src_pgd++, addr = next, addr != end);
>
> if (is_cow)
> - mmu_notifier_invalidate_range_end(src_mm, mmun_start, mmun_end,
> - MMU_MIGRATE);
> + mmu_notifier_invalidate_range_end(src_mm, vma, mmun_start,
> + mmun_end, MMU_MIGRATE);
> return ret;
> }
>
> @@ -1372,12 +1372,17 @@ void unmap_vmas(struct mmu_gather *tlb,
> {
> struct mm_struct *mm = vma->vm_mm;
>
> - mmu_notifier_invalidate_range_start(mm, start_addr,
> - end_addr, MMU_MUNMAP);
> - for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next)
> + for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next) {
> + mmu_notifier_invalidate_range_start(mm, vma,
> + max(start_addr, vma->vm_start),
> + min(end_addr, vma->vm_end),
> + MMU_MUNMAP);
> unmap_single_vma(tlb, vma, start_addr, end_addr, NULL);
> - mmu_notifier_invalidate_range_end(mm, start_addr,
> - end_addr, MMU_MUNMAP);
> + mmu_notifier_invalidate_range_end(mm, vma,
> + max(start_addr, vma->vm_start),
> + min(end_addr, vma->vm_end),
> + MMU_MUNMAP);
> + }
> }
>
> /**
> @@ -1399,10 +1404,17 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
> lru_add_drain();
> tlb_gather_mmu(&tlb, mm, start, end);
> update_hiwater_rss(mm);
> - mmu_notifier_invalidate_range_start(mm, start, end, MMU_MUNMAP);
> - for ( ; vma && vma->vm_start < end; vma = vma->vm_next)
> + for ( ; vma && vma->vm_start < end; vma = vma->vm_next) {
> + mmu_notifier_invalidate_range_start(mm, vma,
> + max(start, vma->vm_start),
> + min(end, vma->vm_end),
> + MMU_MUNMAP);
> unmap_single_vma(&tlb, vma, start, end, details);
> - mmu_notifier_invalidate_range_end(mm, start, end, MMU_MUNMAP);
> + mmu_notifier_invalidate_range_end(mm, vma,
> + max(start, vma->vm_start),
> + min(end, vma->vm_end),
> + MMU_MUNMAP);
> + }
> tlb_finish_mmu(&tlb, start, end);
> }
>
> @@ -1425,9 +1437,9 @@ static void zap_page_range_single(struct vm_area_struct *vma, unsigned long addr
> lru_add_drain();
> tlb_gather_mmu(&tlb, mm, address, end);
> update_hiwater_rss(mm);
> - mmu_notifier_invalidate_range_start(mm, address, end, MMU_MUNMAP);
> + mmu_notifier_invalidate_range_start(mm, vma, address, end, MMU_MUNMAP);
> unmap_single_vma(&tlb, vma, address, end, details);
> - mmu_notifier_invalidate_range_end(mm, address, end, MMU_MUNMAP);
> + mmu_notifier_invalidate_range_end(mm, vma, address, end, MMU_MUNMAP);
> tlb_finish_mmu(&tlb, address, end);
> }
>
> @@ -2211,7 +2223,7 @@ gotten:
>
> mmun_start = address & PAGE_MASK;
> mmun_end = mmun_start + PAGE_SIZE;
> - mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> /*
> @@ -2283,7 +2295,7 @@ gotten:
> unlock:
> pte_unmap_unlock(page_table, ptl);
> if (mmun_end > mmun_start)
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> if (old_page) {
> /*
> diff --git a/mm/migrate.c b/mm/migrate.c
> index b526c72..0c61aa9 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1820,13 +1820,13 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
> WARN_ON(PageLRU(new_page));
>
> /* Recheck the target PMD */
> - mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
> ptl = pmd_lock(mm, pmd);
> if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) {
> fail_putback:
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> /* Reverse changes made by migrate_page_copy() */
> @@ -1880,7 +1880,7 @@ fail_putback:
> page_remove_rmap(page);
>
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> /* Take an "isolate" reference and put new page on the LRU. */
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 9decb88..87e6bc5 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -139,6 +139,7 @@ void __mmu_notifier_change_pte(struct mm_struct *mm,
> }
>
> void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long address,
> enum mmu_event event)
> {
> @@ -148,12 +149,13 @@ void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> id = srcu_read_lock(&srcu);
> hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> if (mn->ops->invalidate_page)
> - mn->ops->invalidate_page(mn, mm, address, event);
> + mn->ops->invalidate_page(mn, mm, vma, address, event);
> }
> srcu_read_unlock(&srcu, id);
> }
>
> void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)
> @@ -165,7 +167,7 @@ void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> id = srcu_read_lock(&srcu);
> hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> if (mn->ops->invalidate_range_start)
> - mn->ops->invalidate_range_start(mn, mm, start,
> +			mn->ops->invalidate_range_start(mn, mm, vma, start,
> end, event);
> }
> srcu_read_unlock(&srcu, id);
> @@ -173,6 +175,7 @@ void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start);
>
> void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)
> @@ -183,7 +186,7 @@ void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> id = srcu_read_lock(&srcu);
> hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> if (mn->ops->invalidate_range_end)
> - mn->ops->invalidate_range_end(mn, mm, start,
> +			mn->ops->invalidate_range_end(mn, mm, vma, start,
> end, event);
> }
> srcu_read_unlock(&srcu, id);
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 6ce6c23..16ce504 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -158,7 +158,7 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
> /* invoke the mmu notifier if the pmd is populated */
> if (!mni_start) {
> mni_start = addr;
> - mmu_notifier_invalidate_range_start(mm, mni_start,
> + mmu_notifier_invalidate_range_start(mm, vma, mni_start,
> end, event);
> }
>
> @@ -187,7 +187,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
> } while (pmd++, addr = next, addr != end);
>
> if (mni_start)
> - mmu_notifier_invalidate_range_end(mm, mni_start, end, event);
> + mmu_notifier_invalidate_range_end(mm, vma, mni_start,
> + end, event);
>
> if (nr_huge_updates)
> count_vm_numa_events(NUMA_HUGE_PTE_UPDATES, nr_huge_updates);
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 6827d2f..9bee6de 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -177,7 +177,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>
> mmun_start = old_addr;
> mmun_end = old_end;
> - mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start,
> + mmu_notifier_invalidate_range_start(vma->vm_mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
> @@ -229,7 +229,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> if (likely(need_flush))
> flush_tlb_range(vma, old_end-len, old_addr);
>
> - mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start,
> + mmu_notifier_invalidate_range_end(vma->vm_mm, vma, mmun_start,
> mmun_end, MMU_MIGRATE);
>
> return len + old_addr - old_end; /* how much done */
> diff --git a/mm/rmap.c b/mm/rmap.c
> index bd7e6d7..f1be50d 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -840,7 +840,7 @@ static int page_mkclean_one(struct page *page, struct vm_area_struct *vma,
> pte_unmap_unlock(pte, ptl);
>
> if (ret) {
> - mmu_notifier_invalidate_page(mm, address, MMU_WB);
> + mmu_notifier_invalidate_page(mm, vma, address, MMU_WB);
> (*cleaned)++;
> }
> out:
> @@ -1237,7 +1237,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> out_unmap:
> pte_unmap_unlock(pte, ptl);
> if (ret != SWAP_FAIL && !(flags & TTU_MUNLOCK))
> - mmu_notifier_invalidate_page(mm, address, event);
> + mmu_notifier_invalidate_page(mm, vma, address, event);
> out:
> return ret;
>
> @@ -1325,7 +1325,8 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
>
> mmun_start = address;
> mmun_end = end;
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end, event);
> + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> + mmun_end, event);
>
> /*
> * If we can acquire the mmap_sem for read, and vma is VM_LOCKED,
> @@ -1390,7 +1391,7 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
> (*mapcount)--;
> }
> pte_unmap_unlock(pte - 1, ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end, event);
> + mmu_notifier_invalidate_range_end(mm, vma, mmun_start, mmun_end, event);
> if (locked_vma)
> up_read(&vma->vm_mm->mmap_sem);
> return ret;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 6e1992f..c4b7bf9 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -262,6 +262,7 @@ static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
>
> static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long address,
> enum mmu_event event)
> {
> @@ -318,6 +319,7 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
>
> static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)
> @@ -345,6 +347,7 @@ static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
>
> static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
> struct mm_struct *mm,
> + struct vm_area_struct *vma,
> unsigned long start,
> unsigned long end,
> enum mmu_event event)
> --
> 1.9.0
>
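One more illustration, while I'm here: with the vma argument plumbed
through, a listener can cheaply filter before doing any real work. A
hypothetical callback (my_invalidate() and the vm_file test are invented
for illustration, not part of this patch):

static void my_invalidate_page(struct mmu_notifier *mn,
                               struct mm_struct *mm,
                               struct vm_area_struct *vma,
                               unsigned long address,
                               enum mmu_event event)
{
        /* Skip vmas that this device never mirrors (file-backed here). */
        if (vma->vm_file)
                return;
        my_invalidate(mn, address, address + PAGE_SIZE);
}

That sort of early-out is, I assume, the kind of thing passing the vma
down is meant to enable.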

Other than the refinements suggested above, I can't seem to find anything
wrong with this patch, so:

Reviewed-by: John Hubbard <[email protected]>

thanks,
John H.

2014-06-30 03:49:24

by John Hubbard

[permalink] [raw]
Subject: Re: [PATCH 1/6] mmput: use notifier chain to call subsystem exit handler.

On Fri, 27 Jun 2014, Jérôme Glisse wrote:

> From: Jérôme Glisse <[email protected]>
>
> Several subsystem require a callback when a mm struct is being destroy
> so that they can cleanup there respective per mm struct. Instead of
> having each subsystem add its callback to mmput use a notifier chain
> to call each of the subsystem.
>
> This will allow new subsystem to register callback even if they are
> module. There should be no contention on the rw semaphore protecting
> the call chain and the impact on the code path should be low and
> burried in the noise.
>
> Note that this patch also move the call to cleanup functions after
> exit_mmap so that new call back can assume that mmu_notifier_release
> have already been call. This does not impact existing cleanup functions
> as they do not rely on anything that exit_mmap is freeing. Also moved
> khugepaged_exit to exit_mmap so that ordering is preserved for that
> function.
>
> Signed-off-by: Jérôme Glisse <[email protected]>
> ---
> fs/aio.c | 29 ++++++++++++++++++++++-------
> include/linux/aio.h | 2 --
> include/linux/ksm.h | 11 -----------
> include/linux/sched.h | 5 +++++
> include/linux/uprobes.h | 1 -
> kernel/events/uprobes.c | 19 ++++++++++++++++---
> kernel/fork.c | 22 ++++++++++++++++++----
> mm/ksm.c | 26 +++++++++++++++++++++-----
> mm/mmap.c | 3 +++
> 9 files changed, 85 insertions(+), 33 deletions(-)
>
> diff --git a/fs/aio.c b/fs/aio.c
> index c1d8c48..1d06e92 100644
> --- a/fs/aio.c
> +++ b/fs/aio.c
> @@ -40,6 +40,7 @@
> #include <linux/ramfs.h>
> #include <linux/percpu-refcount.h>
> #include <linux/mount.h>
> +#include <linux/notifier.h>
>
> #include <asm/kmap_types.h>
> #include <asm/uaccess.h>
> @@ -774,20 +775,22 @@ ssize_t wait_on_sync_kiocb(struct kiocb *req)
> EXPORT_SYMBOL(wait_on_sync_kiocb);
>
> /*
> - * exit_aio: called when the last user of mm goes away. At this point, there is
> + * aio_exit: called when the last user of mm goes away. At this point, there is
> * no way for any new requests to be submited or any of the io_* syscalls to be
> * called on the context.
> *
> * There may be outstanding kiocbs, but free_ioctx() will explicitly wait on
> * them.
> */
> -void exit_aio(struct mm_struct *mm)
> +static int aio_exit(struct notifier_block *nb,
> + unsigned long action, void *data)
> {
> + struct mm_struct *mm = data;
> struct kioctx_table *table = rcu_dereference_raw(mm->ioctx_table);
> int i;
>
> if (!table)
> - return;
> + return 0;
>
> for (i = 0; i < table->nr; ++i) {
> struct kioctx *ctx = table->table[i];
> @@ -796,10 +799,10 @@ void exit_aio(struct mm_struct *mm)
> continue;
> /*
> * We don't need to bother with munmap() here - exit_mmap(mm)
> - * is coming and it'll unmap everything. And we simply can't,
> - * this is not necessarily our ->mm.
> - * Since kill_ioctx() uses non-zero ->mmap_size as indicator
> - * that it needs to unmap the area, just set it to 0.
> + * have already been call and everything is unmap by now. But
> + * to be safe set ->mmap_size to 0 since aio_free_ring() uses
> + * non-zero ->mmap_size as indicator that it needs to unmap the
> + * area.
> */

Actually, I think the original part of the comment about kill_ioctx
was accurate, but the new reference to aio_free_ring looks like a typo
(?). I'd write the entire comment as follows (I've dropped the leading
whitespace, for email):

/*
* We don't need to bother with munmap() here - exit_mmap(mm)
* has already been called and everything is unmapped by now.
* But to be safe, set ->mmap_size to 0 since kill_ioctx() uses a
* non-zero ->mmap_size as an indicator that it needs to unmap the
* area.
*/


> ctx->mmap_size = 0;
> kill_ioctx(mm, ctx, NULL);
> @@ -807,6 +810,7 @@ void exit_aio(struct mm_struct *mm)
>
> RCU_INIT_POINTER(mm->ioctx_table, NULL);
> kfree(table);
> + return 0;
> }
>
> static void put_reqs_available(struct kioctx *ctx, unsigned nr)
> @@ -1629,3 +1633,14 @@ SYSCALL_DEFINE5(io_getevents, aio_context_t, ctx_id,
> }
> return ret;
> }
> +
> +static struct notifier_block aio_mmput_nb = {
> + .notifier_call = aio_exit,
> + .priority = 1,
> +};
> +
> +static int __init aio_init(void)
> +{
> + return mmput_register_notifier(&aio_mmput_nb);
> +}
> +subsys_initcall(aio_init);
> diff --git a/include/linux/aio.h b/include/linux/aio.h
> index d9c92da..6308fac 100644
> --- a/include/linux/aio.h
> +++ b/include/linux/aio.h
> @@ -73,7 +73,6 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
> extern ssize_t wait_on_sync_kiocb(struct kiocb *iocb);
> extern void aio_complete(struct kiocb *iocb, long res, long res2);
> struct mm_struct;
> -extern void exit_aio(struct mm_struct *mm);
> extern long do_io_submit(aio_context_t ctx_id, long nr,
> struct iocb __user *__user *iocbpp, bool compat);
> void kiocb_set_cancel_fn(struct kiocb *req, kiocb_cancel_fn *cancel);
> @@ -81,7 +80,6 @@ void kiocb_set_cancel_fn(struct kiocb *req, kiocb_cancel_fn *cancel);
> static inline ssize_t wait_on_sync_kiocb(struct kiocb *iocb) { return 0; }
> static inline void aio_complete(struct kiocb *iocb, long res, long res2) { }
> struct mm_struct;
> -static inline void exit_aio(struct mm_struct *mm) { }
> static inline long do_io_submit(aio_context_t ctx_id, long nr,
> struct iocb __user * __user *iocbpp,
> bool compat) { return 0; }
> diff --git a/include/linux/ksm.h b/include/linux/ksm.h
> index 3be6bb1..84c184f 100644
> --- a/include/linux/ksm.h
> +++ b/include/linux/ksm.h
> @@ -20,7 +20,6 @@ struct mem_cgroup;
> int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
> unsigned long end, int advice, unsigned long *vm_flags);
> int __ksm_enter(struct mm_struct *mm);
> -void __ksm_exit(struct mm_struct *mm);
>
> static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
> {
> @@ -29,12 +28,6 @@ static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
> return 0;
> }
>
> -static inline void ksm_exit(struct mm_struct *mm)
> -{
> - if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
> - __ksm_exit(mm);
> -}
> -
> /*
> * A KSM page is one of those write-protected "shared pages" or "merged pages"
> * which KSM maps into multiple mms, wherever identical anonymous page content
> @@ -83,10 +76,6 @@ static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
> return 0;
> }
>
> -static inline void ksm_exit(struct mm_struct *mm)
> -{
> -}
> -
> static inline int PageKsm(struct page *page)
> {
> return 0;
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 322d4fc..428b3cf 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -2384,6 +2384,11 @@ static inline void mmdrop(struct mm_struct * mm)
> __mmdrop(mm);
> }
>
> +/* mmput call list of notifier and subsystem/module can register
> + * new one through this call.
> + */
> +extern int mmput_register_notifier(struct notifier_block *nb);
> +extern int mmput_unregister_notifier(struct notifier_block *nb);
> /* mmput gets rid of the mappings and all user-space */
> extern void mmput(struct mm_struct *);
> /* Grab a reference to a task's mm, if it is not already going away */
> diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
> index 4f844c6..44e7267 100644
> --- a/include/linux/uprobes.h
> +++ b/include/linux/uprobes.h
> @@ -120,7 +120,6 @@ extern int uprobe_pre_sstep_notifier(struct pt_regs *regs);
> extern void uprobe_notify_resume(struct pt_regs *regs);
> extern bool uprobe_deny_signal(void);
> extern bool arch_uprobe_skip_sstep(struct arch_uprobe *aup, struct pt_regs *regs);
> -extern void uprobe_clear_state(struct mm_struct *mm);
> extern int arch_uprobe_analyze_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long addr);
> extern int arch_uprobe_pre_xol(struct arch_uprobe *aup, struct pt_regs *regs);
> extern int arch_uprobe_post_xol(struct arch_uprobe *aup, struct pt_regs *regs);
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 46b7c31..32b04dc 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -37,6 +37,7 @@
> #include <linux/percpu-rwsem.h>
> #include <linux/task_work.h>
> #include <linux/shmem_fs.h>
> +#include <linux/notifier.h>
>
> #include <linux/uprobes.h>
>
> @@ -1220,16 +1221,19 @@ static struct xol_area *get_xol_area(void)
> /*
> * uprobe_clear_state - Free the area allocated for slots.
> */
> -void uprobe_clear_state(struct mm_struct *mm)
> +static int uprobe_clear_state(struct notifier_block *nb,
> + unsigned long action, void *data)
> {
> + struct mm_struct *mm = data;
> struct xol_area *area = mm->uprobes_state.xol_area;
>
> if (!area)
> - return;
> + return 0;
>
> put_page(area->page);
> kfree(area->bitmap);
> kfree(area);
> + return 0;
> }
>
> void uprobe_start_dup_mmap(void)
> @@ -1979,9 +1983,14 @@ static struct notifier_block uprobe_exception_nb = {
> .priority = INT_MAX-1, /* notified after kprobes, kgdb */
> };
>
> +static struct notifier_block uprobe_mmput_nb = {
> + .notifier_call = uprobe_clear_state,
> + .priority = 0,
> +};
> +
> static int __init init_uprobes(void)
> {
> - int i;
> + int i, err;
>
> for (i = 0; i < UPROBES_HASH_SZ; i++)
> mutex_init(&uprobes_mmap_mutex[i]);
> @@ -1989,6 +1998,10 @@ static int __init init_uprobes(void)
> if (percpu_init_rwsem(&dup_mmap_sem))
> return -ENOMEM;
>
> + err = mmput_register_notifier(&uprobe_mmput_nb);
> + if (err)
> + return err;
> +
> return register_die_notifier(&uprobe_exception_nb);
> }
> __initcall(init_uprobes);
> diff --git a/kernel/fork.c b/kernel/fork.c
> index dd8864f..b448509 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -87,6 +87,8 @@
> #define CREATE_TRACE_POINTS
> #include <trace/events/task.h>
>
> +static BLOCKING_NOTIFIER_HEAD(mmput_notifier);
> +
> /*
> * Protected counters by write_lock_irq(&tasklist_lock)
> */
> @@ -623,6 +625,21 @@ void __mmdrop(struct mm_struct *mm)
> EXPORT_SYMBOL_GPL(__mmdrop);
>
> /*
> + * Register a notifier that will be call by mmput
> + */
> +int mmput_register_notifier(struct notifier_block *nb)
> +{
> + return blocking_notifier_chain_register(&mmput_notifier, nb);
> +}
> +EXPORT_SYMBOL_GPL(mmput_register_notifier);
> +
> +int mmput_unregister_notifier(struct notifier_block *nb)
> +{
> + return blocking_notifier_chain_unregister(&mmput_notifier, nb);
> +}
> +EXPORT_SYMBOL_GPL(mmput_unregister_notifier);
> +
> +/*
> * Decrement the use count and release all resources for an mm.
> */
> void mmput(struct mm_struct *mm)
> @@ -630,11 +647,8 @@ void mmput(struct mm_struct *mm)
> might_sleep();
>
> if (atomic_dec_and_test(&mm->mm_users)) {
> - uprobe_clear_state(mm);
> - exit_aio(mm);
> - ksm_exit(mm);
> - khugepaged_exit(mm); /* must run before exit_mmap */
> exit_mmap(mm);
> + blocking_notifier_call_chain(&mmput_notifier, 0, mm);
> set_mm_exe_file(mm, NULL);
> if (!list_empty(&mm->mmlist)) {
> spin_lock(&mmlist_lock);
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 346ddc9..cb1e976 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -37,6 +37,7 @@
> #include <linux/freezer.h>
> #include <linux/oom.h>
> #include <linux/numa.h>
> +#include <linux/notifier.h>
>
> #include <asm/tlbflush.h>
> #include "internal.h"
> @@ -1586,7 +1587,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
> ksm_scan.mm_slot = slot;
> spin_unlock(&ksm_mmlist_lock);
> /*
> - * Although we tested list_empty() above, a racing __ksm_exit
> + * Although we tested list_empty() above, a racing ksm_exit
> * of the last mm on the list may have removed it since then.
> */
> if (slot == &ksm_mm_head)
> @@ -1658,9 +1659,9 @@ next_mm:
> /*
> * We've completed a full scan of all vmas, holding mmap_sem
> * throughout, and found no VM_MERGEABLE: so do the same as
> - * __ksm_exit does to remove this mm from all our lists now.
> - * This applies either when cleaning up after __ksm_exit
> - * (but beware: we can reach here even before __ksm_exit),
> + * ksm_exit does to remove this mm from all our lists now.
> + * This applies either when cleaning up after ksm_exit
> + * (but beware: we can reach here even before ksm_exit),
> * or when all VM_MERGEABLE areas have been unmapped (and
> * mmap_sem then protects against race with MADV_MERGEABLE).
> */
> @@ -1821,11 +1822,16 @@ int __ksm_enter(struct mm_struct *mm)
> return 0;
> }
>
> -void __ksm_exit(struct mm_struct *mm)
> +static int ksm_exit(struct notifier_block *nb,
> + unsigned long action, void *data)
> {
> + struct mm_struct *mm = data;
> struct mm_slot *mm_slot;
> int easy_to_free = 0;
>
> + if (!test_bit(MMF_VM_MERGEABLE, &mm->flags))
> + return 0;
> +
> /*
> * This process is exiting: if it's straightforward (as is the
> * case when ksmd was never running), free mm_slot immediately.
> @@ -1857,6 +1863,7 @@ void __ksm_exit(struct mm_struct *mm)
> down_write(&mm->mmap_sem);
> up_write(&mm->mmap_sem);
> }
> + return 0;
> }
>
> struct page *ksm_might_need_to_copy(struct page *page,
> @@ -2305,11 +2312,20 @@ static struct attribute_group ksm_attr_group = {
> };
> #endif /* CONFIG_SYSFS */
>
> +static struct notifier_block ksm_mmput_nb = {
> + .notifier_call = ksm_exit,
> + .priority = 2,
> +};
> +
> static int __init ksm_init(void)
> {
> struct task_struct *ksm_thread;
> int err;
>
> + err = mmput_register_notifier(&ksm_mmput_nb);
> + if (err)
> + return err;
> +

In order to be perfectly consistent with this routine's existing code, you
would want to write:

if (err)
goto out;

...but it does the same thing as your code. It's just a consistency thing.
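As an aside, for anyone adding a new subsystem on top of this: the
registration pattern generalizes to modules as well, which I take to be
the point of the series. A minimal hypothetical sketch (my_mm_exit(),
my_free_state() and my_mmput_nb are invented names):

static int my_mm_exit(struct notifier_block *nb,
                      unsigned long action, void *data)
{
        struct mm_struct *mm = data;

        /* exit_mmap() has already run; only tear down per-mm state. */
        my_free_state(mm);
        return 0;
}

static struct notifier_block my_mmput_nb = {
        .notifier_call = my_mm_exit,
};

static int __init my_init(void)
{
        return mmput_register_notifier(&my_mmput_nb);
}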

> err = ksm_slab_init();
> if (err)
> goto out;
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 61aec93..b684a21 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2775,6 +2775,9 @@ void exit_mmap(struct mm_struct *mm)
> struct vm_area_struct *vma;
> unsigned long nr_accounted = 0;
>
> + /* Important to call this first. */
> + khugepaged_exit(mm);
> +
> /* mm's last user has gone, and its about to be pulled down */
> mmu_notifier_release(mm);
>
> --
> 1.9.0
>

Above points are extremely minor, so:

Reviewed-by: John Hubbard <[email protected]>

thanks,
John H.

2014-06-30 03:58:24

by John Hubbard

[permalink] [raw]
Subject: Re: [PATCH 2/6] mm: differentiate unmap for vmscan from other unmap.

On Fri, 27 Jun 2014, Jérôme Glisse wrote:

> From: Jérôme Glisse <[email protected]>
>
> New code will need to be able to differentiate between a regular unmap and
> an unmap triggered by vmscan, in which case we want to be as quick as possible.
>
> Signed-off-by: Jérôme Glisse <[email protected]>
> ---
> include/linux/rmap.h | 15 ++++++++-------
> mm/memory-failure.c | 2 +-
> mm/vmscan.c | 4 ++--
> 3 files changed, 11 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index be57450..eddbc07 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -72,13 +72,14 @@ struct anon_vma_chain {
> };
>
> enum ttu_flags {
> - TTU_UNMAP = 1, /* unmap mode */
> - TTU_MIGRATION = 2, /* migration mode */
> - TTU_MUNLOCK = 4, /* munlock mode */
> -
> - TTU_IGNORE_MLOCK = (1 << 8), /* ignore mlock */
> - TTU_IGNORE_ACCESS = (1 << 9), /* don't age */
> - TTU_IGNORE_HWPOISON = (1 << 10),/* corrupted page is recoverable */
> + TTU_VMSCAN = 1, /* unmap for vmscan */
> + TTU_POISON = 2, /* unmap for poison */
> + TTU_MIGRATION = 4, /* migration mode */
> + TTU_MUNLOCK = 8, /* munlock mode */
> +
> + TTU_IGNORE_MLOCK = (1 << 9), /* ignore mlock */
> + TTU_IGNORE_ACCESS = (1 << 10), /* don't age */
> + TTU_IGNORE_HWPOISON = (1 << 11),/* corrupted page is recoverable */

Unless there is a deeper purpose that I am overlooking, I think it would
be better to leave the _MLOCK, _ACCESS, and _HWPOISON at their original
values. I just can't quite see why they would need to start at bit 9
instead of bit 8...
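That is, keep the new low bits but leave the high flags where they were;
something like this (sketch of the suggestion):

enum ttu_flags {
        TTU_VMSCAN = 1,                 /* unmap for vmscan */
        TTU_POISON = 2,                 /* unmap for poison */
        TTU_MIGRATION = 4,              /* migration mode */
        TTU_MUNLOCK = 8,                /* munlock mode */

        TTU_IGNORE_MLOCK = (1 << 8),    /* ignore mlock */
        TTU_IGNORE_ACCESS = (1 << 9),   /* don't age */
        TTU_IGNORE_HWPOISON = (1 << 10),/* corrupted page is recoverable */
};

The mode values only occupy bits 0-3, so bit 8 remains free.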

> };
>
> #ifdef CONFIG_MMU
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index a7a89eb..ba176c4 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -887,7 +887,7 @@ static int page_action(struct page_state *ps, struct page *p,
> static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
> int trapno, int flags, struct page **hpagep)
> {
> - enum ttu_flags ttu = TTU_UNMAP | TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;
> + enum ttu_flags ttu = TTU_POISON | TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;
> struct address_space *mapping;
> LIST_HEAD(tokill);
> int ret;
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 6d24fd6..5a7d286 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1163,7 +1163,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
> }
>
> ret = shrink_page_list(&clean_pages, zone, &sc,
> - TTU_UNMAP|TTU_IGNORE_ACCESS,
> + TTU_VMSCAN|TTU_IGNORE_ACCESS,
> &dummy1, &dummy2, &dummy3, &dummy4, &dummy5, true);
> list_splice(&clean_pages, page_list);
> mod_zone_page_state(zone, NR_ISOLATED_FILE, -ret);
> @@ -1518,7 +1518,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> if (nr_taken == 0)
> return 0;
>
> - nr_reclaimed = shrink_page_list(&page_list, zone, sc, TTU_UNMAP,
> + nr_reclaimed = shrink_page_list(&page_list, zone, sc, TTU_VMSCAN,
> &nr_dirty, &nr_unqueued_dirty, &nr_congested,
> &nr_writeback, &nr_immediate,
> false);
> --
> 1.9.0
>

Other than that, looks good.

Reviewed-by: John Hubbard <[email protected]>

thanks,
John H.

2014-06-30 05:23:06

by John Hubbard

[permalink] [raw]
Subject: Re: [PATCH 3/6] mmu_notifier: add event information to address invalidation v2

On Fri, 27 Jun 2014, Jérôme Glisse wrote:

> From: Jérôme Glisse <[email protected]>
>
> The event information will be useful for new users of the mmu_notifier API.
> The event argument differentiates between a vma disappearing, a page
> being write protected, or simply a page being unmapped. This allows a new
> user to take a different path for each event: for instance, on unmap
> the resources used to track a vma are still valid and should stay around,
> while if the event says that a vma is being destroyed, any resources
> used to track this vma can be freed.
>
> Changed since v1:
> - renamed action into event (updated commit message too).
> - simplified the event names and clarified their intended usage,
> also documenting what expectations the listener can have with
> respect to each event.
>
> Signed-off-by: Jérôme Glisse <[email protected]>
> ---
> drivers/gpu/drm/i915/i915_gem_userptr.c | 3 +-
> drivers/iommu/amd_iommu_v2.c | 14 ++--
> drivers/misc/sgi-gru/grutlbpurge.c | 9 ++-
> drivers/xen/gntdev.c | 9 ++-
> fs/proc/task_mmu.c | 6 +-
> include/linux/hugetlb.h | 7 +-
> include/linux/mmu_notifier.h | 117 ++++++++++++++++++++++++++------
> kernel/events/uprobes.c | 10 ++-
> mm/filemap_xip.c | 2 +-
> mm/huge_memory.c | 51 ++++++++------
> mm/hugetlb.c | 25 ++++---
> mm/ksm.c | 18 +++--
> mm/memory.c | 27 +++++---
> mm/migrate.c | 9 ++-
> mm/mmu_notifier.c | 28 +++++---
> mm/mprotect.c | 33 ++++++---
> mm/mremap.c | 6 +-
> mm/rmap.c | 24 +++++--
> virt/kvm/kvm_main.c | 12 ++--
> 19 files changed, 291 insertions(+), 119 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
> index 21ea928..ed6f35e 100644
> --- a/drivers/gpu/drm/i915/i915_gem_userptr.c
> +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
> @@ -56,7 +56,8 @@ struct i915_mmu_object {
> static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
> struct mm_struct *mm,
> unsigned long start,
> - unsigned long end)
> + unsigned long end,
> + enum mmu_event event)
> {
> struct i915_mmu_notifier *mn = container_of(_mn, struct i915_mmu_notifier, mn);
> struct interval_tree_node *it = NULL;
> diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c
> index 499b436..2bb9771 100644
> --- a/drivers/iommu/amd_iommu_v2.c
> +++ b/drivers/iommu/amd_iommu_v2.c
> @@ -414,21 +414,25 @@ static int mn_clear_flush_young(struct mmu_notifier *mn,
> static void mn_change_pte(struct mmu_notifier *mn,
> struct mm_struct *mm,
> unsigned long address,
> - pte_t pte)
> + pte_t pte,
> + enum mmu_event event)
> {
> __mn_flush_page(mn, address);
> }
>
> static void mn_invalidate_page(struct mmu_notifier *mn,
> struct mm_struct *mm,
> - unsigned long address)
> + unsigned long address,
> + enum mmu_event event)
> {
> __mn_flush_page(mn, address);
> }
>
> static void mn_invalidate_range_start(struct mmu_notifier *mn,
> struct mm_struct *mm,
> - unsigned long start, unsigned long end)
> + unsigned long start,
> + unsigned long end,
> + enum mmu_event event)
> {
> struct pasid_state *pasid_state;
> struct device_state *dev_state;
> @@ -449,7 +453,9 @@ static void mn_invalidate_range_start(struct mmu_notifier *mn,
>
> static void mn_invalidate_range_end(struct mmu_notifier *mn,
> struct mm_struct *mm,
> - unsigned long start, unsigned long end)
> + unsigned long start,
> + unsigned long end,
> + enum mmu_event event)
> {
> struct pasid_state *pasid_state;
> struct device_state *dev_state;
> diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c
> index 2129274..e67fed1 100644
> --- a/drivers/misc/sgi-gru/grutlbpurge.c
> +++ b/drivers/misc/sgi-gru/grutlbpurge.c
> @@ -221,7 +221,8 @@ void gru_flush_all_tlb(struct gru_state *gru)
> */
> static void gru_invalidate_range_start(struct mmu_notifier *mn,
> struct mm_struct *mm,
> - unsigned long start, unsigned long end)
> + unsigned long start, unsigned long end,
> + enum mmu_event event)
> {
> struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct,
> ms_notifier);
> @@ -235,7 +236,8 @@ static void gru_invalidate_range_start(struct mmu_notifier *mn,
>
> static void gru_invalidate_range_end(struct mmu_notifier *mn,
> struct mm_struct *mm, unsigned long start,
> - unsigned long end)
> + unsigned long end,
> + enum mmu_event event)
> {
> struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct,
> ms_notifier);
> @@ -248,7 +250,8 @@ static void gru_invalidate_range_end(struct mmu_notifier *mn,
> }
>
> static void gru_invalidate_page(struct mmu_notifier *mn, struct mm_struct *mm,
> - unsigned long address)
> + unsigned long address,
> + enum mmu_event event)
> {
> struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct,
> ms_notifier);
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index 073b4a1..fe9da94 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -428,7 +428,9 @@ static void unmap_if_in_range(struct grant_map *map,
>
> static void mn_invl_range_start(struct mmu_notifier *mn,
> struct mm_struct *mm,
> - unsigned long start, unsigned long end)
> + unsigned long start,
> + unsigned long end,
> + enum mmu_event event)
> {
> struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn);
> struct grant_map *map;
> @@ -445,9 +447,10 @@ static void mn_invl_range_start(struct mmu_notifier *mn,
>
> static void mn_invl_page(struct mmu_notifier *mn,
> struct mm_struct *mm,
> - unsigned long address)
> + unsigned long address,
> + enum mmu_event event)
> {
> - mn_invl_range_start(mn, mm, address, address + PAGE_SIZE);
> + mn_invl_range_start(mn, mm, address, address + PAGE_SIZE, event);
> }
>
> static void mn_release(struct mmu_notifier *mn,
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index cfa63ee..e9e79f7 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -830,7 +830,8 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
> };
> down_read(&mm->mmap_sem);
> if (type == CLEAR_REFS_SOFT_DIRTY)
> - mmu_notifier_invalidate_range_start(mm, 0, -1);
> + mmu_notifier_invalidate_range_start(mm, 0,
> + -1, MMU_STATUS);
> for (vma = mm->mmap; vma; vma = vma->vm_next) {
> cp.vma = vma;
> if (is_vm_hugetlb_page(vma))
> @@ -858,7 +859,8 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
> &clear_refs_walk);
> }
> if (type == CLEAR_REFS_SOFT_DIRTY)
> - mmu_notifier_invalidate_range_end(mm, 0, -1);
> + mmu_notifier_invalidate_range_end(mm, 0,
> + -1, MMU_STATUS);
> flush_tlb_mm(mm);
> up_read(&mm->mmap_sem);
> mmput(mm);
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 6a836ef..d7e512f 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -6,6 +6,7 @@
> #include <linux/fs.h>
> #include <linux/hugetlb_inline.h>
> #include <linux/cgroup.h>
> +#include <linux/mmu_notifier.h>
> #include <linux/list.h>
> #include <linux/kref.h>
>
> @@ -103,7 +104,8 @@ struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
> int pmd_huge(pmd_t pmd);
> int pud_huge(pud_t pmd);
> unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> - unsigned long address, unsigned long end, pgprot_t newprot);
> + unsigned long address, unsigned long end, pgprot_t newprot,
> + enum mmu_event event);
>
> #else /* !CONFIG_HUGETLB_PAGE */
>
> @@ -148,7 +150,8 @@ static inline bool isolate_huge_page(struct page *page, struct list_head *list)
> #define is_hugepage_active(x) false
>
> static inline unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> - unsigned long address, unsigned long end, pgprot_t newprot)
> + unsigned long address, unsigned long end, pgprot_t newprot,
> + enum mmu_event event)
> {
> return 0;
> }
> diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> index deca874..82e9577 100644
> --- a/include/linux/mmu_notifier.h
> +++ b/include/linux/mmu_notifier.h
> @@ -9,6 +9,52 @@
> struct mmu_notifier;
> struct mmu_notifier_ops;
>
> +/* Event report finer informations to the callback allowing the event listener
> + * to take better action. There are only few kinds of events :
> + *
> + * - MMU_MIGRATE memory is migrating from one page to another thus all write
> + * access must stop after invalidate_range_start callback returns. And no
> + * read access should be allowed either as new page can be remapped with
> + * write access before the invalidate_range_end callback happen and thus
> + * any read access to old page might access outdated informations. Several
> + * source to this event like page moving to swap (for various reasons like
> + * page reclaim), outcome of mremap syscall, migration for numa reasons,
> + * balancing memory pool, write fault on read only page trigger a new page
> + * to be allocated and used, ...
> + * - MMU_MPROT_NONE memory access protection is change, no page in the range
> + * can be accessed in either read or write mode but the range of address
> + * is still valid. All access are still fine until invalidate_range_end
> + * callback returns.
> + * - MMU_MPROT_RONLY memory access proctection is changing to read only.
> + * All access are still fine until invalidate_range_end callback returns.
> + * - MMU_MPROT_RANDW memory access proctection is changing to read an write.
> + * All access are still fine until invalidate_range_end callback returns.
> + * - MMU_MPROT_WONLY memory access proctection is changing to write only.
> + * All access are still fine until invalidate_range_end callback returns.
> + * - MMU_MUNMAP the range is being unmaped (outcome of a munmap syscall). It
> + * is fine to still have read/write access until the invalidate_range_end
> + * callback returns. This also imply that secondary page table can be trim
> + * as the address range is no longer valid.
> + * - MMU_WB memory is being write back to disk, all write access must stop
> + * after invalidate_range_start callback returns. Read access are still
> + * allowed.
> + * - MMU_STATUS memory status change, like soft dirty.
> + *
> + * In doubt when adding a new notifier caller use MMU_MIGRATE it will always
> + * result in expected behavior but will not allow listener a chance to optimize
> + * its events.
> + */

Here is a pass at tightening up that documentation:

/* MMU Events report fine-grained information to the callback routine, allowing
* the event listener to make a more informed decision as to what action to
* take. The event types are:
*
* - MMU_MIGRATE: memory is migrating from one page to another, thus all write
* access must stop after invalidate_range_start callback returns.
* Furthermore, no read access should be allowed either, as a new page can
* be remapped with write access before the invalidate_range_end callback
* happens and thus any read access to the old page might read stale data. There
* are several sources for this event, including:
*
* - A page moving to swap (for various reasons, including page
* reclaim),
* - An mremap syscall,
* - migration for NUMA reasons,
* - balancing the memory pool,
* - write fault on a read-only page triggers a new page to be allocated
* and used,
* - and more that are not listed here.
*
* - MMU_MPROT_NONE: memory access protection is changing to "none": no page
* in the range can be accessed in either read or write mode but the range
* of addresses is still valid. However, access is still allowed, up until
* invalidate_range_end callback returns.
*
* - MMU_MPROT_RONLY: memory access protection is changing to read only.
* However, access is still allowed, up until invalidate_range_end callback
* returns.
*
* - MMU_MPROT_RANDW: memory access protection is changing to read and write.
* However, access is still allowed, up until invalidate_range_end callback
* returns.
*
* - MMU_MPROT_WONLY: memory access protection is changing to write only.
* However, access is still allowed, up until invalidate_range_end callback
* returns.
*
* - MMU_MUNMAP: the range is being unmapped (outcome of a munmap syscall).
* However, access is still allowed, up until invalidate_range_end callback
* returns. This also implies that the secondary page table can be trimmed,
* because the address range is no longer valid.
*
* - MMU_WB: memory is being written back to disk, all write accesses must
* stop after invalidate_range_start callback returns. Read access are still
* allowed.
*
* - MMU_STATUS: memory status change, like soft dirty, or huge page
* splitting (in place).
*
* If in doubt when adding a new notifier caller, please use MMU_MIGRATE,
* because it will always lead to reasonable behavior, but will not allow the
* listener a chance to optimize its events.
*/

Mostly just cleaning up the wording, except that I did add "huge page
splitting" to the cases that could cause an MMU_STATUS to fire.

> +enum mmu_event {
> + MMU_MIGRATE = 0,
> + MMU_MPROT_NONE,
> + MMU_MPROT_RONLY,
> + MMU_MPROT_RANDW,
> + MMU_MPROT_WONLY,
> + MMU_MUNMAP,
> + MMU_STATUS,
> + MMU_WB,
> +};
> +
> #ifdef CONFIG_MMU_NOTIFIER
>
> /*
> @@ -79,7 +125,8 @@ struct mmu_notifier_ops {
> void (*change_pte)(struct mmu_notifier *mn,
> struct mm_struct *mm,
> unsigned long address,
> - pte_t pte);
> + pte_t pte,
> + enum mmu_event event);
>
> /*
> * Before this is invoked any secondary MMU is still ok to
> @@ -90,7 +137,8 @@ struct mmu_notifier_ops {
> */
> void (*invalidate_page)(struct mmu_notifier *mn,
> struct mm_struct *mm,
> - unsigned long address);
> + unsigned long address,
> + enum mmu_event event);
>
> /*
> * invalidate_range_start() and invalidate_range_end() must be
> @@ -137,10 +185,14 @@ struct mmu_notifier_ops {
> */
> void (*invalidate_range_start)(struct mmu_notifier *mn,
> struct mm_struct *mm,
> - unsigned long start, unsigned long end);
> + unsigned long start,
> + unsigned long end,
> + enum mmu_event event);
> void (*invalidate_range_end)(struct mmu_notifier *mn,
> struct mm_struct *mm,
> - unsigned long start, unsigned long end);
> + unsigned long start,
> + unsigned long end,
> + enum mmu_event event);
> };
>
> /*
> @@ -177,13 +229,20 @@ extern int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
> extern int __mmu_notifier_test_young(struct mm_struct *mm,
> unsigned long address);
> extern void __mmu_notifier_change_pte(struct mm_struct *mm,
> - unsigned long address, pte_t pte);
> + unsigned long address,
> + pte_t pte,
> + enum mmu_event event);
> extern void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> - unsigned long address);
> + unsigned long address,
> + enum mmu_event event);
> extern void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> - unsigned long start, unsigned long end);
> + unsigned long start,
> + unsigned long end,
> + enum mmu_event event);
> extern void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> - unsigned long start, unsigned long end);
> + unsigned long start,
> + unsigned long end,
> + enum mmu_event event);
>
> static inline void mmu_notifier_release(struct mm_struct *mm)
> {
> @@ -208,31 +267,38 @@ static inline int mmu_notifier_test_young(struct mm_struct *mm,
> }
>
> static inline void mmu_notifier_change_pte(struct mm_struct *mm,
> - unsigned long address, pte_t pte)
> + unsigned long address,
> + pte_t pte,
> + enum mmu_event event)
> {
> if (mm_has_notifiers(mm))
> - __mmu_notifier_change_pte(mm, address, pte);
> + __mmu_notifier_change_pte(mm, address, pte, event);
> }
>
> static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
> - unsigned long address)
> + unsigned long address,
> + enum mmu_event event)
> {
> if (mm_has_notifiers(mm))
> - __mmu_notifier_invalidate_page(mm, address);
> + __mmu_notifier_invalidate_page(mm, address, event);
> }
>
> static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> - unsigned long start, unsigned long end)
> + unsigned long start,
> + unsigned long end,
> + enum mmu_event event)
> {
> if (mm_has_notifiers(mm))
> - __mmu_notifier_invalidate_range_start(mm, start, end);
> + __mmu_notifier_invalidate_range_start(mm, start, end, event);
> }
>
> static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> - unsigned long start, unsigned long end)
> + unsigned long start,
> + unsigned long end,
> + enum mmu_event event)
> {
> if (mm_has_notifiers(mm))
> - __mmu_notifier_invalidate_range_end(mm, start, end);
> + __mmu_notifier_invalidate_range_end(mm, start, end, event);
> }
>
> static inline void mmu_notifier_mm_init(struct mm_struct *mm)
> @@ -278,13 +344,13 @@ static inline void mmu_notifier_mm_destroy(struct mm_struct *mm)
> * old page would remain mapped readonly in the secondary MMUs after the new
> * page is already writable by some CPU through the primary MMU.
> */
> -#define set_pte_at_notify(__mm, __address, __ptep, __pte) \
> +#define set_pte_at_notify(__mm, __address, __ptep, __pte, __event) \
> ({ \
> struct mm_struct *___mm = __mm; \
> unsigned long ___address = __address; \
> pte_t ___pte = __pte; \
> \
> - mmu_notifier_change_pte(___mm, ___address, ___pte); \
> + mmu_notifier_change_pte(___mm, ___address, ___pte, __event); \
> set_pte_at(___mm, ___address, __ptep, ___pte); \
> })
>
> @@ -307,22 +373,29 @@ static inline int mmu_notifier_test_young(struct mm_struct *mm,
> }
>
> static inline void mmu_notifier_change_pte(struct mm_struct *mm,
> - unsigned long address, pte_t pte)
> + unsigned long address,
> + pte_t pte,
> + enum mmu_event event)
> {
> }
>
> static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
> - unsigned long address)
> + unsigned long address,
> + enum mmu_event event)
> {
> }
>
> static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> - unsigned long start, unsigned long end)
> + unsigned long start,
> + unsigned long end,
> + enum mmu_event event)
> {
> }
>
> static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> - unsigned long start, unsigned long end)
> + unsigned long start,
> + unsigned long end,
> + enum mmu_event event)
> {
> }
>
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 32b04dc..296f81e 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -177,7 +177,8 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
> /* For try_to_free_swap() and munlock_vma_page() below */
> lock_page(page);
>
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> err = -EAGAIN;
> ptep = page_check_address(page, mm, addr, &ptl, 0);
> if (!ptep)
> @@ -195,7 +196,9 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
>
> flush_cache_page(vma, addr, pte_pfn(*ptep));
> ptep_clear_flush(vma, addr, ptep);
> - set_pte_at_notify(mm, addr, ptep, mk_pte(kpage, vma->vm_page_prot));
> + set_pte_at_notify(mm, addr, ptep,
> + mk_pte(kpage, vma->vm_page_prot),
> + MMU_MIGRATE);
>
> page_remove_rmap(page);
> if (!page_mapped(page))
> @@ -209,7 +212,8 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
> err = 0;
> unlock:
> mem_cgroup_cancel_charge(kpage, memcg);
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> unlock_page(page);
> return err;
> }
> diff --git a/mm/filemap_xip.c b/mm/filemap_xip.c
> index d8d9fe3..a2b3f09 100644
> --- a/mm/filemap_xip.c
> +++ b/mm/filemap_xip.c
> @@ -198,7 +198,7 @@ retry:
> BUG_ON(pte_dirty(pteval));
> pte_unmap_unlock(pte, ptl);
> /* must invalidate_page _before_ freeing the page */
> - mmu_notifier_invalidate_page(mm, address);
> + mmu_notifier_invalidate_page(mm, address, MMU_MIGRATE);
> page_cache_release(page);
> }
> }
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 5d562a9..fa30857 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1020,6 +1020,11 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
> set_page_private(pages[i], (unsigned long)memcg);
> }
>
> + mmun_start = haddr;
> + mmun_end = haddr + HPAGE_PMD_SIZE;
> + mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> +
> for (i = 0; i < HPAGE_PMD_NR; i++) {
> copy_user_highpage(pages[i], page + i,
> haddr + PAGE_SIZE * i, vma);
> @@ -1027,10 +1032,6 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
> cond_resched();
> }
>
> - mmun_start = haddr;
> - mmun_end = haddr + HPAGE_PMD_SIZE;
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> -
> ptl = pmd_lock(mm, pmd);
> if (unlikely(!pmd_same(*pmd, orig_pmd)))
> goto out_free_pages;

So, that looks like you are fixing a pre-existing bug here? The
invalidate_range_start call is now happening *before* we copy pages. That seems
correct, although this is starting to get into code I'm less comfortable
with (huge pages). But I think it's worth mentioning in the commit
message.
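
In other words, the ordering the move establishes is, schematically
(page-table update elided):

        mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end,
                                            MMU_MIGRATE);
        /*
         * Secondary MMUs have been told to stop using the old pages,
         * so the copies below cannot race with a device or guest write.
         */
        for (i = 0; i < HPAGE_PMD_NR; i++)
                copy_user_highpage(pages[i], page + i,
                                   haddr + PAGE_SIZE * i, vma);
        /* ... the primary page table update happens under the pmd lock ... */
        mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end,
                                          MMU_MIGRATE);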

> @@ -1063,7 +1064,8 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
> page_remove_rmap(page);
> spin_unlock(ptl);
>
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
>
> ret |= VM_FAULT_WRITE;
> put_page(page);
> @@ -1073,7 +1075,8 @@ out:
>
> out_free_pages:
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> for (i = 0; i < HPAGE_PMD_NR; i++) {
> memcg = (void *)page_private(pages[i]);
> set_page_private(pages[i], 0);
> @@ -1157,16 +1160,17 @@ alloc:
>
> count_vm_event(THP_FAULT_ALLOC);
>
> + mmun_start = haddr;
> + mmun_end = haddr + HPAGE_PMD_SIZE;
> + mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> +
> if (!page)
> clear_huge_page(new_page, haddr, HPAGE_PMD_NR);
> else
> copy_user_huge_page(new_page, page, haddr, vma, HPAGE_PMD_NR);
> __SetPageUptodate(new_page);
>
> - mmun_start = haddr;
> - mmun_end = haddr + HPAGE_PMD_SIZE;
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> -

Another bug fix, OK.

> spin_lock(ptl);
> if (page)
> put_user_huge_page(page);
> @@ -1197,7 +1201,8 @@ alloc:
> }
> spin_unlock(ptl);
> out_mn:
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> out:
> return ret;
> out_unlock:
> @@ -1632,7 +1637,8 @@ static int __split_huge_page_splitting(struct page *page,
> const unsigned long mmun_start = address;
> const unsigned long mmun_end = address + HPAGE_PMD_SIZE;
>
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmun_end, MMU_STATUS);

OK, just to be sure: we are not moving the page contents at this point,
right? Just changing the page table from a single "huge" entry into lots
of little 4K page entries? If so, then MMU_STATUS seems correct, but we
should add that case to the "Event types" documentation above.

> pmd = page_check_address_pmd(page, mm, address,
> PAGE_CHECK_ADDRESS_PMD_NOTSPLITTING_FLAG, &ptl);
> if (pmd) {
> @@ -1647,7 +1653,8 @@ static int __split_huge_page_splitting(struct page *page,
> ret = 1;
> spin_unlock(ptl);
> }
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_STATUS);
>
> return ret;
> }
> @@ -2446,7 +2453,8 @@ static void collapse_huge_page(struct mm_struct *mm,
>
> mmun_start = address;
> mmun_end = address + HPAGE_PMD_SIZE;
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
> /*
> * After this gup_fast can't run anymore. This also removes
> @@ -2456,7 +2464,8 @@ static void collapse_huge_page(struct mm_struct *mm,
> */
> _pmd = pmdp_clear_flush(vma, address, pmd);
> spin_unlock(pmd_ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
>
> spin_lock(pte_ptl);
> isolated = __collapse_huge_page_isolate(vma, address, pte);
> @@ -2845,24 +2854,28 @@ void __split_huge_page_pmd(struct vm_area_struct *vma, unsigned long address,
> mmun_start = haddr;
> mmun_end = haddr + HPAGE_PMD_SIZE;
> again:
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);

Just checking: this is MMU_MIGRATE, instead of MMU_STATUS, because we are
actually moving data? (The pages backing the page table?)

> ptl = pmd_lock(mm, pmd);
> if (unlikely(!pmd_trans_huge(*pmd))) {
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> return;
> }
> if (is_huge_zero_pmd(*pmd)) {
> __split_huge_zero_page_pmd(vma, haddr, pmd);
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> return;
> }
> page = pmd_page(*pmd);
> VM_BUG_ON_PAGE(!page_count(page), page);
> get_page(page);
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
>
> split_huge_page(page);
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 7faab71..73e1576 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2565,7 +2565,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> mmun_start = vma->vm_start;
> mmun_end = vma->vm_end;
> if (cow)
> - mmu_notifier_invalidate_range_start(src, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_start(src, mmun_start,
> + mmun_end, MMU_MIGRATE);
>
> for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
> spinlock_t *src_ptl, *dst_ptl;
> @@ -2615,7 +2616,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> }
>
> if (cow)
> - mmu_notifier_invalidate_range_end(src, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(src, mmun_start,
> + mmun_end, MMU_MIGRATE);
>
> return ret;
> }
> @@ -2641,7 +2643,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> BUG_ON(end & ~huge_page_mask(h));
>
> tlb_start_vma(tlb, vma);
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> again:
> for (address = start; address < end; address += sz) {
> ptep = huge_pte_offset(mm, address);
> @@ -2712,7 +2715,8 @@ unlock:
> if (address < end && !ref_page)
> goto again;
> }
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> tlb_end_vma(tlb, vma);
> }
>
> @@ -2899,7 +2903,8 @@ retry_avoidcopy:
>
> mmun_start = address & huge_page_mask(h);
> mmun_end = mmun_start + huge_page_size(h);
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> /*
> * Retake the page table lock to check for racing updates
> * before the page tables are altered
> @@ -2919,7 +2924,8 @@ retry_avoidcopy:
> new_page = old_page;
> }
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> page_cache_release(new_page);
> page_cache_release(old_page);
>
> @@ -3344,7 +3350,8 @@ same_page:
> }
>
> unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> - unsigned long address, unsigned long end, pgprot_t newprot)
> + unsigned long address, unsigned long end, pgprot_t newprot,
> + enum mmu_event event)
> {
> struct mm_struct *mm = vma->vm_mm;
> unsigned long start = address;
> @@ -3356,7 +3363,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> BUG_ON(address >= end);
> flush_cache_range(vma, address, end);
>
> - mmu_notifier_invalidate_range_start(mm, start, end);
> + mmu_notifier_invalidate_range_start(mm, start, end, event);
> mutex_lock(&vma->vm_file->f_mapping->i_mmap_mutex);
> for (; address < end; address += huge_page_size(h)) {
> spinlock_t *ptl;
> @@ -3386,7 +3393,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> */
> flush_tlb_range(vma, start, end);
> mutex_unlock(&vma->vm_file->f_mapping->i_mmap_mutex);
> - mmu_notifier_invalidate_range_end(mm, start, end);
> + mmu_notifier_invalidate_range_end(mm, start, end, event);
>
> return pages << h->order;
> }
> diff --git a/mm/ksm.c b/mm/ksm.c
> index cb1e976..4b659f1 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -873,7 +873,8 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
>
> mmun_start = addr;
> mmun_end = addr + PAGE_SIZE;
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmun_end, MMU_MPROT_RONLY);
>
> ptep = page_check_address(page, mm, addr, &ptl, 0);
> if (!ptep)
> @@ -905,7 +906,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
> if (pte_dirty(entry))
> set_page_dirty(page);
> entry = pte_mkclean(pte_wrprotect(entry));
> - set_pte_at_notify(mm, addr, ptep, entry);
> + set_pte_at_notify(mm, addr, ptep, entry, MMU_MPROT_RONLY);
> }
> *orig_pte = *ptep;
> err = 0;
> @@ -913,7 +914,8 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
> out_unlock:
> pte_unmap_unlock(ptep, ptl);
> out_mn:
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MPROT_RONLY);
> out:
> return err;
> }
> @@ -949,7 +951,8 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
>
> mmun_start = addr;
> mmun_end = addr + PAGE_SIZE;
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
>
> ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
> if (!pte_same(*ptep, orig_pte)) {
> @@ -962,7 +965,9 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
>
> flush_cache_page(vma, addr, pte_pfn(*ptep));
> ptep_clear_flush(vma, addr, ptep);
> - set_pte_at_notify(mm, addr, ptep, mk_pte(kpage, vma->vm_page_prot));
> + set_pte_at_notify(mm, addr, ptep,
> + mk_pte(kpage, vma->vm_page_prot),
> + MMU_MIGRATE);
>
> page_remove_rmap(page);
> if (!page_mapped(page))
> @@ -972,7 +977,8 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
> pte_unmap_unlock(ptep, ptl);
> err = 0;
> out_mn:
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> out:
> return err;
> }
> diff --git a/mm/memory.c b/mm/memory.c
> index 09e2cd0..d3908f0 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1050,7 +1050,7 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> mmun_end = end;
> if (is_cow)
> mmu_notifier_invalidate_range_start(src_mm, mmun_start,
> - mmun_end);
> + mmun_end, MMU_MIGRATE);
>
> ret = 0;
> dst_pgd = pgd_offset(dst_mm, addr);
> @@ -1067,7 +1067,8 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> } while (dst_pgd++, src_pgd++, addr = next, addr != end);
>
> if (is_cow)
> - mmu_notifier_invalidate_range_end(src_mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(src_mm, mmun_start, mmun_end,
> + MMU_MIGRATE);
> return ret;
> }
>
> @@ -1371,10 +1372,12 @@ void unmap_vmas(struct mmu_gather *tlb,
> {
> struct mm_struct *mm = vma->vm_mm;
>
> - mmu_notifier_invalidate_range_start(mm, start_addr, end_addr);
> + mmu_notifier_invalidate_range_start(mm, start_addr,
> + end_addr, MMU_MUNMAP);
> for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next)
> unmap_single_vma(tlb, vma, start_addr, end_addr, NULL);
> - mmu_notifier_invalidate_range_end(mm, start_addr, end_addr);
> + mmu_notifier_invalidate_range_end(mm, start_addr,
> + end_addr, MMU_MUNMAP);
> }
>
> /**
> @@ -1396,10 +1399,10 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
> lru_add_drain();
> tlb_gather_mmu(&tlb, mm, start, end);
> update_hiwater_rss(mm);
> - mmu_notifier_invalidate_range_start(mm, start, end);
> + mmu_notifier_invalidate_range_start(mm, start, end, MMU_MUNMAP);
> for ( ; vma && vma->vm_start < end; vma = vma->vm_next)
> unmap_single_vma(&tlb, vma, start, end, details);
> - mmu_notifier_invalidate_range_end(mm, start, end);
> + mmu_notifier_invalidate_range_end(mm, start, end, MMU_MUNMAP);
> tlb_finish_mmu(&tlb, start, end);
> }
>
> @@ -1422,9 +1425,9 @@ static void zap_page_range_single(struct vm_area_struct *vma, unsigned long addr
> lru_add_drain();
> tlb_gather_mmu(&tlb, mm, address, end);
> update_hiwater_rss(mm);
> - mmu_notifier_invalidate_range_start(mm, address, end);
> + mmu_notifier_invalidate_range_start(mm, address, end, MMU_MUNMAP);
> unmap_single_vma(&tlb, vma, address, end, details);
> - mmu_notifier_invalidate_range_end(mm, address, end);
> + mmu_notifier_invalidate_range_end(mm, address, end, MMU_MUNMAP);
> tlb_finish_mmu(&tlb, address, end);
> }
>
> @@ -2208,7 +2211,8 @@ gotten:
>
> mmun_start = address & PAGE_MASK;
> mmun_end = mmun_start + PAGE_SIZE;
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
>
> /*
> * Re-check the pte - we dropped the lock
> @@ -2240,7 +2244,7 @@ gotten:
> * mmu page tables (such as kvm shadow page tables), we want the
> * new page to be mapped directly into the secondary page table.
> */
> - set_pte_at_notify(mm, address, page_table, entry);
> + set_pte_at_notify(mm, address, page_table, entry, MMU_MIGRATE);
> update_mmu_cache(vma, address, page_table);
> if (old_page) {
> /*
> @@ -2279,7 +2283,8 @@ gotten:
> unlock:
> pte_unmap_unlock(page_table, ptl);
> if (mmun_end > mmun_start)
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> if (old_page) {
> /*
> * Don't let another task, with possibly unlocked vma,
> diff --git a/mm/migrate.c b/mm/migrate.c
> index ab43fbf..b526c72 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1820,12 +1820,14 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
> WARN_ON(PageLRU(new_page));
>
> /* Recheck the target PMD */
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_start(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
> ptl = pmd_lock(mm, pmd);
> if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) {
> fail_putback:
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
>
> /* Reverse changes made by migrate_page_copy() */
> if (TestClearPageActive(new_page))
> @@ -1878,7 +1880,8 @@ fail_putback:
> page_remove_rmap(page);
>
> spin_unlock(ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
>
> /* Take an "isolate" reference and put new page on the LRU. */
> get_page(new_page);
> diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> index 41cefdf..9decb88 100644
> --- a/mm/mmu_notifier.c
> +++ b/mm/mmu_notifier.c
> @@ -122,8 +122,10 @@ int __mmu_notifier_test_young(struct mm_struct *mm,
> return young;
> }
>
> -void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
> - pte_t pte)
> +void __mmu_notifier_change_pte(struct mm_struct *mm,
> + unsigned long address,
> + pte_t pte,
> + enum mmu_event event)
> {
> struct mmu_notifier *mn;
> int id;
> @@ -131,13 +133,14 @@ void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
> id = srcu_read_lock(&srcu);
> hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> if (mn->ops->change_pte)
> - mn->ops->change_pte(mn, mm, address, pte);
> + mn->ops->change_pte(mn, mm, address, pte, event);
> }
> srcu_read_unlock(&srcu, id);
> }
>
> void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> - unsigned long address)
> + unsigned long address,
> + enum mmu_event event)
> {
> struct mmu_notifier *mn;
> int id;
> @@ -145,13 +148,16 @@ void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> id = srcu_read_lock(&srcu);
> hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> if (mn->ops->invalidate_page)
> - mn->ops->invalidate_page(mn, mm, address);
> + mn->ops->invalidate_page(mn, mm, address, event);
> }
> srcu_read_unlock(&srcu, id);
> }
>
> void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> - unsigned long start, unsigned long end)
> + unsigned long start,
> + unsigned long end,
> + enum mmu_event event)
> +
> {
> struct mmu_notifier *mn;
> int id;
> @@ -159,14 +165,17 @@ void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> id = srcu_read_lock(&srcu);
> hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> if (mn->ops->invalidate_range_start)
> - mn->ops->invalidate_range_start(mn, mm, start, end);
> + mn->ops->invalidate_range_start(mn, mm, start,
> + end, event);
> }
> srcu_read_unlock(&srcu, id);
> }
> EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start);
>
> void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> - unsigned long start, unsigned long end)
> + unsigned long start,
> + unsigned long end,
> + enum mmu_event event)
> {
> struct mmu_notifier *mn;
> int id;
> @@ -174,7 +183,8 @@ void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> id = srcu_read_lock(&srcu);
> hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> if (mn->ops->invalidate_range_end)
> - mn->ops->invalidate_range_end(mn, mm, start, end);
> + mn->ops->invalidate_range_end(mn, mm, start,
> + end, event);
> }
> srcu_read_unlock(&srcu, id);
> }
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index c43d557..6ce6c23 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -137,7 +137,8 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>
> static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
> pud_t *pud, unsigned long addr, unsigned long end,
> - pgprot_t newprot, int dirty_accountable, int prot_numa)
> + pgprot_t newprot, int dirty_accountable, int prot_numa,
> + enum mmu_event event)
> {
> pmd_t *pmd;
> struct mm_struct *mm = vma->vm_mm;
> @@ -157,7 +158,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
> /* invoke the mmu notifier if the pmd is populated */
> if (!mni_start) {
> mni_start = addr;
> - mmu_notifier_invalidate_range_start(mm, mni_start, end);
> + mmu_notifier_invalidate_range_start(mm, mni_start,
> + end, event);
> }
>
> if (pmd_trans_huge(*pmd)) {
> @@ -185,7 +187,7 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
> } while (pmd++, addr = next, addr != end);
>
> if (mni_start)
> - mmu_notifier_invalidate_range_end(mm, mni_start, end);
> + mmu_notifier_invalidate_range_end(mm, mni_start, end, event);
>
> if (nr_huge_updates)
> count_vm_numa_events(NUMA_HUGE_PTE_UPDATES, nr_huge_updates);
> @@ -194,7 +196,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
>
> static inline unsigned long change_pud_range(struct vm_area_struct *vma,
> pgd_t *pgd, unsigned long addr, unsigned long end,
> - pgprot_t newprot, int dirty_accountable, int prot_numa)
> + pgprot_t newprot, int dirty_accountable, int prot_numa,
> + enum mmu_event event)
> {
> pud_t *pud;
> unsigned long next;
> @@ -206,7 +209,7 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,
> if (pud_none_or_clear_bad(pud))
> continue;
> pages += change_pmd_range(vma, pud, addr, next, newprot,
> - dirty_accountable, prot_numa);
> + dirty_accountable, prot_numa, event);
> } while (pud++, addr = next, addr != end);
>
> return pages;
> @@ -214,7 +217,7 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,
>
> static unsigned long change_protection_range(struct vm_area_struct *vma,
> unsigned long addr, unsigned long end, pgprot_t newprot,
> - int dirty_accountable, int prot_numa)
> + int dirty_accountable, int prot_numa, enum mmu_event event)
> {
> struct mm_struct *mm = vma->vm_mm;
> pgd_t *pgd;
> @@ -231,7 +234,7 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
> if (pgd_none_or_clear_bad(pgd))
> continue;
> pages += change_pud_range(vma, pgd, addr, next, newprot,
> - dirty_accountable, prot_numa);
> + dirty_accountable, prot_numa, event);
> } while (pgd++, addr = next, addr != end);
>
> /* Only flush the TLB if we actually modified any entries: */
> @@ -247,11 +250,23 @@ unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
> int dirty_accountable, int prot_numa)
> {
> unsigned long pages;
> + enum mmu_event event = MMU_MPROT_NONE;
> +
> + /* At this points vm_flags is updated. */
> + if ((vma->vm_flags & VM_READ) && (vma->vm_flags & VM_WRITE))
> + event = MMU_MPROT_RANDW;
> + else if (vma->vm_flags & VM_WRITE)
> + event = MMU_MPROT_WONLY;
> + else if (vma->vm_flags & VM_READ)
> + event = MMU_MPROT_RONLY;

Hmmm, shouldn't we be checking against the newprot argument, instead of
against vma->vm_flags? The calling code, mprotect_fixup for example, can
set flags *other* than VM_READ or VM_WRITE, and that could lead to a
confusing or even inaccurate event. We could have a case where the event
type is MMU_MPROT_RONLY, but the page was read-only the entire
time, and some other flag was actually getting set.

I'm also starting to wonder whether this event adds much value here (for
protection changes), given that the newprot argument contains the same
information. Then again, it is important to have a unified reporting
system for HMM, so that's probably a good enough reason to do this.
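
To make that concrete, here is a rough sketch of deriving the event from
newprot instead. Note that pgprot_read() and pgprot_write() are hypothetical
helpers (testing pgprot_t bits is arch-specific), so this only shows the
shape of the logic:

/* Illustrative sketch only: pgprot_read()/pgprot_write() do not exist
 * in the kernel, and testing pgprot_t bits is arch-specific. The point
 * is simply that the event would be derived from the protection actually
 * being applied, not from vma->vm_flags.
 */
static enum mmu_event mprot_event(pgprot_t newprot)
{
        bool r = pgprot_read(newprot);  /* hypothetical helper */
        bool w = pgprot_write(newprot); /* hypothetical helper */

        if (!r && !w)
                return MMU_MPROT_NONE;
        if (r && w)
                return MMU_MPROT_RANDW;
        if (w)
                return MMU_MPROT_WONLY;
        return MMU_MPROT_RONLY;
}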

>
> if (is_vm_hugetlb_page(vma))
> - pages = hugetlb_change_protection(vma, start, end, newprot);
> + pages = hugetlb_change_protection(vma, start, end,
> + newprot, event);
> else
> - pages = change_protection_range(vma, start, end, newprot, dirty_accountable, prot_numa);
> + pages = change_protection_range(vma, start, end, newprot,
> + dirty_accountable,
> + prot_numa, event);
>
> return pages;
> }
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 05f1180..6827d2f 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -177,7 +177,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>
> mmun_start = old_addr;
> mmun_end = old_end;
> - mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
>
> for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
> cond_resched();
> @@ -228,7 +229,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> if (likely(need_flush))
> flush_tlb_range(vma, old_end-len, old_addr);
>
> - mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start,
> + mmun_end, MMU_MIGRATE);
>
> return len + old_addr - old_end; /* how much done */
> }
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 7928ddd..bd7e6d7 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -840,7 +840,7 @@ static int page_mkclean_one(struct page *page, struct vm_area_struct *vma,
> pte_unmap_unlock(pte, ptl);
>
> if (ret) {
> - mmu_notifier_invalidate_page(mm, address);
> + mmu_notifier_invalidate_page(mm, address, MMU_WB);
> (*cleaned)++;
> }
> out:
> @@ -1128,6 +1128,10 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> spinlock_t *ptl;
> int ret = SWAP_AGAIN;
> enum ttu_flags flags = (enum ttu_flags)arg;
> + enum mmu_event event = MMU_MIGRATE;
> +
> + if (flags & TTU_MUNLOCK)
> + event = MMU_STATUS;
>
> pte = page_check_address(page, mm, address, &ptl, 0);
> if (!pte)
> @@ -1233,7 +1237,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> out_unmap:
> pte_unmap_unlock(pte, ptl);
> if (ret != SWAP_FAIL && !(flags & TTU_MUNLOCK))
> - mmu_notifier_invalidate_page(mm, address);
> + mmu_notifier_invalidate_page(mm, address, event);
> out:
> return ret;
>
> @@ -1287,7 +1291,9 @@ out_mlock:
> #define CLUSTER_MASK (~(CLUSTER_SIZE - 1))
>
> static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
> - struct vm_area_struct *vma, struct page *check_page)
> + struct vm_area_struct *vma,
> + struct page *check_page,
> + enum ttu_flags flags)
> {
> struct mm_struct *mm = vma->vm_mm;
> pmd_t *pmd;
> @@ -1301,6 +1307,10 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
> unsigned long end;
> int ret = SWAP_AGAIN;
> int locked_vma = 0;
> + enum mmu_event event = MMU_MIGRATE;
> +
> + if (flags & TTU_MUNLOCK)
> + event = MMU_STATUS;
>
> address = (vma->vm_start + cursor) & CLUSTER_MASK;
> end = address + CLUSTER_SIZE;
> @@ -1315,7 +1325,7 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
>
> mmun_start = address;
> mmun_end = end;
> - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end, event);
>
> /*
> * If we can acquire the mmap_sem for read, and vma is VM_LOCKED,
> @@ -1380,7 +1390,7 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
> (*mapcount)--;
> }
> pte_unmap_unlock(pte - 1, ptl);
> - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> + mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end, event);
> if (locked_vma)
> up_read(&vma->vm_mm->mmap_sem);
> return ret;
> @@ -1436,7 +1446,9 @@ static int try_to_unmap_nonlinear(struct page *page,
> while (cursor < max_nl_cursor &&
> cursor < vma->vm_end - vma->vm_start) {
> if (try_to_unmap_cluster(cursor, &mapcount,
> - vma, page) == SWAP_MLOCK)
> + vma, page,
> + (enum ttu_flags)arg)
> + == SWAP_MLOCK)
> ret = SWAP_MLOCK;
> cursor += CLUSTER_SIZE;
> vma->vm_private_data = (void *) cursor;
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 4b6c01b..6e1992f 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -262,7 +262,8 @@ static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
>
> static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
> struct mm_struct *mm,
> - unsigned long address)
> + unsigned long address,
> + enum mmu_event event)
> {
> struct kvm *kvm = mmu_notifier_to_kvm(mn);
> int need_tlb_flush, idx;
> @@ -301,7 +302,8 @@ static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
> static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
> struct mm_struct *mm,
> unsigned long address,
> - pte_t pte)
> + pte_t pte,
> + enum mmu_event event)
> {
> struct kvm *kvm = mmu_notifier_to_kvm(mn);
> int idx;
> @@ -317,7 +319,8 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
> static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> struct mm_struct *mm,
> unsigned long start,
> - unsigned long end)
> + unsigned long end,
> + enum mmu_event event)
> {
> struct kvm *kvm = mmu_notifier_to_kvm(mn);
> int need_tlb_flush = 0, idx;
> @@ -343,7 +346,8 @@ static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
> struct mm_struct *mm,
> unsigned long start,
> - unsigned long end)
> + unsigned long end,
> + enum mmu_event event)
> {
> struct kvm *kvm = mmu_notifier_to_kvm(mn);
>
> --
> 1.9.0
>

thanks,
John H.

2014-06-30 15:08:11

by Jerome Glisse

[permalink] [raw]
Subject: Re: [PATCH 1/6] mmput: use notifier chain to call subsystem exit handler.

On Sun, Jun 29, 2014 at 08:49:16PM -0700, John Hubbard wrote:
> On Fri, 27 Jun 2014, Jérôme Glisse wrote:
>
> > From: Jérôme Glisse <[email protected]>
> >
> > Several subsystem require a callback when a mm struct is being destroy
> > so that they can cleanup there respective per mm struct. Instead of
> > having each subsystem add its callback to mmput use a notifier chain
> > to call each of the subsystem.
> >
> > This will allow new subsystem to register callback even if they are
> > module. There should be no contention on the rw semaphore protecting
> > the call chain and the impact on the code path should be low and
> > burried in the noise.
> >
> > Note that this patch also move the call to cleanup functions after
> > exit_mmap so that new call back can assume that mmu_notifier_release
> > have already been call. This does not impact existing cleanup functions
> > as they do not rely on anything that exit_mmap is freeing. Also moved
> > khugepaged_exit to exit_mmap so that ordering is preserved for that
> > function.
> >
> > Signed-off-by: Jérôme Glisse <[email protected]>
> > ---
> > fs/aio.c | 29 ++++++++++++++++++++++-------
> > include/linux/aio.h | 2 --
> > include/linux/ksm.h | 11 -----------
> > include/linux/sched.h | 5 +++++
> > include/linux/uprobes.h | 1 -
> > kernel/events/uprobes.c | 19 ++++++++++++++++---
> > kernel/fork.c | 22 ++++++++++++++++++----
> > mm/ksm.c | 26 +++++++++++++++++++++-----
> > mm/mmap.c | 3 +++
> > 9 files changed, 85 insertions(+), 33 deletions(-)
> >
> > diff --git a/fs/aio.c b/fs/aio.c
> > index c1d8c48..1d06e92 100644
> > --- a/fs/aio.c
> > +++ b/fs/aio.c
> > @@ -40,6 +40,7 @@
> > #include <linux/ramfs.h>
> > #include <linux/percpu-refcount.h>
> > #include <linux/mount.h>
> > +#include <linux/notifier.h>
> >
> > #include <asm/kmap_types.h>
> > #include <asm/uaccess.h>
> > @@ -774,20 +775,22 @@ ssize_t wait_on_sync_kiocb(struct kiocb *req)
> > EXPORT_SYMBOL(wait_on_sync_kiocb);
> >
> > /*
> > - * exit_aio: called when the last user of mm goes away. At this point, there is
> > + * aio_exit: called when the last user of mm goes away. At this point, there is
> > * no way for any new requests to be submited or any of the io_* syscalls to be
> > * called on the context.
> > *
> > * There may be outstanding kiocbs, but free_ioctx() will explicitly wait on
> > * them.
> > */
> > -void exit_aio(struct mm_struct *mm)
> > +static int aio_exit(struct notifier_block *nb,
> > + unsigned long action, void *data)
> > {
> > + struct mm_struct *mm = data;
> > struct kioctx_table *table = rcu_dereference_raw(mm->ioctx_table);
> > int i;
> >
> > if (!table)
> > - return;
> > + return 0;
> >
> > for (i = 0; i < table->nr; ++i) {
> > struct kioctx *ctx = table->table[i];
> > @@ -796,10 +799,10 @@ void exit_aio(struct mm_struct *mm)
> > continue;
> > /*
> > * We don't need to bother with munmap() here - exit_mmap(mm)
> > - * is coming and it'll unmap everything. And we simply can't,
> > - * this is not necessarily our ->mm.
> > - * Since kill_ioctx() uses non-zero ->mmap_size as indicator
> > - * that it needs to unmap the area, just set it to 0.
> > + * have already been call and everything is unmap by now. But
> > + * to be safe set ->mmap_size to 0 since aio_free_ring() uses
> > + * non-zero ->mmap_size as indicator that it needs to unmap the
> > + * area.
> > */
>
> Actually, I think the original part of the comment about kill_ioctx
> was accurate, but the new reference to aio_free_ring looks like a typo
> (?). I'd write the entire comment as follows (I've dropped the leading
> whitespace, for email):
>
> /*
> * We don't need to bother with munmap() here - exit_mmap(mm)
> * has already been called and everything is unmapped by now.
> * But to be safe, set ->mmap_size to 0 since kill_ioctx() uses a
> * non-zero ->mmap_size as an indicator that it needs to unmap the
> * area.
> */
>

This is a rebase issue: the code changed and I updated the code but
not the comment.

>
> > ctx->mmap_size = 0;
> > kill_ioctx(mm, ctx, NULL);
> > @@ -807,6 +810,7 @@ void exit_aio(struct mm_struct *mm)
> >
> > RCU_INIT_POINTER(mm->ioctx_table, NULL);
> > kfree(table);
> > + return 0;
> > }
> >
> > static void put_reqs_available(struct kioctx *ctx, unsigned nr)
> > @@ -1629,3 +1633,14 @@ SYSCALL_DEFINE5(io_getevents, aio_context_t, ctx_id,
> > }
> > return ret;
> > }
> > +
> > +static struct notifier_block aio_mmput_nb = {
> > + .notifier_call = aio_exit,
> > + .priority = 1,
> > +};
> > +
> > +static int __init aio_init(void)
> > +{
> > + return mmput_register_notifier(&aio_mmput_nb);
> > +}
> > +subsys_initcall(aio_init);
> > diff --git a/include/linux/aio.h b/include/linux/aio.h
> > index d9c92da..6308fac 100644
> > --- a/include/linux/aio.h
> > +++ b/include/linux/aio.h
> > @@ -73,7 +73,6 @@ static inline void init_sync_kiocb(struct kiocb *kiocb, struct file *filp)
> > extern ssize_t wait_on_sync_kiocb(struct kiocb *iocb);
> > extern void aio_complete(struct kiocb *iocb, long res, long res2);
> > struct mm_struct;
> > -extern void exit_aio(struct mm_struct *mm);
> > extern long do_io_submit(aio_context_t ctx_id, long nr,
> > struct iocb __user *__user *iocbpp, bool compat);
> > void kiocb_set_cancel_fn(struct kiocb *req, kiocb_cancel_fn *cancel);
> > @@ -81,7 +80,6 @@ void kiocb_set_cancel_fn(struct kiocb *req, kiocb_cancel_fn *cancel);
> > static inline ssize_t wait_on_sync_kiocb(struct kiocb *iocb) { return 0; }
> > static inline void aio_complete(struct kiocb *iocb, long res, long res2) { }
> > struct mm_struct;
> > -static inline void exit_aio(struct mm_struct *mm) { }
> > static inline long do_io_submit(aio_context_t ctx_id, long nr,
> > struct iocb __user * __user *iocbpp,
> > bool compat) { return 0; }
> > diff --git a/include/linux/ksm.h b/include/linux/ksm.h
> > index 3be6bb1..84c184f 100644
> > --- a/include/linux/ksm.h
> > +++ b/include/linux/ksm.h
> > @@ -20,7 +20,6 @@ struct mem_cgroup;
> > int ksm_madvise(struct vm_area_struct *vma, unsigned long start,
> > unsigned long end, int advice, unsigned long *vm_flags);
> > int __ksm_enter(struct mm_struct *mm);
> > -void __ksm_exit(struct mm_struct *mm);
> >
> > static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
> > {
> > @@ -29,12 +28,6 @@ static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
> > return 0;
> > }
> >
> > -static inline void ksm_exit(struct mm_struct *mm)
> > -{
> > - if (test_bit(MMF_VM_MERGEABLE, &mm->flags))
> > - __ksm_exit(mm);
> > -}
> > -
> > /*
> > * A KSM page is one of those write-protected "shared pages" or "merged pages"
> > * which KSM maps into multiple mms, wherever identical anonymous page content
> > @@ -83,10 +76,6 @@ static inline int ksm_fork(struct mm_struct *mm, struct mm_struct *oldmm)
> > return 0;
> > }
> >
> > -static inline void ksm_exit(struct mm_struct *mm)
> > -{
> > -}
> > -
> > static inline int PageKsm(struct page *page)
> > {
> > return 0;
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index 322d4fc..428b3cf 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -2384,6 +2384,11 @@ static inline void mmdrop(struct mm_struct * mm)
> > __mmdrop(mm);
> > }
> >
> > +/* mmput call list of notifier and subsystem/module can register
> > + * new one through this call.
> > + */
> > +extern int mmput_register_notifier(struct notifier_block *nb);
> > +extern int mmput_unregister_notifier(struct notifier_block *nb);
> > /* mmput gets rid of the mappings and all user-space */
> > extern void mmput(struct mm_struct *);
> > /* Grab a reference to a task's mm, if it is not already going away */
> > diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
> > index 4f844c6..44e7267 100644
> > --- a/include/linux/uprobes.h
> > +++ b/include/linux/uprobes.h
> > @@ -120,7 +120,6 @@ extern int uprobe_pre_sstep_notifier(struct pt_regs *regs);
> > extern void uprobe_notify_resume(struct pt_regs *regs);
> > extern bool uprobe_deny_signal(void);
> > extern bool arch_uprobe_skip_sstep(struct arch_uprobe *aup, struct pt_regs *regs);
> > -extern void uprobe_clear_state(struct mm_struct *mm);
> > extern int arch_uprobe_analyze_insn(struct arch_uprobe *aup, struct mm_struct *mm, unsigned long addr);
> > extern int arch_uprobe_pre_xol(struct arch_uprobe *aup, struct pt_regs *regs);
> > extern int arch_uprobe_post_xol(struct arch_uprobe *aup, struct pt_regs *regs);
> > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > index 46b7c31..32b04dc 100644
> > --- a/kernel/events/uprobes.c
> > +++ b/kernel/events/uprobes.c
> > @@ -37,6 +37,7 @@
> > #include <linux/percpu-rwsem.h>
> > #include <linux/task_work.h>
> > #include <linux/shmem_fs.h>
> > +#include <linux/notifier.h>
> >
> > #include <linux/uprobes.h>
> >
> > @@ -1220,16 +1221,19 @@ static struct xol_area *get_xol_area(void)
> > /*
> > * uprobe_clear_state - Free the area allocated for slots.
> > */
> > -void uprobe_clear_state(struct mm_struct *mm)
> > +static int uprobe_clear_state(struct notifier_block *nb,
> > + unsigned long action, void *data)
> > {
> > + struct mm_struct *mm = data;
> > struct xol_area *area = mm->uprobes_state.xol_area;
> >
> > if (!area)
> > - return;
> > + return 0;
> >
> > put_page(area->page);
> > kfree(area->bitmap);
> > kfree(area);
> > + return 0;
> > }
> >
> > void uprobe_start_dup_mmap(void)
> > @@ -1979,9 +1983,14 @@ static struct notifier_block uprobe_exception_nb = {
> > .priority = INT_MAX-1, /* notified after kprobes, kgdb */
> > };
> >
> > +static struct notifier_block uprobe_mmput_nb = {
> > + .notifier_call = uprobe_clear_state,
> > + .priority = 0,
> > +};
> > +
> > static int __init init_uprobes(void)
> > {
> > - int i;
> > + int i, err;
> >
> > for (i = 0; i < UPROBES_HASH_SZ; i++)
> > mutex_init(&uprobes_mmap_mutex[i]);
> > @@ -1989,6 +1998,10 @@ static int __init init_uprobes(void)
> > if (percpu_init_rwsem(&dup_mmap_sem))
> > return -ENOMEM;
> >
> > + err = mmput_register_notifier(&uprobe_mmput_nb);
> > + if (err)
> > + return err;
> > +
> > return register_die_notifier(&uprobe_exception_nb);
> > }
> > __initcall(init_uprobes);
> > diff --git a/kernel/fork.c b/kernel/fork.c
> > index dd8864f..b448509 100644
> > --- a/kernel/fork.c
> > +++ b/kernel/fork.c
> > @@ -87,6 +87,8 @@
> > #define CREATE_TRACE_POINTS
> > #include <trace/events/task.h>
> >
> > +static BLOCKING_NOTIFIER_HEAD(mmput_notifier);
> > +
> > /*
> > * Protected counters by write_lock_irq(&tasklist_lock)
> > */
> > @@ -623,6 +625,21 @@ void __mmdrop(struct mm_struct *mm)
> > EXPORT_SYMBOL_GPL(__mmdrop);
> >
> > /*
> > + * Register a notifier that will be call by mmput
> > + */
> > +int mmput_register_notifier(struct notifier_block *nb)
> > +{
> > + return blocking_notifier_chain_register(&mmput_notifier, nb);
> > +}
> > +EXPORT_SYMBOL_GPL(mmput_register_notifier);
> > +
> > +int mmput_unregister_notifier(struct notifier_block *nb)
> > +{
> > + return blocking_notifier_chain_unregister(&mmput_notifier, nb);
> > +}
> > +EXPORT_SYMBOL_GPL(mmput_unregister_notifier);
> > +
> > +/*
> > * Decrement the use count and release all resources for an mm.
> > */
> > void mmput(struct mm_struct *mm)
> > @@ -630,11 +647,8 @@ void mmput(struct mm_struct *mm)
> > might_sleep();
> >
> > if (atomic_dec_and_test(&mm->mm_users)) {
> > - uprobe_clear_state(mm);
> > - exit_aio(mm);
> > - ksm_exit(mm);
> > - khugepaged_exit(mm); /* must run before exit_mmap */
> > exit_mmap(mm);
> > + blocking_notifier_call_chain(&mmput_notifier, 0, mm);
> > set_mm_exe_file(mm, NULL);
> > if (!list_empty(&mm->mmlist)) {
> > spin_lock(&mmlist_lock);
> > diff --git a/mm/ksm.c b/mm/ksm.c
> > index 346ddc9..cb1e976 100644
> > --- a/mm/ksm.c
> > +++ b/mm/ksm.c
> > @@ -37,6 +37,7 @@
> > #include <linux/freezer.h>
> > #include <linux/oom.h>
> > #include <linux/numa.h>
> > +#include <linux/notifier.h>
> >
> > #include <asm/tlbflush.h>
> > #include "internal.h"
> > @@ -1586,7 +1587,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
> > ksm_scan.mm_slot = slot;
> > spin_unlock(&ksm_mmlist_lock);
> > /*
> > - * Although we tested list_empty() above, a racing __ksm_exit
> > + * Although we tested list_empty() above, a racing ksm_exit
> > * of the last mm on the list may have removed it since then.
> > */
> > if (slot == &ksm_mm_head)
> > @@ -1658,9 +1659,9 @@ next_mm:
> > /*
> > * We've completed a full scan of all vmas, holding mmap_sem
> > * throughout, and found no VM_MERGEABLE: so do the same as
> > - * __ksm_exit does to remove this mm from all our lists now.
> > - * This applies either when cleaning up after __ksm_exit
> > - * (but beware: we can reach here even before __ksm_exit),
> > + * ksm_exit does to remove this mm from all our lists now.
> > + * This applies either when cleaning up after ksm_exit
> > + * (but beware: we can reach here even before ksm_exit),
> > * or when all VM_MERGEABLE areas have been unmapped (and
> > * mmap_sem then protects against race with MADV_MERGEABLE).
> > */
> > @@ -1821,11 +1822,16 @@ int __ksm_enter(struct mm_struct *mm)
> > return 0;
> > }
> >
> > -void __ksm_exit(struct mm_struct *mm)
> > +static int ksm_exit(struct notifier_block *nb,
> > + unsigned long action, void *data)
> > {
> > + struct mm_struct *mm = data;
> > struct mm_slot *mm_slot;
> > int easy_to_free = 0;
> >
> > + if (!test_bit(MMF_VM_MERGEABLE, &mm->flags))
> > + return 0;
> > +
> > /*
> > * This process is exiting: if it's straightforward (as is the
> > * case when ksmd was never running), free mm_slot immediately.
> > @@ -1857,6 +1863,7 @@ void __ksm_exit(struct mm_struct *mm)
> > down_write(&mm->mmap_sem);
> > up_write(&mm->mmap_sem);
> > }
> > + return 0;
> > }
> >
> > struct page *ksm_might_need_to_copy(struct page *page,
> > @@ -2305,11 +2312,20 @@ static struct attribute_group ksm_attr_group = {
> > };
> > #endif /* CONFIG_SYSFS */
> >
> > +static struct notifier_block ksm_mmput_nb = {
> > + .notifier_call = ksm_exit,
> > + .priority = 2,
> > +};
> > +
> > static int __init ksm_init(void)
> > {
> > struct task_struct *ksm_thread;
> > int err;
> >
> > + err = mmput_register_notifier(&ksm_mmput_nb);
> > + if (err)
> > + return err;
> > +
>
> In order to be perfectly consistent with this routine's existing code, you
> would want to write:
>
> if (err)
> goto out;
>
> ...but it does the same thing as your code. It's just a consistency thing.
>
> > err = ksm_slab_init();
> > if (err)
> > goto out;
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 61aec93..b684a21 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -2775,6 +2775,9 @@ void exit_mmap(struct mm_struct *mm)
> > struct vm_area_struct *vma;
> > unsigned long nr_accounted = 0;
> >
> > + /* Important to call this first. */
> > + khugepaged_exit(mm);
> > +
> > /* mm's last user has gone, and its about to be pulled down */
> > mmu_notifier_release(mm);
> >
> > --
> > 1.9.0
> >
>
> Above points are extremely minor, so:
>
> Reviewed-by: John Hubbard <[email protected]>

I will respin nonetheless with the comment fixed.
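
For reference, a new subsystem built as a module would hook the chain
roughly as follows. The "example" names are illustrative; only
mmput_register_notifier() and mmput_unregister_notifier() come from the
patch:

#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/sched.h>

static int example_mm_exit(struct notifier_block *nb,
                           unsigned long action, void *data)
{
        struct mm_struct *mm = data;

        /* exit_mmap() and mmu_notifier_release() have already run for
         * this mm, so only per-mm bookkeeping is left to tear down.
         */
        pr_debug("example: mm %p is going away\n", mm);
        return 0;
}

static struct notifier_block example_mmput_nb = {
        .notifier_call  = example_mm_exit,
        .priority       = 0,
};

static int __init example_init(void)
{
        return mmput_register_notifier(&example_mmput_nb);
}

static void __exit example_exit(void)
{
        mmput_unregister_notifier(&example_mmput_nb);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");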

>
> thanks,
> John H.

2014-06-30 15:37:47

by Joerg Roedel

[permalink] [raw]
Subject: Re: [PATCH 1/6] mmput: use notifier chain to call subsystem exit handler.

On Fri, Jun 27, 2014 at 10:00:19PM -0400, Jérôme Glisse wrote:
> Note that this patch also move the call to cleanup functions after
> exit_mmap so that new call back can assume that mmu_notifier_release
> have already been call. This does not impact existing cleanup functions
> as they do not rely on anything that exit_mmap is freeing. Also moved
> khugepaged_exit to exit_mmap so that ordering is preserved for that
> function.

What this patch does is duplicate the functionality of the
mmu_notifier_release callback. Why is it needed?


Joerg

2014-06-30 15:57:13

by Jerome Glisse

[permalink] [raw]
Subject: Re: [PATCH 3/6] mmu_notifier: add event information to address invalidation v2

On Sun, Jun 29, 2014 at 10:22:57PM -0700, John Hubbard wrote:
> On Fri, 27 Jun 2014, Jérôme Glisse wrote:
>
> > From: Jérôme Glisse <[email protected]>
> >
> > The event information will be useful for new users of the mmu_notifier API.
> > The event argument differentiates between a vma disappearing, a page
> > being write protected, or simply a page being unmapped. This allows a new
> > user to take a different path for each event; for instance, on unmap
> > the resources used to track a vma are still valid and should stay around,
> > while if the event says that a vma is being destroyed, any
> > resources used to track this vma can be freed.
> >
> > Changed since v1:
> > - renamed action into event (updated commit message too).
> > - simplified the event names and clarified their intended usage
> > also documenting what expectations the listener can have with
> > respect to each event.
> >
> > Signed-off-by: Jérôme Glisse <[email protected]>
> > ---
> > drivers/gpu/drm/i915/i915_gem_userptr.c | 3 +-
> > drivers/iommu/amd_iommu_v2.c | 14 ++--
> > drivers/misc/sgi-gru/grutlbpurge.c | 9 ++-
> > drivers/xen/gntdev.c | 9 ++-
> > fs/proc/task_mmu.c | 6 +-
> > include/linux/hugetlb.h | 7 +-
> > include/linux/mmu_notifier.h | 117 ++++++++++++++++++++++++++------
> > kernel/events/uprobes.c | 10 ++-
> > mm/filemap_xip.c | 2 +-
> > mm/huge_memory.c | 51 ++++++++------
> > mm/hugetlb.c | 25 ++++---
> > mm/ksm.c | 18 +++--
> > mm/memory.c | 27 +++++---
> > mm/migrate.c | 9 ++-
> > mm/mmu_notifier.c | 28 +++++---
> > mm/mprotect.c | 33 ++++++---
> > mm/mremap.c | 6 +-
> > mm/rmap.c | 24 +++++--
> > virt/kvm/kvm_main.c | 12 ++--
> > 19 files changed, 291 insertions(+), 119 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
> > index 21ea928..ed6f35e 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_userptr.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
> > @@ -56,7 +56,8 @@ struct i915_mmu_object {
> > static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
> > struct mm_struct *mm,
> > unsigned long start,
> > - unsigned long end)
> > + unsigned long end,
> > + enum mmu_event event)
> > {
> > struct i915_mmu_notifier *mn = container_of(_mn, struct i915_mmu_notifier, mn);
> > struct interval_tree_node *it = NULL;
> > diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c
> > index 499b436..2bb9771 100644
> > --- a/drivers/iommu/amd_iommu_v2.c
> > +++ b/drivers/iommu/amd_iommu_v2.c
> > @@ -414,21 +414,25 @@ static int mn_clear_flush_young(struct mmu_notifier *mn,
> > static void mn_change_pte(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > unsigned long address,
> > - pte_t pte)
> > + pte_t pte,
> > + enum mmu_event event)
> > {
> > __mn_flush_page(mn, address);
> > }
> >
> > static void mn_invalidate_page(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > - unsigned long address)
> > + unsigned long address,
> > + enum mmu_event event)
> > {
> > __mn_flush_page(mn, address);
> > }
> >
> > static void mn_invalidate_range_start(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > - unsigned long start, unsigned long end)
> > + unsigned long start,
> > + unsigned long end,
> > + enum mmu_event event)
> > {
> > struct pasid_state *pasid_state;
> > struct device_state *dev_state;
> > @@ -449,7 +453,9 @@ static void mn_invalidate_range_start(struct mmu_notifier *mn,
> >
> > static void mn_invalidate_range_end(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > - unsigned long start, unsigned long end)
> > + unsigned long start,
> > + unsigned long end,
> > + enum mmu_event event)
> > {
> > struct pasid_state *pasid_state;
> > struct device_state *dev_state;
> > diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c
> > index 2129274..e67fed1 100644
> > --- a/drivers/misc/sgi-gru/grutlbpurge.c
> > +++ b/drivers/misc/sgi-gru/grutlbpurge.c
> > @@ -221,7 +221,8 @@ void gru_flush_all_tlb(struct gru_state *gru)
> > */
> > static void gru_invalidate_range_start(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > - unsigned long start, unsigned long end)
> > + unsigned long start, unsigned long end,
> > + enum mmu_event event)
> > {
> > struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct,
> > ms_notifier);
> > @@ -235,7 +236,8 @@ static void gru_invalidate_range_start(struct mmu_notifier *mn,
> >
> > static void gru_invalidate_range_end(struct mmu_notifier *mn,
> > struct mm_struct *mm, unsigned long start,
> > - unsigned long end)
> > + unsigned long end,
> > + enum mmu_event event)
> > {
> > struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct,
> > ms_notifier);
> > @@ -248,7 +250,8 @@ static void gru_invalidate_range_end(struct mmu_notifier *mn,
> > }
> >
> > static void gru_invalidate_page(struct mmu_notifier *mn, struct mm_struct *mm,
> > - unsigned long address)
> > + unsigned long address,
> > + enum mmu_event event)
> > {
> > struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct,
> > ms_notifier);
> > diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> > index 073b4a1..fe9da94 100644
> > --- a/drivers/xen/gntdev.c
> > +++ b/drivers/xen/gntdev.c
> > @@ -428,7 +428,9 @@ static void unmap_if_in_range(struct grant_map *map,
> >
> > static void mn_invl_range_start(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > - unsigned long start, unsigned long end)
> > + unsigned long start,
> > + unsigned long end,
> > + enum mmu_event event)
> > {
> > struct gntdev_priv *priv = container_of(mn, struct gntdev_priv, mn);
> > struct grant_map *map;
> > @@ -445,9 +447,10 @@ static void mn_invl_range_start(struct mmu_notifier *mn,
> >
> > static void mn_invl_page(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > - unsigned long address)
> > + unsigned long address,
> > + enum mmu_event event)
> > {
> > - mn_invl_range_start(mn, mm, address, address + PAGE_SIZE);
> > + mn_invl_range_start(mn, mm, address, address + PAGE_SIZE, event);
> > }
> >
> > static void mn_release(struct mmu_notifier *mn,
> > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > index cfa63ee..e9e79f7 100644
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/proc/task_mmu.c
> > @@ -830,7 +830,8 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
> > };
> > down_read(&mm->mmap_sem);
> > if (type == CLEAR_REFS_SOFT_DIRTY)
> > - mmu_notifier_invalidate_range_start(mm, 0, -1);
> > + mmu_notifier_invalidate_range_start(mm, 0,
> > + -1, MMU_STATUS);
> > for (vma = mm->mmap; vma; vma = vma->vm_next) {
> > cp.vma = vma;
> > if (is_vm_hugetlb_page(vma))
> > @@ -858,7 +859,8 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
> > &clear_refs_walk);
> > }
> > if (type == CLEAR_REFS_SOFT_DIRTY)
> > - mmu_notifier_invalidate_range_end(mm, 0, -1);
> > + mmu_notifier_invalidate_range_end(mm, 0,
> > + -1, MMU_STATUS);
> > flush_tlb_mm(mm);
> > up_read(&mm->mmap_sem);
> > mmput(mm);
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index 6a836ef..d7e512f 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -6,6 +6,7 @@
> > #include <linux/fs.h>
> > #include <linux/hugetlb_inline.h>
> > #include <linux/cgroup.h>
> > +#include <linux/mmu_notifier.h>
> > #include <linux/list.h>
> > #include <linux/kref.h>
> >
> > @@ -103,7 +104,8 @@ struct page *follow_huge_pud(struct mm_struct *mm, unsigned long address,
> > int pmd_huge(pmd_t pmd);
> > int pud_huge(pud_t pmd);
> > unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> > - unsigned long address, unsigned long end, pgprot_t newprot);
> > + unsigned long address, unsigned long end, pgprot_t newprot,
> > + enum mmu_event event);
> >
> > #else /* !CONFIG_HUGETLB_PAGE */
> >
> > @@ -148,7 +150,8 @@ static inline bool isolate_huge_page(struct page *page, struct list_head *list)
> > #define is_hugepage_active(x) false
> >
> > static inline unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> > - unsigned long address, unsigned long end, pgprot_t newprot)
> > + unsigned long address, unsigned long end, pgprot_t newprot,
> > + enum mmu_event event)
> > {
> > return 0;
> > }
> > diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> > index deca874..82e9577 100644
> > --- a/include/linux/mmu_notifier.h
> > +++ b/include/linux/mmu_notifier.h
> > @@ -9,6 +9,52 @@
> > struct mmu_notifier;
> > struct mmu_notifier_ops;
> >
> > +/* Event report finer informations to the callback allowing the event listener
> > + * to take better action. There are only few kinds of events :
> > + *
> > + * - MMU_MIGRATE memory is migrating from one page to another thus all write
> > + * access must stop after invalidate_range_start callback returns. And no
> > + * read access should be allowed either as new page can be remapped with
> > + * write access before the invalidate_range_end callback happen and thus
> > + * any read access to old page might access outdated informations. Several
> > + * source to this event like page moving to swap (for various reasons like
> > + * page reclaim), outcome of mremap syscall, migration for numa reasons,
> > + * balancing memory pool, write fault on read only page trigger a new page
> > + * to be allocated and used, ...
> > + * - MMU_MPROT_NONE memory access protection is change, no page in the range
> > + * can be accessed in either read or write mode but the range of address
> > + * is still valid. All access are still fine until invalidate_range_end
> > + * callback returns.
> > + * - MMU_MPROT_RONLY memory access proctection is changing to read only.
> > + * All access are still fine until invalidate_range_end callback returns.
> > + * - MMU_MPROT_RANDW memory access proctection is changing to read an write.
> > + * All access are still fine until invalidate_range_end callback returns.
> > + * - MMU_MPROT_WONLY memory access proctection is changing to write only.
> > + * All access are still fine until invalidate_range_end callback returns.
> > + * - MMU_MUNMAP the range is being unmaped (outcome of a munmap syscall). It
> > + * is fine to still have read/write access until the invalidate_range_end
> > + * callback returns. This also imply that secondary page table can be trim
> > + * as the address range is no longer valid.
> > + * - MMU_WB memory is being write back to disk, all write access must stop
> > + * after invalidate_range_start callback returns. Read access are still
> > + * allowed.
> > + * - MMU_STATUS memory status change, like soft dirty.
> > + *
> > + * In doubt when adding a new notifier caller use MMU_MIGRATE it will always
> > + * result in expected behavior but will not allow listener a chance to optimize
> > + * its events.
> > + */
>
> Here is a pass at tightening up that documentation:
>
> /* MMU Events report fine-grained information to the callback routine, allowing
> * the event listener to make a more informed decision as to what action to
> * take. The event types are:
> *
> * - MMU_MIGRATE: memory is migrating from one page to another, thus all write
> * access must stop after invalidate_range_start callback returns.
> * Furthermore, no read access should be allowed either, as a new page can
> * be remapped with write access before the invalidate_range_end callback
> * happens and thus any read access to old page might read stale data. There
> * are several sources for this event, including:
> *
> * - A page moving to swap (for various reasons, including page
> * reclaim),
> * - An mremap syscall,
> * - migration for NUMA reasons,
> * - balancing the memory pool,
> * - write fault on a read-only page triggers a new page to be allocated
> * and used,
> * - and more that are not listed here.
> *
> * - MMU_MPROT_NONE: memory access protection is changing to "none": no page
> * in the range can be accessed in either read or write mode but the range
> * of addresses is still valid. However, access is still allowed, up until
> * invalidate_range_end callback returns.
> *
> * - MMU_MPROT_RONLY: memory access protection is changing to read only.
> * However, access is still allowed, up until invalidate_range_end callback
> * returns.
> *
> * - MMU_MPROT_RANDW: memory access protection is changing to read and write.
> * However, access is still allowed, up until invalidate_range_end callback
> * returns.
> *
> * - MMU_MPROT_WONLY: memory access protection is changing to write only.
> * However, access is still allowed, up until invalidate_range_end callback
> * returns.
> *
> * - MMU_MUNMAP: the range is being unmapped (outcome of a munmap syscall).
> * However, access is still allowed, up until invalidate_range_end callback
> * returns. This also implies that the secondary page table can be trimmed,
> * because the address range is no longer valid.
> *
> * - MMU_WB: memory is being written back to disk, all write accesses must
> * stop after invalidate_range_start callback returns. Read accesses are still
> * allowed.
> *
> * - MMU_STATUS: memory status change, like soft dirty, or huge page
> * splitting (in place).
> *
> * If in doubt when adding a new notifier caller, please use MMU_MIGRATE,
> * because it will always lead to reasonable behavior, but will not allow the
> * listener a chance to optimize its events.
> */
>
> Mostly just cleaning up the wording, except that I did add "huge page
> splitting" to the cases that could cause an MMU_STATUS to fire.
>

Yes, your wording is better than mine. The huge page case is kind of a
weird one, as it does not always reuse the same page (THP vs. hugetlbfs).
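
For illustration, a listener can use the event argument to pick a cheaper
path. The mirror structure and range helpers below are hypothetical; only
the callback signature and the enum mmu_event values come from the patch:

#include <linux/mmu_notifier.h>

struct example_mirror {
        struct mmu_notifier mn;
        /* device page table state would live here */
};

/* Hypothetical range helpers, to be provided by the driver. */
static void example_mirror_unmap_range(struct example_mirror *mirror,
                                       unsigned long start,
                                       unsigned long end);
static void example_mirror_wrprotect_range(struct example_mirror *mirror,
                                           unsigned long start,
                                           unsigned long end);
static void example_mirror_free_range(struct example_mirror *mirror,
                                      unsigned long start,
                                      unsigned long end);

static void example_invalidate_range_start(struct mmu_notifier *mn,
                                           struct mm_struct *mm,
                                           unsigned long start,
                                           unsigned long end,
                                           enum mmu_event event)
{
        struct example_mirror *mirror;

        mirror = container_of(mn, struct example_mirror, mn);

        switch (event) {
        case MMU_MUNMAP:
                /* The address range itself is going away: the secondary
                 * page table can be trimmed and tracking freed too.
                 */
                example_mirror_free_range(mirror, start, end);
                break;
        case MMU_WB:
                /* Writeback: revoke write access, keep read mappings. */
                example_mirror_wrprotect_range(mirror, start, end);
                break;
        case MMU_STATUS:
                /* Status-only change: mappings stay valid, a listener
                 * might only update its own tracking here.
                 */
                break;
        default:
                /* MMU_MIGRATE and protection changes: full invalidation
                 * is always a safe fallback.
                 */
                example_mirror_unmap_range(mirror, start, end);
                break;
        }
}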

> > +enum mmu_event {
> > + MMU_MIGRATE = 0,
> > + MMU_MPROT_NONE,
> > + MMU_MPROT_RONLY,
> > + MMU_MPROT_RANDW,
> > + MMU_MPROT_WONLY,
> > + MMU_MUNMAP,
> > + MMU_STATUS,
> > + MMU_WB,
> > +};
> > +
> > #ifdef CONFIG_MMU_NOTIFIER
> >
> > /*
> > @@ -79,7 +125,8 @@ struct mmu_notifier_ops {
> > void (*change_pte)(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > unsigned long address,
> > - pte_t pte);
> > + pte_t pte,
> > + enum mmu_event event);
> >
> > /*
> > * Before this is invoked any secondary MMU is still ok to
> > @@ -90,7 +137,8 @@ struct mmu_notifier_ops {
> > */
> > void (*invalidate_page)(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > - unsigned long address);
> > + unsigned long address,
> > + enum mmu_event event);
> >
> > /*
> > * invalidate_range_start() and invalidate_range_end() must be
> > @@ -137,10 +185,14 @@ struct mmu_notifier_ops {
> > */
> > void (*invalidate_range_start)(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > - unsigned long start, unsigned long end);
> > + unsigned long start,
> > + unsigned long end,
> > + enum mmu_event event);
> > void (*invalidate_range_end)(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > - unsigned long start, unsigned long end);
> > + unsigned long start,
> > + unsigned long end,
> > + enum mmu_event event);
> > };
> >
> > /*
> > @@ -177,13 +229,20 @@ extern int __mmu_notifier_clear_flush_young(struct mm_struct *mm,
> > extern int __mmu_notifier_test_young(struct mm_struct *mm,
> > unsigned long address);
> > extern void __mmu_notifier_change_pte(struct mm_struct *mm,
> > - unsigned long address, pte_t pte);
> > + unsigned long address,
> > + pte_t pte,
> > + enum mmu_event event);
> > extern void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> > - unsigned long address);
> > + unsigned long address,
> > + enum mmu_event event);
> > extern void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > - unsigned long start, unsigned long end);
> > + unsigned long start,
> > + unsigned long end,
> > + enum mmu_event event);
> > extern void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> > - unsigned long start, unsigned long end);
> > + unsigned long start,
> > + unsigned long end,
> > + enum mmu_event event);
> >
> > static inline void mmu_notifier_release(struct mm_struct *mm)
> > {
> > @@ -208,31 +267,38 @@ static inline int mmu_notifier_test_young(struct mm_struct *mm,
> > }
> >
> > static inline void mmu_notifier_change_pte(struct mm_struct *mm,
> > - unsigned long address, pte_t pte)
> > + unsigned long address,
> > + pte_t pte,
> > + enum mmu_event event)
> > {
> > if (mm_has_notifiers(mm))
> > - __mmu_notifier_change_pte(mm, address, pte);
> > + __mmu_notifier_change_pte(mm, address, pte, event);
> > }
> >
> > static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
> > - unsigned long address)
> > + unsigned long address,
> > + enum mmu_event event)
> > {
> > if (mm_has_notifiers(mm))
> > - __mmu_notifier_invalidate_page(mm, address);
> > + __mmu_notifier_invalidate_page(mm, address, event);
> > }
> >
> > static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > - unsigned long start, unsigned long end)
> > + unsigned long start,
> > + unsigned long end,
> > + enum mmu_event event)
> > {
> > if (mm_has_notifiers(mm))
> > - __mmu_notifier_invalidate_range_start(mm, start, end);
> > + __mmu_notifier_invalidate_range_start(mm, start, end, event);
> > }
> >
> > static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> > - unsigned long start, unsigned long end)
> > + unsigned long start,
> > + unsigned long end,
> > + enum mmu_event event)
> > {
> > if (mm_has_notifiers(mm))
> > - __mmu_notifier_invalidate_range_end(mm, start, end);
> > + __mmu_notifier_invalidate_range_end(mm, start, end, event);
> > }
> >
> > static inline void mmu_notifier_mm_init(struct mm_struct *mm)
> > @@ -278,13 +344,13 @@ static inline void mmu_notifier_mm_destroy(struct mm_struct *mm)
> > * old page would remain mapped readonly in the secondary MMUs after the new
> > * page is already writable by some CPU through the primary MMU.
> > */
> > -#define set_pte_at_notify(__mm, __address, __ptep, __pte) \
> > +#define set_pte_at_notify(__mm, __address, __ptep, __pte, __event) \
> > ({ \
> > struct mm_struct *___mm = __mm; \
> > unsigned long ___address = __address; \
> > pte_t ___pte = __pte; \
> > \
> > - mmu_notifier_change_pte(___mm, ___address, ___pte); \
> > + mmu_notifier_change_pte(___mm, ___address, ___pte, __event); \
> > set_pte_at(___mm, ___address, __ptep, ___pte); \
> > })
> >
> > @@ -307,22 +373,29 @@ static inline int mmu_notifier_test_young(struct mm_struct *mm,
> > }
> >
> > static inline void mmu_notifier_change_pte(struct mm_struct *mm,
> > - unsigned long address, pte_t pte)
> > + unsigned long address,
> > + pte_t pte,
> > + enum mmu_event event)
> > {
> > }
> >
> > static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
> > - unsigned long address)
> > + unsigned long address,
> > + enum mmu_event event)
> > {
> > }
> >
> > static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > - unsigned long start, unsigned long end)
> > + unsigned long start,
> > + unsigned long end,
> > + enum mmu_event event)
> > {
> > }
> >
> > static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> > - unsigned long start, unsigned long end)
> > + unsigned long start,
> > + unsigned long end,
> > + enum mmu_event event)
> > {
> > }
> >
> > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > index 32b04dc..296f81e 100644
> > --- a/kernel/events/uprobes.c
> > +++ b/kernel/events/uprobes.c
> > @@ -177,7 +177,8 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
> > /* For try_to_free_swap() and munlock_vma_page() below */
> > lock_page(page);
> >
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > err = -EAGAIN;
> > ptep = page_check_address(page, mm, addr, &ptl, 0);
> > if (!ptep)
> > @@ -195,7 +196,9 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
> >
> > flush_cache_page(vma, addr, pte_pfn(*ptep));
> > ptep_clear_flush(vma, addr, ptep);
> > - set_pte_at_notify(mm, addr, ptep, mk_pte(kpage, vma->vm_page_prot));
> > + set_pte_at_notify(mm, addr, ptep,
> > + mk_pte(kpage, vma->vm_page_prot),
> > + MMU_MIGRATE);
> >
> > page_remove_rmap(page);
> > if (!page_mapped(page))
> > @@ -209,7 +212,8 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
> > err = 0;
> > unlock:
> > mem_cgroup_cancel_charge(kpage, memcg);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > unlock_page(page);
> > return err;
> > }
> > diff --git a/mm/filemap_xip.c b/mm/filemap_xip.c
> > index d8d9fe3..a2b3f09 100644
> > --- a/mm/filemap_xip.c
> > +++ b/mm/filemap_xip.c
> > @@ -198,7 +198,7 @@ retry:
> > BUG_ON(pte_dirty(pteval));
> > pte_unmap_unlock(pte, ptl);
> > /* must invalidate_page _before_ freeing the page */
> > - mmu_notifier_invalidate_page(mm, address);
> > + mmu_notifier_invalidate_page(mm, address, MMU_MIGRATE);
> > page_cache_release(page);
> > }
> > }
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 5d562a9..fa30857 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -1020,6 +1020,11 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
> > set_page_private(pages[i], (unsigned long)memcg);
> > }
> >
> > + mmun_start = haddr;
> > + mmun_end = haddr + HPAGE_PMD_SIZE;
> > + mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > +
> > for (i = 0; i < HPAGE_PMD_NR; i++) {
> > copy_user_highpage(pages[i], page + i,
> > haddr + PAGE_SIZE * i, vma);
> > @@ -1027,10 +1032,6 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
> > cond_resched();
> > }
> >
> > - mmun_start = haddr;
> > - mmun_end = haddr + HPAGE_PMD_SIZE;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> > -
> > ptl = pmd_lock(mm, pmd);
> > if (unlikely(!pmd_same(*pmd, orig_pmd)))
> > goto out_free_pages;
>
> So, that looks like you are fixing a pre-existing bug here? The
> invalidate_range_start call is now happening *before* we copy the pages. That seems
> correct, although this is starting to get into code I'm less comfortable
> with (huge pages). But I think it's worth mentioning in the commit
> message.

Yes, I should actually split the fix out of this patch; I will respin with
the fix as a preparatory patch to this one.

>
> > @@ -1063,7 +1064,8 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
> > page_remove_rmap(page);
> > spin_unlock(ptl);
> >
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> >
> > ret |= VM_FAULT_WRITE;
> > put_page(page);
> > @@ -1073,7 +1075,8 @@ out:
> >
> > out_free_pages:
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > for (i = 0; i < HPAGE_PMD_NR; i++) {
> > memcg = (void *)page_private(pages[i]);
> > set_page_private(pages[i], 0);
> > @@ -1157,16 +1160,17 @@ alloc:
> >
> > count_vm_event(THP_FAULT_ALLOC);
> >
> > + mmun_start = haddr;
> > + mmun_end = haddr + HPAGE_PMD_SIZE;
> > + mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > +
> > if (!page)
> > clear_huge_page(new_page, haddr, HPAGE_PMD_NR);
> > else
> > copy_user_huge_page(new_page, page, haddr, vma, HPAGE_PMD_NR);
> > __SetPageUptodate(new_page);
> >
> > - mmun_start = haddr;
> > - mmun_end = haddr + HPAGE_PMD_SIZE;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> > -
>
> Another bug fix, OK.
>
> > spin_lock(ptl);
> > if (page)
> > put_user_huge_page(page);
> > @@ -1197,7 +1201,8 @@ alloc:
> > }
> > spin_unlock(ptl);
> > out_mn:
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > out:
> > return ret;
> > out_unlock:
> > @@ -1632,7 +1637,8 @@ static int __split_huge_page_splitting(struct page *page,
> > const unsigned long mmun_start = address;
> > const unsigned long mmun_end = address + HPAGE_PMD_SIZE;
> >
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmun_end, MMU_STATUS);
>
> OK, just to be sure: we are not moving the page contents at this point,
> right? Just changing the page table from a single "huge" entry, into lots
> of little 4K page entries? If so, then MMU_STATUS seems correct, but we
> should add that case to the "Event types" documentation above.

Yes correct.

>
> > pmd = page_check_address_pmd(page, mm, address,
> > PAGE_CHECK_ADDRESS_PMD_NOTSPLITTING_FLAG, &ptl);
> > if (pmd) {
> > @@ -1647,7 +1653,8 @@ static int __split_huge_page_splitting(struct page *page,
> > ret = 1;
> > spin_unlock(ptl);
> > }
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_STATUS);
> >
> > return ret;
> > }
> > @@ -2446,7 +2453,8 @@ static void collapse_huge_page(struct mm_struct *mm,
> >
> > mmun_start = address;
> > mmun_end = address + HPAGE_PMD_SIZE;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
> > /*
> > * After this gup_fast can't run anymore. This also removes
> > @@ -2456,7 +2464,8 @@ static void collapse_huge_page(struct mm_struct *mm,
> > */
> > _pmd = pmdp_clear_flush(vma, address, pmd);
> > spin_unlock(pmd_ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> >
> > spin_lock(pte_ptl);
> > isolated = __collapse_huge_page_isolate(vma, address, pte);
> > @@ -2845,24 +2854,28 @@ void __split_huge_page_pmd(struct vm_area_struct *vma, unsigned long address,
> > mmun_start = haddr;
> > mmun_end = haddr + HPAGE_PMD_SIZE;
> > again:
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
>
> Just checking: this is MMU_MIGRATE, instead of MMU_STATUS, because we are
> actually moving data? (The pages backing the page table?)

Well, in truth I think this mmu_notifier call should change, as what happens
depends on the branch taken. But I am guessing the calls were added there
because of the spinlock. I will just remove them, with an explanation, in a
preparatory patch to this one. The issue is the huge zero page case, but I
think it is fine to call the mmu_notifier after having updated the CPU page
table in that case.

>
> > ptl = pmd_lock(mm, pmd);
> > if (unlikely(!pmd_trans_huge(*pmd))) {
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > return;
> > }
> > if (is_huge_zero_pmd(*pmd)) {
> > __split_huge_zero_page_pmd(vma, haddr, pmd);
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > return;
> > }
> > page = pmd_page(*pmd);
> > VM_BUG_ON_PAGE(!page_count(page), page);
> > get_page(page);
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> >
> > split_huge_page(page);
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 7faab71..73e1576 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -2565,7 +2565,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> > mmun_start = vma->vm_start;
> > mmun_end = vma->vm_end;
> > if (cow)
> > - mmu_notifier_invalidate_range_start(src, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_start(src, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> >
> > for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
> > spinlock_t *src_ptl, *dst_ptl;
> > @@ -2615,7 +2616,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> > }
> >
> > if (cow)
> > - mmu_notifier_invalidate_range_end(src, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(src, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> >
> > return ret;
> > }
> > @@ -2641,7 +2643,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > BUG_ON(end & ~huge_page_mask(h));
> >
> > tlb_start_vma(tlb, vma);
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > again:
> > for (address = start; address < end; address += sz) {
> > ptep = huge_pte_offset(mm, address);
> > @@ -2712,7 +2715,8 @@ unlock:
> > if (address < end && !ref_page)
> > goto again;
> > }
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > tlb_end_vma(tlb, vma);
> > }
> >
> > @@ -2899,7 +2903,8 @@ retry_avoidcopy:
> >
> > mmun_start = address & huge_page_mask(h);
> > mmun_end = mmun_start + huge_page_size(h);
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > /*
> > * Retake the page table lock to check for racing updates
> > * before the page tables are altered
> > @@ -2919,7 +2924,8 @@ retry_avoidcopy:
> > new_page = old_page;
> > }
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > page_cache_release(new_page);
> > page_cache_release(old_page);
> >
> > @@ -3344,7 +3350,8 @@ same_page:
> > }
> >
> > unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> > - unsigned long address, unsigned long end, pgprot_t newprot)
> > + unsigned long address, unsigned long end, pgprot_t newprot,
> > + enum mmu_event event)
> > {
> > struct mm_struct *mm = vma->vm_mm;
> > unsigned long start = address;
> > @@ -3356,7 +3363,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> > BUG_ON(address >= end);
> > flush_cache_range(vma, address, end);
> >
> > - mmu_notifier_invalidate_range_start(mm, start, end);
> > + mmu_notifier_invalidate_range_start(mm, start, end, event);
> > mutex_lock(&vma->vm_file->f_mapping->i_mmap_mutex);
> > for (; address < end; address += huge_page_size(h)) {
> > spinlock_t *ptl;
> > @@ -3386,7 +3393,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> > */
> > flush_tlb_range(vma, start, end);
> > mutex_unlock(&vma->vm_file->f_mapping->i_mmap_mutex);
> > - mmu_notifier_invalidate_range_end(mm, start, end);
> > + mmu_notifier_invalidate_range_end(mm, start, end, event);
> >
> > return pages << h->order;
> > }
> > diff --git a/mm/ksm.c b/mm/ksm.c
> > index cb1e976..4b659f1 100644
> > --- a/mm/ksm.c
> > +++ b/mm/ksm.c
> > @@ -873,7 +873,8 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
> >
> > mmun_start = addr;
> > mmun_end = addr + PAGE_SIZE;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmun_end, MMU_MPROT_RONLY);
> >
> > ptep = page_check_address(page, mm, addr, &ptl, 0);
> > if (!ptep)
> > @@ -905,7 +906,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
> > if (pte_dirty(entry))
> > set_page_dirty(page);
> > entry = pte_mkclean(pte_wrprotect(entry));
> > - set_pte_at_notify(mm, addr, ptep, entry);
> > + set_pte_at_notify(mm, addr, ptep, entry, MMU_MPROT_RONLY);
> > }
> > *orig_pte = *ptep;
> > err = 0;
> > @@ -913,7 +914,8 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
> > out_unlock:
> > pte_unmap_unlock(ptep, ptl);
> > out_mn:
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MPROT_RONLY);
> > out:
> > return err;
> > }
> > @@ -949,7 +951,8 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
> >
> > mmun_start = addr;
> > mmun_end = addr + PAGE_SIZE;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> >
> > ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
> > if (!pte_same(*ptep, orig_pte)) {
> > @@ -962,7 +965,9 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
> >
> > flush_cache_page(vma, addr, pte_pfn(*ptep));
> > ptep_clear_flush(vma, addr, ptep);
> > - set_pte_at_notify(mm, addr, ptep, mk_pte(kpage, vma->vm_page_prot));
> > + set_pte_at_notify(mm, addr, ptep,
> > + mk_pte(kpage, vma->vm_page_prot),
> > + MMU_MIGRATE);
> >
> > page_remove_rmap(page);
> > if (!page_mapped(page))
> > @@ -972,7 +977,8 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
> > pte_unmap_unlock(ptep, ptl);
> > err = 0;
> > out_mn:
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > out:
> > return err;
> > }
> > diff --git a/mm/memory.c b/mm/memory.c
> > index 09e2cd0..d3908f0 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -1050,7 +1050,7 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > mmun_end = end;
> > if (is_cow)
> > mmu_notifier_invalidate_range_start(src_mm, mmun_start,
> > - mmun_end);
> > + mmun_end, MMU_MIGRATE);
> >
> > ret = 0;
> > dst_pgd = pgd_offset(dst_mm, addr);
> > @@ -1067,7 +1067,8 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > } while (dst_pgd++, src_pgd++, addr = next, addr != end);
> >
> > if (is_cow)
> > - mmu_notifier_invalidate_range_end(src_mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(src_mm, mmun_start, mmun_end,
> > + MMU_MIGRATE);
> > return ret;
> > }
> >
> > @@ -1371,10 +1372,12 @@ void unmap_vmas(struct mmu_gather *tlb,
> > {
> > struct mm_struct *mm = vma->vm_mm;
> >
> > - mmu_notifier_invalidate_range_start(mm, start_addr, end_addr);
> > + mmu_notifier_invalidate_range_start(mm, start_addr,
> > + end_addr, MMU_MUNMAP);
> > for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next)
> > unmap_single_vma(tlb, vma, start_addr, end_addr, NULL);
> > - mmu_notifier_invalidate_range_end(mm, start_addr, end_addr);
> > + mmu_notifier_invalidate_range_end(mm, start_addr,
> > + end_addr, MMU_MUNMAP);
> > }
> >
> > /**
> > @@ -1396,10 +1399,10 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
> > lru_add_drain();
> > tlb_gather_mmu(&tlb, mm, start, end);
> > update_hiwater_rss(mm);
> > - mmu_notifier_invalidate_range_start(mm, start, end);
> > + mmu_notifier_invalidate_range_start(mm, start, end, MMU_MUNMAP);
> > for ( ; vma && vma->vm_start < end; vma = vma->vm_next)
> > unmap_single_vma(&tlb, vma, start, end, details);
> > - mmu_notifier_invalidate_range_end(mm, start, end);
> > + mmu_notifier_invalidate_range_end(mm, start, end, MMU_MUNMAP);
> > tlb_finish_mmu(&tlb, start, end);
> > }
> >
> > @@ -1422,9 +1425,9 @@ static void zap_page_range_single(struct vm_area_struct *vma, unsigned long addr
> > lru_add_drain();
> > tlb_gather_mmu(&tlb, mm, address, end);
> > update_hiwater_rss(mm);
> > - mmu_notifier_invalidate_range_start(mm, address, end);
> > + mmu_notifier_invalidate_range_start(mm, address, end, MMU_MUNMAP);
> > unmap_single_vma(&tlb, vma, address, end, details);
> > - mmu_notifier_invalidate_range_end(mm, address, end);
> > + mmu_notifier_invalidate_range_end(mm, address, end, MMU_MUNMAP);
> > tlb_finish_mmu(&tlb, address, end);
> > }
> >
> > @@ -2208,7 +2211,8 @@ gotten:
> >
> > mmun_start = address & PAGE_MASK;
> > mmun_end = mmun_start + PAGE_SIZE;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> >
> > /*
> > * Re-check the pte - we dropped the lock
> > @@ -2240,7 +2244,7 @@ gotten:
> > * mmu page tables (such as kvm shadow page tables), we want the
> > * new page to be mapped directly into the secondary page table.
> > */
> > - set_pte_at_notify(mm, address, page_table, entry);
> > + set_pte_at_notify(mm, address, page_table, entry, MMU_MIGRATE);
> > update_mmu_cache(vma, address, page_table);
> > if (old_page) {
> > /*
> > @@ -2279,7 +2283,8 @@ gotten:
> > unlock:
> > pte_unmap_unlock(page_table, ptl);
> > if (mmun_end > mmun_start)
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > if (old_page) {
> > /*
> > * Don't let another task, with possibly unlocked vma,
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index ab43fbf..b526c72 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -1820,12 +1820,14 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
> > WARN_ON(PageLRU(new_page));
> >
> > /* Recheck the target PMD */
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > ptl = pmd_lock(mm, pmd);
> > if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) {
> > fail_putback:
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> >
> > /* Reverse changes made by migrate_page_copy() */
> > if (TestClearPageActive(new_page))
> > @@ -1878,7 +1880,8 @@ fail_putback:
> > page_remove_rmap(page);
> >
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> >
> > /* Take an "isolate" reference and put new page on the LRU. */
> > get_page(new_page);
> > diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> > index 41cefdf..9decb88 100644
> > --- a/mm/mmu_notifier.c
> > +++ b/mm/mmu_notifier.c
> > @@ -122,8 +122,10 @@ int __mmu_notifier_test_young(struct mm_struct *mm,
> > return young;
> > }
> >
> > -void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
> > - pte_t pte)
> > +void __mmu_notifier_change_pte(struct mm_struct *mm,
> > + unsigned long address,
> > + pte_t pte,
> > + enum mmu_event event)
> > {
> > struct mmu_notifier *mn;
> > int id;
> > @@ -131,13 +133,14 @@ void __mmu_notifier_change_pte(struct mm_struct *mm, unsigned long address,
> > id = srcu_read_lock(&srcu);
> > hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> > if (mn->ops->change_pte)
> > - mn->ops->change_pte(mn, mm, address, pte);
> > + mn->ops->change_pte(mn, mm, address, pte, event);
> > }
> > srcu_read_unlock(&srcu, id);
> > }
> >
> > void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> > - unsigned long address)
> > + unsigned long address,
> > + enum mmu_event event)
> > {
> > struct mmu_notifier *mn;
> > int id;
> > @@ -145,13 +148,16 @@ void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> > id = srcu_read_lock(&srcu);
> > hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> > if (mn->ops->invalidate_page)
> > - mn->ops->invalidate_page(mn, mm, address);
> > + mn->ops->invalidate_page(mn, mm, address, event);
> > }
> > srcu_read_unlock(&srcu, id);
> > }
> >
> > void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > - unsigned long start, unsigned long end)
> > + unsigned long start,
> > + unsigned long end,
> > + enum mmu_event event)
> > +
> > {
> > struct mmu_notifier *mn;
> > int id;
> > @@ -159,14 +165,17 @@ void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > id = srcu_read_lock(&srcu);
> > hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> > if (mn->ops->invalidate_range_start)
> > - mn->ops->invalidate_range_start(mn, mm, start, end);
> > + mn->ops->invalidate_range_start(mn, mm, start,
> > + end, event);
> > }
> > srcu_read_unlock(&srcu, id);
> > }
> > EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start);
> >
> > void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> > - unsigned long start, unsigned long end)
> > + unsigned long start,
> > + unsigned long end,
> > + enum mmu_event event)
> > {
> > struct mmu_notifier *mn;
> > int id;
> > @@ -174,7 +183,8 @@ void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> > id = srcu_read_lock(&srcu);
> > hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> > if (mn->ops->invalidate_range_end)
> > - mn->ops->invalidate_range_end(mn, mm, start, end);
> > + mn->ops->invalidate_range_end(mn, mm, start,
> > + end, event);
> > }
> > srcu_read_unlock(&srcu, id);
> > }
> > diff --git a/mm/mprotect.c b/mm/mprotect.c
> > index c43d557..6ce6c23 100644
> > --- a/mm/mprotect.c
> > +++ b/mm/mprotect.c
> > @@ -137,7 +137,8 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
> >
> > static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
> > pud_t *pud, unsigned long addr, unsigned long end,
> > - pgprot_t newprot, int dirty_accountable, int prot_numa)
> > + pgprot_t newprot, int dirty_accountable, int prot_numa,
> > + enum mmu_event event)
> > {
> > pmd_t *pmd;
> > struct mm_struct *mm = vma->vm_mm;
> > @@ -157,7 +158,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
> > /* invoke the mmu notifier if the pmd is populated */
> > if (!mni_start) {
> > mni_start = addr;
> > - mmu_notifier_invalidate_range_start(mm, mni_start, end);
> > + mmu_notifier_invalidate_range_start(mm, mni_start,
> > + end, event);
> > }
> >
> > if (pmd_trans_huge(*pmd)) {
> > @@ -185,7 +187,7 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
> > } while (pmd++, addr = next, addr != end);
> >
> > if (mni_start)
> > - mmu_notifier_invalidate_range_end(mm, mni_start, end);
> > + mmu_notifier_invalidate_range_end(mm, mni_start, end, event);
> >
> > if (nr_huge_updates)
> > count_vm_numa_events(NUMA_HUGE_PTE_UPDATES, nr_huge_updates);
> > @@ -194,7 +196,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
> >
> > static inline unsigned long change_pud_range(struct vm_area_struct *vma,
> > pgd_t *pgd, unsigned long addr, unsigned long end,
> > - pgprot_t newprot, int dirty_accountable, int prot_numa)
> > + pgprot_t newprot, int dirty_accountable, int prot_numa,
> > + enum mmu_event event)
> > {
> > pud_t *pud;
> > unsigned long next;
> > @@ -206,7 +209,7 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,
> > if (pud_none_or_clear_bad(pud))
> > continue;
> > pages += change_pmd_range(vma, pud, addr, next, newprot,
> > - dirty_accountable, prot_numa);
> > + dirty_accountable, prot_numa, event);
> > } while (pud++, addr = next, addr != end);
> >
> > return pages;
> > @@ -214,7 +217,7 @@ static inline unsigned long change_pud_range(struct vm_area_struct *vma,
> >
> > static unsigned long change_protection_range(struct vm_area_struct *vma,
> > unsigned long addr, unsigned long end, pgprot_t newprot,
> > - int dirty_accountable, int prot_numa)
> > + int dirty_accountable, int prot_numa, enum mmu_event event)
> > {
> > struct mm_struct *mm = vma->vm_mm;
> > pgd_t *pgd;
> > @@ -231,7 +234,7 @@ static unsigned long change_protection_range(struct vm_area_struct *vma,
> > if (pgd_none_or_clear_bad(pgd))
> > continue;
> > pages += change_pud_range(vma, pgd, addr, next, newprot,
> > - dirty_accountable, prot_numa);
> > + dirty_accountable, prot_numa, event);
> > } while (pgd++, addr = next, addr != end);
> >
> > /* Only flush the TLB if we actually modified any entries: */
> > @@ -247,11 +250,23 @@ unsigned long change_protection(struct vm_area_struct *vma, unsigned long start,
> > int dirty_accountable, int prot_numa)
> > {
> > unsigned long pages;
> > + enum mmu_event event = MMU_MPROT_NONE;
> > +
> > + /* At this points vm_flags is updated. */
> > + if ((vma->vm_flags & VM_READ) && (vma->vm_flags & VM_WRITE))
> > + event = MMU_MPROT_RANDW;
> > + else if (vma->vm_flags & VM_WRITE)
> > + event = MMU_MPROT_WONLY;
> > + else if (vma->vm_flags & VM_READ)
> > + event = MMU_MPROT_RONLY;
>
> hmmm, shouldn't we be checking against the newprot argument, instead of
> against vma->vm_flags? The calling code, mprotect_fixup for example, can
> set flags *other* than VM_READ or VM_WRITE, and that could lead to a
> confusing or even inaccurate event. We could have a case where the event
> type is MMU_MPROT_RONLY, but the page might have been read-only the entire
> time, and some other flag was actually getting set.

Afaict the only way to end up here is if VM_WRITE or VM_READ changes (either
of them, or both, being set or cleared). It might be simpler to use newprot,
as the whole VM_ vs VM_MAY_ arithmetic always confuses me.
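
For instance, deriving the event by diffing the old and new vm_flags would
avoid the problem you describe, and avoids poking at pgprot bits (which are
architecture specific). A quick sketch; the helper name is made up, and
reusing MMU_STATUS for the "no R/W change" case is my assumption, not part
of this patch:

	static enum mmu_event mprot_event(unsigned long oldflags,
					  unsigned long newflags)
	{
		/* R/W permissions unchanged: some other flag was set. */
		if (!((oldflags ^ newflags) & (VM_READ | VM_WRITE)))
			return MMU_STATUS;
		if ((newflags & (VM_READ | VM_WRITE)) == (VM_READ | VM_WRITE))
			return MMU_MPROT_RANDW;
		if (newflags & VM_WRITE)
			return MMU_MPROT_WONLY;
		if (newflags & VM_READ)
			return MMU_MPROT_RONLY;
		return MMU_MPROT_NONE;
	}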

>
> I'm also starting to wonder if this event is adding much value here (for
> protection changes), given that the newprot argument contains the same
> information. However, it is important to have a unified sort of reporting
> system for HMM, so that's probably a good enough reason to do this.
>
> >
> > if (is_vm_hugetlb_page(vma))
> > - pages = hugetlb_change_protection(vma, start, end, newprot);
> > + pages = hugetlb_change_protection(vma, start, end,
> > + newprot, event);
> > else
> > - pages = change_protection_range(vma, start, end, newprot, dirty_accountable, prot_numa);
> > + pages = change_protection_range(vma, start, end, newprot,
> > + dirty_accountable,
> > + prot_numa, event);
> >
> > return pages;
> > }
> > diff --git a/mm/mremap.c b/mm/mremap.c
> > index 05f1180..6827d2f 100644
> > --- a/mm/mremap.c
> > +++ b/mm/mremap.c
> > @@ -177,7 +177,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> >
> > mmun_start = old_addr;
> > mmun_end = old_end;
> > - mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> >
> > for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
> > cond_resched();
> > @@ -228,7 +229,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> > if (likely(need_flush))
> > flush_tlb_range(vma, old_end-len, old_addr);
> >
> > - mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> >
> > return len + old_addr - old_end; /* how much done */
> > }
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index 7928ddd..bd7e6d7 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -840,7 +840,7 @@ static int page_mkclean_one(struct page *page, struct vm_area_struct *vma,
> > pte_unmap_unlock(pte, ptl);
> >
> > if (ret) {
> > - mmu_notifier_invalidate_page(mm, address);
> > + mmu_notifier_invalidate_page(mm, address, MMU_WB);
> > (*cleaned)++;
> > }
> > out:
> > @@ -1128,6 +1128,10 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> > spinlock_t *ptl;
> > int ret = SWAP_AGAIN;
> > enum ttu_flags flags = (enum ttu_flags)arg;
> > + enum mmu_event event = MMU_MIGRATE;
> > +
> > + if (flags & TTU_MUNLOCK)
> > + event = MMU_STATUS;
> >
> > pte = page_check_address(page, mm, address, &ptl, 0);
> > if (!pte)
> > @@ -1233,7 +1237,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> > out_unmap:
> > pte_unmap_unlock(pte, ptl);
> > if (ret != SWAP_FAIL && !(flags & TTU_MUNLOCK))
> > - mmu_notifier_invalidate_page(mm, address);
> > + mmu_notifier_invalidate_page(mm, address, event);
> > out:
> > return ret;
> >
> > @@ -1287,7 +1291,9 @@ out_mlock:
> > #define CLUSTER_MASK (~(CLUSTER_SIZE - 1))
> >
> > static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
> > - struct vm_area_struct *vma, struct page *check_page)
> > + struct vm_area_struct *vma,
> > + struct page *check_page,
> > + enum ttu_flags flags)
> > {
> > struct mm_struct *mm = vma->vm_mm;
> > pmd_t *pmd;
> > @@ -1301,6 +1307,10 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
> > unsigned long end;
> > int ret = SWAP_AGAIN;
> > int locked_vma = 0;
> > + enum mmu_event event = MMU_MIGRATE;
> > +
> > + if (flags & TTU_MUNLOCK)
> > + event = MMU_STATUS;
> >
> > address = (vma->vm_start + cursor) & CLUSTER_MASK;
> > end = address + CLUSTER_SIZE;
> > @@ -1315,7 +1325,7 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
> >
> > mmun_start = address;
> > mmun_end = end;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end, event);
> >
> > /*
> > * If we can acquire the mmap_sem for read, and vma is VM_LOCKED,
> > @@ -1380,7 +1390,7 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
> > (*mapcount)--;
> > }
> > pte_unmap_unlock(pte - 1, ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
> > + mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end, event);
> > if (locked_vma)
> > up_read(&vma->vm_mm->mmap_sem);
> > return ret;
> > @@ -1436,7 +1446,9 @@ static int try_to_unmap_nonlinear(struct page *page,
> > while (cursor < max_nl_cursor &&
> > cursor < vma->vm_end - vma->vm_start) {
> > if (try_to_unmap_cluster(cursor, &mapcount,
> > - vma, page) == SWAP_MLOCK)
> > + vma, page,
> > + (enum ttu_flags)arg)
> > + == SWAP_MLOCK)
> > ret = SWAP_MLOCK;
> > cursor += CLUSTER_SIZE;
> > vma->vm_private_data = (void *) cursor;
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 4b6c01b..6e1992f 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -262,7 +262,8 @@ static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
> >
> > static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > - unsigned long address)
> > + unsigned long address,
> > + enum mmu_event event)
> > {
> > struct kvm *kvm = mmu_notifier_to_kvm(mn);
> > int need_tlb_flush, idx;
> > @@ -301,7 +302,8 @@ static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
> > static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > unsigned long address,
> > - pte_t pte)
> > + pte_t pte,
> > + enum mmu_event event)
> > {
> > struct kvm *kvm = mmu_notifier_to_kvm(mn);
> > int idx;
> > @@ -317,7 +319,8 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
> > static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > unsigned long start,
> > - unsigned long end)
> > + unsigned long end,
> > + enum mmu_event event)
> > {
> > struct kvm *kvm = mmu_notifier_to_kvm(mn);
> > int need_tlb_flush = 0, idx;
> > @@ -343,7 +346,8 @@ static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> > static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > unsigned long start,
> > - unsigned long end)
> > + unsigned long end,
> > + enum mmu_event event)
> > {
> > struct kvm *kvm = mmu_notifier_to_kvm(mn);
> >
> > --
> > 1.9.0
> >
>
> thanks,
> John H.

2014-06-30 15:58:47

by Jerome Glisse

[permalink] [raw]
Subject: Re: [PATCH 2/6] mm: differentiate unmap for vmscan from other unmap.

On Sun, Jun 29, 2014 at 08:58:17PM -0700, John Hubbard wrote:
> On Fri, 27 Jun 2014, Jérôme Glisse wrote:
>
> > From: Jérôme Glisse <[email protected]>
> >
> > New code will need to be able to differentiate between a regular unmap and
> > an unmap trigger by vmscan in which case we want to be as quick as possible.
> >
> > Signed-off-by: Jérôme Glisse <[email protected]>
> > ---
> > include/linux/rmap.h | 15 ++++++++-------
> > mm/memory-failure.c | 2 +-
> > mm/vmscan.c | 4 ++--
> > 3 files changed, 11 insertions(+), 10 deletions(-)
> >
> > diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> > index be57450..eddbc07 100644
> > --- a/include/linux/rmap.h
> > +++ b/include/linux/rmap.h
> > @@ -72,13 +72,14 @@ struct anon_vma_chain {
> > };
> >
> > enum ttu_flags {
> > - TTU_UNMAP = 1, /* unmap mode */
> > - TTU_MIGRATION = 2, /* migration mode */
> > - TTU_MUNLOCK = 4, /* munlock mode */
> > -
> > - TTU_IGNORE_MLOCK = (1 << 8), /* ignore mlock */
> > - TTU_IGNORE_ACCESS = (1 << 9), /* don't age */
> > - TTU_IGNORE_HWPOISON = (1 << 10),/* corrupted page is recoverable */
> > + TTU_VMSCAN = 1, /* unmap for vmscan */
> > + TTU_POISON = 2, /* unmap for poison */
> > + TTU_MIGRATION = 4, /* migration mode */
> > + TTU_MUNLOCK = 8, /* munlock mode */
> > +
> > + TTU_IGNORE_MLOCK = (1 << 9), /* ignore mlock */
> > + TTU_IGNORE_ACCESS = (1 << 10), /* don't age */
> > + TTU_IGNORE_HWPOISON = (1 << 11),/* corrupted page is recoverable */
>
> Unless there is a deeper purpose that I am overlooking, I think it would
> be better to leave the _MLOCK, _ACCESS, and _HWPOISON at their original
> values. I just can't quite see why they would need to start at bit 9
> instead of bit 8...

That code was changed at some point to make the various TTU_* values bit
flags instead of plain values. I am not sure what the win was; I would need
to dig up the patch that did that. In any case, I preserve that change here,
hence starting at bit 9.
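
For comparison, keeping the ignore flags at their original positions would
simply be the following (bits 4-7 are unused either way, so this is purely
cosmetic):

	enum ttu_flags {
		TTU_VMSCAN = 1,			/* unmap for vmscan */
		TTU_POISON = 2,			/* unmap for poison */
		TTU_MIGRATION = 4,		/* migration mode */
		TTU_MUNLOCK = 8,		/* munlock mode */

		TTU_IGNORE_MLOCK = (1 << 8),	/* ignore mlock */
		TTU_IGNORE_ACCESS = (1 << 9),	/* don't age */
		TTU_IGNORE_HWPOISON = (1 << 10),/* corrupted page is recoverable */
	};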
>
> > };
> >
> > #ifdef CONFIG_MMU
> > diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> > index a7a89eb..ba176c4 100644
> > --- a/mm/memory-failure.c
> > +++ b/mm/memory-failure.c
> > @@ -887,7 +887,7 @@ static int page_action(struct page_state *ps, struct page *p,
> > static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
> > int trapno, int flags, struct page **hpagep)
> > {
> > - enum ttu_flags ttu = TTU_UNMAP | TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;
> > + enum ttu_flags ttu = TTU_POISON | TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS;
> > struct address_space *mapping;
> > LIST_HEAD(tokill);
> > int ret;
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 6d24fd6..5a7d286 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1163,7 +1163,7 @@ unsigned long reclaim_clean_pages_from_list(struct zone *zone,
> > }
> >
> > ret = shrink_page_list(&clean_pages, zone, &sc,
> > - TTU_UNMAP|TTU_IGNORE_ACCESS,
> > + TTU_VMSCAN|TTU_IGNORE_ACCESS,
> > &dummy1, &dummy2, &dummy3, &dummy4, &dummy5, true);
> > list_splice(&clean_pages, page_list);
> > mod_zone_page_state(zone, NR_ISOLATED_FILE, -ret);
> > @@ -1518,7 +1518,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
> > if (nr_taken == 0)
> > return 0;
> >
> > - nr_reclaimed = shrink_page_list(&page_list, zone, sc, TTU_UNMAP,
> > + nr_reclaimed = shrink_page_list(&page_list, zone, sc, TTU_VMSCAN,
> > &nr_dirty, &nr_unqueued_dirty, &nr_congested,
> > &nr_writeback, &nr_immediate,
> > false);
> > --
> > 1.9.0
> >
>
> Other than that, looks good.
>
> Reviewed-by: John Hubbard <[email protected]>
>
> thanks,
> John H.

2014-06-30 16:00:45

by Jerome Glisse

[permalink] [raw]
Subject: Re: [PATCH 4/6] mmu_notifier: pass through vma to invalidate_range and invalidate_page

On Sun, Jun 29, 2014 at 08:29:01PM -0700, John Hubbard wrote:
> On Fri, 27 Jun 2014, Jérôme Glisse wrote:
>
> > From: Jérôme Glisse <[email protected]>
> >
> > New user of the mmu_notifier interface need to lookup vma in order to
> > perform the invalidation operation. Instead of redoing a vma lookup
> > inside the callback just pass through the vma from the call site where
> > it is already available.
> >
> > This needs small refactoring in memory.c to call invalidate_range on
> > vma boundary the overhead should be low enough.
> >
> > Signed-off-by: Jérôme Glisse <[email protected]>
> > ---
> > drivers/gpu/drm/i915/i915_gem_userptr.c | 1 +
> > drivers/iommu/amd_iommu_v2.c | 3 +++
> > drivers/misc/sgi-gru/grutlbpurge.c | 6 ++++-
> > drivers/xen/gntdev.c | 4 +++-
> > fs/proc/task_mmu.c | 16 ++++++++-----
> > include/linux/mmu_notifier.h | 19 ++++++++++++---
> > kernel/events/uprobes.c | 4 ++--
> > mm/filemap_xip.c | 3 ++-
> > mm/huge_memory.c | 26 ++++++++++----------
> > mm/hugetlb.c | 16 ++++++-------
> > mm/ksm.c | 8 +++----
> > mm/memory.c | 42 +++++++++++++++++++++------------
> > mm/migrate.c | 6 ++---
> > mm/mmu_notifier.c | 9 ++++---
> > mm/mprotect.c | 5 ++--
> > mm/mremap.c | 4 ++--
> > mm/rmap.c | 9 +++----
> > virt/kvm/kvm_main.c | 3 +++
> > 18 files changed, 116 insertions(+), 68 deletions(-)
> >
>
> Hi Jerome, considering that you have to change every call site already, it
> seems to me that it would be ideal to just delete the mm argument from all
> of these invalidate_range* callbacks. In other words, replace the mm
> argument with the new vma argument. I don't see much point in passing
> them both around, and while it would make the patch a *bit* larger, it's
> mostly just an extra line or two per call site:
>
> mm = vma->vm_mm;

Yes, it probably is.
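
Each callback could then recover the mm locally, something like this sketch
of what you describe (the callback name here is only illustrative):

	static void example_invalidate_range_start(struct mmu_notifier *mn,
						   struct vm_area_struct *vma,
						   unsigned long start,
						   unsigned long end,
						   enum mmu_event event)
	{
		struct mm_struct *mm = vma->vm_mm;

		/* ... use mm and vma as before ... */
	}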

>
> Also, passing the vma around really does seem like a good approach, but it
> does cause a bunch of additional calls to the invalidate_range* routines,
> because we generate a call per vma, instead of just one for the entire mm.
> So that brings up a couple questions:
>
> 1) Is there any chance that this could cause measurable performance
> regressions?
>
> 2) Should you put a little note in the commit message, mentioning this
> point?
>

I pointed that out in my introduction mail for the patchset. I think those
code paths are mostly used during process destruction, and hence this is
fine, but at the same time being able to quickly spawn and destroy many
processes may be considered a feature and an important use case that should
not regress.

> > diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
> > index ed6f35e..191ac71 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_userptr.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
> > @@ -55,6 +55,7 @@ struct i915_mmu_object {
> >
> > static void i915_gem_userptr_mn_invalidate_range_start(struct mmu_notifier *_mn,
> > struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event)
>
> That routine has a local variable named vma, so it might be polite to
> rename the local variable, to make it more obvious to the reader that they
> are different. Of course, since the compiler knows which is which, feel
> free to ignore this comment.
>
> > diff --git a/drivers/iommu/amd_iommu_v2.c b/drivers/iommu/amd_iommu_v2.c
> > index 2bb9771..9f9e706 100644
> > --- a/drivers/iommu/amd_iommu_v2.c
> > +++ b/drivers/iommu/amd_iommu_v2.c
> > @@ -422,6 +422,7 @@ static void mn_change_pte(struct mmu_notifier *mn,
> >
> > static void mn_invalidate_page(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long address,
> > enum mmu_event event)
> > {
> > @@ -430,6 +431,7 @@ static void mn_invalidate_page(struct mmu_notifier *mn,
> >
> > static void mn_invalidate_range_start(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event)
> > @@ -453,6 +455,7 @@ static void mn_invalidate_range_start(struct mmu_notifier *mn,
> >
> > static void mn_invalidate_range_end(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event)
> > diff --git a/drivers/misc/sgi-gru/grutlbpurge.c b/drivers/misc/sgi-gru/grutlbpurge.c
> > index e67fed1..d02e4c7 100644
> > --- a/drivers/misc/sgi-gru/grutlbpurge.c
> > +++ b/drivers/misc/sgi-gru/grutlbpurge.c
> > @@ -221,6 +221,7 @@ void gru_flush_all_tlb(struct gru_state *gru)
> > */
> > static void gru_invalidate_range_start(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start, unsigned long end,
> > enum mmu_event event)
> > {
> > @@ -235,7 +236,9 @@ static void gru_invalidate_range_start(struct mmu_notifier *mn,
> > }
> >
> > static void gru_invalidate_range_end(struct mmu_notifier *mn,
> > - struct mm_struct *mm, unsigned long start,
> > + struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > + unsigned long start,
> > unsigned long end,
> > enum mmu_event event)
> > {
> > @@ -250,6 +253,7 @@ static void gru_invalidate_range_end(struct mmu_notifier *mn,
> > }
> >
> > static void gru_invalidate_page(struct mmu_notifier *mn, struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long address,
> > enum mmu_event event)
> > {
> > diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> > index fe9da94..219928b 100644
> > --- a/drivers/xen/gntdev.c
> > +++ b/drivers/xen/gntdev.c
> > @@ -428,6 +428,7 @@ static void unmap_if_in_range(struct grant_map *map,
> >
> > static void mn_invl_range_start(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event)
> > @@ -447,10 +448,11 @@ static void mn_invl_range_start(struct mmu_notifier *mn,
> >
> > static void mn_invl_page(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long address,
> > enum mmu_event event)
> > {
> > - mn_invl_range_start(mn, mm, address, address + PAGE_SIZE, event);
> > + mn_invl_range_start(mn, mm, vma, address, address + PAGE_SIZE, event);
> > }
> >
> > static void mn_release(struct mmu_notifier *mn,
> > diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> > index e9e79f7..8b0f25d 100644
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/proc/task_mmu.c
> > @@ -829,13 +829,15 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
> > .private = &cp,
> > };
> > down_read(&mm->mmap_sem);
> > - if (type == CLEAR_REFS_SOFT_DIRTY)
> > - mmu_notifier_invalidate_range_start(mm, 0,
> > - -1, MMU_STATUS);
> > for (vma = mm->mmap; vma; vma = vma->vm_next) {
> > cp.vma = vma;
> > if (is_vm_hugetlb_page(vma))
> > continue;
> > + if (type == CLEAR_REFS_SOFT_DIRTY)
> > + mmu_notifier_invalidate_range_start(mm, vma,
> > + vma->vm_start,
> > + vma->vm_end,
> > + MMU_STATUS);
> > /*
> > * Writing 1 to /proc/pid/clear_refs affects all pages.
> > *
> > @@ -857,10 +859,12 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
> > }
> > walk_page_range(vma->vm_start, vma->vm_end,
> > &clear_refs_walk);
> > + if (type == CLEAR_REFS_SOFT_DIRTY)
> > + mmu_notifier_invalidate_range_end(mm, vma,
> > + vma->vm_start,
> > + vma->vm_end,
> > + MMU_STATUS);
> > }
> > - if (type == CLEAR_REFS_SOFT_DIRTY)
> > - mmu_notifier_invalidate_range_end(mm, 0,
> > - -1, MMU_STATUS);
> > flush_tlb_mm(mm);
> > up_read(&mm->mmap_sem);
> > mmput(mm);
> > diff --git a/include/linux/mmu_notifier.h b/include/linux/mmu_notifier.h
> > index 82e9577..8907e5d 100644
> > --- a/include/linux/mmu_notifier.h
> > +++ b/include/linux/mmu_notifier.h
> > @@ -137,6 +137,7 @@ struct mmu_notifier_ops {
> > */
> > void (*invalidate_page)(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long address,
> > enum mmu_event event);
> >
> > @@ -185,11 +186,13 @@ struct mmu_notifier_ops {
> > */
> > void (*invalidate_range_start)(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event);
> > void (*invalidate_range_end)(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event);
> > @@ -233,13 +236,16 @@ extern void __mmu_notifier_change_pte(struct mm_struct *mm,
> > pte_t pte,
> > enum mmu_event event);
> > extern void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long address,
> > enum mmu_event event);
> > extern void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event);
> > extern void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event);
> > @@ -276,29 +282,33 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm,
> > }
> >
> > static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long address,
> > enum mmu_event event)
> > {
> > if (mm_has_notifiers(mm))
> > - __mmu_notifier_invalidate_page(mm, address, event);
> > + __mmu_notifier_invalidate_page(mm, vma, address, event);
> > }
> >
> > static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event)
> > {
> > if (mm_has_notifiers(mm))
> > - __mmu_notifier_invalidate_range_start(mm, start, end, event);
> > + __mmu_notifier_invalidate_range_start(mm, vma, start,
> > + end, event);
> > }
> >
> > static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event)
> > {
> > if (mm_has_notifiers(mm))
> > - __mmu_notifier_invalidate_range_end(mm, start, end, event);
> > + __mmu_notifier_invalidate_range_end(mm, vma, start, end, event);
> > }
> >
> > static inline void mmu_notifier_mm_init(struct mm_struct *mm)
> > @@ -380,12 +390,14 @@ static inline void mmu_notifier_change_pte(struct mm_struct *mm,
> > }
> >
> > static inline void mmu_notifier_invalidate_page(struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long address,
> > enum mmu_event event)
> > {
> > }
> >
> > static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event)
> > @@ -393,6 +405,7 @@ static inline void mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > }
> >
> > static inline void mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event)
> > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > index 296f81e..0f552bc 100644
> > --- a/kernel/events/uprobes.c
> > +++ b/kernel/events/uprobes.c
> > @@ -177,7 +177,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
> > /* For try_to_free_swap() and munlock_vma_page() below */
> > lock_page(page);
> >
> > - mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > err = -EAGAIN;
> > ptep = page_check_address(page, mm, addr, &ptl, 0);
> > @@ -212,7 +212,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
> > err = 0;
> > unlock:
> > mem_cgroup_cancel_charge(kpage, memcg);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > unlock_page(page);
> > return err;
> > diff --git a/mm/filemap_xip.c b/mm/filemap_xip.c
> > index a2b3f09..f0113df 100644
> > --- a/mm/filemap_xip.c
> > +++ b/mm/filemap_xip.c
> > @@ -198,7 +198,8 @@ retry:
> > BUG_ON(pte_dirty(pteval));
> > pte_unmap_unlock(pte, ptl);
> > /* must invalidate_page _before_ freeing the page */
> > - mmu_notifier_invalidate_page(mm, address, MMU_MIGRATE);
> > + mmu_notifier_invalidate_page(mm, vma, address,
> > + MMU_MIGRATE);
> > page_cache_release(page);
> > }
> > }
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index fa30857..cc74b60 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -1022,7 +1022,7 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
> >
> > mmun_start = haddr;
> > mmun_end = haddr + HPAGE_PMD_SIZE;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > for (i = 0; i < HPAGE_PMD_NR; i++) {
> > @@ -1064,7 +1064,7 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
> > page_remove_rmap(page);
> > spin_unlock(ptl);
> >
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > ret |= VM_FAULT_WRITE;
> > @@ -1075,7 +1075,7 @@ out:
> >
> > out_free_pages:
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > for (i = 0; i < HPAGE_PMD_NR; i++) {
> > memcg = (void *)page_private(pages[i]);
> > @@ -1162,7 +1162,7 @@ alloc:
> >
> > mmun_start = haddr;
> > mmun_end = haddr + HPAGE_PMD_SIZE;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > if (!page)
> > @@ -1201,7 +1201,7 @@ alloc:
> > }
> > spin_unlock(ptl);
> > out_mn:
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > out:
> > return ret;
> > @@ -1637,7 +1637,7 @@ static int __split_huge_page_splitting(struct page *page,
> > const unsigned long mmun_start = address;
> > const unsigned long mmun_end = address + HPAGE_PMD_SIZE;
> >
> > - mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> > mmun_end, MMU_STATUS);
> > pmd = page_check_address_pmd(page, mm, address,
> > PAGE_CHECK_ADDRESS_PMD_NOTSPLITTING_FLAG, &ptl);
> > @@ -1653,7 +1653,7 @@ static int __split_huge_page_splitting(struct page *page,
> > ret = 1;
> > spin_unlock(ptl);
> > }
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_STATUS);
> >
> > return ret;
> > @@ -2453,7 +2453,7 @@ static void collapse_huge_page(struct mm_struct *mm,
> >
> > mmun_start = address;
> > mmun_end = address + HPAGE_PMD_SIZE;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
> > /*
> > @@ -2464,7 +2464,7 @@ static void collapse_huge_page(struct mm_struct *mm,
> > */
> > _pmd = pmdp_clear_flush(vma, address, pmd);
> > spin_unlock(pmd_ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > spin_lock(pte_ptl);
> > @@ -2854,19 +2854,19 @@ void __split_huge_page_pmd(struct vm_area_struct *vma, unsigned long address,
> > mmun_start = haddr;
> > mmun_end = haddr + HPAGE_PMD_SIZE;
> > again:
> > - mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > ptl = pmd_lock(mm, pmd);
> > if (unlikely(!pmd_trans_huge(*pmd))) {
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > return;
> > }
> > if (is_huge_zero_pmd(*pmd)) {
> > __split_huge_zero_page_pmd(vma, haddr, pmd);
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > return;
> > }
> > @@ -2874,7 +2874,7 @@ again:
> > VM_BUG_ON_PAGE(!page_count(page), page);
> > get_page(page);
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > split_huge_page(page);
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 73e1576..15f0123 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -2565,7 +2565,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> > mmun_start = vma->vm_start;
> > mmun_end = vma->vm_end;
> > if (cow)
> > - mmu_notifier_invalidate_range_start(src, mmun_start,
> > + mmu_notifier_invalidate_range_start(src, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > for (addr = vma->vm_start; addr < vma->vm_end; addr += sz) {
> > @@ -2616,7 +2616,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
> > }
> >
> > if (cow)
> > - mmu_notifier_invalidate_range_end(src, mmun_start,
> > + mmu_notifier_invalidate_range_end(src, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > return ret;
> > @@ -2643,7 +2643,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > BUG_ON(end & ~huge_page_mask(h));
> >
> > tlb_start_vma(tlb, vma);
> > - mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > again:
> > for (address = start; address < end; address += sz) {
> > @@ -2715,7 +2715,7 @@ unlock:
> > if (address < end && !ref_page)
> > goto again;
> > }
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > tlb_end_vma(tlb, vma);
> > }
> > @@ -2903,7 +2903,7 @@ retry_avoidcopy:
> >
> > mmun_start = address & huge_page_mask(h);
> > mmun_end = mmun_start + huge_page_size(h);
> > - mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > /*
> > * Retake the page table lock to check for racing updates
> > @@ -2924,7 +2924,7 @@ retry_avoidcopy:
> > new_page = old_page;
> > }
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > page_cache_release(new_page);
> > page_cache_release(old_page);
> > @@ -3363,7 +3363,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> > BUG_ON(address >= end);
> > flush_cache_range(vma, address, end);
> >
> > - mmu_notifier_invalidate_range_start(mm, start, end, event);
> > + mmu_notifier_invalidate_range_start(mm, vma, start, end, event);
> > mutex_lock(&vma->vm_file->f_mapping->i_mmap_mutex);
> > for (; address < end; address += huge_page_size(h)) {
> > spinlock_t *ptl;
> > @@ -3393,7 +3393,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
> > */
> > flush_tlb_range(vma, start, end);
> > mutex_unlock(&vma->vm_file->f_mapping->i_mmap_mutex);
> > - mmu_notifier_invalidate_range_end(mm, start, end, event);
> > + mmu_notifier_invalidate_range_end(mm, vma, start, end, event);
> >
> > return pages << h->order;
> > }
> > diff --git a/mm/ksm.c b/mm/ksm.c
> > index 4b659f1..1f3c4d7 100644
> > --- a/mm/ksm.c
> > +++ b/mm/ksm.c
> > @@ -873,7 +873,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
> >
> > mmun_start = addr;
> > mmun_end = addr + PAGE_SIZE;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> > mmun_end, MMU_MPROT_RONLY);
> >
> > ptep = page_check_address(page, mm, addr, &ptl, 0);
> > @@ -914,7 +914,7 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
> > out_unlock:
> > pte_unmap_unlock(ptep, ptl);
> > out_mn:
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MPROT_RONLY);
> > out:
> > return err;
> > @@ -951,7 +951,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
> >
> > mmun_start = addr;
> > mmun_end = addr + PAGE_SIZE;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > ptep = pte_offset_map_lock(mm, pmd, addr, &ptl);
> > @@ -977,7 +977,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
> > pte_unmap_unlock(ptep, ptl);
> > err = 0;
> > out_mn:
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > out:
> > return err;
> > diff --git a/mm/memory.c b/mm/memory.c
> > index d3908f0..4717579 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -1049,7 +1049,7 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > mmun_start = addr;
> > mmun_end = end;
> > if (is_cow)
> > - mmu_notifier_invalidate_range_start(src_mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(src_mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > ret = 0;
> > @@ -1067,8 +1067,8 @@ int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > } while (dst_pgd++, src_pgd++, addr = next, addr != end);
> >
> > if (is_cow)
> > - mmu_notifier_invalidate_range_end(src_mm, mmun_start, mmun_end,
> > - MMU_MIGRATE);
> > + mmu_notifier_invalidate_range_end(src_mm, vma, mmun_start,
> > + mmun_end, MMU_MIGRATE);
> > return ret;
> > }
> >
> > @@ -1372,12 +1372,17 @@ void unmap_vmas(struct mmu_gather *tlb,
> > {
> > struct mm_struct *mm = vma->vm_mm;
> >
> > - mmu_notifier_invalidate_range_start(mm, start_addr,
> > - end_addr, MMU_MUNMAP);
> > - for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next)
> > + for ( ; vma && vma->vm_start < end_addr; vma = vma->vm_next) {
> > + mmu_notifier_invalidate_range_start(mm, vma,
> > + max(start_addr, vma->vm_start),
> > + min(end_addr, vma->vm_end),
> > + MMU_MUNMAP);
> > unmap_single_vma(tlb, vma, start_addr, end_addr, NULL);
> > - mmu_notifier_invalidate_range_end(mm, start_addr,
> > - end_addr, MMU_MUNMAP);
> > + mmu_notifier_invalidate_range_end(mm, vma,
> > + max(start_addr, vma->vm_start),
> > + min(end_addr, vma->vm_end),
> > + MMU_MUNMAP);
> > + }
> > }
> >
> > /**
> > @@ -1399,10 +1404,17 @@ void zap_page_range(struct vm_area_struct *vma, unsigned long start,
> > lru_add_drain();
> > tlb_gather_mmu(&tlb, mm, start, end);
> > update_hiwater_rss(mm);
> > - mmu_notifier_invalidate_range_start(mm, start, end, MMU_MUNMAP);
> > - for ( ; vma && vma->vm_start < end; vma = vma->vm_next)
> > + for ( ; vma && vma->vm_start < end; vma = vma->vm_next) {
> > + mmu_notifier_invalidate_range_start(mm, vma,
> > + max(start, vma->vm_start),
> > + min(end, vma->vm_end),
> > + MMU_MUNMAP);
> > unmap_single_vma(&tlb, vma, start, end, details);
> > - mmu_notifier_invalidate_range_end(mm, start, end, MMU_MUNMAP);
> > + mmu_notifier_invalidate_range_end(mm, vma,
> > + max(start, vma->vm_start),
> > + min(end, vma->vm_end),
> > + MMU_MUNMAP);
> > + }
> > tlb_finish_mmu(&tlb, start, end);
> > }
> >
> > @@ -1425,9 +1437,9 @@ static void zap_page_range_single(struct vm_area_struct *vma, unsigned long addr
> > lru_add_drain();
> > tlb_gather_mmu(&tlb, mm, address, end);
> > update_hiwater_rss(mm);
> > - mmu_notifier_invalidate_range_start(mm, address, end, MMU_MUNMAP);
> > + mmu_notifier_invalidate_range_start(mm, vma, address, end, MMU_MUNMAP);
> > unmap_single_vma(&tlb, vma, address, end, details);
> > - mmu_notifier_invalidate_range_end(mm, address, end, MMU_MUNMAP);
> > + mmu_notifier_invalidate_range_end(mm, vma, address, end, MMU_MUNMAP);
> > tlb_finish_mmu(&tlb, address, end);
> > }
> >
> > @@ -2211,7 +2223,7 @@ gotten:
> >
> > mmun_start = address & PAGE_MASK;
> > mmun_end = mmun_start + PAGE_SIZE;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > /*
> > @@ -2283,7 +2295,7 @@ gotten:
> > unlock:
> > pte_unmap_unlock(page_table, ptl);
> > if (mmun_end > mmun_start)
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > if (old_page) {
> > /*
> > diff --git a/mm/migrate.c b/mm/migrate.c
> > index b526c72..0c61aa9 100644
> > --- a/mm/migrate.c
> > +++ b/mm/migrate.c
> > @@ -1820,13 +1820,13 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
> > WARN_ON(PageLRU(new_page));
> >
> > /* Recheck the target PMD */
> > - mmu_notifier_invalidate_range_start(mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> > ptl = pmd_lock(mm, pmd);
> > if (unlikely(!pmd_same(*pmd, entry) || page_count(page) != 2)) {
> > fail_putback:
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > /* Reverse changes made by migrate_page_copy() */
> > @@ -1880,7 +1880,7 @@ fail_putback:
> > page_remove_rmap(page);
> >
> > spin_unlock(ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > /* Take an "isolate" reference and put new page on the LRU. */
> > diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
> > index 9decb88..87e6bc5 100644
> > --- a/mm/mmu_notifier.c
> > +++ b/mm/mmu_notifier.c
> > @@ -139,6 +139,7 @@ void __mmu_notifier_change_pte(struct mm_struct *mm,
> > }
> >
> > void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long address,
> > enum mmu_event event)
> > {
> > @@ -148,12 +149,13 @@ void __mmu_notifier_invalidate_page(struct mm_struct *mm,
> > id = srcu_read_lock(&srcu);
> > hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> > if (mn->ops->invalidate_page)
> > - mn->ops->invalidate_page(mn, mm, address, event);
> > + mn->ops->invalidate_page(mn, mm, vma, address, event);
> > }
> > srcu_read_unlock(&srcu, id);
> > }
> >
> > void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event)
> > @@ -165,7 +167,7 @@ void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > id = srcu_read_lock(&srcu);
> > hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> > if (mn->ops->invalidate_range_start)
> > - mn->ops->invalidate_range_start(mn, mm, start,
> > + mn->ops->invalidate_range_start(mn, mm, vma, start,
> > end, event);
> > }
> > srcu_read_unlock(&srcu, id);
> > @@ -173,6 +175,7 @@ void __mmu_notifier_invalidate_range_start(struct mm_struct *mm,
> > EXPORT_SYMBOL_GPL(__mmu_notifier_invalidate_range_start);
> >
> > void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event)
> > @@ -183,7 +186,7 @@ void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> > id = srcu_read_lock(&srcu);
> > hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> > if (mn->ops->invalidate_range_end)
> > - mn->ops->invalidate_range_end(mn, mm, start,
> > + mn->ops->invalidate_range_end(mn, mm, vma, start,
> > end, event);
> > }
> > srcu_read_unlock(&srcu, id);
> > diff --git a/mm/mprotect.c b/mm/mprotect.c
> > index 6ce6c23..16ce504 100644
> > --- a/mm/mprotect.c
> > +++ b/mm/mprotect.c
> > @@ -158,7 +158,7 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
> > /* invoke the mmu notifier if the pmd is populated */
> > if (!mni_start) {
> > mni_start = addr;
> > - mmu_notifier_invalidate_range_start(mm, mni_start,
> > + mmu_notifier_invalidate_range_start(mm, vma, mni_start,
> > end, event);
> > }
> >
> > @@ -187,7 +187,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
> > } while (pmd++, addr = next, addr != end);
> >
> > if (mni_start)
> > - mmu_notifier_invalidate_range_end(mm, mni_start, end, event);
> > + mmu_notifier_invalidate_range_end(mm, vma, mni_start,
> > + end, event);
> >
> > if (nr_huge_updates)
> > count_vm_numa_events(NUMA_HUGE_PTE_UPDATES, nr_huge_updates);
> > diff --git a/mm/mremap.c b/mm/mremap.c
> > index 6827d2f..9bee6de 100644
> > --- a/mm/mremap.c
> > +++ b/mm/mremap.c
> > @@ -177,7 +177,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> >
> > mmun_start = old_addr;
> > mmun_end = old_end;
> > - mmu_notifier_invalidate_range_start(vma->vm_mm, mmun_start,
> > + mmu_notifier_invalidate_range_start(vma->vm_mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
> > @@ -229,7 +229,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> > if (likely(need_flush))
> > flush_tlb_range(vma, old_end-len, old_addr);
> >
> > - mmu_notifier_invalidate_range_end(vma->vm_mm, mmun_start,
> > + mmu_notifier_invalidate_range_end(vma->vm_mm, vma, mmun_start,
> > mmun_end, MMU_MIGRATE);
> >
> > return len + old_addr - old_end; /* how much done */
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index bd7e6d7..f1be50d 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -840,7 +840,7 @@ static int page_mkclean_one(struct page *page, struct vm_area_struct *vma,
> > pte_unmap_unlock(pte, ptl);
> >
> > if (ret) {
> > - mmu_notifier_invalidate_page(mm, address, MMU_WB);
> > + mmu_notifier_invalidate_page(mm, vma, address, MMU_WB);
> > (*cleaned)++;
> > }
> > out:
> > @@ -1237,7 +1237,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> > out_unmap:
> > pte_unmap_unlock(pte, ptl);
> > if (ret != SWAP_FAIL && !(flags & TTU_MUNLOCK))
> > - mmu_notifier_invalidate_page(mm, address, event);
> > + mmu_notifier_invalidate_page(mm, vma, address, event);
> > out:
> > return ret;
> >
> > @@ -1325,7 +1325,8 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
> >
> > mmun_start = address;
> > mmun_end = end;
> > - mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end, event);
> > + mmu_notifier_invalidate_range_start(mm, vma, mmun_start,
> > + mmun_end, event);
> >
> > /*
> > * If we can acquire the mmap_sem for read, and vma is VM_LOCKED,
> > @@ -1390,7 +1391,7 @@ static int try_to_unmap_cluster(unsigned long cursor, unsigned int *mapcount,
> > (*mapcount)--;
> > }
> > pte_unmap_unlock(pte - 1, ptl);
> > - mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end, event);
> > + mmu_notifier_invalidate_range_end(mm, vma, mmun_start, mmun_end, event);
> > if (locked_vma)
> > up_read(&vma->vm_mm->mmap_sem);
> > return ret;
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index 6e1992f..c4b7bf9 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
> > @@ -262,6 +262,7 @@ static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
> >
> > static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long address,
> > enum mmu_event event)
> > {
> > @@ -318,6 +319,7 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
> >
> > static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event)
> > @@ -345,6 +347,7 @@ static void kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
> >
> > static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
> > struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > unsigned long start,
> > unsigned long end,
> > enum mmu_event event)
> > --
> > 1.9.0
> >
>
> Other than the refinements suggested above, I can't seem to find anything
> wrong with this patch, so:
>
> Reviewed-by: John Hubbard <[email protected]>
>
> thanks,
> John H.

2014-07-01 01:57:27

by Linus Torvalds

[permalink] [raw]
Subject: Re: [PATCH 3/6] mmu_notifier: add event information to address invalidation v2

On Fri, Jun 27, 2014 at 7:00 PM, Jérôme Glisse <[email protected]> wrote:
> From: Jérôme Glisse <[email protected]>
>
> The event information will be useful [...]

That needs to be cleaned up, though.

Why the heck are you making up new and stupid event types? Now you make
the generic VM code do stupid things like this:

+ if ((vma->vm_flags & VM_READ) && (vma->vm_flags & VM_WRITE))
+ event = MMU_MPROT_RANDW;
+ else if (vma->vm_flags & VM_WRITE)
+ event = MMU_MPROT_WONLY;
+ else if (vma->vm_flags & VM_READ)
+ event = MMU_MPROT_RONLY;

which makes no sense at all. The names are a horrible abortion too
("RANDW"? That sounds like "random write" to me, not "read-and-write",
which is commonly shortened RW or perhaps RDWR. Same goes for
RONLY/WONLY - what kind of crazy names are those?)

But more importantly, afaik none of that is needed. Instead, tell us
why you need particular flags, and don't make up crazy names like
this. As far as I can tell, you're already passing in the new
protection information (thanks to passing in the vma), so all those
badly named states you've made up seem to be totally pointless. They
add no actual information, but they *do* add crazy code like the above
to generic code that doesn't even WANT any of this crap. The only
thing this should need is a MMU_MPROT event, and just use that. Then
anybody who wants to look at whether the protections are being changed
to read-only, they can just look at the vma->vm_flags themselves.
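
Something like this is all the generic code should need (a sketch only:
MMU_MPROT is the single new event being proposed here, and the listener
name is made up for illustration):

/* Generic VM side: emit one event, do no vm_flags decoding here. */
mmu_notifier_invalidate_range_start(mm, vma, start, end, MMU_MPROT);

/* Listener side: recover the new protection from the vma it is handed. */
static void my_invalidate_range_start(struct mmu_notifier *mn,
				      struct mm_struct *mm,
				      struct vm_area_struct *vma,
				      unsigned long start,
				      unsigned long end,
				      enum mmu_event event)
{
	if (event == MMU_MPROT && !(vma->vm_flags & VM_WRITE))
		;	/* range became read-only: downgrade device mappings */
}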

So things like this need to be tightened up and made sane before any
chance of merging it.

So NAK NAK NAK in the meantime.

Linus

2014-07-01 02:04:13

by Linus Torvalds

[permalink] [raw]
Subject: Re: [PATCH 4/6] mmu_notifier: pass through vma to invalidate_range and invalidate_page

On Fri, Jun 27, 2014 at 7:00 PM, Jérôme Glisse <[email protected]> wrote:
>
> This needs a small refactoring in memory.c to call invalidate_range on
> vma boundaries; the overhead should be low enough.

.. and looking at it, doesn't that mean that the whole invalidate call
should be moved inside unmap_single_vma() then, instead of being
duplicated in all the callers?
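
Roughly (a sketch only -- the real unmap_single_vma() takes more
parameters than shown, and the event name here is illustrative):

static void unmap_single_vma(struct mmu_gather *tlb,
			     struct vm_area_struct *vma,
			     unsigned long start, unsigned long end)
{
	mmu_notifier_invalidate_range_start(vma->vm_mm, vma, start,
					    end, MMU_MUNMAP);
	/* ... existing per-vma page table teardown for [start, end) ... */
	mmu_notifier_invalidate_range_end(vma->vm_mm, vma, start,
					  end, MMU_MUNMAP);
}

Then zap_page_range() and friends would just loop over vmas without
wrapping the notifier calls themselves.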

I really get the feeling that somebody needs to go over this
patch series with a fine-tooth comb to fix these kinds of ugly things.

Linus

2015-11-20 15:45:41

by David Woodhouse

[permalink] [raw]
Subject: Re: [PATCH 1/6] mmput: use notifier chain to call subsystem exit handler.

On Sun, 2015-10-11 at 20:03 +0100, David Woodhouse wrote:
> As we try to put together a generic API for device access to processes'
> address space, I definitely think we want to stick with the model that
> we take a reference on the mm, and we *keep* it until the device driver
> unbinds from the mm (because its file descriptor is closed, or
> whatever).

I've found another problem with this.

In some use cases, we mmap() the device file descriptor that is
responsible for the PASID binding, and in that case we end up with a
recursive refcount.

When the process exits, its file descriptors are closed... but the
underlying struct file remains open because it's still referenced from
the mmap'd VMA.

That VMA remains alive because it's still part of the MM.

And the MM remains alive because the PASID binding still holds a
refcount on it, because the device's struct file didn't get closed
yet... because it's still mmap'd... because the MM is still alive...
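
Illustrated as a comment (all names invented for the example):

/*
 *   pasid_bind(dev_file)    takes a reference on the mm
 *   mmap(dev_file)          vma->vm_file takes a reference on dev_file
 *
 *   At process exit:
 *     fds closed            but dev_file is still referenced by the vma
 *     vma stays alive       it is part of the mm's mapping tree
 *     mm stays alive        pinned by the PASID binding's reference
 *     binding stays alive   it is only torn down on final fput(dev_file)
 *
 *   => no refcount in the cycle ever reaches zero.
 */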

So I suspect that even for the relatively simple case where the
lifetime of the PASID can be bound to a file descriptor (unlike with
amdkfd), we probably still want to explicitly manage its lifetime as an
'off-cpu task' and explicitly kill it when the process dies.
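
To sketch what "explicitly kill it" could look like (every name below
is invented for illustration; only mmput() and kfree() are real):

struct pasid_binding {
	struct mm_struct *mm;	/* reference taken at bind time */
	struct file *dev_file;	/* the mmap'd device file */
};

static void pasid_binding_kill(struct pasid_binding *b)
{
	/* Quiesce device access before the mm can go away. */
	dev_stop_pasid_access(b);	/* invented helper */
	/* Drop the mm reference that was pinning the whole cycle. */
	mmput(b->mm);
	kfree(b);
}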

I'm still not keen on doing that implicitly through the mm_release. I
think that way lies a lot of subtle bugs.

--
David Woodhouse Open Source Technology Centre
[email protected] Intel Corporation

