2023-10-09 20:54:01

by Lorenzo Stoakes

Subject: [PATCH v2 0/5] Abstract vma_merge() and split_vma()

The vma_merge() interface is very confusing and its implementation has led
to numerous bugs as a result of that confusion.

In addition there is duplication both in the invocation of vma_merge() and in
the common mprotect()-style pattern of attempting a merge then, should this
fail, splitting the portion of a VMA about to have its attributes changed.

This pattern has been copy/pasted around the kernel in each instance where
such an operation has been required, each very slightly modified from the
last to make it even harder to decipher what is going on.

Simplify the whole thing by dividing the actual uses of vma_merge() and
split_vma() into specific and abstracted functions, de-duplicating the
vma_merge()/split_vma() pattern altogether.

Doing so also opens the door to changing how vma_merge() is implemented - by
knowing precisely which cases a caller is invoking rather than having a
central interface where anything might happen, we can untangle the brittle
and confusing vma_merge() implementation into something more workable.

For mprotect()-like cases we introduce vma_modify(), which performs the
vma_merge()/split_vma() pattern, returning the merged VMA if the merge
succeeds, NULL if it does not (splitting as required), or an ERR_PTR(err)
should a split fail.

We provide a number of inline helper functions to make things even clearer:-

* vma_modify_flags() - Prepare to modify the VMA's flags.
* vma_modify_flags_name() - Prepare to modify the VMA's flags/anon_vma_name.
* vma_modify_policy() - Prepare to modify the VMA's mempolicy.
* vma_modify_flags_uffd() - Prepare to modify the VMA's flags/uffd context.
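
As an illustration of the de-duplication this achieves, the mprotect_fixup()
conversion in patch 2/5 reduces the open-coded pattern (condensed here, with
the VM_WARN_ON() elided) from:

	pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
	*pprev = vma_merge(vmi, mm, *pprev, start, end, newflags,
			   vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
			   vma->vm_userfaultfd_ctx, anon_vma_name(vma));
	if (*pprev) {
		vma = *pprev;
		goto success;
	}

	*pprev = vma;

	if (start != vma->vm_start) {
		error = split_vma(vmi, vma, start, 1);
		if (error)
			goto fail;
	}

	if (end != vma->vm_end) {
		error = split_vma(vmi, vma, end, 0);
		if (error)
			goto fail;
	}

to:

	merged = vma_modify_flags(vmi, *pprev, vma, start, end, newflags);
	if (IS_ERR(merged)) {
		error = PTR_ERR(merged);
		goto fail;
	}

	if (merged)
		vma = *pprev = merged;
	else
		*pprev = vma;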

For cases where we attempt to merge a new VMA with those adjacent to it we
add:-

* vma_merge_new_vma() - Prepare to merge a new VMA.
* vma_merge_extend() - Prepare to extend the end of an existing VMA.

v2:
* Correct a mistake where error cases would have been treated as success, as
pointed out by Vlastimil.
* Move vma_policy() define to mm_types.h.
* Move anon_vma_name(), anon_vma_name_alloc() and anon_vma_name_free() to
mm_types.h from mm_inline.h.
* These moves make it possible to implement the vma_modify_*() helpers as
static inline functions, so do so.
* Spelling corrections and clarifications.

v1:
https://lore.kernel.org/all/[email protected]/

Lorenzo Stoakes (5):
mm: move vma_policy() and anon_vma_name() decls to mm_types.h
mm: abstract the vma_merge()/split_vma() pattern for mprotect() et al.
mm: make vma_merge() and split_vma() internal
mm: abstract merge for new VMAs into vma_merge_new_vma()
mm: abstract VMA merge and extend into vma_merge_extend() helper

fs/userfaultfd.c | 69 ++++++++----------------
include/linux/mempolicy.h | 4 --
include/linux/mm.h | 69 ++++++++++++++++++++----
include/linux/mm_inline.h | 20 +------
include/linux/mm_types.h | 27 ++++++++++
mm/internal.h | 7 +++
mm/madvise.c | 32 ++++-------
mm/mempolicy.c | 22 ++------
mm/mlock.c | 27 +++-------
mm/mmap.c | 111 +++++++++++++++++++++++++++++++-------
mm/mprotect.c | 35 ++++--------
mm/mremap.c | 30 +++++------
mm/nommu.c | 4 +-
13 files changed, 255 insertions(+), 202 deletions(-)

--
2.42.0


2023-10-09 20:54:35

by Lorenzo Stoakes

Subject: [PATCH v2 3/5] mm: make vma_merge() and split_vma() internal

Now that the common pattern - attempting a merge via vma_merge() and, should
this fail, splitting VMAs via split_vma() - has been abstracted, the former's
declaration can be moved into mm/internal.h and the latter made static.

In addition, the nommu variant of split_vma() need not be exported.

Reviewed-by: Vlastimil Babka <[email protected]>
Signed-off-by: Lorenzo Stoakes <[email protected]>
---
include/linux/mm.h | 9 ---------
mm/internal.h | 9 +++++++++
mm/mmap.c | 8 ++++----
mm/nommu.c | 4 ++--
4 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 83ee1f35febe..74d7547ffb70 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3237,16 +3237,7 @@ extern int vma_expand(struct vma_iterator *vmi, struct vm_area_struct *vma,
struct vm_area_struct *next);
extern int vma_shrink(struct vma_iterator *vmi, struct vm_area_struct *vma,
unsigned long start, unsigned long end, pgoff_t pgoff);
-extern struct vm_area_struct *vma_merge(struct vma_iterator *vmi,
- struct mm_struct *, struct vm_area_struct *prev, unsigned long addr,
- unsigned long end, unsigned long vm_flags, struct anon_vma *,
- struct file *, pgoff_t, struct mempolicy *, struct vm_userfaultfd_ctx,
- struct anon_vma_name *);
extern struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *);
-extern int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *,
- unsigned long addr, int new_below);
-extern int split_vma(struct vma_iterator *vmi, struct vm_area_struct *,
- unsigned long addr, int new_below);
extern int insert_vm_struct(struct mm_struct *, struct vm_area_struct *);
extern void unlink_file_vma(struct vm_area_struct *);
extern struct vm_area_struct *copy_vma(struct vm_area_struct **,
diff --git a/mm/internal.h b/mm/internal.h
index 3a72975425bb..ddaeb9f2d9d7 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1011,6 +1011,15 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
unsigned long addr, pmd_t *pmd,
unsigned int flags);

+/*
+ * mm/mmap.c
+ */
+struct vm_area_struct *vma_merge(struct vma_iterator *vmi,
+ struct mm_struct *, struct vm_area_struct *prev, unsigned long addr,
+ unsigned long end, unsigned long vm_flags, struct anon_vma *,
+ struct file *, pgoff_t, struct mempolicy *, struct vm_userfaultfd_ctx,
+ struct anon_vma_name *);
+
enum {
/* mark page accessed */
FOLL_TOUCH = 1 << 16,
diff --git a/mm/mmap.c b/mm/mmap.c
index 22d968affc07..17c0dcfb1527 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2346,8 +2346,8 @@ static void unmap_region(struct mm_struct *mm, struct ma_state *mas,
* has already been checked or doesn't make sense to fail.
* VMA Iterator will point to the end VMA.
*/
-int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
- unsigned long addr, int new_below)
+static int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
+ unsigned long addr, int new_below)
{
struct vma_prepare vp;
struct vm_area_struct *new;
@@ -2428,8 +2428,8 @@ int __split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
* Split a vma into two pieces at address 'addr', a new vma is allocated
* either for the first part or the tail.
*/
-int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
- unsigned long addr, int new_below)
+static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
+ unsigned long addr, int new_below)
{
if (vma->vm_mm->map_count >= sysctl_max_map_count)
return -ENOMEM;
diff --git a/mm/nommu.c b/mm/nommu.c
index f9553579389b..fc4afe924ad5 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1305,8 +1305,8 @@ SYSCALL_DEFINE1(old_mmap, struct mmap_arg_struct __user *, arg)
* split a vma into two pieces at address 'addr', a new vma is allocated either
* for the first part or the tail.
*/
-int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
- unsigned long addr, int new_below)
+static int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
+ unsigned long addr, int new_below)
{
struct vm_area_struct *new;
struct vm_region *region;
--
2.42.0

2023-10-09 20:54:38

by Lorenzo Stoakes

Subject: [PATCH v2 5/5] mm: abstract VMA merge and extend into vma_merge_extend() helper

mremap uses vma_merge() in the case where a VMA needs to be extended. This
can be significantly simplified and abstracted.

This makes it far easier to understand what the actual function is doing,
avoids future mistakes in use of the confusing vma_merge() function and,
importantly, allows us to make future changes to how vma_merge() is
implemented by knowing explicitly which merge cases each invocation uses.

Note that in the mremap() extend case, we perform this merge only when
old_len == vma->vm_end - addr. The extension_start, i.e. the start of the
extended portion of the VMA, is equal to addr + old_len, i.e. vma->vm_end.
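
That is, substituting old_len == vma->vm_end - addr:

	extension_start = addr + old_len
	                = addr + (vma->vm_end - addr)
	                = vma->vm_end

which is why vma_merge_extend() need only take the VMA and a byte delta, with
the VMA iterator positioned at vma->vm_end.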

With this refactoring, vma_merge() is no longer required anywhere except
mm/mmap.c, so mark it static.

Reviewed-by: Vlastimil Babka <[email protected]>
Signed-off-by: Lorenzo Stoakes <[email protected]>
---
mm/internal.h | 8 +++-----
mm/mmap.c | 31 ++++++++++++++++++++++++-------
mm/mremap.c | 30 +++++++++++++-----------------
3 files changed, 40 insertions(+), 29 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index ddaeb9f2d9d7..6fa722b07a94 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1014,11 +1014,9 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
/*
* mm/mmap.c
*/
-struct vm_area_struct *vma_merge(struct vma_iterator *vmi,
- struct mm_struct *, struct vm_area_struct *prev, unsigned long addr,
- unsigned long end, unsigned long vm_flags, struct anon_vma *,
- struct file *, pgoff_t, struct mempolicy *, struct vm_userfaultfd_ctx,
- struct anon_vma_name *);
+struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi,
+ struct vm_area_struct *vma,
+ unsigned long delta);

enum {
/* mark page accessed */
diff --git a/mm/mmap.c b/mm/mmap.c
index 33aafd23823b..200319bf3292 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -860,13 +860,13 @@ can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
* **** is not represented - it will be merged and the vma containing the
* area is returned, or the function will return NULL
*/
-struct vm_area_struct *vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
- struct vm_area_struct *prev, unsigned long addr,
- unsigned long end, unsigned long vm_flags,
- struct anon_vma *anon_vma, struct file *file,
- pgoff_t pgoff, struct mempolicy *policy,
- struct vm_userfaultfd_ctx vm_userfaultfd_ctx,
- struct anon_vma_name *anon_name)
+static struct vm_area_struct
+*vma_merge(struct vma_iterator *vmi, struct mm_struct *mm,
+ struct vm_area_struct *prev, unsigned long addr, unsigned long end,
+ unsigned long vm_flags, struct anon_vma *anon_vma, struct file *file,
+ pgoff_t pgoff, struct mempolicy *policy,
+ struct vm_userfaultfd_ctx vm_userfaultfd_ctx,
+ struct anon_vma_name *anon_name)
{
struct vm_area_struct *curr, *next, *res;
struct vm_area_struct *vma, *adjust, *remove, *remove2;
@@ -2498,6 +2498,23 @@ static struct vm_area_struct *vma_merge_new_vma(struct vma_iterator *vmi,
vma->vm_userfaultfd_ctx, anon_vma_name(vma));
}

+/*
+ * Expand vma by delta bytes, potentially merging with an immediately adjacent
+ * VMA with identical properties.
+ */
+struct vm_area_struct *vma_merge_extend(struct vma_iterator *vmi,
+ struct vm_area_struct *vma,
+ unsigned long delta)
+{
+ pgoff_t pgoff = vma->vm_pgoff + vma_pages(vma);
+
+ /* vma is specified as prev, so case 1 or 2 will apply. */
+ return vma_merge(vmi, vma->vm_mm, vma, vma->vm_end, vma->vm_end + delta,
+ vma->vm_flags, vma->anon_vma, vma->vm_file, pgoff,
+ vma_policy(vma), vma->vm_userfaultfd_ctx,
+ anon_vma_name(vma));
+}
+
/*
* do_vmi_align_munmap() - munmap the aligned region from @start to @end.
* @vmi: The vma iterator
diff --git a/mm/mremap.c b/mm/mremap.c
index ce8a23ef325a..38d98465f3d8 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -1096,14 +1096,12 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
/* old_len exactly to the end of the area..
*/
if (old_len == vma->vm_end - addr) {
+ unsigned long delta = new_len - old_len;
+
/* can we just expand the current mapping? */
- if (vma_expandable(vma, new_len - old_len)) {
- long pages = (new_len - old_len) >> PAGE_SHIFT;
- unsigned long extension_start = addr + old_len;
- unsigned long extension_end = addr + new_len;
- pgoff_t extension_pgoff = vma->vm_pgoff +
- ((extension_start - vma->vm_start) >> PAGE_SHIFT);
- VMA_ITERATOR(vmi, mm, extension_start);
+ if (vma_expandable(vma, delta)) {
+ long pages = delta >> PAGE_SHIFT;
+ VMA_ITERATOR(vmi, mm, vma->vm_end);
long charged = 0;

if (vma->vm_flags & VM_ACCOUNT) {
@@ -1115,17 +1113,15 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
}

/*
- * Function vma_merge() is called on the extension we
- * are adding to the already existing vma, vma_merge()
- * will merge this extension with the already existing
- * vma (expand operation itself) and possibly also with
- * the next vma if it becomes adjacent to the expanded
- * vma and otherwise compatible.
+ * Function vma_merge_extend() is called on the
+ * extension we are adding to the already existing vma,
+ * vma_merge_extend() will merge this extension with the
+ * already existing vma (expand operation itself) and
+ * possibly also with the next vma if it becomes
+ * adjacent to the expanded vma and otherwise
+ * compatible.
*/
- vma = vma_merge(&vmi, mm, vma, extension_start,
- extension_end, vma->vm_flags, vma->anon_vma,
- vma->vm_file, extension_pgoff, vma_policy(vma),
- vma->vm_userfaultfd_ctx, anon_vma_name(vma));
+ vma = vma_merge_extend(&vmi, vma, delta);
if (!vma) {
vm_unacct_memory(charged);
ret = -ENOMEM;
--
2.42.0

2023-10-09 20:54:41

by Lorenzo Stoakes

Subject: [PATCH v2 1/5] mm: move vma_policy() and anon_vma_name() decls to mm_types.h

The vma_policy() define is a helper specifically for a VMA field so it
makes sense to host it in the memory management types header.

The anon_vma_name(), anon_vma_name_alloc() and anon_vma_name_free()
functions are a little out of place in mm_inline.h as they define external
functions, and so it makes sense to locate them in mm_types.h.

The purpose of these relocations is to make it possible to abstract static
inline wrappers which invoke both of these helpers.
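
For example, the vma_modify_flags() wrapper added later in this series
references both helpers, so both must be visible wherever linux/mm.h is
included:

	static inline struct vm_area_struct
	*vma_modify_flags(struct vma_iterator *vmi,
			  struct vm_area_struct *prev,
			  struct vm_area_struct *vma,
			  unsigned long start, unsigned long end,
			  unsigned long new_flags)
	{
		return vma_modify(vmi, prev, vma, start, end, new_flags,
				  vma_policy(vma), vma->vm_userfaultfd_ctx,
				  anon_vma_name(vma));
	}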

Signed-off-by: Lorenzo Stoakes <[email protected]>
---
include/linux/mempolicy.h | 4 ----
include/linux/mm_inline.h | 20 +-------------------
include/linux/mm_types.h | 27 +++++++++++++++++++++++++++
3 files changed, 28 insertions(+), 23 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 3c208d4f0ee9..2801d5b0a4e9 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -89,8 +89,6 @@ static inline struct mempolicy *mpol_dup(struct mempolicy *pol)
return pol;
}

-#define vma_policy(vma) ((vma)->vm_policy)
-
static inline void mpol_get(struct mempolicy *pol)
{
if (pol)
@@ -222,8 +220,6 @@ static inline struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
return NULL;
}

-#define vma_policy(vma) NULL
-
static inline int
vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
{
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 8148b30a9df1..9ae7def16cb2 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -4,6 +4,7 @@

#include <linux/atomic.h>
#include <linux/huge_mm.h>
+#include <linux/mm_types.h>
#include <linux/swap.h>
#include <linux/string.h>
#include <linux/userfaultfd_k.h>
@@ -352,15 +353,6 @@ void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
}

#ifdef CONFIG_ANON_VMA_NAME
-/*
- * mmap_lock should be read-locked when calling anon_vma_name(). Caller should
- * either keep holding the lock while using the returned pointer or it should
- * raise anon_vma_name refcount before releasing the lock.
- */
-extern struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma);
-extern struct anon_vma_name *anon_vma_name_alloc(const char *name);
-extern void anon_vma_name_free(struct kref *kref);
-
/* mmap_lock should be read-locked */
static inline void anon_vma_name_get(struct anon_vma_name *anon_name)
{
@@ -415,16 +407,6 @@ static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1,
}

#else /* CONFIG_ANON_VMA_NAME */
-static inline struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
-{
- return NULL;
-}
-
-static inline struct anon_vma_name *anon_vma_name_alloc(const char *name)
-{
- return NULL;
-}
-
static inline void anon_vma_name_get(struct anon_vma_name *anon_name) {}
static inline void anon_vma_name_put(struct anon_vma_name *anon_name) {}
static inline void dup_anon_vma_name(struct vm_area_struct *orig_vma,
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 36c5b43999e6..21eb56145f57 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -546,6 +546,27 @@ struct anon_vma_name {
char name[];
};

+#ifdef CONFIG_ANON_VMA_NAME
+/*
+ * mmap_lock should be read-locked when calling anon_vma_name(). Caller should
+ * either keep holding the lock while using the returned pointer or it should
+ * raise anon_vma_name refcount before releasing the lock.
+ */
+struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma);
+struct anon_vma_name *anon_vma_name_alloc(const char *name);
+void anon_vma_name_free(struct kref *kref);
+#else /* CONFIG_ANON_VMA_NAME */
+static inline struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
+{
+ return NULL;
+}
+
+static inline struct anon_vma_name *anon_vma_name_alloc(const char *name)
+{
+ return NULL;
+}
+#endif
+
struct vma_lock {
struct rw_semaphore lock;
};
@@ -662,6 +683,12 @@ struct vm_area_struct {
struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
} __randomize_layout;

+#ifdef CONFIG_NUMA
+#define vma_policy(vma) ((vma)->vm_policy)
+#else
+#define vma_policy(vma) NULL
+#endif
+
#ifdef CONFIG_SCHED_MM_CID
struct mm_cid {
u64 time;
--
2.42.0

2023-10-09 20:54:41

by Lorenzo Stoakes

Subject: [PATCH v2 4/5] mm: abstract merge for new VMAs into vma_merge_new_vma()

Only in mmap_region() and copy_vma() do we attempt to merge VMAs which
occupy entirely new regions of virtual memory.

We can abstract this logic and make the intent of these invocations of it
completely explicit, rather than invoking vma_merge() with an inscrutable
wall of parameters.

This also paves the way for a simplification of the core vma_merge()
implementation, as we seek to make it entirely an implementation detail.

Note that on mmap_region(), VMA fields are initialised to zero, so we can
simply reference these rather than explicitly specifying NULL.

Reviewed-by: Vlastimil Babka <[email protected]>
Signed-off-by: Lorenzo Stoakes <[email protected]>
---
mm/mmap.c | 27 ++++++++++++++++++++-------
1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 17c0dcfb1527..33aafd23823b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2482,6 +2482,22 @@ struct vm_area_struct *vma_modify(struct vma_iterator *vmi,
return NULL;
}

+/*
+ * Attempt to merge a newly mapped VMA with those adjacent to it. The caller
+ * must ensure that [start, end) does not overlap any existing VMA.
+ */
+static struct vm_area_struct *vma_merge_new_vma(struct vma_iterator *vmi,
+ struct vm_area_struct *prev,
+ struct vm_area_struct *vma,
+ unsigned long start,
+ unsigned long end,
+ pgoff_t pgoff)
+{
+ return vma_merge(vmi, vma->vm_mm, prev, start, end, vma->vm_flags,
+ vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
+ vma->vm_userfaultfd_ctx, anon_vma_name(vma));
+}
+
/*
* do_vmi_align_munmap() - munmap the aligned region from @start to @end.
* @vmi: The vma iterator
@@ -2837,10 +2853,9 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
* vma again as we may succeed this time.
*/
if (unlikely(vm_flags != vma->vm_flags && prev)) {
- merge = vma_merge(&vmi, mm, prev, vma->vm_start,
- vma->vm_end, vma->vm_flags, NULL,
- vma->vm_file, vma->vm_pgoff, NULL,
- NULL_VM_UFFD_CTX, NULL);
+ merge = vma_merge_new_vma(&vmi, prev, vma,
+ vma->vm_start, vma->vm_end,
+ pgoff);
if (merge) {
/*
* ->mmap() can change vma->vm_file and fput
@@ -3382,9 +3397,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
if (new_vma && new_vma->vm_start < addr + len)
return NULL; /* should never get here */

- new_vma = vma_merge(&vmi, mm, prev, addr, addr + len, vma->vm_flags,
- vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
- vma->vm_userfaultfd_ctx, anon_vma_name(vma));
+ new_vma = vma_merge_new_vma(&vmi, prev, vma, addr, addr + len, pgoff);
if (new_vma) {
/*
* Source vma may have been merged into new_vma
--
2.42.0

2023-10-09 20:54:47

by Lorenzo Stoakes

Subject: [PATCH v2 2/5] mm: abstract the vma_merge()/split_vma() pattern for mprotect() et al.

mprotect() and other functions which change VMA parameters over a range
each employ a pattern of:-

1. Attempt to merge the range with adjacent VMAs.
2. If this fails, and the range spans a subset of the VMA, split it
accordingly.

This is open-coded and duplicated in each case. Also in each case most of
the parameters passed to vma_merge() remain the same.

Create a new function, vma_modify(), which abstracts this operation,
accepting only those parameters which can be changed.

To avoid the mess of invoking each function call with unnecessary
parameters, create inline wrapper functions for each of the modify
operations, parameterised only by what is required to perform the action.

Note that the userfaultfd_release() case works even though it does not
split VMAs - since start is set to vma->vm_start and end is set to
vma->vm_end, the split logic does not trigger.

In addition, since we calculate pgoff to be equal to vma->vm_pgoff + (start
- vma->vm_start) >> PAGE_SHIFT, and start - vma->vm_start will be 0 in this
instance, this invocation will remain unchanged.
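
To spell this out: with start == vma->vm_start and end == vma->vm_end,
neither split condition in vma_modify() (see the mm/mmap.c hunk below) can
fire, and the pgoff calculation reduces to the existing offset:

	if (vma->vm_start < start)	/* false - start == vma->vm_start */
		...
	if (vma->vm_end > end)		/* false - end == vma->vm_end */
		...

	pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT)
	      = vma->vm_pgoff + 0;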

Signed-off-by: Lorenzo Stoakes <[email protected]>
---
fs/userfaultfd.c | 69 +++++++++++++++-------------------------------
include/linux/mm.h | 60 ++++++++++++++++++++++++++++++++++++++++
mm/madvise.c | 32 ++++++---------------
mm/mempolicy.c | 22 +++------------
mm/mlock.c | 27 +++++-------------
mm/mmap.c | 45 ++++++++++++++++++++++++++++++
mm/mprotect.c | 35 +++++++----------------
7 files changed, 157 insertions(+), 133 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index a7c6ef764e63..ba44a67a0a34 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -927,11 +927,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
continue;
}
new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS;
- prev = vma_merge(&vmi, mm, prev, vma->vm_start, vma->vm_end,
- new_flags, vma->anon_vma,
- vma->vm_file, vma->vm_pgoff,
- vma_policy(vma),
- NULL_VM_UFFD_CTX, anon_vma_name(vma));
+ prev = vma_modify_flags_uffd(&vmi, prev, vma, vma->vm_start,
+ vma->vm_end, new_flags,
+ NULL_VM_UFFD_CTX);
+
if (prev) {
vma = prev;
} else {
@@ -1331,7 +1330,6 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
unsigned long start, end, vma_end;
struct vma_iterator vmi;
bool wp_async = userfaultfd_wp_async_ctx(ctx);
- pgoff_t pgoff;

user_uffdio_register = (struct uffdio_register __user *) arg;

@@ -1484,28 +1482,17 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
vma_end = min(end, vma->vm_end);

new_flags = (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags;
- pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
- prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags,
- vma->anon_vma, vma->vm_file, pgoff,
- vma_policy(vma),
- ((struct vm_userfaultfd_ctx){ ctx }),
- anon_vma_name(vma));
- if (prev) {
- /* vma_merge() invalidated the mas */
- vma = prev;
- goto next;
- }
- if (vma->vm_start < start) {
- ret = split_vma(&vmi, vma, start, 1);
- if (ret)
- break;
- }
- if (vma->vm_end > end) {
- ret = split_vma(&vmi, vma, end, 0);
- if (ret)
- break;
+ prev = vma_modify_flags_uffd(&vmi, prev, vma, start, vma_end,
+ new_flags,
+ (struct vm_userfaultfd_ctx){ctx});
+ if (IS_ERR(prev)) {
+ ret = PTR_ERR(prev);
+ break;
}
- next:
+
+ if (prev)
+ vma = prev; /* vma_merge() invalidated the mas */
+
/*
* In the vma_merge() successful mprotect-like case 8:
* the next vma was merged into the current one and
@@ -1568,7 +1555,6 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
const void __user *buf = (void __user *)arg;
struct vma_iterator vmi;
bool wp_async = userfaultfd_wp_async_ctx(ctx);
- pgoff_t pgoff;

ret = -EFAULT;
if (copy_from_user(&uffdio_unregister, buf, sizeof(uffdio_unregister)))
@@ -1671,26 +1657,15 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
uffd_wp_range(vma, start, vma_end - start, false);

new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS;
- pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
- prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags,
- vma->anon_vma, vma->vm_file, pgoff,
- vma_policy(vma),
- NULL_VM_UFFD_CTX, anon_vma_name(vma));
- if (prev) {
- vma = prev;
- goto next;
- }
- if (vma->vm_start < start) {
- ret = split_vma(&vmi, vma, start, 1);
- if (ret)
- break;
- }
- if (vma->vm_end > end) {
- ret = split_vma(&vmi, vma, end, 0);
- if (ret)
- break;
+ prev = vma_modify_flags_uffd(&vmi, prev, vma, start, vma_end,
+ new_flags, NULL_VM_UFFD_CTX);
+ if (IS_ERR(prev)) {
+ ret = PTR_ERR(prev);
+ break;
}
- next:
+
+ if (prev)
+ vma = prev;
/*
* In the vma_merge() successful mprotect-like case 8:
* the next vma was merged into the current one and
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a7b667786cde..83ee1f35febe 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3253,6 +3253,66 @@ extern struct vm_area_struct *copy_vma(struct vm_area_struct **,
unsigned long addr, unsigned long len, pgoff_t pgoff,
bool *need_rmap_locks);
extern void exit_mmap(struct mm_struct *);
+struct vm_area_struct *vma_modify(struct vma_iterator *vmi,
+ struct vm_area_struct *prev,
+ struct vm_area_struct *vma,
+ unsigned long start, unsigned long end,
+ unsigned long vm_flags,
+ struct mempolicy *policy,
+ struct vm_userfaultfd_ctx uffd_ctx,
+ struct anon_vma_name *anon_name);
+
+/* We are about to modify the VMA's flags. */
+static inline struct vm_area_struct
+*vma_modify_flags(struct vma_iterator *vmi,
+ struct vm_area_struct *prev,
+ struct vm_area_struct *vma,
+ unsigned long start, unsigned long end,
+ unsigned long new_flags)
+{
+ return vma_modify(vmi, prev, vma, start, end, new_flags,
+ vma_policy(vma), vma->vm_userfaultfd_ctx,
+ anon_vma_name(vma));
+}
+
+/* We are about to modify the VMA's flags and/or anon_name. */
+static inline struct vm_area_struct
+*vma_modify_flags_name(struct vma_iterator *vmi,
+ struct vm_area_struct *prev,
+ struct vm_area_struct *vma,
+ unsigned long start,
+ unsigned long end,
+ unsigned long new_flags,
+ struct anon_vma_name *new_name)
+{
+ return vma_modify(vmi, prev, vma, start, end, new_flags,
+ vma_policy(vma), vma->vm_userfaultfd_ctx, new_name);
+}
+
+/* We are about to modify the VMA's memory policy. */
+static inline struct vm_area_struct
+*vma_modify_policy(struct vma_iterator *vmi,
+ struct vm_area_struct *prev,
+ struct vm_area_struct *vma,
+ unsigned long start, unsigned long end,
+ struct mempolicy *new_pol)
+{
+ return vma_modify(vmi, prev, vma, start, end, vma->vm_flags,
+ new_pol, vma->vm_userfaultfd_ctx, anon_vma_name(vma));
+}
+
+/* We are about to modify the VMA's flags and/or uffd context. */
+static inline struct vm_area_struct
+*vma_modify_flags_uffd(struct vma_iterator *vmi,
+ struct vm_area_struct *prev,
+ struct vm_area_struct *vma,
+ unsigned long start, unsigned long end,
+ unsigned long new_flags,
+ struct vm_userfaultfd_ctx new_ctx)
+{
+ return vma_modify(vmi, prev, vma, start, end, new_flags,
+ vma_policy(vma), new_ctx, anon_vma_name(vma));
+}

static inline int check_data_rlimit(unsigned long rlim,
unsigned long new,
diff --git a/mm/madvise.c b/mm/madvise.c
index a4a20de50494..801d3c1bb7b3 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -141,7 +141,7 @@ static int madvise_update_vma(struct vm_area_struct *vma,
{
struct mm_struct *mm = vma->vm_mm;
int error;
- pgoff_t pgoff;
+ struct vm_area_struct *merged;
VMA_ITERATOR(vmi, mm, start);

if (new_flags == vma->vm_flags && anon_vma_name_eq(anon_vma_name(vma), anon_name)) {
@@ -149,30 +149,16 @@ static int madvise_update_vma(struct vm_area_struct *vma,
return 0;
}

- pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
- *prev = vma_merge(&vmi, mm, *prev, start, end, new_flags,
- vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
- vma->vm_userfaultfd_ctx, anon_name);
- if (*prev) {
- vma = *prev;
- goto success;
- }
-
- *prev = vma;
-
- if (start != vma->vm_start) {
- error = split_vma(&vmi, vma, start, 1);
- if (error)
- return error;
- }
+ merged = vma_modify_flags_name(&vmi, *prev, vma, start, end, new_flags,
+ anon_name);
+ if (IS_ERR(merged))
+ return PTR_ERR(merged);

- if (end != vma->vm_end) {
- error = split_vma(&vmi, vma, end, 0);
- if (error)
- return error;
- }
+ if (merged)
+ vma = *prev = merged;
+ else
+ *prev = vma;

-success:
/* vm_flags is protected by the mmap_lock held in write mode. */
vma_start_write(vma);
vm_flags_reset(vma, new_flags);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index b01922e88548..6b2e99db6dd5 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -786,8 +786,6 @@ static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma,
{
struct vm_area_struct *merged;
unsigned long vmstart, vmend;
- pgoff_t pgoff;
- int err;

vmend = min(end, vma->vm_end);
if (start > vma->vm_start) {
@@ -802,27 +800,15 @@ static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma,
return 0;
}

- pgoff = vma->vm_pgoff + ((vmstart - vma->vm_start) >> PAGE_SHIFT);
- merged = vma_merge(vmi, vma->vm_mm, *prev, vmstart, vmend, vma->vm_flags,
- vma->anon_vma, vma->vm_file, pgoff, new_pol,
- vma->vm_userfaultfd_ctx, anon_vma_name(vma));
+ merged = vma_modify_policy(vmi, *prev, vma, vmstart, vmend, new_pol);
+ if (IS_ERR(merged))
+ return PTR_ERR(merged);
+
if (merged) {
*prev = merged;
return vma_replace_policy(merged, new_pol);
}

- if (vma->vm_start != vmstart) {
- err = split_vma(vmi, vma, vmstart, 1);
- if (err)
- return err;
- }
-
- if (vma->vm_end != vmend) {
- err = split_vma(vmi, vma, vmend, 0);
- if (err)
- return err;
- }
-
*prev = vma;
return vma_replace_policy(vma, new_pol);
}
diff --git a/mm/mlock.c b/mm/mlock.c
index 42b6865f8f82..ae83a33c387e 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -476,10 +476,10 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
unsigned long end, vm_flags_t newflags)
{
struct mm_struct *mm = vma->vm_mm;
- pgoff_t pgoff;
int nr_pages;
int ret = 0;
vm_flags_t oldflags = vma->vm_flags;
+ struct vm_area_struct *merged;

if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
@@ -487,28 +487,15 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
goto out;

- pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
- *prev = vma_merge(vmi, mm, *prev, start, end, newflags,
- vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
- vma->vm_userfaultfd_ctx, anon_vma_name(vma));
- if (*prev) {
- vma = *prev;
- goto success;
- }
-
- if (start != vma->vm_start) {
- ret = split_vma(vmi, vma, start, 1);
- if (ret)
- goto out;
+ merged = vma_modify_flags(vmi, *prev, vma, start, end, newflags);
+ if (IS_ERR(merged)) {
+ ret = PTR_ERR(merged);
+ goto out;
}

- if (end != vma->vm_end) {
- ret = split_vma(vmi, vma, end, 0);
- if (ret)
- goto out;
- }
+ if (merged)
+ vma = *prev = merged;

-success:
/*
* Keep track of amount of locked VM.
*/
diff --git a/mm/mmap.c b/mm/mmap.c
index 673429ee8a9e..22d968affc07 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2437,6 +2437,51 @@ int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
return __split_vma(vmi, vma, addr, new_below);
}

+/*
+ * We are about to modify one or multiple of a VMA's flags, policy, userfaultfd
+ * context and anonymous VMA name within the range [start, end).
+ *
+ * As a result, we might be able to merge the newly modified VMA range with an
+ * adjacent VMA with identical properties.
+ *
+ * If no merge is possible and the range does not span the entirety of the VMA,
+ * we then need to split the VMA to accommodate the change.
+ */
+struct vm_area_struct *vma_modify(struct vma_iterator *vmi,
+ struct vm_area_struct *prev,
+ struct vm_area_struct *vma,
+ unsigned long start, unsigned long end,
+ unsigned long vm_flags,
+ struct mempolicy *policy,
+ struct vm_userfaultfd_ctx uffd_ctx,
+ struct anon_vma_name *anon_name)
+{
+ pgoff_t pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
+ struct vm_area_struct *merged;
+
+ merged = vma_merge(vmi, vma->vm_mm, prev, start, end, vm_flags,
+ vma->anon_vma, vma->vm_file, pgoff, policy,
+ uffd_ctx, anon_name);
+ if (merged)
+ return merged;
+
+ if (vma->vm_start < start) {
+ int err = split_vma(vmi, vma, start, 1);
+
+ if (err)
+ return ERR_PTR(err);
+ }
+
+ if (vma->vm_end > end) {
+ int err = split_vma(vmi, vma, end, 0);
+
+ if (err)
+ return ERR_PTR(err);
+ }
+
+ return NULL;
+}
+
/*
* do_vmi_align_munmap() - munmap the aligned region from @start to @end.
* @vmi: The vma iterator
diff --git a/mm/mprotect.c b/mm/mprotect.c
index b94fbb45d5c7..6f85d99682ab 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -581,7 +581,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
long nrpages = (end - start) >> PAGE_SHIFT;
unsigned int mm_cp_flags = 0;
unsigned long charged = 0;
- pgoff_t pgoff;
+ struct vm_area_struct *merged;
int error;

if (newflags == oldflags) {
@@ -625,34 +625,19 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
}
}

- /*
- * First try to merge with previous and/or next vma.
- */
- pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
- *pprev = vma_merge(vmi, mm, *pprev, start, end, newflags,
- vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
- vma->vm_userfaultfd_ctx, anon_vma_name(vma));
- if (*pprev) {
- vma = *pprev;
- VM_WARN_ON((vma->vm_flags ^ newflags) & ~VM_SOFTDIRTY);
- goto success;
+ merged = vma_modify_flags(vmi, *pprev, vma, start, end, newflags);
+ if (IS_ERR(merged)) {
+ error = PTR_ERR(merged);
+ goto fail;
}

- *pprev = vma;
-
- if (start != vma->vm_start) {
- error = split_vma(vmi, vma, start, 1);
- if (error)
- goto fail;
- }
-
- if (end != vma->vm_end) {
- error = split_vma(vmi, vma, end, 0);
- if (error)
- goto fail;
+ if (merged) {
+ vma = *pprev = merged;
+ VM_WARN_ON((vma->vm_flags ^ newflags) & ~VM_SOFTDIRTY);
+ } else {
+ *pprev = vma;
}

-success:
/*
* vm_flags and vm_page_prot are protected by the mmap_lock
* held in write mode.
--
2.42.0

2023-10-10 06:46:37

by Vlastimil Babka

Subject: Re: [PATCH v2 1/5] mm: move vma_policy() and anon_vma_name() decls to mm_types.h

On 10/9/23 22:53, Lorenzo Stoakes wrote:
> The vma_policy() define is a helper specifically for a VMA field so it
> makes sense to host it in the memory management types header.
>
> The anon_vma_name(), anon_vma_name_alloc() and anon_vma_name_free()
> functions are a little out of place in mm_inline.h as they define external
> functions, and so it makes sense to locate them in mm_types.h.
>
> The purpose of these relocations is to make it possible to abstract static
> inline wrappers which invoke both of these helpers.
>
> Signed-off-by: Lorenzo Stoakes <[email protected]>

Reviewed-by: Vlastimil Babka <[email protected]>

> ---
> include/linux/mempolicy.h | 4 ----
> include/linux/mm_inline.h | 20 +-------------------
> include/linux/mm_types.h | 27 +++++++++++++++++++++++++++
> 3 files changed, 28 insertions(+), 23 deletions(-)
>
> diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
> index 3c208d4f0ee9..2801d5b0a4e9 100644
> --- a/include/linux/mempolicy.h
> +++ b/include/linux/mempolicy.h
> @@ -89,8 +89,6 @@ static inline struct mempolicy *mpol_dup(struct mempolicy *pol)
> return pol;
> }
>
> -#define vma_policy(vma) ((vma)->vm_policy)
> -
> static inline void mpol_get(struct mempolicy *pol)
> {
> if (pol)
> @@ -222,8 +220,6 @@ static inline struct mempolicy *get_vma_policy(struct vm_area_struct *vma,
> return NULL;
> }
>
> -#define vma_policy(vma) NULL
> -
> static inline int
> vma_dup_policy(struct vm_area_struct *src, struct vm_area_struct *dst)
> {
> diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
> index 8148b30a9df1..9ae7def16cb2 100644
> --- a/include/linux/mm_inline.h
> +++ b/include/linux/mm_inline.h
> @@ -4,6 +4,7 @@
>
> #include <linux/atomic.h>
> #include <linux/huge_mm.h>
> +#include <linux/mm_types.h>
> #include <linux/swap.h>
> #include <linux/string.h>
> #include <linux/userfaultfd_k.h>
> @@ -352,15 +353,6 @@ void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
> }
>
> #ifdef CONFIG_ANON_VMA_NAME
> -/*
> - * mmap_lock should be read-locked when calling anon_vma_name(). Caller should
> - * either keep holding the lock while using the returned pointer or it should
> - * raise anon_vma_name refcount before releasing the lock.
> - */
> -extern struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma);
> -extern struct anon_vma_name *anon_vma_name_alloc(const char *name);
> -extern void anon_vma_name_free(struct kref *kref);
> -
> /* mmap_lock should be read-locked */
> static inline void anon_vma_name_get(struct anon_vma_name *anon_name)
> {
> @@ -415,16 +407,6 @@ static inline bool anon_vma_name_eq(struct anon_vma_name *anon_name1,
> }
>
> #else /* CONFIG_ANON_VMA_NAME */
> -static inline struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
> -{
> - return NULL;
> -}
> -
> -static inline struct anon_vma_name *anon_vma_name_alloc(const char *name)
> -{
> - return NULL;
> -}
> -
> static inline void anon_vma_name_get(struct anon_vma_name *anon_name) {}
> static inline void anon_vma_name_put(struct anon_vma_name *anon_name) {}
> static inline void dup_anon_vma_name(struct vm_area_struct *orig_vma,
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 36c5b43999e6..21eb56145f57 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -546,6 +546,27 @@ struct anon_vma_name {
> char name[];
> };
>
> +#ifdef CONFIG_ANON_VMA_NAME
> +/*
> + * mmap_lock should be read-locked when calling anon_vma_name(). Caller should
> + * either keep holding the lock while using the returned pointer or it should
> + * raise anon_vma_name refcount before releasing the lock.
> + */
> +struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma);
> +struct anon_vma_name *anon_vma_name_alloc(const char *name);
> +void anon_vma_name_free(struct kref *kref);
> +#else /* CONFIG_ANON_VMA_NAME */
> +static inline struct anon_vma_name *anon_vma_name(struct vm_area_struct *vma)
> +{
> + return NULL;
> +}
> +
> +static inline struct anon_vma_name *anon_vma_name_alloc(const char *name)
> +{
> + return NULL;
> +}
> +#endif
> +
> struct vma_lock {
> struct rw_semaphore lock;
> };
> @@ -662,6 +683,12 @@ struct vm_area_struct {
> struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
> } __randomize_layout;
>
> +#ifdef CONFIG_NUMA
> +#define vma_policy(vma) ((vma)->vm_policy)
> +#else
> +#define vma_policy(vma) NULL
> +#endif
> +
> #ifdef CONFIG_SCHED_MM_CID
> struct mm_cid {
> u64 time;

2023-10-10 07:13:07

by Vlastimil Babka

Subject: Re: [PATCH v2 2/5] mm: abstract the vma_merge()/split_vma() pattern for mprotect() et al.

On 10/9/23 22:53, Lorenzo Stoakes wrote:
> mprotect() and other functions which change VMA parameters over a range
> each employ a pattern of:-
>
> 1. Attempt to merge the range with adjacent VMAs.
> 2. If this fails, and the range spans a subset of the VMA, split it
> accordingly.
>
> This is open-coded and duplicated in each case. Also in each case most of
> the parameters passed to vma_merge() remain the same.
>
> Create a new function, vma_modify(), which abstracts this operation,
> accepting only those parameters which can be changed.
>
> To avoid the mess of invoking each function call with unnecessary
> parameters, create inline wrapper functions for each of the modify
> operations, parameterised only by what is required to perform the action.
>
> Note that the userfaultfd_release() case works even though it does not
> split VMAs - since start is set to vma->vm_start and end is set to
> vma->vm_end, the split logic does not trigger.
>
> In addition, since we calculate pgoff to be equal to vma->vm_pgoff + (start
> - vma->vm_start) >> PAGE_SHIFT, and start - vma->vm_start will be 0 in this
> instance, this invocation will remain unchanged.
>
> Signed-off-by: Lorenzo Stoakes <[email protected]>

Reviewed-by: Vlastimil Babka <[email protected]>

some nits below:

> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2437,6 +2437,51 @@ int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
> return __split_vma(vmi, vma, addr, new_below);
> }
>
> +/*
> + * We are about to modify one or multiple of a VMA's flags, policy, userfaultfd
> + * context and anonymous VMA name within the range [start, end).
> + *
> + * As a result, we might be able to merge the newly modified VMA range with an
> + * adjacent VMA with identical properties.
> + *
> + * If no merge is possible and the range does not span the entirety of the VMA,
> + * we then need to split the VMA to accommodate the change.
> + */

This could describe the return value too? It's not entirely trivial.
But I also wonder if we could just return 'vma' for the split_vma() cases
and the callers could simply stop distinguishing whether there was a merge
or split, and their code would become even simpler?
It seems to me most callers don't care, except mprotect, see below...

> +struct vm_area_struct *vma_modify(struct vma_iterator *vmi,
> + struct vm_area_struct *prev,
> + struct vm_area_struct *vma,
> + unsigned long start, unsigned long end,
> + unsigned long vm_flags,
> + struct mempolicy *policy,
> + struct vm_userfaultfd_ctx uffd_ctx,
> + struct anon_vma_name *anon_name)
> +{
> + pgoff_t pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
> + struct vm_area_struct *merged;
> +
> + merged = vma_merge(vmi, vma->vm_mm, prev, start, end, vm_flags,
> + vma->anon_vma, vma->vm_file, pgoff, policy,
> + uffd_ctx, anon_name);
> + if (merged)
> + return merged;
> +
> + if (vma->vm_start < start) {
> + int err = split_vma(vmi, vma, start, 1);
> +
> + if (err)
> + return ERR_PTR(err);
> + }
> +
> + if (vma->vm_end > end) {
> + int err = split_vma(vmi, vma, end, 0);
> +
> + if (err)
> + return ERR_PTR(err);
> + }
> +
> + return NULL;
> +}
> +
> /*
> * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
> * @vmi: The vma iterator
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index b94fbb45d5c7..6f85d99682ab 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -581,7 +581,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
> long nrpages = (end - start) >> PAGE_SHIFT;
> unsigned int mm_cp_flags = 0;
> unsigned long charged = 0;
> - pgoff_t pgoff;
> + struct vm_area_struct *merged;
> int error;
>
> if (newflags == oldflags) {
> @@ -625,34 +625,19 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
> }
> }
>
> - /*
> - * First try to merge with previous and/or next vma.
> - */
> - pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
> - *pprev = vma_merge(vmi, mm, *pprev, start, end, newflags,
> - vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
> - vma->vm_userfaultfd_ctx, anon_vma_name(vma));
> - if (*pprev) {
> - vma = *pprev;
> - VM_WARN_ON((vma->vm_flags ^ newflags) & ~VM_SOFTDIRTY);
> - goto success;
> + merged = vma_modify_flags(vmi, *pprev, vma, start, end, newflags);
> + if (IS_ERR(merged)) {
> + error = PTR_ERR(merged);
> + goto fail;
> }
>
> - *pprev = vma;
> -
> - if (start != vma->vm_start) {
> - error = split_vma(vmi, vma, start, 1);
> - if (error)
> - goto fail;
> - }
> -
> - if (end != vma->vm_end) {
> - error = split_vma(vmi, vma, end, 0);
> - if (error)
> - goto fail;
> + if (merged) {
> + vma = *pprev = merged;
> + VM_WARN_ON((vma->vm_flags ^ newflags) & ~VM_SOFTDIRTY);

This VM_WARN_ON() is AFAICS the only piece of code that cares about merged
vs split. Would it be ok to call it for the split vma cases as well, or
maybe remove it?

> + } else {
> + *pprev = vma;
> }
>
> -success:
> /*
> * vm_flags and vm_page_prot are protected by the mmap_lock
> * held in write mode.

2023-10-10 18:11:43

by Lorenzo Stoakes

Subject: Re: [PATCH v2 2/5] mm: abstract the vma_merge()/split_vma() pattern for mprotect() et al.

On Tue, Oct 10, 2023 at 09:12:21AM +0200, Vlastimil Babka wrote:
> On 10/9/23 22:53, Lorenzo Stoakes wrote:
> > mprotect() and other functions which change VMA parameters over a range
> > each employ a pattern of:-
> >
> > 1. Attempt to merge the range with adjacent VMAs.
> > 2. If this fails, and the range spans a subset of the VMA, split it
> > accordingly.
> >
> > This is open-coded and duplicated in each case. Also in each case most of
> > the parameters passed to vma_merge() remain the same.
> >
> > Create a new function, vma_modify(), which abstracts this operation,
> > accepting only those parameters which can be changed.
> >
> > To avoid the mess of invoking each function call with unnecessary
> > parameters, create inline wrapper functions for each of the modify
> > operations, parameterised only by what is required to perform the action.
> >
> > Note that the userfaultfd_release() case works even though it does not
> > split VMAs - since start is set to vma->vm_start and end is set to
> > vma->vm_end, the split logic does not trigger.
> >
> > In addition, since we calculate pgoff to be equal to vma->vm_pgoff + (start
> > - vma->vm_start) >> PAGE_SHIFT, and start - vma->vm_start will be 0 in this
> > instance, this invocation will remain unchanged.
> >
> > Signed-off-by: Lorenzo Stoakes <[email protected]>
>
> Reviewed-by: Vlastimil Babka <[email protected]>
>
> some nits below:
>
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -2437,6 +2437,51 @@ int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
> > return __split_vma(vmi, vma, addr, new_below);
> > }
> >
> > +/*
> > + * We are about to modify one or multiple of a VMA's flags, policy, userfaultfd
> > + * context and anonymous VMA name within the range [start, end).
> > + *
> > + * As a result, we might be able to merge the newly modified VMA range with an
> > + * adjacent VMA with identical properties.
> > + *
> > + * If no merge is possible and the range does not span the entirety of the VMA,
> > + * we then need to split the VMA to accommodate the change.
> > + */
>
> This could describe the return value too? It's not entirely trivial.
> But I also wonder if we could just return 'vma' for the split_vma() cases
> and the callers could simply stop distinguishing whether there was a merge
> or split, and their code would become even simpler?
> It seems to me most callers don't care, except mprotect, see below...

What a great idea, thanks! I have worked through and implemented this and
it does indeed work and simplify things even further, cheers!
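
For reference, the direction I've taken is roughly (untested sketch) to have
vma_modify() hand the VMA back in the split/no-merge case too:

	if (vma->vm_start < start) {
		int err = split_vma(vmi, vma, start, 1);

		if (err)
			return ERR_PTR(err);
	}

	if (vma->vm_end > end) {
		int err = split_vma(vmi, vma, end, 0);

		if (err)
			return ERR_PTR(err);
	}

	/* Whether merged or split, return the VMA covering [start, end). */
	return vma;

so callers need only check IS_ERR() and can assign the result unconditionally.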

>
> > +struct vm_area_struct *vma_modify(struct vma_iterator *vmi,
> > + struct vm_area_struct *prev,
> > + struct vm_area_struct *vma,
> > + unsigned long start, unsigned long end,
> > + unsigned long vm_flags,
> > + struct mempolicy *policy,
> > + struct vm_userfaultfd_ctx uffd_ctx,
> > + struct anon_vma_name *anon_name)
> > +{
> > + pgoff_t pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
> > + struct vm_area_struct *merged;
> > +
> > + merged = vma_merge(vmi, vma->vm_mm, prev, start, end, vm_flags,
> > + vma->anon_vma, vma->vm_file, pgoff, policy,
> > + uffd_ctx, anon_name);
> > + if (merged)
> > + return merged;
> > +
> > + if (vma->vm_start < start) {
> > + int err = split_vma(vmi, vma, start, 1);
> > +
> > + if (err)
> > + return ERR_PTR(err);
> > + }
> > +
> > + if (vma->vm_end > end) {
> > + int err = split_vma(vmi, vma, end, 0);
> > +
> > + if (err)
> > + return ERR_PTR(err);
> > + }
> > +
> > + return NULL;
> > +}
> > +
> > /*
> > * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
> > * @vmi: The vma iterator
> > diff --git a/mm/mprotect.c b/mm/mprotect.c
> > index b94fbb45d5c7..6f85d99682ab 100644
> > --- a/mm/mprotect.c
> > +++ b/mm/mprotect.c
> > @@ -581,7 +581,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
> > long nrpages = (end - start) >> PAGE_SHIFT;
> > unsigned int mm_cp_flags = 0;
> > unsigned long charged = 0;
> > - pgoff_t pgoff;
> > + struct vm_area_struct *merged;
> > int error;
> >
> > if (newflags == oldflags) {
> > @@ -625,34 +625,19 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
> > }
> > }
> >
> > - /*
> > - * First try to merge with previous and/or next vma.
> > - */
> > - pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
> > - *pprev = vma_merge(vmi, mm, *pprev, start, end, newflags,
> > - vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
> > - vma->vm_userfaultfd_ctx, anon_vma_name(vma));
> > - if (*pprev) {
> > - vma = *pprev;
> > - VM_WARN_ON((vma->vm_flags ^ newflags) & ~VM_SOFTDIRTY);
> > - goto success;
> > + merged = vma_modify_flags(vmi, *pprev, vma, start, end, newflags);
> > + if (IS_ERR(merged)) {
> > + error = PTR_ERR(merged);
> > + goto fail;
> > }
> >
> > - *pprev = vma;
> > -
> > - if (start != vma->vm_start) {
> > - error = split_vma(vmi, vma, start, 1);
> > - if (error)
> > - goto fail;
> > - }
> > -
> > - if (end != vma->vm_end) {
> > - error = split_vma(vmi, vma, end, 0);
> > - if (error)
> > - goto fail;
> > + if (merged) {
> > + vma = *pprev = merged;
> > + VM_WARN_ON((vma->vm_flags ^ newflags) & ~VM_SOFTDIRTY);
>
> This VM_WARN_ON() is AFAICS the only piece of code that cares about merged
> vs split. Would it be ok to call it for the split vma cases as well, or
> maybe remove it?

This is simply asserting a fundamental requirement of vma_merge() in
general, i.e. that the flags of what was merged match those of the VMA that
is being merged.

This is already checked in the VMA merge implementation, so this feels super
redundant here - I think we're good to simply remove it.

>
> > + } else {
> > + *pprev = vma;
> > }
> >
> > -success:
> > /*
> > * vm_flags and vm_page_prot are protected by the mmap_lock
> > * held in write mode.
>

2023-10-11 01:52:26

by Liam R. Howlett

Subject: Re: [PATCH v2 4/5] mm: abstract merge for new VMAs into vma_merge_new_vma()

* Lorenzo Stoakes <[email protected]> [231009 16:53]:
> Only in mmap_region() and copy_vma() do we attempt to merge VMAs which
> occupy entirely new regions of virtual memory.
>
> We can abstract this logic and make the intent of these invocations of it
> completely explicit, rather than invoking vma_merge() with an inscrutable
> wall of parameters.
>
> This also paves the way for a simplification of the core vma_merge()
> implementation, as we seek to make it entirely an implementation detail.
>
> Note that on mmap_region(), VMA fields are initialised to zero, so we can
> simply reference these rather than explicitly specifying NULL.

I don't think that's accurate - mmap_region() sets the start, end,
offset, flags. It also passes this vma into a driver, so I'm not sure
we can rely on them being anything after that? The whole reason
vma_merge() is attempted in this case is because the driver may have
changed vma->vm_flags on us. Your way may actually be better since the
driver may set something we assume is NULL today.

>
> Reviewed-by: Vlastimil Babka <[email protected]>
> Signed-off-by: Lorenzo Stoakes <[email protected]>
> ---
> mm/mmap.c | 27 ++++++++++++++++++++-------
> 1 file changed, 20 insertions(+), 7 deletions(-)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 17c0dcfb1527..33aafd23823b 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2482,6 +2482,22 @@ struct vm_area_struct *vma_modify(struct vma_iterator *vmi,
> return NULL;
> }
>
> +/*
> + * Attempt to merge a newly mapped VMA with those adjacent to it. The caller
> + * must ensure that [start, end) does not overlap any existing VMA.
> + */
> +static struct vm_area_struct *vma_merge_new_vma(struct vma_iterator *vmi,
> + struct vm_area_struct *prev,
> + struct vm_area_struct *vma,
> + unsigned long start,
> + unsigned long end,
> + pgoff_t pgoff)

It's not a coding style issue, but if you used two tabs here, it may make this
more condensed.

> +{
> + return vma_merge(vmi, vma->vm_mm, prev, start, end, vma->vm_flags,
> + vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
> + vma->vm_userfaultfd_ctx, anon_vma_name(vma));
> +}
> +
> /*
> * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
> * @vmi: The vma iterator
> @@ -2837,10 +2853,9 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
> * vma again as we may succeed this time.
> */
> if (unlikely(vm_flags != vma->vm_flags && prev)) {
> - merge = vma_merge(&vmi, mm, prev, vma->vm_start,
> - vma->vm_end, vma->vm_flags, NULL,
> - vma->vm_file, vma->vm_pgoff, NULL,
> - NULL_VM_UFFD_CTX, NULL);
> + merge = vma_merge_new_vma(&vmi, prev, vma,
> + vma->vm_start, vma->vm_end,
> + pgoff);
└ vma->vm_pgoff
> if (merge) {
> /*
> * ->mmap() can change vma->vm_file and fput
> @@ -3382,9 +3397,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
> if (new_vma && new_vma->vm_start < addr + len)
> return NULL; /* should never get here */
>
> - new_vma = vma_merge(&vmi, mm, prev, addr, addr + len, vma->vm_flags,
> - vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
> - vma->vm_userfaultfd_ctx, anon_vma_name(vma));
> + new_vma = vma_merge_new_vma(&vmi, prev, vma, addr, addr + len, pgoff);
> if (new_vma) {
> /*
> * Source vma may have been merged into new_vma
> --
> 2.42.0
>

2023-10-11 02:15:47

by Liam R. Howlett

Subject: Re: [PATCH v2 2/5] mm: abstract the vma_merge()/split_vma() pattern for mprotect() et al.

* Lorenzo Stoakes <[email protected]> [231009 16:53]:
> mprotect() and other functions which change VMA parameters over a range
> each employ a pattern of:-
>
> 1. Attempt to merge the range with adjacent VMAs.
> 2. If this fails, and the range spans a subset of the VMA, split it
> accordingly.
>
> This is open-coded and duplicated in each case. Also in each case most of
> the parameters passed to vma_merge() remain the same.
>
> Create a new function, vma_modify(), which abstracts this operation,
> accepting only those parameters which can be changed.
>
> To avoid the mess of invoking each function call with unnecessary
> parameters, create inline wrapper functions for each of the modify
> operations, parameterised only by what is required to perform the action.
>
> Note that the userfaultfd_release() case works even though it does not
> split VMAs - since start is set to vma->vm_start and end is set to
> vma->vm_end, the split logic does not trigger.
>
> In addition, since we calculate pgoff to be equal to vma->vm_pgoff + (start
> - vma->vm_start) >> PAGE_SHIFT, and start - vma->vm_start will be 0 in this
> instance, this invocation will remain unchanged.
>
> Signed-off-by: Lorenzo Stoakes <[email protected]>
> ---
> fs/userfaultfd.c | 69 +++++++++++++++-------------------------------
> include/linux/mm.h | 60 ++++++++++++++++++++++++++++++++++++++++
> mm/madvise.c | 32 ++++++---------------
> mm/mempolicy.c | 22 +++------------
> mm/mlock.c | 27 +++++-------------
> mm/mmap.c | 45 ++++++++++++++++++++++++++++++
> mm/mprotect.c | 35 +++++++----------------
> 7 files changed, 157 insertions(+), 133 deletions(-)
>
> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> index a7c6ef764e63..ba44a67a0a34 100644
> --- a/fs/userfaultfd.c
> +++ b/fs/userfaultfd.c
> @@ -927,11 +927,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
> continue;
> }
> new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS;
> - prev = vma_merge(&vmi, mm, prev, vma->vm_start, vma->vm_end,
> - new_flags, vma->anon_vma,
> - vma->vm_file, vma->vm_pgoff,
> - vma_policy(vma),
> - NULL_VM_UFFD_CTX, anon_vma_name(vma));
> + prev = vma_modify_flags_uffd(&vmi, prev, vma, vma->vm_start,
> + vma->vm_end, new_flags,
> + NULL_VM_UFFD_CTX);
> +
> if (prev) {
> vma = prev;
> } else {
> @@ -1331,7 +1330,6 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
> unsigned long start, end, vma_end;
> struct vma_iterator vmi;
> bool wp_async = userfaultfd_wp_async_ctx(ctx);
> - pgoff_t pgoff;
>
> user_uffdio_register = (struct uffdio_register __user *) arg;
>
> @@ -1484,28 +1482,17 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
> vma_end = min(end, vma->vm_end);
>
> new_flags = (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags;
> - pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
> - prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags,
> - vma->anon_vma, vma->vm_file, pgoff,
> - vma_policy(vma),
> - ((struct vm_userfaultfd_ctx){ ctx }),
> - anon_vma_name(vma));
> - if (prev) {
> - /* vma_merge() invalidated the mas */
> - vma = prev;
> - goto next;
> - }
> - if (vma->vm_start < start) {
> - ret = split_vma(&vmi, vma, start, 1);
> - if (ret)
> - break;
> - }
> - if (vma->vm_end > end) {
> - ret = split_vma(&vmi, vma, end, 0);
> - if (ret)
> - break;
> + prev = vma_modify_flags_uffd(&vmi, prev, vma, start, vma_end,
> + new_flags,
> + (struct vm_userfaultfd_ctx){ctx});
> + if (IS_ERR(prev)) {
> + ret = PTR_ERR(prev);
> + break;
> }
> - next:
> +
> + if (prev)
> + vma = prev; /* vma_merge() invalidated the mas */

This is a stale comment. The maple state is in the vma iterator, which
is passed through. I missed this on the vma iterator conversion.
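
For reference, a minimal sketch of the relevant definition (paraphrased
from include/linux/mm_types.h, so treat it as illustrative rather than
verbatim):

struct vma_iterator {
	struct ma_state mas;	/* the maple state lives inside the iterator */
};

So anything taking a struct vma_iterator * already carries the maple state
with it; there is no separate mas left to invalidate.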

> +
> /*
> * In the vma_merge() successful mprotect-like case 8:
> * the next vma was merged into the current one and
> @@ -1568,7 +1555,6 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
> const void __user *buf = (void __user *)arg;
> struct vma_iterator vmi;
> bool wp_async = userfaultfd_wp_async_ctx(ctx);
> - pgoff_t pgoff;
>
> ret = -EFAULT;
> if (copy_from_user(&uffdio_unregister, buf, sizeof(uffdio_unregister)))
> @@ -1671,26 +1657,15 @@ static int userfaultfd_unregister(struct userfaultfd_ctx *ctx,
> uffd_wp_range(vma, start, vma_end - start, false);
>
> new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS;
> - pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
> - prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags,
> - vma->anon_vma, vma->vm_file, pgoff,
> - vma_policy(vma),
> - NULL_VM_UFFD_CTX, anon_vma_name(vma));
> - if (prev) {
> - vma = prev;
> - goto next;
> - }
> - if (vma->vm_start < start) {
> - ret = split_vma(&vmi, vma, start, 1);
> - if (ret)
> - break;
> - }
> - if (vma->vm_end > end) {
> - ret = split_vma(&vmi, vma, end, 0);
> - if (ret)
> - break;
> + prev = vma_modify_flags_uffd(&vmi, prev, vma, start, vma_end,
> + new_flags, NULL_VM_UFFD_CTX);
> + if (IS_ERR(prev)) {
> + ret = PTR_ERR(prev);
> + break;
> }
> - next:
> +
> + if (prev)
> + vma = prev;
> /*
> * In the vma_merge() successful mprotect-like case 8:
> * the next vma was merged into the current one and
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index a7b667786cde..83ee1f35febe 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3253,6 +3253,66 @@ extern struct vm_area_struct *copy_vma(struct vm_area_struct **,
> unsigned long addr, unsigned long len, pgoff_t pgoff,
> bool *need_rmap_locks);
> extern void exit_mmap(struct mm_struct *);
> +struct vm_area_struct *vma_modify(struct vma_iterator *vmi,
> + struct vm_area_struct *prev,
> + struct vm_area_struct *vma,
> + unsigned long start, unsigned long end,
> + unsigned long vm_flags,
> + struct mempolicy *policy,
> + struct vm_userfaultfd_ctx uffd_ctx,
> + struct anon_vma_name *anon_name);
> +
> +/* We are about to modify the VMA's flags. */
> +static inline struct vm_area_struct
> +*vma_modify_flags(struct vma_iterator *vmi,
> + struct vm_area_struct *prev,
> + struct vm_area_struct *vma,
> + unsigned long start, unsigned long end,
> + unsigned long new_flags)
> +{
> + return vma_modify(vmi, prev, vma, start, end, new_flags,
> + vma_policy(vma), vma->vm_userfaultfd_ctx,
> + anon_vma_name(vma));
> +}
> +
> +/* We are about to modify the VMA's flags and/or anon_name. */
> +static inline struct vm_area_struct
> +*vma_modify_flags_name(struct vma_iterator *vmi,
> + struct vm_area_struct *prev,
> + struct vm_area_struct *vma,
> + unsigned long start,
> + unsigned long end,
> + unsigned long new_flags,
> + struct anon_vma_name *new_name)
> +{
> + return vma_modify(vmi, prev, vma, start, end, new_flags,
> + vma_policy(vma), vma->vm_userfaultfd_ctx, new_name);
> +}
> +
> +/* We are about to modify the VMA's memory policy. */
> +static inline struct vm_area_struct
> +*vma_modify_policy(struct vma_iterator *vmi,
> + struct vm_area_struct *prev,
> + struct vm_area_struct *vma,
> + unsigned long start, unsigned long end,
> + struct mempolicy *new_pol)
> +{
> + return vma_modify(vmi, prev, vma, start, end, vma->vm_flags,
> + new_pol, vma->vm_userfaultfd_ctx, anon_vma_name(vma));
> +}
> +
> +/* We are about to modify the VMA's flags and/or uffd context. */
> +static inline struct vm_area_struct
> +*vma_modify_flags_uffd(struct vma_iterator *vmi,
> + struct vm_area_struct *prev,
> + struct vm_area_struct *vma,
> + unsigned long start, unsigned long end,
> + unsigned long new_flags,
> + struct vm_userfaultfd_ctx new_ctx)
> +{
> + return vma_modify(vmi, prev, vma, start, end, new_flags,
> + vma_policy(vma), new_ctx, anon_vma_name(vma));
> +}
>
> static inline int check_data_rlimit(unsigned long rlim,
> unsigned long new,
> diff --git a/mm/madvise.c b/mm/madvise.c
> index a4a20de50494..801d3c1bb7b3 100644
> --- a/mm/madvise.c
> +++ b/mm/madvise.c
> @@ -141,7 +141,7 @@ static int madvise_update_vma(struct vm_area_struct *vma,
> {
> struct mm_struct *mm = vma->vm_mm;
> int error;
> - pgoff_t pgoff;
> + struct vm_area_struct *merged;
> VMA_ITERATOR(vmi, mm, start);
>
> if (new_flags == vma->vm_flags && anon_vma_name_eq(anon_vma_name(vma), anon_name)) {
> @@ -149,30 +149,16 @@ static int madvise_update_vma(struct vm_area_struct *vma,
> return 0;
> }
>
> - pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
> - *prev = vma_merge(&vmi, mm, *prev, start, end, new_flags,
> - vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
> - vma->vm_userfaultfd_ctx, anon_name);
> - if (*prev) {
> - vma = *prev;
> - goto success;
> - }
> -
> - *prev = vma;
> -
> - if (start != vma->vm_start) {
> - error = split_vma(&vmi, vma, start, 1);
> - if (error)
> - return error;
> - }
> + merged = vma_modify_flags_name(&vmi, *prev, vma, start, end, new_flags,
> + anon_name);
> + if (IS_ERR(merged))
> + return PTR_ERR(merged);
>
> - if (end != vma->vm_end) {
> - error = split_vma(&vmi, vma, end, 0);
> - if (error)
> - return error;
> - }
> + if (merged)
> + vma = *prev = merged;
> + else
> + *prev = vma;
>
> -success:
> /* vm_flags is protected by the mmap_lock held in write mode. */
> vma_start_write(vma);
> vm_flags_reset(vma, new_flags);
> diff --git a/mm/mempolicy.c b/mm/mempolicy.c
> index b01922e88548..6b2e99db6dd5 100644
> --- a/mm/mempolicy.c
> +++ b/mm/mempolicy.c
> @@ -786,8 +786,6 @@ static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma,
> {
> struct vm_area_struct *merged;
> unsigned long vmstart, vmend;
> - pgoff_t pgoff;
> - int err;
>
> vmend = min(end, vma->vm_end);
> if (start > vma->vm_start) {
> @@ -802,27 +800,15 @@ static int mbind_range(struct vma_iterator *vmi, struct vm_area_struct *vma,
> return 0;
> }
>
> - pgoff = vma->vm_pgoff + ((vmstart - vma->vm_start) >> PAGE_SHIFT);
> - merged = vma_merge(vmi, vma->vm_mm, *prev, vmstart, vmend, vma->vm_flags,
> - vma->anon_vma, vma->vm_file, pgoff, new_pol,
> - vma->vm_userfaultfd_ctx, anon_vma_name(vma));
> + merged = vma_modify_policy(vmi, *prev, vma, vmstart, vmend, new_pol);
> + if (IS_ERR(merged))
> + return PTR_ERR(merged);
> +
> if (merged) {
> *prev = merged;
> return vma_replace_policy(merged, new_pol);
> }
>
> - if (vma->vm_start != vmstart) {
> - err = split_vma(vmi, vma, vmstart, 1);
> - if (err)
> - return err;
> - }
> -
> - if (vma->vm_end != vmend) {
> - err = split_vma(vmi, vma, vmend, 0);
> - if (err)
> - return err;
> - }
> -
> *prev = vma;
> return vma_replace_policy(vma, new_pol);
> }
> diff --git a/mm/mlock.c b/mm/mlock.c
> index 42b6865f8f82..ae83a33c387e 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -476,10 +476,10 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
> unsigned long end, vm_flags_t newflags)
> {
> struct mm_struct *mm = vma->vm_mm;
> - pgoff_t pgoff;
> int nr_pages;
> int ret = 0;
> vm_flags_t oldflags = vma->vm_flags;
> + struct vm_area_struct *merged;
>
> if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
> is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
> @@ -487,28 +487,15 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
> /* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
> goto out;
>
> - pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
> - *prev = vma_merge(vmi, mm, *prev, start, end, newflags,
> - vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
> - vma->vm_userfaultfd_ctx, anon_vma_name(vma));
> - if (*prev) {
> - vma = *prev;
> - goto success;
> - }
> -
> - if (start != vma->vm_start) {
> - ret = split_vma(vmi, vma, start, 1);
> - if (ret)
> - goto out;
> + merged = vma_modify_flags(vmi, *prev, vma, start, end, newflags);
> + if (IS_ERR(merged)) {
> + ret = PTR_ERR(merged);
> + goto out;
> }
>
> - if (end != vma->vm_end) {
> - ret = split_vma(vmi, vma, end, 0);
> - if (ret)
> - goto out;
> - }
> + if (merged)
> + vma = *prev = merged;
>
> -success:
> /*
> * Keep track of amount of locked VM.
> */
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 673429ee8a9e..22d968affc07 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -2437,6 +2437,51 @@ int split_vma(struct vma_iterator *vmi, struct vm_area_struct *vma,
> return __split_vma(vmi, vma, addr, new_below);
> }
>
> +/*
> + * We are about to modify one or multiple of a VMA's flags, policy, userfaultfd
> + * context and anonymous VMA name within the range [start, end).
> + *
> + * As a result, we might be able to merge the newly modified VMA range with an
> + * adjacent VMA with identical properties.
> + *
> + * If no merge is possible and the range does not span the entirety of the VMA,
> + * we then need to split the VMA to accommodate the change.
> + */
> +struct vm_area_struct *vma_modify(struct vma_iterator *vmi,
> + struct vm_area_struct *prev,
> + struct vm_area_struct *vma,
> + unsigned long start, unsigned long end,
> + unsigned long vm_flags,
> + struct mempolicy *policy,
> + struct vm_userfaultfd_ctx uffd_ctx,
> + struct anon_vma_name *anon_name)
> +{
> + pgoff_t pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
> + struct vm_area_struct *merged;
> +
> + merged = vma_merge(vmi, vma->vm_mm, prev, start, end, vm_flags,
> + vma->anon_vma, vma->vm_file, pgoff, policy,
> + uffd_ctx, anon_name);
> + if (merged)
> + return merged;
> +
> + if (vma->vm_start < start) {
> + int err = split_vma(vmi, vma, start, 1);
> +
> + if (err)
> + return ERR_PTR(err);
> + }
> +
> + if (vma->vm_end > end) {
> + int err = split_vma(vmi, vma, end, 0);
> +
> + if (err)
> + return ERR_PTR(err);
> + }
> +
> + return NULL;
> +}
> +
> /*
> * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
> * @vmi: The vma iterator
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index b94fbb45d5c7..6f85d99682ab 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -581,7 +581,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
> long nrpages = (end - start) >> PAGE_SHIFT;
> unsigned int mm_cp_flags = 0;
> unsigned long charged = 0;
> - pgoff_t pgoff;
> + struct vm_area_struct *merged;
> int error;
>
> if (newflags == oldflags) {
> @@ -625,34 +625,19 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
> }
> }
>
> - /*
> - * First try to merge with previous and/or next vma.
> - */
> - pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
> - *pprev = vma_merge(vmi, mm, *pprev, start, end, newflags,
> - vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
> - vma->vm_userfaultfd_ctx, anon_vma_name(vma));
> - if (*pprev) {
> - vma = *pprev;
> - VM_WARN_ON((vma->vm_flags ^ newflags) & ~VM_SOFTDIRTY);
> - goto success;
> + merged = vma_modify_flags(vmi, *pprev, vma, start, end, newflags);
> + if (IS_ERR(merged)) {
> + error = PTR_ERR(merged);
> + goto fail;
> }
>
> - *pprev = vma;
> -
> - if (start != vma->vm_start) {
> - error = split_vma(vmi, vma, start, 1);
> - if (error)
> - goto fail;
> - }
> -
> - if (end != vma->vm_end) {
> - error = split_vma(vmi, vma, end, 0);
> - if (error)
> - goto fail;
> + if (merged) {
> + vma = *pprev = merged;
> + VM_WARN_ON((vma->vm_flags ^ newflags) & ~VM_SOFTDIRTY);
> + } else {
> + *pprev = vma;
> }
>
> -success:
> /*
> * vm_flags and vm_page_prot are protected by the mmap_lock
> * held in write mode.
> --
> 2.42.0
>

2023-10-11 06:34:58

by Lorenzo Stoakes

[permalink] [raw]
Subject: Re: [PATCH v2 2/5] mm: abstract the vma_merge()/split_vma() pattern for mprotect() et al.

On Tue, Oct 10, 2023 at 10:14:52PM -0400, Liam R. Howlett wrote:
> * Lorenzo Stoakes <[email protected]> [231009 16:53]:
> > mprotect() and other functions which change VMA parameters over a range
> > each employ a pattern of:-
> >
> > 1. Attempt to merge the range with adjacent VMAs.
> > 2. If this fails, and the range spans a subset of the VMA, split it
> > accordingly.
> >
> > This is open-coded and duplicated in each case. Also in each case most of
> > the parameters passed to vma_merge() remain the same.
> >
> > Create a new function, vma_modify(), which abstracts this operation,
> > accepting only those parameters which can be changed.
> >
> > To avoid the mess of invoking each function call with unnecessary
> > parameters, create inline wrapper functions for each of the modify
> > operations, parameterised only by what is required to perform the action.
> >
> > Note that the userfaultfd_release() case works even though it does not
> > split VMAs - since start is set to vma->vm_start and end is set to
> > vma->vm_end, the split logic does not trigger.
> >
> > In addition, since we calculate pgoff to be equal to vma->vm_pgoff + (start
> > - vma->vm_start) >> PAGE_SHIFT, and start - vma->vm_start will be 0 in this
> > instance, this invocation will remain unchanged.
> >
> > Signed-off-by: Lorenzo Stoakes <[email protected]>
> > ---
> > fs/userfaultfd.c | 69 +++++++++++++++-------------------------------
> > include/linux/mm.h | 60 ++++++++++++++++++++++++++++++++++++++++
> > mm/madvise.c | 32 ++++++---------------
> > mm/mempolicy.c | 22 +++------------
> > mm/mlock.c | 27 +++++-------------
> > mm/mmap.c | 45 ++++++++++++++++++++++++++++++
> > mm/mprotect.c | 35 +++++++----------------
> > 7 files changed, 157 insertions(+), 133 deletions(-)
> >
> > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> > index a7c6ef764e63..ba44a67a0a34 100644
> > --- a/fs/userfaultfd.c
> > +++ b/fs/userfaultfd.c
> > @@ -927,11 +927,10 @@ static int userfaultfd_release(struct inode *inode, struct file *file)
> > continue;
> > }
> > new_flags = vma->vm_flags & ~__VM_UFFD_FLAGS;
> > - prev = vma_merge(&vmi, mm, prev, vma->vm_start, vma->vm_end,
> > - new_flags, vma->anon_vma,
> > - vma->vm_file, vma->vm_pgoff,
> > - vma_policy(vma),
> > - NULL_VM_UFFD_CTX, anon_vma_name(vma));
> > + prev = vma_modify_flags_uffd(&vmi, prev, vma, vma->vm_start,
> > + vma->vm_end, new_flags,
> > + NULL_VM_UFFD_CTX);
> > +
> > if (prev) {
> > vma = prev;
> > } else {
> > @@ -1331,7 +1330,6 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
> > unsigned long start, end, vma_end;
> > struct vma_iterator vmi;
> > bool wp_async = userfaultfd_wp_async_ctx(ctx);
> > - pgoff_t pgoff;
> >
> > user_uffdio_register = (struct uffdio_register __user *) arg;
> >
> > @@ -1484,28 +1482,17 @@ static int userfaultfd_register(struct userfaultfd_ctx *ctx,
> > vma_end = min(end, vma->vm_end);
> >
> > new_flags = (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags;
> > - pgoff = vma->vm_pgoff + ((start - vma->vm_start) >> PAGE_SHIFT);
> > - prev = vma_merge(&vmi, mm, prev, start, vma_end, new_flags,
> > - vma->anon_vma, vma->vm_file, pgoff,
> > - vma_policy(vma),
> > - ((struct vm_userfaultfd_ctx){ ctx }),
> > - anon_vma_name(vma));
> > - if (prev) {
> > - /* vma_merge() invalidated the mas */
> > - vma = prev;
> > - goto next;
> > - }
> > - if (vma->vm_start < start) {
> > - ret = split_vma(&vmi, vma, start, 1);
> > - if (ret)
> > - break;
> > - }
> > - if (vma->vm_end > end) {
> > - ret = split_vma(&vmi, vma, end, 0);
> > - if (ret)
> > - break;
> > + prev = vma_modify_flags_uffd(&vmi, prev, vma, start, vma_end,
> > + new_flags,
> > + (struct vm_userfaultfd_ctx){ctx});
> > + if (IS_ERR(prev)) {
> > + ret = PTR_ERR(prev);
> > + break;
> > }
> > - next:
> > +
> > + if (prev)
> > + vma = prev; /* vma_merge() invalidated the mas */
>
> This is a stale comment. The maple state is in the vma iterator, which
> is passed through. I missed this on the vma iterator conversion.

Ack, this was coincidentally removed in v3, so it is already resolved.


2023-10-11 06:49:21

by Lorenzo Stoakes

[permalink] [raw]
Subject: Re: [PATCH v2 4/5] mm: abstract merge for new VMAs into vma_merge_new_vma()

On Tue, Oct 10, 2023 at 09:51:40PM -0400, Liam R. Howlett wrote:
> * Lorenzo Stoakes <[email protected]> [231009 16:53]:
> > Only in mmap_region() and copy_vma() do we attempt to merge VMAs which
> > occupy entirely new regions of virtual memory.
> >
> > We can abstract this logic and make the intent of this invocations of it
> > completely explicit, rather than invoking vma_merge() with an inscrutable
> > wall of parameters.
> >
> > This also paves the way for a simplification of the core vma_merge()
> > implementation, as we seek to make it entirely an implementation detail.
> >
> > Note that on mmap_region(), VMA fields are initialised to zero, so we can
> > simply reference these rather than explicitly specifying NULL.
>
> I don't think that's accurate... mmap_region() sets the start, end,
> offset, flags. It also passes this vma into a driver, so I'm not sure
> we can rely on them being anything after that? The whole reason
> vma_merge() is attempted in this case is because the driver may have
> changed vma->vm_flags on us. Your way may actually be better since the
> driver may set something we assume is NULL today.

Yeah, I think I wasn't clear here - I meant to say that we memset the VMA
to zero, so all fields that are not explicitly specified (i.e. everything
other than start, end, offset and flags) start out zeroed.

However, you make a very good point re: the driver, which I hadn't thought
of. It's also worth saying here, for complete clarity, that we specifically
only do this for a file-backed mapping.

I will add a note to this part of the v3 series asking Andrew to update the
comment.
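
To spell out the flow being discussed (a rough paraphrase of mmap_region()
from memory, not verbatim kernel code):

	vma = vm_area_alloc(mm);	/* vma_init() memsets the vma to zero */
	vma->vm_start = addr;
	vma->vm_end = end;
	vma->vm_pgoff = pgoff;
	vm_flags_init(vma, vm_flags);	/* start/end/offset/flags set explicitly */

	if (file) {
		/* The driver's ->mmap() may modify vma->vm_flags... */
		error = call_mmap(file, vma);
		...
		/* ...so only then do we retry the merge. */
		if (unlikely(vm_flags != vma->vm_flags && prev))
			merge = vma_merge_new_vma(&vmi, prev, vma, vma->vm_start,
						  vma->vm_end, vma->vm_pgoff);
	}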

>
> >
> > Reviewed-by: Vlastimil Babka <[email protected]>
> > Signed-off-by: Lorenzo Stoakes <[email protected]>
> > ---
> > mm/mmap.c | 27 ++++++++++++++++++++-------
> > 1 file changed, 20 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 17c0dcfb1527..33aafd23823b 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -2482,6 +2482,22 @@ struct vm_area_struct *vma_modify(struct vma_iterator *vmi,
> > return NULL;
> > }
> >
> > +/*
> > + * Attempt to merge a newly mapped VMA with those adjacent to it. The caller
> > + * must ensure that [start, end) does not overlap any existing VMA.
> > + */
> > +static struct vm_area_struct *vma_merge_new_vma(struct vma_iterator *vmi,
> > + struct vm_area_struct *prev,
> > + struct vm_area_struct *vma,
> > + unsigned long start,
> > + unsigned long end,
> > + pgoff_t pgoff)
>
> It's not a coding style, but if you used two tabs here, it may make this
> more condensed.

Checkpatch shouts at me about aligning to the paren; I could obviously just
put "static struct vm_area_struct *" on the line before to make this a bit
better, though. If we go to a v4 I will fix it, otherwise I think it's
probably OK to leave even if a bit squished for now?
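
i.e. something like this (just a sketch of the alternative layout, not
what's in the patch):

static struct vm_area_struct
*vma_merge_new_vma(struct vma_iterator *vmi, struct vm_area_struct *prev,
		   struct vm_area_struct *vma, unsigned long start,
		   unsigned long end, pgoff_t pgoff)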

>
> > +{
> > + return vma_merge(vmi, vma->vm_mm, prev, start, end, vma->vm_flags,
> > + vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
> > + vma->vm_userfaultfd_ctx, anon_vma_name(vma));
> > +}
> > +
> > /*
> > * do_vmi_align_munmap() - munmap the aligned region from @start to @end.
> > * @vmi: The vma iterator
> > @@ -2837,10 +2853,9 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
> > * vma again as we may succeed this time.
> > */
> > if (unlikely(vm_flags != vma->vm_flags && prev)) {
> > - merge = vma_merge(&vmi, mm, prev, vma->vm_start,
> > - vma->vm_end, vma->vm_flags, NULL,
> > - vma->vm_file, vma->vm_pgoff, NULL,
> > - NULL_VM_UFFD_CTX, NULL);
> > + merge = vma_merge_new_vma(&vmi, prev, vma,
> > + vma->vm_start, vma->vm_end,
> > + pgoff);
> └ vma->vm_pgoff
> > if (merge) {
> > /*
> > * ->mmap() can change vma->vm_file and fput
> > @@ -3382,9 +3397,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
> > if (new_vma && new_vma->vm_start < addr + len)
> > return NULL; /* should never get here */
> >
> > - new_vma = vma_merge(&vmi, mm, prev, addr, addr + len, vma->vm_flags,
> > - vma->anon_vma, vma->vm_file, pgoff, vma_policy(vma),
> > - vma->vm_userfaultfd_ctx, anon_vma_name(vma));
> > + new_vma = vma_merge_new_vma(&vmi, prev, vma, addr, addr + len, pgoff);
> > if (new_vma) {
> > /*
> > * Source vma may have been merged into new_vma
> > --
> > 2.42.0
> >