2023-10-09 06:42:45

by Suren Baghdasaryan

Subject: [PATCH v3 0/3] userfaultfd move option

This patch series introduces the UFFDIO_MOVE feature to userfaultfd, which
has long been implemented and maintained by Andrea in his local tree [1],
but was not upstreamed due to lack of use cases where this approach would
be better than allocating a new page and copying the contents. Previous
upstreaming attempts can be found at [6] and [7].

UFFDIO_COPY performs ~20% better than UFFDIO_MOVE when the application
needs pages to be allocated [2]. However, with UFFDIO_MOVE, if pages are
available (in userspace) for recycling, as is usually the case in heap
compaction algorithms, then we can avoid the page allocation and memcpy
(done by UFFDIO_COPY). Also, since the pages are recycled in the
userspace, we avoid the need to release (via madvise) the pages back to
the kernel [3].
We see over 40% reduction (on a Google pixel 6 device) in the compacting
thread’s completion time by using UFFDIO_MOVE vs. UFFDIO_COPY. This was
measured using a benchmark that emulates a heap compaction implementation
using userfaultfd (to allow concurrent accesses by application threads).
More details of the use case are explained in [3].

Furthermore, UFFDIO_MOVE enables moving swapped-out pages within the same
vma without faulting them in. Today this can only be done with mremap,
which however forces the vma to be split.
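
As a rough illustration of the intended usage, here is a minimal userspace
sketch (not part of the patches; it assumes a kernel with this series
applied, a "uffd" descriptor created with userfaultfd(2) with
UFFD_FEATURE_MOVE negotiated via UFFDIO_API, and error handling trimmed):

    #include <errno.h>
    #include <stddef.h>
    #include <sys/ioctl.h>
    #include <linux/userfaultfd.h>

    static int move_one_page(int uffd, void *dst, void *src, size_t page_size)
    {
            struct uffdio_move mv = {
                    .dst  = (unsigned long)dst,
                    .src  = (unsigned long)src,
                    .len  = page_size,
                    .mode = 0,
                    .move = 0,
            };

            if (ioctl(uffd, UFFDIO_MOVE, &mv) == 0)
                    return 0;       /* the whole page was moved */
            /* On failure the negated errno is reported in mv.move. */
            return mv.move < 0 ? (int)mv.move : -EAGAIN;
    }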

Main changes since Andrea's last version [1]:
- Trivial translations from page to folio, mmap_sem to mmap_lock
- Replace pmd_trans_unstable() with pte_offset_map_nolock() and handle its
possible failure
- Move pte mapping into remap_pages_pte to allow for retries when the
source page or anon_vma is contended. Since pte_offset_map_nolock() starts
an RCU read section, we can't block anymore after mapping a pte, so we
have to unmap the ptes, do the locking, and retry.
- Add and use anon_vma_trylock_write() to avoid blocking while in RCU
read section.
- Accommodate changes in mmu_notifier_range_init() API, switch to
mmu_notifier_invalidate_range_start_nonblock() to avoid blocking while in
RCU read section.
- Open-code the now-removed __swp_swapcount()
- Replace pmd_read_atomic() with pmdp_get_lockless()
- Add new selftest for UFFDIO_MOVE

Changes since v1 [4]:
- add mmget_not_zero in userfaultfd_remap, per Jann Horn
- removed extern from function definitions, per Matthew Wilcox
- converted to folios in remap_pages_huge_pmd, per Matthew Wilcox
- use PageAnonExclusive in remap_pages_huge_pmd, per David Hildenbrand
- handle pgtable transfers between MMs, per Jann Horn
- ignore concurrent A/D pte bit changes, per Jann Horn
- split functions into smaller units, per David Hildenbrand
- test for folio_test_large in remap_anon_pte, per Matthew Wilcox
- use pte_swp_exclusive for swapcount check, per David Hildenbrand
- eliminated use of mmu_notifier_invalidate_range_start_nonblock,
per Jann Horn
- simplified THP alignment checks, per Jann Horn
- refactored the loop inside remap_pages, per Jann Horn
- additional clarifying comments, per Jann Horn

Changes since v2 [5]:
- renamed UFFDIO_REMAP to UFFDIO_MOVE, per David Hildenbrand
- rebase over mm-unstable to use folio_move_anon_rmap(),
per David Hildenbrand
- added text for manpage explaining DONTFORK and KSM requirements for this
feature, per David Hildenbrand
- check for anon_vma changes in the fast path of folio_lock_anon_vma_read,
per Peter Xu
- updated the title and description of the first patch,
per David Hildenbrand
- updating comments in folio_lock_anon_vma_read() explaining the need for
anon_vma checks, per David Hildenbrand
- changed all mapcount checks to PageAnonExclusive, per Jann Horn and
David Hildenbrand
- changed counters in remap_swap_pte() from MM_ANONPAGES to MM_SWAPENTS,
per Jann Horn
- added a check for PTE change after folio is locked in remap_pages_pte(),
per Jann Horn
- added handling of PMD migration entries and bailout when pmd_devmap(),
per Jann Horn
- added checks to ensure both src and dst VMAs are writable, per Peter Xu
- added UFFD_FEATURE_MOVE, per Peter Xu
- removed obsolete comments, per Peter Xu
- renamed remap_anon_pte to remap_present_pte, per Peter Xu
- added a comment for folio_get_anon_vma() explaining the need for
anon_vma checks, per Peter Xu
- changed error handling in remap_pages() to make it more clear,
per Peter Xu
- changed EFAULT to EAGAIN to retry when a hugepage appears or disappears
from under us, per Peter Xu
- added links to previous upstreaming attempts, per David Hildenbrand

[1] https://gitlab.com/aarcange/aa/-/commit/2aec7aea56b10438a3881a20a411aa4b1fc19e92
[2] https://lore.kernel.org/all/[email protected]/
[3] https://lore.kernel.org/linux-mm/CA+EESO4uO84SSnBhArH4HvLNhaUQ5nZKNKXqxRCyjniNVjp0Aw@mail.gmail.com/
[4] https://lore.kernel.org/all/[email protected]/
[5] https://lore.kernel.org/all/[email protected]/
[6] https://lore.kernel.org/all/[email protected]/
[7] https://lore.kernel.org/all/[email protected]/

The patchset applies over mm-unstable.

Andrea Arcangeli (2):
mm/rmap: support move to different root anon_vma in
folio_move_anon_rmap()
userfaultfd: UFFDIO_MOVE uABI

Suren Baghdasaryan (1):
selftests/mm: add UFFDIO_MOVE ioctl test

Documentation/admin-guide/mm/userfaultfd.rst | 3 +
fs/userfaultfd.c | 63 ++
include/linux/rmap.h | 5 +
include/linux/userfaultfd_k.h | 12 +
include/uapi/linux/userfaultfd.h | 29 +-
mm/huge_memory.c | 138 +++++
mm/khugepaged.c | 3 +
mm/rmap.c | 30 +
mm/userfaultfd.c | 602 +++++++++++++++++++
tools/testing/selftests/mm/uffd-common.c | 41 +-
tools/testing/selftests/mm/uffd-common.h | 1 +
tools/testing/selftests/mm/uffd-unit-tests.c | 62 ++
12 files changed, 986 insertions(+), 3 deletions(-)

--
2.42.0.609.gbb76f46606-goog


2023-10-09 06:43:03

by Suren Baghdasaryan

Subject: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

From: Andrea Arcangeli <[email protected]>

Implement the uABI of the UFFDIO_MOVE ioctl.
UFFDIO_COPY performs ~20% better than UFFDIO_MOVE when the application
needs pages to be allocated [1]. However, with UFFDIO_MOVE, if pages are
available (in userspace) for recycling, as is usually the case in heap
compaction algorithms, then we can avoid the page allocation and memcpy
(done by UFFDIO_COPY). Also, since the pages are recycled in the
userspace, we avoid the need to release (via madvise) the pages back to
the kernel [2].
We see over 40% reduction (on a Google pixel 6 device) in the compacting
thread’s completion time by using UFFDIO_MOVE vs. UFFDIO_COPY. This was
measured using a benchmark that emulates a heap compaction implementation
using userfaultfd (to allow concurrent accesses by application threads).
More details of the use case are explained in [2].
Furthermore, UFFDIO_MOVE enables moving swapped-out pages within the same
vma without faulting them in. Today this can only be done with mremap,
which however forces the vma to be split.

[1] https://lore.kernel.org/all/[email protected]/
[2] https://lore.kernel.org/linux-mm/CA+EESO4uO84SSnBhArH4HvLNhaUQ5nZKNKXqxRCyjniNVjp0Aw@mail.gmail.com/

Update for the ioctl_userfaultfd(2) manpage:

UFFDIO_MOVE
(Since Linux xxx) Move a contiguous memory chunk into the
userfault registered range and optionally wake up the blocked
thread. The source and destination addresses and the number of
bytes to move are specified by the src, dst, and len fields of
the uffdio_move structure pointed to by argp:

struct uffdio_move {
__u64 dst; /* Destination of move */
__u64 src; /* Source of move */
__u64 len; /* Number of bytes to move */
__u64 mode; /* Flags controlling behavior of move */
__s64 move; /* Number of bytes moved, or negated error */
};

The following values may be bitwise ORed in mode to change the
behavior of the UFFDIO_MOVE operation:

UFFDIO_MOVE_MODE_DONTWAKE
Do not wake up the thread that waits for page-fault
resolution

UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES
Allow holes in the source virtual range that is being moved.
When not specified, the holes will result in an ENOENT error.
When specified, the holes will be accounted as successfully
moved memory. This is mostly useful to move hugepage aligned
virtual regions without knowing if there are transparent
hugepages in the regions or not, while avoiding the risk of
having to split the hugepage during the operation.

The move field is used by the kernel to return the number of
bytes that were actually moved, or an error (a negated errno-
style value). If the value returned in move doesn't match the
value that was specified in len, the operation fails with the
error EAGAIN. The move field is output-only; it is not read by
the UFFDIO_MOVE operation.

The operation may fail for various reasons. Usually, remapping of
pages that are not exclusive to the given process fails; once KSM
has deduplicated pages or fork() has COW-shared pages with child
processes, they are no longer exclusive. Further, the kernel might
only perform lightweight checks for detecting whether the pages are
exclusive, and return -EBUSY in case that check fails. To make the
operation more likely to succeed, KSM should be disabled, fork()
should be avoided, or MADV_DONTFORK should be configured for the
source VMA before fork().
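
For example, a process that may fork() can keep the source range
exclusive with madvise(2) before forking (illustrative snippet; "src"
and "len" are placeholders):

    #include <sys/mman.h>

    /* Keep src pages out of the child so they stay exclusive. */
    if (madvise(src, len, MADV_DONTFORK))
            perror("madvise(MADV_DONTFORK)");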

This ioctl(2) operation returns 0 on success. In this case, the
entire area was moved. On error, -1 is returned and errno is
set to indicate the error. Possible errors include:

EAGAIN The number of bytes moved (i.e., the value returned in
the move field) does not equal the value that was
specified in the len field.

EINVAL Either dst or len was not a multiple of the system page
size, or the range specified by src and len or dst and len
was invalid.

EINVAL An invalid bit was specified in the mode field.

ENOENT
The source virtual memory range has unmapped holes and
UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES is not set.

EEXIST
The destination virtual memory range is fully or partially
mapped.

EBUSY
The pages in the source virtual memory range are not
exclusive to the process. The kernel might only perform
lightweight checks for detecting whether the pages are
exclusive. To make the operation more likely to succeed,
KSM should be disabled, fork() should be avoided or
MADV_DONTFORK should be configured for the source virtual
memory area before fork().

ENOMEM Allocating memory needed for the operation failed.

ESRCH
The faulting process has exited at the time of a
UFFDIO_MOVE operation.
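
As a usage illustration (not part of the manpage text), a caller can
resume a short move by advancing the range and retrying; this sketch
assumes uffd, dst, src and len are set up as in the description above:

    struct uffdio_move mv = {
            .dst = dst, .src = src, .len = len,
            .mode = UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES,
    };

    while (ioctl(uffd, UFFDIO_MOVE, &mv) && errno == EAGAIN && mv.move > 0) {
            /* Short move: mv.move bytes were moved before the interruption. */
            mv.dst += mv.move;
            mv.src += mv.move;
            mv.len -= mv.move;
    }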

Signed-off-by: Andrea Arcangeli <[email protected]>
Signed-off-by: Suren Baghdasaryan <[email protected]>
---
Documentation/admin-guide/mm/userfaultfd.rst | 3 +
fs/userfaultfd.c | 63 ++
include/linux/rmap.h | 5 +
include/linux/userfaultfd_k.h | 12 +
include/uapi/linux/userfaultfd.h | 29 +-
mm/huge_memory.c | 138 +++++
mm/khugepaged.c | 3 +
mm/rmap.c | 6 +
mm/userfaultfd.c | 602 +++++++++++++++++++
9 files changed, 860 insertions(+), 1 deletion(-)

diff --git a/Documentation/admin-guide/mm/userfaultfd.rst b/Documentation/admin-guide/mm/userfaultfd.rst
index 203e26da5f92..e5cc8848dcb3 100644
--- a/Documentation/admin-guide/mm/userfaultfd.rst
+++ b/Documentation/admin-guide/mm/userfaultfd.rst
@@ -113,6 +113,9 @@ events, except page fault notifications, may be generated:
areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating
support for shmem virtual memory areas.

+- ``UFFD_FEATURE_MOVE`` indicates that the kernel supports moving
+ existing page contents from userspace.
+
The userland application should set the feature flags it intends to use
when invoking the ``UFFDIO_API`` ioctl, to request that those features be
enabled if supported.
diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index a7c6ef764e63..ac52e0f99a69 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -2039,6 +2039,66 @@ static inline unsigned int uffd_ctx_features(__u64 user_features)
return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED;
}

+static int userfaultfd_remap(struct userfaultfd_ctx *ctx,
+ unsigned long arg)
+{
+ __s64 ret;
+ struct uffdio_move uffdio_move;
+ struct uffdio_move __user *user_uffdio_move;
+ struct userfaultfd_wake_range range;
+
+ user_uffdio_move = (struct uffdio_move __user *) arg;
+
+ ret = -EAGAIN;
+ if (atomic_read(&ctx->mmap_changing))
+ goto out;
+
+ ret = -EFAULT;
+ if (copy_from_user(&uffdio_move, user_uffdio_move,
+ /* don't copy "move" last field */
+ sizeof(uffdio_move)-sizeof(__s64)))
+ goto out;
+
+ ret = validate_range(ctx->mm, uffdio_move.dst, uffdio_move.len);
+ if (ret)
+ goto out;
+
+ ret = validate_range(current->mm, uffdio_move.src, uffdio_move.len);
+ if (ret)
+ goto out;
+
+ ret = -EINVAL;
+ if (uffdio_move.mode & ~(UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES|
+ UFFDIO_MOVE_MODE_DONTWAKE))
+ goto out;
+
+ if (mmget_not_zero(ctx->mm)) {
+ ret = remap_pages(ctx->mm, current->mm,
+ uffdio_move.dst, uffdio_move.src,
+ uffdio_move.len, uffdio_move.mode);
+ mmput(ctx->mm);
+ } else {
+ return -ESRCH;
+ }
+
+ if (unlikely(put_user(ret, &user_uffdio_move->move)))
+ return -EFAULT;
+ if (ret < 0)
+ goto out;
+
+ /* len == 0 would wake all */
+ BUG_ON(!ret);
+ range.len = ret;
+ if (!(uffdio_move.mode & UFFDIO_MOVE_MODE_DONTWAKE)) {
+ range.start = uffdio_move.dst;
+ wake_userfault(ctx, &range);
+ }
+ ret = range.len == uffdio_move.len ? 0 : -EAGAIN;
+
+out:
+ return ret;
+}
+
/*
* userland asks for a certain API version and we return which bits
* and ioctl commands are implemented in this kernel for such API
@@ -2131,6 +2191,9 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd,
case UFFDIO_ZEROPAGE:
ret = userfaultfd_zeropage(ctx, arg);
break;
+ case UFFDIO_MOVE:
+ ret = userfaultfd_remap(ctx, arg);
+ break;
case UFFDIO_WRITEPROTECT:
ret = userfaultfd_writeprotect(ctx, arg);
break;
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b26fe858fd44..8034eda972e5 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -121,6 +121,11 @@ static inline void anon_vma_lock_write(struct anon_vma *anon_vma)
down_write(&anon_vma->root->rwsem);
}

+static inline int anon_vma_trylock_write(struct anon_vma *anon_vma)
+{
+ return down_write_trylock(&anon_vma->root->rwsem);
+}
+
static inline void anon_vma_unlock_write(struct anon_vma *anon_vma)
{
up_write(&anon_vma->root->rwsem);
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index f2dc19f40d05..ce8d20b57e8c 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -93,6 +93,18 @@ extern int mwriteprotect_range(struct mm_struct *dst_mm,
extern long uffd_wp_range(struct vm_area_struct *vma,
unsigned long start, unsigned long len, bool enable_wp);

+/* remap_pages */
+void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
+void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
+ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ unsigned long dst_start, unsigned long src_start,
+ unsigned long len, __u64 flags);
+int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
+ struct vm_area_struct *dst_vma,
+ struct vm_area_struct *src_vma,
+ unsigned long dst_addr, unsigned long src_addr);
+
/* mm helpers */
static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
struct vm_userfaultfd_ctx vm_ctx)
diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h
index 0dbc81015018..2841e4ea8f2c 100644
--- a/include/uapi/linux/userfaultfd.h
+++ b/include/uapi/linux/userfaultfd.h
@@ -41,7 +41,8 @@
UFFD_FEATURE_WP_HUGETLBFS_SHMEM | \
UFFD_FEATURE_WP_UNPOPULATED | \
UFFD_FEATURE_POISON | \
- UFFD_FEATURE_WP_ASYNC)
+ UFFD_FEATURE_WP_ASYNC | \
+ UFFD_FEATURE_MOVE)
#define UFFD_API_IOCTLS \
((__u64)1 << _UFFDIO_REGISTER | \
(__u64)1 << _UFFDIO_UNREGISTER | \
@@ -50,6 +51,7 @@
((__u64)1 << _UFFDIO_WAKE | \
(__u64)1 << _UFFDIO_COPY | \
(__u64)1 << _UFFDIO_ZEROPAGE | \
+ (__u64)1 << _UFFDIO_MOVE | \
(__u64)1 << _UFFDIO_WRITEPROTECT | \
(__u64)1 << _UFFDIO_CONTINUE | \
(__u64)1 << _UFFDIO_POISON)
@@ -73,6 +75,7 @@
#define _UFFDIO_WAKE (0x02)
#define _UFFDIO_COPY (0x03)
#define _UFFDIO_ZEROPAGE (0x04)
+#define _UFFDIO_MOVE (0x05)
#define _UFFDIO_WRITEPROTECT (0x06)
#define _UFFDIO_CONTINUE (0x07)
#define _UFFDIO_POISON (0x08)
@@ -92,6 +95,8 @@
struct uffdio_copy)
#define UFFDIO_ZEROPAGE _IOWR(UFFDIO, _UFFDIO_ZEROPAGE, \
struct uffdio_zeropage)
+#define UFFDIO_MOVE _IOWR(UFFDIO, _UFFDIO_MOVE, \
+ struct uffdio_move)
#define UFFDIO_WRITEPROTECT _IOWR(UFFDIO, _UFFDIO_WRITEPROTECT, \
struct uffdio_writeprotect)
#define UFFDIO_CONTINUE _IOWR(UFFDIO, _UFFDIO_CONTINUE, \
@@ -222,6 +227,9 @@ struct uffdio_api {
* asynchronous mode is supported in which the write fault is
* automatically resolved and write-protection is un-set.
* It implies UFFD_FEATURE_WP_UNPOPULATED.
+ *
+ * UFFD_FEATURE_MOVE indicates that the kernel supports moving
+ * existing page contents from userspace.
*/
#define UFFD_FEATURE_PAGEFAULT_FLAG_WP (1<<0)
#define UFFD_FEATURE_EVENT_FORK (1<<1)
@@ -239,6 +247,7 @@ struct uffdio_api {
#define UFFD_FEATURE_WP_UNPOPULATED (1<<13)
#define UFFD_FEATURE_POISON (1<<14)
#define UFFD_FEATURE_WP_ASYNC (1<<15)
+#define UFFD_FEATURE_MOVE (1<<16)
__u64 features;

__u64 ioctls;
@@ -347,6 +356,24 @@ struct uffdio_poison {
__s64 updated;
};

+struct uffdio_move {
+ __u64 dst;
+ __u64 src;
+ __u64 len;
+ /*
+ * Especially if used to atomically remove memory from the
+ * address space, the wake on the dst range is not needed.
+ */
+#define UFFDIO_MOVE_MODE_DONTWAKE ((__u64)1<<0)
+#define UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES ((__u64)1<<1)
+ __u64 mode;
+ /*
+ * "move" is written by the ioctl and must be at the end: the
+ * copy_from_user will not read the last 8 bytes.
+ */
+ __s64 move;
+};
+
/*
* Flags for the userfaultfd(2) system call itself.
*/
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9656be95a542..6fac5c3d66e6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2086,6 +2086,144 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
return ret;
}

+#ifdef CONFIG_USERFAULTFD
+/*
+ * The PT lock for src_pmd and the mmap_lock for reading are held by
+ * the caller, but this function must release the PT lock before
+ * returning. Just move the page from src_pmd to dst_pmd if possible.
+ * Return zero if succeeded in moving the page, -EAGAIN if it needs to be
+ * repeated by the caller, or other errors in case of failure.
+ */
+int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
+ struct vm_area_struct *dst_vma,
+ struct vm_area_struct *src_vma,
+ unsigned long dst_addr, unsigned long src_addr)
+{
+ pmd_t _dst_pmd, src_pmdval;
+ struct page *src_page;
+ struct folio *src_folio;
+ struct anon_vma *src_anon_vma;
+ spinlock_t *src_ptl, *dst_ptl;
+ pgtable_t src_pgtable, dst_pgtable;
+ struct mmu_notifier_range range;
+ int err = 0;
+
+ src_pmdval = *src_pmd;
+ src_ptl = pmd_lockptr(src_mm, src_pmd);
+
+ lockdep_assert_held(src_ptl);
+ mmap_assert_locked(src_mm);
+ mmap_assert_locked(dst_mm);
+
+ BUG_ON(!pmd_none(dst_pmdval));
+ BUG_ON(src_addr & ~HPAGE_PMD_MASK);
+ BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
+
+ if (!pmd_trans_huge(src_pmdval)) {
+ spin_unlock(src_ptl);
+ if (is_pmd_migration_entry(src_pmdval)) {
+ pmd_migration_entry_wait(src_mm, &src_pmdval);
+ return -EAGAIN;
+ }
+ return -ENOENT;
+ }
+
+ src_page = pmd_page(src_pmdval);
+ if (unlikely(!PageAnonExclusive(src_page))) {
+ spin_unlock(src_ptl);
+ return -EBUSY;
+ }
+
+ src_folio = page_folio(src_page);
+ folio_get(src_folio);
+ spin_unlock(src_ptl);
+
+ /* preallocate dst_pgtable if needed */
+ if (dst_mm != src_mm) {
+ dst_pgtable = pte_alloc_one(dst_mm);
+ if (unlikely(!dst_pgtable)) {
+ err = -ENOMEM;
+ goto put_folio;
+ }
+ } else {
+ dst_pgtable = NULL;
+ }
+
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
+ src_addr + HPAGE_PMD_SIZE);
+ mmu_notifier_invalidate_range_start(&range);
+
+ folio_lock(src_folio);
+
+ /*
+ * split_huge_page walks the anon_vma chain without the page
+ * lock. Serialize against it with the anon_vma lock, the page
+ * lock is not enough.
+ */
+ src_anon_vma = folio_get_anon_vma(src_folio);
+ if (!src_anon_vma) {
+ err = -EAGAIN;
+ goto unlock_folio;
+ }
+ anon_vma_lock_write(src_anon_vma);
+
+ dst_ptl = pmd_lockptr(dst_mm, dst_pmd);
+ double_pt_lock(src_ptl, dst_ptl);
+ if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
+ !pmd_same(*dst_pmd, dst_pmdval))) {
+ double_pt_unlock(src_ptl, dst_ptl);
+ err = -EAGAIN;
+ goto put_anon_vma;
+ }
+ if (!PageAnonExclusive(&src_folio->page)) {
+ double_pt_unlock(src_ptl, dst_ptl);
+ err = -EBUSY;
+ goto put_anon_vma;
+ }
+
+ BUG_ON(!folio_test_head(src_folio));
+ BUG_ON(!folio_test_anon(src_folio));
+
+ folio_move_anon_rmap(src_folio, dst_vma);
+ WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
+
+ src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
+ _dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
+ _dst_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
+ set_pmd_at(dst_mm, dst_addr, dst_pmd, _dst_pmd);
+
+ src_pgtable = pgtable_trans_huge_withdraw(src_mm, src_pmd);
+ if (dst_pgtable) {
+ pgtable_trans_huge_deposit(dst_mm, dst_pmd, dst_pgtable);
+ pte_free(src_mm, src_pgtable);
+ dst_pgtable = NULL;
+
+ mm_inc_nr_ptes(dst_mm);
+ mm_dec_nr_ptes(src_mm);
+ add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ add_mm_counter(src_mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+ } else {
+ pgtable_trans_huge_deposit(dst_mm, dst_pmd, src_pgtable);
+ }
+ double_pt_unlock(src_ptl, dst_ptl);
+
+put_anon_vma:
+ anon_vma_unlock_write(src_anon_vma);
+ put_anon_vma(src_anon_vma);
+unlock_folio:
+ /* unblock rmap walks */
+ folio_unlock(src_folio);
+ mmu_notifier_invalidate_range_end(&range);
+ if (dst_pgtable)
+ pte_free(dst_mm, dst_pgtable);
+put_folio:
+ folio_put(src_folio);
+
+ return err;
+}
+#endif /* CONFIG_USERFAULTFD */
+
/*
* Returns page table lock pointer if a given pmd maps a thp, NULL otherwise.
*
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2b5c0321d96b..0c1ee7172852 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1136,6 +1136,9 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
* Prevent all access to pagetables with the exception of
* gup_fast later handled by the ptep_clear_flush and the VM
* handled by the anon_vma lock + PG_lock.
+ *
+ * UFFDIO_MOVE is prevented to race as well thanks to the
+ * mmap_lock.
*/
mmap_write_lock(mm);
result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
diff --git a/mm/rmap.c b/mm/rmap.c
index f9ddc50269d2..a5919cac9a08 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -490,6 +490,12 @@ void __init anon_vma_init(void)
* page_remove_rmap() that the anon_vma pointer from page->mapping is valid
* if there is a mapcount, we can dereference the anon_vma after observing
* those.
+ *
+ * NOTE: the caller should normally hold folio lock when calling this. If
+ * not, the caller needs to double check the anon_vma didn't change after
+ * taking the anon_vma lock for either read or write (UFFDIO_MOVE can modify it
+ * concurrently without folio lock protection). See folio_lock_anon_vma_read()
+ * which has already covered that, and comment above remap_pages().
*/
struct anon_vma *folio_get_anon_vma(struct folio *folio)
{
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 96d9eae5c7cc..45ce1a8b8ab9 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -842,3 +842,605 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
mmap_read_unlock(dst_mm);
return err;
}
+
+
+void double_pt_lock(spinlock_t *ptl1,
+ spinlock_t *ptl2)
+ __acquires(ptl1)
+ __acquires(ptl2)
+{
+ spinlock_t *ptl_tmp;
+
+ if (ptl1 > ptl2) {
+ /* exchange ptl1 and ptl2 */
+ ptl_tmp = ptl1;
+ ptl1 = ptl2;
+ ptl2 = ptl_tmp;
+ }
+ /* lock in virtual address order to avoid lock inversion */
+ spin_lock(ptl1);
+ if (ptl1 != ptl2)
+ spin_lock_nested(ptl2, SINGLE_DEPTH_NESTING);
+ else
+ __acquire(ptl2);
+}
+
+void double_pt_unlock(spinlock_t *ptl1,
+ spinlock_t *ptl2)
+ __releases(ptl1)
+ __releases(ptl2)
+{
+ spin_unlock(ptl1);
+ if (ptl1 != ptl2)
+ spin_unlock(ptl2);
+ else
+ __release(ptl2);
+}
+
+
+static int remap_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ struct vm_area_struct *dst_vma,
+ struct vm_area_struct *src_vma,
+ unsigned long dst_addr, unsigned long src_addr,
+ pte_t *dst_pte, pte_t *src_pte,
+ pte_t orig_dst_pte, pte_t orig_src_pte,
+ spinlock_t *dst_ptl, spinlock_t *src_ptl,
+ struct folio *src_folio)
+{
+ double_pt_lock(dst_ptl, src_ptl);
+
+ if (!pte_same(*src_pte, orig_src_pte) ||
+ !pte_same(*dst_pte, orig_dst_pte)) {
+ double_pt_unlock(dst_ptl, src_ptl);
+ return -EAGAIN;
+ }
+ if (folio_test_large(src_folio) ||
+ !PageAnonExclusive(&src_folio->page)) {
+ double_pt_unlock(dst_ptl, src_ptl);
+ return -EBUSY;
+ }
+
+ BUG_ON(!folio_test_anon(src_folio));
+
+ folio_move_anon_rmap(src_folio, dst_vma);
+ WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
+
+ orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
+ orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
+ orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma);
+
+ set_pte_at(dst_mm, dst_addr, dst_pte, orig_dst_pte);
+
+ if (dst_mm != src_mm) {
+ inc_mm_counter(dst_mm, MM_ANONPAGES);
+ dec_mm_counter(src_mm, MM_ANONPAGES);
+ }
+
+ double_pt_unlock(dst_ptl, src_ptl);
+
+ return 0;
+}
+
+static int remap_swap_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ unsigned long dst_addr, unsigned long src_addr,
+ pte_t *dst_pte, pte_t *src_pte,
+ pte_t orig_dst_pte, pte_t orig_src_pte,
+ spinlock_t *dst_ptl, spinlock_t *src_ptl)
+{
+ if (!pte_swp_exclusive(orig_src_pte))
+ return -EBUSY;
+
+ double_pt_lock(dst_ptl, src_ptl);
+
+ if (!pte_same(*src_pte, orig_src_pte) ||
+ !pte_same(*dst_pte, orig_dst_pte)) {
+ double_pt_unlock(dst_ptl, src_ptl);
+ return -EAGAIN;
+ }
+
+ orig_src_pte = ptep_get_and_clear(src_mm, src_addr, src_pte);
+ set_pte_at(dst_mm, dst_addr, dst_pte, orig_src_pte);
+
+ if (dst_mm != src_mm) {
+ inc_mm_counter(dst_mm, MM_SWAPENTS);
+ dec_mm_counter(src_mm, MM_SWAPENTS);
+ }
+
+ double_pt_unlock(dst_ptl, src_ptl);
+
+ return 0;
+}
+
+/*
+ * The mmap_lock for reading is held by the caller. Just move the page
+ * from src_pmd to dst_pmd if possible, and return zero if it succeeded
+ * in moving the page, or a negative error code otherwise.
+ */
+static int remap_pages_pte(struct mm_struct *dst_mm,
+ struct mm_struct *src_mm,
+ pmd_t *dst_pmd,
+ pmd_t *src_pmd,
+ struct vm_area_struct *dst_vma,
+ struct vm_area_struct *src_vma,
+ unsigned long dst_addr,
+ unsigned long src_addr,
+ __u64 mode)
+{
+ swp_entry_t entry;
+ pte_t orig_src_pte, orig_dst_pte;
+ pte_t src_folio_pte;
+ spinlock_t *src_ptl, *dst_ptl;
+ pte_t *src_pte = NULL;
+ pte_t *dst_pte = NULL;
+
+ struct folio *src_folio = NULL;
+ struct anon_vma *src_anon_vma = NULL;
+ struct mmu_notifier_range range;
+ int err = 0;
+
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
+ src_addr, src_addr + PAGE_SIZE);
+ mmu_notifier_invalidate_range_start(&range);
+retry:
+ dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
+
+ /* Retry if a huge pmd materialized from under us */
+ if (unlikely(!dst_pte)) {
+ err = -EAGAIN;
+ goto out;
+ }
+
+ src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
+
+ /*
+ * We held the mmap_lock for reading so MADV_DONTNEED
+ * can zap transparent huge pages under us, or the
+ * transparent huge page fault can establish new
+ * transparent huge pages under us.
+ */
+ if (unlikely(!src_pte)) {
+ err = -EAGAIN;
+ goto out;
+ }
+
+ BUG_ON(pmd_none(*dst_pmd));
+ BUG_ON(pmd_none(*src_pmd));
+ BUG_ON(pmd_trans_huge(*dst_pmd));
+ BUG_ON(pmd_trans_huge(*src_pmd));
+
+ spin_lock(dst_ptl);
+ orig_dst_pte = *dst_pte;
+ spin_unlock(dst_ptl);
+ if (!pte_none(orig_dst_pte)) {
+ err = -EEXIST;
+ goto out;
+ }
+
+ spin_lock(src_ptl);
+ orig_src_pte = *src_pte;
+ spin_unlock(src_ptl);
+ if (pte_none(orig_src_pte)) {
+ if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES))
+ err = -ENOENT;
+ else /* nothing to do to remap a hole */
+ err = 0;
+ goto out;
+ }
+
+ /* If the PTE changed after we locked the folio then start over */
+ if (src_folio && unlikely(!pte_same(src_folio_pte, orig_src_pte))) {
+ err = -EAGAIN;
+ goto out;
+ }
+
+ if (pte_present(orig_src_pte)) {
+ /*
+ * Pin and lock both source folio and anon_vma. Since we are in
+ * RCU read section, we can't block, so on contention have to
+ * unmap the ptes, obtain the lock and retry.
+ */
+ if (!src_folio) {
+ struct folio *folio;
+
+ /*
+ * Pin the page while holding the lock to be sure the
+ * page isn't freed under us
+ */
+ spin_lock(src_ptl);
+ if (!pte_same(orig_src_pte, *src_pte)) {
+ spin_unlock(src_ptl);
+ err = -EAGAIN;
+ goto out;
+ }
+
+ folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
+ if (!folio || folio_test_large(folio) ||
+ !PageAnonExclusive(&folio->page)) {
+ spin_unlock(src_ptl);
+ err = -EBUSY;
+ goto out;
+ }
+
+ folio_get(folio);
+ src_folio = folio;
+ src_folio_pte = orig_src_pte;
+ spin_unlock(src_ptl);
+
+ if (!folio_trylock(src_folio)) {
+ pte_unmap(src_pte);
+ pte_unmap(dst_pte);
+ src_pte = dst_pte = NULL;
+ /* now we can block and wait */
+ folio_lock(src_folio);
+ goto retry;
+ }
+ }
+
+ if (!src_anon_vma) {
+ /*
+ * folio_referenced walks the anon_vma chain
+ * without the folio lock. Serialize against it with
+ * the anon_vma lock, the folio lock is not enough.
+ */
+ src_anon_vma = folio_get_anon_vma(src_folio);
+ if (!src_anon_vma) {
+ /* page was unmapped from under us */
+ err = -EAGAIN;
+ goto out;
+ }
+ if (!anon_vma_trylock_write(src_anon_vma)) {
+ pte_unmap(src_pte);
+ pte_unmap(dst_pte);
+ src_pte = dst_pte = NULL;
+ /* now we can block and wait */
+ anon_vma_lock_write(src_anon_vma);
+ goto retry;
+ }
+ }
+
+ err = remap_present_pte(dst_mm, src_mm, dst_vma, src_vma,
+ dst_addr, src_addr, dst_pte, src_pte,
+ orig_dst_pte, orig_src_pte,
+ dst_ptl, src_ptl, src_folio);
+ } else {
+ entry = pte_to_swp_entry(orig_src_pte);
+ if (non_swap_entry(entry)) {
+ if (is_migration_entry(entry)) {
+ pte_unmap(src_pte);
+ pte_unmap(dst_pte);
+ src_pte = dst_pte = NULL;
+ migration_entry_wait(src_mm, src_pmd,
+ src_addr);
+ err = -EAGAIN;
+ } else
+ err = -EFAULT;
+ goto out;
+ }
+
+ err = remap_swap_pte(dst_mm, src_mm, dst_addr, src_addr,
+ dst_pte, src_pte,
+ orig_dst_pte, orig_src_pte,
+ dst_ptl, src_ptl);
+ }
+
+out:
+ if (src_anon_vma) {
+ anon_vma_unlock_write(src_anon_vma);
+ put_anon_vma(src_anon_vma);
+ }
+ if (src_folio) {
+ folio_unlock(src_folio);
+ folio_put(src_folio);
+ }
+ if (dst_pte)
+ pte_unmap(dst_pte);
+ if (src_pte)
+ pte_unmap(src_pte);
+ mmu_notifier_invalidate_range_end(&range);
+
+ return err;
+}
+
+static int validate_remap_areas(struct vm_area_struct *src_vma,
+ struct vm_area_struct *dst_vma)
+{
+ /* Only allow remapping if both have the same access and protection */
+ if ((src_vma->vm_flags & VM_ACCESS_FLAGS) != (dst_vma->vm_flags & VM_ACCESS_FLAGS) ||
+ pgprot_val(src_vma->vm_page_prot) != pgprot_val(dst_vma->vm_page_prot))
+ return -EINVAL;
+
+ /* Only allow remapping if both are mlocked or both aren't */
+ if ((src_vma->vm_flags & VM_LOCKED) != (dst_vma->vm_flags & VM_LOCKED))
+ return -EINVAL;
+
+ if (!(src_vma->vm_flags & VM_WRITE) || !(dst_vma->vm_flags & VM_WRITE))
+ return -EINVAL;
+
+ /*
+ * Be strict and only allow remap_pages if either the src or
+ * dst range is registered in the userfaultfd to prevent
+ * userland errors going unnoticed. As far as the VM
+ * consistency is concerned, it would be perfectly safe to
+ * remove this check, but there's no useful usage for
+ * remap_pages outside of userfaultfd registered ranges. This
+ * is after all why it is an ioctl belonging to the
+ * userfaultfd and not a syscall.
+ *
+ * Allow both vmas to be registered in the userfaultfd, just
+ * in case somebody finds a way to make such a case useful.
+ * Normally only one of the two vmas would be registered in
+ * the userfaultfd.
+ */
+ if (!dst_vma->vm_userfaultfd_ctx.ctx &&
+ !src_vma->vm_userfaultfd_ctx.ctx)
+ return -EINVAL;
+
+ /*
+ * FIXME: only allow remapping across anonymous vmas,
+ * tmpfs should be added.
+ */
+ if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
+ return -EINVAL;
+
+ /*
+ * Ensure the dst_vma has an anon_vma or this page
+ * would get a NULL anon_vma when moved into the
+ * dst_vma.
+ */
+ if (unlikely(anon_vma_prepare(dst_vma)))
+ return -ENOMEM;
+
+ return 0;
+}
+
+/**
+ * remap_pages - remap arbitrary anonymous pages of an existing vma
+ * @dst_start: start of the destination virtual memory range
+ * @src_start: start of the source virtual memory range
+ * @len: length of the virtual memory range
+ *
+ * remap_pages() remaps arbitrary anonymous pages atomically in zero
+ * copy. It only works on non shared anonymous pages because those can
+ * be relocated without generating non linear anon_vmas in the rmap
+ * code.
+ *
+ * It provides a zero copy mechanism to handle userspace page faults.
+ * The source vma pages should have mapcount == 1, which can be
+ * enforced by using madvise(MADV_DONTFORK) on src vma.
+ *
+ * The thread receiving the page during the userland page fault
+ * will receive the faulting page in the source vma through the network,
+ * storage or any other I/O device (MADV_DONTFORK in the source vma
+ * prevents remap_pages() from failing with -EBUSY if the process forks
+ * before remap_pages() is called), then it will call remap_pages() to map the
+ * page in the faulting address in the destination vma.
+ *
+ * This userfaultfd command works purely via pagetables, so it's the
+ * most efficient way to move physical non shared anonymous pages
+ * across different virtual addresses. Unlike mremap()/mmap()/munmap()
+ * it does not create any new vmas. The mapping in the destination
+ * address is atomic.
+ *
+ * It only works if the vma protection bits are identical from the
+ * source and destination vma.
+ *
+ * It can remap non shared anonymous pages within the same vma too.
+ *
+ * If the source virtual memory range has any unmapped holes, or if
+ * the destination virtual memory range is not a whole unmapped hole,
+ * remap_pages() will fail respectively with -ENOENT or -EEXIST. This
+ * provides a very strict behavior to avoid any chance of memory
+ * corruption going unnoticed if there are userland race conditions.
+ * Only one thread should resolve the userland page fault at any given
+ * time for any given faulting address. This means that if two threads
+ * try to both call remap_pages() on the same destination address at the
+ * same time, the second thread will get an explicit error from this
+ * command.
+ *
+ * The command retval will be "len" if successful. The command
+ * however can be interrupted by fatal signals or errors. If
+ * interrupted it will return the number of bytes successfully
+ * remapped before the interruption if any, or the negative error if
+ * none. It will never return zero. Either it will return an error or
+ * an amount of bytes successfully moved. If the retval reports a
+ * "short" remap, the remap_pages() command should be repeated by
+ * userland with src+retval, dst+retval, len-retval if it wants to know
+ * about the error that interrupted it.
+ *
+ * The UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES flag can be specified to
+ * prevent -ENOENT errors to materialize if there are holes in the
+ * source virtual range that is being remapped. The holes will be
+ * accounted as successfully remapped in the retval of the
+ * command. This is mostly useful to remap hugepage naturally aligned
+ * virtual regions without knowing if there are transparent hugepages
+ * in the regions or not, while avoiding the risk of having to split
+ * the hugepmd during the remap.
+ *
+ * If there's any rmap walk that is taking the anon_vma locks without
+ * first obtaining the folio lock (the only current instance is
+ * folio_referenced), they will have to verify if the folio->mapping
+ * has changed after taking the anon_vma lock. If it changed they
+ * should release the lock and retry obtaining a new anon_vma, because
+ * it means the anon_vma was changed by remap_pages() before the lock
+ * could be obtained. This is the only additional complexity added to
+ * the rmap code to provide this anonymous page remapping functionality.
+ */
+ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ unsigned long dst_start, unsigned long src_start,
+ unsigned long len, __u64 mode)
+{
+ struct vm_area_struct *src_vma, *dst_vma;
+ unsigned long src_addr, dst_addr;
+ pmd_t *src_pmd, *dst_pmd;
+ long err = -EINVAL;
+ ssize_t moved = 0;
+
+ /*
+ * Sanitize the command parameters:
+ */
+ BUG_ON(src_start & ~PAGE_MASK);
+ BUG_ON(dst_start & ~PAGE_MASK);
+ BUG_ON(len & ~PAGE_MASK);
+
+ /* Does the address range wrap, or is the span zero-sized? */
+ BUG_ON(src_start + len <= src_start);
+ BUG_ON(dst_start + len <= dst_start);
+
+ /*
+ * Because these are read semaphores there's no risk of lock
+ * inversion.
+ */
+ mmap_read_lock(dst_mm);
+ if (dst_mm != src_mm)
+ mmap_read_lock(src_mm);
+
+ /*
+ * Make sure the vma is not shared, that the src and dst remap
+ * ranges are both valid and fully within a single existing
+ * vma.
+ */
+ src_vma = find_vma(src_mm, src_start);
+ if (!src_vma || (src_vma->vm_flags & VM_SHARED))
+ goto out;
+ if (src_start < src_vma->vm_start ||
+ src_start + len > src_vma->vm_end)
+ goto out;
+
+ dst_vma = find_vma(dst_mm, dst_start);
+ if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
+ goto out;
+ if (dst_start < dst_vma->vm_start ||
+ dst_start + len > dst_vma->vm_end)
+ goto out;
+
+ err = validate_remap_areas(src_vma, dst_vma);
+ if (err)
+ goto out;
+
+ for (src_addr = src_start, dst_addr = dst_start;
+ src_addr < src_start + len;) {
+ spinlock_t *ptl;
+ pmd_t dst_pmdval;
+ unsigned long step_size;
+
+ BUG_ON(dst_addr >= dst_start + len);
+ /*
+ * Below works because an anonymous area would not have a
+ * transparent huge PUD. If file-backed support is added,
+ * that case would need to be handled here.
+ */
+ src_pmd = mm_find_pmd(src_mm, src_addr);
+ if (unlikely(!src_pmd)) {
+ if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES)) {
+ err = -ENOENT;
+ break;
+ }
+ src_pmd = mm_alloc_pmd(src_mm, src_addr);
+ if (unlikely(!src_pmd)) {
+ err = -ENOMEM;
+ break;
+ }
+ }
+ dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
+ if (unlikely(!dst_pmd)) {
+ err = -ENOMEM;
+ break;
+ }
+
+ dst_pmdval = pmdp_get_lockless(dst_pmd);
+ /*
+ * If the dst_pmd is mapped as THP don't override it and just
+ * be strict. If dst_pmd changes into THP after this check, the
+ * remap_pages_huge_pmd() will detect the change and retry
+ * while remap_pages_pte() will detect the change and fail.
+ */
+ if (unlikely(pmd_trans_huge(dst_pmdval))) {
+ err = -EEXIST;
+ break;
+ }
+
+ ptl = pmd_trans_huge_lock(src_pmd, src_vma);
+ if (ptl) {
+ if (pmd_devmap(*src_pmd)) {
+ spin_unlock(ptl);
+ err = -ENOENT;
+ break;
+ }
+
+ /*
+ * Check if we can move the pmd without
+ * splitting it. First check the address
+ * alignment to be the same in src/dst. These
+ * checks don't actually need the PT lock but
+ * it's good to do it here to optimize this
+ * block away at build time if
+ * CONFIG_TRANSPARENT_HUGEPAGE is not set.
+ */
+ if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) ||
+ src_start + len - src_addr < HPAGE_PMD_SIZE || !pmd_none(dst_pmdval)) {
+ spin_unlock(ptl);
+ split_huge_pmd(src_vma, src_pmd, src_addr);
+ continue;
+ }
+
+ err = remap_pages_huge_pmd(dst_mm, src_mm,
+ dst_pmd, src_pmd,
+ dst_pmdval,
+ dst_vma, src_vma,
+ dst_addr, src_addr);
+ step_size = HPAGE_PMD_SIZE;
+ } else {
+ if (pmd_none(*src_pmd)) {
+ if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES)) {
+ err = -ENOENT;
+ break;
+ }
+ if (unlikely(__pte_alloc(src_mm, src_pmd))) {
+ err = -ENOMEM;
+ break;
+ }
+ }
+
+ if (unlikely(pte_alloc(dst_mm, dst_pmd))) {
+ err = -ENOMEM;
+ break;
+ }
+
+ err = remap_pages_pte(dst_mm, src_mm,
+ dst_pmd, src_pmd,
+ dst_vma, src_vma,
+ dst_addr, src_addr,
+ mode);
+ step_size = PAGE_SIZE;
+ }
+
+ cond_resched();
+
+ if (fatal_signal_pending(current)) {
+ /* Do not override an error */
+ if (!err || err == -EAGAIN)
+ err = -EINTR;
+ break;
+ }
+
+ if (err) {
+ if (err == -EAGAIN)
+ continue;
+ break;
+ }
+
+ /* Proceed to the next page */
+ dst_addr += step_size;
+ src_addr += step_size;
+ moved += step_size;
+ }
+
+out:
+ mmap_read_unlock(dst_mm);
+ if (dst_mm != src_mm)
+ mmap_read_unlock(src_mm);
+ BUG_ON(moved < 0);
+ BUG_ON(err > 0);
+ BUG_ON(!moved && !err);
+ return moved ? moved : err;
+}
--
2.42.0.609.gbb76f46606-goog

2023-10-09 06:43:11

by Suren Baghdasaryan

Subject: [PATCH v3 1/3] mm/rmap: support move to different root anon_vma in folio_move_anon_rmap()

From: Andrea Arcangeli <[email protected]>

So far, folio_move_anon_rmap() was only used to move a folio to a
different anon_vma after fork(), whereby the root anon_vma stayed
unchanged. For that, it was sufficient to hold the folio lock when
calling folio_move_anon_rmap().

However, we want to make use of folio_move_anon_rmap() to move folios
between VMAs that have a different root anon_vma. As folio_referenced()
performs an RMAP walk without holding the folio lock but only holding the
anon_vma in read mode, holding the folio lock is insufficient.

When moving to an anon_vma with a different root anon_vma, we'll have to
hold both the folio lock and the anon_vma lock in write mode.
Consequently, whenever we succeed in folio_lock_anon_vma_read() to
read-lock the anon_vma, we have to re-check whether the mapping was
changed in the meantime. If that was the case, we have to retry.

Note that folio_move_anon_rmap() must only be called if the anon page is
exclusive to a process, and must not be called on KSM folios.

This is a preparation for UFFDIO_MOVE, which will hold the folio lock,
the anon_vma lock in write mode, and the mmap_lock in read mode.
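
Sketched out, the mover side (with the mmap_lock already held in read
mode) follows this sequence, mirroring what the UFFDIO_MOVE paths later
in this series do (simplified, no error handling):

    folio_lock(src_folio);                    /* block rmap walks taking the folio lock */
    src_anon_vma = folio_get_anon_vma(src_folio);
    anon_vma_lock_write(src_anon_vma);        /* block folio_referenced()-style walks */
    folio_move_anon_rmap(src_folio, dst_vma); /* rewrites folio->mapping */
    anon_vma_unlock_write(src_anon_vma);
    put_anon_vma(src_anon_vma);
    folio_unlock(src_folio);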

Signed-off-by: Andrea Arcangeli <[email protected]>
Signed-off-by: Suren Baghdasaryan <[email protected]>
---
mm/rmap.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index c1f11c9dbe61..f9ddc50269d2 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -542,7 +542,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
struct anon_vma *root_anon_vma;
unsigned long anon_mapping;

+retry:
rcu_read_lock();
+retry_under_rcu:
anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
goto out;
@@ -552,6 +554,16 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
root_anon_vma = READ_ONCE(anon_vma->root);
if (down_read_trylock(&root_anon_vma->rwsem)) {
+ /*
+ * folio_move_anon_rmap() might have changed the anon_vma as we
+ * might not hold the folio lock here.
+ */
+ if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
+ anon_mapping)) {
+ up_read(&root_anon_vma->rwsem);
+ goto retry_under_rcu;
+ }
+
/*
* If the folio is still mapped, then this anon_vma is still
* its anon_vma, and holding the mutex ensures that it will
@@ -586,6 +598,18 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
rcu_read_unlock();
anon_vma_lock_read(anon_vma);

+ /*
+ * folio_move_anon_rmap() might have changed the anon_vma as we might
+ * not hold the folio lock here.
+ */
+ if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
+ anon_mapping)) {
+ anon_vma_unlock_read(anon_vma);
+ put_anon_vma(anon_vma);
+ anon_vma = NULL;
+ goto retry;
+ }
+
if (atomic_dec_and_test(&anon_vma->refcount)) {
/*
* Oops, we held the last refcount, release the lock
--
2.42.0.609.gbb76f46606-goog

2023-10-09 06:43:15

by Suren Baghdasaryan

Subject: [PATCH v3 3/3] selftests/mm: add UFFDIO_MOVE ioctl test

Add a test for the new UFFDIO_MOVE ioctl which uses uffd to move the
source into the destination buffer while checking the contents of both
after remapping. After the operation the content of the destination
buffer should match the original source buffer's content, while the
source buffer should be zeroed.

Signed-off-by: Suren Baghdasaryan <[email protected]>
---
tools/testing/selftests/mm/uffd-common.c | 41 ++++++++++++-
tools/testing/selftests/mm/uffd-common.h | 1 +
tools/testing/selftests/mm/uffd-unit-tests.c | 62 ++++++++++++++++++++
3 files changed, 102 insertions(+), 2 deletions(-)

diff --git a/tools/testing/selftests/mm/uffd-common.c b/tools/testing/selftests/mm/uffd-common.c
index 02b89860e193..ecc1244f1c2b 100644
--- a/tools/testing/selftests/mm/uffd-common.c
+++ b/tools/testing/selftests/mm/uffd-common.c
@@ -52,6 +52,13 @@ static int anon_allocate_area(void **alloc_area, bool is_src)
*alloc_area = NULL;
return -errno;
}
+
+ /* Prevent source pages from collapsing into THPs */
+ if (madvise(*alloc_area, nr_pages * page_size, MADV_NOHUGEPAGE)) {
+ *alloc_area = NULL;
+ return -errno;
+ }
+
return 0;
}

@@ -484,8 +491,14 @@ void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args)
offset = (char *)(unsigned long)msg->arg.pagefault.address - area_dst;
offset &= ~(page_size-1);

- if (copy_page(uffd, offset, args->apply_wp))
- args->missing_faults++;
+ /* UFFDIO_MOVE is supported for anon non-shared mappings. */
+ if (uffd_test_ops == &anon_uffd_test_ops && !map_shared) {
+ if (move_page(uffd, offset))
+ args->missing_faults++;
+ } else {
+ if (copy_page(uffd, offset, args->apply_wp))
+ args->missing_faults++;
+ }
}
}

@@ -620,6 +633,30 @@ int copy_page(int ufd, unsigned long offset, bool wp)
return __copy_page(ufd, offset, false, wp);
}

+int move_page(int ufd, unsigned long offset)
+{
+ struct uffdio_move uffdio_move;
+
+ if (offset >= nr_pages * page_size)
+ err("unexpected offset %lu\n", offset);
+ uffdio_move.dst = (unsigned long) area_dst + offset;
+ uffdio_move.src = (unsigned long) area_src + offset;
+ uffdio_move.len = page_size;
+ uffdio_move.mode = UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES;
+ uffdio_move.move = 0;
+ if (ioctl(ufd, UFFDIO_MOVE, &uffdio_move)) {
+ /* real retval in uffdio_move.move */
+ if (uffdio_move.move != -EEXIST)
+ err("UFFDIO_MOVE error: %"PRId64,
+ (int64_t)uffdio_move.move);
+ wake_range(ufd, uffdio_move.dst, page_size);
+ } else if (uffdio_move.move != page_size) {
+ err("UFFDIO_MOVE error: %"PRId64, (int64_t)uffdio_move.move);
+ } else
+ return 1;
+ return 0;
+}
+
int uffd_open_dev(unsigned int flags)
{
int fd, uffd;
diff --git a/tools/testing/selftests/mm/uffd-common.h b/tools/testing/selftests/mm/uffd-common.h
index 7c4fa964c3b0..f4d79e169a3d 100644
--- a/tools/testing/selftests/mm/uffd-common.h
+++ b/tools/testing/selftests/mm/uffd-common.h
@@ -111,6 +111,7 @@ void wp_range(int ufd, __u64 start, __u64 len, bool wp);
void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args);
int __copy_page(int ufd, unsigned long offset, bool retry, bool wp);
int copy_page(int ufd, unsigned long offset, bool wp);
+int move_page(int ufd, unsigned long offset);
void *uffd_poll_thread(void *arg);

int uffd_open_dev(unsigned int flags);
diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
index 2709a34a39c5..f0ded3b34367 100644
--- a/tools/testing/selftests/mm/uffd-unit-tests.c
+++ b/tools/testing/selftests/mm/uffd-unit-tests.c
@@ -824,6 +824,10 @@ static void uffd_events_test_common(bool wp)
char c;
struct uffd_args args = { 0 };

+ /* Prevent source pages from being mapped more than once */
+ if (madvise(area_src, nr_pages * page_size, MADV_DONTFORK))
+ err("madvise(MADV_DONTFORK) failed");
+
fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
if (uffd_register(uffd, area_dst, nr_pages * page_size,
true, wp, false))
@@ -1062,6 +1066,58 @@ static void uffd_poison_test(uffd_test_args_t *targs)
uffd_test_pass();
}

+static void uffd_move_test(uffd_test_args_t *targs)
+{
+ unsigned long nr;
+ pthread_t uffd_mon;
+ char c;
+ unsigned long long count;
+ struct uffd_args args = { 0 };
+
+ if (uffd_register(uffd, area_dst, nr_pages * page_size,
+ true, false, false))
+ err("register failure");
+
+ if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
+ err("uffd_poll_thread create");
+
+ /*
+ * Read each of the pages back using the UFFD-registered mapping. We
+ * expect that the first time we touch a page, it will result in a missing
+ * fault. uffd_poll_thread will resolve the fault by remapping source
+ * page to destination.
+ */
+ for (nr = 0; nr < nr_pages; nr++) {
+ /* Check area_src content */
+ count = *area_count(area_src, nr);
+ if (count != count_verify[nr])
+ err("nr %lu source memory invalid %llu %llu\n",
+ nr, count, count_verify[nr]);
+
+ /* Faulting into area_dst should remap the page */
+ count = *area_count(area_dst, nr);
+ if (count != count_verify[nr])
+ err("nr %lu memory corruption %llu %llu\n",
+ nr, count, count_verify[nr]);
+
+ /* Re-check area_src content which should be empty */
+ count = *area_count(area_src, nr);
+ if (count != 0)
+ err("nr %lu move failed %llu %llu\n",
+ nr, count, count_verify[nr]);
+ }
+
+ if (write(pipefd[1], &c, sizeof(c)) != sizeof(c))
+ err("pipe write");
+ if (pthread_join(uffd_mon, NULL))
+ err("join() failed");
+
+ if (args.missing_faults != nr_pages || args.minor_faults != 0)
+ uffd_test_fail("stats check error");
+ else
+ uffd_test_pass();
+}
+
/*
* Test the returned uffdio_register.ioctls with different register modes.
* Note that _UFFDIO_ZEROPAGE is tested separately in the zeropage test.
@@ -1139,6 +1195,12 @@ uffd_test_case_t uffd_tests[] = {
.mem_targets = MEM_ALL,
.uffd_feature_required = 0,
},
+ {
+ .name = "move",
+ .uffd_fn = uffd_move_test,
+ .mem_targets = MEM_ANON,
+ .uffd_feature_required = UFFD_FEATURE_MOVE,
+ },
{
.name = "wp-fork",
.uffd_fn = uffd_wp_fork_test,
--
2.42.0.609.gbb76f46606-goog

2023-10-09 14:39:44

by David Hildenbrand

Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On 09.10.23 08:42, Suren Baghdasaryan wrote:
> [... full patch description quoted above; snipped ...]

A general comment simply because I realized that just now: does anything
speak against limiting the operations now to a single MM?

The use cases I heard so far don't need it. If ever required, we could
consider extending it.

Let's reduce complexity and KIS unless really required.


Further: see "22) Do not crash the kernel" in coding-style.rst. All
these BUG_ON need to go. Ideally, use WARN_ON_ONCE() or just VM_WARN_ON().
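
For instance, instead of BUG_ON(!pmd_none(dst_pmdval)) in
remap_pages_huge_pmd(), something along these lines (illustrative only;
the exact error code is a judgment call):

    /* Warn once and fail the operation instead of crashing the kernel. */
    if (WARN_ON_ONCE(!pmd_none(dst_pmdval)))
            return -EEXIST;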

--
Cheers,

David / dhildenb

2023-10-09 16:21:43

by Suren Baghdasaryan

Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Mon, Oct 9, 2023 at 7:38 AM David Hildenbrand <[email protected]> wrote:
>
> On 09.10.23 08:42, Suren Baghdasaryan wrote:
> > [... full patch description quoted above; snipped ...]
>
> A general comment simply because I realized that just now: does anything
> speak against limiting the operations now to a single MM?
>
> The use cases I heard so far don't need it. If ever required, we could
> consider extending it.
>
> Let's reduce complexity and KIS unless really required.

Let me check if there are use cases that require moves between MMs.
Andrea seems to have put considerable effort to make it work between
MMs and it would be a pity to lose that. I can send a follow-up patch
to recover that functionality and even if it does not get merged, it
can be used in the future as a reference. But first let me check if we
can drop it.

>
>
> Further: see "22) Do not crash the kernel" in coding-style.rst. All
> these BUG_ON need to go. Ideally, use WARN_ON_ONCE() or just VM_WARN_ON().

Yeah, it might be the right time to clean that up. Will do.
Thanks,
Suren.

>
> --
> Cheers,
>
> David / dhildenb
>

2023-10-09 16:25:08

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On 09.10.23 18:21, Suren Baghdasaryan wrote:
> On Mon, Oct 9, 2023 at 7:38 AM David Hildenbrand <[email protected]> wrote:
>>
>> On 09.10.23 08:42, Suren Baghdasaryan wrote:
>>> [...]
>>>
>>
>> A general comment simply because I realized that just now: does anything
>> speak against limiting the operations now to a single MM?
>>
>> The use cases I heard so far don't need it. If ever required, we could
>> consider extending it.
>>
>> Let's reduce complexity and KIS unless really required.
>
> Let me check if there are use cases that require moves between MMs.
> Andrea seems to have put considerable effort to make it work between
> MMs and it would be a pity to lose that. I can send a follow-up patch
> to recover that functionality and even if it does not get merged, it
> can be used in the future as a reference. But first let me check if we
> can drop it.

Yes, that sounds reasonable. Unless the big important use cases require
moving pages between processes, let's leave that as future work for now.

--
Cheers,

David / dhildenb

2023-10-09 16:29:48

by Lokesh Gidra

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Mon, Oct 9, 2023 at 5:24 PM David Hildenbrand <[email protected]> wrote:
>
> On 09.10.23 18:21, Suren Baghdasaryan wrote:
> > On Mon, Oct 9, 2023 at 7:38 AM David Hildenbrand <[email protected]> wrote:
> >>
> >> On 09.10.23 08:42, Suren Baghdasaryan wrote:
> >>> [...]
> >>>
> >>
> >> A general comment simply because I realized that just now: does anything
> >> speak against limiting the operations now to a single MM?
> >>
> >> The use cases I heard so far don't need it. If ever required, we could
> >> consider extending it.
> >>
> >> Let's reduce complexity and KIS unless really required.
> >
> > Let me check if there are use cases that require moves between MMs.
> > Andrea seems to have put considerable effort to make it work between
> > MMs and it would be a pity to lose that. I can send a follow-up patch
> > to recover that functionality and even if it does not get merged, it
> > can be used in the future as a reference. But first let me check if we
> > can drop it.

For the compaction use case that we have it's fine to limit it to
single MM. However, for general use I think Peter will have a better
idea.
>
> > Yes, that sounds reasonable. Unless the big important use cases require
> moving pages between processes, let's leave that as future work for now.
>
> --
> Cheers,
>
> David / dhildenb
>

2023-10-09 17:57:25

by Lokesh Gidra

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Mon, Oct 9, 2023 at 9:29 AM Lokesh Gidra <[email protected]> wrote:
>
> On Mon, Oct 9, 2023 at 5:24 PM David Hildenbrand <[email protected]> wrote:
> >
> > On 09.10.23 18:21, Suren Baghdasaryan wrote:
> > > On Mon, Oct 9, 2023 at 7:38 AM David Hildenbrand <[email protected]> wrote:
> > >>
> > >> On 09.10.23 08:42, Suren Baghdasaryan wrote:
> > >>> [...]
> > >>>
> > >>
> > >> A general comment simply because I realized that just now: does anything
> > >> speak against limiting the operations now to a single MM?
> > >>
> > >> The use cases I heard so far don't need it. If ever required, we could
> > >> consider extending it.
> > >>
> > >> Let's reduce complexity and KIS unless really required.
> > >
> > > Let me check if there are use cases that require moves between MMs.
> > > Andrea seems to have put considerable effort to make it work between
> > > MMs and it would be a pity to lose that. I can send a follow-up patch
> > > to recover that functionality and even if it does not get merged, it
> > > can be used in the future as a reference. But first let me check if we
> > > can drop it.
>
> For the compaction use case that we have it's fine to limit it to
> single MM. However, for general use I think Peter will have a better
> idea.
> >
> > > Yes, that sounds reasonable. Unless the big important use cases require
> > moving pages between processes, let's leave that as future work for now.
> >
> > --
> > Cheers,
> >
> > David / dhildenb
> >

While going through mremap's move_page_tables code, which is pretty
similar to what we do here, I noticed that the cache is flushed as well,
whereas we are not doing that here. Is that OK? I'm not an MM expert by
any means, so it's a question rather than a comment :)

2023-10-10 01:50:32

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Mon, Oct 9, 2023 at 5:57 PM Lokesh Gidra <[email protected]> wrote:
>
> On Mon, Oct 9, 2023 at 9:29 AM Lokesh Gidra <[email protected]> wrote:
> >
> > On Mon, Oct 9, 2023 at 5:24 PM David Hildenbrand <[email protected]> wrote:
> > >
> > > On 09.10.23 18:21, Suren Baghdasaryan wrote:
> > > > On Mon, Oct 9, 2023 at 7:38 AM David Hildenbrand <[email protected]> wrote:
> > > >>
> > > >> On 09.10.23 08:42, Suren Baghdasaryan wrote:
> > > >>> [...]
> > > >>>
> > > >>
> > > >> A general comment simply because I realized that just now: does anything
> > > >> speak against limiting the operations now to a single MM?
> > > >>
> > > >> The use cases I heard so far don't need it. If ever required, we could
> > > >> consider extending it.
> > > >>
> > > >> Let's reduce complexity and KIS unless really required.
> > > >
> > > > Let me check if there are use cases that require moves between MMs.
> > > > Andrea seems to have put considerable effort to make it work between
> > > > MMs and it would be a pity to lose that. I can send a follow-up patch
> > > > to recover that functionality and even if it does not get merged, it
> > > > can be used in the future as a reference. But first let me check if we
> > > > can drop it.
> >
> > For the compaction use case that we have it's fine to limit it to
> > single MM. However, for general use I think Peter will have a better
> > idea.
> > >
> > > Yes, that sounds reasonable. Unless the big important use cases require
> > > moving pages between processes, let's leave that as future work for now.
> > >
> > > --
> > > Cheers,
> > >
> > > David / dhildenb
> > >
>
> While going through mremap's move_page_tables code, which is pretty
> similar to what we do here, I noticed that the cache is flushed as well,
> whereas we are not doing that here. Is that OK? I'm not an MM expert by
> any means, so it's a question rather than a comment :)

Good question. I'll have to take a closer look at it. Unfortunately I'll be
travelling starting tomorrow and will be back next week. I'll try my best
to answer questions in a timely manner but that depends on my connection
and availability.
Thanks!

2023-10-12 20:12:40

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Mon, Oct 09, 2023 at 05:29:08PM +0100, Lokesh Gidra wrote:
> On Mon, Oct 9, 2023 at 5:24 PM David Hildenbrand <[email protected]> wrote:
> >
> > On 09.10.23 18:21, Suren Baghdasaryan wrote:
> > > On Mon, Oct 9, 2023 at 7:38 AM David Hildenbrand <[email protected]> wrote:
> > >>
> > >> On 09.10.23 08:42, Suren Baghdasaryan wrote:
> > >>> [...]
> > >>>
> > >>
> > >> A general comment simply because I realized that just now: does anything
> > >> speak against limiting the operations now to a single MM?
> > >>
> > >> The use cases I heard so far don't need it. If ever required, we could
> > >> consider extending it.
> > >>
> > >> Let's reduce complexity and KIS unless really required.
> > >
> > > Let me check if there are use cases that require moves between MMs.
> > > Andrea seems to have put considerable effort to make it work between
> > > MMs and it would be a pity to lose that. I can send a follow-up patch
> > > to recover that functionality and even if it does not get merged, it
> > > can be used in the future as a reference. But first let me check if we
> > > can drop it.
>
> For the compaction use case that we have it's fine to limit it to
> single MM. However, for general use I think Peter will have a better
> idea.

I used to have the same thought as David on whether we can simplify the
design to e.g. limit it to a single mm. Then I found that the trickiest part
is actually patch 1 together with the anon_vma manipulations, and the
problem is that it's not avoidable even if we restrict the API to a single mm.

What else would we gain from a single mm? One less mmap read lock, but
probably that's all we can get; IIUC we need to keep most of the rest of
the code, e.g. the pgtable walks, double pgtable lockings, etc.

Actually, even though I have no solid use case in mind, I had a feeling
that there can be some interesting way to leverage this across-mm
movement while keeping things safe (e.g. by requiring the other process
to create the uffd and deliver it to this process).
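
One concrete shape such a delivery could take (a hedged sketch of plain
SCM_RIGHTS fd passing, not something specified anywhere in this thread):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send a userfaultfd to a cooperating process over a Unix socket. */
    static int send_uffd(int sock, int uffd)
    {
            char dummy = 'u';
            struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
            union {
                    struct cmsghdr align;
                    char buf[CMSG_SPACE(sizeof(int))];
            } u;
            struct msghdr msg = {
                    .msg_iov = &iov,
                    .msg_iovlen = 1,
                    .msg_control = u.buf,
                    .msg_controllen = sizeof(u.buf),
            };
            struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

            cmsg->cmsg_level = SOL_SOCKET;
            cmsg->cmsg_type = SCM_RIGHTS;
            cmsg->cmsg_len = CMSG_LEN(sizeof(int));
            memcpy(CMSG_DATA(cmsg), &uffd, sizeof(int));

            return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }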

Considering that Andrea's original version already contains those bits,
and all of the above, I'd vote that we go ahead with supporting two MMs.

Thanks,

--
Peter Xu

2023-10-12 22:01:33

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Sun, Oct 08, 2023 at 11:42:27PM -0700, Suren Baghdasaryan wrote:
> From: Andrea Arcangeli <[email protected]>
>
> Implement the uABI of UFFDIO_MOVE ioctl.
> [...]
>
> ESRCH
> The faulting process has exited at the time of a

Nit pick comment for future man page: there's no faulting process in this
context. Perhaps "target process"?

> UFFDIO_MOVE operation.
>
> Signed-off-by: Andrea Arcangeli <[email protected]>
> Signed-off-by: Suren Baghdasaryan <[email protected]>
> ---
> Documentation/admin-guide/mm/userfaultfd.rst | 3 +
> fs/userfaultfd.c | 63 ++
> include/linux/rmap.h | 5 +
> include/linux/userfaultfd_k.h | 12 +
> include/uapi/linux/userfaultfd.h | 29 +-
> mm/huge_memory.c | 138 +++++
> mm/khugepaged.c | 3 +
> mm/rmap.c | 6 +
> mm/userfaultfd.c | 602 +++++++++++++++++++
> 9 files changed, 860 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/admin-guide/mm/userfaultfd.rst b/Documentation/admin-guide/mm/userfaultfd.rst
> index 203e26da5f92..e5cc8848dcb3 100644
> --- a/Documentation/admin-guide/mm/userfaultfd.rst
> +++ b/Documentation/admin-guide/mm/userfaultfd.rst
> @@ -113,6 +113,9 @@ events, except page fault notifications, may be generated:
> areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating
> support for shmem virtual memory areas.
>
> +- ``UFFD_FEATURE_MOVE`` indicates that the kernel supports moving an
> + existing page contents from userspace.
> +
> The userland application should set the feature flags it intends to use
> when invoking the ``UFFDIO_API`` ioctl, to request that those features be
> enabled if supported.
> diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> index a7c6ef764e63..ac52e0f99a69 100644
> --- a/fs/userfaultfd.c
> +++ b/fs/userfaultfd.c
> @@ -2039,6 +2039,66 @@ static inline unsigned int uffd_ctx_features(__u64 user_features)
> return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED;
> }
>
> +static int userfaultfd_remap(struct userfaultfd_ctx *ctx,
> + unsigned long arg)

If we do want to rename from REMAP to MOVE, we'd better rename the
functions too, as "remap" still exists all over the place..

> +{
> + __s64 ret;
> + struct uffdio_move uffdio_move;
> + struct uffdio_move __user *user_uffdio_move;
> + struct userfaultfd_wake_range range;
> +
> + user_uffdio_move = (struct uffdio_move __user *) arg;
> +
> + ret = -EAGAIN;
> + if (atomic_read(&ctx->mmap_changing))
> + goto out;

I didn't notice this before, but I think we need to re-check this after
taking target mm's mmap read lock..

maybe we'd want to pass in ctx* into remap_pages(), more reasoning below.
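
A hedged sketch of the shape that re-check could take (passing ctx into
remap_pages() is an assumed signature change, not the posted code):

    mmap_read_lock(dst_mm);
    /* Re-check under the lock: a fork/unmap may have raced with us. */
    if (unlikely(atomic_read(&ctx->mmap_changing))) {
            err = -EAGAIN;
            goto out_unlock;
    }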

> +
> + ret = -EFAULT;
> + if (copy_from_user(&uffdio_move, user_uffdio_move,
> + /* don't copy "remap" last field */

s/remap/move/

> + sizeof(uffdio_move)-sizeof(__s64)))
> + goto out;
> +
> + ret = validate_range(ctx->mm, uffdio_move.dst, uffdio_move.len);
> + if (ret)
> + goto out;
> +
> + ret = validate_range(current->mm, uffdio_move.src, uffdio_move.len);
> + if (ret)
> + goto out;
> +
> + ret = -EINVAL;
> + if (uffdio_move.mode & ~(UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES|
> + UFFDIO_MOVE_MODE_DONTWAKE))
> + goto out;
> +
> + if (mmget_not_zero(ctx->mm)) {
> + ret = remap_pages(ctx->mm, current->mm,
> + uffdio_move.dst, uffdio_move.src,
> + uffdio_move.len, uffdio_move.mode);
> + mmput(ctx->mm);
> + } else {
> + return -ESRCH;
> + }
> +
> + if (unlikely(put_user(ret, &user_uffdio_move->move)))
> + return -EFAULT;
> + if (ret < 0)
> + goto out;
> +
> + /* len == 0 would wake all */
> + BUG_ON(!ret);
> + range.len = ret;
> + if (!(uffdio_move.mode & UFFDIO_MOVE_MODE_DONTWAKE)) {
> + range.start = uffdio_move.dst;
> + wake_userfault(ctx, &range);
> + }
> + ret = range.len == uffdio_move.len ? 0 : -EAGAIN;
> +
> +out:
> + return ret;
> +}
> +
> /*
> * userland asks for a certain API version and we return which bits
> * and ioctl commands are implemented in this kernel for such API
> @@ -2131,6 +2191,9 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd,
> case UFFDIO_ZEROPAGE:
> ret = userfaultfd_zeropage(ctx, arg);
> break;
> + case UFFDIO_MOVE:
> + ret = userfaultfd_remap(ctx, arg);
> + break;
> case UFFDIO_WRITEPROTECT:
> ret = userfaultfd_writeprotect(ctx, arg);
> break;
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index b26fe858fd44..8034eda972e5 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -121,6 +121,11 @@ static inline void anon_vma_lock_write(struct anon_vma *anon_vma)
> down_write(&anon_vma->root->rwsem);
> }
>
> +static inline int anon_vma_trylock_write(struct anon_vma *anon_vma)
> +{
> + return down_write_trylock(&anon_vma->root->rwsem);
> +}
> +
> static inline void anon_vma_unlock_write(struct anon_vma *anon_vma)
> {
> up_write(&anon_vma->root->rwsem);
> diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
> index f2dc19f40d05..ce8d20b57e8c 100644
> --- a/include/linux/userfaultfd_k.h
> +++ b/include/linux/userfaultfd_k.h
> @@ -93,6 +93,18 @@ extern int mwriteprotect_range(struct mm_struct *dst_mm,
> extern long uffd_wp_range(struct vm_area_struct *vma,
> unsigned long start, unsigned long len, bool enable_wp);
>
> +/* remap_pages */
> +void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
> +void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
> +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + unsigned long dst_start, unsigned long src_start,
> + unsigned long len, __u64 flags);
> +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> + struct vm_area_struct *dst_vma,
> + struct vm_area_struct *src_vma,
> + unsigned long dst_addr, unsigned long src_addr);
> +
> /* mm helpers */
> static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
> struct vm_userfaultfd_ctx vm_ctx)
> diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h
> index 0dbc81015018..2841e4ea8f2c 100644
> --- a/include/uapi/linux/userfaultfd.h
> +++ b/include/uapi/linux/userfaultfd.h
> @@ -41,7 +41,8 @@
> UFFD_FEATURE_WP_HUGETLBFS_SHMEM | \
> UFFD_FEATURE_WP_UNPOPULATED | \
> UFFD_FEATURE_POISON | \
> - UFFD_FEATURE_WP_ASYNC)
> + UFFD_FEATURE_WP_ASYNC | \
> + UFFD_FEATURE_MOVE)
> #define UFFD_API_IOCTLS \
> ((__u64)1 << _UFFDIO_REGISTER | \
> (__u64)1 << _UFFDIO_UNREGISTER | \
> @@ -50,6 +51,7 @@
> ((__u64)1 << _UFFDIO_WAKE | \
> (__u64)1 << _UFFDIO_COPY | \
> (__u64)1 << _UFFDIO_ZEROPAGE | \
> + (__u64)1 << _UFFDIO_MOVE | \
> (__u64)1 << _UFFDIO_WRITEPROTECT | \
> (__u64)1 << _UFFDIO_CONTINUE | \
> (__u64)1 << _UFFDIO_POISON)
> @@ -73,6 +75,7 @@
> #define _UFFDIO_WAKE (0x02)
> #define _UFFDIO_COPY (0x03)
> #define _UFFDIO_ZEROPAGE (0x04)
> +#define _UFFDIO_MOVE (0x05)
> #define _UFFDIO_WRITEPROTECT (0x06)
> #define _UFFDIO_CONTINUE (0x07)
> #define _UFFDIO_POISON (0x08)
> @@ -92,6 +95,8 @@
> struct uffdio_copy)
> #define UFFDIO_ZEROPAGE _IOWR(UFFDIO, _UFFDIO_ZEROPAGE, \
> struct uffdio_zeropage)
> +#define UFFDIO_MOVE _IOWR(UFFDIO, _UFFDIO_MOVE, \
> + struct uffdio_move)
> #define UFFDIO_WRITEPROTECT _IOWR(UFFDIO, _UFFDIO_WRITEPROTECT, \
> struct uffdio_writeprotect)
> #define UFFDIO_CONTINUE _IOWR(UFFDIO, _UFFDIO_CONTINUE, \
> @@ -222,6 +227,9 @@ struct uffdio_api {
> * asynchronous mode is supported in which the write fault is
> * automatically resolved and write-protection is un-set.
> * It implies UFFD_FEATURE_WP_UNPOPULATED.
> + *
> + * UFFD_FEATURE_MOVE indicates that the kernel supports moving an
> + * existing page contents from userspace.
> */
> #define UFFD_FEATURE_PAGEFAULT_FLAG_WP (1<<0)
> #define UFFD_FEATURE_EVENT_FORK (1<<1)
> @@ -239,6 +247,7 @@ struct uffdio_api {
> #define UFFD_FEATURE_WP_UNPOPULATED (1<<13)
> #define UFFD_FEATURE_POISON (1<<14)
> #define UFFD_FEATURE_WP_ASYNC (1<<15)
> +#define UFFD_FEATURE_MOVE (1<<16)
> __u64 features;
>
> __u64 ioctls;
> @@ -347,6 +356,24 @@ struct uffdio_poison {
> __s64 updated;
> };
>
> +struct uffdio_move {
> + __u64 dst;
> + __u64 src;
> + __u64 len;
> + /*
> + * Especially if used to atomically remove memory from the
> + * address space the wake on the dst range is not needed.
> + */
> +#define UFFDIO_MOVE_MODE_DONTWAKE ((__u64)1<<0)
> +#define UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES ((__u64)1<<1)
> + __u64 mode;
> + /*
> + * "move" is written by the ioctl and must be at the end: the
> + * copy_from_user will not read the last 8 bytes.
> + */
> + __s64 move;
> +};
> +
> /*
> * Flags for the userfaultfd(2) system call itself.
> */
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 9656be95a542..6fac5c3d66e6 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2086,6 +2086,144 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> return ret;
> }
>
> +#ifdef CONFIG_USERFAULTFD
> +/*
> + * The PT lock for src_pmd and the mmap_lock for reading are held by
> + * the caller, but it must return after releasing the
> + * page_table_lock. Just move the page from src_pmd to dst_pmd if possible.
> + * Return zero if succeeded in moving the page, -EAGAIN if it needs to be
> + * repeated by the caller, or other errors in case of failure.
> + */
> +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> + struct vm_area_struct *dst_vma,
> + struct vm_area_struct *src_vma,
> + unsigned long dst_addr, unsigned long src_addr)
> +{
> + pmd_t _dst_pmd, src_pmdval;
> + struct page *src_page;
> + struct folio *src_folio;
> + struct anon_vma *src_anon_vma;
> + spinlock_t *src_ptl, *dst_ptl;
> + pgtable_t src_pgtable, dst_pgtable;
> + struct mmu_notifier_range range;
> + int err = 0;
> +
> + src_pmdval = *src_pmd;
> + src_ptl = pmd_lockptr(src_mm, src_pmd);
> +
> + lockdep_assert_held(src_ptl);
> + mmap_assert_locked(src_mm);
> + mmap_assert_locked(dst_mm);
> +
> + BUG_ON(!pmd_none(dst_pmdval));
> + BUG_ON(src_addr & ~HPAGE_PMD_MASK);
> + BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
> +
> + if (!pmd_trans_huge(src_pmdval)) {
> + spin_unlock(src_ptl);
> + if (is_pmd_migration_entry(src_pmdval)) {
> + pmd_migration_entry_wait(src_mm, &src_pmdval);
> + return -EAGAIN;
> + }
> + return -ENOENT;
> + }
> +
> + src_page = pmd_page(src_pmdval);
> + if (unlikely(!PageAnonExclusive(src_page))) {
> + spin_unlock(src_ptl);
> + return -EBUSY;
> + }
> +
> + src_folio = page_folio(src_page);
> + folio_get(src_folio);
> + spin_unlock(src_ptl);
> +
> + /* preallocate dst_pgtable if needed */
> + if (dst_mm != src_mm) {
> + dst_pgtable = pte_alloc_one(dst_mm);
> + if (unlikely(!dst_pgtable)) {
> + err = -ENOMEM;
> + goto put_folio;
> + }
> + } else {
> + dst_pgtable = NULL;
> + }
> +

IIUC Lokesh's comment applies here: we probably need the
flush_cache_range(), not for x86 but for the other architectures.

cachetlb.rst:

Next, we have the cache flushing interfaces. In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms::

1) flush_cache_mm(mm);
change_all_page_tables_of(mm);
flush_tlb_mm(mm);

2) flush_cache_range(vma, start, end);
change_range_of_page_tables(mm, start, end);
flush_tlb_range(vma, start, end);

3) flush_cache_page(vma, addr, pfn);
set_pte(pte_pointer, new_pte_val);
flush_tlb_page(vma, addr);
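
Following form (2) above, a hedged sketch of where such a flush might slot
into remap_pages_huge_pmd() (the placement is illustrative, not something
settled in this thread):

    /* Before clearing the source PMD and mapping the folio at dst: */
    flush_cache_range(src_vma, src_addr, src_addr + HPAGE_PMD_SIZE);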

> + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
> + src_addr + HPAGE_PMD_SIZE);
> + mmu_notifier_invalidate_range_start(&range);
> +
> + folio_lock(src_folio);
> +
> + /*
> + * split_huge_page walks the anon_vma chain without the page
> + * lock. Serialize against it with the anon_vma lock, the page
> + * lock is not enough.
> + */
> + src_anon_vma = folio_get_anon_vma(src_folio);
> + if (!src_anon_vma) {
> + err = -EAGAIN;
> + goto unlock_folio;
> + }
> + anon_vma_lock_write(src_anon_vma);
> +
> + dst_ptl = pmd_lockptr(dst_mm, dst_pmd);
> + double_pt_lock(src_ptl, dst_ptl);
> + if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
> + !pmd_same(*dst_pmd, dst_pmdval))) {
> + double_pt_unlock(src_ptl, dst_ptl);
> + err = -EAGAIN;
> + goto put_anon_vma;
> + }
> + if (!PageAnonExclusive(&src_folio->page)) {
> + double_pt_unlock(src_ptl, dst_ptl);
> + err = -EBUSY;
> + goto put_anon_vma;
> + }
> +
> + BUG_ON(!folio_test_head(src_folio));
> + BUG_ON(!folio_test_anon(src_folio));
> +
> + folio_move_anon_rmap(src_folio, dst_vma);
> + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> +
> + src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> + _dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
> + _dst_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);

Last time the conclusion was that we'd leverage can_change_pmd_writable(), no?
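
Something like the below, perhaps, assuming can_change_pmd_writable()
(currently in mm/huge_memory.c) is reachable from here; the exact
pmd_mkwrite() calling convention depends on the tree this lands in:

	_dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
	_dst_pmd = pmd_mkdirty(_dst_pmd);
	/* only grant write permission when it's known to be safe */
	if (can_change_pmd_writable(dst_vma, dst_addr, _dst_pmd))
		_dst_pmd = pmd_mkwrite(_dst_pmd);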

> + set_pmd_at(dst_mm, dst_addr, dst_pmd, _dst_pmd);
> +
> + src_pgtable = pgtable_trans_huge_withdraw(src_mm, src_pmd);
> + if (dst_pgtable) {
> + pgtable_trans_huge_deposit(dst_mm, dst_pmd, dst_pgtable);
> + pte_free(src_mm, src_pgtable);
> + dst_pgtable = NULL;
> +
> + mm_inc_nr_ptes(dst_mm);
> + mm_dec_nr_ptes(src_mm);
> + add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
> + add_mm_counter(src_mm, MM_ANONPAGES, -HPAGE_PMD_NR);
> + } else {
> + pgtable_trans_huge_deposit(dst_mm, dst_pmd, src_pgtable);
> + }
> + double_pt_unlock(src_ptl, dst_ptl);
> +
> +put_anon_vma:
> + anon_vma_unlock_write(src_anon_vma);
> + put_anon_vma(src_anon_vma);
> +unlock_folio:
> + /* unblock rmap walks */
> + folio_unlock(src_folio);
> + mmu_notifier_invalidate_range_end(&range);
> + if (dst_pgtable)
> + pte_free(dst_mm, dst_pgtable);
> +put_folio:
> + folio_put(src_folio);
> +
> + return err;
> +}
> +#endif /* CONFIG_USERFAULTFD */
> +
> /*
> * Returns page table lock pointer if a given pmd maps a thp, NULL otherwise.
> *
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 2b5c0321d96b..0c1ee7172852 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1136,6 +1136,9 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> * Prevent all access to pagetables with the exception of
> * gup_fast later handled by the ptep_clear_flush and the VM
> * handled by the anon_vma lock + PG_lock.
> + *
> + * UFFDIO_MOVE is prevented from racing as well thanks to the
> + * mmap_lock.
> */
> mmap_write_lock(mm);
> result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index f9ddc50269d2..a5919cac9a08 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -490,6 +490,12 @@ void __init anon_vma_init(void)
> * page_remove_rmap() that the anon_vma pointer from page->mapping is valid
> * if there is a mapcount, we can dereference the anon_vma after observing
> * those.
> + *
> + * NOTE: the caller should normally hold the folio lock when calling this. If
> + * not, the caller needs to double check the anon_vma didn't change after
> + * taking the anon_vma lock for either read or write (UFFDIO_MOVE can modify it
> + * concurrently without folio lock protection). See folio_lock_anon_vma_read()
> + * which has already covered that, and comment above remap_pages().
> */
> struct anon_vma *folio_get_anon_vma(struct folio *folio)
> {
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 96d9eae5c7cc..45ce1a8b8ab9 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -842,3 +842,605 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
> mmap_read_unlock(dst_mm);
> return err;
> }
> +
> +
> +void double_pt_lock(spinlock_t *ptl1,
> + spinlock_t *ptl2)
> + __acquires(ptl1)
> + __acquires(ptl2)
> +{
> + spinlock_t *ptl_tmp;
> +
> + if (ptl1 > ptl2) {
> + /* exchange ptl1 and ptl2 */
> + ptl_tmp = ptl1;
> + ptl1 = ptl2;
> + ptl2 = ptl_tmp;
> + }
> + /* lock in virtual address order to avoid lock inversion */
> + spin_lock(ptl1);
> + if (ptl1 != ptl2)
> + spin_lock_nested(ptl2, SINGLE_DEPTH_NESTING);
> + else
> + __acquire(ptl2);
> +}
> +
> +void double_pt_unlock(spinlock_t *ptl1,
> + spinlock_t *ptl2)
> + __releases(ptl1)
> + __releases(ptl2)
> +{
> + spin_unlock(ptl1);
> + if (ptl1 != ptl2)
> + spin_unlock(ptl2);
> + else
> + __release(ptl2);
> +}
> +
> +
> +static int remap_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + struct vm_area_struct *dst_vma,
> + struct vm_area_struct *src_vma,
> + unsigned long dst_addr, unsigned long src_addr,
> + pte_t *dst_pte, pte_t *src_pte,
> + pte_t orig_dst_pte, pte_t orig_src_pte,
> + spinlock_t *dst_ptl, spinlock_t *src_ptl,
> + struct folio *src_folio)
> +{
> + double_pt_lock(dst_ptl, src_ptl);
> +
> + if (!pte_same(*src_pte, orig_src_pte) ||
> + !pte_same(*dst_pte, orig_dst_pte)) {
> + double_pt_unlock(dst_ptl, src_ptl);
> + return -EAGAIN;
> + }
> + if (folio_test_large(src_folio) ||
> + !PageAnonExclusive(&src_folio->page)) {
> + double_pt_unlock(dst_ptl, src_ptl);
> + return -EBUSY;
> + }
> +
> + BUG_ON(!folio_test_anon(src_folio));
> +
> + folio_move_anon_rmap(src_folio, dst_vma);
> + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> +
> + orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
> + orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
> + orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma);

can_change_pte_writable()?
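
I.e., roughly, assuming can_change_pte_writable() (declared in
mm/internal.h) is reachable from here; again the exact pte_mkwrite()
calling convention depends on the tree this lands in:

	orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
	orig_dst_pte = pte_mkdirty(orig_dst_pte);
	/* only grant write permission when it's known to be safe */
	if (can_change_pte_writable(dst_vma, dst_addr, orig_dst_pte))
		orig_dst_pte = pte_mkwrite(orig_dst_pte);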

> +
> + set_pte_at(dst_mm, dst_addr, dst_pte, orig_dst_pte);
> +
> + if (dst_mm != src_mm) {
> + inc_mm_counter(dst_mm, MM_ANONPAGES);
> + dec_mm_counter(src_mm, MM_ANONPAGES);
> + }
> +
> + double_pt_unlock(dst_ptl, src_ptl);
> +
> + return 0;
> +}
> +
> +static int remap_swap_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + unsigned long dst_addr, unsigned long src_addr,
> + pte_t *dst_pte, pte_t *src_pte,
> + pte_t orig_dst_pte, pte_t orig_src_pte,
> + spinlock_t *dst_ptl, spinlock_t *src_ptl)
> +{
> + if (!pte_swp_exclusive(orig_src_pte))
> + return -EBUSY;
> +
> + double_pt_lock(dst_ptl, src_ptl);
> +
> + if (!pte_same(*src_pte, orig_src_pte) ||
> + !pte_same(*dst_pte, orig_dst_pte)) {
> + double_pt_unlock(dst_ptl, src_ptl);
> + return -EAGAIN;
> + }
> +
> + orig_src_pte = ptep_get_and_clear(src_mm, src_addr, src_pte);
> + set_pte_at(dst_mm, dst_addr, dst_pte, orig_src_pte);
> +
> + if (dst_mm != src_mm) {
> + inc_mm_counter(dst_mm, MM_SWAPENTS);
> + dec_mm_counter(src_mm, MM_SWAPENTS);
> + }
> +
> + double_pt_unlock(dst_ptl, src_ptl);
> +
> + return 0;
> +}
> +
> +/*
> + * The mmap_lock for reading is held by the caller. Just move the page
> + * from src_pmd to dst_pmd if possible, and return zero if it succeeded
> + * in moving the page, or a negative error otherwise.
> + */
> +static int remap_pages_pte(struct mm_struct *dst_mm,
> + struct mm_struct *src_mm,
> + pmd_t *dst_pmd,
> + pmd_t *src_pmd,
> + struct vm_area_struct *dst_vma,
> + struct vm_area_struct *src_vma,
> + unsigned long dst_addr,
> + unsigned long src_addr,
> + __u64 mode)
> +{
> + swp_entry_t entry;
> + pte_t orig_src_pte, orig_dst_pte;
> + pte_t src_folio_pte;
> + spinlock_t *src_ptl, *dst_ptl;
> + pte_t *src_pte = NULL;
> + pte_t *dst_pte = NULL;
> +
> + struct folio *src_folio = NULL;
> + struct anon_vma *src_anon_vma = NULL;
> + struct mmu_notifier_range range;
> + int err = 0;
> +

Same flush_cache_range().
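
I.e. the PAGE_SIZE variant of the same thing, before the notifier is
armed (sketch only, same caveat as above):

	flush_cache_range(src_vma, src_addr, src_addr + PAGE_SIZE);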

> + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
> + src_addr, src_addr + PAGE_SIZE);
> + mmu_notifier_invalidate_range_start(&range);
> +retry:
> + dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
> +
> + /* Retry if a huge pmd materialized from under us */
> + if (unlikely(!dst_pte)) {
> + err = -EAGAIN;
> + goto out;
> + }
> +
> + src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
> +
> + /*
> + * We held the mmap_lock for reading so MADV_DONTNEED
> + * can zap transparent huge pages under us, or the
> + * transparent huge page fault can establish new
> + * transparent huge pages under us.
> + */
> + if (unlikely(!src_pte)) {
> + err = -EAGAIN;
> + goto out;
> + }
> +
> + BUG_ON(pmd_none(*dst_pmd));
> + BUG_ON(pmd_none(*src_pmd));
> + BUG_ON(pmd_trans_huge(*dst_pmd));
> + BUG_ON(pmd_trans_huge(*src_pmd));
> +
> + spin_lock(dst_ptl);
> + orig_dst_pte = *dst_pte;
> + spin_unlock(dst_ptl);
> + if (!pte_none(orig_dst_pte)) {
> + err = -EEXIST;
> + goto out;
> + }
> +
> + spin_lock(src_ptl);
> + orig_src_pte = *src_pte;
> + spin_unlock(src_ptl);
> + if (pte_none(orig_src_pte)) {
> + if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES))
> + err = -ENOENT;
> + else /* nothing to do to remap a hole */
> + err = 0;
> + goto out;
> + }
> +
> + /* If PTE changed after we locked the folio then start over */
> + if (src_folio && unlikely(!pte_same(src_folio_pte, orig_src_pte))) {
> + err = -EAGAIN;
> + goto out;
> + }
> +
> + if (pte_present(orig_src_pte)) {
> + /*
> + * Pin and lock both source folio and anon_vma. Since we are in
> + * an RCU read section, we can't block, so on contention we have
> + * to unmap the ptes, obtain the lock and retry.
> + */
> + if (!src_folio) {
> + struct folio *folio;
> +
> + /*
> + * Pin the page while holding the lock to be sure the
> + * page isn't freed under us
> + */
> + spin_lock(src_ptl);
> + if (!pte_same(orig_src_pte, *src_pte)) {
> + spin_unlock(src_ptl);
> + err = -EAGAIN;
> + goto out;
> + }
> +
> + folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
> + if (!folio || folio_test_large(folio) ||
> + !PageAnonExclusive(&folio->page)) {
> + spin_unlock(src_ptl);
> + err = -EBUSY;
> + goto out;
> + }
> +
> + folio_get(folio);
> + src_folio = folio;
> + src_folio_pte = orig_src_pte;
> + spin_unlock(src_ptl);
> +
> + if (!folio_trylock(src_folio)) {
> + pte_unmap(&orig_src_pte);
> + pte_unmap(&orig_dst_pte);
> + src_pte = dst_pte = NULL;
> + /* now we can block and wait */
> + folio_lock(src_folio);
> + goto retry;
> + }
> + }
> +
> + if (!src_anon_vma) {
> + /*
> + * folio_referenced walks the anon_vma chain
> + * without the folio lock. Serialize against it with
> + * the anon_vma lock, the folio lock is not enough.
> + */
> + src_anon_vma = folio_get_anon_vma(src_folio);
> + if (!src_anon_vma) {
> + /* page was unmapped from under us */
> + err = -EAGAIN;
> + goto out;
> + }
> + if (!anon_vma_trylock_write(src_anon_vma)) {
> + pte_unmap(&orig_src_pte);
> + pte_unmap(&orig_dst_pte);
> + src_pte = dst_pte = NULL;
> + /* now we can block and wait */
> + anon_vma_lock_write(src_anon_vma);
> + goto retry;
> + }
> + }
> +
> + err = remap_present_pte(dst_mm, src_mm, dst_vma, src_vma,
> + dst_addr, src_addr, dst_pte, src_pte,
> + orig_dst_pte, orig_src_pte,
> + dst_ptl, src_ptl, src_folio);
> + } else {
> + entry = pte_to_swp_entry(orig_src_pte);
> + if (non_swap_entry(entry)) {
> + if (is_migration_entry(entry)) {
> + pte_unmap(&orig_src_pte);
> + pte_unmap(&orig_dst_pte);
> + src_pte = dst_pte = NULL;
> + migration_entry_wait(src_mm, src_pmd,
> + src_addr);
> + err = -EAGAIN;
> + } else
> + err = -EFAULT;
> + goto out;
> + }
> +
> + err = remap_swap_pte(dst_mm, src_mm, dst_addr, src_addr,
> + dst_pte, src_pte,
> + orig_dst_pte, orig_src_pte,
> + dst_ptl, src_ptl);
> + }
> +
> +out:
> + if (src_anon_vma) {
> + anon_vma_unlock_write(src_anon_vma);
> + put_anon_vma(src_anon_vma);
> + }
> + if (src_folio) {
> + folio_unlock(src_folio);
> + folio_put(src_folio);
> + }
> + if (dst_pte)
> + pte_unmap(dst_pte);
> + if (src_pte)
> + pte_unmap(src_pte);
> + mmu_notifier_invalidate_range_end(&range);
> +
> + return err;
> +}
> +
> +static int validate_remap_areas(struct vm_area_struct *src_vma,

s/remap/move/

> + struct vm_area_struct *dst_vma)
> +{
> + /* Only allow remapping if both have the same access and protection */

s/remap/move/

PS: I'll stop commenting on the renamings.. but please take a look over the
whole patch..

> + if ((src_vma->vm_flags & VM_ACCESS_FLAGS) != (dst_vma->vm_flags & VM_ACCESS_FLAGS) ||
> + pgprot_val(src_vma->vm_page_prot) != pgprot_val(dst_vma->vm_page_prot))
> + return -EINVAL;
> +
> + /* Only allow remapping if both are mlocked or both aren't */
> + if ((src_vma->vm_flags & VM_LOCKED) != (dst_vma->vm_flags & VM_LOCKED))
> + return -EINVAL;
> +
> + if (!(src_vma->vm_flags & VM_WRITE) || !(dst_vma->vm_flags & VM_WRITE))
> + return -EINVAL;
> +
> + /*
> + * Be strict and only allow remap_pages if either the src or
> + * dst range is registered in the userfaultfd to prevent
> + * userland errors going unnoticed. As far as the VM
> + * consistency is concerned, it would be perfectly safe to
> + * remove this check, but there's no useful usage for
> + * remap_pages outside of userfaultfd registered ranges. This
> + * is after all why it is an ioctl belonging to the
> + * userfaultfd and not a syscall.
> + *
> + * Allow both vmas to be registered in the userfaultfd, just
> + * in case somebody finds a way to make such a case useful.
> + * Normally only one of the two vmas would be registered in
> + * the userfaultfd.
> + */
> + if (!dst_vma->vm_userfaultfd_ctx.ctx &&
> + !src_vma->vm_userfaultfd_ctx.ctx)
> + return -EINVAL;

When rethinking this, I'm wondering whether we should make it stricter.

The problem is, src/dst are not symmetric in this case. We got dst_mm from
the uffd context passed in, so we know at least dst_mm has something
registered onto the uffd we're working on, whether that is the dst_vma or
not. It would be weird if dst_vma is not registered to the uffd context
we're using right now..

Then, can we have a case where the src vma is registered with uffd but the
dst vma is not? It sounds really weird, since the dst mm needs to be
registered to uffd anyway. The interface sounds a bit loose and unclear to me.

I'm thinking a more reasonable way to sanity check this: we model UFFDIO_MOVE
the same way as UFFDIO_COPY/CONTINUE/... where we always assume the uffd
context is the target (uffd registration required), while the current mm is
always the source (no uffd registration required).

It also means "dropping the current page" does not require privilege (here
UFFDIO_MOVE is the same as DONTNEED, as discussed), while "installing a
page into the pgtable" does require some privilege (privilege granted
by the uffd itself).

Then it means something like:

if (!dst_vma->vm_userfaultfd_ctx.ctx ||
dst_vma->vm_userfaultfd_ctx.ctx != ctx)
return -EINVAL;

Here ctx should be passed over from upper.

What do you think?

PS: it seems the current uffd ioctls don't double check that the vma's ctx
is the same as the ctx the ioctl is operating on.. but it seems we should
do that for all of them. I'll think more about it..

> +
> + /*
> + * FIXME: only allow remapping across anonymous vmas,
> + * tmpfs should be added.
> + */
> + if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
> + return -EINVAL;
> +
> + /*
> + * Ensure the dst_vma has an anon_vma or this page
> + * would get a NULL anon_vma when moved in the
> + * dst_vma.
> + */
> + if (unlikely(anon_vma_prepare(dst_vma)))
> + return -ENOMEM;
> +
> + return 0;
> +}
> +
> +/**
> + * remap_pages - remap arbitrary anonymous pages of an existing vma
> + * @dst_start: start of the destination virtual memory range
> + * @src_start: start of the source virtual memory range
> + * @len: length of the virtual memory range
> + *
> + * remap_pages() remaps arbitrary anonymous pages atomically in zero
> + * copy. It only works on non shared anonymous pages because those can
> + * be relocated without generating non linear anon_vmas in the rmap
> + * code.
> + *
> + * It provides a zero copy mechanism to handle userspace page faults.
> + * The source vma pages should have mapcount == 1, which can be
> + * enforced by using madvise(MADV_DONTFORK) on src vma.
> + *
> + * The thread receiving the page during the userland page fault
> + * will receive the faulting page in the source vma through the network,
> + * storage or any other I/O device (MADV_DONTFORK in the source vma
> + * avoids remap_pages() failing with -EBUSY if the process forks before
> + * remap_pages() is called), then it will call remap_pages() to map the
> + * page in the faulting address in the destination vma.
> + *
> + * This userfaultfd command works purely via pagetables, so it's the
> + * most efficient way to move physical non shared anonymous pages
> + * across different virtual addresses. Unlike mremap()/mmap()/munmap()
> + * it does not create any new vmas. The mapping in the destination
> + * address is atomic.
> + *
> + * It only works if the vma protection bits are identical from the
> + * source and destination vma.
> + *
> + * It can remap non shared anonymous pages within the same vma too.
> + *
> + * If the source virtual memory range has any unmapped holes, or if
> + * the destination virtual memory range is not a whole unmapped hole,
> + * remap_pages() will fail respectively with -ENOENT or -EEXIST. This
> + * provides a very strict behavior to avoid any chance of memory
> + * corruption going unnoticed if there are userland race conditions.
> + * Only one thread should resolve the userland page fault at any given
> + * time for any given faulting address. This means that if two threads
> + * try to both call remap_pages() on the same destination address at the
> + * same time, the second thread will get an explicit error from this
> + * command.
> + *
> + * The command retval will be "len" if successful. The command
> + * however can be interrupted by fatal signals or errors. If
> + * interrupted it will return the number of bytes successfully
> + * remapped before the interruption if any, or the negative error if
> + * none. It will never return zero. Either it will return an error or
> + * an amount of bytes successfully moved. If the retval reports a
> + * "short" remap, the remap_pages() command should be repeated by
> + * userland with src+retval, dst+retval, len-retval if it wants to know
> + * about the error that interrupted it.
> + *
> + * The UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES flag can be specified to
> + * prevent -ENOENT errors from materializing if there are holes in the
> + * source virtual range that is being remapped. The holes will be
> + * accounted as successfully remapped in the retval of the
> + * command. This is mostly useful to remap hugepage naturally aligned
> + * virtual regions without knowing if there are transparent hugepages
> + * in the regions or not, but preventing the risk of having to split
> + * the hugepmd during the remap.
> + *
> + * If there's any rmap walk that is taking the anon_vma locks without
> + * first obtaining the folio lock (the only current instance is
> + * folio_referenced), they will have to verify if the folio->mapping
> + * has changed after taking the anon_vma lock. If it changed they
> + * should release the lock and retry obtaining a new anon_vma, because
> + * it means the anon_vma was changed by remap_pages() before the lock
> + * could be obtained. This is the only additional complexity added to
> + * the rmap code to provide this anonymous page remapping functionality.
> + */
> +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + unsigned long dst_start, unsigned long src_start,
> + unsigned long len, __u64 mode)
> +{
> + struct vm_area_struct *src_vma, *dst_vma;
> + unsigned long src_addr, dst_addr;
> + pmd_t *src_pmd, *dst_pmd;
> + long err = -EINVAL;
> + ssize_t moved = 0;
> +
> + /*
> + * Sanitize the command parameters:
> + */
> + BUG_ON(src_start & ~PAGE_MASK);
> + BUG_ON(dst_start & ~PAGE_MASK);
> + BUG_ON(len & ~PAGE_MASK);
> +
> + /* Does the address range wrap, or is the span zero-sized? */
> + BUG_ON(src_start + len <= src_start);
> + BUG_ON(dst_start + len <= dst_start);
> +
> + /*
> + * Because these are read semaphores there's no risk of lock
> + * inversion.
> + */
> + mmap_read_lock(dst_mm);
> + if (dst_mm != src_mm)
> + mmap_read_lock(src_mm);
> +
> + /*
> + * Make sure the vma is not shared, that the src and dst remap
> + * ranges are both valid and fully within a single existing
> + * vma.
> + */
> + src_vma = find_vma(src_mm, src_start);
> + if (!src_vma || (src_vma->vm_flags & VM_SHARED))
> + goto out;
> + if (src_start < src_vma->vm_start ||
> + src_start + len > src_vma->vm_end)
> + goto out;
> +
> + dst_vma = find_vma(dst_mm, dst_start);
> + if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
> + goto out;
> + if (dst_start < dst_vma->vm_start ||
> + dst_start + len > dst_vma->vm_end)
> + goto out;
> +
> + err = validate_remap_areas(src_vma, dst_vma);
> + if (err)
> + goto out;
> +
> + for (src_addr = src_start, dst_addr = dst_start;
> + src_addr < src_start + len;) {
> + spinlock_t *ptl;
> + pmd_t dst_pmdval;
> + unsigned long step_size;
> +
> + BUG_ON(dst_addr >= dst_start + len);
> + /*
> + * Below works because an anonymous area would not have a
> + * transparent huge PUD. If file-backed support is added,
> + * that case would need to be handled here.
> + */
> + src_pmd = mm_find_pmd(src_mm, src_addr);
> + if (unlikely(!src_pmd)) {
> + if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES)) {
> + err = -ENOENT;
> + break;
> + }
> + src_pmd = mm_alloc_pmd(src_mm, src_addr);
> + if (unlikely(!src_pmd)) {
> + err = -ENOMEM;
> + break;
> + }
> + }
> + dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
> + if (unlikely(!dst_pmd)) {
> + err = -ENOMEM;
> + break;
> + }
> +
> + dst_pmdval = pmdp_get_lockless(dst_pmd);
> + /*
> + * If the dst_pmd is mapped as THP, don't override it and just
> + * be strict. If dst_pmd changes into THP after this check, the
> + * remap_pages_huge_pmd() will detect the change and retry
> + * while remap_pages_pte() will detect the change and fail.
> + */
> + if (unlikely(pmd_trans_huge(dst_pmdval))) {
> + err = -EEXIST;
> + break;
> + }
> +
> + ptl = pmd_trans_huge_lock(src_pmd, src_vma);
> + if (ptl) {
> + if (pmd_devmap(*src_pmd)) {
> + spin_unlock(ptl);
> + err = -ENOENT;
> + break;
> + }
> +
> + /*
> + * Check if we can move the pmd without
> + * splitting it. First check the address
> + * alignment to be the same in src/dst. These
> + * checks don't actually need the PT lock but
> + * it's good to do it here to optimize this
> + * block away at build time if
> + * CONFIG_TRANSPARENT_HUGEPAGE is not set.
> + */
> + if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) ||
> + src_start + len - src_addr < HPAGE_PMD_SIZE || !pmd_none(dst_pmdval)) {
> + spin_unlock(ptl);
> + split_huge_pmd(src_vma, src_pmd, src_addr);
> + continue;
> + }
> +
> + err = remap_pages_huge_pmd(dst_mm, src_mm,
> + dst_pmd, src_pmd,
> + dst_pmdval,
> + dst_vma, src_vma,
> + dst_addr, src_addr);
> + step_size = HPAGE_PMD_SIZE;
> + } else {
> + if (pmd_none(*src_pmd)) {
> + if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES)) {
> + err = -ENOENT;
> + break;
> + }
> + if (unlikely(__pte_alloc(src_mm, src_pmd))) {
> + err = -ENOMEM;
> + break;
> + }
> + }
> +
> + if (unlikely(pte_alloc(dst_mm, dst_pmd))) {
> + err = -ENOMEM;
> + break;
> + }
> +
> + err = remap_pages_pte(dst_mm, src_mm,
> + dst_pmd, src_pmd,
> + dst_vma, src_vma,
> + dst_addr, src_addr,
> + mode);
> + step_size = PAGE_SIZE;
> + }
> +
> + cond_resched();
> +
> + if (fatal_signal_pending(current)) {
> + /* Do not override an error */
> + if (!err || err == -EAGAIN)
> + err = -EINTR;
> + break;
> + }
> +
> + if (err) {
> + if (err == -EAGAIN)
> + continue;
> + break;
> + }
> +
> + /* Proceed to the next page */
> + dst_addr += step_size;
> + src_addr += step_size;
> + moved += step_size;
> + }
> +
> +out:
> + mmap_read_unlock(dst_mm);
> + if (dst_mm != src_mm)
> + mmap_read_unlock(src_mm);
> + BUG_ON(moved < 0);
> + BUG_ON(err > 0);
> + BUG_ON(!moved && !err);
> + return moved ? moved : err;
> +}
> --
> 2.42.0.609.gbb76f46606-goog
>

--
Peter Xu

2023-10-12 22:02:40

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 1/3] mm/rmap: support move to different root anon_vma in folio_move_anon_rmap()

On Sun, Oct 08, 2023 at 11:42:26PM -0700, Suren Baghdasaryan wrote:
> From: Andrea Arcangeli <[email protected]>
>
> For now, folio_move_anon_rmap() was only used to move a folio to a
> different anon_vma after fork(), whereby the root anon_vma stayed
> unchanged. For that, it was sufficient to hold the folio lock when
> calling folio_move_anon_rmap().
>
> However, we want to make use of folio_move_anon_rmap() to move folios
> between VMAs that have a different root anon_vma. As folio_referenced()
> performs an RMAP walk without holding the folio lock but only holding the
> anon_vma in read mode, holding the folio lock is insufficient.
>
> When moving to an anon_vma with a different root anon_vma, we'll have to
> hold both, the folio lock and the anon_vma lock in write mode.
> Consequently, whenever we succeeded in folio_lock_anon_vma_read() to
> read-lock the anon_vma, we have to re-check if the mapping was changed
> in the meantime. If that was the case, we have to retry.
>
> Note that folio_move_anon_rmap() must only be called if the anon page is
> exclusive to a process, and must not be called on KSM folios.
>
> This is a preparation for UFFDIO_MOVE, which will hold the folio lock,
> the anon_vma lock in write mode, and the mmap_lock in read mode.
>
> Signed-off-by: Andrea Arcangeli <[email protected]>
> Signed-off-by: Suren Baghdasaryan <[email protected]>
> ---
> mm/rmap.c | 24 ++++++++++++++++++++++++
> 1 file changed, 24 insertions(+)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index c1f11c9dbe61..f9ddc50269d2 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -542,7 +542,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
> struct anon_vma *root_anon_vma;
> unsigned long anon_mapping;
>
> +retry:
> rcu_read_lock();
> +retry_under_rcu:
> anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
> if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
> goto out;
> @@ -552,6 +554,16 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
> anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
> root_anon_vma = READ_ONCE(anon_vma->root);
> if (down_read_trylock(&root_anon_vma->rwsem)) {
> + /*
> + * folio_move_anon_rmap() might have changed the anon_vma as we
> + * might not hold the folio lock here.
> + */
> + if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
> + anon_mapping)) {
> + up_read(&root_anon_vma->rwsem);
> + goto retry_under_rcu;

Is adding this specific label worthwhile? How about rcu unlock and goto
retry (then it'll also be clear that we won't hold the rcu read lock for
an unpredictable time)?
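
I.e. something like (sketch):

	if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
		     anon_mapping)) {
		up_read(&root_anon_vma->rwsem);
		rcu_read_unlock();
		goto retry;
	}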

> + }
> +
> /*
> * If the folio is still mapped, then this anon_vma is still
> * its anon_vma, and holding the mutex ensures that it will

--
Peter Xu

2023-10-12 22:31:15

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 3/3] selftests/mm: add UFFDIO_MOVE ioctl test

On Sun, Oct 08, 2023 at 11:42:28PM -0700, Suren Baghdasaryan wrote:
> Add a test for new UFFDIO_MOVE ioctl which uses uffd to move source
> into destination buffer while checking the contents of both after
> remapping. After the operation the content of the destination buffer
> should match the original source buffer's content while the source
> buffer should be zeroed.
>
> Signed-off-by: Suren Baghdasaryan <[email protected]>
> ---
> tools/testing/selftests/mm/uffd-common.c | 41 ++++++++++++-
> tools/testing/selftests/mm/uffd-common.h | 1 +
> tools/testing/selftests/mm/uffd-unit-tests.c | 62 ++++++++++++++++++++
> 3 files changed, 102 insertions(+), 2 deletions(-)
>
> diff --git a/tools/testing/selftests/mm/uffd-common.c b/tools/testing/selftests/mm/uffd-common.c
> index 02b89860e193..ecc1244f1c2b 100644
> --- a/tools/testing/selftests/mm/uffd-common.c
> +++ b/tools/testing/selftests/mm/uffd-common.c
> @@ -52,6 +52,13 @@ static int anon_allocate_area(void **alloc_area, bool is_src)
> *alloc_area = NULL;
> return -errno;
> }
> +
> + /* Prevent source pages from collapsing into THPs */
> + if (madvise(*alloc_area, nr_pages * page_size, MADV_NOHUGEPAGE)) {
> + *alloc_area = NULL;
> + return -errno;
> + }

Can we move this to test-specific code?

> +
> return 0;
> }
>
> @@ -484,8 +491,14 @@ void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args)
> offset = (char *)(unsigned long)msg->arg.pagefault.address - area_dst;
> offset &= ~(page_size-1);
>
> - if (copy_page(uffd, offset, args->apply_wp))
> - args->missing_faults++;
> + /* UFFD_MOVE is supported for anon non-shared mappings. */
> + if (uffd_test_ops == &anon_uffd_test_ops && !map_shared) {

IIUC this means move_page() will start to run in many other tests... as
long as they're anonymous & private. Probably not wanted, because not all
tests need this MOVE handling, and it also means UFFDIO_COPY is never
tested on anonymous memory..

You can override uffd_args.handle_fault(). Axel just added a hook which
seems usable here as well. See 99aa77215ad02.
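
An untested sketch of what that could look like, mirroring the offset
computation in uffd_handle_page_fault() (field name per the hook
referenced above):

	static void uffd_move_handle_fault(struct uffd_msg *msg,
					   struct uffd_args *args)
	{
		unsigned long offset;

		offset = (char *)(unsigned long)msg->arg.pagefault.address - area_dst;
		offset &= ~(page_size - 1);

		/* resolve the missing fault by moving the source page */
		if (move_page(uffd, offset))
			args->missing_faults++;
	}

and then in uffd_move_test():

	args.handle_fault = uffd_move_handle_fault;

That way the generic handler keeps exercising UFFDIO_COPY everywhere else.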

> + if (move_page(uffd, offset))
> + args->missing_faults++;
> + } else {
> + if (copy_page(uffd, offset, args->apply_wp))
> + args->missing_faults++;
> + }
> }
> }
>
> @@ -620,6 +633,30 @@ int copy_page(int ufd, unsigned long offset, bool wp)
> return __copy_page(ufd, offset, false, wp);
> }
>
> +int move_page(int ufd, unsigned long offset)
> +{
> + struct uffdio_move uffdio_move;
> +
> + if (offset >= nr_pages * page_size)
> + err("unexpected offset %lu\n", offset);
> + uffdio_move.dst = (unsigned long) area_dst + offset;
> + uffdio_move.src = (unsigned long) area_src + offset;
> + uffdio_move.len = page_size;
> + uffdio_move.mode = UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES;
> + uffdio_move.move = 0;
> + if (ioctl(ufd, UFFDIO_MOVE, &uffdio_move)) {
> + /* real retval in uffdio_move.move */
> + if (uffdio_move.move != -EEXIST)
> + err("UFFDIO_MOVE error: %"PRId64,
> + (int64_t)uffdio_move.move);
> + wake_range(ufd, uffdio_move.dst, page_size);
> + } else if (uffdio_move.move != page_size) {
> + err("UFFDIO_MOVE error: %"PRId64, (int64_t)uffdio_move.move);
> + } else
> + return 1;
> + return 0;
> +}
> +
> int uffd_open_dev(unsigned int flags)
> {
> int fd, uffd;
> diff --git a/tools/testing/selftests/mm/uffd-common.h b/tools/testing/selftests/mm/uffd-common.h
> index 7c4fa964c3b0..f4d79e169a3d 100644
> --- a/tools/testing/selftests/mm/uffd-common.h
> +++ b/tools/testing/selftests/mm/uffd-common.h
> @@ -111,6 +111,7 @@ void wp_range(int ufd, __u64 start, __u64 len, bool wp);
> void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args);
> int __copy_page(int ufd, unsigned long offset, bool retry, bool wp);
> int copy_page(int ufd, unsigned long offset, bool wp);
> +int move_page(int ufd, unsigned long offset);
> void *uffd_poll_thread(void *arg);
>
> int uffd_open_dev(unsigned int flags);
> diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
> index 2709a34a39c5..f0ded3b34367 100644
> --- a/tools/testing/selftests/mm/uffd-unit-tests.c
> +++ b/tools/testing/selftests/mm/uffd-unit-tests.c
> @@ -824,6 +824,10 @@ static void uffd_events_test_common(bool wp)
> char c;
> struct uffd_args args = { 0 };
>
> + /* Prevent source pages from being mapped more than once */
> + if (madvise(area_src, nr_pages * page_size, MADV_DONTFORK))
> + err("madvise(MADV_DONTFORK) failed");

Modifying the events test is weird.. I assume you won't need this anymore
after you switch to the handle_fault() hook.

> +
> fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
> if (uffd_register(uffd, area_dst, nr_pages * page_size,
> true, wp, false))
> @@ -1062,6 +1066,58 @@ static void uffd_poison_test(uffd_test_args_t *targs)
> uffd_test_pass();
> }
>
> +static void uffd_move_test(uffd_test_args_t *targs)
> +{
> + unsigned long nr;
> + pthread_t uffd_mon;
> + char c;
> + unsigned long long count;
> + struct uffd_args args = { 0 };
> +
> + if (uffd_register(uffd, area_dst, nr_pages * page_size,
> + true, false, false))
> + err("register failure");
> +
> + if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
> + err("uffd_poll_thread create");
> +
> + /*
> + * Read each of the pages back using the UFFD-registered mapping. We
> + * expect that the first time we touch a page, it will result in a missing
> + * fault. uffd_poll_thread will resolve the fault by remapping source
> + * page to destination.
> + */
> + for (nr = 0; nr < nr_pages; nr++) {
> + /* Check area_src content */
> + count = *area_count(area_src, nr);
> + if (count != count_verify[nr])
> + err("nr %lu source memory invalid %llu %llu\n",
> + nr, count, count_verify[nr]);
> +
> + /* Faulting into area_dst should remap the page */
> + count = *area_count(area_dst, nr);
> + if (count != count_verify[nr])
> + err("nr %lu memory corruption %llu %llu\n",
> + nr, count, count_verify[nr]);
> +
> + /* Re-check area_src content which should be empty */
> + count = *area_count(area_src, nr);
> + if (count != 0)
> + err("nr %lu move failed %llu %llu\n",
> + nr, count, count_verify[nr]);

All of the above should see zeros, right? Because I don't think anyone has
boosted the counter at all..

Maybe set some non-zero values first? Then the re-check would make more
sense.

If you want, I think we can also make the uffd-stress.c test cover MOVE,
basically replacing all UFFDIO_COPY calls when e.g. the user specifies it
on the cmdline. Optional, and it may need some touch-ups here and there,
though.

Thanks,

> + }
> +
> + if (write(pipefd[1], &c, sizeof(c)) != sizeof(c))
> + err("pipe write");
> + if (pthread_join(uffd_mon, NULL))
> + err("join() failed");
> +
> + if (args.missing_faults != nr_pages || args.minor_faults != 0)
> + uffd_test_fail("stats check error");
> + else
> + uffd_test_pass();
> +}
> +
> /*
> * Test the returned uffdio_register.ioctls with different register modes.
> * Note that _UFFDIO_ZEROPAGE is tested separately in the zeropage test.
> @@ -1139,6 +1195,12 @@ uffd_test_case_t uffd_tests[] = {
> .mem_targets = MEM_ALL,
> .uffd_feature_required = 0,
> },
> + {
> + .name = "move",
> + .uffd_fn = uffd_move_test,
> + .mem_targets = MEM_ANON,
> + .uffd_feature_required = UFFD_FEATURE_MOVE,
> + },
> {
> .name = "wp-fork",
> .uffd_fn = uffd_wp_fork_test,
> --
> 2.42.0.609.gbb76f46606-goog
>

--
Peter Xu

2023-10-13 08:05:48

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v3 1/3] mm/rmap: support move to different root anon_vma in folio_move_anon_rmap()

On 13.10.23 00:01, Peter Xu wrote:
> On Sun, Oct 08, 2023 at 11:42:26PM -0700, Suren Baghdasaryan wrote:
>> From: Andrea Arcangeli <[email protected]>
>>
>> For now, folio_move_anon_rmap() was only used to move a folio to a
>> different anon_vma after fork(), whereby the root anon_vma stayed
>> unchanged. For that, it was sufficient to hold the folio lock when
>> calling folio_move_anon_rmap().
>>
>> However, we want to make use of folio_move_anon_rmap() to move folios
>> between VMAs that have a different root anon_vma. As folio_referenced()
>> performs an RMAP walk without holding the folio lock but only holding the
>> anon_vma in read mode, holding the folio lock is insufficient.
>>
>> When moving to an anon_vma with a different root anon_vma, we'll have to
>> hold both, the folio lock and the anon_vma lock in write mode.
>> Consequently, whenever we succeeded in folio_lock_anon_vma_read() to
>> read-lock the anon_vma, we have to re-check if the mapping was changed
>> in the meantime. If that was the case, we have to retry.
>>
>> Note that folio_move_anon_rmap() must only be called if the anon page is
>> exclusive to a process, and must not be called on KSM folios.
>>
>> This is a preparation for UFFDIO_MOVE, which will hold the folio lock,
>> the anon_vma lock in write mode, and the mmap_lock in read mode.
>>
>> Signed-off-by: Andrea Arcangeli <[email protected]>
>> Signed-off-by: Suren Baghdasaryan <[email protected]>
>> ---
>> mm/rmap.c | 24 ++++++++++++++++++++++++
>> 1 file changed, 24 insertions(+)
>>
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index c1f11c9dbe61..f9ddc50269d2 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -542,7 +542,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
>> struct anon_vma *root_anon_vma;
>> unsigned long anon_mapping;
>>
>> +retry:
>> rcu_read_lock();
>> +retry_under_rcu:
>> anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
>> if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
>> goto out;
>> @@ -552,6 +554,16 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
>> anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
>> root_anon_vma = READ_ONCE(anon_vma->root);
>> if (down_read_trylock(&root_anon_vma->rwsem)) {
>> + /*
>> + * folio_move_anon_rmap() might have changed the anon_vma as we
>> + * might not hold the folio lock here.
>> + */
>> + if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
>> + anon_mapping)) {
>> + up_read(&root_anon_vma->rwsem);
>> + goto retry_under_rcu;
>
> Is adding this specific label worthwhile? How about rcu unlock and goto
> retry (then it'll also be clear that we won't hold the rcu read lock for
> an unpredictable time)?

+1, sounds good to me

--
Cheers,

David / dhildenb

2023-10-13 09:58:00

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On 12.10.23 22:11, Peter Xu wrote:
> On Mon, Oct 09, 2023 at 05:29:08PM +0100, Lokesh Gidra wrote:
>> On Mon, Oct 9, 2023 at 5:24 PM David Hildenbrand <[email protected]> wrote:
>>>
>>> On 09.10.23 18:21, Suren Baghdasaryan wrote:
>>>> On Mon, Oct 9, 2023 at 7:38 AM David Hildenbrand <[email protected]> wrote:
>>>>>
>>>>> On 09.10.23 08:42, Suren Baghdasaryan wrote:
>>>>>> From: Andrea Arcangeli <[email protected]>
>>>>>>
>>>>>> Implement the uABI of UFFDIO_MOVE ioctl.
>>>>>> UFFDIO_COPY performs ~20% better than UFFDIO_MOVE when the application
>>>>>> needs pages to be allocated [1]. However, with UFFDIO_MOVE, if pages are
>>>>>> available (in userspace) for recycling, as is usually the case in heap
>>>>>> compaction algorithms, then we can avoid the page allocation and memcpy
>>>>>> (done by UFFDIO_COPY). Also, since the pages are recycled in the
>>>>>> userspace, we avoid the need to release (via madvise) the pages back to
>>>>>> the kernel [2].
>>>>>> We see over 40% reduction (on a Google pixel 6 device) in the compacting
>>>>>> thread’s completion time by using UFFDIO_MOVE vs. UFFDIO_COPY. This was
>>>>>> measured using a benchmark that emulates a heap compaction implementation
>>>>>> using userfaultfd (to allow concurrent accesses by application threads).
>>>>>> More details of the usecase are explained in [2].
>>>>>> Furthermore, UFFDIO_MOVE enables moving swapped-out pages without
>>>>>> touching them within the same vma. Today, it can only be done by mremap,
>>>>>> however it forces splitting the vma.
>>>>>>
>>>>>> [1] https://lore.kernel.org/all/[email protected]/
>>>>>> [2] https://lore.kernel.org/linux-mm/CA+EESO4uO84SSnBhArH4HvLNhaUQ5nZKNKXqxRCyjniNVjp0Aw@mail.gmail.com/
>>>>>>
>>>>>> Update for the ioctl_userfaultfd(2) manpage:
>>>>>>
>>>>>> UFFDIO_MOVE
>>>>>> (Since Linux xxx) Move a continuous memory chunk into the
>>>>>> userfault registered range and optionally wake up the blocked
>>>>>> thread. The source and destination addresses and the number of
>>>>>> bytes to move are specified by the src, dst, and len fields of
>>>>>> the uffdio_move structure pointed to by argp:
>>>>>>
>>>>>> struct uffdio_move {
>>>>>> __u64 dst; /* Destination of move */
>>>>>> __u64 src; /* Source of move */
>>>>>> __u64 len; /* Number of bytes to move */
>>>>>> __u64 mode; /* Flags controlling behavior of move */
>>>>>> __s64 move; /* Number of bytes moved, or negated error */
>>>>>> };
>>>>>>
>>>>>> The following value may be bitwise ORed in mode to change the
>>>>>> behavior of the UFFDIO_MOVE operation:
>>>>>>
>>>>>> UFFDIO_MOVE_MODE_DONTWAKE
>>>>>> Do not wake up the thread that waits for page-fault
>>>>>> resolution
>>>>>>
>>>>>> UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES
>>>>>> Allow holes in the source virtual range that is being moved.
>>>>>> When not specified, the holes will result in ENOENT error.
>>>>>> When specified, the holes will be accounted as successfully
>>>>>> moved memory. This is mostly useful to move hugepage aligned
>>>>>> virtual regions without knowing if there are transparent
>>>>>> hugepages in the regions or not, but preventing the risk of
>>>>>> having to split the hugepage during the operation.
>>>>>>
>>>>>> The move field is used by the kernel to return the number of
>>>>>> bytes that was actually moved, or an error (a negated errno-
>>>>>> style value). If the value returned in move doesn't match the
>>>>>> value that was specified in len, the operation fails with the
>>>>>> error EAGAIN. The move field is output-only; it is not read by
>>>>>> the UFFDIO_MOVE operation.
>>>>>>
>>>>>> The operation may fail for various reasons. Usually, remapping of
>>>>>> pages that are not exclusive to the given process fails; once KSM
>>>>>> deduplicates pages or fork() COW-shares pages with child
>>>>>> processes, they are no longer exclusive.
>>>>>> kernel might only perform lightweight checks for detecting whether
>>>>>> the pages are exclusive, and return -EBUSY in case that check fails.
>>>>>> To make the operation more likely to succeed, KSM should be
>>>>>> disabled, fork() should be avoided or MADV_DONTFORK should be
>>>>>> configured for the source VMA before fork().
>>>>>>
>>>>>> This ioctl(2) operation returns 0 on success. In this case, the
>>>>>> entire area was moved. On error, -1 is returned and errno is
>>>>>> set to indicate the error. Possible errors include:
>>>>>>
>>>>>> EAGAIN The number of bytes moved (i.e., the value returned in
>>>>>> the move field) does not equal the value that was
>>>>>> specified in the len field.
>>>>>>
>>>>>> EINVAL Either dst or len was not a multiple of the system page
>>>>>> size, or the range specified by src and len or dst and len
>>>>>> was invalid.
>>>>>>
>>>>>> EINVAL An invalid bit was specified in the mode field.
>>>>>>
>>>>>> ENOENT
>>>>>> The source virtual memory range has unmapped holes and
>>>>>> UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES is not set.
>>>>>>
>>>>>> EEXIST
>>>>>> The destination virtual memory range is fully or partially
>>>>>> mapped.
>>>>>>
>>>>>> EBUSY
>>>>>> The pages in the source virtual memory range are not
>>>>>> exclusive to the process. The kernel might only perform
>>>>>> lightweight checks for detecting whether the pages are
>>>>>> exclusive. To make the operation more likely to succeed,
>>>>>> KSM should be disabled, fork() should be avoided or
>>>>>> MADV_DONTFORK should be configured for the source virtual
>>>>>> memory area before fork().
>>>>>>
>>>>>> ENOMEM Allocating memory needed for the operation failed.
>>>>>>
>>>>>> ESRCH
>>>>>> The faulting process has exited at the time of a
>>>>>> UFFDIO_MOVE operation.
>>>>>>
>>>>>
>>>>> A general comment simply because I realized that just now: does anything
>>>>> speak against limiting the operations now to a single MM?
>>>>>
>>>>> The use cases I heard so far don't need it. If ever required, we could
>>>>> consider extending it.
>>>>>
>>>>> Let's reduce complexity and KIS unless really required.
>>>>
>>>> Let me check if there are use cases that require moves between MMs.
>>>> Andrea seems to have put considerable effort to make it work between
>>>> MMs and it would be a pity to lose that. I can send a follow-up patch
>>>> to recover that functionality and even if it does not get merged, it
>>>> can be used in the future as a reference. But first let me check if we
>>>> can drop it.
>>
>> For the compaction use case that we have it's fine to limit it to
>> single MM. However, for general use I think Peter will have a better
>> idea.
>

Hi Peter,

> I used to have the same thought with David on whether we can simplify the
> design to e.g. limit it to single mm. Then I found that the trickiest is
> actually patch 1 together with the anon_vma manipulations, and the problem
> is that's not avoidable even if we restrict the api to apply on single mm.
>
> What else we can benefit from single mm? One less mmap read lock, but
> probably that's all we can get; IIUC we need to keep most of the rest of
> the code, e.g. pgtable walks, double pgtable lockings, etc.

No existing mechanisms move anon pages between unrelated processes, that
naturally makes me nervous if we're doing it "just because we can".

>
> Actually, even though I have no solid clue, but I had a feeling that there
> can be some interesting way to leverage this across-mm movement, while
> keeping things all safe (by e.g. elaborately requiring other proc to create
> uffd and deliver to this proc).

Okay, but no real use cases yet.

>
> Considering Andrea's original version already contains those bits and all
> above, I'd vote that we go ahead with supporting two MMs.

You can do nasty things with that, as it stands, on the upstream codebase.

If you pin the page in src_mm and move it to dst_mm, you successfully
broke an invariant that "exclusive" means "no other references from
other processes". That page is marked exclusive but it is, in fact, not
exclusive.

Once you achieved that, you can easily have src_mm not have
MMF_HAS_PINNED, so you can just COW-share that page. Now you
successfully broke the invariant that COW-shared pages must not be
pinned. And you can even trigger VM_BUG_ONs, like in
sanity_check_pinned_pages().

Can it all be fixed? Sure, with more complexity. For something without
clear motivation, I'll have to pass.

Once there is real demand, we can revisit it and explore what else we
would have to take care of (I don't know how memcg behaves when moving
between completely unrelated processes, maybe that works as expected, I
don't know and I have no time to spare on reviewing features with no
real use cases) and announce it as a new feature.


Note that (with only reading the documentation) it also kept me
wondering how the MMs are even implied from

struct uffdio_move {
__u64 dst; /* Destination of move */
__u64 src; /* Source of move */
__u64 len; /* Number of bytes to move */
__u64 mode; /* Flags controlling behavior of move */
__s64 move; /* Number of bytes moved, or negated error */
};

That probably has to be documented as well, in which address space dst
and src reside.

--
Cheers,

David / dhildenb

2023-10-13 16:13:01

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Fri, Oct 13, 2023 at 11:56:31AM +0200, David Hildenbrand wrote:
> Hi Peter,

Hi, David,

>
> > I used to have the same thought with David on whether we can simplify the
> > design to e.g. limit it to single mm. Then I found that the trickiest is
> > actually patch 1 together with the anon_vma manipulations, and the problem
> > is that's not avoidable even if we restrict the api to apply on single mm.
> >
> > What else we can benefit from single mm? One less mmap read lock, but
> > probably that's all we can get; IIUC we need to keep most of the rest of
> > the code, e.g. pgtable walks, double pgtable lockings, etc.
>
> No existing mechanisms move anon pages between unrelated processes, that
> naturally makes me nervous if we're doing it "just because we can".

IMHO that's also the potential, when guarded with userfaultfd descriptor
being shared between two processes.

See below with more comment on the raised concerns.

>
> >
> > Actually, even though I have no solid clue, but I had a feeling that there
> > can be some interesting way to leverage this across-mm movement, while
> > keeping things all safe (by e.g. elaborately requiring other proc to create
> > uffd and deliver to this proc).
>
> Okay, but no real use cases yet.

I can provide a "not solid" example. I didn't mention it because it's
really something that just popped into my mind when thinking cross-mm, so I
never discussed with anyone yet nor shared it anywhere.

Consider VM live upgrade in a generic form (e.g., no VFIO), we can do that
very efficiently with shmem or hugetlbfs, but not yet anonymous. We can do
extremely efficient postcopy live upgrade now with anonymous if with REMAP.

Basically I see it a potential way of moving memory efficiently especially
with thp.

>
> >
> > Considering Andrea's original version already contains those bits and all
> > above, I'd vote that we go ahead with supporting two MMs.
>
> You can do nasty things with that, as it stands, on the upstream codebase.
>
> If you pin the page in src_mm and move it to dst_mm, you successfully broke
> an invariant that "exclusive" means "no other references from other
> processes". That page is marked exclusive but it is, in fact, not exclusive.

It is still exclusive to the dst mm? I see your point, but I think you're
taking exclusiveness altogether with pinning, and IMHO that may not be
always necessary?

>
> Once you achieved that, you can easily have src_mm not have MMF_HAS_PINNED,

(I suppose you meant dst_mm here)

> so you can just COW-share that page. Now you successfully broke the
> invariant that COW-shared pages must not be pinned. And you can even trigger
> VM_BUG_ONs, like in sanity_check_pinned_pages().

Yeah, that's really unfortunate. But frankly, I don't think it's the fault
of this new feature, but the rest.

Let's imagine if the MMF_HAS_PINNED wasn't proposed as a per-mm flag, but
per-vma, which I don't see why we can't because it's simply a hint so far.
Then if we apply the same rule here, UFFDIO_REMAP won't even work for
single-mm as long as cross-vma. Then UFFDIO_REMAP as a whole feature will
be NACKed simply because of this..

And I don't think anyone can guarantee a per-vma MMF_HAS_PINNED can never
happen, or any further change to pinning solution that may affect this. So
far it just looks unsafe to remap a pin page to me.

I don't have a good suggestion here if this is a risk.. I'd think it risky
then to do REMAP over pinned pages no matter cross-mm or single-mm. It
means probably we just rule them out: folio_maybe_dma_pinned() may not even
be enough to be safe with fast-gup. We may need page_needs_cow_for_dma()
with proper write_protect_seq no matter cross-mm or single-mm?
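
Roughly, at the point where the source folio has been locked, that could
look like (hypothetical placement, just to illustrate the check):

	/* rule out folios that may be, or are about to be, dma-pinned */
	if (page_needs_cow_for_dma(src_vma, &src_folio->page)) {
		err = -EBUSY;
		goto out;
	}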

>
> Can it all be fixed? Sure, with more complexity. For something without clear
> motivation, I'll have to pass.

I think what you raised is a valid concern, but IMHO it's better fixed no
matter cross-mm or single-mm. What do you think?

In general, pinning loses its whole point here to me for a userspace that
either DONTNEEDs or REMAPs it. What would be great to do here is to unpin
it upon DONTNEED/REMAP/whatever drops the page, because it loses its
coherency anyway, IMHO.

>
> Once there is real demand, we can revisit it and explore what else we would
> have to take care of (I don't know how memcg behaves when moving between
> completely unrelated processes, maybe that works as expected, I don't know
> and I have no time to spare on reviewing features with no real use cases)
> and announce it as a new feature.

Good point. memcg is probably needed..

So you reminded me to do a more thorough review against zap/fault paths, I
think what's missing are (besides page pinning):

- mem_cgroup_charge()/mem_cgroup_uncharge():

(side note: I think folio_throttle_swaprate() is only for when
allocating new pages, so not needed here)

- check_stable_address_space() (under pgtable lock)

- tlb flush

Hmm???????????????? I can't see anywhere that we do a tlb flush; batched or
not, either single-mm or cross-mm should need it. Is this missing?

>
>
> Note: that (with only reading the documentation) it also kept me wondering
> how the MMs are even implied from
>
> struct uffdio_move {
> __u64 dst; /* Destination of move */
> __u64 src; /* Source of move */
> __u64 len; /* Number of bytes to move */
> __u64 mode; /* Flags controlling behavior of move */
> __s64 move; /* Number of bytes moved, or negated error */
> };
>
> That probably has to be documented as well, in which address space dst and
> src reside.

Agreed, some better documentation will never hurt. Dst should be in the mm
address space that was bound to the userfault descriptor. Src should be in
the current mm address space.

Thanks,

--
Peter Xu

2023-10-13 16:49:45

by Lokesh Gidra

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Fri, Oct 13, 2023 at 9:08 AM Peter Xu <[email protected]> wrote:
>
> On Fri, Oct 13, 2023 at 11:56:31AM +0200, David Hildenbrand wrote:
> > Hi Peter,
>
> Hi, David,
>
> >
> > > I used to have the same thought with David on whether we can simplify the
> > > design to e.g. limit it to single mm. Then I found that the trickiest is
> > > actually patch 1 together with the anon_vma manipulations, and the problem
> > > is that's not avoidable even if we restrict the api to apply on single mm.
> > >
> > > What else we can benefit from single mm? One less mmap read lock, but
> > > probably that's all we can get; IIUC we need to keep most of the rest of
> > > the code, e.g. pgtable walks, double pgtable lockings, etc.
> >
> > No existing mechanisms move anon pages between unrelated processes, that
> > naturally makes me nervous if we're doing it "just because we can".
>
> IMHO that's also the potential, when guarded with userfaultfd descriptor
> being shared between two processes.
>
> See below with more comment on the raised concerns.
>
> >
> > >
> > > Actually, even though I have no solid clue, but I had a feeling that there
> > > can be some interesting way to leverage this across-mm movement, while
> > > keeping things all safe (by e.g. elaborately requiring other proc to create
> > > uffd and deliver to this proc).
> >
> > Okay, but no real use cases yet.
>
> I can provide a "not solid" example. I didn't mention it because it's
> really something that just popped into my mind when thinking cross-mm, so I
> never discussed with anyone yet nor shared it anywhere.
>
> Consider VM live upgrade in a generic form (e.g., no VFIO), we can do that
> very efficiently with shmem or hugetlbfs, but not yet anonymous. We can do
> extremely efficient postcopy live upgrade now with anonymous if with REMAP.
>
> Basically I see it a potential way of moving memory efficiently especially
> with thp.
>
> >
> > >
> > > Considering Andrea's original version already contains those bits and all
> > > above, I'd vote that we go ahead with supporting two MMs.
> >
> > You can do nasty things with that, as it stands, on the upstream codebase.
> >
> > If you pin the page in src_mm and move it to dst_mm, you successfully broke
> > an invariant that "exclusive" means "no other references from other
> > processes". That page is marked exclusive but it is, in fact, not exclusive.
>
> It is still exclusive to the dst mm? I see your point, but I think you're
> taking exclusiveness altogether with pinning, and IMHO that may not be
> always necessary?
>
> >
> > Once you achieved that, you can easily have src_mm not have MMF_HAS_PINNED,
>
> (I suppose you meant dst_mm here)
>
> > so you can just COW-share that page. Now you successfully broke the
> > invariant that COW-shared pages must not be pinned. And you can even trigger
> > VM_BUG_ONs, like in sanity_check_pinned_pages().
>
> Yeah, that's really unfortunate. But frankly, I don't think it's the fault
> of this new feature, but the rest.
>
> Let's imagine if the MMF_HAS_PINNED wasn't proposed as a per-mm flag, but
> per-vma, which I don't see why we can't because it's simply a hint so far.
> Then if we apply the same rule here, UFFDIO_REMAP won't even work for
> single-mm as long as cross-vma. Then UFFDIO_REMAP as a whole feature will
> be NACKed simply because of this..
>
> And I don't think anyone can guarantee a per-vma MMF_HAS_PINNED can never
> happen, or any further change to pinning solution that may affect this. So
> far it just looks unsafe to remap a pin page to me.
>
> I don't have a good suggestion here if this is a risk.. I'd think it risky
> then to do REMAP over pinned pages no matter cross-mm or single-mm. It
> means probably we just rule them out: folio_maybe_dma_pinned() may not even
> be enough to be safe with fast-gup. We may need page_needs_cow_for_dma()
> with proper write_protect_seq no matter cross-mm or single-mm?
>
> >
> > Can it all be fixed? Sure, with more complexity. For something without clear
> > motivation, I'll have to pass.
>
> I think what you raised is a valid concern, but IMHO it's better fixed no
> matter cross-mm or single-mm. What do you think?
>
> In general, pinning loses its whole point here to me for a userspace that
> either DONTNEEDs or REMAPs it. What would be great to do here is to unpin
> it upon DONTNEED/REMAP/whatever drops the page, because it loses its
> coherency anyway, IMHO.
>
> >
> > Once there is real demand, we can revisit it and explore what else we would
> > have to take care of (I don't know how memcg behaves when moving between
> > completely unrelated processes, maybe that works as expected, I don't know
> > and I have no time to spare on reviewing features with no real use cases)
> > and announce it as a new feature.
>
> Good point. memcg is probably needed..
>
> So you reminded me to do a more thorough review against zap/fault paths, I
> think what's missing are (besides page pinning):
>
> - mem_cgroup_charge()/mem_cgroup_uncharge():
>
> (side note: I think folio_throttle_swaprate() is only for when
> allocating new pages, so not needed here)
>
> - check_stable_address_space() (under pgtable lock)
>
> - tlb flush
>
> Hmm???????????????? I can't see anywhere that we do a tlb flush; batched or
> not, either single-mm or cross-mm should need it. Is this missing?
>
IIUC, ptep_clear_flush() flushes the tlb entry, so I think we are doing
unbatched flushing. Possibly a nice performance improvement later on
would be to try doing it batched. Suren can throw more light on this.

One thing I was wondering is: don't we need a cache flush for the src
pages? mremap's move_page_tables() does it. IMHO, it's required here
as well.
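
To make the flushing points concrete, a rough per-page sketch follows; the
function name and the surrounding PTE locking are assumptions, not the
actual patch code:

/*
 * Hedged sketch: move one present anonymous page from src to dst.
 * Real code must hold the PTE locks and handle racing faults.
 */
static void move_present_page_sketch(struct vm_area_struct *src_vma,
                                     unsigned long src_addr, pte_t *src_pte,
                                     struct vm_area_struct *dst_vma,
                                     unsigned long dst_addr, pte_t *dst_pte,
                                     struct page *page)
{
        /* Write back the source mapping's cache lines first, as
         * move_page_tables() does via flush_cache_range(). */
        flush_cache_page(src_vma, src_addr, page_to_pfn(page));

        /* Clears the source PTE and flushes its TLB entry: the
         * unbatched flush mentioned above. */
        ptep_clear_flush(src_vma, src_addr, src_pte);

        /* Install the page at the destination address. */
        set_pte_at(dst_vma->vm_mm, dst_addr, dst_pte,
                   mk_pte(page, dst_vma->vm_page_prot));
}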

> >
> >
> > Note that (with only reading the documentation) it also kept me wondering
> > how the MMs are even implied from
> >
> > struct uffdio_move {
> > __u64 dst; /* Destination of move */
> > __u64 src; /* Source of move */
> > __u64 len; /* Number of bytes to move */
> > __u64 mode; /* Flags controlling behavior of move */
> > __s64 move; /* Number of bytes moved, or negated error */
> > };
> >
> > That probably has to be documented as well, in which address space dst and
> > src reside.
>
> Agreed, some better documentation will never hurt. Dst should be in the mm
> address space that was bound to the userfault descriptor. Src should be in
> the current mm address space.
>
> Thanks,
>
> --
> Peter Xu
>

2023-10-13 17:07:36

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Fri, Oct 13, 2023 at 09:49:10AM -0700, Lokesh Gidra wrote:
> On Fri, Oct 13, 2023 at 9:08 AM Peter Xu <[email protected]> wrote:
> >
> > On Fri, Oct 13, 2023 at 11:56:31AM +0200, David Hildenbrand wrote:
> > > Hi Peter,
> >
> > Hi, David,
> >
> > >
> > > > I used to have the same thought with David on whether we can simplify the
> > > > design to e.g. limit it to single mm. Then I found that the trickiest is
> > > > actually patch 1 together with the anon_vma manipulations, and the problem
> > > > is that's not avoidable even if we restrict the api to apply on single mm.
> > > >
> > > > What else we can benefit from single mm? One less mmap read lock, but
> > > > probably that's all we can get; IIUC we need to keep most of the rest of
> > > > the code, e.g. pgtable walks, double pgtable lockings, etc.
> > >
> > > No existing mechanisms move anon pages between unrelated processes, that
> > > naturally makes me nervous if we're doing it "just because we can".
> >
> > IMHO that's also the potential, when guarded with userfaultfd descriptor
> > being shared between two processes.
> >
> > See below with more comment on the raised concerns.
> >
> > >
> > > >
> > > > Actually, even though I have no solid clue, but I had a feeling that there
> > > > can be some interesting way to leverage this across-mm movement, while
> > > > keeping things all safe (by e.g. elaborately requiring other proc to create
> > > > uffd and deliver to this proc).
> > >
> > > Okay, but no real use cases yet.
> >
> > I can provide a "not solid" example. I didn't mention it because it's
> > really something that just popped into my mind when thinking cross-mm, so I
> > never discussed with anyone yet nor shared it anywhere.
> >
> > Consider VM live upgrade in a generic form (e.g., no VFIO), we can do that
> > very efficiently with shmem or hugetlbfs, but not yet anonymous. We can do
> > extremely efficient postcopy live upgrade now with anonymous if with REMAP.
> >
> > Basically I see it a potential way of moving memory efficiently especially
> > with thp.
> >
> > >
> > > >
> > > > Considering Andrea's original version already contains those bits and all
> > > > above, I'd vote that we go ahead with supporting two MMs.
> > >
> > > You can do nasty things with that, as it stands, on the upstream codebase.
> > >
> > > If you pin the page in src_mm and move it to dst_mm, you successfully broke
> > > an invariant that "exclusive" means "no other references from other
> > > processes". That page is marked exclusive but it is, in fact, not exclusive.
> >
> > It is still exclusive to the dst mm? I see your point, but I think you're
> > taking exclusiveness altogether with pinning, and IMHO that may not be
> > always necessary?
> >
> > >
> > > Once you achieved that, you can easily have src_mm not have MMF_HAS_PINNED,
> >
> > (I suppose you meant dst_mm here)
> >
> > > so you can just COW-share that page. Now you successfully broke the
> > > invariant that COW-shared pages must not be pinned. And you can even trigger
> > > VM_BUG_ONs, like in sanity_check_pinned_pages().
> >
> > Yeah, that's really unfortunate. But frankly, I don't think it's the fault
> > of this new feature, but the rest.
> >
> > Let's imagine if the MMF_HAS_PINNED wasn't proposed as a per-mm flag, but
> > per-vma, which I don't see why we can't because it's simply a hint so far.
> > Then if we apply the same rule here, UFFDIO_REMAP won't even work for
> > single-mm as long as cross-vma. Then UFFDIO_REMAP as a whole feature will
> > be NACKed simply because of this..
> >
> > And I don't think anyone can guarantee a per-vma MMF_HAS_PINNED can never
> > happen, or any further change to the pinning solution that may affect this.
> > So far it just looks unsafe to me to remap a pinned page.
> >
> > I don't have a good suggestion here if this is a risk.. I'd think it risky
> > then to do REMAP over pinned pages no matter cross-mm or single-mm. It
> > means probably we just rule them out: folio_maybe_dma_pinned() may not even
> > be enough to be safe with fast-gup. We may need page_needs_cow_for_dma()
> > with proper write_protect_seq no matter cross-mm or single-mm?
> >
> > >
> > > Can it all be fixed? Sure, with more complexity. For something without clear
> > > motivation, I'll have to pass.
> >
> > I think what you raised is a valid concern, but IMHO it's better fixed no
> > matter cross-mm or single-mm. What do you think?
> >
> > In general, pinning loses its whole point here, to me, for a userspace that
> > either DONTNEEDs or REMAPs it. What would be great to do here is to unpin
> > it upon DONTNEED/REMAP/whatever drops the page, because it loses its
> > coherency anyway, IMHO.
> >
> > >
> > > Once there is real demand, we can revisit it and explore what else we would
> > > have to take care of (I don't know how memcg behaves when moving between
> > > completely unrelated processes, maybe that works as expected, I don't know
> > > and I have no time to spare on reviewing features with no real use cases)
> > > and announce it as a new feature.
> >
> > Good point. memcg is probably needed..
> >
> > So you reminded me to do a more thorough review against zap/fault paths, I
> > think what's missing are (besides page pinning):
> >
> > - mem_cgroup_charge()/mem_cgroup_uncharge():
> >
> > (side note: I think folio_throttle_swaprate() is only for when
> > allocating new pages, so not needed here)
> >
> > - check_stable_address_space() (under pgtable lock)
> >
> > - tlb flush
> >
> > Hmm... I can't see anywhere we do a tlb flush, batched or
> > not; either single-mm or cross-mm should need it. Is this missing?
> >
> IIUC, ptep_clear_flush() flushes the tlb entry. So I think we are doing
> unbatched flushing. A nice performance improvement later on would be to
> do it batched. Suren can throw more light on it.

Oh yeah.. thanks.

>
> One thing I was wondering is: don't we need a cache flush for the src
> pages? mremap's move_page_tables() does it. IMHO, it's required here
> as well.

As I commented in my reply, I also think it's needed. Otherwise, on some
arches I think we can have a page containing stale data if it is not fully
flushed before the movement. x86 is probably fine, though.

>
> > >
> > >
> > > Note that (with only reading the documentation) it also kept me wondering
> > > how the MMs are even implied from
> > >
> > > struct uffdio_move {
> > > __u64 dst; /* Destination of move */
> > > __u64 src; /* Source of move */
> > > __u64 len; /* Number of bytes to move */
> > > __u64 mode; /* Flags controlling behavior of move */
> > > __s64 move; /* Number of bytes moved, or negated error */
> > > };
> > >
> > > That probably has to be documented as well, in which address space dst and
> > > src reside.
> >
> > Agreed, some better documentation will never hurt. Dst should be in the mm
> > address space that was bound to the userfault descriptor. Src should be in
> > the current mm address space.
> >
> > Thanks,
> >
> > --
> > Peter Xu
> >
>

--
Peter Xu
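
To illustrate the address-space convention just described, a hedged
userspace sketch (UFFDIO_MOVE and struct uffdio_move as posted in this
series; error handling simplified):

#include <linux/userfaultfd.h>
#include <sys/ioctl.h>

/*
 * dst is interpreted in the mm bound to the userfaultfd;
 * src is interpreted in the calling process's mm.
 */
static long uffd_move(int uffd, unsigned long dst, unsigned long src,
                      unsigned long len)
{
        struct uffdio_move mv = {
                .dst  = dst,
                .src  = src,
                .len  = len,
                .mode = 0,
                .move = 0,
        };

        if (ioctl(uffd, UFFDIO_MOVE, &mv))
                return mv.move;   /* negated error reported here */
        return mv.move;           /* == len on full success */
}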

2023-10-16 18:02:34

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

[...]

>>> Actually, even though I have no solid clue, but I had a feeling that there
>>> can be some interesting way to leverage this across-mm movement, while
>>> keeping things all safe (by e.g. elaborately requiring other proc to create
>>> uffd and deliver to this proc).
>>
>> Okay, but no real use cases yet.
>
> I can provide a "not solid" example. I didn't mention it because it's
> really something that just popped into my mind when thinking cross-mm, so I
> never discussed with anyone yet nor shared it anywhere.
>
> Consider VM live upgrade in a generic form (e.g., no VFIO), we can do that
> very efficiently with shmem or hugetlbfs, but not yet anonymous. We can do
> extremely efficient postcopy live upgrade now with anonymous if with REMAP.
>
> Basically I see it a potential way of moving memory efficiently especially
> with thp.

It's an interesting use case indeed. The questions would be if this is
(a) a use case we want to support; (b) why we need to make that decision
now and add that feature.

One question is if this kind of "moving memory between processes" really
should be done, because intuitively SHMEM smells like the right thing to
use here (two processes wanting to access the same memory).

The downsides of shmem are lack of the shared zeropage and KSM. The
shared zeropage is usually less of a concern for VMs, but KSM is.
However, KSM will also disallow moving pages here. But all
non-deduplicated ones could be moved.

[I wondered whether moving KSM pages (rmap items) could be done;
probably in some limited form with some more added complexity]

>
>>
>>>
>>> Considering Andrea's original version already contains those bits and all
>>> above, I'd vote that we go ahead with supporting two MMs.
>>
>> You can do nasty things with that, as it stands, on the upstream codebase.
>>
>> If you pin the page in src_mm and move it to dst_mm, you successfully broke
>> an invariant that "exclusive" means "no other references from other
>> processes". That page is marked exclusive but it is, in fact, not exclusive.
>
> It is still exclusive to the dst mm? I see your point, but I think you're
> taking exclusiveness altogether with pinning, and IMHO that may not be
> always necessary?

That's the definition of PAE. See do_wp_page() on when we reset PAE:
when there are no other references, which implies no other references
from other processes. Maybe you have "currently exclusively mapped" in
mind, which is what the mapcount can be used for.

>
>>
>> Once you achieved that, you can easily have src_mm not have MMF_HAS_PINNED,
>
> (I suppose you meant dst_mm here)

Yes.

>
>> so you can just COW-share that page. Now you successfully broke the
>> invariant that COW-shared pages must not be pinned. And you can even trigger
>> VM_BUG_ONs, like in sanity_check_pinned_pages().
>
> Yeah, that's really unfortunate. But frankly, I don't think it's the fault
> of this new feature, but the rest.
>
> Let's imagine if the MMF_HAS_PINNED wasn't proposed as a per-mm flag, but
> per-vma, which I don't see why we can't because it's simply a hint so far.
> Then if we apply the same rule here, UFFDIO_REMAP won't even work for
> single-mm as long as cross-vma. Then UFFDIO_REMAP as a whole feature will
> be NACKed simply because of this..

Because of gup-fast we likely won't see that happening. And if we would,
it could be handled (src_mm has the flag set, so set it on the destination
if the page may be pinned after hiding it from gup-fast; or simply always
copy the flag if it is set on the src).

>
> And I don't think anyone can guarantee a per-vma MMF_HAS_PINNED can never
> happen, or any further change to the pinning solution that may affect this.
> So far it just looks unsafe to me to remap a pinned page.

It may be questionable to allow remapping pinned pages.

>
> I don't have a good suggestion here if this is a risk.. I'd think it risky
> then to do REMAP over pinned pages no matter cross-mm or single-mm. It
> means probably we just rule them out: folio_maybe_dma_pinned() may not even
> be enough to be safe with fast-gup. We may need page_needs_cow_for_dma()
> with proper write_protect_seq no matter cross-mm or single-mm?

If you unmap and sync against GUP-fast, you can check after unmapping
and remap, and fail if it may be pinned afterwards. Plus an early check
upfront.
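
A rough shape of that check, with the helper name and calling context
assumed rather than taken from the patch:

/*
 * Hedged sketch: fail the move if the folio may be DMA-pinned.
 * Clearing the PTE (with a TLB flush) synchronizes against GUP-fast,
 * so the second check catches pins that raced with the first.
 */
static int check_not_pinned_sketch(struct vm_area_struct *src_vma,
                                   unsigned long src_addr, pte_t *src_pte,
                                   struct folio *folio)
{
        pte_t orig_pte;

        if (folio_maybe_dma_pinned(folio))      /* early check upfront */
                return -EBUSY;

        orig_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
        if (folio_maybe_dma_pinned(folio)) {
                /* A concurrent GUP-fast pin won; restore and fail. */
                set_pte_at(src_vma->vm_mm, src_addr, src_pte, orig_pte);
                return -EBUSY;
        }
        return 0;
}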

>
>>
>> Can it all be fixed? Sure, with more complexity. For something without clear
>> motivation, I'll have to pass.
>
> I think what you raised is a valid concern, but IMHO it's better fixed no
> matter cross-mm or single-mm. What do you think?

single-mm should at least not cause harm, but the semantics are
questionable. cross-mm could, especially with malicious user space that
wants to find ways of harming the kernel.

I'll note that mremap with pinned pages works.

>
> In general, pinning loses its whole point here, to me, for a userspace that
> either DONTNEEDs or REMAPs it. What would be great to do here is to unpin
> it upon DONTNEED/REMAP/whatever drops the page, because it loses its
> coherency anyway, IMHO.

Further, moving a part of a THP would fail either way, because the
pinned THP cannot get split.

--
Cheers,

David / dhildenb

2023-10-16 19:03:17

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

David,

On Mon, Oct 16, 2023 at 08:01:10PM +0200, David Hildenbrand wrote:
> [...]
>
> > > > Actually, even though I have no solid clue, but I had a feeling that there
> > > > can be some interesting way to leverage this across-mm movement, while
> > > > keeping things all safe (by e.g. elaborately requiring other proc to create
> > > > uffd and deliver to this proc).
> > >
> > > Okay, but no real use cases yet.
> >
> > I can provide a "not solid" example. I didn't mention it because it's
> > really something that just popped into my mind when thinking cross-mm, so I
> > never discussed with anyone yet nor shared it anywhere.
> >
> > Consider VM live upgrade in a generic form (e.g., no VFIO), we can do that
> > very efficiently with shmem or hugetlbfs, but not yet anonymous. We can do
> > extremely efficient postcopy live upgrade now with anonymous if with REMAP.
> >
> > Basically I see it a potential way of moving memory efficiently especially
> > with thp.
>
> It's an interesting use case indeed. The questions would be if this is (a) a
> use case we want to support; (b) why we need to make that decision now and
> add that feature.

I would like to support that if nothing stops it from happening, but that's
what we're discussing though..

For (b), I wanted to avoid UFFD_FEATURE_MOVE_CROSS_MM feature flag just for
this, if they're already so close, not to mention current code already
contains cross-mm support.

To support that live upgrade use case, I'd probably need to rework tlb
flushing too, to do the batching (actually a tlb flush is not even needed
for the upgrade scenario..). I'm not sure whether Lokesh's use case would
move large chunks; it would be perfect if Suren did it altogether. But that
one is much easier if transparent to user apps. Cross-mm is not transparent
and needs another feature knob, which I want to avoid if possible.

>
> One question is if this kind of "moving memory between processes" really
> should be done, because intuitively SHMEM smells like the right thing to use
> here (two processes wanting to access the same memory).

That's the whole point, IMHO, where shmem cannot be used. As you said,
when someone cannot use file memory for some reason, like ksm.

>
> The downsides of shmem are lack of the shared zeropage and KSM. The shared
> zeropage is usually less of a concern for VMs, but KSM is. However, KSM will
> also disallow moving pages here. But all non-deduplicated ones could be
> moved.
>
> [I wondered whether moving KSM pages (rmap items) could be done; probably in
> some limited form with some more added complexity]

Yeah we can leave that complexity for later when really needed. Here
cross-mm support, OTOH, isn't making it so complicated, IMHO.

Btw, we don't even necessarily need to be able to migrate KSM pages for a
VM live upgrade use case: we can unmerge the pages, upgrade, and wait for
KSM to scan & merge again on the new binary / mmap. Userspace can have
that control easily, afaiu, via existing madvise().
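
Concretely, something like the following on the guest RAM range
(illustrative; addr and len are placeholders):

#include <sys/mman.h>
#include <err.h>

/* Unmerge any KSM pages in [addr, addr + len) before the move; KSM can
 * re-merge later via MADV_MERGEABLE on the new mapping. */
if (madvise(addr, len, MADV_UNMERGEABLE))
        err(1, "madvise(MADV_UNMERGEABLE)");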

>
> >
> > >
> > > >
> > > > Considering Andrea's original version already contains those bits and all
> > > > above, I'd vote that we go ahead with supporting two MMs.
> > >
> > > You can do nasty things with that, as it stands, on the upstream codebase.
> > >
> > > If you pin the page in src_mm and move it to dst_mm, you successfully broke
> > > an invariant that "exclusive" means "no other references from other
> > > processes". That page is marked exclusive but it is, in fact, not exclusive.
> >
> > It is still exclusive to the dst mm? I see your point, but I think you're
> > taking exclusiveness altogether with pinning, and IMHO that may not be
> > always necessary?
>
> That's the definition of PAE. See do_wp_page() on when we reset PAE: when
> there are no other references, which implies no other references from other
> processes. Maybe you have "currently exclusively mapped" in mind, which is
> what the mapcount can be used for.

Okay.

>
> >
> > >
> > > Once you achieved that, you can easily have src_mm not have MMF_HAS_PINNED,
> >
> > (I suppose you meant dst_mm here)
>
> Yes.
>
> >
> > > so you can just COW-share that page. Now you successfully broke the
> > > invariant that COW-shared pages must not be pinned. And you can even trigger
> > > VM_BUG_ONs, like in sanity_check_pinned_pages().
> >
> > Yeah, that's really unfortunate. But frankly, I don't think it's the fault
> > of this new feature, but the rest.
> >
> > Let's imagine if the MMF_HAS_PINNED wasn't proposed as a per-mm flag, but
> > per-vma, which I don't see why we can't because it's simply a hint so far.
> > Then if we apply the same rule here, UFFDIO_REMAP won't even work for
> > single-mm as long as cross-vma. Then UFFDIO_REMAP as a whole feature will
> > be NACKed simply because of this..
>
> Because of gup-fast we likely won't see that happening. And if we would, it
> could be handled (src_mm has the flag set, so set it on the destination if
> the page may be pinned after hiding it from gup-fast; or simply always copy
> the flag if it is set on the src).
>
> >
> > And I don't think anyone can guarantee a per-vma MMF_HAS_PINNED can never
> > happen, or any further change to the pinning solution that may affect this.
> > So far it just looks unsafe to me to remap a pinned page.
>
> It may be questionable to allow remapping pinned pages.
>
> >
> > I don't have a good suggestion here if this is a risk.. I'd think it risky
> > then to do REMAP over pinned pages no matter cross-mm or single-mm. It
> > means probably we just rule them out: folio_maybe_dma_pinned() may not even
> > be enough to be safe with fast-gup. We may need page_needs_cow_for_dma()
> > with proper write_protect_seq no matter cross-mm or single-mm?
>
> If you unmap and sync against GUP-fast, you can check after unmapping and
> remap, and fail if it may be pinned afterwards. Plus an early check upfront.
>
> >
> > >
> > > Can it all be fixed? Sure, with more complexity. For something without clear
> > > motivation, I'll have to pass.
> >
> > I think what you raised is a valid concern, but IMHO it's better fixed no
> > matter cross-mm or single-mm. What do you think?
>
> single-mm should at least not cause harm, but the semantics are
> questionable. cross-mm could, especially with malicious user space that
> wants to find ways of harming the kernel.

For the kernel, I think we're discussing whether it's safe to do so from
the kernel pov; e.g., whether to exclude pinned pages is part of that.

For the user app, the dest process has provided the uffd descriptor of its
own will, or is a child of the UFFDIO_MOVE issuer when used with
EVENT_FORK. I assume that's already some form of safety check, because it
cannot be any process, only ones that are proactively in close cooperation
with the issuer process.

>
> I'll note that mremap with pinned pages works.

But that's not "by design", am I right? IOW, do we have any real pin user
that relies on mremap() allowing pages to be moved?

I don't see any worded guarantee, at least in the man page, that mremap()
will make sure the PFN won't change after the movement.. even though that
seems to be what's happening now.

Neither do I think that, when designing MMF_HAS_PINNED, we kept in mind
that it won't be affected by someone mremap()ing pinned pages and that we
want to keep it working..

All of it just seems to be an accident..

One step back, we're free to define UFFDIO_MOVE anyway, and we don't
necessarily need to always follow mremap(). E.g., mremap() also supports
ksm pages, but IIUC we already decided not to support that for now in
UFFDIO_MOVE. It seems all fine for UFFDIO_MOVE to clearly fail on
pinned pages from the 1st day, if that satisfies our goals, too.

>
> >
> > In general, pinning lose its whole point here to me for an userspace either
> > if it DONTNEEDs it or REMAP it. What would be great to do here is we unpin
> > it upon DONTNEED/REMAP/whatever drops the page, because it loses its
> > coherency anyway, IMHO.
>
> Further, moving a part of a THP would fail either way, because the pinned
> THP cannot get split.
>
> --
> Cheers,
>
> David / dhildenb
>

Thanks,

--
Peter Xu

2023-10-17 16:16:36

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On 16.10.23 21:01, Peter Xu wrote:
> David,

Hi Peter,

>>> Basically I see it a potential way of moving memory efficiently especially
>>> with thp.
>>
>> It's an interesting use case indeed. The questions would be if this is (a) a
>> use case we want to support; (b) why we need to make that decision now and
>> add that feature.
>
> I would like to support that if nothing stops it from happening, but that's
> what we're discussing though..
>
> For (b), I wanted to avoid UFFD_FEATURE_MOVE_CROSS_MM feature flag just for
> this, if they're already so close, not to mention current code already
> contains cross-mm support.

Yeah, but that implementation is apparently not sufficiently correct yet.

Don't get me wrong, but this feature is already complicated enough that
we should really think twice if we want to make this even more
complicated and harder to maintain -- because once it's in we all know
it's hard to remove and we can easily end up with a maintenance
nightmare without sufficiently good use cases.

>
> If to support that live upgrade use case, I'd probably need to rework tlb
> flushing too to do the batching (actually tlb flush is not even needed for
> upgrade scenario..). I'm not sure whether Lokesh's use case would move
> large chunks, it'll be perfect if Suren did it altogether. But that one is
> much easier if transparent to userapps. Cross-mm is not transparent and
> need another feature knob, which I want to avoid if possible.

And for me it's the other way around: the kernel doesn't have to support
each and every use case. So we better think twice before we do something
we can no longer undo easily.

Further, as we see, this feature without cross-mm capabilities is
perfectly usable for other use cases. So even limited initial support
is extremely valuable on its own.

>
>>
>> One question is if this kind of "moving memory between processes" really
>> should be done, because intuitively SHMEM smells like the right thing to use
>> here (two processes wanting to access the same memory).
>
> That's the whole point, IMHO, where shmem cannot be used. As you said,
> when someone cannot use file memory for some reason, like ksm.

Right, but as I explore below KSM will at least prohibit remapping the
KSM pages, taking some of the benefit away again.

>
>>
>> The downsides of shmem are lack of the shared zeropage and KSM. The shared
>> zeropage is usually less of a concern for VMs, but KSM is. However, KSM will
>> also disallow moving pages here. But all non-deduplicated ones could be
>> moved.
>>
>> [I wondered whether moving KSM pages (rmap items) could be done; probably in
>> some limited form with some more added complexity]
>
> Yeah we can leave that complexity for later when really needed. Here
> cross-mm support, OTOH, isn't making it so complicated, IMHO.
>
> Btw, we don't even necessarily need to be able to migrate KSM pages for a
> VM live upgrade use case: we can unmerge the pages, upgrade, and wait for
> KSM to scan & merge again on the new binary / mmap. Userspace can have
> that control easily, afaiu, via existing madvise().

MADV_POPULATE_WRITE would do, yes.

BTW, wasn't there a way to do VM live-upgrade using fork() and replacing
the binary? I recall that there was at some time either an
implementation in QEMU or a proposal for an implementation; but I don't
know how VM memory was provided. It's certainly harder to move VM memory
using fork().

[...]

>>
>> single-mm should at least not cause harm, but the semantics are
>> questionable. cross-mm could, especially with malicious user space that
>> wants to find ways of harming the kernel.
>
> For the kernel, I think we're discussing whether it's safe to do so from
> the kernel pov; e.g., whether to exclude pinned pages is part of that.
>
> For the user app, the dest process has provided the uffd descriptor of its
> own will, or is a child of the UFFDIO_MOVE issuer when used with
> EVENT_FORK. I assume that's already some form of safety check, because it
> cannot be any process, only ones that are proactively in close cooperation
> with the issuer process.

Is that the case, and will that remain the case? I know people have been
working on transparent user-space swapping using uffd monitor processes. I
thought there would have been ways to achieve that without any
cooperation of the dst.

>
>>
>> I'll note that mremap with pinned pages works.
>
> But that's not "by design", am I right? IOW, do we have any real pin user
> that relies on mremap() allowing pages to be moved?

If in doubt, usually "probably yes".

Hard to tell if the "remap" in mremap indicates that we are simply
remapping pages, and not moving data.

>
> I don't see any worded guarantee, at least in the man page, that mremap()
> will make sure the PFN won't change after the movement.. even though that
> seems to be what's happening now.

If you managed to support page pinning with migration the exact PFN
doesn't matter. Likely nobody would implement that due to the tracking
complexity.

>
> Neither do I think that, when designing MMF_HAS_PINNED, we kept in mind
> that it won't be affected by someone mremap()ing pinned pages and that we
> want to keep it working..
>
> All of it just seems to be an accident..

Not necessarily my opinion, but doesn't really matter here. It's working.

At least it's reasonable to have part of a THP pinned while mremapping the
other part of the THP. That makes things more tricky ... because you
only know that the THP is pinned.

>
> One step back, we're free to define UFFDIO_MOVE anyway, and we don't
> necessarily need to always follow mremap(). E.g., mremap() also supports
> ksm pages, but IIUC we already decided not to support that for now in
> UFFDIO_MOVE. It seems all fine for UFFDIO_MOVE to clearly fail on
> pinned pages from the 1st day, if that satisfies our goals, too.

Yes.

--
Cheers,

David / dhildenb

2023-10-17 19:01:13

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

David,

On Tue, Oct 17, 2023 at 05:55:10PM +0200, David Hildenbrand wrote:
> Don't get me wrong, but this feature is already complicated enough that we
> should really think twice if we want to make this even more complicated and
> harder to maintain -- because once it's in we all know it's hard to remove
> and we can easily end up with a maintenance nightmare without sufficiently
> good use cases.

Yes I agree it's non-trivial. My point is adding cross-mm doesn't make it
even more complicated.. afaics.

For example, could you provide a list of things that will be different to
support single mm or cross mm? I see two things that can be different, but
I'd rather have all of them even if single-mm..

- cgroup: I assume single-mm may avoid uncharge and charge again, but I
prefer it be there even if we only allow single-mm. For example, I'm
not 100% sure whether memcg won't start to behave differently according
to vma attribute in the future.

- page pinning: I assume for single-mm we can avoid checking page pinning
based on the fact that MMF_HAS_PINNED is per-mm, but I also prefer we
fail explicitly on pinned pages over UFFDIO_MOVE because it doesn't
sound correct, and avoid future changes on top of pinning solution that
can change the assumption that "move a pin page within mm" is ok.

Is there anything else that will be different? Did I miss something
important?
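
For the cgroup item, one conceivable shape, assuming the existing memcg
charge/uncharge API is sufficient (the real patch may need a more targeted
transfer; error handling omitted):

/* Hedged sketch: transfer the folio's memcg charge to dst_mm. */
static int move_folio_charge_sketch(struct folio *folio,
                                    struct mm_struct *dst_mm)
{
        mem_cgroup_uncharge(folio);
        return mem_cgroup_charge(folio, dst_mm, GFP_KERNEL);
}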

[...]

> BTW, wasn't there a way to do VM live-upgrade using fork() and replacing the
> binary? I recall that there was at some time either an implementation in
> QEMU or a proposal for an implementation; but I don't know how VM memory was
> provided. It's certainly harder to move VM memory using fork().

Maybe you meant the cpr project. I didn't actually follow that much
previously (and will need to follow more after I took the migration
duties.. when there's a new post), but IIUC at least the latest version
needs to go with file memory only, not anonymous:

https://lore.kernel.org/all/[email protected]/

Guest RAM must be non-volatile across reboot, which can be achieved by
backing it with a dax device, or /dev/shm PKRAM as proposed in...

Guest RAM must be backed by a memory backend with share=on, but
cannot be memory-backend-ram. The memory is re-mmap'd in the
updated process, so guest ram is efficiently preserved in place

My understanding is there used to be a solution for anonymous memory, but
that needs extra kernel changes (MADV_DOEXEC).

https://lore.kernel.org/linux-mm/[email protected]/

I saw that you were part of the discussion, so maybe you will remember some
more clues about that part.

IIUC one core requirement of the whole approach is also that it will cover
VFIO and maintenance of device DMA mappings, in which case it'll be
different with any approach to leverage UFFDIO_MOVE because VFIO will not
be allowed here; again I hope we start with forbid pinning. But it should
be much cleaner on the design when with UFFDIO_MOVE, just not working with
VFIO.

One thing I'd need to measure is latency of UFFDIO_MOVE on page fault
resolutions. I expect no more than tens of microseconds or even less.
Should be drastically smaller than remote postcopy anyway.
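
Measuring that is straightforward from the fault-resolving thread; a rough
sketch, assuming an open uffd and a populated struct uffdio_move mv as
elsewhere in this thread:

#include <time.h>
#include <stdio.h>
#include <sys/ioctl.h>

struct timespec t0, t1;

clock_gettime(CLOCK_MONOTONIC, &t0);
ioctl(uffd, UFFDIO_MOVE, &mv);          /* resolve one fault */
clock_gettime(CLOCK_MONOTONIC, &t1);

printf("UFFDIO_MOVE latency: %ld ns\n",
       (t1.tv_sec - t0.tv_sec) * 1000000000L +
       (t1.tv_nsec - t0.tv_nsec));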

I'm probably off topic.. To go back: let's try to figure out what is
special about cross-mm support. It'll be very weird in the future for
anyone to propose a patch that just adds a feature flag and declares
cross-mm support, if the code is mostly all there. Nothing stops us from
discussing what a cross-mm design will need.

[...]

> Is that the case, and will that remain the case? I know people have been
> working on transparent user-space swapping using uffd monitor processes. I
> thought there would have been ways to achieve that without any cooperation
> of the dst.

Any example?

From what I am aware, all cooperation requires uffd desc forwarding. I
think the trick here is that any userfaultfd desc must be created by its
own process, so far nobody else's. That's more or less the process saying
"I want to do this" of its own volition. The next step is forwarding that
to someone else. A parent process is fine taking the uffd of a child with
EVENT_FORK, as I mentioned, but besides that there is nothing else I can
think of that can violate this guard to manipulate a random process.

Thanks,

--
Peter Xu

2023-10-17 19:41:07

by kernel test robot

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

Hi Suren,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]
[also build test WARNING on next-20231017]
[cannot apply to linus/master v6.6-rc6]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Suren-Baghdasaryan/mm-rmap-support-move-to-different-root-anon_vma-in-folio_move_anon_rmap/20231009-144552
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20231009064230.2952396-3-surenb%40google.com
patch subject: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI
config: i386-randconfig-141-20231017 (https://download.01.org/0day-ci/archive/20231018/[email protected]/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce: (https://download.01.org/0day-ci/archive/20231018/[email protected]/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/

smatch warnings:
mm/userfaultfd.c:1380 remap_pages() warn: unsigned 'src_start + len - src_addr' is never less than zero.

vim +1380 mm/userfaultfd.c

1195
1196 /**
1197 * remap_pages - remap arbitrary anonymous pages of an existing vma
1198 * @dst_start: start of the destination virtual memory range
1199 * @src_start: start of the source virtual memory range
1200 * @len: length of the virtual memory range
1201 *
1202 * remap_pages() remaps arbitrary anonymous pages atomically in zero
1203 * copy. It only works on non shared anonymous pages because those can
1204 * be relocated without generating non linear anon_vmas in the rmap
1205 * code.
1206 *
1207 * It provides a zero copy mechanism to handle userspace page faults.
1208 * The source vma pages should have mapcount == 1, which can be
1209 * enforced by using madvise(MADV_DONTFORK) on src vma.
1210 *
1211 * The thread receiving the page during the userland page fault
1212 * will receive the faulting page in the source vma through the network,
1213 * storage or any other I/O device (MADV_DONTFORK in the source vma
1214 * avoids remap_pages() to fail with -EBUSY if the process forks before
1215 * remap_pages() is called), then it will call remap_pages() to map the
1216 * page in the faulting address in the destination vma.
1217 *
1218 * This userfaultfd command works purely via pagetables, so it's the
1219 * most efficient way to move physical non shared anonymous pages
1220 * across different virtual addresses. Unlike mremap()/mmap()/munmap()
1221 * it does not create any new vmas. The mapping in the destination
1222 * address is atomic.
1223 *
1224 * It only works if the vma protection bits are identical from the
1225 * source and destination vma.
1226 *
1227 * It can remap non shared anonymous pages within the same vma too.
1228 *
1229 * If the source virtual memory range has any unmapped holes, or if
1230 * the destination virtual memory range is not a whole unmapped hole,
1231 * remap_pages() will fail respectively with -ENOENT or -EEXIST. This
1232 * provides a very strict behavior to avoid any chance of memory
1233 * corruption going unnoticed if there are userland race conditions.
1234 * Only one thread should resolve the userland page fault at any given
1235 * time for any given faulting address. This means that if two threads
1236 * try to both call remap_pages() on the same destination address at the
1237 * same time, the second thread will get an explicit error from this
1238 * command.
1239 *
1240 * The command retval will return "len" if successful. The command
1241 * however can be interrupted by fatal signals or errors. If
1242 * interrupted it will return the number of bytes successfully
1243 * remapped before the interruption if any, or the negative error if
1244 * none. It will never return zero. Either it will return an error or
1245 * an amount of bytes successfully moved. If the retval reports a
1246 * "short" remap, the remap_pages() command should be repeated by
1247 * userland with src+retval, dst+reval, len-retval if it wants to know
1248 * about the error that interrupted it.
1249 *
1250 * The UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES flag can be specified to
1251 * prevent -ENOENT errors to materialize if there are holes in the
1252 * source virtual range that is being remapped. The holes will be
1253 * accounted as successfully remapped in the retval of the
1254 * command. This is mostly useful to remap hugepage naturally aligned
1255 * virtual regions without knowing if there are transparent hugepage
1256 * in the regions or not, but preventing the risk of having to split
1257 * the hugepmd during the remap.
1258 *
1259 * If there's any rmap walk that is taking the anon_vma locks without
1260 * first obtaining the folio lock (the only current instance is
1261 * folio_referenced), they will have to verify if the folio->mapping
1262 * has changed after taking the anon_vma lock. If it changed they
1263 * should release the lock and retry obtaining a new anon_vma, because
1264 * it means the anon_vma was changed by remap_pages() before the lock
1265 * could be obtained. This is the only additional complexity added to
1266 * the rmap code to provide this anonymous page remapping functionality.
1267 */
1268 ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
1269 unsigned long dst_start, unsigned long src_start,
1270 unsigned long len, __u64 mode)
1271 {
1272 struct vm_area_struct *src_vma, *dst_vma;
1273 unsigned long src_addr, dst_addr;
1274 pmd_t *src_pmd, *dst_pmd;
1275 long err = -EINVAL;
1276 ssize_t moved = 0;
1277
1278 /*
1279 * Sanitize the command parameters:
1280 */
1281 BUG_ON(src_start & ~PAGE_MASK);
1282 BUG_ON(dst_start & ~PAGE_MASK);
1283 BUG_ON(len & ~PAGE_MASK);
1284
1285 /* Does the address range wrap, or is the span zero-sized? */
1286 BUG_ON(src_start + len <= src_start);
1287 BUG_ON(dst_start + len <= dst_start);
1288
1289 /*
1290 * Because these are read sempahores there's no risk of lock
1291 * inversion.
1292 */
1293 mmap_read_lock(dst_mm);
1294 if (dst_mm != src_mm)
1295 mmap_read_lock(src_mm);
1296
1297 /*
1298 * Make sure the vma is not shared, that the src and dst remap
1299 * ranges are both valid and fully within a single existing
1300 * vma.
1301 */
1302 src_vma = find_vma(src_mm, src_start);
1303 if (!src_vma || (src_vma->vm_flags & VM_SHARED))
1304 goto out;
1305 if (src_start < src_vma->vm_start ||
1306 src_start + len > src_vma->vm_end)
1307 goto out;
1308
1309 dst_vma = find_vma(dst_mm, dst_start);
1310 if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
1311 goto out;
1312 if (dst_start < dst_vma->vm_start ||
1313 dst_start + len > dst_vma->vm_end)
1314 goto out;
1315
1316 err = validate_remap_areas(src_vma, dst_vma);
1317 if (err)
1318 goto out;
1319
1320 for (src_addr = src_start, dst_addr = dst_start;
1321 src_addr < src_start + len;) {
1322 spinlock_t *ptl;
1323 pmd_t dst_pmdval;
1324 unsigned long step_size;
1325
1326 BUG_ON(dst_addr >= dst_start + len);
1327 /*
1328 * Below works because anonymous area would not have a
1329 * transparent huge PUD. If file-backed support is added,
1330 * that case would need to be handled here.
1331 */
1332 src_pmd = mm_find_pmd(src_mm, src_addr);
1333 if (unlikely(!src_pmd)) {
1334 if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES)) {
1335 err = -ENOENT;
1336 break;
1337 }
1338 src_pmd = mm_alloc_pmd(src_mm, src_addr);
1339 if (unlikely(!src_pmd)) {
1340 err = -ENOMEM;
1341 break;
1342 }
1343 }
1344 dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
1345 if (unlikely(!dst_pmd)) {
1346 err = -ENOMEM;
1347 break;
1348 }
1349
1350 dst_pmdval = pmdp_get_lockless(dst_pmd);
1351 /*
1352 * If the dst_pmd is mapped as THP don't override it and just
1353 * be strict. If dst_pmd changes into TPH after this check, the
1354 * remap_pages_huge_pmd() will detect the change and retry
1355 * while remap_pages_pte() will detect the change and fail.
1356 */
1357 if (unlikely(pmd_trans_huge(dst_pmdval))) {
1358 err = -EEXIST;
1359 break;
1360 }
1361
1362 ptl = pmd_trans_huge_lock(src_pmd, src_vma);
1363 if (ptl) {
1364 if (pmd_devmap(*src_pmd)) {
1365 spin_unlock(ptl);
1366 err = -ENOENT;
1367 break;
1368 }
1369
1370 /*
1371 * Check if we can move the pmd without
1372 * splitting it. First check the address
1373 * alignment to be the same in src/dst. These
1374 * checks don't actually need the PT lock but
1375 * it's good to do it here to optimize this
1376 * block away at build time if
1377 * CONFIG_TRANSPARENT_HUGEPAGE is not set.
1378 */
1379 if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) ||
> 1380 src_start + len - src_addr < HPAGE_PMD_SIZE || !pmd_none(dst_pmdval)) {

--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
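
The class of issue smatch flags here is unsigned arithmetic: with unsigned
operands, an expression like src_start + len - src_addr wraps instead of
going negative, so sign-based expectations are vacuous. A standalone
illustration (not the kernel code):

#include <stdio.h>

int main(void)
{
        unsigned long src_start = 0x1000, len = 0x1000, src_addr = 0x2800;

        /* Wraps to a huge value rather than going to -0x800. */
        unsigned long remaining = src_start + len - src_addr;

        printf("remaining = %#lx (never < 0)\n", remaining);
        return 0;
}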

2023-10-19 15:20:28

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 1/3] mm/rmap: support move to different root anon_vma in folio_move_anon_rmap()

On Fri, Oct 13, 2023 at 1:04 AM David Hildenbrand <[email protected]> wrote:
>
> On 13.10.23 00:01, Peter Xu wrote:
> > On Sun, Oct 08, 2023 at 11:42:26PM -0700, Suren Baghdasaryan wrote:
> >> From: Andrea Arcangeli <[email protected]>
> >>
> >> For now, folio_move_anon_rmap() was only used to move a folio to a
> >> different anon_vma after fork(), whereby the root anon_vma stayed
> >> unchanged. For that, it was sufficient to hold the folio lock when
> >> calling folio_move_anon_rmap().
> >>
> >> However, we want to make use of folio_move_anon_rmap() to move folios
> >> between VMAs that have a different root anon_vma. As folio_referenced()
> >> performs an RMAP walk without holding the folio lock but only holding the
> >> anon_vma in read mode, holding the folio lock is insufficient.
> >>
> >> When moving to an anon_vma with a different root anon_vma, we'll have to
> >> hold both, the folio lock and the anon_vma lock in write mode.
> >> Consequently, whenever we succeeded in folio_lock_anon_vma_read() to
> >> read-lock the anon_vma, we have to re-check if the mapping was changed
> >> in the meantime. If that was the case, we have to retry.
> >>
> >> Note that folio_move_anon_rmap() must only be called if the anon page is
> >> exclusive to a process, and must not be called on KSM folios.
> >>
> >> This is a preparation for UFFDIO_MOVE, which will hold the folio lock,
> >> the anon_vma lock in write mode, and the mmap_lock in read mode.
> >>
> >> Signed-off-by: Andrea Arcangeli <[email protected]>
> >> Signed-off-by: Suren Baghdasaryan <[email protected]>
> >> ---
> >> mm/rmap.c | 24 ++++++++++++++++++++++++
> >> 1 file changed, 24 insertions(+)
> >>
> >> diff --git a/mm/rmap.c b/mm/rmap.c
> >> index c1f11c9dbe61..f9ddc50269d2 100644
> >> --- a/mm/rmap.c
> >> +++ b/mm/rmap.c
> >> @@ -542,7 +542,9 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
> >> struct anon_vma *root_anon_vma;
> >> unsigned long anon_mapping;
> >>
> >> +retry:
> >> rcu_read_lock();
> >> +retry_under_rcu:
> >> anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
> >> if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
> >> goto out;
> >> @@ -552,6 +554,16 @@ struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
> >> anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
> >> root_anon_vma = READ_ONCE(anon_vma->root);
> >> if (down_read_trylock(&root_anon_vma->rwsem)) {
> >> + /*
> >> + * folio_move_anon_rmap() might have changed the anon_vma as we
> >> + * might not hold the folio lock here.
> >> + */
> >> + if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
> >> + anon_mapping)) {
> >> + up_read(&root_anon_vma->rwsem);
> >> + goto retry_under_rcu;
> >
> > Is adding this specific label worthwhile? How about rcu unlock and goto
> > retry (then it'll also be clear that we won't hold rcu read lock for
> > unpredictable time)?
>
> +1, sounds good to me

Sorry for the delay, I was travelling for a week.

I was hesitant about RCU unlocking and then immediately re-locking but
your point about holding it for unpredictable time makes sense. Will
change. Thanks!
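
The resulting shape would be roughly (a sketch of the suggested rework, not
the posted hunk; surrounding checks elided):

retry:
        rcu_read_lock();
        anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
        ...
        if (down_read_trylock(&root_anon_vma->rwsem)) {
                /*
                 * folio_move_anon_rmap() might have changed the anon_vma
                 * as we might not hold the folio lock here.
                 */
                if (unlikely((unsigned long)READ_ONCE(folio->mapping) !=
                             anon_mapping)) {
                        up_read(&root_anon_vma->rwsem);
                        rcu_read_unlock();      /* bound the RCU section */
                        goto retry;
                }
                ...
        }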

>
> --
> Cheers,
>
> David / dhildenb
>

2023-10-19 15:42:44

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On 17.10.23 20:59, Peter Xu wrote:
> David,
>
> On Tue, Oct 17, 2023 at 05:55:10PM +0200, David Hildenbrand wrote:
>> Don't get me wrong, but this feature is already complicated enough that we
>> should really think twice if we want to make this even more complicated and
>> harder to maintain -- because once it's in we all know it's hard to remove
>> and we can easily end up with a maintenance nightmare without sufficiently
>> good use cases.
>
> Yes I agree it's non-trivial. My point is adding cross-mm doesn't make it
> even more complicated.. afaics.

That's not my main point. It can easily become a maintenance burden
without any real use cases yet that we are willing to support.

>
> For example, could you provide a list of things that will be different to
> support single mm or cross mm? I see two things that can be different, but
> I'd rather have all of them even if single-mm..
>
> - cgroup: I assume single-mm may avoid uncharge and charge again, but I
> prefer it be there even if we only allow single-mm. For example, I'm
> not 100% sure whether memcg won't start to behave differently according
> to vma attribute in the future.
>
> - page pinning: I assume for single-mm we can avoid checking page pinning
> based on the fact that MMF_HAS_PINNED is per-mm, but I also prefer we
> fail explicitly on pinned pages over UFFDIO_MOVE because it doesn't
> sound correct, and avoid future changes on top of pinning solution that
> can change the assumption that "move a pin page within mm" is ok.
>
> Is there anything else that will be different? Did I miss something
> important?

Again, that's not my main point. All I'm asking for is to separate it
out, make it a separate flag, and include it once we have reasonable use
cases that we are actually willing to support -- including actual data
why it's beneficial to have.

For the single-mm use it has been shown that there are reasonable,
existing use cases exist, and I think we are willing to support that.

This patch set is close to doubling (!) the size of mm/userfaultfd.c,
and it already has every possible smell of a maintenance nightmare IMHO.
It does things that shouldn't be specific to some MM subsystem. I'm
happy to see any possible complexity reduced. Moving pages between MMs
is added complexity.

But I will stop arguing further; I hope I made my point clear and I have
other things to work on than fighting against overly-complicated uffd
features.


>
> [...]
>
>> BTW, wasn't there a way to do VM live-upgrade using fork() and replacing the
>> binary? I recall that there was at some time either an implementation in
>> QEMU or a proposal for an implementation; but I don't know how VM memory was
>> provided. It's certainly harder to move VM memory using fork().
>
> Maybe you meant the cpr project. I didn't actually follow that much
> previously (and will need to follow more after I took the migration
> duties.. when there's a new post), but IIUC at least the latest version
> needs to go with file memory only, not anonymous:
>
> https://lore.kernel.org/all/[email protected]/
>
> Guest RAM must be non-volatile across reboot, which can be achieved by
> backing it with a dax device, or /dev/shm PKRAM as proposed in...
>
> Guest RAM must be backed by a memory backend with share=on, but
> cannot be memory-backend-ram. The memory is re-mmap'd in the
> updated process, so guest ram is efficiently preserved in place
>
> My understanding is there used to be a solution for anonymous memory, but
> that needs extra kernel changes (MADV_DOEXEC).

Probably; I also stumbled over a paper from 2019 that mentioned that.

>
> https://lore.kernel.org/linux-mm/[email protected]/
>
> I saw that you were part of the discussion, so maybe you will remember some
> more clues about that part.
>

Ouch, 2020. But my comments were only regarding mshare, not MADV_DOEXEC.
In fact, I don't even know why both discussions/threads show up as a
single one there..

> IIUC one core requirement of the whole approach is also that it will cover
> VFIO and maintenance of device DMA mappings, in which case it'll be
> different with any approach to leverage UFFDIO_MOVE because VFIO will not
> be allowed here; again I hope we start with forbid pinning. But it should
> be much cleaner on the design when with UFFDIO_MOVE, just not working with
> VFIO.
>
> One thing I'd need to measure is latency of UFFDIO_MOVE on page fault
> resolutions. I expect no more than tens of microseconds or even less.
> Should be drastically smaller than remote postcopy anyway.
>
> I'm probably off topic.. To go back: let's try to figure out what is
> special about cross-mm support. It'll be very weird in the future for
> anyone to propose a patch that just adds a feature flag and declares
> cross-mm support, if the code is mostly all there. Nothing stops us from
> discussing what a cross-mm design will need.

Again, I hope I made my point clear.

>
> [...]
>
>> Is that the case, and will that remain the case? I know people have been
>> working on transparent user-space swapping using uffd monitor processes. I
>> thought there would have been ways to achieve that without any cooperation
>> of the dst.
>
> Any example?

Nothing concrete; I only heard about uffd monitors that implement
user-space based swapping. I don't recall if they require some kind of
support from a library that gets loaded into these processes.

Same thoughts regarding CRIU using uffd.

>
> From what I am aware, all cooperation requires uffd desc forwarding. I
> think the trick here is that any userfaultfd desc must be created by its
> own process, so far nobody else's. That's more or less the process saying
> "I want to do this" of its own volition. The next step is forwarding that
> to someone else. A parent process is fine taking the uffd of a child with
> EVENT_FORK, as I mentioned, but besides that there is nothing else I can
> think of that can violate this guard to manipulate a random process.

Do you have any idea how CRIU makes that work (at least I recall that
they wanted to use UFFD)?

--
Cheers,

David / dhildenb

2023-10-19 15:43:43

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 3/3] selftests/mm: add UFFDIO_MOVE ioctl test

On Thu, Oct 12, 2023 at 3:29 PM Peter Xu <[email protected]> wrote:
>
> On Sun, Oct 08, 2023 at 11:42:28PM -0700, Suren Baghdasaryan wrote:
> > Add a test for new UFFDIO_MOVE ioctl which uses uffd to move source
> > into destination buffer while checking the contents of both after
> > remapping. After the operation the content of the destination buffer
> > should match the original source buffer's content while the source
> > buffer should be zeroed.
> >
> > Signed-off-by: Suren Baghdasaryan <[email protected]>
> > ---
> > tools/testing/selftests/mm/uffd-common.c | 41 ++++++++++++-
> > tools/testing/selftests/mm/uffd-common.h | 1 +
> > tools/testing/selftests/mm/uffd-unit-tests.c | 62 ++++++++++++++++++++
> > 3 files changed, 102 insertions(+), 2 deletions(-)
> >
> > diff --git a/tools/testing/selftests/mm/uffd-common.c b/tools/testing/selftests/mm/uffd-common.c
> > index 02b89860e193..ecc1244f1c2b 100644
> > --- a/tools/testing/selftests/mm/uffd-common.c
> > +++ b/tools/testing/selftests/mm/uffd-common.c
> > @@ -52,6 +52,13 @@ static int anon_allocate_area(void **alloc_area, bool is_src)
> > *alloc_area = NULL;
> > return -errno;
> > }
> > +
> > + /* Prevent source pages from collapsing into THPs */
> > + if (madvise(*alloc_area, nr_pages * page_size, MADV_NOHUGEPAGE)) {
> > + *alloc_area = NULL;
> > + return -errno;
> > + }
>
> Can we move this to test specific code?

Ack. I think that's doable.

>
> > +
> > return 0;
> > }
> >
> > @@ -484,8 +491,14 @@ void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args)
> > offset = (char *)(unsigned long)msg->arg.pagefault.address - area_dst;
> > offset &= ~(page_size-1);
> >
> > - if (copy_page(uffd, offset, args->apply_wp))
> > - args->missing_faults++;
> > + /* UFFD_MOVE is supported for anon non-shared mappings. */
> > + if (uffd_test_ops == &anon_uffd_test_ops && !map_shared) {
>
> IIUC this means move_page() will start to run on many other tests... as
> long as anonymous & private. Probably not wanted, because not all tests
> may need this MOVE test, and it also means UFFDIO_COPY is never tested on
> anonymous..
>
> You can overwrite uffd_args.handle_fault(). Axel just added a hook which
> seems also usable here. See 99aa77215ad02.

Yes, I was thinking about adding a completely new set of tests for
UFFDIO_MOVE but was not sure. With your confirmation I'll follow that
path so that UFFDIO_COPY tests stay the same.

>
> > + if (move_page(uffd, offset))
> > + args->missing_faults++;
> > + } else {
> > + if (copy_page(uffd, offset, args->apply_wp))
> > + args->missing_faults++;
> > + }
> > }
> > }
> >
> > @@ -620,6 +633,30 @@ int copy_page(int ufd, unsigned long offset, bool wp)
> > return __copy_page(ufd, offset, false, wp);
> > }
> >
> > +int move_page(int ufd, unsigned long offset)
> > +{
> > + struct uffdio_move uffdio_move;
> > +
> > + if (offset >= nr_pages * page_size)
> > + err("unexpected offset %lu\n", offset);
> > + uffdio_move.dst = (unsigned long) area_dst + offset;
> > + uffdio_move.src = (unsigned long) area_src + offset;
> > + uffdio_move.len = page_size;
> > + uffdio_move.mode = UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES;
> > + uffdio_move.move = 0;
> > + if (ioctl(ufd, UFFDIO_MOVE, &uffdio_move)) {
> > + /* real retval in uffdio_move.move */
> > + if (uffdio_move.move != -EEXIST)
> > + err("UFFDIO_MOVE error: %"PRId64,
> > + (int64_t)uffdio_move.move);
> > + wake_range(ufd, uffdio_move.dst, page_size);
> > + } else if (uffdio_move.move != page_size) {
> > + err("UFFDIO_MOVE error: %"PRId64, (int64_t)uffdio_move.move);
> > + } else
> > + return 1;
> > + return 0;
> > +}
> > +
> > int uffd_open_dev(unsigned int flags)
> > {
> > int fd, uffd;
> > diff --git a/tools/testing/selftests/mm/uffd-common.h b/tools/testing/selftests/mm/uffd-common.h
> > index 7c4fa964c3b0..f4d79e169a3d 100644
> > --- a/tools/testing/selftests/mm/uffd-common.h
> > +++ b/tools/testing/selftests/mm/uffd-common.h
> > @@ -111,6 +111,7 @@ void wp_range(int ufd, __u64 start, __u64 len, bool wp);
> > void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args);
> > int __copy_page(int ufd, unsigned long offset, bool retry, bool wp);
> > int copy_page(int ufd, unsigned long offset, bool wp);
> > +int move_page(int ufd, unsigned long offset);
> > void *uffd_poll_thread(void *arg);
> >
> > int uffd_open_dev(unsigned int flags);
> > diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
> > index 2709a34a39c5..f0ded3b34367 100644
> > --- a/tools/testing/selftests/mm/uffd-unit-tests.c
> > +++ b/tools/testing/selftests/mm/uffd-unit-tests.c
> > @@ -824,6 +824,10 @@ static void uffd_events_test_common(bool wp)
> > char c;
> > struct uffd_args args = { 0 };
> >
> > + /* Prevent source pages from being mapped more than once */
> > + if (madvise(area_src, nr_pages * page_size, MADV_DONTFORK))
> > + err("madvise(MADV_DONTFORK) failed");
>
> Modifying events test is weird.. I assume you don't need this anymore after
> you switch to the handle_fault() hook.

I think so but let me try first and I'll get back on that.

>
> > +
> > fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
> > if (uffd_register(uffd, area_dst, nr_pages * page_size,
> > true, wp, false))
> > @@ -1062,6 +1066,58 @@ static void uffd_poison_test(uffd_test_args_t *targs)
> > uffd_test_pass();
> > }
> >
> > +static void uffd_move_test(uffd_test_args_t *targs)
> > +{
> > + unsigned long nr;
> > + pthread_t uffd_mon;
> > + char c;
> > + unsigned long long count;
> > + struct uffd_args args = { 0 };
> > +
> > + if (uffd_register(uffd, area_dst, nr_pages * page_size,
> > + true, false, false))
> > + err("register failure");
> > +
> > + if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
> > + err("uffd_poll_thread create");
> > +
> > + /*
> > + * Read each of the pages back using the UFFD-registered mapping. We
> > + * expect that the first time we touch a page, it will result in a missing
> > + * fault. uffd_poll_thread will resolve the fault by remapping source
> > + * page to destination.
> > + */
> > + for (nr = 0; nr < nr_pages; nr++) {
> > + /* Check area_src content */
> > + count = *area_count(area_src, nr);
> > + if (count != count_verify[nr])
> > + err("nr %lu source memory invalid %llu %llu\n",
> > + nr, count, count_verify[nr]);
> > +
> > + /* Faulting into area_dst should remap the page */
> > + count = *area_count(area_dst, nr);
> > + if (count != count_verify[nr])
> > + err("nr %lu memory corruption %llu %llu\n",
> > + nr, count, count_verify[nr]);
> > +
> > + /* Re-check area_src content which should be empty */
> > + count = *area_count(area_src, nr);
> > + if (count != 0)
> > + err("nr %lu move failed %llu %llu\n",
> > + nr, count, count_verify[nr]);
>
> All of above should see zeros, right? Because I don't think anyone boosted
> the counter at all..
>
> Maybe set some non-zero values to it? Then the re-check can make more
> sense.

I thought uffd_test_ctx_init() initializes area_count(area_src,
nr), so the source pages should contain non-zero data before the move.
Am I missing something?

>
> If you want, I think we can also make uffd-stress.c test to cover MOVE too,
> basically replacing all UFFDIO_COPY when e.g. user specified from cmdline.
> Optional, and may need some touch ups here and there, though.

That's a good idea. I'll add that in the next version.
Thanks,
Suren.

>
> Thanks,
>
> > + }
> > +
> > + if (write(pipefd[1], &c, sizeof(c)) != sizeof(c))
> > + err("pipe write");
> > + if (pthread_join(uffd_mon, NULL))
> > + err("join() failed");
> > +
> > + if (args.missing_faults != nr_pages || args.minor_faults != 0)
> > + uffd_test_fail("stats check error");
> > + else
> > + uffd_test_pass();
> > +}
> > +
> > /*
> > * Test the returned uffdio_register.ioctls with different register modes.
> > * Note that _UFFDIO_ZEROPAGE is tested separately in the zeropage test.
> > @@ -1139,6 +1195,12 @@ uffd_test_case_t uffd_tests[] = {
> > .mem_targets = MEM_ALL,
> > .uffd_feature_required = 0,
> > },
> > + {
> > + .name = "move",
> > + .uffd_fn = uffd_move_test,
> > + .mem_targets = MEM_ANON,
> > + .uffd_feature_required = UFFD_FEATURE_MOVE,
> > + },
> > {
> > .name = "wp-fork",
> > .uffd_fn = uffd_wp_fork_test,
> > --
> > 2.42.0.609.gbb76f46606-goog
> >
>
> --
> Peter Xu
>

2023-10-19 17:31:07

by Axel Rasmussen

[permalink] [raw]
Subject: Re: [PATCH v3 3/3] selftests/mm: add UFFDIO_MOVE ioctl test

On Thu, Oct 19, 2023 at 8:43 AM Suren Baghdasaryan <[email protected]> wrote:
>
> On Thu, Oct 12, 2023 at 3:29 PM Peter Xu <[email protected]> wrote:
> >
> > On Sun, Oct 08, 2023 at 11:42:28PM -0700, Suren Baghdasaryan wrote:
> > > Add a test for new UFFDIO_MOVE ioctl which uses uffd to move source
> > > into destination buffer while checking the contents of both after
> > > remapping. After the operation the content of the destination buffer
> > > should match the original source buffer's content while the source
> > > buffer should be zeroed.
> > >
> > > Signed-off-by: Suren Baghdasaryan <[email protected]>
> > > ---
> > > tools/testing/selftests/mm/uffd-common.c | 41 ++++++++++++-
> > > tools/testing/selftests/mm/uffd-common.h | 1 +
> > > tools/testing/selftests/mm/uffd-unit-tests.c | 62 ++++++++++++++++++++
> > > 3 files changed, 102 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/tools/testing/selftests/mm/uffd-common.c b/tools/testing/selftests/mm/uffd-common.c
> > > index 02b89860e193..ecc1244f1c2b 100644
> > > --- a/tools/testing/selftests/mm/uffd-common.c
> > > +++ b/tools/testing/selftests/mm/uffd-common.c
> > > @@ -52,6 +52,13 @@ static int anon_allocate_area(void **alloc_area, bool is_src)
> > > *alloc_area = NULL;
> > > return -errno;
> > > }
> > > +
> > > + /* Prevent source pages from collapsing into THPs */
> > > + if (madvise(*alloc_area, nr_pages * page_size, MADV_NOHUGEPAGE)) {
> > > + *alloc_area = NULL;
> > > + return -errno;
> > > + }
> >
> > Can we move this to test specific code?
>
> Ack. I think that's doable.
>
> >
> > > +
> > > return 0;
> > > }
> > >
> > > @@ -484,8 +491,14 @@ void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args)
> > > offset = (char *)(unsigned long)msg->arg.pagefault.address - area_dst;
> > > offset &= ~(page_size-1);
> > >
> > > - if (copy_page(uffd, offset, args->apply_wp))
> > > - args->missing_faults++;
> > > + /* UFFD_MOVE is supported for anon non-shared mappings. */
> > > + if (uffd_test_ops == &anon_uffd_test_ops && !map_shared) {
> >
> > IIUC this means move_page() will start to run on many other tests... as
> > long as anonymous & private. Probably not wanted, because not all tests
> > may need this MOVE test, and it also means UFFDIO_COPY is never tested on
> > anonymous..
> >
> > You can overwrite uffd_args.handle_fault(). Axel just added a hook which
> > seems also usable here. See 99aa77215ad02.
>
> Yes, I was thinking about adding a completely new set of tests for
> UFFDIO_MOVE but was not sure. With your confirmation I'll follow that
> path so that UFFDIO_COPY tests stay the same.
>
> >
> > > + if (move_page(uffd, offset))
> > > + args->missing_faults++;
> > > + } else {
> > > + if (copy_page(uffd, offset, args->apply_wp))
> > > + args->missing_faults++;
> > > + }
> > > }
> > > }
> > >
> > > @@ -620,6 +633,30 @@ int copy_page(int ufd, unsigned long offset, bool wp)
> > > return __copy_page(ufd, offset, false, wp);
> > > }
> > >
> > > +int move_page(int ufd, unsigned long offset)
> > > +{
> > > + struct uffdio_move uffdio_move;
> > > +
> > > + if (offset >= nr_pages * page_size)
> > > + err("unexpected offset %lu\n", offset);
> > > + uffdio_move.dst = (unsigned long) area_dst + offset;
> > > + uffdio_move.src = (unsigned long) area_src + offset;
> > > + uffdio_move.len = page_size;
> > > + uffdio_move.mode = UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES;
> > > + uffdio_move.move = 0;
> > > + if (ioctl(ufd, UFFDIO_MOVE, &uffdio_move)) {
> > > + /* real retval in uffdio_move.move */
> > > + if (uffdio_move.move != -EEXIST)
> > > + err("UFFDIO_MOVE error: %"PRId64,
> > > + (int64_t)uffdio_move.move);
> > > + wake_range(ufd, uffdio_move.dst, page_size);
> > > + } else if (uffdio_move.move != page_size) {
> > > + err("UFFDIO_MOVE error: %"PRId64, (int64_t)uffdio_move.move);
> > > + } else
> > > + return 1;
> > > + return 0;
> > > +}
> > > +
> > > int uffd_open_dev(unsigned int flags)
> > > {
> > > int fd, uffd;
> > > diff --git a/tools/testing/selftests/mm/uffd-common.h b/tools/testing/selftests/mm/uffd-common.h
> > > index 7c4fa964c3b0..f4d79e169a3d 100644
> > > --- a/tools/testing/selftests/mm/uffd-common.h
> > > +++ b/tools/testing/selftests/mm/uffd-common.h
> > > @@ -111,6 +111,7 @@ void wp_range(int ufd, __u64 start, __u64 len, bool wp);
> > > void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args);
> > > int __copy_page(int ufd, unsigned long offset, bool retry, bool wp);
> > > int copy_page(int ufd, unsigned long offset, bool wp);
> > > +int move_page(int ufd, unsigned long offset);
> > > void *uffd_poll_thread(void *arg);
> > >
> > > int uffd_open_dev(unsigned int flags);
> > > diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
> > > index 2709a34a39c5..f0ded3b34367 100644
> > > --- a/tools/testing/selftests/mm/uffd-unit-tests.c
> > > +++ b/tools/testing/selftests/mm/uffd-unit-tests.c
> > > @@ -824,6 +824,10 @@ static void uffd_events_test_common(bool wp)
> > > char c;
> > > struct uffd_args args = { 0 };
> > >
> > > + /* Prevent source pages from being mapped more than once */
> > > + if (madvise(area_src, nr_pages * page_size, MADV_DONTFORK))
> > > + err("madvise(MADV_DONTFORK) failed");
> >
> > Modifying events test is weird.. I assume you don't need this anymore after
> > you switch to the handle_fault() hook.
>
> I think so but let me try first and I'll get back to you on that.
>
> >
> > > +
> > > fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
> > > if (uffd_register(uffd, area_dst, nr_pages * page_size,
> > > true, wp, false))
> > > @@ -1062,6 +1066,58 @@ static void uffd_poison_test(uffd_test_args_t *targs)
> > > uffd_test_pass();
> > > }
> > >
> > > +static void uffd_move_test(uffd_test_args_t *targs)
> > > +{
> > > + unsigned long nr;
> > > + pthread_t uffd_mon;
> > > + char c;
> > > + unsigned long long count;
> > > + struct uffd_args args = { 0 };
> > > +
> > > + if (uffd_register(uffd, area_dst, nr_pages * page_size,
> > > + true, false, false))
> > > + err("register failure");
> > > +
> > > + if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
> > > + err("uffd_poll_thread create");
> > > +
> > > + /*
> > > + * Read each of the pages back using the UFFD-registered mapping. We
> > > + * expect that the first time we touch a page, it will result in a missing
> > > + * fault. uffd_poll_thread will resolve the fault by remapping source
> > > + * page to destination.
> > > + */
> > > + for (nr = 0; nr < nr_pages; nr++) {
> > > + /* Check area_src content */
> > > + count = *area_count(area_src, nr);
> > > + if (count != count_verify[nr])
> > > + err("nr %lu source memory invalid %llu %llu\n",
> > > + nr, count, count_verify[nr]);
> > > +
> > > + /* Faulting into area_dst should remap the page */
> > > + count = *area_count(area_dst, nr);
> > > + if (count != count_verify[nr])
> > > + err("nr %lu memory corruption %llu %llu\n",
> > > + nr, count, count_verify[nr]);
> > > +
> > > + /* Re-check area_src content which should be empty */
> > > + count = *area_count(area_src, nr);
> > > + if (count != 0)
> > > + err("nr %lu move failed %llu %llu\n",
> > > + nr, count, count_verify[nr]);
> >
> > All of above should see zeros, right? Because I don't think anyone boosted
> > the counter at all..
> >
> > Maybe set some non-zero values to it? Then the re-check can make more
> > sense.
>
> I thought uffd_test_ctx_init() initializes area_count(area_src,
> nr), so the source pages should contain non-zero data before the move.
> Am I missing something?

You're correct, uffd_test_ctx_init() fills in some data in area_src.

>
> >
> > If you want, I think we can also make uffd-stress.c test to cover MOVE too,
> > basically replacing all UFFDIO_COPY when e.g. user specified from cmdline.
> > Optional, and may need some touch ups here and there, though.
>
> That's a good idea. I'll add that in the next version.
> Thanks,
> Suren.
>
> >
> > Thanks,
> >
> > > + }
> > > +
> > > + if (write(pipefd[1], &c, sizeof(c)) != sizeof(c))
> > > + err("pipe write");
> > > + if (pthread_join(uffd_mon, NULL))
> > > + err("join() failed");
> > > +
> > > + if (args.missing_faults != nr_pages || args.minor_faults != 0)
> > > + uffd_test_fail("stats check error");
> > > + else
> > > + uffd_test_pass();
> > > +}
> > > +
> > > /*
> > > * Test the returned uffdio_register.ioctls with different register modes.
> > > * Note that _UFFDIO_ZEROPAGE is tested separately in the zeropage test.
> > > @@ -1139,6 +1195,12 @@ uffd_test_case_t uffd_tests[] = {
> > > .mem_targets = MEM_ALL,
> > > .uffd_feature_required = 0,
> > > },
> > > + {
> > > + .name = "move",
> > > + .uffd_fn = uffd_move_test,
> > > + .mem_targets = MEM_ANON,
> > > + .uffd_feature_required = UFFD_FEATURE_MOVE,
> > > + },
> > > {
> > > .name = "wp-fork",
> > > .uffd_fn = uffd_wp_fork_test,
> > > --
> > > 2.42.0.609.gbb76f46606-goog
> > >
> >
> > --
> > Peter Xu
> >

2023-10-19 19:34:27

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 3/3] selftests/mm: add UFFDIO_MOVE ioctl test

On Thu, Oct 19, 2023 at 10:29:27AM -0700, Axel Rasmussen wrote:
> On Thu, Oct 19, 2023 at 8:43 AM Suren Baghdasaryan <[email protected]> wrote:
> >
> > On Thu, Oct 12, 2023 at 3:29 PM Peter Xu <[email protected]> wrote:
> > >
> > > On Sun, Oct 08, 2023 at 11:42:28PM -0700, Suren Baghdasaryan wrote:
> > > > Add a test for new UFFDIO_MOVE ioctl which uses uffd to move source
> > > > into destination buffer while checking the contents of both after
> > > > remapping. After the operation the content of the destination buffer
> > > > should match the original source buffer's content while the source
> > > > buffer should be zeroed.
> > > >
> > > > Signed-off-by: Suren Baghdasaryan <[email protected]>
> > > > ---
> > > > tools/testing/selftests/mm/uffd-common.c | 41 ++++++++++++-
> > > > tools/testing/selftests/mm/uffd-common.h | 1 +
> > > > tools/testing/selftests/mm/uffd-unit-tests.c | 62 ++++++++++++++++++++
> > > > 3 files changed, 102 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/tools/testing/selftests/mm/uffd-common.c b/tools/testing/selftests/mm/uffd-common.c
> > > > index 02b89860e193..ecc1244f1c2b 100644
> > > > --- a/tools/testing/selftests/mm/uffd-common.c
> > > > +++ b/tools/testing/selftests/mm/uffd-common.c
> > > > @@ -52,6 +52,13 @@ static int anon_allocate_area(void **alloc_area, bool is_src)
> > > > *alloc_area = NULL;
> > > > return -errno;
> > > > }
> > > > +
> > > > + /* Prevent source pages from collapsing into THPs */
> > > > + if (madvise(*alloc_area, nr_pages * page_size, MADV_NOHUGEPAGE)) {
> > > > + *alloc_area = NULL;
> > > > + return -errno;
> > > > + }
> > >
> > > Can we move this to test specific code?
> >
> > Ack. I think that's doable.
> >
> > >
> > > > +
> > > > return 0;
> > > > }
> > > >
> > > > @@ -484,8 +491,14 @@ void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args)
> > > > offset = (char *)(unsigned long)msg->arg.pagefault.address - area_dst;
> > > > offset &= ~(page_size-1);
> > > >
> > > > - if (copy_page(uffd, offset, args->apply_wp))
> > > > - args->missing_faults++;
> > > > + /* UFFD_MOVE is supported for anon non-shared mappings. */
> > > > + if (uffd_test_ops == &anon_uffd_test_ops && !map_shared) {
> > >
> > > IIUC this means move_page() will start to run on many other tests... as
> > > long as anonymous & private. Probably not wanted, because not all tests
> > > may need this MOVE test, and it also means UFFDIO_COPY is never tested on
> > > anonymous..
> > >
> > > You can overwrite uffd_args.handle_fault(). Axel just added a hook which
> > > seems also usable here. See 99aa77215ad02.
> >
> > Yes, I was thinking about adding a completely new set of tests for
> > UFFDIO_MOVE but was not sure. With your confirmation I'll follow that
> > path so that UFFDIO_COPY tests stay the same.

Sounds good.

If you want, you can also torture MOVE a bit with uffd-stress.c, doing the
bouncing test all with MOVE; that may need a new option and some more code
changes, though.
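
For illustration (helper and option names made up), it could be as simple
as:

        /* set from a new uffd-stress cmdline option, e.g. "-m" */
        static bool test_uffdio_move;

        static void resolve_missing_fault(int uffd, unsigned long offset)
        {
                if (test_uffdio_move)
                        move_page(uffd, offset);
                else
                        copy_page(uffd, offset, false);
        }

so that every bounce round resolves its missing faults with UFFDIO_MOVE
instead of UFFDIO_COPY.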

> >
> > >
> > > > + if (move_page(uffd, offset))
> > > > + args->missing_faults++;
> > > > + } else {
> > > > + if (copy_page(uffd, offset, args->apply_wp))
> > > > + args->missing_faults++;
> > > > + }
> > > > }
> > > > }
> > > >
> > > > @@ -620,6 +633,30 @@ int copy_page(int ufd, unsigned long offset, bool wp)
> > > > return __copy_page(ufd, offset, false, wp);
> > > > }
> > > >
> > > > +int move_page(int ufd, unsigned long offset)
> > > > +{
> > > > + struct uffdio_move uffdio_move;
> > > > +
> > > > + if (offset >= nr_pages * page_size)
> > > > + err("unexpected offset %lu\n", offset);
> > > > + uffdio_move.dst = (unsigned long) area_dst + offset;
> > > > + uffdio_move.src = (unsigned long) area_src + offset;
> > > > + uffdio_move.len = page_size;
> > > > + uffdio_move.mode = UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES;
> > > > + uffdio_move.move = 0;
> > > > + if (ioctl(ufd, UFFDIO_MOVE, &uffdio_move)) {
> > > > + /* real retval in uffdio_move.move */
> > > > + if (uffdio_move.move != -EEXIST)
> > > > + err("UFFDIO_MOVE error: %"PRId64,
> > > > + (int64_t)uffdio_move.move);
> > > > + wake_range(ufd, uffdio_move.dst, page_size);
> > > > + } else if (uffdio_move.move != page_size) {
> > > > + err("UFFDIO_MOVE error: %"PRId64, (int64_t)uffdio_move.move);
> > > > + } else
> > > > + return 1;
> > > > + return 0;
> > > > +}
> > > > +
> > > > int uffd_open_dev(unsigned int flags)
> > > > {
> > > > int fd, uffd;
> > > > diff --git a/tools/testing/selftests/mm/uffd-common.h b/tools/testing/selftests/mm/uffd-common.h
> > > > index 7c4fa964c3b0..f4d79e169a3d 100644
> > > > --- a/tools/testing/selftests/mm/uffd-common.h
> > > > +++ b/tools/testing/selftests/mm/uffd-common.h
> > > > @@ -111,6 +111,7 @@ void wp_range(int ufd, __u64 start, __u64 len, bool wp);
> > > > void uffd_handle_page_fault(struct uffd_msg *msg, struct uffd_args *args);
> > > > int __copy_page(int ufd, unsigned long offset, bool retry, bool wp);
> > > > int copy_page(int ufd, unsigned long offset, bool wp);
> > > > +int move_page(int ufd, unsigned long offset);
> > > > void *uffd_poll_thread(void *arg);
> > > >
> > > > int uffd_open_dev(unsigned int flags);
> > > > diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
> > > > index 2709a34a39c5..f0ded3b34367 100644
> > > > --- a/tools/testing/selftests/mm/uffd-unit-tests.c
> > > > +++ b/tools/testing/selftests/mm/uffd-unit-tests.c
> > > > @@ -824,6 +824,10 @@ static void uffd_events_test_common(bool wp)
> > > > char c;
> > > > struct uffd_args args = { 0 };
> > > >
> > > > + /* Prevent source pages from being mapped more than once */
> > > > + if (madvise(area_src, nr_pages * page_size, MADV_DONTFORK))
> > > > + err("madvise(MADV_DONTFORK) failed");
> > >
> > > Modifying events test is weird.. I assume you don't need this anymore after
> > > you switch to the handle_fault() hook.
> >
> > I think so but let me try first and I'll get back to you on that.
> >
> > >
> > > > +
> > > > fcntl(uffd, F_SETFL, uffd_flags | O_NONBLOCK);
> > > > if (uffd_register(uffd, area_dst, nr_pages * page_size,
> > > > true, wp, false))
> > > > @@ -1062,6 +1066,58 @@ static void uffd_poison_test(uffd_test_args_t *targs)
> > > > uffd_test_pass();
> > > > }
> > > >
> > > > +static void uffd_move_test(uffd_test_args_t *targs)
> > > > +{
> > > > + unsigned long nr;
> > > > + pthread_t uffd_mon;
> > > > + char c;
> > > > + unsigned long long count;
> > > > + struct uffd_args args = { 0 };
> > > > +
> > > > + if (uffd_register(uffd, area_dst, nr_pages * page_size,
> > > > + true, false, false))
> > > > + err("register failure");
> > > > +
> > > > + if (pthread_create(&uffd_mon, NULL, uffd_poll_thread, &args))
> > > > + err("uffd_poll_thread create");
> > > > +
> > > > + /*
> > > > + * Read each of the pages back using the UFFD-registered mapping. We
> > > > + * expect that the first time we touch a page, it will result in a missing
> > > > + * fault. uffd_poll_thread will resolve the fault by remapping source
> > > > + * page to destination.
> > > > + */
> > > > + for (nr = 0; nr < nr_pages; nr++) {
> > > > + /* Check area_src content */
> > > > + count = *area_count(area_src, nr);
> > > > + if (count != count_verify[nr])
> > > > + err("nr %lu source memory invalid %llu %llu\n",
> > > > + nr, count, count_verify[nr]);
> > > > +
> > > > + /* Faulting into area_dst should remap the page */
> > > > + count = *area_count(area_dst, nr);
> > > > + if (count != count_verify[nr])
> > > > + err("nr %lu memory corruption %llu %llu\n",
> > > > + nr, count, count_verify[nr]);
> > > > +
> > > > + /* Re-check area_src content which should be empty */
> > > > + count = *area_count(area_src, nr);
> > > > + if (count != 0)
> > > > + err("nr %lu move failed %llu %llu\n",
> > > > + nr, count, count_verify[nr]);
> > >
> > > All of above should see zeros, right? Because I don't think anyone boosted
> > > the counter at all..
> > >
> > > Maybe set some non-zero values to it? Then the re-check can make more
> > > sense.
> >
> > I thought uffd_test_ctx_init() initializes area_count(area_src,
> > nr), so the source pages should contain non-zero data before the move.
> > Am I missing something?
>
> You're correct, uffd_test_ctx_init() fills in some data in area_src.

Indeed.

>
> >
> > >
> > > If you want, I think we can also make uffd-stress.c test to cover MOVE too,
> > > basically replacing all UFFDIO_COPY when e.g. user specified from cmdline.
> > > Optional, and may need some touch ups here and there, though.
> >
> > That's a good idea. I'll add that in the next version.
> > Thanks,
> > Suren.
> >
> > >
> > > Thanks,
> > >
> > > > + }
> > > > +
> > > > + if (write(pipefd[1], &c, sizeof(c)) != sizeof(c))
> > > > + err("pipe write");
> > > > + if (pthread_join(uffd_mon, NULL))
> > > > + err("join() failed");
> > > > +
> > > > + if (args.missing_faults != nr_pages || args.minor_faults != 0)
> > > > + uffd_test_fail("stats check error");
> > > > + else
> > > > + uffd_test_pass();
> > > > +}
> > > > +
> > > > /*
> > > > * Test the returned uffdio_register.ioctls with different register modes.
> > > > * Note that _UFFDIO_ZEROPAGE is tested separately in the zeropage test.
> > > > @@ -1139,6 +1195,12 @@ uffd_test_case_t uffd_tests[] = {
> > > > .mem_targets = MEM_ALL,
> > > > .uffd_feature_required = 0,
> > > > },
> > > > + {
> > > > + .name = "move",
> > > > + .uffd_fn = uffd_move_test,
> > > > + .mem_targets = MEM_ANON,
> > > > + .uffd_feature_required = UFFD_FEATURE_MOVE,
> > > > + },
> > > > {
> > > > .name = "wp-fork",
> > > > .uffd_fn = uffd_wp_fork_test,
> > > > --
> > > > 2.42.0.609.gbb76f46606-goog
> > > >
> > >
> > > --
> > > Peter Xu
> > >
>

--
Peter Xu

2023-10-19 19:54:04

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Thu, Oct 19, 2023 at 05:41:01PM +0200, David Hildenbrand wrote:
> That's not my main point. It can easily become a maintenance burden without
> any real use cases yet that we are willing to support.

That's why I requested a few times that we discuss the complexity of
cross-mm support already here, and I'm all ears if I missed something on
the "maintenance burden" part..

I started by listing what I think might be different, and we can easily
speed up single-mm with things like "if (ctx->mm != mm)" checks with
e.g. memcg, just like this patch already did with pgtable depositions.

We keep saying "maintenance burden" but we refuse to discuss what that is..

I'll leave that to Suren and Lokesh to decide. For me the worst case is
one more flag which might be confusing, which is not the end of the world..
Suren, you may need to work more thoroughly to remove cross-mm implications
if so, just like when renaming REMAP to MOVE.

Thanks,

--
Peter Xu

2023-10-19 20:03:15

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Thu, Oct 19, 2023 at 12:53 PM Peter Xu <[email protected]> wrote:
>
> On Thu, Oct 19, 2023 at 05:41:01PM +0200, David Hildenbrand wrote:
> > That's not my main point. It can easily become a maintenance burden without
> > any real use cases yet that we are willing to support.
>
> That's why I requested a few times that we discuss the complexity of
> cross-mm support already here, and I'm all ears if I missed something on
> the "maintenance burden" part..
>
> I started by listing what I think might be different, and we can easily
> speed up single-mm with things like "if (ctx->mm != mm)" checks with
> e.g. memcg, just like this patch already did with pgtable depositions.
>
> We keep saying "maintenance burden" but we refuse to discuss what that is..
>
> I'll leave that to Suren and Lokesh to decide. For me the worst case is
> one more flag which might be confusing, which is not the end of the world..
> Suren, you may need to work more thoroughly to remove cross-mm implications
> if so, just like when renaming REMAP to MOVE.

Hi Folks,
Sorry, I'm just catching up on all the comments in this thread after a
week-long absence. Will be addressing other questions separately but
for cross-mm one, I think the best way forward would be for me to
split this patch into two with the second one adding cross-mm support.
That will clearly show how much additional code that requires and will
make it easier for us to decide whether to support it or not.
TBH, I don't see the need for an additional flag even if the initial
version is merged without cross-mm support. Once it's added, the
manpage can mention that starting with a specific Linux version
cross-mm is supported, no?
Also from my quick read, it sounds like we want to prevent movements
of pinned pages regardless of cross-mm support. Is my understanding
correct?
Thanks,
Suren.


>
> Thanks,
>
> --
> Peter Xu
>

2023-10-19 20:45:08

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Thu, Oct 19, 2023 at 01:02:39PM -0700, Suren Baghdasaryan wrote:
> Hi Folks,
> Sorry, I'm just catching up on all the comments in this thread after a

Not a problem.

> week-long absence. Will be addressing other questions separately, but
> for the cross-mm one, I think the best way forward would be for me to
> split this patch into two with the second one adding cross-mm support.
> That will clearly show how much additional code that requires and will
> make it easier for us to decide whether to support it or not.

Sounds good, thanks for that extra work.

> TBH, I don't see the need for an additional flag even if the initial
> version is merged without cross-mm support. Once it's added, the
> manpage can mention that starting with a specific Linux version
> cross-mm is supported, no?

It's about how a user app knows what the kernel supports.

On kernels that only support single-mm, UFFDIO_MOVE should fail if it finds
ctx->mm != current->mm.

I think the best way to make the user app aware of what happened is a new
feature bit, if cross-mm will be supported separately. Otherwise the user
app will need to rely on a specific failure code of UFFDIO_MOVE, and only
once the 1st MOVE is triggered. Not as clear, IMHO.
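
For example (UFFD_FEATURE_MOVE_CROSS_MM below is hypothetical, only to
illustrate the feature-bit approach):

        struct uffdio_api api = { .api = UFFD_API };

        /* on success, the kernel reports all supported features */
        if (ioctl(uffd, UFFDIO_API, &api))
                err("UFFDIO_API failed");

        bool can_move = api.features & UFFD_FEATURE_MOVE;
        bool can_move_cross_mm = api.features & UFFD_FEATURE_MOVE_CROSS_MM;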

> Also from my quick read, it sounds like we want to prevent movements
> of pinned pages regardless of cross-mm support. Is my understanding
> correct?

I prefer that, but that's only my 2 cents. I just don't see how remap can
work with pinning. IIUC pinning is about coherency between the processor
view and the DMA view. If so, the VA is the only identifier of a "page"
for a user app because the real pfn is hidden, and remap changes that VA.
So it doesn't make sense to me to remap a pinned page in any form.

For checking pinning: I mentioned before that it may again require proper
locking over mm.write_protect_seq like the fork() paths. No, thinking
about it again I was wrong.. write_protect_seq requires the mmap write
lock, definitely not good.

We can do what David mentioned before: after ptep_clear_flush() (so the
pte is cleared) we recheck page pinning, and if pinned we fail MOVE and
put the page back. Note that we can't do that check after installing it
into the dest pgtables, because then someone can start to pin it from the
dest mm already.
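
Roughly (sketch only; assuming we still hold the folio reference, with
dst_pte_val standing for the already-prepared destination pte):

        orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
        if (unlikely(folio_maybe_dma_pinned(src_folio))) {
                /* somebody holds a pin: undo the clear and fail the move */
                set_pte_at(src_mm, src_addr, src_pte, orig_src_pte);
                err = -EBUSY;
        } else {
                set_pte_at(dst_mm, dst_addr, dst_pte, dst_pte_val);
        }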

Thanks,

--
Peter Xu

2023-10-19 21:25:47

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Thu, Oct 12, 2023 at 3:00 PM Peter Xu <[email protected]> wrote:
>
> On Sun, Oct 08, 2023 at 11:42:27PM -0700, Suren Baghdasaryan wrote:
> > From: Andrea Arcangeli <[email protected]>
> >
> > Implement the uABI of UFFDIO_MOVE ioctl.
> > UFFDIO_COPY performs ~20% better than UFFDIO_MOVE when the application
> > needs pages to be allocated [1]. However, with UFFDIO_MOVE, if pages are
> > available (in userspace) for recycling, as is usually the case in heap
> > compaction algorithms, then we can avoid the page allocation and memcpy
> > (done by UFFDIO_COPY). Also, since the pages are recycled in the
> > userspace, we avoid the need to release (via madvise) the pages back to
> > the kernel [2].
> > We see over 40% reduction (on a Google pixel 6 device) in the compacting
> > thread’s completion time by using UFFDIO_MOVE vs. UFFDIO_COPY. This was
> > measured using a benchmark that emulates a heap compaction implementation
> > using userfaultfd (to allow concurrent accesses by application threads).
> > More details of the usecase are explained in [2].
> > Furthermore, UFFDIO_MOVE enables moving swapped-out pages without
> > touching them within the same vma. Today, it can only be done by mremap,
> > however it forces splitting the vma.
> >
> > [1] https://lore.kernel.org/all/[email protected]/
> > [2] https://lore.kernel.org/linux-mm/CA+EESO4uO84SSnBhArH4HvLNhaUQ5nZKNKXqxRCyjniNVjp0Aw@mail.gmail.com/
> >
> > Update for the ioctl_userfaultfd(2) manpage:
> >
> > UFFDIO_MOVE
> > (Since Linux xxx) Move a continuous memory chunk into the
> > userfault registered range and optionally wake up the blocked
> > thread. The source and destination addresses and the number of
> > bytes to move are specified by the src, dst, and len fields of
> > the uffdio_move structure pointed to by argp:
> >
> > struct uffdio_move {
> > __u64 dst; /* Destination of move */
> > __u64 src; /* Source of move */
> > __u64 len; /* Number of bytes to move */
> > __u64 mode; /* Flags controlling behavior of move */
> > __s64 move; /* Number of bytes moved, or negated error */
> > };
> >
> > The following value may be bitwise ORed in mode to change the
> > behavior of the UFFDIO_MOVE operation:
> >
> > UFFDIO_MOVE_MODE_DONTWAKE
> > Do not wake up the thread that waits for page-fault
> > resolution
> >
> > UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES
> > Allow holes in the source virtual range that is being moved.
> > When not specified, the holes will result in ENOENT error.
> > When specified, the holes will be accounted as successfully
> > moved memory. This is mostly useful to move hugepage aligned
> > virtual regions without knowing if there are transparent
> > hugepages in the regions or not, but preventing the risk of
> > having to split the hugepage during the operation.
> >
> > The move field is used by the kernel to return the number of
> > bytes that was actually moved, or an error (a negated errno-
> > style value). If the value returned in move doesn't match the
> > value that was specified in len, the operation fails with the
> > error EAGAIN. The move field is output-only; it is not read by
> > the UFFDIO_MOVE operation.
> >
> > The operation may fail for various reasons. Usually, remapping of
> > pages that are not exclusive to the given process fail; once KSM
> > might deduplicate pages or fork() COW-shares pages during fork()
> > with child processes, they are no longer exclusive. Further, the
> > kernel might only perform lightweight checks for detecting whether
> > the pages are exclusive, and return -EBUSY in case that check fails.
> > To make the operation more likely to succeed, KSM should be
> > disabled, fork() should be avoided or MADV_DONTFORK should be
> > configured for the source VMA before fork().
> >
> > This ioctl(2) operation returns 0 on success. In this case, the
> > entire area was moved. On error, -1 is returned and errno is
> > set to indicate the error. Possible errors include:
> >
> > EAGAIN The number of bytes moved (i.e., the value returned in
> > the move field) does not equal the value that was
> > specified in the len field.
> >
> > EINVAL Either dst or len was not a multiple of the system page
> > size, or the range specified by src and len or dst and len
> > was invalid.
> >
> > EINVAL An invalid bit was specified in the mode field.
> >
> > ENOENT
> > The source virtual memory range has unmapped holes and
> > UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES is not set.
> >
> > EEXIST
> > The destination virtual memory range is fully or partially
> > mapped.
> >
> > EBUSY
> > The pages in the source virtual memory range are not
> > exclusive to the process. The kernel might only perform
> > lightweight checks for detecting whether the pages are
> > exclusive. To make the operation more likely to succeed,
> > KSM should be disabled, fork() should be avoided or
> > MADV_DONTFORK should be configured for the source virtual
> > memory area before fork().
> >
> > ENOMEM Allocating memory needed for the operation failed.
> >
> > ESRCH
> > The faulting process has exited at the time of a
>
> Nit pick comment for future man page: there's no faulting process in this
> context. Perhaps "target process"?

Ack.

>
> > UFFDIO_MOVE operation.
> >
> > Signed-off-by: Andrea Arcangeli <[email protected]>
> > Signed-off-by: Suren Baghdasaryan <[email protected]>
> > ---
> > Documentation/admin-guide/mm/userfaultfd.rst | 3 +
> > fs/userfaultfd.c | 63 ++
> > include/linux/rmap.h | 5 +
> > include/linux/userfaultfd_k.h | 12 +
> > include/uapi/linux/userfaultfd.h | 29 +-
> > mm/huge_memory.c | 138 +++++
> > mm/khugepaged.c | 3 +
> > mm/rmap.c | 6 +
> > mm/userfaultfd.c | 602 +++++++++++++++++++
> > 9 files changed, 860 insertions(+), 1 deletion(-)
> >
> > diff --git a/Documentation/admin-guide/mm/userfaultfd.rst b/Documentation/admin-guide/mm/userfaultfd.rst
> > index 203e26da5f92..e5cc8848dcb3 100644
> > --- a/Documentation/admin-guide/mm/userfaultfd.rst
> > +++ b/Documentation/admin-guide/mm/userfaultfd.rst
> > @@ -113,6 +113,9 @@ events, except page fault notifications, may be generated:
> > areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating
> > support for shmem virtual memory areas.
> >
> > +- ``UFFD_FEATURE_MOVE`` indicates that the kernel supports moving an
> > + existing page contents from userspace.
> > +
> > The userland application should set the feature flags it intends to use
> > when invoking the ``UFFDIO_API`` ioctl, to request that those features be
> > enabled if supported.
> > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> > index a7c6ef764e63..ac52e0f99a69 100644
> > --- a/fs/userfaultfd.c
> > +++ b/fs/userfaultfd.c
> > @@ -2039,6 +2039,66 @@ static inline unsigned int uffd_ctx_features(__u64 user_features)
> > return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED;
> > }
> >
> > +static int userfaultfd_remap(struct userfaultfd_ctx *ctx,
> > + unsigned long arg)
>
> If we do want to rename from REMAP to MOVE, we'd better rename the
> functions too, as "remap" still exists all over the place..

Ok. I thought that since the current implementation only remaps and
never copies, it would be correct to keep "remap" in these internal
names and change that later if we add support for copying. But I'm fine
with renaming them now to avoid confusion. Will do.


>
> > +{
> > + __s64 ret;
> > + struct uffdio_move uffdio_move;
> > + struct uffdio_move __user *user_uffdio_move;
> > + struct userfaultfd_wake_range range;
> > +
> > + user_uffdio_move = (struct uffdio_move __user *) arg;
> > +
> > + ret = -EAGAIN;
> > + if (atomic_read(&ctx->mmap_changing))
> > + goto out;
>
> I didn't notice this before, but I think we need to re-check this after
> taking target mm's mmap read lock..

Ack.

>
> maybe we'd want to pass in ctx* into remap_pages(), more reasoning below.

Makes sense.

>
> > +
> > + ret = -EFAULT;
> > + if (copy_from_user(&uffdio_move, user_uffdio_move,
> > + /* don't copy "remap" last field */
>
> s/remap/move/

Ack.

>
> > + sizeof(uffdio_move)-sizeof(__s64)))
> > + goto out;
> > +
> > + ret = validate_range(ctx->mm, uffdio_move.dst, uffdio_move.len);
> > + if (ret)
> > + goto out;
> > +
> > + ret = validate_range(current->mm, uffdio_move.src, uffdio_move.len);
> > + if (ret)
> > + goto out;
> > +
> > + ret = -EINVAL;
> > + if (uffdio_move.mode & ~(UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES|
> > + UFFDIO_MOVE_MODE_DONTWAKE))
> > + goto out;
> > +
> > + if (mmget_not_zero(ctx->mm)) {
> > + ret = remap_pages(ctx->mm, current->mm,
> > + uffdio_move.dst, uffdio_move.src,
> > + uffdio_move.len, uffdio_move.mode);
> > + mmput(ctx->mm);
> > + } else {
> > + return -ESRCH;
> > + }
> > +
> > + if (unlikely(put_user(ret, &user_uffdio_move->move)))
> > + return -EFAULT;
> > + if (ret < 0)
> > + goto out;
> > +
> > + /* len == 0 would wake all */
> > + BUG_ON(!ret);
> > + range.len = ret;
> > + if (!(uffdio_move.mode & UFFDIO_MOVE_MODE_DONTWAKE)) {
> > + range.start = uffdio_move.dst;
> > + wake_userfault(ctx, &range);
> > + }
> > + ret = range.len == uffdio_move.len ? 0 : -EAGAIN;
> > +
> > +out:
> > + return ret;
> > +}
> > +
> > /*
> > * userland asks for a certain API version and we return which bits
> > * and ioctl commands are implemented in this kernel for such API
> > @@ -2131,6 +2191,9 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd,
> > case UFFDIO_ZEROPAGE:
> > ret = userfaultfd_zeropage(ctx, arg);
> > break;
> > + case UFFDIO_MOVE:
> > + ret = userfaultfd_remap(ctx, arg);
> > + break;
> > case UFFDIO_WRITEPROTECT:
> > ret = userfaultfd_writeprotect(ctx, arg);
> > break;
> > diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> > index b26fe858fd44..8034eda972e5 100644
> > --- a/include/linux/rmap.h
> > +++ b/include/linux/rmap.h
> > @@ -121,6 +121,11 @@ static inline void anon_vma_lock_write(struct anon_vma *anon_vma)
> > down_write(&anon_vma->root->rwsem);
> > }
> >
> > +static inline int anon_vma_trylock_write(struct anon_vma *anon_vma)
> > +{
> > + return down_write_trylock(&anon_vma->root->rwsem);
> > +}
> > +
> > static inline void anon_vma_unlock_write(struct anon_vma *anon_vma)
> > {
> > up_write(&anon_vma->root->rwsem);
> > diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
> > index f2dc19f40d05..ce8d20b57e8c 100644
> > --- a/include/linux/userfaultfd_k.h
> > +++ b/include/linux/userfaultfd_k.h
> > @@ -93,6 +93,18 @@ extern int mwriteprotect_range(struct mm_struct *dst_mm,
> > extern long uffd_wp_range(struct vm_area_struct *vma,
> > unsigned long start, unsigned long len, bool enable_wp);
> >
> > +/* remap_pages */
> > +void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
> > +void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
> > +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > + unsigned long dst_start, unsigned long src_start,
> > + unsigned long len, __u64 flags);
> > +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> > + struct vm_area_struct *dst_vma,
> > + struct vm_area_struct *src_vma,
> > + unsigned long dst_addr, unsigned long src_addr);
> > +
> > /* mm helpers */
> > static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
> > struct vm_userfaultfd_ctx vm_ctx)
> > diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h
> > index 0dbc81015018..2841e4ea8f2c 100644
> > --- a/include/uapi/linux/userfaultfd.h
> > +++ b/include/uapi/linux/userfaultfd.h
> > @@ -41,7 +41,8 @@
> > UFFD_FEATURE_WP_HUGETLBFS_SHMEM | \
> > UFFD_FEATURE_WP_UNPOPULATED | \
> > UFFD_FEATURE_POISON | \
> > - UFFD_FEATURE_WP_ASYNC)
> > + UFFD_FEATURE_WP_ASYNC | \
> > + UFFD_FEATURE_MOVE)
> > #define UFFD_API_IOCTLS \
> > ((__u64)1 << _UFFDIO_REGISTER | \
> > (__u64)1 << _UFFDIO_UNREGISTER | \
> > @@ -50,6 +51,7 @@
> > ((__u64)1 << _UFFDIO_WAKE | \
> > (__u64)1 << _UFFDIO_COPY | \
> > (__u64)1 << _UFFDIO_ZEROPAGE | \
> > + (__u64)1 << _UFFDIO_MOVE | \
> > (__u64)1 << _UFFDIO_WRITEPROTECT | \
> > (__u64)1 << _UFFDIO_CONTINUE | \
> > (__u64)1 << _UFFDIO_POISON)
> > @@ -73,6 +75,7 @@
> > #define _UFFDIO_WAKE (0x02)
> > #define _UFFDIO_COPY (0x03)
> > #define _UFFDIO_ZEROPAGE (0x04)
> > +#define _UFFDIO_MOVE (0x05)
> > #define _UFFDIO_WRITEPROTECT (0x06)
> > #define _UFFDIO_CONTINUE (0x07)
> > #define _UFFDIO_POISON (0x08)
> > @@ -92,6 +95,8 @@
> > struct uffdio_copy)
> > #define UFFDIO_ZEROPAGE _IOWR(UFFDIO, _UFFDIO_ZEROPAGE, \
> > struct uffdio_zeropage)
> > +#define UFFDIO_MOVE _IOWR(UFFDIO, _UFFDIO_MOVE, \
> > + struct uffdio_move)
> > #define UFFDIO_WRITEPROTECT _IOWR(UFFDIO, _UFFDIO_WRITEPROTECT, \
> > struct uffdio_writeprotect)
> > #define UFFDIO_CONTINUE _IOWR(UFFDIO, _UFFDIO_CONTINUE, \
> > @@ -222,6 +227,9 @@ struct uffdio_api {
> > * asynchronous mode is supported in which the write fault is
> > * automatically resolved and write-protection is un-set.
> > * It implies UFFD_FEATURE_WP_UNPOPULATED.
> > + *
> > + * UFFD_FEATURE_MOVE indicates that the kernel supports moving an
> > + * existing page contents from userspace.
> > */
> > #define UFFD_FEATURE_PAGEFAULT_FLAG_WP (1<<0)
> > #define UFFD_FEATURE_EVENT_FORK (1<<1)
> > @@ -239,6 +247,7 @@ struct uffdio_api {
> > #define UFFD_FEATURE_WP_UNPOPULATED (1<<13)
> > #define UFFD_FEATURE_POISON (1<<14)
> > #define UFFD_FEATURE_WP_ASYNC (1<<15)
> > +#define UFFD_FEATURE_MOVE (1<<16)
> > __u64 features;
> >
> > __u64 ioctls;
> > @@ -347,6 +356,24 @@ struct uffdio_poison {
> > __s64 updated;
> > };
> >
> > +struct uffdio_move {
> > + __u64 dst;
> > + __u64 src;
> > + __u64 len;
> > + /*
> > + * Especially if used to atomically remove memory from the
> > + * address space the wake on the dst range is not needed.
> > + */
> > +#define UFFDIO_MOVE_MODE_DONTWAKE ((__u64)1<<0)
> > +#define UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES ((__u64)1<<1)
> > + __u64 mode;
> > + /*
> > + * "move" is written by the ioctl and must be at the end: the
> > + * copy_from_user will not read the last 8 bytes.
> > + */
> > + __s64 move;
> > +};
> > +
> > /*
> > * Flags for the userfaultfd(2) system call itself.
> > */
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 9656be95a542..6fac5c3d66e6 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -2086,6 +2086,144 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > return ret;
> > }
> >
> > +#ifdef CONFIG_USERFAULTFD
> > +/*
> > + * The PT lock for src_pmd and the mmap_lock for reading are held by
> > + * the caller, but it must return after releasing the
> > + * page_table_lock. Just move the page from src_pmd to dst_pmd if possible.
> > + * Return zero if succeeded in moving the page, -EAGAIN if it needs to be
> > + * repeated by the caller, or other errors in case of failure.
> > + */
> > +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> > + struct vm_area_struct *dst_vma,
> > + struct vm_area_struct *src_vma,
> > + unsigned long dst_addr, unsigned long src_addr)
> > +{
> > + pmd_t _dst_pmd, src_pmdval;
> > + struct page *src_page;
> > + struct folio *src_folio;
> > + struct anon_vma *src_anon_vma;
> > + spinlock_t *src_ptl, *dst_ptl;
> > + pgtable_t src_pgtable, dst_pgtable;
> > + struct mmu_notifier_range range;
> > + int err = 0;
> > +
> > + src_pmdval = *src_pmd;
> > + src_ptl = pmd_lockptr(src_mm, src_pmd);
> > +
> > + lockdep_assert_held(src_ptl);
> > + mmap_assert_locked(src_mm);
> > + mmap_assert_locked(dst_mm);
> > +
> > + BUG_ON(!pmd_none(dst_pmdval));
> > + BUG_ON(src_addr & ~HPAGE_PMD_MASK);
> > + BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
> > +
> > + if (!pmd_trans_huge(src_pmdval)) {
> > + spin_unlock(src_ptl);
> > + if (is_pmd_migration_entry(src_pmdval)) {
> > + pmd_migration_entry_wait(src_mm, &src_pmdval);
> > + return -EAGAIN;
> > + }
> > + return -ENOENT;
> > + }
> > +
> > + src_page = pmd_page(src_pmdval);
> > + if (unlikely(!PageAnonExclusive(src_page))) {
> > + spin_unlock(src_ptl);
> > + return -EBUSY;
> > + }
> > +
> > + src_folio = page_folio(src_page);
> > + folio_get(src_folio);
> > + spin_unlock(src_ptl);
> > +
> > + /* preallocate dst_pgtable if needed */
> > + if (dst_mm != src_mm) {
> > + dst_pgtable = pte_alloc_one(dst_mm);
> > + if (unlikely(!dst_pgtable)) {
> > + err = -ENOMEM;
> > + goto put_folio;
> > + }
> > + } else {
> > + dst_pgtable = NULL;
> > + }
> > +
>
> IIUC Lokesh's comment applies here, we probably need the
> flush_cache_range(), not for x86 but for the other ones..
>
> cachetlb.rst:
>
> Next, we have the cache flushing interfaces. In general, when Linux
> is changing an existing virtual-->physical mapping to a new value,
> the sequence will be in one of the following forms::
>
> 1) flush_cache_mm(mm);
> change_all_page_tables_of(mm);
> flush_tlb_mm(mm);
>
> 2) flush_cache_range(vma, start, end);
> change_range_of_page_tables(mm, start, end);
> flush_tlb_range(vma, start, end);
>
> 3) flush_cache_page(vma, addr, pfn);
> set_pte(pte_pointer, new_pte_val);
> flush_tlb_page(vma, addr);

Thanks for the reference. I guess that's to support VIVT caches?
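
If so, I guess the fix here is a one-liner before the page tables are
changed, something like:

        flush_cache_range(src_vma, src_addr, src_addr + HPAGE_PMD_SIZE);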

>
> > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
> > + src_addr + HPAGE_PMD_SIZE);
> > + mmu_notifier_invalidate_range_start(&range);
> > +
> > + folio_lock(src_folio);
> > +
> > + /*
> > + * split_huge_page walks the anon_vma chain without the page
> > + * lock. Serialize against it with the anon_vma lock, the page
> > + * lock is not enough.
> > + */
> > + src_anon_vma = folio_get_anon_vma(src_folio);
> > + if (!src_anon_vma) {
> > + err = -EAGAIN;
> > + goto unlock_folio;
> > + }
> > + anon_vma_lock_write(src_anon_vma);
> > +
> > + dst_ptl = pmd_lockptr(dst_mm, dst_pmd);
> > + double_pt_lock(src_ptl, dst_ptl);
> > + if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
> > + !pmd_same(*dst_pmd, dst_pmdval))) {
> > + double_pt_unlock(src_ptl, dst_ptl);
> > + err = -EAGAIN;
> > + goto put_anon_vma;
> > + }
> > + if (!PageAnonExclusive(&src_folio->page)) {
> > + double_pt_unlock(src_ptl, dst_ptl);
> > + err = -EBUSY;
> > + goto put_anon_vma;
> > + }
> > +
> > + BUG_ON(!folio_test_head(src_folio));
> > + BUG_ON(!folio_test_anon(src_folio));
> > +
> > + folio_move_anon_rmap(src_folio, dst_vma);
> > + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > +
> > + src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> > + _dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
> > + _dst_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
>
> Last time the conclusion is we leverage can_change_pmd_writable(), no?

After your explanation that this works correctly for soft-dirty and
UFFD_WP, I thought the only thing left to handle was the check for
VM_WRITE in both src_vma and dst_vma (which I added into
validate_remap_areas()). Maybe I misunderstood, and if so, I can
replace the above PageAnonExclusive() with can_change_pmd_writable()
(note that we err out on VM_SHARED VMAs, so PageAnonExclusive() will
be included in that check).
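
IOW, something along these lines (sketch only; whether pmd_mkdirty()
should still be applied unconditionally is part of the open question):

        _dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
        _dst_pmd = pmd_mkdirty(_dst_pmd);
        if (can_change_pmd_writable(dst_vma, dst_addr, _dst_pmd))
                _dst_pmd = maybe_pmd_mkwrite(_dst_pmd, dst_vma);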

>
> > + set_pmd_at(dst_mm, dst_addr, dst_pmd, _dst_pmd);
> > +
> > + src_pgtable = pgtable_trans_huge_withdraw(src_mm, src_pmd);
> > + if (dst_pgtable) {
> > + pgtable_trans_huge_deposit(dst_mm, dst_pmd, dst_pgtable);
> > + pte_free(src_mm, src_pgtable);
> > + dst_pgtable = NULL;
> > +
> > + mm_inc_nr_ptes(dst_mm);
> > + mm_dec_nr_ptes(src_mm);
> > + add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
> > + add_mm_counter(src_mm, MM_ANONPAGES, -HPAGE_PMD_NR);
> > + } else {
> > + pgtable_trans_huge_deposit(dst_mm, dst_pmd, src_pgtable);
> > + }
> > + double_pt_unlock(src_ptl, dst_ptl);
> > +
> > +put_anon_vma:
> > + anon_vma_unlock_write(src_anon_vma);
> > + put_anon_vma(src_anon_vma);
> > +unlock_folio:
> > + /* unblock rmap walks */
> > + folio_unlock(src_folio);
> > + mmu_notifier_invalidate_range_end(&range);
> > + if (dst_pgtable)
> > + pte_free(dst_mm, dst_pgtable);
> > +put_folio:
> > + folio_put(src_folio);
> > +
> > + return err;
> > +}
> > +#endif /* CONFIG_USERFAULTFD */
> > +
> > /*
> > * Returns page table lock pointer if a given pmd maps a thp, NULL otherwise.
> > *
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 2b5c0321d96b..0c1ee7172852 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1136,6 +1136,9 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > * Prevent all access to pagetables with the exception of
> > * gup_fast later handled by the ptep_clear_flush and the VM
> > * handled by the anon_vma lock + PG_lock.
> > + *
> > + * UFFDIO_MOVE is prevented to race as well thanks to the
> > + * mmap_lock.
> > */
> > mmap_write_lock(mm);
> > result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index f9ddc50269d2..a5919cac9a08 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -490,6 +490,12 @@ void __init anon_vma_init(void)
> > * page_remove_rmap() that the anon_vma pointer from page->mapping is valid
> > * if there is a mapcount, we can dereference the anon_vma after observing
> > * those.
> > + *
> > + * NOTE: the caller should normally hold folio lock when calling this. If
> > + * not, the caller needs to double check the anon_vma didn't change after
> > + * taking the anon_vma lock for either read or write (UFFDIO_MOVE can modify it
> > + * concurrently without folio lock protection). See folio_lock_anon_vma_read()
> > + * which has already covered that, and comment above remap_pages().
> > */
> > struct anon_vma *folio_get_anon_vma(struct folio *folio)
> > {
> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index 96d9eae5c7cc..45ce1a8b8ab9 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -842,3 +842,605 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
> > mmap_read_unlock(dst_mm);
> > return err;
> > }
> > +
> > +
> > +void double_pt_lock(spinlock_t *ptl1,
> > + spinlock_t *ptl2)
> > + __acquires(ptl1)
> > + __acquires(ptl2)
> > +{
> > + spinlock_t *ptl_tmp;
> > +
> > + if (ptl1 > ptl2) {
> > + /* exchange ptl1 and ptl2 */
> > + ptl_tmp = ptl1;
> > + ptl1 = ptl2;
> > + ptl2 = ptl_tmp;
> > + }
> > + /* lock in virtual address order to avoid lock inversion */
> > + spin_lock(ptl1);
> > + if (ptl1 != ptl2)
> > + spin_lock_nested(ptl2, SINGLE_DEPTH_NESTING);
> > + else
> > + __acquire(ptl2);
> > +}
> > +
> > +void double_pt_unlock(spinlock_t *ptl1,
> > + spinlock_t *ptl2)
> > + __releases(ptl1)
> > + __releases(ptl2)
> > +{
> > + spin_unlock(ptl1);
> > + if (ptl1 != ptl2)
> > + spin_unlock(ptl2);
> > + else
> > + __release(ptl2);
> > +}
> > +
> > +
> > +static int remap_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > + struct vm_area_struct *dst_vma,
> > + struct vm_area_struct *src_vma,
> > + unsigned long dst_addr, unsigned long src_addr,
> > + pte_t *dst_pte, pte_t *src_pte,
> > + pte_t orig_dst_pte, pte_t orig_src_pte,
> > + spinlock_t *dst_ptl, spinlock_t *src_ptl,
> > + struct folio *src_folio)
> > +{
> > + double_pt_lock(dst_ptl, src_ptl);
> > +
> > + if (!pte_same(*src_pte, orig_src_pte) ||
> > + !pte_same(*dst_pte, orig_dst_pte)) {
> > + double_pt_unlock(dst_ptl, src_ptl);
> > + return -EAGAIN;
> > + }
> > + if (folio_test_large(src_folio) ||
> > + !PageAnonExclusive(&src_folio->page)) {
> > + double_pt_unlock(dst_ptl, src_ptl);
> > + return -EBUSY;
> > + }
> > +
> > + BUG_ON(!folio_test_anon(src_folio));
> > +
> > + folio_move_anon_rmap(src_folio, dst_vma);
> > + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > +
> > + orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
> > + orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
> > + orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma);
>
> can_change_pte_writable()?

Same as my previous comment. If that's still needed, I'll replace the
above PageAnonExclusive() check with can_change_pte_writable().

>
> > +
> > + set_pte_at(dst_mm, dst_addr, dst_pte, orig_dst_pte);
> > +
> > + if (dst_mm != src_mm) {
> > + inc_mm_counter(dst_mm, MM_ANONPAGES);
> > + dec_mm_counter(src_mm, MM_ANONPAGES);
> > + }
> > +
> > + double_pt_unlock(dst_ptl, src_ptl);
> > +
> > + return 0;
> > +}
> > +
> > +static int remap_swap_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > + unsigned long dst_addr, unsigned long src_addr,
> > + pte_t *dst_pte, pte_t *src_pte,
> > + pte_t orig_dst_pte, pte_t orig_src_pte,
> > + spinlock_t *dst_ptl, spinlock_t *src_ptl)
> > +{
> > + if (!pte_swp_exclusive(orig_src_pte))
> > + return -EBUSY;
> > +
> > + double_pt_lock(dst_ptl, src_ptl);
> > +
> > + if (!pte_same(*src_pte, orig_src_pte) ||
> > + !pte_same(*dst_pte, orig_dst_pte)) {
> > + double_pt_unlock(dst_ptl, src_ptl);
> > + return -EAGAIN;
> > + }
> > +
> > + orig_src_pte = ptep_get_and_clear(src_mm, src_addr, src_pte);
> > + set_pte_at(dst_mm, dst_addr, dst_pte, orig_src_pte);
> > +
> > + if (dst_mm != src_mm) {
> > + inc_mm_counter(dst_mm, MM_SWAPENTS);
> > + dec_mm_counter(src_mm, MM_SWAPENTS);
> > + }
> > +
> > + double_pt_unlock(dst_ptl, src_ptl);
> > +
> > + return 0;
> > +}
> > +
> > +/*
> > + * The mmap_lock for reading is held by the caller. Just move the page
> > + * from src_pmd to dst_pmd if possible, and return true if succeeded
> > + * in moving the page.
> > + */
> > +static int remap_pages_pte(struct mm_struct *dst_mm,
> > + struct mm_struct *src_mm,
> > + pmd_t *dst_pmd,
> > + pmd_t *src_pmd,
> > + struct vm_area_struct *dst_vma,
> > + struct vm_area_struct *src_vma,
> > + unsigned long dst_addr,
> > + unsigned long src_addr,
> > + __u64 mode)
> > +{
> > + swp_entry_t entry;
> > + pte_t orig_src_pte, orig_dst_pte;
> > + pte_t src_folio_pte;
> > + spinlock_t *src_ptl, *dst_ptl;
> > + pte_t *src_pte = NULL;
> > + pte_t *dst_pte = NULL;
> > +
> > + struct folio *src_folio = NULL;
> > + struct anon_vma *src_anon_vma = NULL;
> > + struct mmu_notifier_range range;
> > + int err = 0;
> > +
>
> Same flush_cache_range().

Ack.

>
> > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
> > + src_addr, src_addr + PAGE_SIZE);
> > + mmu_notifier_invalidate_range_start(&range);
> > +retry:
> > + dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
> > +
> > + /* Retry if a huge pmd materialized from under us */
> > + if (unlikely(!dst_pte)) {
> > + err = -EAGAIN;
> > + goto out;
> > + }
> > +
> > + src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
> > +
> > + /*
> > + * We held the mmap_lock for reading so MADV_DONTNEED
> > + * can zap transparent huge pages under us, or the
> > + * transparent huge page fault can establish new
> > + * transparent huge pages under us.
> > + */
> > + if (unlikely(!src_pte)) {
> > + err = -EAGAIN;
> > + goto out;
> > + }
> > +
> > + BUG_ON(pmd_none(*dst_pmd));
> > + BUG_ON(pmd_none(*src_pmd));
> > + BUG_ON(pmd_trans_huge(*dst_pmd));
> > + BUG_ON(pmd_trans_huge(*src_pmd));
> > +
> > + spin_lock(dst_ptl);
> > + orig_dst_pte = *dst_pte;
> > + spin_unlock(dst_ptl);
> > + if (!pte_none(orig_dst_pte)) {
> > + err = -EEXIST;
> > + goto out;
> > + }
> > +
> > + spin_lock(src_ptl);
> > + orig_src_pte = *src_pte;
> > + spin_unlock(src_ptl);
> > + if (pte_none(orig_src_pte)) {
> > + if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES))
> > + err = -ENOENT;
> > + else /* nothing to do to remap a hole */
> > + err = 0;
> > + goto out;
> > + }
> > +
> > + /* If PTE changed after we locked the folio then start over */
> > + if (src_folio && unlikely(!pte_same(src_folio_pte, orig_src_pte))) {
> > + err = -EAGAIN;
> > + goto out;
> > + }
> > +
> > + if (pte_present(orig_src_pte)) {
> > + /*
> > + * Pin and lock both source folio and anon_vma. Since we are in
> > + * RCU read section, we can't block, so on contention have to
> > + * unmap the ptes, obtain the lock and retry.
> > + */
> > + if (!src_folio) {
> > + struct folio *folio;
> > +
> > + /*
> > + * Pin the page while holding the lock to be sure the
> > + * page isn't freed under us
> > + */
> > + spin_lock(src_ptl);
> > + if (!pte_same(orig_src_pte, *src_pte)) {
> > + spin_unlock(src_ptl);
> > + err = -EAGAIN;
> > + goto out;
> > + }
> > +
> > + folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
> > + if (!folio || folio_test_large(folio) ||
> > + !PageAnonExclusive(&folio->page)) {
> > + spin_unlock(src_ptl);
> > + err = -EBUSY;
> > + goto out;
> > + }
> > +
> > + folio_get(folio);
> > + src_folio = folio;
> > + src_folio_pte = orig_src_pte;
> > + spin_unlock(src_ptl);
> > +
> > + if (!folio_trylock(src_folio)) {
> > + pte_unmap(&orig_src_pte);
> > + pte_unmap(&orig_dst_pte);
> > + src_pte = dst_pte = NULL;
> > + /* now we can block and wait */
> > + folio_lock(src_folio);
> > + goto retry;
> > + }
> > + }
> > +
> > + if (!src_anon_vma) {
> > + /*
> > + * folio_referenced walks the anon_vma chain
> > + * without the folio lock. Serialize against it with
> > + * the anon_vma lock, the folio lock is not enough.
> > + */
> > + src_anon_vma = folio_get_anon_vma(src_folio);
> > + if (!src_anon_vma) {
> > + /* page was unmapped from under us */
> > + err = -EAGAIN;
> > + goto out;
> > + }
> > + if (!anon_vma_trylock_write(src_anon_vma)) {
> > + pte_unmap(&orig_src_pte);
> > + pte_unmap(&orig_dst_pte);
> > + src_pte = dst_pte = NULL;
> > + /* now we can block and wait */
> > + anon_vma_lock_write(src_anon_vma);
> > + goto retry;
> > + }
> > + }
> > +
> > + err = remap_present_pte(dst_mm, src_mm, dst_vma, src_vma,
> > + dst_addr, src_addr, dst_pte, src_pte,
> > + orig_dst_pte, orig_src_pte,
> > + dst_ptl, src_ptl, src_folio);
> > + } else {
> > + entry = pte_to_swp_entry(orig_src_pte);
> > + if (non_swap_entry(entry)) {
> > + if (is_migration_entry(entry)) {
> > + pte_unmap(&orig_src_pte);
> > + pte_unmap(&orig_dst_pte);
> > + src_pte = dst_pte = NULL;
> > + migration_entry_wait(src_mm, src_pmd,
> > + src_addr);
> > + err = -EAGAIN;
> > + } else
> > + err = -EFAULT;
> > + goto out;
> > + }
> > +
> > + err = remap_swap_pte(dst_mm, src_mm, dst_addr, src_addr,
> > + dst_pte, src_pte,
> > + orig_dst_pte, orig_src_pte,
> > + dst_ptl, src_ptl);
> > + }
> > +
> > +out:
> > + if (src_anon_vma) {
> > + anon_vma_unlock_write(src_anon_vma);
> > + put_anon_vma(src_anon_vma);
> > + }
> > + if (src_folio) {
> > + folio_unlock(src_folio);
> > + folio_put(src_folio);
> > + }
> > + if (dst_pte)
> > + pte_unmap(dst_pte);
> > + if (src_pte)
> > + pte_unmap(src_pte);
> > + mmu_notifier_invalidate_range_end(&range);
> > +
> > + return err;
> > +}
> > +
> > +static int validate_remap_areas(struct vm_area_struct *src_vma,
>
> s/remap/move/

Ack.

>
> > + struct vm_area_struct *dst_vma)
> > +{
> > + /* Only allow remapping if both have the same access and protection */
>
> s/remap/move/

Ack.

>
> PS: I'll stop commenting on renamings.. but please have a look over the whole
> patch..

Will do.

>
> > + if ((src_vma->vm_flags & VM_ACCESS_FLAGS) != (dst_vma->vm_flags & VM_ACCESS_FLAGS) ||
> > + pgprot_val(src_vma->vm_page_prot) != pgprot_val(dst_vma->vm_page_prot))
> > + return -EINVAL;
> > +
> > + /* Only allow remapping if both are mlocked or both aren't */
> > + if ((src_vma->vm_flags & VM_LOCKED) != (dst_vma->vm_flags & VM_LOCKED))
> > + return -EINVAL;
> > +
> > + if (!(src_vma->vm_flags & VM_WRITE) || !(dst_vma->vm_flags & VM_WRITE))
> > + return -EINVAL;
> > +
> > + /*
> > + * Be strict and only allow remap_pages if either the src or
> > + * dst range is registered in the userfaultfd to prevent
> > + * userland errors going unnoticed. As far as the VM
> > + * consistency is concerned, it would be perfectly safe to
> > + * remove this check, but there's no useful usage for
> > + * remap_pages outside of userfaultfd registered ranges. This
> > + * is after all why it is an ioctl belonging to the
> > + * userfaultfd and not a syscall.
> > + *
> > + * Allow both vmas to be registered in the userfaultfd, just
> > + * in case somebody finds a way to make such a case useful.
> > + * Normally only one of the two vmas would be registered in
> > + * the userfaultfd.
> > + */
> > + if (!dst_vma->vm_userfaultfd_ctx.ctx &&
> > + !src_vma->vm_userfaultfd_ctx.ctx)
> > + return -EINVAL;
>
> When rethinking this, I'm wondering whether we should make it strict.
>
> The problem is, src/dst is not equal in this case. We got dst_mm from the
> uffd context passed in, so we know at least dst_mm has something registered
> onto this uffd we're working on, no matter it's the dst_vma or not. It'll
> be weird if dst_vma is not registered to the uffd context we're using right
> now..
>
> Then, can we have a case where src vma is registered with uffd, then dest
> vma is not? It sounds really weird, since dest mm needs to be registered
> to uffd anyway. It sounds to me the interface is a bit loose and unclear.
>
> I'm thinking a more reasonable way to safety-check this: we model UFFDIO_MOVE
> the same way as UFFDIO_COPY/CONTINUE/... where we always assumed the uffd
> context is the target (uffd registration required), while current mm is
> always the source (no uffd registration required).
>
> It also means "dropping current page" does not require privilege (here
> UFFDIO_MOVE is the same as DONTNEED, as discussed), while "installing a
> page into the pgtable" does require some privilege (where privilege granted
> by the uffd itself).
>
> Then it means something like:
>
> 	if (!dst_vma->vm_userfaultfd_ctx.ctx ||
> 	    dst_vma->vm_userfaultfd_ctx.ctx != ctx)
> 		return -EINVAL;
>
> Here ctx should be passed in from the upper layer.
>
> What do you think?

I didn't think that hard about this check but with your analysis it
does sound reasonable. I'll adopt it. Thanks!
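
Something like this then, as a rough sketch (assuming the ctx pointer
gets plumbed from userfaultfd_remap() down into the validation helper,
and the rest of the checks stay as they are):

	static int validate_move_areas(struct userfaultfd_ctx *ctx,
				       struct vm_area_struct *src_vma,
				       struct vm_area_struct *dst_vma)
	{
		/* ... existing vm_flags/pgprot/anon_vma checks ... */

		/*
		 * The dst vma must be registered on the uffd context the
		 * ioctl operates on; the src vma needs no registration.
		 */
		if (!dst_vma->vm_userfaultfd_ctx.ctx ||
		    dst_vma->vm_userfaultfd_ctx.ctx != ctx)
			return -EINVAL;

		return 0;
	}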

>
> PS: it seems current uffd ioctls are not double checking the vma's ctx is
> the same as the ctx that the ioctl is operating on.. but it seems we should
> do that for all. I'll think more about it..
>
> > +
> > + /*
> > + * FIXME: only allow remapping across anonymous vmas,
> > + * tmpfs should be added.
> > + */
> > + if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
> > + return -EINVAL;
> > +
> > + /*
> > + * Ensure the dst_vma has an anon_vma or this page
> > + * would get a NULL anon_vma when moved in the
> > + * dst_vma.
> > + */
> > + if (unlikely(anon_vma_prepare(dst_vma)))
> > + return -ENOMEM;
> > +
> > + return 0;
> > +}
> > +
> > +/**
> > + * remap_pages - remap arbitrary anonymous pages of an existing vma
> > + * @dst_start: start of the destination virtual memory range
> > + * @src_start: start of the source virtual memory range
> > + * @len: length of the virtual memory range
> > + *
> > + * remap_pages() remaps arbitrary anonymous pages atomically in zero
> > + * copy. It only works on non shared anonymous pages because those can
> > + * be relocated without generating non linear anon_vmas in the rmap
> > + * code.
> > + *
> > + * It provides a zero copy mechanism to handle userspace page faults.
> > + * The source vma pages should have mapcount == 1, which can be
> > + * enforced by using madvise(MADV_DONTFORK) on src vma.
> > + *
> > + * The thread receiving the page during the userland page fault
> > + * will receive the faulting page in the source vma through the network,
> > + * storage or any other I/O device (MADV_DONTFORK in the source vma
> > + * prevents remap_pages() from failing with -EBUSY if the process forks before
> > + * remap_pages() is called), then it will call remap_pages() to map the
> > + * page in the faulting address in the destination vma.
> > + *
> > + * This userfaultfd command works purely via pagetables, so it's the
> > + * most efficient way to move physical non shared anonymous pages
> > + * across different virtual addresses. Unlike mremap()/mmap()/munmap()
> > + * it does not create any new vmas. The mapping in the destination
> > + * address is atomic.
> > + *
> > + * It only works if the vma protection bits are identical from the
> > + * source and destination vma.
> > + *
> > + * It can remap non shared anonymous pages within the same vma too.
> > + *
> > + * If the source virtual memory range has any unmapped holes, or if
> > + * the destination virtual memory range is not a whole unmapped hole,
> > + * remap_pages() will fail respectively with -ENOENT or -EEXIST. This
> > + * provides a very strict behavior to avoid any chance of memory
> > + * corruption going unnoticed if there are userland race conditions.
> > + * Only one thread should resolve the userland page fault at any given
> > + * time for any given faulting address. This means that if two threads
> > + * try to both call remap_pages() on the same destination address at the
> > + * same time, the second thread will get an explicit error from this
> > + * command.
> > + *
> > + * The command retval will return "len" if successful. The command
> > + * however can be interrupted by fatal signals or errors. If
> > + * interrupted it will return the number of bytes successfully
> > + * remapped before the interruption if any, or the negative error if
> > + * none. It will never return zero. Either it will return an error or
> > + * an amount of bytes successfully moved. If the retval reports a
> > + * "short" remap, the remap_pages() command should be repeated by
> > + * userland with src+retval, dst+retval, len-retval if it wants to know
> > + * about the error that interrupted it.
> > + *
> > + * The UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES flag can be specified to
> > + * prevent -ENOENT errors from materializing if there are holes in the
> > + * source virtual range that is being remapped. The holes will be
> > + * accounted as successfully remapped in the retval of the
> > + * command. This is mostly useful to remap hugepage naturally aligned
> > + * virtual regions without knowing if there are transparent hugepages
> > + * in the regions or not, but preventing the risk of having to split
> > + * the hugepmd during the remap.
> > + *
> > + * If there's any rmap walk that is taking the anon_vma locks without
> > + * first obtaining the folio lock (the only current instance is
> > + * folio_referenced), they will have to verify if the folio->mapping
> > + * has changed after taking the anon_vma lock. If it changed they
> > + * should release the lock and retry obtaining a new anon_vma, because
> > + * it means the anon_vma was changed by remap_pages() before the lock
> > + * could be obtained. This is the only additional complexity added to
> > + * the rmap code to provide this anonymous page remapping functionality.
> > + */
> > +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > + unsigned long dst_start, unsigned long src_start,
> > + unsigned long len, __u64 mode)
> > +{
> > + struct vm_area_struct *src_vma, *dst_vma;
> > + unsigned long src_addr, dst_addr;
> > + pmd_t *src_pmd, *dst_pmd;
> > + long err = -EINVAL;
> > + ssize_t moved = 0;
> > +
> > + /*
> > + * Sanitize the command parameters:
> > + */
> > + BUG_ON(src_start & ~PAGE_MASK);
> > + BUG_ON(dst_start & ~PAGE_MASK);
> > + BUG_ON(len & ~PAGE_MASK);
> > +
> > + /* Does the address range wrap, or is the span zero-sized? */
> > + BUG_ON(src_start + len <= src_start);
> > + BUG_ON(dst_start + len <= dst_start);
> > +
> > + /*
> > + * Because these are read semaphores there's no risk of lock
> > + * inversion.
> > + */
> > + mmap_read_lock(dst_mm);
> > + if (dst_mm != src_mm)
> > + mmap_read_lock(src_mm);
> > +
> > + /*
> > + * Make sure the vma is not shared, that the src and dst remap
> > + * ranges are both valid and fully within a single existing
> > + * vma.
> > + */
> > + src_vma = find_vma(src_mm, src_start);
> > + if (!src_vma || (src_vma->vm_flags & VM_SHARED))
> > + goto out;
> > + if (src_start < src_vma->vm_start ||
> > + src_start + len > src_vma->vm_end)
> > + goto out;
> > +
> > + dst_vma = find_vma(dst_mm, dst_start);
> > + if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
> > + goto out;
> > + if (dst_start < dst_vma->vm_start ||
> > + dst_start + len > dst_vma->vm_end)
> > + goto out;
> > +
> > + err = validate_remap_areas(src_vma, dst_vma);
> > + if (err)
> > + goto out;
> > +
> > + for (src_addr = src_start, dst_addr = dst_start;
> > + src_addr < src_start + len;) {
> > + spinlock_t *ptl;
> > + pmd_t dst_pmdval;
> > + unsigned long step_size;
> > +
> > + BUG_ON(dst_addr >= dst_start + len);
> > + /*
> > + * The below works because an anonymous area would not have a
> > + * transparent huge PUD. If file-backed support is added,
> > + * that case would need to be handled here.
> > + */
> > + src_pmd = mm_find_pmd(src_mm, src_addr);
> > + if (unlikely(!src_pmd)) {
> > + if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES)) {
> > + err = -ENOENT;
> > + break;
> > + }
> > + src_pmd = mm_alloc_pmd(src_mm, src_addr);
> > + if (unlikely(!src_pmd)) {
> > + err = -ENOMEM;
> > + break;
> > + }
> > + }
> > + dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
> > + if (unlikely(!dst_pmd)) {
> > + err = -ENOMEM;
> > + break;
> > + }
> > +
> > + dst_pmdval = pmdp_get_lockless(dst_pmd);
> > + /*
> > + * If the dst_pmd is mapped as THP don't override it and just
> > + * be strict. If dst_pmd changes into THP after this check, the
> > + * remap_pages_huge_pmd() will detect the change and retry
> > + * while remap_pages_pte() will detect the change and fail.
> > + */
> > + if (unlikely(pmd_trans_huge(dst_pmdval))) {
> > + err = -EEXIST;
> > + break;
> > + }
> > +
> > + ptl = pmd_trans_huge_lock(src_pmd, src_vma);
> > + if (ptl) {
> > + if (pmd_devmap(*src_pmd)) {
> > + spin_unlock(ptl);
> > + err = -ENOENT;
> > + break;
> > + }
> > +
> > + /*
> > + * Check if we can move the pmd without
> > + * splitting it. First check the address
> > + * alignment to be the same in src/dst. These
> > + * checks don't actually need the PT lock but
> > + * it's good to do it here to optimize this
> > + * block away at build time if
> > + * CONFIG_TRANSPARENT_HUGEPAGE is not set.
> > + */
> > + if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) ||
> > + src_start + len - src_addr < HPAGE_PMD_SIZE || !pmd_none(dst_pmdval)) {
> > + spin_unlock(ptl);
> > + split_huge_pmd(src_vma, src_pmd, src_addr);
> > + continue;
> > + }
> > +
> > + err = remap_pages_huge_pmd(dst_mm, src_mm,
> > + dst_pmd, src_pmd,
> > + dst_pmdval,
> > + dst_vma, src_vma,
> > + dst_addr, src_addr);
> > + step_size = HPAGE_PMD_SIZE;
> > + } else {
> > + if (pmd_none(*src_pmd)) {
> > + if (!(mode & UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES)) {
> > + err = -ENOENT;
> > + break;
> > + }
> > + if (unlikely(__pte_alloc(src_mm, src_pmd))) {
> > + err = -ENOMEM;
> > + break;
> > + }
> > + }
> > +
> > + if (unlikely(pte_alloc(dst_mm, dst_pmd))) {
> > + err = -ENOMEM;
> > + break;
> > + }
> > +
> > + err = remap_pages_pte(dst_mm, src_mm,
> > + dst_pmd, src_pmd,
> > + dst_vma, src_vma,
> > + dst_addr, src_addr,
> > + mode);
> > + step_size = PAGE_SIZE;
> > + }
> > +
> > + cond_resched();
> > +
> > + if (fatal_signal_pending(current)) {
> > + /* Do not override an error */
> > + if (!err || err == -EAGAIN)
> > + err = -EINTR;
> > + break;
> > + }
> > +
> > + if (err) {
> > + if (err == -EAGAIN)
> > + continue;
> > + break;
> > + }
> > +
> > + /* Proceed to the next page */
> > + dst_addr += step_size;
> > + src_addr += step_size;
> > + moved += step_size;
> > + }
> > +
> > +out:
> > + mmap_read_unlock(dst_mm);
> > + if (dst_mm != src_mm)
> > + mmap_read_unlock(src_mm);
> > + BUG_ON(moved < 0);
> > + BUG_ON(err > 0);
> > + BUG_ON(!moved && !err);
> > + return moved ? moved : err;
> > +}
> > --
> > 2.42.0.609.gbb76f46606-goog
> >
>
> --
> Peter Xu
>

2023-10-19 21:46:56

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Fri, Oct 13, 2023 at 9:08 AM Peter Xu <[email protected]> wrote:
>
> On Fri, Oct 13, 2023 at 11:56:31AM +0200, David Hildenbrand wrote:
> > Hi Peter,
>
> Hi, David,
>
> >
> > > I used to have the same thought with David on whether we can simplify the
> > > design to e.g. limit it to single mm. Then I found that the trickiest is
> > > actually patch 1 together with the anon_vma manipulations, and the problem
> > > is that's not avoidable even if we restrict the api to apply on single mm.
> > >
> > > What else we can benefit from single mm? One less mmap read lock, but
> > > probably that's all we can get; IIUC we need to keep most of the rest of
> > > the code, e.g. pgtable walks, double pgtable lockings, etc.
> >
> > No existing mechanisms move anon pages between unrelated processes; that
> > naturally makes me nervous if we're doing it "just because we can".
>
> IMHO that's also the potential here, when guarded by the userfaultfd
> descriptor being shared between the two processes.
>
> See below with more comment on the raised concerns.
>
> >
> > >
> > > Actually, even though I have no solid clue, I had a feeling that there
> > > can be some interesting way to leverage this across-mm movement, while
> > > keeping things all safe (by e.g. elaborately requiring other proc to create
> > > uffd and deliver to this proc).
> >
> > Okay, but no real use cases yet.
>
> I can provide a "not solid" example. I didn't mention it because it's
> really something that just popped into my mind when thinking cross-mm, so I
> never discussed with anyone yet nor shared it anywhere.
>
> Consider VM live upgrade in a generic form (e.g., no VFIO), we can do that
> very efficiently with shmem or hugetlbfs, but not yet with anonymous
> memory. With REMAP we could do extremely efficient postcopy live upgrade
> with anonymous memory as well.
>
> Basically I see it as a potential way of moving memory efficiently, especially
> with thp.
>
> >
> > >
> > > Considering Andrea's original version already contains those bits and all
> > > above, I'd vote that we go ahead with supporting two MMs.
> >
> > You can do nasty things with that, as it stands, on the upstream codebase.
> >
> > If you pin the page in src_mm and move it to dst_mm, you successfully broke
> > an invariant that "exclusive" means "no other references from other
> > processes". That page is marked exclusive but it is, in fact, not exclusive.
>
> It is still exclusive to the dst mm? I see your point, but I think you're
> tying exclusiveness together with pinning, and IMHO that may not be
> always necessary?
>
> >
> > Once you achieved that, you can easily have src_mm not have MMF_HAS_PINNED,
>
> (I suppose you meant dst_mm here)
>
> > so you can just COW-share that page. Now you successfully broke the
> > invariant that COW-shared pages must not be pinned. And you can even trigger
> > VM_BUG_ONs, like in sanity_check_pinned_pages().
>
> Yeah, that's really unfortunate. But frankly, I don't think it's the fault
> of this new feature, but the rest.
>
> Let's imagine if the MMF_HAS_PINNED wasn't proposed as a per-mm flag, but
> per-vma; I don't see why we couldn't, since it's simply a hint so far.
> Then if we apply the same rule here, UFFDIO_REMAP won't even work for
> single-mm as long as cross-vma. Then UFFDIO_REMAP as a whole feature will
> be NACKed simply because of this..
>
> And I don't think anyone can guarantee a per-vma MMF_HAS_PINNED can never
> happen, or any further change to pinning solution that may affect this. So
> far it just looks unsafe to remap a pinned page to me.
>
> I don't have a good suggestion here if this is a risk.. I'd think it risky
> then to do REMAP over pinned pages no matter cross-mm or single-mm. It
> means probably we just rule them out: folio_maybe_dma_pinned() may not even
> be enough to be safe with fast-gup. We may need page_needs_cow_for_dma()
> with proper write_protect_seq no matter cross-mm or single-mm?
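
Perhaps the simplest option for now is to refuse moving pinned pages
altogether. A rough sketch (assuming we already hold a reference on
src_folio at that point):

	/* Refuse to move folios that may be pinned for DMA. */
	if (unlikely(folio_maybe_dma_pinned(src_folio)))
		return -EBUSY;

That side-steps the exclusivity questions above, at the cost of failing
the move for pinned pages in both the cross-mm and single-mm cases.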
>
> >
> > Can it all be fixed? Sure, with more complexity. For something without clear
> > motivation, I'll have to pass.
>
> I think what you raised is a valid concern, but IMHO it's better fixed no
> matter cross-mm or single-mm. What do you think?
>
> In general, pinning loses its whole point to me for a userspace that either
> DONTNEEDs or REMAPs the page. What would be great to do here is to unpin
> it upon DONTNEED/REMAP/whatever drops the page, because it loses its
> coherency anyway, IMHO.
>
> >
> > Once there is real demand, we can revisit it and explore what else we would
> > have to take care of (I don't know how memcg behaves when moving between
> > completely unrelated processes, maybe that works as expected, I don't know
> > and I have no time to spare on reviewing features with no real use cases)
> > and announce it as a new feature.
>
> Good point. memcg is probably needed..
>
> So you reminded me to do a more thorough review against zap/fault paths, I
> think what's missing are (besides page pinning):
>
> - mem_cgroup_charge()/mem_cgroup_uncharge():

Good point. Will add in the next version.
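
Probably something along these lines for the cross-mm case (a rough
sketch, untested):

	if (dst_mm != src_mm) {
		/* Transfer the memcg charge from the src mm to the dst mm. */
		mem_cgroup_uncharge(src_folio);
		if (mem_cgroup_charge(src_folio, dst_mm, GFP_KERNEL))
			return -ENOMEM;
	}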

>
> (side note: I think folio_throttle_swaprate() is only for when
> allocating new pages, so not needed here)
>
> - check_stable_address_space() (under pgtable lock)

Ack.

>
> - tlb flush
>
> Hmm?? I can't see anywhere that we do a tlb flush, batched or not;
> either single-mm or cross-mm should need it. Is this missing?

As Lokesh pointed out we do that but we don't batch them. I'll try to
add batching in the next version.
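
The idea would be to keep clearing the ptes as we do now but defer the
actual flush until the loop is done, e.g. (a rough sketch; 'moved' is
the number of bytes successfully remapped, and the error paths need
care):

	/* One ranged flush for everything we moved, not one per pte. */
	if (moved)
		flush_tlb_range(src_vma, src_start, src_start + moved);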

>
> >
> >
> > Note: that (with only reading the documentation) it also kept me wondering
> > how the MMs are even implied from
> >
> > struct uffdio_move {
> > 	__u64 dst;	/* Destination of move */
> > 	__u64 src;	/* Source of move */
> > 	__u64 len;	/* Number of bytes to move */
> > 	__u64 mode;	/* Flags controlling behavior of move */
> > 	__s64 move;	/* Number of bytes moved, or negated error */
> > };
> >
> > That probably has to be documented as well, in which address space dst and
> > src reside.
>
> Agreed, some better documentation will never hurt. Dst should be in the mm
> address space that was bound to the userfault descriptor. Src should be in
> the current mm address space.

Ack. Will add. Thanks!

>
> Thanks,
>
> --
> Peter Xu
>

2023-10-19 21:56:41

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Tue, Oct 17, 2023 at 12:40 PM kernel test robot <[email protected]> wrote:
>
> Hi Suren,
>
> kernel test robot noticed the following build warnings:
>
> [auto build test WARNING on akpm-mm/mm-everything]
> [also build test WARNING on next-20231017]
> [cannot apply to linus/master v6.6-rc6]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url: https://github.com/intel-lab-lkp/linux/commits/Suren-Baghdasaryan/mm-rmap-support-move-to-different-root-anon_vma-in-folio_move_anon_rmap/20231009-144552
> base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> patch link: https://lore.kernel.org/r/20231009064230.2952396-3-surenb%40google.com
> patch subject: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI
> config: i386-randconfig-141-20231017 (https://download.01.org/0day-ci/archive/20231018/[email protected]/config)
> compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
> reproduce: (https://download.01.org/0day-ci/archive/20231018/[email protected]/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <[email protected]>
> | Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
>
> smatch warnings:
> mm/userfaultfd.c:1380 remap_pages() warn: unsigned 'src_start + len - src_addr' is never less than zero.

Hmm. I think this warning is correct only when
CONFIG_TRANSPARENT_HUGEPAGE=n. I guess I'll have to add an "ifdef
CONFIG_TRANSPARENT_HUGEPAGE" here after all, which lets us move these
checks before locking PTL.
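
Roughly like this (untested), so the whole block compiles away when THP
is disabled and smatch no longer sees the degenerate comparison:

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
		if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) ||
		    src_start + len - src_addr < HPAGE_PMD_SIZE ||
		    !pmd_none(dst_pmdval)) {
			spin_unlock(ptl);
			split_huge_pmd(src_vma, src_pmd, src_addr);
			continue;
		}
#endif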

>
> vim +1380 mm/userfaultfd.c
>
> [...]
> 1362 ptl = pmd_trans_huge_lock(src_pmd, src_vma);
> 1363 if (ptl) {
> 1364 if (pmd_devmap(*src_pmd)) {
> 1365 spin_unlock(ptl);
> 1366 err = -ENOENT;
> 1367 break;
> 1368 }
> 1369
> 1370 /*
> 1371 * Check if we can move the pmd without
> 1372 * splitting it. First check the address
> 1373 * alignment to be the same in src/dst. These
> 1374 * checks don't actually need the PT lock but
> 1375 * it's good to do it here to optimize this
> 1376 * block away at build time if
> 1377 * CONFIG_TRANSPARENT_HUGEPAGE is not set.
> 1378 */
> 1379 if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) ||
> > 1380 src_start + len - src_addr < HPAGE_PMD_SIZE || !pmd_none(dst_pmdval)) {
>
> --
> 0-DAY CI Kernel Test Service
> https://github.com/intel/lkp-tests/wiki
>

2023-10-20 10:03:26

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On 19.10.23 21:53, Peter Xu wrote:
> On Thu, Oct 19, 2023 at 05:41:01PM +0200, David Hildenbrand wrote:
>> That's not my main point. It can easily become a maintenance burden without
>> any real use cases yet that we are willing to support.
>
> That's why I requested a few times that we can discuss the complexity of
> cross-mm support already here, and I'm all ears if I missed something on
> the "maintenance burden" part..
>
> I started by listing what I think might be different, and we can easily
> speedup single-mm with things like "if (ctx->mm != mm)" checks with
> e.g. memcg, just like what this patch already did with pgtable depositions.
>
> We keep saying "maintenance burden" but we refuse to discuss what is that..

Let's recap

(1) We have person A up-streaming code written by person B, whereby B is
not involved in the discussions nor seems to be active to maintain that
code.

Worse, the code that is getting up-streamed was originally based on a
different kernel version that has significant differences in some key
areas -- for example, page pinning, exclusive vs. shared.

I claim that nobody here fully understands the code at hand (just look
at the previous discussions), and reviewers have to sort out the mess
that was created by the very way this stuff is getting upstreamed here.

We're already struggling to get the single-mm case working correctly.


(2) Cross-mm was not even announced anywhere, nor was it mentioned which
use it would have; I had to stumble over this while digging through the code.
Further, is it even *tested*? AFAIKS in patch #3 no. Why do we have to
make the life of reviewers harder by forcing them to review code that
currently *nobody* on this earth needs?


(3) You said "What else we can benefit from single mm? One less mmap
read lock, but probably that's all we can get;" and I presented two
non-obvious issues. I did not even look any further because I really
have better things to do than review complicated code without real use
cases at hand. As I said "maybe that works as expected, I
don't know and I have no time to spare on reviewing features with no
real use cases)"; apparently I was right by just guessing that memcg
handling is missing.


The sub-feature in question (cross-mm) has no solid use cases; at this
point I am not even convinced the use case you raised requires
*userfaultfd*; for the purpose of moving a whole VMA worth of pages
between two processes; I don't see the immediate need to get userfaultfd
involved and move individual pages under page lock etc.

>
> I'll leave that to Suren and Lokesh to decide. For me the worst case is
> one more flag which might be confusing, which is not the end of the world..
> Suren, you may need to work more thoroughly to remove cross-mm implications
> if so, just like when renaming REMAP to MOVE.

I'm asking myself why you are pushing so hard to include complexity
"just because we can"; doesn't make any sense to me, honestly.

Maybe you have some other real use cases that ultimately require
userfaultfd for cross-mm that you cannot share?

Will the world end when we have to use a separate flag so we can open
this pandora's box when really required?


Again, moving anon pages within a process is a known thing; we do that
already via mremap; the only difference here really is, that we have to
get the rmap right because we don't adjust VMAs. It's a shame we don't
try to combine both code paths, maybe it's not easily possible like we
did with mprotect vs. uffd-wp.

Moving anon pages between processes is currently only done via COW, where
all things (page pinning, memcg, ...) have been figured out and are
simply working as expected. Making uffd special by coding-up their own
thing does not sound compelling to me.


I am clearly against any unwarranted features+complexity. Again, I will
stop arguing further, the whole thing of "include it just because we
can" to avoid a flag (that we might never even see) doesn't make any
sense to me and likely never will.

The whole way this feature is getting upstreamed is just messed up IMHO
and I the reasoning used in this thread to stick
as-close-as-possible to some code person B wrote some years ago (e.g.,
naming, sub-features) is far out of my comprehension.

--
Cheers,

David / dhildenb

2023-10-20 14:10:02

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Fri, Oct 20, 2023 at 3:02 AM David Hildenbrand <[email protected]> wrote:
>
> On 19.10.23 21:53, Peter Xu wrote:
> > On Thu, Oct 19, 2023 at 05:41:01PM +0200, David Hildenbrand wrote:
> >> That's not my main point. It can easily become a maintenance burden without
> >> any real use cases yet that we are willing to support.
> >
> > That's why I requested a few times that we can discuss the complexity of
> > cross-mm support already here, and I'm all ears if I missed something on
> > the "maintenance burden" part..
> >
> > I started by listing what I think might be different, and we can easily
> > speedup single-mm with things like "if (ctx->mm != mm)" checks with
> > e.g. memcg, just like what this patch already did with pgtable depositions.
> >
> > We keep saying "maintenance burden" but we refuse to discuss what is that..
>
> Let's recap
>
> (1) We have person A up-streaming code written by person B, whereby B is
> not involved in the discussions nor seems to be active to maintain that
> code.
>
> Worse, the code that is getting up-streamed was originally based on a
> different kernel version that has significant differences in some key
> areas -- for example, page pinning, exclusive vs. shared.
>
> I claim that nobody here fully understands the code at hand (just look
> at the previous discussions), and reviewers have to sort out the mess
> that was created by the very way this stuff is getting upstreamed here.
>
> We're already struggling to get the single-mm case working correctly.
>
>
> (2) Cross-mm was not even announced anywhere, nor was it mentioned which
> use it would have; I had to stumble over this while digging through the code.
> Further, is it even *tested*? AFAIKS in patch #3 no. Why do we have to
> make the life of reviewers harder by forcing them to review code that
> currently *nobody* on this earth needs?
>
>
> (3) You said "What else we can benefit from single mm? One less mmap
> read lock, but probably that's all we can get;" and I presented two
> non-obvious issues. I did not even look any further because I really
> have better things to do than review complicated code without real use
> cases at hand. As I said "maybe that works as expected, I
> don't know and I have no time to spare on reviewing features with no
> real use cases)"; apparently I was right by just guessing that memcg
> handling is missing.
>
>
> The sub-feature in question (cross-mm) has no solid use cases; at this
> point I am not even convinced the use case you raised requires
> *userfaultfd*; for the purpose of moving a whole VMA worth of pages
> between two processes; I don't see the immediate need to get userfaultfd
> involved and move individual pages under page lock etc.

You make a compelling case against cross-mm support.
While I can't force Andrea to participate in upstreaming nor do I have
his background, keeping it simple, as you requested, is doable. That's
what I plan on doing by splitting the patch and I think we all agreed
to that. I'll also see if I can easily add a separate patch to test
cross-mm support.
I do apologize for the extra effort required from reviewers to cover
for the gaps in my patches. I'm doing my best to minimize that and I
really appreciate your time.

>
> >
> > I'll leave that to Suren and Lokesh to decide. For me the worst case is
> > one more flag which might be confusing, which is not the end of the world..
> > Suren, you may need to work more thoroughly to remove cross-mm implications
> > if so, just like when renaming REMAP to MOVE.
>
> I'm asking myself why you are pushing so hard to include complexity
> "just because we can"; doesn't make any sense to me, honestly.
>
> Maybe you have some other real use cases that ultimately require
> userfaultfd for cross-mm that you cannot share?
>
> Will the world end when we have to use a separate flag so we can open
> this pandora's box when really required?
>
>
> Again, moving anon pages within a process is a known thing; we do that
> already via mremap; the only difference here really is, that we have to
> get the rmap right because we don't adjust VMAs. It's a shame we don't
> try to combine both code paths, maybe it's not easily possible like we
> did with mprotect vs. uffd-wp.

That's a good point. With cross-mm support baked in, the overlap was
not obvious to me. I'll see how much we can reuse from the mremap
path.

>
> Moving anon pages between processes is currently only done via COW, where
> all things (page pinning, memcg, ...) have been figured out and are
> simply working as expected. Making uffd special by coding-up their own
> thing does not sound compelling to me.
>
>
> I am clearly against any unwarranted features+complexity. Again, I will
> stop arguing further, the whole thing of "include it just because we
> can" to avoid a flag (that we might never even see) doesn't make any
> sense to me and likely never will.
>
> The whole way this feature is getting upstreamed is just messed up IMHO
> and the reasoning used in this thread to stick
> as-close-as-possible to some code person B wrote some years ago (e.g.,
> naming, sub-features) is far out of my comprehension.

I don't think staying as-close-as-possible to the original version was
the way I was driving this so far. At least that was not my conscious
intention. I'm open to further suggestions whenever it makes sense to
deviate from it.
Thanks,
Suren.

>
> --
> Cheers,
>
> David / dhildenb
>

2023-10-20 17:17:32

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI


>>
>> The sub-feature in question (cross-mm) has no solid use cases; at this
>> point I am not even convinced the use case you raised requires
>> *userfaultfd*; for the purpose of moving a whole VMA worth of pages
>> between two processes; I don't see the immediate need to get userfaultfd
>> involved and move individual pages under page lock etc.
>
> You make a compelling case against cross-mm support.

I tried to :P

I'm happy to hear compelling cases for cross-mm support that we need
*right now*. And that's what I'm missing so far besides "already
included in the patches" and "but we would eventually need a separate flag".

As a side note, I already do have another rmap-related feature in the
works that will require extra effort to handle this case (short: assign
each MM a unique ID and use that for accounting purposes when
(un)mapping pages); I think I figured out how to handle this case here;
and it's questionable if my work will make it upstream -- to be posted
as PoC in 2-4 weeks I guess. But it easily shows that there are cases
where this will require extra work -- without any current benefits due
to lack of actual users.

> While I can't force Andrea to participate in upstreaming nor do I have
> his background, keeping it simple, as you requested, is doable. That's
> what I plan on doing by splitting the patch and I think we all agreed
> to that. I'll also see if I can easily add a separate patch to test
> cross-mm support.
> I do apologize for the extra effort required from reviewers to cover
> for the gaps in my patches. I'm doing my best to minimize that and I
> really appreciate your time.

It's absolutely not your fault and there is absolutely no need to
apologize (sorry if I sounded like I was blaming you in any way). I
have experienced myself that up-streaming the work of someone else
can be troublesome, because it's hard to grasp all the details from a
set of patches. Documentation and comments can't capture all the implicit
knowledge from the original author.

I likely wouldn't be able to even write that code myself.

For example: why is cross-mm relevant and was included in the original
patches? Maybe there was a very good reason and it is simply not documented.

>
>>
>>>
>>> I'll leave that to Suren and Lokesh to decide. For me the worst case is
>>> one more flag which might be confusing, which is not the end of the world..
>>> Suren, you may need to work more thoroughly to remove cross-mm implications
>>> if so, just like when renaming REMAP to MOVE.
>>
>> I'm asking myself why you are pushing so hard to include complexity
>> "just because we can"; doesn't make any sense to me, honestly.
>>
>> Maybe you have some other real use cases that ultimately require
>> userfaultfd for cross-mm that you cannot share?
>>
>> Will the world end when we have to use a separate flag so we can open
>> this pandora's box when really required?
>>
>>
>> Again, moving anon pages within a process is a known thing; we do that
>> already via mremap; the only difference here really is, that we have to
>> get the rmap right because we don't adjust VMAs. It's a shame we don't
>> try to combine both code paths, maybe it's not easily possible like we
>> did with mprotect vs. uffd-wp.
>
> That's a good point. With cross-mm support baked in, the overlap was
> not obvious to me. I'll see how much we can reuse from the mremap
> path.

My comment was inspired by Lokesh "While going through mremap's
move_page_tables code, which is pretty similar to what we do here".

There are some subtle differences (could we even move whole page tables?
probably not due to holding the mmap lock only in read-mode) and
special exclusive-only+rmap adjust handling. Further, TLB flushing is
different (but maybe there are ways to just reuse the batching, did not
look into the details).

But move_page_tables is clearly single-mm code, and a unification might
not be that straightforward.

>
>>
>> Moving anon pages between processes is currently only done via COW, where
>> all things (page pinning, memcg, ...) have been figured out and are
>> simply working as expected. Making uffd special by coding-up their own
>> thing does not sound compelling to me.
>>
>>
>> I am clearly against any unwarranted features+complexity. Again, I will
>> stop arguing further, the whole thing of "include it just because we
>> can" to avoid a flag (that we might never even see) doesn't make any
>> sense to me and likely never will.
>>
>> The whole way this feature is getting upstreamed is just messed up IMHO
>> and the reasoning used in this thread to stick
>> as-close-as-possible to some code person B wrote some years ago (e.g.,
>> naming, sub-features) is far out of my comprehension.
>
> I don't think staying as-close-as-possible to the original version was
> the way I was driving this so far. At least that was not my conscious

These are rather the vibes I'm getting from Peter. "Why rename it, could
confuse people because the original patches are old", "Why exclude it if
it has been included in the original patches". Not the kind of reasoning
I can relate to when it comes to upstreaming some patches.


> intention. I'm open to further suggestions whenever it makes sense to
> deviate from it.

I'll repeat: any complexity we remove and any code reused in common
code/moved out of userfaultfd will be a win.

--
Cheers,

David / dhildenb

2023-10-22 15:48:12

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Fri, Oct 20, 2023 at 07:16:19PM +0200, David Hildenbrand wrote:
> These are rather the vibes I'm getting from Peter. "Why rename it, could
> confuse people because the original patches are old", "Why exclude it if it
> has been included in the original patches". Not the kind of reasoning I can
> relate to when it comes to upstreaming some patches.

You can't blame anyone if you misunderstood and biased the question.

The first question is definitely valid, even now. You guys still
prefer to rename it, which I'm totally fine with.

The 2nd question is wrong from your interpretation. That's not my point,
at least not in the last few replies. What I was asking is
why such page movement between mms is dangerous. I don't think I have
gotten solid answers even now.

Noticing "memcg is missing" is not an argument for "cross-mm is dangerous",
it's a review comment. Suren can address that.

That you'll propose a new feature that may tag an mm is not an argument
either, as long as it's not merged yet. We can also address that depending
on what it is, also on which lands earlier.

It'll be good to discuss these details even for single-mm support. Anyone
who would like to add that later can already refer to the discussion in this
thread.

I hope I'm clear.

--
Peter Xu

2023-10-22 17:03:07

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Thu, Oct 19, 2023 at 02:24:06PM -0700, Suren Baghdasaryan wrote:
> On Thu, Oct 12, 2023 at 3:00 PM Peter Xu <[email protected]> wrote:
> >
> > On Sun, Oct 08, 2023 at 11:42:27PM -0700, Suren Baghdasaryan wrote:
> > > From: Andrea Arcangeli <[email protected]>
> > >
> > > Implement the uABI of UFFDIO_MOVE ioctl.
> > > UFFDIO_COPY performs ~20% better than UFFDIO_MOVE when the application
> > > needs pages to be allocated [1]. However, with UFFDIO_MOVE, if pages are
> > > available (in userspace) for recycling, as is usually the case in heap
> > > compaction algorithms, then we can avoid the page allocation and memcpy
> > > (done by UFFDIO_COPY). Also, since the pages are recycled in the
> > > userspace, we avoid the need to release (via madvise) the pages back to
> > > the kernel [2].
> > > We see over 40% reduction (on a Google pixel 6 device) in the compacting
> > > thread’s completion time by using UFFDIO_MOVE vs. UFFDIO_COPY. This was
> > > measured using a benchmark that emulates a heap compaction implementation
> > > using userfaultfd (to allow concurrent accesses by application threads).
> > > More details of the usecase are explained in [2].
> > > Furthermore, UFFDIO_MOVE enables moving swapped-out pages without
> > > touching them within the same vma. Today, it can only be done by mremap,
> > > however it forces splitting the vma.
> > >
> > > [1] https://lore.kernel.org/all/[email protected]/
> > > [2] https://lore.kernel.org/linux-mm/CA+EESO4uO84SSnBhArH4HvLNhaUQ5nZKNKXqxRCyjniNVjp0Aw@mail.gmail.com/
> > >
> > > Update for the ioctl_userfaultfd(2) manpage:
> > >
> > > UFFDIO_MOVE
> > > (Since Linux xxx) Move a contiguous memory chunk into the
> > > userfault registered range and optionally wake up the blocked
> > > thread. The source and destination addresses and the number of
> > > bytes to move are specified by the src, dst, and len fields of
> > > the uffdio_move structure pointed to by argp:
> > >
> > > struct uffdio_move {
> > > 	__u64 dst;	/* Destination of move */
> > > 	__u64 src;	/* Source of move */
> > > 	__u64 len;	/* Number of bytes to move */
> > > 	__u64 mode;	/* Flags controlling behavior of move */
> > > 	__s64 move;	/* Number of bytes moved, or negated error */
> > > };
> > >
> > > The following value may be bitwise ORed in mode to change the
> > > behavior of the UFFDIO_MOVE operation:
> > >
> > > UFFDIO_MOVE_MODE_DONTWAKE
> > > Do not wake up the thread that waits for page-fault
> > > resolution
> > >
> > > UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES
> > > Allow holes in the source virtual range that is being moved.
> > > When not specified, the holes will result in ENOENT error.
> > > When specified, the holes will be accounted as successfully
> > > moved memory. This is mostly useful to move hugepage aligned
> > > virtual regions without knowing if there are transparent
> > > hugepages in the regions or not, but preventing the risk of
> > > having to split the hugepage during the operation.
> > >
> > > The move field is used by the kernel to return the number of
> > > bytes that was actually moved, or an error (a negated errno-
> > > style value). If the value returned in move doesn't match the
> > > value that was specified in len, the operation fails with the
> > > error EAGAIN. The move field is output-only; it is not read by
> > > the UFFDIO_MOVE operation.
> > >
> > > The operation may fail for various reasons. Usually, remapping of
> > > pages that are not exclusive to the given process fails; once KSM
> > > deduplicates pages or fork() COW-shares pages with child
> > > processes, they are no longer exclusive. Further, the
> > > kernel might only perform lightweight checks for detecting whether
> > > the pages are exclusive, and return -EBUSY in case that check fails.
> > > To make the operation more likely to succeed, KSM should be
> > > disabled, fork() should be avoided or MADV_DONTFORK should be
> > > configured for the source VMA before fork().
> > >
> > > This ioctl(2) operation returns 0 on success. In this case, the
> > > entire area was moved. On error, -1 is returned and errno is
> > > set to indicate the error. Possible errors include:
> > >
> > > EAGAIN The number of bytes moved (i.e., the value returned in
> > > the move field) does not equal the value that was
> > > specified in the len field.
> > >
> > > EINVAL Either dst or len was not a multiple of the system page
> > > size, or the range specified by src and len or dst and len
> > > was invalid.
> > >
> > > EINVAL An invalid bit was specified in the mode field.
> > >
> > > ENOENT
> > > The source virtual memory range has unmapped holes and
> > > UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES is not set.
> > >
> > > EEXIST
> > > The destination virtual memory range is fully or partially
> > > mapped.
> > >
> > > EBUSY
> > > The pages in the source virtual memory range are not
> > > exclusive to the process. The kernel might only perform
> > > lightweight checks for detecting whether the pages are
> > > exclusive. To make the operation more likely to succeed,
> > > KSM should be disabled, fork() should be avoided or
> > > MADV_DONTFORK should be configured for the source virtual
> > > memory area before fork().
> > >
> > > ENOMEM Allocating memory needed for the operation failed.
> > >
> > > ESRCH
> > > The faulting process has exited at the time of a
> >
> > Nit pick comment for future man page: there's no faulting process in this
> > context. Perhaps "target process"?
>
> Ack.
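
While at it, the manpage could include a minimal usage snippet, e.g.
(hypothetical userspace code; assumes uffd is an initialized
userfaultfd with UFFD_FEATURE_MOVE and dst lies in a registered range):

	struct uffdio_move move = {
		.dst = (unsigned long)dst,
		.src = (unsigned long)src,
		.len = len,
		.mode = 0,
	};

	if (ioctl(uffd, UFFDIO_MOVE, &move) == -1)
		err(EXIT_FAILURE, "UFFDIO_MOVE");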
>
> >
> > > UFFDIO_MOVE operation.
> > >
> > > Signed-off-by: Andrea Arcangeli <[email protected]>
> > > Signed-off-by: Suren Baghdasaryan <[email protected]>
> > > ---
> > > Documentation/admin-guide/mm/userfaultfd.rst | 3 +
> > > fs/userfaultfd.c | 63 ++
> > > include/linux/rmap.h | 5 +
> > > include/linux/userfaultfd_k.h | 12 +
> > > include/uapi/linux/userfaultfd.h | 29 +-
> > > mm/huge_memory.c | 138 +++++
> > > mm/khugepaged.c | 3 +
> > > mm/rmap.c | 6 +
> > > mm/userfaultfd.c | 602 +++++++++++++++++++
> > > 9 files changed, 860 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/Documentation/admin-guide/mm/userfaultfd.rst b/Documentation/admin-guide/mm/userfaultfd.rst
> > > index 203e26da5f92..e5cc8848dcb3 100644
> > > --- a/Documentation/admin-guide/mm/userfaultfd.rst
> > > +++ b/Documentation/admin-guide/mm/userfaultfd.rst
> > > @@ -113,6 +113,9 @@ events, except page fault notifications, may be generated:
> > > areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating
> > > support for shmem virtual memory areas.
> > >
> > > +- ``UFFD_FEATURE_MOVE`` indicates that the kernel supports moving an
> > > + existing page contents from userspace.
> > > +
> > > The userland application should set the feature flags it intends to use
> > > when invoking the ``UFFDIO_API`` ioctl, to request that those features be
> > > enabled if supported.
> > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> > > index a7c6ef764e63..ac52e0f99a69 100644
> > > --- a/fs/userfaultfd.c
> > > +++ b/fs/userfaultfd.c
> > > @@ -2039,6 +2039,66 @@ static inline unsigned int uffd_ctx_features(__u64 user_features)
> > > return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED;
> > > }
> > >
> > > +static int userfaultfd_remap(struct userfaultfd_ctx *ctx,
> > > + unsigned long arg)
> >
> > If we do want to rename from REMAP to MOVE, we'd better rename the
> > functions too, as "remap" still exists all over the place..
>
> Ok. I thought that since the current implementation only remaps and
> never copies it would be correct to keep "remap" in these internal
> names and change that later if we support copying. But I'm fine with
> renaming them now to avoid confusion. Will do.

"move", not "copy", btw.

Not a big deal, take your preference at each place; "remap" sometimes can
read better, maybe. Fundamentally, I think it's because both "remap" and
"move" work in 99% cases. That's also why I think either name would work
here.

>
>
> >
> > > +{
> > > + __s64 ret;
> > > + struct uffdio_move uffdio_move;
> > > + struct uffdio_move __user *user_uffdio_move;
> > > + struct userfaultfd_wake_range range;
> > > +
> > > + user_uffdio_move = (struct uffdio_move __user *) arg;
> > > +
> > > + ret = -EAGAIN;
> > > + if (atomic_read(&ctx->mmap_changing))
> > > + goto out;
> >
> > I didn't notice this before, but I think we need to re-check this after
> > taking target mm's mmap read lock..
>
> Ack.
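
E.g. something like this once the lock is held (a rough sketch; assumes
ctx is passed into remap_pages() and a matching unlock label exists):

	mmap_read_lock(dst_mm);
	/* Re-check with the lock held; registrations may have changed. */
	if (unlikely(atomic_read(&ctx->mmap_changing))) {
		err = -EAGAIN;
		goto out_unlock;
	}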
>
> >
> > maybe we'd want to pass in ctx* into remap_pages(), more reasoning below.
>
> Makes sense.
>
> >
> > > +
> > > + ret = -EFAULT;
> > > + if (copy_from_user(&uffdio_move, user_uffdio_move,
> > > + /* don't copy "remap" last field */
> >
> > s/remap/move/
>
> Ack.
>
> >
> > > + sizeof(uffdio_move)-sizeof(__s64)))
> > > + goto out;
> > > +
> > > + ret = validate_range(ctx->mm, uffdio_move.dst, uffdio_move.len);
> > > + if (ret)
> > > + goto out;
> > > +
> > > + ret = validate_range(current->mm, uffdio_move.src, uffdio_move.len);
> > > + if (ret)
> > > + goto out;
> > > +
> > > + ret = -EINVAL;
> > > + if (uffdio_move.mode & ~(UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES|
> > > + UFFDIO_MOVE_MODE_DONTWAKE))
> > > + goto out;
> > > +
> > > + if (mmget_not_zero(ctx->mm)) {
> > > + ret = remap_pages(ctx->mm, current->mm,
> > > + uffdio_move.dst, uffdio_move.src,
> > > + uffdio_move.len, uffdio_move.mode);
> > > + mmput(ctx->mm);
> > > + } else {
> > > + return -ESRCH;
> > > + }
> > > +
> > > + if (unlikely(put_user(ret, &user_uffdio_move->move)))
> > > + return -EFAULT;
> > > + if (ret < 0)
> > > + goto out;
> > > +
> > > + /* len == 0 would wake all */
> > > + BUG_ON(!ret);
> > > + range.len = ret;
> > > + if (!(uffdio_move.mode & UFFDIO_MOVE_MODE_DONTWAKE)) {
> > > + range.start = uffdio_move.dst;
> > > + wake_userfault(ctx, &range);
> > > + }
> > > + ret = range.len == uffdio_move.len ? 0 : -EAGAIN;
> > > +
> > > +out:
> > > + return ret;
> > > +}
> > > +
> > > /*
> > > * userland asks for a certain API version and we return which bits
> > > * and ioctl commands are implemented in this kernel for such API
> > > @@ -2131,6 +2191,9 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd,
> > > case UFFDIO_ZEROPAGE:
> > > ret = userfaultfd_zeropage(ctx, arg);
> > > break;
> > > + case UFFDIO_MOVE:
> > > + ret = userfaultfd_remap(ctx, arg);
> > > + break;
> > > case UFFDIO_WRITEPROTECT:
> > > ret = userfaultfd_writeprotect(ctx, arg);
> > > break;
> > > diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> > > index b26fe858fd44..8034eda972e5 100644
> > > --- a/include/linux/rmap.h
> > > +++ b/include/linux/rmap.h
> > > @@ -121,6 +121,11 @@ static inline void anon_vma_lock_write(struct anon_vma *anon_vma)
> > > down_write(&anon_vma->root->rwsem);
> > > }
> > >
> > > +static inline int anon_vma_trylock_write(struct anon_vma *anon_vma)
> > > +{
> > > + return down_write_trylock(&anon_vma->root->rwsem);
> > > +}
> > > +
> > > static inline void anon_vma_unlock_write(struct anon_vma *anon_vma)
> > > {
> > > up_write(&anon_vma->root->rwsem);
> > > diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
> > > index f2dc19f40d05..ce8d20b57e8c 100644
> > > --- a/include/linux/userfaultfd_k.h
> > > +++ b/include/linux/userfaultfd_k.h
> > > @@ -93,6 +93,18 @@ extern int mwriteprotect_range(struct mm_struct *dst_mm,
> > > extern long uffd_wp_range(struct vm_area_struct *vma,
> > > unsigned long start, unsigned long len, bool enable_wp);
> > >
> > > +/* remap_pages */
> > > +void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
> > > +void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
> > > +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > + unsigned long dst_start, unsigned long src_start,
> > > + unsigned long len, __u64 flags);
> > > +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> > > + struct vm_area_struct *dst_vma,
> > > + struct vm_area_struct *src_vma,
> > > + unsigned long dst_addr, unsigned long src_addr);
> > > +
> > > /* mm helpers */
> > > static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
> > > struct vm_userfaultfd_ctx vm_ctx)
> > > diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h
> > > index 0dbc81015018..2841e4ea8f2c 100644
> > > --- a/include/uapi/linux/userfaultfd.h
> > > +++ b/include/uapi/linux/userfaultfd.h
> > > @@ -41,7 +41,8 @@
> > > UFFD_FEATURE_WP_HUGETLBFS_SHMEM | \
> > > UFFD_FEATURE_WP_UNPOPULATED | \
> > > UFFD_FEATURE_POISON | \
> > > - UFFD_FEATURE_WP_ASYNC)
> > > + UFFD_FEATURE_WP_ASYNC | \
> > > + UFFD_FEATURE_MOVE)
> > > #define UFFD_API_IOCTLS \
> > > ((__u64)1 << _UFFDIO_REGISTER | \
> > > (__u64)1 << _UFFDIO_UNREGISTER | \
> > > @@ -50,6 +51,7 @@
> > > ((__u64)1 << _UFFDIO_WAKE | \
> > > (__u64)1 << _UFFDIO_COPY | \
> > > (__u64)1 << _UFFDIO_ZEROPAGE | \
> > > + (__u64)1 << _UFFDIO_MOVE | \
> > > (__u64)1 << _UFFDIO_WRITEPROTECT | \
> > > (__u64)1 << _UFFDIO_CONTINUE | \
> > > (__u64)1 << _UFFDIO_POISON)
> > > @@ -73,6 +75,7 @@
> > > #define _UFFDIO_WAKE (0x02)
> > > #define _UFFDIO_COPY (0x03)
> > > #define _UFFDIO_ZEROPAGE (0x04)
> > > +#define _UFFDIO_MOVE (0x05)
> > > #define _UFFDIO_WRITEPROTECT (0x06)
> > > #define _UFFDIO_CONTINUE (0x07)
> > > #define _UFFDIO_POISON (0x08)
> > > @@ -92,6 +95,8 @@
> > > struct uffdio_copy)
> > > #define UFFDIO_ZEROPAGE _IOWR(UFFDIO, _UFFDIO_ZEROPAGE, \
> > > struct uffdio_zeropage)
> > > +#define UFFDIO_MOVE _IOWR(UFFDIO, _UFFDIO_MOVE, \
> > > + struct uffdio_move)
> > > #define UFFDIO_WRITEPROTECT _IOWR(UFFDIO, _UFFDIO_WRITEPROTECT, \
> > > struct uffdio_writeprotect)
> > > #define UFFDIO_CONTINUE _IOWR(UFFDIO, _UFFDIO_CONTINUE, \
> > > @@ -222,6 +227,9 @@ struct uffdio_api {
> > > * asynchronous mode is supported in which the write fault is
> > > * automatically resolved and write-protection is un-set.
> > > * It implies UFFD_FEATURE_WP_UNPOPULATED.
> > > + *
> > > + * UFFD_FEATURE_MOVE indicates that the kernel supports moving an
> > > + * existing page contents from userspace.
> > > */
> > > #define UFFD_FEATURE_PAGEFAULT_FLAG_WP (1<<0)
> > > #define UFFD_FEATURE_EVENT_FORK (1<<1)
> > > @@ -239,6 +247,7 @@ struct uffdio_api {
> > > #define UFFD_FEATURE_WP_UNPOPULATED (1<<13)
> > > #define UFFD_FEATURE_POISON (1<<14)
> > > #define UFFD_FEATURE_WP_ASYNC (1<<15)
> > > +#define UFFD_FEATURE_MOVE (1<<16)
> > > __u64 features;
> > >
> > > __u64 ioctls;
> > > @@ -347,6 +356,24 @@ struct uffdio_poison {
> > > __s64 updated;
> > > };
> > >
> > > +struct uffdio_move {
> > > + __u64 dst;
> > > + __u64 src;
> > > + __u64 len;
> > > + /*
> > > + * Especially if used to atomically remove memory from the
> > > + * address space the wake on the dst range is not needed.
> > > + */
> > > +#define UFFDIO_MOVE_MODE_DONTWAKE ((__u64)1<<0)
> > > +#define UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES ((__u64)1<<1)
> > > + __u64 mode;
> > > + /*
> > > + * "move" is written by the ioctl and must be at the end: the
> > > + * copy_from_user will not read the last 8 bytes.
> > > + */
> > > + __s64 move;
> > > +};
> > > +
> > > /*
> > > * Flags for the userfaultfd(2) system call itself.
> > > */
> > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > index 9656be95a542..6fac5c3d66e6 100644
> > > --- a/mm/huge_memory.c
> > > +++ b/mm/huge_memory.c
> > > @@ -2086,6 +2086,144 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > > return ret;
> > > }
> > >
> > > +#ifdef CONFIG_USERFAULTFD
> > > +/*
> > > + * The PT lock for src_pmd and the mmap_lock for reading are held by
> > > + * the caller, but it must return after releasing the
> > > + * page_table_lock. Just move the page from src_pmd to dst_pmd if possible.
> > > + * Return zero if succeeded in moving the page, -EAGAIN if it needs to be
> > > + * repeated by the caller, or other errors in case of failure.
> > > + */
> > > +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> > > + struct vm_area_struct *dst_vma,
> > > + struct vm_area_struct *src_vma,
> > > + unsigned long dst_addr, unsigned long src_addr)
> > > +{
> > > + pmd_t _dst_pmd, src_pmdval;
> > > + struct page *src_page;
> > > + struct folio *src_folio;
> > > + struct anon_vma *src_anon_vma;
> > > + spinlock_t *src_ptl, *dst_ptl;
> > > + pgtable_t src_pgtable, dst_pgtable;
> > > + struct mmu_notifier_range range;
> > > + int err = 0;
> > > +
> > > + src_pmdval = *src_pmd;
> > > + src_ptl = pmd_lockptr(src_mm, src_pmd);
> > > +
> > > + lockdep_assert_held(src_ptl);
> > > + mmap_assert_locked(src_mm);
> > > + mmap_assert_locked(dst_mm);
> > > +
> > > + BUG_ON(!pmd_none(dst_pmdval));
> > > + BUG_ON(src_addr & ~HPAGE_PMD_MASK);
> > > + BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
> > > +
> > > + if (!pmd_trans_huge(src_pmdval)) {
> > > + spin_unlock(src_ptl);
> > > + if (is_pmd_migration_entry(src_pmdval)) {
> > > + pmd_migration_entry_wait(src_mm, &src_pmdval);
> > > + return -EAGAIN;
> > > + }
> > > + return -ENOENT;
> > > + }
> > > +
> > > + src_page = pmd_page(src_pmdval);
> > > + if (unlikely(!PageAnonExclusive(src_page))) {
> > > + spin_unlock(src_ptl);
> > > + return -EBUSY;
> > > + }
> > > +
> > > + src_folio = page_folio(src_page);
> > > + folio_get(src_folio);
> > > + spin_unlock(src_ptl);
> > > +
> > > + /* preallocate dst_pgtable if needed */
> > > + if (dst_mm != src_mm) {
> > > + dst_pgtable = pte_alloc_one(dst_mm);
> > > + if (unlikely(!dst_pgtable)) {
> > > + err = -ENOMEM;
> > > + goto put_folio;
> > > + }
> > > + } else {
> > > + dst_pgtable = NULL;
> > > + }
> > > +
> >
> > IIUC Lokesh's comment applies here, we probably need the
> > flush_cache_range(), not for x86 but for the other ones..
> >
> > cachetlb.rst:
> >
> > Next, we have the cache flushing interfaces. In general, when Linux
> > is changing an existing virtual-->physical mapping to a new value,
> > the sequence will be in one of the following forms::
> >
> > 1) flush_cache_mm(mm);
> >    change_all_page_tables_of(mm);
> >    flush_tlb_mm(mm);
> >
> > 2) flush_cache_range(vma, start, end);
> >    change_range_of_page_tables(mm, start, end);
> >    flush_tlb_range(vma, start, end);
> >
> > 3) flush_cache_page(vma, addr, pfn);
> >    set_pte(pte_pointer, new_pte_val);
> >    flush_tlb_page(vma, addr);
>
> Thanks for the reference. I guess that's to support VIVT caches?

I'm not 100% sure VIVT is the only case, but yes: flushing anything cached
in virtual-address form back to RAM would be required.
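
For illustration, following the second form quoted above, a hypothetical
sketch of the fix in remap_pages_huge_pmd() (not part of the posted patch)
could be:

        flush_cache_range(src_vma, src_addr, src_addr + HPAGE_PMD_SIZE);
        mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
                                src_addr + HPAGE_PMD_SIZE);
        mmu_notifier_invalidate_range_start(&range);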

>
> >
> > > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
> > > + src_addr + HPAGE_PMD_SIZE);
> > > + mmu_notifier_invalidate_range_start(&range);
> > > +
> > > + folio_lock(src_folio);
> > > +
> > > + /*
> > > + * split_huge_page walks the anon_vma chain without the page
> > > + * lock. Serialize against it with the anon_vma lock, the page
> > > + * lock is not enough.
> > > + */
> > > + src_anon_vma = folio_get_anon_vma(src_folio);
> > > + if (!src_anon_vma) {
> > > + err = -EAGAIN;
> > > + goto unlock_folio;
> > > + }
> > > + anon_vma_lock_write(src_anon_vma);
> > > +
> > > + dst_ptl = pmd_lockptr(dst_mm, dst_pmd);
> > > + double_pt_lock(src_ptl, dst_ptl);
> > > + if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
> > > + !pmd_same(*dst_pmd, dst_pmdval))) {
> > > + double_pt_unlock(src_ptl, dst_ptl);
> > > + err = -EAGAIN;
> > > + goto put_anon_vma;
> > > + }
> > > + if (!PageAnonExclusive(&src_folio->page)) {
> > > + double_pt_unlock(src_ptl, dst_ptl);
> > > + err = -EBUSY;
> > > + goto put_anon_vma;
> > > + }
> > > +
> > > + BUG_ON(!folio_test_head(src_folio));
> > > + BUG_ON(!folio_test_anon(src_folio));
> > > +
> > > + folio_move_anon_rmap(src_folio, dst_vma);
> > > + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > +
> > > + src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> > > + _dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
> > > + _dst_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
> >
> > Last time the conclusion is we leverage can_change_pmd_writable(), no?
>
> After your explanation that this works correctly for soft-dirty and
> UFFD_WP I thought the only thing left to handle was the check for
> VM_WRITE in both src_vma and dst_vma (which I added into
> validate_remap_areas()). Maybe I misunderstood and if so, I can
> replace the above PageAnonExclusive() with can_change_pmd_writable()
> (note that we err out on VM_SHARED VMAs, so PageAnonExclusive() will
> be included in that check).

I think we still need PageAnonExclusive() because that's the first guard to
decide whether the page can be moved over at all.

What I meant is something like keeping that, then:

if (pmd_soft_dirty(src_pmdval))
        _dst_pmd = pmd_mksoft_dirty(_dst_pmd);
if (pmd_uffd_wp(src_pmdval))
        _dst_pmd = pmd_mkuffd_wp(_dst_pmd);
if (can_change_pmd_writable(dst_vma, dst_addr, _dst_pmd))
        _dst_pmd = pmd_mkwrite(_dst_pmd, dst_vma);

But I'm not really sure anyone can leverage that, especially after I just
saw move_soft_dirty_pte(): mremap() treats everything as dirty after the
move. I don't think there's a clear definition of how dirtiness should be
treated after a remap.

Maybe we should follow what it does with mremap()? Then your current code
is fine. Maybe that's the better start.
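
For reference, mremap()'s helper in mm/mremap.c reads roughly like this,
marking everything soft-dirty after the move:

        static void move_soft_dirty_pte(pte_t *pte)
        {
                /*
                 * Set soft dirty bit so we can notice
                 * in userspace the ptes were moved.
                 */
        #ifdef CONFIG_MEM_SOFT_DIRTY
                if (pte_present(*pte))
                        *pte = pte_mksoft_dirty(*pte);
                else if (is_swap_pte(*pte))
                        *pte = pte_swp_mksoft_dirty(*pte);
        #endif
        }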

>
> >
> > > + set_pmd_at(dst_mm, dst_addr, dst_pmd, _dst_pmd);
> > > +
> > > + src_pgtable = pgtable_trans_huge_withdraw(src_mm, src_pmd);
> > > + if (dst_pgtable) {
> > > + pgtable_trans_huge_deposit(dst_mm, dst_pmd, dst_pgtable);
> > > + pte_free(src_mm, src_pgtable);
> > > + dst_pgtable = NULL;
> > > +
> > > + mm_inc_nr_ptes(dst_mm);
> > > + mm_dec_nr_ptes(src_mm);
> > > + add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
> > > + add_mm_counter(src_mm, MM_ANONPAGES, -HPAGE_PMD_NR);
> > > + } else {
> > > + pgtable_trans_huge_deposit(dst_mm, dst_pmd, src_pgtable);
> > > + }
> > > + double_pt_unlock(src_ptl, dst_ptl);
> > > +
> > > +put_anon_vma:
> > > + anon_vma_unlock_write(src_anon_vma);
> > > + put_anon_vma(src_anon_vma);
> > > +unlock_folio:
> > > + /* unblock rmap walks */
> > > + folio_unlock(src_folio);
> > > + mmu_notifier_invalidate_range_end(&range);
> > > + if (dst_pgtable)
> > > + pte_free(dst_mm, dst_pgtable);
> > > +put_folio:
> > > + folio_put(src_folio);
> > > +
> > > + return err;
> > > +}
> > > +#endif /* CONFIG_USERFAULTFD */
> > > +
> > > /*
> > > * Returns page table lock pointer if a given pmd maps a thp, NULL otherwise.
> > > *
> > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > index 2b5c0321d96b..0c1ee7172852 100644
> > > --- a/mm/khugepaged.c
> > > +++ b/mm/khugepaged.c
> > > @@ -1136,6 +1136,9 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > > * Prevent all access to pagetables with the exception of
> > > * gup_fast later handled by the ptep_clear_flush and the VM
> > > * handled by the anon_vma lock + PG_lock.
> > > + *
> > > + * UFFDIO_MOVE is prevented to race as well thanks to the
> > > + * mmap_lock.
> > > */
> > > mmap_write_lock(mm);
> > > result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> > > diff --git a/mm/rmap.c b/mm/rmap.c
> > > index f9ddc50269d2..a5919cac9a08 100644
> > > --- a/mm/rmap.c
> > > +++ b/mm/rmap.c
> > > @@ -490,6 +490,12 @@ void __init anon_vma_init(void)
> > > * page_remove_rmap() that the anon_vma pointer from page->mapping is valid
> > > * if there is a mapcount, we can dereference the anon_vma after observing
> > > * those.
> > > + *
> > > + * NOTE: the caller should normally hold folio lock when calling this. If
> > > + * not, the caller needs to double check the anon_vma didn't change after
> > > + * taking the anon_vma lock for either read or write (UFFDIO_MOVE can modify it
> > > + * concurrently without folio lock protection). See folio_lock_anon_vma_read()
> > > + * which has already covered that, and comment above remap_pages().
> > > */
> > > struct anon_vma *folio_get_anon_vma(struct folio *folio)
> > > {
> > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > > index 96d9eae5c7cc..45ce1a8b8ab9 100644
> > > --- a/mm/userfaultfd.c
> > > +++ b/mm/userfaultfd.c
> > > @@ -842,3 +842,605 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
> > > mmap_read_unlock(dst_mm);
> > > return err;
> > > }
> > > +
> > > +
> > > +void double_pt_lock(spinlock_t *ptl1,
> > > + spinlock_t *ptl2)
> > > + __acquires(ptl1)
> > > + __acquires(ptl2)
> > > +{
> > > + spinlock_t *ptl_tmp;
> > > +
> > > + if (ptl1 > ptl2) {
> > > + /* exchange ptl1 and ptl2 */
> > > + ptl_tmp = ptl1;
> > > + ptl1 = ptl2;
> > > + ptl2 = ptl_tmp;
> > > + }
> > > + /* lock in virtual address order to avoid lock inversion */
> > > + spin_lock(ptl1);
> > > + if (ptl1 != ptl2)
> > > + spin_lock_nested(ptl2, SINGLE_DEPTH_NESTING);
> > > + else
> > > + __acquire(ptl2);
> > > +}
> > > +
> > > +void double_pt_unlock(spinlock_t *ptl1,
> > > + spinlock_t *ptl2)
> > > + __releases(ptl1)
> > > + __releases(ptl2)
> > > +{
> > > + spin_unlock(ptl1);
> > > + if (ptl1 != ptl2)
> > > + spin_unlock(ptl2);
> > > + else
> > > + __release(ptl2);
> > > +}
> > > +
> > > +
> > > +static int remap_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > + struct vm_area_struct *dst_vma,
> > > + struct vm_area_struct *src_vma,
> > > + unsigned long dst_addr, unsigned long src_addr,
> > > + pte_t *dst_pte, pte_t *src_pte,
> > > + pte_t orig_dst_pte, pte_t orig_src_pte,
> > > + spinlock_t *dst_ptl, spinlock_t *src_ptl,
> > > + struct folio *src_folio)
> > > +{
> > > + double_pt_lock(dst_ptl, src_ptl);
> > > +
> > > + if (!pte_same(*src_pte, orig_src_pte) ||
> > > + !pte_same(*dst_pte, orig_dst_pte)) {
> > > + double_pt_unlock(dst_ptl, src_ptl);
> > > + return -EAGAIN;
> > > + }
> > > + if (folio_test_large(src_folio) ||
> > > + !PageAnonExclusive(&src_folio->page)) {
> > > + double_pt_unlock(dst_ptl, src_ptl);
> > > + return -EBUSY;
> > > + }
> > > +
> > > + BUG_ON(!folio_test_anon(src_folio));
> > > +
> > > + folio_move_anon_rmap(src_folio, dst_vma);
> > > + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > +
> > > + orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
> > > + orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
> > > + orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma);
> >
> > can_change_pte_writable()?
>
> Same as my previous comment. If that's still needed I'll replace the
> above PageAnonExclusive() check with can_change_pte_writable().

If no one else sees any problem, let's keep your current code, per my above
observations.. to match mremap(), also keep it simple.

One more thing I just remembered on memcg: only uncharge+charge may not
work, I think the lruvec needs to be maintained as well, or memcg shrink
can try to swap some irrelevant page at least, and memcg accounting can
also go wrong.

AFAICT, that means something like another pair of:

folio_isolate_lru() + folio_putback_lru()

Besides the charge/uncharge.
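
A rough sketch of what that pairing could look like for the cross-mm case
(hypothetical; the exact charge target, gfp flags and error handling would
need double checking):

        /* keep the lruvec consistent while moving the charge cross-mm */
        if (!folio_isolate_lru(src_folio))
                return -EAGAIN;
        mem_cgroup_uncharge(src_folio);
        if (mem_cgroup_charge(src_folio, dst_mm, GFP_KERNEL))
                err = -ENOMEM;
        folio_putback_lru(src_folio);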

Yu Zhao should be familiar with that code, maybe you can double check with
him before sending the new version.

I think this will belong to the separate patch to add cross-mm support, but
please also double check even just in case there can be implication of
single-mm that I missed.

Please also don't feel stressed over cross-mm support: if at some point you
see that the separate patch is growing too large, we can stop there, list
all the cross-mm todos/investigations in the cover letter, and start with
single-mm.

Thanks,

--
Peter Xu

2023-10-23 12:04:25

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On 22.10.23 17:46, Peter Xu wrote:
> On Fri, Oct 20, 2023 at 07:16:19PM +0200, David Hildenbrand wrote:
>> These are rather the vibes I'm getting from Peter. "Why rename it, could
>> confuse people because the original patches are old", "Why exclude it if it
>> has been included in the original patches". Not the kind of reasoning I can
>> relate to when it comes to upstreaming some patches.
>
> You can't blame anyone if you misunderstood and biased the question.
>
> The first question is definitely valid, even until now. You guys still
> prefer to rename it, which I'm totally fine with.
>
> The 2nd question is wrong from your interpretation. That's not my point,
> at least not starting from a few replies already. What I was asking for is
> why such page movement between mm is dangerous. I don't think I get solid
> answers even until now.
>
> Noticing "memcg is missing" is not an argument for "cross-mm is dangerous",
> it's a review comment. Suren can address that.
>
> You'll propose a new feature that may tag an mm is not an argument either,
> if it's not merged yet. We can also address that depending on what it is,
> also on which lands earlier.
>
> It'll be good to discuss these details even in a single-mm support. Anyone
> would like to add that can already refer to discussion in this thread.
>
> I hope I'm clear.
>

I said everything I had to say, go read what I wrote.

--
Cheers,

David / dhildenb

2023-10-23 12:31:08

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

Focusing on validate_remap_areas():

> +
> +static int validate_remap_areas(struct vm_area_struct *src_vma,
> + struct vm_area_struct *dst_vma)
> +{
> + /* Only allow remapping if both have the same access and protection */
> + if ((src_vma->vm_flags & VM_ACCESS_FLAGS) != (dst_vma->vm_flags & VM_ACCESS_FLAGS) ||
> + pgprot_val(src_vma->vm_page_prot) != pgprot_val(dst_vma->vm_page_prot))
> + return -EINVAL;

Makes sense. I do wonder about pkey and friends and whether we even have
to do anything special.

> +
> + /* Only allow remapping if both are mlocked or both aren't */
> + if ((src_vma->vm_flags & VM_LOCKED) != (dst_vma->vm_flags & VM_LOCKED))
> + return -EINVAL;
> +
> + if (!(src_vma->vm_flags & VM_WRITE) || !(dst_vma->vm_flags & VM_WRITE))
> + return -EINVAL;

Why would either of them need VM_WRITE? If one really needs it, then it
should be the destination (where we're moving stuff to).

> +
> + /*
> + * Be strict and only allow remap_pages if either the src or
> + * dst range is registered in the userfaultfd to prevent
> + * userland errors going unnoticed. As far as the VM
> + * consistency is concerned, it would be perfectly safe to
> + * remove this check, but there's no useful usage for
> + * remap_pages outside of userfaultfd registered ranges. This
> + * is after all why it is an ioctl belonging to the
> + * userfaultfd and not a syscall.

I think the last sentence is the important bit and the comment can
likely be reduced.

> + *
> + * Allow both vmas to be registered in the userfaultfd, just
> + * in case somebody finds a way to make such a case useful.
> + * Normally only one of the two vmas would be registered in
> + * the userfaultfd.

Should we just check the destination? That makes the most sense to me,
because with uffd we are resolving uffd events. And just like
copy/zeropage we want to resolve a page fault ("userfault") on a
non-present page at the destination.


> + */
> + if (!dst_vma->vm_userfaultfd_ctx.ctx &&
> + !src_vma->vm_userfaultfd_ctx.ctx)
> + return -EINVAL;
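
If we went that route, the check would presumably shrink to something like
(hypothetical):

        if (!dst_vma->vm_userfaultfd_ctx.ctx)
                return -EINVAL;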



> +
> + /*
> + * FIXME: only allow remapping across anonymous vmas,
> + * tmpfs should be added.
> + */
> + if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
> + return -EINVAL;

Why a FIXME here? Just drop the comment completely or replace it with
"We only allow to remap anonymous folios accross anonymous VMAs".

> +
> + /*
> + * Ensure the dst_vma has an anon_vma or this page
> + * would get a NULL anon_vma when moved in the
> + * dst_vma.
> + */
> + if (unlikely(anon_vma_prepare(dst_vma)))
> + return -ENOMEM;

Makes sense.

> +
> + return 0;
> +}


--
Cheers,

David / dhildenb

2023-10-23 15:54:30

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On 23.10.23 14:29, David Hildenbrand wrote:
>> +
>> + /* Only allow remapping if both are mlocked or both aren't */
>> + if ((src_vma->vm_flags & VM_LOCKED) != (dst_vma->vm_flags & VM_LOCKED))
>> + return -EINVAL;
>> +
>> + if (!(src_vma->vm_flags & VM_WRITE) || !(dst_vma->vm_flags & VM_WRITE))
>> + return -EINVAL;
>
> Why would either of them need VM_WRITE? If one really needs it, then it
> should be the destination (where we're moving stuff to).

Just realized that we want both to be writable.

If you have this in place, there is no need to use maybe*_mkwrite(), you
can use the non-maybe variants.
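
i.e. in remap_present_pte(), something like this hypothetical sketch,
assuming VM_WRITE was already validated on both VMAs:

        orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
        /* both VMAs are known writable, no maybe_mkwrite() needed */
        orig_dst_pte = pte_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma);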

I recall that for UFFDIO_COPY we even support PROT_NONE VMAs, is there
any reason why we want to have different semantics here?

--
Cheers,

David / dhildenb

2023-10-23 16:37:32

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On 23.10.23 14:03, David Hildenbrand wrote:
> On 22.10.23 17:46, Peter Xu wrote:
>> On Fri, Oct 20, 2023 at 07:16:19PM +0200, David Hildenbrand wrote:
>>> These are rather the vibes I'm getting from Peter. "Why rename it, could
>>> confuse people because the original patches are old", "Why exclude it if it
>>> has been included in the original patches". Not the kind of reasoning I can
>>> relate to when it comes to upstreaming some patches.
>>
>> You can't blame anyone if you misunderstood and biased the question.
>>
>> The first question is definitely valid, even until now. You guys still
>> prefer to rename it, which I'm totally fine with.
>>
>> The 2nd question is wrong from your interpretation. That's not my point,
>> at least not starting from a few replies already. What I was asking for is
>> why such page movement between mm is dangerous. I don't think I get solid
>> answers even until now.
>>
>> Noticing "memcg is missing" is not an argument for "cross-mm is dangerous",
>> it's a review comment. Suren can address that.
>>
>> You'll propose a new feature that may tag an mm is not an argument either,
>> if it's not merged yet. We can also address that depending on what it is,
>> also on which lands earlier.
>>
>> It'll be good to discuss these details even in a single-mm support. Anyone
>> would like to add that can already refer to discussion in this thread.
>>
>> I hope I'm clear.
>>
>
> I said everything I had to say, go read what I wrote.

Re-read your message; I flew over the first couple of paragraphs a bit
too quickly the first time (which can easily happen when I'm told that
I misunderstand questions and read them in a "biased" way).

I'm happy to discuss cross-mm support once we actually need it. I just
don't see the need to spend any energy on that right now, without any
users on the horizon.

[(a) I didn't blame anybody, I said that I don't understand the
reasoning. (b) I hope I made it clear that this is added complexity (and
not just currently dangerous) and so far I haven't heard a compelling
argument why we should do any of that or even spend our time discussing
that. (c) I never used "memcg is missing" as an argument for "cross-mm
is dangerous", all about added complexity without actual users. (d) "it
easily shows that there are cases where this will require extra work --
without any current benefits" -- is IMHO a perfectly fine argument
against complexity that currently nobody needs]

--
Cheers,

David / dhildenb

2023-10-23 17:34:30

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Mon, Oct 23, 2023 at 9:36 AM David Hildenbrand <[email protected]> wrote:
>
> On 23.10.23 14:03, David Hildenbrand wrote:
> > On 22.10.23 17:46, Peter Xu wrote:
> >> On Fri, Oct 20, 2023 at 07:16:19PM +0200, David Hildenbrand wrote:
> >>> These are rather the vibes I'm getting from Peter. "Why rename it, could
> >>> confuse people because the original patches are old", "Why exclude it if it
> >>> has been included in the original patches". Not the kind of reasoning I can
> >>> relate to when it comes to upstreaming some patches.
> >>
> >> You can't blame anyone if you misunderstood and biased the question.
> >>
> >> The first question is definitely valid, even until now. You guys still
> >> prefer to rename it, which I'm totally fine with.
> >>
> >> The 2nd question is wrong from your interpretation. That's not my point,
> >> at least not starting from a few replies already. What I was asking for is
> >> why such page movement between mm is dangerous. I don't think I get solid
> >> answers even until now.
> >>
> >> Noticing "memcg is missing" is not an argument for "cross-mm is dangerous",
> >> it's a review comment. Suren can address that.
> >>
> >> You'll propose a new feature that may tag an mm is not an argument either,
> >> if it's not merged yet. We can also address that depending on what it is,
> >> also on which lands earlier.
> >>
> >> It'll be good to discuss these details even in a single-mm support. Anyone
> >> would like to add that can already refer to discussion in this thread.
> >>
> >> I hope I'm clear.
> >>
> >
> > I said everything I had to say, go read what I wrote.
>
> Re-read your message; I flew over the first couple of paragraphs a bit
> too quickly the first time (which can easily happen when I'm told that
> I misunderstand questions and read them in a "biased" way).
>
> I'm happy to discuss cross-mm support once we actually need it. I just
> don't see the need to spend any energy on that right now, without any
> users on the horizon.
>
> [(a) I didn't blame anybody, I said that I don't understand the
> reasoning. (b) I hope I made it clear that this is added complexity (and
> not just currently dangerous) and so far I haven't heard a compelling
> argument why we should do any of that or even spend our time discussing
> that. (c) I never used "memcg is missing" as an argument for "cross-mm
> is dangerous", all about added complexity without actual users. (d) "it
> easily shows that there are cases where this will require extra work --
> without any current benefits" -- is IMHO a perfectly fine argument
> against complexity that currently nobody needs]

Thanks for the discussion, folks!
I think posting the single-mm first and then following up with
cross-mm and its test would help us move forward. That will provide
functionality that is needed today quickly without unnecessary
distractions and will give us more time to discuss cross-mm support.
Also we will be able to test single-mm in isolation and make it more
solid before moving onto cross-mm.
I'll try to post the next version sometime this week.
Thanks,
Suren.

>
> --
> Cheers,
>
> David / dhildenb
>

2023-10-23 17:44:39

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Sun, Oct 22, 2023 at 10:02 AM Peter Xu <[email protected]> wrote:
>
> On Thu, Oct 19, 2023 at 02:24:06PM -0700, Suren Baghdasaryan wrote:
> > On Thu, Oct 12, 2023 at 3:00 PM Peter Xu <[email protected]> wrote:
> > >
> > > On Sun, Oct 08, 2023 at 11:42:27PM -0700, Suren Baghdasaryan wrote:
> > > > From: Andrea Arcangeli <[email protected]>
> > > >
> > > > Implement the uABI of UFFDIO_MOVE ioctl.
> > > > UFFDIO_COPY performs ~20% better than UFFDIO_MOVE when the application
> > > > needs pages to be allocated [1]. However, with UFFDIO_MOVE, if pages are
> > > > available (in userspace) for recycling, as is usually the case in heap
> > > > compaction algorithms, then we can avoid the page allocation and memcpy
> > > > (done by UFFDIO_COPY). Also, since the pages are recycled in the
> > > > userspace, we avoid the need to release (via madvise) the pages back to
> > > > the kernel [2].
> > > > We see over 40% reduction (on a Google pixel 6 device) in the compacting
> > > > thread’s completion time by using UFFDIO_MOVE vs. UFFDIO_COPY. This was
> > > > measured using a benchmark that emulates a heap compaction implementation
> > > > using userfaultfd (to allow concurrent accesses by application threads).
> > > > More details of the usecase are explained in [2].
> > > > Furthermore, UFFDIO_MOVE enables moving swapped-out pages without
> > > > touching them within the same vma. Today, it can only be done by mremap,
> > > > however it forces splitting the vma.
> > > >
> > > > [1] https://lore.kernel.org/all/[email protected]/
> > > > [2] https://lore.kernel.org/linux-mm/CA+EESO4uO84SSnBhArH4HvLNhaUQ5nZKNKXqxRCyjniNVjp0Aw@mail.gmail.com/
> > > >
> > > > Update for the ioctl_userfaultfd(2) manpage:
> > > >
> > > > UFFDIO_MOVE
> > > > (Since Linux xxx) Move a continuous memory chunk into the
> > > > userfault registered range and optionally wake up the blocked
> > > > thread. The source and destination addresses and the number of
> > > > bytes to move are specified by the src, dst, and len fields of
> > > > the uffdio_move structure pointed to by argp:
> > > >
> > > > struct uffdio_move {
> > > > __u64 dst; /* Destination of move */
> > > > __u64 src; /* Source of move */
> > > > __u64 len; /* Number of bytes to move */
> > > > __u64 mode; /* Flags controlling behavior of move */
> > > > __s64 move; /* Number of bytes moved, or negated error */
> > > > };
> > > >
> > > > The following value may be bitwise ORed in mode to change the
> > > > behavior of the UFFDIO_MOVE operation:
> > > >
> > > > UFFDIO_MOVE_MODE_DONTWAKE
> > > > Do not wake up the thread that waits for page-fault
> > > > resolution
> > > >
> > > > UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES
> > > > Allow holes in the source virtual range that is being moved.
> > > > When not specified, the holes will result in ENOENT error.
> > > > When specified, the holes will be accounted as successfully
> > > > moved memory. This is mostly useful to move hugepage aligned
> > > > virtual regions without knowing if there are transparent
> > > > hugepages in the regions or not, but preventing the risk of
> > > > having to split the hugepage during the operation.
> > > >
> > > > The move field is used by the kernel to return the number of
> > > > bytes that was actually moved, or an error (a negated errno-
> > > > style value). If the value returned in move doesn't match the
> > > > value that was specified in len, the operation fails with the
> > > > error EAGAIN. The move field is output-only; it is not read by
> > > > the UFFDIO_MOVE operation.
> > > >
> > > > The operation may fail for various reasons. Usually, remapping of
> > > > pages that are not exclusive to the given process fail; once KSM
> > > > might deduplicate pages or fork() COW-shares pages during fork()
> > > > with child processes, they are no longer exclusive. Further, the
> > > > kernel might only perform lightweight checks for detecting whether
> > > > the pages are exclusive, and return -EBUSY in case that check fails.
> > > > To make the operation more likely to succeed, KSM should be
> > > > disabled, fork() should be avoided or MADV_DONTFORK should be
> > > > configured for the source VMA before fork().
> > > >
> > > > This ioctl(2) operation returns 0 on success. In this case, the
> > > > entire area was moved. On error, -1 is returned and errno is
> > > > set to indicate the error. Possible errors include:
> > > >
> > > > EAGAIN The number of bytes moved (i.e., the value returned in
> > > > the move field) does not equal the value that was
> > > > specified in the len field.
> > > >
> > > > EINVAL Either dst or len was not a multiple of the system page
> > > > size, or the range specified by src and len or dst and len
> > > > was invalid.
> > > >
> > > > EINVAL An invalid bit was specified in the mode field.
> > > >
> > > > ENOENT
> > > > The source virtual memory range has unmapped holes and
> > > > UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES is not set.
> > > >
> > > > EEXIST
> > > > The destination virtual memory range is fully or partially
> > > > mapped.
> > > >
> > > > EBUSY
> > > > The pages in the source virtual memory range are not
> > > > exclusive to the process. The kernel might only perform
> > > > lightweight checks for detecting whether the pages are
> > > > exclusive. To make the operation more likely to succeed,
> > > > KSM should be disabled, fork() should be avoided or
> > > > MADV_DONTFORK should be configured for the source virtual
> > > > memory area before fork().
> > > >
> > > > ENOMEM Allocating memory needed for the operation failed.
> > > >
> > > > ESRCH
> > > > The faulting process has exited at the time of a
> > >
> > > Nit pick comment for future man page: there's no faulting process in this
> > > context. Perhaps "target process"?
> >
> > Ack.
> >
> > >
> > > > UFFDIO_MOVE operation.
> > > >
> > > > Signed-off-by: Andrea Arcangeli <[email protected]>
> > > > Signed-off-by: Suren Baghdasaryan <[email protected]>
> > > > ---
> > > > Documentation/admin-guide/mm/userfaultfd.rst | 3 +
> > > > fs/userfaultfd.c | 63 ++
> > > > include/linux/rmap.h | 5 +
> > > > include/linux/userfaultfd_k.h | 12 +
> > > > include/uapi/linux/userfaultfd.h | 29 +-
> > > > mm/huge_memory.c | 138 +++++
> > > > mm/khugepaged.c | 3 +
> > > > mm/rmap.c | 6 +
> > > > mm/userfaultfd.c | 602 +++++++++++++++++++
> > > > 9 files changed, 860 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/Documentation/admin-guide/mm/userfaultfd.rst b/Documentation/admin-guide/mm/userfaultfd.rst
> > > > index 203e26da5f92..e5cc8848dcb3 100644
> > > > --- a/Documentation/admin-guide/mm/userfaultfd.rst
> > > > +++ b/Documentation/admin-guide/mm/userfaultfd.rst
> > > > @@ -113,6 +113,9 @@ events, except page fault notifications, may be generated:
> > > > areas. ``UFFD_FEATURE_MINOR_SHMEM`` is the analogous feature indicating
> > > > support for shmem virtual memory areas.
> > > >
> > > > +- ``UFFD_FEATURE_MOVE`` indicates that the kernel supports moving an
> > > > + existing page contents from userspace.
> > > > +
> > > > The userland application should set the feature flags it intends to use
> > > > when invoking the ``UFFDIO_API`` ioctl, to request that those features be
> > > > enabled if supported.
> > > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> > > > index a7c6ef764e63..ac52e0f99a69 100644
> > > > --- a/fs/userfaultfd.c
> > > > +++ b/fs/userfaultfd.c
> > > > @@ -2039,6 +2039,66 @@ static inline unsigned int uffd_ctx_features(__u64 user_features)
> > > > return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED;
> > > > }
> > > >
> > > > +static int userfaultfd_remap(struct userfaultfd_ctx *ctx,
> > > > + unsigned long arg)
> > >
> > > If we do want to rename from REMAP to MOVE, we'd better rename the
> > > functions too, as "remap" still exists all over the place..
> >
> > Ok. I thought that since the current implementation only remaps and
> > never copies it would be correct to keep "remap" in these internal
> > names and change that later if we support copying. But I'm fine with
> > renaming them now to avoid confusion. Will do.
>
> "move", not "copy", btw.
>
> Not a big deal, take your preference at each place; "remap" sometimes can
> read better, maybe. Fundamentally, I think it's because both "remap" and
> "move" work in 99% cases. That's also why I think either name would work
> here.

Ok, "move" it is. That will avoid unnecessary churn in the future if
we decide to implement the copy fallback.

>
> >
> >
> > >
> > > > +{
> > > > + __s64 ret;
> > > > + struct uffdio_move uffdio_move;
> > > > + struct uffdio_move __user *user_uffdio_move;
> > > > + struct userfaultfd_wake_range range;
> > > > +
> > > > + user_uffdio_move = (struct uffdio_move __user *) arg;
> > > > +
> > > > + ret = -EAGAIN;
> > > > + if (atomic_read(&ctx->mmap_changing))
> > > > + goto out;
> > >
> > > I didn't notice this before, but I think we need to re-check this after
> > > taking target mm's mmap read lock..
> >
> > Ack.
> >
> > >
> > > maybe we'd want to pass in ctx* into remap_pages(), more reasoning below.
> >
> > Makes sense.
> >
> > >
> > > > +
> > > > + ret = -EFAULT;
> > > > + if (copy_from_user(&uffdio_move, user_uffdio_move,
> > > > + /* don't copy "remap" last field */
> > >
> > > s/remap/move/
> >
> > Ack.
> >
> > >
> > > > + sizeof(uffdio_move)-sizeof(__s64)))
> > > > + goto out;
> > > > +
> > > > + ret = validate_range(ctx->mm, uffdio_move.dst, uffdio_move.len);
> > > > + if (ret)
> > > > + goto out;
> > > > +
> > > > + ret = validate_range(current->mm, uffdio_move.src, uffdio_move.len);
> > > > + if (ret)
> > > > + goto out;
> > > > +
> > > > + ret = -EINVAL;
> > > > + if (uffdio_move.mode & ~(UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES|
> > > > + UFFDIO_MOVE_MODE_DONTWAKE))
> > > > + goto out;
> > > > +
> > > > + if (mmget_not_zero(ctx->mm)) {
> > > > + ret = remap_pages(ctx->mm, current->mm,
> > > > + uffdio_move.dst, uffdio_move.src,
> > > > + uffdio_move.len, uffdio_move.mode);
> > > > + mmput(ctx->mm);
> > > > + } else {
> > > > + return -ESRCH;
> > > > + }
> > > > +
> > > > + if (unlikely(put_user(ret, &user_uffdio_move->move)))
> > > > + return -EFAULT;
> > > > + if (ret < 0)
> > > > + goto out;
> > > > +
> > > > + /* len == 0 would wake all */
> > > > + BUG_ON(!ret);
> > > > + range.len = ret;
> > > > + if (!(uffdio_move.mode & UFFDIO_MOVE_MODE_DONTWAKE)) {
> > > > + range.start = uffdio_move.dst;
> > > > + wake_userfault(ctx, &range);
> > > > + }
> > > > + ret = range.len == uffdio_move.len ? 0 : -EAGAIN;
> > > > +
> > > > +out:
> > > > + return ret;
> > > > +}
> > > > +
> > > > /*
> > > > * userland asks for a certain API version and we return which bits
> > > > * and ioctl commands are implemented in this kernel for such API
> > > > @@ -2131,6 +2191,9 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd,
> > > > case UFFDIO_ZEROPAGE:
> > > > ret = userfaultfd_zeropage(ctx, arg);
> > > > break;
> > > > + case UFFDIO_MOVE:
> > > > + ret = userfaultfd_remap(ctx, arg);
> > > > + break;
> > > > case UFFDIO_WRITEPROTECT:
> > > > ret = userfaultfd_writeprotect(ctx, arg);
> > > > break;
> > > > diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> > > > index b26fe858fd44..8034eda972e5 100644
> > > > --- a/include/linux/rmap.h
> > > > +++ b/include/linux/rmap.h
> > > > @@ -121,6 +121,11 @@ static inline void anon_vma_lock_write(struct anon_vma *anon_vma)
> > > > down_write(&anon_vma->root->rwsem);
> > > > }
> > > >
> > > > +static inline int anon_vma_trylock_write(struct anon_vma *anon_vma)
> > > > +{
> > > > + return down_write_trylock(&anon_vma->root->rwsem);
> > > > +}
> > > > +
> > > > static inline void anon_vma_unlock_write(struct anon_vma *anon_vma)
> > > > {
> > > > up_write(&anon_vma->root->rwsem);
> > > > diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
> > > > index f2dc19f40d05..ce8d20b57e8c 100644
> > > > --- a/include/linux/userfaultfd_k.h
> > > > +++ b/include/linux/userfaultfd_k.h
> > > > @@ -93,6 +93,18 @@ extern int mwriteprotect_range(struct mm_struct *dst_mm,
> > > > extern long uffd_wp_range(struct vm_area_struct *vma,
> > > > unsigned long start, unsigned long len, bool enable_wp);
> > > >
> > > > +/* remap_pages */
> > > > +void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
> > > > +void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
> > > > +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > > + unsigned long dst_start, unsigned long src_start,
> > > > + unsigned long len, __u64 flags);
> > > > +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > > + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> > > > + struct vm_area_struct *dst_vma,
> > > > + struct vm_area_struct *src_vma,
> > > > + unsigned long dst_addr, unsigned long src_addr);
> > > > +
> > > > /* mm helpers */
> > > > static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
> > > > struct vm_userfaultfd_ctx vm_ctx)
> > > > diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h
> > > > index 0dbc81015018..2841e4ea8f2c 100644
> > > > --- a/include/uapi/linux/userfaultfd.h
> > > > +++ b/include/uapi/linux/userfaultfd.h
> > > > @@ -41,7 +41,8 @@
> > > > UFFD_FEATURE_WP_HUGETLBFS_SHMEM | \
> > > > UFFD_FEATURE_WP_UNPOPULATED | \
> > > > UFFD_FEATURE_POISON | \
> > > > - UFFD_FEATURE_WP_ASYNC)
> > > > + UFFD_FEATURE_WP_ASYNC | \
> > > > + UFFD_FEATURE_MOVE)
> > > > #define UFFD_API_IOCTLS \
> > > > ((__u64)1 << _UFFDIO_REGISTER | \
> > > > (__u64)1 << _UFFDIO_UNREGISTER | \
> > > > @@ -50,6 +51,7 @@
> > > > ((__u64)1 << _UFFDIO_WAKE | \
> > > > (__u64)1 << _UFFDIO_COPY | \
> > > > (__u64)1 << _UFFDIO_ZEROPAGE | \
> > > > + (__u64)1 << _UFFDIO_MOVE | \
> > > > (__u64)1 << _UFFDIO_WRITEPROTECT | \
> > > > (__u64)1 << _UFFDIO_CONTINUE | \
> > > > (__u64)1 << _UFFDIO_POISON)
> > > > @@ -73,6 +75,7 @@
> > > > #define _UFFDIO_WAKE (0x02)
> > > > #define _UFFDIO_COPY (0x03)
> > > > #define _UFFDIO_ZEROPAGE (0x04)
> > > > +#define _UFFDIO_MOVE (0x05)
> > > > #define _UFFDIO_WRITEPROTECT (0x06)
> > > > #define _UFFDIO_CONTINUE (0x07)
> > > > #define _UFFDIO_POISON (0x08)
> > > > @@ -92,6 +95,8 @@
> > > > struct uffdio_copy)
> > > > #define UFFDIO_ZEROPAGE _IOWR(UFFDIO, _UFFDIO_ZEROPAGE, \
> > > > struct uffdio_zeropage)
> > > > +#define UFFDIO_MOVE _IOWR(UFFDIO, _UFFDIO_MOVE, \
> > > > + struct uffdio_move)
> > > > #define UFFDIO_WRITEPROTECT _IOWR(UFFDIO, _UFFDIO_WRITEPROTECT, \
> > > > struct uffdio_writeprotect)
> > > > #define UFFDIO_CONTINUE _IOWR(UFFDIO, _UFFDIO_CONTINUE, \
> > > > @@ -222,6 +227,9 @@ struct uffdio_api {
> > > > * asynchronous mode is supported in which the write fault is
> > > > * automatically resolved and write-protection is un-set.
> > > > * It implies UFFD_FEATURE_WP_UNPOPULATED.
> > > > + *
> > > > + * UFFD_FEATURE_MOVE indicates that the kernel supports moving an
> > > > + * existing page contents from userspace.
> > > > */
> > > > #define UFFD_FEATURE_PAGEFAULT_FLAG_WP (1<<0)
> > > > #define UFFD_FEATURE_EVENT_FORK (1<<1)
> > > > @@ -239,6 +247,7 @@ struct uffdio_api {
> > > > #define UFFD_FEATURE_WP_UNPOPULATED (1<<13)
> > > > #define UFFD_FEATURE_POISON (1<<14)
> > > > #define UFFD_FEATURE_WP_ASYNC (1<<15)
> > > > +#define UFFD_FEATURE_MOVE (1<<16)
> > > > __u64 features;
> > > >
> > > > __u64 ioctls;
> > > > @@ -347,6 +356,24 @@ struct uffdio_poison {
> > > > __s64 updated;
> > > > };
> > > >
> > > > +struct uffdio_move {
> > > > + __u64 dst;
> > > > + __u64 src;
> > > > + __u64 len;
> > > > + /*
> > > > + * Especially if used to atomically remove memory from the
> > > > + * address space the wake on the dst range is not needed.
> > > > + */
> > > > +#define UFFDIO_MOVE_MODE_DONTWAKE ((__u64)1<<0)
> > > > +#define UFFDIO_MOVE_MODE_ALLOW_SRC_HOLES ((__u64)1<<1)
> > > > + __u64 mode;
> > > > + /*
> > > > + * "move" is written by the ioctl and must be at the end: the
> > > > + * copy_from_user will not read the last 8 bytes.
> > > > + */
> > > > + __s64 move;
> > > > +};
> > > > +
> > > > /*
> > > > * Flags for the userfaultfd(2) system call itself.
> > > > */
> > > > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > > > index 9656be95a542..6fac5c3d66e6 100644
> > > > --- a/mm/huge_memory.c
> > > > +++ b/mm/huge_memory.c
> > > > @@ -2086,6 +2086,144 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > > > return ret;
> > > > }
> > > >
> > > > +#ifdef CONFIG_USERFAULTFD
> > > > +/*
> > > > + * The PT lock for src_pmd and the mmap_lock for reading are held by
> > > > + * the caller, but it must return after releasing the
> > > > + * page_table_lock. Just move the page from src_pmd to dst_pmd if possible.
> > > > + * Return zero if succeeded in moving the page, -EAGAIN if it needs to be
> > > > + * repeated by the caller, or other errors in case of failure.
> > > > + */
> > > > +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > > + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> > > > + struct vm_area_struct *dst_vma,
> > > > + struct vm_area_struct *src_vma,
> > > > + unsigned long dst_addr, unsigned long src_addr)
> > > > +{
> > > > + pmd_t _dst_pmd, src_pmdval;
> > > > + struct page *src_page;
> > > > + struct folio *src_folio;
> > > > + struct anon_vma *src_anon_vma;
> > > > + spinlock_t *src_ptl, *dst_ptl;
> > > > + pgtable_t src_pgtable, dst_pgtable;
> > > > + struct mmu_notifier_range range;
> > > > + int err = 0;
> > > > +
> > > > + src_pmdval = *src_pmd;
> > > > + src_ptl = pmd_lockptr(src_mm, src_pmd);
> > > > +
> > > > + lockdep_assert_held(src_ptl);
> > > > + mmap_assert_locked(src_mm);
> > > > + mmap_assert_locked(dst_mm);
> > > > +
> > > > + BUG_ON(!pmd_none(dst_pmdval));
> > > > + BUG_ON(src_addr & ~HPAGE_PMD_MASK);
> > > > + BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
> > > > +
> > > > + if (!pmd_trans_huge(src_pmdval)) {
> > > > + spin_unlock(src_ptl);
> > > > + if (is_pmd_migration_entry(src_pmdval)) {
> > > > + pmd_migration_entry_wait(src_mm, &src_pmdval);
> > > > + return -EAGAIN;
> > > > + }
> > > > + return -ENOENT;
> > > > + }
> > > > +
> > > > + src_page = pmd_page(src_pmdval);
> > > > + if (unlikely(!PageAnonExclusive(src_page))) {
> > > > + spin_unlock(src_ptl);
> > > > + return -EBUSY;
> > > > + }
> > > > +
> > > > + src_folio = page_folio(src_page);
> > > > + folio_get(src_folio);
> > > > + spin_unlock(src_ptl);
> > > > +
> > > > + /* preallocate dst_pgtable if needed */
> > > > + if (dst_mm != src_mm) {
> > > > + dst_pgtable = pte_alloc_one(dst_mm);
> > > > + if (unlikely(!dst_pgtable)) {
> > > > + err = -ENOMEM;
> > > > + goto put_folio;
> > > > + }
> > > > + } else {
> > > > + dst_pgtable = NULL;
> > > > + }
> > > > +
> > >
> > > IIUC Lokesh's comment applies here, we probably need the
> > > flush_cache_range(), not for x86 but for the other ones..
> > >
> > > cachetlb.rst:
> > >
> > > Next, we have the cache flushing interfaces. In general, when Linux
> > > is changing an existing virtual-->physical mapping to a new value,
> > > the sequence will be in one of the following forms::
> > >
> > > 1) flush_cache_mm(mm);
> > >    change_all_page_tables_of(mm);
> > >    flush_tlb_mm(mm);
> > >
> > > 2) flush_cache_range(vma, start, end);
> > >    change_range_of_page_tables(mm, start, end);
> > >    flush_tlb_range(vma, start, end);
> > >
> > > 3) flush_cache_page(vma, addr, pfn);
> > >    set_pte(pte_pointer, new_pte_val);
> > >    flush_tlb_page(vma, addr);
> >
> > Thanks for the reference. I guess that's to support VIVT caches?
>
> I'm not 100% sure VIVT is the only case, but yes: flushing anything cached
> in virtual-address form back to RAM would be required.

Ack.

>
> >
> > >
> > > > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
> > > > + src_addr + HPAGE_PMD_SIZE);
> > > > + mmu_notifier_invalidate_range_start(&range);
> > > > +
> > > > + folio_lock(src_folio);
> > > > +
> > > > + /*
> > > > + * split_huge_page walks the anon_vma chain without the page
> > > > + * lock. Serialize against it with the anon_vma lock, the page
> > > > + * lock is not enough.
> > > > + */
> > > > + src_anon_vma = folio_get_anon_vma(src_folio);
> > > > + if (!src_anon_vma) {
> > > > + err = -EAGAIN;
> > > > + goto unlock_folio;
> > > > + }
> > > > + anon_vma_lock_write(src_anon_vma);
> > > > +
> > > > + dst_ptl = pmd_lockptr(dst_mm, dst_pmd);
> > > > + double_pt_lock(src_ptl, dst_ptl);
> > > > + if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
> > > > + !pmd_same(*dst_pmd, dst_pmdval))) {
> > > > + double_pt_unlock(src_ptl, dst_ptl);
> > > > + err = -EAGAIN;
> > > > + goto put_anon_vma;
> > > > + }
> > > > + if (!PageAnonExclusive(&src_folio->page)) {
> > > > + double_pt_unlock(src_ptl, dst_ptl);
> > > > + err = -EBUSY;
> > > > + goto put_anon_vma;
> > > > + }
> > > > +
> > > > + BUG_ON(!folio_test_head(src_folio));
> > > > + BUG_ON(!folio_test_anon(src_folio));
> > > > +
> > > > + folio_move_anon_rmap(src_folio, dst_vma);
> > > > + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > > +
> > > > + src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> > > > + _dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
> > > > + _dst_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
> > >
> > > Last time the conclusion is we leverage can_change_pmd_writable(), no?
> >
> > After your explanation that this works correctly for soft-dirty and
> > UFFD_WP I thought the only thing left to handle was the check for
> > VM_WRITE in both src_vma and dst_vma (which I added into
> > validate_remap_areas()). Maybe I misunderstood and if so, I can
> > replace the above PageAnonExclusive() with can_change_pmd_writable()
> > (note that we err out on VM_SHARED VMAs, so PageAnonExclusive() will
> > be included in that check).
>
> I think we still need PageAnonExclusive() because that's the first guard to
> decide whether the page can be moved over at all.
>
> What I meant is something like keeping that, then:
>
> if (pmd_soft_dirty(src_pmdval))
>         _dst_pmd = pmd_mksoft_dirty(_dst_pmd);
> if (pmd_uffd_wp(src_pmdval))
>         _dst_pmd = pmd_mkuffd_wp(_dst_pmd);
> if (can_change_pmd_writable(dst_vma, dst_addr, _dst_pmd))
>         _dst_pmd = pmd_mkwrite(_dst_pmd, dst_vma);
>
> But I'm not really sure anyone can leverage that, especially after I just
> saw move_soft_dirty_pte(): mremap() treats everything as dirty after the
> move. I don't think there's a clear definition of how dirtiness should be
> treated after a remap.
>
> Maybe we should follow what it does with mremap()? Then your current code
> is fine. Maybe that's the better start.

I think that was the original intention, basically treating remapping
as a write operation. Maybe I should add a comment here to make it
more clear?

>
> >
> > >
> > > > + set_pmd_at(dst_mm, dst_addr, dst_pmd, _dst_pmd);
> > > > +
> > > > + src_pgtable = pgtable_trans_huge_withdraw(src_mm, src_pmd);
> > > > + if (dst_pgtable) {
> > > > + pgtable_trans_huge_deposit(dst_mm, dst_pmd, dst_pgtable);
> > > > + pte_free(src_mm, src_pgtable);
> > > > + dst_pgtable = NULL;
> > > > +
> > > > + mm_inc_nr_ptes(dst_mm);
> > > > + mm_dec_nr_ptes(src_mm);
> > > > + add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
> > > > + add_mm_counter(src_mm, MM_ANONPAGES, -HPAGE_PMD_NR);
> > > > + } else {
> > > > + pgtable_trans_huge_deposit(dst_mm, dst_pmd, src_pgtable);
> > > > + }
> > > > + double_pt_unlock(src_ptl, dst_ptl);
> > > > +
> > > > +put_anon_vma:
> > > > + anon_vma_unlock_write(src_anon_vma);
> > > > + put_anon_vma(src_anon_vma);
> > > > +unlock_folio:
> > > > + /* unblock rmap walks */
> > > > + folio_unlock(src_folio);
> > > > + mmu_notifier_invalidate_range_end(&range);
> > > > + if (dst_pgtable)
> > > > + pte_free(dst_mm, dst_pgtable);
> > > > +put_folio:
> > > > + folio_put(src_folio);
> > > > +
> > > > + return err;
> > > > +}
> > > > +#endif /* CONFIG_USERFAULTFD */
> > > > +
> > > > /*
> > > > * Returns page table lock pointer if a given pmd maps a thp, NULL otherwise.
> > > > *
> > > > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > > > index 2b5c0321d96b..0c1ee7172852 100644
> > > > --- a/mm/khugepaged.c
> > > > +++ b/mm/khugepaged.c
> > > > @@ -1136,6 +1136,9 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > > > * Prevent all access to pagetables with the exception of
> > > > * gup_fast later handled by the ptep_clear_flush and the VM
> > > > * handled by the anon_vma lock + PG_lock.
> > > > + *
> > > > + * UFFDIO_MOVE is prevented to race as well thanks to the
> > > > + * mmap_lock.
> > > > */
> > > > mmap_write_lock(mm);
> > > > result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> > > > diff --git a/mm/rmap.c b/mm/rmap.c
> > > > index f9ddc50269d2..a5919cac9a08 100644
> > > > --- a/mm/rmap.c
> > > > +++ b/mm/rmap.c
> > > > @@ -490,6 +490,12 @@ void __init anon_vma_init(void)
> > > > * page_remove_rmap() that the anon_vma pointer from page->mapping is valid
> > > > * if there is a mapcount, we can dereference the anon_vma after observing
> > > > * those.
> > > > + *
> > > > + * NOTE: the caller should normally hold folio lock when calling this. If
> > > > + * not, the caller needs to double check the anon_vma didn't change after
> > > > + * taking the anon_vma lock for either read or write (UFFDIO_MOVE can modify it
> > > > + * concurrently without folio lock protection). See folio_lock_anon_vma_read()
> > > > + * which has already covered that, and comment above remap_pages().
> > > > */
> > > > struct anon_vma *folio_get_anon_vma(struct folio *folio)
> > > > {
> > > > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > > > index 96d9eae5c7cc..45ce1a8b8ab9 100644
> > > > --- a/mm/userfaultfd.c
> > > > +++ b/mm/userfaultfd.c
> > > > @@ -842,3 +842,605 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
> > > > mmap_read_unlock(dst_mm);
> > > > return err;
> > > > }
> > > > +
> > > > +
> > > > +void double_pt_lock(spinlock_t *ptl1,
> > > > + spinlock_t *ptl2)
> > > > + __acquires(ptl1)
> > > > + __acquires(ptl2)
> > > > +{
> > > > + spinlock_t *ptl_tmp;
> > > > +
> > > > + if (ptl1 > ptl2) {
> > > > + /* exchange ptl1 and ptl2 */
> > > > + ptl_tmp = ptl1;
> > > > + ptl1 = ptl2;
> > > > + ptl2 = ptl_tmp;
> > > > + }
> > > > + /* lock in virtual address order to avoid lock inversion */
> > > > + spin_lock(ptl1);
> > > > + if (ptl1 != ptl2)
> > > > + spin_lock_nested(ptl2, SINGLE_DEPTH_NESTING);
> > > > + else
> > > > + __acquire(ptl2);
> > > > +}
> > > > +
> > > > +void double_pt_unlock(spinlock_t *ptl1,
> > > > + spinlock_t *ptl2)
> > > > + __releases(ptl1)
> > > > + __releases(ptl2)
> > > > +{
> > > > + spin_unlock(ptl1);
> > > > + if (ptl1 != ptl2)
> > > > + spin_unlock(ptl2);
> > > > + else
> > > > + __release(ptl2);
> > > > +}
> > > > +
> > > > +
> > > > +static int remap_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > > + struct vm_area_struct *dst_vma,
> > > > + struct vm_area_struct *src_vma,
> > > > + unsigned long dst_addr, unsigned long src_addr,
> > > > + pte_t *dst_pte, pte_t *src_pte,
> > > > + pte_t orig_dst_pte, pte_t orig_src_pte,
> > > > + spinlock_t *dst_ptl, spinlock_t *src_ptl,
> > > > + struct folio *src_folio)
> > > > +{
> > > > + double_pt_lock(dst_ptl, src_ptl);
> > > > +
> > > > + if (!pte_same(*src_pte, orig_src_pte) ||
> > > > + !pte_same(*dst_pte, orig_dst_pte)) {
> > > > + double_pt_unlock(dst_ptl, src_ptl);
> > > > + return -EAGAIN;
> > > > + }
> > > > + if (folio_test_large(src_folio) ||
> > > > + !PageAnonExclusive(&src_folio->page)) {
> > > > + double_pt_unlock(dst_ptl, src_ptl);
> > > > + return -EBUSY;
> > > > + }
> > > > +
> > > > + BUG_ON(!folio_test_anon(src_folio));
> > > > +
> > > > + folio_move_anon_rmap(src_folio, dst_vma);
> > > > + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > > > +
> > > > + orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
> > > > + orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
> > > > + orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma);
> > >
> > > can_change_pte_writable()?
> >
> > Same as my previous comment. If that's still needed I'll replace the
> > above PageAnonExclusive() check with can_change_pte_writable().
>
> If no one else sees any problem, let's keep your current code, per my above
> observations.. to match mremap(), also keep it simple.

Ack.

>
> One more thing I just remembered on memcg: uncharge+charge alone may not
> work; I think the lruvec needs to be maintained as well, or memcg shrink
> can try to swap some irrelevant page at least, and memcg accounting can
> also go wrong.
>
> AFAICT, that means something like another pair of:
>
> folio_isolate_lru() + folio_putback_lru()
>
> Besides the charge/uncharge.
>
> Yu Zhao should be familiar with that code, maybe you can double check with
> him before sending the new version.

Ok, I'll double check with Yu before sending cross-mm parts.
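
If the lru part is indeed needed, I'd expect the cross-mm patch to grow
something roughly like the sketch below (helper name hypothetical and
completely untested; the ordering and error handling are exactly what I
want to confirm with Yu):

	/*
	 * Sketch only: keep the lruvec consistent while moving a folio's
	 * memcg charge from the source mm's memcg to the destination's.
	 */
	static int move_folio_memcg(struct folio *folio, struct mm_struct *dst_mm)
	{
		int err;

		/* Take the folio off its current memcg's LRU list first. */
		if (!folio_isolate_lru(folio))
			return -EBUSY;

		/* Uncharge from the source memcg, charge to the destination. */
		mem_cgroup_uncharge(folio);
		err = mem_cgroup_charge(folio, dst_mm, GFP_KERNEL);

		/* Put it back, now on the destination memcg's lruvec. */
		folio_putback_lru(folio);

		return err;
	}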

>
> I think this will belong to the separate patch to add cross-mm support, but
> please also double check even just in case there can be implication of
> single-mm that I missed.

Hmm. For single-mm we do not recharge memcgs. Why would we need to
isolate and put the pages back? Maybe I'm missing something... I'll
ask Yu in any case.

>
> Please also don't feel stressed over cross-mm support: at some point if you
> see that separate patch grows we can stop from there, listing all the
> cross-mm todos/investigations in the cover letter and start with single-mm.

Yes, I think this would be the best way forward (see my previous reply).

Thanks,
Suren.

>
> Thanks,
>
> --
> Peter Xu
>

2023-10-23 18:38:41

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Mon, Oct 23, 2023 at 10:43:49AM -0700, Suren Baghdasaryan wrote:
> > Maybe we should follow what it does with mremap()? Then your current code
> > is fine. Maybe that's the better start.
>
> I think that was the original intention, basically treating remapping
> as a write operation. Maybe I should add a comment here to make it
> more clear?

Please avoid mentioning "emulate as a write" - this is not a write: e.g., we
move a swap entry over without faulting the page in, and we keep the page
state, e.g. its hotness. A write would change all of that.

Now rethinking with the recently merged WP_ASYNC: we ignore uffd-wp, which
means "dirty" from the uffd-wp async tracking POV; that matches soft-dirty
always being set. Looks all good.

Perhaps something like "Follow mremap() behavior; ignore uffd-wp for now"
should work?

--
Peter Xu

2023-10-23 18:57:20

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Mon, Oct 23, 2023 at 5:29 AM David Hildenbrand <[email protected]> wrote:
>
> Focusing on validate_remap_areas():
>
> > +
> > +static int validate_remap_areas(struct vm_area_struct *src_vma,
> > + struct vm_area_struct *dst_vma)
> > +{
> > + /* Only allow remapping if both have the same access and protection */
> > + if ((src_vma->vm_flags & VM_ACCESS_FLAGS) != (dst_vma->vm_flags & VM_ACCESS_FLAGS) ||
> > + pgprot_val(src_vma->vm_page_prot) != pgprot_val(dst_vma->vm_page_prot))
> > + return -EINVAL;
>
> Makes sense. I do wonder about pkey and friends and if we even have to
> do anything special.

I don't see anything special done for mremap. Do you have something in mind?

>
> > +
> > + /* Only allow remapping if both are mlocked or both aren't */
> > + if ((src_vma->vm_flags & VM_LOCKED) != (dst_vma->vm_flags & VM_LOCKED))
> > + return -EINVAL;
> > +
> > + if (!(src_vma->vm_flags & VM_WRITE) || !(dst_vma->vm_flags & VM_WRITE))
> > + return -EINVAL;
>
> Why would either of them need VM_WRITE? If one really needs it, then it's
> the destination (where we're moving stuff to).

As you noticed later, both should have VM_WRITE.

>
> > +
> > + /*
> > + * Be strict and only allow remap_pages if either the src or
> > + * dst range is registered in the userfaultfd to prevent
> > + * userland errors going unnoticed. As far as the VM
> > + * consistency is concerned, it would be perfectly safe to
> > + * remove this check, but there's no useful usage for
> > + * remap_pages outside of userfaultfd registered ranges. This
> > + * is after all why it is an ioctl belonging to the
> > + * userfaultfd and not a syscall.
>
> I think the last sentence is the important bit and the comment can
> likely be reduced.

Ok, I'll look into shortening it.
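
Maybe something like:

	/*
	 * Be strict: there's no useful usage for remap_pages outside of
	 * userfaultfd registered ranges, which is after all why it is
	 * an ioctl belonging to the userfaultfd and not a syscall.
	 */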

>
> > + *
> > + * Allow both vmas to be registered in the userfaultfd, just
> > + * in case somebody finds a way to make such a case useful.
> > + * Normally only one of the two vmas would be registered in
> > + * the userfaultfd.
>
> Should we just check the destination? That makes most sense to me,
> because with uffd we are resolving uffd-events. And just like
> copy/zeropage we want to resolve a page fault ("userfault") of a
> non-present page on the destination.

I think that makes sense. Not sure why the original implementation
needed the check for source too. Seems unnecessary.
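
The check could then reduce to something like:

	/* Only the destination range needs to be registered. */
	if (!dst_vma->vm_userfaultfd_ctx.ctx)
		return -EINVAL;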

>
>
> > + */
> > + if (!dst_vma->vm_userfaultfd_ctx.ctx &&
> > + !src_vma->vm_userfaultfd_ctx.ctx)
> > + return -EINVAL;
>
>
>
> > +
> > + /*
> > + * FIXME: only allow remapping across anonymous vmas,
> > + * tmpfs should be added.
> > + */
> > + if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
> > + return -EINVAL;
>
> Why a FIXME here? Just drop the comment completely or replace it with
> "We only allow to remap anonymous folios accross anonymous VMAs".

Will do. I guess Andrea had plans to cover tmpfs as well.
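
So, something like:

	/* We only allow remapping anonymous folios across anonymous VMAs. */
	if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
		return -EINVAL;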

>
> > +
> > + /*
> > + * Ensure the dst_vma has an anon_vma or this page
> > + * would get a NULL anon_vma when moved into the
> > + * dst_vma.
> > + */
> > + if (unlikely(anon_vma_prepare(dst_vma)))
> > + return -ENOMEM;
>
> Makes sense.
>
> > +
> > + return 0;
> > +}
>
>

Thanks,
Suren.


> --
> Cheers,
>
> David / dhildenb
>

2023-10-23 19:01:27

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Mon, Oct 23, 2023 at 8:53 AM David Hildenbrand <[email protected]> wrote:
>
> On 23.10.23 14:29, David Hildenbrand wrote:
> >> +
> >> + /* Only allow remapping if both are mlocked or both aren't */
> >> + if ((src_vma->vm_flags & VM_LOCKED) != (dst_vma->vm_flags & VM_LOCKED))
> >> + return -EINVAL;
> >> +
> >> + if (!(src_vma->vm_flags & VM_WRITE) || !(dst_vma->vm_flags & VM_WRITE))
> >> + return -EINVAL;
> >
> > Why would either of them need VM_WRITE? If one really needs it, then it's
> > the destination (where we're moving stuff to).
>
> Just realized that we want both to be writable.
>
> If you have this in place, there is no need to use maybe*_mkwrite(); you
> can use the non-maybe variants.

Ack.
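
With both VMAs guaranteed VM_WRITE by the earlier check, the destination
pte construction could become something like this sketch (assuming the
vma-aware pte_mkwrite() now in mm-unstable):

	orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
	/* Both VMAs are known writable, so no need for maybe_mkwrite(). */
	orig_dst_pte = pte_mkwrite(pte_mkdirty(orig_dst_pte), dst_vma);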

>
> I recall that for UFFDIO_COPY we even support PROT_NONE VMAs; is there
> any reason why we want to have different semantics here?

I don't think so. At least not for the single-mm case.

>
> --
> Cheers,
>
> David / dhildenb
>

2023-10-23 19:01:59

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Mon, Oct 23, 2023 at 11:37 AM Peter Xu <[email protected]> wrote:
>
> On Mon, Oct 23, 2023 at 10:43:49AM -0700, Suren Baghdasaryan wrote:
> > > Maybe we should follow what it does with mremap()? Then your current code
> > > is fine. Maybe that's the better start.
> >
> > I think that was the original intention, basically treating remapping
> > as a write operation. Maybe I should add a comment here to make it
> > more clear?
>
> Please avoid mentioning "emulate as a write" - this is not a write: e.g., we
> move a swap entry over without faulting the page in, and we keep the page
> state, e.g. its hotness. A write would change all of that.

Understood.

>
> Now rethinking with the recently merged WP_ASYNC: we ignore uffd-wp, which
> means "dirty" from the uffd-wp async tracking POV; that matches soft-dirty
> always being set. Looks all good.
>
> Perhaps something like "Follow mremap() behavior; ignore uffd-wp for now"
> should work?

Sounds good. Will add in the next version.
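
E.g. right above where the destination pte is composed (sketch):

	/* Follow mremap() behavior; ignore uffd-wp for now. */
	orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
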
Thanks!

>
> --
> Peter Xu
>

2023-10-24 14:28:46

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On 23.10.23 20:56, Suren Baghdasaryan wrote:
> On Mon, Oct 23, 2023 at 5:29 AM David Hildenbrand <[email protected]> wrote:
>>
>> Focusing on validate_remap_areas():
>>
>>> +
>>> +static int validate_remap_areas(struct vm_area_struct *src_vma,
>>> + struct vm_area_struct *dst_vma)
>>> +{
>>> + /* Only allow remapping if both have the same access and protection */
>>> + if ((src_vma->vm_flags & VM_ACCESS_FLAGS) != (dst_vma->vm_flags & VM_ACCESS_FLAGS) ||
>>> + pgprot_val(src_vma->vm_page_prot) != pgprot_val(dst_vma->vm_page_prot))
>>> + return -EINVAL;
>>
>> Makes sense. I do wonder about pkey and friends and if we even have to
>> do anything special.
>
> I don't see anything special done for mremap. Do you have something in mind?

Nothing concrete; I'm not a pkey expert. But as there is indeed nothing
pkey-special in the VMA, there is nothing we can really check for or
adjust.

So let's assume this is fine.

>>
>>> +
>>> + /* Only allow remapping if both are mlocked or both aren't */
>>> + if ((src_vma->vm_flags & VM_LOCKED) != (dst_vma->vm_flags & VM_LOCKED))
>>> + return -EINVAL;
>>> +
>>> + if (!(src_vma->vm_flags & VM_WRITE) || !(dst_vma->vm_flags & VM_WRITE))
>>> + return -EINVAL;
>>
>> Why would either of them need VM_WRITE? If one really needs it, then it's
>> the destination (where we're moving stuff to).
>
> As you noticed later, both should have VM_WRITE.

Can you comment why? Just a simplification for now? Would be good to add
that comment in the code as well.

/* For now, we keep it simple and only move between writable VMAs. */

>>> + */
>>> + if (!dst_vma->vm_userfaultfd_ctx.ctx &&
>>> + !src_vma->vm_userfaultfd_ctx.ctx)
>>> + return -EINVAL;
>>
>>
>>
>>> +
>>> + /*
>>> + * FIXME: only allow remapping across anonymous vmas,
>>> + * tmpfs should be added.
>>> + */
>>> + if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
>>> + return -EINVAL;
>>
>> Why a FIXME here? Just drop the comment completely or replace it with
>> "We only allow to remap anonymous folios accross anonymous VMAs".
>
> Will do. I guess Andrea had plans to cover tmpfs as well.


That is rather future work (or is there something to fix here?), better
documented in the cover letter.

Having thought about VMA checks, I do wonder if we want to just block
some VM_ flags right at the beginning (VM_IO,VM_PFNMAP,VM_HUGETLB,...).
That might be covered by some other checks here implicitly, but I'm not
100% sure if that's always the case. An explicit list as in
vma_ksm_compatible() might be clearer.

Further, I wonder if we have to block VM_SHADOW_STACK; we certainly
don't want to let users modify the shadow stack by moving modified
target pages into place. But this might already be covered by earlier
checks (vm_page_prot? but I didn't look up with which setting we ended
up in the upstream version).

Cc'ing Rick: see "validate_remap_areas()" in [1]

[1] https://lkml.kernel.org/r/[email protected]


--
Cheers,

David / dhildenb

2023-10-24 14:37:24

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v3 2/3] userfaultfd: UFFDIO_MOVE uABI

On Tue, Oct 24, 2023 at 7:27 AM David Hildenbrand <[email protected]> wrote:
>
> On 23.10.23 20:56, Suren Baghdasaryan wrote:
> > On Mon, Oct 23, 2023 at 5:29 AM David Hildenbrand <[email protected]> wrote:
> >>
> >> Focusing on validate_remap_areas():
> >>
> >>> +
> >>> +static int validate_remap_areas(struct vm_area_struct *src_vma,
> >>> + struct vm_area_struct *dst_vma)
> >>> +{
> >>> + /* Only allow remapping if both have the same access and protection */
> >>> + if ((src_vma->vm_flags & VM_ACCESS_FLAGS) != (dst_vma->vm_flags & VM_ACCESS_FLAGS) ||
> >>> + pgprot_val(src_vma->vm_page_prot) != pgprot_val(dst_vma->vm_page_prot))
> >>> + return -EINVAL;
> >>
> >> Makes sense. I do wonder about pkey and friends and if we even have to
> >> do anything special.
> >
> > I don't see anything special done for mremap. Do you have something in mind?
>
> Nothing concrete; I'm not a pkey expert. But as there is indeed nothing
> pkey-special in the VMA, there is nothing we can really check for or
> adjust.
>
> So let's assume this is fine.

Sounds good until someone tells us otherwise.

>
> >>
> >>> +
> >>> + /* Only allow remapping if both are mlocked or both aren't */
> >>> + if ((src_vma->vm_flags & VM_LOCKED) != (dst_vma->vm_flags & VM_LOCKED))
> >>> + return -EINVAL;
> >>> +
> >>> + if (!(src_vma->vm_flags & VM_WRITE) || !(dst_vma->vm_flags & VM_WRITE))
> >>> + return -EINVAL;
> >>
> >> Why would either of them need VM_WRITE? If one really needs it, then it's
> >> the destination (where we're moving stuff to).
> >
> > As you noticed later, both should have VM_WRITE.
>
> Can you comment why? Just a simplification for now? Would be good to add
> that comment in the code as well.

Yeah, I thought that to move a page both areas should be writable, since
we are technically modifying both with this operation.

>
> /* For now, we keep it simple and only move between writable VMAs. */

Ack. Will add.
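
i.e., combining your suggested comment with the existing check:

	/* For now, we keep it simple and only move between writable VMAs. */
	if (!(src_vma->vm_flags & VM_WRITE) || !(dst_vma->vm_flags & VM_WRITE))
		return -EINVAL;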

>
> >>> + */
> >>> + if (!dst_vma->vm_userfaultfd_ctx.ctx &&
> >>> + !src_vma->vm_userfaultfd_ctx.ctx)
> >>> + return -EINVAL;
> >>
> >>
> >>
> >>> +
> >>> + /*
> >>> + * FIXME: only allow remapping across anonymous vmas,
> >>> + * tmpfs should be added.
> >>> + */
> >>> + if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
> >>> + return -EINVAL;
> >>
> >> Why a FIXME here? Just drop the comment completely or replace it with
> >> "We only allow to remap anonymous folios accross anonymous VMAs".
> >
> > Will do. I guess Andrea had plans to cover tmpfs as well.
>
>
> That is rather future work (or is there something to fix here?), better
> documented in the cover letter.

Ack.

>
> Having thought about VMA checks, I do wonder if we want to just block
> some VM_ flags right at the beginning (VM_IO,VM_PFNMAP,VM_HUGETLB,...).
> That might be covered by some other checks here implicitly, but I'm not
> 100% sure if that's always the case. An explicit list as in
> vma_ksm_compatible() might be clearer.
>
> Further, I wonder if we have to block VM_SHADOW_STACK; we certainly
> don't want to let users modify the shadow stack by moving modified
> target pages into place. But this might already be covered by earlier
> checks (vm_page_prot? but I didn't look up with which setting we ended
> up in the upstream version).

Good point. I'll check if existing checks already cover these and if
not, will add them.
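
Perhaps an explicit helper modeled on vma_ksm_compatible(), along the
lines of this sketch (name and final flag list hypothetical):

	static bool vma_move_compatible(struct vm_area_struct *vma)
	{
		/* Reject VMA types that should never be moved from or into. */
		return !(vma->vm_flags & (VM_PFNMAP | VM_IO | VM_HUGETLB |
					  VM_MIXEDMAP | VM_SHADOW_STACK));
	}
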
Thanks,
Suren.

>
> Cc'ing Rick: see "validate_remap_areas()" in [1]
>
> [1] https://lkml.kernel.org/r/[email protected]
>
>
> --
> Cheers,
>
> David / dhildenb
>