2023-09-23 02:52:03

by Suren Baghdasaryan

[permalink] [raw]
Subject: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

From: Andrea Arcangeli <[email protected]>

This implements the uABI of UFFDIO_REMAP.

Notably one mode bitflag is also forwarded (and in turn known) by the
lowlevel remap_pages method.

Signed-off-by: Andrea Arcangeli <[email protected]>
Signed-off-by: Suren Baghdasaryan <[email protected]>
---
Changes since v1:
- add mmget_not_zero in userfaultfd_remap, per Jann Horn
- removed extern from function definitions, per Matthew Wilcox
- converted to folios in remap_pages_huge_pmd, per Matthew Wilcox
- use PageAnonExclusive in remap_pages_huge_pmd, per David Hildenbrand
- handle pgtable transfers between MMs, per Jann Horn
- ignore concurrent A/D pte bit changes, per Jann Horn
- split functions into smaller units, per David Hildenbrand
- test for folio_test_large in remap_anon_pte, per Matthew Wilcox
- use pte_swp_exclusive for swapcount check, per David Hildenbrand
- eliminated use of mmu_notifier_invalidate_range_start_nonblock,
per Jann Horn
- simplified THP alignment checks, per Jann Horn
- refactored the loop inside remap_pages, per Jann Horn
- additional clarifying comments, per Jann Horn

fs/userfaultfd.c | 63 ++++
include/linux/rmap.h | 5 +
include/linux/userfaultfd_k.h | 12 +
include/uapi/linux/userfaultfd.h | 22 ++
mm/huge_memory.c | 130 +++++++
mm/khugepaged.c | 3 +
mm/userfaultfd.c | 590 +++++++++++++++++++++++++++++++
7 files changed, 825 insertions(+)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 56eaae9dac1a..5b6bb20f4518 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -2027,6 +2027,66 @@ static inline unsigned int uffd_ctx_features(__u64 user_features)
return (unsigned int)user_features | UFFD_FEATURE_INITIALIZED;
}

+static int userfaultfd_remap(struct userfaultfd_ctx *ctx,
+ unsigned long arg)
+{
+ __s64 ret;
+ struct uffdio_remap uffdio_remap;
+ struct uffdio_remap __user *user_uffdio_remap;
+ struct userfaultfd_wake_range range;
+
+ user_uffdio_remap = (struct uffdio_remap __user *) arg;
+
+ ret = -EAGAIN;
+ if (atomic_read(&ctx->mmap_changing))
+ goto out;
+
+ ret = -EFAULT;
+ if (copy_from_user(&uffdio_remap, user_uffdio_remap,
+ /* don't copy "remap" last field */
+ sizeof(uffdio_remap)-sizeof(__s64)))
+ goto out;
+
+ ret = validate_range(ctx->mm, uffdio_remap.dst, uffdio_remap.len);
+ if (ret)
+ goto out;
+
+ ret = validate_range(current->mm, uffdio_remap.src, uffdio_remap.len);
+ if (ret)
+ goto out;
+
+ ret = -EINVAL;
+ if (uffdio_remap.mode & ~(UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES|
+ UFFDIO_REMAP_MODE_DONTWAKE))
+ goto out;
+
+ if (mmget_not_zero(ctx->mm)) {
+ ret = remap_pages(ctx->mm, current->mm,
+ uffdio_remap.dst, uffdio_remap.src,
+ uffdio_remap.len, uffdio_remap.mode);
+ mmput(ctx->mm);
+ } else {
+ return -ESRCH;
+ }
+
+ if (unlikely(put_user(ret, &user_uffdio_remap->remap)))
+ return -EFAULT;
+ if (ret < 0)
+ goto out;
+
+ /* len == 0 would wake all */
+ BUG_ON(!ret);
+ range.len = ret;
+ if (!(uffdio_remap.mode & UFFDIO_REMAP_MODE_DONTWAKE)) {
+ range.start = uffdio_remap.dst;
+ wake_userfault(ctx, &range);
+ }
+ ret = range.len == uffdio_remap.len ? 0 : -EAGAIN;
+
+out:
+ return ret;
+}
+
/*
* userland asks for a certain API version and we return which bits
* and ioctl commands are implemented in this kernel for such API
@@ -2113,6 +2173,9 @@ static long userfaultfd_ioctl(struct file *file, unsigned cmd,
case UFFDIO_ZEROPAGE:
ret = userfaultfd_zeropage(ctx, arg);
break;
+ case UFFDIO_REMAP:
+ ret = userfaultfd_remap(ctx, arg);
+ break;
case UFFDIO_WRITEPROTECT:
ret = userfaultfd_writeprotect(ctx, arg);
break;
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 51cc21ebb568..614c4b439907 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -121,6 +121,11 @@ static inline void anon_vma_lock_write(struct anon_vma *anon_vma)
down_write(&anon_vma->root->rwsem);
}

+static inline int anon_vma_trylock_write(struct anon_vma *anon_vma)
+{
+ return down_write_trylock(&anon_vma->root->rwsem);
+}
+
static inline void anon_vma_unlock_write(struct anon_vma *anon_vma)
{
up_write(&anon_vma->root->rwsem);
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index ac8c6854097c..9ea2c43ad4b7 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -93,6 +93,18 @@ extern int mwriteprotect_range(struct mm_struct *dst_mm,
extern long uffd_wp_range(struct vm_area_struct *vma,
unsigned long start, unsigned long len, bool enable_wp);

+/* remap_pages */
+void double_pt_lock(spinlock_t *ptl1, spinlock_t *ptl2);
+void double_pt_unlock(spinlock_t *ptl1, spinlock_t *ptl2);
+ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ unsigned long dst_start, unsigned long src_start,
+ unsigned long len, __u64 flags);
+int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
+ struct vm_area_struct *dst_vma,
+ struct vm_area_struct *src_vma,
+ unsigned long dst_addr, unsigned long src_addr);
+
/* mm helpers */
static inline bool is_mergeable_vm_userfaultfd_ctx(struct vm_area_struct *vma,
struct vm_userfaultfd_ctx vm_ctx)
diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h
index 62151706c5a3..22d1c43e39f9 100644
--- a/include/uapi/linux/userfaultfd.h
+++ b/include/uapi/linux/userfaultfd.h
@@ -49,6 +49,7 @@
((__u64)1 << _UFFDIO_WAKE | \
(__u64)1 << _UFFDIO_COPY | \
(__u64)1 << _UFFDIO_ZEROPAGE | \
+ (__u64)1 << _UFFDIO_REMAP | \
(__u64)1 << _UFFDIO_WRITEPROTECT | \
(__u64)1 << _UFFDIO_CONTINUE | \
(__u64)1 << _UFFDIO_POISON)
@@ -72,6 +73,7 @@
#define _UFFDIO_WAKE (0x02)
#define _UFFDIO_COPY (0x03)
#define _UFFDIO_ZEROPAGE (0x04)
+#define _UFFDIO_REMAP (0x05)
#define _UFFDIO_WRITEPROTECT (0x06)
#define _UFFDIO_CONTINUE (0x07)
#define _UFFDIO_POISON (0x08)
@@ -91,6 +93,8 @@
struct uffdio_copy)
#define UFFDIO_ZEROPAGE _IOWR(UFFDIO, _UFFDIO_ZEROPAGE, \
struct uffdio_zeropage)
+#define UFFDIO_REMAP _IOWR(UFFDIO, _UFFDIO_REMAP, \
+ struct uffdio_remap)
#define UFFDIO_WRITEPROTECT _IOWR(UFFDIO, _UFFDIO_WRITEPROTECT, \
struct uffdio_writeprotect)
#define UFFDIO_CONTINUE _IOWR(UFFDIO, _UFFDIO_CONTINUE, \
@@ -340,6 +344,24 @@ struct uffdio_poison {
__s64 updated;
};

+struct uffdio_remap {
+ __u64 dst;
+ __u64 src;
+ __u64 len;
+ /*
+ * Especially if used to atomically remove memory from the
+ * address space the wake on the dst range is not needed.
+ */
+#define UFFDIO_REMAP_MODE_DONTWAKE ((__u64)1<<0)
+#define UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES ((__u64)1<<1)
+ __u64 mode;
+ /*
+ * "remap" is written by the ioctl and must be at the end: the
+ * copy_from_user will not read the last 8 bytes.
+ */
+ __s64 remap;
+};
+
/*
* Flags for the userfaultfd(2) system call itself.
*/
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 064fbd90822b..a8c898df36db 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1932,6 +1932,136 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
return ret;
}

+#ifdef CONFIG_USERFAULTFD
+/*
+ * The PT lock for src_pmd and the mmap_lock for reading are held by
+ * the caller, but it must return after releasing the
+ * page_table_lock. We're guaranteed the src_pmd is a pmd_trans_huge
+ * until the PT lock of the src_pmd is released. Just move the page
+ * from src_pmd to dst_pmd if possible. Return zero if succeeded in
+ * moving the page, -EAGAIN if it needs to be repeated by the caller,
+ * or other errors in case of failure.
+ */
+int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
+ struct vm_area_struct *dst_vma,
+ struct vm_area_struct *src_vma,
+ unsigned long dst_addr, unsigned long src_addr)
+{
+ pmd_t _dst_pmd, src_pmdval;
+ struct page *src_page;
+ struct folio *src_folio;
+ struct anon_vma *src_anon_vma, *dst_anon_vma;
+ spinlock_t *src_ptl, *dst_ptl;
+ pgtable_t src_pgtable, dst_pgtable;
+ struct mmu_notifier_range range;
+ int err = 0;
+
+ src_pmdval = *src_pmd;
+ src_ptl = pmd_lockptr(src_mm, src_pmd);
+
+ BUG_ON(!spin_is_locked(src_ptl));
+ mmap_assert_locked(src_mm);
+ mmap_assert_locked(dst_mm);
+
+ BUG_ON(!pmd_trans_huge(src_pmdval));
+ BUG_ON(!pmd_none(dst_pmdval));
+ BUG_ON(src_addr & ~HPAGE_PMD_MASK);
+ BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
+
+ src_page = pmd_page(src_pmdval);
+ if (unlikely(!PageAnonExclusive(src_page))) {
+ spin_unlock(src_ptl);
+ return -EBUSY;
+ }
+
+ src_folio = page_folio(src_page);
+ folio_get(src_folio);
+ spin_unlock(src_ptl);
+
+ /* preallocate dst_pgtable if needed */
+ if (dst_mm != src_mm) {
+ dst_pgtable = pte_alloc_one(dst_mm);
+ if (unlikely(!dst_pgtable)) {
+ err = -ENOMEM;
+ goto put_folio;
+ }
+ } else {
+ dst_pgtable = NULL;
+ }
+
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
+ src_addr + HPAGE_PMD_SIZE);
+ mmu_notifier_invalidate_range_start(&range);
+
+ /* block all concurrent rmap walks */
+ folio_lock(src_folio);
+
+ /*
+ * split_huge_page walks the anon_vma chain without the page
+ * lock. Serialize against it with the anon_vma lock, the page
+ * lock is not enough.
+ */
+ src_anon_vma = folio_get_anon_vma(src_folio);
+ if (!src_anon_vma) {
+ err = -EAGAIN;
+ goto unlock_folio;
+ }
+ anon_vma_lock_write(src_anon_vma);
+
+ dst_ptl = pmd_lockptr(dst_mm, dst_pmd);
+ double_pt_lock(src_ptl, dst_ptl);
+ if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
+ !pmd_same(*dst_pmd, dst_pmdval) ||
+ folio_mapcount(src_folio) != 1)) {
+ double_pt_unlock(src_ptl, dst_ptl);
+ err = -EAGAIN;
+ goto put_anon_vma;
+ }
+
+ BUG_ON(!folio_test_head(src_folio));
+ BUG_ON(!folio_test_anon(src_folio));
+
+ dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
+ WRITE_ONCE(src_folio->mapping, (struct address_space *) dst_anon_vma);
+ WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
+
+ src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
+ _dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
+ _dst_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
+ set_pmd_at(dst_mm, dst_addr, dst_pmd, _dst_pmd);
+
+ src_pgtable = pgtable_trans_huge_withdraw(src_mm, src_pmd);
+ if (dst_pgtable) {
+ pgtable_trans_huge_deposit(dst_mm, dst_pmd, dst_pgtable);
+ pte_free(src_mm, src_pgtable);
+ dst_pgtable = NULL;
+
+ mm_inc_nr_ptes(dst_mm);
+ mm_dec_nr_ptes(src_mm);
+ add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+ add_mm_counter(src_mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+ } else {
+ pgtable_trans_huge_deposit(dst_mm, dst_pmd, src_pgtable);
+ }
+ double_pt_unlock(src_ptl, dst_ptl);
+
+put_anon_vma:
+ anon_vma_unlock_write(src_anon_vma);
+ put_anon_vma(src_anon_vma);
+unlock_folio:
+ /* unblock rmap walks */
+ folio_unlock(src_folio);
+ mmu_notifier_invalidate_range_end(&range);
+ if (dst_pgtable)
+ pte_free(dst_mm, dst_pgtable);
+put_folio:
+ folio_put(src_folio);
+
+ return err;
+}
+#endif /* CONFIG_USERFAULTFD */
+
/*
* Returns page table lock pointer if a given pmd maps a thp, NULL otherwise.
*
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 88433cc25d8a..af23248b3551 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1135,6 +1135,9 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
* Prevent all access to pagetables with the exception of
* gup_fast later handled by the ptep_clear_flush and the VM
* handled by the anon_vma lock + PG_lock.
+ *
+ * UFFDIO_REMAP is prevented to race as well thanks to the
+ * mmap_lock.
*/
mmap_write_lock(mm);
result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 96d9eae5c7cc..5ce5e364373c 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -842,3 +842,593 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
mmap_read_unlock(dst_mm);
return err;
}
+
+
+void double_pt_lock(spinlock_t *ptl1,
+ spinlock_t *ptl2)
+ __acquires(ptl1)
+ __acquires(ptl2)
+{
+ spinlock_t *ptl_tmp;
+
+ if (ptl1 > ptl2) {
+ /* exchange ptl1 and ptl2 */
+ ptl_tmp = ptl1;
+ ptl1 = ptl2;
+ ptl2 = ptl_tmp;
+ }
+ /* lock in virtual address order to avoid lock inversion */
+ spin_lock(ptl1);
+ if (ptl1 != ptl2)
+ spin_lock_nested(ptl2, SINGLE_DEPTH_NESTING);
+ else
+ __acquire(ptl2);
+}
+
+void double_pt_unlock(spinlock_t *ptl1,
+ spinlock_t *ptl2)
+ __releases(ptl1)
+ __releases(ptl2)
+{
+ spin_unlock(ptl1);
+ if (ptl1 != ptl2)
+ spin_unlock(ptl2);
+ else
+ __release(ptl2);
+}
+
+
+static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ struct vm_area_struct *dst_vma,
+ struct vm_area_struct *src_vma,
+ unsigned long dst_addr, unsigned long src_addr,
+ pte_t *dst_pte, pte_t *src_pte,
+ pte_t orig_dst_pte, pte_t orig_src_pte,
+ spinlock_t *dst_ptl, spinlock_t *src_ptl,
+ struct folio *src_folio)
+{
+ struct anon_vma *dst_anon_vma;
+
+ double_pt_lock(dst_ptl, src_ptl);
+
+ if (!pte_same(*src_pte, orig_src_pte) ||
+ !pte_same(*dst_pte, orig_dst_pte) ||
+ folio_test_large(src_folio) ||
+ folio_estimated_sharers(src_folio) != 1) {
+ double_pt_unlock(dst_ptl, src_ptl);
+ return -EAGAIN;
+ }
+
+ BUG_ON(!folio_test_anon(src_folio));
+
+ dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
+ WRITE_ONCE(src_folio->mapping,
+ (struct address_space *) dst_anon_vma);
+ WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
+ dst_addr));
+
+ orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
+ orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
+ orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
+ dst_vma);
+
+ set_pte_at(dst_mm, dst_addr, dst_pte, orig_dst_pte);
+
+ if (dst_mm != src_mm) {
+ inc_mm_counter(dst_mm, MM_ANONPAGES);
+ dec_mm_counter(src_mm, MM_ANONPAGES);
+ }
+
+ double_pt_unlock(dst_ptl, src_ptl);
+
+ return 0;
+}
+
+static int remap_swap_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ unsigned long dst_addr, unsigned long src_addr,
+ pte_t *dst_pte, pte_t *src_pte,
+ pte_t orig_dst_pte, pte_t orig_src_pte,
+ spinlock_t *dst_ptl, spinlock_t *src_ptl)
+{
+ if (!pte_swp_exclusive(orig_src_pte))
+ return -EBUSY;
+
+ double_pt_lock(dst_ptl, src_ptl);
+
+ if (!pte_same(*src_pte, orig_src_pte) ||
+ !pte_same(*dst_pte, orig_dst_pte)) {
+ double_pt_unlock(dst_ptl, src_ptl);
+ return -EAGAIN;
+ }
+
+ orig_src_pte = ptep_get_and_clear(src_mm, src_addr, src_pte);
+ set_pte_at(dst_mm, dst_addr, dst_pte, orig_src_pte);
+
+ if (dst_mm != src_mm) {
+ inc_mm_counter(dst_mm, MM_ANONPAGES);
+ dec_mm_counter(src_mm, MM_ANONPAGES);
+ }
+
+ double_pt_unlock(dst_ptl, src_ptl);
+
+ return 0;
+}
+
+/*
+ * The mmap_lock for reading is held by the caller. Just move the page
+ * from src_pmd to dst_pmd if possible, and return true if succeeded
+ * in moving the page.
+ */
+static int remap_pages_pte(struct mm_struct *dst_mm,
+ struct mm_struct *src_mm,
+ pmd_t *dst_pmd,
+ pmd_t *src_pmd,
+ struct vm_area_struct *dst_vma,
+ struct vm_area_struct *src_vma,
+ unsigned long dst_addr,
+ unsigned long src_addr,
+ __u64 mode)
+{
+ swp_entry_t entry;
+ pte_t orig_src_pte, orig_dst_pte;
+ spinlock_t *src_ptl, *dst_ptl;
+ pte_t *src_pte = NULL;
+ pte_t *dst_pte = NULL;
+
+ struct folio *src_folio = NULL;
+ struct anon_vma *src_anon_vma = NULL;
+ struct mmu_notifier_range range;
+ int err = 0;
+
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
+ src_addr, src_addr + PAGE_SIZE);
+ mmu_notifier_invalidate_range_start(&range);
+retry:
+ dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
+
+ /* If an huge pmd materialized from under us fail */
+ if (unlikely(!dst_pte)) {
+ err = -EFAULT;
+ goto out;
+ }
+
+ src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
+
+ /*
+ * We held the mmap_lock for reading so MADV_DONTNEED
+ * can zap transparent huge pages under us, or the
+ * transparent huge page fault can establish new
+ * transparent huge pages under us.
+ */
+ if (unlikely(!src_pte)) {
+ err = -EFAULT;
+ goto out;
+ }
+
+ BUG_ON(pmd_none(*dst_pmd));
+ BUG_ON(pmd_none(*src_pmd));
+ BUG_ON(pmd_trans_huge(*dst_pmd));
+ BUG_ON(pmd_trans_huge(*src_pmd));
+
+ spin_lock(dst_ptl);
+ orig_dst_pte = *dst_pte;
+ spin_unlock(dst_ptl);
+ if (!pte_none(orig_dst_pte)) {
+ err = -EEXIST;
+ goto out;
+ }
+
+ spin_lock(src_ptl);
+ orig_src_pte = *src_pte;
+ spin_unlock(src_ptl);
+ if (pte_none(orig_src_pte)) {
+ if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES))
+ err = -ENOENT;
+ else /* nothing to do to remap a hole */
+ err = 0;
+ goto out;
+ }
+
+ if (pte_present(orig_src_pte)) {
+ /*
+ * Pin and lock both source folio and anon_vma. Since we are in
+ * RCU read section, we can't block, so on contention have to
+ * unmap the ptes, obtain the lock and retry.
+ */
+ if (!src_folio) {
+ struct folio *folio;
+
+ /*
+ * Pin the page while holding the lock to be sure the
+ * page isn't freed under us
+ */
+ spin_lock(src_ptl);
+ if (!pte_same(orig_src_pte, *src_pte)) {
+ spin_unlock(src_ptl);
+ err = -EAGAIN;
+ goto out;
+ }
+
+ folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
+ if (!folio || !folio_test_anon(folio) ||
+ folio_test_large(folio) ||
+ folio_estimated_sharers(folio) != 1) {
+ spin_unlock(src_ptl);
+ err = -EBUSY;
+ goto out;
+ }
+
+ folio_get(folio);
+ src_folio = folio;
+ spin_unlock(src_ptl);
+
+ /* block all concurrent rmap walks */
+ if (!folio_trylock(src_folio)) {
+ pte_unmap(&orig_src_pte);
+ pte_unmap(&orig_dst_pte);
+ src_pte = dst_pte = NULL;
+ /* now we can block and wait */
+ folio_lock(src_folio);
+ goto retry;
+ }
+ }
+
+ if (!src_anon_vma) {
+ /*
+ * folio_referenced walks the anon_vma chain
+ * without the folio lock. Serialize against it with
+ * the anon_vma lock, the folio lock is not enough.
+ */
+ src_anon_vma = folio_get_anon_vma(src_folio);
+ if (!src_anon_vma) {
+ /* page was unmapped from under us */
+ err = -EAGAIN;
+ goto out;
+ }
+ if (!anon_vma_trylock_write(src_anon_vma)) {
+ pte_unmap(&orig_src_pte);
+ pte_unmap(&orig_dst_pte);
+ src_pte = dst_pte = NULL;
+ /* now we can block and wait */
+ anon_vma_lock_write(src_anon_vma);
+ goto retry;
+ }
+ }
+
+ err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
+ dst_addr, src_addr, dst_pte, src_pte,
+ orig_dst_pte, orig_src_pte,
+ dst_ptl, src_ptl, src_folio);
+ } else {
+ entry = pte_to_swp_entry(orig_src_pte);
+ if (non_swap_entry(entry)) {
+ if (is_migration_entry(entry)) {
+ pte_unmap(&orig_src_pte);
+ pte_unmap(&orig_dst_pte);
+ src_pte = dst_pte = NULL;
+ migration_entry_wait(src_mm, src_pmd,
+ src_addr);
+ err = -EAGAIN;
+ } else
+ err = -EFAULT;
+ goto out;
+ }
+
+ err = remap_swap_pte(dst_mm, src_mm, dst_addr, src_addr,
+ dst_pte, src_pte,
+ orig_dst_pte, orig_src_pte,
+ dst_ptl, src_ptl);
+ }
+
+out:
+ if (src_anon_vma) {
+ anon_vma_unlock_write(src_anon_vma);
+ put_anon_vma(src_anon_vma);
+ }
+ if (src_folio) {
+ folio_unlock(src_folio);
+ folio_put(src_folio);
+ }
+ if (dst_pte)
+ pte_unmap(dst_pte);
+ if (src_pte)
+ pte_unmap(src_pte);
+ mmu_notifier_invalidate_range_end(&range);
+
+ return err;
+}
+
+static int validate_remap_areas(struct vm_area_struct *src_vma,
+ struct vm_area_struct *dst_vma)
+{
+ /* Only allow remapping if both have the same access and protection */
+ if ((src_vma->vm_flags & VM_ACCESS_FLAGS) != (dst_vma->vm_flags & VM_ACCESS_FLAGS) ||
+ pgprot_val(src_vma->vm_page_prot) != pgprot_val(dst_vma->vm_page_prot))
+ return -EINVAL;
+
+ /* Only allow remapping if both are mlocked or both aren't */
+ if ((src_vma->vm_flags & VM_LOCKED) != (dst_vma->vm_flags & VM_LOCKED))
+ return -EINVAL;
+
+ /*
+ * Be strict and only allow remap_pages if either the src or
+ * dst range is registered in the userfaultfd to prevent
+ * userland errors going unnoticed. As far as the VM
+ * consistency is concerned, it would be perfectly safe to
+ * remove this check, but there's no useful usage for
+ * remap_pages ouside of userfaultfd registered ranges. This
+ * is after all why it is an ioctl belonging to the
+ * userfaultfd and not a syscall.
+ *
+ * Allow both vmas to be registered in the userfaultfd, just
+ * in case somebody finds a way to make such a case useful.
+ * Normally only one of the two vmas would be registered in
+ * the userfaultfd.
+ */
+ if (!dst_vma->vm_userfaultfd_ctx.ctx &&
+ !src_vma->vm_userfaultfd_ctx.ctx)
+ return -EINVAL;
+
+ /*
+ * FIXME: only allow remapping across anonymous vmas,
+ * tmpfs should be added.
+ */
+ if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
+ return -EINVAL;
+
+ /*
+ * Ensure the dst_vma has a anon_vma or this page
+ * would get a NULL anon_vma when moved in the
+ * dst_vma.
+ */
+ if (unlikely(anon_vma_prepare(dst_vma)))
+ return -ENOMEM;
+
+ return 0;
+}
+
+/**
+ * remap_pages - remap arbitrary anonymous pages of an existing vma
+ * @dst_start: start of the destination virtual memory range
+ * @src_start: start of the source virtual memory range
+ * @len: length of the virtual memory range
+ *
+ * remap_pages() remaps arbitrary anonymous pages atomically in zero
+ * copy. It only works on non shared anonymous pages because those can
+ * be relocated without generating non linear anon_vmas in the rmap
+ * code.
+ *
+ * It provides a zero copy mechanism to handle userspace page faults.
+ * The source vma pages should have mapcount == 1, which can be
+ * enforced by using madvise(MADV_DONTFORK) on src vma.
+ *
+ * The thread receiving the page during the userland page fault
+ * will receive the faulting page in the source vma through the network,
+ * storage or any other I/O device (MADV_DONTFORK in the source vma
+ * avoids remap_pages() to fail with -EBUSY if the process forks before
+ * remap_pages() is called), then it will call remap_pages() to map the
+ * page in the faulting address in the destination vma.
+ *
+ * This userfaultfd command works purely via pagetables, so it's the
+ * most efficient way to move physical non shared anonymous pages
+ * across different virtual addresses. Unlike mremap()/mmap()/munmap()
+ * it does not create any new vmas. The mapping in the destination
+ * address is atomic.
+ *
+ * It only works if the vma protection bits are identical from the
+ * source and destination vma.
+ *
+ * It can remap non shared anonymous pages within the same vma too.
+ *
+ * If the source virtual memory range has any unmapped holes, or if
+ * the destination virtual memory range is not a whole unmapped hole,
+ * remap_pages() will fail respectively with -ENOENT or -EEXIST. This
+ * provides a very strict behavior to avoid any chance of memory
+ * corruption going unnoticed if there are userland race conditions.
+ * Only one thread should resolve the userland page fault at any given
+ * time for any given faulting address. This means that if two threads
+ * try to both call remap_pages() on the same destination address at the
+ * same time, the second thread will get an explicit error from this
+ * command.
+ *
+ * The command retval will return "len" is successful. The command
+ * however can be interrupted by fatal signals or errors. If
+ * interrupted it will return the number of bytes successfully
+ * remapped before the interruption if any, or the negative error if
+ * none. It will never return zero. Either it will return an error or
+ * an amount of bytes successfully moved. If the retval reports a
+ * "short" remap, the remap_pages() command should be repeated by
+ * userland with src+retval, dst+reval, len-retval if it wants to know
+ * about the error that interrupted it.
+ *
+ * The UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES flag can be specified to
+ * prevent -ENOENT errors to materialize if there are holes in the
+ * source virtual range that is being remapped. The holes will be
+ * accounted as successfully remapped in the retval of the
+ * command. This is mostly useful to remap hugepage naturally aligned
+ * virtual regions without knowing if there are transparent hugepage
+ * in the regions or not, but preventing the risk of having to split
+ * the hugepmd during the remap.
+ *
+ * If there's any rmap walk that is taking the anon_vma locks without
+ * first obtaining the folio lock (for example split_huge_page and
+ * folio_referenced), they will have to verify if the folio->mapping
+ * has changed after taking the anon_vma lock. If it changed they
+ * should release the lock and retry obtaining a new anon_vma, because
+ * it means the anon_vma was changed by remap_pages() before the lock
+ * could be obtained. This is the only additional complexity added to
+ * the rmap code to provide this anonymous page remapping functionality.
+ */
+ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ unsigned long dst_start, unsigned long src_start,
+ unsigned long len, __u64 mode)
+{
+ struct vm_area_struct *src_vma, *dst_vma;
+ unsigned long src_addr, dst_addr;
+ pmd_t *src_pmd, *dst_pmd;
+ long err = -EINVAL;
+ ssize_t moved = 0;
+
+ /*
+ * Sanitize the command parameters:
+ */
+ BUG_ON(src_start & ~PAGE_MASK);
+ BUG_ON(dst_start & ~PAGE_MASK);
+ BUG_ON(len & ~PAGE_MASK);
+
+ /* Does the address range wrap, or is the span zero-sized? */
+ BUG_ON(src_start + len <= src_start);
+ BUG_ON(dst_start + len <= dst_start);
+
+ /*
+ * Because these are read sempahores there's no risk of lock
+ * inversion.
+ */
+ mmap_read_lock(dst_mm);
+ if (dst_mm != src_mm)
+ mmap_read_lock(src_mm);
+
+ /*
+ * Make sure the vma is not shared, that the src and dst remap
+ * ranges are both valid and fully within a single existing
+ * vma.
+ */
+ src_vma = find_vma(src_mm, src_start);
+ if (!src_vma || (src_vma->vm_flags & VM_SHARED))
+ goto out;
+ if (src_start < src_vma->vm_start ||
+ src_start + len > src_vma->vm_end)
+ goto out;
+
+ dst_vma = find_vma(dst_mm, dst_start);
+ if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
+ goto out;
+ if (dst_start < dst_vma->vm_start ||
+ dst_start + len > dst_vma->vm_end)
+ goto out;
+
+ err = validate_remap_areas(src_vma, dst_vma);
+ if (err)
+ goto out;
+
+ for (src_addr = src_start, dst_addr = dst_start;
+ src_addr < src_start + len;) {
+ spinlock_t *ptl;
+ pmd_t dst_pmdval;
+ unsigned long step_size;
+
+ BUG_ON(dst_addr >= dst_start + len);
+ /*
+ * Below works because anonymous area would not have a
+ * transparent huge PUD. If file-backed support is added,
+ * that case would need to be handled here.
+ */
+ src_pmd = mm_find_pmd(src_mm, src_addr);
+ if (unlikely(!src_pmd)) {
+ if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
+ err = -ENOENT;
+ break;
+ }
+ src_pmd = mm_alloc_pmd(src_mm, src_addr);
+ if (unlikely(!src_pmd)) {
+ err = -ENOMEM;
+ break;
+ }
+ }
+ dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
+ if (unlikely(!dst_pmd)) {
+ err = -ENOMEM;
+ break;
+ }
+
+ dst_pmdval = pmdp_get_lockless(dst_pmd);
+ /*
+ * If the dst_pmd is mapped as THP don't override it and just
+ * be strict. If dst_pmd changes into TPH after this check, the
+ * remap_pages_huge_pmd() will detect the change and retry
+ * while remap_pages_pte() will detect the change and fail.
+ */
+ if (unlikely(pmd_trans_huge(dst_pmdval))) {
+ err = -EEXIST;
+ break;
+ }
+
+ ptl = pmd_trans_huge_lock(src_pmd, src_vma);
+ if (ptl && !pmd_trans_huge(*src_pmd)) {
+ spin_unlock(ptl);
+ ptl = NULL;
+ }
+
+ if (ptl) {
+ /*
+ * Check if we can move the pmd without
+ * splitting it. First check the address
+ * alignment to be the same in src/dst. These
+ * checks don't actually need the PT lock but
+ * it's good to do it here to optimize this
+ * block away at build time if
+ * CONFIG_TRANSPARENT_HUGEPAGE is not set.
+ */
+ if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) ||
+ src_start + len - src_addr < HPAGE_PMD_SIZE || !pmd_none(dst_pmdval)) {
+ spin_unlock(ptl);
+ split_huge_pmd(src_vma, src_pmd, src_addr);
+ continue;
+ }
+
+ err = remap_pages_huge_pmd(dst_mm, src_mm,
+ dst_pmd, src_pmd,
+ dst_pmdval,
+ dst_vma, src_vma,
+ dst_addr, src_addr);
+ step_size = HPAGE_PMD_SIZE;
+ } else {
+ if (pmd_none(*src_pmd)) {
+ if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
+ err = -ENOENT;
+ break;
+ }
+ if (unlikely(__pte_alloc(src_mm, src_pmd))) {
+ err = -ENOMEM;
+ break;
+ }
+ }
+
+ if (unlikely(pte_alloc(dst_mm, dst_pmd))) {
+ err = -ENOMEM;
+ break;
+ }
+
+ err = remap_pages_pte(dst_mm, src_mm,
+ dst_pmd, src_pmd,
+ dst_vma, src_vma,
+ dst_addr, src_addr,
+ mode);
+ step_size = PAGE_SIZE;
+ }
+
+ cond_resched();
+
+ if (!err) {
+ dst_addr += step_size;
+ src_addr += step_size;
+ moved += step_size;
+ }
+
+ if ((!err || err == -EAGAIN) &&
+ fatal_signal_pending(current))
+ err = -EINTR;
+
+ if (err && err != -EAGAIN)
+ break;
+ }
+
+out:
+ mmap_read_unlock(dst_mm);
+ if (dst_mm != src_mm)
+ mmap_read_unlock(src_mm);
+ BUG_ON(moved < 0);
+ BUG_ON(err > 0);
+ BUG_ON(!moved && !err);
+ return moved ? moved : err;
+}
--
2.42.0.515.g380fc7ccd1-goog


2023-09-27 14:08:01

by Jann Horn

[permalink] [raw]
Subject: potential new userfaultfd vs khugepaged conflict [was: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI]

[moving Hugh into "To:" recipients as FYI for khugepaged interaction]

On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan <[email protected]> wrote:
> From: Andrea Arcangeli <[email protected]>
>
> This implements the uABI of UFFDIO_REMAP.
>
> Notably one mode bitflag is also forwarded (and in turn known) by the
> lowlevel remap_pages method.
>
> Signed-off-by: Andrea Arcangeli <[email protected]>
> Signed-off-by: Suren Baghdasaryan <[email protected]>
[...]
> +/*
> + * The mmap_lock for reading is held by the caller. Just move the page
> + * from src_pmd to dst_pmd if possible, and return true if succeeded
> + * in moving the page.
> + */
> +static int remap_pages_pte(struct mm_struct *dst_mm,
> + struct mm_struct *src_mm,
> + pmd_t *dst_pmd,
> + pmd_t *src_pmd,
> + struct vm_area_struct *dst_vma,
> + struct vm_area_struct *src_vma,
> + unsigned long dst_addr,
> + unsigned long src_addr,
> + __u64 mode)
> +{
> + swp_entry_t entry;
> + pte_t orig_src_pte, orig_dst_pte;
> + spinlock_t *src_ptl, *dst_ptl;
> + pte_t *src_pte = NULL;
> + pte_t *dst_pte = NULL;
> +
> + struct folio *src_folio = NULL;
> + struct anon_vma *src_anon_vma = NULL;
> + struct mmu_notifier_range range;
> + int err = 0;
> +
> + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
> + src_addr, src_addr + PAGE_SIZE);
> + mmu_notifier_invalidate_range_start(&range);
> +retry:
> + dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
> +
> + /* If an huge pmd materialized from under us fail */
> + if (unlikely(!dst_pte)) {
> + err = -EFAULT;
> + goto out;
> + }
> +
> + src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
> +
> + /*
> + * We held the mmap_lock for reading so MADV_DONTNEED
> + * can zap transparent huge pages under us, or the
> + * transparent huge page fault can establish new
> + * transparent huge pages under us.
> + */
> + if (unlikely(!src_pte)) {
> + err = -EFAULT;
> + goto out;
> + }
> +
> + BUG_ON(pmd_none(*dst_pmd));
> + BUG_ON(pmd_none(*src_pmd));
> + BUG_ON(pmd_trans_huge(*dst_pmd));
> + BUG_ON(pmd_trans_huge(*src_pmd));

This works for now, but note that Hugh Dickins has recently been
reworking khugepaged such that PTE-based mappings can be collapsed
into transhuge mappings under the mmap lock held in *read mode*;
holders of the mmap lock in read mode can only synchronize against
this by taking the right page table spinlock and rechecking the pmd
value. This is only the case for file-based mappings so far, not for
anonymous private VMAs; and this code only operates on anonymous
private VMAs so far, so it works out.

But if either Hugh further reworks khugepaged such that anonymous VMAs
can be collapsed under the mmap lock in read mode, or you expand this
code to work on file-backed VMAs, then it will become possible to hit
these BUG_ON() calls. I'm not sure what the plans for khugepaged going
forward are, but the number of edgecases everyone has to keep in mind
would go down if you changed this function to deal gracefully with
page tables disappearing under you.

In the newest version of mm/pgtable-generic.c, above
__pte_offset_map_lock(), there is a big comment block explaining the
current rules for page table access; in particular, regarding the
helper pte_offset_map_nolock() that you're using:

* pte_offset_map_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
* but when successful, it also outputs a pointer to the spinlock in ptlp - as
* pte_offset_map_lock() does, but in this case without locking it. This helps
* the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
* act on a changed *pmd: pte_offset_map_nolock() provides the correct spinlock
* pointer for the page table that it returns. In principle, the caller should
* recheck *pmd once the lock is taken; in practice, no callsite needs that -
* either the mmap_lock for write, or pte_same() check on contents, is enough.

If this becomes hittable in the future, I think you will need to
recheck *pmd, at least for dst_pte, to avoid copying PTEs into a
detached page table.

> + spin_lock(dst_ptl);
> + orig_dst_pte = *dst_pte;
> + spin_unlock(dst_ptl);
> + if (!pte_none(orig_dst_pte)) {
> + err = -EEXIST;
> + goto out;
> + }
> +
> + spin_lock(src_ptl);
> + orig_src_pte = *src_pte;
> + spin_unlock(src_ptl);
> + if (pte_none(orig_src_pte)) {
> + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES))
> + err = -ENOENT;
> + else /* nothing to do to remap a hole */
> + err = 0;
> + goto out;
> + }
> +
> + if (pte_present(orig_src_pte)) {
> + /*
> + * Pin and lock both source folio and anon_vma. Since we are in
> + * RCU read section, we can't block, so on contention have to
> + * unmap the ptes, obtain the lock and retry.
> + */
> + if (!src_folio) {
> + struct folio *folio;
> +
> + /*
> + * Pin the page while holding the lock to be sure the
> + * page isn't freed under us
> + */
> + spin_lock(src_ptl);
> + if (!pte_same(orig_src_pte, *src_pte)) {
> + spin_unlock(src_ptl);
> + err = -EAGAIN;
> + goto out;
> + }
> +
> + folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
> + if (!folio || !folio_test_anon(folio) ||
> + folio_test_large(folio) ||
> + folio_estimated_sharers(folio) != 1) {
> + spin_unlock(src_ptl);
> + err = -EBUSY;
> + goto out;
> + }
> +
> + folio_get(folio);
> + src_folio = folio;
> + spin_unlock(src_ptl);
> +
> + /* block all concurrent rmap walks */
> + if (!folio_trylock(src_folio)) {
> + pte_unmap(&orig_src_pte);
> + pte_unmap(&orig_dst_pte);
> + src_pte = dst_pte = NULL;
> + /* now we can block and wait */
> + folio_lock(src_folio);
> + goto retry;
> + }
> + }
> +
> + if (!src_anon_vma) {
> + /*
> + * folio_referenced walks the anon_vma chain
> + * without the folio lock. Serialize against it with
> + * the anon_vma lock, the folio lock is not enough.
> + */
> + src_anon_vma = folio_get_anon_vma(src_folio);
> + if (!src_anon_vma) {
> + /* page was unmapped from under us */
> + err = -EAGAIN;
> + goto out;
> + }
> + if (!anon_vma_trylock_write(src_anon_vma)) {
> + pte_unmap(&orig_src_pte);
> + pte_unmap(&orig_dst_pte);
> + src_pte = dst_pte = NULL;
> + /* now we can block and wait */
> + anon_vma_lock_write(src_anon_vma);
> + goto retry;
> + }
> + }
> +
> + err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
> + dst_addr, src_addr, dst_pte, src_pte,
> + orig_dst_pte, orig_src_pte,
> + dst_ptl, src_ptl, src_folio);
> + } else {
> + entry = pte_to_swp_entry(orig_src_pte);
> + if (non_swap_entry(entry)) {
> + if (is_migration_entry(entry)) {
> + pte_unmap(&orig_src_pte);
> + pte_unmap(&orig_dst_pte);
> + src_pte = dst_pte = NULL;
> + migration_entry_wait(src_mm, src_pmd,
> + src_addr);
> + err = -EAGAIN;
> + } else
> + err = -EFAULT;
> + goto out;
> + }
> +
> + err = remap_swap_pte(dst_mm, src_mm, dst_addr, src_addr,
> + dst_pte, src_pte,
> + orig_dst_pte, orig_src_pte,
> + dst_ptl, src_ptl);
> + }
> +
> +out:
> + if (src_anon_vma) {
> + anon_vma_unlock_write(src_anon_vma);
> + put_anon_vma(src_anon_vma);
> + }
> + if (src_folio) {
> + folio_unlock(src_folio);
> + folio_put(src_folio);
> + }
> + if (dst_pte)
> + pte_unmap(dst_pte);
> + if (src_pte)
> + pte_unmap(src_pte);
> + mmu_notifier_invalidate_range_end(&range);
> +
> + return err;
> +}

2023-09-27 18:52:27

by Jann Horn

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan <[email protected]> wrote:
> From: Andrea Arcangeli <[email protected]>
>
> This implements the uABI of UFFDIO_REMAP.
>
> Notably one mode bitflag is also forwarded (and in turn known) by the
> lowlevel remap_pages method.
>
> Signed-off-by: Andrea Arcangeli <[email protected]>
> Signed-off-by: Suren Baghdasaryan <[email protected]>
[...]
> +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> + struct vm_area_struct *dst_vma,
> + struct vm_area_struct *src_vma,
> + unsigned long dst_addr, unsigned long src_addr)
> +{
> + pmd_t _dst_pmd, src_pmdval;
> + struct page *src_page;
> + struct folio *src_folio;
> + struct anon_vma *src_anon_vma, *dst_anon_vma;
> + spinlock_t *src_ptl, *dst_ptl;
> + pgtable_t src_pgtable, dst_pgtable;
> + struct mmu_notifier_range range;
> + int err = 0;
> +
> + src_pmdval = *src_pmd;
> + src_ptl = pmd_lockptr(src_mm, src_pmd);
> +
> + BUG_ON(!spin_is_locked(src_ptl));
> + mmap_assert_locked(src_mm);
> + mmap_assert_locked(dst_mm);
> +
> + BUG_ON(!pmd_trans_huge(src_pmdval));
> + BUG_ON(!pmd_none(dst_pmdval));
> + BUG_ON(src_addr & ~HPAGE_PMD_MASK);
> + BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
> +
> + src_page = pmd_page(src_pmdval);
> + if (unlikely(!PageAnonExclusive(src_page))) {
> + spin_unlock(src_ptl);
> + return -EBUSY;
> + }
> +
> + src_folio = page_folio(src_page);
> + folio_get(src_folio);
> + spin_unlock(src_ptl);
> +
> + /* preallocate dst_pgtable if needed */
> + if (dst_mm != src_mm) {
> + dst_pgtable = pte_alloc_one(dst_mm);
> + if (unlikely(!dst_pgtable)) {
> + err = -ENOMEM;
> + goto put_folio;
> + }
> + } else {
> + dst_pgtable = NULL;
> + }
> +
> + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
> + src_addr + HPAGE_PMD_SIZE);
> + mmu_notifier_invalidate_range_start(&range);
> +
> + /* block all concurrent rmap walks */
> + folio_lock(src_folio);
> +
> + /*
> + * split_huge_page walks the anon_vma chain without the page
> + * lock. Serialize against it with the anon_vma lock, the page
> + * lock is not enough.
> + */
> + src_anon_vma = folio_get_anon_vma(src_folio);
> + if (!src_anon_vma) {
> + err = -EAGAIN;
> + goto unlock_folio;
> + }
> + anon_vma_lock_write(src_anon_vma);
> +
> + dst_ptl = pmd_lockptr(dst_mm, dst_pmd);
> + double_pt_lock(src_ptl, dst_ptl);
> + if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
> + !pmd_same(*dst_pmd, dst_pmdval) ||
> + folio_mapcount(src_folio) != 1)) {

I think this is also supposed to be PageAnonExclusive()?

> + double_pt_unlock(src_ptl, dst_ptl);
> + err = -EAGAIN;
> + goto put_anon_vma;
> + }
> +
> + BUG_ON(!folio_test_head(src_folio));
> + BUG_ON(!folio_test_anon(src_folio));
> +
> + dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
> + WRITE_ONCE(src_folio->mapping, (struct address_space *) dst_anon_vma);
> + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> +
> + src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> + _dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
> + _dst_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
> + set_pmd_at(dst_mm, dst_addr, dst_pmd, _dst_pmd);
> +
> + src_pgtable = pgtable_trans_huge_withdraw(src_mm, src_pmd);
> + if (dst_pgtable) {
> + pgtable_trans_huge_deposit(dst_mm, dst_pmd, dst_pgtable);
> + pte_free(src_mm, src_pgtable);
> + dst_pgtable = NULL;
> +
> + mm_inc_nr_ptes(dst_mm);
> + mm_dec_nr_ptes(src_mm);
> + add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
> + add_mm_counter(src_mm, MM_ANONPAGES, -HPAGE_PMD_NR);
> + } else {
> + pgtable_trans_huge_deposit(dst_mm, dst_pmd, src_pgtable);
> + }
> + double_pt_unlock(src_ptl, dst_ptl);
> +
> +put_anon_vma:
> + anon_vma_unlock_write(src_anon_vma);
> + put_anon_vma(src_anon_vma);
> +unlock_folio:
> + /* unblock rmap walks */
> + folio_unlock(src_folio);
> + mmu_notifier_invalidate_range_end(&range);
> + if (dst_pgtable)
> + pte_free(dst_mm, dst_pgtable);
> +put_folio:
> + folio_put(src_folio);
> +
> + return err;
> +}
> +#endif /* CONFIG_USERFAULTFD */
[...]
> +static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + struct vm_area_struct *dst_vma,
> + struct vm_area_struct *src_vma,
> + unsigned long dst_addr, unsigned long src_addr,
> + pte_t *dst_pte, pte_t *src_pte,
> + pte_t orig_dst_pte, pte_t orig_src_pte,
> + spinlock_t *dst_ptl, spinlock_t *src_ptl,
> + struct folio *src_folio)
> +{
> + struct anon_vma *dst_anon_vma;
> +
> + double_pt_lock(dst_ptl, src_ptl);
> +
> + if (!pte_same(*src_pte, orig_src_pte) ||
> + !pte_same(*dst_pte, orig_dst_pte) ||
> + folio_test_large(src_folio) ||
> + folio_estimated_sharers(src_folio) != 1) {
> + double_pt_unlock(dst_ptl, src_ptl);
> + return -EAGAIN;
> + }
> +
> + BUG_ON(!folio_test_anon(src_folio));
> +
> + dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
> + WRITE_ONCE(src_folio->mapping,
> + (struct address_space *) dst_anon_vma);
> + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
> + dst_addr));
> +
> + orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
> + orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
> + orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
> + dst_vma);

I think there's still a theoretical issue here that you could fix by
checking for the AnonExclusive flag, similar to the huge page case.

Consider the following scenario:

1. process P1 does a write fault in a private anonymous VMA, creating
and mapping a new anonymous page A1
2. process P1 forks and creates two children P2 and P3. afterwards, A1
is mapped in P1, P2 and P3 as a COW page, with mapcount 3.
3. process P1 removes its mapping of A1, dropping its mapcount to 2.
4. process P2 uses vmsplice() to grab a reference to A1 with get_user_pages()
5. process P2 removes its mapping of A1, dropping its mapcount to 1.

If at this point P3 does a write fault on its mapping of A1, it will
still trigger copy-on-write thanks to the AnonExclusive mechanism; and
this is necessary to avoid P3 mapping A1 as writable and writing data
into it that will become visible to P2, if P2 and P3 are in different
security contexts.

But if P3 instead moves its mapping of A1 to another address with
remap_anon_pte() which only does a page mapcount check, the
maybe_mkwrite() will directly make the mapping writable, circumventing
the AnonExclusive mechanism.

> + set_pte_at(dst_mm, dst_addr, dst_pte, orig_dst_pte);
> +
> + if (dst_mm != src_mm) {
> + inc_mm_counter(dst_mm, MM_ANONPAGES);
> + dec_mm_counter(src_mm, MM_ANONPAGES);
> + }
> +
> + double_pt_unlock(dst_ptl, src_ptl);
> +
> + return 0;
> +}
> +
> +static int remap_swap_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + unsigned long dst_addr, unsigned long src_addr,
> + pte_t *dst_pte, pte_t *src_pte,
> + pte_t orig_dst_pte, pte_t orig_src_pte,
> + spinlock_t *dst_ptl, spinlock_t *src_ptl)
> +{
> + if (!pte_swp_exclusive(orig_src_pte))
> + return -EBUSY;
> +
> + double_pt_lock(dst_ptl, src_ptl);
> +
> + if (!pte_same(*src_pte, orig_src_pte) ||
> + !pte_same(*dst_pte, orig_dst_pte)) {
> + double_pt_unlock(dst_ptl, src_ptl);
> + return -EAGAIN;
> + }
> +
> + orig_src_pte = ptep_get_and_clear(src_mm, src_addr, src_pte);
> + set_pte_at(dst_mm, dst_addr, dst_pte, orig_src_pte);
> +
> + if (dst_mm != src_mm) {
> + inc_mm_counter(dst_mm, MM_ANONPAGES);
> + dec_mm_counter(src_mm, MM_ANONPAGES);

I think this is the wrong counter. Looking at zap_pte_range(), in the
"Genuine swap entry" case, we modify the MM_SWAPENTS counter, not
MM_ANONPAGES.

> + }
> +
> + double_pt_unlock(dst_ptl, src_ptl);
> +
> + return 0;
> +}
> +
> +/*
> + * The mmap_lock for reading is held by the caller. Just move the page
> + * from src_pmd to dst_pmd if possible, and return true if succeeded
> + * in moving the page.
> + */
> +static int remap_pages_pte(struct mm_struct *dst_mm,
> + struct mm_struct *src_mm,
> + pmd_t *dst_pmd,
> + pmd_t *src_pmd,
> + struct vm_area_struct *dst_vma,
> + struct vm_area_struct *src_vma,
> + unsigned long dst_addr,
> + unsigned long src_addr,
> + __u64 mode)
> +{
> + swp_entry_t entry;
> + pte_t orig_src_pte, orig_dst_pte;
> + spinlock_t *src_ptl, *dst_ptl;
> + pte_t *src_pte = NULL;
> + pte_t *dst_pte = NULL;
> +
> + struct folio *src_folio = NULL;
> + struct anon_vma *src_anon_vma = NULL;
> + struct mmu_notifier_range range;
> + int err = 0;
> +
> + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
> + src_addr, src_addr + PAGE_SIZE);
> + mmu_notifier_invalidate_range_start(&range);
> +retry:

This retry looks a bit dodgy. On this retry label, we restart almost
the entire operation, including re-reading the source PTE; the only
variables that carry state forward from the previous retry loop
iteration are src_folio and src_anon_vma.

> + dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
> +
> + /* If an huge pmd materialized from under us fail */
> + if (unlikely(!dst_pte)) {
> + err = -EFAULT;
> + goto out;
> + }
[...]
> + spin_lock(dst_ptl);
> + orig_dst_pte = *dst_pte;
> + spin_unlock(dst_ptl);
> + if (!pte_none(orig_dst_pte)) {
> + err = -EEXIST;
> + goto out;
> + }
> +
> + spin_lock(src_ptl);
> + orig_src_pte = *src_pte;

Here we read an entirely new orig_src_pte value. Something like a
concurrent MADV_DONTNEED+pagefault could have made the PTE point to a
different page between loop iterations.

> + spin_unlock(src_ptl);

I think you have to insert something like the following here:

if (src_folio && (orig_dst_pte != previous_src_pte)) {
err = -EAGAIN;
goto out;
}
previous_src_pte = orig_dst_pte;

Otherwise:

> + if (pte_none(orig_src_pte)) {
> + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES))
> + err = -ENOENT;
> + else /* nothing to do to remap a hole */
> + err = 0;
> + goto out;
> + }
> +
> + if (pte_present(orig_src_pte)) {
> + /*
> + * Pin and lock both source folio and anon_vma. Since we are in
> + * RCU read section, we can't block, so on contention have to
> + * unmap the ptes, obtain the lock and retry.
> + */
> + if (!src_folio) {

If we already found a src_folio in the previous iteration (but the
trylock failed), we keep using the same src_folio without rechecking
if the current PTE still points to the same folio.

> + struct folio *folio;
> +
> + /*
> + * Pin the page while holding the lock to be sure the
> + * page isn't freed under us
> + */
> + spin_lock(src_ptl);
> + if (!pte_same(orig_src_pte, *src_pte)) {
> + spin_unlock(src_ptl);
> + err = -EAGAIN;
> + goto out;
> + }
> +
> + folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
> + if (!folio || !folio_test_anon(folio) ||
> + folio_test_large(folio) ||
> + folio_estimated_sharers(folio) != 1) {
> + spin_unlock(src_ptl);
> + err = -EBUSY;
> + goto out;
> + }
> +
> + folio_get(folio);
> + src_folio = folio;
> + spin_unlock(src_ptl);
> +
> + /* block all concurrent rmap walks */
> + if (!folio_trylock(src_folio)) {
> + pte_unmap(&orig_src_pte);
> + pte_unmap(&orig_dst_pte);
> + src_pte = dst_pte = NULL;
> + /* now we can block and wait */
> + folio_lock(src_folio);
> + goto retry;
> + }
> + }
> +
> + if (!src_anon_vma) {

(And here, if we already saw a src_anon_vma but the trylock failed,
we'll keep using that src_anon_vma.)

> + /*
> + * folio_referenced walks the anon_vma chain
> + * without the folio lock. Serialize against it with
> + * the anon_vma lock, the folio lock is not enough.
> + */
> + src_anon_vma = folio_get_anon_vma(src_folio);
> + if (!src_anon_vma) {
> + /* page was unmapped from under us */
> + err = -EAGAIN;
> + goto out;
> + }
> + if (!anon_vma_trylock_write(src_anon_vma)) {
> + pte_unmap(&orig_src_pte);
> + pte_unmap(&orig_dst_pte);
> + src_pte = dst_pte = NULL;
> + /* now we can block and wait */
> + anon_vma_lock_write(src_anon_vma);
> + goto retry;
> + }
> + }

So at this point we have:

- the current src_pte
- some referenced+locked src_folio that used to be mapped exclusively
at src_addr
- (the anon_vma associated with the src_folio)

> + err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
> + dst_addr, src_addr, dst_pte, src_pte,
> + orig_dst_pte, orig_src_pte,
> + dst_ptl, src_ptl, src_folio);

And then this will, without touching folio mapcounts/refcounts, delete
the current PTE at src_addr, and create a PTE at dst_addr pointing to
the old src_folio, leading to incorrect refcounts/mapcounts?

> + } else {
[...]
> + }
> +
> +out:
> + if (src_anon_vma) {
> + anon_vma_unlock_write(src_anon_vma);
> + put_anon_vma(src_anon_vma);
> + }
> + if (src_folio) {
> + folio_unlock(src_folio);
> + folio_put(src_folio);
> + }
> + if (dst_pte)
> + pte_unmap(dst_pte);
> + if (src_pte)
> + pte_unmap(src_pte);
> + mmu_notifier_invalidate_range_end(&range);
> +
> + return err;
> +}
[...]
> +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + unsigned long dst_start, unsigned long src_start,
> + unsigned long len, __u64 mode)
> +{
> + struct vm_area_struct *src_vma, *dst_vma;
> + unsigned long src_addr, dst_addr;
> + pmd_t *src_pmd, *dst_pmd;
> + long err = -EINVAL;
> + ssize_t moved = 0;
> +
> + /*
> + * Sanitize the command parameters:
> + */
> + BUG_ON(src_start & ~PAGE_MASK);
> + BUG_ON(dst_start & ~PAGE_MASK);
> + BUG_ON(len & ~PAGE_MASK);
> +
> + /* Does the address range wrap, or is the span zero-sized? */
> + BUG_ON(src_start + len <= src_start);
> + BUG_ON(dst_start + len <= dst_start);
> +
> + /*
> + * Because these are read sempahores there's no risk of lock
> + * inversion.
> + */
> + mmap_read_lock(dst_mm);
> + if (dst_mm != src_mm)
> + mmap_read_lock(src_mm);
> +
> + /*
> + * Make sure the vma is not shared, that the src and dst remap
> + * ranges are both valid and fully within a single existing
> + * vma.
> + */
> + src_vma = find_vma(src_mm, src_start);
> + if (!src_vma || (src_vma->vm_flags & VM_SHARED))
> + goto out;
> + if (src_start < src_vma->vm_start ||
> + src_start + len > src_vma->vm_end)
> + goto out;
> +
> + dst_vma = find_vma(dst_mm, dst_start);
> + if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
> + goto out;
> + if (dst_start < dst_vma->vm_start ||
> + dst_start + len > dst_vma->vm_end)
> + goto out;
> +
> + err = validate_remap_areas(src_vma, dst_vma);
> + if (err)
> + goto out;
> +
> + for (src_addr = src_start, dst_addr = dst_start;
> + src_addr < src_start + len;) {
> + spinlock_t *ptl;
> + pmd_t dst_pmdval;
> + unsigned long step_size;
> +
> + BUG_ON(dst_addr >= dst_start + len);
> + /*
> + * Below works because anonymous area would not have a
> + * transparent huge PUD. If file-backed support is added,
> + * that case would need to be handled here.
> + */
> + src_pmd = mm_find_pmd(src_mm, src_addr);
> + if (unlikely(!src_pmd)) {
> + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
> + err = -ENOENT;
> + break;
> + }
> + src_pmd = mm_alloc_pmd(src_mm, src_addr);
> + if (unlikely(!src_pmd)) {
> + err = -ENOMEM;
> + break;
> + }
> + }
> + dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
> + if (unlikely(!dst_pmd)) {
> + err = -ENOMEM;
> + break;
> + }
> +
> + dst_pmdval = pmdp_get_lockless(dst_pmd);
> + /*
> + * If the dst_pmd is mapped as THP don't override it and just
> + * be strict. If dst_pmd changes into TPH after this check, the
> + * remap_pages_huge_pmd() will detect the change and retry
> + * while remap_pages_pte() will detect the change and fail.
> + */
> + if (unlikely(pmd_trans_huge(dst_pmdval))) {
> + err = -EEXIST;
> + break;
> + }
> +
> + ptl = pmd_trans_huge_lock(src_pmd, src_vma);
> + if (ptl && !pmd_trans_huge(*src_pmd)) {
> + spin_unlock(ptl);
> + ptl = NULL;
> + }

This still looks wrong - we do still have to split_huge_pmd()
somewhere so that remap_pages_pte() works.

> + if (ptl) {
> + /*
> + * Check if we can move the pmd without
> + * splitting it. First check the address
> + * alignment to be the same in src/dst. These
> + * checks don't actually need the PT lock but
> + * it's good to do it here to optimize this
> + * block away at build time if
> + * CONFIG_TRANSPARENT_HUGEPAGE is not set.
> + */
> + if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) ||
> + src_start + len - src_addr < HPAGE_PMD_SIZE || !pmd_none(dst_pmdval)) {
> + spin_unlock(ptl);
> + split_huge_pmd(src_vma, src_pmd, src_addr);
> + continue;
> + }
> +
> + err = remap_pages_huge_pmd(dst_mm, src_mm,
> + dst_pmd, src_pmd,
> + dst_pmdval,
> + dst_vma, src_vma,
> + dst_addr, src_addr);
> + step_size = HPAGE_PMD_SIZE;
> + } else {
> + if (pmd_none(*src_pmd)) {
> + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
> + err = -ENOENT;
> + break;
> + }
> + if (unlikely(__pte_alloc(src_mm, src_pmd))) {
> + err = -ENOMEM;
> + break;
> + }
> + }
> +
> + if (unlikely(pte_alloc(dst_mm, dst_pmd))) {
> + err = -ENOMEM;
> + break;
> + }
> +
> + err = remap_pages_pte(dst_mm, src_mm,
> + dst_pmd, src_pmd,
> + dst_vma, src_vma,
> + dst_addr, src_addr,
> + mode);
> + step_size = PAGE_SIZE;
> + }
> +
> + cond_resched();
> +
> + if (!err) {
> + dst_addr += step_size;
> + src_addr += step_size;
> + moved += step_size;
> + }
> +
> + if ((!err || err == -EAGAIN) &&
> + fatal_signal_pending(current))
> + err = -EINTR;
> +
> + if (err && err != -EAGAIN)
> + break;
> + }
> +
> +out:
> + mmap_read_unlock(dst_mm);
> + if (dst_mm != src_mm)
> + mmap_read_unlock(src_mm);
> + BUG_ON(moved < 0);
> + BUG_ON(err > 0);
> + BUG_ON(!moved && !err);
> + return moved ? moved : err;
> +}
> --
> 2.42.0.515.g380fc7ccd1-goog
>

2023-09-27 19:07:12

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: potential new userfaultfd vs khugepaged conflict [was: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI]

On Wed, Sep 27, 2023 at 3:07 AM Jann Horn <[email protected]> wrote:
>
> [moving Hugh into "To:" recipients as FYI for khugepaged interaction]
>
> On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan <[email protected]> wrote:
> > From: Andrea Arcangeli <[email protected]>
> >
> > This implements the uABI of UFFDIO_REMAP.
> >
> > Notably one mode bitflag is also forwarded (and in turn known) by the
> > lowlevel remap_pages method.
> >
> > Signed-off-by: Andrea Arcangeli <[email protected]>
> > Signed-off-by: Suren Baghdasaryan <[email protected]>
> [...]
> > +/*
> > + * The mmap_lock for reading is held by the caller. Just move the page
> > + * from src_pmd to dst_pmd if possible, and return true if succeeded
> > + * in moving the page.
> > + */
> > +static int remap_pages_pte(struct mm_struct *dst_mm,
> > + struct mm_struct *src_mm,
> > + pmd_t *dst_pmd,
> > + pmd_t *src_pmd,
> > + struct vm_area_struct *dst_vma,
> > + struct vm_area_struct *src_vma,
> > + unsigned long dst_addr,
> > + unsigned long src_addr,
> > + __u64 mode)
> > +{
> > + swp_entry_t entry;
> > + pte_t orig_src_pte, orig_dst_pte;
> > + spinlock_t *src_ptl, *dst_ptl;
> > + pte_t *src_pte = NULL;
> > + pte_t *dst_pte = NULL;
> > +
> > + struct folio *src_folio = NULL;
> > + struct anon_vma *src_anon_vma = NULL;
> > + struct mmu_notifier_range range;
> > + int err = 0;
> > +
> > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
> > + src_addr, src_addr + PAGE_SIZE);
> > + mmu_notifier_invalidate_range_start(&range);
> > +retry:
> > + dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
> > +
> > + /* If an huge pmd materialized from under us fail */
> > + if (unlikely(!dst_pte)) {
> > + err = -EFAULT;
> > + goto out;
> > + }
> > +
> > + src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
> > +
> > + /*
> > + * We held the mmap_lock for reading so MADV_DONTNEED
> > + * can zap transparent huge pages under us, or the
> > + * transparent huge page fault can establish new
> > + * transparent huge pages under us.
> > + */
> > + if (unlikely(!src_pte)) {
> > + err = -EFAULT;
> > + goto out;
> > + }
> > +
> > + BUG_ON(pmd_none(*dst_pmd));
> > + BUG_ON(pmd_none(*src_pmd));
> > + BUG_ON(pmd_trans_huge(*dst_pmd));
> > + BUG_ON(pmd_trans_huge(*src_pmd));
>
> This works for now, but note that Hugh Dickins has recently been
> reworking khugepaged such that PTE-based mappings can be collapsed
> into transhuge mappings under the mmap lock held in *read mode*;
> holders of the mmap lock in read mode can only synchronize against
> this by taking the right page table spinlock and rechecking the pmd
> value. This is only the case for file-based mappings so far, not for
> anonymous private VMAs; and this code only operates on anonymous
> private VMAs so far, so it works out.
>
> But if either Hugh further reworks khugepaged such that anonymous VMAs
> can be collapsed under the mmap lock in read mode, or you expand this
> code to work on file-backed VMAs, then it will become possible to hit
> these BUG_ON() calls. I'm not sure what the plans for khugepaged going
> forward are, but the number of edgecases everyone has to keep in mind
> would go down if you changed this function to deal gracefully with
> page tables disappearing under you.
>
> In the newest version of mm/pgtable-generic.c, above
> __pte_offset_map_lock(), there is a big comment block explaining the
> current rules for page table access; in particular, regarding the
> helper pte_offset_map_nolock() that you're using:
>
> * pte_offset_map_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
> * but when successful, it also outputs a pointer to the spinlock in ptlp - as
> * pte_offset_map_lock() does, but in this case without locking it. This helps
> * the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
> * act on a changed *pmd: pte_offset_map_nolock() provides the correct spinlock
> * pointer for the page table that it returns. In principle, the caller should
> * recheck *pmd once the lock is taken; in practice, no callsite needs that -
> * either the mmap_lock for write, or pte_same() check on contents, is enough.
>
> If this becomes hittable in the future, I think you will need to
> recheck *pmd, at least for dst_pte, to avoid copying PTEs into a
> detached page table.

Thanks for the warning, Jann. It sounds to me it would be better to
add this pmd check now even though it's not hittable. Does that sound
good to everyone?

>
> > + spin_lock(dst_ptl);
> > + orig_dst_pte = *dst_pte;
> > + spin_unlock(dst_ptl);
> > + if (!pte_none(orig_dst_pte)) {
> > + err = -EEXIST;
> > + goto out;
> > + }
> > +
> > + spin_lock(src_ptl);
> > + orig_src_pte = *src_pte;
> > + spin_unlock(src_ptl);
> > + if (pte_none(orig_src_pte)) {
> > + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES))
> > + err = -ENOENT;
> > + else /* nothing to do to remap a hole */
> > + err = 0;
> > + goto out;
> > + }
> > +
> > + if (pte_present(orig_src_pte)) {
> > + /*
> > + * Pin and lock both source folio and anon_vma. Since we are in
> > + * RCU read section, we can't block, so on contention have to
> > + * unmap the ptes, obtain the lock and retry.
> > + */
> > + if (!src_folio) {
> > + struct folio *folio;
> > +
> > + /*
> > + * Pin the page while holding the lock to be sure the
> > + * page isn't freed under us
> > + */
> > + spin_lock(src_ptl);
> > + if (!pte_same(orig_src_pte, *src_pte)) {
> > + spin_unlock(src_ptl);
> > + err = -EAGAIN;
> > + goto out;
> > + }
> > +
> > + folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
> > + if (!folio || !folio_test_anon(folio) ||
> > + folio_test_large(folio) ||
> > + folio_estimated_sharers(folio) != 1) {
> > + spin_unlock(src_ptl);
> > + err = -EBUSY;
> > + goto out;
> > + }
> > +
> > + folio_get(folio);
> > + src_folio = folio;
> > + spin_unlock(src_ptl);
> > +
> > + /* block all concurrent rmap walks */
> > + if (!folio_trylock(src_folio)) {
> > + pte_unmap(&orig_src_pte);
> > + pte_unmap(&orig_dst_pte);
> > + src_pte = dst_pte = NULL;
> > + /* now we can block and wait */
> > + folio_lock(src_folio);
> > + goto retry;
> > + }
> > + }
> > +
> > + if (!src_anon_vma) {
> > + /*
> > + * folio_referenced walks the anon_vma chain
> > + * without the folio lock. Serialize against it with
> > + * the anon_vma lock, the folio lock is not enough.
> > + */
> > + src_anon_vma = folio_get_anon_vma(src_folio);
> > + if (!src_anon_vma) {
> > + /* page was unmapped from under us */
> > + err = -EAGAIN;
> > + goto out;
> > + }
> > + if (!anon_vma_trylock_write(src_anon_vma)) {
> > + pte_unmap(&orig_src_pte);
> > + pte_unmap(&orig_dst_pte);
> > + src_pte = dst_pte = NULL;
> > + /* now we can block and wait */
> > + anon_vma_lock_write(src_anon_vma);
> > + goto retry;
> > + }
> > + }
> > +
> > + err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
> > + dst_addr, src_addr, dst_pte, src_pte,
> > + orig_dst_pte, orig_src_pte,
> > + dst_ptl, src_ptl, src_folio);
> > + } else {
> > + entry = pte_to_swp_entry(orig_src_pte);
> > + if (non_swap_entry(entry)) {
> > + if (is_migration_entry(entry)) {
> > + pte_unmap(&orig_src_pte);
> > + pte_unmap(&orig_dst_pte);
> > + src_pte = dst_pte = NULL;
> > + migration_entry_wait(src_mm, src_pmd,
> > + src_addr);
> > + err = -EAGAIN;
> > + } else
> > + err = -EFAULT;
> > + goto out;
> > + }
> > +
> > + err = remap_swap_pte(dst_mm, src_mm, dst_addr, src_addr,
> > + dst_pte, src_pte,
> > + orig_dst_pte, orig_src_pte,
> > + dst_ptl, src_ptl);
> > + }
> > +
> > +out:
> > + if (src_anon_vma) {
> > + anon_vma_unlock_write(src_anon_vma);
> > + put_anon_vma(src_anon_vma);
> > + }
> > + if (src_folio) {
> > + folio_unlock(src_folio);
> > + folio_put(src_folio);
> > + }
> > + if (dst_pte)
> > + pte_unmap(dst_pte);
> > + if (src_pte)
> > + pte_unmap(src_pte);
> > + mmu_notifier_invalidate_range_end(&range);
> > +
> > + return err;
> > +}

2023-09-27 20:09:16

by Jann Horn

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Wed, Sep 27, 2023 at 8:08 PM Suren Baghdasaryan <[email protected]> wrote:
> On Wed, Sep 27, 2023 at 5:47 AM Jann Horn <[email protected]> wrote:
> > On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan <[email protected]> wrote:
> > > From: Andrea Arcangeli <[email protected]>
> > >
> > > This implements the uABI of UFFDIO_REMAP.
> > >
> > > Notably one mode bitflag is also forwarded (and in turn known) by the
> > > lowlevel remap_pages method.
> > >
> > > Signed-off-by: Andrea Arcangeli <[email protected]>
> > > Signed-off-by: Suren Baghdasaryan <[email protected]>
[...]
> > > + /*
> > > + * folio_referenced walks the anon_vma chain
> > > + * without the folio lock. Serialize against it with
> > > + * the anon_vma lock, the folio lock is not enough.
> > > + */
> > > + src_anon_vma = folio_get_anon_vma(src_folio);
> > > + if (!src_anon_vma) {
> > > + /* page was unmapped from under us */
> > > + err = -EAGAIN;
> > > + goto out;
> > > + }
> > > + if (!anon_vma_trylock_write(src_anon_vma)) {
> > > + pte_unmap(&orig_src_pte);
> > > + pte_unmap(&orig_dst_pte);
> > > + src_pte = dst_pte = NULL;
> > > + /* now we can block and wait */
> > > + anon_vma_lock_write(src_anon_vma);
> > > + goto retry;
> > > + }
> > > + }
> >
> > So at this point we have:
> >
> > - the current src_pte
> > - some referenced+locked src_folio that used to be mapped exclusively
> > at src_addr
> > - (the anon_vma associated with the src_folio)
> >
> > > + err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
> > > + dst_addr, src_addr, dst_pte, src_pte,
> > > + orig_dst_pte, orig_src_pte,
> > > + dst_ptl, src_ptl, src_folio);
> >
> > And then this will, without touching folio mapcounts/refcounts, delete
> > the current PTE at src_addr, and create a PTE at dst_addr pointing to
> > the old src_folio, leading to incorrect refcounts/mapcounts?
>
> I assume this still points to the missing previous_src_pte check
> discussed in the previous comments. Is that correct or is there yet
> another issue?

This is still referring to the missing previous_src_pte check.

> >
> > > + } else {
> > [...]
> > > + }
> > > +
> > > +out:
> > > + if (src_anon_vma) {
> > > + anon_vma_unlock_write(src_anon_vma);
> > > + put_anon_vma(src_anon_vma);
> > > + }
> > > + if (src_folio) {
> > > + folio_unlock(src_folio);
> > > + folio_put(src_folio);
> > > + }
> > > + if (dst_pte)
> > > + pte_unmap(dst_pte);
> > > + if (src_pte)
> > > + pte_unmap(src_pte);
> > > + mmu_notifier_invalidate_range_end(&range);
> > > +
> > > + return err;
> > > +}
> > [...]
> > > +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > + unsigned long dst_start, unsigned long src_start,
> > > + unsigned long len, __u64 mode)
> > > +{
> > > + struct vm_area_struct *src_vma, *dst_vma;
> > > + unsigned long src_addr, dst_addr;
> > > + pmd_t *src_pmd, *dst_pmd;
> > > + long err = -EINVAL;
> > > + ssize_t moved = 0;
> > > +
> > > + /*
> > > + * Sanitize the command parameters:
> > > + */
> > > + BUG_ON(src_start & ~PAGE_MASK);
> > > + BUG_ON(dst_start & ~PAGE_MASK);
> > > + BUG_ON(len & ~PAGE_MASK);
> > > +
> > > + /* Does the address range wrap, or is the span zero-sized? */
> > > + BUG_ON(src_start + len <= src_start);
> > > + BUG_ON(dst_start + len <= dst_start);
> > > +
> > > + /*
> > > + * Because these are read sempahores there's no risk of lock
> > > + * inversion.
> > > + */
> > > + mmap_read_lock(dst_mm);
> > > + if (dst_mm != src_mm)
> > > + mmap_read_lock(src_mm);
> > > +
> > > + /*
> > > + * Make sure the vma is not shared, that the src and dst remap
> > > + * ranges are both valid and fully within a single existing
> > > + * vma.
> > > + */
> > > + src_vma = find_vma(src_mm, src_start);
> > > + if (!src_vma || (src_vma->vm_flags & VM_SHARED))
> > > + goto out;
> > > + if (src_start < src_vma->vm_start ||
> > > + src_start + len > src_vma->vm_end)
> > > + goto out;
> > > +
> > > + dst_vma = find_vma(dst_mm, dst_start);
> > > + if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
> > > + goto out;
> > > + if (dst_start < dst_vma->vm_start ||
> > > + dst_start + len > dst_vma->vm_end)
> > > + goto out;
> > > +
> > > + err = validate_remap_areas(src_vma, dst_vma);
> > > + if (err)
> > > + goto out;
> > > +
> > > + for (src_addr = src_start, dst_addr = dst_start;
> > > + src_addr < src_start + len;) {
> > > + spinlock_t *ptl;
> > > + pmd_t dst_pmdval;
> > > + unsigned long step_size;
> > > +
> > > + BUG_ON(dst_addr >= dst_start + len);
> > > + /*
> > > + * Below works because anonymous area would not have a
> > > + * transparent huge PUD. If file-backed support is added,
> > > + * that case would need to be handled here.
> > > + */
> > > + src_pmd = mm_find_pmd(src_mm, src_addr);
> > > + if (unlikely(!src_pmd)) {
> > > + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
> > > + err = -ENOENT;
> > > + break;
> > > + }
> > > + src_pmd = mm_alloc_pmd(src_mm, src_addr);
> > > + if (unlikely(!src_pmd)) {
> > > + err = -ENOMEM;
> > > + break;
> > > + }
> > > + }
> > > + dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
> > > + if (unlikely(!dst_pmd)) {
> > > + err = -ENOMEM;
> > > + break;
> > > + }
> > > +
> > > + dst_pmdval = pmdp_get_lockless(dst_pmd);
> > > + /*
> > > + * If the dst_pmd is mapped as THP don't override it and just
> > > + * be strict. If dst_pmd changes into TPH after this check, the
> > > + * remap_pages_huge_pmd() will detect the change and retry
> > > + * while remap_pages_pte() will detect the change and fail.
> > > + */
> > > + if (unlikely(pmd_trans_huge(dst_pmdval))) {
> > > + err = -EEXIST;
> > > + break;
> > > + }
> > > +
> > > + ptl = pmd_trans_huge_lock(src_pmd, src_vma);
> > > + if (ptl && !pmd_trans_huge(*src_pmd)) {
> > > + spin_unlock(ptl);
> > > + ptl = NULL;
> > > + }
> >
> > This still looks wrong - we do still have to split_huge_pmd()
> > somewhere so that remap_pages_pte() works.
>
> Hmm, I guess this extra check is not even needed...

Hm, and instead we'd bail at the pte_offset_map_nolock() in
remap_pages_pte()? I guess that's unusual but works...

(It would be a thing to look out for if anyone tried to backport this,
since the checks in pte_offset_map_nolock() were only introduced in
6.5, but idk if anyone's doing that)

2023-09-27 23:26:21

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Wed, Sep 27, 2023 at 5:47 AM Jann Horn <[email protected]> wrote:
>
> On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan <[email protected]> wrote:
> > From: Andrea Arcangeli <[email protected]>
> >
> > This implements the uABI of UFFDIO_REMAP.
> >
> > Notably one mode bitflag is also forwarded (and in turn known) by the
> > lowlevel remap_pages method.
> >
> > Signed-off-by: Andrea Arcangeli <[email protected]>
> > Signed-off-by: Suren Baghdasaryan <[email protected]>
> [...]
> > +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> > + struct vm_area_struct *dst_vma,
> > + struct vm_area_struct *src_vma,
> > + unsigned long dst_addr, unsigned long src_addr)
> > +{
> > + pmd_t _dst_pmd, src_pmdval;
> > + struct page *src_page;
> > + struct folio *src_folio;
> > + struct anon_vma *src_anon_vma, *dst_anon_vma;
> > + spinlock_t *src_ptl, *dst_ptl;
> > + pgtable_t src_pgtable, dst_pgtable;
> > + struct mmu_notifier_range range;
> > + int err = 0;
> > +
> > + src_pmdval = *src_pmd;
> > + src_ptl = pmd_lockptr(src_mm, src_pmd);
> > +
> > + BUG_ON(!spin_is_locked(src_ptl));
> > + mmap_assert_locked(src_mm);
> > + mmap_assert_locked(dst_mm);
> > +
> > + BUG_ON(!pmd_trans_huge(src_pmdval));
> > + BUG_ON(!pmd_none(dst_pmdval));
> > + BUG_ON(src_addr & ~HPAGE_PMD_MASK);
> > + BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
> > +
> > + src_page = pmd_page(src_pmdval);
> > + if (unlikely(!PageAnonExclusive(src_page))) {
> > + spin_unlock(src_ptl);
> > + return -EBUSY;
> > + }
> > +
> > + src_folio = page_folio(src_page);
> > + folio_get(src_folio);
> > + spin_unlock(src_ptl);
> > +
> > + /* preallocate dst_pgtable if needed */
> > + if (dst_mm != src_mm) {
> > + dst_pgtable = pte_alloc_one(dst_mm);
> > + if (unlikely(!dst_pgtable)) {
> > + err = -ENOMEM;
> > + goto put_folio;
> > + }
> > + } else {
> > + dst_pgtable = NULL;
> > + }
> > +
> > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
> > + src_addr + HPAGE_PMD_SIZE);
> > + mmu_notifier_invalidate_range_start(&range);
> > +
> > + /* block all concurrent rmap walks */
> > + folio_lock(src_folio);
> > +
> > + /*
> > + * split_huge_page walks the anon_vma chain without the page
> > + * lock. Serialize against it with the anon_vma lock, the page
> > + * lock is not enough.
> > + */
> > + src_anon_vma = folio_get_anon_vma(src_folio);
> > + if (!src_anon_vma) {
> > + err = -EAGAIN;
> > + goto unlock_folio;
> > + }
> > + anon_vma_lock_write(src_anon_vma);
> > +
> > + dst_ptl = pmd_lockptr(dst_mm, dst_pmd);
> > + double_pt_lock(src_ptl, dst_ptl);
> > + if (unlikely(!pmd_same(*src_pmd, src_pmdval) ||
> > + !pmd_same(*dst_pmd, dst_pmdval) ||
> > + folio_mapcount(src_folio) != 1)) {
>
> I think this is also supposed to be PageAnonExclusive()?

Yes. Will fix.

>
> > + double_pt_unlock(src_ptl, dst_ptl);
> > + err = -EAGAIN;
> > + goto put_anon_vma;
> > + }
> > +
> > + BUG_ON(!folio_test_head(src_folio));
> > + BUG_ON(!folio_test_anon(src_folio));
> > +
> > + dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
> > + WRITE_ONCE(src_folio->mapping, (struct address_space *) dst_anon_vma);
> > + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma, dst_addr));
> > +
> > + src_pmdval = pmdp_huge_clear_flush(src_vma, src_addr, src_pmd);
> > + _dst_pmd = mk_huge_pmd(&src_folio->page, dst_vma->vm_page_prot);
> > + _dst_pmd = maybe_pmd_mkwrite(pmd_mkdirty(_dst_pmd), dst_vma);
> > + set_pmd_at(dst_mm, dst_addr, dst_pmd, _dst_pmd);
> > +
> > + src_pgtable = pgtable_trans_huge_withdraw(src_mm, src_pmd);
> > + if (dst_pgtable) {
> > + pgtable_trans_huge_deposit(dst_mm, dst_pmd, dst_pgtable);
> > + pte_free(src_mm, src_pgtable);
> > + dst_pgtable = NULL;
> > +
> > + mm_inc_nr_ptes(dst_mm);
> > + mm_dec_nr_ptes(src_mm);
> > + add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
> > + add_mm_counter(src_mm, MM_ANONPAGES, -HPAGE_PMD_NR);
> > + } else {
> > + pgtable_trans_huge_deposit(dst_mm, dst_pmd, src_pgtable);
> > + }
> > + double_pt_unlock(src_ptl, dst_ptl);
> > +
> > +put_anon_vma:
> > + anon_vma_unlock_write(src_anon_vma);
> > + put_anon_vma(src_anon_vma);
> > +unlock_folio:
> > + /* unblock rmap walks */
> > + folio_unlock(src_folio);
> > + mmu_notifier_invalidate_range_end(&range);
> > + if (dst_pgtable)
> > + pte_free(dst_mm, dst_pgtable);
> > +put_folio:
> > + folio_put(src_folio);
> > +
> > + return err;
> > +}
> > +#endif /* CONFIG_USERFAULTFD */
> [...]
> > +static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > + struct vm_area_struct *dst_vma,
> > + struct vm_area_struct *src_vma,
> > + unsigned long dst_addr, unsigned long src_addr,
> > + pte_t *dst_pte, pte_t *src_pte,
> > + pte_t orig_dst_pte, pte_t orig_src_pte,
> > + spinlock_t *dst_ptl, spinlock_t *src_ptl,
> > + struct folio *src_folio)
> > +{
> > + struct anon_vma *dst_anon_vma;
> > +
> > + double_pt_lock(dst_ptl, src_ptl);
> > +
> > + if (!pte_same(*src_pte, orig_src_pte) ||
> > + !pte_same(*dst_pte, orig_dst_pte) ||
> > + folio_test_large(src_folio) ||
> > + folio_estimated_sharers(src_folio) != 1) {
> > + double_pt_unlock(dst_ptl, src_ptl);
> > + return -EAGAIN;
> > + }
> > +
> > + BUG_ON(!folio_test_anon(src_folio));
> > +
> > + dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
> > + WRITE_ONCE(src_folio->mapping,
> > + (struct address_space *) dst_anon_vma);
> > + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
> > + dst_addr));
> > +
> > + orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
> > + orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
> > + orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
> > + dst_vma);
>
> I think there's still a theoretical issue here that you could fix by
> checking for the AnonExclusive flag, similar to the huge page case.
>
> Consider the following scenario:
>
> 1. process P1 does a write fault in a private anonymous VMA, creating
> and mapping a new anonymous page A1
> 2. process P1 forks and creates two children P2 and P3. afterwards, A1
> is mapped in P1, P2 and P3 as a COW page, with mapcount 3.
> 3. process P1 removes its mapping of A1, dropping its mapcount to 2.
> 4. process P2 uses vmsplice() to grab a reference to A1 with get_user_pages()
> 5. process P2 removes its mapping of A1, dropping its mapcount to 1.
>
> If at this point P3 does a write fault on its mapping of A1, it will
> still trigger copy-on-write thanks to the AnonExclusive mechanism; and
> this is necessary to avoid P3 mapping A1 as writable and writing data
> into it that will become visible to P2, if P2 and P3 are in different
> security contexts.
>
> But if P3 instead moves its mapping of A1 to another address with
> remap_anon_pte() which only does a page mapcount check, the
> maybe_mkwrite() will directly make the mapping writable, circumventing
> the AnonExclusive mechanism.

I see. Thanks for the detailed explanation! I will add
PageAnonExclusive() check in this path to prevent this scenario.

>
> > + set_pte_at(dst_mm, dst_addr, dst_pte, orig_dst_pte);
> > +
> > + if (dst_mm != src_mm) {
> > + inc_mm_counter(dst_mm, MM_ANONPAGES);
> > + dec_mm_counter(src_mm, MM_ANONPAGES);
> > + }
> > +
> > + double_pt_unlock(dst_ptl, src_ptl);
> > +
> > + return 0;
> > +}
> > +
> > +static int remap_swap_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > + unsigned long dst_addr, unsigned long src_addr,
> > + pte_t *dst_pte, pte_t *src_pte,
> > + pte_t orig_dst_pte, pte_t orig_src_pte,
> > + spinlock_t *dst_ptl, spinlock_t *src_ptl)
> > +{
> > + if (!pte_swp_exclusive(orig_src_pte))
> > + return -EBUSY;
> > +
> > + double_pt_lock(dst_ptl, src_ptl);
> > +
> > + if (!pte_same(*src_pte, orig_src_pte) ||
> > + !pte_same(*dst_pte, orig_dst_pte)) {
> > + double_pt_unlock(dst_ptl, src_ptl);
> > + return -EAGAIN;
> > + }
> > +
> > + orig_src_pte = ptep_get_and_clear(src_mm, src_addr, src_pte);
> > + set_pte_at(dst_mm, dst_addr, dst_pte, orig_src_pte);
> > +
> > + if (dst_mm != src_mm) {
> > + inc_mm_counter(dst_mm, MM_ANONPAGES);
> > + dec_mm_counter(src_mm, MM_ANONPAGES);
>
> I think this is the wrong counter. Looking at zap_pte_range(), in the
> "Genuine swap entry" case, we modify the MM_SWAPENTS counter, not
> MM_ANONPAGES.

Oops, my bad. Will fix.

>
> > + }
> > +
> > + double_pt_unlock(dst_ptl, src_ptl);
> > +
> > + return 0;
> > +}
> > +
> > +/*
> > + * The mmap_lock for reading is held by the caller. Just move the page
> > + * from src_pmd to dst_pmd if possible, and return true if succeeded
> > + * in moving the page.
> > + */
> > +static int remap_pages_pte(struct mm_struct *dst_mm,
> > + struct mm_struct *src_mm,
> > + pmd_t *dst_pmd,
> > + pmd_t *src_pmd,
> > + struct vm_area_struct *dst_vma,
> > + struct vm_area_struct *src_vma,
> > + unsigned long dst_addr,
> > + unsigned long src_addr,
> > + __u64 mode)
> > +{
> > + swp_entry_t entry;
> > + pte_t orig_src_pte, orig_dst_pte;
> > + spinlock_t *src_ptl, *dst_ptl;
> > + pte_t *src_pte = NULL;
> > + pte_t *dst_pte = NULL;
> > +
> > + struct folio *src_folio = NULL;
> > + struct anon_vma *src_anon_vma = NULL;
> > + struct mmu_notifier_range range;
> > + int err = 0;
> > +
> > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
> > + src_addr, src_addr + PAGE_SIZE);
> > + mmu_notifier_invalidate_range_start(&range);
> > +retry:
>
> This retry looks a bit dodgy. On this retry label, we restart almost
> the entire operation, including re-reading the source PTE; the only
> variables that carry state forward from the previous retry loop
> iteration are src_folio and src_anon_vma.
>
> > + dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
> > +
> > + /* If an huge pmd materialized from under us fail */
> > + if (unlikely(!dst_pte)) {
> > + err = -EFAULT;
> > + goto out;
> > + }
> [...]
> > + spin_lock(dst_ptl);
> > + orig_dst_pte = *dst_pte;
> > + spin_unlock(dst_ptl);
> > + if (!pte_none(orig_dst_pte)) {
> > + err = -EEXIST;
> > + goto out;
> > + }
> > +
> > + spin_lock(src_ptl);
> > + orig_src_pte = *src_pte;
>
> Here we read an entirely new orig_src_pte value. Something like a
> concurrent MADV_DONTNEED+pagefault could have made the PTE point to a
> different page between loop iterations.
>
> > + spin_unlock(src_ptl);
>
> I think you have to insert something like the following here:
>
> if (src_folio && (orig_dst_pte != previous_src_pte)) {
> err = -EAGAIN;
> goto out;
> }
> previous_src_pte = orig_dst_pte;

Yes, this definitely needs to be rechecked. Will fix.

>
> Otherwise:
>
> > + if (pte_none(orig_src_pte)) {
> > + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES))
> > + err = -ENOENT;
> > + else /* nothing to do to remap a hole */
> > + err = 0;
> > + goto out;
> > + }
> > +
> > + if (pte_present(orig_src_pte)) {
> > + /*
> > + * Pin and lock both source folio and anon_vma. Since we are in
> > + * RCU read section, we can't block, so on contention have to
> > + * unmap the ptes, obtain the lock and retry.
> > + */
> > + if (!src_folio) {
>
> If we already found a src_folio in the previous iteration (but the
> trylock failed), we keep using the same src_folio without rechecking
> if the current PTE still points to the same folio.
>
> > + struct folio *folio;
> > +
> > + /*
> > + * Pin the page while holding the lock to be sure the
> > + * page isn't freed under us
> > + */
> > + spin_lock(src_ptl);
> > + if (!pte_same(orig_src_pte, *src_pte)) {
> > + spin_unlock(src_ptl);
> > + err = -EAGAIN;
> > + goto out;
> > + }
> > +
> > + folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
> > + if (!folio || !folio_test_anon(folio) ||
> > + folio_test_large(folio) ||
> > + folio_estimated_sharers(folio) != 1) {
> > + spin_unlock(src_ptl);
> > + err = -EBUSY;
> > + goto out;
> > + }
> > +
> > + folio_get(folio);
> > + src_folio = folio;
> > + spin_unlock(src_ptl);
> > +
> > + /* block all concurrent rmap walks */
> > + if (!folio_trylock(src_folio)) {
> > + pte_unmap(&orig_src_pte);
> > + pte_unmap(&orig_dst_pte);
> > + src_pte = dst_pte = NULL;
> > + /* now we can block and wait */
> > + folio_lock(src_folio);
> > + goto retry;
> > + }
> > + }
> > +
> > + if (!src_anon_vma) {
>
> (And here, if we already saw a src_anon_vma but the trylock failed,
> we'll keep using that src_anon_vma.)

Ack. The check for previous_src_pte should handle that as well.

>
> > + /*
> > + * folio_referenced walks the anon_vma chain
> > + * without the folio lock. Serialize against it with
> > + * the anon_vma lock, the folio lock is not enough.
> > + */
> > + src_anon_vma = folio_get_anon_vma(src_folio);
> > + if (!src_anon_vma) {
> > + /* page was unmapped from under us */
> > + err = -EAGAIN;
> > + goto out;
> > + }
> > + if (!anon_vma_trylock_write(src_anon_vma)) {
> > + pte_unmap(&orig_src_pte);
> > + pte_unmap(&orig_dst_pte);
> > + src_pte = dst_pte = NULL;
> > + /* now we can block and wait */
> > + anon_vma_lock_write(src_anon_vma);
> > + goto retry;
> > + }
> > + }
>
> So at this point we have:
>
> - the current src_pte
> - some referenced+locked src_folio that used to be mapped exclusively
> at src_addr
> - (the anon_vma associated with the src_folio)
>
> > + err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
> > + dst_addr, src_addr, dst_pte, src_pte,
> > + orig_dst_pte, orig_src_pte,
> > + dst_ptl, src_ptl, src_folio);
>
> And then this will, without touching folio mapcounts/refcounts, delete
> the current PTE at src_addr, and create a PTE at dst_addr pointing to
> the old src_folio, leading to incorrect refcounts/mapcounts?

I assume this still points to the missing previous_src_pte check
discussed in the previous comments. Is that correct or is there yet
another issue?

>
> > + } else {
> [...]
> > + }
> > +
> > +out:
> > + if (src_anon_vma) {
> > + anon_vma_unlock_write(src_anon_vma);
> > + put_anon_vma(src_anon_vma);
> > + }
> > + if (src_folio) {
> > + folio_unlock(src_folio);
> > + folio_put(src_folio);
> > + }
> > + if (dst_pte)
> > + pte_unmap(dst_pte);
> > + if (src_pte)
> > + pte_unmap(src_pte);
> > + mmu_notifier_invalidate_range_end(&range);
> > +
> > + return err;
> > +}
> [...]
> > +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > + unsigned long dst_start, unsigned long src_start,
> > + unsigned long len, __u64 mode)
> > +{
> > + struct vm_area_struct *src_vma, *dst_vma;
> > + unsigned long src_addr, dst_addr;
> > + pmd_t *src_pmd, *dst_pmd;
> > + long err = -EINVAL;
> > + ssize_t moved = 0;
> > +
> > + /*
> > + * Sanitize the command parameters:
> > + */
> > + BUG_ON(src_start & ~PAGE_MASK);
> > + BUG_ON(dst_start & ~PAGE_MASK);
> > + BUG_ON(len & ~PAGE_MASK);
> > +
> > + /* Does the address range wrap, or is the span zero-sized? */
> > + BUG_ON(src_start + len <= src_start);
> > + BUG_ON(dst_start + len <= dst_start);
> > +
> > + /*
> > + * Because these are read sempahores there's no risk of lock
> > + * inversion.
> > + */
> > + mmap_read_lock(dst_mm);
> > + if (dst_mm != src_mm)
> > + mmap_read_lock(src_mm);
> > +
> > + /*
> > + * Make sure the vma is not shared, that the src and dst remap
> > + * ranges are both valid and fully within a single existing
> > + * vma.
> > + */
> > + src_vma = find_vma(src_mm, src_start);
> > + if (!src_vma || (src_vma->vm_flags & VM_SHARED))
> > + goto out;
> > + if (src_start < src_vma->vm_start ||
> > + src_start + len > src_vma->vm_end)
> > + goto out;
> > +
> > + dst_vma = find_vma(dst_mm, dst_start);
> > + if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
> > + goto out;
> > + if (dst_start < dst_vma->vm_start ||
> > + dst_start + len > dst_vma->vm_end)
> > + goto out;
> > +
> > + err = validate_remap_areas(src_vma, dst_vma);
> > + if (err)
> > + goto out;
> > +
> > + for (src_addr = src_start, dst_addr = dst_start;
> > + src_addr < src_start + len;) {
> > + spinlock_t *ptl;
> > + pmd_t dst_pmdval;
> > + unsigned long step_size;
> > +
> > + BUG_ON(dst_addr >= dst_start + len);
> > + /*
> > + * Below works because anonymous area would not have a
> > + * transparent huge PUD. If file-backed support is added,
> > + * that case would need to be handled here.
> > + */
> > + src_pmd = mm_find_pmd(src_mm, src_addr);
> > + if (unlikely(!src_pmd)) {
> > + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
> > + err = -ENOENT;
> > + break;
> > + }
> > + src_pmd = mm_alloc_pmd(src_mm, src_addr);
> > + if (unlikely(!src_pmd)) {
> > + err = -ENOMEM;
> > + break;
> > + }
> > + }
> > + dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
> > + if (unlikely(!dst_pmd)) {
> > + err = -ENOMEM;
> > + break;
> > + }
> > +
> > + dst_pmdval = pmdp_get_lockless(dst_pmd);
> > + /*
> > + * If the dst_pmd is mapped as THP don't override it and just
> > + * be strict. If dst_pmd changes into TPH after this check, the
> > + * remap_pages_huge_pmd() will detect the change and retry
> > + * while remap_pages_pte() will detect the change and fail.
> > + */
> > + if (unlikely(pmd_trans_huge(dst_pmdval))) {
> > + err = -EEXIST;
> > + break;
> > + }
> > +
> > + ptl = pmd_trans_huge_lock(src_pmd, src_vma);
> > + if (ptl && !pmd_trans_huge(*src_pmd)) {
> > + spin_unlock(ptl);
> > + ptl = NULL;
> > + }
>
> This still looks wrong - we do still have to split_huge_pmd()
> somewhere so that remap_pages_pte() works.

Hmm, I guess this extra check is not even needed...

>
> > + if (ptl) {
> > + /*
> > + * Check if we can move the pmd without
> > + * splitting it. First check the address
> > + * alignment to be the same in src/dst. These
> > + * checks don't actually need the PT lock but
> > + * it's good to do it here to optimize this
> > + * block away at build time if
> > + * CONFIG_TRANSPARENT_HUGEPAGE is not set.
> > + */
> > + if ((src_addr & ~HPAGE_PMD_MASK) || (dst_addr & ~HPAGE_PMD_MASK) ||
> > + src_start + len - src_addr < HPAGE_PMD_SIZE || !pmd_none(dst_pmdval)) {
> > + spin_unlock(ptl);
> > + split_huge_pmd(src_vma, src_pmd, src_addr);
> > + continue;
> > + }
> > +
> > + err = remap_pages_huge_pmd(dst_mm, src_mm,
> > + dst_pmd, src_pmd,
> > + dst_pmdval,
> > + dst_vma, src_vma,
> > + dst_addr, src_addr);
> > + step_size = HPAGE_PMD_SIZE;
> > + } else {
> > + if (pmd_none(*src_pmd)) {
> > + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
> > + err = -ENOENT;
> > + break;
> > + }
> > + if (unlikely(__pte_alloc(src_mm, src_pmd))) {
> > + err = -ENOMEM;
> > + break;
> > + }
> > + }
> > +
> > + if (unlikely(pte_alloc(dst_mm, dst_pmd))) {
> > + err = -ENOMEM;
> > + break;
> > + }
> > +
> > + err = remap_pages_pte(dst_mm, src_mm,
> > + dst_pmd, src_pmd,
> > + dst_vma, src_vma,
> > + dst_addr, src_addr,
> > + mode);
> > + step_size = PAGE_SIZE;
> > + }
> > +
> > + cond_resched();
> > +
> > + if (!err) {
> > + dst_addr += step_size;
> > + src_addr += step_size;
> > + moved += step_size;
> > + }
> > +
> > + if ((!err || err == -EAGAIN) &&
> > + fatal_signal_pending(current))
> > + err = -EINTR;
> > +
> > + if (err && err != -EAGAIN)
> > + break;
> > + }
> > +
> > +out:
> > + mmap_read_unlock(dst_mm);
> > + if (dst_mm != src_mm)
> > + mmap_read_unlock(src_mm);
> > + BUG_ON(moved < 0);
> > + BUG_ON(err > 0);
> > + BUG_ON(!moved && !err);
> > + return moved ? moved : err;
> > +}
> > --
> > 2.42.0.515.g380fc7ccd1-goog
> >

2023-09-28 01:21:51

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Wed, Sep 27, 2023 at 1:42 PM Suren Baghdasaryan <[email protected]> wrote:
>
> On Wed, Sep 27, 2023 at 1:04 PM Jann Horn <[email protected]> wrote:
> >
> > On Wed, Sep 27, 2023 at 8:08 PM Suren Baghdasaryan <[email protected]> wrote:
> > > On Wed, Sep 27, 2023 at 5:47 AM Jann Horn <[email protected]> wrote:
> > > > On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan <[email protected]> wrote:
> > > > > From: Andrea Arcangeli <[email protected]>
> > > > >
> > > > > This implements the uABI of UFFDIO_REMAP.
> > > > >
> > > > > Notably one mode bitflag is also forwarded (and in turn known) by the
> > > > > lowlevel remap_pages method.
> > > > >
> > > > > Signed-off-by: Andrea Arcangeli <[email protected]>
> > > > > Signed-off-by: Suren Baghdasaryan <[email protected]>
> > [...]
> > > > > + /*
> > > > > + * folio_referenced walks the anon_vma chain
> > > > > + * without the folio lock. Serialize against it with
> > > > > + * the anon_vma lock, the folio lock is not enough.
> > > > > + */
> > > > > + src_anon_vma = folio_get_anon_vma(src_folio);
> > > > > + if (!src_anon_vma) {
> > > > > + /* page was unmapped from under us */
> > > > > + err = -EAGAIN;
> > > > > + goto out;
> > > > > + }
> > > > > + if (!anon_vma_trylock_write(src_anon_vma)) {
> > > > > + pte_unmap(&orig_src_pte);
> > > > > + pte_unmap(&orig_dst_pte);
> > > > > + src_pte = dst_pte = NULL;
> > > > > + /* now we can block and wait */
> > > > > + anon_vma_lock_write(src_anon_vma);
> > > > > + goto retry;
> > > > > + }
> > > > > + }
> > > >
> > > > So at this point we have:
> > > >
> > > > - the current src_pte
> > > > - some referenced+locked src_folio that used to be mapped exclusively
> > > > at src_addr
> > > > - (the anon_vma associated with the src_folio)
> > > >
> > > > > + err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
> > > > > + dst_addr, src_addr, dst_pte, src_pte,
> > > > > + orig_dst_pte, orig_src_pte,
> > > > > + dst_ptl, src_ptl, src_folio);
> > > >
> > > > And then this will, without touching folio mapcounts/refcounts, delete
> > > > the current PTE at src_addr, and create a PTE at dst_addr pointing to
> > > > the old src_folio, leading to incorrect refcounts/mapcounts?
> > >
> > > I assume this still points to the missing previous_src_pte check
> > > discussed in the previous comments. Is that correct or is there yet
> > > another issue?
> >
> > This is still referring to the missing previous_src_pte check.
> >
> > > >
> > > > > + } else {
> > > > [...]
> > > > > + }
> > > > > +
> > > > > +out:
> > > > > + if (src_anon_vma) {
> > > > > + anon_vma_unlock_write(src_anon_vma);
> > > > > + put_anon_vma(src_anon_vma);
> > > > > + }
> > > > > + if (src_folio) {
> > > > > + folio_unlock(src_folio);
> > > > > + folio_put(src_folio);
> > > > > + }
> > > > > + if (dst_pte)
> > > > > + pte_unmap(dst_pte);
> > > > > + if (src_pte)
> > > > > + pte_unmap(src_pte);
> > > > > + mmu_notifier_invalidate_range_end(&range);
> > > > > +
> > > > > + return err;
> > > > > +}
> > > > [...]
> > > > > +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > > > + unsigned long dst_start, unsigned long src_start,
> > > > > + unsigned long len, __u64 mode)
> > > > > +{
> > > > > + struct vm_area_struct *src_vma, *dst_vma;
> > > > > + unsigned long src_addr, dst_addr;
> > > > > + pmd_t *src_pmd, *dst_pmd;
> > > > > + long err = -EINVAL;
> > > > > + ssize_t moved = 0;
> > > > > +
> > > > > + /*
> > > > > + * Sanitize the command parameters:
> > > > > + */
> > > > > + BUG_ON(src_start & ~PAGE_MASK);
> > > > > + BUG_ON(dst_start & ~PAGE_MASK);
> > > > > + BUG_ON(len & ~PAGE_MASK);
> > > > > +
> > > > > + /* Does the address range wrap, or is the span zero-sized? */
> > > > > + BUG_ON(src_start + len <= src_start);
> > > > > + BUG_ON(dst_start + len <= dst_start);
> > > > > +
> > > > > + /*
> > > > > + * Because these are read sempahores there's no risk of lock
> > > > > + * inversion.
> > > > > + */
> > > > > + mmap_read_lock(dst_mm);
> > > > > + if (dst_mm != src_mm)
> > > > > + mmap_read_lock(src_mm);
> > > > > +
> > > > > + /*
> > > > > + * Make sure the vma is not shared, that the src and dst remap
> > > > > + * ranges are both valid and fully within a single existing
> > > > > + * vma.
> > > > > + */
> > > > > + src_vma = find_vma(src_mm, src_start);
> > > > > + if (!src_vma || (src_vma->vm_flags & VM_SHARED))
> > > > > + goto out;
> > > > > + if (src_start < src_vma->vm_start ||
> > > > > + src_start + len > src_vma->vm_end)
> > > > > + goto out;
> > > > > +
> > > > > + dst_vma = find_vma(dst_mm, dst_start);
> > > > > + if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
> > > > > + goto out;
> > > > > + if (dst_start < dst_vma->vm_start ||
> > > > > + dst_start + len > dst_vma->vm_end)
> > > > > + goto out;
> > > > > +
> > > > > + err = validate_remap_areas(src_vma, dst_vma);
> > > > > + if (err)
> > > > > + goto out;
> > > > > +
> > > > > + for (src_addr = src_start, dst_addr = dst_start;
> > > > > + src_addr < src_start + len;) {
> > > > > + spinlock_t *ptl;
> > > > > + pmd_t dst_pmdval;
> > > > > + unsigned long step_size;
> > > > > +
> > > > > + BUG_ON(dst_addr >= dst_start + len);
> > > > > + /*
> > > > > + * Below works because anonymous area would not have a
> > > > > + * transparent huge PUD. If file-backed support is added,
> > > > > + * that case would need to be handled here.
> > > > > + */
> > > > > + src_pmd = mm_find_pmd(src_mm, src_addr);
> > > > > + if (unlikely(!src_pmd)) {
> > > > > + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
> > > > > + err = -ENOENT;
> > > > > + break;
> > > > > + }
> > > > > + src_pmd = mm_alloc_pmd(src_mm, src_addr);
> > > > > + if (unlikely(!src_pmd)) {
> > > > > + err = -ENOMEM;
> > > > > + break;
> > > > > + }
> > > > > + }
> > > > > + dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
> > > > > + if (unlikely(!dst_pmd)) {
> > > > > + err = -ENOMEM;
> > > > > + break;
> > > > > + }
> > > > > +
> > > > > + dst_pmdval = pmdp_get_lockless(dst_pmd);
> > > > > + /*
> > > > > + * If the dst_pmd is mapped as THP don't override it and just
> > > > > + * be strict. If dst_pmd changes into TPH after this check, the
> > > > > + * remap_pages_huge_pmd() will detect the change and retry
> > > > > + * while remap_pages_pte() will detect the change and fail.
> > > > > + */
> > > > > + if (unlikely(pmd_trans_huge(dst_pmdval))) {
> > > > > + err = -EEXIST;
> > > > > + break;
> > > > > + }
> > > > > +
> > > > > + ptl = pmd_trans_huge_lock(src_pmd, src_vma);
> > > > > + if (ptl && !pmd_trans_huge(*src_pmd)) {
> > > > > + spin_unlock(ptl);
> > > > > + ptl = NULL;
> > > > > + }
> > > >
> > > > This still looks wrong - we do still have to split_huge_pmd()
> > > > somewhere so that remap_pages_pte() works.
> > >
> > > Hmm, I guess this extra check is not even needed...
> >
> > Hm, and instead we'd bail at the pte_offset_map_nolock() in
> > remap_pages_pte()? I guess that's unusual but works...
>
> Yes, that's what I was thinking but I agree, that seems fragile. Maybe
> just bail out early if (ptl && !pmd_trans_huge())?

No, actually we can still handle is_swap_pmd() case by splitting it
and remapping the individual ptes. So, I can bail out only in case of
pmd_devmap().


>
> >
> > (It would be a thing to look out for if anyone tried to backport this,
> > since the checks in pte_offset_map_nolock() were only introduced in
> > 6.5, but idk if anyone's doing that)

2023-09-28 02:17:48

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Wed, Sep 27, 2023 at 6:29 AM David Hildenbrand <[email protected]> wrote:
>
> >> +static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> >> + struct vm_area_struct *dst_vma,
> >> + struct vm_area_struct *src_vma,
> >> + unsigned long dst_addr, unsigned long src_addr,
> >> + pte_t *dst_pte, pte_t *src_pte,
> >> + pte_t orig_dst_pte, pte_t orig_src_pte,
> >> + spinlock_t *dst_ptl, spinlock_t *src_ptl,
> >> + struct folio *src_folio)
> >> +{
> >> + struct anon_vma *dst_anon_vma;
> >> +
> >> + double_pt_lock(dst_ptl, src_ptl);
> >> +
> >> + if (!pte_same(*src_pte, orig_src_pte) ||
> >> + !pte_same(*dst_pte, orig_dst_pte) ||
> >> + folio_test_large(src_folio) ||
> >> + folio_estimated_sharers(src_folio) != 1) {
>
> ^ here you should check PageAnonExclusive. Please get rid of any
> implicit explicit/implcit mapcount checks.

Ack.

>
> >> + double_pt_unlock(dst_ptl, src_ptl);
> >> + return -EAGAIN;
> >> + }
> >> +
> >> + BUG_ON(!folio_test_anon(src_folio));
> >> +
> >> + dst_anon_vma = (void *)dst_vma->anon_vma + PAGE_MAPPING_ANON;
> >> + WRITE_ONCE(src_folio->mapping,
> >> + (struct address_space *) dst_anon_vma);
>
> I have some cleanups pending for page_move_anon_rmap(), that moves the
> SetPageAnonExclusive hunk out. Here we should be using
> page_move_anon_rmap() [or rather, folio_move_anon_rmap() after my cleanups]
>
> I'll send them out soonish.

Should I keep this as is in my next version until you post the
cleanups? I can add a TODO comment to convert it to
folio_move_anon_rmap() once it's ready.

>
> >> + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
> >> + dst_addr)); >> +
> >> + orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
> >> + orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
> >> + orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
> >> + dst_vma);
> >
> > I think there's still a theoretical issue here that you could fix by
> > checking for the AnonExclusive flag, similar to the huge page case.
> >
> > Consider the following scenario:
> >
> > 1. process P1 does a write fault in a private anonymous VMA, creating
> > and mapping a new anonymous page A1
> > 2. process P1 forks and creates two children P2 and P3. afterwards, A1
> > is mapped in P1, P2 and P3 as a COW page, with mapcount 3.
> > 3. process P1 removes its mapping of A1, dropping its mapcount to 2.
> > 4. process P2 uses vmsplice() to grab a reference to A1 with get_user_pages()
> > 5. process P2 removes its mapping of A1, dropping its mapcount to 1.
> >
> > If at this point P3 does a write fault on its mapping of A1, it will
> > still trigger copy-on-write thanks to the AnonExclusive mechanism; and
> > this is necessary to avoid P3 mapping A1 as writable and writing data
> > into it that will become visible to P2, if P2 and P3 are in different
> > security contexts.
> >
> > But if P3 instead moves its mapping of A1 to another address with
> > remap_anon_pte() which only does a page mapcount check, the
> > maybe_mkwrite() will directly make the mapping writable, circumventing
> > the AnonExclusive mechanism.
> >
>
> Yes, can_change_pte_writable() contains the exact logic when we can turn
> something easily writable even if it wasn't writable before. which
> includes that PageAnonExclusive is set. (but with uffd-wp or softdirty
> tracking, there is more to consider)

For uffd_remap can_change_pte_writable() would fail it VM_WRITE is not
set, but we want remapping to work for RO memory as well. Are you
saying that a PageAnonExclusive() check alone would not be enough
here?

Thanks,
Suren.

>
> --
> Cheers,
>
> David / dhildenb
>

2023-09-28 03:21:52

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Wed, Sep 27, 2023 at 1:04 PM Jann Horn <[email protected]> wrote:
>
> On Wed, Sep 27, 2023 at 8:08 PM Suren Baghdasaryan <[email protected]> wrote:
> > On Wed, Sep 27, 2023 at 5:47 AM Jann Horn <[email protected]> wrote:
> > > On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan <[email protected]> wrote:
> > > > From: Andrea Arcangeli <[email protected]>
> > > >
> > > > This implements the uABI of UFFDIO_REMAP.
> > > >
> > > > Notably one mode bitflag is also forwarded (and in turn known) by the
> > > > lowlevel remap_pages method.
> > > >
> > > > Signed-off-by: Andrea Arcangeli <[email protected]>
> > > > Signed-off-by: Suren Baghdasaryan <[email protected]>
> [...]
> > > > + /*
> > > > + * folio_referenced walks the anon_vma chain
> > > > + * without the folio lock. Serialize against it with
> > > > + * the anon_vma lock, the folio lock is not enough.
> > > > + */
> > > > + src_anon_vma = folio_get_anon_vma(src_folio);
> > > > + if (!src_anon_vma) {
> > > > + /* page was unmapped from under us */
> > > > + err = -EAGAIN;
> > > > + goto out;
> > > > + }
> > > > + if (!anon_vma_trylock_write(src_anon_vma)) {
> > > > + pte_unmap(&orig_src_pte);
> > > > + pte_unmap(&orig_dst_pte);
> > > > + src_pte = dst_pte = NULL;
> > > > + /* now we can block and wait */
> > > > + anon_vma_lock_write(src_anon_vma);
> > > > + goto retry;
> > > > + }
> > > > + }
> > >
> > > So at this point we have:
> > >
> > > - the current src_pte
> > > - some referenced+locked src_folio that used to be mapped exclusively
> > > at src_addr
> > > - (the anon_vma associated with the src_folio)
> > >
> > > > + err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
> > > > + dst_addr, src_addr, dst_pte, src_pte,
> > > > + orig_dst_pte, orig_src_pte,
> > > > + dst_ptl, src_ptl, src_folio);
> > >
> > > And then this will, without touching folio mapcounts/refcounts, delete
> > > the current PTE at src_addr, and create a PTE at dst_addr pointing to
> > > the old src_folio, leading to incorrect refcounts/mapcounts?
> >
> > I assume this still points to the missing previous_src_pte check
> > discussed in the previous comments. Is that correct or is there yet
> > another issue?
>
> This is still referring to the missing previous_src_pte check.
>
> > >
> > > > + } else {
> > > [...]
> > > > + }
> > > > +
> > > > +out:
> > > > + if (src_anon_vma) {
> > > > + anon_vma_unlock_write(src_anon_vma);
> > > > + put_anon_vma(src_anon_vma);
> > > > + }
> > > > + if (src_folio) {
> > > > + folio_unlock(src_folio);
> > > > + folio_put(src_folio);
> > > > + }
> > > > + if (dst_pte)
> > > > + pte_unmap(dst_pte);
> > > > + if (src_pte)
> > > > + pte_unmap(src_pte);
> > > > + mmu_notifier_invalidate_range_end(&range);
> > > > +
> > > > + return err;
> > > > +}
> > > [...]
> > > > +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > > > + unsigned long dst_start, unsigned long src_start,
> > > > + unsigned long len, __u64 mode)
> > > > +{
> > > > + struct vm_area_struct *src_vma, *dst_vma;
> > > > + unsigned long src_addr, dst_addr;
> > > > + pmd_t *src_pmd, *dst_pmd;
> > > > + long err = -EINVAL;
> > > > + ssize_t moved = 0;
> > > > +
> > > > + /*
> > > > + * Sanitize the command parameters:
> > > > + */
> > > > + BUG_ON(src_start & ~PAGE_MASK);
> > > > + BUG_ON(dst_start & ~PAGE_MASK);
> > > > + BUG_ON(len & ~PAGE_MASK);
> > > > +
> > > > + /* Does the address range wrap, or is the span zero-sized? */
> > > > + BUG_ON(src_start + len <= src_start);
> > > > + BUG_ON(dst_start + len <= dst_start);
> > > > +
> > > > + /*
> > > > + * Because these are read sempahores there's no risk of lock
> > > > + * inversion.
> > > > + */
> > > > + mmap_read_lock(dst_mm);
> > > > + if (dst_mm != src_mm)
> > > > + mmap_read_lock(src_mm);
> > > > +
> > > > + /*
> > > > + * Make sure the vma is not shared, that the src and dst remap
> > > > + * ranges are both valid and fully within a single existing
> > > > + * vma.
> > > > + */
> > > > + src_vma = find_vma(src_mm, src_start);
> > > > + if (!src_vma || (src_vma->vm_flags & VM_SHARED))
> > > > + goto out;
> > > > + if (src_start < src_vma->vm_start ||
> > > > + src_start + len > src_vma->vm_end)
> > > > + goto out;
> > > > +
> > > > + dst_vma = find_vma(dst_mm, dst_start);
> > > > + if (!dst_vma || (dst_vma->vm_flags & VM_SHARED))
> > > > + goto out;
> > > > + if (dst_start < dst_vma->vm_start ||
> > > > + dst_start + len > dst_vma->vm_end)
> > > > + goto out;
> > > > +
> > > > + err = validate_remap_areas(src_vma, dst_vma);
> > > > + if (err)
> > > > + goto out;
> > > > +
> > > > + for (src_addr = src_start, dst_addr = dst_start;
> > > > + src_addr < src_start + len;) {
> > > > + spinlock_t *ptl;
> > > > + pmd_t dst_pmdval;
> > > > + unsigned long step_size;
> > > > +
> > > > + BUG_ON(dst_addr >= dst_start + len);
> > > > + /*
> > > > + * Below works because anonymous area would not have a
> > > > + * transparent huge PUD. If file-backed support is added,
> > > > + * that case would need to be handled here.
> > > > + */
> > > > + src_pmd = mm_find_pmd(src_mm, src_addr);
> > > > + if (unlikely(!src_pmd)) {
> > > > + if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES)) {
> > > > + err = -ENOENT;
> > > > + break;
> > > > + }
> > > > + src_pmd = mm_alloc_pmd(src_mm, src_addr);
> > > > + if (unlikely(!src_pmd)) {
> > > > + err = -ENOMEM;
> > > > + break;
> > > > + }
> > > > + }
> > > > + dst_pmd = mm_alloc_pmd(dst_mm, dst_addr);
> > > > + if (unlikely(!dst_pmd)) {
> > > > + err = -ENOMEM;
> > > > + break;
> > > > + }
> > > > +
> > > > + dst_pmdval = pmdp_get_lockless(dst_pmd);
> > > > + /*
> > > > + * If the dst_pmd is mapped as THP don't override it and just
> > > > + * be strict. If dst_pmd changes into TPH after this check, the
> > > > + * remap_pages_huge_pmd() will detect the change and retry
> > > > + * while remap_pages_pte() will detect the change and fail.
> > > > + */
> > > > + if (unlikely(pmd_trans_huge(dst_pmdval))) {
> > > > + err = -EEXIST;
> > > > + break;
> > > > + }
> > > > +
> > > > + ptl = pmd_trans_huge_lock(src_pmd, src_vma);
> > > > + if (ptl && !pmd_trans_huge(*src_pmd)) {
> > > > + spin_unlock(ptl);
> > > > + ptl = NULL;
> > > > + }
> > >
> > > This still looks wrong - we do still have to split_huge_pmd()
> > > somewhere so that remap_pages_pte() works.
> >
> > Hmm, I guess this extra check is not even needed...
>
> Hm, and instead we'd bail at the pte_offset_map_nolock() in
> remap_pages_pte()? I guess that's unusual but works...

Yes, that's what I was thinking but I agree, that seems fragile. Maybe
just bail out early if (ptl && !pmd_trans_huge())?

>
> (It would be a thing to look out for if anyone tried to backport this,
> since the checks in pte_offset_map_nolock() were only introduced in
> 6.5, but idk if anyone's doing that)

2023-09-28 06:45:48

by Jann Horn

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Wed, Sep 27, 2023 at 11:08 PM Suren Baghdasaryan <[email protected]> wrote:
> On Wed, Sep 27, 2023 at 1:42 PM Suren Baghdasaryan <[email protected]> wrote:
> >
> > On Wed, Sep 27, 2023 at 1:04 PM Jann Horn <[email protected]> wrote:
> > >
> > > On Wed, Sep 27, 2023 at 8:08 PM Suren Baghdasaryan <[email protected]> wrote:
> > > > On Wed, Sep 27, 2023 at 5:47 AM Jann Horn <[email protected]> wrote:
> > > > > On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan <[email protected]> wrote:
> > > > > > + dst_pmdval = pmdp_get_lockless(dst_pmd);
> > > > > > + /*
> > > > > > + * If the dst_pmd is mapped as THP don't override it and just
> > > > > > + * be strict. If dst_pmd changes into TPH after this check, the
> > > > > > + * remap_pages_huge_pmd() will detect the change and retry
> > > > > > + * while remap_pages_pte() will detect the change and fail.
> > > > > > + */
> > > > > > + if (unlikely(pmd_trans_huge(dst_pmdval))) {
> > > > > > + err = -EEXIST;
> > > > > > + break;
> > > > > > + }
> > > > > > +
> > > > > > + ptl = pmd_trans_huge_lock(src_pmd, src_vma);
> > > > > > + if (ptl && !pmd_trans_huge(*src_pmd)) {
> > > > > > + spin_unlock(ptl);
> > > > > > + ptl = NULL;
> > > > > > + }
> > > > >
> > > > > This still looks wrong - we do still have to split_huge_pmd()
> > > > > somewhere so that remap_pages_pte() works.
> > > >
> > > > Hmm, I guess this extra check is not even needed...
> > >
> > > Hm, and instead we'd bail at the pte_offset_map_nolock() in
> > > remap_pages_pte()? I guess that's unusual but works...
> >
> > Yes, that's what I was thinking but I agree, that seems fragile. Maybe
> > just bail out early if (ptl && !pmd_trans_huge())?
>
> No, actually we can still handle is_swap_pmd() case by splitting it
> and remapping the individual ptes. So, I can bail out only in case of
> pmd_devmap().

FWIW I only learned today that "real" swap PMDs don't actually exist -
only migration entries, which are encoded as swap PMDs, exist. You can
see that when you look through the cases that something like
__split_huge_pmd() or zap_pmd_range() actually handles.

So I think if you wanted to handle all the PMD types properly here
without splitting, you could do that without _too_ much extra code.
But idk if it's worth it.

2023-09-28 17:24:01

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On 27.09.23 20:25, Suren Baghdasaryan wrote:
>>
>> I have some cleanups pending for page_move_anon_rmap(), that moves the
>> SetPageAnonExclusive hunk out. Here we should be using
>> page_move_anon_rmap() [or rather, folio_move_anon_rmap() after my cleanups]
>>
>> I'll send them out soonish.
>
> Should I keep this as is in my next version until you post the
> cleanups? I can add a TODO comment to convert it to
> folio_move_anon_rmap() once it's ready.

You should just be able to use page_move_anon_rmap() and whatever gets
in first cleans it up :)

>
>>
>>>> + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
>>>> + dst_addr)); >> +
>>>> + orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
>>>> + orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
>>>> + orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
>>>> + dst_vma);
>>>
>>> I think there's still a theoretical issue here that you could fix by
>>> checking for the AnonExclusive flag, similar to the huge page case.
>>>
>>> Consider the following scenario:
>>>
>>> 1. process P1 does a write fault in a private anonymous VMA, creating
>>> and mapping a new anonymous page A1
>>> 2. process P1 forks and creates two children P2 and P3. afterwards, A1
>>> is mapped in P1, P2 and P3 as a COW page, with mapcount 3.
>>> 3. process P1 removes its mapping of A1, dropping its mapcount to 2.
>>> 4. process P2 uses vmsplice() to grab a reference to A1 with get_user_pages()
>>> 5. process P2 removes its mapping of A1, dropping its mapcount to 1.
>>>
>>> If at this point P3 does a write fault on its mapping of A1, it will
>>> still trigger copy-on-write thanks to the AnonExclusive mechanism; and
>>> this is necessary to avoid P3 mapping A1 as writable and writing data
>>> into it that will become visible to P2, if P2 and P3 are in different
>>> security contexts.
>>>
>>> But if P3 instead moves its mapping of A1 to another address with
>>> remap_anon_pte() which only does a page mapcount check, the
>>> maybe_mkwrite() will directly make the mapping writable, circumventing
>>> the AnonExclusive mechanism.
>>>
>>
>> Yes, can_change_pte_writable() contains the exact logic when we can turn
>> something easily writable even if it wasn't writable before. which
>> includes that PageAnonExclusive is set. (but with uffd-wp or softdirty
>> tracking, there is more to consider)
>
> For uffd_remap can_change_pte_writable() would fail it VM_WRITE is not
> set, but we want remapping to work for RO memory as well. Are you

In a VMA without VM_WRITE you certainly wouldn't want to make PTEs
writable :) That's why that function just does a sanity check that it is
not called in strange context. So one would only call it if VM_WRITE is set.

> saying that a PageAnonExclusive() check alone would not be enough
> here?

There are some interesting questions to ask here:

1) What happens if the old VMA has VM_SOFTDIRTY set but the new one not?
You most probably have to mark the PTE softdirty and not make it writable.

2) VM_UFFD_WP requires similar care I assume? Peter might know.

--
Cheers,

David / dhildenb

2023-09-28 17:34:33

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Wed, Sep 27, 2023 at 03:29:35PM +0200, David Hildenbrand wrote:
> > > + if (!pte_same(*src_pte, orig_src_pte) ||
> > > + !pte_same(*dst_pte, orig_dst_pte) ||
> > > + folio_test_large(src_folio) ||
> > > + folio_estimated_sharers(src_folio) != 1) {
>
> ^ here you should check PageAnonExclusive. Please get rid of any implicit
> explicit/implcit mapcount checks.

David, is PageAnon 100% accurate now in the current tree?

IOW, can it be possible that the page has total_mapcount==1 but missing
AnonExclusive bit in any possible way?

Thanks,

--
Peter Xu

2023-09-28 18:39:01

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Thu, Sep 28, 2023 at 10:09 AM Peter Xu <[email protected]> wrote:
>
> On Fri, Sep 22, 2023 at 06:31:45PM -0700, Suren Baghdasaryan wrote:
> > @@ -72,6 +73,7 @@
> > #define _UFFDIO_WAKE (0x02)
> > #define _UFFDIO_COPY (0x03)
> > #define _UFFDIO_ZEROPAGE (0x04)
> > +#define _UFFDIO_REMAP (0x05)
> > #define _UFFDIO_WRITEPROTECT (0x06)
> > #define _UFFDIO_CONTINUE (0x07)
> > #define _UFFDIO_POISON (0x08)
>
> Might be good to add a feature bit (UFFD_FEATURE_REMAP) for userspace to
> probe?

Ack.

>
> IIUC the whole remap feature was proposed at the birth of uffd even before
> COPY, but now we have tons of old kernels who will not support it.
>
> [...]
>
> > +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> > + struct vm_area_struct *dst_vma,
> > + struct vm_area_struct *src_vma,
> > + unsigned long dst_addr, unsigned long src_addr)
> > +{
> > + pmd_t _dst_pmd, src_pmdval;
> > + struct page *src_page;
> > + struct folio *src_folio;
> > + struct anon_vma *src_anon_vma, *dst_anon_vma;
> > + spinlock_t *src_ptl, *dst_ptl;
> > + pgtable_t src_pgtable, dst_pgtable;
> > + struct mmu_notifier_range range;
> > + int err = 0;
> > +
> > + src_pmdval = *src_pmd;
> > + src_ptl = pmd_lockptr(src_mm, src_pmd);
> > +
> > + BUG_ON(!spin_is_locked(src_ptl));
> > + mmap_assert_locked(src_mm);
> > + mmap_assert_locked(dst_mm);
> > +
> > + BUG_ON(!pmd_trans_huge(src_pmdval));
> > + BUG_ON(!pmd_none(dst_pmdval));
> > + BUG_ON(src_addr & ~HPAGE_PMD_MASK);
> > + BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
> > +
> > + src_page = pmd_page(src_pmdval);
> > + if (unlikely(!PageAnonExclusive(src_page))) {
> > + spin_unlock(src_ptl);
> > + return -EBUSY;
> > + }
> > +
> > + src_folio = page_folio(src_page);
> > + folio_get(src_folio);
> > + spin_unlock(src_ptl);
> > +
> > + /* preallocate dst_pgtable if needed */
> > + if (dst_mm != src_mm) {
> > + dst_pgtable = pte_alloc_one(dst_mm);
> > + if (unlikely(!dst_pgtable)) {
> > + err = -ENOMEM;
> > + goto put_folio;
> > + }
> > + } else {
> > + dst_pgtable = NULL;
> > + }
> > +
> > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
> > + src_addr + HPAGE_PMD_SIZE);
> > + mmu_notifier_invalidate_range_start(&range);
> > +
> > + /* block all concurrent rmap walks */
>
> This is not accurate either I think. Maybe we can do "s/all/most/", or
> just drop it (assuming the detailed and accurate version of documentation
> lies above remap_pages() regarding to REMAP locking)?

Yes, comments from the original patch need to be rechecked and I'm
sure I missed some new rules. Thanks for pointing this one out! I'll
drop it.

>
> > + folio_lock(src_folio);
>
> [...]
>
>
> > +static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > + struct vm_area_struct *dst_vma,
> > + struct vm_area_struct *src_vma,
> > + unsigned long dst_addr, unsigned long src_addr,
> > + pte_t *dst_pte, pte_t *src_pte,
> > + pte_t orig_dst_pte, pte_t orig_src_pte,
> > + spinlock_t *dst_ptl, spinlock_t *src_ptl,
> > + struct folio *src_folio)
>
> remap_present_pte?

Sounds good.

>
> [...]
>
> > +/**
> > + * remap_pages - remap arbitrary anonymous pages of an existing vma
> > + * @dst_start: start of the destination virtual memory range
> > + * @src_start: start of the source virtual memory range
> > + * @len: length of the virtual memory range
> > + *
> > + * remap_pages() remaps arbitrary anonymous pages atomically in zero
> > + * copy. It only works on non shared anonymous pages because those can
> > + * be relocated without generating non linear anon_vmas in the rmap
> > + * code.
> > + *
> > + * It provides a zero copy mechanism to handle userspace page faults.
> > + * The source vma pages should have mapcount == 1, which can be
> > + * enforced by using madvise(MADV_DONTFORK) on src vma.
> > + *
> > + * The thread receiving the page during the userland page fault
> > + * will receive the faulting page in the source vma through the network,
> > + * storage or any other I/O device (MADV_DONTFORK in the source vma
> > + * avoids remap_pages() to fail with -EBUSY if the process forks before
> > + * remap_pages() is called), then it will call remap_pages() to map the
> > + * page in the faulting address in the destination vma.
> > + *
> > + * This userfaultfd command works purely via pagetables, so it's the
> > + * most efficient way to move physical non shared anonymous pages
> > + * across different virtual addresses. Unlike mremap()/mmap()/munmap()
> > + * it does not create any new vmas. The mapping in the destination
> > + * address is atomic.
> > + *
> > + * It only works if the vma protection bits are identical from the
> > + * source and destination vma.
> > + *
> > + * It can remap non shared anonymous pages within the same vma too.
> > + *
> > + * If the source virtual memory range has any unmapped holes, or if
> > + * the destination virtual memory range is not a whole unmapped hole,
> > + * remap_pages() will fail respectively with -ENOENT or -EEXIST. This
> > + * provides a very strict behavior to avoid any chance of memory
> > + * corruption going unnoticed if there are userland race conditions.
> > + * Only one thread should resolve the userland page fault at any given
> > + * time for any given faulting address. This means that if two threads
> > + * try to both call remap_pages() on the same destination address at the
> > + * same time, the second thread will get an explicit error from this
> > + * command.
> > + *
> > + * The command retval will return "len" is successful. The command
> > + * however can be interrupted by fatal signals or errors. If
> > + * interrupted it will return the number of bytes successfully
> > + * remapped before the interruption if any, or the negative error if
> > + * none. It will never return zero. Either it will return an error or
> > + * an amount of bytes successfully moved. If the retval reports a
> > + * "short" remap, the remap_pages() command should be repeated by
> > + * userland with src+retval, dst+reval, len-retval if it wants to know
> > + * about the error that interrupted it.
> > + *
> > + * The UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES flag can be specified to
> > + * prevent -ENOENT errors to materialize if there are holes in the
> > + * source virtual range that is being remapped. The holes will be
> > + * accounted as successfully remapped in the retval of the
> > + * command. This is mostly useful to remap hugepage naturally aligned
> > + * virtual regions without knowing if there are transparent hugepage
> > + * in the regions or not, but preventing the risk of having to split
> > + * the hugepmd during the remap.
> > + *
> > + * If there's any rmap walk that is taking the anon_vma locks without
> > + * first obtaining the folio lock (for example split_huge_page and
> > + * folio_referenced), they will have to verify if the folio->mapping
>
> Hmm, this sentence seems to be not 100% accurate, perhaps not anymore?
>
> As split_huge_page() should need the folio lock and it'll serialize with
> REMAP with the folio lock too. It seems to me only folio_referenced() is
> the outlier so far, and that's covered by patch 1.
>
> I did also check other users of folio_get_anon_vma() (similar to use case
> of split_huge_page()) and they're all with the folio lock held, so we
> should be good.
>
> In summary, perhaps:
>
> - Drop split_huge_page() example here?
>
> - Should we document above folio_get_anon_vma() about this specialty due
> to UFFDIO_REMAP? I'm thiking something like:
>
> + *
> + * NOTE: the caller should normally hold folio lock when calling this. If
> + * not, the caller needs to double check the anon_vma didn't change after
> + * taking the anon_vma lock for either read or write (UFFDIO_REMAP can
> + * modify it concurrently without folio lock protection). See
> + * folio_lock_anon_vma_read() which has already covered that, and comment
> + * above remap_pages().
> */
> struct anon_vma *folio_get_anon_vma(struct folio *folio)
> {
> ...
> }

Ack. Will fix the remap_pages description and add the comment for
folio_get_anon_vma.

>
> > + * has changed after taking the anon_vma lock. If it changed they
> > + * should release the lock and retry obtaining a new anon_vma, because
> > + * it means the anon_vma was changed by remap_pages() before the lock
> > + * could be obtained. This is the only additional complexity added to
> > + * the rmap code to provide this anonymous page remapping functionality.
> > + */
> > +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > + unsigned long dst_start, unsigned long src_start,
> > + unsigned long len, __u64 mode)
> > +{
>
> [...]
>
> > + if (!err) {
> > + dst_addr += step_size;
> > + src_addr += step_size;
> > + moved += step_size;
> > + }
> > +
> > + if ((!err || err == -EAGAIN) &&
> > + fatal_signal_pending(current))
> > + err = -EINTR;
> > +
> > + if (err && err != -EAGAIN)
> > + break;
>
> The err handling is slightly harder to read. I tried to rewrite it like
> this:
>
> switch (err) {
> case 0:
> dst_addr += step_size;
> src_addr += step_size;
> moved += step_size;
> /* fall through */
> case -EAGAIN:
> if (fatal_signal_pending(current)) {
> err = -EINTR;
> goto out;
> }
> /* Continue with the loop */
> break;
> default:
> goto out;
> }
>
> Not super good but maybe slightly better? No strong opinions, but if you
> agree that looks better we can use it.

Agree that this should be improved. Let me see if I can find a cleaner
way to handle these errors.

>
> > + }
> > +
> > +out:
> > + mmap_read_unlock(dst_mm);
> > + if (dst_mm != src_mm)
> > + mmap_read_unlock(src_mm);
> > + BUG_ON(moved < 0);
> > + BUG_ON(err > 0);
> > + BUG_ON(!moved && !err);
> > + return moved ? moved : err;
> > +}
>
> I think for the rest I'll read the new version (e.g. I saw discussion on
> proper handling of pmd swap entries, which is not yet addressed, but
> probably will in the next one).

Appreciate your feedback!
Thanks,
Suren.

>
> Thanks,
>
> --
> Peter Xu
>
> --
> To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
>

2023-09-28 19:32:50

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Fri, Sep 22, 2023 at 06:31:45PM -0700, Suren Baghdasaryan wrote:
> @@ -72,6 +73,7 @@
> #define _UFFDIO_WAKE (0x02)
> #define _UFFDIO_COPY (0x03)
> #define _UFFDIO_ZEROPAGE (0x04)
> +#define _UFFDIO_REMAP (0x05)
> #define _UFFDIO_WRITEPROTECT (0x06)
> #define _UFFDIO_CONTINUE (0x07)
> #define _UFFDIO_POISON (0x08)

Might be good to add a feature bit (UFFD_FEATURE_REMAP) for userspace to
probe?

IIUC the whole remap feature was proposed at the birth of uffd even before
COPY, but now we have tons of old kernels who will not support it.

[...]

> +int remap_pages_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + pmd_t *dst_pmd, pmd_t *src_pmd, pmd_t dst_pmdval,
> + struct vm_area_struct *dst_vma,
> + struct vm_area_struct *src_vma,
> + unsigned long dst_addr, unsigned long src_addr)
> +{
> + pmd_t _dst_pmd, src_pmdval;
> + struct page *src_page;
> + struct folio *src_folio;
> + struct anon_vma *src_anon_vma, *dst_anon_vma;
> + spinlock_t *src_ptl, *dst_ptl;
> + pgtable_t src_pgtable, dst_pgtable;
> + struct mmu_notifier_range range;
> + int err = 0;
> +
> + src_pmdval = *src_pmd;
> + src_ptl = pmd_lockptr(src_mm, src_pmd);
> +
> + BUG_ON(!spin_is_locked(src_ptl));
> + mmap_assert_locked(src_mm);
> + mmap_assert_locked(dst_mm);
> +
> + BUG_ON(!pmd_trans_huge(src_pmdval));
> + BUG_ON(!pmd_none(dst_pmdval));
> + BUG_ON(src_addr & ~HPAGE_PMD_MASK);
> + BUG_ON(dst_addr & ~HPAGE_PMD_MASK);
> +
> + src_page = pmd_page(src_pmdval);
> + if (unlikely(!PageAnonExclusive(src_page))) {
> + spin_unlock(src_ptl);
> + return -EBUSY;
> + }
> +
> + src_folio = page_folio(src_page);
> + folio_get(src_folio);
> + spin_unlock(src_ptl);
> +
> + /* preallocate dst_pgtable if needed */
> + if (dst_mm != src_mm) {
> + dst_pgtable = pte_alloc_one(dst_mm);
> + if (unlikely(!dst_pgtable)) {
> + err = -ENOMEM;
> + goto put_folio;
> + }
> + } else {
> + dst_pgtable = NULL;
> + }
> +
> + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm, src_addr,
> + src_addr + HPAGE_PMD_SIZE);
> + mmu_notifier_invalidate_range_start(&range);
> +
> + /* block all concurrent rmap walks */

This is not accurate either I think. Maybe we can do "s/all/most/", or
just drop it (assuming the detailed and accurate version of documentation
lies above remap_pages() regarding to REMAP locking)?

> + folio_lock(src_folio);

[...]


> +static int remap_anon_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + struct vm_area_struct *dst_vma,
> + struct vm_area_struct *src_vma,
> + unsigned long dst_addr, unsigned long src_addr,
> + pte_t *dst_pte, pte_t *src_pte,
> + pte_t orig_dst_pte, pte_t orig_src_pte,
> + spinlock_t *dst_ptl, spinlock_t *src_ptl,
> + struct folio *src_folio)

remap_present_pte?

[...]

> +/**
> + * remap_pages - remap arbitrary anonymous pages of an existing vma
> + * @dst_start: start of the destination virtual memory range
> + * @src_start: start of the source virtual memory range
> + * @len: length of the virtual memory range
> + *
> + * remap_pages() remaps arbitrary anonymous pages atomically in zero
> + * copy. It only works on non shared anonymous pages because those can
> + * be relocated without generating non linear anon_vmas in the rmap
> + * code.
> + *
> + * It provides a zero copy mechanism to handle userspace page faults.
> + * The source vma pages should have mapcount == 1, which can be
> + * enforced by using madvise(MADV_DONTFORK) on src vma.
> + *
> + * The thread receiving the page during the userland page fault
> + * will receive the faulting page in the source vma through the network,
> + * storage or any other I/O device (MADV_DONTFORK in the source vma
> + * avoids remap_pages() to fail with -EBUSY if the process forks before
> + * remap_pages() is called), then it will call remap_pages() to map the
> + * page in the faulting address in the destination vma.
> + *
> + * This userfaultfd command works purely via pagetables, so it's the
> + * most efficient way to move physical non shared anonymous pages
> + * across different virtual addresses. Unlike mremap()/mmap()/munmap()
> + * it does not create any new vmas. The mapping in the destination
> + * address is atomic.
> + *
> + * It only works if the vma protection bits are identical from the
> + * source and destination vma.
> + *
> + * It can remap non shared anonymous pages within the same vma too.
> + *
> + * If the source virtual memory range has any unmapped holes, or if
> + * the destination virtual memory range is not a whole unmapped hole,
> + * remap_pages() will fail respectively with -ENOENT or -EEXIST. This
> + * provides a very strict behavior to avoid any chance of memory
> + * corruption going unnoticed if there are userland race conditions.
> + * Only one thread should resolve the userland page fault at any given
> + * time for any given faulting address. This means that if two threads
> + * try to both call remap_pages() on the same destination address at the
> + * same time, the second thread will get an explicit error from this
> + * command.
> + *
> + * The command retval will return "len" is successful. The command
> + * however can be interrupted by fatal signals or errors. If
> + * interrupted it will return the number of bytes successfully
> + * remapped before the interruption if any, or the negative error if
> + * none. It will never return zero. Either it will return an error or
> + * an amount of bytes successfully moved. If the retval reports a
> + * "short" remap, the remap_pages() command should be repeated by
> + * userland with src+retval, dst+reval, len-retval if it wants to know
> + * about the error that interrupted it.
> + *
> + * The UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES flag can be specified to
> + * prevent -ENOENT errors to materialize if there are holes in the
> + * source virtual range that is being remapped. The holes will be
> + * accounted as successfully remapped in the retval of the
> + * command. This is mostly useful to remap hugepage naturally aligned
> + * virtual regions without knowing if there are transparent hugepage
> + * in the regions or not, but preventing the risk of having to split
> + * the hugepmd during the remap.
> + *
> + * If there's any rmap walk that is taking the anon_vma locks without
> + * first obtaining the folio lock (for example split_huge_page and
> + * folio_referenced), they will have to verify if the folio->mapping

Hmm, this sentence seems to be not 100% accurate, perhaps not anymore?

As split_huge_page() should need the folio lock and it'll serialize with
REMAP with the folio lock too. It seems to me only folio_referenced() is
the outlier so far, and that's covered by patch 1.

I did also check other users of folio_get_anon_vma() (similar to use case
of split_huge_page()) and they're all with the folio lock held, so we
should be good.

In summary, perhaps:

- Drop split_huge_page() example here?

- Should we document above folio_get_anon_vma() about this specialty due
to UFFDIO_REMAP? I'm thiking something like:

+ *
+ * NOTE: the caller should normally hold folio lock when calling this. If
+ * not, the caller needs to double check the anon_vma didn't change after
+ * taking the anon_vma lock for either read or write (UFFDIO_REMAP can
+ * modify it concurrently without folio lock protection). See
+ * folio_lock_anon_vma_read() which has already covered that, and comment
+ * above remap_pages().
*/
struct anon_vma *folio_get_anon_vma(struct folio *folio)
{
...
}

> + * has changed after taking the anon_vma lock. If it changed they
> + * should release the lock and retry obtaining a new anon_vma, because
> + * it means the anon_vma was changed by remap_pages() before the lock
> + * could be obtained. This is the only additional complexity added to
> + * the rmap code to provide this anonymous page remapping functionality.
> + */
> +ssize_t remap_pages(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> + unsigned long dst_start, unsigned long src_start,
> + unsigned long len, __u64 mode)
> +{

[...]

> + if (!err) {
> + dst_addr += step_size;
> + src_addr += step_size;
> + moved += step_size;
> + }
> +
> + if ((!err || err == -EAGAIN) &&
> + fatal_signal_pending(current))
> + err = -EINTR;
> +
> + if (err && err != -EAGAIN)
> + break;

The err handling is slightly harder to read. I tried to rewrite it like
this:

switch (err) {
case 0:
dst_addr += step_size;
src_addr += step_size;
moved += step_size;
/* fall through */
case -EAGAIN:
if (fatal_signal_pending(current)) {
err = -EINTR;
goto out;
}
/* Continue with the loop */
break;
default:
goto out;
}

Not super good but maybe slightly better? No strong opinions, but if you
agree that looks better we can use it.

> + }
> +
> +out:
> + mmap_read_unlock(dst_mm);
> + if (dst_mm != src_mm)
> + mmap_read_unlock(src_mm);
> + BUG_ON(moved < 0);
> + BUG_ON(err > 0);
> + BUG_ON(!moved && !err);
> + return moved ? moved : err;
> +}

I think for the rest I'll read the new version (e.g. I saw discussion on
proper handling of pmd swap entries, which is not yet addressed, but
probably will in the next one).

Thanks,

--
Peter Xu

2023-09-28 19:43:31

by Jann Horn

[permalink] [raw]
Subject: Re: potential new userfaultfd vs khugepaged conflict [was: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI]

On Wed, Sep 27, 2023 at 7:12 PM Suren Baghdasaryan <[email protected]> wrote:
>
> On Wed, Sep 27, 2023 at 3:07 AM Jann Horn <[email protected]> wrote:
> >
> > [moving Hugh into "To:" recipients as FYI for khugepaged interaction]
> >
> > On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan <[email protected]> wrote:
> > > From: Andrea Arcangeli <[email protected]>
> > >
> > > This implements the uABI of UFFDIO_REMAP.
> > >
> > > Notably one mode bitflag is also forwarded (and in turn known) by the
> > > lowlevel remap_pages method.
> > >
> > > Signed-off-by: Andrea Arcangeli <[email protected]>
> > > Signed-off-by: Suren Baghdasaryan <[email protected]>
> > [...]
> > > +/*
> > > + * The mmap_lock for reading is held by the caller. Just move the page
> > > + * from src_pmd to dst_pmd if possible, and return true if succeeded
> > > + * in moving the page.
> > > + */
> > > +static int remap_pages_pte(struct mm_struct *dst_mm,
> > > + struct mm_struct *src_mm,
> > > + pmd_t *dst_pmd,
> > > + pmd_t *src_pmd,
> > > + struct vm_area_struct *dst_vma,
> > > + struct vm_area_struct *src_vma,
> > > + unsigned long dst_addr,
> > > + unsigned long src_addr,
> > > + __u64 mode)
> > > +{
> > > + swp_entry_t entry;
> > > + pte_t orig_src_pte, orig_dst_pte;
> > > + spinlock_t *src_ptl, *dst_ptl;
> > > + pte_t *src_pte = NULL;
> > > + pte_t *dst_pte = NULL;
> > > +
> > > + struct folio *src_folio = NULL;
> > > + struct anon_vma *src_anon_vma = NULL;
> > > + struct mmu_notifier_range range;
> > > + int err = 0;
> > > +
> > > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
> > > + src_addr, src_addr + PAGE_SIZE);
> > > + mmu_notifier_invalidate_range_start(&range);
> > > +retry:
> > > + dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
> > > +
> > > + /* If an huge pmd materialized from under us fail */
> > > + if (unlikely(!dst_pte)) {
> > > + err = -EFAULT;
> > > + goto out;
> > > + }
> > > +
> > > + src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
> > > +
> > > + /*
> > > + * We held the mmap_lock for reading so MADV_DONTNEED
> > > + * can zap transparent huge pages under us, or the
> > > + * transparent huge page fault can establish new
> > > + * transparent huge pages under us.
> > > + */
> > > + if (unlikely(!src_pte)) {
> > > + err = -EFAULT;
> > > + goto out;
> > > + }
> > > +
> > > + BUG_ON(pmd_none(*dst_pmd));
> > > + BUG_ON(pmd_none(*src_pmd));
> > > + BUG_ON(pmd_trans_huge(*dst_pmd));
> > > + BUG_ON(pmd_trans_huge(*src_pmd));
> >
> > This works for now, but note that Hugh Dickins has recently been
> > reworking khugepaged such that PTE-based mappings can be collapsed
> > into transhuge mappings under the mmap lock held in *read mode*;
> > holders of the mmap lock in read mode can only synchronize against
> > this by taking the right page table spinlock and rechecking the pmd
> > value. This is only the case for file-based mappings so far, not for
> > anonymous private VMAs; and this code only operates on anonymous
> > private VMAs so far, so it works out.
> >
> > But if either Hugh further reworks khugepaged such that anonymous VMAs
> > can be collapsed under the mmap lock in read mode, or you expand this
> > code to work on file-backed VMAs, then it will become possible to hit
> > these BUG_ON() calls. I'm not sure what the plans for khugepaged going
> > forward are, but the number of edgecases everyone has to keep in mind
> > would go down if you changed this function to deal gracefully with
> > page tables disappearing under you.
> >
> > In the newest version of mm/pgtable-generic.c, above
> > __pte_offset_map_lock(), there is a big comment block explaining the
> > current rules for page table access; in particular, regarding the
> > helper pte_offset_map_nolock() that you're using:
> >
> > * pte_offset_map_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
> > * but when successful, it also outputs a pointer to the spinlock in ptlp - as
> > * pte_offset_map_lock() does, but in this case without locking it. This helps
> > * the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
> > * act on a changed *pmd: pte_offset_map_nolock() provides the correct spinlock
> > * pointer for the page table that it returns. In principle, the caller should
> > * recheck *pmd once the lock is taken; in practice, no callsite needs that -
> > * either the mmap_lock for write, or pte_same() check on contents, is enough.
> >
> > If this becomes hittable in the future, I think you will need to
> > recheck *pmd, at least for dst_pte, to avoid copying PTEs into a
> > detached page table.
>
> Thanks for the warning, Jann. It sounds to me it would be better to
> add this pmd check now even though it's not hittable. Does that sound
> good to everyone?

Sounds good to me.

2023-09-28 19:43:42

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Wed, Sep 27, 2023 at 11:25:22AM -0700, Suren Baghdasaryan wrote:
> For uffd_remap can_change_pte_writable() would fail it VM_WRITE is not
> set, but we want remapping to work for RO memory as well.

Is there an use case that we want remap to work on RO?

The thing is, either removing a page or installing a new one with valid
content imply VM_WRITE to me on either side..

Thanks,

--
Peter Xu

2023-09-28 20:56:55

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Thu, Sep 28, 2023 at 07:05:40PM +0200, David Hildenbrand wrote:
> As described as reply to v1, without fork() and KSM, the PAE bit should
> stick around. If that's not the case, we should investigate why.
>
> If we ever support the post-fork case (which the comment above remap_pages()
> excludes) we'll need good motivation why we'd want to make this
> overly-complicated feature even more complicated.

The problem is DONTFORK is only a suggestion, but not yet restricted. If
someone reaches on top of some !PAE page on src it'll never gonna proceed
and keep failing, iiuc.

do_wp_page() doesn't have that issue of accuracy only because one round of
CoW will just allocate a new page with PAE set guaranteed, which is pretty
much self-heal and unnoticed.

So it'll be great if we can have similar self-heal way for PAE. If not, I
think it's still fine we just always fail on !PAE src pages, but then maybe
we should let the user know what's wrong, e.g., the user can just forgot to
apply DONTFORK then forked. And then the user hits error and don't know
what happened. Probably at least we should document it well in man pages.

Another option can be we keep using folio_mapcount() for pte, and another
helper (perhaps: _nr_pages_mapped==COMPOUND_MAPPED && _entire_mapcount==1)
for thp. But I know that's not ideal either.

--
Peter Xu

2023-09-29 00:13:40

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Thu, Sep 28, 2023 at 11:34 AM Peter Xu <[email protected]> wrote:
>
> On Thu, Sep 28, 2023 at 07:51:18PM +0200, David Hildenbrand wrote:
> > On 28.09.23 19:21, Peter Xu wrote:
> > > On Thu, Sep 28, 2023 at 07:05:40PM +0200, David Hildenbrand wrote:
> > > > As described as reply to v1, without fork() and KSM, the PAE bit should
> > > > stick around. If that's not the case, we should investigate why.
> > > >
> > > > If we ever support the post-fork case (which the comment above remap_pages()
> > > > excludes) we'll need good motivation why we'd want to make this
> > > > overly-complicated feature even more complicated.
> > >
> > > The problem is DONTFORK is only a suggestion, but not yet restricted. If
> > > someone reaches on top of some !PAE page on src it'll never gonna proceed
> > > and keep failing, iiuc.
> >
> > Yes. It won't work if you fork() and not use DONTFORK on the src VMA. We
> > should document that as a limitation.
> >
> > For example, you could return an error to the user that can just call
> > UFFDIO_COPY. (or to the UFFDIO_COPY from inside uffd code, but that's
> > probably ugly as well).
>
> We could indeed provide some special errno perhaps upon the PAE check, then
> document it explicitly in the man page and suggest resolutions (like
> DONTFORK) when user hit it.
>
> >
> > >
> > > do_wp_page() doesn't have that issue of accuracy only because one round of
> > > CoW will just allocate a new page with PAE set guaranteed, which is pretty
> > > much self-heal and unnoticed.
> >
> > Yes. But it might have to copy, at which point the whole optimization of
> > remap is gone :)
>
> Right, but that's fine IMHO because it should still be very corner case,
> definitely not expected to be the majority to start impact the performance
> results.
>
> >
> > >
> > > So it'll be great if we can have similar self-heal way for PAE. If not, I
> > > think it's still fine we just always fail on !PAE src pages, but then maybe
> > > we should let the user know what's wrong, e.g., the user can just forgot to
> > > apply DONTFORK then forked. And then the user hits error and don't know
> > > what happened. Probably at least we should document it well in man pages.
> > >
> > Yes, exactly.
> >
> > > Another option can be we keep using folio_mapcount() for pte, and another
> > > helper (perhaps: _nr_pages_mapped==COMPOUND_MAPPED && _entire_mapcount==1)
> > > for thp. But I know that's not ideal either.
> >
> > As long as we only set the pte writable if PAE is set, we're good from a CVE
> > perspective. The other part is just simplicity of avoiding all these
> > mapcount+swapcount games where possible.
> >
> > (one day folio_mapcount() might be faster -- I'm still working on that patch
> > in the bigger picture of handling PTE-mapped THP better)
>
> Sure.
>
> For now as long as we're crystal clear on the possibility of inaccuracy of
> PAE, it never hits besides fork() && !DONTFORK, and properly document it,
> then sounds good here.

Ok, sounds like we have a consensus. I'll prepare manpage changes to
document the DONTFORK requirement for uffd_remap.

>
> Thanks,
>
> --
> Peter Xu
>

2023-09-29 00:26:56

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On 28.09.23 18:24, Peter Xu wrote:
> On Wed, Sep 27, 2023 at 03:29:35PM +0200, David Hildenbrand wrote:
>>>> + if (!pte_same(*src_pte, orig_src_pte) ||
>>>> + !pte_same(*dst_pte, orig_dst_pte) ||
>>>> + folio_test_large(src_folio) ||
>>>> + folio_estimated_sharers(src_folio) != 1) {
>>
>> ^ here you should check PageAnonExclusive. Please get rid of any implicit
>> explicit/implcit mapcount checks.
>
> David, is PageAnon 100% accurate now in the current tree?
>
> IOW, can it be possible that the page has total_mapcount==1 but missing
> AnonExclusive bit in any possible way?

As described as reply to v1, without fork() and KSM, the PAE bit should
stick around. If that's not the case, we should investigate why.

If we ever support the post-fork case (which the comment above
remap_pages() excludes) we'll need good motivation why we'd want to make
this overly-complicated feature even more complicated.

--
Cheers,

David / dhildenb

2023-09-29 00:28:19

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Thu, Sep 28, 2023 at 11:43 AM Peter Xu <[email protected]> wrote:
>
> One more thing..
>
> On Fri, Sep 22, 2023 at 06:31:45PM -0700, Suren Baghdasaryan wrote:
> > +static int remap_pages_pte(struct mm_struct *dst_mm,
>
> [...]
>
> > +retry:
> > + dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
> > +
> > + /* If an huge pmd materialized from under us fail */
> > + if (unlikely(!dst_pte)) {
> > + err = -EFAULT;
> > + goto out;
> > + }
> > +
> > + src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
> > +
> > + /*
> > + * We held the mmap_lock for reading so MADV_DONTNEED
> > + * can zap transparent huge pages under us, or the
> > + * transparent huge page fault can establish new
> > + * transparent huge pages under us.
> > + */
> > + if (unlikely(!src_pte)) {
> > + err = -EFAULT;
> > + goto out;
> > + }
>
> For these two places: I know that thp collapse with mmap read lock hasn't
> yet spread to anon (so I assume none of above could trigger yet on the
> failure paths), but shall we constantly return -EAGAIN here just in case we
> forget that in the future?
>
> For example, for UFFDIO_COPY over shmem which we can already hit similar
> case, mfill_atomic_install_pte() has:
>
> ret = -EAGAIN;
> dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
> if (!dst_pte)
> goto out;
>
> Thanks,

Retrying in this case makes sense to me. Will change.

>
> --
> Peter Xu
>
> --
> To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
>

2023-09-29 02:14:29

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On 28.09.23 19:21, Peter Xu wrote:
> On Thu, Sep 28, 2023 at 07:05:40PM +0200, David Hildenbrand wrote:
>> As described as reply to v1, without fork() and KSM, the PAE bit should
>> stick around. If that's not the case, we should investigate why.
>>
>> If we ever support the post-fork case (which the comment above remap_pages()
>> excludes) we'll need good motivation why we'd want to make this
>> overly-complicated feature even more complicated.
>
> The problem is DONTFORK is only a suggestion, but not yet restricted. If
> someone reaches on top of some !PAE page on src it'll never gonna proceed
> and keep failing, iiuc.

Yes. It won't work if you fork() and not use DONTFORK on the src VMA. We
should document that as a limitation.

For example, you could return an error to the user that can just call
UFFDIO_COPY. (or to the UFFDIO_COPY from inside uffd code, but that's
probably ugly as well).

>
> do_wp_page() doesn't have that issue of accuracy only because one round of
> CoW will just allocate a new page with PAE set guaranteed, which is pretty
> much self-heal and unnoticed.

Yes. But it might have to copy, at which point the whole optimization of
remap is gone :)

>
> So it'll be great if we can have similar self-heal way for PAE. If not, I
> think it's still fine we just always fail on !PAE src pages, but then maybe
> we should let the user know what's wrong, e.g., the user can just forgot to
> apply DONTFORK then forked. And then the user hits error and don't know
> what happened. Probably at least we should document it well in man pages.
>
Yes, exactly.

> Another option can be we keep using folio_mapcount() for pte, and another
> helper (perhaps: _nr_pages_mapped==COMPOUND_MAPPED && _entire_mapcount==1)
> for thp. But I know that's not ideal either.

As long as we only set the pte writable if PAE is set, we're good from a
CVE perspective. The other part is just simplicity of avoiding all these
mapcount+swapcount games where possible.

(one day folio_mapcount() might be faster -- I'm still working on that
patch in the bigger picture of handling PTE-mapped THP better)

--
Cheers,

David / dhildenb

2023-09-29 04:07:26

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Thu, Sep 28, 2023 at 11:32 AM Suren Baghdasaryan <[email protected]> wrote:
>
> On Thu, Sep 28, 2023 at 10:15 AM David Hildenbrand <[email protected]> wrote:
> >
> > On 27.09.23 20:25, Suren Baghdasaryan wrote:
> > >>
> > >> I have some cleanups pending for page_move_anon_rmap(), that moves the
> > >> SetPageAnonExclusive hunk out. Here we should be using
> > >> page_move_anon_rmap() [or rather, folio_move_anon_rmap() after my cleanups]
> > >>
> > >> I'll send them out soonish.
> > >
> > > Should I keep this as is in my next version until you post the
> > > cleanups? I can add a TODO comment to convert it to
> > > folio_move_anon_rmap() once it's ready.
> >
> > You should just be able to use page_move_anon_rmap() and whatever gets
> > in first cleans it up :)
>
> Ack.
>
> >
> > >
> > >>
> > >>>> + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
> > >>>> + dst_addr)); >> +
> > >>>> + orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
> > >>>> + orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
> > >>>> + orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
> > >>>> + dst_vma);
> > >>>
> > >>> I think there's still a theoretical issue here that you could fix by
> > >>> checking for the AnonExclusive flag, similar to the huge page case.
> > >>>
> > >>> Consider the following scenario:
> > >>>
> > >>> 1. process P1 does a write fault in a private anonymous VMA, creating
> > >>> and mapping a new anonymous page A1
> > >>> 2. process P1 forks and creates two children P2 and P3. afterwards, A1
> > >>> is mapped in P1, P2 and P3 as a COW page, with mapcount 3.
> > >>> 3. process P1 removes its mapping of A1, dropping its mapcount to 2.
> > >>> 4. process P2 uses vmsplice() to grab a reference to A1 with get_user_pages()
> > >>> 5. process P2 removes its mapping of A1, dropping its mapcount to 1.
> > >>>
> > >>> If at this point P3 does a write fault on its mapping of A1, it will
> > >>> still trigger copy-on-write thanks to the AnonExclusive mechanism; and
> > >>> this is necessary to avoid P3 mapping A1 as writable and writing data
> > >>> into it that will become visible to P2, if P2 and P3 are in different
> > >>> security contexts.
> > >>>
> > >>> But if P3 instead moves its mapping of A1 to another address with
> > >>> remap_anon_pte() which only does a page mapcount check, the
> > >>> maybe_mkwrite() will directly make the mapping writable, circumventing
> > >>> the AnonExclusive mechanism.
> > >>>
> > >>
> > >> Yes, can_change_pte_writable() contains the exact logic when we can turn
> > >> something easily writable even if it wasn't writable before. which
> > >> includes that PageAnonExclusive is set. (but with uffd-wp or softdirty
> > >> tracking, there is more to consider)
> > >
> > > For uffd_remap can_change_pte_writable() would fail it VM_WRITE is not
> > > set, but we want remapping to work for RO memory as well. Are you
> >
> > In a VMA without VM_WRITE you certainly wouldn't want to make PTEs
> > writable :) That's why that function just does a sanity check that it is
> > not called in strange context. So one would only call it if VM_WRITE is set.
> >
> > > saying that a PageAnonExclusive() check alone would not be enough
> > > here?
> >
> > There are some interesting questions to ask here:
> >
> > 1) What happens if the old VMA has VM_SOFTDIRTY set but the new one not?
> > You most probably have to mark the PTE softdirty and not make it writable.
> >
> > 2) VM_UFFD_WP requires similar care I assume? Peter might know.
>
> Let me look closer into these cases.
> I'll also double-check if we need to support uffd_remap for R/O vmas.
> I assumed we do but I actually never checked.

Ok, I confirmed that we don't need remapping or R/O areas. So, I can
use can_change_pte_writable() and keep things simple. Does that sound
good?

> Thanks!
>
> >
> > --
> > Cheers,
> >
> > David / dhildenb
> >

2023-09-29 04:36:08

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Thu, Sep 28, 2023 at 07:51:18PM +0200, David Hildenbrand wrote:
> On 28.09.23 19:21, Peter Xu wrote:
> > On Thu, Sep 28, 2023 at 07:05:40PM +0200, David Hildenbrand wrote:
> > > As described as reply to v1, without fork() and KSM, the PAE bit should
> > > stick around. If that's not the case, we should investigate why.
> > >
> > > If we ever support the post-fork case (which the comment above remap_pages()
> > > excludes) we'll need good motivation why we'd want to make this
> > > overly-complicated feature even more complicated.
> >
> > The problem is DONTFORK is only a suggestion, but not yet restricted. If
> > someone reaches on top of some !PAE page on src it'll never gonna proceed
> > and keep failing, iiuc.
>
> Yes. It won't work if you fork() and not use DONTFORK on the src VMA. We
> should document that as a limitation.
>
> For example, you could return an error to the user that can just call
> UFFDIO_COPY. (or to the UFFDIO_COPY from inside uffd code, but that's
> probably ugly as well).

We could indeed provide some special errno perhaps upon the PAE check, then
document it explicitly in the man page and suggest resolutions (like
DONTFORK) when user hit it.

>
> >
> > do_wp_page() doesn't have that issue of accuracy only because one round of
> > CoW will just allocate a new page with PAE set guaranteed, which is pretty
> > much self-heal and unnoticed.
>
> Yes. But it might have to copy, at which point the whole optimization of
> remap is gone :)

Right, but that's fine IMHO because it should still be very corner case,
definitely not expected to be the majority to start impact the performance
results.

>
> >
> > So it'll be great if we can have similar self-heal way for PAE. If not, I
> > think it's still fine we just always fail on !PAE src pages, but then maybe
> > we should let the user know what's wrong, e.g., the user can just forgot to
> > apply DONTFORK then forked. And then the user hits error and don't know
> > what happened. Probably at least we should document it well in man pages.
> >
> Yes, exactly.
>
> > Another option can be we keep using folio_mapcount() for pte, and another
> > helper (perhaps: _nr_pages_mapped==COMPOUND_MAPPED && _entire_mapcount==1)
> > for thp. But I know that's not ideal either.
>
> As long as we only set the pte writable if PAE is set, we're good from a CVE
> perspective. The other part is just simplicity of avoiding all these
> mapcount+swapcount games where possible.
>
> (one day folio_mapcount() might be faster -- I'm still working on that patch
> in the bigger picture of handling PTE-mapped THP better)

Sure.

For now as long as we're crystal clear on the possibility of inaccuracy of
PAE, it never hits besides fork() && !DONTFORK, and properly document it,
then sounds good here.

Thanks,

--
Peter Xu

2023-09-29 04:41:08

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Thu, Sep 28, 2023 at 07:15:13PM +0200, David Hildenbrand wrote:
> There are some interesting questions to ask here:
>
> 1) What happens if the old VMA has VM_SOFTDIRTY set but the new one not? You
> most probably have to mark the PTE softdirty and not make it writable.

I don't know whether anyone would care about soft-dirty used with uffd
remap, but if to think about it..

Logically if the dst vma has !SOFTDIRTY (means, soft-dirty tracking
enabled), then IIUC the right thing to do is to assume this page is
modified, hence mark softdirty and perhaps proceed with other checks (where
write bit can be set if all check pass)?

Because from a soft-dirty monitor POV on dst_vma I see this REMAP the same
as writting data onto the missing page and got a page fault
(e.g. UFFDIO_COPY); we just avoided the allocation and copy.

The src vma seems also fine in this regard: soft-dirty should ignore holes
always anyway (e.g. DONTNEED on a page should report !soft-dirty later even
if tracking).

>
> 2) VM_UFFD_WP requires similar care I assume? Peter might know.

UFFD_WP shouldn't be affected, iiuc.

Let's first discuss dst vma side.

WP_UNPOPULATED made it slightly complicated but not so much. The core
should be that REMAP only installs pages if it's exactly pte_none():

+ if (!pte_none(orig_dst_pte)) {
+ err = -EEXIST;
+ goto out;
+ }

Then it already covers things like pte markers, and any marker currently
will fail the REMAP ioctl already. May not be always wanted, but no risk
of losing wp notifications. If that'll be a valid use case we can work it
out.

On src vma, REMAP ioctl should behave the same as DONTNEED. Now we drop
the src pte along with the uffd-wp bit even if set, which is the correct
behavior from that regard.

Again, I don't know whether anyone cares on any of those, though..

Thanks,

--
Peter Xu

2023-09-29 05:13:26

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Wed, Sep 27, 2023 at 3:49 PM Jann Horn <[email protected]> wrote:
>
> On Wed, Sep 27, 2023 at 11:08 PM Suren Baghdasaryan <[email protected]> wrote:
> > On Wed, Sep 27, 2023 at 1:42 PM Suren Baghdasaryan <[email protected]> wrote:
> > >
> > > On Wed, Sep 27, 2023 at 1:04 PM Jann Horn <[email protected]> wrote:
> > > >
> > > > On Wed, Sep 27, 2023 at 8:08 PM Suren Baghdasaryan <[email protected]> wrote:
> > > > > On Wed, Sep 27, 2023 at 5:47 AM Jann Horn <[email protected]> wrote:
> > > > > > On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan <[email protected]> wrote:
> > > > > > > + dst_pmdval = pmdp_get_lockless(dst_pmd);
> > > > > > > + /*
> > > > > > > + * If the dst_pmd is mapped as THP don't override it and just
> > > > > > > + * be strict. If dst_pmd changes into TPH after this check, the
> > > > > > > + * remap_pages_huge_pmd() will detect the change and retry
> > > > > > > + * while remap_pages_pte() will detect the change and fail.
> > > > > > > + */
> > > > > > > + if (unlikely(pmd_trans_huge(dst_pmdval))) {
> > > > > > > + err = -EEXIST;
> > > > > > > + break;
> > > > > > > + }
> > > > > > > +
> > > > > > > + ptl = pmd_trans_huge_lock(src_pmd, src_vma);
> > > > > > > + if (ptl && !pmd_trans_huge(*src_pmd)) {
> > > > > > > + spin_unlock(ptl);
> > > > > > > + ptl = NULL;
> > > > > > > + }
> > > > > >
> > > > > > This still looks wrong - we do still have to split_huge_pmd()
> > > > > > somewhere so that remap_pages_pte() works.
> > > > >
> > > > > Hmm, I guess this extra check is not even needed...
> > > >
> > > > Hm, and instead we'd bail at the pte_offset_map_nolock() in
> > > > remap_pages_pte()? I guess that's unusual but works...
> > >
> > > Yes, that's what I was thinking but I agree, that seems fragile. Maybe
> > > just bail out early if (ptl && !pmd_trans_huge())?
> >
> > No, actually we can still handle is_swap_pmd() case by splitting it
> > and remapping the individual ptes. So, I can bail out only in case of
> > pmd_devmap().
>
> FWIW I only learned today that "real" swap PMDs don't actually exist -
> only migration entries, which are encoded as swap PMDs, exist. You can
> see that when you look through the cases that something like
> __split_huge_pmd() or zap_pmd_range() actually handles.

Ah, good point.

>
> So I think if you wanted to handle all the PMD types properly here
> without splitting, you could do that without _too_ much extra code.
> But idk if it's worth it.

Yeah, I guess I can call pmd_migration_entry_wait() and retry by
returning EAGAIN, similar to how remap_pages_pte() handles PTE
migration. Looks simple enough.

Thanks for all the pointers! I'll start cooking the next version.

2023-09-29 08:00:47

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

One more thing..

On Fri, Sep 22, 2023 at 06:31:45PM -0700, Suren Baghdasaryan wrote:
> +static int remap_pages_pte(struct mm_struct *dst_mm,

[...]

> +retry:
> + dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
> +
> + /* If an huge pmd materialized from under us fail */
> + if (unlikely(!dst_pte)) {
> + err = -EFAULT;
> + goto out;
> + }
> +
> + src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
> +
> + /*
> + * We held the mmap_lock for reading so MADV_DONTNEED
> + * can zap transparent huge pages under us, or the
> + * transparent huge page fault can establish new
> + * transparent huge pages under us.
> + */
> + if (unlikely(!src_pte)) {
> + err = -EFAULT;
> + goto out;
> + }

For these two places: I know that thp collapse with mmap read lock hasn't
yet spread to anon (so I assume none of above could trigger yet on the
failure paths), but shall we constantly return -EAGAIN here just in case we
forget that in the future?

For example, for UFFDIO_COPY over shmem which we can already hit similar
case, mfill_atomic_install_pte() has:

ret = -EAGAIN;
dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
if (!dst_pte)
goto out;

Thanks,

--
Peter Xu

2023-09-29 18:06:39

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Thu, Sep 28, 2023 at 10:15 AM David Hildenbrand <[email protected]> wrote:
>
> On 27.09.23 20:25, Suren Baghdasaryan wrote:
> >>
> >> I have some cleanups pending for page_move_anon_rmap(), that moves the
> >> SetPageAnonExclusive hunk out. Here we should be using
> >> page_move_anon_rmap() [or rather, folio_move_anon_rmap() after my cleanups]
> >>
> >> I'll send them out soonish.
> >
> > Should I keep this as is in my next version until you post the
> > cleanups? I can add a TODO comment to convert it to
> > folio_move_anon_rmap() once it's ready.
>
> You should just be able to use page_move_anon_rmap() and whatever gets
> in first cleans it up :)

Ack.

>
> >
> >>
> >>>> + WRITE_ONCE(src_folio->index, linear_page_index(dst_vma,
> >>>> + dst_addr)); >> +
> >>>> + orig_src_pte = ptep_clear_flush(src_vma, src_addr, src_pte);
> >>>> + orig_dst_pte = mk_pte(&src_folio->page, dst_vma->vm_page_prot);
> >>>> + orig_dst_pte = maybe_mkwrite(pte_mkdirty(orig_dst_pte),
> >>>> + dst_vma);
> >>>
> >>> I think there's still a theoretical issue here that you could fix by
> >>> checking for the AnonExclusive flag, similar to the huge page case.
> >>>
> >>> Consider the following scenario:
> >>>
> >>> 1. process P1 does a write fault in a private anonymous VMA, creating
> >>> and mapping a new anonymous page A1
> >>> 2. process P1 forks and creates two children P2 and P3. afterwards, A1
> >>> is mapped in P1, P2 and P3 as a COW page, with mapcount 3.
> >>> 3. process P1 removes its mapping of A1, dropping its mapcount to 2.
> >>> 4. process P2 uses vmsplice() to grab a reference to A1 with get_user_pages()
> >>> 5. process P2 removes its mapping of A1, dropping its mapcount to 1.
> >>>
> >>> If at this point P3 does a write fault on its mapping of A1, it will
> >>> still trigger copy-on-write thanks to the AnonExclusive mechanism; and
> >>> this is necessary to avoid P3 mapping A1 as writable and writing data
> >>> into it that will become visible to P2, if P2 and P3 are in different
> >>> security contexts.
> >>>
> >>> But if P3 instead moves its mapping of A1 to another address with
> >>> remap_anon_pte() which only does a page mapcount check, the
> >>> maybe_mkwrite() will directly make the mapping writable, circumventing
> >>> the AnonExclusive mechanism.
> >>>
> >>
> >> Yes, can_change_pte_writable() contains the exact logic when we can turn
> >> something easily writable even if it wasn't writable before. which
> >> includes that PageAnonExclusive is set. (but with uffd-wp or softdirty
> >> tracking, there is more to consider)
> >
> > For uffd_remap can_change_pte_writable() would fail it VM_WRITE is not
> > set, but we want remapping to work for RO memory as well. Are you
>
> In a VMA without VM_WRITE you certainly wouldn't want to make PTEs
> writable :) That's why that function just does a sanity check that it is
> not called in strange context. So one would only call it if VM_WRITE is set.
>
> > saying that a PageAnonExclusive() check alone would not be enough
> > here?
>
> There are some interesting questions to ask here:
>
> 1) What happens if the old VMA has VM_SOFTDIRTY set but the new one not?
> You most probably have to mark the PTE softdirty and not make it writable.
>
> 2) VM_UFFD_WP requires similar care I assume? Peter might know.

Let me look closer into these cases.
I'll also double-check if we need to support uffd_remap for R/O vmas.
I assumed we do but I actually never checked.
Thanks!

>
> --
> Cheers,
>
> David / dhildenb
>

2023-10-02 15:35:03

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
> In case we cannot simply remap the page, the fallback sequence (from the
> cover letter) would be triggered.
>
> 1) UFFDIO_COPY
> 2) MADV_DONTNEED
>
> So we would just handle the operation internally without a fallback.

Note that I think there will be a slight difference on whole remap
atomicity, on what happens if the page is modified after UFFDIO_COPY but
before DONTNEED.

UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads
can be updating the pages when ioctl(UFFDIO_REMAP), data won't get lost
during movement, and it will generate a missing event after moved, with
latest data showing up on dest.

I'm not sure that means such a fallback is a problem, Suren may know
better with the use case.

Thanks,

--
Peter Xu

2023-10-02 18:23:12

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On 28.09.23 20:34, Peter Xu wrote:
> On Thu, Sep 28, 2023 at 07:51:18PM +0200, David Hildenbrand wrote:
>> On 28.09.23 19:21, Peter Xu wrote:
>>> On Thu, Sep 28, 2023 at 07:05:40PM +0200, David Hildenbrand wrote:
>>>> As described as reply to v1, without fork() and KSM, the PAE bit should
>>>> stick around. If that's not the case, we should investigate why.
>>>>
>>>> If we ever support the post-fork case (which the comment above remap_pages()
>>>> excludes) we'll need good motivation why we'd want to make this
>>>> overly-complicated feature even more complicated.
>>>
>>> The problem is DONTFORK is only a suggestion, but not yet restricted. If
>>> someone reaches on top of some !PAE page on src it'll never gonna proceed
>>> and keep failing, iiuc.
>>
>> Yes. It won't work if you fork() and not use DONTFORK on the src VMA. We
>> should document that as a limitation.
>>
>> For example, you could return an error to the user that can just call
>> UFFDIO_COPY. (or to the UFFDIO_COPY from inside uffd code, but that's
>> probably ugly as well).
>
> We could indeed provide some special errno perhaps upon the PAE check, then
> document it explicitly in the man page and suggest resolutions (like
> DONTFORK) when user hit it.
>

Maybe it might be reasonable to consider an operation that moves the
page, even if it might do an internal copy. UFFDIO_MOVE might be a
better name for something like that.

In case we cannot simply remap the page, the fallback sequence (from the
cover letter) would be triggered.

1) UFFDIO_COPY
2) MADV_DONTNEED

So we would just handle the operation internally without a fallback.

The recommendation to the user to make the overall operation as fast as
possible would be to not use KSM, to avoid fork(), or to use
MADV_DONTFORK when fork() must be used.


--
Cheers,

David / dhildenb

2023-10-02 20:21:52

by Lokesh Gidra

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Mon, Oct 2, 2023 at 4:21 PM Peter Xu <[email protected]> wrote:
>
> On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
> > In case we cannot simply remap the page, the fallback sequence (from the
> > cover letter) would be triggered.
> >
> > 1) UFFDIO_COPY
> > 2) MADV_DONTNEED
> >
> > So we would just handle the operation internally without a fallback.
>
> Note that I think there will be a slight difference on whole remap
> atomicity, on what happens if the page is modified after UFFDIO_COPY but
> before DONTNEED.
>
> UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads
> can be updating the pages when ioctl(UFFDIO_REMAP), data won't get lost
> during movement, and it will generate a missing event after moved, with
> latest data showing up on dest.
>
> I'm not sure that means such a fallback is a problem, Suren may know
> better with the use case.

Although there is no problem in using fallback with our use case but
as a user of userfaultfd, I'd suggest leaving it to the developer.
Failing with appropriate errno makes more sense. If handled in the
kernel, then the user may assume at the end of the operation that the
src vma is completely unmapped. And if not correctness issues, it
could lead to memory leaks.
>
> Thanks,
>
> --
> Peter Xu
>

2023-10-02 21:34:44

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On 28.09.23 21:00, Peter Xu wrote:
> On Thu, Sep 28, 2023 at 07:15:13PM +0200, David Hildenbrand wrote:
>> There are some interesting questions to ask here:
>>
>> 1) What happens if the old VMA has VM_SOFTDIRTY set but the new one not? You
>> most probably have to mark the PTE softdirty and not make it writable.
>
> I don't know whether anyone would care about soft-dirty used with uffd
> remap, but if to think about it..
>
> Logically if the dst vma has !SOFTDIRTY (means, soft-dirty tracking
> enabled), then IIUC the right thing to do is to assume this page is
> modified, hence mark softdirty and perhaps proceed with other checks (where
> write bit can be set if all check pass)?

I think so, yes.

>
> Because from a soft-dirty monitor POV on dst_vma I see this REMAP the same
> as writting data onto the missing page and got a page fault
> (e.g. UFFDIO_COPY); we just avoided the allocation and copy.
>
> The src vma seems also fine in this regard: soft-dirty should ignore holes
> always anyway (e.g. DONTNEED on a page should report !soft-dirty later even
> if tracking).

Sounds good to me.

>
>>
>> 2) VM_UFFD_WP requires similar care I assume? Peter might know.
>
> UFFD_WP shouldn't be affected, iiuc.
>
> Let's first discuss dst vma side.
>
> WP_UNPOPULATED made it slightly complicated but not so much. The core
> should be that REMAP only installs pages if it's exactly pte_none():
>
> + if (!pte_none(orig_dst_pte)) {
> + err = -EEXIST;
> + goto out;
> + }
>
> Then it already covers things like pte markers, and any marker currently
> will fail the REMAP ioctl already. May not be always wanted, but no risk
> of losing wp notifications. If that'll be a valid use case we can work it
> out.

Agreed.

>
> On src vma, REMAP ioctl should behave the same as DONTNEED. Now we drop
> the src pte along with the uffd-wp bit even if set, which is the correct
> behavior from that regard.
>
> Again, I don't know whether anyone cares on any of those, though..

If it's easy to handle, we should just handle it or instead spell it out
why we believe we can break other features. Seems to be very easy to handle.

--
Cheers,

David / dhildenb

2023-10-02 21:58:04

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On 02.10.23 17:55, Lokesh Gidra wrote:
> On Mon, Oct 2, 2023 at 4:46 PM Lokesh Gidra <[email protected]> wrote:
>>
>> On Mon, Oct 2, 2023 at 4:21 PM Peter Xu <[email protected]> wrote:
>>>
>>> On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
>>>> In case we cannot simply remap the page, the fallback sequence (from the
>>>> cover letter) would be triggered.
>>>>
>>>> 1) UFFDIO_COPY
>>>> 2) MADV_DONTNEED
>>>>
>>>> So we would just handle the operation internally without a fallback.
>>>
>>> Note that I think there will be a slight difference on whole remap
>>> atomicity, on what happens if the page is modified after UFFDIO_COPY but
>>> before DONTNEED.
>>>
>>> UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads
>>> can be updating the pages when ioctl(UFFDIO_REMAP), data won't get lost
>>> during movement, and it will generate a missing event after moved, with
>>> latest data showing up on dest.
>>>
>>> I'm not sure that means such a fallback is a problem, Suren may know
>>> better with the use case.
>>
>> Although there is no problem in using fallback with our use case but
>> as a user of userfaultfd, I'd suggest leaving it to the developer.
>> Failing with appropriate errno makes more sense. If handled in the
>> kernel, then the user may assume at the end of the operation that the
>> src vma is completely unmapped. And if not correctness issues, it
>> could lead to memory leaks.
>
> I meant that in addition to the possibility of correctness issues due
> to lack of atomicity, it could also lead to memory leaks, as the user
> may assume that src vma is empty post-operation. IMHO, it's better to
> fail with errno so that the user would fix the code with necessary
> changes (like using DONTFORK, if forking).

Leaving the atomicity discussion out because I think this can just be
handled (e.g., the src_vma would always be empty post-operation):

It might not necessarily be a good idea to only expose micro-operations
to user space. If the user-space fallback will almost always be
"UFFDIO_COPY+MADV_DONTNEED", then clearly the logical operation
performed is moving data, ideally with zero-copy.

[as said as reply to Peter, one could still have magic flags for users
that really want to detect when zero-copy is impossible]

With a logical MOVE API users like compaction [as given in the cover
letter], not every such user has to eventually implement fallback paths.

But just my 2 cents, the UFFDIO_REMAP users probably can share what the
exact use cases are and if fallbacks are required at all or if no-KSM +
DONTFORK just does the trick.

--
Cheers,

David / dhildenb

2023-10-02 22:00:40

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On 02.10.23 17:21, Peter Xu wrote:
> On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
>> In case we cannot simply remap the page, the fallback sequence (from the
>> cover letter) would be triggered.
>>
>> 1) UFFDIO_COPY
>> 2) MADV_DONTNEED
>>
>> So we would just handle the operation internally without a fallback.
>
> Note that I think there will be a slight difference on whole remap
> atomicity, on what happens if the page is modified after UFFDIO_COPY but
> before DONTNEED.

If the page is writable (implies PAE), we can always move it. If it is
R/O, it cannot change before we get a page fault and grab the PT lock
(well, and page lock).

So I think something atomic can be implemented without too much issues.

>
> UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads
> can be updating the pages when ioctl(UFFDIO_REMAP), data won't get lost
> during movement, and it will generate a missing event after moved, with
> latest data showing up on dest.

If the page has to be copied, grab a reference and unmap it, then copy
it and map it into the new process. Should be doable and handle all
kinds of situations just fine.

Just throwing out ideas to get a less low-level interface.

[if one really wants to get notified when one cannot move without a
copy, one could have a flag for such power users to control the behavior]

--
Cheers,

David / dhildenb

2023-10-02 22:10:43

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On 02.10.23 19:33, David Hildenbrand wrote:
> On 02.10.23 17:21, Peter Xu wrote:
>> On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
>>> In case we cannot simply remap the page, the fallback sequence (from the
>>> cover letter) would be triggered.
>>>
>>> 1) UFFDIO_COPY
>>> 2) MADV_DONTNEED
>>>
>>> So we would just handle the operation internally without a fallback.
>>
>> Note that I think there will be a slight difference on whole remap
>> atomicity, on what happens if the page is modified after UFFDIO_COPY but
>> before DONTNEED.
>
> If the page is writable (implies PAE), we can always move it. If it is
> R/O, it cannot change before we get a page fault and grab the PT lock
> (well, and page lock).
>
> So I think something atomic can be implemented without too much issues.
>
>>
>> UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads
>> can be updating the pages when ioctl(UFFDIO_REMAP), data won't get lost
>> during movement, and it will generate a missing event after moved, with
>> latest data showing up on dest.
>
> If the page has to be copied, grab a reference and unmap it, then copy
> it and map it into the new process. Should be doable and handle all
> kinds of situations just fine.
>
> Just throwing out ideas to get a less low-level interface.
>
> [if one really wants to get notified when one cannot move without a
> copy, one could have a flag for such power users to control the behavior]
>

[of course, if someone would have a GUP-pin on such a page, the page
exchange would be observable. Just have to documented the UFFDIO_MOVE
semantics properly]

--
Cheers,

David / dhildenb

2023-10-02 22:10:54

by Lokesh Gidra

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Mon, Oct 2, 2023 at 4:46 PM Lokesh Gidra <[email protected]> wrote:
>
> On Mon, Oct 2, 2023 at 4:21 PM Peter Xu <[email protected]> wrote:
> >
> > On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
> > > In case we cannot simply remap the page, the fallback sequence (from the
> > > cover letter) would be triggered.
> > >
> > > 1) UFFDIO_COPY
> > > 2) MADV_DONTNEED
> > >
> > > So we would just handle the operation internally without a fallback.
> >
> > Note that I think there will be a slight difference on whole remap
> > atomicity, on what happens if the page is modified after UFFDIO_COPY but
> > before DONTNEED.
> >
> > UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads
> > can be updating the pages when ioctl(UFFDIO_REMAP), data won't get lost
> > during movement, and it will generate a missing event after moved, with
> > latest data showing up on dest.
> >
> > I'm not sure that means such a fallback is a problem, Suren may know
> > better with the use case.
>
> Although there is no problem in using fallback with our use case but
> as a user of userfaultfd, I'd suggest leaving it to the developer.
> Failing with appropriate errno makes more sense. If handled in the
> kernel, then the user may assume at the end of the operation that the
> src vma is completely unmapped. And if not correctness issues, it
> could lead to memory leaks.

I meant that in addition to the possibility of correctness issues due
to lack of atomicity, it could also lead to memory leaks, as the user
may assume that src vma is empty post-operation. IMHO, it's better to
fail with errno so that the user would fix the code with necessary
changes (like using DONTFORK, if forking).
> >
> > Thanks,
> >
> > --
> > Peter Xu
> >

2023-10-02 22:16:34

by Lokesh Gidra

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Mon, Oct 2, 2023 at 6:43 PM David Hildenbrand <[email protected]> wrote:
>
> On 02.10.23 17:55, Lokesh Gidra wrote:
> > On Mon, Oct 2, 2023 at 4:46 PM Lokesh Gidra <[email protected]> wrote:
> >>
> >> On Mon, Oct 2, 2023 at 4:21 PM Peter Xu <[email protected]> wrote:
> >>>
> >>> On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
> >>>> In case we cannot simply remap the page, the fallback sequence (from the
> >>>> cover letter) would be triggered.
> >>>>
> >>>> 1) UFFDIO_COPY
> >>>> 2) MADV_DONTNEED
> >>>>
> >>>> So we would just handle the operation internally without a fallback.
> >>>
> >>> Note that I think there will be a slight difference on whole remap
> >>> atomicity, on what happens if the page is modified after UFFDIO_COPY but
> >>> before DONTNEED.
> >>>
> >>> UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads
> >>> can be updating the pages when ioctl(UFFDIO_REMAP), data won't get lost
> >>> during movement, and it will generate a missing event after moved, with
> >>> latest data showing up on dest.
> >>>
> >>> I'm not sure that means such a fallback is a problem, Suren may know
> >>> better with the use case.
> >>
> >> Although there is no problem in using fallback with our use case but
> >> as a user of userfaultfd, I'd suggest leaving it to the developer.
> >> Failing with appropriate errno makes more sense. If handled in the
> >> kernel, then the user may assume at the end of the operation that the
> >> src vma is completely unmapped. And if not correctness issues, it
> >> could lead to memory leaks.
> >
> > I meant that in addition to the possibility of correctness issues due
> > to lack of atomicity, it could also lead to memory leaks, as the user
> > may assume that src vma is empty post-operation. IMHO, it's better to
> > fail with errno so that the user would fix the code with necessary
> > changes (like using DONTFORK, if forking).
>
> Leaving the atomicity discussion out because I think this can just be
> handled (e.g., the src_vma would always be empty post-operation):
>
> It might not necessarily be a good idea to only expose micro-operations
> to user space. If the user-space fallback will almost always be
> "UFFDIO_COPY+MADV_DONTNEED", then clearly the logical operation
> performed is moving data, ideally with zero-copy.
>
IMHO, such a fallback will be useful only if it's possible that only
some pages in the src vma fail due to this. But even then it would be
really useful to have a flag maybe like UFFDIO_REMAP_FALLBACK_COPY to
control if the user wants the fallback or not. OTOH, if this is
something that can be detected for the entire src vma, then failing
with errno is more appropriate.

Given that the patch is already quite complicated, I humbly suggest
leaving the fallback for now as a TODO.

> [as said as reply to Peter, one could still have magic flags for users
> that really want to detect when zero-copy is impossible]
>
> With a logical MOVE API users like compaction [as given in the cover
> letter], not every such user has to eventually implement fallback paths.
>
> But just my 2 cents, the UFFDIO_REMAP users probably can share what the
> exact use cases are and if fallbacks are required at all or if no-KSM +
> DONTFORK just does the trick.
>
> --
> Cheers,
>
> David / dhildenb
>

2023-10-03 20:05:01

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Mon, Oct 2, 2023 at 12:34 PM Lokesh Gidra <[email protected]> wrote:
>
> On Mon, Oct 2, 2023 at 6:43 PM David Hildenbrand <[email protected]> wrote:
> >
> > On 02.10.23 17:55, Lokesh Gidra wrote:
> > > On Mon, Oct 2, 2023 at 4:46 PM Lokesh Gidra <[email protected]> wrote:
> > >>
> > >> On Mon, Oct 2, 2023 at 4:21 PM Peter Xu <[email protected]> wrote:
> > >>>
> > >>> On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
> > >>>> In case we cannot simply remap the page, the fallback sequence (from the
> > >>>> cover letter) would be triggered.
> > >>>>
> > >>>> 1) UFFDIO_COPY
> > >>>> 2) MADV_DONTNEED
> > >>>>
> > >>>> So we would just handle the operation internally without a fallback.
> > >>>
> > >>> Note that I think there will be a slight difference on whole remap
> > >>> atomicity, on what happens if the page is modified after UFFDIO_COPY but
> > >>> before DONTNEED.
> > >>>
> > >>> UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads
> > >>> can be updating the pages when ioctl(UFFDIO_REMAP), data won't get lost
> > >>> during movement, and it will generate a missing event after moved, with
> > >>> latest data showing up on dest.
> > >>>
> > >>> I'm not sure that means such a fallback is a problem, Suren may know
> > >>> better with the use case.
> > >>
> > >> Although there is no problem in using fallback with our use case but
> > >> as a user of userfaultfd, I'd suggest leaving it to the developer.
> > >> Failing with appropriate errno makes more sense. If handled in the
> > >> kernel, then the user may assume at the end of the operation that the
> > >> src vma is completely unmapped. And if not correctness issues, it
> > >> could lead to memory leaks.
> > >
> > > I meant that in addition to the possibility of correctness issues due
> > > to lack of atomicity, it could also lead to memory leaks, as the user
> > > may assume that src vma is empty post-operation. IMHO, it's better to
> > > fail with errno so that the user would fix the code with necessary
> > > changes (like using DONTFORK, if forking).
> >
> > Leaving the atomicity discussion out because I think this can just be
> > handled (e.g., the src_vma would always be empty post-operation):
> >
> > It might not necessarily be a good idea to only expose micro-operations
> > to user space. If the user-space fallback will almost always be
> > "UFFDIO_COPY+MADV_DONTNEED", then clearly the logical operation
> > performed is moving data, ideally with zero-copy.
> >
> IMHO, such a fallback will be useful only if it's possible that only
> some pages in the src vma fail due to this. But even then it would be
> really useful to have a flag maybe like UFFDIO_REMAP_FALLBACK_COPY to
> control if the user wants the fallback or not. OTOH, if this is
> something that can be detected for the entire src vma, then failing
> with errno is more appropriate.
>
> Given that the patch is already quite complicated, I humbly suggest
> leaving the fallback for now as a TODO.

Ok, I think it makes sense to implement the strict remap logic but in
a way that we can easily add copy fallback if that's needed in the
future. So, I'll change UFFDIO_REMAP to UFFDIO_MOVE and will return
some unique error, like EBUSY when the page is not PAE. If we need to
add a copy fallback in the future, we will add a
UFFDIO_MOVE_MODE_ALLOW_COPY flag and will implement the copy
mechanism. Does that sound good?

>
> > [as said as reply to Peter, one could still have magic flags for users
> > that really want to detect when zero-copy is impossible]
> >
> > With a logical MOVE API users like compaction [as given in the cover
> > letter], not every such user has to eventually implement fallback paths.
> >
> > But just my 2 cents, the UFFDIO_REMAP users probably can share what the
> > exact use cases are and if fallbacks are required at all or if no-KSM +
> > DONTFORK just does the trick.
> >
> > --
> > Cheers,
> >
> > David / dhildenb
> >

2023-10-03 20:22:53

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Tue, Oct 03, 2023 at 01:04:44PM -0700, Suren Baghdasaryan wrote:
> Ok, I think it makes sense to implement the strict remap logic but in
> a way that we can easily add copy fallback if that's needed in the
> future. So, I'll change UFFDIO_REMAP to UFFDIO_MOVE and will return
> some unique error, like EBUSY when the page is not PAE. If we need to
> add a copy fallback in the future, we will add a
> UFFDIO_MOVE_MODE_ALLOW_COPY flag and will implement the copy
> mechanism. Does that sound good?

For the clear failing approach, sounds all good here.

For the name, no strong opinion, but is there any strong one over MOVE?
MOVE is a fine name, however considering UFFDIO_REMAP's long history.. I
tend to prefer keeping it called as REMAP - it still sounds sane, and
anyone who knows REMAP will know this is exactly that.

Thanks,

--
Peter Xu

2023-10-03 21:06:14

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On 03.10.23 22:04, Suren Baghdasaryan wrote:
> On Mon, Oct 2, 2023 at 12:34 PM Lokesh Gidra <[email protected]> wrote:
>>
>> On Mon, Oct 2, 2023 at 6:43 PM David Hildenbrand <[email protected]> wrote:
>>>
>>> On 02.10.23 17:55, Lokesh Gidra wrote:
>>>> On Mon, Oct 2, 2023 at 4:46 PM Lokesh Gidra <[email protected]> wrote:
>>>>>
>>>>> On Mon, Oct 2, 2023 at 4:21 PM Peter Xu <[email protected]> wrote:
>>>>>>
>>>>>> On Mon, Oct 02, 2023 at 10:00:03AM +0200, David Hildenbrand wrote:
>>>>>>> In case we cannot simply remap the page, the fallback sequence (from the
>>>>>>> cover letter) would be triggered.
>>>>>>>
>>>>>>> 1) UFFDIO_COPY
>>>>>>> 2) MADV_DONTNEED
>>>>>>>
>>>>>>> So we would just handle the operation internally without a fallback.
>>>>>>
>>>>>> Note that I think there will be a slight difference on whole remap
>>>>>> atomicity, on what happens if the page is modified after UFFDIO_COPY but
>>>>>> before DONTNEED.
>>>>>>
>>>>>> UFFDIO_REMAP guarantees full atomicity when moving the page, IOW, threads
>>>>>> can be updating the pages when ioctl(UFFDIO_REMAP), data won't get lost
>>>>>> during movement, and it will generate a missing event after moved, with
>>>>>> latest data showing up on dest.
>>>>>>
>>>>>> I'm not sure that means such a fallback is a problem, Suren may know
>>>>>> better with the use case.
>>>>>
>>>>> Although there is no problem in using fallback with our use case but
>>>>> as a user of userfaultfd, I'd suggest leaving it to the developer.
>>>>> Failing with appropriate errno makes more sense. If handled in the
>>>>> kernel, then the user may assume at the end of the operation that the
>>>>> src vma is completely unmapped. And if not correctness issues, it
>>>>> could lead to memory leaks.
>>>>
>>>> I meant that in addition to the possibility of correctness issues due
>>>> to lack of atomicity, it could also lead to memory leaks, as the user
>>>> may assume that src vma is empty post-operation. IMHO, it's better to
>>>> fail with errno so that the user would fix the code with necessary
>>>> changes (like using DONTFORK, if forking).
>>>
>>> Leaving the atomicity discussion out because I think this can just be
>>> handled (e.g., the src_vma would always be empty post-operation):
>>>
>>> It might not necessarily be a good idea to only expose micro-operations
>>> to user space. If the user-space fallback will almost always be
>>> "UFFDIO_COPY+MADV_DONTNEED", then clearly the logical operation
>>> performed is moving data, ideally with zero-copy.
>>>
>> IMHO, such a fallback will be useful only if it's possible that only
>> some pages in the src vma fail due to this. But even then it would be
>> really useful to have a flag maybe like UFFDIO_REMAP_FALLBACK_COPY to
>> control if the user wants the fallback or not. OTOH, if this is
>> something that can be detected for the entire src vma, then failing
>> with errno is more appropriate.
>>
>> Given that the patch is already quite complicated, I humbly suggest
>> leaving the fallback for now as a TODO.

I agree about the complexity, and I hope we can reduce that further.
Otherwise such things end up being a maintainance nightmare.

>
> Ok, I think it makes sense to implement the strict remap logic but in
> a way that we can easily add copy fallback if that's needed in the

I think whatever we do, we should

a) never talk about any of the implementation details (mapcount,
swapcount, PAE) towards the users

b) make it clear from the start that we might change the decision when
we fail (to the better or the worse); users should be prepared to
implement backup paths. We certainly don't want such behavior to be ABI.

I'd suggest documenting something like the following

"The operation may fail for various reasons. Usually, remapping of pages
that are not exclusive to the given process fail; once KSM might
dedduplicate pages or fork() COW-shares pages during fork() with child
processes, they are no longer exclusive. Further, the kernel might only
perform lightweight checks for detecting whether the pages are
exclusive, and return -EWHATSOEVER in case that check fails. To make the
operation more likely to succeed, KSM should be disabled, fork() should
be avoided or MADV_DONTFORK should be configured for the source VMA
before fork()."

> future. So, I'll change UFFDIO_REMAP to UFFDIO_MOVE and will return
> some unique error, like EBUSY when the page is not PAE. If we need to
> add a copy fallback in the future, we will add a
> UFFDIO_MOVE_MODE_ALLOW_COPY flag and will implement the copy
> mechanism. Does that sound good?

To me, if we're talking about moving data, then zero-copy is the
optimization and copy+delete would be the (slower) default.

If we're talking about remapping, then there is no copy; we're remapping
pages.


So if we'd ever want to support the copy case, one combination would be

UFFDIO_MOVE + UFFDIO_MOVE_ZERO_COPY_ONLY

whereby we would fail if the latter is not specified.

But just my thoughts.

--
Cheers,

David / dhildenb

2023-10-03 21:09:29

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On 03.10.23 22:21, Peter Xu wrote:
> On Tue, Oct 03, 2023 at 01:04:44PM -0700, Suren Baghdasaryan wrote:
>> Ok, I think it makes sense to implement the strict remap logic but in
>> a way that we can easily add copy fallback if that's needed in the
>> future. So, I'll change UFFDIO_REMAP to UFFDIO_MOVE and will return
>> some unique error, like EBUSY when the page is not PAE. If we need to
>> add a copy fallback in the future, we will add a
>> UFFDIO_MOVE_MODE_ALLOW_COPY flag and will implement the copy
>> mechanism. Does that sound good?
>
> For the clear failing approach, sounds all good here.
>
> For the name, no strong opinion, but is there any strong one over MOVE?

See my reply regarding MOVE (+zero-copy optimization) vs. REMAP. Just my
thoughts.

REMAP reminds me of mremap, which would never perform any copies,
because it can just do more expensive page remappings (modifying VMAs etc.).

> MOVE is a fine name, however considering UFFDIO_REMAP's long history.. I
> tend to prefer keeping it called as REMAP - it still sounds sane, and
> anyone who knows REMAP will know this is exactly that.

Sorry I have to ask: has this ever been discussed on the list? I don't
see any pointers. If not, then probably the number of people that know
about the history can be counted with my two hands and that shouldn't be
the basis for making decisions.

But again, remap vs. move is for me a semantical difference; and as I am
not a native speaker others might disagree and I might be just wrong.

--
Cheers,

David / dhildenb

2023-10-03 21:22:03

by Peter Xu

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Tue, Oct 03, 2023 at 11:08:07PM +0200, David Hildenbrand wrote:
> Sorry I have to ask: has this ever been discussed on the list? I don't see
> any pointers. If not, then probably the number of people that know about the
> history can be counted with my two hands and that shouldn't be the basis for
> making decisions.

For example:

https://lore.kernel.org/all/[email protected]/

--
Peter Xu

2023-10-03 22:26:58

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Tue, Oct 3, 2023 at 2:21 PM Peter Xu <[email protected]> wrote:
>
> On Tue, Oct 03, 2023 at 11:08:07PM +0200, David Hildenbrand wrote:
> > Sorry I have to ask: has this ever been discussed on the list? I don't see
> > any pointers. If not, then probably the number of people that know about the
> > history can be counted with my two hands and that shouldn't be the basis for
> > making decisions.
>
> For example:
>
> https://lore.kernel.org/all/[email protected]/

There was another submission in 2019:
https://lore.kernel.org/all/[email protected]/

Though both times it did not generate much discussion. I don't have a
strong preference though MOVE sounds more generic to me TBH (it
specifies the operation rather than REMAP which hints on how that
operation is carried out). But again, I'm fine either way.
As for UFFDIO_MOVE_ZERO_COPY_ONLY vs UFFDIO_MOVE_MODE_ALLOW_COPY, I
find it weird that the default (the most efficient/desired) mode of
operation needs a flag. I would prefer to have no flag initially and
add UFFDIO_MOVE_MODE_ALLOW_COPY or whatever name is more appropriate
when/if we ever need it. Makes sense?

>
> --
> Peter Xu
>
> --
> To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
>

2023-10-03 23:40:48

by Lokesh Gidra

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Tue, Oct 3, 2023 at 11:26 PM Suren Baghdasaryan <[email protected]> wrote:
>
> On Tue, Oct 3, 2023 at 2:21 PM Peter Xu <[email protected]> wrote:
> >
> > On Tue, Oct 03, 2023 at 11:08:07PM +0200, David Hildenbrand wrote:
> > > Sorry I have to ask: has this ever been discussed on the list? I don't see
> > > any pointers. If not, then probably the number of people that know about the
> > > history can be counted with my two hands and that shouldn't be the basis for
> > > making decisions.
> >
> > For example:
> >
> > https://lore.kernel.org/all/[email protected]/
>
> There was another submission in 2019:
> https://lore.kernel.org/all/[email protected]/
>
> Though both times it did not generate much discussion. I don't have a
> strong preference though MOVE sounds more generic to me TBH (it
> specifies the operation rather than REMAP which hints on how that
> operation is carried out). But again, I'm fine either way.

That's a good point. IMHO, if in future we want to have the fallback
implemented, then MOVE would be a more appropriate name than REMAP.

> As for UFFDIO_MOVE_ZERO_COPY_ONLY vs UFFDIO_MOVE_MODE_ALLOW_COPY, I
> find it weird that the default (the most efficient/desired) mode of
> operation needs a flag. I would prefer to have no flag initially and
> add UFFDIO_MOVE_MODE_ALLOW_COPY or whatever name is more appropriate
> when/if we ever need it. Makes sense?

Agreed!
>
> >
> > --
> > Peter Xu
> >
> > --
> > To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
> >

2023-10-06 12:31:41

by David Hildenbrand

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On 04.10.23 01:39, Lokesh Gidra wrote:
> On Tue, Oct 3, 2023 at 11:26 PM Suren Baghdasaryan <[email protected]> wrote:
>>
>> On Tue, Oct 3, 2023 at 2:21 PM Peter Xu <[email protected]> wrote:
>>>
>>> On Tue, Oct 03, 2023 at 11:08:07PM +0200, David Hildenbrand wrote:
>>>> Sorry I have to ask: has this ever been discussed on the list? I don't see
>>>> any pointers. If not, then probably the number of people that know about the
>>>> history can be counted with my two hands and that shouldn't be the basis for
>>>> making decisions.
>>>
>>> For example:
>>>
>>> https://lore.kernel.org/all/[email protected]/

Sorry, I had to process a family NMI the last couple of days.

>>
>> There was another submission in 2019:
>> https://lore.kernel.org/all/[email protected]/

It would be good to link them in the cover letter and shortly explain
why that wasn't merged back then (if there was any reason).

>>
>> Though both times it did not generate much discussion. I don't have a
>> strong preference though MOVE sounds more generic to me TBH (it
>> specifies the operation rather than REMAP which hints on how that
>> operation is carried out). But again, I'm fine either way.
>
> That's a good point. IMHO, if in future we want to have the fallback
> implemented, then MOVE would be a more appropriate name than REMAP.
>
>> As for UFFDIO_MOVE_ZERO_COPY_ONLY vs UFFDIO_MOVE_MODE_ALLOW_COPY, I
>> find it weird that the default (the most efficient/desired) mode of
>> operation needs a flag. I would prefer to have no flag initially and
>> add UFFDIO_MOVE_MODE_ALLOW_COPY or whatever name is more appropriate
>> when/if we ever need it. Makes sense?
>
> Agreed!

I agree. One could have UFFDIO_MOVE that is best-effort and documented
like that, and a to-be-named future extension that always works but
might be more expensive.


Ideally we'd have an interface that does not expose and/or rely on such
low-level information and simply always works, but getting that would
mean that we'd have to implement the fallback immediately ... so I guess
we'll have to expose a best-effort interface first.

--
Cheers,

David / dhildenb

2023-10-06 15:02:35

by Suren Baghdasaryan

[permalink] [raw]
Subject: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI

On Fri, Oct 6, 2023 at 5:30 AM David Hildenbrand <[email protected]> wrote:
>
> On 04.10.23 01:39, Lokesh Gidra wrote:
> > On Tue, Oct 3, 2023 at 11:26 PM Suren Baghdasaryan <[email protected]> wrote:
> >>
> >> On Tue, Oct 3, 2023 at 2:21 PM Peter Xu <[email protected]> wrote:
> >>>
> >>> On Tue, Oct 03, 2023 at 11:08:07PM +0200, David Hildenbrand wrote:
> >>>> Sorry I have to ask: has this ever been discussed on the list? I don't see
> >>>> any pointers. If not, then probably the number of people that know about the
> >>>> history can be counted with my two hands and that shouldn't be the basis for
> >>>> making decisions.
> >>>
> >>> For example:
> >>>
> >>> https://lore.kernel.org/all/[email protected]/
>
> Sorry, I had to process a family NMI the last couple of days.
>
> >>
> >> There was another submission in 2019:
> >> https://lore.kernel.org/all/[email protected]/
>
> It would be good to link them in the cover letter and shortly explain
> why that wasn't merged back then (if there was any reason).

Will do. I could not find the reason but will check again.

>
> >>
> >> Though both times it did not generate much discussion. I don't have a
> >> strong preference though MOVE sounds more generic to me TBH (it
> >> specifies the operation rather than REMAP which hints on how that
> >> operation is carried out). But again, I'm fine either way.
> >
> > That's a good point. IMHO, if in future we want to have the fallback
> > implemented, then MOVE would be a more appropriate name than REMAP.
> >
> >> As for UFFDIO_MOVE_ZERO_COPY_ONLY vs UFFDIO_MOVE_MODE_ALLOW_COPY, I
> >> find it weird that the default (the most efficient/desired) mode of
> >> operation needs a flag. I would prefer to have no flag initially and
> >> add UFFDIO_MOVE_MODE_ALLOW_COPY or whatever name is more appropriate
> >> when/if we ever need it. Makes sense?
> >
> > Agreed!
>
> I agree. One could have UFFDIO_MOVE that is best-effort and documented
> like that, and a to-be-named future extension that always works but
> might be more expensive.
>
>
> Ideally we'd have an interface that does not expose and/or rely on such
> low-level information and simply always works, but getting that would
> mean that we'd have to implement the fallback immediately ... so I guess
> we'll have to expose a best-effort interface first.

Sounds good. I'll try to post the next version early next week. Thanks
for the input folks!

>
> --
> Cheers,
>
> David / dhildenb
>