The PAGEMAP_SCAN IOCTL on the pagemap file can be used to get and/or
clear information about page table entries. The following operations are
supported by this ioctl:
- Get information on whether pages have been written to (PAGE_IS_WRITTEN),
are file mapped (PAGE_IS_FILE), present (PAGE_IS_PRESENT) or swapped
(PAGE_IS_SWAPPED).
- Find pages which have been written to and/or write-protect them
(atomic PM_SCAN_OP_GET + PM_SCAN_OP_WP).
This IOCTL can be extended later to report more PTE bits.
Signed-off-by: Muhammad Usama Anjum <[email protected]>
---
Changes in v19:
- Interface changes such as renaming; the return mask and WP can now be
  used with any flags specified in the masks
- Internal code changes
Changes in v18:
- Rebased on top of next-20230613
- ptep_get() updates
- remove pmd_trans_unstable() and add ACTION_AGAIN
- Review updates (Michal)
Changes in v17:
- Rebased on next-20230606
- Made make_uffd_wp_*_pte() better and minor changes
Changes in v16:
- Fixed a corner case where kernel writes beyond user buffer by one
element
- Bring back exclusive PM_SCAN_OP_WP
- Cosmetic changes
Changes in v15:
- Build fix:
- Use generic tlb flush function in pagemap_scan_pmd_entry() instead of
using x86 specific flush function in do_pagemap_scan()
- Remove #ifdef from pagemap_scan_hugetlb_entry()
- Use mm instead of undefined vma->vm_mm
Changes in v14:
- Fix build error caused by #ifdef added at last minute in some configs
Changes in v13:
- Review updates
- mmap_read_lock_killable() instead of mmap_read_lock()
- Replace uffd_wp_range() with helpers which increase performance
  drastically for OP_WP operations by reducing the number of TLB
  flushes
- Add MMU_NOTIFY_PROTECTION_VMA notification for the memory range
Changes in v12:
- Add hugetlb support to cover all memory types
- Merge "userfaultfd: Define dummy uffd_wp_range()" with this patch
- Review updates to the code
Changes in v11:
- Find written pages in a better way
- Fix a corner case (thanks Paul)
- Improve the code/comments
- remove ENGAGE_WP + ! GET operation
- shorten the commit message in favour of moving documentation to
pagemap.rst
Changes in v10:
- move changes in tools/include/uapi/linux/fs.h to separate patch
- update commit message
Changes in v8:
- Correct is_pte_uffd_wp()
- Improve readability and error checks
- Remove some un-needed code
Changes in v7:
- Rebase on top of latest next
- Fix some corner cases
- Base soft-dirty on the uffd wp async
- Update the terminologies
- Optimize the memory usage inside the ioctl
Changes in v6:
- Rename variables and update comments
- Make IOCTL independent of soft_dirty config
- Change masks and bitmap type to _u64
- Improve code quality
Changes in v5:
- Remove tlb flushing even for clear operation
Changes in v4:
- Update the interface and implementation
Changes in v3:
- Tighten the user-kernel interface by using explicit types and add more
error checking
Changes in v2:
- Convert the interface from syscall to ioctl
- Remove pidfd support as it doesn't make sense in ioctl
---
fs/proc/task_mmu.c | 526 ++++++++++++++++++++++++++++++++++++++++
include/linux/hugetlb.h | 1 +
include/uapi/linux/fs.h | 53 ++++
mm/hugetlb.c | 2 +-
4 files changed, 581 insertions(+), 1 deletion(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 507cd4e59d07..1844beea54ea 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -19,6 +19,7 @@
#include <linux/shmem_fs.h>
#include <linux/uaccess.h>
#include <linux/pkeys.h>
+#include <linux/minmax.h>
#include <asm/elf.h>
#include <asm/tlb.h>
@@ -1765,11 +1766,536 @@ static int pagemap_release(struct inode *inode, struct file *file)
return 0;
}
+#define PM_SCAN_REQUIRE_UFFD (1ULL << 63)
+
+#define PM_SCAN_FOUND_MAX_PAGES (1)
+#define PM_SCAN_BUFFER_FULL (-256)
+
+#define PM_SCAN_BITS_ALL (PAGE_IS_WRITTEN | PAGE_IS_FILE | \
+ PAGE_IS_PRESENT | PAGE_IS_SWAPPED)
+#define PM_SCAN_OPS (PM_SCAN_OP_GET | PM_SCAN_OP_WP)
+#define IS_PM_SCAN_GET(flags) (flags & PM_SCAN_OP_GET)
+#define IS_PM_SCAN_WP(flags) (flags & PM_SCAN_OP_WP)
+#define PM_SCAN_FLAGS(wt, file, present, swap) \
+ ((wt) | ((file) << 1) | ((present) << 2) | ((swap) << 3))
+
+struct pagemap_scan_private {
+ struct page_region *vec_buf, cur_buf;
+ unsigned long long vec_buf_len, vec_buf_index, max_pages, found_pages, flags;
+ unsigned long long required_mask, anyof_mask, excluded_mask, return_mask;
+};
+
+static inline bool is_pte_uffd_wp(pte_t pte)
+{
+ return (pte_present(pte) && pte_uffd_wp(pte)) ||
+ pte_swp_uffd_wp_any(pte);
+}
+
+static inline void make_uffd_wp_pte(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *pte)
+{
+ pte_t ptent = ptep_get(pte);
+
+ if (pte_present(ptent)) {
+ pte_t old_pte;
+
+ old_pte = ptep_modify_prot_start(vma, addr, pte);
+ ptent = pte_mkuffd_wp(ptent);
+ ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent);
+ } else if (is_swap_pte(ptent)) {
+ ptent = pte_swp_mkuffd_wp(ptent);
+ set_pte_at(vma->vm_mm, addr, pte, ptent);
+ } else {
+ set_pte_at(vma->vm_mm, addr, pte,
+ make_pte_marker(PTE_MARKER_UFFD_WP));
+ }
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline bool is_pmd_uffd_wp(pmd_t pmd)
+{
+ return (pmd_present(pmd) && pmd_uffd_wp(pmd)) ||
+ (is_swap_pmd(pmd) && pmd_swp_uffd_wp(pmd));
+}
+
+static inline void make_uffd_wp_pmd(struct vm_area_struct *vma,
+ unsigned long addr, pmd_t *pmdp)
+{
+ pmd_t old, pmd = *pmdp;
+
+ if (pmd_present(pmd)) {
+ old = pmdp_invalidate_ad(vma, addr, pmdp);
+ pmd = pmd_mkuffd_wp(old);
+ set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
+ } else if (is_migration_entry(pmd_to_swp_entry(pmd))) {
+ pmd = pmd_swp_mkuffd_wp(pmd);
+ set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
+ }
+}
+#endif
+
+#ifdef CONFIG_HUGETLB_PAGE
+static inline bool is_huge_pte_uffd_wp(pte_t pte)
+{
+ return (pte_present(pte) && huge_pte_uffd_wp(pte)) ||
+ pte_swp_uffd_wp_any(pte);
+}
+
+static inline void make_uffd_wp_huge_pte(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep,
+ pte_t ptent)
+{
+ if (is_hugetlb_entry_hwpoisoned(ptent) || is_pte_marker(ptent))
+ return;
+
+ if (is_hugetlb_entry_migration(ptent))
+ set_huge_pte_at(vma->vm_mm, addr, ptep,
+ pte_swp_mkuffd_wp(ptent));
+ else if (!huge_pte_none(ptent))
+ huge_ptep_modify_prot_commit(vma, addr, ptep, ptent,
+ huge_pte_mkuffd_wp(ptent));
+ else
+ set_huge_pte_at(vma->vm_mm, addr, ptep,
+ make_pte_marker(PTE_MARKER_UFFD_WP));
+}
+#endif
+
+static int pagemap_scan_test_walk(unsigned long start, unsigned long end,
+ struct mm_walk *walk)
+{
+ struct pagemap_scan_private *p = walk->private;
+ struct vm_area_struct *vma = walk->vma;
+
+ if ((p->flags & PM_SCAN_REQUIRE_UFFD) && (!userfaultfd_wp_async(vma) ||
+ !userfaultfd_wp_use_markers(vma)))
+ return -EPERM;
+
+ if (vma->vm_flags & VM_PFNMAP)
+ return 1;
+
+ return 0;
+}
+
+static int pagemap_scan_output(unsigned long bitmap,
+ struct pagemap_scan_private *p,
+ unsigned long addr, unsigned int n_pages)
+{
+ struct page_region *cur_buf = &p->cur_buf;
+
+ if (!n_pages)
+ return -EINVAL;
+
+ bitmap &= p->return_mask;
+
+ if (cur_buf->flags == bitmap &&
+ cur_buf->start + cur_buf->len * PAGE_SIZE == addr) {
+ cur_buf->len += n_pages;
+ p->found_pages += n_pages;
+ } else {
+ if (cur_buf->len) {
+ if (p->vec_buf_index >= p->vec_buf_len)
+ return PM_SCAN_BUFFER_FULL;
+
+ memcpy(&p->vec_buf[p->vec_buf_index], cur_buf,
+ sizeof(*p->vec_buf));
+ p->vec_buf_index++;
+ }
+
+ cur_buf->start = addr;
+ cur_buf->len = n_pages;
+ cur_buf->flags = bitmap;
+ p->found_pages += n_pages;
+ }
+
+ if (p->found_pages == p->max_pages)
+ return PM_SCAN_FOUND_MAX_PAGES;
+
+ return 0;
+}
+
+static bool pagemap_scan_is_interesting_page(unsigned long bitmap,
+ struct pagemap_scan_private *p)
+{
+ if ((p->required_mask & bitmap) != p->required_mask)
+ return false;
+ if (p->anyof_mask && !(p->anyof_mask & bitmap))
+ return false;
+ if (p->excluded_mask & bitmap)
+ return false;
+
+ return true;
+}
+
+static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
+ unsigned long end, struct mm_walk *walk)
+{
+ bool is_written, flush = false, is_interesting = true;
+ struct pagemap_scan_private *p = walk->private;
+ struct vm_area_struct *vma = walk->vma;
+ unsigned long bitmap, addr = end;
+ pte_t *pte, *orig_pte, ptent;
+ spinlock_t *ptl;
+ int ret = 0;
+
+ arch_enter_lazy_mmu_mode();
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ ptl = pmd_trans_huge_lock(pmd, vma);
+ if (ptl) {
+ unsigned long n_pages = (end - start)/PAGE_SIZE;
+
+ if (p->max_pages && n_pages > p->max_pages - p->found_pages)
+ n_pages = p->max_pages - p->found_pages;
+
+ is_written = !is_pmd_uffd_wp(*pmd);
+
+ /*
+	 * Break the huge page into small pages if the WP operation
+	 * needs to be performed on only a portion of the huge page.
+ */
+ if (is_written && IS_PM_SCAN_WP(p->flags) &&
+ n_pages < HPAGE_SIZE/PAGE_SIZE) {
+ spin_unlock(ptl);
+
+ split_huge_pmd(vma, pmd, start);
+ goto process_smaller_pages;
+ }
+
+ bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
+ pmd_present(*pmd), is_swap_pmd(*pmd));
+
+ if (IS_PM_SCAN_GET(p->flags)) {
+ is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
+ if (is_interesting)
+ ret = pagemap_scan_output(bitmap, p, start, n_pages);
+ }
+
+ if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
+ ret >= 0) {
+ make_uffd_wp_pmd(vma, start, pmd);
+ flush_tlb_range(vma, start, end);
+ }
+
+ spin_unlock(ptl);
+
+ arch_leave_lazy_mmu_mode();
+ return ret;
+ }
+
+process_smaller_pages:
+#endif
+
+ orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
+ if (!pte) {
+ walk->action = ACTION_AGAIN;
+ return 0;
+ }
+
+ for (addr = start; addr < end && !ret; pte++, addr += PAGE_SIZE) {
+ ptent = ptep_get(pte);
+ is_written = !is_pte_uffd_wp(ptent);
+
+ bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
+ pte_present(ptent), is_swap_pte(ptent));
+
+ if (IS_PM_SCAN_GET(p->flags)) {
+ is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
+ if (is_interesting)
+ ret = pagemap_scan_output(bitmap, p, addr, 1);
+ }
+
+ if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
+ ret >= 0) {
+ make_uffd_wp_pte(vma, addr, pte);
+ flush = true;
+ }
+ }
+
+ if (flush)
+ flush_tlb_range(vma, start, addr);
+
+ pte_unmap_unlock(orig_pte, ptl);
+ arch_leave_lazy_mmu_mode();
+
+ cond_resched();
+ return ret;
+}
+
+#ifdef CONFIG_HUGETLB_PAGE
+static int pagemap_scan_hugetlb_entry(pte_t *ptep, unsigned long hmask,
+ unsigned long start, unsigned long end,
+ struct mm_walk *walk)
+{
+ unsigned long n_pages = (end - start)/PAGE_SIZE;
+ struct pagemap_scan_private *p = walk->private;
+ struct vm_area_struct *vma = walk->vma;
+ bool is_written, is_interesting = true;
+ struct hstate *h = hstate_vma(vma);
+ unsigned long bitmap;
+ spinlock_t *ptl;
+ int ret = 0;
+ pte_t ptent;
+
+ if (IS_PM_SCAN_WP(p->flags) && n_pages < HPAGE_SIZE/PAGE_SIZE)
+ return -EINVAL;
+
+ if (p->max_pages && n_pages > p->max_pages - p->found_pages)
+ n_pages = p->max_pages - p->found_pages;
+
+ if (IS_PM_SCAN_WP(p->flags)) {
+ i_mmap_lock_write(vma->vm_file->f_mapping);
+ ptl = huge_pte_lock(h, vma->vm_mm, ptep);
+ }
+
+ ptent = huge_ptep_get(ptep);
+ is_written = !is_huge_pte_uffd_wp(ptent);
+
+ /*
+ * Partial hugetlb page clear isn't supported
+ */
+ if (is_written && IS_PM_SCAN_WP(p->flags) &&
+ n_pages < HPAGE_SIZE/PAGE_SIZE) {
+ ret = -ENOSPC;
+ goto unlock_and_return;
+ }
+
+ bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
+ pte_present(ptent), is_swap_pte(ptent));
+
+ if (IS_PM_SCAN_GET(p->flags)) {
+ is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
+ if (is_interesting)
+ ret = pagemap_scan_output(bitmap, p, start, n_pages);
+ }
+
+ if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
+ ret >= 0) {
+ make_uffd_wp_huge_pte(vma, start, ptep, ptent);
+ flush_hugetlb_tlb_range(vma, start, end);
+ }
+
+unlock_and_return:
+ if (IS_PM_SCAN_WP(p->flags)) {
+ spin_unlock(ptl);
+ i_mmap_unlock_write(vma->vm_file->f_mapping);
+ }
+
+ return ret;
+}
+#else
+#define pagemap_scan_hugetlb_entry NULL
+#endif
+
+static int pagemap_scan_pte_hole(unsigned long addr, unsigned long end,
+ int depth, struct mm_walk *walk)
+{
+ unsigned long n_pages = (end - addr)/PAGE_SIZE;
+ struct pagemap_scan_private *p = walk->private;
+ struct vm_area_struct *vma = walk->vma;
+ int ret = 0;
+
+ if (!vma || !IS_PM_SCAN_GET(p->flags))
+ return 0;
+
+ if (p->max_pages && n_pages > p->max_pages - p->found_pages)
+ n_pages = p->max_pages - p->found_pages;
+
+ ret = pagemap_scan_output(PM_SCAN_FLAGS(false, (bool)vma->vm_file,
+ false, false), p, addr, n_pages);
+
+ return ret;
+}
+
+static const struct mm_walk_ops pagemap_scan_ops = {
+ .test_walk = pagemap_scan_test_walk,
+ .pmd_entry = pagemap_scan_pmd_entry,
+ .pte_hole = pagemap_scan_pte_hole,
+ .hugetlb_entry = pagemap_scan_hugetlb_entry,
+};
+
+static int pagemap_scan_args_valid(struct pm_scan_arg *arg, unsigned long start,
+ struct page_region __user *vec)
+{
+ /* Detect illegal size, flags, len and masks */
+ if (arg->size != sizeof(struct pm_scan_arg))
+ return -EINVAL;
+ if (!arg->flags)
+ return -EINVAL;
+ if (arg->flags & ~PM_SCAN_OPS)
+ return -EINVAL;
+ if (!arg->len)
+ return -EINVAL;
+ if ((arg->required_mask | arg->anyof_mask | arg->excluded_mask |
+ arg->return_mask) & ~PM_SCAN_BITS_ALL)
+ return -EINVAL;
+ if (!arg->required_mask && !arg->anyof_mask &&
+ !arg->excluded_mask)
+ return -EINVAL;
+ if (!arg->return_mask)
+ return -EINVAL;
+
+ /* Validate memory range */
+ if (!IS_ALIGNED(start, PAGE_SIZE))
+ return -EINVAL;
+ if (!access_ok((void __user *)start, arg->len))
+ return -EFAULT;
+
+ if (IS_PM_SCAN_GET(arg->flags)) {
+ if (!arg->vec)
+ return -EINVAL;
+ if (arg->vec_len == 0)
+ return -EINVAL;
+ if (!access_ok((void __user *)arg->vec,
+ arg->vec_len * sizeof(struct page_region)))
+ return -EFAULT;
+ }
+
+ if (IS_PM_SCAN_WP(arg->flags) && !IS_PM_SCAN_GET(arg->flags) &&
+ arg->max_pages)
+ return -EINVAL;
+
+ return 0;
+}
+
+static long do_pagemap_scan(struct mm_struct *mm, unsigned long __arg)
+{
+ struct pm_scan_arg __user *uarg = (struct pm_scan_arg __user *)__arg;
+ unsigned long long start, end, walk_start, walk_end;
+ unsigned long long empty_slots, vec_index = 0;
+ struct mmu_notifier_range range;
+ struct page_region __user *vec;
+ struct pagemap_scan_private p;
+ struct pm_scan_arg arg;
+ int ret = 0;
+
+ if (copy_from_user(&arg, uarg, sizeof(arg)))
+ return -EFAULT;
+
+ start = untagged_addr((unsigned long)arg.start);
+ vec = (struct page_region *)untagged_addr((unsigned long)arg.vec);
+
+ ret = pagemap_scan_args_valid(&arg, start, vec);
+ if (ret)
+ return ret;
+
+ end = start + arg.len;
+ p.max_pages = arg.max_pages;
+ p.found_pages = 0;
+ p.required_mask = arg.required_mask;
+ p.anyof_mask = arg.anyof_mask;
+ p.excluded_mask = arg.excluded_mask;
+ p.return_mask = arg.return_mask;
+ p.flags = arg.flags;
+ p.flags |= ((p.required_mask | p.anyof_mask | p.excluded_mask) &
+ PAGE_IS_WRITTEN) ? PM_SCAN_REQUIRE_UFFD : 0;
+ p.cur_buf.start = p.cur_buf.len = p.cur_buf.flags = 0;
+ p.vec_buf = NULL;
+ p.vec_buf_len = PAGEMAP_WALK_SIZE >> PAGE_SHIFT;
+
+	/*
+	 * Allocate a smaller buffer to get output from inside the page walk
+	 * functions and walk the page range in PAGEMAP_WALK_SIZE chunks. We
+	 * want to return output to the user in compact form, where no two
+	 * consecutive regions are contiguous with the same flags. So keep
+	 * the latest element in p.cur_buf between walks and copy p.cur_buf
+	 * to the user buffer at the end of the walk.
+	 */
+ if (IS_PM_SCAN_GET(p.flags)) {
+ p.vec_buf = kmalloc_array(p.vec_buf_len, sizeof(*p.vec_buf),
+ GFP_KERNEL);
+ if (!p.vec_buf)
+ return -ENOMEM;
+ }
+
+ if (IS_PM_SCAN_WP(p.flags)) {
+ mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
+ mm, start, end);
+ mmu_notifier_invalidate_range_start(&range);
+ }
+
+ walk_start = walk_end = start;
+ while (walk_end < end && !ret) {
+ if (IS_PM_SCAN_GET(p.flags)) {
+ p.vec_buf_index = 0;
+
+			/*
+			 * All data is first accumulated in cur_buf. When a
+			 * non-mergeable page is found, cur_buf is pushed to
+			 * vec_buf and the new data starts a fresh cur_buf.
+			 * Reserve one slot for the pending cur_buf entry.
+			 */
+ empty_slots = arg.vec_len - vec_index;
+ p.vec_buf_len = min(p.vec_buf_len, empty_slots - 1);
+ }
+
+ walk_end = (walk_start + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK;
+ if (walk_end > end)
+ walk_end = end;
+
+ ret = mmap_read_lock_killable(mm);
+ if (ret)
+ goto free_data;
+ ret = walk_page_range(mm, walk_start, walk_end,
+ &pagemap_scan_ops, &p);
+ mmap_read_unlock(mm);
+
+ if (ret && ret != PM_SCAN_BUFFER_FULL &&
+ ret != PM_SCAN_FOUND_MAX_PAGES)
+ goto free_data;
+
+ walk_start = walk_end;
+ if (IS_PM_SCAN_GET(p.flags) && p.vec_buf_index) {
+ if (copy_to_user(&vec[vec_index], p.vec_buf,
+ p.vec_buf_index * sizeof(*p.vec_buf))) {
+ /*
+ * Return error even though the OP succeeded
+ */
+ ret = -EFAULT;
+ goto free_data;
+ }
+ vec_index += p.vec_buf_index;
+ }
+ }
+
+ if (p.cur_buf.len) {
+ if (copy_to_user(&vec[vec_index], &p.cur_buf, sizeof(p.cur_buf))) {
+ ret = -EFAULT;
+ goto free_data;
+ }
+ vec_index++;
+ }
+
+ ret = vec_index;
+
+free_data:
+ if (IS_PM_SCAN_WP(p.flags))
+ mmu_notifier_invalidate_range_end(&range);
+
+ kfree(p.vec_buf);
+ return ret;
+}
+
+static long do_pagemap_cmd(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ struct mm_struct *mm = file->private_data;
+
+ switch (cmd) {
+ case PAGEMAP_SCAN:
+ return do_pagemap_scan(mm, arg);
+
+ default:
+ return -EINVAL;
+ }
+}
+
const struct file_operations proc_pagemap_operations = {
.llseek = mem_lseek, /* borrow this */
.read = pagemap_read,
.open = pagemap_open,
.release = pagemap_release,
+ .unlocked_ioctl = do_pagemap_cmd,
+ .compat_ioctl = do_pagemap_cmd,
};
#endif /* CONFIG_PROC_PAGE_MONITOR */
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index beb7c63d2871..a6e773c3e2b4 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -261,6 +261,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
unsigned long cp_flags);
bool is_hugetlb_entry_migration(pte_t pte);
+bool is_hugetlb_entry_hwpoisoned(pte_t pte);
void hugetlb_unshare_all_pmds(struct vm_area_struct *vma);
#else /* !CONFIG_HUGETLB_PAGE */
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index b7b56871029c..a9fb44db84a3 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -305,4 +305,57 @@ typedef int __bitwise __kernel_rwf_t;
#define RWF_SUPPORTED (RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
RWF_APPEND)
+/* Pagemap ioctl */
+#define PAGEMAP_SCAN _IOWR('f', 16, struct pm_scan_arg)
+
+/* Bits are set in flags of the page_region and masks in pm_scan_args */
+#define PAGE_IS_WRITTEN (1 << 0)
+#define PAGE_IS_FILE (1 << 1)
+#define PAGE_IS_PRESENT (1 << 2)
+#define PAGE_IS_SWAPPED (1 << 3)
+
+/*
+ * struct page_region - Page region with flags
+ * @start: Start of the region
+ * @len: Length of the region in pages
+ * @flags: Bits set for the region
+ */
+struct page_region {
+ __u64 start;
+ __u64 len;
+ __u64 flags;
+};
+
+/*
+ * struct pm_scan_arg - Pagemap ioctl argument
+ * @size: Size of the structure
+ * @flags: Flags for the IOCTL
+ * @start: Starting address of the region
+ * @len: Length of the region (all the pages in this range are included)
+ * @vec: Address of page_region struct array for output
+ * @vec_len: Length of the page_region struct array
+ * @max_pages: Optional max return pages
+ * @required_mask: Required mask - All of these bits have to be set in the PTE
+ * @anyof_mask: Any mask - At least one of these bits has to be set in the PTE
+ * @excluded_mask: Exclude mask - None of these bits can be set in the PTE
+ * @return_mask: Bits that are to be reported in page_region
+ */
+struct pm_scan_arg {
+ __u64 size;
+ __u64 flags;
+ __u64 start;
+ __u64 len;
+ __u64 vec;
+ __u64 vec_len;
+ __u64 max_pages;
+ __u64 required_mask;
+ __u64 anyof_mask;
+ __u64 excluded_mask;
+ __u64 return_mask;
+};
+
+/* Supported flags */
+#define PM_SCAN_OP_GET (1 << 0)
+#define PM_SCAN_OP_WP (1 << 1)
+
#endif /* _UAPI_LINUX_FS_H */
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0db13167b1ee..7e60f0f3fd03 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4980,7 +4980,7 @@ bool is_hugetlb_entry_migration(pte_t pte)
return false;
}
-static bool is_hugetlb_entry_hwpoisoned(pte_t pte)
+bool is_hugetlb_entry_hwpoisoned(pte_t pte)
{
swp_entry_t swp;
--
2.39.2
Hi Muhammad,
kernel test robot noticed the following build warnings:
[auto build test WARNING on akpm-mm/mm-everything]
[also build test WARNING on next-20230615]
[cannot apply to linus/master v6.4-rc6]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Muhammad-Usama-Anjum/userfaultfd-UFFD_FEATURE_WP_ASYNC/20230615-225037
base: https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/r/20230615141144.665148-3-usama.anjum%40collabora.com
patch subject: [PATCH v19 2/5] fs/proc/task_mmu: Implement IOCTL to get and optionally clear info about PTEs
config: m68k-allyesconfig (https://download.01.org/0day-ci/archive/20230616/[email protected]/config)
compiler: m68k-linux-gcc (GCC) 12.3.0
reproduce (this is a W=1 build):
mkdir -p ~/bin
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
git remote add akpm-mm https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git
git fetch akpm-mm mm-everything
git checkout akpm-mm/mm-everything
b4 shazam https://lore.kernel.org/r/[email protected]
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.3.0 ~/bin/make.cross W=1 O=build_dir ARCH=m68k olddefconfig
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.3.0 ~/bin/make.cross W=1 O=build_dir ARCH=m68k SHELL=/bin/bash fs/proc/
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <[email protected]>
| Closes: https://lore.kernel.org/oe-kbuild-all/[email protected]/
All warnings (new ones prefixed by >>):
In file included from include/asm-generic/bug.h:5,
from arch/m68k/include/asm/bug.h:32,
from include/linux/bug.h:5,
from include/linux/mmdebug.h:5,
from include/linux/mm.h:6,
from include/linux/pagewalk.h:5,
from fs/proc/task_mmu.c:2:
fs/proc/task_mmu.c: In function 'pagemap_scan_args_valid':
>> fs/proc/task_mmu.c:2148:32: warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
2148 | if (!access_ok((void __user *)arg->vec,
| ^
include/linux/compiler.h:76:45: note: in definition of macro 'likely'
76 | # define likely(x) __builtin_expect(!!(x), 1)
| ^
fs/proc/task_mmu.c:2148:22: note: in expansion of macro 'access_ok'
2148 | if (!access_ok((void __user *)arg->vec,
| ^~~~~~~~~
vim +2148 fs/proc/task_mmu.c
2115
2116 static int pagemap_scan_args_valid(struct pm_scan_arg *arg, unsigned long start,
2117 struct page_region __user *vec)
2118 {
2119 /* Detect illegal size, flags, len and masks */
2120 if (arg->size != sizeof(struct pm_scan_arg))
2121 return -EINVAL;
2122 if (!arg->flags)
2123 return -EINVAL;
2124 if (arg->flags & ~PM_SCAN_OPS)
2125 return -EINVAL;
2126 if (!arg->len)
2127 return -EINVAL;
2128 if ((arg->required_mask | arg->anyof_mask | arg->excluded_mask |
2129 arg->return_mask) & ~PM_SCAN_BITS_ALL)
2130 return -EINVAL;
2131 if (!arg->required_mask && !arg->anyof_mask &&
2132 !arg->excluded_mask)
2133 return -EINVAL;
2134 if (!arg->return_mask)
2135 return -EINVAL;
2136
2137 /* Validate memory range */
2138 if (!IS_ALIGNED(start, PAGE_SIZE))
2139 return -EINVAL;
2140 if (!access_ok((void __user *)start, arg->len))
2141 return -EFAULT;
2142
2143 if (IS_PM_SCAN_GET(arg->flags)) {
2144 if (!arg->vec)
2145 return -EINVAL;
2146 if (arg->vec_len == 0)
2147 return -EINVAL;
> 2148 if (!access_ok((void __user *)arg->vec,
2149 arg->vec_len * sizeof(struct page_region)))
2150 return -EFAULT;
2151 }
2152
2153 if (IS_PM_SCAN_WP(arg->flags) && !IS_PM_SCAN_GET(arg->flags) &&
2154 arg->max_pages)
2155 return -EINVAL;
2156
2157 return 0;
2158 }
2159
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
On Thu, Jun 15, 2023 at 07:11:41PM +0500, Muhammad Usama Anjum wrote:
> +static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
> + unsigned long end, struct mm_walk *walk)
> +{
> + bool is_written, flush = false, is_interesting = true;
> + struct pagemap_scan_private *p = walk->private;
> + struct vm_area_struct *vma = walk->vma;
> + unsigned long bitmap, addr = end;
> + pte_t *pte, *orig_pte, ptent;
> + spinlock_t *ptl;
> + int ret = 0;
> +
> + arch_enter_lazy_mmu_mode();
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + ptl = pmd_trans_huge_lock(pmd, vma);
> + if (ptl) {
> + unsigned long n_pages = (end - start)/PAGE_SIZE;
> +
> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
> + n_pages = p->max_pages - p->found_pages;
> +
> + is_written = !is_pmd_uffd_wp(*pmd);
> +
> + /*
> + * Break huge page into small pages if the WP operation need to
> + * be performed is on a portion of the huge page.
> + */
> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
> + spin_unlock(ptl);
> +
> + split_huge_pmd(vma, pmd, start);
> + goto process_smaller_pages;
> + }
> +
> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
> + pmd_present(*pmd), is_swap_pmd(*pmd));
> +
> + if (IS_PM_SCAN_GET(p->flags)) {
> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
> + if (is_interesting)
> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
> + }
> +
> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
> + ret >= 0) {
> + make_uffd_wp_pmd(vma, start, pmd);
> + flush_tlb_range(vma, start, end);
> + }
> +
> + spin_unlock(ptl);
> +
> + arch_leave_lazy_mmu_mode();
> + return ret;
> + }
> +
> +process_smaller_pages:
> +#endif
> +
> + orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
> + if (!pte) {
Do we need to unlock ptl here?
spin_unlock(ptl);
> + walk->action = ACTION_AGAIN;
> + return 0;
> + }
> +
> + for (addr = start; addr < end && !ret; pte++, addr += PAGE_SIZE) {
> + ptent = ptep_get(pte);
> + is_written = !is_pte_uffd_wp(ptent);
> +
> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
> + pte_present(ptent), is_swap_pte(ptent));
The vma->vm_file check isn't correct in this case. You can look at how
pte_to_pagemap_entry() sets PM_FILE. This flag is used to detect which
pages have a file backing store and which pages are anonymous.
I was trying to integrate this new interface into CRIU and I found
one more thing that is required. We need to detect zero pages.
It should look something like this:
#define PM_SCAN_FLAGS(wt, file, present, swap, zero) \
((wt) | ((file) << 1) | ((present) << 2) | ((swap) << 3) | ((zero) << 4))
bitmap = PM_SCAN_FLAGS(is_written, page && !PageAnon(page),
pte_present(ptent), is_swap_pte(ptent),
pte_present(ptent) && is_zero_pfn(pte_pfn(ptent)));
Thanks,
Andrei
On 6/17/23 11:39 AM, Andrei Vagin wrote:
> On Thu, Jun 15, 2023 at 07:11:41PM +0500, Muhammad Usama Anjum wrote:
>> +static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
>> + unsigned long end, struct mm_walk *walk)
>> +{
>> + bool is_written, flush = false, is_interesting = true;
>> + struct pagemap_scan_private *p = walk->private;
>> + struct vm_area_struct *vma = walk->vma;
>> + unsigned long bitmap, addr = end;
>> + pte_t *pte, *orig_pte, ptent;
>> + spinlock_t *ptl;
>> + int ret = 0;
>> +
>> + arch_enter_lazy_mmu_mode();
>> +
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> + ptl = pmd_trans_huge_lock(pmd, vma);
>> + if (ptl) {
>> + unsigned long n_pages = (end - start)/PAGE_SIZE;
>> +
>> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
>> + n_pages = p->max_pages - p->found_pages;
>> +
>> + is_written = !is_pmd_uffd_wp(*pmd);
>> +
>> + /*
>> + * Break huge page into small pages if the WP operation need to
>> + * be performed is on a portion of the huge page.
>> + */
>> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
>> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
>> + spin_unlock(ptl);
>> +
>> + split_huge_pmd(vma, pmd, start);
>> + goto process_smaller_pages;
>> + }
>> +
>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>> + pmd_present(*pmd), is_swap_pmd(*pmd));
>> +
>> + if (IS_PM_SCAN_GET(p->flags)) {
>> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
>> + if (is_interesting)
>> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
>> + }
>> +
>> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
>> + ret >= 0) {
>> + make_uffd_wp_pmd(vma, start, pmd);
>> + flush_tlb_range(vma, start, end);
>> + }
>> +
>> + spin_unlock(ptl);
>> +
>> + arch_leave_lazy_mmu_mode();
>> + return ret;
>> + }
>> +
>> +process_smaller_pages:
>> +#endif
>> +
>> + orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
>> + if (!pte) {
>
> Do we need to unlock ptl here?
>
> spin_unlock(ptl);
No, please look at these recently merged patches:
https://lore.kernel.org/all/[email protected]
>
>> + walk->action = ACTION_AGAIN;
>> + return 0;
>> + }
>> +
>> + for (addr = start; addr < end && !ret; pte++, addr += PAGE_SIZE) {
>> + ptent = ptep_get(pte);
>> + is_written = !is_pte_uffd_wp(ptent);
>> +
>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>> + pte_present(ptent), is_swap_pte(ptent));
>
> The vma->vm_file check isn't correct in this case. You can look when
> pte_to_pagemap_entry sets PM_FILE. This flag is used to detect what
> pages have a file backing store and what pages are anonymous.
I'll update.
>
> I was trying to integrate this new interace into CRIU and I found
> one more thing that is required. We need to detect zero pages.
Should we name it ZERO_PFN_PRESENT_PAGE to be exact or what?
>
> It should look something like this:
>
> #define PM_SCAN_FLAGS(wt, file, present, swap, zero) \
> ((wt) | ((file) << 1) | ((present) << 2) | ((swap) << 3) | ((zero) << 4))
>
>
> bitmap = PM_SCAN_FLAGS(is_written, page && !PageAnon(page),
> pte_present(ptent), is_swap_pte(ptent),
> pte_present(ptent) && is_zero_pfn(pte_pfn(ptent)));
Okay. Can you please confirm my assumptions:
- A THP cannot be file backed. (PM_FILE isn't being set for THP case)
- A hole is also not file backed.
A hole isn't present in memory, so its pfn would be zero. But as it isn't
present, it shouldn't be reported as a zero page. Right? For a hole:
PM_SCAN_FLAGS(false, false, false, false, false)
>
> Thanks,
> Andrei
--
BR,
Muhammad Usama Anjum
On 6/19/23 11:06 AM, Muhammad Usama Anjum wrote:
> On 6/17/23 11:39 AM, Andrei Vagin wrote:
>> On Thu, Jun 15, 2023 at 07:11:41PM +0500, Muhammad Usama Anjum wrote:
>>> +static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
>>> + unsigned long end, struct mm_walk *walk)
>>> +{
>>> + bool is_written, flush = false, is_interesting = true;
>>> + struct pagemap_scan_private *p = walk->private;
>>> + struct vm_area_struct *vma = walk->vma;
>>> + unsigned long bitmap, addr = end;
>>> + pte_t *pte, *orig_pte, ptent;
>>> + spinlock_t *ptl;
>>> + int ret = 0;
>>> +
>>> + arch_enter_lazy_mmu_mode();
>>> +
>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>> + ptl = pmd_trans_huge_lock(pmd, vma);
>>> + if (ptl) {
>>> + unsigned long n_pages = (end - start)/PAGE_SIZE;
>>> +
>>> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
>>> + n_pages = p->max_pages - p->found_pages;
>>> +
>>> + is_written = !is_pmd_uffd_wp(*pmd);
>>> +
>>> + /*
>>> + * Break huge page into small pages if the WP operation need to
>>> + * be performed is on a portion of the huge page.
>>> + */
>>> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
>>> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
>>> + spin_unlock(ptl);
>>> +
>>> + split_huge_pmd(vma, pmd, start);
>>> + goto process_smaller_pages;
>>> + }
>>> +
>>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>>> + pmd_present(*pmd), is_swap_pmd(*pmd));
>>> +
>>> + if (IS_PM_SCAN_GET(p->flags)) {
>>> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
>>> + if (is_interesting)
>>> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
>>> + }
>>> +
>>> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
>>> + ret >= 0) {
>>> + make_uffd_wp_pmd(vma, start, pmd);
>>> + flush_tlb_range(vma, start, end);
>>> + }
>>> +
>>> + spin_unlock(ptl);
>>> +
>>> + arch_leave_lazy_mmu_mode();
>>> + return ret;
>>> + }
>>> +
>>> +process_smaller_pages:
>>> +#endif
>>> +
>>> + orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
>>> + if (!pte) {
>>
>> Do we need to unlock ptl here?
>>
>> spin_unlock(ptl);
> No, please look at these recently merged patches:
> https://lore.kernel.org/all/[email protected]
>
>>
>>> + walk->action = ACTION_AGAIN;
>>> + return 0;
>>> + }
>>> +
>>> + for (addr = start; addr < end && !ret; pte++, addr += PAGE_SIZE) {
>>> + ptent = ptep_get(pte);
>>> + is_written = !is_pte_uffd_wp(ptent);
>>> +
>>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>>> + pte_present(ptent), is_swap_pte(ptent));
>>
>> The vma->vm_file check isn't correct in this case. You can look when
>> pte_to_pagemap_entry sets PM_FILE. This flag is used to detect what
>> pages have a file backing store and what pages are anonymous.
> I'll update.
>
>>
>> I was trying to integrate this new interace into CRIU and I found
>> one more thing that is required. We need to detect zero pages.
Can we skip adding this zero page flag for now, as we are already at v20? If
you have time to review and test the patches, then something can be done.
> Should we name it ZERO_PFN_PRESENT_PAGE to be exact, or something else?
>
>>
>> It should look something like this:
>>
>> #define PM_SCAN_FLAGS(wt, file, present, swap, zero) \
>> ((wt) | ((file) << 1) | ((present) << 2) | ((swap) << 3) | ((zero) << 4))
>>
>>
>> bitmap = PM_SCAN_FLAGS(is_written, page && !PageAnon(page),
>> pte_present(ptent), is_swap_pte(ptent),
>> pte_present(ptent) && is_zero_pfn(pte_pfn(ptent)));
> Okay. Can you please confirm my assumptions:
> - A THP cannot be file backed. (PM_FILE isn't being set for THP case)
> - A hole is also not file backed.
>
> A hole isn't present in memory, so its pfn would be zero. But as it isn't
> present, it shouldn't be reported as a zero page. Right? For a hole:
>
> PM_SCAN_FLAGS(false, false, false, false, false)
Please let me know the results of the testing you have been doing.
>
>
>>
>> Thanks,
>> Andrei
>
--
BR,
Muhammad Usama Anjum
On Thu, Jun 15, 2023 at 07:11:41PM +0500, Muhammad Usama Anjum wrote:
> This IOCTL, PAGEMAP_SCAN on pagemap file can be used to get and/or clear
> the info about page table entries. The following operations are supported
> in this ioctl:
> - Get the information if the pages have been written-to (PAGE_IS_WRITTEN),
> file mapped (PAGE_IS_FILE), present (PAGE_IS_PRESENT) or swapped
> (PAGE_IS_SWAPPED).
> - Find pages which have been written-to and/or write protect the pages
> (atomic PM_SCAN_OP_GET + PM_SCAN_OP_WP)
>
> This IOCTL can be extended to get information about more PTE bits.
>
> Signed-off-by: Muhammad Usama Anjum <[email protected]>
> ---
> Changes in v19:
> - Interface changes such as renaming, return mask and WP can be used
> with any flags specified in masks
> - Internal code changes
>
> Changes in v18:
> - Rebased on top of next-20230613
> - ptep_get() updates
> - remove pmd_trans_unstable() and add ACTION_AGAIN
> - Review updates (Michał)
>
> Changes in v17:
> - Rebased on next-20230606
> - Made make_uffd_wp_*_pte() better and minor changes
>
> Changes in v16:
> - Fixed a corner case where kernel writes beyond user buffer by one
> element
> - Bring back exclusive PM_SCAN_OP_WP
> - Cosmetic changes
>
> Changes in v15:
> - Build fix:
> - Use generic tlb flush function in pagemap_scan_pmd_entry() instead of
> using x86 specific flush function in do_pagemap_scan()
> - Remove #ifdef from pagemap_scan_hugetlb_entry()
> - Use mm instead of undefined vma->vm_mm
>
> Changes in v14:
> - Fix build error caused by #ifdef added at last minute in some configs
>
> Changes in v13:
> - Review updates
> - mmap_read_lock_killable() instead of mmap_read_lock()
> - Replace uffd_wp_range() with helpers which increase performance
>   drastically for OP_WP operations by reducing the number of TLB
>   flushes, etc.
> - Add MMU_NOTIFY_PROTECTION_VMA notification for the memory range
>
> Changes in v12:
> - Add hugetlb support to cover all memory types
> - Merge "userfaultfd: Define dummy uffd_wp_range()" with this patch
> - Review updates to the code
>
> Changes in v11:
> - Find written pages in a better way
> - Fix a corner case (thanks Paul)
> - Improve the code/comments
> - remove ENGAGE_WP + ! GET operation
> - shorten the commit message in favour of moving documentation to
> pagemap.rst
>
> Changes in v10:
> - move changes in tools/include/uapi/linux/fs.h to separate patch
> - update commit message
>
> Change in v8:
> - Correct is_pte_uffd_wp()
> - Improve readability and error checks
> - Remove some un-needed code
>
> Changes in v7:
> - Rebase on top of latest next
> - Fix some corner cases
> - Base soft-dirty on the uffd wp async
> - Update the terminologies
> - Optimize the memory usage inside the ioctl
>
> Changes in v6:
> - Rename variables and update comments
> - Make IOCTL independent of soft_dirty config
> - Change masks and bitmap type to _u64
> - Improve code quality
>
> Changes in v5:
> - Remove tlb flushing even for clear operation
>
> Changes in v4:
> - Update the interface and implementation
>
> Changes in v3:
> - Tighten the user-kernel interface by using explicit types and add more
> error checking
>
> Changes in v2:
> - Convert the interface from syscall to ioctl
> - Remove pidfd support as it doesn't make sense in ioctl
> ---
> fs/proc/task_mmu.c | 526 ++++++++++++++++++++++++++++++++++++++++
> include/linux/hugetlb.h | 1 +
> include/uapi/linux/fs.h | 53 ++++
> mm/hugetlb.c | 2 +-
> 4 files changed, 581 insertions(+), 1 deletion(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 507cd4e59d07..1844beea54ea 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -19,6 +19,7 @@
> #include <linux/shmem_fs.h>
> #include <linux/uaccess.h>
> #include <linux/pkeys.h>
> +#include <linux/minmax.h>
>
> #include <asm/elf.h>
> #include <asm/tlb.h>
> @@ -1765,11 +1766,536 @@ static int pagemap_release(struct inode *inode, struct file *file)
> return 0;
> }
>
> +#define PM_SCAN_REQUIRE_UFFD (1ULL << 63)
> +
> +#define PM_SCAN_FOUND_MAX_PAGES (1)
> +#define PM_SCAN_BUFFER_FULL (-256)
> +
> +#define PM_SCAN_BITS_ALL (PAGE_IS_WRITTEN | PAGE_IS_FILE | \
> + PAGE_IS_PRESENT | PAGE_IS_SWAPPED)
> +#define PM_SCAN_OPS (PM_SCAN_OP_GET | PM_SCAN_OP_WP)
> +#define IS_PM_SCAN_GET(flags) (flags & PM_SCAN_OP_GET)
> +#define IS_PM_SCAN_WP(flags) (flags & PM_SCAN_OP_WP)
> +#define PM_SCAN_FLAGS(wt, file, present, swap) \
> + ((wt) | ((file) << 1) | ((present) << 2) | ((swap) << 3))
> +
> +struct pagemap_scan_private {
> + struct page_region *vec_buf, cur_buf;
> + unsigned long long vec_buf_len, vec_buf_index, max_pages, found_pages, flags;
should it be just unsigned long?
> + unsigned long long required_mask, anyof_mask, excluded_mask, return_mask;
> +};
> +
> +static inline bool is_pte_uffd_wp(pte_t pte)
> +{
> + return (pte_present(pte) && pte_uffd_wp(pte)) ||
> + pte_swp_uffd_wp_any(pte);
> +}
> +
> +static inline void make_uffd_wp_pte(struct vm_area_struct *vma,
> + unsigned long addr, pte_t *pte)
> +{
> + pte_t ptent = ptep_get(pte);
> +
> + if (pte_present(ptent)) {
> + pte_t old_pte;
> +
> + old_pte = ptep_modify_prot_start(vma, addr, pte);
> + ptent = pte_mkuffd_wp(ptent);
> + ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent);
> + } else if (is_swap_pte(ptent)) {
> + ptent = pte_swp_mkuffd_wp(ptent);
> + set_pte_at(vma->vm_mm, addr, pte, ptent);
> + } else {
> + set_pte_at(vma->vm_mm, addr, pte,
> + make_pte_marker(PTE_MARKER_UFFD_WP));
> + }
> +}
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +static inline bool is_pmd_uffd_wp(pmd_t pmd)
> +{
> + return (pmd_present(pmd) && pmd_uffd_wp(pmd)) ||
> + (is_swap_pmd(pmd) && pmd_swp_uffd_wp(pmd));
> +}
> +
> +static inline void make_uffd_wp_pmd(struct vm_area_struct *vma,
> + unsigned long addr, pmd_t *pmdp)
> +{
> + pmd_t old, pmd = *pmdp;
> +
> + if (pmd_present(pmd)) {
> + old = pmdp_invalidate_ad(vma, addr, pmdp);
> + pmd = pmd_mkuffd_wp(old);
> + set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
> + } else if (is_migration_entry(pmd_to_swp_entry(pmd))) {
> + pmd = pmd_swp_mkuffd_wp(pmd);
> + set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
> + }
> +}
> +#endif
> +
> +#ifdef CONFIG_HUGETLB_PAGE
> +static inline bool is_huge_pte_uffd_wp(pte_t pte)
> +{
> + return (pte_present(pte) && huge_pte_uffd_wp(pte)) ||
> + pte_swp_uffd_wp_any(pte);
> +}
> +
> +static inline void make_uffd_wp_huge_pte(struct vm_area_struct *vma,
> + unsigned long addr, pte_t *ptep,
> + pte_t ptent)
> +{
> + if (is_hugetlb_entry_hwpoisoned(ptent) || is_pte_marker(ptent))
> + return;
> +
> + if (is_hugetlb_entry_migration(ptent))
> + set_huge_pte_at(vma->vm_mm, addr, ptep,
> + pte_swp_mkuffd_wp(ptent));
> + else if (!huge_pte_none(ptent))
> + huge_ptep_modify_prot_commit(vma, addr, ptep, ptent,
> + huge_pte_mkuffd_wp(ptent));
> + else
> + set_huge_pte_at(vma->vm_mm, addr, ptep,
> + make_pte_marker(PTE_MARKER_UFFD_WP));
> +}
> +#endif
> +
> +static int pagemap_scan_test_walk(unsigned long start, unsigned long end,
> + struct mm_walk *walk)
> +{
> + struct pagemap_scan_private *p = walk->private;
> + struct vm_area_struct *vma = walk->vma;
> +
> + if ((p->flags & PM_SCAN_REQUIRE_UFFD) && (!userfaultfd_wp_async(vma) ||
> + !userfaultfd_wp_use_markers(vma)))
> + return -EPERM;
> +
> + if (vma->vm_flags & VM_PFNMAP)
> + return 1;
> +
> + return 0;
> +}
> +
> +static int pagemap_scan_output(unsigned long bitmap,
> + struct pagemap_scan_private *p,
> + unsigned long addr, unsigned int n_pages)
> +{
> + struct page_region *cur_buf = &p->cur_buf;
> +
> + if (!n_pages)
> + return -EINVAL;
> +
> + bitmap &= p->return_mask;
> +
> + if (cur_buf->flags == bitmap &&
> + cur_buf->start + cur_buf->len * PAGE_SIZE == addr) {
> + cur_buf->len += n_pages;
> + p->found_pages += n_pages;
> + } else {
> + if (cur_buf->len) {
I would add a comment that vec_buf_len has been decremented by one for
cur_buf.
> + if (p->vec_buf_index >= p->vec_buf_len)
> + return PM_SCAN_BUFFER_FULL;
> +
> + memcpy(&p->vec_buf[p->vec_buf_index], cur_buf,
> + sizeof(*p->vec_buf));
> + p->vec_buf_index++;
> + }
> +
> + cur_buf->start = addr;
> + cur_buf->len = n_pages;
> + cur_buf->flags = bitmap;
> + p->found_pages += n_pages;
> + }
> +
> + if (p->found_pages == p->max_pages)
> + return PM_SCAN_FOUND_MAX_PAGES;
> +
> + return 0;
> +}
> +
> +static bool pagemap_scan_is_interesting_page(unsigned long bitmap,
> + struct pagemap_scan_private *p)
> +{
> + if ((p->required_mask & bitmap) != p->required_mask)
> + return false;
> + if (p->anyof_mask && !(p->anyof_mask & bitmap))
> + return false;
> + if (p->excluded_mask & bitmap)
> + return false;
> +
> + return true;
> +}
> +
> +static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
> + unsigned long end, struct mm_walk *walk)
> +{
> + bool is_written, flush = false, is_interesting = true;
> + struct pagemap_scan_private *p = walk->private;
> + struct vm_area_struct *vma = walk->vma;
> + unsigned long bitmap, addr = end;
> + pte_t *pte, *orig_pte, ptent;
> + spinlock_t *ptl;
> + int ret = 0;
> +
> + arch_enter_lazy_mmu_mode();
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> + ptl = pmd_trans_huge_lock(pmd, vma);
> + if (ptl) {
> + unsigned long n_pages = (end - start)/PAGE_SIZE;
> +
> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
> + n_pages = p->max_pages - p->found_pages;
> +
> + is_written = !is_pmd_uffd_wp(*pmd);
> +
> + /*
> + * Break the huge page into small pages if the WP operation
> + * needs to be performed on only a portion of the huge page.
> + */
> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
It might be worth stopping rather than splitting the huge page.
> + spin_unlock(ptl);
> +
> + split_huge_pmd(vma, pmd, start);
> + goto process_smaller_pages;
> + }
> +
> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
> + pmd_present(*pmd), is_swap_pmd(*pmd));
> +
> + if (IS_PM_SCAN_GET(p->flags)) {
> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
> + if (is_interesting)
> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
> + }
> +
> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
> + ret >= 0) {
> + make_uffd_wp_pmd(vma, start, pmd);
> + flush_tlb_range(vma, start, end);
> + }
> +
> + spin_unlock(ptl);
> +
> + arch_leave_lazy_mmu_mode();
> + return ret;
> + }
> +
> +process_smaller_pages:
> +#endif
> +
> + orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
> + if (!pte) {
> + walk->action = ACTION_AGAIN;
> + return 0;
> + }
> +
> + for (addr = start; addr < end && !ret; pte++, addr += PAGE_SIZE) {
> + ptent = ptep_get(pte);
> + is_written = !is_pte_uffd_wp(ptent);
> +
> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
> + pte_present(ptent), is_swap_pte(ptent));
> +
> + if (IS_PM_SCAN_GET(p->flags)) {
> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
> + if (is_interesting)
> + ret = pagemap_scan_output(bitmap, p, addr, 1);
> + }
> +
> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
> + ret >= 0) {
> + make_uffd_wp_pte(vma, addr, pte);
> + flush = true;
> + }
> + }
> +
> + if (flush)
> + flush_tlb_range(vma, start, addr);
> +
> + pte_unmap_unlock(orig_pte, ptl);
> + arch_leave_lazy_mmu_mode();
> +
> + cond_resched();
> + return ret;
> +}
> +
> +#ifdef CONFIG_HUGETLB_PAGE
> +static int pagemap_scan_hugetlb_entry(pte_t *ptep, unsigned long hmask,
> + unsigned long start, unsigned long end,
> + struct mm_walk *walk)
> +{
> + unsigned long n_pages = (end - start)/PAGE_SIZE;
> + struct pagemap_scan_private *p = walk->private;
> + struct vm_area_struct *vma = walk->vma;
> + bool is_written, is_interesting = true;
> + struct hstate *h = hstate_vma(vma);
> + unsigned long bitmap;
> + spinlock_t *ptl;
> + int ret = 0;
> + pte_t ptent;
> +
> + if (IS_PM_SCAN_WP(p->flags) && n_pages < HPAGE_SIZE/PAGE_SIZE)
> + return -EINVAL;
> +
> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
> + n_pages = p->max_pages - p->found_pages;
> +
> + if (IS_PM_SCAN_WP(p->flags)) {
> + i_mmap_lock_write(vma->vm_file->f_mapping);
> + ptl = huge_pte_lock(h, vma->vm_mm, ptep);
> + }
> +
> + ptent = huge_ptep_get(ptep);
> + is_written = !is_huge_pte_uffd_wp(ptent);
> +
> + /*
> + * Partial hugetlb page clear isn't supported
> + */
> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
n_pages depends on found_pages, which is incremented in
pagemap_scan_output(). Should we do this check right before calling
pagemap_scan_output()?
> + ret = -ENOSPC;
should it be PM_SCAN_FOUND_MAX_PAGES? Otherwise, it fails the ioctl even
if it has handled some pages already.
> + goto unlock_and_return;
> + }
> +
> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
> + pte_present(ptent), is_swap_pte(ptent));
> +
> + if (IS_PM_SCAN_GET(p->flags)) {
> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
> + if (is_interesting)
> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
> + }
> +
> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
> + ret >= 0) {
> + make_uffd_wp_huge_pte(vma, start, ptep, ptent);
> + flush_hugetlb_tlb_range(vma, start, end);
> + }
> +
> +unlock_and_return:
> + if (IS_PM_SCAN_WP(p->flags)) {
> + spin_unlock(ptl);
> + i_mmap_unlock_write(vma->vm_file->f_mapping);
> + }
> +
> + return ret;
> +}
> +#else
> +#define pagemap_scan_hugetlb_entry NULL
> +#endif
> +
> +static int pagemap_scan_pte_hole(unsigned long addr, unsigned long end,
> + int depth, struct mm_walk *walk)
> +{
> + unsigned long n_pages = (end - addr)/PAGE_SIZE;
> + struct pagemap_scan_private *p = walk->private;
> + struct vm_area_struct *vma = walk->vma;
> + int ret = 0;
> +
> + if (!vma || !IS_PM_SCAN_GET(p->flags))
> + return 0;
> +
> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
> + n_pages = p->max_pages - p->found_pages;
> +
> + ret = pagemap_scan_output(PM_SCAN_FLAGS(false, (bool)vma->vm_file,
> + false, false), p, addr, n_pages);
> +
> + return ret;
> +}
> +
> +static const struct mm_walk_ops pagemap_scan_ops = {
> + .test_walk = pagemap_scan_test_walk,
> + .pmd_entry = pagemap_scan_pmd_entry,
> + .pte_hole = pagemap_scan_pte_hole,
> + .hugetlb_entry = pagemap_scan_hugetlb_entry,
> +};
> +
> +static int pagemap_scan_args_valid(struct pm_scan_arg *arg, unsigned long start,
> + struct page_region __user *vec)
> +{
> + /* Detect illegal size, flags, len and masks */
> + if (arg->size != sizeof(struct pm_scan_arg))
> + return -EINVAL;
> + if (!arg->flags)
> + return -EINVAL;
> + if (arg->flags & ~PM_SCAN_OPS)
> + return -EINVAL;
> + if (!arg->len)
> + return -EINVAL;
> + if ((arg->required_mask | arg->anyof_mask | arg->excluded_mask |
> + arg->return_mask) & ~PM_SCAN_BITS_ALL)
> + return -EINVAL;
> + if (!arg->required_mask && !arg->anyof_mask &&
> + !arg->excluded_mask)
> + return -EINVAL;
> + if (!arg->return_mask)
> + return -EINVAL;
> +
> + /* Validate memory range */
> + if (!IS_ALIGNED(start, PAGE_SIZE))
> + return -EINVAL;
> + if (!access_ok((void __user *)start, arg->len))
> + return -EFAULT;
> +
> + if (IS_PM_SCAN_GET(arg->flags)) {
> + if (!arg->vec)
> + return -EINVAL;
> + if (arg->vec_len == 0)
> + return -EINVAL;
> + if (!access_ok((void __user *)arg->vec,
> + arg->vec_len * sizeof(struct page_region)))
> + return -EFAULT;
> + }
> +
> + if (IS_PM_SCAN_WP(arg->flags) && !IS_PM_SCAN_GET(arg->flags) &&
> + arg->max_pages)
> + return -EINVAL;
> +
> + return 0;
> +}
> +
> +static long do_pagemap_scan(struct mm_struct *mm, unsigned long __arg)
> +{
> + struct pm_scan_arg __user *uarg = (struct pm_scan_arg __user *)__arg;
> + unsigned long long start, end, walk_start, walk_end;
> + unsigned long long empty_slots, vec_index = 0;
> + struct mmu_notifier_range range;
> + struct page_region __user *vec;
> + struct pagemap_scan_private p;
> + struct pm_scan_arg arg;
> + int ret = 0;
> +
> + if (copy_from_user(&arg, uarg, sizeof(arg)))
> + return -EFAULT;
> +
> + start = untagged_addr((unsigned long)arg.start);
> + vec = (struct page_region *)untagged_addr((unsigned long)arg.vec);
> +
> + ret = pagemap_scan_args_valid(&arg, start, vec);
> + if (ret)
> + return ret;
> +
> + end = start + arg.len;
> + p.max_pages = arg.max_pages;
> + p.found_pages = 0;
> + p.required_mask = arg.required_mask;
> + p.anyof_mask = arg.anyof_mask;
> + p.excluded_mask = arg.excluded_mask;
> + p.return_mask = arg.return_mask;
> + p.flags = arg.flags;
> + p.flags |= ((p.required_mask | p.anyof_mask | p.excluded_mask) &
> + PAGE_IS_WRITTEN) ? PM_SCAN_REQUIRE_UFFD : 0;
> + p.cur_buf.start = p.cur_buf.len = p.cur_buf.flags = 0;
> + p.vec_buf = NULL;
> + p.vec_buf_len = PAGEMAP_WALK_SIZE >> PAGE_SHIFT;
> +
> + /*
> + * Allocate a smaller buffer to get output from inside the page walk
> + * functions, and walk the page range in PAGEMAP_WALK_SIZE chunks. We
> + * want to return output to the user in compact form, where no two
> + * consecutive regions are both contiguous and have the same flags.
> + * So keep the latest element in p.cur_buf between different walks and
> + * copy p.cur_buf to the user buffer at the end of the walk.
> + */
> + if (IS_PM_SCAN_GET(p.flags)) {
> + p.vec_buf = kmalloc_array(p.vec_buf_len, sizeof(*p.vec_buf),
> + GFP_KERNEL);
> + if (!p.vec_buf)
> + return -ENOMEM;
> + }
> +
> + if (IS_PM_SCAN_WP(p.flags)) {
> + mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
> + mm, start, end);
> + mmu_notifier_invalidate_range_start(&range);
> + }
> +
> + walk_start = walk_end = start;
> + while (walk_end < end && !ret) {
How long can it take to run this loop? Should we interrupt it if a
signal has been queued?
> + if (IS_PM_SCAN_GET(p.flags)) {
> + p.vec_buf_index = 0;
> +
> + /*
> + * All data is copied to cur_buf first. When more data
> + * is found, we push cur_buf to vec_buf and copy new
> + * data to cur_buf. Subtract 1 from length as the
> + * index of cur_buf isn't counted in length.
> + */
> + empty_slots = arg.vec_len - vec_index;
> + p.vec_buf_len = min(p.vec_buf_len, empty_slots - 1);
> + }
> +
> + walk_end = (walk_start + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK;
> + if (walk_end > end)
> + walk_end = end;
> +
> + ret = mmap_read_lock_killable(mm);
> + if (ret)
> + goto free_data;
This will fail the ioctl, but that isn't acceptable if any pages have
already been handled.
> + ret = walk_page_range(mm, walk_start, walk_end,
> + &pagemap_scan_ops, &p);
> + mmap_read_unlock(mm);
> +
ret can be -ENOSPC returned from pagemap_scan_hugetlb_entry(), but we
should not return this error if any pages have been handled.
> + if (ret && ret != PM_SCAN_BUFFER_FULL &&
> + ret != PM_SCAN_FOUND_MAX_PAGES)
> + goto free_data;
> +
> + walk_start = walk_end;
> + if (IS_PM_SCAN_GET(p.flags) && p.vec_buf_index) {
> + if (copy_to_user(&vec[vec_index], p.vec_buf,
> + p.vec_buf_index * sizeof(*p.vec_buf))) {
> + /*
> + * Return error even though the OP succeeded
> + */
> + ret = -EFAULT;
> + goto free_data;
> + }
> + vec_index += p.vec_buf_index;
> + }
> + }
> +
> + if (p.cur_buf.len) {
> + if (copy_to_user(&vec[vec_index], &p.cur_buf, sizeof(p.cur_buf))) {
> + ret = -EFAULT;
> + goto free_data;
> + }
> + vec_index++;
> + }
> +
> + ret = vec_index;
> +
> +free_data:
> + if (IS_PM_SCAN_WP(p.flags))
> + mmu_notifier_invalidate_range_end(&range);
> +
> + kfree(p.vec_buf);
> + return ret;
> +}
> +
> +static long do_pagemap_cmd(struct file *file, unsigned int cmd,
> + unsigned long arg)
> +{
> + struct mm_struct *mm = file->private_data;
> +
> + switch (cmd) {
> + case PAGEMAP_SCAN:
> + return do_pagemap_scan(mm, arg);
> +
> + default:
> + return -EINVAL;
> + }
> +}
> +
> const struct file_operations proc_pagemap_operations = {
> .llseek = mem_lseek, /* borrow this */
> .read = pagemap_read,
> .open = pagemap_open,
> .release = pagemap_release,
> + .unlocked_ioctl = do_pagemap_cmd,
> + .compat_ioctl = do_pagemap_cmd,
> };
> #endif /* CONFIG_PROC_PAGE_MONITOR */
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index beb7c63d2871..a6e773c3e2b4 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -261,6 +261,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
> unsigned long cp_flags);
>
> bool is_hugetlb_entry_migration(pte_t pte);
> +bool is_hugetlb_entry_hwpoisoned(pte_t pte);
> void hugetlb_unshare_all_pmds(struct vm_area_struct *vma);
>
> #else /* !CONFIG_HUGETLB_PAGE */
> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> index b7b56871029c..a9fb44db84a3 100644
> --- a/include/uapi/linux/fs.h
> +++ b/include/uapi/linux/fs.h
> @@ -305,4 +305,57 @@ typedef int __bitwise __kernel_rwf_t;
> #define RWF_SUPPORTED (RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
> RWF_APPEND)
>
> +/* Pagemap ioctl */
> +#define PAGEMAP_SCAN _IOWR('f', 16, struct pm_scan_arg)
> +
> +/* Bits are set in flags of the page_region and masks in pm_scan_args */
> +#define PAGE_IS_WRITTEN (1 << 0)
> +#define PAGE_IS_FILE (1 << 1)
> +#define PAGE_IS_PRESENT (1 << 2)
> +#define PAGE_IS_SWAPPED (1 << 3)
> +
> +/*
> + * struct page_region - Page region with flags
> + * @start: Start of the region
> + * @len: Length of the region in pages
> + * @bitmap: Bits sets for the region
> + */
> +struct page_region {
> + __u64 start;
> + __u64 len;
> + __u64 flags;
> +};
> +
> +/*
> + * struct pm_scan_arg - Pagemap ioctl argument
> + * @size: Size of the structure
> + * @flags: Flags for the IOCTL
> + * @start: Starting address of the region
> + * @len: Length of the region (All the pages in this length are included)
> + * @vec: Address of page_region struct array for output
> + * @vec_len: Length of the page_region struct array
> + * @max_pages: Optional max return pages
> + * @required_mask: Required mask - All of these bits have to be set in the PTE
> + * @anyof_mask: Any mask - Any of these bits are set in the PTE
> + * @excluded_mask: Exclude mask - None of these bits are set in the PTE
> + * @return_mask: Bits that are to be reported in page_region
> + */
> +struct pm_scan_arg {
> + __u64 size;
> + __u64 flags;
> + __u64 start;
> + __u64 len;
> + __u64 vec;
> + __u64 vec_len;
> + __u64 max_pages;
> + __u64 required_mask;
> + __u64 anyof_mask;
> + __u64 excluded_mask;
> + __u64 return_mask;
> +};
> +
> +/* Supported flags */
> +#define PM_SCAN_OP_GET (1 << 0)
> +#define PM_SCAN_OP_WP (1 << 1)
> +
> #endif /* _UAPI_LINUX_FS_H */
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 0db13167b1ee..7e60f0f3fd03 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4980,7 +4980,7 @@ bool is_hugetlb_entry_migration(pte_t pte)
> return false;
> }
>
> -static bool is_hugetlb_entry_hwpoisoned(pte_t pte)
> +bool is_hugetlb_entry_hwpoisoned(pte_t pte)
> {
> swp_entry_t swp;
>
> --
> 2.39.2
>
Hi,
Thank you for your review.
On 6/20/23 11:03 PM, Andrei Vagin wrote:
...
>> +struct pagemap_scan_private {
>> + struct page_region *vec_buf, cur_buf;
>> + unsigned long long vec_buf_len, vec_buf_index, max_pages, found_pages, flags;
>
> should it be just unsigned long?
These internal fields store data coming from the user in struct
pm_scan_arg, in which all variables are explicitly 64-bit (__u64). This
is why we have unsigned long long here. It is absolutely necessary.
>
>> + unsigned long long required_mask, anyof_mask, excluded_mask, return_mask;
>> +};
>> +
>> +static inline bool is_pte_uffd_wp(pte_t pte)
>> +{
>> + return (pte_present(pte) && pte_uffd_wp(pte)) ||
>> + pte_swp_uffd_wp_any(pte);
>> +}
>> +
>> +static inline void make_uffd_wp_pte(struct vm_area_struct *vma,
>> + unsigned long addr, pte_t *pte)
>> +{
>> + pte_t ptent = ptep_get(pte);
>> +
>> + if (pte_present(ptent)) {
>> + pte_t old_pte;
>> +
>> + old_pte = ptep_modify_prot_start(vma, addr, pte);
>> + ptent = pte_mkuffd_wp(ptent);
>> + ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent);
>> + } else if (is_swap_pte(ptent)) {
>> + ptent = pte_swp_mkuffd_wp(ptent);
>> + set_pte_at(vma->vm_mm, addr, pte, ptent);
>> + } else {
>> + set_pte_at(vma->vm_mm, addr, pte,
>> + make_pte_marker(PTE_MARKER_UFFD_WP));
>> + }
>> +}
>> +
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> +static inline bool is_pmd_uffd_wp(pmd_t pmd)
>> +{
>> + return (pmd_present(pmd) && pmd_uffd_wp(pmd)) ||
>> + (is_swap_pmd(pmd) && pmd_swp_uffd_wp(pmd));
>> +}
>> +
>> +static inline void make_uffd_wp_pmd(struct vm_area_struct *vma,
>> + unsigned long addr, pmd_t *pmdp)
>> +{
>> + pmd_t old, pmd = *pmdp;
>> +
>> + if (pmd_present(pmd)) {
>> + old = pmdp_invalidate_ad(vma, addr, pmdp);
>> + pmd = pmd_mkuffd_wp(old);
>> + set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
>> + } else if (is_migration_entry(pmd_to_swp_entry(pmd))) {
>> + pmd = pmd_swp_mkuffd_wp(pmd);
>> + set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
>> + }
>> +}
>> +#endif
>> +
>> +#ifdef CONFIG_HUGETLB_PAGE
>> +static inline bool is_huge_pte_uffd_wp(pte_t pte)
>> +{
>> + return (pte_present(pte) && huge_pte_uffd_wp(pte)) ||
>> + pte_swp_uffd_wp_any(pte);
>> +}
>> +
>> +static inline void make_uffd_wp_huge_pte(struct vm_area_struct *vma,
>> + unsigned long addr, pte_t *ptep,
>> + pte_t ptent)
>> +{
>> + if (is_hugetlb_entry_hwpoisoned(ptent) || is_pte_marker(ptent))
>> + return;
>> +
>> + if (is_hugetlb_entry_migration(ptent))
>> + set_huge_pte_at(vma->vm_mm, addr, ptep,
>> + pte_swp_mkuffd_wp(ptent));
>> + else if (!huge_pte_none(ptent))
>> + huge_ptep_modify_prot_commit(vma, addr, ptep, ptent,
>> + huge_pte_mkuffd_wp(ptent));
>> + else
>> + set_huge_pte_at(vma->vm_mm, addr, ptep,
>> + make_pte_marker(PTE_MARKER_UFFD_WP));
>> +}
>> +#endif
>> +
>> +static int pagemap_scan_test_walk(unsigned long start, unsigned long end,
>> + struct mm_walk *walk)
>> +{
>> + struct pagemap_scan_private *p = walk->private;
>> + struct vm_area_struct *vma = walk->vma;
>> +
>> + if ((p->flags & PM_SCAN_REQUIRE_UFFD) && (!userfaultfd_wp_async(vma) ||
>> + !userfaultfd_wp_use_markers(vma)))
>> + return -EPERM;
>> +
>> + if (vma->vm_flags & VM_PFNMAP)
>> + return 1;
>> +
>> + return 0;
>> +}
>> +
>> +static int pagemap_scan_output(unsigned long bitmap,
>> + struct pagemap_scan_private *p,
>> + unsigned long addr, unsigned int n_pages)
>> +{
>> + struct page_region *cur_buf = &p->cur_buf;
>> +
>> + if (!n_pages)
>> + return -EINVAL;
>> +
>> + bitmap &= p->return_mask;
>> +
>> + if (cur_buf->flags == bitmap &&
>> + cur_buf->start + cur_buf->len * PAGE_SIZE == addr) {
>> + cur_buf->len += n_pages;
>> + p->found_pages += n_pages;
>> + } else {
>> + if (cur_buf->len) {
>
> I would add a comment that vec_buf_len has been decremented by one for
> cur_buf.
>
>> + if (p->vec_buf_index >= p->vec_buf_len)
>> + return PM_SCAN_BUFFER_FULL;
>> +
>> + memcpy(&p->vec_buf[p->vec_buf_index], cur_buf,
>> + sizeof(*p->vec_buf));
>> + p->vec_buf_index++;
>> + }
>> +
>> + cur_buf->start = addr;
>> + cur_buf->len = n_pages;
>> + cur_buf->flags = bitmap;
>> + p->found_pages += n_pages;
>> + }
>> +
>> + if (p->found_pages == p->max_pages)
>> + return PM_SCAN_FOUND_MAX_PAGES;
>> +
>> + return 0;
>> +}
>> +
>> +static bool pagemap_scan_is_interesting_page(unsigned long bitmap,
>> + struct pagemap_scan_private *p)
>> +{
>> + if ((p->required_mask & bitmap) != p->required_mask)
>> + return false;
>> + if (p->anyof_mask && !(p->anyof_mask & bitmap))
>> + return false;
>> + if (p->excluded_mask & bitmap)
>> + return false;
>> +
>> + return true;
>> +}
>> +
>> +static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
>> + unsigned long end, struct mm_walk *walk)
>> +{
>> + bool is_written, flush = false, is_interesting = true;
>> + struct pagemap_scan_private *p = walk->private;
>> + struct vm_area_struct *vma = walk->vma;
>> + unsigned long bitmap, addr = end;
>> + pte_t *pte, *orig_pte, ptent;
>> + spinlock_t *ptl;
>> + int ret = 0;
>> +
>> + arch_enter_lazy_mmu_mode();
>> +
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> + ptl = pmd_trans_huge_lock(pmd, vma);
>> + if (ptl) {
>> + unsigned long n_pages = (end - start)/PAGE_SIZE;
>> +
>> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
>> + n_pages = p->max_pages - p->found_pages;
>> +
>> + is_written = !is_pmd_uffd_wp(*pmd);
>> +
>> + /*
>> + * Break the huge page into small pages if the WP operation
>> + * needs to be performed on only a portion of the huge page.
>> + */
>> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
>> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
>
> It might be worth stopping rather than splitting the huge page.
Sometimes normal pages can be collapsed into a THP by the kernel,
depending on the configuration. We don't want to reject the ioctl just
because this folding happened. So we don't want to stop; splitting is
the best option, and it works fine.
>
>> + spin_unlock(ptl);
>> +
>> + split_huge_pmd(vma, pmd, start);
>> + goto process_smaller_pages;
>> + }
>> +
>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>> + pmd_present(*pmd), is_swap_pmd(*pmd));
>> +
>> + if (IS_PM_SCAN_GET(p->flags)) {
>> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
>> + if (is_interesting)
>> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
>> + }
>> +
>> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
>> + ret >= 0) {
>> + make_uffd_wp_pmd(vma, start, pmd);
>> + flush_tlb_range(vma, start, end);
>> + }
>> +
>> + spin_unlock(ptl);
>> +
>> + arch_leave_lazy_mmu_mode();
>> + return ret;
>> + }
>> +
>> +process_smaller_pages:
>> +#endif
>> +
>> + orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
>> + if (!pte) {
>> + walk->action = ACTION_AGAIN;
>> + return 0;
>> + }
>> +
>> + for (addr = start; addr < end && !ret; pte++, addr += PAGE_SIZE) {
>> + ptent = ptep_get(pte);
>> + is_written = !is_pte_uffd_wp(ptent);
>> +
>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>> + pte_present(ptent), is_swap_pte(ptent));
>> +
>> + if (IS_PM_SCAN_GET(p->flags)) {
>> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
>> + if (is_interesting)
>> + ret = pagemap_scan_output(bitmap, p, addr, 1);
>> + }
>> +
>> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
>> + ret >= 0) {
>> + make_uffd_wp_pte(vma, addr, pte);
>> + flush = true;
>> + }
>> + }
>> +
>> + if (flush)
>> + flush_tlb_range(vma, start, addr);
>> +
>> + pte_unmap_unlock(orig_pte, ptl);
>> + arch_leave_lazy_mmu_mode();
>> +
>> + cond_resched();
>> + return ret;
>> +}
>> +
>> +#ifdef CONFIG_HUGETLB_PAGE
>> +static int pagemap_scan_hugetlb_entry(pte_t *ptep, unsigned long hmask,
>> + unsigned long start, unsigned long end,
>> + struct mm_walk *walk)
>> +{
>> + unsigned long n_pages = (end - start)/PAGE_SIZE;
>> + struct pagemap_scan_private *p = walk->private;
>> + struct vm_area_struct *vma = walk->vma;
>> + bool is_written, is_interesting = true;
>> + struct hstate *h = hstate_vma(vma);
>> + unsigned long bitmap;
>> + spinlock_t *ptl;
>> + int ret = 0;
>> + pte_t ptent;
>> +
>> + if (IS_PM_SCAN_WP(p->flags) && n_pages < HPAGE_SIZE/PAGE_SIZE)
>> + return -EINVAL;
>> +
>> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
>> + n_pages = p->max_pages - p->found_pages;
>> +
>> + if (IS_PM_SCAN_WP(p->flags)) {
>> + i_mmap_lock_write(vma->vm_file->f_mapping);
>> + ptl = huge_pte_lock(h, vma->vm_mm, ptep);
>> + }
>> +
>> + ptent = huge_ptep_get(ptep);
>> + is_written = !is_huge_pte_uffd_wp(ptent);
>> +
>> + /*
>> + * Partial hugetlb page clear isn't supported
>> + */
>> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
>> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
>
> n_pages depends on found_pages that is incremented in
> pagemap_scan_output. Should we do this check right before calling
> pagemap_scan_output?
This condition already acts as a check before pagemap_scan_output().
n_pages tells us how many pages of the quota are still left to report.
But if clamping n_pages means we can no longer cover the whole hugetlb
page, we should just return an error, as we don't split hugetlb pages.
>
>> + ret = -ENOSPC;
>
> should it be PM_SCAN_FOUND_MAX_PAGES? Otherwise, it fails the ioctl even
> if it has handled some pages already.
It is a double-edged sword. If we don't return an error, the user will
never know that a larger max_pages is needed, or that the output buffer
is too small to cover the entire range. The real purpose here is to make
the user aware that the full hugetlb range must be specified and that
the found pages should cover that entire range as well.
>
>> + goto unlock_and_return;
>> + }
>> +
>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>> + pte_present(ptent), is_swap_pte(ptent));
>> +
>> + if (IS_PM_SCAN_GET(p->flags)) {
>> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
>> + if (is_interesting)
>> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
>> + }
>> +
>> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
>> + ret >= 0) {
>> + make_uffd_wp_huge_pte(vma, start, ptep, ptent);
>> + flush_hugetlb_tlb_range(vma, start, end);
>> + }
>> +
>> +unlock_and_return:
>> + if (IS_PM_SCAN_WP(p->flags)) {
>> + spin_unlock(ptl);
>> + i_mmap_unlock_write(vma->vm_file->f_mapping);
>> + }
>> +
>> + return ret;
>> +}
>> +#else
>> +#define pagemap_scan_hugetlb_entry NULL
>> +#endif
>> +
>> +static int pagemap_scan_pte_hole(unsigned long addr, unsigned long end,
>> + int depth, struct mm_walk *walk)
>> +{
>> + unsigned long n_pages = (end - addr)/PAGE_SIZE;
>> + struct pagemap_scan_private *p = walk->private;
>> + struct vm_area_struct *vma = walk->vma;
>> + int ret = 0;
>> +
>> + if (!vma || !IS_PM_SCAN_GET(p->flags))
>> + return 0;
>> +
>> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
>> + n_pages = p->max_pages - p->found_pages;
>> +
>> + ret = pagemap_scan_output(PM_SCAN_FLAGS(false, (bool)vma->vm_file,
>> + false, false), p, addr, n_pages);
>> +
>> + return ret;
>> +}
>> +
>> +static const struct mm_walk_ops pagemap_scan_ops = {
>> + .test_walk = pagemap_scan_test_walk,
>> + .pmd_entry = pagemap_scan_pmd_entry,
>> + .pte_hole = pagemap_scan_pte_hole,
>> + .hugetlb_entry = pagemap_scan_hugetlb_entry,
>> +};
>> +
>> +static int pagemap_scan_args_valid(struct pm_scan_arg *arg, unsigned long start,
>> + struct page_region __user *vec)
>> +{
>> + /* Detect illegal size, flags, len and masks */
>> + if (arg->size != sizeof(struct pm_scan_arg))
>> + return -EINVAL;
>> + if (!arg->flags)
>> + return -EINVAL;
>> + if (arg->flags & ~PM_SCAN_OPS)
>> + return -EINVAL;
>> + if (!arg->len)
>> + return -EINVAL;
>> + if ((arg->required_mask | arg->anyof_mask | arg->excluded_mask |
>> + arg->return_mask) & ~PM_SCAN_BITS_ALL)
>> + return -EINVAL;
>> + if (!arg->required_mask && !arg->anyof_mask &&
>> + !arg->excluded_mask)
>> + return -EINVAL;
>> + if (!arg->return_mask)
>> + return -EINVAL;
>> +
>> + /* Validate memory range */
>> + if (!IS_ALIGNED(start, PAGE_SIZE))
>> + return -EINVAL;
>> + if (!access_ok((void __user *)start, arg->len))
>> + return -EFAULT;
>> +
>> + if (IS_PM_SCAN_GET(arg->flags)) {
>> + if (!arg->vec)
>> + return -EINVAL;
>> + if (arg->vec_len == 0)
>> + return -EINVAL;
>> + if (!access_ok((void __user *)arg->vec,
>> + arg->vec_len * sizeof(struct page_region)))
>> + return -EFAULT;
>> + }
>> +
>> + if (IS_PM_SCAN_WP(arg->flags) && !IS_PM_SCAN_GET(arg->flags) &&
>> + arg->max_pages)
>> + return -EINVAL;
>> +
>> + return 0;
>> +}
>> +
>> +static long do_pagemap_scan(struct mm_struct *mm, unsigned long __arg)
>> +{
>> + struct pm_scan_arg __user *uarg = (struct pm_scan_arg __user *)__arg;
>> + unsigned long long start, end, walk_start, walk_end;
>> + unsigned long long empty_slots, vec_index = 0;
>> + struct mmu_notifier_range range;
>> + struct page_region __user *vec;
>> + struct pagemap_scan_private p;
>> + struct pm_scan_arg arg;
>> + int ret = 0;
>> +
>> + if (copy_from_user(&arg, uarg, sizeof(arg)))
>> + return -EFAULT;
>> +
>> + start = untagged_addr((unsigned long)arg.start);
>> + vec = (struct page_region *)untagged_addr((unsigned long)arg.vec);
>> +
>> + ret = pagemap_scan_args_valid(&arg, start, vec);
>> + if (ret)
>> + return ret;
>> +
>> + end = start + arg.len;
>> + p.max_pages = arg.max_pages;
>> + p.found_pages = 0;
>> + p.required_mask = arg.required_mask;
>> + p.anyof_mask = arg.anyof_mask;
>> + p.excluded_mask = arg.excluded_mask;
>> + p.return_mask = arg.return_mask;
>> + p.flags = arg.flags;
>> + p.flags |= ((p.required_mask | p.anyof_mask | p.excluded_mask) &
>> + PAGE_IS_WRITTEN) ? PM_SCAN_REQUIRE_UFFD : 0;
>> + p.cur_buf.start = p.cur_buf.len = p.cur_buf.flags = 0;
>> + p.vec_buf = NULL;
>> + p.vec_buf_len = PAGEMAP_WALK_SIZE >> PAGE_SHIFT;
>> +
>> + /*
>> + * Allocate smaller buffer to get output from inside the page walk
>> + * functions and walk page range in PAGEMAP_WALK_SIZE size chunks. As
>> + * we want to return output to user in compact form where no two
>> + * consecutive regions are contiguous with the same flags.
>> + * So store the latest element in p.cur_buf between different walks and
>> + * store the p.cur_buf at the end of the walk to the user buffer.
>> + */
>> + if (IS_PM_SCAN_GET(p.flags)) {
>> + p.vec_buf = kmalloc_array(p.vec_buf_len, sizeof(*p.vec_buf),
>> + GFP_KERNEL);
>> + if (!p.vec_buf)
>> + return -ENOMEM;
>> + }
>> +
>> + if (IS_PM_SCAN_WP(p.flags)) {
>> + mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
>> + mm, start, end);
>> + mmu_notifier_invalidate_range_start(&range);
>> + }
>> +
>> + walk_start = walk_end = start;
>> + while (walk_end < end && !ret) {
>
> How long can it take to run this loop? Should we interrupt it if a
> signal has been queued?
I'm following mincore() and pagemap_read() here; neither does any such
check. The ioctl performs exactly what the user requested. If execution
time is a concern, the user should call the ioctl on a shorter address
range.
>
>> + if (IS_PM_SCAN_GET(p.flags)) {
>> + p.vec_buf_index = 0;
>> +
>> + /*
>> + * All data is copied to cur_buf first. When more data
>> + * is found, we push cur_buf to vec_buf and copy new
>> + * data to cur_buf. Subtract 1 from length as the
>> + * index of cur_buf isn't counted in length.
>> + */
>> + empty_slots = arg.vec_len - vec_index;
>> + p.vec_buf_len = min(p.vec_buf_len, empty_slots - 1);
>> + }
>> +
>> + walk_end = (walk_start + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK;
>> + if (walk_end > end)
>> + walk_end = end;
>> +
>> + ret = mmap_read_lock_killable(mm);
>> + if (ret)
>> + goto free_data;
>
> This will fail the ioctl, but it isn't acceptable if any pages has been
> handled.
I'm following pagemap_read() here.
>
>
>> + ret = walk_page_range(mm, walk_start, walk_end,
>> + &pagemap_scan_ops, &p);
>> + mmap_read_unlock(mm);
>> +
>
> ret can be ENOSPC returned from pagemap_scan_pmd_entry, but we should
> not return this error if any pages has been handled.
-ENOSPC isn't returned from pagemap_scan_pmd_entry() anymore.
>
>> + if (ret && ret != PM_SCAN_BUFFER_FULL &&
>> + ret != PM_SCAN_FOUND_MAX_PAGES)
>> + goto free_data;
>> +
>> + walk_start = walk_end;
>> + if (IS_PM_SCAN_GET(p.flags) && p.vec_buf_index) {
>> + if (copy_to_user(&vec[vec_index], p.vec_buf,
>> + p.vec_buf_index * sizeof(*p.vec_buf))) {
>> + /*
>> + * Return error even though the OP succeeded
>> + */
>> + ret = -EFAULT;
>> + goto free_data;
>> + }
>> + vec_index += p.vec_buf_index;
>> + }
>> + }
>> +
>> + if (p.cur_buf.len) {
>> + if (copy_to_user(&vec[vec_index], &p.cur_buf, sizeof(p.cur_buf))) {
>> + ret = -EFAULT;
>> + goto free_data;
>> + }
>> + vec_index++;
>> + }
>> +
>> + ret = vec_index;
>> +
>> +free_data:
>> + if (IS_PM_SCAN_WP(p.flags))
>> + mmu_notifier_invalidate_range_end(&range);
>> +
>> + kfree(p.vec_buf);
>> + return ret;
>> +}
>> +
>> +static long do_pagemap_cmd(struct file *file, unsigned int cmd,
>> + unsigned long arg)
>> +{
>> + struct mm_struct *mm = file->private_data;
>> +
>> + switch (cmd) {
>> + case PAGEMAP_SCAN:
>> + return do_pagemap_scan(mm, arg);
>> +
>> + default:
>> + return -EINVAL;
>> + }
>> +}
>> +
>> const struct file_operations proc_pagemap_operations = {
>> .llseek = mem_lseek, /* borrow this */
>> .read = pagemap_read,
>> .open = pagemap_open,
>> .release = pagemap_release,
>> + .unlocked_ioctl = do_pagemap_cmd,
>> + .compat_ioctl = do_pagemap_cmd,
>> };
>> #endif /* CONFIG_PROC_PAGE_MONITOR */
>>
>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>> index beb7c63d2871..a6e773c3e2b4 100644
>> --- a/include/linux/hugetlb.h
>> +++ b/include/linux/hugetlb.h
>> @@ -261,6 +261,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
>> unsigned long cp_flags);
>>
>> bool is_hugetlb_entry_migration(pte_t pte);
>> +bool is_hugetlb_entry_hwpoisoned(pte_t pte);
>> void hugetlb_unshare_all_pmds(struct vm_area_struct *vma);
>>
>> #else /* !CONFIG_HUGETLB_PAGE */
>> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
>> index b7b56871029c..a9fb44db84a3 100644
>> --- a/include/uapi/linux/fs.h
>> +++ b/include/uapi/linux/fs.h
>> @@ -305,4 +305,57 @@ typedef int __bitwise __kernel_rwf_t;
>> #define RWF_SUPPORTED (RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
>> RWF_APPEND)
>>
>> +/* Pagemap ioctl */
>> +#define PAGEMAP_SCAN _IOWR('f', 16, struct pm_scan_arg)
>> +
>> +/* Bits are set in flags of the page_region and masks in pm_scan_args */
>> +#define PAGE_IS_WRITTEN (1 << 0)
>> +#define PAGE_IS_FILE (1 << 1)
>> +#define PAGE_IS_PRESENT (1 << 2)
>> +#define PAGE_IS_SWAPPED (1 << 3)
>> +
>> +/*
>> + * struct page_region - Page region with flags
>> + * @start: Start of the region
>> + * @len: Length of the region in pages
>> + * @flags: Bits set for the region
>> + */
>> +struct page_region {
>> + __u64 start;
>> + __u64 len;
>> + __u64 flags;
>> +};
>> +
>> +/*
>> + * struct pm_scan_arg - Pagemap ioctl argument
>> + * @size: Size of the structure
>> + * @flags: Flags for the IOCTL
>> + * @start: Starting address of the region
>> + * @len: Length of the region (All the pages in this length are included)
>> + * @vec: Address of page_region struct array for output
>> + * @vec_len: Length of the page_region struct array
>> + * @max_pages: Optional max return pages
>> + * @required_mask: Required mask - All of these bits have to be set in the PTE
>> + * @anyof_mask: Any mask - Any of these bits are set in the PTE
>> + * @excluded_mask: Exclude mask - None of these bits are set in the PTE
>> + * @return_mask: Bits that are to be reported in page_region
>> + */
>> +struct pm_scan_arg {
>> + __u64 size;
>> + __u64 flags;
>> + __u64 start;
>> + __u64 len;
>> + __u64 vec;
>> + __u64 vec_len;
>> + __u64 max_pages;
>> + __u64 required_mask;
>> + __u64 anyof_mask;
>> + __u64 excluded_mask;
>> + __u64 return_mask;
>> +};
>> +
>> +/* Supported flags */
>> +#define PM_SCAN_OP_GET (1 << 0)
>> +#define PM_SCAN_OP_WP (1 << 1)
>> +
>> #endif /* _UAPI_LINUX_FS_H */
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 0db13167b1ee..7e60f0f3fd03 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -4980,7 +4980,7 @@ bool is_hugetlb_entry_migration(pte_t pte)
>> return false;
>> }
>>
>> -static bool is_hugetlb_entry_hwpoisoned(pte_t pte)
>> +bool is_hugetlb_entry_hwpoisoned(pte_t pte)
>> {
>> swp_entry_t swp;
>>
>> --
>> 2.39.2
>>
--
BR,
Muhammad Usama Anjum
On Mon, Jun 19, 2023 at 11:06:36AM +0500, Muhammad Usama Anjum wrote:
> On 6/17/23 11:39 AM, Andrei Vagin wrote:
> > On Thu, Jun 15, 2023 at 07:11:41PM +0500, Muhammad Usama Anjum wrote:
> >> +static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
> >> + unsigned long end, struct mm_walk *walk)
> >> +{
> >> + bool is_written, flush = false, is_interesting = true;
> >> + struct pagemap_scan_private *p = walk->private;
> >> + struct vm_area_struct *vma = walk->vma;
> >> + unsigned long bitmap, addr = end;
> >> + pte_t *pte, *orig_pte, ptent;
> >> + spinlock_t *ptl;
> >> + int ret = 0;
> >> +
> >> + arch_enter_lazy_mmu_mode();
> >> +
> >> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >> + ptl = pmd_trans_huge_lock(pmd, vma);
> >> + if (ptl) {
> >> + unsigned long n_pages = (end - start)/PAGE_SIZE;
> >> +
> >> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
> >> + n_pages = p->max_pages - p->found_pages;
> >> +
> >> + is_written = !is_pmd_uffd_wp(*pmd);
> >> +
> >> + /*
> >> +		 * Break huge page into small pages if the WP operation
> >> +		 * needs to be performed on only a portion of the huge page.
> >> + */
> >> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
> >> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
> >> + spin_unlock(ptl);
> >> +
> >> + split_huge_pmd(vma, pmd, start);
> >> + goto process_smaller_pages;
> >> + }
> >> +
> >> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
> >> + pmd_present(*pmd), is_swap_pmd(*pmd));
> >> +
> >> + if (IS_PM_SCAN_GET(p->flags)) {
> >> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
> >> + if (is_interesting)
> >> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
> >> + }
> >> +
> >> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
> >> + ret >= 0) {
> >> + make_uffd_wp_pmd(vma, start, pmd);
> >> + flush_tlb_range(vma, start, end);
> >> + }
> >> +
> >> + spin_unlock(ptl);
> >> +
> >> + arch_leave_lazy_mmu_mode();
> >> + return ret;
> >> + }
> >> +
> >> +process_smaller_pages:
> >> +#endif
> >> +
> >> + orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
> >> + if (!pte) {
> >
> > Do we need to unlock ptl here?
> >
> > spin_unlock(ptl);
> No, please look at these recently merged patches:
> https://lore.kernel.org/all/[email protected]
>
> >
> >> + walk->action = ACTION_AGAIN;
> >> + return 0;
> >> + }
> >> +
> >> + for (addr = start; addr < end && !ret; pte++, addr += PAGE_SIZE) {
> >> + ptent = ptep_get(pte);
> >> + is_written = !is_pte_uffd_wp(ptent);
> >> +
> >> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
> >> + pte_present(ptent), is_swap_pte(ptent));
> >
> > The vma->vm_file check isn't correct in this case. You can look when
> > pte_to_pagemap_entry sets PM_FILE. This flag is used to detect what
> > pages have a file backing store and what pages are anonymous.
> I'll update.
>
> >
> > I was trying to integrate this new interface into CRIU and I found
> > one more thing that is required. We need to detect zero pages.
> Should we name it ZERO_PFN_PRESENT_PAGE to be exact or what?
IMHO, ZERO_PFN_PRESENT_PAGE looks a bit monstrous.
It looks like zero page is a proper noun in the kernel, so PAGE_IS_ZERO
might be a good choice here, but it is up to you.
>
> >
> > It should look something like this:
> >
> > #define PM_SCAN_FLAGS(wt, file, present, swap, zero) \
> > ((wt) | ((file) << 1) | ((present) << 2) | ((swap) << 3) | ((zero) << 4))
> >
> >
> > bitmap = PM_SCAN_FLAGS(is_written, page && !PageAnon(page),
> > pte_present(ptent), is_swap_pte(ptent),
> > pte_present(ptent) && is_zero_pfn(pte_pfn(ptent)));
> Okay. Can you please confirm my assumptions:
> - A THP cannot be file backed. (PM_FILE isn't being set for THP case)
```
Currently THP only works for anonymous memory mappings and tmpfs/shmem.
But in the future it can expand to other filesystems.
```
https://www.kernel.org/doc/html/next/admin-guide/mm/transhuge.html
so THP can be "file backed".
> - A hole is also not file backed.
>
> A hole isn't present in memory. So its pfn would be zero. But as it isn't
> present, it shouldn't report zero page. Right? For hole::
>
> PM_SCAN_FLAGS(false, false, false, false, false)
This looks correct to me.
On 6/21/23 11:42 AM, Andrei Vagin wrote:
> On Mon, Jun 19, 2023 at 11:06:36AM +0500, Muhammad Usama Anjum wrote:
>> On 6/17/23 11:39 AM, Andrei Vagin wrote:
>>> On Thu, Jun 15, 2023 at 07:11:41PM +0500, Muhammad Usama Anjum wrote:
>>>> +static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
>>>> + unsigned long end, struct mm_walk *walk)
>>>> +{
>>>> + bool is_written, flush = false, is_interesting = true;
>>>> + struct pagemap_scan_private *p = walk->private;
>>>> + struct vm_area_struct *vma = walk->vma;
>>>> + unsigned long bitmap, addr = end;
>>>> + pte_t *pte, *orig_pte, ptent;
>>>> + spinlock_t *ptl;
>>>> + int ret = 0;
>>>> +
>>>> + arch_enter_lazy_mmu_mode();
>>>> +
>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>> + ptl = pmd_trans_huge_lock(pmd, vma);
>>>> + if (ptl) {
>>>> + unsigned long n_pages = (end - start)/PAGE_SIZE;
>>>> +
>>>> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
>>>> + n_pages = p->max_pages - p->found_pages;
>>>> +
>>>> + is_written = !is_pmd_uffd_wp(*pmd);
>>>> +
>>>> + /*
>>>> +		 * Break huge page into small pages if the WP operation
>>>> +		 * needs to be performed on only a portion of the huge page.
>>>> + */
>>>> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
>>>> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
>>>> + spin_unlock(ptl);
>>>> +
>>>> + split_huge_pmd(vma, pmd, start);
>>>> + goto process_smaller_pages;
>>>> + }
>>>> +
>>>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>>>> + pmd_present(*pmd), is_swap_pmd(*pmd));
>>>> +
>>>> + if (IS_PM_SCAN_GET(p->flags)) {
>>>> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
>>>> + if (is_interesting)
>>>> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
>>>> + }
>>>> +
>>>> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
>>>> + ret >= 0) {
>>>> + make_uffd_wp_pmd(vma, start, pmd);
>>>> + flush_tlb_range(vma, start, end);
>>>> + }
>>>> +
>>>> + spin_unlock(ptl);
>>>> +
>>>> + arch_leave_lazy_mmu_mode();
>>>> + return ret;
>>>> + }
>>>> +
>>>> +process_smaller_pages:
>>>> +#endif
>>>> +
>>>> + orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
>>>> + if (!pte) {
>>>
>>> Do we need to unlock ptl here?
>>>
>>> spin_unlock(ptl);
>> No, please look at these recently merged patches:
>> https://lore.kernel.org/all/[email protected]
>>
>>>
>>>> + walk->action = ACTION_AGAIN;
>>>> + return 0;
>>>> + }
>>>> +
>>>> + for (addr = start; addr < end && !ret; pte++, addr += PAGE_SIZE) {
>>>> + ptent = ptep_get(pte);
>>>> + is_written = !is_pte_uffd_wp(ptent);
>>>> +
>>>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>>>> + pte_present(ptent), is_swap_pte(ptent));
>>>
>>> The vma->vm_file check isn't correct in this case. You can look when
>>> pte_to_pagemap_entry sets PM_FILE. This flag is used to detect what
>>> pages have a file backing store and what pages are anonymous.
>> I'll update.
>>
>>>
>>> I was trying to integrate this new interface into CRIU and I found
>>> one more thing that is required. We need to detect zero pages.
>> Should we name it ZERO_PFN_PRESENT_PAGE to be exact or what?
>
> IMHO, ZERO_PFN_PRESENT_PAGE looks a bit monstrous.
> It looks like zero page is a proper noun in the kernel, so PAGE_IS_ZERO
> might be a good choice here, but it is up to you.
>
>>
>>>
>>> It should look something like this:
>>>
>>> #define PM_SCAN_FLAGS(wt, file, present, swap, zero) \
>>> ((wt) | ((file) << 1) | ((present) << 2) | ((swap) << 3) | ((zero) << 4))
>>>
>>>
>>> bitmap = PM_SCAN_FLAGS(is_written, page && !PageAnon(page),
>>> pte_present(ptent), is_swap_pte(ptent),
>>> pte_present(ptent) && is_zero_pfn(pte_pfn(ptent)));
>> Okay. Can you please confirm my assumptions:
>> - A THP cannot be file backed. (PM_FILE isn't being set for THP case)
>
> ```
> Currently THP only works for anonymous memory mappings and tmpfs/shmem.
> But in the future it can expand to other filesystems.
> ```
> https://www.kernel.org/doc/html/next/admin-guide/mm/transhuge.html
>
> so THP can be "file backed".
>
>> - A hole is also not file backed.
>>
>> A hole isn't present in memory. So its pfn would be zero. But as it isn't
>> present, it shouldn't report zero page. Right? For hole::
>>
>> PM_SCAN_FLAGS(false, false, false, false, false)
>
> This looks correct to me.
Thanks. I'll send the next version in a few minutes.
>
--
BR,
Muhammad Usama Anjum
On Wed, 21 Jun 2023 at 08:35, Muhammad Usama Anjum
<[email protected]> wrote:
> On 6/20/23 11:03 PM, Andrei Vagin wrote:
> ...
> >> +struct pagemap_scan_private {
> >> + struct page_region *vec_buf, cur_buf;
> >> + unsigned long long vec_buf_len, vec_buf_index, max_pages, found_pages, flags;
> >
> > should it be just unsigned long?
> These internal values are storing data coming from user in struct
> pm_scan_arg in which all variables are 64 bit(__u64) explicitly. This is
> why we have unsigned long long here. It is absolutely necessary.
vec_buf_len and vec_buf_index can only have values in the 0..512 range.
flags has only a few lower bits defined (this is checked on ioctl
entry) and max_pages can be limited to ULONG_MAX. Actually putting `if
(!max_pages || max_pages > ULONG_MAX) max_pages = ULONG_MAX` would
avoid having to check !max_pages during the walk.
Best Regards
Michał Mirosław
On Wed, Jun 21, 2023 at 11:34:54AM +0500, Muhammad Usama Anjum wrote:
> Hi,
>
> Thank you for your review.
>
> On 6/20/23 11:03 PM, Andrei Vagin wrote:
> ...
> >> +struct pagemap_scan_private {
> >> + struct page_region *vec_buf, cur_buf;
> >> + unsigned long long vec_buf_len, vec_buf_index, max_pages, found_pages, flags;
> >
> > should it be just unsigned long?
> These internal values are storing data coming from user in struct
> pm_scan_arg in which all variables are 64 bit(__u64) explicitly. This is
> why we have unsigned long long here. It is absolutely necessary.
It isn't necessary, because "unsigned long" is enough for all valid values.
>
> >
> >> + unsigned long long required_mask, anyof_mask, excluded_mask, return_mask;
> >> +};
> >> +
> >> +static inline bool is_pte_uffd_wp(pte_t pte)
> >> +{
> >> + return (pte_present(pte) && pte_uffd_wp(pte)) ||
> >> + pte_swp_uffd_wp_any(pte);
> >> +}
> >> +
> >> +static inline void make_uffd_wp_pte(struct vm_area_struct *vma,
> >> + unsigned long addr, pte_t *pte)
> >> +{
> >> + pte_t ptent = ptep_get(pte);
> >> +
> >> + if (pte_present(ptent)) {
> >> + pte_t old_pte;
> >> +
> >> + old_pte = ptep_modify_prot_start(vma, addr, pte);
> >> + ptent = pte_mkuffd_wp(ptent);
> >> + ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent);
> >> + } else if (is_swap_pte(ptent)) {
> >> + ptent = pte_swp_mkuffd_wp(ptent);
> >> + set_pte_at(vma->vm_mm, addr, pte, ptent);
> >> + } else {
> >> + set_pte_at(vma->vm_mm, addr, pte,
> >> + make_pte_marker(PTE_MARKER_UFFD_WP));
> >> + }
> >> +}
> >> +
> >> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >> +static inline bool is_pmd_uffd_wp(pmd_t pmd)
> >> +{
> >> + return (pmd_present(pmd) && pmd_uffd_wp(pmd)) ||
> >> + (is_swap_pmd(pmd) && pmd_swp_uffd_wp(pmd));
> >> +}
> >> +
> >> +static inline void make_uffd_wp_pmd(struct vm_area_struct *vma,
> >> + unsigned long addr, pmd_t *pmdp)
> >> +{
> >> + pmd_t old, pmd = *pmdp;
> >> +
> >> + if (pmd_present(pmd)) {
> >> + old = pmdp_invalidate_ad(vma, addr, pmdp);
> >> + pmd = pmd_mkuffd_wp(old);
> >> + set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
> >> + } else if (is_migration_entry(pmd_to_swp_entry(pmd))) {
> >> + pmd = pmd_swp_mkuffd_wp(pmd);
> >> + set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
> >> + }
> >> +}
> >> +#endif
> >> +
> >> +#ifdef CONFIG_HUGETLB_PAGE
> >> +static inline bool is_huge_pte_uffd_wp(pte_t pte)
> >> +{
> >> + return (pte_present(pte) && huge_pte_uffd_wp(pte)) ||
> >> + pte_swp_uffd_wp_any(pte);
> >> +}
> >> +
> >> +static inline void make_uffd_wp_huge_pte(struct vm_area_struct *vma,
> >> + unsigned long addr, pte_t *ptep,
> >> + pte_t ptent)
> >> +{
> >> + if (is_hugetlb_entry_hwpoisoned(ptent) || is_pte_marker(ptent))
> >> + return;
> >> +
> >> + if (is_hugetlb_entry_migration(ptent))
> >> + set_huge_pte_at(vma->vm_mm, addr, ptep,
> >> + pte_swp_mkuffd_wp(ptent));
> >> + else if (!huge_pte_none(ptent))
> >> + huge_ptep_modify_prot_commit(vma, addr, ptep, ptent,
> >> + huge_pte_mkuffd_wp(ptent));
> >> + else
> >> + set_huge_pte_at(vma->vm_mm, addr, ptep,
> >> + make_pte_marker(PTE_MARKER_UFFD_WP));
> >> +}
> >> +#endif
> >> +
> >> +static int pagemap_scan_test_walk(unsigned long start, unsigned long end,
> >> + struct mm_walk *walk)
> >> +{
> >> + struct pagemap_scan_private *p = walk->private;
> >> + struct vm_area_struct *vma = walk->vma;
> >> +
> >> + if ((p->flags & PM_SCAN_REQUIRE_UFFD) && (!userfaultfd_wp_async(vma) ||
> >> + !userfaultfd_wp_use_markers(vma)))
> >> + return -EPERM;
> >> +
> >> + if (vma->vm_flags & VM_PFNMAP)
> >> + return 1;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static int pagemap_scan_output(unsigned long bitmap,
> >> + struct pagemap_scan_private *p,
> >> + unsigned long addr, unsigned int n_pages)
> >> +{
> >> + struct page_region *cur_buf = &p->cur_buf;
> >> +
> >> + if (!n_pages)
> >> + return -EINVAL;
> >> +
> >> + bitmap &= p->return_mask;
> >> +
> >> + if (cur_buf->flags == bitmap &&
> >> + cur_buf->start + cur_buf->len * PAGE_SIZE == addr) {
> >> + cur_buf->len += n_pages;
> >> + p->found_pages += n_pages;
> >> + } else {
> >> + if (cur_buf->len) {
> >
> > I would add a comment that vec_buf_len has been decremented by one for
> > cur_buf.
> >
> >> + if (p->vec_buf_index >= p->vec_buf_len)
> >> + return PM_SCAN_BUFFER_FULL;
> >> +
> >> + memcpy(&p->vec_buf[p->vec_buf_index], cur_buf,
> >> + sizeof(*p->vec_buf));
> >> + p->vec_buf_index++;
> >> + }
> >> +
> >> + cur_buf->start = addr;
> >> + cur_buf->len = n_pages;
> >> + cur_buf->flags = bitmap;
> >> + p->found_pages += n_pages;
> >> + }
> >> +
> >> + if (p->found_pages == p->max_pages)
> >> + return PM_SCAN_FOUND_MAX_PAGES;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static bool pagemap_scan_is_interesting_page(unsigned long bitmap,
> >> + struct pagemap_scan_private *p)
> >> +{
> >> + if ((p->required_mask & bitmap) != p->required_mask)
> >> + return false;
> >> + if (p->anyof_mask && !(p->anyof_mask & bitmap))
> >> + return false;
> >> + if (p->excluded_mask & bitmap)
> >> + return false;
> >> +
> >> + return true;
> >> +}
> >> +
> >> +static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
> >> + unsigned long end, struct mm_walk *walk)
> >> +{
> >> + bool is_written, flush = false, is_interesting = true;
> >> + struct pagemap_scan_private *p = walk->private;
> >> + struct vm_area_struct *vma = walk->vma;
> >> + unsigned long bitmap, addr = end;
> >> + pte_t *pte, *orig_pte, ptent;
> >> + spinlock_t *ptl;
> >> + int ret = 0;
> >> +
> >> + arch_enter_lazy_mmu_mode();
> >> +
> >> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >> + ptl = pmd_trans_huge_lock(pmd, vma);
> >> + if (ptl) {
> >> + unsigned long n_pages = (end - start)/PAGE_SIZE;
> >> +
> >> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
> >> + n_pages = p->max_pages - p->found_pages;
> >> +
> >> + is_written = !is_pmd_uffd_wp(*pmd);
> >> +
> >> + /*
> >> +		 * Break huge page into small pages if the WP operation
> >> +		 * needs to be performed on only a portion of the huge page.
> >> + */
> >> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
> >> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
> >
> > It might be worth stopping rather than splitting the huge page.
> The kernel can collapse groups of normal pages into a THP, depending on
> the configuration. We don't want to reject the ioctl just because this
> collapse happened. We don't want to stop; splitting is the best option
> here and it works fine.
>
> >
> >> + spin_unlock(ptl);
> >> +
> >> + split_huge_pmd(vma, pmd, start);
> >> + goto process_smaller_pages;
> >> + }
> >> +
> >> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
> >> + pmd_present(*pmd), is_swap_pmd(*pmd));
> >> +
> >> + if (IS_PM_SCAN_GET(p->flags)) {
> >> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
> >> + if (is_interesting)
> >> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
> >> + }
> >> +
> >> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
> >> + ret >= 0) {
> >> + make_uffd_wp_pmd(vma, start, pmd);
> >> + flush_tlb_range(vma, start, end);
> >> + }
> >> +
> >> + spin_unlock(ptl);
> >> +
> >> + arch_leave_lazy_mmu_mode();
> >> + return ret;
> >> + }
> >> +
> >> +process_smaller_pages:
> >> +#endif
> >> +
> >> + orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
> >> + if (!pte) {
> >> + walk->action = ACTION_AGAIN;
> >> + return 0;
> >> + }
> >> +
> >> + for (addr = start; addr < end && !ret; pte++, addr += PAGE_SIZE) {
> >> + ptent = ptep_get(pte);
> >> + is_written = !is_pte_uffd_wp(ptent);
> >> +
> >> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
> >> + pte_present(ptent), is_swap_pte(ptent));
> >> +
> >> + if (IS_PM_SCAN_GET(p->flags)) {
> >> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
> >> + if (is_interesting)
> >> + ret = pagemap_scan_output(bitmap, p, addr, 1);
> >> + }
> >> +
> >> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
> >> + ret >= 0) {
> >> + make_uffd_wp_pte(vma, addr, pte);
> >> + flush = true;
> >> + }
> >> + }
> >> +
> >> + if (flush)
> >> + flush_tlb_range(vma, start, addr);
> >> +
> >> + pte_unmap_unlock(orig_pte, ptl);
> >> + arch_leave_lazy_mmu_mode();
> >> +
> >> + cond_resched();
> >> + return ret;
> >> +}
> >> +
> >> +#ifdef CONFIG_HUGETLB_PAGE
> >> +static int pagemap_scan_hugetlb_entry(pte_t *ptep, unsigned long hmask,
> >> + unsigned long start, unsigned long end,
> >> + struct mm_walk *walk)
> >> +{
> >> + unsigned long n_pages = (end - start)/PAGE_SIZE;
> >> + struct pagemap_scan_private *p = walk->private;
> >> + struct vm_area_struct *vma = walk->vma;
> >> + bool is_written, is_interesting = true;
> >> + struct hstate *h = hstate_vma(vma);
> >> + unsigned long bitmap;
> >> + spinlock_t *ptl;
> >> + int ret = 0;
> >> + pte_t ptent;
> >> +
> >> + if (IS_PM_SCAN_WP(p->flags) && n_pages < HPAGE_SIZE/PAGE_SIZE)
> >> + return -EINVAL;
> >> +
> >> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
> >> + n_pages = p->max_pages - p->found_pages;
> >> +
> >> + if (IS_PM_SCAN_WP(p->flags)) {
> >> + i_mmap_lock_write(vma->vm_file->f_mapping);
> >> + ptl = huge_pte_lock(h, vma->vm_mm, ptep);
> >> + }
> >> +
> >> + ptent = huge_ptep_get(ptep);
> >> + is_written = !is_huge_pte_uffd_wp(ptent);
> >> +
> >> + /*
> >> + * Partial hugetlb page clear isn't supported
> >> + */
> >> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
> >> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
> >
> > n_pages depends on found_pages that is incremented in
> > pagemap_scan_output. Should we do this check right before calling
> > pagemap_scan_output?
> This condition is acting like a check before pagemap_scan_output() already.
> n_pages tells us how many empty pages are left to find. But if making
> n_pages smaller means we cannot cover the whole hugetlb page, we should
> just return an error, as we don't split the hugetlb page.
I understand that, but here you don't know whether pagemap_scan_output
will be called for this page or not. It means that an error can be
reported due to an unrelated page, and this doesn't look right.
>
> >
> >> + ret = -ENOSPC;
> >
> > should it be PM_SCAN_FOUND_MAX_PAGES? Otherwise, it fails the ioctl even
> > if it has handled some pages already.
> It is a double-edged sword. If we don't return an error, the user will
> never know that he needs to specify a larger max_pages or that his output
> buffer is too small to cover the entire range. The real purpose here is
> that the user becomes aware that he needs to specify the full hugetlb
> range, and found pages should cover the entire range as well.
but if PM_SCAN_OP_WP is set, this error will be fatal, because some
pages can be marked write-protected, but they are not reported to
user-space.
I think the ioctl has to report back the end address of the handled
region. It is like read and write syscalls that can return less data
than requested.
>
> >
> >> + goto unlock_and_return;
> >> + }
> >> +
> >> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
> >> + pte_present(ptent), is_swap_pte(ptent));
> >> +
> >> + if (IS_PM_SCAN_GET(p->flags)) {
> >> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
> >> + if (is_interesting)
> >> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
> >> + }
> >> +
> >> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
> >> + ret >= 0) {
> >> + make_uffd_wp_huge_pte(vma, start, ptep, ptent);
> >> + flush_hugetlb_tlb_range(vma, start, end);
> >> + }
> >> +
> >> +unlock_and_return:
> >> + if (IS_PM_SCAN_WP(p->flags)) {
> >> + spin_unlock(ptl);
> >> + i_mmap_unlock_write(vma->vm_file->f_mapping);
> >> + }
> >> +
> >> + return ret;
> >> +}
> >> +#else
> >> +#define pagemap_scan_hugetlb_entry NULL
> >> +#endif
> >> +
> >> +static int pagemap_scan_pte_hole(unsigned long addr, unsigned long end,
> >> + int depth, struct mm_walk *walk)
> >> +{
> >> + unsigned long n_pages = (end - addr)/PAGE_SIZE;
> >> + struct pagemap_scan_private *p = walk->private;
> >> + struct vm_area_struct *vma = walk->vma;
> >> + int ret = 0;
> >> +
> >> + if (!vma || !IS_PM_SCAN_GET(p->flags))
> >> + return 0;
> >> +
> >> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
> >> + n_pages = p->max_pages - p->found_pages;
> >> +
> >> + ret = pagemap_scan_output(PM_SCAN_FLAGS(false, (bool)vma->vm_file,
> >> + false, false), p, addr, n_pages);
> >> +
> >> + return ret;
> >> +}
> >> +
> >> +static const struct mm_walk_ops pagemap_scan_ops = {
> >> + .test_walk = pagemap_scan_test_walk,
> >> + .pmd_entry = pagemap_scan_pmd_entry,
> >> + .pte_hole = pagemap_scan_pte_hole,
> >> + .hugetlb_entry = pagemap_scan_hugetlb_entry,
> >> +};
> >> +
> >> +static int pagemap_scan_args_valid(struct pm_scan_arg *arg, unsigned long start,
> >> + struct page_region __user *vec)
> >> +{
> >> + /* Detect illegal size, flags, len and masks */
> >> + if (arg->size != sizeof(struct pm_scan_arg))
> >> + return -EINVAL;
> >> + if (!arg->flags)
> >> + return -EINVAL;
> >> + if (arg->flags & ~PM_SCAN_OPS)
> >> + return -EINVAL;
> >> + if (!arg->len)
> >> + return -EINVAL;
> >> + if ((arg->required_mask | arg->anyof_mask | arg->excluded_mask |
> >> + arg->return_mask) & ~PM_SCAN_BITS_ALL)
> >> + return -EINVAL;
> >> + if (!arg->required_mask && !arg->anyof_mask &&
> >> + !arg->excluded_mask)
> >> + return -EINVAL;
> >> + if (!arg->return_mask)
> >> + return -EINVAL;
> >> +
> >> + /* Validate memory range */
> >> + if (!IS_ALIGNED(start, PAGE_SIZE))
> >> + return -EINVAL;
> >> + if (!access_ok((void __user *)start, arg->len))
> >> + return -EFAULT;
> >> +
> >> + if (IS_PM_SCAN_GET(arg->flags)) {
> >> + if (!arg->vec)
> >> + return -EINVAL;
> >> + if (arg->vec_len == 0)
> >> + return -EINVAL;
> >> + if (!access_ok((void __user *)arg->vec,
> >> + arg->vec_len * sizeof(struct page_region)))
> >> + return -EFAULT;
> >> + }
> >> +
> >> + if (IS_PM_SCAN_WP(arg->flags) && !IS_PM_SCAN_GET(arg->flags) &&
> >> + arg->max_pages)
> >> + return -EINVAL;
> >> +
> >> + return 0;
> >> +}
> >> +
> >> +static long do_pagemap_scan(struct mm_struct *mm, unsigned long __arg)
> >> +{
> >> + struct pm_scan_arg __user *uarg = (struct pm_scan_arg __user *)__arg;
> >> + unsigned long long start, end, walk_start, walk_end;
> >> + unsigned long long empty_slots, vec_index = 0;
> >> + struct mmu_notifier_range range;
> >> + struct page_region __user *vec;
> >> + struct pagemap_scan_private p;
> >> + struct pm_scan_arg arg;
> >> + int ret = 0;
> >> +
> >> + if (copy_from_user(&arg, uarg, sizeof(arg)))
> >> + return -EFAULT;
> >> +
> >> + start = untagged_addr((unsigned long)arg.start);
> >> + vec = (struct page_region *)untagged_addr((unsigned long)arg.vec);
> >> +
> >> + ret = pagemap_scan_args_valid(&arg, start, vec);
> >> + if (ret)
> >> + return ret;
> >> +
> >> + end = start + arg.len;
> >> + p.max_pages = arg.max_pages;
> >> + p.found_pages = 0;
> >> + p.required_mask = arg.required_mask;
> >> + p.anyof_mask = arg.anyof_mask;
> >> + p.excluded_mask = arg.excluded_mask;
> >> + p.return_mask = arg.return_mask;
> >> + p.flags = arg.flags;
> >> + p.flags |= ((p.required_mask | p.anyof_mask | p.excluded_mask) &
> >> + PAGE_IS_WRITTEN) ? PM_SCAN_REQUIRE_UFFD : 0;
> >> + p.cur_buf.start = p.cur_buf.len = p.cur_buf.flags = 0;
> >> + p.vec_buf = NULL;
> >> + p.vec_buf_len = PAGEMAP_WALK_SIZE >> PAGE_SHIFT;
> >> +
> >> + /*
> >> +	 * Allocate a smaller buffer to get output from inside the page
> >> +	 * walk functions and walk the page range in PAGEMAP_WALK_SIZE
> >> +	 * chunks. As we want to return the output to the user in compact
> >> +	 * form, where no two adjacent regions have the same flags, keep
> >> +	 * the latest element in p.cur_buf between walks and copy
> >> +	 * p.cur_buf to the user buffer at the end of the walk.
> >> + */
> >> + if (IS_PM_SCAN_GET(p.flags)) {
> >> + p.vec_buf = kmalloc_array(p.vec_buf_len, sizeof(*p.vec_buf),
> >> + GFP_KERNEL);
> >> + if (!p.vec_buf)
> >> + return -ENOMEM;
> >> + }
> >> +
> >> + if (IS_PM_SCAN_WP(p.flags)) {
> >> + mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
> >> + mm, start, end);
> >> + mmu_notifier_invalidate_range_start(&range);
> >> + }
> >> +
> >> + walk_start = walk_end = start;
> >> + while (walk_end < end && !ret) {
> >
> > How long can it take to run this loop? Should we interrupt it if a
> > signal has been queued?
> I'm following mincore and pagemap_read here. There is no such thing there.
> The IOCTL performs what the user has requested.
In case of pagemap, its buffer is limited by MAX_RW_COUNT (0x7ffff000),
so it can handle at most 0xffffe00 pages in one iteration. Do you have any
limits on the input parameters?
> If the execution time is a
> concern, the user should have called the IOCTL on a shorter address range.
It doesn't work this way. There can be many users, and signals have to be
delivered in a reasonable time. One obvious example of when a signal
has to be delivered ASAP is OOM.
>
> >
> >> + if (IS_PM_SCAN_GET(p.flags)) {
> >> + p.vec_buf_index = 0;
> >> +
> >> + /*
> >> + * All data is copied to cur_buf first. When more data
> >> + * is found, we push cur_buf to vec_buf and copy new
> >> + * data to cur_buf. Subtract 1 from length as the
> >> + * index of cur_buf isn't counted in length.
> >> + */
> >> + empty_slots = arg.vec_len - vec_index;
> >> + p.vec_buf_len = min(p.vec_buf_len, empty_slots - 1);
> >> + }
> >> +
> >> + walk_end = (walk_start + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK;
> >> + if (walk_end > end)
> >> + walk_end = end;
> >> +
> >> + ret = mmap_read_lock_killable(mm);
> >> + if (ret)
> >> + goto free_data;
> >
> > This will fail the ioctl, but that isn't acceptable if any pages have
> > been handled.
> I'm following pagemap_read() here.
pagemap_read() doesn't change states of pages, so it is safe to return
errors from it. In this case, such errors will be fatal, because you
mark some pages as write-protected but don't report them to the user.
>
> >
> >
> >> + ret = walk_page_range(mm, walk_start, walk_end,
> >> + &pagemap_scan_ops, &p);
> >> + mmap_read_unlock(mm);
> >> +
> >
> > ret can be ENOSPC returned from pagemap_scan_pmd_entry, but we should
> > not return this error if any pages have been handled.
> ENOSPC isn't being returned from pagemap_scan_pmd_entry now.
I mean pagemap_scan_hugetlb_entry.
>
> >
> >> + if (ret && ret != PM_SCAN_BUFFER_FULL &&
> >> + ret != PM_SCAN_FOUND_MAX_PAGES)
> >> + goto free_data;
> >> +
> >> + walk_start = walk_end;
> >> + if (IS_PM_SCAN_GET(p.flags) && p.vec_buf_index) {
> >> + if (copy_to_user(&vec[vec_index], p.vec_buf,
> >> + p.vec_buf_index * sizeof(*p.vec_buf))) {
> >> + /*
> >> + * Return error even though the OP succeeded
> >> + */
> >> + ret = -EFAULT;
> >> + goto free_data;
> >> + }
> >> + vec_index += p.vec_buf_index;
> >> + }
> >> + }
> >> +
> >> + if (p.cur_buf.len) {
> >> + if (copy_to_user(&vec[vec_index], &p.cur_buf, sizeof(p.cur_buf))) {
> >> + ret = -EFAULT;
> >> + goto free_data;
> >> + }
> >> + vec_index++;
> >> + }
> >> +
> >> + ret = vec_index;
> >> +
> >> +free_data:
> >> + if (IS_PM_SCAN_WP(p.flags))
> >> + mmu_notifier_invalidate_range_end(&range);
> >> +
> >> + kfree(p.vec_buf);
> >> + return ret;
> >> +}
> >> +
> >> +static long do_pagemap_cmd(struct file *file, unsigned int cmd,
> >> + unsigned long arg)
> >> +{
> >> + struct mm_struct *mm = file->private_data;
> >> +
> >> + switch (cmd) {
> >> + case PAGEMAP_SCAN:
> >> + return do_pagemap_scan(mm, arg);
> >> +
> >> + default:
> >> + return -EINVAL;
> >> + }
> >> +}
> >> +
> >> const struct file_operations proc_pagemap_operations = {
> >> .llseek = mem_lseek, /* borrow this */
> >> .read = pagemap_read,
> >> .open = pagemap_open,
> >> .release = pagemap_release,
> >> + .unlocked_ioctl = do_pagemap_cmd,
> >> + .compat_ioctl = do_pagemap_cmd,
> >> };
> >> #endif /* CONFIG_PROC_PAGE_MONITOR */
> >>
> >> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> >> index beb7c63d2871..a6e773c3e2b4 100644
> >> --- a/include/linux/hugetlb.h
> >> +++ b/include/linux/hugetlb.h
> >> @@ -261,6 +261,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
> >> unsigned long cp_flags);
> >>
> >> bool is_hugetlb_entry_migration(pte_t pte);
> >> +bool is_hugetlb_entry_hwpoisoned(pte_t pte);
> >> void hugetlb_unshare_all_pmds(struct vm_area_struct *vma);
> >>
> >> #else /* !CONFIG_HUGETLB_PAGE */
> >> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> >> index b7b56871029c..a9fb44db84a3 100644
> >> --- a/include/uapi/linux/fs.h
> >> +++ b/include/uapi/linux/fs.h
> >> @@ -305,4 +305,57 @@ typedef int __bitwise __kernel_rwf_t;
> >> #define RWF_SUPPORTED (RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
> >> RWF_APPEND)
> >>
> >> +/* Pagemap ioctl */
> >> +#define PAGEMAP_SCAN _IOWR('f', 16, struct pm_scan_arg)
> >> +
> >> +/* Bits are set in flags of the page_region and masks in pm_scan_args */
> >> +#define PAGE_IS_WRITTEN (1 << 0)
> >> +#define PAGE_IS_FILE (1 << 1)
> >> +#define PAGE_IS_PRESENT (1 << 2)
> >> +#define PAGE_IS_SWAPPED (1 << 3)
> >> +
> >> +/*
> >> + * struct page_region - Page region with flags
> >> + * @start: Start of the region
> >> + * @len: Length of the region in pages
> >> + * @flags:		Bits set for the region
> >> + */
> >> +struct page_region {
> >> + __u64 start;
> >> + __u64 len;
> >> + __u64 flags;
> >> +};
> >> +
> >> +/*
> >> + * struct pm_scan_arg - Pagemap ioctl argument
> >> + * @size: Size of the structure
> >> + * @flags: Flags for the IOCTL
> >> + * @start: Starting address of the region
> >> + * @len: Length of the region (All the pages in this length are included)
> >> + * @vec: Address of page_region struct array for output
> >> + * @vec_len: Length of the page_region struct array
> >> + * @max_pages: Optional max return pages
> >> + * @required_mask: Required mask - All of these bits have to be set in the PTE
> >> + * @anyof_mask: Any mask - Any of these bits are set in the PTE
> >> + * @excluded_mask: Exclude mask - None of these bits are set in the PTE
> >> + * @return_mask: Bits that are to be reported in page_region
> >> + */
> >> +struct pm_scan_arg {
> >> + __u64 size;
> >> + __u64 flags;
> >> + __u64 start;
> >> + __u64 len;
> >> + __u64 vec;
> >> + __u64 vec_len;
> >> + __u64 max_pages;
> >> + __u64 required_mask;
> >> + __u64 anyof_mask;
> >> + __u64 excluded_mask;
> >> + __u64 return_mask;
> >> +};
> >> +
> >> +/* Supported flags */
> >> +#define PM_SCAN_OP_GET (1 << 0)
> >> +#define PM_SCAN_OP_WP (1 << 1)
> >> +
> >> #endif /* _UAPI_LINUX_FS_H */
> >> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> >> index 0db13167b1ee..7e60f0f3fd03 100644
> >> --- a/mm/hugetlb.c
> >> +++ b/mm/hugetlb.c
> >> @@ -4980,7 +4980,7 @@ bool is_hugetlb_entry_migration(pte_t pte)
> >> return false;
> >> }
> >>
> >> -static bool is_hugetlb_entry_hwpoisoned(pte_t pte)
> >> +bool is_hugetlb_entry_hwpoisoned(pte_t pte)
> >> {
> >> swp_entry_t swp;
> >>
> >> --
> >> 2.39.2
> >>
>
> --
> BR,
> Muhammad Usama Anjum
On 6/21/23 6:29 PM, Michał Mirosław wrote:
> On Wed, 21 Jun 2023 at 08:35, Muhammad Usama Anjum
> <[email protected]> wrote:
>> On 6/20/23 11:03 PM, Andrei Vagin wrote:
>> ...
>>>> +struct pagemap_scan_private {
>>>> + struct page_region *vec_buf, cur_buf;
>>>> + unsigned long long vec_buf_len, vec_buf_index, max_pages, found_pages, flags;
>>>
>>> should it be just unsigned long?
>> These internal values are storing data coming from user in struct
>> pm_scan_arg in which all variables are 64 bit(__u64) explicitly. This is
>> why we have unsigned long long here. It is absolutely necessary.
>
> vec_buf_len and vec_buf_index can only have values in 0..512 range.
> flags has only a few lower bits defined (this is checked on ioctl
> entry) and max_pages can be limited to ULONG_MAX. Actually putting `if
> (!max_pages || max_pages > ULONG_MAX) max_pages = ULONG_MAX` would
> avoid having to check !max_pages during the walk.
I'll update.
>
> Best Regards
> Michał Mirosław
--
BR,
Muhammad Usama Anjum
Thanks for the reply.
On 6/22/23 12:45 AM, Andrei Vagin wrote:
> On Wed, Jun 21, 2023 at 11:34:54AM +0500, Muhammad Usama Anjum wrote:
>> Hi,
>>
>> Thank you for your review.
>>
>> On 6/20/23 11:03 PM, Andrei Vagin wrote:
>> ...
>>>> +struct pagemap_scan_private {
>>>> + struct page_region *vec_buf, cur_buf;
>>>> + unsigned long long vec_buf_len, vec_buf_index, max_pages, found_pages, flags;
>>>
>>> should it be just unsigned long?
>> These internal values are storing data coming from user in struct
>> pm_scan_arg in which all variables are 64 bit(__u64) explicitly. This is
>> why we have unsigned long long here. It is absolutely necessary.
>
> It isn't necessary, because "unsigned long" is enough for all valid values
I guess I can make some of the variables here just unsigned long.
>>
>>>
>>>> + unsigned long long required_mask, anyof_mask, excluded_mask, return_mask;
>>>> +};
>>>> +
>>>> +static inline bool is_pte_uffd_wp(pte_t pte)
>>>> +{
>>>> + return (pte_present(pte) && pte_uffd_wp(pte)) ||
>>>> + pte_swp_uffd_wp_any(pte);
>>>> +}
>>>> +
>>>> +static inline void make_uffd_wp_pte(struct vm_area_struct *vma,
>>>> + unsigned long addr, pte_t *pte)
>>>> +{
>>>> + pte_t ptent = ptep_get(pte);
>>>> +
>>>> + if (pte_present(ptent)) {
>>>> + pte_t old_pte;
>>>> +
>>>> + old_pte = ptep_modify_prot_start(vma, addr, pte);
>>>> + ptent = pte_mkuffd_wp(ptent);
>>>> + ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent);
>>>> + } else if (is_swap_pte(ptent)) {
>>>> + ptent = pte_swp_mkuffd_wp(ptent);
>>>> + set_pte_at(vma->vm_mm, addr, pte, ptent);
>>>> + } else {
>>>> + set_pte_at(vma->vm_mm, addr, pte,
>>>> + make_pte_marker(PTE_MARKER_UFFD_WP));
>>>> + }
>>>> +}
>>>> +
>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>> +static inline bool is_pmd_uffd_wp(pmd_t pmd)
>>>> +{
>>>> + return (pmd_present(pmd) && pmd_uffd_wp(pmd)) ||
>>>> + (is_swap_pmd(pmd) && pmd_swp_uffd_wp(pmd));
>>>> +}
>>>> +
>>>> +static inline void make_uffd_wp_pmd(struct vm_area_struct *vma,
>>>> + unsigned long addr, pmd_t *pmdp)
>>>> +{
>>>> + pmd_t old, pmd = *pmdp;
>>>> +
>>>> + if (pmd_present(pmd)) {
>>>> + old = pmdp_invalidate_ad(vma, addr, pmdp);
>>>> + pmd = pmd_mkuffd_wp(old);
>>>> + set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
>>>> + } else if (is_migration_entry(pmd_to_swp_entry(pmd))) {
>>>> + pmd = pmd_swp_mkuffd_wp(pmd);
>>>> + set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
>>>> + }
>>>> +}
>>>> +#endif
>>>> +
>>>> +#ifdef CONFIG_HUGETLB_PAGE
>>>> +static inline bool is_huge_pte_uffd_wp(pte_t pte)
>>>> +{
>>>> + return (pte_present(pte) && huge_pte_uffd_wp(pte)) ||
>>>> + pte_swp_uffd_wp_any(pte);
>>>> +}
>>>> +
>>>> +static inline void make_uffd_wp_huge_pte(struct vm_area_struct *vma,
>>>> + unsigned long addr, pte_t *ptep,
>>>> + pte_t ptent)
>>>> +{
>>>> + if (is_hugetlb_entry_hwpoisoned(ptent) || is_pte_marker(ptent))
>>>> + return;
>>>> +
>>>> + if (is_hugetlb_entry_migration(ptent))
>>>> + set_huge_pte_at(vma->vm_mm, addr, ptep,
>>>> + pte_swp_mkuffd_wp(ptent));
>>>> + else if (!huge_pte_none(ptent))
>>>> + huge_ptep_modify_prot_commit(vma, addr, ptep, ptent,
>>>> + huge_pte_mkuffd_wp(ptent));
>>>> + else
>>>> + set_huge_pte_at(vma->vm_mm, addr, ptep,
>>>> + make_pte_marker(PTE_MARKER_UFFD_WP));
>>>> +}
>>>> +#endif
>>>> +
>>>> +static int pagemap_scan_test_walk(unsigned long start, unsigned long end,
>>>> + struct mm_walk *walk)
>>>> +{
>>>> + struct pagemap_scan_private *p = walk->private;
>>>> + struct vm_area_struct *vma = walk->vma;
>>>> +
>>>> + if ((p->flags & PM_SCAN_REQUIRE_UFFD) && (!userfaultfd_wp_async(vma) ||
>>>> + !userfaultfd_wp_use_markers(vma)))
>>>> + return -EPERM;
>>>> +
>>>> + if (vma->vm_flags & VM_PFNMAP)
>>>> + return 1;
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +static int pagemap_scan_output(unsigned long bitmap,
>>>> + struct pagemap_scan_private *p,
>>>> + unsigned long addr, unsigned int n_pages)
>>>> +{
>>>> + struct page_region *cur_buf = &p->cur_buf;
>>>> +
>>>> + if (!n_pages)
>>>> + return -EINVAL;
>>>> +
>>>> + bitmap &= p->return_mask;
>>>> +
>>>> + if (cur_buf->flags == bitmap &&
>>>> + cur_buf->start + cur_buf->len * PAGE_SIZE == addr) {
>>>> + cur_buf->len += n_pages;
>>>> + p->found_pages += n_pages;
>>>> + } else {
>>>> + if (cur_buf->len) {
>>>
>>> I would add a comment that vec_buf_len has been decremented by one for
>>> cur_buf.
>>>
>>>> + if (p->vec_buf_index >= p->vec_buf_len)
>>>> + return PM_SCAN_BUFFER_FULL;
>>>> +
>>>> + memcpy(&p->vec_buf[p->vec_buf_index], cur_buf,
>>>> + sizeof(*p->vec_buf));
>>>> + p->vec_buf_index++;
>>>> + }
>>>> +
>>>> + cur_buf->start = addr;
>>>> + cur_buf->len = n_pages;
>>>> + cur_buf->flags = bitmap;
>>>> + p->found_pages += n_pages;
>>>> + }
>>>> +
>>>> + if (p->found_pages == p->max_pages)
>>>> + return PM_SCAN_FOUND_MAX_PAGES;
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +static bool pagemap_scan_is_interesting_page(unsigned long bitmap,
>>>> + struct pagemap_scan_private *p)
>>>> +{
>>>> + if ((p->required_mask & bitmap) != p->required_mask)
>>>> + return false;
>>>> + if (p->anyof_mask && !(p->anyof_mask & bitmap))
>>>> + return false;
>>>> + if (p->excluded_mask & bitmap)
>>>> + return false;
>>>> +
>>>> + return true;
>>>> +}
>>>> +
>>>> +static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
>>>> + unsigned long end, struct mm_walk *walk)
>>>> +{
>>>> + bool is_written, flush = false, is_interesting = true;
>>>> + struct pagemap_scan_private *p = walk->private;
>>>> + struct vm_area_struct *vma = walk->vma;
>>>> + unsigned long bitmap, addr = end;
>>>> + pte_t *pte, *orig_pte, ptent;
>>>> + spinlock_t *ptl;
>>>> + int ret = 0;
>>>> +
>>>> + arch_enter_lazy_mmu_mode();
>>>> +
>>>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>>>> + ptl = pmd_trans_huge_lock(pmd, vma);
>>>> + if (ptl) {
>>>> + unsigned long n_pages = (end - start)/PAGE_SIZE;
>>>> +
>>>> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
>>>> + n_pages = p->max_pages - p->found_pages;
>>>> +
>>>> + is_written = !is_pmd_uffd_wp(*pmd);
>>>> +
>>>> + /*
>>>> +		 * Break the huge page into small pages if the WP operation
>>>> +		 * needs to be performed on only a portion of the huge page.
>>>> + */
>>>> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
>>>> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
>>>
>>> It might be worth stopping rather than splitting the huge page.
>> Sometimes normal pages can be grouped into a THP by the kernel depending
>> on the configuration. We don't want to reject the ioctl just because this
>> folding operation happened. We don't want to stop; rather, splitting is
>> the best option and it works fine.
>>
>>>
>>>> + spin_unlock(ptl);
>>>> +
>>>> + split_huge_pmd(vma, pmd, start);
>>>> + goto process_smaller_pages;
>>>> + }
>>>> +
>>>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>>>> + pmd_present(*pmd), is_swap_pmd(*pmd));
>>>> +
>>>> + if (IS_PM_SCAN_GET(p->flags)) {
>>>> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
>>>> + if (is_interesting)
>>>> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
>>>> + }
>>>> +
>>>> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
>>>> + ret >= 0) {
>>>> + make_uffd_wp_pmd(vma, start, pmd);
>>>> + flush_tlb_range(vma, start, end);
>>>> + }
>>>> +
>>>> + spin_unlock(ptl);
>>>> +
>>>> + arch_leave_lazy_mmu_mode();
>>>> + return ret;
>>>> + }
>>>> +
>>>> +process_smaller_pages:
>>>> +#endif
>>>> +
>>>> + orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
>>>> + if (!pte) {
>>>> + walk->action = ACTION_AGAIN;
>>>> + return 0;
>>>> + }
>>>> +
>>>> + for (addr = start; addr < end && !ret; pte++, addr += PAGE_SIZE) {
>>>> + ptent = ptep_get(pte);
>>>> + is_written = !is_pte_uffd_wp(ptent);
>>>> +
>>>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>>>> + pte_present(ptent), is_swap_pte(ptent));
>>>> +
>>>> + if (IS_PM_SCAN_GET(p->flags)) {
>>>> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
>>>> + if (is_interesting)
>>>> + ret = pagemap_scan_output(bitmap, p, addr, 1);
>>>> + }
>>>> +
>>>> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
>>>> + ret >= 0) {
>>>> + make_uffd_wp_pte(vma, addr, pte);
>>>> + flush = true;
>>>> + }
>>>> + }
>>>> +
>>>> + if (flush)
>>>> + flush_tlb_range(vma, start, addr);
>>>> +
>>>> + pte_unmap_unlock(orig_pte, ptl);
>>>> + arch_leave_lazy_mmu_mode();
>>>> +
>>>> + cond_resched();
>>>> + return ret;
>>>> +}
>>>> +
>>>> +#ifdef CONFIG_HUGETLB_PAGE
>>>> +static int pagemap_scan_hugetlb_entry(pte_t *ptep, unsigned long hmask,
>>>> + unsigned long start, unsigned long end,
>>>> + struct mm_walk *walk)
>>>> +{
>>>> + unsigned long n_pages = (end - start)/PAGE_SIZE;
>>>> + struct pagemap_scan_private *p = walk->private;
>>>> + struct vm_area_struct *vma = walk->vma;
>>>> + bool is_written, is_interesting = true;
>>>> + struct hstate *h = hstate_vma(vma);
>>>> + unsigned long bitmap;
>>>> + spinlock_t *ptl;
>>>> + int ret = 0;
>>>> + pte_t ptent;
>>>> +
>>>> + if (IS_PM_SCAN_WP(p->flags) && n_pages < HPAGE_SIZE/PAGE_SIZE)
>>>> + return -EINVAL;
>>>> +
>>>> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
>>>> + n_pages = p->max_pages - p->found_pages;
>>>> +
>>>> + if (IS_PM_SCAN_WP(p->flags)) {
>>>> + i_mmap_lock_write(vma->vm_file->f_mapping);
>>>> + ptl = huge_pte_lock(h, vma->vm_mm, ptep);
>>>> + }
>>>> +
>>>> + ptent = huge_ptep_get(ptep);
>>>> + is_written = !is_huge_pte_uffd_wp(ptent);
>>>> +
>>>> + /*
>>>> + * Partial hugetlb page clear isn't supported
>>>> + */
>>>> + if (is_written && IS_PM_SCAN_WP(p->flags) &&
>>>> + n_pages < HPAGE_SIZE/PAGE_SIZE) {
>>>
>>> n_pages depends on found_pages that is incremented in
>>> pagemap_scan_output. Should we do this check right before calling
>>> pagemap_scan_output?
>> This condition is acting like a check before pagemap_scan_output() already.
>> n_pages tells us how many empty pages are left to find. But if making
>> n_pages smaller means we cannot cover the whole hugetlb page, we should
>> just return an error, as we don't split the hugetlb page.
>
> I understand that, but here you don't know whether pagemap_scan_output
> will be called for this page or not. It means that an error can be
> reported due to an unrelated page, and this doesn't look right.
>
>>
>>>
>>>> + ret = -ENOSPC;
>>>
>>> should it be PM_SCAN_FOUND_MAX_PAGES? Otherwise, it fails the ioctl even
>>> if it has handled some pages already.
>> It is a double-edged sword. If we don't return an error, the user will
>> never know that he needs to specify a larger max_pages or that his output
>> buffer is too small to cover the entire range. The real purpose here is
>> that the user becomes aware that he needs to specify the full hugetlb
>> range, and found pages should cover the entire range as well.
>
> but if PM_SCAN_OP_WP is set, this error will be fatal, because some
> pages can be marked write-protected, but they are not reported to
> user-space.
>
> I think the ioctl has to report back the end address of the handled
> region. It is like read and write syscalls that can return less data
> than requested.
This is a good point. I'll abort the walk here instead of returning the
error to the user.
>
>>
>>>
>>>> + goto unlock_and_return;
>>>> + }
>>>> +
>>>> + bitmap = PM_SCAN_FLAGS(is_written, (bool)vma->vm_file,
>>>> + pte_present(ptent), is_swap_pte(ptent));
>>>> +
>>>> + if (IS_PM_SCAN_GET(p->flags)) {
>>>> + is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
>>>> + if (is_interesting)
>>>> + ret = pagemap_scan_output(bitmap, p, start, n_pages);
>>>> + }
>>>> +
>>>> + if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
>>>> + ret >= 0) {
>>>> + make_uffd_wp_huge_pte(vma, start, ptep, ptent);
>>>> + flush_hugetlb_tlb_range(vma, start, end);
>>>> + }
>>>> +
>>>> +unlock_and_return:
>>>> + if (IS_PM_SCAN_WP(p->flags)) {
>>>> + spin_unlock(ptl);
>>>> + i_mmap_unlock_write(vma->vm_file->f_mapping);
>>>> + }
>>>> +
>>>> + return ret;
>>>> +}
>>>> +#else
>>>> +#define pagemap_scan_hugetlb_entry NULL
>>>> +#endif
>>>> +
>>>> +static int pagemap_scan_pte_hole(unsigned long addr, unsigned long end,
>>>> + int depth, struct mm_walk *walk)
>>>> +{
>>>> + unsigned long n_pages = (end - addr)/PAGE_SIZE;
>>>> + struct pagemap_scan_private *p = walk->private;
>>>> + struct vm_area_struct *vma = walk->vma;
>>>> + int ret = 0;
>>>> +
>>>> + if (!vma || !IS_PM_SCAN_GET(p->flags))
>>>> + return 0;
>>>> +
>>>> + if (p->max_pages && n_pages > p->max_pages - p->found_pages)
>>>> + n_pages = p->max_pages - p->found_pages;
>>>> +
>>>> + ret = pagemap_scan_output(PM_SCAN_FLAGS(false, (bool)vma->vm_file,
>>>> + false, false), p, addr, n_pages);
>>>> +
>>>> + return ret;
>>>> +}
>>>> +
>>>> +static const struct mm_walk_ops pagemap_scan_ops = {
>>>> + .test_walk = pagemap_scan_test_walk,
>>>> + .pmd_entry = pagemap_scan_pmd_entry,
>>>> + .pte_hole = pagemap_scan_pte_hole,
>>>> + .hugetlb_entry = pagemap_scan_hugetlb_entry,
>>>> +};
>>>> +
>>>> +static int pagemap_scan_args_valid(struct pm_scan_arg *arg, unsigned long start,
>>>> + struct page_region __user *vec)
>>>> +{
>>>> + /* Detect illegal size, flags, len and masks */
>>>> + if (arg->size != sizeof(struct pm_scan_arg))
>>>> + return -EINVAL;
>>>> + if (!arg->flags)
>>>> + return -EINVAL;
>>>> + if (arg->flags & ~PM_SCAN_OPS)
>>>> + return -EINVAL;
>>>> + if (!arg->len)
>>>> + return -EINVAL;
>>>> + if ((arg->required_mask | arg->anyof_mask | arg->excluded_mask |
>>>> + arg->return_mask) & ~PM_SCAN_BITS_ALL)
>>>> + return -EINVAL;
>>>> + if (!arg->required_mask && !arg->anyof_mask &&
>>>> + !arg->excluded_mask)
>>>> + return -EINVAL;
>>>> + if (!arg->return_mask)
>>>> + return -EINVAL;
>>>> +
>>>> + /* Validate memory range */
>>>> + if (!IS_ALIGNED(start, PAGE_SIZE))
>>>> + return -EINVAL;
>>>> + if (!access_ok((void __user *)start, arg->len))
>>>> + return -EFAULT;
>>>> +
>>>> + if (IS_PM_SCAN_GET(arg->flags)) {
>>>> + if (!arg->vec)
>>>> + return -EINVAL;
>>>> + if (arg->vec_len == 0)
>>>> + return -EINVAL;
>>>> + if (!access_ok((void __user *)arg->vec,
>>>> + arg->vec_len * sizeof(struct page_region)))
>>>> + return -EFAULT;
>>>> + }
>>>> +
>>>> + if (IS_PM_SCAN_WP(arg->flags) && !IS_PM_SCAN_GET(arg->flags) &&
>>>> + arg->max_pages)
>>>> + return -EINVAL;
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> +static long do_pagemap_scan(struct mm_struct *mm, unsigned long __arg)
>>>> +{
>>>> + struct pm_scan_arg __user *uarg = (struct pm_scan_arg __user *)__arg;
>>>> + unsigned long long start, end, walk_start, walk_end;
>>>> + unsigned long long empty_slots, vec_index = 0;
>>>> + struct mmu_notifier_range range;
>>>> + struct page_region __user *vec;
>>>> + struct pagemap_scan_private p;
>>>> + struct pm_scan_arg arg;
>>>> + int ret = 0;
>>>> +
>>>> + if (copy_from_user(&arg, uarg, sizeof(arg)))
>>>> + return -EFAULT;
>>>> +
>>>> + start = untagged_addr((unsigned long)arg.start);
>>>> + vec = (struct page_region *)untagged_addr((unsigned long)arg.vec);
>>>> +
>>>> + ret = pagemap_scan_args_valid(&arg, start, vec);
>>>> + if (ret)
>>>> + return ret;
>>>> +
>>>> + end = start + arg.len;
>>>> + p.max_pages = arg.max_pages;
>>>> + p.found_pages = 0;
>>>> + p.required_mask = arg.required_mask;
>>>> + p.anyof_mask = arg.anyof_mask;
>>>> + p.excluded_mask = arg.excluded_mask;
>>>> + p.return_mask = arg.return_mask;
>>>> + p.flags = arg.flags;
>>>> + p.flags |= ((p.required_mask | p.anyof_mask | p.excluded_mask) &
>>>> + PAGE_IS_WRITTEN) ? PM_SCAN_REQUIRE_UFFD : 0;
>>>> + p.cur_buf.start = p.cur_buf.len = p.cur_buf.flags = 0;
>>>> + p.vec_buf = NULL;
>>>> + p.vec_buf_len = PAGEMAP_WALK_SIZE >> PAGE_SHIFT;
>>>> +
>>>> + /*
>>>> +	 * Allocate a smaller buffer to get output from inside the page
>>>> +	 * walk functions and walk the page range in PAGEMAP_WALK_SIZE
>>>> +	 * chunks. As we want to return the output to the user in compact
>>>> +	 * form, where no two adjacent regions have the same flags, keep
>>>> +	 * the latest element in p.cur_buf between walks and copy
>>>> +	 * p.cur_buf to the user buffer at the end of the walk.
>>>> + */
>>>> + if (IS_PM_SCAN_GET(p.flags)) {
>>>> + p.vec_buf = kmalloc_array(p.vec_buf_len, sizeof(*p.vec_buf),
>>>> + GFP_KERNEL);
>>>> + if (!p.vec_buf)
>>>> + return -ENOMEM;
>>>> + }
>>>> +
>>>> + if (IS_PM_SCAN_WP(p.flags)) {
>>>> + mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
>>>> + mm, start, end);
>>>> + mmu_notifier_invalidate_range_start(&range);
>>>> + }
>>>> +
>>>> + walk_start = walk_end = start;
>>>> + while (walk_end < end && !ret) {
>>>
>>> How long can it take to run this loop? Should we interrupt it if a
>>> signal has been queued?
>> I'm following mincore and pagemap_read here. There is no such thing there.
>> The IOCTL performs what the user has requested.
>
> In case of pagemap, its buffer is limited by MAX_RW_COUNT (0x7ffff000),
> so it can handle at most 0xffffe00 pages in one iteration. Do you have any
> limits on the input parameters?
>
>> If the execution time is a
>> concern, the user should have called the IOCTL on a shorter address range.
>
> It doesn't work this way. There can be many users, and signals have to be
> delivered in a reasonable time. One obvious example of when a signal
> has to be delivered ASAP is OOM.
This IOCTL is just like mincore, but with different flags and
functionality. Mincore doesn't impose any limit like this. I don't think
we should impose any limit here either.
>
>>
>>>
>>>> + if (IS_PM_SCAN_GET(p.flags)) {
>>>> + p.vec_buf_index = 0;
>>>> +
>>>> +			/*
>>>> +			 * All data is copied to cur_buf first. When more data
>>>> +			 * is found, cur_buf is pushed to vec_buf and the new
>>>> +			 * data is copied to cur_buf. Subtract 1 from the
>>>> +			 * length as the slot held by cur_buf isn't counted.
>>>> +			 */
>>>> + empty_slots = arg.vec_len - vec_index;
>>>> + p.vec_buf_len = min(p.vec_buf_len, empty_slots - 1);
>>>> + }
>>>> +
>>>> + walk_end = (walk_start + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK;
>>>> + if (walk_end > end)
>>>> + walk_end = end;
>>>> +
>>>> + ret = mmap_read_lock_killable(mm);
>>>> + if (ret)
>>>> + goto free_data;
>>>
>>> This will fail the ioctl, but that isn't acceptable if any pages have
>>> already been handled.
>> I'm following pagemap_read() here.
>
>
> pagemap_read() doesn't change the state of pages, so it is safe to return
> errors from it. Here, such errors will be fatal, because you mark some
> pages as write-protected but don't report them to the user.
Should we change to mmap_read_lock() instead or abort gracefully?
>
>>
>>>
>>>
>>>> + ret = walk_page_range(mm, walk_start, walk_end,
>>>> + &pagemap_scan_ops, &p);
>>>> + mmap_read_unlock(mm);
>>>> +
>>>
>>> ret can be ENOSPC returned from pagemap_scan_pmd_entry, but we should
>>> not return this error if any pages have been handled.
>> ENOSPC isn't being returned from pagemap_scan_pmd_entry now.
>
> I mean pagemap_scan_hugetlb_entry.
>
>>
>>>
>>>> + if (ret && ret != PM_SCAN_BUFFER_FULL &&
>>>> + ret != PM_SCAN_FOUND_MAX_PAGES)
>>>> + goto free_data;
>>>> +
>>>> + walk_start = walk_end;
>>>> + if (IS_PM_SCAN_GET(p.flags) && p.vec_buf_index) {
>>>> + if (copy_to_user(&vec[vec_index], p.vec_buf,
>>>> + p.vec_buf_index * sizeof(*p.vec_buf))) {
>>>> + /*
>>>> + * Return error even though the OP succeeded
>>>> + */
>>>> + ret = -EFAULT;
>>>> + goto free_data;
>>>> + }
>>>> + vec_index += p.vec_buf_index;
>>>> + }
>>>> + }
>>>> +
>>>> + if (p.cur_buf.len) {
>>>> + if (copy_to_user(&vec[vec_index], &p.cur_buf, sizeof(p.cur_buf))) {
>>>> + ret = -EFAULT;
>>>> + goto free_data;
>>>> + }
>>>> + vec_index++;
>>>> + }
>>>> +
>>>> + ret = vec_index;
>>>> +
>>>> +free_data:
>>>> + if (IS_PM_SCAN_WP(p.flags))
>>>> + mmu_notifier_invalidate_range_end(&range);
>>>> +
>>>> + kfree(p.vec_buf);
>>>> + return ret;
>>>> +}
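The cur_buf bookkeeping described in the comment inside do_pagemap_scan() can be sketched in isolation. This is a hypothetical userspace model of the coalescing rule (a new run of pages is merged into the open region when it is contiguous and carries identical flags, otherwise the open region is flushed to the output vector), not the kernel code itself; `emit()` and `PAGE_SZ` are illustrative names:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mirrors struct page_region from the proposed uapi below. */
struct page_region {
	uint64_t start;
	uint64_t len;	/* in pages */
	uint64_t flags;
};

#define PAGE_SZ 4096ULL

/*
 * Coalescing model: extend the open region 'cur' when the new run is
 * contiguous with it and has the same flags; otherwise flush 'cur' to
 * vec[] (if it holds anything) and start a new open region.
 * Returns the updated number of entries in vec[].
 */
static size_t emit(struct page_region *vec, size_t n, size_t cap,
		   struct page_region *cur, uint64_t start,
		   uint64_t pages, uint64_t flags)
{
	if (cur->len && cur->flags == flags &&
	    cur->start + cur->len * PAGE_SZ == start) {
		cur->len += pages;	/* contiguous, same flags: merge */
		return n;
	}
	if (cur->len && n < cap)
		vec[n++] = *cur;	/* flush the previous region */
	cur->start = start;
	cur->len = pages;
	cur->flags = flags;
	return n;
}
```

After the walk, the still-open `cur` is flushed as the final entry, which is what the `p.cur_buf.len` block at the end of do_pagemap_scan() does.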
>>>> +
>>>> +static long do_pagemap_cmd(struct file *file, unsigned int cmd,
>>>> + unsigned long arg)
>>>> +{
>>>> + struct mm_struct *mm = file->private_data;
>>>> +
>>>> + switch (cmd) {
>>>> + case PAGEMAP_SCAN:
>>>> + return do_pagemap_scan(mm, arg);
>>>> +
>>>> + default:
>>>> + return -EINVAL;
>>>> + }
>>>> +}
>>>> +
>>>> const struct file_operations proc_pagemap_operations = {
>>>> .llseek = mem_lseek, /* borrow this */
>>>> .read = pagemap_read,
>>>> .open = pagemap_open,
>>>> .release = pagemap_release,
>>>> + .unlocked_ioctl = do_pagemap_cmd,
>>>> + .compat_ioctl = do_pagemap_cmd,
>>>> };
>>>> #endif /* CONFIG_PROC_PAGE_MONITOR */
>>>>
>>>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>>>> index beb7c63d2871..a6e773c3e2b4 100644
>>>> --- a/include/linux/hugetlb.h
>>>> +++ b/include/linux/hugetlb.h
>>>> @@ -261,6 +261,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
>>>> unsigned long cp_flags);
>>>>
>>>> bool is_hugetlb_entry_migration(pte_t pte);
>>>> +bool is_hugetlb_entry_hwpoisoned(pte_t pte);
>>>> void hugetlb_unshare_all_pmds(struct vm_area_struct *vma);
>>>>
>>>> #else /* !CONFIG_HUGETLB_PAGE */
>>>> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
>>>> index b7b56871029c..a9fb44db84a3 100644
>>>> --- a/include/uapi/linux/fs.h
>>>> +++ b/include/uapi/linux/fs.h
>>>> @@ -305,4 +305,57 @@ typedef int __bitwise __kernel_rwf_t;
>>>> #define RWF_SUPPORTED (RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
>>>> RWF_APPEND)
>>>>
>>>> +/* Pagemap ioctl */
>>>> +#define PAGEMAP_SCAN _IOWR('f', 16, struct pm_scan_arg)
>>>> +
>>>> +/* Bits are set in flags of the page_region and masks in pm_scan_args */
>>>> +#define PAGE_IS_WRITTEN (1 << 0)
>>>> +#define PAGE_IS_FILE (1 << 1)
>>>> +#define PAGE_IS_PRESENT (1 << 2)
>>>> +#define PAGE_IS_SWAPPED (1 << 3)
>>>> +
>>>> +/*
>>>> + * struct page_region - Page region with flags
>>>> + * @start: Start of the region
>>>> + * @len: Length of the region in pages
>>>> + * @flags:	Bits set for the region
>>>> + */
>>>> +struct page_region {
>>>> + __u64 start;
>>>> + __u64 len;
>>>> + __u64 flags;
>>>> +};
>>>> +
>>>> +/*
>>>> + * struct pm_scan_arg - Pagemap ioctl argument
>>>> + * @size: Size of the structure
>>>> + * @flags: Flags for the IOCTL
>>>> + * @start: Starting address of the region
>>>> + * @len: Length of the region (All the pages in this length are included)
>>>> + * @vec: Address of page_region struct array for output
>>>> + * @vec_len: Length of the page_region struct array
>>>> + * @max_pages:	Optional limit on the number of pages to return
>>>> + * @required_mask:	Required mask - All of these bits have to be set in the PTE
>>>> + * @anyof_mask:	Any mask - At least one of these bits has to be set in the PTE
>>>> + * @excluded_mask:	Exclude mask - None of these bits may be set in the PTE
>>>> + * @return_mask: Bits that are to be reported in page_region
>>>> + */
>>>> +struct pm_scan_arg {
>>>> + __u64 size;
>>>> + __u64 flags;
>>>> + __u64 start;
>>>> + __u64 len;
>>>> + __u64 vec;
>>>> + __u64 vec_len;
>>>> + __u64 max_pages;
>>>> + __u64 required_mask;
>>>> + __u64 anyof_mask;
>>>> + __u64 excluded_mask;
>>>> + __u64 return_mask;
>>>> +};
>>>> +
>>>> +/* Supported flags */
>>>> +#define PM_SCAN_OP_GET (1 << 0)
>>>> +#define PM_SCAN_OP_WP (1 << 1)
>>>> +
>>>> #endif /* _UAPI_LINUX_FS_H */
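For illustration, a self-contained userspace sketch of the proposed uapi above. The structures and flag values are copied from this patch (they are not in any released <linux/fs.h>), and `match_page()` is a hypothetical helper expressing the documented mask semantics; on a kernel carrying this series, the filled pm_scan_arg would be passed as `ioctl(pagemap_fd, PAGEMAP_SCAN, &arg)`:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint64_t __u64;

/* Flag bits and structures as proposed in this patch. */
#define PAGE_IS_WRITTEN		(1 << 0)
#define PAGE_IS_FILE		(1 << 1)
#define PAGE_IS_PRESENT		(1 << 2)
#define PAGE_IS_SWAPPED		(1 << 3)

#define PM_SCAN_OP_GET		(1 << 0)
#define PM_SCAN_OP_WP		(1 << 1)

struct pm_scan_arg {
	__u64 size;
	__u64 flags;
	__u64 start;
	__u64 len;
	__u64 vec;
	__u64 vec_len;
	__u64 max_pages;
	__u64 required_mask;
	__u64 anyof_mask;
	__u64 excluded_mask;
	__u64 return_mask;
};

/*
 * Hypothetical helper mirroring the documented mask semantics:
 * all required_mask bits set, at least one anyof_mask bit set
 * (when anyof_mask is non-zero), and no excluded_mask bit set.
 */
static int match_page(__u64 bits, const struct pm_scan_arg *a)
{
	if ((bits & a->required_mask) != a->required_mask)
		return 0;
	if (a->anyof_mask && !(bits & a->anyof_mask))
		return 0;
	if (bits & a->excluded_mask)
		return 0;
	return 1;
}
```

A typical CRIU-style caller would set `flags = PM_SCAN_OP_GET | PM_SCAN_OP_WP` with `required_mask = return_mask = PAGE_IS_WRITTEN` to atomically collect and re-protect written pages.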
>>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>>> index 0db13167b1ee..7e60f0f3fd03 100644
>>>> --- a/mm/hugetlb.c
>>>> +++ b/mm/hugetlb.c
>>>> @@ -4980,7 +4980,7 @@ bool is_hugetlb_entry_migration(pte_t pte)
>>>> return false;
>>>> }
>>>>
>>>> -static bool is_hugetlb_entry_hwpoisoned(pte_t pte)
>>>> +bool is_hugetlb_entry_hwpoisoned(pte_t pte)
>>>> {
>>>> swp_entry_t swp;
>>>>
>>>> --
>>>> 2.39.2
>>>>
>>
>> --
>> BR,
>> Muhammad Usama Anjum
--
BR,
Muhammad Usama Anjum
On Thu, 22 Jun 2023 at 12:21, Muhammad Usama Anjum
<[email protected]> wrote:
> On 6/22/23 12:45 AM, Andrei Vagin wrote:
> > On Wed, Jun 21, 2023 at 11:34:54AM +0500, Muhammad Usama Anjum wrote:
> >> On 6/20/23 11:03 PM, Andrei Vagin wrote:
[...]
> >>> should it be PM_SCAN_FOUND_MAX_PAGES? Otherwise, it fails the ioctl even
> >>> if it has handled some pages already.
> >> It is a double-edged sword. If we don't return an error, the user will
> >> never know that he needs to specify a larger max_pages or that his output
> >> buffer is too small to cover the entire range. The real purpose here is to
> >> make the user aware that he needs to specify the full hugetlb range and
> >> that the found pages should cover the entire range as well.
> >
> > but if PM_SCAN_OP_WP is set, this error will be fatal, because some
> > pages can be marked write-protected but not reported to
> > user-space.
> >
> > I think the ioctl has to report back the end address of the handled
> > region. It is like read and write syscalls that can return less data
> > than requested.
> This is a good point. I'll abort the walk here instead of returning the
> error to the user.
It would be great if the ioctl returned the address the walk finished
at. This would remove the special case for "buffer full after a full
page was added", and if the `max_pages` limit was reached but no more
pages would have needed to be added anyway, the caller would not have
to call the ioctl again on the remaining range.
[...]
> >>> How long can it take to run this loop? Should we interrupt it if a
> >>> signal has been queued?
> >> I'm following mincore and pagemap_read here. Neither of them has such a
> >> check. The IOCTL performs what the user has requested.
> >
> > In case of pagemap, its buffer is limited by MAX_RW_COUNT (0x7ffff000),
> > so it can handle maximum 0xffffe00 pages in one iteration. Do you have any
> > limits for input parameters?
> >
> >> If the execution time is a
> >> concern, the user should call the IOCTL on a shorter address range.
> >
> > It doesn't work this way. There can be many users, and signals have to be
> > delivered in a reasonable time. One of the obvious examples where a signal
> > has to be delivered asap is OOM.
> This IOCTL is just like mincore, but with other flags and functionality.
> Mincore doesn't put any limit like this, so I don't think we should put one
> here either.
I don't think we should treat mincore() as a good API example. Its
interface has performance problems similar to those of select() or
poll(). In this ioctl's case, we can limit the work done per call (the
limit can be large anyway), since performance is unaffected if, for X
TiB of memory, the call is made twice instead of once. (Returning the
address the walk reached would help a lot here.)
Best Regards
Michał Mirosław