Date: Mon, 26 Jun 2023 18:46:40 -0700
From: Andrei Vagin
To: Muhammad Usama Anjum
Cc: Peter Xu, David Hildenbrand, Andrew Morton, Michał Mirosław,
 Danylo Mocherniuk, Paul Gofman, Cyrill Gorcunov, Mike Rapoport,
 Nadav Amit, Alexander Viro, Shuah Khan, Christian Brauner, Yang Shi,
 Vlastimil Babka, "Liam R. Howlett", Yun Zhou, Suren Baghdasaryan,
 Alex Sierra, Matthew Wilcox, Pasha Tatashin, Axel Rasmussen,
 "Gustavo A. R. Silva", Dan Williams, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
 linux-kselftest@vger.kernel.org, Greg KH, kernel@collabora.com
Subject: Re: [PATCH v21 2/5] fs/proc/task_mmu: Implement IOCTL to get and
 optionally clear info about PTEs
References: <20230626113156.1274521-1-usama.anjum@collabora.com>
 <20230626113156.1274521-3-usama.anjum@collabora.com>
In-Reply-To: <20230626113156.1274521-3-usama.anjum@collabora.com>

R . Howlett" , Yun Zhou , Suren Baghdasaryan , Alex Sierra , Matthew Wilcox , Pasha Tatashin , Axel Rasmussen , "Gustavo A . R . Silva" , Dan Williams , linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, linux-kselftest@vger.kernel.org, Greg KH , kernel@collabora.com Subject: Re: [PATCH v21 2/5] fs/proc/task_mmu: Implement IOCTL to get and optionally clear info about PTEs Message-ID: References: <20230626113156.1274521-1-usama.anjum@collabora.com> <20230626113156.1274521-3-usama.anjum@collabora.com> MIME-Version: 1.0 Content-Type: text/plain; charset=koi8-r Content-Disposition: inline In-Reply-To: <20230626113156.1274521-3-usama.anjum@collabora.com> X-Spam-Status: No, score=-2.1 required=5.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM, RCVD_IN_DNSWL_NONE,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On Mon, Jun 26, 2023 at 04:31:53PM +0500, Muhammad Usama Anjum wrote: > This IOCTL, PAGEMAP_SCAN on pagemap file can be used to get and/or clear > the info about page table entries. The following operations are supported > in this ioctl: > - Get the information if the pages have been written-to (PAGE_IS_WRITTEN), > file mapped (PAGE_IS_FILE), present (PAGE_IS_PRESENT), swapped > (PAGE_IS_SWAPPED) or page has pfn zero (PAGE_IS_PFNZERO). > - Find pages which have been written-to and/or write protect the pages > (atomic PM_SCAN_OP_GET + PM_SCAN_OP_WP) > > This IOCTL can be extended to get information about more PTE bits. > > Signed-off-by: Muhammad Usama Anjum > --- > Changes in v21: > - Abort walk instead of returning error if WP is to be performed on > partial hugetlb > - Changed the data types of some variables in pagemap_scan_private to > long > > Changes in v20: > - Correct PAGE_IS_FILE and add PAGE_IS_PFNZERO > > Changes in v19: > - Interface changes such as renaming, return mask and WP can be used > with any flags specified in masks > - Internal code changes > > Changes in v18: > - Rebased on top of next-20230613 > - ptep_get() updates > - remove pmd_trans_unstable() and add ACTION_AGAIN > - Review updates (Micheal) > > Changes in v17: > - Rebased on next-20230606 > - Made make_uffd_wp_*_pte() better and minor changes > > Changes in v16: > - Fixed a corner case where kernel writes beyond user buffer by one > element > - Bring back exclusive PM_SCAN_OP_WP > - Cosmetic changes > > Changes in v15: > - Build fix: > - Use generic tlb flush function in pagemap_scan_pmd_entry() instead of > using x86 specific flush function in do_pagemap_scan() > - Remove #ifdef from pagemap_scan_hugetlb_entry() > - Use mm instead of undefined vma->vm_mm > > Changes in v14: > - Fix build error caused by #ifdef added at last minute in some configs > > Changes in v13: > - Review updates > - mmap_read_lock_killable() instead of mmap_read_lock() > - Replace uffd_wp_range() with helpers which increases performance > drastically for OP_WP operations by reducing the number of tlb > flushing etc > - Add MMU_NOTIFY_PROTECTION_VMA notification for the memory range > > Changes in v12: > - Add hugetlb support to cover all memory types > - Merge "userfaultfd: Define dummy uffd_wp_range()" with this patch > - Review updates to the code > > Changes in v11: > - Find written pages in a better way > - Fix a corner case (thanks Paul) > - Improve the code/comments > - 
> Signed-off-by: Muhammad Usama Anjum
> ---
> Changes in v21:
> - Abort walk instead of returning error if WP is to be performed on
>   partial hugetlb
> - Changed the data types of some variables in pagemap_scan_private to
>   long
>
> Changes in v20:
> - Correct PAGE_IS_FILE and add PAGE_IS_PFNZERO
>
> Changes in v19:
> - Interface changes such as renaming, return mask and WP can be used
>   with any flags specified in masks
> - Internal code changes
>
> Changes in v18:
> - Rebased on top of next-20230613
> - ptep_get() updates
> - remove pmd_trans_unstable() and add ACTION_AGAIN
> - Review updates (Micheal)
>
> Changes in v17:
> - Rebased on next-20230606
> - Made make_uffd_wp_*_pte() better and minor changes
>
> Changes in v16:
> - Fixed a corner case where kernel writes beyond user buffer by one
>   element
> - Bring back exclusive PM_SCAN_OP_WP
> - Cosmetic changes
>
> Changes in v15:
> - Build fix:
>   - Use generic tlb flush function in pagemap_scan_pmd_entry() instead of
>     using x86 specific flush function in do_pagemap_scan()
>   - Remove #ifdef from pagemap_scan_hugetlb_entry()
>   - Use mm instead of undefined vma->vm_mm
>
> Changes in v14:
> - Fix build error caused by #ifdef added at last minute in some configs
>
> Changes in v13:
> - Review updates
> - mmap_read_lock_killable() instead of mmap_read_lock()
> - Replace uffd_wp_range() with helpers which increases performance
>   drastically for OP_WP operations by reducing the number of tlb
>   flushing etc
> - Add MMU_NOTIFY_PROTECTION_VMA notification for the memory range
>
> Changes in v12:
> - Add hugetlb support to cover all memory types
> - Merge "userfaultfd: Define dummy uffd_wp_range()" with this patch
> - Review updates to the code
>
> Changes in v11:
> - Find written pages in a better way
> - Fix a corner case (thanks Paul)
> - Improve the code/comments
> - remove ENGAGE_WP + ! GET operation
> - shorten the commit message in favour of moving documentation to
>   pagemap.rst
>
> Changes in v10:
> - move changes in tools/include/uapi/linux/fs.h to separate patch
> - update commit message
>
> Change in v8:
> - Correct is_pte_uffd_wp()
> - Improve readability and error checks
> - Remove some un-needed code
>
> Changes in v7:
> - Rebase on top of latest next
> - Fix some corner cases
> - Base soft-dirty on the uffd wp async
> - Update the terminologies
> - Optimize the memory usage inside the ioctl
> ---
>  fs/proc/task_mmu.c      | 560 ++++++++++++++++++++++++++++++++++++++++
>  include/linux/hugetlb.h |   1 +
>  include/uapi/linux/fs.h |  54 ++++
>  mm/hugetlb.c            |   2 +-
>  4 files changed, 616 insertions(+), 1 deletion(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 507cd4e59d07..12186f0d4f15 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -19,6 +19,7 @@
>  #include
>  #include
>  #include
> +#include
>
>  #include
>  #include
> @@ -1765,11 +1766,570 @@ static int pagemap_release(struct inode *inode, struct file *file)
>  	return 0;
>  }
>
> +#define PM_SCAN_REQUIRE_UFFD	(1ULL << 63)
> +
> +#define PM_SCAN_FOUND_MAX_PAGES	(1)
> +#define PM_SCAN_END_WALK	(-256)
> +
> +#define PM_SCAN_BITS_ALL	(PAGE_IS_WRITTEN | PAGE_IS_FILE |	\
> +				 PAGE_IS_PRESENT | PAGE_IS_SWAPPED |	\
> +				 PAGE_IS_PFNZERO)
> +#define PM_SCAN_OPS		(PM_SCAN_OP_GET | PM_SCAN_OP_WP)
> +#define IS_PM_SCAN_GET(flags)	(flags & PM_SCAN_OP_GET)
> +#define IS_PM_SCAN_WP(flags)	(flags & PM_SCAN_OP_WP)
> +
> +#define PM_SCAN_FLAGS(wt, file, present, swap, pfnzero)	\
> +	((wt) | ((file) << 1) | ((present) << 2) |	\
> +	 ((swap) << 3) | ((pfnzero) << 4))
> +
> +struct pagemap_scan_private {
> +	struct page_region *vec_buf, cur_buf;
> +	unsigned long vec_buf_len, vec_buf_index, max_pages, found_pages;
> +	unsigned long long flags, required_mask, anyof_mask, excluded_mask, return_mask;
> +};
> +
> +static inline bool is_pte_uffd_wp(pte_t pte)
> +{
> +	return (pte_present(pte) && pte_uffd_wp(pte)) ||
> +	       pte_swp_uffd_wp_any(pte);
> +}
> +
> +static inline void make_uffd_wp_pte(struct vm_area_struct *vma,
> +				    unsigned long addr, pte_t *pte)
> +{
> +	pte_t ptent = ptep_get(pte);
> +
> +	if (pte_present(ptent)) {
> +		pte_t old_pte;
> +
> +		old_pte = ptep_modify_prot_start(vma, addr, pte);
> +		ptent = pte_mkuffd_wp(ptent);
> +		ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent);
> +	} else if (is_swap_pte(ptent)) {
> +		ptent = pte_swp_mkuffd_wp(ptent);
> +		set_pte_at(vma->vm_mm, addr, pte, ptent);
> +	} else {
> +		set_pte_at(vma->vm_mm, addr, pte,
> +			   make_pte_marker(PTE_MARKER_UFFD_WP));
> +	}
> +}
> +
> +static inline bool pagemap_scan_is_file(struct vm_area_struct *vma, pte_t ptent,
> +					unsigned long addr)
> +{
> +	struct page *page = NULL;
> +	swp_entry_t entry;
> +
> +	if (pte_present(ptent)) {
> +		page = vm_normal_page(vma, addr, ptent);
> +	} else {
> +		entry = pte_to_swp_entry(ptent);
> +		if (is_pfn_swap_entry(entry))
> +			page = pfn_swap_entry_to_page(entry);
> +	}
> +
> +	if (page && !PageAnon(page))
> +		return true;
> +
> +	return false;
> +}
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +static inline bool is_pmd_uffd_wp(pmd_t pmd)
> +{
> +	return (pmd_present(pmd) && pmd_uffd_wp(pmd)) ||
> +	       (is_swap_pmd(pmd) && pmd_swp_uffd_wp(pmd));
> +}
> +
> +static inline void make_uffd_wp_pmd(struct vm_area_struct *vma,
> +				    unsigned long addr, pmd_t *pmdp)
> +{
> +	pmd_t old, pmd = *pmdp;
> +
> +	if (pmd_present(pmd)) {
> +		old = pmdp_invalidate_ad(vma, addr, pmdp);
> +		pmd = pmd_mkuffd_wp(old);
> +		set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
> +	} else if (is_migration_entry(pmd_to_swp_entry(pmd))) {
> +		pmd = pmd_swp_mkuffd_wp(pmd);
> +		set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
> +	}
> +}
> +#endif
> +
> +#ifdef CONFIG_HUGETLB_PAGE
> +static inline bool is_huge_pte_uffd_wp(pte_t pte)
> +{
> +	return (pte_present(pte) && huge_pte_uffd_wp(pte)) ||
> +	       pte_swp_uffd_wp_any(pte);
> +}
> +
> +static inline void make_uffd_wp_huge_pte(struct vm_area_struct *vma,
> +					 unsigned long addr, pte_t *ptep,
> +					 pte_t ptent)
> +{
> +	if (is_hugetlb_entry_hwpoisoned(ptent) || is_pte_marker(ptent))
> +		return;
> +
> +	if (is_hugetlb_entry_migration(ptent))
> +		set_huge_pte_at(vma->vm_mm, addr, ptep,
> +				pte_swp_mkuffd_wp(ptent));
> +	else if (!huge_pte_none(ptent))
> +		huge_ptep_modify_prot_commit(vma, addr, ptep, ptent,
> +					     huge_pte_mkuffd_wp(ptent));
> +	else
> +		set_huge_pte_at(vma->vm_mm, addr, ptep,
> +				make_pte_marker(PTE_MARKER_UFFD_WP));
> +}
> +
> +static inline bool pagemap_scan_is_huge_file(pte_t pte)
> +{
> +	if (pte_present(pte) && (!PageAnon(pte_page(pte))))
> +		return true;
> +
> +	return false;
> +}
> +#endif
> +
> +static int pagemap_scan_test_walk(unsigned long start, unsigned long end,
> +				  struct mm_walk *walk)
> +{
> +	struct pagemap_scan_private *p = walk->private;
> +	struct vm_area_struct *vma = walk->vma;
> +
> +	if ((p->flags & PM_SCAN_REQUIRE_UFFD) && (!userfaultfd_wp_async(vma) ||
> +	    !userfaultfd_wp_use_markers(vma)))
> +		return -EPERM;
> +
> +	if (vma->vm_flags & VM_PFNMAP)
> +		return 1;
> +
> +	return 0;
> +}
> +
> +static int pagemap_scan_output(unsigned long bitmap,
> +			       struct pagemap_scan_private *p,
> +			       unsigned long addr, unsigned int n_pages)
> +{
> +	struct page_region *cur_buf = &p->cur_buf;
> +
> +	if (!n_pages)
> +		return -EINVAL;
> +
> +	bitmap &= p->return_mask;
> +
> +	if (cur_buf->flags == bitmap &&
> +	    cur_buf->start + cur_buf->len * PAGE_SIZE == addr) {
> +		cur_buf->len += n_pages;
> +		p->found_pages += n_pages;
> +	} else {
> +		if (cur_buf->len) {
> +			if (p->vec_buf_index >= p->vec_buf_len)
> +				return PM_SCAN_END_WALK;
> +
> +			memcpy(&p->vec_buf[p->vec_buf_index], cur_buf,
> +			       sizeof(*p->vec_buf));
> +			p->vec_buf_index++;
> +		}
> +
> +		cur_buf->start = addr;
> +		cur_buf->len = n_pages;
> +		cur_buf->flags = bitmap;
> +		p->found_pages += n_pages;
> +	}
> +
> +	if (p->found_pages == p->max_pages)
> +		return PM_SCAN_FOUND_MAX_PAGES;
> +
> +	return 0;
> +}
> +
> +static bool pagemap_scan_is_interesting_page(unsigned long bitmap,
> +					     struct pagemap_scan_private *p)
> +{
> +	if ((p->required_mask & bitmap) != p->required_mask)
> +		return false;
> +	if (p->anyof_mask && !(p->anyof_mask & bitmap))
> +		return false;
> +	if (p->excluded_mask & bitmap)
> +		return false;
> +
> +	return true;
> +}
> +
> +static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start,
> +				  unsigned long end, struct mm_walk *walk)
> +{
> +	bool is_written, flush = false, is_interesting = true;
> +	struct pagemap_scan_private *p = walk->private;
> +	struct vm_area_struct *vma = walk->vma;
> +	unsigned long bitmap, addr = end;
> +	pte_t *pte, *orig_pte, ptent;
> +	spinlock_t *ptl;
> +	int ret = 0;
> +
> +	arch_enter_lazy_mmu_mode();
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	ptl = pmd_trans_huge_lock(pmd, vma);
> +	if (ptl) {
> +		unsigned long n_pages = (end - start)/PAGE_SIZE;
> +
> +		if (n_pages > p->max_pages - p->found_pages)
> +			n_pages = p->max_pages - p->found_pages;
> +
> +		is_written = !is_pmd_uffd_wp(*pmd);
> +
> +		/*
> +		 * Break huge page into small pages if the WP operation need to
> +		 * be performed is on a portion of the huge page.
> +		 */
> +		if (is_written && IS_PM_SCAN_WP(p->flags) &&
> +		    n_pages < HPAGE_SIZE/PAGE_SIZE) {
> +			spin_unlock(ptl);
> +
> +			split_huge_pmd(vma, pmd, start);
> +			goto process_smaller_pages;
> +		}
> +
> +		bitmap = PM_SCAN_FLAGS(is_written, false,
> +				       pmd_present(*pmd), is_swap_pmd(*pmd),
> +				       pmd_present(*pmd) && is_zero_pfn(pmd_pfn(*pmd)));
> +
> +		if (IS_PM_SCAN_GET(p->flags)) {
> +			is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
> +			if (is_interesting)
> +				ret = pagemap_scan_output(bitmap, p, start, n_pages);
> +		}
> +
> +		if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
> +		    ret >= 0) {
> +			make_uffd_wp_pmd(vma, start, pmd);
> +			flush_tlb_range(vma, start, end);
> +		}
> +
> +		spin_unlock(ptl);
> +
> +		arch_leave_lazy_mmu_mode();
> +		return ret;
> +	}
> +
> +process_smaller_pages:
> +#endif
> +
> +	orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl);
> +	if (!pte) {
> +		walk->action = ACTION_AGAIN;
> +		return 0;
> +	}
> +
> +	for (addr = start; addr < end && !ret; pte++, addr += PAGE_SIZE) {
> +		ptent = ptep_get(pte);
> +		is_written = !is_pte_uffd_wp(ptent);
> +
> +		bitmap = PM_SCAN_FLAGS(is_written, pagemap_scan_is_file(vma, ptent, addr),
> +				       pte_present(ptent), is_swap_pte(ptent),
> +				       pte_present(ptent) && is_zero_pfn(pte_pfn(ptent)));
> +
> +		if (IS_PM_SCAN_GET(p->flags)) {
> +			is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
> +			if (is_interesting)
> +				ret = pagemap_scan_output(bitmap, p, addr, 1);
> +		}
> +
> +		if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
> +		    ret >= 0) {
> +			make_uffd_wp_pte(vma, addr, pte);
> +			flush = true;
> +		}
> +	}
> +
> +	if (flush)
> +		flush_tlb_range(vma, start, addr);
> +
> +	pte_unmap_unlock(orig_pte, ptl);
> +	arch_leave_lazy_mmu_mode();
> +
> +	cond_resched();
> +	return ret;
> +}
> +
> +#ifdef CONFIG_HUGETLB_PAGE
> +static int pagemap_scan_hugetlb_entry(pte_t *ptep, unsigned long hmask,
> +				      unsigned long start, unsigned long end,
> +				      struct mm_walk *walk)
> +{
> +	unsigned long n_pages = (end - start)/PAGE_SIZE;
> +	struct pagemap_scan_private *p = walk->private;
> +	struct vm_area_struct *vma = walk->vma;
> +	bool is_written, is_interesting = true;
> +	struct hstate *h = hstate_vma(vma);
> +	unsigned long bitmap;
> +	spinlock_t *ptl;
> +	int ret = 0;
> +	pte_t ptent;
> +
> +	if (IS_PM_SCAN_WP(p->flags) && n_pages < HPAGE_SIZE/PAGE_SIZE)
> +		return -EINVAL;
> +
> +	if (n_pages > p->max_pages - p->found_pages)
> +		n_pages = p->max_pages - p->found_pages;
> +
> +	if (IS_PM_SCAN_WP(p->flags)) {
> +		i_mmap_lock_write(vma->vm_file->f_mapping);
> +		ptl = huge_pte_lock(h, vma->vm_mm, ptep);
> +	}
> +
> +	ptent = huge_ptep_get(ptep);
> +	is_written = !is_huge_pte_uffd_wp(ptent);
> +
> +	/*
> +	 * Partial hugetlb page clear isn't supported
> +	 */
> +	if (is_written && IS_PM_SCAN_WP(p->flags) &&
> +	    n_pages < HPAGE_SIZE/PAGE_SIZE) {

Should this be done only if is_interesting is set?
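I mean something like this (untested), with the check moved below the point
where bitmap and is_interesting have been computed:

	is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
	if (is_interesting && is_written && IS_PM_SCAN_WP(p->flags) &&
	    n_pages < HPAGE_SIZE/PAGE_SIZE) {
		ret = PM_SCAN_END_WALK;
		goto unlock_and_return;
	}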
> +		ret = PM_SCAN_END_WALK;
> +		goto unlock_and_return;
> +	}
> +
> +	bitmap = PM_SCAN_FLAGS(is_written, pagemap_scan_is_huge_file(ptent),
> +			       pte_present(ptent), is_swap_pte(ptent),
> +			       pte_present(ptent) && is_zero_pfn(pte_pfn(ptent)));
> +
> +	if (IS_PM_SCAN_GET(p->flags)) {
> +		is_interesting = pagemap_scan_is_interesting_page(bitmap, p);
> +		if (is_interesting)
> +			ret = pagemap_scan_output(bitmap, p, start, n_pages);
> +	}
> +
> +	if (IS_PM_SCAN_WP(p->flags) && is_written && is_interesting &&
> +	    ret >= 0) {
> +		make_uffd_wp_huge_pte(vma, start, ptep, ptent);
> +		flush_hugetlb_tlb_range(vma, start, end);
> +	}
> +
> +unlock_and_return:
> +	if (IS_PM_SCAN_WP(p->flags)) {
> +		spin_unlock(ptl);
> +		i_mmap_unlock_write(vma->vm_file->f_mapping);
> +	}
> +
> +	return ret;
> +}
> +#else
> +#define pagemap_scan_hugetlb_entry NULL
> +#endif
> +
> +static int pagemap_scan_pte_hole(unsigned long addr, unsigned long end,
> +				 int depth, struct mm_walk *walk)
> +{
> +	unsigned long n_pages = (end - addr)/PAGE_SIZE;
> +	struct pagemap_scan_private *p = walk->private;
> +	struct vm_area_struct *vma = walk->vma;
> +	int ret = 0;
> +
> +	if (!vma || !IS_PM_SCAN_GET(p->flags))
> +		return 0;
> +
> +	if (n_pages > p->max_pages - p->found_pages)
> +		n_pages = p->max_pages - p->found_pages;
> +
> +	ret = pagemap_scan_output(PM_SCAN_FLAGS(false, false, false, false, false),
> +				  p, addr, n_pages);
> +
> +	return ret;
> +}
> +
> +static const struct mm_walk_ops pagemap_scan_ops = {
> +	.test_walk = pagemap_scan_test_walk,
> +	.pmd_entry = pagemap_scan_pmd_entry,
> +	.pte_hole = pagemap_scan_pte_hole,
> +	.hugetlb_entry = pagemap_scan_hugetlb_entry,
> +};
> +
> +static int pagemap_scan_args_valid(struct pm_scan_arg *arg, unsigned long start,
> +				   struct page_region __user *vec)
> +{
> +	/* Detect illegal size, flags, len and masks */
> +	if (arg->size != sizeof(struct pm_scan_arg))
> +		return -EINVAL;
> +	if (!arg->flags)
> +		return -EINVAL;
> +	if (arg->flags & ~PM_SCAN_OPS)
> +		return -EINVAL;
> +	if (!arg->len)
> +		return -EINVAL;
> +	if ((arg->required_mask | arg->anyof_mask | arg->excluded_mask |
> +	     arg->return_mask) & ~PM_SCAN_BITS_ALL)
> +		return -EINVAL;
> +	if (!arg->required_mask && !arg->anyof_mask &&
> +	    !arg->excluded_mask)
> +		return -EINVAL;
> +	if (!arg->return_mask)
> +		return -EINVAL;
> +
> +	/* Validate memory range */
> +	if (!IS_ALIGNED(start, PAGE_SIZE))
> +		return -EINVAL;
> +	if (!access_ok((void __user *)start, arg->len))
> +		return -EFAULT;
> +
> +	if (IS_PM_SCAN_GET(arg->flags)) {
> +		if (arg->vec_len == 0)
> +			return -EINVAL;
> +		if (!vec)
> +			return -EINVAL;
> +		if (!access_ok((void __user *)vec,
> +			       arg->vec_len * sizeof(struct page_region)))
> +			return -EFAULT;
> +	}
> +
> +	if (IS_PM_SCAN_WP(arg->flags) && !IS_PM_SCAN_GET(arg->flags) &&
> +	    arg->max_pages)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static long do_pagemap_scan(struct mm_struct *mm, unsigned long __arg)
> +{
> +	struct pm_scan_arg __user *uarg = (struct pm_scan_arg __user *)__arg;
> +	unsigned long long start, end, walk_start, walk_end;
> +	unsigned long empty_slots, vec_index = 0;
> +	struct mmu_notifier_range range;
> +	struct page_region __user *vec;
> +	struct pagemap_scan_private p;
> +	struct pm_scan_arg arg;
> +	int ret = 0;
> +
> +	if (copy_from_user(&arg, uarg, sizeof(arg)))
> +		return -EFAULT;
> +
> +	start = untagged_addr((unsigned long)arg.start);
> +	vec = (struct page_region *)untagged_addr((unsigned long)arg.vec);
> +
> +	ret = pagemap_scan_args_valid(&arg, start, vec);
> +	if (ret)
> +		return ret;
> +
> +	end = start + arg.len;
> +	p.max_pages = (arg.max_pages) ? arg.max_pages : ULONG_MAX;
> +	p.found_pages = 0;
> +	p.required_mask = arg.required_mask;
> +	p.anyof_mask = arg.anyof_mask;
> +	p.excluded_mask = arg.excluded_mask;
> +	p.return_mask = arg.return_mask;
> +	p.flags = arg.flags;
> +	p.flags |= ((p.required_mask | p.anyof_mask | p.excluded_mask) &
> +		    PAGE_IS_WRITTEN) ? PM_SCAN_REQUIRE_UFFD : 0;
> +	p.cur_buf.start = p.cur_buf.len = p.cur_buf.flags = 0;
> +	p.vec_buf = NULL;
> +	p.vec_buf_len = PAGEMAP_WALK_SIZE >> PAGE_SHIFT;
> +
> +	/*
> +	 * Allocate smaller buffer to get output from inside the page walk
> +	 * functions and walk page range in PAGEMAP_WALK_SIZE size chunks. As
> +	 * we want to return output to user in compact form where no two
> +	 * consecutive regions should be continuous and have the same flags.
> +	 * So store the latest element in p.cur_buf between different walks and
> +	 * store the p.cur_buf at the end of the walk to the user buffer.
> +	 */
> +	if (IS_PM_SCAN_GET(p.flags)) {
> +		p.vec_buf = kmalloc_array(p.vec_buf_len, sizeof(*p.vec_buf),
> +					  GFP_KERNEL);
> +		if (!p.vec_buf)
> +			return -ENOMEM;
> +	}
> +
> +	if (IS_PM_SCAN_WP(p.flags)) {
> +		mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
> +					mm, start, end);
> +		mmu_notifier_invalidate_range_start(&range);
> +	}
> +
> +	walk_start = walk_end = start;
> +	while (walk_end < end && !ret) {
> +		if (IS_PM_SCAN_GET(p.flags)) {
> +			p.vec_buf_index = 0;
> +
> +			/*
> +			 * All data is copied to cur_buf first. When more data
> +			 * is found, we push cur_buf to vec_buf and copy new
> +			 * data to cur_buf. Subtract 1 from length as the
> +			 * index of cur_buf isn't counted in length.
> +			 */
> +			empty_slots = arg.vec_len - vec_index;
> +			p.vec_buf_len = min(p.vec_buf_len, empty_slots - 1);
> +		}
> +
> +		walk_end = (walk_start + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK;
> +		if (walk_end > end)
> +			walk_end = end;
> +

If this loop can run for a long time, we need to interrupt it in case of
pending signals. If you think we don't need to do that, please explain it in
the commit message, so that maintainers don't miss this part and can
double-check that everything is all right here.
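Something along these lines at the top of the loop body should be enough
(untested; it just follows the existing error path):

	if (fatal_signal_pending(current)) {
		ret = -EINTR;
		goto free_data;
	}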
> +		ret = mmap_read_lock_killable(mm);
> +		if (ret)

If any pages have been handled, we need to report them to user space. It
isn't acceptable to return an error in such cases. And we need to report the
address where scanning stopped; we can do that by adding a zero-length
vector.

> +			goto free_data;
> +		ret = walk_page_range(mm, walk_start, walk_end,
> +				      &pagemap_scan_ops, &p);
> +		mmap_read_unlock(mm);
> +
> +		if (ret && ret != PM_SCAN_FOUND_MAX_PAGES &&
> +		    ret != PM_SCAN_END_WALK)
> +			goto free_data;
> +
> +		walk_start = walk_end;
> +		if (IS_PM_SCAN_GET(p.flags) && p.vec_buf_index) {
> +			if (copy_to_user(&vec[vec_index], p.vec_buf,
> +					 p.vec_buf_index * sizeof(*p.vec_buf))) {
> +				/*
> +				 * Return error even though the OP succeeded
> +				 */
> +				ret = -EFAULT;
> +				goto free_data;
> +			}
> +			vec_index += p.vec_buf_index;

Should we set ret to zero here if it is equal to PM_SCAN_END_WALK?
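For example (untested): after vec_buf has been flushed to the user buffer
there is room again, so the walk could continue instead of stopping early:

	if (ret == PM_SCAN_END_WALK)
		ret = 0;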
> +		}
> +	}
> +
> +	if (p.cur_buf.len) {
> +		if (copy_to_user(&vec[vec_index], &p.cur_buf, sizeof(p.cur_buf))) {
> +			ret = -EFAULT;
> +			goto free_data;
> +		}
> +		vec_index++;
> +	}
> +
> +	ret = vec_index;
> +
> +free_data:
> +	if (IS_PM_SCAN_WP(p.flags))
> +		mmu_notifier_invalidate_range_end(&range);
> +
> +	kfree(p.vec_buf);
> +	return ret;
> +}
> +
> +static long do_pagemap_cmd(struct file *file, unsigned int cmd,
> +			   unsigned long arg)
> +{
> +	struct mm_struct *mm = file->private_data;
> +
> +	switch (cmd) {
> +	case PAGEMAP_SCAN:
> +		return do_pagemap_scan(mm, arg);
> +
> +	default:
> +		return -EINVAL;
> +	}
> +}
> +
>  const struct file_operations proc_pagemap_operations = {
>  	.llseek		= mem_lseek, /* borrow this */
>  	.read		= pagemap_read,
>  	.open		= pagemap_open,
>  	.release	= pagemap_release,
> +	.unlocked_ioctl = do_pagemap_cmd,
> +	.compat_ioctl	= do_pagemap_cmd,
>  };
>  #endif /* CONFIG_PROC_PAGE_MONITOR */
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index beb7c63d2871..a6e773c3e2b4 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -261,6 +261,7 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
>  		unsigned long cp_flags);
>
>  bool is_hugetlb_entry_migration(pte_t pte);
> +bool is_hugetlb_entry_hwpoisoned(pte_t pte);
>  void hugetlb_unshare_all_pmds(struct vm_area_struct *vma);
>
>  #else /* !CONFIG_HUGETLB_PAGE */
> diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
> index b7b56871029c..00dba03d4b8f 100644
> --- a/include/uapi/linux/fs.h
> +++ b/include/uapi/linux/fs.h
> @@ -305,4 +305,58 @@ typedef int __bitwise __kernel_rwf_t;
>  #define RWF_SUPPORTED	(RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\
>  			 RWF_APPEND)
>
> +/* Pagemap ioctl */
> +#define PAGEMAP_SCAN	_IOWR('f', 16, struct pm_scan_arg)
> +
> +/* Bits are set in flags of the page_region and masks in pm_scan_args */
> +#define PAGE_IS_WRITTEN		(1 << 0)
> +#define PAGE_IS_FILE		(1 << 1)
> +#define PAGE_IS_PRESENT		(1 << 2)
> +#define PAGE_IS_SWAPPED		(1 << 3)
> +#define PAGE_IS_PFNZERO		(1 << 4)
> +
> +/*
> + * struct page_region - Page region with flags
> + * @start:	Start of the region
> + * @len:	Length of the region in pages
> + * @bitmap:	Bits sets for the region
> + */
> +struct page_region {
> +	__u64 start;
> +	__u64 len;
> +	__u64 flags;
> +};
> +
> +/*
> + * struct pm_scan_arg - Pagemap ioctl argument
> + * @size:		Size of the structure
> + * @flags:		Flags for the IOCTL
> + * @start:		Starting address of the region
> + * @len:		Length of the region (All the pages in this length are included)
> + * @vec:		Address of page_region struct array for output
> + * @vec_len:		Length of the page_region struct array
> + * @max_pages:		Optional max return pages
> + * @required_mask:	Required mask - All of these bits have to be set in the PTE
> + * @anyof_mask:		Any mask - Any of these bits are set in the PTE
> + * @excluded_mask:	Exclude mask - None of these bits are set in the PTE
> + * @return_mask:	Bits that are to be reported in page_region
> + */
> +struct pm_scan_arg {
> +	__u64 size;
> +	__u64 flags;
> +	__u64 start;
> +	__u64 len;
> +	__u64 vec;
> +	__u64 vec_len;
> +	__u64 max_pages;
> +	__u64 required_mask;
> +	__u64 anyof_mask;
> +	__u64 excluded_mask;
> +	__u64 return_mask;
> +};
> +
> +/* Supported flags */
> +#define PM_SCAN_OP_GET	(1 << 0)
> +#define PM_SCAN_OP_WP	(1 << 1)
> +
>  #endif /* _UAPI_LINUX_FS_H */
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 0db13167b1ee..7e60f0f3fd03 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4980,7 +4980,7 @@ bool is_hugetlb_entry_migration(pte_t pte)
>  	return false;
>  }
>
> -static bool is_hugetlb_entry_hwpoisoned(pte_t pte)
> +bool is_hugetlb_entry_hwpoisoned(pte_t pte)
>  {
>  	swp_entry_t swp;
>
> --
> 2.39.2
>