From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman,
    stable@vger.kernel.org,
    Peter Xu,
    Jerome Glisse,
    Mike Rapoport,
    Alexander Viro,
    Andrea Arcangeli,
    Axel Rasmussen,
    Brian Geffon,
    "Dr . David Alan Gilbert",
    Hugh Dickins,
    Joe Perches,
Shutemov" , Lokesh Gidra , Mike Kravetz , Mina Almasry , Oliver Upton , Shaohua Li , Shuah Khan , Stephen Rothwell , Wang Qing , Andrew Morton , Linus Torvalds Subject: [PATCH 5.13 106/156] mm/userfaultfd: fix uffd-wp special cases for fork() Date: Thu, 22 Jul 2021 18:31:21 +0200 Message-Id: <20210722155631.798626003@linuxfoundation.org> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20210722155628.371356843@linuxfoundation.org> References: <20210722155628.371356843@linuxfoundation.org> User-Agent: quilt/0.66 MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org From: Peter Xu commit 8f34f1eac3820fc2722e5159acceb22545b30b0d upstream. We tried to do something similar in b569a1760782 ("userfaultfd: wp: drop _PAGE_UFFD_WP properly when fork") previously, but it's not doing it all right.. A few fixes around the code path: 1. We were referencing VM_UFFD_WP vm_flags on the _old_ vma rather than the new vma. That's overlooked in b569a1760782, so it won't work as expected. Thanks to the recent rework on fork code (7a4830c380f3a8b3), we can easily get the new vma now, so switch the checks to that. 2. Dropping the uffd-wp bit in copy_huge_pmd() could be wrong if the huge pmd is a migration huge pmd. When it happens, instead of using pmd_uffd_wp(), we should use pmd_swp_uffd_wp(). The fix is simply to handle them separately. 3. Forget to carry over uffd-wp bit for a write migration huge pmd entry. This also happens in copy_huge_pmd(), where we converted a write huge migration entry into a read one. 4. In copy_nonpresent_pte(), drop uffd-wp if necessary for swap ptes. 5. In copy_present_page() when COW is enforced when fork(), we also need to pass over the uffd-wp bit if VM_UFFD_WP is armed on the new vma, and when the pte to be copied has uffd-wp bit set. Remove the comment in copy_present_pte() about this. It won't help a huge lot to only comment there, but comment everywhere would be an overkill. Let's assume the commit messages would help. [peterx@redhat.com: fix a few thp pmd missing uffd-wp bit] Link: https://lkml.kernel.org/r/20210428225030.9708-4-peterx@redhat.com Link: https://lkml.kernel.org/r/20210428225030.9708-3-peterx@redhat.com Fixes: b569a1760782f ("userfaultfd: wp: drop _PAGE_UFFD_WP properly when fork") Signed-off-by: Peter Xu Cc: Jerome Glisse Cc: Mike Rapoport Cc: Alexander Viro Cc: Andrea Arcangeli Cc: Axel Rasmussen Cc: Brian Geffon Cc: "Dr . David Alan Gilbert" Cc: Hugh Dickins Cc: Joe Perches Cc: Kirill A. 
 include/linux/huge_mm.h |    2 +-
 include/linux/swapops.h |    2 ++
 mm/huge_memory.c        |   27 ++++++++++++++-------------
 mm/memory.c             |   25 +++++++++++++------------
 4 files changed, 30 insertions(+), 26 deletions(-)

--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -10,7 +10,7 @@
 vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
-		  struct vm_area_struct *vma);
+		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
 void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd);
 int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -265,6 +265,8 @@ static inline swp_entry_t pmd_to_swp_ent
 
 	if (pmd_swp_soft_dirty(pmd))
 		pmd = pmd_swp_clear_soft_dirty(pmd);
+	if (pmd_swp_uffd_wp(pmd))
+		pmd = pmd_swp_clear_uffd_wp(pmd);
 	arch_entry = __pmd_to_swp_entry(pmd);
 	return swp_entry(__swp_type(arch_entry), __swp_offset(arch_entry));
 }
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1026,7 +1026,7 @@ struct page *follow_devmap_pmd(struct vm
 
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
-		  struct vm_area_struct *vma)
+		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
 {
 	spinlock_t *dst_ptl, *src_ptl;
 	struct page *src_page;
@@ -1035,7 +1035,7 @@ int copy_huge_pmd(struct mm_struct *dst_
 	int ret = -ENOMEM;
 
 	/* Skip if can be re-fill on fault */
-	if (!vma_is_anonymous(vma))
+	if (!vma_is_anonymous(dst_vma))
 		return 0;
 
 	pgtable = pte_alloc_one(dst_mm);
@@ -1049,14 +1049,6 @@ int copy_huge_pmd(struct mm_struct *dst_
 	ret = -EAGAIN;
 	pmd = *src_pmd;
 
-	/*
-	 * Make sure the _PAGE_UFFD_WP bit is cleared if the new VMA
-	 * does not have the VM_UFFD_WP, which means that the uffd
-	 * fork event is not enabled.
-	 */
-	if (!(vma->vm_flags & VM_UFFD_WP))
-		pmd = pmd_clear_uffd_wp(pmd);
-
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION
 	if (unlikely(is_swap_pmd(pmd))) {
 		swp_entry_t entry = pmd_to_swp_entry(pmd);
@@ -1067,11 +1059,15 @@ int copy_huge_pmd(struct mm_struct *dst_
 			pmd = swp_entry_to_pmd(entry);
 			if (pmd_swp_soft_dirty(*src_pmd))
 				pmd = pmd_swp_mksoft_dirty(pmd);
+			if (pmd_swp_uffd_wp(*src_pmd))
+				pmd = pmd_swp_mkuffd_wp(pmd);
 			set_pmd_at(src_mm, addr, src_pmd, pmd);
 		}
 		add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 		mm_inc_nr_ptes(dst_mm);
 		pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
+		if (!userfaultfd_wp(dst_vma))
+			pmd = pmd_swp_clear_uffd_wp(pmd);
 		set_pmd_at(dst_mm, addr, dst_pmd, pmd);
 		ret = 0;
 		goto out_unlock;
@@ -1107,11 +1103,11 @@ int copy_huge_pmd(struct mm_struct *dst_
 	 * best effort that the pinned pages won't be replaced by another
 	 * random page during the coming copy-on-write.
 	 */
-	if (unlikely(page_needs_cow_for_dma(vma, src_page))) {
+	if (unlikely(page_needs_cow_for_dma(src_vma, src_page))) {
 		pte_free(dst_mm, pgtable);
 		spin_unlock(src_ptl);
 		spin_unlock(dst_ptl);
-		__split_huge_pmd(vma, src_pmd, addr, false, NULL);
+		__split_huge_pmd(src_vma, src_pmd, addr, false, NULL);
 		return -EAGAIN;
 	}
 
@@ -1121,8 +1117,9 @@ int copy_huge_pmd(struct mm_struct *dst_
 out_zero_page:
 	mm_inc_nr_ptes(dst_mm);
 	pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
-
 	pmdp_set_wrprotect(src_mm, addr, src_pmd);
+	if (!userfaultfd_wp(dst_vma))
+		pmd = pmd_clear_uffd_wp(pmd);
 	pmd = pmd_mkold(pmd_wrprotect(pmd));
 	set_pmd_at(dst_mm, addr, dst_pmd, pmd);
 
@@ -1838,6 +1835,8 @@ int change_huge_pmd(struct vm_area_struc
 			newpmd = swp_entry_to_pmd(entry);
 			if (pmd_swp_soft_dirty(*pmd))
 				newpmd = pmd_swp_mksoft_dirty(newpmd);
+			if (pmd_swp_uffd_wp(*pmd))
+				newpmd = pmd_swp_mkuffd_wp(newpmd);
 			set_pmd_at(mm, addr, pmd, newpmd);
 		}
 		goto unlock;
@@ -3248,6 +3247,8 @@ void remove_migration_pmd(struct page_vm
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_write_migration_entry(entry))
 		pmde = maybe_pmd_mkwrite(pmde, vma);
+	if (pmd_swp_uffd_wp(*pvmw->pmd))
+		pmde = pmd_wrprotect(pmd_mkuffd_wp(pmde));
 
 	flush_cache_range(vma, mmun_start, mmun_start + HPAGE_PMD_SIZE);
 	if (PageAnon(new))
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -708,10 +708,10 @@ out:
 
 static unsigned long
 copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
-		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
-		unsigned long addr, int *rss)
+		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *dst_vma,
+		struct vm_area_struct *src_vma, unsigned long addr, int *rss)
 {
-	unsigned long vm_flags = vma->vm_flags;
+	unsigned long vm_flags = dst_vma->vm_flags;
 	pte_t pte = *src_pte;
 	struct page *page;
 	swp_entry_t entry = pte_to_swp_entry(pte);
@@ -780,6 +780,8 @@ copy_nonpresent_pte(struct mm_struct *ds
 			set_pte_at(src_mm, addr, src_pte, pte);
 		}
 	}
+	if (!userfaultfd_wp(dst_vma))
+		pte = pte_swp_clear_uffd_wp(pte);
 	set_pte_at(dst_mm, addr, dst_pte, pte);
 	return 0;
 }
@@ -845,6 +847,9 @@ copy_present_page(struct vm_area_struct
 	/* All done, just insert the new page copy in the child */
 	pte = mk_pte(new_page, dst_vma->vm_page_prot);
 	pte = maybe_mkwrite(pte_mkdirty(pte), dst_vma);
+	if (userfaultfd_pte_wp(dst_vma, *src_pte))
+		/* Uffd-wp needs to be delivered to dest pte as well */
+		pte = pte_wrprotect(pte_mkuffd_wp(pte));
 	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
 	return 0;
 }
@@ -894,12 +899,7 @@ copy_present_pte(struct vm_area_struct *
 		pte = pte_mkclean(pte);
 	pte = pte_mkold(pte);
 
-	/*
-	 * Make sure the _PAGE_UFFD_WP bit is cleared if the new VMA
-	 * does not have the VM_UFFD_WP, which means that the uffd
-	 * fork event is not enabled.
-	 */
-	if (!(vm_flags & VM_UFFD_WP))
+	if (!userfaultfd_wp(dst_vma))
 		pte = pte_clear_uffd_wp(pte);
 
 	set_pte_at(dst_vma->vm_mm, addr, dst_pte, pte);
@@ -974,7 +974,8 @@ again:
 		if (unlikely(!pte_present(*src_pte))) {
 			entry.val = copy_nonpresent_pte(dst_mm, src_mm,
 							dst_pte, src_pte,
-							src_vma, addr, rss);
+							dst_vma, src_vma,
+							addr, rss);
 			if (entry.val)
 				break;
 			progress += 8;
@@ -1051,8 +1052,8 @@ copy_pmd_range(struct vm_area_struct *ds
 			|| pmd_devmap(*src_pmd)) {
 			int err;
 			VM_BUG_ON_VMA(next-addr != HPAGE_PMD_SIZE, src_vma);
-			err = copy_huge_pmd(dst_mm, src_mm,
-					    dst_pmd, src_pmd, addr, src_vma);
+			err = copy_huge_pmd(dst_mm, src_mm, dst_pmd, src_pmd,
+					    addr, dst_vma, src_vma);
 			if (err == -ENOMEM)
 				return -ENOMEM;
 			if (!err)
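
[ Editor's note: the diff above replaces the open-coded
  "vma->vm_flags & VM_UFFD_WP" tests with userfaultfd_wp(dst_vma) /
  userfaultfd_pte_wp(dst_vma, ...) checks on the destination vma, i.e.
  the vma that actually decides whether the child keeps the bit.  The
  probe below is not part of the patch; it is an illustrative way to
  check, via the UFFDIO_API handshake from <linux/userfaultfd.h>,
  whether a running kernel advertises the write-protect and fork-event
  features this fix is relevant for. ]

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct uffdio_api api = { .api = UFFD_API, .features = 0 };
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC);

	if (uffd < 0) {
		perror("userfaultfd");
		return 1;
	}
	/* features == 0: plain handshake; kernel reports what it supports */
	if (ioctl(uffd, UFFDIO_API, &api)) {
		perror("UFFDIO_API");
		return 1;
	}
	printf("uffd-wp faults : %s\n",
	       (api.features & UFFD_FEATURE_PAGEFAULT_FLAG_WP) ? "yes" : "no");
	printf("fork events    : %s\n",
	       (api.features & UFFD_FEATURE_EVENT_FORK) ? "yes" : "no");
	return 0;
}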