Date: Tue, 27 Apr 2021 15:52:38 -0700
In-Reply-To: <20210427225244.4326-1-axelrasmussen@google.com>
Message-Id: <20210427225244.4326-5-axelrasmussen@google.com>
Mime-Version: 1.0
References: <20210427225244.4326-1-axelrasmussen@google.com>
X-Mailer: git-send-email 2.31.1.498.g6c1eba8ee3d-goog
Subject: [PATCH v5 04/10] userfaultfd/shmem: support UFFDIO_CONTINUE for shmem
From: Axel Rasmussen <axelrasmussen@google.com>
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
    Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
    Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
    "Dr. David Alan Gilbert", Mina Almasry, Oliver Upton

With this change, userspace can resolve a minor fault within a
shmem-backed area with a UFFDIO_CONTINUE ioctl. The semantics for this
match those for hugetlbfs: we look up the existing page in the page
cache, and install a PTE for it.

This commit introduces a new helper: mcopy_atomic_install_pte.

Why handle UFFDIO_CONTINUE for shmem in mm/userfaultfd.c, instead of in
shmem.c? The existing userfault implementation only relies on shmem.c
for VM_SHARED VMAs. However, minor fault handling / CONTINUE work just
fine for !VM_SHARED VMAs as well. We'd prefer to handle CONTINUE for
shmem in one place, regardless of shared/private (to reduce code
duplication).

Why add a new mcopy_atomic_install_pte helper? A problem with CONTINUE
is that shmem_mcopy_atomic_pte() and mcopy_atomic_pte() are *close* to
what we want, but not exactly. We do want to set up the PTEs in a
CONTINUE operation, but we don't want to e.g. allocate a new page,
charge it (e.g. to the shmem inode), manipulate various flags, etc.
Also, we have the problem stated above: shmem_mcopy_atomic_pte() and
mcopy_atomic_pte() each handle only one half of the problem
(shared / private) that CONTINUE cares about. So, introduce
mcontinue_atomic_pte() to handle all of the shmem CONTINUE cases, and
introduce mcopy_atomic_install_pte so mcontinue_atomic_pte() doesn't
duplicate code with mcopy_atomic_pte().

In a future commit, shmem_mcopy_atomic_pte() will also be modified to
use this new helper. However, since that is a bigger refactor, it seems
clearest to do it as a separate change.
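For background, a rough sketch of the userspace flow this enables is
below. It is illustrative only (not part of this patch): error handling
is elided, and it assumes the UFFD_FEATURE_MINOR_SHMEM feature bit
advertised elsewhere in this series. The rest of the UAPI it uses
(UFFDIO_REGISTER_MODE_MINOR, UFFD_PAGEFAULT_FLAG_MINOR, UFFDIO_CONTINUE)
is the existing hugetlbfs minor fault interface:

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <linux/userfaultfd.h>
  #include <pthread.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  static void *toucher(void *p)
  {
          /* Faulting access: raises a minor fault, blocks until resolved. */
          printf("guest says: %s\n", (char *)p);
          return NULL;
  }

  int main(void)
  {
          size_t len = sysconf(_SC_PAGESIZE);
          int memfd = memfd_create("uffd-minor-demo", 0);
          ftruncate(memfd, len);

          /* Two mappings of one shmem file: same page cache, separate PTEs. */
          char *alias = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, memfd, 0);
          char *guest = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, memfd, 0);

          long uffd = syscall(__NR_userfaultfd, O_CLOEXEC);
          struct uffdio_api api = {
                  .api = UFFD_API,
                  /* Feature bit added elsewhere in this series. */
                  .features = UFFD_FEATURE_MINOR_SHMEM,
          };
          ioctl(uffd, UFFDIO_API, &api);

          struct uffdio_register reg = {
                  .range = { .start = (unsigned long)guest, .len = len },
                  .mode = UFFDIO_REGISTER_MODE_MINOR,
          };
          ioctl(uffd, UFFDIO_REGISTER, &reg);

          /* Populate the page cache through the unregistered mapping. The
           * page now exists, but "guest" still has no PTE for it. */
          strcpy(alias, "hello");

          pthread_t t;
          pthread_create(&t, NULL, toucher, guest);

          /* Handler: wait for the minor fault, then ask the kernel to
           * install a PTE for the page-cache page that is already there. */
          struct uffd_msg msg;
          read(uffd, &msg, sizeof(msg));
          if (msg.event == UFFD_EVENT_PAGEFAULT &&
              (msg.arg.pagefault.flags & UFFD_PAGEFAULT_FLAG_MINOR)) {
                  struct uffdio_continue cont = {
                          .range = {
                                  .start = msg.arg.pagefault.address &
                                           ~(unsigned long long)(len - 1),
                                  .len = len,
                          },
                  };
                  ioctl(uffd, UFFDIO_CONTINUE, &cont);
          }

          pthread_join(t, NULL);
          return 0;
  }

The point of the double mapping is that page contents become visible in
the page cache without any PTE existing in the registered range, so the
kernel's only job at CONTINUE time is the PTE install done by
mcopy_atomic_install_pte() below.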
Signed-off-by: Axel Rasmussen <axelrasmussen@google.com>
---
 mm/userfaultfd.c | 172 ++++++++++++++++++++++++++++++++++-------------
 1 file changed, 127 insertions(+), 45 deletions(-)

diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 23fa2583bbd1..51d8c0127161 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -48,6 +48,83 @@ struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
 	return dst_vma;
 }
 
+/*
+ * Install PTEs, to map dst_addr (within dst_vma) to page.
+ *
+ * This function handles MCOPY_ATOMIC_CONTINUE (which is always file-backed),
+ * whether or not dst_vma is VM_SHARED. It also handles the more general
+ * MCOPY_ATOMIC_NORMAL case, when dst_vma is *not* VM_SHARED (it may be file
+ * backed, or not).
+ *
+ * Note that MCOPY_ATOMIC_NORMAL for a VM_SHARED dst_vma is handled by
+ * shmem_mcopy_atomic_pte instead.
+ */
+static int mcopy_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
+				    struct vm_area_struct *dst_vma,
+				    unsigned long dst_addr, struct page *page,
+				    bool newly_allocated, bool wp_copy)
+{
+	int ret;
+	pte_t _dst_pte, *dst_pte;
+	bool writable = dst_vma->vm_flags & VM_WRITE;
+	bool vm_shared = dst_vma->vm_flags & VM_SHARED;
+	bool page_in_cache = page->mapping;
+	spinlock_t *ptl;
+	struct inode *inode;
+	pgoff_t offset, max_off;
+
+	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
+	if (page_in_cache && !vm_shared)
+		writable = false;
+	if (writable || !page_in_cache)
+		_dst_pte = pte_mkdirty(_dst_pte);
+	if (writable) {
+		if (wp_copy)
+			_dst_pte = pte_mkuffd_wp(_dst_pte);
+		else
+			_dst_pte = pte_mkwrite(_dst_pte);
+	}
+
+	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
+
+	if (vma_is_shmem(dst_vma)) {
+		/* serialize against truncate with the page table lock */
+		inode = dst_vma->vm_file->f_inode;
+		offset = linear_page_index(dst_vma, dst_addr);
+		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
+		ret = -EFAULT;
+		if (unlikely(offset >= max_off))
+			goto out_unlock;
+	}
+
+	ret = -EEXIST;
+	if (!pte_none(*dst_pte))
+		goto out_unlock;
+
+	if (page_in_cache)
+		page_add_file_rmap(page, false);
+	else
+		page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
+
+	/*
+	 * Must happen after rmap, as mm_counter() checks mapping (via
+	 * PageAnon()), which is set by __page_set_anon_rmap().
+	 */
+	inc_mm_counter(dst_mm, mm_counter(page));
+
+	if (newly_allocated)
+		lru_cache_add_inactive_or_unevictable(page, dst_vma);
+
+	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
+
+	/* No need to invalidate - it was non-present before */
+	update_mmu_cache(dst_vma, dst_addr, dst_pte);
+	ret = 0;
+out_unlock:
+	pte_unmap_unlock(dst_pte, ptl);
+	return ret;
+}
+
 static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 			    pmd_t *dst_pmd,
 			    struct vm_area_struct *dst_vma,
@@ -56,13 +133,9 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 			    struct page **pagep,
 			    bool wp_copy)
 {
-	pte_t _dst_pte, *dst_pte;
-	spinlock_t *ptl;
 	void *page_kaddr;
 	int ret;
 	struct page *page;
-	pgoff_t offset, max_off;
-	struct inode *inode;
 
 	if (!*pagep) {
 		ret = -ENOMEM;
@@ -99,43 +172,12 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
 	if (mem_cgroup_charge(page, dst_mm, GFP_KERNEL))
 		goto out_release;
 
-	_dst_pte = pte_mkdirty(mk_pte(page, dst_vma->vm_page_prot));
-	if (dst_vma->vm_flags & VM_WRITE) {
-		if (wp_copy)
-			_dst_pte = pte_mkuffd_wp(_dst_pte);
-		else
-			_dst_pte = pte_mkwrite(_dst_pte);
-	}
-
-	dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl);
-	if (dst_vma->vm_file) {
-		/* the shmem MAP_PRIVATE case requires checking the i_size */
-		inode = dst_vma->vm_file->f_inode;
-		offset = linear_page_index(dst_vma, dst_addr);
-		max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
-		ret = -EFAULT;
-		if (unlikely(offset >= max_off))
-			goto out_release_uncharge_unlock;
-	}
-	ret = -EEXIST;
-	if (!pte_none(*dst_pte))
-		goto out_release_uncharge_unlock;
-
-	inc_mm_counter(dst_mm, MM_ANONPAGES);
-	page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
-	lru_cache_add_inactive_or_unevictable(page, dst_vma);
-
-	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
-
-	/* No need to invalidate - it was non-present before */
-	update_mmu_cache(dst_vma, dst_addr, dst_pte);
-
-	pte_unmap_unlock(dst_pte, ptl);
-	ret = 0;
+	ret = mcopy_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+				       page, true, wp_copy);
+	if (ret)
+		goto out_release;
 out:
 	return ret;
-out_release_uncharge_unlock:
-	pte_unmap_unlock(dst_pte, ptl);
 out_release:
 	put_page(page);
 	goto out;
@@ -176,6 +218,41 @@ static int mfill_zeropage_pte(struct mm_struct *dst_mm,
 	return ret;
 }
 
+/* Handles UFFDIO_CONTINUE for all shmem VMAs (shared or private). */
+static int mcontinue_atomic_pte(struct mm_struct *dst_mm,
+				pmd_t *dst_pmd,
+				struct vm_area_struct *dst_vma,
+				unsigned long dst_addr,
+				bool wp_copy)
+{
+	struct inode *inode = file_inode(dst_vma->vm_file);
+	pgoff_t pgoff = linear_page_index(dst_vma, dst_addr);
+	struct page *page;
+	int ret;
+
+	ret = shmem_getpage(inode, pgoff, &page, SGP_READ);
+	if (ret)
+		goto out;
+	if (!page) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	ret = mcopy_atomic_install_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+				       page, false, wp_copy);
+	if (ret)
+		goto out_release;
+
+	unlock_page(page);
+	ret = 0;
+out:
+	return ret;
+out_release:
+	unlock_page(page);
+	put_page(page);
+	goto out;
+}
+
 static pmd_t *mm_alloc_pmd(struct mm_struct *mm, unsigned long address)
 {
 	pgd_t *pgd;
@@ -415,11 +492,16 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
 						unsigned long dst_addr,
 						unsigned long src_addr,
 						struct page **page,
-						bool zeropage,
+						enum mcopy_atomic_mode mode,
 						bool wp_copy)
 {
 	ssize_t err;
 
+	if (mode == MCOPY_ATOMIC_CONTINUE) {
+		return mcontinue_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
+					    wp_copy);
+	}
+
 	/*
 	 * The normal page fault path for a shmem will invoke the
 	 * fault, fill the hole in the file and COW it right away. The
@@ -431,7 +513,7 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
 	 * and not in the radix tree.
 	 */
 	if (!(dst_vma->vm_flags & VM_SHARED)) {
-		if (!zeropage)
+		if (mode == MCOPY_ATOMIC_NORMAL)
 			err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
 					       dst_addr, src_addr, page,
 					       wp_copy);
@@ -441,7 +523,8 @@
 	} else {
 		VM_WARN_ON_ONCE(wp_copy);
 		err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
-					     dst_addr, src_addr, zeropage,
+					     dst_addr, src_addr,
+					     mode != MCOPY_ATOMIC_NORMAL,
 					     page);
 	}
 
@@ -463,7 +546,6 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	long copied;
 	struct page *page;
 	bool wp_copy;
-	bool zeropage = (mcopy_mode == MCOPY_ATOMIC_ZEROPAGE);
 
 	/*
 	 * Sanitize the command parameters:
@@ -526,7 +608,7 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;
 
-	if (mcopy_mode == MCOPY_ATOMIC_CONTINUE)
+	if (!vma_is_shmem(dst_vma) && mcopy_mode == MCOPY_ATOMIC_CONTINUE)
 		goto out_unlock;
 
 	/*
@@ -574,7 +656,7 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 		BUG_ON(pmd_trans_huge(*dst_pmd));
 
 		err = mfill_atomic_pte(dst_mm, dst_pmd, dst_vma, dst_addr,
-				       src_addr, &page, zeropage, wp_copy);
+				       src_addr, &page, mcopy_mode, wp_copy);
 		cond_resched();
 
 		if (unlikely(err == -ENOENT)) {
-- 
2.31.1.498.g6c1eba8ee3d-goog
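
P.S. A reviewer-facing sketch (not part of the patch) of how a handler
might map UFFDIO_CONTINUE failures back to the error paths above:
-EEXIST when the PTE is already present (e.g. two threads raced to
resolve the same fault), and -EFAULT when the offset is past i_size
(the truncate check) or no page exists in the cache. The helper name
here is made up for illustration:

  #include <errno.h>
  #include <linux/userfaultfd.h>
  #include <stddef.h>
  #include <sys/ioctl.h>

  /* Returns 0 on success, or when another thread already resolved the
   * fault; a negative errno otherwise. */
  static int uffd_continue(int uffd, unsigned long long addr, size_t page_size)
  {
          struct uffdio_continue cont = {
                  .range = {
                          .start = addr & ~(unsigned long long)(page_size - 1),
                          .len = page_size,
                  },
          };

          if (ioctl(uffd, UFFDIO_CONTINUE, &cont) == 0)
                  return 0;
          if (errno == EEXIST)    /* PTE already present: benign race */
                  return 0;
          /* EFAULT: offset past i_size, or no page in the page cache. */
          return -errno;
  }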