Date: Tue, 13 Apr 2021 23:51:26 -0700 (PDT)
From: Hugh Dickins
To: Axel Rasmussen
cc: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
    Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz, Mike Rapoport,
    Peter Xu, Shaohua Li, Shuah Khan, Stephen Rothwell, Wang Qing,
    linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
    linux-mm@kvack.org, Brian Geffon, "Dr. David Alan Gilbert",
    Mina Almasry, Oliver Upton
Subject: Re: [PATCH v2 2/9] userfaultfd/shmem: combine shmem_{mcopy_atomic,mfill_zeropage}_pte
In-Reply-To: <20210413051721.2896915-3-axelrasmussen@google.com>
References: <20210413051721.2896915-1-axelrasmussen@google.com> <20210413051721.2896915-3-axelrasmussen@google.com>
User-Agent: Alpine 2.11 (LSU 23 2013-08-11)
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, 12 Apr 2021, Axel Rasmussen wrote:

> Previously, we did a dance where we had one calling path in
> userfaultfd.c (mfill_atomic_pte), but then we split it into two in
> shmem_fs.h (shmem_{mcopy_atomic,mfill_zeropage}_pte), and then rejoined
> into a single shared function in shmem.c (shmem_mfill_atomic_pte).
>
> This is all a bit overly complex. Just call the single combined shmem
> function directly, allowing us to clean up various branches,
> boilerplate, etc.
>
> While we're touching this function, two other small cleanup changes:
> - offset is equivalent to pgoff, so we can get rid of offset entirely.
> - Split two VM_BUG_ON cases into two statements. This means the line
>   number reported when the BUG is hit specifies exactly which condition
>   was true.
>
> Reviewed-by: Peter Xu
> Signed-off-by: Axel Rasmussen

Acked-by: Hugh Dickins

though you've dropped one minor fix I did like, see below...
> ---
>  include/linux/shmem_fs.h | 15 +++++-------
>  mm/shmem.c               | 52 +++++++++++++---------------------------
>  mm/userfaultfd.c         | 10 +++-----
>  3 files changed, 25 insertions(+), 52 deletions(-)
>
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index d82b6f396588..919e36671fe6 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -122,21 +122,18 @@ static inline bool shmem_file(struct file *file)
>  extern bool shmem_charge(struct inode *inode, long pages);
>  extern void shmem_uncharge(struct inode *inode, long pages);
>
> +#ifdef CONFIG_USERFAULTFD
>  #ifdef CONFIG_SHMEM
>  extern int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
>                                    struct vm_area_struct *dst_vma,
>                                    unsigned long dst_addr,
>                                    unsigned long src_addr,
> +                                  bool zeropage,
>                                    struct page **pagep);
> -extern int shmem_mfill_zeropage_pte(struct mm_struct *dst_mm,
> -                                    pmd_t *dst_pmd,
> -                                    struct vm_area_struct *dst_vma,
> -                                    unsigned long dst_addr);
> -#else
> +#else /* !CONFIG_SHMEM */
>  #define shmem_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma, dst_addr, \

In a previous version, you quietly corrected that "dst_pte" to "dst_pmd":
of course it makes no difference to the code generated, but it was a good
correction, helping to prevent confusion.
> -                               src_addr, pagep) ({ BUG(); 0; })
> -#define shmem_mfill_zeropage_pte(dst_mm, dst_pmd, dst_vma, \
> -                                 dst_addr) ({ BUG(); 0; })
> -#endif
> +                               src_addr, zeropage, pagep) ({ BUG(); 0; })
> +#endif /* CONFIG_SHMEM */
> +#endif /* CONFIG_USERFAULTFD */
>
>  #endif
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 26c76b13ad23..b72c55aa07fc 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2354,13 +2354,14 @@ static struct inode *shmem_get_inode(struct super_block *sb, const struct inode
>  	return inode;
>  }
>
> -static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
> -				  pmd_t *dst_pmd,
> -				  struct vm_area_struct *dst_vma,
> -				  unsigned long dst_addr,
> -				  unsigned long src_addr,
> -				  bool zeropage,
> -				  struct page **pagep)
> +#ifdef CONFIG_USERFAULTFD
> +int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
> +			   pmd_t *dst_pmd,
> +			   struct vm_area_struct *dst_vma,
> +			   unsigned long dst_addr,
> +			   unsigned long src_addr,
> +			   bool zeropage,
> +			   struct page **pagep)
>  {
>  	struct inode *inode = file_inode(dst_vma->vm_file);
>  	struct shmem_inode_info *info = SHMEM_I(inode);
> @@ -2372,7 +2373,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
>  	struct page *page;
>  	pte_t _dst_pte, *dst_pte;
>  	int ret;
> -	pgoff_t offset, max_off;
> +	pgoff_t max_off;
>
>  	ret = -ENOMEM;
>  	if (!shmem_inode_acct_block(inode, 1))
> @@ -2383,7 +2384,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
>  		if (!page)
>  			goto out_unacct_blocks;
>
> -		if (!zeropage) {	/* mcopy_atomic */
> +		if (!zeropage) {	/* COPY */
>  			page_kaddr = kmap_atomic(page);
>  			ret = copy_from_user(page_kaddr,
>  					     (const void __user *)src_addr,
> @@ -2397,7 +2398,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
>  				/* don't free the page */
>  				return -ENOENT;
>  			}
> -		} else {		/* mfill_zeropage_atomic */
> +		} else {		/* ZEROPAGE */
>  			clear_highpage(page);
>  		}
>  	} else {
> @@ -2405,15 +2406,15 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
>  		*pagep = NULL;
>  	}
>
> -	VM_BUG_ON(PageLocked(page) || PageSwapBacked(page));
> +	VM_BUG_ON(PageLocked(page));
> +	VM_BUG_ON(PageSwapBacked(page));
>  	__SetPageLocked(page);
>  	__SetPageSwapBacked(page);
>  	__SetPageUptodate(page);
>
>  	ret = -EFAULT;
> -	offset = linear_page_index(dst_vma, dst_addr);
>  	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
> -	if (unlikely(offset >= max_off))
> +	if (unlikely(pgoff >= max_off))
>  		goto out_release;
>
>  	ret = shmem_add_to_page_cache(page, mapping, pgoff, NULL,
> @@ -2439,7 +2440,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
>
>  	ret = -EFAULT;
>  	max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
> -	if (unlikely(offset >= max_off))
> +	if (unlikely(pgoff >= max_off))
>  		goto out_release_unlock;
>
>  	ret = -EEXIST;
> @@ -2476,28 +2477,7 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
>  	shmem_inode_unacct_blocks(inode, 1);
>  	goto out;
>  }
> -
> -int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm,
> -			   pmd_t *dst_pmd,
> -			   struct vm_area_struct *dst_vma,
> -			   unsigned long dst_addr,
> -			   unsigned long src_addr,
> -			   struct page **pagep)
> -{
> -	return shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
> -				      dst_addr, src_addr, false, pagep);
> -}
> -
> -int shmem_mfill_zeropage_pte(struct mm_struct *dst_mm,
> -			     pmd_t *dst_pmd,
> -			     struct vm_area_struct *dst_vma,
> -			     unsigned long dst_addr)
> -{
> -	struct page *page = NULL;
> -
> -	return shmem_mfill_atomic_pte(dst_mm, dst_pmd, dst_vma,
> -				      dst_addr, 0, true, &page);
> -}
> +#endif /* CONFIG_USERFAULTFD */
>
>  #ifdef CONFIG_TMPFS
>  static const struct inode_operations shmem_symlink_inode_operations;
>
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index e14b3820c6a8..23fa2583bbd1 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -440,13 +440,9 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
>  						dst_vma, dst_addr);
>  	} else {
>  		VM_WARN_ON_ONCE(wp_copy);
> -		if (!zeropage)
> -			err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd,
> -						     dst_vma, dst_addr,
> -						     src_addr, page);
> -		else
> -			err = shmem_mfill_zeropage_pte(dst_mm, dst_pmd,
> -						       dst_vma, dst_addr);
> +		err = shmem_mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma,
> +					     dst_addr, src_addr, zeropage,
> +					     page);
>  	}
>
>  	return err;
> --
> 2.31.1.295.g9ea45b61b8-goog
>
>