From: Suren Baghdasaryan
Date: Wed, 27 Sep 2023 10:12:28 -0700
Subject: Re: potential new userfaultfd vs
 khugepaged conflict [was: Re: [PATCH v2 2/3] userfaultfd: UFFDIO_REMAP uABI]
To: Jann Horn
Cc: Hugh Dickins, Andrew Morton, Al Viro, brauner@kernel.org,
    Shuah Khan, Andrea Arcangeli, Lokesh Gidra, Peter Xu,
    David Hildenbrand, Michal Hocko, Axel Rasmussen, Mike Rapoport,
    willy@infradead.org, Liam.Howlett@oracle.com, zhangpeng362@huawei.com,
    Brian Geffon, Kalesh Singh, Nicolas Geoffray, Jared Duke,
    Linux-MM, linux-fsdevel, kernel list,
    "open list:KERNEL SELFTEST FRAMEWORK", kernel-team

On Wed, Sep 27, 2023 at 3:07 AM Jann Horn wrote:
>
> [moving Hugh into "To:" recipients as FYI for khugepaged interaction]
>
> On Sat, Sep 23, 2023 at 3:31 AM Suren Baghdasaryan wrote:
> > From: Andrea Arcangeli
> >
> > This implements the uABI of UFFDIO_REMAP.
> >
> > Notably, one mode bitflag is also forwarded to (and in turn known by)
> > the low-level remap_pages method.
> >
> > Signed-off-by: Andrea Arcangeli
> > Signed-off-by: Suren Baghdasaryan
> [...]
> > +/*
> > + * The mmap_lock for reading is held by the caller. Just move the page
> > + * from src_pmd to dst_pmd if possible, and return true if succeeded
> > + * in moving the page.
> > + */
> > +static int remap_pages_pte(struct mm_struct *dst_mm,
> > +                          struct mm_struct *src_mm,
> > +                          pmd_t *dst_pmd,
> > +                          pmd_t *src_pmd,
> > +                          struct vm_area_struct *dst_vma,
> > +                          struct vm_area_struct *src_vma,
> > +                          unsigned long dst_addr,
> > +                          unsigned long src_addr,
> > +                          __u64 mode)
> > +{
> > +       swp_entry_t entry;
> > +       pte_t orig_src_pte, orig_dst_pte;
> > +       spinlock_t *src_ptl, *dst_ptl;
> > +       pte_t *src_pte = NULL;
> > +       pte_t *dst_pte = NULL;
> > +
> > +       struct folio *src_folio = NULL;
> > +       struct anon_vma *src_anon_vma = NULL;
> > +       struct mmu_notifier_range range;
> > +       int err = 0;
> > +
> > +       mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, src_mm,
> > +                               src_addr, src_addr + PAGE_SIZE);
> > +       mmu_notifier_invalidate_range_start(&range);
> > +retry:
> > +       dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
> > +
> > +       /* If a huge pmd materialized from under us, fail */
> > +       if (unlikely(!dst_pte)) {
> > +               err = -EFAULT;
> > +               goto out;
> > +       }
> > +
> > +       src_pte = pte_offset_map_nolock(src_mm, src_pmd, src_addr, &src_ptl);
> > +
> > +       /*
> > +        * We held the mmap_lock for reading so MADV_DONTNEED
> > +        * can zap transparent huge pages under us, or the
> > +        * transparent huge page fault can establish new
> > +        * transparent huge pages under us.
> > +        */
> > +       if (unlikely(!src_pte)) {
> > +               err = -EFAULT;
> > +               goto out;
> > +       }
> > +
> > +       BUG_ON(pmd_none(*dst_pmd));
> > +       BUG_ON(pmd_none(*src_pmd));
> > +       BUG_ON(pmd_trans_huge(*dst_pmd));
> > +       BUG_ON(pmd_trans_huge(*src_pmd));
>
> This works for now, but note that Hugh Dickins has recently been
> reworking khugepaged such that PTE-based mappings can be collapsed
> into transhuge mappings with the mmap lock held in *read mode*;
> holders of the mmap lock in read mode can only synchronize against
> this by taking the right page table spinlock and rechecking the pmd
> value. This is only the case for file-based mappings so far, not for
> anonymous private VMAs; and this code only operates on anonymous
> private VMAs so far, so it works out.
>
> But if either Hugh further reworks khugepaged such that anonymous VMAs
> can be collapsed under the mmap lock in read mode, or you expand this
> code to work on file-backed VMAs, then it will become possible to hit
> these BUG_ON() calls. I'm not sure what the plans for khugepaged going
> forward are, but the number of edge cases everyone has to keep in mind
> would go down if you changed this function to deal gracefully with
> page tables disappearing under you.
>
> In the newest version of mm/pgtable-generic.c, above
> __pte_offset_map_lock(), there is a big comment block explaining the
> current rules for page table access; in particular, regarding the
> helper pte_offset_map_nolock() that you're using:
>
>  * pte_offset_map_nolock(mm, pmd, addr, ptlp), above, is like pte_offset_map();
>  * but when successful, it also outputs a pointer to the spinlock in ptlp - as
>  * pte_offset_map_lock() does, but in this case without locking it. This helps
>  * the caller to avoid a later pte_lockptr(mm, *pmd), which might by that time
>  * act on a changed *pmd: pte_offset_map_nolock() provides the correct spinlock
>  * pointer for the page table that it returns. In principle, the caller should
>  * recheck *pmd once the lock is taken; in practice, no callsite needs that -
>  * either the mmap_lock for write, or pte_same() check on contents, is enough.
>
> If this becomes hittable in the future, I think you will need to
> recheck *pmd, at least for dst_pte, to avoid copying PTEs into a
> detached page table.

Thanks for the warning, Jann. It sounds to me like it would be better
to add this pmd check now, even though it's not yet hittable. Does that
sound good to everyone?
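To make that concrete, here is a rough, untested sketch of what I have
in mind: snapshot the pmd before mapping the page table, then recheck
it under the page table lock before touching any PTEs. The dst_pmdval
variable is new; pmdp_get_lockless() and pmd_same() are existing
helpers; everything else is from the patch above.

	pmd_t dst_pmdval;

	/* Snapshot the pmd before mapping the page table it points to. */
	dst_pmdval = pmdp_get_lockless(dst_pmd);

	dst_pte = pte_offset_map_nolock(dst_mm, dst_pmd, dst_addr, &dst_ptl);
	if (unlikely(!dst_pte)) {
		err = -EFAULT;
		goto out;
	}
	...
	spin_lock(dst_ptl);
	/*
	 * Recheck that the pmd still points at the page table we mapped,
	 * so that we never install PTEs into a page table that khugepaged
	 * has meanwhile detached and freed.
	 */
	if (unlikely(!pmd_same(dst_pmdval, pmdp_get_lockless(dst_pmd)))) {
		spin_unlock(dst_ptl);
		err = -EAGAIN;
		goto out;
	}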
>
> > +       spin_lock(dst_ptl);
> > +       orig_dst_pte = *dst_pte;
> > +       spin_unlock(dst_ptl);
> > +       if (!pte_none(orig_dst_pte)) {
> > +               err = -EEXIST;
> > +               goto out;
> > +       }
> > +
> > +       spin_lock(src_ptl);
> > +       orig_src_pte = *src_pte;
> > +       spin_unlock(src_ptl);
> > +       if (pte_none(orig_src_pte)) {
> > +               if (!(mode & UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES))
> > +                       err = -ENOENT;
> > +               else /* nothing to do to remap a hole */
> > +                       err = 0;
> > +               goto out;
> > +       }
> > +
> > +       if (pte_present(orig_src_pte)) {
> > +               /*
> > +                * Pin and lock both source folio and anon_vma. Since we are in
> > +                * RCU read section, we can't block, so on contention have to
> > +                * unmap the ptes, obtain the lock and retry.
> > +                */
> > +               if (!src_folio) {
> > +                       struct folio *folio;
> > +
> > +                       /*
> > +                        * Pin the page while holding the lock to be sure the
> > +                        * page isn't freed under us
> > +                        */
> > +                       spin_lock(src_ptl);
> > +                       if (!pte_same(orig_src_pte, *src_pte)) {
> > +                               spin_unlock(src_ptl);
> > +                               err = -EAGAIN;
> > +                               goto out;
> > +                       }
> > +
> > +                       folio = vm_normal_folio(src_vma, src_addr, orig_src_pte);
> > +                       if (!folio || !folio_test_anon(folio) ||
> > +                           folio_test_large(folio) ||
> > +                           folio_estimated_sharers(folio) != 1) {
> > +                               spin_unlock(src_ptl);
> > +                               err = -EBUSY;
> > +                               goto out;
> > +                       }
> > +
> > +                       folio_get(folio);
> > +                       src_folio = folio;
> > +                       spin_unlock(src_ptl);
> > +
> > +                       /* block all concurrent rmap walks */
> > +                       if (!folio_trylock(src_folio)) {
> > +                               pte_unmap(&orig_src_pte);
> > +                               pte_unmap(&orig_dst_pte);
> > +                               src_pte = dst_pte = NULL;
> > +                               /* now we can block and wait */
> > +                               folio_lock(src_folio);
> > +                               goto retry;
> > +                       }
> > +               }
> > +
> > +               if (!src_anon_vma) {
> > +                       /*
> > +                        * folio_referenced walks the anon_vma chain
> > +                        * without the folio lock. Serialize against it with
> > +                        * the anon_vma lock; the folio lock is not enough.
> > +                        */
> > +                       src_anon_vma = folio_get_anon_vma(src_folio);
> > +                       if (!src_anon_vma) {
> > +                               /* page was unmapped from under us */
> > +                               err = -EAGAIN;
> > +                               goto out;
> > +                       }
> > +                       if (!anon_vma_trylock_write(src_anon_vma)) {
> > +                               pte_unmap(&orig_src_pte);
> > +                               pte_unmap(&orig_dst_pte);
> > +                               src_pte = dst_pte = NULL;
> > +                               /* now we can block and wait */
> > +                               anon_vma_lock_write(src_anon_vma);
> > +                               goto retry;
> > +                       }
> > +               }
> > +
> > +               err = remap_anon_pte(dst_mm, src_mm, dst_vma, src_vma,
> > +                                    dst_addr, src_addr, dst_pte, src_pte,
> > +                                    orig_dst_pte, orig_src_pte,
> > +                                    dst_ptl, src_ptl, src_folio);
> > +       } else {
> > +               entry = pte_to_swp_entry(orig_src_pte);
> > +               if (non_swap_entry(entry)) {
> > +                       if (is_migration_entry(entry)) {
> > +                               pte_unmap(&orig_src_pte);
> > +                               pte_unmap(&orig_dst_pte);
> > +                               src_pte = dst_pte = NULL;
> > +                               migration_entry_wait(src_mm, src_pmd,
> > +                                                    src_addr);
> > +                               err = -EAGAIN;
> > +                       } else
> > +                               err = -EFAULT;
> > +                       goto out;
> > +               }
> > +
> > +               err = remap_swap_pte(dst_mm, src_mm, dst_addr, src_addr,
> > +                                    dst_pte, src_pte,
> > +                                    orig_dst_pte, orig_src_pte,
> > +                                    dst_ptl, src_ptl);
> > +       }
> > +
> > +out:
> > +       if (src_anon_vma) {
> > +               anon_vma_unlock_write(src_anon_vma);
> > +               put_anon_vma(src_anon_vma);
> > +       }
> > +       if (src_folio) {
> > +               folio_unlock(src_folio);
> > +               folio_put(src_folio);
> > +       }
> > +       if (dst_pte)
> > +               pte_unmap(dst_pte);
> > +       if (src_pte)
> > +               pte_unmap(src_pte);
> > +       mmu_notifier_invalidate_range_end(&range);
> > +
> > +       return err;
> > +}
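P.S. For anyone following the thread without the uapi patch handy,
userspace would drive this roughly as below. This is an illustrative,
untested sketch: UFFDIO_REMAP and UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES are
from this series, but I'm assuming the struct uffdio_remap field layout
mirrors struct uffdio_copy, and remap_one_page() is just a made-up
helper name.

	#include <sys/ioctl.h>
	#include <linux/userfaultfd.h>
	#include <err.h>

	/* Move one page from src to dst through the userfaultfd. */
	static void remap_one_page(int uffd, void *dst, void *src,
				   unsigned long page_size)
	{
		struct uffdio_remap remap = {
			.dst  = (unsigned long)dst,
			.src  = (unsigned long)src,
			.len  = page_size,
			/* Tolerate an already-zapped source page. */
			.mode = UFFDIO_REMAP_MODE_ALLOW_SRC_HOLES,
		};

		if (ioctl(uffd, UFFDIO_REMAP, &remap) < 0)
			err(1, "UFFDIO_REMAP");
	}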