From: Lokesh Gidra
Date: Wed, 7 Feb 2024 10:50:08 -0800
Subject: Re: [PATCH v3 3/3] userfaultfd: use per-vma locks in userfaultfd operations
To: Jann Horn
Cc: akpm@linux-foundation.org, linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, selinux@vger.kernel.org, surenb@google.com,
    kernel-team@android.com, aarcange@redhat.com, peterx@redhat.com,
    david@redhat.com, axelrasmussen@google.com, bgeffon@google.com,
    willy@infradead.org, kaleshsingh@google.com, ngeoffray@google.com,
    timmurray@google.com, rppt@kernel.org, Liam.Howlett@oracle.com

On Tue, Feb 6, 2024 at 10:28 AM Jann Horn wrote:
>
> On Tue, Feb 6, 2024 at 2:09 AM Lokesh Gidra wrote:
> > All userfaultfd operations, except write-protect, opportunistically use
> > per-vma locks to lock vmas. On failure, attempt again inside mmap_lock
> > critical section.
> >
> > Write-protect operation requires mmap_lock as it iterates over multiple
> > vmas.
> >
> > Signed-off-by: Lokesh Gidra
> [...]
> > diff --git a/mm/memory.c b/mm/memory.c
> > index b05fd28dbce1..393ab3b0d6f3 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> [...]
> > +/*
> > + * lock_vma() - Lookup and lock VMA corresponding to @address.
> > + * @prepare_anon: If true, then prepare the VMA (if anonymous) with anon_vma.
> > + *
> > + * Should be called without holding mmap_lock. VMA should be unlocked after use
> > + * with unlock_vma().
> > + *
> > + * Return: A locked VMA containing @address, NULL of no VMA is found, or
> > + * -ENOMEM if anon_vma couldn't be allocated.
> > + */
> > +struct vm_area_struct *lock_vma(struct mm_struct *mm,
> > +                               unsigned long address,
> > +                               bool prepare_anon)
> > +{
> > +       struct vm_area_struct *vma;
> > +
> > +       vma = lock_vma_under_rcu(mm, address);
> > +
> > +       if (vma)
> > +               return vma;
> > +
> > +       mmap_read_lock(mm);
> > +       vma = vma_lookup(mm, address);
> > +       if (vma) {
> > +               if (prepare_anon && vma_is_anonymous(vma) &&
> > +                   anon_vma_prepare(vma))
> > +                       vma = ERR_PTR(-ENOMEM);
> > +               else
> > +                       vma_acquire_read_lock(vma);
>
> This new code only calls anon_vma_prepare() for VMAs where
> vma_is_anonymous() is true (meaning they are private anonymous).
>
> [...]
> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index 74aad0831e40..64e22e467e4f 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -19,20 +19,25 @@
> >  #include
> >  #include "internal.h"
> >
> > -static __always_inline
> > -struct vm_area_struct *find_dst_vma(struct mm_struct *dst_mm,
> > -                                   unsigned long dst_start,
> > -                                   unsigned long len)
> > +/* Search for VMA and make sure it is valid. */
> > +static struct vm_area_struct *find_and_lock_dst_vma(struct mm_struct *dst_mm,
> > +                                                    unsigned long dst_start,
> > +                                                    unsigned long len)
> >  {
> > -       /*
> > -        * Make sure that the dst range is both valid and fully within a
> > -        * single existing vma.
> > -        */
> >         struct vm_area_struct *dst_vma;
> >
> > -       dst_vma = find_vma(dst_mm, dst_start);
> > -       if (!range_in_vma(dst_vma, dst_start, dst_start + len))
> > -               return NULL;
> > +       /* Ensure anon_vma is assigned for anonymous vma */
> > +       dst_vma = lock_vma(dst_mm, dst_start, true);
>
> lock_vma() is now used by find_and_lock_dst_vma(), which is used by
> mfill_atomic().
>
> > +       if (!dst_vma)
> > +               return ERR_PTR(-ENOENT);
> > +
> > +       if (PTR_ERR(dst_vma) == -ENOMEM)
> > +               return dst_vma;
> > +
> > +       /* Make sure that the dst range is fully within dst_vma. */
> > +       if (dst_start + len > dst_vma->vm_end)
> > +               goto out_unlock;
> >
> >         /*
> >          * Check the vma is registered in uffd, this is required to
> [...]
> > @@ -597,7 +599,15 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> >         copied = 0;
> >         folio = NULL;
> >  retry:
> > -       mmap_read_lock(dst_mm);
> > +       /*
> > +        * Make sure the vma is not shared, that the dst range is
> > +        * both valid and fully within a single existing vma.
> > +        */
> > +       dst_vma = find_and_lock_dst_vma(dst_mm, dst_start, len);
> > +       if (IS_ERR(dst_vma)) {
> > +               err = PTR_ERR(dst_vma);
> > +               goto out;
> > +       }
> >
> >         /*
> >          * If memory mappings are changing because of non-cooperative
> > @@ -609,15 +619,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> >         if (atomic_read(&ctx->mmap_changing))
> >                 goto out_unlock;
> >
> > -       /*
> > -        * Make sure the vma is not shared, that the dst range is
> > -        * both valid and fully within a single existing vma.
> > -        */
> > -       err = -ENOENT;
> > -       dst_vma = find_dst_vma(dst_mm, dst_start, len);
> > -       if (!dst_vma)
> > -               goto out_unlock;
> > -
> >         err = -EINVAL;
> >         /*
> >          * shmem_zero_setup is invoked in mmap for MAP_ANONYMOUS|MAP_SHARED but
> > @@ -647,16 +648,6 @@ static __always_inline ssize_t mfill_atomic(struct userfaultfd_ctx *ctx,
> >             uffd_flags_mode_is(flags, MFILL_ATOMIC_CONTINUE))
> >                 goto out_unlock;
> >
> > -       /*
> > -        * Ensure the dst_vma has a anon_vma or this page
> > -        * would get a NULL anon_vma when moved in the
> > -        * dst_vma.
> > -        */
> > -       err = -ENOMEM;
> > -       if (!(dst_vma->vm_flags & VM_SHARED) &&
> > -           unlikely(anon_vma_prepare(dst_vma)))
> > -               goto out_unlock;
>
> But the check mfill_atomic() used to do was different, it checked for VM_SHARED.

Thanks so much for catching this.

>
> Each VMA has one of these three types:
>
> 1. shared (marked by VM_SHARED; does not have an anon_vma)
> 2. private file-backed (needs to have anon_vma when storing PTEs)
> 3. private anonymous (what vma_is_anonymous() detects; needs to have
> anon_vma when storing PTEs)

As in the case of mfill_atomic(), it seems to me that checking for the
VM_SHARED flag will cover both (2) and (3), right?

> This old code would call anon_vma_prepare() for both private VMA types
> (which is correct). The new code only calls anon_vma_prepare() for
> private anonymous VMAs, not for private file-backed ones. I think this
> code will probably crash with a BUG_ON() in __folio_set_anon() if you
> try to use userfaultfd to insert a PTE into a private file-backed VMA
> of a shmem file. (Which you should be able to get by creating a file
> in /dev/shm/ and then mapping that file with mmap(NULL, <size>,
> PROT_READ|PROT_WRITE, MAP_PRIVATE, <fd>, 0).)
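
To make that concrete, a minimal sketch (not a final patch, just the shape of
the change being discussed): the anon_vma preparation in lock_vma() would key
off VM_SHARED, as the old mfill_atomic() check did, instead of
vma_is_anonymous(), so that private file-backed VMAs (case (2) above) are
covered as well as private anonymous ones:

        mmap_read_lock(mm);
        vma = vma_lookup(mm, address);
        if (vma) {
-               if (prepare_anon && vma_is_anonymous(vma) &&
+               if (prepare_anon && !(vma->vm_flags & VM_SHARED) &&
                    anon_vma_prepare(vma))
                        vma = ERR_PTR(-ENOMEM);
                else
                        vma_acquire_read_lock(vma);

Testing !(vm_flags & VM_SHARED) keeps shared VMAs (case (1)) out of
anon_vma_prepare() while preparing the anon_vma for both kinds of private VMA
before PTEs can be installed into them.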
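
For anyone who wants to poke at the scenario described above, here is a rough
userspace sketch (untested, error handling mostly omitted, the /dev/shm file
name is arbitrary) of resolving a missing page with UFFDIO_COPY in a
MAP_PRIVATE mapping of a shmem file, i.e. a private file-backed dst VMA:

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* Private (CoW) mapping of a shmem-backed file: case (2) above. */
	int fd = open("/dev/shm/uffd-private-test", O_RDWR | O_CREAT, 0600);
	ftruncate(fd, page);
	char *dst = mmap(NULL, page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE, fd, 0);

	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = { .api = UFFD_API };
	ioctl(uffd, UFFDIO_API, &api);

	struct uffdio_register reg = {
		.range = { .start = (unsigned long)dst, .len = page },
		.mode = UFFDIO_REGISTER_MODE_MISSING,
	};
	ioctl(uffd, UFFDIO_REGISTER, &reg);

	/* Anonymous source page for the copy. */
	char *src = mmap(NULL, page, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	memset(src, 0x41, page);

	/*
	 * Resolve the "missing" page with UFFDIO_COPY; on the private
	 * file-backed dst VMA this takes the anonymous-page path, which
	 * is where a missing anon_vma would be noticed.
	 */
	struct uffdio_copy copy = {
		.dst = (unsigned long)dst,
		.src = (unsigned long)src,
		.len = page,
	};
	if (ioctl(uffd, UFFDIO_COPY, &copy))
		perror("UFFDIO_COPY");
	return 0;
}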