From: Axel Rasmussen
Date: Wed, 3 Feb 2021 10:20:21 -0800
Subject: Re: [PATCH v3 5/9] userfaultfd: add minor fault registration mode
To: Peter Xu
Cc: Alexander Viro, Alexey Dobriyan, Andrea Arcangeli, Andrew Morton,
    Anshuman Khandual, Catalin Marinas, Chinwen Chang, Huang Ying,
    Ingo Molnar, Jann Horn, Jerome Glisse, Lokesh Gidra,
    "Matthew Wilcox (Oracle)", Michael Ellerman, Michal Koutný,
    Michel Lespinasse, Mike Kravetz, Mike Rapoport, Nicholas Piggin,
    Shaohua Li, Shawn Anastasio, Steven Rostedt, Steven Price,
    Vlastimil Babka, LKML, linux-fsdevel@vger.kernel.org, Linux MM,
    Adam Ruprecht, Cannon Matthews, "Dr . David Alan Gilbert",
    David Rientjes, Oliver Upton
In-Reply-To: <20210202171515.GF6468@xz-x1>
References: <20210128224819.2651899-1-axelrasmussen@google.com>
    <20210128224819.2651899-6-axelrasmussen@google.com>
    <20210201183159.GF260413@xz-x1> <20210202171515.GF6468@xz-x1>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Feb 2, 2021 at 9:15 AM Peter Xu wrote:
>
> On Mon, Feb 01, 2021 at 01:31:59PM -0500, Peter Xu wrote:
> > On Thu, Jan 28, 2021 at 02:48:15PM -0800, Axel Rasmussen wrote:
> > > This feature allows userspace to intercept "minor" faults. By "minor"
> > > faults, I mean the following situation:
> > >
> > > Let there exist two mappings (i.e., VMAs) to the same page(s) (shared
> > > memory). One of the mappings is registered with userfaultfd (in minor
> > > mode), and the other is not. Via the non-UFFD mapping, the underlying
> > > pages have already been allocated & filled with some contents. The UFFD
> > > mapping has not yet been faulted in; when it is touched for the first
> > > time, this results in what I'm calling a "minor" fault. As a concrete
> > > example, when working with hugetlbfs, we have huge_pte_none(), but
> > > find_lock_page() finds an existing page.
> > >
> > > This commit adds the new registration mode, and sets the relevant flag
> > > on the VMAs being registered. In the hugetlb fault path, if we find
> > > that we have huge_pte_none(), but find_lock_page() does indeed find an
> > > existing page, then we have a "minor" fault, and if the VMA has the
> > > userfaultfd registration flag, we call into userfaultfd to handle it.
> >
> > On re-reading, I'm now wondering whether we should always restrict the
> > minor fault scenario to shared mappings, assuming there's one mapping with
> > uffd and the other one without, while the non-uffd one can modify the data
> > before an UFFDIO_CONTINUE kicks the uffd process.
> >
> > To me, it's really more about the page cache, and that's all.
> >
> > So I'm wondering whether the below would be simpler and actually clearer
> > for defining minor faults, compared to the above two paragraphs. For
> > example, the semantics do not actually need two mappings:
> >
> > For shared memory, userfaultfd missing faults used to only report the
> > event if the page cache does not exist for the faulting process. Here we
> > define a userfaultfd minor fault as the case where the faulting page does
> > have a backing page cache (so only the pgtable entry is missing).
> >
> > It should not affect most of your code, but only the one below [1].
>
> OK, it could be slightly more than that...
>
> E.g. we'd need to make UFFDIO_COPY not install the write bit if it's
> UFFDIO_CONTINUE and it's a private mapping. In hugetlb_mcopy_atomic_pte()
> we currently apply the write bit unconditionally:
>
>         _dst_pte = make_huge_pte(dst_vma, page, dst_vma->vm_flags & VM_WRITE);
>
> That'll need a touch-up otherwise.
>
> It's just that the change still seems very small, so I'd slightly prefer to
> support it all. However, I don't want to complicate your series or block it,
> so please feel free to still make it shared-memory-only if that's your
> preference. The worst case is that if someone would like to enable this
> later (with a valid user scenario), we'd export a new uffd feature flag.
> >
> > [...]
> >
> > > @@ -1302,9 +1301,26 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
> > >                                      unsigned long vm_flags)
> > >  {
> > >          /* FIXME: add WP support to hugetlbfs and shmem */
> > > -        return vma_is_anonymous(vma) ||
> > > -                ((is_vm_hugetlb_page(vma) || vma_is_shmem(vma)) &&
> > > -                 !(vm_flags & VM_UFFD_WP));
> > > +        if (vm_flags & VM_UFFD_WP) {
> > > +                if (is_vm_hugetlb_page(vma) || vma_is_shmem(vma))
> > > +                        return false;
> > > +        }
> > > +
> > > +        if (vm_flags & VM_UFFD_MINOR) {
> > > +                /*
> > > +                 * The use case for minor registration (intercepting minor
> > > +                 * faults) is to handle the case where a page is present, but
> > > +                 * needs to be modified before it can be used. This requires
> > > +                 * two mappings: one with UFFD registration, and one without.
> > > +                 * So, it only makes sense to do this with shared memory.
> > > +                 */
> > > +                /* FIXME: Add minor fault interception for shmem. */
> > > +                if (!(is_vm_hugetlb_page(vma) && (vma->vm_flags & VM_SHARED)))
> > > +                        return false;
> >
> > [1]
> >
> > So here we also restrict the mapping to be shared. My above comment on the
> > commit message is another way of asking whether we could also allow this to
> > happen with non-shared mappings, as long as there's a page cache. If so, we
> > could drop the VM_SHARED check here. It won't affect your existing use
> > case; it just leaves open the possibility that it could also be used on
> > non-shared mappings for some reason in the future.
> >
> > What do you think?

Agreed, I don't see any reason why it can't work. The only requirement for it
to be useful is that the UFFD-registered area needs to be able to "see" writes
from the non-UFFD-registered area. Whether or not the UFFD-registered half is
shared doesn't affect this.

I'll include this change (and the VM_WRITE touch-up described above) in a v4.

> >
> > The rest looks good to me.
> >
> > Thanks,
> >
> > --
> > Peter Xu
>
> --
> Peter Xu
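
[Editor's note] As a footnote to the write-bit discussion above, here is one
way the touch-up in hugetlb_mcopy_atomic_pte() could look. This is only an
illustrative sketch, not the change posted in this thread; is_continue and
vm_shared are assumed local flags derived from whether the operation is an
UFFDIO_CONTINUE and whether dst_vma has VM_SHARED set.

        bool writable;

        /*
         * Sketch (assumed locals: is_continue, vm_shared): when servicing
         * UFFDIO_CONTINUE into a private mapping, map the existing page
         * cache page read-only, so a later write still takes a
         * copy-on-write fault instead of writing through to the shared
         * page cache. Otherwise keep honoring the VMA's VM_WRITE bit,
         * as before.
         */
        if (is_continue && !vm_shared)
                writable = false;
        else
                writable = dst_vma->vm_flags & VM_WRITE;

        _dst_pte = make_huge_pte(dst_vma, page, writable);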
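
[Editor's note] For context, the userspace flow enabled by the registration
mode described in the quoted commit message looks roughly like the sketch
below. It is not code from this thread: it assumes the
UFFDIO_REGISTER_MODE_MINOR, UFFD_FEATURE_MINOR_HUGETLBFS, and UFFDIO_CONTINUE
definitions proposed by the series are available in <linux/userfaultfd.h>, and
hugefd, len, and hpage_size are placeholders supplied by the caller. Error
handling is mostly omitted.

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Map a hugetlbfs file twice and register one mapping for minor faults. */
static int uffd_minor_setup(int hugefd, size_t len,
                            void **primary, void **uffd_area)
{
        struct uffdio_api api = {
                .api = UFFD_API,
                .features = UFFD_FEATURE_MINOR_HUGETLBFS,
        };
        struct uffdio_register reg = { .mode = UFFDIO_REGISTER_MODE_MINOR };
        int uffd;

        uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api))
                return -1;

        /* Two VMAs backed by the same pages: one plain, one UFFD-registered. */
        *primary = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                        hugefd, 0);
        *uffd_area = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                          hugefd, 0);

        reg.range.start = (unsigned long)*uffd_area;
        reg.range.len = len;
        if (ioctl(uffd, UFFDIO_REGISTER, &reg))
                return -1;

        /*
         * Pages are allocated and filled through *primary; the first touch
         * of *uffd_area then raises a minor fault event on 'uffd'.
         */
        return uffd;
}

/*
 * Once the contents behind 'addr' are known to be up to date, resolve the
 * minor fault by installing PTEs for the page cache page that already exists.
 */
static int uffd_resolve_minor(int uffd, unsigned long addr, size_t hpage_size)
{
        struct uffdio_continue cont = {
                .range = {
                        .start = addr & ~(hpage_size - 1),
                        .len = hpage_size,
                },
        };

        return ioctl(uffd, UFFDIO_CONTINUE, &cont);
}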