Date: Tue, 20 Apr 2021 15:07:58 -0700
In-Reply-To: <20210420220804.486803-1-axelrasmussen@google.com>
Message-Id: <20210420220804.486803-5-axelrasmussen@google.com>
Mime-Version: 1.0
References: <20210420220804.486803-1-axelrasmussen@google.com>
X-Mailer: git-send-email 2.31.1.368.gbe11c130af-goog
Subject: [PATCH v4 04/10] userfaultfd/shmem: support minor fault registration for shmem
From: Axel Rasmussen
To: Alexander Viro, Andrea Arcangeli, Andrew Morton, Hugh Dickins,
	Jerome Glisse, Joe Perches, Lokesh Gidra, Mike Kravetz,
	Mike Rapoport, Peter Xu, Shaohua Li, Shuah Khan,
	Stephen Rothwell, Wang Qing
Cc: linux-api@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
	linux-mm@kvack.org, Axel Rasmussen, Brian Geffon,
	"Dr . David Alan Gilbert", Mina Almasry, Oliver Upton
Content-Type: text/plain; charset="UTF-8"

This patch allows shmem-backed VMAs to be registered for minor faults.
Minor faults are appropriately relayed to userspace in the fault path,
for VMAs with the relevant flag.

This commit doesn't hook up the UFFDIO_CONTINUE ioctl for shmem-backed
minor faults, though, so userspace doesn't yet have a way to resolve
such faults.

Acked-by: Peter Xu
Signed-off-by: Axel Rasmussen
---
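Not part of the patch itself -- a rough userspace sketch of how a
shmem-backed region would be registered in minor-fault mode once this
series is applied. The memfd name and sizes are arbitrary, and the use
of UFFD_FEATURE_MINOR_SHMEM / UFFDIO_REGISTER_MODE_MINOR assumes a
kernel carrying the whole series; resolving the reported faults with
UFFDIO_CONTINUE only becomes possible with a later patch.

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	size_t len = 16 * sysconf(_SC_PAGESIZE);

	/* shmem backing: an anonymous memfd mapped MAP_SHARED. */
	int memfd = memfd_create("uffd-minor-demo", 0);
	if (memfd < 0 || ftruncate(memfd, len))
		return 1;
	char *area = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
			  memfd, 0);
	if (area == MAP_FAILED)
		return 1;

	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	if (uffd < 0)
		return 1;

	/* Handshake: ask for, and confirm, minor-fault support on shmem. */
	struct uffdio_api api = {
		.api = UFFD_API,
		.features = UFFD_FEATURE_MINOR_SHMEM,
	};
	if (ioctl(uffd, UFFDIO_API, &api) ||
	    !(api.features & UFFD_FEATURE_MINOR_SHMEM)) {
		fprintf(stderr, "minor faults on shmem not supported\n");
		return 1;
	}

	/* Register the whole mapping in minor-fault mode. */
	struct uffdio_register reg = {
		.range = { .start = (unsigned long)area, .len = len },
		.mode = UFFDIO_REGISTER_MODE_MINOR,
	};
	if (ioctl(uffd, UFFDIO_REGISTER, &reg)) {
		perror("UFFDIO_REGISTER");
		return 1;
	}

	/*
	 * From here on, a minor fault on 'area' (page present in the page
	 * cache, but no PTE installed yet) is reported as a
	 * UFFD_EVENT_PAGEFAULT message with UFFD_PAGEFAULT_FLAG_MINOR set,
	 * read()able from uffd by a monitor thread.
	 */
	return 0;
}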
 fs/userfaultfd.c                 |  6 +++---
 include/uapi/linux/userfaultfd.h |  7 ++++++-
 mm/memory.c                      |  8 +++++---
 mm/shmem.c                       | 12 +++++++++++-
 4 files changed, 25 insertions(+), 8 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 14f92285d04f..9f3b8684cf3c 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1267,8 +1267,7 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
 	}

 	if (vm_flags & VM_UFFD_MINOR) {
-		/* FIXME: Add minor fault interception for shmem. */
-		if (!is_vm_hugetlb_page(vma))
+		if (!(is_vm_hugetlb_page(vma) || vma_is_shmem(vma)))
 			return false;
 	}

@@ -1941,7 +1940,8 @@ static int userfaultfd_api(struct userfaultfd_ctx *ctx,
 	/* report all available features and ioctls to userland */
 	uffdio_api.features = UFFD_API_FEATURES;
 #ifndef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR
-	uffdio_api.features &= ~UFFD_FEATURE_MINOR_HUGETLBFS;
+	uffdio_api.features &=
+		~(UFFD_FEATURE_MINOR_HUGETLBFS | UFFD_FEATURE_MINOR_SHMEM);
 #endif
 	uffdio_api.ioctls = UFFD_API_IOCTLS;
 	ret = -EFAULT;
diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h
index bafbeb1a2624..159a74e9564f 100644
--- a/include/uapi/linux/userfaultfd.h
+++ b/include/uapi/linux/userfaultfd.h
@@ -31,7 +31,8 @@
 			   UFFD_FEATURE_MISSING_SHMEM |		\
 			   UFFD_FEATURE_SIGBUS |		\
 			   UFFD_FEATURE_THREAD_ID |		\
-			   UFFD_FEATURE_MINOR_HUGETLBFS)
+			   UFFD_FEATURE_MINOR_HUGETLBFS |	\
+			   UFFD_FEATURE_MINOR_SHMEM)
 #define UFFD_API_IOCTLS				\
 	((__u64)1 << _UFFDIO_REGISTER |		\
 	 (__u64)1 << _UFFDIO_UNREGISTER |	\
@@ -185,6 +186,9 @@ struct uffdio_api {
 	 * UFFD_FEATURE_MINOR_HUGETLBFS indicates that minor faults
	 * can be intercepted (via REGISTER_MODE_MINOR) for
	 * hugetlbfs-backed pages.
+	 *
+	 * UFFD_FEATURE_MINOR_SHMEM indicates the same support as
+	 * UFFD_FEATURE_MINOR_HUGETLBFS, but for shmem-backed pages instead.
 	 */
 #define UFFD_FEATURE_PAGEFAULT_FLAG_WP		(1<<0)
 #define UFFD_FEATURE_EVENT_FORK			(1<<1)
@@ -196,6 +200,7 @@ struct uffdio_api {
 #define UFFD_FEATURE_SIGBUS			(1<<7)
 #define UFFD_FEATURE_THREAD_ID			(1<<8)
 #define UFFD_FEATURE_MINOR_HUGETLBFS		(1<<9)
+#define UFFD_FEATURE_MINOR_SHMEM		(1<<10)
 	__u64 features;

 	__u64 ioctls;
diff --git a/mm/memory.c b/mm/memory.c
index 4e358601c5d6..cc71a445c76c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3972,9 +3972,11 @@ static vm_fault_t do_read_fault(struct vm_fault *vmf)
 	 * something).
 	 */
 	if (vma->vm_ops->map_pages && fault_around_bytes >> PAGE_SHIFT > 1) {
-		ret = do_fault_around(vmf);
-		if (ret)
-			return ret;
+		if (likely(!userfaultfd_minor(vmf->vma))) {
+			ret = do_fault_around(vmf);
+			if (ret)
+				return ret;
+		}
 	}

 	ret = __do_fault(vmf);
diff --git a/mm/shmem.c b/mm/shmem.c
index b72c55aa07fc..30c0bb501dc9 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1785,7 +1785,7 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
  * vm. If we swap it in we mark it dirty since we also free the swap
  * entry since a page cannot live in both the swap and page cache.
  *
- * vmf and fault_type are only supplied by shmem_fault:
+ * vma, vmf, and fault_type are only supplied by shmem_fault:
  * otherwise they are NULL.
  */
 static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
@@ -1820,6 +1820,16 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,

 	page = pagecache_get_page(mapping, index,
					FGP_ENTRY | FGP_HEAD | FGP_LOCK, 0);
+
+	if (page && vma && userfaultfd_minor(vma)) {
+		if (!xa_is_value(page)) {
+			unlock_page(page);
+			put_page(page);
+		}
+		*fault_type = handle_userfault(vmf, VM_UFFD_MINOR);
+		return 0;
+	}
+
 	if (xa_is_value(page)) {
 		error = shmem_swapin_page(inode, index, &page,
					  sgp, gfp, vma, fault_type);
-- 
2.31.1.368.gbe11c130af-goog