From: Muhammad Usama Anjum <usama.anjum@collabora.com>
To: Peter Xu, David Hildenbrand, Andrew Morton, Michał Mirosław,
    Andrei Vagin, Danylo Mocherniuk, Paul Gofman, Cyrill Gorcunov
Cc: Alexander Viro, Shuah Khan, Christian Brauner, Yang Shi,
    Vlastimil Babka, Liam R. Howlett, Yun Zhou, Suren Baghdasaryan,
    Alex Sierra, Muhammad Usama Anjum, Matthew Wilcox, Pasha Tatashin,
    Mike Rapoport, Nadav Amit, Axel Rasmussen, Gustavo A. R. Silva,
    Dan Williams, linux-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kselftest@vger.kernel.org, Greg KH, kernel@collabora.com
Subject: [PATCH v7 2/4] userfaultfd: split mwriteprotect_range()
Date: Mon, 9 Jan 2023 11:45:17 +0500
Message-Id: <20230109064519.3555250-3-usama.anjum@collabora.com>
In-Reply-To: <20230109064519.3555250-1-usama.anjum@collabora.com>
References: <20230109064519.3555250-1-usama.anjum@collabora.com>

Split mwriteprotect_range() to create an unlocked version,
__mwriteprotect_range(), which will be used in the next patch to
write-protect a memory area. Also add a helper function,
wp_range_async().

Signed-off-by: Muhammad Usama Anjum <usama.anjum@collabora.com>
---
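Note (illustrative only, not part of the patch): __mwriteprotect_range()
no longer takes mmap_read_lock() itself, so a caller of the new helper
must already hold it. A minimal sketch of a hypothetical caller, assuming
a userfaultfd-registered VMA; example_async_wp() is an invented name:

/* Hypothetical caller, for illustration only: write-protect a range
 * through the userfaultfd context registered on the VMA covering it.
 */
static int example_async_wp(struct mm_struct *mm, unsigned long start,
			    unsigned long len)
{
	struct vm_area_struct *vma;
	int ret = -1;

	/* __mwriteprotect_range() expects mmap_read_lock() to be held. */
	mmap_read_lock(mm);
	vma = find_vma(mm, start);
	if (vma && vma->vm_start <= start)
		/* Returns -1 when no uffd context is registered on @vma. */
		ret = wp_range_async(vma, start, len);
	mmap_read_unlock(mm);

	return ret;
}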
 fs/userfaultfd.c              | 11 ++++++++++
 include/linux/userfaultfd_k.h | 10 +++++++++
 mm/userfaultfd.c              | 40 ++++++++++++++++++++++-------------
 3 files changed, 46 insertions(+), 15 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index be5e10d15058..5ff5ff4314e3 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -1967,6 +1967,17 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg)
 	return ret;
 }
 
+int wp_range_async(struct vm_area_struct *vma, unsigned long start, unsigned long len)
+{
+	struct userfaultfd_ctx *ctx = vma->vm_userfaultfd_ctx.ctx;
+
+	if (!ctx)
+		return -1;
+
+	ctx->async = true;
+	return __mwriteprotect_range(ctx->mm, start, len, true, &ctx->mmap_changing);
+}
+
 static inline unsigned int uffd_ctx_features(__u64 user_features)
 {
 	/*
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index 9df0b9a762cc..fba05a32f8e9 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -73,6 +73,9 @@ extern ssize_t mcopy_continue(struct mm_struct *dst_mm, unsigned long dst_start,
 extern int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 			       unsigned long len, bool enable_wp,
 			       atomic_t *mmap_changing);
+extern int __mwriteprotect_range(struct mm_struct *dst_mm,
+				 unsigned long start, unsigned long len,
+				 bool enable_wp, atomic_t *mmap_changing);
 extern void uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *vma,
 			  unsigned long start, unsigned long len, bool enable_wp);
 
@@ -179,6 +182,8 @@ extern int userfaultfd_unmap_prep(struct mm_struct *mm, unsigned long start,
 				  unsigned long end, struct list_head *uf);
 extern void userfaultfd_unmap_complete(struct mm_struct *mm,
 				       struct list_head *uf);
+extern int wp_range_async(struct vm_area_struct *vma, unsigned long start,
+			  unsigned long len);
 
 #else /* CONFIG_USERFAULTFD */
 
@@ -274,6 +279,11 @@ static inline bool uffd_disable_fault_around(struct vm_area_struct *vma)
 	return false;
 }
 
+static inline int wp_range_async(struct vm_area_struct *vma, unsigned long start, unsigned long len)
+{
+	return -1;
+}
+
 #endif /* CONFIG_USERFAULTFD */
 
 static inline bool pte_marker_entry_uffd_wp(swp_entry_t entry)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 65ad172add27..9d8a43faf764 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -734,25 +734,13 @@ void uffd_wp_range(struct mm_struct *dst_mm, struct vm_area_struct *dst_vma,
 	tlb_finish_mmu(&tlb);
 }
 
-int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
-			unsigned long len, bool enable_wp,
-			atomic_t *mmap_changing)
+int __mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
+			  unsigned long len, bool enable_wp,
+			  atomic_t *mmap_changing)
 {
 	struct vm_area_struct *dst_vma;
 	unsigned long page_mask;
 	int err;
-
-	/*
-	 * Sanitize the command parameters:
-	 */
-	BUG_ON(start & ~PAGE_MASK);
-	BUG_ON(len & ~PAGE_MASK);
-
-	/* Does the address range wrap, or is the span zero-sized? */
-	BUG_ON(start + len <= start);
-
-	mmap_read_lock(dst_mm);
-
 	/*
 	 * If memory mappings are changing because of non-cooperative
 	 * operation (e.g. mremap) running in parallel, bail out and
@@ -783,6 +771,28 @@ int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
 
 	err = 0;
 out_unlock:
+	return err;
+}
+
+int mwriteprotect_range(struct mm_struct *dst_mm, unsigned long start,
+			unsigned long len, bool enable_wp,
+			atomic_t *mmap_changing)
+{
+	int err;
+
+	/*
+	 * Sanitize the command parameters:
+	 */
+	BUG_ON(start & ~PAGE_MASK);
+	BUG_ON(len & ~PAGE_MASK);
+
+	/* Does the address range wrap, or is the span zero-sized? */
+	BUG_ON(start + len <= start);
+
+	mmap_read_lock(dst_mm);
+
+	err = __mwriteprotect_range(dst_mm, start, len, enable_wp, mmap_changing);
+
 	mmap_read_unlock(dst_mm);
 	return err;
 }
-- 
2.30.2
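P.S. A minimal sketch of what the split buys a caller, under assumptions
(batch_wp_ranges() and struct wp_req are invented for illustration, not
part of this series): with the lock moved into the wrapper, several
ranges can be write-protected under one mmap_read_lock(). Note the
BUG_ON() sanitization stayed in mwriteprotect_range(), so a direct
caller must itself pass page-aligned, non-wrapping ranges.

/* Hypothetical batching caller (illustration only): write-protect
 * several ranges under a single mmap_read_lock() instead of re-taking
 * the lock per range, as the pre-split mwriteprotect_range() forced.
 */
struct wp_req {
	unsigned long start;
	unsigned long len;	/* page-aligned, non-wrapping */
};

static int batch_wp_ranges(struct mm_struct *mm, struct wp_req *reqs,
			   int nr, atomic_t *mmap_changing)
{
	int i, err = 0;

	mmap_read_lock(mm);
	for (i = 0; i < nr && !err; i++)
		err = __mwriteprotect_range(mm, reqs[i].start, reqs[i].len,
					    true, mmap_changing);
	mmap_read_unlock(mm);

	return err;
}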