From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg Kroah-Hartman, Andreas Gruenbacher, Anand Jain
Subject: [PATCH 5.15 15/33] gup: Turn fault_in_pages_{readable,writeable} into fault_in_{readable,writeable}
Date: Fri, 29 Apr 2022 12:42:02 +0200
Message-Id: <20220429104052.784871097@linuxfoundation.org>
X-Mailer: git-send-email 2.36.0
In-Reply-To: <20220429104052.345760505@linuxfoundation.org>
References: <20220429104052.345760505@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailing-List: linux-kernel@vger.kernel.org

From: Andreas Gruenbacher

commit bb523b406c849eef8f265a07cd7f320f1f177743 upstream

Turn fault_in_pages_{readable,writeable} into versions that return the
number of bytes not faulted in, similar to copy_to_user, instead of
returning a non-zero value when any of the requested pages couldn't be
faulted in.
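As an illustrative caller-side sketch (not part of the patch; ubuf,
len, and rem are made-up names), the old and new conventions compare
like this:

	size_t rem;

	/* Old interface: a non-zero return means some page in the
	 * range could not be faulted in.
	 */
	if (fault_in_pages_readable(ubuf, len))
		return -EFAULT;

	/* New interface: the return value is the number of bytes not
	 * faulted in, so 0 still means complete success, and a caller
	 * that can cope with partial progress may use the prefix that
	 * did fault in.
	 */
	rem = fault_in_readable(ubuf, len);
	if (rem == len)
		return -EFAULT;	/* nothing could be faulted in */
	len -= rem;		/* the first len bytes are now usable */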
This supports the existing users that require all pages to be faulted
in as well as new users that are happy if any pages can be faulted in.

Rename the functions to fault_in_{readable,writeable} to make sure
this change doesn't silently break things.

Neither of these functions is entirely trivial and it doesn't seem
useful to inline them, so move them to mm/gup.c.

Signed-off-by: Andreas Gruenbacher
Signed-off-by: Anand Jain
Signed-off-by: Greg Kroah-Hartman
---
 arch/powerpc/kernel/kvm.c           |    3 +
 arch/powerpc/kernel/signal_32.c     |    4 +-
 arch/powerpc/kernel/signal_64.c     |    2 -
 arch/x86/kernel/fpu/signal.c        |    7 +--
 drivers/gpu/drm/armada/armada_gem.c |    7 +--
 fs/btrfs/ioctl.c                    |    5 +-
 include/linux/pagemap.h             |   57 +---------------------------
 lib/iov_iter.c                      |   10 ++---
 mm/filemap.c                        |    2 -
 mm/gup.c                            |   72 ++++++++++++++++++++++++++++++++++++
 10 files changed, 93 insertions(+), 76 deletions(-)

--- a/arch/powerpc/kernel/kvm.c
+++ b/arch/powerpc/kernel/kvm.c
@@ -669,7 +669,8 @@ static void __init kvm_use_magic_page(vo
 	on_each_cpu(kvm_map_magic_page, &features, 1);
 
 	/* Quick self-test to see if the mapping works */
-	if (fault_in_pages_readable((const char *)KVM_MAGIC_PAGE, sizeof(u32))) {
+	if (fault_in_readable((const char __user *)KVM_MAGIC_PAGE,
+			      sizeof(u32))) {
 		kvm_patching_worked = false;
 		return;
 	}
--- a/arch/powerpc/kernel/signal_32.c
+++ b/arch/powerpc/kernel/signal_32.c
@@ -1048,7 +1048,7 @@ SYSCALL_DEFINE3(swapcontext, struct ucon
 	if (new_ctx == NULL)
 		return 0;
 	if (!access_ok(new_ctx, ctx_size) ||
-	    fault_in_pages_readable((u8 __user *)new_ctx, ctx_size))
+	    fault_in_readable((char __user *)new_ctx, ctx_size))
 		return -EFAULT;
 
 	/*
@@ -1239,7 +1239,7 @@ SYSCALL_DEFINE3(debug_setcontext, struct
 #endif
 
 	if (!access_ok(ctx, sizeof(*ctx)) ||
-	    fault_in_pages_readable((u8 __user *)ctx, sizeof(*ctx)))
+	    fault_in_readable((char __user *)ctx, sizeof(*ctx)))
 		return -EFAULT;
 
 	/*
--- a/arch/powerpc/kernel/signal_64.c
+++ b/arch/powerpc/kernel/signal_64.c
@@ -688,7 +688,7 @@ SYSCALL_DEFINE3(swapcontext, struct ucon
 	if (new_ctx == NULL)
 		return 0;
 	if (!access_ok(new_ctx, ctx_size) ||
-	    fault_in_pages_readable((u8 __user *)new_ctx, ctx_size))
+	    fault_in_readable((char __user *)new_ctx, ctx_size))
 		return -EFAULT;
 
 	/*
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -205,7 +205,7 @@ retry:
 	fpregs_unlock();
 
 	if (ret) {
-		if (!fault_in_pages_writeable(buf_fx, fpu_user_xstate_size))
+		if (!fault_in_writeable(buf_fx, fpu_user_xstate_size))
 			goto retry;
 		return -EFAULT;
 	}
@@ -278,10 +278,9 @@ retry:
 		if (ret != -EFAULT)
 			return -EINVAL;
 
-		ret = fault_in_pages_readable(buf, size);
-		if (!ret)
+		if (!fault_in_readable(buf, size))
 			goto retry;
-		return ret;
+		return -EFAULT;
 	}
 
 	/*
--- a/drivers/gpu/drm/armada/armada_gem.c
+++ b/drivers/gpu/drm/armada/armada_gem.c
@@ -336,7 +336,7 @@ int armada_gem_pwrite_ioctl(struct drm_d
 	struct drm_armada_gem_pwrite *args = data;
 	struct armada_gem_object *dobj;
 	char __user *ptr;
-	int ret;
+	int ret = 0;
 
 	DRM_DEBUG_DRIVER("handle %u off %u size %u ptr 0x%llx\n",
 		args->handle, args->offset, args->size, args->ptr);
@@ -349,9 +349,8 @@ int armada_gem_pwrite_ioctl(struct drm_d
 	if (!access_ok(ptr, args->size))
 		return -EFAULT;
 
-	ret = fault_in_pages_readable(ptr, args->size);
-	if (ret)
-		return ret;
+	if (fault_in_readable(ptr, args->size))
+		return -EFAULT;
 
 	dobj = armada_gem_object_lookup(file, args->handle);
 	if (dobj == NULL)
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -2258,9 +2258,8 @@ static noinline int search_ioctl(struct
 	key.offset = sk->min_offset;
 
 	while (1) {
-		ret = fault_in_pages_writeable(ubuf + sk_offset,
-					       *buf_size - sk_offset);
-		if (ret)
+		ret = -EFAULT;
+		if (fault_in_writeable(ubuf + sk_offset, *buf_size - sk_offset))
 			break;
 
 		ret = btrfs_search_forward(root, &key, path, sk->min_transid);
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -733,61 +733,10 @@ int wait_on_page_private_2_killable(stru
 extern void add_page_wait_queue(struct page *page, wait_queue_entry_t *waiter);
 
 /*
- * Fault everything in given userspace address range in.
+ * Fault in userspace address range.
  */
-static inline int fault_in_pages_writeable(char __user *uaddr, size_t size)
-{
-	char __user *end = uaddr + size - 1;
-
-	if (unlikely(size == 0))
-		return 0;
-
-	if (unlikely(uaddr > end))
-		return -EFAULT;
-	/*
-	 * Writing zeroes into userspace here is OK, because we know that if
-	 * the zero gets there, we'll be overwriting it.
-	 */
-	do {
-		if (unlikely(__put_user(0, uaddr) != 0))
-			return -EFAULT;
-		uaddr += PAGE_SIZE;
-	} while (uaddr <= end);
-
-	/* Check whether the range spilled into the next page. */
-	if (((unsigned long)uaddr & PAGE_MASK) ==
-			((unsigned long)end & PAGE_MASK))
-		return __put_user(0, end);
-
-	return 0;
-}
-
-static inline int fault_in_pages_readable(const char __user *uaddr, size_t size)
-{
-	volatile char c;
-	const char __user *end = uaddr + size - 1;
-
-	if (unlikely(size == 0))
-		return 0;
-
-	if (unlikely(uaddr > end))
-		return -EFAULT;
-
-	do {
-		if (unlikely(__get_user(c, uaddr) != 0))
-			return -EFAULT;
-		uaddr += PAGE_SIZE;
-	} while (uaddr <= end);
-
-	/* Check whether the range spilled into the next page. */
-	if (((unsigned long)uaddr & PAGE_MASK) ==
-			((unsigned long)end & PAGE_MASK)) {
-		return __get_user(c, end);
-	}
-
-	(void)c;
-	return 0;
-}
+size_t fault_in_writeable(char __user *uaddr, size_t size);
+size_t fault_in_readable(const char __user *uaddr, size_t size);
 
 int add_to_page_cache_locked(struct page *page, struct address_space *mapping,
 				pgoff_t index, gfp_t gfp_mask);
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -191,7 +191,7 @@ static size_t copy_page_to_iter_iovec(st
 	buf = iov->iov_base + skip;
 	copy = min(bytes, iov->iov_len - skip);
 
-	if (IS_ENABLED(CONFIG_HIGHMEM) && !fault_in_pages_writeable(buf, copy)) {
+	if (IS_ENABLED(CONFIG_HIGHMEM) && !fault_in_writeable(buf, copy)) {
 		kaddr = kmap_atomic(page);
 		from = kaddr + offset;
 
@@ -275,7 +275,7 @@ static size_t copy_page_from_iter_iovec(
 	buf = iov->iov_base + skip;
 	copy = min(bytes, iov->iov_len - skip);
 
-	if (IS_ENABLED(CONFIG_HIGHMEM) && !fault_in_pages_readable(buf, copy)) {
+	if (IS_ENABLED(CONFIG_HIGHMEM) && !fault_in_readable(buf, copy)) {
 		kaddr = kmap_atomic(page);
 		to = kaddr + offset;
 
@@ -447,13 +447,11 @@ int iov_iter_fault_in_readable(const str
 		bytes = i->count;
 	for (p = i->iov, skip = i->iov_offset; bytes; p++, skip = 0) {
 		size_t len = min(bytes, p->iov_len - skip);
-		int err;
 
 		if (unlikely(!len))
 			continue;
-		err = fault_in_pages_readable(p->iov_base + skip, len);
-		if (unlikely(err))
-			return err;
+		if (fault_in_readable(p->iov_base + skip, len))
+			return -EFAULT;
 		bytes -= len;
 	}
 }
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -90,7 +90,7 @@
 *      ->lock_page		(filemap_fault, access_process_vm)
 *
 *  ->i_rwsem			(generic_perform_write)
- *    ->mmap_lock		(fault_in_pages_readable->do_page_fault)
+ *    ->mmap_lock		(fault_in_readable->do_page_fault)
 *
 *  bdi->wb.list_lock
 *    sb_lock			(fs/fs-writeback.c)
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1682,6 +1682,78 @@ finish_or_fault:
 #endif /* !CONFIG_MMU */
 
 /**
+ * fault_in_writeable - fault in userspace address range for writing
+ * @uaddr: start of address range
+ * @size: size of address range
+ *
+ * Returns the number of bytes not faulted in (like copy_to_user() and
+ * copy_from_user()).
+ */
+size_t fault_in_writeable(char __user *uaddr, size_t size)
+{
+	char __user *start = uaddr, *end;
+
+	if (unlikely(size == 0))
+		return 0;
+	if (!PAGE_ALIGNED(uaddr)) {
+		if (unlikely(__put_user(0, uaddr) != 0))
+			return size;
+		uaddr = (char __user *)PAGE_ALIGN((unsigned long)uaddr);
+	}
+	end = (char __user *)PAGE_ALIGN((unsigned long)start + size);
+	if (unlikely(end < start))
+		end = NULL;
+	while (uaddr != end) {
+		if (unlikely(__put_user(0, uaddr) != 0))
+			goto out;
+		uaddr += PAGE_SIZE;
+	}
+
+out:
+	if (size > uaddr - start)
+		return size - (uaddr - start);
+	return 0;
+}
+EXPORT_SYMBOL(fault_in_writeable);
+
+/**
+ * fault_in_readable - fault in userspace address range for reading
+ * @uaddr: start of user address range
+ * @size: size of user address range
+ *
+ * Returns the number of bytes not faulted in (like copy_to_user() and
+ * copy_from_user()).
+ */
+size_t fault_in_readable(const char __user *uaddr, size_t size)
+{
+	const char __user *start = uaddr, *end;
+	volatile char c;
+
+	if (unlikely(size == 0))
+		return 0;
+	if (!PAGE_ALIGNED(uaddr)) {
+		if (unlikely(__get_user(c, uaddr) != 0))
+			return size;
+		uaddr = (const char __user *)PAGE_ALIGN((unsigned long)uaddr);
+	}
+	end = (const char __user *)PAGE_ALIGN((unsigned long)start + size);
+	if (unlikely(end < start))
+		end = NULL;
+	while (uaddr != end) {
+		if (unlikely(__get_user(c, uaddr) != 0))
+			goto out;
+		uaddr += PAGE_SIZE;
+	}
+
+out:
+	(void)c;
+	if (size > uaddr - start)
+		return size - (uaddr - start);
+	return 0;
+}
+EXPORT_SYMBOL(fault_in_readable);
+
+/**
  * get_dump_page() - pin user page in memory while writing it to core dump
  * @addr: user address
  *