Date: Mon, 11 Oct 2021 22:08:14 +0100
From: Catalin Marinas
To: Linus Torvalds
Cc: Al Viro, Andreas Gruenbacher, Christoph Hellwig, "Darrick J. Wong",
	Jan Kara, Matthew Wilcox, cluster-devel, linux-fsdevel,
	Linux Kernel Mailing List, "ocfs2-devel@oss.oracle.com",
	Josef Bacik, Will Deacon
Subject: Re: [RFC][arm64] possible infinite loop in btrfs search_ioctl()

On Mon, Oct 11, 2021 at 12:15:43PM -0700, Linus Torvalds wrote:
> On Mon, Oct 11, 2021 at 10:38 AM Catalin Marinas wrote:
> > I cleaned up this patch [1] but I realised it still doesn't solve it.
> > The arm64 __copy_to_user_inatomic(), while ensuring progress if called
> > in a loop, does not guarantee a precise copy to the fault position.
>
> That should be ok. We've always allowed the user copy to return early
> if it does word copies and hits a page crosser that causes a fault.
>
> Any user then has the choice of:
>
>  - partial copies are bad
>
>  - partial copies are handled and then you retry from the place
>    copy_to_user() failed at
>
> and in that second case, the next time around, you'll get the fault
> immediately (or you'll make some more progress - maybe the user copy
> loop did something different just because the length and/or alignment
> was different).
>
> If you get the fault immediately, that's -EFAULT.
>
> And if you make some more progress, it's again up to the caller to
> rinse and repeat.
>
> End result: user copy functions do not have to report errors exactly.
> It is the caller that has to handle the situation.
>
> Most importantly: "exact error or not" doesn't actually _matter_ to
> the caller. If the caller does the right thing for an exact error, it
> will do the right thing for an inexact one too. See above.

Yes, that's my expectation (though fixed fairly recently in the arm64
user copy routines).
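To make the contract above concrete, here is a rough sketch of a caller
that handles inexact faults (illustrative only, not from any tree;
copy_out_all() is a made-up name). copy_to_user() returns the number of
bytes it could not copy, so the caller restarts from wherever the copy
actually stopped and treats lack of forward progress as the real fault:

/*
 * Illustrative sketch: retry an inexact user copy from the point it
 * stopped, giving up only when no progress is made at all.
 */
static int copy_out_all(char __user *dst, const char *src, size_t len)
{
	while (len) {
		size_t left = copy_to_user(dst, src, len);
		size_t done = len - left;	/* bytes actually copied */

		if (!done)		/* immediate fault: that's -EFAULT */
			return -EFAULT;
		dst += done;		/* retry from where the copy stopped */
		src += done;
		len = left;
	}
	return 0;
}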
> > The copy_to_sk(), after returning an error, starts again from the
> > previous sizeof(sh) boundary rather than from where the
> > __copy_to_user_inatomic() stopped. So it can get stuck attempting to
> > copy the same search header.
>
> That seems to be purely a copy_to_sk() bug.
>
> Or rather, it looks like a bug in the caller. copy_to_sk() itself does
>
>         if (copy_to_user_nofault(ubuf + *sk_offset, &sh, sizeof(sh))) {
>                 ret = 0;
>                 goto out;
>         }
>
> and the comment says
>
>  * 0: all items from this leaf copied, continue with next
>
> but that comment is then obviously not actually true in that it's not
> "continue with next" at all.

The comments were correct before commit a48b73eca4ce ("btrfs: fix
potential deadlock in the search ioctl"), which introduced the
potentially infinite loop. Something like this would make the comments
match (I think):

diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index cc61813213d8..1debf6a124e8 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -2161,7 +2161,7 @@ static noinline int copy_to_sk(struct btrfs_path *path,
 		 * properly this next time through
 		 */
 		if (copy_to_user_nofault(ubuf + *sk_offset, &sh, sizeof(sh))) {
-			ret = 0;
+			ret = -EFAULT;
 			goto out;
 		}
 
@@ -2175,7 +2175,7 @@ static noinline int copy_to_sk(struct btrfs_path *path,
 		 */
 		if (read_extent_buffer_to_user_nofault(leaf, up,
 						       item_off, item_len)) {
-			ret = 0;
+			ret = -EFAULT;
 			*sk_offset -= sizeof(sh);
 			goto out;
 		}
@@ -2260,12 +2260,8 @@ static noinline int search_ioctl(struct inode *inode,
 	key.type = sk->min_type;
 	key.offset = sk->min_offset;
 
-	while (1) {
-		ret = fault_in_pages_writeable(ubuf + sk_offset,
-					       *buf_size - sk_offset);
-		if (ret)
-			break;
-
+	ret = fault_in_pages_writeable(ubuf, *buf_size);
+	while (ret == 0) {
 		ret = btrfs_search_forward(root, &key, path, sk->min_transid);
 		if (ret != 0) {
 			if (ret > 0)
@@ -2275,9 +2271,14 @@ static noinline int search_ioctl(struct inode *inode,
 		ret = copy_to_sk(path, &key, sk, buf_size, ubuf,
 				 &sk_offset, &num_found);
 		btrfs_release_path(path);
-		if (ret)
-			break;
+		/*
+		 * Fault in copy_to_sk(), attempt to bring the page in after
+		 * releasing the locks and retry.
+		 */
+		if (ret == -EFAULT)
+			ret = fault_in_pages_writeable(ubuf + sk_offset,
+					sizeof(struct btrfs_ioctl_search_header));
 
 	}
 	if (ret > 0)
 		ret = 0;
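Stripped of the btrfs specifics, the loop above is the usual
nofault-copy/fault-in/retry shape. A rough sketch, with made-up helper
names (take_locks(), do_nofault_copy() and drop_locks() are stand-ins,
not real interfaces), of why termination hinges on the fault-in step:

/*
 * Illustrative sketch of the pattern in the diff above; the lock and
 * copy helpers are stand-ins, not real kernel interfaces.
 */
static int search_copy_loop(char __user *ubuf, size_t chunk)
{
	int ret;

	for (;;) {
		take_locks();			/* locks held, so the copy... */
		ret = do_nofault_copy(ubuf);	/* ...must not sleep: -EFAULT on fault */
		drop_locks();

		if (ret != -EFAULT)
			return ret;	/* success, or a non-fault error */

		/*
		 * No locks held, safe to sleep: fault the buffer in and
		 * retry. If even this fails, the address is genuinely bad
		 * and we return -EFAULT instead of looping forever.
		 */
		ret = fault_in_pages_writeable(ubuf, chunk);
		if (ret)
			return ret;
	}
}

This only terminates if fault_in_pages_writeable() actually touches the
byte the nofault copy keeps faulting on; sub-page faults break that
assumption, which is what the rest of this mail is about.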
> > An ugly fix is to fall back to byte by byte copying so that we can
> > attempt the actual fault address in fault_in_pages_writeable().
>
> No, changing the user copy machinery is wrong. The caller really has
> to do the right thing with partial results.
>
> And I think we need to make fault_in_pages_writeable() match the
> actual faulting cases - maybe remove the "pages" part of the name?

Ah, good point. Without removing "pages" from the name (too late over
here to grep the kernel), something like below:

diff --git a/arch/arm64/include/asm/page-def.h b/arch/arm64/include/asm/page-def.h
index 2403f7b4cdbf..3768ac4a6610 100644
--- a/arch/arm64/include/asm/page-def.h
+++ b/arch/arm64/include/asm/page-def.h
@@ -15,4 +15,9 @@
 #define PAGE_SIZE		(_AC(1, UL) << PAGE_SHIFT)
 #define PAGE_MASK		(~(PAGE_SIZE-1))
 
+#ifdef CONFIG_ARM64_MTE
+#define FAULT_GRANULE_SIZE	(16)
+#define FAULT_GRANULE_MASK	(~(FAULT_GRANULE_SIZE-1))
+#endif
+
 #endif /* __ASM_PAGE_DEF_H */
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 62db6b0176b9..7aef732e4fa7 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -16,6 +16,11 @@
 #include <linux/hardirq.h> /* for in_interrupt() */
 #include <linux/hugetlb_inline.h>
 
+#ifndef FAULT_GRANULE_SIZE
+#define FAULT_GRANULE_SIZE	PAGE_SIZE
+#define FAULT_GRANULE_MASK	PAGE_MASK
+#endif
+
 struct pagevec;
 
 static inline bool mapping_empty(struct address_space *mapping)
@@ -751,12 +756,12 @@ static inline int fault_in_pages_writeable(char __user *uaddr, size_t size)
 	do {
 		if (unlikely(__put_user(0, uaddr) != 0))
 			return -EFAULT;
-		uaddr += PAGE_SIZE;
+		uaddr += FAULT_GRANULE_SIZE;
 	} while (uaddr <= end);
 
-	/* Check whether the range spilled into the next page. */
-	if (((unsigned long)uaddr & PAGE_MASK) ==
-	    ((unsigned long)end & PAGE_MASK))
+	/* Check whether the range spilled into the next granule. */
+	if (((unsigned long)uaddr & FAULT_GRANULE_MASK) ==
+	    ((unsigned long)end & FAULT_GRANULE_MASK))
 		return __put_user(0, end);
 
 	return 0;

If this looks in the right direction, I'll do some proper patches
tomorrow.

-- 
Catalin