From: Andreas Gruenbacher
Date: Sat, 27 Nov 2021 19:05:39 +0100
Subject: Re: [PATCH 3/3] btrfs: Avoid live-lock in search_ioctl() on hardware with sub-page faults
To: Catalin Marinas
Cc: Matthew Wilcox, Linus Torvalds, Josef Bacik, David Sterba, Al Viro,
    Andrew Morton, Will Deacon, linux-fsdevel, LKML, Linux ARM, linux-btrfs

On Sat, Nov 27, 2021 at 4:21 PM Catalin Marinas wrote:
> On Sat, Nov 27, 2021 at 01:39:58PM +0100, Andreas Gruenbacher wrote:
> > On Sat, Nov 27, 2021 at 4:52 AM Andreas Gruenbacher wrote:
> > > On Sat, Nov 27, 2021 at 12:06 AM Catalin Marinas wrote:
> > > > If we know that the arch copy_to_user() has an error of say maximum 16
> > > > bytes (or 15 rather on arm64), we can instead get fault_in_writeable()
> > > > to probe the first 16 bytes rather than 1.
> > >
> > > That isn't going to help one bit: [raw_]copy_to_user() is allowed to
> > > copy as little or as much as it wants as long as it follows the rules
> > > documented in include/linux/uaccess.h:
> > >
> > > [] If copying succeeds, the return value must be 0.  If some data cannot be
> > > [] fetched, it is permitted to copy less than had been fetched; the only
> > > [] hard requirement is that not storing anything at all (i.e. returning size)
> > > [] should happen only when nothing could be copied.  In other words, you don't
> > > [] have to squeeze as much as possible - it is allowed, but not necessary.
> > >
> > > When fault_in_writeable() tells us that an address range is accessible
> > > in principle, that doesn't mean that copy_to_user() will allow us to
> > > access it in arbitrary chunks.  It's also not the case that
> > > fault_in_writeable(addr, size) is always followed by
> > > copy_to_user(addr, ..., size) for the exact same address range, not
> > > even in this case.
> > >
> > > These alignment restrictions have nothing to do with page or sub-page
> > > faults.
> > >
> > > I'm also fairly sure that passing in an unaligned buffer will send
> > > search_ioctl into an endless loop on architectures with copy_to_user()
> > > alignment restrictions; there don't seem to be any buffer alignment
> > > checks.
> >
> > Let me retract that ...
> >
> > The description in include/linux/uaccess.h leaves out permissible
> > reasons for fetching/storing less than requested. Thinking about it, if
> > the address range passed to one of the copy functions includes an
> > address that faults, it kind of makes sense to allow the copy function
> > to stop short instead of copying every last byte right up to the address
> > that fails.
> >
> > If that's the only reason, then it would be great to have that included
> > in the description. And then we can indeed deal with the alignment
> > effects in fault_in_writeable().
>
> Ah, I started replying last night, sent it today without seeing your
> follow-up.
>
> > > > I attempted the above here and works ok:
> > > >
> > > > https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/log/?h=devel/btrfs-live-lock-fix
> > > >
> > > > but too late to post it this evening, I'll do it in the next day or so
> > > > as an alternative to this series.
> >
> > I've taken a quick look. Under the assumption that alignment effects
> > are tied to page / sub-page faults, I think we can really solve this
> > generically as Willy has proposed.
>
> I think Willy's proposal stopped at the page boundary, it should go
> beyond.
>
> > Maybe as shown below; no need for arch-specific code.
> >
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 2c51e9748a6a..a9b3d916b625 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> > @@ -1658,6 +1658,8 @@ static long __get_user_pages_locked(struct mm_struct *mm, unsigned long start,
> >  }
> >  #endif /* !CONFIG_MMU */
> >
> > +#define SUBPAGE_FAULT_SIZE 16
> > +
> >  /**
> >   * fault_in_writeable - fault in userspace address range for writing
> >   * @uaddr: start of address range
> > @@ -1673,8 +1675,19 @@ size_t fault_in_writeable(char __user *uaddr, size_t size)
> >  	if (unlikely(size == 0))
> >  		return 0;
> >  	if (!PAGE_ALIGNED(uaddr)) {
> > +		if (SUBPAGE_FAULT_SIZE &&
> > +		    !IS_ALIGNED((unsigned long)uaddr, SUBPAGE_FAULT_SIZE)) {
> > +			end = PTR_ALIGN(uaddr, SUBPAGE_FAULT_SIZE);
> > +			if (end - uaddr < size) {
> > +				if (unlikely(__put_user(0, uaddr) != 0))
> > +					return size;
> > +				uaddr = end;
> > +				if (unlikely(!end))
> > +					goto out;
> > +			}
> > +		}
> >  		if (unlikely(__put_user(0, uaddr) != 0))
> > -			return size;
> > +			goto out;
> >  		uaddr = (char __user *)PAGE_ALIGN((unsigned long)uaddr);
> >  	}
> >  	end = (char __user *)PAGE_ALIGN((unsigned long)start + size);
>
> That's similar, somehow, to the arch-specific probing in one of my
> patches: [1]. We could do the above if we can guarantee that the maximum
> error margin in copy_to_user() is smaller than SUBPAGE_FAULT_SIZE. For
> arm64 copy_to_user(), it is fine, but for copy_from_user(), if we ever
> need to handle fault_in_readable(), it isn't (on arm64 up to 64 bytes
> even if aligned: reads of large blocks are done in 4 * 16 loads, and if
> one of them fails e.g. because of the 16-byte sub-page fault, no write
> is done, hence such larger than 16 delta).
>
> If you want something in the generic fault_in_writeable(), we probably
> need a loop over UACCESS_MAX_WRITE_ERROR in SUBPAGE_FAULT_SIZE
> increments. But I thought I'd rather keep this in the arch-specific code.

I see, that's even crazier than I'd thought. The looping / probing is
still pretty generic, so I'd still consider putting it in the generic
code. We also still have fault_in_safe_writeable which is more difficult
to fix, and fault_in_readable which we don't want to leave behind broken,
either.
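Roughly, the generic write-side probing could look something like this
(an untested sketch only, not a patch; SUBPAGE_FAULT_SIZE is taken from
the diff above and UACCESS_MAX_WRITE_ERROR is the per-arch maximum
copy_to_user() error margin you mentioned, neither of which exists
generically today):

/*
 * Sketch: probe the first bytes of the buffer, capped at the assumed
 * maximum copy_to_user() error margin, at sub-page granularity.
 * Returns the number of bytes from the first failing probe to the end
 * of the range, mirroring the fault_in_*() convention, or 0 on success.
 */
static size_t fault_in_subpage_writeable(char __user *uaddr, size_t size)
{
	char __user *start = uaddr;
	char __user *end = uaddr + min_t(size_t, size, UACCESS_MAX_WRITE_ERROR);

	while (uaddr < end) {
		if (unlikely(__put_user(0, uaddr) != 0))
			return size - (uaddr - start);
		/* step to the next sub-page fault boundary */
		uaddr = PTR_ALIGN(uaddr + 1, SUBPAGE_FAULT_SIZE);
	}
	return 0;
}

fault_in_writeable() would call something like this for the start of the
range and then fall back to the existing one-byte-per-page probing for
the rest.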
> Of course, the above fault_in_writeable() still needs the btrfs
> search_ioctl() counterpart to change the probing on the actual fault
> address or offset.

Yes, but that change is relatively simple and it eliminates the need
for probing the entire buffer, so it's a good thing.

Maybe you want to add this though:

--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -2202,3 +2202,3 @@ static noinline int search_ioctl(struct inode *inode,
 	unsigned long sk_offset = 0;
-	char __user *fault_in_addr;
+	char __user *fault_in_addr, *end;
@@ -2230,6 +2230,6 @@ static noinline int search_ioctl(struct inode *inode,
 	fault_in_addr = ubuf;
+	end = ubuf + *buf_size;
 	while (1) {
 		ret = -EFAULT;
-		if (fault_in_writeable(fault_in_addr,
-				       *buf_size - (fault_in_addr - ubuf)))
+		if (fault_in_writeable(fault_in_addr, end - fault_in_addr))
 			break;

> In the general case (uaccess error margin larger), I'm not entirely
> convinced we can skip the check if PAGE_ALIGNED(uaddr).

Yes, the loop can span multiple sub-page error domains, at least in the
read case, so it needs to happen even for page-aligned addresses.
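To spell out the read case (again just an illustrative sketch;
UACCESS_MAX_READ_ERROR is an assumed per-arch constant, 64 bytes on
arm64 going by your description of the 4 * 16 load blocks):

/*
 * Sketch: even a page-aligned uaddr can start a 64-byte
 * copy_from_user() block covering four 16-byte tag granules; a fault
 * in any of them can fail the whole block with nothing copied.  So the
 * sub-page probing has to cover the full error margin regardless of
 * page alignment.
 */
static size_t fault_in_subpage_readable(const char __user *uaddr, size_t size)
{
	const char __user *start = uaddr;
	const char __user *end = uaddr + min_t(size_t, size, UACCESS_MAX_READ_ERROR);
	char c;

	while (uaddr < end) {
		if (unlikely(__get_user(c, uaddr) != 0))
			return size - (uaddr - start);
		uaddr = PTR_ALIGN(uaddr + 1, SUBPAGE_FAULT_SIZE);
	}
	return 0;
}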
> I should probably get this logic through CBMC (or TLA+), I can't think
> it through.
>
> Thanks.
>
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/commit/?h=devel/btrfs-live-lock-fix&id=af7e96d9e9537d9f9cc014f388b7b2bb4a5bc343
>
> --
> Catalin

Thanks,
Andreas