Date: Thu, 21 Oct 2021 11:05:50 +0100
From: Catalin Marinas
To: Andreas Gruenbacher
Cc: Linus Torvalds, Al Viro, Christoph Hellwig, "Darrick J. Wong",
	Jan Kara, Matthew Wilcox, cluster-devel, linux-fsdevel,
	Linux Kernel Mailing List, "ocfs2-devel@oss.oracle.com",
	Josef Bacik, Will Deacon
Subject: Re: [RFC][arm64] possible infinite loop in btrfs search_ioctl()
X-Mailing-List: linux-kernel@vger.kernel.org

On Thu, Oct 21, 2021 at 02:46:10AM +0200, Andreas Gruenbacher wrote:
> On Tue, Oct 12, 2021 at 1:59 AM Linus Torvalds wrote:
> > On Mon, Oct 11, 2021 at 2:08 PM Catalin Marinas wrote:
> > >
> > > +#ifdef CONFIG_ARM64_MTE
> > > +#define FAULT_GRANULE_SIZE (16)
> > > +#define FAULT_GRANULE_MASK (~(FAULT_GRANULE_SIZE-1))
> >
> > [...]
> >
> > > If this looks in the right direction, I'll do some proper patches
> > > tomorrow.
> >
> > Looks fine to me.
> > It's going to be quite expensive and bad for caches, though.
> >
> > That said, fault_in_writable() is _supposed_ to all be for the slow
> > path when things go south and the normal path didn't work out, so I
> > think it's fine.
>
> Let me get back to this; I'm actually not convinced that we need to
> worry about sub-page-size fault granules in fault_in_pages_readable or
> fault_in_pages_writeable.
>
> From a filesystem point of view, we can get into trouble when a
> user-space read or write triggers a page fault while we're holding
> filesystem locks, and that page fault ends up calling back into the
> filesystem. To deal with that, we're performing those user-space
> accesses with page faults disabled.

Yes, this makes sense.

> When a page fault would occur, we get back an error instead, and then
> we try to fault in the offending pages. If a page is resident and we
> still get a fault trying to access it, trying to fault in the same
> page again isn't going to help and we have a true error.

You can't be sure the second fault is a true error. The unlocked
fault_in_*() may race with some LRU scheme making the pte not accessible
or a write-back making it clean/read-only. copy_to_user() with
pagefault_disabled() fails again but that's a benign fault. The
filesystem should re-attempt the fault-in (gup would correct the pte),
disable page faults and copy_to_user(), potentially in an infinite loop.
If you bail out on the second/third uaccess following a fault_in_*()
call, you may get some unexpected errors (though very rare). Maybe the
filesystems avoid this problem somehow but I couldn't figure it out.

> We're clearly looking at memory at a page granularity; faults at a
> sub-page level don't matter at this level of abstraction (but they do
> show similar error behavior). To avoid getting stuck, when it gets a
> short result or -EFAULT, the filesystem implements the following
> backoff strategy: first, it tries to fault in a number of pages. When
> the read or write still doesn't make progress, it scales back and
> faults in a single page. Finally, when that still doesn't help, it
> gives up. This strategy is needed for actual page faults, but it also
> handles sub-page faults appropriately as long as the user-space access
> functions give sensible results.

As I said above, I think with this approach there's a small chance of
incorrectly reporting an error when the fault is recoverable. If you
change it to an infinite loop, you'd run into the sub-page fault
problem.

There are some places with such infinite loops: futex_wake_op(),
search_ioctl() in the btrfs code. I still have to get my head around
generic_perform_write() but I think we get away here because it faults
in the page with a get_user() rather than gup (and copy_from_user() is
guaranteed to make progress if any bytes can still be accessed).

-- 
Catalin