From: Andreas Gruenbacher
Date: Thu, 21 Oct 2021 02:46:10 +0200
Subject: Re: [RFC][arm64] possible infinite loop in btrfs search_ioctl()
To: Linus Torvalds
Cc: Catalin Marinas, Al Viro, Christoph Hellwig, "Darrick J. Wong",
    Jan Kara, Matthew Wilcox, cluster-devel, linux-fsdevel,
    Linux Kernel Mailing List, "ocfs2-devel@oss.oracle.com",
    Josef Bacik, Will Deacon
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, Oct 12, 2021 at 1:59 AM Linus Torvalds wrote:
> On Mon, Oct 11, 2021 at 2:08 PM Catalin Marinas wrote:
> >
> > +#ifdef CONFIG_ARM64_MTE
> > +#define FAULT_GRANULE_SIZE (16)
> > +#define FAULT_GRANULE_MASK (~(FAULT_GRANULE_SIZE-1))
> [...]
> > If this looks in the right direction, I'll do some proper patches
> > tomorrow.
>
> Looks fine to me. It's going to be quite expensive and bad for
> caches, though.
>
> That said, fault_in_writable() is _supposed_ to all be for the slow
> path when things go south and the normal path didn't work out, so I
> think it's fine.

Let me get back to this; I'm actually not convinced that we need to
worry about sub-page-size fault granules in fault_in_pages_readable
or fault_in_pages_writeable.

From a filesystem point of view, we can get into trouble when a
user-space read or write triggers a page fault while we're holding
filesystem locks, and that page fault ends up calling back into the
filesystem. To deal with that, we perform those user-space accesses
with page faults disabled. When a page fault would occur, we get back
an error instead, and then we try to fault in the offending pages. If
a page is resident and we still get a fault trying to access it,
faulting in the same page again isn't going to help, and we have a
true error. We're clearly looking at memory at a page granularity;
faults at a sub-page level don't matter at this level of abstraction
(though they do show similar error behavior).

To avoid getting stuck, when it gets a short result or -EFAULT, the
filesystem implements the following backoff strategy: first, it tries
to fault in a number of pages. When the read or write still doesn't
make progress, it scales back and faults in a single page. Finally,
when that still doesn't help, it gives up. This strategy is needed
for actual page faults, but it also handles sub-page faults
appropriately as long as the user-space access functions give
sensible results.

What am I missing?

Thanks,
Andreas

> I do wonder how the sub-page granularity works. Is it sufficient to
> just read from it? Because then a _slightly_ better option might be
> to do one write per page (to catch page table writability) and then
> one read per "granule" (to catch pointer coloring or cache poisoning
> issues)?
>
> That said, since this is all preparatory to us wanting to write to
> it eventually anyway, maybe marking it all dirty in the caches is
> only good.
>
> Linus