Date: Wed, 7 Jun 2023 19:50:25 -0400
From: Mike Snitzer
To: Dave Chinner
Cc: Jens Axboe, Christoph Hellwig, Joe Thornber, Stefan Hajnoczi,
    "Michael S. Tsirkin", "Darrick J. Wong", Jason Wang,
    Bart Van Assche, Andreas Dilger, Sarthak Kukreti,
    Theodore Ts'o, Brian Foster, Alasdair Kergon,
    linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
    dm-devel@redhat.com, linux-fsdevel@vger.kernel.org,
    linux-ext4@vger.kernel.org
Subject: Re: [PATCH v7 0/5] Introduce provisioning primitives

On Tue, Jun 06 2023 at 10:01P -0400,
Dave Chinner wrote:

> On Sat, Jun 03, 2023 at 11:57:48AM -0400, Mike Snitzer wrote:
> > On Fri, Jun 02 2023 at  8:52P -0400,
> > Dave Chinner wrote:
> >
> > > Mike, I think you might have misunderstood what I have been
> > > proposing. Possibly unintentionally, I didn't call it
> > > REQ_OP_PROVISION but that's what I intended - the operation does
> > > not contain data at all. It's an operation like REQ_OP_DISCARD
> > > or REQ_OP_WRITE_ZEROES - it contains a range of sectors that
> > > need to be provisioned (or discarded), and nothing else.
> >
> > No, I understood that.
> >
> > > The write IOs themselves are not tagged with anything special
> > > at all.
> >
> > I know, but I've been looking at how to also handle the delalloc
> > usecase (and yes I know you feel it doesn't need handling, the
> > issue is XFS does deal nicely with ensuring it has space when it
> > tracks its allocations on "thick" storage
>
> Oh, no it doesn't. It -works for most cases-, but that does not
> mean it provides any guarantees at all. We can still get ENOSPC for
> user data when delayed allocation reservations "run out".
>
> This may be news to you, but the ephemeral XFS delayed allocation
> space reservation is not accurate. It contains a "fudge factor"
> called "indirect length". This is a "wet finger in the wind"
> estimation of how much new metadata will need to be allocated to
> index the physical allocations when they are made. It assumes large
> data extents are allocated, which is good enough for most cases,
> but it is no guarantee when there are no large data extents
> available to allocate (e.g. near ENOSPC!).
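
(Aside, for anyone following along: the "indirect length" estimate
Dave describes boils down to "how many bmap btree blocks could
indexing this allocation possibly take". The shape of the
calculation is roughly the following -- my simplification of
xfs_bmap_worst_indlen(), not the actual XFS code:)

/*
 * Estimate the number of bmap btree blocks needed to index an
 * allocation of 'len' fsblocks: assume every block could need its
 * own extent record, so the leaf level needs up to
 * ceil(len / maxrecs) blocks, and so on up the tree.  Per-range
 * worst case -- but still only an estimate of what delalloc
 * conversion will eventually need.
 */
static unsigned long long worst_indlen(unsigned long long len,
				       unsigned int maxrecs,
				       unsigned int maxlevels)
{
	unsigned long long rval = 0;
	unsigned int level;

	for (level = 0; level < maxlevels; level++) {
		/* btree blocks needed at this level */
		len = (len + maxrecs - 1) / maxrecs;
		rval += len;
		if (len == 1)
			/* single block here; one more block at each
			 * remaining level up to the root */
			return rval + (maxlevels - level - 1);
	}
	return rval;
}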
> And therein lies the fundamental problem with ephemeral range
> reservations: at the time of reservation, we don't know how many
> individual physical LBA ranges the reserved data range is actually
> going to span.
>
> As a result, XFS delalloc reservations are a "close-but-not-quite"
> reservation backed by a global reserve pool that can be dipped into
> if we run out of delalloc reservation. If the reserve pool is then
> fully depleted before all delalloc conversion completes, we'll
> still give ENOSPC. The pool is sized such that the vast majority of
> workloads will complete delalloc conversion successfully before the
> pool is depleted.
>
> Hence XFS gives everyone the -appearance- that it deals nicely with
> ENOSPC conditions, but it never provides a -guarantee- that any
> accepted write will always succeed without ENOSPC.
>
> IMO, using this "close-but-not-quite" reservation as the basis of
> space requirements for other layers to provide "won't ENOSPC"
> guarantees is fraught with problems. We already know that it is
> insufficient in important corner cases at the filesystem level, and
> we also know that lower layers trying to do ephemeral space
> reservations will have exactly the same problems providing a
> guarantee. These are problems we've been unable to engineer around
> in the past, so it is very unlikely we can engineer around them now
> or in the future.

Thanks for clarifying. I wasn't aware of XFS delalloc's "wet finger
in the air" ;)

So do you think it reasonable to require applications to fallocate
their data files? I'm not sure users know to take that extra step
(rough sketch of what I mean below).

> > -- so adding coordination between XFS
> > and dm-thin layers provides comparable safety.. that safety is an
> > expected norm).
> >
> > But rather than discuss in terms of data vs metadata, the
> > distinction is:
> > 1) LBA range reservation (normal case, your proposal)
> > 2) non-LBA reservation (absolute value, LBA range is known at a
> >    later stage)
> >
> > But I'm clearly going off script for dwelling on wanting to
> > handle both.
>
> Right, because if we do 1) then we don't need 2). :)

Sure.

> > My looking at (ab)using REQ_META being set (use 1) vs not (use 2)
> > was a crude simplification for branching between the 2
> > approaches.
> >
> > And I understand I made you nervous by expanding the scope to a
> > much more muddled/shitty interface. ;)
>
> Nervous? No, I'm simply trying to make sure that everyone is on the
> same page. i.e. that if we water down the guarantee that 1) relies
> on, then it's not actually useful to filesystems at all.

Yeah, makes sense.

> > > Put simply: if we restrict REQ_OP_PROVISION guarantees to just
> > > REQ_META writes (or any other specific type of write operation)
> > > then it's simply not worth pursuing at the filesystem level
> > > because the guarantees we actually need just aren't there and
> > > the complexity of discovering and handling those corner cases
> > > just isn't worth the effort.
> >
> > Here is where I get to say: I think you misunderstood me (but it
> > was my fault for not being absolutely clear: I'm very much on the
> > same page as you and Joe; and your visions need to just be
> > implemented ASAP).
>
> OK, good that we've clarified the misunderstandings on both sides
> quickly :)

Do you think you're OK to scope out, and/or implement, the XFS
changes if you use v7 of this patchset as the starting point? (v8
should just be v7 minus the dm-thin.c and dm-snap.c changes.)
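
Coming back to my fallocate question above: what I mean is having
applications preallocate their data files up front, something like
this (userspace sketch, file name obviously made up; assumes the
filesystem and block stack actually honor the provisioning guarantee
this series is about):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* create the app's data file and reserve 1 GiB up front */
	int fd = open("data.db", O_CREAT | O_RDWR, 0644);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* mode 0: allocate blocks and extend the file to full size;
	 * later overwrites of this range shouldn't see ENOSPC */
	if (fallocate(fd, 0, 0, 1ULL << 30) < 0) {
		perror("fallocate");
		close(fd);
		return 1;
	}

	close(fd);
	return 0;
}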
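
And for the XFS changes: the call into the block layer itself should
be small. With v7 applied, provisioning an LBA range of the
underlying device looks something like this (sketch only; I'm going
from memory on the helper, so double-check the exact name/signature
against the patchset):

#include <linux/blkdev.h>

/*
 * Provision a sector range of the backing device so that future
 * writes to it won't fail with ENOSPC.  Like blkdev_issue_discard(),
 * the request carries a sector range and no data payload.
 */
static int fs_provision_range(struct block_device *bdev,
			      sector_t sector, sector_t nr_sects)
{
	return blkdev_issue_provision(bdev, sector, nr_sects,
				      GFP_KERNEL);
}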
The thinp support in v7 will work well enough to allow XFS to issue
REQ_OP_PROVISION and/or fallocate (via mkfs.xfs) to dm-thin devices.
And Joe and I can make independent progress on the dm-thin.c changes
needed to ensure the REQ_OP_PROVISION guarantee you need.

Thanks,
Mike