Date: Wed, 1 May 2024 16:53:10 -0700
From: "Darrick J. Wong"
To: Dave Chinner
Cc: John Garry, hch@lst.de, viro@zeniv.linux.org.uk, brauner@kernel.org,
	jack@suse.cz, chandan.babu@oracle.com, willy@infradead.org,
	axboe@kernel.dk, martin.petersen@oracle.com,
	linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	tytso@mit.edu, jbongio@google.com, ojaswin@linux.ibm.com,
	ritesh.list@gmail.com, mcgrof@kernel.org, p.raghav@samsung.com,
	linux-xfs@vger.kernel.org, catherine.hoang@oracle.com
Subject: Re: [PATCH RFC v3 12/21] xfs: Only free full extents for forcealign
Message-ID: <20240501235310.GP360919@frogsfrogsfrogs>
References: <20240429174746.2132161-1-john.g.garry@oracle.com>
 <20240429174746.2132161-13-john.g.garry@oracle.com>

On Wed, May 01, 2024 at 10:53:28AM +1000, Dave Chinner wrote:
> On Mon, Apr 29, 2024 at 05:47:37PM +0000, John Garry wrote:
> > Like we already do for rtvol, only free full extents for forcealign in
> > xfs_free_file_space().
> >
> > Signed-off-by: John Garry
> > ---
> >  fs/xfs/xfs_bmap_util.c | 7 +++++--
> >  1 file changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
> > index f26d1570b9bd..1dd45dfb2811 100644
> > --- a/fs/xfs/xfs_bmap_util.c
> > +++ b/fs/xfs/xfs_bmap_util.c
> > @@ -847,8 +847,11 @@ xfs_free_file_space(
> >  	startoffset_fsb = XFS_B_TO_FSB(mp, offset);
> >  	endoffset_fsb = XFS_B_TO_FSBT(mp, offset + len);
> >
> > -	/* We can only free complete realtime extents. */
> > -	if (XFS_IS_REALTIME_INODE(ip) && mp->m_sb.sb_rextsize > 1) {
> > +	/* Free only complete extents. */
> > +	if (xfs_inode_has_forcealign(ip) && ip->i_extsize > 1) {
> > +		startoffset_fsb = roundup_64(startoffset_fsb, ip->i_extsize);
> > +		endoffset_fsb = rounddown_64(endoffset_fsb, ip->i_extsize);
> > +	} else if (XFS_IS_REALTIME_INODE(ip) && mp->m_sb.sb_rextsize > 1) {
> >  		startoffset_fsb = xfs_rtb_roundup_rtx(mp, startoffset_fsb);
> >  		endoffset_fsb = xfs_rtb_rounddown_rtx(mp, endoffset_fsb);
> >  	}
>
> When you look at xfs_rtb_roundup_rtx() you'll find it's just a one-line
> wrapper around roundup_64().

I added this a couple of cycles ago to get ready for realtime
modernization.  That will create a bunch *more* churn in my tree just to
convert everything *back*.  Where the hell were you when that was being
reviewed?!!!

NO!  This is pointless busywork!

--D

> So let's get rid of the obfuscation that the one-line RT wrapper
> introduces, and it turns into this:
>
>	rounding = 1;
>	if (xfs_inode_has_forcealign(ip))
>		rounding = ip->i_extsize;
>	else if (XFS_IS_REALTIME_INODE(ip))
>		rounding = mp->m_sb.sb_rextsize;
>
>	if (rounding > 1) {
>		startoffset_fsb = roundup_64(startoffset_fsb, rounding);
>		endoffset_fsb = rounddown_64(endoffset_fsb, rounding);
>	}
>
> What this points out is that the prep steps for fallocate operations
> also need to handle both forced alignment and rtextsize rounding,
> and they do neither right now.  xfs_flush_unmap_range() is the main
> offender here, but xfs_prepare_shift() also needs fixing.
>
> Hence:
>
> static inline xfs_extlen_t
> xfs_extent_alignment(
>	struct xfs_inode	*ip)
> {
>	if (xfs_inode_has_forcealign(ip))
>		return ip->i_extsize;
>	if (XFS_IS_REALTIME_INODE(ip))
>		return ip->i_mount->m_sb.sb_rextsize;
>	return 1;
> }
>
> In xfs_flush_unmap_range():
>
>	/*
>	 * Make sure we extend the flush out to extent alignment
>	 * boundaries so any extent range overlapping the start/end
>	 * of the modification we are about to do is clean and idle.
>	 */
>	rounding = XFS_FSB_TO_B(mp, xfs_extent_alignment(ip));
>	rounding = max(rounding, PAGE_SIZE);
>	...
>
> In xfs_free_file_space():
>
>	/*
>	 * Round the range we are going to free inwards to extent
>	 * alignment boundaries so we don't free blocks outside the
>	 * range requested.
>	 */
>	rounding = xfs_extent_alignment(ip);
>	if (rounding > 1) {
>		startoffset_fsb = roundup_64(startoffset_fsb, rounding);
>		endoffset_fsb = rounddown_64(endoffset_fsb, rounding);
>	}
>
> And in xfs_prepare_shift():
>
>	/*
>	 * Shift operations must stabilize the start block offset boundary
>	 * along with the full range of the operation.  If we don't, a COW
>	 * writeback completion could race with an insert, front merge with
>	 * the start extent (after split) during the shift and corrupt the
>	 * file.  Start with the aligned block just prior to the start to
>	 * stabilize the boundary.
>	 */
>	rounding = XFS_FSB_TO_B(mp, xfs_extent_alignment(ip));
>	offset = round_down(offset, rounding);
>	if (offset)
>		offset -= rounding;
>
> Also, I think that the changes I suggested earlier to
> xfs_is_falloc_aligned() could use this xfs_extent_alignment()
> helper...
>
> Overall this makes the code a whole lot easier to read, and it also
> allows forced alignment to work correctly on RT devices...
>
> -Dave.
> --
> Dave Chinner
> david@fromorbit.com