Date: Sun, 1 Mar 2020 09:41:24 +1100
From: Dave Chinner
To: Kirill Tkhai
Cc: Christoph Hellwig, tytso@mit.edu, viro@zeniv.linux.org.uk,
    adilger.kernel@dilger.ca, snitzer@redhat.com, jack@suse.cz,
    ebiggers@google.com, riteshh@linux.ibm.com, krisman@collabora.com,
    surajjs@amazon.com, dmonakhov@gmail.com, mbobrowski@mbobrowski.org,
    enwlinux@gmail.com, sblbir@amazon.com, khazhy@google.com,
    linux-ext4@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH RFC 5/5] ext4: Add fallocate2() support
Message-ID: <20200229224124.GR10737@dread.disaster.area>
X-Mailing-List: linux-ext4@vger.kernel.org

On Fri, Feb 28, 2020 at 03:41:51PM +0300, Kirill Tkhai wrote:
> On 28.02.2020 00:56, Dave Chinner wrote:
> > On Thu, Feb 27, 2020 at 02:12:53PM +0300, Kirill Tkhai wrote:
> >> On 27.02.2020 10:33, Dave Chinner wrote:
> >>> On Wed, Feb 26, 2020 at 11:05:23PM +0300, Kirill Tkhai wrote:
> >>>> On 26.02.2020 18:55, Christoph Hellwig wrote:
> >>>>> On Wed, Feb 26, 2020 at 04:41:16PM +0300, Kirill Tkhai wrote:
> >>>>>> This adds a support of physical hint for fallocate2() syscall.
> >>>>>> In case of @physical argument is set for ext4_fallocate(),
> >>>>>> we try to allocate blocks only from [@phisical, @physical + len]
> >>>>>> range, while other blocks are not used.
> >>>>>
> >>>>> Sorry, but this is a complete bullshit interface.  Userspace has
> >>>>> absolutely no business even thinking of physical placement.  If you
> >>>>> want to align allocations to physical block granularity boundaries
> >>>>> that is the file systems job, not the applications job.
> >>>>
> >>>> Why?
> >>>> There are two contradictory actions that filesystem can't do at
> >>>> the same time:
> >>>>
> >>>> 1) place files on a distance from each other to minimize number of
> >>>>    extents on possible future growth;
> >>>
> >>> Speculative EOF preallocation at delayed allocation reservation time
> >>> provides this.
> >>>
> >>>> 2) place small files in the same big block of block device.
> >>>
> >>> Delayed allocation during writeback packs files smaller than the
> >>> stripe unit of the filesystem tightly.
> >>>
> >>> So, yes, we do both of these things at the same time in XFS, and
> >>> have for the past 10 years.
> >>>
> >>>> At initial allocation time you never know, which file will stop
> >>>> grow in some future, i.e. which file is suitable for compaction.
> >>>> This knowledge becomes available some time later. Say, if a file
> >>>> has not been changed for a month, it is suitable for compaction
> >>>> with another files like it.
> >>>>
> >>>> If at allocation time you can determine a file, which won't grow
> >>>> in the future, don't be afraid, and just share your algorithm here.
> >>>>
> >>>> In Virtuozzo we tried to compact ext4 with existing kernel interface:
> >>>>
> >>>> https://github.com/dmonakhov/e2fsprogs/blob/e4defrag2/misc/e4defrag2.c
> >>>>
> >>>> But it does not work well in many situations, and the main problem
> >>>> is blocks allocation in desired place is not possible. Block
> >>>> allocator can't behave excellent for everything.
> >>>>
> >>>> If this interface bad, can you suggest another interface to make
> >>>> block allocator to know the behavior expected from him in this
> >>>> specific case?
> >>>
> >>> Write once, long term data:
> >>>
> >>> 	fcntl(fd, F_SET_RW_HINT, RWH_WRITE_LIFE_EXTREME);
> >>>
> >>> That will allow the storage stack to group all data with the
> >>> same hint together, both in software and in hardware.
> >>
> >> This is interesting option, but it only applicable before write is
> >> made. And it's only applicable on your own applications.
> >> My usecase is defragmentation of containers, where any applications
> >> may run. Most of applications never care whether long or short-term
> >> data they write.
> >
> > Why is that a problem? They'll be using the default write hint (i.e.
> > NONE) and so a hint aware allocation policy will be separating that
> > data from all the other data written with specific hints...
> >
> > And you've mentioned that your application has specific *never write
> > again* selection criteria for data it is repacking. And that
> > involves rewriting that data. IOWs, you know exactly what policy
> > you want to apply before you rewrite the data, and so what other
> > applications do is completely irrelevant for your repacker...
>
> It is not a rewriting data, there is moving data to new place with
> EXT4_IOC_MOVE_EXT.

"rewriting" is a technical term for reading data at rest and writing
it again, whether it be to the same location or to some other
location. Changing the physical location of data, by definition,
requires rewriting data.

EXT4_IOC_MOVE_EXT = data rewrite + extent swap to update the metadata
in the original file to point at the new data. Hence it appears to
"move" from the userspace perspective (hence the name) but under the
covers it is rewriting data and fiddling pointers...

> > What the filesystem does with the hint is up to the filesystem
> > and the policies that its developers decide are appropriate. If
> > your filesystem doesn't do what you need, talk to the filesystem
> > developers about implementing the policy you require.
>
> Do XFS kernel defrag interfaces allow to pack some randomly chosen
> small files in 1Mb blocks? Do they allow to pack small 4Kb file into
> free space after a big file like in example:

No. Randomly selecting small holes for small file writes is a
terrible idea from a performance perspective. Hence filling tiny
holes (not randomly!) is often only done for metadata allocation
(e.g. extent map blocks, which are largely random access anyway) or
if there is no other choice for data (e.g. at ENOSPC).

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com