Date: Tue, 22 Dec 2020 11:34:06 -0500
From: "Theodore Y. Ts'o"
To: Matteo Croce
Cc: linux-ext4@vger.kernel.org
Subject: Re: discard and data=writeback

On Tue, Dec 22, 2020 at 03:59:29PM +0100, Matteo Croce wrote:
> 
> I'm issuing sync + sleep(10) after the extraction, so the writes
> should all be flushed.
> Also, I repeated the test three times, with very similar results:

So that means the problem is not due to page cache writeback interfering with the discards.
So it's most likely that the problem is due to how the blocks are allocated and laid out when using data=ordered vs. data=writeback. Some experiments to try next.

After extracting the files with data=ordered and with data=writeback on a freshly formatted file system, use "e2freefrag" to see how the free space is fragmented. This will tell us how the file system is doing from a holistic perspective, in terms of the blocks allocated to the extracted files. (E2freefrag is showing you the blocks *not* allocated, of course, but that's the mirror image of the blocks that *are* allocated, especially if you start from an identical known state; hence the use of a freshly formatted file system. There's a sketch of the commands further down.)

Next, we can see what individual files look like with respect to fragmentation. This can be done by running filefrag on all of the files, e.g.:

    find . -type f -print0 | xargs -0 filefrag

(There's also a one-liner further down that boils this output down to a single comparable number.)

Another way to get similar (although not identical) information is by running "e2fsck -E fragcheck" on the file system. The difference between the two matters mostly on ext3 file systems without extents and flex_bg, since filefrag tries to take into account metadata blocks such as indirect blocks and extent tree blocks, while e2fsck -E fragcheck does not; but either is good enough for getting a gestalt of the files' overall fragmentation --- and note that as long as the average fragment size is at least a megabyte or two, some fragmentation really isn't that much of a problem from a real-world performance perspective.

People can get way too invested in trying to achieve perfection with 100% fragmentation-free files. The problem with pursuing this at the expense of all else is that you can end up making the overall free space fragmentation worse as the file system ages, at which point file system performance really dives through the floor as the file system approaches 100%, or even 80-90%, full, especially on HDDs. For SSDs, fragmentation doesn't matter quite so much, unless the average fragment size is *really* small, and when you are discarding freed blocks.

Even if the files show no substantial difference in fragmentation, and the free space is equally A-OK with respect to fragmentation, the other possibility is that the *layout* of the blocks is such that the order in which they are deleted using rm -rf ends up being less friendly from a discard perspective. This can happen if the directory hierarchy is big enough, and/or the journal size is small enough, that the rm -rf requires multiple journal transactions to complete. That's because with mount -o discard, we do the discards after each transaction commit, and it might be that even though the used blocks are perfectly contiguous, because of the order in which the files end up getting deleted, we end up needing to discard them in smaller chunks. (See the dumpe2fs check further down for a way to test this theory.)

For example, imagine a case where you have a million 4k files, all allocated contiguously. If you get super-unlucky, such that in the first transaction you delete all of the odd-numbered files, and in the second transaction you delete all of the even-numbered files, you might need to do a million 4k discards --- but if all of the deletes fit into a single transaction, you would only need to do a single million-block discard operation.
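To make the e2freefrag comparison concrete, the experiment might look something like the following rough sketch; /dev/sdX1 and the archive path are placeholders for whatever you're actually testing with:

    # repeat once with data=ordered and once with data=writeback
    mkfs.ext4 /dev/sdX1
    mount -o data=writeback /dev/sdX1 /mnt
    tar -C /mnt -xf /path/to/archive.tar
    sync; sleep 10
    e2freefrag /dev/sdX1    # histogram of free-extent sizes

Comparing the two e2freefrag histograms should tell you pretty quickly whether one journaling mode is leaving the free space (and hence the allocated files) more chopped up than the other.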
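And here's the one-liner to reduce the filefrag output to a single number. It leans on filefrag's "...: N extents found" output format, counting fields from the end of each line so that spaces in file names don't throw it off --- untested, but something like this should work:

    find . -type f -print0 | xargs -0 filefrag |
        awk '{ ext += $(NF-2); n++ }
             END { if (n) printf "%d files, %.2f extents/file\n", n, ext / n }'

If the average extents-per-file number comes out essentially the same for data=ordered and data=writeback, you can rule out per-file fragmentation and move on to the layout/deletion-order theory.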
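As for the journal-size theory, you can check how big the journal actually is with dumpe2fs, and try a larger journal at mkfs time to see whether the discard pattern improves. The size below is just an example data point (-J size= is in megabytes):

    dumpe2fs -h /dev/sdX1 | grep -i journal    # journal-related superblock fields
    mkfs.ext4 -J size=1024 /dev/sdX1           # retry with a 1024 MB journal

If a bigger journal lets the whole rm -rf fit in fewer transactions, the freed blocks can be merged into fewer, larger discard ranges.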
Finally, you may want to consider whether mount -o discard really makes sense for you at all. For most SSDs, especially high-end SSDs, it probably doesn't make that much difference. That's because when you overwrite a sector, the SSD knows (or should know; this might not be true of some really cheap, crappy low-end flash devices, but on those devices discard might not be making much of a difference anyway) that the old contents of the sector are no longer needed. Hence an overwrite is effectively an "implied discard". So long as there is a sufficient number of free erase blocks, the SSD might be able to keep up doing the GC for those "implied discards", and so accelerating the process by sending explicit discards after every journal transaction might not be necessary.

Or maybe it's sufficient to run "fstrim" every week at Sunday 3am local time; or maybe fstrim once a night, or fstrim once a month --- your mileage may vary. It's going to vary from SSD to SSD and from workload to workload, but you might find that mount -o discard isn't buying you all that much. If you run a random write workload, and you don't notice any performance degradation, and you don't notice an increase in the SSD's write amplification numbers (if they are provided by your SSD), then you might very well find that it's not worth it to use mount -o discard.

I personally don't bother with mount -o discard, and instead periodically run fstrim, on my personal machines. Part of that is because I'm mostly just reading and replying to emails, building kernels, and editing text files, and that is not nearly as stressful on the FTL as a full-blown random write workload (for example, if you were running a database supporting a transaction processing workload).

Cheers,

					- Ted
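P.S.  If you do go the periodic fstrim route, on most systemd-based distributions util-linux already ships an fstrim.timer/fstrim.service pair that trims all supported mounted file systems weekly, and a plain cron entry works too:

    # systemd: enable the weekly trim timer shipped with util-linux
    systemctl enable --now fstrim.timer

    # or cron: trim all supported mounted file systems Sundays at 3am
    0 3 * * 0   /sbin/fstrim -av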