Date: Mon, 11 May 2015 19:34:34 -0700
From: Daniel Phillips
To: "Theodore Ts'o", Pavel Machek, Howard Chu, Mike Galbraith,
    Dave Chinner, linux-kernel@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, tux3@tux3.org, OGAWA Hirofumi
Subject: Re: xfs: does mkfs.xfs require fancy switches to get decent performance? (was Tux3 Report: How fast can we fsync?)
Message-ID: <555166BA.1050606@phunq.net>
In-Reply-To: <20150511231714.GD14088@thunk.org>

On 05/11/2015 04:17 PM, Theodore Ts'o wrote:
> On Tue, May 12, 2015 at 12:12:23AM +0200, Pavel Machek wrote:
>> Umm, are you sure? If "some areas of disk are faster than others" is
>> still true on today's harddrives, the gaps will decrease the
>> performance (as you'll "use up" the fast areas more quickly).
>
> It's still true. The difference between O.D. and I.D. (outer diameter
> vs inner diameter) LBAs is typically a factor of 2. This is why
> "short-stroking" works as a technique,

That is true, but the effect is not dominant compared to introducing a
lot of extra seeks.
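The factor-of-2 gap Ted mentions falls out of the drive geometry: at a fixed RPM, linear velocity under the head (and hence media transfer rate, at roughly constant linear bit density with zoned recording) scales with track radius. A minimal sketch of that arithmetic, with hypothetical track diameters, not measurements from any real drive:

```python
import math

# Why outer tracks are ~2x faster than inner tracks on a spinning disk.
# Assumes constant RPM and roughly constant linear bit density (zoned
# bit recording). The diameters below are illustrative only.

RPM = 7200                 # fixed platter rotation speed
outer_diameter_mm = 90.0   # hypothetical usable O.D.
inner_diameter_mm = 45.0   # hypothetical usable I.D.

def relative_transfer_rate(diameter_mm):
    # Linear velocity under the head = pi * diameter * rotations/sec.
    # With constant bits per mm, transfer rate is proportional to this.
    return math.pi * diameter_mm * (RPM / 60.0)

ratio = (relative_transfer_rate(outer_diameter_mm)
         / relative_transfer_rate(inner_diameter_mm))
print(f"O.D./I.D. sequential throughput ratio: {ratio:.1f}x")
```

Short-stroking is just the degenerate case of this: partition off only the outer tracks so every LBA lands in the fast region and seek distance shrinks too.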
> and another way that people
> doing competitive benchmarking can screw up and produce misleading
> numbers.

If you think we screwed up or produced misleading numbers, could you
please be up front about it instead of making insinuations and
continuing your tirade against benchmarking and those who do it.

> (If you use partitions instead of the whole disk, you have
> to use the same partition in order to make sure you aren't comparing
> apples with oranges.)

You can rest assured I did exactly that. Somebody complained that
things would look much different with seeks factored out, so here are
some new "competitive benchmarks" using fs_mark on a ram disk:

   tasks        1      16      64
   -------------------------------
   ext4:      231    2154    5439
   btrfs:     152     962    2230
   xfs:       268    2729    6466
   tux3:      315    5529   20301

   (Files per second, more is better)

The shell commands are:

   fs_mark -dtest -D5 -N100 -L1 -p5 -r5 -s1048576 -w4096 -n1000 -t1
   fs_mark -dtest -D5 -N100 -L1 -p5 -r5 -s65536 -w4096 -n1000 -t16
   fs_mark -dtest -D5 -N100 -L1 -p5 -r5 -s4096 -w4096 -n1000 -t64

The ram disk removes seek overhead and greatly reduces media transfer
overhead. This does not change things much: it confirms that Tux3 is
significantly faster than the others at synchronous loads. This is
apparently true independently of media type, though to be sure, SSD
remains to be tested.

The really interesting result is how much difference there is between
filesystems, even on a ram disk. Is it just CPU, or is it
synchronization strategy and lock contention? Does our asynchronous
front/back design actually help a lot, instead of being a disadvantage
as you predicted?

It is too bad that fs_mark caps the number of tasks at 64, because I am
sure that some embarrassing behavior would emerge at higher task counts,
as with my tests on spinning disk.

Anyway, everybody but you loves competitive benchmarks, which is why I
post them.
They are not only useful for tracking down performance bugs; as you
point out, they also help us advertise the reasons why Tux3 is
interesting and ought to be merged.

Regards,

Daniel