Date: Thu, 30 Apr 2015 08:59:15 -0700
From: Daniel Phillips
To: "Theodore Ts'o", Martin Steigerwald, Dave Chinner, Mike Galbraith,
 linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 tux3@tux3.org, OGAWA Hirofumi
Subject: Re: xfs: does mkfs.xfs require fancy switches to get decent
 performance? (was Tux3 Report: How fast can we fsync?)
Message-ID: <55425153.3000505@phunq.net>
In-Reply-To: <20150430145710.GE12374@thunk.org>

Hi Ted,

On 04/30/2015 07:57 AM, Theodore Ts'o wrote:
> This is one of the reasons why I find head-to-head "competitions"
> between file systems to be not very helpful for anything other than
> benchmarketing. It's almost certain that the benchmark won't be
> "fair" in some way, and it doesn't really matter whether the person
> doing the benchmark was doing it with malice aforethought, or was just
> incompetent and didn't understand the issues --- or did understand the
> issues and didn't really care, because what they _really_ wanted to do
> was to market their file system.

Your proposition, as I understand it, is that nobody should ever do
benchmarks, because any benchmark must be one of: 1) malicious;
2) incompetent; or 3) careless. In fact, a benchmark may be perfectly
honest, competently done, and informative.

> And even if the benchmark is fair, it might not match up with the end
> user's hardware, or their use case. There will always be some use
> case where file system A is better than file system B, for pretty much
> any file system. Don't get me wrong --- I will do comparisons between
> file systems, but only so I can figure out ways of making _my_ file
> system better. And more often than not, it's comparisons of the same
> file system before and after adding some new feature which is the most
> interesting.

I cordially invite you to replicate our fsync benchmarks, or invent
your own. I am confident that you will find that the numbers are
accurate, that the test cases were well chosen, that the results are
informative, and that there is no sleight of hand.

As for whether or not people should "market" their filesystems, as you
put it, that is easy for you to disparage when you are the incumbent.
If we don't tell people what is great about Tux3, how will they ever
find out? Sure, it might be "advertising", but the important question
is: is it _truthful_ advertising? Surely you remember how Linus got
started... that was really blatant, and I am glad he did it.

>> Those are the allocation groups. I always wondered how it can be
>> beneficial to spread the allocations onto 4 areas of one partition on
>> expensive seek media. Now that makes better sense for me. I always had
>> the gut impression that XFS may not be the fastest in all cases, but it
>> is one of the filesystems with the most consistent performance over
>> time, but never was able to fully explain why that is.
>
> Yep, pretty much all of the traditional update-in-place file systems
> since the BSD FFS have done this, and for the same reason. For COW
> file systems, which are constantly moving data and metadata blocks
> around, they will need different strategies for trying to avoid the
> free space fragmentation problem as the file system ages.

Right, different problems, but I have a pretty good idea how to go
about it now. I made a failed attempt a while back and learned a lot:
my mistake was to try to give every object a fixed home position based
on where it was first written, and the result was worse for both read
and write. Now, the interesting thing is that naive linear allocation
is great for both read and write, so my effort is now directed towards
ways of doing naive linear allocation while choosing carefully which
order we do the allocation in. I will keep you posted on how that
progresses, of course.
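To make that concrete, here is a toy model of "pick the order first,
then allocate linearly". This is an illustration only, not Tux3 code;
the structures and the locality key below are invented for the sketch.
Dirty objects are sorted by a locality key, then given space from a
single forward-moving cursor, so the writes themselves stay strictly
linear:

/* toy-alloc.c: sort dirty objects by a locality key, then allocate
 * their blocks linearly from one forward-moving cursor. */
#include <stdio.h>
#include <stdlib.h>

struct dirty {              /* a dirty object waiting for disk space */
	unsigned objnum;    /* identity, e.g. an inode number */
	unsigned key;       /* locality key, e.g. parent or old position */
	unsigned count;     /* blocks needed */
	unsigned start;     /* filled in by the allocator */
};

static int by_key(const void *a, const void *b)
{
	const struct dirty *x = a, *y = b;
	return (x->key > y->key) - (x->key < y->key);
}

int main(void)
{
	struct dirty queue[] = {
		{ .objnum = 7, .key = 30, .count = 4 },
		{ .objnum = 3, .key = 10, .count = 8 },
		{ .objnum = 9, .key = 20, .count = 2 },
	};
	unsigned i, n = sizeof queue / sizeof *queue, cursor = 1000;

	/* Step 1: choose the order (here, simply sort by locality key) */
	qsort(queue, n, sizeof *queue, by_key);

	/* Step 2: naive linear allocation in that order */
	for (i = 0; i < n; i++) {
		queue[i].start = cursor;
		cursor += queue[i].count;
		printf("object %u -> blocks %u..%u\n", queue[i].objnum,
		       queue[i].start, queue[i].start + queue[i].count - 1);
	}
	return 0;
}

The open question, of course, is what the right ordering key is; that
is the part still being worked out.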
Anyway, how did we get onto allocation? I thought my post was about
fsync, and after all, you are the guest of honor.

Regards,

Daniel
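P.S. For anyone who would like to take up the invitation above and
replicate the fsync numbers, a minimal timing loop along the following
lines is enough to get started. This is a sketch only, not the actual
test harness behind our numbers; adjust the write size, loop count and
target file to taste:

/* fsync-loop.c: write a 4KB block and fsync it, N times, then report
 * the average latency. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
	const char *path = argc > 1 ? argv[1] : "fsync-test.dat";
	int loops = argc > 2 ? atoi(argv[2]) : 1000;
	char buf[4096];
	struct timespec t0, t1;
	double secs;
	int fd, i;

	memset(buf, 'x', sizeof buf);
	fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < loops; i++) {
		if (write(fd, buf, sizeof buf) != sizeof buf) {
			perror("write");
			return 1;
		}
		if (fsync(fd) < 0) {
			perror("fsync");
			return 1;
		}
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%d fsyncs in %.3f seconds (%.3f ms per fsync)\n",
	       loops, secs, secs * 1000 / loops);
	close(fd);
	return 0;
}

Run it on the same kernel and the same storage for each filesystem
under test, and keep in mind that a single stream of small synchronous
writes is only one of many possible fsync workloads.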