Date: Tue, 12 May 2015 11:03:00 +0200
From: Pavel Machek
To: Daniel Phillips
Cc: "Theodore Ts'o", Howard Chu, Mike Galbraith, Dave Chinner,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    tux3@tux3.org, OGAWA Hirofumi
Subject: Re: xfs: does mkfs.xfs require fancy switches to get decent
    performance? (was Tux3 Report: How fast can we fsync?)
Message-ID: <20150512090300.GA15574@amd>
In-Reply-To: <555166BA.1050606@phunq.net>

On Mon 2015-05-11 19:34:34, Daniel Phillips wrote:
> On 05/11/2015 04:17 PM, Theodore Ts'o wrote:
> > On Tue, May 12, 2015 at 12:12:23AM +0200, Pavel Machek wrote:
> >> Umm, are you sure? If "some areas of disk are faster than others" is
> >> still true on today's hard drives, the gaps will decrease the
> >> performance (as you'll "use up" the fast areas more quickly).
> >
> > It's still true. The difference between O.D. and I.D. (outer diameter
> > vs inner diameter) LBAs is typically a factor of 2. This is why
> > "short-stroking" works as a technique,
>
> That is true, and the effect is not dominant compared to introducing
> a lot of extra seeks.
>
> > and another way that people
> > doing competitive benchmarking can screw up and produce misleading
> > numbers.
>
> If you think we screwed up or produced misleading numbers, could you
> please be up front about it instead of making insinuations and
> continuing your tirade against benchmarking and those who do it?

Aren't you being a little harsh on Ted? He was polite.

> The ram disk removes seek overhead and greatly reduces media transfer
> overhead. This does not change things much: it confirms that Tux3 is
> significantly faster than the others at synchronous loads. This is
> apparently true independently of media type, though to be sure, SSD
> remains to be tested.
>
> The really interesting result is how much difference there is between
> filesystems, even on a ram disk. Is it just CPU, or is it
> synchronization strategy and lock contention? Does our asynchronous
> front/back design actually help a lot, instead of being a disadvantage
> as you predicted?
>
> It is too bad that fs_mark caps the number of tasks at 64, because I
> am sure that some embarrassing behavior would emerge at high task
> counts, as with my tests on spinning disk.

I'd call a system with 65 tasks doing heavy fsync load at the same time
"embarrassingly misconfigured" :-). It is nice if your filesystem can
stay fast in that case, but...
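
On the O.D. vs I.D. point: anyone can check the factor on their own
drive. The sketch below is only an illustration for this thread, not
fs_mark or any published tool: it reads a fixed amount sequentially at
the first and last LBAs of a disk with O_DIRECT and compares bandwidth.
The 1 MiB chunk and 256 MiB zone sizes are arbitrary choices, and it
assumes a whole-disk device node larger than 256 MiB.

/*
 * zonespeed.c -- rough sketch: sequential read bandwidth at the start
 * (O.D.) vs the end (I.D.) of a disk, to see the ~2x LBA speed
 * difference discussed above.  Run against a whole disk, not a
 * partition, or you only measure that partition's zones.
 *
 * Build: cc -O2 -D_FILE_OFFSET_BITS=64 -o zonespeed zonespeed.c
 * Usage: ./zonespeed /dev/sdX   (needs read permission on the device)
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <sys/time.h>
#include <unistd.h>
#include <linux/fs.h>           /* BLKGETSIZE64 */

#define CHUNK   (1 << 20)       /* 1 MiB per read */
#define TOTAL   (256LL << 20)   /* read 256 MiB per zone */

static double read_zone(int fd, off_t start, void *buf)
{
        struct timeval t0, t1;
        off_t off;

        gettimeofday(&t0, NULL);
        for (off = start; off < start + TOTAL; off += CHUNK)
                if (pread(fd, buf, CHUNK, off) != CHUNK) {
                        perror("pread");
                        exit(1);
                }
        gettimeofday(&t1, NULL);
        return (TOTAL / 1048576.0) /
               ((t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6);
}

int main(int argc, char *argv[])
{
        unsigned long long size;
        void *buf;
        int fd;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <blockdev>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDONLY | O_DIRECT);   /* bypass page cache */
        if (fd < 0 || ioctl(fd, BLKGETSIZE64, &size) < 0) {
                perror(argv[1]);
                return 1;
        }
        if (posix_memalign(&buf, 4096, CHUNK)) {   /* O_DIRECT alignment */
                perror("posix_memalign");
                return 1;
        }
        printf("outer zone: %6.1f MB/s\n", read_zone(fd, 0, buf));
        printf("inner zone: %6.1f MB/s\n",
               read_zone(fd, (off_t)(size - TOTAL) & ~(off_t)(CHUNK - 1),
                         buf));
        return 0;
}

On a typical 3.5" drive the outer zone should read roughly twice as
fast as the inner one; much less than that usually means an SSD or a
large cache somewhere in the path.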
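
And since the fs_mark 64-task cap came up: the load being argued about
is easy to sketch directly. Below is a minimal create-and-fsync
microbenchmark, again only an illustration with arbitrary file sizes,
names and counts; the task count comes from the command line, so
nothing stops you from running 65 of them.

/*
 * fsyncbench.c -- minimal sketch of an fs_mark-style synchronous
 * create workload: N forked tasks, each creating small files and
 * calling fsync() on every one.  Not fs_mark itself, just an
 * illustration of the load.
 *
 * Build: cc -O2 -o fsyncbench fsyncbench.c
 * Usage: ./fsyncbench <dir> <ntasks> <files-per-task>
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

static void worker(const char *dir, int id, int nfiles)
{
        char path[4096], buf[4096];
        int i, fd;

        memset(buf, 'x', sizeof(buf));
        for (i = 0; i < nfiles; i++) {
                snprintf(path, sizeof(path), "%s/t%d-f%d", dir, id, i);
                fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
                if (fd < 0) {
                        perror("open");
                        exit(1);
                }
                if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
                        perror("write");
                        exit(1);
                }
                if (fsync(fd) < 0) {    /* the interesting part */
                        perror("fsync");
                        exit(1);
                }
                close(fd);
        }
        exit(0);
}

int main(int argc, char *argv[])
{
        struct timeval t0, t1;
        int ntasks, nfiles, i;
        pid_t pid;
        double secs;

        if (argc != 4) {
                fprintf(stderr, "usage: %s <dir> <ntasks> <files-per-task>\n",
                        argv[0]);
                return 1;
        }
        ntasks = atoi(argv[2]);
        nfiles = atoi(argv[3]);

        gettimeofday(&t0, NULL);
        for (i = 0; i < ntasks; i++) {
                pid = fork();
                if (pid == 0)
                        worker(argv[1], i, nfiles);
                else if (pid < 0) {
                        perror("fork");
                        return 1;
                }
        }
        while (wait(NULL) > 0)          /* reap all children */
                ;
        gettimeofday(&t1, NULL);

        secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("%d tasks x %d files: %.2f s, %.0f creates/s\n",
               ntasks, nfiles, secs, ntasks * nfiles / secs);
        return 0;
}

Run it as e.g. ./fsyncbench /mnt/test 65 100 and compare filesystems;
absolute numbers depend heavily on the storage, which is exactly why
the ram disk comparison above is the interesting one.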
Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html