Date: Wed, 13 May 2015 09:25:39 +0200
From: Pavel Machek
To: Daniel Phillips
Cc: Howard Chu, Mike Galbraith, Dave Chinner, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, tux3@tux3.org, "Theodore Ts'o", OGAWA Hirofumi
Subject: Re: xfs: does mkfs.xfs require fancy switches to get decent performance? (was Tux3 Report: How fast can we fsync?)
Message-ID: <20150513072539.GC19700@amd>
In-Reply-To: <555140E6.1070409@phunq.net>

On Mon 2015-05-11 16:53:10, Daniel Phillips wrote:
> Hi Pavel,
>
> On 05/11/2015 03:12 PM, Pavel Machek wrote:
> >>> It is a fact of life that when you change one aspect of an intimately
> >>> interconnected system, something else will change as well. You have
> >>> naive/nonexistent free space management now; when you design something
> >>> workable there, it is going to impact everything else you've already
> >>> done. It's an easy bet that the impact will be negative; the only
> >>> question is to what degree.
> >>
> >> You might lose that bet.
> >> For example, suppose we do strictly linear
> >> allocation each delta, and just leave nice big gaps between the deltas
> >> for future expansion. Clearly, we run at similar or identical speed to
> >> the current naive strategy until we must start filling in the gaps, and
> >> at that point our layout is not any worse than XFS's, which started bad
> >> and stayed that way.
> >
> > Umm, are you sure? If "some areas of the disk are faster than others"
> > is still true on today's hard drives, the gaps will decrease
> > performance, as you'll "use up" the fast areas more quickly.
>
> That's why I hedged my claim with "similar or identical". The
> difference in media speed seems to be a relatively small effect.

So why claim that, when you knew it can't be identical? That's rather
confusing, right?

Perhaps you should post more details of how your benchmark is structured
next time, so we can see that you did not make any trivial mistakes...?
Or just clean the code up so that it can get merged, so that we can
benchmark it ourselves...

									Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
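[Editorial note: the zoned-media effect the thread argues about can be made concrete with a toy model. This is a sketch under invented assumptions, not anyone's benchmark: a hypothetical disk whose transfer rate falls off linearly from outer (low-LBA) to inner tracks, with deltas placed linearly either packed or with gaps after each delta. It illustrates both positions at once: gaps do push writes toward slower tracks, but the penalty is modest.]

```python
# Toy model: mean media speed of linearly allocated "deltas",
# packed vs. with gaps left after each delta for future expansion.
# All numbers (disk size, speeds, delta size, gap size) are invented
# for illustration only.

DISK_BLOCKS = 1_000_000

def speed_at(block):
    """Transfer rate falling linearly from 150 MB/s at block 0 to
    75 MB/s at the last block, roughly mimicking outer-vs-inner
    track speed on a hard drive."""
    return 150.0 - 75.0 * (block / DISK_BLOCKS)

def mean_speed(deltas, blocks_per_delta, gap):
    """Average media speed over blocks actually written when each
    delta is laid out linearly, leaving `gap` free blocks after it."""
    total, pos = 0.0, 0
    for _ in range(deltas):
        for b in range(pos, pos + blocks_per_delta):
            total += speed_at(b)
        pos += blocks_per_delta + gap
    return total / (deltas * blocks_per_delta)

packed = mean_speed(deltas=100, blocks_per_delta=1000, gap=0)
gapped = mean_speed(deltas=100, blocks_per_delta=1000, gap=3000)
print(f"packed: {packed:.1f} MB/s, gapped: {gapped:.1f} MB/s")
```

With these made-up parameters the gapped layout averages a few percent slower than the packed one, so "identical" indeed cannot hold, while "similar" is defensible; the real numbers depend entirely on the drive's zone profile and the gap sizing.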