Date: Mon, 11 May 2015 17:12:39 -0700 (PDT)
From: David Lang
To: Daniel Phillips
cc: Pavel Machek, Howard Chu, Mike Galbraith, Dave Chinner,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    tux3@tux3.org, "Theodore Ts'o", OGAWA Hirofumi
Subject: Re: xfs: does mkfs.xfs require fancy switches to get decent
 performance? (was Tux3 Report: How fast can we fsync?)

On Mon, 11 May 2015, Daniel Phillips wrote:

> On 05/11/2015 03:12 PM, Pavel Machek wrote:
>>>> It is a fact of life that when you change one aspect of an intimately
>>>> interconnected system, something else will change as well. You have
>>>> naive/nonexistent free space management now; when you design something
>>>> workable there, it is going to impact everything else you've already
>>>> done. It's an easy bet that the impact will be negative; the only
>>>> question is to what degree.
>>>
>>> You might lose that bet. For example, suppose we do strictly linear
>>> allocation each delta, and just leave nice big gaps between the deltas
>>> for future expansion. Clearly, we run at similar or identical speed to
>>> the current naive strategy until we must start filling in the gaps, and
>>> at that point our layout is not any worse than XFS, which started bad
>>> and stayed that way.
>>
>> Umm, are you sure? If "some areas of disk are faster than others" is
>> still true on today's hard drives, the gaps will decrease the
>> performance (as you'll "use up" the fast areas more quickly).
>
> That's why I hedged my claim with "similar or identical". The
> difference in media speed seems to be a relatively small effect
> compared to extra seeks. It seems that XFS puts big spaces between
> new directories, and suffers a lot of extra seeks because of it.
> I propose to batch new directories together initially, then change
> the allocation goal to a new, relatively empty area if a big batch
> of files lands on a directory in a crowded region. The "big" gaps
> would be on the order of delta size, so not really very big.

This is an interesting idea, but what happens if the files don't arrive
as a big batch, but rather trickle in over time (think of a logserver
that is putting files into a bunch of directories at a fairly modest
rate per directory)?

And when you then decide that you have to move the directory/file info,
doesn't that create a potentially large amount of unexpected IO that
could end up interfering with what the user is trying to do?
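
To make sure I'm reading the proposal right, here's a rough sketch of
the allocation-goal policy as I understand it. This is purely
illustrative C: none of these names exist in tux3, find_empty_region()
is a stand-in stub, and the thresholds are made-up numbers, not
anything Daniel has specified.

#include <stdint.h>

/* Illustrative only -- all names and numbers here are hypothetical. */
struct alloc_region {
	uint64_t goal;		/* next linear allocation target (block) */
	uint64_t free_blocks;	/* free blocks remaining near the goal */
	uint64_t size;		/* total blocks in this region */
};

#define BATCH_THRESHOLD  1024U	/* "big batch": files into one dir per delta */
#define CROWDED_DIVISOR  10U	/* "crowded": under 1/10 of region still free */

/* Stub standing in for whatever real free-space search would be used. */
static uint64_t find_empty_region(void)
{
	return 0;	/* would return the start of a relatively empty area */
}

static uint64_t pick_alloc_goal(struct alloc_region *r,
				unsigned files_this_delta)
{
	/*
	 * Normal case: keep allocating strictly linearly within the
	 * current region, leaving delta-sized gaps for expansion.
	 */
	if (files_this_delta < BATCH_THRESHOLD ||
	    r->free_blocks > r->size / CROWDED_DIVISOR)
		return r->goal;

	/*
	 * A big batch of files landed on a directory in a crowded
	 * region: retarget allocation at a relatively empty area.
	 */
	r->goal = find_empty_region();
	return r->goal;
}

If that's roughly right, then the trickle-in case above is exactly the
one where files_this_delta never crosses the threshold, which is why
I'm asking about it.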
David Lang