From: Christian Stroetmann
Date: Tue, 12 May 2015 19:30:29 +0200
To: Daniel Phillips
CC: David Lang, Pavel Machek, Howard Chu, Mike Galbraith, Dave Chinner,
    linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
    tux3@tux3.org, "Theodore Ts'o", OGAWA Hirofumi
Subject: Re: xfs: does mkfs.xfs require fancy switches to get decent
    performance? (was Tux3 Report: How fast can we fsync?)

On 12.05.2015 06:36, Daniel Phillips wrote:
> Hi David,
>
> On 05/11/2015 05:12 PM, David Lang wrote:
>> On Mon, 11 May 2015, Daniel Phillips wrote:
>>
>>> On 05/11/2015 03:12 PM, Pavel Machek wrote:
>>>>>> It is a fact of life that when you change one aspect of an intimately
>>>>>> interconnected system, something else will change as well. You have
>>>>>> naive/nonexistent free space management now; when you design something
>>>>>> workable there it is going to impact everything else you've already
>>>>>> done. It's an easy bet that the impact will be negative, the only
>>>>>> question is to what degree.
>>>>> You might lose that bet. For example, suppose we do strictly linear
>>>>> allocation each delta, and just leave nice big gaps between the deltas
>>>>> for future expansion. Clearly, we run at similar or identical speed to
>>>>> the current naive strategy until we must start filling in the gaps, and
>>>>> at that point our layout is not any worse than XFS, which started bad
>>>>> and stayed that way.
>>>> Umm, are you sure? If "some areas of disk are faster than others" is
>>>> still true on today's hard drives, the gaps will decrease the
>>>> performance (as you'll "use up" the fast areas more quickly).
>>> That's why I hedged my claim with "similar or identical". The
>>> difference in media speed seems to be a relatively small effect
>>> compared to extra seeks. It seems that XFS puts big spaces between
>>> new directories, and suffers a lot of extra seeks because of it.
>>> I propose to batch new directories together initially, then change
>>> the allocation goal to a new, relatively empty area if a big batch
>>> of files lands on a directory in a crowded region. The "big" gaps
>>> would be on the order of delta size, so not really very big.
>> This is an interesting idea, but what happens if the files don't arrive
>> as a big batch, but rather trickle in over time (think a logserver that
>> is putting files into a bunch of directories at a fairly modest rate
>> per directory)?
> If files are trickling in then we can afford to spend a lot more time
> finding nice places to tuck them in.
> Log server files are an especially
> irksome problem for a redirect-on-write filesystem because the final
> block tends to be rewritten many times and we must move it to a new
> location each time, so every extent ends up as one block. Oh well. If
> we just make sure to have some free space at the end of the file that
> only that file can use (until everywhere else is full) then the long
> term result will be slightly ravelled blocks that nonetheless tend to
> be on the same track or flash block as their logically contiguous
> neighbours. There will be just zero or one empty data blocks mixed
> into the file tail as we commit the tail block over and over with the
> same allocation goal. Sometimes there will be a block or two of
> metadata as well, which will eventually bake themselves into the
> middle of contiguous data and stop moving around.
>
> Putting this together, we have:
>
>    * At delta flush, break out all the log type files
>    * Dedicate some block groups to append type files
>    * Leave lots of space between files in those block groups
>    * Peek at the last block of the file to set the allocation goal
>
> Something like that. What we don't want is to throw those files into
> the middle of a lot of rewrite-all files, messing up both kinds of file.
> We don't care much about keeping these files near the parent directory
> because one big seek per log file in a grep is acceptable, we just need
> to avoid thousands of big seeks within the file, and not dribble single
> blocks all over the disk.
>
> It would also be nice to merge together extents somehow as the final
> block is rewritten. One idea is to retain the final block dirty until
> the next delta, and write it again into a contiguous position, so the
> final block is always flushed twice. We already have the opportunistic
> merge logic, but the redirty behavior and making sure it only happens
> to log files would be a bit fiddly.
>
> We will also play the incremental defragmentation card at some point,
> but first we should try hard to control fragmentation in the first
> place. Tux3 is well suited to online defragmentation because the delta
> commit model makes it easy to move things around efficiently and safely,
> but it does generate extra IO, so as a basic mechanism it is not ideal.
> When we get to piling on features, that will be high on the list,
> because it is relatively easy, and having that fallback gives a certain
> sense of security.

So we are again at some more features of SASOS4Fun.

Having said this, as an alleged troll expert I can see the agenda and
strategy behind this and related threads, but still no usable code or
file system at all, and hence nothing that even might be ready for
merging, as I understand the statements of the file system gurus.

So it is time for the developer(s) to decide what should eventually be
implemented, respectively manifested, in code, and then show the complete
result, so that others can run the tests and the benchmarks.

Thanks
Best Regards
Do not feed the trolls.
C.S.

>> And when you then decide that you have to move the directory/file info,
>> doesn't that create a potentially large amount of unexpected IO that
>> could end up interfering with what the user is trying to do?
> Right, we don't like that and don't plan to rely on it. What we hope
> for is behavior that, when you slowly stir the pot, tends to improve the
> layout just as often as it degrades it. It may indeed become harder to
> find ideal places to put things as time goes by, but we also gain more
> information to base decisions on.
>
> Regards,
>
> Daniel
> --
> To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/