From: Daniel Phillips
Date: Mon, 11 May 2015 16:53:10 -0700
To: Pavel Machek
Cc: Howard Chu, Mike Galbraith, Dave Chinner, linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, tux3@tux3.org, Theodore Ts'o, OGAWA Hirofumi
Subject: Re: xfs: does mkfs.xfs require fancy switches to get decent performance? (was Tux3 Report: How fast can we fsync?)

Hi Pavel,

On 05/11/2015 03:12 PM, Pavel Machek wrote:
>>> It is a fact of life that when you change one aspect of an intimately
>>> interconnected system, something else will change as well. You have
>>> naive/nonexistent free space management now; when you design something
>>> workable there it is going to impact everything else you've already
>>> done. It's an easy bet that the impact will be negative, the only
>>> question is to what degree.
>>
>> You might lose that bet. For example, suppose we do strictly linear
>> allocation each delta, and just leave nice big gaps between the deltas
>> for future expansion. Clearly, we run at similar or identical speed to
>> the current naive strategy until we must start filling in the gaps, and
>> at that point our layout is not any worse than XFS, which started bad
>> and stayed that way.
>
> Umm, are you sure. If "some areas of disk are faster than others" is
> still true on todays harddrives, the gaps will decrease the
> performance (as you'll "use up" the fast areas more quickly).

That's why I hedged my claim with "similar or identical". The difference
in media speed seems to be a relatively small effect compared to extra
seeks. It seems that XFS puts big spaces between new directories, and
suffers a lot of extra seeks because of it. I propose to batch new
directories together initially, then change the allocation goal to a new,
relatively empty area if a big batch of files lands on a directory in a
crowded region. The "big" gaps would be on the order of delta size, so
not really very big.

Anyway, some people seem to have pounced on the words "naive" and
"linear allocation" and jumped to the conclusion that our whole strategy
is naive. Far from it. We don't just throw files randomly at the disk.
We sort and partition files and metadata, and we carefully arrange the
order of our allocation operations so that linear allocation produces a
nice layout for both read and write. This turned out to be so much
better than fiddling with the goal of individual allocations that we
concluded we would get the best results by sticking with linear
allocation and improving our sort step.
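To make the goal-hopping idea concrete, here is a rough sketch. It is
nothing like actual Tux3 code; the names, struct layout and numbers are
all invented for illustration. The policy is simply: keep allocating
linearly at the current goal, and only hop to a relatively empty region
when the region around the goal cannot absorb the incoming batch with a
healthy amount of slack left over.

/* Sketch only, not Tux3 code: invented names and arbitrary numbers. */
#include <stdint.h>
#include <stdio.h>

struct region {
	uint64_t start;		/* first block of the region */
	uint64_t blocks;	/* region size in blocks */
	uint64_t free;		/* free blocks remaining */
};

/*
 * Stay in the current goal region while it still has generous slack for
 * this batch; otherwise hop to the emptiest region and continue linear
 * allocation from there.
 */
static int choose_goal_region(struct region *r, int nr, int cur, uint64_t batch)
{
	int best = cur, i;

	if (r[cur].free >= 4 * batch)	/* arbitrary slack factor */
		return cur;
	for (i = 0; i < nr; i++)
		if (r[i].free > r[best].free)
			best = i;
	return best;
}

int main(void)
{
	struct region map[3] = {
		{ .start = 0,      .blocks = 1 << 20, .free =   1000 },
		{ .start = 1 << 20, .blocks = 1 << 20, .free = 900000 },
		{ .start = 2 << 20, .blocks = 1 << 20, .free = 500000 },
	};
	int goal = choose_goal_region(map, 3, 0, 4096);

	printf("batch goes to region %d (starting at block %llu)\n",
	       goal, (unsigned long long)map[goal].start);
	return 0;
}

The point is just that the whole policy is a few lines on top of plain
linear allocation; the interesting part is deciding what goes into a
batch, not where the batch lands.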
The new plan is to partition updates into batches according to some
affinity metrics, and set the linear allocation goal per batch. So, for
example, big files and append-type files can get special treatment in
separate batches, while files that seem to be related because they have
the same directory parent and are written in the same delta will
continue to be streamed out using "naive" linear allocation, which is
not necessarily as naive as one might think. It will take time and a lot
of performance testing to get this right, but nobody should get the idea
that it is an inherent design limitation. The opposite is true: we have
no restrictions at all on media layout. (A rough sketch of the kind of
affinity bucketing I have in mind is appended at the end of this mail.)

Compared to Ext4, we do need to address the issue that data moves around
when updated. This can cause rapid fragmentation, and Btrfs has shown
issues with it for big, randomly updated files. We want to fix that
without falling back on update-in-place as Btrfs does. Actually, Tux3
already has update-in-place, and unlike Btrfs, we can switch to it for
non-empty files. But we think that perfect data isolation per delta is
something worth fighting for, and we would rather not force users to
fiddle around with mode settings just to make something work as well as
it already does on Ext4. We will tackle this issue by partitioning as
above, and by using a dedicated allocation strategy for such files,
which are easy to detect. Metadata moving around per update does not
seem to be a problem, because it is all single blocks that need very
little slack space to stay close to home.

> Anyway... you have brand new filesystem. Of course it should be
> faster/better/nicer than the existing filesystems. So don't be too
> harsh with XFS people.

They have done a lot of good work, but they still have a long way to go.
I don't see any shame in that.

Regards,

Daniel
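PS: here is the affinity bucketing sketch mentioned above. Again, the
names and thresholds are invented for illustration and are not how Tux3
actually classifies inodes; the real metrics will have to come out of
performance testing.

/* Sketch only: one way to bucket dirty inodes into allocation batches. */
#include <stdint.h>
#include <stdio.h>

enum batch_kind {
	BATCH_BIG,		/* large files get their own batch */
	BATCH_APPEND,		/* append-mostly files get their own batch */
	BATCH_DIR_LOCAL		/* everything else streams out with siblings */
};

struct dirty_inode {
	uint64_t parent;	/* parent directory inode number */
	uint64_t size;		/* current size in bytes */
	uint64_t last_end;	/* end offset of the previous write */
	uint64_t write_pos;	/* offset of the write in this delta */
};

static enum batch_kind classify(const struct dirty_inode *ino)
{
	if (ino->size >= (64ull << 20))		/* arbitrary 64 MB cutoff */
		return BATCH_BIG;
	if (ino->write_pos >= ino->last_end)	/* grows at the tail */
		return BATCH_APPEND;
	return BATCH_DIR_LOCAL;
}

int main(void)
{
	struct dirty_inode ino = { .parent = 2, .size = 1 << 16,
				   .last_end = 1 << 16, .write_pos = 1 << 16 };

	printf("batch kind: %d\n", classify(&ino));	/* 1 == BATCH_APPEND */
	return 0;
}

Each bucket would then get its own linear allocation goal, chosen as in
the earlier sketch, with the BATCH_DIR_LOCAL bucket further sorted by
parent directory so siblings land together.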