From: OGAWA Hirofumi
To: Mike Galbraith
Cc: Daniel Phillips, "Theodore Ts'o"
Subject: Re: Tux3 Report: How fast can we fsync?
Date: Thu, 30 Apr 2015 07:06:42 +0900
Message-ID: <87383ibnd9.fsf@mail.parknet.co.jp>

Daniel Phillips writes:

> On Wednesday, April 29, 2015 9:42:43 AM PDT, Mike Galbraith wrote:
>>
>> [dbench bakeoff]
>>
>> With dbench v4.00, tux3 seems to be king of the max_latency hill, but
>> btrfs took throughput on my box. With v3.04, tux3 took 1st place at
>> splashing about in pagecache, but last place at dbench -S.
>>
>> Hohum, curiosity satisfied.
>
> Thanks for that. Please keep in mind, that was our B team; it does a
> full fs sync for every fsync. Maybe a rematch when the shiny new one
> lands? Also, hardware? It looks like a single 7200 RPM disk, but it
> would be nice to know. And it seems not all dbench 4.0 builds are
> equal. Mine doesn't have a -B option.

Yeah, I also want to know the hardware. Also, what was the partition
size? And was each test run on a fresh FS (i.e. right after mkfs), or
was the same FS reused across all tests?

My "hirofumi" branch in the public repo still has a bug that leaves
empty blocks behind for inodes after repeated create and unlink, and
this bug fragments the FS very quickly. (This is the bug I'm fixing
now.) If the same FS was reused, your test might have hit it.

Thanks.
-- 
OGAWA Hirofumi
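The fresh-FS methodology OGAWA asks about is easy to script. Below is a minimal sketch of such a benchmark loop; the device path, mount point, dbench options, and mkfs invocations are illustrative assumptions, not details from this thread. It only echoes the commands so it is safe to paste; drop the leading `echo` on each line to run for real.

```shell
#!/bin/sh
# Sketch: run dbench against a freshly made filesystem each time, so
# fragmentation left over from an earlier pass cannot skew later results.
# DEV and MNT are placeholders; adjust for your setup. The mkfs flags
# for each filesystem may differ (check your tools before running).
DEV=/dev/sdb1
MNT=/mnt/test

for fs in tux3 btrfs; do
    echo mkfs."$fs" "$DEV"          # fresh filesystem for every run
    echo mount -t "$fs" "$DEV" "$MNT"
    echo dbench -D "$MNT" -t 60 8   # 8 clients for 60 seconds
    echo umount "$MNT"
done
```

Reusing one filesystem across runs is a valid test too, but it measures aging behavior as well as raw speed; the point here is that the two setups answer different questions, so the report should say which one was used.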