I tested the performance of various filesystems with a mozilla build tree
of 295MB, with primarily writing and copying operations. The test
system is Linux 2.6.0-test2, 512MB memory, 11531.85MB partition for
tests. Sync is run a few times throughout the test (for the full script see
the bottom of this email). I ran mkfs on the partition before every test.
Running the tests again tends to produce similar times, about +/- 3
seconds.
The first number is the time, in seconds, to complete the test (lower
is better). The second number is CPU usage percentage (lower is better).
reiser4 171.28s, 30%CPU (1.0000x time; 1.0x CPU)
reiserfs 302.53s, 16%CPU (1.7663x time; 0.53x CPU)
ext3 319.71s, 11%CPU (1.8666x time; 0.36x CPU)
xfs 429.79s, 13%CPU (2.5093x time; 0.43x CPU)
jfs 470.88s, 6%CPU (2.7492x time 0.02x CPU)
What's interesting:
* ext3's syncs tended to take the longest, up to about 10 seconds, except that
* JFS took a whopping 38.18s on its final sync
* xfs used more CPU than ext3 but was slower than ext3
* reiser4 had highest throughput and most CPU usage
* jfs had lowest throughput and least CPU usage
* total performance of course depends on how IO or CPU bound your task is
Individual test times (first number again is time in seconds, second is
CPU usage; the last line for each filesystem is the total again)
reiser4
Copying Tree
33.39,34%
Sync
1.54,0%
recopying tree to mozilla-2
31.09,34%
recopying mozilla-2 to mozilla-3
33.15,33%
sync
2.89,3%
du
2.05,42%
rm -rf mozilla
7.41,52%
tar c mozilla-2
52.25,25%
final sync
6.77,2%
171.28,30%
reiserfs
Copying Tree
39.55,32%
Sync
3.15,1%
recopying tree to mozilla-2
75.15,13%
recopying mozilla-2 to mozilla-3
77.62,13%
sync
3.84,1%
du
2.46,21%
rm -rf mozilla
5.22,58%
tar c mozilla-2
90.83,12%
final sync
4.19,3%
302.53,16%
ext3
Copying Tree
39.42,25%
Sync
9.05,0%
recopying tree to mozilla-2
79.96,9%
recopying mozilla-2 to mozilla-3
98.84,7%
sync
8.15,0%
du
3.31,11%
rm -rf mozilla
3.71,39%
tar c mozilla-2
74.93,13%
final sync
1.67,1%
319.71,11%
xfs
Copying Tree
43.50,32%
Sync
2.08,1%
recopying tree to mozilla-2
102.37,12%
recopying mozilla-2 to mozilla-3
108.00,12%
sync
2.40,2%
du
3.73,32%
rm -rf mozilla
8.75,56%
tar c mozilla-2
157.61,7%
final sync
0.95,1%
429.79,13%
jfs
Copying Tree
48.15,20%
Sync
3.05,1%
recopying tree to mozilla-2
108.39,5%
recopying mozilla-2 to mozilla-3
114.96,5%
sync
3.86,0%
du
2.42,17%
rm -rf mozilla
15.33,7%
tar c mozilla-2
135.86,6%
final sync
38.18,0%
470.88,6%
Here is the benchmark script:
#!/bin/sh
time='time -f%e,%P '
echo "Copying Tree"
$time cp -a /home/test/mozilla /mnt/test
echo "Sync"
$time sync
cd /mnt/test &&
echo "recopying tree to mozilla-2"
$time cp -a mozilla mozilla-2 &&
echo "recopying mozilla-2 to mozilla-3"
$time cp -a mozilla-2 mozilla-3 &&
echo "sync"
$time sync &&
echo "du"
$time du mozilla > /dev/null &&
echo "rm -rf mozilla"
$time rm -rf mozilla
echo "tar c mozilla-2"
$time tar c mozilla-2 > mozilla.tar
echo "final sync"
$time sync
>>>>> "Grant" == Grant Miner <[email protected]> writes:
Grant> I tested the performance of various filesystems with a mozilla
Grant> build tree of 295MB, with primarily writing and copying
Grant> operations.
It'd be interesting to add in some read-only operations (e.g., tar to
/dev/null) because, in general, filesystems trade off expensive writes
vs expensive reads. Especially as the disk gets fuller. (What I mean
is that filesystems that do more work to optimise disk layout will be slower to
write, but should be faster to read. And `easy' optimisations for
disk layout get harder as the disk gets fuller and fragmented).
So the other thing that'd be interesting to test is doing the same
thing after having pre-fragmented the disk in some predictable way.
--
Dr Peter Chubb http://www.gelato.unsw.edu.au peterc AT gelato.unsw.edu.au
You are lost in a maze of BitKeeper repositories, all slightly different.
Grant Miner <[email protected]> wrote:
>
> I tested the performance of various filesystems with a mozilla build tree
> of 295MB, with primarily writing and copying operations. The test
> system is Linux 2.6.0-test2, 512MB memory, 11531.85MB partition for
> tests. Sync is run a few times throughout the test (for full script see
> bottom of this email). I ran mkfs on the partition before every test.
> Running the tests again tends to produce similar times, about +/- 3
> seconds.
>
> The first item number is time, in seconds, to complete the test (lower
> is better). The second number is CPU use percentage (lower is better).
>
> reiser4 171.28s, 30%CPU (1.0000x time; 1.0x CPU)
> reiserfs 302.53s, 16%CPU (1.7663x time; 0.53x CPU)
> ext3 319.71s, 11%CPU (1.8666x time; 0.36x CPU)
> xfs 429.79s, 13%CPU (2.5093x time; 0.43x CPU)
> jfs 470.88s, 6%CPU (2.7492x time 0.02x CPU)
But different filesystems will leave different amounts of dirty, unwritten
data in memory at the end of the test. On your machine, up to 200MB of
dirty data could be sitting there in memory at the end of the timing
interval. You need to decide how to account for that unwritten data in the
measurement. Simply ignoring it as you have done is certainly valid, but
is only realistic in a couple of scenarios:
a) the files are about to be deleted again
b) the application which your benchmark simulates is about to spend more
than 30 seconds not touching the disk.
This discrepancy is especially significant with ext3 which, in ordered data
mode, will commit all that data every five seconds. If the test takes more
than five seconds then ext3 can appear to take a _lot_ longer.
But it is somewhat artificial: that data has to be written out sometime.
Solutions to this inaccuracy are to make the test so long-running (ten
minutes or more) that the difference is minor, or to include the `sync' in
the time measurement.
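The second suggestion can be sketched as a small script; the tiny tree and /tmp paths below are placeholders for demonstration, not Grant's actual setup:

```shell
#!/bin/sh
# Sketch: include the final sync inside the timed interval, so dirty data
# still sitting in the page cache is charged to the test instead of ignored.
SRC=/tmp/fstest-src
DST=/tmp/fstest-dst
rm -rf "$SRC" "$DST"
mkdir -p "$SRC" && echo data > "$SRC/file"   # stand-in for the real test tree

# time the copy *and* the sync as one unit:
time sh -c "cp -a $SRC $DST && sync"
```

With the sync inside the timed command, a filesystem cannot look faster merely by leaving more unwritten data in memory at the end of the run.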
And when benching things, please include ext2. It is the reference
filesystem, as it were. It tends to be the fastest, too.
On Wed, 2003-08-06 at 04:30, Grant Miner wrote:
> I tested the performance of various filesystems with a mozilla build tree
> of 295MB, with primarily writing and copying operations. The test
> system is Linux 2.6.0-test2, 512MB memory, 11531.85MB partition for
> tests. Sync is run a few times throughout the test (for full script see
> bottom of this email). I ran mkfs on the partition before every test.
> Running the tests again tends to produce similar times, about +/- 3
> seconds.
>
> The first item number is time, in seconds, to complete the test (lower
> is better). The second number is CPU use percentage (lower is better).
>
> reiser4 171.28s, 30%CPU (1.0000x time; 1.0x CPU)
> reiserfs 302.53s, 16%CPU (1.7663x time; 0.53x CPU)
> ext3 319.71s, 11%CPU (1.8666x time; 0.36x CPU)
> xfs 429.79s, 13%CPU (2.5093x time; 0.43x CPU)
> jfs 470.88s, 6%CPU (2.7492x time 0.02x CPU)
What about ext2? :-)
I think it could be interesting to compare the overhead of a journaled
fs against a non-one, simply for reference.
Hello!
On Wed, Aug 06, 2003 at 01:47:40PM +1000, Peter Chubb wrote:
> It'd be interesting to add in some read-only operations (e.g., tar to
> /dev/null) because, in general, filesystems trade off expensive writes
If somebody wants to implement this tar test, be aware that GNU tar tries to be extra smart:
it fstat()s the output, and if it's equal to /dev/null, then no files are read at all; only
the directory tree is traversed.
So one really needs to use something like "tar cf - /path | cat >/dev/null" to get
meaningful results.
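The difference can be demonstrated on a small throwaway tree (the /tmp path is an example, not from the original test):

```shell
#!/bin/sh
# GNU tar fstat()s its output; when the output is /dev/null it skips reading
# file contents entirely, so a read benchmark must pipe through another
# process instead.  The tree below is just a demo stand-in.
TREE=/tmp/tartest
rm -rf "$TREE"
mkdir -p "$TREE" && echo data > "$TREE/file"

# misleading: tar may not read any file data at all
time tar cf /dev/null "$TREE"

# meaningful: every file really is read
time sh -c "tar cf - $TREE | cat > /dev/null"
```

The second form forces tar to write a real archive stream, so file data actually passes through the page cache and the disk.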
Bye,
Oleg
On Tue, 05 Aug 2003 21:30:48 -0500, Grant Miner wrote:
> The first item number is time, in seconds, to complete the test (lower
> is better). The second number is CPU use percentage (lower is better).
>
> reiser4 171.28s, 30%CPU (1.0000x time; 1.0x CPU)
> reiserfs 302.53s, 16%CPU (1.7663x time; 0.53x CPU)
> ext3 319.71s, 11%CPU (1.8666x time; 0.36x CPU)
> xfs 429.79s, 13%CPU (2.5093x time; 0.43x CPU)
> jfs 470.88s, 6%CPU (2.7492x time 0.02x CPU)
That should be 0.20x CPU for jfs, right?
-Paul
Paul Dickson wrote:
> On Tue, 05 Aug 2003 21:30:48 -0500, Grant Miner wrote:
>
>
>>The first item number is time, in seconds, to complete the test (lower
>>is better). The second number is CPU use percentage (lower is better).
>>
>>reiser4 171.28s, 30%CPU (1.0000x time; 1.0x CPU)
>>reiserfs 302.53s, 16%CPU (1.7663x time; 0.53x CPU)
>>ext3 319.71s, 11%CPU (1.8666x time; 0.36x CPU)
>>xfs 429.79s, 13%CPU (2.5093x time; 0.43x CPU)
>>jfs 470.88s, 6%CPU (2.7492x time 0.02x CPU)
>
>
> That should be 0.20x CPU for jfs, right?
>
> -Paul
>
>
yes, that's right.
On Tue, 5 Aug 2003, Andrew Morton wrote:
> Solutions to this inaccuracy are to make the test so long-running (ten
> minutes or more) that the difference is minor, or to include the `sync' in
> the time measurement.
And/or reduce RAM at kernel boot, etc. Anyway, I also asked for 'sync'
yesterday and Grant included some, but not after each test.
I ran the results through some scripts to make them more readable.
It indeed has some interesting things ...
reiser4 reiserfs ext3 XFS JFS
copy 33.39,34% 39.55,32% 39.42,25% 43.50,32% 48.15,20%
sync 1.54, 0% 3.15, 1% 9.05, 0% 2.08, 1% 3.05, 1%
recopy1 31.09,34% 75.15,13% 79.96, 9% 102.37,12% 108.39, 5%
recopy2 33.15,33% 77.62,13% 98.84, 7% 108.00,12% 114.96, 5%
sync 2.89, 3% 3.84, 1% 8.15, 0% 2.40, 2% 3.86, 0%
du 2.05,42% 2.46,21% 3.31,11% 3.73,32% 2.42,17%
delete 7.41,52% 5.22,58% 3.71,39% 8.75,56% 15.33, 7%
tar 52.25,25% 90.83,12% 74.93,13% 157.61, 7% 135.86, 6%
sync 6.77, 2% 4.19, 3% 1.67, 1% 0.95, 1% 38.18, 0%
overall 171.28,30% 302.53,16% 319.71,11% 429.79,13% 470.88, 6%
BTW, zsh has a built-in 'time' so measuring a full operation can be
easily done as 'sync; time ( my_test; sync )'
Szaka
Andrew Morton wrote:
>But different filesystems will leave different amounts of dirty, unwritten
>data in memory at the end of the test. On your machine, up to 200MB of
>dirty data could be sitting there in memory at the end of the timing
>interval. You need to decide how to account for that unwritten data in the
>measurement. Simply ignoring it as you have done is certainly valid, but
>is only realistic in a couple of scenarios:
>
unless I misunderstand something, he is running sync and not ignoring that.
I don't think ext2 is a serious option for servers of the sort that
Linux specializes in, which is probably why he didn't measure it.
reiser4 cpu consumption is still dropping rapidly as others and I find
kruft in the code and remove it. Major kruft remains still.
--
Hans
On Wed, 06 Aug 2003 18:06:37 +0400, Hans Reiser <[email protected]> wrote:
> I don't think ext2 is a serious option for servers of the sort that
> Linux specializes in, which is probably why he didn't measure it.
Why?
>
> reiser4 cpu consumption is still dropping rapidly as others and I find
> kruft in the code and remove it. Major kruft remains still.
Cool.
On Wed, Aug 06, 2003 at 06:34:10PM +0200, Diego Calleja García wrote:
> On Wed, 06 Aug 2003 18:06:37 +0400, Hans Reiser <[email protected]> wrote:
>
> > I don't think ext2 is a serious option for servers of the sort that
> > Linux specializes in, which is probably why he didn't measure it.
>
> Why?
Because if you have a power outage, or a crash, you have to run the
filesystem check tools on it or risk damaging it further.
Journaled filesystems have a much smaller chance of having problems after a
crash.
On Wed, 6 Aug 2003 11:04:27 -0700, Mike Fedyk <[email protected]> wrote:
>
> Journaled filesystems have a much smaller chance of having problems after a
> crash.
I've had (several) cases of filesystem corruption in a desktop system with
(several) journaled filesystems on several disks. (They seem pretty stable
these days, though.)
However, I've not had any fs corruption in ext2; ext2 is (from my experience)
rock stable.
Personally, I'd think twice about the really "serious" option for a serious
server.
On Wed, Aug 06, 2003 at 08:45:14PM +0200, Diego Calleja García wrote:
> On Wed, 6 Aug 2003 11:04:27 -0700, Mike Fedyk <[email protected]> wrote:
>
> >
> > Journaled filesystems have a much smaller chance of having problems after a
> > crash.
>
> I've had (several) filesystem corruption in a desktop system with (several)
> journaled filesystems on several disks. (They seem pretty stable these days,
> though)
>
> However I've not had any fs corrution in ext2; ext2 it's (from my experience)
> rock stable.
>
> Personally I'd consider twice the really "serious" option for a serious server.
I've had corruption caused by hardware, and nothing else. I haven't run
into any serious bugs.
But with servers, the larger your filesystem, the longer it will take to
fsck. And that is bad for uptime. Period.
I would be running ext2 also if I wasn't running so many test kernels (and
they do oops on you), and I've been glad that I didn't have to fsck every
time I oopsed (though I do every once in a while, just to make sure).
On Wed, 6 Aug 2003 12:08:50 -0700, Mike Fedyk <[email protected]> wrote:
> But with servers, the larger your filesystem, the longer it will take to
> fsck. And that is bad for uptime. Period.
Sure. But Hans's "don't benchmark ext2 because it's not an option" isn't
a valid statement, at least to me.
I'm not saying ext2 is the best fs on earth, but I *really* think
it's a real option, and as such, it must be benchmarked.
Mike Fedyk <[email protected]> wrote:
>
> On Wed, Aug 06, 2003 at 06:34:10PM +0200, Diego Calleja García wrote:
> > On Wed, 06 Aug 2003 18:06:37 +0400, Hans Reiser <[email protected]> wrote:
> >
> > > I don't think ext2 is a serious option for servers of the sort that
> > > Linux specializes in, which is probably why he didn't measure it.
> >
> > Why?
>
> Because if you have a power outage, or a crash, you have to run the
> filesystem check tools on it or risk damaging it further.
>
> Journaled filesystems have a much smaller chance of having problems after a
> crash.
Journalled filesytems have a runtime cost, and you're paying that all the
time.
If you're going 200 days between crashes on a disk-intensive box then using
a journalling fs to save 30 minutes at reboot time just doesn't stack up:
you've lost much, much more time than that across the 200 days.
It all depends on what the machine is doing and what your max downtime
requirements are.
Hi
This is my first post here, bear with me if I'm not providing enough
detail.
I have found what seems to be a problem with the ieee1394 (Firewire) driver
in 2.6.0-test2. The driver works (for me) only if compiled as a module, not
compiled statically into the kernel. When compiled statically, it aborts
loading complaining that it cannot find the module. This module is of course
not around, since the driver is compiled in.
The problem has been there since I started toying around with 2.5.3X.
My computer is a generic Athlon XP 1800+, with a very generic (read:
cheapo) Firewire controller.
Have fun!
Henrik R?der Clausen
Copenhagen
Hans Reiser wrote:
> reiser4 cpu consumption is still dropping rapidly as others and I find
> kruft in the code and remove it. Major kruft remains still.
If a file system is getting greater throughput, that means the relevant
code is being run more, which means more CPU will be used for the
purpose of setting up DMA, etc. That is, if a FS gets twice the
throughput, it would not be unreasonable to expect it to use 2x the CPU
time.
Furthermore, in order to achieve greater throughput, one has to write
more intelligent code. More intelligent code is probably going to
require more computation time.
That is to say, if your FS is twice as fast, saying it has a problem
purely on the basis that it's using more CPU ignores certain facts and
basic logic.
Now, if you can manage to make it twice as fast while NOT increasing the
CPU usage, well, then that's brilliant, but the fact that ReiserFS uses
more CPU doesn't bother me in the least.
On Wed, Aug 06, 2003 at 07:37:42PM -0400, Timothy Miller wrote:
>
>
> Hans Reiser wrote:
>
> >reiser4 cpu consumption is still dropping rapidly as others and I find
> >kruft in the code and remove it. Major kruft remains still.
> Now, if you can manage to make it twice as fast while NOT increasing the
> CPU usage, well, then that's brilliant, but the fact that ReiserFS uses
> more CPU doesn't bother me in the least.
Basically he's saying it's faster and still not at its peak efficiency yet,
too.
Mike Fedyk wrote:
> On Wed, Aug 06, 2003 at 07:37:42PM -0400, Timothy Miller wrote:
>
>>
>>Hans Reiser wrote:
>>
>>
>>>reiser4 cpu consumption is still dropping rapidly as others and I find
>>>kruft in the code and remove it. Major kruft remains still.
>>
>
>>Now, if you can manage to make it twice as fast while NOT increasing the
>>CPU usage, well, then that's brilliant, but the fact that ReiserFS uses
>>more CPU doesn't bother me in the least.
>
>
> Basically he's saying it's faster and still not at its peak efficiency yet,
> too.
That point was already clear to me. I guess I was rather unclear about
MY point. :) I wasn't talking to Hans so much as anyone who might
worry about CPU usage for a FS.
Diego Calleja García wrote:
> On Wed, 6 Aug 2003 11:04:27 -0700, Mike Fedyk <[email protected]> wrote:
>
>
>>Journaled filesystems have a much smaller chance of having problems
after a
>>crash.
>
> I've had (several) filesystem corruption in a desktop system with
(several)
> journaled filesystems on several disks. (They seem pretty stable these
days,
> though)
well, I only had huge problems one time with a journaling FS; that was
when I thought I could use a ReiserFS beta on a production file server ;)
> However I've not had any fs corrution in ext2; ext2 it's (from my
experience)
> rock stable.
well, have you ever had a check of several hundred gigabytes in ext2 after a
power outage? When you've had this happen several times in a row, you even
take ext3 and give thanks for its existence ...
> Personally I'd consider twice the really "serious" option for a
serious server.
I'd never use ext2 on a server anymore nowadays. You have so many
choices of stable journaling filesystems that you don't have to use ext2
anymore (except perhaps for small partitions like /tmp or /boot ...)
--
Clemens Schwaighofer - IT Engineer & System Administration
==========================================================
Tequila Japan, 6-17-2 Ginza Chuo-ku, Tokyo 104-8167, JAPAN
Tel: +81-(0)3-3545-7703 Fax: +81-(0)3-3545-7343
http://www.tequila.jp
==========================================================
Diego Calleja García wrote:
>On Wed, 06 Aug 2003 18:06:37 +0400, Hans Reiser <[email protected]> wrote:
>
>
>
>>I don't think ext2 is a serious option for servers of the sort that
>>Linux specializes in, which is probably why he didn't measure it.
>>
>>
>
>Why?
>
Run fsck on a 1 terabyte array while a department waits for their server
to come back up instead of having it back in 90 seconds and.....
disk speeds have increased linearly while their capacity has increased
quadratically.
--
Hans
Diego Calleja García wrote:
>On Wed, 6 Aug 2003 12:08:50 -0700, Mike Fedyk <[email protected]> wrote:
>
>
>
>>But with servers, the larger your filesystem, the longer it will take to
>>fsck. And that is bad for uptime. Period.
>>
>>
>
>Sure. But Han's "don't benchmark ext2 because it's not an option" isn't
>a valid stament, at least to me.
>
>I'm not saying ext2 is the best fs on earth, but i *really* think
>it's a real option, and as such, it must be benchmarked.
>
>
>
>
Actually, I think it would be nice if Grant benchmarked it because it
shows the overhead of ext3's journaling, but it should be noted that it
is not a valid option for most servers.
--
Hans
On Wed, Aug 06, 2003 at 06:06:37PM +0400, Hans Reiser wrote:
>
> I don't think ext2 is a serious option for servers of the sort that
> Linux specializes in, which is probably why he didn't measure it.
FWIW, I use Linux to run an application which generates many large
temporary files. Some larger runs could easily generate hundreds
of GB of on-disk data. I really prefer to use ext2 for these temp
files. The speed is nice, and the data consistency guarantees the
other FSes give me do not really mean much, as most of the files
are not of any use if the machine were to crash. Fsck times on
an unclean shutdown are a problem, but I guess I could solve that
by running mke2fs instead.
On my home machine I switched the partition I do mozilla development
on from ext3 back to ext2. The main reason being that "make clobber"
was so much faster. And again, I feel comfortable doing this because
I can regenerate everything on that partition without too much
work.
Anyway, I just wanted to point out that ext2 still has its uses. I'm
looking forward to trying out reiser4. The speed looks quite impressive
from what I have seen on the net.
Thanks,
Jim
On Thu, 07 Aug 2003 16:55:41 +0400, Hans Reiser <[email protected]> wrote:
> Run fsck on a 1 terabyte array while a department waits for their server
> to come back up instead of having it back in 90 seconds and.....
To start, some people don't need data safety.
>
> disk speeds have increased linearly while their capacity has increased
> quadratically.
It's useful, as you say, to show how slow ext3 is compared with ext2, among
other things. Also, it looks like ext2 scales really well. Benchmarks are not
there to show how fast and nice your reiser4 is compared with others
(there's no doubt reiser4 is pretty nice, BTW). People are developing other
filesystems, you know.
On Thu, 7 Aug 2003, Diego Calleja García wrote:
>Date: Thu, 7 Aug 2003 21:09:22 +0200
>From: "Diego Calleja García" <[email protected]>
>To: Hans Reiser <[email protected]>
>Cc: [email protected], [email protected]
>Subject: Re: Filesystem Tests
>
>On Thu, 07 Aug 2003 16:55:41 +0400, Hans Reiser <[email protected]> wrote:
>
>> Run fsck on a 1 terabyte array while a department waits for their server
>> to come back up instead of having it back in 90 seconds and.....
>
>To start, some people don't need data safety.
I would postulate that the number who do need such safety is far greater
than the number who don't (regardless of whether they realise it or not
;)
>It's useful as you say to show how slow is ext3 compared with ext2, between
>other things. Also, it looks that ext2 scales really well. Benchmarks are not
>there to show how fast and nice your reiser4 is compared with others
>(there's no doubt reiser4 is pretty nice BTW). People are developing other
>filesystems, you know.
>
Figures can be used to show whatever you want to show. Use them wisely.
(I really have no idea what this thread is about, but those statements
irked me for some reason)
Cheers,
Al
Szakacsits Szabolcs wrote:
> I run the results through some scripts to make it more readable.
I think that instead of giving the CPU percentage, you should give the
CPU time used:
CPU time used = CPU percentage * total time
This would give a more accurate measure of how much CPU is used by the
different filesystems. As someone said, if certain operations are
faster with reiser4, you expect a greater percentage of CPU time to be
spent in the disk driver etc. - if the amount of I/O is the same, that is.
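As a rough illustration, Jamie's formula can be applied to Grant's overall numbers with a one-liner; keep in mind the percentages were reported as rounded integers, so these figures are only approximate:

```shell
#!/bin/sh
# CPU time ~= CPU percentage * wall time, using the overall results above.
awk 'BEGIN {
    n = split("reiser4 reiserfs ext3 xfs jfs", name)
    split("171.28 302.53 319.71 429.79 470.88", wall)
    split("30 16 11 13 6", pct)
    print "fs        wall(s)  cpu%  cputime(s)"
    for (i = 1; i <= n; i++)
        printf "%-9s %7.2f  %3d%%  %9.2f\n",
               name[i], wall[i], pct[i], wall[i] * pct[i] / 100
}'
```

By this crude arithmetic, reiser4 (~51s) and reiserfs (~48s) use comparable CPU time, though the integer rounding means the figures should not be over-interpreted.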
Another interesting statistic would be the number of blocks read and
written during the test.
-- Jamie
On Sat, 9 Aug 2003, Jamie Lokier wrote:
> Szakacsits Szabolcs wrote:
> > I run the results through some scripts to make it more readable.
>
> I think that instead of giving the CPU percentage, you should give the
> CPU time used:
>
> CPU time used = CPU percentage * total time
In that case, the next time one could complain similarly: "instead of
giving the CPU time used, you should give the CPU percentage". That's
probably the reason both are usually given, along with CPU time used in
user and system space.
> This would give a more accurate measure of how much CPU is used by the
> different filesystems.
The CPU percentages were given as integers in Grant's original numbers, and
in some cases as 0. Doing mindless math with those would have given (even
more) bogus and misleading results.
> As someone said, if certain operations are faster with reiser4, you
> expect a greater percentage of CPU time to be spent in the disk driver
> etc. - if the amount of I/O is the same, that is.
I just can't believe reiser4 is so fast on an unloaded system (from the
numbers one could also expect it's the slowest on loaded systems and JFS
seems to be the winner on those). Disks have speed/seek limits. To be
faster, one must ignore 'sync', do less IO (file/tail packing, compression,
etc) and/or optimise seek times.
> Another interesting statistic would be the number of blocks read and
> written during the test.
Yes, but I would collect those stats after these short term tests for an
additional X seconds to make sure no additional "optimization" is involved
(aka data is indeed on the disk).
Szaka
Szakacsits Szabolcs wrote:
> I just can't believe reiser4 is so fast on an unloaded system (from the
> numbers one could also expect it's the slowest on loaded systems and JFS
> seems to be the winner on those).
reiser4 is using approximately twice the CPU percentage, but completes
in approximately half the time; therefore it uses about the same
amount of CPU time as the others.
Therefore on a loaded system, with a load carefully chosen to make the
test CPU bound rather than I/O bound, one could expect reiser4 to
complete in approximately the same time as the others, _not_ slowest.
That's why it's misleading to draw conclusions from the CPU percentage alone.
> Disks have a speed/seek limits. To be faster, one must ignore
> 'sync', do less IO (file/tail packing, compression, etc) and/or
> optimise seek times.
reiser4 literature claims that it does less IO (wandering logs) and
suggests better seek patterns (deferred allocation).
> > Another interesting statistic would be the number of blocks read and
> > written during the test.
>
> Yes, but I would collect those stats after these short term tests for an
> additional X seconds to make sure no additional "optimization" is involved
> (aka data is indeed on the disk).
Indeed. Even sync() is not guaranteed to flush the data to disk in
its final form, if the filesystem state is already committed to the
journal.
-- Jamie
On Sat, 9 Aug 2003, Jamie Lokier wrote:
> reiser4 is using approximately twice the CPU percentage, but completes
> in approximately half the time, therefore it uses about the same
> amount of CPU time at the others.
>
> Therefore on a loaded system, with a load carefully chosen to make the
> test CPU bound rather than I/O bound, one could expect reiser4 to
> complete in approximately the same time as the others, _not_ slowest.
Depends how you define approximation and margins. I dropped them and
calculated that reiser4 needs the most CPU time. Hans wrote that it's being
worked on. However, guessing performance on some carefully chosen loaded
system from results on an unloaded system is exactly that: a guess, not fact.
> That's why it's misleading to draw conclusions from the CPU percentage alone.
I never wrote that I made my guesses from the CPU percentage alone; you
explained correctly why. I encourage you, too, to calculate for yourself how
much more CPU time reiser4 needs.
Szaka
> I've never wrote I made my guesses from the CPU percentage alone, you
> explained correctly why. I encourage you too to calculate yourself how
> much more CPU time reiser4 needs.
Ok, fair enough :)
-- Jamie
Thanks everybody for the suggestions. I ran the tests again with new
benchmark code on 2.6.0-pre3; all times are in seconds now.
http://epoxy.mrs.umn.edu/~minerg/fstests/results.html
If you want the benchmark code, it is at
http://epoxy.mrs.umn.edu/~minerg/fstests/
On Sunday, 10 August 2003 23:03, Grant Miner wrote:
> Thanks everybody for suggestions. I ran again, new benchmark code,
> 2.6.0-pre3, all times in seconds now.
>
> http://epoxy.mrs.umn.edu/~minerg/fstests/results.html
>
> If you want the benchmark code, it is at
> http://epoxy.mrs.umn.edu/~minerg/fstests/
Chris,
do you have ReiserFS 3.x data-logging Patches ready for 2.6?
On 2.4.xx (2.4.22-rc at least) ReiserFS 3.x seems to be fastest.
Regards,
Dieter
--
Dieter Nützel
Graduate Student, Computer Science