Greetings all,
I ran the following script for several recent kernels with
several journaling filesystems. I patched in reiser4 from
namesys.com, downloaded yesterday. Sorry, no jfs data.
#!/bin/sh
for i in 1 2 3 4 5
do
    time rm -rf linux-2.5.63
    time tar zxf linux-2.5.63.tar.gz
    time sync
    echo ' '
done
The following data is from the 5th run in each case. The data seemed
fairly consistent, so I didn't average the runs. All the results are
saved if anyone cares.
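For anyone who does want averages, the elapsed fields can be reduced
with a short awk helper; a minimal sketch, assuming the M:SS.ss
elapsed times have been collected one per line in a log file (the
times.log name and the sample values below are illustrative):

```shell
# Hypothetical helper: average elapsed times collected from several
# runs.  Assumes one M:SS.ss elapsed value per line, as printed by the
# shell's time builtin; times.log and its contents are illustrative.
avg_elapsed() {
    awk -F: '{ total += $1 * 60 + $2; n++ } END { printf "%.2f\n", total / n }' "$1"
}

printf '0:14.09\n0:14.39\n0:14.09\n' > times.log
avg_elapsed times.log    # 14.19
```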
The tar files were created on the file systems being tested.
Brief summary: It looks like the order of performance for this
particular load is reiserfs, reiser4, ext3, xfs.
Something changed with ext3 behavior after 2.5.60, and xfs takes the
biggest hit from the anticipatory I/O scheduler (for this load).
Minor footnote: to get reiser4 to compile with 2.5.62-mm3, it was
necessary to s/UPDATE_ATIME/update_atime/g in some reiser4 files.
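For reference, that substitution can be applied in one pass with find
and GNU sed's -i flag; a minimal sketch on a throwaway tree (the
demo/ path and file contents below are stand-ins for the real reiser4
sources):

```shell
# A minimal sketch of the rename, applied with find and GNU sed's -i
# flag; the demo/ tree is a stand-in for the real reiser4 sources.
mkdir -p demo/fs/reiser4
echo 'UPDATE_ATIME(inode);' > demo/fs/reiser4/inode.c
find demo/fs/reiser4 -name '*.c' -print0 \
    | xargs -0 sed -i 's/UPDATE_ATIME/update_atime/g'
cat demo/fs/reiser4/inode.c    # update_atime(inode);
```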
Steven
---------------------------------------------------------------------
2.5.60:
reiserfs:
rm -rf linux-2.5.63 0.02user 1.71system 0:02.16elapsed 80%CPU
tar zxf linux-2.5.63.tar.gz 4.83user 5.86system 0:14.09elapsed 75%CPU
sync 0.00user 0.03system 0:00.96elapsed 3%CPU
reiser4:
rm -rf linux-2.5.63 0.15user 3.26system 0:07.12elapsed 47%CPU
tar zxf linux-2.5.63.tar.gz 4.86user 7.93system 0:15.18elapsed 84%CPU
sync 0.00user 0.35system 0:01.17elapsed 30%CPU
ext3:
rm -rf linux-2.5.63 0.02user 0.56system 0:03.02elapsed 19%CPU
tar zxf linux-2.5.63.tar.gz 5.04user 4.85system 0:23.03elapsed 42%CPU
sync 0.00user 0.04system 0:08.14elapsed 0%CPU
xfs:
rm -rf linux-2.5.63 0.06user 2.46system 0:09.97elapsed 25%CPU
tar zxf linux-2.5.63.tar.gz 4.94user 6.47system 0:35.17elapsed 32%CPU
sync 0.00user 0.01system 0:03.53elapsed 0%CPU
---------------------------------------------------------------------
2.5.63:
reiserfs:
rm -rf linux-2.5.63 0.03user 1.73system 0:01.81elapsed 97%CPU
tar zxf linux-2.5.63.tar.gz 4.86user 6.22system 0:14.39elapsed 77%CPU
sync 0.00user 0.02system 0:01.32elapsed 1%CPU
reiser4:
rm -rf linux-2.5.63 0.18user 3.30system 0:06.50elapsed 53%CPU
tar zxf linux-2.5.63.tar.gz 4.86user 8.09system 0:14.82elapsed 87%CPU
sync 0.00user 0.36system 0:01.13elapsed 31%CPU
ext3:
rm -rf linux-2.5.63 0.02user 0.56system 0:00.67elapsed 86%CPU
tar zxf linux-2.5.63.tar.gz 4.69user 4.77system 0:14.92elapsed 63%CPU
sync 0.00user 0.18system 0:15.11elapsed 1%CPU
xfs:
rm -rf linux-2.5.63 0.04user 2.39system 0:07.07elapsed 34%CPU
tar zxf linux-2.5.63.tar.gz 5.11user 6.63system 0:38.43elapsed 30%CPU
sync 0.00user 0.09system 0:03.59elapsed 2%CPU
---------------------------------------------------------------------
2.5.62-mm3 elevator=as:
reiserfs:
rm -rf linux-2.5.63 0.03user 1.76system 0:01.82elapsed 98%CPU
tar zxf linux-2.5.63.tar.gz 4.96user 6.35system 0:14.09elapsed 80%CPU
sync 0.00user 0.13system 0:03.74elapsed 3%CPU
reiser4:
rm -rf linux-2.5.63 0.14user 3.25system 0:06.37elapsed 53%CPU
tar zxf linux-2.5.63.tar.gz 5.02user 8.31system 0:15.40elapsed 86%CPU
sync 0.00user 0.36system 0:01.21elapsed 30%CPU
ext3:
rm -rf linux-2.5.63 0.03user 0.55system 0:00.75elapsed 77%CPU
tar zxf linux-2.5.63.tar.gz 4.75user 4.42system 0:16.83elapsed 54%CPU
sync 0.00user 0.18system 0:15.94elapsed 1%CPU
xfs:
rm -rf linux-2.5.63 0.07user 2.46system 0:07.71elapsed 32%CPU
tar zxf linux-2.5.63.tar.gz 5.11user 6.80system 0:42.07elapsed 28%CPU
sync 0.00user 0.11system 0:03.74elapsed 3%CPU
Steven Cole wrote:
>
>Brief summary: It looks like the order of performance for this
>particular load is reiserfs, reiser4, ext3, xfs.
>
The performance of reiserfs V3 relative to ext3 and XFS in this
benchmark is consistent with past experience, as best I can remember
it. Roughly speaking, these results have held without major change
for as long as we have been testing, at least for writes. Ext3 does
better on reads; I won't predict which of ext3 or reiserfs is
currently faster there, but ext3 has tended to have a slight read
speed advantage on Linux kernel source code.
The ~6% disadvantage of V4 compared to V3 is a bit surprising, and we
are still evaluating that result. We just checked in a complete rewrite
of the flushing code today: give us a few weeks of analysis and we will
hopefully have better results for V4 versus V3. The primary purpose of
the rewrite was code clarity, but rumor has it we found unnecessary work
being done during the rewrite and corrected it. ;-) With clear code it
will be easier for us to analyze what it is doing.
I would advise using a larger benchmark with 30-60 kernels being
copied. Filesystems sometimes perform differently for sync than for
memory pressure.
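A scaled-down sketch of the larger run suggested above: unpack many
copies of the tree so that memory pressure, not just sync, drives
writeback. A tiny stand-in tarball replaces linux-2.5.63.tar.gz here
so the sketch is self-contained, and N is set to 3 where 30-60 is
suggested for a real run:

```shell
# Sketch: unpack N copies of a tree, then force writeback with sync.
# The tiny tree.tar.gz is a stand-in for linux-2.5.63.tar.gz; use
# N=30 or more (and time each step) for a meaningful measurement.
mkdir -p tree
echo 'hello' > tree/file.c
tar zcf tree.tar.gz tree
N=3
for i in $(seq 1 $N)
do
    mkdir -p copy-$i
    tar zxf tree.tar.gz -C copy-$i
done
sync
```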
I would be interested to understand why ext3 is slower for sync: is it
because it has more in its write cache, or because of something else?
If it has more in its write cache, then our write caching is less
aggressive in reiser4 than I want it to be, and if it is something else
then the ext3 guys need to look into it.
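One quick way to probe the write-cache question: sample the kernel's
count of dirty (not yet written) page cache immediately before and
after the sync. A minimal sketch, assuming a kernel that exports the
Dirty line in /proc/meminfo; a filesystem holding more dirty data at
sync time would show a larger drop:

```shell
# Sample dirty page cache around a sync; a larger starting Dirty
# value suggests more aggressive write caching by the filesystem.
grep '^Dirty:' /proc/meminfo
sync
grep '^Dirty:' /proc/meminfo
```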
Thanks for doing this test.
--
Hans
On Thu, 27 Feb 2003, Hans Reiser wrote:
> I would advise using a larger benchmark with 30-60 kernels being
> copied. Filesystems sometimes perform differently for sync than for
> memory pressure.
Agreed, a benchmark suite gives a better view of overall performance.
When ext3 came out I benchmarked it by running a usenet news server.
I configured it for one file per article and fed in about 100k
articles, with each one being offered multiple times to generate both
rejects and accepts. I suppose I should do that again; it would give
some insight into performance when creating *lots* of files, many in
the same directory.
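A scaled-down sketch of that kind of load: many small files created
in a single directory. The real test fed in about 100k articles; 1000
keeps this demo quick, and the spool/ name is illustrative:

```shell
# Sketch of a news-spool style load: many small files in one
# directory.  Wrap the loop in time(1) for actual measurements; scale
# the count up toward 100k for a realistic run.
mkdir -p spool
i=1
while [ $i -le 1000 ]
do
    echo 'article body' > spool/art.$i
    i=$((i+1))
done
ls spool | wc -l    # 1000
```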
Needless to say, for production use I configure a news server for
least resources, and the filesystem plays little part in the
performance.
--
bill davidsen <[email protected]>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.