2008-05-30 15:51:21

by Valerie Clement

[permalink] [raw]
Subject: Test results for ext4

Hi all,

Over the past couple of weeks, I have run batches of tests to get some
performance numbers for the new ext4 features like uninit_groups,
flex_bg or journal_checksum on a 5TB filesystem.
I tried to test almost all combinations of mkfs and mount options, but
I put only a subset of them, the most significant ones for me, in the
result tables.

I had started these tests on kernel 2.6.26-rc1, but I got several hangs
and crashes occurring randomly outside ext4, sometimes in the slab code
or in the SCSI driver for example, which were not reproducible.
Since 2.6.26-rc2, no crash or hang has occurred with ext4 on my system.

The first results and the test description are available here:
http://www.bullopensource.org/ext4/20080530/ffsb-write-2.6.26-rc2.html
http://www.bullopensource.org/ext4/20080530/ffsb-readwrite-2.6.26-rc2.html

I will complete them in the next days.

In the first batch of tests, I compare the I/O throughput when creating
1-GB files on disk in different configurations. The CPU usage is also
given, mainly to show how the delayed allocation feature reduces it.
The average number of extents per file shows the impact of the
multiblock allocator and the flex_bg grouping on file fragmentation.
Finally, the fsck time shows how the uninit_groups feature reduces the
e2fsck duration.
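As a side note, the average number of extents per file can be derived from filefrag output; here is a minimal sketch of that computation (the sample paths and counts are made up, and I assume filefrag's usual one-line "N extents found" format):

```python
import re

def average_extents(lines):
    """Average the extent counts from filefrag-style output lines."""
    counts = [int(m.group(1))
              for line in lines
              if (m := re.search(r":\s*(\d+) extents? found", line))]
    return sum(counts) / len(counts)

# Hypothetical filefrag output for three test files:
sample = [
    "/mnt/test/file1: 1 extent found",
    "/mnt/test/file2: 3 extents found",
    "/mnt/test/file3: 2 extents found",
]
print(average_extents(sample))  # → 2.0
```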

In the second batch of tests, the results show improvements in
transactions-per-second throughput for small-file writes, reads and
creates when using the flex_bg grouping.
The same ffsb test on an XFS filesystem hangs; I will try to get traces.

If you are interested in other tests, please let me know.

Valérie


2008-05-30 16:02:57

by Eric Sandeen

[permalink] [raw]
Subject: Re: Test results for ext4

Valerie Clement wrote:
> Hi all,
>
> Over the past couple of weeks, I have run batches of tests to get some
> performance numbers for the new ext4 features like uninit_groups,
> flex_bg or journal_checksum on a 5TB filesystem.
> I tried to test almost all combinations of mkfs and mount options, but
> I put only a subset of them, the most significant ones for me, in the
> result tables.
>
> I had started these tests on kernel 2.6.26-rc1, but I got several
> hangs and crashes occurring randomly outside ext4, sometimes in the
> slab code or in the SCSI driver for example, which were not
> reproducible.
> Since 2.6.26-rc2, no crash or hang has occurred with ext4 on my system.
>
> The first results and the test description are available here:
> http://www.bullopensource.org/ext4/20080530/ffsb-write-2.6.26-rc2.html
> http://www.bullopensource.org/ext4/20080530/ffsb-readwrite-2.6.26-rc2.html

To be fair in your comparisons with xfs, you should probably either turn
barriers off for xfs, or on for ext[34], just FWIW.

-Eric

2008-05-30 16:21:38

by Valerie Clement

[permalink] [raw]
Subject: Re: Test results for ext4

Eric Sandeen wrote:
> Valerie Clement wrote:
>> Hi all,
>>
>> Over the past couple of weeks, I have run batches of tests to get some
>> performance numbers for the new ext4 features like uninit_groups,
>> flex_bg or journal_checksum on a 5TB filesystem.
>> I tried to test almost all combinations of mkfs and mount options, but
>> I put only a subset of them, the most significant ones for me, in the
>> result tables.
>>
>> I had started these tests on kernel 2.6.26-rc1, but I got several
>> hangs and crashes occurring randomly outside ext4, sometimes in the
>> slab code or in the SCSI driver for example, which were not
>> reproducible.
>> Since 2.6.26-rc2, no crash or hang has occurred with ext4 on my system.
>>
>> The first results and the test description are available here:
>> http://www.bullopensource.org/ext4/20080530/ffsb-write-2.6.26-rc2.html
>> http://www.bullopensource.org/ext4/20080530/ffsb-readwrite-2.6.26-rc2.html
>
> To be fair in your comparisons with xfs, you should probably either turn
> barriers off for xfs, or on for ext[34], just FWIW.
>

Oops, I did the test on a device mapper /dev/md0. I forgot to change it in
the test description. I will do it.

When mounting the xfs filesystem, I got the following message:
Filesystem "md0": Disabling barriers, not supported by the underlying device

It means barriers are not supported by my device, doesn't it?

Valérie

2008-05-30 16:24:51

by Eric Sandeen

[permalink] [raw]
Subject: Re: Test results for ext4

Valerie Clement wrote:
> Eric Sandeen wrote:

> Oops, I did the test on a device mapper /dev/md0. I forgot to change it in
> the test description. I will do it.
>
> When mounting the xfs filesystem, I got the following message:
> Filesystem "md0": Disabling barriers, not supported by the underlying device
>
> It means barriers are not supported by my device, doesn't it?

Yep, it should be a fair fight then. :) I saw /dev/sde in the writeup
and figured it probably had barriers on.

Thanks,
-Eric

2008-05-30 16:29:16

by Eric Sandeen

[permalink] [raw]
Subject: Re: Test results for ext4

Eric Sandeen wrote:
> Valerie Clement wrote:
>> Eric Sandeen wrote:
>
>> Oops, I did the test on a device mapper /dev/md0. I forgot to change it in
>> the test description. I will do it.
>>
>> When mounting the xfs filesystem, I got the following message:
>> Filesystem "md0": Disabling barriers, not supported by the underlying device
>>
>> It means barriers are not supported by my device, doesn't it?
>
> Yep, it should be a fair fight then. :) I saw /dev/sde in the writeup
> and figured it probably had barriers on.

Oh, also for completeness can you specify which xfsprogs you used?
There were some recent changes made which affect the fs geometry, and
might affect the results. So it would be good to fully specify.

Also why no fragmentation results for xfs or ext3?

Thanks,

-Eric

2008-05-30 17:48:28

by Mingming Cao

[permalink] [raw]
Subject: Re: Test results for ext4


On Fri, 2008-05-30 at 17:50 +0200, Valerie Clement wrote:
> Hi all,
>

Hi Valerie,

> Over the past couple of weeks, I have run batches of tests to get some
> performance numbers for the new ext4 features like uninit_groups,
> flex_bg or journal_checksum on a 5TB filesystem.
> I tried to test almost all combinations of mkfs and mount options, but
> I put only a subset of them, the most significant ones for me, in the
> result tables.
>

Thanks, that's very helpful.

> I had started these tests on kernel 2.6.26-rc1, but I got several
> hangs and crashes occurring randomly outside ext4, sometimes in the
> slab code or in the SCSI driver for example, which were not
> reproducible.
> Since 2.6.26-rc2, no crash or hang has occurred with ext4 on my system.
>
> The first results and the test description are available here:
> http://www.bullopensource.org/ext4/20080530/ffsb-write-2.6.26-rc2.html

Interesting that nomballoc is about 3% faster than the default (with
mballoc), but the fragmentation is much better with mballoc.

> http://www.bullopensource.org/ext4/20080530/ffsb-readwrite-2.6.26-rc2.html
>


> I will complete them in the next days.
>
> In the first batch of tests, I compare the I/O throughput when
> creating 1-GB files on disk in different configurations. The CPU usage
> is also given, mainly to show how the delayed allocation feature
> reduces it.
> The average number of extents per file shows the impact of the
> multiblock allocator and the flex_bg grouping on file fragmentation.
> Finally, the fsck time shows how the uninit_groups feature reduces the
> e2fsck duration.
>
I don't know if Jose has any suggestions; I am curious what the impact
of flex_bg alone is on fsck performance.

> In the second batch of tests, the results show improvements in
> transactions-per-second throughput for small-file writes, reads and
> creates when using the flex_bg grouping.
> The same ffsb test on an XFS filesystem hangs; I will try to get
> traces.
>

> If you are interested in other tests, please let me know.
>
I am also wondering if you got a chance to test larger inodes (>256
bytes) with uninit_groups and flex_bg?

> Valérie
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html

2008-05-30 18:14:08

by Jose R. Santos

[permalink] [raw]
Subject: Re: Test results for ext4

On Fri, 30 May 2008 17:50:43 +0200
Valerie Clement <[email protected]> wrote:

> Hi all,
>
> Over the past couple of weeks, I have run batches of tests to get
> some performance numbers for the new ext4 features like uninit_groups,
> flex_bg or journal_checksum on a 5TB filesystem.
> I tried to test almost all combinations of mkfs and mount options,
> but I put only a subset of them, the most significant ones for me, in
> the result tables.
>
> I had started these tests on kernel 2.6.26-rc1, but I got several
> hangs and crashes occurring randomly outside ext4, sometimes in the
> slab code or in the SCSI driver for example, which were not
> reproducible. Since 2.6.26-rc2, no crash or hang has occurred with
> ext4 on my system.
>
> The first results and the test description are available here:
> http://www.bullopensource.org/ext4/20080530/ffsb-write-2.6.26-rc2.html
> http://www.bullopensource.org/ext4/20080530/ffsb-readwrite-2.6.26-rc2.html
>
> I will complete them in the next days.
>
> In the first batch of tests, I compare the I/O throughput when
> creating 1-GB files on disk in different configurations. The CPU usage
> is also given, mainly to show how the delayed allocation feature
> reduces it.
> The average number of extents per file shows the impact of the
> multiblock allocator and the flex_bg grouping on file fragmentation.
> Finally, the fsck time shows how the uninit_groups feature reduces
> the e2fsck duration.
>
> In the second batch of tests, the results show improvements in
> transactions-per-second throughput for small-file writes, reads and
> creates when using the flex_bg grouping.
> The same ffsb test on an XFS filesystem hangs; I will try to get
> traces.
>
> If you are interested in other tests, please let me know.

How about adding the following in "[filesystem0]" to age the filesystem:

agefs=1
[threadgroup0]
num_threads=10
write_size=40960
write_blocksize=4096
create_weight=10
append_weight=10
delete_weight=1
[end0]
desired_util=0.80

This will age the filesystem until it reaches 80% utilization before
starting the benchmark. Since a 5TB disk will take a while to age, I
suggest trying this on just a few runs.
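For a sense of scale, the back-of-envelope arithmetic looks like this (the 100 MB/s sustained aging rate is purely an assumed figure):

```python
# Time to age a 5 TB filesystem to 80% utilization before the benchmark.
fs_size_bytes = 5e12           # 5 TB, decimal
desired_util = 0.80            # from the ffsb profile above
rate_bytes_per_s = 100e6       # assumed sustained aging throughput

seconds = fs_size_bytes * desired_util / rate_bytes_per_s
print(round(seconds / 3600, 1))  # → 11.1 (hours of pure writing)
```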

>
> Valérie
>

-JRS

2008-05-30 20:59:20

by Eric Sandeen

[permalink] [raw]
Subject: Re: Test results for ext4

Valerie Clement wrote:
> Hi all,
>
> Over the past couple of weeks, I have run batches of tests to get some
> performance numbers for the new ext4 features like uninit_groups,
> flex_bg or journal_checksum on a 5TB filesystem.
> I tried to test almost all combinations of mkfs and mount options, but
> I put only a subset of them, the most significant ones for me, in the
> result tables.
>
> I had started these tests on kernel 2.6.26-rc1, but I got several
> hangs and crashes occurring randomly outside ext4, sometimes in the
> slab code or in the SCSI driver for example, which were not
> reproducible.
> Since 2.6.26-rc2, no crash or hang has occurred with ext4 on my system.
>
> The first results and the test description are available here:
> http://www.bullopensource.org/ext4/20080530/ffsb-write-2.6.26-rc2.html
> http://www.bullopensource.org/ext4/20080530/ffsb-readwrite-2.6.26-rc2.html
>
> I will complete them in the next days.
>
> In the first batch of tests, I compare the I/O throughput when
> creating 1-GB files on disk in different configurations. The CPU usage
> is also given, mainly to show how the delayed allocation feature
> reduces it.
> The average number of extents per file shows the impact of the
> multiblock allocator and the flex_bg grouping on file fragmentation.
> Finally, the fsck time shows how the uninit_groups feature reduces the
> e2fsck duration.
>
> In the second batch of tests, the results show improvements in
> transactions-per-second throughput for small-file writes, reads and
> creates when using the flex_bg grouping.
> The same ffsb test on an XFS filesystem hangs; I will try to get
> traces.
>
> If you are interested in other tests, please let me know.

Valerie, would you be interested in any xfs tuning? :)

I don't know how much tuning is "fair" for the comparison... but I think
in real usage xfs would/should get tuned a bit for a workload like this.

At the 5T range xfs gets into a funny allocation mode...

If you mount with "-o inode64" I bet you see a lot better performance.

Or, you could do sysctl -w fs.xfs.rotorstep=256

which would probably help too.

With a large fs like this, the allocator gets into a special mode to
keep inodes in the lower part of the fs, so that inode numbers stay
under 32 bits, and it scatters the data allocations around the higher
portions of the fs.

Either -o inode64 will completely avoid this, or the rotorstep setting
should stop it from scattering each file, instead switching AGs only
every 256 files.

Could you also include the xfsprogs version on your summary pages, and
maybe even the output of xfs_info /mount/point so we can see the full fs
geometry? (I'd suggest maybe tune2fs output for the ext[34] filesystems
too, for the same reason)

When future generations look at the results it'll be nice to have as
much specificity about the setup as possible, I think.

Thanks,
-Eric

2008-05-31 19:37:10

by Eric Sandeen

[permalink] [raw]
Subject: Re: Test results for ext4

Valerie Clement wrote:
> Hi all,
>
> Over the past couple of weeks, I have run batches of tests to get some
> performance numbers for the new ext4 features like uninit_groups,
> flex_bg or journal_checksum on a 5TB filesystem.
> I tried to test almost all combinations of mkfs and mount options, but
> I put only a subset of them, the most significant ones for me, in the
> result tables.
>
> I had started these tests on kernel 2.6.26-rc1, but I got several
> hangs and crashes occurring randomly outside ext4, sometimes in the
> slab code or in the SCSI driver for example, which were not
> reproducible.
> Since 2.6.26-rc2, no crash or hang has occurred with ext4 on my system.
>
> The first results and the test description are available here:
> http://www.bullopensource.org/ext4/20080530/ffsb-write-2.6.26-rc2.html
> http://www.bullopensource.org/ext4/20080530/ffsb-readwrite-2.6.26-rc2.html
>

One other question on the tests; am I reading correctly that ext3 used
"data=writeback" but ext4 used the default data=ordered mode?

Thanks,
-Eric

2008-06-02 13:07:29

by Valerie Clement

[permalink] [raw]
Subject: Re: Test results for ext4

Eric Sandeen wrote:
> Oh, also for completeness can you specify which xfsprogs you used?
> There were some recent changes made which affect the fs geometry, and
> might affect the results. So it would be good to fully specify.
OK, will do. To be honest, I didn't update them recently.

>
> Also why no fragmentation results for xfs or ext3?
I simply forgot to do it.

But I didn't intend to make a full comparison of ext4 with xfs and ext3.
When testing the latest ext4 patch queue with a new kernel, I sometimes
got kernel crashes, system hangs, or bad performance.
Running the same tests on ext3 and xfs, whose code is more stable,
gives me reference numbers for my tests.
That is how I found a problem in the I/O scheduler in the past.

Valérie


2008-06-02 13:21:08

by Valerie Clement

[permalink] [raw]
Subject: Re: Test results for ext4

Eric Sandeen wrote:
> One other question on the tests; am I reading correctly that ext3 used
> "data=writeback" but ext4 used the default data=ordered mode?

When the delalloc patches are applied, the data=writeback mode is forced;
this is why I set this mode for ext3.
But I agree, it is not obvious; I need to make that clear in the test
description.

Valérie

2008-06-02 13:30:21

by Valerie Clement

[permalink] [raw]
Subject: Re: Test results for ext4

Mingming wrote:
> I also wondering if you get a chance to test larger inode (>256 bytes)
> with uninit group and flex bg?
I have already run some tests with 512-byte inodes and the above
options, and they were OK, but I didn't run stress tests or performance
tests in this configuration.
No problem to do it.
Valérie

2008-06-02 13:44:25

by Valerie Clement

[permalink] [raw]
Subject: Re: Test results for ext4

Jose R. Santos wrote:
> agefs=1
> [threadgroup0]
> num_threads=10
> write_size=40960
> write_blocksize=4096
> create_weight=10
> append_weight=10
> delete_weight=1
> [end0]
> desired_util=0.80
>
> This will age the filesystem until it reaches 80% utilization before
> starting the benchmark. Since a 5TB disk will take a while to age, I
> suggest trying this on just a few runs.
OK, I'll do it only on a few runs (filling 4TB on my device takes
more than 12 hours...)
Valérie

2008-06-02 14:45:55

by Jose R. Santos

[permalink] [raw]
Subject: Re: Test results for ext4

On Mon, 02 Jun 2008 15:44:04 +0200
Valerie Clement <[email protected]> wrote:

> Jose R. Santos wrote:
> > agefs=1
> > [threadgroup0]
> > num_threads=10
> > write_size=40960
> > write_blocksize=4096
> > create_weight=10
> > append_weight=10
> > delete_weight=1
> > [end0]
> > desired_util=0.80
> >
> > This will age the filesystem until it reaches 80% utilization before
> > starting the benchmark. Since a 5TB disk will take a while to age,
> > I suggest trying this on just a few runs.
> Ok to do it only on a few runs (creating a 4TB file on my device takes
> more than 12 hours...)
> Valérie

Wow...

Maybe running aging on a 5TB fs is not such a good idea. I think it
would be better to run it on a smaller array (1TB maybe). I suspect
that one of the reasons this is taking so long is that with 4TB of
small files, the rbtree in FFSB would be huge and could be larger than
the 2GB of memory on your system, causing it to swap.
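A rough sketch of that memory estimate (the 40 KB file size comes from write_size in the profile earlier in the thread; the ~50 bytes of per-file rbtree bookkeeping is an assumed figure):

```python
# Back-of-envelope: FFSB rbtree memory for 4 TB of small files.
data_bytes = 4e12          # ~4 TB of aged data
file_size = 40960          # write_size from the ffsb profile
node_overhead = 50         # assumed bytes of bookkeeping per file

n_files = data_bytes / file_size
rbtree_bytes = n_files * node_overhead
print(round(rbtree_bytes / 1e9, 1))  # → 4.9 (GB, well over 2 GB of RAM)
```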


Running on a smaller FS should still give a good idea of how well the
filesystems avoid fragmentation.


-JRS

2008-06-02 14:52:07

by Valerie Clement

[permalink] [raw]
Subject: Re: Test results for ext4

Eric Sandeen wrote:
> Valerie, would you be interested in any xfs tuning? :)
Yes, if you give me inputs.

>
> I don't know how much tuning is "fair" for the comparison... but I think
> in real usage xfs would/should get tuned a bit for a workload like this.
>
> At the 5T range xfs gets into a funny allocation mode...

Look at the tests I did one year ago:
http://www.bullopensource.org/ext4/20070404/ffsb-write.html
Large sequential writes were done on a smaller device. With 4 threads,
xfs is better than ext3 and ext4, but as the number of threads
increases, xfs does less well.

To run my tests with 128 threads, maybe I have to tune something in xfs.

>
> If you mount with "-o inode64" I bet you see a lot better performance.
>
> Or, you could do sysctl -w fs.xfs.rotorstep=256
>
> which would probably help too.
>
> with a large fs like this, the allocator gets into a funny mode to keep
> inodes in the lower part of the fs to keep them under 32 bits, and
> scatters the data allocations around the higher portions of the fs.
>
> Either -o inode64 will completely avoid this, or the rotorstep should
> stop it from scattering each file, but instead switching AGs only every
> 256 files.
>
> Could you also include the xfsprogs version on your summary pages, and
> maybe even the output of xfs_info /mount/point so we can see the full fs
> geometry? (I'd suggest maybe tune2fs output for the ext[34] filesystems
> too, for the same reason)
>
> When future generations look at the results it'll be nice to have as
> much specificity about the setup as possible, I think.
Yes, I agree. Thank you very much for your comments. They help me a lot.
Valérie

2008-06-03 03:27:54

by Eric Sandeen

[permalink] [raw]
Subject: Re: Test results for ext4

Eric Sandeen wrote:
> Valerie Clement wrote:
>> Hi all,
>>
>> Over the past couple of weeks, I have run batches of tests to get some
>> performance numbers for the new ext4 features like uninit_groups,
>> flex_bg or journal_checksum on a 5TB filesystem.
>> I tried to test almost all combinations of mkfs and mount options, but
>> I put only a subset of them, the most significant ones for me, in the
>> result tables.
>>
>> I had started these tests on kernel 2.6.26-rc1, but I got several
>> hangs and crashes occurring randomly outside ext4, sometimes in the
>> slab code or in the SCSI driver for example, which were not
>> reproducible.
>> Since 2.6.26-rc2, no crash or hang has occurred with ext4 on my system.
>>
>> The first results and the test description are available here:
>> http://www.bullopensource.org/ext4/20080530/ffsb-write-2.6.26-rc2.html
>> http://www.bullopensource.org/ext4/20080530/ffsb-readwrite-2.6.26-rc2.html
>>
>
> One other question on the tests; am I reading correctly that ext3 used
> "data=writeback" but ext4 used the default data=ordered mode?

I was interested in the results, especially since ext3 seemed to pretty
well match ext4 for throughput, although the cpu utilization differed.

I re-ran the same ffsb profiles on an 8G, 4-way opteron box, connected
to a "Vendor: WINSYS Model: SF2372" 2T hardware raid array with 512MB
cache, connected via fibrechannel.

Reads go pretty fast:

# dd if=/dev/sdc bs=16M count=512 iflag=direct of=/dev/null
8589934592 bytes (8.6 GB) copied, 23.2257 seconds, 370 MB/s
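(As a quick sanity check, that dd figure is self-consistent; dd reports decimal megabytes:)

```python
# Recompute the dd throughput from the bytes and elapsed time above.
bytes_copied = 8589934592
elapsed_s = 23.2257
mb_per_s = bytes_copied / elapsed_s / 1e6   # dd uses decimal MB
print(round(mb_per_s))  # → 370
```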

I got some different numbers....

This was with e2fsprogs-1.39 for ext3, e2fsprogs-1.40.10 for ext4, and
xfsprogs-2.9.8 for xfs.

I used defaults except data=writeback for ext[34] and the nobarrier
option for xfs. ext3 was made with 128-byte inodes, ext4 with 256-byte
ones (the new default). XFS used stock mkfs. I formatted the entire
block device /dev/sdc.

For the large file write test:

        MB/s   CPU %
ext3     140    90.7
ext4     182    50.2
xfs      222   145.0

And for the small random readwrite test:

      trans/s   CPU %
ext3     9830    12.2
ext4    11996    18.1
xfs     13863    23.5
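One derived way to read the large-file write numbers (my own normalization, not part of the benchmark output) is throughput per percent of CPU, which makes the delayed-allocation efficiency win stand out:

```python
# MB/s per percent CPU for the large file write test above.
results = {"ext3": (140, 90.7), "ext4": (182, 50.2), "xfs": (222, 145.0)}
for fs, (mb_s, cpu) in results.items():
    print(fs, round(mb_s / cpu, 2))
# ext4 delivers ~3.6 MB/s per CPU point vs ~1.5 for ext3 and xfs
```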

Not sure what the difference is ...

If you have your tests scripted up I'd be interested to run all the
variations on this hardware as well, as it seems to show more throughput
differences...

Thanks!

-Eric

2008-06-04 15:34:41

by Valerie Clement

[permalink] [raw]
Subject: Re: Test results for ext4

Eric Sandeen wrote:
> Eric Sandeen wrote:
>> Valerie Clement wrote:
>>> Hi all,
>>>
>>> Over the past couple of weeks, I have run batches of tests to get some
>>> performance numbers for the new ext4 features like uninit_groups,
>>> flex_bg or journal_checksum on a 5TB filesystem.
>>> I tried to test almost all combinations of mkfs and mount options, but
>>> I put only a subset of them, the most significant ones for me, in the
>>> result tables.
>>>
>>> I had started these tests on kernel 2.6.26-rc1, but I got several
>>> hangs and crashes occurring randomly outside ext4, sometimes in the
>>> slab code or in the SCSI driver for example, which were not
>>> reproducible.
>>> Since 2.6.26-rc2, no crash or hang has occurred with ext4 on my system.
>>>
>>> The first results and the test description are available here:
>>> http://www.bullopensource.org/ext4/20080530/ffsb-write-2.6.26-rc2.html
>>> http://www.bullopensource.org/ext4/20080530/ffsb-readwrite-2.6.26-rc2.html
>>>
>> One other question on the tests; am I reading correctly that ext3 used
>> "data=writeback" but ext4 used the default data=ordered mode?
>
> I was interested in the results, especially since ext3 seemed to pretty
> well match ext4 for throughput, although the cpu utilization differed.
>
> I re-ran the same ffsb profiles on an 8G, 4-way opteron box, connected
> to a "Vendor: WINSYS Model: SF2372" 2T hardware raid array with 512MB
> cache, connected via fibrechannel.
>
> Reads go pretty fast:
>
> # dd if=/dev/sdc bs=16M count=512 iflag=direct of=/dev/null
> 8589934592 bytes (8.6 GB) copied, 23.2257 seconds, 370 MB/s
>
> I got some different numbers....
>
> This was with e2fsprogs-1.39 for ext3, e2fsprogs-1.40.10 for ext4, and
> xfsprogs-2.9.8 for xfs.
I was using xfsprogs-2.9.0, maybe too old a version...
I'm updating it and I'll run my tests again.

>
> I used defaults except data=writeback for ext[34] and the nobarrier
> option for xfs. ext3 was made with 128-byte inodes, ext4 with 256-byte
> ones (the new default). XFS used stock mkfs. I formatted the entire
> block device /dev/sdc.
>
> For the large file write test:
>
>         MB/s   CPU %
> ext3     140    90.7
> ext4     182    50.2
> xfs      222   145.0
>
> And for the small random readwrite test:
>
>       trans/s   CPU %
> ext3     9830    12.2
> ext4    11996    18.1
> xfs     13863    23.5
>
> Not sure what the difference is ...
>
> If you have your tests scripted up I'd be interested to run all the
> variations on this hardware as well, as it seems to show more throughput
> differences...

I added a link to the scripts I used in the test description section in:
http://www.bullopensource.org/ext4/20080530/ffsb-write-2.6.26-rc2.html
http://www.bullopensource.org/ext4/20080530/ffsb-readwrite-2.6.26-rc2.html

Valérie

2008-06-04 15:41:37

by Eric Sandeen

[permalink] [raw]
Subject: Re: Test results for ext4

Valerie Clement wrote:
> Eric Sandeen wrote:

>> This was with e2fsprogs-1.39 for ext3, e2fsprogs-1.40.10 for ext4, and
>> xfsprogs-2.9.8 for xfs.
> I was using xfsprogs-2.9.0, maybe too old a version...
> I'm updating it and I'll run my tests again.

2.9.0 is about a year old... 2.9.5 got some changes to the defaults
for performance reasons, so it might make a difference. If the kernel
is bleeding edge, you probably might as well use all bleeding-edge
tools, too. :)

...

> I added a link to the scripts I used in the test description section in:
> http://www.bullopensource.org/ext4/20080530/ffsb-write-2.6.26-rc2.html
> http://www.bullopensource.org/ext4/20080530/ffsb-readwrite-2.6.26-rc2.html

Great, thanks! I think the hardware comparison is interesting...

-Eric

> Valérie