2011-08-01 03:03:08

by Eric Whitney

Subject: 2.6.39 and 3.0 scalability measurement results

I've posted the results of my 2.6.38/2.6.39 and 2.6.39/3.0 ext4
scalability measurements and comparisons on a 48 core x86_64 server at:

http://free.linux.hp.com/~enw/ext4/2.6.39

http://free.linux.hp.com/~enw/ext4/3.0

The results include throughput and CPU efficiency graphs for five simple
workloads, the raw data behind them, and lockstats as well.

The data cover ext4 filesystems with and without journals. For
reference, ext3, xfs, and btrfs are included as well.

The 2.6.38/2.6.39 results mainly show the clear scalability benefit of
making the mblk_io_submit mount option the default behavior for ext4
filesystems with journals - see the large_file_creates throughput plot.
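
For anyone who wants to try the comparison on a pre-2.6.39 kernel, both
writeback paths can still be selected explicitly at mount time. A minimal
sketch, with the device and mount point as placeholders:

    # 2.6.38 default path: writeback submits buffer heads individually
    mount -t ext4 -o nomblk_io_submit /dev/sdX /mnt/test

    # 2.6.39 default path: writeback builds bios spanning multiple pages
    mount -t ext4 -o mblk_io_submit /dev/sdX /mnt/test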

By way of more recent news, the 2.6.39/3.0 results indicate little
change for ext4, either with or without journals.

Thanks,
Eric


2011-08-02 00:50:17

by Dave Chinner

Subject: Re: 2.6.39 and 3.0 scalability measurement results

On Sun, Jul 31, 2011 at 10:57:52PM -0400, Eric Whitney wrote:
> I've posted the results of my 2.6.38/2.6.39 and 2.6.39/3.0 ext4
> scalability measurements and comparisons on a 48 core x86_64 server
> at:
>
> http://free.linux.hp.com/~enw/ext4/2.6.39
>
> http://free.linux.hp.com/~enw/ext4/3.0
>
> The results include throughput and CPU efficiency graphs for five
> simple workloads, the raw data behind them, and lockstats as well.
>
> The data cover ext4 filesystems with and without journals. For
> reference, ext3, xfs, and btrfs are included as well.

Can you include the output of the mkfs programs so that we can see
what the structure of the filesystems is? That makes a big
difference when interpreting the XFS results.
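
Something along these lines would capture that, either at mkfs time or
after the fact for filesystems that already exist (device names and
mount points below are placeholders):

    # save the geometry the mkfs programs report
    mkfs.xfs -f /dev/sdX 2>&1 | tee mkfs.xfs.out
    mkfs.ext4 /dev/sdY 2>&1 | tee mkfs.ext4.out

    # or query existing filesystems
    xfs_info /mnt/xfs-test
    dumpe2fs -h /dev/sdY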

And FWIW, I'd be really interested to see the XFS results using the
inode64 mount option, rather than the not-really-ideal-for-multi-TB-
filesystems-but-used-historically-for-32-bit-application-
compatibility-reasons default of inode32.

inode64 drastically changes the layout of files and directories in
the filesystems, so I'd expect to see significant differences (good
and bad!) in the workloads using that option. We've been considering
changing it to be the default, so having some idea of how it
compares on your workloads would be an interesting discussion
point...
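
Since it's just a mount option, rerunning a configuration with it should
be cheap. A sketch, again with placeholder names:

    # historical default (inode32): inode numbers stay 32-bit safe, which
    # confines inodes to the low portion of large filesystems
    mount -t xfs /dev/sdX /mnt/test

    # inode64: inodes may be allocated anywhere, near the data they describe
    mount -t xfs -o inode64 /dev/sdX /mnt/test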

BTW, seeing as you are running against multiple different
filesystems, can you cc these emails to linux-fsdevel rather than
just the ext4 list? There is wider interest in your results than
just ext4 developers...

Cheers,

Dave.
--
Dave Chinner
[email protected]