2.4.5pre1 is the base for comparison, but because of privacy issues and company
policies, I can't give out anything but comparative numbers. These should
be enough to show what has been happening over the last couple of months of
kernel development.
The tests were performed on the same equipment and networks using the SPEC SFS
NFS benchmark. We have attempted to control as many variables as possible.
Hardware and configuration:
- 4 Pentium III Xeon processors, 4GB ram
- 45 fibre channel drives, set up in hardware RAID 0/1
- 2 direct Gigabit Ethernet connections between SPEC SFS prime client and
system under test
- reiserfs
- all NFS filesystems exported with sync,no_wdelay to ensure O_SYNC writes
  to storage (see the example export entry after this list)
- NFS v3 UDP
- LVM
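
For illustration only (the path and client name below are placeholders, not
the actual configuration), an /etc/exports entry of this form gives the
sync,no_wdelay behaviour described above:

    /export/sfs    sfsclient(rw,sync,no_wdelay)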
2.4.7 kernel series
2.4.7 56%
2.4.7 (patches: reiserfs osync and Arjan's high memory patch) 81%
2.4.7 (patches: Mark Hemment's performance(NFS kernel lock) and
Arjan's high memory patch) 79%
2.4.7 (patches: Mark Hemment's performance(NFS kernel lock) and
reiserfs osync) 67%
2.4.9 kernel series
2.4.9-ac10 48%
2.4.9-ac13 40%
2.4.9-ac13 (patches: Rik's page aging) 40%
2.4.9-ac15 52%
2.4.9-ac15 (patches: Rik's page aging and launder patches) 57%
2.4.9-ac16 56%
2.4.10-pre4 20%
2.4.10-pre8 40%
2.4.10-pre10 28%
2.4.10-pre10aa1 25%
2.4.10-pre11 33%
2.4.10-pre12 27%
2.4.10-pre12 (patches: reiserfs performance patch) 20%
2.4.10-pre13 13%
2.4.10-pre13 (patches: Linus' allocate patch) 28%
2.4.10pre14 43%
2.4.10 kernel series
2.4.10 46%
2.4.10 (patches: Andrea's vmtweak) 62%
2.4.10 (patches: Andrea's vmtweaks2) 62%
2.4.10-ac4 57%
2.4.11pre2 51%
2.4.11pre2 (patches: irq rewrite patch with default setting of
20000) 50%
2.4.11pre2 (patches: irq rewrite patch with setting of 10000) 50%
2.4.11pre2 (patches: irq rewrite patch with setting of 30000) 49%
2.4.11pre3aa1 50%
2.4.11pre6aa1 46%
2.4.11pre6aa1 (patches: uses the older LVM instead) 47%
2.4.12 kernel series
2.4.12 45%
2.4.13pre1 51%
2.4.12-ac2 54%
2.4.12-ac3 55%
Cary Dickens
Hewlett-Packard
"DICKENS,CARY (HP-Loveland,ex2)" wrote:
>
> 2.4.5pre1 is the base for comparison,
>
> [ figures showing that more recent kernels suck ]
>
SFS is a rather specialised workload, and synchronous NFS exports
are not a thing which gets a lot of attention. It could be one
small, hitherto unnoticed change which caused this performance
regression. And it appears that the change occurred between 2.4.5
and 2.4.7.
We don't know whether this slowdown is caused by changes in the VM,
the filesystem, the block device layer, nfsd or networking. For example,
ksoftirqd was introduced between 2.4.5 and 2.4.7. Could it be that?
For all these reasons it would be really helpful if you could
go back and test the 2.4.6-preX and 2.4.7-preX kernels (binary search)
and tell us if there was a particular release which caused this decrease in
throughput.
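For what it's worth, the bookkeeping for such a bisection is trivial; here is
a minimal sketch (the release list and the is_slow() predicate, i.e. one SPEC
SFS run per candidate kernel, are assumptions, not anything from the report
above):

    # Illustrative only: bisecting the regression between two kernel releases.
    def first_bad(releases, is_slow):
        """releases[0] is known good, releases[-1] is known bad.
        Returns the first release in the chronologically ordered list
        for which is_slow() reports the regression."""
        lo, hi = 0, len(releases) - 1
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if is_slow(releases[mid]):
                hi = mid    # regression already present at mid
            else:
                lo = mid    # mid still performs like the good baseline
        return releases[hi]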
If it can be pinned down to a particular patch then there's a good
chance that it can be fixed.
I'll put this on my priority list to look into. When I get the numbers,
I'll squeak again.
Thanks,
Cary
> -----Original Message-----
> From: Andrew Morton [mailto:[email protected]]
> Sent: Thursday, October 18, 2001 12:34 PM
> To: DICKENS,CARY (HP-Loveland,ex2)
> Cc: Kernel Mailing List (E-mail); HABBINGA,ERIK (HP-Loveland,ex1)
> Subject: Re: Kernel performance in reference to 2.4.5pre1
>
>
> "DICKENS,CARY (HP-Loveland,ex2)" wrote:
> >
> > 2.4.5pre1 is the base for comparison,
> >
> > [ figures showing that more recent kernels suck ]
> >
>
> SFS is a rather specialised workload, and synchronous NFS exports
> are not a thing which gets a lot of attention. It could be one
> small, hitherto unnoticed change which caused this performance
> regression. And it appears that the change occurred between 2.4.5
> and 2.4.7.
>
> We don't know whether this slowdown is caused by changes in the VM,
> the filesystem, the block device layer, nfsd or networking.
> For example,
> ksoftirqd was introduced between 2.4.5 and 2.4.7. Could it be that?
>
> For all these reasons it would be really helpful if you could
> go back and test the 2.4.6-preX and 2.4.7-preX kernels (binary search)
> and tell us if there was a particular release which caused
> this decrease in
> throughput.
>
> If it can be pinned down to a particular patch then there's a good
> chance that it can be fixed.
>
Andrew Morton [mailto:[email protected]] wrote:
> SFS is a rather specialised workload, and synchronous NFS exports
> are not a thing which gets a lot of attention. It could be one
> small, hitherto unnoticed change which caused this performance
> regression. And it appears that the change occurred between 2.4.5
> and 2.4.7.
Cary, also note that Andrew did some work with ext3 which can greatly
improve the performance of synchronous I/O. Granted, it doesn't fix
any performance issues in the VM or VFS that may have been introduced,
but if you are looking for good benchmark numbers, give ext3 a try.
Use a large journal to avoid journal flushes for sync I/O. See:
http://marc.theaimsgroup.com/?l=linux-kernel&m=99650624414465&w=4
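As an illustration only (the device name, mount point and journal size below
are placeholders; check your e2fsprogs for the exact journal-size syntax),
something along these lines creates an ext3 filesystem with a large journal
so that O_SYNC writes rarely have to wait for a journal flush:

    mke2fs -j -J size=400 /dev/sdXX
    mount -t ext3 /dev/sdXX /export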
Cheers, Andreas
--
Andreas Dilger \ "If a man ate a pound of pasta and a pound of antipasto,
\ would they cancel out, leaving him still hungry?"
http://www-mddsp.enel.ucalgary.ca/People/adilger/ -- Dogbert