2007-10-26 14:25:31

by Martin Knoblauch

Subject: 2.6.24-rc1: First impressions

Hi,

just to give some feedback on 2.6.24-rc1. For some time I have been tracking IO/writeback problems that hurt system responsiveness big-time. I tested Peter's stuff together with Fengguang's additions and it looked promising. Therefore I was very happy to see Peter's stuff going into 2.6.24 and waited eagerly for rc1. In short, I am impressed. This really looks good. IO throughput is great, and so far I could not reproduce the responsiveness problems.

Below are some numbers from my brute-force I/O tests that I can use to bring responsiveness down. My platform is an HP DL380 G4, dual CPUs, HT enabled, 8 GB memory, a SmartArray 6i controller with 4x72 GB SCSI disks as RAID5 (battery-protected writeback cache enabled) and gigabit networking (tg3). User space is 64-bit RHEL4.3.

I am basically doing copies using "dd" with a 1 MB blocksize. The local filesystem is ext2 (noatime). The I/O scheduler is deadline, as it tends to give the best results. The NFS3 server is a Sun T2000 running Solaris 10. The tests are (sketched roughly below the list):

dd1 - copy 16 GB from /dev/zero to local FS
dd1-dir - same, but using O_DIRECT for output
dd2/dd2-dir - copy 2x7.6 GB in parallel from /dev/zero to local FS
dd3/dd3-dir - copy 3x5.2 GB in parallel from /dev/zero to local FS
net1 - copy 5.2 GB from NFS3 share to local FS
mix3 - copy 3x5.2 GB from /dev/zero to local disk and two NFS3 shares
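
For reference, the runs look roughly like the following sketch. The paths, file names and exact dd options are assumptions for illustration, not my actual script:

  # dd1: 16 GB sequential write through the page cache
  dd if=/dev/zero of=/scratch/dd1.out bs=1M count=16384

  # dd1-dir: same, but the output is opened with O_DIRECT
  # (oflag=direct needs a dd that supports open flags)
  dd if=/dev/zero of=/scratch/dd1-dir.out bs=1M count=16384 oflag=direct

  # dd3: three 5.2 GB (~5324 MB) writers in parallel
  for i in 1 2 3; do
      dd if=/dev/zero of=/scratch/dd3.$i.out bs=1M count=5324 &
  done
  wait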

I did the numbers for 2.6.19.2, 2.6.22.6 and 2.6.24-rc1. All units are MB/sec.

test      2.6.19.2   2.6.22.6   2.6.24-rc1
----------------------------------------------
dd1       28         50         96
dd1-dir   88         88         86
dd2       2x16.5     2x11       2x44.5
dd2-dir   2x44       2x44       2x43
dd3       3x9.8      3x8.7      3x30
dd3-dir   3x29.5     3x29.5     3x28.5
net1      30-33      50-55      37-52
mix3      17/32      25/50      96/35   (disk/combined-network)


Some observations:

- single-threaded disk speed really went up with 2.6.24-rc1; it is now even better than O_DIRECT
- O_DIRECT took a slight hit compared to the older kernels; not an issue for me, but maybe others care
- multi-threaded non-O_DIRECT scales for the first time ever, with almost no loss compared to single-threaded!
- network throughput took a hit relative to 2.6.22.6 and is not as repeatable; still better than 2.6.19.2, though

What actually surprises me most is the big performance win on the single-threaded non-O_DIRECT dd test. I did not expect that :-) What I had hoped for was, of course, the scalability.

So, this looks great, and most likely I will push 2.6.24 (maybe .X) into my environment.

Happy weekend
Martin

------------------------------------------------------
Martin Knoblauch
email: k n o b i AT knobisoft DOT de
www: http://www.knobisoft.de



2007-10-26 15:22:52

by Ingo Molnar

Subject: Re: 2.6.24-rc1: First impressions


* Martin Knoblauch <[email protected]> wrote:

> Hi,
>
> just to give some feedback on 2.6.24-rc1. For some time I have been
> tracking IO/writeback problems that hurt system responsiveness
> big-time. I tested Peter's stuff together with Fengguang's additions
> and it looked promising. Therefore I was very happy to see Peter's
> stuff going into 2.6.24 and waited eagerly for rc1. In short, I am
> impressed. This really looks good. IO throughput is great, and so far
> I could not reproduce the responsiveness problems.
>
> Below are some numbers from my brute-force I/O tests that I can use
> to bring responsiveness down. My platform is an HP DL380 G4, dual
> CPUs, HT enabled, 8 GB memory, a SmartArray 6i controller with
> 4x72 GB SCSI disks as RAID5 (battery-protected writeback cache
> enabled) and gigabit networking (tg3). User space is 64-bit RHEL4.3.
>
> I am basically doing copies using "dd" with a 1 MB blocksize. The
> local filesystem is ext2 (noatime). The I/O scheduler is deadline,
> as it tends to give the best results. The NFS3 server is a Sun T2000
> running Solaris 10. The tests are:
>
> dd1 - copy 16 GB from /dev/zero to local FS
> dd1-dir - same, but using O_DIRECT for output
> dd2/dd2-dir - copy 2x7.6 GB in parallel from /dev/zero to local FS
> dd3/dd3-dir - copy 3x5.2 GB in parallel from /dev/zero to local FS
> net1 - copy 5.2 GB from NFS3 share to local FS
> mix3 - copy 3x5.2 GB from /dev/zero to local disk and two NFS3 shares
>
> I did the numbers for 2.6.19.2, 2.6.22.6 and 2.6.24-rc1. All units
> are MB/sec.
>
> test      2.6.19.2   2.6.22.6   2.6.24-rc1
> ----------------------------------------------
> dd1       28         50         96
> dd1-dir   88         88         86
> dd2       2x16.5     2x11       2x44.5
> dd2-dir   2x44       2x44       2x43
> dd3       3x9.8      3x8.7      3x30
> dd3-dir   3x29.5     3x29.5     3x28.5
> net1      30-33      50-55      37-52
> mix3      17/32      25/50      96/35   (disk/combined-network)

wow, really nice results! Peter does know how to make stuff fast :) Now
let's pick up some of Peter's other, previously discarded patches as
well :-)

Such as the rewritten reclaim (clockpro) patches:

http://programming.kicks-ass.net/kernel-patches/page-replace/

The improve-swap-performance (swap-token) patches:

http://programming.kicks-ass.net/kernel-patches/swap_token/

His enable-swap-over-NFS [and other complex IO transports] patches:

http://programming.kicks-ass.net/kernel-patches/vm_deadlock/

And the concurrent pagecache patches:

http://programming.kicks-ass.net/kernel-patches/concurrent-pagecache/

as a starter :-) I think the MM should get out of deep-feature-freeze
mode - there's tons of room to improve :-/

Ingo "runs and hides" Molnar

2007-10-26 15:29:19

by Peter Zijlstra

Subject: Re: 2.6.24-rc1: First impressions

On Fri, 2007-10-26 at 17:22 +0200, Ingo Molnar wrote:
> * Martin Knoblauch <[email protected]> wrote:
>
> > Hi,
> >
> > just to give some feedback on 2.6.24-rc1. For some time I have been
> > tracking IO/writeback problems that hurt system responsiveness
> > big-time. I tested Peter's stuff together with Fengguang's additions
> > and it looked promising. Therefore I was very happy to see Peter's
> > stuff going into 2.6.24 and waited eagerly for rc1. In short, I am
> > impressed. This really looks good. IO throughput is great, and so
> > far I could not reproduce the responsiveness problems.
> >
> > Below are some numbers from my brute-force I/O tests that I can use
> > to bring responsiveness down. My platform is an HP DL380 G4, dual
> > CPUs, HT enabled, 8 GB memory, a SmartArray 6i controller with
> > 4x72 GB SCSI disks as RAID5 (battery-protected writeback cache
> > enabled) and gigabit networking (tg3). User space is 64-bit RHEL4.3.
> >
> > I am basically doing copies using "dd" with a 1 MB blocksize. The
> > local filesystem is ext2 (noatime). The I/O scheduler is deadline,
> > as it tends to give the best results. The NFS3 server is a Sun T2000
> > running Solaris 10. The tests are:
> >
> > dd1 - copy 16 GB from /dev/zero to local FS
> > dd1-dir - same, but using O_DIRECT for output
> > dd2/dd2-dir - copy 2x7.6 GB in parallel from /dev/zero to local FS
> > dd3/dd3-dir - copy 3x5.2 GB in parallel from /dev/zero to local FS
> > net1 - copy 5.2 GB from NFS3 share to local FS
> > mix3 - copy 3x5.2 GB from /dev/zero to local disk and two NFS3 shares
> >
> > I did the numbers for 2.6.19.2, 2.6.22.6 and 2.6.24-rc1. All units
> > are MB/sec.
> >
> > test      2.6.19.2   2.6.22.6   2.6.24-rc1
> > ----------------------------------------------
> > dd1       28         50         96
> > dd1-dir   88         88         86
> > dd2       2x16.5     2x11       2x44.5
> > dd2-dir   2x44       2x44       2x43
> > dd3       3x9.8      3x8.7      3x30
> > dd3-dir   3x29.5     3x29.5     3x28.5
> > net1      30-33      50-55      37-52
> > mix3      17/32      25/50      96/35   (disk/combined-network)
>
> wow, really nice results! Peter does know how to make stuff fast :)
> Now let's pick up some of Peter's other, previously discarded patches
> as well :-)
>
> Such as the rewritten reclaim (clockpro) patches:
>
> http://programming.kicks-ass.net/kernel-patches/page-replace/

I think riel is taking over that stuff with his split vm and policies
per type.

> The improve-swap-performance (swap-token) patches:
>
> http://programming.kicks-ass.net/kernel-patches/swap_token/

Ashwin's version did get upstreamed.

> His enable-swap-over-NFS [and other complex IO transports] patches:
>
> http://programming.kicks-ass.net/kernel-patches/vm_deadlock/

Will post that one again soonish, especially since Linus professed to
liking swap over NFS.

I've been working on improving the changelogs and comments in that code.

The latest code (somewhat raw, as it was rushed out by Ingo posting
this) is at:
http://programming.kicks-ass.net/kernel-patches/vm_deadlock/v2.6.23-mm1/

> And the concurrent pagecache patches:
>
> http://programming.kicks-ass.net/kernel-patches/concurrent-pagecache/
>
> as a starter :-) I think the MM should get out of deep-feature-freeze
> mode - there's tons of room to improve :-/

Yeah, that one would be cool, but it depends on Nick getting his
lockless pagecache upstream. For those who don't know, both are in -rt
(and have been for some time) so it's not unproven code.



2007-10-26 15:49:41

by Rik van Riel

Subject: Re: 2.6.24-rc1: First impressions

On Fri, 26 Oct 2007 17:29:00 +0200
Peter Zijlstra <[email protected]> wrote:

> > wow, really nice results! Peter does know how to make stuff fast :)
> > Now let's pick up some of Peter's other, previously discarded
> > patches as well :-)
> >
> > Such as the rewritten reclaim (clockpro) patches:
> >
> > http://programming.kicks-ass.net/kernel-patches/page-replace/
>
> I think riel is taking over that stuff with his split vm and policies
> per type.

I am. Taking every single reference to a page into account simply
won't scale to systems with 1TB of RAM. This is why I am working
on implementing:

http://linux-mm.org/PageReplacementDesign

At the moment I only have the basic "plumbing" of the split VM
working and am fixing some bugs in that. Expect a patch series
with that soon, so you guys can review that code and tell me
where to beat it into shape some more :)

After that I will work on the policy bits, where we can really
get performance benefits. The patch series should be mergeable
in smaller increments, so we can take things slowly if desired.

--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan

2007-10-26 19:23:06

by Andrew Morton

Subject: Re: 2.6.24-rc1: First impressions

On Fri, 26 Oct 2007 17:22:21 +0200
Ingo Molnar <[email protected]> wrote:

>
> * Martin Knoblauch <[email protected]> wrote:
>
> > Hi,
> >
> > just to give some feedback on 2.6.24-rc1. For some time I have been
> > tracking IO/writeback problems that hurt system responsiveness
> > big-time. I tested Peter's stuff together with Fengguang's additions
> > and it looked promising. Therefore I was very happy to see Peter's
> > stuff going into 2.6.24 and waited eagerly for rc1. In short, I am
> > impressed. This really looks good. IO throughput is great, and so
> > far I could not reproduce the responsiveness problems.
> >
> > Below are some numbers from my brute-force I/O tests that I can use
> > to bring responsiveness down. My platform is an HP DL380 G4, dual
> > CPUs, HT enabled, 8 GB memory, a SmartArray 6i controller with
> > 4x72 GB SCSI disks as RAID5 (battery-protected writeback cache
> > enabled) and gigabit networking (tg3). User space is 64-bit RHEL4.3.
> >
> > I am basically doing copies using "dd" with a 1 MB blocksize. The
> > local filesystem is ext2 (noatime). The I/O scheduler is deadline,
> > as it tends to give the best results. The NFS3 server is a Sun T2000
> > running Solaris 10. The tests are:
> >
> > dd1 - copy 16 GB from /dev/zero to local FS
> > dd1-dir - same, but using O_DIRECT for output
> > dd2/dd2-dir - copy 2x7.6 GB in parallel from /dev/zero to local FS
> > dd3/dd3-dir - copy 3x5.2 GB in parallel from /dev/zero to local FS
> > net1 - copy 5.2 GB from NFS3 share to local FS
> > mix3 - copy 3x5.2 GB from /dev/zero to local disk and two NFS3 shares
> >
> > I did the numbers for 2.6.19.2, 2.6.22.6 and 2.6.24-rc1. All units
> > are MB/sec.
> >
> > test      2.6.19.2   2.6.22.6   2.6.24-rc1
> > ----------------------------------------------
> > dd1       28         50         96
> > dd1-dir   88         88         86
> > dd2       2x16.5     2x11       2x44.5
> > dd2-dir   2x44       2x44       2x43
> > dd3       3x9.8      3x8.7      3x30
> > dd3-dir   3x29.5     3x29.5     3x28.5
> > net1      30-33      50-55      37-52
> > mix3      17/32      25/50      96/35   (disk/combined-network)
>
> wow, really nice results!

Those changes seem suspiciously large to me. I wonder if there's less
physical IO happening during the timed run, and correspondingly more
afterwards.

> I think the MM should get out of deep-feature-freeze
> mode - there's tons of room to improve :-/

Kidding. We merged about 265 MM patches into 2.6.24-rc1:

482 files changed, 8071 insertions(+), 5142 deletions(-)

2007-10-26 19:34:11

by Ingo Molnar

Subject: Re: 2.6.24-rc1: First impressions


* Andrew Morton <[email protected]> wrote:

> > > dd1 - copy 16 GB from /dev/zero to local FS
> > > dd1-dir - same, but using O_DIRECT for output
> > > dd2/dd2-dir - copy 2x7.6 GB in parallel from /dev/zero to local FS
> > > dd3/dd3-dir - copy 3x5.2 GB in parallel from /dev/zero to local FS
> > > net1 - copy 5.2 GB from NFS3 share to local FS
> > > mix3 - copy 3x5.2 GB from /dev/zero to local disk and two NFS3 shares
> > >
> > > I did the numbers for 2.6.19.2, 2.6.22.6 and 2.6.24-rc1. All units
> > > are MB/sec.
> > >
> > > test      2.6.19.2   2.6.22.6   2.6.24-rc1
> > > ----------------------------------------------
> > > dd1       28         50         96
> > > dd1-dir   88         88         86
> > > dd2       2x16.5     2x11       2x44.5
> > > dd2-dir   2x44       2x44       2x43
> > > dd3       3x9.8      3x8.7      3x30
> > > dd3-dir   3x29.5     3x29.5     3x28.5
> > > net1      30-33      50-55      37-52
> > > mix3      17/32      25/50      96/35   (disk/combined-network)
> >
> > wow, really nice results!
>
> Those changes seem suspiciously large to me. I wonder if there's less
> physical IO happening during the timed run, and correspondingly more
> afterwards.

so a final 'sync' should be added to the test too, and the time it takes
factored into the bandwidth numbers?

> > I think the MM should get out of deep-feature-freeze mode - there's
> > tons of room to improve :-/
>
> Kidding. We merged about 265 MM patches into 2.6.24-rc1:
>
> 482 files changed, 8071 insertions(+), 5142 deletions(-)

impressive :)

Ingo

2007-10-26 19:43:45

by Andrew Morton

Subject: Re: 2.6.24-rc1: First impressions

On Fri, 26 Oct 2007 21:33:40 +0200
Ingo Molnar <[email protected]> wrote:

>
> * Andrew Morton <[email protected]> wrote:
>
> > > > dd1 - copy 16 GB from /dev/zero to local FS
> > > > dd1-dir - same, but using O_DIRECT for output
> > > > dd2/dd2-dir - copy 2x7.6 GB in parallel from /dev/zero to local FS
> > > > dd3/dd3-dir - copy 3x5.2 GB in parallel from /dev/zero to local FS
> > > > net1 - copy 5.2 GB from NFS3 share to local FS
> > > > mix3 - copy 3x5.2 GB from /dev/zero to local disk and two NFS3 shares
> > > >
> > > > I did the numbers for 2.6.19.2, 2.6.22.6 and 2.6.24-rc1. All units
> > > > are MB/sec.
> > > >
> > > > test      2.6.19.2   2.6.22.6   2.6.24-rc1
> > > > ----------------------------------------------
> > > > dd1       28         50         96
> > > > dd1-dir   88         88         86
> > > > dd2       2x16.5     2x11       2x44.5
> > > > dd2-dir   2x44       2x44       2x43
> > > > dd3       3x9.8      3x8.7      3x30
> > > > dd3-dir   3x29.5     3x29.5     3x28.5
> > > > net1      30-33      50-55      37-52
> > > > mix3      17/32      25/50      96/35   (disk/combined-network)
> > >
> > > wow, really nice results!
> >
> > Those changes seem suspiciously large to me. I wonder if there's less
> > physical IO happening during the timed run, and correspondingly more
> > afterwards.
>
> so a final 'sync' should be added to the test too, and the time it takes
> factored into the bandwidth numbers?

That's one way of doing it. Or just run the test for a "long" time,
i.e. much longer than (total-memory / disk-bandwidth). Probably the
latter will give a more accurate result, but it can get boring.
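
Concretely, folding the final writeback into the measured figure could look roughly like this (a sketch; the size and output path are assumptions):

  # time the write plus the flush of all remaining dirty data
  t0=$(date +%s)
  dd if=/dev/zero of=/scratch/dd1.out bs=1M count=16384
  sync
  t1=$(date +%s)
  echo "effective bandwidth: $(( 16384 / (t1 - t0) )) MB/s"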

> > > I think the MM should get out of deep-feature-freeze mode - there's
> > > tons of room to improve :-/
> >
> > Kidding. We merged about 265 MM patches into 2.6.24-rc1:
> >
> > 482 files changed, 8071 insertions(+), 5142 deletions(-)
>
> impressive :)

A lot of that was new functionality. That's easier to add than things
which change long-standing functionality.

2007-10-27 05:50:58

by Arjan van de Ven

Subject: Re: 2.6.24-rc1: First impressions

On Fri, 26 Oct 2007 12:21:55 -0700
Andrew Morton <[email protected]> wrote:

> On Fri, 26 Oct 2007 17:22:21 +0200
> Ingo Molnar <[email protected]> wrote:
>
> >
> > * Martin Knoblauch <[email protected]> wrote:
> >
> > > Hi,
> > >
> > > just to give some feedback on 2.6.24-rc1. For some time I have
> > > been tracking IO/writeback problems that hurt system
> > > responsiveness big-time. I tested Peter's stuff together with
> > > Fengguang's additions and it looked promising. Therefore I was
> > > very happy to see Peter's stuff going into 2.6.24 and waited
> > > eagerly for rc1. In short, I am impressed. This really looks
> > > good. IO throughput is great, and so far I could not reproduce
> > > the responsiveness problems.
> > >
> > > Below are some numbers from my brute-force I/O tests that I can
> > > use to bring responsiveness down. My platform is an HP DL380 G4,
> > > dual CPUs, HT enabled, 8 GB memory, a SmartArray 6i controller
> > > with 4x72 GB SCSI disks as RAID5 (battery-protected writeback
> > > cache enabled) and gigabit networking (tg3). User space is
> > > 64-bit RHEL4.3.
> > >
> > > I am basically doing copies using "dd" with a 1 MB blocksize.
> > > The local filesystem is ext2 (noatime). The I/O scheduler is
> > > deadline, as it tends to give the best results. The NFS3 server
> > > is a Sun T2000 running Solaris 10. The tests are:
> > >
> > > dd1 - copy 16 GB from /dev/zero to local FS
> > > dd1-dir - same, but using O_DIRECT for output
> > > dd2/dd2-dir - copy 2x7.6 GB in parallel from /dev/zero to local FS
> > > dd3/dd3-dir - copy 3x5.2 GB in parallel from /dev/zero to local FS
> > > net1 - copy 5.2 GB from NFS3 share to local FS
> > > mix3 - copy 3x5.2 GB from /dev/zero to local disk and two NFS3 shares
> > >
> > > I did the numbers for 2.6.19.2, 2.6.22.6 and 2.6.24-rc1. All
> > > units are MB/sec.
> > >
> > > test      2.6.19.2   2.6.22.6   2.6.24-rc1
> > > ----------------------------------------------
> > > dd1       28         50         96
> > > dd1-dir   88         88         86
> > > dd2       2x16.5     2x11       2x44.5
> > > dd2-dir   2x44       2x44       2x43
> > > dd3       3x9.8      3x8.7      3x30
> > > dd3-dir   3x29.5     3x29.5     3x28.5
> > > net1      30-33      50-55      37-52
> > > mix3      17/32      25/50      96/35   (disk/combined-network)
> >
> > wow, really nice results!
>
> Those changes seem suspiciously large to me. I wonder if there's less
> physical IO happening during the timed run, and correspondingly more
> afterwards.
>

Another option: this is ext2. Didn't the ext2 reservation stuff get
merged in -rc1? For ext3 that gave a 4x or so speed boost (a much
better sequential allocation pattern).

(or maybe I'm just wrong)

2007-10-27 06:00:51

by Andrew Morton

Subject: Re: 2.6.24-rc1: First impressions

On Fri, 26 Oct 2007 22:46:57 -0700 Arjan van de Ven <[email protected]> wrote:

> > > > dd1 - copy 16 GB from /dev/zero to local FS
> > > > dd1-dir - same, but using O_DIRECT for output
> > > > dd2/dd2-dir - copy 2x7.6 GB in parallel from /dev/zero to local FS
> > > > dd3/dd3-dir - copy 3x5.2 GB in parallel from /dev/zero to local FS
> > > > net1 - copy 5.2 GB from NFS3 share to local FS
> > > > mix3 - copy 3x5.2 GB from /dev/zero to local disk and two NFS3 shares
> > > >
> > > > I did the numbers for 2.6.19.2, 2.6.22.6 and 2.6.24-rc1. All
> > > > units are MB/sec.
> > > >
> > > > test      2.6.19.2   2.6.22.6   2.6.24-rc1
> > > > ----------------------------------------------
> > > > dd1       28         50         96
> > > > dd1-dir   88         88         86
> > > > dd2       2x16.5     2x11       2x44.5
> > > > dd2-dir   2x44       2x44       2x43
> > > > dd3       3x9.8      3x8.7      3x30
> > > > dd3-dir   3x29.5     3x29.5     3x28.5
> > > > net1      30-33      50-55      37-52
> > > > mix3      17/32      25/50      96/35   (disk/combined-network)
> > >
> > > wow, really nice results!
> >
> > Those changes seem suspiciously large to me. I wonder if there's less
> > physical IO happening during the timed run, and correspondingly more
> > afterwards.
> >
>
> Another option: this is ext2. Didn't the ext2 reservation stuff get
> merged in -rc1? For ext3 that gave a 4x or so speed boost (a much
> better sequential allocation pattern).
>

Yes, one would expect that to make a large difference in dd2/dd2-dir and
dd3/dd3-dir - but only on SMP. On UP there's not enough concurrency in the
fs block allocator for any damage to occur.

Reservations won't affect dd1 though, and that went faster too.

2007-10-27 19:04:20

by Bill Davidsen

Subject: Re: 2.6.24-rc1: First impressions

Andrew Morton wrote:
> On Fri, 26 Oct 2007 21:33:40 +0200
> Ingo Molnar <[email protected]> wrote:
>
>> * Andrew Morton <[email protected]> wrote:
>>

>> so a final 'sync' should be added to the test too, and the time it takes
>> factored into the bandwidth numbers?
>
> That's one way of doing it. Or just run the test for a "long" time. ie:
> much longer than (total-memory / disk-bandwidth). Probably the latter
> will give a more accurate result, but it can get boring.
>
Longer might be less inaccurate, but without flushing the last data you
really don't get the best accuracy, you just reduce the error. Doing
fdatasync() is clearly best, since other I/O flushed by sync() can skew
the results.
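
With a new enough dd this can be done directly (a sketch; conv=fdatasync was added in coreutils 6.0, and the path and size here are assumptions):

  # dd calls fdatasync() on the output file before exiting, so only
  # this file's data is flushed and unrelated dirty data flushed by a
  # global sync cannot skew the figure
  dd if=/dev/zero of=/scratch/dd1.out bs=1M count=16384 conv=fdatasync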

--
Bill Davidsen <[email protected]>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot

2007-10-29 08:30:08

by Martin Knoblauch

Subject: Re: 2.6.24-rc1: First impressions

----- Original Message ----
> From: Andrew Morton <[email protected]>
> To: Arjan van de Ven <[email protected]>
> Cc: Ingo Molnar <[email protected]>; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]
> Sent: Saturday, October 27, 2007 7:59:51 AM
> Subject: Re: 2.6.24-rc1: First impressions
>
> On Fri, 26 Oct 2007 22:46:57 -0700 Arjan van de Ven <[email protected]> wrote:
>
> > > > > dd1 - copy 16 GB from /dev/zero to local FS
> > > > > dd1-dir - same, but using O_DIRECT for output
> > > > > dd2/dd2-dir - copy 2x7.6 GB in parallel from /dev/zero to local FS
> > > > > dd3/dd3-dir - copy 3x5.2 GB in parallel from /dev/zero to local FS
> > > > > net1 - copy 5.2 GB from NFS3 share to local FS
> > > > > mix3 - copy 3x5.2 GB from /dev/zero to local disk and two NFS3 shares
> > > > >
> > > > > I did the numbers for 2.6.19.2, 2.6.22.6 and 2.6.24-rc1. All
> > > > > units are MB/sec.
> > > > >
> > > > > test      2.6.19.2   2.6.22.6   2.6.24-rc1
> > > > > ----------------------------------------------
> > > > > dd1       28         50         96
> > > > > dd1-dir   88         88         86
> > > > > dd2       2x16.5     2x11       2x44.5
> > > > > dd2-dir   2x44       2x44       2x43
> > > > > dd3       3x9.8      3x8.7      3x30
> > > > > dd3-dir   3x29.5     3x29.5     3x28.5
> > > > > net1      30-33      50-55      37-52
> > > > > mix3      17/32      25/50      96/35   (disk/combined-network)
> > > >
> > > > wow, really nice results!
> > >
> > > Those changes seem suspiciously large to me. I wonder if there's
> > > less physical IO happening during the timed run, and
> > > correspondingly more afterwards.
> >
> > Another option: this is ext2. Didn't the ext2 reservation stuff get
> > merged in -rc1? For ext3 that gave a 4x or so speed boost (a much
> > better sequential allocation pattern).
>
> Yes, one would expect that to make a large difference in dd2/dd2-dir
> and dd3/dd3-dir - but only on SMP. On UP there's not enough
> concurrency in the fs block allocator for any damage to occur.
>

Just for the record: the tests are done on SMP.

> Reservations won't affect dd1 though, and that went faster too.
>

This is the one result that surprised me most, as I did not really expect any big movement here. I am not complaining :-), but it would definitely be nice to understand why.

Cheers
Martin


2007-10-29 11:10:07

by Martin Knoblauch

Subject: Re: 2.6.24-rc1: First impressions

----- Original Message ----
> From: Ingo Molnar <[email protected]>
> To: Andrew Morton <[email protected]>
> Cc: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]
> Sent: Friday, October 26, 2007 9:33:40 PM
> Subject: Re: 2.6.24-rc1: First impressions
>
>
> * Andrew Morton wrote:
>
> > > > dd1 - copy 16 GB from /dev/zero to local FS
> > > > dd1-dir - same, but using O_DIRECT for output
> > > > dd2/dd2-dir - copy 2x7.6 GB in parallel from /dev/zero to local FS
> > > > dd3/dd3-dir - copy 3x5.2 GB in parallel from /dev/zero to local FS
> > > > net1 - copy 5.2 GB from NFS3 share to local FS
> > > > mix3 - copy 3x5.2 GB from /dev/zero to local disk and two NFS3 shares
> > > >
> > > > I did the numbers for 2.6.19.2, 2.6.22.6 and 2.6.24-rc1. All
> > > > units are MB/sec.
> > > >
> > > > test      2.6.19.2   2.6.22.6   2.6.24-rc1
> > > > ----------------------------------------------
> > > > dd1       28         50         96
> > > > dd1-dir   88         88         86
> > > > dd2       2x16.5     2x11       2x44.5
> > > > dd2-dir   2x44       2x44       2x43
> > > > dd3       3x9.8      3x8.7      3x30
> > > > dd3-dir   3x29.5     3x29.5     3x28.5
> > > > net1      30-33      50-55      37-52
> > > > mix3      17/32      25/50      96/35   (disk/combined-network)
> > >
> > > wow, really nice results!
> >
> > Those changes seem suspiciously large to me. I wonder if there's less
> > physical IO happening during the timed run, and correspondingly more
> > afterwards.
>
> so a final 'sync' should be added to the test too, and the time it
> takes factored into the bandwidth numbers?
>

One of the reasons I do 15 GB transfers is to make sure that I am well above the possible page cache size. And of course I am doing a final sync to finish the runs :-) The sync is also running faster in 2.6.24-rc1.

If I factor it in, the results for dd1/dd3 are:

test        2.6.19.2   2.6.22.6   2.6.24-rc1
sync time   18 sec     19 sec     6 sec
dd1         27.5       47.5       92
dd3         3x9.1      3x8.5      3x29

So basically, including the sync time makes 2.6.24-rc1 look even more promising. I know that my benchmark numbers are crude and show only a very small aspect of system performance, but it is an aspect I care about a lot, and these benchmarks match my use case pretty well.
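
As a rough cross-check of the adjusted dd1 figure (shell arithmetic; 16 GB taken as 16384 MB):

  # 16384 MB at 96 MB/s is ~171 s; adding the 6 s sync gives ~92 MB/s
  awk 'BEGIN { t = 16384/96 + 6; printf "%.1f MB/s\n", 16384/t }'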

Cheers
Martin





2007-10-29 11:41:22

by Ingo Molnar

Subject: Re: 2.6.24-rc1: First impressions


* Martin Knoblauch <[email protected]> wrote:

> One of the reasons I do 15 GB transfers is to make sure that I am
> well above the possible page cache size. And of course I am doing a
> final sync to finish the runs :-) The sync is also running faster in
> 2.6.24-rc1.
>
> If I factor it in, the results for dd1/dd3 are:
>
> test        2.6.19.2   2.6.22.6   2.6.24-rc1
> sync time   18 sec     19 sec     6 sec
> dd1         27.5       47.5       92
> dd3         3x9.1      3x8.5      3x29
>
> So basically, including the sync time makes 2.6.24-rc1 look even more
> promising. I know that my benchmark numbers are crude and show only a
> very small aspect of system performance, but it is an aspect I care
> about a lot, and these benchmarks match my use case pretty well.

indeed. I'm even more impressed :)

Ingo