2003-05-27 22:54:13

by Joel Becker

Subject: WimMark I report for 2.5.70-mm1

WimMark I report for 2.5.70-mm1

Runs (deadline): 717.27 1064.57 1089.13
Runs (anticipatory): 1342.93 1121.47 1330.42

WimMark I is a rough benchmark we have been running
here at Oracle against various kernels. Each run tests an OLTP
workload on the Oracle database with somewhat restrictive memory
conditions. This reduces in-memory buffering of data, allowing for
more I/O. The I/O is read and sync write, random and seek-laden. The
runs all do ramp-up work to populate caches and the like.
The benchmark is called "WimMark I" because it has no
official standing and is only a relative benchmark useful for comparing
kernel changes. The benchmark is normalized to an arbitrary kernel, which
scores 1000.0. All other numbers are relative to this baseline. A bigger
number is a better number. All things being equal, a delta < 50 is close
to unimportant, and a delta < 20 is essentially identical.
This benchmark is sensitive to random system events. For that
reason I do three runs. If two runs are nearly identical and the
remaining run is way off, that run should probably be ignored (it is
often a low number, signifying that something on the system impacted
the benchmark).
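As a rough sketch only (this is not the actual WimMark harness), the
normalization and three-run sanity check described above could be expressed
in Python roughly as follows; the function names, the 50-point tolerance,
and the outlier rule are assumptions drawn solely from the description
above.

def wimmark_score(throughput, baseline_throughput):
    # The reference kernel scores 1000.0 by definition; every other
    # result is expressed relative to it.
    return 1000.0 * throughput / baseline_throughput

def drop_outlier(runs, tolerance=50.0):
    # If two of the three runs agree within the "unimportant" delta and
    # the third is far off, discard the odd one out (usually a low
    # number caused by something else hitting the system).
    assert len(runs) == 3
    s = sorted(runs)
    if s[2] - s[1] <= tolerance < s[1] - s[0]:
        return s[1:]        # lowest run is the outlier
    if s[1] - s[0] <= tolerance < s[2] - s[1]:
        return s[:2]        # highest run is the outlier
    return runs             # no clear outlier; keep all three

# Example with the deadline numbers from this report:
print(drop_outlier([717.27, 1064.57, 1089.13]))   # -> [1064.57, 1089.13]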
The machine in question is a 4-way 700 MHz Xeon machine with 2GB
of RAM. CONFIG_HIGHMEM4GB is selected. The disk accessed for data is a
10K RPM U2W SCSI of similar vintage. The data files are living on an
ext3 filesystem. Unless otherwise mentioned, all runs are
on this machine (variation in hardware would indeed change the
benchmark numbers).
WimMark I run results are archived at
http://oss.oracle.com/~jlbec/wimmark/wimmark_I.html

--

"Reader, suppose you were and idiot. And suppose you were a member of
Congress. But I repeat myself."
- Mark Twain

Joel Becker
Senior Member of Technical Staff
Oracle Corporation
E-mail: [email protected]
Phone: (650) 506-8127


2003-05-27 23:35:16

by Andrew Morton

Subject: Re: WimMark I report for 2.5.70-mm1

Joel Becker <[email protected]> wrote:
>
> WimMark I report for 2.5.70
>
> Runs: 1005.78 958.80 947.23
>
> WimMark I report for 2.5.70-mm1
>
> Runs (deadline): 717.27 1064.57 1089.13
> Runs (anticipatory): 1342.93 1121.47 1330.42
> ...
> WimMark I run results are archived at
> http://oss.oracle.com/~jlbec/wimmark/wimmark_I.html

This is nuts. WimMark keeps on showing 2:1 swings in throughput when no
other test shows any variation at all. I simply do not know what to make
of it.

Your results would appear to indicate that the regression between
2.5.69-mm5 and 2.5.69-mm8 was actually due to something in Linus's tree,
and it is now in 2.5.70.

I have an interdiff here between the linus.patch from mm5 and mm8 and it
contains nothing very interesting.

It's at http://www.zip.com.au/~akpm/linux/patches/stuff/wimmark-interdiff.txt

The actual diff is at
http://www.zip.com.au/~akpm/linux/patches/stuff/wimmark-interdiff.patch.gz

There is the bio split stuff in ll_rw_blk.c, but that shouldn't matter.

Which device driver are you using?

2003-05-27 23:51:58

by Joel Becker

Subject: Re: WimMark I report for 2.5.70-mm1

On Tue, May 27, 2003 at 04:46:12PM -0700, Andrew Morton wrote:
> Which device driver are you using?

sym53c8xx

--

"Always give your best, never get discouraged, never be petty; always
remember, others may hate you. Those who hate you don't win unless
you hate them. And then you destroy yourself."
- Richard M. Nixon

Joel Becker
Senior Member of Technical Staff
Oracle Corporation
E-mail: [email protected]
Phone: (650) 506-8127

2003-05-28 00:24:36

by Nick Piggin

Subject: Re: WimMark I report for 2.5.70-mm1



Andrew Morton wrote:

>Joel Becker <[email protected]> wrote:
>
>>WimMark I report for 2.5.70
>>
>>Runs: 1005.78 958.80 947.23
>>
>>WimMark I report for 2.5.70-mm1
>>
>>Runs (deadline): 717.27 1064.57 1089.13
>>Runs (anticipatory): 1342.93 1121.47 1330.42
>>...
>> WimMark I run results are archived at
>>http://oss.oracle.com/~jlbec/wimmark/wimmark_I.html
>>
>
>This is nuts. WimMark keeps on showing 2:1 swings in throughput when no
>other test shows any variation at all. I simply do not know what to make
>of it.
>
>Your results would appear to indicate that the regression between
>2.5.69-mm5 and 2.5.69-mm8 was actually due to something in Linus's tree,
>and it is now in 2.5.70.
>
>I have an interdiff here between the linus.patch from mm5 and mm8 and it
>contains nothing very interesting.
>
It might be something from your tree that got into Linus's tree _after_ mm8?