WimMark I report for 2.5.69
Runs: 1462.17 1005.78 1995.99
WimMark I is a rough benchmark we have been running
here at Oracle against various kernels. Each run tests an OLTP
workload on the Oracle database under somewhat restrictive memory
conditions. This reduces in-memory buffering of data, forcing more
I/O. The I/O is a mix of reads and synchronous writes, random and
seek-laden. The runs all do ramp-up work to populate caches and the
like.
The benchmark is called "WimMark I" because it has no
official standing and is only a relative benchmark, useful for
comparing kernel changes. The benchmark is normalized to an
arbitrary kernel, which scores 1000.0. All other numbers are relative
to this. A bigger number is a better number. All things being equal,
a delta < 50 is close to unimportant, and a delta < 20 is effectively
identical.
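	For the curious, the normalization is nothing fancy; here is a
minimal sketch with invented raw numbers (the real raw metric is
internal to the benchmark, only the arithmetic matches what we do):

    # Illustration only: the raw throughput figures are invented, and
    # the real raw metric is internal to the benchmark.  Only the
    # arithmetic mirrors the normalization described above.
    BASELINE_RAW = 812.4   # raw score of the arbitrary baseline kernel

    def normalize(raw_score):
        """Scale a raw score so the baseline kernel lands at 1000.0."""
        return raw_score * 1000.0 / BASELINE_RAW

    for raw in (790.0, 815.2, 1030.6):
        print("%.2f" % normalize(raw))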
This benchmark is sensitive to random system events. I run
three runs because of this. If two runs are nearly identical and the
remaining run is way off, that run should probably be ignored (it is
often a low number, signifying that something on the system impacted
the benchmark).
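	Roughly, that "ignore the odd run out" rule amounts to the
sketch below. The 50-point threshold is my own rule of thumb from the
deltas above, not something the benchmark enforces:

    # Rough sketch of the "ignore the odd run out" rule.  The 50-point
    # threshold echoes the "delta < 50 is close to unimportant"
    # guideline above; it is a judgment call, not part of the benchmark.
    def filter_runs(runs, close=50.0):
        """Return the runs to trust: drop one run if the other two agree."""
        a, b, c = sorted(runs)
        if b - a <= close and c - b > close:
            return [a, b]        # the high run is the odd one out
        if c - b <= close and b - a > close:
            return [b, c]        # the low run is the odd one out
        return [a, b, c]         # no two runs agree; keep everything

    print(filter_runs([1462.17, 1005.78, 1995.99]))   # -> all three kept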
The machine in question is a 4-way 700 MHz Xeon machine with 2GB
of RAM. CONFIG_HIGHMEM4GB is selected. The disk accessed for data is a
10K RPM U2W SCSI disk of similar vintage. The data files live on an
ext3 filesystem. Unless otherwise mentioned, all runs are on this
machine (variation in hardware would of course change the numbers).
WimMark I run results are archived at
http://oss.oracle.com/~jlbec/wimmark/wimmark_I.html
--
"We'd better get back, `cause it'll be dark soon,
and they mostly come at night. Mostly."
Joel Becker
Senior Member of Technical Staff
Oracle Corporation
E-mail: [email protected]
Phone: (650) 506-8127
Joel Becker <[email protected]> wrote:
>
> WimMark I report for 2.5.69
>
> Runs: 1462.17 1005.78 1995.99
> ...
> This benchmark is sensitive to random system events.
You can say that again.
> I run three runs because of this. If two runs are nearly identical and the
> remaining run is way off, that run should probably be ignored (it is
> often a low number, signifying that something on the system impacted
> the benchmark).
Here we have 1.0, 1.5 and 2.0.
We need to understand why there is such variation. If we can do that,
then perhaps we can make those 1.0's and 1.5's go away.
Is that a thing you can work on? One approach would be to vary parameters
(filesystem type, amount of memory, TCQ lengths, workload, whatever) and
see which ones the throughput is sensitive to.
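Even a dumb harness would do. A rough sketch, where "wimmark_run" is
just a stand-in for however you actually kick the benchmark off (not a
real tool), and the parameter lists are examples only:

    # Hypothetical sweep harness.  "wimmark_run" is a stand-in for
    # however the benchmark is actually launched -- it is not a real
    # tool, and the parameter lists are examples only.
    import itertools
    import statistics
    import subprocess

    FILESYSTEMS = ["ext2", "ext3"]
    MEMORY_MB = [512, 1024, 2048]    # e.g. boot with mem= to restrict RAM

    def run_once(fs, mem):
        # Placeholder: in reality this would remount/reboot as needed,
        # run the workload, and parse the score out of its output.
        out = subprocess.run(["wimmark_run", "--fs", fs, "--mem", str(mem)],
                             capture_output=True, text=True, check=True)
        return float(out.stdout.strip())

    def sweep(repeats=3):
        for fs, mem in itertools.product(FILESYSTEMS, MEMORY_MB):
            scores = [run_once(fs, mem) for _ in range(repeats)]
            spread = max(scores) - min(scores)
            print("%s %4dMB  median=%.1f  spread=%.1f"
                  % (fs, mem, statistics.median(scores), spread))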
On Wed, May 07, 2003 at 03:41:50PM -0700, Andrew Morton wrote:
> > Runs: 1462.17 1005.78 1995.99
> > ...
> > This benchmark is sensitive to random system events.
>
> You can say that again.
>
> We need to understand why there is such variation. If we can do that,
> then perhaps we can make those 1.0's and 1.5's go away.
Some kernels produce very even runs. Others do not. I suspect that
certain kernel behaviors and changes exacerbate the variance.
> Is that a thing you can work on? One approach would be to vary parameters
> (filesystem type, amount of memory, TCQ lengths, workload, whatever) and
> see which ones the throughput is sensitive to.
I can try. I'm currently trying to catch up to the
state-of-the-penguin, as I also have some test patches from Nick to run.
These runs take a while, and I've been busy as well.
Joel
--
"Any man who is under 30, and is not a liberal, has not heart;
and any man who is over 30, and is not a conservative, has no brains."
- Sir Winston Churchill
Joel Becker
Senior Member of Technical Staff
Oracle Corporation
E-mail: [email protected]
Phone: (650) 506-8127
Joel Becker <[email protected]> wrote:
>
> On Wed, May 07, 2003 at 03:41:50PM -0700, Andrew Morton wrote:
> > > Runs: 1462.17 1005.78 1995.99
> > > ...
> > > This benchmark is sensitive to random system events.
> >
> > You can say that again.
> >
> > We need to understand why there is such variation. If we can do that,
> > then perhaps we can make those 1.0's and 1.5's go away.
>
> Some kernels produce very even runs. Others do not. I suspect that
> certain kernel behaviors and changes exacerbate the variance.
Correct me if I'm wrong, but this test is largely seek-bound, is it not?
Do you monitor the total CPU utilisation during the run? Is it generally
low? If so then we're seek-bound.
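If you aren't capturing it already, even a trivial sampler over
/proc/stat would answer that. A rough sketch (iowait is counted as
idle here, so only real CPU work shows up):

    # Trivial CPU utilisation sampler over /proc/stat.  iowait (when the
    # kernel reports it) is treated as idle, so only real CPU work counts.
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            return [int(x) for x in f.readline().split()[1:]]

    def busy_percent(interval=5.0):
        before = cpu_times()
        time.sleep(interval)
        after = cpu_times()
        deltas = [a - b for a, b in zip(after, before)]
        idle = deltas[3] + (deltas[4] if len(deltas) > 4 else 0)
        return 100.0 * (sum(deltas) - idle) / sum(deltas)

    for _ in range(60):                      # five minutes of samples
        print("cpu busy: %5.1f%%" % busy_percent())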
File layout will influence things a lot. Small variations in timing and in
CPU scheduler activity could well cause significant differences in file
layout.
Does the test generate the files on-the-fly, or are they laid out
beforehand?
Is it possible to fully populate the datafiles before the run, with a
single thread of control? That will ensure that each run is working off
the same layout and will give a better basis for comparison.
It may also tell us whether some layout tweaks are needed.
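Something along these lines, run once from a single thread before the
whole series; the file names, sizes and directory below are invented
for illustration:

    # Sketch: lay the datafiles out from one thread before any runs, so
    # every kernel sees the same on-disk layout.  File names, sizes and
    # the target directory are invented for illustration.
    import os

    DATAFILES = {"data01.dbf": 512 * 1024 * 1024,
                 "data02.dbf": 512 * 1024 * 1024}

    def populate(directory):
        chunk = b"\0" * (1024 * 1024)
        for name, size in DATAFILES.items():
            with open(os.path.join(directory, name), "wb") as f:
                for _ in range(size // len(chunk)):
                    f.write(chunk)           # sequential, single-threaded
                f.flush()
                os.fsync(f.fileno())         # really on disk before we start

    populate("/oracle/data")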
On Wed, May 07, 2003 at 04:28:30PM -0700, Andrew Morton wrote:
> Does the test generate the files on-the-fly, or are they laid out
> beforehand?
They are untarred beforehand. They are pre-created and
pre-populated.
Joel
--
"In the long run...we'll all be dead."
-Unknown
Joel Becker
Senior Member of Technical Staff
Oracle Corporation
E-mail: [email protected]
Phone: (650) 506-8127