2003-03-10 17:37:15

by Joel Becker

Subject: WimMark I for 2.5.64-mm2


WimMark I report for 2.5.64-mm2

Runs with deadline scheduler: 1580.10 1537.95
Runs with anticipatory scheduler: 632.87 597.33 555.19

WimMark I is a rough benchmark we have been running
here at Oracle against various kernels. Each run tests an OLTP
workload on the Oracle database with somewhat restrictive memory
conditions. This reduces in-memory buffering of data, forcing more of the
work out to disk. The I/O is a mix of reads and synchronous writes, random
and seek-laden.
The benchmark is called "WimMark I" because it has no
official standing and is only a relative benchmark useful for comparing
kernel changes. The benchmark is normalized to an arbitrary kernel, which
scores 1000.0. All other numbers are relative to that baseline.
The machine in question is a 4 way 700 MHz Xeon machine with 2GB
of RAM. CONFIG_HIGHMEM4GB is selected. The disk accessed for data is a
10K RPM U2W SCSI disk of similar vintage. The data files live on an
ext3 filesystem. Unless otherwise mentioned, all runs are
on this machine (variation in hardware would indeed change the
benchmark).
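
As a rough illustration of the scoring, here is a minimal sketch of the
normalization; the baseline value, the raw run numbers, and the function
name are made-up placeholders, not the actual WimMark metric:

    # Hypothetical normalization: the reference kernel's raw metric maps to 1000.0.
    BASELINE_RAW = 1234.5  # made-up raw metric for the arbitrary reference kernel

    def wimmark_score(raw_metric):
        # Scale a raw benchmark metric so the reference kernel scores 1000.0.
        return 1000.0 * raw_metric / BASELINE_RAW

    for raw in (1950.3, 781.2):  # made-up raw numbers for two runs
        print("%.2f" % wimmark_score(raw))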

--

"There are some experiences in life which should not be demanded
twice from any man, and one of them is listening to the Brahms Requiem."
- George Bernard Shaw

Joel Becker
Senior Member of Technical Staff
Oracle Corporation
E-mail: [email protected]
Phone: (650) 506-8127


2003-03-10 23:38:03

by Andrew Morton

Subject: Re: WimMark I for 2.5.64-mm2

Joel Becker <[email protected]> wrote:
>
>
> WimMark I report for 2.5.64-mm2
>
> Runs with deadline scheduler: 1580.10 1537.95
> Runs with anticipatory scheduler: 632.87 597.33 555.19

The anticipatory scheduler will never be better than deadline with these
sorts of workloads. The best we can do is to equal it.

With other OLTP-style tests, AS is at worst 5-10% behind deadline.

So what's up with WimMark? Is it possible that the test is exhibiting some
nonlinearity, wherein a small change in inputs causes a large swing in
output?

One way to tell would be to perform several runs with different values
of /sys/block/sdXX/io_sched/antic_expire and see how the overall runtime
varies as that setting is altered.

The default is currently 10 (milliseconds). With zero you should get the
same throughput as deadline.
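
Something like the following rough sketch would do such a sweep; "sda" and
the run_wimmark driver command are placeholders for your actual data disk
and benchmark harness:

    # Sweep antic_expire and rerun the benchmark at each setting.
    import subprocess

    ANTIC_EXPIRE = "/sys/block/sda/io_sched/antic_expire"  # substitute your data disk

    for ms in (0, 5, 10, 20, 40):
        with open(ANTIC_EXPIRE, "w") as f:
            f.write("%d" % ms)              # set the anticipation window in milliseconds
        out = subprocess.run(["run_wimmark"],  # hypothetical benchmark driver script
                             capture_output=True, text=True)
        print("antic_expire=%d -> %s" % (ms, out.stdout.strip()))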

I'm not sure what to conclude from this result. Can you shed any light on
what it means, on what's going on?