2003-01-31 21:44:47

by David Mansfield

Subject: 2.5.59-mm7 results with database 'benchmark'


Hi Andrew et al.

I have run my 'production database load' against the 2.5.59-mm7 kernel.
Fortunately for me, but unfortunately for you, I have upgraded the system
CPUs. They were 2 x PIII 866MHz, 256KB cache; they are now 2 x PIII 1GHz, 256KB
cache. I reran the 2.4.18-19.7.xsmp test as a baseline for comparison. I
include all results to date here. System and workload descriptions
follow.

The (slight) advantage that the 2.5.59 series had over the RedHat
kernels has evaporated. But it was marginal to begin with.

As usual, I'm willing to test...

Results:

kernel minutes
----------------------------- -----------
old cpus, 2x866mhz:
2.4.20-aa1 134
2.5.59 124
2.4.18-19.7.xsmp 128
2.5.59-mm5 157
2.5.59-mm5-no-anticipatory-io 125
2.5.59-mm6 125

new cpus, 2x1ghz:
2.4.18-19.7.xsmp 118 <--- new run
2.5.59-mm7 118 <--- new run


Platform and configuration:
HP LH3000 U3. Dual 1GHz Intel Pentium III, 2GB RAM. A megaraid controller
with two channels; each channel is a hardware RAID 5 array of six 15K SCSI
disks used as an LVM PV, with one megaraid LV per PV.

Two plain disks with pairs of partitions in RAID 1 for the OS (Red Hat 7.3),
and a second pair of partitions (regular ext2 partitions) for the Oracle
redo log (in a log 'group').

Oracle version 8.1.7.4 (no AIO support in this release) accesses its
datafiles on the two megaraid devices via /dev/raw stacked on top of
device-mapper (lvm2).
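For anyone reconstructing this layout, the raw bindings would look roughly
like the sketch below. The volume group and LV names are made up for
illustration, not David's actual configuration:

```shell
# Bind character 'raw' devices over the device-mapper logical volumes
# so Oracle can open the datafiles with unbuffered I/O.
# Device names below are hypothetical.
raw /dev/raw/raw1 /dev/mapper/datavg-oradata1
raw /dev/raw/raw2 /dev/mapper/datavg-oradata2

# Query all current raw bindings to confirm the setup.
raw -qa
```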

Workload:
The workload consists of a few different phases.

1) Indexing: multiple indexes are built against a 9-million-row table. This
is mostly sequential scans of a single table, with bursts of write
activity. 50 minutes or so.

2) Analyzing: the database scans tables and builds statistics. Most of the
time is spent analyzing the 9-million-row table. This is a completely
CPU-bound step on our underpowered system. 30 minutes.

3) Summarization: the large table is aggregated in about 100 different
ways, with records generated for each summarization. This is a mixed
read-write load. 50 minutes or so.

--
/==============================\
| David Mansfield |
| [email protected] |
\==============================/



2003-01-31 22:10:28

by Andrew Morton

Subject: Re: 2.5.59-mm7 results with database 'benchmark'

David Mansfield <[email protected]> wrote:
>
>
> The (slight) advantage that the 2.5.59 series had over the RedHat
> kernels has evaporated. But it was marginal to begin with.

Could you test 2.5.59-base? Could be that 2.5.59-mm7 is slower for some
reason.

Or it could be that the increased CPU speed now makes the load alternate
between 100% cpu-bound and 100% IO-bound rather than some combination of
both. (If you understand what I mean by this, please explain it to me some
time).


2003-01-31 22:17:05

by Martin J. Bligh

Subject: Re: 2.5.59-mm7 results with database 'benchmark'

> I have run my 'production database load' against the 2.5.59-mm7 kernel.
> Fortunately for me, but unfortunately for you, I have upgraded the system
> CPUs. They were 2 x PIII 866Mhz, 256kb cache, now 2 x PIII 1Ghz, 256kb
> cache. I reran the 2.4.18-19.7.xsmp test as a baseline for comparison. I
> include all results to date here. System and workload descriptions
> follow.
>
> The (slight) advantage that the 2.5.59 series had over the RedHat
> kernels has evaporated. But it was marginal to begin with.
>
> As usual, I'm willing to test...

You got any more detailed info? vmstat, oprofile / readprofile,
things like that?
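
(For reference, that kind of data could be gathered roughly as follows on a
2.5-era kernel. The workload driver name here is made up, and readprofile
requires the kernel to have been booted with profile=1:)

```shell
# Sample memory/IO/CPU state every 10 seconds during the run.
vmstat 10 > vmstat.log &
VMSTAT_PID=$!

# Kernel profile via readprofile (kernel must be booted with profile=1).
readprofile -r                       # reset the profiling counters
./run_benchmark.sh                   # hypothetical workload driver
readprofile -m /boot/System.map | sort -rn | head -20

# Stop the background vmstat sampler.
kill "$VMSTAT_PID"
```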

M.

2003-02-14 22:04:52

by David Mansfield

Subject: [BENCHMARK] 2.5.60-mm2+anticipatory results with Oracle db load


Andrew and list,

I've been tracking various kernels with my 'production load'. The
load and the platform are described below.

As stated in the subject line, this is 2.5.60 + mm2 + all three
anticipatory IO scheduler patches from experimental/.

In case anyone is comparing these to prior results, be warned, I've
changed CPUs, changed Oracle tuning, and of course, changed kernels.
Only the relevant kernels for comparison are included below (i.e. these
runs are on the same HW platform with same Oracle config).

If you would like to see other tests, let me know.

As per a prior request, I've attached 'vmstat 10' output captured during
the run (and a bit after). It's compressed. If this vmstat is useless, let
me know; I feel a bit bad spamming the list with it.
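
A log like the attached one can be captured with a small wrapper of this
sort (the function name and the idea of passing the workload as arguments
are illustrative, not David's actual script):

```shell
# run_with_vmstat CMD...: log 'vmstat 10' to vmstat.log while CMD runs,
# then report elapsed wall-clock minutes. Wrapper name is hypothetical.
run_with_vmstat() {
    vmstat 10 > vmstat.log 2>/dev/null &   # background sampler
    vmpid=$!
    start=$(date +%s)
    "$@"                                   # run the workload command
    rc=$?
    end=$(date +%s)
    kill "$vmpid" 2>/dev/null              # stop the sampler
    echo "elapsed: $(( (end - start) / 60 )) minutes"
    return $rc
}
```

Usage would be e.g. `run_with_vmstat ./run_load.sh`, then `gzip vmstat.log`
before attaching it.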

If you'd like other profiling, let me know, along with some hints on how to
use it. I tend to think this is really an I/O scheduler + user-space CPU
utilization issue (not a kernel system-time issue).



kernel minutes
----------------------------- -----------
2.5.59 102
2.5.59-mm8 105
2.5.60-mm2-with-anticipatory 107 <- attached vmstat.log.gz for this


Platform and configuration:
HP LH3000 U3. Dual 1GHz Intel Pentium III, 2GB RAM. A megaraid controller
with two channels; each channel is a hardware RAID 5 array of six 15K SCSI
disks used as an LVM PV, with one megaraid LV per PV.

Two plain disks with pairs of partitions in RAID 1 for the OS (Red Hat 7.3),
and a second pair of partitions (regular ext2 partitions) for the Oracle
redo log (in a log 'group').

Oracle version 8.1.7.4 (no AIO support in this release) accesses its
datafiles on the two megaraid devices via /dev/raw stacked on top of
device-mapper (lvm2).

Workload:
The workload consists of a few different phases.

1) Indexing: multiple indexes are built against a 9-million-row table. This
is mostly sequential scans of a single table, with bursts of write
activity. 50 minutes or so.

2) Analyzing: the database scans tables and builds statistics. Most of the
time is spent analyzing the 9-million-row table. This is a completely
CPU-bound step on our underpowered system. 30 minutes.

3) Summarization: the large table is aggregated in about 100 different
ways, with records generated for each summarization. This is a mixed
read-write load. 50 minutes or so.

--
/==============================\
| David Mansfield |
| [email protected] |
\==============================/


Attachments:
vmstat.log.gz (12.81 kB)