2003-01-31 20:17:34

by Cliff White

Subject: [OSDL][BENCHMARK] OSDL-DBT-2 - 2.4 vs 2.5 4-way/8-way with vmstat


As Andrew requested, we now have posted DBT-2 results with vmstat
data included for the 8-way workloads we reported previously.
Runs have been included from both the cached and non-cached variants
of the workload. The 8-way results used the same database cache
size setting (2656MB).

In addition, we have results for DBT-2 running on STP for the cached workload
comparing 2.4.18 versus 2.5.54dcl (the Data Center Linux kernel). In these
runs we varied the database cache size setting (LM=2031MB, MM=2344MB,
HM=2656MB).

Summary of results (Higher Metric is better)


CPUs   OS   Load       Memory   Metric (avg)   % speedup (2.5 vs 2.4)
-----  ---  ---------  -------  -------------  -----------------------
8-way  2.4  Cached     HM             4475.45
8-way  2.5  Cached     HM             5063.5                     13.1
8-way  2.4  NonCached  HM             1414.18
8-way  2.5  NonCached  HM             1659.8                     17.4
4-way  2.4  Cached     LM             2784.4
4-way  2.5  Cached     LM             2941                        5.62
4-way  2.4  Cached     MM             2786.2
4-way  2.5  Cached     MM             2939.8                      5.51
4-way  2.4  Cached     HM             2786
4-way  2.5  Cached     HM             2947.2                      5.79
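For reference, the speedup figures are just the relative change in the
metric between each pair of runs; for the 8-way cached pair:

    # percent speedup of 2.5 over 2.4 for the 8-way cached runs
    awk 'BEGIN { printf "%.1f\n", (5063.5 - 4475.45) / 4475.45 * 100 }'
    # prints 13.1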



Here are some highlights/comments:

Both the 4-way runs and the 8-way runs show improvement going from
2.4 to 2.5. The improvement is larger for the 8-way runs. We believe
the 2.5 4-way would have improved more had it not hit a CPU wall.
Observe the plot of vmstat percent-user data at:
http://www.osdl.org/projects/dbt2dev/results/STP_4way/us.html

From the data and the plots, you will notice a big change about
every 10 minutes. That is when the database does a "savepoint":
it writes its dirty pages out to the database files. This happened
in both the cached runs and the non-cached runs.
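
If you want to pick those bursts out of a trace yourself, something
like the following works (a sketch: it finds the "bo" blocks-written-out
column from the vmstat header line, and assumes our 60-second sampling
interval):

    # print elapsed seconds vs. blocks written out from a vmstat trace;
    # the savepoints show up as spikes roughly every 600 seconds
    awk '/ bo / { for (i = 1; i <= NF; i++) if ($i == "bo") c = i; next }
         c && NF { print ++n * 60, $c }' vmstat.out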

We welcome your comments on why we are seeing this improvement.
As always, we also welcome suggestions for improvement, and random complaints.

Regards,

Mary Edie Meredith
Mark Wong
Cliff White

OSDL Database Test 2 Project
Open Source Development Lab
http://www.osdl.org



Link information:

The overall results page for the DBT-2 project is at:
http://www.osdl.org/projects/dbt2dev/results

Direct link to the 4-way results page:
http://www.osdl.org/projects/dbt2dev/results/STP_4way/STP_4way_2.4v2.5.html

Direct link to the 8-way results page:
http://www.osdl.org/projects/dbt2dev/results/8way/LKML2/STP_8way_2.4v2.5.html



2003-01-31 21:17:25

by Andrew Morton

Subject: Re: [OSDL][BENCHMARK] OSDL-DBT-2 - 2.4 vs 2.5 4-way/8-way with vmstat

Cliff White <[email protected]> wrote:
>
> Link information:
>
> The overall results page for the DBT-2 project is at:
> http://www.osdl.org/projects/dbt2dev/results
>
> Direct link to the 4-way results page:
> http://www.osdl.org/projects/dbt2dev/results/STP_4way/STP_4way_2.4v2.5.html
>
> Direct link to the 8-way results page:
> http://www.osdl.org/projects/dbt2dev/results/8way/LKML2/STP_8way_2.4v2.5.html

There seem to be quite a lot of mangled links there. For example,
http://www.osdl.org/projects/dbt2dev/results/8way/LKML2/STP_8way_2.4v2.5.html#table0
links to
http:////www.osdl/org/projects/dbt2dev/results/8way/LKML2/c24.html

and
http://www.osdl.org/projects/dbt2dev/results/8way/LKML2/c24.html
links to
http://www.osdl/org/projects/dbt2dev/results/8way/LKML2/data/c24/296/vmstat.out

and when I fix up the latter:
http://www.osdl.org/projects/dbt2dev/results/8way/LKML2/data/c24/296/vmstat.out
it isn't there :(

However, the numbers at
http://www.osdl.org/projects/dbt2dev/results/LKML_dbt2_2.4v2.5_both.html#table0
seem to show increased user time and decreased system time in 2.5, and
zero pgpgin/pgpgout in 2.5, which seems wrong.
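
One guess: the zeroes may be the vmstat binary reading page counters
from /proc/stat, which 2.5 no longer exports, rather than the kernel
really doing no paging. On 2.5 the raw counters can be checked straight
out of /proc/vmstat (quick sanity test, untested here):

    # snapshot the raw paging counters a minute apart; growing values
    # mean paging is happening even if vmstat shows zero
    grep -E '^pgpg(in|out)' /proc/vmstat
    sleep 60
    grep -E '^pgpg(in|out)' /proc/vmstat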

I'd really like to see the vmstat traces. Judging by the reduced idle time
in 2.5, this change could be due to more successful page replacement
decisions.


2003-01-31 22:38:37

by Cliff White

Subject: Re: [OSDL][BENCHMARK] OSDL-DBT-2 - 2.4 vs 2.5 4-way/8-way with vmstat

> Cliff White <[email protected]> wrote:
> >
> > http://www.osdl.org/projects/dbt2dev/results/8way/LKML2/data/296/vmstat.out
>
> That's progress, thanks.
>
> It would be useful to show the collection interval of vmstat in the reports. Is
> that `vmstat 1' or `vmstat 1000'?

The vmstat interval is 60 seconds; you are right, we'll fix the report.
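
For reference, the collection boils down to something like this (the
count of 120 here, a two-hour trace, is illustrative):

    # one sample per minute for the length of the run; -n prints the
    # header only once, which keeps the trace easy to post-process
    vmstat -n 60 120 > vmstat.out &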
>
> This workload appears to be performing concurrent disk reads and writes. If
> these are _really_ happening at the same time (ie: if vmstat hasn't confused
> me) then it could well be the case that the throughput improvement comes from
> the I/O scheduler's tendency to service reads more promptly when there is a
> lot of writeback happening.

Re concurrent IO:
The official answer from the DBAs: "Well, it tries."
There are multiple user processes doing queries and commits.
There should be a difference between the cached and non-cached runs, as
the cached runs should not be doing much writing, except to the log device.

Every 10 minutes in all workloads, the database flushes its cache to the
datafiles, which should produce a noticeable peak in activity.

These loads suck up all the memory they can, so anything that gives us
more free memory should be goodness. We think we are also seeing
improvements in free memory in 2.5, but we don't know for sure where
best to look or how best to prove it. Any advice?
>
> If so then you can expect to see wild swings in results as you wend your way
> through recent 2.5 kernels :(. I'm working on settling that all down.
> 2.5.59-mm7 should do well.
>
Hopefully we can get some results for you on that kernel.
cliffw

2003-01-31 22:48:20

by Andrew Morton

Subject: Re: [OSDL][BENCHMARK] OSDL-DBT-2 - 2.4 vs 2.5 4-way/8-way with vmstat

Cliff White wrote:
>
> ...
> These loads suck up all the memory they can, so anything that gives us
> more free memory should be goodness. We think we are also seeing
> improvements in free memory in 2.5, but we don't know for sure where
> best to look or how best to prove it. Any advice?

Monitoring /proc/meminfo would be the main means. Further
info could be obtained by drilling down into /proc/vmstat
and /proc/slabinfo (the latter via bloatmeter, preferably).

http://www.zip.com.au/~akpm/linux/patches/stuff/

Martin Bligh is working on another VM reporting tool `vmtop', which
would be appropriate for that as well.
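
Something like this once-a-minute sampler would do for the /proc/meminfo
side (untested sketch; the same loop can snapshot /proc/vmstat too):

    # timestamped /proc/meminfo snapshot every 60 seconds, for graphing
    # free/cached memory across the run
    while true
    do
            echo "=== $(date +%s) ==="
            cat /proc/meminfo
            sleep 60
    done > meminfo.log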