2006-05-29 19:45:15

by Valdis Klētnieks

Subject: Adaptive Readahead V14 - statistics question...

Running 2.6.17-rc4-mm3 + V14. I see this in /debug/readahead/events:

[table summary] total initial state context contexta backward onthrash onseek none
random_rate 8% 0% 4% 46% 9% 44% 0% 38% 18%
ra_hit_rate 89% 97% 90% 40% 83% 76% 0% 49% 0%
la_hit_rate 62% 99% 88% 29% 84% 9500% 0% 200% 3700%
var_ra_size 703 4 8064 39 5780 3 0 59 3010
avg_ra_size 6 2 67 6 33 2 0 4 36
avg_la_size 37 1 96 4 45 0 0 0 0

Are the 9500%, 200%, and 3700% numbers in la_hit_rate related to reality
in any way, or is something b0rken?

And is there any documentation on what these mean, so you can tell if it's
doing anything useful? (One thing I've noticed is that xmms, rather than gobble
up 100K of data off disk every 10 seconds or so, snarfs a big 2M chunk every
3-4 minutes, often sucking in an entire song at (nearly) one shot...)

(Complete contents of readahead/events follows, in case it helps diagnose...)

[table requests] total initial state context contexta backward onthrash onseek none
cache_miss 3934 543 93 2013 39 1199 0 47 417
random_read 1772 59 49 1059 11 575 0 19 327
io_congestion 4 0 4 0 0 0 0 0 0
io_cache_hit 1082 1 63 855 14 144 0 5 0
io_block 26320 18973 3519 2225 265 1288 0 50 1371
readahead 18601 15540 1008 1203 110 710 0 30 1483
lookahead 1972 153 671 1050 98 0 0 0 0
lookahead_hit 1241 152 596 312 84 95 0 2 37
lookahead_ignore 0 0 0 0 0 0 0 0 0
readahead_mmap 0 0 0 0 0 0 0 0 0
readahead_eof 14951 14348 569 19 15 0 0 0 0
readahead_shrink 0 0 0 0 0 0 0 0 70
readahead_thrash 0 0 0 0 0 0 0 0 0
readahead_mutilt 0 0 0 0 0 0 0 0 0
readahead_rescue 0 0 0 0 0 0 0 0 138

[table pages] total initial state context contexta backward onthrash onseek none
cache_miss 6541 2472 754 2026 43 1199 0 47 1194
random_read 1784 62 51 1065 12 575 0 19 337
io_congestion 396 0 396 0 0 0 0 0 0
io_cache_hit 10185 2 571 7930 1383 293 0 6 0
readahead 111015 30757 67949 6864 3642 1681 0 122 53677
readahead_hit 98812 30052 61602 2762 3041 1294 0 61 277
lookahead 72607 185 64222 3734 4466 0 0 0 0
lookahead_hit 68640 184 59207 4475 4774 0 0 0 0
lookahead_ignore 0 0 0 0 0 0 0 0 0
readahead_mmap 0 0 0 0 0 0 0 0 0
readahead_eof 39959 25045 14102 64 748 0 0 0 0
readahead_shrink 0 0 0 0 0 0 0 0 1076
readahead_thrash 0 0 0 0 0 0 0 0 0
readahead_mutilt 0 0 0 0 0 0 0 0 0
readahead_rescue 0 0 0 0 0 0 0 0 9538

[table summary] total initial state context contexta backward onthrash onseek none
random_rate 8% 0% 4% 46% 9% 44% 0% 38% 18%
ra_hit_rate 89% 97% 90% 40% 83% 76% 0% 49% 0%
la_hit_rate 62% 99% 88% 29% 84% 9500% 0% 200% 3700%
var_ra_size 703 4 8064 39 5780 3 0 59 3010
avg_ra_size 6 2 67 6 33 2 0 4 36
avg_la_size 37 1 96 4 45 0 0 0 0



2006-05-30 00:37:54

by Wu Fengguang

Subject: Re: Adaptive Readahead V14 - statistics question...

On Mon, May 29, 2006 at 03:44:59PM -0400, [email protected] wrote:
> Running 2.6.17-rc4-mm3 + V14. I see this in /debug/readahead/events:
>
> [table summary] total initial state context contexta backward onthrash onseek none
> random_rate 8% 0% 4% 46% 9% 44% 0% 38% 18%
> ra_hit_rate 89% 97% 90% 40% 83% 76% 0% 49% 0%
> la_hit_rate 62% 99% 88% 29% 84% 9500% 0% 200% 3700%
> var_ra_size 703 4 8064 39 5780 3 0 59 3010
> avg_ra_size 6 2 67 6 33 2 0 4 36
> avg_la_size 37 1 96 4 45 0 0 0 0
>
> Are the 9500%, 200%, and 3700% numbers in la_hit_rate related to reality
> in any way, or is something b0rken?

It's ok. They are computed from the following lines:
> lookahead 1972 153 671 1050 98 0 0 0 0
> lookahead_hit 1241 152 596 312 84 95 0 2 37
Here 'lookahead_hit' can be greater than 'lookahead', which means a
'cache hit' happened: the new readahead request overlapped with some
previous ones, so the 'lookahead_hit' is counted against the wrong
class. A 'cache hit' can also make 'readahead_hit' larger or smaller
than it should be.

This kind of miscounting can happen at random because the accounting
mechanism is kept simple and is only expected to be right in the normal
case. There is no guarantee of exact accuracy - that would make the
overhead unacceptable.
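
The summary line is basically lookahead_hit * 100 / lookahead per column;
the backward column issued no lookaheads but still got 95 hits mis-credited
to it, so the divisor is effectively 1 and you get 9500%. Roughly (a
simplified sketch, not the exact code in the patch):

/* simplified sketch of where numbers like 9500% come from;
 * not the exact code in the patch */
static unsigned long hit_rate(unsigned long hit, unsigned long total)
{
	if (!total)
		total = 1;	/* guard against divide-by-zero */
	return hit * 100 / total;
}

/* backward column: lookahead = 0, lookahead_hit = 95
 *	=> hit_rate(95, 0) == 9500(%)			*/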

> And is there any documentation on what these mean, so you can tell if it's

This code snippet may help a bit with understanding:

/* Read-ahead events to be accounted. */
enum ra_event {
	RA_EVENT_CACHE_MISS,		/* read cache misses */
	RA_EVENT_RANDOM_READ,		/* random reads */
	RA_EVENT_IO_CONGESTION,		/* i/o congestion */
	RA_EVENT_IO_CACHE_HIT,		/* canceled i/o due to cache hit */
	RA_EVENT_IO_BLOCK,		/* wait for i/o completion */

	RA_EVENT_READAHEAD,		/* read-ahead issued */
	RA_EVENT_READAHEAD_HIT,		/* read-ahead page hit */
	RA_EVENT_LOOKAHEAD,		/* look-ahead issued */
	RA_EVENT_LOOKAHEAD_HIT,		/* look-ahead mark hit */
	RA_EVENT_LOOKAHEAD_NOACTION,	/* look-ahead mark ignored */
	RA_EVENT_READAHEAD_MMAP,	/* read-ahead for mmap access */
	RA_EVENT_READAHEAD_EOF,		/* read-ahead reaches EOF */
	RA_EVENT_READAHEAD_SHRINK,	/* ra_size falls under previous la_size */
	RA_EVENT_READAHEAD_THRASHING,	/* read-ahead thrashing happened */
	RA_EVENT_READAHEAD_MUTILATE,	/* read-ahead mutilated by imbalanced aging */
	RA_EVENT_READAHEAD_RESCUE,	/* read-ahead rescued */

	RA_EVENT_READAHEAD_CUBE,
	RA_EVENT_COUNT
};
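
As for the two big tables: '[table requests]' counts how many times each
event fired and '[table pages]' counts how many pages were involved, broken
down by the readahead class that triggered it (initial, state, context, ...).
Simplified, the accounting amounts to something like this (the array and
function names here are illustrative only, not the actual identifiers in
the patch):

/* illustrative sketch: one request counter and one page counter per
 * (event, class) pair, dumped as the two tables in /debug/readahead/events;
 * the 'total' column is the sum over all classes */
#define RA_CLASS_COUNT	8	/* initial, state, context, contexta, ..., none */

static unsigned long ra_event_requests[RA_EVENT_COUNT][RA_CLASS_COUNT];
static unsigned long ra_event_pages[RA_EVENT_COUNT][RA_CLASS_COUNT];

static void ra_account(int class, enum ra_event e, int pages)
{
	ra_event_requests[e][class]++;		/* feeds "[table requests]" */
	ra_event_pages[e][class] += pages;	/* feeds "[table pages]"    */
}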

> doing anything useful? (One thing I've noticed is that xmms, rather than gobble
> up 100K of data off disk every 10 seconds or so, snarfs a big 2M chunk every
> 3-4 minutes, often sucking in an entire song at (nearly) one shot...)

Hehe, that results from the enlarged default max readahead size (128K => 1M).
Too aggressive? I'd be interested to know the recommended size for
desktops, thanks. For now you can adjust it per device with the
'blockdev --setra N /dev/hda' command (N is in 512-byte sectors).
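
(For reference: the compile-time default is VM_MAX_READAHEAD in
include/linux/mm.h; the patch raises it from 128 to 1024 kbytes, i.e.
something like:)

/* include/linux/mm.h -- default maximum readahead window, in kbytes */
#define VM_MAX_READAHEAD	1024	/* was 128 in the stock kernel */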

Wu
--

> (Complete contents of readahead/events follows, in case it helps diagnose...)
>
> [table requests] total initial state context contexta backward onthrash onseek none
> cache_miss 3934 543 93 2013 39 1199 0 47 417
> random_read 1772 59 49 1059 11 575 0 19 327
> io_congestion 4 0 4 0 0 0 0 0 0
> io_cache_hit 1082 1 63 855 14 144 0 5 0
> io_block 26320 18973 3519 2225 265 1288 0 50 1371
> readahead 18601 15540 1008 1203 110 710 0 30 1483
> lookahead 1972 153 671 1050 98 0 0 0 0
> lookahead_hit 1241 152 596 312 84 95 0 2 37
> lookahead_ignore 0 0 0 0 0 0 0 0 0
> readahead_mmap 0 0 0 0 0 0 0 0 0
> readahead_eof 14951 14348 569 19 15 0 0 0 0
> readahead_shrink 0 0 0 0 0 0 0 0 70
> readahead_thrash 0 0 0 0 0 0 0 0 0
> readahead_mutilt 0 0 0 0 0 0 0 0 0
> readahead_rescue 0 0 0 0 0 0 0 0 138
>
> [table pages] total initial state context contexta backward onthrash onseek none
> cache_miss 6541 2472 754 2026 43 1199 0 47 1194
> random_read 1784 62 51 1065 12 575 0 19 337
> io_congestion 396 0 396 0 0 0 0 0 0
> io_cache_hit 10185 2 571 7930 1383 293 0 6 0
> readahead 111015 30757 67949 6864 3642 1681 0 122 53677
> readahead_hit 98812 30052 61602 2762 3041 1294 0 61 277
> lookahead 72607 185 64222 3734 4466 0 0 0 0
> lookahead_hit 68640 184 59207 4475 4774 0 0 0 0
> lookahead_ignore 0 0 0 0 0 0 0 0 0
> readahead_mmap 0 0 0 0 0 0 0 0 0
> readahead_eof 39959 25045 14102 64 748 0 0 0 0
> readahead_shrink 0 0 0 0 0 0 0 0 1076
> readahead_thrash 0 0 0 0 0 0 0 0 0
> readahead_mutilt 0 0 0 0 0 0 0 0 0
> readahead_rescue 0 0 0 0 0 0 0 0 9538
>
> [table summary] total initial state context contexta backward onthrash onseek none
> random_rate 8% 0% 4% 46% 9% 44% 0% 38% 18%
> ra_hit_rate 89% 97% 90% 40% 83% 76% 0% 49% 0%
> la_hit_rate 62% 99% 88% 29% 84% 9500% 0% 200% 3700%
> var_ra_size 703 4 8064 39 5780 3 0 59 3010
> avg_ra_size 6 2 67 6 33 2 0 4 36
> avg_la_size 37 1 96 4 45 0 0 0 0
>


2006-05-30 03:36:41

by Voluspa

Subject: Re: Adaptive Readahead V14 - statistics question...


Sorry about the top-post, I'm not subscribed.

On 2006-05-30 0:37:57 Wu Fengguang wrote:
> On Mon, May 29, 2006 at 03:44:59PM -0400, Valdis Kletnieks wrote:
[...]
>> doing anything useful? (One thing I've noticed is that xmms, rather
>> than gobble up 100K of data off disk every 10 seconds or so, snarfs
>> a big 2M chunk every 3-4 minutes, often sucking in an entire song at
>> (nearly) one shot...)
>
> Hehe, it's resulted from the enlarged default max readahead size(128K
> => 1M). Too much aggressive? I'm interesting to know the recommended
> size for desktops, thanks. For now you can adjust it through the
> 'blockdev --setra /dev/hda' command.

And notebooks? I'm running a 64bit system with 2gig memory and a 7200
RPM disk. Without your patches a movie like Elephants_Dream_HD.avi
causes a continuous silent read. After patching 2.6.17-rc5 (more on that
later) there's a slow 'click-read-click-read-click-etc' during the
same movie as the head travels _somewhere_ to rest(?) between reads.

Distracting in silent sequences, and perhaps increased disk wear/tear.
I'll try adjusting the readahead size towards silence tomorrow.

But as size slides in a mainstream direction, whence will any benefit
come - in this Joe-average case? It's not a faster 'cp' at least:

_Cold boot between tests - Copy between different partitions_

2.6.17-rc5-proper (Elephants_Dream_HD.avi 854537054 bytes)

real 0m44.050s
user 0m0.076s
sys 0m6.344s

2.6.17-rc5-patched

real 0m49.353s
user 0m0.075s
sys 0m6.287s

2.6.17-rc5-proper (compiled kernel tree linux-2.6.17-rc5 ~339M)

real 0m47.952s
user 0m0.198s
sys 0m6.118s

2.6.17-rc5-patched

real 0m46.513s
user 0m0.200s
sys 0m5.827s

Of course, my failure to see speed-ups could well be 'cos of a botched
patch transfer (or some kind of missing groundwork only available in
-mm). There was one reject in particular which made me pause. I'm no
programmer... and 'continue;' is a weird direction. At the end I settled
on:

[mm/readahead.c]
@@ -184,8 +289,10 @@
page->index, GFP_KERNEL)) {
ret = mapping->a_ops->readpage(filp, page);
if (ret != AOP_TRUNCATED_PAGE) {
- if (!pagevec_add(&lru_pvec, page))
+ if (!pagevec_add(&lru_pvec, page)) {
+ cond_resched();
__pagevec_lru_add(&lru_pvec);
+ }
continue;
} /* else fall through to release */
}

The full 82K experiment can temporarily be found at this location:
http://web.comhem.se/~u46139355/storetmp/adaptive-readahead-v14-linux-2.6.17-rc5-part-01to28of32.patch

At least it hasn't eaten my (backed up) disk yet ;-)

Mvh
Mats Johannesson
--

2006-05-30 06:40:20

by Wu Fengguang

Subject: Re: Adaptive Readahead V14 - statistics question...

On Tue, May 30, 2006 at 05:36:31AM +0200, Voluspa wrote:
>
> Sorry about the top-post, I'm not subscribed.
>
> On 2006-05-30 0:37:57 Wu Fengguang wrote:
> > On Mon, May 29, 2006 at 03:44:59PM -0400, Valdis Kletnieks wrote:
> [...]
> >> doing anything useful? (One thing I've noticed is that xmms, rather
> >> than gobble up 100K of data off disk every 10 seconds or so, snarfs
> >> a big 2M chunk every 3-4 minutes, often sucking in an entire song at
> >> (nearly) one shot...)
> >
> > Hehe, it's resulted from the enlarged default max readahead size(128K
> > => 1M). Too much aggressive? I'm interesting to know the recommended
> > size for desktops, thanks. For now you can adjust it through the
> > 'blockdev --setra /dev/hda' command.
>
> And notebooks? I'm running a 64bit system with 2gig memory and a 7200
> RPM disk. Without your patches a movie like Elephants_Dream_HD.avi
> causes a continuous silent read. After patching 2.6.17-rc5 (more on that
> later) there's a slow 'click-read-click-read-click-etc' during the
> same movie as the head travels _somewhere_ to rest(?) between reads.
>
> Distracting in silent sequences, and perhaps increased disk wear/tear.
> I'll try adjusting the readahead size towards silence tomorrow.

Hmm... It seems risky to increase the default readahead size.
I would appreciate feedback once you have settled on a new size,
thanks.

btw, maybe you will be interested in the 'laptop mode'.
It prolongs battery life by making disk activity "bursty":
http://www.xs4all.nl/~bsamwel/laptop_mode/

> But as size slides in a mainstream direction, whence will any benefit
> come - in this Joe-average case? It's not a faster 'cp' at least:
>
> _Cold boot between tests - Copy between different partitions_

I have never done 'cp' tests, because they involve write caching,
which makes the results hard to interpret. However, I will try to
explain the two tests.

> 2.6.17-rc5-proper (Elephants_Dream_HD.avi 854537054 bytes)
>
> real 0m44.050s
> user 0m0.076s
> sys 0m6.344s
>
> 2.6.17-rc5-patched
>
> real 0m49.353s
> user 0m0.075s
> sys 0m6.287s

- only size matters in this trivial case.
- the increased size generally does not help single-stream read speed.
- but it does help reduce overhead (i.e. the decreased user/sys time)
- not sure why the real time increased so much.

> 2.6.17-rc5-proper (compiled kernel tree linux-2.6.17-rc5 ~339M)
>
> real 0m47.952s
> user 0m0.198s
> sys 0m6.118s
>
> 2.6.17-rc5-patched
>
> real 0m46.513s
> user 0m0.200s
> sys 0m5.827s

- the small files optimization in the new logic helped a little

Thanks,
Wu

2006-05-30 16:50:08

by Valdis Klētnieks

Subject: Re: Adaptive Readahead V14 - statistics question...

On Tue, 30 May 2006 05:36:31 +0200, Voluspa said:
> On 2006-05-30 0:37:57 Wu Fengguang wrote:
> > On Mon, May 29, 2006 at 03:44:59PM -0400, Valdis Kletnieks wrote:
> [...]
> >> doing anything useful? (One thing I've noticed is that xmms, rather
> >> than gobble up 100K of data off disk every 10 seconds or so, snarfs
> >> a big 2M chunk every 3-4 minutes, often sucking in an entire song at
> >> (nearly) one shot...)
> >
> > Hehe, it's resulted from the enlarged default max readahead size(128K
> > => 1M). Too much aggressive? I'm interesting to know the recommended
> > size for desktops, thanks. For now you can adjust it through the
> > 'blockdev --setra /dev/hda' command.

Actually, it doesn't seem too aggressive at all - I have 768M of memory,
and the larger max readahead means that it hits the disk 1/8th as often
for a bigger slurp. Since I'm on a laptop with a slow 5400rpm 60GB disk,
a 128K seek-and-read "costs" almost exactly the same as a 1M seek-and-read...

(If I was more memory constrained, I'd probably be hitting that --setra though ;)

The only hard numbers I have so far are a build of a 2.6.17-rc4-mm3 kernel tree
under -mm3+readahead and a slightly older -mm2 - the readahead kernel got
through the build about 30 seconds faster (19:45 versus 20:17 - but
that's only 1 trial each).

Oh.. another "hard number" - elapsed time for a 4AM 'tripwire' run from cron
with a -mm3+readahead kernel was 36 minutes. A few days earlier, a -mm3
kernel took 46 minutes for the same thing. I'll have to go and retry this
with equivalent cache-cold scenarios - I *think* the file cache was roughly
equivalent, but can't prove it...

The desktop "feel" is certainly at least as good, but it's a lot harder
to quantify that - yesterday I was doing some heavy-duty cleaning in my
~/Mail directory (MH-style one message per file, about 250K files and 3G,
obviously seriously in need of cleaning). I'd often have 2 different
'find | xargs grep' type commands running at a time, and that seemed to
work a lot better than it used to (but again, no numbers).

Damn, this is a lot harder to benchmark than the sort of microbenchmarks
we usually see around here. :)

> And notebooks? I'm running a 64bit system with 2gig memory and a 7200
> RPM disk. Without your patches a movie like Elephants_Dream_HD.avi
> causes a continuous silent read. After patching 2.6.17-rc5 (more on that
> later) there's a slow 'click-read-click-read-click-etc' during the
> same movie as the head travels _somewhere_ to rest(?) between reads.

For my usage patterns, this is a feature, not a bug. As mentioned before,
on this machine anything that reduces the number of seeks is a Good Thing.

> Distracting in silent sequences, and perhaps increased disk wear/tear.

It would be increased wear/tear only if the disk was idle long enough to
spin down. Especially for video, the read-ahead needed to let the disk spin
down (assuming a sane timeout for that) would be enormous. :)

> I'll try adjusting the readahead size towards silence tomorrow.

The onboard sound chip is an ok-quality CS4205, the onboard speakers are crap.
However, running the audio through a nice pair of Kenwood headphones is a good
solution. I don't hear the disk (or sometimes even the phone), and my
co-workers don't have to hear my Malmsteen collection. :)




2006-05-31 21:07:39

by Diego Calleja

Subject: Re: Adaptive Readahead V14 - statistics question...

On Tue, 30 May 2006 12:49:50 -0400,
[email protected] wrote:


> The desktop "feel" is certainly at least as good, but it's a lot harder
> to quantify that - yesterday I was doing some heavy-duty cleaning in my

My desktop seems to boot a bit faster with adaptive readahead. I set up
an environment running kdm with automatic login plus a KDE session which
opens a Konqueror window and an OpenOffice Writer window. The time it takes
for the system to show the OO window went from 1:19 to 1:16 (I did a couple
of tests with each kernel). Not a very scientific measurement; bootchart
could probably do it better.

2006-05-31 21:50:55

by Voluspa

Subject: Re: Adaptive Readahead V14 - statistics question...

On Tue, 30 May 2006 12:49:50 -0400 Valdis.Kletnieks wrote:
> On Tue, 30 May 2006 05:36:31 +0200, Voluspa said:
> > On 2006-05-30 0:37:57 Wu Fengguang wrote:
> > > On Mon, May 29, 2006 at 03:44:59PM -0400, Valdis Kletnieks wrote:
[...]
> Damn, this is a lot harder to benchmark than the sort of microbenchmarks
> we usually see around here. :)

I don't even know what a microbenchmark is, but 'cp' and its higher-level
equivalents are such frequent operations that I always begin any testing
there.

[...] [Correction, should be: 'click-read-pause, click-read-pause etc']
> > later) there's a slow 'click-read-click-read-click-etc' during the
> > same movie as the head travels _somewhere_ to rest(?) between reads.
>
> For my usage patterns, this is a feature, not a bug. As mentioned before,
> on this machine anything that reduces the number of seeks is a Good Thing.
>
> > Distracting in silent sequences, and perhaps increased disk wear/tear.
>
> It would be increased wear/tear only if the disk was idle long enough to
> spin down. Especially for video, the read-ahead needed to let the disk spin
> down (assuming a sane timeout for that) would be enormous. :)

:-) I was thinking more in terms of disk head _arm_ wear. Somehow there's a
picture in my head of the arm swinging back to a rest position at an outer
(or inner?) "safe" disk track if read/write operations are delayed too much.
And therefore I associate a 'click' with the arm swinging back into action.
Normal quick read/write arm movement noise is distinctly different - to my
uninformed user ears.

I haven't adjusted the readahead size yet, but have instead performed a
series of real-world usage tests.

Conclusion: On _this_ machine, with _these_ operations, Adaptive Readahead
in its current incarnation and default settings is a _loss_.

Patch version:
http://web.comhem.se/~u46139355/storetmp/adaptive-readahead-v14-linux-2.6.17-rc5-part-01to28of32-and-update-01to04of04-and-saner-CDVD-medium-error-handling.patch

Relevant hardware:
AMD Athlon 64 Processor 3400+ (2200 MHz top speed) L1 I Cache: 64K (64
bytes/line), D cache 64K (64 bytes/line), L2 Cache: 1024K (64 bytes/line).
VIA K8M800 chipset with VT8235 south. Kingmax 2x1GB DDR-333MHz SO-DIMM memory.
Hitachi Travelstar 7K100 (HTS721010G9AT00) 100GB 7200RPM Parallel-ATA disk
(http://www.hitachigst.com/hdd/support/7k100/7k100.htm); the acoustic
management value was set to 254 (fast/"noisy") at delivery.

Soft system:
Extremely lean and simple. Pure 64bit, compiled in an LFS-ish way almost
exactly 1 year ago. No desktop, just a wm (which wasn't even launched in
these tests). Toolchain: glibc-2.3.5 (nptl), binutils-2.16.1, gcc-3.4.4.

Filesystem:
Journaled ext3 with default mount (ordered data mode) and noatime.

Kernels:
loke:sleipner:~$ ls -l /boot/kernel-2.6.17-rc5*
1440 -rw-r--r-- 1 root root 1469211 May 30 02:25 /boot/kernel-2.6.17-rc5
1444 -rw-r--r-- 1 root root 1470540 May 30 19:07 /boot/kernel-2.6.17-rc5-ar

All tests were performed as the root user from a machine-standstill "cold
boot" for each iteration, prepared for a 'console login - immediate run',
i.e. any previous output deleted/reset.

_Massive READ_

[/usr had some 490000 files]

"cd /usr ; time find . -type f -exec md5sum {} \;"

2.6.17-rc5 ------- 2.6.17-rc5-ar

real 21m21.009s -- 21m37.663s
user 3m20.784s -- 3m20.701s
sys 6m34.261s -- 6m41.735s

I had planned to run this at least three times, but didn't realize I had
12 compiled kernel trees and 3 uncompiled there... So, a one-shot had to
do. But it's still significant.

_READ/WRITE_

[255 .tga files, each is 1244178 bytes]
[1 .wav file which is 1587644 bytes]
[movie becomes 573298 bytes ~9s long]

"time mencoder -ovc lavc -lavcopts aspect=16/9 mf://picsave/kreation/03-logo-joined/*.tga -oac lavc -audiofile kreation-files/kreation-logo-final.wav -o logo-final-widescreen-speedtest.avi"

2.6.17-rc5

real 0m10.164s 0m10.224s 0m10.141s
user 0m3.301s 0m3.304s 0m3.297s
sys 0m1.103s 0m1.097s 0m1.082s

2.6.17-rc5-ar

real 0m10.831s 0m10.816s 0m10.747s
user 0m3.319s 0m3.313s 0m3.324s
sys 0m1.081s 0m1.099s 0m1.042s

A 0.6s slowdown might not seem like such a big deal, but this is on a 9s
movie! Furthermore, the test was conducted on the / root partition, which
resides on hda2. Adding the 8GB hda1 and the occupied 1.2GB of hda2
places us 9.2GB in from the disk edge (assuming 1 platter). I did a
one-shot test of this movie on hda3 - closest to the spindle - which all
in all gives a distance of ~95GB:

2.6.17-rc5 ------ 2.6.17-rc5-ar

real 0m16.134s -- 0m17.456s
user 0m3.311s -- 0m3.312s
sys 0m1.111s -- 0m1.135s

Wow. If nothing else, these tests have made me rethink my partitioning
scheme. I've used the same layout since xx-years ago when proximity of
swap-usr-home on those slow disks really made a difference. And since
I don't touch swap in normal operation nowadays... Power to the Edge!

_Geek usage_

[Kernel compile]
[CONFIG_REORDER "Processor type and features -> Function reordering" adds
ca. 30s here]
[Note: I made a mistake by booting the -ar kernel first, and also didn't
alternate like I should have. This was the first set of tests, and the rise
in chip temperature seems to slow things down. The physics reason is above
my head.]

"time make"

2.6.17-rc5-ar

real 5m3.654s 5m3.787s 5m4.390s 5m4.991s
user 4m17.595s 4m17.580s 4m17.701s 4m18.043s
sys 0m31.551s 0m31.506s 0m31.368s 0m31.563s

2.6.17-rc5

real 5m4.606s 5m5.798s 5m4.684s 5m4.508s
user 4m18.586s 4m19.183s 4m19.111s 4m17.799s
sys 0m31.241s 0m31.482s 0m31.278s 0m31.610s

Any difference here should really be considered noise. The file read/write
is too infrequent and slow to really measure.

_Caveat and preemptive Mea Culpa_

The patching of 2.6.17-rc5 has been neither approved nor verified as to
its correctness. The kernel compiles without errors, and the new
/proc/sys/kernel/ sysctls readahead_ratio and readahead_hit_rate turn up
with their defaults of 50 and 1. This is, however, not proof of total
parity with the official -mm patch-set.

Mvh
Mats Johannesson
--