This patchset is designed to improve system responsiveness and interactivity.
It is configurable to any workload but the default -ck patch is aimed at the
desktop and -cks is available with more emphasis on serverspace.
Apply to 2.6.20
http://www.kernel.org/pub/linux/kernel/people/ck/patches/2.6/2.6.20/2.6.20-ck1/patch-2.6.20-ck1.bz2
or server version
http://www.kernel.org/pub/linux/kernel/people/ck/patches/2.6/2.6.20/2.6.20-ck1/patch-2.6.20-cks1.bz2
web:
http://kernel.kolivas.org
all patches:
http://www.kernel.org/pub/linux/kernel/people/ck/patches/
Split patches available.
Full patchlist:
sched-staircase-17.patch
sched-staircase17_interactive_tunable.patch
sched-staircase17_compute_tunable.patch
sched-range.patch
sched-iso-4.7.patch
track_mutexes-1.patch
sched-idleprio-1.1.patch
sched-limit_policy_changes.patch
sched-make_softirqd_batch.patch
cfq-ioprio_inherit_rt_class.patch
cfq-iso_idleprio_ionice.patch
hz-default_1000.patch
hz-no_default_250.patch
sched-add-above-background-load-function.patch
mm-swap_prefetch-34.patch
mm-convert_swappiness_to_mapped.patch
mm-lots_watermark.diff
mm-kswapd_inherit_prio-1.patch
mm-prio_dependant_scan-2.patch
mm-background_scan-2.patch
mm-idleprio_prio.patch
mm-filesize_dependant_lru_cache_add.patch
kconfig-expose_vmsplit_option.patch
ck1-version.patch
このカーネルは立派だと思いますよ ("I think this kernel is splendid")
--
-ck
On Friday 16 February 2007, Con Kolivas wrote:
> This patchset is designed to improve system responsiveness and
> interactivity. It is configurable to any workload but the default -ck patch
> is aimed at the desktop and -cks is available with more emphasis on
> serverspace.
Running well on quite different machines. Good work :)
--
---------------------------------------
Malte Schröder
[email protected]
ICQ# 68121508
---------------------------------------
Working well at home and at work.
It fixed the problems I had at work with hard lockups when leaving the
box idling overnight and getting back the day after. It also fixed some
freezes I had when working on a repository converter for Mercurial; the
conversion process used to be damn slow with pre1, and it's now just fine.
Good work Con, it's much appreciated.
--
Edouard Gomez
On Fri, 16 Feb 2007 21:35:17 +0000, Edouard Gomez wrote:
> It also fixed some freezes I had when working on a repository
> converter for Mercurial; the conversion process used to be damn slow
> with pre1, and it's now just fine.
I didn't mean pre1, I meant the 2.6.20-rc6-ck1 patch you were hesitating
to announce as the final 2.6.20-ck1 when I asked you. Good to see the
reviews let you spot the bug.
--
Edouard Gomez
Con Kolivas wrote:
> mm-filesize_dependant_lru_cache_add.patch
I like it.
Is any of this stuff ever going to be merged?
On Saturday 17 February 2007 11:53, Chuck Ebbert wrote:
> Con Kolivas wrote:
> > mm-filesize_dependant_lru_cache_add.patch
>
> I like it.
Thanks :-)
> Is any of this stuff ever going to be merged?
See the last paragraph here:
http://lkml.org/lkml/2007/2/9/112
I'm thru with bashing my head against the wall.
--
-ck
On 2/16/07, Con Kolivas <[email protected]> wrote:
> I'm thru with bashing my head against the wall.
I do hope this post isn't in any way redundant, but from what I can
see, this has never been suggested... (someone please do enlighten me
if I'm wrong.)
Has anyone tried booting a kernel with the various patches in question
with a mem=###M boot flag (maybe mem=96M or some other "insanely low
number" ?) to make the kernel think it has less memory than is
physically available (and then compare to vanilla with the same
flags)? It might more clearly demonstrate the effects of Con's patches
when the kernel thinks (or knows) it has relatively little memory
(since many critics, from what I can tell, have quite a bit of memory
on their systems for their workloads).
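For anyone who wants to try it, here's roughly what I mean for a GRUB setup
(the kernel paths, root device and title below are made up, so adjust them
for your own bootloader); the limit just gets appended to the kernel line of
the entry under test:

title 2.6.20-ck1 (mem=96M test)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.20-ck1 ro root=/dev/hda1 mem=96M
    initrd /boot/initrd-2.6.20-ck1.img

Boot the vanilla kernel's entry with the same mem=96M appended for the
comparison run.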
Just my two cents.
--
~Mike
- Just the crazy copy cat.
On Saturday 17 February 2007 13:15, michael chang wrote:
> On 2/16/07, Con Kolivas <[email protected]> wrote:
> > I'm thru with bashing my head against the wall.
>
> I do hope this post isn't in any way redundant, but from what I can
> see, this has never been suggested... (someone please do enlighten me
> if I'm wrong.)
>
> Has anyone tried booting a kernel with the various patches in question
> with a mem=###M boot flag (maybe mem=96M or some other "insanely low
> number" ?) to make the kernel think it has less memory than is
> physically available (and then compare to vanilla with the same
> flags)? It might more clearly demonstrate the effects of Con's patches
> when the kernel thinks (or knows) it has relatively little memory
> (since many critics, from what I can tell, have quite a bit of memory
> on their systems for their workloads).
>
> Just my two cents.
Oh that's not a bad idea of course. I've been testing it like that for ages,
and there are many -ck users who have testified to swap prefetch helping in
low memory situations for real as well. Now how do you turn those testimonies
into convincing arguments? Maintainers are far too busy off testing code for
16+ cpus, petabytes of disk storage and so on to try it for themselves. Plus
they worry incessantly that my patches may harm those precious machines'
performance...
--
-ck
Con Kolivas wrote:
> This patchset is designed to improve system responsiveness and interactivity.
> It is configurable to any workload but the default -ck patch is aimed at the
> desktop and -cks is available with more emphasis on serverspace.
>
<snip>
>
> このカーネルは立派だと思いますよ ("I think this kernel is splendid")
>
Running well. Thanks Con, great job! I just discovered
gmane.linux.kernel.ck through your quote of a post on it earlier.
Hugo
On 2/16/07, Con Kolivas <[email protected]> wrote:
> On Saturday 17 February 2007 13:15, michael chang wrote:
> > On 2/16/07, Con Kolivas <[email protected]> wrote:
> > > I'm thru with bashing my head against the wall.
> >
> > I do hope this post isn't in any way redundant, but from what I can
> > see, this has never been suggested... (someone please do enlighten me
> > if I'm wrong.)
> >
> > Has anyone tried booting a kernel with the various patches in question
> > with a mem=###M boot flag (maybe mem=96M or some other "insanely low
> > number" ?) to make the kernel think it has less memory than is
> > physically available (and then compare to vanilla with the same
> > flags)? It might more clearly demonstrate the effects of Con's patches
> > when the kernel thinks (or knows) it has relatively little memory
> > (since many critics, from what I can tell, have quite a bit of memory
> > on their systems for their workloads).
> >
> > Just my two cents.
>
> Oh that's not a bad idea of course. I've been testing it like that for ages,
It never hurts to point out the obvious in case someone didn't notice,
so long as one doesn't become repetitive.
> and there are many -ck users who have testified to swap prefetch helping in
> low memory situations for real as well. Now how do you turn those testimonies
> into convincing arguments? Maintainers are far too busy off testing code for
> 16+ cpus, petabytes of disk storage and so on to try it for themselves. Plus
Pity. What about virtualization? Surely one of these 16+ CPU machines
with petabytes of disk storage can spare one CPU for an hour for a
virtual machine: just set it up with something like a 3 GB hard drive
image (which is later deleted), 1 GB of swap (ditto), 128 MB of memory,
and one CPU instance, then test with vanilla and the patches in question?
(My understanding is that one of the major "fun things" that these kinds
of machines have is that they come with interesting VM features and
instruction sets.)
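Something like this with plain qemu would probably do (the image name, sizes
and install ISO below are only placeholders):

qemu-img create -f qcow2 ck-test.qcow2 3G
qemu -m 128 -smp 1 -hda ck-test.qcow2 -cdrom distro-install.iso -boot d

Install a minimal system with about 1 GB of swap inside the image, boot the
guest once with the vanilla kernel and once with the -ck kernel, run the same
workload in each, and delete the image afterwards.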
> they worry incessantly that my patches may harm those precious machines'
> performance...
Has anyone tested it on one of these massive multi-core "beasts" and
seen if it DOES degrade performance? I want to see numbers. Since the
performance improvements for these machines are based on numbers, I
want to see any argument for degradation also in numbers. Both
absolute numbers and relative numbers.
(Obviously, since -ck doesn't target that kind of thing, it's not
possible at the moment to prove how useful -ck is with numbers. But
surely we can measure how much of a "negative" impact it does have on
everything else. If it isn't hurting anyone, then what's wrong with
it?)
Unfortunately, the argument that "xyz" is just as bad/worse is hardly
useful from what I've seen in kernel talks... maybe we're missing
something here.
Is it possible to command that a program's memory be put into swap on
purpose, without "forcing" it into swap by consuming memory elsewhere?
(Maybe such a feature could be used to time how long it takes to restore
from swap, by timing how long the first or second display update takes on
some typically-used GUI program that takes a while to draw its GUI.)
Just a couple of additional thoughts.
> --
> -ck
>
--
~Mike
- Just the crazy copy cat.
P.S. For anyone who cares and is sending replies to my messages, I am
subscribed to the ck ML, but not linux-kernel. So if you want me to
see it and it's in the latter, CC me.
Con Kolivas wrote:
> Maintainers are far too busy off testing code for
> 16+ cpus, petabytes of disk storage and so on to try it for themselves. Plus
> they worry incessantly that my patches may harm those precious machines'
> performance...
>
But the one I like, mm-filesize_dependant_lru_cache_add.patch,
has an on-off switch.
In other words it adds an option to do things differently.
How could that possibly affect any workload if that option
isn't enabled?
On Sunday 18 February 2007 05:45, Chuck Ebbert wrote:
> Con Kolivas wrote:
> > Maintainers are far too busy off testing code for
> > 16+ cpus, petabytes of disk storage and so on to try it for themselves.
> > Plus they worry incessantly that my patches may harm those precious
> > machines' performance...
>
> But the one I like, mm-filesize_dependant_lru_cache_add.patch,
> has an on-off switch.
>
> In other words it adds an option to do things differently.
> How could that possibly affect any workload if that option
> isn't enabled?
Swap prefetch not only has an on-off switch, you can even build your kernel
without it entirely so it costs even less than this patch... I'm not going to
support the argument that it might be built into the kernel and enabled
unknowingly and _then_ cause overhead.
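(From memory, so double-check the exact names against the patch itself: the
build-time switch is CONFIG_SWAP_PREFETCH, and the runtime one can be flipped
with

echo 0 > /proc/sys/vm/swap_prefetch
echo 1 > /proc/sys/vm/swap_prefetch

so nobody carries any overhead they didn't ask for.)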
Oh and this patch depends on some of the code from the swap prefetch patch
too. I guess since they're so suspicious of swap prefetch the swap prefetch
patch can be ripped apart for the portions of code required to make this
patch work.
Do you still want this patch for mainline?...
--
-ck
On 2/17/07, Con Kolivas <[email protected]> wrote:
> On Sunday 18 February 2007 05:45, Chuck Ebbert wrote:
> > Con Kolivas wrote:
> > > Maintainers are far too busy off testing code for
> > > 16+ cpus, petabytes of disk storage and so on to try it for themselves.
> > > Plus they worry incessantly that my patches may harm those precious
> > > machines' performance...
> >
> > But the one I like, mm-filesize_dependant_lru_cache_add.patch,
> > has an on-off switch.
> >
> > In other words it adds an option to do things differently.
> > How could that possibly affect any workload if that option
> > isn't enabled?
>
> Swap prefetch not only has an on-off switch, you can even build your kernel
> without it entirely so it costs even less than this patch... I'm not going to
> support the argument that it might be built into the kernel and enabled
> unknowingly and _then_ cause overhead.
The patch, the way it's written now: does it default to building with
swap prefetch, or without? If the former, maybe it would be more readily
accepted if the latter were the default. (Of course, that defeats the
point for desktop users who add the patch and then wonder why it doesn't
work, but... *shrugs*)
> Oh and this patch depends on some of the code from the swap prefetch patch
> too. I guess since they're so suspicious of swap prefetch the swap prefetch
> patch can be ripped apart for the portions of code required to make this
> patch work.
While I'm all for putting Con's patches into mainline, I'm worried
about what happens if you rip swap prefetch apart and (if the
unthinkable happens) somebody accidentally omits something or worse.
Then mainline would have even more reason to be suspicious of code
from you, Con. Unless you have already split the swap prefetch patch into
the parts that mm-filesize_dependant_lru_cache_add.patch depends on and
the parts it doesn't, and checked that it's "sane" to use them
independently...
(I'd be WAY more suspicious of having "half" of swap prefetch than
having all of it. I hope that most of mainline agrees with me, but I
have a sneaking suspicion they don't.)
In any case, would this "ripping" make the reverse happen, i.e.
swap prefetch becoming dependent on
mm-filesize_dependant_lru_cache_add.patch instead?
--
~Mike
- Just the crazy copy cat.
On Sun, 18 Feb 2007 08:00:06 +1100 Con Kolivas <[email protected]> wrote:
> On Sunday 18 February 2007 05:45, Chuck Ebbert wrote:
> ...
> > But the one I like, mm-filesize_dependant_lru_cache_add.patch,
> > has an on-off switch.
> >
>
> ...
>
> Do you still want this patch for mainline?...
Don't think so. The problems I see are:
- It's a system-wide knob. In many situations this will do the wrong
thing. Controlling pagecache should be per-process.
- Its heuristics for working out when to invalidate the pagecache will be
too much for some situations and too little for others.
- Whatever we do, there will be some applications in some situations
which are hurt badly by changes like this: they'll do heaps of extra IO.
Generally, the penalties for getting this stuff wrong are very very high:
orders of magnitude slowdowns in the right situations. Which I suspect
will make any system-wide knob ultimately unsuccessful.
The ideal way of getting this *right* is to change every application in the
world to get smart about using sync_page_range() and/or posix_fadvise(),
then to add a set of command-line options to each application in the world
so the user can control its pagecache handling.
Obviously that isn't practical. But what _could_ be done is to put these
pagecache smarts into glibc's read() and write() code. So the user can do:
MAX_PAGECACHE=4M MAX_DIRTY_PAGECACHE=2M rsync foo bar
This will provide pagecache control for pretty much every application. It
has limitations (fork+exec behaviour??) but will be useful.
A kernel-based solution might use new rlimits, but would not be as flexible
or successful as a libc-based one, I suspect.
Andrew Morton writes:
> On Sun, 18 Feb 2007 08:00:06 +1100 Con Kolivas <[email protected]> wrote:
>
>> On Sunday 18 February 2007 05:45, Chuck Ebbert wrote:
>> ...
>> > But the one I like, mm-filesize_dependant_lru_cache_add.patch,
>> > has an on-off switch.
>> >
>>
>> ...
>>
>> Do you still want this patch for mainline?...
>
> Don't think so. The problems I see are:
>
> - It's a system-wide knob. In many situations this will do the wrong
> thing. Controlling pagecache should be per-process.
>
> - Its heuristics for working out when to invalidate the pagecache will be
> too much for some situations and too little for others.
>
> - Whatever we do, there will be some applications in some situations
> which are hurt badly by changes like this: they'll do heaps of extra IO.
>
>
> Generally, the penalties for getting this stuff wrong are very very high:
> orders of magnitude slowdowns in the right situations. Which I suspect
> will make any system-wide knob ultimately unsuccessful.
Rest assured I wasn't interested in pushing this patch for mainline anyway.
-ck users can also rest assured about this patch for the following reasons:
- The usage pattern on a desktop will guarantee that this patch helps 99.9%
of the time rather than hurts. Therefore, this feature is enabled by default
on -ck.
- With the usage pattern on a server of any sort, it will be unknown whether
this patch helps or harms. Therefore, this feature is disabled by default on
-cks.
--
-ck
On 2/18/07, Andrew Morton <[email protected]> wrote:
> Generally, the penalties for getting this stuff wrong are very very high:
> orders of magnitude slowdowns in the right situations. Which I suspect
> will make any system-wide knob ultimately unsuccessful.
>
Yes, they were. Now, it's an extremely light and well-tuned patch.
kprefetchd should only run on a totally idle system now.
> The ideal way of getting this *right* is to change every application in the
> world to get smart about using sync_page_range() and/or posix_fadvise(),
> then to add a set of command-line options to each application in the world
> so the user can control its pagecache handling.
We don't live in a perfect world. :-)
> Obviously that isn't practical. But what _could_ be done is to put these
> pagecache smarts into glibc's read() and write() code. So the user can do:
>
> MAX_PAGECACHE=4M MAX_DIRTY_PAGECACHE=2M rsync foo bar
>
> This will provide pagecache control for pretty much every application. It
> has limitations (fork+exec behaviour??) but will be useful.
Not too useful for interactive applications with unpredictable memory
consumption behaviour, where swap-prefetch still helps.
> A kernel-based solution might use new rlimits, but would not be as flexible
> or successful as a libc-based one, I suspect.
Radoslaw Szkodzinski writes:
> On 2/18/07, Andrew Morton <[email protected]> wrote:
>> Generally, the penalties for getting this stuff wrong are very very high:
>> orders of magnitude slowdowns in the right situations. Which I suspect
>> will make any system-wide knob ultimately unsuccessful.
>>
>
> Yes, they were. Now, it's an extremely light and well-tuned patch.
> kprefetchd should only run on a totally idle system now.
>
>> The ideal way of getting this *right* is to change every application in the
>> world to get smart about using sync_page_range() and/or posix_fadvise(),
>> then to add a set of command-line options to each application in the world
>> so the user can control its pagecache handling.
>
> We don't live in a perfect world. :-)
>
>> Obviously that isn't practical. But what _could_ be done is to put these
>> pagecache smarts into glibc's read() and write() code. So the user can do:
>>
>> MAX_PAGECACHE=4M MAX_DIRTY_PAGECACHE=2M rsync foo bar
>>
>> This will provide pagecache control for pretty much every application. It
>> has limitations (fork+exec behaviour??) but will be useful.
>
> Not too useful for interactive applications with unpredictable memory
> consumption behaviour, where swap-prefetch still helps.
Hey Radoslaw, your points are valid, but Andrew was referring to the large
file tail patch (mm-filesize_dependant_lru_cache_add.patch) in this email.
--
-ck
On 2/16/07, Con Kolivas <[email protected]> wrote:
> This patchset is designed to improve system responsiveness and interactivity.
> It is configurable to any workload but the default -ck patch is aimed at the
> desktop and -cks is available with more emphasis on serverspace.
>
> Apply to 2.6.20
any benchmarks for 2.6.20-ck vs 2.6.20?
mdew . writes:
> On 2/16/07, Con Kolivas <[email protected]> wrote:
>> This patchset is designed to improve system responsiveness and interactivity.
>> It is configurable to any workload but the default -ck patch is aimed at the
>> desktop and -cks is available with more emphasis on serverspace.
>>
>> Apply to 2.6.20
>
> any benchmarks for 2.6.20-ck vs 2.6.20?
Would some -ck user on the mailing list like to perform a set of interbench
benchmarks? They're pretty straight forward to do; see:
http://interbench.kolivas.org
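Roughly, something like this (the exact tarball name depends on whatever
version is current on that page):

tar xjf interbench*.tar.bz2
cd interbench*
make
./interbench

Run it on an otherwise idle machine, preferably as root, once per kernel, and
post both sets of results.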
--
-ck
On Sunday 18 February 2007 13:38, Con Kolivas wrote:
> mdew . writes:
> > On 2/16/07, Con Kolivas <[email protected]> wrote:
> >> This patchset is designed to improve system responsiveness and
> >> interactivity. It is configurable to any workload but the default -ck
> >> patch is aimed at the desktop and -cks is available with more emphasis
> >> on serverspace.
> >>
> >> Apply to 2.6.20
> >
> > any benchmarks for 2.6.20-ck vs 2.6.20?
>
> Would some -ck user on the mailing list like to perform a set of interbench
> benchmarks? They're pretty straight forward to do; see:
>
> http://interbench.kolivas.org
I couldn't take down any lower-powered machine for these benchmarks... A lower
powered single-CPU machine would be better for this. Feel free to throw any
other benchmarks at it.
This Core 2 Duo at 2.4 GHz with 2 GB of RAM and a 7200 rpm, 16 MB cache hard
drive is not very discriminating, but here are the results (use a fixed font
to view):
Using 2392573 loops per ms, running every load for 30 seconds
Benchmarking kernel 2.6.20 at datestamp 200702181608
--- Benchmarking simulated cpu of Audio in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.002 +/- 0.00329 0.006 100 100
Video 0.002 +/- 0.00356 0.01 100 100
X 0.007 +/- 0.0819 2 100 100
Burn 0.002 +/- 0.00335 0.005 100 100
Write 0.105 +/- 1.55 35.5 100 100
Read 0.006 +/- 0.00707 0.014 100 100
Compile 0.312 +/- 5.61 135 99.8 99.8
Memload 0.01 +/- 0.037 0.72 100 100
--- Benchmarking simulated cpu of Video in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.004 +/- 0.00431 0.017 100 100
X 0.006 +/- 0.00608 0.013 100 100
Burn 0.003 +/- 0.00392 0.012 100 100
Write 0.097 +/- 3.44 144 99.8 99.8
Read 0.005 +/- 0.00523 0.013 100 100
Compile 0.059 +/- 1.2 36.7 99.8 99.8
Memload 0.01 +/- 0.0767 1.85 100 100
--- Benchmarking simulated cpu of X in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.056 +/- 0.379 3 98.4 96.7
Video 0.033 +/- 0.258 2 98.7 97.7
Burn 0 +/- 0 0 100 100
Write 0.051 +/- 0.67 11.2 99.3 99
Read 0.053 +/- 0.384 3 98 96.7
Compile 0.139 +/- 2.29 39 99 98.6
Memload 0.166 +/- 2.25 39 98.1 97.1
--- Benchmarking simulated cpu of Gaming in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU
None 0.551 +/- 0.553 0.665 99.5
Video 0.594 +/- 0.596 0.656 99.4
X 0.019 +/- 0.317 5.49 100
Burn 179 +/- 186 193 35.9
Write 1.16 +/- 5.87 69.2 98.9
Read 0.876 +/- 0.884 1.31 99.1
Compile 193 +/- 209 499 34.1
Memload 1.11 +/- 1.59 15.3 98.9
Using 2392573 loops per ms, running every load for 30 seconds
Benchmarking kernel 2.6.20-ck1 at datestamp 200702181542
--- Benchmarking simulated cpu of Audio in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.002 +/- 0.00333 0.005 100 100
Video 0.004 +/- 0.0309 0.717 100 100
X 0.008 +/- 0.124 2.99 100 100
Burn 0.002 +/- 0.00339 0.005 100 100
Write 0.03 +/- 0.228 2.99 100 100
Read 0.005 +/- 0.00636 0.017 100 100
Compile 0.041 +/- 0.268 3.06 100 100
Memload 0.31 +/- 4.83 6.3 100 100
--- Benchmarking simulated cpu of Video in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.003 +/- 0.00383 0.014 100 100
X 0.008 +/- 0.143 5.99 100 100
Burn 0.003 +/- 0.00383 0.009 100 100
Write 0.023 +/- 0.219 4.57 100 100
Read 0.004 +/- 0.0047 0.017 100 100
Compile 0.027 +/- 0.214 3.73 100 100
Memload 0.015 +/- 0.113 3 100 100
--- Benchmarking simulated cpu of X in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0 +/- 0.000351 0.006 100 100
Video 0 +/- 0 0 100 100
Burn 0 +/- 0.000293 0.005 100 100
Write 0 +/- 5.85e-05 0.001 100 100
Read 0.003 +/- 0.0585 1 100 99.7
Compile 0.15 +/- 1.09 14 97.3 95.6
Memload 0 +/- 0.00105 0.018 100 100
--- Benchmarking simulated cpu of Gaming in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU
None 0 +/- 0 0 100
Video 0.006 +/- 0.115 1.99 100
X 0 +/- 0.00283 0.049 100
Burn 187 +/- 225 270 34.8
Write 1.38 +/- 6.8 99.4 98.6
Read 0 +/- 0 0 100
Compile 205 +/- 246 440 32.8
Memload 0.005 +/- 0.0295 0.282 100
--
-ck
On Sun, 2007-02-18 at 13:38 +1100, Con Kolivas wrote:
> mdew . writes:
>
> > On 2/16/07, Con Kolivas <[email protected]> wrote:
> >> This patchset is designed to improve system responsiveness and interactivity.
> >> It is configurable to any workload but the default -ck patch is aimed at the
> >> desktop and -cks is available with more emphasis on serverspace.
> >>
> >> Apply to 2.6.20
> >
> > any benchmarks for 2.6.20-ck vs 2.6.20?
>
> Would some -ck user on the mailing list like to perform a set of interbench
> benchmarks? They're pretty straight forward to do; see:
>
> http://interbench.kolivas.org
>
> --
> -ck
Here are some benches comparing 2.6.18-4-686 (Debian sid stock) and
2.6.20-ck1-mt1 (2.6.20-ck1 + sched-idleprio-1.11-2.0.patch).
I know it's not what was asked for, but it might be useful for anyone
running Debian kernels who is considering the ck patches :)
Take a look.
-r
--
Rodney "meff" Gordon II -*- [email protected]
Systems Administrator / Coder Geek -*- Open yourself to OpenSource
On Sun, 2007-02-18 at 00:15 -0600, Rodney Gordon II wrote:
> On Sun, 2007-02-18 at 13:38 +1100, Con Kolivas wrote:
> > mdew . writes:
> >
> > > On 2/16/07, Con Kolivas <[email protected]> wrote:
> > >> This patchset is designed to improve system responsiveness and interactivity.
> > >> It is configurable to any workload but the default -ck patch is aimed at the
> > >> desktop and -cks is available with more emphasis on serverspace.
> > >>
> > >> Apply to 2.6.20
> > >
> > > any benchmarks for 2.6.20-ck vs 2.6.20?
> >
> > Would some -ck user on the mailing list like to perform a set of interbench
> > benchmarks? They're pretty straight forward to do; see:
> >
> > http://interbench.kolivas.org
> >
> > --
> > -ck
>
> Here are some benches comparing 2.6.18-4-686 (Debian sid stock) and
> 2.6.20-ck1-mt1 (2.6.20-ck1 + sched-idleprio-1.11-2.0.patch)
>
> I know it's not what was asked for, but it might be useful for review of
> anyone using Debian kernels considering ck patches :)
>
> Take a look.
>
> -r
System specs, by the way:
Pentium D 830 3.0 GHz dual-core, 1.5 GB RAM, 7200 RPM 16 MB cache SATA3 drive
using AHCI with NCQ on.
--
Rodney "meff" Gordon II -*- [email protected]
Systems Administrator / Coder Geek -*- Open yourself to OpenSource
Hi Con,
Con Kolivas wrote:
> Would some -ck user on the mailing list like to perform a set of
> interbench benchmarks? They're pretty straight forward to do; see:
Here are my results for an AMD 3200+ (2.2 GHz, uniprocessor) with 1 GB RAM and a 10,000 RPM SATA drive, after clean boots into runlevel 1.
2.6.19.1-ck1 data are included at the bottom.
Using 1116777 loops per ms, running every load for 30 seconds
Benchmarking kernel 2.6.20 at datestamp 200702172323
--- Benchmarking simulated cpu of Audio in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.003 +/- 0.00314 0.005 100 100
Video 0.002 +/- 0.00254 0.006 100 100
X 0.996 +/- 2.57 10 100 100
Burn 0.002 +/- 0.00241 0.011 100 100
Write 0.053 +/- 0.6 10 100 100
Read 0.009 +/- 0.0117 0.114 100 100
Compile 0.023 +/- 0.368 9.01 100 100
Memload 0.013 +/- 0.0578 0.948 100 100
--- Benchmarking simulated cpu of Video in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.002 +/- 0.00261 0.004 100 100
X 3.5 +/- 8.38 26.7 100 86.7
Burn 0.002 +/- 0.00235 0.006 100 100
Write 0.072 +/- 1.18 26.7 100 99.7
Read 0.007 +/- 0.00891 0.063 100 100
Compile 0.031 +/- 0.677 21.7 100 99.9
Memload 0.014 +/- 0.0688 1.57 100 100
--- Benchmarking simulated cpu of X in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.026 +/- 0.231 2 98.7 98
Video 10.8 +/- 23.4 69 37.7 30.2
Burn 0.026 +/- 0.231 2 98.7 98
Write 0.543 +/- 3.54 55 89.6 86.8
Read 0.026 +/- 0.231 2 98.7 98
Compile 1.8 +/- 23.7 405 80.5 77.9
Memload 0.029 +/- 0.238 2 98.7 97.7
--- Benchmarking simulated cpu of Gaming in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU
None 0 +/- 0 0 100
Video 63.2 +/- 64.8 66.5 61.3
X 100 +/- 213 1392 49.9
Burn 349 +/- 375 400 22.3
Write 46.4 +/- 112 891 68.3
Read 8.45 +/- 8.63 12.2 92.2
Compile 437 +/- 505 1138 18.6
Memload 15.4 +/- 23.8 159 86.7
Using 1116777 loops per ms, running every load for 30 seconds
Benchmarking kernel 2.6.20-ck1 at datestamp 200702180758
--- Benchmarking simulated cpu of Audio in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.002 +/- 0.00242 0.005 100 100
Video 0.002 +/- 0.00241 0.003 100 100
X 0.206 +/- 0.98 7 100 100
Burn 0.002 +/- 0.00238 0.003 100 100
Write 0.014 +/- 0.204 5 100 100
Read 0.007 +/- 0.00847 0.062 100 100
Compile 0.007 +/- 0.00783 0.062 100 100
Memload 0.036 +/- 0.254 5 100 100
--- Benchmarking simulated cpu of Video in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.002 +/- 0.00247 0.018 100 100
X 0.236 +/- 1.15 16.7 100 99.9
Burn 0.002 +/- 0.00252 0.012 100 100
Write 0.006 +/- 0.041 1 100 100
Read 0.007 +/- 0.0168 0.486 100 100
Compile 0.007 +/- 0.0278 0.643 100 100
Memload 0.031 +/- 0.247 5 100 100
--- Benchmarking simulated cpu of X in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.049 +/- 0.465 7 98.7 97.7
Video 14.1 +/- 26.3 68 36.2 26.7
Burn 0.016 +/- 0.173 2 99.3 98.7
Write 0.413 +/- 1.7 10 90.6 87.2
Read 0.013 +/- 0.141 2 100 99
Compile 0.116 +/- 0.794 8 96.8 95.4
Memload 0.292 +/- 2.51 36 97.4 95.1
--- Benchmarking simulated cpu of Gaming in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU
None 0 +/- 0 0 100
Video 66 +/- 66.2 73 60.2
X 100 +/- 213 1392 49.9
Burn 370 +/- 457 560 21.3
Write 32.1 +/- 47.7 204 75.7
Read 7.08 +/- 7.43 9.78 93.4
Compile 437 +/- 532 820 18.6
Memload 18.7 +/- 31.1 211 84.3
Using 1116777 loops per ms, running every load for 30 seconds
Benchmarking kernel 2.6.19.1-ck1 at datestamp 200702180820
--- Benchmarking simulated cpu of Audio in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.002 +/- 0.00234 0.003 100 100
Video 0.002 +/- 0.00243 0.004 100 100
X 0.235 +/- 1.08 7 100 100
Burn 0.002 +/- 0.00269 0.025 100 100
Write 0.003 +/- 0.00392 0.012 100 100
Read 0.007 +/- 0.00911 0.102 100 100
Compile 0.01 +/- 0.0431 0.637 100 100
Memload 0.029 +/- 0.21 3.88 100 100
--- Benchmarking simulated cpu of Video in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.002 +/- 0.00237 0.008 100 100
X 0.235 +/- 1.15 16.7 100 99.9
Burn 0.002 +/- 0.0025 0.004 100 100
Write 0.004 +/- 0.0379 1.6 100 100
Read 0.006 +/- 0.00959 0.249 100 100
Compile 0.006 +/- 0.0356 1.18 100 100
Memload 0.04 +/- 0.296 6.01 100 100
--- Benchmarking simulated cpu of X in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.006 +/- 0.0817 1 100 99.3
Video 14.2 +/- 26.5 68 36 26.6
Burn 0.006 +/- 0.0817 1 100 99.3
Write 0.036 +/- 0.379 5 99.3 98.3
Read 0.006 +/- 0.0817 1 100 99.3
Compile 0.006 +/- 0.0817 1 100 99.3
Memload 0.079 +/- 0.935 14 99.3 98.3
--- Benchmarking simulated cpu of Gaming in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU
None 0 +/- 0 0 100
Video 66 +/- 66.3 73 60.2
X 100 +/- 209 1292 49.9
Burn 370 +/- 457 560 21.3
Write 22 +/- 23.9 96.9 81.9
Read 7.05 +/- 7.4 9.66 93.4
Compile 428 +/- 518 646 18.9
Memload 19 +/- 31.1 213 84
>
> http://interbench.kolivas.org
>
> --
> -ck
>
On Sun, 18 Feb 2007 13:38:59 +1100
Con Kolivas <[email protected]> wrote:
> mdew . writes:
>
> > On 2/16/07, Con Kolivas <[email protected]> wrote:
> >> This patchset is designed to improve system responsiveness and interactivity.
> >> It is configurable to any workload but the default -ck patch is aimed at the
> >> desktop and -cks is available with more emphasis on serverspace.
> >>
> >> Apply to 2.6.20
> >
> > any benchmarks for 2.6.20-ck vs 2.6.20?
>
> Would some -ck user on the mailing list like to perform a set of interbench
> benchmarks? They're pretty straight forward to do; see:
>
> http://interbench.kolivas.org
>
> --
> -ck
>
> _______________________________________________
> http://ck.kolivas.org/faqs/replying-to-mailing-list.txt
> ck mailing list - mailto: [email protected]
> http://vds.kolivas.org/mailman/listinfo/ck
Hi, here's interbench as run on my Athlon XP 3200+ with 512 MB
RAM in single-user mode (hardly any user processes). The FS is ext3.
This is the vanilla kernel:
----------------------------
Using 1093458 loops per ms, running every load for 30 seconds
Benchmarking kernel 2.6.20-beyondash at datestamp 200702181716
--- Benchmarking simulated cpu of Audio in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.002 +/- 0.00251 0.004 100 100
Video 0.002 +/- 0.00268 0.005 100 100
X 1.42 +/- 3.17 10 100 100
Burn 0.002 +/- 0.00269 0.005 100 100
Write 0.021 +/- 0.203 4.84 100 100
Read 0.012 +/- 0.0148 0.073 100 100
Compile 0.009 +/- 0.0119 0.119 100 100
Memload 0.015 +/- 0.0343 0.494 100 100
--- Benchmarking simulated cpu of Video in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.002 +/- 0.00261 0.024 100 100
X 2.94 +/- 8.24 66.7 99.2 87
Burn 0.002 +/- 0.00267 0.01 100 100
Write 0.051 +/- 0.948 29.5 100 99.9
Read 0.007 +/- 0.009 0.068 100 100
Compile 0.007 +/- 0.0162 0.247 100 100
Memload 0.011 +/- 0.0283 0.637 100 100
--- Benchmarking simulated cpu of X in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 9.7 +/- 19.1 51 45.1 34.9
Video 11.4 +/- 24.1 68 37.5 29.8
Burn 0.033 +/- 0.258 2 98.7 97.7
Write 3.61 +/- 12.5 57 56.9 53.6
Read 0.04 +/- 0.316 3 98.4 97.4
Compile 1.3 +/- 7.38 56 74.1 71.4
Memload 4.29 +/- 12 54 84.2 75.1
--- Benchmarking simulated cpu of Gaming in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU
None 0.202 +/- 2.95 49.9 99.8
Video 66.2 +/- 66.5 73.2 60.2
X 336 +/- 623 3001 22.9
Burn 349 +/- 376 401 22.3
Write 33.2 +/- 73.3 566 75.1
Read 6.68 +/- 7.97 53.3 93.7
Compile 401 +/- 426 906 20
Memload 30.6 +/- 44.6 131 76.6
And this is with ck1-pre1
----------------------------
Using 1093458 loops per ms, running every load for 30 seconds
Benchmarking kernel 2.6.20-beyondash at datestamp 200702181739
--- Benchmarking simulated cpu of Audio in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.002 +/- 0.00243 0.004 100 100
Video 0.002 +/- 0.00268 0.004 100 100
X 0.227 +/- 1.06 7 100 100
Burn 0.002 +/- 0.00271 0.005 100 100
Write 0.02 +/- 0.0995 1.48 100 100
Read 0.011 +/- 0.0131 0.07 100 100
Compile 0.01 +/- 0.0252 0.411 100 100
Memload 0.021 +/- 0.0989 1.61 100 100
--- Benchmarking simulated cpu of Video in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.002 +/- 0.00303 0.062 100 100
X 0.395 +/- 2.2 33.7 99.9 99.1
Burn 0.002 +/- 0.00275 0.004 100 100
Write 0.018 +/- 0.109 3 100 100
Read 0.007 +/- 0.00927 0.139 100 100
Compile 0.007 +/- 0.0372 1.1 100 100
Memload 0.015 +/- 0.0773 1.63 100 100
--- Benchmarking simulated cpu of X in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU % Deadlines Met
None 0.006 +/- 0.0817 1 100 99.3
Video 14.3 +/- 26.8 68 35.7 26.2
Burn 0.166 +/- 1.66 21 98.7 97
Write 0.766 +/- 4.36 35 94.3 91.3
Read 0.106 +/- 1.25 21 98.7 97.3
Compile 0.776 +/- 4.35 48 92.6 89.2
Memload 1.5 +/- 6.83 44 96.4 91.7
--- Benchmarking simulated cpu of Gaming in the presence of simulated ---
Load Latency +/- SD (ms) Max Latency % Desired CPU
None 0.166 +/- 2.88 49.9 99.8
Video 66.2 +/- 66.6 80 60.2
X 101 +/- 202 1221 49.8
Burn 385 +/- 466 561 20.6
Write 30.6 +/- 54.1 175 76.6
Read 6.6 +/- 7.35 15.7 93.8
Compile 419 +/- 510 750 19.3
Memload 16.7 +/- 27.7 180 85.7
Both kernels were compiled with the same config, which is attached.
Voluntary preemption is enabled. Here's lspci
00:00.0 Host bridge: VIA Technologies, Inc. VT8377 [KT400/KT600 AGP] Host Bridge (rev 80)
00:01.0 PCI bridge: VIA Technologies, Inc. VT8237 PCI Bridge
00:0a.0 Network controller: RaLink RT2500 802.11g Cardbus/mini-PCI (rev 01)
00:0b.0 Multimedia audio controller: Creative Labs SB Audigy (rev 04)
00:0b.1 Input device controller: Creative Labs SB Audigy Game Port (rev 04)
00:0b.2 FireWire (IEEE 1394): Creative Labs SB Audigy FireWire Port (rev 04)
00:0f.0 RAID bus controller: VIA Technologies, Inc. VIA VT6420 SATA RAID Controller (rev 80)
00:0f.1 IDE interface: VIA Technologies, Inc. VT82C586A/B/VT82C686/A/B/VT823x/A/C PIPC Bus Master IDE (rev 06)
00:10.0 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
00:10.1 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
00:10.2 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
00:10.3 USB Controller: VIA Technologies, Inc. VT82xxxxx UHCI USB 1.1 Controller (rev 81)
00:10.4 USB Controller: VIA Technologies, Inc. USB 2.0 (rev 86)
00:11.0 ISA bridge: VIA Technologies, Inc. VT8237 ISA bridge [KT600/K8T800/K8T890 South]
00:12.0 Ethernet controller: VIA Technologies, Inc. VT6102 [Rhine-II] (rev 78)
01:00.0 VGA compatible controller: nVidia Corporation NV11 [GeForce2 MX/MX 400] (rev a1)
(nvidia module was not loaded during the benchmark runs)
Hope that's helpful,
Ash
On 2/16/07, Con Kolivas <[email protected]> wrote:
> This patchset is designed to improve system responsiveness and interactivity.
> It is configurable to any workload but the default -ck patch is aimed at the
> desktop and -cks is available with more emphasis on serverspace.
Hi Con.
I usually don't pay a lot of attention to benchmarks. Responsiveness
under load is much more important to me.
But this is nice: I use FC6 with initng as the boot process manager. With
vanilla 2.6.20 the boot process takes 21 to 23 seconds; with 2.6.20-ck1
(same config, of course), it takes 17 to 19 seconds.
So your patchset has become my patchset of choice.
Regards,
Fabio
On Friday 16 February 2007, Con Kolivas wrote:
>This patchset is designed to improve system responsiveness and
> interactivity. It is configurable to any workload but the default -ck
> patch is aimed at the desktop and -cks is available with more emphasis
> on serverspace.
>
>Apply to 2.6.20
>http://www.kernel.org/pub/linux/kernel/people/ck/patches/2.6/2.6.20/2.6.20-ck1/patch-2.6.20-ck1.bz2
>
>or server version
>http://www.kernel.org/pub/linux/kernel/people/ck/patches/2.6/2.6.20/2.6.20-ck1/patch-2.6.20-cks1.bz2
>
>web:
>http://kernel.kolivas.org
>
>all patches:
>http://www.kernel.org/pub/linux/kernel/people/ck/patches/
>
>
>Split patches available.
>
>Full patchlist:
>
>sched-staircase-17.patch
>sched-staircase17_interactive_tunable.patch
>sched-staircase17_compute_tunable.patch
>sched-range.patch
>sched-iso-4.7.patch
>track_mutexes-1.patch
>sched-idleprio-1.1.patch
>sched-limit_policy_changes.patch
>sched-make_softirqd_batch.patch
>cfq-ioprio_inherit_rt_class.patch
>cfq-iso_idleprio_ionice.patch
>hz-default_1000.patch
>hz-no_default_250.patch
>sched-add-above-background-load-function.patch
>mm-swap_prefetch-34.patch
>mm-convert_swappiness_to_mapped.patch
>mm-lots_watermark.diff
>mm-kswapd_inherit_prio-1.patch
>mm-prio_dependant_scan-2.patch
>mm-background_scan-2.patch
>mm-idleprio_prio.patch
>mm-filesize_dependant_lru_cache_add.patch
>kconfig-expose_vmsplit_option.patch
>ck1-version.patch
>
I have a problem, Con. The patch itself works fine for me, BUT it doesn't
update the version.h available in
/lib/modules/2.6.20-ck1/source/include/linux to include the -ck1 in the
reported kernel version when trying to build an fglrx driver with the
latest ati driver builder.
Which leaves this error message in /usr/share/fglrx/fglrx-install.log:
[root@coyote fglrx]# cat fglrx-install.log
[Message] Kernel Module : Trying to install a precompiled kernel module.
[Message] Kernel Module : Precompiled kernel module version mismatched.
[Message] Kernel Module : Found kernel module build environment,
generating kernel module now.
ATI module generator V 2.0
==========================
initializing...
Error:
kernel includes at /lib/modules/2.6.20-ck1/build/include do not match
current kernel.
they are versioned as ""
instead of "2.6.20-ck1".
you might need to adjust your symlinks:
- /usr/include
- /usr/src/linux
[Error] Kernel Module : Failed to compile kernel module - please consult
readme.
==========================
Unforch, the installer does not leave a readme behind that I've been able
to find, nor does it report the error on-screen.
The above files are not symlinks here on this FC6 install. And of
course /usr/src/linux does not exist, although I could set it up for the
duration of a rebuild/reinstall cycle of my script.
Can we have a patch to address this? Or should I just hardcode it since
it will never be linked to any other later kernel?
I tried that in the src tree's include/linux/version.h, but it was
refreshed back to the original regex code by the make, so that's not
where to do it obviously. I've also made the symlink in /usr/src, but
since a kernel make re-writes version.h, that didn't help.
What's next?
--
Cheers, Gene
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)
On Sunday 25 February 2007 15:34, Gene Heskett wrote:
> I have a problem, Con. The patch itself works fine for me, BUT it doesn't
> update the version.h available in
> /lib/modules/2.6.20-ck1/source/include/linux to include the -ck1 in the
> reported kernel version when trying to build an fglrx driver with the
> latest ati driver builder.
> Which leaves this error message in /usr/share/fglrx/fglrx-install.log:
> [root@coyote fglrx]# cat fglrx-install.log
> [Message] Kernel Module : Trying to install a precompiled kernel module.
> [Message] Kernel Module : Precompiled kernel module version mismatched.
> [Message] Kernel Module : Found kernel module build environment,
> generating kernel module now.
> ATI module generator V 2.0
> ==========================
> initializing...
> Error:
> kernel includes at /lib/modules/2.6.20-ck1/build/include do not match
> current kernel.
> they are versioned as ""
> instead of "2.6.20-ck1".
> you might need to adjust your symlinks:
> - /usr/include
> - /usr/src/linux
> [Error] Kernel Module : Failed to compile kernel module - please consult
> readme.
> ==========================
> Unforch, the installer does not leave a readme behind that I've been able
> to find, nor does it report the error on-screen.
>
> The above files are not symlinks here on this FC6 install. And of
> course /usr/src/linux does not exist, although I could set it up for the
> duration of a rebuild/reinstall cycle of my script.
>
> Can we have a patch to address this? Or should I just hardcode it since
> it will never be linked to any other later kernel?
>
> I tried that in the src tree's include/linux/version.h, but it was
> refreshed back to the original regex code by the make, so that's not
> where to do it obviously. I've also made the symlink in /usr/src, but
> since a kernel make re-writes version.h, that didn't help.
>
> What's next?
I've never heard of this problem before. As far as I'm aware the EXTRAVERSION
usually is not included in version.h, so it seems to be a limitation of the
fglrx installer? That would mean the fglrx installer wouldn't work on any
kernel with an extra version, even the -rc releases of mainline. So
I'm sorry, but I don't really know what to do about this problem.
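One guess, and it is only a guess about what the installer does: if it greps
UTS_RELEASE out of version.h it will come up empty on any 2.6.18 or later
tree, -ck or not, because the release string moved to utsrelease.h. You can
check what your build tree reports with something like:

grep UTS_RELEASE /lib/modules/2.6.20-ck1/build/include/linux/utsrelease.h
grep UTS_RELEASE /lib/modules/2.6.20-ck1/build/include/linux/version.h

The first should print the full "2.6.20-ck1" string and the second nothing.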
--
-ck
On Sunday 25 February 2007, Con Kolivas wrote:
>On Sunday 25 February 2007 15:34, Gene Heskett wrote:
>> I have a problem, Con. The patch itself works fine for me, BUT it
[...]
>> Can we have a patch to address this? Or should I just hardcode it
>> since it will never be linked to any other later kernel?
>>
>> I tried that in the src tree's include/linux/version.h, but it was
>> refreshed back to the original regex code by the make, so that's not
>> where to do it obviously. I've also made the symlink in /usr/src, but
>> since a kernel make re-writes version.h, that didn't help.
>>
>> What's next?
>
>I've never heard of this problem before. As far as I'm aware the
> EXTRAVERSION usually is not included in version.h, so it seems to be a
> limitation of the fglrx installer? That would mean the fglrx installer
> wouldn't work on any kernel with an extra version, even the -rc
> releases of mainline. So I'm sorry, but I don't really know what to
> do about this problem.
Well, FWIW, I rebooted to 2.6.20 after looking at the error message again.
I didn't expect it to work, and it didn't. Attempting to build it
from a console before I had ever run startx, the error message is that the
kernel includes are versioned as "" where the kernel is "2.6.20".
========================================
[Message] Kernel Module : Trying to install a precompiled kernel module.
[Message] Kernel Module : Precompiled kernel module version mismatched.
[Message] Kernel Module : Found kernel module build environment,
generating kernel module now.
ATI module generator V 2.0
==========================
initializing...
Error:
kernel includes at /lib/modules/2.6.20/build/include do not match current
kernel.
they are versioned as ""
instead of "2.6.20".
you might need to adjust your symlinks:
- /usr/include
- /usr/src/linux
[Error] Kernel Module : Failed to compile kernel module - please consult
readme.
=======================
Versioning-related options in the build .config:
[root@coyote linux-2.6.20]# grep VERSION .config
CONFIG_LOCALVERSION=""
# CONFIG_LOCALVERSION_AUTO is not set
# CONFIG_MODVERSIONS is not set
CONFIG_MODULE_SRCVERSION_ALL=y
So it appears the installer simply isn't capable of finding the version
information at all.
So much for ATI's Linux support; they can't even throw working code over
the fence. I've ordered an nvidia 6800 card for the next round of
testing. 2+ years of screwing around with garden-slug-speed video is
enough.
Thanks for the reply Con, I appreciate it.
--
Cheers, Gene
"There are four boxes to be used in defense of liberty:
soap, ballot, jury, and ammo. Please use in that order."
-Ed Howdershelt (Author)