2002-01-02 09:35:26

by Brian Litzinger

Subject: Linux 2.4.17 vs 2.2.19 vs rml new VM

I'd like to say that as of 2.4.17 w/preempt patch, the linux kernel
seems again to perform as well as 2.2.19 for interactive use and
reliability, at least in my use.

2.4.17 still croaks running some of the giant memory applications
that I run successfully on 2.2.19. (Machines with 2GB of RAM
running 3GB+ apps.)

I tried rmap-10 new VM and under my typical load my desktop machine
froze repeatedly. Seemed the memory pool was going down the drain
before the freeze. Meaning apps were failing and getting stuck in
various odd states.

No doubt, preempt and rmap-10 are incompatible, but I'm not going to
give up the preempt patch any time soon.

All in all 2.4.17 w/preempt is very satisfactory.

--
Brian Litzinger <[email protected]>

Copyright (c) 2002 By Brian Litzinger, All Rights Reserved


2002-01-02 09:57:53

by Alan

Subject: Re: Linux 2.4.17 vs 2.2.19 vs rml new VM

> I tried rmap-10 new VM and under my typical load my desktop machine
> froze repeatedly. Seemed the memory pool was going down the drain
> before the freeze. Meaning apps were failing and getting stuck in
> various odd states.
>
> No doubt, preempt and rmap-10 are incompatible, but I'm not going to
> give up the preempt patch any time soon.

I suspect it's rmap-10, not the pre-empt patch. If you have the
time/inclination, then testing just that load with rmap-10a (the fixed
rmap-10) would be interesting, just to know which bit is the buggy half.

Similarly, the low-latency patch, which on the whole seems to give better
results than the preempt patches, is much less likely to cause problems, as
it doesn't really change the system semantics in the same kind of way.
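
To make the difference concrete: the low-latency patch adds explicit
scheduling points inside long kernel loops, while pre-empt lets the
scheduler interrupt kernel code almost anywhere. A rough userspace
sketch of the scheduling-point idea (purely illustrative, not code
from either patch):

    #include <sched.h>
    #include <stdio.h>

    /* A long scan offers the CPU to other tasks at fixed intervals, so
     * the worst-case delay before a waiting task runs is bounded by the
     * work done between two checks. Kernel preemption removes the need
     * for such checks by allowing interruption almost anywhere.
     */
    static void long_scan_low_latency_style(int nitems)
    {
            int i;
            for (i = 0; i < nitems; i++) {
                    /* ... one unit of work on item i ... */
                    if (i && i % 1024 == 0)
                            sched_yield();  /* explicit scheduling point */
            }
    }

    int main(void)
    {
            long_scan_low_latency_style(1 << 20);
            puts("done");
            return 0;
    }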

2002-01-02 12:26:11

by Rik van Riel

Subject: Re: Linux 2.4.17 vs 2.2.19 vs rml new VM

On Wed, 2 Jan 2002 [email protected] wrote:

> I tried rmap-10 new VM and under my typical load my desktop machine
> froze repeatedly. Seemed the memory pool was going down the drain
> before the freeze. Meaning apps were failing and getting stuck in
> various odd states.

There's a stupid logic inversion bug in rmap-10, which is
fixed in rmap-10a. Andrew Morton tracked it down about an
hour after I released rmap-10.

Basically, in wakeup_kswapd() user processes go to sleep
if the pressure on the VM is _really_ high *and* the user
process has all the same GFP options set as kswapd itself,
so the process can sleep on kswapd:

        if ((gfp_mask & GFP_KSWAPD) == GFP_KSWAPD)
                return;

Thinking about it, rmap-10a doesn't do the right thing
either; releasing patches at 4 am isn't the right thing ;)

In vmscan.c, line 707 _should_ be:

        if ((gfp_mask & GFP_KSWAPD) != GFP_KSWAPD)
                return;

This way tasks which cannot safely sleep on kswapd will
return immediately, allowing only tasks which _can_
sleep on kswapd to go for a break.
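
To make the inversion concrete, here is a tiny userspace demonstration.
The flag values are made up for illustration; the real GFP_* bits live
in the kernel headers:

    #include <stdio.h>

    /* Hypothetical stand-ins for the kernel's GFP_* constants.
     * GFP_KSWAPD represents the full set of GFP options kswapd
     * itself runs with; GFP_ATOMIC callers may not sleep at all.
     */
    #define __GFP_WAIT  0x01
    #define __GFP_IO    0x02
    #define GFP_KSWAPD  (__GFP_WAIT | __GFP_IO)
    #define GFP_ATOMIC  0x00

    /* The rmap-10a check: callers holding all of kswapd's GFP bits
     * return early, so only the callers that must NOT sleep fall
     * through to the sleep-on-kswapd path -- exactly backwards.
     */
    static int sleeps_buggy(unsigned int gfp_mask)
    {
            if ((gfp_mask & GFP_KSWAPD) == GFP_KSWAPD)
                    return 0;
            return 1;
    }

    /* The corrected check: callers lacking any of kswapd's GFP bits
     * return early; only callers that can safely sleep fall through.
     */
    static int sleeps_fixed(unsigned int gfp_mask)
    {
            if ((gfp_mask & GFP_KSWAPD) != GFP_KSWAPD)
                    return 0;
            return 1;
    }

    int main(void)
    {
            printf("GFP_ATOMIC caller reaches sleep path: buggy=%d fixed=%d\n",
                   sleeps_buggy(GFP_ATOMIC), sleeps_fixed(GFP_ATOMIC));
            printf("GFP_KSWAPD caller reaches sleep path: buggy=%d fixed=%d\n",
                   sleeps_buggy(GFP_KSWAPD), sleeps_fixed(GFP_KSWAPD));
            return 0;
    }

With the rmap-10a check, an atomic caller gets throttled and a
kswapd-capable caller does not; the corrected check gives the
intended behaviour.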

Oh well, time for testing and releasing rmap-11 ;)

regards,

Rik
--
Shortwave goes a long way: irc.starchat.net #swl

http://www.surriel.com/ http://distro.conectiva.com/

2002-01-02 19:08:10

by jjs

Subject: Re: Linux 2.4.17 vs 2.2.19 vs rml new VM

Just my $0.02 from the peanut gallery -

It would be interesting if you were to compare and
contrast 2.4.17-preempt with 2.4.17-low-latency.

I find the low latency patch makes a noticeable
difference in e.g. q3a and rtcw - OTOH I have
not been able to discern any tangible difference
from the stock kernel when using -preempt.

cu

jjs

[email protected] wrote:

>I'd like to say that as of 2.4.17 w/preempt patch, the linux kernel
>seems again to perform as well as 2.2.19 for interactive use and
>reliability, at least in my use.
>
>2.4.17 still croaks running some of the giant memory applications
>that I run successfully on 2.2.19. (Machines with 2GB of RAM
>running 3GB+ apps.)
>
>I tried rmap-10 new VM and under my typical load my desktop machine
>froze repeatedly. Seemed the memory pool was going down the drain
>before the freeze. Meaning apps were failing and getting stuck in
>various odd states.
>
>No doubt, preempt and rmap-10 are incompatible, but I'm not going to
>give up the preempt patch any time soon.
>
>All in all 2.4.17 w/preempt is very satisfactory.
>


2002-01-02 20:41:36

by Alan

Subject: Re: Linux 2.4.17 vs 2.2.19 vs rml new VM

> I find the low latency patch makes a noticeable
> difference in e.g. q3a and rtcw - OTOH I have
> not been able to discern any tangible difference
> from the stock kernel when using -preempt.

The measurements I've seen put lowlatency ahead of pre-empt in quality
of results. Since low latency fixes some of the locked latencies, it might
be interesting for someone with time to benchmark:

vanilla
low latency
pre-empt
both together

2002-01-02 23:14:44

by Dieter Nützel

Subject: Re: Linux 2.4.17 vs 2.2.19 vs rml new VM

On Tuesday, 2 January 2002 20:50, Alan Cox wrote:
> > I find the low latency patch makes a noticeable
> > difference in e.g. q3a and rtcw - OTOH I have
> > not been able to discern any tangible difference
> > from the stock kernel when using -preempt.
>
> The measurements I've seen put lowlatency ahead of pre-empt in quality
> of results. Since low latency fixes some of the locked latencies it might
> be interesting for someone with time to benchmark
>
> vanilla
> low latency
> pre-empt
> both together

Don't forget that you have to use preempt-kernel-rml + lock-break-rml to
achieve the same as (or more than) the low-latency patch.

Taken from Robert's page; I've been running it for some weeks now.

[-]
Lock breaking for the Preemptible Kernel
lock-break-rml-2.4.15-1
lock-break-rml-2.4.16-3
lock-break-rml-2.4.17-2
lock-break-rml-2.4.18-pre1-1
README
ChangeLog
With the preemptible kernel, the need for explicit scheduling points, as in
the low-latency patches, is gone. However, since we cannot preempt while
locks are held, we can take a model similar to low-latency and "break" (drop
and immediately reacquire) locks to improve system response. The trick is
finding when and where we can safely break the locks (periods of quiescence)
and how to safely recover. The majority of the lock breaking is in the VM and
VFS code. This patch is for users with strong system response requirements
affected by the worst-case latencies caused by long-held locks.
[-]
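
For illustration, the lock-break pattern described above boils down to
something like the following userspace pthread sketch (my own sketch,
not Robert's kernel code, which works on spinlocks and the kernel's
scheduler):

    #include <pthread.h>
    #include <sched.h>

    static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Work through many items under one lock, but at points where the
     * data is consistent (quiescent), drop the lock, let any waiters
     * in, and reacquire. This bounds the worst-case time another
     * thread can spend blocked on big_lock.
     */
    static void scan_items(int nitems)
    {
            int i;
            pthread_mutex_lock(&big_lock);
            for (i = 0; i < nitems; i++) {
                    /* ... work on item i; structure consistent here ... */
                    if (i && i % 128 == 0) {
                            pthread_mutex_unlock(&big_lock);
                            sched_yield();      /* give waiters a turn */
                            pthread_mutex_lock(&big_lock);
                            /* State may have changed while the lock was
                             * dropped; real code must revalidate it.
                             */
                    }
            }
            pthread_mutex_unlock(&big_lock);
    }

    int main(void)
    {
            scan_items(1 << 16);
            return 0;
    }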

Regards,
Dieter

--
Dieter Nützel
Graduate Student, Computer Science

University of Hamburg
@home: [email protected]

2002-01-02 23:51:34

by jjs

Subject: Re: Linux 2.4.17 vs 2.2.19 vs rml new VM

Well, it is possible that with the several patches
you mention I might see results similar to
what I now see with the low-latency patch.

However -

The preempt patch does NOT play well with the
tux webserver, which I am using. So, preempt is
not an option for me until and unless it is cleaned
up to allow cooperation with tux.

tux and low-latency get along just fine.

cu

jjs

Dieter Nützel wrote:

>On Tuesday, 2 January 2002 20:50, Alan Cox wrote:
>
>>>I find the low latency patch makes a noticeable
>>>difference in e.g. q3a and rtcw - OTOH I have
>>>not been able to discern any tangible difference
>>>from the stock kernel when using -preempt.
>>>
>>The measurements I've seen put lowlatency ahead of pre-empt in quality
>>of results. Since low latency fixes some of the locked latencies it might
>>be interesting for someone with time to benchmark
>>
>> vanilla
>> low latency
>> pre-empt
>> both together
>>
>
>Don't forget that you have to use preempt-kernel-rml + lock-break-rml to
>achieve the same as (or more than) the low-latency patch.
>
>Taken from Robert's page; I've been running it for some weeks now.
>
>[-]
>Lock breaking for the Preemptible Kernel
>lock-break-rml-2.4.15-1
>lock-break-rml-2.4.16-3
>lock-break-rml-2.4.17-2
>lock-break-rml-2.4.18-pre1-1
>README
>ChangeLog
>With the preemptible kernel, the need for explicit scheduling points, as in
>the low-latency patches, is gone. However, since we cannot preempt while
>locks are held, we can take a model similar to low-latency and "break" (drop
>and immediately reacquire) locks to improve system response. The trick is
>finding when and where we can safely break the locks (periods of quiescence)
>and how to safely recover. The majority of the lock breaking is in the VM and
>VFS code. This patch is for users with strong system response requirements
>affected by the worst-case latencies caused by long-held locks.
>[-]
>
>Regards,
> Dieter
>