2001-12-20 13:30:54

by christian e

Subject: minimizing swap usage

Hi, all

Can someone give me a pointer on how I can avoid something like this:

foo@bar]$ free -m
             total       used       free     shared    buffers     cached
Mem:           249        245          4          0          6        136
-/+ buffers/cache:        102        147
Swap:          243         89        153

What's with all the caching when my apps crawl because the system is
swapping so much? I've tried adjusting the /proc/sys/vm/kswapd
parameters, but that doesn't seem to help. I'd like to postpone
swapping until the machine is almost out of physical memory.
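
For the record, this is the kind of thing I tried, with no visible
effect (done as root; the three values are just examples, and the
exact knobs in that file vary between kernel versions -- see
Documentation/sysctl/vm.txt for yours):

foo@bar]$ cat /proc/sys/vm/kswapd
foo@bar]$ echo "512 32 8" > /proc/sys/vm/kswapd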

best regards

Christian




2001-12-20 14:06:01

by M. Edward Borasky

Subject: Re: minimizing swap usage

On Thu, 20 Dec 2001, christian e wrote:

> Hi, all
>
> Can someone give me a pointer on how I can avoid something like this:
>
> foo@bar]$ free -m
>              total       used       free     shared    buffers     cached
> Mem:           249        245          4          0          6        136
> -/+ buffers/cache:        102        147
> Swap:          243         89        153
>
> What's with all the caching when my apps crawl because the system is
> swapping so much? I've tried adjusting the /proc/sys/vm/kswapd
> parameters, but that doesn't seem to help. I'd like to postpone
> swapping until the machine is almost out of physical memory.

This may seem counterintuitive, but postponing swapping / cache flushing
to disk till the last possible moment is counterproductive. It's a
little like procrastination in the time management world -- when the
time finally comes when you *have* to flush stuff out to disk, your poor
daemons / kernel threads go catatonic trying to keep up, and you end up
both CPU-bound and I/O-bound. It is far better to have enough free
memory available to satisfy the demand for pages, even if that means
*raising* the watermarks, *more* swapping, and smaller page caches.
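
Concretely, on kernels that still expose the free-page watermarks as a
sysctl -- 2.2 and some 2.4 trees have /proc/sys/vm/freepages holding a
min/low/high triple, in pages -- raising them looks something like this
(as root; the numbers are purely illustrative, not a recommendation):

$ cat /proc/sys/vm/freepages
$ echo "1024 2048 3072" > /proc/sys/vm/freepages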
--
M. Edward Borasky

[email protected]
http://www.borasky-research.net

When puns are outlawed, only inlaws will have gnus.

2001-12-25 11:02:20

by Nicholas Knight

Subject: Re: minimizing swap usage

On Thursday 20 December 2001 06:05 am, M. Edward (Ed) Borasky wrote:
> On Thu, 20 Dec 2001, christian e wrote:
> > Hi, all
> >
> > Can someone give me a pointer on how I can avoid something like
> > this:
> >
> > foo@bar]$ free -m
> >              total       used       free     shared    buffers     cached
> > Mem:           249        245          4          0          6        136
> > -/+ buffers/cache:        102        147
> > Swap:          243         89        153
> >
> > What's with all the caching when my apps crawl because the system
> > is swapping so much? I've tried adjusting the /proc/sys/vm/kswapd
> > parameters, but that doesn't seem to help. I'd like to postpone
> > swapping until the machine is almost out of physical memory.
>
> This may seem counterintuitive, but postponing swapping / cache
> flushing to disk till the last possible moment is counterproductive.
> It's a little like procrastination in the time management world --

Why not add a config option to choose between code for two behaviors
(1 being the default, of course):
1. Current behavior: usually a Good Thing, sometimes a Bad Thing. I've
had apps that had to be paged back in from swap space in their
entirety while I still had plenty of "available" RAM left ("available"
meaning free plus cache/buffers). It seems the kernel puts a higher
priority on caching and buffers than on memory that hasn't been
accessed in a while, which, as I said, is usually good, but not always.

2. Don't swap ANYTHING to disk until available RAM drops below, say,
15%. And put the cache on a sanity check: if the system is going to
swap to disk because available RAM has dropped below 15%, and the
cache makes up more than, say, 45% of RAM, start dropping the oldest
stuff in the cache to free up RAM instead of swapping. (I'm assuming
128-256MB+ of RAM here; for anything smaller, the default is probably
best.)

At the very least, I'd like to see #2 tried, if someone who knows the
VM system has time to spare on it -- a rough sketch of what I mean is
below. Cache/swap practices in the kernel have been bugging me for a
long time.
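
To make that concrete, here's a rough userspace sketch of the #2
policy. It only *reports* what the policy would do -- the real
decision would have to live in the kernel's reclaim path -- and the
15%/45% thresholds are the same made-up numbers as above:

#!/bin/sh
# Report what the proposed "behavior 2" policy would decide, based on
# /proc/meminfo. All thresholds are illustrative only.
while true
do
    total=`awk '/^MemTotal:/ {print $2}' /proc/meminfo`
    free=`awk '/^MemFree:/ {print $2}' /proc/meminfo`
    buffers=`awk '/^Buffers:/ {print $2}' /proc/meminfo`
    cached=`awk '/^Cached:/ {print $2}' /proc/meminfo`
    avail=`expr $free + $buffers + $cached`
    if [ `expr $avail \* 100 / $total` -ge 15 ]; then
        echo "policy: plenty of available RAM -- swap nothing"
    elif [ `expr $cached \* 100 / $total` -gt 45 ]; then
        echo "policy: drop oldest cache pages instead of swapping"
    else
        echo "policy: genuinely low on RAM -- swap as usual"
    fi
    sleep 5
done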

2001-12-25 11:50:21

by Nicholas Knight

Subject: Re: minimizing swap usage

On Tuesday 25 December 2001 07:42 am, vda wrote:
> On Tuesday 25 December 2001 09:02, Nicholas Knight wrote:
> > > > What's with all the caching when my apps crawl because the
> > > > system is swapping so much? I've tried adjusting the
> > > > /proc/sys/vm/kswapd parameters, but that doesn't seem to help.
> > > > I'd like to postpone swapping until the machine is almost out
> > > > of physical memory.
> > >
> > > This may seem counterintuitive, but postponing swapping / cache
> > > flushing to disk till the last possible moment is
> > > counterproductive. It's a little like procrastination in the time
> > > management world --
> >
> > Why not add a config option to choose between code for two
> > behaviors (1 being the default, of course):
> > 1. Current behavior.... [snip]
> > 2. Don't swap ANYTHING to disk until available RAM drops below,
> > say, 15%. And put the cache on a sanity check: if the system is
> > going to swap to disk because available RAM has dropped below 15%,
> > and the cache makes up more than, say, 45% of RAM, start dropping
> > the oldest stuff in the cache to free up RAM instead of swapping.
> > (I'm assuming 128-256MB+ of RAM here; for anything smaller, the
> > default is probably best.)
>
> There are actually two things which are usually referred to as
> 'excessive swap':
>
> (1) 'swap in': pages which were unused for some time got swapped out,
> but now an app (imagine something big: Mozilla) wants them back and
> starts to read them all from swap space into RAM (+ possibly swapping
> out old pages of other apps to free that RAM first). This may feel
> sluggish but is actually correct behaviour.

*USUALLY* correct.

>
> (2) 'bad cache': the kernel swaps out and back in pages which are NOT
> old while obviously old pages are sitting in the disk cache. This is
> a kernel bug in the page-age calculations: the kernel incorrectly
> thinks that the pages it swaps out are the oldest pages in RAM and
> that the cache pages are young.
>
> If you want to submit a bug report, please make sure you are seeing
> (2) and not (1).
>
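
For what it's worth, anyone who does want to tell (1) from (2) apart
can watch swap traffic over time -- sustained swap-in while free still
shows a large cache is the (2) pattern. For example:

$ vmstat 5
(the si/so columns show memory swapped in and out per second)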

This was not a bug report.

> If you want to improve swap strategy, talk to Andrea Arcangeli and/or
> Rik van Riel or search archives and read (long) 2.4.x VM battle
> threads first.

I didn't email either of them directly because I doubt either has time
to work on this, since the default behavior is seen as "correct". If
they do have time, they'll most likely notice this thread and read my
message, or someone who better knows whether this is a pure waste of
their time will point it out to them.

As for the VM battle threads, I read most of them as they were
happening, and I really don't give a shit about them.

2001-12-28 03:53:03

by Nicholas Knight

Subject: Re: minimizing swap usage

On Thursday 27 December 2001 08:15 am, vda wrote:
> On Thursday 27 December 2001 09:47, christian e wrote:
> > > I've just installed 2.4.17 and so far it seems better. According
> > > to the changelog there have been some changes to swap behaviour.
> > > Can I make it even more aggressive to cut down on buffers+cache
> > > somehow?
> >
> > Sorry, my bad. After using it for some hours it's just as bad. I
> > rechecked the changelog and the change was already in the previous
> > kernel I used (17-rc1). Same problem with lots of cache and plenty
> > of swapping :-(
>
> Ok, let's try to collect some data.
> I ask (knowledgeable) list members to say whether they see something
> unusual.
>
> You may find below:
> 1) top of my box running normally
> 2) top after killall5 -15; killall5 -9
> 3) /proc/mounts
>
> Why /proc/mounts? There you will see that my box is exclusively
> NFSed. AFAIK NFS mounts do not cache large amounts of data on the
> client side. Keep that in mind when you try to explain top (2).

You're not understanding. This is not a bug; this is intended behavior.
The problem is that this behavior is not necessarily desirable. I have
proposed something that can be TRIED in order to optimize swap
behavior for certain situations, primarily desktop usage.

This is not a bug.
There is nothing unusual about it.
I am not attempting to report a bug.
There is no need to collect data.
We have all the data we need from any number of complaints about the
current swap behavior.
Something needs to be tried, and I proposed something to try. I did
not report a bug, and I did not say this was unusual for the 2.4.x
series.