2004-03-11 00:06:34

by Nick Piggin

Subject: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Here are my updated patches rolled into one.


Attachments:
vm-split-active.patch (28.25 kB)

2004-03-12 08:49:41

by Marc-Christian Petersen

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

On Thursday 11 March 2004 01:04, Nick Piggin wrote:

Hi Nick,

> Here are my updated patches rolled into one.

hmm, using this in 2.6.4-rc2-mm1 my machine starts to swap very very soon.
Machine has squid, bind, apache running, X 4.3.0, Windowmaker, so nothing
special.

Swap grows very easily starting to untar'gunzip a kernel tree. About +
150-200MB goes to swap. Everything is very smooth though, but I just wondered
because w/o your patches swap isn't used at all, even after some days of
uptime.

ciao, Marc

2004-03-12 09:09:45

by Nick Piggin

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists



Marc-Christian Petersen wrote:

>On Thursday 11 March 2004 01:04, Nick Piggin wrote:
>
>Hi Nick,
>
>
>>Here are my updated patches rolled into one.
>>
>
>hmm, using this in 2.6.4-rc2-mm1 my machine starts to swap very very soon.
>Machine has squid, bind, apache running, X 4.3.0, Windowmaker, so nothing
>special.
>
>Swap grows very easily starting to untar'gunzip a kernel tree. About +
>150-200MB goes to swap. Everything is very smooth though, but I just wondered
>because w/o your patches swap isn't used at all, even after some days of
>uptime.
>
>

Hmm... I guess it is still smooth because it is swapping out only
inactive pages. If the standard VM isn't being pushed very hard it
doesn't scan mapped pages at all which is why it isn't swapping.

I have a preference for allowing it to scan some mapped pages though.
I'm not sure if there is any attempt at drop-behind logic. That
might help. Adding new unmapped pagecache pages to the inactive list or
something like that might help... hmm, actually that's what it does now
by the looks of it.

I guess you don't have a problem though.
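
In rough terms, drop-behind amounts to deciding at LRU-insertion time which
list a new page starts life on. A minimal sketch of that decision, assuming the
2.6-era lru_cache_add()/lru_cache_add_active() helpers (an illustration only,
not the patch under discussion):

/*
 * Sketch: pages brought in by plain unmapped reads start on the inactive
 * list, so streaming data ages out quickly; pages faulted in through a
 * mapping start on the active list.  lru_cache_add() queues a page for
 * the inactive LRU, lru_cache_add_active() for the active LRU.
 */
static void add_new_page_to_lru(struct page *page, int mapped_fault)
{
	if (mapped_fault)
		lru_cache_add_active(page);	/* likely part of a working set */
	else
		lru_cache_add(page);		/* read once?  let it drop behind */
}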

2004-03-12 09:27:18

by Andrew Morton

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Nick Piggin <[email protected]> wrote:
>
> Hmm... I guess it is still smooth because it is swapping out only
> inactive pages. If the standard VM isn't being pushed very hard it
> doesn't scan mapped pages at all which is why it isn't swapping.
>
> I have a preference for allowing it to scan some mapped pages though.

I haven't looked at the code but if, as I assume, it is always scanning
mapped pages, although at a reduced rate then the effect will be the same
as setting swappiness to 100, except it will take longer.

That effect is to cause the whole world to be swapped out when people
return to their machines in the morning. Once they're swapped back in the
first thing they do is send bitchy emails to you know who.

From a performance perspective it's the right thing to do, but nobody likes
it.
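
The decision being described lives in refill_inactive_zone() in mm/vmscan.c;
roughly, and paraphrased from memory (the exact code in 2.6.4 may differ):

	/* how hard reclaim has been struggling lately (prev_priority is
	 * the remembered scanning priority from the last pass) */
	distress = 100 >> zone->prev_priority;
	mapped_ratio = (nr_mapped * 100) / total_memory;
	swap_tendency = mapped_ratio / 2 + distress + vm_swappiness;
	if (swap_tendency >= 100)
		reclaim_mapped = 1;	/* only then are mapped pages deactivated */

So mapped pages are left alone until the combination of mapped ratio, distress
and vm_swappiness crosses 100, which is why swappiness=100 effectively means
they are always scanned.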

2004-03-12 09:38:19

by Nick Piggin

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists



Andrew Morton wrote:

>Nick Piggin <[email protected]> wrote:
>
>>Hmm... I guess it is still smooth because it is swapping out only
>> inactive pages. If the standard VM isn't being pushed very hard it
>> doesn't scan mapped pages at all which is why it isn't swapping.
>>
>> I have a preference for allowing it to scan some mapped pages though.
>>
>
>I haven't looked at the code but if, as I assume, it is always scanning
>mapped pages, although at a reduced rate then the effect will be the same
>as setting swappiness to 100, except it will take longer.
>
>

Yep

>That effect is to cause the whole world to be swapped out when people
>return to their machines in the morning. Once they're swapped back in the
>first thing they do is send bitchy emails to you know who.
>
>From a performance perspective it's the right thing to do, but nobody likes
>it.
>
>

Yeah. I wonder if there is a way to be smarter about dropping these
used once pages without putting pressure on more permanent pages...
I guess all heuristics will fall down somewhere or other.

2004-03-12 11:10:50

by Matthias Urlichs

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Hi, Andrew Morton wrote:

> That effect is to cause the whole world to be swapped out when people
> return to their machines in the morning.

The correct solution to this problem is "suspend-to-disk" --
if the machine isn't doing anything anyway, TURN IT OFF.

One slightly more practical solution from the "you-know-who gets angry
mails" POV anyway, would be to tie the reduced-rate scanning to the load
average -- if nothing at all happens, swap-out doesn't need to happen
either.

--
Matthias Urlichs
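
A crude sketch of what tying scanning to the load average could look like
(illustration only; avenrun[] is the kernel's fixed-point load-average array
and FSHIFT its shift, both from <linux/sched.h>):

/* Sketch: a predicate the scanner could consult before deactivating
 * mapped pages on an otherwise idle machine. */
static inline int machine_is_idle(void)
{
	/* 1-minute load average below roughly 0.1 */
	return avenrun[0] < ((1UL << FSHIFT) / 10);
}

The scanner would then skip, or further throttle, mapped-page deactivation
while machine_is_idle() returns true.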

2004-03-12 11:48:01

by Jamie Lokier

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Matthias Urlichs wrote:
> > That effect is to cause the whole world to be swapped out when people
> > return to their machines in the morning.
>
> The correct solution to this problem is "suspend-to-disk" --
> if the machine isn't doing anything anyway, TURN IT OFF.

How is that better for people complaining that everything needs to be
swapped in in the morning?

Suspend-to-disk will cause everything to be paged in too. Faster I
suspect (haven't tried it; it doesn't work on my box), but still a
wait especially when you add in the BIOS boot time.

Environmentally turning an unused machine off is good. But I don't
see how suspend-to-disk will convince people who are annoyed by
swapping in the morning.

> One slightly more practical solution from the "you-know-who gets angry
> mails" POV anyway, would be to tie the reduced-rate scanning to the load
> average -- if nothing at all happens, swap-out doesn't need to happen
> either.

If nothing at all happens, does it matter that pages are written to
swap? They're still in RAM as well.

-- Jamie

2004-03-12 12:48:52

by Nick Piggin

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists



Matthias Urlichs wrote:

>Hi, Andrew Morton wrote:
>
>
>>That effect is to cause the whole world to be swapped out when people
>>return to their machines in the morning.
>>
>
>The correct solution to this problem is "suspend-to-disk" --
>if the machine isn't doing anything anyway, TURN IT OFF.
>
>

Without arguing that point, the VM also should have a solution
to the problem where people don't turn it off.

>One slightly more practical solution from the "you-know-who gets angry
>mails" POV anyway, would be to tie the reduced-rate scanning to the load
>average -- if nothing at all happens, swap-out doesn't need to happen
>either.
>
>

Well if nothing at all happens we don't swap out, but when something
is happening, desktop users don't want any of their programs to be
swapped out no matter how long they have been sitting idle. They don't
want to wait 10 seconds to page something in even if it means they're
waiting an extra 10 minutes throughout the day for their kernel greps
and diffs to finish.

2004-03-12 14:15:42

by Nick Piggin

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists



Nick Piggin wrote:

>
> Well if nothing at all happens we don't swap out, but when something
> is happening, desktop users don't want any of their programs to be
> swapped out no matter how long they have been sitting idle. They don't
> want to wait 10 seconds to page something in even if it means they're
> waiting an extra 10 minutes throughout the day for their kernel greps
> and diffs to finish.
>
>

Just had a try of doing things like updatedb and dd if=/dev/zero of=./blah
It is pretty swappy I guess. The following patch I think makes things less
swappy. It still isn't true dropbehind because new unmapped pages still do
place some pressure on the more established pagecache, but not as much.

It is unclear whether full dropbehind is actually good or not. If you have
512MB of memory and a 256MB working set of file data (unmapped), with 400MB
of mapped memory doing nothing, after enough thrashing through your 256MB,
you'd expect some of that mapped memory to be swapped out.

By the way, I would be interested to know the rationale behind
mark_page_accessed as it is without this patch, also what is it doing in
rmap.c (I know hardly anything actually uses page_test_and_clear_young, but
still). It seems to me like it only serves to make VM behaviour harder to
understand, but I'm probably missing something. Andrew?


Attachments:
vm-dropbehind.patch (1.60 kB)
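
For reference, the stock mark_page_accessed() being questioned looks roughly
like this (mm/swap.c of that era, quoted from memory):

/*
 * Mark a page as having seen activity.
 *
 * inactive,unreferenced	->	inactive,referenced
 * inactive,referenced		->	active,unreferenced
 * active,unreferenced		->	active,referenced
 */
void fastcall mark_page_accessed(struct page *page)
{
	if (!PageActive(page) && PageReferenced(page) && PageLRU(page)) {
		activate_page(page);
		ClearPageReferenced(page);
	} else if (!PageReferenced(page)) {
		SetPageReferenced(page);
	}
}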

2004-03-12 14:21:22

by Mark_H_Johnson

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists





Nick Piggin <[email protected]> wrote:
>Andrew Morton wrote:

>>That effect is to cause the whole world to be swapped out when people
>>return to their machines in the morning. Once they're swapped back in the
>>first thing they do is send bitchy emails to you know who.
>>
>>From a performance perspective it's the right thing to do, but nobody likes
>>it.
>>
>>
>
>Yeah. I wonder if there is a way to be smarter about dropping these
>used once pages without putting pressure on more permanent pages...
>I guess all heuristics will fall down somewhere or other.

Just a question, but I remember from VMS a long time ago that
as part of the working set limits, the "free list" was used to keep
pages that could be freely used but could be put back into the working
set quite easily (a "fast" page fault). Could you keep track of the
swapped pages in a similar manner so you don't have to go to disk to
get these pages [or is this already being done]? You would pull them
back from the free list and avoid the disk I/O in the morning.

By the way - with 2.4.24 I see a similar behavior anyway [slow to get
going in the morning]. I believe it is due to our nightly backup walking
through the disks. If you could FIX the retention of sequentially read
disk blocks from the various caches - that would help a lot more in
my mind.

--Mark H Johnson
<mailto:[email protected]>

2004-03-12 14:28:46

by Nick Piggin

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists



[email protected] wrote:

>
>
>
>Nick Piggin <[email protected]> wrote:
>
>>Andrew Morton wrote:
>>
>
>>>That effect is to cause the whole world to be swapped out when people
>>>return to their machines in the morning. Once they're swapped back in the
>>>first thing they do is send bitchy emails to you know who.
>>>
>>>From a performance perspective it's the right thing to do, but nobody likes
>>>it.
>>>
>>>
>>>
>>Yeah. I wonder if there is a way to be smarter about dropping these
>>used once pages without putting pressure on more permanent pages...
>>I guess all heuristics will fall down somewhere or other.
>>
>
>Just a question, but I remember from VMS a long time ago that
>as part of the working set limits, the "free list" was used to keep
>pages that could be freely used but could be put back into the working
>set quite easily (a "fast" page fault). Could you keep track of the
>swapped pages in a similar manner so you don't have to go to disk to
>get these pages [or is this already being done]? You would pull them
>back from the free list and avoid the disk I/O in the morning.
>
>

Not too sure what you mean. If we've swapped out the pages, it is
because we need the memory for something else. So no.

One thing you could do is re-read swapped pages when you have
plenty of free memory and the disks are idle.

>By the way - with 2.4.24 I see a similar behavior anyway [slow to get
>going in the morning]. I believe it is due to our nightly backup walking
>through the disks. If you could FIX the retention of sequentially read
>disk blocks from the various caches - that would help a lot more in
>my mind.
>
>

updatedb really wants to be able to provide better hints to the VM
that it is never going to use these pages again. I hate to cater for
the worst possible case that only happens because everyone has it as
a 2am cron job.
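
One interface a tool like updatedb could use for such a hint is posix_fadvise()
with POSIX_FADV_DONTNEED, which the 2.6 kernel already supported. The snippet
below is an illustration of the interface only, not something updatedb actually
did:

#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <unistd.h>

/* Read a file once, then tell the kernel its cached pages can go.
 * Error handling trimmed; purely an illustration of the hint. */
static void scan_and_discard(const char *path)
{
	char buf[65536];
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return;
	while (read(fd, buf, sizeof(buf)) > 0)
		;	/* ... index the data ... */
	posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);	/* drop cached pages */
	close(fd);
}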

2004-03-12 15:03:48

by Mark_H_Johnson

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists





Nick Piggin <[email protected]> wrote:
>[email protected] wrote:
>>Nick Piggin <[email protected]> wrote:
>>
>>>Andrew Morton wrote:
>>>
>>
>>>>That effect is to cause the whole world to be swapped out when people
>>>>return to their machines in the morning. Once they're swapped back in
>>>>
[this is the symptom being reported]
>>Just a question, but I remember from VMS a long time ago that
>>as part of the working set limits, the "free list" was used to keep
>>pages that could be freely used but could be put back into the working
>>set quite easily (a "fast" page fault). Could you keep track of the
>>swapped pages in a similar manner so you don't have to go to disk to
>>get these pages [or is this already being done]? You would pull them
>>back from the free list and avoid the disk I/O in the morning.
>
>Not too sure what you mean. If we've swapped out the pages, it is
>because we need the memory for something else. So no.

Actually - no, from what Andrew said, the system was not under memory
pressure and did not need the memory for something else. The swapping
occurred "just because". In that case, it would be better to keep track
of where the pages came from (i.e., swap them in from the free list).

Don't get me wrong - that behavior may be the "right thing" from an
overall performance standpoint. A little extra disk I/O when the system
is relatively idle may provide needed reserve (free pages) for when the
system gets busy again.

>One thing you could do is re-read swapped pages when you have
>plenty of free memory and the disks are idle.
That may also be a good idea. However, if you keep a mapping between
pages on the "free list" and those in the swap file / partition, you
do not actually have to do the disk I/O to accomplish that.

--Mark H Johnson
<mailto:[email protected]>

2004-03-12 15:05:52

by Nikita Danilov

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Nick Piggin writes:
>

[...]

>
> By the way, I would be interested to know the rationale behind
> mark_page_accessed as it is without this patch, also what is it doing in
> rmap.c (I know hardly anything actually uses page_test_and_clear_young, but
> still). It seems to me like it only serves to make VM behaviour harder to
> understand, but I'm probably missing something. Andrew?

With your patch, once a page got into inactive list, its PG_referenced
bit will only be checked by VM scanner when page wanders to the tail of
list. In particular, it is impossible to tell pages that were accessed
only once while on inactive list from ones that were accessed multiple
times. Original mark_page_accessed() moves page to the active list on
the second access, thus making it less eligible for the reclaim.

I actually tried quite an opposite modification:
(ftp://ftp.namesys.com/pub/misc-patches/unsupported/extra/2004.03.10-2.6.4-rc3/a_1[5678]*)

/* roughly, modulo locking, etc. */
void fastcall mark_page_accessed(struct page *page)
{
	if (!PageReferenced(page))
		SetPageReferenced(page);
	else if (!PageLRU(page))
		return;
	else if (!PageActive(page)) {
		/* page is on inactive list */
		del_page_from_inactive_list(zone, page);
		SetPageActive(page);
		add_page_to_active_list(zone, page);
		inc_page_state(pgactivate);
		ClearPageReferenced(page);
	} else {
		/* page is on active list, move it to head */
		list_move(&page->lru, &zone->active_list);
		ClearPageReferenced(page);
	}
}

That is, a referenced and active page is moved to the head of the active
list. While somewhat improving file system performance, it badly affects
anonymous memory, because (it seems) file system pages tend to push
mapped ones out of the active list. It would probably work better
with your split active lists.

>

Nikita.

2004-03-12 15:13:43

by Nick Piggin

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists



[email protected] wrote:

>
>
>
>Nick Piggin <[email protected]> wrote:
>
>>Not too sure what you mean. If we've swapped out the pages, it is
>>because we need the memory for something else. So no.
>>
>
>Actually - no, from what Andrew said, the system was not under memory
>pressure and did not need the memory for something else. The swapping
>occurred "just because". In that case, it would be better to keep track
>of where the pages came from (i.e., swap them in from the free list).
>
>

In Linux, all reclaim is driven by a memory shortage. Often it
is just because more memory is being requested for more file
cache.

My patch does make it a bit more probable that process memory will
be swapped out before file cache is discarded.

>Don't get me wrong - that behavior may be the "right thing" from an
>overall performance standpoint. A little extra disk I/O when the system
>is relatively idle may provide needed reserve (free pages) for when the
>system gets busy again.
>
>
>>One thing you could do is re-read swapped pages when you have
>>plenty of free memory and the disks are idle.
>>
>That may also be a good idea. However, if you keep a mapping between
>pages on the "free list" and those in the swap file / partition, you
>do not actually have to do the disk I/O to accomplish that.
>
>

But presumably if you are running into memory pressure, you really
will need to free those free list pages, requiring the page to be
read from disk when it is used again.

2004-03-12 15:29:25

by Nick Piggin

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists



Nikita Danilov wrote:

>Nick Piggin writes:
> >
>
>[...]
>
> >
> > By the way, I would be interested to know the rationale behind
> > mark_page_accessed as it is without this patch, also what is it doing in
> > rmap.c (I know hardly anything actually uses page_test_and_clear_young, but
> > still). It seems to me like it only serves to make VM behaviour harder to
> > understand, but I'm probably missing something. Andrew?
>
>With your patch, once a page got into inactive list, its PG_referenced
>bit will only be checked by VM scanner when page wanders to the tail of
>list. In particular, it is impossible to tell pages that were accessed
>only once while on inactive list from ones that were accessed multiple
>times. Original mark_page_accessed() moves page to the active list on
>the second access, thus making it less eligible for the reclaim.
>
>

With my patch though, it gives unmapped pages the same treatment as
mapped pages. Without my patch, pages getting a lot of mark_page_accessed
activity can easily be promoted unfairly past mapped ones which are simply
getting activity through the pte.

I say just set the bit and let the scanner handle it.

>I actually tried quite an opposite modification:
>(ftp://ftp.namesys.com/pub/misc-patches/unsupported/extra/2004.03.10-2.6.4-rc3/a_1[5678]*)
>
>/* roughly, modulo locking, etc. */
>void fastcall mark_page_accessed(struct page *page)
>{
>	if (!PageReferenced(page))
>		SetPageReferenced(page);
>	else if (!PageLRU(page))
>		return;
>	else if (!PageActive(page)) {
>		/* page is on inactive list */
>		del_page_from_inactive_list(zone, page);
>		SetPageActive(page);
>		add_page_to_active_list(zone, page);
>		inc_page_state(pgactivate);
>		ClearPageReferenced(page);
>	} else {
>		/* page is on active list, move it to head */
>		list_move(&page->lru, &zone->active_list);
>		ClearPageReferenced(page);
>	}
>}
>
>That is, a referenced and active page is moved to the head of the active
>list. While somewhat improving file system performance, it badly affects
>anonymous memory, because (it seems) file system pages tend to push
>mapped ones out of the active list. It would probably work better
>with your split active lists.
>

Yeah. Hmm, I think it might be a good idea to do this sorting for
unmapped pages on the active list. It shouldn't do ClearPageReferenced
though, because your !PageReferenced pages now get their referenced
bit set, and next time around the scanner they come in above the
PageReferenced page.

I don't like the inactive->active promotion here though, as I explained.
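
The "just set the bit" variant being argued for amounts to something like this
(a sketch, not code from any of the posted patches):

/* Sketch: mark_page_accessed() does no list movement of its own; the
 * referenced bit is left for the scanner to act on when it gets there. */
void fastcall mark_page_accessed(struct page *page)
{
	if (!PageReferenced(page))
		SetPageReferenced(page);
}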

2004-03-12 16:31:35

by Nikita Danilov

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Nick Piggin writes:
>
>
> Nikita Danilov wrote:
>
> >Nick Piggin writes:
> > >
> >
> >[...]
> >
> > >
> > > By the way, I would be interested to know the rationale behind
> > > mark_page_accessed as it is without this patch, also what is it doing in
> > > rmap.c (I know hardly anything actually uses page_test_and_clear_young, but
> > > still). It seems to me like it only serves to make VM behaviour harder to
> > > understand, but I'm probably missing something. Andrew?
> >
> >With your patch, once a page got into inactive list, its PG_referenced
> >bit will only be checked by VM scanner when page wanders to the tail of
> >list. In particular, it is impossible to tell pages that were accessed
> >only once while on inactive list from ones that were accessed multiple
> >times. Original mark_page_accessed() moves page to the active list on
> >the second access, thus making it less eligible for the reclaim.
> >
> >
>
> With my patch though, it gives unmapped pages the same treatment as
> mapped pages. Without my patch, pages getting a lot of mark_page_accessed
> activity can easily be promoted unfairly past mapped ones which are simply
> getting activity through the pte.

Another way to put it is that treatment of file system pages is dumbed
down to the level of mapped ones: information about access patterns is
just discarded.

>
> I say just set the bit and let the scanner handle it.

I think that decisions about balancing VM and file system caches should
be made at a higher level, rather than by forcing the file system to use
low-level mechanisms designed for the VM, where only limited information is
provided by hardware. Splitting page queues is a step in the right
direction, as it makes it possible to implement more precise replacement
for the file system cache.

Nikita.

2004-03-12 19:10:51

by Andrew Morton

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Nick Piggin <[email protected]> wrote:
>
> Just had a try of doing things like updatedb and dd if=/dev/zero of=./blah
> It is pretty swappy I guess.

You'll need to bring the scanning priority back into the picture: don't
move mapped pages down onto the inactive list at low scanning priorities.
And that means retaining the remember-the-priority-from-last-time logic.

Otherwise it's inevitable that even a `cat monster_file > /dev/null' will
eventually swap out everything it can.

> By the way, I would be interested to know the rationale behind
> mark_page_accessed as it is without this patch, also what is it doing in
> rmap.c (I know hardly anything actually uses page_test_and_clear_young, but
> still). It seems to me like it only serves to make VM behaviour harder to
> understand, but I'm probably missing something. Andrew?

hm, that's left-over code which is pretty pointless now.


	if (page_test_and_clear_young(page))
		mark_page_accessed(page);

	if (TestClearPageReferenced(page))
		referenced++;

The pages in here are never on the LRU, so all the mark_page_accessed()
will do is to set PG_Referenced. And we immediately clear it again. So
the mark_page_accessed() can be replaced with referenced++.
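
That is, something like the following in the rmap.c caller (a sketch of the
suggested simplification):

	if (page_test_and_clear_young(page))
		referenced++;		/* was: mark_page_accessed(page); */

	if (TestClearPageReferenced(page))
		referenced++;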


2004-03-12 19:07:58

by Bill Davidsen

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Nick Piggin wrote:
>
>
> Matthias Urlichs wrote:
>
>> Hi, Andrew Morton wrote:
>>
>>
>>> That effect is to cause the whole world to be swapped out when people
>>> return to their machines in the morning.
>>>
>>
>> The correct solution to this problem is "suspend-to-disk" --
>> if the machine isn't doing anything anyway, TURN IT OFF.
>>
>>
>
> Without arguing that point, the VM also should have a solution
> to the problem where people don't turn it off.
>
>> One slightly more practical solution from the "you-know-who gets angry
>> mails" POV anyway, would be to tie the reduced-rate scanning to the load
>> average -- if nothing at all happens, swap-out doesn't need to happen
>> either.
>>
>>
>
> Well if nothing at all happens we don't swap out, but when something
> is happening, desktop users don't want any of their programs to be
> swapped out no matter how long they have been sitting idle. They don't
> want to wait 10 seconds to page something in even if it means they're
> waiting an extra 10 minutes throughout the day for their kernel greps
> and diffs to finish.

I have noticed that 2.6 seems to clear memory (any version I've run for
a while) and a lunch break results in a burst of disk activity before
the screen saver even gets in to unlock the screen. I know this box has
no cron activity during the day, so the pages were not forced out.

It's a good thing IMHO to write dirty pages to swap so the space can be
reclaimed if needed, but shouldn't the page be marked as clean and left
in memory for use without swap-in if it's needed? I see this on backup
servers, and a machine with 3GB of free memory, no mail, no cron and no
app running isn't getting much memory pressure ;-)

I am not saying the behaviour is wrong, I just fail to see why the last
application run isn't still in memory an hour later, absent memory pressure.

--
-bill

2004-03-12 19:37:41

by Jamie Lokier

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Nick Piggin wrote:
> In Linux, all reclaim is driven by a memory shortage. Often it
> is just because more memory is being requested for more file
> cache.

Is reclaim the same as swapping, though? I'd expect pages to be
written to the swapfile speculatively, before they are needed for
reclaim. Is that one of those behaviours which everyone agrees is
sensible, but it's yet to be implemented in the 2.6 VM?

> But presumably if you are running into memory pressure, you really
> will need to free those free list pages, requiring the page to be
> read from disk when it is used again.

The idea is that you write pages to swap _before_ the memory pressure
arrives, which makes those pages available immediately when memory
pressure does arrive, provided they are still clean. It's speculative.

I thought Linux did this already, but I don't know the current VM well.

-- Jamie

2004-03-12 19:52:28

by Jamie Lokier

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Nick Piggin wrote:
> One thing you could do is re-read swapped pages when you have
> plenty of free memory and the disks are idle.

Better: re-read swapped pages _and_ file-backed pages that are likely
to be used in future, when you have plenty of free memory and the
disks are idle.

updatedb would push plenty of memory out overnight. But after the
cron jobs and before people wake up in the morning, the kernel would
gradually re-read the pages corresponding to mapped regions in
processes. Possibly with emphasis on some processes more than others.
Possibly remembering some of that likelihood information even when a
particular executable isn't currently running.

During the day, after a big compile the kernel would gradually re-read
pages for processes which are running on your desktop but which you're
not actively using. The editor you were using during the compile will
still be responsive because it wasn't swapped out. The Nautilus or
Mozilla that you weren't using will appear responsive when you switch
to it, because the kernel was re-reading their mapped pages after the
compile, while you didn't notice because you were still using the
editor.

The intention is to avoid those long stalls where you switch to a
Mozilla window and it takes 30 seconds to page in all those libraries
randomly. It's not necessary to keep Mozilla in memory all the time,
even when the memory is specifically useful for a compile, to provide
that illusion of snappy response most of the time.

-- Jamie
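
A crude userspace approximation of the re-read idea is the readahead() syscall:
a daemon could walk /proc/<pid>/maps for favoured processes while the disks are
idle and pull each mapped file back into the page cache. Everything below is
illustrative only:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/* Ask the kernel to read a whole file back into the page cache.
 * Error handling trimmed; a sketch, not a proposed implementation. */
static void repopulate(const char *path)
{
	int fd = open(path, O_RDONLY);

	if (fd < 0)
		return;
	readahead(fd, 0, lseek(fd, 0, SEEK_END));	/* whole file as the count */
	close(fd);
}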

2004-03-12 21:21:00

by Mike Fedyk

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Jamie Lokier wrote:
> Nick Piggin wrote:
>
>>In Linux, all reclaim is driven by a memory shortage. Often it
>>is just because more memory is being requested for more file
>>cache.
>
>
> Is reclaim the same as swapping, though? I'd expect pages to be
> written to the swapfile speculatively, before they are needed for
> reclaim. Is that one of those behaviours which everyone agrees is
> sensible, but it's yet to be implemented in the 2.6 VM?
>

Nobody has mentioned the swap cache yet. If a page is in RAM and in swap,
and not dirty, it's counted in the swap cache.

>
>>But presumably if you are running into memory pressure, you really
>>will need to free those free list pages, requiring the page to be
>>read from disk when it is used again.
>
>
> The idea is that you write pages to swap _before_ the memory pressure
> arrives, which makes those pages available immediately when memory
> pressure does arrive, provided they are still clean. It's speculative.
>
> I thought Linux did this already, but I don't know the current VM well.
>

You're saying all anon memory should become swap_cache eventually
(though, it should be a background "task" so it doesn't block userspace
memory requests).

That would have other side benefits. If the anon page matches (I'm not
calling it "!dirty" since that might have other semantics in the current
VM) what is in swap, it can be cleaned without performing any IO. Also,
suspending will have much less IO to perform before completion.

Though there would have to be a swap recycling algo if swap size < RAM.

Mike
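
The property being described can be written as a predicate on page flags (a
sketch, not code from any kernel):

/* A swap-cache page whose contents still match the copy on swap (not
 * dirty, not under writeback) can be reclaimed without further I/O. */
static inline int reclaimable_without_io(struct page *page)
{
	return PageSwapCache(page) && !PageDirty(page) && !PageWriteback(page);
}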

2004-03-12 22:22:11

by Jamie Lokier

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Mike Fedyk wrote:
> That would have other side benefits. If the anon page matches (I'm not
> calling it "!dirty" since that might have other semantics in the current
> VM) what is in swap, it can be cleaned without performing any IO. Also,
> suspending will have much less IO to perform before completion.

Exactly those sort of benefits.

Btw, when you say "You're saying all anon memory should become
swap_cache eventually" it's worth noting that there are benefits to
doing it the other way too: speculatively pulling in pages that are
thought likely to be good for interactive response, at the expense of
pages which have been used more recently, and must remain in RAM for a
short while while they are considered in use, but aren't ranked so
highly based on some interactivity heuristics.

I.e. fixing the "everything swapped out in the morning" problem by
having a long term slow rebalancing in favour of pages which seem to
be requested for interactive purposes, competing against the short
term balance of whichever pages have been used recently or are
predicted by short term readahead.

Both replicating RAM pages to swap, and replicating swap or
file-backed pages to RAM can be speculative and done slowly, over the
long term, and when there is little other activity or I/O.

-- Jamie

2004-03-12 22:37:16

by Mike Fedyk

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Jamie Lokier wrote:
> Mike Fedyk wrote:
>
>>That would have other side benefits. If the anon page matches (I'm not
>>calling it "!dirty" since that might have other semantics in the current
>>VM) what is in swap, it can be cleaned without performing any IO. Also,
>> suspending will have much less IO to perform before completion.
>
>
> Exactly those sort of benefits.

:)

>
> Btw, When you say "You're saying all anon memory should become
> swap_cache eventually" it's worth noting that there are benefits to
> doing it the other way too: speculatively pulling in pages that are
> thought likely to be good for interactive response, at the expense of
> pages which have been used more recently, and must remain in RAM for a
> short while while they are considered in use, but aren't ranked so
> highly based on some interactivity heuristics.
>

IIUC, the current VM loses the aging information as soon as a page is
swapped out. You might be asking for an LFU list instead of an LRU list.
Though, a reverse LFU (MFU -- most frequently used?) used only for swap
might do what you want also...

> I.e. fixing the "everything swapped out in the morning" problem by
> having a long term slow rebalancing in favour of pages which seem to
> be requested for interactive purposes, competing against the short
> term balance of whichever pages have been used recently or are
> predicted by short term readahead.
>

There was talk in Andrea's objrmap thread about using two LRU lists, but
I forget what the benefits of that were.

> Both replicating RAM pages to swap, and replicating swap or
> file-backed pages to RAM can be speculative and done slowly, over the
> long term, and when there is little other activity or I/O.

In short, that probably would require some major surgery in the VM.

Mike

2004-03-12 23:07:47

by Nick Piggin

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists



Nikita Danilov wrote:

>Nick Piggin writes:
>
> > With my patch though, it gives unmapped pages the same treatment as
> > mapped pages. Without my patch, pages getting a lot of mark_page_accessed
> > activity can easily be promoted unfairly past mapped ones which are simply
> > getting activity through the pte.
>
>Another way to put it is that treatment of file system pages is dumbed
>down to the level of mapped ones: information about access patterns is
>just discarded.
>
>

In a way, yes.

> >
> > I say just set the bit and let the scanner handle it.
>
>I think that decisions about balancing VM and file system caches should
>be made at a higher level, rather than by forcing the file system to use
>low-level mechanisms designed for the VM, where only limited information is
>provided by hardware. Splitting page queues is a step in the right
>direction, as it makes it possible to implement more precise replacement
>for the file system cache.
>
>

It makes it that much harder to calculate the pressure you are putting
on mapped vs unmapped pages though.

2004-03-12 23:25:34

by Nick Piggin

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists



Andrew Morton wrote:

>Nick Piggin <[email protected]> wrote:
>
>>Just had a try of doing things like updatedb and dd if=/dev/zero of=./blah
>>It is pretty swappy I guess.
>>
>
>You'll need to bring the scanning priority back into the picture: don't
>move mapped pages down onto the inactive list at low scanning priorities.
>And that means retaining the remember-the-priority-from-last-time logic.
>
>Otherwise it's inevitable that even a `cat monster_file > /dev/null' will
>eventually swap out everything it can.
>
>

Hmm I dunno. At mapped_page_cost 8, I don't think it is swappy enough
that your desktop users will be running into problems. I need to write
4GB of file to push out 70MB of swap here (256MB RAM). And not much of
that swap has come back in, by the way...

>>By the way, I would be interested to know the rationale behind
>>mark_page_accessed as it is without this patch, also what is it doing in
>>rmap.c (I know hardly anything actually uses page_test_and_clear_young, but
>>still). It seems to me like it only serves to make VM behaviour harder to
>>understand, but I'm probably missing something. Andrew?
>>
>
>hm, that's left-over code which is pretty pointless now.
>
>
>	if (page_test_and_clear_young(page))
>		mark_page_accessed(page);
>
>	if (TestClearPageReferenced(page))
>		referenced++;
>
>The pages in here are never on the LRU, so all the mark_page_accessed()
>will do is to set PG_Referenced. And we immediately clear it again. So
>the mark_page_accessed() can be replaced with referenced++.
>
>
>

Yep, see the patch I'd attached before.

2004-03-13 00:01:22

by Nick Piggin

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists



Bill Davidsen wrote:

>
> I have noticed that 2.6 seems to clear memory (any version I've run
> for a while) and a lunch break results in a burst of disk activity
> before the screen saver even gets in to unlock the screen. I know this
> box has no cron activity during the day, so the pages were not forced
> out.
>


It shouldn't. Perhaps something else is using memory in the background?


> It's a good thing IMHO to write dirty pages to swap so the space can
> be reclaimed if needed, but shouldn't the page be marked as clean and
> left in memory for use without swap-in if it's needed? I see this on
> backup servers, and a machine with 3GB of free memory, no mail, no
> cron and no app running isn't getting much memory pressure ;-)
>

Well, it is basically just written out and reclaimed when it is needed;
it won't just be swapped out without memory pressure.

Although, there were some highmem balancing problems in 2.6 including
2.6.4 (now fixed in -bk). These caused too much pressure to be put on
ZONE_NORMAL mapped and file cache memory in favour of slab cache. This
could easily be causing the misbehaviour.

> I am not saying the behaviour is wrong, I just fail to see why the
> last application run isn't still in memory an hour later, absent
> memory pressure.
>

There would have to be *some* memory pressure... honestly, try 2.6-bk,
or if they are production machines and you can't, then wait for 2.6.5.


2004-03-13 12:35:16

by Pavel Machek

Subject: Re: [PATCH] 2.6.4-rc2-mm1: vm-split-active-lists

Hi!
> > That effect is to cause the whole world to be swapped out when people
> > return to their machines in the morning.
>
> The correct solution to this problem is "suspend-to-disk" --
> if the machine isn't doing anything anyway, TURN IT OFF.

Try it.

With the current design, the machine swaps *a lot* after resume.

Suspend-to-ram is probably better.

But if you don't run your updatedb overnight, you are going to
run it while you are logged in, and that is going to suck.
--
64 bytes from 195.113.31.123: icmp_seq=28 ttl=51 time=448769.1 ms