[email protected] wrote:
>
> It is still the same in 2.4.17pre2 :-(
>
Yes, well, it would be.
It seems that what's happening is that all your buffercache memory
is on the active list, and all your anonymous memory is on the
inactive list. Consequently the logic in shrink_caches() which
is supposed to move pages onto the inactive list doesn't do anything - it
just sits there calling refill_inactive() with a zero argument all
the time.
You'll find that if you push the machine really hard - allocate
1.5x physical memory and touch it all then the VM will, eventually,
with great reluctance and much swapping, relinquish the 30 megabytes
of buffercache memory. But it's out of whack.
I tested the pre7aa1 patches and it seemed possibly a little better,
but not right.
If we put anon pages on the active list instead, then shrink_caches()
and refill_inactive() start to do something, and they move that stale
old buffercache memory onto the inactive list where it can be freed.
This is just a random hack. I don't understand what's going on in
the VM, let alone what's *supposed* to be going on. And given the
state of documentation available to us, I never will.
--- linux-2.4.17-pre2/mm/memory.c Thu Nov 22 23:02:59 2001
+++ linux-akpm/mm/memory.c Sat Dec 1 20:14:28 2001
@@ -1174,6 +1174,7 @@ static int do_anonymous_page(struct mm_s
flush_page_to_ram(page);
entry = pte_mkwrite(pte_mkdirty(mk_pte(page, vma->vm_page_prot)));
lru_cache_add(page);
+ activate_page(page);
}
set_pte(page_table, entry);
-
On Sat, 1 Dec 2001, Andrew Morton wrote:
> You'll find that if you push the machine really hard - allocate
> 1.5x physical memory and touch it all then the VM will, eventually,
> with great reluctance and much swapping, relinquish the 30 megabytes
> of buffercache memory. But it's out of whack.
This is an expected (and very bad) side effect of use-once.
> If we put anon pages on the active list instead, then shrink_caches()
> and refill_inactive() start to do something, and they move that stale
> old buffercache memory onto the inactive list where it can be freed.
This would fix the problem of not being able to evict stale
active pages, but I have no idea if it would unbalance
something else.
> This is just a random hack. I don't understand what's going on in
> the VM, let alone what's *supposed* to be going on. And given the
> state of documentation available to us, I never will.
The balancing in Andrea's VM is just too subtle to understand
without docs, that is, if there is any particular idea behind
it and it isn't just experimentation.
regards,
Rik
--
Shortwave goes a long way: irc.starchat.net #swl
http://www.surriel.com/ http://distro.conectiva.com/
I verified this by throwing in 1GB of swap. It does lose the buffers
eventually, but only with great pain. However, since I do have
*plenty* of memory and slow disks, I prefer running without swap. This
setup appears to be broken (2.4.13 worked!).
Without swap, I would have to reboot every day to make my simulation happy.
It needs to allocate and use *only* about 300 of 768MB, which *should* be
available; it fails only because the night before a cron job kicked off
'updatedb', filling the buffers.
Without swap, it really is *this* bad in 2.4.16 and the latest pre-releases.
RAM is full of buffers that just won't disappear to make room for more
important stuff. My simulation gets killed *every time* until I reboot to
free 'buffers' or add swap. (This makes everything slower in total.)
:-(
Jan
On Sunday 02 December 2001 13:17, Rik wrote:
> On Sat, 1 Dec 2001, Andrew Morton wrote:
> > You'll find that if you push the machine really hard - allocate
> > 1.5x physical memory and touch it all then the VM will, eventually,
> > with great reluctance and much swapping, relinquish the 30 megabytes
> > of buffercache memory. But it's out of whack.
>
> This is an expected (and very bad) side effect of use-once.
>
> > If we put anon pages on the active list instead, then shrink_caches()
> > and refill_inactive() start to do something, and they move that stale
> > old buffercache memory onto the inactive list where it can be freed.
>
> This would fix the problem of not being able to evict stale
> active pages, but I have no idea if it would unbalance
> something else.
>
> > This is just a random hack. I don't understand what's going on in
> > the VM, let alone what's *supposed* to be going on. And given the
> > state of documentation available to us, I never will.
>
> The balancing in Andrea's VM is just too subtle to understand
> without docs, that is, if there is any particular idea behind
> it and it isn't just experimentation.
>
> regards,
>
> Rik