Hi,
On Wed, Jan 31, 2001 at 04:24:24PM -0800, David Gould wrote:
>
> I am skeptical of the argument that we can win by replacing "the least
> desirable" pages with pages that were even less desirable and that we
> have no recent indication of any need for. It seems possible under
> heavy swap to discard quite a portion of the useful pages in favor of
> junk that just happened to have a lucky disk address.
When read-in clustering was added to 2.2 for swap and paging,
performance for a lot of VM-intensive tasks more than doubled. Disk
seeks are _expensive_. If you read in 15 neighbouring pages on swapin
and on average only one of them turns out to be useful, you have still
halved the number of swapin IOs required. The performance advantages
are so enormous that they easily compensate for the cost of holding
the other, unneeded pages in memory for a while.
Also remember that the readahead pages won't actually get mapped into
any process's address space, so they can be recycled easily. So, under
swapping, you tend to find that the extra read-in pages end up
replacing old, unneeded readahead pages to some extent, rather than
swapping out useful pages.
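To make the arithmetic concrete, here's a back-of-the-envelope model
(a sketch with made-up numbers, assuming the seek dominates the cost
of an IO, so a clustered read costs about the same as a single-page
read):

/* Sketch only: IO cost per useful page as a function of how many
 * of the readahead pages turn out to be useful. */
#include <stdio.h>

int main(void)
{
	for (int hits = 0; hits <= 4; hits++) {
		double useful = 1.0 + hits;	/* faulting page + hits */
		printf("%d extra hit(s) per read -> %.2f IOs per useful page\n",
		       hits, 1.0 / useful);
	}
	return 0;
}

One hit per read already halves the IO cost per useful page, and the
curve only gets better from there.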
Cheers,
Stephen
On Thu, 1 Feb 2001, Stephen C. Tweedie wrote:
> On Wed, Jan 31, 2001 at 04:24:24PM -0800, David Gould wrote:
> > I am skeptical of the argument that we can win by replacing "the
> > least desirable" pages with pages that were even less desirable [...]
>
> Also remember that the readahead pages won't actually get mapped into
> any process's address space, so they can be recycled easily. So, under
> swapping, you tend to find that the extra read-in pages end up
> replacing old, unneeded readahead pages to some extent, rather than
> swapping out useful pages.
If we're under a free memory shortage, "unlucky" readaheads will be
harmful. Currently the swapin readahead code can block waiting for
memory to do the readahead, forcing other pages to be aged and freed
more aggressively.
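Something like this would avoid the blocking (a sketch only;
get_page_nonblocking() and start_swap_read() are made-up names to
illustrate the idea, not 2.4 interfaces):

struct page;				/* opaque in this sketch */
struct page *get_page_nonblocking(void); /* made up: fails instead of reclaiming */
void start_swap_read(struct page *page, unsigned long offset); /* made up: async IO */

/* Readahead is speculative, so under memory pressure it's better
 * to skip the read than to block and push other pages out. */
struct page *swapin_readahead_one(unsigned long offset)
{
	struct page *page = get_page_nonblocking();
	if (!page)
		return NULL;	/* memory is tight: just don't read ahead */
	start_swap_read(page, offset);
	return page;
}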
Hi,
On Thu, Feb 01, 2001 at 08:53:33AM -0200, Marcelo Tosatti wrote:
>
> If we're under a free memory shortage, "unlucky" readaheads will be harmful.
I know, it's a balancing act. But given that even one successful
readahead per read will halve the number of swapin seeks, the
performance loss due to the extra scavenging would have to be very
bad indeed to outweigh the benefit.
Cheers,
Stephen
On Thu, 1 Feb 2001, Stephen C. Tweedie wrote:
> On Thu, Feb 01, 2001 at 08:53:33AM -0200, Marcelo Tosatti wrote:
> >
> > If we're under a free memory shortage, "unlucky" readaheads will be harmful.
>
> I know, it's a balancing act. But given that even one
> successful readahead per read will halve the number of swapin
> seeks, the performance loss due to the extra scavenging would
> have to be very bad indeed to outweigh the benefit.
But only when the extra pages we're reading in don't
displace useful data from memory, making us fault in
those other pages ... causing us to go to the disk
again and do more readahead, which could potentially
displace even more pages, etc...
One solution could be to put (most of) the swapin readahead
pages on the inactive_dirty list, so the pressure readahead
puts on the resident pages is smaller and the unused readahead
pages are reclaimed faster.
(And with the size of the inactive list being one second's
worth of page steals, those pages still have a good chance
of being used before they're recycled.)
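Schematically (not a patch; loosely after 2.4's swapin_readahead()
loop, with the real details elided):

/* After queueing each speculative read, start the page on the
 * inactive_dirty list instead of the active list, so unused
 * readahead is among the first things page_launder() reclaims. */
for (i = 0; i < num; offset++, i++) {
	page = read_swap_cache_async(SWP_ENTRY(SWP_TYPE(entry), offset), 0);
	if (!page)
		break;
	deactivate_page(page);		/* -> inactive_dirty, not active */
	page_cache_release(page);
}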
regards,
Rik
--
Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...
http://www.surriel.com/
http://www.conectiva.com/ http://distro.conectiva.com.br/
On Thu, Feb 01, 2001 at 02:45:04PM -0200, Rik van Riel wrote:
> One solution could be to put (most of) the swapin readahead
> pages on the inactive_dirty list, so pressure by readahead
> on the resident pages is smaller and the not used readahead
> pages are reclaimed faster.
Shouldn't they be on inactive_clean anyway? They are not mapped
(if I read Stephen's comment correctly) and are clean (because we
just read them in).
So if we have to put them there explicitly, we have at least a
performance bug, don't we?
Or do I still not get the new Linux MM design? ;-(
Totally clueless
Ingo Oeser
PS: Is everyone CC'ed also subscribed to linux-mm? Or do we all
filter dupes via "formail -D"? ;-)
--
10.+11.03.2001 - 3. Chemnitzer LinuxTag <http://www.tu-chemnitz.de/linux/tag>
<<<<<<<<<<<< come and join the fun >>>>>>>>>>>>
Hi,
On Thu, Feb 01, 2001 at 02:45:04PM -0200, Rik van Riel wrote:
>
> But only when the extra pages we're reading in don't
> displace useful data from memory, making us fault in
> those other pages ... causing us to go to the disk
> again and do more readahead, which could potentially
> displace even more pages, etc...
Remember, it's a balance. You can displace a few useful pages and
still win overall because the cost _per page_ goes way down due to
better disk IO utilisation.
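To put rough numbers on it: one clustered read costs about one IO and
yields 1 + h useful pages (h = readahead hits), but may displace d
pages that later cost about one IO each to fault back in. That's
roughly (1 + d) / (1 + h) IOs per useful page, against 1 without
readahead, so readahead wins whenever d < h.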
> One solution could be to put (most of) the swapin readahead
> pages on the inactive_dirty list, so the pressure readahead
> puts on the resident pages is smaller and the unused readahead
> pages are reclaimed faster.
Yep, that would make much sense.
--Stephen
On Thu, 1 Feb 2001, Ingo Oeser wrote:
> On Thu, Feb 01, 2001 at 02:45:04PM -0200, Rik van Riel wrote:
> > One solution could be to put (most of) the swapin readahead
> > pages on the inactive_dirty list, so the pressure readahead
> > puts on the resident pages is smaller and the unused readahead
> > pages are reclaimed faster.
>
> Shouldn't they be on inactive_clean anyway?
No, the inactive_clean pages are reclaimed before the
other inactive pages, and we want to give all pages
an equal chance to be used when we put them on the
inactive list.
This is especially true for freshly read in swap cache
pages, because we _expect_ that some of them will be
used.
> Or do I still not get the new Linux MM design? ;-(
Read mm/swap.c::deactivate_page_nolock(). My original decision
to put all clean inactive pages directly on inactive_clean led
to dirty pages sticking around forever, and page reclaim could
be quite unfair towards clean pages. This was changed later to
put all inactive pages on the inactive_dirty list first and
have them reclaimed more fairly in page_launder().
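In simplified form (the real function in mm/swap.c also checks the
page count and whether the page has buffers; those details are
elided here):

void deactivate_page_nolock(struct page *page)
{
	page->age = 0;
	ClearPageReferenced(page);
	if (PageActive(page)) {
		del_page_from_active_list(page);
		/* always via inactive_dirty, even for clean pages;
		 * page_launder() moves or frees them fairly later */
		add_page_to_inactive_dirty_list(page);
	}
}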
regards,
Rik