2002-06-19 11:21:39

by Craig Kulesa

Subject: [PATCH] (2/2) reverse mappings for current 2.5.23 VM



Where: http://loke.as.arizona.edu/~ckulesa/kernel/rmap-vm/

This rather smaller patch serves a very different purpose than patch #1.
It introduces the "minimal" rmap functionality of Rik van Riel's reverse
mapping patches atop the current 2.5.23 classzone VM. This quick patch
was done on a whim at 1 AM, but it actually seems to perform pretty
decently on my laptop. Page eviction choice is quite good, even
in the absence of any sort of page aging. Using the same quick test as
in the previous email:

2.5.22 vanilla:
Total kernel swapouts during test = 29068 kB
Total kernel swapins during test = 16480 kB
Elapsed time for test: 141 seconds

2.5.23-rmap (this patch -- "rmap-minimal"):
Total kernel swapouts during test = 24068 kB
Total kernel swapins during test = 6480 kB
Elapsed time for test: 133 seconds

2.5.23-rmap13b (Rik's "rmap-13b complete") :
Total kernel swapouts during test = 40696 kB
Total kernel swapins during test = 380 kB
Elapsed time for test: 133 seconds

[Gotta tone down page_launder() a bit...]

Modifications:

- in vmscan.c: dropped swap_out_add_to_swap_cache() and integrated
its contents into rmap's add_to_swap() in swap_state.c. This is a
more reasonable place for it anyway.

- Dropped try_to_swap_out(), swap_out(), and all its brethren from
vmscan.c. What a great feeling! :)

- In vmscan.c's shrink_cache():
If a page is actively referenced and its mapping is in use, move
the inactive page to the active list; allocate swap space for
anon pages, then, if we must, fall back to rmap's try_to_unmap()
to swap. Drop the max_mapped logic, since swap_out() is gone and
we don't need it. If try_to_unmap() fails, put the page back on
the active list. These are all pieces of Rik's page_launder()
logic in his integrated rmap scheme.

- use page_referenced() instead of TestClearPageReferenced() in
refill_inactive()

- compilation patches as per "complete" rmap patch #1 (previous
email)
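The shrink_cache() flow described in the bullets above can be sketched roughly as follows. This is a Python toy model of the logic, not the actual 2.5.23 kernel C; the Page fields and the try_to_unmap/add_to_swap callbacks are illustrative assumptions that only mirror the names in the changelog:

```python
# Toy sketch (Python, not kernel C) of the shrink_cache() logic above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Page:
    referenced: bool = False   # recently touched via a pte?
    mapcount: int = 0          # number of ptes mapping this page
    anon: bool = False         # anonymous page (needs swap to evict)
    swap_slot: Optional[int] = None

def shrink_cache(inactive, active, try_to_unmap, add_to_swap):
    """One pass over the inactive list; returns the pages evicted."""
    evicted = []
    for page in list(inactive):
        # Actively referenced and still mapped: promote, don't evict.
        if page.referenced and page.mapcount > 0:
            inactive.remove(page)
            active.append(page)
            continue
        # Anonymous pages need backing swap space before unmapping.
        if page.anon and page.swap_slot is None:
            page.swap_slot = add_to_swap(page)
        # Fall back to rmap's try_to_unmap(); on failure, reactivate.
        # (No max_mapped logic here: swap_out() is gone.)
        if page.mapcount > 0 and not try_to_unmap(page):
            inactive.remove(page)
            active.append(page)
            continue
        inactive.remove(page)
        evicted.append(page)
    return evicted
```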

Okay, it's quick and dirty, but it seems to work pretty well in initial
(and not yet rigorous) tests. Like the full rmap patch for 2.5, I'll try
to keep this patch up to date with the 2.5 and rmap trees until VM
development switches to 2.5.

Comments, patches, fixes & feedback always welcome. :)


Craig Kulesa
Steward Observatory, Univ. of Arizona



2002-06-19 12:03:07

by Rik van Riel

Subject: Re: [PATCH] (2/2) reverse mappings for current 2.5.23 VM

On Wed, 19 Jun 2002, Craig Kulesa wrote:

> Where: http://loke.as.arizona.edu/~ckulesa/kernel/rmap-vm/

Thank you. I take it as a big compliment that people are
not only interested in rmap on other kernel versions, but
also able to read and understand the rmap code well enough
to do so ;)

> 2.5.22 vanilla:
> Total kernel swapouts during test = 29068 kB
> Total kernel swapins during test = 16480 kB
> Elapsed time for test: 141 seconds
>
> 2.5.23-rmap (this patch -- "rmap-minimal"):
> Total kernel swapouts during test = 24068 kB
> Total kernel swapins during test = 6480 kB
> Elapsed time for test: 133 seconds
>
> 2.5.23-rmap13b (Rik's "rmap-13b complete") :
> Total kernel swapouts during test = 40696 kB
> Total kernel swapins during test = 380 kB
> Elapsed time for test: 133 seconds

Interesting to see that both rmap versions have the same
performance; it would seem that swapouts are much cheaper
than waiting for a pagefault to swap something in ...

> [Gotta tone down page_launder() a bit...]

... though I definitely agree with your analysis here.
I hadn't expected a quick rmap port without any of the
VM balancing changes to give a performance edge over the
virtual scanning VM, and I am surprised by your results.


> Modifications:
>
> - in vmscan.c: dropped swap_out_add_to_swap_cache(), integrated
> its contents to rmap's add_to_swap() in swap_state.c. This is a
> more reasonable place for it anyway.
>
> - Dropped try_to_swap_out(), swap_out(), and all its brethren from
> vmscan.c. What a great feeling! :)
>
> - In vmscan.c's shrink_cache():
> If a page is actively referenced and page mapping in use, move
> the inactive page to the active list; alloc some swap space for
> anon pages, then if we must, fall to rmap's try_to_unmap() to
> swap. Drop the max_mapped logic, since swap_out() is gone and
> we don't need it. If try_to_unmap() fails, put the page on the
> active list. These are all pieces of Rik's page_launder()
> logic in his integrated rmap scheme.
>
> - use page_referenced() instead of TestClearPageReferenced() in
> refill_inactive()

This changelog seems small enough for the code to be mergeable,
if it weren't for one last TODO item:

- pte_highmem support for -rmap


> Okay it's quick and dirty, but it seems to work pretty well in initial
> (and not yet rigorous) tests. Like the full rmap patch for 2.5, I'll try
> to keep this patch up to date with the 2.5 and rmap trees until VM
> development switches to 2.5.

Thank you. I'm leaving for two meetings and a conference
in Canada later today, but I'll be back to work on rmap
for 2.5 from July 3rd.

kind regards,

Rik
--
Bravely reimplemented by the knights who say "NIH".

http://www.surriel.com/ http://distro.conectiva.com/

2002-06-19 17:02:48

by Daniel Phillips

Subject: Re: [PATCH] (2/2) reverse mappings for current 2.5.23 VM

On Wednesday 19 June 2002 13:58, Rik van Riel wrote:
> > 2.5.22 vanilla:
> > Total kernel swapouts during test = 29068 kB
> > Total kernel swapins during test = 16480 kB
> > Elapsed time for test: 141 seconds
> >
> > 2.5.23-rmap (this patch -- "rmap-minimal"):
> > Total kernel swapouts during test = 24068 kB
> > Total kernel swapins during test = 6480 kB
> > Elapsed time for test: 133 seconds
> >
> > 2.5.23-rmap13b (Rik's "rmap-13b complete") :
> > Total kernel swapouts during test = 40696 kB
> > Total kernel swapins during test = 380 kB
> > Elapsed time for test: 133 seconds
>
> Interesting to see that both rmap versions have the same
> performance, it would seem that swapouts are much cheaper
> than waiting for a pagefault to swap something in ...

You might conclude from the above that lru+rmap is superior to
aging+rmap: while they show the same wall-clock time, lru+rmap consumes
considerably less disk bandwidth. Naturally, it would be premature to
conclude this from one trial on one load.
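To put a number on the bandwidth point, here is just a tally of the figures quoted above (nothing more than arithmetic on the posted kB counts):

```python
# Total swap traffic (swapouts + swapins, kB) from the figures above.
results = {
    "2.5.22 vanilla":      (29068, 16480),
    "2.5.23 rmap-minimal": (24068, 6480),
    "2.5.23 rmap-13b":     (40696, 380),
}
totals = {name: out_kb + in_kb for name, (out_kb, in_kb) in results.items()}
for name, total in totals.items():
    print(f"{name}: {total} kB")
# rmap-minimal moves about 30.5 MB through swap, rmap-13b about 41 MB,
# at the same 133-second wall time.
```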

These patches need benchmarking - lots of it, and preferably in the next few
days.

We need to see cpu stats as well.

--
Daniel

2002-06-19 17:30:43

by Rik van Riel

Subject: Re: [PATCH] (2/2) reverse mappings for current 2.5.23 VM

On Wed, 19 Jun 2002, Daniel Phillips wrote:

> > > 2.5.23-rmap (this patch -- "rmap-minimal"):
> > > Total kernel swapouts during test = 24068 kB
> > > Total kernel swapins during test = 6480 kB
> > > Elapsed time for test: 133 seconds
> > >
> > > 2.5.23-rmap13b (Rik's "rmap-13b complete") :
> > > Total kernel swapouts during test = 40696 kB
> > > Total kernel swapins during test = 380 kB
> > > Elapsed time for test: 133 seconds

> You might conclude from the above that the lru+rmap is superior to
> aging+rmap: while they show the same wall-clock time, lru+rmap consumes
> considerably less disk bandwidth. Naturally, it would be premature to
> conclude this from one trial on one load.

On the contrary, aging+rmap shows far fewer swapins.

The fact that it has more swapouts than needed means
we need to fix one aspect of the thing (page_launder),
it doesn't mean we should get rid of the whole thing.

kind regards,

Rik
--
Bravely reimplemented by the knights who say "NIH".

http://www.surriel.com/ http://distro.conectiva.com/

2002-06-19 17:46:47

by Daniel Phillips

Subject: Re: [PATCH] (2/2) reverse mappings for current 2.5.23 VM

On Wednesday 19 June 2002 13:21, Craig Kulesa wrote:
> 2.5.22 vanilla:
^^--- is this a typo?

> Total kernel swapouts during test = 29068 kB
> Total kernel swapins during test = 16480 kB
> Elapsed time for test: 141 seconds
>
> 2.5.23-rmap (this patch -- "rmap-minimal"):
> Total kernel swapouts during test = 24068 kB
> Total kernel swapins during test = 6480 kB
> Elapsed time for test: 133 seconds
>
> 2.5.23-rmap13b (Rik's "rmap-13b complete") :
> Total kernel swapouts during test = 40696 kB
> Total kernel swapins during test = 380 kB
> Elapsed time for test: 133 seconds

--
Daniel

2002-06-19 20:10:15

by Craig Kulesa

Subject: Re: [PATCH] (2/2) reverse mappings for current 2.5.23 VM


On Wed, 19 Jun 2002, Daniel Phillips wrote:

> You might conclude from the above that the lru+rmap is superior to
> aging+rmap: while they show the same wall-clock time, lru+rmap consumes
> considerably less disk bandwidth.

I wouldn't draw _any_ conclusions about either patch yet, because as you
said, it's only one type of load. And it was a single tick in vmstat
where page_launder() was aggressive that made the difference between the
two. In a different test, where I had actually *used* more of the
application pages instead of simply closing most of the applications
(save one, the memory hog), the results would likely have been very
different.

I think that Rik's right: this simply points out that page_launder(), at
least in its interaction with 2.5, needs some tuning. I think both
approaches look very promising, but each for different reasons.

-Craig

2002-06-19 20:26:07

by Craig Kulesa

Subject: Re: [PATCH] (2/2) reverse mappings for current 2.5.23 VM


On Wed, 19 Jun 2002, Daniel Phillips wrote:

> On Wednesday 19 June 2002 13:21, Craig Kulesa wrote:
> > 2.5.22 vanilla:
> ^^--- is this a typo?

Good eye, but no. I was indeed comparing 2.5.22 to the 2.5.23 rmap
patches. I performed the same test on 2.5.23 later and its behavior is
similar.

-Craig

2002-06-19 20:44:32

by Daniel Phillips

Subject: Re: [PATCH] (2/2) reverse mappings for current 2.5.23 VM

On Wednesday 19 June 2002 22:09, Craig Kulesa wrote:
> I wouldn't draw _any_ conclusions about either patch yet, because as you
> said, it's only one type of load. And it was a single tick in vmstat
> where page_launder() was aggressive that made the difference between the
> two. In a different test, where I had actually *used* more of the
> application pages instead of simply closing most of the applications
> (save one, the memory hog), the results are likely to have been very
> different.
>
> I think that Rik's right: this simply points out that page_launder(), at
> least in its interaction with 2.5, needs some tuning. I think both
> approaches look very promising, but each for different reasons.

Indeed.

One reason for being interested in a lot more numbers and a variety of loads
is that there's an effect, predicted by Andrea, that I'm watching for: both
aging+rmap and lru+rmap do swapout in random order with respect to virtual
memory, and this should in theory cause increased seeking on swap-in. We
didn't see any sign of such degradation vs mainline; in fact, we saw a
significant overall speedup. It could be that we just don't have enough data
yet, or maybe there really is more seeking for each swap-in but the effect
of less swapping overall is dominant.
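The predicted seek effect can be illustrated with a toy model (my own construction, pure Python; it treats the swap area as a linear slot array and seek cost as slot distance, which is a gross simplification of real disk behavior):

```python
import random

def mean_seek(slots):
    """Average |slot distance| between consecutive swap-ins."""
    return sum(abs(b - a) for a, b in zip(slots, slots[1:])) / (len(slots) - 1)

n = 1000
# Virtual-order swapout: virtual page i lands in slot i, so faulting
# pages back in virtual order touches adjacent slots.
virtual_order = list(range(n))
# LRU-order swapout: slot assignment ends up unrelated to virtual
# address order; model it as a random permutation.
random.seed(0)
lru_order = virtual_order[:]
random.shuffle(lru_order)

print(mean_seek(virtual_order))  # adjacent slots: distance 1.0
print(mean_seek(lru_order))      # roughly n/3: hundreds of slots per fault
```

Whether that extra seeking shows up in practice is exactly what more benchmarking would tell us; as noted above, less total swapping may simply dominate.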

--
Daniel