2004-03-09 05:37:46

by Nick Piggin

Subject: [RFC][PATCH 0/4] VM split active lists

Hi,
Background: there are a number of problems in the 2.6 page reclaim
algorithms. Thankfully, most of them were simple oversights or small
bugs; Andrew Morton and I have fixes for the worst of them in his -mm
tree, and being mostly simple and obviously correct, they will
hopefully be included in 2.6.5.

With these fixes, 2.6 swapping performance (the area I'm focusing on)
is very much improved. Unfortunately there is another, more complex
patch in limbo that improves performance by an additional 10%:
Nikita's dont-rotate-active-list.

The reason for the improvement is that it improves the ordering of
mapped pages on the active list. Now I'd like to fix this problem and
get that 10%. However, dont-rotate-active-list is pretty ugly, to put
it nicely.

OK, the theory is that mapped pagecache pages are worth more than
unmapped pages. This is a good theory because mapped pages will
usually have far more random access patterns, so pagein *and* pageout
will be much less efficient. Also, applications are probably coded to
cope better with blocking in read() than with faulting on a random
code / anon memory page. So a factor of >= 16 wouldn't be out of the
question.

Now the basic problem is that we have these two classes of pages on
one (the active) list, and we attempt to place different scanning
semantics on each class. This is done with the reclaim_mapped logic.
Now I won't be too disparaging of reclaim_mapped because I think
Andrew crea^W^W^W^W it somehow more or less works, but it has a number
of problems (a rough sketch of the heuristic follows the list):

* Difficult to trace: relies on saved state from earlier in time.
* Difficult to control: relies on inner workings (e.g. "priority");
mapped vs unmapped scanning behaviour is derived basically by black
magic.
* Not-quite-right semantics: mapped pages are infinitely preferable
to unmapped pages until something goes click, and then they are worth
about half as much.
* These semantics mean that under low memory pressure (before the
click), truly inactive mapped pages will never be reclaimed. Probably
they should be, to increase the resident working set.
* Also, a significant number of mapped pages can be passed over
without doing any real work.
* This causes list position information to be lost (which is where
that 10% comes from).
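
As promised, here is a minimal standalone sketch of roughly how the
mainline reclaim_mapped heuristic behaves. It is recalled from memory
and only approximates the real refill code; the constants and names
are not a verbatim copy of mm/vmscan.c.

#include <stdio.h>

/* Sketch of the "click": mapped pages are skipped outright until
 * swap_tendency crosses a threshold, then suddenly become fair game. */
static int reclaim_mapped(int prev_priority, long nr_mapped,
                          long total_memory, int vm_swappiness)
{
	int distress = 100 >> prev_priority;	/* saved state from earlier */
	int mapped_ratio = nr_mapped * 100 / total_memory;
	int swap_tendency = mapped_ratio / 2 + distress + vm_swappiness;

	return swap_tendency >= 100;
}

int main(void)
{
	/* Light pressure: mapped pages are never scanned... */
	printf("%d\n", reclaim_mapped(12, 100000, 262144, 60));
	/* ...until priority drops far enough and everything clicks over. */
	printf("%d\n", reclaim_mapped(2, 100000, 262144, 60));
	return 0;
}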

Now I have an alternative which hopefully solves all these problems,
with less complexity than dont-rotate-active-list (which only solves
the last one): split the active list into active_mapped and
active_unmapped lists. Pages are moved between them lazily at scan
time, so the lists needn't be totally accurate.

You then simply put 16 (or whatever) times the amount of pressure on
the unmapped list as you do on the mapped list. This number can be the
tunable (instead of swappiness).
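
To make the pressure split concrete, here is a hedged, standalone
simulation of the arithmetic described above. It is loosely inspired
by the shrink_zone() hunk quoted later in this thread, but names such
as MAPPED_PAGE_COST are illustrative; this is a sketch, not the patch
itself.

#include <stdio.h>

#define SWAP_CLUSTER_MAX	32
#define MAPPED_PAGE_COST	16	/* one mapped page "costs" 16 unmapped ones */

int main(void)
{
	unsigned long nr_active_mapped = 60000;
	unsigned long nr_active_unmapped = 20000;
	unsigned long nr_inactive = 40000;
	unsigned long nr_active = nr_active_mapped + nr_active_unmapped;

	/* Total scan pressure on the active lists, as in mainline. */
	unsigned long ratio = (unsigned long)SWAP_CLUSTER_MAX * nr_active /
				(nr_inactive * 2 + 1);

	/* Weight the unmapped list so each of its pages is scanned
	 * MAPPED_PAGE_COST times as often as a mapped page. */
	unsigned long weighted_unmapped = nr_active_unmapped * MAPPED_PAGE_COST;
	unsigned long scan_unmapped = ratio * weighted_unmapped /
				(weighted_unmapped + nr_active_mapped);
	unsigned long scan_mapped = ratio - scan_unmapped;

	printf("scan %lu from active_unmapped, %lu from active_mapped\n",
	       scan_unmapped, scan_mapped);
	return 0;
}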

I have an implementation which compiles, boots, and survives a -j8
kbuild. Probably still has a few problems though. A couple of things:
it presently just puts even pressure on both lists, so it is swappy
(trivial to fix). It also gives unmapped pages the full two-level
(active+inactive) system because it was just easier to do it that way.
Don't know if this would be good or bad.

The patches go like this:
1/4: vm-lrutopage-cleanup
Cleanup from Nikita's dont-rotate-active-list patch.

2/4: vm-nofixed-active-list
Generalise active list scanning to scan different lists.

3/4: vm-no-reclaim_mapped
Kill reclaim_mapped and its merry men.

4/4: vm-mapped-x-active-lists
Split the active list into mapped and unmapped pages.


2004-03-09 05:40:05

by Nick Piggin

Subject: Re: [RFC][PATCH 4/4] vm-mapped-x-active-lists



Nick Piggin wrote:

>
>@@ -714,14 +737,27 @@ shrink_zone(struct zone *zone, int max_s
> * just to make sure that the kernel will slowly sift through the
> * active list.
> */
>- ratio = (unsigned long)SWAP_CLUSTER_MAX * zone->nr_active /
>- ((zone->nr_inactive | 1) * 2);
>+ nr_active = zone->nr_active_mapped + zone->nr_active_unmapped;
>+ ratio = (unsigned long)SWAP_CLUSTER_MAX * nr_active /
>+ (zone->nr_inactive * 2 + 1);
>+ mapped_ratio = (unsigned long long)ratio * nr_active;
>+ do_div(mapped_ratio, zone->nr_active_mapped+1);
>

Just for information, this is where you would balance mapped vs unmapped
pages: do_div(mapped_ratio, 16); /* mapped pages are worth 16 times
more */

>+
>+ ratio = ratio - mapped_ratio;
>+ atomic_add(ratio+1, &zone->nr_scan_active_unmapped);
>+ count = atomic_read(&zone->nr_scan_active_unmapped);
>+ if (count >= SWAP_CLUSTER_MAX) {
>+ atomic_set(&zone->nr_scan_active_unmapped, 0);
>+ shrink_active_list(zone, &zone->active_unmapped_list,
>+ &zone->nr_active_unmapped, count, ps);
>+ }
>
>- atomic_add(ratio+1, &zone->nr_scan_active);
>- count = atomic_read(&zone->nr_scan_active);
>+ atomic_add(mapped_ratio+1, &zone->nr_scan_active_mapped);
>+ count = atomic_read(&zone->nr_scan_active_mapped);
> if (count >= SWAP_CLUSTER_MAX) {
>- atomic_set(&zone->nr_scan_active, 0);
>- shrink_active_list(zone, &zone->active_list, count, ps);
>+ atomic_set(&zone->nr_scan_active_mapped, 0);
>+ shrink_active_list(zone, &zone->active_mapped_list,
>+ &zone->nr_active_mapped, count, ps);
> }
>
> atomic_add(max_scan, &zone->nr_scan_inactive);
>
>
>

2004-03-09 05:49:11

by Mike Fedyk

Subject: Re: [RFC][PATCH 4/4] vm-mapped-x-active-lists

Nick Piggin wrote:
>
>
> ------------------------------------------------------------------------
>
>
> Split the active list into mapped and unmapped pages.

This looks similar to Rik's Active and Active-anon lists in 2.4-rmap.

Also, how does this interact with Andrea's VM work?

2004-03-09 06:06:44

by Nick Piggin

Subject: Re: [RFC][PATCH 4/4] vm-mapped-x-active-lists



Mike Fedyk wrote:

> Nick Piggin wrote:
>
>>
>>
>> ------------------------------------------------------------------------
>>
>>
>> Split the active list into mapped and unmapped pages.
>
>
> This looks similar to Rik's Active and Active-anon lists in 2.4-rmap.
>

Oh? I haven't looked at 2.4-rmap for a while. Well I guess that gives
it more credibility, thanks.

> Also, how does this interact with Andrea's VM work?
>

Not sure to be honest, I haven't looked at it :\. I'm not really
sure if the rmap mitigation direction is just a holdover until
page clustering or intended as a permanent feature...

Either way, I trust its proponents will take the onus for regressions.

2004-03-09 07:03:14

by William Lee Irwin III

Subject: Re: [RFC][PATCH 4/4] vm-mapped-x-active-lists

On Tue, Mar 09, 2004 at 05:06:37PM +1100, Nick Piggin wrote:
> Not sure to be honest, I haven't looked at it :\. I'm not really
> sure if the rmap mitigation direction is just a holdover until
> page clustering or intended as a permanent feature...
> Either way, I trust its proponents will take the onus for regressions.

Actually, anobjrmap does wonderful things wrt. liberating pgcl
internals from some very frustrating complications having to do with
assumptions of a 1:1 correspondence between pte pages and struct pages,
so I would regard work in the direction of anobjrmap as useful to
advance the state of page clustering regardless of its rmap mitigation
overtones. The "partial" objrmap is actually insufficient to clean up
this assumption, and introduces new failure modes I don't like (which
it is in fact not necessary to do; aa's code is very close to doing the
partial-but-insufficient-for-pgcl objrmap properly apart from trying to
allocate more pte_chains than necessary and not falling back to the vma
lists for linear/nonlinear mapping mixtures). The current port has some
code to deal with this I'm extremely eager to dump as soon as things
such as anobjrmap etc. make it possible, if they're merged.

Current efforts are now a background/spare time affair centering around
non-i386 architectures and driver audits.


-- wli

2004-03-09 07:24:03

by Nick Piggin

Subject: Re: [RFC][PATCH 4/4] vm-mapped-x-active-lists



William Lee Irwin III wrote:

>On Tue, Mar 09, 2004 at 05:06:37PM +1100, Nick Piggin wrote:
>
>>Not sure to be honest, I haven't looked at it :\. I'm not really
>>sure if the rmap mitigation direction is just a holdover until
>>page clustering or intended as a permanent feature...
>>Either way, I trust its proponents will take the onus for regressions.
>>
>
>Actually, anobjrmap does wonderful things wrt. liberating pgcl
>internals from some very frustrating complications having to do with
>assumptions of a 1:1 correspondence between pte pages and struct pages,
>so I would regard work in the direction of anobjrmap as useful to
>advance the state of page clustering regardless of its rmap mitigation
>overtones. The "partial" objrmap is actually insufficient to clean up
>this assumption, and introduces new failure modes I don't like (which
>it is in fact not necessary to do; aa's code is very close to doing the
>partial-but-insufficient-for-pgcl objrmap properly apart from trying to
>allocate more pte_chains than necessary and not falling back to the vma
>lists for linear/nonlinear mapping mixtures). The current port has some
>code to deal with this I'm extremely eager to dump as soon as things
>such as anobjrmap etc. make it possible, if they're merged.
>
>Current efforts are now a background/spare time affair centering around
>non-i386 architectures and driver audits.
>

OK. I had just noticed that the people complaining about rmap most
are the ones using 4K page size (x86-64 uses 4K, doesn't it?). Not
that this fact means it is OK to ignore the problem, but I thought
maybe pgcl might solve it in a more general way.

I wonder how much you gain with objrmap / anobjrmap on say a 64K page
architecture?

2004-03-09 07:37:28

by William Lee Irwin III

Subject: Re: [RFC][PATCH 4/4] vm-mapped-x-active-lists

William Lee Irwin III wrote:
>> Current efforts are now a background/spare time affair centering around
>> non-i386 architectures and driver audits.

On Tue, Mar 09, 2004 at 06:23:53PM +1100, Nick Piggin wrote:
> OK. I had just noticed that the people complaining about rmap most
> are the ones using 4K page size (x86-64 uses 4K, doesn't it?). Not
> that this fact means it is OK to ignore the problem, but I thought
> maybe pgcl might solve it in a more general way.
> I wonder how much you gain with objrmap / anobjrmap on say a 64K page
> architecture?

pgcl doesn't reduce userspace's mapping granularity. The current
implementation has the same pte_chain overhead as mainline for the same
virtualspace mapped. It's unclear how feasible it is to reduce this
overhead, though various proposals have gone around. I've ignored the
potential pte_chain reduction issue entirely in favor of concentrating
on more basic correctness and functionality. The removal of the 1:1 pte
page : struct page assumption is the vastly more important aspect of
anobjrmap in relation to pgcl, since removing that assumption would
remove a significant piece of complexity.
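
As a back-of-the-envelope illustration of that overhead point (an
editorial sketch with made-up numbers, not pgcl or mainline code):
pte_chain entries are per-pte, and userspace ptes still cover 4K each
under pgcl, so the count tracks the virtualspace mapped rather than
PAGE_SIZE.

#include <stdio.h>

int main(void)
{
	unsigned long mapped_virtualspace = 256UL << 20;	/* 256 MB mapped */
	unsigned long pte_span = 4096;	/* each pte still covers 4K of virtualspace */

	/* One pte_chain entry per pte, independent of the (pgcl) PAGE_SIZE
	 * chosen for struct page granularity. */
	printf("%lu pte_chain entries\n", mapped_virtualspace / pte_span);
	return 0;
}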

-- wli

2004-03-09 09:24:46

by William Lee Irwin III

Subject: Re: [RFC][PATCH 4/4] vm-mapped-x-active-lists

On Tue, Mar 09, 2004 at 06:23:53PM +1100, Nick Piggin wrote:
> OK. I had just noticed that the people complaining about rmap most
> are the ones using 4K page size (x86-64 uses 4K, doesn't it?). Not
> that this fact means it is OK to ignore the problem, but I thought
> maybe pgcl might solve it in a more general way.

There is something to be gained in terms of general cache and memory
footprint of non-reclamation-oriented operations. The sad thing is
that many of the arguments presented in favor of these object-based
physical-to-virtual resolution methods are largely for what I'd call
the wrong reasons. Kernel compiles are not a realistic workload. fork()
is used in real applications that are forking servers, and those are
what should be instrumented for the performance argument. Cache and
memory conservation are also legitimate concerns, which are being
expressed in ways that pollute them with the stigma of highmem.


On Tue, Mar 09, 2004 at 06:23:53PM +1100, Nick Piggin wrote:
> I wonder how much you gain with objrmap / anobjrmap on say a 64K page
> architecture?

The gains I spoke of earlier are completely in terms of implementation
mechanics and unrelated to concerns such as performance. Essentially,
ptes are expected to be of some size, and it's desirable that they
remain of those sizes and not be "widened" artificially, lest we incur
more fragmentation. The pte_chain-based physical-to-virtual resolution
algorithm uses the struct page tracking a pte page, which is unique in
mainline, as the place to shove the information the resolution
algorithm needs. This gets rather ugly when the struct page
corresponds to multiple pte pages. pte pages are already grossly
underutilized (ca. 20%) with stock 4K pte pages; jacking them up to
64KB etc. worsens space utilization, has larger latencies associated
with bitblitting the things, and is just plain ugly to implement.

anobjrmap OTOH removes this dependency on the struct page tracking a
4K pte page. 4K (or otherwise sub-PAGE_SIZE) blocks of memory for ptes
may be freely used without incurring implementation complexity or the
other disadvantages above. The partial objrmap doesn't remove this
dependency on a unique struct page tracking a pte page, retaining it
for the cases of anonymous and nonlinearly-mapped pagecache pages.
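
To make the contrast concrete, here is an editorial sketch (not wli's
code nor the objrmap patches) of the core arithmetic behind
object-based physical-to-virtual resolution: rather than following
per-page pte_chains, each vma mapping the page's object is visited and
the candidate user virtual address is derived from the file offset.
All names below are illustrative.

#include <stdio.h>

#define PAGE_SHIFT	12

struct vma {
	unsigned long vm_start;	/* user virtual start of the mapping */
	unsigned long vm_end;
	unsigned long vm_pgoff;	/* file offset of vm_start, in pages */
};

/* Virtual address at which page 'index' of the object appears in 'vma',
 * or 0 if this vma doesn't map it (nonlinear mappings ignored here). */
static unsigned long vma_address(const struct vma *vma, unsigned long index)
{
	unsigned long addr;

	if (index < vma->vm_pgoff)
		return 0;
	addr = vma->vm_start + ((index - vma->vm_pgoff) << PAGE_SHIFT);
	if (addr >= vma->vm_end)
		return 0;
	return addr;
}

int main(void)
{
	struct vma v = { 0x40000000UL, 0x40100000UL, 16 };

	printf("page index 20 maps at %#lx\n", vma_address(&v, 20));
	return 0;
}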


-- wli

P.S.: The best word I could come up with for leaf radix tree nodes
of pagetables was "pte page". This term is not meant to imply
they are of size PAGE_SIZE.

2004-03-09 15:27:32

by Marc-Christian Petersen

Subject: Re: [RFC][PATCH 4/4] vm-mapped-x-active-lists

On Tuesday 09 March 2004 06:35, Nick Piggin wrote:

Hi Nick,

seems the following patch is required on top of your patches?

ciao, Marc


Attachments:
002_03-vm-mapped-x-active-lists-1-fix.patch (618.00 B)

2004-03-09 15:43:06

by Nikita Danilov

Subject: Re: [RFC][PATCH 4/4] vm-mapped-x-active-lists

Marc-Christian Petersen writes:
> On Tuesday 09 March 2004 06:35, Nick Piggin wrote:
>
> Hi Nick,
>
> seems the following patch is required on top of your patches?
>
> ciao, Marc
> --- old/arch/i386/mm/hugetlbpage.c 2004-03-09 14:57:42.000000000 +0100
> +++ new/arch/i386/mm/hugetlbpage.c 2004-03-09 15:36:15.000000000 +0100
> @@ -411,8 +411,8 @@ static void update_and_free_page(struct
> htlbzone_pages--;
> for (j = 0; j < (HPAGE_SIZE / PAGE_SIZE); j++) {
> map->flags &= ~(1 << PG_locked | 1 << PG_error | 1 << PG_referenced |
> - 1 << PG_dirty | 1 << PG_active | 1 << PG_reserved |
> - 1 << PG_private | 1<< PG_writeback);
> + 1 << PG_dirty | 1 << PG_active_mapped | 1 << PG_active_unapped |

PG_active_unapped?

> + 1 << PG_reserved | 1 << PG_private | 1<< PG_writeback);
> set_page_count(map, 0);
> map++;
> }

Nikita.

2004-03-10 02:50:14

by Nick Piggin

Subject: Re: [RFC][PATCH 4/4] vm-mapped-x-active-lists



Marc-Christian Petersen wrote:

>On Tuesday 09 March 2004 06:35, Nick Piggin wrote:
>
>Hi Nick,
>
>seems the following patch is required on top of your patches?
>
>

Hi Marc,
Yep, thanks for that one. You're right of course, minus the typo.
It's funny, I made the same one in about 3 other places.

Nick

2004-03-10 05:32:23

by Nick Piggin

Subject: Re: [RFC][PATCH 0/4] VM split active lists



Nick Piggin wrote:

>
> OK, the theory is that mapped pagecache pages are worth more than
> unmapped pages. This is a good theory because mapped pages will
> usually have far more random access patterns, so pagein *and* pageout
> will be much less efficient. Also, applications are probably coded to
> be more suited to blocking in read() than a random code / anon memory
> page. So a factor of >= 16 wouldn't be out of the question.
>

Just a followup - there is a small but significant bug in patch
#4/4. In shrink_zone, mapped_ratio should be divided by
nr_active_unmapped. I have this fixed, hugepage compile problems
fixed, and a mapped_page_cost tunable in place of swappiness. So
anyone interested in testing should please ask me for my latest
patch.

I'm getting some preliminary numbers now. They're pretty good; it
looks like they should be similar to dont-rotate-active-list, which
isn't too surprising.

Interestingly, a mapped_page_cost of 8 is close to optimal for
swapping-kbuild throughput; values of 4 and 16 are both worse.
mapped_page_cost is in units of unmapped page cost. Maybe it is
just me, but I find this scheme more meaningful, and it provides
more control than swappiness.

2004-03-12 09:58:34

by Hans Reiser

Subject: Re: [RFC][PATCH 0/4] VM split active lists

I didn't review the code carefully, but it seems like a
reasonable/better design overall. Thanks for it.

--
Hans