2007-05-19 22:26:53

by David Miller

[permalink] [raw]
Subject: that page count overflow thing


I think we may be able to fix that one without making the
counter larger, it's silly overhead for such an extreme
case IMHO.

Perhaps it might be possible to just make the counter stick at its
maximum, and when it's there we have an rbtree of external "large"
counters, keyed by page struct address.

So basically externalize counters that go over the maximally
representable value. In this way only the idiotic cases pay
the price.
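
Roughly the kind of thing I have in mind, as an untested sketch
with hypothetical names:

#include <linux/rbtree.h>
#include <linux/spinlock.h>
#include <linux/mm_types.h>

/* One external 64-bit count per page whose embedded counter has
 * saturated, kept in an rbtree keyed by the struct page address. */
struct page_ext_count {
	struct rb_node	node;
	struct page	*key;	/* the saturated page */
	atomic64_t	count;	/* references beyond the max */
};

static struct rb_root page_ext_counts = RB_ROOT;
static DEFINE_SPINLOCK(page_ext_lock);

/* Look up the external counter for @page; caller holds page_ext_lock. */
static struct page_ext_count *page_ext_lookup(struct page *page)
{
	struct rb_node *n = page_ext_counts.rb_node;

	while (n) {
		struct page_ext_count *e =
			rb_entry(n, struct page_ext_count, node);

		if (page < e->key)
			n = n->rb_left;
		else if (page > e->key)
			n = n->rb_right;
		else
			return e;
	}
	return NULL;
}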


2007-05-21 11:38:12

by William Lee Irwin III

[permalink] [raw]
Subject: Re: that page count overflow thing

On Sat, May 19, 2007 at 03:26:49PM -0700, David Miller wrote:
> I think we may be able to fix that one without making the
> counter larger, it's silly overhead for such an extreme
> case IMHO.
> Perhaps it might be possible to just make the counter stick at its
> maximum, and when it's there we have an rbtree of external "large"
> counters, keyed by page struct address.
> So basically externalize counters that go over the maximally
> representable value. In this way only the idiotic cases pay
> the price.

This could be awkward with allocation requirements. How about an
open-addressed hash table? It can be sized in advance so large that
it never needs to expand, with a very small constant-factor space
overhead.


-- wli

2007-05-21 11:50:37

by David Miller

[permalink] [raw]
Subject: Re: that page count overflow thing

From: William Lee Irwin III <[email protected]>
Date: Mon, 21 May 2007 04:37:47 -0700

> This could be awkward with allocation requirements. How about an
> open-addressed hash table? It can be sized in advance so large that
> it never needs to expand, with a very small constant-factor space
> overhead.

I was just thinking of a normal hash table with entries that
looked simply like:

struct page_big_count_hash {
	struct page_big_count_hash *next; /* or list_head or hlist_node etc. */
	struct page *key;
	atomic64_t count;
};
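
And lookup is then completely straightforward, something like this
(bucket array size and names just for illustration):

#include <linux/hash.h>

#define BIG_COUNT_BITS	10

static struct page_big_count_hash *big_count_buckets[1 << BIG_COUNT_BITS];

/* Walk the chain in the hashed bucket until the key matches. */
static struct page_big_count_hash *big_count_lookup(struct page *page)
{
	struct page_big_count_hash *e =
		big_count_buckets[hash_ptr(page, BIG_COUNT_BITS)];

	while (e && e->key != page)
		e = e->next;
	return e;
}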

2007-05-21 12:21:23

by William Lee Irwin III

[permalink] [raw]
Subject: Re: that page count overflow thing

> From: William Lee Irwin III <[email protected]>
> Date: Mon, 21 May 2007 04:37:47 -0700
>> This could be awkward with allocation requirements. How about an
>> open-addressed hash table? It can be sized in advance so large that
>> it never needs to expand, with a very small constant-factor space
>> overhead.

On Mon, May 21, 2007 at 04:50:31AM -0700, David Miller wrote:
> I was just thinking of a normal hash table with entries that
> looked simply like:
> struct page_big_count_hash {
> 	struct page_big_count_hash *next; /* or list_head or hlist_node etc. */
> 	struct page *key;
> 	atomic64_t count;
> };

I guess that could work with a static pool of hashtable elements like
mm/highmem.c uses, but the pointer links seem like such a waste of space.
It'll work, so no big deal. Maybe converting mm/highmem.c to hashing by
open addressing would be a simplification. Not worth disturbing working
code, though.
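
By a static pool I mean something in the spirit of the fixed arrays
in mm/highmem.c, e.g. this kind of thing (untested, names made up),
threading the free list through the same ->next pointers used for
hash chaining:

#define BIG_COUNT_POOL	64

/* All elements live in one static array, never kmalloc'd. */
static struct page_big_count_hash big_count_pool[BIG_COUNT_POOL];
static struct page_big_count_hash *big_count_free;

static void big_count_pool_init(void)
{
	int i;

	for (i = 0; i < BIG_COUNT_POOL; i++) {
		big_count_pool[i].next = big_count_free;
		big_count_free = &big_count_pool[i];
	}
}

static struct page_big_count_hash *big_count_alloc(void)
{
	struct page_big_count_hash *e = big_count_free;

	if (e)
		big_count_free = e->next;
	return e;
}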


-- wli