2007-05-18 09:55:09

by Eric Dumazet

Subject: [PATCH] MM : alloc_large_system_hash() can free some memory for non power-of-two bucketsize

alloc_large_system_hash() is called at boot time to allocate space for several large hash tables.

Lately, the TCP hash table was changed and its bucketsize is no longer a power of two.

On most setups, alloc_large_system_hash() allocates one big page (order > 0) with __get_free_pages(GFP_ATOMIC, order). This single high-order page has a power-of-two size, bigger than the needed size.

We can free all the pages that won't be used by the hash table.

On a 1GB i386 machine, this patch saves 128 KB of LOWMEM memory.

TCP established hash table entries: 32768 (order: 6, 393216 bytes)

Signed-off-by: Eric Dumazet <[email protected]>
---
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ae96dd8..2e0ba08 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3350,6 +3350,20 @@ void *__init alloc_large_system_hash(const char *tablename,
 		for (order = 0; ((1UL << order) << PAGE_SHIFT) < size; order++)
 			;
 		table = (void*) __get_free_pages(GFP_ATOMIC, order);
+		/*
+		 * If bucketsize is not a power-of-two, we may free
+		 * some pages at the end of hash table.
+		 */
+		if (table) {
+			unsigned long alloc_end = (unsigned long)table +
+					(PAGE_SIZE << order);
+			unsigned long used = (unsigned long)table +
+					PAGE_ALIGN(size);
+			while (used < alloc_end) {
+				free_page(used);
+				used += PAGE_SIZE;
+			}
+		}
 	}
 } while (!table && size > PAGE_SIZE && --log2qty);


2007-05-18 18:21:20

by Christoph Lameter

[permalink] [raw]
Subject: Re: [PATCH] MM : alloc_large_system_hash() can free some memory for non power-of-two bucketsize

On Fri, 18 May 2007, Eric Dumazet wrote:

> table = (void*) __get_free_pages(GFP_ATOMIC, order);

ATOMIC? Is there some reason why we need atomic here?

> + /*
> + * If bucketsize is not a power-of-two, we may free
> + * some pages at the end of hash table.
> + */
> + if (table) {
> + unsigned long alloc_end = (unsigned long)table +
> + (PAGE_SIZE << order);
> + unsigned long used = (unsigned long)table +
> + PAGE_ALIGN(size);
> + while (used < alloc_end) {
> + free_page(used);

Isn't this going to interfere with the kernel_map_pages debug stuff?

2007-05-19 08:40:34

by Andrew Morton

Subject: Re: [PATCH] MM : alloc_large_system_hash() can free some memory for non power-of-two bucketsize

On Fri, 18 May 2007 11:54:54 +0200 Eric Dumazet <[email protected]> wrote:

> alloc_large_system_hash() is called at boot time to allocate space for several large hash tables.
>
> Lately, TCP hash table was changed and its bucketsize is not a power-of-two anymore.
>
> On most setups, alloc_large_system_hash() allocates one big page (order > 0) with __get_free_pages(GFP_ATOMIC, order). This single high_order page has a power-of-two size, bigger than the needed size.

Watch the 200-column text, please.

> We can free all pages that wont be used by the hash table.
>
> On a 1GB i386 machine, this patch saves 128 KB of LOWMEM memory.
>
> TCP established hash table entries: 32768 (order: 6, 393216 bytes)
>
> Signed-off-by: Eric Dumazet <[email protected]>
> ---
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ae96dd8..2e0ba08 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3350,6 +3350,20 @@ void *__init alloc_large_system_hash(const char *tablename,
> for (order = 0; ((1UL << order) << PAGE_SHIFT) < size; order++)
> ;
> table = (void*) __get_free_pages(GFP_ATOMIC, order);
> + /*
> + * If bucketsize is not a power-of-two, we may free
> + * some pages at the end of hash table.
> + */
> + if (table) {
> + unsigned long alloc_end = (unsigned long)table +
> + (PAGE_SIZE << order);
> + unsigned long used = (unsigned long)table +
> + PAGE_ALIGN(size);
> + while (used < alloc_end) {
> + free_page(used);
> + used += PAGE_SIZE;
> + }
> + }
> }
> } while (!table && size > PAGE_SIZE && --log2qty);
>

It went BUG.

static inline int put_page_testzero(struct page *page)
{
VM_BUG_ON(atomic_read(&page->_count) == 0);
return atomic_dec_and_test(&page->_count);
}

http://userweb.kernel.org/~akpm/s5000523.jpg
http://userweb.kernel.org/~akpm/config-vmm.txt

2007-05-19 18:08:17

by Eric Dumazet

Subject: Re: [PATCH] MM : alloc_large_system_hash() can free some memory for non power-of-two bucketsize

Andrew Morton wrote:
> On Fri, 18 May 2007 11:54:54 +0200 Eric Dumazet <[email protected]> wrote:
>
>> alloc_large_system_hash() is called at boot time to allocate space for several large hash tables.
>>
>> Lately, TCP hash table was changed and its bucketsize is not a power-of-two anymore.
>>
>> On most setups, alloc_large_system_hash() allocates one big page (order > 0) with __get_free_pages(GFP_ATOMIC, order). This single high_order page has a power-of-two size, bigger than the needed size.
>
> Watch the 200-column text, please.
>
>> We can free all pages that wont be used by the hash table.
>>
>> On a 1GB i386 machine, this patch saves 128 KB of LOWMEM memory.
>>
>> TCP established hash table entries: 32768 (order: 6, 393216 bytes)
>>
>> Signed-off-by: Eric Dumazet <[email protected]>
>> ---
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index ae96dd8..2e0ba08 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -3350,6 +3350,20 @@ void *__init alloc_large_system_hash(const char *tablename,
>> for (order = 0; ((1UL << order) << PAGE_SHIFT) < size; order++)
>> ;
>> table = (void*) __get_free_pages(GFP_ATOMIC, order);
>> + /*
>> + * If bucketsize is not a power-of-two, we may free
>> + * some pages at the end of hash table.
>> + */
>> + if (table) {
>> + unsigned long alloc_end = (unsigned long)table +
>> + (PAGE_SIZE << order);
>> + unsigned long used = (unsigned long)table +
>> + PAGE_ALIGN(size);
>> + while (used < alloc_end) {
>> + free_page(used);
>> + used += PAGE_SIZE;
>> + }
>> + }
>> }
>> } while (!table && size > PAGE_SIZE && --log2qty);
>>
>
> It went BUG.
>
> static inline int put_page_testzero(struct page *page)
> {
> VM_BUG_ON(atomic_read(&page->_count) == 0);
> return atomic_dec_and_test(&page->_count);
> }
>
> http://userweb.kernel.org/~akpm/s5000523.jpg
> http://userweb.kernel.org/~akpm/config-vmm.txt

I see :(

Maybe David has an idea how this can be done properly?

ref : http://marc.info/?l=linux-netdev&m=117706074825048&w=2


2007-05-19 18:22:58

by William Lee Irwin III

Subject: Re: [PATCH] MM : alloc_large_system_hash() can free some memory for non power-of-two bucketsize

On Fri, May 18, 2007 at 11:54:54AM +0200, Eric Dumazet wrote:
> alloc_large_system_hash() is called at boot time to allocate space
> for several large hash tables.
> Lately, TCP hash table was changed and its bucketsize is not a
> power-of-two anymore.
> On most setups, alloc_large_system_hash() allocates one big page
> (order > 0) with __get_free_pages(GFP_ATOMIC, order). This single
> high_order page has a power-of-two size, bigger than the needed size.
> We can free all pages that wont be used by the hash table.
> On a 1GB i386 machine, this patch saves 128 KB of LOWMEM memory.
> TCP established hash table entries: 32768 (order: 6, 393216 bytes)

The proper way to do this is to convert the large system hashtable
users to use some data structure / algorithm other than hashing by
separate chaining.


-- wli

2007-05-19 18:41:59

by Eric Dumazet

Subject: Re: [PATCH] MM : alloc_large_system_hash() can free some memory for non power-of-two bucketsize

William Lee Irwin III wrote:
> On Fri, May 18, 2007 at 11:54:54AM +0200, Eric Dumazet wrote:
>> alloc_large_system_hash() is called at boot time to allocate space
>> for several large hash tables.
>> Lately, TCP hash table was changed and its bucketsize is not a
>> power-of-two anymore.
>> On most setups, alloc_large_system_hash() allocates one big page
>> (order > 0) with __get_free_pages(GFP_ATOMIC, order). This single
>> high_order page has a power-of-two size, bigger than the needed size.
>> We can free all pages that wont be used by the hash table.
>> On a 1GB i386 machine, this patch saves 128 KB of LOWMEM memory.
>> TCP established hash table entries: 32768 (order: 6, 393216 bytes)
>
> The proper way to do this is to convert the large system hashtable
> users to use some data structure / algorithm other than hashing by
> separate chaining.

No thanks. This was already discussed to death on netdev. To date, hash tables
are a good compromise.

I don't mind losing part of the memory; I prefer to keep good performance when
handling 1,000,000 or more TCP sessions.

2007-05-19 18:54:46

by David Miller

Subject: Re: [PATCH] MM : alloc_large_system_hash() can free some memory for non power-of-two bucketsize

From: Eric Dumazet <[email protected]>
Date: Sat, 19 May 2007 20:07:11 +0200

> Maybe David has an idea how this can be done properly ?
>
> ref : http://marc.info/?l=linux-netdev&m=117706074825048&w=2

You need to use __GFP_COMP or similar to make this splitting+freeing
thing work.

Otherwise the individual pages don't have page references, only
the head page of the high-order page will.

2007-05-19 20:38:21

by Eric Dumazet

Subject: Re: [PATCH] MM : alloc_large_system_hash() can free some memory for non power-of-two bucketsize

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ae96dd8..7c219eb 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3350,6 +3350,21 @@ void *__init alloc_large_system_hash(const char *tablename,
 		for (order = 0; ((1UL << order) << PAGE_SHIFT) < size; order++)
 			;
 		table = (void*) __get_free_pages(GFP_ATOMIC, order);
+		/*
+		 * If bucketsize is not a power-of-two, we may free
+		 * some pages at the end of hash table.
+		 */
+		if (table) {
+			unsigned long alloc_end = (unsigned long)table +
+					(PAGE_SIZE << order);
+			unsigned long used = (unsigned long)table +
+					PAGE_ALIGN(size);
+			split_page(virt_to_page(table), order);
+			while (used < alloc_end) {
+				free_page(used);
+				used += PAGE_SIZE;
+			}
+		}
 	}
 } while (!table && size > PAGE_SIZE && --log2qty);


Attachments:
alloc_large.patch (823.00 B)

2007-05-21 08:11:59

by William Lee Irwin III

Subject: Re: [PATCH] MM : alloc_large_system_hash() can free some memory for non power-of-two bucketsize

William Lee Irwin III wrote:
>> The proper way to do this is to convert the large system hashtable
>> users to use some data structure / algorithm other than hashing by
>> separate chaining.

On Sat, May 19, 2007 at 08:41:01PM +0200, Eric Dumazet wrote:
> No thanks. This was already discussed to death on netdev. To date, hash
> tables are a good compromise.
> I dont mind losing part of memory, I prefer to keep good performance when
> handling 1.000.000 or more tcp sessions.

The data structures perform well enough, but I suppose it's not worth
pushing the issue this way.


-- wli