2009-04-29 21:11:24

by Hugh Dickins

Subject: [PATCH mmotm] mm: alloc_large_system_hash check order

On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
order >= MAX_ORDER - it's hoping for order 11. alloc_large_system_hash()
had better make its own check on the order.

Signed-off-by: Hugh Dickins <[email protected]>
---
Should probably follow
page-allocator-do-not-sanity-check-order-in-the-fast-path-fix.patch

Cc'ed DaveM and netdev, just in case they're surprised it was asking for
so much, or disappointed it's not getting as much as it was asking for.
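
For readers wondering where order 11 comes from, here is a back-of-envelope
sketch (my own numbers, assuming the x86_64 defaults of 4K pages and
MAX_ORDER == 11; the order_for() helper below merely mirrors the kernel's
get_order() contract and is not taken from the patch):

#include <stdio.h>

#define PAGE_SHIFT	12
#define MAX_ORDER	11

/* Smallest order whose allocation covers 'size' bytes, like get_order(). */
static unsigned int order_for(unsigned long size)
{
	unsigned int order = 0;
	unsigned long pages = (size - 1) >> PAGE_SHIFT;

	while (pages) {
		order++;
		pages >>= 1;
	}
	return order;
}

int main(void)
{
	/* An order-11 request implies a size somewhere in (4MB, 8MB]. */
	unsigned long ehash_bytes = 8UL << 20;

	printf("order %u, warns when >= %d\n", order_for(ehash_bytes), MAX_ORDER);
	return 0;
}

The buddy allocator only serves orders 0..MAX_ORDER-1, i.e. at most 1024
pages (4MB) in a single allocation, so a table bigger than that trips the
warning.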

mm/page_alloc.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

--- 2.6.30-rc3-mm1/mm/page_alloc.c 2009-04-29 21:01:08.000000000 +0100
+++ mmotm/mm/page_alloc.c 2009-04-29 21:12:04.000000000 +0100
@@ -4765,7 +4765,10 @@ void *__init alloc_large_system_hash(con
table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
else {
unsigned long order = get_order(size);
- table = (void*) __get_free_pages(GFP_ATOMIC, order);
+
+ if (order < MAX_ORDER)
+ table = (void *)__get_free_pages(GFP_ATOMIC,
+ order);
/*
* If bucketsize is not a power-of-two, we may free
* some pages at the end of hash table.


2009-04-29 21:30:27

by Andrew Morton

Subject: Re: [PATCH mmotm] mm: alloc_large_system_hash check order

On Wed, 29 Apr 2009 22:09:48 +0100 (BST)
Hugh Dickins <[email protected]> wrote:

> On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
> to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
> order >= MAX_ORDER - it's hoping for order 11. alloc_large_system_hash()
> had better make its own check on the order.
>
> Signed-off-by: Hugh Dickins <[email protected]>
> ---
> Should probably follow
> page-allocator-do-not-sanity-check-order-in-the-fast-path-fix.patch
>
> Cc'ed DaveM and netdev, just in case they're surprised it was asking for
> so much, or disappointed it's not getting as much as it was asking for.
>
> mm/page_alloc.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> --- 2.6.30-rc3-mm1/mm/page_alloc.c 2009-04-29 21:01:08.000000000 +0100
> +++ mmotm/mm/page_alloc.c 2009-04-29 21:12:04.000000000 +0100
> @@ -4765,7 +4765,10 @@ void *__init alloc_large_system_hash(con
> table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
> else {
> unsigned long order = get_order(size);
> - table = (void*) __get_free_pages(GFP_ATOMIC, order);
> +
> + if (order < MAX_ORDER)
> + table = (void *)__get_free_pages(GFP_ATOMIC,
> + order);
> /*
> * If bucketsize is not a power-of-two, we may free
> * some pages at the end of hash table.

yes, the code is a bit odd:

: do {
: size = bucketsize << log2qty;
: if (flags & HASH_EARLY)
: table = alloc_bootmem_nopanic(size);
: else if (hashdist)
: table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
: else {
: unsigned long order = get_order(size);
: table = (void*) __get_free_pages(GFP_ATOMIC, order);
: /*
: * If bucketsize is not a power-of-two, we may free
: * some pages at the end of hash table.
: */
: if (table) {
: unsigned long alloc_end = (unsigned long)table +
: (PAGE_SIZE << order);
: unsigned long used = (unsigned long)table +
: PAGE_ALIGN(size);
: split_page(virt_to_page(table), order);
: while (used < alloc_end) {
: free_page(used);
: used += PAGE_SIZE;
: }
: }
: }
: } while (!table && size > PAGE_SIZE && --log2qty);

In the case where it does the __vmalloc(), the order-11 allocation will
succeed. But in the other cases, the allocation attempt will need to
be shrunk and we end up with a smaller hash table. Is that sensible?

If we want to regularise all three cases, doing

	size = min(size, PAGE_SIZE << (MAX_ORDER - 1));

before starting the loop would be suitable, although the huge
__get_free_pages() might still fail. (But it will then warn, won't it?
And nobody is reporting that).
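
A sketch of what that might look like in practice (my reading only, not a
tested patch): since size is recomputed from log2qty at the top of every
iteration, and the bucket count is derived from log2qty later on, the cap
would probably have to be applied to log2qty itself rather than to size:

	/*
	 * Clamp log2qty up front so that size = bucketsize << log2qty can
	 * never exceed the largest buddy allocation (order MAX_ORDER - 1).
	 * Done before the do/while loop, this regularises all three paths.
	 */
	while (log2qty > 0 &&
	       (bucketsize << log2qty) > (PAGE_SIZE << (MAX_ORDER - 1)))
		log2qty--;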

I was a bit iffy about adding the warning in the first place, but let it
go through because of its potential to lead us to code which isn't doing
what it thinks it's doing, or which is being generally peculiar.
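
For reference, the check being discussed looks roughly like this in the
page allocator slow path (quoted from memory of the form that ended up in
mainline, so the mmotm variant may differ in detail):

	/*
	 * In the slowpath, sanity check order to avoid ever trying to
	 * reclaim >= MAX_ORDER areas, which can never succeed.
	 */
	if (order >= MAX_ORDER) {
		WARN_ON_ONCE(!(gfp_mask & __GFP_NOWARN));
		return NULL;
	}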

2009-04-30 00:25:50

by David Miller

Subject: Re: [PATCH mmotm] mm: alloc_large_system_hash check order

From: Hugh Dickins <[email protected]>
Date: Wed, 29 Apr 2009 22:09:48 +0100 (BST)

> Cc'ed DaveM and netdev, just in case they're surprised it was asking for
> so much, or disappointed it's not getting as much as it was asking for.

This is basically what should be happening, thanks for the note.

2009-04-30 13:26:04

by Mel Gorman

Subject: Re: [PATCH mmotm] mm: alloc_large_system_hash check order

On Wed, Apr 29, 2009 at 10:09:48PM +0100, Hugh Dickins wrote:
> On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
> to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
> order >= MAX_ORDER - it's hoping for order 11. alloc_large_system_hash()
> had better make its own check on the order.
>
> Signed-off-by: Hugh Dickins <[email protected]>

Looks good

Reviewed-by: Mel Gorman <[email protected]>

As I was looking there, it seemed that alloc_large_system_hash() should be
using alloc_pages_exact() instead of having its own "give back the spare
pages at the end of the buffer" logic. If alloc_pages_exact() were used,
the check for order >= MAX_ORDER could be pushed down into
alloc_pages_exact(), where it may catch other unwary callers.
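
For anyone unfamiliar with it, the alloc_pages_exact() contract works
roughly like this (illustration only, with a made-up example_alloc()
helper; it is not part of the patch below): the request is rounded up to a
page order internally, the surplus tail pages are freed straight back, and
the caller gets exactly PAGE_ALIGN(size) usable bytes, or NULL on failure.

	static void *example_alloc(size_t size)
	{
		void *buf = alloc_pages_exact(size, GFP_KERNEL);

		if (buf)
			memset(buf, 0, PAGE_ALIGN(size)); /* every byte usable */
		return buf; /* release later with free_pages_exact(buf, size) */
	}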

How about adding the following patch on top of yours?

==== CUT HERE ====
Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic

alloc_large_system_hash() has logic for freeing the unused pages at the end
of a power-of-two-pages-sized buffer, duplicating what is already in
alloc_pages_exact(). This patch converts alloc_large_system_hash() to use
alloc_pages_exact().

Signed-off-by: Mel Gorman <[email protected]>
---
mm/page_alloc.c | 27 +++++----------------------
1 file changed, 5 insertions(+), 22 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1b3da0f..c94b140 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1942,6 +1942,9 @@ void *alloc_pages_exact(size_t size, gfp_t gfp_mask)
unsigned int order = get_order(size);
unsigned long addr;

+ if (order >= MAX_ORDER)
+ return NULL;
+
addr = __get_free_pages(gfp_mask, order);
if (addr) {
unsigned long alloc_end = addr + (PAGE_SIZE << order);
@@ -4755,28 +4758,8 @@ void *__init alloc_large_system_hash(const char *tablename,
table = alloc_bootmem_nopanic(size);
else if (hashdist)
table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
- else {
- unsigned long order = get_order(size);
-
- if (order < MAX_ORDER)
- table = (void *)__get_free_pages(GFP_ATOMIC,
- order);
- /*
- * If bucketsize is not a power-of-two, we may free
- * some pages at the end of hash table.
- */
- if (table) {
- unsigned long alloc_end = (unsigned long)table +
- (PAGE_SIZE << order);
- unsigned long used = (unsigned long)table +
- PAGE_ALIGN(size);
- split_page(virt_to_page(table), order);
- while (used < alloc_end) {
- free_page(used);
- used += PAGE_SIZE;
- }
- }
- }
+ else
+ table = alloc_pages_exact(PAGE_ALIGN(size), GFP_ATOMIC);
} while (!table && size > PAGE_SIZE && --log2qty);

if (!table)