Running 2.5.53-mm3, I found the following in dmesg. I don't remember
getting anything like this with 2.5.53-mm3.
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:3, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
xmms: page allocation failure. order:5, mode:0x20
xmms: page allocation failure. order:4, mode:0x20
--
khromy ;khromy(at)lnuxlab.ath.cx
On Sun, Dec 29, 2002 at 03:26:10PM -0500, khromy wrote:
> Running 2.5.53-mm3, I found the following in dmesg. I don't remember
> getting anything like this with 2.5.53-mm3.
^^ 2.5.53-mm2
--
khromy ;khromy(at)lnuxlab.ath.cx
khromy wrote:
>
> Running 2.5.53-mm3, I found the following in dmesg. I don't remember
> getting anything like this with 2.5.53-mm3.
>
> xmms: page allocation failure. order:5, mode:0x20
gack. Someone is requesting 128k of memory with GFP_ATOMIC. It fell
afoul of the reduced memory reserves. It deserved to.
Could you please add this patch, and make sure that you have set
CONFIG_KALLSYMS=y? This will find the culprit.
Thanks.
--- 25/mm/page_alloc.c~a	Sun Dec 29 12:40:30 2002
+++ 25-akpm/mm/page_alloc.c	Sun Dec 29 12:40:36 2002
@@ -572,6 +572,7 @@ nopage:
 		printk("%s: page allocation failure."
 			" order:%d, mode:0x%x\n",
 			current->comm, order, gfp_mask);
+		dump_stack();
 	}
 	return NULL;
 }
_
On Sun, Dec 29, 2002 at 12:42:20PM -0800, Andrew Morton wrote:
> khromy wrote:
> >
> > Running 2.5.53-mm3, I found the following in dmesg. I don't remember
> > getting anything like this with 2.5.53-mm3.
> >
> > xmms: page allocation failure. order:5, mode:0x20
>
> gack. Someone is requesting 128k of memory with GFP_ATOMIC. It fell
> afoul of the reduced memory reserves. It deserved to.
>
> Could you please add this patch, and make sure that you have set
> CONFIG_KALLSYMS=y? This will find the culprit.
XFree86: page allocation failure. order:0, mode:0xd0
Call Trace:
[<c012a3dd>] __alloc_pages+0x255/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c012c7e6>] cache_grow+0xb6/0x20c
[<c012c9cf>] __cache_alloc_refill+0x93/0x220
[<c012cb96>] cache_alloc_refill+0x3a/0x58
[<c012cf1d>] kmem_cache_alloc+0x45/0xc8
[<c017e36c>] journal_alloc_journal_head+0x10/0x68
[<c017e458>] journal_add_journal_head+0x80/0x120
[<c0178fc6>] journal_dirty_data+0x4a/0x1bc
[<c016cd5f>] journal_dirty_async_data+0x17/0x6c
[<c016ca84>] walk_page_buffers+0x50/0x74
[<c016d305>] ext3_writepage+0x261/0x33c
[<c016cd48>] journal_dirty_async_data+0x0/0x6c
[<c012f0b6>] shrink_list+0x2b6/0x4bc
[<c0135bc1>] page_referenced+0xbd/0xcc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012fa48>] refill_inactive_zone+0x4b4/0x4dc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012f467>] shrink_cache+0x1ab/0x2d8
[<c012fadc>] shrink_zone+0x6c/0x74
[<c012fb4e>] shrink_caches+0x6a/0x94
[<c012fbf4>] try_to_free_pages+0x7c/0xbc
[<c012a33c>] __alloc_pages+0x1b4/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c014d247>] __pollwait+0x33/0x98
[<c0265066>] unix_poll+0x22/0x90
[<c0224ce9>] sock_poll+0x1d/0x24
[<c014d452>] do_select+0xfe/0x208
[<c014d214>] __pollwait+0x0/0x98
[<c014d8b6>] sys_select+0x332/0x46c
[<c01089af>] syscall_call+0x7/0xb
XFree86: page allocation failure. order:0, mode:0xd0
Call Trace:
[<c012a3dd>] __alloc_pages+0x255/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c012c7e6>] cache_grow+0xb6/0x20c
[<c012c9cf>] __cache_alloc_refill+0x93/0x220
[<c012cb96>] cache_alloc_refill+0x3a/0x58
[<c012cf1d>] kmem_cache_alloc+0x45/0xc8
[<c017e36c>] journal_alloc_journal_head+0x10/0x68
[<c017e458>] journal_add_journal_head+0x80/0x120
[<c0178fc6>] journal_dirty_data+0x4a/0x1bc
[<c016cd5f>] journal_dirty_async_data+0x17/0x6c
[<c016ca84>] walk_page_buffers+0x50/0x74
[<c016d305>] ext3_writepage+0x261/0x33c
[<c016cd48>] journal_dirty_async_data+0x0/0x6c
[<c012f0b6>] shrink_list+0x2b6/0x4bc
[<c0135bc1>] page_referenced+0xbd/0xcc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012fa48>] refill_inactive_zone+0x4b4/0x4dc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012f467>] shrink_cache+0x1ab/0x2d8
[<c012fadc>] shrink_zone+0x6c/0x74
[<c012fb4e>] shrink_caches+0x6a/0x94
[<c012fbf4>] try_to_free_pages+0x7c/0xbc
[<c012a33c>] __alloc_pages+0x1b4/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c014d247>] __pollwait+0x33/0x98
[<c0265066>] unix_poll+0x22/0x90
[<c0224ce9>] sock_poll+0x1d/0x24
[<c014d452>] do_select+0xfe/0x208
[<c014d214>] __pollwait+0x0/0x98
[<c014d8b6>] sys_select+0x332/0x46c
[<c01089af>] syscall_call+0x7/0xb
ENOMEM in journal_alloc_journal_head, retrying.
xmms: page allocation failure. order:0, mode:0xd0
Call Trace:
[<c012a3dd>] __alloc_pages+0x255/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c012c7e6>] cache_grow+0xb6/0x20c
[<c012c9cf>] __cache_alloc_refill+0x93/0x220
[<c012cb96>] cache_alloc_refill+0x3a/0x58
[<c012cf1d>] kmem_cache_alloc+0x45/0xc8
[<c017e36c>] journal_alloc_journal_head+0x10/0x68
[<c017e458>] journal_add_journal_head+0x80/0x120
[<c0178fc6>] journal_dirty_data+0x4a/0x1bc
[<c016cd5f>] journal_dirty_async_data+0x17/0x6c
[<c016ca84>] walk_page_buffers+0x50/0x74
[<c016d305>] ext3_writepage+0x261/0x33c
[<c016cd48>] journal_dirty_async_data+0x0/0x6c
[<c012f0b6>] shrink_list+0x2b6/0x4bc
[<c0135bc1>] page_referenced+0xbd/0xcc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012fa48>] refill_inactive_zone+0x4b4/0x4dc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012f467>] shrink_cache+0x1ab/0x2d8
[<c012fadc>] shrink_zone+0x6c/0x74
[<c012fb4e>] shrink_caches+0x6a/0x94
[<c012fbf4>] try_to_free_pages+0x7c/0xbc
[<c012a33c>] __alloc_pages+0x1b4/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c014d247>] __pollwait+0x33/0x98
[<c021dc2b>] es1371_poll+0xcf/0x20c
[<c014d452>] do_select+0xfe/0x208
[<c014d214>] __pollwait+0x0/0x98
[<c014d8b6>] sys_select+0x332/0x46c
[<c01089af>] syscall_call+0x7/0xb
xmms: page allocation failure. order:0, mode:0xd0
Call Trace:
[<c012a3dd>] __alloc_pages+0x255/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c012c7e6>] cache_grow+0xb6/0x20c
[<c012c9cf>] __cache_alloc_refill+0x93/0x220
[<c012cb96>] cache_alloc_refill+0x3a/0x58
[<c012cf1d>] kmem_cache_alloc+0x45/0xc8
[<c017e36c>] journal_alloc_journal_head+0x10/0x68
[<c017e458>] journal_add_journal_head+0x80/0x120
[<c0178fc6>] journal_dirty_data+0x4a/0x1bc
[<c016cd5f>] journal_dirty_async_data+0x17/0x6c
[<c016ca84>] walk_page_buffers+0x50/0x74
[<c016d305>] ext3_writepage+0x261/0x33c
[<c016cd48>] journal_dirty_async_data+0x0/0x6c
[<c012f0b6>] shrink_list+0x2b6/0x4bc
[<c0135bc1>] page_referenced+0xbd/0xcc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012fa48>] refill_inactive_zone+0x4b4/0x4dc
[<c012f467>] shrink_cache+0x1ab/0x2d8
[<c012fadc>] shrink_zone+0x6c/0x74
[<c012fb4e>] shrink_caches+0x6a/0x94
[<c012fbf4>] try_to_free_pages+0x7c/0xbc
[<c012a33c>] __alloc_pages+0x1b4/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c014d247>] __pollwait+0x33/0x98
[<c021dc2b>] es1371_poll+0xcf/0x20c
[<c014d452>] do_select+0xfe/0x208
[<c014d214>] __pollwait+0x0/0x98
[<c014d8b6>] sys_select+0x332/0x46c
[<c01089af>] syscall_call+0x7/0xb
xmms: page allocation failure. order:0, mode:0xd0
Call Trace:
[<c012a3dd>] __alloc_pages+0x255/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c012c7e6>] cache_grow+0xb6/0x20c
[<c012c9cf>] __cache_alloc_refill+0x93/0x220
[<c012cb96>] cache_alloc_refill+0x3a/0x58
[<c012cf1d>] kmem_cache_alloc+0x45/0xc8
[<c017e36c>] journal_alloc_journal_head+0x10/0x68
[<c017e458>] journal_add_journal_head+0x80/0x120
[<c0178fc6>] journal_dirty_data+0x4a/0x1bc
[<c016cd5f>] journal_dirty_async_data+0x17/0x6c
[<c016ca84>] walk_page_buffers+0x50/0x74
[<c016d305>] ext3_writepage+0x261/0x33c
[<c016cd48>] journal_dirty_async_data+0x0/0x6c
[<c012f0b6>] shrink_list+0x2b6/0x4bc
[<c0135bc1>] page_referenced+0xbd/0xcc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012fa48>] refill_inactive_zone+0x4b4/0x4dc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012f467>] shrink_cache+0x1ab/0x2d8
[<c012fadc>] shrink_zone+0x6c/0x74
[<c012fb4e>] shrink_caches+0x6a/0x94
[<c012fbf4>] try_to_free_pages+0x7c/0xbc
[<c012a33c>] __alloc_pages+0x1b4/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c014d247>] __pollwait+0x33/0x98
[<c0265066>] unix_poll+0x22/0x90
[<c0224ce9>] sock_poll+0x1d/0x24
[<c014da35>] do_pollfd+0x45/0x84
[<c014dad1>] do_poll+0x5d/0xc0
[<c014dc48>] sys_poll+0x114/0x1cc
[<c014d214>] __pollwait+0x0/0x98
[<c01089af>] syscall_call+0x7/0xb
xmms: page allocation failure. order:0, mode:0xd0
Call Trace:
[<c012a3dd>] __alloc_pages+0x255/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c012c7e6>] cache_grow+0xb6/0x20c
[<c012c9cf>] __cache_alloc_refill+0x93/0x220
[<c012cb96>] cache_alloc_refill+0x3a/0x58
[<c012cf1d>] kmem_cache_alloc+0x45/0xc8
[<c017e36c>] journal_alloc_journal_head+0x10/0x68
[<c017e458>] journal_add_journal_head+0x80/0x120
[<c0178fc6>] journal_dirty_data+0x4a/0x1bc
[<c016cd5f>] journal_dirty_async_data+0x17/0x6c
[<c016ca84>] walk_page_buffers+0x50/0x74
[<c016d305>] ext3_writepage+0x261/0x33c
[<c016cd48>] journal_dirty_async_data+0x0/0x6c
[<c012f0b6>] shrink_list+0x2b6/0x4bc
[<c0135bc1>] page_referenced+0xbd/0xcc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012fa48>] refill_inactive_zone+0x4b4/0x4dc
[<c012f467>] shrink_cache+0x1ab/0x2d8
[<c012fadc>] shrink_zone+0x6c/0x74
[<c012fb4e>] shrink_caches+0x6a/0x94
[<c012fbf4>] try_to_free_pages+0x7c/0xbc
[<c012a33c>] __alloc_pages+0x1b4/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c014d247>] __pollwait+0x33/0x98
[<c0265066>] unix_poll+0x22/0x90
[<c0224ce9>] sock_poll+0x1d/0x24
[<c014da35>] do_pollfd+0x45/0x84
[<c014dad1>] do_poll+0x5d/0xc0
[<c014dc48>] sys_poll+0x114/0x1cc
[<c014d214>] __pollwait+0x0/0x98
[<c01089af>] syscall_call+0x7/0xb
kswapd0: page allocation failure. order:0, mode:0xd0
Call Trace:
[<c012a3dd>] __alloc_pages+0x255/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c012c7e6>] cache_grow+0xb6/0x20c
[<c012c9cf>] __cache_alloc_refill+0x93/0x220
[<c012cb96>] cache_alloc_refill+0x3a/0x58
[<c012cf1d>] kmem_cache_alloc+0x45/0xc8
[<c017e36c>] journal_alloc_journal_head+0x10/0x68
[<c017e458>] journal_add_journal_head+0x80/0x120
[<c0178fc6>] journal_dirty_data+0x4a/0x1bc
[<c016cd5f>] journal_dirty_async_data+0x17/0x6c
[<c016ca84>] walk_page_buffers+0x50/0x74
[<c016d305>] ext3_writepage+0x261/0x33c
[<c016cd48>] journal_dirty_async_data+0x0/0x6c
[<c012f0b6>] shrink_list+0x2b6/0x4bc
[<c0135bc1>] page_referenced+0xbd/0xcc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012fa48>] refill_inactive_zone+0x4b4/0x4dc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012f467>] shrink_cache+0x1ab/0x2d8
[<c012fadc>] shrink_zone+0x6c/0x74
[<c012fcee>] balance_pgdat+0xba/0x130
[<c012fe62>] kswapd+0xfe/0x104
[<c012fd64>] kswapd+0x0/0x104
[<c0115a54>] autoremove_wake_function+0x0/0x38
[<c0115a54>] autoremove_wake_function+0x0/0x38
[<c0106dfd>] kernel_thread_helper+0x5/0xc
kswapd0: page allocation failure. order:0, mode:0xd0
Call Trace:
[<c012a3dd>] __alloc_pages+0x255/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c012c7e6>] cache_grow+0xb6/0x20c
[<c012c9cf>] __cache_alloc_refill+0x93/0x220
[<c012cb96>] cache_alloc_refill+0x3a/0x58
[<c012cf1d>] kmem_cache_alloc+0x45/0xc8
[<c017e36c>] journal_alloc_journal_head+0x10/0x68
[<c017e458>] journal_add_journal_head+0x80/0x120
[<c0178fc6>] journal_dirty_data+0x4a/0x1bc
[<c016cd5f>] journal_dirty_async_data+0x17/0x6c
[<c016ca84>] walk_page_buffers+0x50/0x74
[<c016d305>] ext3_writepage+0x261/0x33c
[<c016cd48>] journal_dirty_async_data+0x0/0x6c
[<c012f0b6>] shrink_list+0x2b6/0x4bc
[<c0135bc1>] page_referenced+0xbd/0xcc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012fa48>] refill_inactive_zone+0x4b4/0x4dc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012f467>] shrink_cache+0x1ab/0x2d8
[<c012fadc>] shrink_zone+0x6c/0x74
[<c012fcee>] balance_pgdat+0xba/0x130
[<c012fe62>] kswapd+0xfe/0x104
[<c012fd64>] kswapd+0x0/0x104
[<c0115a54>] autoremove_wake_function+0x0/0x38
[<c0115a54>] autoremove_wake_function+0x0/0x38
[<c0106dfd>] kernel_thread_helper+0x5/0xc
bk: page allocation failure. order:0, mode:0xd0
Call Trace:
[<c012a3dd>] __alloc_pages+0x255/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c012c7e6>] cache_grow+0xb6/0x20c
[<c012c9cf>] __cache_alloc_refill+0x93/0x220
[<c012cb96>] cache_alloc_refill+0x3a/0x58
[<c012cf1d>] kmem_cache_alloc+0x45/0xc8
[<c017e36c>] journal_alloc_journal_head+0x10/0x68
[<c017e458>] journal_add_journal_head+0x80/0x120
[<c0178fc6>] journal_dirty_data+0x4a/0x1bc
[<c016cd5f>] journal_dirty_async_data+0x17/0x6c
[<c016ca84>] walk_page_buffers+0x50/0x74
[<c016d305>] ext3_writepage+0x261/0x33c
[<c016cd48>] journal_dirty_async_data+0x0/0x6c
[<c012f0b6>] shrink_list+0x2b6/0x4bc
[<c0135bc1>] page_referenced+0xbd/0xcc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012fa48>] refill_inactive_zone+0x4b4/0x4dc
[<c012f467>] shrink_cache+0x1ab/0x2d8
[<c012fadc>] shrink_zone+0x6c/0x74
[<c012fb4e>] shrink_caches+0x6a/0x94
[<c012fbf4>] try_to_free_pages+0x7c/0xbc
[<c012a33c>] __alloc_pages+0x1b4/0x264
[<c01322b7>] do_anonymous_page+0xf3/0x24c
[<c0132448>] do_no_page+0x38/0x2d0
[<c0132771>] handle_mm_fault+0x91/0x13c
[<c0112ec2>] do_page_fault+0x132/0x414
[<c0112d90>] do_page_fault+0x0/0x414
[<c011d526>] update_wall_time+0x12/0x3c
[<c01342db>] do_brk+0x10b/0x1dc
[<c0133205>] sys_brk+0xad/0xd8
[<c0109391>] error_code+0x2d/0x38
XFree86: page allocation failure. order:0, mode:0xd0
Call Trace:
[<c012a3dd>] __alloc_pages+0x255/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c012c7e6>] cache_grow+0xb6/0x20c
[<c012c9cf>] __cache_alloc_refill+0x93/0x220
[<c01145ac>] do_schedule+0x268/0x2c8
[<c012cb96>] cache_alloc_refill+0x3a/0x58
[<c012cf1d>] kmem_cache_alloc+0x45/0xc8
[<c017e3bc>] journal_alloc_journal_head+0x60/0x68
[<c017e458>] journal_add_journal_head+0x80/0x120
[<c0178fc6>] journal_dirty_data+0x4a/0x1bc
[<c016cd5f>] journal_dirty_async_data+0x17/0x6c
[<c016ca84>] walk_page_buffers+0x50/0x74
[<c016d305>] ext3_writepage+0x261/0x33c
[<c016cd48>] journal_dirty_async_data+0x0/0x6c
[<c012f0b6>] shrink_list+0x2b6/0x4bc
[<c0135bc1>] page_referenced+0xbd/0xcc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012fa48>] refill_inactive_zone+0x4b4/0x4dc
[<c012e296>] __pagevec_release+0x1a/0x28
[<c012f467>] shrink_cache+0x1ab/0x2d8
[<c012fadc>] shrink_zone+0x6c/0x74
[<c012fb4e>] shrink_caches+0x6a/0x94
[<c012fbf4>] try_to_free_pages+0x7c/0xbc
[<c012a33c>] __alloc_pages+0x1b4/0x264
[<c012a414>] __get_free_pages+0x28/0x60
[<c014d247>] __pollwait+0x33/0x98
[<c0265066>] unix_poll+0x22/0x90
[<c0224ce9>] sock_poll+0x1d/0x24
[<c014d452>] do_select+0xfe/0x208
[<c014d214>] __pollwait+0x0/0x98
[<c014d8b6>] sys_select+0x332/0x46c
[<c01089af>] syscall_call+0x7/0xb
--
khromy ;khromy(at)lnuxlab.ath.cx
khromy wrote:
>
> On Sun, Dec 29, 2002 at 12:42:20PM -0800, Andrew Morton wrote:
> > khromy wrote:
> > >
> > > Running 2.5.53-mm3, I found the following in dmesg. I don't remember
> > > getting anything like this with 2.5.53-mm3.
> > >
> > > xmms: page allocation failure. order:5, mode:0x20
> >
> > gack. Someone is requesting 128k of memory with GFP_ATOMIC. It fell
> > afoul of the reduced memory reserves. It deserved to.
> >
> > Could you please add this patch, and make sure that you have set
> > CONFIG_KALLSYMS=y? This will find the culprit.
>
> XFree86: page allocation failure. order:0, mode:0xd0
> Call Trace:
> [<c012a3dd>] __alloc_pages+0x255/0x264
> [<c012a414>] __get_free_pages+0x28/0x60
> [<c012c7e6>] cache_grow+0xb6/0x20c
> [<c012c9cf>] __cache_alloc_refill+0x93/0x220
> [<c012cb96>] cache_alloc_refill+0x3a/0x58
> [<c012cf1d>] kmem_cache_alloc+0x45/0xc8
> [<c017e36c>] journal_alloc_journal_head+0x10/0x68
> [<c017e458>] journal_add_journal_head+0x80/0x120
Oops, sorry. They're all expected. I'd like to know where
the order-5 failure during xmms usage came from. Were you
using a CD-ROM at the time?
This should tell us, thanks:
--- 25/mm/page_alloc.c~a	Sun Dec 29 15:52:29 2002
+++ 25-akpm/mm/page_alloc.c	Sun Dec 29 15:52:47 2002
@@ -547,6 +547,8 @@ nopage:
 		printk("%s: page allocation failure."
 			" order:%d, mode:0x%x\n",
 			current->comm, order, gfp_mask);
+		if (order > 3)
+			dump_stack();
 	}
 	return NULL;
 }
_
On Sun, Dec 29, 2002 at 04:10:30PM -0800, Andrew Morton wrote:
> khromy wrote:
> >
> > On Sun, Dec 29, 2002 at 03:54:52PM -0800, Andrew Morton wrote:
> > > khromy wrote:
> > > >
> > > > On Sun, Dec 29, 2002 at 12:42:20PM -0800, Andrew Morton wrote:
> > > > > khromy wrote:
> > > > > >
> > > > > > Running 2.5.53-mm3, I found the following in dmesg. I don't remember
> > > > > > getting anything like this with 2.5.53-mm3.
> > > > > >
> > > > > > xmms: page allocation failure. order:5, mode:0x20
> > > > >
> > > > > gack. Someone is requesting 128k of memory with GFP_ATOMIC. It fell
> > > > > afoul of the reduced memory reserves. It deserved to.
> > > > >
> > > > > Could you please add this patch, and make sure that you have set
> > > > > CONFIG_KALLSYMS=y? This will find the culprit.
> > > >
> > > > XFree86: page allocation failure. order:0, mode:0xd0
> > > > Call Trace:
> > > > [<c012a3dd>] __alloc_pages+0x255/0x264
> > > > [<c012a414>] __get_free_pages+0x28/0x60
> > > > [<c012c7e6>] cache_grow+0xb6/0x20c
> > > > [<c012c9cf>] __cache_alloc_refill+0x93/0x220
> > > > [<c012cb96>] cache_alloc_refill+0x3a/0x58
> > > > [<c012cf1d>] kmem_cache_alloc+0x45/0xc8
> > > > [<c017e36c>] journal_alloc_journal_head+0x10/0x68
> > > > [<c017e458>] journal_add_journal_head+0x80/0x120
> > >
> > > Oops, sorry. They're all expected. I'd like to know where
> > > the order-5 failure during xmms usage came from. Were you
> > > using a CD-ROM at the time?
> >
> > Nope, playing mp3s from a harddrive. I can reproduce it by doing some
> > IO or compiling something and then switching back and forth through
> > workspaces really fast at the same time..
>
> Ah. Well could you please add the second patch? That'll find the source.
It didn't apply on top of the one that you sent earlier, so I compiled
with only the second one.
I got this while applying the second one:
patching file mm/page_alloc.c
Hunk #1 succeeded at 572 (offset 25 lines).
And here is dmesg:
xmms: page allocation failure. order:5, mode:0x20
Call Trace:
[<c012a3e7>] __alloc_pages+0x25f/0x26c
[<c012a41c>] __get_free_pages+0x28/0x60
[<c010e36e>] dma_alloc_coherent+0x3e/0x74
[<c021c8ba>] prog_dmabuf+0x7e/0x2b4
[<c021c31d>] set_dac2_rate+0xb5/0xe0
[<c021f01d>] es1371_ioctl+0x10d5/0x140c
[<c012d228>] kmem_cache_free+0x174/0x1b8
[<c014ccf9>] sys_ioctl+0x1fd/0x254
[<c01089af>] syscall_call+0x7/0xb
xmms: page allocation failure. order:5, mode:0x20
Call Trace:
[<c012a3e7>] __alloc_pages+0x25f/0x26c
[<c012a41c>] __get_free_pages+0x28/0x60
[<c010e36e>] dma_alloc_coherent+0x3e/0x74
[<c021c8ba>] prog_dmabuf+0x7e/0x2b4
[<c021c31d>] set_dac2_rate+0xb5/0xe0
[<c021f01d>] es1371_ioctl+0x10d5/0x140c
[<c012d228>] kmem_cache_free+0x174/0x1b8
[<c014ccf9>] sys_ioctl+0x1fd/0x254
[<c01089af>] syscall_call+0x7/0xb
xmms: page allocation failure. order:5, mode:0x20
Call Trace:
[<c012a3e7>] __alloc_pages+0x25f/0x26c
[<c012a41c>] __get_free_pages+0x28/0x60
[<c010e36e>] dma_alloc_coherent+0x3e/0x74
[<c021c8ba>] prog_dmabuf+0x7e/0x2b4
[<c021c31d>] set_dac2_rate+0xb5/0xe0
[<c021f01d>] es1371_ioctl+0x10d5/0x140c
[<c012d228>] kmem_cache_free+0x174/0x1b8
[<c014ccf9>] sys_ioctl+0x1fd/0x254
[<c01089af>] syscall_call+0x7/0xb
xmms: page allocation failure. order:5, mode:0x20
Call Trace:
[<c012a3e7>] __alloc_pages+0x25f/0x26c
[<c012a41c>] __get_free_pages+0x28/0x60
[<c010e36e>] dma_alloc_coherent+0x3e/0x74
[<c021c8ba>] prog_dmabuf+0x7e/0x2b4
[<c021c31d>] set_dac2_rate+0xb5/0xe0
[<c021f01d>] es1371_ioctl+0x10d5/0x140c
[<c012d228>] kmem_cache_free+0x174/0x1b8
[<c014ccf9>] sys_ioctl+0x1fd/0x254
[<c01089af>] syscall_call+0x7/0xb
xmms: page allocation failure. order:5, mode:0x20
Call Trace:
[<c012a3e7>] __alloc_pages+0x25f/0x26c
[<c012a41c>] __get_free_pages+0x28/0x60
[<c010e36e>] dma_alloc_coherent+0x3e/0x74
[<c021c8ba>] prog_dmabuf+0x7e/0x2b4
[<c021c31d>] set_dac2_rate+0xb5/0xe0
[<c021f01d>] es1371_ioctl+0x10d5/0x140c
[<c012d228>] kmem_cache_free+0x174/0x1b8
[<c014ccf9>] sys_ioctl+0x1fd/0x254
[<c01089af>] syscall_call+0x7/0xb
xmms: page allocation failure. order:5, mode:0x20
Call Trace:
[<c012a3e7>] __alloc_pages+0x25f/0x26c
[<c012a41c>] __get_free_pages+0x28/0x60
[<c010e36e>] dma_alloc_coherent+0x3e/0x74
[<c021c8ba>] prog_dmabuf+0x7e/0x2b4
[<c021c31d>] set_dac2_rate+0xb5/0xe0
[<c021f01d>] es1371_ioctl+0x10d5/0x140c
[<c012d228>] kmem_cache_free+0x174/0x1b8
[<c014ccf9>] sys_ioctl+0x1fd/0x254
[<c01089af>] syscall_call+0x7/0xb
xmms: page allocation failure. order:5, mode:0x20
Call Trace:
[<c012a3e7>] __alloc_pages+0x25f/0x26c
[<c012a41c>] __get_free_pages+0x28/0x60
[<c010e36e>] dma_alloc_coherent+0x3e/0x74
[<c021c8ba>] prog_dmabuf+0x7e/0x2b4
[<c021c31d>] set_dac2_rate+0xb5/0xe0
[<c021f01d>] es1371_ioctl+0x10d5/0x140c
[<c012d228>] kmem_cache_free+0x174/0x1b8
[<c014ccf9>] sys_ioctl+0x1fd/0x254
[<c01089af>] syscall_call+0x7/0xb
xmms: page allocation failure. order:5, mode:0x20
Call Trace:
[<c012a3e7>] __alloc_pages+0x25f/0x26c
[<c012a41c>] __get_free_pages+0x28/0x60
[<c010e36e>] dma_alloc_coherent+0x3e/0x74
[<c021c8ba>] prog_dmabuf+0x7e/0x2b4
[<c021c31d>] set_dac2_rate+0xb5/0xe0
[<c021f01d>] es1371_ioctl+0x10d5/0x140c
[<c012d228>] kmem_cache_free+0x174/0x1b8
[<c014ccf9>] sys_ioctl+0x1fd/0x254
[<c01089af>] syscall_call+0x7/0xb
xmms: page allocation failure. order:5, mode:0x20
Call Trace:
[<c012a3e7>] __alloc_pages+0x25f/0x26c
[<c012a41c>] __get_free_pages+0x28/0x60
[<c010e36e>] dma_alloc_coherent+0x3e/0x74
[<c021c8ba>] prog_dmabuf+0x7e/0x2b4
[<c021c31d>] set_dac2_rate+0xb5/0xe0
[<c021f01d>] es1371_ioctl+0x10d5/0x140c
[<c012d228>] kmem_cache_free+0x174/0x1b8
[<c014ccf9>] sys_ioctl+0x1fd/0x254
[<c01089af>] syscall_call+0x7/0xb
--
khromy ;khromy(at)lnuxlab.ath.cx
On Sun, 2002-12-29 at 20:42, Andrew Morton wrote:
> gack. Someone is requesting 128k of memory with GFP_ATOMIC. It fell
> afoul of the reduced memory reserves. It deserved to.
ISA sound I/O. And yes, it really does want the 128K if it can get it on
a slower box. It will try 128/64/32/... so it gets less if there isn't any
DMA RAM around. All the sound drivers work this way because few bits of
sound hardware, even in the PCI world, support scatter-gather.
If the VM can't deal with it, we need to fix the VM. All these
allocations are blocking and can wait a long time.
Please direct any complaints to the hardware vendors.
Alan
khromy wrote:
>
> ...
> And here is dmesg:
>
> xmms: page allocation failure. order:5, mode:0x20
> Call Trace:
> [<c012a3e7>] __alloc_pages+0x25f/0x26c
> [<c012a41c>] __get_free_pages+0x28/0x60
> [<c010e36e>] dma_alloc_coherent+0x3e/0x74
> [<c021c8ba>] prog_dmabuf+0x7e/0x2b4
> [<c021c31d>] set_dac2_rate+0xb5/0xe0
> [<c021f01d>] es1371_ioctl+0x10d5/0x140c
> [<c012d228>] kmem_cache_free+0x174/0x1b8
> [<c014ccf9>] sys_ioctl+0x1fd/0x254
> [<c01089af>] syscall_call+0x7/0xb
>
OK, thanks. The audio driver is trying to allocate a large DMA
buffer and just falls back to a smaller size if it fails.
And dma_alloc_coherent() forces GFP_ATOMIC, which is quite broken
of it.
Let's leave this one as-is. It'll be OK when all the debug code
is pulled out.
On Mon, Dec 30, 2002 at 01:32:26AM +0000, Alan Cox wrote:
> On Sun, 2002-12-29 at 20:42, Andrew Morton wrote:
> > gack. Someone is requesting 128k of memory with GFP_ATOMIC. It fell
> > afoul of the reduced memory reserves. It deserved to.
>
> ISA sound I/O. And yes, it really does want the 128K if it can get it on
> a slower box. It will try 128/64/32/... so it gets less if there isn't any
> DMA RAM around. All the sound drivers work this way because few bits of
> sound hardware, even in the PCI world, support scatter-gather.
This is a PCI sound card.
Bus 0, device 11, function 0:
Multimedia audio controller: Ensoniq 5880 AudioPCI (rev 2).
--
khromy ;khromy(at)lnuxlab.ath.cx
Alan Cox wrote:
>
> On Sun, 2002-12-29 at 20:42, Andrew Morton wrote:
> > gack. Someone is requesting 128k of memory with GFP_ATOMIC. It fell
> > afoul of the reduced memory reserves. It deserved to.
>
> ISA sound I/O. And yes, it really does want the 128K if it can get it on
> a slower box. It will try 128/64/32/... so it gets less if there isn't any
> DMA RAM around. All the sound drivers work this way because few bits of
> sound hardware, even in the PCI world, support scatter-gather.
>
> If the VM can't deal with it - we need to fix the VM.
It'll usually work, because GFP_KERNEL allocations prefer not to
dip into the DMA region.
> All these allocations are blocking and can wait a long time.
But this one isn't blocking: dma_alloc_coherent() is using GFP_ATOMIC|__GFP_DMA.
Now, if we can fix the caller to use
__GFP_WAIT | __GFP_IO | __GFP_HIGHIO | __GFP_FS | __GFP_DMA
then that at least will allow page reclaim.
Then we can remove this restriction in __alloc_pages():
	/*
	 * Don't let big-order allocations loop. Yield for kswapd, try again.
	 */
	if (order <= 3) {
		yield();
		goto rebalance;
	}
and all will be well.
dma_alloc_coherent() should be fixed to take a gfp_mask, and callers
should be updated.
As for permitting direct page reclaim for higher-order allocations: I
just don't know; it's from before my time. Perhaps the VM will livelock.
On Mon, 2002-12-30 at 01:20, khromy wrote:
> > DMA RAM around. All the sound works this way because few bits of sound
> > hardware, even in the PCI world, support scatter gather.
>
> This is a PCI sound card.
>
> Bus 0, device 11, function 0:
> Multimedia audio controller: Ensoniq 5880 AudioPCI (rev 2).
And it doesn't support scatter-gather either, so it does the same thing.