From: "Alexander Beregalov"
To: linux-next@vger.kernel.org, linux-mm@kvack.org, LKML, xfs@oss.sgi.com
Subject: next-20081209: pdflush: page allocation failure (xfs)
Date: Tue, 9 Dec 2008 15:54:42 +0300

I got this message while compiling the kernel (the kernel was already
tainted by a previous warning). x86_64, 2 GB of RAM.

pdflush: page allocation failure. order:0, mode:0x4000
Pid: 30415, comm: pdflush Tainted: G W 2.6.28-rc7-next-20081209 #3
Call Trace:
 [] __alloc_pages_internal+0x469/0x488
 [] ? bvec_alloc_bs+0xdc/0x11a
 [] alloc_slab_page+0x20/0x26
 [] __slab_alloc+0x26c/0x596
 [] ? bvec_alloc_bs+0xdc/0x11a
 [] ? bvec_alloc_bs+0xdc/0x11a
 [] kmem_cache_alloc+0x7b/0xbe
 [] bvec_alloc_bs+0xdc/0x11a
 [] bio_alloc_bioset+0xa9/0x101
 [] bio_alloc+0x10/0x1f
 [] xfs_alloc_ioend_bio+0x23/0x52
 [] xfs_submit_ioend+0x56/0xd4
 [] xfs_page_state_convert+0x5e9/0x642
 [] ? xfs_count_page_state+0x97/0xb6
 [] xfs_vm_writepage+0xbe/0xf7
 [] __writepage+0x15/0x3b
 [] write_cache_pages+0x1cd/0x331
 [] ? __writepage+0x0/0x3b
 [] generic_writepages+0x22/0x28
 [] xfs_vm_writepages+0x45/0x4e
 [] do_writepages+0x2b/0x3b
 [] __writeback_single_inode+0x186/0x2fa
 [] ? generic_sync_sb_inodes+0x2bc/0x30a
 [] generic_sync_sb_inodes+0x22c/0x30a
 [] writeback_inodes+0x9d/0xf4
 [] wb_kupdate+0xa3/0x11e
 [] pdflush+0x11d/0x1d0
 [] ? wb_kupdate+0x0/0x11e
 [] ? pdflush+0x0/0x1d0
 [] kthread+0x49/0x76
 [] child_rip+0xa/0x20
 [] ? finish_task_switch+0x0/0xb9
 [] ? restore_args+0x0/0x30
 [] ? kthread+0x0/0x76
 [] ? child_rip+0x0/0x20
Mem-Info:
DMA per-cpu:
CPU    0: hi:    0, btch:   1 usd:   0
CPU    1: hi:    0, btch:   1 usd:   0
CPU    2: hi:    0, btch:   1 usd:   0
CPU    3: hi:    0, btch:   1 usd:   0
DMA32 per-cpu:
CPU    0: hi:  186, btch:  31 usd: 109
CPU    1: hi:  186, btch:  31 usd: 154
CPU    2: hi:  186, btch:  31 usd:  76
CPU    3: hi:  186, btch:  31 usd:  39
Active_anon:46892 active_file:71530 inactive_anon:10315
 inactive_file:243246 unevictable:0 dirty:15089 writeback:1402 unstable:0
 free:1877 slab:116494 mapped:672 pagetables:401 bounce:0
DMA free:1452kB min:40kB low:48kB high:60kB active_anon:0kB inactive_anon:0kB
 active_file:0kB inactive_file:0kB unevictable:0kB present:15072kB
 pages_scanned:0 all_unreclaimable? yes
lowmem_reserve[]: 0 1975 1975 1975
DMA32 free:6056kB min:5664kB low:7080kB high:8496kB active_anon:187568kB
 inactive_anon:41260kB active_file:286120kB inactive_file:972984kB
 unevictable:0kB present:2023256kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 1*8kB 0*16kB 1*32kB 2*64kB 2*128kB 2*256kB 1*512kB 0*1024kB
 0*2048kB 0*4096kB = 1452kB
DMA32: 204*4kB 59*8kB 6*16kB 4*32kB 1*64kB 1*128kB 1*256kB 0*512kB 0*1024kB
 0*2048kB 1*4096kB = 6056kB
314797 total pagecache pages
0 pages in swap cache
Swap cache stats: add 0, delete 0, find 0/0
Free swap  = 3911788kB
Total swap = 3911788kB
523088 pages RAM
22877 pages reserved
224954 pages shared
276682 pages non-shared
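
For context on where the failure is raised: pdflush is flushing dirty XFS
pages, and the order-0 allocation comes from bio_alloc()/bvec_alloc_bs()
when XFS builds a bio for the ioend. From memory, the 2.6.28-era helper in
fs/xfs/linux-2.6/xfs_aops.c looks roughly like the sketch below (written
from memory, not quoted from the linux-next tree, so details may differ):

/*
 * Rough sketch of xfs_alloc_ioend_bio(), reconstructed from memory of the
 * 2.6.28-era code.  bio_alloc() goes through bio_alloc_bioset() and
 * bvec_alloc_bs(), which is where the page allocator failure in the trace
 * above is reported.
 */
STATIC struct bio *
xfs_alloc_ioend_bio(
	struct buffer_head	*bh)
{
	struct bio	*bio;
	int		nvecs = bio_get_nr_vecs(bh->b_bdev);

	/* Retry with fewer bio vecs if the larger request cannot be met. */
	do {
		bio = bio_alloc(GFP_NOIO, nvecs);
		nvecs >>= 1;
	} while (!bio && nvecs > 0);

	bio->bi_bdev = bh->b_bdev;
	bio->bi_sector = bh->b_blocknr * (bh->b_size >> 9);
	return bio;
}

If I am reading the path right, the bvec allocation is backed by a mempool,
so a single order-0 failure like this should normally be absorbed by the
mempool reserve rather than fail the writeback, but the warning still shows
up in the log.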