From: Pekka Enberg
Date: Tue, 24 Aug 2010 21:03:38 +0300
To: Stan Hoeppner
CC: Christoph Lameter, Mikael Abrahamsson, Linux Kernel List,
    linux-mm@kvack.org, Mel Gorman, Linux Netdev List
Subject: Re: 2.6.34.1 page allocation failure
Message-ID: <4C74097A.5020504@kernel.org>
In-Reply-To: <4C72F7C6.3020109@hardwarefreak.com>
References: <4C70BFF3.8030507@hardwarefreak.com> <4C724141.8060000@kernel.org> <4C72F7C6.3020109@hardwarefreak.com>
X-Mailing-List: linux-kernel@vger.kernel.org

[ I'm CC'ing netdev. ]

On 24.8.2010 1.35, Stan Hoeppner wrote:
> Pekka Enberg put forth on 8/23/2010 4:37 AM:
>> On 8/23/10 1:40 AM, Christoph Lameter wrote:
>>> On Sun, 22 Aug 2010, Pekka Enberg wrote:
>>>
>>>> In Stan's case, it's an order-1 GFP_ATOMIC allocation but there are
>>>> only order-0 pages available. Mel, any recent page allocator fixes in
>>>> 2.6.35 or 2.6.36-rc1 that Stan/Mikael should test?
>>>
>>> This is the TCP slab? Best fix would be in the page allocator. However,
>>> in this particular case the SLUB allocator would be able to fall back to
>>> an order-0 allocation and still satisfy the request.
>>
>> Looking at the stack trace of the oops, I think Stan has CONFIG_SLAB
>> which doesn't have order-0 fallback.
>
> That is correct. The menuconfig help screen led me to believe the SLAB
> allocator was the "safe" choice:
>
> "CONFIG_SLAB:
> The regular slab allocator that is established and known to work well in
> all environments"
>
> Should I be using SLUB instead? Any downsides to SLUB on an old and
> slow (500 MHz) single-core, dual-CPU box with <512 MB RAM?

I don't think the problem here is SLAB, so it shouldn't matter which one
you use. You might not see the problem with SLUB, though, because it
falls back to order-0 allocations.

> Also, what is the impact of these oopses? Despite the entries in dmesg,
> the system "seems" to be running ok. Or is this simply the calm before
> the impending storm?

The page allocation failure in question is this:

kswapd0: page allocation failure. order:1, mode:0x20
Pid: 139, comm: kswapd0 Not tainted 2.6.34.1 #1
Call Trace:
 [] ? __alloc_pages_nodemask+0x448/0x48a
 [] ? cache_alloc_refill+0x22f/0x422
 [] ? tcp_v4_send_check+0x6e/0xa4
 [] ? kmem_cache_alloc+0x41/0x6a
 [] ? sk_prot_alloc+0x19/0x55
 [] ? sk_clone+0x16/0x1cc
 [] ? inet_csk_clone+0xf/0x80
 [] ? tcp_create_openreq_child+0x1a/0x3c8
 [] ? tcp_v4_syn_recv_sock+0x4b/0x151
 [] ? tcp_check_req+0x209/0x335
 [] ? tcp_v4_do_rcv+0x8d/0x14d
 [] ? tcp_v4_rcv+0x383/0x56d
 [] ? ip_local_deliver+0x76/0xc0
 [] ? ip_rcv+0x3dc/0x3fa
 [] ? ktime_get_real+0xf/0x2b
 [] ? netif_receive_skb+0x219/0x234
 [] ? e100_poll+0x1d0/0x47e
 [] ? net_rx_action+0x58/0xf8
 [] ? __do_softirq+0x78/0xe5
 [] ? do_softirq+0x23/0x27
 [] ? do_IRQ+0x7d/0x8e
 [] ? common_interrupt+0x29/0x30
 [] ? kmem_cache_free+0xbd/0xc5
 [] ? __xfs_inode_set_reclaim_tag+0x29/0x2f
 [] ? destroy_inode+0x1c/0x2b
 [] ? dispose_list+0xaa/0xd0
 [] ? shrink_icache_memory+0x198/0x1c5
 [] ? shrink_slab+0xda/0x12f
 [] ? kswapd+0x468/0x63b
 [] ? isolate_pages_global+0x0/0x1bc
 [] ? autoremove_wake_function+0x0/0x2d
 [] ? complete+0x28/0x36
 [] ? kswapd+0x0/0x63b
 [] ? kthread+0x61/0x66
 [] ? kthread+0x0/0x66
 [] ? kernel_thread_helper+0x6/0x10

It looks to me as if tcp_create_openreq_child() is able to cope with the
situation, so the warning could be harmless. If that's the case, we
should probably stick a __GFP_NOWARN there.
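Something along these lines is what I have in mind. This is only an
untested sketch from memory: I'm assuming the GFP_ATOMIC that ends up in
sk_prot_alloc() is the one passed through the inet_csk_clone() call in
tcp_create_openreq_child(), so the exact call site and layout in 2.6.34
may differ:

struct sock *tcp_create_openreq_child(struct sock *sk,
				      struct request_sock *req,
				      struct sk_buff *skb)
{
	/*
	 * The callers already cope with a NULL return here, so suppress
	 * the page allocation warning for the atomic clone instead of
	 * spamming the log under memory pressure.
	 */
	struct sock *newsk = inet_csk_clone(sk, req,
					    GFP_ATOMIC | __GFP_NOWARN);

	/* ... rest of the function unchanged ... */

	return newsk;
}

Stan, if you can reproduce the failure easily, it would be good to
confirm that connections still come up fine when the allocation fails,
i.e. that the warning really is the only visible symptom.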
			Pekka