Date: Thu, 4 Jan 2007 14:57:47 -0800
From: Andrew Morton
To: Bernhard Schmidt
Cc: netfilter-devel@lists.netfilter.org, linux-kernel@vger.kernel.org,
 netdev@vger.kernel.org
Subject: Re: [Bug] OOPS with nf_conntrack_ipv6, probably fragmented UDPv6
Message-Id: <20070104145747.280d5928.akpm@osdl.org>
In-Reply-To: <459D322F.5010707@birkenwald.de>
References: <459D322F.5010707@birkenwald.de>

On Thu, 04 Jan 2007 17:58:23 +0100 Bernhard Schmidt wrote:

> Hi,
>
> I've hit another kernel oops with 2.6.20-rc3 on the i386 platform. It is
> reproducible: as soon as I load nf_conntrack_ipv6 and try to send
> something large (scp or so) inside an OpenVPN tunnel from my client
> (patched with UDPv6 transport), the router (another box) oopses.
>
> tcpdump suggests the problem appears as soon as my client sends
> fragmented UDPv6 packets towards the destination. It does not happen
> when nf_conntrack_ipv6 is not loaded.
> This is the OOPS as dumped from the serial console:
>
> heimdall login: Oops: 0000 [#1]
> Modules linked in: sit sch_red sch_htb pppoe pppox ppp_generic slhc
>  xt_CLASSIFY ipt_TOS xt_length ipt_tos ipt_TCPMSS xt_tcpudp
>  ipt_MASQUERADE xt_state iptable_mangle iptable_filter
>  iptable_nat nf_nat nf_conntrack_ipv4 ip_tables x_tables
>  nf_conntrack_ipv6 nf_conntrack nfnetlink
> CPU:    0
> EIP:    0060:[<00000001>]    Not tainted VLI
> EFLAGS: 00010246   (2.6.20-rc3 #2)
> EIP is at 0x1
> eax: cd215bc0   ebx: cd1f3160   ecx: cc59002a   edx: cd215bc0
> esi: cd215bc0   edi: cd215bc0   ebp: 00000000   esp: c030bd3c
> ds: 007b   es: 007b   ss: 0068
> Process swapper (pid: 0, ti=c030a000 task=c02e93a0 task.ti=c030a000)
> Stack: c0212cc4 00000004 cc83f160 cd2130c0 cd215bc0 cd2130c0 cd215bc0 c021734b
>        c030bdb4 c0307a60 0000000a cceee800 cceee800 cd215bc0 cd1f3160 00000000
>        c021896b c0307a60 cd215bc0 cd215bc0 cceee800 cd1f3160 c025f1c6 00000000
> Call Trace:
>  [] __kfree_skb+0x84/0xe0
>  [] dev_hard_start_xmit+0x1bb/0x1d0
>  [] dev_queue_xmit+0x11b/0x1b0
>  [] ip6_output2+0x276/0x2b0
>  [] ip6_output_finish+0x0/0xf0
>  [] ip6_output+0x90a/0x940
>  [] cache_alloc_refill+0x2c5/0x3f0
>  [] pskb_expand_head+0xdd/0x130
>  [] ip6_forward+0x465/0x4b0
>  [] ip6_rcv_finish+0x16/0x30
>  [] nf_ct_frag6_output+0x86/0xb0 [nf_conntrack_ipv6]
>  [] ip6_rcv_finish+0x0/0x30
>  [] ipv6_defrag+0x3b/0x50 [nf_conntrack_ipv6]
>  [] ip6_rcv_finish+0x0/0x30
>  [] nf_iterate+0x38/0x70
>  [] ip6_rcv_finish+0x0/0x30
>  [] nf_hook_slow+0x4d/0xc0
>  [] ip6_rcv_finish+0x0/0x30
>  [] ipv6_rcv+0x1e0/0x250
>  [] ip6_rcv_finish+0x0/0x30
>  [] netif_receive_skb+0x1a8/0x200
>  [] process_backlog+0x6e/0xe0
>  [] net_rx_action+0x52/0xd0
>  [] __do_softirq+0x35/0x80
>  [] do_softirq+0x22/0x30
>  [] do_IRQ+0x5e/0x70
>  [] common_interrupt+0x23/0x30
>  [] default_idle+0x0/0x40
>  [] default_idle+0x27/0x40
>  [] cpu_idle+0x37/0x50
>  [] start_kernel+0x266/0x270
>  [] unknown_bootoption+0x0/0x210
> =======================
> Code: Bad EIP value.
> EIP: [<00000001>] 0x1 SS:ESP 0068:c030bd3c
> <0>Kernel panic - not syncing: Fatal exception in interrupt
> <0>Rebooting in 20 seconds..<4>atkbd.c: Spurious ACK on
> isa0060/serio0. Some program might be trying access hardware directly.

At a guess I'd say that skb->nfct->destroy has the value 0x00000001. Not a
good function address. Presumably it is supposed to be zero...