Return-Path: linux-nfs-owner@vger.kernel.org
Received: from smtp-o-3.desy.de ([131.169.56.156]:35036 "EHLO smtp-o-3.desy.de"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752096AbbAZJbd
	(ORCPT ); Mon, 26 Jan 2015 04:31:33 -0500
Received: from smtp-map-1.desy.de (smtp-map-1.desy.de [131.169.56.66])
	by smtp-o-3.desy.de (DESY-O-3) with ESMTP id 5472F2804A4
	for ; Mon, 26 Jan 2015 10:31:31 +0100 (CET)
Received: from ZITSWEEP4.win.desy.de (zitsweep4.win.desy.de [131.169.97.98])
	by smtp-map-1.desy.de (DESY_MAP_1) with ESMTP id 2F7AC13E8D
	for ; Mon, 26 Jan 2015 10:31:29 +0100 (MET)
Date: Mon, 26 Jan 2015 10:31:29 +0100 (CET)
From: "Mkrtchyan, Tigran"
To: Trond Myklebust
Cc: Olga Kornievskaia , Linux NFS Mailing List
Message-ID: <935803842.386714.1422264689223.JavaMail.zimbra@desy.de>
In-Reply-To: <1366785487.376442.1422133679079.JavaMail.zimbra@desy.de>
References: <130621862.279655.1421851650684.JavaMail.zimbra@desy.de>
	<1421869687.4674.2.camel@primarydata.com>
	<1192395468.285001.1421873884470.JavaMail.zimbra@desy.de>
	<1366785487.376442.1422133679079.JavaMail.zimbra@desy.de>
Subject: Re: Yet another kernel crash in NFS4 state recovery
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Sender: linux-nfs-owner@vger.kernel.org
List-ID:

We rebooted back into a kernel without the fix and it crashed almost
immediately.

----- Original Message -----
> From: "Mkrtchyan, Tigran"
> To: "Trond Myklebust"
> Cc: "Olga Kornievskaia" , "Linux NFS Mailing List"
> Sent: Saturday, January 24, 2015 10:07:59 PM
> Subject: Re: Yet another kernel crash in NFS4 state recovery
> Looks like there are no crashes any more.
>
> Tigran.
>
> ----- Original Message -----
>> From: "Mkrtchyan, Tigran"
>> To: "Trond Myklebust"
>> Cc: "Olga Kornievskaia" , "Linux NFS Mailing List"
>>
>> Sent: Wednesday, January 21, 2015 9:58:04 PM
>> Subject: Re: Yet another kernel crash in NFS4 state recovery
>
>> Hi Trond, Olga,
>>
>> This is really weird. We had no problem until today.
>> Today it started to crash every 7 minutes or so.
>>
>> I will try the fix tomorrow. But I have no idea what has triggered it
>> today.
>>
>> Tigran.
>>
>> ----- Original Message -----
>>> From: "Trond Myklebust"
>>> To: "Olga Kornievskaia"
>>> Cc: "Mkrtchyan, Tigran" , "Linux NFS Mailing List"
>>>
>>> Sent: Wednesday, January 21, 2015 8:48:07 PM
>>> Subject: Re: Yet another kernel crash in NFS4 state recovery
>>
>>> On Wed, 2015-01-21 at 14:09 -0500, Olga Kornievskaia wrote:
>>>> On Wed, Jan 21, 2015 at 1:41 PM, Trond Myklebust
>>>> wrote:
>>>> > On Wed, Jan 21, 2015 at 9:47 AM, Mkrtchyan, Tigran
>>>> > wrote:
>>>> >>
>>>> >>
>>>> >> Now with RHEL7.
>>>> >>
>>>> >> [ 482.016897] BUG: unable to handle kernel NULL pointer dereference at
>>>> >> 000000000000001a
>>>> >> [ 482.017023] IP: [] rpc_peeraddr2str+0x5/0x30 [sunrpc]
>>>> >> [ 482.017023] PGD baefe067 PUD baeff067 PMD 0
>>>> >> [ 482.017023] Oops: 0000 [#1] SMP
>>>> >> [ 482.017023] Modules linked in: nfs_layout_nfsv41_files rpcsec_gss_krb5 nfsv4
>>>> >> dns_resolver nfs fscache ip6t_rpfilter ip6t_REJECT ipt_REJECT xt_conntrack
>>>> >> ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_nat
>>>> >> nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security
>>>> >> ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4
>>>> >> nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security
>>>> >> iptable_raw iptable_filter ip_tables sg ppdev kvm_intel kvm pcspkr serio_raw
>>>> >> virtio_balloon i2c_piix4 parport_pc parport mperf nfsd auth_rpcgss nfs_acl
>>>> >> lockd sunrpc sr_mod cdrom ata_generic pata_acpi ext4 mbcache jbd2 virtio_blk
>>>> >> cirrus syscopyarea sysfillrect sysimgblt drm_kms_helper ttm virtio_net ata_piix
>>>> >> drm libata virtio_pci virtio_ring virtio
>>>> >> [ 482.017023] i2c_core floppy
>>>> >> [ 482.017023] CPU: 0 PID: 2834 Comm: xrootd Not tainted
>>>> >> 3.10.0-123.13.2.el7.x86_64 #1
>>>> >> [ 482.017023] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs
>>>> >> 01/01/2011
>>>> >> [ 482.017023] task: ffff8800b188cfa0 ti: ffff880232484000 task.ti:
>>>> >> ffff880232484000
>>>> >> [ 482.017023] RIP: 0010:[] []
>>>> >> rpc_peeraddr2str+0x5/0x30 [sunrpc]
>>>> >> [ 482.017023] RSP: 0018:ffff880232485708 EFLAGS: 00010246
>>>> >> [ 482.017023] RAX: 000000000001bcb0 RBX: ffff880233ded800 RCX: 0000000000000000
>>>> >> [ 482.017023] RDX: ffffffffa0494078 RSI: 0000000000000000 RDI: ffffffffffffffea
>>>> >> [ 482.017023] RBP: ffff880232485760 R08: ffff880232485740 R09: 0000000000000000
>>>> >> [ 482.017023] R10: 0000000000000000 R11: fffffffffffffff2 R12: ffff8800bac3e690
>>>> >> [ 482.017023] R13: ffff8800bac3e638 R14: 0000000000000000 R15: 0000000000000000
>>>> >> [ 482.017023] FS: 00007f0d84b79700(0000) GS:ffff88023fc00000(0000)
>>>> >> knlGS:0000000000000000
>>>> >> [ 482.017023] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
>>>> >> [ 482.017023] CR2: 000000000000001a CR3: 00000000baefd000 CR4: 00000000000006f0
>>>> >> [ 482.017023] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
>>>> >> [ 482.017023] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
>>>> >> [ 482.017023] Stack:
>>>> >> [ 482.017023] ffffffffa04c79a5 0000000000000000 ffff880232485768
>>>> >> ffffffffa046d858
>>>> >> [ 482.017023] 0000000000000000 ffff8800b188cfa0 ffffffff81086ac0
>>>> >> ffff880232485740
>>>> >> [ 482.017023] ffff880232485740 0000000096605de3 ffff880233ded800
>>>> >> ffff880232485778
>>>> >> [ 482.017023] Call Trace:
>>>> >> [ 482.017023] [] ? nfs4_schedule_state_manager+0x65/0xf0
>>>> >> [nfsv4]
>>>> >> [ 482.017023] [] ?
>>>> >> nfs_wait_client_init_complete.part.6+0x98/0xd0 [nfs]
>>>> >> [ 482.017023] [] ? wake_up_bit+0x30/0x30
>>>> >> [ 482.017023] [] nfs4_schedule_lease_recovery+0x2e/0x60
>>>> >> [nfsv4]
>>>> >> [ 482.017023] [] nfs41_walk_client_list+0x104/0x340 [nfsv4]
>>>> >> [ 482.017023] [] nfs41_discover_server_trunking+0x39/0x40
>>>> >> [nfsv4]
>>>> >> [ 482.017023] [] nfs4_discover_server_trunking+0x7d/0x2e0
>>>> >> [nfsv4]
>>>> >> [ 482.017023] [] nfs4_init_client+0x124/0x2f0 [nfsv4]
>>>> >> [ 482.017023] [] ?
>>>> >> __fscache_acquire_cookie+0x74/0x2a0
>>>> >> [fscache]
>>>> >> [ 482.017023] [] ? __fscache_acquire_cookie+0x74/0x2a0
>>>> >> [fscache]
>>>> >> [ 482.017023] [] ? generic_lookup_cred+0x15/0x20 [sunrpc]
>>>> >> [ 482.017023] [] ? __rpc_init_priority_wait_queue+0x81/0xc0
>>>> >> [sunrpc]
>>>> >> [ 482.017023] [] ? rpc_init_wait_queue+0x13/0x20 [sunrpc]
>>>> >> [ 482.017023] [] ? nfs4_alloc_client+0x189/0x1e0 [nfsv4]
>>>> >> [ 482.017023] [] nfs_get_client+0x26a/0x320 [nfs]
>>>> >> [ 482.017023] [] nfs4_set_ds_client+0x8e/0xe0 [nfsv4]
>>>> >> [ 482.017023] [] nfs4_fl_prepare_ds+0xe9/0x298
>>>> >> [nfs_layout_nfsv41_files]
>>>> >> [ 482.017023] [] filelayout_read_pagelist+0x56/0x170
>>>> >> [nfs_layout_nfsv41_files]
>>>> >> [ 482.017023] [] pnfs_generic_pg_readpages+0xe7/0x270
>>>> >> [nfsv4]
>>>> >> [ 482.017023] [] nfs_pageio_doio+0x19/0x50 [nfs]
>>>> >> [ 482.017023] [] nfs_pageio_complete+0x24/0x30 [nfs]
>>>> >> [ 482.017023] [] nfs_readpages+0x16a/0x1d0 [nfs]
>>>> >> [ 482.017023] [] ? __page_cache_alloc+0x87/0xb0
>>>> >> [ 482.017023] [] __do_page_cache_readahead+0x1cc/0x250
>>>> >> [ 482.017023] [] ondemand_readahead+0x126/0x240
>>>> >> [ 482.017023] [] page_cache_sync_readahead+0x31/0x50
>>>> >> [ 482.017023] [] generic_file_aio_read+0x1ab/0x750
>>>> >> [ 482.017023] [] nfs_file_read+0x71/0xf0 [nfs]
>>>> >> [ 482.017023] [] do_sync_read+0x8d/0xd0
>>>> >> [ 482.017023] [] vfs_read+0x9c/0x170
>>>> >> [ 482.017023] [] SyS_pread64+0x92/0xc0
>>>> >> [ 482.017023] [] system_call_fastpath+0x16/0x1b
>>>> >> [ 482.017023] Code: c3 0f 1f 44 00 00 0f 1f 44 00 00 55 48 c7 47 50 40 72 1d a0
>>>> >> 48 89 e5 5d c3 66 66 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 <48> 8b 47
>>>> >> 30 89 f6 55 48 c7 c2 d8 da 1f a0 48 89 e5 48 8b 84 f0
>>>> >> [ 482.017023] RIP [] rpc_peeraddr2str+0x5/0x30 [sunrpc]
>>>> >> [ 482.017023] RSP
>>>> >> [ 482.017023] CR2: 000000000000001a
>>>> >>
>>>> >>
>>>> >> Looks like clp->cl_rpcclient points to nowhere when nfs4_schedule_state_manager
>>>> >> is called.
>>>> >>
>>>> >
>>>> > I'm guessing
>>>> >
>>>> > https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=080af20cc945d110f9912d01cf6b66f94a375b8d
>>>> >
>>>>
>>>> The Oops is seen even with that patch. As it was explained to me, in the
>>>> commit you pointed at the whole client structure is null. In this case
>>>> it's the rpcclient structure that's invalid.
>>>
>>>
>>> Ah. You are right... Tigran, how about the following patch?
>>>
>>> Cheers
>>> Trond
>>> 8<---------------------------------------------------------------------
>>> From eb8720a31e1d36415c7377f287d5d217540830c3 Mon Sep 17 00:00:00 2001
>>> From: Trond Myklebust
>>> Date: Wed, 21 Jan 2015 14:37:44 -0500
>>> Subject: [PATCH] NFSv4.1: Fix an Oops in nfs41_walk_client_list
>>>
>>> If we start state recovery on a client that failed to initialise correctly,
>>> then we are very likely to Oops.
>>>
>>> Reported-by: "Mkrtchyan, Tigran"
>>> Link:
>>> http://lkml.kernel.org/r/130621862.279655.1421851650684.JavaMail.zimbra@desy.de
>>> Cc: stable@vger.kernel.org
>>> Signed-off-by: Trond Myklebust
>>> ---
>>>  fs/nfs/nfs4client.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/fs/nfs/nfs4client.c b/fs/nfs/nfs4client.c
>>> index 953daa44a282..706ad10b8186 100644
>>> --- a/fs/nfs/nfs4client.c
>>> +++ b/fs/nfs/nfs4client.c
>>> @@ -639,7 +639,7 @@ int nfs41_walk_client_list(struct nfs_client *new,
>>>  			prev = pos;
>>>
>>>  			status = nfs_wait_client_init_complete(pos);
>>> -			if (status == 0) {
>>> +			if (pos->cl_cons_state == NFS_CS_SESSION_INITING) {
>>>  				nfs4_schedule_lease_recovery(pos);
>>>  				status = nfs4_wait_clnt_recover(pos);
>>>  			}
>>> --
>>> 2.1.0
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
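
A note on the one-line change in the patch quoted above: it gates lease
recovery on the client's initialisation state (cl_cons_state) rather than on
the return value of nfs_wait_client_init_complete(), which only reports that
initialisation has finished, not that it succeeded; on a client whose setup
failed, cl_rpcclient never becomes usable, which matches the bad pointer the
oops shows in rpc_peeraddr2str(). The sketch below is a minimal, stand-alone
userspace model of that control-flow difference, not kernel code; every
MODEL_*/model_* name, the stub wait function, and the field types are
simplified assumptions made purely for illustration.

/*
 * Minimal userspace model of the check changed in the patch above.
 * Everything prefixed MODEL_/model_ is a simplified stand-in invented
 * for illustration; it is not the real kernel API.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

/* model: rough analogue of the NFS_CS_* initialisation states */
enum model_cons_state {
	MODEL_CS_READY = 0,
	MODEL_CS_INITING = 1,
	MODEL_CS_SESSION_INITING = 2,
};

/* model: a failed setup leaves the rpc client as an ERR_PTR-style value */
#define MODEL_ERR_PTR(err) ((void *)(intptr_t)(err))

struct model_client {
	int   cl_cons_state;	/* negative once initialisation has failed */
	void *cl_rpcclient;	/* never becomes a valid pointer on failure */
};

/* model: the wait reports "initialisation finished", not "it succeeded" */
static int model_wait_client_init_complete(const struct model_client *clp)
{
	(void)clp;
	return 0;
}

static void model_schedule_lease_recovery(const struct model_client *clp)
{
	/* the real path reaches nfs4_schedule_state_manager(), which uses
	 * clp->cl_rpcclient; on a failed client that pointer is garbage */
	printf("recovery scheduled, cl_rpcclient=%p\n", clp->cl_rpcclient);
}

int main(void)
{
	struct model_client broken = {
		.cl_cons_state = -EINVAL,
		.cl_rpcclient  = MODEL_ERR_PTR(-EINVAL),
	};
	int status = model_wait_client_init_complete(&broken);

	/* old check: "the wait finished" is taken to mean "safe to recover" */
	if (status == 0)
		model_schedule_lease_recovery(&broken);	/* would oops in the kernel */

	/* new check: only a client still bringing up its session is recovered */
	if (broken.cl_cons_state == MODEL_CS_SESSION_INITING)
		model_schedule_lease_recovery(&broken);
	else
		printf("new check: failed client is skipped\n");

	return 0;
}

The model obviously cannot reproduce the crash itself; it only shows that the
old condition fires for a client whose initialisation has already failed,
while the new condition skips it.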