2001-07-19 23:53:59

by Ragnar Kjørstad

Subject: Re: Busy inodes after umount

On Thu, Jul 19, 2001 at 04:22:07PM -0400, Christian, Chip wrote:
> I found the same thing happening. Tracked it down in our case to using fdisk to re-read the disk size before mounting. Replaced it with "blockdev --rereadpt" and the problem seems to have gone away. YMMV.

I've now been able to reproduce:

* make a filesystem
* mount it
* export it (nfs)
* mount on remote machine
* lock file (fcntl)
* unexport
* unmount

Then you get the VFS message about self-destruct. Tested with both ext2
and xfs.

The lock is still present in /proc/locks after the umount.
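For reference, the fcntl lock in the recipe above is an ordinary POSIX lock; here is a minimal Python sketch (my own illustration, not from the thread) of how a held lock shows up in /proc/locks, keyed by the device and inode of the locked file:

```python
import fcntl
import os
import tempfile

# Take an exclusive POSIX (fcntl) lock on a scratch file -- the same
# kind of lock that lockd holds on the server on behalf of an NFS client.
f = tempfile.TemporaryFile()
fcntl.lockf(f, fcntl.LOCK_EX)

# While the lock is held, /proc/locks lists it, keyed by the
# major:minor:inode of the locked file (third field from the end).
ino = os.fstat(f.fileno()).st_ino
with open("/proc/locks") as locks:
    held = any(line.split()[-3].endswith(f":{ino}") for line in locks)

# Releasing the lock (or closing the file) removes the entry; in the
# bug described above, the entry survives the umount instead.
fcntl.lockf(f, fcntl.LOCK_UN)
```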

With ext2 I can remount the filesystem successfully, but with XFS I get
the message about duplicate UUIDs and the mount fails. I believe this is
a different problem from the one you were experiencing (and blockdev
doesn't help for me).

I suppose this is a generic kernel bug?



--
Ragnar Kjorstad
Big Storage


> [root@ha2 /root]# mkfs -t xfs -f /dev/sdb1
> meta-data=/dev/sdb1      isize=256    agcount=51, agsize=262144 blks
> data     =               bsize=4096   blocks=13305828, imaxpct=25
>          =               sunit=0      swidth=0 blks, unwritten=0
> naming   =version 2      bsize=4096
> log      =internal log   bsize=4096   blocks=1624
> realtime =none           extsz=65536  blocks=0, rtextents=0
> [root@ha2 /root]# mount -t xfs /dev/sdb1 /mnt/raid/
> [root@ha2 /root]# umount /mnt/raid/
> [root@ha2 /root]# mount -t xfs /dev/sdb1 /mnt/raid/
> mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
> or too many mounted file systems
>
>
> From /var/log/messages:
> Jul 19 12:27:15 ha2 kernel: Start mounting filesystem: sd(8,17)
> Jul 19 12:27:16 ha2 kernel: Ending clean XFS mount for filesystem: sd(8,17)
> Jul 19 12:27:19 ha2 kernel: XFS unmount got error 16
> Jul 19 12:27:19 ha2 kernel: linvfs_put_super: vfsp/0xc2ff71e0 left dangling!
> Jul 19 12:27:19 ha2 kernel: VFS: Busy inodes after unmount. Self-destruct in 5 seconds. Have a nice day...
> Jul 19 12:27:21 ha2 kernel: XFS: Filesystem has duplicate UUID - can't mount
>
>
> This happens on a shared storage cluster with two nodes. The same thing
> happens on both nodes. (I'm only using the device from one node at a
> time.)
>
> linux-2.4.5 with XFS patch from 06112001.
>
> After a reboot it works again, and I have not been able to reproduce it
> yet. It first happened when I was testing NFS locks, so it could be
> related to that.
>
>
>
> --
> Ragnar Kjorstad
> Big Storage


2001-07-19 23:59:10

by Matthew Jacob

Subject: Re: Busy inodes after umount


I reported this a couple of months back. It's reassuring to know that it's a
consistent problem.

On Fri, 20 Jul 2001, Ragnar Kjørstad wrote:

> On Thu, Jul 19, 2001 at 04:22:07PM -0400, Christian, Chip wrote:
> > I found the same thing happening. Tracked it down in our case to using fdisk to re-read the disk size before mounting. Replaced it with "blockdev --rereadpt" and the problem seems to have gone away. YMMV.
>
> I've now been able to reproduce:
>
> * make a filesystem
> * mount it
> * export it (nfs)
> * mount on remote machine
> * lock file (fcntl)
> * unexport
> * unmount
>
> Then you get the VFS message about self-destruct. Tested with both ext2
> and xfs.
>
> The lock is still present in /proc/locks after the umount.
>
> With ext2 I can remount the filesystem successfully, but with XFS I get
> the message about duplicate UUIDs and the mount fails. I believe this is
> a different problem from the one you were experiencing (and blockdev
> doesn't help for me).
>
> I suppose this is a generic kernel bug?
>
>
>

2001-07-20 00:39:01

by Tad Dolphay

Subject: Re: Busy inodes after umount

I know there was a fix for a "Busy inodes after unmount" problem in
2.4.6-pre3. Here's an excerpt from a posting to the NFS mailing list
from Neil Brown:

-------------Included message-----------------------
Previously anonymous dentries were hashed (though with no name, the
hash was pretty meaningless). This meant that they would hang around
after the last reference was dropped. This was actually fairly
pointless as they would never get referenced again, and caused a real
problem as umount wouldn't discard them and so you got the message
printk("VFS: Busy inodes after unmount. "
"Self-destruct in 5 seconds. Have a nice day...\n");

In 2.4.6-pre3 I stopped hashing those dentries so now when the last
reference is dropped, the dentry is freed. So now there will never be
more anonymous dentries than there are active nfsd threads.
---------------end included message-------------------
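Neil Brown's hashed-vs-unhashed distinction can be pictured with a rough userspace analogy (plain Python, nothing like the actual kernel code): an object pinned by a strong reference in a global table outlives its last user, while an unpinned one is freed the moment the last reference is dropped.

```python
import weakref

class Dentry:
    """Stand-in for a kernel dentry (illustration only)."""

# Pre-2.4.6-pre3 behaviour: the anonymous dentry is hashed, i.e. a
# global table holds a strong reference, so it outlives its last user
# and is still "busy" when the filesystem is unmounted.
dcache = {}
d = Dentry()
dcache["anon"] = d
probe = weakref.ref(d)
del d                                  # last user drops its reference...
hashed_survives = probe() is not None  # ...but the hash table still pins it

# 2.4.6-pre3 behaviour: the dentry is not hashed, so dropping the
# last reference frees it immediately.
d = Dentry()
probe = weakref.ref(d)
del d
unhashed_freed = probe() is None
```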

Tad

>
> I reported this a couple of months back. It's reassuring to know that it's a
> consistent problem.
>
> On Fri, 20 Jul 2001, Ragnar Kjørstad wrote:
>
> > On Thu, Jul 19, 2001 at 04:22:07PM -0400, Christian, Chip wrote:
> > > I found the same thing happening. Tracked it down in our case to using fdisk to re-read the disk size before mounting. Replaced it with "blockdev --rereadpt" and the problem seems to have gone away. YMMV.
> >
> > I've now been able to reproduce:
> >
> > * make a filesystem
> > * mount it
> > * export it (nfs)
> > * mount on remote machine
> > * lock file (fcntl)
> > * unexport
> > * unmount
> >
> > Then you get the VFS message about self-destruct. Tested with both ext2
> > and xfs.
> >
> > The lock is still present in /proc/locks after the umount.
> >
> > With ext2 I can remount the filesystem successfully, but with XFS I get
> > the message about duplicate UUIDs and the mount fails. I believe this is
> > a different problem from the one you were experiencing (and blockdev
> > doesn't help for me).
> >
> > I suppose this is a generic kernel bug?
> >
> >
> >
>

2001-07-20 00:50:31

by Ragnar Kjørstad

Subject: Re: Busy inodes after umount

On Thu, Jul 19, 2001 at 07:38:15PM -0500, Tad Dolphay wrote:
> I know there was a fix for a "Busy inodes after unmount" problem in
> 2.4.6-pre3. Here's an excerpt from a posting to the NFS mailing list
> from Neil Brown:

Thanks. I'll try that and see if that solves the problem (also the XFS
UUID problem).


--
Ragnar Kjorstad
Big Storage

2001-07-31 00:17:10

by Ragnar Kjørstad

Subject: Re: Busy inodes after umount

On Thu, Jul 19, 2001 at 07:38:15PM -0500, Tad Dolphay wrote:
> > > I've now been able to reproduce:
> > >
> > > * make a filesystem
> > > * mount it
> > > * export it (nfs)
> > > * mount on remote machine
> > > * lock file (fcntl)
> > > * unexport
> > > * unmount
> > >
> > > Then you get the VFS message about self-destruct. Tested with both ext2
> > > and xfs.
> > >
> > > The lock is still present in /proc/locks after the umount.
> > >
> > > With ext2 I can remount the filesystem successfully, but with XFS I get
> > > the message about duplicate UUIDs and the mount fails. I believe this is
> > > a different problem from the one you were experiencing (and blockdev
> > > doesn't help for me).
> > >
> > > I suppose this is a generic kernel bug?
>
> I know there was a fix for a "Busy inodes after unmount" problem in
> 2.4.6-pre3. Here's an excerpt from a posting to the NFS mailing list
> from Neil Brown:
>
> -------------Included message-----------------------
> Previously anonymous dentries were hashed (though with no name, the
> hash was pretty meaningless). This meant that they would hang around
> after the last reference was dropped. This was actually fairly
> pointless as they would never get referenced again, and caused a real
> problem as umount wouldn't discard them and so you got the message
> printk("VFS: Busy inodes after unmount. "
> "Self-destruct in 5 seconds. Have a nice day...\n");
>
> In 2.4.6-pre3 I stopped hashing those dentries so now when the last
> reference is dropped, the dentry is freed. So now there will never be
> more anonymous dentries than there are active nfsd threads.
> ---------------end included message-------------------

I just tested with 2.4.7, and the problem remains.


--
Ragnar Kjorstad
Big Storage