2008-01-18 21:45:32

by Christian Kujau

Subject: 2.6.24-rc8: possible circular locking dependency detected

Hi,

just FYI, upgrading to -rc8 gave the following messages in kern.log in
the morning hours, when the backups were run:

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.24-rc8 #2
-------------------------------------------------------
rsync/23295 is trying to acquire lock:
(iprune_mutex){--..}, at: [<c017a552>] shrink_icache_memory+0x72/0x220

but task is already holding lock:
(&(&ip->i_iolock)->mr_lock){----}, at: [<c0275056>] xfs_ilock+0x96/0xb0

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&(&ip->i_iolock)->mr_lock){----}:
[<c0138c01>] __lock_acquire+0xbe1/0x10b0
[<c0275056>] xfs_ilock+0x96/0xb0
[<c0137b4f>] trace_hardirqs_on+0x9f/0x140
[<c013912f>] lock_acquire+0x5f/0x80
[<c0275056>] xfs_ilock+0x96/0xb0
[<c012f4b1>] down_write_nested+0x41/0x60
[<c0275056>] xfs_ilock+0x96/0xb0
[<c0275056>] xfs_ilock+0x96/0xb0
[<c02751ea>] xfs_ireclaim+0x1a/0x60
[<c0294e73>] xfs_finish_reclaim+0x53/0x1a0
[<c02a40ce>] xfs_fs_clear_inode+0x5e/0x90
[<c017a102>] clear_inode+0x82/0x160
[<c017a55c>] shrink_icache_memory+0x7c/0x220
[<c017a43a>] dispose_list+0x1a/0xc0
[<c017a6c2>] shrink_icache_memory+0x1e2/0x220
[<c014f6d1>] shrink_slab+0x101/0x160
[<c014fa4a>] kswapd+0x2aa/0x410
[<c012c1f0>] autoremove_wake_function+0x0/0x40
[<c014f7a0>] kswapd+0x0/0x410
[<c012bf42>] kthread+0x42/0x70

Full dmesg and .config: http://nerdbynature.de/bits/2.6.24-rc8/

Thanks,
Christian.
--
BOFH excuse #18:

excess surge protection


2008-01-21 02:55:37

by David Chinner

Subject: Re: 2.6.24-rc8: possible circular locking dependency detected

On Fri, Jan 18, 2008 at 10:45:17PM +0100, Christian Kujau wrote:
> Hi,
>
> just FYI, upgrading to -rc8 gave the following messages in kern.log in
> the morning hours, when the backups were run:
>
> =======================================================
> [ INFO: possible circular locking dependency detected ]
> 2.6.24-rc8 #2
> -------------------------------------------------------
> rsync/23295 is trying to acquire lock:
> (iprune_mutex){--..}, at: [<c017a552>] shrink_icache_memory+0x72/0x220
>
> but task is already holding lock:
> (&(&ip->i_iolock)->mr_lock){----}, at: [<c0275056>] xfs_ilock+0x96/0xb0
>
> which lock already depends on the new lock.

Memory reclaim can occur while an inode lock is held, giving the
ordering i_iolock -> iprune_mutex. This is quite common.

During reclaim, while holding iprune_mutex, we lock a different
inode to finish cleaning it up, resulting in the opposite ordering,
iprune_mutex -> i_iolock.

At this point, lockdep gets upset and blats out a warning.

But there's no problem here: it is always safe for us to take the
i_iolock in inode reclaim, because it can never be the same i_iolock
we took before entering memory reclaim. Hence this is a false
positive.
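
To make the class-level pattern concrete, here is a minimal userspace
sketch (plain pthreads, not XFS code) of the two acquisition orders
lockdep is chaining together. iolock_a and iolock_b are illustrative
names standing for the i_iolock of two different inodes; lockdep keys
its dependency graph by lock class rather than lock instance, so from
its point of view both paths involve the same "i_iolock" node:

#include <pthread.h>

static pthread_mutex_t iprune   = PTHREAD_MUTEX_INITIALIZER; /* iprune_mutex       */
static pthread_mutex_t iolock_a = PTHREAD_MUTEX_INITIALIZER; /* inode A's i_iolock */
static pthread_mutex_t iolock_b = PTHREAD_MUTEX_INITIALIZER; /* inode B's i_iolock */

int main(void)
{
        /* Path 1: normal I/O holds an inode's i_iolock when memory
         * pressure drives shrink_icache_memory(), which takes
         * iprune_mutex: i_iolock -> iprune_mutex. */
        pthread_mutex_lock(&iolock_a);
        pthread_mutex_lock(&iprune);
        pthread_mutex_unlock(&iprune);
        pthread_mutex_unlock(&iolock_a);

        /* Path 2: inode reclaim holds iprune_mutex and then locks a
         * different, unreferenced inode to tear it down:
         * iprune_mutex -> i_iolock. */
        pthread_mutex_lock(&iprune);
        pthread_mutex_lock(&iolock_b);
        pthread_mutex_unlock(&iolock_b);
        pthread_mutex_unlock(&iprune);

        return 0;
}

Because the inode being reclaimed in path 2 is unreferenced, the two
i_iolocks can never be the same lock instance, which is why no real
deadlock can occur even though the class-level orders look circular.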

Lockdep folk - we really need an annotation to prevent this false
positive from being reported because we are getting reports at
least once a week....
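
As a sketch of what such an annotation might build on: lockdep already
provides per-acquisition subclasses via the *_nested() variants
(down_write_nested() is visible in the trace above). Keying the
reclaim-time i_iolock under a dedicated subclass would stop lockdep
chaining it against the i_iolock held on entry to memory reclaim.
This is purely illustrative; XFS_IOLOCK_RECLAIM and
example_reclaim_iolock() are made-up names, not existing kernel
identifiers:

#include <linux/rwsem.h>

#define XFS_IOLOCK_RECLAIM	1	/* hypothetical lockdep subclass */

static void example_reclaim_iolock(struct rw_semaphore *iolock)
{
        /* Same locking semantics as down_write(); the extra argument
         * only tells lockdep to track this acquisition under
         * subclass 1 instead of the default subclass 0. */
        down_write_nested(iolock, XFS_IOLOCK_RECLAIM);
}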

Cheers,

Dave.
--
Dave Chinner
Principal Engineer
SGI Australian Software Group