2008-11-25 11:44:17

by Dan Noé

Subject: Lockdep warning for iprune_mutex at shrink_icache_memory

I have experienced the following lockdep warning on 2.6.28-rc6. I
would be happy to help debug, but I don't know this section of code at
all.

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.28-rc6git #1
-------------------------------------------------------
rsync/21485 is trying to acquire lock:
(iprune_mutex){--..}, at: [<ffffffff80310b14>]
shrink_icache_memory+0x84/0x290

but task is already holding lock:
(&(&ip->i_iolock)->mr_lock){----}, at: [<ffffffffa01fcae5>]
xfs_ilock+0x75/0xb0 [xfs]

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&(&ip->i_iolock)->mr_lock){----}:
[<ffffffff80279939>] __lock_acquire+0xd49/0x11a0
[<ffffffff80279e21>] lock_acquire+0x91/0xc0
[<ffffffff8026a557>] down_write_nested+0x57/0x90
[<ffffffffa01fcb15>] xfs_ilock+0xa5/0xb0 [xfs]
[<ffffffffa01fccc6>] xfs_ireclaim+0x46/0x90 [xfs]
[<ffffffffa021a95e>] xfs_finish_reclaim+0x5e/0x1a0 [xfs]
[<ffffffffa021acbb>] xfs_reclaim+0x11b/0x120 [xfs]
[<ffffffffa022a29e>] xfs_fs_clear_inode+0xee/0x120 [xfs]
[<ffffffff80310881>] clear_inode+0xb1/0x130
[<ffffffff803109a8>] dispose_list+0x38/0x120
[<ffffffff80310cd3>] shrink_icache_memory+0x243/0x290
[<ffffffff802c80d5>] shrink_slab+0x125/0x180
[<ffffffff802cb80a>] kswapd+0x52a/0x680
[<ffffffff80265dae>] kthread+0x4e/0x90
[<ffffffff8020d849>] child_rip+0xa/0x11
[<ffffffffffffffff>] 0xffffffffffffffff

-> #0 (iprune_mutex){--..}:
[<ffffffff80279a00>] __lock_acquire+0xe10/0x11a0
[<ffffffff80279e21>] lock_acquire+0x91/0xc0
[<ffffffff8052bee3>] __mutex_lock_common+0xb3/0x390
[<ffffffff8052c2a4>] mutex_lock_nested+0x44/0x50
[<ffffffff80310b14>] shrink_icache_memory+0x84/0x290
[<ffffffff802c80d5>] shrink_slab+0x125/0x180
[<ffffffff802cad3b>] do_try_to_free_pages+0x2bb/0x460
[<ffffffff802cafd7>] try_to_free_pages+0x67/0x70
[<ffffffff802c1dfa>] __alloc_pages_internal+0x23a/0x530
[<ffffffff802e6c5d>] alloc_pages_current+0xad/0x110
[<ffffffff802f17ab>] new_slab+0x2ab/0x350
[<ffffffff802f29bc>] __slab_alloc+0x33c/0x440
[<ffffffff802f2c76>] kmem_cache_alloc+0xd6/0xe0
[<ffffffff803b962b>] radix_tree_preload+0x3b/0xb0
[<ffffffff802bc728>] add_to_page_cache_locked+0x68/0x110
[<ffffffff802bc801>] add_to_page_cache_lru+0x31/0x90
[<ffffffff8032a08f>] mpage_readpages+0x9f/0x120
[<ffffffffa02204ff>] xfs_vm_readpages+0x1f/0x30 [xfs]
[<ffffffff802c5ac1>] __do_page_cache_readahead+0x1a1/0x250
[<ffffffff802c5f2b>] ondemand_readahead+0x1cb/0x250
[<ffffffff802c6059>] page_cache_async_readahead+0xa9/0xc0
[<ffffffff802bd3d7>] generic_file_aio_read+0x447/0x6c0
[<ffffffffa0229aff>] xfs_read+0x12f/0x2c0 [xfs]
[<ffffffffa0224e46>] xfs_file_aio_read+0x56/0x60 [xfs]
[<ffffffff802fad99>] do_sync_read+0xf9/0x140
[<ffffffff802fb5d8>] vfs_read+0xc8/0x180
[<ffffffff802fb795>] sys_read+0x55/0x90
[<ffffffff8020c6ab>] system_call_fastpath+0x16/0x1b
[<ffffffffffffffff>] 0xffffffffffffffff

other info that might help us debug this:

2 locks held by rsync/21485:
#0: (&(&ip->i_iolock)->mr_lock){----}, at: [<ffffffffa01fcae5>]
xfs_ilock+0x75/0xb0 [xfs]
#1: (shrinker_rwsem){----}, at: [<ffffffff802c7fe7>]
shrink_slab+0x37/0x180

stack backtrace:
Pid: 21485, comm: rsync Not tainted 2.6.28-rc6git #1
Call Trace:
[<ffffffff802776d7>] print_circular_bug_tail+0xa7/0xf0
[<ffffffff80279a00>] __lock_acquire+0xe10/0x11a0
[<ffffffff80279e21>] lock_acquire+0x91/0xc0
[<ffffffff80310b14>] ? shrink_icache_memory+0x84/0x290
[<ffffffff8052bee3>] __mutex_lock_common+0xb3/0x390
[<ffffffff80310b14>] ? shrink_icache_memory+0x84/0x290
[<ffffffff80310b14>] ? shrink_icache_memory+0x84/0x290
[<ffffffff80213b93>] ? native_sched_clock+0x13/0x60
[<ffffffff8052c2a4>] mutex_lock_nested+0x44/0x50
[<ffffffff80310b14>] shrink_icache_memory+0x84/0x290
[<ffffffff802c80d5>] shrink_slab+0x125/0x180
[<ffffffff802cad3b>] do_try_to_free_pages+0x2bb/0x460
[<ffffffff802cafd7>] try_to_free_pages+0x67/0x70
[<ffffffff802c9610>] ? isolate_pages_global+0x0/0x260
[<ffffffff802c1dfa>] __alloc_pages_internal+0x23a/0x530
[<ffffffff802e6c5d>] alloc_pages_current+0xad/0x110
[<ffffffff802f17ab>] new_slab+0x2ab/0x350
[<ffffffff802f29ad>] ? __slab_alloc+0x32d/0x440
[<ffffffff802f29bc>] __slab_alloc+0x33c/0x440
[<ffffffff803b962b>] ? radix_tree_preload+0x3b/0xb0
[<ffffffff8020c558>] ? ftrace_call+0x5/0x2b
[<ffffffff803b962b>] ? radix_tree_preload+0x3b/0xb0
[<ffffffff802f2c76>] kmem_cache_alloc+0xd6/0xe0
[<ffffffff803b962b>] radix_tree_preload+0x3b/0xb0
[<ffffffff802bc728>] add_to_page_cache_locked+0x68/0x110
[<ffffffff802bc801>] add_to_page_cache_lru+0x31/0x90
[<ffffffff8032a08f>] mpage_readpages+0x9f/0x120
[<ffffffffa02200d0>] ? xfs_get_blocks+0x0/0x20 [xfs]
[<ffffffff802c1cb3>] ? __alloc_pages_internal+0xf3/0x530
[<ffffffffa02200d0>] ? xfs_get_blocks+0x0/0x20 [xfs]
[<ffffffffa02204ff>] xfs_vm_readpages+0x1f/0x30 [xfs]
[<ffffffff802c5ac1>] __do_page_cache_readahead+0x1a1/0x250
[<ffffffff802c59ea>] ? __do_page_cache_readahead+0xca/0x250
[<ffffffff802c5f2b>] ondemand_readahead+0x1cb/0x250
[<ffffffffa0071860>] ? raid1_congested+0x0/0xf0 [raid1]
[<ffffffff8020c558>] ? ftrace_call+0x5/0x2b
[<ffffffff802c6059>] page_cache_async_readahead+0xa9/0xc0
[<ffffffff802bd3d7>] generic_file_aio_read+0x447/0x6c0
[<ffffffff8052df84>] ? _spin_unlock_irqrestore+0x44/0x70
[<ffffffffa01fcae5>] ? xfs_ilock+0x75/0xb0 [xfs]
[<ffffffffa0229aff>] xfs_read+0x12f/0x2c0 [xfs]
[<ffffffffa0224e46>] xfs_file_aio_read+0x56/0x60 [xfs]
[<ffffffff802fad99>] do_sync_read+0xf9/0x140
[<ffffffff80266200>] ? autoremove_wake_function+0x0/0x40
[<ffffffff8020c558>] ? ftrace_call+0x5/0x2b
[<ffffffff80375069>] ? cap_file_permission+0x9/0x10
[<ffffffff80373fa6>] ? security_file_permission+0x16/0x20
[<ffffffff802fb5d8>] vfs_read+0xc8/0x180
[<ffffffff802fb795>] sys_read+0x55/0x90
[<ffffffff8020c6ab>] system_call_fastpath+0x16/0x1b


Cheers,
Dan

--
/--------------- - - - - - -
| Dan Noé
| http://isomerica.net/~dpn/


2008-11-26 07:26:38

by Dave Chinner

Subject: Re: Lockdep warning for iprune_mutex at shrink_icache_memory

On Tue, Nov 25, 2008 at 06:43:57AM -0500, Dan Noé wrote:
> I have experienced the following lockdep warning on 2.6.28-rc6. I
> would be happy to help debug, but I don't know this section of code at
> all.
>
> =======================================================
> [ INFO: possible circular locking dependency detected ]
> 2.6.28-rc6git #1
> -------------------------------------------------------
> rsync/21485 is trying to acquire lock:
> (iprune_mutex){--..}, at: [<ffffffff80310b14>]
> shrink_icache_memory+0x84/0x290
>
> but task is already holding lock:
> (&(&ip->i_iolock)->mr_lock){----}, at: [<ffffffffa01fcae5>]
> xfs_ilock+0x75/0xb0 [xfs]

False positive. Memory reclaim can be invoked while we
are holding an inode lock, which means we go:

xfs_ilock -> iprune_mutex

And when the inode shrinker reclaims a dirty xfs inode,
we go:

iprune_mutex -> xfs_ilock

However, this cannot deadlock as the first case can
only occur with a referenced inode, and the second case
can only occur with an unreferenced inode. Hence we can
never get a situation where the inode being locked on
either side of the iprune_mutex is the same inode so
deadlock is impossible.

To avoid this false positive, either we need to turn off
lockdep checking on xfs inodes (not going to happen), or memory
reclaim needs to be able to tell lockdep that recursion on
filesystem lock classes may occur. Perhaps we can add a
simple annotation to the iprune mutex initialisation as well as
the xfs ilock initialisation to indicate that such recursion
is possible and allowed...

Cheers,

Dave.
--
Dave Chinner
[email protected]

2008-11-26 15:03:20

by Peter Zijlstra

Subject: Re: Lockdep warning for iprune_mutex at shrink_icache_memory

On Wed, 2008-11-26 at 18:26 +1100, Dave Chinner wrote:
> On Tue, Nov 25, 2008 at 06:43:57AM -0500, Dan Noé wrote:
> > I have experienced the following lockdep warning on 2.6.28-rc6. I
> > would be happy to help debug, but I don't know this section of code at
> > all.
> >
> > =======================================================
> > [ INFO: possible circular locking dependency detected ]
> > 2.6.28-rc6git #1
> > -------------------------------------------------------
> > rsync/21485 is trying to acquire lock:
> > (iprune_mutex){--..}, at: [<ffffffff80310b14>]
> > shrink_icache_memory+0x84/0x290
> >
> > but task is already holding lock:
> > (&(&ip->i_iolock)->mr_lock){----}, at: [<ffffffffa01fcae5>]
> > xfs_ilock+0x75/0xb0 [xfs]
>
> False positive. memory reclaim can be invoked while we
> are holding an inode lock, which means we go:
>
> xfs_ilock -> iprune_mutex
>
> And when the inode shrinker reclaims a dirty xfs inode,
> we go:
>
> iprune_mutex -> xfs_ilock
>
> However, this cannot deadlock as the first case can
> only occur with a referenced inode, and the second case
> can only occur with an unreferenced inode. Hence we can
> never get a situation where the inode being locked on
> either side of the iprune_mutex is the same inode so
> deadlock is impossible.
>
> To avoid this false positive, either we need to turn off
> lockdep checking on xfs inodes (not going to happen), or memory
> reclaim needs to be able to tell lockdep that recursion on
> filesystem lock classes may occur. Perhaps we can add a
> simple annotation to the iprune mutex initialisation as well as
> the xfs ilock initialisation to indicate that such recursion
> is possible and allowed...

This is that "an inode has multiple stages in its life-cycle" thing
again, right?

Last time I talked to Christoph about this, he said it would be possible
to add (v)fs hooks for when the inode changes data structures; it was
either not really filesystem specific or fully filesystem specific, I
can't remember which.

The thing to do is re-annotate the inode locks whenever the inode
changes data-structure, much like we do in unlock_new_inode().

So for each stage in the inode's life-cycle you need to create a key for
each lock, such as:

struct lock_class_key xfs_active_inode_ilock;
struct lock_class_key xfs_deleted_inode_ilock;
...

and on state change do something like:

BUG_ON(rwsem_is_locked(&xfs_ilock->mrlock));

init_rwsem(&xfs_ilock->mrlock);
lockdep_set_class(&xfs_ilock->mrlock, &xfs_deleted_inode_ilock);


hth

2008-11-26 17:53:20

by Dan Noé

Subject: Re: Lockdep warning for iprune_mutex at shrink_icache_memory

On Wed, 26 Nov 2008 16:02:59 +0100
Peter Zijlstra <[email protected]> wrote:

> The thing to do is re-annotate the inode locks whenever the inode
> changes data-structure, much like we do in unlock_new_inode().
>
> So for each stage in the inode's life-cycle you need to create a key
> for each lock, such as:
>
> struct lock_class_key xfs_active_inode_ilock;
> struct lock_class_key xfs_deleted_inode_ilock;
> ...
>
> and on state change do something like:
>
> BUG_ON(rwsem_is_locked(&xfs_ilock->mrlock));
>
> init_rwsem(&xfs_ilock->mrlock);
> lockdep_set_class(&xfs_ilock->mrlock, &xfs_deleted_inode_ilock);

This seems to make sense, based on my understanding and reading of the
lockdep documentation. It looks like the first step is to add callbacks
for state change, and then add XFS specific callbacks that set the
lockdep class. I would love to work on a patch, with some guidance.

Thanks,
Dan

--
/--------------- - - - - - -
| Dan Noé
| http://isomerica.net/~dpn/

2008-11-26 21:35:13

by Dave Chinner

Subject: Re: Lockdep warning for iprune_mutex at shrink_icache_memory

On Wed, Nov 26, 2008 at 04:02:59PM +0100, Peter Zijlstra wrote:
> On Wed, 2008-11-26 at 18:26 +1100, Dave Chinner wrote:
> > On Tue, Nov 25, 2008 at 06:43:57AM -0500, Dan Noé wrote:
> > > I have experienced the following lockdep warning on 2.6.28-rc6. I
> > > would be happy to help debug, but I don't know this section of code at
> > > all.
> > >
> > > =======================================================
> > > [ INFO: possible circular locking dependency detected ]
> > > 2.6.28-rc6git #1
> > > -------------------------------------------------------
> > > rsync/21485 is trying to acquire lock:
> > > (iprune_mutex){--..}, at: [<ffffffff80310b14>]
> > > shrink_icache_memory+0x84/0x290
> > >
> > > but task is already holding lock:
> > > (&(&ip->i_iolock)->mr_lock){----}, at: [<ffffffffa01fcae5>]
> > > xfs_ilock+0x75/0xb0 [xfs]
> >
> > False positive. memory reclaim can be invoked while we
> > are holding an inode lock, which means we go:
> >
> > xfs_ilock -> iprune_mutex
> >
> > And when the inode shrinker reclaims a dirty xfs inode,
> > we go:
> >
> > iprune_mutex -> xfs_ilock
> >
> > However, this cannot deadlock as the first case can
> > only occur with a referenced inode, and the second case
> > can only occur with an unreferenced inode. Hence we can
> > never get a situation where the inode being locked on
> > either side of the iprune_mutex is the same inode so
> > deadlock is impossible.
> >
> > To avoid this false positive, either we need to turn off
> > lockdep checking on xfs inodes (not going to happen), or memory
> > reclaim needs to be able to tell lockdep that recursion on
> > filesystem lock classes may occur. Perhaps we can add a
> > simple annotation to the iprune mutex initialisation as well as
> > the xfs ilock initialisation to indicate that such recursion
> > is possible and allowed...
>
> This is that "an inode has multiple stages in its life-cycle" thing
> again, right?

Sort of.

> Last time I talked to Christoph about this, he said it would be possible
> to add (v)fs hooks for when the inode changes data structures; it was
> either not really filesystem specific or fully filesystem specific, I
> can't remember which.
>
> The thing to do is re-annotate the inode locks whenever the inode
> changes data-structure, much like we do in unlock_new_inode().

Ok, that's really changing the class of the inode lock depending
on its type (it's directory-inode specific) during initialisation.
That is, it is setting the class for the life of the inode, not
changing it half way through its life cycle.

> So for each stage in the inode's life-cycle you need to create a key for
> each lock, such as:
>
> struct lock_class_key xfs_active_inode_ilock;
> struct lock_class_key xfs_deleted_inode_ilock;
> ...
>
> and on state change do something like:
>
> BUG_ON(rwsem_is_locked(&xfs_ilock->mrlock));
>
> init_rwsem(&xfs_ilock->mrlock);
> lockdep_set_class(&xfs_ilock->mrlock, &xfs_deleted_inode_ilock);

I don't think that is possible for XFS - we can't re-init the inode
locks safely while they are still active. Apart from the fact that
the inode locks play a critical part in EOL synchronisation
(preventing use after free), the only way we could guarantee
exclusive access to the inode to be able to re-init the locks is to
already hold the inode locks.

However, if we can change the class of the lock while it is held, we
could probably use this technique because we track the reclaimable
state of the inode and handle it specially in lookup so we have all
the infrastructure to be able to do this dynamically. Is changing
the lock class dynamically possible/allowed?

Cheers,

Dave.
--
Dave Chinner
[email protected]

2008-11-27 08:05:28

by Peter Zijlstra

Subject: Re: Lockdep warning for iprune_mutex at shrink_icache_memory

On Thu, 2008-11-27 at 08:34 +1100, Dave Chinner wrote:
> Is changing the lock class dynamically possible/allowed?

Currently, no, but I'll see what I can do, it requires a bit of trickery
to make that happen..

I'll let you know when I've sorted that out.

2008-12-04 08:00:31

by Peter Zijlstra

Subject: Re: Lockdep warning for iprune_mutex at shrink_icache_memory

On Thu, 2008-11-27 at 09:05 +0100, Peter Zijlstra wrote:
> On Thu, 2008-11-27 at 08:34 +1100, Dave Chinner wrote:
> > Is changing the lock class dynamically possible/allowed?
>
> Currently, no, but I'll see what I can do, it requires a bit of trickery
> to make that happen..
>
> I'll let you know when I've sorted that out.

Ok, that wasn't hard at all..

Dave, Christoph, can you have a play with this and post this patch along
with a potential user - I think it best if we don't merge this without
at least one user in tree :-)

---
Subject: lockdep: change a held lock's class
From: Peter Zijlstra <[email protected]>
Date: Thu Dec 04 08:34:56 CET 2008

Impact: introduce new lockdep API

Allow changing a held lock's class. This is basically the same as the
existing code for changing a subclass, so reuse all of that.

The XFS code will be able to use this to annotate their inode locking.

Signed-off-by: Peter Zijlstra <[email protected]>
Signed-off-by: Ingo Molnar <[email protected]>
---
include/linux/lockdep.h | 12 ++++++++++--
kernel/lockdep.c | 24 +++++++++---------------
2 files changed, 19 insertions(+), 17 deletions(-)

Index: linux-2.6/include/linux/lockdep.h
===================================================================
--- linux-2.6.orig/include/linux/lockdep.h
+++ linux-2.6/include/linux/lockdep.h
@@ -314,8 +314,15 @@ extern void lock_acquire(struct lockdep_
extern void lock_release(struct lockdep_map *lock, int nested,
unsigned long ip);

-extern void lock_set_subclass(struct lockdep_map *lock, unsigned int subclass,
- unsigned long ip);
+extern void lock_set_class(struct lockdep_map *lock, const char *name,
+ struct lock_class_key *key, unsigned int subclass,
+ unsigned long ip);
+
+static inline void lock_set_subclass(struct lockdep_map *lock,
+ unsigned int subclass, unsigned long ip)
+{
+ lock_set_class(lock, lock->name, lock->key, subclass, ip);
+}

# define INIT_LOCKDEP .lockdep_recursion = 0,

@@ -333,6 +340,7 @@ static inline void lockdep_on(void)

# define lock_acquire(l, s, t, r, c, n, i) do { } while (0)
# define lock_release(l, n, i) do { } while (0)
+# define lock_set_class(l, n, k, s, i) do { } while (0)
# define lock_set_subclass(l, s, i) do { } while (0)
# define lockdep_init() do { } while (0)
# define lockdep_info() do { } while (0)
Index: linux-2.6/kernel/lockdep.c
===================================================================
--- linux-2.6.orig/kernel/lockdep.c
+++ linux-2.6/kernel/lockdep.c
@@ -292,14 +292,12 @@ void lockdep_off(void)
{
current->lockdep_recursion++;
}
-
EXPORT_SYMBOL(lockdep_off);

void lockdep_on(void)
{
current->lockdep_recursion--;
}
-
EXPORT_SYMBOL(lockdep_on);

/*
@@ -2514,7 +2512,6 @@ void lockdep_init_map(struct lockdep_map
if (subclass)
register_lock_class(lock, subclass, 1);
}
-
EXPORT_SYMBOL_GPL(lockdep_init_map);

/*
@@ -2695,8 +2692,9 @@ static int check_unlock(struct task_stru
}

static int
-__lock_set_subclass(struct lockdep_map *lock,
- unsigned int subclass, unsigned long ip)
+__lock_set_class(struct lockdep_map *lock, const char *name,
+ struct lock_class_key *key, unsigned int subclass,
+ unsigned long ip)
{
struct task_struct *curr = current;
struct held_lock *hlock, *prev_hlock;
@@ -2723,6 +2721,7 @@ __lock_set_subclass(struct lockdep_map *
return print_unlock_inbalance_bug(curr, lock, ip);

found_it:
+ lockdep_init_map(lock, name, key, 0);
class = register_lock_class(lock, subclass, 0);
hlock->class_idx = class - lock_classes + 1;

@@ -2907,9 +2906,9 @@ static void check_flags(unsigned long fl
#endif
}

-void
-lock_set_subclass(struct lockdep_map *lock,
- unsigned int subclass, unsigned long ip)
+void lock_set_class(struct lockdep_map *lock, const char *name,
+ struct lock_class_key *key, unsigned int subclass,
+ unsigned long ip)
{
unsigned long flags;

@@ -2919,13 +2918,12 @@ lock_set_subclass(struct lockdep_map *lo
raw_local_irq_save(flags);
current->lockdep_recursion = 1;
check_flags(flags);
- if (__lock_set_subclass(lock, subclass, ip))
+ if (__lock_set_class(lock, name, key, subclass, ip))
check_chain_key(current);
current->lockdep_recursion = 0;
raw_local_irq_restore(flags);
}
-
-EXPORT_SYMBOL_GPL(lock_set_subclass);
+EXPORT_SYMBOL_GPL(lock_set_class);

/*
* We are not always called with irqs disabled - do that here,
@@ -2949,7 +2947,6 @@ void lock_acquire(struct lockdep_map *lo
current->lockdep_recursion = 0;
raw_local_irq_restore(flags);
}
-
EXPORT_SYMBOL_GPL(lock_acquire);

void lock_release(struct lockdep_map *lock, int nested,
@@ -2967,7 +2964,6 @@ void lock_release(struct lockdep_map *lo
current->lockdep_recursion = 0;
raw_local_irq_restore(flags);
}
-
EXPORT_SYMBOL_GPL(lock_release);

#ifdef CONFIG_LOCK_STAT
@@ -3452,7 +3448,6 @@ retry:
if (unlock)
read_unlock(&tasklist_lock);
}
-
EXPORT_SYMBOL_GPL(debug_show_all_locks);

/*
@@ -3473,7 +3468,6 @@ void debug_show_held_locks(struct task_s
{
__debug_show_held_locks(task);
}
-
EXPORT_SYMBOL_GPL(debug_show_held_locks);

void lockdep_sys_exit(void)
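For reference, a caller of the new API might look something like the sketch below. This is hypothetical: the key name, function, and exact field path are invented for illustration, and only the `lock_set_class()` signature comes from the patch above. Per Dave's description, the natural place to call it would be where XFS marks an inode reclaimable, while the lock is still held:

```c
/* Sketch only. xfs_reclaimable_ilock_key and this callsite are
 * hypothetical; the field path assumes the iolock's rw_semaphore
 * carries a dep_map under CONFIG_DEBUG_LOCK_ALLOC. */
static struct lock_class_key xfs_reclaimable_ilock_key;

static void xfs_inode_mark_reclaimable(struct xfs_inode *ip)
{
    /* Move the held iolock into a separate lockdep class so the
     * unreferenced-inode ordering no longer aliases the active one. */
    lock_set_class(&ip->i_iolock.mr_lock.dep_map,
                   "xfs_iolock_reclaimable",
                   &xfs_reclaimable_ilock_key, 0, _THIS_IP_);
}
```

With active and reclaimable inodes in distinct classes, the two orderings Dave described (xfs_ilock -> iprune_mutex on a referenced inode, iprune_mutex -> xfs_ilock on an unreferenced one) would no longer form a cycle in lockdep's class graph.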

2008-12-04 09:09:46

by Ingo Molnar

Subject: Re: Lockdep warning for iprune_mutex at shrink_icache_memory


* Peter Zijlstra <[email protected]> wrote:

> On Thu, 2008-11-27 at 09:05 +0100, Peter Zijlstra wrote:
> > On Thu, 2008-11-27 at 08:34 +1100, Dave Chinner wrote:
> > > Is changing the lock class dynamically possible/allowed?
> >
> > Currently, no, but I'll see what I can do, it requires a bit of trickery
> > to make that happen..
> >
> > I'll let you know when I've sorted that out.
>
> Ok, that wasn't hard at all..
>
> Dave, Christoph, can you have a play with this and post this patch along
> with a potential user - I think it best if we don't merge this without
> at least one user in tree :-)
>
> ---
> Subject: lockdep: change a held lock's class
> From: Peter Zijlstra <[email protected]>
> Date: Thu Dec 04 08:34:56 CET 2008
>
> Impact: introduce new lockdep API
>
> Allow to change a held lock's class. Basically the same as the existing
> code to change a subclass therefore reuse all that.
>
> The XFS code will be able to use this to annotate their inode locking.
>
> Signed-off-by: Peter Zijlstra <[email protected]>
> Signed-off-by: Ingo Molnar <[email protected]>
> ---
> include/linux/lockdep.h | 12 ++++++++++--
> kernel/lockdep.c | 24 +++++++++---------------
> 2 files changed, 19 insertions(+), 17 deletions(-)

I've applied it to tip/core/locking. It's just a new API that doesn't
really disturb the current code, but it would be nice to know whether
it solves the XFS annotation problems.

Ingo