2014-02-07 18:47:32

by Gautham R Shenoy

Subject: [PATCH] slub: Hold list_lock unconditionally before the call to add_full.

Hi,

From the lockdep annotation and the comment that existed before the
lockdep annotations were introduced,
mm/slub.c:add_full(s, n, page) expects to be called with n->list_lock
held.

However, there is a call path in deactivate_slab(), taken when

(new.inuse || n->nr_partial <= s->min_partial) &&
!(new.freelist) &&
!(kmem_cache_debug(s))

is true, which ends up calling add_full() without holding
n->list_lock.
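
Roughly, the branch in question looks like this (trimmed from 3.14-rc1;
comments and the surrounding code omitted):

	if (!new.inuse && n->nr_partial > s->min_partial)
		m = M_FREE;
	else if (new.freelist) {
		m = M_PARTIAL;
		if (!lock) {
			lock = 1;
			spin_lock(&n->list_lock);
		}
	} else {
		m = M_FULL;
		/* list_lock is only taken for debug caches here ... */
		if (kmem_cache_debug(s) && !lock) {
			lock = 1;
			spin_lock(&n->list_lock);
		}
	}

and further down, for the M_FULL case:

	} else if (m == M_FULL) {
		stat(s, DEACTIVATE_FULL);
		/* ... but add_full() is called either way */
		add_full(s, n, page);
	}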

This was discovered while onlining/offlining cpus in 3.14-rc1 due to
the lockdep annotations added by commit
c65c1877bd6826ce0d9713d76e30a7bed8e49f38.
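
IIUC that commit turned the old comment into an assertion at the top of
add_full(); roughly (paraphrased from my reading of the change, the exact
code may differ slightly):

	static void add_full(struct kmem_cache *s,
		struct kmem_cache_node *n, struct page *page)
	{
		/*
		 * The assertion sits before the SLAB_STORE_USER check,
		 * so it fires even for caches that never touch n->full.
		 */
		lockdep_assert_held(&n->list_lock);

		if (!(s->flags & SLAB_STORE_USER))
			return;

		list_add(&page->lru, &n->full);
	}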

Fix this by taking n->list_lock on that path irrespective of the state
of kmem_cache_debug(s).

Cc: Peter Zijlstra <[email protected]>
Cc: Pekka Enberg <[email protected]>
Signed-off-by: Gautham R. Shenoy <[email protected]>
---
mm/slub.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 7e3e045..1f723f7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1882,7 +1882,7 @@ redo:
 		}
 	} else {
 		m = M_FULL;
-		if (kmem_cache_debug(s) && !lock) {
+		if (!lock) {
 			lock = 1;
 			/*
 			 * This also ensures that the scanning of full
--
1.8.3.1


2014-02-07 20:46:23

by David Rientjes

Subject: Re: [PATCH] slub: Hold list_lock unconditionally before the call to add_full.

On Sat, 8 Feb 2014, Gautham R Shenoy wrote:

> Hi,
>
> From the lockdep annotation and the comment that existed before the
> lockdep annotations were introduced,
> mm/slub.c:add_full(s, n, page) expects to be called with n->list_lock
> held.
>
> However, there is a call path in deactivate_slab(), taken when
>
> (new.inuse || n->nr_partial <= s->min_partial) &&
> !(new.freelist) &&
> !(kmem_cache_debug(s))
>
> is true, which ends up calling add_full() without holding
> n->list_lock.
>
> This was discovered while onlining/offlining cpus in 3.14-rc1 due to
> the lockdep annotations added by commit
> c65c1877bd6826ce0d9713d76e30a7bed8e49f38.
>
> Fix this by taking n->list_lock on that path irrespective of the state
> of kmem_cache_debug(s).
>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Pekka Enberg <[email protected]>
> Signed-off-by: Gautham R. Shenoy <[email protected]>

No, it's not needed unless kmem_cache_debug(s) is actually set,
specifically s->flags & SLAB_STORE_USER.

You want the patch at http://marc.info/?l=linux-kernel&m=139147105027693
instead, which is already in -mm and linux-next.
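
To be clear, add_full() only touches n->full when SLAB_STORE_USER is set,
so the assertion (and the lock) only needs to cover that case. The idea is
roughly the following (a sketch of the approach, not the exact patch -- see
the link for that):

	static void add_full(struct kmem_cache *s,
		struct kmem_cache_node *n, struct page *page)
	{
		if (!(s->flags & SLAB_STORE_USER))
			return;		/* nothing to add, no list_lock required */

		lockdep_assert_held(&n->list_lock);
		list_add(&page->lru, &n->full);
	}

rather than making deactivate_slab() take the lock on every deactivation.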

2014-02-08 03:01:25

by Gautham R Shenoy

Subject: Re: [PATCH] slub: Hold list_lock unconditionally before the call to add_full.

On Fri, Feb 07, 2014 at 12:46:19PM -0800, David Rientjes wrote:
> On Sat, 8 Feb 2014, Gautham R Shenoy wrote:
>
> > Hi,
> >
> > From the lockdep annotation and the comment that existed before the
> > lockdep annotations were introduced,
> > mm/slub.c:add_full(s, n, page) expects to be called with n->list_lock
> > held.
> >
> > However, there is a call path in deactivate_slab(), taken when
> >
> > (new.inuse || n->nr_partial <= s->min_partial) &&
> > !(new.freelist) &&
> > !(kmem_cache_debug(s))
> >
> > is true, which ends up calling add_full() without holding
> > n->list_lock.
> >
> > This was discovered while onlining/offlining cpus in 3.14-rc1 due to
> > the lockdep annotations added by commit
> > c65c1877bd6826ce0d9713d76e30a7bed8e49f38.
> >
> > Fix this by taking n->list_lock on that path irrespective of the state
> > of kmem_cache_debug(s).
> >
> > Cc: Peter Zijlstra <[email protected]>
> > Cc: Pekka Enberg <[email protected]>
> > Signed-off-by: Gautham R. Shenoy <[email protected]>
>
> No, it's not needed unless kmem_cache_debug(s) is actually set,
> specifically s->flags & SLAB_STORE_USER.
>
> You want the patch at http://marc.info/?l=linux-kernel&m=139147105027693
> instead, which is already in -mm and linux-next.
>

Ah, thanks! Wasn't aware of this fix. Shall apply this one.

--
Thanks and Regards
gautham.