2018-06-19 21:35:13

by Shakeel Butt

Subject: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG

For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
allocated per node for a kmem_cache. Thus, slabs_node() in
__kmem_cache_empty() will always return 0. So, in such a situation, the
per-cpu slabs must be checked to determine whether a kmem_cache is empty.

Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
per-cpu slabs.

Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
Signed-off-by: Shakeel Butt <[email protected]>
Reported-by: Jason A. Donenfeld <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: <[email protected]>
---
mm/slub.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index a3b8467c14af..731c02b371ae 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3673,9 +3673,23 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)

bool __kmem_cache_empty(struct kmem_cache *s)
{
- int node;
+ int cpu, node;
struct kmem_cache_node *n;

+ /*
+ * slabs_node will always be 0 for !CONFIG_SLUB_DEBUG. So, manually
+ * check slabs for all cpus.
+ */
+ if (!IS_ENABLED(CONFIG_SLUB_DEBUG)) {
+ for_each_online_cpu(cpu) {
+ struct kmem_cache_cpu *c;
+
+ c = per_cpu_ptr(s->cpu_slab, cpu);
+ if (c->page || slub_percpu_partial(c))
+ return false;
+ }
+ }
+
for_each_kmem_cache_node(s, node, n)
if (n->nr_partial || slabs_node(s, node))
return false;
--
2.18.0.rc1.244.gcf134e6275-goog
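
For context, the reason slabs_node() always returns 0 without
CONFIG_SLUB_DEBUG is the stub in mm/slub.c. A rough sketch of the two
variants, paraphrased from trees of this era rather than quoted exactly:

#ifdef CONFIG_SLUB_DEBUG
static inline unsigned long slabs_node(struct kmem_cache *s, int node)
{
	struct kmem_cache_node *n = get_node(s, node);

	/* nr_slabs is maintained by inc_slabs_node()/dec_slabs_node() */
	return atomic_long_read(&n->nr_slabs);
}
#else
/* no counter exists without SLUB_DEBUG, so every node reports zero */
static inline unsigned long slabs_node(struct kmem_cache *s, int node)
{
	return 0;
}
#endif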



2018-06-19 21:41:40

by Shakeel Butt

Subject: Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG

On Tue, Jun 19, 2018 at 2:33 PM Shakeel Butt <[email protected]> wrote:
>
> For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> allocated per node for a kmem_cache. Thus, slabs_node() in
> __kmem_cache_empty() will always return 0. So, in such a situation, the
> per-cpu slabs must be checked to determine whether a kmem_cache is empty.
>
> Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
> not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
> per-cpu slabs.
>
> Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
> Signed-off-by: Shakeel Butt <[email protected]>
> Reported-by: Jason A. Donenfeld <[email protected]>
> Cc: Christoph Lameter <[email protected]>
> Cc: Pekka Enberg <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Joonsoo Kim <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: <[email protected]>

Forgot to Cc: Andrey Ryabinin <[email protected]>

> ---
> mm/slub.c | 16 +++++++++++++++-
> 1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index a3b8467c14af..731c02b371ae 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3673,9 +3673,23 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
>
> bool __kmem_cache_empty(struct kmem_cache *s)
> {
> - int node;
> + int cpu, node;
> struct kmem_cache_node *n;
>
> + /*
> + * slabs_node will always be 0 for !CONFIG_SLUB_DEBUG. So, manually
> + * check slabs for all cpus.
> + */
> + if (!IS_ENABLED(CONFIG_SLUB_DEBUG)) {
> + for_each_online_cpu(cpu) {
> + struct kmem_cache_cpu *c;
> +
> + c = per_cpu_ptr(s->cpu_slab, cpu);
> + if (c->page || slub_percpu_partial(c))
> + return false;
> + }
> + }
> +
> for_each_kmem_cache_node(s, node, n)
> if (n->nr_partial || slabs_node(s, node))
> return false;
> --
> 2.18.0.rc1.244.gcf134e6275-goog
>

2018-06-19 21:56:56

by Jason A. Donenfeld

Subject: Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG

On Tue, Jun 19, 2018 at 11:34 PM Shakeel Butt <[email protected]> wrote:
>
> For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> allocated per node for a kmem_cache. Thus, slabs_node() in
> __kmem_cache_empty() will always return 0. So, in such a situation, the
> per-cpu slabs must be checked to determine whether a kmem_cache is empty.
>
> Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
> not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
> per-cpu slabs.
>
> Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
> Signed-off-by: Shakeel Butt <[email protected]>
> Reported-by: Jason A. Donenfeld <[email protected]>
> Cc: Christoph Lameter <[email protected]>
> Cc: Pekka Enberg <[email protected]>
> Cc: David Rientjes <[email protected]>
> Cc: Joonsoo Kim <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: <[email protected]>
> ---
> mm/slub.c | 16 +++++++++++++++-
> 1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index a3b8467c14af..731c02b371ae 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3673,9 +3673,23 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
>
> bool __kmem_cache_empty(struct kmem_cache *s)
> {
> - int node;
> + int cpu, node;
> struct kmem_cache_node *n;
>
> + /*
> + * slabs_node will always be 0 for !CONFIG_SLUB_DEBUG. So, manually
> + * check slabs for all cpus.
> + */
> + if (!IS_ENABLED(CONFIG_SLUB_DEBUG)) {
> + for_each_online_cpu(cpu) {
> + struct kmem_cache_cpu *c;
> +
> + c = per_cpu_ptr(s->cpu_slab, cpu);
> + if (c->page || slub_percpu_partial(c))
> + return false;
> + }
> + }
> +
> for_each_kmem_cache_node(s, node, n)
> if (n->nr_partial || slabs_node(s, node))
> return false;
> --
> 2.18.0.rc1.244.gcf134e6275-goog
>

I can confirm that this fixes the test case on build.wireguard.com.

2018-06-19 22:24:30

by Andrew Morton

Subject: Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG

On Tue, 19 Jun 2018 14:33:52 -0700 Shakeel Butt <[email protected]> wrote:

> For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> allocated per node for a kmem_cache. Thus, slabs_node() in
> __kmem_cache_empty() will always return 0. So, in such a situation, the
> per-cpu slabs must be checked to determine whether a kmem_cache is empty.
>
> Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
> not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
> per-cpu slabs.

Thanks guys. I'll beef up this changelog a bit by adding

f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
causes crashes when using slub, as described at
http://lkml.kernel.org/r/CAHmME9rtoPwxUSnktxzKso14iuVCWT7BE_-_8PAC=pGw1iJnQg@mail.gmail.com

So that a) Greg knows why we're sending it to him and b) other people
who are seeing the same sorts of crashes in their various kernels will
know that this patch will probably fix them.


2018-06-20 00:51:03

by David Rientjes

Subject: Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG

On Tue, 19 Jun 2018, Shakeel Butt wrote:

> diff --git a/mm/slub.c b/mm/slub.c
> index a3b8467c14af..731c02b371ae 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3673,9 +3673,23 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
>
> bool __kmem_cache_empty(struct kmem_cache *s)
> {
> - int node;
> + int cpu, node;

Nit: wouldn't cpu be unused if CONFIG_SLUB_DEBUG is disabled?

> struct kmem_cache_node *n;
>
> + /*
> + * slabs_node will always be 0 for !CONFIG_SLUB_DEBUG. So, manually
> + * check slabs for all cpus.
> + */
> + if (!IS_ENABLED(CONFIG_SLUB_DEBUG)) {
> + for_each_online_cpu(cpu) {
> + struct kmem_cache_cpu *c;
> +
> + c = per_cpu_ptr(s->cpu_slab, cpu);
> + if (c->page || slub_percpu_partial(c))
> + return false;
> + }
> + }
> +
> for_each_kmem_cache_node(s, node, n)
> if (n->nr_partial || slabs_node(s, node))
> return false;

Wouldn't it just be better to allow {inc,dec}_slabs_node() to adjust the
nr_slabs counter instead of doing the per-cpu iteration on every shutdown?

2018-06-20 01:27:03

by Shakeel Butt

Subject: Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG

On Tue, Jun 19, 2018 at 5:49 PM David Rientjes <[email protected]> wrote:
>
> On Tue, 19 Jun 2018, Shakeel Butt wrote:
>
> > diff --git a/mm/slub.c b/mm/slub.c
> > index a3b8467c14af..731c02b371ae 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3673,9 +3673,23 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
> >
> > bool __kmem_cache_empty(struct kmem_cache *s)
> > {
> > - int node;
> > + int cpu, node;
>
> Nit: wouldn't cpu be unused if CONFIG_SLUB_DEBUG is disabled?
>

I didn't get an unused-variable warning because I used IS_ENABLED()
instead of #ifdef: the branch becomes dead code when CONFIG_SLUB_DEBUG
is off, but the compiler still parses it, so 'cpu' counts as used.
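
For example (a minimal sketch with a hypothetical CONFIG_FOO and
do_something(), just to illustrate the difference):

	int cpu;

#ifdef CONFIG_FOO
	/* if CONFIG_FOO=n, this text vanishes and 'cpu' may warn as unused */
	for_each_online_cpu(cpu)
		do_something(cpu);
#endif

	if (IS_ENABLED(CONFIG_FOO)) {
		/*
		 * if CONFIG_FOO=n this branch is dead code, but it is still
		 * parsed and type-checked, so 'cpu' counts as used.
		 */
		for_each_online_cpu(cpu)
			do_something(cpu);
	}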

> > struct kmem_cache_node *n;
> >
> > + /*
> > + * slabs_node will always be 0 for !CONFIG_SLUB_DEBUG. So, manually
> > + * check slabs for all cpus.
> > + */
> > + if (!IS_ENABLED(CONFIG_SLUB_DEBUG)) {
> > + for_each_online_cpu(cpu) {
> > + struct kmem_cache_cpu *c;
> > +
> > + c = per_cpu_ptr(s->cpu_slab, cpu);
> > + if (c->page || slub_percpu_partial(c))
> > + return false;
> > + }
> > + }
> > +
> > for_each_kmem_cache_node(s, node, n)
> > if (n->nr_partial || slabs_node(s, node))
> > return false;
>
> Wouldn't it just be better to allow {inc,dec}_slabs_node() to adjust the
> nr_slabs counter instead of doing the per-cpu iteration on every shutdown?

Yes, that is doable, as the functions {inc,dec}_slabs_node() are called
in the slow path. I can move them out of CONFIG_SLUB_DEBUG. I think the
reason 0f389ec63077 ("slub: No need for per node slab counters if
!SLUB_DEBUG") put them under CONFIG_SLUB_DEBUG is that these counters
were only read through the sysfs API, which is disabled for
!CONFIG_SLUB_DEBUG. However, we now have a use case other than the
sysfs API.

Please let me know if there is any objection to this conversion. For
large machines, I think this is the preferable approach.
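
Something like the below, i.e. keep nr_slabs in struct kmem_cache_node
unconditionally and maintain it from the slab allocation/free slow
paths. This is only a rough sketch of the direction (total_objects and
the other debug-only tracking would stay under CONFIG_SLUB_DEBUG), not
the actual patch:

static inline unsigned long slabs_node(struct kmem_cache *s, int node)
{
	return atomic_long_read(&get_node(s, node)->nr_slabs);
}

static inline void inc_slabs_node(struct kmem_cache *s, int node, int objects)
{
	struct kmem_cache_node *n = get_node(s, node);

	/* n may be NULL during early boot, before the node is set up */
	if (likely(n))
		atomic_long_inc(&n->nr_slabs);
}

static inline void dec_slabs_node(struct kmem_cache *s, int node, int objects)
{
	atomic_long_dec(&get_node(s, node)->nr_slabs);
}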

thanks,
Shakeel

2018-06-20 12:09:07

by Andrey Ryabinin

Subject: Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG



On 06/20/2018 12:33 AM, Shakeel Butt wrote:
> For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> allocated per node for a kmem_cache. Thus, slabs_node() in
> __kmem_cache_empty() will always return 0. So, in such a situation, the
> per-cpu slabs must be checked to determine whether a kmem_cache is empty.
>
> Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
> not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
> per-cpu slabs.

So what? Yes, they call flush_all() and then check if there are non-empty slabs left.
And that check doesn't work when CONFIG_SLUB_DEBUG is disabled.
How is flush_all() or per-cpu slabs even relevant here?


2018-06-20 21:38:44

by Shakeel Butt

[permalink] [raw]
Subject: Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG

On Wed, Jun 20, 2018 at 5:08 AM Andrey Ryabinin <[email protected]> wrote:
>
>
>
> On 06/20/2018 12:33 AM, Shakeel Butt wrote:
> > For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> > allocated per node for a kmem_cache. Thus, slabs_node() in
> > __kmem_cache_empty() will always return 0. So, in such a situation, the
> > per-cpu slabs must be checked to determine whether a kmem_cache is empty.
> >
> > Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
> > not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
> > per-cpu slabs.
>
> So what? Yes, they call flush_all() and then check if there are non-empty slabs left.
> And that check doesn't work when CONFIG_SLUB_DEBUG is disabled.
> How is flush_all() or per-cpu slabs even relevant here?
>

flush_all() will move all cpu slabs and per-cpu partials to the node's
partial list, and thus the later check of the node's partial list will
handle the non-empty-slabs situation. However, what I missed is the
'full slabs', which are not on any list for !CONFIG_SLUB_DEBUG. So this
patch is not the complete solution. I think David's suggestion is the
complete solution, and I will post a patch based on it.
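
For reference, the per-node bookkeeping involved, trimmed to the SLUB
fields and paraphrased from mm/slab.h of this era (a sketch, not an
exact quote). A full slab sits on neither list, and without the
debug-only nr_slabs counter nothing records that it exists:

struct kmem_cache_node {
	spinlock_t list_lock;
	unsigned long nr_partial;
	struct list_head partial;	/* partially-filled slabs only */
#ifdef CONFIG_SLUB_DEBUG
	atomic_long_t nr_slabs;		/* the counter slabs_node() reads */
	atomic_long_t total_objects;
	struct list_head full;		/* full slabs, tracked only for debug */
#endif
};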