2021-07-28 15:55:56

by Shakeel Butt

Subject: [PATCH] slub: fix unreclaimable slab stat for bulk free

SLUB uses the page allocator for higher order allocations and updates
the unreclaimable slab stat for such allocations. At the moment, the
bulk free path in SLUB does not share code with the normal free path
for these allocations and misses the stat update. So, fix the stat
update by sharing the common code. The user visible impact of the bug
is a potentially inconsistent unreclaimable slab stat as reported
through meminfo and vmstat.

Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
Signed-off-by: Shakeel Butt <[email protected]>
---
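For context, the matching increment happens on the allocation side; roughly
(paraphrased from kmalloc_order() in mm/slab_common.c, check the tree for the
exact lines):

	/* allocating a kmalloc-large page in kmalloc_order(): */
	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
			      PAGE_SIZE << order);

	/* so every free path for such a page has to undo it: */
	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
			      -(PAGE_SIZE << order));

The bulk free path was the one missing the decrement.
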
mm/slub.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 6dad2b6fda6f..03770291aa6b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3238,6 +3238,16 @@ struct detached_freelist {
struct kmem_cache *s;
};

+static inline void free_nonslab_page(struct page *page)
+{
+ unsigned int order = compound_order(page);
+
+ VM_BUG_ON_PAGE(!PageCompound(page), page);
+ kfree_hook(page_address(page));
+ mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
+ __free_pages(page, order);
+}
+
/*
* This function progressively scans the array with free objects (with
* a limited look ahead) and extract objects belonging to the same
@@ -3274,9 +3284,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
if (!s) {
/* Handle kalloc'ed objects */
if (unlikely(!PageSlab(page))) {
- BUG_ON(!PageCompound(page));
- kfree_hook(object);
- __free_pages(page, compound_order(page));
+ free_nonslab_page(page);
p[size] = NULL; /* mark object processed */
return size;
}
@@ -4252,13 +4260,7 @@ void kfree(const void *x)

page = virt_to_head_page(x);
if (unlikely(!PageSlab(page))) {
- unsigned int order = compound_order(page);
-
- BUG_ON(!PageCompound(page));
- kfree_hook(object);
- mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
- -(PAGE_SIZE << order));
- __free_pages(page, order);
+ free_nonslab_page(page);
return;
}
slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
--
2.32.0.432.gabb21c7263-goog



2021-07-28 16:48:50

by Michal Hocko

Subject: Re: [PATCH] slub: fix unreclaimable slab stat for bulk free

On Wed 28-07-21 08:53:54, Shakeel Butt wrote:
> SLUB uses page allocator for higher order allocations and update
> unreclaimable slab stat for such allocations. At the moment, the bulk
> free for SLUB does not share code with normal free code path for these
> type of allocations and have missed the stat update. So, fix the stat
> update by common code. The user visible impact of the bug is the
> potential of inconsistent unreclaimable slab stat visible through
> meminfo and vmstat.
>
> Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
> Signed-off-by: Shakeel Butt <[email protected]>

LGTM
Acked-by: Michal Hocko <[email protected]>

> ---
> mm/slub.c | 22 ++++++++++++----------
> 1 file changed, 12 insertions(+), 10 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 6dad2b6fda6f..03770291aa6b 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3238,6 +3238,16 @@ struct detached_freelist {
> struct kmem_cache *s;
> };
>
> +static inline void free_nonslab_page(struct page *page)
> +{
> + unsigned int order = compound_order(page);
> +
> + VM_BUG_ON_PAGE(!PageCompound(page), page);
> + kfree_hook(page_address(page));
> + mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
> + __free_pages(page, order);
> +}
> +
> /*
> * This function progressively scans the array with free objects (with
> * a limited look ahead) and extract objects belonging to the same
> @@ -3274,9 +3284,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
> if (!s) {
> /* Handle kalloc'ed objects */
> if (unlikely(!PageSlab(page))) {
> - BUG_ON(!PageCompound(page));
> - kfree_hook(object);
> - __free_pages(page, compound_order(page));
> + free_nonslab_page(page);
> p[size] = NULL; /* mark object processed */
> return size;
> }
> @@ -4252,13 +4260,7 @@ void kfree(const void *x)
>
> page = virt_to_head_page(x);
> if (unlikely(!PageSlab(page))) {
> - unsigned int order = compound_order(page);
> -
> - BUG_ON(!PageCompound(page));
> - kfree_hook(object);
> - mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> - -(PAGE_SIZE << order));
> - __free_pages(page, order);
> + free_nonslab_page(page);
> return;
> }
> slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
> --
> 2.32.0.432.gabb21c7263-goog

--
Michal Hocko
SUSE Labs

2021-07-28 23:33:21

by Roman Gushchin

Subject: Re: [PATCH] slub: fix unreclaimable slab stat for bulk free

On Wed, Jul 28, 2021 at 08:53:54AM -0700, Shakeel Butt wrote:
> SLUB uses page allocator for higher order allocations and update
> unreclaimable slab stat for such allocations. At the moment, the bulk
> free for SLUB does not share code with normal free code path for these
> type of allocations and have missed the stat update. So, fix the stat
> update by common code. The user visible impact of the bug is the
> potential of inconsistent unreclaimable slab stat visible through
> meminfo and vmstat.
>
> Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
> Signed-off-by: Shakeel Butt <[email protected]>

Acked-by: Roman Gushchin <[email protected]>

Thanks!

2021-07-29 05:43:33

by Muchun Song

Subject: Re: [PATCH] slub: fix unreclaimable slab stat for bulk free

On Wed, Jul 28, 2021 at 11:54 PM Shakeel Butt <[email protected]> wrote:
>
> SLUB uses page allocator for higher order allocations and update
> unreclaimable slab stat for such allocations. At the moment, the bulk
> free for SLUB does not share code with normal free code path for these
> type of allocations and have missed the stat update. So, fix the stat
> update by common code. The user visible impact of the bug is the
> potential of inconsistent unreclaimable slab stat visible through
> meminfo and vmstat.
>
> Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
> Signed-off-by: Shakeel Butt <[email protected]>

Reviewed-by: Muchun Song <[email protected]>

2021-07-29 06:54:06

by Kefeng Wang

Subject: Re: [PATCH] slub: fix unreclaimable slab stat for bulk free


On 2021/7/28 23:53, Shakeel Butt wrote:
> SLUB uses page allocator for higher order allocations and update
> unreclaimable slab stat for such allocations. At the moment, the bulk
> free for SLUB does not share code with normal free code path for these
> type of allocations and have missed the stat update. So, fix the stat
> update by common code. The user visible impact of the bug is the
> potential of inconsistent unreclaimable slab stat visible through
> meminfo and vmstat.
>
> Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
> Signed-off-by: Shakeel Butt <[email protected]>
> ---
> mm/slub.c | 22 ++++++++++++----------
> 1 file changed, 12 insertions(+), 10 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 6dad2b6fda6f..03770291aa6b 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3238,6 +3238,16 @@ struct detached_freelist {
> struct kmem_cache *s;
> };
>
> +static inline void free_nonslab_page(struct page *page)
> +{
> + unsigned int order = compound_order(page);
> +
> + VM_BUG_ON_PAGE(!PageCompound(page), page);

Could we add a WARN_ON here? Otherwise we get nothing when
CONFIG_DEBUG_VM is disabled.
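
Something along these lines (untested, only to illustrate the idea) would
still print a one-off backtrace when CONFIG_DEBUG_VM is disabled:

	static inline void free_nonslab_page(struct page *page)
	{
		unsigned int order = compound_order(page);

		/* report once even without CONFIG_DEBUG_VM, then keep going */
		WARN_ON_ONCE(!PageCompound(page));

		kfree_hook(page_address(page));
		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
				      -(PAGE_SIZE << order));
		__free_pages(page, order);
	}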


> + kfree_hook(page_address(page));
> + mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
> + __free_pages(page, order);
> +}
> +
> /*
> * This function progressively scans the array with free objects (with
> * a limited look ahead) and extract objects belonging to the same
> @@ -3274,9 +3284,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
> if (!s) {
> /* Handle kalloc'ed objects */
> if (unlikely(!PageSlab(page))) {
> - BUG_ON(!PageCompound(page));
> - kfree_hook(object);
> - __free_pages(page, compound_order(page));
> + free_nonslab_page(page);
> p[size] = NULL; /* mark object processed */
> return size;
> }
> @@ -4252,13 +4260,7 @@ void kfree(const void *x)
>
> page = virt_to_head_page(x);
> if (unlikely(!PageSlab(page))) {
> - unsigned int order = compound_order(page);
> -
> - BUG_ON(!PageCompound(page));
> - kfree_hook(object);
> - mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> - -(PAGE_SIZE << order));
> - __free_pages(page, order);
> + free_nonslab_page(page);
> return;
> }
> slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);

2021-07-29 14:13:27

by Shakeel Butt

Subject: Re: [PATCH] slub: fix unreclaimable slab stat for bulk free

On Wed, Jul 28, 2021 at 11:52 PM Kefeng Wang <[email protected]> wrote:
>
>
> On 2021/7/28 23:53, Shakeel Butt wrote:
> > SLUB uses page allocator for higher order allocations and update
> > unreclaimable slab stat for such allocations. At the moment, the bulk
> > free for SLUB does not share code with normal free code path for these
> > type of allocations and have missed the stat update. So, fix the stat
> > update by common code. The user visible impact of the bug is the
> > potential of inconsistent unreclaimable slab stat visible through
> > meminfo and vmstat.
> >
> > Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
> > Signed-off-by: Shakeel Butt <[email protected]>
> > ---
> > mm/slub.c | 22 ++++++++++++----------
> > 1 file changed, 12 insertions(+), 10 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 6dad2b6fda6f..03770291aa6b 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3238,6 +3238,16 @@ struct detached_freelist {
> > struct kmem_cache *s;
> > };
> >
> > +static inline void free_nonslab_page(struct page *page)
> > +{
> > + unsigned int order = compound_order(page);
> > +
> > + VM_BUG_ON_PAGE(!PageCompound(page), page);
>
> Could we add WARN_ON here, or we got nothing when CONFIG_DEBUG_VM is
> disabled.

I don't have a strong opinion on this. Please send a patch with
reasoning if you want WARN_ON_ONCE here.

2021-08-03 14:26:00

by Kefeng Wang

Subject: Re: [PATCH] slub: fix unreclaimable slab stat for bulk free


On 2021/7/29 22:03, Shakeel Butt wrote:
> On Wed, Jul 28, 2021 at 11:52 PM Kefeng Wang <[email protected]> wrote:
>>
>> On 2021/7/28 23:53, Shakeel Butt wrote:
>>> SLUB uses page allocator for higher order allocations and update
>>> unreclaimable slab stat for such allocations. At the moment, the bulk
>>> free for SLUB does not share code with normal free code path for these
>>> type of allocations and have missed the stat update. So, fix the stat
>>> update by common code. The user visible impact of the bug is the
>>> potential of inconsistent unreclaimable slab stat visible through
>>> meminfo and vmstat.
>>>
>>> Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
>>> Signed-off-by: Shakeel Butt <[email protected]>
>>> ---
>>> mm/slub.c | 22 ++++++++++++----------
>>> 1 file changed, 12 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index 6dad2b6fda6f..03770291aa6b 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -3238,6 +3238,16 @@ struct detached_freelist {
>>> struct kmem_cache *s;
>>> };
>>>
>>> +static inline void free_nonslab_page(struct page *page)
>>> +{
>>> + unsigned int order = compound_order(page);
>>> +
>>> + VM_BUG_ON_PAGE(!PageCompound(page), page);
>> Could we add WARN_ON here, or we got nothing when CONFIG_DEBUG_VM is
>> disabled.
> I don't have a strong opinion on this. Please send a patch with
> reasoning if you want WARN_ON_ONCE here.

Ok, we have hit the BUG_ON(!PageCompound(page)) in kfree() twice on an
lts 4.4 kernel and are still debugging it.

It is difficult to analyse because there is no vmcore, and it cannot be
reproduced.

A WARN_ON() here would at least help us notice the issue.

Also, is there any experience or known fix/way to debug this kind of
issue? Memory corruption?

Any suggestion will be appreciated, thanks.




2021-08-03 14:30:15

by Vlastimil Babka

Subject: Re: [PATCH] slub: fix unreclaimable slab stat for bulk free

On 8/3/21 4:24 PM, Kefeng Wang wrote:
>
> On 2021/7/29 22:03, Shakeel Butt wrote:
>> On Wed, Jul 28, 2021 at 11:52 PM Kefeng Wang <[email protected]> wrote:
>>>
>>> On 2021/7/28 23:53, Shakeel Butt wrote:
>> I don't have a strong opinion on this. Please send a patch with
>> reasoning if you want WARN_ON_ONCE here.
>
> Ok, we met a BUG_ON(!PageCompound(page)) in kfree() twice in lts4.4, we are
> still debugging it.
>
> It's different to analyses due to no vmcore, and can't be reproduced.
>
> WARN_ON() here could help us to notice the issue.
>
> Also is there any experience or known fix/way to debug this kinds of issue?
> memory corruption?

This would typically be a use-after-free/double-free - a problem of the slab
user, not slab itself.

> Any suggestion will be appreciated, thanks.

debug_pagealloc could help to catch a use-after-free earlier.
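
Typically that means something like the following (option names can differ
slightly between kernel versions, so treat this as a sketch):

	CONFIG_DEBUG_PAGEALLOC=y
	# and on the kernel command line:
	debug_pagealloc=on

With that, freed pages are unmapped from the kernel page tables, so a stale
access faults at the offending instruction instead of silently corrupting
whatever reuses the page.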



2021-08-03 14:48:29

by Kefeng Wang

Subject: Re: [PATCH] slub: fix unreclaimable slab stat for bulk free


On 2021/8/3 22:29, Vlastimil Babka wrote:
> On 8/3/21 4:24 PM, Kefeng Wang wrote:
>> On 2021/7/29 22:03, Shakeel Butt wrote:
>>> On Wed, Jul 28, 2021 at 11:52 PM Kefeng Wang <[email protected]> wrote:
>>>> On 2021/7/28 23:53, Shakeel Butt wrote:
>>> I don't have a strong opinion on this. Please send a patch with
>>> reasoning if you want WARN_ON_ONCE here.
>> Ok, we met a BUG_ON(!PageCompound(page)) in kfree() twice in lts4.4, we are
>> still debugging it.
>>
>> It's different to analyses due to no vmcore, and can't be reproduced.
>>
>> WARN_ON() here could help us to notice the issue.
>>
>> Also is there any experience or known fix/way to debug this kinds of issue?
>> memory corruption?
> This would typically be a use-after-free/double-free - a problem of the slab
> user, not slab itself.

We enabled KASAN to check whether there is any UAF/double free, but no
related log so far.

>
>> Any suggestion will be appreciated, thanks.
> debug_pagealloc could help to catch a use-after-free earlier

OK, will try this, thanks.
