There was a memory leak when we ran stress tests on an Android device.
The root cause was the page_frag_cache allocator, and it was very hard
to find.

Add counters for page frag allocations and frees, in both pages and
function calls. The gap between pgfrag_alloc and pgfrag_free gives the
number of pages currently in use, and the gap between
pgfrag_alloc_calls and pgfrag_free_calls serves as a secondary
indicator. They show the trend of memory usage during a test. Without
them it is difficult to check page frag usage, so I believe we should
add them.
Signed-off-by: Kyeongdon Kim <[email protected]>
---
include/linux/vm_event_item.h | 4 ++++
mm/page_alloc.c | 9 +++++++--
mm/vmstat.c | 4 ++++
3 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
index d77bc35..75425d4 100644
--- a/include/linux/vm_event_item.h
+++ b/include/linux/vm_event_item.h
@@ -110,6 +110,10 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
SWAP_RA,
SWAP_RA_HIT,
#endif
+ PGFRAG_ALLOC,
+ PGFRAG_FREE,
+ PGFRAG_ALLOC_CALLS,
+ PGFRAG_FREE_CALLS,
NR_VM_EVENT_ITEMS
};
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index db2d25f..b3ddd76 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4322,6 +4322,7 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
free_hot_cold_page(page, false);
else
__free_pages_ok(page, order);
+ __count_vm_events(PGFRAG_FREE, 1 << order);
}
}
EXPORT_SYMBOL(__page_frag_cache_drain);
@@ -4338,7 +4339,7 @@ void *page_frag_alloc(struct page_frag_cache *nc,
page = __page_frag_cache_refill(nc, gfp_mask);
if (!page)
return NULL;
-
+ __count_vm_events(PGFRAG_ALLOC, 1 << compound_order(page));
#if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
/* if size can vary use size else just use PAGE_SIZE */
size = nc->size;
@@ -4375,6 +4376,7 @@ void *page_frag_alloc(struct page_frag_cache *nc,
nc->pagecnt_bias--;
nc->offset = offset;
+ __count_vm_event(PGFRAG_ALLOC_CALLS);
return nc->va + offset;
}
@@ -4387,8 +4389,11 @@ void page_frag_free(void *addr)
{
struct page *page = virt_to_head_page(addr);
- if (unlikely(put_page_testzero(page)))
+ if (unlikely(put_page_testzero(page))) {
+ __count_vm_events(PGFRAG_FREE, 1 << compound_order(page));
__free_pages_ok(page, compound_order(page));
+ }
+ __count_vm_event(PGFRAG_FREE_CALLS);
}
EXPORT_SYMBOL(page_frag_free);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 4bb13e7..c00fe05 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1217,6 +1217,10 @@ const char * const vmstat_text[] = {
"swap_ra",
"swap_ra_hit",
#endif
+ "pgfrag_alloc",
+ "pgfrag_free",
+ "pgfrag_alloc_calls",
+ "pgfrag_free_calls",
#endif /* CONFIG_VM_EVENTS_COUNTERS */
};
#endif /* CONFIG_PROC_FS || CONFIG_SYSFS || CONFIG_NUMA */
--
2.6.2
IMHO that's too many counters.
Per-node NR_FRAGMENT_PAGES should be enough for guessing what's going on.
Perf probes provide enough features for further debugging.
On 01.09.2017 02:37, Kyeongdon Kim wrote:
> There was a memory leak when we ran stress tests on an Android device.
> The root cause was the page_frag_cache allocator, and it was very hard
> to find.
>
> Add counters for page frag allocations and frees, in both pages and
> function calls. The gap between pgfrag_alloc and pgfrag_free gives the
> number of pages currently in use, and the gap between
> pgfrag_alloc_calls and pgfrag_free_calls serves as a secondary
> indicator. They show the trend of memory usage during a test. Without
> them it is difficult to check page frag usage, so I believe we should
> add them.
>
> Signed-off-by: Kyeongdon Kim <[email protected]>
> ---
> include/linux/vm_event_item.h | 4 ++++
> mm/page_alloc.c | 9 +++++++--
> mm/vmstat.c | 4 ++++
> 3 files changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/vm_event_item.h b/include/linux/vm_event_item.h
> index d77bc35..75425d4 100644
> --- a/include/linux/vm_event_item.h
> +++ b/include/linux/vm_event_item.h
> @@ -110,6 +110,10 @@ enum vm_event_item { PGPGIN, PGPGOUT, PSWPIN, PSWPOUT,
> SWAP_RA,
> SWAP_RA_HIT,
> #endif
> + PGFRAG_ALLOC,
> + PGFRAG_FREE,
> + PGFRAG_ALLOC_CALLS,
> + PGFRAG_FREE_CALLS,
> NR_VM_EVENT_ITEMS
> };
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index db2d25f..b3ddd76 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4322,6 +4322,7 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
> free_hot_cold_page(page, false);
> else
> __free_pages_ok(page, order);
> + __count_vm_events(PGFRAG_FREE, 1 << order);
> }
> }
> EXPORT_SYMBOL(__page_frag_cache_drain);
> @@ -4338,7 +4339,7 @@ void *page_frag_alloc(struct page_frag_cache *nc,
> page = __page_frag_cache_refill(nc, gfp_mask);
> if (!page)
> return NULL;
> -
> + __count_vm_events(PGFRAG_ALLOC, 1 << compound_order(page));
> #if (PAGE_SIZE < PAGE_FRAG_CACHE_MAX_SIZE)
> /* if size can vary use size else just use PAGE_SIZE */
> size = nc->size;
> @@ -4375,6 +4376,7 @@ void *page_frag_alloc(struct page_frag_cache *nc,
>
> nc->pagecnt_bias--;
> nc->offset = offset;
> + __count_vm_event(PGFRAG_ALLOC_CALLS);
>
> return nc->va + offset;
> }
> @@ -4387,8 +4389,11 @@ void page_frag_free(void *addr)
> {
> struct page *page = virt_to_head_page(addr);
>
> - if (unlikely(put_page_testzero(page)))
> + if (unlikely(put_page_testzero(page))) {
> + __count_vm_events(PGFRAG_FREE, 1 << compound_order(page));
> __free_pages_ok(page, compound_order(page));
> + }
> + __count_vm_event(PGFRAG_FREE_CALLS);
> }
> EXPORT_SYMBOL(page_frag_free);
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 4bb13e7..c00fe05 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1217,6 +1217,10 @@ const char * const vmstat_text[] = {
> "swap_ra",
> "swap_ra_hit",
> #endif
> + "pgfrag_alloc",
> + "pgfrag_free",
> + "pgfrag_alloc_calls",
> + "pgfrag_free_calls",
> #endif /* CONFIG_VM_EVENTS_COUNTERS */
> };
> #endif /* CONFIG_PROC_FS || CONFIG_SYSFS || CONFIG_NUMA */
>
On Fri 01-09-17 12:12:36, Konstantin Khlebnikov wrote:
> IMHO that's too many counters.
> Per-node NR_FRAGMENT_PAGES should be enough for guessing what's going on.
> Perf probes provide enough features for further debugging.
I would tend to agree. Adding a counter based on a single debugging
instance sounds like overkill to me. Counters should be pretty cheap,
but this is far too specialized an API to export to userspace.
We have other interfaces to debug memory leaks, like page_owner.
--
Michal Hocko
SUSE Labs
Thanks for your reply,
But I couldn't find "NR_FRAGMENT_PAGES" in linux-next.git .. is that a
vmstat counter? or something else?
As you know, page_frag_alloc() calls __alloc_pages_nodemask() directly,
which makes it very difficult to see memory usage in real time, even
though we have the /proc/meminfo or /proc/slabinfo information.
If there were already a way in mainline to figure out memory leakage
from page_frag_cache, I would agree with you, but I don't think we
have one now.
If there are too many counters in my patch, I'd say two values
(pgfrag_alloc and pgfrag_free) are enough to guess what will happen,
and I would remove pgfrag_alloc_calls and pgfrag_free_calls.
Thanks,
Kyeongdon Kim
On 2017-09-01 at 6:12 PM, Konstantin Khlebnikov wrote:
> IMHO that's too many counters.
> Per-node NR_FRAGMENT_PAGES should be enough for guessing what's going on.
> Perf probes provide enough features for further debugging.
>
> On 01.09.2017 02:37, Kyeongdon Kim wrote:
> > There was a memory leak when we ran stress tests on an Android device.
> > The root cause was the page_frag_cache allocator, and it was very hard
> > to find.
> >
> > Add counters for page frag allocations and frees, in both pages and
> > function calls. The gap between pgfrag_alloc and pgfrag_free gives the
> > number of pages currently in use, and the gap between
> > pgfrag_alloc_calls and pgfrag_free_calls serves as a secondary
> > indicator. They show the trend of memory usage during a test. Without
> > them it is difficult to check page frag usage, so I believe we should
> > add them.
> >
> > Signed-off-by: Kyeongdon Kim <[email protected]>
> > ---
Thanks for your reply,
We have already used other interfaces like page_owner and kmemleak to
resolve memory leak issues.
But page_owner only lets us guess; it cannot intuitively show memory
usage from page_frag_cache.
And kmemleak cannot be used (because page_frag_alloc() calls
__alloc_pages_nodemask() directly).
Additionally, some embedded Linux systems, like Android, cannot always
use kmemleak and page_owner because of the runtime performance
deterioration.
However, the root cause of this memory issue is a net device, like
wireless.
In short, wireless must always be in use on the device, but the device
cannot use those memory debugging tools.
That's why these counters are needed..
and to make the patch much cheaper I can remove pgfrag_alloc_calls and
pgfrag_free_calls.
Thanks,
Kyeongdon Kim
On 2017-09-01 at 6:21 PM, Michal Hocko wrote:
> On Fri 01-09-17 12:12:36, Konstantin Khlebnikov wrote:
> > IMHO that's too many counters.
> > Per-node NR_FRAGMENT_PAGES should be enough for guessing what's
> > going on.
> > Perf probes provide enough features for further debugging.
>
> I would tend to agree. Adding a counter based on a single debugging
> instance sounds like overkill to me. Counters should be pretty cheap,
> but this is far too specialized an API to export to userspace.
>
> We have other interfaces to debug memory leaks, like page_owner.
> --
> Michal Hocko
> SUSE Labs
On 04.09.2017 04:35, Kyeongdon Kim wrote:
> Thanks for your reply,
> But I couldn't find "NR_FRAGMENT_PAGES" in linux-next.git .. is that a vmstat counter? or something else?
>
I meant that rather than adding a bunch of vmstat counters for
operations, it might be worth adding a page counter which shows the
current amount of these pages. But this seems too low-level for
tracking; common counters for all network buffers would be more
useful, but much harder to implement.

As I can see, page owner is able to save the stacktrace where an
allocation happened, which makes debugging mostly trivial without any
counters. If it adds too much overhead, just track a random 1% of
pages; that should be enough for finding a leak.
> As you know, page_frag_alloc() calls __alloc_pages_nodemask() directly,
> which makes it very difficult to see memory usage in real time, even though we have the /proc/meminfo or /proc/slabinfo information.
> If there were already a way in mainline to figure out memory leakage from page_frag_cache, I would agree with you,
> but I don't think we have one now.
>
> If there are too many counters in my patch,
> I'd say two values (pgfrag_alloc and pgfrag_free) are enough to guess what will happen,
> and I would remove pgfrag_alloc_calls and pgfrag_free_calls.
>
> Thanks,
> Kyeongdon Kim
>
> On 2017-09-01 오후 6:12, Konstantin Khlebnikov wrote:
>> IMHO that's too many counters.
>> Per-node NR_FRAGMENT_PAGES should be enough for guessing what's going on.
>> Perf probes provide enough features for further debugging.
>>
>> On 01.09.2017 02:37, Kyeongdon Kim wrote:
>> > There was a memory leak when we ran stress tests on an Android device.
>> > The root cause was the page_frag_cache allocator, and it was very hard
>> > to find.
>> >
>> > Add counters for page frag allocations and frees, in both pages and
>> > function calls. The gap between pgfrag_alloc and pgfrag_free gives the
>> > number of pages currently in use, and the gap between
>> > pgfrag_alloc_calls and pgfrag_free_calls serves as a secondary
>> > indicator. They show the trend of memory usage during a test. Without
>> > them it is difficult to check page frag usage, so I believe we should
>> > add them.
>> >
>> > Signed-off-by: Kyeongdon Kim <[email protected]>
>> > ---
On 2017-09-04 at 5:30 PM, Konstantin Khlebnikov wrote:
> On 04.09.2017 04:35, Kyeongdon Kim wrote:
> > Thanks for your reply,
> > But I couldn't find "NR_FRAGMENT_PAGES" in linux-next.git .. is that a
> > vmstat counter? or something else?
> >
>
> I meant that rather than adding a bunch of vmstat counters for
> operations, it might be worth adding a page counter which shows the
> current amount of these pages. But this seems too low-level for
> tracking; common counters for all network buffers would be more
> useful, but much harder to implement.
>
>
Ok, thanks for the comment.
> As I can see, page owner is able to save the stacktrace where an
> allocation happened, which makes debugging mostly trivial without any
> counters. If it adds too much overhead, just track a random 1% of
> pages; that should be enough for finding a leak.
>
As I said, we have already used the page owner tool to resolve the
leak issue.
But it was extremely difficult to figure out:
too many call stacks and too many allocation orders (0 or more).
We couldn't easily get a clue even if we tracked 1% of pages..
In fact, I was writing another email to send a new patch with a debug
config.
We added a hash function that picks out addresses with call stacks via
debugfs,
so it can show only the page_frag_cache leak, with owner.
For example:
+++ b/mm/page_alloc.c
@@ -4371,7 +4371,9 @@ void *page_frag_alloc(struct page_frag_cache *nc,
nc->pagecnt_bias--;
nc->offset = offset;
+#ifdef CONFIG_PGFRAG_DEBUG
+ page_frag_debug_alloc(nc->va + offset);
+#endif
return nc->va + offset;
}
EXPORT_SYMBOL(page_frag_alloc);
@@ -4382,7 +4384,9 @@ EXPORT_SYMBOL(page_frag_alloc);
void page_frag_free(void *addr)
{
struct page *page = virt_to_head_page(addr);
+#ifdef CONFIG_PGFRAG_DEBUG
+ page_frag_debug_free(addr);
+#endif
if (unlikely(put_page_testzero(page)))
The counters I added may be too much for a Linux server or similar
systems.
However, I think other systems (like Android) may need a simple
debugging method.
So if you can accept the patch with the debug feature,
I will send it, including the counters.
But if you still think the counters aren't needed, I won't.
Anyway, I'm grateful for your feedback; it means a lot to me.
Thanks,
Kyeongdon Kim
> > As you know, page_frag_alloc() calls __alloc_pages_nodemask()
> > directly, which makes it very difficult to see memory usage in real
> > time, even though we have the /proc/meminfo or /proc/slabinfo
> > information.
> > If there were already a way in mainline to figure out memory leakage
> > from page_frag_cache, I would agree with you,
> > but I don't think we have one now.
> >
> > If there are too many counters in my patch,
> > I'd say two values (pgfrag_alloc and pgfrag_free) are enough to
> > guess what will happen,
> > and I would remove pgfrag_alloc_calls and pgfrag_free_calls.
> >
> > Thanks,
> > Kyeongdon Kim
> >
>
> >> IMHO that's too many counters.
> >> Per-node NR_FRAGMENT_PAGES should be enough for guessing what's
> >> going on.
> >> Perf probes provide enough features for further debugging.
> >>
> >> On 01.09.2017 02:37, Kyeongdon Kim wrote:
> >> > There was a memory leak when we ran stress tests on an Android
> >> > device.
> >> > The root cause was the page_frag_cache allocator, and it was very
> >> > hard to find.
> >> >
> >> > Add counters for page frag allocations and frees, in both pages
> >> > and function calls. The gap between pgfrag_alloc and pgfrag_free
> >> > gives the number of pages currently in use, and the gap between
> >> > pgfrag_alloc_calls and pgfrag_free_calls serves as a secondary
> >> > indicator. They show the trend of memory usage during a test.
> >> > Without them it is difficult to check page frag usage, so I
> >> > believe we should add them.
> >> >
> >> > Signed-off-by: Kyeongdon Kim <[email protected]>
> >> > ---