2017-07-11 06:05:12

by Joel Fernandes

Subject: [PATCH] tracing/ring_buffer: Try harder to allocate

ftrace can fail to allocate the per-CPU ring buffer on systems with a
large number of CPUs when large amounts of memory are held in the page
cache. Currently the ring buffer allocation doesn't retry in the VM
implementation even if direct reclaim made some progress but still
wasn't able to find a free page. On retrying, I see that the
allocations almost always succeed. The retry doesn't happen because
__GFP_NORETRY is used in the tracer to prevent the case where we might
OOM; however, if we drop __GFP_NORETRY, we risk destabilizing the
system if the OOM killer is triggered. To prevent this situation, use
the __GFP_RETRY_MAYFAIL flag introduced recently [1].

Tested that the following still succeeds without destabilizing a system
with 1GB of memory:
echo 300000 > /sys/kernel/debug/tracing/buffer_size_kb

[1] https://marc.info/?l=linux-mm&m=149820805124906&w=2
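
For context, a minimal sketch of the caller-visible difference between
the flags; this is illustrative only and not part of the patch, and the
helper name is hypothetical:

#include <linux/cache.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/topology.h>

/*
 * GFP_KERNEL alone may loop and can invoke the OOM killer under memory
 * pressure. __GFP_NORETRY gives up after a single reclaim attempt,
 * even when reclaim is making progress. __GFP_RETRY_MAYFAIL keeps
 * retrying while reclaim makes progress, but returns NULL rather than
 * triggering the OOM killer, so the caller must handle failure.
 */
static void *rb_try_alloc_node(size_t size, int cpu)
{
	return kzalloc_node(ALIGN(size, cache_line_size()),
			    GFP_KERNEL | __GFP_RETRY_MAYFAIL,
			    cpu_to_node(cpu));
}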

Cc: Alexander Duyck <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hao Lee <[email protected]>
Cc: Vladimir Davydov <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Tim Murray <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: [email protected]
Signed-off-by: Joel Fernandes <[email protected]>
---
kernel/trace/ring_buffer.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 4ae268e687fe..529cc50d7243 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1136,12 +1136,12 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
 	for (i = 0; i < nr_pages; i++) {
 		struct page *page;
 		/*
-		 * __GFP_NORETRY flag makes sure that the allocation fails
-		 * gracefully without invoking oom-killer and the system is
-		 * not destabilized.
+		 * __GFP_RETRY_MAYFAIL flag makes sure that the allocation fails
+		 * gracefully without invoking oom-killer and the system is not
+		 * destabilized.
 		 */
 		bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
-				    GFP_KERNEL | __GFP_NORETRY,
+				    GFP_KERNEL | __GFP_RETRY_MAYFAIL,
 				    cpu_to_node(cpu));
 		if (!bpage)
 			goto free_pages;
@@ -1149,7 +1149,7 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
 		list_add(&bpage->list, pages);
 
 		page = alloc_pages_node(cpu_to_node(cpu),
-					GFP_KERNEL | __GFP_NORETRY, 0);
+					GFP_KERNEL | __GFP_RETRY_MAYFAIL, 0);
 		if (!page)
 			goto free_pages;
 		bpage->page = page_address(page);
--
2.13.2.725.g09c95d1e9-goog


2017-07-11 06:12:53

by Michal Hocko

Subject: Re: [PATCH] tracing/ring_buffer: Try harder to allocate

On Mon 10-07-17 23:05:00, Joel Fernandes wrote:
> ftrace can fail to allocate the per-CPU ring buffer on systems with a
> large number of CPUs when large amounts of memory are held in the page
> cache. Currently the ring buffer allocation doesn't retry in the VM
> implementation even if direct reclaim made some progress but still
> wasn't able to find a free page. On retrying, I see that the
> allocations almost always succeed. The retry doesn't happen because
> __GFP_NORETRY is used in the tracer to prevent the case where we might
> OOM; however, if we drop __GFP_NORETRY, we risk destabilizing the
> system if the OOM killer is triggered. To prevent this situation, use
> the __GFP_RETRY_MAYFAIL flag introduced recently [1].
>
> Tested that the following still succeeds without destabilizing a system
> with 1GB of memory:
> echo 300000 > /sys/kernel/debug/tracing/buffer_size_kb
>
> [1] https://marc.info/?l=linux-mm&m=149820805124906&w=2

Yes, this is the correct usage of the new flag.
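
A minimal sketch of why this is the correct pattern (hypothetical
helper, simplified from the patch below; kernel context assumed):

static int rb_alloc_one(struct list_head *pages, int cpu)
{
	struct buffer_page *bpage;

	bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
			     GFP_KERNEL | __GFP_RETRY_MAYFAIL,
			     cpu_to_node(cpu));
	if (!bpage)
		return -ENOMEM;	/* caller frees what it already allocated */
	list_add(&bpage->list, pages);
	return 0;
}

__GFP_RETRY_MAYFAIL is only safe when every call site tolerates a NULL
return, which __rb_allocate_pages() does via its free_pages unwind.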

> Cc: Alexander Duyck <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Hao Lee <[email protected]>
> Cc: Vladimir Davydov <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Joonsoo Kim <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Tim Murray <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Steven Rostedt <[email protected]>
> Cc: [email protected]

I do not think the stable tag is appropriate. The new flag hasn't been
merged yet, and it is not stable material.

> Signed-off-by: Joel Fernandes <[email protected]>

Feel free to add
Acked-by: Michal Hocko <[email protected]>

> ---
> kernel/trace/ring_buffer.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 4ae268e687fe..529cc50d7243 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -1136,12 +1136,12 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
>  	for (i = 0; i < nr_pages; i++) {
>  		struct page *page;
>  		/*
> -		 * __GFP_NORETRY flag makes sure that the allocation fails
> -		 * gracefully without invoking oom-killer and the system is
> -		 * not destabilized.
> +		 * __GFP_RETRY_MAYFAIL flag makes sure that the allocation fails
> +		 * gracefully without invoking oom-killer and the system is not
> +		 * destabilized.
>  		 */
>  		bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
> -				    GFP_KERNEL | __GFP_NORETRY,
> +				    GFP_KERNEL | __GFP_RETRY_MAYFAIL,
>  				    cpu_to_node(cpu));
>  		if (!bpage)
>  			goto free_pages;
> @@ -1149,7 +1149,7 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
>  		list_add(&bpage->list, pages);
> 
>  		page = alloc_pages_node(cpu_to_node(cpu),
> -					GFP_KERNEL | __GFP_NORETRY, 0);
> +					GFP_KERNEL | __GFP_RETRY_MAYFAIL, 0);
>  		if (!page)
>  			goto free_pages;
>  		bpage->page = page_address(page);
> --
> 2.13.2.725.g09c95d1e9-goog

--
Michal Hocko
SUSE Labs

2017-07-11 06:17:30

by Vlastimil Babka

Subject: Re: [PATCH] tracing/ring_buffer: Try harder to allocate

On 07/11/2017 08:05 AM, Joel Fernandes wrote:
> ftrace can fail to allocate the per-CPU ring buffer on systems with a
> large number of CPUs when large amounts of memory are held in the page
> cache. Currently the ring buffer allocation doesn't retry in the VM
> implementation even if direct reclaim made some progress but still
> wasn't able to find a free page. On retrying, I see that the
> allocations almost always succeed. The retry doesn't happen because
> __GFP_NORETRY is used in the tracer to prevent the case where we might
> OOM; however, if we drop __GFP_NORETRY, we risk destabilizing the
> system if the OOM killer is triggered. To prevent this situation, use
> the __GFP_RETRY_MAYFAIL flag introduced recently [1].
>
> Tested that the following still succeeds without destabilizing a system
> with 1GB of memory:
> echo 300000 > /sys/kernel/debug/tracing/buffer_size_kb
>
> [1] https://marc.info/?l=linux-mm&m=149820805124906&w=2
>
> Cc: Alexander Duyck <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Hao Lee <[email protected]>
> Cc: Vladimir Davydov <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Joonsoo Kim <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Tim Murray <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Steven Rostedt <[email protected]>
> Cc: [email protected]

Not stable, as Michal mentioned.

Acked-by: Vlastimil Babka <[email protected]>

> Signed-off-by: Joel Fernandes <[email protected]>
> ---
> kernel/trace/ring_buffer.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
> index 4ae268e687fe..529cc50d7243 100644
> --- a/kernel/trace/ring_buffer.c
> +++ b/kernel/trace/ring_buffer.c
> @@ -1136,12 +1136,12 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
>  	for (i = 0; i < nr_pages; i++) {
>  		struct page *page;
>  		/*
> -		 * __GFP_NORETRY flag makes sure that the allocation fails
> -		 * gracefully without invoking oom-killer and the system is
> -		 * not destabilized.
> +		 * __GFP_RETRY_MAYFAIL flag makes sure that the allocation fails
> +		 * gracefully without invoking oom-killer and the system is not
> +		 * destabilized.
>  		 */
>  		bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
> -				    GFP_KERNEL | __GFP_NORETRY,
> +				    GFP_KERNEL | __GFP_RETRY_MAYFAIL,
>  				    cpu_to_node(cpu));
>  		if (!bpage)
>  			goto free_pages;
> @@ -1149,7 +1149,7 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
>  		list_add(&bpage->list, pages);
> 
>  		page = alloc_pages_node(cpu_to_node(cpu),
> -					GFP_KERNEL | __GFP_NORETRY, 0);
> +					GFP_KERNEL | __GFP_RETRY_MAYFAIL, 0);
>  		if (!page)
>  			goto free_pages;
>  		bpage->page = page_address(page);
>

2017-07-11 17:22:28

by Johannes Weiner

Subject: Re: [PATCH] tracing/ring_buffer: Try harder to allocate

On Mon, Jul 10, 2017 at 11:05:00PM -0700, Joel Fernandes wrote:
> ftrace can fail to allocate the per-CPU ring buffer on systems with a
> large number of CPUs when large amounts of memory are held in the page
> cache. Currently the ring buffer allocation doesn't retry in the VM
> implementation even if direct reclaim made some progress but still
> wasn't able to find a free page. On retrying, I see that the
> allocations almost always succeed. The retry doesn't happen because
> __GFP_NORETRY is used in the tracer to prevent the case where we might
> OOM; however, if we drop __GFP_NORETRY, we risk destabilizing the
> system if the OOM killer is triggered. To prevent this situation, use
> the __GFP_RETRY_MAYFAIL flag introduced recently [1].
>
> Tested that the following still succeeds without destabilizing a system
> with 1GB of memory:
> echo 300000 > /sys/kernel/debug/tracing/buffer_size_kb
>
> [1] https://marc.info/?l=linux-mm&m=149820805124906&w=2
>
> Cc: Alexander Duyck <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Hao Lee <[email protected]>
> Cc: Vladimir Davydov <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Joonsoo Kim <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Tim Murray <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Steven Rostedt <[email protected]>
> Cc: [email protected]
> Signed-off-by: Joel Fernandes <[email protected]>

Acked-by: Johannes Weiner <[email protected]>