From: Joel Fernandes
To: linux-kernel@vger.kernel.org
Cc: kernel-team@android.com, linux-mm@kvack.org, Joel Fernandes, Alexander Duyck, Mel Gorman, Hao Lee, Vladimir Davydov, Johannes Weiner, Joonsoo Kim, Michal Hocko, Tim Murray, Ingo Molnar, Steven Rostedt, stable@vger.kernel.org
Subject: [PATCH] tracing/ring_buffer: Try harder to allocate
Date: Mon, 10 Jul 2017 23:05:00 -0700
Message-Id: <20170711060500.17016-1-joelaf@google.com>

ftrace can fail to allocate the per-CPU ring buffer on systems with a
large number of CPUs coupled with large amounts of data sitting in the
page cache. Currently the ring buffer allocation doesn't retry in the VM
implementation even if direct reclaim made some progress but still
wasn't able to find a free page. On retrying, I see that the allocations
almost always succeed. The retry doesn't happen because __GFP_NORETRY is
used in the tracer to avoid the case where we might invoke the OOM
killer; however, if we simply drop __GFP_NORETRY, we risk destabilizing
the system if the OOM killer is triggered. To prevent this situation,
use the __GFP_RETRY_MAYFAIL flag introduced recently [1].

Tested that the following still succeeds without destabilizing a system
with 1GB of memory:
  echo 300000 > /sys/kernel/debug/tracing/buffer_size_kb

[1] https://marc.info/?l=linux-mm&m=149820805124906&w=2

Cc: Alexander Duyck
Cc: Mel Gorman
Cc: Hao Lee
Cc: Vladimir Davydov
Cc: Johannes Weiner
Cc: Joonsoo Kim
Cc: Michal Hocko
Cc: Tim Murray
Cc: Ingo Molnar
Cc: Steven Rostedt
Cc: stable@vger.kernel.org
Signed-off-by: Joel Fernandes
---
 kernel/trace/ring_buffer.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 4ae268e687fe..529cc50d7243 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1136,12 +1136,12 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
 	for (i = 0; i < nr_pages; i++) {
 		struct page *page;
 		/*
-		 * __GFP_NORETRY flag makes sure that the allocation fails
-		 * gracefully without invoking oom-killer and the system is
-		 * not destabilized.
+		 * __GFP_RETRY_MAYFAIL flag makes sure that the allocation fails
+		 * gracefully without invoking oom-killer and the system is not
+		 * destabilized.
 		 */
 		bpage = kzalloc_node(ALIGN(sizeof(*bpage), cache_line_size()),
-				    GFP_KERNEL | __GFP_NORETRY,
+				    GFP_KERNEL | __GFP_RETRY_MAYFAIL,
 				    cpu_to_node(cpu));
 		if (!bpage)
 			goto free_pages;
@@ -1149,7 +1149,7 @@ static int __rb_allocate_pages(long nr_pages, struct list_head *pages, int cpu)
 		list_add(&bpage->list, pages);
 
 		page = alloc_pages_node(cpu_to_node(cpu),
-					GFP_KERNEL | __GFP_NORETRY, 0);
+					GFP_KERNEL | __GFP_RETRY_MAYFAIL, 0);
 		if (!page)
 			goto free_pages;
 		bpage->page = page_address(page);
-- 
2.13.2.725.g09c95d1e9-goog