From: Tom Zanussi
To: rostedt@goodmis.org
Cc: tglx@linutronix.de, mhiramat@kernel.org, namhyung@kernel.org,
    vedang.patel@intel.com, bigeasy@linutronix.de, joel.opensrc@gmail.com,
    joelaf@google.com, mathieu.desnoyers@efficios.com, baohong.liu@intel.com,
    rajvi.jingar@intel.com, julia@ni.com, fengguang.wu@intel.com,
    linux-kernel@vger.kernel.org, linux-rt-users@vger.kernel.org,
    Tom Zanussi
Subject: [PATCH v7 35/37] tracing: Increase trace_recursive_lock() limit for synthetic events
Date: Wed, 6 Dec 2017 16:38:16 -0600
Message-Id: <486b1eeea79f9ac91fad2818eec9a0e87979babd.1512593081.git.tom.zanussi@linux.intel.com>
X-Mailer: git-send-email 1.9.3

Synthetic event generation needs to happen while the current event is
still in progress, so add 1 to the trace_recursive_lock() recursion
limit to account for that.  Because we also want to allow for the
possibility of a synthetic event being generated from another
synthetic event, add one more increment for that case as well.

Signed-off-by: Tom Zanussi
---
 kernel/trace/ring_buffer.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 763b3fb..8c862ea 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -2589,16 +2589,16 @@ static void rb_commit(struct ring_buffer_per_cpu *cpu_buffer,
  *  IRQ context
  *  NMI context
  *
- * If for some reason the ring buffer starts to recurse, we
- * only allow that to happen at most 4 times (one for each
- * context). If it happens 5 times, then we consider this a
- * recusive loop and do not let it go further.
+ * If for some reason the ring buffer starts to recurse, we only allow
+ * that to happen at most 6 times (one for each context, plus possibly
+ * two levels of synthetic event generation). If it happens 7 times,
+ * then we consider this a recursive loop and do not let it go further.
  */
 static __always_inline int
 trace_recursive_lock(struct ring_buffer_per_cpu *cpu_buffer)
 {
-	if (cpu_buffer->current_context >= 4)
+	if (cpu_buffer->current_context >= 6)
 		return 1;
 
 	cpu_buffer->current_context++;
--
1.9.3
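
For illustration only, and not part of the patch itself: below is a minimal,
stand-alone C sketch of the recursion guard being relaxed above. The demo_*
names, the TRACE_RECURSION_LIMIT macro, and the user-space main() harness are
invented for the example; only the ">= 6" comparison and the paired
increment/decrement mirror trace_recursive_lock()/unlock() in the diff.

#include <stdio.h>

/* 4 trace contexts (normal, softirq, irq, NMI) + 2 synthetic-event levels. */
#define TRACE_RECURSION_LIMIT	6

struct demo_cpu_buffer {
	int current_context;		/* current nesting depth */
};

/* Return 1 (reject) if taking another level would exceed the limit. */
static int demo_recursive_lock(struct demo_cpu_buffer *cpu_buffer)
{
	if (cpu_buffer->current_context >= TRACE_RECURSION_LIMIT)
		return 1;

	cpu_buffer->current_context++;
	return 0;
}

static void demo_recursive_unlock(struct demo_cpu_buffer *cpu_buffer)
{
	cpu_buffer->current_context--;
}

int main(void)
{
	struct demo_cpu_buffer buf = { .current_context = 0 };
	int depth;

	/* Levels 1-6 are allowed; the 7th is rejected, mirroring ">= 6". */
	for (depth = 1; depth <= 7; depth++) {
		if (demo_recursive_lock(&buf))
			printf("level %d: rejected (recursion limit)\n", depth);
		else
			printf("level %d: allowed\n", depth);
	}

	/* Unwind the six levels that were successfully taken. */
	while (buf.current_context > 0)
		demo_recursive_unlock(&buf);

	return 0;
}

With the old limit of 4 the same harness would start rejecting at level 5;
raising the limit to 6 leaves headroom for a synthetic event, and for a
synthetic event generated from that synthetic event, while the original
event is still being committed.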