Date: Wed, 28 Nov 2018 23:24:30 -0500
From: Steven Rostedt
To: Joe Lawrence
Cc: linux-kernel@vger.kernel.org, Linus Torvalds, Ingo Molnar,
 Andrew Morton, Thomas Gleixner, Peter Zijlstra, linux-arch@vger.kernel.org,
 Joel Fernandes, Masami Hiramatsu, Josh Poimboeuf, Andy Lutomirski,
 Frederic Weisbecker
Subject: Re: [for-next][PATCH 00/18] function_graph: Add separate depth counter to prevent trace corruption
Message-ID: <20181128232430.425f8b12@vmware.local.home>
In-Reply-To: <20181128222936.74da7f54@vmware.local.home>
References: <20181122002801.501220343@goodmis.org>
 <20181128203931.ym2rkya7a6agzzy3@redhat.com>
 <20181128160021.09acd1af@gandalf.local.home>
 <20181128222936.74da7f54@vmware.local.home>
X-Mailer: Claws Mail 3.15.1 (GTK+ 2.24.32; x86_64-pc-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 28 Nov 2018 22:29:36 -0500
Steven Rostedt wrote:

> Does this patch fix it for you?

Take 2. I realized that the reason the interrupts were being traced is that an interrupt is more likely to come in when the depth is already 0. Setting the depth to 1 then caused the interrupt to think it was already in a function that wants to be traced (when that had not actually been decided yet).

The fix uses a bit that gets set when we want to trace (and we keep tracing until that bit is cleared). That bit gets cleared in the return function when the depth is zero. But if we are tracing something in an interrupt that happened to preempt the start of a function that had already set the depth to 0, then we need to clear the bit at a depth of 1, not 0 (and at 2 if we want to trace a function that interrupted a softirq that interrupted the start of a normal function, and even at 3 if we want to trace an NMI function in the very unlikely case that it interrupted the start of an interrupt function, which interrupted the start of a softirq function, which interrupted the start of a normal-context function!). If that happens, we would continue to trace functions when we do not want to.

To solve this, I record the depth (0-3) when the bit is set, and clear the bit when the return function is at that same depth (0-3).

-- Steve

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 3b8c0e24ab30..447bd96ee658 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -512,12 +512,44 @@ enum {
 	 * can only be modified by current, we can reuse trace_recursion.
 	 */
 	TRACE_IRQ_BIT,
+
+	/* Set if the function is in the set_graph_function file */
+	TRACE_GRAPH_BIT,
+
+	/*
+	 * In the very unlikely case that an interrupt came in
+	 * at a start of graph tracing, and we want to trace
+	 * the function in that interrupt, the depth can be greater
+	 * than zero, because of the preempted start of a previous
+	 * trace. In an even more unlikely case, depth could be 2
+	 * if a softirq interrupted the start of graph tracing,
+	 * followed by an interrupt preempting a start of graph
+	 * tracing in the softirq, and depth can even be 3
+	 * if an NMI came in at the start of an interrupt function
+	 * that preempted a softirq start of a function that
+	 * preempted normal context!!!! Luckily, it can't be
+	 * greater than 3, so the next two bits are a mask
+	 * of what the depth is when we set TRACE_GRAPH_BIT
+	 */
+
+	TRACE_GRAPH_DEPTH_START_BIT,
+	TRACE_GRAPH_DEPTH_END_BIT,
 };
 
 #define trace_recursion_set(bit)	do { (current)->trace_recursion |= (1<<(bit)); } while (0)
 #define trace_recursion_clear(bit)	do { (current)->trace_recursion &= ~(1<<(bit)); } while (0)
 #define trace_recursion_test(bit)	((current)->trace_recursion & (1<<(bit)))
 
+#define trace_recursion_depth() \
+	(((current)->trace_recursion >> TRACE_GRAPH_DEPTH_START_BIT) & 3)
+#define trace_recursion_set_depth(depth) \
+	do {								\
+		current->trace_recursion &=				\
+			~(3 << TRACE_GRAPH_DEPTH_START_BIT);		\
+		current->trace_recursion |=				\
+			((depth) & 3) << TRACE_GRAPH_DEPTH_START_BIT;	\
+	} while (0)
+
 #define TRACE_CONTEXT_BITS	4
 
 #define TRACE_FTRACE_START	TRACE_FTRACE_BIT
@@ -843,8 +875,9 @@ extern void __trace_graph_return(struct trace_array *tr,
 extern struct ftrace_hash *ftrace_graph_hash;
 extern struct ftrace_hash *ftrace_graph_notrace_hash;
 
-static inline int ftrace_graph_addr(unsigned long addr)
+static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
 {
+	unsigned long addr = trace->func;
 	int ret = 0;
 
 	preempt_disable_notrace();
@@ -855,6 +888,14 @@ static inline int ftrace_graph_addr(unsigned long addr)
 	}
 
 	if (ftrace_lookup_ip(ftrace_graph_hash, addr)) {
+
+		/*
+		 * This needs to be cleared on the return functions
+		 * when the depth is zero.
+		 */
+		trace_recursion_set(TRACE_GRAPH_BIT);
+		trace_recursion_set_depth(trace->depth);
+
 		/*
 		 * If no irqs are to be traced, but a set_graph_function
 		 * is set, and called by an interrupt handler, we still
@@ -872,6 +913,13 @@ static inline int ftrace_graph_addr(unsigned long addr)
 	return ret;
 }
 
+static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
+{
+	if (trace_recursion_test(TRACE_GRAPH_BIT) &&
+	    trace->depth == trace_recursion_depth())
+		trace_recursion_clear(TRACE_GRAPH_BIT);
+}
+
 static inline int ftrace_graph_notrace_addr(unsigned long addr)
 {
 	int ret = 0;
@@ -885,7 +933,7 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
 	return ret;
 }
 #else
-static inline int ftrace_graph_addr(unsigned long addr)
+static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace)
 {
 	return 1;
 }
@@ -894,6 +942,8 @@ static inline int ftrace_graph_notrace_addr(unsigned long addr)
 {
 	return 0;
 }
+static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace)
+{ }
 #endif /* CONFIG_DYNAMIC_FTRACE */
 
 extern unsigned int fgraph_max_depth;
@@ -901,7 +951,8 @@ extern unsigned int fgraph_max_depth;
 static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace)
 {
 	/* trace it when it is-nested-in or is a function enabled. */
-	return !(trace->depth || ftrace_graph_addr(trace->func)) ||
+	return !(trace_recursion_test(TRACE_GRAPH_BIT) ||
+		 ftrace_graph_addr(trace)) ||
 		(trace->depth < 0) ||
 		(fgraph_max_depth && trace->depth >= fgraph_max_depth);
 }
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 2561460d7baf..086af4f5c3e8 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -509,6 +509,8 @@ void trace_graph_return(struct ftrace_graph_ret *trace)
 	int cpu;
 	int pc;
 
+	ftrace_graph_addr_finish(trace);
+
 	local_irq_save(flags);
 	cpu = raw_smp_processor_id();
 	data = per_cpu_ptr(tr->trace_buffer.data, cpu);
@@ -532,6 +534,8 @@ void set_graph_array(struct trace_array *tr)
 
 static void trace_graph_thresh_return(struct ftrace_graph_ret *trace)
 {
+	ftrace_graph_addr_finish(trace);
+
 	if (tracing_thresh &&
 	    (trace->rettime - trace->calltime < tracing_thresh))
 		return;
diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
index b7357f9f82a3..98ea6d28df15 100644
--- a/kernel/trace/trace_irqsoff.c
+++ b/kernel/trace/trace_irqsoff.c
@@ -208,6 +208,8 @@ static void irqsoff_graph_return(struct ftrace_graph_ret *trace)
 	unsigned long flags;
 	int pc;
 
+	ftrace_graph_addr_finish(trace);
+
 	if (!func_prolog_dec(tr, &data, &flags))
 		return;
diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
index a86b303e6c67..7d04b9890755 100644
--- a/kernel/trace/trace_sched_wakeup.c
+++ b/kernel/trace/trace_sched_wakeup.c
@@ -270,6 +270,8 @@ static void wakeup_graph_return(struct ftrace_graph_ret *trace)
 	unsigned long flags;
 	int pc;
 
+	ftrace_graph_addr_finish(trace);
+
 	if (!func_prolog_preempt_disable(tr, &data, &pc))
 		return;