Date: Wed, 4 Sep 2019 03:21:06 -0700
From: "Paul E. McKenney"
To: Joel Fernandes
Cc: Viktor Rosendahl, Steven Rostedt, Ingo Molnar,
	linux-kernel@vger.kernel.org, Peter Zijlstra,
	linux-rt-users@vger.kernel.org
Subject: Re: [PATCH v5 1/4] ftrace: Implement fs notification for tracing_max_latency
Message-ID: <20190904102106.GN4125@linux.ibm.com>
Reply-To: paulmck@kernel.org
References: <20190903132602.3440-1-viktor.rosendahl@gmail.com>
 <20190903132602.3440-2-viktor.rosendahl@gmail.com>
 <20190904040039.GB150430@google.com>
In-Reply-To: <20190904040039.GB150430@google.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Sep 04, 2019 at 12:00:39AM -0400, Joel Fernandes wrote:
> [ Resending since I messed up my last email's headers! ]
>
> On Tue, Sep 03, 2019 at 03:25:59PM +0200, Viktor Rosendahl wrote:
> > This patch implements the feature that the tracing_max_latency file,
> > e.g. /sys/kernel/debug/tracing/tracing_max_latency will receive
> > notifications through the fsnotify framework when a new latency is
> > available.
> >
> > One particularly interesting use of this facility is when enabling
> > threshold tracing, through /sys/kernel/debug/tracing/tracing_thresh,
> > together with the preempt/irqsoff tracers.
> > This makes it possible to
> > implement a user space program that can, with equal probability,
> > obtain traces of latencies that occur immediately after each other in
> > spite of the fact that the preempt/irqsoff tracers operate in overwrite
> > mode.
>
> Adding Paul since RCU faces similar situations, i.e. raising softirq risks
> scheduler deadlock in rcu_read_unlock_special() -- but RCU's solution is to
> avoid raising the softirq and instead use irq_work.

Peter's solution, actually.  ;-)

							Thanx, Paul

> I was wondering, if we can rename __raise_softirq_irqoff() to
> raise_softirq_irqoff_no_wake() and call that from places where there is risk
> of scheduler related deadlocks. Then I think this can be used from Viktor's
> code. Let us discuss - what would happen if the softirq is raised, but
> ksoftirqd is not awakened for this latency notification path? Is this really
> an issue considering the softirq will execute during the next interrupt exit?
>
> thanks,
>
> - Joel
>
> > This facility works with the hwlat, preempt/irqsoff, and wakeup
> > tracers.
> >
> > This patch also adds some unfortunate calls from __schedule() and
> > do_idle(). Those calls to the latency_fsnotify_disable/enable() are
> > needed because we cannot wake up the workqueue from these critical
> > sections without risking a deadlock. Similar problems would also arise
> > if we try to schedule a tasklet, raise a softirq, or wake up a kernel
> > thread. If a notification event would happen in the forbidden sections,
> > we schedule the fsnotify work as soon as we have exited them.
> >
> > There was a suggestion to remove this latency_fsnotify_enable/disable()
> > gunk, or at least to combine it with the start_critical_timings() and
> > stop_critical_timings(). I have however not been able to come up with
> > a way to do it.
> >
> > It seems like it would be possible to simply replace the calls to
> > latency_fsnotify_enable/disable() with calls to
> > start/stop_critical_timings().
> > However, the main problem is that it
> > would not work for the wakeup tracer. The wakeup tracer needs a
> > facility that postpones the notifications, not one that prevents the
> > measurements, because all its measurements take place in the middle
> > of __schedule(). On the other hand, in some places, like in idle and
> > the console, we need start/stop functions that prevent the
> > measurements from being made.
> >
> > Signed-off-by: Viktor Rosendahl
> > ---
> >  include/linux/ftrace.h            |  31 +++++++++
> >  kernel/sched/core.c               |   3 +
> >  kernel/sched/idle.c               |   3 +
> >  kernel/sched/sched.h              |   1 +
> >  kernel/trace/trace.c              | 112 +++++++++++++++++++++++++++++-
> >  kernel/trace/trace.h              |  22 ++++++
> >  kernel/trace/trace_hwlat.c        |   4 +-
> >  kernel/trace/trace_irqsoff.c      |   4 ++
> >  kernel/trace/trace_sched_wakeup.c |   4 ++
> >  9 files changed, 181 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
> > index 8a8cb3c401b2..b4d9700ef917 100644
> > --- a/include/linux/ftrace.h
> > +++ b/include/linux/ftrace.h
> > @@ -907,4 +907,35 @@ unsigned long arch_syscall_addr(int nr);
> >
> >  #endif /* CONFIG_FTRACE_SYSCALLS */
> >
> > +#if (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \
> > +	defined(CONFIG_FSNOTIFY)
> > +
> > +DECLARE_PER_CPU(int, latency_notify_disable);
> > +DECLARE_STATIC_KEY_FALSE(latency_notify_key);
> > +
> > +void latency_fsnotify_process(void);
> > +
> > +/*
> > + * Disable/enable fsnotify while in scheduler and idle code. Trying to wake
> > + * anything up from there, such as calling queue_work() is prone to deadlock.
> > + */
> > +static inline void latency_fsnotify_disable(void)
> > +{
> > +	this_cpu_inc(latency_notify_disable);
> > +}
> > +
> > +static inline void latency_fsnotify_enable(void)
> > +{
> > +	this_cpu_dec(latency_notify_disable);
> > +	if (static_branch_unlikely(&latency_notify_key))
> > +		latency_fsnotify_process();
> > +}
> > +
> > +#else
> > +
> > +#define latency_fsnotify_disable() do { } while (0)
> > +#define latency_fsnotify_enable() do { } while (0)
> > +
> > +#endif
> > +
> >  #endif /* _LINUX_FTRACE_H */
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 010d578118d6..e3c1dc801073 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -3198,6 +3198,7 @@ asmlinkage __visible void schedule_tail(struct task_struct *prev)
> >  	 */
> >
> >  	rq = finish_task_switch(prev);
> > +	latency_fsnotify_enable();
> >  	balance_callback(rq);
> >  	preempt_enable();
> >
> > @@ -3820,6 +3821,7 @@ static void __sched notrace __schedule(bool preempt)
> >
> >  	local_irq_disable();
> >  	rcu_note_context_switch(preempt);
> > +	latency_fsnotify_disable();
> >
> >  	/*
> >  	 * Make sure that signal_pending_state()->signal_pending() below
> > @@ -3883,6 +3885,7 @@ static void __sched notrace __schedule(bool preempt)
> >  		rq_unlock_irq(rq, &rf);
> >  	}
> >
> > +	latency_fsnotify_enable();
> >  	balance_callback(rq);
> >  }
> >
> > diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> > index 80940939b733..5fc87d99a407 100644
> > --- a/kernel/sched/idle.c
> > +++ b/kernel/sched/idle.c
> > @@ -236,6 +236,7 @@ static void do_idle(void)
> >
> >  	__current_set_polling();
> >  	tick_nohz_idle_enter();
> > +	latency_fsnotify_disable();
> >
> >  	while (!need_resched()) {
> >  		check_pgt_cache();
> > @@ -265,6 +266,8 @@ static void do_idle(void)
> >  		arch_cpu_idle_exit();
> >  	}
> >
> > +	latency_fsnotify_enable();
> > +
> >  	/*
> >  	 * Since we fell out of the loop above, we know TIF_NEED_RESCHED must
> >  	 * be set, propagate it into PREEMPT_NEED_RESCHED.
> > diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> > index 802b1f3405f2..467d6ad03f16 100644
> > --- a/kernel/sched/sched.h
> > +++ b/kernel/sched/sched.h
> > @@ -46,6 +46,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >  #include
> >  #include
> >  #include
> > diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> > index 563e80f9006a..a622263a69e4 100644
> > --- a/kernel/trace/trace.c
> > +++ b/kernel/trace/trace.c
> > @@ -44,6 +44,10 @@
> >  #include
> >  #include
> >  #include
> > +#include
> > +#include
> > +#include
> > +#include
> >
> >  #include "trace.h"
> >  #include "trace_output.h"
> > @@ -1480,6 +1484,110 @@ static ssize_t trace_seq_to_buffer(struct trace_seq *s, void *buf, size_t cnt)
> >
> >  unsigned long __read_mostly tracing_thresh;
> >
> > +#if (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \
> > +	defined(CONFIG_FSNOTIFY)
> > +
> > +static const struct file_operations tracing_max_lat_fops;
> > +static struct workqueue_struct *fsnotify_wq;
> > +static DEFINE_PER_CPU(struct llist_head, notify_list);
> > +
> > +DEFINE_PER_CPU(int, latency_notify_disable);
> > +DEFINE_STATIC_KEY_FALSE(latency_notify_key);
> > +
> > +static void latency_fsnotify_workfn(struct work_struct *work)
> > +{
> > +	struct trace_array *tr = container_of(work, struct trace_array,
> > +					      fsnotify_work);
> > +	fsnotify(tr->d_max_latency->d_inode, FS_MODIFY,
> > +		 tr->d_max_latency->d_inode, FSNOTIFY_EVENT_INODE, NULL, 0);
> > +}
> > +
> > +static void trace_create_maxlat_file(struct trace_array *tr,
> > +				     struct dentry *d_tracer)
> > +{
> > +	INIT_WORK(&tr->fsnotify_work, latency_fsnotify_workfn);
> > +	atomic_set(&tr->notify_pending, 0);
> > +	tr->d_max_latency = trace_create_file("tracing_max_latency", 0644,
> > +					      d_tracer, &tr->max_latency,
> > +					      &tracing_max_lat_fops);
> > +}
> > +
> > +void latency_fsnotify_stop(void)
> > +{
> > +	/* Make sure all CPUs see caller's previous actions to stop tracer */
> > +	smp_wmb();
> > +	static_branch_disable(&latency_notify_key);
> > +	latency_fsnotify_process();
> > +}
> > +
> > +void latency_fsnotify_start(void)
> > +{
> > +	static_branch_enable(&latency_notify_key);
> > +	/* Make sure all CPUs see key value before caller continue */
> > +	smp_wmb();
> > +}
> > +
> > +void latency_fsnotify_process(void)
> > +{
> > +	struct trace_array *tr;
> > +	struct llist_head *list;
> > +	struct llist_node *node;
> > +
> > +	if (this_cpu_read(latency_notify_disable))
> > +		return;
> > +
> > +	list = this_cpu_ptr(&notify_list);
> > +	for (node = llist_del_first(list); node != NULL;
> > +	     node = llist_del_first(list)) {
> > +		tr = llist_entry(node, struct trace_array, notify_ll);
> > +		atomic_set(&tr->notify_pending, 0);
> > +		queue_work(fsnotify_wq, &tr->fsnotify_work);
> > +	}
> > +}
> > +
> > +__init static int latency_fsnotify_init(void)
> > +{
> > +	fsnotify_wq = alloc_workqueue("tr_max_lat_wq",
> > +				      WQ_UNBOUND | WQ_HIGHPRI, 0);
> > +	if (!fsnotify_wq) {
> > +		pr_err("Unable to allocate tr_max_lat_wq\n");
> > +		return -ENOMEM;
> > +	}
> > +	return 0;
> > +}
> > +
> > +late_initcall_sync(latency_fsnotify_init);
> > +
> > +void latency_fsnotify(struct trace_array *tr)
> > +{
> > +	if (!fsnotify_wq)
> > +		return;
> > +
> > +	if (!this_cpu_read(latency_notify_disable))
> > +		queue_work(fsnotify_wq, &tr->fsnotify_work);
> > +	else {
> > +		/*
> > +		 * notify_pending prevents us from adding the same entry to
> > +		 * more than one notify_list.
> > +		 * It will get queued in
> > +		 * latency_enable_fsnotify()
> > +		 */
> > +		if (!atomic_xchg(&tr->notify_pending, 1))
> > +			llist_add(&tr->notify_ll, this_cpu_ptr(&notify_list));
> > +	}
> > +}
> > +
> > +/*
> > + * (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \
> > + * defined(CONFIG_FSNOTIFY)
> > + */
> > +#else
> > +
> > +#define trace_create_maxlat_file(tr, d_tracer)			  \
> > +	trace_create_file("tracing_max_latency", 0644, d_tracer,  \
> > +			  &tr->max_latency, &tracing_max_lat_fops)
> > +
> > +#endif
> > +
> >  #ifdef CONFIG_TRACER_MAX_TRACE
> >  /*
> >   * Copy the new maximum trace into the separate maximum-trace
> > @@ -1518,6 +1626,7 @@ __update_max_tr(struct trace_array *tr, struct task_struct *tsk, int cpu)
> >
> >  	/* record this tasks comm */
> >  	tracing_record_cmdline(tsk);
> > +	latency_fsnotify(tr);
> >  }
> >
> >  /**
> > @@ -8550,8 +8659,7 @@ init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
> >  	create_trace_options_dir(tr);
> >
> >  #if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)
> > -	trace_create_file("tracing_max_latency", 0644, d_tracer,
> > -			  &tr->max_latency, &tracing_max_lat_fops);
> > +	trace_create_maxlat_file(tr, d_tracer);
> >  #endif
> >
> >  	if (ftrace_create_function_files(tr, d_tracer))
> > diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> > index 005f08629b8b..d9f83b2aaa71 100644
> > --- a/kernel/trace/trace.h
> > +++ b/kernel/trace/trace.h
> > @@ -16,6 +16,7 @@
> >  #include
> >  #include
> >  #include
> > +#include
> >
> >  #ifdef CONFIG_FTRACE_SYSCALLS
> >  #include	/* For NR_SYSCALLS */
> > @@ -264,6 +265,12 @@ struct trace_array {
> >  #endif
> >  #if defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)
> >  	unsigned long max_latency;
> > +#ifdef CONFIG_FSNOTIFY
> > +	struct dentry *d_max_latency;
> > +	struct work_struct fsnotify_work;
> > +	atomic_t notify_pending;
> > +	struct llist_node notify_ll;
> > +#endif
> >  #endif
> >  	struct trace_pid_list __rcu *filtered_pids;
> >  	/*
> > @@ -785,6 +792,21 @@ void update_max_tr_single(struct trace_array *tr,
> >  			  struct task_struct *tsk, int cpu);
> >  #endif /* CONFIG_TRACER_MAX_TRACE */
> >
> > +#if (defined(CONFIG_TRACER_MAX_TRACE) || defined(CONFIG_HWLAT_TRACER)) && \
> > +	defined(CONFIG_FSNOTIFY)
> > +
> > +void latency_fsnotify(struct trace_array *tr);
> > +void latency_fsnotify_start(void);
> > +void latency_fsnotify_stop(void);
> > +
> > +#else
> > +
> > +#define latency_fsnotify(tr) do { } while (0)
> > +#define latency_fsnotify_start() do { } while (0)
> > +#define latency_fsnotify_stop() do { } while (0)
> > +
> > +#endif
> > +
> >  #ifdef CONFIG_STACKTRACE
> >  void __trace_stack(struct trace_array *tr, unsigned long flags, int skip,
> >  		   int pc);
> > diff --git a/kernel/trace/trace_hwlat.c b/kernel/trace/trace_hwlat.c
> > index fa95139445b2..9c379261ee89 100644
> > --- a/kernel/trace/trace_hwlat.c
> > +++ b/kernel/trace/trace_hwlat.c
> > @@ -254,8 +254,10 @@ static int get_sample(void)
> >  		trace_hwlat_sample(&s);
> >
> >  		/* Keep a running maximum ever recorded hardware latency */
> > -		if (sample > tr->max_latency)
> > +		if (sample > tr->max_latency) {
> >  			tr->max_latency = sample;
> > +			latency_fsnotify(tr);
> > +		}
> >  	}
> >
> >  out:
> > diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
> > index a745b0cee5d3..29403a83a5f0 100644
> > --- a/kernel/trace/trace_irqsoff.c
> > +++ b/kernel/trace/trace_irqsoff.c
> > @@ -557,6 +557,7 @@ static int __irqsoff_tracer_init(struct trace_array *tr)
> >  	if (irqsoff_busy)
> >  		return -EBUSY;
> >
> > +	latency_fsnotify_start();
> >  	save_flags = tr->trace_flags;
> >
> >  	/* non overwrite screws up the latency tracers */
> > @@ -591,16 +592,19 @@ static void __irqsoff_tracer_reset(struct trace_array *tr)
> >  	ftrace_reset_array_ops(tr);
> >
> >  	irqsoff_busy = false;
> > +	latency_fsnotify_stop();
> >  }
> >
> >  static void irqsoff_tracer_start(struct trace_array *tr)
> >  {
> > +	latency_fsnotify_start();
> >  	tracer_enabled = 1;
> >  }
> >
> >  static void irqsoff_tracer_stop(struct trace_array *tr)
> >  {
> >  	tracer_enabled = 0;
> > +	latency_fsnotify_stop();
> >  }
> >
> >  #ifdef CONFIG_IRQSOFF_TRACER
> > diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
> > index 743b2b520d34..3dc90d9f605b 100644
> > --- a/kernel/trace/trace_sched_wakeup.c
> > +++ b/kernel/trace/trace_sched_wakeup.c
> > @@ -669,6 +669,7 @@ static bool wakeup_busy;
> >
> >  static int __wakeup_tracer_init(struct trace_array *tr)
> >  {
> > +	latency_fsnotify_start();
> >  	save_flags = tr->trace_flags;
> >
> >  	/* non overwrite screws up the latency tracers */
> > @@ -727,10 +728,12 @@ static void wakeup_tracer_reset(struct trace_array *tr)
> >  	set_tracer_flag(tr, TRACE_ITER_OVERWRITE, overwrite_flag);
> >  	ftrace_reset_array_ops(tr);
> >  	wakeup_busy = false;
> > +	latency_fsnotify_stop();
> >  }
> >
> >  static void wakeup_tracer_start(struct trace_array *tr)
> >  {
> > +	latency_fsnotify_start();
> >  	wakeup_reset(tr);
> >  	tracer_enabled = 1;
> >  }
> >
> > @@ -738,6 +741,7 @@ static void wakeup_tracer_start(struct trace_array *tr)
> >  static void wakeup_tracer_stop(struct trace_array *tr)
> >  {
> >  	tracer_enabled = 0;
> > +	latency_fsnotify_stop();
> >  }
> >
> >  static struct tracer wakeup_tracer __read_mostly =
> > --
> > 2.17.1
> >