Hi,
This is v3 of the trace event triggers patchset, addressing comments
from Masami Hiramatsu, zhangwei(Jovi), and Steve Rostedt.
v3:
- added a new patch to the series (patch 8/9 - update event filters
for multibuffer) to bring the event filters up-to-date wrt the
multibuffer changes - without this patch, the same filter is
applied to all buffers regardless of which instance sets it; this
patch allows you to set per-instance filters as you'd expect. The
one exception to this is the 'ftrace subsystem' events, which are
special and retain their current behavior.
- changed the syscall soft enabling to keep a per-trace-array array
of trace_event_files alongside the 'enabled' bitmaps there. This
keeps them in a place where they're only allocated for tracing,
which I think addresses all the previous comments for that
patch.
v2:
- removed all changes to __ftrace_event_enable_disable() (except
for patch 04/11 which clears the soft_disabled bit as discussed)
and created a separate trace_event_trigger_enable_disable() that
calls it after setting/clearing the TRIGGER_MODE_BIT.
- added a trigger_mode enum for future patches that break up the
trigger calls for filtering, but that's also now used as a command
id for registering/unregistering commands.
- removed the enter_file/exit_file members that were added to
syscall_metadata after realizing that they were unnecessary if
ftrace_syscall_enter/exit() were modified to receive a pointer
to the ftrace_file instead of the pointer to the trace_array in
the ftrace_file.
- broke up the trigger invocation into two parts so that triggers
like 'stacktrace' that themselves log into the trace buffer can
defer the actual trigger invocation until after the current
record is closed, which is needed for the filter check that
in turn determines whether the trigger gets invoked.
- other minor cleanup
This patchset implements 'trace event triggers', which are similar to
the function triggers implemented for 'ftrace filter commands' (see
'Filter commands' in Documentation/trace/ftrace.txt), but instead of
being invoked from function calls, they are invoked by trace events.
Basically the patchset allows 'commands' to be triggered whenever a
given trace event is hit. The set of commands implemented by this
patchset are:
- enable/disable_event - enable or disable another event whenever
the trigger event is hit
- stacktrace - dump a stacktrace to the trace buffer whenever the
trigger event is hit
- snapshot - create a snapshot of the current trace buffer whenever
the trigger event is hit
- traceon/traceoff - turn tracing on or off whenever the trigger
event is hit
Triggers can also be conditionally invoked by associating a standard
trace event filter with them - if the given event passes the filter,
the trigger is invoked; otherwise it's not (see 'Event filtering' in
Documentation/trace/events.txt for info on event filters).
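In general (as the examples below illustrate), a trigger is set by
echoing a command of the following form into the per-event 'trigger'
file, where both the ':N' count and the 'if <filter>' clause are
optional:
  echo '<command>[:N] [if <filter>]' > \
          /sys/kernel/debug/tracing/events/<system>/<event>/trigger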
See the last patch in the series for more complete documentation on
event triggers and the available trigger commands, and below for some
simple examples of each of the above commands along with conditional
filtering.
The first four patches are bugfix patches or minor improvements which
can be applied regardless; the rest contain the basic framework and
implementations for each command.
This patchset was based on some ideas from Steve Rostedt, which he
outlined during a couple of discussions at ELC and in follow-on e-mails.
Code- and interface-wise, it's also partially based on the existing
function triggers implementation and essentially works on top of the
SOFT_DISABLE mode introduced for that. Both Steve and Masami
Hiramatsu took a look at a couple early versions of this patchset, and
offered some very useful suggestions reflected in this patchset -
thanks to them both for the ideas and for taking the time to do some
basic sanity reviews!
Below are a few concrete examples demonstrating each of the available
commands.
The first example attempts to capture all the kmalloc events that
happen as a result of reading a particular file.
The first part of the set of commands below adds a kmalloc
'enable_event' trigger to the sys_enter_read trace event - as a
result, when the sys_enter_read event occurs, kmalloc events are
enabled, resulting in those kmalloc events getting logged into the
trace buffer. The :1 at the end of the kmalloc enable_event specifies
that the enabling of kmalloc events on sys_enter_read will only happen
once - subsequent reads won't trigger the kmalloc logging. The next
part of the example reads a test file, which triggers the
sys_enter_read tracepoint and thus turns on the kmalloc events, and
once done, adds a trigger to sys_exit_read that disables kmalloc
events. The disable_event doesn't have a :1 appended, which means it
happens on every sys_exit_read.
# echo 'enable_event:kmem:kmalloc:1' > \
/sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger; \
cat ~/junk.txt > /dev/null; \
echo 'disable_event:kmem:kmalloc' > \
/sys/kernel/debug/tracing/events/syscalls/sys_exit_read/trigger
Just to show a bit of what happens under the covers, if we display the
kmalloc 'enable' file, we see that it's 'soft disabled' (the asterisk
after the enable flag). This means that it's actually enabled but is
in the SOFT_DISABLED state, and is essentially held back from actually
logging anything to the trace buffer, but can be made to log into the
buffer by simply flipping a bit:
# cat /sys/kernel/debug/tracing/events/kmem/kmalloc/enable
0*
If we look at the 'enable' file for the triggering sys_enter_read
trace event, we can see that it also has the 'soft disable' flag set.
This is because in the case of the triggering event, we also need to
have the trace event invoked regardless of whether or not it's actually
being logged, so we can process the triggers. This functionality is
also built on top of the SOFT_DISABLE flag and is reflected in the
enable state as well:
# cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/enable
0*
To find out which triggers are set for a particular event, we can look
at the 'trigger' file for the event. Here's what the 'trigger' file
for the sys_enter_read event looks like:
# cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger
enable_event:kmem:kmalloc:count=0
The 'count=0' field at the end shows that this trigger has no more
triggering ability left - it has fired all its shots; if it were
still active, it would show a non-zero count.
Looking at the sys_exit_read trigger, we see that since we didn't specify a
number at the end, the number of times it can fire is unlimited:
# cat /sys/kernel/debug/tracing/events/syscalls/sys_exit_read/trigger
disable_event:kmem:kmalloc:unlimited
# cat /sys/kernel/debug/tracing/events/syscalls/sys_exit_read/enable
0*
Finally, let's look at the results of the above set of commands by
cat'ing the 'trace' file:
# cat /sys/kernel/debug/tracing/trace
# tracer: nop
#
# entries-in-buffer/entries-written: 85/85 #P:4
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
cat-2596 [001] .... 374.518849: kmalloc: call_site=ffffffff812de707 ptr=ffff8800306b9290 bytes_req=2 bytes_alloc=8 gfp_flags=GFP_KERNEL|GFP_ZERO
cat-2596 [001] .... 374.518956: kmalloc: call_site=ffffffff81182a12 ptr=ffff88010c8e1500 bytes_req=256 bytes_alloc=256 gfp_flags=GFP_KERNEL|GFP_ZERO
cat-2596 [001] .... 374.518959: kmalloc: call_site=ffffffff812d8e49 ptr=ffff88003002a200 bytes_req=32 bytes_alloc=32 gfp_flags=GFP_KERNEL|GFP_ZERO
cat-2596 [001] .... 374.518960: kmalloc: call_site=ffffffff812de707 ptr=ffff8800306b9088 bytes_req=2 bytes_alloc=8 gfp_flags=GFP_KERNEL|GFP_ZERO
cat-2596 [003] .... 374.519063: kmalloc: call_site=ffffffff812d9f50 ptr=ffff8800b793fd00 bytes_req=256 bytes_alloc=256 gfp_flags=GFP_KERNEL
cat-2596 [003] .... 374.519119: kmalloc: call_site=ffffffff811cc3bc ptr=ffff8800b7918900 bytes_req=128 bytes_alloc=128 gfp_flags=GFP_KERNEL
cat-2596 [003] .... 374.519122: kmalloc: call_site=ffffffff811cc4d2 ptr=ffff880030404800 bytes_req=504 bytes_alloc=512 gfp_flags=GFP_KERNEL
cat-2596 [003] .... 374.519125: kmalloc: call_site=ffffffff811cc64e ptr=ffff88003039d8a0 bytes_req=28 bytes_alloc=32 gfp_flags=GFP_KERNEL
.
.
.
Xorg-1194 [000] .... 374.543956: kmalloc: call_site=ffffffffa03a8599 ptr=ffff8800ba23b700 bytes_req=112 bytes_alloc=128 gfp_flags=GFP_TEMPORARY|GFP_NOWARN|GFP_NORETRY
Xorg-1194 [000] .... 374.543961: kmalloc: call_site=ffffffffa03a7639 ptr=ffff8800b7905b40 bytes_req=56 bytes_alloc=64 gfp_flags=GFP_TEMPORARY|GFP_ZERO
Xorg-1194 [000] .... 374.543973: kmalloc: call_site=ffffffffa039f716 ptr=ffff8800b7905ac0 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_KERNEL
.
.
.
compiz-1769 [002] .... 374.547586: kmalloc: call_site=ffffffffa03a8599 ptr=ffff8800ba320400 bytes_req=952 bytes_alloc=1024 gfp_flags=GFP_TEMPORARY|GFP_NOWARN|GFP_NORETRY
compiz-1769 [002] .... 374.547592: kmalloc: call_site=ffffffffa03a7639 ptr=ffff8800bd5f7400 bytes_req=280 bytes_alloc=512 gfp_flags=GFP_TEMPORARY|GFP_ZERO
compiz-1769 [002] .... 374.547623: kmalloc: call_site=ffffffffa039f716 ptr=ffff8800b792d580 bytes_req=64 bytes_alloc=64 gfp_flags=GFP_KERNEL
.
.
.
cat-2596 [000] .... 374.646019: kmalloc: call_site=ffffffff8123df9f ptr=ffff8800ba2f2900 bytes_req=96 bytes_alloc=96 gfp_flags=GFP_NOFS|GFP_ZERO
cat-2596 [000] .... 374.648263: kmalloc: call_site=ffffffff8123df9f ptr=ffff8800ba2f2900 bytes_req=96 bytes_alloc=96 gfp_flags=GFP_NOFS|GFP_ZERO
cat-2596 [000] .... 374.650503: kmalloc: call_site=ffffffff8123df9f ptr=ffff8800ba2f2900 bytes_req=96 bytes_alloc=96 gfp_flags=GFP_NOFS|GFP_ZERO
.
.
.
bash-2425 [002] .... 374.654923: kmalloc: call_site=ffffffff8123df9f ptr=ffff8800b7a28780 bytes_req=96 bytes_alloc=96 gfp_flags=GFP_NOFS|GFP_ZERO
rsyslogd-974 [002] .... 374.655163: kmalloc: call_site=ffffffff81046ae6 ptr=ffff8800ba320400 bytes_req=1024 bytes_alloc=1024 gfp_flags=GFP_KERNEL
As you can see, we captured all the kmallocs from our 'cat' reads, but
also any other kmallocs that happened for other processes between the
time we turned on kmalloc events and turned them off. Future work
should add a way to screen out unwanted events, e.g. the ability to
capture the triggering pid in a simple variable and use that variable
in event filters to screen out other pids.
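In the meantime, when the pid of interest is known in advance
(e.g. for a long-running process), a standard event filter on the
kmalloc event itself can be used to screen out other pids - the pid
below is hypothetical:
# echo 'common_pid == 999' > /sys/kernel/debug/tracing/events/kmem/kmalloc/filter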
To turn off the events we turned on, simply reinvoke the commands
prefixed by '!':
# echo '!enable_event:kmem:kmalloc:1' > /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger
# echo '!disable_event:kmem:kmalloc' > /sys/kernel/debug/tracing/events/syscalls/sys_exit_read/trigger
You can verify that the events have been turned off by again examining
the 'enable' and 'trigger' files:
# cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger
# cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/enable
0
# cat /sys/kernel/debug/tracing/events/kmem/kmalloc/enable
0
The next example shows how to use the 'stacktrace' command. To have a
stacktrace logged every time a particular event occurs, simply echo
'stacktrace' into the 'trigger' file for that event:
# echo 'stacktrace' > /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
# cat /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
stacktrace:unlimited
Looking at the 'trace' output, we indeed see stack traces for every
kmalloc:
# cat /sys/kernel/debug/tracing/trace
compiz-1769 [003] .... 2422.614630: <stack trace>
=> i915_add_request
=> i915_gem_do_execbuffer.isra.15
=> i915_gem_execbuffer2
=> drm_ioctl
=> do_vfs_ioctl
=> SyS_ioctl
=> system_call_fastpath
Xorg-1194 [002] .... 2422.619076: <stack trace>
=> drm_wait_vblank
=> drm_ioctl
=> do_vfs_ioctl
=> SyS_ioctl
=> system_call_fastpath
Xorg-1194 [000] .... 2422.625823: <stack trace>
=> i915_gem_execbuffer2
=> drm_ioctl
=> do_vfs_ioctl
=> SyS_ioctl
=> system_call_fastpath
.
.
.
bash-2842 [001] .... 2423.002059: <stack trace>
=> __tracing_open
=> tracing_open
=> do_dentry_open
=> finish_open
=> do_last
=> path_openat
=> do_filp_open
=> do_sys_open
=> SyS_open
=> system_call_fastpath
bash-2842 [001] .... 2423.002070: <stack trace>
=> __tracing_open
=> tracing_open
=> do_dentry_open
=> finish_open
=> do_last
=> path_openat
=> do_filp_open
=> do_sys_open
=> SyS_open
=> system_call_fastpath
For an event like kmalloc, however, we don't typically want to see a
stack trace for every single event, since the amount of data produced
is overwhelming. What we'd typically want to do is only log a stack
trace for particular events of interest. We can accomplish that by
appending an 'event filter' to the trigger. The event filters used to
filter triggers are exactly the same as those implemented for the
existing trace event 'filter' files - see the trace event
documentation for details.
First, let's turn off the existing stacktrace trigger, and clear the
trace buffer:
# echo '!stacktrace' > /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
# echo > /sys/kernel/debug/tracing/trace
Now, we can add a new stacktrace trigger which will fire 5 times, but
only if the number of bytes requested by the caller was greater than
or equal to 512:
# echo 'stacktrace:5 if bytes_req >= 512' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
# cat /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
stacktrace:count=0 if bytes_req >= 512
From looking at the trigger, we can see the event fired 5 times
(count=0) and looking at the 'trace' file, we can verify that:
# cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 5/5 #P:4
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
rsyslogd-974 [000] .... 1796.412997: <stack trace>
=> kmem_cache_alloc_trace
=> do_syslog
=> kmsg_read
=> proc_reg_read
=> vfs_read
=> SyS_read
=> system_call_fastpath
compiz-1769 [000] .... 1796.427342: <stack trace>
=> __kmalloc
=> i915_gem_execbuffer2
=> drm_ioctl
=> do_vfs_ioctl
=> SyS_ioctl
=> system_call_fastpath
Xorg-1194 [003] .... 1796.441251: <stack trace>
=> __kmalloc
=> i915_gem_execbuffer2
=> drm_ioctl
=> do_vfs_ioctl
=> SyS_ioctl
=> system_call_fastpath
Xorg-1194 [003] .... 1796.441392: <stack trace>
=> __kmalloc
=> sg_kmalloc
=> __sg_alloc_table
=> sg_alloc_table
=> i915_gem_object_get_pages_gtt
=> i915_gem_object_get_pages
=> i915_gem_object_pin
=> i915_gem_execbuffer_reserve_object.isra.11
=> i915_gem_execbuffer_reserve
=> i915_gem_do_execbuffer.isra.15
=> i915_gem_execbuffer2
=> drm_ioctl
=> do_vfs_ioctl
=> SyS_ioctl
=> system_call_fastpath
Xorg-1194 [003] .... 1796.441672: <stack trace>
=> __kmalloc
=> i915_gem_execbuffer2
=> drm_ioctl
=> do_vfs_ioctl
=> SyS_ioctl
=> system_call_fastpath
So the trace output shows exactly 5 stacktraces, as expected.
Just for comparison, let's look at an event that's harder to trigger,
to see a count that isn't 0 in the trigger description:
# echo 'stacktrace:5 if bytes_req >= 65536' > \
/sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
# cat /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
stacktrace:count=5 if bytes_req >= 65536
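Filters attached to triggers can also use compound expressions, just
as in the 'filter' files described in Documentation/trace/events.txt -
for example (the values here are hypothetical):
# echo 'stacktrace:5 if bytes_req >= 512 && bytes_alloc <= 1024' > \
        /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger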
The next example shows how to use the 'snapshot' command to capture a
snapshot of the trace buffer when an 'interesting' event occurs.
In this case, we'll first start the entire block subsystem tracing:
# echo 1 > /sys/kernel/debug/tracing/events/block/enable
Next, we add a 'snapshot' trigger that will take a snapshot of all the
events leading up to the particular event we're interested in, which
is a block queue unplug with a depth > 1. In this case we're
interested in capturing the snapshot just one time, the first time it
occurs:
# echo 'snapshot:1 if nr_rq > 1' > \
/sys/kernel/debug/tracing/events/block/block_unplug/trigger
It may take a while for the condition to occur, but once it does, we
can see the entire sequence of block events leading up to it in the
'snapshot' file:
# cat /sys/kernel/debug/tracing/snapshot
jbd2/sdb1-8-278 [001] .... 382.075012: block_bio_queue: 8,16 WS 629429976 + 8 [jbd2/sdb1-8]
jbd2/sdb1-8-278 [001] .... 382.075012: block_bio_backmerge: 8,16 WS 629429976 + 8 [jbd2/sdb1-8]
jbd2/sdb1-8-278 [001] d... 382.075015: block_rq_insert: 8,16 WS 0 () 629429912 + 72 [jbd2/sdb1-8]
jbd2/sdb1-8-278 [001] d... 382.075030: block_rq_issue: 8,16 WS 0 () 629429912 + 72 [jbd2/sdb1-8]
jbd2/sdb1-8-278 [001] d... 382.075044: block_unplug: [jbd2/sdb1-8] 1
<idle>-0 [000] ..s. 382.075310: block_rq_complete: 8,16 WS () 629429912 + 72 [0]
jbd2/sdb1-8-278 [000] .... 382.075407: block_touch_buffer: 8,17 sector=78678492 size=4096
jbd2/sdb1-8-278 [000] .... 382.075413: block_bio_remap: 8,16 FWFS 629429984 + 8 <- (8,17) 629427936
jbd2/sdb1-8-278 [000] .... 382.075415: block_bio_queue: 8,16 FWFS 629429984 + 8 [jbd2/sdb1-8]
jbd2/sdb1-8-278 [000] .... 382.075418: block_getrq: 8,16 FWFS 629429984 + 8 [jbd2/sdb1-8]
jbd2/sdb1-8-278 [000] d... 382.075421: block_rq_insert: 8,16 FWFS 0 () 629429984 + 8 [jbd2/sdb1-8]
jbd2/sdb1-8-278 [000] d... 382.075424: block_rq_issue: 8,16 FWS 0 () 18446744073709551615 + 0 [jbd2/sdb1-8]
<idle>-0 [000] dNs. 382.115912: block_rq_issue: 8,16 WS 0 () 629429984 + 8 [swapper/0]
<idle>-0 [000] ..s. 382.116059: block_rq_complete: 8,16 WS () 629429984 + 8 [0]
<idle>-0 [000] dNs. 382.116079: block_rq_issue: 8,16 FWS 0 () 18446744073709551615 + 0 [swapper/0]
<idle>-0 [000] d.s. 382.131030: block_rq_complete: 8,16 WS () 629429984 + 0 [0]
jbd2/sdb1-8-278 [000] .... 382.131106: block_dirty_buffer: 8,17 sector=26 size=4096
jbd2/sdb1-8-278 [000] .... 382.131111: block_dirty_buffer: 8,17 sector=106954757 size=4096
.
.
.
kworker/u16:3-66 [002] .... 387.144505: block_bio_remap: 8,16 WM 2208 + 8 <- (8,17) 160
kworker/u16:3-66 [002] .... 387.144512: block_bio_queue: 8,16 WM 2208 + 8 [kworker/u16:3]
kworker/u16:3-66 [002] .... 387.144522: block_getrq: 8,16 WM 2208 + 8 [kworker/u16:3]
kworker/u16:3-66 [002] .... 387.144525: block_plug: [kworker/u16:3]
kworker/u16:3-66 [002] .... 387.144530: block_bio_remap: 8,16 WM 2216 + 8 <- (8,17) 168
kworker/u16:3-66 [002] .... 387.144531: block_bio_queue: 8,16 WM 2216 + 8 [kworker/u16:3]
kworker/u16:3-66 [002] .... 387.144533: block_bio_backmerge: 8,16 WM 2216 + 8 [kworker/u16:3]
.
.
.
kworker/u16:3-66 [002] d... 387.144631: block_rq_insert: 8,16 WM 0 () 2208 + 16 [kworker/u16:3]
kworker/u16:3-66 [002] d... 387.144636: block_rq_insert: 8,16 WM 0 () 2256 + 16 [kworker/u16:3]
kworker/u16:3-66 [002] d... 387.144638: block_rq_insert: 8,16 WM 0 () 662702080 + 8 [kworker/u16:3]
kworker/u16:3-66 [002] d... 387.144640: block_rq_insert: 8,16 WM 0 () 683673680 + 8 [kworker/u16:3]
kworker/u16:3-66 [002] d... 387.144641: block_rq_insert: 8,16 WM 0 () 729812344 + 8 [kworker/u16:3]
kworker/u16:3-66 [002] d... 387.144642: block_rq_insert: 8,16 WM 0 () 729828896 + 8 [kworker/u16:3]
kworker/u16:3-66 [002] d... 387.144643: block_rq_insert: 8,16 WM 0 () 730599480 + 8 [kworker/u16:3]
kworker/u16:3-66 [002] d... 387.144644: block_rq_insert: 8,16 WM 0 () 855640104 + 8 [kworker/u16:3]
kworker/u16:3-66 [002] d... 387.144645: block_rq_insert: 8,16 WM 0 () 880805984 + 8 [kworker/u16:3]
kworker/u16:3-66 [002] d... 387.144646: block_rq_insert: 8,16 WM 0 () 1186990400 + 8 [kworker/u16:3]
kworker/u16:3-66 [002] d... 387.144649: block_unplug: [kworker/u16:3] 10
The final example shows something very similar, but uses the
'traceoff' command to stop tracing when an 'interesting' event occurs.
The traceon and traceoff commands can be used together to toggle
tracing on and off in creative ways to capture different traces in the
'trace' buffer, but this example shows essentially the same use case
as the previous one, using 'traceoff' to capture the trace data of
interest in the standard 'trace' buffer.
Again, we'll start the entire block subsystem tracing:
# echo 1 > /sys/kernel/debug/tracing/events/block/enable
# echo 'traceoff:1 if nr_rq > 1' > \
/sys/kernel/debug/tracing/events/block/block_unplug/trigger
# cat /sys/kernel/debug/tracing/trace
kworker/u16:4-67 [000] .... 803.003670: block_bio_remap: 8,16 WM 2208 + 8 <- (8,17) 160
kworker/u16:4-67 [000] .... 803.003670: block_bio_queue: 8,16 WM 2208 + 8 [kworker/u16:4]
kworker/u16:4-67 [000] .... 803.003672: block_getrq: 8,16 WM 2208 + 8 [kworker/u16:4]
kworker/u16:4-67 [000] .... 803.003674: block_bio_remap: 8,16 WM 2216 + 8 <- (8,17) 168
kworker/u16:4-67 [000] .... 803.003675: block_bio_queue: 8,16 WM 2216 + 8 [kworker/u16:4]
kworker/u16:4-67 [000] .... 803.003676: block_bio_backmerge: 8,16 WM 2216 + 8 [kworker/u16:4]
kworker/u16:4-67 [000] .... 803.003678: block_bio_remap: 8,16 WM 2232 + 8 <- (8,17) 184
kworker/u16:4-67 [000] .... 803.003678: block_bio_queue: 8,16 WM 2232 + 8 [kworker/u16:4]
kworker/u16:4-67 [000] .... 803.003680: block_getrq: 8,16 WM 2232 + 8 [kworker/u16:4]
.
.
.
kworker/u16:4-67 [000] d... 803.003720: block_rq_insert: 8,16 WM 0 () 285223776 + 16 [kworker/u16:4]
kworker/u16:4-67 [000] d... 803.003721: block_rq_insert: 8,16 WM 0 () 662702080 + 8 [kworker/u16:4]
kworker/u16:4-67 [000] d... 803.003722: block_rq_insert: 8,16 WM 0 () 683673680 + 8 [kworker/u16:4]
kworker/u16:4-67 [000] d... 803.003723: block_rq_insert: 8,16 WM 0 () 730599480 + 8 [kworker/u16:4]
kworker/u16:4-67 [000] d... 803.003724: block_rq_insert: 8,16 WM 0 () 763365384 + 8 [kworker/u16:4]
kworker/u16:4-67 [000] d... 803.003725: block_rq_insert: 8,16 WM 0 () 880805984 + 8 [kworker/u16:4]
kworker/u16:4-67 [000] d... 803.003726: block_rq_insert: 8,16 WM 0 () 1186990872 + 8 [kworker/u16:4]
kworker/u16:4-67 [000] d... 803.003727: block_rq_insert: 8,16 WM 0 () 1187057608 + 8 [kworker/u16:4]
kworker/u16:4-67 [000] d... 803.003729: block_unplug: [kworker/u16:4] 14
The following changes since commit c0d15cc7ee8c0d1970197d9eb1727503bcdd2471:
linked-list: Remove __list_for_each (2013-07-16 22:00:14 -0700)
are available in the git repository at:
git://git.yoctoproject.org/linux-yocto-contrib.git tzanussi/event-triggers-v3
http://git.yoctoproject.org/cgit/cgit.cgi/linux-yocto-contrib/log/?h=tzanussi/event-triggers-v3
Tom Zanussi (9):
tracing: Add support for SOFT_DISABLE to syscall events
tracing: add basic event trigger framework
tracing: add 'traceon' and 'traceoff' event trigger commands
tracing: add 'snapshot' event trigger command
tracing: add 'stacktrace' event trigger command
tracing: add 'enable_event' and 'disable_event' event trigger
commands
tracing: add and use generic set_trigger_filter() implementation
tracing: update event filters for multibuffer
tracing: add documentation for trace event triggers
Documentation/trace/events.txt | 207 ++++++
include/linux/ftrace_event.h | 56 +-
include/trace/ftrace.h | 39 +-
kernel/trace/Makefile | 1 +
kernel/trace/trace.c | 27 +-
kernel/trace/trace.h | 58 +-
kernel/trace/trace_branch.c | 2 +-
kernel/trace/trace_events.c | 46 +-
kernel/trace/trace_events_filter.c | 179 ++++-
kernel/trace/trace_events_trigger.c | 1306 ++++++++++++++++++++++++++++++++++
kernel/trace/trace_export.c | 2 +-
kernel/trace/trace_functions_graph.c | 4 +-
kernel/trace/trace_kprobe.c | 4 +-
kernel/trace/trace_mmiotrace.c | 4 +-
kernel/trace/trace_sched_switch.c | 4 +-
kernel/trace/trace_syscalls.c | 40 +-
kernel/trace/trace_uprobe.c | 3 +-
17 files changed, 1885 insertions(+), 97 deletions(-)
create mode 100644 kernel/trace/trace_events_trigger.c
--
1.7.11.4
The original SOFT_DISABLE patches didn't add support for soft disable
of syscall events; this adds it and paves the way for future patches
allowing triggers to be added to syscall events, since triggers are
built on top of SOFT_DISABLE.
Add an array of ftrace_event_file pointers indexed by syscall number
to the trace array alongside the existing enabled bitmaps. The
ftrace_event_file structs in turn contain the soft disable flags we
need for per-syscall soft disable accounting; later patches add
additional 'trigger' flags and per-syscall triggers and filters.
Signed-off-by: Tom Zanussi <[email protected]>
---
kernel/trace/trace.h | 2 ++
kernel/trace/trace_syscalls.c | 14 ++++++++++++++
2 files changed, 16 insertions(+)
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 4a4f6e1..af6eb2c 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -202,6 +202,8 @@ struct trace_array {
int sys_refcount_exit;
DECLARE_BITMAP(enabled_enter_syscalls, NR_syscalls);
DECLARE_BITMAP(enabled_exit_syscalls, NR_syscalls);
+ struct ftrace_event_file *enter_syscall_files[NR_syscalls];
+ struct ftrace_event_file *exit_syscall_files[NR_syscalls];
#endif
int stop_count;
int clock_id;
diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index 322e164..4915b69 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -302,6 +302,7 @@ static int __init syscall_exit_define_fields(struct ftrace_event_call *call)
static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
{
struct trace_array *tr = data;
+ struct ftrace_event_file *ftrace_file;
struct syscall_trace_enter *entry;
struct syscall_metadata *sys_data;
struct ring_buffer_event *event;
@@ -317,6 +318,10 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
if (!test_bit(syscall_nr, tr->enabled_enter_syscalls))
return;
+ ftrace_file = rcu_dereference_raw(tr->enter_syscall_files[syscall_nr]);
+ if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
+ return;
+
sys_data = syscall_nr_to_meta(syscall_nr);
if (!sys_data)
return;
@@ -345,6 +350,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
{
struct trace_array *tr = data;
+ struct ftrace_event_file *ftrace_file;
struct syscall_trace_exit *entry;
struct syscall_metadata *sys_data;
struct ring_buffer_event *event;
@@ -359,6 +365,10 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
if (!test_bit(syscall_nr, tr->enabled_exit_syscalls))
return;
+ ftrace_file = rcu_dereference_raw(tr->exit_syscall_files[syscall_nr]);
+ if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
+ return;
+
sys_data = syscall_nr_to_meta(syscall_nr);
if (!sys_data)
return;
@@ -397,6 +407,7 @@ static int reg_event_syscall_enter(struct ftrace_event_file *file,
if (!tr->sys_refcount_enter)
ret = register_trace_sys_enter(ftrace_syscall_enter, tr);
if (!ret) {
+ rcu_assign_pointer(tr->enter_syscall_files[num], file);
set_bit(num, tr->enabled_enter_syscalls);
tr->sys_refcount_enter++;
}
@@ -416,6 +427,7 @@ static void unreg_event_syscall_enter(struct ftrace_event_file *file,
mutex_lock(&syscall_trace_lock);
tr->sys_refcount_enter--;
clear_bit(num, tr->enabled_enter_syscalls);
+ rcu_assign_pointer(tr->enter_syscall_files[num], NULL);
if (!tr->sys_refcount_enter)
unregister_trace_sys_enter(ftrace_syscall_enter, tr);
mutex_unlock(&syscall_trace_lock);
@@ -435,6 +447,7 @@ static int reg_event_syscall_exit(struct ftrace_event_file *file,
if (!tr->sys_refcount_exit)
ret = register_trace_sys_exit(ftrace_syscall_exit, tr);
if (!ret) {
+ rcu_assign_pointer(tr->exit_syscall_files[num], file);
set_bit(num, tr->enabled_exit_syscalls);
tr->sys_refcount_exit++;
}
@@ -454,6 +467,7 @@ static void unreg_event_syscall_exit(struct ftrace_event_file *file,
mutex_lock(&syscall_trace_lock);
tr->sys_refcount_exit--;
clear_bit(num, tr->enabled_exit_syscalls);
+ rcu_assign_pointer(tr->exit_syscall_files[num], NULL);
if (!tr->sys_refcount_exit)
unregister_trace_sys_exit(ftrace_syscall_exit, tr);
mutex_unlock(&syscall_trace_lock);
--
1.7.11.4
Add a generic event_command.set_trigger_filter() op implementation and
have the current set of trigger commands use it - this essentially
gives them all support for filters.
Syntactically, filters are supported by adding 'if <filter>' just
after the command, in which case only events matching the filter will
invoke the trigger. For example, to add a filter to an
enable/disable_event command:
echo 'enable_event:system:event if common_pid == 999' > \
.../othersys/otherevent/trigger
The above command will only enable the system:event event if the
common_pid field in the othersys:otherevent event is 999.
As another example, to add a filter to a stacktrace command:
echo 'stacktrace if common_pid == 999' > \
.../somesys/someevent/trigger
The above command will only trigger a stacktrace if the common_pid
field in the event is 999.
The filter syntax is the same as that described in the 'Event
filtering' section of Documentation/trace/events.txt.
Because triggers can now use filters, the trigger-invoking logic needs
to be moved - for ftrace_raw_event_calls, trigger invocation now needs
to happen after the { assign; } part of the call.
Also, because triggers need to be invoked even for soft-disabled
events, the SOFT_DISABLED check and return needs to be moved from the
top of the call to a point following the trigger check, which means
that soft-disabled events actually get discarded instead of simply
skipped. There's still a SOFT_DISABLED-only check at the top of the
function, so when an event is soft disabled but not because of the
presence of a trigger, the original SOFT_DISABLED behavior remains
unchanged.
There's also a bit of trickiness in that some triggers need to avoid
being invoked while an event is currently in the process of being
logged, since the trigger may itself log data into the trace buffer.
Thus we make sure the current event is committed before invoking those
triggers. To do that, we split the trigger invocation in two - the
first part (event_triggers_call()) checks the filter using the current
trace record; if a command has the post_trigger flag set, it sets a
bit for itself in the return value, otherwise it directly invokes the
trigger. Once all commands have been either invoked or set their
return flag, event_triggers_call() returns. The current record is
then either committed or discarded; if any commands have deferred
their triggers, those commands are finally invoked following the close
of the current event by event_triggers_post_call().
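In outline, the generated ftrace_raw_event_<call>() code now commits
the record as follows (a condensed sketch of the actual patch below):
	enum trigger_mode __tm = TM_NONE;

	/* ... ring buffer event reserved, { assign; } step done ... */

	/* part 1: filter-check, run any non-deferring triggers now */
	if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, &ftrace_file->flags))
		__tm = event_triggers_call(ftrace_file, entry);

	/* commit or discard the current record */
	if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
		ring_buffer_discard_commit(buffer, event);
	else if (!filter_current_check_discard(buffer, event_call, entry, event))
		trace_buffer_unlock_commit(buffer, event, irq_flags, pc);

	/* part 2: deferred triggers run after the record is closed */
	if (__tm)
		event_triggers_post_call(ftrace_file, __tm);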
The syscall event invocation code is also changed in analogous ways.
Because event triggers need to be able to create and free filters,
this also adds a couple external wrappers for the existing
create_filter and free_filter functions, which are too generic to be
made extern functions themselves.
Signed-off-by: Tom Zanussi <[email protected]>
---
include/linux/ftrace_event.h | 6 ++-
include/trace/ftrace.h | 45 ++++++++++++-----
kernel/trace/trace.h | 4 ++
kernel/trace/trace_events_filter.c | 13 +++++
kernel/trace/trace_events_trigger.c | 97 +++++++++++++++++++++++++++++++++++--
kernel/trace/trace_syscalls.c | 36 ++++++++++----
6 files changed, 174 insertions(+), 27 deletions(-)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 57ca386..f0c6e80 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -328,7 +328,11 @@ extern int filter_current_check_discard(struct ring_buffer *buffer,
struct ftrace_event_call *call,
void *rec,
struct ring_buffer_event *event);
-extern void event_triggers_call(struct ftrace_event_file *file);
+extern enum trigger_mode event_triggers_call(struct ftrace_event_file *file,
+ void *rec);
+extern void event_triggers_post_call(struct ftrace_event_file *file,
+ enum trigger_mode tm);
+
enum {
FILTER_OTHER = 0,
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index 63dfb0a..311f38e 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -412,13 +412,15 @@ static inline notrace int ftrace_get_offsets_##call( \
* struct ftrace_data_offsets_<call> __maybe_unused __data_offsets;
* struct ring_buffer_event *event;
* struct ftrace_raw_<call> *entry; <-- defined in stage 1
+ * enum trigger_mode __tm = TM_NONE;
* struct ring_buffer *buffer;
* unsigned long irq_flags;
* int __data_size;
* int pc;
*
- * if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT,
- * &ftrace_file->flags))
+ * if ((ftrace_file->flags & (FTRACE_EVENT_FL_SOFT_DISABLED |
+ * FTRACE_EVENT_FL_TRIGGER_MODE)) ==
+ * FTRACE_EVENT_FL_SOFT_DISABLED)
* return;
*
* local_save_flags(irq_flags);
@@ -437,9 +439,19 @@ static inline notrace int ftrace_get_offsets_##call( \
* { <assign>; } <-- Here we assign the entries by the __field and
* __array macros.
*
- * if (!filter_current_check_discard(buffer, event_call, entry, event))
- * trace_nowake_buffer_unlock_commit(buffer,
- * event, irq_flags, pc);
+ * if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT,
+ * &ftrace_file->flags))
+ * __tm = event_triggers_call(ftrace_file, entry);
+ *
+ * if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT,
+ * &ftrace_file->flags))
+ * ring_buffer_discard_commit(buffer, event);
+ * else if (!filter_current_check_discard(buffer, event_call,
+ * entry, event))
+ * trace_buffer_unlock_commit(buffer, event, irq_flags, pc);
+ *
+ * if (__tm)
+ * event_triggers_post_call(ftrace_file, __tm);
* }
*
* static struct trace_event ftrace_event_type_<call> = {
@@ -521,17 +533,15 @@ ftrace_raw_event_##call(void *__data, proto) \
struct ftrace_data_offsets_##call __maybe_unused __data_offsets;\
struct ring_buffer_event *event; \
struct ftrace_raw_##call *entry; \
+ enum trigger_mode __tm = TM_NONE; \
struct ring_buffer *buffer; \
unsigned long irq_flags; \
int __data_size; \
int pc; \
\
- if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, \
- &ftrace_file->flags)) \
- event_triggers_call(ftrace_file); \
- \
- if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, \
- &ftrace_file->flags)) \
+ if ((ftrace_file->flags & (FTRACE_EVENT_FL_SOFT_DISABLED | \
+ FTRACE_EVENT_FL_TRIGGER_MODE)) == \
+ FTRACE_EVENT_FL_SOFT_DISABLED) \
return; \
\
local_save_flags(irq_flags); \
@@ -551,8 +561,19 @@ ftrace_raw_event_##call(void *__data, proto) \
\
{ assign; } \
\
- if (!filter_current_check_discard(buffer, event_call, entry, event)) \
+ if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, \
+ &ftrace_file->flags)) \
+ __tm = event_triggers_call(ftrace_file, entry); \
+ \
+ if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, \
+ &ftrace_file->flags)) \
+ ring_buffer_discard_commit(buffer, event); \
+ else if (!filter_current_check_discard(buffer, event_call, \
+ entry, event)) \
trace_buffer_unlock_commit(buffer, event, irq_flags, pc); \
+ \
+ if (__tm) \
+ event_triggers_post_call(ftrace_file, __tm); \
}
/*
* The ftrace_test_probe is compiled out, it is only here as a build time check
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index c06d2d2..8597592 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -996,6 +996,10 @@ extern int apply_subsystem_event_filter(struct ftrace_subsystem_dir *dir,
extern void print_subsystem_event_filter(struct event_subsystem *system,
struct trace_seq *s);
extern int filter_assign_type(const char *type);
+extern int create_event_filter(struct ftrace_event_call *call,
+ char *filter_str, bool set_str,
+ struct event_filter **filterp);
+extern void free_event_filter(struct event_filter *filter);
struct ftrace_event_field *
trace_find_event_field(struct ftrace_event_call *call, char *name);
diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index 0d883dc..6cf2cef 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -783,6 +783,11 @@ static void __free_filter(struct event_filter *filter)
kfree(filter);
}
+void free_event_filter(struct event_filter *filter)
+{
+ __free_filter(filter);
+}
+
/*
* Called when destroying the ftrace_event_call.
* The call is being freed, so we do not need to worry about
@@ -1808,6 +1813,14 @@ static int create_filter(struct ftrace_event_call *call,
return err;
}
+int create_event_filter(struct ftrace_event_call *call,
+ char *filter_str, bool set_str,
+ struct event_filter **filterp)
+{
+ return create_filter(call, filter_str, set_str, filterp);
+}
+
+
/**
* create_system_filter - create a filter for an event_subsystem
* @system: event_subsystem to create a filter for
diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index 2300fc8..b0ca093 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -35,6 +35,7 @@ struct event_trigger_data {
bool enable;
struct event_trigger_ops *ops;
enum trigger_mode mode;
+ bool post_trigger;
struct event_filter *filter;
char *filter_str;
struct list_head list;
@@ -44,20 +45,45 @@ struct trigger_iterator {
struct ftrace_event_file *file;
};
-void event_triggers_call(struct ftrace_event_file *file)
+enum trigger_mode
+event_triggers_call(struct ftrace_event_file *file, void *rec)
{
struct event_trigger_data *data;
+ enum trigger_mode tm = TM_NONE;
if (list_empty(&file->triggers))
- return;
+ return tm;
preempt_disable_notrace();
- list_for_each_entry_rcu(data, &file->triggers, list)
+ list_for_each_entry_rcu(data, &file->triggers, list) {
+ if (data->filter && !filter_match_preds(data->filter, rec))
+ continue;
+ if (data->post_trigger) {
+ tm |= data->mode;
+ continue;
+ }
data->ops->func((void **)&data);
+ }
preempt_enable_notrace();
+
+ return tm;
}
EXPORT_SYMBOL_GPL(event_triggers_call);
+void
+event_triggers_post_call(struct ftrace_event_file *file, enum trigger_mode tm)
+{
+ struct event_trigger_data *data;
+
+ preempt_disable_notrace();
+ list_for_each_entry_rcu(data, &file->triggers, list) {
+ if (data->mode & tm)
+ data->ops->func((void **)&data);
+ }
+ preempt_enable_notrace();
+}
+EXPORT_SYMBOL_GPL(event_triggers_post_call);
+
static void *trigger_next(struct seq_file *m, void *t, loff_t *pos)
{
struct trigger_iterator *iter = m->private;
@@ -400,6 +426,52 @@ event_trigger_free(struct event_trigger_ops *ops, void **_data)
kfree(data);
}
+static int set_trigger_filter(char *filter_str, void *trigger_data,
+ void *cmd_data)
+{
+ struct trigger_iterator *iter = cmd_data;
+ struct event_trigger_data *data = trigger_data;
+ struct event_filter *filter, *tmp;
+ int ret = -EINVAL;
+ char *s;
+
+ s = strsep(&filter_str, " \t");
+
+ if (!strlen(s) || strcmp(s, "if") != 0)
+ goto out;
+
+ if (!filter_str)
+ goto out;
+
+ /* The filter is for the 'trigger' event, not the triggered event */
+ ret = create_event_filter(iter->file->event_call,
+ filter_str, false, &filter);
+ if (ret)
+ goto out;
+
+ tmp = data->filter;
+
+ rcu_assign_pointer(data->filter, filter);
+
+ if (tmp) {
+ /* Make sure the call is done with the filter */
+ synchronize_sched();
+ free_event_filter(tmp);
+ }
+
+ kfree(data->filter_str);
+
+ data->filter_str = kstrdup(filter_str, GFP_KERNEL);
+ if (!data->filter_str) {
+ free_event_filter(data->filter);
+ data->filter = NULL;
+ ret = -ENOMEM;
+ }
+
+ out:
+ return ret;
+}
+
static int
event_trigger_callback(struct event_command *cmd_ops, void *cmd_data,
char *glob, char *cmd, char *param, int enabled)
@@ -623,6 +695,7 @@ static struct event_command trigger_traceon_cmd = {
.reg = register_trigger,
.unreg = unregister_trigger,
.get_trigger_ops = onoff_get_trigger_ops,
+ .set_filter = set_trigger_filter,
};
static struct event_command trigger_traceoff_cmd = {
@@ -632,6 +705,7 @@ static struct event_command trigger_traceoff_cmd = {
.reg = register_trigger,
.unreg = unregister_trigger,
.get_trigger_ops = onoff_get_trigger_ops,
+ .set_filter = set_trigger_filter,
};
static void
@@ -713,6 +787,7 @@ static struct event_command trigger_snapshot_cmd = {
.reg = register_snapshot_trigger,
.unreg = unregister_trigger,
.get_trigger_ops = snapshot_get_trigger_ops,
+ .set_filter = set_trigger_filter,
};
/*
@@ -764,6 +839,17 @@ stacktrace_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
data->filter_str);
}
+static int stacktrace_register_trigger(char *glob,
+ struct event_trigger_ops *ops,
+ void *trigger_data, void *cmd_data)
+{
+ struct event_trigger_data *data = trigger_data;
+
+ data->post_trigger = true;
+
+ return register_trigger(glob, ops, trigger_data, cmd_data);
+}
+
static struct event_trigger_ops stacktrace_trigger_ops = {
.func = stacktrace_trigger,
.print = stacktrace_trigger_print,
@@ -788,9 +874,10 @@ static struct event_command trigger_stacktrace_cmd = {
.name = "stacktrace",
.trigger_mode = TM_STACKTRACE,
.func = event_trigger_callback,
- .reg = register_trigger,
+ .reg = stacktrace_register_trigger,
.unreg = unregister_trigger,
.get_trigger_ops = stacktrace_get_trigger_ops,
+ .set_filter = set_trigger_filter,
};
static __init void unregister_trigger_traceon_traceoff_cmds(void)
@@ -1119,6 +1206,7 @@ static struct event_command trigger_enable_cmd = {
.reg = event_enable_register_trigger,
.unreg = event_enable_unregister_trigger,
.get_trigger_ops = event_enable_get_trigger_ops,
+ .set_filter = set_trigger_filter,
};
static struct event_command trigger_disable_cmd = {
@@ -1128,6 +1216,7 @@ static struct event_command trigger_disable_cmd = {
.reg = event_enable_register_trigger,
.unreg = event_enable_unregister_trigger,
.get_trigger_ops = event_enable_get_trigger_ops,
+ .set_filter = set_trigger_filter,
};
static __init void unregister_trigger_enable_disable_cmds(void)
diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index a79f85b..609190b 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -306,6 +306,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
struct syscall_trace_enter *entry;
struct syscall_metadata *sys_data;
struct ring_buffer_event *event;
+ enum trigger_mode __tm = TM_NONE;
struct ring_buffer *buffer;
unsigned long irq_flags;
int pc;
@@ -319,9 +320,9 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
return;
ftrace_file = rcu_dereference_raw(tr->enter_syscall_files[syscall_nr]);
- if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, &ftrace_file->flags))
- event_triggers_call(ftrace_file);
- if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
+ if ((ftrace_file->flags &
+ (FTRACE_EVENT_FL_SOFT_DISABLED | FTRACE_EVENT_FL_TRIGGER_MODE)) ==
+ FTRACE_EVENT_FL_SOFT_DISABLED)
return;
sys_data = syscall_nr_to_meta(syscall_nr);
@@ -343,10 +344,17 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
entry->nr = syscall_nr;
syscall_get_arguments(current, regs, 0, sys_data->nb_args, entry->args);
- if (!filter_current_check_discard(buffer, sys_data->enter_event,
- entry, event))
+ if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, &ftrace_file->flags))
+ __tm = event_triggers_call(ftrace_file, entry);
+
+ if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
+ ring_buffer_discard_commit(buffer, event);
+ else if (!filter_current_check_discard(buffer, sys_data->enter_event,
+ entry, event))
trace_current_buffer_unlock_commit(buffer, event,
irq_flags, pc);
+ if (__tm)
+ event_triggers_post_call(ftrace_file, __tm);
}
static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
@@ -356,6 +364,7 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
struct syscall_trace_exit *entry;
struct syscall_metadata *sys_data;
struct ring_buffer_event *event;
+ enum trigger_mode __tm = TM_NONE;
struct ring_buffer *buffer;
unsigned long irq_flags;
int pc;
@@ -368,9 +377,9 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
return;
ftrace_file = rcu_dereference_raw(tr->exit_syscall_files[syscall_nr]);
- if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, &ftrace_file->flags))
- event_triggers_call(ftrace_file);
- if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
+ if ((ftrace_file->flags &
+ (FTRACE_EVENT_FL_SOFT_DISABLED | FTRACE_EVENT_FL_TRIGGER_MODE)) ==
+ FTRACE_EVENT_FL_SOFT_DISABLED)
return;
sys_data = syscall_nr_to_meta(syscall_nr);
@@ -391,10 +400,17 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
entry->nr = syscall_nr;
entry->ret = syscall_get_return_value(current, regs);
- if (!filter_current_check_discard(buffer, sys_data->exit_event,
- entry, event))
+ if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, &ftrace_file->flags))
+ __tm = event_triggers_call(ftrace_file, entry);
+
+ if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
+ ring_buffer_discard_commit(buffer, event);
+ else if (!filter_current_check_discard(buffer, sys_data->exit_event,
+ entry, event))
trace_current_buffer_unlock_commit(buffer, event,
irq_flags, pc);
+ if (__tm)
+ event_triggers_post_call(ftrace_file, __tm);
}
static int reg_event_syscall_enter(struct ftrace_event_file *file,
--
1.7.11.4
Add a 'snapshot' event trigger command. Snapshot event triggers are added
by the user via this command in a similar way and using practically
the same syntax as the analogous 'snapshot' ftrace function command,
but instead of writing to the set_ftrace_filter file, the snapshot
event trigger is written to the per-event 'trigger' files:
echo 'snapshot' > .../somesys/someevent/trigger
The above command will turn on snapshots for someevent, i.e. whenever
someevent is hit, a snapshot will be done.
This also adds a 'count' version that limits the number of times the
command will be invoked:
echo 'snapshot:N' > .../somesys/someevent/trigger
Where N is the number of times the command will be invoked.
The above command will snapshot up to N times for someevent, i.e. a
snapshot will be done on each of the first N hits of someevent.
Also adds a new ftrace_alloc_snapshot() function - the ftrace snapshot
command already defines code that allocates a snapshot, and this makes
that code reusable.
Signed-off-by: Tom Zanussi <[email protected]>
---
include/linux/ftrace_event.h | 1 +
kernel/trace/trace.c | 9 ++++
kernel/trace/trace.h | 1 +
kernel/trace/trace_events_trigger.c | 89 +++++++++++++++++++++++++++++++++++++
4 files changed, 100 insertions(+)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index c794686..a25daf3 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -317,6 +317,7 @@ struct ftrace_event_file {
enum trigger_mode {
TM_NONE = (0),
TM_TRACE_ONOFF = (1 << 0),
+ TM_SNAPSHOT = (1 << 1),
};
extern void destroy_preds(struct ftrace_event_call *call);
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 0cd500b..2997d61 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -5371,6 +5371,15 @@ static const struct file_operations tracing_dyn_info_fops = {
};
#endif /* CONFIG_DYNAMIC_FTRACE */
+#if defined(CONFIG_TRACER_SNAPSHOT)
+int ftrace_alloc_snapshot(void)
+{
+ return alloc_snapshot(&global_trace);
+}
+#else
+int ftrace_alloc_snapshot(void) { return -ENOSYS; }
+#endif
+
#if defined(CONFIG_TRACER_SNAPSHOT) && defined(CONFIG_DYNAMIC_FTRACE)
static void
ftrace_snapshot(unsigned long ip, unsigned long parent_ip, void **data)
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index d3e8626..6425507 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1057,6 +1057,7 @@ struct event_command {
extern int trace_event_enable_disable(struct ftrace_event_file *file,
int enable, int soft_disable);
+extern int ftrace_alloc_snapshot(void);
extern const char *__start___trace_bprintk_fmt[];
extern const char *__stop___trace_bprintk_fmt[];
diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index f2b97b6..90a59dc 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -634,6 +634,87 @@ static struct event_command trigger_traceoff_cmd = {
.get_trigger_ops = onoff_get_trigger_ops,
};
+static void
+snapshot_trigger(void **_data)
+{
+ struct event_trigger_data **p = (struct event_trigger_data **)_data;
+ struct event_trigger_data *data = *p;
+
+ if (!data)
+ return;
+
+ tracing_snapshot();
+}
+
+static void
+snapshot_count_trigger(void **_data)
+{
+ struct event_trigger_data **p = (struct event_trigger_data **)_data;
+ struct event_trigger_data *data = *p;
+
+ if (!data)
+ return;
+
+ if (!data->count)
+ return;
+
+ if (data->count != -1)
+ (data->count)--;
+
+ snapshot_trigger(_data);
+}
+
+static int
+register_snapshot_trigger(char *glob, struct event_trigger_ops *ops,
+ void *data, void *cmd_data)
+{
+ int ret = register_trigger(glob, ops, data, cmd_data);
+
+ if (ret > 0)
+ ftrace_alloc_snapshot();
+
+ return ret;
+}
+
+static int
+snapshot_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
+ void *_data)
+{
+ struct event_trigger_data *data = _data;
+
+ return event_trigger_print("snapshot", m, (void *)data->count,
+ data->filter_str);
+}
+
+static struct event_trigger_ops snapshot_trigger_ops = {
+ .func = snapshot_trigger,
+ .print = snapshot_trigger_print,
+ .init = event_trigger_init,
+ .free = event_trigger_free,
+};
+
+static struct event_trigger_ops snapshot_count_trigger_ops = {
+ .func = snapshot_count_trigger,
+ .print = snapshot_trigger_print,
+ .init = event_trigger_init,
+ .free = event_trigger_free,
+};
+
+static struct event_trigger_ops *
+snapshot_get_trigger_ops(char *cmd, char *param)
+{
+ return param ? &snapshot_count_trigger_ops : &snapshot_trigger_ops;
+}
+
+static struct event_command trigger_snapshot_cmd = {
+ .name = "snapshot",
+ .trigger_mode = TM_SNAPSHOT,
+ .func = event_trigger_callback,
+ .reg = register_snapshot_trigger,
+ .unreg = unregister_trigger,
+ .get_trigger_ops = snapshot_get_trigger_ops,
+};
+
static __init void unregister_trigger_traceon_traceoff_cmds(void)
{
unregister_event_command(&trigger_traceon_cmd,
@@ -670,5 +751,13 @@ __init int register_trigger_cmds(void)
return ret;
}
+ ret = register_event_command(&trigger_snapshot_cmd,
+ &trigger_commands,
+ &trigger_cmd_mutex);
+ if (WARN_ON(ret < 0)) {
+ unregister_trigger_traceon_traceoff_cmds();
+ return ret;
+ }
+
return 0;
}
--
1.7.11.4
Add a 'stacktrace' event trigger command. Stacktrace event triggers are
added by the user via this command in a similar way and using
practically the same syntax as the analogous 'stacktrace' ftrace
function command, but instead of writing to the set_ftrace_filter
file, the stacktrace event trigger is written to the per-event
'trigger' files:
echo 'stacktrace' > .../tracing/events/somesys/someevent/trigger
The above command will turn on stacktraces for someevent, i.e. whenever
someevent is hit, a stacktrace will be logged.
This also adds a 'count' version that limits the number of times the
command will be invoked:
echo 'stacktrace:N' > .../tracing/events/somesys/someevent/trigger
Where N is the number of times the command will be invoked.
The above command will log up to N stacktraces for someevent, i.e. a
stacktrace will be logged on each of the first N hits of someevent.
Signed-off-by: Tom Zanussi <[email protected]>
---
include/linux/ftrace_event.h | 1 +
kernel/trace/trace_events_trigger.c | 89 +++++++++++++++++++++++++++++++++++++
2 files changed, 90 insertions(+)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index a25daf3..51c141e 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -318,6 +318,7 @@ enum trigger_mode {
TM_NONE = (0),
TM_TRACE_ONOFF = (1 << 0),
TM_SNAPSHOT = (1 << 1),
+ TM_STACKTRACE = (1 << 2),
};
extern void destroy_preds(struct ftrace_event_call *call);
diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index 90a59dc..dd14e36 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -715,6 +715,84 @@ static struct event_command trigger_snapshot_cmd = {
.get_trigger_ops = snapshot_get_trigger_ops,
};
+/*
+ * Skip 4:
+ * ftrace_stacktrace()
+ * function_trace_probe_call()
+ * ftrace_ops_list_func()
+ * ftrace_call()
+ */
+#define STACK_SKIP 4
+
+static void
+stacktrace_trigger(void **_data)
+{
+ struct event_trigger_data **p = (struct event_trigger_data **)_data;
+ struct event_trigger_data *data = *p;
+
+ if (!data)
+ return;
+
+ trace_dump_stack(STACK_SKIP);
+}
+
+static void
+stacktrace_count_trigger(void **_data)
+{
+ struct event_trigger_data **p = (struct event_trigger_data **)_data;
+ struct event_trigger_data *data = *p;
+
+ if (!data)
+ return;
+
+ if (!data->count)
+ return;
+
+ if (data->count != -1)
+ (data->count)--;
+
+ stacktrace_trigger(_data);
+}
+
+static int
+stacktrace_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
+ void *_data)
+{
+ struct event_trigger_data *data = _data;
+
+ return event_trigger_print("stacktrace", m, (void *)data->count,
+ data->filter_str);
+}
+
+static struct event_trigger_ops stacktrace_trigger_ops = {
+ .func = stacktrace_trigger,
+ .print = stacktrace_trigger_print,
+ .init = event_trigger_init,
+ .free = event_trigger_free,
+};
+
+static struct event_trigger_ops stacktrace_count_trigger_ops = {
+ .func = stacktrace_count_trigger,
+ .print = stacktrace_trigger_print,
+ .init = event_trigger_init,
+ .free = event_trigger_free,
+};
+
+static struct event_trigger_ops *
+stacktrace_get_trigger_ops(char *cmd, char *param)
+{
+ return param ? &stacktrace_count_trigger_ops : &stacktrace_trigger_ops;
+}
+
+static struct event_command trigger_stacktrace_cmd = {
+ .name = "stacktrace",
+ .trigger_mode = TM_STACKTRACE,
+ .func = event_trigger_callback,
+ .reg = register_trigger,
+ .unreg = unregister_trigger,
+ .get_trigger_ops = stacktrace_get_trigger_ops,
+};
+
static __init void unregister_trigger_traceon_traceoff_cmds(void)
{
unregister_event_command(&trigger_traceon_cmd,
@@ -759,5 +837,16 @@ __init int register_trigger_cmds(void)
return ret;
}
+ ret = register_event_command(&trigger_stacktrace_cmd,
+ &trigger_commands,
+ &trigger_cmd_mutex);
+ if (WARN_ON(ret < 0)) {
+ unregister_trigger_traceon_traceoff_cmds();
+ unregister_event_command(&trigger_snapshot_cmd,
+ &trigger_commands,
+ &trigger_cmd_mutex);
+ return ret;
+ }
+
return 0;
}
--
1.7.11.4
The trace event filters are still tied to event calls rather than
event files, which means you don't get what you'd expect when using
filters in the multibuffer case:
Before:
# echo 'count > 65536' > /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/filter
# cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/filter
count > 65536
# mkdir /sys/kernel/debug/tracing/instances/test1
# echo 'count > 4096' > /sys/kernel/debug/tracing/instances/test1/events/syscalls/sys_enter_read/filter
# cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/filter
count > 4096
Setting the filter in tracing/instances/test1/events shouldn't affect
the same event in tracing/events as it does above.
After:
# echo 'count > 65536' > /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/filter
# cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/filter
count > 65536
# mkdir /sys/kernel/debug/tracing/instances/test1
# echo 'count > 4096' > /sys/kernel/debug/tracing/instances/test1/events/syscalls/sys_enter_read/filter
# cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/filter
count > 65536
We'd like to just move the filter directly from ftrace_event_call to
ftrace_event_file, but there are a couple of cases that don't yet have
multibuffer support and therefore have to continue using the current
event_call-based filters. For those cases, a new USE_CALL_FILTER bit
is added to the event_call flags, whose main purpose is to keep the
old behavior for those cases until they can be updated with
multibuffer support; at that point, the USE_CALL_FILTER flag (and the
new associated call_filter_check_discard() function) can go away.
The multibuffer support also made filter_current_check_discard()
redundant, so this change removes that function as well and replaces
it with filter_check_discard() (or call_filter_check_discard() as
appropriate).
Signed-off-by: Tom Zanussi <[email protected]>
---
include/linux/ftrace_event.h | 37 ++++++--
include/trace/ftrace.h | 6 +-
kernel/trace/trace.c | 18 ++--
kernel/trace/trace.h | 10 +--
kernel/trace/trace_branch.c | 2 +-
kernel/trace/trace_events.c | 16 ++--
kernel/trace/trace_events_filter.c | 166 +++++++++++++++++++++++++++--------
kernel/trace/trace_export.c | 2 +-
kernel/trace/trace_functions_graph.c | 4 +-
kernel/trace/trace_kprobe.c | 4 +-
kernel/trace/trace_mmiotrace.c | 4 +-
kernel/trace/trace_sched_switch.c | 4 +-
kernel/trace/trace_syscalls.c | 8 +-
kernel/trace/trace_uprobe.c | 3 +-
14 files changed, 193 insertions(+), 91 deletions(-)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index f0c6e80..00c8a5e 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -200,6 +200,7 @@ enum {
TRACE_EVENT_FL_NO_SET_FILTER_BIT,
TRACE_EVENT_FL_IGNORE_ENABLE_BIT,
TRACE_EVENT_FL_WAS_ENABLED_BIT,
+ TRACE_EVENT_FL_USE_CALL_FILTER_BIT,
};
/*
@@ -211,6 +212,7 @@ enum {
* WAS_ENABLED - Set and stays set when an event was ever enabled
* (used for module unloading, if a module event is enabled,
* it is best to clear the buffers that used it).
+ * USE_CALL_FILTER - For ftrace internal events, don't use file filter
*/
enum {
TRACE_EVENT_FL_FILTERED = (1 << TRACE_EVENT_FL_FILTERED_BIT),
@@ -218,6 +220,7 @@ enum {
TRACE_EVENT_FL_NO_SET_FILTER = (1 << TRACE_EVENT_FL_NO_SET_FILTER_BIT),
TRACE_EVENT_FL_IGNORE_ENABLE = (1 << TRACE_EVENT_FL_IGNORE_ENABLE_BIT),
TRACE_EVENT_FL_WAS_ENABLED = (1 << TRACE_EVENT_FL_WAS_ENABLED_BIT),
+ TRACE_EVENT_FL_USE_CALL_FILTER = (1 << TRACE_EVENT_FL_USE_CALL_FILTER_BIT),
};
struct ftrace_event_call {
@@ -236,6 +239,7 @@ struct ftrace_event_call {
* bit 2: failed to apply filter
* bit 3: ftrace internal event (do not enable)
* bit 4: Event was enabled by module
+ * bit 5: use call filter rather than file filter
*/
int flags; /* static flags of different events */
@@ -251,6 +255,8 @@ struct ftrace_subsystem_dir;
enum {
FTRACE_EVENT_FL_ENABLED_BIT,
FTRACE_EVENT_FL_RECORDED_CMD_BIT,
+ FTRACE_EVENT_FL_FILTERED_BIT,
+ FTRACE_EVENT_FL_NO_SET_FILTER_BIT,
FTRACE_EVENT_FL_SOFT_MODE_BIT,
FTRACE_EVENT_FL_SOFT_DISABLED_BIT,
FTRACE_EVENT_FL_TRIGGER_MODE_BIT,
@@ -260,6 +266,8 @@ enum {
* Ftrace event file flags:
* ENABLED - The event is enabled
* RECORDED_CMD - The comms should be recorded at sched_switch
+ * FILTERED - The event has a filter attached
+ * NO_SET_FILTER - Set when filter has error and is to be ignored
* SOFT_MODE - The event is enabled/disabled by SOFT_DISABLED
* SOFT_DISABLED - When set, do not trace the event (even though its
* tracepoint may be enabled)
@@ -268,6 +276,8 @@ enum {
enum {
FTRACE_EVENT_FL_ENABLED = (1 << FTRACE_EVENT_FL_ENABLED_BIT),
FTRACE_EVENT_FL_RECORDED_CMD = (1 << FTRACE_EVENT_FL_RECORDED_CMD_BIT),
+ FTRACE_EVENT_FL_FILTERED = (1 << FTRACE_EVENT_FL_FILTERED_BIT),
+ FTRACE_EVENT_FL_NO_SET_FILTER = (1 << FTRACE_EVENT_FL_NO_SET_FILTER_BIT),
FTRACE_EVENT_FL_SOFT_MODE = (1 << FTRACE_EVENT_FL_SOFT_MODE_BIT),
FTRACE_EVENT_FL_SOFT_DISABLED = (1 << FTRACE_EVENT_FL_SOFT_DISABLED_BIT),
FTRACE_EVENT_FL_TRIGGER_MODE = (1 << FTRACE_EVENT_FL_TRIGGER_MODE_BIT),
@@ -276,6 +286,7 @@ enum {
struct ftrace_event_file {
struct list_head list;
struct ftrace_event_call *event_call;
+ struct event_filter *filter;
struct dentry *dir;
struct trace_array *tr;
struct ftrace_subsystem_dir *system;
@@ -286,8 +297,10 @@ struct ftrace_event_file {
* bit 0: enabled
* bit 1: enabled cmd record
* bit 2: enable/disable with the soft disable bit
- * bit 3: soft disabled
- * bit 4: trigger enabled
+ * bit 3: filter_active
+ * bit 4: failed to apply filter
+ * bit 5: soft disabled
+ * bit 6: trigger enabled
*
* Note: The bits must be set atomically to prevent races
* from other writers. Reads of flags do not need to be in
@@ -322,17 +335,27 @@ enum trigger_mode {
TM_EVENT_ENABLE = (1 << 3),
};
-extern void destroy_preds(struct ftrace_event_call *call);
+extern void destroy_preds(struct ftrace_event_file *file);
+extern void destroy_call_preds(struct ftrace_event_call *call);
extern int filter_match_preds(struct event_filter *filter, void *rec);
-extern int filter_current_check_discard(struct ring_buffer *buffer,
- struct ftrace_event_call *call,
- void *rec,
- struct ring_buffer_event *event);
extern enum trigger_mode event_triggers_call(struct ftrace_event_file *file,
void *rec);
extern void event_triggers_post_call(struct ftrace_event_file *file,
enum trigger_mode tm);
+static inline int
+filter_check_discard(struct ftrace_event_file *file, void *rec,
+ struct ring_buffer *buffer,
+ struct ring_buffer_event *event)
+{
+ if (unlikely(file->flags & FTRACE_EVENT_FL_FILTERED) &&
+ !filter_match_preds(file->filter, rec)) {
+ ring_buffer_discard_commit(buffer, event);
+ return 1;
+ }
+
+ return 0;
+}
enum {
FILTER_OTHER = 0,
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index 311f38e..7eff632 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -446,8 +446,7 @@ static inline notrace int ftrace_get_offsets_##call( \
* if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT,
* &ftrace_file->flags))
* ring_buffer_discard_commit(buffer, event);
- * else if (!filter_current_check_discard(buffer, event_call,
- * entry, event))
+ * else if (!filter_check_discard(ftrace_file, entry, buffer, event))
* trace_buffer_unlock_commit(buffer, event, irq_flags, pc);
*
* if (__tm)
@@ -568,8 +567,7 @@ ftrace_raw_event_##call(void *__data, proto) \
if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, \
&ftrace_file->flags)) \
ring_buffer_discard_commit(buffer, event); \
- else if (!filter_current_check_discard(buffer, event_call, \
- entry, event)) \
+ else if (!filter_check_discard(ftrace_file, entry, buffer, event)) \
trace_buffer_unlock_commit(buffer, event, irq_flags, pc); \
\
if (__tm) \
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 2997d61..4e2a70f 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -235,14 +235,6 @@ void trace_array_put(struct trace_array *this_tr)
mutex_unlock(&trace_types_lock);
}
-int filter_current_check_discard(struct ring_buffer *buffer,
- struct ftrace_event_call *call, void *rec,
- struct ring_buffer_event *event)
-{
- return filter_check_discard(call, rec, buffer, event);
-}
-EXPORT_SYMBOL_GPL(filter_current_check_discard);
-
cycle_t ftrace_now(int cpu)
{
u64 ts;
@@ -1631,7 +1623,7 @@ trace_function(struct trace_array *tr,
entry->ip = ip;
entry->parent_ip = parent_ip;
- if (!filter_check_discard(call, entry, buffer, event))
+ if (!call_filter_check_discard(call, entry, buffer, event))
__buffer_unlock_commit(buffer, event);
}
@@ -1715,7 +1707,7 @@ static void __ftrace_trace_stack(struct ring_buffer *buffer,
entry->size = trace.nr_entries;
- if (!filter_check_discard(call, entry, buffer, event))
+ if (!call_filter_check_discard(call, entry, buffer, event))
__buffer_unlock_commit(buffer, event);
out:
@@ -1817,7 +1809,7 @@ ftrace_trace_userstack(struct ring_buffer *buffer, unsigned long flags, int pc)
trace.entries = entry->caller;
save_stack_trace_user(&trace);
- if (!filter_check_discard(call, entry, buffer, event))
+ if (!call_filter_check_discard(call, entry, buffer, event))
__buffer_unlock_commit(buffer, event);
out_drop_count:
@@ -2009,7 +2001,7 @@ int trace_vbprintk(unsigned long ip, const char *fmt, va_list args)
entry->fmt = fmt;
memcpy(entry->buf, tbuffer, sizeof(u32) * len);
- if (!filter_check_discard(call, entry, buffer, event)) {
+ if (!call_filter_check_discard(call, entry, buffer, event)) {
__buffer_unlock_commit(buffer, event);
ftrace_trace_stack(buffer, flags, 6, pc);
}
@@ -2064,7 +2056,7 @@ __trace_array_vprintk(struct ring_buffer *buffer,
memcpy(&entry->buf, tbuffer, len);
entry->buf[len] = '\0';
- if (!filter_check_discard(call, entry, buffer, event)) {
+ if (!call_filter_check_discard(call, entry, buffer, event)) {
__buffer_unlock_commit(buffer, event);
ftrace_trace_stack(buffer, flags, 6, pc);
}
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 8597592..2bcf37a 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -987,9 +987,9 @@ struct filter_pred {
extern enum regex_type
filter_parse_regex(char *buff, int len, char **search, int *not);
-extern void print_event_filter(struct ftrace_event_call *call,
+extern void print_event_filter(struct ftrace_event_file *file,
struct trace_seq *s);
-extern int apply_event_filter(struct ftrace_event_call *call,
+extern int apply_event_filter(struct ftrace_event_file *file,
char *filter_string);
extern int apply_subsystem_event_filter(struct ftrace_subsystem_dir *dir,
char *filter_string);
@@ -1005,9 +1005,9 @@ struct ftrace_event_field *
trace_find_event_field(struct ftrace_event_call *call, char *name);
static inline int
-filter_check_discard(struct ftrace_event_call *call, void *rec,
- struct ring_buffer *buffer,
- struct ring_buffer_event *event)
+call_filter_check_discard(struct ftrace_event_call *call, void *rec,
+ struct ring_buffer *buffer,
+ struct ring_buffer_event *event)
{
if (unlikely(call->flags & TRACE_EVENT_FL_FILTERED) &&
!filter_match_preds(call->filter, rec)) {
diff --git a/kernel/trace/trace_branch.c b/kernel/trace/trace_branch.c
index d594da0..697fb9b 100644
--- a/kernel/trace/trace_branch.c
+++ b/kernel/trace/trace_branch.c
@@ -78,7 +78,7 @@ probe_likely_condition(struct ftrace_branch_data *f, int val, int expect)
entry->line = f->line;
entry->correct = val == expect;
- if (!filter_check_discard(call, entry, buffer, event))
+ if (!call_filter_check_discard(call, entry, buffer, event))
__buffer_unlock_commit(buffer, event);
out:
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 8ac5ce0..9493e67 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -992,7 +992,7 @@ static ssize_t
event_filter_read(struct file *filp, char __user *ubuf, size_t cnt,
loff_t *ppos)
{
- struct ftrace_event_call *call = filp->private_data;
+ struct ftrace_event_file *file = filp->private_data;
struct trace_seq *s;
int r;
@@ -1005,7 +1005,7 @@ event_filter_read(struct file *filp, char __user *ubuf, size_t cnt,
trace_seq_init(s);
- print_event_filter(call, s);
+ print_event_filter(file, s);
r = simple_read_from_buffer(ubuf, cnt, ppos, s->buffer, s->len);
kfree(s);
@@ -1017,7 +1017,7 @@ static ssize_t
event_filter_write(struct file *filp, const char __user *ubuf, size_t cnt,
loff_t *ppos)
{
- struct ftrace_event_call *call = filp->private_data;
+ struct ftrace_event_file *file = filp->private_data;
char *buf;
int err;
@@ -1034,7 +1034,7 @@ event_filter_write(struct file *filp, const char __user *ubuf, size_t cnt,
}
buf[cnt] = '\0';
- err = apply_event_filter(call, buf);
+ err = apply_event_filter(file, buf);
free_page((unsigned long) buf);
if (err < 0)
return err;
@@ -1520,7 +1520,7 @@ event_create_dir(struct dentry *parent,
return -1;
}
}
- trace_create_file("filter", 0644, file->dir, call,
+ trace_create_file("filter", 0644, file->dir, file,
filter);
trace_create_file("trigger", 0644, file->dir, file,
@@ -1578,6 +1578,10 @@ static void event_remove(struct ftrace_event_call *call)
if (file->event_call != call)
continue;
ftrace_event_enable_disable(file, 0);
+ if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
+ destroy_call_preds(call);
+ else
+ destroy_preds(file);
/*
* The do_for_each_event_file() is
* a double loop. After finding the call for this
@@ -1711,7 +1715,7 @@ static void __trace_remove_event_call(struct ftrace_event_call *call)
{
event_remove(call);
trace_destroy_fields(call);
- destroy_preds(call);
+ destroy_call_preds(call);
}
/* Remove an event_call */
diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
index 6cf2cef..edd5e97 100644
--- a/kernel/trace/trace_events_filter.c
+++ b/kernel/trace/trace_events_filter.c
@@ -637,12 +637,15 @@ static void append_filter_err(struct filter_parse_state *ps,
free_page((unsigned long) buf);
}
-void print_event_filter(struct ftrace_event_call *call, struct trace_seq *s)
+void print_event_filter(struct ftrace_event_file *file, struct trace_seq *s)
{
struct event_filter *filter;
mutex_lock(&event_mutex);
- filter = call->filter;
+ if (file->event_call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
+ filter = file->event_call->filter;
+ else
+ filter = file->filter;
if (filter && filter->filter_string)
trace_seq_printf(s, "%s\n", filter->filter_string);
else
@@ -768,7 +771,12 @@ static void __free_preds(struct event_filter *filter)
filter->n_preds = 0;
}
-static void filter_disable(struct ftrace_event_call *call)
+static void filter_disable(struct ftrace_event_file *file)
+{
+ file->flags &= ~FTRACE_EVENT_FL_FILTERED;
+}
+
+static void call_filter_disable(struct ftrace_event_call *call)
{
call->flags &= ~TRACE_EVENT_FL_FILTERED;
}
@@ -789,12 +797,24 @@ void free_event_filter(struct event_filter *filter)
}
/*
+ * Called when destroying the ftrace_event_file.
+ * The file is being freed, so we do not need to worry about
+ * the file being currently used. This is for module code removing
+ * the tracepoints from within it.
+ */
+void destroy_preds(struct ftrace_event_file *file)
+{
+ __free_filter(file->filter);
+ file->filter = NULL;
+}
+
+/*
* Called when destroying the ftrace_event_call.
* The call is being freed, so we do not need to worry about
* the call being currently used. This is for module code removing
* the tracepoints from within it.
*/
-void destroy_preds(struct ftrace_event_call *call)
+void destroy_call_preds(struct ftrace_event_call *call)
{
__free_filter(call->filter);
call->filter = NULL;
@@ -832,28 +852,44 @@ static int __alloc_preds(struct event_filter *filter, int n_preds)
return 0;
}
-static void filter_free_subsystem_preds(struct event_subsystem *system)
+static void filter_free_subsystem_preds(struct event_subsystem *system,
+ struct trace_array *tr)
{
+ struct ftrace_event_file *file;
struct ftrace_event_call *call;
- list_for_each_entry(call, &ftrace_events, list) {
+ list_for_each_entry(file, &tr->events, list) {
+ call = file->event_call;
if (strcmp(call->class->system, system->name) != 0)
continue;
- filter_disable(call);
- remove_filter_string(call->filter);
+ if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER) {
+ call_filter_disable(call);
+ remove_filter_string(call->filter);
+ } else {
+ filter_disable(file);
+ remove_filter_string(file->filter);
+ }
}
}
-static void filter_free_subsystem_filters(struct event_subsystem *system)
+static void filter_free_subsystem_filters(struct event_subsystem *system,
+ struct trace_array *tr)
{
+ struct ftrace_event_file *file;
struct ftrace_event_call *call;
- list_for_each_entry(call, &ftrace_events, list) {
+ list_for_each_entry(file, &tr->events, list) {
+ call = file->event_call;
if (strcmp(call->class->system, system->name) != 0)
continue;
- __free_filter(call->filter);
- call->filter = NULL;
+ if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER) {
+ __free_filter(call->filter);
+ call->filter = NULL;
+ } else {
+ __free_filter(file->filter);
+ file->filter = NULL;
+ }
}
}
@@ -1630,9 +1666,11 @@ struct filter_list {
};
static int replace_system_preds(struct event_subsystem *system,
+ struct trace_array *tr,
struct filter_parse_state *ps,
char *filter_string)
{
+ struct ftrace_event_file *file;
struct ftrace_event_call *call;
struct filter_list *filter_item;
struct filter_list *tmp;
@@ -1640,8 +1678,8 @@ static int replace_system_preds(struct event_subsystem *system,
bool fail = true;
int err;
- list_for_each_entry(call, &ftrace_events, list) {
-
+ list_for_each_entry(file, &tr->events, list) {
+ call = file->event_call;
if (strcmp(call->class->system, system->name) != 0)
continue;
@@ -1650,21 +1688,34 @@ static int replace_system_preds(struct event_subsystem *system,
* (filter arg is ignored on dry_run)
*/
err = replace_preds(call, NULL, ps, filter_string, true);
- if (err)
- call->flags |= TRACE_EVENT_FL_NO_SET_FILTER;
- else
- call->flags &= ~TRACE_EVENT_FL_NO_SET_FILTER;
+ if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER) {
+ if (err)
+ call->flags |= TRACE_EVENT_FL_NO_SET_FILTER;
+ else
+ call->flags &= ~TRACE_EVENT_FL_NO_SET_FILTER;
+ } else {
+ if (err)
+ file->flags |= FTRACE_EVENT_FL_NO_SET_FILTER;
+ else
+ file->flags &= ~FTRACE_EVENT_FL_NO_SET_FILTER;
+ }
}
- list_for_each_entry(call, &ftrace_events, list) {
+ list_for_each_entry(file, &tr->events, list) {
struct event_filter *filter;
+ call = file->event_call;
+
if (strcmp(call->class->system, system->name) != 0)
continue;
- if (call->flags & TRACE_EVENT_FL_NO_SET_FILTER)
+ if (file->flags & FTRACE_EVENT_FL_NO_SET_FILTER)
continue;
+ if ((call->flags & TRACE_EVENT_FL_USE_CALL_FILTER) &&
+ (call->flags & TRACE_EVENT_FL_NO_SET_FILTER))
+ continue;
+
filter_item = kzalloc(sizeof(*filter_item), GFP_KERNEL);
if (!filter_item)
goto fail_mem;
@@ -1683,17 +1734,29 @@ static int replace_system_preds(struct event_subsystem *system,
err = replace_preds(call, filter, ps, filter_string, false);
if (err) {
- filter_disable(call);
+ if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
+ call_filter_disable(call);
+ else
+ filter_disable(file);
parse_error(ps, FILT_ERR_BAD_SUBSYS_FILTER, 0);
append_filter_err(ps, filter);
- } else
- call->flags |= TRACE_EVENT_FL_FILTERED;
+ } else {
+ if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER)
+ call->flags |= TRACE_EVENT_FL_FILTERED;
+ else
+ file->flags |= FTRACE_EVENT_FL_FILTERED;
+ }
/*
* Regardless of if this returned an error, we still
* replace the filter for the call.
*/
- filter = call->filter;
- rcu_assign_pointer(call->filter, filter_item->filter);
+ if (call->flags & TRACE_EVENT_FL_USE_CALL_FILTER) {
+ filter = call->filter;
+ rcu_assign_pointer(call->filter, filter_item->filter);
+ } else {
+ filter = file->filter;
+ rcu_assign_pointer(file->filter, filter_item->filter);
+ }
filter_item->filter = filter;
fail = false;
@@ -1831,6 +1894,7 @@ int create_event_filter(struct ftrace_event_call *call,
* and always remembers @filter_str.
*/
static int create_system_filter(struct event_subsystem *system,
+ struct trace_array *tr,
char *filter_str, struct event_filter **filterp)
{
struct event_filter *filter = NULL;
@@ -1839,7 +1903,7 @@ static int create_system_filter(struct event_subsystem *system,
err = create_filter_start(filter_str, true, &ps, &filter);
if (!err) {
- err = replace_system_preds(system, ps, filter_str);
+ err = replace_system_preds(system, tr, ps, filter_str);
if (!err) {
/* System filters just show a default message */
kfree(filter->filter_string);
@@ -1854,19 +1918,31 @@ static int create_system_filter(struct event_subsystem *system,
return err;
}
-int apply_event_filter(struct ftrace_event_call *call, char *filter_string)
+int apply_event_filter(struct ftrace_event_file *file, char *filter_string)
{
+ struct ftrace_event_call *call = file->event_call;
struct event_filter *filter;
+ bool use_call_filter;
int err = 0;
+ use_call_filter = call->flags & TRACE_EVENT_FL_USE_CALL_FILTER;
+
mutex_lock(&event_mutex);
if (!strcmp(strstrip(filter_string), "0")) {
- filter_disable(call);
- filter = call->filter;
+ if (use_call_filter) {
+ call_filter_disable(call);
+ filter = call->filter;
+ } else {
+ filter_disable(file);
+ filter = file->filter;
+ }
if (!filter)
goto out_unlock;
- RCU_INIT_POINTER(call->filter, NULL);
+ if (use_call_filter)
+ RCU_INIT_POINTER(call->filter, NULL);
+ else
+ RCU_INIT_POINTER(file->filter, NULL);
/* Make sure the filter is not being used */
synchronize_sched();
__free_filter(filter);
@@ -1882,14 +1958,25 @@ int apply_event_filter(struct ftrace_event_call *call, char *filter_string)
* string
*/
if (filter) {
- struct event_filter *tmp = call->filter;
+ struct event_filter *tmp;
- if (!err)
- call->flags |= TRACE_EVENT_FL_FILTERED;
- else
- filter_disable(call);
+ if (use_call_filter) {
+ tmp = call->filter;
+ if (!err)
+ call->flags |= TRACE_EVENT_FL_FILTERED;
+ else
+ call_filter_disable(call);
+
+ rcu_assign_pointer(call->filter, filter);
+ } else {
+ tmp = file->filter;
+ if (!err)
+ file->flags |= FTRACE_EVENT_FL_FILTERED;
+ else
+ filter_disable(file);
- rcu_assign_pointer(call->filter, filter);
+ rcu_assign_pointer(file->filter, filter);
+ }
if (tmp) {
/* Make sure the call is done with the filter */
@@ -1907,6 +1994,7 @@ int apply_subsystem_event_filter(struct ftrace_subsystem_dir *dir,
char *filter_string)
{
struct event_subsystem *system = dir->subsystem;
+ struct trace_array *tr = dir->tr;
struct event_filter *filter;
int err = 0;
@@ -1919,18 +2007,18 @@ int apply_subsystem_event_filter(struct ftrace_subsystem_dir *dir,
}
if (!strcmp(strstrip(filter_string), "0")) {
- filter_free_subsystem_preds(system);
+ filter_free_subsystem_preds(system, tr);
remove_filter_string(system->filter);
filter = system->filter;
system->filter = NULL;
/* Ensure all filters are no longer used */
synchronize_sched();
- filter_free_subsystem_filters(system);
+ filter_free_subsystem_filters(system, tr);
__free_filter(filter);
goto out_unlock;
}
- err = create_system_filter(system, filter_string, &filter);
+ err = create_system_filter(system, tr, filter_string, &filter);
if (filter) {
/*
* No event actually uses the system filter
diff --git a/kernel/trace/trace_export.c b/kernel/trace/trace_export.c
index d21a746..7c3e3e7 100644
--- a/kernel/trace/trace_export.c
+++ b/kernel/trace/trace_export.c
@@ -180,7 +180,7 @@ struct ftrace_event_call __used event_##call = { \
.event.type = etype, \
.class = &event_class_ftrace_##call, \
.print_fmt = print, \
- .flags = TRACE_EVENT_FL_IGNORE_ENABLE, \
+ .flags = TRACE_EVENT_FL_IGNORE_ENABLE | TRACE_EVENT_FL_USE_CALL_FILTER, \
}; \
struct ftrace_event_call __used \
__attribute__((section("_ftrace_events"))) *__event_##call = &event_##call;
diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
index 8388bc9..1500aeb 100644
--- a/kernel/trace/trace_functions_graph.c
+++ b/kernel/trace/trace_functions_graph.c
@@ -230,7 +230,7 @@ int __trace_graph_entry(struct trace_array *tr,
return 0;
entry = ring_buffer_event_data(event);
entry->graph_ent = *trace;
- if (!filter_current_check_discard(buffer, call, entry, event))
+ if (!call_filter_check_discard(call, entry, buffer, event))
__buffer_unlock_commit(buffer, event);
return 1;
@@ -335,7 +335,7 @@ void __trace_graph_return(struct trace_array *tr,
return;
entry = ring_buffer_event_data(event);
entry->ret = *trace;
- if (!filter_current_check_discard(buffer, call, entry, event))
+ if (!call_filter_check_discard(call, entry, buffer, event))
__buffer_unlock_commit(buffer, event);
}
diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
index 7ed6976..60180ad 100644
--- a/kernel/trace/trace_kprobe.c
+++ b/kernel/trace/trace_kprobe.c
@@ -819,7 +819,7 @@ __kprobe_trace_func(struct trace_probe *tp, struct pt_regs *regs,
entry->ip = (unsigned long)tp->rp.kp.addr;
store_trace_args(sizeof(*entry), tp, regs, (u8 *)&entry[1], dsize);
- if (!filter_current_check_discard(buffer, call, entry, event))
+ if (!filter_check_discard(ftrace_file, entry, buffer, event))
trace_buffer_unlock_commit_regs(buffer, event,
irq_flags, pc, regs);
}
@@ -868,7 +868,7 @@ __kretprobe_trace_func(struct trace_probe *tp, struct kretprobe_instance *ri,
entry->ret_ip = (unsigned long)ri->ret_addr;
store_trace_args(sizeof(*entry), tp, regs, (u8 *)&entry[1], dsize);
- if (!filter_current_check_discard(buffer, call, entry, event))
+ if (!filter_check_discard(ftrace_file, entry, buffer, event))
trace_buffer_unlock_commit_regs(buffer, event,
irq_flags, pc, regs);
}
diff --git a/kernel/trace/trace_mmiotrace.c b/kernel/trace/trace_mmiotrace.c
index a5e8f48..4f06ac0 100644
--- a/kernel/trace/trace_mmiotrace.c
+++ b/kernel/trace/trace_mmiotrace.c
@@ -323,7 +323,7 @@ static void __trace_mmiotrace_rw(struct trace_array *tr,
entry = ring_buffer_event_data(event);
entry->rw = *rw;
- if (!filter_check_discard(call, entry, buffer, event))
+ if (!call_filter_check_discard(call, entry, buffer, event))
trace_buffer_unlock_commit(buffer, event, 0, pc);
}
@@ -353,7 +353,7 @@ static void __trace_mmiotrace_map(struct trace_array *tr,
entry = ring_buffer_event_data(event);
entry->map = *map;
- if (!filter_check_discard(call, entry, buffer, event))
+ if (!call_filter_check_discard(call, entry, buffer, event))
trace_buffer_unlock_commit(buffer, event, 0, pc);
}
diff --git a/kernel/trace/trace_sched_switch.c b/kernel/trace/trace_sched_switch.c
index 4e98e3b..3f34dc9 100644
--- a/kernel/trace/trace_sched_switch.c
+++ b/kernel/trace/trace_sched_switch.c
@@ -45,7 +45,7 @@ tracing_sched_switch_trace(struct trace_array *tr,
entry->next_state = next->state;
entry->next_cpu = task_cpu(next);
- if (!filter_check_discard(call, entry, buffer, event))
+ if (!call_filter_check_discard(call, entry, buffer, event))
trace_buffer_unlock_commit(buffer, event, flags, pc);
}
@@ -101,7 +101,7 @@ tracing_sched_wakeup_trace(struct trace_array *tr,
entry->next_state = wakee->state;
entry->next_cpu = task_cpu(wakee);
- if (!filter_check_discard(call, entry, buffer, event))
+ if (!call_filter_check_discard(call, entry, buffer, event))
trace_buffer_unlock_commit(buffer, event, flags, pc);
}
diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index 609190b..516df72 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -324,11 +324,9 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
(FTRACE_EVENT_FL_SOFT_DISABLED | FTRACE_EVENT_FL_TRIGGER_MODE)) ==
FTRACE_EVENT_FL_SOFT_DISABLED)
return;
-
sys_data = syscall_nr_to_meta(syscall_nr);
if (!sys_data)
return;
-
size = sizeof(*entry) + sizeof(unsigned long) * sys_data->nb_args;
local_save_flags(irq_flags);
@@ -349,8 +347,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
ring_buffer_discard_commit(buffer, event);
- else if (!filter_current_check_discard(buffer, sys_data->enter_event,
- entry, event))
+ else if (!filter_check_discard(ftrace_file, entry, buffer, event))
trace_current_buffer_unlock_commit(buffer, event,
irq_flags, pc);
if (__tm)
@@ -405,8 +402,7 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
ring_buffer_discard_commit(buffer, event);
- else if (!filter_current_check_discard(buffer, sys_data->exit_event,
- entry, event))
+ else if (!filter_check_discard(ftrace_file, entry, buffer, event))
trace_current_buffer_unlock_commit(buffer, event,
irq_flags, pc);
if (__tm)
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index d5d0cd3..df1eb12 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -128,6 +128,7 @@ alloc_trace_uprobe(const char *group, const char *event, int nargs, bool is_ret)
if (is_ret)
tu->consumer.ret_handler = uretprobe_dispatcher;
init_trace_uprobe_filter(&tu->filter);
+ tu->call.flags |= TRACE_EVENT_FL_USE_CALL_FILTER;
return tu;
error:
@@ -541,7 +542,7 @@ static void uprobe_trace_print(struct trace_uprobe *tu,
for (i = 0; i < tu->nr_args; i++)
call_fetch(&tu->args[i].fetch, regs, data + tu->args[i].offset);
- if (!filter_current_check_discard(buffer, call, entry, event))
+ if (!call_filter_check_discard(call, entry, buffer, event))
trace_buffer_unlock_commit(buffer, event, 0, 0);
}
--
1.7.11.4
Provide a basic overview of trace event triggers and document the
available trigger commands, along with a few simple examples.
Signed-off-by: Tom Zanussi <[email protected]>
---
Documentation/trace/events.txt | 207 +++++++++++++++++++++++++++++++++++++++++
1 file changed, 207 insertions(+)
diff --git a/Documentation/trace/events.txt b/Documentation/trace/events.txt
index 37732a2..c94435d 100644
--- a/Documentation/trace/events.txt
+++ b/Documentation/trace/events.txt
@@ -287,3 +287,210 @@ their old filters):
prev_pid == 0
# cat sched_wakeup/filter
common_pid == 0
+
+6. Event triggers
+=================
+
+Trace events can be made to conditionally invoke trigger 'commands'
+which can take various forms and are described in detail below;
+examples would be enabling or disabling other trace events or invoking
+a stack trace whenever the trace event is hit. Whenever a trace event
+with attached triggers is invoked, the set of trigger commands
+associated with that event is invoked. Any given trigger can
+additionally have an event filter of the same form as described in
+section 5 (Event filtering) associated with it - the command will only
+be invoked if the event being invoked passes the associated filter.
+If no filter is associated with the trigger, it always passes.
+
+Triggers are added to and removed from a particular event by writing
+trigger expressions to the 'trigger' file for the given event.
+
+A given event can have any number of triggers associated with it,
+subject to any restrictions that individual commands may have in that
+regard.
+
+Event triggers are implemented on top of "soft" mode, which means that
+whenever a trace event has one or more triggers associated with it,
+the event is activated even if it isn't actually enabled, but is
+disabled in a "soft" mode. That is, the tracepoint will be called,
+but just will not be traced, unless of course it's actually enabled.
+This scheme allows triggers to be invoked even for events that aren't
+enabled, and also allows the current event filter implementation to be
+used for conditionally invoking triggers.
+
+The syntax for event triggers is roughly based on the syntax for
+set_ftrace_filter 'ftrace filter commands' (see the 'Filter commands'
+section of Documentation/trace/ftrace.txt), but there are major
+differences and the implementation isn't currently tied to it in any
+way, so beware about making generalizations between the two.
+
+6.1 Expression syntax
+---------------------
+
+Triggers are added by echoing the command to the 'trigger' file:
+
+ # echo 'command[:count] [if filter]' > trigger
+
+Triggers are removed by echoing the same command but starting with '!'
+to the 'trigger' file:
+
+ # echo '!command[:count] [if filter]' > trigger
+
+The [if filter] part isn't used in matching commands when removing, so
+leaving that off in a '!' command will accomplish the same thing as
+having it in.
+
+The filter syntax is the same as that described in the 'Event
+filtering' section above.
+
+For ease of use, writing to the trigger file using '>' currently just
+adds or removes a single trigger and there's no explicit '>>' support
+('>' actually behaves like '>>') or truncation support to remove all
+triggers (you have to use '!' for each one added).
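+
+For example, the following adds two triggers to the same event; a
+subsequent read of the 'trigger' file lists both (the output shown is
+roughly what the trigger print() callbacks produce):
+
+ # echo 'stacktrace:5' > /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+ # echo 'traceoff' > /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+ traceoff:unlimited
+ stacktrace:count=5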
+
+6.2 Supported trigger commands
+------------------------------
+
+The following commands are supported:
+
+- enable_event/disable_event
+
+ These commands can enable or disable another trace event whenever
+ the triggering event is hit. When these commands are registered,
+ the other trace event is activated, but disabled in a "soft" mode.
+ That is, the tracepoint will be called, but just will not be traced.
+ The event tracepoint stays in this mode as long as there's a trigger
+ in effect that can trigger it.
+
+ For example, the following trigger causes kmalloc events to be
+ traced when a read system call is entered, and the :1 at the end
+ specifies that this enablement happens only once:
+
+ # echo 'enable_event:kmem:kmalloc:1' > \
+ /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger
+
+ The following trigger causes kmalloc events to stop being traced
+ when a read system call exits. This disablement happens on every
+ read system call exit:
+
+ # echo 'disable_event:kmem:kmalloc' > \
+ /sys/kernel/debug/tracing/events/syscalls/sys_exit_read/trigger
+
+ The format is:
+
+ enable_event:<system>:<event>[:count]
+ disable_event:<system>:<event>[:count]
+
+ To remove the above commands:
+
+ # echo '!enable_event:kmem:kmalloc:1' > \
+ /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger
+
+ # echo '!disable_event:kmem:kmalloc' > \
+ /sys/kernel/debug/tracing/events/syscalls/sys_exit_read/trigger
+
+ Note that there can be any number of enable/disable_event triggers
+ per triggering event, but there can only be one trigger per
+ triggered event. For example, sys_enter_read can have triggers enabling both
+ kmem:kmalloc and sched:sched_switch, but can't have two kmem:kmalloc
+ versions such as kmem:kmalloc and kmem:kmalloc:1 or 'kmem:kmalloc if
+ bytes_req == 256' and 'kmem:kmalloc if bytes_alloc == 256' (they
+ could be combined into a single filter on kmem:kmalloc though).
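+
+ For instance, the last pair above could instead be written as a
+ single trigger with a combined filter:
+
+ # echo 'enable_event:kmem:kmalloc if bytes_req == 256 || bytes_alloc == 256' > \
+ /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger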
+
+- stacktrace
+
+ This command dumps a stacktrace in the trace buffer whenever the
+ triggering event occurs.
+
+ For example, the following trigger dumps a stacktrace every time the
+ kmalloc tracepoint is hit:
+
+ # echo 'stacktrace' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ The following trigger dumps a stacktrace the first 5 times a kmalloc
+ request happens with a size >= 64K
+
+ # echo 'stacktrace:5 if bytes_req >= 65536' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ The format is:
+
+ stacktrace[:count]
+
+ To remove the above commands:
+
+ # echo '!stacktrace' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ # echo '!stacktrace:5 if bytes_req >= 65536' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ The latter can also be removed more simply by the following (without
+ the filter):
+
+ # echo '!stacktrace:5' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ Note that there can be only one stacktrace trigger per triggering
+ event.
+
+- snapshot
+
+ This command causes a snapshot to be triggered whenever the
+ triggering event occurs.
+
+ The following command creates a snapshot every time a block request
+ queue is unplugged with a depth > 1. If you were tracing a set of
+ events or functions at the time, the snapshot trace buffer would
+ capture those events when the trigger event occurred:
+
+ # echo 'snapshot if nr_rq > 1' > \
+ /sys/kernel/debug/tracing/events/block/block_unplug/trigger
+
+ To only snapshot once:
+
+ # echo 'snapshot:1 if nr_rq > 1' > \
+ /sys/kernel/debug/tracing/events/block/block_unplug/trigger
+
+ To remove the above commands:
+
+ # echo '!snapshot if nr_rq > 1' > \
+ /sys/kernel/debug/tracing/events/block/block_unplug/trigger
+
+ # echo '!snapshot:1 if nr_rq > 1' > \
+ /sys/kernel/debug/tracing/events/block/block_unplug/trigger
+
+ Note that there can be only one snapshot trigger per triggering
+ event.
+
+- traceon/traceoff
+
+ These commands turn tracing on and off when the specified events are
+ hit. The parameter determines how many times the tracing system is
+ turned on and off. If unspecified, there is no limit.
+
+ The following command turns tracing off the first time a block
+ request queue is unplugged with a depth > 1. If you were tracing a
+ set of events or functions at the time, you could then examine the
+ trace buffer to see the sequence of events that led up to the
+ trigger event:
+
+ # echo 'traceoff:1 if nr_rq > 1' > \
+ /sys/kernel/debug/tracing/events/block/block_unplug/trigger
+
+ To always disable tracing when nr_rq > 1:
+
+ # echo 'traceoff if nr_rq > 1' > \
+ /sys/kernel/debug/tracing/events/block/block_unplug/trigger
+
+ To remove the above commands:
+
+ # echo '!traceoff:1 if nr_rq > 1' > \
+ /sys/kernel/debug/tracing/events/block/block_unplug/trigger
+
+ # echo '!traceoff if nr_rq > 1' > \
+ /sys/kernel/debug/tracing/events/block/block_unplug/trigger
+
+ Note that there can be only one traceon or traceoff trigger per
+ triggering event.
--
1.7.11.4
Add 'enable_event' and 'disable_event' event_command commands.
enable_event and disable_event event triggers are added by the user
via these commands in a similar way and using practically the same
syntax as the analogous 'enable_event' and 'disable_event' ftrace
function commands, but instead of writing to the set_ftrace_filter
file, the enable_event and disable_event triggers are written to the
per-event 'trigger' files:
echo 'enable_event:system:event' > .../othersys/otherevent/trigger
echo 'disable_event:system:event' > .../othersys/otherevent/trigger
The above commands will enable or disable the 'system:event' trace
events whenever the othersys:otherevent events are hit.
This also adds a 'count' version that limits the number of times the
command will be invoked:
echo 'enable_event:system:event:N' > .../othersys/otherevent/trigger
echo 'disable_event:system:event:N' > .../othersys/otherevent/trigger
Where N is the number of times the command will be invoked.
The above commands will enable or disable the 'system:event'
trace events whenever the othersys:otherevent events are hit, but only
N times.
This also makes the find_event_file() helper function extern, since
it's useful from other places, such as the event triggers code.
Signed-off-by: Tom Zanussi <[email protected]>
---
include/linux/ftrace_event.h | 1 +
kernel/trace/trace.h | 4 +
kernel/trace/trace_events.c | 2 +-
kernel/trace/trace_events_trigger.c | 365 ++++++++++++++++++++++++++++++++++++
4 files changed, 371 insertions(+), 1 deletion(-)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 51c141e..57ca386 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -319,6 +319,7 @@ enum trigger_mode {
TM_TRACE_ONOFF = (1 << 0),
TM_SNAPSHOT = (1 << 1),
TM_STACKTRACE = (1 << 2),
+ TM_EVENT_ENABLE = (1 << 3),
};
extern void destroy_preds(struct ftrace_event_call *call);
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 6425507..c06d2d2 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1018,6 +1018,10 @@ extern void trace_event_enable_cmd_record(bool enable);
extern int event_trace_add_tracer(struct dentry *parent, struct trace_array *tr);
extern int event_trace_del_tracer(struct trace_array *tr);
+extern struct ftrace_event_file *find_event_file(struct trace_array *tr,
+ const char *system,
+ const char *event);
+
extern struct mutex event_mutex;
extern struct list_head ftrace_events;
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index db8a93c..8ac5ce0 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -1969,7 +1969,7 @@ struct event_probe_data {
bool enable;
};
-static struct ftrace_event_file *
+struct ftrace_event_file *
find_event_file(struct trace_array *tr, const char *system, const char *event)
{
struct ftrace_event_file *file;
diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index dd14e36..2300fc8 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -803,6 +803,359 @@ static __init void unregister_trigger_traceon_traceoff_cmds(void)
&trigger_cmd_mutex);
}
+/* Avoid typos */
+#define ENABLE_EVENT_STR "enable_event"
+#define DISABLE_EVENT_STR "disable_event"
+
+static void
+event_enable_trigger(void **_data)
+{
+ struct event_trigger_data **p = (struct event_trigger_data **)_data;
+ struct event_trigger_data *data = *p;
+
+ if (!data)
+ return;
+
+ if (data->enable)
+ clear_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &data->file->flags);
+ else
+ set_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &data->file->flags);
+}
+
+static void
+event_enable_count_trigger(void **_data)
+{
+ struct event_trigger_data **p = (struct event_trigger_data **)_data;
+ struct event_trigger_data *data = *p;
+
+ if (!data)
+ return;
+
+ if (!data->count)
+ return;
+
+ /* Skip if the event is in a state we want to switch to */
+ if (data->enable == !(data->file->flags & FTRACE_EVENT_FL_SOFT_DISABLED))
+ return;
+
+ if (data->count != -1)
+ (data->count)--;
+
+ event_enable_trigger(_data);
+}
+
+static int
+event_enable_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
+ void *_data)
+{
+ struct event_trigger_data *data = _data;
+
+ seq_printf(m, "%s:%s:%s",
+ data->enable ? ENABLE_EVENT_STR : DISABLE_EVENT_STR,
+ data->file->event_call->class->system,
+ data->file->event_call->name);
+
+ if (data->count == -1)
+ seq_puts(m, ":unlimited");
+ else
+ seq_printf(m, ":count=%ld", data->count);
+
+ if (data->filter_str)
+ seq_printf(m, " if %s\n", data->filter_str);
+ else
+ seq_puts(m, "\n");
+
+ return 0;
+}
+
+static void
+event_enable_trigger_free(struct event_trigger_ops *ops, void **_data)
+{
+ struct event_trigger_data **p = (struct event_trigger_data **)_data;
+ struct event_trigger_data *data = *p;
+
+ if (WARN_ON_ONCE(data->ref <= 0))
+ return;
+
+ data->ref--;
+ if (!data->ref) {
+ /* Remove the TRIGGER_MODE flag */
+ trace_event_trigger_enable_disable(data->file, 0);
+ module_put(data->file->event_call->mod);
+ kfree(data);
+ }
+}
+
+static struct event_trigger_ops event_enable_trigger_ops = {
+ .func = event_enable_trigger,
+ .print = event_enable_trigger_print,
+ .init = event_trigger_init,
+ .free = event_enable_trigger_free,
+};
+
+static struct event_trigger_ops event_enable_count_trigger_ops = {
+ .func = event_enable_count_trigger,
+ .print = event_enable_trigger_print,
+ .init = event_trigger_init,
+ .free = event_enable_trigger_free,
+};
+
+static struct event_trigger_ops event_disable_trigger_ops = {
+ .func = event_enable_trigger,
+ .print = event_enable_trigger_print,
+ .init = event_trigger_init,
+ .free = event_enable_trigger_free,
+};
+
+static struct event_trigger_ops event_disable_count_trigger_ops = {
+ .func = event_enable_count_trigger,
+ .print = event_enable_trigger_print,
+ .init = event_trigger_init,
+ .free = event_enable_trigger_free,
+};
+
+static int
+event_enable_trigger_func(struct event_command *cmd_ops, void *cmd_data,
+ char *glob, char *cmd, char *param, int enabled)
+{
+ struct trace_array *tr = top_trace_array();
+ struct ftrace_event_file *file;
+ struct event_trigger_ops *trigger_ops;
+ struct event_trigger_data *trigger_data;
+ const char *system;
+ const char *event;
+ char *trigger;
+ char *number;
+ bool enable;
+ int ret;
+
+ if (!enabled)
+ return -EINVAL;
+
+ if (!param)
+ return -EINVAL;
+
+ /* separate the trigger from the filter (s:e:n [if filter]) */
+ trigger = strsep(&param, " \t");
+ if (!trigger)
+ return -EINVAL;
+
+ system = strsep(&trigger, ":");
+ if (!trigger)
+ return -EINVAL;
+
+ event = strsep(&trigger, ":");
+
+ mutex_lock(&event_mutex);
+
+ ret = -EINVAL;
+ file = find_event_file(tr, system, event);
+ if (!file)
+ goto out;
+
+ enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
+
+ trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);
+
+ ret = -ENOMEM;
+ trigger_data = kzalloc(sizeof(*trigger_data), GFP_KERNEL);
+ if (!trigger_data)
+ goto out;
+
+ trigger_data->enable = enable;
+ trigger_data->count = -1;
+ trigger_data->file = file;
+ trigger_data->ops = trigger_ops;
+ INIT_LIST_HEAD(&trigger_data->list);
+ RCU_INIT_POINTER(trigger_data->filter, NULL);
+
+ if (glob[0] == '!') {
+ cmd_ops->unreg(glob+1, trigger_ops, trigger_data, cmd_data);
+ kfree(trigger_data);
+ ret = 0;
+ goto out;
+ }
+
+ if (trigger) {
+ number = strsep(&trigger, ":");
+
+ ret = -EINVAL;
+ if (!strlen(number))
+ goto out_free;
+
+ /*
+ * We use the callback data field (which is a pointer)
+ * as our counter.
+ */
+ ret = kstrtoul(number, 0, &trigger_data->count);
+ if (ret)
+ goto out_free;
+ }
+
+ if (!param) /* if param is non-empty, it's supposed to be a filter */
+ goto out_reg;
+
+ if (!cmd_ops->set_filter)
+ goto out_reg;
+
+ ret = cmd_ops->set_filter(param, trigger_data, cmd_data);
+ if (ret < 0)
+ goto out_free;
+
+ out_reg:
+ /* Don't let event modules unload while probe registered */
+ ret = try_module_get(file->event_call->mod);
+ if (!ret) {
+ ret = -EBUSY;
+ goto out_free;
+ }
+
+ ret = trace_event_enable_disable(file, 1, 1);
+ if (ret < 0)
+ goto out_put;
+ ret = cmd_ops->reg(glob, trigger_ops, trigger_data, cmd_data);
+ /*
+ * The above returns on success the # of functions enabled,
+ * but if it didn't find any functions it returns zero.
+ * Consider no functions a failure too.
+ */
+ if (!ret) {
+ ret = -ENOENT;
+ goto out_disable;
+ } else if (ret < 0)
+ goto out_disable;
+ /* Just return zero, not the number of enabled functions */
+ ret = 0;
+ out:
+ mutex_unlock(&event_mutex);
+ return ret;
+
+ out_disable:
+ trace_event_enable_disable(file, 0, 1);
+ out_put:
+ module_put(file->event_call->mod);
+ out_free:
+ kfree(trigger_data);
+ goto out;
+}
+
+static int event_enable_register_trigger(char *glob,
+ struct event_trigger_ops *ops,
+ void *trigger_data, void *cmd_data)
+{
+ struct trigger_iterator *iter = cmd_data;
+ struct event_trigger_data *data = trigger_data;
+ struct event_trigger_data *test;
+ int ret = 0;
+
+ list_for_each_entry_rcu(test, &iter->file->triggers, list) {
+ if (test->file == data->file) {
+ ret = -EEXIST;
+ goto out;
+ }
+ }
+
+ if (data->ops->init) {
+ ret = data->ops->init(data->ops, (void **)&data);
+ if (ret < 0)
+ goto out;
+ }
+
+ list_add_rcu(&data->list, &iter->file->triggers);
+ ret++;
+
+ if (trace_event_trigger_enable_disable(iter->file, 1) < 0) {
+ list_del_rcu(&data->list);
+ ret--;
+ }
+out:
+ return ret;
+}
+
+static void event_enable_unregister_trigger(char *glob,
+ struct event_trigger_ops *ops,
+ void *trigger_data, void *cmd_data)
+{
+ struct trigger_iterator *iter = cmd_data;
+ struct event_trigger_data *test = trigger_data;
+ struct event_trigger_data *data;
+ bool unregistered = false;
+
+ list_for_each_entry_rcu(data, &iter->file->triggers, list) {
+ if (data->file == test->file) {
+ unregistered = true;
+ list_del_rcu(&data->list);
+ trace_event_trigger_enable_disable(iter->file, 0);
+ break;
+ }
+ }
+
+ if (unregistered && data->ops->free)
+ data->ops->free(data->ops, (void **)&data);
+}
+
+static struct event_trigger_ops *
+event_enable_get_trigger_ops(char *cmd, char *param)
+{
+ struct event_trigger_ops *ops;
+ bool enable;
+
+ enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
+
+ if (enable)
+ ops = param ? &event_enable_count_trigger_ops :
+ &event_enable_trigger_ops;
+ else
+ ops = param ? &event_disable_count_trigger_ops :
+ &event_disable_trigger_ops;
+
+ return ops;
+}
+
+static struct event_command trigger_enable_cmd = {
+ .name = ENABLE_EVENT_STR,
+ .trigger_mode = TM_EVENT_ENABLE,
+ .func = event_enable_trigger_func,
+ .reg = event_enable_register_trigger,
+ .unreg = event_enable_unregister_trigger,
+ .get_trigger_ops = event_enable_get_trigger_ops,
+};
+
+static struct event_command trigger_disable_cmd = {
+ .name = DISABLE_EVENT_STR,
+ .trigger_mode = TM_EVENT_ENABLE,
+ .func = event_enable_trigger_func,
+ .reg = event_enable_register_trigger,
+ .unreg = event_enable_unregister_trigger,
+ .get_trigger_ops = event_enable_get_trigger_ops,
+};
+
+static __init void unregister_trigger_enable_disable_cmds(void)
+{
+ unregister_event_command(&trigger_enable_cmd,
+ &trigger_commands,
+ &trigger_cmd_mutex);
+ unregister_event_command(&trigger_disable_cmd,
+ &trigger_commands,
+ &trigger_cmd_mutex);
+}
+
+static __init int register_trigger_enable_disable_cmds(void)
+{
+ int ret;
+
+ ret = register_event_command(&trigger_enable_cmd, &trigger_commands,
+ &trigger_cmd_mutex);
+ if (WARN_ON(ret < 0))
+ return ret;
+ ret = register_event_command(&trigger_disable_cmd, &trigger_commands,
+ &trigger_cmd_mutex);
+ if (WARN_ON(ret < 0))
+ unregister_trigger_enable_disable_cmds();
+
+ return ret;
+}
+
static __init int register_trigger_traceon_traceoff_cmds(void)
{
int ret;
@@ -848,5 +1201,17 @@ __init int register_trigger_cmds(void)
return ret;
}
+ ret = register_trigger_enable_disable_cmds();
+ if (ret) {
+ unregister_trigger_traceon_traceoff_cmds();
+ unregister_event_command(&trigger_snapshot_cmd,
+ &trigger_commands,
+ &trigger_cmd_mutex);
+ unregister_event_command(&trigger_stacktrace_cmd,
+ &trigger_commands,
+ &trigger_cmd_mutex);
+ return ret;
+ }
+
return 0;
}
--
1.7.11.4
Add 'traceon' and 'traceoff' event_command commands. traceon
and traceoff event triggers are added by the user via these commands
in a similar way and using practically the same syntax as the
analogous 'traceon' and 'traceoff' ftrace function commands, but
instead of writing to the set_ftrace_filter file, the traceon and
traceoff triggers are written to the per-event 'trigger' files:
echo 'traceon' > .../tracing/events/somesys/someevent/trigger
echo 'traceoff' > .../tracing/events/somesys/someevent/trigger
The above commands will turn tracing on or off whenever someevent is
hit.
This also adds a 'count' version that limits the number of times the
command will be invoked:
echo 'traceon:N' > .../tracing/events/somesys/someevent/trigger
echo 'traceoff:N' > .../tracing/events/somesys/someevent/trigger
Where N is the number of times the command will be invoked.
The above commands will turn tracing on or off whenever someevent
is hit, but only N times.
event_trigger_init() and event_trigger_free() are meant to be common
implementations of the event_trigger_ops init() and free() ops.
Most trigger_ops implementations will use these, but some will
override and possibly reuse them.
Signed-off-by: Tom Zanussi <[email protected]>
---
include/linux/ftrace_event.h | 1 +
kernel/trace/trace_events_trigger.c | 313 ++++++++++++++++++++++++++++++++++++
2 files changed, 314 insertions(+)
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 6cd5bbc..c794686 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -316,6 +316,7 @@ struct ftrace_event_file {
enum trigger_mode {
TM_NONE = (0),
+ TM_TRACE_ONOFF = (1 << 0),
};
extern void destroy_preds(struct ftrace_event_call *call);
diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
index 1f0565c..f2b97b6 100644
--- a/kernel/trace/trace_events_trigger.c
+++ b/kernel/trace/trace_events_trigger.c
@@ -355,7 +355,320 @@ static void unregister_trigger(char *glob, struct event_trigger_ops *ops,
data->ops->free(data->ops, (void **)&data);
}
+int
+event_trigger_print(const char *name, struct seq_file *m,
+ void *data, char *filter_str)
+{
+ long count = (long)data;
+
+ seq_printf(m, "%s", name);
+
+ if (count == -1)
+ seq_puts(m, ":unlimited");
+ else
+ seq_printf(m, ":count=%ld", count);
+
+ if (filter_str)
+ seq_printf(m, " if %s\n", filter_str);
+ else
+ seq_puts(m, "\n");
+
+ return 0;
+}
+
+static int
+event_trigger_init(struct event_trigger_ops *ops, void **_data)
+{
+ struct event_trigger_data **p = (struct event_trigger_data **)_data;
+ struct event_trigger_data *data = *p;
+
+ data->ref++;
+ return 0;
+}
+
+static void
+event_trigger_free(struct event_trigger_ops *ops, void **_data)
+{
+ struct event_trigger_data **p = (struct event_trigger_data **)_data;
+ struct event_trigger_data *data = *p;
+
+ if (WARN_ON_ONCE(data->ref <= 0))
+ return;
+
+ data->ref--;
+ if (!data->ref)
+ kfree(data);
+}
+
+static int
+event_trigger_callback(struct event_command *cmd_ops, void *cmd_data,
+ char *glob, char *cmd, char *param, int enabled)
+{
+ struct event_trigger_ops *trigger_ops;
+ struct event_trigger_data *trigger_data;
+ char *trigger = NULL;
+ char *number;
+ int ret;
+
+ if (!enabled)
+ return -EINVAL;
+
+ /* separate the trigger from the filter (t:n [if filter]) */
+ if (param && isdigit(param[0]))
+ trigger = strsep(&param, " \t");
+
+ mutex_lock(&event_mutex);
+
+ trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);
+
+ ret = -ENOMEM;
+ trigger_data = kzalloc(sizeof(*trigger_data), GFP_KERNEL);
+ if (!trigger_data)
+ goto out;
+
+ trigger_data->count = -1;
+ trigger_data->ops = trigger_ops;
+ trigger_data->mode = cmd_ops->trigger_mode;
+ INIT_LIST_HEAD(&trigger_data->list);
+
+ if (glob[0] == '!') {
+ cmd_ops->unreg(glob+1, trigger_ops, trigger_data, cmd_data);
+ kfree(trigger_data);
+ ret = 0;
+ goto out;
+ }
+
+ if (trigger) {
+ number = strsep(&trigger, ":");
+
+ ret = -EINVAL;
+ if (!strlen(number))
+ goto out_free;
+
+ /*
+ * We use the callback data field (which is a pointer)
+ * as our counter.
+ */
+ ret = kstrtoul(number, 0, &trigger_data->count);
+ if (ret)
+ goto out_free;
+ }
+
+ if (!param) /* if param is non-empty, it's supposed to be a filter */
+ goto out_reg;
+
+ if (!cmd_ops->set_filter)
+ goto out_reg;
+
+ ret = cmd_ops->set_filter(param, trigger_data, cmd_data);
+ if (ret < 0)
+ goto out_free;
+
+ out_reg:
+ ret = cmd_ops->reg(glob, trigger_ops, trigger_data, cmd_data);
+ /*
+ * The above returns on success the # of functions enabled,
+ * but if it didn't find any functions it returns zero.
+ * Consider no functions a failure too.
+ */
+ if (!ret) {
+ ret = -ENOENT;
+ goto out_free;
+ } else if (ret < 0)
+ goto out_free;
+ ret = 0;
+ out:
+ mutex_unlock(&event_mutex);
+ return ret;
+
+ out_free:
+ kfree(trigger_data);
+ goto out;
+}
+
+static void
+traceon_trigger(void **_data)
+{
+ struct event_trigger_data **p = (struct event_trigger_data **)_data;
+ struct event_trigger_data *data = *p;
+
+ if (!data)
+ return;
+
+ if (tracing_is_on())
+ return;
+
+ tracing_on();
+}
+
+static void
+traceon_count_trigger(void **_data)
+{
+ struct event_trigger_data **p = (struct event_trigger_data **)_data;
+ struct event_trigger_data *data = *p;
+
+ if (!data)
+ return;
+
+ if (!data->count)
+ return;
+
+ if (data->count != -1)
+ (data->count)--;
+
+ traceon_trigger(_data);
+}
+
+static void
+traceoff_trigger(void **_data)
+{
+ struct event_trigger_data **p = (struct event_trigger_data **)_data;
+ struct event_trigger_data *data = *p;
+
+ if (!data)
+ return;
+
+ if (!tracing_is_on())
+ return;
+
+ tracing_off();
+}
+
+static void
+traceoff_count_trigger(void **_data)
+{
+ struct event_trigger_data **p = (struct event_trigger_data **)_data;
+ struct event_trigger_data *data = *p;
+
+ if (!data)
+ return;
+
+ if (!data->count)
+ return;
+
+ if (data->count != -1)
+ (data->count)--;
+
+ traceoff_trigger(_data);
+}
+
+static int
+traceon_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
+ void *_data)
+{
+ struct event_trigger_data *data = _data;
+
+ return event_trigger_print("traceon", m, (void *)data->count,
+ data->filter_str);
+}
+
+static int
+traceoff_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
+ void *_data)
+{
+ struct event_trigger_data *data = _data;
+
+ return event_trigger_print("traceoff", m, (void *)data->count,
+ data->filter_str);
+}
+
+static struct event_trigger_ops traceon_trigger_ops = {
+ .func = traceon_trigger,
+ .print = traceon_trigger_print,
+ .init = event_trigger_init,
+ .free = event_trigger_free,
+};
+
+static struct event_trigger_ops traceon_count_trigger_ops = {
+ .func = traceon_count_trigger,
+ .print = traceon_trigger_print,
+ .init = event_trigger_init,
+ .free = event_trigger_free,
+};
+
+static struct event_trigger_ops traceoff_trigger_ops = {
+ .func = traceoff_trigger,
+ .print = traceoff_trigger_print,
+ .init = event_trigger_init,
+ .free = event_trigger_free,
+};
+
+static struct event_trigger_ops traceoff_count_trigger_ops = {
+ .func = traceoff_count_trigger,
+ .print = traceoff_trigger_print,
+ .init = event_trigger_init,
+ .free = event_trigger_free,
+};
+
+static struct event_trigger_ops *
+onoff_get_trigger_ops(char *cmd, char *param)
+{
+ struct event_trigger_ops *ops;
+
+ /* we register both traceon and traceoff to this callback */
+ if (strcmp(cmd, "traceon") == 0)
+ ops = param ? &traceon_count_trigger_ops :
+ &traceon_trigger_ops;
+ else
+ ops = param ? &traceoff_count_trigger_ops :
+ &traceoff_trigger_ops;
+
+ return ops;
+}
+
+static struct event_command trigger_traceon_cmd = {
+ .name = "traceon",
+ .trigger_mode = TM_TRACE_ONOFF,
+ .func = event_trigger_callback,
+ .reg = register_trigger,
+ .unreg = unregister_trigger,
+ .get_trigger_ops = onoff_get_trigger_ops,
+};
+
+static struct event_command trigger_traceoff_cmd = {
+ .name = "traceoff",
+ .trigger_mode = TM_TRACE_ONOFF,
+ .func = event_trigger_callback,
+ .reg = register_trigger,
+ .unreg = unregister_trigger,
+ .get_trigger_ops = onoff_get_trigger_ops,
+};
+
+static __init void unregister_trigger_traceon_traceoff_cmds(void)
+{
+ unregister_event_command(&trigger_traceon_cmd,
+ &trigger_commands,
+ &trigger_cmd_mutex);
+ unregister_event_command(&trigger_traceoff_cmd,
+ &trigger_commands,
+ &trigger_cmd_mutex);
+}
+
+static __init int register_trigger_traceon_traceoff_cmds(void)
+{
+ int ret;
+
+ ret = register_event_command(&trigger_traceon_cmd, &trigger_commands,
+ &trigger_cmd_mutex);
+ if (WARN_ON(ret < 0))
+ return ret;
+ ret = register_event_command(&trigger_traceoff_cmd, &trigger_commands,
+ &trigger_cmd_mutex);
+ if (WARN_ON(ret < 0))
+ unregister_trigger_traceon_traceoff_cmds();
+
+ return ret;
+}
+
__init int register_trigger_cmds(void)
{
+ int ret;
+
+ ret = register_trigger_traceon_traceoff_cmds();
+ if (ret) {
+ unregister_trigger_traceon_traceoff_cmds();
+ return ret;
+ }
+
return 0;
}
--
1.7.11.4
Add a 'trigger' file for each trace event, enabling 'trace event
triggers' to be set for trace events.
'trace event triggers' are patterned after the existing 'ftrace
function triggers' implementation except that triggers are written to
per-event 'trigger' files instead of to a single file such as the
'set_ftrace_filter' used for ftrace function triggers.
The implementation is meant to be entirely separate from ftrace
function triggers, in order to keep the respective implementations
relatively simple and to allow them to diverge.
The event trigger functionality is built on top of SOFT_DISABLE
functionality. It adds a TRIGGER_MODE bit to the ftrace_event_file
flags which is checked when any trace event fires. Triggers set for a
particular event need to be checked regardless of whether that event
is actually enabled or not - getting an event to fire even if it's not
enabled is what's already implemented by SOFT_DISABLE mode, so trigger
mode directly reuses that. Event triggers essentially inherit the soft
disable logic in __ftrace_event_enable_disable() while adding a bit of
logic and trigger reference counting via tm_ref on top of that in a
new trace_event_trigger_enable_disable() function. Because the base
__ftrace_event_enable_disable() code now needs to be invoked from
outside trace_events.c, a wrapper is also added for those usages.
The triggers for an event are actually invoked via a new function,
event_triggers_call(), and code is also added to invoke them for
ftrace_raw_event calls as well as syscall events.
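Roughly, the per-event fast path then has the following shape
(condensed from the ftrace_raw_event template in this patch; buffer
reservation and field assignment are elided):

	enum trigger_mode __tm = TM_NONE;

	/* ... reserve the ring buffer event and fill in 'entry' ... */

	if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, &ftrace_file->flags))
		__tm = event_triggers_call(ftrace_file, entry);

	if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
		ring_buffer_discard_commit(buffer, event);
	else if (!filter_current_check_discard(buffer, event_call, entry, event))
		trace_buffer_unlock_commit(buffer, event, irq_flags, pc);

	/* deferred part, so e.g. a stacktrace lands after this record */
	if (__tm)
		event_triggers_post_call(ftrace_file, __tm);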
The main part of the patch creates a new trace_events_trigger.c file
to contain the trace event triggers implementation.
The standard open, read, write, and release file operations are
implemented here.
The open() implementation sets up for the various open modes of the
'trigger' file. It creates and attaches the trigger iterator and sets
up the command parser. If opened for reading, it also sets up the
trigger seq_ops.
The write() implementation parses the event trigger written to the
'trigger' file, looks up the trigger command, and passes it along to
that event_command's func() implementation for command-specific
processing.
The release() implementation does whatever cleanup is needed to
release the 'trigger' file, like releasing the parser and trigger
iterator, etc.
A couple of functions for event command registration and
unregistration are added, along with a list to add commands to and a
mutex to protect them, as well as an (initially empty) registration
function to add the set of commands that will be added by future
commits, and a call to it from the trace event initialization code.
Also added are a couple of trigger-specific data structures needed for
these implementations, such as a trigger iterator and a struct for
trigger-specific data.
A couple of structs consisting mostly of functions meant to be implemented
in command-specific ways, event_command and event_trigger_ops, are
used by the generic event trigger command implementations. They're
being put into trace.h alongside the other trace_event data structures
and functions, in the expectation that they'll be needed in several
trace_event-related files such as trace_events_trigger.c and
trace_events.c.
The event_command.func() function is meant to be called by the trigger
parsing code in order to add a trigger instance to the corresponding
event. It essentially coordinates adding a live trigger instance to
the event, and arming the triggering event.
Every event_command func() implementation essentially does the
same thing for any command (sketched just below):
- choose ops - use the value of param to choose either the basic or
the count version of event_trigger_ops specific to the command
- do the register or unregister of those ops
- associate a filter, if specified, with the triggering event
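Condensed, with the error handling and count parsing left out, that
common shape looks roughly like the sketch below (the function name is
made up for illustration; the real version is event_trigger_callback(),
quoted later in this thread):

	static int example_trigger_func(struct event_command *cmd_ops,
					void *cmd_data, char *glob,
					char *cmd, char *param, int enable)
	{
		struct event_trigger_ops *ops;
		struct event_trigger_data *data;

		/* step 1: pick the basic or count version of the ops */
		ops = cmd_ops->get_trigger_ops(cmd, param);

		data = kzalloc(sizeof(*data), GFP_KERNEL);
		if (!data)
			return -ENOMEM;
		data->count = -1;
		data->ops = ops;
		data->mode = cmd_ops->trigger_mode;
		INIT_LIST_HEAD(&data->list);

		/* a leading '!' means remove the trigger instead */
		if (glob[0] == '!') {
			cmd_ops->unreg(glob + 1, ops, data, cmd_data);
			kfree(data);
			return 0;
		}

		/* step 3: associate a filter, if one was specified */
		if (param && cmd_ops->set_filter)
			cmd_ops->set_filter(param, data, cmd_data);

		/* step 2: register the ops, arming the triggering event */
		return cmd_ops->reg(glob, ops, data, cmd_data);
	}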
The reg() and unreg() ops allow command-specific implementations for
event_trigger_op registration and unregistration, and the
get_trigger_ops() op allows command-specific event_trigger_ops
selection to be parameterized. When a trigger instance is added, the
reg() op essentially adds that trigger to the triggering event and
arms it, while unreg() does the opposite. The set_filter() function
is used to associate a filter with the trigger - if the command
doesn't specify a set_filter() implementation, the command will ignore
filters.
Each command has an associated trigger_mode, which serves double duty:
it is both a unique identifier for the command and a value that can be
used for setting a trigger mode bit during trigger invocation.
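A minimal illustration, assuming the TM_* values remain distinct
power-of-two bits as in the later traceon/traceoff patch:

	enum trigger_mode {
		TM_NONE        = 0,
		TM_TRACE_ONOFF = (1 << 0),
	};

	/* as a unique command id, e.g. the duplicate check in
	 * register_trigger(): */
	if (test->mode == data->mode)
		ret = -EEXIST;

	/* as a mode bit, e.g. accumulating which trigger types are set
	 * (hypothetical use, anticipating the future filtering patches): */
	mode_mask |= data->mode;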
The signature of func() adds a pointer to the event_command struct,
used to invoke those functions, along with a command_data param that
can be passed to the reg/unreg functions. This allows func()
implementations to use command-specific blobs and supports code
re-use.
The event_trigger_ops.func() callback corresponds to the trigger 'probe'
function that gets called when the triggering event is actually
invoked. The other functions are used to list the trigger when
needed, along with a couple of mundane bookkeeping functions.
Some common register/unregister_trigger() implementations of the
event_command reg()/unreg() callbacks are also provided; these add
trigger instances to and remove them from the per-event list of
triggers, and arm/disarm the event as appropriate.
Most event commands will use these, but some will override and
possibly reuse them.
Signed-off-by: Tom Zanussi <[email protected]>
Idea-by: Steve Rostedt <[email protected]>
---
include/linux/ftrace_event.h | 13 +-
include/trace/ftrace.h | 4 +
kernel/trace/Makefile | 1 +
kernel/trace/trace.h | 37 ++++
kernel/trace/trace_events.c | 28 ++-
kernel/trace/trace_events_trigger.c | 361 ++++++++++++++++++++++++++++++++++++
kernel/trace/trace_syscalls.c | 4 +
7 files changed, 445 insertions(+), 3 deletions(-)
create mode 100644 kernel/trace/trace_events_trigger.c
diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 4372658..6cd5bbc 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -253,6 +253,7 @@ enum {
FTRACE_EVENT_FL_RECORDED_CMD_BIT,
FTRACE_EVENT_FL_SOFT_MODE_BIT,
FTRACE_EVENT_FL_SOFT_DISABLED_BIT,
+ FTRACE_EVENT_FL_TRIGGER_MODE_BIT,
};
/*
@@ -261,13 +262,15 @@ enum {
* RECORDED_CMD - The comms should be recorded at sched_switch
* SOFT_MODE - The event is enabled/disabled by SOFT_DISABLED
* SOFT_DISABLED - When set, do not trace the event (even though its
- * tracepoint may be enabled)
+ * tracepoint may be enabled)
+ * TRIGGER_MODE - When set, invoke the triggers associated with the event
*/
enum {
FTRACE_EVENT_FL_ENABLED = (1 << FTRACE_EVENT_FL_ENABLED_BIT),
FTRACE_EVENT_FL_RECORDED_CMD = (1 << FTRACE_EVENT_FL_RECORDED_CMD_BIT),
FTRACE_EVENT_FL_SOFT_MODE = (1 << FTRACE_EVENT_FL_SOFT_MODE_BIT),
FTRACE_EVENT_FL_SOFT_DISABLED = (1 << FTRACE_EVENT_FL_SOFT_DISABLED_BIT),
+ FTRACE_EVENT_FL_TRIGGER_MODE = (1 << FTRACE_EVENT_FL_TRIGGER_MODE_BIT),
};
struct ftrace_event_file {
@@ -276,6 +279,7 @@ struct ftrace_event_file {
struct dentry *dir;
struct trace_array *tr;
struct ftrace_subsystem_dir *system;
+ struct list_head triggers;
/*
* 32 bit flags:
@@ -283,6 +287,7 @@ struct ftrace_event_file {
* bit 1: enabled cmd record
* bit 2: enable/disable with the soft disable bit
* bit 3: soft disabled
+ * bit 4: trigger enabled
*
* Note: The bits must be set atomically to prevent races
* from other writers. Reads of flags do not need to be in
@@ -294,6 +299,7 @@ struct ftrace_event_file {
*/
unsigned long flags;
atomic_t sm_ref; /* soft-mode reference counter */
+ atomic_t tm_ref; /* trigger-mode reference counter */
};
#define __TRACE_EVENT_FLAGS(name, value) \
@@ -308,12 +314,17 @@ struct ftrace_event_file {
#define MAX_FILTER_STR_VAL 256 /* Should handle KSYM_SYMBOL_LEN */
+enum trigger_mode {
+ TM_NONE = (0),
+};
+
extern void destroy_preds(struct ftrace_event_call *call);
extern int filter_match_preds(struct event_filter *filter, void *rec);
extern int filter_current_check_discard(struct ring_buffer *buffer,
struct ftrace_event_call *call,
void *rec,
struct ring_buffer_event *event);
+extern void event_triggers_call(struct ftrace_event_file *file);
enum {
FILTER_OTHER = 0,
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index d615f78..63dfb0a 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -526,6 +526,10 @@ ftrace_raw_event_##call(void *__data, proto) \
int __data_size; \
int pc; \
\
+ if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, \
+ &ftrace_file->flags)) \
+ event_triggers_call(ftrace_file); \
+ \
if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, \
&ftrace_file->flags)) \
return; \
diff --git a/kernel/trace/Makefile b/kernel/trace/Makefile
index d7e2068..1378e84 100644
--- a/kernel/trace/Makefile
+++ b/kernel/trace/Makefile
@@ -50,6 +50,7 @@ ifeq ($(CONFIG_PERF_EVENTS),y)
obj-$(CONFIG_EVENT_TRACING) += trace_event_perf.o
endif
obj-$(CONFIG_EVENT_TRACING) += trace_events_filter.o
+obj-$(CONFIG_EVENT_TRACING) += trace_events_trigger.o
obj-$(CONFIG_KPROBE_EVENT) += trace_kprobe.o
obj-$(CONFIG_TRACEPOINTS) += power-traces.o
ifeq ($(CONFIG_PM_RUNTIME),y)
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index af6eb2c..d3e8626 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -1021,6 +1021,43 @@ extern int event_trace_del_tracer(struct trace_array *tr);
extern struct mutex event_mutex;
extern struct list_head ftrace_events;
+extern const struct file_operations event_trigger_fops;
+
+extern int register_trigger_cmds(void);
+
+struct event_trigger_ops {
+ void (*func)(void **data);
+ int (*init)(struct event_trigger_ops *ops,
+ void **data);
+ void (*free)(struct event_trigger_ops *ops,
+ void **data);
+ int (*print)(struct seq_file *m,
+ struct event_trigger_ops *ops,
+ void *data);
+};
+
+struct event_command {
+ struct list_head list;
+ char *name;
+ enum trigger_mode trigger_mode;
+ int (*func)(struct event_command *cmd_ops,
+ void *cmd_data, char *glob, char *cmd,
+ char *params, int enable);
+ int (*reg)(char *glob,
+ struct event_trigger_ops *trigger_ops,
+ void *trigger_data, void *cmd_data);
+ void (*unreg)(char *glob,
+ struct event_trigger_ops *trigger_ops,
+ void *trigger_data, void *cmd_data);
+ int (*set_filter)(char *filter_str,
+ void *trigger_data,
+ void *cmd_data);
+ struct event_trigger_ops *(*get_trigger_ops)(char *cmd, char *param);
+};
+
+extern int trace_event_enable_disable(struct ftrace_event_file *file,
+ int enable, int soft_disable);
+
extern const char *__start___trace_bprintk_fmt[];
extern const char *__stop___trace_bprintk_fmt[];
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 7d85429..db8a93c 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -342,6 +342,12 @@ static int __ftrace_event_enable_disable(struct ftrace_event_file *file,
return ret;
}
+int trace_event_enable_disable(struct ftrace_event_file *file,
+ int enable, int soft_disable)
+{
+ return __ftrace_event_enable_disable(file, enable, soft_disable);
+}
+
static int ftrace_event_enable_disable(struct ftrace_event_file *file,
int enable)
{
@@ -1464,6 +1470,7 @@ event_create_dir(struct dentry *parent,
const struct file_operations *id,
const struct file_operations *enable,
const struct file_operations *filter,
+ const struct file_operations *trigger,
const struct file_operations *format)
{
struct ftrace_event_call *call = file->event_call;
@@ -1516,6 +1523,9 @@ event_create_dir(struct dentry *parent,
trace_create_file("filter", 0644, file->dir, call,
filter);
+ trace_create_file("trigger", 0644, file->dir, file,
+ trigger);
+
trace_create_file("format", 0444, file->dir, call,
format);
@@ -1628,6 +1638,8 @@ trace_create_new_event(struct ftrace_event_call *call,
file->event_call = call;
file->tr = tr;
atomic_set(&file->sm_ref, 0);
+ atomic_set(&file->tm_ref, 0);
+ INIT_LIST_HEAD(&file->triggers);
list_add(&file->list, &tr->events);
return file;
@@ -1640,6 +1652,7 @@ __trace_add_new_event(struct ftrace_event_call *call,
const struct file_operations *id,
const struct file_operations *enable,
const struct file_operations *filter,
+ const struct file_operations *trigger,
const struct file_operations *format)
{
struct ftrace_event_file *file;
@@ -1648,7 +1661,8 @@ __trace_add_new_event(struct ftrace_event_call *call,
if (!file)
return -ENOMEM;
- return event_create_dir(tr->event_dir, file, id, enable, filter, format);
+ return event_create_dir(tr->event_dir, file, id, enable, filter,
+ trigger, format);
}
/*
@@ -1732,6 +1746,7 @@ struct ftrace_module_file_ops {
struct file_operations enable;
struct file_operations format;
struct file_operations filter;
+ struct file_operations trigger;
};
static struct ftrace_module_file_ops *
@@ -1781,6 +1796,9 @@ trace_create_file_ops(struct module *mod)
file_ops->filter = ftrace_event_filter_fops;
file_ops->filter.owner = mod;
+ file_ops->trigger = event_trigger_fops;
+ file_ops->trigger.owner = mod;
+
file_ops->format = ftrace_event_format_fops;
file_ops->format.owner = mod;
@@ -1876,7 +1894,8 @@ __trace_add_new_mod_event(struct ftrace_event_call *call,
{
return __trace_add_new_event(call, tr,
&file_ops->id, &file_ops->enable,
- &file_ops->filter, &file_ops->format);
+ &file_ops->filter, &file_ops->trigger,
+ &file_ops->format);
}
#else
@@ -1929,6 +1948,7 @@ __trace_add_event_dirs(struct trace_array *tr)
&ftrace_event_id_fops,
&ftrace_enable_fops,
&ftrace_event_filter_fops,
+ &event_trigger_fops,
&ftrace_event_format_fops);
if (ret < 0)
pr_warning("Could not create directory for event %s\n",
@@ -2241,6 +2261,7 @@ __trace_early_add_event_dirs(struct trace_array *tr)
&ftrace_event_id_fops,
&ftrace_enable_fops,
&ftrace_event_filter_fops,
+ &event_trigger_fops,
&ftrace_event_format_fops);
if (ret < 0)
pr_warning("Could not create directory for event %s\n",
@@ -2300,6 +2321,7 @@ __add_event_to_tracers(struct ftrace_event_call *call,
&ftrace_event_id_fops,
&ftrace_enable_fops,
&ftrace_event_filter_fops,
+ &event_trigger_fops,
&ftrace_event_format_fops);
}
}
@@ -2484,6 +2506,8 @@ static __init int event_trace_enable(void)
register_event_cmds();
+ register_trigger_cmds();
+
return 0;
}
diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
new file mode 100644
index 0000000..1f0565c
--- /dev/null
+++ b/kernel/trace/trace_events_trigger.c
@@ -0,0 +1,361 @@
+/*
+ * trace_events_trigger - trace event triggers
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Copyright (C) 2013 Tom Zanussi <[email protected]>
+ */
+
+#include <linux/module.h>
+#include <linux/ctype.h>
+#include <linux/mutex.h>
+#include <linux/slab.h>
+
+#include "trace.h"
+
+static LIST_HEAD(trigger_commands);
+static DEFINE_MUTEX(trigger_cmd_mutex);
+
+struct event_trigger_data {
+ struct ftrace_event_file *file;
+ unsigned long count;
+ int ref;
+ bool enable;
+ struct event_trigger_ops *ops;
+ enum trigger_mode mode;
+ struct event_filter *filter;
+ char *filter_str;
+ struct list_head list;
+};
+
+struct trigger_iterator {
+ struct ftrace_event_file *file;
+};
+
+void event_triggers_call(struct ftrace_event_file *file)
+{
+ struct event_trigger_data *data;
+
+ if (list_empty(&file->triggers))
+ return;
+
+ preempt_disable_notrace();
+ list_for_each_entry_rcu(data, &file->triggers, list)
+ data->ops->func((void **)&data);
+ preempt_enable_notrace();
+}
+EXPORT_SYMBOL_GPL(event_triggers_call);
+
+static void *trigger_next(struct seq_file *m, void *t, loff_t *pos)
+{
+ struct trigger_iterator *iter = m->private;
+
+ return seq_list_next(t, &iter->file->triggers, pos);
+}
+
+static void *trigger_start(struct seq_file *m, loff_t *pos)
+{
+ struct trigger_iterator *iter = m->private;
+
+ mutex_lock(&event_mutex);
+
+ return seq_list_start(&iter->file->triggers, *pos);
+}
+
+static void trigger_stop(struct seq_file *m, void *t)
+{
+ mutex_unlock(&event_mutex);
+}
+
+static int trigger_show(struct seq_file *m, void *v)
+{
+ struct event_trigger_data *data;
+
+ data = list_entry(v, struct event_trigger_data, list);
+ data->ops->print(m, data->ops, data);
+
+ return 0;
+}
+
+static const struct seq_operations event_triggers_seq_ops = {
+ .start = trigger_start,
+ .next = trigger_next,
+ .stop = trigger_stop,
+ .show = trigger_show,
+};
+
+static int event_trigger_regex_open(struct inode *inode, struct file *file)
+{
+ struct trigger_iterator *iter;
+ int ret = 0;
+
+ iter = kzalloc(sizeof(*iter), GFP_KERNEL);
+ if (!iter)
+ return -ENOMEM;
+
+ iter->file = inode->i_private;
+
+ mutex_lock(&event_mutex);
+
+ if (file->f_mode & FMODE_READ) {
+ ret = seq_open(file, &event_triggers_seq_ops);
+ if (!ret) {
+ struct seq_file *m = file->private_data;
+ m->private = iter;
+ } else {
+ /* Failed */
+ kfree(iter);
+ }
+ } else
+ file->private_data = iter;
+
+ mutex_unlock(&event_mutex);
+
+ return ret;
+}
+
+static int trigger_process_regex(struct trigger_iterator *iter,
+ char *buff, int enable)
+{
+ struct event_command *p;
+ char *command, *next = buff;
+ int ret = -EINVAL;
+
+ command = strsep(&next, ": \t");
+ command = (command[0] != '!') ? command : command + 1;
+
+ mutex_lock(&trigger_cmd_mutex);
+ list_for_each_entry(p, &trigger_commands, list) {
+ if (strcmp(p->name, command) == 0) {
+ ret = p->func(p, iter, buff, command, next, enable);
+ goto out_unlock;
+ }
+ }
+ out_unlock:
+ mutex_unlock(&trigger_cmd_mutex);
+
+ return ret;
+}
+
+static ssize_t event_trigger_regex_write(struct file *file,
+ const char __user *ubuf,
+ size_t cnt, loff_t *ppos, int enable)
+{
+ struct trigger_iterator *iter = file->private_data;
+ ssize_t ret;
+ char *buf;
+
+ if (!cnt)
+ return 0;
+
+ if (cnt >= PAGE_SIZE)
+ return -EINVAL;
+
+ if (file->f_mode & FMODE_READ) {
+ struct seq_file *m = file->private_data;
+ iter = m->private;
+ } else
+ iter = file->private_data;
+
+ buf = (char *)__get_free_page(GFP_TEMPORARY);
+ if (!buf)
+ return -ENOMEM;
+
+ if (copy_from_user(buf, ubuf, cnt)) {
+ free_page((unsigned long) buf);
+ return -EFAULT;
+ }
+ buf[cnt] = '\0';
+ strim(buf);
+
+ ret = trigger_process_regex(iter, buf, enable);
+
+ free_page((unsigned long) buf);
+ if (ret < 0)
+ goto out;
+
+ *ppos += cnt;
+ ret = cnt;
+ out:
+ return ret;
+}
+
+static int event_trigger_regex_release(struct inode *inode, struct file *file)
+{
+ struct seq_file *m = (struct seq_file *)file->private_data;
+ struct trigger_iterator *iter;
+
+ mutex_lock(&event_mutex);
+
+ if (file->f_mode & FMODE_READ) {
+ iter = m->private;
+
+ seq_release(inode, file);
+ } else
+ iter = file->private_data;
+
+ kfree(iter);
+
+ mutex_unlock(&event_mutex);
+
+ return 0;
+}
+
+static ssize_t
+event_trigger_write(struct file *filp, const char __user *ubuf,
+ size_t cnt, loff_t *ppos)
+{
+ return event_trigger_regex_write(filp, ubuf, cnt, ppos, 1);
+}
+
+static int
+event_trigger_open(struct inode *inode, struct file *filp)
+{
+ return event_trigger_regex_open(inode, filp);
+}
+
+static int
+event_trigger_release(struct inode *inode, struct file *file)
+{
+ return event_trigger_regex_release(inode, file);
+}
+
+const struct file_operations event_trigger_fops = {
+ .open = event_trigger_open,
+ .read = seq_read,
+ .write = event_trigger_write,
+ .llseek = ftrace_filter_lseek,
+ .release = event_trigger_release,
+};
+
+static int register_event_command(struct event_command *cmd,
+ struct list_head *cmd_list,
+ struct mutex *cmd_list_mutex)
+{
+ struct event_command *p;
+ int ret = 0;
+
+ mutex_lock(cmd_list_mutex);
+ list_for_each_entry(p, cmd_list, list) {
+ if (strcmp(cmd->name, p->name) == 0) {
+ ret = -EBUSY;
+ goto out_unlock;
+ }
+ }
+ list_add(&cmd->list, cmd_list);
+ out_unlock:
+ mutex_unlock(cmd_list_mutex);
+
+ return ret;
+}
+
+static int unregister_event_command(struct event_command *cmd,
+ struct list_head *cmd_list,
+ struct mutex *cmd_list_mutex)
+{
+ struct event_command *p, *n;
+ int ret = -ENODEV;
+
+ mutex_lock(cmd_list_mutex);
+ list_for_each_entry_safe(p, n, cmd_list, list) {
+ if (strcmp(cmd->name, p->name) == 0) {
+ ret = 0;
+ list_del_init(&p->list);
+ goto out_unlock;
+ }
+ }
+ out_unlock:
+ mutex_unlock(cmd_list_mutex);
+
+ return ret;
+}
+
+static int trace_event_trigger_enable_disable(struct ftrace_event_file *file,
+ int trigger_enable)
+{
+ int ret = 0;
+
+ if (trigger_enable) {
+ if (atomic_inc_return(&file->tm_ref) > 1)
+ return ret;
+ set_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, &file->flags);
+ ret = trace_event_enable_disable(file, 1, 1);
+ } else {
+ if (atomic_dec_return(&file->tm_ref) > 0)
+ return ret;
+ clear_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, &file->flags);
+ ret = trace_event_enable_disable(file, 0, 1);
+ }
+
+ return ret;
+}
+
+static int register_trigger(char *glob, struct event_trigger_ops *ops,
+ void *trigger_data, void *cmd_data)
+{
+ struct trigger_iterator *iter = cmd_data;
+ struct event_trigger_data *data = trigger_data;
+ struct event_trigger_data *test;
+ int ret = 0;
+
+ list_for_each_entry_rcu(test, &iter->file->triggers, list) {
+ if (test->mode == data->mode) {
+ ret = -EEXIST;
+ goto out;
+ }
+ }
+
+ if (data->ops->init) {
+ ret = data->ops->init(data->ops, (void **)&data);
+ if (ret < 0)
+ goto out;
+ }
+
+ list_add_rcu(&data->list, &iter->file->triggers);
+ ret++;
+
+ if (trace_event_trigger_enable_disable(iter->file, 1) < 0) {
+ list_del_rcu(&data->list);
+ ret--;
+ }
+out:
+ return ret;
+}
+
+static void unregister_trigger(char *glob, struct event_trigger_ops *ops,
+ void *trigger_data, void *cmd_data)
+{
+ struct trigger_iterator *iter = cmd_data;
+ struct event_trigger_data *test = trigger_data;
+ struct event_trigger_data *data;
+ bool unregistered = false;
+
+ list_for_each_entry_rcu(data, &iter->file->triggers, list) {
+ if (data->mode == test->mode) {
+ unregistered = true;
+ list_del_rcu(&data->list);
+ trace_event_trigger_enable_disable(iter->file, 0);
+ break;
+ }
+ }
+
+ if (unregistered && data->ops->free)
+ data->ops->free(data->ops, (void **)&data);
+}
+
+__init int register_trigger_cmds(void)
+{
+ return 0;
+}
diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index 4915b69..a79f85b 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -319,6 +319,8 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
return;
ftrace_file = rcu_dereference_raw(tr->enter_syscall_files[syscall_nr]);
+ if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, &ftrace_file->flags))
+ event_triggers_call(ftrace_file);
if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
return;
@@ -366,6 +368,8 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
return;
ftrace_file = rcu_dereference_raw(tr->exit_syscall_files[syscall_nr]);
+ if (test_bit(FTRACE_EVENT_FL_TRIGGER_MODE_BIT, &ftrace_file->flags))
+ event_triggers_call(ftrace_file);
if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
return;
--
1.7.11.4
(2013/07/20 0:09), Tom Zanussi wrote:
> Hi,
>
> This is v3 of the trace event triggers patchset, addressing comments
> from Masami Hiramatsu, zhangwei(Jovi), and Steve Rostedt.
Steven, Oleg, do you think this can go in at this time?
Or would it be better for Tom to wait for Oleg's bugfix?
I'd like to clarify that first.
Thank you,
--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]
(2013/07/20 0:09), Tom Zanussi wrote:
> The original SOFT_DISABLE patches didn't add support for soft disable
> of syscall events; this adds it and paves the way for future patches
> allowing triggers to be added to syscall events, since triggers are
> built on top of SOFT_DISABLE.
>
> Add an array of ftrace_event_file pointers indexed by syscall number
> to the trace array alongside the existing enabled bitmaps. The
> ftrace_event_file structs in turn contain the soft disable flags we
> need for per-syscall soft disable accounting; later patches add
> additional 'trigger' flags and per-syscall triggers and filters.
>
> Signed-off-by: Tom Zanussi <[email protected]>
> ---
> kernel/trace/trace.h | 2 ++
> kernel/trace/trace_syscalls.c | 14 ++++++++++++++
> 2 files changed, 16 insertions(+)
>
> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> index 4a4f6e1..af6eb2c 100644
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -202,6 +202,8 @@ struct trace_array {
> int sys_refcount_exit;
> DECLARE_BITMAP(enabled_enter_syscalls, NR_syscalls);
> DECLARE_BITMAP(enabled_exit_syscalls, NR_syscalls);
> + struct ftrace_event_file *enter_syscall_files[NR_syscalls];
> + struct ftrace_event_file *exit_syscall_files[NR_syscalls];
> #endif
> int stop_count;
> int clock_id;
> diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
> index 322e164..4915b69 100644
> --- a/kernel/trace/trace_syscalls.c
> +++ b/kernel/trace/trace_syscalls.c
> @@ -302,6 +302,7 @@ static int __init syscall_exit_define_fields(struct ftrace_event_call *call)
> static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
> {
> struct trace_array *tr = data;
> + struct ftrace_event_file *ftrace_file;
> struct syscall_trace_enter *entry;
> struct syscall_metadata *sys_data;
> struct ring_buffer_event *event;
> @@ -317,6 +318,10 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
> if (!test_bit(syscall_nr, tr->enabled_enter_syscalls))
> return;
>
> + ftrace_file = rcu_dereference_raw(tr->enter_syscall_files[syscall_nr]);
> + if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
> + return;
> +
It seems there is no lock protecting ftrace_file here. This could race
with unreg_event_syscall_enter(). For example:
<thread0>                            <thread1>
ftrace_syscall_enter()               unreg_event_syscall_enter()
  test_bit(enabled_enter_syscalls)     lock(syscall_trace_lock)
                                       clear_bit(enabled_enter_syscalls)
                                       tr->enter_syscall_files[num] = NULL
  ftrace_file = tr->enter_syscall_files[syscall_nr]
In this case, ftrace_file can be NULL.
And even if the check passes, unreg_event_syscall_enter() still doesn't
ensure that no other threads are still using the ftrace_file (the event
is not completely disabled), just as the previous kprobe tracer didn't.
Please check the below patch I posted.
http://marc.info/?l=linux-kernel&m=137336270104492
AFAIK, tracepoints (which the syscall tracer depends on) take
rcu_read_lock too, so I think it is possible to use the same approach in
the syscall tracer to guarantee that all running event handlers have
finished.
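As a rough sketch, the hot path and the unregister path would then pair
up something like this (hypothetical; the exact rcu_dereference() flavor
depends on which form of rcu_read_lock the tracepoint code takes):

	/* hot path: tracepoint caller already holds the rcu read lock */
	ftrace_file = rcu_dereference(tr->enter_syscall_files[syscall_nr]);
	if (!ftrace_file)	/* raced with unregister */
		return;

	/* unregister path, under syscall_trace_lock: */
	rcu_assign_pointer(tr->enter_syscall_files[num], NULL);
	/* wait until all in-flight handlers are done before teardown */
	synchronize_rcu();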
Thank you,
> sys_data = syscall_nr_to_meta(syscall_nr);
> if (!sys_data)
> return;
> @@ -345,6 +350,7 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
> static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
> {
> struct trace_array *tr = data;
> + struct ftrace_event_file *ftrace_file;
> struct syscall_trace_exit *entry;
> struct syscall_metadata *sys_data;
> struct ring_buffer_event *event;
> @@ -359,6 +365,10 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
> if (!test_bit(syscall_nr, tr->enabled_exit_syscalls))
> return;
>
> + ftrace_file = rcu_dereference_raw(tr->exit_syscall_files[syscall_nr]);
> + if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &ftrace_file->flags))
> + return;
> +
> sys_data = syscall_nr_to_meta(syscall_nr);
> if (!sys_data)
> return;
> @@ -397,6 +407,7 @@ static int reg_event_syscall_enter(struct ftrace_event_file *file,
> if (!tr->sys_refcount_enter)
> ret = register_trace_sys_enter(ftrace_syscall_enter, tr);
> if (!ret) {
> + rcu_assign_pointer(tr->enter_syscall_files[num], file);
> set_bit(num, tr->enabled_enter_syscalls);
> tr->sys_refcount_enter++;
> }
> @@ -416,6 +427,7 @@ static void unreg_event_syscall_enter(struct ftrace_event_file *file,
> mutex_lock(&syscall_trace_lock);
> tr->sys_refcount_enter--;
> clear_bit(num, tr->enabled_enter_syscalls);
> + rcu_assign_pointer(tr->enter_syscall_files[num], NULL);
> if (!tr->sys_refcount_enter)
> unregister_trace_sys_enter(ftrace_syscall_enter, tr);
> mutex_unlock(&syscall_trace_lock);
> @@ -435,6 +447,7 @@ static int reg_event_syscall_exit(struct ftrace_event_file *file,
> if (!tr->sys_refcount_exit)
> ret = register_trace_sys_exit(ftrace_syscall_exit, tr);
> if (!ret) {
> + rcu_assign_pointer(tr->exit_syscall_files[num], file);
> set_bit(num, tr->enabled_exit_syscalls);
> tr->sys_refcount_exit++;
> }
> @@ -454,6 +467,7 @@ static void unreg_event_syscall_exit(struct ftrace_event_file *file,
> mutex_lock(&syscall_trace_lock);
> tr->sys_refcount_exit--;
> clear_bit(num, tr->enabled_exit_syscalls);
> + rcu_assign_pointer(tr->exit_syscall_files[num], NULL);
> if (!tr->sys_refcount_exit)
> unregister_trace_sys_exit(ftrace_syscall_exit, tr);
> mutex_unlock(&syscall_trace_lock);
>
--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]
(2013/07/20 0:09), Tom Zanussi wrote:
> Add 'traceon' and 'traceoff' ftrace_func_command commands. traceon
> and traceoff event triggers are added by the user via these commands
> in a similar way and using practically the same syntax as the
> analogous 'traceon' and 'traceoff' ftrace function commands, but
> instead of writing to the set_ftrace_filter file, the traceon and
> traceoff triggers are written to the per-event 'trigger' files:
>
> echo 'traceon' > .../tracing/events/somesys/someevent/trigger
> echo 'traceoff' > .../tracing/events/somesys/someevent/trigger
>
> The above command will turn tracing on or off whenever someevent is
> hit.
>
> This also adds a 'count' version that limits the number of times the
> command will be invoked:
>
> echo 'traceon:N' > .../tracing/events/somesys/someevent/trigger
> echo 'traceoff:N' > .../tracing/events/somesys/someevent/trigger
>
> Where N is the number of times the command will be invoked.
>
> The above commands will turn tracing on or off whenever someevent
> is hit, but only N times.
>
> The event_trigger_init() and event_trigger_free() are meant to be
> common implementations of the event_trigger_ops init() and free() ops.
> Most trigger_ops implementations will use these, but some will
> override and possibly reuse them.
Please add a comment in the code (perhaps a separator comment is enough).
event_trigger_callback(), register_trigger() and unregister_trigger() in
2/9 are common implementations of event_command func(), reg() and unreg()
too. I think it is better to move that common code into this patch or the
previous one.
Thank you,
> Signed-off-by: Tom Zanussi <[email protected]>
> ---
> include/linux/ftrace_event.h | 1 +
> kernel/trace/trace_events_trigger.c | 313 ++++++++++++++++++++++++++++++++++++
> 2 files changed, 314 insertions(+)
>
> diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
> index 6cd5bbc..c794686 100644
> --- a/include/linux/ftrace_event.h
> +++ b/include/linux/ftrace_event.h
> @@ -316,6 +316,7 @@ struct ftrace_event_file {
>
> enum trigger_mode {
> TM_NONE = (0),
> + TM_TRACE_ONOFF = (1 << 0),
> };
>
> extern void destroy_preds(struct ftrace_event_call *call);
> diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
> index 1f0565c..f2b97b6 100644
> --- a/kernel/trace/trace_events_trigger.c
> +++ b/kernel/trace/trace_events_trigger.c
> @@ -355,7 +355,320 @@ static void unregister_trigger(char *glob, struct event_trigger_ops *ops,
> data->ops->free(data->ops, (void **)&data);
> }
>
> +int
> +event_trigger_print(const char *name, struct seq_file *m,
> + void *data, char *filter_str)
> +{
> + long count = (long)data;
> +
> + seq_printf(m, "%s", name);
> +
> + if (count == -1)
> + seq_puts(m, ":unlimited");
> + else
> + seq_printf(m, ":count=%ld", count);
> +
> + if (filter_str)
> + seq_printf(m, " if %s\n", filter_str);
> + else
> + seq_puts(m, "\n");
> +
> + return 0;
> +}
> +
> +static int
> +event_trigger_init(struct event_trigger_ops *ops, void **_data)
> +{
> + struct event_trigger_data **p = (struct event_trigger_data **)_data;
> + struct event_trigger_data *data = *p;
> +
> + data->ref++;
> + return 0;
> +}
> +
> +static void
> +event_trigger_free(struct event_trigger_ops *ops, void **_data)
> +{
> + struct event_trigger_data **p = (struct event_trigger_data **)_data;
> + struct event_trigger_data *data = *p;
> +
> + if (WARN_ON_ONCE(data->ref <= 0))
> + return;
> +
> + data->ref--;
> + if (!data->ref)
> + kfree(data);
> +}
> +
> +static int
> +event_trigger_callback(struct event_command *cmd_ops, void *cmd_data,
> + char *glob, char *cmd, char *param, int enabled)
> +{
> + struct event_trigger_ops *trigger_ops;
> + struct event_trigger_data *trigger_data;
> + char *trigger = NULL;
> + char *number;
> + int ret;
> +
> + if (!enabled)
> + return -EINVAL;
> +
> + /* separate the trigger from the filter (t:n [if filter]) */
> + if (param && isdigit(param[0]))
> + trigger = strsep(¶m, " \t");
> +
> + mutex_lock(&event_mutex);
> +
> + trigger_ops = cmd_ops->get_trigger_ops(cmd, trigger);
> +
> + ret = -ENOMEM;
> + trigger_data = kzalloc(sizeof(*trigger_data), GFP_KERNEL);
> + if (!trigger_data)
> + goto out;
> +
> + trigger_data->count = -1;
> + trigger_data->ops = trigger_ops;
> + trigger_data->mode = cmd_ops->trigger_mode;
> + INIT_LIST_HEAD(&trigger_data->list);
> +
> + if (glob[0] == '!') {
> + cmd_ops->unreg(glob+1, trigger_ops, trigger_data, cmd_data);
> + kfree(trigger_data);
> + ret = 0;
> + goto out;
> + }
> +
> + if (trigger) {
> + number = strsep(&trigger, ":");
> +
> + ret = -EINVAL;
> + if (!strlen(number))
> + goto out_free;
> +
> + /*
> + * We use the callback data field (which is a pointer)
> + * as our counter.
> + */
> + ret = kstrtoul(number, 0, &trigger_data->count);
> + if (ret)
> + goto out_free;
> + }
> +
> + if (!param) /* if param is non-empty, it's supposed to be a filter */
> + goto out_reg;
> +
> + if (!cmd_ops->set_filter)
> + goto out_reg;
> +
> + ret = cmd_ops->set_filter(param, trigger_data, cmd_data);
> + if (ret < 0)
> + goto out_free;
> +
> + out_reg:
> + ret = cmd_ops->reg(glob, trigger_ops, trigger_data, cmd_data);
> + /*
> + * The above returns on success the # of functions enabled,
> + * but if it didn't find any functions it returns zero.
> + * Consider no functions a failure too.
> + */
> + if (!ret) {
> + ret = -ENOENT;
> + goto out_free;
> + } else if (ret < 0)
> + goto out_free;
> + ret = 0;
> + out:
> + mutex_unlock(&event_mutex);
> + return ret;
> +
> + out_free:
> + kfree(trigger_data);
> + goto out;
> +}
> +
> +static void
> +traceon_trigger(void **_data)
> +{
> + struct event_trigger_data **p = (struct event_trigger_data **)_data;
> + struct event_trigger_data *data = *p;
> +
> + if (!data)
> + return;
> +
> + if (tracing_is_on())
> + return;
> +
> + tracing_on();
> +}
> +
> +static void
> +traceon_count_trigger(void **_data)
> +{
> + struct event_trigger_data **p = (struct event_trigger_data **)_data;
> + struct event_trigger_data *data = *p;
> +
> + if (!data)
> + return;
> +
> + if (!data->count)
> + return;
> +
> + if (data->count != -1)
> + (data->count)--;
> +
> + traceon_trigger(_data);
> +}
> +
> +static void
> +traceoff_trigger(void **_data)
> +{
> + struct event_trigger_data **p = (struct event_trigger_data **)_data;
> + struct event_trigger_data *data = *p;
> +
> + if (!data)
> + return;
> +
> + if (!tracing_is_on())
> + return;
> +
> + tracing_off();
> +}
> +
> +static void
> +traceoff_count_trigger(void **_data)
> +{
> + struct event_trigger_data **p = (struct event_trigger_data **)_data;
> + struct event_trigger_data *data = *p;
> +
> + if (!data)
> + return;
> +
> + if (!data->count)
> + return;
> +
> + if (data->count != -1)
> + (data->count)--;
> +
> + traceoff_trigger(_data);
> +}
> +
> +static int
> +traceon_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
> + void *_data)
> +{
> + struct event_trigger_data *data = _data;
> +
> + return event_trigger_print("traceon", m, (void *)data->count,
> + data->filter_str);
> +}
> +
> +static int
> +traceoff_trigger_print(struct seq_file *m, struct event_trigger_ops *ops,
> + void *_data)
> +{
> + struct event_trigger_data *data = _data;
> +
> + return event_trigger_print("traceoff", m, (void *)data->count,
> + data->filter_str);
> +}
> +
> +static struct event_trigger_ops traceon_trigger_ops = {
> + .func = traceon_trigger,
> + .print = traceon_trigger_print,
> + .init = event_trigger_init,
> + .free = event_trigger_free,
> +};
> +
> +static struct event_trigger_ops traceon_count_trigger_ops = {
> + .func = traceon_count_trigger,
> + .print = traceon_trigger_print,
> + .init = event_trigger_init,
> + .free = event_trigger_free,
> +};
> +
> +static struct event_trigger_ops traceoff_trigger_ops = {
> + .func = traceoff_trigger,
> + .print = traceoff_trigger_print,
> + .init = event_trigger_init,
> + .free = event_trigger_free,
> +};
> +
> +static struct event_trigger_ops traceoff_count_trigger_ops = {
> + .func = traceoff_count_trigger,
> + .print = traceoff_trigger_print,
> + .init = event_trigger_init,
> + .free = event_trigger_free,
> +};
> +
> +static struct event_trigger_ops *
> +onoff_get_trigger_ops(char *cmd, char *param)
> +{
> + struct event_trigger_ops *ops;
> +
> + /* we register both traceon and traceoff to this callback */
> + if (strcmp(cmd, "traceon") == 0)
> + ops = param ? &traceon_count_trigger_ops :
> + &traceon_trigger_ops;
> + else
> + ops = param ? &traceoff_count_trigger_ops :
> + &traceoff_trigger_ops;
> +
> + return ops;
> +}
> +
> +static struct event_command trigger_traceon_cmd = {
> + .name = "traceon",
> + .trigger_mode = TM_TRACE_ONOFF,
> + .func = event_trigger_callback,
> + .reg = register_trigger,
> + .unreg = unregister_trigger,
> + .get_trigger_ops = onoff_get_trigger_ops,
> +};
> +
> +static struct event_command trigger_traceoff_cmd = {
> + .name = "traceoff",
> + .trigger_mode = TM_TRACE_ONOFF,
> + .func = event_trigger_callback,
> + .reg = register_trigger,
> + .unreg = unregister_trigger,
> + .get_trigger_ops = onoff_get_trigger_ops,
> +};
> +
> +static __init void unregister_trigger_traceon_traceoff_cmds(void)
> +{
> + unregister_event_command(&trigger_traceon_cmd,
> + &trigger_commands,
> + &trigger_cmd_mutex);
> + unregister_event_command(&trigger_traceoff_cmd,
> + &trigger_commands,
> + &trigger_cmd_mutex);
> +}
> +
> +static __init int register_trigger_traceon_traceoff_cmds(void)
> +{
> + int ret;
> +
> + ret = register_event_command(&trigger_traceon_cmd, &trigger_commands,
> + &trigger_cmd_mutex);
> + if (WARN_ON(ret < 0))
> + return ret;
> + ret = register_event_command(&trigger_traceoff_cmd, &trigger_commands,
> + &trigger_cmd_mutex);
> + if (WARN_ON(ret < 0))
> + unregister_trigger_traceon_traceoff_cmds();
> +
> + return ret;
> +}
> +
> __init int register_trigger_cmds(void)
> {
> + int ret;
> +
> + ret = register_trigger_traceon_traceoff_cmds();
> + if (ret) {
> + unregister_trigger_traceon_traceoff_cmds();
> + return ret;
> + }
> +
> return 0;
> }
>
--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]
(2013/07/20 0:09), Tom Zanussi wrote:
> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> index af6eb2c..d3e8626 100644
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -1021,6 +1021,43 @@ extern int event_trace_del_tracer(struct trace_array *tr);
> extern struct mutex event_mutex;
> extern struct list_head ftrace_events;
>
> +extern const struct file_operations event_trigger_fops;
> +
> +extern int register_trigger_cmds(void);
> +
> +struct event_trigger_ops {
> + void (*func)(void **data);
> + int (*init)(struct event_trigger_ops *ops,
> + void **data);
> + void (*free)(struct event_trigger_ops *ops,
> + void **data);
> + int (*print)(struct seq_file *m,
> + struct event_trigger_ops *ops,
> + void *data);
> +};
Please add comments describing what each of these ops does. :)
> +
> +struct event_command {
> + struct list_head list;
> + char *name;
> + enum trigger_mode trigger_mode;
> + int (*func)(struct event_command *cmd_ops,
> + void *cmd_data, char *glob, char *cmd,
> + char *params, int enable);
> + int (*reg)(char *glob,
> + struct event_trigger_ops *trigger_ops,
> + void *trigger_data, void *cmd_data);
> + void (*unreg)(char *glob,
> + struct event_trigger_ops *trigger_ops,
> + void *trigger_data, void *cmd_data);
> + int (*set_filter)(char *filter_str,
> + void *trigger_data,
> + void *cmd_data);
> + struct event_trigger_ops *(*get_trigger_ops)(char *cmd, char *param);
> +};
Ditto.
[...]
> +static int event_trigger_regex_open(struct inode *inode, struct file *file)
> +{
> + struct trigger_iterator *iter;
> + int ret = 0;
> +
> + iter = kzalloc(sizeof(*iter), GFP_KERNEL);
> + if (!iter)
> + return -ENOMEM;
> +
> + iter->file = inode->i_private;
> +
> + mutex_lock(&event_mutex);
> +
> + if (file->f_mode & FMODE_READ) {
> + ret = seq_open(file, &event_triggers_seq_ops);
> + if (!ret) {
> + struct seq_file *m = file->private_data;
> + m->private = iter;
> + } else {
> + /* Failed */
> + kfree(iter);
> + }
> + } else
> + file->private_data = iter;
> +
Unfortunately, this will perhaps not work correctly if the bugfix
approach currently under discussion is applied.
Oleg, could you help us figure out what this should be?
> + mutex_unlock(&event_mutex);
> +
> + return ret;
> +}
[...]
> +static int unregister_event_command(struct event_command *cmd,
> + struct list_head *cmd_list,
> + struct mutex *cmd_list_mutex)
> +{
> + struct event_command *p, *n;
> + int ret = -ENODEV;
> +
> + mutex_lock(cmd_list_mutex);
> + list_for_each_entry_safe(p, n, cmd_list, list) {
> + if (strcmp(cmd->name, p->name) == 0) {
> + ret = 0;
> + list_del_init(&p->list);
> + goto out_unlock;
> + }
> + }
> + out_unlock:
> + mutex_unlock(cmd_list_mutex);
> +
> + return ret;
> +}
OK, it seems that this is used only for the rollback process in __init
functions. If so, it should be annotated __init (and given a comment),
or we need a refcount to check that no one uses the command anymore.
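For the first option, a sketch would just be:

	/* only called from __init rollback paths, so it can be discarded
	 * along with the rest of the __init code after boot */
	static __init int unregister_event_command(struct event_command *cmd,
						   struct list_head *cmd_list,
						   struct mutex *cmd_list_mutex)
	{
		/* body unchanged */
	}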
Thank you,
--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: [email protected]