2008-07-10 16:46:25

by Steven Rostedt

[permalink] [raw]
Subject: [PATCH] ftrace: Documentation


This is the long awaited ftrace.txt. It explains in quite detail how to
use ftrace and the various tracers.

Signed-off-by: Steven Rostedt <[email protected]>
---
Documentation/ftrace.txt | 1353 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 1353 insertions(+)

Index: linux-tip.git/Documentation/ftrace.txt
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-tip.git/Documentation/ftrace.txt 2008-07-10 12:23:06.000000000 -0400
@@ -0,0 +1,1353 @@
+ ftrace - Function Tracer
+ ========================
+
+Copyright 2008 Red Hat Inc.
+Author: Steven Rostedt <[email protected]>
+
+
+Introduction
+------------
+
+Ftrace is an internal tracer designed to help out developers and
+designers of systems to find what is going on inside the kernel.
+It can be used for debugging or analyzing latencies and performance
+issues that take place outside of user-space.
+
+Although ftrace is the function tracer, it also includes an
+infrastructure that allows for other types of tracing. Some of the
+tracers that are currently in ftrace is a tracer to trace
+context switches, the time it takes for a high priority task to
+run after it was woken up, the time interrupts are disabled, and
+more.
+
+
+The File System
+---------------
+
+Ftrace uses the debugfs file system to hold the control files as well
+as the files to display output.
+
+To mount the debugfs system:
+
+ # mkdir /debug
+ # mount -t debugfs nodev /debug
+
+
+That's it! (assuming that you have ftrace configured into your kernel)
+
+After mounting the debugfs, you can see a directory called
+"tracing". This directory contains the control and output files
+of ftrace. Here is a list of some of the key files:
+
+
+ Note: all time values are in microseconds.
+
+ current_tracer : This is used to set or display the current tracer
+ that is configured.
+
+ available_tracers : This holds the different types of tracers that
+ has been compiled into the kernel. The tracers
+ listed here can be configured by echoing in their
+ name into current_tracer.
+
+ tracing_enabled : This sets or displays whether the current_tracer
+ is activated and tracing or not. Echo 0 into this
+ file to disable the tracer or 1 (or non-zero) to
+ enable it.
+
+ trace : This file holds the output of the trace in a human readable
+ format.
+
+ latency_trace : This file shows the same trace but the information
+ is organized more to display possible latencies
+ in the system.
+
+ trace_pipe : The output is the same as the "trace" file but this
+ file is meant to be streamed with live tracing.
+ Reads from this file will block until new data
+ is retrieved. Unlike the "trace" and "latency_trace"
+ files, this file is a consumer. This means reading
+ from this file causes sequential reads to display
+ more current data. Once data is read from this
+ file, it is consumed, and will not be read
+ again with a sequential read. The "trace" and
+ "latency_trace" files are static, and if the
+ tracer isn't adding more data, they will display
+ the same information every time they are read.
+
+ iter_ctrl : This file lets the user control the amount of data
+ that is displayed in one of the above output
+ files.
+
+ trace_max_latency : Some of the tracers record the max latency.
+ For example, the time interrupts are disabled.
+ This time is saved in this file. The max trace
+ will also be stored, and displayed by either
+ "trace" or "latency_trace". A new max trace will
+ only be recorded if the latency is greater than
+ the value in this file. (in microseconds)
+
+ trace_entries : This sets or displays the number of trace
+ entries each CPU buffer can hold. The tracer buffers
+ are the same size for each CPU, so care must be
+ taken when modifying the trace_entries. The number
+ of actually entries will be the number given
+ times the number of possible CPUS. The buffers
+ are saved as individual pages, and the actual entries
+ will always be rounded up to entries per page.
+
+ This can only be updated when the current_tracer
+ is set to "none".
+
+ NOTE: It is planned on changing the allocated buffers
+ from being the number of possible CPUS to
+ the number of online CPUS.
+
+ tracing_cpumask : This is a mask that lets the user only trace
+ on specified CPUS. The format is a hex string
+ representing the CPUS.
+
+ set_ftrace_filter : When dynamic ftrace is configured in, the
+ code is dynamically modified to disable calling
+ of the function profiler (mcount). This lets
+ tracing be configured in with practically no overhead
+ in performance. This also has a side effect of
+ enabling or disabling specific functions to be
+ traced. Echoing in names of functions into this
+ file will limit the trace to only those files.
+
+ set_ftrace_notrace: This has the opposite effect that
+ set_ftrace_filter has. Any function that is added
+ here will not be traced. If a function exists
+ in both set_ftrace_filter and set_ftrace_notrace
+ the function will _not_ bet traced.
+
+ available_filter_functions : When a function is encountered the first
+ time by the dynamic tracer, it is recorded and
+ later the call is converted into a nop. This file
+ lists the functions that have been recorded
+ by the dynamic tracer and these functions can
+ be used to set the ftrace filter by the above
+ "set_ftrace_filter" file.
+
+
+The Tracers
+-----------
+
+Here are the list of current tracers that can be configured.
+
+ ftrace - function tracer that uses mcount to trace all functions.
+ It is possible to filter out which functions that are
+ traced when dynamic ftrace is configured in.
+
+ sched_switch - traces the context switches between tasks.
+
+ irqsoff - traces the areas that disable interrupts and saves off
+ the trace with the longest max latency.
+ See tracing_max_latency. When a new max is recorded,
+ it replaces the old trace. It is best to view this
+ trace with the latency_trace file.
+
+ preemptoff - Similar to irqsoff but traces and records the time
+ preemption is disabled.
+
+ preemptirqsoff - Similar to irqsoff and preemptoff, but traces and
+ records the largest time irqs and/or preemption is
+ disabled.
+
+ wakeup - Traces and records the max latency that it takes for
+ the highest priority task to get scheduled after
+ it has been woken up.
+
+ none - This is not a tracer. To remove all tracers from tracing
+ simply echo "none" into current_tracer.
+
+
+Examples of using the tracer
+----------------------------
+
+Here are typical examples of using the tracers with only controlling
+them with the debugfs interface (without using any user-land utilities).
+
+Output format:
+--------------
+
+Here's an example of the output format of the file "trace"
+
+ --------
+# tracer: ftrace
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+ bash-4251 [01] 10152.583854: path_put <-path_walk
+ bash-4251 [01] 10152.583855: dput <-path_put
+ bash-4251 [01] 10152.583855: _atomic_dec_and_lock <-dput
+ --------
+
+A header is printed with the trace that is represented. In this case
+the tracer is "ftrace". Then a header showing the format. Task name
+"bash", the task PID "4251", the CPU that it was running on
+"01", the timestamp in <secs>.<usecs> format, the function name that was
+traced "path_put" and the parent function that called this function
+"path_walk".
+
+The sched_switch tracer also includes tracing of task wake ups and
+context switches.
+
+ ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 2916:115:S
+ ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 10:115:S
+ ksoftirqd/1-7 [01] 1453.070013: 7:115:R ==> 10:115:R
+ events/1-10 [01] 1453.070013: 10:115:S ==> 2916:115:R
+ kondemand/1-2916 [01] 1453.070013: 2916:115:S ==> 7:115:R
+ ksoftirqd/1-7 [01] 1453.070013: 7:115:S ==> 0:140:R
+
+Wake ups are represented by a "+" and the context switches show
+"==>". The format is:
+
+ Context switches:
+
+ Previous task Next Task
+
+ <pid>:<prio>:<state> ==> <pid>:<prio>:<state>
+
+ Wake ups:
+
+ Current task Task waking up
+
+ <pid>:<prio>:<state> + <pid>:<prio>:<state>
+
+The prio is the internal kernel priority, which is inverse to the
+priority that is usually displayed by user-space tools. Zero represents
+the highest priority (99). Prio 100 starts the "nice" priorities with
+100 being equal to nice -20 and 139 being nice 19. The prio "140" is
+reserved for the idle task which is the lowest priority thread (pid 0).
+
+
+Latency trace format
+--------------------
+
+For traces that display latency times, the latency_trace file gives
+a bit more information to see why a latency happened. Here's a typical
+trace.
+
+# tracer: irqsoff
+#
+irqsoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 97 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: apic_timer_interrupt
+ => ended at: do_softirq
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ <idle>-0 0d..1 0us+: trace_hardirqs_off_thunk (apic_timer_interrupt)
+ <idle>-0 0d.s. 97us : __do_softirq (do_softirq)
+ <idle>-0 0d.s1 98us : trace_hardirqs_on (do_softirq)
+
+
+vim:ft=help
+
+
+This shows that the current tracer is "irqsoff" tracing the time
+interrupts are disabled. It gives the trace version and the kernel
+this was executed on (2.6.26-rc8). Then it displays the max latency
+in microsecs (97 us). The number of trace entries displayed
+by the total number recorded (both are three: #3/3). The type of
+preemption that was used (PREEMPT). VP, KP, SP, and HP are always zero
+and reserved for later use. #P is the number of online CPUS (#P:2).
+
+The task is the process that was running when the latency happened.
+(swapper pid: 0).
+
+The start and stop that caused the latencies:
+
+ apic_timer_interrupt is where the interrupts were disabled.
+ do_softirq is where they were enabled again.
+
+The next lines after the header are the trace itself. The header
+explains which is which.
+
+ cmd: The name of the process in the trace.
+
+ pid: The PID of that process.
+
+ CPU#: The CPU that the process was running on.
+
+ irqs-off: 'd' interrupts are disabled. '.' otherwise.
+
+ need-resched: 'N' task need_resched is set, '.' otherwise.
+
+ hardirq/softirq:
+ 'H' - hard irq happened inside a softirq.
+ 'h' - hard irq is running
+ 's' - soft irq is running
+ '.' - normal context.
+
+ preempt-depth: The level of preempt_disabled
+
+The above is mostly meaningful for kernel developers.
+
+ time: This differs from the trace output where as the trace output
+ contained a absolute timestamp. This timestamp is relative
+ to the start of the first entry in the the trace.
+
+ delay: This is just to help catch your eye a bit better. And
+ needs to be fixed to be only relative to the same CPU.
+ The marks is determined by the difference between this
+ current trace and the next trace.
+ '!' - greater than preempt_mark_thresh (default 100)
+ '+' - greater than 1 microsecond
+ ' ' - less than or equal to 1 microsecond.
+
+ The rest is the same as the 'trace' file.
+
+
+iter_ctrl
+---------
+
+The iter_ctrl file is used to control what gets printed in the trace
+output. To see what is available, simply cat the file:
+
+ cat /debug/tracing/iter_ctrl
+ print-parent nosym-offset nosym-addr noverbose noraw nohex nobin \
+ noblock nostacktrace nosched-tree
+
+To disable one of the options, echo in the option appended with "no".
+
+ echo noprint-parent > /debug/tracing/iter_ctrl
+
+To enable an option, leave off the "no".
+
+ echo sym-offest > /debug/tracing/iter_ctrl
+
+Here are the available options:
+
+ print-parent - On function traces, display the calling function
+ as well as the function being traced.
+
+ print-parent:
+ bash-4000 [01] 1477.606694: simple_strtoul <-strict_strtoul
+
+ noprint-parent:
+ bash-4000 [01] 1477.606694: simple_strtoul
+
+
+ sym-offset - Display not only the function name, but also the offset
+ in the function. For example, instead of seeing just
+ "ktime_get" you will see "ktime_get+0xb/0x20"
+
+ sym-offset:
+ bash-4000 [01] 1477.606694: simple_strtoul+0x6/0xa0
+
+ sym-addr - this will also display the function address as well as
+ the function name.
+
+ sym-addr:
+ bash-4000 [01] 1477.606694: simple_strtoul <c0339346>
+
+ verbose - This deals with the latency_trace file.
+
+ bash 4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
+ (+0.000ms): simple_strtoul (strict_strtoul)
+
+ raw - This will display raw numbers. This option is best for use with
+ user applications that can translate the raw numbers better than
+ having it done in the kernel.
+
+ hex - similar to raw, but the numbers will be in a hexadecimal format.
+
+ bin - This will print out the formats in raw binary.
+
+ block - TBD (needs update)
+
+ stacktrace - This is one of the options that changes the trace itself.
+ When a trace is recorded, so is the stack of functions.
+ This allows for back traces of trace sites.
+
+ sched-tree - TBD (any users??)
+
+
+sched_switch
+------------
+
+This tracer simply records schedule switches. Here's an example
+on how to implement it.
+
+ # echo sched_switch > /debug/tracing/current_tracer
+ # echo 1 > /debug/tracing/tracing_enabled
+ # sleep 1
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/trace
+
+# tracer: sched_switch
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+ bash-3997 [01] 240.132281: 3997:120:R + 4055:120:R
+ bash-3997 [01] 240.132284: 3997:120:R ==> 4055:120:R
+ sleep-4055 [01] 240.132371: 4055:120:S ==> 3997:120:R
+ bash-3997 [01] 240.132454: 3997:120:R + 4055:120:S
+ bash-3997 [01] 240.132457: 3997:120:R ==> 4055:120:R
+ sleep-4055 [01] 240.132460: 4055:120:D ==> 3997:120:R
+ bash-3997 [01] 240.132463: 3997:120:R + 4055:120:D
+ bash-3997 [01] 240.132465: 3997:120:R ==> 4055:120:R
+ <idle>-0 [00] 240.132589: 0:140:R + 4:115:S
+ <idle>-0 [00] 240.132591: 0:140:R ==> 4:115:R
+ ksoftirqd/0-4 [00] 240.132595: 4:115:S ==> 0:140:R
+ <idle>-0 [00] 240.132598: 0:140:R + 4:115:S
+ <idle>-0 [00] 240.132599: 0:140:R ==> 4:115:R
+ ksoftirqd/0-4 [00] 240.132603: 4:115:S ==> 0:140:R
+ sleep-4055 [01] 240.133058: 4055:120:S ==> 3997:120:R
+ [...]
+
+
+As we have discussed previously about this format, the header shows
+the name of the trace and points to the options. The "FUNCTION"
+is a misnomer since here it represents the wake ups and context
+switches.
+
+The sched_switch only lists the wake ups (represented with '+')
+and context switches ('==>') with the previous task or current
+first followed by the next task or task waking up. The format for both
+of these is PID:KERNEL-PRIO:TASK-STATE. Remember that the KERNEL-PRIO
+is the inverse of the actual priority with zero (0) being the highest
+priority and the nice values starting at 100 (nice -20). Below is
+a quick chart to map the kernel priority to user land priorities.
+
+ Kernel priority: 0 to 99 ==> user RT priority 99 to 0
+ Kernel priority: 100 to 139 ==> user nice -20 to 19
+ Kernel priority: 140 ==> idle task priority
+
+The task states are:
+
+ R - running : wants to run, may not actually be running
+ S - sleep : process is waiting to be woken up (handles signals)
+ D - deep sleep : process must be woken up (ignores signals)
+ T - stopped : process suspended
+ t - traced : process is being traced (with something like gdb)
+ Z - zombie : process waiting to be cleaned up
+ X - unknown
+
+
+ftrace_enabled
+--------------
+
+The following tracers give different output depending on whether
+or not the sysctl ftrace_enabled is set. To set ftrace_enabled,
+one can either use the sysctl function or set it via the proc
+file system interface.
+
+ sysctl kernel.ftrace_enabled=1
+
+ or
+
+ echo 1 > /proc/sys/kernel/ftrace_enabled
+
+To disable ftrace_enabled simply replace the '1' with '0' in
+the above commands.
+
+When ftrace_enabled is set the tracers will also record the functions
+that are within the trace. The descriptions of the tracers
+will also show an example with ftrace enabled.
+
+
+irqsoff
+-------
+
+When interrupts are disabled, the CPU can not react to any other
+external event (besides NMIs and SMIs). This prevents the timer
+interrupt from triggering or the mouse interrupt from letting the
+kernel know of a new mouse event. The result is a latency with the
+reaction time.
+
+The irqsoff tracer tracks the time interrupts are disabled and when
+they are re-enabled. When a new maximum latency is hit, it saves off
+the trace so that it may be retrieved at a later time. Every time a
+new maximum in reached, the old saved trace is discarded and the new
+trace is saved.
+
+To reset the maximum, echo 0 into tracing_max_latency. Here's an
+example:
+
+ # echo irqsoff > /debug/tracing/current_tracer
+ # echo 0 > /debug/tracing/tracing_max_latency
+ # echo 1 > /debug/tracing/tracing_enabled
+ # ls -ltr
+ [...]
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/latency_trace
+# tracer: irqsoff
+#
+irqsoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 6 us, #3/3, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: bash-4269 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: copy_page_range
+ => ended at: copy_page_range
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ bash-4269 1...1 0us+: _spin_lock (copy_page_range)
+ bash-4269 1...1 7us : _spin_unlock (copy_page_range)
+ bash-4269 1...2 7us : trace_preempt_on (copy_page_range)
+
+
+vim:ft=help
+
+Here we see that that we had a latency of 6 microsecs (which is
+very good). The spin_lock in copy_page_range disabled interrupts.
+The difference between the 6 and the displayed timestamp 7us is
+because the clock must have incremented between the time of recording
+the max latency and recording the function that had that latency.
+
+Note the above had ftrace_enabled not set. If we set the ftrace_enabled
+we get a much larger output:
+
+# tracer: irqsoff
+#
+irqsoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 50 us, #101/101, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: ls-4339 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: __alloc_pages_internal
+ => ended at: __alloc_pages_internal
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ ls-4339 0...1 0us+: get_page_from_freelist (__alloc_pages_internal)
+ ls-4339 0d..1 3us : rmqueue_bulk (get_page_from_freelist)
+ ls-4339 0d..1 3us : _spin_lock (rmqueue_bulk)
+ ls-4339 0d..1 4us : add_preempt_count (_spin_lock)
+ ls-4339 0d..2 4us : __rmqueue (rmqueue_bulk)
+ ls-4339 0d..2 5us : __rmqueue_smallest (__rmqueue)
+ ls-4339 0d..2 5us : __mod_zone_page_state (__rmqueue_smallest)
+ ls-4339 0d..2 6us : __rmqueue (rmqueue_bulk)
+ ls-4339 0d..2 6us : __rmqueue_smallest (__rmqueue)
+ ls-4339 0d..2 7us : __mod_zone_page_state (__rmqueue_smallest)
+ ls-4339 0d..2 7us : __rmqueue (rmqueue_bulk)
+ ls-4339 0d..2 8us : __rmqueue_smallest (__rmqueue)
+[...]
+ ls-4339 0d..2 46us : __rmqueue_smallest (__rmqueue)
+ ls-4339 0d..2 47us : __mod_zone_page_state (__rmqueue_smallest)
+ ls-4339 0d..2 47us : __rmqueue (rmqueue_bulk)
+ ls-4339 0d..2 48us : __rmqueue_smallest (__rmqueue)
+ ls-4339 0d..2 48us : __mod_zone_page_state (__rmqueue_smallest)
+ ls-4339 0d..2 49us : _spin_unlock (rmqueue_bulk)
+ ls-4339 0d..2 49us : sub_preempt_count (_spin_unlock)
+ ls-4339 0d..1 50us : get_page_from_freelist (__alloc_pages_internal)
+ ls-4339 0d..2 51us : trace_hardirqs_on (__alloc_pages_internal)
+
+
+vim:ft=help
+
+
+Here we traced a 50 microsecond latency. But we also see all the
+functions that were called during that time. Note that enabling
+function tracing we endure an added overhead. This overhead may
+extend the latency times. But never the less, this trace has provided
+some very helpful debugging.
+
+
+preemptoff
+----------
+
+When preemption is disabled we may be able to receive interrupts but
+the task can not be preempted and a higher priority task must wait
+for preemption to be enabled again before it can preempt a lower
+priority task.
+
+The preemptoff tracer traces the places that disables preemption.
+Like the irqsoff, it records the maximum latency that preemption
+was disabled. The control of preemptoff is much like the irqsoff.
+
+ # echo preemptoff > /debug/tracing/current_tracer
+ # echo 0 > /debug/tracing/tracing_max_latency
+ # echo 1 > /debug/tracing/tracing_enabled
+ # ls -ltr
+ [...]
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/latency_trace
+# tracer: preemptoff
+#
+preemptoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 29 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: do_IRQ
+ => ended at: __do_softirq
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ sshd-4261 0d.h. 0us+: irq_enter (do_IRQ)
+ sshd-4261 0d.s. 29us : _local_bh_enable (__do_softirq)
+ sshd-4261 0d.s1 30us : trace_preempt_on (__do_softirq)
+
+
+vim:ft=help
+
+This has some more changes. Preemption was disabled when an interrupt
+came in (notice the 'h'), and was enabled while doing a softirq.
+(notice the 's'). But we also see that interrupts have been disabled
+when entering the preempt off section and leaving it (the 'd').
+We do not know if interrupts were enabled in the mean time.
+
+# tracer: preemptoff
+#
+preemptoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 63 us, #87/87, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: remove_wait_queue
+ => ended at: __do_softirq
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ sshd-4261 0d..1 0us : _spin_lock_irqsave (remove_wait_queue)
+ sshd-4261 0d..1 1us : _spin_unlock_irqrestore (remove_wait_queue)
+ sshd-4261 0d..1 2us : do_IRQ (common_interrupt)
+ sshd-4261 0d..1 2us : irq_enter (do_IRQ)
+ sshd-4261 0d..1 2us : idle_cpu (irq_enter)
+ sshd-4261 0d..1 3us : add_preempt_count (irq_enter)
+ sshd-4261 0d.h1 3us : idle_cpu (irq_enter)
+ sshd-4261 0d.h. 4us : handle_fasteoi_irq (do_IRQ)
+[...]
+ sshd-4261 0d.h. 12us : add_preempt_count (_spin_lock)
+ sshd-4261 0d.h1 12us : ack_ioapic_quirk_irq (handle_fasteoi_irq)
+ sshd-4261 0d.h1 13us : move_native_irq (ack_ioapic_quirk_irq)
+ sshd-4261 0d.h1 13us : _spin_unlock (handle_fasteoi_irq)
+ sshd-4261 0d.h1 14us : sub_preempt_count (_spin_unlock)
+ sshd-4261 0d.h1 14us : irq_exit (do_IRQ)
+ sshd-4261 0d.h1 15us : sub_preempt_count (irq_exit)
+ sshd-4261 0d..2 15us : do_softirq (irq_exit)
+ sshd-4261 0d... 15us : __do_softirq (do_softirq)
+ sshd-4261 0d... 16us : __local_bh_disable (__do_softirq)
+ sshd-4261 0d... 16us+: add_preempt_count (__local_bh_disable)
+ sshd-4261 0d.s4 20us : add_preempt_count (__local_bh_disable)
+ sshd-4261 0d.s4 21us : sub_preempt_count (local_bh_enable)
+ sshd-4261 0d.s5 21us : sub_preempt_count (local_bh_enable)
+[...]
+ sshd-4261 0d.s6 41us : add_preempt_count (__local_bh_disable)
+ sshd-4261 0d.s6 42us : sub_preempt_count (local_bh_enable)
+ sshd-4261 0d.s7 42us : sub_preempt_count (local_bh_enable)
+ sshd-4261 0d.s5 43us : add_preempt_count (__local_bh_disable)
+ sshd-4261 0d.s5 43us : sub_preempt_count (local_bh_enable_ip)
+ sshd-4261 0d.s6 44us : sub_preempt_count (local_bh_enable_ip)
+ sshd-4261 0d.s5 44us : add_preempt_count (__local_bh_disable)
+ sshd-4261 0d.s5 45us : sub_preempt_count (local_bh_enable)
+[...]
+ sshd-4261 0d.s. 63us : _local_bh_enable (__do_softirq)
+ sshd-4261 0d.s1 64us : trace_preempt_on (__do_softirq)
+
+
+The above is an example of the preemptoff trace with ftrace_enabled
+set. Here we see that interrupts were disabled the entire time.
+The irq_enter code lets us know that we entered an interrupt 'h'.
+Before that, the functions being traced still show that it is not
+in an interrupt, but we can see by the functions themselves that
+this is not the case.
+
+Notice that the __do_softirq when called doesn't have a preempt_count.
+It may seem that we missed a preempt enabled. What really happened
+is that the preempt count is held on the threads stack and we
+switched to the softirq stack (4K stacks in effect). The code
+does not copy the preempt count, but because interrupts are disabled
+we don't need to worry about it. Having a tracer like this is good
+to let people know what really happens inside the kernel.
+
+
+preemptirqsoff
+--------------
+
+Knowing the locations that have interrupts disabled or preemption
+disabled for the longest times is helpful. But sometimes we would
+like to know when either preemption and/or interrupts are disabled.
+
+The following code:
+
+ local_irq_disable();
+ call_function_with_irqs_off();
+ preempt_disable();
+ call_function_with_irqs_and_preemption_off();
+ local_irq_enable();
+ call_function_with_preemption_off();
+ preempt_enable();
+
+The irqsoff tracer will record the total length of
+call_function_with_irqs_off() and
+call_function_with_irqs_and_preemption_off().
+
+The preemptoff tracer will record the total length of
+call_function_with_irqs_and_preemption_off() and
+call_function_with_preemption_off().
+
+But neither will trace the time that interrupts and/or preemption
+is disabled. This total time is the time that we can not schedule.
+To record this time, use the preemptirqsoff tracer.
+
+Again, using this trace is much like the irqsoff and preemptoff tracers.
+
+ # echo preemptoff > /debug/tracing/current_tracer
+ # echo 0 > /debug/tracing/tracing_max_latency
+ # echo 1 > /debug/tracing/tracing_enabled
+ # ls -ltr
+ [...]
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/latency_trace
+# tracer: preemptirqsoff
+#
+preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 293 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: ls-4860 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: apic_timer_interrupt
+ => ended at: __do_softirq
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ ls-4860 0d... 0us!: trace_hardirqs_off_thunk (apic_timer_interrupt)
+ ls-4860 0d.s. 294us : _local_bh_enable (__do_softirq)
+ ls-4860 0d.s1 294us : trace_preempt_on (__do_softirq)
+
+
+vim:ft=help
+
+
+The trace_hardirqs_off_thunk is called from assembly on x86 when
+interrupts are disabled in the assembly code. Without the function
+tracing, we don't know if interrupts were enabled within the preemption
+points. We do see that it started with preemption enabled.
+
+Here is a trace with ftrace_enabled set:
+
+
+# tracer: preemptirqsoff
+#
+preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 105 us, #183/183, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: write_chan
+ => ended at: __do_softirq
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ ls-4473 0.N.. 0us : preempt_schedule (write_chan)
+ ls-4473 0dN.1 1us : _spin_lock (schedule)
+ ls-4473 0dN.1 2us : add_preempt_count (_spin_lock)
+ ls-4473 0d..2 2us : put_prev_task_fair (schedule)
+[...]
+ ls-4473 0d..2 13us : set_normalized_timespec (ktime_get_ts)
+ ls-4473 0d..2 13us : __switch_to (schedule)
+ sshd-4261 0d..2 14us : finish_task_switch (schedule)
+ sshd-4261 0d..2 14us : _spin_unlock_irq (finish_task_switch)
+ sshd-4261 0d..1 15us : add_preempt_count (_spin_lock_irqsave)
+ sshd-4261 0d..2 16us : _spin_unlock_irqrestore (hrtick_set)
+ sshd-4261 0d..2 16us : do_IRQ (common_interrupt)
+ sshd-4261 0d..2 17us : irq_enter (do_IRQ)
+ sshd-4261 0d..2 17us : idle_cpu (irq_enter)
+ sshd-4261 0d..2 18us : add_preempt_count (irq_enter)
+ sshd-4261 0d.h2 18us : idle_cpu (irq_enter)
+ sshd-4261 0d.h. 18us : handle_fasteoi_irq (do_IRQ)
+ sshd-4261 0d.h. 19us : _spin_lock (handle_fasteoi_irq)
+ sshd-4261 0d.h. 19us : add_preempt_count (_spin_lock)
+ sshd-4261 0d.h1 20us : _spin_unlock (handle_fasteoi_irq)
+ sshd-4261 0d.h1 20us : sub_preempt_count (_spin_unlock)
+[...]
+ sshd-4261 0d.h1 28us : _spin_unlock (handle_fasteoi_irq)
+ sshd-4261 0d.h1 29us : sub_preempt_count (_spin_unlock)
+ sshd-4261 0d.h2 29us : irq_exit (do_IRQ)
+ sshd-4261 0d.h2 29us : sub_preempt_count (irq_exit)
+ sshd-4261 0d..3 30us : do_softirq (irq_exit)
+ sshd-4261 0d... 30us : __do_softirq (do_softirq)
+ sshd-4261 0d... 31us : __local_bh_disable (__do_softirq)
+ sshd-4261 0d... 31us+: add_preempt_count (__local_bh_disable)
+ sshd-4261 0d.s4 34us : add_preempt_count (__local_bh_disable)
+[...]
+ sshd-4261 0d.s3 43us : sub_preempt_count (local_bh_enable_ip)
+ sshd-4261 0d.s4 44us : sub_preempt_count (local_bh_enable_ip)
+ sshd-4261 0d.s3 44us : smp_apic_timer_interrupt (apic_timer_interrupt)
+ sshd-4261 0d.s3 45us : irq_enter (smp_apic_timer_interrupt)
+ sshd-4261 0d.s3 45us : idle_cpu (irq_enter)
+ sshd-4261 0d.s3 46us : add_preempt_count (irq_enter)
+ sshd-4261 0d.H3 46us : idle_cpu (irq_enter)
+ sshd-4261 0d.H3 47us : hrtimer_interrupt (smp_apic_timer_interrupt)
+ sshd-4261 0d.H3 47us : ktime_get (hrtimer_interrupt)
+[...]
+ sshd-4261 0d.H3 81us : tick_program_event (hrtimer_interrupt)
+ sshd-4261 0d.H3 82us : ktime_get (tick_program_event)
+ sshd-4261 0d.H3 82us : ktime_get_ts (ktime_get)
+ sshd-4261 0d.H3 83us : getnstimeofday (ktime_get_ts)
+ sshd-4261 0d.H3 83us : set_normalized_timespec (ktime_get_ts)
+ sshd-4261 0d.H3 84us : clockevents_program_event (tick_program_event)
+ sshd-4261 0d.H3 84us : lapic_next_event (clockevents_program_event)
+ sshd-4261 0d.H3 85us : irq_exit (smp_apic_timer_interrupt)
+ sshd-4261 0d.H3 85us : sub_preempt_count (irq_exit)
+ sshd-4261 0d.s4 86us : sub_preempt_count (irq_exit)
+ sshd-4261 0d.s3 86us : add_preempt_count (__local_bh_disable)
+[...]
+ sshd-4261 0d.s1 98us : sub_preempt_count (net_rx_action)
+ sshd-4261 0d.s. 99us : add_preempt_count (_spin_lock_irq)
+ sshd-4261 0d.s1 99us+: _spin_unlock_irq (run_timer_softirq)
+ sshd-4261 0d.s. 104us : _local_bh_enable (__do_softirq)
+ sshd-4261 0d.s. 104us : sub_preempt_count (_local_bh_enable)
+ sshd-4261 0d.s. 105us : _local_bh_enable (__do_softirq)
+ sshd-4261 0d.s1 105us : trace_preempt_on (__do_softirq)
+
+
+This is a very interesting trace. It started with the preemption of
+the ls task. We see that the task had the "need_resched" bit set
+with the 'N' in the trace. Interrupts are disabled in the spin_lock
+and the trace started. We see that a schedule took place to run
+sshd. When the interrupts were enabled we took an interrupt.
+On return of the interrupt the softirq ran. We took another interrupt
+while running the softirq as we see with the capital 'H'.
+
+
+wakeup
+------
+
+In Real-Time environment it is very important to know the wakeup
+time it takes for the highest priority task that wakes up to the
+time it executes. This is also known as "schedule latency".
+I stress the point that this is about RT tasks. It is also important
+to know the scheduling latency of non-RT tasks, but the average
+schedule latency is better for non-RT tasks. Tools like
+LatencyTop is more appropriate for such measurements.
+
+Real-Time environments is interested in the worst case latency.
+That is the longest latency it takes for something to happen, and
+not the average. We can have a very fast scheduler that may only
+have a large latency once in a while, but that would not work well
+with Real-Time tasks. The wakeup tracer was designed to record
+the worst case wakeups of RT tasks. Non-RT tasks are not recorded
+because the tracer only records one worst case and tracing non-RT
+tasks that are unpredictable will overwrite the worst case latency
+of RT tasks.
+
+Since this tracer only deals with RT tasks, we will run this slightly
+different than we did with the previous tracers. Instead of performing
+an 'ls' we will run 'sleep 1' under 'chrt' which changes the
+priority of the task.
+
+ # echo wakeup > /debug/tracing/current_tracer
+ # echo 0 > /debug/tracing/tracing_max_latency
+ # echo 1 > /debug/tracing/tracing_enabled
+ # chrt -f 5 sleep 1
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/latency_trace
+# tracer: wakeup
+#
+wakeup latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 4 us, #2/2, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: sleep-4901 (uid:0 nice:0 policy:1 rt_prio:5)
+ -----------------
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ <idle>-0 1d.h4 0us+: try_to_wake_up (wake_up_process)
+ <idle>-0 1d..4 4us : schedule (cpu_idle)
+
+
+vim:ft=help
+
+
+Running this on an idle system we see that it only took 4 microseconds
+to perform the task switch. Note, since the trace marker in the
+schedule is before the actual "switch" we stop the tracing when
+the recorded task is about to schedule in. This may change if
+we add a new marker at the end of the scheduler.
+
+Notice that the recorded task is 'sleep' with the PID of 4901 and it
+has an rt_prio of 5. This priority is user-space priority and not
+the internal kernel priority. The policy is 1 for SCHED_FIFO and 2
+for SCHED_RR.
+
+Doing the same with chrt -r 5 and ftrace_enabled set.
+
+# tracer: wakeup
+#
+wakeup latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 50 us, #60/60, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: sleep-4068 (uid:0 nice:0 policy:2 rt_prio:5)
+ -----------------
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ksoftirq-7 1d.H3 0us : try_to_wake_up (wake_up_process)
+ksoftirq-7 1d.H4 1us : sub_preempt_count (marker_probe_cb)
+ksoftirq-7 1d.H3 2us : check_preempt_wakeup (try_to_wake_up)
+ksoftirq-7 1d.H3 3us : update_curr (check_preempt_wakeup)
+ksoftirq-7 1d.H3 4us : calc_delta_mine (update_curr)
+ksoftirq-7 1d.H3 5us : __resched_task (check_preempt_wakeup)
+ksoftirq-7 1d.H3 6us : task_wake_up_rt (try_to_wake_up)
+ksoftirq-7 1d.H3 7us : _spin_unlock_irqrestore (try_to_wake_up)
+[...]
+ksoftirq-7 1d.H2 17us : irq_exit (smp_apic_timer_interrupt)
+ksoftirq-7 1d.H2 18us : sub_preempt_count (irq_exit)
+ksoftirq-7 1d.s3 19us : sub_preempt_count (irq_exit)
+ksoftirq-7 1..s2 20us : rcu_process_callbacks (__do_softirq)
+[...]
+ksoftirq-7 1..s2 26us : __rcu_process_callbacks (rcu_process_callbacks)
+ksoftirq-7 1d.s2 27us : _local_bh_enable (__do_softirq)
+ksoftirq-7 1d.s2 28us : sub_preempt_count (_local_bh_enable)
+ksoftirq-7 1.N.3 29us : sub_preempt_count (ksoftirqd)
+ksoftirq-7 1.N.2 30us : _cond_resched (ksoftirqd)
+ksoftirq-7 1.N.2 31us : __cond_resched (_cond_resched)
+ksoftirq-7 1.N.2 32us : add_preempt_count (__cond_resched)
+ksoftirq-7 1.N.2 33us : schedule (__cond_resched)
+ksoftirq-7 1.N.2 33us : add_preempt_count (schedule)
+ksoftirq-7 1.N.3 34us : hrtick_clear (schedule)
+ksoftirq-7 1dN.3 35us : _spin_lock (schedule)
+ksoftirq-7 1dN.3 36us : add_preempt_count (_spin_lock)
+ksoftirq-7 1d..4 37us : put_prev_task_fair (schedule)
+ksoftirq-7 1d..4 38us : update_curr (put_prev_task_fair)
+[...]
+ksoftirq-7 1d..5 47us : _spin_trylock (tracing_record_cmdline)
+ksoftirq-7 1d..5 48us : add_preempt_count (_spin_trylock)
+ksoftirq-7 1d..6 49us : _spin_unlock (tracing_record_cmdline)
+ksoftirq-7 1d..6 49us : sub_preempt_count (_spin_unlock)
+ksoftirq-7 1d..4 50us : schedule (__cond_resched)
+
+The interrupt went off while running ksoftirqd. This task runs at
+SCHED_OTHER. Why didn't we see the 'N' set early? This may be
+a harmless bug with x86_32 and 4K stacks. The need_reched() function
+that tests if we need to reschedule looks on the actual stack.
+Where as the setting of the NEED_RESCHED bit happens on the
+task's stack. But because we are in a hard interrupt, the test
+is with the interrupts stack which has that to be false. We don't
+see the 'N' until we switch back to the task's stack.
+
+ftrace
+------
+
+ftrace is not only the name of the tracing infrastructure, but it
+is also a name of one of the tracers. The tracer is the function
+tracer. Enabling the function tracer can be done from the
+debug file system. Make sure the ftrace_enabled is set otherwise
+this tracer is a nop.
+
+ # sysctl kernel.ftrace_enabled=1
+ # echo ftrace > /debug/tracing/current_tracer
+ # echo 1 > /debug/tracing/tracing_enabled
+ # usleep 1
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/trace
+# tracer: ftrace
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+ bash-4003 [00] 123.638713: finish_task_switch <-schedule
+ bash-4003 [00] 123.638714: _spin_unlock_irq <-finish_task_switch
+ bash-4003 [00] 123.638714: sub_preempt_count <-_spin_unlock_irq
+ bash-4003 [00] 123.638715: hrtick_set <-schedule
+ bash-4003 [00] 123.638715: _spin_lock_irqsave <-hrtick_set
+ bash-4003 [00] 123.638716: add_preempt_count <-_spin_lock_irqsave
+ bash-4003 [00] 123.638716: _spin_unlock_irqrestore <-hrtick_set
+ bash-4003 [00] 123.638717: sub_preempt_count <-_spin_unlock_irqrestore
+ bash-4003 [00] 123.638717: hrtick_clear <-hrtick_set
+ bash-4003 [00] 123.638718: sub_preempt_count <-schedule
+ bash-4003 [00] 123.638718: sub_preempt_count <-preempt_schedule
+ bash-4003 [00] 123.638719: wait_for_completion <-__stop_machine_run
+ bash-4003 [00] 123.638719: wait_for_common <-wait_for_completion
+ bash-4003 [00] 123.638720: _spin_lock_irq <-wait_for_common
+ bash-4003 [00] 123.638720: add_preempt_count <-_spin_lock_irq
+[...]
+
+
+Note: It is sometimes better to enable or disable tracing directly from
+a program, because the buffer may be overflowed by the echo commands
+before you get to the point you want to trace. It is also easier to
+stop the tracing at the point that you hit the part that you are
+interested in. Since the ftrace buffer is a ring buffer with the
+oldest data being overwritten, usually it is sufficient to start the
+tracer with an echo command but have you code stop it. Something
+like the following is usually appropriate for this.
+
+int trace_fd;
+[...]
+int main(int argc, char *argv[]) {
+ [...]
+ trace_fd = open("/debug/tracing/tracing_enabled", O_WRONLY);
+ [...]
+ if (condition_hit()) {
+ write(trace_fd, "0", 1);
+ }
+ [...]
+}
+
+
+dynamic ftrace
+--------------
+
+If CONFIG_DYNAMIC_FTRACE is set, then the system will run with
+virtually no overhead when function tracing is disabled. The way
+this works is the mcount function call (placed at the start of
+every kernel function, produced by the -pg switch in gcc), starts
+of pointing to a simple return.
+
+When dynamic ftrace is initialized, it calls kstop_machine to make it
+act like a uniprocessor so that it can freely modify code without
+worrying about other processors executing that same code. At
+initialization, the mcount calls are change to call a "record_ip"
+function. After this, the first time a kernel function is called,
+it has the calling address saved in a hash table.
+
+Later on the ftraced kernel thread is awoken and will again call
+kstop_machine if new functions have been recorded. The ftraced thread
+will change all calls to mcount to "nop". Just calling mcount
+and having mcount return has shown a 10% overhead. By converting
+it to a nop, there is no recordable overhead to the system.
+
+One special side-effect to the recording of the functions being
+traced, is that we can now selectively choose which functions we
+want to trace and which ones we want the mcount calls to remain as
+nops.
+
+Two files that contain to the enabling and disabling of recorded
+functions are:
+
+ set_ftrace_filter
+
+and
+
+ set_ftrace_notrace
+
+A list of available functions that you can add to this files is listed
+in:
+
+ available_filter_functions
+
+ # cat /debug/tracing/available_filter_functions
+put_prev_task_idle
+kmem_cache_create
+pick_next_task_rt
+get_online_cpus
+pick_next_task_fair
+mutex_lock
+[...]
+
+If I'm only interested in sys_nanosleep and hrtimer_interrupt:
+
+ # echo sys_nanosleep hrtimer_interrupt \
+ > /debug/tracing/set_ftrace_filter
+ # echo ftrace > /debug/tracing/current_tracer
+ # echo 1 > /debug/tracing/tracing_enabled
+ # usleep 1
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/trace
+# tracer: ftrace
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+ usleep-4134 [00] 1317.070017: hrtimer_interrupt <-smp_apic_timer_interrupt
+ usleep-4134 [00] 1317.070111: sys_nanosleep <-syscall_call
+ <idle>-0 [00] 1317.070115: hrtimer_interrupt <-smp_apic_timer_interrupt
+
+To see what functions are being traced, you can cat the file:
+
+ # cat /debug/tracing/set_ftrace_filter
+hrtimer_interrupt
+sys_nanosleep
+
+
+Perhaps this isn't enough. The filters also allow simple wild cards.
+Only the following is currently available
+
+ <match>* - will match functions that begins with <match>
+ *<match> - will match functions that end with <match>
+ *<match>* - will match functions that have <match> in it
+
+Thats all the wild cards that are allowed.
+
+ <match>*<match> will not work.
+
+ # echo hrtimer_* > /debug/tracing/set_ftrace_filter
+
+Produces:
+
+# tracer: ftrace
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+ bash-4003 [00] 1480.611794: hrtimer_init <-copy_process
+ bash-4003 [00] 1480.611941: hrtimer_start <-hrtick_set
+ bash-4003 [00] 1480.611956: hrtimer_cancel <-hrtick_clear
+ bash-4003 [00] 1480.611956: hrtimer_try_to_cancel <-hrtimer_cancel
+ <idle>-0 [00] 1480.612019: hrtimer_get_next_event <-get_next_timer_interrupt
+ <idle>-0 [00] 1480.612025: hrtimer_get_next_event <-get_next_timer_interrupt
+ <idle>-0 [00] 1480.612032: hrtimer_get_next_event <-get_next_timer_interrupt
+ <idle>-0 [00] 1480.612037: hrtimer_get_next_event <-get_next_timer_interrupt
+ <idle>-0 [00] 1480.612382: hrtimer_get_next_event <-get_next_timer_interrupt
+
+
+Notice that we lost the sys_nanosleep.
+
+ # cat /debug/tracing/set_ftrace_filter
+hrtimer_run_queues
+hrtimer_run_pending
+hrtimer_init
+hrtimer_cancel
+hrtimer_try_to_cancel
+hrtimer_forward
+hrtimer_start
+hrtimer_reprogram
+hrtimer_force_reprogram
+hrtimer_get_next_event
+hrtimer_interrupt
+hrtimer_nanosleep
+hrtimer_wakeup
+hrtimer_get_remaining
+hrtimer_get_res
+hrtimer_init_sleeper
+
+
+This is because the '>' and '>>' act just like they do in bash.
+To rewrite the filters, use '>'
+To append to the filters, use '>>'
+
+To clear out a filter so that all functions will be recorded again.
+
+ # echo > /debug/tracing/set_ftrace_filter
+ # cat /debug/tracing/set_ftrace_filter
+ #
+
+Again, now we want to append.
+
+ # echo sys_nanosleep > /debug/tracing/set_ftrace_filter
+ # cat /debug/tracing/set_ftrace_filter
+sys_nanosleep
+ # echo hrtimer_* >> /debug/tracing/set_ftrace_filter
+ # cat /debug/tracing/set_ftrace_filter
+hrtimer_run_queues
+hrtimer_run_pending
+hrtimer_init
+hrtimer_cancel
+hrtimer_try_to_cancel
+hrtimer_forward
+hrtimer_start
+hrtimer_reprogram
+hrtimer_force_reprogram
+hrtimer_get_next_event
+hrtimer_interrupt
+sys_nanosleep
+hrtimer_nanosleep
+hrtimer_wakeup
+hrtimer_get_remaining
+hrtimer_get_res
+hrtimer_init_sleeper
+
+
+The set_ftrace_notrace prevents those functions from being traced.
+
+ # echo '*preempt*' '*lock*' > /debug/tracing/set_ftrace_notrace
+
+Produces:
+
+# tracer: ftrace
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+ bash-4043 [01] 115.281644: finish_task_switch <-schedule
+ bash-4043 [01] 115.281645: hrtick_set <-schedule
+ bash-4043 [01] 115.281645: hrtick_clear <-hrtick_set
+ bash-4043 [01] 115.281646: wait_for_completion <-__stop_machine_run
+ bash-4043 [01] 115.281647: wait_for_common <-wait_for_completion
+ bash-4043 [01] 115.281647: kthread_stop <-stop_machine_run
+ bash-4043 [01] 115.281648: init_waitqueue_head <-kthread_stop
+ bash-4043 [01] 115.281648: wake_up_process <-kthread_stop
+ bash-4043 [01] 115.281649: try_to_wake_up <-wake_up_process
+
+We can see that there's no more lock or preempt tracing.
+
+ftraced
+-------
+
+As mentioned above, when dynamic ftrace is configured in, a kernel
+thread wakes up once a second and checks to see if there are mcount
+calls that need to be converted into nops. If there is not, then
+it simply goes back to sleep. But if there is, it will call
+kstop_machine to convert the calls to nops.
+
+There may be a case that you do not want this added latency.
+Perhaps you are doing some audio recording and this activity might
+cause skips in the playback. There is an interface to disable
+and enable the ftraced kernel thread.
+
+ # echo 0 > /debug/tracing/ftraced_enabled
+
+This will disable the calling of the kstop_machine to update the
+mcount calls to nops. Remember that there's a large overhead
+to calling mcount. Without this kernel thread, that overhead will
+exist.
+
+Any write to the ftraced_enabled file will cause the kstop_machine
+to run if there are recorded calls to mcount. This means that a
+user can manually perform the updates when they want to by simply
+echoing a '0' into the ftraced_enabled file.
+
+The updates are also done at the beginning of enabling a tracer
+that uses ftrace function recording.
+
+
+trace_pipe
+----------
+
+The trace_pipe outputs the same as trace, but the effect on the
+tracing is different. Every read from trace_pipe is consumed.
+This means that subsequent reads will be different. The trace
+is live.
+
+ # echo ftrace > /debug/tracing/current_tracer
+ # cat /debug/tracing/trace_pipe > /tmp/trace.out &
+[1] 4153
+ # echo 1 > /debug/tracing/tracing_enabled
+ # usleep 1
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/trace
+# tracer: ftrace
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+
+ #
+ # cat /tmp/trace.out
+ bash-4043 [00] 41.267106: finish_task_switch <-schedule
+ bash-4043 [00] 41.267106: hrtick_set <-schedule
+ bash-4043 [00] 41.267107: hrtick_clear <-hrtick_set
+ bash-4043 [00] 41.267108: wait_for_completion <-__stop_machine_run
+ bash-4043 [00] 41.267108: wait_for_common <-wait_for_completion
+ bash-4043 [00] 41.267109: kthread_stop <-stop_machine_run
+ bash-4043 [00] 41.267109: init_waitqueue_head <-kthread_stop
+ bash-4043 [00] 41.267110: wake_up_process <-kthread_stop
+ bash-4043 [00] 41.267110: try_to_wake_up <-wake_up_process
+ bash-4043 [00] 41.267111: select_task_rq_rt <-try_to_wake_up
+
+
+Note, reading the trace_pipe will block until more input is added.
+By changing the tracer, trace_pipe will issue an EOF. We needed
+to set the ftrace tracer _before_ cating the trace_pipe file.
+
+
+trace entries
+-------------
+
+Having too much or not enough data can be troublesome in diagnosing
+some issue in the kernel. The file trace_entries is used to modify
+the size of the internal trace buffers. The numbers listed
+is the number of entries that can be recorded per CPU. To know
+the full size, multiply the number of possible CPUS with the
+number of entries.
+
+ # cat /debug/tracing/trace_entries
+65620
+
+Note, to modify this you must have tracing fulling disabled. To do that,
+echo "none" into the current_tracer.
+
+ # echo none > /debug/tracing/current_tracer
+ # echo 100000 > /debug/tracing/trace_entries
+ # cat /debug/tracing/trace_entries
+100045
+
+
+Notice that we echoed in 100,000 but the size is 100,045. The entries
+are held by individual pages. It allocates the number of pages it takes
+to fulfill the request. If more entries may fit on the last page
+it will add them.
+
+ # echo 1 > /debug/tracing/trace_entries
+ # cat /debug/tracing/trace_entries
+85
+
+This shows us that 85 entries can fit on a single page.
+
+The number of pages that will be allocated is a percentage of available
+memory. Allocating too much will produces an error.
+
+ # echo 1000000000000 > /debug/tracing/trace_entries
+-bash: echo: write error: Cannot allocate memory
+ # cat /debug/tracing/trace_entries
+85
+


2008-07-10 18:51:12

by Jon Masters

[permalink] [raw]
Subject: Re: [PATCH] ftrace: Documentation

On Thu, 2008-07-10 at 12:46 -0400, Steven Rostedt wrote:
> This is the long awaited ftrace.txt. It explains in quite detail how to
> use ftrace and the various tracers.

Dude. As usual, you rock (not my world, of course...but this will
finally allow me to actually use ftrace in some kind of anger :P).

Documentation! W00t! Wow!

Jon.

2008-07-10 19:59:35

by Elias Oltmanns

[permalink] [raw]
Subject: Re: [PATCH] ftrace: Documentation

Steven Rostedt <[email protected]> wrote:
> This is the long awaited ftrace.txt. It explains in quite detail how to
> use ftrace and the various tracers.
>
> Signed-off-by: Steven Rostedt <[email protected]>

Exactly what I've just been looking for. Very nice.

As I read through this enlightening peace, I took the liberty to make
some comments where I thought I'd spotted some mistake. Note that I'm
not a native English speaker nor familiar with all the terminology.
Also, I didn't exactly scratch my head when I had a bad feeling about
something but couldn't come up with a better idea straight away.
Basically, I just skimmed through the lines because im interested in the
matter.

Anyway, here it goes:

[...]
> + available_tracers : This holds the different types of tracers that
> + has been compiled into the kernel. The tracers

have

> + listed here can be configured by echoing in their
> + name into current_tracer.
[...]
> + trace_entries : This sets or displays the number of trace
> + entries each CPU buffer can hold. The tracer buffers
> + are the same size for each CPU, so care must be
> + taken when modifying the trace_entries. The number
> + of actually entries will be the number given

actual

> + times the number of possible CPUS. The buffers
> + are saved as individual pages, and the actual entries
> + will always be rounded up to entries per page.

Not sure I understand the last sentence, but may be it's just me not
being familiar with the terminology.

[...]
> + set_ftrace_filter : When dynamic ftrace is configured in, the
> + code is dynamically modified to disable calling
> + of the function profiler (mcount). This lets
> + tracing be configured in with practically no overhead
> + in performance. This also has a side effect of
> + enabling or disabling specific functions to be
> + traced. Echoing in names of functions into this
> + file will limit the trace to only those files.

these functions?

> +
> + set_ftrace_notrace: This has the opposite effect that
> + set_ftrace_filter has. Any function that is added
> + here will not be traced. If a function exists
> + in both set_ftrace_filter and set_ftrace_notrace

(comma)

> + the function will _not_ bet traced.
> +
[...]
> + ftrace - function tracer that uses mcount to trace all functions.
> + It is possible to filter out which functions that are

are to be

> + traced when dynamic ftrace is configured in.
> +
[...]
> + time: This differs from the trace output where as the trace output
> + contained a absolute timestamp. This timestamp is relative
> + to the start of the first entry in the the trace.

double `the'

Actually, the whole description of this item feels a bit awkward.

> +
> + delay: This is just to help catch your eye a bit better. And
> + needs to be fixed to be only relative to the same CPU.
> + The marks is determined by the difference between this

are

> + current trace and the next trace.
> + '!' - greater than preempt_mark_thresh (default 100)
> + '+' - greater than 1 microsecond
> + ' ' - less than or equal to 1 microsecond.
> +
[...]
> +To disable one of the options, echo in the option appended with "no".

prepended?

> +
> + echo noprint-parent > /debug/tracing/iter_ctrl
> +
> +To enable an option, leave off the "no".
> +
> + echo sym-offest > /debug/tracing/iter_ctrl

sym-offset

> +
> +Here are the available options:
[...]
> + sym-offset - Display not only the function name, but also the offset
> + in the function. For example, instead of seeing just
> + "ktime_get" you will see "ktime_get+0xb/0x20"

(comma) (full stop)

[...]
> + hex - similar to raw, but the numbers will be in a hexadecimal format.

(capital S)

> +
> + bin - This will print out the formats in raw binary.
> +
> + block - TBD (needs update)
> +
> + stacktrace - This is one of the options that changes the trace itself.

change

> + When a trace is recorded, so is the stack of functions.
> + This allows for back traces of trace sites.
> +
> + sched-tree - TBD (any users??)
> +
> +
> +sched_switch
> +------------
> +
> +This tracer simply records schedule switches. Here's an example
> +on how to implement it.

use?

[...]
> +When ftrace_enabled is set the tracers will also record the functions

(comma)

> +that are within the trace. The descriptions of the tracers
> +will also show an example with ftrace enabled.
> +
> +
> +irqsoff
> +-------
> +
> +When interrupts are disabled, the CPU can not react to any other
> +external event (besides NMIs and SMIs). This prevents the timer
> +interrupt from triggering or the mouse interrupt from letting the
> +kernel know of a new mouse event. The result is a latency with the
> +reaction time.
> +
> +The irqsoff tracer tracks the time interrupts are disabled and when

when

> +they are re-enabled. When a new maximum latency is hit, it saves off
> +the trace so that it may be retrieved at a later time. Every time a
> +new maximum in reached, the old saved trace is discarded and the new
> +trace is saved.
[...]
> +Note the above had ftrace_enabled not set. If we set the ftrace_enabled

(comma)

> +we get a much larger output:
> +
[...]
> +Here we traced a 50 microsecond latency. But we also see all the
> +functions that were called during that time. Note that enabling

by enabling?

> +function tracing we endure an added overhead. This overhead may

(comma)

> +extend the latency times. But never the less, this trace has provided
> +some very helpful debugging.

debugging information?

> +
> +
> +preemptoff
> +----------
> +
> +When preemption is disabled we may be able to receive interrupts but

(comma)

> +the task can not be preempted and a higher priority task must wait
> +for preemption to be enabled again before it can preempt a lower
> +priority task.
> +
> +The preemptoff tracer traces the places that disables preemption.

disable

> +Like the irqsoff, it records the maximum latency that preemption
> +was disabled. The control of preemptoff is much like the irqsoff.
[...]
> +Notice that the __do_softirq when called doesn't have a preempt_count.
> +It may seem that we missed a preempt enabled. What really happened
> +is that the preempt count is held on the threads stack and we
> +switched to the softirq stack (4K stacks in effect). The code
> +does not copy the preempt count, but because interrupts are disabled

(comma)

> +we don't need to worry about it. Having a tracer like this is good
> +to let people know what really happens inside the kernel.
[...]
> +To record this time, use the preemptirqsoff tracer.
> +
> +Again, using this trace is much like the irqsoff and preemptoff tracers.
> +
> + # echo preemptoff > /debug/tracing/current_tracer

preemptirqsoff

> + # echo 0 > /debug/tracing/tracing_max_latency
> + # echo 1 > /debug/tracing/tracing_enabled
> + # ls -ltr
> + [...]
> + # echo 0 > /debug/tracing/tracing_enabled
> + # cat /debug/tracing/latency_trace
> +# tracer: preemptirqsoff
[...]
> +This is a very interesting trace. It started with the preemption of
> +the ls task. We see that the task had the "need_resched" bit set
> +with the 'N' in the trace. Interrupts are disabled in the spin_lock
> +and the trace started. We see that a schedule took place to run
> +sshd. When the interrupts were enabled we took an interrupt.

(comma)

> +On return of the interrupt the softirq ran. We took another interrupt

from the interrupt handler,

> +while running the softirq as we see with the capital 'H'.
> +
> +
> +wakeup
> +------
> +
> +In Real-Time environment it is very important to know the wakeup
> +time it takes for the highest priority task that wakes up to the
> +time it executes. This is also known as "schedule latency".
> +I stress the point that this is about RT tasks. It is also important
> +to know the scheduling latency of non-RT tasks, but the average
> +schedule latency is better for non-RT tasks. Tools like
> +LatencyTop is more appropriate for such measurements.

are

> +
> +Real-Time environments is interested in the worst case latency.

are

> +That is the longest latency it takes for something to happen, and
> +not the average. We can have a very fast scheduler that may only
> +have a large latency once in a while, but that would not work well
> +with Real-Time tasks. The wakeup tracer was designed to record
> +the worst case wakeups of RT tasks. Non-RT tasks are not recorded
> +because the tracer only records one worst case and tracing non-RT
> +tasks that are unpredictable will overwrite the worst case latency
> +of RT tasks.
> +
> +Since this tracer only deals with RT tasks, we will run this slightly
> +different than we did with the previous tracers. Instead of performing
> +an 'ls' we will run 'sleep 1' under 'chrt' which changes the

(comma)

> +priority of the task.
[...]
> +Running this on an idle system we see that it only took 4 microseconds

(comma)

> +to perform the task switch. Note, since the trace marker in the
> +schedule is before the actual "switch" we stop the tracing when

(comma)

> +the recorded task is about to schedule in. This may change if
> +we add a new marker at the end of the scheduler.
[...]
> +Where as the setting of the NEED_RESCHED bit happens on the
> +task's stack. But because we are in a hard interrupt, the test
> +is with the interrupts stack which has that to be false. We don't

^^^^
Superfluous that? Don't understand that sentence.

> +see the 'N' until we switch back to the task's stack.
[...]
> +When dynamic ftrace is initialized, it calls kstop_machine to make it
> +act like a uniprocessor so that it can freely modify code without
> +worrying about other processors executing that same code. At
> +initialization, the mcount calls are change to call a "record_ip"

changed

> +function. After this, the first time a kernel function is called,
> +it has the calling address saved in a hash table.
[...]
> +Two files that contain to the enabling and disabling of recorded
> +functions are:

Can this be expressed somewhat differently?

> +
> + set_ftrace_filter
> +
> +and
> +
> + set_ftrace_notrace
> +
> +A list of available functions that you can add to this files is listed

these

> +in:
> +
> + available_filter_functions
[...]
> +Perhaps this isn't enough. The filters also allow simple wild cards.
> +Only the following is currently available
> +
> + <match>* - will match functions that begins with <match>

begin

> + *<match> - will match functions that end with <match>
> + *<match>* - will match functions that have <match> in it
[...]
> +This is because the '>' and '>>' act just like they do in bash.
> +To rewrite the filters, use '>'
> +To append to the filters, use '>>'
> +
> +To clear out a filter so that all functions will be recorded again.

:

> +
> + # echo > /debug/tracing/set_ftrace_filter
> + # cat /debug/tracing/set_ftrace_filter
> + #
[...]
> +ftraced
> +-------
> +
> +As mentioned above, when dynamic ftrace is configured in, a kernel
> +thread wakes up once a second and checks to see if there are mcount
> +calls that need to be converted into nops. If there is not, then

are

> +it simply goes back to sleep. But if there is, it will call

are

> +kstop_machine to convert the calls to nops.
[...]
> +Any write to the ftraced_enabled file will cause the kstop_machine
> +to run if there are recorded calls to mcount. This means that a

Incomplete sentence.

> +user can manually perform the updates when they want to by simply

(s)he wants

> +echoing a '0' into the ftraced_enabled file.
[...]
> +Having too much or not enough data can be troublesome in diagnosing
> +some issue in the kernel. The file trace_entries is used to modify
> +the size of the internal trace buffers. The numbers listed
> +is the number of entries that can be recorded per CPU. To know

are

> +the full size, multiply the number of possible CPUS with the
> +number of entries.
> +
> + # cat /debug/tracing/trace_entries
> +65620
> +
> +Note, to modify this you must have tracing fulling disabled. To do that,

(comma) fully / completely

> +echo "none" into the current_tracer.
[...]
> +The number of pages that will be allocated is a percentage of available
> +memory. Allocating too much will produces an error.

produce


Regards,

Elias

2008-07-10 20:30:15

by Randy Dunlap

[permalink] [raw]
Subject: Re: [PATCH] ftrace: Documentation

On Thu, 10 Jul 2008 21:59:01 +0200 Elias Oltmanns wrote:

> Steven Rostedt <[email protected]> wrote:
> > This is the long awaited ftrace.txt. It explains in quite detail how to
> > use ftrace and the various tracers.
> >
> > Signed-off-by: Steven Rostedt <[email protected]>
>
> Exactly what I've just been looking for. Very nice.
>
> As I read through this enlightening peace, I took the liberty to make
> some comments where I thought I'd spotted some mistake. Note that I'm
> not a native English speaker nor familiar with all the terminology.
> Also, I didn't exactly scratch my head when I had a bad feeling about
> something but couldn't come up with a better idea straight away.
> Basically, I just skimmed through the lines because im interested in the
> matter.
>
> Anyway, here it goes:


[I'm dropping good comments, just making more comments.]

> > + set_ftrace_notrace: This has the opposite effect that
> > + set_ftrace_filter has. Any function that is added
> > + here will not be traced. If a function exists
> > + in both set_ftrace_filter and set_ftrace_notrace
>
> (comma)
>
> > + the function will _not_ bet traced.

be

> > + stacktrace - This is one of the options that changes the trace itself.
>
> change

changes :) [subject is "This", singular]

>
> > + When a trace is recorded, so is the stack of functions.
> > + This allows for back traces of trace sites.
> > +
> > +sched_switch
> > +------------
> > +
> > +This tracer simply records schedule switches. Here's an example
> > +on how to implement it.

of

>
> use?
>
> [...]
> > +Here we traced a 50 microsecond latency. But we also see all the
> > +functions that were called during that time. Note that enabling
>
> by enabling?
>
> > +function tracing we endure an added overhead. This overhead may
>
> (comma)
>
> > +extend the latency times. But never the less, this trace has provided

nevertheless,

> > +some very helpful debugging.
>
> debugging information?
>
> > +
> > +
> > +preemptoff
> > +----------
> > +
> > +When preemption is disabled we may be able to receive interrupts but
>
> (comma)
>
> > +the task can not be preempted and a higher priority task must wait

cannot

> > +for preemption to be enabled again before it can preempt a lower
> > +priority task.
> > +
> > +The preemptoff tracer traces the places that disables preemption.
>
> disable
>
> > +Like the irqsoff, it records the maximum latency that preemption
> > +was disabled. The control of preemptoff is much like the irqsoff.


> > +Since this tracer only deals with RT tasks, we will run this slightly
> > +different than we did with the previous tracers. Instead of performing

differently

> > +an 'ls' we will run 'sleep 1' under 'chrt' which changes the
>
> (comma)
>
> > +priority of the task.

> [...]
> > +Where as the setting of the NEED_RESCHED bit happens on the

Whereas

> > +task's stack. But because we are in a hard interrupt, the test

^? That's not a complete sentence.

> > +is with the interrupts stack which has that to be false. We don't

interrupt
>
> ^^^^
> Superfluous that? Don't understand that sentence.
>
> > +see the 'N' until we switch back to the task's stack.
> [...]
> > +When dynamic ftrace is initialized, it calls kstop_machine to make it

^^ what is "it"?

> > +act like a uniprocessor so that it can freely modify code without
> > +worrying about other processors executing that same code. At
> > +initialization, the mcount calls are change to call a "record_ip"
>
> changed
>
> > +function. After this, the first time a kernel function is called,
> > +it has the calling address saved in a hash table.
> [...]
> > +Two files that contain to the enabling and disabling of recorded
> > +functions are:
>
> Can this be expressed somewhat differently?

or drop "to".

>
> > +
> > + set_ftrace_filter
> > +
> > +and
> > +
> > + set_ftrace_notrace
> > +
> > +A list of available functions that you can add to this files is listed
>
> these
>
> > +in:
> > +
> > + available_filter_functions
> [...]
> > +Perhaps this isn't enough. The filters also allow simple wild cards.
> > +Only the following is currently available

Only the following are currently available:

> > +
> > + <match>* - will match functions that begins with <match>
>
> begin
>
> > + *<match> - will match functions that end with <match>
> > + *<match>* - will match functions that have <match> in it


---
~Randy
Linux Plumbers Conference, 17-19 September 2008, Portland, Oregon USA
http://linuxplumbersconf.org/

2008-07-10 23:55:57

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH] ftrace: Documentation


On Thu, 10 Jul 2008, Elias Oltmanns wrote:

>
> Steven Rostedt <[email protected]> wrote:
> > This is the long awaited ftrace.txt. It explains in quite detail how to
> > use ftrace and the various tracers.
> >
> > Signed-off-by: Steven Rostedt <[email protected]>
>
> Exactly what I've just been looking for. Very nice.

Thank you.

>
> As I read through this enlightening peace, I took the liberty to make
> some comments where I thought I'd spotted some mistake. Note that I'm
> not a native English speaker nor familiar with all the terminology.
> Also, I didn't exactly scratch my head when I had a bad feeling about
> something but couldn't come up with a better idea straight away.
> Basically, I just skimmed through the lines because im interested in the
> matter.

I'm a native English speaker, but I have always found that I get the best
comments from those that are not native speakers ;-)

>
> Anyway, here it goes:
>
> [...]
> > + available_tracers : This holds the different types of tracers that
> > + has been compiled into the kernel. The tracers
>
> have

noted.

>
> > + listed here can be configured by echoing in their
> > + name into current_tracer.
> [...]
> > + trace_entries : This sets or displays the number of trace
> > + entries each CPU buffer can hold. The tracer buffers
> > + are the same size for each CPU, so care must be
> > + taken when modifying the trace_entries. The number
> > + of actually entries will be the number given
>
> actual

I probably should review my work before drinking a beer.

>
> > + times the number of possible CPUS. The buffers
> > + are saved as individual pages, and the actual entries
> > + will always be rounded up to entries per page.
>
> Not sure I understand the last sentence, but may be it's just me not
> being familiar with the terminology.

I should rewrite it then. I allocate the buffer by pages (a block of
memory that is used in kernel allocation. Usually 4K). Since the entries
are less than a page, if there is extra padding on the last page after
all the requested entries have been allocated, I use the rest of the page
to add entries that can still fit.


>
> [...]
> > + set_ftrace_filter : When dynamic ftrace is configured in, the
> > + code is dynamically modified to disable calling
> > + of the function profiler (mcount). This lets
> > + tracing be configured in with practically no overhead
> > + in performance. This also has a side effect of
> > + enabling or disabling specific functions to be
> > + traced. Echoing in names of functions into this
> > + file will limit the trace to only those files.
>
> these functions?

Doh! yeah.

>
> > +
> > + set_ftrace_notrace: This has the opposite effect that
> > + set_ftrace_filter has. Any function that is added
> > + here will not be traced. If a function exists
> > + in both set_ftrace_filter and set_ftrace_notrace
>
> (comma)

yep.

>
> > + the function will _not_ bet traced.
> > +
> [...]
> > + ftrace - function tracer that uses mcount to trace all functions.
> > + It is possible to filter out which functions that are
>
> are to be

Both ways sound OK to me. But then again, I would trust a non-native
speaker more. Since they were actually taught the language ;-)

>
> > + traced when dynamic ftrace is configured in.
> > +
> [...]
> > + time: This differs from the trace output where as the trace output
> > + contained a absolute timestamp. This timestamp is relative
> > + to the start of the first entry in the the trace.
>
> double `the'

Yes, I actually did proof read this (looks away).

>
> Actually, the whole description of this item feels a bit awkward.

I'll rewrite it.

>
> > +
> > + delay: This is just to help catch your eye a bit better. And
> > + needs to be fixed to be only relative to the same CPU.
> > + The marks is determined by the difference between this
>
> are

yep

>
> > + current trace and the next trace.
> > + '!' - greater than preempt_mark_thresh (default 100)
> > + '+' - greater than 1 microsecond
> > + ' ' - less than or equal to 1 microsecond.
> > +
> [...]
> > +To disable one of the options, echo in the option appended with "no".
>
> prepended?

yep.

>
> > +
> > + echo noprint-parent > /debug/tracing/iter_ctrl
> > +
> > +To enable an option, leave off the "no".
> > +
> > + echo sym-offest > /debug/tracing/iter_ctrl
>
> sym-offset

heh.

>
> > +
> > +Here are the available options:
> [...]
> > + sym-offset - Display not only the function name, but also the offset
> > + in the function. For example, instead of seeing just
> > + "ktime_get" you will see "ktime_get+0xb/0x20"
>
> (comma) (full stop)

The quotes were not good enough??

(just kidding)

>
> [...]
> > + hex - similar to raw, but the numbers will be in a hexadecimal format.
>
> (capital S)

sure

>
> > +
> > + bin - This will print out the formats in raw binary.
> > +
> > + block - TBD (needs update)
> > +
> > + stacktrace - This is one of the options that changes the trace itself.
>
> change

Hmm, now this would be a good English question. "change" is for plural and
"changes" is for singular. Now is "This is one of options" plural or
singular. I'm thinking singular, because it is "one of", but I'm not an
English major.


>
> > + When a trace is recorded, so is the stack of functions.
> > + This allows for back traces of trace sites.
> > +
> > + sched-tree - TBD (any users??)
> > +
> > +
> > +sched_switch
> > +------------
> > +
> > +This tracer simply records schedule switches. Here's an example
> > +on how to implement it.
>
> use?

Yeah, I guess so.

>
> [...]
> > +When ftrace_enabled is set the tracers will also record the functions
>
> (comma)

yep

>
> > +that are within the trace. The descriptions of the tracers
> > +will also show an example with ftrace enabled.
> > +
> > +
> > +irqsoff
> > +-------
> > +
> > +When interrupts are disabled, the CPU can not react to any other
> > +external event (besides NMIs and SMIs). This prevents the timer
> > +interrupt from triggering or the mouse interrupt from letting the
> > +kernel know of a new mouse event. The result is a latency with the
> > +reaction time.
> > +
> > +The irqsoff tracer tracks the time interrupts are disabled and when
>
> when

Hmm, I don't like either "when"s. How about:

The irqsoff tracer tracks the time interrupts are disabled to the time
they are re-enabled.

??

>
> > +they are re-enabled. When a new maximum latency is hit, it saves off
> > +the trace so that it may be retrieved at a later time. Every time a
> > +new maximum in reached, the old saved trace is discarded and the new
> > +trace is saved.
> [...]
> > +Note the above had ftrace_enabled not set. If we set the ftrace_enabled
>
> (comma)

I have a lot of these. I just commented on someones writing in saying that
you can use a "then" or a "comma" but don't leave both out. Seems I've
been doing that a lot here.

>
> > +we get a much larger output:
> > +
> [...]
> > +Here we traced a 50 microsecond latency. But we also see all the
> > +functions that were called during that time. Note that enabling
>
> by enabling?

The "by" sounds funny to me. But it could be correct.

>
> > +function tracing we endure an added overhead. This overhead may
>
> (comma)

yep

>
> > +extend the latency times. But never the less, this trace has provided
> > +some very helpful debugging.
>
> debugging information?
>
> > +
> > +
> > +preemptoff
> > +----------
> > +
> > +When preemption is disabled we may be able to receive interrupts but
>
> (comma)

OK

>
> > +the task can not be preempted and a higher priority task must wait
> > +for preemption to be enabled again before it can preempt a lower
> > +priority task.
> > +
> > +The preemptoff tracer traces the places that disables preemption.
>
> disable

Ah, you are right. the noun is "places" not "tracer"

>
> > +Like the irqsoff, it records the maximum latency that preemption
> > +was disabled. The control of preemptoff is much like the irqsoff.
> [...]
> > +Notice that the __do_softirq when called doesn't have a preempt_count.
> > +It may seem that we missed a preempt enabled. What really happened
> > +is that the preempt count is held on the threads stack and we
> > +switched to the softirq stack (4K stacks in effect). The code
> > +does not copy the preempt count, but because interrupts are disabled
>
> (comma)

OK

>
> > +we don't need to worry about it. Having a tracer like this is good
> > +to let people know what really happens inside the kernel.
> [...]
> > +To record this time, use the preemptirqsoff tracer.
> > +
> > +Again, using this trace is much like the irqsoff and preemptoff tracers.
> > +
> > + # echo preemptoff > /debug/tracing/current_tracer
>
> preemptirqsoff

Bah! cut and paste error.

>
> > + # echo 0 > /debug/tracing/tracing_max_latency
> > + # echo 1 > /debug/tracing/tracing_enabled
> > + # ls -ltr
> > + [...]
> > + # echo 0 > /debug/tracing/tracing_enabled
> > + # cat /debug/tracing/latency_trace
> > +# tracer: preemptirqsoff
> [...]
> > +This is a very interesting trace. It started with the preemption of
> > +the ls task. We see that the task had the "need_resched" bit set
> > +with the 'N' in the trace. Interrupts are disabled in the spin_lock
> > +and the trace started. We see that a schedule took place to run
> > +sshd. When the interrupts were enabled we took an interrupt.
>
> (comma)

OK

>
> > +On return of the interrupt the softirq ran. We took another interrupt
>
> from the interrupt handler,

Yeah, that sounds better.

>
> > +while running the softirq as we see with the capital 'H'.
> > +
> > +
> > +wakeup
> > +------
> > +
> > +In Real-Time environment it is very important to know the wakeup
> > +time it takes for the highest priority task that wakes up to the
> > +time it executes. This is also known as "schedule latency".
> > +I stress the point that this is about RT tasks. It is also important
> > +to know the scheduling latency of non-RT tasks, but the average
> > +schedule latency is better for non-RT tasks. Tools like
> > +LatencyTop is more appropriate for such measurements.
>
> are

right! again "tools" is the noun not "LatencyTop".

>
> > +
> > +Real-Time environments is interested in the worst case latency.
>
> are

OK

>
> > +That is the longest latency it takes for something to happen, and
> > +not the average. We can have a very fast scheduler that may only
> > +have a large latency once in a while, but that would not work well
> > +with Real-Time tasks. The wakeup tracer was designed to record
> > +the worst case wakeups of RT tasks. Non-RT tasks are not recorded
> > +because the tracer only records one worst case and tracing non-RT
> > +tasks that are unpredictable will overwrite the worst case latency
> > +of RT tasks.
> > +
> > +Since this tracer only deals with RT tasks, we will run this slightly
> > +different than we did with the previous tracers. Instead of performing
> > +an 'ls' we will run 'sleep 1' under 'chrt' which changes the
>
> (comma)

OK

>
> > +priority of the task.
> [...]
> > +Running this on an idle system we see that it only took 4 microseconds
>
> (comma)

OK

>
> > +to perform the task switch. Note, since the trace marker in the
> > +schedule is before the actual "switch" we stop the tracing when
>
> (comma)

OK

>
> > +the recorded task is about to schedule in. This may change if
> > +we add a new marker at the end of the scheduler.
> [...]
> > +Where as the setting of the NEED_RESCHED bit happens on the
> > +task's stack. But because we are in a hard interrupt, the test
> > +is with the interrupts stack which has that to be false. We don't
>
> ^^^^
> Superfluous that? Don't understand that sentence.

No really, I did proof read it...

God that was an awful explanation. OK, how about something like this:

Some task data is stored at the top of the tasks stack (need_resched and
preempt_count). The setting of the NEED_RESCHED sets the bit on the
task's real stack. The test for NEED_RESCHED looks at the current stack.
Since the current stack is the hard interrupt stack (as the kernel is
configured to use a separate stack for interrupts), the trace shows that
the need_resched bit has not yet been set.

>
> > +see the 'N' until we switch back to the task's stack.
> [...]
> > +When dynamic ftrace is initialized, it calls kstop_machine to make it
> > +act like a uniprocessor so that it can freely modify code without
> > +worrying about other processors executing that same code. At
> > +initialization, the mcount calls are change to call a "record_ip"
>
> changed

OK

>
> > +function. After this, the first time a kernel function is called,
> > +it has the calling address saved in a hash table.
> [...]
> > +Two files that contain to the enabling and disabling of recorded
> > +functions are:
>
> Can this be expressed somewhat differently?

Did I write that??
How about:

Two files are used, one for enabling and one for disabling the tracing
of recorded functions.

I'm sure you'll tell me I missed a comma in there somewhere ;-)

>
> > +
> > + set_ftrace_filter
> > +
> > +and
> > +
> > + set_ftrace_notrace
> > +
> > +A list of available functions that you can add to this files is listed
>
> these

Augh, yeah.

>
> > +in:
> > +
> > + available_filter_functions
> [...]
> > +Perhaps this isn't enough. The filters also allow simple wild cards.
> > +Only the following is currently available
> > +
> > + <match>* - will match functions that begins with <match>
>
> begin

OK

>
> > + *<match> - will match functions that end with <match>
> > + *<match>* - will match functions that have <match> in it
> [...]
> > +This is because the '>' and '>>' act just like they do in bash.
> > +To rewrite the filters, use '>'
> > +To append to the filters, use '>>'
> > +
> > +To clear out a filter so that all functions will be recorded again.
>
> :

Heh, I was running out of ':'.

>
> > +
> > + # echo > /debug/tracing/set_ftrace_filter
> > + # cat /debug/tracing/set_ftrace_filter
> > + #
> [...]
> > +ftraced
> > +-------
> > +
> > +As mentioned above, when dynamic ftrace is configured in, a kernel
> > +thread wakes up once a second and checks to see if there are mcount
> > +calls that need to be converted into nops. If there is not, then
>
> are

That sounds off. I see your point, but I'm not sure this applies for
plural. I can be wrong though.

>
> > +it simply goes back to sleep. But if there is, it will call
>
> are

Same here.

>
> > +kstop_machine to convert the calls to nops.
> [...]
> > +Any write to the ftraced_enabled file will cause the kstop_machine
> > +to run if there are recorded calls to mcount. This means that a
>
> Incomplete sentence.

hmm, how so? Although I am missing a comma. I could also write it like
"If there are recorded calls to mcount, any write to the ftraced_enabled
file will cause kstop_machine to run".

>
> > +user can manually perform the updates when they want to by simply
>
> (s)he wants

This is where I hate the English language, and will not be including this
update. Sorry, I hate the whole she/he thing. I simply rebel and use
"they"!

>
> > +echoing a '0' into the ftraced_enabled file.
> [...]
> > +Having too much or not enough data can be troublesome in diagnosing
> > +some issue in the kernel. The file trace_entries is used to modify
> > +the size of the internal trace buffers. The numbers listed
> > +is the number of entries that can be recorded per CPU. To know
>
> are

OK

>
> > +the full size, multiply the number of possible CPUS with the
> > +number of entries.
> > +
> > + # cat /debug/tracing/trace_entries
> > +65620
> > +
> > +Note, to modify this you must have tracing fulling disabled. To do that,
>
> (comma) fully / completely

Interesting. I never even knew there was a word "fulling".

>
> > +echo "none" into the current_tracer.
> [...]
> > +The number of pages that will be allocated is a percentage of available
> > +memory. Allocating too much will produces an error.
>
> produce

Why proof read your own writing when you have people willing to do it for
you ;-)

No really, I greatly appreciate your comments.

-- Steve

2008-07-11 00:02:29

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH] ftrace: Documentation


On Thu, 10 Jul 2008, Randy Dunlap wrote:

> [I'm dropping good comments, just making more comments.]

This is why I CC'd you ;-)

>
> > > + set_ftrace_notrace: This has the opposite effect that
> > > + set_ftrace_filter has. Any function that is added
> > > + here will not be traced. If a function exists
> > > + in both set_ftrace_filter and set_ftrace_notrace
> >
> > (comma)
> >
> > > + the function will _not_ bet traced.
>
> be

Sorry, it has a gambling problem.

>
> > > + stacktrace - This is one of the options that changes the trace itself.
> >
> > change
>
> changes :) [subject is "This", singular]

Woot! I'm actually right?? Nah, this can't be. Maybe my mom did speak good
english. ;-)


>
> >
> > > + When a trace is recorded, so is the stack of functions.
> > > + This allows for back traces of trace sites.
> > > +
> > > +sched_switch
> > > +------------
> > > +
> > > +This tracer simply records schedule switches. Here's an example
> > > +on how to implement it.
>
> of

OK

>
> >
> > use?
> >
> > [...]
> > > +Here we traced a 50 microsecond latency. But we also see all the
> > > +functions that were called during that time. Note that enabling
> >
> > by enabling?
> >
> > > +function tracing we endure an added overhead. This overhead may
> >
> > (comma)
> >
> > > +extend the latency times. But never the less, this trace has provided
>
> nevertheless,

It is one word? I wrote that first and then thought that it was wrong.

>
> > > +some very helpful debugging.
> >
> > debugging information?

Sure.

> >
> > > +
> > > +
> > > +preemptoff
> > > +----------
> > > +
> > > +When preemption is disabled we may be able to receive interrupts but
> >
> > (comma)
> >
> > > +the task can not be preempted and a higher priority task must wait
>
> cannot

OK

>
> > > +for preemption to be enabled again before it can preempt a lower
> > > +priority task.
> > > +
> > > +The preemptoff tracer traces the places that disables preemption.
> >
> > disable
> >
> > > +Like the irqsoff, it records the maximum latency that preemption
> > > +was disabled. The control of preemptoff is much like the irqsoff.
>
>
> > > +Since this tracer only deals with RT tasks, we will run this slightly
> > > +different than we did with the previous tracers. Instead of performing
>
> differently

OK

>
> > > +an 'ls' we will run 'sleep 1' under 'chrt' which changes the
> >
> > (comma)
> >
> > > +priority of the task.
>
> > [...]
> > > +Where as the setting of the NEED_RESCHED bit happens on the
>
> Whereas

OK

>
> > > +task's stack. But because we are in a hard interrupt, the test
>
> ^? That's not a complete sentence.

I rewrote this part. So I'm not ignoring these comments.

>
> > > +is with the interrupts stack which has that to be false. We don't
>
> interrupt
> >
> > ^^^^
> > Superfluous that? Don't understand that sentence.
> >
> > > +see the 'N' until we switch back to the task's stack.
> > [...]
> > > +When dynamic ftrace is initialized, it calls kstop_machine to make it
>
> ^^ what is "it"?

s/it/the machine/

>
> > > +act like a uniprocessor so that it can freely modify code without
> > > +worrying about other processors executing that same code. At
> > > +initialization, the mcount calls are change to call a "record_ip"
> >
> > changed
> >
> > > +function. After this, the first time a kernel function is called,
> > > +it has the calling address saved in a hash table.
> > [...]
> > > +Two files that contain to the enabling and disabling of recorded
> > > +functions are:
> >
> > Can this be expressed somewhat differently?
>
> or drop "to".

I rewrote this part too.

>
> >
> > > +
> > > + set_ftrace_filter
> > > +
> > > +and
> > > +
> > > + set_ftrace_notrace
> > > +
> > > +A list of available functions that you can add to this files is listed
> >
> > these
> >
> > > +in:
> > > +
> > > + available_filter_functions
> > [...]
> > > +Perhaps this isn't enough. The filters also allow simple wild cards.
> > > +Only the following is currently available
>
> Only the following are currently available:

OK

>
> > > +
> > > + <match>* - will match functions that begins with <match>
> >
> > begin
> >
> > > + *<match> - will match functions that end with <match>
> > > + *<match>* - will match functions that have <match> in it


Thanks Randy, once again you helped me out in my documentation.

-- Steve

2008-07-11 00:37:33

by Steven Rostedt

[permalink] [raw]
Subject: [PATCH -v2] ftrace: Documentation


This is the long awaited ftrace.txt. It explains in quite detail how to
use ftrace.

Updated with comments from Elias Oltmann and Randy Dunlap.

Signed-off-by: Steven Rostedt <[email protected]>
---
Documentation/ftrace.txt | 1361 +++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 1361 insertions(+)

Index: linux-tip.git/Documentation/ftrace.txt
===================================================================
--- /dev/null 1970-01-01 00:00:00.000000000 +0000
+++ linux-tip.git/Documentation/ftrace.txt 2008-07-10 20:18:33.000000000 -0400
@@ -0,0 +1,1361 @@
+ ftrace - Function Tracer
+ ========================
+
+Copyright 2008 Red Hat Inc.
+ Author: Steven Rostedt <[email protected]>
+ License: The GNU Free Documentation License, Version 1.2
+Reviewers: Elias Oltmanns and Randy Dunlap
+
+Writen for: 2.6.26-rc8 linux-2.6-tip.git tip/tracing/ftrace branch
+
+Introduction
+------------
+
+Ftrace is an internal tracer designed to help out developers and
+designers of systems to find what is going on inside the kernel.
+It can be used for debugging or analyzing latencies and performance
+issues that take place outside of user-space.
+
+Although ftrace is the function tracer, it also includes an
+infrastructure that allows for other types of tracing. Some of the
+tracers that are currently in ftrace is a tracer to trace
+context switches, the time it takes for a high priority task to
+run after it was woken up, the time interrupts are disabled, and
+more.
+
+
+The File System
+---------------
+
+Ftrace uses the debugfs file system to hold the control files as well
+as the files to display output.
+
+To mount the debugfs system:
+
+ # mkdir /debug
+ # mount -t debugfs nodev /debug
+
+
+That's it! (assuming that you have ftrace configured into your kernel)
+
+After mounting the debugfs, you can see a directory called
+"tracing". This directory contains the control and output files
+of ftrace. Here is a list of some of the key files:
+
+
+ Note: all time values are in microseconds.
+
+ current_tracer : This is used to set or display the current tracer
+ that is configured.
+
+ available_tracers : This holds the different types of tracers that
+ have been compiled into the kernel. The tracers
+ listed here can be configured by echoing in their
+ name into current_tracer.
+
+ tracing_enabled : This sets or displays whether the current_tracer
+ is activated and tracing or not. Echo 0 into this
+ file to disable the tracer or 1 (or non-zero) to
+ enable it.
+
+ trace : This file holds the output of the trace in a human readable
+ format.
+
+ latency_trace : This file shows the same trace but the information
+ is organized more to display possible latencies
+ in the system.
+
+ trace_pipe : The output is the same as the "trace" file but this
+ file is meant to be streamed with live tracing.
+ Reads from this file will block until new data
+ is retrieved. Unlike the "trace" and "latency_trace"
+ files, this file is a consumer. This means reading
+ from this file causes sequential reads to display
+ more current data. Once data is read from this
+ file, it is consumed, and will not be read
+ again with a sequential read. The "trace" and
+ "latency_trace" files are static, and if the
+ tracer isn't adding more data, they will display
+ the same information every time they are read.
+
+ iter_ctrl : This file lets the user control the amount of data
+ that is displayed in one of the above output
+ files.
+
+ trace_max_latency : Some of the tracers record the max latency.
+ For example, the time interrupts are disabled.
+ This time is saved in this file. The max trace
+ will also be stored, and displayed by either
+ "trace" or "latency_trace". A new max trace will
+ only be recorded if the latency is greater than
+ the value in this file. (in microseconds)
+
+ trace_entries : This sets or displays the number of trace
+ entries each CPU buffer can hold. The tracer buffers
+ are the same size for each CPU, so care must be
+ taken when modifying the trace_entries. The trace
+ buffers are allocated in pages (blocks of memory that
+ the kernel uses for allocation, usually 4 KB in size).
+ Since each entry is smaller than a page, if the last
+ allocated page has room for more entries than were
+ requested, the rest of the page is used to allocate
+ entries.
+
+ This can only be updated when the current_tracer
+ is set to "none".
+
+ NOTE: It is planned on changing the allocated buffers
+ from being the number of possible CPUS to
+ the number of online CPUS.
+
+ tracing_cpumask : This is a mask that lets the user only trace
+ on specified CPUS. The format is a hex string
+ representing the CPUS.
+
+ set_ftrace_filter : When dynamic ftrace is configured in, the
+ code is dynamically modified to disable calling
+ of the function profiler (mcount). This lets
+ tracing be configured in with practically no overhead
+ in performance. This also has a side effect of
+ enabling or disabling specific functions to be
+ traced. Echoing in names of functions into this
+ file will limit the trace to only these functions.
+
+ set_ftrace_notrace: This has the opposite effect that
+ set_ftrace_filter has. Any function that is added
+ here will not be traced. If a function exists
+ in both set_ftrace_filter and set_ftrace_notrace,
+ the function will _not_ be traced.
+
+ available_filter_functions : When a function is encountered the first
+ time by the dynamic tracer, it is recorded and
+ later the call is converted into a nop. This file
+ lists the functions that have been recorded
+ by the dynamic tracer and these functions can
+ be used to set the ftrace filter by the above
+ "set_ftrace_filter" file.
+
+
+The Tracers
+-----------
+
+Here are the list of current tracers that can be configured.
+
+ ftrace - function tracer that uses mcount to trace all functions.
+ It is possible to filter out which functions that are
+ to be traced when dynamic ftrace is configured in.
+
+ sched_switch - traces the context switches between tasks.
+
+ irqsoff - traces the areas that disable interrupts and saves off
+ the trace with the longest max latency.
+ See tracing_max_latency. When a new max is recorded,
+ it replaces the old trace. It is best to view this
+ trace with the latency_trace file.
+
+ preemptoff - Similar to irqsoff but traces and records the time
+ preemption is disabled.
+
+ preemptirqsoff - Similar to irqsoff and preemptoff, but traces and
+ records the largest time irqs and/or preemption is
+ disabled.
+
+ wakeup - Traces and records the max latency that it takes for
+ the highest priority task to get scheduled after
+ it has been woken up.
+
+ none - This is not a tracer. To remove all tracers from tracing
+ simply echo "none" into current_tracer.
+
+
+Examples of using the tracer
+----------------------------
+
+Here are typical examples of using the tracers with only controlling
+them with the debugfs interface (without using any user-land utilities).
+
+Output format:
+--------------
+
+Here's an example of the output format of the file "trace"
+
+ --------
+# tracer: ftrace
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+ bash-4251 [01] 10152.583854: path_put <-path_walk
+ bash-4251 [01] 10152.583855: dput <-path_put
+ bash-4251 [01] 10152.583855: _atomic_dec_and_lock <-dput
+ --------
+
+A header is printed with the trace that is represented. In this case
+the tracer is "ftrace". Then a header showing the format. Task name
+"bash", the task PID "4251", the CPU that it was running on
+"01", the timestamp in <secs>.<usecs> format, the function name that was
+traced "path_put" and the parent function that called this function
+"path_walk".
+
+The sched_switch tracer also includes tracing of task wake ups and
+context switches.
+
+ ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 2916:115:S
+ ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 10:115:S
+ ksoftirqd/1-7 [01] 1453.070013: 7:115:R ==> 10:115:R
+ events/1-10 [01] 1453.070013: 10:115:S ==> 2916:115:R
+ kondemand/1-2916 [01] 1453.070013: 2916:115:S ==> 7:115:R
+ ksoftirqd/1-7 [01] 1453.070013: 7:115:S ==> 0:140:R
+
+Wake ups are represented by a "+" and the context switches show
+"==>". The format is:
+
+ Context switches:
+
+ Previous task Next Task
+
+ <pid>:<prio>:<state> ==> <pid>:<prio>:<state>
+
+ Wake ups:
+
+ Current task Task waking up
+
+ <pid>:<prio>:<state> + <pid>:<prio>:<state>
+
+The prio is the internal kernel priority, which is inverse to the
+priority that is usually displayed by user-space tools. Zero represents
+the highest priority (99). Prio 100 starts the "nice" priorities with
+100 being equal to nice -20 and 139 being nice 19. The prio "140" is
+reserved for the idle task which is the lowest priority thread (pid 0).
+
+
+Latency trace format
+--------------------
+
+For traces that display latency times, the latency_trace file gives
+a bit more information to see why a latency happened. Here's a typical
+trace.
+
+# tracer: irqsoff
+#
+irqsoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 97 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: apic_timer_interrupt
+ => ended at: do_softirq
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ <idle>-0 0d..1 0us+: trace_hardirqs_off_thunk (apic_timer_interrupt)
+ <idle>-0 0d.s. 97us : __do_softirq (do_softirq)
+ <idle>-0 0d.s1 98us : trace_hardirqs_on (do_softirq)
+
+
+vim:ft=help
+
+
+This shows that the current tracer is "irqsoff" tracing the time
+interrupts are disabled. It gives the trace version and the kernel
+this was executed on (2.6.26-rc8). Then it displays the max latency
+in microsecs (97 us). The number of trace entries displayed
+by the total number recorded (both are three: #3/3). The type of
+preemption that was used (PREEMPT). VP, KP, SP, and HP are always zero
+and reserved for later use. #P is the number of online CPUS (#P:2).
+
+The task is the process that was running when the latency happened.
+(swapper pid: 0).
+
+The start and stop that caused the latencies:
+
+ apic_timer_interrupt is where the interrupts were disabled.
+ do_softirq is where they were enabled again.
+
+The next lines after the header are the trace itself. The header
+explains which is which.
+
+ cmd: The name of the process in the trace.
+
+ pid: The PID of that process.
+
+ CPU#: The CPU that the process was running on.
+
+ irqs-off: 'd' interrupts are disabled. '.' otherwise.
+
+ need-resched: 'N' task need_resched is set, '.' otherwise.
+
+ hardirq/softirq:
+ 'H' - hard irq happened inside a softirq.
+ 'h' - hard irq is running
+ 's' - soft irq is running
+ '.' - normal context.
+
+ preempt-depth: The level of preempt_disabled
+
+The above is mostly meaningful for kernel developers.
+
+ time: This differs from the trace file output. The trace file output
+ included an absolute timestamp. The timestamp used by the
+ latency_trace file is relative to the start of the trace.
+
+ delay: This is just to help catch your eye a bit better. And
+ needs to be fixed to be only relative to the same CPU.
+ The marks are determined by the difference between this
+ current trace and the next trace.
+ '!' - greater than preempt_mark_thresh (default 100)
+ '+' - greater than 1 microsecond
+ ' ' - less than or equal to 1 microsecond.
+
+ The rest is the same as the 'trace' file.
+
+
+iter_ctrl
+---------
+
+The iter_ctrl file is used to control what gets printed in the trace
+output. To see what is available, simply cat the file:
+
+ cat /debug/tracing/iter_ctrl
+ print-parent nosym-offset nosym-addr noverbose noraw nohex nobin \
+ noblock nostacktrace nosched-tree
+
+To disable one of the options, echo in the option prepended with "no".
+
+ echo noprint-parent > /debug/tracing/iter_ctrl
+
+To enable an option, leave off the "no".
+
+ echo sym-offset > /debug/tracing/iter_ctrl
+
+Here are the available options:
+
+ print-parent - On function traces, display the calling function
+ as well as the function being traced.
+
+ print-parent:
+ bash-4000 [01] 1477.606694: simple_strtoul <-strict_strtoul
+
+ noprint-parent:
+ bash-4000 [01] 1477.606694: simple_strtoul
+
+
+ sym-offset - Display not only the function name, but also the offset
+ in the function. For example, instead of seeing just
+ "ktime_get", you will see "ktime_get+0xb/0x20".
+
+ sym-offset:
+ bash-4000 [01] 1477.606694: simple_strtoul+0x6/0xa0
+
+ sym-addr - this will also display the function address as well as
+ the function name.
+
+ sym-addr:
+ bash-4000 [01] 1477.606694: simple_strtoul <c0339346>
+
+ verbose - This deals with the latency_trace file.
+
+ bash 4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
+ (+0.000ms): simple_strtoul (strict_strtoul)
+
+ raw - This will display raw numbers. This option is best for use with
+ user applications that can translate the raw numbers better than
+ having it done in the kernel.
+
+ hex - Similar to raw, but the numbers will be in a hexadecimal format.
+
+ bin - This will print out the formats in raw binary.
+
+ block - TBD (needs update)
+
+ stacktrace - This is one of the options that changes the trace itself.
+ When a trace is recorded, so is the stack of functions.
+ This allows for back traces of trace sites.
+
+ sched-tree - TBD (any users??)
+
+
+sched_switch
+------------
+
+This tracer simply records schedule switches. Here's an example
+of how to use it.
+
+ # echo sched_switch > /debug/tracing/current_tracer
+ # echo 1 > /debug/tracing/tracing_enabled
+ # sleep 1
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/trace
+
+# tracer: sched_switch
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+ bash-3997 [01] 240.132281: 3997:120:R + 4055:120:R
+ bash-3997 [01] 240.132284: 3997:120:R ==> 4055:120:R
+ sleep-4055 [01] 240.132371: 4055:120:S ==> 3997:120:R
+ bash-3997 [01] 240.132454: 3997:120:R + 4055:120:S
+ bash-3997 [01] 240.132457: 3997:120:R ==> 4055:120:R
+ sleep-4055 [01] 240.132460: 4055:120:D ==> 3997:120:R
+ bash-3997 [01] 240.132463: 3997:120:R + 4055:120:D
+ bash-3997 [01] 240.132465: 3997:120:R ==> 4055:120:R
+ <idle>-0 [00] 240.132589: 0:140:R + 4:115:S
+ <idle>-0 [00] 240.132591: 0:140:R ==> 4:115:R
+ ksoftirqd/0-4 [00] 240.132595: 4:115:S ==> 0:140:R
+ <idle>-0 [00] 240.132598: 0:140:R + 4:115:S
+ <idle>-0 [00] 240.132599: 0:140:R ==> 4:115:R
+ ksoftirqd/0-4 [00] 240.132603: 4:115:S ==> 0:140:R
+ sleep-4055 [01] 240.133058: 4055:120:S ==> 3997:120:R
+ [...]
+
+
+As we have discussed previously about this format, the header shows
+the name of the trace and points to the options. The "FUNCTION"
+is a misnomer since here it represents the wake ups and context
+switches.
+
+The sched_switch only lists the wake ups (represented with '+')
+and context switches ('==>') with the previous task or current
+first followed by the next task or task waking up. The format for both
+of these is PID:KERNEL-PRIO:TASK-STATE. Remember that the KERNEL-PRIO
+is the inverse of the actual priority with zero (0) being the highest
+priority and the nice values starting at 100 (nice -20). Below is
+a quick chart to map the kernel priority to user land priorities.
+
+ Kernel priority: 0 to 99 ==> user RT priority 99 to 0
+ Kernel priority: 100 to 139 ==> user nice -20 to 19
+ Kernel priority: 140 ==> idle task priority
+
+The task states are:
+
+ R - running : wants to run, may not actually be running
+ S - sleep : process is waiting to be woken up (handles signals)
+ D - deep sleep : process must be woken up (ignores signals)
+ T - stopped : process suspended
+ t - traced : process is being traced (with something like gdb)
+ Z - zombie : process waiting to be cleaned up
+ X - unknown
+
+
+ftrace_enabled
+--------------
+
+The following tracers give different output depending on whether
+or not the sysctl ftrace_enabled is set. To set ftrace_enabled,
+one can either use the sysctl function or set it via the proc
+file system interface.
+
+ sysctl kernel.ftrace_enabled=1
+
+ or
+
+ echo 1 > /proc/sys/kernel/ftrace_enabled
+
+To disable ftrace_enabled simply replace the '1' with '0' in
+the above commands.
+
+When ftrace_enabled is set the tracers will also record the functions
+that are within the trace. The descriptions of the tracers
+will also show an example with ftrace enabled.
+
+
+irqsoff
+-------
+
+When interrupts are disabled, the CPU can not react to any other
+external event (besides NMIs and SMIs). This prevents the timer
+interrupt from triggering or the mouse interrupt from letting the
+kernel know of a new mouse event. The result is a latency with the
+reaction time.
+
+The irqsoff tracer tracks the time interrupts are disabled to the time
+they are re-enabled. When a new maximum latency is hit, it saves off
+the trace so that it may be retrieved at a later time. Every time a
+new maximum in reached, the old saved trace is discarded and the new
+trace is saved.
+
+To reset the maximum, echo 0 into tracing_max_latency. Here's an
+example:
+
+ # echo irqsoff > /debug/tracing/current_tracer
+ # echo 0 > /debug/tracing/tracing_max_latency
+ # echo 1 > /debug/tracing/tracing_enabled
+ # ls -ltr
+ [...]
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/latency_trace
+# tracer: irqsoff
+#
+irqsoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 6 us, #3/3, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: bash-4269 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: copy_page_range
+ => ended at: copy_page_range
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ bash-4269 1...1 0us+: _spin_lock (copy_page_range)
+ bash-4269 1...1 7us : _spin_unlock (copy_page_range)
+ bash-4269 1...2 7us : trace_preempt_on (copy_page_range)
+
+
+vim:ft=help
+
+Here we see that that we had a latency of 6 microsecs (which is
+very good). The spin_lock in copy_page_range disabled interrupts.
+The difference between the 6 and the displayed timestamp 7us is
+because the clock must have incremented between the time of recording
+the max latency and recording the function that had that latency.
+
+Note the above had ftrace_enabled not set. If we set the ftrace_enabled,
+we get a much larger output:
+
+# tracer: irqsoff
+#
+irqsoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 50 us, #101/101, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: ls-4339 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: __alloc_pages_internal
+ => ended at: __alloc_pages_internal
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ ls-4339 0...1 0us+: get_page_from_freelist (__alloc_pages_internal)
+ ls-4339 0d..1 3us : rmqueue_bulk (get_page_from_freelist)
+ ls-4339 0d..1 3us : _spin_lock (rmqueue_bulk)
+ ls-4339 0d..1 4us : add_preempt_count (_spin_lock)
+ ls-4339 0d..2 4us : __rmqueue (rmqueue_bulk)
+ ls-4339 0d..2 5us : __rmqueue_smallest (__rmqueue)
+ ls-4339 0d..2 5us : __mod_zone_page_state (__rmqueue_smallest)
+ ls-4339 0d..2 6us : __rmqueue (rmqueue_bulk)
+ ls-4339 0d..2 6us : __rmqueue_smallest (__rmqueue)
+ ls-4339 0d..2 7us : __mod_zone_page_state (__rmqueue_smallest)
+ ls-4339 0d..2 7us : __rmqueue (rmqueue_bulk)
+ ls-4339 0d..2 8us : __rmqueue_smallest (__rmqueue)
+[...]
+ ls-4339 0d..2 46us : __rmqueue_smallest (__rmqueue)
+ ls-4339 0d..2 47us : __mod_zone_page_state (__rmqueue_smallest)
+ ls-4339 0d..2 47us : __rmqueue (rmqueue_bulk)
+ ls-4339 0d..2 48us : __rmqueue_smallest (__rmqueue)
+ ls-4339 0d..2 48us : __mod_zone_page_state (__rmqueue_smallest)
+ ls-4339 0d..2 49us : _spin_unlock (rmqueue_bulk)
+ ls-4339 0d..2 49us : sub_preempt_count (_spin_unlock)
+ ls-4339 0d..1 50us : get_page_from_freelist (__alloc_pages_internal)
+ ls-4339 0d..2 51us : trace_hardirqs_on (__alloc_pages_internal)
+
+
+vim:ft=help
+
+
+Here we traced a 50 microsecond latency. But we also see all the
+functions that were called during that time. Note that by enabling
+function tracing, we endure an added overhead. This overhead may
+extend the latency times. But nevertheless, this trace has provided
+some very helpful debugging information.
+
+
+preemptoff
+----------
+
+When preemption is disabled, we may be able to receive interrupts but
+the task cannot be preempted and a higher priority task must wait
+for preemption to be enabled again before it can preempt a lower
+priority task.
+
+The preemptoff tracer traces the places that disable preemption.
+Like the irqsoff, it records the maximum latency that preemption
+was disabled. The control of preemptoff is much like the irqsoff.
+
+ # echo preemptoff > /debug/tracing/current_tracer
+ # echo 0 > /debug/tracing/tracing_max_latency
+ # echo 1 > /debug/tracing/tracing_enabled
+ # ls -ltr
+ [...]
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/latency_trace
+# tracer: preemptoff
+#
+preemptoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 29 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: do_IRQ
+ => ended at: __do_softirq
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ sshd-4261 0d.h. 0us+: irq_enter (do_IRQ)
+ sshd-4261 0d.s. 29us : _local_bh_enable (__do_softirq)
+ sshd-4261 0d.s1 30us : trace_preempt_on (__do_softirq)
+
+
+vim:ft=help
+
+This has some more changes. Preemption was disabled when an interrupt
+came in (notice the 'h'), and was enabled while doing a softirq.
+(notice the 's'). But we also see that interrupts have been disabled
+when entering the preempt off section and leaving it (the 'd').
+We do not know if interrupts were enabled in the mean time.
+
+# tracer: preemptoff
+#
+preemptoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 63 us, #87/87, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: remove_wait_queue
+ => ended at: __do_softirq
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ sshd-4261 0d..1 0us : _spin_lock_irqsave (remove_wait_queue)
+ sshd-4261 0d..1 1us : _spin_unlock_irqrestore (remove_wait_queue)
+ sshd-4261 0d..1 2us : do_IRQ (common_interrupt)
+ sshd-4261 0d..1 2us : irq_enter (do_IRQ)
+ sshd-4261 0d..1 2us : idle_cpu (irq_enter)
+ sshd-4261 0d..1 3us : add_preempt_count (irq_enter)
+ sshd-4261 0d.h1 3us : idle_cpu (irq_enter)
+ sshd-4261 0d.h. 4us : handle_fasteoi_irq (do_IRQ)
+[...]
+ sshd-4261 0d.h. 12us : add_preempt_count (_spin_lock)
+ sshd-4261 0d.h1 12us : ack_ioapic_quirk_irq (handle_fasteoi_irq)
+ sshd-4261 0d.h1 13us : move_native_irq (ack_ioapic_quirk_irq)
+ sshd-4261 0d.h1 13us : _spin_unlock (handle_fasteoi_irq)
+ sshd-4261 0d.h1 14us : sub_preempt_count (_spin_unlock)
+ sshd-4261 0d.h1 14us : irq_exit (do_IRQ)
+ sshd-4261 0d.h1 15us : sub_preempt_count (irq_exit)
+ sshd-4261 0d..2 15us : do_softirq (irq_exit)
+ sshd-4261 0d... 15us : __do_softirq (do_softirq)
+ sshd-4261 0d... 16us : __local_bh_disable (__do_softirq)
+ sshd-4261 0d... 16us+: add_preempt_count (__local_bh_disable)
+ sshd-4261 0d.s4 20us : add_preempt_count (__local_bh_disable)
+ sshd-4261 0d.s4 21us : sub_preempt_count (local_bh_enable)
+ sshd-4261 0d.s5 21us : sub_preempt_count (local_bh_enable)
+[...]
+ sshd-4261 0d.s6 41us : add_preempt_count (__local_bh_disable)
+ sshd-4261 0d.s6 42us : sub_preempt_count (local_bh_enable)
+ sshd-4261 0d.s7 42us : sub_preempt_count (local_bh_enable)
+ sshd-4261 0d.s5 43us : add_preempt_count (__local_bh_disable)
+ sshd-4261 0d.s5 43us : sub_preempt_count (local_bh_enable_ip)
+ sshd-4261 0d.s6 44us : sub_preempt_count (local_bh_enable_ip)
+ sshd-4261 0d.s5 44us : add_preempt_count (__local_bh_disable)
+ sshd-4261 0d.s5 45us : sub_preempt_count (local_bh_enable)
+[...]
+ sshd-4261 0d.s. 63us : _local_bh_enable (__do_softirq)
+ sshd-4261 0d.s1 64us : trace_preempt_on (__do_softirq)
+
+
+The above is an example of the preemptoff trace with ftrace_enabled
+set. Here we see that interrupts were disabled the entire time.
+The irq_enter code lets us know that we entered an interrupt 'h'.
+Before that, the functions being traced still show that it is not
+in an interrupt, but we can see by the functions themselves that
+this is not the case.
+
+Notice that the __do_softirq when called doesn't have a preempt_count.
+It may seem that we missed a preempt enabled. What really happened
+is that the preempt count is held on the threads stack and we
+switched to the softirq stack (4K stacks in effect). The code
+does not copy the preempt count, but because interrupts are disabled,
+we don't need to worry about it. Having a tracer like this is good
+to let people know what really happens inside the kernel.
+
+
+preemptirqsoff
+--------------
+
+Knowing the locations that have interrupts disabled or preemption
+disabled for the longest times is helpful. But sometimes we would
+like to know when either preemption and/or interrupts are disabled.
+
+The following code:
+
+ local_irq_disable();
+ call_function_with_irqs_off();
+ preempt_disable();
+ call_function_with_irqs_and_preemption_off();
+ local_irq_enable();
+ call_function_with_preemption_off();
+ preempt_enable();
+
+The irqsoff tracer will record the total length of
+call_function_with_irqs_off() and
+call_function_with_irqs_and_preemption_off().
+
+The preemptoff tracer will record the total length of
+call_function_with_irqs_and_preemption_off() and
+call_function_with_preemption_off().
+
+But neither will trace the time that interrupts and/or preemption
+is disabled. This total time is the time that we can not schedule.
+To record this time, use the preemptirqsoff tracer.
+
+Again, using this trace is much like the irqsoff and preemptoff tracers.
+
+ # echo preemptirqsoff > /debug/tracing/current_tracer
+ # echo 0 > /debug/tracing/tracing_max_latency
+ # echo 1 > /debug/tracing/tracing_enabled
+ # ls -ltr
+ [...]
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/latency_trace
+# tracer: preemptirqsoff
+#
+preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 293 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: ls-4860 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: apic_timer_interrupt
+ => ended at: __do_softirq
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ ls-4860 0d... 0us!: trace_hardirqs_off_thunk (apic_timer_interrupt)
+ ls-4860 0d.s. 294us : _local_bh_enable (__do_softirq)
+ ls-4860 0d.s1 294us : trace_preempt_on (__do_softirq)
+
+
+vim:ft=help
+
+
+The trace_hardirqs_off_thunk is called from assembly on x86 when
+interrupts are disabled in the assembly code. Without the function
+tracing, we don't know if interrupts were enabled within the preemption
+points. We do see that it started with preemption enabled.
+
+Here is a trace with ftrace_enabled set:
+
+
+# tracer: preemptirqsoff
+#
+preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 105 us, #183/183, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
+ -----------------
+ => started at: write_chan
+ => ended at: __do_softirq
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ ls-4473 0.N.. 0us : preempt_schedule (write_chan)
+ ls-4473 0dN.1 1us : _spin_lock (schedule)
+ ls-4473 0dN.1 2us : add_preempt_count (_spin_lock)
+ ls-4473 0d..2 2us : put_prev_task_fair (schedule)
+[...]
+ ls-4473 0d..2 13us : set_normalized_timespec (ktime_get_ts)
+ ls-4473 0d..2 13us : __switch_to (schedule)
+ sshd-4261 0d..2 14us : finish_task_switch (schedule)
+ sshd-4261 0d..2 14us : _spin_unlock_irq (finish_task_switch)
+ sshd-4261 0d..1 15us : add_preempt_count (_spin_lock_irqsave)
+ sshd-4261 0d..2 16us : _spin_unlock_irqrestore (hrtick_set)
+ sshd-4261 0d..2 16us : do_IRQ (common_interrupt)
+ sshd-4261 0d..2 17us : irq_enter (do_IRQ)
+ sshd-4261 0d..2 17us : idle_cpu (irq_enter)
+ sshd-4261 0d..2 18us : add_preempt_count (irq_enter)
+ sshd-4261 0d.h2 18us : idle_cpu (irq_enter)
+ sshd-4261 0d.h. 18us : handle_fasteoi_irq (do_IRQ)
+ sshd-4261 0d.h. 19us : _spin_lock (handle_fasteoi_irq)
+ sshd-4261 0d.h. 19us : add_preempt_count (_spin_lock)
+ sshd-4261 0d.h1 20us : _spin_unlock (handle_fasteoi_irq)
+ sshd-4261 0d.h1 20us : sub_preempt_count (_spin_unlock)
+[...]
+ sshd-4261 0d.h1 28us : _spin_unlock (handle_fasteoi_irq)
+ sshd-4261 0d.h1 29us : sub_preempt_count (_spin_unlock)
+ sshd-4261 0d.h2 29us : irq_exit (do_IRQ)
+ sshd-4261 0d.h2 29us : sub_preempt_count (irq_exit)
+ sshd-4261 0d..3 30us : do_softirq (irq_exit)
+ sshd-4261 0d... 30us : __do_softirq (do_softirq)
+ sshd-4261 0d... 31us : __local_bh_disable (__do_softirq)
+ sshd-4261 0d... 31us+: add_preempt_count (__local_bh_disable)
+ sshd-4261 0d.s4 34us : add_preempt_count (__local_bh_disable)
+[...]
+ sshd-4261 0d.s3 43us : sub_preempt_count (local_bh_enable_ip)
+ sshd-4261 0d.s4 44us : sub_preempt_count (local_bh_enable_ip)
+ sshd-4261 0d.s3 44us : smp_apic_timer_interrupt (apic_timer_interrupt)
+ sshd-4261 0d.s3 45us : irq_enter (smp_apic_timer_interrupt)
+ sshd-4261 0d.s3 45us : idle_cpu (irq_enter)
+ sshd-4261 0d.s3 46us : add_preempt_count (irq_enter)
+ sshd-4261 0d.H3 46us : idle_cpu (irq_enter)
+ sshd-4261 0d.H3 47us : hrtimer_interrupt (smp_apic_timer_interrupt)
+ sshd-4261 0d.H3 47us : ktime_get (hrtimer_interrupt)
+[...]
+ sshd-4261 0d.H3 81us : tick_program_event (hrtimer_interrupt)
+ sshd-4261 0d.H3 82us : ktime_get (tick_program_event)
+ sshd-4261 0d.H3 82us : ktime_get_ts (ktime_get)
+ sshd-4261 0d.H3 83us : getnstimeofday (ktime_get_ts)
+ sshd-4261 0d.H3 83us : set_normalized_timespec (ktime_get_ts)
+ sshd-4261 0d.H3 84us : clockevents_program_event (tick_program_event)
+ sshd-4261 0d.H3 84us : lapic_next_event (clockevents_program_event)
+ sshd-4261 0d.H3 85us : irq_exit (smp_apic_timer_interrupt)
+ sshd-4261 0d.H3 85us : sub_preempt_count (irq_exit)
+ sshd-4261 0d.s4 86us : sub_preempt_count (irq_exit)
+ sshd-4261 0d.s3 86us : add_preempt_count (__local_bh_disable)
+[...]
+ sshd-4261 0d.s1 98us : sub_preempt_count (net_rx_action)
+ sshd-4261 0d.s. 99us : add_preempt_count (_spin_lock_irq)
+ sshd-4261 0d.s1 99us+: _spin_unlock_irq (run_timer_softirq)
+ sshd-4261 0d.s. 104us : _local_bh_enable (__do_softirq)
+ sshd-4261 0d.s. 104us : sub_preempt_count (_local_bh_enable)
+ sshd-4261 0d.s. 105us : _local_bh_enable (__do_softirq)
+ sshd-4261 0d.s1 105us : trace_preempt_on (__do_softirq)
+
+
+This is a very interesting trace. It started with the preemption of
+the ls task. We see that the task had the "need_resched" bit set
+with the 'N' in the trace. Interrupts are disabled in the spin_lock
+and the trace started. We see that a schedule took place to run
+sshd. When the interrupts were enabled, we took an interrupt.
+On return from the interrupt handler, the softirq ran. We took another
+interrupt while running the softirq as we see with the capital 'H'.
+
+
+wakeup
+------
+
+In Real-Time environment it is very important to know the wakeup
+time it takes for the highest priority task that wakes up to the
+time it executes. This is also known as "schedule latency".
+I stress the point that this is about RT tasks. It is also important
+to know the scheduling latency of non-RT tasks, but the average
+schedule latency is better for non-RT tasks. Tools like
+LatencyTop are more appropriate for such measurements.
+
+Real-Time environments are interested in the worst case latency.
+That is the longest latency it takes for something to happen, and
+not the average. We can have a very fast scheduler that may only
+have a large latency once in a while, but that would not work well
+with Real-Time tasks. The wakeup tracer was designed to record
+the worst case wakeups of RT tasks. Non-RT tasks are not recorded
+because the tracer only records one worst case and tracing non-RT
+tasks that are unpredictable will overwrite the worst case latency
+of RT tasks.
+
+Since this tracer only deals with RT tasks, we will run this slightly
+differently than we did with the previous tracers. Instead of performing
+an 'ls', we will run 'sleep 1' under 'chrt' which changes the
+priority of the task.
+
+ # echo wakeup > /debug/tracing/current_tracer
+ # echo 0 > /debug/tracing/tracing_max_latency
+ # echo 1 > /debug/tracing/tracing_enabled
+ # chrt -f 5 sleep 1
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/latency_trace
+# tracer: wakeup
+#
+wakeup latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 4 us, #2/2, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: sleep-4901 (uid:0 nice:0 policy:1 rt_prio:5)
+ -----------------
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ <idle>-0 1d.h4 0us+: try_to_wake_up (wake_up_process)
+ <idle>-0 1d..4 4us : schedule (cpu_idle)
+
+
+vim:ft=help
+
+
+Running this on an idle system, we see that it only took 4 microseconds
+to perform the task switch. Note, since the trace marker in the
+schedule is before the actual "switch", we stop the tracing when
+the recorded task is about to schedule in. This may change if
+we add a new marker at the end of the scheduler.
+
+Notice that the recorded task is 'sleep' with the PID of 4901 and it
+has an rt_prio of 5. This priority is user-space priority and not
+the internal kernel priority. The policy is 1 for SCHED_FIFO and 2
+for SCHED_RR.
+
+Doing the same with chrt -r 5 and ftrace_enabled set.
+
+# tracer: wakeup
+#
+wakeup latency trace v1.1.5 on 2.6.26-rc8
+--------------------------------------------------------------------
+ latency: 50 us, #60/60, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
+ -----------------
+ | task: sleep-4068 (uid:0 nice:0 policy:2 rt_prio:5)
+ -----------------
+
+# _------=> CPU#
+# / _-----=> irqs-off
+# | / _----=> need-resched
+# || / _---=> hardirq/softirq
+# ||| / _--=> preempt-depth
+# |||| /
+# ||||| delay
+# cmd pid ||||| time | caller
+# \ / ||||| \ | /
+ksoftirq-7 1d.H3 0us : try_to_wake_up (wake_up_process)
+ksoftirq-7 1d.H4 1us : sub_preempt_count (marker_probe_cb)
+ksoftirq-7 1d.H3 2us : check_preempt_wakeup (try_to_wake_up)
+ksoftirq-7 1d.H3 3us : update_curr (check_preempt_wakeup)
+ksoftirq-7 1d.H3 4us : calc_delta_mine (update_curr)
+ksoftirq-7 1d.H3 5us : __resched_task (check_preempt_wakeup)
+ksoftirq-7 1d.H3 6us : task_wake_up_rt (try_to_wake_up)
+ksoftirq-7 1d.H3 7us : _spin_unlock_irqrestore (try_to_wake_up)
+[...]
+ksoftirq-7 1d.H2 17us : irq_exit (smp_apic_timer_interrupt)
+ksoftirq-7 1d.H2 18us : sub_preempt_count (irq_exit)
+ksoftirq-7 1d.s3 19us : sub_preempt_count (irq_exit)
+ksoftirq-7 1..s2 20us : rcu_process_callbacks (__do_softirq)
+[...]
+ksoftirq-7 1..s2 26us : __rcu_process_callbacks (rcu_process_callbacks)
+ksoftirq-7 1d.s2 27us : _local_bh_enable (__do_softirq)
+ksoftirq-7 1d.s2 28us : sub_preempt_count (_local_bh_enable)
+ksoftirq-7 1.N.3 29us : sub_preempt_count (ksoftirqd)
+ksoftirq-7 1.N.2 30us : _cond_resched (ksoftirqd)
+ksoftirq-7 1.N.2 31us : __cond_resched (_cond_resched)
+ksoftirq-7 1.N.2 32us : add_preempt_count (__cond_resched)
+ksoftirq-7 1.N.2 33us : schedule (__cond_resched)
+ksoftirq-7 1.N.2 33us : add_preempt_count (schedule)
+ksoftirq-7 1.N.3 34us : hrtick_clear (schedule)
+ksoftirq-7 1dN.3 35us : _spin_lock (schedule)
+ksoftirq-7 1dN.3 36us : add_preempt_count (_spin_lock)
+ksoftirq-7 1d..4 37us : put_prev_task_fair (schedule)
+ksoftirq-7 1d..4 38us : update_curr (put_prev_task_fair)
+[...]
+ksoftirq-7 1d..5 47us : _spin_trylock (tracing_record_cmdline)
+ksoftirq-7 1d..5 48us : add_preempt_count (_spin_trylock)
+ksoftirq-7 1d..6 49us : _spin_unlock (tracing_record_cmdline)
+ksoftirq-7 1d..6 49us : sub_preempt_count (_spin_unlock)
+ksoftirq-7 1d..4 50us : schedule (__cond_resched)
+
+The interrupt went off while running ksoftirqd. This task runs at
+SCHED_OTHER. Why didn't we see the 'N' set early? This may be
+a harmless bug with x86_32 and 4K stacks. On x86_32 with 4K stacks
+configured, the interrupt and softirq runs with their own stack.
+Some information is held on the top of the task's stack (need_resched
+and preempt_count are both stored there). The setting of the NEED_RESCHED
+bit is done directly to the task's stack, but the reading of the
+NEED_RESCHED is done by looking at the current stack, which in this case
+is the stack for the hard interrupt. This hides the fact that NEED_RESCHED
+has been set. We don't see the 'N' until we switch back to the task's
+assigned stack.
+
+ftrace
+------
+
+ftrace is not only the name of the tracing infrastructure, but it
+is also a name of one of the tracers. The tracer is the function
+tracer. Enabling the function tracer can be done from the
+debug file system. Make sure the ftrace_enabled is set otherwise
+this tracer is a nop.
+
+ # sysctl kernel.ftrace_enabled=1
+ # echo ftrace > /debug/tracing/current_tracer
+ # echo 1 > /debug/tracing/tracing_enabled
+ # usleep 1
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/trace
+# tracer: ftrace
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+ bash-4003 [00] 123.638713: finish_task_switch <-schedule
+ bash-4003 [00] 123.638714: _spin_unlock_irq <-finish_task_switch
+ bash-4003 [00] 123.638714: sub_preempt_count <-_spin_unlock_irq
+ bash-4003 [00] 123.638715: hrtick_set <-schedule
+ bash-4003 [00] 123.638715: _spin_lock_irqsave <-hrtick_set
+ bash-4003 [00] 123.638716: add_preempt_count <-_spin_lock_irqsave
+ bash-4003 [00] 123.638716: _spin_unlock_irqrestore <-hrtick_set
+ bash-4003 [00] 123.638717: sub_preempt_count <-_spin_unlock_irqrestore
+ bash-4003 [00] 123.638717: hrtick_clear <-hrtick_set
+ bash-4003 [00] 123.638718: sub_preempt_count <-schedule
+ bash-4003 [00] 123.638718: sub_preempt_count <-preempt_schedule
+ bash-4003 [00] 123.638719: wait_for_completion <-__stop_machine_run
+ bash-4003 [00] 123.638719: wait_for_common <-wait_for_completion
+ bash-4003 [00] 123.638720: _spin_lock_irq <-wait_for_common
+ bash-4003 [00] 123.638720: add_preempt_count <-_spin_lock_irq
+[...]
+
+
+Note: It is sometimes better to enable or disable tracing directly from
+a program, because the buffer may be overflowed by the echo commands
+before you get to the point you want to trace. It is also easier to
+stop the tracing at the point that you hit the part that you are
+interested in. Since the ftrace buffer is a ring buffer with the
+oldest data being overwritten, usually it is sufficient to start the
+tracer with an echo command but have you code stop it. Something
+like the following is usually appropriate for this.
+
+int trace_fd;
+[...]
+int main(int argc, char *argv[]) {
+ [...]
+ trace_fd = open("/debug/tracing/tracing_enabled", O_WRONLY);
+ [...]
+ if (condition_hit()) {
+ write(trace_fd, "0", 1);
+ }
+ [...]
+}
+
+
+dynamic ftrace
+--------------
+
+If CONFIG_DYNAMIC_FTRACE is set, then the system will run with
+virtually no overhead when function tracing is disabled. The way
+this works is the mcount function call (placed at the start of
+every kernel function, produced by the -pg switch in gcc), starts
+of pointing to a simple return.
+
+When dynamic ftrace is initialized, it calls kstop_machine to make
+the machine act like a uniprocessor so that it can freely modify code
+without worrying about other processors executing that same code. At
+initialization, the mcount calls are changed to call a "record_ip"
+function. After this, the first time a kernel function is called,
+it has the calling address saved in a hash table.
+
+Later on the ftraced kernel thread is awoken and will again call
+kstop_machine if new functions have been recorded. The ftraced thread
+will change all calls to mcount to "nop". Just calling mcount
+and having mcount return has shown a 10% overhead. By converting
+it to a nop, there is no recordable overhead to the system.
+
+One special side-effect to the recording of the functions being
+traced, is that we can now selectively choose which functions we
+want to trace and which ones we want the mcount calls to remain as
+nops.
+
+Two files are used, one for enabling and one for disabling the tracing
+of recorded functions. They are:
+
+ set_ftrace_filter
+
+and
+
+ set_ftrace_notrace
+
+A list of available functions that you can add to these files is listed
+in:
+
+ available_filter_functions
+
+ # cat /debug/tracing/available_filter_functions
+put_prev_task_idle
+kmem_cache_create
+pick_next_task_rt
+get_online_cpus
+pick_next_task_fair
+mutex_lock
+[...]
+
+If I'm only interested in sys_nanosleep and hrtimer_interrupt:
+
+ # echo sys_nanosleep hrtimer_interrupt \
+ > /debug/tracing/set_ftrace_filter
+ # echo ftrace > /debug/tracing/current_tracer
+ # echo 1 > /debug/tracing/tracing_enabled
+ # usleep 1
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/trace
+# tracer: ftrace
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+ usleep-4134 [00] 1317.070017: hrtimer_interrupt <-smp_apic_timer_interrupt
+ usleep-4134 [00] 1317.070111: sys_nanosleep <-syscall_call
+ <idle>-0 [00] 1317.070115: hrtimer_interrupt <-smp_apic_timer_interrupt
+
+To see what functions are being traced, you can cat the file:
+
+ # cat /debug/tracing/set_ftrace_filter
+hrtimer_interrupt
+sys_nanosleep
+
+
+Perhaps this isn't enough. The filters also allow simple wild cards.
+Only the following are currently available
+
+ <match>* - will match functions that begin with <match>
+ *<match> - will match functions that end with <match>
+ *<match>* - will match functions that have <match> in it
+
+Thats all the wild cards that are allowed.
+
+ <match>*<match> will not work.
+
+ # echo hrtimer_* > /debug/tracing/set_ftrace_filter
+
+Produces:
+
+# tracer: ftrace
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+ bash-4003 [00] 1480.611794: hrtimer_init <-copy_process
+ bash-4003 [00] 1480.611941: hrtimer_start <-hrtick_set
+ bash-4003 [00] 1480.611956: hrtimer_cancel <-hrtick_clear
+ bash-4003 [00] 1480.611956: hrtimer_try_to_cancel <-hrtimer_cancel
+ <idle>-0 [00] 1480.612019: hrtimer_get_next_event <-get_next_timer_interrupt
+ <idle>-0 [00] 1480.612025: hrtimer_get_next_event <-get_next_timer_interrupt
+ <idle>-0 [00] 1480.612032: hrtimer_get_next_event <-get_next_timer_interrupt
+ <idle>-0 [00] 1480.612037: hrtimer_get_next_event <-get_next_timer_interrupt
+ <idle>-0 [00] 1480.612382: hrtimer_get_next_event <-get_next_timer_interrupt
+
+
+Notice that we lost the sys_nanosleep.
+
+ # cat /debug/tracing/set_ftrace_filter
+hrtimer_run_queues
+hrtimer_run_pending
+hrtimer_init
+hrtimer_cancel
+hrtimer_try_to_cancel
+hrtimer_forward
+hrtimer_start
+hrtimer_reprogram
+hrtimer_force_reprogram
+hrtimer_get_next_event
+hrtimer_interrupt
+hrtimer_nanosleep
+hrtimer_wakeup
+hrtimer_get_remaining
+hrtimer_get_res
+hrtimer_init_sleeper
+
+
+This is because the '>' and '>>' act just like they do in bash.
+To rewrite the filters, use '>'
+To append to the filters, use '>>'
+
+To clear out a filter so that all functions will be recorded again:
+
+ # echo > /debug/tracing/set_ftrace_filter
+ # cat /debug/tracing/set_ftrace_filter
+ #
+
+Again, now we want to append.
+
+ # echo sys_nanosleep > /debug/tracing/set_ftrace_filter
+ # cat /debug/tracing/set_ftrace_filter
+sys_nanosleep
+ # echo hrtimer_* >> /debug/tracing/set_ftrace_filter
+ # cat /debug/tracing/set_ftrace_filter
+hrtimer_run_queues
+hrtimer_run_pending
+hrtimer_init
+hrtimer_cancel
+hrtimer_try_to_cancel
+hrtimer_forward
+hrtimer_start
+hrtimer_reprogram
+hrtimer_force_reprogram
+hrtimer_get_next_event
+hrtimer_interrupt
+sys_nanosleep
+hrtimer_nanosleep
+hrtimer_wakeup
+hrtimer_get_remaining
+hrtimer_get_res
+hrtimer_init_sleeper
+
+
+The set_ftrace_notrace prevents those functions from being traced.
+
+ # echo '*preempt*' '*lock*' > /debug/tracing/set_ftrace_notrace
+
+Produces:
+
+# tracer: ftrace
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+ bash-4043 [01] 115.281644: finish_task_switch <-schedule
+ bash-4043 [01] 115.281645: hrtick_set <-schedule
+ bash-4043 [01] 115.281645: hrtick_clear <-hrtick_set
+ bash-4043 [01] 115.281646: wait_for_completion <-__stop_machine_run
+ bash-4043 [01] 115.281647: wait_for_common <-wait_for_completion
+ bash-4043 [01] 115.281647: kthread_stop <-stop_machine_run
+ bash-4043 [01] 115.281648: init_waitqueue_head <-kthread_stop
+ bash-4043 [01] 115.281648: wake_up_process <-kthread_stop
+ bash-4043 [01] 115.281649: try_to_wake_up <-wake_up_process
+
+We can see that there's no more lock or preempt tracing.
+
+ftraced
+-------
+
+As mentioned above, when dynamic ftrace is configured in, a kernel
+thread wakes up once a second and checks to see if there are mcount
+calls that need to be converted into nops. If there are not any, then
+it simply goes back to sleep. But if there are some, it will call
+kstop_machine to convert the calls to nops.
+
+There may be a case that you do not want this added latency.
+Perhaps you are doing some audio recording and this activity might
+cause skips in the playback. There is an interface to disable
+and enable the ftraced kernel thread.
+
+ # echo 0 > /debug/tracing/ftraced_enabled
+
+This will disable the calling of the kstop_machine to update the
+mcount calls to nops. Remember that there's a large overhead
+to calling mcount. Without this kernel thread, that overhead will
+exist.
+
+If there are recorded calls to mcount, any write to the ftraced_enabled
+file will cause the kstop_machine to run. This means that a
+user can manually perform the updates when they want to by simply
+echoing a '0' into the ftraced_enabled file.
+
+The updates are also done at the beginning of enabling a tracer
+that uses ftrace function recording.
+
+
+trace_pipe
+----------
+
+The trace_pipe outputs the same as trace, but the effect on the
+tracing is different. Every read from trace_pipe is consumed.
+This means that subsequent reads will be different. The trace
+is live.
+
+ # echo ftrace > /debug/tracing/current_tracer
+ # cat /debug/tracing/trace_pipe > /tmp/trace.out &
+[1] 4153
+ # echo 1 > /debug/tracing/tracing_enabled
+ # usleep 1
+ # echo 0 > /debug/tracing/tracing_enabled
+ # cat /debug/tracing/trace
+# tracer: ftrace
+#
+# TASK-PID CPU# TIMESTAMP FUNCTION
+# | | | | |
+
+ #
+ # cat /tmp/trace.out
+ bash-4043 [00] 41.267106: finish_task_switch <-schedule
+ bash-4043 [00] 41.267106: hrtick_set <-schedule
+ bash-4043 [00] 41.267107: hrtick_clear <-hrtick_set
+ bash-4043 [00] 41.267108: wait_for_completion <-__stop_machine_run
+ bash-4043 [00] 41.267108: wait_for_common <-wait_for_completion
+ bash-4043 [00] 41.267109: kthread_stop <-stop_machine_run
+ bash-4043 [00] 41.267109: init_waitqueue_head <-kthread_stop
+ bash-4043 [00] 41.267110: wake_up_process <-kthread_stop
+ bash-4043 [00] 41.267110: try_to_wake_up <-wake_up_process
+ bash-4043 [00] 41.267111: select_task_rq_rt <-try_to_wake_up
+
+
+Note, reading the trace_pipe will block until more input is added.
+By changing the tracer, trace_pipe will issue an EOF. We needed
+to set the ftrace tracer _before_ cating the trace_pipe file.
+
+
+trace entries
+-------------
+
+Having too much or not enough data can be troublesome in diagnosing
+some issue in the kernel. The file trace_entries is used to modify
+the size of the internal trace buffers. The number listed
+is the number of entries that can be recorded per CPU. To know
+the full size, multiply the number of possible CPUS with the
+number of entries.
+
+ # cat /debug/tracing/trace_entries
+65620
+
+Note, to modify this, you must have tracing completely disabled. To do that,
+echo "none" into the current_tracer.
+
+ # echo none > /debug/tracing/current_tracer
+ # echo 100000 > /debug/tracing/trace_entries
+ # cat /debug/tracing/trace_entries
+100045
+
+
+Notice that we echoed in 100,000 but the size is 100,045. The entries
+are held by individual pages. It allocates the number of pages it takes
+to fulfill the request. If more entries may fit on the last page
+it will add them.
+
+ # echo 1 > /debug/tracing/trace_entries
+ # cat /debug/tracing/trace_entries
+85
+
+This shows us that 85 entries can fit on a single page.
+
+The number of pages that will be allocated is a percentage of available
+memory. Allocating too much will produce an error.
+
+ # echo 1000000000000 > /debug/tracing/trace_entries
+-bash: echo: write error: Cannot allocate memory
+ # cat /debug/tracing/trace_entries
+85
+

2008-07-11 07:51:32

by Elias Oltmanns

[permalink] [raw]
Subject: Re: [PATCH] ftrace: Documentation

Steven Rostedt <[email protected]> wrote:
> On Thu, 10 Jul 2008, Elias Oltmanns wrote:
>
>>
>> Steven Rostedt <[email protected]> wrote:
[...]
>> > + times the number of possible CPUS. The buffers
>> > + are saved as individual pages, and the actual entries
>> > + will always be rounded up to entries per page.
>>
>> Not sure I understand the last sentence, but may be it's just me not
>> being familiar with the terminology.
>
> I should rewrite it then. I allocate the buffer by pages (a block of
> memory that is used in kernel allocation. Usually 4K). Since the entries
> are less than a page, if there is extra padding on the last page after
> all the requested entries have been allocated, I use the rest of the page
> to add entries that can still fit.

I see, thanks.

[...]
>> > + ftrace - function tracer that uses mcount to trace all functions.
>> > + It is possible to filter out which functions that are
>>
>> are to be
>
> Both ways sound OK to me. But then again, I would trust a non-native
> speaker more. Since they were actually taught the language ;-)

Most likely, both ways *are* OK; I was mainly concerned about the `that'
which irritates me.

>
>>
>> > + traced when dynamic ftrace is configured in.
[...]
>> > + stacktrace - This is one of the options that changes the trace itself.
>>
>> change
>
> Hmm, now this would be a good English question. "change" is for plural and
> "changes" is for singular. Now is "This is one of options" plural or
> singular. I'm thinking singular, because it is "one of", but I'm not an
> English major.

Well, I couldn't name any particular rule to make my point but I'll
dissect that sentence in order to explain my thinking:

1. This is one of the options.

Well, we knew that anyway, so surely, something is missing.

2. This is an option that changes the trace itself.

Quite conclusive.

3. This is one of the options that change the trace itself.

The `that' clause refers to options, thus implying that there are more
then just this one that change the trace itself. I'm not sure whether
the fact that there is no comma before the `that' implies that the rest
of the sentence refers to the subject, though.

4. This is one of the options that changes the trace itself.

Personally, I think of this as the sum of 1. (no valuable information)
and 2. (a perfectly valid and meaningful statement), i.e., why not stick
to 2. In constrast, the third variant really does provide additional
information because the relative clause refers to options (plural).

Well, I really am clutching at straws here, I suppose. Actually, I dare
not argue with with you *and* Randy about this.

[...]
>> > +The irqsoff tracer tracks the time interrupts are disabled and when
>>
>> when
>
> Hmm, I don't like either "when"s. How about:
>
> The irqsoff tracer tracks the time interrupts are disabled to the time
> they are re-enabled.
>
> ??

Fine with me.

>
>>
>> > +they are re-enabled. When a new maximum latency is hit, it saves off
>> > +the trace so that it may be retrieved at a later time. Every time a
>> > +new maximum in reached, the old saved trace is discarded and the new
>> > +trace is saved.
>> [...]
>> > +Note the above had ftrace_enabled not set. If we set the ftrace_enabled
>>
>> (comma)
>
> I have a lot of these. I just commented on someones writing in saying that
> you can use a "then" or a "comma" but don't leave both out. Seems I've
> been doing that a lot here.

I doubt you really can omit the comma either way since the conditional
precedes the main clause.

[...]
>> > +Where as the setting of the NEED_RESCHED bit happens on the
>> > +task's stack. But because we are in a hard interrupt, the test
>> > +is with the interrupts stack which has that to be false. We don't
>>
>> ^^^^
>> Superfluous that? Don't understand that sentence.
>
> No really, I did proof read it...
>
> God that was an awful explanation. OK, how about something like this:
>
> Some task data is stored at the top of the tasks stack (need_resched and
> preempt_count). The setting of the NEED_RESCHED sets the bit on the
> task's real stack. The test for NEED_RESCHED looks at the current stack.
> Since the current stack is the hard interrupt stack (as the kernel is
> configured to use a separate stack for interrupts), the trace shows that
> the need_resched bit has not yet been set.

Righht, thanks.

[...]
>> > +Two files that contain to the enabling and disabling of recorded
>> > +functions are:
>>
>> Can this be expressed somewhat differently?
>
> Did I write that??
> How about:
>
> Two files are used, one for enabling and one for disabling the tracing
> of recorded functions.
>
> I'm sure you'll tell me I missed a comma in there somewhere ;-)

Sorry to disappoint you, I could think of one. ;-)

[...]
>> > +ftraced
>> > +-------
>> > +
>> > +As mentioned above, when dynamic ftrace is configured in, a kernel
>> > +thread wakes up once a second and checks to see if there are mcount
>> > +calls that need to be converted into nops. If there is not, then
>>
>> are
>
> That sounds off. I see your point, but I'm not sure this applies for
> plural. I can be wrong though.

Well, I could be persuaded to believe that this is idiomatic.

>
>>
>> > +it simply goes back to sleep. But if there is, it will call
>>
>> are
>
> Same here.

Of course.

>
>>
>> > +kstop_machine to convert the calls to nops.
>> [...]
>> > +Any write to the ftraced_enabled file will cause the kstop_machine
>> > +to run if there are recorded calls to mcount. This means that a
>>
>> Incomplete sentence.
>
> hmm, how so? Although I am missing a comma. I could also write it like
> "If there are recorded calls to mcount, any write to the ftraced_enabled
> file will cause kstop_machine to run".

Oh dear, I didn't exactly excel myself there, did I. Your original
sentence was quite alright and there was no comma missing either because
the conditional follows the main clause. Personally, though, I find the
new version easier to understand.

>
>>
>> > +user can manually perform the updates when they want to by simply
>>
>> (s)he wants
>
> This is where I hate the English language, and will not be including this
> update. Sorry, I hate the whole she/he thing. I simply rebel and use
> "they"!

Yes, I appreciate that. Actually, I was wondering at the time whether I
should suggest saying `users' insdead of `a user'.

>
>>
>> > +echoing a '0' into the ftraced_enabled file.

Regards,

Elias

2008-07-11 19:20:54

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation

On Thu, 10 Jul 2008 20:37:19 -0400 (EDT) Steven Rostedt <[email protected]> wrote:

>
> This is the long awaited ftrace.txt. It explains in quite detail how to
> use ftrace.
>
> Updated with comments from Elias Oltmann and Randy Dunlap.
>
> Signed-off-by: Steven Rostedt <[email protected]>
> ---
> Documentation/ftrace.txt | 1361 +++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 1361 insertions(+)
>
> Index: linux-tip.git/Documentation/ftrace.txt
> ===================================================================
> --- /dev/null 1970-01-01 00:00:00.000000000 +0000
> +++ linux-tip.git/Documentation/ftrace.txt 2008-07-10 20:18:33.000000000 -0400
> @@ -0,0 +1,1361 @@
> + ftrace - Function Tracer
> + ========================
> +
> +Copyright 2008 Red Hat Inc.
> + Author: Steven Rostedt <[email protected]>
> + License: The GNU Free Documentation License, Version 1.2
> +Reviewers: Elias Oltmanns and Randy Dunlap
> +
> +Writen for: 2.6.26-rc8 linux-2.6-tip.git tip/tracing/ftrace branch
> +
> +Introduction
> +------------
> +
> +Ftrace is an internal tracer designed to help out developers and
> +designers of systems to find what is going on inside the kernel.
> +It can be used for debugging or analyzing latencies and performance
> +issues that take place outside of user-space.
> +
> +Although ftrace is the function tracer, it also includes an
> +infrastructure that allows for other types of tracing. Some of the
> +tracers that are currently in ftrace is a tracer to trace

grammar bustage here

> +context switches, the time it takes for a high priority task to
> +run after it was woken up, the time interrupts are disabled, and
> +more.

Please enumerate "and more" ;)

> +
> +The File System
> +---------------
> +
> +Ftrace uses the debugfs file system to hold the control files as well
> +as the files to display output.
> +
> +To mount the debugfs system:
> +
> + # mkdir /debug
> + # mount -t debugfs nodev /debug

This should be a reference to the debugfs documentation (rofl?) rather than
a reimplementation of it.

> +
> +That's it! (assuming that you have ftrace configured into your kernel)
> +
> +After mounting the debugfs, you can see a directory called
> +"tracing". This directory contains the control and output files
> +of ftrace. Here is a list of some of the key files:
> +
> +
> + Note: all time values are in microseconds.
> +
> + current_tracer : This is used to set or display the current tracer
> + that is configured.
> +
> + available_tracers : This holds the different types of tracers that
> + have been compiled into the kernel. The tracers
> + listed here can be configured by echoing in their

s/in//

> + name into current_tracer.
> +
> + tracing_enabled : This sets or displays whether the current_tracer
> + is activated and tracing or not. Echo 0 into this
> + file to disable the tracer or 1 (or non-zero) to
> + enable it.

kernel should only permit 0 or 1 (IMO). This is a kernel ABI, not a C
program.

> + trace : This file holds the output of the trace in a human readable
> + format.

"described below" (I hope)

> + latency_trace : This file shows the same trace but the information
> + is organized more to display possible latencies
> + in the system.

?

> + trace_pipe : The output is the same as the "trace" file but this
> + file is meant to be streamed with live tracing.

We have three slightly munged versions of the same data? wtf?

Would it not be better to present all the data a single time and perform
post-processing in userspace?

It's scary, but we _can_ ship userspace code. getdelays.c has worked well.
Baby steps.

> + Reads from this file will block until new data
> + is retrieved. Unlike the "trace" and "latency_trace"
> + files, this file is a consumer. This means reading
> + from this file causes sequential reads to display
> + more current data.

hrm.

> Once data is read from this
> + file, it is consumed, and will not be read
> + again with a sequential read. The "trace" and
> + "latency_trace" files are static, and if the
> + tracer isn't adding more data, they will display
> + the same information every time they are read.

hrm. Side note: it is sad that we are learning fundamental design
decisions ages after the code was put into mainline. How did this happen?

In a better world we'd have seen this document before coding started! Or
at least prior to merging.

> + iter_ctrl : This file lets the user control the amount of data
> + that is displayed in one of the above output
> + files.
> +
> + trace_max_latency : Some of the tracers record the max latency.
> + For example, the time interrupts are disabled.
> + This time is saved in this file. The max trace
> + will also be stored, and displayed by either
> + "trace" or "latency_trace". A new max trace will
> + only be recorded if the latency is greater than
> + the value in this file. (in microseconds)
> +
> + trace_entries : This sets or displays the number of trace
> + entries each CPU buffer can hold. The tracer buffers
> + are the same size for each CPU, so care must be
> + taken when modifying the trace_entries.

I don't understand "A, so care must be taken when B". Why must care be
taken? What might happen if I was careless? How do I take care? Type
slowly? ;)

> The trace
> + buffers are allocated in pages (blocks of memory that
> + the kernel uses for allocation, usually 4 KB in size).
> + Since each entry is smaller than a page, if the last
> + allocated page has room for more entries than were
> + requested, the rest of the page is used to allocate
> + entries.
> +
> + This can only be updated when the current_tracer
> + is set to "none".
> +
> + NOTE: It is planned on changing the allocated buffers
> + from being the number of possible CPUS to
> + the number of online CPUS.
> +
> + tracing_cpumask : This is a mask that lets the user only trace
> + on specified CPUS. The format is a hex string
> + representing the CPUS.

Why is this feature useful? (I'd have asked this prior to merging, if I'd
known it existed!)

> + set_ftrace_filter : When dynamic ftrace is configured in, the

I guess we'll learn later what "dynamic" ftrace is.

> + code is dynamically modified to disable calling
> + of the function profiler (mcount).

What does "dynamically modified" mean here? text rewriting?

> This lets
> + tracing be configured in with practically no overhead
> + in performance. This also has a side effect of
> + enabling or disabling specific functions to be
> + traced. Echoing in names of functions into this

s/in//

> + file will limit the trace to only these functions.

s/these/those/

> + set_ftrace_notrace: This has the opposite effect that
> + set_ftrace_filter has.

"This has an effect opposite to that of set_ftrace_filter", perhaps.

> Any function that is added
> + here will not be traced. If a function exists
> + in both set_ftrace_filter and set_ftrace_notrace,
> + the function will _not_ be traced.
> +
> + available_filter_functions : When a function is encountered the first
> + time by the dynamic tracer, it is recorded and
> + later the call is converted into a nop. This file
> + lists the functions that have been recorded
> + by the dynamic tracer and these functions can
> + be used to set the ftrace filter by the above
> + "set_ftrace_filter" file.

My head just spun off. Perhaps some more details here?

> +
> +The Tracers
> +-----------
> +
> +Here are the list of current tracers that can be configured.

s/are/is/
s/can/may/

> +
> + ftrace - function tracer that uses mcount to trace all functions.
> + It is possible to filter out which functions that are

s/that//

What does "filter out" mean here? I asusme that they are omitted? A bit
unclear.

> + to be traced when dynamic ftrace is configured in.

"a function tracer which"


> + sched_switch - traces the context switches between tasks.
> +
> + irqsoff - traces the areas that disable interrupts and saves off
> + the trace with the longest max latency.

s/off//

> + See tracing_max_latency. When a new max is recorded,
> + it replaces the old trace. It is best to view this
> + trace with the latency_trace file.

s/with/via/

> + preemptoff - Similar to irqsoff but traces and records the time
> + preemption is disabled.

s/the time/the amount of time for which/

> + preemptirqsoff - Similar to irqsoff and preemptoff, but traces and
> + records the largest time irqs and/or preemption is
> + disabled.

s/time/time for which/

This interface has a strange mix of wordsruntogether and
words_separated_by_underscores. Oh well - another consequence of
post-facto changelogging.

> + wakeup - Traces and records the max latency that it takes for
> + the highest priority task to get scheduled after
> + it has been woken up.
> +
> + none - This is not a tracer. To remove all tracers from tracing
> + simply echo "none" into current_tracer.

Does the system then run at full performance levels?

> +
> +Examples of using the tracer
> +----------------------------
> +
> +Here are typical examples of using the tracers with only controlling
> +them with the debugfs interface (without using any user-land utilities).

s/with only controlling them/when controlling them only/

> +Output format:
> +--------------
> +
> +Here's an example of the output format of the file "trace"
> +
> + --------
> +# tracer: ftrace
> +#
> +# TASK-PID CPU# TIMESTAMP FUNCTION
> +# | | | | |
> + bash-4251 [01] 10152.583854: path_put <-path_walk
> + bash-4251 [01] 10152.583855: dput <-path_put
> + bash-4251 [01] 10152.583855: _atomic_dec_and_lock <-dput
> + --------

pids are no longer unique system-wide, and any part of the kernel ABI which
exports them to userspace is, basically, broken. Oh well.

> +A header is printed with the trace that is represented.

s/that is represented//?

> In this case
> +the tracer is "ftrace". Then a header showing the format. Task name
> +"bash", the task PID "4251", the CPU that it was running on
> +"01", the timestamp in <secs>.<usecs> format, the function name that was
> +traced "path_put" and the parent function that called this function
> +"path_walk".

Please spell out what the timestamp represents. The time at which that
function was entered?

> +The sched_switch tracer also includes tracing of task wake ups and

"wakeups" would be a more typical spelling.

> +context switches.
> +
> + ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 2916:115:S
> + ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 10:115:S
> + ksoftirqd/1-7 [01] 1453.070013: 7:115:R ==> 10:115:R
> + events/1-10 [01] 1453.070013: 10:115:S ==> 2916:115:R
> + kondemand/1-2916 [01] 1453.070013: 2916:115:S ==> 7:115:R
> + ksoftirqd/1-7 [01] 1453.070013: 7:115:S ==> 0:140:R
> +
> +Wake ups are represented by a "+" and the context switches show
> +"==>". The format is:

s/show/are shown as/

> +
> + Context switches:
> +
> + Previous task Next Task
> +
> + <pid>:<prio>:<state> ==> <pid>:<prio>:<state>
> +
> + Wake ups:
> +
> + Current task Task waking up
> +
> + <pid>:<prio>:<state> + <pid>:<prio>:<state>
> +
> +The prio is the internal kernel priority, which is inverse to the

s/inverse to/the inverse of/

> +priority that is usually displayed by user-space tools. Zero represents
> +the highest priority (99). Prio 100 starts the "nice" priorities with
> +100 being equal to nice -20 and 139 being nice 19. The prio "140" is
> +reserved for the idle task which is the lowest priority thread (pid 0).

Would it not be better to convert these back into their userland
representation or userland presentation?

> +
> +Latency trace format
> +--------------------
> +
> +For traces that display latency times, the latency_trace file gives
> +a bit more information to see why a latency happened. Here's a typical

s/a bit/somewhat/ (IMO)

> +trace.

General nit: apostrophes are suitable for conversation, but not for formal
docmentation. Please consider s/'s/ is/g.

> +# tracer: irqsoff
> +#
> +irqsoff latency trace v1.1.5 on 2.6.26-rc8
> +--------------------------------------------------------------------
> + latency: 97 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
> + -----------------
> + | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
> + -----------------
> + => started at: apic_timer_interrupt
> + => ended at: do_softirq
> +
> +# _------=> CPU#
> +# / _-----=> irqs-off
> +# | / _----=> need-resched
> +# || / _---=> hardirq/softirq
> +# ||| / _--=> preempt-depth
> +# |||| /
> +# ||||| delay
> +# cmd pid ||||| time | caller
> +# \ / ||||| \ | /
> + <idle>-0 0d..1 0us+: trace_hardirqs_off_thunk (apic_timer_interrupt)
> + <idle>-0 0d.s. 97us : __do_softirq (do_softirq)
> + <idle>-0 0d.s1 98us : trace_hardirqs_on (do_softirq)

The kernel prints all that stuff out of a debugfs file?

What have we done? :(

> +
> +vim:ft=help

What's this?

> +
> +
> +This shows that the current tracer is "irqsoff" tracing the time

s/time/time for which/

> +interrupts are disabled. It gives the trace version and the kernel

s/are/were/

s/kernel/version of the kernel upon which/

> +this was executed on (2.6.26-rc8). Then it displays the max latency
> +in microsecs (97 us). The number of trace entries displayed
> +by the total number recorded (both are three: #3/3). The type of

s/by/and/???

> +preemption that was used (PREEMPT). VP, KP, SP, and HP are always zero
> +and reserved for later use. #P is the number of online CPUS (#P:2).

s/reserved/are reserved/

> +
> +The task is the process that was running when the latency happened.

s/happened/occurred/

> +(swapper pid: 0).
> +
> +The start and stop that caused the latencies:

"start and stop" what? events? function calls?

> +
> + apic_timer_interrupt is where the interrupts were disabled.
> + do_softirq is where they were enabled again.
> +
> +The next lines after the header are the trace itself. The header
> +explains which is which.
> +
> + cmd: The name of the process in the trace.
> +
> + pid: The PID of that process.

:(

> + CPU#: The CPU that the process was running on.

s/that/which/

> +
> + irqs-off: 'd' interrupts are disabled. '.' otherwise.
> +
> + need-resched: 'N' task need_resched is set, '.' otherwise.
> +
> + hardirq/softirq:
> + 'H' - hard irq happened inside a softirq.
> + 'h' - hard irq is running
> + 's' - soft irq is running
> + '.' - normal context.
> +
> + preempt-depth: The level of preempt_disabled
> +
> +The above is mostly meaningful for kernel developers.
> +
> + time: This differs from the trace file output. The trace file output
> + included an absolute timestamp. The timestamp used by the

s/included/includes/?

> + latency_trace file is relative to the start of the trace.
> +
> + delay: This is just to help catch your eye a bit better. And
> + needs to be fixed to be only relative to the same CPU.

eh?

> + The marks are determined by the difference between this
> + current trace and the next trace.
> + '!' - greater than preempt_mark_thresh (default 100)
> + '+' - greater than 1 microsecond
> + ' ' - less than or equal to 1 microsecond.
> +
> + The rest is the same as the 'trace' file.
> +
> +
> +iter_ctrl
> +---------
> +
> +The iter_ctrl file is used to control what gets printed in the trace
> +output. To see what is available, simply cat the file:
> +
> + cat /debug/tracing/iter_ctrl
> + print-parent nosym-offset nosym-addr noverbose noraw nohex nobin \
> + noblock nostacktrace nosched-tree
> +
> +To disable one of the options, echo in the option prepended with "no".
> +
> + echo noprint-parent > /debug/tracing/iter_ctrl
> +
> +To enable an option, leave off the "no".
> +
> + echo sym-offset > /debug/tracing/iter_ctrl
> +
> +Here are the available options:
> +
> + print-parent - On function traces, display the calling function
> + as well as the function being traced.
> +
> + print-parent:
> + bash-4000 [01] 1477.606694: simple_strtoul <-strict_strtoul
> +
> + noprint-parent:
> + bash-4000 [01] 1477.606694: simple_strtoul
> +
> +
> + sym-offset - Display not only the function name, but also the offset
> + in the function. For example, instead of seeing just
> + "ktime_get", you will see "ktime_get+0xb/0x20".
> +
> + sym-offset:
> + bash-4000 [01] 1477.606694: simple_strtoul+0x6/0xa0
> +
> + sym-addr - this will also display the function address as well as
> + the function name.
> +
> + sym-addr:
> + bash-4000 [01] 1477.606694: simple_strtoul <c0339346>
> +
> + verbose - This deals with the latency_trace file.
> +
> + bash 4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
> + (+0.000ms): simple_strtoul (strict_strtoul)
> +
> + raw - This will display raw numbers. This option is best for use with
> + user applications that can translate the raw numbers better than
> + having it done in the kernel.

ooh, does this mean that we get to delete all the other interfaces?

I mean, honestly, all this pretty-printing and post-processing could and
should have been done in userspace. As much as poss. We suck.

> + hex - Similar to raw, but the numbers will be in a hexadecimal format.

Does really this need to exist? Again, it is the sort of thing which would
have needed justification during the pre-merge review. But afaik it was
jammed into the tree without knowledge or comment.

There are lessons here.

> + bin - This will print out the formats in raw binary.

I don't understand this.

> + block - TBD (needs update)
> +
> + stacktrace - This is one of the options that changes the trace itself.
> + When a trace is recorded, so is the stack of functions.
> + This allows for back traces of trace sites.

?

> + sched-tree - TBD (any users??)
> +
> +
> +sched_switch
> +------------
> +
> +This tracer simply records schedule switches. Here's an example
> +of how to use it.
> +
> + # echo sched_switch > /debug/tracing/current_tracer
> + # echo 1 > /debug/tracing/tracing_enabled
> + # sleep 1
> + # echo 0 > /debug/tracing/tracing_enabled
> + # cat /debug/tracing/trace
> +
> +# tracer: sched_switch
> +#
> +# TASK-PID CPU# TIMESTAMP FUNCTION
> +# | | | | |
> + bash-3997 [01] 240.132281: 3997:120:R + 4055:120:R
> + bash-3997 [01] 240.132284: 3997:120:R ==> 4055:120:R
> + sleep-4055 [01] 240.132371: 4055:120:S ==> 3997:120:R
> + bash-3997 [01] 240.132454: 3997:120:R + 4055:120:S
> + bash-3997 [01] 240.132457: 3997:120:R ==> 4055:120:R
> + sleep-4055 [01] 240.132460: 4055:120:D ==> 3997:120:R
> + bash-3997 [01] 240.132463: 3997:120:R + 4055:120:D
> + bash-3997 [01] 240.132465: 3997:120:R ==> 4055:120:R
> + <idle>-0 [00] 240.132589: 0:140:R + 4:115:S
> + <idle>-0 [00] 240.132591: 0:140:R ==> 4:115:R
> + ksoftirqd/0-4 [00] 240.132595: 4:115:S ==> 0:140:R
> + <idle>-0 [00] 240.132598: 0:140:R + 4:115:S
> + <idle>-0 [00] 240.132599: 0:140:R ==> 4:115:R
> + ksoftirqd/0-4 [00] 240.132603: 4:115:S ==> 0:140:R
> + sleep-4055 [01] 240.133058: 4055:120:S ==> 3997:120:R
> + [...]
> +
> +
> +As we have discussed previously about this format, the header shows
> +the name of the trace and points to the options. The "FUNCTION"
> +is a misnomer since here it represents the wake ups and context
> +switches.
> +
> +The sched_switch only lists the wake ups (represented with '+')

s/sched_switch/sched_switch file/?

> +and context switches ('==>') with the previous task or current

s/current/current task/?

> +first followed by the next task or task waking up. The format for both
> +of these is PID:KERNEL-PRIO:TASK-STATE. Remember that the KERNEL-PRIO
> +is the inverse of the actual priority with zero (0) being the highest
> +priority and the nice values starting at 100 (nice -20). Below is
> +a quick chart to map the kernel priority to user land priorities.
> +
> + Kernel priority: 0 to 99 ==> user RT priority 99 to 0
> + Kernel priority: 100 to 139 ==> user nice -20 to 19
> + Kernel priority: 140 ==> idle task priority
> +
> +The task states are:
> +
> + R - running : wants to run, may not actually be running
> + S - sleep : process is waiting to be woken up (handles signals)
> + D - deep sleep : process must be woken up (ignores signals)

"uninterruptible sleep", please. no need to invent new (and hence
unfamilar) terms!

> + T - stopped : process suspended
> + t - traced : process is being traced (with something like gdb)
> + Z - zombie : process waiting to be cleaned up
> + X - unknown
> +
> +
> +ftrace_enabled
> +--------------
> +
> +The following tracers give different output depending on whether
> +or not the sysctl ftrace_enabled is set. To set ftrace_enabled,
> +one can either use the sysctl function or set it via the proc
> +file system interface.
> +
> + sysctl kernel.ftrace_enabled=1
> +
> + or
> +
> + echo 1 > /proc/sys/kernel/ftrace_enabled
> +
> +To disable ftrace_enabled simply replace the '1' with '0' in
> +the above commands.
> +
> +When ftrace_enabled is set the tracers will also record the functions
> +that are within the trace. The descriptions of the tracers
> +will also show an example with ftrace enabled.

What are "the following tracers" here?

> +
> +irqsoff
> +-------
> +
> +When interrupts are disabled, the CPU can not react to any other
> +external event (besides NMIs and SMIs). This prevents the timer
> +interrupt from triggering or the mouse interrupt from letting the
> +kernel know of a new mouse event. The result is a latency with the
> +reaction time.
> +
> +The irqsoff tracer tracks the time interrupts are disabled to the time

"te time for which interrupts are disabled." will suffice.

> +they are re-enabled. When a new maximum latency is hit, it saves off
> +the trace so that it may be retrieved at a later time.

"the tracer saves the stack trace leading up to that latency point so that"

> Every time a
> +new maximum in reached, the old saved trace is discarded and the new
> +trace is saved.
> +
> +To reset the maximum, echo 0 into tracing_max_latency. Here's an
> +example:
> +
> + # echo irqsoff > /debug/tracing/current_tracer
> + # echo 0 > /debug/tracing/tracing_max_latency
> + # echo 1 > /debug/tracing/tracing_enabled
> + # ls -ltr
> + [...]
> + # echo 0 > /debug/tracing/tracing_enabled
> + # cat /debug/tracing/latency_trace
> +# tracer: irqsoff
> +#
> +irqsoff latency trace v1.1.5 on 2.6.26-rc8
> +--------------------------------------------------------------------
> + latency: 6 us, #3/3, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
> + -----------------
> + | task: bash-4269 (uid:0 nice:0 policy:0 rt_prio:0)
> + -----------------
> + => started at: copy_page_range
> + => ended at: copy_page_range
> +
> +# _------=> CPU#
> +# / _-----=> irqs-off
> +# | / _----=> need-resched
> +# || / _---=> hardirq/softirq
> +# ||| / _--=> preempt-depth
> +# |||| /
> +# ||||| delay
> +# cmd pid ||||| time | caller
> +# \ / ||||| \ | /
> + bash-4269 1...1 0us+: _spin_lock (copy_page_range)
> + bash-4269 1...1 7us : _spin_unlock (copy_page_range)
> + bash-4269 1...2 7us : trace_preempt_on (copy_page_range)

istr writing stuff which does this in 1999 ;)

> +
> +vim:ft=help

?

> +Here we see that that we had a latency of 6 microsecs (which is
> +very good). The spin_lock in copy_page_range disabled interrupts.

spin_lock disables interrutps?

> +The difference between the 6 and the displayed timestamp 7us is
> +because

"occurred because"

> the clock must have incremented between the time of recording

s/must have/was/

> +the max latency and recording the function that had that latency.

s/recording/the time of recording/

> +
> +Note the above had ftrace_enabled not set. If we set the ftrace_enabled,

s/above/above example/

> +we get a much larger output:
> +
> +# tracer: irqsoff
> +#
> +irqsoff latency trace v1.1.5 on 2.6.26-rc8
> +--------------------------------------------------------------------
> + latency: 50 us, #101/101, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
> + -----------------
> + | task: ls-4339 (uid:0 nice:0 policy:0 rt_prio:0)
> + -----------------
> + => started at: __alloc_pages_internal
> + => ended at: __alloc_pages_internal
> +
> +# _------=> CPU#
> +# / _-----=> irqs-off
> +# | / _----=> need-resched
> +# || / _---=> hardirq/softirq
> +# ||| / _--=> preempt-depth
> +# |||| /
> +# ||||| delay
> +# cmd pid ||||| time | caller
> +# \ / ||||| \ | /
> + ls-4339 0...1 0us+: get_page_from_freelist (__alloc_pages_internal)
> + ls-4339 0d..1 3us : rmqueue_bulk (get_page_from_freelist)
> + ls-4339 0d..1 3us : _spin_lock (rmqueue_bulk)
> + ls-4339 0d..1 4us : add_preempt_count (_spin_lock)
> + ls-4339 0d..2 4us : __rmqueue (rmqueue_bulk)
> + ls-4339 0d..2 5us : __rmqueue_smallest (__rmqueue)
> + ls-4339 0d..2 5us : __mod_zone_page_state (__rmqueue_smallest)
> + ls-4339 0d..2 6us : __rmqueue (rmqueue_bulk)
> + ls-4339 0d..2 6us : __rmqueue_smallest (__rmqueue)
> + ls-4339 0d..2 7us : __mod_zone_page_state (__rmqueue_smallest)
> + ls-4339 0d..2 7us : __rmqueue (rmqueue_bulk)
> + ls-4339 0d..2 8us : __rmqueue_smallest (__rmqueue)
> +[...]
> + ls-4339 0d..2 46us : __rmqueue_smallest (__rmqueue)
> + ls-4339 0d..2 47us : __mod_zone_page_state (__rmqueue_smallest)
> + ls-4339 0d..2 47us : __rmqueue (rmqueue_bulk)
> + ls-4339 0d..2 48us : __rmqueue_smallest (__rmqueue)
> + ls-4339 0d..2 48us : __mod_zone_page_state (__rmqueue_smallest)
> + ls-4339 0d..2 49us : _spin_unlock (rmqueue_bulk)
> + ls-4339 0d..2 49us : sub_preempt_count (_spin_unlock)
> + ls-4339 0d..1 50us : get_page_from_freelist (__alloc_pages_internal)
> + ls-4339 0d..2 51us : trace_hardirqs_on (__alloc_pages_internal)
> +
> +
> +vim:ft=help

?

> +
> +Here we traced a 50 microsecond latency. But we also see all the
> +functions that were called during that time. Note that by enabling
> +function tracing, we endure an added overhead. This overhead may

s/endure/incur/

> +extend the latency times. But nevertheless, this trace has provided
> +some very helpful debugging information.
> +
> +
> +preemptoff
> +----------
> +
> +When preemption is disabled, we may be able to receive interrupts but
> +the task cannot be preempted and a higher priority task must wait
> +for preemption to be enabled again before it can preempt a lower
> +priority task.
> +
> +The preemptoff tracer traces the places that disable preemption.
> +Like the irqsoff, it records the maximum latency that preemption

s/that/for which/

> +was disabled. The control of preemptoff is much like the irqsoff.

s/the//

> + # echo preemptoff > /debug/tracing/current_tracer
> + # echo 0 > /debug/tracing/tracing_max_latency
> + # echo 1 > /debug/tracing/tracing_enabled
> + # ls -ltr
> + [...]
> + # echo 0 > /debug/tracing/tracing_enabled
> + # cat /debug/tracing/latency_trace
> +# tracer: preemptoff
> +#
> +preemptoff latency trace v1.1.5 on 2.6.26-rc8
> +--------------------------------------------------------------------
> + latency: 29 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
> + -----------------
> + | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
> + -----------------
> + => started at: do_IRQ
> + => ended at: __do_softirq
> +
> +# _------=> CPU#
> +# / _-----=> irqs-off
> +# | / _----=> need-resched
> +# || / _---=> hardirq/softirq
> +# ||| / _--=> preempt-depth
> +# |||| /
> +# ||||| delay
> +# cmd pid ||||| time | caller
> +# \ / ||||| \ | /
> + sshd-4261 0d.h. 0us+: irq_enter (do_IRQ)
> + sshd-4261 0d.s. 29us : _local_bh_enable (__do_softirq)
> + sshd-4261 0d.s1 30us : trace_preempt_on (__do_softirq)
> +
> +
> +vim:ft=help

?

> +This has some more changes. Preemption was disabled when an interrupt
> +came in (notice the 'h'), and was enabled while doing a softirq.
> +(notice the 's'). But we also see that interrupts have been disabled
> +when entering the preempt off section and leaving it (the 'd').
> +We do not know if interrupts were enabled in the mean time.
> +
> +# tracer: preemptoff
> +#
> +preemptoff latency trace v1.1.5 on 2.6.26-rc8
> +--------------------------------------------------------------------
> + latency: 63 us, #87/87, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
> + -----------------
> + | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
> + -----------------
> + => started at: remove_wait_queue
> + => ended at: __do_softirq
> +
> +# _------=> CPU#
> +# / _-----=> irqs-off
> +# | / _----=> need-resched
> +# || / _---=> hardirq/softirq
> +# ||| / _--=> preempt-depth
> +# |||| /
> +# ||||| delay
> +# cmd pid ||||| time | caller
> +# \ / ||||| \ | /
> + sshd-4261 0d..1 0us : _spin_lock_irqsave (remove_wait_queue)
> + sshd-4261 0d..1 1us : _spin_unlock_irqrestore (remove_wait_queue)
> + sshd-4261 0d..1 2us : do_IRQ (common_interrupt)
> + sshd-4261 0d..1 2us : irq_enter (do_IRQ)
> + sshd-4261 0d..1 2us : idle_cpu (irq_enter)
> + sshd-4261 0d..1 3us : add_preempt_count (irq_enter)
> + sshd-4261 0d.h1 3us : idle_cpu (irq_enter)
> + sshd-4261 0d.h. 4us : handle_fasteoi_irq (do_IRQ)
> +[...]
> + sshd-4261 0d.h. 12us : add_preempt_count (_spin_lock)
> + sshd-4261 0d.h1 12us : ack_ioapic_quirk_irq (handle_fasteoi_irq)
> + sshd-4261 0d.h1 13us : move_native_irq (ack_ioapic_quirk_irq)
> + sshd-4261 0d.h1 13us : _spin_unlock (handle_fasteoi_irq)
> + sshd-4261 0d.h1 14us : sub_preempt_count (_spin_unlock)
> + sshd-4261 0d.h1 14us : irq_exit (do_IRQ)
> + sshd-4261 0d.h1 15us : sub_preempt_count (irq_exit)
> + sshd-4261 0d..2 15us : do_softirq (irq_exit)
> + sshd-4261 0d... 15us : __do_softirq (do_softirq)
> + sshd-4261 0d... 16us : __local_bh_disable (__do_softirq)
> + sshd-4261 0d... 16us+: add_preempt_count (__local_bh_disable)
> + sshd-4261 0d.s4 20us : add_preempt_count (__local_bh_disable)
> + sshd-4261 0d.s4 21us : sub_preempt_count (local_bh_enable)
> + sshd-4261 0d.s5 21us : sub_preempt_count (local_bh_enable)
> +[...]
> + sshd-4261 0d.s6 41us : add_preempt_count (__local_bh_disable)
> + sshd-4261 0d.s6 42us : sub_preempt_count (local_bh_enable)
> + sshd-4261 0d.s7 42us : sub_preempt_count (local_bh_enable)
> + sshd-4261 0d.s5 43us : add_preempt_count (__local_bh_disable)
> + sshd-4261 0d.s5 43us : sub_preempt_count (local_bh_enable_ip)
> + sshd-4261 0d.s6 44us : sub_preempt_count (local_bh_enable_ip)
> + sshd-4261 0d.s5 44us : add_preempt_count (__local_bh_disable)
> + sshd-4261 0d.s5 45us : sub_preempt_count (local_bh_enable)
> +[...]
> + sshd-4261 0d.s. 63us : _local_bh_enable (__do_softirq)
> + sshd-4261 0d.s1 64us : trace_preempt_on (__do_softirq)
> +
> +
> +The above is an example of the preemptoff trace with ftrace_enabled
> +set. Here we see that interrupts were disabled the entire time.
> +The irq_enter code lets us know that we entered an interrupt 'h'.
> +Before that, the functions being traced still show that it is not
> +in an interrupt, but we can see by the functions themselves that

s/by/from/

> +this is not the case.
> +
> +Notice that the __do_softirq when called doesn't have a preempt_count.

s/the//

> +It may seem that we missed a preempt enabled. What really happened

s/enabled/enabling/?

> +is that the preempt count is held on the threads stack and we

s/threads/thread's/

> +switched to the softirq stack (4K stacks in effect). The code
> +does not copy the preempt count, but because interrupts are disabled,
> +we don't need to worry about it. Having a tracer like this is good
> +to let people know what really happens inside the kernel.

s/to let/for letting/

> +
> +
> +preemptirqsoff
> +--------------
> +
> +Knowing the locations that have interrupts disabled or preemption
> +disabled for the longest times is helpful. But sometimes we would
> +like to know when either preemption and/or interrupts are disabled.
> +
> +The following code:

s/The/Consider the/?

> +
> + local_irq_disable();
> + call_function_with_irqs_off();
> + preempt_disable();
> + call_function_with_irqs_and_preemption_off();
> + local_irq_enable();
> + call_function_with_preemption_off();
> + preempt_enable();
> +
> +The irqsoff tracer will record the total length of
> +call_function_with_irqs_off() and
> +call_function_with_irqs_and_preemption_off().
> +
> +The preemptoff tracer will record the total length of
> +call_function_with_irqs_and_preemption_off() and
> +call_function_with_preemption_off().
> +
> +But neither will trace the time that interrupts and/or preemption
> +is disabled. This total time is the time that we can not schedule.
> +To record this time, use the preemptirqsoff tracer.
> +
> +Again, using this trace is much like the irqsoff and preemptoff tracers.
> +
> + # echo preemptirqsoff > /debug/tracing/current_tracer
> + # echo 0 > /debug/tracing/tracing_max_latency
> + # echo 1 > /debug/tracing/tracing_enabled
> + # ls -ltr
> + [...]
> + # echo 0 > /debug/tracing/tracing_enabled
> + # cat /debug/tracing/latency_trace
> +# tracer: preemptirqsoff
> +#
> +preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
> +--------------------------------------------------------------------
> + latency: 293 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
> + -----------------
> + | task: ls-4860 (uid:0 nice:0 policy:0 rt_prio:0)
> + -----------------
> + => started at: apic_timer_interrupt
> + => ended at: __do_softirq
> +
> +# _------=> CPU#
> +# / _-----=> irqs-off
> +# | / _----=> need-resched
> +# || / _---=> hardirq/softirq
> +# ||| / _--=> preempt-depth
> +# |||| /
> +# ||||| delay
> +# cmd pid ||||| time | caller
> +# \ / ||||| \ | /
> + ls-4860 0d... 0us!: trace_hardirqs_off_thunk (apic_timer_interrupt)
> + ls-4860 0d.s. 294us : _local_bh_enable (__do_softirq)
> + ls-4860 0d.s1 294us : trace_preempt_on (__do_softirq)
> +
> +
> +vim:ft=help

?

> +
> +The trace_hardirqs_off_thunk is called from assembly on x86 when
> +interrupts are disabled in the assembly code. Without the function
> +tracing, we don't know if interrupts were enabled within the preemption
> +points. We do see that it started with preemption enabled.
> +
> +Here is a trace with ftrace_enabled set:
> +
> +
> +# tracer: preemptirqsoff
> +#
> +preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8
> +--------------------------------------------------------------------
> + latency: 105 us, #183/183, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
> + -----------------
> + | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0)
> + -----------------
> + => started at: write_chan
> + => ended at: __do_softirq
> +
> +# _------=> CPU#
> +# / _-----=> irqs-off
> +# | / _----=> need-resched
> +# || / _---=> hardirq/softirq
> +# ||| / _--=> preempt-depth
> +# |||| /
> +# ||||| delay
> +# cmd pid ||||| time | caller
> +# \ / ||||| \ | /
> + ls-4473 0.N.. 0us : preempt_schedule (write_chan)
> + ls-4473 0dN.1 1us : _spin_lock (schedule)
> + ls-4473 0dN.1 2us : add_preempt_count (_spin_lock)
> + ls-4473 0d..2 2us : put_prev_task_fair (schedule)
> +[...]
> + ls-4473 0d..2 13us : set_normalized_timespec (ktime_get_ts)
> + ls-4473 0d..2 13us : __switch_to (schedule)
> + sshd-4261 0d..2 14us : finish_task_switch (schedule)
> + sshd-4261 0d..2 14us : _spin_unlock_irq (finish_task_switch)
> + sshd-4261 0d..1 15us : add_preempt_count (_spin_lock_irqsave)
> + sshd-4261 0d..2 16us : _spin_unlock_irqrestore (hrtick_set)
> + sshd-4261 0d..2 16us : do_IRQ (common_interrupt)
> + sshd-4261 0d..2 17us : irq_enter (do_IRQ)
> + sshd-4261 0d..2 17us : idle_cpu (irq_enter)
> + sshd-4261 0d..2 18us : add_preempt_count (irq_enter)
> + sshd-4261 0d.h2 18us : idle_cpu (irq_enter)
> + sshd-4261 0d.h. 18us : handle_fasteoi_irq (do_IRQ)
> + sshd-4261 0d.h. 19us : _spin_lock (handle_fasteoi_irq)
> + sshd-4261 0d.h. 19us : add_preempt_count (_spin_lock)
> + sshd-4261 0d.h1 20us : _spin_unlock (handle_fasteoi_irq)
> + sshd-4261 0d.h1 20us : sub_preempt_count (_spin_unlock)
> +[...]
> + sshd-4261 0d.h1 28us : _spin_unlock (handle_fasteoi_irq)
> + sshd-4261 0d.h1 29us : sub_preempt_count (_spin_unlock)
> + sshd-4261 0d.h2 29us : irq_exit (do_IRQ)
> + sshd-4261 0d.h2 29us : sub_preempt_count (irq_exit)
> + sshd-4261 0d..3 30us : do_softirq (irq_exit)
> + sshd-4261 0d... 30us : __do_softirq (do_softirq)
> + sshd-4261 0d... 31us : __local_bh_disable (__do_softirq)
> + sshd-4261 0d... 31us+: add_preempt_count (__local_bh_disable)
> + sshd-4261 0d.s4 34us : add_preempt_count (__local_bh_disable)
> +[...]
> + sshd-4261 0d.s3 43us : sub_preempt_count (local_bh_enable_ip)
> + sshd-4261 0d.s4 44us : sub_preempt_count (local_bh_enable_ip)
> + sshd-4261 0d.s3 44us : smp_apic_timer_interrupt (apic_timer_interrupt)
> + sshd-4261 0d.s3 45us : irq_enter (smp_apic_timer_interrupt)
> + sshd-4261 0d.s3 45us : idle_cpu (irq_enter)
> + sshd-4261 0d.s3 46us : add_preempt_count (irq_enter)
> + sshd-4261 0d.H3 46us : idle_cpu (irq_enter)
> + sshd-4261 0d.H3 47us : hrtimer_interrupt (smp_apic_timer_interrupt)
> + sshd-4261 0d.H3 47us : ktime_get (hrtimer_interrupt)
> +[...]
> + sshd-4261 0d.H3 81us : tick_program_event (hrtimer_interrupt)
> + sshd-4261 0d.H3 82us : ktime_get (tick_program_event)
> + sshd-4261 0d.H3 82us : ktime_get_ts (ktime_get)
> + sshd-4261 0d.H3 83us : getnstimeofday (ktime_get_ts)
> + sshd-4261 0d.H3 83us : set_normalized_timespec (ktime_get_ts)
> + sshd-4261 0d.H3 84us : clockevents_program_event (tick_program_event)
> + sshd-4261 0d.H3 84us : lapic_next_event (clockevents_program_event)
> + sshd-4261 0d.H3 85us : irq_exit (smp_apic_timer_interrupt)
> + sshd-4261 0d.H3 85us : sub_preempt_count (irq_exit)
> + sshd-4261 0d.s4 86us : sub_preempt_count (irq_exit)
> + sshd-4261 0d.s3 86us : add_preempt_count (__local_bh_disable)
> +[...]
> + sshd-4261 0d.s1 98us : sub_preempt_count (net_rx_action)
> + sshd-4261 0d.s. 99us : add_preempt_count (_spin_lock_irq)
> + sshd-4261 0d.s1 99us+: _spin_unlock_irq (run_timer_softirq)
> + sshd-4261 0d.s. 104us : _local_bh_enable (__do_softirq)
> + sshd-4261 0d.s. 104us : sub_preempt_count (_local_bh_enable)
> + sshd-4261 0d.s. 105us : _local_bh_enable (__do_softirq)
> + sshd-4261 0d.s1 105us : trace_preempt_on (__do_softirq)
> +
> +
> +This is a very interesting trace. It started with the preemption of
> +the ls task. We see that the task had the "need_resched" bit set
> +with the 'N' in the trace. Interrupts are disabled in the spin_lock

s/with/via/

> +and the trace started. We see that a schedule took place to run

s/started/is started/? (unclear)

> +sshd. When the interrupts were enabled, we took an interrupt.
> +On return from the interrupt handler, the softirq ran. We took another
> +interrupt while running the softirq as we see with the capital 'H'.

s/with/from/

> +
> +
> +wakeup
> +------
> +
> +In Real-Time environment it is very important to know the wakeup

s/In/In a/

> +time it takes for the highest priority task that wakes up to the

s/wakes up/is woken up/

> +time it executes. This is also known as "schedule latency".

s/time/time that/

> +I stress the point that this is about RT tasks. It is also important
> +to know the scheduling latency of non-RT tasks, but the average
> +schedule latency is better for non-RT tasks. Tools like
> +LatencyTop are more appropriate for such measurements.
> +
> +Real-Time environments are interested in the worst case latency.
> +That is the longest latency it takes for something to happen, and
> +not the average. We can have a very fast scheduler that may only
> +have a large latency once in a while, but that would not work well
> +with Real-Time tasks. The wakeup tracer was designed to record
> +the worst case wakeups of RT tasks. Non-RT tasks are not recorded
> +because the tracer only records one worst case and tracing non-RT
> +tasks that are unpredictable will overwrite the worst case latency
> +of RT tasks.
> +
> +Since this tracer only deals with RT tasks, we will run this slightly
> +differently than we did with the previous tracers. Instead of performing
> +an 'ls', we will run 'sleep 1' under 'chrt' which changes the
> +priority of the task.
> +
> + # echo wakeup > /debug/tracing/current_tracer
> + # echo 0 > /debug/tracing/tracing_max_latency
> + # echo 1 > /debug/tracing/tracing_enabled
> + # chrt -f 5 sleep 1
> + # echo 0 > /debug/tracing/tracing_enabled
> + # cat /debug/tracing/latency_trace
> +# tracer: wakeup
> +#
> +wakeup latency trace v1.1.5 on 2.6.26-rc8
> +--------------------------------------------------------------------
> + latency: 4 us, #2/2, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
> + -----------------
> + | task: sleep-4901 (uid:0 nice:0 policy:1 rt_prio:5)
> + -----------------
> +
> +# _------=> CPU#
> +# / _-----=> irqs-off
> +# | / _----=> need-resched
> +# || / _---=> hardirq/softirq
> +# ||| / _--=> preempt-depth
> +# |||| /
> +# ||||| delay
> +# cmd pid ||||| time | caller
> +# \ / ||||| \ | /
> + <idle>-0 1d.h4 0us+: try_to_wake_up (wake_up_process)
> + <idle>-0 1d..4 4us : schedule (cpu_idle)
> +
> +
> +vim:ft=help
> +
> +
> +Running this on an idle system, we see that it only took 4 microseconds
> +to perform the task switch. Note, since the trace marker in the
> +schedule is before the actual "switch", we stop the tracing when
> +the recorded task is about to schedule in. This may change if
> +we add a new marker at the end of the scheduler.
> +
> +Notice that the recorded task is 'sleep' with the PID of 4901 and it
> +has an rt_prio of 5. This priority is user-space priority and not
> +the internal kernel priority. The policy is 1 for SCHED_FIFO and 2
> +for SCHED_RR.
> +
> +Doing the same with chrt -r 5 and ftrace_enabled set.
> +
> +# tracer: wakeup
> +#
> +wakeup latency trace v1.1.5 on 2.6.26-rc8
> +--------------------------------------------------------------------
> + latency: 50 us, #60/60, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
> + -----------------
> + | task: sleep-4068 (uid:0 nice:0 policy:2 rt_prio:5)
> + -----------------
> +
> +# _------=> CPU#
> +# / _-----=> irqs-off
> +# | / _----=> need-resched
> +# || / _---=> hardirq/softirq
> +# ||| / _--=> preempt-depth
> +# |||| /
> +# ||||| delay
> +# cmd pid ||||| time | caller
> +# \ / ||||| \ | /
> +ksoftirq-7 1d.H3 0us : try_to_wake_up (wake_up_process)
> +ksoftirq-7 1d.H4 1us : sub_preempt_count (marker_probe_cb)
> +ksoftirq-7 1d.H3 2us : check_preempt_wakeup (try_to_wake_up)
> +ksoftirq-7 1d.H3 3us : update_curr (check_preempt_wakeup)
> +ksoftirq-7 1d.H3 4us : calc_delta_mine (update_curr)
> +ksoftirq-7 1d.H3 5us : __resched_task (check_preempt_wakeup)
> +ksoftirq-7 1d.H3 6us : task_wake_up_rt (try_to_wake_up)
> +ksoftirq-7 1d.H3 7us : _spin_unlock_irqrestore (try_to_wake_up)
> +[...]
> +ksoftirq-7 1d.H2 17us : irq_exit (smp_apic_timer_interrupt)
> +ksoftirq-7 1d.H2 18us : sub_preempt_count (irq_exit)
> +ksoftirq-7 1d.s3 19us : sub_preempt_count (irq_exit)
> +ksoftirq-7 1..s2 20us : rcu_process_callbacks (__do_softirq)
> +[...]
> +ksoftirq-7 1..s2 26us : __rcu_process_callbacks (rcu_process_callbacks)
> +ksoftirq-7 1d.s2 27us : _local_bh_enable (__do_softirq)
> +ksoftirq-7 1d.s2 28us : sub_preempt_count (_local_bh_enable)
> +ksoftirq-7 1.N.3 29us : sub_preempt_count (ksoftirqd)
> +ksoftirq-7 1.N.2 30us : _cond_resched (ksoftirqd)
> +ksoftirq-7 1.N.2 31us : __cond_resched (_cond_resched)
> +ksoftirq-7 1.N.2 32us : add_preempt_count (__cond_resched)
> +ksoftirq-7 1.N.2 33us : schedule (__cond_resched)
> +ksoftirq-7 1.N.2 33us : add_preempt_count (schedule)
> +ksoftirq-7 1.N.3 34us : hrtick_clear (schedule)
> +ksoftirq-7 1dN.3 35us : _spin_lock (schedule)
> +ksoftirq-7 1dN.3 36us : add_preempt_count (_spin_lock)
> +ksoftirq-7 1d..4 37us : put_prev_task_fair (schedule)
> +ksoftirq-7 1d..4 38us : update_curr (put_prev_task_fair)
> +[...]
> +ksoftirq-7 1d..5 47us : _spin_trylock (tracing_record_cmdline)
> +ksoftirq-7 1d..5 48us : add_preempt_count (_spin_trylock)
> +ksoftirq-7 1d..6 49us : _spin_unlock (tracing_record_cmdline)
> +ksoftirq-7 1d..6 49us : sub_preempt_count (_spin_unlock)
> +ksoftirq-7 1d..4 50us : schedule (__cond_resched)
> +
> +The interrupt went off while running ksoftirqd. This task runs at
> +SCHED_OTHER. Why didn't we see the 'N' set early? This may be
> +a harmless bug with x86_32 and 4K stacks. On x86_32 with 4K stacks
> +configured, the interrupt and softirq runs with their own stack.
> +Some information is held on the top of the task's stack (need_resched
> +and preempt_count are both stored there). The setting of the NEED_RESCHED
> +bit is done directly to the task's stack, but the reading of the
> +NEED_RESCHED is done by looking at the current stack, which in this case
> +is the stack for the hard interrupt. This hides the fact that NEED_RESCHED
> +has been set. We don't see the 'N' until we switch back to the task's
> +assigned stack.
> +
> +ftrace
> +------
> +
> +ftrace is not only the name of the tracing infrastructure, but it
> +is also a name of one of the tracers. The tracer is the function
> +tracer. Enabling the function tracer can be done from the
> +debug file system. Make sure the ftrace_enabled is set otherwise
> +this tracer is a nop.
> +
> + # sysctl kernel.ftrace_enabled=1
> + # echo ftrace > /debug/tracing/current_tracer
> + # echo 1 > /debug/tracing/tracing_enabled
> + # usleep 1
> + # echo 0 > /debug/tracing/tracing_enabled
> + # cat /debug/tracing/trace
> +# tracer: ftrace
> +#
> +# TASK-PID CPU# TIMESTAMP FUNCTION
> +# | | | | |
> + bash-4003 [00] 123.638713: finish_task_switch <-schedule
> + bash-4003 [00] 123.638714: _spin_unlock_irq <-finish_task_switch
> + bash-4003 [00] 123.638714: sub_preempt_count <-_spin_unlock_irq
> + bash-4003 [00] 123.638715: hrtick_set <-schedule
> + bash-4003 [00] 123.638715: _spin_lock_irqsave <-hrtick_set
> + bash-4003 [00] 123.638716: add_preempt_count <-_spin_lock_irqsave
> + bash-4003 [00] 123.638716: _spin_unlock_irqrestore <-hrtick_set
> + bash-4003 [00] 123.638717: sub_preempt_count <-_spin_unlock_irqrestore
> + bash-4003 [00] 123.638717: hrtick_clear <-hrtick_set
> + bash-4003 [00] 123.638718: sub_preempt_count <-schedule
> + bash-4003 [00] 123.638718: sub_preempt_count <-preempt_schedule
> + bash-4003 [00] 123.638719: wait_for_completion <-__stop_machine_run
> + bash-4003 [00] 123.638719: wait_for_common <-wait_for_completion
> + bash-4003 [00] 123.638720: _spin_lock_irq <-wait_for_common
> + bash-4003 [00] 123.638720: add_preempt_count <-_spin_lock_irq
> +[...]
> +
> +
> +Note: It is sometimes better to enable or disable tracing directly from
> +a program, because the buffer may be overflowed by the echo commands
> +before you get to the point you want to trace.

What does this mean?

> It is also easier to
> +stop the tracing at the point that you hit the part that you are

s/at the point that/when/? (unclear)

> +interested in. Since the ftrace buffer is a ring buffer with the
> +oldest data being overwritten, usually it is sufficient to start the
> +tracer with an echo command but have you code stop it. Something

s/you/your/?

(better would be "the controlling application")

> +like the following is usually appropriate for this.
> +
> +int trace_fd;
> +[...]
> +int main(int argc, char *argv[]) {
> + [...]
> + trace_fd = open("/debug/tracing/tracing_enabled", O_WRONLY);
> + [...]
> + if (condition_hit()) {
> + write(trace_fd, "0", 1);
> + }
> + [...]
> +}
> +
> +
> +dynamic ftrace
> +--------------
> +
> +If CONFIG_DYNAMIC_FTRACE is set, then the system will run with

s/then//

> +virtually no overhead when function tracing is disabled. The way
> +this works is the mcount function call (placed at the start of
> +every kernel function, produced by the -pg switch in gcc), starts
> +of pointing to a simple return.

So some config option enabled -pg?

> +When dynamic ftrace is initialized, it calls kstop_machine to make
> +the machine act like a uniprocessor so that it can freely modify code
> +without worrying about other processors executing that same code. At
> +initialization, the mcount calls are changed to call a "record_ip"
> +function. After this, the first time a kernel function is called,
> +it has the calling address saved in a hash table.
> +
> +Later on the ftraced kernel thread is awoken and will again call
> +kstop_machine if new functions have been recorded. The ftraced thread
> +will change all calls to mcount to "nop". Just calling mcount
> +and having mcount return has shown a 10% overhead. By converting
> +it to a nop, there is no recordable overhead to the system.

s/recordable/measurable/?

> +One special side-effect to the recording of the functions being
> +traced, is that we can now selectively choose which functions we

s/,//

> +want to trace and which ones we want the mcount calls to remain as

s/want/wish/g

> +nops.
> +
> +Two files are used, one for enabling and one for disabling the tracing
> +of recorded functions. They are:

"tracing of recorded" doesn't make a lot of sense. "recording of traced"?

> + set_ftrace_filter
> +
> +and
> +
> + set_ftrace_notrace
> +
> +A list of available functions that you can add to these files is listed
> +in:
> +
> + available_filter_functions
> +
> + # cat /debug/tracing/available_filter_functions
> +put_prev_task_idle
> +kmem_cache_create
> +pick_next_task_rt
> +get_online_cpus
> +pick_next_task_fair
> +mutex_lock
> +[...]
> +
> +If I'm only interested in sys_nanosleep and hrtimer_interrupt:
> +
> + # echo sys_nanosleep hrtimer_interrupt \
> + > /debug/tracing/set_ftrace_filter
> + # echo ftrace > /debug/tracing/current_tracer
> + # echo 1 > /debug/tracing/tracing_enabled
> + # usleep 1
> + # echo 0 > /debug/tracing/tracing_enabled
> + # cat /debug/tracing/trace
> +# tracer: ftrace
> +#
> +# TASK-PID CPU# TIMESTAMP FUNCTION
> +# | | | | |
> + usleep-4134 [00] 1317.070017: hrtimer_interrupt <-smp_apic_timer_interrupt
> + usleep-4134 [00] 1317.070111: sys_nanosleep <-syscall_call
> + <idle>-0 [00] 1317.070115: hrtimer_interrupt <-smp_apic_timer_interrupt
> +
> +To see what functions are being traced, you can cat the file:

s/what/which/

> + # cat /debug/tracing/set_ftrace_filter
> +hrtimer_interrupt
> +sys_nanosleep
> +
> +
> +Perhaps this isn't enough. The filters also allow simple wild cards.
> +Only the following are currently available
> +
> + <match>* - will match functions that begin with <match>
> + *<match> - will match functions that end with <match>
> + *<match>* - will match functions that have <match> in it
> +
> +Thats all the wild cards that are allowed.

"These are the only wildcards which are supported"?

> + <match>*<match> will not work.
> +
> + # echo hrtimer_* > /debug/tracing/set_ftrace_filter
> +
> +Produces:
> +
> +# tracer: ftrace
> +#
> +# TASK-PID CPU# TIMESTAMP FUNCTION
> +# | | | | |
> + bash-4003 [00] 1480.611794: hrtimer_init <-copy_process
> + bash-4003 [00] 1480.611941: hrtimer_start <-hrtick_set
> + bash-4003 [00] 1480.611956: hrtimer_cancel <-hrtick_clear
> + bash-4003 [00] 1480.611956: hrtimer_try_to_cancel <-hrtimer_cancel
> + <idle>-0 [00] 1480.612019: hrtimer_get_next_event <-get_next_timer_interrupt
> + <idle>-0 [00] 1480.612025: hrtimer_get_next_event <-get_next_timer_interrupt
> + <idle>-0 [00] 1480.612032: hrtimer_get_next_event <-get_next_timer_interrupt
> + <idle>-0 [00] 1480.612037: hrtimer_get_next_event <-get_next_timer_interrupt
> + <idle>-0 [00] 1480.612382: hrtimer_get_next_event <-get_next_timer_interrupt
> +
> +
> +Notice that we lost the sys_nanosleep.
> +
> + # cat /debug/tracing/set_ftrace_filter
> +hrtimer_run_queues
> +hrtimer_run_pending
> +hrtimer_init
> +hrtimer_cancel
> +hrtimer_try_to_cancel
> +hrtimer_forward
> +hrtimer_start
> +hrtimer_reprogram
> +hrtimer_force_reprogram
> +hrtimer_get_next_event
> +hrtimer_interrupt
> +hrtimer_nanosleep
> +hrtimer_wakeup
> +hrtimer_get_remaining
> +hrtimer_get_res
> +hrtimer_init_sleeper
> +
> +
> +This is because the '>' and '>>' act just like they do in bash.
> +To rewrite the filters, use '>'
> +To append to the filters, use '>>'
> +
> +To clear out a filter so that all functions will be recorded again:
> +
> + # echo > /debug/tracing/set_ftrace_filter
> + # cat /debug/tracing/set_ftrace_filter
> + #
> +
> +Again, now we want to append.
> +
> + # echo sys_nanosleep > /debug/tracing/set_ftrace_filter
> + # cat /debug/tracing/set_ftrace_filter
> +sys_nanosleep
> + # echo hrtimer_* >> /debug/tracing/set_ftrace_filter
> + # cat /debug/tracing/set_ftrace_filter
> +hrtimer_run_queues
> +hrtimer_run_pending
> +hrtimer_init
> +hrtimer_cancel
> +hrtimer_try_to_cancel
> +hrtimer_forward
> +hrtimer_start
> +hrtimer_reprogram
> +hrtimer_force_reprogram
> +hrtimer_get_next_event
> +hrtimer_interrupt
> +sys_nanosleep
> +hrtimer_nanosleep
> +hrtimer_wakeup
> +hrtimer_get_remaining
> +hrtimer_get_res
> +hrtimer_init_sleeper
> +
> +
> +The set_ftrace_notrace prevents those functions from being traced.
> +
> + # echo '*preempt*' '*lock*' > /debug/tracing/set_ftrace_notrace
> +
> +Produces:
> +
> +# tracer: ftrace
> +#
> +# TASK-PID CPU# TIMESTAMP FUNCTION
> +# | | | | |
> + bash-4043 [01] 115.281644: finish_task_switch <-schedule
> + bash-4043 [01] 115.281645: hrtick_set <-schedule
> + bash-4043 [01] 115.281645: hrtick_clear <-hrtick_set
> + bash-4043 [01] 115.281646: wait_for_completion <-__stop_machine_run
> + bash-4043 [01] 115.281647: wait_for_common <-wait_for_completion
> + bash-4043 [01] 115.281647: kthread_stop <-stop_machine_run
> + bash-4043 [01] 115.281648: init_waitqueue_head <-kthread_stop
> + bash-4043 [01] 115.281648: wake_up_process <-kthread_stop
> + bash-4043 [01] 115.281649: try_to_wake_up <-wake_up_process
> +
> +We can see that there's no more lock or preempt tracing.
> +
> +ftraced
> +-------
> +
> +As mentioned above, when dynamic ftrace is configured in, a kernel
> +thread wakes up once a second and checks to see if there are mcount
> +calls that need to be converted into nops. If there are not any, then
> +it simply goes back to sleep. But if there are some, it will call
> +kstop_machine to convert the calls to nops.
> +
> +There may be a case that you do not want this added latency.

s/that/in which/

> +Perhaps you are doing some audio recording and this activity might
> +cause skips in the playback. There is an interface to disable
> +and enable the ftraced kernel thread.

Oh. Is the term "ftraced" the name of a kernel thread? I'd been thinking
it referred to "something which is being ftraced".

> +
> + # echo 0 > /debug/tracing/ftraced_enabled
> +
> +This will disable the calling of the kstop_machine to update the

"of kstop_machine"

> +mcount calls to nops. Remember that there's a large overhead
> +to calling mcount. Without this kernel thread, that overhead will
> +exist.
> +
> +If there are recorded calls to mcount, any write to the ftraced_enabled
> +file will cause the kstop_machine to run. This means that a
> +user can manually perform the updates when they want to by simply
> +echoing a '0' into the ftraced_enabled file.
> +
> +The updates are also done at the beginning of enabling a tracer
> +that uses ftrace function recording.
> +
> +
> +trace_pipe
> +----------
> +
> +The trace_pipe outputs the same as trace,

"The trace_pipe file has the same contents as the trace file"? (unclear)

> but the effect on the
> +tracing is different. Every read from trace_pipe is consumed.
> +This means that subsequent reads will be different. The trace
> +is live.
> +
> + # echo ftrace > /debug/tracing/current_tracer
> + # cat /debug/tracing/trace_pipe > /tmp/trace.out &
> +[1] 4153
> + # echo 1 > /debug/tracing/tracing_enabled
> + # usleep 1
> + # echo 0 > /debug/tracing/tracing_enabled
> + # cat /debug/tracing/trace
> +# tracer: ftrace
> +#
> +# TASK-PID CPU# TIMESTAMP FUNCTION
> +# | | | | |
> +
> + #
> + # cat /tmp/trace.out
> + bash-4043 [00] 41.267106: finish_task_switch <-schedule
> + bash-4043 [00] 41.267106: hrtick_set <-schedule
> + bash-4043 [00] 41.267107: hrtick_clear <-hrtick_set
> + bash-4043 [00] 41.267108: wait_for_completion <-__stop_machine_run
> + bash-4043 [00] 41.267108: wait_for_common <-wait_for_completion
> + bash-4043 [00] 41.267109: kthread_stop <-stop_machine_run
> + bash-4043 [00] 41.267109: init_waitqueue_head <-kthread_stop
> + bash-4043 [00] 41.267110: wake_up_process <-kthread_stop
> + bash-4043 [00] 41.267110: try_to_wake_up <-wake_up_process
> + bash-4043 [00] 41.267111: select_task_rq_rt <-try_to_wake_up
> +
> +
> +Note, reading the trace_pipe will block until more input is added.

"the trace_pipe file"

> +By changing the tracer, trace_pipe will issue an EOF. We needed
> +to set the ftrace tracer _before_ cating the trace_pipe file.
> +
> +
> +trace entries
> +-------------
> +
> +Having too much or not enough data can be troublesome in diagnosing
> +some issue in the kernel. The file trace_entries is used to modify

s/some/an/

> +the size of the internal trace buffers. The number listed
> +is the number of entries that can be recorded per CPU. To know
> +the full size, multiply the number of possible CPUS with the
> +number of entries.

How do I know the number of possible CPUs? Within an order of magnitude?
Is it in dmesg, perhaps?

> + # cat /debug/tracing/trace_entries
> +65620
> +
> +Note, to modify this, you must have tracing completely disabled. To do that,
> +echo "none" into the current_tracer.

What happens if I forgot?

> + # echo none > /debug/tracing/current_tracer
> + # echo 100000 > /debug/tracing/trace_entries
> + # cat /debug/tracing/trace_entries
> +100045
> +
> +
> +Notice that we echoed in 100,000 but the size is 100,045. The entries
> +are held by individual pages. It allocates the number of pages it takes

s/by/in/

> +to fulfill the request. If more entries may fit on the last page
> +it will add them.

s/it will add them/then they will be added/

> + # echo 1 > /debug/tracing/trace_entries
> + # cat /debug/tracing/trace_entries
> +85
> +
> +This shows us that 85 entries can fit on a single page.

s/on/in/

> +The number of pages that will be allocated is a percentage of available

s/that/which/

s/is a/is limited to a/? (unclear)

> +memory. Allocating too much will produce an error.

s/much/many/

> + # echo 1000000000000 > /debug/tracing/trace_entries
> +-bash: echo: write error: Cannot allocate memory
> + # cat /debug/tracing/trace_entries
> +85
> +
>

2008-07-11 21:00:11

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation


Andrew,

Thank you so much for reading over this. I see you have two types of
comments in this review. One is fixing my general bad use of the English
language (which was greatly needed), and the other is questions on the
implementation of the ftrace tool itself.

This email will skip all the helpful grammar fixes and concentrate on the
questions concerning ftrace. I'll send out a patch later that fixes the
grammar that you suggested.


On Fri, 11 Jul 2008, Andrew Morton wrote:

> > +
> > +The File System
> > +---------------
> > +
> > +Ftrace uses the debugfs file system to hold the control files as well
> > +as the files to display output.
> > +
> > +To mount the debugfs system:
> > +
> > + # mkdir /debug
> > + # mount -t debugfs nodev /debug
>
> This should be a reference to the debugfs documentation (rofl?) rather than
> a reimplementation of it.

heh, is this a reference to the lack of debugfs documentation. I don't
feel qualified to creat that document. Doing a quick search in
Documentation directory seems to point out a lot of "instructions" on
debugfs. Well, it was only three lines, not much of a repeat in text.

> > + tracing_enabled : This sets or displays whether the current_tracer
> > + is activated and tracing or not. Echo 0 into this
> > + file to disable the tracer or 1 (or non-zero) to
> > + enable it.
>
> kernel should only permit 0 or 1 (IMO). This is a kernel ABI, not a C
> program.

Sure, not a problem to do that. I always like to have options (hacks) that
give different meanings for different things echo'd in. I haven't
implemented it here, but I kept that option open. Actually, I think I did
that for debugging.

But those are hacks or debugging that I could add when I need them. I'll
write up a patch to handle that.

>
> > + trace : This file holds the output of the trace in a human readable
> > + format.
>
> "described below" (I hope)

yep

>
> > + latency_trace : This file shows the same trace but the information
> > + is organized more to display possible latencies
> > + in the system.
>
> ?
>
> > + trace_pipe : The output is the same as the "trace" file but this
> > + file is meant to be streamed with live tracing.
>
> We have three slightly munged versions of the same data? wtf?
>
> Would it not be better to present all the data a single time and perform
> post-processing in userspace?
>
> It's scary, but we _can_ ship userspace code. getdelays.c has worked well.
> Baby steps.

We had this conversation before. I'll talk more about this down below.

But, trace_pipe acts different than trace. trace_pipe consumes the trace
entries, but trace_pipe doesn't work for the tracers that have a
max_latency (irqsoff, wakeup, etc) (which reminds me, I need to add that
fact to this document).

ftrace has plugins. These tracers here are the ones that are in the
ftrace branch in the linux-tip.git repo. But we are adding other plugins,
an the trace_pipe was to satisfy them, not these tracers.

>
> > + Reads from this file will block until new data
> > + is retrieved. Unlike the "trace" and "latency_trace"
> > + files, this file is a consumer. This means reading
> > + from this file causes sequential reads to display
> > + more current data.
>
> hrm.

dedededdeeeeeeeeee ;-)

>
> > Once data is read from this
> > + file, it is consumed, and will not be read
> > + again with a sequential read. The "trace" and
> > + "latency_trace" files are static, and if the
> > + tracer isn't adding more data, they will display
> > + the same information every time they are read.
>
> hrm. Side note: it is sad that we are learning fundamental design
> decisions ages after the code was put into mainline. How did this happen?
>
> In a better world we'd have seen this document before coding started! Or
> at least prior to merging.

Well, the code went through a lot of changes. It started out in the -rt
kernel years ago. But that time it wasn't written for mainline in mind.
Lots of people in the -rt community saw the benefits of the tracer and
asked us to port it to mainline outside of -rt. Everytime I went to write
this document, the code changed. It was a work in progress.


> > +
> > + trace_entries : This sets or displays the number of trace
> > + entries each CPU buffer can hold. The tracer buffers
> > + are the same size for each CPU, so care must be
> > + taken when modifying the trace_entries.
>
> I don't understand "A, so care must be taken when B". Why must care be
> taken? What might happen if I was careless? How do I take care? Type
> slowly? ;)

Actually I never finished my thought there. I should remove that "take
care" comment out completely.

> > +
> > + tracing_cpumask : This is a mask that lets the user only trace
> > + on specified CPUS. The format is a hex string
> > + representing the CPUS.
>
> Why is this feature useful? (I'd have asked this prior to merging, if I'd
> known it existed!)

I can't comment on this. I didn't write that code, I just added it to
the document because I saw it existed. This was added by Ingo and Thomas,
without much description to why. I think it allows you to limit which
CPUS to perform the trace on.

>
> > + set_ftrace_filter : When dynamic ftrace is configured in, the
>
> I guess we'll learn later what "dynamic" ftrace is.

yep ;-)

>
> > + code is dynamically modified to disable calling
> > + of the function profiler (mcount).
>
> What does "dynamically modified" mean here? text rewriting?

Yes.

>
> > + preemptirqsoff - Similar to irqsoff and preemptoff, but traces and
> > + records the largest time irqs and/or preemption is
> > + disabled.
>
> s/time/time for which/
>
> This interface has a strange mix of wordsruntogether and
> words_separated_by_underscores. Oh well - another consequence of
> post-facto changelogging.

I should make sched_switch to schedswitch and that way we have the files
having underscores and the tracers without them. Or should I add
underscores to all of them?

>
> > + wakeup - Traces and records the max latency that it takes for
> > + the highest priority task to get scheduled after
> > + it has been woken up.
> > +
> > + none - This is not a tracer. To remove all tracers from tracing
> > + simply echo "none" into current_tracer.
>
> Does the system then run at full performance levels?

I should probably document that each of the above tracers have their own
configuration option, with the exception of preemptirqsoff which is turned
on if both preemptoff and irqsoff is on. The reason I say this is that
the tracing of the irqsoff and preemption off has a slight overhead
(although implementing them with markers and the like might help).

But setting "none" with just sched_switch ftrace (wih dynamic ftracing)
and wakeup configured in brings the system back to full performance level.


> > +
> > +Here's an example of the output format of the file "trace"
> > +
> > + --------
> > +# tracer: ftrace
> > +#
> > +# TASK-PID CPU# TIMESTAMP FUNCTION
> > +# | | | | |
> > + bash-4251 [01] 10152.583854: path_put <-path_walk
> > + bash-4251 [01] 10152.583855: dput <-path_put
> > + bash-4251 [01] 10152.583855: _atomic_dec_and_lock <-dput
> > + --------
>
> pids are no longer unique system-wide, and any part of the kernel ABI which
> exports them to userspace is, basically, broken. Oh well.

What should be used instead? Of course we're not using a kernel ABI, we
are using an API (text based ;-) But more on that later.

>
> > +priority that is usually displayed by user-space tools. Zero represents
> > +the highest priority (99). Prio 100 starts the "nice" priorities with
> > +100 being equal to nice -20 and 139 being nice 19. The prio "140" is
> > +reserved for the idle task which is the lowest priority thread (pid 0).
>
> Would it not be better to convert these back into their userland
> representation or userland presentation?

Yes. I've thought about doing that, and I had already documented to a
customer what the prios were, so I was hesitant about making the change.
It would be easy to do. I think it would be better in the long run to
make all priorities outputed to match user space. Perhaps I'll do the "ps"
thing and have nice levels positive and RT priorities negative.

> > +# tracer: irqsoff
> > +#
> > +irqsoff latency trace v1.1.5 on 2.6.26-rc8
> > +--------------------------------------------------------------------
> > + latency: 97 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
> > + -----------------
> > + | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
> > + -----------------
> > + => started at: apic_timer_interrupt
> > + => ended at: do_softirq
> > +
> > +# _------=> CPU#
> > +# / _-----=> irqs-off
> > +# | / _----=> need-resched
> > +# || / _---=> hardirq/softirq
> > +# ||| / _--=> preempt-depth
> > +# |||| /
> > +# ||||| delay
> > +# cmd pid ||||| time | caller
> > +# \ / ||||| \ | /
> > + <idle>-0 0d..1 0us+: trace_hardirqs_off_thunk (apic_timer_interrupt)
> > + <idle>-0 0d.s. 97us : __do_softirq (do_softirq)
> > + <idle>-0 0d.s1 98us : trace_hardirqs_on (do_softirq)
>
> The kernel prints all that stuff out of a debugfs file?
>
> What have we done? :(

This is very helpful on embedded systems.

>
> > +
> > +vim:ft=help
>
> What's this?

A relic. When I first started rewriting the -rt patch lantency tracer to
aim it at mainline, Ingo suggested that I should keep the output format
the same, since there are lots of tools that parse this information (for
the -rt patch). One of the things that Ingo had was this "vim" command.
I'd be happy to nuke it.

> > +
> > + apic_timer_interrupt is where the interrupts were disabled.
> > + do_softirq is where they were enabled again.
> > +
> > +The next lines after the header are the trace itself. The header
> > +explains which is which.
> > +
> > + cmd: The name of the process in the trace.
> > +
> > + pid: The PID of that process.
>
> :(

What should we use?

>
> > + latency_trace file is relative to the start of the trace.
> > +
> > + delay: This is just to help catch your eye a bit better. And
> > + needs to be fixed to be only relative to the same CPU.
>
> eh?

We use the latency tracer part for finding bad latencies. One thing that
helps the human eye catch bad areas is this little notation.

>
> > + The marks are determined by the difference between this
> > + current trace and the next trace.
> > + '!' - greater than preempt_mark_thresh (default 100)
> > + '+' - greater than 1 microsecond
> > + ' ' - less than or equal to 1 microsecond.
> > +
> > + The rest is the same as the 'trace' file.
> > +
> > +

[...]

> > +
> > + raw - This will display raw numbers. This option is best for use with
> > + user applications that can translate the raw numbers better than
> > + having it done in the kernel.
>
> ooh, does this mean that we get to delete all the other interfaces?
>
> I mean, honestly, all this pretty-printing and post-processing could and
> should have been done in userspace. As much as poss. We suck.

Funny, I never have used this option. But yes, it helps for userspace
to use it. I can write code to parse this out. ftrace is used a lot
on embedded systems and having it where we don't need user land to display
the trace has been beneficial to users of this tool.

If you are suggesting that the kernel comes with its own user land app
(in scripts/ ?) to handle all the new tracers, then maybe it would be
OK.

A lot of this code has also been designed around my logdev tracer which
I've been porting since the 2.1 days. I plan on adding one of logdev's
features to ftrace too. The "dump log on crash". I could have a kdump
kernel read the logs too, but this gets even more complex because of
the way the ftrace buffers are allocated. We use the page structure to
link list all the ftrace buffers together. Parsing out the page structures
to read the ftrace buffer from kdump would be very difficult (but not
impossible). Having the ftrace dump to printk on a crash could be very
informative. At least have that until we come up with a way to get
the data after booting kdump.


>
> > + hex - Similar to raw, but the numbers will be in a hexadecimal format.
>
> Does really this need to exist? Again, it is the sort of thing which would
> have needed justification during the pre-merge review. But afaik it was
> jammed into the tree without knowledge or comment.
>
> There are lessons here.

I haven't used this feature either, and not sure why it is there.

>
> > + bin - This will print out the formats in raw binary.
>
> I don't understand this.

"raw" is ASCII hexidecimal output, where as "bin" is actual binary.

>
> > + block - TBD (needs update)
> > +
> > + stacktrace - This is one of the options that changes the trace itself.
> > + When a trace is recorded, so is the stack of functions.
> > + This allows for back traces of trace sites.
>
> ?

Ah, I need to document this. This is a cool feature. At certain locations
this tracer will trace the stack back trace. I've toyed with this feature
so I didn't know enough to document it.


> > +first followed by the next task or task waking up. The format for both
> > +of these is PID:KERNEL-PRIO:TASK-STATE. Remember that the KERNEL-PRIO
> > +is the inverse of the actual priority with zero (0) being the highest
> > +priority and the nice values starting at 100 (nice -20). Below is
> > +a quick chart to map the kernel priority to user land priorities.
> > +
> > + Kernel priority: 0 to 99 ==> user RT priority 99 to 0
> > + Kernel priority: 100 to 139 ==> user nice -20 to 19
> > + Kernel priority: 140 ==> idle task priority
> > +
> > +The task states are:
> > +
> > + R - running : wants to run, may not actually be running
> > + S - sleep : process is waiting to be woken up (handles signals)
> > + D - deep sleep : process must be woken up (ignores signals)
>
> "uninterruptible sleep", please. no need to invent new (and hence
> unfamilar) terms!

This is my own ignorance. I didn't know the best way to say it. Why do
we use 'D' for "uninterruptible sleep"? I don't see a 'D' in there? But
"deep sleep" is more obvious. OK, I'll shut up and change it to
"uniterruptible sleep".


>
> > + T - stopped : process suspended
> > + t - traced : process is being traced (with something like gdb)
> > + Z - zombie : process waiting to be cleaned up
> > + X - unknown
> > +
> > +
> > +ftrace_enabled
> > +--------------
> > +
> > +The following tracers give different output depending on whether
> > +or not the sysctl ftrace_enabled is set. To set ftrace_enabled,
> > +one can either use the sysctl function or set it via the proc
> > +file system interface.
> > +
> > + sysctl kernel.ftrace_enabled=1
> > +
> > + or
> > +
> > + echo 1 > /proc/sys/kernel/ftrace_enabled
> > +
> > +To disable ftrace_enabled simply replace the '1' with '0' in
> > +the above commands.
> > +
> > +When ftrace_enabled is set the tracers will also record the functions
> > +that are within the trace. The descriptions of the tracers
> > +will also show an example with ftrace enabled.
>
> What are "the following tracers" here?

They are "irqsoff" "preemptoff" "preemptirqsoff" "wakeup" "sched_switch"
etc. Oh, I should state that?

> > + -----------------
> > + | task: bash-4269 (uid:0 nice:0 policy:0 rt_prio:0)
> > + -----------------
> > + => started at: copy_page_range
> > + => ended at: copy_page_range
> > +
> > +# _------=> CPU#
> > +# / _-----=> irqs-off
> > +# | / _----=> need-resched
> > +# || / _---=> hardirq/softirq
> > +# ||| / _--=> preempt-depth
> > +# |||| /
> > +# ||||| delay
> > +# cmd pid ||||| time | caller
> > +# \ / ||||| \ | /
> > + bash-4269 1...1 0us+: _spin_lock (copy_page_range)
> > + bash-4269 1...1 7us : _spin_unlock (copy_page_range)
> > + bash-4269 1...2 7us : trace_preempt_on (copy_page_range)
>
> istr writing stuff which does this in 1999 ;)

Why didn't you add it to the kernel then? ;-)

>
> > +
> > +vim:ft=help
>
> ?

Bah!

>
> > +Here we see that that we had a latency of 6 microsecs (which is
> > +very good). The spin_lock in copy_page_range disabled interrupts.
>
> spin_lock disables interrutps?

Hmm, that trace is tracing preemption off :-? Either a bug in my code, or
a cut and paste error.

[...]

> > +
> > +Note: It is sometimes better to enable or disable tracing directly from
> > +a program, because the buffer may be overflowed by the echo commands
> > +before you get to the point you want to trace.
>
> What does this mean?

It means we do a hula dance around the memory buffers. ;-)

OK, that needs to be rewritten. It is basically saying that ftrace uses a
ring buffer and if the buffer size is not big enough, running the echo
commands may overflow the buffers before you get any useful information.

[...]

>
> > +virtually no overhead when function tracing is disabled. The way
> > +this works is the mcount function call (placed at the start of
> > +every kernel function, produced by the -pg switch in gcc), starts
> > +of pointing to a simple return.
>
> So some config option enabled -pg?

Yes, FTRACE does. I should add here what each of the config options do
and what is affected by them.

[..]

> > +Perhaps you are doing some audio recording and this activity might
> > +cause skips in the playback. There is an interface to disable
> > +and enable the ftraced kernel thread.
>
> Oh. Is the term "ftraced" the name of a kernel thread? I'd been thinking
> it referred to "something which is being ftraced".

hehe, duly noted.

[..]


> > +the size of the internal trace buffers. The number listed
> > +is the number of entries that can be recorded per CPU. To know
> > +the full size, multiply the number of possible CPUS with the
> > +number of entries.
>
> How do I know the number of possible CPUs? Within an order of magnitude?
> Is it in dmesg, perhaps?

Good question? I used NR_CPUS, I would like to change this to be online
CPUS but small steps first. Is NR_CPUS exported somewhere?


>
> > + # cat /debug/tracing/trace_entries
> > +65620
> > +
> > +Note, to modify this, you must have tracing completely disabled. To do that,
> > +echo "none" into the current_tracer.
>
> What happens if I forgot?

It fails with a -EINVAL. I got burnt by that too when demonstarting this
to someone. But it doesn't crash the system ;-)


Oh well, I hope it wasn't too painful for you.

-- Steve

2008-07-11 22:39:52

by Andrew Morton

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation

On Fri, 11 Jul 2008 16:59:53 -0400 (EDT) Steven Rostedt <[email protected]> wrote:

>
> > > +
> > > + tracing_cpumask : This is a mask that lets the user only trace
> > > + on specified CPUS. The format is a hex string
> > > + representing the CPUS.
> >
> > Why is this feature useful? (I'd have asked this prior to merging, if I'd
> > known it existed!)
>
> I can't comment on this. I didn't write that code, I just added it to
> the document because I saw it existed. This was added by Ingo and Thomas,
> without much description to why. I think it allows you to limit which
> CPUS to perform the trace on.

Information such as "why this code exists" seems fairly important ;)
It's surprising how often people forget to mention it (in comments, and
changelogs).

> >
> > > + preemptirqsoff - Similar to irqsoff and preemptoff, but traces and
> > > + records the largest time irqs and/or preemption is
> > > + disabled.
> >
> > s/time/time for which/
> >
> > This interface has a strange mix of wordsruntogether and
> > words_separated_by_underscores. Oh well - another consequence of
> > post-facto changelogging.
>
> I should make sched_switch to schedswitch and that way we have the files
> having underscores and the tracers without them. Or should I add
> underscores to all of them?

Adding underscores is better, but it might not be worth the churn now, dunno.

> > > +
> > > +Here's an example of the output format of the file "trace"
> > > +
> > > + --------
> > > +# tracer: ftrace
> > > +#
> > > +# TASK-PID CPU# TIMESTAMP FUNCTION
> > > +# | | | | |
> > > + bash-4251 [01] 10152.583854: path_put <-path_walk
> > > + bash-4251 [01] 10152.583855: dput <-path_put
> > > + bash-4251 [01] 10152.583855: _atomic_dec_and_lock <-dput
> > > + --------
> >
> > pids are no longer unique system-wide, and any part of the kernel ABI which
> > exports them to userspace is, basically, broken. Oh well.
>
> What should be used instead? Of course we're not using a kernel ABI, we
> are using an API (text based ;-) But more on that later.

Well that's an interesting question and it has come up before. There
are times when the kernel wants to display a process identifier at
least in a printk. Oopses are one prominent example.

Perhaps we do need a way of doing this in a post-pid-namespace-world.
Presumably it would be of the form "pidns-identifier:pid", and just
plain old "pid" if no pid namespaces are in operation, for some
back-compatibility where possible.

Eric, any thoughts?

> > > +# tracer: irqsoff
> > > +#
> > > +irqsoff latency trace v1.1.5 on 2.6.26-rc8
> > > +--------------------------------------------------------------------
> > > + latency: 97 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
> > > + -----------------
> > > + | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
> > > + -----------------
> > > + => started at: apic_timer_interrupt
> > > + => ended at: do_softirq
> > > +
> > > +# _------=> CPU#
> > > +# / _-----=> irqs-off
> > > +# | / _----=> need-resched
> > > +# || / _---=> hardirq/softirq
> > > +# ||| / _--=> preempt-depth
> > > +# |||| /
> > > +# ||||| delay
> > > +# cmd pid ||||| time | caller
> > > +# \ / ||||| \ | /
> > > + <idle>-0 0d..1 0us+: trace_hardirqs_off_thunk (apic_timer_interrupt)
> > > + <idle>-0 0d.s. 97us : __do_softirq (do_softirq)
> > > + <idle>-0 0d.s1 98us : trace_hardirqs_on (do_softirq)
> >
> > The kernel prints all that stuff out of a debugfs file?
> >
> > What have we done? :(
>
> This is very helpful on embedded systems.

Well... why? embedded platforms can run userspace programs too. But
the ornate nature of this kernel->userspace interface has gone and made
implementation of userspace parsers hard.

> If you are suggesting that the kernel comes with its own user land app
> (in scripts/ ?) to handle all the new tracers, then maybe it would be
> OK.

This also comes up again and again. Kernel programmers have no
convenient route for delivering userspace code to users, so they end up
putting userspace functionality into the kernel.

getdelays.c is a counter-example. We've maintained that as new
taskstats capabilities have come along and as it turned out, this was
quite easy and people find geydelays.c to be quite useful. Its name is
outdated though.

>
> > > +first followed by the next task or task waking up. The format for both
> > > +of these is PID:KERNEL-PRIO:TASK-STATE. Remember that the KERNEL-PRIO
> > > +is the inverse of the actual priority with zero (0) being the highest
> > > +priority and the nice values starting at 100 (nice -20). Below is
> > > +a quick chart to map the kernel priority to user land priorities.
> > > +
> > > + Kernel priority: 0 to 99 ==> user RT priority 99 to 0
> > > + Kernel priority: 100 to 139 ==> user nice -20 to 19
> > > + Kernel priority: 140 ==> idle task priority
> > > +
> > > +The task states are:
> > > +
> > > + R - running : wants to run, may not actually be running
> > > + S - sleep : process is waiting to be woken up (handles signals)
> > > + D - deep sleep : process must be woken up (ignores signals)
> >
> > "uninterruptible sleep", please. no need to invent new (and hence
> > unfamilar) terms!
>
> This is my own ignorance. I didn't know the best way to say it. Why do
> we use 'D' for "uninterruptible sleep"? I don't see a 'D' in there? But
> "deep sleep" is more obvious. OK, I'll shut up and change it to
> "uniterruptible sleep".
>

Heh. Maybe "D" does indeed refer to "deep sleep". That's all before
my time. But yes, "uninterruptible sleep" is the well-known term for
this state.

>
> >
> > > + T - stopped : process suspended
> > > + t - traced : process is being traced (with something like gdb)
> > > + Z - zombie : process waiting to be cleaned up
> > > + X - unknown
> > > +
> > > +
> > > +ftrace_enabled
> > > +--------------
> > > +
> > > +The following tracers give different output depending on whether
> > > +or not the sysctl ftrace_enabled is set. To set ftrace_enabled,
> > > +one can either use the sysctl function or set it via the proc
> > > +file system interface.
> > > +
> > > + sysctl kernel.ftrace_enabled=1
> > > +
> > > + or
> > > +
> > > + echo 1 > /proc/sys/kernel/ftrace_enabled
> > > +
> > > +To disable ftrace_enabled simply replace the '1' with '0' in
> > > +the above commands.
> > > +
> > > +When ftrace_enabled is set the tracers will also record the functions
> > > +that are within the trace. The descriptions of the tracers
> > > +will also show an example with ftrace enabled.
> >
> > What are "the following tracers" here?
>
> They are "irqsoff" "preemptoff" "preemptirqsoff" "wakeup" "sched_switch"
> etc. Oh, I should state that?

I think so.

> > > + -----------------
> > > + | task: bash-4269 (uid:0 nice:0 policy:0 rt_prio:0)
> > > + -----------------
> > > + => started at: copy_page_range
> > > + => ended at: copy_page_range
> > > +
> > > +# _------=> CPU#
> > > +# / _-----=> irqs-off
> > > +# | / _----=> need-resched
> > > +# || / _---=> hardirq/softirq
> > > +# ||| / _--=> preempt-depth
> > > +# |||| /
> > > +# ||||| delay
> > > +# cmd pid ||||| time | caller
> > > +# \ / ||||| \ | /
> > > + bash-4269 1...1 0us+: _spin_lock (copy_page_range)
> > > + bash-4269 1...1 7us : _spin_unlock (copy_page_range)
> > > + bash-4269 1...2 7us : trace_preempt_on (copy_page_range)
> >
> > istr writing stuff which does this in 1999 ;)
>
> Why didn't you add it to the kernel then? ;-)

It was too large, was of doubtful usefulness and used
opening-brace-goes-on-a-new-line coding style ;)

>
> > > +the size of the internal trace buffers. The number listed
> > > +is the number of entries that can be recorded per CPU. To know
> > > +the full size, multiply the number of possible CPUS with the
> > > +number of entries.
> >
> > How do I know the number of possible CPUs? Within an order of magnitude?
> > Is it in dmesg, perhaps?
>
> Good question? I used NR_CPUS, I would like to change this to be online
> CPUS but small steps first. Is NR_CPUS exported somewhere?
>

erk. I guess it's in the sysfs topology stuff somewhere. I use `grep
processor /proc/cpuinfo|wc -l' but this is perhaps not the best of
interfaces!

2008-07-11 23:22:17

by Eric W. Biederman

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation

Andrew Morton <[email protected]> writes:

> On Fri, 11 Jul 2008 16:59:53 -0400 (EDT) Steven Rostedt <[email protected]>

>> > > +
>> > > +Here's an example of the output format of the file "trace"
>> > > +
>> > > + --------
>> > > +# tracer: ftrace
>> > > +#
>> > > +# TASK-PID CPU# TIMESTAMP FUNCTION
>> > > +# | | | | |
>> > > + bash-4251 [01] 10152.583854: path_put <-path_walk
>> > > + bash-4251 [01] 10152.583855: dput <-path_put
>> > > + bash-4251 [01] 10152.583855: _atomic_dec_and_lock <-dput
>> > > + --------
>> >
>> > pids are no longer unique system-wide, and any part of the kernel ABI which
>> > exports them to userspace is, basically, broken. Oh well.
>>
>> What should be used instead? Of course we're not using a kernel ABI, we
>> are using an API (text based ;-) But more on that later.
>
> Well that's an interesting question and it has come up before. There
> are times when the kernel wants to display a process identifier at
> least in a printk. Oopses are one prominent example.
>
> Perhaps we do need a way of doing this in a post-pid-namespace-world.
> Presumably it would be of the form "pidns-identifier:pid", and just
> plain old "pid" if no pid namespaces are in operation, for some
> back-compatibility where possible.
>
> Eric, any thoughts?

I don't quite know what we are doing here. Is this a /proc or /sysfs file?

After a long series of discussion on semantics what we came up with
was that the pid namespaces are hierarchical and that a struct pid
will have a numerical identifier in each pid namespace. Which means
that for printing pids in the case of printks especially for oops
reports we can just go with pid number in the init_pid_ns. Which is
the classic system wide pid.

In every other case I know besides printk we are delivering the data to an
application, and that application is running in a pid namespace
therefore we really want to figure out the pid namespace and give it the
information.

For filesystem interfaces (besides proc which provides a natural split)
the classic answer is to capture namespaces at mount time. And display
the data in the filesystem relative to the namespaces we captured.

Eric



2008-07-12 10:16:40

by John Kacur

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation

On Sat, Jul 12, 2008 at 12:37 AM, Andrew Morton
<[email protected]> wrote:
>
> On Fri, 11 Jul 2008 16:59:53 -0400 (EDT) Steven Rostedt <[email protected]> wrote:
>
> >
> > > > +
> > > > + tracing_cpumask : This is a mask that lets the user only trace
> > > > + on specified CPUS. The format is a hex string
> > > > + representing the CPUS.
> > >
> > > Why is this feature useful? (I'd have asked this prior to merging, if I'd
> > > known it existed!)
> >
> > I can't comment on this. I didn't write that code, I just added it to
> > the document because I saw it existed. This was added by Ingo and Thomas,
> > without much description to why. I think it allows you to limit which
> > CPUS to perform the trace on.
>
> Information such as "why this code exists" seems fairly important ;)
> It's surprising how often people forget to mention it (in comments, and
> changelogs).
>
> > >
> > > > + preemptirqsoff - Similar to irqsoff and preemptoff, but traces and
> > > > + records the largest time irqs and/or preemption is
> > > > + disabled.
> > >
> > > s/time/time for which/
> > >
> > > This interface has a strange mix of wordsruntogether and
> > > words_separated_by_underscores. Oh well - another consequence of
> > > post-facto changelogging.
> >
> > I should make sched_switch to schedswitch and that way we have the files
> > having underscores and the tracers without them. Or should I add
> > underscores to all of them?
>
> Adding underscores is better, but it might not be worth the churn now, dunno.
>
> > > > +
> > > > +Here's an example of the output format of the file "trace"
> > > > +
> > > > + --------
> > > > +# tracer: ftrace
> > > > +#
> > > > +# TASK-PID CPU# TIMESTAMP FUNCTION
> > > > +# | | | | |
> > > > + bash-4251 [01] 10152.583854: path_put <-path_walk
> > > > + bash-4251 [01] 10152.583855: dput <-path_put
> > > > + bash-4251 [01] 10152.583855: _atomic_dec_and_lock <-dput
> > > > + --------
> > >
> > > pids are no longer unique system-wide, and any part of the kernel ABI which
> > > exports them to userspace is, basically, broken. Oh well.
> >
> > What should be used instead? Of course we're not using a kernel ABI, we
> > are using an API (text based ;-) But more on that later.
>
> Well that's an interesting question and it has come up before. There
> are times when the kernel wants to display a process identifier at
> least in a printk. Oopses are one prominent example.
>
> Perhaps we do need a way of doing this in a post-pid-namespace-world.
> Presumably it would be of the form "pidns-identifier:pid", and just
> plain old "pid" if no pid namespaces are in operation, for some
> back-compatibility where possible.
>
> Eric, any thoughts?
>
> > > > +# tracer: irqsoff
> > > > +#
> > > > +irqsoff latency trace v1.1.5 on 2.6.26-rc8
> > > > +--------------------------------------------------------------------
> > > > + latency: 97 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2)
> > > > + -----------------
> > > > + | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0)
> > > > + -----------------
> > > > + => started at: apic_timer_interrupt
> > > > + => ended at: do_softirq
> > > > +
> > > > +# _------=> CPU#
> > > > +# / _-----=> irqs-off
> > > > +# | / _----=> need-resched
> > > > +# || / _---=> hardirq/softirq
> > > > +# ||| / _--=> preempt-depth
> > > > +# |||| /
> > > > +# ||||| delay
> > > > +# cmd pid ||||| time | caller
> > > > +# \ / ||||| \ | /
> > > > + <idle>-0 0d..1 0us+: trace_hardirqs_off_thunk (apic_timer_interrupt)
> > > > + <idle>-0 0d.s. 97us : __do_softirq (do_softirq)
> > > > + <idle>-0 0d.s1 98us : trace_hardirqs_on (do_softirq)
> > >
> > > The kernel prints all that stuff out of a debugfs file?
> > >
> > > What have we done? :(
> >
> > This is very helpful on embedded systems.
>
> Well... why? embedded platforms can run userspace programs too. But
> the ornate nature of this kernel->userspace interface has gone and made
> implementation of userspace parsers hard.
>
> > If you are suggesting that the kernel comes with its own user land app
> > (in scripts/ ?) to handle all the new tracers, then maybe it would be
> > OK.
>
> This also comes up again and again. Kernel programmers have no
> convenient route for delivering userspace code to users, so they end up
> putting userspace functionality into the kernel.
>
> getdelays.c is a counter-example. We've maintained that as new
> taskstats capabilities have come along and as it turned out, this was
> quite easy and people find geydelays.c to be quite useful. Its name is
> outdated though.
>
> >
> > > > +first followed by the next task or task waking up. The format for both
> > > > +of these is PID:KERNEL-PRIO:TASK-STATE. Remember that the KERNEL-PRIO
> > > > +is the inverse of the actual priority with zero (0) being the highest
> > > > +priority and the nice values starting at 100 (nice -20). Below is
> > > > +a quick chart to map the kernel priority to user land priorities.
> > > > +
> > > > + Kernel priority: 0 to 99 ==> user RT priority 99 to 0
> > > > + Kernel priority: 100 to 139 ==> user nice -20 to 19
> > > > + Kernel priority: 140 ==> idle task priority
> > > > +
> > > > +The task states are:
> > > > +
> > > > + R - running : wants to run, may not actually be running
> > > > + S - sleep : process is waiting to be woken up (handles signals)
> > > > + D - deep sleep : process must be woken up (ignores signals)
> > >
> > > "uninterruptible sleep", please. no need to invent new (and hence
> > > unfamilar) terms!
> >
> > This is my own ignorance. I didn't know the best way to say it. Why do
> > we use 'D' for "uninterruptible sleep"? I don't see a 'D' in there? But
> > "deep sleep" is more obvious. OK, I'll shut up and change it to
> > "uniterruptible sleep".
> >
>
> Heh. Maybe "D" does indeed refer to "deep sleep". That's all before
> my time. But yes, "uninterruptible sleep" is the well-known term for
> this state.
----SNIP----
According to array.c in the kernel, 'D' stands for disk sleep

static const char *task_state_array[] = {
"R (running)", /* 0 */
"M (running-mutex)", /* 1 */
"S (sleeping)", /* 2 */
"D (disk sleep)", /* 4 */
"T (stopped)", /* 8 */
"T (tracing stop)", /* 16 */
"Z (zombie)", /* 32 */
"X (dead)" /* 64 */
};

2008-07-12 12:49:21

by Abhishek Sagar

[permalink] [raw]
Subject: Re: [PATCH] ftrace: Documentation

On Thu, Jul 10, 2008 at 10:16 PM, Steven Rostedt <[email protected]> wrote:
>
> This is the long awaited ftrace.txt. It explains in quite detail how to
> use ftrace and the various tracers.

This is quite good. Thanks!

> +Output format:
> +--------------
> +
> +Here's an example of the output format of the file "trace"
> +
> + --------
> +# tracer: ftrace
> +#
> +# TASK-PID CPU# TIMESTAMP FUNCTION
> +# | | | | |
> + bash-4251 [01] 10152.583854: path_put <-path_walk
> + bash-4251 [01] 10152.583855: dput <-path_put
> + bash-4251 [01] 10152.583855: _atomic_dec_and_lock <-dput
> + --------

A small caveat here...the function names may be substituted with a
"[unkown/kretprobe'd]" under special circumstances
(http://lkml.org/lkml/2008/5/27/275). May be that should be mentioned
here?

--
Regards,
Abhishek

2008-07-14 09:38:18

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH] ftrace: Documentation

On Thu, 2008-07-10 at 12:46 -0400, Steven Rostedt wrote:

> + tracing_cpumask : This is a mask that lets the user only trace
> + on specified CPUS. The format is a hex string
> + representing the CPUS.

There is a parser and other goodies in the cpuset infrastructure, stuff
like that is needed when NR_CPUS > 32,64


2008-07-14 18:09:18

by David Teigland

[permalink] [raw]
Subject: Re: [PATCH] ftrace: Documentation

On Thu, Jul 10, 2008 at 12:46:01PM -0400, Steven Rostedt wrote:
> +The File System
> +---------------
> +
> +Ftrace uses the debugfs file system to hold the control files as well
> +as the files to display output.
> +
> +To mount the debugfs system:
> +
> + # mkdir /debug
> + # mount -t debugfs nodev /debug


I believe /sys/kernel/debug is the proper mount point.

Dave

2008-07-14 19:59:28

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH] ftrace: Documentation



On Mon, 14 Jul 2008, David Teigland wrote:

>
> On Thu, Jul 10, 2008 at 12:46:01PM -0400, Steven Rostedt wrote:
> > +The File System
> > +---------------
> > +
> > +Ftrace uses the debugfs file system to hold the control files as well
> > +as the files to display output.
> > +
> > +To mount the debugfs system:
> > +
> > + # mkdir /debug
> > + # mount -t debugfs nodev /debug
>
>
> I believe /sys/kernel/debug is the proper mount point.

I've heard about that, but it just seems like a deep path. I can mention
that at the begining of the document then say:

For shorter path names in the following examples, we will simply mount
the debugfs at /debug.

-- Steve

2008-07-15 01:08:55

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation


On Fri, 11 Jul 2008, Eric W. Biederman wrote:

> Andrew Morton <[email protected]> writes:
> >
> > Well that's an interesting question and it has come up before. There
> > are times when the kernel wants to display a process identifier at
> > least in a printk. Oopses are one prominent example.
> >
> > Perhaps we do need a way of doing this in a post-pid-namespace-world.
> > Presumably it would be of the form "pidns-identifier:pid", and just
> > plain old "pid" if no pid namespaces are in operation, for some
> > back-compatibility where possible.
> >
> > Eric, any thoughts?
>
> I don't quite know what we are doing here. Is this a /proc or /sysfs file?

Actually it is a /debug (or /sys/mount/debug if you prefer) file.

>
> After a long series of discussion on semantics what we came up with
> was that the pid namespaces are hierarchical and that a struct pid
> will have a numerical identifier in each pid namespace. Which means
> that for printing pids in the case of printks especially for oops
> reports we can just go with pid number in the init_pid_ns. Which is
> the classic system wide pid.
>
> In every other case I know besides printk we are delivering the data to an
> application, and that application is running in a pid namespace
> therefore we really want to figure out the pid namespace and give it the
> information.
>
> For filesystem interfaces (besides proc which provides a natural split)
> the classic answer is to capture namespaces at mount time. And display
> the data in the filesystem relative to the namespaces we captured.

I'd be interested in knowing who would want namespaces in traces. I've
basically only used tracing to see "what's happening in the kernel here?".
Where I only use the pid to differentiate between the tasks I know are
running.

Hence, tracing is much like printk. Does it really matter with these
outputs. But ftrace is pluggable, pid namespaces may matter in future
plugins.

-- Steve

2008-07-15 01:32:24

by Eric W. Biederman

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation

Steven Rostedt <[email protected]> writes:

> Actually it is a /debug (or /sys/mount/debug if you prefer) file.

Got it. I haven't ever actually seen anyone use debugfs.

> I'd be interested in knowing who would want namespaces in traces. I've
> basically only used tracing to see "what's happening in the kernel here?".
> Where I only use the pid to differentiate between the tasks I know are
> running.

> Hence, tracing is much like printk. Does it really matter with these
> outputs. But ftrace is pluggable, pid namespaces may matter in future
> plugins.

So it would not be hard to capture the pid namespace in mount or
even look at current to get it (although the last is a little odd).

I'm not at all certain if it makes sense. If this is something
an ordinary user could use then we definitely want to do something.

Is tracing possible without inserting kernel modules?

Eric

2008-07-15 01:43:57

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation


On Mon, 14 Jul 2008, Eric W. Biederman wrote:

>
> Steven Rostedt <[email protected]> writes:
>
> > Actually it is a /debug (or /sys/mount/debug if you prefer) file.
>
> Got it. I haven't ever actually seen anyone use debugfs.
>
> > I'd be interested in knowing who would want namespaces in traces. I've
> > basically only used tracing to see "what's happening in the kernel here?".
> > Where I only use the pid to differentiate between the tasks I know are
> > running.
>
> > Hence, tracing is much like printk. Does it really matter with these
> > outputs. But ftrace is pluggable, pid namespaces may matter in future
> > plugins.

Bare with me, I'm new to the namespace concept of pids.

>
> So it would not be hard to capture the pid namespace in mount or
> even look at current to get it (although the last is a little odd).

>From userspace or from with the kernel (doing the trace)

>
> I'm not at all certain if it makes sense. If this is something
> an ordinary user could use then we definitely want to do something.
>
> Is tracing possible without inserting kernel modules?

The tracer is built into the kernel (no module needed).

-- Steve

2008-07-15 02:02:20

by Eric W. Biederman

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation

Steven Rostedt <[email protected]> writes:

> On Mon, 14 Jul 2008, Eric W. Biederman wrote:
>
>>
>> Steven Rostedt <[email protected]> writes:
>>
>> > Actually it is a /debug (or /sys/mount/debug if you prefer) file.
>>
>> Got it. I haven't ever actually seen anyone use debugfs.
>>
>> > I'd be interested in knowing who would want namespaces in traces. I've
>> > basically only used tracing to see "what's happening in the kernel here?".
>> > Where I only use the pid to differentiate between the tasks I know are
>> > running.
>>
>> > Hence, tracing is much like printk. Does it really matter with these
>> > outputs. But ftrace is pluggable, pid namespaces may matter in future
>> > plugins.
>
> Bare with me, I'm new to the namespace concept of pids.

Sure. Just bare with me as I am new to the concept of ftrace.

>> So it would not be hard to capture the pid namespace in mount or
>> even look at current to get it (although the last is a little odd).
>
>>From userspace or from with the kernel (doing the trace)
>
>>
>> I'm not at all certain if it makes sense. If this is something
>> an ordinary user could use then we definitely want to do something.
>>
>> Is tracing possible without inserting kernel modules?
>
> The tracer is built into the kernel (no module needed).

Ok. So this is something simpler to use then SystemTap. Yeah.

It sounds like it is reasonable or at least semi reasonable to use
this as an unprivileged user.

The easiest model to think of this in is a chroot that does pids as
well as the filesystem. In which case if you are inside one and
you use the tracer. You want pids that are meaningful in your
subset of userspace, and not the global ones.

Eric

2008-07-15 02:18:49

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation


On Mon, 14 Jul 2008, Eric W. Biederman wrote:
> >
> > Bare with me, I'm new to the namespace concept of pids.
>
> Sure. Just bare with me as I am new to the concept of ftrace.

Yep, understood. But you can get a good understanding by reading this
document ;-)

>
> >> So it would not be hard to capture the pid namespace in mount or
> >> even look at current to get it (although the last is a little odd).
> >
> >>From userspace or from with the kernel (doing the trace)
> >
> >>
> >> I'm not at all certain if it makes sense. If this is something
> >> an ordinary user could use then we definitely want to do something.
> >>
> >> Is tracing possible without inserting kernel modules?
> >
> > The tracer is built into the kernel (no module needed).
>
> Ok. So this is something simpler to use then SystemTap. Yeah.

Yes, very similar. SystemTap may even hook into ftrace, and vice versa.

>
> It sounds like it is reasonable or at least semi reasonable to use
> this as an unprivileged user.

Currently only root can do the traces. Since some of the tracing can
hurt the performance of the system.

>
> The easiest model to think of this in is a chroot that does pids as
> well as the filesystem. In which case if you are inside one and
> you use the tracer. You want pids that are meaningful in your
> subset of userspace, and not the global ones.

Some tracers do a trace at every function call. This uses the gcc -pg
option to set up the start of each function to call profiling code.
Dynamic ftrace is a on the fly code modification to maintain good
performance while tracing is disabled.

Because of this being such a high critical path, can I get the namespace
pid information directly from the task structure. Any function that is
called must also be careful to not fall back into the tracer. The trace
deals with self recursion, but functions that call back to the tracer
cause a bigger performance impact while tracing.

-- Steve

2008-07-15 02:52:28

by Eric W. Biederman

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation

Steven Rostedt <[email protected]> writes:

>> Ok. So this is something simpler to use then SystemTap. Yeah.
>
> Yes, very similar. SystemTap may even hook into ftrace, and vice versa.

Got it.

>> It sounds like it is reasonable or at least semi reasonable to use
>> this as an unprivileged user.
>
> Currently only root can do the traces. Since some of the tracing can
> hurt the performance of the system.

Reasonable.

>> The easiest model to think of this in is a chroot that does pids as
>> well as the filesystem. In which case if you are inside one and
>> you use the tracer. You want pids that are meaningful in your
>> subset of userspace, and not the global ones.
>
> Some tracers do a trace at every function call. This uses the gcc -pg
> option to set up the start of each function to call profiling code.
> Dynamic ftrace is a on the fly code modification to maintain good
> performance while tracing is disabled.
>
> Because of this being such a high critical path, can I get the namespace
> pid information directly from the task structure. Any function that is
> called must also be careful to not fall back into the tracer. The trace
> deals with self recursion, but functions that call back to the tracer
> cause a bigger performance impact while tracing.

All of the interesting functions are inline so it shouldn't be a big deal.
Mostly they exist to keep the semantics clear as we refactor the code.
task_pid_nr(tsk) yields the global pid number, and is currently implemented as just tsk->pid.
task_pid(tsk) yields the struct pid.

task_pid_nr_ns(tsk) yields the pid number from the perspective of a specific
task.

struct pid is interesting because it is immune from pid roll over conflicts.
I don't know if that is any use to you or not.

stuct pid contains an embedded array of the pid_nrs one for each namespace the
struct pid is in.

Eric

2008-07-15 03:05:34

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation


On Mon, 14 Jul 2008, Eric W. Biederman wrote:
>
> All of the interesting functions are inline so it shouldn't be a big deal.
> Mostly they exist to keep the semantics clear as we refactor the code.
> task_pid_nr(tsk) yields the global pid number, and is currently implemented as just tsk->pid.
> task_pid(tsk) yields the struct pid.
>
> task_pid_nr_ns(tsk) yields the pid number from the perspective of a specific
> task.
>
> struct pid is interesting because it is immune from pid roll over conflicts.
> I don't know if that is any use to you or not.
>
> stuct pid contains an embedded array of the pid_nrs one for each namespace the
> struct pid is in.

Is there documentation around to let me know the proper way to use the pid
namespace API? I think ftrace should be updated before 2.6.27 is
released, to use the pid namespaces. There's some other clean ups I need
to do.

Note: I'll be traveling from Weds through to the 28th. I need to write up
my tutorial for OLS on ftrace and will not be doing many patches for the
next two weeks.

-- Steve

2008-07-15 03:32:21

by Eric W. Biederman

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation

Steven Rostedt <[email protected]> writes:

> Is there documentation around to let me know the proper way to use the pid
> namespace API? I think ftrace should be updated before 2.6.27 is
> released, to use the pid namespaces. There's some other clean ups I need
> to do.

There are the bits in sched.h and pid.h I don't think there are any
detailed docs on how to convert a subsystem.

Not that there aren't good examples. It is just generally assumed
that we won't be adding many more subsystems using pids.

> Note: I'll be traveling from Weds through to the 28th. I need to write up
> my tutorial for OLS on ftrace and will not be doing many patches for the
> next two weeks.

Well if you get stuck perhaps we can talk at OLS.

Eric

2008-07-15 14:39:23

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH] ftrace: Documentation



On Sat, 12 Jul 2008, Abhishek Sagar wrote:

>
> On Thu, Jul 10, 2008 at 10:16 PM, Steven Rostedt <[email protected]> wrote:
> >
> > This is the long awaited ftrace.txt. It explains in quite detail how to
> > use ftrace and the various tracers.
>
> This is quite good. Thanks!
>
> > +Output format:
> > +--------------
> > +
> > +Here's an example of the output format of the file "trace"
> > +
> > + --------
> > +# tracer: ftrace
> > +#
> > +# TASK-PID CPU# TIMESTAMP FUNCTION
> > +# | | | | |
> > + bash-4251 [01] 10152.583854: path_put <-path_walk
> > + bash-4251 [01] 10152.583855: dput <-path_put
> > + bash-4251 [01] 10152.583855: _atomic_dec_and_lock <-dput
> > + --------
>
> A small caveat here...the function names may be substituted with a
> "[unkown/kretprobe'd]" under special circumstances
> (http://lkml.org/lkml/2008/5/27/275). May be that should be mentioned
> here?

Hmm, I thinking that this might be too much information for this document.
If you can think of a good way to expline this without "overloading" the
reader, then please, let me know.

Thanks,

-- Steve

2008-07-15 15:34:10

by Abhishek Sagar

[permalink] [raw]
Subject: Re: [PATCH] ftrace: Documentation

On Tue, Jul 15, 2008 at 8:09 PM, Steven Rostedt <[email protected]> wrote:
> Hmm, I thinking that this might be too much information for this document.
> If you can think of a good way to expline this without "overloading" the
> reader, then please, let me know.

Err...didn't think it through. On juxtaposing the additional
explanation the text seems to be wandering off too much. So it's fine
as is.

Regards,
Abhishek

2008-07-16 10:59:30

by Florian Weimer

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation

* Steven Rostedt:

> + License: The GNU Free Documentation License, Version 1.2

Is it really a good idea to put files under GPL-incompatible licenses
into the tree?

2008-07-16 11:39:33

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation


On Wed, 16 Jul 2008, Florian Weimer wrote:

>
> * Steven Rostedt:
>
> > + License: The GNU Free Documentation License, Version 1.2
>
> Is it really a good idea to put files under GPL-incompatible licenses
> into the tree?
>

The document is not code. The GPL is not appropriate for it. I had this
discussion when I wrote the rt-mutex-design.txt file, and the conclussion
was that the GFDL was an appropriate license.

-- Steve

2008-07-17 14:19:48

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation

On Wed, Jul 16, 2008 at 07:39:23AM -0400, Steven Rostedt wrote:
>
> On Wed, 16 Jul 2008, Florian Weimer wrote:
>
> >
> > * Steven Rostedt:
> >
> > > + License: The GNU Free Documentation License, Version 1.2
> >
> > Is it really a good idea to put files under GPL-incompatible licenses
> > into the tree?
> >
>
> The document is not code. The GPL is not appropriate for it. I had this
> discussion when I wrote the rt-mutex-design.txt file, and the conclussion
> was that the GFDL was an appropriate license.

The GFDL is never appropriate, and certainly not for the kernel tree.
We had some files under it in the past and we decided to relicense them
after talking to the authors.

2008-07-18 02:47:28

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation




On Thu, 17 Jul 2008, Christoph Hellwig wrote:
> >
> > The document is not code. The GPL is not appropriate for it. I had this
> > discussion when I wrote the rt-mutex-design.txt file, and the conclussion
> > was that the GFDL was an appropriate license.
>
> The GFDL is never appropriate, and certainly not for the kernel tree.
> We had some files under it in the past and we decided to relicense them
> after talking to the authors.
>

I'm fine with any "free" license. I don't need people asking me to use
this work, as long as they give me credit (keep the copyright). I don't
remember exactly how the thread went, I first put the document under the
GPL, but I someone told me that isn't appropriate for documentation. So I
used this instead. I know the documentation and the code are distributed
together, but the "binary" of Linux does not contain the Documentation
directory as source, so I would think that the GPL is not quite
appropriate for the Documentation directory.

I'll need to ask a lawyer about this, but how about a "dual" license?
The GFDL and what ever you feel is appropriate?

-- Steve

2008-07-20 11:16:26

by Christoph Hellwig

[permalink] [raw]
Subject: Re: [PATCH -v2] ftrace: Documentation

On Thu, Jul 17, 2008 at 10:47:18PM -0400, Steven Rostedt wrote:
> this work, as long as they give me credit (keep the copyright). I don't
> remember exactly how the thread went, I first put the document under the
> GPL, but I someone told me that isn't appropriate for documentation. So I
> used this instead. I know the documentation and the code are distributed
> together, but the "binary" of Linux does not contain the Documentation
> directory as source, so I would think that the GPL is not quite
> appropriate for the Documentation directory.
>
> I'll need to ask a lawyer about this, but how about a "dual" license?
> The GFDL and what ever you feel is appropriate?

The GPL is what covers the whole kernel tree and thuis also te
Documentation/ directory. I don't think we've ever denied anyone to do
any kind of dual licensing as as strange as it might be, so a GPLv2/GFDL
dual license sounds perfectly fine.