Return-Path: Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1760636AbYHFR77 (ORCPT ); Wed, 6 Aug 2008 13:59:59 -0400 Received: (majordomo@vger.kernel.org) by vger.kernel.org id S1758363AbYHFR7P (ORCPT ); Wed, 6 Aug 2008 13:59:15 -0400 Received: from casper.infradead.org ([85.118.1.10]:37505 "EHLO casper.infradead.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1757994AbYHFR7C (ORCPT ); Wed, 6 Aug 2008 13:59:02 -0400 Date: Wed, 6 Aug 2008 10:56:23 -0700 From: Greg KH To: linux-kernel@vger.kernel.org, Andrew Morton , torvalds@linux-foundation.org, stable@kernel.org Subject: Re: Linux 2.6.26.2 Message-ID: <20080806175623.GD27119@kroah.com> References: <20080806175529.GB27119@kroah.com> MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Disposition: inline In-Reply-To: <20080806175529.GB27119@kroah.com> User-Agent: Mutt/1.5.16 (2007-06-09) Sender: linux-kernel-owner@vger.kernel.org List-ID: X-Mailing-List: linux-kernel@vger.kernel.org Content-Length: 89915 Lines: 2528 diff --git a/Documentation/ftrace.txt b/Documentation/ftrace.txt deleted file mode 100644 index 13e4bf0..0000000 --- a/Documentation/ftrace.txt +++ /dev/null @@ -1,1353 +0,0 @@ - ftrace - Function Tracer - ======================== - -Copyright 2008 Red Hat Inc. -Author: Steven Rostedt - - -Introduction ------------- - -Ftrace is an internal tracer designed to help out developers and -designers of systems to find what is going on inside the kernel. -It can be used for debugging or analyzing latencies and performance -issues that take place outside of user-space. - -Although ftrace is the function tracer, it also includes an -infrastructure that allows for other types of tracing. Some of the -tracers that are currently in ftrace is a tracer to trace -context switches, the time it takes for a high priority task to -run after it was woken up, the time interrupts are disabled, and -more. - - -The File System ---------------- - -Ftrace uses the debugfs file system to hold the control files as well -as the files to display output. - -To mount the debugfs system: - - # mkdir /debug - # mount -t debugfs nodev /debug - - -That's it! (assuming that you have ftrace configured into your kernel) - -After mounting the debugfs, you can see a directory called -"tracing". This directory contains the control and output files -of ftrace. Here is a list of some of the key files: - - - Note: all time values are in microseconds. - - current_tracer : This is used to set or display the current tracer - that is configured. - - available_tracers : This holds the different types of tracers that - has been compiled into the kernel. The tracers - listed here can be configured by echoing in their - name into current_tracer. - - tracing_enabled : This sets or displays whether the current_tracer - is activated and tracing or not. Echo 0 into this - file to disable the tracer or 1 (or non-zero) to - enable it. - - trace : This file holds the output of the trace in a human readable - format. - - latency_trace : This file shows the same trace but the information - is organized more to display possible latencies - in the system. - - trace_pipe : The output is the same as the "trace" file but this - file is meant to be streamed with live tracing. - Reads from this file will block until new data - is retrieved. Unlike the "trace" and "latency_trace" - files, this file is a consumer. This means reading - from this file causes sequential reads to display - more current data. 
Once data is read from this - file, it is consumed, and will not be read - again with a sequential read. The "trace" and - "latency_trace" files are static, and if the - tracer isn't adding more data, they will display - the same information every time they are read. - - iter_ctrl : This file lets the user control the amount of data - that is displayed in one of the above output - files. - - trace_max_latency : Some of the tracers record the max latency. - For example, the time interrupts are disabled. - This time is saved in this file. The max trace - will also be stored, and displayed by either - "trace" or "latency_trace". A new max trace will - only be recorded if the latency is greater than - the value in this file. (in microseconds) - - trace_entries : This sets or displays the number of trace - entries each CPU buffer can hold. The tracer buffers - are the same size for each CPU, so care must be - taken when modifying the trace_entries. The number - of actually entries will be the number given - times the number of possible CPUS. The buffers - are saved as individual pages, and the actual entries - will always be rounded up to entries per page. - - This can only be updated when the current_tracer - is set to "none". - - NOTE: It is planned on changing the allocated buffers - from being the number of possible CPUS to - the number of online CPUS. - - tracing_cpumask : This is a mask that lets the user only trace - on specified CPUS. The format is a hex string - representing the CPUS. - - set_ftrace_filter : When dynamic ftrace is configured in, the - code is dynamically modified to disable calling - of the function profiler (mcount). This lets - tracing be configured in with practically no overhead - in performance. This also has a side effect of - enabling or disabling specific functions to be - traced. Echoing in names of functions into this - file will limit the trace to only those files. - - set_ftrace_notrace: This has the opposite effect that - set_ftrace_filter has. Any function that is added - here will not be traced. If a function exists - in both set_ftrace_filter and set_ftrace_notrace - the function will _not_ bet traced. - - available_filter_functions : When a function is encountered the first - time by the dynamic tracer, it is recorded and - later the call is converted into a nop. This file - lists the functions that have been recorded - by the dynamic tracer and these functions can - be used to set the ftrace filter by the above - "set_ftrace_filter" file. - - -The Tracers ------------ - -Here are the list of current tracers that can be configured. - - ftrace - function tracer that uses mcount to trace all functions. - It is possible to filter out which functions that are - traced when dynamic ftrace is configured in. - - sched_switch - traces the context switches between tasks. - - irqsoff - traces the areas that disable interrupts and saves off - the trace with the longest max latency. - See tracing_max_latency. When a new max is recorded, - it replaces the old trace. It is best to view this - trace with the latency_trace file. - - preemptoff - Similar to irqsoff but traces and records the time - preemption is disabled. - - preemptirqsoff - Similar to irqsoff and preemptoff, but traces and - records the largest time irqs and/or preemption is - disabled. - - wakeup - Traces and records the max latency that it takes for - the highest priority task to get scheduled after - it has been woken up. - - none - This is not a tracer. 
To remove all tracers from tracing - simply echo "none" into current_tracer. - - -Examples of using the tracer ----------------------------- - -Here are typical examples of using the tracers with only controlling -them with the debugfs interface (without using any user-land utilities). - -Output format: --------------- - -Here's an example of the output format of the file "trace" - - -------- -# tracer: ftrace -# -# TASK-PID CPU# TIMESTAMP FUNCTION -# | | | | | - bash-4251 [01] 10152.583854: path_put <-path_walk - bash-4251 [01] 10152.583855: dput <-path_put - bash-4251 [01] 10152.583855: _atomic_dec_and_lock <-dput - -------- - -A header is printed with the trace that is represented. In this case -the tracer is "ftrace". Then a header showing the format. Task name -"bash", the task PID "4251", the CPU that it was running on -"01", the timestamp in . format, the function name that was -traced "path_put" and the parent function that called this function -"path_walk". - -The sched_switch tracer also includes tracing of task wake ups and -context switches. - - ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 2916:115:S - ksoftirqd/1-7 [01] 1453.070013: 7:115:R + 10:115:S - ksoftirqd/1-7 [01] 1453.070013: 7:115:R ==> 10:115:R - events/1-10 [01] 1453.070013: 10:115:S ==> 2916:115:R - kondemand/1-2916 [01] 1453.070013: 2916:115:S ==> 7:115:R - ksoftirqd/1-7 [01] 1453.070013: 7:115:S ==> 0:140:R - -Wake ups are represented by a "+" and the context switches show -"==>". The format is: - - Context switches: - - Previous task Next Task - - :: ==> :: - - Wake ups: - - Current task Task waking up - - :: + :: - -The prio is the internal kernel priority, which is inverse to the -priority that is usually displayed by user-space tools. Zero represents -the highest priority (99). Prio 100 starts the "nice" priorities with -100 being equal to nice -20 and 139 being nice 19. The prio "140" is -reserved for the idle task which is the lowest priority thread (pid 0). - - -Latency trace format --------------------- - -For traces that display latency times, the latency_trace file gives -a bit more information to see why a latency happened. Here's a typical -trace. - -# tracer: irqsoff -# -irqsoff latency trace v1.1.5 on 2.6.26-rc8 --------------------------------------------------------------------- - latency: 97 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) - ----------------- - | task: swapper-0 (uid:0 nice:0 policy:0 rt_prio:0) - ----------------- - => started at: apic_timer_interrupt - => ended at: do_softirq - -# _------=> CPU# -# / _-----=> irqs-off -# | / _----=> need-resched -# || / _---=> hardirq/softirq -# ||| / _--=> preempt-depth -# |||| / -# ||||| delay -# cmd pid ||||| time | caller -# \ / ||||| \ | / - -0 0d..1 0us+: trace_hardirqs_off_thunk (apic_timer_interrupt) - -0 0d.s. 97us : __do_softirq (do_softirq) - -0 0d.s1 98us : trace_hardirqs_on (do_softirq) - - -vim:ft=help - - -This shows that the current tracer is "irqsoff" tracing the time -interrupts are disabled. It gives the trace version and the kernel -this was executed on (2.6.26-rc8). Then it displays the max latency -in microsecs (97 us). The number of trace entries displayed -by the total number recorded (both are three: #3/3). The type of -preemption that was used (PREEMPT). VP, KP, SP, and HP are always zero -and reserved for later use. #P is the number of online CPUS (#P:2). - -The task is the process that was running when the latency happened. -(swapper pid: 0). 
- -The start and stop that caused the latencies: - - apic_timer_interrupt is where the interrupts were disabled. - do_softirq is where they were enabled again. - -The next lines after the header are the trace itself. The header -explains which is which. - - cmd: The name of the process in the trace. - - pid: The PID of that process. - - CPU#: The CPU that the process was running on. - - irqs-off: 'd' interrupts are disabled. '.' otherwise. - - need-resched: 'N' task need_resched is set, '.' otherwise. - - hardirq/softirq: - 'H' - hard irq happened inside a softirq. - 'h' - hard irq is running - 's' - soft irq is running - '.' - normal context. - - preempt-depth: The level of preempt_disabled - -The above is mostly meaningful for kernel developers. - - time: This differs from the trace output where as the trace output - contained a absolute timestamp. This timestamp is relative - to the start of the first entry in the the trace. - - delay: This is just to help catch your eye a bit better. And - needs to be fixed to be only relative to the same CPU. - The marks is determined by the difference between this - current trace and the next trace. - '!' - greater than preempt_mark_thresh (default 100) - '+' - greater than 1 microsecond - ' ' - less than or equal to 1 microsecond. - - The rest is the same as the 'trace' file. - - -iter_ctrl ---------- - -The iter_ctrl file is used to control what gets printed in the trace -output. To see what is available, simply cat the file: - - cat /debug/tracing/iter_ctrl - print-parent nosym-offset nosym-addr noverbose noraw nohex nobin \ - noblock nostacktrace nosched-tree - -To disable one of the options, echo in the option appended with "no". - - echo noprint-parent > /debug/tracing/iter_ctrl - -To enable an option, leave off the "no". - - echo sym-offest > /debug/tracing/iter_ctrl - -Here are the available options: - - print-parent - On function traces, display the calling function - as well as the function being traced. - - print-parent: - bash-4000 [01] 1477.606694: simple_strtoul <-strict_strtoul - - noprint-parent: - bash-4000 [01] 1477.606694: simple_strtoul - - - sym-offset - Display not only the function name, but also the offset - in the function. For example, instead of seeing just - "ktime_get" you will see "ktime_get+0xb/0x20" - - sym-offset: - bash-4000 [01] 1477.606694: simple_strtoul+0x6/0xa0 - - sym-addr - this will also display the function address as well as - the function name. - - sym-addr: - bash-4000 [01] 1477.606694: simple_strtoul - - verbose - This deals with the latency_trace file. - - bash 4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \ - (+0.000ms): simple_strtoul (strict_strtoul) - - raw - This will display raw numbers. This option is best for use with - user applications that can translate the raw numbers better than - having it done in the kernel. - - hex - similar to raw, but the numbers will be in a hexadecimal format. - - bin - This will print out the formats in raw binary. - - block - TBD (needs update) - - stacktrace - This is one of the options that changes the trace itself. - When a trace is recorded, so is the stack of functions. - This allows for back traces of trace sites. - - sched-tree - TBD (any users??) - - -sched_switch ------------- - -This tracer simply records schedule switches. Here's an example -on how to implement it. 
- - # echo sched_switch > /debug/tracing/current_tracer - # echo 1 > /debug/tracing/tracing_enabled - # sleep 1 - # echo 0 > /debug/tracing/tracing_enabled - # cat /debug/tracing/trace - -# tracer: sched_switch -# -# TASK-PID CPU# TIMESTAMP FUNCTION -# | | | | | - bash-3997 [01] 240.132281: 3997:120:R + 4055:120:R - bash-3997 [01] 240.132284: 3997:120:R ==> 4055:120:R - sleep-4055 [01] 240.132371: 4055:120:S ==> 3997:120:R - bash-3997 [01] 240.132454: 3997:120:R + 4055:120:S - bash-3997 [01] 240.132457: 3997:120:R ==> 4055:120:R - sleep-4055 [01] 240.132460: 4055:120:D ==> 3997:120:R - bash-3997 [01] 240.132463: 3997:120:R + 4055:120:D - bash-3997 [01] 240.132465: 3997:120:R ==> 4055:120:R - -0 [00] 240.132589: 0:140:R + 4:115:S - -0 [00] 240.132591: 0:140:R ==> 4:115:R - ksoftirqd/0-4 [00] 240.132595: 4:115:S ==> 0:140:R - -0 [00] 240.132598: 0:140:R + 4:115:S - -0 [00] 240.132599: 0:140:R ==> 4:115:R - ksoftirqd/0-4 [00] 240.132603: 4:115:S ==> 0:140:R - sleep-4055 [01] 240.133058: 4055:120:S ==> 3997:120:R - [...] - - -As we have discussed previously about this format, the header shows -the name of the trace and points to the options. The "FUNCTION" -is a misnomer since here it represents the wake ups and context -switches. - -The sched_switch only lists the wake ups (represented with '+') -and context switches ('==>') with the previous task or current -first followed by the next task or task waking up. The format for both -of these is PID:KERNEL-PRIO:TASK-STATE. Remember that the KERNEL-PRIO -is the inverse of the actual priority with zero (0) being the highest -priority and the nice values starting at 100 (nice -20). Below is -a quick chart to map the kernel priority to user land priorities. - - Kernel priority: 0 to 99 ==> user RT priority 99 to 0 - Kernel priority: 100 to 139 ==> user nice -20 to 19 - Kernel priority: 140 ==> idle task priority - -The task states are: - - R - running : wants to run, may not actually be running - S - sleep : process is waiting to be woken up (handles signals) - D - deep sleep : process must be woken up (ignores signals) - T - stopped : process suspended - t - traced : process is being traced (with something like gdb) - Z - zombie : process waiting to be cleaned up - X - unknown - - -ftrace_enabled --------------- - -The following tracers give different output depending on whether -or not the sysctl ftrace_enabled is set. To set ftrace_enabled, -one can either use the sysctl function or set it via the proc -file system interface. - - sysctl kernel.ftrace_enabled=1 - - or - - echo 1 > /proc/sys/kernel/ftrace_enabled - -To disable ftrace_enabled simply replace the '1' with '0' in -the above commands. - -When ftrace_enabled is set the tracers will also record the functions -that are within the trace. The descriptions of the tracers -will also show an example with ftrace enabled. - - -irqsoff -------- - -When interrupts are disabled, the CPU can not react to any other -external event (besides NMIs and SMIs). This prevents the timer -interrupt from triggering or the mouse interrupt from letting the -kernel know of a new mouse event. The result is a latency with the -reaction time. - -The irqsoff tracer tracks the time interrupts are disabled and when -they are re-enabled. When a new maximum latency is hit, it saves off -the trace so that it may be retrieved at a later time. Every time a -new maximum in reached, the old saved trace is discarded and the new -trace is saved. - -To reset the maximum, echo 0 into tracing_max_latency. 
Here's an -example: - - # echo irqsoff > /debug/tracing/current_tracer - # echo 0 > /debug/tracing/tracing_max_latency - # echo 1 > /debug/tracing/tracing_enabled - # ls -ltr - [...] - # echo 0 > /debug/tracing/tracing_enabled - # cat /debug/tracing/latency_trace -# tracer: irqsoff -# -irqsoff latency trace v1.1.5 on 2.6.26-rc8 --------------------------------------------------------------------- - latency: 6 us, #3/3, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) - ----------------- - | task: bash-4269 (uid:0 nice:0 policy:0 rt_prio:0) - ----------------- - => started at: copy_page_range - => ended at: copy_page_range - -# _------=> CPU# -# / _-----=> irqs-off -# | / _----=> need-resched -# || / _---=> hardirq/softirq -# ||| / _--=> preempt-depth -# |||| / -# ||||| delay -# cmd pid ||||| time | caller -# \ / ||||| \ | / - bash-4269 1...1 0us+: _spin_lock (copy_page_range) - bash-4269 1...1 7us : _spin_unlock (copy_page_range) - bash-4269 1...2 7us : trace_preempt_on (copy_page_range) - - -vim:ft=help - -Here we see that that we had a latency of 6 microsecs (which is -very good). The spin_lock in copy_page_range disabled interrupts. -The difference between the 6 and the displayed timestamp 7us is -because the clock must have incremented between the time of recording -the max latency and recording the function that had that latency. - -Note the above had ftrace_enabled not set. If we set the ftrace_enabled -we get a much larger output: - -# tracer: irqsoff -# -irqsoff latency trace v1.1.5 on 2.6.26-rc8 --------------------------------------------------------------------- - latency: 50 us, #101/101, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) - ----------------- - | task: ls-4339 (uid:0 nice:0 policy:0 rt_prio:0) - ----------------- - => started at: __alloc_pages_internal - => ended at: __alloc_pages_internal - -# _------=> CPU# -# / _-----=> irqs-off -# | / _----=> need-resched -# || / _---=> hardirq/softirq -# ||| / _--=> preempt-depth -# |||| / -# ||||| delay -# cmd pid ||||| time | caller -# \ / ||||| \ | / - ls-4339 0...1 0us+: get_page_from_freelist (__alloc_pages_internal) - ls-4339 0d..1 3us : rmqueue_bulk (get_page_from_freelist) - ls-4339 0d..1 3us : _spin_lock (rmqueue_bulk) - ls-4339 0d..1 4us : add_preempt_count (_spin_lock) - ls-4339 0d..2 4us : __rmqueue (rmqueue_bulk) - ls-4339 0d..2 5us : __rmqueue_smallest (__rmqueue) - ls-4339 0d..2 5us : __mod_zone_page_state (__rmqueue_smallest) - ls-4339 0d..2 6us : __rmqueue (rmqueue_bulk) - ls-4339 0d..2 6us : __rmqueue_smallest (__rmqueue) - ls-4339 0d..2 7us : __mod_zone_page_state (__rmqueue_smallest) - ls-4339 0d..2 7us : __rmqueue (rmqueue_bulk) - ls-4339 0d..2 8us : __rmqueue_smallest (__rmqueue) -[...] - ls-4339 0d..2 46us : __rmqueue_smallest (__rmqueue) - ls-4339 0d..2 47us : __mod_zone_page_state (__rmqueue_smallest) - ls-4339 0d..2 47us : __rmqueue (rmqueue_bulk) - ls-4339 0d..2 48us : __rmqueue_smallest (__rmqueue) - ls-4339 0d..2 48us : __mod_zone_page_state (__rmqueue_smallest) - ls-4339 0d..2 49us : _spin_unlock (rmqueue_bulk) - ls-4339 0d..2 49us : sub_preempt_count (_spin_unlock) - ls-4339 0d..1 50us : get_page_from_freelist (__alloc_pages_internal) - ls-4339 0d..2 51us : trace_hardirqs_on (__alloc_pages_internal) - - -vim:ft=help - - -Here we traced a 50 microsecond latency. But we also see all the -functions that were called during that time. Note that enabling -function tracing we endure an added overhead. This overhead may -extend the latency times. 
But never the less, this trace has provided -some very helpful debugging. - - -preemptoff ----------- - -When preemption is disabled we may be able to receive interrupts but -the task can not be preempted and a higher priority task must wait -for preemption to be enabled again before it can preempt a lower -priority task. - -The preemptoff tracer traces the places that disables preemption. -Like the irqsoff, it records the maximum latency that preemption -was disabled. The control of preemptoff is much like the irqsoff. - - # echo preemptoff > /debug/tracing/current_tracer - # echo 0 > /debug/tracing/tracing_max_latency - # echo 1 > /debug/tracing/tracing_enabled - # ls -ltr - [...] - # echo 0 > /debug/tracing/tracing_enabled - # cat /debug/tracing/latency_trace -# tracer: preemptoff -# -preemptoff latency trace v1.1.5 on 2.6.26-rc8 --------------------------------------------------------------------- - latency: 29 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) - ----------------- - | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0) - ----------------- - => started at: do_IRQ - => ended at: __do_softirq - -# _------=> CPU# -# / _-----=> irqs-off -# | / _----=> need-resched -# || / _---=> hardirq/softirq -# ||| / _--=> preempt-depth -# |||| / -# ||||| delay -# cmd pid ||||| time | caller -# \ / ||||| \ | / - sshd-4261 0d.h. 0us+: irq_enter (do_IRQ) - sshd-4261 0d.s. 29us : _local_bh_enable (__do_softirq) - sshd-4261 0d.s1 30us : trace_preempt_on (__do_softirq) - - -vim:ft=help - -This has some more changes. Preemption was disabled when an interrupt -came in (notice the 'h'), and was enabled while doing a softirq. -(notice the 's'). But we also see that interrupts have been disabled -when entering the preempt off section and leaving it (the 'd'). -We do not know if interrupts were enabled in the mean time. - -# tracer: preemptoff -# -preemptoff latency trace v1.1.5 on 2.6.26-rc8 --------------------------------------------------------------------- - latency: 63 us, #87/87, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) - ----------------- - | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0) - ----------------- - => started at: remove_wait_queue - => ended at: __do_softirq - -# _------=> CPU# -# / _-----=> irqs-off -# | / _----=> need-resched -# || / _---=> hardirq/softirq -# ||| / _--=> preempt-depth -# |||| / -# ||||| delay -# cmd pid ||||| time | caller -# \ / ||||| \ | / - sshd-4261 0d..1 0us : _spin_lock_irqsave (remove_wait_queue) - sshd-4261 0d..1 1us : _spin_unlock_irqrestore (remove_wait_queue) - sshd-4261 0d..1 2us : do_IRQ (common_interrupt) - sshd-4261 0d..1 2us : irq_enter (do_IRQ) - sshd-4261 0d..1 2us : idle_cpu (irq_enter) - sshd-4261 0d..1 3us : add_preempt_count (irq_enter) - sshd-4261 0d.h1 3us : idle_cpu (irq_enter) - sshd-4261 0d.h. 4us : handle_fasteoi_irq (do_IRQ) -[...] - sshd-4261 0d.h. 12us : add_preempt_count (_spin_lock) - sshd-4261 0d.h1 12us : ack_ioapic_quirk_irq (handle_fasteoi_irq) - sshd-4261 0d.h1 13us : move_native_irq (ack_ioapic_quirk_irq) - sshd-4261 0d.h1 13us : _spin_unlock (handle_fasteoi_irq) - sshd-4261 0d.h1 14us : sub_preempt_count (_spin_unlock) - sshd-4261 0d.h1 14us : irq_exit (do_IRQ) - sshd-4261 0d.h1 15us : sub_preempt_count (irq_exit) - sshd-4261 0d..2 15us : do_softirq (irq_exit) - sshd-4261 0d... 15us : __do_softirq (do_softirq) - sshd-4261 0d... 16us : __local_bh_disable (__do_softirq) - sshd-4261 0d... 
16us+: add_preempt_count (__local_bh_disable) - sshd-4261 0d.s4 20us : add_preempt_count (__local_bh_disable) - sshd-4261 0d.s4 21us : sub_preempt_count (local_bh_enable) - sshd-4261 0d.s5 21us : sub_preempt_count (local_bh_enable) -[...] - sshd-4261 0d.s6 41us : add_preempt_count (__local_bh_disable) - sshd-4261 0d.s6 42us : sub_preempt_count (local_bh_enable) - sshd-4261 0d.s7 42us : sub_preempt_count (local_bh_enable) - sshd-4261 0d.s5 43us : add_preempt_count (__local_bh_disable) - sshd-4261 0d.s5 43us : sub_preempt_count (local_bh_enable_ip) - sshd-4261 0d.s6 44us : sub_preempt_count (local_bh_enable_ip) - sshd-4261 0d.s5 44us : add_preempt_count (__local_bh_disable) - sshd-4261 0d.s5 45us : sub_preempt_count (local_bh_enable) -[...] - sshd-4261 0d.s. 63us : _local_bh_enable (__do_softirq) - sshd-4261 0d.s1 64us : trace_preempt_on (__do_softirq) - - -The above is an example of the preemptoff trace with ftrace_enabled -set. Here we see that interrupts were disabled the entire time. -The irq_enter code lets us know that we entered an interrupt 'h'. -Before that, the functions being traced still show that it is not -in an interrupt, but we can see by the functions themselves that -this is not the case. - -Notice that the __do_softirq when called doesn't have a preempt_count. -It may seem that we missed a preempt enabled. What really happened -is that the preempt count is held on the threads stack and we -switched to the softirq stack (4K stacks in effect). The code -does not copy the preempt count, but because interrupts are disabled -we don't need to worry about it. Having a tracer like this is good -to let people know what really happens inside the kernel. - - -preemptirqsoff --------------- - -Knowing the locations that have interrupts disabled or preemption -disabled for the longest times is helpful. But sometimes we would -like to know when either preemption and/or interrupts are disabled. - -The following code: - - local_irq_disable(); - call_function_with_irqs_off(); - preempt_disable(); - call_function_with_irqs_and_preemption_off(); - local_irq_enable(); - call_function_with_preemption_off(); - preempt_enable(); - -The irqsoff tracer will record the total length of -call_function_with_irqs_off() and -call_function_with_irqs_and_preemption_off(). - -The preemptoff tracer will record the total length of -call_function_with_irqs_and_preemption_off() and -call_function_with_preemption_off(). - -But neither will trace the time that interrupts and/or preemption -is disabled. This total time is the time that we can not schedule. -To record this time, use the preemptirqsoff tracer. - -Again, using this trace is much like the irqsoff and preemptoff tracers. - - # echo preemptoff > /debug/tracing/current_tracer - # echo 0 > /debug/tracing/tracing_max_latency - # echo 1 > /debug/tracing/tracing_enabled - # ls -ltr - [...] 
- # echo 0 > /debug/tracing/tracing_enabled - # cat /debug/tracing/latency_trace -# tracer: preemptirqsoff -# -preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8 --------------------------------------------------------------------- - latency: 293 us, #3/3, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) - ----------------- - | task: ls-4860 (uid:0 nice:0 policy:0 rt_prio:0) - ----------------- - => started at: apic_timer_interrupt - => ended at: __do_softirq - -# _------=> CPU# -# / _-----=> irqs-off -# | / _----=> need-resched -# || / _---=> hardirq/softirq -# ||| / _--=> preempt-depth -# |||| / -# ||||| delay -# cmd pid ||||| time | caller -# \ / ||||| \ | / - ls-4860 0d... 0us!: trace_hardirqs_off_thunk (apic_timer_interrupt) - ls-4860 0d.s. 294us : _local_bh_enable (__do_softirq) - ls-4860 0d.s1 294us : trace_preempt_on (__do_softirq) - - -vim:ft=help - - -The trace_hardirqs_off_thunk is called from assembly on x86 when -interrupts are disabled in the assembly code. Without the function -tracing, we don't know if interrupts were enabled within the preemption -points. We do see that it started with preemption enabled. - -Here is a trace with ftrace_enabled set: - - -# tracer: preemptirqsoff -# -preemptirqsoff latency trace v1.1.5 on 2.6.26-rc8 --------------------------------------------------------------------- - latency: 105 us, #183/183, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) - ----------------- - | task: sshd-4261 (uid:0 nice:0 policy:0 rt_prio:0) - ----------------- - => started at: write_chan - => ended at: __do_softirq - -# _------=> CPU# -# / _-----=> irqs-off -# | / _----=> need-resched -# || / _---=> hardirq/softirq -# ||| / _--=> preempt-depth -# |||| / -# ||||| delay -# cmd pid ||||| time | caller -# \ / ||||| \ | / - ls-4473 0.N.. 0us : preempt_schedule (write_chan) - ls-4473 0dN.1 1us : _spin_lock (schedule) - ls-4473 0dN.1 2us : add_preempt_count (_spin_lock) - ls-4473 0d..2 2us : put_prev_task_fair (schedule) -[...] - ls-4473 0d..2 13us : set_normalized_timespec (ktime_get_ts) - ls-4473 0d..2 13us : __switch_to (schedule) - sshd-4261 0d..2 14us : finish_task_switch (schedule) - sshd-4261 0d..2 14us : _spin_unlock_irq (finish_task_switch) - sshd-4261 0d..1 15us : add_preempt_count (_spin_lock_irqsave) - sshd-4261 0d..2 16us : _spin_unlock_irqrestore (hrtick_set) - sshd-4261 0d..2 16us : do_IRQ (common_interrupt) - sshd-4261 0d..2 17us : irq_enter (do_IRQ) - sshd-4261 0d..2 17us : idle_cpu (irq_enter) - sshd-4261 0d..2 18us : add_preempt_count (irq_enter) - sshd-4261 0d.h2 18us : idle_cpu (irq_enter) - sshd-4261 0d.h. 18us : handle_fasteoi_irq (do_IRQ) - sshd-4261 0d.h. 19us : _spin_lock (handle_fasteoi_irq) - sshd-4261 0d.h. 19us : add_preempt_count (_spin_lock) - sshd-4261 0d.h1 20us : _spin_unlock (handle_fasteoi_irq) - sshd-4261 0d.h1 20us : sub_preempt_count (_spin_unlock) -[...] - sshd-4261 0d.h1 28us : _spin_unlock (handle_fasteoi_irq) - sshd-4261 0d.h1 29us : sub_preempt_count (_spin_unlock) - sshd-4261 0d.h2 29us : irq_exit (do_IRQ) - sshd-4261 0d.h2 29us : sub_preempt_count (irq_exit) - sshd-4261 0d..3 30us : do_softirq (irq_exit) - sshd-4261 0d... 30us : __do_softirq (do_softirq) - sshd-4261 0d... 31us : __local_bh_disable (__do_softirq) - sshd-4261 0d... 31us+: add_preempt_count (__local_bh_disable) - sshd-4261 0d.s4 34us : add_preempt_count (__local_bh_disable) -[...] 
- sshd-4261 0d.s3 43us : sub_preempt_count (local_bh_enable_ip) - sshd-4261 0d.s4 44us : sub_preempt_count (local_bh_enable_ip) - sshd-4261 0d.s3 44us : smp_apic_timer_interrupt (apic_timer_interrupt) - sshd-4261 0d.s3 45us : irq_enter (smp_apic_timer_interrupt) - sshd-4261 0d.s3 45us : idle_cpu (irq_enter) - sshd-4261 0d.s3 46us : add_preempt_count (irq_enter) - sshd-4261 0d.H3 46us : idle_cpu (irq_enter) - sshd-4261 0d.H3 47us : hrtimer_interrupt (smp_apic_timer_interrupt) - sshd-4261 0d.H3 47us : ktime_get (hrtimer_interrupt) -[...] - sshd-4261 0d.H3 81us : tick_program_event (hrtimer_interrupt) - sshd-4261 0d.H3 82us : ktime_get (tick_program_event) - sshd-4261 0d.H3 82us : ktime_get_ts (ktime_get) - sshd-4261 0d.H3 83us : getnstimeofday (ktime_get_ts) - sshd-4261 0d.H3 83us : set_normalized_timespec (ktime_get_ts) - sshd-4261 0d.H3 84us : clockevents_program_event (tick_program_event) - sshd-4261 0d.H3 84us : lapic_next_event (clockevents_program_event) - sshd-4261 0d.H3 85us : irq_exit (smp_apic_timer_interrupt) - sshd-4261 0d.H3 85us : sub_preempt_count (irq_exit) - sshd-4261 0d.s4 86us : sub_preempt_count (irq_exit) - sshd-4261 0d.s3 86us : add_preempt_count (__local_bh_disable) -[...] - sshd-4261 0d.s1 98us : sub_preempt_count (net_rx_action) - sshd-4261 0d.s. 99us : add_preempt_count (_spin_lock_irq) - sshd-4261 0d.s1 99us+: _spin_unlock_irq (run_timer_softirq) - sshd-4261 0d.s. 104us : _local_bh_enable (__do_softirq) - sshd-4261 0d.s. 104us : sub_preempt_count (_local_bh_enable) - sshd-4261 0d.s. 105us : _local_bh_enable (__do_softirq) - sshd-4261 0d.s1 105us : trace_preempt_on (__do_softirq) - - -This is a very interesting trace. It started with the preemption of -the ls task. We see that the task had the "need_resched" bit set -with the 'N' in the trace. Interrupts are disabled in the spin_lock -and the trace started. We see that a schedule took place to run -sshd. When the interrupts were enabled we took an interrupt. -On return of the interrupt the softirq ran. We took another interrupt -while running the softirq as we see with the capital 'H'. - - -wakeup ------- - -In Real-Time environment it is very important to know the wakeup -time it takes for the highest priority task that wakes up to the -time it executes. This is also known as "schedule latency". -I stress the point that this is about RT tasks. It is also important -to know the scheduling latency of non-RT tasks, but the average -schedule latency is better for non-RT tasks. Tools like -LatencyTop is more appropriate for such measurements. - -Real-Time environments is interested in the worst case latency. -That is the longest latency it takes for something to happen, and -not the average. We can have a very fast scheduler that may only -have a large latency once in a while, but that would not work well -with Real-Time tasks. The wakeup tracer was designed to record -the worst case wakeups of RT tasks. Non-RT tasks are not recorded -because the tracer only records one worst case and tracing non-RT -tasks that are unpredictable will overwrite the worst case latency -of RT tasks. - -Since this tracer only deals with RT tasks, we will run this slightly -different than we did with the previous tracers. Instead of performing -an 'ls' we will run 'sleep 1' under 'chrt' which changes the -priority of the task. 
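
For reference, "chrt -f 5 sleep 1" does roughly what the small program below does. This is only a sketch (error handling trimmed, priority 5 chosen to match the example): it switches the task to the SCHED_FIFO policy at user-space RT priority 5 and then sleeps.

#include <stdio.h>
#include <sched.h>
#include <unistd.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 5 };

	/* what "chrt -f 5" arranges: SCHED_FIFO at user RT priority 5 */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) < 0) {
		perror("sched_setscheduler");
		return 1;
	}
	sleep(1);	/* the "sleep 1" part of the example */
	return 0;
}

The debugfs side of the measurement then looks like this: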
- - # echo wakeup > /debug/tracing/current_tracer - # echo 0 > /debug/tracing/tracing_max_latency - # echo 1 > /debug/tracing/tracing_enabled - # chrt -f 5 sleep 1 - # echo 0 > /debug/tracing/tracing_enabled - # cat /debug/tracing/latency_trace -# tracer: wakeup -# -wakeup latency trace v1.1.5 on 2.6.26-rc8 --------------------------------------------------------------------- - latency: 4 us, #2/2, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) - ----------------- - | task: sleep-4901 (uid:0 nice:0 policy:1 rt_prio:5) - ----------------- - -# _------=> CPU# -# / _-----=> irqs-off -# | / _----=> need-resched -# || / _---=> hardirq/softirq -# ||| / _--=> preempt-depth -# |||| / -# ||||| delay -# cmd pid ||||| time | caller -# \ / ||||| \ | / - -0 1d.h4 0us+: try_to_wake_up (wake_up_process) - -0 1d..4 4us : schedule (cpu_idle) - - -vim:ft=help - - -Running this on an idle system we see that it only took 4 microseconds -to perform the task switch. Note, since the trace marker in the -schedule is before the actual "switch" we stop the tracing when -the recorded task is about to schedule in. This may change if -we add a new marker at the end of the scheduler. - -Notice that the recorded task is 'sleep' with the PID of 4901 and it -has an rt_prio of 5. This priority is user-space priority and not -the internal kernel priority. The policy is 1 for SCHED_FIFO and 2 -for SCHED_RR. - -Doing the same with chrt -r 5 and ftrace_enabled set. - -# tracer: wakeup -# -wakeup latency trace v1.1.5 on 2.6.26-rc8 --------------------------------------------------------------------- - latency: 50 us, #60/60, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:2) - ----------------- - | task: sleep-4068 (uid:0 nice:0 policy:2 rt_prio:5) - ----------------- - -# _------=> CPU# -# / _-----=> irqs-off -# | / _----=> need-resched -# || / _---=> hardirq/softirq -# ||| / _--=> preempt-depth -# |||| / -# ||||| delay -# cmd pid ||||| time | caller -# \ / ||||| \ | / -ksoftirq-7 1d.H3 0us : try_to_wake_up (wake_up_process) -ksoftirq-7 1d.H4 1us : sub_preempt_count (marker_probe_cb) -ksoftirq-7 1d.H3 2us : check_preempt_wakeup (try_to_wake_up) -ksoftirq-7 1d.H3 3us : update_curr (check_preempt_wakeup) -ksoftirq-7 1d.H3 4us : calc_delta_mine (update_curr) -ksoftirq-7 1d.H3 5us : __resched_task (check_preempt_wakeup) -ksoftirq-7 1d.H3 6us : task_wake_up_rt (try_to_wake_up) -ksoftirq-7 1d.H3 7us : _spin_unlock_irqrestore (try_to_wake_up) -[...] -ksoftirq-7 1d.H2 17us : irq_exit (smp_apic_timer_interrupt) -ksoftirq-7 1d.H2 18us : sub_preempt_count (irq_exit) -ksoftirq-7 1d.s3 19us : sub_preempt_count (irq_exit) -ksoftirq-7 1..s2 20us : rcu_process_callbacks (__do_softirq) -[...] -ksoftirq-7 1..s2 26us : __rcu_process_callbacks (rcu_process_callbacks) -ksoftirq-7 1d.s2 27us : _local_bh_enable (__do_softirq) -ksoftirq-7 1d.s2 28us : sub_preempt_count (_local_bh_enable) -ksoftirq-7 1.N.3 29us : sub_preempt_count (ksoftirqd) -ksoftirq-7 1.N.2 30us : _cond_resched (ksoftirqd) -ksoftirq-7 1.N.2 31us : __cond_resched (_cond_resched) -ksoftirq-7 1.N.2 32us : add_preempt_count (__cond_resched) -ksoftirq-7 1.N.2 33us : schedule (__cond_resched) -ksoftirq-7 1.N.2 33us : add_preempt_count (schedule) -ksoftirq-7 1.N.3 34us : hrtick_clear (schedule) -ksoftirq-7 1dN.3 35us : _spin_lock (schedule) -ksoftirq-7 1dN.3 36us : add_preempt_count (_spin_lock) -ksoftirq-7 1d..4 37us : put_prev_task_fair (schedule) -ksoftirq-7 1d..4 38us : update_curr (put_prev_task_fair) -[...] 
-ksoftirq-7 1d..5 47us : _spin_trylock (tracing_record_cmdline) -ksoftirq-7 1d..5 48us : add_preempt_count (_spin_trylock) -ksoftirq-7 1d..6 49us : _spin_unlock (tracing_record_cmdline) -ksoftirq-7 1d..6 49us : sub_preempt_count (_spin_unlock) -ksoftirq-7 1d..4 50us : schedule (__cond_resched) - -The interrupt went off while running ksoftirqd. This task runs at -SCHED_OTHER. Why didn't we see the 'N' set early? This may be -a harmless bug with x86_32 and 4K stacks. The need_reched() function -that tests if we need to reschedule looks on the actual stack. -Where as the setting of the NEED_RESCHED bit happens on the -task's stack. But because we are in a hard interrupt, the test -is with the interrupts stack which has that to be false. We don't -see the 'N' until we switch back to the task's stack. - -ftrace ------- - -ftrace is not only the name of the tracing infrastructure, but it -is also a name of one of the tracers. The tracer is the function -tracer. Enabling the function tracer can be done from the -debug file system. Make sure the ftrace_enabled is set otherwise -this tracer is a nop. - - # sysctl kernel.ftrace_enabled=1 - # echo ftrace > /debug/tracing/current_tracer - # echo 1 > /debug/tracing/tracing_enabled - # usleep 1 - # echo 0 > /debug/tracing/tracing_enabled - # cat /debug/tracing/trace -# tracer: ftrace -# -# TASK-PID CPU# TIMESTAMP FUNCTION -# | | | | | - bash-4003 [00] 123.638713: finish_task_switch <-schedule - bash-4003 [00] 123.638714: _spin_unlock_irq <-finish_task_switch - bash-4003 [00] 123.638714: sub_preempt_count <-_spin_unlock_irq - bash-4003 [00] 123.638715: hrtick_set <-schedule - bash-4003 [00] 123.638715: _spin_lock_irqsave <-hrtick_set - bash-4003 [00] 123.638716: add_preempt_count <-_spin_lock_irqsave - bash-4003 [00] 123.638716: _spin_unlock_irqrestore <-hrtick_set - bash-4003 [00] 123.638717: sub_preempt_count <-_spin_unlock_irqrestore - bash-4003 [00] 123.638717: hrtick_clear <-hrtick_set - bash-4003 [00] 123.638718: sub_preempt_count <-schedule - bash-4003 [00] 123.638718: sub_preempt_count <-preempt_schedule - bash-4003 [00] 123.638719: wait_for_completion <-__stop_machine_run - bash-4003 [00] 123.638719: wait_for_common <-wait_for_completion - bash-4003 [00] 123.638720: _spin_lock_irq <-wait_for_common - bash-4003 [00] 123.638720: add_preempt_count <-_spin_lock_irq -[...] - - -Note: It is sometimes better to enable or disable tracing directly from -a program, because the buffer may be overflowed by the echo commands -before you get to the point you want to trace. It is also easier to -stop the tracing at the point that you hit the part that you are -interested in. Since the ftrace buffer is a ring buffer with the -oldest data being overwritten, usually it is sufficient to start the -tracer with an echo command but have you code stop it. Something -like the following is usually appropriate for this. - -int trace_fd; -[...] -int main(int argc, char *argv[]) { - [...] - trace_fd = open("/debug/tracing/tracing_enabled", O_WRONLY); - [...] - if (condition_hit()) { - write(trace_fd, "0", 1); - } - [...] -} - - -dynamic ftrace --------------- - -If CONFIG_DYNAMIC_FTRACE is set, then the system will run with -virtually no overhead when function tracing is disabled. The way -this works is the mcount function call (placed at the start of -every kernel function, produced by the -pg switch in gcc), starts -of pointing to a simple return. 
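
The relationship between -pg and mcount can be pictured with a rough user-space sketch (the function names below are made up, and the kernel's real mcount is an architecture-specific assembly stub). gcc's -pg switch emits, at the entry of every function, the equivalent of the explicit mcount() call written out here; dynamic ftrace's job is to turn each such call site into a nop until tracing of that function is wanted.

#include <stdio.h>

static void mcount(void)
{
	/* a "simple return": all a disabled call site amounts to */
}

static void some_function(void)
{
	mcount();	/* what gcc -pg would have inserted automatically */
	printf("function body runs as usual\n");
}

int main(void)
{
	some_function();
	return 0;
}

Because the probe is a single call at function entry, patching it to a nop removes essentially all of its cost, which is what the rest of this section describes.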
- -When dynamic ftrace is initialized, it calls kstop_machine to make it -act like a uniprocessor so that it can freely modify code without -worrying about other processors executing that same code. At -initialization, the mcount calls are change to call a "record_ip" -function. After this, the first time a kernel function is called, -it has the calling address saved in a hash table. - -Later on the ftraced kernel thread is awoken and will again call -kstop_machine if new functions have been recorded. The ftraced thread -will change all calls to mcount to "nop". Just calling mcount -and having mcount return has shown a 10% overhead. By converting -it to a nop, there is no recordable overhead to the system. - -One special side-effect to the recording of the functions being -traced, is that we can now selectively choose which functions we -want to trace and which ones we want the mcount calls to remain as -nops. - -Two files that contain to the enabling and disabling of recorded -functions are: - - set_ftrace_filter - -and - - set_ftrace_notrace - -A list of available functions that you can add to this files is listed -in: - - available_filter_functions - - # cat /debug/tracing/available_filter_functions -put_prev_task_idle -kmem_cache_create -pick_next_task_rt -get_online_cpus -pick_next_task_fair -mutex_lock -[...] - -If I'm only interested in sys_nanosleep and hrtimer_interrupt: - - # echo sys_nanosleep hrtimer_interrupt \ - > /debug/tracing/set_ftrace_filter - # echo ftrace > /debug/tracing/current_tracer - # echo 1 > /debug/tracing/tracing_enabled - # usleep 1 - # echo 0 > /debug/tracing/tracing_enabled - # cat /debug/tracing/trace -# tracer: ftrace -# -# TASK-PID CPU# TIMESTAMP FUNCTION -# | | | | | - usleep-4134 [00] 1317.070017: hrtimer_interrupt <-smp_apic_timer_interrupt - usleep-4134 [00] 1317.070111: sys_nanosleep <-syscall_call - -0 [00] 1317.070115: hrtimer_interrupt <-smp_apic_timer_interrupt - -To see what functions are being traced, you can cat the file: - - # cat /debug/tracing/set_ftrace_filter -hrtimer_interrupt -sys_nanosleep - - -Perhaps this isn't enough. The filters also allow simple wild cards. -Only the following is currently available - - * - will match functions that begins with - * - will match functions that end with - ** - will match functions that have in it - -Thats all the wild cards that are allowed. - - * will not work. - - # echo hrtimer_* > /debug/tracing/set_ftrace_filter - -Produces: - -# tracer: ftrace -# -# TASK-PID CPU# TIMESTAMP FUNCTION -# | | | | | - bash-4003 [00] 1480.611794: hrtimer_init <-copy_process - bash-4003 [00] 1480.611941: hrtimer_start <-hrtick_set - bash-4003 [00] 1480.611956: hrtimer_cancel <-hrtick_clear - bash-4003 [00] 1480.611956: hrtimer_try_to_cancel <-hrtimer_cancel - -0 [00] 1480.612019: hrtimer_get_next_event <-get_next_timer_interrupt - -0 [00] 1480.612025: hrtimer_get_next_event <-get_next_timer_interrupt - -0 [00] 1480.612032: hrtimer_get_next_event <-get_next_timer_interrupt - -0 [00] 1480.612037: hrtimer_get_next_event <-get_next_timer_interrupt - -0 [00] 1480.612382: hrtimer_get_next_event <-get_next_timer_interrupt - - -Notice that we lost the sys_nanosleep. 
- - # cat /debug/tracing/set_ftrace_filter -hrtimer_run_queues -hrtimer_run_pending -hrtimer_init -hrtimer_cancel -hrtimer_try_to_cancel -hrtimer_forward -hrtimer_start -hrtimer_reprogram -hrtimer_force_reprogram -hrtimer_get_next_event -hrtimer_interrupt -hrtimer_nanosleep -hrtimer_wakeup -hrtimer_get_remaining -hrtimer_get_res -hrtimer_init_sleeper - - -This is because the '>' and '>>' act just like they do in bash. -To rewrite the filters, use '>' -To append to the filters, use '>>' - -To clear out a filter so that all functions will be recorded again. - - # echo > /debug/tracing/set_ftrace_filter - # cat /debug/tracing/set_ftrace_filter - # - -Again, now we want to append. - - # echo sys_nanosleep > /debug/tracing/set_ftrace_filter - # cat /debug/tracing/set_ftrace_filter -sys_nanosleep - # echo hrtimer_* >> /debug/tracing/set_ftrace_filter - # cat /debug/tracing/set_ftrace_filter -hrtimer_run_queues -hrtimer_run_pending -hrtimer_init -hrtimer_cancel -hrtimer_try_to_cancel -hrtimer_forward -hrtimer_start -hrtimer_reprogram -hrtimer_force_reprogram -hrtimer_get_next_event -hrtimer_interrupt -sys_nanosleep -hrtimer_nanosleep -hrtimer_wakeup -hrtimer_get_remaining -hrtimer_get_res -hrtimer_init_sleeper - - -The set_ftrace_notrace prevents those functions from being traced. - - # echo '*preempt*' '*lock*' > /debug/tracing/set_ftrace_notrace - -Produces: - -# tracer: ftrace -# -# TASK-PID CPU# TIMESTAMP FUNCTION -# | | | | | - bash-4043 [01] 115.281644: finish_task_switch <-schedule - bash-4043 [01] 115.281645: hrtick_set <-schedule - bash-4043 [01] 115.281645: hrtick_clear <-hrtick_set - bash-4043 [01] 115.281646: wait_for_completion <-__stop_machine_run - bash-4043 [01] 115.281647: wait_for_common <-wait_for_completion - bash-4043 [01] 115.281647: kthread_stop <-stop_machine_run - bash-4043 [01] 115.281648: init_waitqueue_head <-kthread_stop - bash-4043 [01] 115.281648: wake_up_process <-kthread_stop - bash-4043 [01] 115.281649: try_to_wake_up <-wake_up_process - -We can see that there's no more lock or preempt tracing. - -ftraced -------- - -As mentioned above, when dynamic ftrace is configured in, a kernel -thread wakes up once a second and checks to see if there are mcount -calls that need to be converted into nops. If there is not, then -it simply goes back to sleep. But if there is, it will call -kstop_machine to convert the calls to nops. - -There may be a case that you do not want this added latency. -Perhaps you are doing some audio recording and this activity might -cause skips in the playback. There is an interface to disable -and enable the ftraced kernel thread. - - # echo 0 > /debug/tracing/ftraced_enabled - -This will disable the calling of the kstop_machine to update the -mcount calls to nops. Remember that there's a large overhead -to calling mcount. Without this kernel thread, that overhead will -exist. - -Any write to the ftraced_enabled file will cause the kstop_machine -to run if there are recorded calls to mcount. This means that a -user can manually perform the updates when they want to by simply -echoing a '0' into the ftraced_enabled file. - -The updates are also done at the beginning of enabling a tracer -that uses ftrace function recording. - - -trace_pipe ----------- - -The trace_pipe outputs the same as trace, but the effect on the -tracing is different. Every read from trace_pipe is consumed. -This means that subsequent reads will be different. The trace -is live. 
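
Because reads from trace_pipe block until the tracer produces new data, it also lends itself to a small dedicated consumer program. A minimal sketch (the /debug mount point is the one used throughout this document; error handling is kept short):

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	int fd = open("/debug/tracing/trace_pipe", O_RDONLY);

	if (fd < 0) {
		perror("trace_pipe");
		return 1;
	}
	/* every successful read consumes the entries it returns */
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		write(STDOUT_FILENO, buf, n);
	close(fd);
	return 0;
}

The shell session below shows the same consumer behaviour using nothing but cat.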
- - # echo ftrace > /debug/tracing/current_tracer - # cat /debug/tracing/trace_pipe > /tmp/trace.out & -[1] 4153 - # echo 1 > /debug/tracing/tracing_enabled - # usleep 1 - # echo 0 > /debug/tracing/tracing_enabled - # cat /debug/tracing/trace -# tracer: ftrace -# -# TASK-PID CPU# TIMESTAMP FUNCTION -# | | | | | - - # - # cat /tmp/trace.out - bash-4043 [00] 41.267106: finish_task_switch <-schedule - bash-4043 [00] 41.267106: hrtick_set <-schedule - bash-4043 [00] 41.267107: hrtick_clear <-hrtick_set - bash-4043 [00] 41.267108: wait_for_completion <-__stop_machine_run - bash-4043 [00] 41.267108: wait_for_common <-wait_for_completion - bash-4043 [00] 41.267109: kthread_stop <-stop_machine_run - bash-4043 [00] 41.267109: init_waitqueue_head <-kthread_stop - bash-4043 [00] 41.267110: wake_up_process <-kthread_stop - bash-4043 [00] 41.267110: try_to_wake_up <-wake_up_process - bash-4043 [00] 41.267111: select_task_rq_rt <-try_to_wake_up - - -Note, reading the trace_pipe will block until more input is added. -By changing the tracer, trace_pipe will issue an EOF. We needed -to set the ftrace tracer _before_ cating the trace_pipe file. - - -trace entries -------------- - -Having too much or not enough data can be troublesome in diagnosing -some issue in the kernel. The file trace_entries is used to modify -the size of the internal trace buffers. The numbers listed -is the number of entries that can be recorded per CPU. To know -the full size, multiply the number of possible CPUS with the -number of entries. - - # cat /debug/tracing/trace_entries -65620 - -Note, to modify this you must have tracing fulling disabled. To do that, -echo "none" into the current_tracer. - - # echo none > /debug/tracing/current_tracer - # echo 100000 > /debug/tracing/trace_entries - # cat /debug/tracing/trace_entries -100045 - - -Notice that we echoed in 100,000 but the size is 100,045. The entries -are held by individual pages. It allocates the number of pages it takes -to fulfill the request. If more entries may fit on the last page -it will add them. - - # echo 1 > /debug/tracing/trace_entries - # cat /debug/tracing/trace_entries -85 - -This shows us that 85 entries can fit on a single page. - -The number of pages that will be allocated is a percentage of available -memory. Allocating too much will produces an error. - - # echo 1000000000000 > /debug/tracing/trace_entries --bash: echo: write error: Cannot allocate memory - # cat /debug/tracing/trace_entries -85 - diff --git a/Makefile b/Makefile index c536d7b..5599044 100644 --- a/Makefile +++ b/Makefile @@ -1,7 +1,7 @@ VERSION = 2 PATCHLEVEL = 6 SUBLEVEL = 26 -EXTRAVERSION = .1 +EXTRAVERSION = .2 NAME = Rotary Wombat # *DOCUMENTATION* diff --git a/arch/powerpc/kernel/ppc32.h b/arch/powerpc/kernel/ppc32.h index 90e5627..fda05e2 100644 --- a/arch/powerpc/kernel/ppc32.h +++ b/arch/powerpc/kernel/ppc32.h @@ -135,4 +135,6 @@ struct ucontext32 { struct mcontext32 uc_mcontext; }; +extern int copy_siginfo_to_user32(struct compat_siginfo __user *d, siginfo_t *s); + #endif /* _PPC64_PPC32_H */ diff --git a/arch/powerpc/kernel/ptrace32.c b/arch/powerpc/kernel/ptrace32.c index 4c1de6a..9d30e10 100644 --- a/arch/powerpc/kernel/ptrace32.c +++ b/arch/powerpc/kernel/ptrace32.c @@ -29,12 +29,15 @@ #include #include #include +#include #include #include #include #include +#include "ppc32.h" + /* * does not yet catch signals sent when the child dies. * in exit.c or in signal.c. 
@@ -64,6 +67,27 @@ static long compat_ptrace_old(struct task_struct *child, long request, return -EPERM; } +static int compat_ptrace_getsiginfo(struct task_struct *child, compat_siginfo_t __user *data) +{ + siginfo_t lastinfo; + int error = -ESRCH; + + read_lock(&tasklist_lock); + if (likely(child->sighand != NULL)) { + error = -EINVAL; + spin_lock_irq(&child->sighand->siglock); + if (likely(child->last_siginfo != NULL)) { + lastinfo = *child->last_siginfo; + error = 0; + } + spin_unlock_irq(&child->sighand->siglock); + } + read_unlock(&tasklist_lock); + if (!error) + return copy_siginfo_to_user32(data, &lastinfo); + return error; +} + long compat_arch_ptrace(struct task_struct *child, compat_long_t request, compat_ulong_t caddr, compat_ulong_t cdata) { @@ -282,6 +306,9 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request, 0, PT_REGS_COUNT * sizeof(compat_long_t), compat_ptr(data)); + case PTRACE_GETSIGINFO: + return compat_ptrace_getsiginfo(child, compat_ptr(data)); + case PTRACE_GETFPREGS: case PTRACE_SETFPREGS: case PTRACE_GETVRREGS: diff --git a/arch/x86/kernel/io_delay.c b/arch/x86/kernel/io_delay.c index 5921e5f..1c3a66a 100644 --- a/arch/x86/kernel/io_delay.c +++ b/arch/x86/kernel/io_delay.c @@ -103,6 +103,9 @@ void __init io_delay_init(void) static int __init io_delay_param(char *s) { + if (!s) + return -EINVAL; + if (!strcmp(s, "0x80")) io_delay_type = CONFIG_IO_DELAY_TYPE_0X80; else if (!strcmp(s, "0xed")) diff --git a/arch/x86/kernel/kprobes.c b/arch/x86/kernel/kprobes.c index b8c6743..43c019f 100644 --- a/arch/x86/kernel/kprobes.c +++ b/arch/x86/kernel/kprobes.c @@ -860,7 +860,6 @@ static int __kprobes post_kprobe_handler(struct pt_regs *regs) resume_execution(cur, regs, kcb); regs->flags |= kcb->kprobe_saved_flags; - trace_hardirqs_fixup_flags(regs->flags); if ((kcb->kprobe_status != KPROBE_REENTER) && cur->post_handler) { kcb->kprobe_status = KPROBE_HIT_SSDONE; diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c index ba370dc..58325a6 100644 --- a/arch/x86/kernel/process.c +++ b/arch/x86/kernel/process.c @@ -164,6 +164,9 @@ void __cpuinit select_idle_routine(const struct cpuinfo_x86 *c) static int __init idle_setup(char *str) { + if (!str) + return -EINVAL; + if (!strcmp(str, "poll")) { printk("using polling idle threads.\n"); pm_idle = poll_idle; diff --git a/block/bsg.c b/block/bsg.c index 54d617f..0526471 100644 --- a/block/bsg.c +++ b/block/bsg.c @@ -725,8 +725,13 @@ static int bsg_put_device(struct bsg_device *bd) mutex_lock(&bsg_mutex); do_free = atomic_dec_and_test(&bd->ref_count); - if (!do_free) + if (!do_free) { + mutex_unlock(&bsg_mutex); goto out; + } + + hlist_del(&bd->dev_list); + mutex_unlock(&bsg_mutex); dprintk("%s: tearing down\n", bd->name); @@ -742,10 +747,8 @@ static int bsg_put_device(struct bsg_device *bd) */ ret = bsg_complete_all_commands(bd); - hlist_del(&bd->dev_list); kfree(bd); out: - mutex_unlock(&bsg_mutex); kref_put(&q->bsg_dev.ref, bsg_kref_release_function); if (do_free) blk_put_queue(q); diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h index 78eb784..7828ef2 100644 --- a/drivers/input/serio/i8042-x86ia64io.h +++ b/drivers/input/serio/i8042-x86ia64io.h @@ -63,7 +63,7 @@ static inline void i8042_write_command(int val) outb(val, I8042_COMMAND_REG); } -#if defined(__i386__) || defined(__x86_64__) +#ifdef CONFIG_X86 #include @@ -291,17 +291,36 @@ static struct dmi_system_id __initdata i8042_dmi_nomux_table[] = { DMI_MATCH(DMI_PRODUCT_VERSION, "3000 N100"), }, }, + { + 
.ident = "Acer Aspire 1360", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), + DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 1360"), + }, + }, { } }; - - +#ifdef CONFIG_PNP +static struct dmi_system_id __initdata i8042_dmi_nopnp_table[] = { + { + .ident = "Intel MBO Desktop D845PESV", + .matches = { + DMI_MATCH(DMI_BOARD_NAME, "D845PESV"), + DMI_MATCH(DMI_BOARD_VENDOR, "Intel Corporation"), + }, + }, + { + .ident = "Gericom Bellagio", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "Gericom"), + DMI_MATCH(DMI_PRODUCT_NAME, "N34AS6"), + }, + }, + { } +}; #endif -#ifdef CONFIG_X86 - -#include - /* * Some Wistron based laptops need us to explicitly enable the 'Dritek * keyboard extension' to make their extra keys start generating scancodes. @@ -356,7 +375,6 @@ static struct dmi_system_id __initdata i8042_dmi_dritek_table[] = { #endif /* CONFIG_X86 */ - #ifdef CONFIG_PNP #include @@ -466,6 +484,11 @@ static int __init i8042_pnp_init(void) int pnp_data_busted = 0; int err; +#ifdef CONFIG_X86 + if (dmi_check_system(i8042_dmi_nopnp_table)) + i8042_nopnp = 1; +#endif + if (i8042_nopnp) { printk(KERN_INFO "i8042: PNP detection disabled\n"); return 0; @@ -591,15 +614,13 @@ static int __init i8042_platform_init(void) i8042_reset = 1; #endif -#if defined(__i386__) || defined(__x86_64__) +#ifdef CONFIG_X86 if (dmi_check_system(i8042_dmi_noloop_table)) i8042_noloop = 1; if (dmi_check_system(i8042_dmi_nomux_table)) i8042_nomux = 1; -#endif -#ifdef CONFIG_X86 if (dmi_check_system(i8042_dmi_dritek_table)) i8042_dritek = 1; #endif /* CONFIG_X86 */ diff --git a/drivers/md/linear.c b/drivers/md/linear.c index 1074824..ec921f5 100644 --- a/drivers/md/linear.c +++ b/drivers/md/linear.c @@ -126,7 +126,7 @@ static linear_conf_t *linear_conf(mddev_t *mddev, int raid_disks) int j = rdev->raid_disk; dev_info_t *disk = conf->disks + j; - if (j < 0 || j > raid_disks || disk->rdev) { + if (j < 0 || j >= raid_disks || disk->rdev) { printk("linear: disk numbering problem. 
Aborting!\n"); goto out; } diff --git a/drivers/md/md.c b/drivers/md/md.c index 2580ac1..9664511 100644 --- a/drivers/md/md.c +++ b/drivers/md/md.c @@ -3326,9 +3326,9 @@ static struct kobject *md_probe(dev_t dev, int *part, void *data) disk->queue = mddev->queue; add_disk(disk); mddev->gendisk = disk; - mutex_unlock(&disks_mutex); error = kobject_init_and_add(&mddev->kobj, &md_ktype, &disk->dev.kobj, "%s", "md"); + mutex_unlock(&disks_mutex); if (error) printk(KERN_WARNING "md: cannot register %s/md - name in use\n", disk->disk_name); diff --git a/drivers/net/wireless/ath5k/base.c b/drivers/net/wireless/ath5k/base.c index e57905c..bc3ea09 100644 --- a/drivers/net/wireless/ath5k/base.c +++ b/drivers/net/wireless/ath5k/base.c @@ -1774,20 +1774,21 @@ ath5k_tasklet_rx(unsigned long data) struct ath5k_rx_status rs = {}; struct sk_buff *skb; struct ath5k_softc *sc = (void *)data; - struct ath5k_buf *bf; + struct ath5k_buf *bf, *bf_last; struct ath5k_desc *ds; int ret; int hdrlen; int pad; spin_lock(&sc->rxbuflock); + if (list_empty(&sc->rxbuf)) { + ATH5K_WARN(sc, "empty rx buf pool\n"); + goto unlock; + } + bf_last = list_entry(sc->rxbuf.prev, struct ath5k_buf, list); do { rxs.flag = 0; - if (unlikely(list_empty(&sc->rxbuf))) { - ATH5K_WARN(sc, "empty rx buf pool\n"); - break; - } bf = list_first_entry(&sc->rxbuf, struct ath5k_buf, list); BUG_ON(bf->skb == NULL); skb = bf->skb; @@ -1797,8 +1798,24 @@ ath5k_tasklet_rx(unsigned long data) pci_dma_sync_single_for_cpu(sc->pdev, sc->desc_daddr, sc->desc_len, PCI_DMA_FROMDEVICE); - if (unlikely(ds->ds_link == bf->daddr)) /* this is the end */ - break; + /* + * last buffer must not be freed to ensure proper hardware + * function. When the hardware finishes also a packet next to + * it, we are sure, it doesn't use it anymore and we can go on. 
+ */ + if (bf_last == bf) + bf->flags |= 1; + if (bf->flags) { + struct ath5k_buf *bf_next = list_entry(bf->list.next, + struct ath5k_buf, list); + ret = sc->ah->ah_proc_rx_desc(sc->ah, bf_next->desc, + &rs); + if (ret) + break; + bf->flags &= ~1; + /* skip the overwritten one (even status is martian) */ + goto next; + } ret = sc->ah->ah_proc_rx_desc(sc->ah, ds, &rs); if (unlikely(ret == -EINPROGRESS)) @@ -1921,6 +1938,7 @@ accept: next: list_move_tail(&bf->list, &sc->rxbuf); } while (ath5k_rxbuf_setup(sc, bf) == 0); +unlock: spin_unlock(&sc->rxbuflock); } @@ -2435,6 +2453,9 @@ ath5k_stop_hw(struct ath5k_softc *sc) mutex_unlock(&sc->lock); del_timer_sync(&sc->calib_tim); + tasklet_kill(&sc->rxtq); + tasklet_kill(&sc->txtq); + tasklet_kill(&sc->restq); return ret; } diff --git a/drivers/net/wireless/ath5k/base.h b/drivers/net/wireless/ath5k/base.h index 3a97558..4badca7 100644 --- a/drivers/net/wireless/ath5k/base.h +++ b/drivers/net/wireless/ath5k/base.h @@ -55,7 +55,7 @@ struct ath5k_buf { struct list_head list; - unsigned int flags; /* tx descriptor flags */ + unsigned int flags; /* rx descriptor flags */ struct ath5k_desc *desc; /* virtual addr of desc */ dma_addr_t daddr; /* physical addr of desc */ struct sk_buff *skb; /* skbuff for buf */ diff --git a/drivers/scsi/ch.c b/drivers/scsi/ch.c index c4b938b..2be2da6 100644 --- a/drivers/scsi/ch.c +++ b/drivers/scsi/ch.c @@ -926,6 +926,7 @@ static int ch_probe(struct device *dev) if (init) ch_init_elem(ch); + dev_set_drvdata(dev, ch); sdev_printk(KERN_INFO, sd, "Attached scsi changer %s\n", ch->name); return 0; diff --git a/fs/jbd/transaction.c b/fs/jbd/transaction.c index 67ff202..8dee320 100644 --- a/fs/jbd/transaction.c +++ b/fs/jbd/transaction.c @@ -1648,12 +1648,42 @@ out: return; } +/* + * journal_try_to_free_buffers() could race with journal_commit_transaction(). + * The latter might still hold a count on buffers when inspecting + * them on t_syncdata_list or t_locked_list. + * + * journal_try_to_free_buffers() will call this function to + * wait for the current transaction to finish syncing data buffers, before + * trying to free that buffer. + * + * Called with journal->j_state_lock held. + */ +static void journal_wait_for_transaction_sync_data(journal_t *journal) +{ + transaction_t *transaction = NULL; + tid_t tid; + + spin_lock(&journal->j_state_lock); + transaction = journal->j_committing_transaction; + + if (!transaction) { + spin_unlock(&journal->j_state_lock); + return; + } + + tid = transaction->t_tid; + spin_unlock(&journal->j_state_lock); + log_wait_commit(journal, tid); +} /** * int journal_try_to_free_buffers() - try to free page buffers. * @journal: journal for operation * @page: to try and free - * @unused_gfp_mask: unused + * @gfp_mask: we use the mask to detect how hard we should try to release + * buffers. If __GFP_WAIT and __GFP_FS are set, we wait for commit code to + * release the buffers. * * * For all the buffers on this page, @@ -1682,9 +1712,11 @@ out: * journal_try_to_free_buffer() is changing its state. But that * cannot happen because we never reallocate freed data as metadata * while the data is part of a transaction. Yes?
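The helper added above is an instance of a common locking idiom: hold the spinlock only long enough to snapshot a stable identifier (here the commit tid) out of the shared pointer, drop the lock, and then sleep on the identifier rather than on the pointer. A generic sketch of the idiom; struct ctx, struct op and wait_for_cookie() are invented names:

#include <linux/spinlock.h>

struct op {
        unsigned long cookie;           /* survives the op being freed */
};

struct ctx {
        spinlock_t lock;
        struct op *in_flight;           /* protected by 'lock' */
};

void wait_for_cookie(struct ctx *c, unsigned long cookie);      /* sleeps */

static void wait_for_in_flight(struct ctx *c)
{
        unsigned long cookie;

        spin_lock(&c->lock);
        if (!c->in_flight) {
                spin_unlock(&c->lock);
                return;
        }
        cookie = c->in_flight->cookie;
        spin_unlock(&c->lock);

        /*
         * Never sleep with the spinlock held; the cookie remains
         * meaningful even if the op completes and is freed meanwhile.
         */
        wait_for_cookie(c, cookie);
}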
+ * + * Return 0 on failure, 1 on success */ int journal_try_to_free_buffers(journal_t *journal, - struct page *page, gfp_t unused_gfp_mask) + struct page *page, gfp_t gfp_mask) { struct buffer_head *head; struct buffer_head *bh; @@ -1713,7 +1745,28 @@ int journal_try_to_free_buffers(journal_t *journal, if (buffer_jbd(bh)) goto busy; } while ((bh = bh->b_this_page) != head); + ret = try_to_free_buffers(page); + + /* + * There are a number of places where journal_try_to_free_buffers() + * could race with journal_commit_transaction(); the latter still + * holds a reference to the buffers while processing them, so + * try_to_free_buffers() fails to free those buffers. Some callers + * of releasepage() request that page buffers be dropped, and otherwise + * treat the failure to free as an error (such as generic_file_direct_IO()). + * + * So, if the caller of try_to_release_page() wants the synchronous + * behaviour (i.e. make sure buffers are dropped upon return), + * let's wait for the current transaction to finish flushing + * dirty data buffers, then try to free those buffers again, + * with the journal locked. + */ + if (ret == 0 && (gfp_mask & __GFP_WAIT) && (gfp_mask & __GFP_FS)) { + journal_wait_for_transaction_sync_data(journal); + ret = try_to_free_buffers(page); + } + busy: return ret; } diff --git a/fs/namei.c b/fs/namei.c index 01e67dd..3b26a24 100644 --- a/fs/namei.c +++ b/fs/namei.c @@ -519,7 +519,14 @@ static struct dentry * real_lookup(struct dentry * parent, struct qstr * name, s */ result = d_lookup(parent, name); if (!result) { - struct dentry * dentry = d_alloc(parent, name); + struct dentry *dentry; + + /* Don't create child dentry for a dead directory. */ + result = ERR_PTR(-ENOENT); + if (IS_DEADDIR(dir)) + goto out_unlock; + + dentry = d_alloc(parent, name); result = ERR_PTR(-ENOMEM); if (dentry) { result = dir->i_op->lookup(dir, dentry, nd); @@ -528,6 +535,7 @@ static struct dentry * real_lookup(struct dentry * parent, struct qstr * name, s else result = dentry; } +out_unlock: mutex_unlock(&dir->i_mutex); return result; } @@ -1317,7 +1325,14 @@ static struct dentry *__lookup_hash(struct qstr *name, dentry = cached_lookup(base, name, nd); if (!dentry) { - struct dentry *new = d_alloc(base, name); + struct dentry *new; + + /* Don't create child dentry for a dead directory.
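For context on the fs/namei.c hunks: both slow-path lookups now fail with -ENOENT once the parent directory has been marked dead (rmdir marks it while holding the same i_mutex), instead of allocating a child dentry under a directory that is going away. Roughly, the guarded path looks like this simplified sketch (not the exact VFS code; the caller is assumed to hold dir->i_mutex, as real_lookup() does):

#include <linux/fs.h>
#include <linux/dcache.h>
#include <linux/namei.h>
#include <linux/err.h>

/* Simplified sketch of the guarded slow-path lookup. */
static struct dentry *lookup_slow_sketch(struct inode *dir,
                                         struct dentry *parent,
                                         struct qstr *name,
                                         struct nameidata *nd)
{
        struct dentry *dentry, *other;

        if (IS_DEADDIR(dir))            /* parent already rmdir'ed */
                return ERR_PTR(-ENOENT);

        dentry = d_alloc(parent, name);
        if (!dentry)
                return ERR_PTR(-ENOMEM);

        other = dir->i_op->lookup(dir, dentry, nd);
        if (other) {                    /* fs returned its own dentry (or an error) */
                dput(dentry);
                return other;
        }
        return dentry;
}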
*/ + dentry = ERR_PTR(-ENOENT); + if (IS_DEADDIR(inode)) + goto out; + + new = d_alloc(base, name); dentry = ERR_PTR(-ENOMEM); if (!new) goto out; diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c index 596c5d8..1d7ac64 100644 --- a/fs/nfs/inode.c +++ b/fs/nfs/inode.c @@ -57,8 +57,6 @@ static int enable_ino64 = NFS_64_BIT_INODE_NUMBERS_ENABLED; static void nfs_invalidate_inode(struct inode *); static int nfs_update_inode(struct inode *, struct nfs_fattr *); -static void nfs_zap_acl_cache(struct inode *); - static struct kmem_cache * nfs_inode_cachep; static inline unsigned long @@ -167,7 +165,7 @@ void nfs_zap_mapping(struct inode *inode, struct address_space *mapping) } } -static void nfs_zap_acl_cache(struct inode *inode) +void nfs_zap_acl_cache(struct inode *inode) { void (*clear_acl_cache)(struct inode *); diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h index 04ae867..24241fc 100644 --- a/fs/nfs/internal.h +++ b/fs/nfs/internal.h @@ -150,6 +150,7 @@ extern void nfs_clear_inode(struct inode *); #ifdef CONFIG_NFS_V4 extern void nfs4_clear_inode(struct inode *); #endif +void nfs_zap_acl_cache(struct inode *inode); /* super.c */ extern struct file_system_type nfs_xdev_fs_type; diff --git a/fs/nfs/nfs3acl.c b/fs/nfs/nfs3acl.c index 9b73625..423842f 100644 --- a/fs/nfs/nfs3acl.c +++ b/fs/nfs/nfs3acl.c @@ -5,6 +5,8 @@ #include #include +#include "internal.h" + #define NFSDBG_FACILITY NFSDBG_PROC ssize_t nfs3_listxattr(struct dentry *dentry, char *buffer, size_t size) @@ -205,6 +207,8 @@ struct posix_acl *nfs3_proc_getacl(struct inode *inode, int type) status = nfs_revalidate_inode(server, inode); if (status < 0) return ERR_PTR(status); + if (NFS_I(inode)->cache_validity & NFS_INO_INVALID_ACL) + nfs_zap_acl_cache(inode); acl = nfs3_get_cached_acl(inode, type); if (acl != ERR_PTR(-EAGAIN)) return acl; @@ -319,9 +323,8 @@ static int nfs3_proc_setacls(struct inode *inode, struct posix_acl *acl, dprintk("NFS call setacl\n"); msg.rpc_proc = &server->client_acl->cl_procinfo[ACLPROC3_SETACL]; status = rpc_call_sync(server->client_acl, &msg, 0); - spin_lock(&inode->i_lock); - NFS_I(inode)->cache_validity |= NFS_INO_INVALID_ACCESS; - spin_unlock(&inode->i_lock); + nfs_access_zap_cache(inode); + nfs_zap_acl_cache(inode); dprintk("NFS reply setacl: %d\n", status); /* pages may have been allocated at the xdr layer. 
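The romfs_readpage() rewrite further below follows the usual ->readpage contract: copy in as much file data as the page covers, zero whatever remains, and mark the page up to date only if the copy fully succeeded. A schematic version, where my_copy_data() is an invented stand-in for the medium-specific copy routine (returning the number of bytes actually copied):

#include <linux/fs.h>
#include <linux/pagemap.h>
#include <linux/highmem.h>

unsigned long my_copy_data(struct inode *inode, void *buf,
                           loff_t offset, unsigned long count);

static int my_readpage(struct file *file, struct page *page)
{
        struct inode *inode = page->mapping->host;
        loff_t offset = page_offset(page);
        loff_t size = i_size_read(inode);
        void *buf = kmap(page);
        unsigned long want, got = 0;
        int err = 0;

        if (offset < size) {
                want = min_t(loff_t, size - offset, PAGE_SIZE);
                got = my_copy_data(inode, buf, offset, want);
                if (got != want) {              /* short copy: treat as I/O error */
                        SetPageError(page);
                        got = 0;
                        err = -EIO;
                }
        }

        memset(buf + got, 0, PAGE_SIZE - got);  /* zero the tail */
        if (!err)
                SetPageUptodate(page);

        flush_dcache_page(page);
        kunmap(page);
        unlock_page(page);
        return err;
}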
*/ diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c index 1293e0a..806d17f 100644 --- a/fs/nfs/nfs4proc.c +++ b/fs/nfs/nfs4proc.c @@ -2706,6 +2706,8 @@ static ssize_t nfs4_proc_get_acl(struct inode *inode, void *buf, size_t buflen) ret = nfs_revalidate_inode(server, inode); if (ret < 0) return ret; + if (NFS_I(inode)->cache_validity & NFS_INO_INVALID_ACL) + nfs_zap_acl_cache(inode); ret = nfs4_read_cached_acl(inode, buf, buflen); if (ret != -ENOENT) return ret; @@ -2733,7 +2735,8 @@ static int __nfs4_proc_set_acl(struct inode *inode, const void *buf, size_t bufl nfs_inode_return_delegation(inode); buf_to_pages(buf, buflen, arg.acl_pages, &arg.acl_pgbase); ret = rpc_call_sync(NFS_CLIENT(inode), &msg, 0); - nfs_zap_caches(inode); + nfs_access_zap_cache(inode); + nfs_zap_acl_cache(inode); return ret; } diff --git a/fs/romfs/inode.c b/fs/romfs/inode.c index 3f13d49..35e5c6e 100644 --- a/fs/romfs/inode.c +++ b/fs/romfs/inode.c @@ -418,7 +418,8 @@ static int romfs_readpage(struct file *file, struct page * page) { struct inode *inode = page->mapping->host; - loff_t offset, avail, readlen; + loff_t offset, size; + unsigned long filled; void *buf; int result = -EIO; @@ -430,21 +431,29 @@ romfs_readpage(struct file *file, struct page * page) /* 32 bit warning -- but not for us :) */ offset = page_offset(page); - if (offset < i_size_read(inode)) { - avail = inode->i_size-offset; - readlen = min_t(unsigned long, avail, PAGE_SIZE); - if (romfs_copyfrom(inode, buf, ROMFS_I(inode)->i_dataoffset+offset, readlen) == readlen) { - if (readlen < PAGE_SIZE) { - memset(buf + readlen,0,PAGE_SIZE-readlen); - } - SetPageUptodate(page); - result = 0; + size = i_size_read(inode); + filled = 0; + result = 0; + if (offset < size) { + unsigned long readlen; + + size -= offset; + readlen = size > PAGE_SIZE ? PAGE_SIZE : size; + + filled = romfs_copyfrom(inode, buf, ROMFS_I(inode)->i_dataoffset+offset, readlen); + + if (filled != readlen) { + SetPageError(page); + filled = 0; + result = -EIO; } } - if (result) { - memset(buf, 0, PAGE_SIZE); - SetPageError(page); - } + + if (filled < PAGE_SIZE) + memset(buf + filled, 0, PAGE_SIZE-filled); + + if (!result) + SetPageUptodate(page); flush_dcache_page(page); unlock_page(page); diff --git a/include/sound/emu10k1.h b/include/sound/emu10k1.h index 7b7b9b1..10ee28e 100644 --- a/include/sound/emu10k1.h +++ b/include/sound/emu10k1.h @@ -1670,6 +1670,7 @@ struct snd_emu_chip_details { unsigned char spi_dac; /* SPI interface for DAC */ unsigned char i2c_adc; /* I2C interface for ADC */ unsigned char adc_1361t; /* Use Philips 1361T ADC */ + unsigned char invert_shared_spdif; /* analog/digital switch inverted */ const char *driver; const char *name; const char *id; /* for backward compatibility - can be NULL if not needed */ diff --git a/mm/filemap.c b/mm/filemap.c index 4f32423..afb991a 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -2581,9 +2581,8 @@ out: * Otherwise return zero. * * The @gfp_mask argument specifies whether I/O may be performed to release - * this page (__GFP_IO), and whether the call may block (__GFP_WAIT). + * this page (__GFP_IO), and whether the call may block (__GFP_WAIT & __GFP_FS). * - * NOTE: @gfp_mask may go away, and this function may become non-blocking. 
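The try_to_release_page() comment change above is what the jbd and filemap hunks rely on: a ->releasepage implementation may consult the gfp mask to decide whether it is allowed to block and re-enter the filesystem before giving up on the page's buffers. A hedged sketch of such an implementation; myfs_releasepage() and myfs_wait_for_commit() are invented names, not an existing API:

#include <linux/mm.h>
#include <linux/buffer_head.h>
#include <linux/gfp.h>

void myfs_wait_for_commit(struct inode *inode);         /* may sleep */

static int myfs_releasepage(struct page *page, gfp_t gfp_mask)
{
        if (try_to_free_buffers(page))
                return 1;                               /* buffers gone */

        /* Only block (and re-enter the fs) when the caller allows it. */
        if (!(gfp_mask & __GFP_WAIT) || !(gfp_mask & __GFP_FS))
                return 0;

        myfs_wait_for_commit(page->mapping->host);
        return try_to_free_buffers(page);
}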
*/ int try_to_release_page(struct page *page, gfp_t gfp_mask) { diff --git a/net/bluetooth/bnep/core.c b/net/bluetooth/bnep/core.c index f85d946..24e91eb 100644 --- a/net/bluetooth/bnep/core.c +++ b/net/bluetooth/bnep/core.c @@ -507,6 +507,11 @@ static int bnep_session(void *arg) /* Delete network device */ unregister_netdev(dev); + /* Wakeup user-space polling for socket errors */ + s->sock->sk->sk_err = EUNATCH; + + wake_up_interruptible(s->sock->sk->sk_sleep); + /* Release the socket */ fput(s->sock->file); diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c index 519cdb9..96434d7 100644 --- a/net/bluetooth/hidp/core.c +++ b/net/bluetooth/hidp/core.c @@ -581,6 +581,12 @@ static int hidp_session(void *arg) hid_free_device(session->hid); } + /* Wakeup user-space polling for socket errors */ + session->intr_sock->sk->sk_err = EUNATCH; + session->ctrl_sock->sk->sk_err = EUNATCH; + + hidp_schedule(session); + fput(session->intr_sock->file); wait_event_timeout(*(ctrl_sk->sk_sleep), @@ -879,6 +885,10 @@ int hidp_del_connection(struct hidp_conndel_req *req) skb_queue_purge(&session->ctrl_transmit); skb_queue_purge(&session->intr_transmit); + /* Wakeup user-space polling for socket errors */ + session->intr_sock->sk->sk_err = EUNATCH; + session->ctrl_sock->sk->sk_err = EUNATCH; + /* Kill session thread */ atomic_inc(&session->terminate); hidp_schedule(session); diff --git a/net/ipv4/netfilter/nf_nat_sip.c b/net/ipv4/netfilter/nf_nat_sip.c index 4334d5c..1454432 100644 --- a/net/ipv4/netfilter/nf_nat_sip.c +++ b/net/ipv4/netfilter/nf_nat_sip.c @@ -318,11 +318,11 @@ static int mangle_content_len(struct sk_buff *skb, buffer, buflen); } -static unsigned mangle_sdp_packet(struct sk_buff *skb, const char **dptr, - unsigned int dataoff, unsigned int *datalen, - enum sdp_header_types type, - enum sdp_header_types term, - char *buffer, int buflen) +static int mangle_sdp_packet(struct sk_buff *skb, const char **dptr, + unsigned int dataoff, unsigned int *datalen, + enum sdp_header_types type, + enum sdp_header_types term, + char *buffer, int buflen) { enum ip_conntrack_info ctinfo; struct nf_conn *ct = nf_ct_get(skb, &ctinfo); @@ -330,9 +330,9 @@ static unsigned mangle_sdp_packet(struct sk_buff *skb, const char **dptr, if (ct_sip_get_sdp_header(ct, *dptr, dataoff, *datalen, type, term, &matchoff, &matchlen) <= 0) - return 0; + return -ENOENT; return mangle_packet(skb, dptr, datalen, matchoff, matchlen, - buffer, buflen); + buffer, buflen) ? 
0 : -EINVAL; } static unsigned int ip_nat_sdp_addr(struct sk_buff *skb, const char **dptr, @@ -346,8 +346,8 @@ static unsigned int ip_nat_sdp_addr(struct sk_buff *skb, const char **dptr, unsigned int buflen; buflen = sprintf(buffer, NIPQUAD_FMT, NIPQUAD(addr->ip)); - if (!mangle_sdp_packet(skb, dptr, dataoff, datalen, type, term, - buffer, buflen)) + if (mangle_sdp_packet(skb, dptr, dataoff, datalen, type, term, + buffer, buflen)) return 0; return mangle_content_len(skb, dptr, datalen); @@ -381,15 +381,27 @@ static unsigned int ip_nat_sdp_session(struct sk_buff *skb, const char **dptr, /* Mangle session description owner and contact addresses */ buflen = sprintf(buffer, "%u.%u.%u.%u", NIPQUAD(addr->ip)); - if (!mangle_sdp_packet(skb, dptr, dataoff, datalen, + if (mangle_sdp_packet(skb, dptr, dataoff, datalen, SDP_HDR_OWNER_IP4, SDP_HDR_MEDIA, buffer, buflen)) return 0; - if (!mangle_sdp_packet(skb, dptr, dataoff, datalen, - SDP_HDR_CONNECTION_IP4, SDP_HDR_MEDIA, - buffer, buflen)) + switch (mangle_sdp_packet(skb, dptr, dataoff, datalen, + SDP_HDR_CONNECTION_IP4, SDP_HDR_MEDIA, + buffer, buflen)) { + case 0: + /* + * RFC 2327: + * + * Session description + * + * c=* (connection information - not required if included in all media) + */ + case -ENOENT: + break; + default: return 0; + } return mangle_content_len(skb, dptr, datalen); } diff --git a/net/netfilter/xt_time.c b/net/netfilter/xt_time.c index ed76baa..9f32859 100644 --- a/net/netfilter/xt_time.c +++ b/net/netfilter/xt_time.c @@ -173,7 +173,7 @@ time_mt(const struct sk_buff *skb, const struct net_device *in, __net_timestamp((struct sk_buff *)skb); stamp = ktime_to_ns(skb->tstamp); - do_div(stamp, NSEC_PER_SEC); + stamp = div_s64(stamp, NSEC_PER_SEC); if (info->flags & XT_TIME_LOCAL_TZ) /* Adjust for local timezone */ diff --git a/sound/core/seq/oss/seq_oss_synth.c b/sound/core/seq/oss/seq_oss_synth.c index 558dadb..e024e45 100644 --- a/sound/core/seq/oss/seq_oss_synth.c +++ b/sound/core/seq/oss/seq_oss_synth.c @@ -604,6 +604,9 @@ snd_seq_oss_synth_make_info(struct seq_oss_devinfo *dp, int dev, struct synth_in { struct seq_oss_synth *rec; + if (dev < 0 || dev >= dp->max_synthdev) + return -ENXIO; + if (dp->synths[dev].is_midi) { struct midi_info minf; snd_seq_oss_midi_make_info(dp, dp->synths[dev].midi_mapped, &minf); diff --git a/sound/pci/emu10k1/emu10k1_main.c b/sound/pci/emu10k1/emu10k1_main.c index 548c9cc..2f283ea 100644 --- a/sound/pci/emu10k1/emu10k1_main.c +++ b/sound/pci/emu10k1/emu10k1_main.c @@ -1528,6 +1528,7 @@ static struct snd_emu_chip_details emu_chip_details[] = { .ca0151_chip = 1, .spk71 = 1, .spdif_bug = 1, + .invert_shared_spdif = 1, /* digital/analog switch swapped */ .adc_1361t = 1, /* 24 bit capture instead of 16bit. Fixes ALSA bug#324 */ .ac97_chip = 1} , {.vendor = 0x1102, .device = 0x0004, .revision = 0x04, diff --git a/sound/pci/emu10k1/emumixer.c b/sound/pci/emu10k1/emumixer.c index fd22120..9f77692 100644 --- a/sound/pci/emu10k1/emumixer.c +++ b/sound/pci/emu10k1/emumixer.c @@ -1578,6 +1578,10 @@ static int snd_emu10k1_shared_spdif_get(struct snd_kcontrol *kcontrol, ucontrol->value.integer.value[0] = inl(emu->port + A_IOCFG) & A_IOCFG_GPOUT0 ? 1 : 0; else ucontrol->value.integer.value[0] = inl(emu->port + HCFG) & HCFG_GPOUT0 ? 
1 : 0; + if (emu->card_capabilities->invert_shared_spdif) + ucontrol->value.integer.value[0] = + !ucontrol->value.integer.value[0]; + return 0; } @@ -1586,15 +1590,18 @@ static int snd_emu10k1_shared_spdif_put(struct snd_kcontrol *kcontrol, { unsigned long flags; struct snd_emu10k1 *emu = snd_kcontrol_chip(kcontrol); - unsigned int reg, val; + unsigned int reg, val, sw; int change = 0; + sw = ucontrol->value.integer.value[0]; + if (emu->card_capabilities->invert_shared_spdif) + sw = !sw; spin_lock_irqsave(&emu->reg_lock, flags); if ( emu->card_capabilities->i2c_adc) { /* Do nothing for Audigy 2 ZS Notebook */ } else if (emu->audigy) { reg = inl(emu->port + A_IOCFG); - val = ucontrol->value.integer.value[0] ? A_IOCFG_GPOUT0 : 0; + val = sw ? A_IOCFG_GPOUT0 : 0; change = (reg & A_IOCFG_GPOUT0) != val; if (change) { reg &= ~A_IOCFG_GPOUT0; @@ -1603,7 +1610,7 @@ static int snd_emu10k1_shared_spdif_put(struct snd_kcontrol *kcontrol, } } reg = inl(emu->port + HCFG); - val = ucontrol->value.integer.value[0] ? HCFG_GPOUT0 : 0; + val = sw ? HCFG_GPOUT0 : 0; change |= (reg & HCFG_GPOUT0) != val; if (change) { reg &= ~HCFG_GPOUT0; diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c index b3a618e..6ba7ac0 100644 --- a/sound/pci/hda/hda_intel.c +++ b/sound/pci/hda/hda_intel.c @@ -285,6 +285,7 @@ struct azx_dev { u32 *posbuf; /* position buffer pointer */ unsigned int bufsize; /* size of the play buffer in bytes */ + unsigned int period_bytes; /* size of the period in bytes */ unsigned int frags; /* number for period in the play buffer */ unsigned int fifo_size; /* FIFO size */ @@ -301,11 +302,10 @@ struct azx_dev { */ unsigned char stream_tag; /* assigned stream */ unsigned char index; /* stream index */ - /* for sanity check of position buffer */ - unsigned int period_intr; unsigned int opened :1; unsigned int running :1; + unsigned int irq_pending: 1; }; /* CORB/RIRB */ @@ -369,6 +369,9 @@ struct azx { /* for debugging */ unsigned int last_cmd; /* last issued command (to sync) */ + + /* for pending irqs */ + struct work_struct irq_pending_work; }; /* driver types */ @@ -908,6 +911,8 @@ static void azx_init_pci(struct azx *chip) } +static int azx_position_ok(struct azx *chip, struct azx_dev *azx_dev); + /* * interrupt handler */ @@ -930,11 +935,18 @@ static irqreturn_t azx_interrupt(int irq, void *dev_id) azx_dev = &chip->azx_dev[i]; if (status & azx_dev->sd_int_sta_mask) { azx_sd_writeb(azx_dev, SD_STS, SD_INT_MASK); - if (azx_dev->substream && azx_dev->running) { - azx_dev->period_intr++; + if (!azx_dev->substream || !azx_dev->running) + continue; + /* check whether this IRQ is really acceptable */ + if (azx_position_ok(chip, azx_dev)) { + azx_dev->irq_pending = 0; spin_unlock(&chip->reg_lock); snd_pcm_period_elapsed(azx_dev->substream); spin_lock(&chip->reg_lock); + } else { + /* bogus IRQ, process it later */ + azx_dev->irq_pending = 1; + schedule_work(&chip->irq_pending_work); } } } @@ -973,6 +985,7 @@ static int azx_setup_periods(struct snd_pcm_substream *substream, azx_sd_writel(azx_dev, SD_BDLPU, 0); period_bytes = snd_pcm_lib_period_bytes(substream); + azx_dev->period_bytes = period_bytes; periods = azx_dev->bufsize / period_bytes; /* program the initial BDL entries */ @@ -1421,27 +1434,16 @@ static int azx_pcm_trigger(struct snd_pcm_substream *substream, int cmd) return 0; } -static snd_pcm_uframes_t azx_pcm_pointer(struct snd_pcm_substream *substream) +static unsigned int azx_get_position(struct azx *chip, + struct azx_dev *azx_dev) { - struct azx_pcm *apcm = 
snd_pcm_substream_chip(substream); - struct azx *chip = apcm->chip; - struct azx_dev *azx_dev = get_azx_dev(substream); unsigned int pos; if (chip->position_fix == POS_FIX_POSBUF || chip->position_fix == POS_FIX_AUTO) { /* use the position buffer */ pos = le32_to_cpu(*azx_dev->posbuf); - if (chip->position_fix == POS_FIX_AUTO && - azx_dev->period_intr == 1 && !pos) { - printk(KERN_WARNING - "hda-intel: Invalid position buffer, " - "using LPIB read method instead.\n"); - chip->position_fix = POS_FIX_NONE; - goto read_lpib; - } } else { - read_lpib: /* read LPIB */ pos = azx_sd_readl(azx_dev, SD_LPIB); if (chip->position_fix == POS_FIX_FIFO) @@ -1449,7 +1451,90 @@ static snd_pcm_uframes_t azx_pcm_pointer(struct snd_pcm_substream *substream) } if (pos >= azx_dev->bufsize) pos = 0; - return bytes_to_frames(substream->runtime, pos); + return pos; +} + +static snd_pcm_uframes_t azx_pcm_pointer(struct snd_pcm_substream *substream) +{ + struct azx_pcm *apcm = snd_pcm_substream_chip(substream); + struct azx *chip = apcm->chip; + struct azx_dev *azx_dev = get_azx_dev(substream); + return bytes_to_frames(substream->runtime, + azx_get_position(chip, azx_dev)); +} + +/* + * Check whether the current DMA position is acceptable for updating + * periods. Returns non-zero if it's OK. + * + * Many HD-audio controllers appear pretty inaccurate about + * the update-IRQ timing. The IRQ is issued before actually the + * data is processed. So, we need to process it afterwords in a + * workqueue. + */ +static int azx_position_ok(struct azx *chip, struct azx_dev *azx_dev) +{ + unsigned int pos; + + pos = azx_get_position(chip, azx_dev); + if (chip->position_fix == POS_FIX_AUTO) { + if (!pos) { + printk(KERN_WARNING + "hda-intel: Invalid position buffer, " + "using LPIB read method instead.\n"); + chip->position_fix = POS_FIX_NONE; + pos = azx_get_position(chip, azx_dev); + } else + chip->position_fix = POS_FIX_POSBUF; + } + + if (pos % azx_dev->period_bytes > azx_dev->period_bytes / 2) + return 0; /* NG - it's below the period boundary */ + return 1; /* OK, it's fine */ +} + +/* + * The work for pending PCM period updates. 
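The azx_position_ok() test above encodes a simple expectation: right after a genuine period interrupt, the DMA position should sit just past a period boundary, so its offset into the current period must be small. A worked example of the arithmetic (values purely illustrative):

/*
 * Illustration of the period-boundary check used above.
 * With period_bytes = 4096:
 *   pos = 4180 -> 4180 % 4096 = 84   <= 2048 -> IRQ accepted
 *   pos = 3500 -> 3500 % 4096 = 3500 >  2048 -> IRQ fired too early,
 *                                               defer to the workqueue
 */
static int position_ok_example(unsigned int pos, unsigned int period_bytes)
{
        return (pos % period_bytes) <= period_bytes / 2;
}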
+ */ +static void azx_irq_pending_work(struct work_struct *work) +{ + struct azx *chip = container_of(work, struct azx, irq_pending_work); + int i, pending; + + for (;;) { + pending = 0; + spin_lock_irq(&chip->reg_lock); + for (i = 0; i < chip->num_streams; i++) { + struct azx_dev *azx_dev = &chip->azx_dev[i]; + if (!azx_dev->irq_pending || + !azx_dev->substream || + !azx_dev->running) + continue; + if (azx_position_ok(chip, azx_dev)) { + azx_dev->irq_pending = 0; + spin_unlock(&chip->reg_lock); + snd_pcm_period_elapsed(azx_dev->substream); + spin_lock(&chip->reg_lock); + } else + pending++; + } + spin_unlock_irq(&chip->reg_lock); + if (!pending) + return; + cond_resched(); + } +} + +/* clear irq_pending flags and assure no on-going workq */ +static void azx_clear_irq_pending(struct azx *chip) +{ + int i; + + spin_lock_irq(&chip->reg_lock); + for (i = 0; i < chip->num_streams; i++) + chip->azx_dev[i].irq_pending = 0; + spin_unlock_irq(&chip->reg_lock); + flush_scheduled_work(); } static struct snd_pcm_ops azx_pcm_ops = { @@ -1676,6 +1761,7 @@ static int azx_suspend(struct pci_dev *pci, pm_message_t state) int i; snd_power_change_state(card, SNDRV_CTL_POWER_D3hot); + azx_clear_irq_pending(chip); for (i = 0; i < AZX_MAX_PCMS; i++) snd_pcm_suspend_all(chip->pcm[i]); if (chip->initialized) @@ -1732,6 +1818,7 @@ static int azx_free(struct azx *chip) int i; if (chip->initialized) { + azx_clear_irq_pending(chip); for (i = 0; i < chip->num_streams; i++) azx_stream_stop(chip, &chip->azx_dev[i]); azx_stop_chip(chip); @@ -1857,6 +1944,7 @@ static int __devinit azx_create(struct snd_card *card, struct pci_dev *pci, chip->irq = -1; chip->driver_type = driver_type; chip->msi = enable_msi; + INIT_WORK(&chip->irq_pending_work, azx_irq_pending_work); chip->position_fix = check_position_fix(chip, position_fix[dev]); check_probe_mask(chip, dev); diff --git a/sound/pci/hda/patch_analog.c b/sound/pci/hda/patch_analog.c index a99e86d..b5f655d 100644 --- a/sound/pci/hda/patch_analog.c +++ b/sound/pci/hda/patch_analog.c @@ -1618,6 +1618,7 @@ static const char *ad1981_models[AD1981_MODELS] = { static struct snd_pci_quirk ad1981_cfg_tbl[] = { SND_PCI_QUIRK(0x1014, 0x0597, "Lenovo Z60", AD1981_THINKPAD), + SND_PCI_QUIRK(0x1014, 0x05b7, "Lenovo Z60m", AD1981_THINKPAD), /* All HP models */ SND_PCI_QUIRK(0x103c, 0, "HP nx", AD1981_HP), SND_PCI_QUIRK(0x1179, 0x0001, "Toshiba U205", AD1981_TOSHIBA), @@ -2623,7 +2624,7 @@ static int ad1988_auto_create_extra_out(struct hda_codec *codec, hda_nid_t pin, { struct ad198x_spec *spec = codec->spec; hda_nid_t nid; - int idx, err; + int i, idx, err; char name[32]; if (! pin) @@ -2631,16 +2632,26 @@ static int ad1988_auto_create_extra_out(struct hda_codec *codec, hda_nid_t pin, idx = ad1988_pin_idx(pin); nid = ad1988_idx_to_dac(codec, idx); - /* specify the DAC as the extra output */ - if (! 
spec->multiout.hp_nid) - spec->multiout.hp_nid = nid; - else - spec->multiout.extra_out_nid[0] = nid; - /* control HP volume/switch on the output mixer amp */ - sprintf(name, "%s Playback Volume", pfx); - if ((err = add_control(spec, AD_CTL_WIDGET_VOL, name, - HDA_COMPOSE_AMP_VAL(nid, 3, 0, HDA_OUTPUT))) < 0) - return err; + /* check whether the corresponding DAC was already taken */ + for (i = 0; i < spec->autocfg.line_outs; i++) { + hda_nid_t pin = spec->autocfg.line_out_pins[i]; + hda_nid_t dac = ad1988_idx_to_dac(codec, ad1988_pin_idx(pin)); + if (dac == nid) + break; + } + if (i >= spec->autocfg.line_outs) { + /* specify the DAC as the extra output */ + if (!spec->multiout.hp_nid) + spec->multiout.hp_nid = nid; + else + spec->multiout.extra_out_nid[0] = nid; + /* control HP volume/switch on the output mixer amp */ + sprintf(name, "%s Playback Volume", pfx); + err = add_control(spec, AD_CTL_WIDGET_VOL, name, + HDA_COMPOSE_AMP_VAL(nid, 3, 0, HDA_OUTPUT)); + if (err < 0) + return err; + } nid = ad1988_mixer_nids[idx]; sprintf(name, "%s Playback Switch", pfx); if ((err = add_control(spec, AD_CTL_BIND_MUTE, name, -- To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/