2022-07-09 01:54:07

by Yang Jihong

Subject: [RFC v3 00/17] perf: Add perf kwork

Sometimes we need to analyze the time properties of kernel work such as irq,
softirq, and workqueue, including the delay and running time of specific
interrupts. These events already have kernel tracepoints, but the perf tool
does not directly analyze their delay and running time.

The perf-kwork tool is used to trace the time properties of kernel work
(such as irq, softirq, and workqueue) and to report runtime, latency,
and timehist, using the infrastructure in the perf tools to allow
tracing extra targets.

We also use BPF tracing to collect and filter data in the kernel, which
solves the problems of large perf data volume and extra file system
interaction.

Example usage:

1. Kwork record:

# perf kwork record -- sleep 10
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 6.825 MB perf.data ]

2. Kwork report:

# perf kwork report -S

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 1347.861 ms | 25049 | 1.417 ms | 121235.524083 s | 121235.525499 s |
(s)TIMER:1 | 0005 | 151.033 ms | 2545 | 0.153 ms | 121237.454591 s | 121237.454744 s |
(s)RCU:9 | 0005 | 117.254 ms | 2754 | 0.223 ms | 121239.461024 s | 121239.461246 s |
(s)SCHED:7 | 0005 | 58.714 ms | 1773 | 0.075 ms | 121237.702345 s | 121237.702419 s |
(s)RCU:9 | 0007 | 43.359 ms | 945 | 0.861 ms | 121237.702984 s | 121237.703845 s |
(s)SCHED:7 | 0000 | 33.389 ms | 549 | 4.121 ms | 121235.521379 s | 121235.525499 s |
(s)RCU:9 | 0002 | 21.419 ms | 484 | 0.281 ms | 121244.629001 s | 121244.629282 s |
(w)mix_interrupt_randomness | 0000 | 21.047 ms | 391 | 1.016 ms | 121237.934008 s | 121237.935024 s |
(s)SCHED:7 | 0007 | 19.903 ms | 570 | 0.065 ms | 121235.523360 s | 121235.523426 s |
(s)RCU:9 | 0000 | 19.017 ms | 472 | 0.507 ms | 121244.634002 s | 121244.634510 s |
... <SNIP> ...
(s)SCHED:7 | 0003 | 0.049 ms | 1 | 0.049 ms | 121240.018631 s | 121240.018680 s |
(w)vmstat_update | 0003 | 0.046 ms | 1 | 0.046 ms | 121240.916200 s | 121240.916246 s |
(s)RCU:9 | 0004 | 0.045 ms | 2 | 0.024 ms | 121235.522876 s | 121235.522900 s |
(w)neigh_managed_work | 0001 | 0.044 ms | 1 | 0.044 ms | 121235.513929 s | 121235.513973 s |
(w)vmstat_update | 0006 | 0.031 ms | 1 | 0.031 ms | 121245.673914 s | 121245.673945 s |
(w)vmstat_update | 0004 | 0.028 ms | 1 | 0.028 ms | 121235.522743 s | 121235.522770 s |
(w)wb_update_bandwidth_workfn | 0000 | 0.024 ms | 1 | 0.024 ms | 121244.842660 s | 121244.842683 s |
--------------------------------------------------------------------------------------------------------------------------------
Total count : 36071
Total runtime (msec) : 1887.188 (0.185% load average)
Total time span (msec) : 10185.012
--------------------------------------------------------------------------------------------------------------------------------

3. Kwork latency:

# perf kwork latency

Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)TIMER:1 | 0004 | 3.903 ms | 1 | 3.903 ms | 121235.517068 s | 121235.520971 s |
(s)RCU:9 | 0004 | 3.252 ms | 2 | 5.809 ms | 121235.517068 s | 121235.522876 s |
(s)RCU:9 | 0001 | 3.238 ms | 2 | 5.832 ms | 121235.514494 s | 121235.520326 s |
(w)vmstat_update | 0004 | 1.738 ms | 1 | 1.738 ms | 121235.521005 s | 121235.522743 s |
(s)SCHED:7 | 0004 | 0.978 ms | 2 | 1.899 ms | 121235.520940 s | 121235.522840 s |
(w)wb_update_bandwidth_workfn | 0000 | 0.834 ms | 1 | 0.834 ms | 121244.841826 s | 121244.842660 s |
(s)RCU:9 | 0003 | 0.479 ms | 3 | 0.752 ms | 121240.027521 s | 121240.028273 s |
(s)TIMER:1 | 0001 | 0.465 ms | 1 | 0.465 ms | 121235.513107 s | 121235.513572 s |
(w)vmstat_update | 0000 | 0.391 ms | 5 | 1.275 ms | 121236.814938 s | 121236.816213 s |
(w)mix_interrupt_randomness | 0002 | 0.317 ms | 5 | 0.874 ms | 121244.628034 s | 121244.628908 s |
(w)neigh_managed_work | 0001 | 0.315 ms | 1 | 0.315 ms | 121235.513614 s | 121235.513929 s |
... <SNIP> ...
(s)TIMER:1 | 0005 | 0.061 ms | 2545 | 0.506 ms | 121237.136113 s | 121237.136619 s |
(s)SCHED:7 | 0001 | 0.052 ms | 21 | 0.437 ms | 121237.711014 s | 121237.711451 s |
(s)SCHED:7 | 0002 | 0.045 ms | 309 | 0.145 ms | 121237.137184 s | 121237.137329 s |
(s)SCHED:7 | 0003 | 0.045 ms | 1 | 0.045 ms | 121240.018586 s | 121240.018631 s |
(s)SCHED:7 | 0007 | 0.044 ms | 570 | 0.173 ms | 121238.161161 s | 121238.161334 s |
(s)BLOCK:4 | 0003 | 0.030 ms | 4 | 0.056 ms | 121240.028255 s | 121240.028311 s |
--------------------------------------------------------------------------------------------------------------------------------
INFO: 28.761% skipped events (27674 including 2607 raise, 25067 entry, 0 exit)

4. Kwork timehist:

# perf kwork timehist
Runtime start Runtime end Cpu Kwork name Runtime Delaytime
(TYPE)NAME:NUM (msec) (msec)
----------------- ----------------- ------ ------------------------------ ---------- ----------
121235.513572 121235.513674 [0001] (s)TIMER:1 0.102 0.465
121235.513688 121235.513738 [0001] (s)SCHED:7 0.050 0.172
121235.513750 121235.513777 [0001] (s)RCU:9 0.027 0.643
121235.513929 121235.513973 [0001] (w)neigh_managed_work 0.044 0.315
121235.520326 121235.520386 [0001] (s)RCU:9 0.060 5.832
121235.520672 121235.520716 [0002] (s)SCHED:7 0.044 0.048
121235.520729 121235.520753 [0002] (s)RCU:9 0.024 5.651
121235.521213 121235.521249 [0005] (s)TIMER:1 0.036 0.064
121235.520166 121235.521379 [0000] (s)SCHED:7 1.213 0.056
... <SNIP> ...
121235.533256 121235.533296 [0000] virtio0-requests:25 0.040
121235.533322 121235.533359 [0000] (s)SCHED:7 0.037 0.095
121235.533018 121235.533452 [0006] (s)RCU:9 0.434 0.348
121235.534653 121235.534698 [0000] virtio0-requests:25 0.046
121235.535657 121235.535702 [0000] virtio0-requests:25 0.044
121235.535857 121235.535916 [0005] (s)TIMER:1 0.059 0.055
121235.535927 121235.535947 [0005] (s)RCU:9 0.020 0.113
121235.536178 121235.536196 [0006] (s)RCU:9 0.018 0.410
121235.537406 121235.537445 [0006] (s)SCHED:7 0.039 0.049
121235.537457 121235.537481 [0006] (s)RCU:9 0.024 0.334
121235.538199 121235.538254 [0007] (s)RCU:9 0.055 0.066

5. Kwork report using BPF:

# perf kwork report -b
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(w)flush_to_ldisc | 0000 | 2.279 ms | 2 | 2.219 ms | 121293.080933 s | 121293.083152 s |
(s)SCHED:7 | 0001 | 2.141 ms | 2 | 2.100 ms | 121293.082064 s | 121293.084164 s |
(s)RCU:9 | 0003 | 2.137 ms | 3 | 2.046 ms | 121293.081348 s | 121293.083394 s |
(s)TIMER:1 | 0007 | 1.882 ms | 12 | 0.249 ms | 121295.632211 s | 121295.632460 s |
(w)e1000_watchdog | 0002 | 1.136 ms | 3 | 0.428 ms | 121294.496559 s | 121294.496987 s |
(s)SCHED:7 | 0007 | 0.995 ms | 12 | 0.139 ms | 121295.632483 s | 121295.632621 s |
(s)NET_RX:3 | 0002 | 0.727 ms | 5 | 0.391 ms | 121299.044624 s | 121299.045016 s |
(s)TIMER:1 | 0002 | 0.696 ms | 5 | 0.164 ms | 121294.496172 s | 121294.496337 s |
(s)SCHED:7 | 0002 | 0.427 ms | 6 | 0.077 ms | 121295.840321 s | 121295.840398 s |
(s)SCHED:7 | 0000 | 0.366 ms | 3 | 0.156 ms | 121296.545389 s | 121296.545545 s |
eth0:10 | 0002 | 0.353 ms | 5 | 0.122 ms | 121293.084796 s | 121293.084919 s |
(w)flush_to_ldisc | 0000 | 0.298 ms | 1 | 0.298 ms | 121299.046236 s | 121299.046534 s |
(w)mix_interrupt_randomness | 0002 | 0.215 ms | 4 | 0.077 ms | 121293.086747 s | 121293.086823 s |
(s)RCU:9 | 0002 | 0.128 ms | 3 | 0.060 ms | 121293.087348 s | 121293.087409 s |
(w)vmstat_shepherd | 0000 | 0.098 ms | 1 | 0.098 ms | 121293.083901 s | 121293.083999 s |
(s)TIMER:1 | 0001 | 0.089 ms | 1 | 0.089 ms | 121293.085709 s | 121293.085798 s |
(w)vmstat_update | 0003 | 0.071 ms | 1 | 0.071 ms | 121293.085227 s | 121293.085298 s |
(w)wq_barrier_func | 0000 | 0.064 ms | 1 | 0.064 ms | 121293.083688 s | 121293.083752 s |
(w)vmstat_update | 0000 | 0.041 ms | 1 | 0.041 ms | 121293.083829 s | 121293.083869 s |
(s)RCU:9 | 0001 | 0.038 ms | 1 | 0.038 ms | 121293.085818 s | 121293.085856 s |
(s)RCU:9 | 0007 | 0.035 ms | 1 | 0.035 ms | 121293.112322 s | 121293.112357 s |
--------------------------------------------------------------------------------------------------------------------------------

6. Kwork latency using BPF:

# perf kwork latency -b
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(w)vmstat_shepherd | 0000 | 2.044 ms | 2 | 2.764 ms | 121314.942758 s | 121314.945522 s |
(w)flush_to_ldisc | 0000 | 1.008 ms | 1 | 1.008 ms | 121317.335508 s | 121317.336516 s |
(w)vmstat_update | 0002 | 0.879 ms | 1 | 0.879 ms | 121317.024405 s | 121317.025284 s |
(w)mix_interrupt_randomness | 0002 | 0.328 ms | 5 | 0.383 ms | 121308.832944 s | 121308.833327 s |
(w)e1000_watchdog | 0002 | 0.304 ms | 5 | 0.368 ms | 121317.024305 s | 121317.024673 s |
(s)RCU:9 | 0001 | 0.172 ms | 41 | 0.728 ms | 121308.308187 s | 121308.308915 s |
(s)TIMER:1 | 0000 | 0.149 ms | 3 | 0.195 ms | 121317.334255 s | 121317.334449 s |
(s)NET_RX:3 | 0001 | 0.143 ms | 40 | 1.213 ms | 121315.030992 s | 121315.032205 s |
(s)RCU:9 | 0002 | 0.139 ms | 27 | 0.187 ms | 121315.077388 s | 121315.077576 s |
(s)NET_RX:3 | 0002 | 0.130 ms | 7 | 0.283 ms | 121308.832917 s | 121308.833201 s |
(s)SCHED:7 | 0007 | 0.123 ms | 34 | 0.191 ms | 121308.736240 s | 121308.736431 s |
(s)TIMER:1 | 0007 | 0.116 ms | 18 | 0.145 ms | 121308.736168 s | 121308.736313 s |
(s)RCU:9 | 0007 | 0.111 ms | 68 | 0.318 ms | 121308.736194 s | 121308.736512 s |
(s)SCHED:7 | 0002 | 0.110 ms | 22 | 0.292 ms | 121308.832197 s | 121308.832489 s |
(s)TIMER:1 | 0001 | 0.107 ms | 1 | 0.107 ms | 121314.948230 s | 121314.948337 s |
(w)neigh_managed_work | 0001 | 0.103 ms | 1 | 0.103 ms | 121314.948381 s | 121314.948484 s |
(s)RCU:9 | 0000 | 0.099 ms | 49 | 0.289 ms | 121308.520167 s | 121308.520456 s |
(s)NET_RX:3 | 0007 | 0.096 ms | 40 | 1.227 ms | 121315.022994 s | 121315.024220 s |
(s)RCU:9 | 0003 | 0.093 ms | 37 | 0.261 ms | 121314.950651 s | 121314.950913 s |
(w)flush_to_ldisc | 0000 | 0.090 ms | 1 | 0.090 ms | 121317.336737 s | 121317.336827 s |
(s)TIMER:1 | 0002 | 0.078 ms | 36 | 0.115 ms | 121310.880172 s | 121310.880288 s |
(s)SCHED:7 | 0001 | 0.071 ms | 27 | 0.180 ms | 121314.953571 s | 121314.953751 s |
(s)SCHED:7 | 0000 | 0.066 ms | 28 | 0.344 ms | 121317.334345 s | 121317.334689 s |
(s)SCHED:7 | 0003 | 0.063 ms | 14 | 0.119 ms | 121314.978808 s | 121314.978927 s |
--------------------------------------------------------------------------------------------------------------------------------

7. Kwork report with filter:

# perf kwork report -b -n RCU
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(s)RCU:9 | 0006 | 2.266 ms | 3 | 2.158 ms | 121335.008290 s | 121335.010449 s |
(s)RCU:9 | 0002 | 0.158 ms | 3 | 0.063 ms | 121335.011914 s | 121335.011977 s |
(s)RCU:9 | 0007 | 0.082 ms | 1 | 0.082 ms | 121335.448378 s | 121335.448460 s |
(s)RCU:9 | 0000 | 0.058 ms | 1 | 0.058 ms | 121335.011350 s | 121335.011408 s |
--------------------------------------------------------------------------------------------------------------------------------

---
Changes since v2:
- Update commit messages.

Changes since v1:
- Add options and documentation only in the patches that actually add the
  corresponding functionality.
- Replace "cluster" with "work".
- Add workqueue symbolizing function support.
- Replace "frequency" with "count" in report header.
- Add bpf trace support.

Yang Jihong (17):
perf kwork: New tool
perf kwork: Add irq kwork record support
perf kwork: Add softirq kwork record support
perf kwork: Add workqueue kwork record support
tools lib: Add list_last_entry_or_null
perf kwork: Implement perf kwork report
perf kwork: Add irq report support
perf kwork: Add softirq report support
perf kwork: Add workqueue report support
perf kwork: Implement perf kwork latency
perf kwork: Add softirq latency support
perf kwork: Add workqueue latency support
perf kwork: Implement perf kwork timehist
perf kwork: Implement bpf trace
perf kwork: Add irq trace bpf support
perf kwork: Add softirq trace bpf support
perf kwork: Add workqueue trace bpf support

tools/include/linux/list.h | 11 +
tools/perf/Build | 1 +
tools/perf/Documentation/perf-kwork.txt | 180 ++
tools/perf/Makefile.perf | 1 +
tools/perf/builtin-kwork.c | 1834 ++++++++++++++++++++
tools/perf/builtin.h | 1 +
tools/perf/command-list.txt | 1 +
tools/perf/perf.c | 1 +
tools/perf/util/Build | 1 +
tools/perf/util/bpf_kwork.c | 356 ++++
tools/perf/util/bpf_skel/kwork_trace.bpf.c | 381 ++++
tools/perf/util/kwork.h | 257 +++
12 files changed, 3025 insertions(+)
create mode 100644 tools/perf/Documentation/perf-kwork.txt
create mode 100644 tools/perf/builtin-kwork.c
create mode 100644 tools/perf/util/bpf_kwork.c
create mode 100644 tools/perf/util/bpf_skel/kwork_trace.bpf.c
create mode 100644 tools/perf/util/kwork.h

--
2.30.GIT


2022-07-09 01:55:00

by Yang Jihong

Subject: [RFC v3 01/17] perf kwork: New tool

The perf-kwork tool is used to trace the time properties of kernel work
(such as irq, softirq, and workqueue) and to report runtime, latency,
and timehist, using the infrastructure in the perf tools to allow
tracing extra targets.

This first commit reuses the perf record framework code to implement a
simple record function; no kwork class is supported yet.
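
For reference, perf_kwork__record() below only builds a perf record command
line, from a fixed argument set plus one "-e <tracepoint>" pair per
tracepoint of each configured class, and forwards it to cmd_record(). Once
the irq class is added later in the series, "perf kwork record -- sleep 1"
becomes roughly equivalent to (a sketch, not verbatim):

  perf record -a -R -m 1024 -c 1 \
      -e irq:irq_handler_entry -e irq:irq_handler_exit -- sleep 1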

Test cases:

# perf

usage: perf [--version] [--help] [OPTIONS] COMMAND [ARGS]

The most commonly used perf commands are:
<SNIP>
iostat Show I/O performance metrics
kallsyms Searches running kernel for symbols
kmem Tool to trace/measure kernel memory properties
kvm Tool to trace/measure kvm guest os
kwork Tool to trace/measure kernel work properties (latencies)
list List all symbolic event types
lock Analyze lock events
mem Profile memory accesses
record Run a command and record its profile into perf.data
<SNIP>
See 'perf help COMMAND' for more information on a specific command.

# perf kwork

Usage: perf kwork [<options>] {record}

-D, --dump-raw-trace dump raw trace in ASCII
-f, --force don't complain, do it
-k, --kwork <kwork> list of kwork to profile
-v, --verbose be more verbose (show symbol address, etc)

# perf kwork record -- sleep 1
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 1.787 MB perf.data ]

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/Build | 1 +
tools/perf/Documentation/perf-kwork.txt | 43 +++++++
tools/perf/builtin-kwork.c | 162 ++++++++++++++++++++++++
tools/perf/builtin.h | 1 +
tools/perf/command-list.txt | 1 +
tools/perf/perf.c | 1 +
tools/perf/util/kwork.h | 41 ++++++
7 files changed, 250 insertions(+)
create mode 100644 tools/perf/Documentation/perf-kwork.txt
create mode 100644 tools/perf/builtin-kwork.c
create mode 100644 tools/perf/util/kwork.h

diff --git a/tools/perf/Build b/tools/perf/Build
index db61dbe2b543..496b096153bb 100644
--- a/tools/perf/Build
+++ b/tools/perf/Build
@@ -25,6 +25,7 @@ perf-y += builtin-data.o
perf-y += builtin-version.o
perf-y += builtin-c2c.o
perf-y += builtin-daemon.o
+perf-y += builtin-kwork.o

perf-$(CONFIG_TRACE) += builtin-trace.o
perf-$(CONFIG_LIBELF) += builtin-probe.o
diff --git a/tools/perf/Documentation/perf-kwork.txt b/tools/perf/Documentation/perf-kwork.txt
new file mode 100644
index 000000000000..dc1e36da57bb
--- /dev/null
+++ b/tools/perf/Documentation/perf-kwork.txt
@@ -0,0 +1,43 @@
+perf-kwork(1)
+=============
+
+NAME
+----
+perf-kwork - Tool to trace/measure kernel work properties (latencies)
+
+SYNOPSIS
+--------
+[verse]
+'perf kwork' {record}
+
+DESCRIPTION
+-----------
+There are several variants of 'perf kwork':
+
+ 'perf kwork record <command>' to record the kernel work
+ of an arbitrary workload.
+
+ Example usage:
+ perf kwork record -- sleep 1
+
+OPTIONS
+-------
+-D::
+--dump-raw-trace=::
+ Display verbose dump of the kwork data.
+
+-f::
+--force::
+ Don't complain, do it.
+
+-k::
+--kwork::
+ List of kwork to profile
+
+-v::
+--verbose::
+ Be more verbose. (show symbol address, etc)
+
+SEE ALSO
+--------
+linkperf:perf-record[1]
diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
new file mode 100644
index 000000000000..f3552c56ede3
--- /dev/null
+++ b/tools/perf/builtin-kwork.c
@@ -0,0 +1,162 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * builtin-kwork.c
+ *
+ * Copyright (c) 2022 Huawei Inc, Yang Jihong <[email protected]>
+ */
+
+#include "builtin.h"
+
+#include "util/data.h"
+#include "util/kwork.h"
+#include "util/debug.h"
+#include "util/symbol.h"
+#include "util/thread.h"
+#include "util/string2.h"
+#include "util/callchain.h"
+#include "util/evsel_fprintf.h"
+
+#include <subcmd/pager.h>
+#include <subcmd/parse-options.h>
+
+#include <errno.h>
+#include <inttypes.h>
+#include <linux/err.h>
+#include <linux/time64.h>
+#include <linux/zalloc.h>
+
+static struct kwork_class *kwork_class_supported_list[KWORK_CLASS_MAX] = {
+};
+
+static void setup_event_list(struct perf_kwork *kwork,
+ const struct option *options,
+ const char * const usage_msg[])
+{
+ int i;
+ struct kwork_class *class;
+ char *tmp, *tok, *str;
+
+ if (kwork->event_list_str == NULL)
+ goto null_event_list_str;
+
+ str = strdup(kwork->event_list_str);
+ for (tok = strtok_r(str, ", ", &tmp);
+ tok; tok = strtok_r(NULL, ", ", &tmp)) {
+ for (i = 0; i < KWORK_CLASS_MAX; i++) {
+ class = kwork_class_supported_list[i];
+ if (strcmp(tok, class->name) == 0) {
+ list_add_tail(&class->list, &kwork->class_list);
+ break;
+ }
+ }
+ if (i == KWORK_CLASS_MAX)
+ usage_with_options_msg(usage_msg, options,
+ "Unknown --event key: `%s'", tok);
+ }
+ free(str);
+
+null_event_list_str:
+ /*
+ * config all kwork events if not specified
+ */
+ if (list_empty(&kwork->class_list))
+ for (i = 0; i < KWORK_CLASS_MAX; i++)
+ list_add_tail(&kwork_class_supported_list[i]->list,
+ &kwork->class_list);
+
+ pr_debug("Config event list:");
+ list_for_each_entry(class, &kwork->class_list, list)
+ pr_debug(" %s", class->name);
+ pr_debug("\n");
+}
+
+static int perf_kwork__record(struct perf_kwork *kwork,
+ int argc, const char **argv)
+{
+ const char **rec_argv;
+ unsigned int rec_argc, i, j;
+ struct kwork_class *class;
+
+ const char *const record_args[] = {
+ "record",
+ "-a",
+ "-R",
+ "-m", "1024",
+ "-c", "1",
+ };
+
+ rec_argc = ARRAY_SIZE(record_args) + argc - 1;
+
+ list_for_each_entry(class, &kwork->class_list, list)
+ rec_argc += 2 * class->nr_tracepoints;
+
+ rec_argv = calloc(rec_argc + 1, sizeof(char *));
+ if (rec_argv == NULL)
+ return -ENOMEM;
+
+ for (i = 0; i < ARRAY_SIZE(record_args); i++)
+ rec_argv[i] = strdup(record_args[i]);
+
+ list_for_each_entry(class, &kwork->class_list, list) {
+ for (j = 0; j < class->nr_tracepoints; j++) {
+ rec_argv[i++] = strdup("-e");
+ rec_argv[i++] = strdup(class->tp_handlers[j].name);
+ }
+ }
+
+ for (j = 1; j < (unsigned int)argc; j++, i++)
+ rec_argv[i] = argv[j];
+
+ BUG_ON(i != rec_argc);
+
+ pr_debug("record comm: ");
+ for (j = 0; j < rec_argc; j++)
+ pr_debug("%s ", rec_argv[j]);
+ pr_debug("\n");
+
+ return cmd_record(i, rec_argv);
+}
+
+int cmd_kwork(int argc, const char **argv)
+{
+ static struct perf_kwork kwork = {
+ .class_list = LIST_HEAD_INIT(kwork.class_list),
+
+ .force = false,
+ .event_list_str = NULL,
+ };
+
+ const struct option kwork_options[] = {
+ OPT_INCR('v', "verbose", &verbose,
+ "be more verbose (show symbol address, etc)"),
+ OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace,
+ "dump raw trace in ASCII"),
+ OPT_STRING('k', "kwork", &kwork.event_list_str, "kwork",
+ "list of kwork to profile"),
+ OPT_BOOLEAN('f', "force", &kwork.force, "don't complain, do it"),
+ OPT_END()
+ };
+
+ const char *kwork_usage[] = {
+ NULL,
+ NULL
+ };
+ const char *const kwork_subcommands[] = {
+ "record", NULL
+ };
+
+ argc = parse_options_subcommand(argc, argv, kwork_options,
+ kwork_subcommands, kwork_usage,
+ PARSE_OPT_STOP_AT_NON_OPTION);
+ if (!argc)
+ usage_with_options(kwork_usage, kwork_options);
+
+ setup_event_list(&kwork, kwork_options, kwork_usage);
+
+ if (strlen(argv[0]) > 2 && strstarts("record", argv[0]))
+ return perf_kwork__record(&kwork, argc, argv);
+ else
+ usage_with_options(kwork_usage, kwork_options);
+
+ return 0;
+}
diff --git a/tools/perf/builtin.h b/tools/perf/builtin.h
index 7303e80a639c..d03afea86217 100644
--- a/tools/perf/builtin.h
+++ b/tools/perf/builtin.h
@@ -38,6 +38,7 @@ int cmd_mem(int argc, const char **argv);
int cmd_data(int argc, const char **argv);
int cmd_ftrace(int argc, const char **argv);
int cmd_daemon(int argc, const char **argv);
+int cmd_kwork(int argc, const char **argv);

int find_scripts(char **scripts_array, char **scripts_path_array, int num,
int pathlen);
diff --git a/tools/perf/command-list.txt b/tools/perf/command-list.txt
index 4aa034aefa33..8fcab5ad00c5 100644
--- a/tools/perf/command-list.txt
+++ b/tools/perf/command-list.txt
@@ -18,6 +18,7 @@ perf-iostat mainporcelain common
perf-kallsyms mainporcelain common
perf-kmem mainporcelain common
perf-kvm mainporcelain common
+perf-kwork mainporcelain common
perf-list mainporcelain common
perf-lock mainporcelain common
perf-mem mainporcelain common
diff --git a/tools/perf/perf.c b/tools/perf/perf.c
index 0170cb0819d6..c21b3973641a 100644
--- a/tools/perf/perf.c
+++ b/tools/perf/perf.c
@@ -91,6 +91,7 @@ static struct cmd_struct commands[] = {
{ "data", cmd_data, 0 },
{ "ftrace", cmd_ftrace, 0 },
{ "daemon", cmd_daemon, 0 },
+ { "kwork", cmd_kwork, 0 },
};

struct pager_config {
diff --git a/tools/perf/util/kwork.h b/tools/perf/util/kwork.h
new file mode 100644
index 000000000000..6950636aab2a
--- /dev/null
+++ b/tools/perf/util/kwork.h
@@ -0,0 +1,41 @@
+#ifndef PERF_UTIL_KWORK_H
+#define PERF_UTIL_KWORK_H
+
+#include "perf.h"
+
+#include "util/tool.h"
+#include "util/event.h"
+#include "util/evlist.h"
+#include "util/session.h"
+#include "util/time-utils.h"
+
+#include <linux/list.h>
+#include <linux/bitmap.h>
+
+enum kwork_class_type {
+ KWORK_CLASS_MAX,
+};
+
+struct kwork_class {
+ struct list_head list;
+ const char *name;
+ enum kwork_class_type type;
+
+ unsigned int nr_tracepoints;
+ const struct evsel_str_handler *tp_handlers;
+};
+
+struct perf_kwork {
+ /*
+ * metadata
+ */
+ struct list_head class_list;
+
+ /*
+ * options for command
+ */
+ bool force;
+ const char *event_list_str;
+};
+
+#endif /* PERF_UTIL_KWORK_H */
--
2.30.GIT

2022-07-09 01:55:05

by Yang Jihong

Subject: [RFC v3 05/17] tools lib: Add list_last_entry_or_null

Add list_last_entry_or_null to get the last element from a list;
it returns NULL if the list is empty.
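
For reference, a minimal usage sketch (struct foo and last_foo_value() are
hypothetical, purely for illustration; the macro mirrors the existing
list_first_entry_or_null):

  struct foo {
          struct list_head list;
          int value;
  };

  /* Return the value of the newest element, or -1 if the list is empty. */
  static int last_foo_value(struct list_head *foo_list)
  {
          struct foo *last = list_last_entry_or_null(foo_list,
                                                     struct foo, list);

          return last ? last->value : -1;
  }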

Signed-off-by: Yang Jihong <[email protected]>
---
tools/include/linux/list.h | 11 +++++++++++
1 file changed, 11 insertions(+)

diff --git a/tools/include/linux/list.h b/tools/include/linux/list.h
index b2fc48d5478c..a4dfb6a7cc6a 100644
--- a/tools/include/linux/list.h
+++ b/tools/include/linux/list.h
@@ -384,6 +384,17 @@ static inline void list_splice_tail_init(struct list_head *list,
#define list_first_entry_or_null(ptr, type, member) \
(!list_empty(ptr) ? list_first_entry(ptr, type, member) : NULL)

+/**
+ * list_last_entry_or_null - get the last element from a list
+ * @ptr: the list head to take the element from.
+ * @type: the type of the struct this is embedded in.
+ * @member: the name of the list_head within the struct.
+ *
+ * Note that if the list is empty, it returns NULL.
+ */
+#define list_last_entry_or_null(ptr, type, member) \
+ (!list_empty(ptr) ? list_last_entry(ptr, type, member) : NULL)
+
/**
* list_next_entry - get the next element in list
* @pos: the type * to cursor
--
2.30.GIT

2022-07-09 01:55:29

by Yang Jihong

Subject: [RFC v3 07/17] perf kwork: Add irq report support

Implements irq kwork report function.
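
For the irq class, runtime is measured between the two tracepoints wired up
below, roughly:

  runtime = irq:irq_handler_exit time - irq:irq_handler_entry time

There is no raise tracepoint for irq, so no latency/delay is reported for
this class.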

Test cases:

# perf kwork record -- sleep 10
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 6.134 MB perf.data ]

# perf kwork report

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 1167.501 ms | 18284 | 1.096 ms | 44004.464905 s | 44004.466001 s |
eth0:10 | 0002 | 0.185 ms | 5 | 0.058 ms | 44005.012222 s | 44005.012280 s |
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork report -C 2

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
eth0:10 | 0002 | 0.185 ms | 5 | 0.058 ms | 44005.012222 s | 44005.012280 s |
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork report -C 3

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork report -i perf.data

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 1167.501 ms | 18284 | 1.096 ms | 44004.464905 s | 44004.466001 s |
eth0:10 | 0002 | 0.185 ms | 5 | 0.058 ms | 44005.012222 s | 44005.012280 s |
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork report -s max,freq

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 1167.501 ms | 18284 | 1.096 ms | 44004.464905 s | 44004.466001 s |
eth0:10 | 0002 | 0.185 ms | 5 | 0.058 ms | 44005.012222 s | 44005.012280 s |
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork report -S

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 1167.501 ms | 18284 | 1.096 ms | 44004.464905 s | 44004.466001 s |
eth0:10 | 0002 | 0.185 ms | 5 | 0.058 ms | 44005.012222 s | 44005.012280 s |
--------------------------------------------------------------------------------------------------------------------------------
Total count : 18289
Total runtime (msec) : 1167.686 (0.115% load average)
Total time span (msec) : 10159.155
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork report --time 44005,

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 402.173 ms | 4695 | 0.981 ms | 44007.831992 s | 44007.832973 s |
eth0:10 | 0002 | 0.089 ms | 2 | 0.058 ms | 44005.012222 s | 44005.012280 s |
--------------------------------------------------------------------------------------------------------------------------------

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/builtin-kwork.c | 63 ++++++++++++++++++++++++++++++++++++--
1 file changed, 61 insertions(+), 2 deletions(-)

diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index 9c488d647995..b1993be0a20a 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -479,16 +479,75 @@ static int report_exit_event(struct perf_kwork *kwork,
return 0;
}

+static struct kwork_class kwork_irq;
+static int process_irq_handler_entry_event(struct perf_tool *tool,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ struct perf_kwork *kwork = container_of(tool, struct perf_kwork, tool);
+
+ if (kwork->tp_handler->entry_event)
+ return kwork->tp_handler->entry_event(kwork, &kwork_irq,
+ evsel, sample, machine);
+ return 0;
+}
+
+static int process_irq_handler_exit_event(struct perf_tool *tool,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ struct perf_kwork *kwork = container_of(tool, struct perf_kwork, tool);
+
+ if (kwork->tp_handler->exit_event)
+ return kwork->tp_handler->exit_event(kwork, &kwork_irq,
+ evsel, sample, machine);
+ return 0;
+}
+
const struct evsel_str_handler irq_tp_handlers[] = {
- { "irq:irq_handler_entry", NULL, },
- { "irq:irq_handler_exit", NULL, },
+ { "irq:irq_handler_entry", process_irq_handler_entry_event, },
+ { "irq:irq_handler_exit", process_irq_handler_exit_event, },
};

+static int irq_class_init(struct kwork_class *class,
+ struct perf_session *session)
+{
+ if (perf_session__set_tracepoints_handlers(session, irq_tp_handlers)) {
+ pr_err("Failed to set irq tracepoints handlers\n");
+ return -1;
+ }
+
+ class->work_root = RB_ROOT_CACHED;
+ return 0;
+}
+
+static void irq_work_init(struct kwork_class *class,
+ struct kwork_work *work,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine __maybe_unused)
+{
+ work->class = class;
+ work->cpu = sample->cpu;
+ work->id = evsel__intval(evsel, sample, "irq");
+ work->name = evsel__strval(evsel, sample, "name");
+}
+
+static void irq_work_name(struct kwork_work *work, char *buf, int len)
+{
+ snprintf(buf, len, "%s:%" PRIu64 "", work->name, work->id);
+}
+
static struct kwork_class kwork_irq = {
.name = "irq",
.type = KWORK_CLASS_IRQ,
.nr_tracepoints = 2,
.tp_handlers = irq_tp_handlers,
+ .class_init = irq_class_init,
+ .work_init = irq_work_init,
+ .work_name = irq_work_name,
};

const struct evsel_str_handler softirq_tp_handlers[] = {
--
2.30.GIT

2022-07-09 01:55:37

by Yang Jihong

Subject: [RFC v3 09/17] perf kwork: Add workqueue report support

Implements workqueue report function.
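
As workqueue_work_init() and workqueue_work_name() below show, the displayed
name is obtained by resolving the tracepoint's "function" address with
machine__resolve_kernel_addr(), e.g. "(w)vmstat_update"; if the address
cannot be resolved, the raw work id is printed instead, in the form
"(w)0x<id>".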

Test cases:

# perf kwork -k workqueue rep

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(w)gc_worker | 0001 | 1912.389 ms | 173 | 12.896 ms | 44002.050787 s | 44002.063683 s |
(w)mix_interrupt_randomness | 0000 | 24.308 ms | 285 | 3.349 ms | 44004.784908 s | 44004.788257 s |
(w)e1000_watchdog | 0002 | 5.332 ms | 5 | 2.059 ms | 44000.914366 s | 44000.916424 s |
(w)vmstat_update | 0005 | 0.989 ms | 2 | 0.953 ms | 43997.986991 s | 43997.987944 s |
(w)vmstat_shepherd | 0000 | 0.964 ms | 8 | 0.195 ms | 43997.986453 s | 43997.986648 s |
(w)vmstat_update | 0003 | 0.306 ms | 6 | 0.077 ms | 44004.689543 s | 44004.689620 s |
(w)vmstat_update | 0000 | 0.196 ms | 5 | 0.049 ms | 44005.713732 s | 44005.713781 s |
(w)vmstat_update | 0001 | 0.162 ms | 2 | 0.130 ms | 44000.192034 s | 44000.192164 s |
(w)mix_interrupt_randomness | 0002 | 0.114 ms | 5 | 0.037 ms | 44005.012625 s | 44005.012662 s |
(w)vmstat_update | 0002 | 0.084 ms | 2 | 0.043 ms | 44004.817702 s | 44004.817745 s |
(w)vmstat_update | 0006 | 0.067 ms | 2 | 0.041 ms | 43997.987214 s | 43997.987254 s |
(w)neigh_periodic_work | 0004 | 0.039 ms | 1 | 0.039 ms | 43999.929935 s | 43999.929974 s |
(w)vmstat_update | 0007 | 0.037 ms | 1 | 0.037 ms | 43997.988969 s | 43997.989006 s |
(w)neigh_managed_work | 0001 | 0.036 ms | 1 | 0.036 ms | 43997.665813 s | 43997.665849 s |
(w)neigh_managed_work | 0004 | 0.036 ms | 1 | 0.036 ms | 44002.953507 s | 44002.953543 s |
(w)vmstat_update | 0004 | 0.027 ms | 1 | 0.027 ms | 43997.913973 s | 43997.914000 s |
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork -k workqueue rep -S

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(w)gc_worker | 0001 | 1912.389 ms | 173 | 12.896 ms | 44002.050787 s | 44002.063683 s |
(w)mix_interrupt_randomness | 0000 | 24.308 ms | 285 | 3.349 ms | 44004.784908 s | 44004.788257 s |
(w)e1000_watchdog | 0002 | 5.332 ms | 5 | 2.059 ms | 44000.914366 s | 44000.916424 s |
(w)vmstat_update | 0005 | 0.989 ms | 2 | 0.953 ms | 43997.986991 s | 43997.987944 s |
(w)vmstat_shepherd | 0000 | 0.964 ms | 8 | 0.195 ms | 43997.986453 s | 43997.986648 s |
(w)vmstat_update | 0003 | 0.306 ms | 6 | 0.077 ms | 44004.689543 s | 44004.689620 s |
(w)vmstat_update | 0000 | 0.196 ms | 5 | 0.049 ms | 44005.713732 s | 44005.713781 s |
(w)vmstat_update | 0001 | 0.162 ms | 2 | 0.130 ms | 44000.192034 s | 44000.192164 s |
(w)mix_interrupt_randomness | 0002 | 0.114 ms | 5 | 0.037 ms | 44005.012625 s | 44005.012662 s |
(w)vmstat_update | 0002 | 0.084 ms | 2 | 0.043 ms | 44004.817702 s | 44004.817745 s |
(w)vmstat_update | 0006 | 0.067 ms | 2 | 0.041 ms | 43997.987214 s | 43997.987254 s |
(w)neigh_periodic_work | 0004 | 0.039 ms | 1 | 0.039 ms | 43999.929935 s | 43999.929974 s |
(w)vmstat_update | 0007 | 0.037 ms | 1 | 0.037 ms | 43997.988969 s | 43997.989006 s |
(w)neigh_managed_work | 0001 | 0.036 ms | 1 | 0.036 ms | 43997.665813 s | 43997.665849 s |
(w)neigh_managed_work | 0004 | 0.036 ms | 1 | 0.036 ms | 44002.953507 s | 44002.953543 s |
(w)vmstat_update | 0004 | 0.027 ms | 1 | 0.027 ms | 43997.913973 s | 43997.914000 s |
--------------------------------------------------------------------------------------------------------------------------------
Total count : 500
Total runtime (msec) : 1945.085 (0.192% load average)
Total time span (msec) : 10155.026
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork -k workqueue rep -n vmstat_update

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(w)vmstat_update | 0005 | 0.989 ms | 2 | 0.953 ms | 43997.986991 s | 43997.987944 s |
(w)vmstat_update | 0003 | 0.306 ms | 6 | 0.077 ms | 44004.689543 s | 44004.689620 s |
(w)vmstat_update | 0000 | 0.196 ms | 5 | 0.049 ms | 44005.713732 s | 44005.713781 s |
(w)vmstat_update | 0001 | 0.162 ms | 2 | 0.130 ms | 44000.192034 s | 44000.192164 s |
(w)vmstat_update | 0002 | 0.084 ms | 2 | 0.043 ms | 44004.817702 s | 44004.817745 s |
(w)vmstat_update | 0006 | 0.067 ms | 2 | 0.041 ms | 43997.987214 s | 43997.987254 s |
(w)vmstat_update | 0007 | 0.037 ms | 1 | 0.037 ms | 43997.988969 s | 43997.989006 s |
(w)vmstat_update | 0004 | 0.027 ms | 1 | 0.027 ms | 43997.913973 s | 43997.914000 s |
--------------------------------------------------------------------------------------------------------------------------------

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/builtin-kwork.c | 74 ++++++++++++++++++++++++++++++++++++--
1 file changed, 72 insertions(+), 2 deletions(-)

diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index 8680fe3795d4..f7736b6f0815 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -657,17 +657,87 @@ static struct kwork_class kwork_softirq = {
.work_name = softirq_work_name,
};

+static struct kwork_class kwork_workqueue;
+static int process_workqueue_execute_start_event(struct perf_tool *tool,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ struct perf_kwork *kwork = container_of(tool, struct perf_kwork, tool);
+
+ if (kwork->tp_handler->entry_event)
+ return kwork->tp_handler->entry_event(kwork, &kwork_workqueue,
+ evsel, sample, machine);
+
+ return 0;
+}
+
+static int process_workqueue_execute_end_event(struct perf_tool *tool,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ struct perf_kwork *kwork = container_of(tool, struct perf_kwork, tool);
+
+ if (kwork->tp_handler->exit_event)
+ return kwork->tp_handler->exit_event(kwork, &kwork_workqueue,
+ evsel, sample, machine);
+
+ return 0;
+}
+
const struct evsel_str_handler workqueue_tp_handlers[] = {
{ "workqueue:workqueue_activate_work", NULL, },
- { "workqueue:workqueue_execute_start", NULL, },
- { "workqueue:workqueue_execute_end", NULL, },
+ { "workqueue:workqueue_execute_start", process_workqueue_execute_start_event, },
+ { "workqueue:workqueue_execute_end", process_workqueue_execute_end_event, },
};

+static int workqueue_class_init(struct kwork_class *class,
+ struct perf_session *session)
+{
+ if (perf_session__set_tracepoints_handlers(session,
+ workqueue_tp_handlers)) {
+ pr_err("Failed to set workqueue tracepoints handlers\n");
+ return -1;
+ }
+
+ class->work_root = RB_ROOT_CACHED;
+ return 0;
+}
+
+static void workqueue_work_init(struct kwork_class *class,
+ struct kwork_work *work,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ char *modp = NULL;
+ unsigned long long function_addr = evsel__intval(evsel,
+ sample, "function");
+
+ work->class = class;
+ work->cpu = sample->cpu;
+ work->id = evsel__intval(evsel, sample, "work");
+ work->name = function_addr == 0 ? NULL :
+ machine__resolve_kernel_addr(machine, &function_addr, &modp);
+}
+
+static void workqueue_work_name(struct kwork_work *work, char *buf, int len)
+{
+ if (work->name != NULL)
+ snprintf(buf, len, "(w)%s", work->name);
+ else
+ snprintf(buf, len, "(w)0x%" PRIx64, work->id);
+}
+
static struct kwork_class kwork_workqueue = {
.name = "workqueue",
.type = KWORK_CLASS_WORKQUEUE,
.nr_tracepoints = 3,
.tp_handlers = workqueue_tp_handlers,
+ .class_init = workqueue_class_init,
+ .work_init = workqueue_work_init,
+ .work_name = workqueue_work_name,
};

static struct kwork_class *kwork_class_supported_list[KWORK_CLASS_MAX] = {
--
2.30.GIT

2022-07-09 01:56:46

by Yang Jihong

Subject: [RFC v3 13/17] perf kwork: Implement perf kwork timehist

Implements the framework of perf kwork timehist, which provides a
per-event analysis of kernel work.
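
The two time columns are computed per work item roughly as follows (see
timehist_print_event() below):

  Runtime   = exit timestamp  - entry timestamp
  Delaytime = entry timestamp - raise timestamp  (blank if no raise event was seen)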

Test cases:

# perf kwork tim
Runtime start Runtime end Cpu Kwork name Runtime Delaytime
(TYPE)NAME:NUM (msec) (msec)
----------------- ----------------- ------ ------------------------------ ---------- ----------
91576.060290 91576.060344 [0000] (s)RCU:9 0.055 0.111
91576.061470 91576.061547 [0000] (s)SCHED:7 0.077 0.073
91576.062604 91576.062697 [0001] (s)RCU:9 0.094 0.409
91576.064443 91576.064517 [0002] (s)RCU:9 0.074 0.114
91576.065144 91576.065211 [0000] (s)SCHED:7 0.067 0.058
91576.066564 91576.066609 [0003] (s)RCU:9 0.045 0.110
91576.068495 91576.068559 [0000] (s)SCHED:7 0.064 0.059
91576.068900 91576.068996 [0004] (s)RCU:9 0.096 0.726
91576.069364 91576.069420 [0002] (s)RCU:9 0.056 0.082
91576.069649 91576.069701 [0004] (s)RCU:9 0.052 0.111
91576.070147 91576.070206 [0000] (s)SCHED:7 0.060 0.057
91576.073147 91576.073202 [0000] (s)SCHED:7 0.054 0.060
<SNIP>

# perf kwork tim --max-stack 2 -g
Runtime start Runtime end Cpu Kwork name Runtime Delaytime
(TYPE)NAME:NUM (msec) (msec)
----------------- ----------------- ------ ------------------------------ ---------- ----------
91576.060290 91576.060344 [0000] (s)RCU:9 0.055 0.111 irq_exit_rcu <- sysvec_apic_timer_interrupt
91576.061470 91576.061547 [0000] (s)SCHED:7 0.077 0.073 irq_exit_rcu <- sysvec_call_function_single
91576.062604 91576.062697 [0001] (s)RCU:9 0.094 0.409 irq_exit_rcu <- sysvec_apic_timer_interrupt
91576.064443 91576.064517 [0002] (s)RCU:9 0.074 0.114 irq_exit_rcu <- sysvec_apic_timer_interrupt
91576.065144 91576.065211 [0000] (s)SCHED:7 0.067 0.058 irq_exit_rcu <- sysvec_call_function_single
91576.066564 91576.066609 [0003] (s)RCU:9 0.045 0.110 irq_exit_rcu <- sysvec_apic_timer_interrupt
91576.068495 91576.068559 [0000] (s)SCHED:7 0.064 0.059 irq_exit_rcu <- sysvec_call_function_single
91576.068900 91576.068996 [0004] (s)RCU:9 0.096 0.726 irq_exit_rcu <- sysvec_apic_timer_interrupt
91576.069364 91576.069420 [0002] (s)RCU:9 0.056 0.082 irq_exit_rcu <- sysvec_apic_timer_interrupt
91576.069649 91576.069701 [0004] (s)RCU:9 0.052 0.111 irq_exit_rcu <- sysvec_apic_timer_interrupt
<SNIP>

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/Documentation/perf-kwork.txt | 65 ++++++
tools/perf/builtin-kwork.c | 299 +++++++++++++++++++++++-
tools/perf/util/kwork.h | 3 +
3 files changed, 366 insertions(+), 1 deletion(-)

diff --git a/tools/perf/Documentation/perf-kwork.txt b/tools/perf/Documentation/perf-kwork.txt
index 069981457de1..51c1625bacae 100644
--- a/tools/perf/Documentation/perf-kwork.txt
+++ b/tools/perf/Documentation/perf-kwork.txt
@@ -21,10 +21,36 @@ There are several variants of 'perf kwork':

'perf kwork latency' to report the per kwork latencies.

+ 'perf kwork timehist' provides an analysis of kernel work events.
+
Example usage:
perf kwork record -- sleep 1
perf kwork report
perf kwork latency
+ perf kwork timehist
+
+ By default it shows the individual work events such as irq and workqueue,
+ including the run time and delay (time between raise and actual entry):
+
+ Runtime start Runtime end Cpu Kwork name Runtime Delaytime
+ (TYPE)NAME:NUM (msec) (msec)
+ ----------------- ----------------- ------ ------------------------- ---------- ----------
+ 1811186.976062 1811186.976327 [0000] (s)RCU:9 0.266 0.114
+ 1811186.978452 1811186.978547 [0000] (s)SCHED:7 0.095 0.171
+ 1811186.980327 1811186.980490 [0000] (s)SCHED:7 0.162 0.083
+ 1811186.981221 1811186.981271 [0000] (s)SCHED:7 0.050 0.077
+ 1811186.984267 1811186.984318 [0000] (s)SCHED:7 0.051 0.075
+ 1811186.987252 1811186.987315 [0000] (s)SCHED:7 0.063 0.081
+ 1811186.987785 1811186.987843 [0006] (s)RCU:9 0.058 0.645
+ 1811186.988319 1811186.988383 [0000] (s)SCHED:7 0.064 0.143
+ 1811186.989404 1811186.989607 [0002] (s)TIMER:1 0.203 0.111
+ 1811186.989660 1811186.989732 [0002] (s)SCHED:7 0.072 0.310
+ 1811186.991295 1811186.991407 [0002] eth0:10 0.112
+ 1811186.991639 1811186.991734 [0002] (s)NET_RX:3 0.095 0.277
+ 1811186.989860 1811186.991826 [0002] (w)vmstat_shepherd 1.966 0.345
+ ...
+
+ Times are in msec.usec.

OPTIONS
-------
@@ -100,6 +126,45 @@ OPTIONS for 'perf kwork latency'
stop time is not given (i.e, time string is 'x.y,') then analysis goes
to end of file.

+OPTIONS for 'perf kwork timehist'
+---------------------------------
+
+-C::
+--cpu::
+ Only show events for the given CPU(s) (comma separated list).
+
+-g::
+--call-graph::
+ Display call chains if present (default off).
+
+-i::
+--input::
+ Input file name. (default: perf.data unless stdin is a fifo)
+
+-k::
+--vmlinux=<file>::
+ Vmlinux pathname
+
+-n::
+--name::
+ Only show events for the given name.
+
+--kallsyms=<file>::
+ Kallsyms pathname
+
+--max-stack::
+ Maximum number of functions to display in backtrace, default 5.
+
+--symfs=<directory>::
+ Look for files with symbols relative to this directory.
+
+--time::
+ Only analyze samples within given time window: <start>,<stop>. Times
+ have the format seconds.microseconds. If start is not given (i.e., time
+ string is ',x.y') then analysis starts at the beginning of the file. If
+ stop time is not given (i.e, time string is 'x.y,') then analysis goes
+ to end of file.
+
SEE ALSO
--------
linkperf:perf-record[1]
diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index 4902bc73aca1..31dcfcfcc5a1 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -35,10 +35,12 @@
#define PRINT_TIMESTAMP_WIDTH 17
#define PRINT_KWORK_NAME_WIDTH 30
#define RPINT_DECIMAL_WIDTH 3
+#define PRINT_BRACKETPAIR_WIDTH 2
#define PRINT_TIME_UNIT_SEC_WIDTH 2
#define PRINT_TIME_UNIT_MESC_WIDTH 3
#define PRINT_RUNTIME_HEADER_WIDTH (PRINT_RUNTIME_WIDTH + PRINT_TIME_UNIT_MESC_WIDTH)
#define PRINT_LATENCY_HEADER_WIDTH (PRINT_LATENCY_WIDTH + PRINT_TIME_UNIT_MESC_WIDTH)
+#define PRINT_TIMEHIST_CPU_WIDTH (PRINT_CPU_WIDTH + PRINT_BRACKETPAIR_WIDTH)
#define PRINT_TIMESTAMP_HEADER_WIDTH (PRINT_TIMESTAMP_WIDTH + PRINT_TIME_UNIT_SEC_WIDTH)

struct sort_dimension {
@@ -574,6 +576,185 @@ static int latency_entry_event(struct perf_kwork *kwork,
return 0;
}

+static void timehist_save_callchain(struct perf_kwork *kwork,
+ struct perf_sample *sample,
+ struct evsel *evsel,
+ struct machine *machine)
+{
+ struct symbol *sym;
+ struct thread *thread;
+ struct callchain_cursor_node *node;
+ struct callchain_cursor *cursor = &callchain_cursor;
+
+ if (!kwork->show_callchain || sample->callchain == NULL)
+ return;
+
+ /* want main thread for process - has maps */
+ thread = machine__findnew_thread(machine, sample->pid, sample->pid);
+ if (thread == NULL) {
+ pr_debug("Failed to get thread for pid %d\n", sample->pid);
+ return;
+ }
+
+ if (thread__resolve_callchain(thread, cursor, evsel, sample,
+ NULL, NULL, kwork->max_stack + 2) != 0) {
+ pr_debug("Failed to resolve callchain, skipping\n");
+ goto out_put;
+ }
+
+ callchain_cursor_commit(cursor);
+
+ while (true) {
+ node = callchain_cursor_current(cursor);
+ if (node == NULL)
+ break;
+
+ sym = node->ms.sym;
+ if (sym) {
+ if (!strcmp(sym->name, "__softirqentry_text_start") ||
+ !strcmp(sym->name, "__do_softirq"))
+ sym->ignore = 1;
+ }
+
+ callchain_cursor_advance(cursor);
+ }
+
+out_put:
+ thread__put(thread);
+}
+
+static void timehist_print_event(struct perf_kwork *kwork,
+ struct kwork_work *work,
+ struct kwork_atom *atom,
+ struct perf_sample *sample,
+ struct addr_location *al)
+{
+ char entrytime[32], exittime[32];
+ char kwork_name[PRINT_KWORK_NAME_WIDTH];
+
+ /*
+ * runtime start
+ */
+ timestamp__scnprintf_usec(atom->time,
+ entrytime, sizeof(entrytime));
+ printf(" %*s ", PRINT_TIMESTAMP_WIDTH, entrytime);
+
+ /*
+ * runtime end
+ */
+ timestamp__scnprintf_usec(sample->time,
+ exittime, sizeof(exittime));
+ printf(" %*s ", PRINT_TIMESTAMP_WIDTH, exittime);
+
+ /*
+ * cpu
+ */
+ printf(" [%0*d] ", PRINT_CPU_WIDTH, work->cpu);
+
+ /*
+ * kwork name
+ */
+ if (work->class && work->class->work_name) {
+ work->class->work_name(work, kwork_name,
+ PRINT_KWORK_NAME_WIDTH);
+ printf(" %-*s ", PRINT_KWORK_NAME_WIDTH, kwork_name);
+ } else
+ printf(" %-*s ", PRINT_KWORK_NAME_WIDTH, "");
+
+ /*
+ *runtime
+ */
+ printf(" %*.*f ",
+ PRINT_RUNTIME_WIDTH, RPINT_DECIMAL_WIDTH,
+ (double)(sample->time - atom->time) / NSEC_PER_MSEC);
+
+ /*
+ * delaytime
+ */
+ if (atom->prev != NULL)
+ printf(" %*.*f ", PRINT_LATENCY_WIDTH, RPINT_DECIMAL_WIDTH,
+ (double)(atom->time - atom->prev->time) / NSEC_PER_MSEC);
+ else
+ printf(" %*s ", PRINT_LATENCY_WIDTH, " ");
+
+ /*
+ * callchain
+ */
+ if (kwork->show_callchain) {
+ printf(" ");
+ sample__fprintf_sym(sample, al, 0,
+ EVSEL__PRINT_SYM | EVSEL__PRINT_ONELINE |
+ EVSEL__PRINT_CALLCHAIN_ARROW |
+ EVSEL__PRINT_SKIP_IGNORED,
+ &callchain_cursor, symbol_conf.bt_stop_list,
+ stdout);
+ }
+
+ printf("\n");
+}
+
+static int timehist_raise_event(struct perf_kwork *kwork,
+ struct kwork_class *class,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ return work_push_atom(kwork, class, KWORK_TRACE_RAISE,
+ KWORK_TRACE_MAX, evsel, sample,
+ machine, NULL);
+}
+
+static int timehist_entry_event(struct perf_kwork *kwork,
+ struct kwork_class *class,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ int ret;
+ struct kwork_work *work = NULL;
+
+ ret = work_push_atom(kwork, class, KWORK_TRACE_ENTRY,
+ KWORK_TRACE_RAISE, evsel, sample,
+ machine, &work);
+ if (ret)
+ return ret;
+
+ if (work != NULL)
+ timehist_save_callchain(kwork, sample, evsel, machine);
+
+ return 0;
+}
+
+static int timehist_exit_event(struct perf_kwork *kwork,
+ struct kwork_class *class,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ struct kwork_atom *atom = NULL;
+ struct kwork_work *work = NULL;
+ struct addr_location al;
+
+ if (machine__resolve(machine, &al, sample) < 0) {
+ pr_debug("Problem processing event, skipping it\n");
+ return -1;
+ }
+
+ atom = work_pop_atom(kwork, class, KWORK_TRACE_EXIT,
+ KWORK_TRACE_ENTRY, evsel, sample,
+ machine, &work);
+ if (work == NULL)
+ return -1;
+
+ if (atom != NULL) {
+ work->nr_atoms++;
+ timehist_print_event(kwork, work, atom, sample, &al);
+ atom_del(atom);
+ }
+
+ return 0;
+}
+
static struct kwork_class kwork_irq;
static int process_irq_handler_entry_event(struct perf_tool *tool,
struct evsel *evsel,
@@ -991,6 +1172,42 @@ static int report_print_header(struct perf_kwork *kwork)
return ret;
}

+static void timehist_print_header(void)
+{
+ /*
+ * header row
+ */
+ printf(" %-*s %-*s %-*s %-*s %-*s %-*s\n",
+ PRINT_TIMESTAMP_WIDTH, "Runtime start",
+ PRINT_TIMESTAMP_WIDTH, "Runtime end",
+ PRINT_TIMEHIST_CPU_WIDTH, "Cpu",
+ PRINT_KWORK_NAME_WIDTH, "Kwork name",
+ PRINT_RUNTIME_WIDTH, "Runtime",
+ PRINT_RUNTIME_WIDTH, "Delaytime");
+
+ /*
+ * units row
+ */
+ printf(" %-*s %-*s %-*s %-*s %-*s %-*s\n",
+ PRINT_TIMESTAMP_WIDTH, "",
+ PRINT_TIMESTAMP_WIDTH, "",
+ PRINT_TIMEHIST_CPU_WIDTH, "",
+ PRINT_KWORK_NAME_WIDTH, "(TYPE)NAME:NUM",
+ PRINT_RUNTIME_WIDTH, "(msec)",
+ PRINT_RUNTIME_WIDTH, "(msec)");
+
+ /*
+ * separator
+ */
+ printf(" %.*s %.*s %.*s %.*s %.*s %.*s\n",
+ PRINT_TIMESTAMP_WIDTH, graph_dotted_line,
+ PRINT_TIMESTAMP_WIDTH, graph_dotted_line,
+ PRINT_TIMEHIST_CPU_WIDTH, graph_dotted_line,
+ PRINT_KWORK_NAME_WIDTH, graph_dotted_line,
+ PRINT_RUNTIME_WIDTH, graph_dotted_line,
+ PRINT_RUNTIME_WIDTH, graph_dotted_line);
+}
+
static void print_summary(struct perf_kwork *kwork)
{
u64 time = kwork->timeend - kwork->timestart;
@@ -1083,6 +1300,7 @@ static int perf_kwork__check_config(struct perf_kwork *kwork,
struct perf_session *session)
{
int ret;
+ struct evsel *evsel;
struct kwork_class *class;

static struct trace_kwork_handler report_ops = {
@@ -1093,6 +1311,11 @@ static int perf_kwork__check_config(struct perf_kwork *kwork,
.raise_event = latency_raise_event,
.entry_event = latency_entry_event,
};
+ static struct trace_kwork_handler timehist_ops = {
+ .raise_event = timehist_raise_event,
+ .entry_event = timehist_entry_event,
+ .exit_event = timehist_exit_event,
+ };

switch (kwork->report) {
case KWORK_REPORT_RUNTIME:
@@ -1101,6 +1324,9 @@ static int perf_kwork__check_config(struct perf_kwork *kwork,
case KWORK_REPORT_LATENCY:
kwork->tp_handler = &latency_ops;
break;
+ case KWORK_REPORT_TIMEHIST:
+ kwork->tp_handler = &timehist_ops;
+ break;
default:
pr_debug("Invalid report type %d\n", kwork->report);
return -1;
@@ -1129,6 +1355,14 @@ static int perf_kwork__check_config(struct perf_kwork *kwork,
}
}

+ list_for_each_entry(evsel, &session->evlist->core.entries, core.node) {
+ if (kwork->show_callchain && !evsel__has_callchain(evsel)) {
+ pr_debug("Samples do not have callchains\n");
+ kwork->show_callchain = 0;
+ symbol_conf.use_callchain = 0;
+ }
+ }
+
return 0;
}

@@ -1162,6 +1396,9 @@ static int perf_kwork__read_events(struct perf_kwork *kwork)
goto out_delete;
}

+ if (kwork->report == KWORK_REPORT_TIMEHIST)
+ timehist_print_header();
+
ret = perf_session__process_events(session);
if (ret) {
pr_debug("Failed to process events, error %d\n", ret);
@@ -1255,6 +1492,31 @@ static int perf_kwork__process_tracepoint_sample(struct perf_tool *tool,
return err;
}

+static int perf_kwork__timehist(struct perf_kwork *kwork)
+{
+ /*
+ * event handlers for timehist option
+ */
+ kwork->tool.comm = perf_event__process_comm;
+ kwork->tool.exit = perf_event__process_exit;
+ kwork->tool.fork = perf_event__process_fork;
+ kwork->tool.attr = perf_event__process_attr;
+ kwork->tool.tracing_data = perf_event__process_tracing_data;
+ kwork->tool.build_id = perf_event__process_build_id;
+ kwork->tool.ordered_events = true;
+ kwork->tool.ordering_requires_timestamps = true;
+ symbol_conf.use_callchain = kwork->show_callchain;
+
+ if (symbol__validate_sym_arguments()) {
+ pr_err("Failed to validate sym arguments\n");
+ return -1;
+ }
+
+ setup_pager();
+
+ return perf_kwork__read_events(kwork);
+}
+
static void setup_event_list(struct perf_kwork *kwork,
const struct option *options,
const char * const usage_msg[])
@@ -1367,6 +1629,8 @@ int cmd_kwork(int argc, const char **argv)
.event_list_str = NULL,
.summary = false,
.sort_order = NULL,
+ .show_callchain = false,
+ .max_stack = 5,

.timestart = 0,
.timeend = 0,
@@ -1418,6 +1682,27 @@ int cmd_kwork(int argc, const char **argv)
"input file name"),
OPT_PARENT(kwork_options)
};
+ const struct option timehist_options[] = {
+ OPT_STRING('k', "vmlinux", &symbol_conf.vmlinux_name,
+ "file", "vmlinux pathname"),
+ OPT_STRING(0, "kallsyms", &symbol_conf.kallsyms_name,
+ "file", "kallsyms pathname"),
+ OPT_BOOLEAN('g', "call-graph", &kwork.show_callchain,
+ "Display call chains if present"),
+ OPT_UINTEGER(0, "max-stack", &kwork.max_stack,
+ "Maximum number of functions to display backtrace."),
+ OPT_STRING(0, "symfs", &symbol_conf.symfs, "directory",
+ "Look for files with symbols relative to this directory"),
+ OPT_STRING(0, "time", &kwork.time_str, "str",
+ "Time span for analysis (start,stop)"),
+ OPT_STRING('C', "cpu", &kwork.cpu_list, "cpu",
+ "list of cpus to profile"),
+ OPT_STRING('n', "name", &kwork.profile_name, "name",
+ "event name to profile"),
+ OPT_STRING('i', "input", &input_name, "file",
+ "input file name"),
+ OPT_PARENT(kwork_options)
+ };

const char *kwork_usage[] = {
NULL,
@@ -1431,8 +1716,12 @@ int cmd_kwork(int argc, const char **argv)
"perf kwork latency [<options>]",
NULL
};
+ const char * const timehist_usage[] = {
+ "perf kwork timehist [<options>]",
+ NULL
+ };
const char *const kwork_subcommands[] = {
- "record", "report", "latency", NULL
+ "record", "report", "latency", "timehist", NULL
};

argc = parse_options_subcommand(argc, argv, kwork_options,
@@ -1466,6 +1755,14 @@ int cmd_kwork(int argc, const char **argv)
kwork.report = KWORK_REPORT_LATENCY;
setup_sorting(&kwork, latency_options, latency_usage);
return perf_kwork__report(&kwork);
+ } else if (strlen(argv[0]) > 2 && strstarts("timehist", argv[0])) {
+ if (argc > 1) {
+ argc = parse_options(argc, argv, timehist_options, timehist_usage, 0);
+ if (argc)
+ usage_with_options(timehist_usage, timehist_options);
+ }
+ kwork.report = KWORK_REPORT_TIMEHIST;
+ return perf_kwork__timehist(&kwork);
} else
usage_with_options(kwork_usage, kwork_options);

diff --git a/tools/perf/util/kwork.h b/tools/perf/util/kwork.h
index e540373ab14e..6a06194304b8 100644
--- a/tools/perf/util/kwork.h
+++ b/tools/perf/util/kwork.h
@@ -22,6 +22,7 @@ enum kwork_class_type {
enum kwork_report_type {
KWORK_REPORT_RUNTIME,
KWORK_REPORT_LATENCY,
+ KWORK_REPORT_TIMEHIST,
};

enum kwork_trace_type {
@@ -200,6 +201,8 @@ struct perf_kwork {
*/
bool summary;
const char *sort_order;
+ bool show_callchain;
+ unsigned int max_stack;

/*
* statistics
--
2.30.GIT

2022-07-09 01:58:28

by Yang Jihong

Subject: [RFC v3 17/17] perf kwork: Add workqueue trace bpf support

Implements workqueue trace bpf function.
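
As workqueue_load_prepare() below shows, only the BPF programs needed for the
chosen report are loaded: report_workqueue_execute_start/_end for the runtime
report ("rep"), and latency_workqueue_activate_work plus
latency_workqueue_execute_start for the latency report ("lat").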

Test cases:

# perf kwork -k workqueue lat -b
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(w)addrconf_verify_work | 0002 | 5.856 ms | 1 | 5.856 ms | 111994.634313 s | 111994.640169 s |
(w)vmstat_update | 0001 | 1.247 ms | 1 | 1.247 ms | 111996.462651 s | 111996.463899 s |
(w)neigh_periodic_work | 0001 | 1.183 ms | 1 | 1.183 ms | 111996.462789 s | 111996.463973 s |
(w)neigh_managed_work | 0001 | 0.989 ms | 2 | 1.635 ms | 111996.462820 s | 111996.464455 s |
(w)wb_workfn | 0000 | 0.667 ms | 1 | 0.667 ms | 111996.384273 s | 111996.384940 s |
(w)bpf_prog_free_deferred | 0001 | 0.495 ms | 1 | 0.495 ms | 111986.314201 s | 111986.314696 s |
(w)mix_interrupt_randomness | 0002 | 0.421 ms | 6 | 0.749 ms | 111995.927750 s | 111995.928499 s |
(w)vmstat_shepherd | 0000 | 0.374 ms | 2 | 0.385 ms | 111991.265242 s | 111991.265627 s |
(w)e1000_watchdog | 0002 | 0.356 ms | 5 | 0.390 ms | 111994.528380 s | 111994.528770 s |
(w)vmstat_update | 0000 | 0.231 ms | 2 | 0.365 ms | 111996.384407 s | 111996.384772 s |
(w)flush_to_ldisc | 0006 | 0.165 ms | 1 | 0.165 ms | 111995.930606 s | 111995.930771 s |
(w)flush_to_ldisc | 0000 | 0.094 ms | 2 | 0.095 ms | 111996.460453 s | 111996.460548 s |
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork -k workqueue rep -b
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(w)e1000_watchdog | 0002 | 0.627 ms | 2 | 0.324 ms | 112002.720665 s | 112002.720989 s |
(w)flush_to_ldisc | 0007 | 0.598 ms | 2 | 0.534 ms | 112000.875226 s | 112000.875761 s |
(w)wq_barrier_func | 0007 | 0.492 ms | 1 | 0.492 ms | 112000.876981 s | 112000.877473 s |
(w)flush_to_ldisc | 0007 | 0.281 ms | 1 | 0.281 ms | 112005.826882 s | 112005.827163 s |
(w)mix_interrupt_randomness | 0002 | 0.229 ms | 3 | 0.102 ms | 112005.825671 s | 112005.825774 s |
(w)vmstat_shepherd | 0000 | 0.202 ms | 1 | 0.202 ms | 112001.504511 s | 112001.504713 s |
(w)bpf_prog_free_deferred | 0001 | 0.181 ms | 1 | 0.181 ms | 112000.883251 s | 112000.883432 s |
(w)wb_workfn | 0007 | 0.130 ms | 1 | 0.130 ms | 112001.505195 s | 112001.505325 s |
(w)vmstat_update | 0000 | 0.053 ms | 1 | 0.053 ms | 112001.504763 s | 112001.504815 s |
--------------------------------------------------------------------------------------------------------------------------------

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/util/bpf_kwork.c | 22 +++++-
tools/perf/util/bpf_skel/kwork_trace.bpf.c | 84 ++++++++++++++++++++++
2 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/tools/perf/util/bpf_kwork.c b/tools/perf/util/bpf_kwork.c
index 1d76ca499ff6..fe9b0bdbb947 100644
--- a/tools/perf/util/bpf_kwork.c
+++ b/tools/perf/util/bpf_kwork.c
@@ -120,11 +120,31 @@ static struct kwork_class_bpf kwork_softirq_bpf = {
.get_work_name = get_work_name_from_map,
};

+static void workqueue_load_prepare(struct perf_kwork *kwork)
+{
+ if (kwork->report == KWORK_REPORT_RUNTIME) {
+ bpf_program__set_autoload(
+ skel->progs.report_workqueue_execute_start, true);
+ bpf_program__set_autoload(
+ skel->progs.report_workqueue_execute_end, true);
+ } else if (kwork->report == KWORK_REPORT_LATENCY) {
+ bpf_program__set_autoload(
+ skel->progs.latency_workqueue_activate_work, true);
+ bpf_program__set_autoload(
+ skel->progs.latency_workqueue_execute_start, true);
+ }
+}
+
+static struct kwork_class_bpf kwork_workqueue_bpf = {
+ .load_prepare = workqueue_load_prepare,
+ .get_work_name = get_work_name_from_map,
+};
+
static struct kwork_class_bpf *
kwork_class_bpf_supported_list[KWORK_CLASS_MAX] = {
[KWORK_CLASS_IRQ] = &kwork_irq_bpf,
[KWORK_CLASS_SOFTIRQ] = &kwork_softirq_bpf,
- [KWORK_CLASS_WORKQUEUE] = NULL,
+ [KWORK_CLASS_WORKQUEUE] = &kwork_workqueue_bpf,
};

static bool valid_kwork_class_type(enum kwork_class_type type)
diff --git a/tools/perf/util/bpf_skel/kwork_trace.bpf.c b/tools/perf/util/bpf_skel/kwork_trace.bpf.c
index a9afc64f2d67..238b03f9ea2b 100644
--- a/tools/perf/util/bpf_skel/kwork_trace.bpf.c
+++ b/tools/perf/util/bpf_skel/kwork_trace.bpf.c
@@ -167,6 +167,15 @@ static __always_inline void do_update_name(void *map,
bpf_map_update_elem(map, key, name, BPF_ANY);
}

+static __always_inline int update_timestart(void *map, struct work_key *key)
+{
+ if (!trace_event_match(key, NULL))
+ return 0;
+
+ do_update_timestart(map, key);
+ return 0;
+}
+
static __always_inline int update_timestart_and_name(void *time_map,
void *names_map,
struct work_key *key,
@@ -192,6 +201,21 @@ static __always_inline int update_timeend(void *report_map,
return 0;
}

+static __always_inline int update_timeend_and_name(void *report_map,
+ void *time_map,
+ void *names_map,
+ struct work_key *key,
+ char *name)
+{
+ if (!trace_event_match(key, name))
+ return 0;
+
+ do_update_timeend(report_map, time_map, key);
+ do_update_name(names_map, key, name);
+
+ return 0;
+}
+
SEC("tracepoint/irq/irq_handler_entry")
int report_irq_handler_entry(struct trace_event_raw_irq_handler_entry *ctx)
{
@@ -294,4 +318,64 @@ int latency_softirq_entry(struct trace_event_raw_softirq *ctx)
return update_timeend(&perf_kwork_report, &perf_kwork_time, &key);
}

+SEC("tracepoint/workqueue/workqueue_execute_start")
+int report_workqueue_execute_start(struct trace_event_raw_workqueue_execute_start *ctx)
+{
+ struct work_key key = {
+ .type = KWORK_CLASS_WORKQUEUE,
+ .cpu = bpf_get_smp_processor_id(),
+ .id = (__u64)ctx->work,
+ };
+
+ return update_timestart(&perf_kwork_time, &key);
+}
+
+SEC("tracepoint/workqueue/workqueue_execute_end")
+int report_workqueue_execute_end(struct trace_event_raw_workqueue_execute_end *ctx)
+{
+ char name[MAX_KWORKNAME];
+ struct work_key key = {
+ .type = KWORK_CLASS_WORKQUEUE,
+ .cpu = bpf_get_smp_processor_id(),
+ .id = (__u64)ctx->work,
+ };
+ unsigned long long func_addr = (unsigned long long)ctx->function;
+
+ __builtin_memset(name, 0, sizeof(name));
+ bpf_snprintf(name, sizeof(name), "%ps", &func_addr, sizeof(func_addr));
+
+ return update_timeend_and_name(&perf_kwork_report, &perf_kwork_time,
+ &perf_kwork_names, &key, name);
+}
+
+SEC("tracepoint/workqueue/workqueue_activate_work")
+int latency_workqueue_activate_work(struct trace_event_raw_workqueue_activate_work *ctx)
+{
+ struct work_key key = {
+ .type = KWORK_CLASS_WORKQUEUE,
+ .cpu = bpf_get_smp_processor_id(),
+ .id = (__u64)ctx->work,
+ };
+
+ return update_timestart(&perf_kwork_time, &key);
+}
+
+SEC("tracepoint/workqueue/workqueue_execute_start")
+int latency_workqueue_execute_start(struct trace_event_raw_workqueue_execute_start *ctx)
+{
+ char name[MAX_KWORKNAME];
+ struct work_key key = {
+ .type = KWORK_CLASS_WORKQUEUE,
+ .cpu = bpf_get_smp_processor_id(),
+ .id = (__u64)ctx->work,
+ };
+ unsigned long long func_addr = (unsigned long long)ctx->function;
+
+ __builtin_memset(name, 0, sizeof(name));
+ bpf_snprintf(name, sizeof(name), "%ps", &func_addr, sizeof(func_addr));
+
+ return update_timeend_and_name(&perf_kwork_report, &perf_kwork_time,
+ &perf_kwork_names, &key, name);
+}
+
char LICENSE[] SEC("license") = "Dual BSD/GPL";
--
2.30.GIT

2022-07-09 02:29:25

by Yang Jihong

[permalink] [raw]
Subject: [RFC v3 02/17] perf kwork: Add irq kwork record support

Record interrupt events irq:irq_handler_entry & irq_handler_exit

Test cases:

# perf kwork record -o perf_kwork.data -- sleep 1
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 0.556 MB perf_kwork.data ]
#
# perf evlist -i perf_kwork.data
irq:irq_handler_entry
irq:irq_handler_exit
dummy:HG
# Tip: use 'perf evlist --trace-fields' to show fields for tracepoint events
#

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/Documentation/perf-kwork.txt | 2 +-
tools/perf/builtin-kwork.c | 15 ++++++++++++++-
tools/perf/util/kwork.h | 1 +
3 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/tools/perf/Documentation/perf-kwork.txt b/tools/perf/Documentation/perf-kwork.txt
index dc1e36da57bb..57bd5fa7d5c9 100644
--- a/tools/perf/Documentation/perf-kwork.txt
+++ b/tools/perf/Documentation/perf-kwork.txt
@@ -32,7 +32,7 @@ OPTIONS

-k::
--kwork::
- List of kwork to profile
+ List of kwork to profile (irq, etc)

-v::
--verbose::
diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index f3552c56ede3..a26b7fde1e38 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -25,7 +25,20 @@
#include <linux/time64.h>
#include <linux/zalloc.h>

+const struct evsel_str_handler irq_tp_handlers[] = {
+ { "irq:irq_handler_entry", NULL, },
+ { "irq:irq_handler_exit", NULL, },
+};
+
+static struct kwork_class kwork_irq = {
+ .name = "irq",
+ .type = KWORK_CLASS_IRQ,
+ .nr_tracepoints = 2,
+ .tp_handlers = irq_tp_handlers,
+};
+
static struct kwork_class *kwork_class_supported_list[KWORK_CLASS_MAX] = {
+ [KWORK_CLASS_IRQ] = &kwork_irq,
};

static void setup_event_list(struct perf_kwork *kwork,
@@ -132,7 +145,7 @@ int cmd_kwork(int argc, const char **argv)
OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace,
"dump raw trace in ASCII"),
OPT_STRING('k', "kwork", &kwork.event_list_str, "kwork",
- "list of kwork to profile"),
+ "list of kwork to profile (irq, etc)"),
OPT_BOOLEAN('f', "force", &kwork.force, "don't complain, do it"),
OPT_END()
};
diff --git a/tools/perf/util/kwork.h b/tools/perf/util/kwork.h
index 6950636aab2a..f1d89cb058fc 100644
--- a/tools/perf/util/kwork.h
+++ b/tools/perf/util/kwork.h
@@ -13,6 +13,7 @@
#include <linux/bitmap.h>

enum kwork_class_type {
+ KWORK_CLASS_IRQ,
KWORK_CLASS_MAX,
};

--
2.30.GIT

2022-07-09 02:29:25

by Yang Jihong

[permalink] [raw]
Subject: [RFC v3 14/17] perf kwork: Implement bpf trace

perf record generates perf.data, which causes extra disk interrupts, and
the amount of data collected grows over time. Tracing with eBPF processes
the data in the kernel instead, which solves both of these problems.

Add a -b/--use-bpf option for latency and report to support tracing
kwork events with eBPF:
1. Create BPF programs and attach them to the tracepoints.
2. Start tracing once the command is entered.
3. Stop tracing and report after the user hits Ctrl+C.
4. Support CPU and name filtering.

This commit implements the framework code and
does not add specific event support.
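
The four steps above map directly onto the new helpers; the sketch below is a
condensed view of the report path added in this patch (the wrapper name is
illustrative, the patch calls it perf_kwork__report_bpf(); error handling and
event processing are trimmed):

static int report_with_bpf(struct perf_kwork *kwork)
{
        signal(SIGINT, sig_handler);               /* step 3: stop on Ctrl+C */
        signal(SIGTERM, sig_handler);

        /* step 1 (and 4): open the skeleton, enable the per-class programs,
         * load, install the CPU/name filters, attach to the tracepoints */
        if (perf_kwork__trace_prepare_bpf(kwork))
                return -1;

        perf_kwork__trace_start();                 /* step 2: skel->bss->enabled = 1 */
        pause();                                   /* wait for the termination signal */
        perf_kwork__trace_finish();                /*         skel->bss->enabled = 0 */

        perf_kwork__report_read_bpf(kwork);        /* walk the perf_kwork_report map */
        perf_kwork__report_cleanup_bpf();          /* destroy the skeleton */
        return 0;
}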

Test cases:

# perf kwork rep -h

Usage: perf kwork report [<options>]

-b, --use-bpf Use BPF to measure kwork runtime
-C, --cpu <cpu> list of cpus to profile
-i, --input <file> input file name
-n, --name <name> event name to profile
-s, --sort <key[,key2...]>
sort by key(s): runtime, max, count
-S, --with-summary Show summary with statistics
--time <str> Time span for analysis (start,stop)

# perf kwork lat -h

Usage: perf kwork latency [<options>]

-b, --use-bpf Use BPF to measure kwork latency
-C, --cpu <cpu> list of cpus to profile
-i, --input <file> input file name
-n, --name <name> event name to profile
-s, --sort <key[,key2...]>
sort by key(s): avg, max, count
--time <str> Time span for analysis (start,stop)

# perf kwork lat -b
Unsupported bpf trace class irq

# perf kwork rep -b
Unsupported bpf trace class irq

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/Documentation/perf-kwork.txt | 10 +
tools/perf/Makefile.perf | 1 +
tools/perf/builtin-kwork.c | 66 ++++-
tools/perf/util/Build | 1 +
tools/perf/util/bpf_kwork.c | 278 +++++++++++++++++++++
tools/perf/util/bpf_skel/kwork_trace.bpf.c | 74 ++++++
tools/perf/util/kwork.h | 35 +++
7 files changed, 464 insertions(+), 1 deletion(-)
create mode 100644 tools/perf/util/bpf_kwork.c
create mode 100644 tools/perf/util/bpf_skel/kwork_trace.bpf.c

diff --git a/tools/perf/Documentation/perf-kwork.txt b/tools/perf/Documentation/perf-kwork.txt
index 51c1625bacae..3c36324712b6 100644
--- a/tools/perf/Documentation/perf-kwork.txt
+++ b/tools/perf/Documentation/perf-kwork.txt
@@ -26,7 +26,9 @@ There are several variants of 'perf kwork':
Example usage:
perf kwork record -- sleep 1
perf kwork report
+ perf kwork report -b
perf kwork latency
+ perf kwork latency -b
perf kwork timehist

By default it shows the individual work events such as irq, workqueue,
@@ -73,6 +75,10 @@ OPTIONS
OPTIONS for 'perf kwork report'
----------------------------

+-b::
+--use-bpf::
+ Use BPF to measure kwork runtime
+
-C::
--cpu::
Only show events for the given CPU(s) (comma separated list).
@@ -103,6 +109,10 @@ OPTIONS for 'perf kwork report'
OPTIONS for 'perf kwork latency'
----------------------------

+-b::
+--use-bpf::
+ Use BPF to measure kwork latency
+
-C::
--cpu::
Only show events for the given CPU(s) (comma separated list).
diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index 8f738e11356d..44246d003846 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -1039,6 +1039,7 @@ SKELETONS := $(SKEL_OUT)/bpf_prog_profiler.skel.h
SKELETONS += $(SKEL_OUT)/bperf_leader.skel.h $(SKEL_OUT)/bperf_follower.skel.h
SKELETONS += $(SKEL_OUT)/bperf_cgroup.skel.h $(SKEL_OUT)/func_latency.skel.h
SKELETONS += $(SKEL_OUT)/off_cpu.skel.h
+SKELETONS += $(SKEL_OUT)/kwork_trace.skel.h

$(SKEL_TMP_OUT) $(LIBBPF_OUTPUT):
$(Q)$(MKDIR) -p $@
diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index 31dcfcfcc5a1..7dcd17ba892a 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -1427,13 +1427,69 @@ static void process_skipped_events(struct perf_kwork *kwork,
}
}

+struct kwork_work *perf_kwork_add_work(struct perf_kwork *kwork,
+ struct kwork_class *class,
+ struct kwork_work *key)
+{
+ struct kwork_work *work = NULL;
+
+ work = work_new(key);
+ if (work == NULL)
+ return NULL;
+
+ work_insert(&class->work_root, work, &kwork->cmp_id);
+ return work;
+}
+
+static void sig_handler(int sig)
+{
+ /*
+ * Simply capture termination signal so that
+ * the program can continue after pause returns
+ */
+ pr_debug("Capture signal %d\n", sig);
+}
+
+static int perf_kwork__report_bpf(struct perf_kwork *kwork)
+{
+ int ret;
+
+ signal(SIGINT, sig_handler);
+ signal(SIGTERM, sig_handler);
+
+ ret = perf_kwork__trace_prepare_bpf(kwork);
+ if (ret)
+ return -1;
+
+ printf("Starting trace, Hit <Ctrl+C> to stop and report\n");
+
+ perf_kwork__trace_start();
+
+ /*
+ * a simple pause, wait here for stop signal
+ */
+ pause();
+
+ perf_kwork__trace_finish();
+
+ perf_kwork__report_read_bpf(kwork);
+
+ perf_kwork__report_cleanup_bpf();
+
+ return 0;
+}
+
static int perf_kwork__report(struct perf_kwork *kwork)
{
int ret;
struct rb_node *next;
struct kwork_work *work;

- ret = perf_kwork__read_events(kwork);
+ if (kwork->use_bpf)
+ ret = perf_kwork__report_bpf(kwork);
+ else
+ ret = perf_kwork__read_events(kwork);
+
if (ret != 0)
return -1;

@@ -1667,6 +1723,10 @@ int cmd_kwork(int argc, const char **argv)
"input file name"),
OPT_BOOLEAN('S', "with-summary", &kwork.summary,
"Show summary with statistics"),
+#ifdef HAVE_BPF_SKEL
+ OPT_BOOLEAN('b', "use-bpf", &kwork.use_bpf,
+ "Use BPF to measure kwork runtime"),
+#endif
OPT_PARENT(kwork_options)
};
const struct option latency_options[] = {
@@ -1680,6 +1740,10 @@ int cmd_kwork(int argc, const char **argv)
"Time span for analysis (start,stop)"),
OPT_STRING('i', "input", &input_name, "file",
"input file name"),
+#ifdef HAVE_BPF_SKEL
+ OPT_BOOLEAN('b', "use-bpf", &kwork.use_bpf,
+ "Use BPF to measure kwork latency"),
+#endif
OPT_PARENT(kwork_options)
};
const struct option timehist_options[] = {
diff --git a/tools/perf/util/Build b/tools/perf/util/Build
index a51267d88ca9..66ad30cf65ec 100644
--- a/tools/perf/util/Build
+++ b/tools/perf/util/Build
@@ -148,6 +148,7 @@ perf-$(CONFIG_PERF_BPF_SKEL) += bpf_counter.o
perf-$(CONFIG_PERF_BPF_SKEL) += bpf_counter_cgroup.o
perf-$(CONFIG_PERF_BPF_SKEL) += bpf_ftrace.o
perf-$(CONFIG_PERF_BPF_SKEL) += bpf_off_cpu.o
+perf-$(CONFIG_PERF_BPF_SKEL) += bpf_kwork.o
perf-$(CONFIG_BPF_PROLOGUE) += bpf-prologue.o
perf-$(CONFIG_LIBELF) += symbol-elf.o
perf-$(CONFIG_LIBELF) += probe-file.o
diff --git a/tools/perf/util/bpf_kwork.c b/tools/perf/util/bpf_kwork.c
new file mode 100644
index 000000000000..433bfadd3af1
--- /dev/null
+++ b/tools/perf/util/bpf_kwork.c
@@ -0,0 +1,278 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * bpf_kwork.c
+ *
+ * Copyright (c) 2022 Huawei Inc, Yang Jihong <[email protected]>
+ */
+
+#include <time.h>
+#include <fcntl.h>
+#include <stdio.h>
+#include <unistd.h>
+
+#include <linux/time64.h>
+
+#include "util/debug.h"
+#include "util/kwork.h"
+
+#include <bpf/bpf.h>
+
+#include "util/bpf_skel/kwork_trace.skel.h"
+
+/*
+ * This should be in sync with "util/kwork_trace.bpf.c"
+ */
+#define MAX_KWORKNAME 128
+
+struct work_key {
+ u32 type;
+ u32 cpu;
+ u64 id;
+};
+
+struct report_data {
+ u64 nr;
+ u64 total_time;
+ u64 max_time;
+ u64 max_time_start;
+ u64 max_time_end;
+};
+
+struct kwork_class_bpf {
+ struct kwork_class *class;
+
+ void (*load_prepare)(struct perf_kwork *kwork);
+ int (*get_work_name)(struct work_key *key, char **ret_name);
+};
+
+static struct kwork_trace_bpf *skel;
+
+static struct timespec ts_start;
+static struct timespec ts_end;
+
+void perf_kwork__trace_start(void)
+{
+ clock_gettime(CLOCK_MONOTONIC, &ts_start);
+ skel->bss->enabled = 1;
+}
+
+void perf_kwork__trace_finish(void)
+{
+ clock_gettime(CLOCK_MONOTONIC, &ts_end);
+ skel->bss->enabled = 0;
+}
+
+static struct kwork_class_bpf *
+kwork_class_bpf_supported_list[KWORK_CLASS_MAX] = {
+ [KWORK_CLASS_IRQ] = NULL,
+ [KWORK_CLASS_SOFTIRQ] = NULL,
+ [KWORK_CLASS_WORKQUEUE] = NULL,
+};
+
+static bool valid_kwork_class_type(enum kwork_class_type type)
+{
+ return type >= 0 && type < KWORK_CLASS_MAX ? true : false;
+}
+
+static int setup_filters(struct perf_kwork *kwork)
+{
+ u8 val = 1;
+ int i, nr_cpus, key, fd;
+ struct perf_cpu_map *map;
+
+ if (kwork->cpu_list != NULL) {
+ fd = bpf_map__fd(skel->maps.perf_kwork_cpu_filter);
+ if (fd < 0) {
+ pr_debug("Invalid cpu filter fd\n");
+ return -1;
+ }
+
+ map = perf_cpu_map__new(kwork->cpu_list);
+ if (map == NULL) {
+ pr_debug("Invalid cpu_list\n");
+ return -1;
+ }
+
+ nr_cpus = libbpf_num_possible_cpus();
+ for (i = 0; i < perf_cpu_map__nr(map); i++) {
+ struct perf_cpu cpu = perf_cpu_map__cpu(map, i);
+
+ if (cpu.cpu >= nr_cpus) {
+ perf_cpu_map__put(map);
+ pr_err("Requested cpu %d too large\n", cpu.cpu);
+ return -1;
+ }
+ bpf_map_update_elem(fd, &cpu.cpu, &val, BPF_ANY);
+ }
+ perf_cpu_map__put(map);
+
+ skel->bss->has_cpu_filter = 1;
+ }
+
+ if (kwork->profile_name != NULL) {
+ if (strlen(kwork->profile_name) >= MAX_KWORKNAME) {
+ pr_err("Requested name filter %s too large, limit to %d\n",
+ kwork->profile_name, MAX_KWORKNAME - 1);
+ return -1;
+ }
+
+ fd = bpf_map__fd(skel->maps.perf_kwork_name_filter);
+ if (fd < 0) {
+ pr_debug("Invalid name filter fd\n");
+ return -1;
+ }
+
+ key = 0;
+ bpf_map_update_elem(fd, &key, kwork->profile_name, BPF_ANY);
+
+ skel->bss->has_name_filter = 1;
+ }
+
+ return 0;
+}
+
+int perf_kwork__trace_prepare_bpf(struct perf_kwork *kwork)
+{
+ struct bpf_program *prog;
+ struct kwork_class *class;
+ struct kwork_class_bpf *class_bpf;
+ enum kwork_class_type type;
+
+ skel = kwork_trace_bpf__open();
+ if (!skel) {
+ pr_debug("Failed to open kwork trace skeleton\n");
+ return -1;
+ }
+
+ /*
+ * set all progs to non-autoload,
+ * then set corresponding progs according to config
+ */
+ bpf_object__for_each_program(prog, skel->obj)
+ bpf_program__set_autoload(prog, false);
+
+ list_for_each_entry(class, &kwork->class_list, list) {
+ type = class->type;
+ if (!valid_kwork_class_type(type) ||
+ (kwork_class_bpf_supported_list[type] == NULL)) {
+ pr_err("Unsupported bpf trace class %s\n", class->name);
+ goto out;
+ }
+
+ class_bpf = kwork_class_bpf_supported_list[type];
+ class_bpf->class = class;
+
+ if (class_bpf->load_prepare != NULL)
+ class_bpf->load_prepare(kwork);
+ }
+
+ if (kwork_trace_bpf__load(skel)) {
+ pr_debug("Failed to load kwork trace skeleton\n");
+ goto out;
+ }
+
+ if (setup_filters(kwork))
+ goto out;
+
+ if (kwork_trace_bpf__attach(skel)) {
+ pr_debug("Failed to attach kwork trace skeleton\n");
+ goto out;
+ }
+
+ return 0;
+
+out:
+ kwork_trace_bpf__destroy(skel);
+ return -1;
+}
+
+static int add_work(struct perf_kwork *kwork,
+ struct work_key *key,
+ struct report_data *data)
+{
+ struct kwork_work *work;
+ struct kwork_class_bpf *bpf_trace;
+ struct kwork_work tmp = {
+ .id = key->id,
+ .name = NULL,
+ .cpu = key->cpu,
+ };
+ enum kwork_class_type type = key->type;
+
+ if (!valid_kwork_class_type(type)) {
+ pr_debug("Invalid class type %d to add work\n", type);
+ return -1;
+ }
+
+ bpf_trace = kwork_class_bpf_supported_list[type];
+ tmp.class = bpf_trace->class;
+
+ if ((bpf_trace->get_work_name != NULL) &&
+ (bpf_trace->get_work_name(key, &tmp.name)))
+ return -1;
+
+ work = perf_kwork_add_work(kwork, tmp.class, &tmp);
+ if (work == NULL)
+ return -1;
+
+ if (kwork->report == KWORK_REPORT_RUNTIME) {
+ work->nr_atoms = data->nr;
+ work->total_runtime = data->total_time;
+ work->max_runtime = data->max_time;
+ work->max_runtime_start = data->max_time_start;
+ work->max_runtime_end = data->max_time_end;
+ } else if (kwork->report == KWORK_REPORT_LATENCY) {
+ work->nr_atoms = data->nr;
+ work->total_latency = data->total_time;
+ work->max_latency = data->max_time;
+ work->max_latency_start = data->max_time_start;
+ work->max_latency_end = data->max_time_end;
+ } else {
+ pr_debug("Invalid bpf report type %d\n", kwork->report);
+ return -1;
+ }
+
+ kwork->timestart = (u64)ts_start.tv_sec * NSEC_PER_SEC + ts_start.tv_nsec;
+ kwork->timeend = (u64)ts_end.tv_sec * NSEC_PER_SEC + ts_end.tv_nsec;
+
+ return 0;
+}
+
+int perf_kwork__report_read_bpf(struct perf_kwork *kwork)
+{
+ struct report_data data;
+ struct work_key key = {
+ .type = 0,
+ .cpu = 0,
+ .id = 0,
+ };
+ struct work_key prev = {
+ .type = 0,
+ .cpu = 0,
+ .id = 0,
+ };
+ int fd = bpf_map__fd(skel->maps.perf_kwork_report);
+
+ if (fd < 0) {
+ pr_debug("Invalid report fd\n");
+ return -1;
+ }
+
+ while (!bpf_map_get_next_key(fd, &prev, &key)) {
+ if ((bpf_map_lookup_elem(fd, &key, &data)) != 0) {
+ pr_debug("Failed to lookup report elem\n");
+ return -1;
+ }
+
+ if ((data.nr != 0) && (add_work(kwork, &key, &data) != 0))
+ return -1;
+
+ prev = key;
+ }
+ return 0;
+}
+
+void perf_kwork__report_cleanup_bpf(void)
+{
+ kwork_trace_bpf__destroy(skel);
+}
diff --git a/tools/perf/util/bpf_skel/kwork_trace.bpf.c b/tools/perf/util/bpf_skel/kwork_trace.bpf.c
new file mode 100644
index 000000000000..36112be831e3
--- /dev/null
+++ b/tools/perf/util/bpf_skel/kwork_trace.bpf.c
@@ -0,0 +1,74 @@
+// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+// Copyright (c) 2022, Huawei
+
+#include "vmlinux.h"
+#include <bpf/bpf_helpers.h>
+#include <bpf/bpf_tracing.h>
+
+#define KWORK_COUNT 100
+#define MAX_KWORKNAME 128
+
+/*
+ * This should be in sync with "util/kwork.h"
+ */
+enum kwork_class_type {
+ KWORK_CLASS_IRQ,
+ KWORK_CLASS_SOFTIRQ,
+ KWORK_CLASS_WORKQUEUE,
+ KWORK_CLASS_MAX,
+};
+
+struct work_key {
+ __u32 type;
+ __u32 cpu;
+ __u64 id;
+};
+
+struct report_data {
+ __u64 nr;
+ __u64 total_time;
+ __u64 max_time;
+ __u64 max_time_start;
+ __u64 max_time_end;
+};
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(key_size, sizeof(struct work_key));
+ __uint(value_size, MAX_KWORKNAME);
+ __uint(max_entries, KWORK_COUNT);
+} perf_kwork_names SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(key_size, sizeof(struct work_key));
+ __uint(value_size, sizeof(__u64));
+ __uint(max_entries, KWORK_COUNT);
+} perf_kwork_time SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(key_size, sizeof(struct work_key));
+ __uint(value_size, sizeof(struct report_data));
+ __uint(max_entries, KWORK_COUNT);
+} perf_kwork_report SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_HASH);
+ __uint(key_size, sizeof(__u32));
+ __uint(value_size, sizeof(__u8));
+ __uint(max_entries, 1);
+} perf_kwork_cpu_filter SEC(".maps");
+
+struct {
+ __uint(type, BPF_MAP_TYPE_ARRAY);
+ __uint(key_size, sizeof(__u32));
+ __uint(value_size, MAX_KWORKNAME);
+ __uint(max_entries, 1);
+} perf_kwork_name_filter SEC(".maps");
+
+int enabled = 0;
+int has_cpu_filter = 0;
+int has_name_filter = 0;
+
+char LICENSE[] SEC("license") = "Dual BSD/GPL";
diff --git a/tools/perf/util/kwork.h b/tools/perf/util/kwork.h
index 6a06194304b8..320c0a6d2e08 100644
--- a/tools/perf/util/kwork.h
+++ b/tools/perf/util/kwork.h
@@ -203,6 +203,7 @@ struct perf_kwork {
const char *sort_order;
bool show_callchain;
unsigned int max_stack;
+ bool use_bpf;

/*
* statistics
@@ -219,4 +220,38 @@ struct perf_kwork {
u64 nr_skipped_events[KWORK_TRACE_MAX + 1];
};

+struct kwork_work *perf_kwork_add_work(struct perf_kwork *kwork,
+ struct kwork_class *class,
+ struct kwork_work *key);
+
+#ifdef HAVE_BPF_SKEL
+
+int perf_kwork__trace_prepare_bpf(struct perf_kwork *kwork);
+int perf_kwork__report_read_bpf(struct perf_kwork *kwork);
+void perf_kwork__report_cleanup_bpf(void);
+
+void perf_kwork__trace_start(void);
+void perf_kwork__trace_finish(void);
+
+#else /* !HAVE_BPF_SKEL */
+
+static inline int
+perf_kwork__trace_prepare_bpf(struct perf_kwork *kwork __maybe_unused)
+{
+ return -1;
+}
+
+static inline int
+perf_kwork__report_read_bpf(struct perf_kwork *kwork __maybe_unused)
+{
+ return -1;
+}
+
+static inline void perf_kwork__report_cleanup_bpf(void) {}
+
+static inline void perf_kwork__trace_start(void) {}
+static inline void perf_kwork__trace_finish(void) {}
+
+#endif /* HAVE_BPF_SKEL */
+
#endif /* PERF_UTIL_KWORK_H */
--
2.30.GIT

2022-07-09 02:29:46

by Yang Jihong

[permalink] [raw]
Subject: [RFC v3 15/17] perf kwork: Add irq trace bpf support

Implements the irq trace BPF functions: irq_handler_entry and
irq_handler_exit for the runtime report.
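
The irq name comes from a dynamic (__data_loc) tracepoint field: the 32-bit
value packs the string's offset within the event record in its low 16 bits and
its length in the high 16 bits, which is why report_irq_handler_entry() below
adds (ctx->__data_loc_name & 0xffff) to the ctx pointer. A small user-space
sketch of the same decoding on a synthetic record (illustrative only, not part
of the patch):

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Mimic a trace event record with one __data_loc string field. */
struct fake_record {
        unsigned int __data_loc_name;   /* (len << 16) | offset        */
        char buf[32];                   /* dynamic area of the record  */
};

int main(void)
{
        struct fake_record rec;
        const char *irq_name = "eth0";
        unsigned int off = offsetof(struct fake_record, buf);

        strcpy(rec.buf, irq_name);
        rec.__data_loc_name = ((unsigned int)(strlen(irq_name) + 1) << 16) | off;

        /* Same arithmetic as the BPF handler: record base + low 16 bits. */
        const char *name = (const char *)&rec + (rec.__data_loc_name & 0xffff);
        printf("decoded irq name: %s\n", name);   /* -> "eth0" */
        return 0;
}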

Test cases:
Trace irq without filter:

# perf kwork -k irq rep -b
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 31.026 ms | 285 | 1.493 ms | 110326.049963 s | 110326.051456 s |
eth0:10 | 0002 | 7.875 ms | 96 | 1.429 ms | 110313.916835 s | 110313.918264 s |
ata_piix:14 | 0002 | 2.510 ms | 28 | 0.396 ms | 110331.367987 s | 110331.368383 s |
--------------------------------------------------------------------------------------------------------------------------------

Trace irq with cpu filter:

# perf kwork -k irq rep -b -C 0
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 34.288 ms | 282 | 2.061 ms | 110358.078968 s | 110358.081029 s |
--------------------------------------------------------------------------------------------------------------------------------

Trace irq with name filter:

# perf kwork -k irq rep -b -n eth0
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
eth0:10 | 0002 | 2.184 ms | 21 | 0.572 ms | 110386.541699 s | 110386.542271 s |
--------------------------------------------------------------------------------------------------------------------------------

Trace irq with summary:

# perf kwork -k irq rep -b -S
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
virtio0-requests:25 | 0000 | 42.923 ms | 285 | 1.181 ms | 110418.128867 s | 110418.130049 s |
eth0:10 | 0002 | 2.085 ms | 20 | 0.668 ms | 110416.002935 s | 110416.003603 s |
ata_piix:14 | 0002 | 0.970 ms | 4 | 0.656 ms | 110424.034482 s | 110424.035138 s |
--------------------------------------------------------------------------------------------------------------------------------
Total count : 309
Total runtime (msec) : 45.977 (0.003% load average)
Total time span (msec) : 17017.655
--------------------------------------------------------------------------------------------------------------------------------

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/util/bpf_kwork.c | 40 +++++-
tools/perf/util/bpf_skel/kwork_trace.bpf.c | 150 +++++++++++++++++++++
2 files changed, 189 insertions(+), 1 deletion(-)

diff --git a/tools/perf/util/bpf_kwork.c b/tools/perf/util/bpf_kwork.c
index 433bfadd3af1..08252fcda1a4 100644
--- a/tools/perf/util/bpf_kwork.c
+++ b/tools/perf/util/bpf_kwork.c
@@ -62,9 +62,47 @@ void perf_kwork__trace_finish(void)
skel->bss->enabled = 0;
}

+static int get_work_name_from_map(struct work_key *key, char **ret_name)
+{
+ char name[MAX_KWORKNAME] = { 0 };
+ int fd = bpf_map__fd(skel->maps.perf_kwork_names);
+
+ *ret_name = NULL;
+
+ if (fd < 0) {
+ pr_debug("Invalid names map fd\n");
+ return 0;
+ }
+
+ if ((bpf_map_lookup_elem(fd, key, name) == 0) && (strlen(name) != 0)) {
+ *ret_name = strdup(name);
+ if (*ret_name == NULL) {
+ pr_err("Failed to copy work name\n");
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static void irq_load_prepare(struct perf_kwork *kwork)
+{
+ if (kwork->report == KWORK_REPORT_RUNTIME) {
+ bpf_program__set_autoload(
+ skel->progs.report_irq_handler_entry, true);
+ bpf_program__set_autoload(
+ skel->progs.report_irq_handler_exit, true);
+ }
+}
+
+static struct kwork_class_bpf kwork_irq_bpf = {
+ .load_prepare = irq_load_prepare,
+ .get_work_name = get_work_name_from_map,
+};
+
static struct kwork_class_bpf *
kwork_class_bpf_supported_list[KWORK_CLASS_MAX] = {
- [KWORK_CLASS_IRQ] = NULL,
+ [KWORK_CLASS_IRQ] = &kwork_irq_bpf,
[KWORK_CLASS_SOFTIRQ] = NULL,
[KWORK_CLASS_WORKQUEUE] = NULL,
};
diff --git a/tools/perf/util/bpf_skel/kwork_trace.bpf.c b/tools/perf/util/bpf_skel/kwork_trace.bpf.c
index 36112be831e3..1925407d1c16 100644
--- a/tools/perf/util/bpf_skel/kwork_trace.bpf.c
+++ b/tools/perf/util/bpf_skel/kwork_trace.bpf.c
@@ -71,4 +71,154 @@ int enabled = 0;
int has_cpu_filter = 0;
int has_name_filter = 0;

+static __always_inline int local_strncmp(const char *s1,
+ unsigned int sz, const char *s2)
+{
+ int ret = 0;
+ unsigned int i;
+
+ for (i = 0; i < sz; i++) {
+ ret = (unsigned char)s1[i] - (unsigned char)s2[i];
+ if (ret || !s1[i] || !s2[i])
+ break;
+ }
+
+ return ret;
+}
+
+static __always_inline int trace_event_match(struct work_key *key, char *name)
+{
+ __u8 *cpu_val;
+ char *name_val;
+ __u32 zero = 0;
+ __u32 cpu = bpf_get_smp_processor_id();
+
+ if (!enabled)
+ return 0;
+
+ if (has_cpu_filter) {
+ cpu_val = bpf_map_lookup_elem(&perf_kwork_cpu_filter, &cpu);
+ if (!cpu_val)
+ return 0;
+ }
+
+ if (has_name_filter && (name != NULL)) {
+ name_val = bpf_map_lookup_elem(&perf_kwork_name_filter, &zero);
+ if (name_val &&
+ (local_strncmp(name_val, MAX_KWORKNAME, name) != 0)) {
+ return 0;
+ }
+ }
+
+ return 1;
+}
+
+static __always_inline void do_update_time(void *map, struct work_key *key,
+ __u64 time_start, __u64 time_end)
+{
+ struct report_data zero, *data;
+ __s64 delta = time_end - time_start;
+
+ if (delta < 0)
+ return;
+
+ data = bpf_map_lookup_elem(map, key);
+ if (!data) {
+ __builtin_memset(&zero, 0, sizeof(zero));
+ bpf_map_update_elem(map, key, &zero, BPF_NOEXIST);
+ data = bpf_map_lookup_elem(map, key);
+ if (!data)
+ return;
+ }
+
+ if ((delta > data->max_time) ||
+ (data->max_time == 0)) {
+ data->max_time = delta;
+ data->max_time_start = time_start;
+ data->max_time_end = time_end;
+ }
+
+ data->total_time += delta;
+ data->nr++;
+}
+
+static __always_inline void do_update_timestart(void *map, struct work_key *key)
+{
+ __u64 ts = bpf_ktime_get_ns();
+
+ bpf_map_update_elem(map, key, &ts, BPF_ANY);
+}
+
+static __always_inline void do_update_timeend(void *report_map, void *time_map,
+ struct work_key *key)
+{
+ __u64 *time = bpf_map_lookup_elem(time_map, key);
+
+ if (time) {
+ bpf_map_delete_elem(time_map, key);
+ do_update_time(report_map, key, *time, bpf_ktime_get_ns());
+ }
+}
+
+static __always_inline void do_update_name(void *map,
+ struct work_key *key, char *name)
+{
+ if (!bpf_map_lookup_elem(map, key))
+ bpf_map_update_elem(map, key, name, BPF_ANY);
+}
+
+static __always_inline int update_timestart_and_name(void *time_map,
+ void *names_map,
+ struct work_key *key,
+ char *name)
+{
+ if (!trace_event_match(key, name))
+ return 0;
+
+ do_update_timestart(time_map, key);
+ do_update_name(names_map, key, name);
+
+ return 0;
+}
+
+static __always_inline int update_timeend(void *report_map,
+ void *time_map, struct work_key *key)
+{
+ if (!trace_event_match(key, NULL))
+ return 0;
+
+ do_update_timeend(report_map, time_map, key);
+
+ return 0;
+}
+
+SEC("tracepoint/irq/irq_handler_entry")
+int report_irq_handler_entry(struct trace_event_raw_irq_handler_entry *ctx)
+{
+ char name[MAX_KWORKNAME];
+ struct work_key key = {
+ .type = KWORK_CLASS_IRQ,
+ .cpu = bpf_get_smp_processor_id(),
+ .id = (__u64)ctx->irq,
+ };
+ void *name_addr = (void *)ctx + (ctx->__data_loc_name & 0xffff);
+
+ bpf_probe_read_kernel_str(name, sizeof(name), name_addr);
+
+ return update_timestart_and_name(&perf_kwork_time,
+ &perf_kwork_names, &key, name);
+}
+
+SEC("tracepoint/irq/irq_handler_exit")
+int report_irq_handler_exit(struct trace_event_raw_irq_handler_exit *ctx)
+{
+ struct work_key key = {
+ .type = KWORK_CLASS_IRQ,
+ .cpu = bpf_get_smp_processor_id(),
+ .id = (__u64)ctx->irq,
+ };
+
+ return update_timeend(&perf_kwork_report, &perf_kwork_time, &key);
+}
+
char LICENSE[] SEC("license") = "Dual BSD/GPL";
--
2.30.GIT

2022-07-09 02:31:41

by Yang Jihong

[permalink] [raw]
Subject: [RFC v3 16/17] perf kwork: Add softirq trace bpf support

Implements the softirq trace BPF functions: softirq_entry and softirq_exit
for the runtime report, softirq_raise and softirq_entry for the latency
report.
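
Softirq tracepoints only carry the vector number, so the BPF side keeps its
own vec-to-name table (softirq_name_list in the diff below) and records a name
only when vec < NR_SOFTIRQS. A small user-space sketch of that lookup, which
is also where the "(s)RCU:9"-style names in the tables come from (table copied
from the patch; NR_SOFTIRQS assumed to be 10 as in current kernels):

#include <stdio.h>

#define NR_SOFTIRQS 10

static const char *softirq_name_list[NR_SOFTIRQS] = {
        "HI", "TIMER", "NET_TX", "NET_RX", "BLOCK",
        "IRQ_POLL", "TASKLET", "SCHED", "HRTIMER", "RCU",
};

/* Map a softirq vector to the name used in the report, e.g. "(s)RCU:9". */
static const char *softirq_name(unsigned int vec)
{
        return vec < NR_SOFTIRQS ? softirq_name_list[vec] : "UNKNOWN";
}

int main(void)
{
        printf("(s)%s:%u\n", softirq_name(9), 9u);   /* -> (s)RCU:9   */
        printf("(s)%s:%u\n", softirq_name(7), 7u);   /* -> (s)SCHED:7 */
        return 0;
}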

Test cases:
Trace softirq latency without filter:

# perf kwork -k softirq lat -b
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)RCU:9 | 0005 | 0.281 ms | 3 | 0.338 ms | 111295.752222 s | 111295.752560 s |
(s)RCU:9 | 0002 | 0.262 ms | 24 | 1.400 ms | 111301.335986 s | 111301.337386 s |
(s)SCHED:7 | 0005 | 0.177 ms | 14 | 0.212 ms | 111295.752270 s | 111295.752481 s |
(s)RCU:9 | 0007 | 0.161 ms | 47 | 2.022 ms | 111295.402159 s | 111295.404181 s |
(s)NET_RX:3 | 0003 | 0.149 ms | 12 | 1.261 ms | 111301.192964 s | 111301.194225 s |
(s)TIMER:1 | 0001 | 0.105 ms | 9 | 0.198 ms | 111301.180191 s | 111301.180389 s |
... <SNIP> ...
(s)NET_RX:3 | 0002 | 0.098 ms | 6 | 0.124 ms | 111295.403760 s | 111295.403884 s |
(s)SCHED:7 | 0001 | 0.093 ms | 19 | 0.242 ms | 111301.180256 s | 111301.180498 s |
(s)SCHED:7 | 0007 | 0.078 ms | 15 | 0.188 ms | 111300.064226 s | 111300.064415 s |
(s)SCHED:7 | 0004 | 0.077 ms | 11 | 0.213 ms | 111301.361759 s | 111301.361973 s |
(s)SCHED:7 | 0000 | 0.063 ms | 33 | 0.805 ms | 111295.401811 s | 111295.402616 s |
(s)SCHED:7 | 0003 | 0.063 ms | 14 | 0.085 ms | 111301.192255 s | 111301.192340 s |
--------------------------------------------------------------------------------------------------------------------------------

Trace softirq latency with cpu filter:

# perf kwork -k softirq lat -b -C 1
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)RCU:9 | 0001 | 0.178 ms | 5 | 0.572 ms | 111435.534135 s | 111435.534707 s |
--------------------------------------------------------------------------------------------------------------------------------

Trace softirq latency with name filter:

# perf kwork -k softirq lat -b -n SCHED
Starting trace, Hit <Ctrl+C> to stop and report
^C
Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)SCHED:7 | 0001 | 0.295 ms | 15 | 2.183 ms | 111452.534950 s | 111452.537133 s |
(s)SCHED:7 | 0002 | 0.215 ms | 10 | 0.315 ms | 111460.000238 s | 111460.000553 s |
(s)SCHED:7 | 0005 | 0.190 ms | 29 | 0.338 ms | 111457.032538 s | 111457.032876 s |
(s)SCHED:7 | 0003 | 0.097 ms | 10 | 0.319 ms | 111452.434351 s | 111452.434670 s |
(s)SCHED:7 | 0006 | 0.089 ms | 1 | 0.089 ms | 111450.737450 s | 111450.737539 s |
(s)SCHED:7 | 0007 | 0.085 ms | 17 | 0.169 ms | 111452.471333 s | 111452.471502 s |
(s)SCHED:7 | 0004 | 0.071 ms | 15 | 0.221 ms | 111452.535252 s | 111452.535473 s |
(s)SCHED:7 | 0000 | 0.044 ms | 32 | 0.130 ms | 111460.001982 s | 111460.002112 s |
--------------------------------------------------------------------------------------------------------------------------------

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/util/bpf_kwork.c | 22 ++++++-
tools/perf/util/bpf_skel/kwork_trace.bpf.c | 73 ++++++++++++++++++++++
2 files changed, 94 insertions(+), 1 deletion(-)

diff --git a/tools/perf/util/bpf_kwork.c b/tools/perf/util/bpf_kwork.c
index 08252fcda1a4..1d76ca499ff6 100644
--- a/tools/perf/util/bpf_kwork.c
+++ b/tools/perf/util/bpf_kwork.c
@@ -100,10 +100,30 @@ static struct kwork_class_bpf kwork_irq_bpf = {
.get_work_name = get_work_name_from_map,
};

+static void softirq_load_prepare(struct perf_kwork *kwork)
+{
+ if (kwork->report == KWORK_REPORT_RUNTIME) {
+ bpf_program__set_autoload(
+ skel->progs.report_softirq_entry, true);
+ bpf_program__set_autoload(
+ skel->progs.report_softirq_exit, true);
+ } else if (kwork->report == KWORK_REPORT_LATENCY) {
+ bpf_program__set_autoload(
+ skel->progs.latency_softirq_raise, true);
+ bpf_program__set_autoload(
+ skel->progs.latency_softirq_entry, true);
+ }
+}
+
+static struct kwork_class_bpf kwork_softirq_bpf = {
+ .load_prepare = softirq_load_prepare,
+ .get_work_name = get_work_name_from_map,
+};
+
static struct kwork_class_bpf *
kwork_class_bpf_supported_list[KWORK_CLASS_MAX] = {
[KWORK_CLASS_IRQ] = &kwork_irq_bpf,
- [KWORK_CLASS_SOFTIRQ] = NULL,
+ [KWORK_CLASS_SOFTIRQ] = &kwork_softirq_bpf,
[KWORK_CLASS_WORKQUEUE] = NULL,
};

diff --git a/tools/perf/util/bpf_skel/kwork_trace.bpf.c b/tools/perf/util/bpf_skel/kwork_trace.bpf.c
index 1925407d1c16..a9afc64f2d67 100644
--- a/tools/perf/util/bpf_skel/kwork_trace.bpf.c
+++ b/tools/perf/util/bpf_skel/kwork_trace.bpf.c
@@ -221,4 +221,77 @@ int report_irq_handler_exit(struct trace_event_raw_irq_handler_exit *ctx)
return update_timeend(&perf_kwork_report, &perf_kwork_time, &key);
}

+static char softirq_name_list[NR_SOFTIRQS][MAX_KWORKNAME] = {
+ { "HI" },
+ { "TIMER" },
+ { "NET_TX" },
+ { "NET_RX" },
+ { "BLOCK" },
+ { "IRQ_POLL" },
+ { "TASKLET" },
+ { "SCHED" },
+ { "HRTIMER" },
+ { "RCU" },
+};
+
+SEC("tracepoint/irq/softirq_entry")
+int report_softirq_entry(struct trace_event_raw_softirq *ctx)
+{
+ unsigned int vec = ctx->vec;
+ struct work_key key = {
+ .type = KWORK_CLASS_SOFTIRQ,
+ .cpu = bpf_get_smp_processor_id(),
+ .id = (__u64)vec,
+ };
+
+ if (vec < NR_SOFTIRQS)
+ return update_timestart_and_name(&perf_kwork_time,
+ &perf_kwork_names, &key,
+ softirq_name_list[vec]);
+
+ return 0;
+}
+
+SEC("tracepoint/irq/softirq_exit")
+int report_softirq_exit(struct trace_event_raw_softirq *ctx)
+{
+ struct work_key key = {
+ .type = KWORK_CLASS_SOFTIRQ,
+ .cpu = bpf_get_smp_processor_id(),
+ .id = (__u64)ctx->vec,
+ };
+
+ return update_timeend(&perf_kwork_report, &perf_kwork_time, &key);
+}
+
+SEC("tracepoint/irq/softirq_raise")
+int latency_softirq_raise(struct trace_event_raw_softirq *ctx)
+{
+ unsigned int vec = ctx->vec;
+ struct work_key key = {
+ .type = KWORK_CLASS_SOFTIRQ,
+ .cpu = bpf_get_smp_processor_id(),
+ .id = (__u64)vec,
+ };
+
+ if (vec < NR_SOFTIRQS)
+ return update_timestart_and_name(&perf_kwork_time,
+ &perf_kwork_names, &key,
+ softirq_name_list[vec]);
+
+ return 0;
+}
+
+SEC("tracepoint/irq/softirq_entry")
+int latency_softirq_entry(struct trace_event_raw_softirq *ctx)
+{
+ struct work_key key = {
+ .type = KWORK_CLASS_SOFTIRQ,
+ .cpu = bpf_get_smp_processor_id(),
+ .id = (__u64)ctx->vec,
+ };
+
+ return update_timeend(&perf_kwork_report, &perf_kwork_time, &key);
+}
+
char LICENSE[] SEC("license") = "Dual BSD/GPL";
--
2.30.GIT

2022-07-09 02:32:21

by Yang Jihong

[permalink] [raw]
Subject: [RFC v3 04/17] perf kwork: Add workqueue kwork record support

Record workqueue events workqueue:workqueue_activate_work,
workqueue:workqueue_execute_start & workqueue:workqueue_execute_end

Test cases:
Record all events:

# perf kwork record -o perf_kwork.data -- sleep 1
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 0.857 MB perf_kwork.data ]
#
# perf evlist -i perf_kwork.data
irq:irq_handler_entry
irq:irq_handler_exit
irq:softirq_raise
irq:softirq_entry
irq:softirq_exit
workqueue:workqueue_activate_work
workqueue:workqueue_execute_start
workqueue:workqueue_execute_end
dummy:HG
# Tip: use 'perf evlist --trace-fields' to show fields for tracepoint events

Record workqueue events:

# perf kwork -k workqueue record -o perf_kwork.data -- sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.081 MB perf_kwork.data ]
#
# perf evlist -i perf_kwork.data
workqueue:workqueue_activate_work
workqueue:workqueue_execute_start
workqueue:workqueue_execute_end
dummy:HG
# Tip: use 'perf evlist --trace-fields' to show fields for tracepoint events

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/Documentation/perf-kwork.txt | 2 +-
tools/perf/builtin-kwork.c | 16 +++++++++++++++-
tools/perf/util/kwork.h | 1 +
3 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/tools/perf/Documentation/perf-kwork.txt b/tools/perf/Documentation/perf-kwork.txt
index abfdeca2ad39..c5b52f61da99 100644
--- a/tools/perf/Documentation/perf-kwork.txt
+++ b/tools/perf/Documentation/perf-kwork.txt
@@ -32,7 +32,7 @@ OPTIONS

-k::
--kwork::
- List of kwork to profile (irq, softirq, etc)
+ List of kwork to profile (irq, softirq, workqueue, etc)

-v::
--verbose::
diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index 2c492c8fd019..8086236b7513 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -50,9 +50,23 @@ static struct kwork_class kwork_softirq = {
.tp_handlers = softirq_tp_handlers,
};

+const struct evsel_str_handler workqueue_tp_handlers[] = {
+ { "workqueue:workqueue_activate_work", NULL, },
+ { "workqueue:workqueue_execute_start", NULL, },
+ { "workqueue:workqueue_execute_end", NULL, },
+};
+
+static struct kwork_class kwork_workqueue = {
+ .name = "workqueue",
+ .type = KWORK_CLASS_WORKQUEUE,
+ .nr_tracepoints = 3,
+ .tp_handlers = workqueue_tp_handlers,
+};
+
static struct kwork_class *kwork_class_supported_list[KWORK_CLASS_MAX] = {
[KWORK_CLASS_IRQ] = &kwork_irq,
[KWORK_CLASS_SOFTIRQ] = &kwork_softirq,
+ [KWORK_CLASS_WORKQUEUE] = &kwork_workqueue,
};

static void setup_event_list(struct perf_kwork *kwork,
@@ -159,7 +173,7 @@ int cmd_kwork(int argc, const char **argv)
OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace,
"dump raw trace in ASCII"),
OPT_STRING('k', "kwork", &kwork.event_list_str, "kwork",
- "list of kwork to profile (irq, softirq, etc)"),
+ "list of kwork to profile (irq, softirq, workqueue, etc)"),
OPT_BOOLEAN('f', "force", &kwork.force, "don't complain, do it"),
OPT_END()
};
diff --git a/tools/perf/util/kwork.h b/tools/perf/util/kwork.h
index 669a81626cb4..03203c4deb34 100644
--- a/tools/perf/util/kwork.h
+++ b/tools/perf/util/kwork.h
@@ -15,6 +15,7 @@
enum kwork_class_type {
KWORK_CLASS_IRQ,
KWORK_CLASS_SOFTIRQ,
+ KWORK_CLASS_WORKQUEUE,
KWORK_CLASS_MAX,
};

--
2.30.GIT

2022-07-09 02:33:28

by Yang Jihong

[permalink] [raw]
Subject: [RFC v3 10/17] perf kwork: Implement perf kwork latency

Implements the framework of perf kwork latency, which reports time
properties such as delay time and frequency.
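
Latency here is the delay between a work item being raised and it starting to
execute; per work the tool keeps a count, a running total and the maximum. A
small user-space sketch of that accumulation, mirroring
latency_update_entry_event() in the diff below (timestamps are made up for
illustration):

#include <stdio.h>

#define NSEC_PER_MSEC 1000000ULL

struct kwork_stats {
        unsigned long long nr_atoms, total_latency, max_latency;
};

/* One raise->entry pair: delta feeds the Avg, Count and Max delay columns. */
static void update_latency(struct kwork_stats *s,
                           unsigned long long raise_ns,
                           unsigned long long entry_ns)
{
        unsigned long long delta;

        if (raise_ns == 0 || entry_ns < raise_ns)
                return;                 /* nothing to account for this pair */

        delta = entry_ns - raise_ns;
        if (delta > s->max_latency || s->max_latency == 0)
                s->max_latency = delta;
        s->total_latency += delta;
        s->nr_atoms++;
}

int main(void)
{
        struct kwork_stats s = { 0, 0, 0 };

        update_latency(&s, 1000000, 1500000);   /* 0.5 ms delay */
        update_latency(&s, 2000000, 4000000);   /* 2.0 ms delay */

        printf("Avg delay %.3f ms | Count %llu | Max delay %.3f ms\n",
               (double)s.total_latency / s.nr_atoms / NSEC_PER_MSEC,
               s.nr_atoms, (double)s.max_latency / NSEC_PER_MSEC);
        return 0;
}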

Test cases:

# perf kwork lat -h

Usage: perf kwork latency [<options>]

-C, --cpu <cpu> list of cpus to profile
-i, --input <file> input file name
-n, --name <name> event name to profile
-s, --sort <key[,key2...]>
sort by key(s): avg, max, count
--time <str> Time span for analysis (start,stop)

# perf kwork lat -C 199
Requested CPU 199 too large. Consider raising MAX_NR_CPUS
Invalid cpu bitmap

# perf kwork lat -i perf_no_exist.data
failed to open perf_no_exist.data: No such file or directory

# perf kwork lat -s avg1
Error: Unknown --sort key: `avg1'

Usage: perf kwork latency [<options>]

-C, --cpu <cpu> list of cpus to profile
-i, --input <file> input file name
-n, --name <name> event name to profile
-s, --sort <key[,key2...]>
sort by key(s): avg, max, count
--time <str> Time span for analysis (start,stop)

# perf kwork lat --time FFFF,
Invalid time span

# perf kwork lat

Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------
INFO: 36.570% skipped events (31537 including 0 raise, 31537 entry, 0 exit)

Since there are no latency-enabled events, the output is empty.

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/Documentation/perf-kwork.txt | 29 +++++
tools/perf/builtin-kwork.c | 166 +++++++++++++++++++++++-
tools/perf/util/kwork.h | 14 ++
3 files changed, 208 insertions(+), 1 deletion(-)

diff --git a/tools/perf/Documentation/perf-kwork.txt b/tools/perf/Documentation/perf-kwork.txt
index b79b2c0d047e..069981457de1 100644
--- a/tools/perf/Documentation/perf-kwork.txt
+++ b/tools/perf/Documentation/perf-kwork.txt
@@ -19,9 +19,12 @@ There are several variants of 'perf kwork':

'perf kwork report' to report the per kwork runtime.

+ 'perf kwork latency' to report the per kwork latencies.
+
Example usage:
perf kwork record -- sleep 1
perf kwork report
+ perf kwork latency

OPTIONS
-------
@@ -71,6 +74,32 @@ OPTIONS for 'perf kwork report'
stop time is not given (i.e, time string is 'x.y,') then analysis goes
to end of file.

+OPTIONS for 'perf kwork latency'
+----------------------------
+
+-C::
+--cpu::
+ Only show events for the given CPU(s) (comma separated list).
+
+-i::
+--input::
+ Input file name. (default: perf.data unless stdin is a fifo)
+
+-n::
+--name::
+ Only show events for the given name.
+
+-s::
+--sort::
+ Sort by key(s): avg, max, count
+
+--time::
+ Only analyze samples within given time window: <start>,<stop>. Times
+ have the format seconds.microseconds. If start is not given (i.e., time
+ string is ',x.y') then analysis starts at the beginning of the file. If
+ stop time is not given (i.e, time string is 'x.y,') then analysis goes
+ to end of file.
+
SEE ALSO
--------
linkperf:perf-record[1]
diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index f7736b6f0815..cc2c090fc2f0 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -31,12 +31,14 @@
#define PRINT_CPU_WIDTH 4
#define PRINT_COUNT_WIDTH 9
#define PRINT_RUNTIME_WIDTH 10
+#define PRINT_LATENCY_WIDTH 10
#define PRINT_TIMESTAMP_WIDTH 17
#define PRINT_KWORK_NAME_WIDTH 30
#define RPINT_DECIMAL_WIDTH 3
#define PRINT_TIME_UNIT_SEC_WIDTH 2
#define PRINT_TIME_UNIT_MESC_WIDTH 3
#define PRINT_RUNTIME_HEADER_WIDTH (PRINT_RUNTIME_WIDTH + PRINT_TIME_UNIT_MESC_WIDTH)
+#define PRINT_LATENCY_HEADER_WIDTH (PRINT_LATENCY_WIDTH + PRINT_TIME_UNIT_MESC_WIDTH)
#define PRINT_TIMESTAMP_HEADER_WIDTH (PRINT_TIMESTAMP_WIDTH + PRINT_TIME_UNIT_SEC_WIDTH)

struct sort_dimension {
@@ -90,6 +92,36 @@ static int max_runtime_cmp(struct kwork_work *l, struct kwork_work *r)
return 0;
}

+static int avg_latency_cmp(struct kwork_work *l, struct kwork_work *r)
+{
+ u64 avgl, avgr;
+
+ if (!r->nr_atoms)
+ return 1;
+ if (!l->nr_atoms)
+ return -1;
+
+ avgl = l->total_latency / l->nr_atoms;
+ avgr = r->total_latency / r->nr_atoms;
+
+ if (avgl > avgr)
+ return 1;
+ if (avgl < avgr)
+ return -1;
+
+ return 0;
+}
+
+static int max_latency_cmp(struct kwork_work *l, struct kwork_work *r)
+{
+ if (l->max_latency > r->max_latency)
+ return 1;
+ if (l->max_latency < r->max_latency)
+ return -1;
+
+ return 0;
+}
+
static int sort_dimension__add(struct perf_kwork *kwork __maybe_unused,
const char *tok, struct list_head *list)
{
@@ -110,13 +142,21 @@ static int sort_dimension__add(struct perf_kwork *kwork __maybe_unused,
.name = "count",
.cmp = count_cmp,
};
+ static struct sort_dimension avg_sort_dimension = {
+ .name = "avg",
+ .cmp = avg_latency_cmp,
+ };
struct sort_dimension *available_sorts[] = {
&id_sort_dimension,
&max_sort_dimension,
&count_sort_dimension,
&runtime_sort_dimension,
+ &avg_sort_dimension,
};

+ if (kwork->report == KWORK_REPORT_LATENCY)
+ max_sort_dimension.cmp = max_latency_cmp;
+
for (i = 0; i < ARRAY_SIZE(available_sorts); i++) {
if (!strcmp(available_sorts[i]->name, tok)) {
list_add_tail(&available_sorts[i]->list, list);
@@ -479,6 +519,61 @@ static int report_exit_event(struct perf_kwork *kwork,
return 0;
}

+static void latency_update_entry_event(struct kwork_work *work,
+ struct kwork_atom *atom,
+ struct perf_sample *sample)
+{
+ u64 delta;
+ u64 entry_time = sample->time;
+ u64 raise_time = atom->time;
+
+ if ((raise_time != 0) && (entry_time >= raise_time)) {
+ delta = entry_time - raise_time;
+ if ((delta > work->max_latency) ||
+ (work->max_latency == 0)) {
+ work->max_latency = delta;
+ work->max_latency_start = raise_time;
+ work->max_latency_end = entry_time;
+ }
+ work->total_latency += delta;
+ work->nr_atoms++;
+ }
+}
+
+static int latency_raise_event(struct perf_kwork *kwork,
+ struct kwork_class *class,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ return work_push_atom(kwork, class, KWORK_TRACE_RAISE,
+ KWORK_TRACE_MAX, evsel, sample,
+ machine, NULL);
+}
+
+static int latency_entry_event(struct perf_kwork *kwork,
+ struct kwork_class *class,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ struct kwork_atom *atom = NULL;
+ struct kwork_work *work = NULL;
+
+ atom = work_pop_atom(kwork, class, KWORK_TRACE_ENTRY,
+ KWORK_TRACE_RAISE, evsel, sample,
+ machine, &work);
+ if (work == NULL)
+ return -1;
+
+ if (atom != NULL) {
+ latency_update_entry_event(work, atom, sample);
+ atom_del(atom);
+ }
+
+ return 0;
+}
+
static struct kwork_class kwork_irq;
static int process_irq_handler_entry_event(struct perf_tool *tool,
struct evsel *evsel,
@@ -757,6 +852,7 @@ static void report_print_work(struct perf_kwork *kwork,
int ret = 0;
char kwork_name[PRINT_KWORK_NAME_WIDTH];
char max_runtime_start[32], max_runtime_end[32];
+ char max_latency_start[32], max_latency_end[32];

printf(" ");

@@ -782,6 +878,14 @@ static void report_print_work(struct perf_kwork *kwork,
ret += printf(" %*.*f ms |",
PRINT_RUNTIME_WIDTH, RPINT_DECIMAL_WIDTH,
(double)work->total_runtime / NSEC_PER_MSEC);
+ /*
+ * avg delay
+ */
+ else if (kwork->report == KWORK_REPORT_LATENCY)
+ ret += printf(" %*.*f ms |",
+ PRINT_LATENCY_WIDTH, RPINT_DECIMAL_WIDTH,
+ (double)work->total_latency /
+ work->nr_atoms / NSEC_PER_MSEC);

/*
* count
@@ -805,6 +909,22 @@ static void report_print_work(struct perf_kwork *kwork,
PRINT_TIMESTAMP_WIDTH, max_runtime_start,
PRINT_TIMESTAMP_WIDTH, max_runtime_end);
}
+ /*
+ * max delay, max delay start, max delay end
+ */
+ else if (kwork->report == KWORK_REPORT_LATENCY) {
+ timestamp__scnprintf_usec(work->max_latency_start,
+ max_latency_start,
+ sizeof(max_latency_start));
+ timestamp__scnprintf_usec(work->max_latency_end,
+ max_latency_end,
+ sizeof(max_latency_end));
+ ret += printf(" %*.*f ms | %*s s | %*s s |",
+ PRINT_LATENCY_WIDTH, RPINT_DECIMAL_WIDTH,
+ (double)work->max_latency / NSEC_PER_MSEC,
+ PRINT_TIMESTAMP_WIDTH, max_latency_start,
+ PRINT_TIMESTAMP_WIDTH, max_latency_end);
+ }

printf("\n");
}
@@ -821,6 +941,9 @@ static int report_print_header(struct perf_kwork *kwork)
if (kwork->report == KWORK_REPORT_RUNTIME)
ret += printf(" %-*s |",
PRINT_RUNTIME_HEADER_WIDTH, "Total Runtime");
+ else if (kwork->report == KWORK_REPORT_LATENCY)
+ ret += printf(" %-*s |",
+ PRINT_LATENCY_HEADER_WIDTH, "Avg delay");

ret += printf(" %-*s |", PRINT_COUNT_WIDTH, "Count");

@@ -829,6 +952,11 @@ static int report_print_header(struct perf_kwork *kwork)
PRINT_RUNTIME_HEADER_WIDTH, "Max runtime",
PRINT_TIMESTAMP_HEADER_WIDTH, "Max runtime start",
PRINT_TIMESTAMP_HEADER_WIDTH, "Max runtime end");
+ else if (kwork->report == KWORK_REPORT_LATENCY)
+ ret += printf(" %-*s | %-*s | %-*s |",
+ PRINT_LATENCY_HEADER_WIDTH, "Max delay",
+ PRINT_TIMESTAMP_HEADER_WIDTH, "Max delay start",
+ PRINT_TIMESTAMP_HEADER_WIDTH, "Max delay end");

printf("\n");
print_separator(ret);
@@ -862,6 +990,7 @@ static void print_skipped_events(struct perf_kwork *kwork)
{
int i;
const char *const kwork_event_str[] = {
+ [KWORK_TRACE_RAISE] = "raise",
[KWORK_TRACE_ENTRY] = "entry",
[KWORK_TRACE_EXIT] = "exit",
};
@@ -932,11 +1061,18 @@ static int perf_kwork__check_config(struct perf_kwork *kwork,
.entry_event = report_entry_event,
.exit_event = report_exit_event,
};
+ static struct trace_kwork_handler latency_ops = {
+ .raise_event = latency_raise_event,
+ .entry_event = latency_entry_event,
+ };

switch (kwork->report) {
case KWORK_REPORT_RUNTIME:
kwork->tp_handler = &report_ops;
break;
+ case KWORK_REPORT_LATENCY:
+ kwork->tp_handler = &latency_ops;
+ break;
default:
pr_debug("Invalid report type %d\n", kwork->report);
return -1;
@@ -1214,6 +1350,7 @@ int cmd_kwork(int argc, const char **argv)
.nr_skipped_events = { 0 },
};
static const char default_report_sort_order[] = "runtime, max, count";
+ static const char default_latency_sort_order[] = "avg, max, count";

const struct option kwork_options[] = {
OPT_INCR('v', "verbose", &verbose,
@@ -1240,6 +1377,19 @@ int cmd_kwork(int argc, const char **argv)
"Show summary with statistics"),
OPT_PARENT(kwork_options)
};
+ const struct option latency_options[] = {
+ OPT_STRING('s', "sort", &kwork.sort_order, "key[,key2...]",
+ "sort by key(s): avg, max, count"),
+ OPT_STRING('C', "cpu", &kwork.cpu_list, "cpu",
+ "list of cpus to profile"),
+ OPT_STRING('n', "name", &kwork.profile_name, "name",
+ "event name to profile"),
+ OPT_STRING(0, "time", &kwork.time_str, "str",
+ "Time span for analysis (start,stop)"),
+ OPT_STRING('i', "input", &input_name, "file",
+ "input file name"),
+ OPT_PARENT(kwork_options)
+ };

const char *kwork_usage[] = {
NULL,
@@ -1249,8 +1399,12 @@ int cmd_kwork(int argc, const char **argv)
"perf kwork report [<options>]",
NULL
};
+ const char * const latency_usage[] = {
+ "perf kwork latency [<options>]",
+ NULL
+ };
const char *const kwork_subcommands[] = {
- "record", "report", NULL
+ "record", "report", "latency", NULL
};

argc = parse_options_subcommand(argc, argv, kwork_options,
@@ -1274,6 +1428,16 @@ int cmd_kwork(int argc, const char **argv)
kwork.report = KWORK_REPORT_RUNTIME;
setup_sorting(&kwork, report_options, report_usage);
return perf_kwork__report(&kwork);
+ } else if (strlen(argv[0]) > 2 && strstarts("latency", argv[0])) {
+ kwork.sort_order = default_latency_sort_order;
+ if (argc > 1) {
+ argc = parse_options(argc, argv, latency_options, latency_usage, 0);
+ if (argc)
+ usage_with_options(latency_usage, latency_options);
+ }
+ kwork.report = KWORK_REPORT_LATENCY;
+ setup_sorting(&kwork, latency_options, latency_usage);
+ return perf_kwork__report(&kwork);
} else
usage_with_options(kwork_usage, kwork_options);

diff --git a/tools/perf/util/kwork.h b/tools/perf/util/kwork.h
index 0a86bf47c74d..e540373ab14e 100644
--- a/tools/perf/util/kwork.h
+++ b/tools/perf/util/kwork.h
@@ -21,9 +21,11 @@ enum kwork_class_type {

enum kwork_report_type {
KWORK_REPORT_RUNTIME,
+ KWORK_REPORT_LATENCY,
};

enum kwork_trace_type {
+ KWORK_TRACE_RAISE,
KWORK_TRACE_ENTRY,
KWORK_TRACE_EXIT,
KWORK_TRACE_MAX,
@@ -116,6 +118,14 @@ struct kwork_work {
u64 max_runtime_start;
u64 max_runtime_end;
u64 total_runtime;
+
+ /*
+ * latency report
+ */
+ u64 max_latency;
+ u64 max_latency_start;
+ u64 max_latency_end;
+ u64 total_latency;
};

struct kwork_class {
@@ -143,6 +153,10 @@ struct kwork_class {

struct perf_kwork;
struct trace_kwork_handler {
+ int (*raise_event)(struct perf_kwork *kwork,
+ struct kwork_class *class, struct evsel *evsel,
+ struct perf_sample *sample, struct machine *machine);
+
int (*entry_event)(struct perf_kwork *kwork,
struct kwork_class *class, struct evsel *evsel,
struct perf_sample *sample, struct machine *machine);
--
2.30.GIT

2022-07-09 02:33:38

by Yang Jihong

[permalink] [raw]
Subject: [RFC v3 12/17] perf kwork: Add workqueue latency support

Implements the workqueue latency function: the delay is measured from
workqueue_activate_work to workqueue_execute_start.

Test cases:

# perf kwork -k workqueue lat

Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(w)vmstat_update | 0001 | 5.004 ms | 1 | 5.004 ms | 44001.745646 s | 44001.750650 s |
(w)vmstat_update | 0006 | 1.773 ms | 1 | 1.773 ms | 44000.830840 s | 44000.832613 s |
(w)vmstat_shepherd | 0000 | 0.992 ms | 8 | 2.474 ms | 44007.717845 s | 44007.720318 s |
(w)vmstat_update | 0000 | 0.974 ms | 5 | 2.624 ms | 44004.785970 s | 44004.788594 s |
(w)e1000_watchdog | 0002 | 0.687 ms | 5 | 2.632 ms | 44005.009334 s | 44005.011966 s |
(w)vmstat_update | 0002 | 0.307 ms | 1 | 0.307 ms | 44004.817395 s | 44004.817702 s |
(w)vmstat_update | 0004 | 0.296 ms | 1 | 0.296 ms | 43997.913677 s | 43997.913973 s |
(w)mix_interrupt_randomness | 0000 | 0.283 ms | 285 | 3.724 ms | 44006.790889 s | 44006.794613 s |
(w)neigh_managed_work | 0001 | 0.271 ms | 1 | 0.271 ms | 43997.665542 s | 43997.665813 s |
(w)vmstat_update | 0005 | 0.261 ms | 1 | 0.261 ms | 44007.820542 s | 44007.820803 s |
(w)neigh_managed_work | 0004 | 0.220 ms | 1 | 0.220 ms | 44002.953287 s | 44002.953507 s |
(w)neigh_periodic_work | 0004 | 0.217 ms | 1 | 0.217 ms | 43999.929718 s | 43999.929935 s |
(w)mix_interrupt_randomness | 0002 | 0.199 ms | 5 | 0.310 ms | 44005.012316 s | 44005.012625 s |
(w)vmstat_update | 0003 | 0.199 ms | 4 | 0.307 ms | 44005.714391 s | 44005.714699 s |
(w)gc_worker | 0001 | 0.071 ms | 173 | 1.128 ms | 44002.062579 s | 44002.063707 s |
--------------------------------------------------------------------------------------------------------------------------------
INFO: 0.020% skipped events (17 including 10 raise, 7 entry, 0 exit)

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/builtin-kwork.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index fa09d4eea913..4902bc73aca1 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -767,6 +767,20 @@ static struct kwork_class kwork_softirq = {
};

static struct kwork_class kwork_workqueue;
+static int process_workqueue_activate_work_event(struct perf_tool *tool,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ struct perf_kwork *kwork = container_of(tool, struct perf_kwork, tool);
+
+ if (kwork->tp_handler->raise_event)
+ return kwork->tp_handler->raise_event(kwork, &kwork_workqueue,
+ evsel, sample, machine);
+
+ return 0;
+}
+
static int process_workqueue_execute_start_event(struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
@@ -796,7 +810,7 @@ static int process_workqueue_execute_end_event(struct perf_tool *tool,
}

const struct evsel_str_handler workqueue_tp_handlers[] = {
- { "workqueue:workqueue_activate_work", NULL, },
+ { "workqueue:workqueue_activate_work", process_workqueue_activate_work_event, },
{ "workqueue:workqueue_execute_start", process_workqueue_execute_start_event, },
{ "workqueue:workqueue_execute_end", process_workqueue_execute_end_event, },
};
--
2.30.GIT

2022-07-09 02:34:02

by Yang Jihong

[permalink] [raw]
Subject: [RFC v3 11/17] perf kwork: Add softirq latency support

Implements the softirq latency reporting function.

Test cases:

# perf kwork -k softirq lat

Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)TIMER:1 | 0006 | 1.048 ms | 1 | 1.048 ms | 44000.829759 s | 44000.830807 s |
(s)TIMER:1 | 0001 | 1.008 ms | 4 | 3.434 ms | 43997.662069 s | 43997.665503 s |
(s)RCU:9 | 0006 | 0.675 ms | 7 | 1.328 ms | 43997.670304 s | 43997.671632 s |
(s)RCU:9 | 0000 | 0.414 ms | 701 | 3.996 ms | 43997.661170 s | 43997.665167 s |
(s)RCU:9 | 0005 | 0.245 ms | 88 | 1.866 ms | 43997.683105 s | 43997.684971 s |
(s)SCHED:7 | 0000 | 0.158 ms | 677 | 2.639 ms | 44004.785716 s | 44004.788355 s |
... <SNIP> ...
(s)RCU:9 | 0002 | 0.141 ms | 932 | 1.662 ms | 44005.010206 s | 44005.011868 s |
(s)RCU:9 | 0003 | 0.129 ms | 2193 | 1.507 ms | 44006.010208 s | 44006.011715 s |
(s)TIMER:1 | 0005 | 0.128 ms | 1 | 0.128 ms | 44007.820346 s | 44007.820474 s |
(s)SCHED:7 | 0002 | 0.040 ms | 1731 | 0.211 ms | 44005.009237 s | 44005.009447 s |
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork -k softirq lat -C 1,2

Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)TIMER:1 | 0001 | 1.008 ms | 4 | 3.434 ms | 43997.662069 s | 43997.665503 s |
(s)RCU:9 | 0001 | 0.216 ms | 1619 | 3.659 ms | 43997.662069 s | 43997.665727 s |
(s)RCU:9 | 0002 | 0.141 ms | 932 | 1.662 ms | 44005.010206 s | 44005.011868 s |
(s)NET_RX:3 | 0002 | 0.106 ms | 5 | 0.163 ms | 44005.012255 s | 44005.012418 s |
(s)TIMER:1 | 0002 | 0.084 ms | 9 | 0.114 ms | 44005.009168 s | 44005.009282 s |
(s)SCHED:7 | 0001 | 0.049 ms | 655 | 0.837 ms | 44005.707998 s | 44005.708835 s |
(s)SCHED:7 | 0002 | 0.040 ms | 1731 | 0.211 ms | 44005.009237 s | 44005.009447 s |
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork -k softirq lat -n RCU

Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)RCU:9 | 0006 | 0.675 ms | 7 | 1.328 ms | 43997.670304 s | 43997.671632 s |
(s)RCU:9 | 0000 | 0.414 ms | 701 | 3.996 ms | 43997.661170 s | 43997.665167 s |
(s)RCU:9 | 0005 | 0.245 ms | 88 | 1.866 ms | 43997.683105 s | 43997.684971 s |
(s)RCU:9 | 0004 | 0.237 ms | 26 | 0.792 ms | 43997.683018 s | 43997.683810 s |
(s)RCU:9 | 0007 | 0.217 ms | 140 | 1.335 ms | 43997.671080 s | 43997.672415 s |
(s)RCU:9 | 0001 | 0.216 ms | 1619 | 3.659 ms | 43997.662069 s | 43997.665727 s |
(s)RCU:9 | 0002 | 0.141 ms | 932 | 1.662 ms | 44005.010206 s | 44005.011868 s |
(s)RCU:9 | 0003 | 0.129 ms | 2193 | 1.507 ms | 44006.010208 s | 44006.011715 s |
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork -k softirq lat -s count,avg -n RCU

Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)RCU:9 | 0003 | 0.129 ms | 2193 | 1.507 ms | 44006.010208 s | 44006.011715 s |
(s)RCU:9 | 0001 | 0.216 ms | 1619 | 3.659 ms | 43997.662069 s | 43997.665727 s |
(s)RCU:9 | 0002 | 0.141 ms | 932 | 1.662 ms | 44005.010206 s | 44005.011868 s |
(s)RCU:9 | 0000 | 0.414 ms | 701 | 3.996 ms | 43997.661170 s | 43997.665167 s |
(s)RCU:9 | 0007 | 0.217 ms | 140 | 1.335 ms | 43997.671080 s | 43997.672415 s |
(s)RCU:9 | 0005 | 0.245 ms | 88 | 1.866 ms | 43997.683105 s | 43997.684971 s |
(s)RCU:9 | 0004 | 0.237 ms | 26 | 0.792 ms | 43997.683018 s | 43997.683810 s |
(s)RCU:9 | 0006 | 0.675 ms | 7 | 1.328 ms | 43997.670304 s | 43997.671632 s |
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork -k softirq lat --time 43997,

Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
--------------------------------------------------------------------------------------------------------------------------------
(s)TIMER:1 | 0006 | 1.048 ms | 1 | 1.048 ms | 44000.829759 s | 44000.830807 s |
(s)TIMER:1 | 0001 | 1.008 ms | 4 | 3.434 ms | 43997.662069 s | 43997.665503 s |
(s)RCU:9 | 0006 | 0.675 ms | 7 | 1.328 ms | 43997.670304 s | 43997.671632 s |
(s)RCU:9 | 0000 | 0.414 ms | 701 | 3.996 ms | 43997.661170 s | 43997.665167 s |
(s)TIMER:1 | 0004 | 0.083 ms | 21 | 0.127 ms | 44004.969171 s | 44004.969298 s |
... <SNIP> ...
(s)SCHED:7 | 0005 | 0.050 ms | 4 | 0.086 ms | 43997.684852 s | 43997.684938 s |
(s)SCHED:7 | 0001 | 0.049 ms | 655 | 0.837 ms | 44005.707998 s | 44005.708835 s |
(s)SCHED:7 | 0007 | 0.044 ms | 171 | 0.077 ms | 43997.943265 s | 43997.943342 s |
(s)SCHED:7 | 0002 | 0.040 ms | 1731 | 0.211 ms | 44005.009237 s | 44005.009447 s |
--------------------------------------------------------------------------------------------------------------------------------

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/builtin-kwork.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index cc2c090fc2f0..fa09d4eea913 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -646,6 +646,20 @@ static struct kwork_class kwork_irq = {
};

static struct kwork_class kwork_softirq;
+static int process_softirq_raise_event(struct perf_tool *tool,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ struct perf_kwork *kwork = container_of(tool, struct perf_kwork, tool);
+
+ if (kwork->tp_handler->raise_event)
+ return kwork->tp_handler->raise_event(kwork, &kwork_softirq,
+ evsel, sample, machine);
+
+ return 0;
+}
+
static int process_softirq_entry_event(struct perf_tool *tool,
struct evsel *evsel,
struct perf_sample *sample,
@@ -675,7 +689,7 @@ static int process_softirq_exit_event(struct perf_tool *tool,
}

const struct evsel_str_handler softirq_tp_handlers[] = {
- { "irq:softirq_raise", NULL, },
+ { "irq:softirq_raise", process_softirq_raise_event, },
{ "irq:softirq_entry", process_softirq_entry_event, },
{ "irq:softirq_exit", process_softirq_exit_event, },
};
--
2.30.GIT

2022-07-09 02:34:17

by Yang Jihong

[permalink] [raw]
Subject: [RFC v3 03/17] perf kwork: Add softirq kwork record support

Record the softirq events irq:softirq_raise, irq:softirq_entry, and
irq:softirq_exit.

Test cases:
Record all events:

# perf kwork record -o perf_kwork.date -- sleep 1
[ perf record: Woken up 0 times to write data ]
[ perf record: Captured and wrote 0.897 MB perf_kwork.date ]
#
# perf evlist -i perf_kwork.date
irq:irq_handler_entry
irq:irq_handler_exit
irq:softirq_raise
irq:softirq_entry
irq:softirq_exit
dummy:HG
# Tip: use 'perf evlist --trace-fields' to show fields for tracepoint events

Record softirq events:

# perf kwork -k softirq record -o perf_kwork.date -- sleep 1
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.141 MB perf_kwork.date ]
#
# perf evlist -i perf_kwork.date
irq:softirq_raise
irq:softirq_entry
irq:softirq_exit
dummy:HG
# Tip: use 'perf evlist --trace-fields' to show fields for tracepoint events
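
For reference, the softirq-only record above is roughly equivalent to
tracing the three tracepoints directly with plain perf record (an
illustrative approximation; the exact options perf kwork record passes
may differ):

# perf record -a -e irq:softirq_raise -e irq:softirq_entry -e irq:softirq_exit -- sleep 1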

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/Documentation/perf-kwork.txt | 2 +-
tools/perf/builtin-kwork.c | 16 +++++++++++++++-
tools/perf/util/kwork.h | 1 +
3 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/tools/perf/Documentation/perf-kwork.txt b/tools/perf/Documentation/perf-kwork.txt
index 57bd5fa7d5c9..abfdeca2ad39 100644
--- a/tools/perf/Documentation/perf-kwork.txt
+++ b/tools/perf/Documentation/perf-kwork.txt
@@ -32,7 +32,7 @@ OPTIONS

-k::
--kwork::
- List of kwork to profile (irq, etc)
+ List of kwork to profile (irq, softirq, etc)

-v::
--verbose::
diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index a26b7fde1e38..2c492c8fd019 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -37,8 +37,22 @@ static struct kwork_class kwork_irq = {
.tp_handlers = irq_tp_handlers,
};

+const struct evsel_str_handler softirq_tp_handlers[] = {
+ { "irq:softirq_raise", NULL, },
+ { "irq:softirq_entry", NULL, },
+ { "irq:softirq_exit", NULL, },
+};
+
+static struct kwork_class kwork_softirq = {
+ .name = "softirq",
+ .type = KWORK_CLASS_SOFTIRQ,
+ .nr_tracepoints = 3,
+ .tp_handlers = softirq_tp_handlers,
+};
+
static struct kwork_class *kwork_class_supported_list[KWORK_CLASS_MAX] = {
[KWORK_CLASS_IRQ] = &kwork_irq,
+ [KWORK_CLASS_SOFTIRQ] = &kwork_softirq,
};

static void setup_event_list(struct perf_kwork *kwork,
@@ -145,7 +159,7 @@ int cmd_kwork(int argc, const char **argv)
OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace,
"dump raw trace in ASCII"),
OPT_STRING('k', "kwork", &kwork.event_list_str, "kwork",
- "list of kwork to profile (irq, etc)"),
+ "list of kwork to profile (irq, softirq, etc)"),
OPT_BOOLEAN('f', "force", &kwork.force, "don't complain, do it"),
OPT_END()
};
diff --git a/tools/perf/util/kwork.h b/tools/perf/util/kwork.h
index f1d89cb058fc..669a81626cb4 100644
--- a/tools/perf/util/kwork.h
+++ b/tools/perf/util/kwork.h
@@ -14,6 +14,7 @@

enum kwork_class_type {
KWORK_CLASS_IRQ,
+ KWORK_CLASS_SOFTIRQ,
KWORK_CLASS_MAX,
};

--
2.30.GIT

2022-07-09 02:47:26

by Yang Jihong

[permalink] [raw]
Subject: [RFC v3 08/17] perf kwork: Add softirq report support

Implements the softirq kwork report function.

Test cases:

# perf kwork -k softirq rep

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(s)TIMER:1 | 0003 | 181.387 ms | 2476 | 1.240 ms | 44004.787960 s | 44004.789201 s |
(s)RCU:9 | 0003 | 91.573 ms | 2193 | 0.650 ms | 44004.790258 s | 44004.790908 s |
(s)RCU:9 | 0001 | 78.960 ms | 1619 | 1.195 ms | 44001.496553 s | 44001.497749 s |
(s)SCHED:7 | 0003 | 55.962 ms | 1255 | 0.954 ms | 44004.812008 s | 44004.812962 s |
... <SNIP> ...
(s)RCU:9 | 0004 | 0.830 ms | 26 | 0.058 ms | 43997.666418 s | 43997.666476 s |
(s)TIMER:1 | 0001 | 0.471 ms | 4 | 0.158 ms | 44007.834694 s | 44007.834852 s |
(s)RCU:9 | 0006 | 0.220 ms | 7 | 0.048 ms | 44004.833764 s | 44004.833812 s |
(s)NET_RX:3 | 0002 | 0.164 ms | 5 | 0.049 ms | 44005.012418 s | 44005.012466 s |
(s)TIMER:1 | 0005 | 0.164 ms | 1 | 0.164 ms | 44007.820474 s | 44007.820638 s |
(s)TIMER:1 | 0006 | 0.087 ms | 1 | 0.087 ms | 44000.830807 s | 44000.830894 s |
(s)SCHED:7 | 0006 | 0.080 ms | 2 | 0.044 ms | 43997.826145 s | 43997.826189 s |
--------------------------------------------------------------------------------------------------------------------------------

#
# perf kwork -k softirq rep -S

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(s)TIMER:1 | 0003 | 181.387 ms | 2476 | 1.240 ms | 44004.787960 s | 44004.789201 s |
(s)RCU:9 | 0003 | 91.573 ms | 2193 | 0.650 ms | 44004.790258 s | 44004.790908 s |
(s)RCU:9 | 0001 | 78.960 ms | 1619 | 1.195 ms | 44001.496553 s | 44001.497749 s |
(s)SCHED:7 | 0000 | 63.631 ms | 680 | 2.690 ms | 44006.721976 s | 44006.724666 s |
... <SNIP> ...
(s)SCHED:7 | 0003 | 55.962 ms | 1255 | 0.954 ms | 44004.812008 s | 44004.812962 s |
(s)RCU:9 | 0006 | 0.220 ms | 7 | 0.048 ms | 44004.833764 s | 44004.833812 s |
(s)NET_RX:3 | 0002 | 0.164 ms | 5 | 0.049 ms | 44005.012418 s | 44005.012466 s |
(s)TIMER:1 | 0005 | 0.164 ms | 1 | 0.164 ms | 44007.820474 s | 44007.820638 s |
(s)TIMER:1 | 0006 | 0.087 ms | 1 | 0.087 ms | 44000.830807 s | 44000.830894 s |
(s)SCHED:7 | 0006 | 0.080 ms | 2 | 0.044 ms | 43997.826145 s | 43997.826189 s |
--------------------------------------------------------------------------------------------------------------------------------
Total count : 12748
Total runtime (msec) : 661.433 (0.065% load average)
Total time span (msec) : 10176.441
--------------------------------------------------------------------------------------------------------------------------------

#
# perf kwork -k softirq rep -s count,max

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
(s)TIMER:1 | 0003 | 181.387 ms | 2476 | 1.240 ms | 44004.787960 s | 44004.789201 s |
(s)RCU:9 | 0003 | 91.573 ms | 2193 | 0.650 ms | 44004.790258 s | 44004.790908 s |
(s)SCHED:7 | 0002 | 50.039 ms | 1731 | 0.074 ms | 44005.009447 s | 44005.009521 s |
(s)RCU:9 | 0001 | 78.960 ms | 1619 | 1.195 ms | 44001.496553 s | 44001.497749 s |
(s)SCHED:7 | 0003 | 55.962 ms | 1255 | 0.954 ms | 44004.812008 s | 44004.812962 s |
... <SNIP> ...
(s)RCU:9 | 0002 | 35.241 ms | 932 | 0.407 ms | 44005.009541 s | 44005.009949 s |
(s)RCU:9 | 0000 | 45.710 ms | 702 | 1.144 ms | 44004.787023 s | 44004.788167 s |
(s)SCHED:7 | 0006 | 0.080 ms | 2 | 0.044 ms | 43997.826145 s | 43997.826189 s |
(s)TIMER:1 | 0005 | 0.164 ms | 1 | 0.164 ms | 44007.820474 s | 44007.820638 s |
(s)TIMER:1 | 0006 | 0.087 ms | 1 | 0.087 ms | 44000.830807 s | 44000.830894 s |
--------------------------------------------------------------------------------------------------------------------------------

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/builtin-kwork.c | 98 +++++++++++++++++++++++++++++++++++++-
1 file changed, 96 insertions(+), 2 deletions(-)

diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index b1993be0a20a..8680fe3795d4 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -550,17 +550,111 @@ static struct kwork_class kwork_irq = {
.work_name = irq_work_name,
};

+static struct kwork_class kwork_softirq;
+static int process_softirq_entry_event(struct perf_tool *tool,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ struct perf_kwork *kwork = container_of(tool, struct perf_kwork, tool);
+
+ if (kwork->tp_handler->entry_event)
+ return kwork->tp_handler->entry_event(kwork, &kwork_softirq,
+ evsel, sample, machine);
+
+ return 0;
+}
+
+static int process_softirq_exit_event(struct perf_tool *tool,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ struct perf_kwork *kwork = container_of(tool, struct perf_kwork, tool);
+
+ if (kwork->tp_handler->exit_event)
+ return kwork->tp_handler->exit_event(kwork, &kwork_softirq,
+ evsel, sample, machine);
+
+ return 0;
+}
+
const struct evsel_str_handler softirq_tp_handlers[] = {
{ "irq:softirq_raise", NULL, },
- { "irq:softirq_entry", NULL, },
- { "irq:softirq_exit", NULL, },
+ { "irq:softirq_entry", process_softirq_entry_event, },
+ { "irq:softirq_exit", process_softirq_exit_event, },
};

+static int softirq_class_init(struct kwork_class *class,
+ struct perf_session *session)
+{
+ if (perf_session__set_tracepoints_handlers(session,
+ softirq_tp_handlers)) {
+ pr_err("Failed to set softirq tracepoints handlers\n");
+ return -1;
+ }
+
+ class->work_root = RB_ROOT_CACHED;
+ return 0;
+}
+
+static char *evsel__softirq_name(struct evsel *evsel, u64 num)
+{
+ char *name = NULL;
+ bool found = false;
+ struct tep_print_flag_sym *sym = NULL;
+ struct tep_print_arg *args = evsel->tp_format->print_fmt.args;
+
+ if ((args == NULL) || (args->next == NULL))
+ return NULL;
+
+ /* skip softirq field: "REC->vec" */
+ for (sym = args->next->symbol.symbols; sym != NULL; sym = sym->next) {
+ if ((eval_flag(sym->value) == (unsigned long long)num) &&
+ (strlen(sym->str) != 0)) {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found)
+ return NULL;
+
+ name = strdup(sym->str);
+ if (name == NULL) {
+ pr_err("Failed to copy symbol name\n");
+ return NULL;
+ }
+ return name;
+}
+
+static void softirq_work_init(struct kwork_class *class,
+ struct kwork_work *work,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine __maybe_unused)
+{
+ u64 num = evsel__intval(evsel, sample, "vec");
+
+ work->id = num;
+ work->class = class;
+ work->cpu = sample->cpu;
+ work->name = evsel__softirq_name(evsel, num);
+}
+
+static void softirq_work_name(struct kwork_work *work, char *buf, int len)
+{
+ snprintf(buf, len, "(s)%s:%" PRIu64 "", work->name, work->id);
+}
+
static struct kwork_class kwork_softirq = {
.name = "softirq",
.type = KWORK_CLASS_SOFTIRQ,
.nr_tracepoints = 3,
.tp_handlers = softirq_tp_handlers,
+ .class_init = softirq_class_init,
+ .work_init = softirq_work_init,
+ .work_name = softirq_work_name,
};

const struct evsel_str_handler workqueue_tp_handlers[] = {
--
2.30.GIT

2022-07-09 02:48:42

by Yang Jihong

[permalink] [raw]
Subject: [RFC v3 06/17] perf kwork: Implement perf kwork report

Implements the framework of perf kwork report, which is used to report time
properties such as runtime and frequency:

Test cases:

# perf kwork

Usage: perf kwork [<options>] {record|report}

-D, --dump-raw-trace dump raw trace in ASCII
-f, --force don't complain, do it
-k, --kwork <kwork> list of kwork to profile (irq, softirq, workqueue, etc)
-v, --verbose be more verbose (show symbol address, etc)

# perf kwork report -h

Usage: perf kwork report [<options>]

-C, --cpu <cpu> list of cpus to profile
-i, --input <file> input file name
-n, --name <name> event name to profile
-s, --sort <key[,key2...]>
sort by key(s): runtime, max, count
-S, --with-summary Show summary with statistics
--time <str> Time span for analysis (start,stop)

# perf kwork report

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork report -S

Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
--------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------
Total count : 0
Total runtime (msec) : 0.000 (0.000% load average)
Total time span (msec) : 0.000
--------------------------------------------------------------------------------------------------------------------------------

# perf kwork report -C 0,100
Requested CPU 100 too large. Consider raising MAX_NR_CPUS
Invalid cpu bitmap

# perf kwork report -s runtime1
Error: Unknown --sort key: `runtime1'

Usage: perf kwork report [<options>]

-C, --cpu <cpu> list of cpus to profile
-i, --input <file> input file name
-n, --name <name> event name to profile
-s, --sort <key[,key2...]>
sort by key(s): runtime, max, count
-S, --with-summary Show summary with statistics
--time <str> Time span for analysis (start,stop)

# perf kwork report -i perf_no_exist.data
failed to open perf_no_exist.data: No such file or directory

# perf kwork report --time 00FFF,
Invalid time span

Since no report-supported events have been added yet, the output is empty.

Brief description of the data structures:
1. "class" indicates the event type. For example, irq and softirq
correspond to different types.
2. "cluster" refers to a specific event belonging to a type. For
example, RCU and TIMER in softirq correspond to different clusters,
each of which contains three kinds of events: raise, entry, and exit.
3. "atom" holds the time of each sample and a reference to the sample of
the previous phase. (For example, an exit atom points back to its entry
atom, which is used for timehist.)
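
To make these relationships concrete, here is a minimal standalone sketch
(illustration only, not part of the patch; the variable names are
hypothetical) of how one raise/entry/exit triple for a cluster turns into
the delay printed by 'perf kwork latency' and the runtime printed by
'perf kwork report':

/*
 * Illustration only -- not part of the patch. Shows how one
 * raise/entry/exit triple for a cluster (e.g. softirq RCU:9 on one
 * CPU) yields the numbers printed by "perf kwork latency" (delay)
 * and "perf kwork report" (runtime). All names are hypothetical.
 */
#include <stdio.h>

int main(void)
{
	/* timestamps in nanoseconds, as carried in struct kwork_atom::time */
	unsigned long long raise_ns = 1000000;	/* irq:softirq_raise */
	unsigned long long entry_ns = 1250000;	/* irq:softirq_entry */
	unsigned long long exit_ns  = 1300000;	/* irq:softirq_exit  */

	/* latency subcommand: delay from raise to entry */
	double delay_ms   = (double)(entry_ns - raise_ns) / 1000000.0;
	/* report subcommand: runtime from entry to exit  */
	double runtime_ms = (double)(exit_ns - entry_ns) / 1000000.0;

	/* prints: delay 0.250 ms, runtime 0.050 ms */
	printf("delay %.3f ms, runtime %.3f ms\n", delay_ms, runtime_ms);
	return 0;
}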

Signed-off-by: Yang Jihong <[email protected]>
---
tools/perf/Documentation/perf-kwork.txt | 33 +
tools/perf/builtin-kwork.c | 859 +++++++++++++++++++++++-
tools/perf/util/kwork.h | 161 +++++
3 files changed, 1051 insertions(+), 2 deletions(-)

diff --git a/tools/perf/Documentation/perf-kwork.txt b/tools/perf/Documentation/perf-kwork.txt
index c5b52f61da99..b79b2c0d047e 100644
--- a/tools/perf/Documentation/perf-kwork.txt
+++ b/tools/perf/Documentation/perf-kwork.txt
@@ -17,8 +17,11 @@ There are several variants of 'perf kwork':
'perf kwork record <command>' to record the kernel work
of an arbitrary workload.

+ 'perf kwork report' to report the per kwork runtime.
+
Example usage:
perf kwork record -- sleep 1
+ perf kwork report

OPTIONS
-------
@@ -38,6 +41,36 @@ OPTIONS
--verbose::
Be more verbose. (show symbol address, etc)

+OPTIONS for 'perf kwork report'
+----------------------------
+
+-C::
+--cpu::
+ Only show events for the given CPU(s) (comma separated list).
+
+-i::
+--input::
+ Input file name. (default: perf.data unless stdin is a fifo)
+
+-n::
+--name::
+ Only show events for the given name.
+
+-s::
+--sort::
+ Sort by key(s): runtime, max, count
+
+-S::
+--with-summary::
+ Show summary with statistics
+
+--time::
+ Only analyze samples within given time window: <start>,<stop>. Times
+ have the format seconds.microseconds. If start is not given (i.e., time
+ string is ',x.y') then analysis starts at the beginning of the file. If
+ stop time is not given (i.e, time string is 'x.y,') then analysis goes
+ to end of file.
+
SEE ALSO
--------
linkperf:perf-record[1]
diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index 8086236b7513..9c488d647995 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -25,6 +25,460 @@
#include <linux/time64.h>
#include <linux/zalloc.h>

+/*
+ * report header elements width
+ */
+#define PRINT_CPU_WIDTH 4
+#define PRINT_COUNT_WIDTH 9
+#define PRINT_RUNTIME_WIDTH 10
+#define PRINT_TIMESTAMP_WIDTH 17
+#define PRINT_KWORK_NAME_WIDTH 30
+#define RPINT_DECIMAL_WIDTH 3
+#define PRINT_TIME_UNIT_SEC_WIDTH 2
+#define PRINT_TIME_UNIT_MESC_WIDTH 3
+#define PRINT_RUNTIME_HEADER_WIDTH (PRINT_RUNTIME_WIDTH + PRINT_TIME_UNIT_MESC_WIDTH)
+#define PRINT_TIMESTAMP_HEADER_WIDTH (PRINT_TIMESTAMP_WIDTH + PRINT_TIME_UNIT_SEC_WIDTH)
+
+struct sort_dimension {
+ const char *name;
+ int (*cmp)(struct kwork_work *l, struct kwork_work *r);
+ struct list_head list;
+};
+
+static int id_cmp(struct kwork_work *l, struct kwork_work *r)
+{
+ if (l->cpu > r->cpu)
+ return 1;
+ if (l->cpu < r->cpu)
+ return -1;
+
+ if (l->id > r->id)
+ return 1;
+ if (l->id < r->id)
+ return -1;
+
+ return 0;
+}
+
+static int count_cmp(struct kwork_work *l, struct kwork_work *r)
+{
+ if (l->nr_atoms > r->nr_atoms)
+ return 1;
+ if (l->nr_atoms < r->nr_atoms)
+ return -1;
+
+ return 0;
+}
+
+static int runtime_cmp(struct kwork_work *l, struct kwork_work *r)
+{
+ if (l->total_runtime > r->total_runtime)
+ return 1;
+ if (l->total_runtime < r->total_runtime)
+ return -1;
+
+ return 0;
+}
+
+static int max_runtime_cmp(struct kwork_work *l, struct kwork_work *r)
+{
+ if (l->max_runtime > r->max_runtime)
+ return 1;
+ if (l->max_runtime < r->max_runtime)
+ return -1;
+
+ return 0;
+}
+
+static int sort_dimension__add(struct perf_kwork *kwork __maybe_unused,
+ const char *tok, struct list_head *list)
+{
+ size_t i;
+ static struct sort_dimension max_sort_dimension = {
+ .name = "max",
+ .cmp = max_runtime_cmp,
+ };
+ static struct sort_dimension id_sort_dimension = {
+ .name = "id",
+ .cmp = id_cmp,
+ };
+ static struct sort_dimension runtime_sort_dimension = {
+ .name = "runtime",
+ .cmp = runtime_cmp,
+ };
+ static struct sort_dimension count_sort_dimension = {
+ .name = "count",
+ .cmp = count_cmp,
+ };
+ struct sort_dimension *available_sorts[] = {
+ &id_sort_dimension,
+ &max_sort_dimension,
+ &count_sort_dimension,
+ &runtime_sort_dimension,
+ };
+
+ for (i = 0; i < ARRAY_SIZE(available_sorts); i++) {
+ if (!strcmp(available_sorts[i]->name, tok)) {
+ list_add_tail(&available_sorts[i]->list, list);
+ return 0;
+ }
+ }
+
+ return -1;
+}
+
+static void setup_sorting(struct perf_kwork *kwork,
+ const struct option *options,
+ const char * const usage_msg[])
+{
+ char *tmp, *tok, *str = strdup(kwork->sort_order);
+
+ for (tok = strtok_r(str, ", ", &tmp);
+ tok; tok = strtok_r(NULL, ", ", &tmp)) {
+ if (sort_dimension__add(kwork, tok, &kwork->sort_list) < 0)
+ usage_with_options_msg(usage_msg, options,
+ "Unknown --sort key: `%s'", tok);
+ }
+
+ pr_debug("Sort order: %s\n", kwork->sort_order);
+ free(str);
+}
+
+static struct kwork_atom *atom_new(struct perf_kwork *kwork,
+ struct perf_sample *sample)
+{
+ unsigned long i;
+ struct kwork_atom_page *page;
+ struct kwork_atom *atom = NULL;
+
+ list_for_each_entry(page, &kwork->atom_page_list, list) {
+ if (!bitmap_full(page->bitmap, NR_ATOM_PER_PAGE)) {
+ i = find_first_zero_bit(page->bitmap, NR_ATOM_PER_PAGE);
+ BUG_ON(i >= NR_ATOM_PER_PAGE);
+ atom = &page->atoms[i];
+ goto found_atom;
+ }
+ }
+
+ /*
+ * new page
+ */
+ page = zalloc(sizeof(*page));
+ if (page == NULL) {
+ pr_err("Failed to zalloc kwork atom page\n");
+ return NULL;
+ }
+
+ i = 0;
+ atom = &page->atoms[0];
+ list_add_tail(&page->list, &kwork->atom_page_list);
+
+found_atom:
+ set_bit(i, page->bitmap);
+ atom->time = sample->time;
+ atom->prev = NULL;
+ atom->page_addr = page;
+ atom->bit_inpage = i;
+ return atom;
+}
+
+static void atom_free(struct kwork_atom *atom)
+{
+ if (atom->prev != NULL)
+ atom_free(atom->prev);
+
+ clear_bit(atom->bit_inpage,
+ ((struct kwork_atom_page *)atom->page_addr)->bitmap);
+}
+
+static void atom_del(struct kwork_atom *atom)
+{
+ list_del(&atom->list);
+ atom_free(atom);
+}
+
+static int work_cmp(struct list_head *list,
+ struct kwork_work *l, struct kwork_work *r)
+{
+ int ret = 0;
+ struct sort_dimension *sort;
+
+ BUG_ON(list_empty(list));
+
+ list_for_each_entry(sort, list, list) {
+ ret = sort->cmp(l, r);
+ if (ret)
+ return ret;
+ }
+
+ return ret;
+}
+
+static struct kwork_work *work_search(struct rb_root_cached *root,
+ struct kwork_work *key,
+ struct list_head *sort_list)
+{
+ int cmp;
+ struct kwork_work *work;
+ struct rb_node *node = root->rb_root.rb_node;
+
+ while (node) {
+ work = container_of(node, struct kwork_work, node);
+ cmp = work_cmp(sort_list, key, work);
+ if (cmp > 0)
+ node = node->rb_left;
+ else if (cmp < 0)
+ node = node->rb_right;
+ else {
+ if (work->name == NULL)
+ work->name = key->name;
+ return work;
+ }
+ }
+ return NULL;
+}
+
+static void work_insert(struct rb_root_cached *root,
+ struct kwork_work *key, struct list_head *sort_list)
+{
+ int cmp;
+ bool leftmost = true;
+ struct kwork_work *cur;
+ struct rb_node **new = &(root->rb_root.rb_node), *parent = NULL;
+
+ while (*new) {
+ cur = container_of(*new, struct kwork_work, node);
+ parent = *new;
+ cmp = work_cmp(sort_list, key, cur);
+
+ if (cmp > 0)
+ new = &((*new)->rb_left);
+ else {
+ new = &((*new)->rb_right);
+ leftmost = false;
+ }
+ }
+
+ rb_link_node(&key->node, parent, new);
+ rb_insert_color_cached(&key->node, root, leftmost);
+}
+
+static struct kwork_work *work_new(struct kwork_work *key)
+{
+ int i;
+ struct kwork_work *work = zalloc(sizeof(*work));
+
+ if (work == NULL) {
+ pr_err("Failed to zalloc kwork work\n");
+ return NULL;
+ }
+
+ for (i = 0; i < KWORK_TRACE_MAX; i++)
+ INIT_LIST_HEAD(&work->atom_list[i]);
+
+ work->id = key->id;
+ work->cpu = key->cpu;
+ work->name = key->name;
+ work->class = key->class;
+ return work;
+}
+
+static struct kwork_work *work_findnew(struct rb_root_cached *root,
+ struct kwork_work *key,
+ struct list_head *sort_list)
+{
+ struct kwork_work *work = NULL;
+
+ work = work_search(root, key, sort_list);
+ if (work != NULL)
+ return work;
+
+ work = work_new(key);
+ if (work == NULL)
+ return NULL;
+
+ work_insert(root, work, sort_list);
+ return work;
+}
+
+static void profile_update_timespan(struct perf_kwork *kwork,
+ struct perf_sample *sample)
+{
+ if (!kwork->summary)
+ return;
+
+ if ((kwork->timestart == 0) || (kwork->timestart > sample->time))
+ kwork->timestart = sample->time;
+
+ if (kwork->timeend < sample->time)
+ kwork->timeend = sample->time;
+}
+
+static bool profile_event_match(struct perf_kwork *kwork,
+ struct kwork_work *work,
+ struct perf_sample *sample)
+{
+ int cpu = work->cpu;
+ u64 time = sample->time;
+ struct perf_time_interval *ptime = &kwork->ptime;
+
+ if ((kwork->cpu_list != NULL) && !test_bit(cpu, kwork->cpu_bitmap))
+ return false;
+
+ if (((ptime->start != 0) && (ptime->start > time)) ||
+ ((ptime->end != 0) && (ptime->end < time)))
+ return false;
+
+ if ((kwork->profile_name != NULL) &&
+ (work->name != NULL) &&
+ (strcmp(work->name, kwork->profile_name) != 0))
+ return false;
+
+ profile_update_timespan(kwork, sample);
+ return true;
+}
+
+static int work_push_atom(struct perf_kwork *kwork,
+ struct kwork_class *class,
+ enum kwork_trace_type src_type,
+ enum kwork_trace_type dst_type,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine,
+ struct kwork_work **ret_work)
+{
+ struct kwork_atom *atom, *dst_atom;
+ struct kwork_work *work, key;
+
+ BUG_ON(class->work_init == NULL);
+ class->work_init(class, &key, evsel, sample, machine);
+
+ atom = atom_new(kwork, sample);
+ if (atom == NULL)
+ return -1;
+
+ work = work_findnew(&class->work_root, &key, &kwork->cmp_id);
+ if (work == NULL) {
+ free(atom);
+ return -1;
+ }
+
+ if (!profile_event_match(kwork, work, sample))
+ return 0;
+
+ if ((dst_type >= 0) && (dst_type < KWORK_TRACE_MAX)) {
+ dst_atom = list_last_entry_or_null(&work->atom_list[dst_type],
+ struct kwork_atom, list);
+ if (dst_atom != NULL) {
+ atom->prev = dst_atom;
+ list_del(&dst_atom->list);
+ }
+ }
+
+ if (ret_work != NULL)
+ *ret_work = work;
+
+ list_add_tail(&atom->list, &work->atom_list[src_type]);
+
+ return 0;
+}
+
+static struct kwork_atom *work_pop_atom(struct perf_kwork *kwork,
+ struct kwork_class *class,
+ enum kwork_trace_type src_type,
+ enum kwork_trace_type dst_type,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine,
+ struct kwork_work **ret_work)
+{
+ struct kwork_atom *atom, *src_atom;
+ struct kwork_work *work, key;
+
+ BUG_ON(class->work_init == NULL);
+ class->work_init(class, &key, evsel, sample, machine);
+
+ work = work_findnew(&class->work_root, &key, &kwork->cmp_id);
+ if (ret_work != NULL)
+ *ret_work = work;
+
+ if (work == NULL)
+ return NULL;
+
+ if (!profile_event_match(kwork, work, sample))
+ return NULL;
+
+ atom = list_last_entry_or_null(&work->atom_list[dst_type],
+ struct kwork_atom, list);
+ if (atom != NULL)
+ return atom;
+
+ src_atom = atom_new(kwork, sample);
+ if (src_atom != NULL)
+ list_add_tail(&src_atom->list, &work->atom_list[src_type]);
+ else {
+ if (ret_work != NULL)
+ *ret_work = NULL;
+ }
+
+ return NULL;
+}
+
+static void report_update_exit_event(struct kwork_work *work,
+ struct kwork_atom *atom,
+ struct perf_sample *sample)
+{
+ u64 delta;
+ u64 exit_time = sample->time;
+ u64 entry_time = atom->time;
+
+ if ((entry_time != 0) && (exit_time >= entry_time)) {
+ delta = exit_time - entry_time;
+ if ((delta > work->max_runtime) ||
+ (work->max_runtime == 0)) {
+ work->max_runtime = delta;
+ work->max_runtime_start = entry_time;
+ work->max_runtime_end = exit_time;
+ }
+ work->total_runtime += delta;
+ work->nr_atoms++;
+ }
+}
+
+static int report_entry_event(struct perf_kwork *kwork,
+ struct kwork_class *class,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ return work_push_atom(kwork, class, KWORK_TRACE_ENTRY,
+ KWORK_TRACE_MAX, evsel, sample,
+ machine, NULL);
+}
+
+static int report_exit_event(struct perf_kwork *kwork,
+ struct kwork_class *class,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine)
+{
+ struct kwork_atom *atom = NULL;
+ struct kwork_work *work = NULL;
+
+ atom = work_pop_atom(kwork, class, KWORK_TRACE_EXIT,
+ KWORK_TRACE_ENTRY, evsel, sample,
+ machine, &work);
+ if (work == NULL)
+ return -1;
+
+ if (atom != NULL) {
+ report_update_exit_event(work, atom, sample);
+ atom_del(atom);
+ }
+
+ return 0;
+}
+
const struct evsel_str_handler irq_tp_handlers[] = {
{ "irq:irq_handler_entry", NULL, },
{ "irq:irq_handler_exit", NULL, },
@@ -69,6 +523,351 @@ static struct kwork_class *kwork_class_supported_list[KWORK_CLASS_MAX] = {
[KWORK_CLASS_WORKQUEUE] = &kwork_workqueue,
};

+static void print_separator(int len)
+{
+ printf(" %.*s\n", len, graph_dotted_line);
+}
+
+static void report_print_work(struct perf_kwork *kwork,
+ struct kwork_work *work)
+{
+ int ret = 0;
+ char kwork_name[PRINT_KWORK_NAME_WIDTH];
+ char max_runtime_start[32], max_runtime_end[32];
+
+ printf(" ");
+
+ /*
+ * kwork name
+ */
+ if (work->class && work->class->work_name) {
+ work->class->work_name(work, kwork_name,
+ PRINT_KWORK_NAME_WIDTH);
+ ret += printf(" %-*s |", PRINT_KWORK_NAME_WIDTH, kwork_name);
+ } else
+ ret += printf(" %-*s |", PRINT_KWORK_NAME_WIDTH, "");
+
+ /*
+ * cpu
+ */
+ ret += printf(" %0*d |", PRINT_CPU_WIDTH, work->cpu);
+
+ /*
+ * total runtime
+ */
+ if (kwork->report == KWORK_REPORT_RUNTIME)
+ ret += printf(" %*.*f ms |",
+ PRINT_RUNTIME_WIDTH, RPINT_DECIMAL_WIDTH,
+ (double)work->total_runtime / NSEC_PER_MSEC);
+
+ /*
+ * count
+ */
+ ret += printf(" %*" PRIu64 " |", PRINT_COUNT_WIDTH, work->nr_atoms);
+
+ /*
+ * max runtime, max runtime start, max runtime end
+ */
+ if (kwork->report == KWORK_REPORT_RUNTIME) {
+
+ timestamp__scnprintf_usec(work->max_runtime_start,
+ max_runtime_start,
+ sizeof(max_runtime_start));
+ timestamp__scnprintf_usec(work->max_runtime_end,
+ max_runtime_end,
+ sizeof(max_runtime_end));
+ ret += printf(" %*.*f ms | %*s s | %*s s |",
+ PRINT_RUNTIME_WIDTH, RPINT_DECIMAL_WIDTH,
+ (double)work->max_runtime / NSEC_PER_MSEC,
+ PRINT_TIMESTAMP_WIDTH, max_runtime_start,
+ PRINT_TIMESTAMP_WIDTH, max_runtime_end);
+ }
+
+ printf("\n");
+}
+
+static int report_print_header(struct perf_kwork *kwork)
+{
+ int ret;
+
+ printf("\n ");
+ ret = printf(" %-*s | %-*s |",
+ PRINT_KWORK_NAME_WIDTH, "Kwork Name",
+ PRINT_CPU_WIDTH, "Cpu");
+
+ if (kwork->report == KWORK_REPORT_RUNTIME)
+ ret += printf(" %-*s |",
+ PRINT_RUNTIME_HEADER_WIDTH, "Total Runtime");
+
+ ret += printf(" %-*s |", PRINT_COUNT_WIDTH, "Count");
+
+ if (kwork->report == KWORK_REPORT_RUNTIME)
+ ret += printf(" %-*s | %-*s | %-*s |",
+ PRINT_RUNTIME_HEADER_WIDTH, "Max runtime",
+ PRINT_TIMESTAMP_HEADER_WIDTH, "Max runtime start",
+ PRINT_TIMESTAMP_HEADER_WIDTH, "Max runtime end");
+
+ printf("\n");
+ print_separator(ret);
+ return ret;
+}
+
+static void print_summary(struct perf_kwork *kwork)
+{
+ u64 time = kwork->timeend - kwork->timestart;
+
+ printf(" Total count : %9" PRIu64 "\n", kwork->all_count);
+ printf(" Total runtime (msec) : %9.3f (%.3f%% load average)\n",
+ (double)kwork->all_runtime / NSEC_PER_MSEC,
+ time == 0 ? 0 : (double)kwork->all_runtime / time);
+ printf(" Total time span (msec) : %9.3f\n",
+ (double)time / NSEC_PER_MSEC);
+}
+
+static unsigned long long nr_list_entry(struct list_head *head)
+{
+ struct list_head *pos;
+ unsigned long long n = 0;
+
+ list_for_each(pos, head)
+ n++;
+
+ return n;
+}
+
+static void print_skipped_events(struct perf_kwork *kwork)
+{
+ int i;
+ const char *const kwork_event_str[] = {
+ [KWORK_TRACE_ENTRY] = "entry",
+ [KWORK_TRACE_EXIT] = "exit",
+ };
+
+ if ((kwork->nr_skipped_events[KWORK_TRACE_MAX] != 0) &&
+ (kwork->nr_events != 0)) {
+ printf(" INFO: %.3f%% skipped events (%" PRIu64 " including ",
+ (double)kwork->nr_skipped_events[KWORK_TRACE_MAX] /
+ (double)kwork->nr_events * 100.0,
+ kwork->nr_skipped_events[KWORK_TRACE_MAX]);
+
+ for (i = 0; i < KWORK_TRACE_MAX; i++)
+ printf("%" PRIu64 " %s%s",
+ kwork->nr_skipped_events[i],
+ kwork_event_str[i],
+ (i == KWORK_TRACE_MAX - 1) ? ")\n" : ", ");
+ }
+
+ if (verbose > 0)
+ printf(" INFO: use %lld atom pages\n",
+ nr_list_entry(&kwork->atom_page_list));
+}
+
+static void print_bad_events(struct perf_kwork *kwork)
+{
+ if ((kwork->nr_lost_events != 0) && (kwork->nr_events != 0))
+ printf(" INFO: %.3f%% lost events (%ld out of %ld, in %ld chunks)\n",
+ (double)kwork->nr_lost_events /
+ (double)kwork->nr_events * 100.0,
+ kwork->nr_lost_events, kwork->nr_events,
+ kwork->nr_lost_chunks);
+}
+
+static void work_sort(struct perf_kwork *kwork, struct kwork_class *class)
+{
+ struct rb_node *node;
+ struct kwork_work *data;
+ struct rb_root_cached *root = &class->work_root;
+
+ pr_debug("Sorting %s ...\n", class->name);
+ for (;;) {
+ node = rb_first_cached(root);
+ if (!node)
+ break;
+
+ rb_erase_cached(node, root);
+ data = rb_entry(node, struct kwork_work, node);
+ work_insert(&kwork->sorted_work_root,
+ data, &kwork->sort_list);
+ }
+}
+
+static void perf_kwork__sort(struct perf_kwork *kwork)
+{
+ struct kwork_class *class;
+
+ list_for_each_entry(class, &kwork->class_list, list)
+ work_sort(kwork, class);
+}
+
+static int perf_kwork__check_config(struct perf_kwork *kwork,
+ struct perf_session *session)
+{
+ int ret;
+ struct kwork_class *class;
+
+ static struct trace_kwork_handler report_ops = {
+ .entry_event = report_entry_event,
+ .exit_event = report_exit_event,
+ };
+
+ switch (kwork->report) {
+ case KWORK_REPORT_RUNTIME:
+ kwork->tp_handler = &report_ops;
+ break;
+ default:
+ pr_debug("Invalid report type %d\n", kwork->report);
+ return -1;
+ }
+
+ list_for_each_entry(class, &kwork->class_list, list)
+ if ((class->class_init != NULL) &&
+ (class->class_init(class, session) != 0))
+ return -1;
+
+ if (kwork->cpu_list != NULL) {
+ ret = perf_session__cpu_bitmap(session,
+ kwork->cpu_list,
+ kwork->cpu_bitmap);
+ if (ret < 0) {
+ pr_err("Invalid cpu bitmap\n");
+ return -1;
+ }
+ }
+
+ if (kwork->time_str != NULL) {
+ ret = perf_time__parse_str(&kwork->ptime, kwork->time_str);
+ if (ret != 0) {
+ pr_err("Invalid time span\n");
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
+static int perf_kwork__read_events(struct perf_kwork *kwork)
+{
+ int ret = -1;
+ struct perf_session *session = NULL;
+
+ struct perf_data data = {
+ .path = input_name,
+ .mode = PERF_DATA_MODE_READ,
+ .force = kwork->force,
+ };
+
+ session = perf_session__new(&data, &kwork->tool);
+ if (IS_ERR(session)) {
+ pr_debug("Error creating perf session\n");
+ return PTR_ERR(session);
+ }
+
+ symbol__init(&session->header.env);
+
+ if (perf_kwork__check_config(kwork, session) != 0)
+ goto out_delete;
+
+ if (session->tevent.pevent &&
+ tep_set_function_resolver(session->tevent.pevent,
+ machine__resolve_kernel_addr,
+ &session->machines.host) < 0) {
+ pr_err("Failed to set libtraceevent function resolver\n");
+ goto out_delete;
+ }
+
+ ret = perf_session__process_events(session);
+ if (ret) {
+ pr_debug("Failed to process events, error %d\n", ret);
+ goto out_delete;
+ }
+
+ kwork->nr_events = session->evlist->stats.nr_events[0];
+ kwork->nr_lost_events = session->evlist->stats.total_lost;
+ kwork->nr_lost_chunks = session->evlist->stats.nr_events[PERF_RECORD_LOST];
+
+out_delete:
+ perf_session__delete(session);
+ return ret;
+}
+
+static void process_skipped_events(struct perf_kwork *kwork,
+ struct kwork_work *work)
+{
+ int i;
+ unsigned long long count;
+
+ for (i = 0; i < KWORK_TRACE_MAX; i++) {
+ count = nr_list_entry(&work->atom_list[i]);
+ kwork->nr_skipped_events[i] += count;
+ kwork->nr_skipped_events[KWORK_TRACE_MAX] += count;
+ }
+}
+
+static int perf_kwork__report(struct perf_kwork *kwork)
+{
+ int ret;
+ struct rb_node *next;
+ struct kwork_work *work;
+
+ ret = perf_kwork__read_events(kwork);
+ if (ret != 0)
+ return -1;
+
+ perf_kwork__sort(kwork);
+
+ setup_pager();
+
+ ret = report_print_header(kwork);
+ next = rb_first_cached(&kwork->sorted_work_root);
+ while (next) {
+ work = rb_entry(next, struct kwork_work, node);
+ process_skipped_events(kwork, work);
+
+ if (work->nr_atoms != 0) {
+ report_print_work(kwork, work);
+ if (kwork->summary) {
+ kwork->all_runtime += work->total_runtime;
+ kwork->all_count += work->nr_atoms;
+ }
+ }
+ next = rb_next(next);
+ }
+ print_separator(ret);
+
+ if (kwork->summary) {
+ print_summary(kwork);
+ print_separator(ret);
+ }
+
+ print_bad_events(kwork);
+ print_skipped_events(kwork);
+ printf("\n");
+
+ return 0;
+}
+
+typedef int (*tracepoint_handler)(struct perf_tool *tool,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine);
+
+static int perf_kwork__process_tracepoint_sample(struct perf_tool *tool,
+ union perf_event *event __maybe_unused,
+ struct perf_sample *sample,
+ struct evsel *evsel,
+ struct machine *machine)
+{
+ int err = 0;
+
+ if (evsel->handler != NULL) {
+ tracepoint_handler f = evsel->handler;
+
+ err = f(tool, evsel, sample, machine);
+ }
+
+ return err;
+}
+
static void setup_event_list(struct perf_kwork *kwork,
const struct option *options,
const char * const usage_msg[])
@@ -161,11 +960,37 @@ static int perf_kwork__record(struct perf_kwork *kwork,
int cmd_kwork(int argc, const char **argv)
{
static struct perf_kwork kwork = {
+ .tool = {
+ .mmap = perf_event__process_mmap,
+ .mmap2 = perf_event__process_mmap2,
+ .sample = perf_kwork__process_tracepoint_sample,
+ },
+
.class_list = LIST_HEAD_INIT(kwork.class_list),
+ .atom_page_list = LIST_HEAD_INIT(kwork.atom_page_list),
+ .sort_list = LIST_HEAD_INIT(kwork.sort_list),
+ .cmp_id = LIST_HEAD_INIT(kwork.cmp_id),
+ .sorted_work_root = RB_ROOT_CACHED,
+ .tp_handler = NULL,

+ .profile_name = NULL,
+ .cpu_list = NULL,
+ .time_str = NULL,
.force = false,
.event_list_str = NULL,
+ .summary = false,
+ .sort_order = NULL,
+
+ .timestart = 0,
+ .timeend = 0,
+ .nr_events = 0,
+ .nr_lost_chunks = 0,
+ .nr_lost_events = 0,
+ .all_runtime = 0,
+ .all_count = 0,
+ .nr_skipped_events = { 0 },
};
+ static const char default_report_sort_order[] = "runtime, max, count";

const struct option kwork_options[] = {
OPT_INCR('v', "verbose", &verbose,
@@ -177,13 +1002,32 @@ int cmd_kwork(int argc, const char **argv)
OPT_BOOLEAN('f', "force", &kwork.force, "don't complain, do it"),
OPT_END()
};
+ const struct option report_options[] = {
+ OPT_STRING('s', "sort", &kwork.sort_order, "key[,key2...]",
+ "sort by key(s): runtime, max, count"),
+ OPT_STRING('C', "cpu", &kwork.cpu_list, "cpu",
+ "list of cpus to profile"),
+ OPT_STRING('n', "name", &kwork.profile_name, "name",
+ "event name to profile"),
+ OPT_STRING(0, "time", &kwork.time_str, "str",
+ "Time span for analysis (start,stop)"),
+ OPT_STRING('i', "input", &input_name, "file",
+ "input file name"),
+ OPT_BOOLEAN('S', "with-summary", &kwork.summary,
+ "Show summary with statistics"),
+ OPT_PARENT(kwork_options)
+ };

const char *kwork_usage[] = {
NULL,
NULL
};
+ const char * const report_usage[] = {
+ "perf kwork report [<options>]",
+ NULL
+ };
const char *const kwork_subcommands[] = {
- "record", NULL
+ "record", "report", NULL
};

argc = parse_options_subcommand(argc, argv, kwork_options,
@@ -193,10 +1037,21 @@ int cmd_kwork(int argc, const char **argv)
usage_with_options(kwork_usage, kwork_options);

setup_event_list(&kwork, kwork_options, kwork_usage);
+ sort_dimension__add(&kwork, "id", &kwork.cmp_id);

if (strlen(argv[0]) > 2 && strstarts("record", argv[0]))
return perf_kwork__record(&kwork, argc, argv);
- else
+ else if (strlen(argv[0]) > 2 && strstarts("report", argv[0])) {
+ kwork.sort_order = default_report_sort_order;
+ if (argc > 1) {
+ argc = parse_options(argc, argv, report_options, report_usage, 0);
+ if (argc)
+ usage_with_options(report_usage, report_options);
+ }
+ kwork.report = KWORK_REPORT_RUNTIME;
+ setup_sorting(&kwork, report_options, report_usage);
+ return perf_kwork__report(&kwork);
+ } else
usage_with_options(kwork_usage, kwork_options);

return 0;
diff --git a/tools/perf/util/kwork.h b/tools/perf/util/kwork.h
index 03203c4deb34..0a86bf47c74d 100644
--- a/tools/perf/util/kwork.h
+++ b/tools/perf/util/kwork.h
@@ -19,6 +19,105 @@ enum kwork_class_type {
KWORK_CLASS_MAX,
};

+enum kwork_report_type {
+ KWORK_REPORT_RUNTIME,
+};
+
+enum kwork_trace_type {
+ KWORK_TRACE_ENTRY,
+ KWORK_TRACE_EXIT,
+ KWORK_TRACE_MAX,
+};
+
+/*
+ * data structure:
+ *
+ * +==================+ +============+ +======================+
+ * | class | | work | | atom |
+ * +==================+ +============+ +======================+
+ * +------------+ | +-----+ | | +------+ | | +-------+ +-----+ |
+ * | perf_kwork | +-> | irq | --------|+-> | eth0 | --+-> | raise | - | ... | --+ +-----------+
+ * +-----+------+ || +-----+ ||| +------+ ||| +-------+ +-----+ | | | |
+ * | || ||| ||| | +-> | atom_page |
+ * | || ||| ||| +-------+ +-----+ | | |
+ * | class_list ||| |+-> | entry | - | ... | ----> | |
+ * | || ||| ||| +-------+ +-----+ | | |
+ * | || ||| ||| | +-> | |
+ * | || ||| ||| +-------+ +-----+ | | | |
+ * | || ||| |+-> | exit | - | ... | --+ +-----+-----+
+ * | || ||| | | +-------+ +-----+ | |
+ * | || ||| | | | |
+ * | || ||| +-----+ | | | |
+ * | || |+-> | ... | | | | |
+ * | || | | +-----+ | | | |
+ * | || | | | | | |
+ * | || +---------+ | | +-----+ | | +-------+ +-----+ | |
+ * | +-> | softirq | -------> | RCU | ---+-> | raise | - | ... | --+ +-----+-----+
+ * | || +---------+ | | +-----+ ||| +-------+ +-----+ | | | |
+ * | || | | ||| | +-> | atom_page |
+ * | || | | ||| +-------+ +-----+ | | |
+ * | || | | |+-> | entry | - | ... | ----> | |
+ * | || | | ||| +-------+ +-----+ | | |
+ * | || | | ||| | +-> | |
+ * | || | | ||| +-------+ +-----+ | | | |
+ * | || | | |+-> | exit | - | ... | --+ +-----+-----+
+ * | || | | | | +-------+ +-----+ | |
+ * | || | | | | | |
+ * | || +-----------+ | | +-----+ | | | |
+ * | +-> | workqueue | -----> | ... | | | | |
+ * | | +-----------+ | | +-----+ | | | |
+ * | +==================+ +============+ +======================+ |
+ * | |
+ * +----> atom_page_list ---------------------------------------------------------+
+ *
+ */
+
+struct kwork_atom {
+ struct list_head list;
+ u64 time;
+ struct kwork_atom *prev;
+
+ void *page_addr;
+ unsigned long bit_inpage;
+};
+
+#define NR_ATOM_PER_PAGE 128
+struct kwork_atom_page {
+ struct list_head list;
+ struct kwork_atom atoms[NR_ATOM_PER_PAGE];
+ DECLARE_BITMAP(bitmap, NR_ATOM_PER_PAGE);
+};
+
+struct kwork_class;
+struct kwork_work {
+ /*
+ * class field
+ */
+ struct rb_node node;
+ struct kwork_class *class;
+
+ /*
+ * work field
+ */
+ u64 id;
+ int cpu;
+ char *name;
+
+ /*
+ * atom field
+ */
+ u64 nr_atoms;
+ struct list_head atom_list[KWORK_TRACE_MAX];
+
+ /*
+ * runtime report
+ */
+ u64 max_runtime;
+ u64 max_runtime_start;
+ u64 max_runtime_end;
+ u64 total_runtime;
+};
+
struct kwork_class {
struct list_head list;
const char *name;
@@ -26,19 +125,81 @@ struct kwork_class {

unsigned int nr_tracepoints;
const struct evsel_str_handler *tp_handlers;
+
+ struct rb_root_cached work_root;
+
+ int (*class_init)(struct kwork_class *class,
+ struct perf_session *session);
+
+ void (*work_init)(struct kwork_class *class,
+ struct kwork_work *work,
+ struct evsel *evsel,
+ struct perf_sample *sample,
+ struct machine *machine);
+
+ void (*work_name)(struct kwork_work *work,
+ char *buf, int len);
+};
+
+struct perf_kwork;
+struct trace_kwork_handler {
+ int (*entry_event)(struct perf_kwork *kwork,
+ struct kwork_class *class, struct evsel *evsel,
+ struct perf_sample *sample, struct machine *machine);
+
+ int (*exit_event)(struct perf_kwork *kwork,
+ struct kwork_class *class, struct evsel *evsel,
+ struct perf_sample *sample, struct machine *machine);
};

struct perf_kwork {
/*
* metadata
*/
+ struct perf_tool tool;
struct list_head class_list;
+ struct list_head atom_page_list;
+ struct list_head sort_list, cmp_id;
+ struct rb_root_cached sorted_work_root;
+ const struct trace_kwork_handler *tp_handler;
+
+ /*
+ * profile filters
+ */
+ const char *profile_name;
+
+ const char *cpu_list;
+ DECLARE_BITMAP(cpu_bitmap, MAX_NR_CPUS);
+
+ const char *time_str;
+ struct perf_time_interval ptime;

/*
* options for command
*/
bool force;
const char *event_list_str;
+ enum kwork_report_type report;
+
+ /*
+ * options for subcommand
+ */
+ bool summary;
+ const char *sort_order;
+
+ /*
+ * statistics
+ */
+ u64 timestart;
+ u64 timeend;
+
+ unsigned long nr_events;
+ unsigned long nr_lost_chunks;
+ unsigned long nr_lost_events;
+
+ u64 all_runtime;
+ u64 all_count;
+ u64 nr_skipped_events[KWORK_TRACE_MAX + 1];
};

#endif /* PERF_UTIL_KWORK_H */
--
2.30.GIT

2022-07-16 09:31:45

by Yang Jihong

[permalink] [raw]
Subject: Re: [RFC v3 00/17] perf: Add perf kwork

Ping

Regards,
Yang

On 2022/7/9 9:50, Yang Jihong wrote:
> Sometimes, we need to analyze time properties of kernel work such as irq,
> softirq, and workqueue, including delay and running time of specific interrupts.
> Currently, these events have kernel tracepoints, but perf tool does not
> directly analyze the delay of these events
>
> The perf-kwork tool is used to trace time properties of kernel work
> (such as irq, softirq, and workqueue), including runtime, latency,
> and timehist, using the infrastructure in the perf tools to allow
> tracing extra targets
>
> We also use bpf trace to collect and filter data in kernel to solve
> problem of large perf data volume and extra file system interruptions.
>
> Example usage:
>
> 1. Kwork record:
>
> # perf kwork record -- sleep 10
> [ perf record: Woken up 0 times to write data ]
> [ perf record: Captured and wrote 6.825 MB perf.data ]
>
> 2. Kwork report:
>
> # perf kwork report -S
>
> Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
> --------------------------------------------------------------------------------------------------------------------------------
> virtio0-requests:25 | 0000 | 1347.861 ms | 25049 | 1.417 ms | 121235.524083 s | 121235.525499 s |
> (s)TIMER:1 | 0005 | 151.033 ms | 2545 | 0.153 ms | 121237.454591 s | 121237.454744 s |
> (s)RCU:9 | 0005 | 117.254 ms | 2754 | 0.223 ms | 121239.461024 s | 121239.461246 s |
> (s)SCHED:7 | 0005 | 58.714 ms | 1773 | 0.075 ms | 121237.702345 s | 121237.702419 s |
> (s)RCU:9 | 0007 | 43.359 ms | 945 | 0.861 ms | 121237.702984 s | 121237.703845 s |
> (s)SCHED:7 | 0000 | 33.389 ms | 549 | 4.121 ms | 121235.521379 s | 121235.525499 s |
> (s)RCU:9 | 0002 | 21.419 ms | 484 | 0.281 ms | 121244.629001 s | 121244.629282 s |
> (w)mix_interrupt_randomness | 0000 | 21.047 ms | 391 | 1.016 ms | 121237.934008 s | 121237.935024 s |
> (s)SCHED:7 | 0007 | 19.903 ms | 570 | 0.065 ms | 121235.523360 s | 121235.523426 s |
> (s)RCU:9 | 0000 | 19.017 ms | 472 | 0.507 ms | 121244.634002 s | 121244.634510 s |
> ... <SNIP> ...
> (s)SCHED:7 | 0003 | 0.049 ms | 1 | 0.049 ms | 121240.018631 s | 121240.018680 s |
> (w)vmstat_update | 0003 | 0.046 ms | 1 | 0.046 ms | 121240.916200 s | 121240.916246 s |
> (s)RCU:9 | 0004 | 0.045 ms | 2 | 0.024 ms | 121235.522876 s | 121235.522900 s |
> (w)neigh_managed_work | 0001 | 0.044 ms | 1 | 0.044 ms | 121235.513929 s | 121235.513973 s |
> (w)vmstat_update | 0006 | 0.031 ms | 1 | 0.031 ms | 121245.673914 s | 121245.673945 s |
> (w)vmstat_update | 0004 | 0.028 ms | 1 | 0.028 ms | 121235.522743 s | 121235.522770 s |
> (w)wb_update_bandwidth_workfn | 0000 | 0.024 ms | 1 | 0.024 ms | 121244.842660 s | 121244.842683 s |
> --------------------------------------------------------------------------------------------------------------------------------
> Total count : 36071
> Total runtime (msec) : 1887.188 (0.185% load average)
> Total time span (msec) : 10185.012
> --------------------------------------------------------------------------------------------------------------------------------
>
> 3. Kwork latency:
>
> # perf kwork latency
>
> Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
> --------------------------------------------------------------------------------------------------------------------------------
> (s)TIMER:1 | 0004 | 3.903 ms | 1 | 3.903 ms | 121235.517068 s | 121235.520971 s |
> (s)RCU:9 | 0004 | 3.252 ms | 2 | 5.809 ms | 121235.517068 s | 121235.522876 s |
> (s)RCU:9 | 0001 | 3.238 ms | 2 | 5.832 ms | 121235.514494 s | 121235.520326 s |
> (w)vmstat_update | 0004 | 1.738 ms | 1 | 1.738 ms | 121235.521005 s | 121235.522743 s |
> (s)SCHED:7 | 0004 | 0.978 ms | 2 | 1.899 ms | 121235.520940 s | 121235.522840 s |
> (w)wb_update_bandwidth_workfn | 0000 | 0.834 ms | 1 | 0.834 ms | 121244.841826 s | 121244.842660 s |
> (s)RCU:9 | 0003 | 0.479 ms | 3 | 0.752 ms | 121240.027521 s | 121240.028273 s |
> (s)TIMER:1 | 0001 | 0.465 ms | 1 | 0.465 ms | 121235.513107 s | 121235.513572 s |
> (w)vmstat_update | 0000 | 0.391 ms | 5 | 1.275 ms | 121236.814938 s | 121236.816213 s |
> (w)mix_interrupt_randomness | 0002 | 0.317 ms | 5 | 0.874 ms | 121244.628034 s | 121244.628908 s |
> (w)neigh_managed_work | 0001 | 0.315 ms | 1 | 0.315 ms | 121235.513614 s | 121235.513929 s |
> ... <SNIP> ...
> (s)TIMER:1 | 0005 | 0.061 ms | 2545 | 0.506 ms | 121237.136113 s | 121237.136619 s |
> (s)SCHED:7 | 0001 | 0.052 ms | 21 | 0.437 ms | 121237.711014 s | 121237.711451 s |
> (s)SCHED:7 | 0002 | 0.045 ms | 309 | 0.145 ms | 121237.137184 s | 121237.137329 s |
> (s)SCHED:7 | 0003 | 0.045 ms | 1 | 0.045 ms | 121240.018586 s | 121240.018631 s |
> (s)SCHED:7 | 0007 | 0.044 ms | 570 | 0.173 ms | 121238.161161 s | 121238.161334 s |
> (s)BLOCK:4 | 0003 | 0.030 ms | 4 | 0.056 ms | 121240.028255 s | 121240.028311 s |
> --------------------------------------------------------------------------------------------------------------------------------
> INFO: 28.761% skipped events (27674 including 2607 raise, 25067 entry, 0 exit)
>
> 4. Kwork timehist:
>
> # perf kwork timehist
> Runtime start Runtime end Cpu Kwork name Runtime Delaytime
> (TYPE)NAME:NUM (msec) (msec)
> ----------------- ----------------- ------ ------------------------------ ---------- ----------
> 121235.513572 121235.513674 [0001] (s)TIMER:1 0.102 0.465
> 121235.513688 121235.513738 [0001] (s)SCHED:7 0.050 0.172
> 121235.513750 121235.513777 [0001] (s)RCU:9 0.027 0.643
> 121235.513929 121235.513973 [0001] (w)neigh_managed_work 0.044 0.315
> 121235.520326 121235.520386 [0001] (s)RCU:9 0.060 5.832
> 121235.520672 121235.520716 [0002] (s)SCHED:7 0.044 0.048
> 121235.520729 121235.520753 [0002] (s)RCU:9 0.024 5.651
> 121235.521213 121235.521249 [0005] (s)TIMER:1 0.036 0.064
> 121235.520166 121235.521379 [0000] (s)SCHED:7 1.213 0.056
> ... <SNIP> ...
> 121235.533256 121235.533296 [0000] virtio0-requests:25 0.040
> 121235.533322 121235.533359 [0000] (s)SCHED:7 0.037 0.095
> 121235.533018 121235.533452 [0006] (s)RCU:9 0.434 0.348
> 121235.534653 121235.534698 [0000] virtio0-requests:25 0.046
> 121235.535657 121235.535702 [0000] virtio0-requests:25 0.044
> 121235.535857 121235.535916 [0005] (s)TIMER:1 0.059 0.055
> 121235.535927 121235.535947 [0005] (s)RCU:9 0.020 0.113
> 121235.536178 121235.536196 [0006] (s)RCU:9 0.018 0.410
> 121235.537406 121235.537445 [0006] (s)SCHED:7 0.039 0.049
> 121235.537457 121235.537481 [0006] (s)RCU:9 0.024 0.334
> 121235.538199 121235.538254 [0007] (s)RCU:9 0.055 0.066
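(In the timehist output, Runtime is simply "Runtime end" minus "Runtime start", e.g. 121235.513674 s - 121235.513572 s = 0.102 ms for the first (s)TIMER:1 line, and Delaytime is how long the work waited before it started running; the 0.465 ms on that line matches the (s)TIMER:1 max delay on CPU 0001 in the latency table above.)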
>
> 5. Kwork report using bpf:
>
> # perf kwork report -b
> Starting trace, Hit <Ctrl+C> to stop and report
> ^C
> Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
> --------------------------------------------------------------------------------------------------------------------------------
> (w)flush_to_ldisc | 0000 | 2.279 ms | 2 | 2.219 ms | 121293.080933 s | 121293.083152 s |
> (s)SCHED:7 | 0001 | 2.141 ms | 2 | 2.100 ms | 121293.082064 s | 121293.084164 s |
> (s)RCU:9 | 0003 | 2.137 ms | 3 | 2.046 ms | 121293.081348 s | 121293.083394 s |
> (s)TIMER:1 | 0007 | 1.882 ms | 12 | 0.249 ms | 121295.632211 s | 121295.632460 s |
> (w)e1000_watchdog | 0002 | 1.136 ms | 3 | 0.428 ms | 121294.496559 s | 121294.496987 s |
> (s)SCHED:7 | 0007 | 0.995 ms | 12 | 0.139 ms | 121295.632483 s | 121295.632621 s |
> (s)NET_RX:3 | 0002 | 0.727 ms | 5 | 0.391 ms | 121299.044624 s | 121299.045016 s |
> (s)TIMER:1 | 0002 | 0.696 ms | 5 | 0.164 ms | 121294.496172 s | 121294.496337 s |
> (s)SCHED:7 | 0002 | 0.427 ms | 6 | 0.077 ms | 121295.840321 s | 121295.840398 s |
> (s)SCHED:7 | 0000 | 0.366 ms | 3 | 0.156 ms | 121296.545389 s | 121296.545545 s |
> eth0:10 | 0002 | 0.353 ms | 5 | 0.122 ms | 121293.084796 s | 121293.084919 s |
> (w)flush_to_ldisc | 0000 | 0.298 ms | 1 | 0.298 ms | 121299.046236 s | 121299.046534 s |
> (w)mix_interrupt_randomness | 0002 | 0.215 ms | 4 | 0.077 ms | 121293.086747 s | 121293.086823 s |
> (s)RCU:9 | 0002 | 0.128 ms | 3 | 0.060 ms | 121293.087348 s | 121293.087409 s |
> (w)vmstat_shepherd | 0000 | 0.098 ms | 1 | 0.098 ms | 121293.083901 s | 121293.083999 s |
> (s)TIMER:1 | 0001 | 0.089 ms | 1 | 0.089 ms | 121293.085709 s | 121293.085798 s |
> (w)vmstat_update | 0003 | 0.071 ms | 1 | 0.071 ms | 121293.085227 s | 121293.085298 s |
> (w)wq_barrier_func | 0000 | 0.064 ms | 1 | 0.064 ms | 121293.083688 s | 121293.083752 s |
> (w)vmstat_update | 0000 | 0.041 ms | 1 | 0.041 ms | 121293.083829 s | 121293.083869 s |
> (s)RCU:9 | 0001 | 0.038 ms | 1 | 0.038 ms | 121293.085818 s | 121293.085856 s |
> (s)RCU:9 | 0007 | 0.035 ms | 1 | 0.035 ms | 121293.112322 s | 121293.112357 s |
> --------------------------------------------------------------------------------------------------------------------------------
>
> 6. Kwork latency using bpf:
>
> # perf kwork latency -b
> Starting trace, Hit <Ctrl+C> to stop and report
> ^C
> Kwork Name | Cpu | Avg delay | Count | Max delay | Max delay start | Max delay end |
> --------------------------------------------------------------------------------------------------------------------------------
> (w)vmstat_shepherd | 0000 | 2.044 ms | 2 | 2.764 ms | 121314.942758 s | 121314.945522 s |
> (w)flush_to_ldisc | 0000 | 1.008 ms | 1 | 1.008 ms | 121317.335508 s | 121317.336516 s |
> (w)vmstat_update | 0002 | 0.879 ms | 1 | 0.879 ms | 121317.024405 s | 121317.025284 s |
> (w)mix_interrupt_randomness | 0002 | 0.328 ms | 5 | 0.383 ms | 121308.832944 s | 121308.833327 s |
> (w)e1000_watchdog | 0002 | 0.304 ms | 5 | 0.368 ms | 121317.024305 s | 121317.024673 s |
> (s)RCU:9 | 0001 | 0.172 ms | 41 | 0.728 ms | 121308.308187 s | 121308.308915 s |
> (s)TIMER:1 | 0000 | 0.149 ms | 3 | 0.195 ms | 121317.334255 s | 121317.334449 s |
> (s)NET_RX:3 | 0001 | 0.143 ms | 40 | 1.213 ms | 121315.030992 s | 121315.032205 s |
> (s)RCU:9 | 0002 | 0.139 ms | 27 | 0.187 ms | 121315.077388 s | 121315.077576 s |
> (s)NET_RX:3 | 0002 | 0.130 ms | 7 | 0.283 ms | 121308.832917 s | 121308.833201 s |
> (s)SCHED:7 | 0007 | 0.123 ms | 34 | 0.191 ms | 121308.736240 s | 121308.736431 s |
> (s)TIMER:1 | 0007 | 0.116 ms | 18 | 0.145 ms | 121308.736168 s | 121308.736313 s |
> (s)RCU:9 | 0007 | 0.111 ms | 68 | 0.318 ms | 121308.736194 s | 121308.736512 s |
> (s)SCHED:7 | 0002 | 0.110 ms | 22 | 0.292 ms | 121308.832197 s | 121308.832489 s |
> (s)TIMER:1 | 0001 | 0.107 ms | 1 | 0.107 ms | 121314.948230 s | 121314.948337 s |
> (w)neigh_managed_work | 0001 | 0.103 ms | 1 | 0.103 ms | 121314.948381 s | 121314.948484 s |
> (s)RCU:9 | 0000 | 0.099 ms | 49 | 0.289 ms | 121308.520167 s | 121308.520456 s |
> (s)NET_RX:3 | 0007 | 0.096 ms | 40 | 1.227 ms | 121315.022994 s | 121315.024220 s |
> (s)RCU:9 | 0003 | 0.093 ms | 37 | 0.261 ms | 121314.950651 s | 121314.950913 s |
> (w)flush_to_ldisc | 0000 | 0.090 ms | 1 | 0.090 ms | 121317.336737 s | 121317.336827 s |
> (s)TIMER:1 | 0002 | 0.078 ms | 36 | 0.115 ms | 121310.880172 s | 121310.880288 s |
> (s)SCHED:7 | 0001 | 0.071 ms | 27 | 0.180 ms | 121314.953571 s | 121314.953751 s |
> (s)SCHED:7 | 0000 | 0.066 ms | 28 | 0.344 ms | 121317.334345 s | 121317.334689 s |
> (s)SCHED:7 | 0003 | 0.063 ms | 14 | 0.119 ms | 121314.978808 s | 121314.978927 s |
> --------------------------------------------------------------------------------------------------------------------------------
>
> 7. Kwork report with filter:
>
> # perf kwork report -b -n RCU
> Starting trace, Hit <Ctrl+C> to stop and report
> ^C
> Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
> --------------------------------------------------------------------------------------------------------------------------------
> (s)RCU:9 | 0006 | 2.266 ms | 3 | 2.158 ms | 121335.008290 s | 121335.010449 s |
> (s)RCU:9 | 0002 | 0.158 ms | 3 | 0.063 ms | 121335.011914 s | 121335.011977 s |
> (s)RCU:9 | 0007 | 0.082 ms | 1 | 0.082 ms | 121335.448378 s | 121335.448460 s |
> (s)RCU:9 | 0000 | 0.058 ms | 1 | 0.058 ms | 121335.011350 s | 121335.011408 s |
> --------------------------------------------------------------------------------------------------------------------------------
>
> ---
> Changes since v2:
> - Update commit messages.
>
> Changes since v1:
> - Add options and documentation only in the patches that actually add the functionality.
> - Replace "cluster" with "work".
> - Add workqueue symbolizing function support.
> - Replace "frequency" with "count" in report header.
> - Add bpf trace support.
>
> Yang Jihong (17):
> perf kwork: New tool
> perf kwork: Add irq kwork record support
> perf kwork: Add softirq kwork record support
> perf kwork: Add workqueue kwork record support
> tools lib: Add list_last_entry_or_null
> perf kwork: Implement perf kwork report
> perf kwork: Add irq report support
> perf kwork: Add softirq report support
> perf kwork: Add workqueue report support
> perf kwork: Implement perf kwork latency
> perf kwork: Add softirq latency support
> perf kwork: Add workqueue latency support
> perf kwork: Implement perf kwork timehist
> perf kwork: Implement bpf trace
> perf kwork: Add irq trace bpf support
> perf kwork: Add softirq trace bpf support
> perf kwork: Add workqueue trace bpf support
>
> tools/include/linux/list.h | 11 +
> tools/perf/Build | 1 +
> tools/perf/Documentation/perf-kwork.txt | 180 ++
> tools/perf/Makefile.perf | 1 +
> tools/perf/builtin-kwork.c | 1834 ++++++++++++++++++++
> tools/perf/builtin.h | 1 +
> tools/perf/command-list.txt | 1 +
> tools/perf/perf.c | 1 +
> tools/perf/util/Build | 1 +
> tools/perf/util/bpf_kwork.c | 356 ++++
> tools/perf/util/bpf_skel/kwork_trace.bpf.c | 381 ++++
> tools/perf/util/kwork.h | 257 +++
> 12 files changed, 3025 insertions(+)
> create mode 100644 tools/perf/Documentation/perf-kwork.txt
> create mode 100644 tools/perf/builtin-kwork.c
> create mode 100644 tools/perf/util/bpf_kwork.c
> create mode 100644 tools/perf/util/bpf_skel/kwork_trace.bpf.c
> create mode 100644 tools/perf/util/kwork.h
>

2022-07-17 13:39:43

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: Re: [RFC v3 00/17] perf: Add perf kwork

On Sat, Jul 16, 2022 at 05:14:28PM +0800, Yang Jihong wrote:
> Ping

I'm back from vacations, will get into this this coming week, thanks for
your work!

- Arnaldo

> Regards,
> Yang
>
> On 2022/7/9 9:50, Yang Jihong wrote:
> > ... <SNIP> ...

--

- Arnaldo

2022-07-25 21:59:12

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: Re: [RFC v3 15/17] perf kwork: Add irq trace bpf support

On Sat, Jul 09, 2022 at 09:50:31AM +0800, Yang Jihong wrote:
> Implements irq trace bpf function.
>
> Test cases:
> Trace irq without filter:
>
> # perf kwork -k irq rep -b
> Starting trace, Hit <Ctrl+C> to stop and report

That is cool, works like a charm :-) Lemme go back to testing the rest...

- Arnaldo

> ^C
> Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
> --------------------------------------------------------------------------------------------------------------------------------
> virtio0-requests:25 | 0000 | 31.026 ms | 285 | 1.493 ms | 110326.049963 s | 110326.051456 s |
> eth0:10 | 0002 | 7.875 ms | 96 | 1.429 ms | 110313.916835 s | 110313.918264 s |
> ata_piix:14 | 0002 | 2.510 ms | 28 | 0.396 ms | 110331.367987 s | 110331.368383 s |
> --------------------------------------------------------------------------------------------------------------------------------
>
> Trace irq with cpu filter:
>
> # perf kwork -k irq rep -b -C 0
> Starting trace, Hit <Ctrl+C> to stop and report
> ^C
> Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
> --------------------------------------------------------------------------------------------------------------------------------
> virtio0-requests:25 | 0000 | 34.288 ms | 282 | 2.061 ms | 110358.078968 s | 110358.081029 s |
> --------------------------------------------------------------------------------------------------------------------------------
>
> Trace irq with name filter:
>
> # perf kwork -k irq rep -b -n eth0
> Starting trace, Hit <Ctrl+C> to stop and report
> ^C
> Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
> --------------------------------------------------------------------------------------------------------------------------------
> eth0:10 | 0002 | 2.184 ms | 21 | 0.572 ms | 110386.541699 s | 110386.542271 s |
> --------------------------------------------------------------------------------------------------------------------------------
>
> Trace irq with summary:
>
> # perf kwork -k irq rep -b -S
> Starting trace, Hit <Ctrl+C> to stop and report
> ^C
> Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
> --------------------------------------------------------------------------------------------------------------------------------
> virtio0-requests:25 | 0000 | 42.923 ms | 285 | 1.181 ms | 110418.128867 s | 110418.130049 s |
> eth0:10 | 0002 | 2.085 ms | 20 | 0.668 ms | 110416.002935 s | 110416.003603 s |
> ata_piix:14 | 0002 | 0.970 ms | 4 | 0.656 ms | 110424.034482 s | 110424.035138 s |
> --------------------------------------------------------------------------------------------------------------------------------
> Total count : 309
> Total runtime (msec) : 45.977 (0.003% load average)
> Total time span (msec) : 17017.655
> --------------------------------------------------------------------------------------------------------------------------------
>
> Signed-off-by: Yang Jihong <[email protected]>
> ---
> tools/perf/util/bpf_kwork.c | 40 +++++-
> tools/perf/util/bpf_skel/kwork_trace.bpf.c | 150 +++++++++++++++++++++
> 2 files changed, 189 insertions(+), 1 deletion(-)
>
> diff --git a/tools/perf/util/bpf_kwork.c b/tools/perf/util/bpf_kwork.c
> index 433bfadd3af1..08252fcda1a4 100644
> --- a/tools/perf/util/bpf_kwork.c
> +++ b/tools/perf/util/bpf_kwork.c
> @@ -62,9 +62,47 @@ void perf_kwork__trace_finish(void)
> skel->bss->enabled = 0;
> }
>
> +static int get_work_name_from_map(struct work_key *key, char **ret_name)
> +{
> + char name[MAX_KWORKNAME] = { 0 };
> + int fd = bpf_map__fd(skel->maps.perf_kwork_names);
> +
> + *ret_name = NULL;
> +
> + if (fd < 0) {
> + pr_debug("Invalid names map fd\n");
> + return 0;
> + }
> +
> + if ((bpf_map_lookup_elem(fd, key, name) == 0) && (strlen(name) != 0)) {
> + *ret_name = strdup(name);
> + if (*ret_name == NULL) {
> + pr_err("Failed to copy work name\n");
> + return -1;
> + }
> + }
> +
> + return 0;
> +}
> +
> +static void irq_load_prepare(struct perf_kwork *kwork)
> +{
> + if (kwork->report == KWORK_REPORT_RUNTIME) {
> + bpf_program__set_autoload(
> + skel->progs.report_irq_handler_entry, true);
> + bpf_program__set_autoload(
> + skel->progs.report_irq_handler_exit, true);
> + }
> +}
> +
> +static struct kwork_class_bpf kwork_irq_bpf = {
> + .load_prepare = irq_load_prepare,
> + .get_work_name = get_work_name_from_map,
> +};
> +
> static struct kwork_class_bpf *
> kwork_class_bpf_supported_list[KWORK_CLASS_MAX] = {
> - [KWORK_CLASS_IRQ] = NULL,
> + [KWORK_CLASS_IRQ] = &kwork_irq_bpf,
> [KWORK_CLASS_SOFTIRQ] = NULL,
> [KWORK_CLASS_WORKQUEUE] = NULL,
> };
> diff --git a/tools/perf/util/bpf_skel/kwork_trace.bpf.c b/tools/perf/util/bpf_skel/kwork_trace.bpf.c
> index 36112be831e3..1925407d1c16 100644
> --- a/tools/perf/util/bpf_skel/kwork_trace.bpf.c
> +++ b/tools/perf/util/bpf_skel/kwork_trace.bpf.c
> @@ -71,4 +71,154 @@ int enabled = 0;
> int has_cpu_filter = 0;
> int has_name_filter = 0;
>
> +static __always_inline int local_strncmp(const char *s1,
> + unsigned int sz, const char *s2)
> +{
> + int ret = 0;
> + unsigned int i;
> +
> + for (i = 0; i < sz; i++) {
> + ret = (unsigned char)s1[i] - (unsigned char)s2[i];
> + if (ret || !s1[i] || !s2[i])
> + break;
> + }
> +
> + return ret;
> +}
> +
> +static __always_inline int trace_event_match(struct work_key *key, char *name)
> +{
> + __u8 *cpu_val;
> + char *name_val;
> + __u32 zero = 0;
> + __u32 cpu = bpf_get_smp_processor_id();
> +
> + if (!enabled)
> + return 0;
> +
> + if (has_cpu_filter) {
> + cpu_val = bpf_map_lookup_elem(&perf_kwork_cpu_filter, &cpu);
> + if (!cpu_val)
> + return 0;
> + }
> +
> + if (has_name_filter && (name != NULL)) {
> + name_val = bpf_map_lookup_elem(&perf_kwork_name_filter, &zero);
> + if (name_val &&
> + (local_strncmp(name_val, MAX_KWORKNAME, name) != 0)) {
> + return 0;
> + }
> + }
> +
> + return 1;
> +}
> +
> +static __always_inline void do_update_time(void *map, struct work_key *key,
> + __u64 time_start, __u64 time_end)
> +{
> + struct report_data zero, *data;
> + __s64 delta = time_end - time_start;
> +
> + if (delta < 0)
> + return;
> +
> + data = bpf_map_lookup_elem(map, key);
> + if (!data) {
> + __builtin_memset(&zero, 0, sizeof(zero));
> + bpf_map_update_elem(map, key, &zero, BPF_NOEXIST);
> + data = bpf_map_lookup_elem(map, key);
> + if (!data)
> + return;
> + }
> +
> + if ((delta > data->max_time) ||
> + (data->max_time == 0)) {
> + data->max_time = delta;
> + data->max_time_start = time_start;
> + data->max_time_end = time_end;
> + }
> +
> + data->total_time += delta;
> + data->nr++;
> +}
> +
> +static __always_inline void do_update_timestart(void *map, struct work_key *key)
> +{
> + __u64 ts = bpf_ktime_get_ns();
> +
> + bpf_map_update_elem(map, key, &ts, BPF_ANY);
> +}
> +
> +static __always_inline void do_update_timeend(void *report_map, void *time_map,
> + struct work_key *key)
> +{
> + __u64 *time = bpf_map_lookup_elem(time_map, key);
> +
> + if (time) {
> + bpf_map_delete_elem(time_map, key);
> + do_update_time(report_map, key, *time, bpf_ktime_get_ns());
> + }
> +}
> +
> +static __always_inline void do_update_name(void *map,
> + struct work_key *key, char *name)
> +{
> + if (!bpf_map_lookup_elem(map, key))
> + bpf_map_update_elem(map, key, name, BPF_ANY);
> +}
> +
> +static __always_inline int update_timestart_and_name(void *time_map,
> + void *names_map,
> + struct work_key *key,
> + char *name)
> +{
> + if (!trace_event_match(key, name))
> + return 0;
> +
> + do_update_timestart(time_map, key);
> + do_update_name(names_map, key, name);
> +
> + return 0;
> +}
> +
> +static __always_inline int update_timeend(void *report_map,
> + void *time_map, struct work_key *key)
> +{
> + if (!trace_event_match(key, NULL))
> + return 0;
> +
> + do_update_timeend(report_map, time_map, key);
> +
> + return 0;
> +}
> +
> +SEC("tracepoint/irq/irq_handler_entry")
> +int report_irq_handler_entry(struct trace_event_raw_irq_handler_entry *ctx)
> +{
> + char name[MAX_KWORKNAME];
> + struct work_key key = {
> + .type = KWORK_CLASS_IRQ,
> + .cpu = bpf_get_smp_processor_id(),
> + .id = (__u64)ctx->irq,
> + };
> + void *name_addr = (void *)ctx + (ctx->__data_loc_name & 0xffff);
> +
> + bpf_probe_read_kernel_str(name, sizeof(name), name_addr);
> +
> + return update_timestart_and_name(&perf_kwork_time,
> + &perf_kwork_names, &key, name);
> +}
> +
> +SEC("tracepoint/irq/irq_handler_exit")
> +int report_irq_handler_exit(struct trace_event_raw_irq_handler_exit *ctx)
> +{
> + struct work_key key = {
> + .type = KWORK_CLASS_IRQ,
> + .cpu = bpf_get_smp_processor_id(),
> + .id = (__u64)ctx->irq,
> + };
> +
> + return update_timeend(&perf_kwork_report, &perf_kwork_time, &key);
> +}
> +
> char LICENSE[] SEC("license") = "Dual BSD/GPL";
> --
> 2.30.GIT
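The entry/exit handlers above accumulate per-work totals in the perf_kwork_report BPF map, so after Ctrl+C the perf side essentially only has to walk that map to print the table. Below is a minimal stand-alone sketch of such a walk; the struct layouts are assumptions inferred from the fields used in kwork_trace.bpf.c, and the real code in tools/perf/util/bpf_kwork.c is organized differently.

#include <stdio.h>
#include <bpf/bpf.h>
#include <linux/types.h>

/* Assumed layouts, following the fields used in kwork_trace.bpf.c */
struct work_key {
	__u32 type;
	__u32 cpu;
	__u64 id;
};

struct report_data {
	__u64 nr;
	__u64 total_time;
	__u64 max_time;
	__u64 max_time_start;
	__u64 max_time_end;
};

/* fd would come from bpf_map__fd(skel->maps.perf_kwork_report) */
void report_dump(int fd)
{
	struct work_key key, next;
	struct report_data data;
	void *prev = NULL;

	/* bpf_map_get_next_key(fd, NULL, ...) returns the first key */
	while (bpf_map_get_next_key(fd, prev, &next) == 0) {
		if (bpf_map_lookup_elem(fd, &next, &data) == 0)
			printf("cpu=%04u id=%llu count=%llu total=%llu ns max=%llu ns\n",
			       next.cpu,
			       (unsigned long long)next.id,
			       (unsigned long long)data.nr,
			       (unsigned long long)data.total_time,
			       (unsigned long long)data.max_time);
		key = next;
		prev = &key;
	}
}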

--

- Arnaldo

2022-07-25 22:20:34

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: Re: [RFC v3 00/17] perf: Add perf kwork (using BPF skels)

On Sat, Jul 09, 2022 at 09:50:16AM +0800, Yang Jihong wrote:
> Sometimes, we need to analyze time properties of kernel work such as irq,
> softirq, and workqueue, including delay and running time of specific interrupts.
> Currently, these events have kernel tracepoints, but perf tool does not
> directly analyze the delay of these events
>
> The perf-kwork tool is used to trace time properties of kernel work
> (such as irq, softirq, and workqueue), including runtime, latency,
> and timehist, using the infrastructure in the perf tools to allow
> tracing extra targets
>
> We also use bpf trace to collect and filter data in kernel to solve
> problem of large perf data volume and extra file system interruptions.

Pushed out to tmp.perf/core, will continue reviewing and testing then
move to perf/core, thanks for the great work.

It's fantastic how the bpf skel infra is working well with tools/perf,
really great.

- Arnaldo

> ... <SNIP> ...

--

- Arnaldo

2022-07-26 17:40:54

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: Re: [RFC v3 01/17] perf kwork: New tool

On Sat, Jul 09, 2022 at 09:50:17AM +0800, Yang Jihong wrote:
> The perf-kwork tool is used to trace time properties of kernel work
> (such as irq, softirq, and workqueue), including runtime, latency,
> and timehist, using the infrastructure in the perf tools to allow
> tracing extra targets.
>
> This is the first commit to reuse the perf record framework code to
> implement a simple record function; kwork events are not supported yet.

So, since I have to fix some issues, I'm adding small stylistic changes,
starting with this:


- multiline if/for blocks need {}

- remove needless spaces between variable declaration + initialization.

- Arnaldo

diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
index f3552c56ede3c501..bfa5c53f1273c631 100644
--- a/tools/perf/builtin-kwork.c
+++ b/tools/perf/builtin-kwork.c
@@ -49,9 +49,10 @@ static void setup_event_list(struct perf_kwork *kwork,
break;
}
}
- if (i == KWORK_CLASS_MAX)
+ if (i == KWORK_CLASS_MAX) {
usage_with_options_msg(usage_msg, options,
"Unknown --event key: `%s'", tok);
+ }
}
free(str);

@@ -59,10 +60,12 @@ static void setup_event_list(struct perf_kwork *kwork,
/*
* config all kwork events if not specified
*/
- if (list_empty(&kwork->class_list))
- for (i = 0; i < KWORK_CLASS_MAX; i++)
+ if (list_empty(&kwork->class_list)) {
+ for (i = 0; i < KWORK_CLASS_MAX; i++) {
list_add_tail(&kwork_class_supported_list[i]->list,
&kwork->class_list);
+ }
+ }

pr_debug("Config event list:");
list_for_each_entry(class, &kwork->class_list, list)
@@ -125,7 +128,6 @@ int cmd_kwork(int argc, const char **argv)
.force = false,
.event_list_str = NULL,
};
-
const struct option kwork_options[] = {
OPT_INCR('v', "verbose", &verbose,
"be more verbose (show symbol address, etc)"),

2022-07-26 17:43:16

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: Re: [RFC v3 06/17] perf kwork: Implement perf kwork report

On Sat, Jul 09, 2022 at 09:50:22AM +0800, Yang Jihong wrote:
> +
> +static void report_print_work(struct perf_kwork *kwork,
> + struct kwork_work *work)
> +{
> + int ret = 0;
> + char kwork_name[PRINT_KWORK_NAME_WIDTH];
> + char max_runtime_start[32], max_runtime_end[32];

Committer notes:

- Add some {} for multiline for/if blocks

- Return the calculated number of printed bytes in report_print_work,
otherwise some compilers will complain that the variable isn't used, e.g.:

2 92.64 almalinux:9 : FAIL clang version 13.0.1 (Red Hat 13.0.1-1.el9)
builtin-kwork.c:1061:6: error: variable 'ret' set but not used [-Werror,-Wunused-but-set-variable]
int ret = 0;
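As a stand-alone illustration of that warning and the fix pattern (this toy is not the perf code): once the accumulated byte count is returned and the caller consumes it, -Wunused-but-set-variable has nothing left to flag.

#include <stdio.h>

/* Returning the accumulated printf() byte count means 'ret' is used. */
int print_row(const char *name, int cpu)
{
	int ret = 0;

	ret += printf(" %-30s |", name);
	ret += printf(" %04d |", cpu);
	ret += printf("\n");

	return ret;
}

int main(void)
{
	return print_row("(s)TIMER:1", 5) > 0 ? 0 : 1;
}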


> +
> + printf(" ");
> +
> + /*
> + * kwork name
> + */
> + if (work->class && work->class->work_name) {
> + work->class->work_name(work, kwork_name,
> + PRINT_KWORK_NAME_WIDTH);
> + ret += printf(" %-*s |", PRINT_KWORK_NAME_WIDTH, kwork_name);
> + } else
> + ret += printf(" %-*s |", PRINT_KWORK_NAME_WIDTH, "");
> +
> + /*
> + * cpu
> + */
> + ret += printf(" %0*d |", PRINT_CPU_WIDTH, work->cpu);
> +
> + /*
> + * total runtime
> + */
> + if (kwork->report == KWORK_REPORT_RUNTIME)
> + ret += printf(" %*.*f ms |",
> + PRINT_RUNTIME_WIDTH, RPINT_DECIMAL_WIDTH,
> + (double)work->total_runtime / NSEC_PER_MSEC);
> +
> + /*
> + * count
> + */
> + ret += printf(" %*" PRIu64 " |", PRINT_COUNT_WIDTH, work->nr_atoms);
> +
> + /*
> + * max runtime, max runtime start, max runtime end
> + */
> + if (kwork->report == KWORK_REPORT_RUNTIME) {
> +
> + timestamp__scnprintf_usec(work->max_runtime_start,
> + max_runtime_start,
> + sizeof(max_runtime_start));
> + timestamp__scnprintf_usec(work->max_runtime_end,
> + max_runtime_end,
> + sizeof(max_runtime_end));
> + ret += printf(" %*.*f ms | %*s s | %*s s |",
> + PRINT_RUNTIME_WIDTH, RPINT_DECIMAL_WIDTH,
> + (double)work->max_runtime / NSEC_PER_MSEC,
> + PRINT_TIMESTAMP_WIDTH, max_runtime_start,
> + PRINT_TIMESTAMP_WIDTH, max_runtime_end);
> + }
> +
> + printf("\n");
> +}
> +
> +static int report_print_header(struct perf_kwork *kwork)
> +{
> + int ret;
> +
> + printf("\n ");
> + ret = printf(" %-*s | %-*s |",
> + PRINT_KWORK_NAME_WIDTH, "Kwork Name",
> + PRINT_CPU_WIDTH, "Cpu");
> +
> + if (kwork->report == KWORK_REPORT_RUNTIME)
> + ret += printf(" %-*s |",
> + PRINT_RUNTIME_HEADER_WIDTH, "Total Runtime");
> +
> + ret += printf(" %-*s |", PRINT_COUNT_WIDTH, "Count");
> +
> + if (kwork->report == KWORK_REPORT_RUNTIME)
> + ret += printf(" %-*s | %-*s | %-*s |",
> + PRINT_RUNTIME_HEADER_WIDTH, "Max runtime",
> + PRINT_TIMESTAMP_HEADER_WIDTH, "Max runtime start",
> + PRINT_TIMESTAMP_HEADER_WIDTH, "Max runtime end");
> +
> + printf("\n");
> + print_separator(ret);
> + return ret;
> +}
> +
> +static void print_summary(struct perf_kwork *kwork)
> +{
> + u64 time = kwork->timeend - kwork->timestart;
> +
> + printf(" Total count : %9" PRIu64 "\n", kwork->all_count);
> + printf(" Total runtime (msec) : %9.3f (%.3f%% load average)\n",
> + (double)kwork->all_runtime / NSEC_PER_MSEC,
> + time == 0 ? 0 : (double)kwork->all_runtime / time);
> + printf(" Total time span (msec) : %9.3f\n",
> + (double)time / NSEC_PER_MSEC);
> +}
> +
> +static unsigned long long nr_list_entry(struct list_head *head)
> +{
> + struct list_head *pos;
> + unsigned long long n = 0;
> +
> + list_for_each(pos, head)
> + n++;
> +
> + return n;
> +}
> +
> +static void print_skipped_events(struct perf_kwork *kwork)
> +{
> + int i;
> + const char *const kwork_event_str[] = {
> + [KWORK_TRACE_ENTRY] = "entry",
> + [KWORK_TRACE_EXIT] = "exit",
> + };
> +
> + if ((kwork->nr_skipped_events[KWORK_TRACE_MAX] != 0) &&
> + (kwork->nr_events != 0)) {
> + printf(" INFO: %.3f%% skipped events (%" PRIu64 " including ",
> + (double)kwork->nr_skipped_events[KWORK_TRACE_MAX] /
> + (double)kwork->nr_events * 100.0,
> + kwork->nr_skipped_events[KWORK_TRACE_MAX]);
> +
> + for (i = 0; i < KWORK_TRACE_MAX; i++)
> + printf("%" PRIu64 " %s%s",
> + kwork->nr_skipped_events[i],
> + kwork_event_str[i],
> + (i == KWORK_TRACE_MAX - 1) ? ")\n" : ", ");
> + }
> +
> + if (verbose > 0)
> + printf(" INFO: use %lld atom pages\n",
> + nr_list_entry(&kwork->atom_page_list));
> +}
> +
> +static void print_bad_events(struct perf_kwork *kwork)
> +{
> + if ((kwork->nr_lost_events != 0) && (kwork->nr_events != 0))
> + printf(" INFO: %.3f%% lost events (%ld out of %ld, in %ld chunks)\n",
> + (double)kwork->nr_lost_events /
> + (double)kwork->nr_events * 100.0,
> + kwork->nr_lost_events, kwork->nr_events,
> + kwork->nr_lost_chunks);
> +}
> +
> +static void work_sort(struct perf_kwork *kwork, struct kwork_class *class)
> +{
> + struct rb_node *node;
> + struct kwork_work *data;
> + struct rb_root_cached *root = &class->work_root;
> +
> + pr_debug("Sorting %s ...\n", class->name);
> + for (;;) {
> + node = rb_first_cached(root);
> + if (!node)
> + break;
> +
> + rb_erase_cached(node, root);
> + data = rb_entry(node, struct kwork_work, node);
> + work_insert(&kwork->sorted_work_root,
> + data, &kwork->sort_list);
> + }
> +}
> +
> +static void perf_kwork__sort(struct perf_kwork *kwork)
> +{
> + struct kwork_class *class;
> +
> + list_for_each_entry(class, &kwork->class_list, list)
> + work_sort(kwork, class);
> +}
> +
> +static int perf_kwork__check_config(struct perf_kwork *kwork,
> + struct perf_session *session)
> +{
> + int ret;
> + struct kwork_class *class;
> +
> + static struct trace_kwork_handler report_ops = {
> + .entry_event = report_entry_event,
> + .exit_event = report_exit_event,
> + };
> +
> + switch (kwork->report) {
> + case KWORK_REPORT_RUNTIME:
> + kwork->tp_handler = &report_ops;
> + break;
> + default:
> + pr_debug("Invalid report type %d\n", kwork->report);
> + return -1;
> + }
> +
> + list_for_each_entry(class, &kwork->class_list, list)
> + if ((class->class_init != NULL) &&
> + (class->class_init(class, session) != 0))
> + return -1;
> +
> + if (kwork->cpu_list != NULL) {
> + ret = perf_session__cpu_bitmap(session,
> + kwork->cpu_list,
> + kwork->cpu_bitmap);
> + if (ret < 0) {
> + pr_err("Invalid cpu bitmap\n");
> + return -1;
> + }
> + }
> +
> + if (kwork->time_str != NULL) {
> + ret = perf_time__parse_str(&kwork->ptime, kwork->time_str);
> + if (ret != 0) {
> + pr_err("Invalid time span\n");
> + return -1;
> + }
> + }
> +
> + return 0;
> +}
> +
> +static int perf_kwork__read_events(struct perf_kwork *kwork)
> +{
> + int ret = -1;
> + struct perf_session *session = NULL;
> +
> + struct perf_data data = {
> + .path = input_name,
> + .mode = PERF_DATA_MODE_READ,
> + .force = kwork->force,
> + };
> +
> + session = perf_session__new(&data, &kwork->tool);
> + if (IS_ERR(session)) {
> + pr_debug("Error creating perf session\n");
> + return PTR_ERR(session);
> + }
> +
> + symbol__init(&session->header.env);
> +
> + if (perf_kwork__check_config(kwork, session) != 0)
> + goto out_delete;
> +
> + if (session->tevent.pevent &&
> + tep_set_function_resolver(session->tevent.pevent,
> + machine__resolve_kernel_addr,
> + &session->machines.host) < 0) {
> + pr_err("Failed to set libtraceevent function resolver\n");
> + goto out_delete;
> + }
> +
> + ret = perf_session__process_events(session);
> + if (ret) {
> + pr_debug("Failed to process events, error %d\n", ret);
> + goto out_delete;
> + }
> +
> + kwork->nr_events = session->evlist->stats.nr_events[0];
> + kwork->nr_lost_events = session->evlist->stats.total_lost;
> + kwork->nr_lost_chunks = session->evlist->stats.nr_events[PERF_RECORD_LOST];
> +
> +out_delete:
> + perf_session__delete(session);
> + return ret;
> +}
> +
> +static void process_skipped_events(struct perf_kwork *kwork,
> + struct kwork_work *work)
> +{
> + int i;
> + unsigned long long count;
> +
> + for (i = 0; i < KWORK_TRACE_MAX; i++) {
> + count = nr_list_entry(&work->atom_list[i]);
> + kwork->nr_skipped_events[i] += count;
> + kwork->nr_skipped_events[KWORK_TRACE_MAX] += count;
> + }
> +}
> +
> +static int perf_kwork__report(struct perf_kwork *kwork)
> +{
> + int ret;
> + struct rb_node *next;
> + struct kwork_work *work;
> +
> + ret = perf_kwork__read_events(kwork);
> + if (ret != 0)
> + return -1;
> +
> + perf_kwork__sort(kwork);
> +
> + setup_pager();
> +
> + ret = report_print_header(kwork);
> + next = rb_first_cached(&kwork->sorted_work_root);
> + while (next) {
> + work = rb_entry(next, struct kwork_work, node);
> + process_skipped_events(kwork, work);
> +
> + if (work->nr_atoms != 0) {
> + report_print_work(kwork, work);
> + if (kwork->summary) {
> + kwork->all_runtime += work->total_runtime;
> + kwork->all_count += work->nr_atoms;
> + }
> + }
> + next = rb_next(next);
> + }
> + print_separator(ret);
> +
> + if (kwork->summary) {
> + print_summary(kwork);
> + print_separator(ret);
> + }
> +
> + print_bad_events(kwork);
> + print_skipped_events(kwork);
> + printf("\n");
> +
> + return 0;
> +}
> +
> +typedef int (*tracepoint_handler)(struct perf_tool *tool,
> + struct evsel *evsel,
> + struct perf_sample *sample,
> + struct machine *machine);
> +
> +static int perf_kwork__process_tracepoint_sample(struct perf_tool *tool,
> + union perf_event *event __maybe_unused,
> + struct perf_sample *sample,
> + struct evsel *evsel,
> + struct machine *machine)
> +{
> + int err = 0;
> +
> + if (evsel->handler != NULL) {
> + tracepoint_handler f = evsel->handler;
> +
> + err = f(tool, evsel, sample, machine);
> + }
> +
> + return err;
> +}
> +
> static void setup_event_list(struct perf_kwork *kwork,
> const struct option *options,
> const char * const usage_msg[])
> @@ -161,11 +960,37 @@ static int perf_kwork__record(struct perf_kwork *kwork,
> int cmd_kwork(int argc, const char **argv)
> {
> static struct perf_kwork kwork = {
> + .tool = {
> + .mmap = perf_event__process_mmap,
> + .mmap2 = perf_event__process_mmap2,
> + .sample = perf_kwork__process_tracepoint_sample,
> + },
> +
> .class_list = LIST_HEAD_INIT(kwork.class_list),
> + .atom_page_list = LIST_HEAD_INIT(kwork.atom_page_list),
> + .sort_list = LIST_HEAD_INIT(kwork.sort_list),
> + .cmp_id = LIST_HEAD_INIT(kwork.cmp_id),
> + .sorted_work_root = RB_ROOT_CACHED,
> + .tp_handler = NULL,
>
> + .profile_name = NULL,
> + .cpu_list = NULL,
> + .time_str = NULL,
> .force = false,
> .event_list_str = NULL,
> + .summary = false,
> + .sort_order = NULL,
> +
> + .timestart = 0,
> + .timeend = 0,
> + .nr_events = 0,
> + .nr_lost_chunks = 0,
> + .nr_lost_events = 0,
> + .all_runtime = 0,
> + .all_count = 0,
> + .nr_skipped_events = { 0 },
> };
> + static const char default_report_sort_order[] = "runtime, max, count";
>
> const struct option kwork_options[] = {
> OPT_INCR('v', "verbose", &verbose,
> @@ -177,13 +1002,32 @@ int cmd_kwork(int argc, const char **argv)
> OPT_BOOLEAN('f', "force", &kwork.force, "don't complain, do it"),
> OPT_END()
> };
> + const struct option report_options[] = {
> + OPT_STRING('s', "sort", &kwork.sort_order, "key[,key2...]",
> + "sort by key(s): runtime, max, count"),
> + OPT_STRING('C', "cpu", &kwork.cpu_list, "cpu",
> + "list of cpus to profile"),
> + OPT_STRING('n', "name", &kwork.profile_name, "name",
> + "event name to profile"),
> + OPT_STRING(0, "time", &kwork.time_str, "str",
> + "Time span for analysis (start,stop)"),
> + OPT_STRING('i', "input", &input_name, "file",
> + "input file name"),
> + OPT_BOOLEAN('S', "with-summary", &kwork.summary,
> + "Show summary with statistics"),
> + OPT_PARENT(kwork_options)
> + };
>
> const char *kwork_usage[] = {
> NULL,
> NULL
> };
> + const char * const report_usage[] = {
> + "perf kwork report [<options>]",
> + NULL
> + };
> const char *const kwork_subcommands[] = {
> - "record", NULL
> + "record", "report", NULL
> };
>
> argc = parse_options_subcommand(argc, argv, kwork_options,
> @@ -193,10 +1037,21 @@ int cmd_kwork(int argc, const char **argv)
> usage_with_options(kwork_usage, kwork_options);
>
> setup_event_list(&kwork, kwork_options, kwork_usage);
> + sort_dimension__add(&kwork, "id", &kwork.cmp_id);
>
> if (strlen(argv[0]) > 2 && strstarts("record", argv[0]))
> return perf_kwork__record(&kwork, argc, argv);
> - else
> + else if (strlen(argv[0]) > 2 && strstarts("report", argv[0])) {
> + kwork.sort_order = default_report_sort_order;
> + if (argc > 1) {
> + argc = parse_options(argc, argv, report_options, report_usage, 0);
> + if (argc)
> + usage_with_options(report_usage, report_options);
> + }
> + kwork.report = KWORK_REPORT_RUNTIME;
> + setup_sorting(&kwork, report_options, report_usage);
> + return perf_kwork__report(&kwork);
> + } else
> usage_with_options(kwork_usage, kwork_options);
>
> return 0;
> diff --git a/tools/perf/util/kwork.h b/tools/perf/util/kwork.h
> index 03203c4deb34..0a86bf47c74d 100644
> --- a/tools/perf/util/kwork.h
> +++ b/tools/perf/util/kwork.h
> @@ -19,6 +19,105 @@ enum kwork_class_type {
> KWORK_CLASS_MAX,
> };
>
> +enum kwork_report_type {
> + KWORK_REPORT_RUNTIME,
> +};
> +
> +enum kwork_trace_type {
> + KWORK_TRACE_ENTRY,
> + KWORK_TRACE_EXIT,
> + KWORK_TRACE_MAX,
> +};
> +
> +/*
> + * data structure:
> + *
> + * +==================+ +============+ +======================+
> + * | class | | work | | atom |
> + * +==================+ +============+ +======================+
> + * +------------+ | +-----+ | | +------+ | | +-------+ +-----+ |
> + * | perf_kwork | +-> | irq | --------|+-> | eth0 | --+-> | raise | - | ... | --+ +-----------+
> + * +-----+------+ || +-----+ ||| +------+ ||| +-------+ +-----+ | | | |
> + * | || ||| ||| | +-> | atom_page |
> + * | || ||| ||| +-------+ +-----+ | | |
> + * | class_list ||| |+-> | entry | - | ... | ----> | |
> + * | || ||| ||| +-------+ +-----+ | | |
> + * | || ||| ||| | +-> | |
> + * | || ||| ||| +-------+ +-----+ | | | |
> + * | || ||| |+-> | exit | - | ... | --+ +-----+-----+
> + * | || ||| | | +-------+ +-----+ | |
> + * | || ||| | | | |
> + * | || ||| +-----+ | | | |
> + * | || |+-> | ... | | | | |
> + * | || | | +-----+ | | | |
> + * | || | | | | | |
> + * | || +---------+ | | +-----+ | | +-------+ +-----+ | |
> + * | +-> | softirq | -------> | RCU | ---+-> | raise | - | ... | --+ +-----+-----+
> + * | || +---------+ | | +-----+ ||| +-------+ +-----+ | | | |
> + * | || | | ||| | +-> | atom_page |
> + * | || | | ||| +-------+ +-----+ | | |
> + * | || | | |+-> | entry | - | ... | ----> | |
> + * | || | | ||| +-------+ +-----+ | | |
> + * | || | | ||| | +-> | |
> + * | || | | ||| +-------+ +-----+ | | | |
> + * | || | | |+-> | exit | - | ... | --+ +-----+-----+
> + * | || | | | | +-------+ +-----+ | |
> + * | || | | | | | |
> + * | || +-----------+ | | +-----+ | | | |
> + * | +-> | workqueue | -----> | ... | | | | |
> + * | | +-----------+ | | +-----+ | | | |
> + * | +==================+ +============+ +======================+ |
> + * | |
> + * +----> atom_page_list ---------------------------------------------------------+
> + *
> + */
> +
> +struct kwork_atom {
> + struct list_head list;
> + u64 time;
> + struct kwork_atom *prev;
> +
> + void *page_addr;
> + unsigned long bit_inpage;
> +};
> +
> +#define NR_ATOM_PER_PAGE 128
> +struct kwork_atom_page {
> + struct list_head list;
> + struct kwork_atom atoms[NR_ATOM_PER_PAGE];
> + DECLARE_BITMAP(bitmap, NR_ATOM_PER_PAGE);
> +};
> +
> +struct kwork_class;
> +struct kwork_work {
> + /*
> + * class field
> + */
> + struct rb_node node;
> + struct kwork_class *class;
> +
> + /*
> + * work field
> + */
> + u64 id;
> + int cpu;
> + char *name;
> +
> + /*
> + * atom field
> + */
> + u64 nr_atoms;
> + struct list_head atom_list[KWORK_TRACE_MAX];
> +
> + /*
> + * runtime report
> + */
> + u64 max_runtime;
> + u64 max_runtime_start;
> + u64 max_runtime_end;
> + u64 total_runtime;
> +};
> +
> struct kwork_class {
> struct list_head list;
> const char *name;
> @@ -26,19 +125,81 @@ struct kwork_class {
>
> unsigned int nr_tracepoints;
> const struct evsel_str_handler *tp_handlers;
> +
> + struct rb_root_cached work_root;
> +
> + int (*class_init)(struct kwork_class *class,
> + struct perf_session *session);
> +
> + void (*work_init)(struct kwork_class *class,
> + struct kwork_work *work,
> + struct evsel *evsel,
> + struct perf_sample *sample,
> + struct machine *machine);
> +
> + void (*work_name)(struct kwork_work *work,
> + char *buf, int len);
> +};
> +
> +struct perf_kwork;
> +struct trace_kwork_handler {
> + int (*entry_event)(struct perf_kwork *kwork,
> + struct kwork_class *class, struct evsel *evsel,
> + struct perf_sample *sample, struct machine *machine);
> +
> + int (*exit_event)(struct perf_kwork *kwork,
> + struct kwork_class *class, struct evsel *evsel,
> + struct perf_sample *sample, struct machine *machine);
> };
>
> struct perf_kwork {
> /*
> * metadata
> */
> + struct perf_tool tool;
> struct list_head class_list;
> + struct list_head atom_page_list;
> + struct list_head sort_list, cmp_id;
> + struct rb_root_cached sorted_work_root;
> + const struct trace_kwork_handler *tp_handler;
> +
> + /*
> + * profile filters
> + */
> + const char *profile_name;
> +
> + const char *cpu_list;
> + DECLARE_BITMAP(cpu_bitmap, MAX_NR_CPUS);
> +
> + const char *time_str;
> + struct perf_time_interval ptime;
>
> /*
> * options for command
> */
> bool force;
> const char *event_list_str;
> + enum kwork_report_type report;
> +
> + /*
> + * options for subcommand
> + */
> + bool summary;
> + const char *sort_order;
> +
> + /*
> + * statistics
> + */
> + u64 timestart;
> + u64 timeend;
> +
> + unsigned long nr_events;
> + unsigned long nr_lost_chunks;
> + unsigned long nr_lost_events;
> +
> + u64 all_runtime;
> + u64 all_count;
> + u64 nr_skipped_events[KWORK_TRACE_MAX + 1];
> };
>
> #endif /* PERF_UTIL_KWORK_H */
> --
> 2.30.GIT

--

- Arnaldo

2022-07-27 00:53:30

by Yang Jihong

[permalink] [raw]
Subject: Re: [RFC v3 06/17] perf kwork: Implement perf kwork report

Hello,

On 2022/7/27 1:40, Arnaldo Carvalho de Melo wrote:
> Em Sat, Jul 09, 2022 at 09:50:22AM +0800, Yang Jihong escreveu:
>> +
>> +static void report_print_work(struct perf_kwork *kwork,
>> + struct kwork_work *work)
>> +{
>> + int ret = 0;
>> + char kwork_name[PRINT_KWORK_NAME_WIDTH];
>> + char max_runtime_start[32], max_runtime_end[32];
>
> Committer notes:
>
> - Add some {} for multiline for/if blocks
>
> - Return the calculated number of printed bytes in report_print_work,
> otherwise some compilers will complain that variable isn't used, e.g.:
>
> 2 92.64 almalinux:9 : FAIL clang version 13.0.1 (Red Hat 13.0.1-1.el9)
> builtin-kwork.c:1061:6: error: variable 'ret' set but not used [-Werror,-Wunused-but-set-variable]
> int ret = 0;
>
>
OK, I'll fix it in next version.
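
For reference, the idea would be roughly as below (a sketch only: the column
format strings here are illustrative, only the names come from the patch, and
the real function prints more columns):

static int report_print_work(struct perf_kwork *kwork,
			     struct kwork_work *work)
{
	int ret = 0;
	char kwork_name[PRINT_KWORK_NAME_WIDTH] = "";

	if (work->class && work->class->work_name)
		work->class->work_name(work, kwork_name,
				       PRINT_KWORK_NAME_WIDTH);

	/* kwork name and cpu columns */
	ret += printf(" %-*s |", PRINT_KWORK_NAME_WIDTH, kwork_name);
	ret += printf(" %0*d |", PRINT_CPU_WIDTH, work->cpu);

	/* runtime column (the only report type supported so far) */
	if (kwork->report == KWORK_REPORT_RUNTIME)
		ret += printf(" %*.3f ms |", PRINT_RUNTIME_WIDTH,
			      (double)work->total_runtime / NSEC_PER_MSEC);

	/* return the accumulated printed width so 'ret' is actually used */
	return ret;
}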

Regards,
Jihong

2022-07-27 01:02:34

by Yang Jihong

[permalink] [raw]
Subject: Re: [RFC v3 01/17] perf kwork: New tool

Hello,

On 2022/7/27 1:27, Arnaldo Carvalho de Melo wrote:
> Em Sat, Jul 09, 2022 at 09:50:17AM +0800, Yang Jihong escreveu:
>> The perf-kwork tool is used to trace time properties of kernel work
>> (such as irq, softirq, and workqueue), including runtime, latency,
>> and timehist, using the infrastructure in the perf tools to allow
>> tracing extra targets.
>>
>> This is the first commit to reuse perf_record framework code to
>> implement a simple record function, kwork is not supported currently.
>
> So, since I have to fix some issues I'm adding small stylistic changes,
> starting with this:
>
>
> - multiline if/for blocks need {}
>
> - remove needless spaces between variable declaration + initialization.
>
> - Arnaldo
>
OK, I'll fix it in next version.
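
For example, taking the class init loop from the report patch, the braces
would be added like this (style sketch only, no functional change):

	list_for_each_entry(class, &kwork->class_list, list) {
		if ((class->class_init != NULL) &&
		    (class->class_init(class, session) != 0)) {
			return -1;
		}
	}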

Regards,
Jihong

2022-07-27 14:13:03

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: Re: [RFC v3 06/17] perf kwork: Implement perf kwork report

Em Wed, Jul 27, 2022 at 08:39:33AM +0800, Yang Jihong escreveu:
> Hello,
>
> On 2022/7/27 1:40, Arnaldo Carvalho de Melo wrote:
> > Em Sat, Jul 09, 2022 at 09:50:22AM +0800, Yang Jihong escreveu:
> > > +
> > > +static void report_print_work(struct perf_kwork *kwork,
> > > + struct kwork_work *work)
> > > +{
> > > + int ret = 0;
> > > + char kwork_name[PRINT_KWORK_NAME_WIDTH];
> > > + char max_runtime_start[32], max_runtime_end[32];
> >
> > Committer notes:
> >
> > - Add some {} for multiline for/if blocks
> >
> > - Return the calculated number of printed bytes in report_print_work,
> > otherwise some compilers will complain that variable isn't used, e.g.:
> >
> > 2 92.64 almalinux:9 : FAIL clang version 13.0.1 (Red Hat 13.0.1-1.el9)
> > builtin-kwork.c:1061:6: error: variable 'ret' set but not used [-Werror,-Wunused-but-set-variable]
> > int ret = 0;
> >
> >
> OK, I'll fix it in next version.

your work with these fixups is already at acme/perf/core:

git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git perf/core

Please continue from there. Please let me know if I made some mistake.

Thanks for working on this!

- Arnaldo

2022-07-27 23:37:11

by Namhyung Kim

[permalink] [raw]
Subject: Re: [RFC v3 01/17] perf kwork: New tool

Hello,

On Fri, Jul 8, 2022 at 6:53 PM Yang Jihong <[email protected]> wrote:
>
> The perf-kwork tool is used to trace time properties of kernel work
> (such as irq, softirq, and workqueue), including runtime, latency,
> and timehist, using the infrastructure in the perf tools to allow
> tracing extra targets.
>
> This is the first commit to reuse perf_record framework code to
> implement a simple record function, kwork is not supported currently.
>
> Test cases:
>
> # perf
>
> usage: perf [--version] [--help] [OPTIONS] COMMAND [ARGS]
>
> The most commonly used perf commands are:
> <SNIP>
> iostat Show I/O performance metrics
> kallsyms Searches running kernel for symbols
> kmem Tool to trace/measure kernel memory properties
> kvm Tool to trace/measure kvm guest os
> kwork Tool to trace/measure kernel work properties (latencies)
> list List all symbolic event types
> lock Analyze lock events
> mem Profile memory accesses
> record Run a command and record its profile into perf.data
> <SNIP>
> See 'perf help COMMAND' for more information on a specific command.
>
> # perf kwork
>
> Usage: perf kwork [<options>] {record}
>
> -D, --dump-raw-trace dump raw trace in ASCII
> -f, --force don't complain, do it
> -k, --kwork <kwork> list of kwork to profile
> -v, --verbose be more verbose (show symbol address, etc)
>
> # perf kwork record -- sleep 1
> [ perf record: Woken up 0 times to write data ]
> [ perf record: Captured and wrote 1.787 MB perf.data ]
>
> Signed-off-by: Yang Jihong <[email protected]>
> ---
[SNIP]
> +
> +static int perf_kwork__record(struct perf_kwork *kwork,
> + int argc, const char **argv)
> +{
> + const char **rec_argv;
> + unsigned int rec_argc, i, j;
> + struct kwork_class *class;
> +
> + const char *const record_args[] = {
> + "record",
> + "-a",
> + "-R",
> + "-m", "1024",
> + "-c", "1",

Please consider adding '--synth task' to skip costly synthesis
if you don't need user space symbols.
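
For example, record_args could become something like this (just a sketch of
the suggestion, the final spelling is of course up to you):

	const char *const record_args[] = {
		"record",
		"-a",
		"-R",
		"-m", "1024",
		"-c", "1",
		/* skip costly synthesis, only task info is needed */
		"--synth", "task",
	};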

> + };
> +
> + rec_argc = ARRAY_SIZE(record_args) + argc - 1;
> +
> + list_for_each_entry(class, &kwork->class_list, list)
> + rec_argc += 2 * class->nr_tracepoints;
> +
> + rec_argv = calloc(rec_argc + 1, sizeof(char *));
> + if (rec_argv == NULL)
> + return -ENOMEM;
> +
> + for (i = 0; i < ARRAY_SIZE(record_args); i++)
> + rec_argv[i] = strdup(record_args[i]);
> +
> + list_for_each_entry(class, &kwork->class_list, list) {
> + for (j = 0; j < class->nr_tracepoints; j++) {
> + rec_argv[i++] = strdup("-e");
> + rec_argv[i++] = strdup(class->tp_handlers[j].name);
> + }
> + }
> +
> + for (j = 1; j < (unsigned int)argc; j++, i++)
> + rec_argv[i] = argv[j];
> +
> + BUG_ON(i != rec_argc);
> +
> + pr_debug("record comm: ");
> + for (j = 0; j < rec_argc; j++)
> + pr_debug("%s ", rec_argv[j]);
> + pr_debug("\n");
> +
> + return cmd_record(i, rec_argv);
> +}

2022-07-28 00:04:59

by Namhyung Kim

[permalink] [raw]
Subject: Re: [RFC v3 02/17] perf kwork: Add irq kwork record support

On Fri, Jul 8, 2022 at 6:53 PM Yang Jihong <[email protected]> wrote:
>
> Record interrupt events irq:irq_handler_entry & irq_handler_exit
>
> Test cases:
>
> # perf kwork record -o perf_kwork.date -- sleep 1
> [ perf record: Woken up 0 times to write data ]
> [ perf record: Captured and wrote 0.556 MB perf_kwork.date ]
> #
> # perf evlist -i perf_kwork.date
> irq:irq_handler_entry
> irq:irq_handler_exit
> dummy:HG
> # Tip: use 'perf evlist --trace-fields' to show fields for tracepoint events
> #
>
> Signed-off-by: Yang Jihong <[email protected]>
> ---
> tools/perf/Documentation/perf-kwork.txt | 2 +-
> tools/perf/builtin-kwork.c | 15 ++++++++++++++-
> tools/perf/util/kwork.h | 1 +
> 3 files changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/tools/perf/Documentation/perf-kwork.txt b/tools/perf/Documentation/perf-kwork.txt
> index dc1e36da57bb..57bd5fa7d5c9 100644
> --- a/tools/perf/Documentation/perf-kwork.txt
> +++ b/tools/perf/Documentation/perf-kwork.txt
> @@ -32,7 +32,7 @@ OPTIONS
>
> -k::
> --kwork::
> - List of kwork to profile
> + List of kwork to profile (irq, etc)
>
> -v::
> --verbose::
> diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
> index f3552c56ede3..a26b7fde1e38 100644
> --- a/tools/perf/builtin-kwork.c
> +++ b/tools/perf/builtin-kwork.c
> @@ -25,7 +25,20 @@
> #include <linux/time64.h>
> #include <linux/zalloc.h>
>
> +const struct evsel_str_handler irq_tp_handlers[] = {
> + { "irq:irq_handler_entry", NULL, },
> + { "irq:irq_handler_exit", NULL, },
> +};
> +
> +static struct kwork_class kwork_irq = {
> + .name = "irq",
> + .type = KWORK_CLASS_IRQ,
> + .nr_tracepoints = 2,

Nit: I don't think it's gonna change frequently but
it'd be better to use ARRAY_SIZE(irq_tp_handlers)
for future changes.
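
Something like this, for instance (sketch only; ARRAY_SIZE comes from
<linux/kernel.h>):

static struct kwork_class kwork_irq = {
	.name           = "irq",
	.type           = KWORK_CLASS_IRQ,
	.nr_tracepoints = ARRAY_SIZE(irq_tp_handlers),
	.tp_handlers    = irq_tp_handlers,
};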

Thanks,
Namhyung


> + .tp_handlers = irq_tp_handlers,
> +};
> +
> static struct kwork_class *kwork_class_supported_list[KWORK_CLASS_MAX] = {
> + [KWORK_CLASS_IRQ] = &kwork_irq,
> };
>
> static void setup_event_list(struct perf_kwork *kwork,
> @@ -132,7 +145,7 @@ int cmd_kwork(int argc, const char **argv)
> OPT_BOOLEAN('D', "dump-raw-trace", &dump_trace,
> "dump raw trace in ASCII"),
> OPT_STRING('k', "kwork", &kwork.event_list_str, "kwork",
> - "list of kwork to profile"),
> + "list of kwork to profile (irq, etc)"),
> OPT_BOOLEAN('f', "force", &kwork.force, "don't complain, do it"),
> OPT_END()
> };
> diff --git a/tools/perf/util/kwork.h b/tools/perf/util/kwork.h
> index 6950636aab2a..f1d89cb058fc 100644
> --- a/tools/perf/util/kwork.h
> +++ b/tools/perf/util/kwork.h
> @@ -13,6 +13,7 @@
> #include <linux/bitmap.h>
>
> enum kwork_class_type {
> + KWORK_CLASS_IRQ,
> KWORK_CLASS_MAX,
> };
>
> --
> 2.30.GIT
>

2022-07-28 00:08:28

by Namhyung Kim

[permalink] [raw]
Subject: Re: [RFC v3 06/17] perf kwork: Implement perf kwork report

On Fri, Jul 8, 2022 at 6:53 PM Yang Jihong <[email protected]> wrote:
>
> Implements framework of perf kwork report, which is used to report time
> properties such as run time and frequency:
>
> Test cases:
>
> # perf kwork
>
> Usage: perf kwork [<options>] {record|report}
>
> -D, --dump-raw-trace dump raw trace in ASCII
> -f, --force don't complain, do it
> -k, --kwork <kwork> list of kwork to profile (irq, softirq, workqueue, etc)
> -v, --verbose be more verbose (show symbol address, etc)
>
> # perf kwork report -h
>
> Usage: perf kwork report [<options>]
>
> -C, --cpu <cpu> list of cpus to profile
> -i, --input <file> input file name
> -n, --name <name> event name to profile
> -s, --sort <key[,key2...]>
> sort by key(s): runtime, max, count
> -S, --with-summary Show summary with statistics
> --time <str> Time span for analysis (start,stop)
>
> # perf kwork report
>
> Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
> --------------------------------------------------------------------------------------------------------------------------------
> --------------------------------------------------------------------------------------------------------------------------------
>
> # perf kwork report -S
>
> Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
> --------------------------------------------------------------------------------------------------------------------------------
> --------------------------------------------------------------------------------------------------------------------------------
> Total count : 0
> Total runtime (msec) : 0.000 (0.000% load average)
> Total time span (msec) : 0.000
> --------------------------------------------------------------------------------------------------------------------------------
>
> # perf kwork report -C 0,100
> Requested CPU 100 too large. Consider raising MAX_NR_CPUS
> Invalid cpu bitmap
>
> # perf kwork report -s runtime1
> Error: Unknown --sort key: `runtime1'
>
> Usage: perf kwork report [<options>]
>
> -C, --cpu <cpu> list of cpus to profile
> -i, --input <file> input file name
> -n, --name <name> event name to profile
> -s, --sort <key[,key2...]>
> sort by key(s): runtime, max, count
> -S, --with-summary Show summary with statistics
> --time <str> Time span for analysis (start,stop)
>
> # perf kwork report -i perf_no_exist.data
> failed to open perf_no_exist.data: No such file or directory
>
> # perf kwork report --time 00FFF,
> Invalid time span
>
> Since there are no report supported events, the output is empty.
>
> Briefly describe the data structure:
> 1. "class" indicates event type. For example, irq and softiq correspond
> to different types.
> 2. "cluster" refers to a specific event corresponding to a type. For
> example, RCU and TIMER in softirq correspond to different clusters,
> which contains three types of events: raise, entry, and exit.

Maybe I'm too late... but it's now "work", right?

> 3. "atom" includes time of each sample and sample of the previous phase.
> (For example, exit corresponds to entry, which is used for timehist.)
>
> Signed-off-by: Yang Jihong <[email protected]>
> ---
> tools/perf/Documentation/perf-kwork.txt | 33 +
> tools/perf/builtin-kwork.c | 859 +++++++++++++++++++++++-
> tools/perf/util/kwork.h | 161 +++++
> 3 files changed, 1051 insertions(+), 2 deletions(-)
>
> diff --git a/tools/perf/Documentation/perf-kwork.txt b/tools/perf/Documentation/perf-kwork.txt
> index c5b52f61da99..b79b2c0d047e 100644
> --- a/tools/perf/Documentation/perf-kwork.txt
> +++ b/tools/perf/Documentation/perf-kwork.txt
> @@ -17,8 +17,11 @@ There are several variants of 'perf kwork':
> 'perf kwork record <command>' to record the kernel work
> of an arbitrary workload.
>
> + 'perf kwork report' to report the per kwork runtime.
> +
> Example usage:
> perf kwork record -- sleep 1
> + perf kwork report
>
> OPTIONS
> -------
> @@ -38,6 +41,36 @@ OPTIONS
> --verbose::
> Be more verbose. (show symbol address, etc)
>
> +OPTIONS for 'perf kwork report'
> +----------------------------
> +
> +-C::
> +--cpu::
> + Only show events for the given CPU(s) (comma separated list).
> +
> +-i::
> +--input::
> + Input file name. (default: perf.data unless stdin is a fifo)
> +
> +-n::
> +--name::
> + Only show events for the given name.
> +
> +-s::
> +--sort::
> + Sort by key(s): runtime, max, count
> +
> +-S::
> +--with-summary::
> + Show summary with statistics
> +
> +--time::
> + Only analyze samples within given time window: <start>,<stop>. Times
> + have the format seconds.microseconds. If start is not given (i.e., time
> + string is ',x.y') then analysis starts at the beginning of the file. If
> + stop time is not given (i.e, time string is 'x.y,') then analysis goes
> + to end of file.
> +
> SEE ALSO
> --------
> linkperf:perf-record[1]
> diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
> index 8086236b7513..9c488d647995 100644
> --- a/tools/perf/builtin-kwork.c
> +++ b/tools/perf/builtin-kwork.c
> @@ -25,6 +25,460 @@
> #include <linux/time64.h>
> #include <linux/zalloc.h>
>
> +/*
> + * report header elements width
> + */
> +#define PRINT_CPU_WIDTH 4
> +#define PRINT_COUNT_WIDTH 9
> +#define PRINT_RUNTIME_WIDTH 10
> +#define PRINT_TIMESTAMP_WIDTH 17
> +#define PRINT_KWORK_NAME_WIDTH 30
> +#define RPINT_DECIMAL_WIDTH 3
> +#define PRINT_TIME_UNIT_SEC_WIDTH 2
> +#define PRINT_TIME_UNIT_MESC_WIDTH 3

MSEC ?

Thanks,
Namhyung


> +#define PRINT_RUNTIME_HEADER_WIDTH (PRINT_RUNTIME_WIDTH + PRINT_TIME_UNIT_MESC_WIDTH)
> +#define PRINT_TIMESTAMP_HEADER_WIDTH (PRINT_TIMESTAMP_WIDTH + PRINT_TIME_UNIT_SEC_WIDTH)
> +

2022-07-28 11:57:49

by Yang Jihong

[permalink] [raw]
Subject: Re: [RFC v3 01/17] perf kwork: New tool

Hello Namhyung,

On 2022/7/28 7:33, Namhyung Kim wrote:
> Hello,
>
> On Fri, Jul 8, 2022 at 6:53 PM Yang Jihong <[email protected]> wrote:
>>
>> The perf-kwork tool is used to trace time properties of kernel work
>> (such as irq, softirq, and workqueue), including runtime, latency,
>> and timehist, using the infrastructure in the perf tools to allow
>> tracing extra targets.
>>
>> This is the first commit to reuse perf_record framework code to
>> implement a simple record function, kwork is not supported currently.
>>
>> Test cases:
>>
>> # perf
>>
>> usage: perf [--version] [--help] [OPTIONS] COMMAND [ARGS]
>>
>> The most commonly used perf commands are:
>> <SNIP>
>> iostat Show I/O performance metrics
>> kallsyms Searches running kernel for symbols
>> kmem Tool to trace/measure kernel memory properties
>> kvm Tool to trace/measure kvm guest os
>> kwork Tool to trace/measure kernel work properties (latencies)
>> list List all symbolic event types
>> lock Analyze lock events
>> mem Profile memory accesses
>> record Run a command and record its profile into perf.data
>> <SNIP>
>> See 'perf help COMMAND' for more information on a specific command.
>>
>> # perf kwork
>>
>> Usage: perf kwork [<options>] {record}
>>
>> -D, --dump-raw-trace dump raw trace in ASCII
>> -f, --force don't complain, do it
>> -k, --kwork <kwork> list of kwork to profile
>> -v, --verbose be more verbose (show symbol address, etc)
>>
>> # perf kwork record -- sleep 1
>> [ perf record: Woken up 0 times to write data ]
>> [ perf record: Captured and wrote 1.787 MB perf.data ]
>>
>> Signed-off-by: Yang Jihong <[email protected]>
>> ---
> [SNIP]
>> +
>> +static int perf_kwork__record(struct perf_kwork *kwork,
>> + int argc, const char **argv)
>> +{
>> + const char **rec_argv;
>> + unsigned int rec_argc, i, j;
>> + struct kwork_class *class;
>> +
>> + const char *const record_args[] = {
>> + "record",
>> + "-a",
>> + "-R",
>> + "-m", "1024",
>> + "-c", "1",
>
> Please consider adding '--synth task' to skip costly synthesis
> if you don't need user space symbols.
>
Yes, we don't need user space symbols now. I'll add this option in the next
fix patch, thanks for your suggestion.

Regards,
Jihong

2022-07-28 12:03:30

by Yang Jihong

[permalink] [raw]
Subject: Re: [RFC v3 06/17] perf kwork: Implement perf kwork report

Hello Arnaldo,

On 2022/7/27 22:04, Arnaldo Carvalho de Melo wrote:
> Em Wed, Jul 27, 2022 at 08:39:33AM +0800, Yang Jihong escreveu:
>> Hello,
>>
>> On 2022/7/27 1:40, Arnaldo Carvalho de Melo wrote:
>>> Em Sat, Jul 09, 2022 at 09:50:22AM +0800, Yang Jihong escreveu:
>>>> +
>>>> +static void report_print_work(struct perf_kwork *kwork,
>>>> + struct kwork_work *work)
>>>> +{
>>>> + int ret = 0;
>>>> + char kwork_name[PRINT_KWORK_NAME_WIDTH];
>>>> + char max_runtime_start[32], max_runtime_end[32];
>>>
>>> Committer notes:
>>>
>>> - Add some {} for multiline for/if blocks
>>>
>>> - Return the calculated number of printed bytes in report_print_work,
>>> otherwise some compilers will complain that variable isn't used, e.g.:
>>>
>>> 2 92.64 almalinux:9 : FAIL clang version 13.0.1 (Red Hat 13.0.1-1.el9)
>>> builtin-kwork.c:1061:6: error: variable 'ret' set but not used [-Werror,-Wunused-but-set-variable]
>>> int ret = 0;
>>>
>>>
>> OK, I'll fix it in next version.
>
> your work with these fixups is already at acme/perf/core:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git perf/core
>
> Please continue from there. Please let me know if I made some mistake.
>
> Thanks for working on this!
>
Thanks for these fixups.
OK, I'll send fix patches based on this branch, following the review comments.

Regards,
Jihong

2022-07-28 12:21:47

by Yang Jihong

[permalink] [raw]
Subject: Re: [RFC v3 02/17] perf kwork: Add irq kwork record support

Hello Namhyung,

On 2022/7/28 7:42, Namhyung Kim wrote:
> On Fri, Jul 8, 2022 at 6:53 PM Yang Jihong <[email protected]> wrote:
>>
>> Record interrupt events irq:irq_handler_entry & irq_handler_exit
>>
>> Test cases:
>>
>> # perf kwork record -o perf_kwork.date -- sleep 1
>> [ perf record: Woken up 0 times to write data ]
>> [ perf record: Captured and wrote 0.556 MB perf_kwork.date ]
>> #
>> # perf evlist -i perf_kwork.date
>> irq:irq_handler_entry
>> irq:irq_handler_exit
>> dummy:HG
>> # Tip: use 'perf evlist --trace-fields' to show fields for tracepoint events
>> #
>>
>> Signed-off-by: Yang Jihong <[email protected]>
>> ---
>> tools/perf/Documentation/perf-kwork.txt | 2 +-
>> tools/perf/builtin-kwork.c | 15 ++++++++++++++-
>> tools/perf/util/kwork.h | 1 +
>> 3 files changed, 16 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/perf/Documentation/perf-kwork.txt b/tools/perf/Documentation/perf-kwork.txt
>> index dc1e36da57bb..57bd5fa7d5c9 100644
>> --- a/tools/perf/Documentation/perf-kwork.txt
>> +++ b/tools/perf/Documentation/perf-kwork.txt
>> @@ -32,7 +32,7 @@ OPTIONS
>>
>> -k::
>> --kwork::
>> - List of kwork to profile
>> + List of kwork to profile (irq, etc)
>>
>> -v::
>> --verbose::
>> diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
>> index f3552c56ede3..a26b7fde1e38 100644
>> --- a/tools/perf/builtin-kwork.c
>> +++ b/tools/perf/builtin-kwork.c
>> @@ -25,7 +25,20 @@
>> #include <linux/time64.h>
>> #include <linux/zalloc.h>
>>
>> +const struct evsel_str_handler irq_tp_handlers[] = {
>> + { "irq:irq_handler_entry", NULL, },
>> + { "irq:irq_handler_exit", NULL, },
>> +};
>> +
>> +static struct kwork_class kwork_irq = {
>> + .name = "irq",
>> + .type = KWORK_CLASS_IRQ,
>> + .nr_tracepoints = 2,
>
> Nit: I don't think it's gonna change frequently but
> it'd be better to use ARRAY_SIZE(irq_tp_handlers)
> for future changes.
>
OK, I'll fix it in the next patch, thanks for your review.

Regards,
Jihong

2022-07-28 12:42:02

by Yang Jihong

[permalink] [raw]
Subject: Re: [RFC v3 06/17] perf kwork: Implement perf kwork report

Hello Namhyung,

On 2022/7/28 8:00, Namhyung Kim wrote:
> On Fri, Jul 8, 2022 at 6:53 PM Yang Jihong <[email protected]> wrote:
>>
>> Implements framework of perf kwork report, which is used to report time
>> properties such as run time and frequency:
>>
>> Test cases:
>>
>> # perf kwork
>>
>> Usage: perf kwork [<options>] {record|report}
>>
>> -D, --dump-raw-trace dump raw trace in ASCII
>> -f, --force don't complain, do it
>> -k, --kwork <kwork> list of kwork to profile (irq, softirq, workqueue, etc)
>> -v, --verbose be more verbose (show symbol address, etc)
>>
>> # perf kwork report -h
>>
>> Usage: perf kwork report [<options>]
>>
>> -C, --cpu <cpu> list of cpus to profile
>> -i, --input <file> input file name
>> -n, --name <name> event name to profile
>> -s, --sort <key[,key2...]>
>> sort by key(s): runtime, max, count
>> -S, --with-summary Show summary with statistics
>> --time <str> Time span for analysis (start,stop)
>>
>> # perf kwork report
>>
>> Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
>> --------------------------------------------------------------------------------------------------------------------------------
>> --------------------------------------------------------------------------------------------------------------------------------
>>
>> # perf kwork report -S
>>
>> Kwork Name | Cpu | Total Runtime | Count | Max runtime | Max runtime start | Max runtime end |
>> --------------------------------------------------------------------------------------------------------------------------------
>> --------------------------------------------------------------------------------------------------------------------------------
>> Total count : 0
>> Total runtime (msec) : 0.000 (0.000% load average)
>> Total time span (msec) : 0.000
>> --------------------------------------------------------------------------------------------------------------------------------
>>
>> # perf kwork report -C 0,100
>> Requested CPU 100 too large. Consider raising MAX_NR_CPUS
>> Invalid cpu bitmap
>>
>> # perf kwork report -s runtime1
>> Error: Unknown --sort key: `runtime1'
>>
>> Usage: perf kwork report [<options>]
>>
>> -C, --cpu <cpu> list of cpus to profile
>> -i, --input <file> input file name
>> -n, --name <name> event name to profile
>> -s, --sort <key[,key2...]>
>> sort by key(s): runtime, max, count
>> -S, --with-summary Show summary with statistics
>> --time <str> Time span for analysis (start,stop)
>>
>> # perf kwork report -i perf_no_exist.data
>> failed to open perf_no_exist.data: No such file or directory
>>
>> # perf kwork report --time 00FFF,
>> Invalid time span
>>
>> Since there are no report supported events, the output is empty.
>>
>> Briefly describe the data structure:
>> 1. "class" indicates event type. For example, irq and softiq correspond
>> to different types.
>> 2. "cluster" refers to a specific event corresponding to a type. For
>> example, RCU and TIMER in softirq correspond to different clusters,
>> which contains three types of events: raise, entry, and exit.
>
> Maybe I'm too late... but it's now "work", right?
>
Yes, the code has been changed to "work" according to the previous
suggestion, but I forgot to update the commit message accordingly ...

>> 3. "atom" includes time of each sample and sample of the previous phase.
>> (For example, exit corresponds to entry, which is used for timehist.)
>>
>> Signed-off-by: Yang Jihong <[email protected]>
>> ---
>> tools/perf/Documentation/perf-kwork.txt | 33 +
>> tools/perf/builtin-kwork.c | 859 +++++++++++++++++++++++-
>> tools/perf/util/kwork.h | 161 +++++
>> 3 files changed, 1051 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/perf/Documentation/perf-kwork.txt b/tools/perf/Documentation/perf-kwork.txt
>> index c5b52f61da99..b79b2c0d047e 100644
>> --- a/tools/perf/Documentation/perf-kwork.txt
>> +++ b/tools/perf/Documentation/perf-kwork.txt
>> @@ -17,8 +17,11 @@ There are several variants of 'perf kwork':
>> 'perf kwork record <command>' to record the kernel work
>> of an arbitrary workload.
>>
>> + 'perf kwork report' to report the per kwork runtime.
>> +
>> Example usage:
>> perf kwork record -- sleep 1
>> + perf kwork report
>>
>> OPTIONS
>> -------
>> @@ -38,6 +41,36 @@ OPTIONS
>> --verbose::
>> Be more verbose. (show symbol address, etc)
>>
>> +OPTIONS for 'perf kwork report'
>> +----------------------------
>> +
>> +-C::
>> +--cpu::
>> + Only show events for the given CPU(s) (comma separated list).
>> +
>> +-i::
>> +--input::
>> + Input file name. (default: perf.data unless stdin is a fifo)
>> +
>> +-n::
>> +--name::
>> + Only show events for the given name.
>> +
>> +-s::
>> +--sort::
>> + Sort by key(s): runtime, max, count
>> +
>> +-S::
>> +--with-summary::
>> + Show summary with statistics
>> +
>> +--time::
>> + Only analyze samples within given time window: <start>,<stop>. Times
>> + have the format seconds.microseconds. If start is not given (i.e., time
>> + string is ',x.y') then analysis starts at the beginning of the file. If
>> + stop time is not given (i.e, time string is 'x.y,') then analysis goes
>> + to end of file.
>> +
>> SEE ALSO
>> --------
>> linkperf:perf-record[1]
>> diff --git a/tools/perf/builtin-kwork.c b/tools/perf/builtin-kwork.c
>> index 8086236b7513..9c488d647995 100644
>> --- a/tools/perf/builtin-kwork.c
>> +++ b/tools/perf/builtin-kwork.c
>> @@ -25,6 +25,460 @@
>> #include <linux/time64.h>
>> #include <linux/zalloc.h>
>>
>> +/*
>> + * report header elements width
>> + */
>> +#define PRINT_CPU_WIDTH 4
>> +#define PRINT_COUNT_WIDTH 9
>> +#define PRINT_RUNTIME_WIDTH 10
>> +#define PRINT_TIMESTAMP_WIDTH 17
>> +#define PRINT_KWORK_NAME_WIDTH 30
>> +#define RPINT_DECIMAL_WIDTH 3
>> +#define PRINT_TIME_UNIT_SEC_WIDTH 2
>> +#define PRINT_TIME_UNIT_MESC_WIDTH 3
>
> MSEC ?
>
Yes, I'll fix it in the next patch, thanks for your review.

Regards,
Jihong