2018-08-09 14:59:51

by Arnaldo Carvalho de Melo

Subject: [GIT PULL 00/44] perf/core improvements and fixes

Hi Ingo,

Please consider pulling,

- Arnaldo

Test results at the end of this message, as usual.

Several new test environments were added, building with/without the
elfutils 0.173 ELF and DWARF libraries cross built for many
architectures, some of them appearing in this container based test
environment for the first time:

56 ubuntu:18.04-x-m68k : Ok m68k-linux-gnu-gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
60 ubuntu:18.04-x-riscv64 : Ok riscv64-linux-gnu-gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
62 ubuntu:18.04-x-sh4 : Ok sh4-linux-gnu-gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
63 ubuntu:18.04-x-sparc64 : Ok sparc64-linux-gnu-gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0

The following changes since commit ec2cb7a526d49b65576301e183448fb51ee543a6:

Merge tag 'perf-core-for-mingo-4.19-20180801' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core (2018-08-02 09:59:41 +0200)

are available in the Git repository at:

git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo-4.19-20180809

for you to fetch changes up to 6a9405b56c274024564f9014bba97b92c91b34d6:

perf map: Optimize maps__fixup_overlappings() (2018-08-08 15:56:00 -0300)

----------------------------------------------------------------
perf/core improvements and fixes:

perf annotate: (Jiri Olsa)

- Show percentages based on global or local hits or period, adding hotkeys ('p':
local/global, 'b': hits/period, use 'h' to see all hotkeys in the TUI) to
toggle those modes in the TUI, as well as a command line option to select them, i.e.

perf report/annotate --percent-type global-period,local-period,global-hits,local-hits

to help understand the impact an annotated line has globally or just on the
function's total number of hits or its total period.

Try using it from the dynamic annotation interface in 'perf top', i.e.
fire up 'perf top', press 'a' on a kernel function and then try pressing 'p'
to see, dynamically, the global/local percentage for the lines with samples.

This was based on a suggestion made by Stephane Eranian.

perf trace: (Arnaldo Carvalho de Melo)

- Process syscalls:sys_enter_SYSCALLNAME tracepoints using the strace-like
beautifiers used with raw_syscalls:sys_enter, paving the way to use
those beautifiers with whatever event carries that payload in its
PERF_SAMPLE_RAW area (Arnaldo Carvalho de Melo)

- Add more wrappers for BPF functions to be used in eBPF programs, together
with examples on how to use them. For instance, a "hello, world" like
program attached to the 'openat' syscall entry tracepoint uses a stdio.h
like puts() function that abstracts access to the bpf_perf_event_output
eBPF function via an eBPF map tied to a "bpf-output"
(PERF_COUNT_SW_BPF_OUTPUT) software event, so that what is passed to puts()
goes into the perf ring buffer and finally appears in the 'perf trace' output:

$ cd tools/perf/examples/bpf/
$ cat hello.c
#include <stdio.h>

int syscall_enter(openat)(void *args)
{
puts("Hello, world\n");
return 0;
}

license(GPL);
$
# perf trace -e hello.c cat /etc/passwd > /dev/null
0.000 __bpf_stdout__:Hello, world
0.033 __bpf_stdout__:Hello, world
0.358 __bpf_stdout__:Hello, world
#
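
The puts() call above is just a thin wrapper. As an illustration only, and
assuming the struct bpf_map, SEC(), perf_event_output(), BPF_F_CURRENT_CPU and
__NR_CPUS__ definitions provided by perf's bpf.h wrapper header, such a
stdio.h could boil down to something like the sketch below; the actual
tools/perf/include/bpf/stdio.h added in this series may differ in its details:

#include <bpf.h>

/* The "bpf-output" map that 'perf trace' associates with a
 * PERF_COUNT_SW_BPF_OUTPUT event and reads via the perf ring buffer. */
struct bpf_map SEC("maps") __bpf_stdout__ = {
	.type	      = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
	.key_size     = sizeof(int),
	.value_size   = sizeof(u32),
	.max_entries  = __NR_CPUS__,
};

/* 'args' below is the tracepoint payload parameter of the handler that
 * calls puts(), e.g. the 'args' argument of syscall_enter(openat) above. */
#define puts(from) \
	({ char __from[sizeof(from)] = from; \
	   perf_event_output(args, &__bpf_stdout__, BPF_F_CURRENT_CPU, \
			     &__from, sizeof(__from)); })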

- Add another example (augmented_syscalls.c) that copies the syscall tracepoint
payload + the first 64 bytes of 'openat''s 'filename' pointer parameter,
using the eBPF probe_read, probe_read_str and perf_event_output helpers,
sending it all to an eBPF map associated with a bpf-output perf event that
then gets passed to the existing raw_syscalls:sys_enter beautifier, as
sketched below.
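
The sketch that follows is illustrative only: the argument layout mirrors the
syscalls:sys_enter_openat format file and the map declaration follows the
stdio.h sketch above, but the augmented_syscalls.c actually shipped in this
series may differ in its details:

#include <stdio.h>	/* pulls in perf's bpf.h eBPF wrappers */

struct bpf_map SEC("maps") __augmented_syscalls__ = {
	.type	      = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
	.key_size     = sizeof(int),
	.value_size   = sizeof(u32),
	.max_entries  = __NR_CPUS__,
};

/* Matches the syscalls:sys_enter_openat tracepoint payload */
struct syscall_enter_openat_args {
	unsigned long long common_tp_fields;
	long		   syscall_nr;
	long		   dfd;
	char		   *filename_ptr;
	long		   flags;
	long		   mode;
};

struct augmented_enter_openat_args {
	struct syscall_enter_openat_args args;
	char				 filename[64];
};

int syscall_enter(openat)(struct syscall_enter_openat_args *args)
{
	struct augmented_enter_openat_args augmented_args;

	/* Copy the raw tracepoint payload ... */
	probe_read(&augmented_args.args, sizeof(augmented_args.args), args);
	/* ... plus the first 64 bytes of the string 'filename' points to */
	probe_read_str(&augmented_args.filename,
		       sizeof(augmented_args.filename), args->filename_ptr);
	/* Ship everything to the bpf-output event read by 'perf trace' */
	perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU,
			  &augmented_args, sizeof(augmented_args));
	return 0;
}

license(GPL);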

The changesets were done very granularly so that we can see that payload being
processed first by the generic bpf_output formatter, where we can see that the
filename is being copied, together with the raw_syscalls:sys_enter formatter,
to make sure both agree, e.g.:

# perf trace -e perf/tools/perf/examples/bpf/augmented_syscalls.c,openat cat /etc/passwd > /dev/null
0.000 ( ): __augmented_syscalls__:X?.C......................`\..................../etc/ld.so.cache..#......,....ao.k...............k......1.".........
0.006 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0x5c600da8, flags: CLOEXEC
0.008 ( 0.005 ms): cat/31292 openat(dfd: CWD, filename: 0x5c600da8, flags: CLOEXEC) = 3
0.036 ( ): __augmented_syscalls__:X?.C.......................\..................../lib64/libc.so.6......... .\....#........?.......=.C..../.".........
0.037 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0x5c808ce0, flags: CLOEXEC
0.039 ( 0.007 ms): cat/31292 openat(dfd: CWD, filename: 0x5c808ce0, flags: CLOEXEC) = 3
0.323 ( ): __augmented_syscalls__:X?.C.....................P....................../etc/passwd......>.C....@................>.C.....,....ao.>.C........
0.325 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0xe8be50d6
0.327 ( 0.004 ms): cat/31292 openat(dfd: CWD, filename: 0xe8be50d6) = 3
#

The next step is to improve the beautifiers to use that filename, so that we
can show it just like 'perf trace' + probe:vfs_getname (a getname_flags kprobe
supported by 'perf trace' to show the pathname in syscalls like open, openat,
rename, etc.) and strace (via ptrace) do.

This now requires having clang installed to turn augmented_syscalls.c into an
eBPF object to feed to the kernel. In upcoming patches this requirement will be
removed by making 'perf trace' generate the object directly, or by linking
libclang into perf's binary, code that is already merged.

strace's '-s strsize' option will be implemented to state how many bytes we
should copy, so that the familiar 'strace' workflow can be mimicked some more.
Per-event terms should also be usable to state how many bytes of each pointer
arg should be copied and subsequently beautified, something like:

# perf trace -e openat/filename:16/,read/buf:256/

Infrastructure: (Konstantin Khlebnikov)

- Optimize the synthesization of maps for pre-existing threads, synthesizing
maps just for the thread group leader

- Optimize maps__fixup_overlappings()

Arch specific:

arm64: (Sean V Kelley)

- Enable JSON events for Ampere Computing eMAG processor

s/390: (Thomas Richter)

- Support auxiliary trace on 'perf report'

Cleanups:

- Drop unneeded bitmap_zero() calls (Yury Norov)

Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>

----------------------------------------------------------------
Arnaldo Carvalho de Melo (17):
perf trace: Associate vfs_getname()'ed pathname with fd returned from 'openat'
perf trace: Use beautifiers on syscalls:sys_enter_ handlers
perf trace: Rename some syscall_tp methods to raw_syscall
perf trace: Allow setting up a syscall_tp struct without a format_field
perf trace: Setup struct syscall_tp for syscalls:sys_{enter,exit}_NAME events
perf trace: Use perf_evsel__sc_tp_{uint,ptr} for "id"/"args" handling syscalls:* events
perf bpf: Add 'syscall_enter' probe helper for syscall enter tracepoints
perf bpf: Add struct bpf_map struct
perf bpf: Add bpf/stdio.h wrapper to bpf_perf_event_output function
perf bpf: Make bpf__for_each_stdout_map() generic
perf bpf: Generalize bpf__setup_stdout()
perf bpf: Add bpf__setup_output_event() strerror() counterpart
perf bpf: Add wrappers to BPF_FUNC_probe_read(_str) functions
perf trace: Handle "bpf-output" events associated with "__augmented_syscalls__" BPF map
perf bpf: Make bpf__setup_output_event() return the bpf-output event
perf trace: Setup the augmented syscalls bpf-output event fields
perf trace: Wire up the augmented syscalls with the syscalls:sys_enter_FOO beautifier

Jiri Olsa (20):
perf annotate: Make symbol__annotate_fprintf2() local
perf annotate: Make annotation_line__max_percent static
perf annotate: Get rid of annotation__scnprintf_samples_period()
perf annotate: Rename struct annotation_line::samples* to data*
perf annotate: Rename local sample variables to data
perf annotate: Rename hist to sym_hist in annotation__calc_percent
perf annotate: Loop group events directly in annotation__calc_percent()
perf annotate: Switch struct annotation_data::percent to array
perf annotate: Add PERCENT_HITS_GLOBAL percent value
perf annotate: Add PERCENT_PERIOD_LOCAL percent value
perf annotate: Add PERCENT_PERIOD_GLOBAL percent value
perf annotate: Add percent_type to struct annotation_options
perf annotate: Pass struct annotation_options to symbol__calc_lines()
perf annotate: Pass 'struct annotation_options' to map_symbol__annotation_dump()
perf annotate: Pass browser percent_type in annotate_browser__calc_percent()
perf annotate: Add support to toggle percent type
perf annotate: Make local period the default percent type
perf annotate: Display percent type in stdio output
perf annotate: Add --percent-type option
perf report: Add --percent-type option

Konstantin Khlebnikov (2):
perf map: Synthesize maps only for thread group leader
perf map: Optimize maps__fixup_overlappings()

Sean V Kelley (1):
perf vendor events arm64: Enable JSON events for eMAG

Thomas Richter (3):
perf auxtrace: Support for perf report -D for s390
perf report: Add raw report support for s390 auxiliary trace
perf report: Add GUI report support for s390 auxiliary trace

Yury Norov (1):
perf tools: Drop unneeded bitmap_zero() calls

tools/perf/Documentation/perf-annotate.txt | 9 +
tools/perf/Documentation/perf-report.txt | 9 +
tools/perf/arch/s390/util/auxtrace.c | 1 +
tools/perf/builtin-annotate.c | 4 +
tools/perf/builtin-report.c | 3 +
tools/perf/builtin-trace.c | 191 ++++-
tools/perf/examples/bpf/augmented_syscalls.c | 55 ++
tools/perf/examples/bpf/hello.c | 9 +
tools/perf/examples/bpf/sys_enter_openat.c | 33 +
tools/perf/include/bpf/bpf.h | 20 +
tools/perf/include/bpf/stdio.h | 19 +
.../arch/arm64/ampere/emag/core-imp-def.json | 32 +
tools/perf/pmu-events/arch/arm64/mapfile.csv | 1 +
tools/perf/tests/bitmap.c | 2 -
tools/perf/tests/mem2node.c | 2 -
tools/perf/ui/browsers/annotate.c | 76 +-
tools/perf/util/Build | 1 +
tools/perf/util/annotate.c | 301 ++++---
tools/perf/util/annotate.h | 54 +-
tools/perf/util/auxtrace.c | 3 +
tools/perf/util/auxtrace.h | 1 +
tools/perf/util/bpf-loader.c | 48 +-
tools/perf/util/bpf-loader.h | 23 +-
tools/perf/util/event.c | 13 +-
tools/perf/util/evsel.h | 7 +
tools/perf/util/header.c | 3 -
tools/perf/util/map.c | 44 +-
tools/perf/util/map.h | 1 -
tools/perf/util/s390-cpumsf-kernel.h | 71 ++
tools/perf/util/s390-cpumsf.c | 945 +++++++++++++++++++++
tools/perf/util/s390-cpumsf.h | 21 +
31 files changed, 1773 insertions(+), 229 deletions(-)
create mode 100644 tools/perf/examples/bpf/augmented_syscalls.c
create mode 100644 tools/perf/examples/bpf/hello.c
create mode 100644 tools/perf/examples/bpf/sys_enter_openat.c
create mode 100644 tools/perf/include/bpf/stdio.h
create mode 100644 tools/perf/pmu-events/arch/arm64/ampere/emag/core-imp-def.json
create mode 100644 tools/perf/util/s390-cpumsf-kernel.h
create mode 100644 tools/perf/util/s390-cpumsf.c
create mode 100644 tools/perf/util/s390-cpumsf.h

Test results:

The first ones are container (docker) based builds of tools/perf with
and without libelf support. Where clang is available, it is also used
to build perf with/without libelf, and building with LIBCLANGLLVM=1
(built-in clang) with gcc and clang when clang and its devel libraries
are installed.

The objtool and samples/bpf/ builds are disabled now that I'm switching from
using the sources in a local volume to fetching them from an http server to
build them inside the container, to make it easier to build in a container
cluster. Those will come back later.

Several are cross builds, the ones with -x-ARCH and the android one, and those
may not have all the features built, due to the lack of multi-arch devel
packages, which are available and being used so far on just a few, like
debian:experimental-x-{arm64,mipsel}.

The 'perf test' one will perform a variety of tests exercising
tools/perf/util/, tools/lib/{bpf,traceevent,etc}, as well as run perf commands
with a variety of command line event specifications to then intercept the
sys_perf_event syscall to check that the perf_event_attr fields are set up as
expected, among a variety of other unit tests.

Then there are the 'make -C tools/perf build-test' ones, that build tools/perf/
with a variety of feature sets, exercising the build with an incomplete set of
features as well as with a complete one. It is planned to have it run on each
of the containers mentioned above, using some container orchestration
infrastructure. Get in contact if interested in helping to put this in place.

# dm
1 alpine:3.4 : Ok gcc (Alpine 5.3.0) 5.3.0
2 alpine:3.5 : Ok gcc (Alpine 6.2.1) 6.2.1 20160822
3 alpine:3.6 : Ok gcc (Alpine 6.3.0) 6.3.0
4 alpine:3.7 : Ok gcc (Alpine 6.4.0) 6.4.0
5 alpine:edge : Ok gcc (Alpine 6.4.0) 6.4.0
6 amazonlinux:1 : Ok gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
7 amazonlinux:2 : Ok gcc (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5)
8 android-ndk:r12b-arm : Ok arm-linux-androideabi-gcc (GCC) 4.9.x 20150123 (prerelease)
9 android-ndk:r15c-arm : Ok arm-linux-androideabi-gcc (GCC) 4.9.x 20150123 (prerelease)
10 centos:5 : Ok gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-55)
11 centos:6 : Ok gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23)
12 centos:7 : Ok gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
13 debian:7 : Ok gcc (Debian 4.7.2-5) 4.7.2
14 debian:8 : Ok gcc (Debian 4.9.2-10+deb8u1) 4.9.2
15 debian:9 : Ok gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516
16 debian:experimental : Ok gcc (Debian 8.2.0-1) 8.2.0
17 debian:experimental-x-arm64 : Ok aarch64-linux-gnu-gcc (Debian 8.1.0-12) 8.1.0
18 debian:experimental-x-mips : Ok mips-linux-gnu-gcc (Debian 8.1.0-12) 8.1.0
19 debian:experimental-x-mips64 : Ok mips64-linux-gnuabi64-gcc (Debian 7.3.0-18) 7.3.0
20 debian:experimental-x-mipsel : Ok mipsel-linux-gnu-gcc (Debian 8.1.0-12) 8.1.0
21 fedora:20 : Ok gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-7)
22 fedora:21 : Ok gcc (GCC) 4.9.2 20150212 (Red Hat 4.9.2-6)
23 fedora:22 : Ok gcc (GCC) 5.3.1 20160406 (Red Hat 5.3.1-6)
24 fedora:23 : Ok gcc (GCC) 5.3.1 20160406 (Red Hat 5.3.1-6)
25 fedora:24 : Ok gcc (GCC) 6.3.1 20161221 (Red Hat 6.3.1-1)
26 fedora:24-x-ARC-uClibc : Ok arc-linux-gcc (ARCompact ISA Linux uClibc toolchain 2017.09-rc2) 7.1.1 20170710
27 fedora:25 : Ok gcc (GCC) 6.4.1 20170727 (Red Hat 6.4.1-1)
28 fedora:26 : Ok gcc (GCC) 7.3.1 20180130 (Red Hat 7.3.1-2)
29 fedora:27 : Ok gcc (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5)
30 fedora:28 : Ok gcc (GCC) 8.1.1 20180502 (Red Hat 8.1.1-1)
31 fedora:rawhide : Ok gcc (GCC) 8.0.1 20180324 (Red Hat 8.0.1-0.20)
32 gentoo-stage3-amd64:latest : Ok gcc (Gentoo 7.3.0-r3 p1.4) 7.3.0
33 mageia:5 : Ok gcc (GCC) 4.9.2
34 mageia:6 : Ok gcc (Mageia 5.5.0-1.mga6) 5.5.0
35 opensuse:42.1 : Ok gcc (SUSE Linux) 4.8.5
36 opensuse:42.2 : Ok gcc (SUSE Linux) 4.8.5
37 opensuse:42.3 : Ok gcc (SUSE Linux) 4.8.5
38 opensuse:tumbleweed : Ok gcc (SUSE Linux) 7.3.1 20180323 [gcc-7-branch revision 258812]
39 oraclelinux:6 : Ok gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23.0.1)
40 oraclelinux:7 : Ok gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28.0.1)
41 ubuntu:12.04.5 : Ok gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3
42 ubuntu:14.04.4 : Ok gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
43 ubuntu:14.04.4-x-linaro-arm64 : Ok aarch64-linux-gnu-gcc (Linaro GCC 5.5-2017.10) 5.5.0
44 ubuntu:16.04 : Ok gcc (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
45 ubuntu:16.04-x-arm : Ok arm-linux-gnueabihf-gcc (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
46 ubuntu:16.04-x-arm64 : Ok aarch64-linux-gnu-gcc (Ubuntu/Linaro 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
47 ubuntu:16.04-x-powerpc : Ok powerpc-linux-gnu-gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
48 ubuntu:16.04-x-powerpc64 : Ok powerpc64-linux-gnu-gcc (Ubuntu/IBM 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
49 ubuntu:16.04-x-powerpc64el : Ok powerpc64le-linux-gnu-gcc (Ubuntu/IBM 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
50 ubuntu:16.04-x-s390 : Ok s390x-linux-gnu-gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609
51 ubuntu:16.10 : Ok gcc (Ubuntu 6.2.0-5ubuntu12) 6.2.0 20161005
52 ubuntu:17.10 : Ok gcc (Ubuntu 7.2.0-8ubuntu3.2) 7.2.0
53 ubuntu:18.04 : Ok gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
54 ubuntu:18.04-x-arm : Ok arm-linux-gnueabihf-gcc (Ubuntu/Linaro 7.3.0-16ubuntu3) 7.3.0
55 ubuntu:18.04-x-arm64 : Ok aarch64-linux-gnu-gcc (Ubuntu/Linaro 7.3.0-16ubuntu3) 7.3.0
56 ubuntu:18.04-x-m68k : Ok m68k-linux-gnu-gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
57 ubuntu:18.04-x-powerpc : Ok powerpc-linux-gnu-gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
58 ubuntu:18.04-x-powerpc64 : Ok powerpc64-linux-gnu-gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
59 ubuntu:18.04-x-powerpc64el : Ok powerpc64le-linux-gnu-gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
60 ubuntu:18.04-x-riscv64 : Ok riscv64-linux-gnu-gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
61 ubuntu:18.04-x-s390 : Ok s390x-linux-gnu-gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
62 ubuntu:18.04-x-sh4 : Ok sh4-linux-gnu-gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
63 ubuntu:18.04-x-sparc64 : Ok sparc64-linux-gnu-gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
64 ubuntu:18.10 : Ok gcc (Ubuntu 8.2.0-1ubuntu2) 8.2.0
#

# uname -a
Linux jouet 4.18.0-rc8-00002-g1236568ee3cb #12 SMP Tue Aug 7 14:08:26 -03 2018 x86_64 x86_64 x86_64 GNU/Linux
# git log --oneline -1
6a9405b56c27 (HEAD -> perf/core, tag: perf-core-for-mingo-4.19-20180809, acme.korg/perf/core) perf map: Optimize maps__fixup_overlappings()
# perf version --build-options
perf version 4.18.rc7.g6a9405
dwarf: [ on ] # HAVE_DWARF_SUPPORT
dwarf_getlocations: [ on ] # HAVE_DWARF_GETLOCATIONS_SUPPORT
glibc: [ on ] # HAVE_GLIBC_SUPPORT
gtk2: [ on ] # HAVE_GTK2_SUPPORT
syscall_table: [ on ] # HAVE_SYSCALL_TABLE_SUPPORT
libbfd: [ on ] # HAVE_LIBBFD_SUPPORT
libelf: [ on ] # HAVE_LIBELF_SUPPORT
libnuma: [ on ] # HAVE_LIBNUMA_SUPPORT
numa_num_possible_cpus: [ on ] # HAVE_LIBNUMA_SUPPORT
libperl: [ on ] # HAVE_LIBPERL_SUPPORT
libpython: [ on ] # HAVE_LIBPYTHON_SUPPORT
libslang: [ on ] # HAVE_SLANG_SUPPORT
libcrypto: [ on ] # HAVE_LIBCRYPTO_SUPPORT
libunwind: [ on ] # HAVE_LIBUNWIND_SUPPORT
libdw-dwarf-unwind: [ on ] # HAVE_DWARF_SUPPORT
zlib: [ on ] # HAVE_ZLIB_SUPPORT
lzma: [ on ] # HAVE_LZMA_SUPPORT
get_cpuid: [ on ] # HAVE_AUXTRACE_SUPPORT
bpf: [ on ] # HAVE_LIBBPF_SUPPORT
# perf test
1: vmlinux symtab matches kallsyms : Ok
2: Detect openat syscall event : Ok
3: Detect openat syscall event on all cpus : Ok
4: Read samples using the mmap interface : Ok
5: Test data source output : Ok
6: Parse event definition strings : Ok
7: Simple expression parser : Ok
8: PERF_RECORD_* events & perf_sample fields : Ok
9: Parse perf pmu format : Ok
10: DSO data read : Ok
11: DSO data cache : Ok
12: DSO data reopen : Ok
13: Roundtrip evsel->name : Ok
14: Parse sched tracepoints fields : Ok
15: syscalls:sys_enter_openat event fields : Ok
16: Setup struct perf_event_attr : Ok
17: Match and link multiple hists : Ok
18: 'import perf' in python : Ok
19: Breakpoint overflow signal handler : Ok
20: Breakpoint overflow sampling : Ok
21: Breakpoint accounting : Ok
22: Number of exit events of a simple workload : Ok
23: Software clock events period values : Ok
24: Object code reading : Ok
25: Sample parsing : Ok
26: Use a dummy software event to keep tracking : Ok
27: Parse with no sample_id_all bit set : Ok
28: Filter hist entries : Ok
29: Lookup mmap thread : Ok
30: Share thread mg : Ok
31: Sort output of hist entries : Ok
32: Cumulate child hist entries : Ok
33: Track with sched_switch : Ok
34: Filter fds with revents mask in a fdarray : Ok
35: Add fd to a fdarray, making it autogrow : Ok
36: kmod_path__parse : Ok
37: Thread map : Ok
38: LLVM search and compile :
38.1: Basic BPF llvm compile : Ok
38.2: kbuild searching : Ok
38.3: Compile source for BPF prologue generation : Ok
38.4: Compile source for BPF relocation : Ok
39: Session topology : Ok
40: BPF filter :
40.1: Basic BPF filtering : Ok
40.2: BPF pinning : Ok
40.3: BPF prologue generation : Ok
40.4: BPF relocation checker : Ok
41: Synthesize thread map : Ok
42: Remove thread map : Ok
43: Synthesize cpu map : Ok
44: Synthesize stat config : Ok
45: Synthesize stat : Ok
46: Synthesize stat round : Ok
47: Synthesize attr update : Ok
48: Event times : Ok
49: Read backward ring buffer : Ok
50: Print cpu map : Ok
51: Probe SDT events : Ok
52: is_printable_array : Ok
53: Print bitmap : Ok
54: perf hooks : Ok
55: builtin clang support : Skip (not compiled in)
56: unit_number__scnprintf : Ok
57: mem2node : Ok
58: x86 rdpmc : Ok
59: Convert perf time to TSC : Ok
60: DWARF unwind : Ok
61: x86 instruction decoder - new instructions : Ok
62: Use vfs_getname probe to get syscall args filenames : Ok
63: Check open filename arg using perf trace + vfs_getname: Ok
64: probe libc's inet_pton & backtrace it with ping : Ok
65: Add vfs_getname probe to get syscall args filenames : Ok
#

$ make -C tools/perf build-test
make: Entering directory '/home/acme/git/perf/tools/perf'
- tarpkg: ./tests/perf-targz-src-pkg .
make_no_scripts_O: make NO_LIBPYTHON=1 NO_LIBPERL=1
make_clean_all_O: make clean all
make_help_O: make help
make_no_libdw_dwarf_unwind_O: make NO_LIBDW_DWARF_UNWIND=1
make_debug_O: make DEBUG=1
make_no_backtrace_O: make NO_BACKTRACE=1
make_no_libaudit_O: make NO_LIBAUDIT=1
make_no_libelf_O: make NO_LIBELF=1
make_perf_o_O: make perf.o
make_install_bin_O: make install-bin
make_install_prefix_O: make install prefix=/tmp/krava
make_no_gtk2_O: make NO_GTK2=1
make_no_libbpf_O: make NO_LIBBPF=1
make_static_O: make LDFLAGS=-static
make_no_libperl_O: make NO_LIBPERL=1
make_cscope_O: make cscope
make_no_newt_O: make NO_NEWT=1
make_no_demangle_O: make NO_DEMANGLE=1
make_install_prefix_slash_O: make install prefix=/tmp/krava/
make_no_ui_O: make NO_NEWT=1 NO_SLANG=1 NO_GTK2=1
make_pure_O: make
make_no_slang_O: make NO_SLANG=1
make_with_clangllvm_O: make LIBCLANGLLVM=1
make_doc_O: make doc
make_no_libnuma_O: make NO_LIBNUMA=1
make_tags_O: make tags
make_no_auxtrace_O: make NO_AUXTRACE=1
make_install_O: make install
make_with_babeltrace_O: make LIBBABELTRACE=1
make_no_libunwind_O: make NO_LIBUNWIND=1
make_util_map_o_O: make util/map.o
make_no_libpython_O: make NO_LIBPYTHON=1
make_no_libbionic_O: make NO_LIBBIONIC=1
make_util_pmu_bison_o_O: make util/pmu-bison.o
make_minimal_O: make NO_LIBPERL=1 NO_LIBPYTHON=1 NO_NEWT=1 NO_GTK2=1 NO_DEMANGLE=1 NO_LIBELF=1 NO_LIBUNWIND=1 NO_BACKTRACE=1 NO_LIBNUMA=1 NO_LIBAUDIT=1 NO_LIBBIONIC=1 NO_LIBDW_DWARF_UNWIND=1 NO_AUXTRACE=1 NO_LIBBPF=1 NO_LIBCRYPTO=1 NO_SDT=1 NO_JVMTI=1
OK
make: Leaving directory '/home/acme/git/perf/tools/perf'
$


2018-08-09 14:59:57

by Arnaldo Carvalho de Melo

Subject: [PATCH 01/44] perf trace: Associate vfs_getname()'ed pathname with fd returned from 'openat'

From: Arnaldo Carvalho de Melo <[email protected]>

When the vfs_getname() wannabe tracepoint is in place:

# perf probe -l
probe:vfs_getname (on getname_flags:73@acme/git/linux/fs/namei.c with pathname)
#

'perf trace' will use it to get the pathname when it is copied from userspace
to the kernel: right after syscalls:sys_enter_open the pathname is copied in
'probe:vfs_getname' and stashed somewhere, and then, at syscalls:sys_exit_open
time, if the 'open' return is not -1, i.e. a successful open syscall, that
pathname is associated with the return value, i.e. the fd.

We were not doing this for the 'openat' syscall, which would cause 'perf
trace' to fall back to using /proc to get the fd's path. Change it so that we
use what we got from probe:vfs_getname, reducing the cost of the 'openat'
beautification process, ditching the syscalls performed to read procfs state
and avoiding some possible races in the process.

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/builtin-trace.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 88561eed7950..2a85f5198da0 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -121,7 +121,6 @@ struct trace {
bool force;
bool vfs_getname;
int trace_pgfaults;
- int open_id;
};

struct tp_field {
@@ -805,12 +804,17 @@ static struct syscall_fmt *syscall_fmt__find(const char *name)
return bsearch(name, syscall_fmts, nmemb, sizeof(struct syscall_fmt), syscall_fmt__cmp);
}

+/*
+ * is_exit: is this "exit" or "exit_group"?
+ * is_open: is this "open" or "openat"? To associate the fd returned in sys_exit with the pathname in sys_enter.
+ */
struct syscall {
struct event_format *tp_format;
int nr_args;
+ bool is_exit;
+ bool is_open;
struct format_field *args;
const char *name;
- bool is_exit;
struct syscall_fmt *fmt;
struct syscall_arg_fmt *arg_fmt;
};
@@ -1299,6 +1303,7 @@ static int trace__read_syscall_info(struct trace *trace, int id)
}

sc->is_exit = !strcmp(name, "exit_group") || !strcmp(name, "exit");
+ sc->is_open = !strcmp(name, "open") || !strcmp(name, "openat");

return syscall__set_arg_fmts(sc);
}
@@ -1722,7 +1727,7 @@ static int trace__sys_exit(struct trace *trace, struct perf_evsel *evsel,

ret = perf_evsel__sc_tp_uint(evsel, ret, sample);

- if (id == trace->open_id && ret >= 0 && ttrace->filename.pending_open) {
+ if (sc->is_open && ret >= 0 && ttrace->filename.pending_open) {
trace__set_fd_pathname(thread, ret, ttrace->filename.name);
ttrace->filename.pending_open = false;
++trace->stats.vfs_getname;
@@ -3205,8 +3210,6 @@ int cmd_trace(int argc, const char **argv)
}
}

- trace.open_id = syscalltbl__id(trace.sctbl, "open");
-
err = target__validate(&trace.opts.target);
if (err) {
target__strerror(&trace.opts.target, err, bf, sizeof(bf));
--
2.14.4


2018-08-09 15:00:05

by Arnaldo Carvalho de Melo

Subject: [PATCH 02/44] perf trace: Use beautifiers on syscalls:sys_enter_ handlers

From: Arnaldo Carvalho de Melo <[email protected]>

We were using the beautifiers only when processing the
raw_syscalls:sys_enter events, but we can as well use them for the
syscalls:sys_enter_NAME events, as the layout is the same.

Some more tweaking is needed as we're processing them straight away, i.e.
there is no buffering in the sys_enter_NAME event to wait for things like
vfs_getname to provide pointer contents and then flush at sys_exit_NAME, so we
need to state in the syscall_arg that this is unbuffered: just print the
pointer values and beautify only the non-pointer syscall args.

This just shows an alternative way of processing tracepoints, one that we will
end up using when creating "tracepoint" payloads that already copy pointer
contents (or chunks of them, i.e. not the whole filename but just the end of
it, not the whole buffer for a read/write but just the start, etc.) directly
in the kernel using eBPF.

E.g.:

# perf trace -e syscalls:*enter*sleep,*sleep sleep 1
0.303 ( ): syscalls:sys_enter_nanosleep:rqtp: 0x7ffc93d5ecc0
0.305 (1000.229 ms): sleep/8746 nanosleep(rqtp: 0x7ffc93d5ecc0) = 0
# perf trace -e syscalls:*_*sleep,*sleep sleep 1
0.288 ( ): syscalls:sys_enter_nanosleep:rqtp: 0x7ffecde87e40
0.289 ( ): sleep/8748 nanosleep(rqtp: 0x7ffecde87e40) ...
1000.479 ( ): syscalls:sys_exit_nanosleep:0x0
0.289 (1000.208 ms): sleep/8748 ... [continued]: nanosleep()) = 0
#

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/builtin-trace.c | 47 +++++++++++++++++++++++++++++++++++++++++++---
1 file changed, 44 insertions(+), 3 deletions(-)

diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 2a85f5198da0..7336552c22cf 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -1666,6 +1666,44 @@ static int trace__sys_enter(struct trace *trace, struct perf_evsel *evsel,
return err;
}

+static int trace__fprintf_sys_enter(struct trace *trace, struct perf_evsel *evsel,
+ struct perf_sample *sample)
+{
+ struct format_field *field = perf_evsel__field(evsel, "__syscall_nr");
+ struct thread_trace *ttrace;
+ struct thread *thread;
+ struct syscall *sc;
+ char msg[1024];
+ int id, err = -1;
+ void *args;
+
+ if (field == NULL)
+ return -1;
+
+ id = format_field__intval(field, sample, evsel->needs_swap);
+ sc = trace__syscall_info(trace, evsel, id);
+
+ if (sc == NULL)
+ return -1;
+
+ thread = machine__findnew_thread(trace->host, sample->pid, sample->tid);
+ ttrace = thread__trace(thread, trace->output);
+ /*
+ * We need to get ttrace just to make sure it is there when syscall__scnprintf_args()
+ * and the rest of the beautifiers accessing it via struct syscall_arg touches it.
+ */
+ if (ttrace == NULL)
+ goto out_put;
+
+ args = sample->raw_data + field->offset + sizeof(u64); /* skip __syscall_nr, there is where args are */
+ syscall__scnprintf_args(sc, msg, sizeof(msg), args, trace, thread);
+ fprintf(trace->output, "%s", msg);
+ err = 0;
+out_put:
+ thread__put(thread);
+ return err;
+}
+
static int trace__resolve_callchain(struct trace *trace, struct perf_evsel *evsel,
struct perf_sample *sample,
struct callchain_cursor *cursor)
@@ -1964,9 +2002,12 @@ static int trace__event_handler(struct trace *trace, struct perf_evsel *evsel,
if (perf_evsel__is_bpf_output(evsel)) {
bpf_output__fprintf(trace, sample);
} else if (evsel->tp_format) {
- event_format__fprintf(evsel->tp_format, sample->cpu,
- sample->raw_data, sample->raw_size,
- trace->output);
+ if (strncmp(evsel->tp_format->name, "sys_enter_", 10) ||
+ trace__fprintf_sys_enter(trace, evsel, sample)) {
+ event_format__fprintf(evsel->tp_format, sample->cpu,
+ sample->raw_data, sample->raw_size,
+ trace->output);
+ }
}

fprintf(trace->output, "\n");
--
2.14.4


2018-08-09 15:00:23

by Arnaldo Carvalho de Melo

Subject: [PATCH 03/44] perf trace: Rename some syscall_tp methods to raw_syscall

From: Arnaldo Carvalho de Melo <[email protected]>

Because raw_syscalls have the field for the syscall number as 'id' while
the syscalls:sys_{enter,exit}_NAME have it as __syscall_nr...

Since we want to support both, to be able to enable just a
syscalls:sys_{enter,exit}_NAME event instead of asking for
raw_syscalls:sys_{enter,exit} plus filters, make the method names for
each kind of tracepoint more explicit.

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/builtin-trace.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 7336552c22cf..039f94467968 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -239,7 +239,7 @@ static void perf_evsel__delete_priv(struct perf_evsel *evsel)
perf_evsel__delete(evsel);
}

-static int perf_evsel__init_syscall_tp(struct perf_evsel *evsel, void *handler)
+static int perf_evsel__init_raw_syscall_tp(struct perf_evsel *evsel, void *handler)
{
evsel->priv = malloc(sizeof(struct syscall_tp));
if (evsel->priv != NULL) {
@@ -257,7 +257,7 @@ static int perf_evsel__init_syscall_tp(struct perf_evsel *evsel, void *handler)
return -ENOENT;
}

-static struct perf_evsel *perf_evsel__syscall_newtp(const char *direction, void *handler)
+static struct perf_evsel *perf_evsel__raw_syscall_newtp(const char *direction, void *handler)
{
struct perf_evsel *evsel = perf_evsel__newtp("raw_syscalls", direction);

@@ -268,7 +268,7 @@ static struct perf_evsel *perf_evsel__syscall_newtp(const char *direction, void
if (IS_ERR(evsel))
return NULL;

- if (perf_evsel__init_syscall_tp(evsel, handler))
+ if (perf_evsel__init_raw_syscall_tp(evsel, handler))
goto out_delete;

return evsel;
@@ -2288,14 +2288,14 @@ static int trace__add_syscall_newtp(struct trace *trace)
struct perf_evlist *evlist = trace->evlist;
struct perf_evsel *sys_enter, *sys_exit;

- sys_enter = perf_evsel__syscall_newtp("sys_enter", trace__sys_enter);
+ sys_enter = perf_evsel__raw_syscall_newtp("sys_enter", trace__sys_enter);
if (sys_enter == NULL)
goto out;

if (perf_evsel__init_sc_tp_ptr_field(sys_enter, args))
goto out_delete_sys_enter;

- sys_exit = perf_evsel__syscall_newtp("sys_exit", trace__sys_exit);
+ sys_exit = perf_evsel__raw_syscall_newtp("sys_exit", trace__sys_exit);
if (sys_exit == NULL)
goto out_delete_sys_enter;

@@ -2717,7 +2717,7 @@ static int trace__replay(struct trace *trace)
"syscalls:sys_enter");

if (evsel &&
- (perf_evsel__init_syscall_tp(evsel, trace__sys_enter) < 0 ||
+ (perf_evsel__init_raw_syscall_tp(evsel, trace__sys_enter) < 0 ||
perf_evsel__init_sc_tp_ptr_field(evsel, args))) {
pr_err("Error during initialize raw_syscalls:sys_enter event\n");
goto out;
@@ -2729,7 +2729,7 @@ static int trace__replay(struct trace *trace)
evsel = perf_evlist__find_tracepoint_by_name(session->evlist,
"syscalls:sys_exit");
if (evsel &&
- (perf_evsel__init_syscall_tp(evsel, trace__sys_exit) < 0 ||
+ (perf_evsel__init_raw_syscall_tp(evsel, trace__sys_exit) < 0 ||
perf_evsel__init_sc_tp_uint_field(evsel, ret))) {
pr_err("Error during initialize raw_syscalls:sys_exit event\n");
goto out;
--
2.14.4


2018-08-09 15:00:27

by Arnaldo Carvalho de Melo

Subject: [PATCH 04/44] perf trace: Allow setting up a syscall_tp struct without a format_field

From: Arnaldo Carvalho de Melo <[email protected]>

To avoid having to ask libtraceevent to find a field by name when
handling each tracepoint event, we setup a struct syscall_tp with
a tp_field struct having an extractor function + the offset for the
"id", "args" and "ret" raw_syscalls:sys_{enter,exit} tracepoints.

Now that we want to do the same with the syscalls:sys_{enter,exit}_NAME
individual syscall tracepoints, where we have "id" as "__syscall_nr" and
"args" as the actual series of per-syscall parameters, we need more
flexibility from the routines that set up these pre-looked-up syscall
tracepoint arg fields.

The next cset will use it.

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/builtin-trace.c | 22 +++++++++++++++-------
1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 039f94467968..7fca844ced0b 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -156,13 +156,11 @@ TP_UINT_FIELD__SWAPPED(16);
TP_UINT_FIELD__SWAPPED(32);
TP_UINT_FIELD__SWAPPED(64);

-static int tp_field__init_uint(struct tp_field *field,
- struct format_field *format_field,
- bool needs_swap)
+static int __tp_field__init_uint(struct tp_field *field, int size, int offset, bool needs_swap)
{
- field->offset = format_field->offset;
+ field->offset = offset;

- switch (format_field->size) {
+ switch (size) {
case 1:
field->integer = tp_field__u8;
break;
@@ -182,18 +180,28 @@ static int tp_field__init_uint(struct tp_field *field,
return 0;
}

+static int tp_field__init_uint(struct tp_field *field, struct format_field *format_field, bool needs_swap)
+{
+ return __tp_field__init_uint(field, format_field->size, format_field->offset, needs_swap);
+}
+
static void *tp_field__ptr(struct tp_field *field, struct perf_sample *sample)
{
return sample->raw_data + field->offset;
}

-static int tp_field__init_ptr(struct tp_field *field, struct format_field *format_field)
+static int __tp_field__init_ptr(struct tp_field *field, int offset)
{
- field->offset = format_field->offset;
+ field->offset = offset;
field->pointer = tp_field__ptr;
return 0;
}

+static int tp_field__init_ptr(struct tp_field *field, struct format_field *format_field)
+{
+ return __tp_field__init_ptr(field, format_field->offset);
+}
+
struct syscall_tp {
struct tp_field id;
union {
--
2.14.4


2018-08-09 15:00:46

by Arnaldo Carvalho de Melo

Subject: [PATCH 06/44] perf trace: Use perf_evsel__sc_tp_{uint,ptr} for "id"/"args" handling syscalls:* events

From: Arnaldo Carvalho de Melo <[email protected]>

Now it looks just about the same as the trace__sys_{enter,exit} handlers.

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/builtin-trace.c | 13 +++----------
1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 41799596c045..7232a7302580 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -1693,20 +1693,13 @@ static int trace__sys_enter(struct trace *trace, struct perf_evsel *evsel,
static int trace__fprintf_sys_enter(struct trace *trace, struct perf_evsel *evsel,
struct perf_sample *sample)
{
- struct format_field *field = perf_evsel__field(evsel, "__syscall_nr");
struct thread_trace *ttrace;
struct thread *thread;
- struct syscall *sc;
+ int id = perf_evsel__sc_tp_uint(evsel, id, sample), err = -1;
+ struct syscall *sc = trace__syscall_info(trace, evsel, id);
char msg[1024];
- int id, err = -1;
void *args;

- if (field == NULL)
- return -1;
-
- id = format_field__intval(field, sample, evsel->needs_swap);
- sc = trace__syscall_info(trace, evsel, id);
-
if (sc == NULL)
return -1;

@@ -1719,7 +1712,7 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct perf_evsel *evse
if (ttrace == NULL)
goto out_put;

- args = sample->raw_data + field->offset + sizeof(u64); /* skip __syscall_nr, there is where args are */
+ args = perf_evsel__sc_tp_ptr(evsel, args, sample);
syscall__scnprintf_args(sc, msg, sizeof(msg), args, trace, thread);
fprintf(trace->output, "%s", msg);
err = 0;
--
2.14.4


2018-08-09 15:00:51

by Arnaldo Carvalho de Melo

Subject: [PATCH 08/44] perf report: Add raw report support for s390 auxiliary trace

From: Thomas Richter <[email protected]>

Add support for dumping s390 auxiliary trace data.

Use 'perf record -e rbd000' to create the perf.data file. The event
also has the symbolic name SF_CYCLES_BASIC_DIAG, using 'perf record -e
SF_CYCLES_BASIC_DIAG' is equivalent.

Use 'perf report -D' to display the auxiliary trace data.

Output before:

0 0 0x25a66 [0x30]: PERF_RECORD_AUXTRACE size: 0x40000
offset: 0 ref: 0 idx: 4 tid: -1 cpu: 4
Nothing else

Output after:

0 0 0x25a66 [0x30]: PERF_RECORD_AUXTRACE size: 0x40000
offset: 0 ref: 0 idx: 4 tid: -1 cpu: 4
.
. ... s390 AUX data: size 262144 bytes
[00000000] Basic Def:0001 Inst:0000 TW AS:3 ASN:0xffff IA:0x0000000000c2f1bc
CL:1 HPP:0x8000000000000000 GPP:000000000000000000
[0x000020] Diag Def:8005
[0x0000bf] Basic Def:0001 Inst:0000 TW AS:3 ASN:0xffff IA:0x0000000000c2f1bc
CL:1 HPP:0x8000000000000000 GPP:000000000000000000
[0x0000df] Diag Def:8005
[0x00017e] Basic Def:0001 Inst:0000 TW AS:3 ASN:0xffff IA:0x0000000000c2f1bc
CL:1 HPP:0x8000000000000000 GPP:000000000000000000
....
[0x000fc0] Trailer F T bsdes:32 dsdes:159 Overflow:0 Time:0xd4ab59a8450fa108
C:1 TOD:0xd4ab4ec98ceb3832 1:0x8000000000000000 2:0xd4ab4ec98ceb3832

This output is shown for every sampled data block. The
output contains the

- basic-sampling data entry

- diagnostic-sampling data entry

- trailer entry

The basic-sampling entry and diagnostic-sampling entry sizes can be
extracted from the trailer entries in the SDB. On older hardware these
values (bsdes and dsdes in the trailer entry) are reserved and zero; in
that case hard-coded values based on the s390 machine type are used.

Signed-off-by: Thomas Richter <[email protected]>
Reviewed-by: Hendrik Brueckner <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Michael Ellerman <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Link: http://lkml.kernel.org/r/[email protected]
[ Merged a fix for a 'type punned' problem reported by Michael Ellerman, see the last Link tag. ]
[ Removed __packed from two structs, they're already naturally packed and having that ]
[ attribute breaks the build in gcc 8.1.1 mips, 4.4.7 x86_64, 7.1.1 ARCompact ISA, etc. ]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/s390-cpumsf-kernel.h | 71 ++++++++++
tools/perf/util/s390-cpumsf.c | 247 ++++++++++++++++++++++++++++++++++-
2 files changed, 317 insertions(+), 1 deletion(-)
create mode 100644 tools/perf/util/s390-cpumsf-kernel.h

diff --git a/tools/perf/util/s390-cpumsf-kernel.h b/tools/perf/util/s390-cpumsf-kernel.h
new file mode 100644
index 000000000000..de8c7ad0eca8
--- /dev/null
+++ b/tools/perf/util/s390-cpumsf-kernel.h
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Auxtrace support for s390 CPU measurement sampling facility
+ *
+ * Copyright IBM Corp. 2018
+ * Author(s): Hendrik Brueckner <[email protected]>
+ * Thomas Richter <[email protected]>
+ */
+#ifndef S390_CPUMSF_KERNEL_H
+#define S390_CPUMSF_KERNEL_H
+
+#define S390_CPUMSF_PAGESZ 4096 /* Size of sample block units */
+#define S390_CPUMSF_DIAG_DEF_FIRST 0x8001 /* Diagnostic entry lowest id */
+
+struct hws_basic_entry {
+ unsigned int def:16; /* 0-15 Data Entry Format */
+ unsigned int R:4; /* 16-19 reserved */
+ unsigned int U:4; /* 20-23 Number of unique instruct. */
+ unsigned int z:2; /* zeros */
+ unsigned int T:1; /* 26 PSW DAT mode */
+ unsigned int W:1; /* 27 PSW wait state */
+ unsigned int P:1; /* 28 PSW Problem state */
+ unsigned int AS:2; /* 29-30 PSW address-space control */
+ unsigned int I:1; /* 31 entry valid or invalid */
+ unsigned int CL:2; /* 32-33 Configuration Level */
+ unsigned int:14;
+ unsigned int prim_asn:16; /* primary ASN */
+ unsigned long long ia; /* Instruction Address */
+ unsigned long long gpp; /* Guest Program Parameter */
+ unsigned long long hpp; /* Host Program Parameter */
+};
+
+struct hws_diag_entry {
+ unsigned int def:16; /* 0-15 Data Entry Format */
+ unsigned int R:15; /* 16-19 and 20-30 reserved */
+ unsigned int I:1; /* 31 entry valid or invalid */
+ u8 data[]; /* Machine-dependent sample data */
+};
+
+struct hws_combined_entry {
+ struct hws_basic_entry basic; /* Basic-sampling data entry */
+ struct hws_diag_entry diag; /* Diagnostic-sampling data entry */
+};
+
+struct hws_trailer_entry {
+ union {
+ struct {
+ unsigned int f:1; /* 0 - Block Full Indicator */
+ unsigned int a:1; /* 1 - Alert request control */
+ unsigned int t:1; /* 2 - Timestamp format */
+ unsigned int:29; /* 3 - 31: Reserved */
+ unsigned int bsdes:16; /* 32-47: size of basic SDE */
+ unsigned int dsdes:16; /* 48-63: size of diagnostic SDE */
+ };
+ unsigned long long flags; /* 0 - 64: All indicators */
+ };
+ unsigned long long overflow; /* 64 - sample Overflow count */
+ unsigned char timestamp[16]; /* 16 - 31 timestamp */
+ unsigned long long reserved1; /* 32 -Reserved */
+ unsigned long long reserved2; /* */
+ union { /* 48 - reserved for programming use */
+ struct {
+ unsigned long long clock_base:1; /* in progusage2 */
+ unsigned long long progusage1:63;
+ unsigned long long progusage2;
+ };
+ unsigned long long progusage[2];
+ };
+};
+
+#endif
diff --git a/tools/perf/util/s390-cpumsf.c b/tools/perf/util/s390-cpumsf.c
index e9a5ea21dbbf..14728b0834c6 100644
--- a/tools/perf/util/s390-cpumsf.c
+++ b/tools/perf/util/s390-cpumsf.c
@@ -26,6 +26,7 @@
#include "debug.h"
#include "auxtrace.h"
#include "s390-cpumsf.h"
+#include "s390-cpumsf-kernel.h"

struct s390_cpumsf {
struct auxtrace auxtrace;
@@ -35,8 +36,213 @@ struct s390_cpumsf {
struct machine *machine;
u32 auxtrace_type;
u32 pmu_type;
+ u16 machine_type;
};

+/* Display s390 CPU measurement facility basic-sampling data entry */
+static bool s390_cpumsf_basic_show(const char *color, size_t pos,
+ struct hws_basic_entry *basic)
+{
+ if (basic->def != 1) {
+ pr_err("Invalid AUX trace basic entry [%#08zx]\n", pos);
+ return false;
+ }
+ color_fprintf(stdout, color, " [%#08zx] Basic Def:%04x Inst:%#04x"
+ " %c%c%c%c AS:%d ASN:%#04x IA:%#018llx\n"
+ "\t\tCL:%d HPP:%#018llx GPP:%#018llx\n",
+ pos, basic->def, basic->U,
+ basic->T ? 'T' : ' ',
+ basic->W ? 'W' : ' ',
+ basic->P ? 'P' : ' ',
+ basic->I ? 'I' : ' ',
+ basic->AS, basic->prim_asn, basic->ia, basic->CL,
+ basic->hpp, basic->gpp);
+ return true;
+}
+
+/* Display s390 CPU measurement facility diagnostic-sampling data entry */
+static bool s390_cpumsf_diag_show(const char *color, size_t pos,
+ struct hws_diag_entry *diag)
+{
+ if (diag->def < S390_CPUMSF_DIAG_DEF_FIRST) {
+ pr_err("Invalid AUX trace diagnostic entry [%#08zx]\n", pos);
+ return false;
+ }
+ color_fprintf(stdout, color, " [%#08zx] Diag Def:%04x %c\n",
+ pos, diag->def, diag->I ? 'I' : ' ');
+ return true;
+}
+
+/* Return TOD timestamp contained in an trailer entry */
+static unsigned long long trailer_timestamp(struct hws_trailer_entry *te)
+{
+ /* te->t set: TOD in STCKE format, bytes 8-15
+ * to->t not set: TOD in STCK format, bytes 0-7
+ */
+ unsigned long long ts;
+
+ memcpy(&ts, &te->timestamp[te->t], sizeof(ts));
+ return ts;
+}
+
+/* Display s390 CPU measurement facility trailer entry */
+static bool s390_cpumsf_trailer_show(const char *color, size_t pos,
+ struct hws_trailer_entry *te)
+{
+ if (te->bsdes != sizeof(struct hws_basic_entry)) {
+ pr_err("Invalid AUX trace trailer entry [%#08zx]\n", pos);
+ return false;
+ }
+ color_fprintf(stdout, color, " [%#08zx] Trailer %c%c%c bsdes:%d"
+ " dsdes:%d Overflow:%lld Time:%#llx\n"
+ "\t\tC:%d TOD:%#lx 1:%#llx 2:%#llx\n",
+ pos,
+ te->f ? 'F' : ' ',
+ te->a ? 'A' : ' ',
+ te->t ? 'T' : ' ',
+ te->bsdes, te->dsdes, te->overflow,
+ trailer_timestamp(te), te->clock_base, te->progusage2,
+ te->progusage[0], te->progusage[1]);
+ return true;
+}
+
+/* Test a sample data block. It must be 4KB or a multiple thereof in size and
+ * 4KB page aligned. Each sample data page has a trailer entry at the
+ * end which contains the sample entry data sizes.
+ *
+ * Return true if the sample data block passes the checks and set the
+ * basic set entry size and diagnostic set entry size.
+ *
+ * Return false on failure.
+ *
+ * Note: Old hardware does not set the basic or diagnostic entry sizes
+ * in the trailer entry. Use the type number instead.
+ */
+static bool s390_cpumsf_validate(int machine_type,
+ unsigned char *buf, size_t len,
+ unsigned short *bsdes,
+ unsigned short *dsdes)
+{
+ struct hws_basic_entry *basic = (struct hws_basic_entry *)buf;
+ struct hws_trailer_entry *te;
+
+ *dsdes = *bsdes = 0;
+ if (len & (S390_CPUMSF_PAGESZ - 1)) /* Illegal size */
+ return false;
+ if (basic->def != 1) /* No basic set entry, must be first */
+ return false;
+ /* Check for trailer entry at end of SDB */
+ te = (struct hws_trailer_entry *)(buf + S390_CPUMSF_PAGESZ
+ - sizeof(*te));
+ *bsdes = te->bsdes;
+ *dsdes = te->dsdes;
+ if (!te->bsdes && !te->dsdes) {
+ /* Very old hardware, use CPUID */
+ switch (machine_type) {
+ case 2097:
+ case 2098:
+ *dsdes = 64;
+ *bsdes = 32;
+ break;
+ case 2817:
+ case 2818:
+ *dsdes = 74;
+ *bsdes = 32;
+ break;
+ case 2827:
+ case 2828:
+ *dsdes = 85;
+ *bsdes = 32;
+ break;
+ default:
+ /* Illegal trailer entry */
+ return false;
+ }
+ }
+ return true;
+}
+
+/* Return true if there is room for another entry */
+static bool s390_cpumsf_reached_trailer(size_t entry_sz, size_t pos)
+{
+ size_t payload = S390_CPUMSF_PAGESZ - sizeof(struct hws_trailer_entry);
+
+ if (payload - (pos & (S390_CPUMSF_PAGESZ - 1)) < entry_sz)
+ return false;
+ return true;
+}
+
+/* Dump an auxiliary buffer. These buffers are multiple of
+ * 4KB SDB pages.
+ */
+static void s390_cpumsf_dump(struct s390_cpumsf *sf,
+ unsigned char *buf, size_t len)
+{
+ const char *color = PERF_COLOR_BLUE;
+ struct hws_basic_entry *basic;
+ struct hws_diag_entry *diag;
+ size_t pos = 0;
+ unsigned short bsdes, dsdes;
+
+ color_fprintf(stdout, color,
+ ". ... s390 AUX data: size %zu bytes\n",
+ len);
+
+ if (!s390_cpumsf_validate(sf->machine_type, buf, len, &bsdes,
+ &dsdes)) {
+ pr_err("Invalid AUX trace data block size:%zu"
+ " (type:%d bsdes:%hd dsdes:%hd)\n",
+ len, sf->machine_type, bsdes, dsdes);
+ return;
+ }
+
+ /* s390 kernel always returns 4KB blocks fully occupied,
+ * no partially filled SDBs.
+ */
+ while (pos < len) {
+ /* Handle Basic entry */
+ basic = (struct hws_basic_entry *)(buf + pos);
+ if (s390_cpumsf_basic_show(color, pos, basic))
+ pos += bsdes;
+ else
+ return;
+
+ /* Handle Diagnostic entry */
+ diag = (struct hws_diag_entry *)(buf + pos);
+ if (s390_cpumsf_diag_show(color, pos, diag))
+ pos += dsdes;
+ else
+ return;
+
+ /* Check for trailer entry */
+ if (!s390_cpumsf_reached_trailer(bsdes + dsdes, pos)) {
+ /* Show trailer entry */
+ struct hws_trailer_entry te;
+
+ pos = (pos + S390_CPUMSF_PAGESZ)
+ & ~(S390_CPUMSF_PAGESZ - 1);
+ pos -= sizeof(te);
+ memcpy(&te, buf + pos, sizeof(te));
+ /* Set descriptor sizes in case of old hardware
+ * where these values are not set.
+ */
+ te.bsdes = bsdes;
+ te.dsdes = dsdes;
+ if (s390_cpumsf_trailer_show(color, pos, &te))
+ pos += sizeof(te);
+ else
+ return;
+ }
+ }
+}
+
+static void s390_cpumsf_dump_event(struct s390_cpumsf *sf, unsigned char *buf,
+ size_t len)
+{
+ printf(".\n");
+ s390_cpumsf_dump(sf, buf, len);
+}
+
static int
s390_cpumsf_process_event(struct perf_session *session __maybe_unused,
union perf_event *event __maybe_unused,
@@ -47,10 +253,40 @@ s390_cpumsf_process_event(struct perf_session *session __maybe_unused,
}

static int
-s390_cpumsf_process_auxtrace_event(struct perf_session *session __maybe_unused,
+s390_cpumsf_process_auxtrace_event(struct perf_session *session,
union perf_event *event __maybe_unused,
struct perf_tool *tool __maybe_unused)
{
+ struct s390_cpumsf *sf = container_of(session->auxtrace,
+ struct s390_cpumsf,
+ auxtrace);
+
+ int fd = perf_data__fd(session->data);
+ struct auxtrace_buffer *buffer;
+ off_t data_offset;
+ int err;
+
+ if (perf_data__is_pipe(session->data)) {
+ data_offset = 0;
+ } else {
+ data_offset = lseek(fd, 0, SEEK_CUR);
+ if (data_offset == -1)
+ return -errno;
+ }
+
+ err = auxtrace_queues__add_event(&sf->queues, session, event,
+ data_offset, &buffer);
+ if (err)
+ return err;
+
+ /* Dump here after copying piped trace out of the pipe */
+ if (dump_trace) {
+ if (auxtrace_buffer__get_data(buffer, fd)) {
+ s390_cpumsf_dump_event(sf, buffer->data,
+ buffer->size);
+ auxtrace_buffer__put_data(buffer);
+ }
+ }
return 0;
}

@@ -85,6 +321,14 @@ static void s390_cpumsf_free(struct perf_session *session)
free(sf);
}

+static int s390_cpumsf_get_type(const char *cpuid)
+{
+ int ret, family = 0;
+
+ ret = sscanf(cpuid, "%*[^,],%u", &family);
+ return (ret == 1) ? family : 0;
+}
+
int s390_cpumsf_process_auxtrace_info(union perf_event *event,
struct perf_session *session)
{
@@ -107,6 +351,7 @@ int s390_cpumsf_process_auxtrace_info(union perf_event *event,
sf->machine = &session->machines.host; /* No kvm support */
sf->auxtrace_type = auxtrace_info->type;
sf->pmu_type = PERF_TYPE_RAW;
+ sf->machine_type = s390_cpumsf_get_type(session->evlist->env->cpuid);

sf->auxtrace.process_event = s390_cpumsf_process_event;
sf->auxtrace.process_auxtrace_event = s390_cpumsf_process_auxtrace_event;
--
2.14.4


2018-08-09 15:00:51

by Arnaldo Carvalho de Melo

Subject: [PATCH 07/44] perf auxtrace: Support for perf report -D for s390

From: Thomas Richter <[email protected]>

Add initial support for s390 auxiliary traces using the CPU-Measurement
Sampling Facility.

Support and ignore PERF_RECORD_AUXTRACE_INFO records in the perf data
file. Later patches will show the contents of the auxiliary traces.

Setup the auxtrace queues and data structures for s390. A raw dump of
the perf.data file now does not show an error when an auxtrace event is
encountered.

Output before:

[root@s35lp76 perf]# ./perf report -D -i perf.data.auxtrace
0x128 [0x10]: failed to process type: 70
Error:
failed to process sample

0x128 [0x10]: event: 70
.
. ... raw event: size 16 bytes
. 0000: 00 00 00 46 00 00 00 10 00 00 00 00 00 00 00 00 ...F............

0x128 [0x10]: PERF_RECORD_AUXTRACE_INFO type: 0
[root@s35lp76 perf]#

Output after:

# ./perf report -D -i perf.data.auxtrace |fgrep PERF_RECORD_AUXTRACE
0 0 0x128 [0x10]: PERF_RECORD_AUXTRACE_INFO type: 5
0 0 0x25a66 [0x30]: PERF_RECORD_AUXTRACE size: 0x40000
offset: 0 ref: 0 idx: 4 tid: -1 cpu: 4
....

Additional notes about the underlying hardware and software
implementation, provided by Hendrik Brueckner (see Link: below).

=============================================================================

The CPU-Measurement Facility (CPU-MF) provides a set of functions to obtain
performance information on the mainframe. Basically, it was introduced
with System z10 years ago for the z/Architecture, that is, 64-bit.
For Linux, there are two facilities of interest, the counter facility and the
sampling facility. The counter facility provides hardware counters for
instructions, cycles, crypto activities, and many more.

The sampling facility is a hardware sampler that when started will write
samples at a particular interval into a sampling buffer. At some point,
for example, if a sample block is full, it generates an interrupt to collect
samples (while the sampler continues to run).

A few years ago, I started to provide a perf PMU to use the counter
and sampling facilities. Recently, the device driver was updated to also
"export" the sampling buffer into the AUX area. Thomas has now completed the
related perf work to interpret and process this AUX data.

If people are more interested in the sampling facility, they can have a
look into:

- The Load-Program-Parameter and the CPU-Measurement Facilities, SA23-2260-05
http://www-01.ibm.com/support/docview.wss?uid=isg26fcd1cc32246f4c8852574ce0044734a

and to learn how to use it for Linux on Z, have a look at chapter 54,
"Using the CPU-measurement facilities" in the:

- Device Drivers, Features, and Commands, SC33-8411-34
http://public.dhe.ibm.com/software/dw/linux390/docu/l416dd34.pdf

=============================================================================

Signed-off-by: Thomas Richter <[email protected]>
Reviewed-by: Hendrik Brueckner <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Cc: Heiko Carstens <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/arch/s390/util/auxtrace.c | 1 +
tools/perf/util/Build | 1 +
tools/perf/util/auxtrace.c | 3 +
tools/perf/util/auxtrace.h | 1 +
tools/perf/util/s390-cpumsf.c | 123 +++++++++++++++++++++++++++++++++++
tools/perf/util/s390-cpumsf.h | 21 ++++++
6 files changed, 150 insertions(+)
create mode 100644 tools/perf/util/s390-cpumsf.c
create mode 100644 tools/perf/util/s390-cpumsf.h

diff --git a/tools/perf/arch/s390/util/auxtrace.c b/tools/perf/arch/s390/util/auxtrace.c
index 3afe8256eff2..44c857388897 100644
--- a/tools/perf/arch/s390/util/auxtrace.c
+++ b/tools/perf/arch/s390/util/auxtrace.c
@@ -30,6 +30,7 @@ cpumsf_info_fill(struct auxtrace_record *itr __maybe_unused,
struct auxtrace_info_event *auxtrace_info __maybe_unused,
size_t priv_size __maybe_unused)
{
+ auxtrace_info->type = PERF_AUXTRACE_S390_CPUMSF;
return 0;
}

diff --git a/tools/perf/util/Build b/tools/perf/util/Build
index b604ef334dc9..7efe15b9618d 100644
--- a/tools/perf/util/Build
+++ b/tools/perf/util/Build
@@ -87,6 +87,7 @@ libperf-$(CONFIG_AUXTRACE) += intel-pt.o
libperf-$(CONFIG_AUXTRACE) += intel-bts.o
libperf-$(CONFIG_AUXTRACE) += arm-spe.o
libperf-$(CONFIG_AUXTRACE) += arm-spe-pkt-decoder.o
+libperf-$(CONFIG_AUXTRACE) += s390-cpumsf.o

ifdef CONFIG_LIBOPENCSD
libperf-$(CONFIG_AUXTRACE) += cs-etm.o
diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
index d056447520a2..ae8c37b219c9 100644
--- a/tools/perf/util/auxtrace.c
+++ b/tools/perf/util/auxtrace.c
@@ -56,6 +56,7 @@
#include "intel-pt.h"
#include "intel-bts.h"
#include "arm-spe.h"
+#include "s390-cpumsf.h"

#include "sane_ctype.h"
#include "symbol/kallsyms.h"
@@ -920,6 +921,8 @@ int perf_event__process_auxtrace_info(struct perf_tool *tool __maybe_unused,
return arm_spe_process_auxtrace_info(event, session);
case PERF_AUXTRACE_CS_ETM:
return cs_etm__process_auxtrace_info(event, session);
+ case PERF_AUXTRACE_S390_CPUMSF:
+ return s390_cpumsf_process_auxtrace_info(event, session);
case PERF_AUXTRACE_UNKNOWN:
default:
return -EINVAL;
diff --git a/tools/perf/util/auxtrace.h b/tools/perf/util/auxtrace.h
index e731f55da072..71fc3bd74299 100644
--- a/tools/perf/util/auxtrace.h
+++ b/tools/perf/util/auxtrace.h
@@ -44,6 +44,7 @@ enum auxtrace_type {
PERF_AUXTRACE_INTEL_BTS,
PERF_AUXTRACE_CS_ETM,
PERF_AUXTRACE_ARM_SPE,
+ PERF_AUXTRACE_S390_CPUMSF,
};

enum itrace_period_type {
diff --git a/tools/perf/util/s390-cpumsf.c b/tools/perf/util/s390-cpumsf.c
new file mode 100644
index 000000000000..e9a5ea21dbbf
--- /dev/null
+++ b/tools/perf/util/s390-cpumsf.c
@@ -0,0 +1,123 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corp. 2018
+ * Auxtrace support for s390 CPU-Measurement Sampling Facility
+ *
+ * Author(s): Thomas Richter <[email protected]>
+ */
+
+#include <endian.h>
+#include <errno.h>
+#include <byteswap.h>
+#include <inttypes.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/bitops.h>
+#include <linux/log2.h>
+
+#include "cpumap.h"
+#include "color.h"
+#include "evsel.h"
+#include "evlist.h"
+#include "machine.h"
+#include "session.h"
+#include "util.h"
+#include "thread.h"
+#include "debug.h"
+#include "auxtrace.h"
+#include "s390-cpumsf.h"
+
+struct s390_cpumsf {
+ struct auxtrace auxtrace;
+ struct auxtrace_queues queues;
+ struct auxtrace_heap heap;
+ struct perf_session *session;
+ struct machine *machine;
+ u32 auxtrace_type;
+ u32 pmu_type;
+};
+
+static int
+s390_cpumsf_process_event(struct perf_session *session __maybe_unused,
+ union perf_event *event __maybe_unused,
+ struct perf_sample *sample __maybe_unused,
+ struct perf_tool *tool __maybe_unused)
+{
+ return 0;
+}
+
+static int
+s390_cpumsf_process_auxtrace_event(struct perf_session *session __maybe_unused,
+ union perf_event *event __maybe_unused,
+ struct perf_tool *tool __maybe_unused)
+{
+ return 0;
+}
+
+static int s390_cpumsf_flush(struct perf_session *session __maybe_unused,
+ struct perf_tool *tool __maybe_unused)
+{
+ return 0;
+}
+
+static void s390_cpumsf_free_events(struct perf_session *session)
+{
+ struct s390_cpumsf *sf = container_of(session->auxtrace,
+ struct s390_cpumsf,
+ auxtrace);
+ struct auxtrace_queues *queues = &sf->queues;
+ unsigned int i;
+
+ for (i = 0; i < queues->nr_queues; i++)
+ zfree(&queues->queue_array[i].priv);
+ auxtrace_queues__free(queues);
+}
+
+static void s390_cpumsf_free(struct perf_session *session)
+{
+ struct s390_cpumsf *sf = container_of(session->auxtrace,
+ struct s390_cpumsf,
+ auxtrace);
+
+ auxtrace_heap__free(&sf->heap);
+ s390_cpumsf_free_events(session);
+ session->auxtrace = NULL;
+ free(sf);
+}
+
+int s390_cpumsf_process_auxtrace_info(union perf_event *event,
+ struct perf_session *session)
+{
+ struct auxtrace_info_event *auxtrace_info = &event->auxtrace_info;
+ struct s390_cpumsf *sf;
+ int err;
+
+ if (auxtrace_info->header.size < sizeof(struct auxtrace_info_event))
+ return -EINVAL;
+
+ sf = zalloc(sizeof(struct s390_cpumsf));
+ if (sf == NULL)
+ return -ENOMEM;
+
+ err = auxtrace_queues__init(&sf->queues);
+ if (err)
+ goto err_free;
+
+ sf->session = session;
+ sf->machine = &session->machines.host; /* No kvm support */
+ sf->auxtrace_type = auxtrace_info->type;
+ sf->pmu_type = PERF_TYPE_RAW;
+
+ sf->auxtrace.process_event = s390_cpumsf_process_event;
+ sf->auxtrace.process_auxtrace_event = s390_cpumsf_process_auxtrace_event;
+ sf->auxtrace.flush_events = s390_cpumsf_flush;
+ sf->auxtrace.free_events = s390_cpumsf_free_events;
+ sf->auxtrace.free = s390_cpumsf_free;
+ session->auxtrace = &sf->auxtrace;
+
+ return 0;
+
+err_free:
+ free(sf);
+ return err;
+}
diff --git a/tools/perf/util/s390-cpumsf.h b/tools/perf/util/s390-cpumsf.h
new file mode 100644
index 000000000000..fb64d100555c
--- /dev/null
+++ b/tools/perf/util/s390-cpumsf.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright IBM Corp. 2018
+ * Auxtrace support for s390 CPU-Measurement Sampling Facility
+ *
+ * Author(s): Thomas Richter <[email protected]>
+ */
+
+#ifndef INCLUDE__PERF_S390_CPUMSF_H
+#define INCLUDE__PERF_S390_CPUMSF_H
+
+union perf_event;
+struct perf_session;
+struct perf_pmu;
+
+struct auxtrace_record *
+s390_cpumsf_recording_init(int *err, struct perf_pmu *s390_cpumsf_pmu);
+
+int s390_cpumsf_process_auxtrace_info(union perf_event *event,
+ struct perf_session *session);
+#endif
--
2.14.4


2018-08-09 15:00:57

by Arnaldo Carvalho de Melo

Subject: [PATCH 09/44] perf report: Add GUI report support for s390 auxiliary trace

From: Thomas Richter <[email protected]>

Add 'perf report' support for s390 auxiliary trace data.

Use 'perf record -e rbd000 -- ls' to create the perf.data file.

Use 'perf report' to display the auxiliary trace data.

Output before:

[root@s35lp76 perf]# ./perf report --stdio
0x128 [0x10]: failed to process type: 70
Error:
failed to process sample
[root@s35lp76 perf]#

Output after:

[root@s35lp76 perf]# ./perf report --stdio

18.21% 18.21% ls [kernel.kallsyms] [k] ftrace_likely_update
9.52% 9.52% ls [kernel.kallsyms] [k] lock_acquire
9.38% 9.38% ls [kernel.kallsyms] [k] lock_release
3.45% 3.45% ls [kernel.kallsyms] [k] lock_acquired
2.88% 2.88% ls [kernel.kallsyms] [k] link_path_walk
2.63% 2.63% ls [kernel.kallsyms] [k] __d_lookup
2.38% 2.38% ls [kernel.kallsyms] [k] __d_lookup_rcu
2.04% 2.04% ls [kernel.kallsyms] [k] ___might_sleep
1.83% 1.83% ls [kernel.kallsyms] [k] debug_lockdep_rcu_enabled
1.44% 1.44% ls [kernel.kallsyms] [k] dput
....

Signed-off-by: Thomas Richter <[email protected]>
Reviewed-by: Hendrik Brueckner <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
[ Use PRI[xd]64 to fix the build on debian:experimental-x-mips (gcc 8.1.0) and others ]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/s390-cpumsf.c | 593 +++++++++++++++++++++++++++++++++++++++++-
1 file changed, 585 insertions(+), 8 deletions(-)

diff --git a/tools/perf/util/s390-cpumsf.c b/tools/perf/util/s390-cpumsf.c
index 14728b0834c6..d2c78ffd9fee 100644
--- a/tools/perf/util/s390-cpumsf.c
+++ b/tools/perf/util/s390-cpumsf.c
@@ -4,6 +4,138 @@
* Auxtrace support for s390 CPU-Measurement Sampling Facility
*
* Author(s): Thomas Richter <[email protected]>
+ *
+ * Auxiliary traces are collected during 'perf record' using rbd000 event.
+ * Several PERF_RECORD_XXX are generated during recording:
+ *
+ * PERF_RECORD_AUX:
+ * Records that new data landed in the AUX buffer part.
+ * PERF_RECORD_AUXTRACE:
+ * Defines auxtrace data. Followed by the actual data. The contents of
+ * the auxtrace data is dependent on the event and the CPU.
+ * This record is generated by perf record command. For details
+ * see Documentation/perf.data-file-format.txt.
+ * PERF_RECORD_AUXTRACE_INFO:
+ * Defines a table of contents for PERF_RECORD_AUXTRACE records. This
+ * record is generated during 'perf record' command. Each record contains up
+ * to 256 entries describing offset and size of the AUXTRACE data in the
+ * perf.data file.
+ * PERF_RECORD_AUXTRACE_ERROR:
+ * Indicates an error during AUXTRACE collection such as buffer overflow.
+ * PERF_RECORD_FINISHED_ROUND:
+ * Perf events are not necessarily in time stamp order, as they can be
+ * collected in parallel on different CPUs. If the events should be
+ * processed in time order they need to be sorted first.
+ * Perf report guarantees that there is no reordering over a
+ * PERF_RECORD_FINISHED_ROUND boundary event. All perf records with a
+ * time stamp lower than this record are processed (and displayed) before
+ * the succeeding perf records are processed.
+ *
+ * These records are evaluated during perf report command.
+ *
+ * 1. PERF_RECORD_AUXTRACE_INFO is used to set up the infrastructure for
+ * auxiliary trace data processing. See s390_cpumsf_process_auxtrace_info()
+ * below.
+ * Auxiliary trace data is collected per CPU. To merge the data into the report
+ * an auxtrace_queue is created for each CPU. It is assumed that the auxtrace
+ * data is in ascending order.
+ *
+ * Each queue has a double linked list of auxtrace_buffers. This list contains
+ * the offset and size of a CPU's auxtrace data. During auxtrace processing
+ * the data portion is mmap()'ed.
+ *
+ * To sort the queues in chronological order, all queue access is controlled
+ * by the auxtrace_heap. This is basically a stack; each stack element has two
+ * entries, the queue number and a time stamp. However, the stack is sorted by
+ * the time stamps: the highest time stamp is at the bottom, the lowest
+ * (nearest) time stamp is at the top. That sort order is maintained at all
+ * times!
+ *
+ * After the auxtrace infrastructure has been setup, the auxtrace queues are
+ * filled with data (offset/size pairs) and the auxtrace_heap is populated.
+ *
+ * 2. PERF_RECORD_XXX processing triggers access to the auxtrace_queues.
+ * Each record is handled by s390_cpumsf_process_event(). The time stamp of
+ * the perf record is compared with the time stamp located on the auxtrace_heap
+ * top element. If that time stamp is lower than the time stamp from the
+ * record sample, the auxtrace queues will be processed. As auxtrace queues
+ * control many auxtrace_buffers and each buffer can be quite large, the
+ * auxtrace buffer might be processed only partially. In this case the
+ * position in the auxtrace_buffer of that queue is remembered and the time
+ * stamp of the last processed entry of the auxtrace_buffer replaces the
+ * current auxtrace_heap top.
+ *
+ * 3. Auxtrace_queues might run out of data and are fed by the
+ * PERF_RECORD_AUXTRACE handling, see s390_cpumsf_process_auxtrace_event().
+ *
+ * Event Generation
+ * Each sampling-data entry in the auxiliary trace data generates a perf sample.
+ * This sample is filled
+ * with data from the auxtrace such as PID/TID, instruction address, CPU state,
+ * etc. This sample is processed with perf_session__deliver_synth_event() to
+ * be included into the GUI.
+ *
+ * 4. A PERF_RECORD_FINISHED_ROUND event is used to process all the remaining
+ * auxiliary trace entries until the time stamp of this record is reached by
+ * the auxtrace_heap top. This is triggered by ordered_event->deliver().
+ *
+ *
+ * Perf event processing.
+ * Event processing of PERF_RECORD_XXX entries relies on time stamp entries.
+ * This is the function call sequence:
+ *
+ * __cmd_report()
+ * |
+ * perf_session__process_events()
+ * |
+ * __perf_session__process_events()
+ * |
+ * perf_session__process_event()
+ * | This function splits the PERF_RECORD_XXX records.
+ * | - Those generated by the perf record command (type number equal to or
+ * | higher than PERF_RECORD_USER_TYPE_START) are handled by
+ * | perf_session__process_user_event() (see below)
+ * | - Those generated by the kernel are handled by
+ * | perf_evlist__parse_sample_timestamp()
+ * |
+ * perf_evlist__parse_sample_timestamp()
+ * | Extract time stamp from sample data.
+ * |
+ * perf_session__queue_event()
+ * | If timestamp is positive the sample is entered into an ordered_event
+ * | list, sort order is the timestamp. The event processing is deferred until
+ * | later (see perf_session__process_user_event()).
+ * | Other timestamps (0 or -1) are handled immediately by
+ * | perf_session__deliver_event(). These are events generated at start up
+ * | of command perf record. They create PERF_RECORD_COMM and PERF_RECORD_MMAP*
+ * | records. They are needed to create a list of running processes and their
+ * | memory mappings and layout. They are needed at the beginning to enable
+ * | command perf report to create process trees and memory mappings.
+ * |
+ * perf_session__deliver_event()
+ * | Delivers a PERF_RECORD_XXX entry for handling.
+ * |
+ * auxtrace__process_event()
+ * | The timestamp of the PERF_RECORD_XXX entry is taken to correlate with
+ * | time stamps from the auxiliary trace buffers. This enables
+ * | synchronization between auxiliary trace data and the events on the
+ * | perf.data file.
+ * |
+ * machine__deliver_event()
+ * | Handles the PERF_RECORD_XXX event. This depends on the record type.
+ * It might update the process tree, update a process memory map or enter
+ * a sample with IP and call back chain data into GUI data pool.
+ *
+ *
+ * Deferred processing determined by perf_session__process_user_event() is
+ * finally processed when a PERF_RECORD_FINISHED_ROUND is encountered. These
+ * are generated during command perf record.
+ * The timestamp of PERF_RECORD_FINISHED_ROUND event is taken to process all
+ * PERF_RECORD_XXX entries stored in the ordered_event list. This list was
+ * built up while reading the perf.data file.
+ * Each event is now processed by calling perf_session__deliver_event().
+ * This enables time synchronization between the data in the perf.data file and
+ * the data in the auxiliary trace buffers.
*/

#include <endian.h>
@@ -37,6 +169,14 @@ struct s390_cpumsf {
u32 auxtrace_type;
u32 pmu_type;
u16 machine_type;
+ bool data_queued;
+};
+
+struct s390_cpumsf_queue {
+ struct s390_cpumsf *sf;
+ unsigned int queue_nr;
+ struct auxtrace_buffer *buffer;
+ int cpu;
};

/* Display s390 CPU measurement facility basic-sampling data entry */
@@ -181,8 +321,8 @@ static void s390_cpumsf_dump(struct s390_cpumsf *sf,
const char *color = PERF_COLOR_BLUE;
struct hws_basic_entry *basic;
struct hws_diag_entry *diag;
- size_t pos = 0;
unsigned short bsdes, dsdes;
+ size_t pos = 0;

color_fprintf(stdout, color,
". ... s390 AUX data: size %zu bytes\n",
@@ -243,15 +383,414 @@ static void s390_cpumsf_dump_event(struct s390_cpumsf *sf, unsigned char *buf,
s390_cpumsf_dump(sf, buf, len);
}

+#define S390_LPP_PID_MASK 0xffffffff
+
+static bool s390_cpumsf_make_event(size_t pos,
+ struct hws_basic_entry *basic,
+ struct s390_cpumsf_queue *sfq)
+{
+ struct perf_sample sample = {
+ .ip = basic->ia,
+ .pid = basic->hpp & S390_LPP_PID_MASK,
+ .tid = basic->hpp & S390_LPP_PID_MASK,
+ .cpumode = PERF_RECORD_MISC_CPUMODE_UNKNOWN,
+ .cpu = sfq->cpu,
+ .period = 1
+ };
+ union perf_event event;
+
+ memset(&event, 0, sizeof(event));
+ if (basic->CL == 1) /* Native LPAR mode */
+ sample.cpumode = basic->P ? PERF_RECORD_MISC_USER
+ : PERF_RECORD_MISC_KERNEL;
+ else if (basic->CL == 2) /* Guest kernel/user space */
+ sample.cpumode = basic->P ? PERF_RECORD_MISC_GUEST_USER
+ : PERF_RECORD_MISC_GUEST_KERNEL;
+ else if (basic->gpp || basic->prim_asn != 0xffff)
+ /* Use heuristics on old hardware */
+ sample.cpumode = basic->P ? PERF_RECORD_MISC_GUEST_USER
+ : PERF_RECORD_MISC_GUEST_KERNEL;
+ else
+ sample.cpumode = basic->P ? PERF_RECORD_MISC_USER
+ : PERF_RECORD_MISC_KERNEL;
+
+ event.sample.header.type = PERF_RECORD_SAMPLE;
+ event.sample.header.misc = sample.cpumode;
+ event.sample.header.size = sizeof(struct perf_event_header);
+
+ pr_debug4("%s pos:%#zx ip:%#" PRIx64 " P:%d CL:%d pid:%d.%d cpumode:%d cpu:%d\n",
+ __func__, pos, sample.ip, basic->P, basic->CL, sample.pid,
+ sample.tid, sample.cpumode, sample.cpu);
+ if (perf_session__deliver_synth_event(sfq->sf->session, &event,
+ &sample)) {
+ pr_err("s390 Auxiliary Trace: failed to deliver event\n");
+ return false;
+ }
+ return true;
+}
+
+static unsigned long long get_trailer_time(const unsigned char *buf)
+{
+ struct hws_trailer_entry *te;
+ unsigned long long aux_time;
+
+ te = (struct hws_trailer_entry *)(buf + S390_CPUMSF_PAGESZ
+ - sizeof(*te));
+
+ if (!te->clock_base) /* TOD_CLOCK_BASE value missing */
+ return 0;
+
+ /* Correct calculation to convert time stamp in trailer entry to
+ * nano seconds (taken from arch/s390 function tod_to_ns()).
+ * TOD_CLOCK_BASE is stored in trailer entry member progusage2.
+ */
+ aux_time = trailer_timestamp(te) - te->progusage2;
+ aux_time = (aux_time >> 9) * 125 + (((aux_time & 0x1ff) * 125) >> 9);
+ return aux_time;
+}
+
+/* Process the data samples of a single queue. The first parameter is a
+ * pointer to the queue, the second parameter is the time stamp. This
+ * is the time stamp:
+ * - of the event that triggered this processing.
+ * - or the time stamp when the last processing of this queue stopped.
+ * In this case it stopped at a 4KB page boundary and recorded the
+ * position where to continue processing on the next invocation
+ * (see buffer->use_data and buffer->use_size).
+ *
+ * When this function returns the second parameter is updated to
+ * reflect the time stamp of the last processed auxiliary data entry
+ * (taken from the trailer entry of that page). The caller uses this
+ * returned time stamp to record the last processed entry in this
+ * queue.
+ *
+ * The function returns:
+ * 0: Processing successful. The second parameter returns the
+ * time stamp from the trailer entry until which position
+ * processing took place. Subsequent calls resume from this
+ * position.
+ * <0: An error occurred during processing. The second parameter
+ * returns the maximum time stamp.
+ * >0: Done on this queue. The second parameter returns the
+ * maximum time stamp.
+ */
+static int s390_cpumsf_samples(struct s390_cpumsf_queue *sfq, u64 *ts)
+{
+ struct s390_cpumsf *sf = sfq->sf;
+ unsigned char *buf = sfq->buffer->use_data;
+ size_t len = sfq->buffer->use_size;
+ struct hws_basic_entry *basic;
+ unsigned short bsdes, dsdes;
+ size_t pos = 0;
+ int err = 1;
+ u64 aux_ts;
+
+ if (!s390_cpumsf_validate(sf->machine_type, buf, len, &bsdes,
+ &dsdes)) {
+ *ts = ~0ULL;
+ return -1;
+ }
+
+ /* Get trailer entry time stamp and check if entries in
+ * this auxiliary page are ready for processing. If the
+ * time stamp of the first entry is too high, the whole buffer
+ * can be skipped. In this case return the time stamp.
+ */
+ aux_ts = get_trailer_time(buf);
+ if (!aux_ts) {
+ pr_err("[%#08" PRIx64 "] Invalid AUX trailer entry TOD clock base\n",
+ sfq->buffer->data_offset);
+ aux_ts = ~0ULL;
+ goto out;
+ }
+ if (aux_ts > *ts) {
+ *ts = aux_ts;
+ return 0;
+ }
+
+ while (pos < len) {
+ /* Handle Basic entry */
+ basic = (struct hws_basic_entry *)(buf + pos);
+ if (s390_cpumsf_make_event(pos, basic, sfq))
+ pos += bsdes;
+ else {
+ err = -EBADF;
+ goto out;
+ }
+
+ pos += dsdes; /* Skip diagnostic entry */
+
+ /* Check for trailer entry */
+ if (!s390_cpumsf_reached_trailer(bsdes + dsdes, pos)) {
+ pos = (pos + S390_CPUMSF_PAGESZ)
+ & ~(S390_CPUMSF_PAGESZ - 1);
+ /* Check existence of next page */
+ if (pos >= len)
+ break;
+ aux_ts = get_trailer_time(buf + pos);
+ if (!aux_ts) {
+ aux_ts = ~0ULL;
+ goto out;
+ }
+ if (aux_ts > *ts) {
+ *ts = aux_ts;
+ sfq->buffer->use_data += pos;
+ sfq->buffer->use_size -= pos;
+ return 0;
+ }
+ }
+ }
+out:
+ *ts = aux_ts;
+ sfq->buffer->use_size = 0;
+ sfq->buffer->use_data = NULL;
+ return err; /* Buffer completely scanned or error */
+}
+
+/* Run the s390 auxiliary trace decoder.
+ * Select the queue buffer to operate on; the caller already selected
+ * the proper queue, depending on the second parameter 'ts'.
+ * This is the time stamp until which the auxiliary entries should
+ * be processed. This value is updated by called functions and
+ * returned to the caller.
+ *
+ * Resume processing in the current buffer. If there is no buffer
+ * get a new buffer from the queue and setup start position for
+ * processing.
+ * When a buffer is completely processed remove it from the queue
+ * before returning.
+ *
+ * This function returns
+ * 1: When the queue is empty. Second parameter will be set to
+ * maximum time stamp.
+ * 0: Normal processing done.
+ * <0: Error during queue buffer setup. This causes the caller
+ * to stop processing completely.
+ */
+static int s390_cpumsf_run_decoder(struct s390_cpumsf_queue *sfq,
+ u64 *ts)
+{
+
+ struct auxtrace_buffer *buffer;
+ struct auxtrace_queue *queue;
+ int err;
+
+ queue = &sfq->sf->queues.queue_array[sfq->queue_nr];
+
+ /* Get buffer and last position in buffer to resume
+ * decoding the auxiliary entries. One buffer might be large
+ * and decoding might stop in between. This depends on the time
+ * stamp of the trailer entry in each page of the auxiliary
+ * data and the time stamp of the event triggering the decoding.
+ */
+ if (sfq->buffer == NULL) {
+ sfq->buffer = buffer = auxtrace_buffer__next(queue,
+ sfq->buffer);
+ if (!buffer) {
+ *ts = ~0ULL;
+ return 1; /* Processing done on this queue */
+ }
+ /* Start with a new buffer on this queue */
+ if (buffer->data) {
+ buffer->use_size = buffer->size;
+ buffer->use_data = buffer->data;
+ }
+ } else
+ buffer = sfq->buffer;
+
+ if (!buffer->data) {
+ int fd = perf_data__fd(sfq->sf->session->data);
+
+ buffer->data = auxtrace_buffer__get_data(buffer, fd);
+ if (!buffer->data)
+ return -ENOMEM;
+ buffer->use_size = buffer->size;
+ buffer->use_data = buffer->data;
+ }
+ pr_debug4("%s queue_nr:%d buffer:%" PRId64 " offset:%#" PRIx64 " size:%#zx rest:%#zx\n",
+ __func__, sfq->queue_nr, buffer->buffer_nr, buffer->offset,
+ buffer->size, buffer->use_size);
+ err = s390_cpumsf_samples(sfq, ts);
+
+ /* If non-zero, there is either an error (err < 0) or the buffer is
+ * completely done (err > 0). The error is unrecoverable, usually
+ * some descriptors could not be read successfully, so continue with
+ * the next buffer.
+ * In both cases the parameter 'ts' has been updated.
+ */
+ if (err) {
+ sfq->buffer = NULL;
+ list_del(&buffer->list);
+ auxtrace_buffer__free(buffer);
+ if (err > 0) /* Buffer done, no error */
+ err = 0;
+ }
+ return err;
+}
+
+static struct s390_cpumsf_queue *
+s390_cpumsf_alloc_queue(struct s390_cpumsf *sf, unsigned int queue_nr)
+{
+ struct s390_cpumsf_queue *sfq;
+
+ sfq = zalloc(sizeof(struct s390_cpumsf_queue));
+ if (sfq == NULL)
+ return NULL;
+
+ sfq->sf = sf;
+ sfq->queue_nr = queue_nr;
+ sfq->cpu = -1;
+ return sfq;
+}
+
+static int s390_cpumsf_setup_queue(struct s390_cpumsf *sf,
+ struct auxtrace_queue *queue,
+ unsigned int queue_nr, u64 ts)
+{
+ struct s390_cpumsf_queue *sfq = queue->priv;
+
+ if (list_empty(&queue->head))
+ return 0;
+
+ if (sfq == NULL) {
+ sfq = s390_cpumsf_alloc_queue(sf, queue_nr);
+ if (!sfq)
+ return -ENOMEM;
+ queue->priv = sfq;
+
+ if (queue->cpu != -1)
+ sfq->cpu = queue->cpu;
+ }
+ return auxtrace_heap__add(&sf->heap, queue_nr, ts);
+}
+
+static int s390_cpumsf_setup_queues(struct s390_cpumsf *sf, u64 ts)
+{
+ unsigned int i;
+ int ret = 0;
+
+ for (i = 0; i < sf->queues.nr_queues; i++) {
+ ret = s390_cpumsf_setup_queue(sf, &sf->queues.queue_array[i],
+ i, ts);
+ if (ret)
+ break;
+ }
+ return ret;
+}
+
+static int s390_cpumsf_update_queues(struct s390_cpumsf *sf, u64 ts)
+{
+ if (!sf->queues.new_data)
+ return 0;
+
+ sf->queues.new_data = false;
+ return s390_cpumsf_setup_queues(sf, ts);
+}
+
+static int s390_cpumsf_process_queues(struct s390_cpumsf *sf, u64 timestamp)
+{
+ unsigned int queue_nr;
+ u64 ts;
+ int ret;
+
+ while (1) {
+ struct auxtrace_queue *queue;
+ struct s390_cpumsf_queue *sfq;
+
+ if (!sf->heap.heap_cnt)
+ return 0;
+
+ if (sf->heap.heap_array[0].ordinal >= timestamp)
+ return 0;
+
+ queue_nr = sf->heap.heap_array[0].queue_nr;
+ queue = &sf->queues.queue_array[queue_nr];
+ sfq = queue->priv;
+
+ auxtrace_heap__pop(&sf->heap);
+ if (sf->heap.heap_cnt) {
+ ts = sf->heap.heap_array[0].ordinal + 1;
+ if (ts > timestamp)
+ ts = timestamp;
+ } else {
+ ts = timestamp;
+ }
+
+ ret = s390_cpumsf_run_decoder(sfq, &ts);
+ if (ret < 0) {
+ auxtrace_heap__add(&sf->heap, queue_nr, ts);
+ return ret;
+ }
+ if (!ret) {
+ ret = auxtrace_heap__add(&sf->heap, queue_nr, ts);
+ if (ret < 0)
+ return ret;
+ }
+ }
+ return 0;
+}
+
+static int s390_cpumsf_synth_error(struct s390_cpumsf *sf, int code, int cpu,
+ pid_t pid, pid_t tid, u64 ip)
+{
+ char msg[MAX_AUXTRACE_ERROR_MSG];
+ union perf_event event;
+ int err;
+
+ strncpy(msg, "Lost Auxiliary Trace Buffer", sizeof(msg) - 1);
+ auxtrace_synth_error(&event.auxtrace_error, PERF_AUXTRACE_ERROR_ITRACE,
+ code, cpu, pid, tid, ip, msg);
+
+ err = perf_session__deliver_synth_event(sf->session, &event, NULL);
+ if (err)
+ pr_err("s390 Auxiliary Trace: failed to deliver error event,"
+ "error %d\n", err);
+ return err;
+}
+
+static int s390_cpumsf_lost(struct s390_cpumsf *sf, struct perf_sample *sample)
+{
+ return s390_cpumsf_synth_error(sf, 1, sample->cpu,
+ sample->pid, sample->tid, 0);
+}
+
static int
s390_cpumsf_process_event(struct perf_session *session __maybe_unused,
- union perf_event *event __maybe_unused,
- struct perf_sample *sample __maybe_unused,
- struct perf_tool *tool __maybe_unused)
+ union perf_event *event,
+ struct perf_sample *sample,
+ struct perf_tool *tool)
{
- return 0;
+ struct s390_cpumsf *sf = container_of(session->auxtrace,
+ struct s390_cpumsf,
+ auxtrace);
+ u64 timestamp = sample->time;
+ int err = 0;
+
+ if (dump_trace)
+ return 0;
+
+ if (!tool->ordered_events) {
+ pr_err("s390 Auxiliary Trace requires ordered events\n");
+ return -EINVAL;
+ }
+
+ if (event->header.type == PERF_RECORD_AUX &&
+ event->aux.flags & PERF_AUX_FLAG_TRUNCATED)
+ return s390_cpumsf_lost(sf, sample);
+
+ if (timestamp) {
+ err = s390_cpumsf_update_queues(sf, timestamp);
+ if (!err)
+ err = s390_cpumsf_process_queues(sf, timestamp);
+ }
+ return err;
}

+struct s390_cpumsf_synth {
+ struct perf_tool cpumsf_tool;
+ struct perf_session *session;
+};
+
static int
s390_cpumsf_process_auxtrace_event(struct perf_session *session,
union perf_event *event __maybe_unused,
@@ -266,6 +805,9 @@ s390_cpumsf_process_auxtrace_event(struct perf_session *session,
off_t data_offset;
int err;

+ if (sf->data_queued)
+ return 0;
+
if (perf_data__is_pipe(session->data)) {
data_offset = 0;
} else {
@@ -290,17 +832,21 @@ s390_cpumsf_process_auxtrace_event(struct perf_session *session,
return 0;
}

+static void s390_cpumsf_free_events(struct perf_session *session __maybe_unused)
+{
+}
+
static int s390_cpumsf_flush(struct perf_session *session __maybe_unused,
struct perf_tool *tool __maybe_unused)
{
return 0;
}

-static void s390_cpumsf_free_events(struct perf_session *session)
+static void s390_cpumsf_free_queues(struct perf_session *session)
{
struct s390_cpumsf *sf = container_of(session->auxtrace,
struct s390_cpumsf,
- auxtrace);
+ auxtrace);
struct auxtrace_queues *queues = &sf->queues;
unsigned int i;

@@ -316,7 +862,7 @@ static void s390_cpumsf_free(struct perf_session *session)
auxtrace);

auxtrace_heap__free(&sf->heap);
- s390_cpumsf_free_events(session);
+ s390_cpumsf_free_queues(session);
session->auxtrace = NULL;
free(sf);
}
@@ -329,6 +875,19 @@ static int s390_cpumsf_get_type(const char *cpuid)
return (ret == 1) ? family : 0;
}

+/* Check itrace options set on perf report command.
+ * Return true, if none are set or all options specified can be
+ * handled on s390.
+ * Return false otherwise.
+ */
+static bool check_auxtrace_itrace(struct itrace_synth_opts *itops)
+{
+ if (!itops || !itops->set)
+ return true;
+ pr_err("No --itrace options supported\n");
+ return false;
+}
+
int s390_cpumsf_process_auxtrace_info(union perf_event *event,
struct perf_session *session)
{
@@ -343,6 +902,11 @@ int s390_cpumsf_process_auxtrace_info(union perf_event *event,
if (sf == NULL)
return -ENOMEM;

+ if (!check_auxtrace_itrace(session->itrace_synth_opts)) {
+ err = -EINVAL;
+ goto err_free;
+ }
+
err = auxtrace_queues__init(&sf->queues);
if (err)
goto err_free;
@@ -360,8 +924,21 @@ int s390_cpumsf_process_auxtrace_info(union perf_event *event,
sf->auxtrace.free = s390_cpumsf_free;
session->auxtrace = &sf->auxtrace;

+ if (dump_trace)
+ return 0;
+
+ err = auxtrace_queues__process_index(&sf->queues, session);
+ if (err)
+ goto err_free_queues;
+
+ if (sf->queues.populated)
+ sf->data_queued = true;
+
return 0;

+err_free_queues:
+ auxtrace_queues__free(&sf->queues);
+ session->auxtrace = NULL;
err_free:
free(sf);
return err;
--
2.14.4


2018-08-09 15:01:06

by Arnaldo Carvalho de Melo

Subject: [PATCH 12/44] perf bpf: Add 'syscall_enter' probe helper for syscall enter tracepoints

From: Arnaldo Carvalho de Melo <[email protected]>

Allow hooking into the syscalls:sys_enter_NAME tracepoints; an example
is provided that hooks into the 'openat' syscall.

Using it with the probe:vfs_getname probe into getname_flags to get the
filename args as it is copied from userspace:

# perf probe -l
probe:vfs_getname (on getname_flags:73@acme/git/linux/fs/namei.c with pathname)
# perf trace -e probe:*getname,tools/perf/examples/bpf/sys_enter_openat.c cat /etc/passwd > /dev/null
0.000 probe:vfs_getname:(ffffffffbd2a8983) pathname="/etc/ld.so.preload"
0.022 syscalls:sys_enter_openat:dfd: CWD, filename: 0xafbe8da8, flags: CLOEXEC
0.027 probe:vfs_getname:(ffffffffbd2a8983) pathname="/etc/ld.so.cache"
0.054 syscalls:sys_enter_openat:dfd: CWD, filename: 0xafdf0ce0, flags: CLOEXEC
0.057 probe:vfs_getname:(ffffffffbd2a8983) pathname="/lib64/libc.so.6"
0.316 probe:vfs_getname:(ffffffffbd2a8983) pathname="/usr/lib/locale/locale-archive"
0.375 syscalls:sys_enter_openat:dfd: CWD, filename: 0xe2b2b0b4
0.379 probe:vfs_getname:(ffffffffbd2a8983) pathname="/etc/passwd"
#
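
For reference, here is a sketch (not part of the patch) of what the
syscall_enter() helper added to bpf.h below expands to in the 'openat'
example; SEC() places the function in the named ELF section, which perf
then uses to pick the tracepoint to attach the program to:

  /* int syscall_enter(openat)(struct syscall_enter_openat_args *args) becomes: */
  int SEC("syscalls:sys_enter_openat") syscall_enter_openat(struct syscall_enter_openat_args *args)
  {
          return 1;
  }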

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/examples/bpf/sys_enter_openat.c | 33 ++++++++++++++++++++++++++++++
tools/perf/include/bpf/bpf.h | 3 +++
2 files changed, 36 insertions(+)
create mode 100644 tools/perf/examples/bpf/sys_enter_openat.c

diff --git a/tools/perf/examples/bpf/sys_enter_openat.c b/tools/perf/examples/bpf/sys_enter_openat.c
new file mode 100644
index 000000000000..9cd124b09392
--- /dev/null
+++ b/tools/perf/examples/bpf/sys_enter_openat.c
@@ -0,0 +1,33 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Hook into 'openat' syscall entry tracepoint
+ *
+ * Test it with:
+ *
+ * perf trace -e tools/perf/examples/bpf/sys_enter_openat.c cat /etc/passwd > /dev/null
+ *
+ * It'll catch some openat syscalls related to the dynamic linker and
+ * the last one should be the one for '/etc/passwd'.
+ *
+ * The syscall_enter_openat_args can be used to get the syscall fields
+ * and use them for filtering calls, i.e. use in expressions for
+ * the return value.
+ */
+
+#include <bpf.h>
+
+struct syscall_enter_openat_args {
+ unsigned long long unused;
+ long syscall_nr;
+ long dfd;
+ char *filename_ptr;
+ long flags;
+ long mode;
+};
+
+int syscall_enter(openat)(struct syscall_enter_openat_args *args)
+{
+ return 1;
+}
+
+license(GPL);
diff --git a/tools/perf/include/bpf/bpf.h b/tools/perf/include/bpf/bpf.h
index a63aa6241b7f..2873cdde293f 100644
--- a/tools/perf/include/bpf/bpf.h
+++ b/tools/perf/include/bpf/bpf.h
@@ -9,6 +9,9 @@
#define probe(function, vars) \
SEC(#function "=" #function " " #vars) function

+#define syscall_enter(name) \
+ SEC("syscalls:sys_enter_" #name) syscall_enter_ ## name
+
#define license(name) \
char _license[] SEC("license") = #name; \
int _version SEC("version") = LINUX_VERSION_CODE;
--
2.14.4


2018-08-09 15:01:13

by Arnaldo Carvalho de Melo

Subject: [PATCH 14/44] perf annotate: Make annotation_line__max_percent static

From: Jiri Olsa <[email protected]>

There's no outside user of it.

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/annotate.c | 3 ++-
tools/perf/util/annotate.h | 1 -
2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index b6e7d0d56622..956c9b19d81c 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -2441,7 +2441,8 @@ bool ui__has_annotation(void)
}


-double annotation_line__max_percent(struct annotation_line *al, struct annotation *notes)
+static double annotation_line__max_percent(struct annotation_line *al,
+ struct annotation *notes)
{
double percent_max = 0.0;
int i;
diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
index 5f24fc9dcc7c..a93502d0c582 100644
--- a/tools/perf/util/annotate.h
+++ b/tools/perf/util/annotate.h
@@ -169,7 +169,6 @@ struct annotation_write_ops {
void (*write_graph)(void *obj, int graph);
};

-double annotation_line__max_percent(struct annotation_line *al, struct annotation *notes);
void annotation_line__write(struct annotation_line *al, struct annotation *notes,
struct annotation_write_ops *ops);

--
2.14.4


2018-08-09 15:01:15

by Arnaldo Carvalho de Melo

Subject: [PATCH 05/44] perf trace: Setup struct syscall_tp for syscalls:sys_{enter,exit}_NAME events

From: Arnaldo Carvalho de Melo <[email protected]>

Mapping "__syscall_nr" to "id" and setting up "args" from the offset of
"__syscall_nr" + sizeof(u64), as the payload for syscalls:* is the same
as for raw_syscalls:*, just the fields have different names.
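
Roughly, as a sketch of the two layouts described above (not a dump of the
actual tracepoint format files):

  /* raw_syscalls:sys_enter payload, after the common fields */
  long id;                /* syscall number    */
  unsigned long args[6];  /* syscall arguments */

  /* syscalls:sys_enter_openat payload, after the common fields */
  int __syscall_nr;       /* plays the role of "id" */
  /* the per-syscall arguments (dfd, filename, flags, mode) start at
   * __syscall_nr's offset + sizeof(u64), i.e. right where "args" starts
   * in the raw_syscalls case.
   */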

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/builtin-trace.c | 53 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 52 insertions(+), 1 deletion(-)

diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 7fca844ced0b..41799596c045 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -247,6 +247,22 @@ static void perf_evsel__delete_priv(struct perf_evsel *evsel)
perf_evsel__delete(evsel);
}

+static int perf_evsel__init_syscall_tp(struct perf_evsel *evsel)
+{
+ struct syscall_tp *sc = evsel->priv = malloc(sizeof(struct syscall_tp));
+
+ if (evsel->priv != NULL) {
+ if (perf_evsel__init_tp_uint_field(evsel, &sc->id, "__syscall_nr"))
+ goto out_delete;
+ return 0;
+ }
+
+ return -ENOMEM;
+out_delete:
+ zfree(&evsel->priv);
+ return -ENOENT;
+}
+
static int perf_evsel__init_raw_syscall_tp(struct perf_evsel *evsel, void *handler)
{
evsel->priv = malloc(sizeof(struct syscall_tp));
@@ -2977,6 +2993,36 @@ static void evlist__set_evsel_handler(struct perf_evlist *evlist, void *handler)
evsel->handler = handler;
}

+static int evlist__set_syscall_tp_fields(struct perf_evlist *evlist)
+{
+ struct perf_evsel *evsel;
+
+ evlist__for_each_entry(evlist, evsel) {
+ if (evsel->priv || !evsel->tp_format)
+ continue;
+
+ if (strcmp(evsel->tp_format->system, "syscalls"))
+ continue;
+
+ if (perf_evsel__init_syscall_tp(evsel))
+ return -1;
+
+ if (!strncmp(evsel->tp_format->name, "sys_enter_", 10)) {
+ struct syscall_tp *sc = evsel->priv;
+
+ if (__tp_field__init_ptr(&sc->args, sc->id.offset + sizeof(u64)))
+ return -1;
+ } else if (!strncmp(evsel->tp_format->name, "sys_exit_", 9)) {
+ struct syscall_tp *sc = evsel->priv;
+
+ if (__tp_field__init_uint(&sc->ret, sizeof(u64), sc->id.offset + sizeof(u64), evsel->needs_swap))
+ return -1;
+ }
+ }
+
+ return 0;
+}
+
/*
* XXX: Hackish, just splitting the combined -e+--event (syscalls
* (raw_syscalls:{sys_{enter,exit}} + events (tracepoints, HW, SW, etc) to use
@@ -3236,8 +3282,13 @@ int cmd_trace(int argc, const char **argv)
symbol_conf.use_callchain = true;
}

- if (trace.evlist->nr_entries > 0)
+ if (trace.evlist->nr_entries > 0) {
evlist__set_evsel_handler(trace.evlist, trace__event_handler);
+ if (evlist__set_syscall_tp_fields(trace.evlist)) {
+ perror("failed to set syscalls:* tracepoint fields");
+ goto out;
+ }
+ }

if ((argc >= 1) && (strcmp(argv[0], "record") == 0))
return trace__record(&trace, argc-1, &argv[1]);
--
2.14.4


2018-08-09 15:01:18

by Arnaldo Carvalho de Melo

Subject: [PATCH 15/44] perf annotate: Get rid of annotation__scnprintf_samples_period()

From: Jiri Olsa <[email protected]>

We have a more current function to get the title for annotation,
which is hists__scnprintf_title(). Both have the same output as
far as the annotation's header line goes.

They differ in the counting of nr_samples: hists__scnprintf_title()
provides a more accurate number, based on the setting of the
symbol_conf.filter_relative variable.

Plus it also displays any uid/thread/dso/socket filters/zooms
if any are set, which annotation__scnprintf_samples_period()
does not.

Signed-off-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/ui/browsers/annotate.c | 3 +--
tools/perf/util/annotate.c | 44 ++-------------------------------------
tools/perf/util/annotate.h | 7 -------
3 files changed, 3 insertions(+), 51 deletions(-)

diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
index 3b4f1c10ff57..d264916d2648 100644
--- a/tools/perf/ui/browsers/annotate.c
+++ b/tools/perf/ui/browsers/annotate.c
@@ -624,8 +624,7 @@ static int annotate_browser__run(struct annotate_browser *browser,
char title[256];
int key;

- annotation__scnprintf_samples_period(notes, title, sizeof(title), evsel);
-
+ hists__scnprintf_title(hists, title, sizeof(title));
if (annotate_browser__show(&browser->b, title, help) < 0)
return -1;

diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 956c9b19d81c..0d40cee13f6b 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -2389,7 +2389,7 @@ int symbol__tty_annotate2(struct symbol *sym, struct map *map,
{
struct dso *dso = map->dso;
struct rb_root source_line = RB_ROOT;
- struct annotation *notes = symbol__annotation(sym);
+ struct hists *hists = evsel__hists(evsel);
char buf[1024];

if (symbol__annotate2(sym, map, evsel, opts, NULL) < 0)
@@ -2401,7 +2401,7 @@ int symbol__tty_annotate2(struct symbol *sym, struct map *map,
print_summary(&source_line, dso->long_name);
}

- annotation__scnprintf_samples_period(notes, buf, sizeof(buf), evsel);
+ hists__scnprintf_title(hists, buf, sizeof(buf));
fprintf(stdout, "%s\n%s() %s\n", buf, sym->name, dso->long_name);
symbol__annotate_fprintf2(sym, stdout);

@@ -2689,46 +2689,6 @@ int symbol__annotate2(struct symbol *sym, struct map *map, struct perf_evsel *ev
return -1;
}

-int __annotation__scnprintf_samples_period(struct annotation *notes,
- char *bf, size_t size,
- struct perf_evsel *evsel,
- bool show_freq)
-{
- const char *ev_name = perf_evsel__name(evsel);
- char buf[1024], ref[30] = " show reference callgraph, ";
- char sample_freq_str[64] = "";
- unsigned long nr_samples = 0;
- int nr_members = 1;
- bool enable_ref = false;
- u64 nr_events = 0;
- char unit;
- int i;
-
- if (perf_evsel__is_group_event(evsel)) {
- perf_evsel__group_desc(evsel, buf, sizeof(buf));
- ev_name = buf;
- nr_members = evsel->nr_members;
- }
-
- for (i = 0; i < nr_members; i++) {
- struct sym_hist *ah = annotation__histogram(notes, evsel->idx + i);
-
- nr_samples += ah->nr_samples;
- nr_events += ah->period;
- }
-
- if (symbol_conf.show_ref_callgraph && strstr(ev_name, "call-graph=no"))
- enable_ref = true;
-
- if (show_freq)
- scnprintf(sample_freq_str, sizeof(sample_freq_str), " %d Hz,", evsel->attr.sample_freq);
-
- nr_samples = convert_unit(nr_samples, &unit);
- return scnprintf(bf, size, "Samples: %lu%c of event%s '%s',%s%sEvent count (approx.): %" PRIu64,
- nr_samples, unit, evsel->nr_members > 1 ? "s" : "",
- ev_name, sample_freq_str, enable_ref ? ref : " ", nr_events);
-}
-
#define ANNOTATION__CFG(n) \
{ .name = #n, .value = &annotation__default_options.n, }

diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
index a93502d0c582..d06f14c656c6 100644
--- a/tools/perf/util/annotate.h
+++ b/tools/perf/util/annotate.h
@@ -177,13 +177,6 @@ int __annotation__scnprintf_samples_period(struct annotation *notes,
struct perf_evsel *evsel,
bool show_freq);

-static inline int annotation__scnprintf_samples_period(struct annotation *notes,
- char *bf, size_t size,
- struct perf_evsel *evsel)
-{
- return __annotation__scnprintf_samples_period(notes, bf, size, evsel, true);
-}
-
int disasm_line__scnprintf(struct disasm_line *dl, char *bf, size_t size, bool raw);
size_t disasm__fprintf(struct list_head *head, FILE *fp);
void symbol__calc_percent(struct symbol *sym, struct perf_evsel *evsel);
--
2.14.4


2018-08-09 15:01:23

by Arnaldo Carvalho de Melo

Subject: [PATCH 16/44] perf annotate: Rename struct annotation_line::samples* to data*

From: Jiri Olsa <[email protected]>

The name 'samples*' is a little confusing, because we have a nested
'struct sym_hist_entry' under the annotation_line struct, which holds
'nr_samples' as well.

Also, the holding struct's name is 'annotation_data', so the 'data' name
fits better.

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/ui/browsers/annotate.c | 10 ++++----
tools/perf/util/annotate.c | 52 +++++++++++++++++++--------------------
tools/perf/util/annotate.h | 4 +--
3 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
index d264916d2648..d648d1e153f3 100644
--- a/tools/perf/ui/browsers/annotate.c
+++ b/tools/perf/ui/browsers/annotate.c
@@ -227,10 +227,10 @@ static int disasm__cmp(struct annotation_line *a, struct annotation_line *b)
{
int i;

- for (i = 0; i < a->samples_nr; i++) {
- if (a->samples[i].percent == b->samples[i].percent)
+ for (i = 0; i < a->data_nr; i++) {
+ if (a->data[i].percent == b->data[i].percent)
continue;
- return a->samples[i].percent < b->samples[i].percent;
+ return a->data[i].percent < b->data[i].percent;
}
return 0;
}
@@ -314,8 +314,8 @@ static void annotate_browser__calc_percent(struct annotate_browser *browser,
continue;
}

- for (i = 0; i < pos->al.samples_nr; i++) {
- struct annotation_data *sample = &pos->al.samples[i];
+ for (i = 0; i < pos->al.data_nr; i++) {
+ struct annotation_data *sample = &pos->al.data[i];

if (max_percent < sample->percent)
max_percent = sample->percent;
diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 0d40cee13f6b..e4cb8963db1a 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -1108,7 +1108,7 @@ annotation_line__new(struct annotate_args *args, size_t privsize)
if (perf_evsel__is_group_event(evsel))
nr = evsel->nr_members;

- size += sizeof(al->samples[0]) * nr;
+ size += sizeof(al->data[0]) * nr;

al = zalloc(size);
if (al) {
@@ -1117,7 +1117,7 @@ annotation_line__new(struct annotate_args *args, size_t privsize)
al->offset = args->offset;
al->line = strdup(args->line);
al->line_nr = args->line_nr;
- al->samples_nr = nr;
+ al->data_nr = nr;
}

return al;
@@ -1309,15 +1309,15 @@ annotation_line__print(struct annotation_line *al, struct symbol *sym, u64 start
const char *color;
struct annotation *notes = symbol__annotation(sym);

- for (i = 0; i < al->samples_nr; i++) {
- struct annotation_data *sample = &al->samples[i];
+ for (i = 0; i < al->data_nr; i++) {
+ struct annotation_data *sample = &al->data[i];

if (sample->percent > max_percent)
max_percent = sample->percent;
}

- if (al->samples_nr > nr_percent)
- nr_percent = al->samples_nr;
+ if (al->data_nr > nr_percent)
+ nr_percent = al->data_nr;

if (max_percent < min_pcnt)
return -1;
@@ -1351,7 +1351,7 @@ annotation_line__print(struct annotation_line *al, struct symbol *sym, u64 start
}

for (i = 0; i < nr_percent; i++) {
- struct annotation_data *sample = &al->samples[i];
+ struct annotation_data *sample = &al->data[i];

color = get_percent_color(sample->percent);

@@ -1788,12 +1788,12 @@ static void annotation__calc_percent(struct annotation *notes,
next = annotation_line__next(al, &notes->src->source);
end = next ? next->offset : len;

- for (i = 0; i < al->samples_nr; i++) {
+ for (i = 0; i < al->data_nr; i++) {
struct annotation_data *sample;
struct sym_hist *hist;

hist = annotation__histogram(notes, evsel->idx + i);
- sample = &al->samples[i];
+ sample = &al->data[i];

calc_percent(hist, sample, al->offset, end);
}
@@ -1859,8 +1859,8 @@ static void insert_source_line(struct rb_root *root, struct annotation_line *al)

ret = strcmp(iter->path, al->path);
if (ret == 0) {
- for (i = 0; i < al->samples_nr; i++)
- iter->samples[i].percent_sum += al->samples[i].percent;
+ for (i = 0; i < al->data_nr; i++)
+ iter->data[i].percent_sum += al->data[i].percent;
return;
}

@@ -1870,8 +1870,8 @@ static void insert_source_line(struct rb_root *root, struct annotation_line *al)
p = &(*p)->rb_right;
}

- for (i = 0; i < al->samples_nr; i++)
- al->samples[i].percent_sum = al->samples[i].percent;
+ for (i = 0; i < al->data_nr; i++)
+ al->data[i].percent_sum = al->data[i].percent;

rb_link_node(&al->rb_node, parent, p);
rb_insert_color(&al->rb_node, root);
@@ -1881,10 +1881,10 @@ static int cmp_source_line(struct annotation_line *a, struct annotation_line *b)
{
int i;

- for (i = 0; i < a->samples_nr; i++) {
- if (a->samples[i].percent_sum == b->samples[i].percent_sum)
+ for (i = 0; i < a->data_nr; i++) {
+ if (a->data[i].percent_sum == b->data[i].percent_sum)
continue;
- return a->samples[i].percent_sum > b->samples[i].percent_sum;
+ return a->data[i].percent_sum > b->data[i].percent_sum;
}

return 0;
@@ -1949,8 +1949,8 @@ static void print_summary(struct rb_root *root, const char *filename)
int i;

al = rb_entry(node, struct annotation_line, rb_node);
- for (i = 0; i < al->samples_nr; i++) {
- percent = al->samples[i].percent_sum;
+ for (i = 0; i < al->data_nr; i++) {
+ percent = al->data[i].percent_sum;
color = get_percent_color(percent);
color_fprintf(stdout, color, " %7.2f", percent);

@@ -2355,10 +2355,10 @@ static void annotation__calc_lines(struct annotation *notes, struct map *map,
double percent_max = 0.0;
int i;

- for (i = 0; i < al->samples_nr; i++) {
+ for (i = 0; i < al->data_nr; i++) {
struct annotation_data *sample;

- sample = &al->samples[i];
+ sample = &al->data[i];

if (sample->percent > percent_max)
percent_max = sample->percent;
@@ -2448,8 +2448,8 @@ static double annotation_line__max_percent(struct annotation_line *al,
int i;

for (i = 0; i < notes->nr_events; i++) {
- if (al->samples[i].percent > percent_max)
- percent_max = al->samples[i].percent;
+ if (al->data[i].percent > percent_max)
+ percent_max = al->data[i].percent;
}

return percent_max;
@@ -2515,15 +2515,15 @@ static void __annotation_line__write(struct annotation_line *al, struct annotati
int i;

for (i = 0; i < notes->nr_events; i++) {
- obj__set_percent_color(obj, al->samples[i].percent, current_entry);
+ obj__set_percent_color(obj, al->data[i].percent, current_entry);
if (notes->options->show_total_period) {
- obj__printf(obj, "%11" PRIu64 " ", al->samples[i].he.period);
+ obj__printf(obj, "%11" PRIu64 " ", al->data[i].he.period);
} else if (notes->options->show_nr_samples) {
obj__printf(obj, "%6" PRIu64 " ",
- al->samples[i].he.nr_samples);
+ al->data[i].he.nr_samples);
} else {
obj__printf(obj, "%6.2f ",
- al->samples[i].percent);
+ al->data[i].percent);
}
}
} else {
diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
index d06f14c656c6..58aa14c55bab 100644
--- a/tools/perf/util/annotate.h
+++ b/tools/perf/util/annotate.h
@@ -122,8 +122,8 @@ struct annotation_line {
char *path;
u32 idx;
int idx_asm;
- int samples_nr;
- struct annotation_data samples[0];
+ int data_nr;
+ struct annotation_data data[0];
};

struct disasm_line {
--
2.14.4


2018-08-09 15:01:31

by Arnaldo Carvalho de Melo

Subject: [PATCH 10/44] perf vendor events arm64: Enable JSON events for eMAG

From: Sean V Kelley <[email protected]>

This patch adds the Ampere Computing eMAG events file. This platform
follows the ARMv8 recommended IMPLEMENTATION DEFINED events, where
applicable.
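
A hypothetical usage sketch (the lowercased event names below are assumed
to be what perf derives from the ArchStdEvent entries; they are not taken
from the patch itself):

# perf list | grep -i l1d_cache
# perf stat -e l1d_cache_rd,l1d_cache_wr -- ls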

Signed-off-by: Sean V Kelley <[email protected]>
Reviewed-by: John Garry <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Ganapatrao Kulkarni <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: William Cohen <[email protected]>
Cc: [email protected]
LPU-Reference: [email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
.../arch/arm64/ampere/emag/core-imp-def.json | 32 ++++++++++++++++++++++
tools/perf/pmu-events/arch/arm64/mapfile.csv | 1 +
2 files changed, 33 insertions(+)
create mode 100644 tools/perf/pmu-events/arch/arm64/ampere/emag/core-imp-def.json

diff --git a/tools/perf/pmu-events/arch/arm64/ampere/emag/core-imp-def.json b/tools/perf/pmu-events/arch/arm64/ampere/emag/core-imp-def.json
new file mode 100644
index 000000000000..bc03c06c3918
--- /dev/null
+++ b/tools/perf/pmu-events/arch/arm64/ampere/emag/core-imp-def.json
@@ -0,0 +1,32 @@
+[
+ {
+ "ArchStdEvent": "L1D_CACHE_RD",
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_WR",
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_RD",
+ },
+ {
+ "ArchStdEvent": "L1D_CACHE_REFILL_WR",
+ },
+ {
+ "ArchStdEvent": "L1D_TLB_REFILL_RD",
+ },
+ {
+ "ArchStdEvent": "L1D_TLB_REFILL_WR",
+ },
+ {
+ "ArchStdEvent": "L1D_TLB_RD",
+ },
+ {
+ "ArchStdEvent": "L1D_TLB_WR",
+ },
+ {
+ "ArchStdEvent": "BUS_ACCESS_RD",
+ },
+ {
+ "ArchStdEvent": "BUS_ACCESS_WR",
+ }
+]
diff --git a/tools/perf/pmu-events/arch/arm64/mapfile.csv b/tools/perf/pmu-events/arch/arm64/mapfile.csv
index f03e26ecb658..59cd8604b0bd 100644
--- a/tools/perf/pmu-events/arch/arm64/mapfile.csv
+++ b/tools/perf/pmu-events/arch/arm64/mapfile.csv
@@ -16,3 +16,4 @@
0x00000000420f5160,v1,cavium/thunderx2,core
0x00000000430f0af0,v1,cavium/thunderx2,core
0x00000000480fd010,v1,hisilicon/hip08,core
+0x00000000500f0000,v1,ampere/emag,core
--
2.14.4


2018-08-09 15:01:40

by Arnaldo Carvalho de Melo

Subject: [PATCH 11/44] perf tools: Drop unneeded bitmap_zero() calls

From: Yury Norov <[email protected]>

bitmap_zero() is called after bitmap_alloc() in the perf code. But
bitmap_alloc() internally uses calloc(), which guarantees that the
allocated area is zeroed. So the following bitmap_zero() is unneeded.
Drop it.

This happened because of the confusing name of the bitmap allocator. It
should be named bitmap_zalloc instead of bitmap_alloc.

This series:

https://lkml.org/lkml/2018/6/18/841

introduces a new API for bitmap allocations in the kernel, and the
functions there are named correctly. A following patch propagates the API
to tools and fixes the naming issue.
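
For context, a paraphrased sketch of the tools-side allocator being
discussed (the real definition lives in tools/include/linux/bitmap.h):

  static inline unsigned long *bitmap_alloc(int nbits)
  {
          /* calloc() returns zeroed memory, hence a following
           * bitmap_zero() is redundant.
           */
          return calloc(1, BITS_TO_LONGS(nbits) * sizeof(unsigned long));
  }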

Signed-off-by: Yury Norov <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Andi Kleen <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Andriy Shevchenko <[email protected]>
Cc: David Ahern <[email protected]>
Cc: David Carrillo-Cisneros <[email protected]>
Cc: Dmitry Torokhov <[email protected]>
Cc: Jin Yao <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kate Stewart <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Mike Snitzer <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Philippe Ombredanne <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/tests/bitmap.c | 2 --
tools/perf/tests/mem2node.c | 2 --
tools/perf/util/header.c | 3 ---
3 files changed, 7 deletions(-)

diff --git a/tools/perf/tests/bitmap.c b/tools/perf/tests/bitmap.c
index 47bedf25ba69..96e7fc1ad3f9 100644
--- a/tools/perf/tests/bitmap.c
+++ b/tools/perf/tests/bitmap.c
@@ -16,8 +16,6 @@ static unsigned long *get_bitmap(const char *str, int nbits)
bm = bitmap_alloc(nbits);

if (map && bm) {
- bitmap_zero(bm, nbits);
-
for (i = 0; i < map->nr; i++)
set_bit(map->map[i], bm);
}
diff --git a/tools/perf/tests/mem2node.c b/tools/perf/tests/mem2node.c
index 0c3c87f86e03..9e9e4d37cc77 100644
--- a/tools/perf/tests/mem2node.c
+++ b/tools/perf/tests/mem2node.c
@@ -24,8 +24,6 @@ static unsigned long *get_bitmap(const char *str, int nbits)
bm = bitmap_alloc(nbits);

if (map && bm) {
- bitmap_zero(bm, nbits);
-
for (i = 0; i < map->nr; i++) {
set_bit(map->map[i], bm);
}
diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
index 5af58aac91ad..5f1af7b07b96 100644
--- a/tools/perf/util/header.c
+++ b/tools/perf/util/header.c
@@ -279,8 +279,6 @@ static int do_read_bitmap(struct feat_fd *ff, unsigned long **pset, u64 *psize)
if (!set)
return -ENOMEM;

- bitmap_zero(set, size);
-
p = (u64 *) set;

for (i = 0; (u64) i < BITS_TO_U64(size); i++) {
@@ -1285,7 +1283,6 @@ static int memory_node__read(struct memory_node *n, unsigned long idx)
return -ENOMEM;
}

- bitmap_zero(n->set, size);
n->node = idx;
n->size = size;

--
2.14.4


2018-08-09 15:01:45

by Arnaldo Carvalho de Melo

Subject: [PATCH 13/44] perf annotate: Make symbol__annotate_fprintf2() local

From: Jiri Olsa <[email protected]>

There's no outside user of it.

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/annotate.c | 2 +-
tools/perf/util/annotate.h | 1 -
2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index f91775b4bc3c..b6e7d0d56622 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -2129,7 +2129,7 @@ static void FILE__write_graph(void *fp, int graph)
fputs(s, fp);
}

-int symbol__annotate_fprintf2(struct symbol *sym, FILE *fp)
+static int symbol__annotate_fprintf2(struct symbol *sym, FILE *fp)
{
struct annotation *notes = symbol__annotation(sym);
struct annotation_write_ops ops = {
diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
index a4c0d91907e6..5f24fc9dcc7c 100644
--- a/tools/perf/util/annotate.h
+++ b/tools/perf/util/annotate.h
@@ -340,7 +340,6 @@ int symbol__strerror_disassemble(struct symbol *sym, struct map *map,
int symbol__annotate_printf(struct symbol *sym, struct map *map,
struct perf_evsel *evsel,
struct annotation_options *options);
-int symbol__annotate_fprintf2(struct symbol *sym, FILE *fp);
void symbol__annotate_zero_histogram(struct symbol *sym, int evidx);
void symbol__annotate_decay_histogram(struct symbol *sym, int evidx);
void annotated_source__purge(struct annotated_source *as);
--
2.14.4


2018-08-09 15:01:46

by Arnaldo Carvalho de Melo

Subject: [PATCH 19/44] perf annotate: Loop group events directly in annotation__calc_percent()

From: Jiri Olsa <[email protected]>

We need to bring in the 'struct hists' object, and for that we need the
'struct perf_evsel' object in scope.

Switch the group data loop to the evsel group loop. It does the same
thing, but it brings in the evsel object, which we can later use to get
the 'struct hists' object.

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/annotate.c | 13 ++++++++-----
tools/perf/util/evsel.h | 7 +++++++
2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 728603636adc..34d4bb73aa84 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -1774,13 +1774,14 @@ static void calc_percent(struct sym_hist *sym_hist,
}

static void annotation__calc_percent(struct annotation *notes,
- struct perf_evsel *evsel, s64 len)
+ struct perf_evsel *leader, s64 len)
{
struct annotation_line *al, *next;
+ struct perf_evsel *evsel;

list_for_each_entry(al, &notes->src->source, node) {
s64 end;
- int i;
+ int i = 0;

if (al->offset == -1)
continue;
@@ -1788,12 +1789,14 @@ static void annotation__calc_percent(struct annotation *notes,
next = annotation_line__next(al, &notes->src->source);
end = next ? next->offset : len;

- for (i = 0; i < al->data_nr; i++) {
+ for_each_group_evsel(evsel, leader) {
struct annotation_data *data;
struct sym_hist *sym_hist;

- sym_hist = annotation__histogram(notes, evsel->idx + i);
- data = &al->data[i];
+ BUG_ON(i >= al->data_nr);
+
+ sym_hist = annotation__histogram(notes, evsel->idx);
+ data = &al->data[i++];

calc_percent(sym_hist, data, al->offset, end);
}
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index 973c03167947..163c960614d3 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -452,11 +452,18 @@ static inline int perf_evsel__group_idx(struct perf_evsel *evsel)
return evsel->idx - evsel->leader->idx;
}

+/* Iterates group WITHOUT the leader. */
#define for_each_group_member(_evsel, _leader) \
for ((_evsel) = list_entry((_leader)->node.next, struct perf_evsel, node); \
(_evsel) && (_evsel)->leader == (_leader); \
(_evsel) = list_entry((_evsel)->node.next, struct perf_evsel, node))

+/* Iterates group WITH the leader. */
+#define for_each_group_evsel(_evsel, _leader) \
+for ((_evsel) = _leader; \
+ (_evsel) && (_evsel)->leader == (_leader); \
+ (_evsel) = list_entry((_evsel)->node.next, struct perf_evsel, node))
+
static inline bool perf_evsel__has_branch_callstack(const struct perf_evsel *evsel)
{
return evsel->attr.branch_sample_type & PERF_SAMPLE_BRANCH_CALL_STACK;
--
2.14.4


2018-08-09 15:01:54

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 20/44] perf annotate: Switch struct annotation_data::percent to array

From: Jiri Olsa <[email protected]>

So we can hold multiple percent values per annotation line.

The first member of this array is the current local hits percent value
(PERCENT_HITS_LOCAL index), so no functional change is expected.

Adding an annotation_data__percent() function to return the requested percent
value from struct annotation_data.
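
As a usage sketch (mirroring the hunks below), callers move from reading the
field directly to going through the accessor, which returns -1 for an
out-of-range index so that a user-selected percent type can be passed later:

	/* before */
	if (max_percent < data->percent)
		max_percent = data->percent;

	/* after */
	percent = annotation_data__percent(data, PERCENT_HITS_LOCAL);
	if (max_percent < percent)
		max_percent = percent;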

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/ui/browsers/annotate.c | 9 ++++---
tools/perf/util/annotate.c | 57 ++++++++++++++++++++++++++-------------
tools/perf/util/annotate.h | 13 ++++++++-
3 files changed, 56 insertions(+), 23 deletions(-)

diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
index d648d1e153f3..81876c3923d2 100644
--- a/tools/perf/ui/browsers/annotate.c
+++ b/tools/perf/ui/browsers/annotate.c
@@ -315,10 +315,13 @@ static void annotate_browser__calc_percent(struct annotate_browser *browser,
}

for (i = 0; i < pos->al.data_nr; i++) {
- struct annotation_data *sample = &pos->al.data[i];
+ double percent;

- if (max_percent < sample->percent)
- max_percent = sample->percent;
+ percent = annotation_data__percent(&pos->al.data[i],
+ PERCENT_HITS_LOCAL);
+
+ if (max_percent < percent)
+ max_percent = percent;
}

if (max_percent < 0.01 && pos->al.ipc == 0) {
diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 34d4bb73aa84..074adb2a831e 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -1310,10 +1310,13 @@ annotation_line__print(struct annotation_line *al, struct symbol *sym, u64 start
struct annotation *notes = symbol__annotation(sym);

for (i = 0; i < al->data_nr; i++) {
- struct annotation_data *data = &al->data[i];
+ double percent;
+
+ percent = annotation_data__percent(&al->data[i],
+ PERCENT_HITS_LOCAL);

- if (data->percent > max_percent)
- max_percent = data->percent;
+ if (percent > max_percent)
+ max_percent = percent;
}

if (al->data_nr > nr_percent)
@@ -1352,8 +1355,10 @@ annotation_line__print(struct annotation_line *al, struct symbol *sym, u64 start

for (i = 0; i < nr_percent; i++) {
struct annotation_data *data = &al->data[i];
+ double percent;

- color = get_percent_color(data->percent);
+ percent = annotation_data__percent(data, PERCENT_HITS_LOCAL);
+ color = get_percent_color(percent);

if (symbol_conf.show_total_period)
color_fprintf(stdout, color, " %11" PRIu64,
@@ -1362,7 +1367,7 @@ annotation_line__print(struct annotation_line *al, struct symbol *sym, u64 start
color_fprintf(stdout, color, " %7" PRIu64,
data->he.nr_samples);
else
- color_fprintf(stdout, color, " %7.2f", data->percent);
+ color_fprintf(stdout, color, " %7.2f", percent);
}

printf(" : ");
@@ -1769,7 +1774,7 @@ static void calc_percent(struct sym_hist *sym_hist,
if (sym_hist->nr_samples) {
data->he.period = period;
data->he.nr_samples = hits;
- data->percent = 100.0 * hits / sym_hist->nr_samples;
+ data->percent[PERCENT_HITS_LOCAL] = 100.0 * hits / sym_hist->nr_samples;
}
}

@@ -1862,8 +1867,10 @@ static void insert_source_line(struct rb_root *root, struct annotation_line *al)

ret = strcmp(iter->path, al->path);
if (ret == 0) {
- for (i = 0; i < al->data_nr; i++)
- iter->data[i].percent_sum += al->data[i].percent;
+ for (i = 0; i < al->data_nr; i++) {
+ iter->data[i].percent_sum += annotation_data__percent(&al->data[i],
+ PERCENT_HITS_LOCAL);
+ }
return;
}

@@ -1873,8 +1880,10 @@ static void insert_source_line(struct rb_root *root, struct annotation_line *al)
p = &(*p)->rb_right;
}

- for (i = 0; i < al->data_nr; i++)
- al->data[i].percent_sum = al->data[i].percent;
+ for (i = 0; i < al->data_nr; i++) {
+ al->data[i].percent_sum = annotation_data__percent(&al->data[i],
+ PERCENT_HITS_LOCAL);
+ }

rb_link_node(&al->rb_node, parent, p);
rb_insert_color(&al->rb_node, root);
@@ -2359,12 +2368,13 @@ static void annotation__calc_lines(struct annotation *notes, struct map *map,
int i;

for (i = 0; i < al->data_nr; i++) {
- struct annotation_data *data;
+ double percent;

- data = &al->data[i];
+ percent = annotation_data__percent(&al->data[i],
+ PERCENT_HITS_LOCAL);

- if (data->percent > percent_max)
- percent_max = data->percent;
+ if (percent > percent_max)
+ percent_max = percent;
}

if (percent_max <= 0.5)
@@ -2451,8 +2461,13 @@ static double annotation_line__max_percent(struct annotation_line *al,
int i;

for (i = 0; i < notes->nr_events; i++) {
- if (al->data[i].percent > percent_max)
- percent_max = al->data[i].percent;
+ double percent;
+
+ percent = annotation_data__percent(&al->data[i],
+ PERCENT_HITS_LOCAL);
+
+ if (percent > percent_max)
+ percent_max = percent;
}

return percent_max;
@@ -2518,15 +2533,19 @@ static void __annotation_line__write(struct annotation_line *al, struct annotati
int i;

for (i = 0; i < notes->nr_events; i++) {
- obj__set_percent_color(obj, al->data[i].percent, current_entry);
+ double percent;
+
+ percent = annotation_data__percent(&al->data[i],
+ PERCENT_HITS_LOCAL);
+
+ obj__set_percent_color(obj, percent, current_entry);
if (notes->options->show_total_period) {
obj__printf(obj, "%11" PRIu64 " ", al->data[i].he.period);
} else if (notes->options->show_nr_samples) {
obj__printf(obj, "%6" PRIu64 " ",
al->data[i].he.nr_samples);
} else {
- obj__printf(obj, "%6.2f ",
- al->data[i].percent);
+ obj__printf(obj, "%6.2f ", percent);
}
}
} else {
diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
index 58aa14c55bab..0afbf8075fca 100644
--- a/tools/perf/util/annotate.h
+++ b/tools/perf/util/annotate.h
@@ -101,8 +101,13 @@ struct sym_hist_entry {
u64 period;
};

+enum {
+ PERCENT_HITS_LOCAL,
+ PERCENT_MAX,
+};
+
struct annotation_data {
- double percent;
+ double percent[PERCENT_MAX];
double percent_sum;
struct sym_hist_entry he;
};
@@ -134,6 +139,12 @@ struct disasm_line {
struct annotation_line al;
};

+static inline double annotation_data__percent(struct annotation_data *data,
+ unsigned int which)
+{
+ return which < PERCENT_MAX ? data->percent[which] : -1;
+}
+
static inline struct disasm_line *disasm_line(struct annotation_line *al)
{
return al ? container_of(al, struct disasm_line, al) : NULL;
--
2.14.4


2018-08-09 15:01:58

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 21/44] perf annotate: Add PERCENT_HITS_GLOBAL percent value

From: Jiri Olsa <[email protected]>

Adding and computing the global hits percent value for each annotation line,
storing it in the struct annotation_data percent array under the new
PERCENT_HITS_GLOBAL index.

At the moment it's not displayed; that comes in the following patches.
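
For reference, after this patch calc_percent() fills the two hits-based
entries as (denominators taken from the diff below):

	data->percent[PERCENT_HITS_LOCAL]  = 100.0 * hits / sym_hist->nr_samples;
	data->percent[PERCENT_HITS_GLOBAL] = 100.0 * hits / hists->stats.nr_non_filtered_samples;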

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/annotate.c | 8 +++++++-
tools/perf/util/annotate.h | 1 +
2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 074adb2a831e..b7485a512da1 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -1759,6 +1759,7 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
}

static void calc_percent(struct sym_hist *sym_hist,
+ struct hists *hists,
struct annotation_data *data,
s64 offset, s64 end)
{
@@ -1776,6 +1777,10 @@ static void calc_percent(struct sym_hist *sym_hist,
data->he.nr_samples = hits;
data->percent[PERCENT_HITS_LOCAL] = 100.0 * hits / sym_hist->nr_samples;
}
+
+ if (hists->stats.nr_non_filtered_samples)
+ data->percent[PERCENT_HITS_GLOBAL] = 100.0 * hits / hists->stats.nr_non_filtered_samples;
+
}

static void annotation__calc_percent(struct annotation *notes,
@@ -1795,6 +1800,7 @@ static void annotation__calc_percent(struct annotation *notes,
end = next ? next->offset : len;

for_each_group_evsel(evsel, leader) {
+ struct hists *hists = evsel__hists(evsel);
struct annotation_data *data;
struct sym_hist *sym_hist;

@@ -1803,7 +1809,7 @@ static void annotation__calc_percent(struct annotation *notes,
sym_hist = annotation__histogram(notes, evsel->idx);
data = &al->data[i++];

- calc_percent(sym_hist, data, al->offset, end);
+ calc_percent(sym_hist, hists, data, al->offset, end);
}
}
}
diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
index 0afbf8075fca..3a06cb2b6e28 100644
--- a/tools/perf/util/annotate.h
+++ b/tools/perf/util/annotate.h
@@ -103,6 +103,7 @@ struct sym_hist_entry {

enum {
PERCENT_HITS_LOCAL,
+ PERCENT_HITS_GLOBAL,
PERCENT_MAX,
};

--
2.14.4


2018-08-09 15:01:58

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 17/44] perf annotate: Rename local sample variables to data

From: Jiri Olsa <[email protected]>

Based on the previous rename, also changing the local variable names to
match.

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/annotate.c | 40 ++++++++++++++++++++--------------------
1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index e4cb8963db1a..8bd278a71004 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -1310,10 +1310,10 @@ annotation_line__print(struct annotation_line *al, struct symbol *sym, u64 start
struct annotation *notes = symbol__annotation(sym);

for (i = 0; i < al->data_nr; i++) {
- struct annotation_data *sample = &al->data[i];
+ struct annotation_data *data = &al->data[i];

- if (sample->percent > max_percent)
- max_percent = sample->percent;
+ if (data->percent > max_percent)
+ max_percent = data->percent;
}

if (al->data_nr > nr_percent)
@@ -1351,18 +1351,18 @@ annotation_line__print(struct annotation_line *al, struct symbol *sym, u64 start
}

for (i = 0; i < nr_percent; i++) {
- struct annotation_data *sample = &al->data[i];
+ struct annotation_data *data = &al->data[i];

- color = get_percent_color(sample->percent);
+ color = get_percent_color(data->percent);

if (symbol_conf.show_total_period)
color_fprintf(stdout, color, " %11" PRIu64,
- sample->he.period);
+ data->he.period);
else if (symbol_conf.show_nr_samples)
color_fprintf(stdout, color, " %7" PRIu64,
- sample->he.nr_samples);
+ data->he.nr_samples);
else
- color_fprintf(stdout, color, " %7.2f", sample->percent);
+ color_fprintf(stdout, color, " %7.2f", data->percent);
}

printf(" : ");
@@ -1754,7 +1754,7 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
}

static void calc_percent(struct sym_hist *hist,
- struct annotation_data *sample,
+ struct annotation_data *data,
s64 offset, s64 end)
{
unsigned int hits = 0;
@@ -1767,9 +1767,9 @@ static void calc_percent(struct sym_hist *hist,
}

if (hist->nr_samples) {
- sample->he.period = period;
- sample->he.nr_samples = hits;
- sample->percent = 100.0 * hits / hist->nr_samples;
+ data->he.period = period;
+ data->he.nr_samples = hits;
+ data->percent = 100.0 * hits / hist->nr_samples;
}
}

@@ -1789,13 +1789,13 @@ static void annotation__calc_percent(struct annotation *notes,
end = next ? next->offset : len;

for (i = 0; i < al->data_nr; i++) {
- struct annotation_data *sample;
+ struct annotation_data *data;
struct sym_hist *hist;

- hist = annotation__histogram(notes, evsel->idx + i);
- sample = &al->data[i];
+ hist = annotation__histogram(notes, evsel->idx + i);
+ data = &al->data[i];

- calc_percent(hist, sample, al->offset, end);
+ calc_percent(hist, data, al->offset, end);
}
}
}
@@ -2356,12 +2356,12 @@ static void annotation__calc_lines(struct annotation *notes, struct map *map,
int i;

for (i = 0; i < al->data_nr; i++) {
- struct annotation_data *sample;
+ struct annotation_data *data;

- sample = &al->data[i];
+ data = &al->data[i];

- if (sample->percent > percent_max)
- percent_max = sample->percent;
+ if (data->percent > percent_max)
+ percent_max = data->percent;
}

if (percent_max <= 0.5)
--
2.14.4


2018-08-09 15:02:06

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 22/44] perf annotate: Add PERCENT_PERIOD_LOCAL percent value

From: Jiri Olsa <[email protected]>

Adding and computing the local period percent value for each annotation line,
storing it in the struct annotation_data percent array under the new
PERCENT_PERIOD_LOCAL index.

At the moment it's not displayed; that comes in the following patches.

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/annotate.c | 2 ++
tools/perf/util/annotate.h | 1 +
2 files changed, 3 insertions(+)

diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index b7485a512da1..b37e8cc18668 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -1781,6 +1781,8 @@ static void calc_percent(struct sym_hist *sym_hist,
if (hists->stats.nr_non_filtered_samples)
data->percent[PERCENT_HITS_GLOBAL] = 100.0 * hits / hists->stats.nr_non_filtered_samples;

+ if (sym_hist->period)
+ data->percent[PERCENT_PERIOD_LOCAL] = 100.0 * period / sym_hist->period;
}

static void annotation__calc_percent(struct annotation *notes,
diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
index 3a06cb2b6e28..890b6869caa9 100644
--- a/tools/perf/util/annotate.h
+++ b/tools/perf/util/annotate.h
@@ -104,6 +104,7 @@ struct sym_hist_entry {
enum {
PERCENT_HITS_LOCAL,
PERCENT_HITS_GLOBAL,
+ PERCENT_PERIOD_LOCAL,
PERCENT_MAX,
};

--
2.14.4


2018-08-09 15:02:16

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 24/44] perf annotate: Add percent_type to struct annotation_options

From: Jiri Olsa <[email protected]>

It will be used to carry the user's selection of percent type for the
annotation output.

As a first step, passing the percent_type to the annotation_line__print()
function and making it default to the current percentage type
(PERCENT_HITS_LOCAL).

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/annotate.c | 13 ++++++++-----
tools/perf/util/annotate.h | 1 +
2 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index e890164592b0..91528a065768 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -49,6 +49,7 @@ struct annotation_options annotation__default_options = {
.jump_arrows = true,
.annotate_src = true,
.offset_level = ANNOTATION__OFFSET_JUMP_TARGETS,
+ .percent_type = PERCENT_HITS_LOCAL,
};

static regex_t file_lineno;
@@ -1297,7 +1298,8 @@ static int disasm_line__print(struct disasm_line *dl, u64 start, int addr_fmt_wi
static int
annotation_line__print(struct annotation_line *al, struct symbol *sym, u64 start,
struct perf_evsel *evsel, u64 len, int min_pcnt, int printed,
- int max_lines, struct annotation_line *queue, int addr_fmt_width)
+ int max_lines, struct annotation_line *queue, int addr_fmt_width,
+ int percent_type)
{
struct disasm_line *dl = container_of(al, struct disasm_line, al);
static const char *prev_line;
@@ -1313,7 +1315,7 @@ annotation_line__print(struct annotation_line *al, struct symbol *sym, u64 start
double percent;

percent = annotation_data__percent(&al->data[i],
- PERCENT_HITS_LOCAL);
+ percent_type);

if (percent > max_percent)
max_percent = percent;
@@ -1333,7 +1335,8 @@ annotation_line__print(struct annotation_line *al, struct symbol *sym, u64 start
if (queue == al)
break;
annotation_line__print(queue, sym, start, evsel, len,
- 0, 0, 1, NULL, addr_fmt_width);
+ 0, 0, 1, NULL, addr_fmt_width,
+ percent_type);
}
}

@@ -1357,7 +1360,7 @@ annotation_line__print(struct annotation_line *al, struct symbol *sym, u64 start
struct annotation_data *data = &al->data[i];
double percent;

- percent = annotation_data__percent(data, PERCENT_HITS_LOCAL);
+ percent = annotation_data__percent(data, percent_type);
color = get_percent_color(percent);

if (symbol_conf.show_total_period)
@@ -2075,7 +2078,7 @@ int symbol__annotate_printf(struct symbol *sym, struct map *map,

err = annotation_line__print(pos, sym, start, evsel, len,
opts->min_pcnt, printed, opts->max_lines,
- queue, addr_fmt_width);
+ queue, addr_fmt_width, opts->percent_type);

switch (err) {
case 0:
diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
index 48fe2aa6b5a8..145dec845f33 100644
--- a/tools/perf/util/annotate.h
+++ b/tools/perf/util/annotate.h
@@ -82,6 +82,7 @@ struct annotation_options {
int context;
const char *objdump_path;
const char *disassembler_style;
+ unsigned int percent_type;
};

enum {
--
2.14.4


2018-08-09 15:02:18

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 25/44] perf annotate: Pass struct annotation_options to symbol__calc_lines()

From: Jiri Olsa <[email protected]>

Pass struct annotation_options to symbol__calc_lines(), to carry on and
pass the percent_type value.

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/annotate.c | 23 +++++++++++++----------
1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 91528a065768..2b06476c79c2 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -1868,7 +1868,8 @@ int symbol__annotate(struct symbol *sym, struct map *map,
return symbol__disassemble(sym, &args);
}

-static void insert_source_line(struct rb_root *root, struct annotation_line *al)
+static void insert_source_line(struct rb_root *root, struct annotation_line *al,
+ struct annotation_options *opts)
{
struct annotation_line *iter;
struct rb_node **p = &root->rb_node;
@@ -1883,7 +1884,7 @@ static void insert_source_line(struct rb_root *root, struct annotation_line *al)
if (ret == 0) {
for (i = 0; i < al->data_nr; i++) {
iter->data[i].percent_sum += annotation_data__percent(&al->data[i],
- PERCENT_HITS_LOCAL);
+ opts->percent_type);
}
return;
}
@@ -1896,7 +1897,7 @@ static void insert_source_line(struct rb_root *root, struct annotation_line *al)

for (i = 0; i < al->data_nr; i++) {
al->data[i].percent_sum = annotation_data__percent(&al->data[i],
- PERCENT_HITS_LOCAL);
+ opts->percent_type);
}

rb_link_node(&al->rb_node, parent, p);
@@ -2372,7 +2373,8 @@ void annotation__update_column_widths(struct annotation *notes)
}

static void annotation__calc_lines(struct annotation *notes, struct map *map,
- struct rb_root *root)
+ struct rb_root *root,
+ struct annotation_options *opts)
{
struct annotation_line *al;
struct rb_root tmp_root = RB_ROOT;
@@ -2385,7 +2387,7 @@ static void annotation__calc_lines(struct annotation *notes, struct map *map,
double percent;

percent = annotation_data__percent(&al->data[i],
- PERCENT_HITS_LOCAL);
+ opts->percent_type);

if (percent > percent_max)
percent_max = percent;
@@ -2396,18 +2398,19 @@ static void annotation__calc_lines(struct annotation *notes, struct map *map,

al->path = get_srcline(map->dso, notes->start + al->offset, NULL,
false, true, notes->start + al->offset);
- insert_source_line(&tmp_root, al);
+ insert_source_line(&tmp_root, al, opts);
}

resort_source_line(root, &tmp_root);
}

static void symbol__calc_lines(struct symbol *sym, struct map *map,
- struct rb_root *root)
+ struct rb_root *root,
+ struct annotation_options *opts)
{
struct annotation *notes = symbol__annotation(sym);

- annotation__calc_lines(notes, map, root);
+ annotation__calc_lines(notes, map, root, opts);
}

int symbol__tty_annotate2(struct symbol *sym, struct map *map,
@@ -2424,7 +2427,7 @@ int symbol__tty_annotate2(struct symbol *sym, struct map *map,

if (opts->print_lines) {
srcline_full_filename = opts->full_path;
- symbol__calc_lines(sym, map, &source_line);
+ symbol__calc_lines(sym, map, &source_line, opts);
print_summary(&source_line, dso->long_name);
}

@@ -2451,7 +2454,7 @@ int symbol__tty_annotate(struct symbol *sym, struct map *map,

if (opts->print_lines) {
srcline_full_filename = opts->full_path;
- symbol__calc_lines(sym, map, &source_line);
+ symbol__calc_lines(sym, map, &source_line, opts);
print_summary(&source_line, dso->long_name);
}

--
2.14.4


2018-08-09 15:02:27

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 26/44] perf annotate: Pass 'struct annotation_options' to map_symbol__annotation_dump()

From: Jiri Olsa <[email protected]>

Pass 'struct annotation_options' to map_symbol__annotation_dump(), to
carry on and pass the percent_type value.

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/ui/browsers/annotate.c | 4 ++--
tools/perf/util/annotate.c | 42 +++++++++++++++++++++------------------
tools/perf/util/annotate.h | 6 ++++--
3 files changed, 29 insertions(+), 23 deletions(-)

diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
index 81876c3923d2..cfe611c28987 100644
--- a/tools/perf/ui/browsers/annotate.c
+++ b/tools/perf/ui/browsers/annotate.c
@@ -115,7 +115,7 @@ static void annotate_browser__write(struct ui_browser *browser, void *entry, int
if (!browser->navkeypressed)
ops.width += 1;

- annotation_line__write(al, notes, &ops);
+ annotation_line__write(al, notes, &ops, ab->opts);

if (ops.current_entry)
ab->selection = al;
@@ -783,7 +783,7 @@ static int annotate_browser__run(struct annotate_browser *browser,
continue;
}
case 'P':
- map_symbol__annotation_dump(ms, evsel);
+ map_symbol__annotation_dump(ms, evsel, browser->opts);
continue;
case 't':
if (notes->options->show_total_period) {
diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 2b06476c79c2..850958bb613a 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -2156,10 +2156,11 @@ static void FILE__write_graph(void *fp, int graph)
fputs(s, fp);
}

-static int symbol__annotate_fprintf2(struct symbol *sym, FILE *fp)
+static int symbol__annotate_fprintf2(struct symbol *sym, FILE *fp,
+ struct annotation_options *opts)
{
struct annotation *notes = symbol__annotation(sym);
- struct annotation_write_ops ops = {
+ struct annotation_write_ops wops = {
.first_line = true,
.obj = fp,
.set_color = FILE__set_color,
@@ -2173,15 +2174,16 @@ static int symbol__annotate_fprintf2(struct symbol *sym, FILE *fp)
list_for_each_entry(al, &notes->src->source, node) {
if (annotation_line__filter(al, notes))
continue;
- annotation_line__write(al, notes, &ops);
+ annotation_line__write(al, notes, &wops, opts);
fputc('\n', fp);
- ops.first_line = false;
+ wops.first_line = false;
}

return 0;
}

-int map_symbol__annotation_dump(struct map_symbol *ms, struct perf_evsel *evsel)
+int map_symbol__annotation_dump(struct map_symbol *ms, struct perf_evsel *evsel,
+ struct annotation_options *opts)
{
const char *ev_name = perf_evsel__name(evsel);
char buf[1024];
@@ -2203,7 +2205,7 @@ int map_symbol__annotation_dump(struct map_symbol *ms, struct perf_evsel *evsel)

fprintf(fp, "%s() %s\nEvent: %s\n\n",
ms->sym->name, ms->map->dso->long_name, ev_name);
- symbol__annotate_fprintf2(ms->sym, fp);
+ symbol__annotate_fprintf2(ms->sym, fp, opts);

fclose(fp);
err = 0;
@@ -2433,7 +2435,7 @@ int symbol__tty_annotate2(struct symbol *sym, struct map *map,

hists__scnprintf_title(hists, buf, sizeof(buf));
fprintf(stdout, "%s\n%s() %s\n", buf, sym->name, dso->long_name);
- symbol__annotate_fprintf2(sym, stdout);
+ symbol__annotate_fprintf2(sym, stdout, opts);

annotated_source__purge(symbol__annotation(sym)->src);

@@ -2472,7 +2474,8 @@ bool ui__has_annotation(void)


static double annotation_line__max_percent(struct annotation_line *al,
- struct annotation *notes)
+ struct annotation *notes,
+ unsigned int percent_type)
{
double percent_max = 0.0;
int i;
@@ -2481,7 +2484,7 @@ static double annotation_line__max_percent(struct annotation_line *al,
double percent;

percent = annotation_data__percent(&al->data[i],
- PERCENT_HITS_LOCAL);
+ percent_type);

if (percent > percent_max)
percent_max = percent;
@@ -2523,7 +2526,7 @@ static void disasm_line__write(struct disasm_line *dl, struct annotation *notes,

static void __annotation_line__write(struct annotation_line *al, struct annotation *notes,
bool first_line, bool current_entry, bool change_color, int width,
- void *obj,
+ void *obj, unsigned int percent_type,
int (*obj__set_color)(void *obj, int color),
void (*obj__set_percent_color)(void *obj, double percent, bool current),
int (*obj__set_jumps_percent_color)(void *obj, int nr, bool current),
@@ -2531,7 +2534,7 @@ static void __annotation_line__write(struct annotation_line *al, struct annotati
void (*obj__write_graph)(void *obj, int graph))

{
- double percent_max = annotation_line__max_percent(al, notes);
+ double percent_max = annotation_line__max_percent(al, notes, percent_type);
int pcnt_width = annotation__pcnt_width(notes),
cycles_width = annotation__cycles_width(notes);
bool show_title = false;
@@ -2552,8 +2555,7 @@ static void __annotation_line__write(struct annotation_line *al, struct annotati
for (i = 0; i < notes->nr_events; i++) {
double percent;

- percent = annotation_data__percent(&al->data[i],
- PERCENT_HITS_LOCAL);
+ percent = annotation_data__percent(&al->data[i], percent_type);

obj__set_percent_color(obj, percent, current_entry);
if (notes->options->show_total_period) {
@@ -2680,13 +2682,15 @@ static void __annotation_line__write(struct annotation_line *al, struct annotati
}

void annotation_line__write(struct annotation_line *al, struct annotation *notes,
- struct annotation_write_ops *ops)
+ struct annotation_write_ops *wops,
+ struct annotation_options *opts)
{
- __annotation_line__write(al, notes, ops->first_line, ops->current_entry,
- ops->change_color, ops->width, ops->obj,
- ops->set_color, ops->set_percent_color,
- ops->set_jumps_percent_color, ops->printf,
- ops->write_graph);
+ __annotation_line__write(al, notes, wops->first_line, wops->current_entry,
+ wops->change_color, wops->width, wops->obj,
+ opts->percent_type,
+ wops->set_color, wops->set_percent_color,
+ wops->set_jumps_percent_color, wops->printf,
+ wops->write_graph);
}

int symbol__annotate2(struct symbol *sym, struct map *map, struct perf_evsel *evsel,
diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
index 145dec845f33..3d4579e68d28 100644
--- a/tools/perf/util/annotate.h
+++ b/tools/perf/util/annotate.h
@@ -185,7 +185,8 @@ struct annotation_write_ops {
};

void annotation_line__write(struct annotation_line *al, struct annotation *notes,
- struct annotation_write_ops *ops);
+ struct annotation_write_ops *ops,
+ struct annotation_options *opts);

int __annotation__scnprintf_samples_period(struct annotation *notes,
char *bf, size_t size,
@@ -351,7 +352,8 @@ void symbol__annotate_zero_histogram(struct symbol *sym, int evidx);
void symbol__annotate_decay_histogram(struct symbol *sym, int evidx);
void annotated_source__purge(struct annotated_source *as);

-int map_symbol__annotation_dump(struct map_symbol *ms, struct perf_evsel *evsel);
+int map_symbol__annotation_dump(struct map_symbol *ms, struct perf_evsel *evsel,
+ struct annotation_options *opts);

bool ui__has_annotation(void);

--
2.14.4


2018-08-09 15:02:41

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 27/44] perf annotate: Pass browser percent_type in annotate_browser__calc_percent()

From: Jiri Olsa <[email protected]>

Pass browser percent_type in annotate_browser__calc_percent().

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/ui/browsers/annotate.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
index cfe611c28987..2a3a34d450d5 100644
--- a/tools/perf/ui/browsers/annotate.c
+++ b/tools/perf/ui/browsers/annotate.c
@@ -318,7 +318,7 @@ static void annotate_browser__calc_percent(struct annotate_browser *browser,
double percent;

percent = annotation_data__percent(&pos->al.data[i],
- PERCENT_HITS_LOCAL);
+ browser->opts->percent_type);

if (max_percent < percent)
max_percent = percent;
--
2.14.4


2018-08-09 15:02:43

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 23/44] perf annotate: Add PERCENT_PERIOD_GLOBAL percent value

From: Jiri Olsa <[email protected]>

Adding and computing the global period percent value for each annotation
line, storing it in the struct annotation_data percent array under the new
PERCENT_PERIOD_GLOBAL index.

At the moment it's not displayed; that comes in the following patches.
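
With this patch the percent array is fully populated; the two period-based
entries are computed as (denominators taken from the diff below):

	data->percent[PERCENT_PERIOD_LOCAL]  = 100.0 * period / sym_hist->period;
	data->percent[PERCENT_PERIOD_GLOBAL] = 100.0 * period / hists->stats.total_period;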

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/annotate.c | 3 +++
tools/perf/util/annotate.h | 1 +
2 files changed, 4 insertions(+)

diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index b37e8cc18668..e890164592b0 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -1783,6 +1783,9 @@ static void calc_percent(struct sym_hist *sym_hist,

if (sym_hist->period)
data->percent[PERCENT_PERIOD_LOCAL] = 100.0 * period / sym_hist->period;
+
+ if (hists->stats.total_period)
+ data->percent[PERCENT_PERIOD_GLOBAL] = 100.0 * period / hists->stats.total_period;
}

static void annotation__calc_percent(struct annotation *notes,
diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
index 890b6869caa9..48fe2aa6b5a8 100644
--- a/tools/perf/util/annotate.h
+++ b/tools/perf/util/annotate.h
@@ -105,6 +105,7 @@ enum {
PERCENT_HITS_LOCAL,
PERCENT_HITS_GLOBAL,
PERCENT_PERIOD_LOCAL,
+ PERCENT_PERIOD_GLOBAL,
PERCENT_MAX,
};

--
2.14.4


2018-08-09 15:02:53

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 18/44] perf annotate: Rename hist to sym_hist in annotation__calc_percent

From: Jiri Olsa <[email protected]>

We will need to bring a 'struct hists' variable into this scope, so it's
better to do this rename first.

Signed-off-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/annotate.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 8bd278a71004..728603636adc 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -1753,7 +1753,7 @@ static int symbol__disassemble(struct symbol *sym, struct annotate_args *args)
goto out_free_command;
}

-static void calc_percent(struct sym_hist *hist,
+static void calc_percent(struct sym_hist *sym_hist,
struct annotation_data *data,
s64 offset, s64 end)
{
@@ -1761,15 +1761,15 @@ static void calc_percent(struct sym_hist *hist,
u64 period = 0;

while (offset < end) {
- hits += hist->addr[offset].nr_samples;
- period += hist->addr[offset].period;
+ hits += sym_hist->addr[offset].nr_samples;
+ period += sym_hist->addr[offset].period;
++offset;
}

- if (hist->nr_samples) {
+ if (sym_hist->nr_samples) {
data->he.period = period;
data->he.nr_samples = hits;
- data->percent = 100.0 * hits / hist->nr_samples;
+ data->percent = 100.0 * hits / sym_hist->nr_samples;
}
}

@@ -1790,12 +1790,12 @@ static void annotation__calc_percent(struct annotation *notes,

for (i = 0; i < al->data_nr; i++) {
struct annotation_data *data;
- struct sym_hist *hist;
+ struct sym_hist *sym_hist;

- hist = annotation__histogram(notes, evsel->idx + i);
+ sym_hist = annotation__histogram(notes, evsel->idx + i);
data = &al->data[i];

- calc_percent(hist, data, al->offset, end);
+ calc_percent(sym_hist, data, al->offset, end);
}
}
}
--
2.14.4


2018-08-09 15:02:56

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 29/44] perf annotate: Make local period the default percent type

From: Jiri Olsa <[email protected]>

Currently we display the percentages in the annotation output based on the
number of sample hits. Switching the default to period-based percentages,
because they correspond more closely to the time spent on the line.

Signed-off-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/annotate.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 850958bb613a..05d15629afd0 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -49,7 +49,7 @@ struct annotation_options annotation__default_options = {
.jump_arrows = true,
.annotate_src = true,
.offset_level = ANNOTATION__OFFSET_JUMP_TARGETS,
- .percent_type = PERCENT_HITS_LOCAL,
+ .percent_type = PERCENT_PERIOD_LOCAL,
};

static regex_t file_lineno;
--
2.14.4


2018-08-09 15:02:59

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 30/44] perf annotate: Display percent type in stdio output

From: Jiri Olsa <[email protected]>

In the following patches we will allow switching the percent type even for
the stdio annotation outputs. Adding the percent type value to the annotation
output titles.

$ perf annotate --stdio
Percent | Sou ... instructions:u } (2805 samples, percent: local period)
--------------------------- ... ------------------------------------------------------
...

$ perf annotate --stdio2
Samples: 2K of events 'anon ... count (approx.): 156525487, [percent: local period]
safe_write.c() /usr/bin/yes
Percent
...

Signed-off-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/annotate.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 05d15629afd0..6316fa96d984 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -2056,10 +2056,12 @@ int symbol__annotate_printf(struct symbol *sym, struct map *map,
evsel_name = buf;
}

- graph_dotted_len = printf(" %-*.*s| Source code & Disassembly of %s for %s (%" PRIu64 " samples)\n",
+ graph_dotted_len = printf(" %-*.*s| Source code & Disassembly of %s for %s (%" PRIu64 " samples, "
+ "percent: %s)\n",
width, width, symbol_conf.show_total_period ? "Period" :
symbol_conf.show_nr_samples ? "Samples" : "Percent",
- d_filename, evsel_name, h->nr_samples);
+ d_filename, evsel_name, h->nr_samples,
+ percent_type_str(opts->percent_type));

printf("%-*.*s----\n",
graph_dotted_len, graph_dotted_len, graph_dotted_line);
@@ -2434,7 +2436,8 @@ int symbol__tty_annotate2(struct symbol *sym, struct map *map,
}

hists__scnprintf_title(hists, buf, sizeof(buf));
- fprintf(stdout, "%s\n%s() %s\n", buf, sym->name, dso->long_name);
+ fprintf(stdout, "%s, [percent: %s]\n%s() %s\n",
+ buf, percent_type_str(opts->percent_type), sym->name, dso->long_name);
symbol__annotate_fprintf2(sym, stdout, opts);

annotated_source__purge(symbol__annotation(sym)->src);
--
2.14.4


2018-08-09 15:03:04

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 31/44] perf annotate: Add --percent-type option

From: Jiri Olsa <[email protected]>

Add --percent-type option to set annotation percent type from following
choices:

global-period, local-period, global-hits, local-hits

Examples:

$ perf annotate --percent-type period-local --stdio | head -1
Percent | Source code ... es, percent: local period)
$ perf annotate --percent-type hits-local --stdio | head -1
Percent | Source code ... es, percent: local hits)
$ perf annotate --percent-type hits-global --stdio | head -1
Percent | Source code ... es, percent: global hits)
$ perf annotate --percent-type period-global --stdio | head -1
Percent | Source code ... es, percent: global period)

The local/global keywords select whether the percentage is computed in the
scope of the function (local) or of the whole data (global).

The period/hits keywords select the base the percentage is computed on:
the sample period or the number of samples (hits).
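
For reference, annotate_parse_percent_type() below splits the string on '-'
and accepts the two keywords in either order, mapping:

	local-hits    / hits-local    -> PERCENT_HITS_LOCAL
	global-hits   / hits-global   -> PERCENT_HITS_GLOBAL
	local-period  / period-local  -> PERCENT_PERIOD_LOCAL
	global-period / period-global -> PERCENT_PERIOD_GLOBAL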

Signed-off-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/Documentation/perf-annotate.txt | 9 ++++++
tools/perf/builtin-annotate.c | 4 +++
tools/perf/util/annotate.c | 52 ++++++++++++++++++++++++++++++
tools/perf/util/annotate.h | 2 ++
4 files changed, 67 insertions(+)

diff --git a/tools/perf/Documentation/perf-annotate.txt b/tools/perf/Documentation/perf-annotate.txt
index 749cc6055dac..e8c972f89357 100644
--- a/tools/perf/Documentation/perf-annotate.txt
+++ b/tools/perf/Documentation/perf-annotate.txt
@@ -118,6 +118,15 @@ OPTIONS
--group::
Show event group information together

+--percent-type::
+ Set annotation percent type from following choices:
+ global-period, local-period, global-hits, local-hits
+
+ The local/global keywords set if the percentage is computed
+ in the scope of the function (local) or the whole data (global).
+ The period/hits keywords set the base the percentage is computed
+ on - the samples period or the number of samples (hits).
+
SEE ALSO
--------
linkperf:perf-record[1], linkperf:perf-report[1]
diff --git a/tools/perf/builtin-annotate.c b/tools/perf/builtin-annotate.c
index 8180319285af..830481b8db26 100644
--- a/tools/perf/builtin-annotate.c
+++ b/tools/perf/builtin-annotate.c
@@ -542,6 +542,10 @@ int cmd_annotate(int argc, const char **argv)
OPT_CALLBACK_DEFAULT(0, "stdio-color", NULL, "mode",
"'always' (default), 'never' or 'auto' only applicable to --stdio mode",
stdio__config_color, "always"),
+ OPT_CALLBACK(0, "percent-type", &annotate.opts, "local-period",
+ "Set percent type local/global-period/hits",
+ annotate_parse_percent_type),
+
OPT_END()
};
int ret;
diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
index 6316fa96d984..e4268b948e0e 100644
--- a/tools/perf/util/annotate.c
+++ b/tools/perf/util/annotate.c
@@ -2799,3 +2799,55 @@ void annotation_config__init(void)
annotation__default_options.show_total_period = symbol_conf.show_total_period;
annotation__default_options.show_nr_samples = symbol_conf.show_nr_samples;
}
+
+static unsigned int parse_percent_type(char *str1, char *str2)
+{
+ unsigned int type = (unsigned int) -1;
+
+ if (!strcmp("period", str1)) {
+ if (!strcmp("local", str2))
+ type = PERCENT_PERIOD_LOCAL;
+ else if (!strcmp("global", str2))
+ type = PERCENT_PERIOD_GLOBAL;
+ }
+
+ if (!strcmp("hits", str1)) {
+ if (!strcmp("local", str2))
+ type = PERCENT_HITS_LOCAL;
+ else if (!strcmp("global", str2))
+ type = PERCENT_HITS_GLOBAL;
+ }
+
+ return type;
+}
+
+int annotate_parse_percent_type(const struct option *opt, const char *_str,
+ int unset __maybe_unused)
+{
+ struct annotation_options *opts = opt->value;
+ unsigned int type;
+ char *str1, *str2;
+ int err = -1;
+
+ str1 = strdup(_str);
+ if (!str1)
+ return -ENOMEM;
+
+ str2 = strchr(str1, '-');
+ if (!str2)
+ goto out;
+
+ *str2++ = 0;
+
+ type = parse_percent_type(str1, str2);
+ if (type == (unsigned int) -1)
+ type = parse_percent_type(str2, str1);
+ if (type != (unsigned int) -1) {
+ opts->percent_type = type;
+ err = 0;
+ }
+
+out:
+ free(str1);
+ return err;
+}
diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
index 760a6678edff..005a5fe8a8c6 100644
--- a/tools/perf/util/annotate.h
+++ b/tools/perf/util/annotate.h
@@ -397,4 +397,6 @@ static inline int symbol__tui_annotate(struct symbol *sym __maybe_unused,

void annotation_config__init(void);

+int annotate_parse_percent_type(const struct option *opt, const char *_str,
+ int unset);
#endif /* __PERF_ANNOTATE_H */
--
2.14.4


2018-08-09 15:03:13

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 34/44] perf bpf: Add bpf/stdio.h wrapper to bpf_perf_event_output function

From: Arnaldo Carvalho de Melo <[email protected]>

That, together with the __bpf_stdout__ map that is already handled by
'perf trace' to print that event's contents as strings, provides a debugging
facility. To show it in use, print a simple string every time the
syscalls:sys_enter_openat() syscall tracepoint is hit:

# cat tools/perf/examples/bpf/hello.c
#include <stdio.h>

int syscall_enter(openat)(void *args)
{
puts("Hello, world\n");
return 0;
}

license(GPL);
#
# perf trace -e openat,tools/perf/examples/bpf/hello.c cat /etc/passwd > /dev/null
0.016 ( ): __bpf_stdout__:Hello, world
0.018 ( 0.010 ms): cat/9079 openat(dfd: CWD, filename: /etc/ld.so.cache, flags: CLOEXEC) = 3
0.057 ( ): __bpf_stdout__:Hello, world
0.059 ( 0.011 ms): cat/9079 openat(dfd: CWD, filename: /lib64/libc.so.6, flags: CLOEXEC) = 3
0.417 ( ): __bpf_stdout__:Hello, world
0.419 ( 0.009 ms): cat/9079 openat(dfd: CWD, filename: /etc/passwd) = 3
#

This is part of an ongoing experiment in making the eBPF scripts consumed by
perf as concise as possible, using familiar concepts such as stdio.h
functions that end up just wrapping the existing BPF functions, hiding as
much boilerplate as possible with nothing more than conventions and C
preprocessor tricks.
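
For illustration, puts("Hello, world\n") in the example above expands to
roughly the following (note that the macro relies on the enclosing handler
naming its tracepoint argument 'args'):

	({ const int __len = sizeof("Hello, world\n");	/* 14, NUL included */
	   char __from[__len] = "Hello, world\n";
	   perf_event_output(args, &__bpf_stdout__, BPF_F_CURRENT_CPU,
			     &__from, __len & (sizeof("Hello, world\n") - 1)); })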

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/examples/bpf/hello.c | 9 +++++++++
tools/perf/include/bpf/stdio.h | 19 +++++++++++++++++++
2 files changed, 28 insertions(+)
create mode 100644 tools/perf/examples/bpf/hello.c
create mode 100644 tools/perf/include/bpf/stdio.h

diff --git a/tools/perf/examples/bpf/hello.c b/tools/perf/examples/bpf/hello.c
new file mode 100644
index 000000000000..cf3c2fdc7f79
--- /dev/null
+++ b/tools/perf/examples/bpf/hello.c
@@ -0,0 +1,9 @@
+#include <stdio.h>
+
+int syscall_enter(openat)(void *args)
+{
+ puts("Hello, world\n");
+ return 0;
+}
+
+license(GPL);
diff --git a/tools/perf/include/bpf/stdio.h b/tools/perf/include/bpf/stdio.h
new file mode 100644
index 000000000000..2899cb7bfed8
--- /dev/null
+++ b/tools/perf/include/bpf/stdio.h
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <bpf.h>
+
+struct bpf_map SEC("maps") __bpf_stdout__ = {
+ .type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
+ .key_size = sizeof(int),
+ .value_size = sizeof(u32),
+ .max_entries = __NR_CPUS__,
+};
+
+static int (*perf_event_output)(void *, struct bpf_map *, int, void *, unsigned long) =
+ (void *)BPF_FUNC_perf_event_output;
+
+#define puts(from) \
+ ({ const int __len = sizeof(from); \
+ char __from[__len] = from; \
+ perf_event_output(args, &__bpf_stdout__, BPF_F_CURRENT_CPU, \
+ &__from, __len & (sizeof(from) - 1)); })
--
2.14.4


2018-08-09 15:03:16

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 35/44] perf bpf: Make bpf__for_each_stdout_map() generic

From: Arnaldo Carvalho de Melo <[email protected]>

Pass a 'name' arg, which will eventually be used to set up more "bpf-output"
events, e.g. to create an event carrying raw_syscalls-like payloads that, in
addition to the syscall arguments, also include the contents of the pointers
being passed from/to userspace.
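
A sketch of the intended use, with "__augmented_syscalls__" being the map
name used later in this series:

	struct bpf_object *obj, *objtmp;
	struct bpf_map *pos;

	bpf__for_each_map_named(pos, obj, objtmp, "__augmented_syscalls__") {
		/* set up a "bpf-output" event for this map, like
		 * bpf__setup_stdout() does for "__bpf_stdout__" */
	}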

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/bpf-loader.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 3d02ae38ec56..e864a7e0ff12 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -1529,12 +1529,15 @@ int bpf__apply_obj_config(void)
bpf_object__for_each_safe(obj, objtmp) \
bpf_map__for_each(pos, obj)

-#define bpf__for_each_stdout_map(pos, obj, objtmp) \
+#define bpf__for_each_map_named(pos, obj, objtmp, name) \
bpf__for_each_map(pos, obj, objtmp) \
if (bpf_map__name(pos) && \
- (strcmp("__bpf_stdout__", \
+ (strcmp(name, \
bpf_map__name(pos)) == 0))

+#define bpf__for_each_stdout_map(pos, obj, objtmp) \
+ bpf__for_each_map_named(pos, obj, objtmp, "__bpf_stdout__")
+
int bpf__setup_stdout(struct perf_evlist *evlist)
{
struct bpf_map_priv *tmpl_priv = NULL;
--
2.14.4


2018-08-09 15:03:22

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 28/44] perf annotate: Add support to toggle percent type

From: Jiri Olsa <[email protected]>

Add new key bindings to toggle the percent type/base in the annotation UI
browser:

'p' to switch between local and global percent type
'b' to switch between hits and period percent base
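
For reference, switch_percent_type() below cycles opts->percent_type as:

	'p' (toggle local/global):  PERCENT_HITS_LOCAL   <-> PERCENT_HITS_GLOBAL
	                            PERCENT_PERIOD_LOCAL <-> PERCENT_PERIOD_GLOBAL
	'b' (toggle hits/period):   PERCENT_HITS_LOCAL   <-> PERCENT_PERIOD_LOCAL
	                            PERCENT_HITS_GLOBAL  <-> PERCENT_PERIOD_GLOBAL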

Add the following help messages to the UI browser '?' window:

...
p Toggle percent type [local/global]
b Toggle percent base [period/hits]
...

Signed-off-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
[ Moved percent_type to be the last arg to sym_title(), it's an arg to what is being formatted (buf, size) ]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/ui/browsers/annotate.c | 52 ++++++++++++++++++++++++++++++++++++---
tools/perf/util/annotate.h | 16 ++++++++++++
2 files changed, 64 insertions(+), 4 deletions(-)

diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c
index 2a3a34d450d5..1d00e5ec7906 100644
--- a/tools/perf/ui/browsers/annotate.c
+++ b/tools/perf/ui/browsers/annotate.c
@@ -15,6 +15,7 @@
#include <linux/kernel.h>
#include <linux/string.h>
#include <sys/ttydefaults.h>
+#include <asm/bug.h>

struct disasm_line_samples {
double percent;
@@ -383,9 +384,10 @@ static void ui_browser__init_asm_mode(struct ui_browser *browser)
#define SYM_TITLE_MAX_SIZE (PATH_MAX + 64)

static int sym_title(struct symbol *sym, struct map *map, char *title,
- size_t sz)
+ size_t sz, int percent_type)
{
- return snprintf(title, sz, "%s %s", sym->name, map->dso->long_name);
+ return snprintf(title, sz, "%s %s [Percent: %s]", sym->name, map->dso->long_name,
+ percent_type_str(percent_type));
}

/*
@@ -423,7 +425,7 @@ static bool annotate_browser__callq(struct annotate_browser *browser,

pthread_mutex_unlock(&notes->lock);
symbol__tui_annotate(dl->ops.target.sym, ms->map, evsel, hbt, browser->opts);
- sym_title(ms->sym, ms->map, title, sizeof(title));
+ sym_title(ms->sym, ms->map, title, sizeof(title), browser->opts->percent_type);
ui_browser__show_title(&browser->b, title);
return true;
}
@@ -598,6 +600,7 @@ bool annotate_browser__continue_search_reverse(struct annotate_browser *browser,

static int annotate_browser__show(struct ui_browser *browser, char *title, const char *help)
{
+ struct annotate_browser *ab = container_of(browser, struct annotate_browser, b);
struct map_symbol *ms = browser->priv;
struct symbol *sym = ms->sym;
char symbol_dso[SYM_TITLE_MAX_SIZE];
@@ -605,7 +608,7 @@ static int annotate_browser__show(struct ui_browser *browser, char *title, const
if (ui_browser__show(browser, title, help) < 0)
return -1;

- sym_title(sym, ms->map, symbol_dso, sizeof(symbol_dso));
+ sym_title(sym, ms->map, symbol_dso, sizeof(symbol_dso), ab->opts->percent_type);

ui_browser__gotorc_title(browser, 0, 0);
ui_browser__set_color(browser, HE_COLORSET_ROOT);
@@ -613,6 +616,39 @@ static int annotate_browser__show(struct ui_browser *browser, char *title, const
return 0;
}

+static void
+switch_percent_type(struct annotation_options *opts, bool base)
+{
+ switch (opts->percent_type) {
+ case PERCENT_HITS_LOCAL:
+ if (base)
+ opts->percent_type = PERCENT_PERIOD_LOCAL;
+ else
+ opts->percent_type = PERCENT_HITS_GLOBAL;
+ break;
+ case PERCENT_HITS_GLOBAL:
+ if (base)
+ opts->percent_type = PERCENT_PERIOD_GLOBAL;
+ else
+ opts->percent_type = PERCENT_HITS_LOCAL;
+ break;
+ case PERCENT_PERIOD_LOCAL:
+ if (base)
+ opts->percent_type = PERCENT_HITS_LOCAL;
+ else
+ opts->percent_type = PERCENT_PERIOD_GLOBAL;
+ break;
+ case PERCENT_PERIOD_GLOBAL:
+ if (base)
+ opts->percent_type = PERCENT_HITS_GLOBAL;
+ else
+ opts->percent_type = PERCENT_PERIOD_LOCAL;
+ break;
+ default:
+ WARN_ON(1);
+ }
+}
+
static int annotate_browser__run(struct annotate_browser *browser,
struct perf_evsel *evsel,
struct hist_browser_timer *hbt)
@@ -703,6 +739,8 @@ static int annotate_browser__run(struct annotate_browser *browser,
"k Toggle line numbers\n"
"P Print to [symbol_name].annotation file.\n"
"r Run available scripts\n"
+ "p Toggle percent type [local/global]\n"
+ "b Toggle percent base [period/hits]\n"
"? Search string backwards\n");
continue;
case 'r':
@@ -802,6 +840,12 @@ static int annotate_browser__run(struct annotate_browser *browser,
notes->options->show_minmax_cycle = true;
annotation__update_column_widths(notes);
continue;
+ case 'p':
+ case 'b':
+ switch_percent_type(browser->opts, key == 'b');
+ hists__scnprintf_title(hists, title, sizeof(title));
+ annotate_browser__show(&browser->b, title, help);
+ continue;
case K_LEFT:
case K_ESC:
case 'q':
diff --git a/tools/perf/util/annotate.h b/tools/perf/util/annotate.h
index 3d4579e68d28..760a6678edff 100644
--- a/tools/perf/util/annotate.h
+++ b/tools/perf/util/annotate.h
@@ -11,6 +11,7 @@
#include <linux/list.h>
#include <linux/rbtree.h>
#include <pthread.h>
+#include <asm/bug.h>

struct ins_ops;

@@ -149,6 +150,21 @@ static inline double annotation_data__percent(struct annotation_data *data,
return which < PERCENT_MAX ? data->percent[which] : -1;
}

+static inline const char *percent_type_str(unsigned int type)
+{
+ static const char *str[PERCENT_MAX] = {
+ "local hits",
+ "global hits",
+ "local period",
+ "global period",
+ };
+
+ if (WARN_ON(type >= PERCENT_MAX))
+ return "N/A";
+
+ return str[type];
+}
+
static inline struct disasm_line *disasm_line(struct annotation_line *al)
{
return al ? container_of(al, struct disasm_line, al) : NULL;
--
2.14.4


2018-08-09 15:03:24

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 37/44] perf bpf: Add bpf__setup_output_event() strerror() counterpart

From: Arnaldo Carvalho de Melo <[email protected]>

That is just bpf__strerror_setup_stdout() renamed to the more general
"setup_output_event" method, keeping the existing stdout() variant as a
wrapper.

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/bpf-loader.c | 4 ++--
tools/perf/util/bpf-loader.h | 15 +++++++++------
2 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 95a27bb6f1a1..80dead642719 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -1791,8 +1791,8 @@ int bpf__strerror_apply_obj_config(int err, char *buf, size_t size)
return 0;
}

-int bpf__strerror_setup_stdout(struct perf_evlist *evlist __maybe_unused,
- int err, char *buf, size_t size)
+int bpf__strerror_setup_output_event(struct perf_evlist *evlist __maybe_unused,
+ int err, char *buf, size_t size)
{
bpf__strerror_head(err, buf, size);
bpf__strerror_end(buf, size);
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index 6be0eec043c6..8eca75145ac2 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -83,9 +83,7 @@ int bpf__strerror_apply_obj_config(int err, char *buf, size_t size);

int bpf__setup_stdout(struct perf_evlist *evlist);
int bpf__setup_output_event(struct perf_evlist *evlist, const char *name);
-int bpf__strerror_setup_stdout(struct perf_evlist *evlist, int err,
- char *buf, size_t size);
-
+int bpf__strerror_setup_output_event(struct perf_evlist *evlist, int err, char *buf, size_t size);
#else
#include <errno.h>

@@ -200,11 +198,16 @@ bpf__strerror_apply_obj_config(int err __maybe_unused,
}

static inline int
-bpf__strerror_setup_stdout(struct perf_evlist *evlist __maybe_unused,
- int err __maybe_unused, char *buf,
- size_t size)
+bpf__strerror_setup_output_event(struct perf_evlist *evlist __maybe_unused,
+ int err __maybe_unused, char *buf, size_t size)
{
return __bpf_strerror(buf, size);
}
+
#endif
+
+static inline int bpf__strerror_setup_stdout(struct perf_evlist *evlist, int err, char *buf, size_t size)
+{
+ return bpf__strerror_setup_output_event(evlist, err, buf, size);
+}
#endif
--
2.14.4


2018-08-09 15:03:29

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 38/44] perf bpf: Add wrappers to BPF_FUNC_probe_read(_str) functions

From: Arnaldo Carvalho de Melo <[email protected]>

These wrappers will be used shortly in the augmented syscalls work,
together with a PERF_COUNT_SW_BPF_OUTPUT software event, to insert
syscall payloads plus pointer contents into the perf ring buffer, to be
consumed by the 'perf trace' beautifiers.
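
For reference, a minimal sketch of how these wrappers can be used from an
eBPF program built against this header; the tracepoint args layout below is
an illustrative assumption, the real augmented_syscalls.c example appears in
a followup patch in this series:

#include <stdio.h>

struct openat_args { /* hypothetical layout, for illustration only */
	long	syscall_nr;
	long	dfd;
	char	*filename_ptr;	/* userspace pointer we want to copy from */
};

int syscall_enter(openat)(struct openat_args *args)
{
	struct openat_args copy;
	char filename[64];

	/* copy the fixed-size tracepoint payload... */
	probe_read(&copy, sizeof(copy), args);
	/* ...and then the string the pointer argument points to */
	probe_read_str(filename, sizeof(filename), args->filename_ptr);
	return 0;
}

license(GPL);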

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/include/bpf/bpf.h | 3 +++
1 file changed, 3 insertions(+)

diff --git a/tools/perf/include/bpf/bpf.h b/tools/perf/include/bpf/bpf.h
index 1f632b56bb34..47897d65e799 100644
--- a/tools/perf/include/bpf/bpf.h
+++ b/tools/perf/include/bpf/bpf.h
@@ -30,4 +30,7 @@ struct bpf_map {
char _license[] SEC("license") = #name; \
int _version SEC("version") = LINUX_VERSION_CODE;

+static int (*probe_read)(void *dst, int size, const void *unsafe_addr) = (void *)BPF_FUNC_probe_read;
+static int (*probe_read_str)(void *dst, int size, const void *unsafe_addr) = (void *)BPF_FUNC_probe_read_str;
+
#endif /* _PERF_BPF_H */
--
2.14.4


2018-08-09 15:03:36

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 39/44] perf trace: Handle "bpf-output" events associated with "__augmented_syscalls__" BPF map

From: Arnaldo Carvalho de Melo <[email protected]>

Add an example BPF script that writes syscalls:sys_enter_openat raw
tracepoint payloads augmented with the first 64 bytes of the "filename"
syscall pointer arg.

Then catch it and print it just like things written to the
"__bpf_stdout__" map associated with a PERF_COUNT_SW_BPF_OUTPUT software
event, by letting the default tracepoint handler in 'perf trace',
trace__event_handler(), use bpf_output__fprintf(trace, sample), just as
it does with all other PERF_COUNT_SW_BPF_OUTPUT events, i.e. just dump
the payload, so that we can check whether what is being printed carries
at least the first 64 bytes of the "filename" arg:

The augmented_syscalls.c eBPF script:

# cat tools/perf/examples/bpf/augmented_syscalls.c
// SPDX-License-Identifier: GPL-2.0

#include <stdio.h>

struct bpf_map SEC("maps") __augmented_syscalls__ = {
.type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
.key_size = sizeof(int),
.value_size = sizeof(u32),
.max_entries = __NR_CPUS__,
};

struct syscall_enter_openat_args {
unsigned long long common_tp_fields;
long syscall_nr;
long dfd;
char *filename_ptr;
long flags;
long mode;
};

struct augmented_enter_openat_args {
struct syscall_enter_openat_args args;
char filename[64];
};

int syscall_enter(openat)(struct syscall_enter_openat_args *args)
{
struct augmented_enter_openat_args augmented_args;

probe_read(&augmented_args.args, sizeof(augmented_args.args), args);
probe_read_str(&augmented_args.filename, sizeof(augmented_args.filename), args->filename_ptr);
perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU,
&augmented_args, sizeof(augmented_args));
return 1;
}

license(GPL);
#

So it will just prepare a raw_syscalls:sys_enter payload for the
"openat" syscall.

This will eventually be done for all syscalls with pointer args, either
globally or only when the user asks for it, using some spec stating
which args of which syscalls should be "expanded" this way. We'll
probably start with the syscalls that have char * pointers with familiar
names, the ones we already handle with the probe:vfs_getname kprobe when
it is in place, hooking the kernel getname_flags() function used to copy
pathnames from user space.

Running it we get:

# perf trace -e perf/tools/perf/examples/bpf/augmented_syscalls.c,openat cat /etc/passwd > /dev/null
0.000 ( ): __augmented_syscalls__:X?.C......................`\..................../etc/ld.so.cache..#......,....ao.k...............k......1.".........
0.006 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0x5c600da8, flags: CLOEXEC
0.008 ( 0.005 ms): cat/31292 openat(dfd: CWD, filename: 0x5c600da8, flags: CLOEXEC ) = 3
0.036 ( ): __augmented_syscalls__:X?.C.......................\..................../lib64/libc.so.6......... .\....#........?.......=.C..../.".........
0.037 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0x5c808ce0, flags: CLOEXEC
0.039 ( 0.007 ms): cat/31292 openat(dfd: CWD, filename: 0x5c808ce0, flags: CLOEXEC ) = 3
0.323 ( ): __augmented_syscalls__:X?.C.....................P....................../etc/passwd......>.C....@................>.C.....,....ao.>.C........
0.325 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0xe8be50d6
0.327 ( 0.004 ms): cat/31292 openat(dfd: CWD, filename: 0xe8be50d6 ) = 3
#

We need to go on optimizing this to avoid sending trash or zeroes in the
pointer content payload, using the return value of bpf_probe_read_str(),
but to keep things simple at this stage and make incremental progress,
let's leave it at that for now.
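
A minimal sketch of that future optimization, using the length returned by
probe_read_str() to size the output record instead of always sending the
full 64 bytes; a real program would also need to clamp 'len' to keep the BPF
verifier happy, so this is illustrative only:

int syscall_enter(openat)(struct syscall_enter_openat_args *args)
{
	struct augmented_enter_openat_args augmented_args;
	int len;

	probe_read(&augmented_args.args, sizeof(augmented_args.args), args);
	/* probe_read_str() returns the number of bytes copied, including the NUL */
	len = probe_read_str(&augmented_args.filename,
			     sizeof(augmented_args.filename), args->filename_ptr);
	/* send the tracepoint payload plus only what was actually copied */
	perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU,
			  &augmented_args, sizeof(augmented_args.args) + len);
	return 1;
}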

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/builtin-trace.c | 7 ++++
tools/perf/examples/bpf/augmented_syscalls.c | 55 ++++++++++++++++++++++++++++
2 files changed, 62 insertions(+)
create mode 100644 tools/perf/examples/bpf/augmented_syscalls.c

diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 7232a7302580..9b4e24217c46 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -3240,6 +3240,13 @@ int cmd_trace(int argc, const char **argv)
"cgroup monitoring only available in system-wide mode");
}

+ err = bpf__setup_output_event(trace.evlist, "__augmented_syscalls__");
+ if (err) {
+ bpf__strerror_setup_output_event(trace.evlist, err, bf, sizeof(bf));
+ pr_err("ERROR: Setup trace syscalls enter failed: %s\n", bf);
+ goto out;
+ }
+
err = bpf__setup_stdout(trace.evlist);
if (err) {
bpf__strerror_setup_stdout(trace.evlist, err, bf, sizeof(bf));
diff --git a/tools/perf/examples/bpf/augmented_syscalls.c b/tools/perf/examples/bpf/augmented_syscalls.c
new file mode 100644
index 000000000000..69a31386d8cd
--- /dev/null
+++ b/tools/perf/examples/bpf/augmented_syscalls.c
@@ -0,0 +1,55 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Augment the openat syscall with the contents of the filename pointer argument.
+ *
+ * Test it with:
+ *
+ * perf trace -e tools/perf/examples/bpf/augmented_syscalls.c cat /etc/passwd > /dev/null
+ *
+ * It'll catch some openat syscalls related to the dynamic linker and
+ * the last one should be the one for '/etc/passwd'.
+ *
+ * This matches what is marshalled into the raw_syscalls:sys_enter payload
+ * expected by the 'perf trace' beautifiers, and can be used by them unmodified,
+ * which will be done as that feature is implemented in the next csets. For now
+ * it will appear in a dump done by the default tracepoint handler in 'perf trace',
+ * which uses bpf_output__fprintf() to just dump those contents, as done with
+ * the bpf-output event associated with the __bpf_stdout__ map declared in
+ * tools/perf/include/bpf/stdio.h.
+ */
+
+#include <stdio.h>
+
+struct bpf_map SEC("maps") __augmented_syscalls__ = {
+ .type = BPF_MAP_TYPE_PERF_EVENT_ARRAY,
+ .key_size = sizeof(int),
+ .value_size = sizeof(u32),
+ .max_entries = __NR_CPUS__,
+};
+
+struct syscall_enter_openat_args {
+ unsigned long long common_tp_fields;
+ long syscall_nr;
+ long dfd;
+ char *filename_ptr;
+ long flags;
+ long mode;
+};
+
+struct augmented_enter_openat_args {
+ struct syscall_enter_openat_args args;
+ char filename[64];
+};
+
+int syscall_enter(openat)(struct syscall_enter_openat_args *args)
+{
+ struct augmented_enter_openat_args augmented_args;
+
+ probe_read(&augmented_args.args, sizeof(augmented_args.args), args);
+ probe_read_str(&augmented_args.filename, sizeof(augmented_args.filename), args->filename_ptr);
+ perf_event_output(args, &__augmented_syscalls__, BPF_F_CURRENT_CPU,
+ &augmented_args, sizeof(augmented_args));
+ return 1;
+}
+
+license(GPL);
--
2.14.4


2018-08-09 15:03:42

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 40/44] perf bpf: Make bpf__setup_output_event() return the bpf-output event

From: Arnaldo Carvalho de Melo <[email protected]>

We're calling it to set up that event, and we'll need it later to decide
whether the bpf-output event we're handling is the one set up for a
specific purpose, so return it, using ERR_PTR() etc. to report errors.
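
On the caller side this follows the usual kernel ERR_PTR()/IS_ERR()/PTR_ERR()
convention, roughly like this (simplified from the builtin-trace.c hunk
below):

	struct perf_evsel *evsel;

	evsel = bpf__setup_output_event(evlist, "__augmented_syscalls__");
	if (IS_ERR(evsel))
		return PTR_ERR(evsel);	/* a real error, decode it for strerror */
	if (evsel == NULL)
		return 0;		/* no new bpf-output event was needed */
	/* otherwise, evsel is the bpf-output event set up for that map */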

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/builtin-trace.c | 9 +++++----
tools/perf/util/bpf-loader.c | 23 ++++++++++++-----------
tools/perf/util/bpf-loader.h | 7 ++++---
3 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 9b4e24217c46..43a699cfcadf 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -3216,8 +3216,9 @@ int cmd_trace(int argc, const char **argv)
};
bool __maybe_unused max_stack_user_set = true;
bool mmap_pages_user_set = true;
+ struct perf_evsel *evsel;
const char * const trace_subcommands[] = { "record", NULL };
- int err;
+ int err = -1;
char bf[BUFSIZ];

signal(SIGSEGV, sighandler_dump_stack);
@@ -3240,9 +3241,9 @@ int cmd_trace(int argc, const char **argv)
"cgroup monitoring only available in system-wide mode");
}

- err = bpf__setup_output_event(trace.evlist, "__augmented_syscalls__");
- if (err) {
- bpf__strerror_setup_output_event(trace.evlist, err, bf, sizeof(bf));
+ evsel = bpf__setup_output_event(trace.evlist, "__augmented_syscalls__");
+ if (IS_ERR(evsel)) {
+ bpf__strerror_setup_output_event(trace.evlist, PTR_ERR(evsel), bf, sizeof(bf));
pr_err("ERROR: Setup trace syscalls enter failed: %s\n", bf);
goto out;
}
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 80dead642719..47aac41349a2 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -1535,7 +1535,7 @@ int bpf__apply_obj_config(void)
(strcmp(name, \
bpf_map__name(pos)) == 0))

-int bpf__setup_output_event(struct perf_evlist *evlist, const char *name)
+struct perf_evsel *bpf__setup_output_event(struct perf_evlist *evlist, const char *name)
{
struct bpf_map_priv *tmpl_priv = NULL;
struct bpf_object *obj, *tmp;
@@ -1548,7 +1548,7 @@ int bpf__setup_output_event(struct perf_evlist *evlist, const char *name)
struct bpf_map_priv *priv = bpf_map__priv(map);

if (IS_ERR(priv))
- return -BPF_LOADER_ERRNO__INTERNAL;
+ return ERR_PTR(-BPF_LOADER_ERRNO__INTERNAL);

/*
* No need to check map type: type should have been
@@ -1561,20 +1561,20 @@ int bpf__setup_output_event(struct perf_evlist *evlist, const char *name)
}

if (!need_init)
- return 0;
+ return NULL;

if (!tmpl_priv) {
char *event_definition = NULL;

if (asprintf(&event_definition, "bpf-output/no-inherit=1,name=%s/", name) < 0)
- return -ENOMEM;
+ return ERR_PTR(-ENOMEM);

err = parse_events(evlist, event_definition, NULL);
free(event_definition);

if (err) {
pr_debug("ERROR: failed to create the \"%s\" bpf-output event\n", name);
- return -err;
+ return ERR_PTR(-err);
}

evsel = perf_evlist__last(evlist);
@@ -1584,37 +1584,38 @@ int bpf__setup_output_event(struct perf_evlist *evlist, const char *name)
struct bpf_map_priv *priv = bpf_map__priv(map);

if (IS_ERR(priv))
- return -BPF_LOADER_ERRNO__INTERNAL;
+ return ERR_PTR(-BPF_LOADER_ERRNO__INTERNAL);
if (priv)
continue;

if (tmpl_priv) {
priv = bpf_map_priv__clone(tmpl_priv);
if (!priv)
- return -ENOMEM;
+ return ERR_PTR(-ENOMEM);

err = bpf_map__set_priv(map, priv, bpf_map_priv__clear);
if (err) {
bpf_map_priv__clear(map, priv);
- return err;
+ return ERR_PTR(err);
}
} else if (evsel) {
struct bpf_map_op *op;

op = bpf_map__add_newop(map, NULL);
if (IS_ERR(op))
- return PTR_ERR(op);
+ return ERR_PTR(PTR_ERR(op));
op->op_type = BPF_MAP_OP_SET_EVSEL;
op->v.evsel = evsel;
}
}

- return 0;
+ return evsel;
}

int bpf__setup_stdout(struct perf_evlist *evlist)
{
- return bpf__setup_output_event(evlist, "__bpf_stdout__");
+ struct perf_evsel *evsel = bpf__setup_output_event(evlist, "__bpf_stdout__");
+ return IS_ERR(evsel) ? PTR_ERR(evsel) : 0;
}

#define ERRNO_OFFSET(e) ((e) - __BPF_LOADER_ERRNO__START)
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index 8eca75145ac2..62d245a90e1d 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -43,6 +43,7 @@ enum bpf_loader_errno {
__BPF_LOADER_ERRNO__END,
};

+struct perf_evsel;
struct bpf_object;
struct parse_events_term;
#define PERF_BPF_PROBE_GROUP "perf_bpf_probe"
@@ -82,7 +83,7 @@ int bpf__apply_obj_config(void);
int bpf__strerror_apply_obj_config(int err, char *buf, size_t size);

int bpf__setup_stdout(struct perf_evlist *evlist);
-int bpf__setup_output_event(struct perf_evlist *evlist, const char *name);
+struct perf_evsel *bpf__setup_output_event(struct perf_evlist *evlist, const char *name);
int bpf__strerror_setup_output_event(struct perf_evlist *evlist, int err, char *buf, size_t size);
#else
#include <errno.h>
@@ -137,10 +138,10 @@ bpf__setup_stdout(struct perf_evlist *evlist __maybe_unused)
return 0;
}

-static inline int
+static inline struct perf_evsel *
bpf__setup_output_event(struct perf_evlist *evlist __maybe_unused, const char *name __maybe_unused)
{
- return 0;
+ return NULL;
}

static inline int
--
2.14.4


2018-08-09 15:03:49

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 42/44] perf trace: Wire up the augmented syscalls with the syscalls:sys_enter_FOO beautifier

From: Arnaldo Carvalho de Melo <[email protected]>

We just check that the evsel is the bpf-output event we associated with
the "__augmented_syscalls__" eBPF map, to show that the formatting is
done properly:

# perf trace -e perf/tools/perf/examples/bpf/augmented_syscalls.c,openat cat /etc/passwd > /dev/null
0.000 ( ): __augmented_syscalls__:dfd: CWD, filename: 0x43e06da8, flags: CLOEXEC
0.006 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0x43e06da8, flags: CLOEXEC
0.007 ( 0.004 ms): cat/11486 openat(dfd: CWD, filename: 0x43e06da8, flags: CLOEXEC ) = 3
0.029 ( ): __augmented_syscalls__:dfd: CWD, filename: 0x4400ece0, flags: CLOEXEC
0.030 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0x4400ece0, flags: CLOEXEC
0.031 ( 0.004 ms): cat/11486 openat(dfd: CWD, filename: 0x4400ece0, flags: CLOEXEC ) = 3
0.249 ( ): __augmented_syscalls__:dfd: CWD, filename: 0xc3700d6
0.250 ( ): syscalls:sys_enter_openat:dfd: CWD, filename: 0xc3700d6
0.252 ( 0.003 ms): cat/11486 openat(dfd: CWD, filename: 0xc3700d6 ) = 3
#

Now we just need to get the full-blown enter/exit handlers to check whether
the evsel being processed is the augmented_syscalls one and, if so, pick the
pointer contents from the end of the payload.

We also need to state somehow what the layout is for syscalls with multiple
pointer args.

Also handy would be to have a BTF file with the struct definitions used in
syscalls, compact, generated at kernel build time and available for use in
eBPF programs.

Until we get there we can go on doing some manual coupling of the most
relevant syscalls with some hand-built beautifiers.

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/builtin-trace.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 06215acb1481..22ab8e67c760 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -2042,7 +2042,10 @@ static int trace__event_handler(struct trace *trace, struct perf_evsel *evsel,
fprintf(trace->output, "%s:", evsel->name);

if (perf_evsel__is_bpf_output(evsel)) {
- bpf_output__fprintf(trace, sample);
+ if (evsel == trace->syscalls.events.augmented)
+ trace__fprintf_sys_enter(trace, evsel, sample);
+ else
+ bpf_output__fprintf(trace, sample);
} else if (evsel->tp_format) {
if (strncmp(evsel->tp_format->name, "sys_enter_", 10) ||
trace__fprintf_sys_enter(trace, evsel, sample)) {
--
2.14.4


2018-08-09 15:03:50

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 43/44] perf map: Synthesize maps only for thread group leader

From: Konstantin Khlebnikov <[email protected]>

Threads share map_groups, so all map events are merged into it.

Thus we can send mmap events only for the thread group leader. Otherwise
it takes ages to attach to and record something from processes with many
VMAs and threads.

The thread group leader could already be dead, but it seems perf cannot
handle that case anyway.

Testing dummy:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <pthread.h>
#include <unistd.h>

void *thread(void *arg)
{
	pause();
	return NULL;
}

int main(int argc, char **argv)
{
	int threads = 10000;
	int vmas = 50000;
	pthread_t th;

	/* create lots of threads that just sit there */
	for (int i = 0; i < threads; i++)
		pthread_create(&th, NULL, thread, NULL);
	/* and lots of VMAs */
	for (int i = 0; i < vmas; i++)
		mmap(NULL, 4096, (i & 1) ? PROT_READ : PROT_WRITE,
		     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
	sleep(60);
	return 0;
}

Comment by Jiri Olsa:

We actually synthesize the group leader (if we found one) for the thread
even if it's not present in the thread_map, so the process maps are
always in the data.

Signed-off-by: Konstantin Khlebnikov <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/153363294102.396323.6277944760215058174.stgit@buzz
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/event.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/tools/perf/util/event.c b/tools/perf/util/event.c
index 0c8ecf0c78a4..0cd42150f712 100644
--- a/tools/perf/util/event.c
+++ b/tools/perf/util/event.c
@@ -541,10 +541,17 @@ static int __event__synthesize_thread(union perf_event *comm_event,
tgid, process, machine) < 0)
return -1;

+ /*
+ * send mmap only for thread group leader
+ * see thread__init_map_groups
+ */
+ if (pid == tgid &&
+ perf_event__synthesize_mmap_events(tool, mmap_event, pid, tgid,
+ process, machine, mmap_data,
+ proc_map_timeout))
+ return -1;

- return perf_event__synthesize_mmap_events(tool, mmap_event, pid, tgid,
- process, machine, mmap_data,
- proc_map_timeout);
+ return 0;
}

if (machine__is_default_guest(machine))
--
2.14.4


2018-08-09 15:03:53

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 44/44] perf map: Optimize maps__fixup_overlappings()

From: Konstantin Khlebnikov <[email protected]>

This function splits and removes overlapping areas.

Maps in the tree are ordered by start address, thus we can find the first
overlap and stop as soon as the next map does not overlap.
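
Conceptually this is a lower-bound search, as in the kernel's find_vma():
because the maps kept in the tree are non-overlapping and ordered by start
address (so their ends are monotonic too), the first map whose end is above
the new map's start can be found in O(log n), and the walk then stops at the
first map starting at or after the new map's end. A tiny illustration of the
same idea on a sorted array (hypothetical helper, not part of the patch):

#include <stdint.h>

struct range { uint64_t start, end; };

/*
 * ranges[] holds non-overlapping ranges sorted by start (so ends are
 * monotonic too); return the index of the first range with end > addr,
 * or n if there is none.
 */
static int first_ending_after(const struct range *ranges, int n, uint64_t addr)
{
	int lo = 0, hi = n;

	while (lo < hi) {
		int mid = lo + (hi - lo) / 2;

		if (ranges[mid].end > addr)
			hi = mid;	/* candidate, keep searching to the left */
		else
			lo = mid + 1;
	}
	return lo;
}

The overlaps with a new [start, end) range are then exactly the entries from
that index up to the first one whose start is >= end, which is what the loop
in the patch below does on the rbtree.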

Signed-off-by: Konstantin Khlebnikov <[email protected]>
Acked-by: Jiri Olsa <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Link: http://lkml.kernel.org/r/153365189407.435244.7234821822450484712.stgit@buzz
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/map.c | 44 ++++++++++++++++++++++++++------------------
tools/perf/util/map.h | 1 -
2 files changed, 26 insertions(+), 19 deletions(-)

diff --git a/tools/perf/util/map.c b/tools/perf/util/map.c
index 89ac5b5dc218..36d0763311ef 100644
--- a/tools/perf/util/map.c
+++ b/tools/perf/util/map.c
@@ -381,20 +381,6 @@ struct map *map__clone(struct map *from)
return map;
}

-int map__overlap(struct map *l, struct map *r)
-{
- if (l->start > r->start) {
- struct map *t = l;
- l = r;
- r = t;
- }
-
- if (l->end > r->start)
- return 1;
-
- return 0;
-}
-
size_t map__fprintf(struct map *map, FILE *fp)
{
return fprintf(fp, " %" PRIx64 "-%" PRIx64 " %" PRIx64 " %s\n",
@@ -675,20 +661,42 @@ static void __map_groups__insert(struct map_groups *mg, struct map *map)
static int maps__fixup_overlappings(struct maps *maps, struct map *map, FILE *fp)
{
struct rb_root *root;
- struct rb_node *next;
+ struct rb_node *next, *first;
int err = 0;

down_write(&maps->lock);

root = &maps->entries;
- next = rb_first(root);

+ /*
+ * Find first map where end > map->start.
+ * Same as find_vma() in kernel.
+ */
+ next = root->rb_node;
+ first = NULL;
+ while (next) {
+ struct map *pos = rb_entry(next, struct map, rb_node);
+
+ if (pos->end > map->start) {
+ first = next;
+ if (pos->start <= map->start)
+ break;
+ next = next->rb_left;
+ } else
+ next = next->rb_right;
+ }
+
+ next = first;
while (next) {
struct map *pos = rb_entry(next, struct map, rb_node);
next = rb_next(&pos->rb_node);

- if (!map__overlap(pos, map))
- continue;
+ /*
+ * Stop if current map starts after map->end.
+ * Maps are ordered by start: next will not overlap for sure.
+ */
+ if (pos->start >= map->end)
+ break;

if (verbose >= 2) {

diff --git a/tools/perf/util/map.h b/tools/perf/util/map.h
index 4cb90f242bed..e0f327b51e66 100644
--- a/tools/perf/util/map.h
+++ b/tools/perf/util/map.h
@@ -166,7 +166,6 @@ static inline void __map__zput(struct map **map)

#define map__zput(map) __map__zput(&map)

-int map__overlap(struct map *l, struct map *r);
size_t map__fprintf(struct map *map, FILE *fp);
size_t map__fprintf_dsoname(struct map *map, FILE *fp);
char *map__srcline(struct map *map, u64 addr, struct symbol *sym);
--
2.14.4


2018-08-09 15:03:57

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 36/44] perf bpf: Generalize bpf__setup_stdout()

From: Arnaldo Carvalho de Melo <[email protected]>

We will use it to set up other bpf-output events, for instance to
generate augmented syscall entry tracepoints with pointer contents.

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/bpf-loader.c | 26 +++++++++++++++++---------
tools/perf/util/bpf-loader.h | 7 +++++++
2 files changed, 24 insertions(+), 9 deletions(-)

diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index e864a7e0ff12..95a27bb6f1a1 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -1535,10 +1535,7 @@ int bpf__apply_obj_config(void)
(strcmp(name, \
bpf_map__name(pos)) == 0))

-#define bpf__for_each_stdout_map(pos, obj, objtmp) \
- bpf__for_each_map_named(pos, obj, objtmp, "__bpf_stdout__")
-
-int bpf__setup_stdout(struct perf_evlist *evlist)
+int bpf__setup_output_event(struct perf_evlist *evlist, const char *name)
{
struct bpf_map_priv *tmpl_priv = NULL;
struct bpf_object *obj, *tmp;
@@ -1547,7 +1544,7 @@ int bpf__setup_stdout(struct perf_evlist *evlist)
int err;
bool need_init = false;

- bpf__for_each_stdout_map(map, obj, tmp) {
+ bpf__for_each_map_named(map, obj, tmp, name) {
struct bpf_map_priv *priv = bpf_map__priv(map);

if (IS_ERR(priv))
@@ -1567,17 +1564,23 @@ int bpf__setup_stdout(struct perf_evlist *evlist)
return 0;

if (!tmpl_priv) {
- err = parse_events(evlist, "bpf-output/no-inherit=1,name=__bpf_stdout__/",
- NULL);
+ char *event_definition = NULL;
+
+ if (asprintf(&event_definition, "bpf-output/no-inherit=1,name=%s/", name) < 0)
+ return -ENOMEM;
+
+ err = parse_events(evlist, event_definition, NULL);
+ free(event_definition);
+
if (err) {
- pr_debug("ERROR: failed to create bpf-output event\n");
+ pr_debug("ERROR: failed to create the \"%s\" bpf-output event\n", name);
return -err;
}

evsel = perf_evlist__last(evlist);
}

- bpf__for_each_stdout_map(map, obj, tmp) {
+ bpf__for_each_map_named(map, obj, tmp, name) {
struct bpf_map_priv *priv = bpf_map__priv(map);

if (IS_ERR(priv))
@@ -1609,6 +1612,11 @@ int bpf__setup_stdout(struct perf_evlist *evlist)
return 0;
}

+int bpf__setup_stdout(struct perf_evlist *evlist)
+{
+ return bpf__setup_output_event(evlist, "__bpf_stdout__");
+}
+
#define ERRNO_OFFSET(e) ((e) - __BPF_LOADER_ERRNO__START)
#define ERRCODE_OFFSET(c) ERRNO_OFFSET(BPF_LOADER_ERRNO__##c)
#define NR_ERRNO (__BPF_LOADER_ERRNO__END - __BPF_LOADER_ERRNO__START)
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index 5d3aefd6fae7..6be0eec043c6 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -82,6 +82,7 @@ int bpf__apply_obj_config(void);
int bpf__strerror_apply_obj_config(int err, char *buf, size_t size);

int bpf__setup_stdout(struct perf_evlist *evlist);
+int bpf__setup_output_event(struct perf_evlist *evlist, const char *name);
int bpf__strerror_setup_stdout(struct perf_evlist *evlist, int err,
char *buf, size_t size);

@@ -138,6 +139,12 @@ bpf__setup_stdout(struct perf_evlist *evlist __maybe_unused)
return 0;
}

+static inline int
+bpf__setup_output_event(struct perf_evlist *evlist __maybe_unused, const char *name __maybe_unused)
+{
+ return 0;
+}
+
static inline int
__bpf_strerror(char *buf, size_t size)
{
--
2.14.4


2018-08-09 15:04:08

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 32/44] perf report: Add --percent-type option

From: Jiri Olsa <[email protected]>

Set the annotation percent type from the following choices:

global-period, local-period, global-hits, local-hits

With the following report option, the percent type will be passed to the
annotation browser:

$ perf report --percent-type period-local

The local/global keywords set whether the percentage is computed in the
scope of the function (local) or over the whole data set (global). The
period/hits keywords set the base the percentage is computed on: the
samples' period or the number of samples (hits).
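
For instance, to compute percentages over the whole data set based on sample
counts, or within each function based on periods, one could use (same option,
other choices from the list above):

$ perf report --percent-type global-hits
$ perf report --percent-type local-period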

Signed-off-by: Jiri Olsa <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: Alexander Shishkin <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Stephane Eranian <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/Documentation/perf-report.txt | 9 +++++++++
tools/perf/builtin-report.c | 3 +++
2 files changed, 12 insertions(+)

diff --git a/tools/perf/Documentation/perf-report.txt b/tools/perf/Documentation/perf-report.txt
index 917e36fde6d8..474a4941f65d 100644
--- a/tools/perf/Documentation/perf-report.txt
+++ b/tools/perf/Documentation/perf-report.txt
@@ -477,6 +477,15 @@ include::itrace.txt[]
Display monitored tasks stored in perf data. Displaying pid/tid/ppid
plus the command string aligned to distinguish parent and child tasks.

+--percent-type::
+ Set the annotation percent type from the following choices:
+ global-period, local-period, global-hits, local-hits
+
+ The local/global keywords set whether the percentage is computed
+ in the scope of the function (local) or over the whole data set (global).
+ The period/hits keywords set the base the percentage is computed
+ on: the samples' period or the number of samples (hits).
+
include::callchain-overhead-calculation.txt[]

SEE ALSO
diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
index 02f7a3c27761..143542ffac20 100644
--- a/tools/perf/builtin-report.c
+++ b/tools/perf/builtin-report.c
@@ -1124,6 +1124,9 @@ int cmd_report(int argc, const char **argv)
"Time span of interest (start,stop)"),
OPT_BOOLEAN(0, "inline", &symbol_conf.inline_name,
"Show inline function"),
+ OPT_CALLBACK(0, "percent-type", &report.annotation_opts, "local-period",
+ "Set percent type local/global-period/hits",
+ annotate_parse_percent_type),
OPT_END()
};
struct perf_data data = {
--
2.14.4


2018-08-09 15:04:18

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 41/44] perf trace: Setup the augmented syscalls bpf-output event fields

From: Arnaldo Carvalho de Melo <[email protected]>

The payload that is put in place by the eBPF script attached to
syscalls:sys_enter_openat (and, in the future, other syscalls with
pointers) can be consumed by the existing sys_enter beautifiers if
evsel->priv is set up with a struct syscall_tp carrying struct tp_field
entries for the 'syscall_id' and 'args' fields expected by the
beautifiers; this patch does just that.

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/builtin-trace.c | 34 +++++++++++++++++++++++++++++++++-
1 file changed, 33 insertions(+), 1 deletion(-)

diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
index 43a699cfcadf..06215acb1481 100644
--- a/tools/perf/builtin-trace.c
+++ b/tools/perf/builtin-trace.c
@@ -77,7 +77,8 @@ struct trace {
struct syscall *table;
struct {
struct perf_evsel *sys_enter,
- *sys_exit;
+ *sys_exit,
+ *augmented;
} events;
} syscalls;
struct record_opts opts;
@@ -263,6 +264,30 @@ static int perf_evsel__init_syscall_tp(struct perf_evsel *evsel)
return -ENOENT;
}

+static int perf_evsel__init_augmented_syscall_tp(struct perf_evsel *evsel)
+{
+ struct syscall_tp *sc = evsel->priv = malloc(sizeof(struct syscall_tp));
+
+ if (evsel->priv != NULL) { /* field, sizeof_field, offsetof_field */
+ if (__tp_field__init_uint(&sc->id, sizeof(long), sizeof(long long), evsel->needs_swap))
+ goto out_delete;
+
+ return 0;
+ }
+
+ return -ENOMEM;
+out_delete:
+ zfree(&evsel->priv);
+ return -EINVAL;
+}
+
+static int perf_evsel__init_augmented_syscall_tp_args(struct perf_evsel *evsel)
+{
+ struct syscall_tp *sc = evsel->priv;
+
+ return __tp_field__init_ptr(&sc->args, sc->id.offset + sizeof(u64));
+}
+
static int perf_evsel__init_raw_syscall_tp(struct perf_evsel *evsel, void *handler)
{
evsel->priv = malloc(sizeof(struct syscall_tp));
@@ -3248,6 +3273,13 @@ int cmd_trace(int argc, const char **argv)
goto out;
}

+ if (evsel) {
+ if (perf_evsel__init_augmented_syscall_tp(evsel) ||
+ perf_evsel__init_augmented_syscall_tp_args(evsel))
+ goto out;
+ trace.syscalls.events.augmented = evsel;
+ }
+
err = bpf__setup_stdout(trace.evlist);
if (err) {
bpf__strerror_setup_stdout(trace.evlist, err, bf, sizeof(bf));
--
2.14.4


2018-08-09 15:04:59

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: [PATCH 33/44] perf bpf: Add struct bpf_map struct

From: Arnaldo Carvalho de Melo <[email protected]>

A helper structure used by an eBPF C program to describe map attributes to
the elf_bpf loader, to be used initially by the special __bpf_stdout__ map
used to print strings into the perf ring buffer from BPF scripts, e.g.:

Using the upcoming stdio.h and its puts() macro, which use the __bpf_stdout__
map to add strings to the ring buffer:

# cat tools/perf/examples/bpf/hello.c
#include <stdio.h>

int syscall_enter(openat)(void *args)
{
puts("Hello, world\n");
return 0;
}

license(GPL);
#
# cat ~/.perfconfig
[llvm]
dump-obj = true
# perf trace -e openat,tools/perf/examples/bpf/hello.c/call-graph=dwarf/ cat /etc/passwd > /dev/null
LLVM: dumping tools/perf/examples/bpf/hello.o
0.016 ( ): __bpf_stdout__:Hello, world
0.018 ( 0.010 ms): cat/9079 openat(dfd: CWD, filename: /etc/ld.so.cache, flags: CLOEXEC ) = 3
0.057 ( ): __bpf_stdout__:Hello, world
0.059 ( 0.011 ms): cat/9079 openat(dfd: CWD, filename: /lib64/libc.so.6, flags: CLOEXEC ) = 3
0.417 ( ): __bpf_stdout__:Hello, world
0.419 ( 0.009 ms): cat/9079 openat(dfd: CWD, filename: /etc/passwd ) = 3
#
# file tools/perf/examples/bpf/hello.o
tools/perf/examples/bpf/hello.o: ELF 64-bit LSB relocatable, *unknown arch 0xf7* version 1 (SYSV), not stripped
# readelf -SW tools/perf/examples/bpf/hello.o
There are 10 section headers, starting at offset 0x208:

Section Headers:
[Nr] Name Type Address Off Size ES Flg Lk Inf Al
[ 0] NULL 0000000000000000 000000 000000 00 0 0 0
[ 1] .strtab STRTAB 0000000000000000 000188 00007f 00 0 0 1
[ 2] .text PROGBITS 0000000000000000 000040 000000 00 AX 0 0 4
[ 3] syscalls:sys_enter_openat PROGBITS 0000000000000000 000040 000088 00 AX 0 0 8
[ 4] .relsyscalls:sys_enter_openat REL 0000000000000000 000178 000010 10 9 3 8
[ 5] maps PROGBITS 0000000000000000 0000c8 00001c 00 WA 0 0 4
[ 6] .rodata.str1.1 PROGBITS 0000000000000000 0000e4 00000e 01 AMS 0 0 1
[ 7] license PROGBITS 0000000000000000 0000f2 000004 00 WA 0 0 1
[ 8] version PROGBITS 0000000000000000 0000f8 000004 00 WA 0 0 4
[ 9] .symtab SYMTAB 0000000000000000 000100 000078 18 1 1 8
Key to Flags:
W (write), A (alloc), X (execute), M (merge), S (strings), I (info),
L (link order), O (extra OS processing required), G (group), T (TLS),
C (compressed), x (unknown), o (OS specific), E (exclude),
p (processor specific)
# readelf -s tools/perf/examples/bpf/hello.o

Symbol table '.symtab' contains 5 entries:
Num: Value Size Type Bind Vis Ndx Name
0: 0000000000000000 0 NOTYPE LOCAL DEFAULT UND
1: 0000000000000000 0 NOTYPE GLOBAL DEFAULT 5 __bpf_stdout__
2: 0000000000000000 0 NOTYPE GLOBAL DEFAULT 7 _license
3: 0000000000000000 0 NOTYPE GLOBAL DEFAULT 8 _version
4: 0000000000000000 0 NOTYPE GLOBAL DEFAULT 3 syscall_enter_openat
#

Cc: Adrian Hunter <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Wang Nan <[email protected]>
Link: https://lkml.kernel.org/n/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/include/bpf/bpf.h | 14 ++++++++++++++
1 file changed, 14 insertions(+)

diff --git a/tools/perf/include/bpf/bpf.h b/tools/perf/include/bpf/bpf.h
index 2873cdde293f..1f632b56bb34 100644
--- a/tools/perf/include/bpf/bpf.h
+++ b/tools/perf/include/bpf/bpf.h
@@ -4,6 +4,20 @@

#include <uapi/linux/bpf.h>

+/*
+ * A helper structure used by an eBPF C program to describe map attributes to
+ * the elf_bpf loader, taken from tools/testing/selftests/bpf/bpf_helpers.h:
+ */
+struct bpf_map {
+ unsigned int type;
+ unsigned int key_size;
+ unsigned int value_size;
+ unsigned int max_entries;
+ unsigned int map_flags;
+ unsigned int inner_map_idx;
+ unsigned int numa_node;
+};
+
#define SEC(NAME) __attribute__((section(NAME), used))

#define probe(function, vars) \
--
2.14.4


2018-08-09 15:28:42

by Kim Phillips

[permalink] [raw]
Subject: Re: [GIT PULL 00/44] perf/core improvements and fixes

On Thu, 9 Aug 2018 11:57:38 -0300
Arnaldo Carvalho de Melo <[email protected]> wrote:

Hi Arnaldo,

> Arch specific:
>
> arm64: (Sean V Kelley)
>
> - Enable JSON events for Ampere Computing eMAG processor
>

Did this one get missed?:

https://www.mail-archive.com/[email protected]/msg1745454.html

Thanks,

Kim