Hi Arnaldo Carvalho de Melo,
I rearranged the first 39 patches of this patchset according to
your comments. After applying all of them you can use a hello-world
BPF program for testing. They are based on your 'tmp.perf/ebpf' branch,
commit 60cd37eb100c4880b28078a47f3062fac7572095.

I hope to have a publicly available git repository for you tomorrow
(that is, within the next 24 hours). Would a repository on GitHub be
acceptable? I have to set it up outside my office because of my
company's IT policy.
This v11 contains the following improvements:

Commit message improvements:
  'bpf tools: Collect symbol table from SHT_SYMTAB section'
  'bpf tools: Collect relocation sections from SHT_REL sections'
  'bpf tools: Record map accessing instructions for each program'
  'bpf tools: Relocate eBPF programs'
  'bpf tools: Link all bpf objects onto a list'

Decoupling:
  'bpf tools: Collect eBPF programs from their own sections'
  'bpf tools: Introduce accessors for struct bpf_program'

Renaming (bpf_object__for_each -> bpf_object__for_each_safe):
  'bpf tools: Link all bpf objects onto a list'

Patch ordering:
  'perf tools: Make perf depend on libbpf'

Error message improvement (refer to http://llvm.org/apt):
  'perf tools: Call clang to compile C source to object code'
In this v11 part 1 patch set I haven't followed your comment on
'bpf tools: Introduce accessors for struct bpf_object' asking me to
change the accessor API from returning an error code to returning the
actual value and indicating errors with invalid values. I prefer the
current API because I have seen and fixed many error-code related bugs
in perf's code (like commit ed30775). Those bugs come from inconsistent
use of error codes: some code returns a negative value on error, other
code returns any non-zero value, and developers mix them up. I don't
want libbpf to introduce more bugs like that. But if you insist on it,
I'll change it.
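For illustration only (these accessor names are hypothetical, not the
ones used in the series), the two styles under discussion look roughly
like this:

  struct bpf_program;

  /* Style used in this series: return an error code and hand the
   * result back through an output parameter. */
  int bpf_program__get_title(struct bpf_program *prog, const char **ptitle);

  /* Style suggested in review: return the value directly and report
   * errors with an invalid value (NULL here). */
  const char *bpf_program__title(struct bpf_program *prog);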
Wang Nan (39):
bpf: Use correct #ifdef controller for trace_call_bpf()
tracing, perf: Implement BPF programs attached to uprobes
bpf tools: Introduce 'bpf' library and add bpf feature check
bpf tools: Allow caller to set printing function
bpf tools: Open eBPF object file and do basic validation
bpf tools: Read eBPF object from buffer
bpf tools: Check endianness and make libbpf fail early
bpf tools: Iterate over ELF sections to collect information
bpf tools: Collect version and license from ELF sections
bpf tools: Collect map definitions from 'maps' section
bpf tools: Collect symbol table from SHT_SYMTAB section
bpf tools: Collect eBPF programs from their own sections
bpf tools: Collect relocation sections from SHT_REL sections
bpf tools: Record map accessing instructions for each program
bpf tools: Add bpf.c/h for common bpf operations
bpf tools: Create eBPF maps defined in an object file
bpf tools: Relocate eBPF programs
bpf tools: Introduce bpf_load_program() to bpf.c
bpf tools: Load eBPF programs in object files into kernel
bpf tools: Introduce accessors for struct bpf_program
bpf tools: Introduce accessors for struct bpf_object
bpf tools: Link all bpf objects onto a list
perf tools: Introduce llvm config options
perf tools: Call clang to compile C source to object code
perf tools: Auto detecting kernel build directory
perf tools: Auto detecting kernel include options
perf tests: Add LLVM test for eBPF on-the-fly compiling
perf tools: Make perf depend on libbpf
perf record: Enable passing bpf object file to --event
perf record: Compile scriptlets if pass '.c' to --event
perf tools: Parse probe points of eBPF programs during preparation
perf probe: Attach trace_probe_event with perf_probe_event
perf record: Probe at kprobe points
perf record: Load all eBPF object into kernel
perf tools: Add bpf_fd field to evsel and config it
perf tools: Attach eBPF program to perf event
perf tools: Suppress probing messages when probing by BPF loading
perf record: Add clang options for compiling BPF scripts
bpf tools: Load a program with different instance using preprocessor
include/linux/trace_events.h | 7 +-
kernel/events/core.c | 4 +-
kernel/trace/Kconfig | 2 +-
kernel/trace/trace_uprobe.c | 5 +
tools/build/Makefile.feature | 6 +-
tools/build/feature/Makefile | 6 +-
tools/build/feature/test-bpf.c | 18 +
tools/lib/bpf/.gitignore | 2 +
tools/lib/bpf/Build | 1 +
tools/lib/bpf/Makefile | 195 +++++++
tools/lib/bpf/bpf.c | 85 +++
tools/lib/bpf/bpf.h | 23 +
tools/lib/bpf/libbpf.c | 1184 +++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 107 ++++
tools/perf/MANIFEST | 3 +
tools/perf/Makefile.perf | 19 +-
tools/perf/builtin-probe.c | 4 +-
tools/perf/builtin-record.c | 43 +-
tools/perf/config/Makefile | 19 +-
tools/perf/tests/Build | 1 +
tools/perf/tests/builtin-test.c | 4 +
tools/perf/tests/llvm.c | 85 +++
tools/perf/tests/make | 4 +-
tools/perf/tests/tests.h | 1 +
tools/perf/util/Build | 2 +
tools/perf/util/bpf-loader.c | 310 ++++++++++
tools/perf/util/bpf-loader.h | 46 ++
tools/perf/util/config.c | 4 +
tools/perf/util/debug.c | 5 +
tools/perf/util/debug.h | 1 +
tools/perf/util/evlist.c | 41 ++
tools/perf/util/evlist.h | 1 +
tools/perf/util/evsel.c | 17 +
tools/perf/util/evsel.h | 1 +
tools/perf/util/llvm-utils.c | 373 ++++++++++++
tools/perf/util/llvm-utils.h | 39 ++
tools/perf/util/parse-events.c | 16 +
tools/perf/util/parse-events.h | 2 +
tools/perf/util/parse-events.l | 6 +
tools/perf/util/parse-events.y | 29 +-
tools/perf/util/probe-event.c | 82 +--
tools/perf/util/probe-event.h | 7 +-
42 files changed, 2759 insertions(+), 51 deletions(-)
create mode 100644 tools/build/feature/test-bpf.c
create mode 100644 tools/lib/bpf/.gitignore
create mode 100644 tools/lib/bpf/Build
create mode 100644 tools/lib/bpf/Makefile
create mode 100644 tools/lib/bpf/bpf.c
create mode 100644 tools/lib/bpf/bpf.h
create mode 100644 tools/lib/bpf/libbpf.c
create mode 100644 tools/lib/bpf/libbpf.h
create mode 100644 tools/perf/tests/llvm.c
create mode 100644 tools/perf/util/bpf-loader.c
create mode 100644 tools/perf/util/bpf-loader.h
create mode 100644 tools/perf/util/llvm-utils.c
create mode 100644 tools/perf/util/llvm-utils.h
--
1.8.3.4
Commit e1abf2cc8d5d80b41c4419368ec743ccadbb131e ("bpf: Fix the build on
BPF_SYSCALL=y && !CONFIG_TRACING kernels, make it more configurable")
updated the build condition of bpf_trace.o from CONFIG_BPF_SYSCALL to
CONFIG_BPF_EVENTS, but the corresponding #ifdef controller for
trace_call_bpf() in trace_events.h was not changed, which, strictly
speaking, is incorrect.

With the current Kconfigs we can create a .config with
CONFIG_BPF_SYSCALL=y and CONFIG_BPF_EVENTS=n by unselecting
CONFIG_KPROBE_EVENT and selecting CONFIG_BPF_SYSCALL. With these
options trace_call_bpf() is declared as an extern function, but if
anyone calls it a missing-symbol error is triggered, since bpf_trace.o
is not built.

This patch changes the #ifdef controller for trace_call_bpf() from
CONFIG_BPF_SYSCALL to CONFIG_BPF_EVENTS. Its correctness is shown by
the tables below:
Before this patch:

  BPF_SYSCALL  BPF_EVENTS  trace_call_bpf  bpf_trace.o
  y            y           normal          compiled
  n            n           inline          not compiled
  y            n           normal          not compiled (incorrect)
  n            y           impossible (BPF_EVENTS depends on BPF_SYSCALL)

After this patch:

  BPF_SYSCALL  BPF_EVENTS  trace_call_bpf  bpf_trace.o
  y            y           normal          compiled
  n            n           inline          not compiled
  y            n           inline          not compiled (fixed)
  n            y           impossible (BPF_EVENTS depends on BPF_SYSCALL)
So this patch doesn't break anything. QED.
Signed-off-by: Wang Nan <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
include/linux/trace_events.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 1063c85..180dbf8 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -542,7 +542,7 @@ event_trigger_unlock_commit_regs(struct trace_event_file *file,
event_triggers_post_call(file, tt);
}
-#ifdef CONFIG_BPF_SYSCALL
+#ifdef CONFIG_BPF_EVENTS
unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx);
#else
static inline unsigned int trace_call_bpf(struct bpf_prog *prog, void *ctx)
--
1.8.3.4
By copying the BPF-related operations into the uprobe processing path,
this patch allows users to attach BPF programs to uprobes the same way
they already do with kprobes.

After this patch, users can use PERF_EVENT_IOC_SET_BPF on a
uprobe-based perf event, which makes it possible to profile user space
programs and kernel events together using BPF.
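For example, a user space sketch of the new capability (error handling
omitted; 'perf_fd' is assumed to be a perf event fd opened on a uprobe
tracepoint and 'bpf_fd' the fd of an already loaded BPF program):

  #include <sys/ioctl.h>
  #include <linux/perf_event.h>

  static int attach_bpf_to_uprobe(int perf_fd, int bpf_fd)
  {
          /* The same ioctl that already works for kprobe events;
           * after this patch it also works for uprobe events. */
          return ioctl(perf_fd, PERF_EVENT_IOC_SET_BPF, bpf_fd);
  }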
Because of this patch, CONFIG_BPF_EVENTS should be selected by
CONFIG_UPROBE_EVENT to ensure trace_call_bpf() is compiled even if
KPROBE_EVENT is not set.
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
include/linux/trace_events.h | 5 +++++
kernel/events/core.c | 4 ++--
kernel/trace/Kconfig | 2 +-
kernel/trace/trace_uprobe.c | 5 +++++
4 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
index 180dbf8..ed27917 100644
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -243,6 +243,7 @@ enum {
TRACE_EVENT_FL_USE_CALL_FILTER_BIT,
TRACE_EVENT_FL_TRACEPOINT_BIT,
TRACE_EVENT_FL_KPROBE_BIT,
+ TRACE_EVENT_FL_UPROBE_BIT,
};
/*
@@ -257,6 +258,7 @@ enum {
* USE_CALL_FILTER - For trace internal events, don't use file filter
* TRACEPOINT - Event is a tracepoint
* KPROBE - Event is a kprobe
+ * UPROBE - Event is a uprobe
*/
enum {
TRACE_EVENT_FL_FILTERED = (1 << TRACE_EVENT_FL_FILTERED_BIT),
@@ -267,8 +269,11 @@ enum {
TRACE_EVENT_FL_USE_CALL_FILTER = (1 << TRACE_EVENT_FL_USE_CALL_FILTER_BIT),
TRACE_EVENT_FL_TRACEPOINT = (1 << TRACE_EVENT_FL_TRACEPOINT_BIT),
TRACE_EVENT_FL_KPROBE = (1 << TRACE_EVENT_FL_KPROBE_BIT),
+ TRACE_EVENT_FL_UPROBE = (1 << TRACE_EVENT_FL_UPROBE_BIT),
};
+#define TRACE_EVENT_FL_UKPROBE (TRACE_EVENT_FL_KPROBE | TRACE_EVENT_FL_UPROBE)
+
struct trace_event_call {
struct list_head list;
struct trace_event_class *class;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index d3dae34..0b1d564 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -6749,8 +6749,8 @@ static int perf_event_set_bpf_prog(struct perf_event *event, u32 prog_fd)
if (event->tp_event->prog)
return -EEXIST;
- if (!(event->tp_event->flags & TRACE_EVENT_FL_KPROBE))
- /* bpf programs can only be attached to kprobes */
+ if (!(event->tp_event->flags & TRACE_EVENT_FL_UKPROBE))
+ /* bpf programs can only be attached to u/kprobes */
return -EINVAL;
prog = bpf_prog_get(prog_fd);
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index 3b9a48a..1153c43 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -434,7 +434,7 @@ config UPROBE_EVENT
config BPF_EVENTS
depends on BPF_SYSCALL
- depends on KPROBE_EVENT
+ depends on KPROBE_EVENT || UPROBE_EVENT
bool
default y
help
diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index aa1ea7b..f97479f 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -1095,11 +1095,15 @@ static void __uprobe_perf_func(struct trace_uprobe *tu,
{
struct trace_event_call *call = &tu->tp.call;
struct uprobe_trace_entry_head *entry;
+ struct bpf_prog *prog = call->prog;
struct hlist_head *head;
void *data;
int size, esize;
int rctx;
+ if (prog && !trace_call_bpf(prog, regs))
+ return;
+
esize = SIZEOF_TRACE_ENTRY(is_ret_probe(tu));
size = esize + tu->tp.size + dsize;
@@ -1289,6 +1293,7 @@ static int register_uprobe_event(struct trace_uprobe *tu)
return -ENODEV;
}
+ call->flags = TRACE_EVENT_FL_UPROBE;
call->class->reg = trace_uprobe_register;
call->data = tu;
ret = trace_add_event_call(call);
--
1.8.3.4
This is the first patch of libbpf. The goal of libbpf is to create a
standard way of accessing eBPF object files. This patch adds 'Makefile'
and 'Build' for it, so that 'make' builds libbpf.a and libbpf.so and
'make install' puts them into the proper directories.

Most of the Makefile is borrowed from traceevent.

Before building, the Makefile checks for libelf and refuses to build if
it is not found. Instead of raising the error unconditionally, it is
raised from a phony target "elfdep"; this design ensures that
'make clean' still works even when libelf is not found.

Because libbpf requires the 'kern_version' field of 'union bpf_attr'
("bpfdep" is the phony target used for that dependency), the kernel BPF
API is also checked by introducing a new feature check 'bpf' into
tools/build/feature, which checks the existence and version of
linux/bpf.h. When building libbpf, that header is searched for in
include/uapi/linux of the kernel source tree the library resides in
(controlled by FEATURE_CHECK_CFLAGS-bpf), so installing the newest
kernel headers is not required, except when porting these files to an
old kernel.

To avoid checking that header when building perf, the newly introduced
'bpf' feature check is not added to FEATURE_TESTS and FEATURE_DISPLAY
by default in tools/build/Makefile.feature, only to libbpf's own lists.
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Bcc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/build/feature/Makefile | 6 +-
tools/build/feature/test-bpf.c | 18 ++++
tools/lib/bpf/.gitignore | 2 +
tools/lib/bpf/Build | 1 +
tools/lib/bpf/Makefile | 195 +++++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.c | 14 +++
tools/lib/bpf/libbpf.h | 11 +++
7 files changed, 246 insertions(+), 1 deletion(-)
create mode 100644 tools/build/feature/test-bpf.c
create mode 100644 tools/lib/bpf/.gitignore
create mode 100644 tools/lib/bpf/Build
create mode 100644 tools/lib/bpf/Makefile
create mode 100644 tools/lib/bpf/libbpf.c
create mode 100644 tools/lib/bpf/libbpf.h
diff --git a/tools/build/feature/Makefile b/tools/build/feature/Makefile
index 463ed8f..1c0d69f 100644
--- a/tools/build/feature/Makefile
+++ b/tools/build/feature/Makefile
@@ -33,7 +33,8 @@ FILES= \
test-compile-32.bin \
test-compile-x32.bin \
test-zlib.bin \
- test-lzma.bin
+ test-lzma.bin \
+ test-bpf.bin
CC := $(CROSS_COMPILE)gcc -MD
PKG_CONFIG := $(CROSS_COMPILE)pkg-config
@@ -156,6 +157,9 @@ test-zlib.bin:
test-lzma.bin:
$(BUILD) -llzma
+test-bpf.bin:
+ $(BUILD)
+
-include *.d
###############################
diff --git a/tools/build/feature/test-bpf.c b/tools/build/feature/test-bpf.c
new file mode 100644
index 0000000..062bac8
--- /dev/null
+++ b/tools/build/feature/test-bpf.c
@@ -0,0 +1,18 @@
+#include <linux/bpf.h>
+
+int main(void)
+{
+ union bpf_attr attr;
+
+ attr.prog_type = BPF_PROG_TYPE_KPROBE;
+ attr.insn_cnt = 0;
+ attr.insns = 0;
+ attr.license = 0;
+ attr.log_buf = 0;
+ attr.log_size = 0;
+ attr.log_level = 0;
+ attr.kern_version = 0;
+
+ attr = attr;
+ return 0;
+}
diff --git a/tools/lib/bpf/.gitignore b/tools/lib/bpf/.gitignore
new file mode 100644
index 0000000..812aeed
--- /dev/null
+++ b/tools/lib/bpf/.gitignore
@@ -0,0 +1,2 @@
+libbpf_version.h
+FEATURE-DUMP
diff --git a/tools/lib/bpf/Build b/tools/lib/bpf/Build
new file mode 100644
index 0000000..a316484
--- /dev/null
+++ b/tools/lib/bpf/Build
@@ -0,0 +1 @@
+libbpf-y := libbpf.o
diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
new file mode 100644
index 0000000..f68d23a
--- /dev/null
+++ b/tools/lib/bpf/Makefile
@@ -0,0 +1,195 @@
+# Most of this file is copied from tools/lib/traceevent/Makefile
+
+BPF_VERSION = 0
+BPF_PATCHLEVEL = 0
+BPF_EXTRAVERSION = 1
+
+MAKEFLAGS += --no-print-directory
+
+
+# Makefiles suck: This macro sets a default value of $(2) for the
+# variable named by $(1), unless the variable has been set by
+# environment or command line. This is necessary for CC and AR
+# because make sets default values, so the simpler ?= approach
+# won't work as expected.
+define allow-override
+ $(if $(or $(findstring environment,$(origin $(1))),\
+ $(findstring command line,$(origin $(1)))),,\
+ $(eval $(1) = $(2)))
+endef
+
+# Allow setting CC and AR, or setting CROSS_COMPILE as a prefix.
+$(call allow-override,CC,$(CROSS_COMPILE)gcc)
+$(call allow-override,AR,$(CROSS_COMPILE)ar)
+
+INSTALL = install
+
+# Use DESTDIR for installing into a different root directory.
+# This is useful for building a package. The program will be
+# installed in this directory as if it was the root directory.
+# Then the build tool can move it later.
+DESTDIR ?=
+DESTDIR_SQ = '$(subst ','\'',$(DESTDIR))'
+
+LP64 := $(shell echo __LP64__ | ${CC} ${CFLAGS} -E -x c - | tail -n 1)
+ifeq ($(LP64), 1)
+ libdir_relative = lib64
+else
+ libdir_relative = lib
+endif
+
+prefix ?= /usr/local
+libdir = $(prefix)/$(libdir_relative)
+man_dir = $(prefix)/share/man
+man_dir_SQ = '$(subst ','\'',$(man_dir))'
+
+export man_dir man_dir_SQ INSTALL
+export DESTDIR DESTDIR_SQ
+
+include ../../scripts/Makefile.include
+
+# copy a bit from Linux kbuild
+
+ifeq ("$(origin V)", "command line")
+ VERBOSE = $(V)
+endif
+ifndef VERBOSE
+ VERBOSE = 0
+endif
+
+ifeq ($(srctree),)
+srctree := $(patsubst %/,%,$(dir $(shell pwd)))
+srctree := $(patsubst %/,%,$(dir $(srctree)))
+srctree := $(patsubst %/,%,$(dir $(srctree)))
+#$(info Determined 'srctree' to be $(srctree))
+endif
+
+FEATURE_DISPLAY = libelf libelf-getphdrnum libelf-mmap bpf
+FEATURE_TESTS = libelf bpf
+
+INCLUDES = -I. -I$(srctree)/tools/include -I$(srctree)/arch/$(ARCH)/include/uapi -I$(srctree)/include/uapi
+FEATURE_CHECK_CFLAGS-bpf = $(INCLUDES)
+
+include $(srctree)/tools/build/Makefile.feature
+
+export prefix libdir src obj
+
+# Shell quotes
+libdir_SQ = $(subst ','\'',$(libdir))
+libdir_relative_SQ = $(subst ','\'',$(libdir_relative))
+plugin_dir_SQ = $(subst ','\'',$(plugin_dir))
+
+LIB_FILE = libbpf.a libbpf.so
+
+VERSION = $(BPF_VERSION)
+PATCHLEVEL = $(BPF_PATCHLEVEL)
+EXTRAVERSION = $(BPF_EXTRAVERSION)
+
+OBJ = $@
+N =
+
+LIBBPF_VERSION = $(BPF_VERSION).$(BPF_PATCHLEVEL).$(BPF_EXTRAVERSION)
+
+# Set compile option CFLAGS
+ifdef EXTRA_CFLAGS
+ CFLAGS := $(EXTRA_CFLAGS)
+else
+ CFLAGS := -g -Wall
+endif
+
+ifeq ($(feature-libelf-mmap), 1)
+ override CFLAGS += -DHAVE_LIBELF_MMAP_SUPPORT
+endif
+
+ifeq ($(feature-libelf-getphdrnum), 1)
+ override CFLAGS += -DHAVE_ELF_GETPHDRNUM_SUPPORT
+endif
+
+# Append required CFLAGS
+override CFLAGS += $(EXTRA_WARNINGS)
+override CFLAGS += -Werror -Wall
+override CFLAGS += -fPIC
+override CFLAGS += $(INCLUDES)
+
+ifeq ($(VERBOSE),1)
+ Q =
+else
+ Q = @
+endif
+
+# Disable command line variables (CFLAGS) overide from top
+# level Makefile (perf), otherwise build Makefile will get
+# the same command line setup.
+MAKEOVERRIDES=
+
+export srctree OUTPUT CC LD CFLAGS V
+build := -f $(srctree)/tools/build/Makefile.build dir=. obj
+
+BPF_IN := $(OUTPUT)libbpf-in.o
+LIB_FILE := $(addprefix $(OUTPUT),$(LIB_FILE))
+
+CMD_TARGETS = $(LIB_FILE)
+
+TARGETS = $(CMD_TARGETS)
+
+all: $(VERSION_FILES) all_cmd
+
+all_cmd: $(CMD_TARGETS)
+
+$(BPF_IN): force elfdep bpfdep
+ $(Q)$(MAKE) $(build)=libbpf
+
+$(OUTPUT)libbpf.so: $(BPF_IN)
+ $(QUIET_LINK)$(CC) --shared $^ -o $@
+
+$(OUTPUT)libbpf.a: $(BPF_IN)
+ $(QUIET_LINK)$(RM) $@; $(AR) rcs $@ $^
+
+define update_dir
+ (echo $1 > [email protected]; \
+ if [ -r $@ ] && cmp -s $@ [email protected]; then \
+ rm -f [email protected]; \
+ else \
+ echo ' UPDATE $@'; \
+ mv -f [email protected] $@; \
+ fi);
+endef
+
+define do_install
+ if [ ! -d '$(DESTDIR_SQ)$2' ]; then \
+ $(INSTALL) -d -m 755 '$(DESTDIR_SQ)$2'; \
+ fi; \
+ $(INSTALL) $1 '$(DESTDIR_SQ)$2'
+endef
+
+install_lib: all_cmd
+ $(call QUIET_INSTALL, $(LIB_FILE)) \
+ $(call do_install,$(LIB_FILE),$(libdir_SQ))
+
+install: install_lib
+
+### Cleaning rules
+
+config-clean:
+ $(call QUIET_CLEAN, config)
+ $(Q)$(MAKE) -C $(srctree)/tools/build/feature/ clean >/dev/null
+
+clean:
+ $(call QUIET_CLEAN, libbpf) $(RM) *.o *~ $(TARGETS) *.a *.so $(VERSION_FILES) .*.d \
+ $(RM) LIBBPF-CFLAGS
+ $(call QUIET_CLEAN, core-gen) $(RM) $(OUTPUT)FEATURE-DUMP
+
+
+
+PHONY += force elfdep bpfdep
+force:
+
+elfdep:
+ @if [ "$(feature-libelf)" != "1" ]; then echo "No libelf found"; exit -1 ; fi
+
+bpfdep:
+ @if [ "$(feature-bpf)" != "1" ]; then echo "BPF API too old"; exit -1 ; fi
+
+# Declare the contents of the .PHONY variable as phony. We keep that
+# information in a variable so we can use it in if_changed and friends.
+.PHONY: $(PHONY)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
new file mode 100644
index 0000000..c08d6bc
--- /dev/null
+++ b/tools/lib/bpf/libbpf.c
@@ -0,0 +1,14 @@
+/*
+ * Common eBPF ELF object loading operations.
+ *
+ * Copyright (C) 2013-2015 Alexei Starovoitov <[email protected]>
+ * Copyright (C) 2015 Wang Nan <[email protected]>
+ * Copyright (C) 2015 Huawei Inc.
+ */
+
+#include <stdlib.h>
+#include <unistd.h>
+#include <asm/unistd.h>
+#include <linux/bpf.h>
+
+#include "libbpf.h"
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
new file mode 100644
index 0000000..a6f46d9
--- /dev/null
+++ b/tools/lib/bpf/libbpf.h
@@ -0,0 +1,11 @@
+/*
+ * Common eBPF ELF object loading operations.
+ *
+ * Copyright (C) 2013-2015 Alexei Starovoitov <[email protected]>
+ * Copyright (C) 2015 Wang Nan <[email protected]>
+ * Copyright (C) 2015 Huawei Inc.
+ */
+#ifndef __BPF_LIBBPF_H
+#define __BPF_LIBBPF_H
+
+#endif
--
1.8.3.4
With libbpf_set_print(), users of libbpf can register their own debug,
info and warning printing functions, which libbpf then uses to print
messages. If none are provided, the default info and warning printers
are fprintf(stderr, ...) and the default debug printer is NULL, so
debug output is dropped.

This API is designed to be used by perf: it lets perf register its own
logging functions so that all logs are uniform, instead of having a
separate logging level control.
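A minimal usage sketch (the warning printer below is hypothetical;
only libbpf_set_print() itself comes from this patch):

  #include <stdio.h>
  #include <stdarg.h>
  #include "libbpf.h"

  static int my_warn(const char *fmt, ...)
  {
          va_list ap;
          int ret;

          va_start(ap, fmt);
          fprintf(stderr, "my-tool: ");
          ret = vfprintf(stderr, fmt, ap);
          va_end(ap);
          return ret;
  }

  int main(void)
  {
          /* Use my_warn for warnings; passing NULL for info and
           * debug silences those levels. */
          libbpf_set_print(my_warn, NULL, NULL);
          return 0;
  }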
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 40 ++++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 12 ++++++++++++
2 files changed, 52 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index c08d6bc..6f0c13a 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -7,8 +7,48 @@
*/
#include <stdlib.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <string.h>
#include <unistd.h>
#include <asm/unistd.h>
#include <linux/bpf.h>
#include "libbpf.h"
+
+#define __printf(a, b) __attribute__((format(printf, a, b)))
+
+__printf(1, 2)
+static int __base_pr(const char *format, ...)
+{
+ va_list args;
+ int err;
+
+ va_start(args, format);
+ err = vfprintf(stderr, format, args);
+ va_end(args);
+ return err;
+}
+
+static __printf(1, 2) libbpf_print_fn_t __pr_warning = __base_pr;
+static __printf(1, 2) libbpf_print_fn_t __pr_info = __base_pr;
+static __printf(1, 2) libbpf_print_fn_t __pr_debug;
+
+#define __pr(func, fmt, ...) \
+do { \
+ if ((func)) \
+ (func)("libbpf: " fmt, ##__VA_ARGS__); \
+} while (0)
+
+#define pr_warning(fmt, ...) __pr(__pr_warning, fmt, ##__VA_ARGS__)
+#define pr_info(fmt, ...) __pr(__pr_info, fmt, ##__VA_ARGS__)
+#define pr_debug(fmt, ...) __pr(__pr_debug, fmt, ##__VA_ARGS__)
+
+void libbpf_set_print(libbpf_print_fn_t warn,
+ libbpf_print_fn_t info,
+ libbpf_print_fn_t debug)
+{
+ __pr_warning = warn;
+ __pr_info = info;
+ __pr_debug = debug;
+}
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index a6f46d9..8d1eeba 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -8,4 +8,16 @@
#ifndef __BPF_LIBBPF_H
#define __BPF_LIBBPF_H
+/*
+ * In include/linux/compiler-gcc.h, __printf is defined. However
+ * it should be better if libbpf.h doesn't depend on Linux header file.
+ * So instead of __printf, here we use gcc attribute directly.
+ */
+typedef int (*libbpf_print_fn_t)(const char *, ...)
+ __attribute__((format(printf, 1, 2)));
+
+void libbpf_set_print(libbpf_print_fn_t warn,
+ libbpf_print_fn_t info,
+ libbpf_print_fn_t debug);
+
#endif
--
1.8.3.4
This patch defines the basic interface of libbpf. 'struct bpf_object'
is the handle for each object file; its internal structure is hidden
from users. eBPF object files are compiled by LLVM in ELF format. In
this patch, libelf is used to open those files, read the EHDR and do
basic validation according to e_type and e_machine.

All ELF-related state is grouped together in the 'efile' field of
'struct bpf_object', and bpf_object__elf_finish() is introduced to
clear it: after all eBPF programs in an object file are loaded, the
related ELF information is useless, so the object file is closed and
that memory is freed.

The zfree() and zclose() helpers are introduced to ensure pointers are
set to NULL and file descriptors to a negative value after the
resources are released.
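A minimal caller sketch using only the interfaces introduced so far:

  #include <stdio.h>
  #include "libbpf.h"

  int main(int argc, char *argv[])
  {
          struct bpf_object *obj;

          if (argc < 2)
                  return 1;

          obj = bpf_object__open(argv[1]);
          if (!obj) {
                  fprintf(stderr, "failed to open %s\n", argv[1]);
                  return 1;
          }

          /* Later patches add loading; for now just release it. */
          bpf_object__close(obj);
          return 0;
  }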
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 158 +++++++++++++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 8 +++
2 files changed, 166 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 6f0c13a..9e44608 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -11,8 +11,12 @@
#include <stdarg.h>
#include <string.h>
#include <unistd.h>
+#include <fcntl.h>
+#include <errno.h>
#include <asm/unistd.h>
#include <linux/bpf.h>
+#include <libelf.h>
+#include <gelf.h>
#include "libbpf.h"
@@ -52,3 +56,157 @@ void libbpf_set_print(libbpf_print_fn_t warn,
__pr_info = info;
__pr_debug = debug;
}
+
+/* Copied from tools/perf/util/util.h */
+#ifndef zfree
+# define zfree(ptr) ({ free(*ptr); *ptr = NULL; })
+#endif
+
+#ifndef zclose
+# define zclose(fd) ({ \
+ int ___err = 0; \
+ if ((fd) >= 0) \
+ ___err = close((fd)); \
+ fd = -1; \
+ ___err; })
+#endif
+
+#ifdef HAVE_LIBELF_MMAP_SUPPORT
+# define LIBBPF_ELF_C_READ_MMAP ELF_C_READ_MMAP
+#else
+# define LIBBPF_ELF_C_READ_MMAP ELF_C_READ
+#endif
+
+struct bpf_object {
+ /*
+ * Information when doing elf related work. Only valid if fd
+ * is valid.
+ */
+ struct {
+ int fd;
+ Elf *elf;
+ GElf_Ehdr ehdr;
+ } efile;
+ char path[];
+};
+#define obj_elf_valid(o) ((o)->efile.elf)
+
+static struct bpf_object *bpf_object__new(const char *path)
+{
+ struct bpf_object *obj;
+
+ obj = calloc(1, sizeof(struct bpf_object) + strlen(path) + 1);
+ if (!obj) {
+ pr_warning("alloc memory failed for %s\n", path);
+ return NULL;
+ }
+
+ strcpy(obj->path, path);
+ obj->efile.fd = -1;
+ return obj;
+}
+
+static void bpf_object__elf_finish(struct bpf_object *obj)
+{
+ if (!obj_elf_valid(obj))
+ return;
+
+ if (obj->efile.elf) {
+ elf_end(obj->efile.elf);
+ obj->efile.elf = NULL;
+ }
+ zclose(obj->efile.fd);
+}
+
+static int bpf_object__elf_init(struct bpf_object *obj)
+{
+ int err = 0;
+ GElf_Ehdr *ep;
+
+ if (obj_elf_valid(obj)) {
+ pr_warning("elf init: internal error\n");
+ return -EEXIST;
+ }
+
+ obj->efile.fd = open(obj->path, O_RDONLY);
+ if (obj->efile.fd < 0) {
+ pr_warning("failed to open %s: %s\n", obj->path,
+ strerror(errno));
+ return -errno;
+ }
+
+ obj->efile.elf = elf_begin(obj->efile.fd,
+ LIBBPF_ELF_C_READ_MMAP,
+ NULL);
+ if (!obj->efile.elf) {
+ pr_warning("failed to open %s as ELF file\n",
+ obj->path);
+ err = -EINVAL;
+ goto errout;
+ }
+
+ if (!gelf_getehdr(obj->efile.elf, &obj->efile.ehdr)) {
+ pr_warning("failed to get EHDR from %s\n",
+ obj->path);
+ err = -EINVAL;
+ goto errout;
+ }
+ ep = &obj->efile.ehdr;
+
+ if ((ep->e_type != ET_REL) || (ep->e_machine != 0)) {
+ pr_warning("%s is not an eBPF object file\n",
+ obj->path);
+ err = -EINVAL;
+ goto errout;
+ }
+
+ return 0;
+errout:
+ bpf_object__elf_finish(obj);
+ return err;
+}
+
+static struct bpf_object *
+__bpf_object__open(const char *path)
+{
+ struct bpf_object *obj;
+
+ if (elf_version(EV_CURRENT) == EV_NONE) {
+ pr_warning("failed to init libelf for %s\n", path);
+ return NULL;
+ }
+
+ obj = bpf_object__new(path);
+ if (!obj)
+ return NULL;
+
+ if (bpf_object__elf_init(obj))
+ goto out;
+
+ bpf_object__elf_finish(obj);
+ return obj;
+out:
+ bpf_object__close(obj);
+ return NULL;
+}
+
+struct bpf_object *bpf_object__open(const char *path)
+{
+ /* param validation */
+ if (!path)
+ return NULL;
+
+ pr_debug("loading %s\n", path);
+
+ return __bpf_object__open(path);
+}
+
+void bpf_object__close(struct bpf_object *obj)
+{
+ if (!obj)
+ return;
+
+ bpf_object__elf_finish(obj);
+
+ free(obj);
+}
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 8d1eeba..ec3301c 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -8,6 +8,8 @@
#ifndef __BPF_LIBBPF_H
#define __BPF_LIBBPF_H
+#include <stdio.h>
+
/*
* In include/linux/compiler-gcc.h, __printf is defined. However
* it should be better if libbpf.h doesn't depend on Linux header file.
@@ -20,4 +22,10 @@ void libbpf_set_print(libbpf_print_fn_t warn,
libbpf_print_fn_t info,
libbpf_print_fn_t debug);
+/* Hide internal to user */
+struct bpf_object;
+
+struct bpf_object *bpf_object__open(const char *path);
+void bpf_object__close(struct bpf_object *object);
+
#endif
--
1.8.3.4
To support dynamic compiling, this patch allows the caller to pass an
in-memory buffer to libbpf via bpf_object__open_buffer(). libbpf calls
elf_memory() to open it as an ELF object file.

Because __bpf_object__open() collects all required data and won't need
the buffer anymore, libbpf uses the buffer directly instead of cloning
it. The caller can free that buffer, or use it for other purposes,
after bpf_object__open_buffer() returns.
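A caller sketch, assuming 'elf_buf' holds a complete eBPF ELF image
(for instance one just produced in memory by LLVM):

  #include <stdlib.h>
  #include "libbpf.h"

  static struct bpf_object *open_compiled(void *elf_buf, size_t elf_buf_sz)
  {
          struct bpf_object *obj;

          obj = bpf_object__open_buffer(elf_buf, elf_buf_sz);

          /* libbpf has already collected what it needs, so the buffer
           * can be freed whether or not the open succeeded. */
          free(elf_buf);
          return obj;
  }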
Signed-off-by: Wang Nan <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 62 ++++++++++++++++++++++++++++++++++++++++----------
tools/lib/bpf/libbpf.h | 2 ++
2 files changed, 52 insertions(+), 12 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 9e44608..36dfbc1 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -84,6 +84,8 @@ struct bpf_object {
*/
struct {
int fd;
+ void *obj_buf;
+ size_t obj_buf_sz;
Elf *elf;
GElf_Ehdr ehdr;
} efile;
@@ -91,7 +93,9 @@ struct bpf_object {
};
#define obj_elf_valid(o) ((o)->efile.elf)
-static struct bpf_object *bpf_object__new(const char *path)
+static struct bpf_object *bpf_object__new(const char *path,
+ void *obj_buf,
+ size_t obj_buf_sz)
{
struct bpf_object *obj;
@@ -103,6 +107,16 @@ static struct bpf_object *bpf_object__new(const char *path)
strcpy(obj->path, path);
obj->efile.fd = -1;
+
+ /*
+ * Caller of this function should also calls
+ * bpf_object__elf_finish() after data collection to return
+ * obj_buf to user. If not, we should duplicate the buffer to
+ * avoid user freeing them before elf finish.
+ */
+ obj->efile.obj_buf = obj_buf;
+ obj->efile.obj_buf_sz = obj_buf_sz;
+
return obj;
}
@@ -116,6 +130,8 @@ static void bpf_object__elf_finish(struct bpf_object *obj)
obj->efile.elf = NULL;
}
zclose(obj->efile.fd);
+ obj->efile.obj_buf = NULL;
+ obj->efile.obj_buf_sz = 0;
}
static int bpf_object__elf_init(struct bpf_object *obj)
@@ -128,16 +144,26 @@ static int bpf_object__elf_init(struct bpf_object *obj)
return -EEXIST;
}
- obj->efile.fd = open(obj->path, O_RDONLY);
- if (obj->efile.fd < 0) {
- pr_warning("failed to open %s: %s\n", obj->path,
- strerror(errno));
- return -errno;
+ if (obj->efile.obj_buf_sz > 0) {
+ /*
+ * obj_buf should have been validated by
+ * bpf_object__open_buffer().
+ */
+ obj->efile.elf = elf_memory(obj->efile.obj_buf,
+ obj->efile.obj_buf_sz);
+ } else {
+ obj->efile.fd = open(obj->path, O_RDONLY);
+ if (obj->efile.fd < 0) {
+ pr_warning("failed to open %s: %s\n", obj->path,
+ strerror(errno));
+ return -errno;
+ }
+
+ obj->efile.elf = elf_begin(obj->efile.fd,
+ LIBBPF_ELF_C_READ_MMAP,
+ NULL);
}
- obj->efile.elf = elf_begin(obj->efile.fd,
- LIBBPF_ELF_C_READ_MMAP,
- NULL);
if (!obj->efile.elf) {
pr_warning("failed to open %s as ELF file\n",
obj->path);
@@ -167,7 +193,7 @@ errout:
}
static struct bpf_object *
-__bpf_object__open(const char *path)
+__bpf_object__open(const char *path, void *obj_buf, size_t obj_buf_sz)
{
struct bpf_object *obj;
@@ -176,7 +202,7 @@ __bpf_object__open(const char *path)
return NULL;
}
- obj = bpf_object__new(path);
+ obj = bpf_object__new(path, obj_buf, obj_buf_sz);
if (!obj)
return NULL;
@@ -198,7 +224,19 @@ struct bpf_object *bpf_object__open(const char *path)
pr_debug("loading %s\n", path);
- return __bpf_object__open(path);
+ return __bpf_object__open(path, NULL, 0);
+}
+
+struct bpf_object *bpf_object__open_buffer(void *obj_buf,
+ size_t obj_buf_sz)
+{
+ /* param validation */
+ if (!obj_buf || obj_buf_sz <= 0)
+ return NULL;
+
+ pr_debug("loading object from buffer\n");
+
+ return __bpf_object__open("[buffer]", obj_buf, obj_buf_sz);
}
void bpf_object__close(struct bpf_object *obj)
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index ec3301c..dc966dd 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -26,6 +26,8 @@ void libbpf_set_print(libbpf_print_fn_t warn,
struct bpf_object;
struct bpf_object *bpf_object__open(const char *path);
+struct bpf_object *bpf_object__open_buffer(void *obj_buf,
+ size_t obj_buf_sz);
void bpf_object__close(struct bpf_object *object);
#endif
--
1.8.3.4
Check endianness according to the EHDR. The code is taken from
tools/perf/util/symbol-elf.c.

libbpf doesn't magically convert mismatched endianness: even if we
swapped the eBPF instructions into the correct byte order, we couldn't
deal with endianness assumptions in the code logic generated by LLVM.
Therefore libbpf should simply reject mismatched ELF objects and let
LLVM create correct code in the first place.
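The host-endianness test used by the new check boils down to looking
at the first byte of an integer constant, roughly:

  static int host_is_big_endian(void)
  {
          static const unsigned int one = 1;

          /* On a little endian host the least significant byte,
           * which holds the 1, comes first in memory. */
          return *(const unsigned char *)&one != 1;
  }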
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 36dfbc1..15b3e82 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -192,6 +192,34 @@ errout:
return err;
}
+static int
+bpf_object__check_endianness(struct bpf_object *obj)
+{
+ static unsigned int const endian = 1;
+
+ switch (obj->efile.ehdr.e_ident[EI_DATA]) {
+ case ELFDATA2LSB:
+ /* We are big endian, BPF obj is little endian. */
+ if (*(unsigned char const *)&endian != 1)
+ goto mismatch;
+ break;
+
+ case ELFDATA2MSB:
+ /* We are little endian, BPF obj is big endian. */
+ if (*(unsigned char const *)&endian != 0)
+ goto mismatch;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+
+mismatch:
+ pr_warning("Error: endianness mismatch.\n");
+ return -EINVAL;
+}
+
static struct bpf_object *
__bpf_object__open(const char *path, void *obj_buf, size_t obj_buf_sz)
{
@@ -208,6 +236,8 @@ __bpf_object__open(const char *path, void *obj_buf, size_t obj_buf_sz)
if (bpf_object__elf_init(obj))
goto out;
+ if (bpf_object__check_endianness(obj))
+ goto out;
bpf_object__elf_finish(obj);
return obj;
--
1.8.3.4
bpf_object__elf_collect() is introduced to iterate over the ELF
sections of an eBPF object file and collect information from them. This
function will be further enhanced to collect license, kernel version,
program, config and map information.
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 53 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 15b3e82..d8d6eb5 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -220,6 +220,57 @@ mismatch:
return -EINVAL;
}
+static int bpf_object__elf_collect(struct bpf_object *obj)
+{
+ Elf *elf = obj->efile.elf;
+ GElf_Ehdr *ep = &obj->efile.ehdr;
+ Elf_Scn *scn = NULL;
+ int idx = 0, err = 0;
+
+ /* Elf is corrupted/truncated, avoid calling elf_strptr. */
+ if (!elf_rawdata(elf_getscn(elf, ep->e_shstrndx), NULL)) {
+ pr_warning("failed to get e_shstrndx from %s\n",
+ obj->path);
+ return -EINVAL;
+ }
+
+ while ((scn = elf_nextscn(elf, scn)) != NULL) {
+ char *name;
+ GElf_Shdr sh;
+ Elf_Data *data;
+
+ idx++;
+ if (gelf_getshdr(scn, &sh) != &sh) {
+ pr_warning("failed to get section header from %s\n",
+ obj->path);
+ err = -EINVAL;
+ goto out;
+ }
+
+ name = elf_strptr(elf, ep->e_shstrndx, sh.sh_name);
+ if (!name) {
+ pr_warning("failed to get section name from %s\n",
+ obj->path);
+ err = -EINVAL;
+ goto out;
+ }
+
+ data = elf_getdata(scn, 0);
+ if (!data) {
+ pr_warning("failed to get section data from %s(%s)\n",
+ name, obj->path);
+ err = -EINVAL;
+ goto out;
+ }
+ pr_debug("section %s, size %ld, link %d, flags %lx, type=%d\n",
+ name, (unsigned long)data->d_size,
+ (int)sh.sh_link, (unsigned long)sh.sh_flags,
+ (int)sh.sh_type);
+ }
+out:
+ return err;
+}
+
static struct bpf_object *
__bpf_object__open(const char *path, void *obj_buf, size_t obj_buf_sz)
{
@@ -238,6 +289,8 @@ __bpf_object__open(const char *path, void *obj_buf, size_t obj_buf_sz)
goto out;
if (bpf_object__check_endianness(obj))
goto out;
+ if (bpf_object__elf_collect(obj))
+ goto out;
bpf_object__elf_finish(obj);
return obj;
--
1.8.3.4
Expand bpf_object__elf_collect() to collect license and kernel version
information from the eBPF object file. An eBPF object file should have
a section named 'license', which contains a string, and a section named
'version', which contains a u32 LINUX_VERSION_CODE.
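For reference, a BPF program source typically provides those two
sections like this (SEC() is the usual section attribute macro used by
the kernel's BPF samples, not something added by this patch):

  #include <linux/version.h>
  #include <linux/types.h>

  #define SEC(name) __attribute__((section(name), used))

  char _license[] SEC("license") = "GPL";
  __u32 _version SEC("version") = LINUX_VERSION_CODE;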
bpf_object__validate() is introduced to validate the object file after
it is loaded. Currently it only checks for the existence of the
'version' section.
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 53 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index d8d6eb5..95c8d8e 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -14,6 +14,7 @@
#include <fcntl.h>
#include <errno.h>
#include <asm/unistd.h>
+#include <linux/kernel.h>
#include <linux/bpf.h>
#include <libelf.h>
#include <gelf.h>
@@ -78,6 +79,8 @@ void libbpf_set_print(libbpf_print_fn_t warn,
#endif
struct bpf_object {
+ char license[64];
+ u32 kern_version;
/*
* Information when doing elf related work. Only valid if fd
* is valid.
@@ -220,6 +223,33 @@ mismatch:
return -EINVAL;
}
+static int
+bpf_object__init_license(struct bpf_object *obj,
+ void *data, size_t size)
+{
+ memcpy(obj->license, data,
+ min(size, sizeof(obj->license) - 1));
+ pr_debug("license of %s is %s\n", obj->path, obj->license);
+ return 0;
+}
+
+static int
+bpf_object__init_kversion(struct bpf_object *obj,
+ void *data, size_t size)
+{
+ u32 kver;
+
+ if (size != sizeof(kver)) {
+ pr_warning("invalid kver section in %s\n", obj->path);
+ return -EINVAL;
+ }
+ memcpy(&kver, data, sizeof(kver));
+ obj->kern_version = kver;
+ pr_debug("kernel version of %s is %x\n", obj->path,
+ obj->kern_version);
+ return 0;
+}
+
static int bpf_object__elf_collect(struct bpf_object *obj)
{
Elf *elf = obj->efile.elf;
@@ -266,11 +296,32 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
name, (unsigned long)data->d_size,
(int)sh.sh_link, (unsigned long)sh.sh_flags,
(int)sh.sh_type);
+
+ if (strcmp(name, "license") == 0)
+ err = bpf_object__init_license(obj,
+ data->d_buf,
+ data->d_size);
+ else if (strcmp(name, "version") == 0)
+ err = bpf_object__init_kversion(obj,
+ data->d_buf,
+ data->d_size);
+ if (err)
+ goto out;
}
out:
return err;
}
+static int bpf_object__validate(struct bpf_object *obj)
+{
+ if (obj->kern_version == 0) {
+ pr_warning("%s doesn't provide kernel version\n",
+ obj->path);
+ return -EINVAL;
+ }
+ return 0;
+}
+
static struct bpf_object *
__bpf_object__open(const char *path, void *obj_buf, size_t obj_buf_sz)
{
@@ -291,6 +342,8 @@ __bpf_object__open(const char *path, void *obj_buf, size_t obj_buf_sz)
goto out;
if (bpf_object__elf_collect(obj))
goto out;
+ if (bpf_object__validate(obj))
+ goto out;
bpf_object__elf_finish(obj);
return obj;
--
1.8.3.4
If maps are used by eBPF programs, the corresponding object file(s)
should contain a section named 'maps' which holds the map definitions.
This patch copies the data of the whole section; parsing of the map
data is deferred until just before map loading.
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 29 +++++++++++++++++++++++++++++
1 file changed, 29 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 95c8d8e..87f5054 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -81,6 +81,9 @@ void libbpf_set_print(libbpf_print_fn_t warn,
struct bpf_object {
char license[64];
u32 kern_version;
+ void *maps_buf;
+ size_t maps_buf_sz;
+
/*
* Information when doing elf related work. Only valid if fd
* is valid.
@@ -250,6 +253,28 @@ bpf_object__init_kversion(struct bpf_object *obj,
return 0;
}
+static int
+bpf_object__init_maps(struct bpf_object *obj, void *data,
+ size_t size)
+{
+ if (size == 0) {
+ pr_debug("%s doesn't need map definition\n",
+ obj->path);
+ return 0;
+ }
+
+ obj->maps_buf = malloc(size);
+ if (!obj->maps_buf) {
+ pr_warning("malloc maps failed: %s\n", obj->path);
+ return -ENOMEM;
+ }
+
+ obj->maps_buf_sz = size;
+ memcpy(obj->maps_buf, data, size);
+ pr_debug("maps in %s: %ld bytes\n", obj->path, (long)size);
+ return 0;
+}
+
static int bpf_object__elf_collect(struct bpf_object *obj)
{
Elf *elf = obj->efile.elf;
@@ -305,6 +330,9 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
err = bpf_object__init_kversion(obj,
data->d_buf,
data->d_size);
+ else if (strcmp(name, "maps") == 0)
+ err = bpf_object__init_maps(obj, data->d_buf,
+ data->d_size);
if (err)
goto out;
}
@@ -382,5 +410,6 @@ void bpf_object__close(struct bpf_object *obj)
bpf_object__elf_finish(obj);
+ zfree(&obj->maps_buf);
free(obj);
}
--
1.8.3.4
This patch collects the symbol table section. This section is needed
when linking BPF maps.

What the 'bpf_map_xxx()' functions actually require are the maps' file
descriptors (the in-kernel verifier converts those fds into pointers to
'struct bpf_map'), which we don't know at compile time. Therefore, we
should make the compiler generate a 'ld_64 r1, <imm>' instruction and
let libbpf fill the 'imm' field with the actual file descriptor at load
time.
BPF programs should be written in this way:
  struct bpf_map_def SEC("maps") my_map = {
          .type = BPF_MAP_TYPE_HASH,
          .key_size = sizeof(unsigned long),
          .value_size = sizeof(unsigned long),
          .max_entries = 1000000,
  };

  SEC("my_func=sys_write")
  int my_func(void *ctx)
  {
          ...
          bpf_map_update_elem(&my_map, &key, &value, BPF_ANY);
          ...
  }
The compiler converts '&my_map' into a 'ld_64 r1, <imm>' instruction,
where imm would normally hold the address of 'my_map'. From that
address libbpf knows which map is actually referenced, and then fills
the imm field with the fd of the map it created.

However, since we never really 'link' the object file, the imm field is
only a record in the relocation section. Therefore libbpf has to do the
relocation itself:
1. In the relocation section (type == SHT_REL), the position of each
   such 'ld_64' instruction is recorded, together with a reference to
   an entry in the symbol table (SHT_SYMTAB);
2. From the records in the symbol table we can find the indices of the
   map variables.
libbpf first records SHT_SYMTAB and the position of each instruction
that requires such an operation, then creates the file descriptors,
and finally, after map creation completes, replaces the imm fields, as
sketched below.
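The final fixup for one recorded instruction then amounts to roughly
the following (a sketch of what the later relocation patch does;
BPF_PSEUDO_MAP_FD is the kernel's marker telling the verifier that imm
holds a map fd):

  #include <linux/bpf.h>

  static void relocate_map_insn(struct bpf_insn *insns, int insn_idx,
                                int map_fd)
  {
          /* insns[insn_idx] is the ld_64 (BPF_LD | BPF_IMM | BPF_DW)
           * instruction that referenced the map. */
          insns[insn_idx].src_reg = BPF_PSEUDO_MAP_FD;
          insns[insn_idx].imm = map_fd;
  }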
This is the first patch of the BPF map related work. It records the
SHT_SYMTAB section in the object's efile field for further use.
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 87f5054..9b016c0 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -94,6 +94,7 @@ struct bpf_object {
size_t obj_buf_sz;
Elf *elf;
GElf_Ehdr ehdr;
+ Elf_Data *symbols;
} efile;
char path[];
};
@@ -135,6 +136,7 @@ static void bpf_object__elf_finish(struct bpf_object *obj)
elf_end(obj->efile.elf);
obj->efile.elf = NULL;
}
+ obj->efile.symbols = NULL;
zclose(obj->efile.fd);
obj->efile.obj_buf = NULL;
obj->efile.obj_buf_sz = 0;
@@ -333,6 +335,14 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
else if (strcmp(name, "maps") == 0)
err = bpf_object__init_maps(obj, data->d_buf,
data->d_size);
+ else if (sh.sh_type == SHT_SYMTAB) {
+ if (obj->efile.symbols) {
+ pr_warning("bpf: multiple SYMTAB in %s\n",
+ obj->path);
+ err = -EEXIST;
+ } else
+ obj->efile.symbols = data;
+ }
if (err)
goto out;
}
--
1.8.3.4
This patch collects all programs in an object file into an array of
'struct bpf_program' for further processing. That structure represents
a single eBPF program. 'bpf_prog' would be a better name, but it is
already used by linux/filter.h; although that is a kernel-space name, I
still prefer to call this one 'bpf_program' to prevent possible
confusion.

bpf_program__new() creates a new 'struct bpf_program' object. It first
initializes a variable on the stack using __bpf_program__new(), then,
on success, enlarges the obj->programs array and copies the new object
in.
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 117 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 117 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 9b016c0..3b717de 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -78,12 +78,27 @@ void libbpf_set_print(libbpf_print_fn_t warn,
# define LIBBPF_ELF_C_READ_MMAP ELF_C_READ
#endif
+/*
+ * bpf_prog should be a better name but it has been used in
+ * linux/filter.h.
+ */
+struct bpf_program {
+ /* Index in elf obj file, for relocation use. */
+ int idx;
+ char *section_name;
+ struct bpf_insn *insns;
+ size_t insns_cnt;
+};
+
struct bpf_object {
char license[64];
u32 kern_version;
void *maps_buf;
size_t maps_buf_sz;
+ struct bpf_program *programs;
+ size_t nr_programs;
+
/*
* Information when doing elf related work. Only valid if fd
* is valid.
@@ -100,6 +115,84 @@ struct bpf_object {
};
#define obj_elf_valid(o) ((o)->efile.elf)
+static void bpf_program__clear(struct bpf_program *prog)
+{
+ if (!prog)
+ return;
+
+ zfree(&prog->section_name);
+ zfree(&prog->insns);
+ prog->insns_cnt = 0;
+ prog->idx = -1;
+}
+
+static int
+__bpf_program__new(void *data, size_t size, char *name, int idx,
+ struct bpf_program *prog)
+{
+ if (size < sizeof(struct bpf_insn)) {
+ pr_warning("corrupted section '%s'\n", name);
+ return -EINVAL;
+ }
+
+ bzero(prog, sizeof(*prog));
+
+ prog->section_name = strdup(name);
+ if (!prog->section_name) {
+ pr_warning("failed to alloc name for prog %s\n",
+ name);
+ goto errout;
+ }
+
+ prog->insns = malloc(size);
+ if (!prog->insns) {
+ pr_warning("failed to alloc insns for %s\n", name);
+ goto errout;
+ }
+ prog->insns_cnt = size / sizeof(struct bpf_insn);
+ memcpy(prog->insns, data,
+ prog->insns_cnt * sizeof(struct bpf_insn));
+ prog->idx = idx;
+
+ return 0;
+errout:
+ bpf_program__clear(prog);
+ return -ENOMEM;
+}
+
+static struct bpf_program *
+bpf_program__new(struct bpf_object *obj, void *data, size_t size,
+ char *name, int idx)
+{
+ struct bpf_program prog, *progs;
+ int nr_progs, err;
+
+ err = __bpf_program__new(data, size, name, idx, &prog);
+ if (err)
+ return NULL;
+
+ progs = obj->programs;
+ nr_progs = obj->nr_programs;
+
+ progs = realloc(progs, sizeof(progs[0]) * (nr_progs + 1));
+ if (!progs) {
+ /*
+ * In this case the original obj->programs
+ * is still valid, so don't need special treat for
+ * bpf_close_object().
+ */
+ pr_warning("failed to alloc a new program '%s'\n",
+ name);
+ bpf_program__clear(&prog);
+ return NULL;
+ }
+
+ obj->programs = progs;
+ obj->nr_programs = nr_progs + 1;
+ progs[nr_progs] = prog;
+ return &progs[nr_progs];
+}
+
static struct bpf_object *bpf_object__new(const char *path,
void *obj_buf,
size_t obj_buf_sz)
@@ -342,6 +435,21 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
err = -EEXIST;
} else
obj->efile.symbols = data;
+ } else if ((sh.sh_type == SHT_PROGBITS) &&
+ (sh.sh_flags & SHF_EXECINSTR) &&
+ (data->d_size > 0)) {
+ struct bpf_program *prog;
+
+ prog = bpf_program__new(obj, data->d_buf,
+ data->d_size, name,
+ idx);
+ if (!prog) {
+ pr_warning("failed to alloc program %s (%s)",
+ name, obj->path);
+ err = -ENOMEM;
+ } else
+ pr_debug("found program %s\n",
+ prog->section_name);
}
if (err)
goto out;
@@ -415,11 +523,20 @@ struct bpf_object *bpf_object__open_buffer(void *obj_buf,
void bpf_object__close(struct bpf_object *obj)
{
+ size_t i;
+
if (!obj)
return;
bpf_object__elf_finish(obj);
zfree(&obj->maps_buf);
+
+ if (obj->programs && obj->nr_programs) {
+ for (i = 0; i < obj->nr_programs; i++)
+ bpf_program__clear(&obj->programs[i]);
+ }
+ zfree(&obj->programs);
+
free(obj);
}
--
1.8.3.4
This patch collects relocation sections into 'struct bpf_object'. Such
sections are used for connecting maps to BPF programs. A 'reloc' field
is introduced in 'struct bpf_object' for storing that information.

This patch simply stores the data in the 'reloc' field. A following
patch will parse it to find the exact instructions that need to be
relocated.

Note that the collected data becomes invalid after the ELF object file
is closed.
This is the second patch related to map relocation. The first one is
'bpf tools: Collect symbol table from SHT_SYMTAB section'. The
principle of map relocation is described in its commit message.
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 3b717de..5f12fa6 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -110,6 +110,11 @@ struct bpf_object {
Elf *elf;
GElf_Ehdr ehdr;
Elf_Data *symbols;
+ struct {
+ GElf_Shdr shdr;
+ Elf_Data *data;
+ } *reloc;
+ int nr_reloc;
} efile;
char path[];
};
@@ -230,6 +235,9 @@ static void bpf_object__elf_finish(struct bpf_object *obj)
obj->efile.elf = NULL;
}
obj->efile.symbols = NULL;
+
+ zfree(&obj->efile.reloc);
+ obj->efile.nr_reloc = 0;
zclose(obj->efile.fd);
obj->efile.obj_buf = NULL;
obj->efile.obj_buf_sz = 0;
@@ -450,6 +458,24 @@ static int bpf_object__elf_collect(struct bpf_object *obj)
} else
pr_debug("found program %s\n",
prog->section_name);
+ } else if (sh.sh_type == SHT_REL) {
+ void *reloc = obj->efile.reloc;
+ int nr_reloc = obj->efile.nr_reloc + 1;
+
+ reloc = realloc(reloc,
+ sizeof(*obj->efile.reloc) * nr_reloc);
+ if (!reloc) {
+ pr_warning("realloc failed\n");
+ err = -ENOMEM;
+ } else {
+ int n = nr_reloc - 1;
+
+ obj->efile.reloc = reloc;
+ obj->efile.nr_reloc = nr_reloc;
+
+ obj->efile.reloc[n].shdr = sh;
+ obj->efile.reloc[n].data = data;
+ }
}
if (err)
goto out;
--
1.8.3.4
This patch records the indices of instructions which need to be
relocated. That information is saved in the 'reloc_desc' field of
'struct bpf_program'. In the loading phase (this patch takes effect in
the opening phase), the recorded instructions will be replaced by map
loading instructions.
Since we are going to close the ELF file and clear all data at the end
of the 'opening' phase, the ELF information will no longer be valid in
the 'loading' phase. We have to locate the instructions before maps are
loaded, instead of modifying the instructions directly.
'struct bpf_map_def' is introduced in this patch to let us know how many
maps are defined in the object.
This is the third part of map relocation. The principle of map relocation
is described in commit message of 'bpf tools: Collect symbol table from
SHT_SYMTAB section'.
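As an illustration (a hedged sketch, not part of this patch), an eBPF
scriptlet like the one below, compiled with 'clang -target bpf',
produces exactly this kind of relocation: one SHT_REL entry pointing at
the ld_64 instruction that references 'my_map'. The map name, section
names and helper binding are illustrative only:

#include <linux/bpf.h>

#define SEC(name) __attribute__((section(name), used))

/* helper bound by id, as in samples/bpf; 1 is BPF_FUNC_map_lookup_elem */
static void *(*bpf_map_lookup_elem)(void *map, void *key) = (void *)1;

struct bpf_map_def {
	unsigned int type;
	unsigned int key_size;
	unsigned int value_size;
	unsigned int max_entries;
};

struct bpf_map_def SEC("maps") my_map = {
	.type = BPF_MAP_TYPE_ARRAY,
	.key_size = sizeof(int),
	.value_size = sizeof(long),
	.max_entries = 4,
};

SEC("kprobe/sys_write")
int prog(void *ctx)
{
	int key = 0;

	/* '&my_map' becomes 'ld_64 rX, <address of my_map>' plus a
	 * relocation entry against the 'my_map' symbol */
	return bpf_map_lookup_elem(&my_map, &key) ? 1 : 0;
}

char _license[] SEC("license") = "GPL";
int _version SEC("version") = 0x40100;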
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 124 +++++++++++++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 13 ++++++
2 files changed, 137 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 5f12fa6..4f13772 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -9,6 +9,7 @@
#include <stdlib.h>
#include <stdio.h>
#include <stdarg.h>
+#include <inttypes.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
@@ -88,6 +89,12 @@ struct bpf_program {
char *section_name;
struct bpf_insn *insns;
size_t insns_cnt;
+
+ struct {
+ int insn_idx;
+ int map_idx;
+ } *reloc_desc;
+ int nr_reloc;
};
struct bpf_object {
@@ -127,6 +134,9 @@ static void bpf_program__clear(struct bpf_program *prog)
zfree(&prog->section_name);
zfree(&prog->insns);
+ zfree(&prog->reloc_desc);
+
+ prog->nr_reloc = 0;
prog->insns_cnt = 0;
prog->idx = -1;
}
@@ -484,6 +494,118 @@ out:
return err;
}
+static struct bpf_program *
+bpf_object__find_prog_by_idx(struct bpf_object *obj, int idx)
+{
+ struct bpf_program *prog;
+ size_t i;
+
+ for (i = 0; i < obj->nr_programs; i++) {
+ prog = &obj->programs[i];
+ if (prog->idx == idx)
+ return prog;
+ }
+ return NULL;
+}
+
+static int
+bpf_program__collect_reloc(struct bpf_program *prog,
+ size_t nr_maps, GElf_Shdr *shdr,
+ Elf_Data *data, Elf_Data *symbols)
+{
+ int i, nrels;
+
+ pr_debug("collecting relocating info for: '%s'\n",
+ prog->section_name);
+ nrels = shdr->sh_size / shdr->sh_entsize;
+
+ prog->reloc_desc = malloc(sizeof(*prog->reloc_desc) * nrels);
+ if (!prog->reloc_desc) {
+ pr_warning("failed to alloc memory in relocation\n");
+ return -ENOMEM;
+ }
+ prog->nr_reloc = nrels;
+
+ for (i = 0; i < nrels; i++) {
+ GElf_Sym sym;
+ GElf_Rel rel;
+ unsigned int insn_idx;
+ struct bpf_insn *insns = prog->insns;
+ size_t map_idx;
+
+ if (!gelf_getrel(data, i, &rel)) {
+ pr_warning("relocation: failed to get %d reloc\n", i);
+ return -EINVAL;
+ }
+
+ insn_idx = rel.r_offset / sizeof(struct bpf_insn);
+ pr_debug("relocation: insn_idx=%u\n", insn_idx);
+
+ if (!gelf_getsym(symbols,
+ GELF_R_SYM(rel.r_info),
+ &sym)) {
+ pr_warning("relocation: symbol %"PRIx64" not found\n",
+ GELF_R_SYM(rel.r_info));
+ return -EINVAL;
+ }
+
+ if (insns[insn_idx].code != (BPF_LD | BPF_IMM | BPF_DW)) {
+ pr_warning("bpf: relocation: invalid relo for insns[%d].code 0x%x\n",
+ insn_idx, insns[insn_idx].code);
+ return -EINVAL;
+ }
+
+ map_idx = sym.st_value / sizeof(struct bpf_map_def);
+ if (map_idx >= nr_maps) {
+ pr_warning("bpf relocation: map_idx %d large than %d\n",
+ (int)map_idx, (int)nr_maps - 1);
+ return -EINVAL;
+ }
+
+ prog->reloc_desc[i].insn_idx = insn_idx;
+ prog->reloc_desc[i].map_idx = map_idx;
+ }
+ return 0;
+}
+
+static int bpf_object__collect_reloc(struct bpf_object *obj)
+{
+ int i, err;
+
+ if (!obj_elf_valid(obj)) {
+ pr_warning("Internal error: elf object is closed\n");
+ return -EINVAL;
+ }
+
+ for (i = 0; i < obj->efile.nr_reloc; i++) {
+ GElf_Shdr *shdr = &obj->efile.reloc[i].shdr;
+ Elf_Data *data = obj->efile.reloc[i].data;
+ int idx = shdr->sh_info;
+ struct bpf_program *prog;
+ size_t nr_maps = obj->maps_buf_sz /
+ sizeof(struct bpf_map_def);
+
+ if (shdr->sh_type != SHT_REL) {
+ pr_warning("internal error at %d\n", __LINE__);
+ return -EINVAL;
+ }
+
+ prog = bpf_object__find_prog_by_idx(obj, idx);
+ if (!prog) {
+ pr_warning("relocation failed: no %d section\n",
+ idx);
+ return -ENOENT;
+ }
+
+ err = bpf_program__collect_reloc(prog, nr_maps,
+ shdr, data,
+ obj->efile.symbols);
+ if (err)
+ return -EINVAL;
+ }
+ return 0;
+}
+
static int bpf_object__validate(struct bpf_object *obj)
{
if (obj->kern_version == 0) {
@@ -514,6 +636,8 @@ __bpf_object__open(const char *path, void *obj_buf, size_t obj_buf_sz)
goto out;
if (bpf_object__elf_collect(obj))
goto out;
+ if (bpf_object__collect_reloc(obj))
+ goto out;
if (bpf_object__validate(obj))
goto out;
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index dc966dd..6e75acd 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -30,4 +30,17 @@ struct bpf_object *bpf_object__open_buffer(void *obj_buf,
size_t obj_buf_sz);
void bpf_object__close(struct bpf_object *object);
+/*
+ * We don't need __attribute__((packed)) now since it is
+ * unnecessary for 'bpf_map_def' because they are all aligned.
+ * In addition, using it will trigger -Wpacked warning message,
+ * and will be treated as an error due to -Werror.
+ */
+struct bpf_map_def {
+ unsigned int type;
+ unsigned int key_size;
+ unsigned int value_size;
+ unsigned int max_entries;
+};
+
#endif
--
1.8.3.4
This patch introduces bpf.c and bpf.h, which hold common functions that
issue the bpf syscall. The goal of these two files is to hide the
syscall completely from the user. Note that bpf.c and bpf.h deal with
the kernel interface only. Things like the structure of the 'maps'
section in the ELF object are not the concern of bpf.[ch].
We first introduce bpf_create_map().
Note that, since functions in bpf.[ch] are wrappers of sys_bpf, they
don't use OO style naming.
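A minimal usage sketch (the map type and sizes are arbitrary examples; a
negative return value means the syscall failed and errno is set):

#include "bpf.h"

int example_create_array_map(void)
{
	return bpf_create_map(BPF_MAP_TYPE_ARRAY,
			      sizeof(int),	/* key_size */
			      sizeof(long),	/* value_size */
			      64);		/* max_entries */
}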
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/Build | 2 +-
tools/lib/bpf/bpf.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/bpf.h | 16 ++++++++++++++++
3 files changed, 68 insertions(+), 1 deletion(-)
create mode 100644 tools/lib/bpf/bpf.c
create mode 100644 tools/lib/bpf/bpf.h
diff --git a/tools/lib/bpf/Build b/tools/lib/bpf/Build
index a316484..d874975 100644
--- a/tools/lib/bpf/Build
+++ b/tools/lib/bpf/Build
@@ -1 +1 @@
-libbpf-y := libbpf.o
+libbpf-y := libbpf.o bpf.o
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
new file mode 100644
index 0000000..208de7c3
--- /dev/null
+++ b/tools/lib/bpf/bpf.c
@@ -0,0 +1,51 @@
+/*
+ * common eBPF ELF operations.
+ *
+ * Copyright (C) 2013-2015 Alexei Starovoitov <[email protected]>
+ * Copyright (C) 2015 Wang Nan <[email protected]>
+ * Copyright (C) 2015 Huawei Inc.
+ */
+
+#include <stdlib.h>
+#include <memory.h>
+#include <unistd.h>
+#include <asm/unistd.h>
+#include <linux/bpf.h>
+#include "bpf.h"
+
+/*
+ * When building perf, unistd.h is overridden, so __NR_bpf may not
+ * be defined. Define it here when necessary.
+ */
+#ifndef __NR_bpf
+# if defined(__i386__)
+# define __NR_bpf 357
+# elif defined(__x86_64__)
+# define __NR_bpf 321
+# elif defined(__aarch64__)
+# define __NR_bpf 280
+# else
+# error __NR_bpf not defined. libbpf does not support your arch.
+# endif
+#endif
+
+static int sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
+ unsigned int size)
+{
+ return syscall(__NR_bpf, cmd, attr, size);
+}
+
+int bpf_create_map(enum bpf_map_type map_type, int key_size,
+ int value_size, int max_entries)
+{
+ union bpf_attr attr;
+
+ memset(&attr, '\0', sizeof(attr));
+
+ attr.map_type = map_type;
+ attr.key_size = key_size;
+ attr.value_size = value_size;
+ attr.max_entries = max_entries;
+
+ return sys_bpf(BPF_MAP_CREATE, &attr, sizeof(attr));
+}
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
new file mode 100644
index 0000000..28f7942
--- /dev/null
+++ b/tools/lib/bpf/bpf.h
@@ -0,0 +1,16 @@
+/*
+ * common eBPF ELF operations.
+ *
+ * Copyright (C) 2013-2015 Alexei Starovoitov <[email protected]>
+ * Copyright (C) 2015 Wang Nan <[email protected]>
+ * Copyright (C) 2015 Huawei Inc.
+ */
+#ifndef __BPF_BPF_H
+#define __BPF_BPF_H
+
+#include <linux/bpf.h>
+
+int bpf_create_map(enum bpf_map_type map_type, int key_size, int value_size,
+ int max_entries);
+
+#endif
--
1.8.3.4
This patch creates maps based on the 'maps' section in the object file
using bpf_create_map(), and stores the fds into an array in 'struct
bpf_object'.
Previous patches parse the ELF object file and collect the required
data, but don't interact with the kernel; they belong to the 'opening'
phase. This patch is the first one in the 'loading' phase. The 'loaded'
field is introduced in 'struct bpf_object' to avoid loading an object
twice, because the loading phase clears resources collected during
opening, which become useless after loading. In this patch, maps_buf is
cleared.
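From a library user's point of view, the two phases combine roughly as
follows (hedged sketch; error handling trimmed, 'prog.o' is an
illustrative path, and bpf_object__open() comes from an earlier patch):

#include "libbpf.h"

void example_open_and_load(void)
{
	struct bpf_object *obj;

	obj = bpf_object__open("prog.o");	/* 'opening' phase: parse ELF */
	if (!obj)
		return;

	/* 'loading' phase: create maps (and, in later patches, programs).
	 * On failure the object is unloaded internally. */
	if (bpf_object__load(obj) == 0) {
		/* maps are now created in the kernel */
	}

	bpf_object__close(obj);
}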
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 102 +++++++++++++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 4 ++
2 files changed, 106 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 4f13772..c214e1c 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -21,6 +21,7 @@
#include <gelf.h>
#include "libbpf.h"
+#include "bpf.h"
#define __printf(a, b) __attribute__((format(printf, a, b)))
@@ -105,6 +106,13 @@ struct bpf_object {
struct bpf_program *programs;
size_t nr_programs;
+ int *map_fds;
+ /*
+ * This field is required because maps_buf will be freed and
+ * maps_buf_sz will be set to 0 after loaded.
+ */
+ size_t nr_map_fds;
+ bool loaded;
/*
* Information when doing elf related work. Only valid if fd
@@ -232,6 +240,7 @@ static struct bpf_object *bpf_object__new(const char *path,
obj->efile.obj_buf = obj_buf;
obj->efile.obj_buf_sz = obj_buf_sz;
+ obj->loaded = false;
return obj;
}
@@ -568,6 +577,62 @@ bpf_program__collect_reloc(struct bpf_program *prog,
return 0;
}
+static int
+bpf_object__create_maps(struct bpf_object *obj)
+{
+ unsigned int i;
+ size_t nr_maps;
+ int *pfd;
+
+ nr_maps = obj->maps_buf_sz / sizeof(struct bpf_map_def);
+ if (!obj->maps_buf || !nr_maps) {
+ pr_debug("don't need create maps for %s\n",
+ obj->path);
+ return 0;
+ }
+
+ obj->map_fds = malloc(sizeof(int) * nr_maps);
+ if (!obj->map_fds) {
+ pr_warning("realloc perf_bpf_map_fds failed\n");
+ return -ENOMEM;
+ }
+ obj->nr_map_fds = nr_maps;
+
+ /* fill all fd with -1 */
+ memset(obj->map_fds, 0xff, sizeof(int) * nr_maps);
+
+ pfd = obj->map_fds;
+ for (i = 0; i < nr_maps; i++) {
+ struct bpf_map_def def;
+
+ def = *(struct bpf_map_def *)(obj->maps_buf +
+ i * sizeof(struct bpf_map_def));
+
+ *pfd = bpf_create_map(def.type,
+ def.key_size,
+ def.value_size,
+ def.max_entries);
+ if (*pfd < 0) {
+ size_t j;
+ int err = *pfd;
+
+ pr_warning("failed to create map: %s\n",
+ strerror(errno));
+ for (j = 0; j < i; j++)
+ zclose(obj->map_fds[j]);
+ obj->nr_map_fds = 0;
+ zfree(&obj->map_fds);
+ return err;
+ }
+ pr_debug("create map: fd=%d\n", *pfd);
+ pfd++;
+ }
+
+ zfree(&obj->maps_buf);
+ obj->maps_buf_sz = 0;
+ return 0;
+}
+
static int bpf_object__collect_reloc(struct bpf_object *obj)
{
int i, err;
@@ -671,6 +736,42 @@ struct bpf_object *bpf_object__open_buffer(void *obj_buf,
return __bpf_object__open("[buffer]", obj_buf, obj_buf_sz);
}
+int bpf_object__unload(struct bpf_object *obj)
+{
+ size_t i;
+
+ if (!obj)
+ return -EINVAL;
+
+ for (i = 0; i < obj->nr_map_fds; i++)
+ zclose(obj->map_fds[i]);
+ zfree(&obj->map_fds);
+ obj->nr_map_fds = 0;
+
+ return 0;
+}
+
+int bpf_object__load(struct bpf_object *obj)
+{
+ if (!obj)
+ return -EINVAL;
+
+ if (obj->loaded) {
+ pr_warning("object should not be loaded twice\n");
+ return -EINVAL;
+ }
+
+ obj->loaded = true;
+ if (bpf_object__create_maps(obj))
+ goto out;
+
+ return 0;
+out:
+ bpf_object__unload(obj);
+ pr_warning("failed to load object '%s'\n", obj->path);
+ return -EINVAL;
+}
+
void bpf_object__close(struct bpf_object *obj)
{
size_t i;
@@ -679,6 +780,7 @@ void bpf_object__close(struct bpf_object *obj)
return;
bpf_object__elf_finish(obj);
+ bpf_object__unload(obj);
zfree(&obj->maps_buf);
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 6e75acd..3e69600 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -30,6 +30,10 @@ struct bpf_object *bpf_object__open_buffer(void *obj_buf,
size_t obj_buf_sz);
void bpf_object__close(struct bpf_object *object);
+/* Load/unload object into/from kernel */
+int bpf_object__load(struct bpf_object *obj);
+int bpf_object__unload(struct bpf_object *obj);
+
/*
* We don't need __attribute__((packed)) now since it is
* unnecessary for 'bpf_map_def' because they are all aligned.
--
1.8.3.4
If an eBPF program accesses a map, LLVM generates a load instruction
which loads an absolute address into a register, like this:
ld_64 r1, <MCOperand Expr:(mymap)>
...
call 2
That ld_64 instruction will be recorded in the relocation section.
To enable the usage of that map, relocation must be done by replacing
the immediate value with the real map file descriptor so the map can be
found by eBPF map functions.
This patch does the relocation work based on the information collected
by the patches:
'bpf tools: Collect symbol table from SHT_SYMTAB section',
'bpf tools: Collect relocation sections from SHT_REL sections'
and
'bpf tools: Record map accessing instructions for each program'.
For each instruction which needs relocation, it injects the
corresponding map file descriptor into the imm field. As part of the
protocol, src_reg is set to BPF_PSEUDO_MAP_FD to notify the kernel that
this is a map loading instruction.
This is the final part of map relocation patch. The principle of map
relocation is described in commit message of 'bpf tools: Collect symbol
table from SHT_SYMTAB section'.
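A sketch of the protocol described above (illustrative helper, not part
of the patch; 'insns', 'insn_idx' and 'map_fd' are assumed names):

#include <linux/bpf.h>

static void relocate_one_insn(struct bpf_insn *insns, int insn_idx, int map_fd)
{
	/* mark imm as a map fd so the kernel verifier resolves it */
	insns[insn_idx].src_reg = BPF_PSEUDO_MAP_FD;
	/* replace the address LLVM emitted with the real map fd */
	insns[insn_idx].imm = map_fd;
}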
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 52 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 52 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index c214e1c..cd40ae0 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -633,6 +633,56 @@ bpf_object__create_maps(struct bpf_object *obj)
return 0;
}
+static int
+bpf_program__relocate(struct bpf_program *prog, int *map_fds)
+{
+ int i;
+
+ if (!prog || !prog->reloc_desc)
+ return 0;
+
+ for (i = 0; i < prog->nr_reloc; i++) {
+ int insn_idx, map_idx;
+ struct bpf_insn *insns = prog->insns;
+
+ insn_idx = prog->reloc_desc[i].insn_idx;
+ map_idx = prog->reloc_desc[i].map_idx;
+
+ if (insn_idx >= (int)prog->insns_cnt) {
+ pr_warning("relocation out of range: '%s'\n",
+ prog->section_name);
+ return -ERANGE;
+ }
+ insns[insn_idx].src_reg = BPF_PSEUDO_MAP_FD;
+ insns[insn_idx].imm = map_fds[map_idx];
+ }
+
+ zfree(&prog->reloc_desc);
+ prog->nr_reloc = 0;
+ return 0;
+}
+
+
+static int
+bpf_object__relocate(struct bpf_object *obj)
+{
+ struct bpf_program *prog;
+ size_t i;
+ int err;
+
+ for (i = 0; i < obj->nr_programs; i++) {
+ prog = &obj->programs[i];
+
+ err = bpf_program__relocate(prog, obj->map_fds);
+ if (err) {
+ pr_warning("failed to relocate '%s'\n",
+ prog->section_name);
+ return err;
+ }
+ }
+ return 0;
+}
+
static int bpf_object__collect_reloc(struct bpf_object *obj)
{
int i, err;
@@ -764,6 +814,8 @@ int bpf_object__load(struct bpf_object *obj)
obj->loaded = true;
if (bpf_object__create_maps(obj))
goto out;
+ if (bpf_object__relocate(obj))
+ goto out;
return 0;
out:
--
1.8.3.4
bpf_load_program() can be used to load a bpf program into the kernel.
To make loading faster, it first tries to load without the log buffer,
and tries again with the log buffer only if the first attempt fails.
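A usage sketch (the program type, license string and kernel version
below are illustrative):

#include <stdio.h>
#include <stdlib.h>
#include "bpf.h"

int example_load(struct bpf_insn *insns, size_t insns_cnt)
{
	char license[] = "GPL";
	char *log_buf = malloc(BPF_LOG_BUF_SIZE);
	int fd;

	fd = bpf_load_program(BPF_PROG_TYPE_KPROBE, insns, insns_cnt,
			      license, 0x40100 /* kernel version */,
			      log_buf, log_buf ? BPF_LOG_BUF_SIZE : 0);
	if (fd < 0 && log_buf)
		fprintf(stderr, "verifier log:\n%s\n", log_buf);

	free(log_buf);
	return fd;
}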
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/bpf.c | 34 ++++++++++++++++++++++++++++++++++
tools/lib/bpf/bpf.h | 7 +++++++
2 files changed, 41 insertions(+)
diff --git a/tools/lib/bpf/bpf.c b/tools/lib/bpf/bpf.c
index 208de7c3..a633105 100644
--- a/tools/lib/bpf/bpf.c
+++ b/tools/lib/bpf/bpf.c
@@ -29,6 +29,11 @@
# endif
#endif
+static __u64 ptr_to_u64(void *ptr)
+{
+ return (__u64) (unsigned long) ptr;
+}
+
static int sys_bpf(enum bpf_cmd cmd, union bpf_attr *attr,
unsigned int size)
{
@@ -49,3 +54,32 @@ int bpf_create_map(enum bpf_map_type map_type, int key_size,
return sys_bpf(BPF_MAP_CREATE, &attr, sizeof(attr));
}
+
+int bpf_load_program(enum bpf_prog_type type, struct bpf_insn *insns,
+ size_t insns_cnt, char *license,
+ u32 kern_version, char *log_buf, size_t log_buf_sz)
+{
+ int fd;
+ union bpf_attr attr;
+
+ bzero(&attr, sizeof(attr));
+ attr.prog_type = type;
+ attr.insn_cnt = (__u32)insns_cnt;
+ attr.insns = ptr_to_u64(insns);
+ attr.license = ptr_to_u64(license);
+ attr.log_buf = ptr_to_u64(NULL);
+ attr.log_size = 0;
+ attr.log_level = 0;
+ attr.kern_version = kern_version;
+
+ fd = sys_bpf(BPF_PROG_LOAD, &attr, sizeof(attr));
+ if (fd >= 0 || !log_buf || !log_buf_sz)
+ return fd;
+
+ /* Try again with log */
+ attr.log_buf = ptr_to_u64(log_buf);
+ attr.log_size = log_buf_sz;
+ attr.log_level = 1;
+ log_buf[0] = 0;
+ return sys_bpf(BPF_PROG_LOAD, &attr, sizeof(attr));
+}
diff --git a/tools/lib/bpf/bpf.h b/tools/lib/bpf/bpf.h
index 28f7942..854b736 100644
--- a/tools/lib/bpf/bpf.h
+++ b/tools/lib/bpf/bpf.h
@@ -13,4 +13,11 @@
int bpf_create_map(enum bpf_map_type map_type, int key_size, int value_size,
int max_entries);
+/* Recommend log buffer size */
+#define BPF_LOG_BUF_SIZE 65536
+int bpf_load_program(enum bpf_prog_type type, struct bpf_insn *insns,
+ size_t insns_cnt, char *license,
+ u32 kern_version, char *log_buf,
+ size_t log_buf_sz);
+
#endif
--
1.8.3.4
This patch utilizes the previously introduced bpf_load_program() to
load the programs in the ELF file into the kernel. The result is stored
in the 'fd' field of 'struct bpf_program'.
During loading, it allocates a log buffer and frees it before returning.
Note that the buffer is not used by bpf_load_program() if the first
loading attempt is successful. A statically allocated log buffer is not
used, to avoid potential multi-thread problems.
Instructions collected during opening are cleared after loading.
load_program() is created for loading a 'struct bpf_insn' array into the
kernel, and bpf_program__load() calls it. By this design we have a
function that loads instructions into the kernel. It will be used by
further patches, which create different instances from a program and
load them into the kernel.
Signed-off-by: Wang Nan <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 90 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index cd40ae0..d826d5b 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -96,6 +96,8 @@ struct bpf_program {
int map_idx;
} *reloc_desc;
int nr_reloc;
+
+ int fd;
};
struct bpf_object {
@@ -135,11 +137,20 @@ struct bpf_object {
};
#define obj_elf_valid(o) ((o)->efile.elf)
+static void bpf_program__unload(struct bpf_program *prog)
+{
+ if (!prog)
+ return;
+
+ zclose(prog->fd);
+}
+
static void bpf_program__clear(struct bpf_program *prog)
{
if (!prog)
return;
+ bpf_program__unload(prog);
zfree(&prog->section_name);
zfree(&prog->insns);
zfree(&prog->reloc_desc);
@@ -176,6 +187,7 @@ __bpf_program__new(void *data, size_t size, char *name, int idx,
memcpy(prog->insns, data,
prog->insns_cnt * sizeof(struct bpf_insn));
prog->idx = idx;
+ prog->fd = -1;
return 0;
errout:
@@ -721,6 +733,79 @@ static int bpf_object__collect_reloc(struct bpf_object *obj)
return 0;
}
+static int
+load_program(struct bpf_insn *insns, int insns_cnt,
+ char *license, u32 kern_version, int *pfd)
+{
+ int ret;
+ char *log_buf;
+
+ if (!insns || !insns_cnt)
+ return -EINVAL;
+
+ log_buf = malloc(BPF_LOG_BUF_SIZE);
+ if (!log_buf)
+ pr_warning("Alloc log buffer for bpf loader error, continue without log\n");
+
+ ret = bpf_load_program(BPF_PROG_TYPE_KPROBE, insns,
+ insns_cnt, license, kern_version,
+ log_buf, BPF_LOG_BUF_SIZE);
+
+ if (ret >= 0) {
+ *pfd = ret;
+ ret = 0;
+ goto out;
+ }
+
+ ret = -EINVAL;
+ pr_warning("load bpf program failed: %s\n", strerror(errno));
+
+ if (log_buf) {
+ pr_warning("-- BEGIN DUMP LOG ---\n");
+ pr_warning("\n%s\n", log_buf);
+ pr_warning("-- END LOG --\n");
+ }
+
+out:
+ free(log_buf);
+ return ret;
+}
+
+static int
+bpf_program__load(struct bpf_program *prog,
+ char *license, u32 kern_version)
+{
+ int err, fd;
+
+ err = load_program(prog->insns, prog->insns_cnt,
+ license, kern_version, &fd);
+ if (!err)
+ prog->fd = fd;
+
+ if (err)
+ pr_warning("failed to load program '%s'\n",
+ prog->section_name);
+ zfree(&prog->insns);
+ prog->insns_cnt = 0;
+ return err;
+}
+
+static int
+bpf_object__load_progs(struct bpf_object *obj)
+{
+ size_t i;
+ int err;
+
+ for (i = 0; i < obj->nr_programs; i++) {
+ err = bpf_program__load(&obj->programs[i],
+ obj->license,
+ obj->kern_version);
+ if (err)
+ return err;
+ }
+ return 0;
+}
+
static int bpf_object__validate(struct bpf_object *obj)
{
if (obj->kern_version == 0) {
@@ -798,6 +883,9 @@ int bpf_object__unload(struct bpf_object *obj)
zfree(&obj->map_fds);
obj->nr_map_fds = 0;
+ for (i = 0; i < obj->nr_programs; i++)
+ bpf_program__unload(&obj->programs[i]);
+
return 0;
}
@@ -816,6 +904,8 @@ int bpf_object__load(struct bpf_object *obj)
goto out;
if (bpf_object__relocate(obj))
goto out;
+ if (bpf_object__load_progs(obj))
+ goto out;
return 0;
out:
--
1.8.3.4
This patch introduces accessors for users of libbpf to retrieve the
section name and fd of an opened/loaded eBPF program. 'struct
bpf_program' is used for that purpose. Accessors for a program's section
name and file descriptor are provided. Set/get of private data is also
implemented.
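A usage sketch of these accessors (assuming 'obj' is an opened
bpf_object from the earlier patches):

#include <stdio.h>
#include "libbpf.h"

void example_dump_programs(struct bpf_object *obj)
{
	struct bpf_program *prog;

	bpf_object__for_each_program(prog, obj) {
		const char *title;
		int fd;

		if (bpf_program__get_title(prog, &title, false) ||
		    bpf_program__get_fd(prog, &fd))
			continue;

		printf("%s: fd=%d\n", title, fd);
	}
}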
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 82 ++++++++++++++++++++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 25 +++++++++++++++
2 files changed, 107 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index d826d5b..b1575c4 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -98,6 +98,10 @@ struct bpf_program {
int nr_reloc;
int fd;
+
+ struct bpf_object *obj;
+ void *priv;
+ bpf_program_clear_priv_t clear_priv;
};
struct bpf_object {
@@ -150,6 +154,12 @@ static void bpf_program__clear(struct bpf_program *prog)
if (!prog)
return;
+ if (prog->clear_priv)
+ prog->clear_priv(prog, prog->priv);
+
+ prog->priv = NULL;
+ prog->clear_priv = NULL;
+
bpf_program__unload(prog);
zfree(&prog->section_name);
zfree(&prog->insns);
@@ -224,6 +234,7 @@ bpf_program__new(struct bpf_object *obj, void *data, size_t size,
obj->programs = progs;
obj->nr_programs = nr_progs + 1;
+ prog.obj = obj;
progs[nr_progs] = prog;
return &progs[nr_progs];
}
@@ -934,3 +945,74 @@ void bpf_object__close(struct bpf_object *obj)
free(obj);
}
+
+struct bpf_program *
+bpf_program__next(struct bpf_program *prev, struct bpf_object *obj)
+{
+ size_t idx;
+
+ if (!obj->programs)
+ return NULL;
+ /* First handler */
+ if (prev == NULL)
+ return &obj->programs[0];
+
+ if (prev->obj != obj) {
+ pr_warning("error: program handler doesn't match object\n");
+ return NULL;
+ }
+
+ idx = (prev - obj->programs) + 1;
+ if (idx >= obj->nr_programs)
+ return NULL;
+ return &obj->programs[idx];
+}
+
+int bpf_program__set_private(struct bpf_program *prog,
+ void *priv,
+ bpf_program_clear_priv_t clear_priv)
+{
+ if (prog->priv && prog->clear_priv)
+ prog->clear_priv(prog, prog->priv);
+
+ prog->priv = priv;
+ prog->clear_priv = clear_priv;
+ return 0;
+}
+
+int bpf_program__get_private(struct bpf_program *prog, void **ppriv)
+{
+ *ppriv = prog->priv;
+ return 0;
+}
+
+int bpf_program__get_title(struct bpf_program *prog,
+ const char **ptitle, bool dup)
+{
+ const char *title;
+
+ if (!ptitle)
+ return -EINVAL;
+
+ title = prog->section_name;
+ if (dup) {
+ title = strdup(title);
+ if (!title) {
+ pr_warning("failed to strdup program title\n");
+ *ptitle = NULL;
+ return -ENOMEM;
+ }
+ }
+
+ *ptitle = title;
+ return 0;
+}
+
+int bpf_program__get_fd(struct bpf_program *prog, int *pfd)
+{
+ if (!pfd)
+ return -EINVAL;
+
+ *pfd = prog->fd;
+ return 0;
+}
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 3e69600..9e0e102 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -9,6 +9,7 @@
#define __BPF_LIBBPF_H
#include <stdio.h>
+#include <stdbool.h>
/*
* In include/linux/compiler-gcc.h, __printf is defined. However
@@ -34,6 +35,30 @@ void bpf_object__close(struct bpf_object *object);
int bpf_object__load(struct bpf_object *obj);
int bpf_object__unload(struct bpf_object *obj);
+/* Accessors of bpf_program. */
+struct bpf_program;
+struct bpf_program *bpf_program__next(struct bpf_program *prog,
+ struct bpf_object *obj);
+
+#define bpf_object__for_each_program(pos, obj) \
+ for ((pos) = bpf_program__next(NULL, (obj)); \
+ (pos) != NULL; \
+ (pos) = bpf_program__next((pos), (obj)))
+
+typedef void (*bpf_program_clear_priv_t)(struct bpf_program *,
+ void *);
+
+int bpf_program__set_private(struct bpf_program *prog, void *priv,
+ bpf_program_clear_priv_t clear_priv);
+
+int bpf_program__get_private(struct bpf_program *prog,
+ void **ppriv);
+
+int bpf_program__get_title(struct bpf_program *prog,
+ const char **ptitle, bool dup);
+
+int bpf_program__get_fd(struct bpf_program *prog, int *pfd);
+
/*
* We don't need __attribute__((packed)) now since it is
* unnecessary for 'bpf_map_def' because they are all aligned.
--
1.8.3.4
This patch adds an accessor which allows the caller to get the count of
programs in an object file.
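Usage sketch (fragment; 'obj' is assumed to be a valid bpf_object):

	size_t cnt;

	if (!bpf_object__get_prog_cnt(obj, &cnt))
		printf("object contains %zu program(s)\n", cnt);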
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 9 +++++++++
tools/lib/bpf/libbpf.h | 3 +++
2 files changed, 12 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index b1575c4..b58b13b 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -946,6 +946,15 @@ void bpf_object__close(struct bpf_object *obj)
free(obj);
}
+int bpf_object__get_prog_cnt(struct bpf_object *obj, size_t *pcnt)
+{
+ if (!obj || !pcnt)
+ return -EINVAL;
+
+ *pcnt = obj->nr_programs;
+ return 0;
+}
+
struct bpf_program *
bpf_program__next(struct bpf_program *prev, struct bpf_object *obj)
{
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 9e0e102..a20ae2e 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -35,6 +35,9 @@ void bpf_object__close(struct bpf_object *object);
int bpf_object__load(struct bpf_object *obj);
int bpf_object__unload(struct bpf_object *obj);
+/* Accessors of bpf_object */
+int bpf_object__get_prog_cnt(struct bpf_object *obj, size_t *pcnt);
+
/* Accessors of bpf_program. */
struct bpf_program;
struct bpf_program *bpf_program__next(struct bpf_program *prog,
--
1.8.3.4
To allow enumeration of all bpf_objects, keep them in a list (hidden
from the caller). bpf_object__for_each_safe() is introduced to do this
iteration. It is safe even if the user closes the object during
iteration.
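A usage sketch; closing each object inside the loop is safe because the
iterator fetches the next element before the body runs:

	struct bpf_object *obj, *tmp;

	bpf_object__for_each_safe(obj, tmp)
		bpf_object__close(obj);	/* frees 'obj'; 'tmp' already points past it */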
Signed-off-by: Wang Nan <[email protected]>
Acked-by: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/lib/bpf/libbpf.c | 32 ++++++++++++++++++++++++++++++++
tools/lib/bpf/libbpf.h | 7 +++++++
2 files changed, 39 insertions(+)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index b58b13b..83a3b06 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -17,6 +17,7 @@
#include <asm/unistd.h>
#include <linux/kernel.h>
#include <linux/bpf.h>
+#include <linux/list.h>
#include <libelf.h>
#include <gelf.h>
@@ -104,6 +105,8 @@ struct bpf_program {
bpf_program_clear_priv_t clear_priv;
};
+static LIST_HEAD(bpf_objects_list);
+
struct bpf_object {
char license[64];
u32 kern_version;
@@ -137,6 +140,12 @@ struct bpf_object {
} *reloc;
int nr_reloc;
} efile;
+ /*
+ * All loaded bpf_objects are linked in a list, which is
+ * hidden from the caller. bpf_objects__<func> handlers deal with
+ * all objects.
+ */
+ struct list_head list;
char path[];
};
#define obj_elf_valid(o) ((o)->efile.elf)
@@ -264,6 +273,9 @@ static struct bpf_object *bpf_object__new(const char *path,
obj->efile.obj_buf_sz = obj_buf_sz;
obj->loaded = false;
+
+ INIT_LIST_HEAD(&obj->list);
+ list_add(&obj->list, &bpf_objects_list);
return obj;
}
@@ -943,6 +955,7 @@ void bpf_object__close(struct bpf_object *obj)
}
zfree(&obj->programs);
+ list_del(&obj->list);
free(obj);
}
@@ -955,6 +968,25 @@ int bpf_object__get_prog_cnt(struct bpf_object *obj, size_t *pcnt)
return 0;
}
+struct bpf_object *
+bpf_object__next(struct bpf_object *prev)
+{
+ struct bpf_object *next;
+
+ if (!prev)
+ next = list_first_entry(&bpf_objects_list,
+ struct bpf_object,
+ list);
+ else
+ next = list_next_entry(prev, list);
+
+ /* Empty list is noticed here so don't need checking on entry. */
+ if (&next->list == &bpf_objects_list)
+ return NULL;
+
+ return next;
+}
+
struct bpf_program *
bpf_program__next(struct bpf_program *prev, struct bpf_object *obj)
{
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index a20ae2e..1643b57 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -38,6 +38,13 @@ int bpf_object__unload(struct bpf_object *obj);
/* Accessors of bpf_object */
int bpf_object__get_prog_cnt(struct bpf_object *obj, size_t *pcnt);
+struct bpf_object *bpf_object__next(struct bpf_object *prev);
+#define bpf_object__for_each_safe(pos, tmp) \
+ for ((pos) = bpf_object__next(NULL), \
+ (tmp) = bpf_object__next(pos); \
+ (pos) != NULL; \
+ (pos) = (tmp), (tmp) = bpf_object__next(tmp))
+
/* Accessors of bpf_program. */
struct bpf_program;
struct bpf_program *bpf_program__next(struct bpf_program *prog,
--
1.8.3.4
This patch introduces the [llvm] config section with 5 options.
Following patches will use them to configure llvm dynamic compiling.
'llvm-utils.[ch]' is introduced in this patch to hold all llvm/clang
related code.
Example:
[llvm]
# Path to clang. If omitted, search for it in $PATH.
clang-path = "/path/to/clang"
# Cmdline template. The following line shows its default value.
# Environment variables are used to pass options.
clang-bpf-cmd-template = "$CLANG_EXEC $CLANG_OPTIONS \
$KERNEL_INC_OPTIONS -Wno-unused-value \
-Wno-pointer-sign -working-directory \
$WORKING_DIR -c $CLANG_SOURCE -target \
bpf -O2 -o -"
# Options passed to clang; they are injected into the cmdline through
# $CLANG_OPTIONS.
clang-opt = "-Wno-unused-value -Wno-pointer-sign"
# kbuild directory. If not set, use /lib/modules/`uname -r`/build.
# If set to "" deliberately, skip kernel header auto-detector.
kbuild-dir = "/path/to/kernel/build"
# Options passed to 'make' when detecting kernel header options.
kbuild-opts = "ARCH=x86_64"
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/util/Build | 1 +
tools/perf/util/config.c | 4 ++++
tools/perf/util/llvm-utils.c | 45 ++++++++++++++++++++++++++++++++++++++++++++
tools/perf/util/llvm-utils.h | 36 +++++++++++++++++++++++++++++++++++
4 files changed, 86 insertions(+)
create mode 100644 tools/perf/util/llvm-utils.c
create mode 100644 tools/perf/util/llvm-utils.h
diff --git a/tools/perf/util/Build b/tools/perf/util/Build
index 601d114..bc24293 100644
--- a/tools/perf/util/Build
+++ b/tools/perf/util/Build
@@ -14,6 +14,7 @@ libperf-y += find_next_bit.o
libperf-y += help.o
libperf-y += kallsyms.o
libperf-y += levenshtein.o
+libperf-y += llvm-utils.o
libperf-y += parse-options.o
libperf-y += parse-events.o
libperf-y += path.o
diff --git a/tools/perf/util/config.c b/tools/perf/util/config.c
index e18f653..2e452ac 100644
--- a/tools/perf/util/config.c
+++ b/tools/perf/util/config.c
@@ -12,6 +12,7 @@
#include "cache.h"
#include "exec_cmd.h"
#include "util/hist.h" /* perf_hist_config */
+#include "util/llvm-utils.h" /* perf_llvm_config */
#define MAXNAME (256)
@@ -408,6 +409,9 @@ int perf_default_config(const char *var, const char *value,
if (!prefixcmp(var, "call-graph."))
return perf_callchain_config(var, value);
+ if (!prefixcmp(var, "llvm."))
+ return perf_llvm_config(var, value);
+
/* Add other config variables here. */
return 0;
}
diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c
new file mode 100644
index 0000000..fd5b1bc
--- /dev/null
+++ b/tools/perf/util/llvm-utils.c
@@ -0,0 +1,45 @@
+/*
+ * Copyright (C) 2015, Wang Nan <[email protected]>
+ * Copyright (C) 2015, Huawei Inc.
+ */
+
+#include <stdio.h>
+#include "util.h"
+#include "debug.h"
+#include "llvm-utils.h"
+#include "cache.h"
+
+#define CLANG_BPF_CMD_DEFAULT_TEMPLATE \
+ "$CLANG_EXEC $CLANG_OPTIONS $KERNEL_INC_OPTIONS " \
+ "-Wno-unused-value -Wno-pointer-sign " \
+ "-working-directory $WORKING_DIR " \
+ " -c \"$CLANG_SOURCE\" -target bpf -O2 -o -"
+
+struct llvm_param llvm_param = {
+ .clang_path = "clang",
+ .clang_bpf_cmd_template = CLANG_BPF_CMD_DEFAULT_TEMPLATE,
+ .clang_opt = NULL,
+ .kbuild_dir = NULL,
+ .kbuild_opts = NULL,
+};
+
+int perf_llvm_config(const char *var, const char *value)
+{
+ if (prefixcmp(var, "llvm."))
+ return 0;
+ var += sizeof("llvm.") - 1;
+
+ if (!strcmp(var, "clang-path"))
+ llvm_param.clang_path = strdup(value);
+ else if (!strcmp(var, "clang-bpf-cmd-template"))
+ llvm_param.clang_bpf_cmd_template = strdup(value);
+ else if (!strcmp(var, "clang-opt"))
+ llvm_param.clang_opt = strdup(value);
+ else if (!strcmp(var, "kbuild-dir"))
+ llvm_param.kbuild_dir = strdup(value);
+ else if (!strcmp(var, "kbuild-opts"))
+ llvm_param.kbuild_opts = strdup(value);
+ else
+ return -1;
+ return 0;
+}
diff --git a/tools/perf/util/llvm-utils.h b/tools/perf/util/llvm-utils.h
new file mode 100644
index 0000000..504b799
--- /dev/null
+++ b/tools/perf/util/llvm-utils.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2015, Wang Nan <[email protected]>
+ * Copyright (C) 2015, Huawei Inc.
+ */
+#ifndef __LLVM_UTILS_H
+#define __LLVM_UTILS_H
+
+#include "debug.h"
+
+struct llvm_param {
+ /* Path of clang executable */
+ const char *clang_path;
+ /*
+ * Template of clang bpf compiling. 5 env variables
+ * can be used:
+ * $CLANG_EXEC: Path to clang.
+ * $CLANG_OPTIONS: Extra options to clang.
+ * $KERNEL_INC_OPTIONS: Kernel include directories.
+ * $WORKING_DIR: Kernel source directory.
+ * $CLANG_SOURCE: Source file to be compiled.
+ */
+ const char *clang_bpf_cmd_template;
+ /* Will be filled in $CLANG_OPTIONS */
+ const char *clang_opt;
+ /* Where to find kbuild system */
+ const char *kbuild_dir;
+ /*
+ * Arguments passed to make, like 'ARCH=arm' if doing cross
+ * compiling. Should not be used for dynamic compiling.
+ */
+ const char *kbuild_opts;
+};
+
+extern struct llvm_param llvm_param;
+extern int perf_llvm_config(const char *var, const char *value);
+#endif
--
1.8.3.4
This is the core patch for supporting eBPF on-the-fly compiling. It
does the following work:
1. Search for the clang compiler using search_program().
2. Run the command template defined in the 'clang-bpf-cmd-template'
option of the [llvm] config section using read_from_pipe(). The path
of clang and the source code path are injected into the shell command
through environment variables using force_set_env(); see the usage
sketch below.
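A usage sketch of the new entry point (the source path is illustrative;
the returned buffer is owned by the caller):

#include <stdlib.h>
#include "util/llvm-utils.h"

int example_compile(void)
{
	void *obj_buf;
	size_t obj_buf_sz;

	if (llvm__compile_bpf("/path/to/scriptlet.c", &obj_buf, &obj_buf_sz))
		return -1;

	/* obj_buf now holds the compiled eBPF object in memory,
	 * e.g. for bpf_object__open_buffer() */
	free(obj_buf);
	return 0;
}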
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/util/llvm-utils.c | 225 +++++++++++++++++++++++++++++++++++++++++++
tools/perf/util/llvm-utils.h | 3 +
2 files changed, 228 insertions(+)
diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c
index fd5b1bc..dca16e7 100644
--- a/tools/perf/util/llvm-utils.c
+++ b/tools/perf/util/llvm-utils.c
@@ -43,3 +43,228 @@ int perf_llvm_config(const char *var, const char *value)
return -1;
return 0;
}
+
+static int
+search_program(const char *def, const char *name,
+ char *output)
+{
+ char *env, *path, *tmp;
+ char buf[PATH_MAX];
+ int ret;
+
+ output[0] = '\0';
+ if (def && def[0] != '\0') {
+ if (def[0] == '/') {
+ if (access(def, F_OK) == 0) {
+ strlcpy(output, def, PATH_MAX);
+ return 0;
+ }
+ } else if (def[0] != '\0')
+ name = def;
+ }
+
+ env = getenv("PATH");
+ if (!env)
+ return -1;
+ env = strdup(env);
+ if (!env)
+ return -1;
+
+ ret = -ENOENT;
+ path = strtok_r(env, ":", &tmp);
+ while (path) {
+ scnprintf(buf, sizeof(buf), "%s/%s", path, name);
+ if (access(buf, F_OK) == 0) {
+ strlcpy(output, buf, PATH_MAX);
+ ret = 0;
+ break;
+ }
+ path = strtok_r(NULL, ":", &tmp);
+ }
+
+ free(env);
+ return ret;
+}
+
+#define READ_SIZE 4096
+static int
+read_from_pipe(const char *cmd, void **p_buf, size_t *p_read_sz)
+{
+ int err = 0;
+ void *buf = NULL;
+ FILE *file = NULL;
+ size_t read_sz = 0, buf_sz = 0;
+
+ file = popen(cmd, "r");
+ if (!file) {
+ pr_err("ERROR: unable to popen cmd: %s\n",
+ strerror(errno));
+ return -EINVAL;
+ }
+
+ while (!feof(file) && !ferror(file)) {
+ /*
+ * Make buf_sz always have one byte of extra space so we
+ * can put '\0' there.
+ */
+ if (buf_sz - read_sz < READ_SIZE + 1) {
+ void *new_buf;
+
+ buf_sz = read_sz + READ_SIZE + 1;
+ new_buf = realloc(buf, buf_sz);
+
+ if (!new_buf) {
+ pr_err("ERROR: failed to realloc memory\n");
+ err = -ENOMEM;
+ goto errout;
+ }
+
+ buf = new_buf;
+ }
+ read_sz += fread(buf + read_sz, 1, READ_SIZE, file);
+ }
+
+ if (buf_sz - read_sz < 1) {
+ pr_err("ERROR: internal error\n");
+ err = -EINVAL;
+ goto errout;
+ }
+
+ if (ferror(file)) {
+ pr_err("ERROR: error occurred when reading from pipe: %s\n",
+ strerror(errno));
+ err = -EIO;
+ goto errout;
+ }
+
+ err = WEXITSTATUS(pclose(file));
+ file = NULL;
+ if (err) {
+ err = -EINVAL;
+ goto errout;
+ }
+
+ /*
+ * If buf is a string, give it a terminating '\0' to make our life
+ * easier. If buf is not a string, that '\0' is outside the space
+ * indicated by read_sz so the caller won't even notice it.
+ */
+ ((char *)buf)[read_sz] = '\0';
+
+ if (!p_buf)
+ free(buf);
+ else
+ *p_buf = buf;
+
+ if (p_read_sz)
+ *p_read_sz = read_sz;
+ return 0;
+
+errout:
+ if (file)
+ pclose(file);
+ free(buf);
+ if (p_buf)
+ *p_buf = NULL;
+ if (p_read_sz)
+ *p_read_sz = 0;
+ return err;
+}
+
+static inline void
+force_set_env(const char *var, const char *value)
+{
+ if (value) {
+ setenv(var, value, 1);
+ pr_debug("set env: %s=%s\n", var, value);
+ } else {
+ unsetenv(var);
+ pr_debug("unset env: %s\n", var);
+ }
+}
+
+static void
+version_notice(void)
+{
+ pr_err(
+" \tLLVM 3.7 or newer is required. Which can be found from http://llvm.org\n"
+" \tYou may want to try git trunk:\n"
+" \t\tgit clone http://llvm.org/git/llvm.git\n"
+" \t\t and\n"
+" \t\tgit clone http://llvm.org/git/clang.git\n\n"
+" \tOr fetch the latest clang/llvm 3.7 from pre-built llvm packages for\n"
+" \tdebian/ubuntu:\n"
+" \t\thttp://llvm.org/apt\n\n"
+" \tIf you are using old version of clang, change 'clang-bpf-cmd-template'\n"
+" \toption in [llvm] section of ~/.perfconfig to:\n\n"
+" \t \"$CLANG_EXEC $CLANG_OPTIONS $KERNEL_INC_OPTIONS \\\n"
+" \t -working-directory $WORKING_DIR -c $CLANG_SOURCE \\\n"
+" \t -emit-llvm -o - | /path/to/llc -march=bpf -filetype=obj -o -\"\n"
+" \t(Replace /path/to/llc with path to your llc)\n\n"
+);
+}
+
+int llvm__compile_bpf(const char *path, void **p_obj_buf,
+ size_t *p_obj_buf_sz)
+{
+ int err;
+ char clang_path[PATH_MAX];
+ const char *clang_opt = llvm_param.clang_opt;
+ const char *template = llvm_param.clang_bpf_cmd_template;
+ void *obj_buf = NULL;
+ size_t obj_buf_sz;
+
+ if (!template)
+ template = CLANG_BPF_CMD_DEFAULT_TEMPLATE;
+
+ err = search_program(llvm_param.clang_path,
+ "clang", clang_path);
+ if (err) {
+ pr_err(
+"ERROR:\tunable to find clang.\n"
+"Hint:\tTry to install latest clang/llvm to support BPF. Check your $PATH\n"
+" \tand 'clang-path' option in [llvm] section of ~/.perfconfig.\n");
+ version_notice();
+ return -ENOENT;
+ }
+
+ force_set_env("CLANG_EXEC", clang_path);
+ force_set_env("CLANG_OPTIONS", clang_opt);
+ force_set_env("KERNEL_INC_OPTIONS", NULL);
+ force_set_env("WORKING_DIR", ".");
+
+ /*
+ * Since we may reset clang's working dir, the path of the source
+ * file should be converted to an absolute path, except when we want
+ * stdin to be the source file (for testing).
+ */
+ force_set_env("CLANG_SOURCE",
+ (path[0] == '-') ? path :
+ make_nonrelative_path(path));
+
+ pr_debug("llvm compiling command template: %s\n", template);
+ err = read_from_pipe(template, &obj_buf, &obj_buf_sz);
+ if (err) {
+ pr_err("ERROR:\tunable to compile %s\n", path);
+ pr_err("Hint:\tCheck error message shown above.\n");
+ version_notice();
+ pr_err("Hint:\tYou can also pre-compile it into .o\n");
+ goto errout;
+ }
+
+ if (!p_obj_buf)
+ free(obj_buf);
+ else
+ *p_obj_buf = obj_buf;
+
+ if (p_obj_buf_sz)
+ *p_obj_buf_sz = obj_buf_sz;
+ return 0;
+errout:
+ free(obj_buf);
+ if (p_obj_buf)
+ *p_obj_buf = NULL;
+ if (p_obj_buf_sz)
+ *p_obj_buf_sz = 0;
+ return err;
+}
diff --git a/tools/perf/util/llvm-utils.h b/tools/perf/util/llvm-utils.h
index 504b799..d23adbc 100644
--- a/tools/perf/util/llvm-utils.h
+++ b/tools/perf/util/llvm-utils.h
@@ -33,4 +33,7 @@ struct llvm_param {
extern struct llvm_param llvm_param;
extern int perf_llvm_config(const char *var, const char *value);
+
+extern int llvm__compile_bpf(const char *path, void **p_obj_buf,
+ size_t *p_obj_buf_sz);
#endif
--
1.8.3.4
This patch detects the kernel build directory using an embedded shell
script, 'kbuild_detector', which does this by checking the existence of
include/generated/autoconf.h.
clang's working directory is changed to the kbuild directory if it is
found, to help users use relative include paths. A following patch will
detect kernel include directories, which contain relative include
paths, so this working directory change is needed.
Users are allowed to set 'kbuild-dir = ""' manually to disable this
checking.
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/util/llvm-utils.c | 56 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 55 insertions(+), 1 deletion(-)
diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c
index dca16e7..2ca2bd6 100644
--- a/tools/perf/util/llvm-utils.c
+++ b/tools/perf/util/llvm-utils.c
@@ -204,6 +204,51 @@ version_notice(void)
);
}
+static const char *kbuild_detector =
+"#!/usr/bin/env sh\n"
+"DEFAULT_KBUILD_DIR=/lib/modules/`uname -r`/build\n"
+"if test -z \"$KBUILD_DIR\"\n"
+"then\n"
+" KBUILD_DIR=$DEFAULT_KBUILD_DIR\n"
+"fi\n"
+"if test -f $KBUILD_DIR/include/generated/autoconf.h\n"
+"then\n"
+" echo -n $KBUILD_DIR\n"
+" exit 0\n"
+"fi\n"
+"exit -1\n";
+
+static inline void
+get_kbuild_opts(char **kbuild_dir)
+{
+ int err;
+
+ if (!kbuild_dir)
+ return;
+
+ *kbuild_dir = NULL;
+
+ if (llvm_param.kbuild_dir && !llvm_param.kbuild_dir[0]) {
+ pr_debug("[llvm.kbuild-dir] is set to \"\" deliberately.\n");
+ pr_debug("Skip kbuild options detection.\n");
+ return;
+ }
+
+ force_set_env("KBUILD_DIR", llvm_param.kbuild_dir);
+ force_set_env("KBUILD_OPTS", llvm_param.kbuild_opts);
+ err = read_from_pipe(kbuild_detector,
+ ((void **)kbuild_dir),
+ NULL);
+ if (err) {
+ pr_warning(
+"WARNING:\tunable to get correct kernel building directory.\n"
+"Hint:\tSet correct kbuild directory using 'kbuild-dir' option in [llvm]\n"
+" \tsection of ~/.perfconfig or set it to \"\" to suppress kbuild\n"
+" \tdetection.\n\n");
+ return;
+ }
+}
+
int llvm__compile_bpf(const char *path, void **p_obj_buf,
size_t *p_obj_buf_sz)
{
@@ -211,6 +256,7 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
char clang_path[PATH_MAX];
const char *clang_opt = llvm_param.clang_opt;
const char *template = llvm_param.clang_bpf_cmd_template;
+ char *kbuild_dir = NULL;
void *obj_buf = NULL;
size_t obj_buf_sz;
@@ -228,10 +274,16 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
return -ENOENT;
}
+ /*
+ * This is optional work. Even if it fails we can continue our
+ * work. No need to check the error return.
+ */
+ get_kbuild_opts(&kbuild_dir);
+
force_set_env("CLANG_EXEC", clang_path);
force_set_env("CLANG_OPTIONS", clang_opt);
force_set_env("KERNEL_INC_OPTIONS", NULL);
- force_set_env("WORKING_DIR", ".");
+ force_set_env("WORKING_DIR", kbuild_dir ? : ".");
/*
* Since we may reset clang's working dir, path of source file
@@ -252,6 +304,7 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
goto errout;
}
+ free(kbuild_dir);
if (!p_obj_buf)
free(obj_buf);
else
@@ -261,6 +314,7 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
*p_obj_buf_sz = obj_buf_sz;
return 0;
errout:
+ free(kbuild_dir);
free(obj_buf);
if (p_obj_buf)
*p_obj_buf = NULL;
--
1.8.3.4
To help users find correct kernel include options, this patch extracts
them from the kbuild system via an embedded script, kinc_fetch_script,
which creates a temporary directory, generates a Makefile and an empty
dummy.c, then uses the Makefile to fetch the $(NOSTDINC_FLAGS),
$(LINUXINCLUDE) and $(EXTRA_CFLAGS) options. The result is passed to the
compiler command using the 'KERNEL_INC_OPTIONS' environment variable.
Because options from kbuild contain relative paths like
'-Iinclude/generated/uapi', the working directory must be changed. This
is done by the previous patch.
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/util/llvm-utils.c | 59 ++++++++++++++++++++++++++++++++++++++++----
1 file changed, 54 insertions(+), 5 deletions(-)
diff --git a/tools/perf/util/llvm-utils.c b/tools/perf/util/llvm-utils.c
index 2ca2bd6..b658896 100644
--- a/tools/perf/util/llvm-utils.c
+++ b/tools/perf/util/llvm-utils.c
@@ -218,15 +218,42 @@ static const char *kbuild_detector =
"fi\n"
"exit -1\n";
+static const char *kinc_fetch_script =
+"#!/usr/bin/env sh\n"
+"if ! test -d \"$KBUILD_DIR\"\n"
+"then\n"
+" exit -1\n"
+"fi\n"
+"if ! test -f \"$KBUILD_DIR/include/generated/autoconf.h\"\n"
+"then\n"
+" exit -1\n"
+"fi\n"
+"TMPDIR=`mktemp -d`\n"
+"if test -z \"$TMPDIR\"\n"
+"then\n"
+" exit -1\n"
+"fi\n"
+"cat << EOF > $TMPDIR/Makefile\n"
+"obj-y := dummy.o\n"
+"\\$(obj)/%.o: \\$(src)/%.c\n"
+"\t@echo -n \"\\$(NOSTDINC_FLAGS) \\$(LINUXINCLUDE) \\$(EXTRA_CFLAGS)\"\n"
+"EOF\n"
+"touch $TMPDIR/dummy.c\n"
+"make -s -C $KBUILD_DIR M=$TMPDIR $KBUILD_OPTS dummy.o 2>/dev/null\n"
+"RET=$?\n"
+"rm -rf $TMPDIR\n"
+"exit $RET\n";
+
static inline void
-get_kbuild_opts(char **kbuild_dir)
+get_kbuild_opts(char **kbuild_dir, char **kbuild_include_opts)
{
int err;
- if (!kbuild_dir)
+ if (!kbuild_dir || !kbuild_include_opts)
return;
*kbuild_dir = NULL;
+ *kbuild_include_opts = NULL;
if (llvm_param.kbuild_dir && !llvm_param.kbuild_dir[0]) {
pr_debug("[llvm.kbuild-dir] is set to \"\" deliberately.\n");
@@ -247,6 +274,26 @@ get_kbuild_opts(char **kbuild_dir)
" \tdetection.\n\n");
return;
}
+
+ pr_debug("Kernel build dir is set to %s\n", *kbuild_dir);
+ force_set_env("KBUILD_DIR", *kbuild_dir);
+ err = read_from_pipe(kinc_fetch_script,
+ (void **)kbuild_include_opts,
+ NULL);
+ if (err) {
+ pr_warning(
+"WARNING:\tunable to get kernel include directories from '%s'\n"
+"Hint:\tTry set clang include options using 'clang-bpf-cmd-template'\n"
+" \toption in [llvm] section of ~/.perfconfig and set 'kbuild-dir'\n"
+" \toption in [llvm] to \"\" to suppress this detection.\n\n",
+ *kbuild_dir);
+
+ free(*kbuild_dir);
+ *kbuild_dir = NULL;
+ return;
+ }
+
+ pr_debug("include option is set to %s\n", *kbuild_include_opts);
}
int llvm__compile_bpf(const char *path, void **p_obj_buf,
@@ -256,7 +303,7 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
char clang_path[PATH_MAX];
const char *clang_opt = llvm_param.clang_opt;
const char *template = llvm_param.clang_bpf_cmd_template;
- char *kbuild_dir = NULL;
+ char *kbuild_dir = NULL, *kbuild_include_opts = NULL;
void *obj_buf = NULL;
size_t obj_buf_sz;
@@ -278,11 +325,11 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
* This is an optional work. Even it fail we can continue our
* work. Needn't to check error return.
*/
- get_kbuild_opts(&kbuild_dir);
+ get_kbuild_opts(&kbuild_dir, &kbuild_include_opts);
force_set_env("CLANG_EXEC", clang_path);
force_set_env("CLANG_OPTIONS", clang_opt);
- force_set_env("KERNEL_INC_OPTIONS", NULL);
+ force_set_env("KERNEL_INC_OPTIONS", kbuild_include_opts);
force_set_env("WORKING_DIR", kbuild_dir ? : ".");
/*
@@ -305,6 +352,7 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
}
free(kbuild_dir);
+ free(kbuild_include_opts);
if (!p_obj_buf)
free(obj_buf);
else
@@ -315,6 +363,7 @@ int llvm__compile_bpf(const char *path, void **p_obj_buf,
return 0;
errout:
free(kbuild_dir);
+ free(kbuild_include_opts);
free(obj_buf);
if (p_obj_buf)
*p_obj_buf = NULL;
--
1.8.3.4
Previous patches introduced llvm__compile_bpf() to compile a source
file into an eBPF object. This patch adds a testcase to test it. It
also tests libbpf by opening the generated object.
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/tests/Build | 1 +
tools/perf/tests/builtin-test.c | 4 ++
tools/perf/tests/llvm.c | 85 +++++++++++++++++++++++++++++++++++++++++
tools/perf/tests/tests.h | 1 +
4 files changed, 91 insertions(+)
create mode 100644 tools/perf/tests/llvm.c
diff --git a/tools/perf/tests/Build b/tools/perf/tests/Build
index d20d6e6..c1518bd 100644
--- a/tools/perf/tests/Build
+++ b/tools/perf/tests/Build
@@ -32,6 +32,7 @@ perf-y += sample-parsing.o
perf-y += parse-no-sample-id-all.o
perf-y += kmod-path.o
perf-y += thread-map.o
+perf-y += llvm.o
perf-$(CONFIG_X86) += perf-time-to-tsc.o
diff --git a/tools/perf/tests/builtin-test.c b/tools/perf/tests/builtin-test.c
index c1dde73..6a3fb54 100644
--- a/tools/perf/tests/builtin-test.c
+++ b/tools/perf/tests/builtin-test.c
@@ -174,6 +174,10 @@ static struct test {
.desc = "Test thread map",
.func = test__thread_map,
},
+ {
+ .desc = "Test LLVM searching and compiling",
+ .func = test__llvm,
+ },
{
.func = NULL,
},
diff --git a/tools/perf/tests/llvm.c b/tools/perf/tests/llvm.c
new file mode 100644
index 0000000..9984631
--- /dev/null
+++ b/tools/perf/tests/llvm.c
@@ -0,0 +1,85 @@
+#include <stdio.h>
+#include <bpf/libbpf.h>
+#include <util/llvm-utils.h>
+#include <util/cache.h>
+#include "tests.h"
+#include "debug.h"
+
+static int perf_config_cb(const char *var, const char *val,
+ void *arg __maybe_unused)
+{
+ return perf_default_config(var, val, arg);
+}
+
+/*
+ * Randomly give it a "version" section since we don't really load it
+ * into kernel
+ */
+static const char test_bpf_prog[] =
+ "__attribute__((section(\"do_fork\"), used)) "
+ "int fork(void *ctx) {return 0;} "
+ "char _license[] __attribute__((section(\"license\"), used)) = \"GPL\";"
+ "int _version __attribute__((section(\"version\"), used)) = 0x40100;";
+
+#ifdef HAVE_LIBBPF_SUPPORT
+static int test__bpf_parsing(void *obj_buf, size_t obj_buf_sz)
+{
+ struct bpf_object *obj;
+
+ obj = bpf_object__open_buffer(obj_buf, obj_buf_sz);
+ if (!obj)
+ return -1;
+ bpf_object__close(obj);
+ return 0;
+}
+#else
+static int test__bpf_parsing(void *obj_buf __maybe_unused,
+ size_t obj_buf_sz __maybe_unused)
+{
+ fprintf(stderr, " (skip bpf parsing)");
+ return 0;
+}
+#endif
+
+#define new_string(n, fmt, ...) \
+do { \
+ n##_sz = snprintf(NULL, 0, fmt, __VA_ARGS__);\
+ if (n##_sz == 0) \
+ return -1; \
+ n##_new = malloc(n##_sz + 1); \
+ if (!n##_new) \
+ return -1; \
+ snprintf(n##_new, n##_sz + 1, fmt, __VA_ARGS__);\
+ n##_new[n##_sz] = '\0'; \
+} while (0)
+
+int test__llvm(void)
+{
+ char *tmpl_new, *clang_opt_new;
+ size_t tmpl_sz, clang_opt_sz;
+ void *obj_buf;
+ size_t obj_buf_sz;
+ int err;
+
+ perf_config(perf_config_cb, NULL);
+
+ if (!llvm_param.clang_bpf_cmd_template)
+ return -1;
+
+ if (!llvm_param.clang_opt)
+ llvm_param.clang_opt = strdup("");
+
+ new_string(tmpl, "echo '%s' | %s", test_bpf_prog,
+ llvm_param.clang_bpf_cmd_template);
+ new_string(clang_opt, "-xc %s", llvm_param.clang_opt);
+
+ llvm_param.clang_bpf_cmd_template = tmpl_new;
+ llvm_param.clang_opt = clang_opt_new;
+ err = llvm__compile_bpf("-", &obj_buf, &obj_buf_sz);
+ if (err)
+ return -1;
+
+ err = test__bpf_parsing(obj_buf, obj_buf_sz);
+ free(obj_buf);
+ return err;
+}
diff --git a/tools/perf/tests/tests.h b/tools/perf/tests/tests.h
index ebb47d9..bf113a2 100644
--- a/tools/perf/tests/tests.h
+++ b/tools/perf/tests/tests.h
@@ -62,6 +62,7 @@ int test__fdarray__filter(void);
int test__fdarray__add(void);
int test__kmod_path__parse(void);
int test__thread_map(void);
+int test__llvm(void);
#if defined(__x86_64__) || defined(__i386__) || defined(__arm__) || defined(__aarch64__)
#ifdef HAVE_DWARF_UNWIND_SUPPORT
--
1.8.3.4
By adding libbpf to perf's Makefile, this patch makes perf build libbpf
during its own build if libelf is found and neither NO_LIBELF nor
NO_LIBBPF is set. The newly introduced code follows the way libapi and
libtraceevent are built in Makefile.perf.
MANIFEST is also updated for 'make perf-*-src-pkg'.
make_no_libbpf is appended to tools/perf/tests/make.
The 'bpf' feature check is appended to the default FEATURE_TESTS and
FEATURE_DISPLAY, so perf checks the BPF API version in
/path/to/kernel/include/uapi/linux/bpf.h. This should not fail except
when porting this code to an old kernel.
Error messages are also updated to notify users that BPF support in
'perf record' is disabled if libelf is missing or the BPF API check fails.
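BPF support can also be disabled explicitly through the new NO_LIBBPF knob,
for example:

	$ make -C tools/perf NO_LIBBPF=1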
Signed-off-by: Wang Nan <[email protected]>
Cc: Alexei Starovoitov <[email protected]>
Cc: Brendan Gregg <[email protected]>
Cc: Daniel Borkmann <[email protected]>
Cc: David Ahern <[email protected]>
Cc: He Kuang <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Kaixu Xia <[email protected]>
Cc: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Zefan Li <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/build/Makefile.feature | 6 ++++--
tools/perf/MANIFEST | 3 +++
tools/perf/Makefile.perf | 19 +++++++++++++++++--
tools/perf/config/Makefile | 19 ++++++++++++++++++-
tools/perf/tests/make | 4 +++-
5 files changed, 45 insertions(+), 6 deletions(-)
diff --git a/tools/build/Makefile.feature b/tools/build/Makefile.feature
index 2975632..5ec6b37 100644
--- a/tools/build/Makefile.feature
+++ b/tools/build/Makefile.feature
@@ -51,7 +51,8 @@ FEATURE_TESTS ?= \
timerfd \
libdw-dwarf-unwind \
zlib \
- lzma
+ lzma \
+ bpf
FEATURE_DISPLAY ?= \
dwarf \
@@ -67,7 +68,8 @@ FEATURE_DISPLAY ?= \
libunwind \
libdw-dwarf-unwind \
zlib \
- lzma
+ lzma \
+ bpf
# Set FEATURE_CHECK_(C|LD)FLAGS-all for all FEATURE_TESTS features.
# If in the future we need per-feature checks/flags for features not
diff --git a/tools/perf/MANIFEST b/tools/perf/MANIFEST
index 09dc0aa..1b42f7c 100644
--- a/tools/perf/MANIFEST
+++ b/tools/perf/MANIFEST
@@ -17,6 +17,7 @@ tools/build
tools/arch/x86/include/asm/atomic.h
tools/arch/x86/include/asm/rmwcc.h
tools/lib/traceevent
+tools/lib/bpf
tools/lib/api
tools/lib/rbtree.c
tools/lib/symbol/kallsyms.c
@@ -67,6 +68,8 @@ arch/*/lib/memset*.S
include/linux/poison.h
include/linux/hw_breakpoint.h
include/uapi/linux/perf_event.h
+include/uapi/linux/bpf.h
+include/uapi/linux/bpf_common.h
include/uapi/linux/const.h
include/uapi/linux/swab.h
include/uapi/linux/hw_breakpoint.h
diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index 7a4b549..4857129 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -125,6 +125,7 @@ STRIP = strip
LIB_DIR = $(srctree)/tools/lib/api/
TRACE_EVENT_DIR = $(srctree)/tools/lib/traceevent/
+BPF_DIR = $(srctree)/tools/lib/bpf/
# include config/Makefile by default and rule out
# non-config cases
@@ -160,6 +161,7 @@ strip-libs = $(filter-out -l%,$(1))
ifneq ($(OUTPUT),)
TE_PATH=$(OUTPUT)
+ BPF_PATH=$(OUTPUT)
ifneq ($(subdir),)
LIB_PATH=$(OUTPUT)/../lib/api/
else
@@ -168,6 +170,7 @@ endif
else
TE_PATH=$(TRACE_EVENT_DIR)
LIB_PATH=$(LIB_DIR)
+ BPF_PATH=$(BPF_DIR)
endif
LIBTRACEEVENT = $(TE_PATH)libtraceevent.a
@@ -179,6 +182,8 @@ LIBTRACEEVENT_DYNAMIC_LIST_LDFLAGS = -Xlinker --dynamic-list=$(LIBTRACEEVENT_DYN
LIBAPI = $(LIB_PATH)libapi.a
export LIBAPI
+LIBBPF = $(BPF_PATH)libbpf.a
+
# python extension build directories
PYTHON_EXTBUILD := $(OUTPUT)python_ext_build/
PYTHON_EXTBUILD_LIB := $(PYTHON_EXTBUILD)lib/
@@ -231,6 +236,9 @@ export PERL_PATH
LIB_FILE=$(OUTPUT)libperf.a
PERFLIBS = $(LIB_FILE) $(LIBAPI) $(LIBTRACEEVENT)
+ifndef NO_LIBBPF
+ PERFLIBS += $(LIBBPF)
+endif
# We choose to avoid "if .. else if .. else .. endif endif"
# because maintaining the nesting to match is a pain. If
@@ -400,6 +408,13 @@ $(LIBAPI)-clean:
$(call QUIET_CLEAN, libapi)
$(Q)$(MAKE) -C $(LIB_DIR) O=$(OUTPUT) clean >/dev/null
+$(LIBBPF): FORCE
+ $(Q)$(MAKE) -C $(BPF_DIR) O=$(OUTPUT) $(OUTPUT)libbpf.a
+
+$(LIBBPF)-clean:
+ $(call QUIET_CLEAN, libbpf)
+ $(Q)$(MAKE) -C $(BPF_DIR) O=$(OUTPUT) clean >/dev/null
+
help:
@echo 'Perf make targets:'
@echo ' doc - make *all* documentation (see below)'
@@ -439,7 +454,7 @@ INSTALL_DOC_TARGETS += quick-install-doc quick-install-man quick-install-html
$(DOC_TARGETS):
$(QUIET_SUBDIR0)Documentation $(QUIET_SUBDIR1) $(@:doc=all)
-TAG_FOLDERS= . ../lib/traceevent ../lib/api ../lib/symbol
+TAG_FOLDERS= . ../lib/traceevent ../lib/api ../lib/symbol ../lib/bpf
TAG_FILES= ../../include/uapi/linux/perf_event.h
TAGS:
@@ -542,7 +557,7 @@ config-clean:
$(call QUIET_CLEAN, config)
$(Q)$(MAKE) -C $(srctree)/tools/build/feature/ clean >/dev/null
-clean: $(LIBTRACEEVENT)-clean $(LIBAPI)-clean config-clean
+clean: $(LIBTRACEEVENT)-clean $(LIBAPI)-clean $(LIBBPF)-clean config-clean
$(call QUIET_CLEAN, core-objs) $(RM) $(LIB_FILE) $(OUTPUT)perf-archive $(OUTPUT)perf-with-kcore $(LANG_BINDINGS)
$(Q)find . -name '*.o' -delete -o -name '\.*.cmd' -delete -o -name '\.*.d' -delete
$(Q)$(RM) $(OUTPUT).config-detected
diff --git a/tools/perf/config/Makefile b/tools/perf/config/Makefile
index 094ddae..52722cf 100644
--- a/tools/perf/config/Makefile
+++ b/tools/perf/config/Makefile
@@ -106,6 +106,7 @@ ifdef LIBBABELTRACE
FEATURE_CHECK_LDFLAGS-libbabeltrace := $(LIBBABELTRACE_LDFLAGS) -lbabeltrace-ctf
endif
+FEATURE_CHECK_CFLAGS-bpf = -I. -I$(srctree)/tools/include -I$(srctree)/arch/$(ARCH)/include/uapi -I$(srctree)/include/uapi
# include ARCH specific config
-include $(src-perf)/arch/$(ARCH)/Makefile
@@ -233,6 +234,7 @@ ifdef NO_LIBELF
NO_DEMANGLE := 1
NO_LIBUNWIND := 1
NO_LIBDW_DWARF_UNWIND := 1
+ NO_LIBBPF := 1
else
ifeq ($(feature-libelf), 0)
ifeq ($(feature-glibc), 1)
@@ -242,13 +244,14 @@ else
LIBC_SUPPORT := 1
endif
ifeq ($(LIBC_SUPPORT),1)
- msg := $(warning No libelf found, disables 'probe' tool, please install elfutils-libelf-devel/libelf-dev);
+ msg := $(warning No libelf found, disables 'probe' tool and BPF support in 'perf record', please install elfutils-libelf-devel/libelf-dev);
NO_LIBELF := 1
NO_DWARF := 1
NO_DEMANGLE := 1
NO_LIBUNWIND := 1
NO_LIBDW_DWARF_UNWIND := 1
+ NO_LIBBPF := 1
else
ifneq ($(filter s% -static%,$(LDFLAGS),),)
msg := $(error No static glibc found, please install glibc-static);
@@ -301,6 +304,13 @@ ifndef NO_LIBELF
$(call detected,CONFIG_DWARF)
endif # PERF_HAVE_DWARF_REGS
endif # NO_DWARF
+
+ ifndef NO_LIBBPF
+ ifeq ($(feature-bpf), 1)
+ CFLAGS += -DHAVE_LIBBPF_SUPPORT
+ $(call detected,CONFIG_LIBBPF)
+ endif
+ endif # NO_LIBBPF
endif # NO_LIBELF
ifeq ($(ARCH),powerpc)
@@ -316,6 +326,13 @@ ifndef NO_LIBUNWIND
endif
endif
+ifndef NO_LIBBPF
+ ifneq ($(feature-bpf), 1)
+ msg := $(warning BPF API too old. Please install recent kernel headers. BPF support in 'perf record' is disabled.)
+ NO_LIBBPF := 1
+ endif
+endif
+
dwarf-post-unwind := 1
dwarf-post-unwind-text := BUG
diff --git a/tools/perf/tests/make b/tools/perf/tests/make
index 729112f..89e0f3d 100644
--- a/tools/perf/tests/make
+++ b/tools/perf/tests/make
@@ -44,6 +44,7 @@ make_no_libnuma := NO_LIBNUMA=1
make_no_libaudit := NO_LIBAUDIT=1
make_no_libbionic := NO_LIBBIONIC=1
make_no_auxtrace := NO_AUXTRACE=1
+make_no_libbpf := NO_LIBBPF=1
make_tags := tags
make_cscope := cscope
make_help := help
@@ -65,7 +66,7 @@ make_static := LDFLAGS=-static
make_minimal := NO_LIBPERL=1 NO_LIBPYTHON=1 NO_NEWT=1 NO_GTK2=1
make_minimal += NO_DEMANGLE=1 NO_LIBELF=1 NO_LIBUNWIND=1 NO_BACKTRACE=1
make_minimal += NO_LIBNUMA=1 NO_LIBAUDIT=1 NO_LIBBIONIC=1
-make_minimal += NO_LIBDW_DWARF_UNWIND=1 NO_AUXTRACE=1
+make_minimal += NO_LIBDW_DWARF_UNWIND=1 NO_AUXTRACE=1 NO_LIBBPF=1
# $(run) contains all available tests
run := make_pure
@@ -93,6 +94,7 @@ run += make_no_libnuma
run += make_no_libaudit
run += make_no_libbionic
run += make_no_auxtrace
+run += make_no_libbpf
run += make_help
run += make_doc
run += make_perf_o
--
1.8.3.4
By introducing new rules in tools/perf/util/parse-events.[ly], this
patch enables 'perf record --event bpf_file.o' to select events from
an eBPF object file. It calls parse_events_load_bpf() to load that
file, which uses bpf__prepare_load() and finally calls
bpf_object__open() on the object file.
Instead of adding evsels to the evlist during parsing, events selected
by eBPF object files are appended separately. The reasons are:
1. During parsing, the probing points have not been initialized yet.
2. Currently we are unable to call add_perf_probe_events() twice,
   therefore we have to wait until all such events are collected,
   then probe all points with one call.
The real probing and selecting take place in following patches.
'bpf-loader.[ch]' is introduced in this patch as the interface between
perf and libbpf. bpf__prepare_load() resides in bpf-loader.c. Dummy
functions are provided because bpf-loader.c is built only when
CONFIG_LIBBPF is on.
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/util/Build | 1 +
tools/perf/util/bpf-loader.c | 60 ++++++++++++++++++++++++++++++++++++++++++
tools/perf/util/bpf-loader.h | 24 +++++++++++++++++
tools/perf/util/debug.c | 5 ++++
tools/perf/util/debug.h | 1 +
tools/perf/util/parse-events.c | 16 +++++++++++
tools/perf/util/parse-events.h | 2 ++
tools/perf/util/parse-events.l | 3 +++
tools/perf/util/parse-events.y | 18 ++++++++++++-
9 files changed, 129 insertions(+), 1 deletion(-)
create mode 100644 tools/perf/util/bpf-loader.c
create mode 100644 tools/perf/util/bpf-loader.h
diff --git a/tools/perf/util/Build b/tools/perf/util/Build
index bc24293..3357e5a 100644
--- a/tools/perf/util/Build
+++ b/tools/perf/util/Build
@@ -79,6 +79,7 @@ libperf-y += thread-stack.o
libperf-$(CONFIG_AUXTRACE) += auxtrace.o
libperf-y += parse-branch-options.o
+libperf-$(CONFIG_LIBBPF) += bpf-loader.o
libperf-$(CONFIG_LIBELF) += symbol-elf.o
libperf-$(CONFIG_LIBELF) += probe-event.o
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
new file mode 100644
index 0000000..7c750b6
--- /dev/null
+++ b/tools/perf/util/bpf-loader.c
@@ -0,0 +1,60 @@
+/*
+ * bpf-loader.c
+ *
+ * Copyright (C) 2015 Wang Nan <[email protected]>
+ * Copyright (C) 2015 Huawei Inc.
+ */
+
+#include <bpf/libbpf.h>
+#include "perf.h"
+#include "debug.h"
+#include "bpf-loader.h"
+
+#define DEFINE_PRINT_FN(name, level) \
+static int libbpf_##name(const char *fmt, ...) \
+{ \
+ va_list args; \
+ int ret; \
+ \
+ va_start(args, fmt); \
+ ret = veprintf(level, verbose, pr_fmt(fmt), args);\
+ va_end(args); \
+ return ret; \
+}
+
+DEFINE_PRINT_FN(warning, 0)
+DEFINE_PRINT_FN(info, 0)
+DEFINE_PRINT_FN(debug, 1)
+
+static bool libbpf_initialized;
+
+int bpf__prepare_load(const char *filename)
+{
+ struct bpf_object *obj;
+
+ if (!libbpf_initialized)
+ libbpf_set_print(libbpf_warning,
+ libbpf_info,
+ libbpf_debug);
+
+ obj = bpf_object__open(filename);
+ if (!obj) {
+ pr_err("bpf: failed to load %s\n", filename);
+ return -EINVAL;
+ }
+
+ /*
+ * Throw object pointer away: it will be retrieved using
+ * the bpf_objects iterator.
+ */
+
+ return 0;
+}
+
+void bpf__clear(void)
+{
+ struct bpf_object *obj, *tmp;
+
+ bpf_object__for_each_safe(obj, tmp)
+ bpf_object__close(obj);
+}
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
new file mode 100644
index 0000000..39d8d1a
--- /dev/null
+++ b/tools/perf/util/bpf-loader.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2015, Wang Nan <[email protected]>
+ * Copyright (C) 2015, Huawei Inc.
+ */
+#ifndef __BPF_LOADER_H
+#define __BPF_LOADER_H
+
+#include <linux/compiler.h>
+#include "debug.h"
+
+#ifdef HAVE_LIBBPF_SUPPORT
+int bpf__prepare_load(const char *filename);
+
+void bpf__clear(void);
+#else
+static inline int bpf__prepare_load(const char *filename __maybe_unused)
+{
+ pr_err("ERROR: eBPF object loading is disabled during compiling.\n");
+ return -1;
+}
+
+static inline void bpf__clear(void) { }
+#endif
+#endif
diff --git a/tools/perf/util/debug.c b/tools/perf/util/debug.c
index 2da5581..86d9c73 100644
--- a/tools/perf/util/debug.c
+++ b/tools/perf/util/debug.c
@@ -36,6 +36,11 @@ static int _eprintf(int level, int var, const char *fmt, va_list args)
return ret;
}
+int veprintf(int level, int var, const char *fmt, va_list args)
+{
+ return _eprintf(level, var, fmt, args);
+}
+
int eprintf(int level, int var, const char *fmt, ...)
{
va_list args;
diff --git a/tools/perf/util/debug.h b/tools/perf/util/debug.h
index caac2fd..8b9a088 100644
--- a/tools/perf/util/debug.h
+++ b/tools/perf/util/debug.h
@@ -50,6 +50,7 @@ void pr_stat(const char *fmt, ...);
int eprintf(int level, int var, const char *fmt, ...) __attribute__((format(printf, 3, 4)));
int eprintf_time(int level, int var, u64 t, const char *fmt, ...) __attribute__((format(printf, 4, 5)));
+int veprintf(int level, int var, const char *fmt, va_list args);
int perf_debug_option(const char *str);
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index a71eeb2..a2829ef 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -19,6 +19,7 @@
#include "thread_map.h"
#include "cpumap.h"
#include "asm/bug.h"
+#include "bpf-loader.h"
#define MAX_NAME_LEN 100
@@ -475,6 +476,21 @@ int parse_events_add_tracepoint(struct list_head *list, int *idx,
return add_tracepoint_event(list, idx, sys, event);
}
+int parse_events_load_bpf(struct list_head *list __maybe_unused,
+ int *idx __maybe_unused,
+ char *bpf_file_name)
+{
+ /*
+ * Currently don't link any event to list. BPF object files
+ * should be saved to a separate list and processed together.
+ *
+ * Things could be changed if we solve perf probe reentering
+ * problem. After that probe events file by file is possible.
+ * However, probing cost still needs to be considered.
+ */
+ return bpf__prepare_load(bpf_file_name);
+}
+
static int
parse_breakpoint_type(const char *type, struct perf_event_attr *attr)
{
diff --git a/tools/perf/util/parse-events.h b/tools/perf/util/parse-events.h
index 131f29b..41b962a 100644
--- a/tools/perf/util/parse-events.h
+++ b/tools/perf/util/parse-events.h
@@ -114,6 +114,8 @@ int parse_events__modifier_group(struct list_head *list, char *event_mod);
int parse_events_name(struct list_head *list, char *name);
int parse_events_add_tracepoint(struct list_head *list, int *idx,
char *sys, char *event);
+int parse_events_load_bpf(struct list_head *list, int *idx,
+ char *bpf_file_name);
int parse_events_add_numeric(struct parse_events_evlist *data,
struct list_head *list,
u32 type, u64 config,
diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
index 13cef3c..8328b28 100644
--- a/tools/perf/util/parse-events.l
+++ b/tools/perf/util/parse-events.l
@@ -115,6 +115,7 @@ do { \
group [^,{}/]*[{][^}]*[}][^,{}/]*
event_pmu [^,{}/]+[/][^/]*[/][^,{}/]*
event [^,{}/]+
+bpf_object .*\.(o|bpf)
num_dec [0-9]+
num_hex 0x[a-fA-F0-9]+
@@ -159,6 +160,7 @@ modifier_bp [rwx]{1,3}
}
{event_pmu} |
+{bpf_object} |
{event} {
BEGIN(INITIAL);
REWIND(1);
@@ -261,6 +263,7 @@ r{num_raw_hex} { return raw(yyscanner); }
{modifier_event} { return str(yyscanner, PE_MODIFIER_EVENT); }
{name} { return pmu_str_check(yyscanner); }
+{bpf_object} { return str(yyscanner, PE_BPF_OBJECT); }
"/" { BEGIN(config); return '/'; }
- { return '-'; }
, { BEGIN(event); return ','; }
diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
index 591905a..481f3cd 100644
--- a/tools/perf/util/parse-events.y
+++ b/tools/perf/util/parse-events.y
@@ -42,6 +42,7 @@ static inc_group_count(struct list_head *list,
%token PE_VALUE PE_VALUE_SYM_HW PE_VALUE_SYM_SW PE_RAW PE_TERM
%token PE_EVENT_NAME
%token PE_NAME
+%token PE_BPF_OBJECT
%token PE_MODIFIER_EVENT PE_MODIFIER_BP
%token PE_NAME_CACHE_TYPE PE_NAME_CACHE_OP_RESULT
%token PE_PREFIX_MEM PE_PREFIX_RAW PE_PREFIX_GROUP
@@ -53,6 +54,7 @@ static inc_group_count(struct list_head *list,
%type <num> PE_RAW
%type <num> PE_TERM
%type <str> PE_NAME
+%type <str> PE_BPF_OBJECT
%type <str> PE_NAME_CACHE_TYPE
%type <str> PE_NAME_CACHE_OP_RESULT
%type <str> PE_MODIFIER_EVENT
@@ -69,6 +71,7 @@ static inc_group_count(struct list_head *list,
%type <head> event_legacy_tracepoint
%type <head> event_legacy_numeric
%type <head> event_legacy_raw
+%type <head> event_bpf_file
%type <head> event_def
%type <head> event_mod
%type <head> event_name
@@ -198,7 +201,8 @@ event_def: event_pmu |
event_legacy_mem |
event_legacy_tracepoint sep_dc |
event_legacy_numeric sep_dc |
- event_legacy_raw sep_dc
+ event_legacy_raw sep_dc |
+ event_bpf_file
event_pmu:
PE_NAME '/' event_config '/'
@@ -420,6 +424,18 @@ PE_RAW
$$ = list;
}
+event_bpf_file:
+PE_BPF_OBJECT
+{
+ struct parse_events_evlist *data = _data;
+ struct list_head *list;
+
+ ALLOC_LIST(list);
+ ABORT_ON(parse_events_load_bpf(list, &data->idx, $1));
+ $$ = list;
+}
+
+
start_terms: event_config
{
struct parse_events_terms *data = _data;
--
1.8.3.4
This patch enables passing source files to --event directly using:
# perf record --event bpf-file.c command
This patch does the following:
1) Allows passing a '.c' file to '--event'. parse_events_load_bpf() is
   expanded so the caller can tell it whether the passed file is a source
   file or an object.
2) Calls llvm__compile_bpf() to compile the '.c' file and saves the result
   into memory, then uses bpf_object__open_buffer() to load the in-memory
   object.
A minimal scriptlet of the kind this enables is sketched below.
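The following is only illustrative (the probed function, the program name and
the version number are arbitrary); it mirrors the shape of the program used
by the LLVM test:

	__attribute__((section("do_fork"), used))
	int bpf_func__do_fork(void *ctx)
	{
		/* program body goes here */
		return 0;
	}
	char _license[] __attribute__((section("license"), used)) = "GPL";
	int _version __attribute__((section("version"), used)) = 0x40100;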
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/util/bpf-loader.c | 17 +++++++++++++++--
tools/perf/util/bpf-loader.h | 5 +++--
tools/perf/util/parse-events.c | 4 ++--
tools/perf/util/parse-events.h | 2 +-
tools/perf/util/parse-events.l | 3 +++
tools/perf/util/parse-events.y | 15 +++++++++++++--
6 files changed, 37 insertions(+), 9 deletions(-)
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 7c750b6..61d3adf 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -9,6 +9,7 @@
#include "perf.h"
#include "debug.h"
#include "bpf-loader.h"
+#include "llvm-utils.h"
#define DEFINE_PRINT_FN(name, level) \
static int libbpf_##name(const char *fmt, ...) \
@@ -28,7 +29,7 @@ DEFINE_PRINT_FN(debug, 1)
static bool libbpf_initialized;
-int bpf__prepare_load(const char *filename)
+int bpf__prepare_load(const char *filename, bool source)
{
struct bpf_object *obj;
@@ -37,7 +38,19 @@ int bpf__prepare_load(const char *filename)
libbpf_info,
libbpf_debug);
- obj = bpf_object__open(filename);
+ if (source) {
+ void *obj_buf;
+ size_t obj_buf_sz;
+ int err;
+
+ err = llvm__compile_bpf(filename, &obj_buf, &obj_buf_sz);
+ if (err)
+ return err;
+ obj = bpf_object__open_buffer(obj_buf, obj_buf_sz);
+ free(obj_buf);
+ } else
+ obj = bpf_object__open(filename);
+
if (!obj) {
pr_err("bpf: failed to load %s\n", filename);
return -EINVAL;
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index 39d8d1a..5566be0 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -9,11 +9,12 @@
#include "debug.h"
#ifdef HAVE_LIBBPF_SUPPORT
-int bpf__prepare_load(const char *filename);
+int bpf__prepare_load(const char *filename, bool source);
void bpf__clear(void);
#else
-static inline int bpf__prepare_load(const char *filename __maybe_unused)
+static inline int bpf__prepare_load(const char *filename __maybe_unused,
+ bool source __maybe_unused)
{
pr_err("ERROR: eBPF object loading is disabled during compiling.\n");
return -1;
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index a2829ef..8f3644f 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -478,7 +478,7 @@ int parse_events_add_tracepoint(struct list_head *list, int *idx,
int parse_events_load_bpf(struct list_head *list __maybe_unused,
int *idx __maybe_unused,
- char *bpf_file_name)
+ char *bpf_file_name, bool source)
{
/*
* Currently don't link any event to list. BPF object files
@@ -488,7 +488,7 @@ int parse_events_load_bpf(struct list_head *list __maybe_unused,
* problem. After that probe events file by file is possible.
* However, probing cost is still need to be considered.
*/
- return bpf__prepare_load(bpf_file_name);
+ return bpf__prepare_load(bpf_file_name, source);
}
static int
diff --git a/tools/perf/util/parse-events.h b/tools/perf/util/parse-events.h
index 41b962a..5841d4f 100644
--- a/tools/perf/util/parse-events.h
+++ b/tools/perf/util/parse-events.h
@@ -115,7 +115,7 @@ int parse_events_name(struct list_head *list, char *name);
int parse_events_add_tracepoint(struct list_head *list, int *idx,
char *sys, char *event);
int parse_events_load_bpf(struct list_head *list, int *idx,
- char *bpf_file_name);
+ char *bpf_file_name, bool source);
int parse_events_add_numeric(struct parse_events_evlist *data,
struct list_head *list,
u32 type, u64 config,
diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
index 8328b28..556fa21 100644
--- a/tools/perf/util/parse-events.l
+++ b/tools/perf/util/parse-events.l
@@ -116,6 +116,7 @@ group [^,{}/]*[{][^}]*[}][^,{}/]*
event_pmu [^,{}/]+[/][^/]*[/][^,{}/]*
event [^,{}/]+
bpf_object .*\.(o|bpf)
+bpf_source .*\.c
num_dec [0-9]+
num_hex 0x[a-fA-F0-9]+
@@ -161,6 +162,7 @@ modifier_bp [rwx]{1,3}
{event_pmu} |
{bpf_object} |
+{bpf_source} |
{event} {
BEGIN(INITIAL);
REWIND(1);
@@ -264,6 +266,7 @@ r{num_raw_hex} { return raw(yyscanner); }
{modifier_event} { return str(yyscanner, PE_MODIFIER_EVENT); }
{name} { return pmu_str_check(yyscanner); }
{bpf_object} { return str(yyscanner, PE_BPF_OBJECT); }
+{bpf_source} { return str(yyscanner, PE_BPF_SOURCE); }
"/" { BEGIN(config); return '/'; }
- { return '-'; }
, { BEGIN(event); return ','; }
diff --git a/tools/perf/util/parse-events.y b/tools/perf/util/parse-events.y
index 481f3cd..eeb9768 100644
--- a/tools/perf/util/parse-events.y
+++ b/tools/perf/util/parse-events.y
@@ -42,7 +42,7 @@ static inc_group_count(struct list_head *list,
%token PE_VALUE PE_VALUE_SYM_HW PE_VALUE_SYM_SW PE_RAW PE_TERM
%token PE_EVENT_NAME
%token PE_NAME
-%token PE_BPF_OBJECT
+%token PE_BPF_OBJECT PE_BPF_SOURCE
%token PE_MODIFIER_EVENT PE_MODIFIER_BP
%token PE_NAME_CACHE_TYPE PE_NAME_CACHE_OP_RESULT
%token PE_PREFIX_MEM PE_PREFIX_RAW PE_PREFIX_GROUP
@@ -55,6 +55,7 @@ static inc_group_count(struct list_head *list,
%type <num> PE_TERM
%type <str> PE_NAME
%type <str> PE_BPF_OBJECT
+%type <str> PE_BPF_SOURCE
%type <str> PE_NAME_CACHE_TYPE
%type <str> PE_NAME_CACHE_OP_RESULT
%type <str> PE_MODIFIER_EVENT
@@ -431,7 +432,17 @@ PE_BPF_OBJECT
struct list_head *list;
ALLOC_LIST(list);
- ABORT_ON(parse_events_load_bpf(list, &data->idx, $1));
+ ABORT_ON(parse_events_load_bpf(list, &data->idx, $1, false));
+ $$ = list;
+}
+|
+PE_BPF_SOURCE
+{
+ struct parse_events_evlist *data = _data;
+ struct list_head *list;
+
+ ALLOC_LIST(list);
+ ABORT_ON(parse_events_load_bpf(list, &data->idx, $1, true));
$$ = list;
}
--
1.8.3.4
This patch parses the section name of each program and creates a
corresponding 'struct perf_probe_event' structure.
parse_perf_probe_command() is used to do the main parsing work.
The parsing result is stored into a global array because
add_perf_probe_events() is not reentrant. In a following patch,
add_perf_probe_events() will be used to insert kprobes. It accepts an
array of 'struct perf_probe_event' and does all the work in one call.
PERF_BPF_PROBE_GROUP is defined as "perf_bpf_probe", which will be used
as the group name of all eBPF probing points.
This patch utilizes bpf_program__set_private() to bind each
perf_probe_event to its bpf program through the private field. An
illustrative section name is sketched below.
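For example (illustrative only; the section name can be anything that
parse_perf_probe_command() accepts, and the event name part is mandatory),
a program placed in a section named after a probe definition becomes a
perf_bpf_probe event:

	/* would become probe event perf_bpf_probe:lock_page on __lock_page */
	__attribute__((section("lock_page=__lock_page"), used))
	int lock_page(void *ctx)
	{
		return 0;
	}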
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/util/bpf-loader.c | 126 ++++++++++++++++++++++++++++++++++++++++++-
tools/perf/util/bpf-loader.h | 2 +
2 files changed, 126 insertions(+), 2 deletions(-)
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 61d3adf..e810d05 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -10,6 +10,8 @@
#include "debug.h"
#include "bpf-loader.h"
#include "llvm-utils.h"
+#include "probe-event.h"
+#include "probe-finder.h"
#define DEFINE_PRINT_FN(name, level) \
static int libbpf_##name(const char *fmt, ...) \
@@ -29,9 +31,122 @@ DEFINE_PRINT_FN(debug, 1)
static bool libbpf_initialized;
+static struct perf_probe_event probe_event_array[MAX_PROBES];
+static size_t nr_probe_events;
+
+static struct perf_probe_event *
+alloc_perf_probe_event(void)
+{
+ struct perf_probe_event *pev;
+ int n = nr_probe_events;
+
+ if (n >= MAX_PROBES) {
+ pr_err("bpf: too many events, increase MAX_PROBES\n");
+ return NULL;
+ }
+
+ nr_probe_events = n + 1;
+ pev = &probe_event_array[n];
+ bzero(pev, sizeof(*pev));
+ return pev;
+}
+
+struct bpf_prog_priv {
+ struct perf_probe_event *pev;
+};
+
+static void
+bpf_prog_priv__clear(struct bpf_program *prog __maybe_unused,
+ void *_priv)
+{
+ struct bpf_prog_priv *priv = _priv;
+
+ if (priv->pev)
+ clear_perf_probe_event(priv->pev);
+ free(priv);
+}
+
+static int
+config_bpf_program(struct bpf_program *prog)
+{
+ struct perf_probe_event *pev = alloc_perf_probe_event();
+ struct bpf_prog_priv *priv = NULL;
+ const char *config_str;
+ int err;
+
+ /* pr_err has been done by alloc_perf_probe_event */
+ if (!pev)
+ return -ENOMEM;
+
+ err = bpf_program__get_title(prog, &config_str, false);
+ if (err || !config_str) {
+ pr_err("bpf: unable to get title for program\n");
+ return -EINVAL;
+ }
+
+ pr_debug("bpf: config program '%s'\n", config_str);
+ err = parse_perf_probe_command(config_str, pev);
+ if (err < 0) {
+ pr_err("bpf: '%s' is not a valid config string\n",
+ config_str);
+ /* parse failed, don't need clear pev. */
+ return -EINVAL;
+ }
+
+ if (pev->group && strcmp(pev->group, PERF_BPF_PROBE_GROUP)) {
+ pr_err("bpf: '%s': group for event is set and not '%s'.\n",
+ config_str, PERF_BPF_PROBE_GROUP);
+ err = -EINVAL;
+ goto errout;
+ } else if (!pev->group)
+ pev->group = strdup(PERF_BPF_PROBE_GROUP);
+
+ if (!pev->group) {
+ pr_err("bpf: strdup failed\n");
+ err = -ENOMEM;
+ goto errout;
+ }
+
+ if (!pev->event) {
+ pr_err("bpf: '%s': event name is missing\n",
+ config_str);
+ err = -EINVAL;
+ goto errout;
+ }
+
+ pr_debug("bpf: config '%s' is ok\n", config_str);
+
+ priv = calloc(1, sizeof(*priv));
+ if (!priv) {
+ pr_err("bpf: failed to alloc memory\n");
+ err = -ENOMEM;
+ goto errout;
+ }
+
+ priv->pev = pev;
+
+ err = bpf_program__set_private(prog, priv,
+ bpf_prog_priv__clear);
+ if (err) {
+ pr_err("bpf: set program private failed\n");
+ err = -ENOMEM;
+ goto errout;
+ }
+ return 0;
+
+errout:
+ if (pev)
+ clear_perf_probe_event(pev);
+ if (priv)
+ free(priv);
+ return err;
+}
+
int bpf__prepare_load(const char *filename, bool source)
{
struct bpf_object *obj;
+ struct bpf_program *prog;
+ int err = 0;
if (!libbpf_initialized)
libbpf_set_print(libbpf_warning,
@@ -41,7 +156,6 @@ int bpf__prepare_load(const char *filename, bool source)
if (source) {
void *obj_buf;
size_t obj_buf_sz;
- int err;
err = llvm__compile_bpf(filename, &obj_buf, &obj_buf_sz);
if (err)
@@ -56,12 +170,20 @@ int bpf__prepare_load(const char *filename, bool source)
return -EINVAL;
}
+ bpf_object__for_each_program(prog, obj) {
+ err = config_bpf_program(prog);
+ if (err)
+ goto errout;
+ }
+
/*
	 * Throw object pointer away: it will be retrieved using
	 * the bpf_objects iterator.
*/
-
return 0;
+errout:
+ bpf_object__close(obj);
+ return err;
}
void bpf__clear(void)
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index 5566be0..5a3c954 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -8,6 +8,8 @@
#include <linux/compiler.h>
#include "debug.h"
+#define PERF_BPF_PROBE_GROUP "perf_bpf_probe"
+
#ifdef HAVE_LIBBPF_SUPPORT
int bpf__prepare_load(const char *filename, bool source);
--
1.8.3.4
This patch drops the struct __event_package structure. Instead, it adds
probe_trace_event information to 'struct perf_probe_event'.
The probe_trace_event information gives further patches a chance to access
the actual probe points and arguments. Using them, bpf_loader will be able
to attach one bpf program to the different probing points of an inline
function (which has multiple probing points) and to glob-matched functions.
Moreover, by reading the argument information, bpf code for reading those
arguments can be generated.
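With the new 'cleanup' parameter, a caller can keep the resulting
probe_trace_events attached to each perf_probe_event and release them
explicitly later, roughly (sketch):

	/* convert and add probes, but keep pevs[i].tevs for later use */
	err = add_perf_probe_events(pevs, npevs, false);

	/* release the trace events when they are no longer needed */
	for (i = 0; i < npevs; i++)
		cleanup_perf_probe_event(&pevs[i]);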
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/builtin-probe.c | 4 ++-
tools/perf/util/probe-event.c | 60 +++++++++++++++++++++----------------------
tools/perf/util/probe-event.h | 6 ++++-
3 files changed, 38 insertions(+), 32 deletions(-)
diff --git a/tools/perf/builtin-probe.c b/tools/perf/builtin-probe.c
index b81cec3..826d452 100644
--- a/tools/perf/builtin-probe.c
+++ b/tools/perf/builtin-probe.c
@@ -496,7 +496,9 @@ __cmd_probe(int argc, const char **argv, const char *prefix __maybe_unused)
usage_with_options(probe_usage, options);
}
- ret = add_perf_probe_events(params.events, params.nevents);
+ ret = add_perf_probe_events(params.events,
+ params.nevents,
+ true);
if (ret < 0) {
pr_err_with_code(" Error: Failed to add events.", ret);
return ret;
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index 381f23a..083e8b4 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -1930,6 +1930,9 @@ void clear_perf_probe_event(struct perf_probe_event *pev)
struct perf_probe_arg_field *field, *next;
int i;
+ if (pev->ntevs)
+ cleanup_perf_probe_event(pev);
+
free(pev->event);
free(pev->group);
free(pev->target);
@@ -2778,61 +2781,58 @@ static int convert_to_probe_trace_events(struct perf_probe_event *pev,
return find_probe_trace_events_from_map(pev, tevs);
}
-struct __event_package {
- struct perf_probe_event *pev;
- struct probe_trace_event *tevs;
- int ntevs;
-};
-
-int add_perf_probe_events(struct perf_probe_event *pevs, int npevs)
+int cleanup_perf_probe_event(struct perf_probe_event *pev)
{
- int i, j, ret;
- struct __event_package *pkgs;
+ int i;
- ret = 0;
- pkgs = zalloc(sizeof(struct __event_package) * npevs);
+ if (!pev || !pev->ntevs)
+ return 0;
- if (pkgs == NULL)
- return -ENOMEM;
+ for (i = 0; i < pev->ntevs; i++)
+ clear_probe_trace_event(&pev->tevs[i]);
+
+ zfree(&pev->tevs);
+ pev->ntevs = 0;
+ return 0;
+}
+
+int add_perf_probe_events(struct perf_probe_event *pevs, int npevs,
+ bool cleanup)
+{
+ int i, ret;
ret = init_symbol_maps(pevs->uprobes);
- if (ret < 0) {
- free(pkgs);
+ if (ret < 0)
return ret;
- }
/* Loop 1: convert all events */
for (i = 0; i < npevs; i++) {
- pkgs[i].pev = &pevs[i];
/* Init kprobe blacklist if needed */
- if (!pkgs[i].pev->uprobes)
+ if (!pevs[i].uprobes)
kprobe_blacklist__init();
/* Convert with or without debuginfo */
- ret = convert_to_probe_trace_events(pkgs[i].pev,
- &pkgs[i].tevs);
- if (ret < 0)
+ ret = convert_to_probe_trace_events(&pevs[i], &pevs[i].tevs);
+ if (ret < 0) {
+ cleanup = true;
goto end;
- pkgs[i].ntevs = ret;
+ }
+ pevs[i].ntevs = ret;
}
/* This just release blacklist only if allocated */
kprobe_blacklist__release();
/* Loop 2: add all events */
for (i = 0; i < npevs; i++) {
- ret = __add_probe_trace_events(pkgs[i].pev, pkgs[i].tevs,
- pkgs[i].ntevs,
+ ret = __add_probe_trace_events(&pevs[i], pevs[i].tevs,
+ pevs[i].ntevs,
probe_conf.force_add);
if (ret < 0)
break;
}
end:
/* Loop 3: cleanup and free trace events */
- for (i = 0; i < npevs; i++) {
- for (j = 0; j < pkgs[i].ntevs; j++)
- clear_probe_trace_event(&pkgs[i].tevs[j]);
- zfree(&pkgs[i].tevs);
- }
- free(pkgs);
+ for (i = 0; cleanup && (i < npevs); i++)
+ cleanup_perf_probe_event(&pevs[i]);
exit_symbol_maps();
return ret;
diff --git a/tools/perf/util/probe-event.h b/tools/perf/util/probe-event.h
index 31db6ee..4b7a951 100644
--- a/tools/perf/util/probe-event.h
+++ b/tools/perf/util/probe-event.h
@@ -86,6 +86,8 @@ struct perf_probe_event {
bool uprobes; /* Uprobe event flag */
char *target; /* Target binary */
struct perf_probe_arg *args; /* Arguments */
+ struct probe_trace_event *tevs;
+ int ntevs;
};
/* Line range */
@@ -131,8 +133,10 @@ extern void line_range__clear(struct line_range *lr);
/* Initialize line range */
extern int line_range__init(struct line_range *lr);
-extern int add_perf_probe_events(struct perf_probe_event *pevs, int npevs);
+extern int add_perf_probe_events(struct perf_probe_event *pevs, int npevs,
+ bool cleanup);
extern int del_perf_probe_events(struct strfilter *filter);
+extern int cleanup_perf_probe_event(struct perf_probe_event *pev);
extern int show_perf_probe_events(struct strfilter *filter);
extern int show_line_range(struct line_range *lr, const char *module,
bool user);
--
1.8.3.4
In this patch, kprobe points are created using add_perf_probe_events().
Since all events are already grouped together in an array, a single call
to add_perf_probe_events() creates all of them.
probe_conf.max_probes is set to MAX_PROBES to support glob matching.
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/builtin-record.c | 18 ++++++++++++++++-
tools/perf/util/bpf-loader.c | 48 ++++++++++++++++++++++++++++++++++++++++++++
tools/perf/util/bpf-loader.h | 4 ++++
3 files changed, 69 insertions(+), 1 deletion(-)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 283fe96..33b213a 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -29,6 +29,7 @@
#include "util/data.h"
#include "util/auxtrace.h"
#include "util/parse-branch-options.h"
+#include "util/bpf-loader.h"
#include <unistd.h>
#include <sched.h>
@@ -1111,7 +1112,21 @@ int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
if (err)
return err;
- err = -ENOMEM;
+ /*
+ * bpf__probe must be called before symbol__init() because we
+ * need init_symbol_maps. If called after symbol__init,
+ * symbol_conf.sort_by_name won't take effect.
+ *
+ * bpf__unprobe() is safe even if bpf__probe() failed, and it
+ * also calls symbol__init. Therefore, goto out_symbol_exit
+ * is safe when probe failed.
+ */
+ err = bpf__probe();
+ if (err) {
+ pr_err("Probing at events in BPF object failed.\n");
+ pr_err("Try perf probe -d '*' to remove existing probe events.\n");
+ goto out_symbol_exit;
+ }
symbol__init(NULL);
@@ -1172,6 +1187,7 @@ out_symbol_exit:
perf_evlist__delete(rec->evlist);
symbol__exit();
auxtrace_record__free(rec->itr);
+ bpf__unprobe();
return err;
}
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index e810d05..df3f471 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -193,3 +193,51 @@ void bpf__clear(void)
bpf_object__for_each_safe(obj, tmp)
bpf_object__close(obj);
}
+
+static bool is_probing;
+
+int bpf__unprobe(void)
+{
+ struct strfilter *delfilter;
+ int ret;
+
+ if (!is_probing)
+ return 0;
+
+ delfilter = strfilter__new(PERF_BPF_PROBE_GROUP ":*", NULL);
+ if (!delfilter) {
+ pr_err("Failed to create delfilter when unprobing\n");
+ return -ENOMEM;
+ }
+
+ ret = del_perf_probe_events(delfilter);
+ strfilter__delete(delfilter);
+ if (ret < 0 && is_probing)
+ pr_err("Error: failed to delete events: %s\n",
+ strerror(-ret));
+ else
+ is_probing = false;
+ return ret < 0 ? ret : 0;
+}
+
+int bpf__probe(void)
+{
+ int err;
+
+ if (nr_probe_events <= 0)
+ return 0;
+
+ probe_conf.max_probes = MAX_PROBES;
+ /* Let add_perf_probe_events keep probe_trace_event */
+ err = add_perf_probe_events(probe_event_array,
+ nr_probe_events,
+ false);
+
+ /* add_perf_probe_events returns negative when it fails */
+ if (err < 0)
+ pr_err("bpf probe: failed to probe events\n");
+ else
+ is_probing = true;
+
+ return err < 0 ? err : 0;
+}
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index 5a3c954..374aec0 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -12,6 +12,8 @@
#ifdef HAVE_LIBBPF_SUPPORT
int bpf__prepare_load(const char *filename, bool source);
+int bpf__probe(void);
+int bpf__unprobe(void);
void bpf__clear(void);
#else
@@ -22,6 +24,8 @@ static inline int bpf__prepare_load(const char *filename __maybe_unused,
return -1;
}
+static inline int bpf__probe(void) { return 0; }
+static inline int bpf__unprobe(void) { return 0; }
static inline void bpf__clear(void) { }
#endif
#endif
--
1.8.3.4
This patch utilizes bpf_object__load(), provided by libbpf, to load all
objects into the kernel.
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/builtin-record.c | 12 ++++++++++++
tools/perf/util/bpf-loader.c | 19 +++++++++++++++++++
tools/perf/util/bpf-loader.h | 2 ++
3 files changed, 33 insertions(+)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 33b213a..6d943a7 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -1128,6 +1128,18 @@ int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
goto out_symbol_exit;
}
+ /*
+ * bpf__probe() also calls symbol__init() if there are probe
+ * events in bpf objects, so calling symbol__exit() on failure
+ * is safe. If there is no probe event, bpf__load() always
+ * succeeds.
+ */
+ err = bpf__load();
+ if (err) {
+ pr_err("Loading BPF programs failed\n");
+ goto out_symbol_exit;
+ }
+
symbol__init(NULL);
if (symbol_conf.kptr_restrict)
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index df3f471..f68ba33 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -241,3 +241,22 @@ int bpf__probe(void)
return err < 0 ? err : 0;
}
+
+int bpf__load(void)
+{
+ struct bpf_object *obj, *tmp;
+ int err = 0;
+
+ bpf_object__for_each_safe(obj, tmp) {
+ err = bpf_object__load(obj);
+ if (err) {
+ pr_err("bpf: load objects failed\n");
+ goto errout;
+ }
+ }
+ return 0;
+errout:
+ bpf_object__for_each_safe(obj, tmp)
+ bpf_object__unload(obj);
+ return err;
+}
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index 374aec0..ae0dc9b 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -14,6 +14,7 @@
int bpf__prepare_load(const char *filename, bool source);
int bpf__probe(void);
int bpf__unprobe(void);
+int bpf__load(void);
void bpf__clear(void);
#else
@@ -26,6 +27,7 @@ static inline int bpf__prepare_load(const char *filename __maybe_unused,
static inline int bpf__probe(void) { return 0; }
static inline int bpf__unprobe(void) { return 0; }
+static inline int bpf__load(void) { return 0; }
static inline void bpf__clear(void) { }
#endif
#endif
--
1.8.3.4
This patch adds a bpf_fd field to 'struct perf_evsel' and introduces a
method to configure it. In bpf-loader, a bpf__foreach_tev() function is
added, which calls a callback for each 'struct probe_trace_event' of each
bpf program, together with the program's file descriptor. In evlist.c,
perf_evlist__add_bpf() is introduced to add all bpf events to the evlist.
The event names are taken from the probe_trace_event structures.
'perf record' calls perf_evlist__add_bpf().
Since bpf-loader.c is not built when libbpf is turned off, an empty
bpf__foreach_tev() is defined in bpf-loader.h to avoid a compile error.
The loop iterates over 'struct probe_trace_event' instead of
'struct perf_probe_event' in preparation for further patches, which will
generate multiple instances from one BPF program and install them onto
different 'struct probe_trace_event' entries.
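A callback for this iterator could look roughly like this (sketch; the
callback name is only illustrative):

	static int show_tev(struct probe_trace_event *tev, int fd,
			    void *arg __maybe_unused)
	{
		pr_debug("bpf program fd %d is attached to %s:%s\n",
			 fd, tev->group, tev->event);
		return 0;
	}

	/* somewhere after bpf__load(): */
	err = bpf__foreach_tev(show_tev, NULL);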
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/builtin-record.c | 6 ++++++
tools/perf/util/bpf-loader.c | 42 ++++++++++++++++++++++++++++++++++++++++++
tools/perf/util/bpf-loader.h | 13 +++++++++++++
tools/perf/util/evlist.c | 41 +++++++++++++++++++++++++++++++++++++++++
tools/perf/util/evlist.h | 1 +
tools/perf/util/evsel.c | 1 +
tools/perf/util/evsel.h | 1 +
7 files changed, 105 insertions(+)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 6d943a7..bd189b1 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -1140,6 +1140,12 @@ int cmd_record(int argc, const char **argv, const char *prefix __maybe_unused)
goto out_symbol_exit;
}
+ err = perf_evlist__add_bpf(rec->evlist);
+ if (err < 0) {
+ pr_err("Failed to add events from BPF object(s)\n");
+ goto out_symbol_exit;
+ }
+
symbol__init(NULL);
if (symbol_conf.kptr_restrict)
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index f68ba33..63077b9 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -260,3 +260,45 @@ errout:
bpf_object__unload(obj);
return err;
}
+
+int bpf__foreach_tev(bpf_prog_iter_callback_t func, void *arg)
+{
+ struct bpf_object *obj, *tmp;
+ struct bpf_program *prog;
+ int err;
+
+ bpf_object__for_each_safe(obj, tmp) {
+ bpf_object__for_each_program(prog, obj) {
+ struct probe_trace_event *tev;
+ struct perf_probe_event *pev;
+ struct bpf_prog_priv *priv;
+ int i, fd;
+
+ err = bpf_program__get_private(prog,
+ (void **)&priv);
+ if (err || !priv) {
+ pr_err("bpf: failed to get private field\n");
+ return -EINVAL;
+ }
+
+ pev = priv->pev;
+ for (i = 0; i < pev->ntevs; i++) {
+ tev = &pev->tevs[i];
+
+ err = bpf_program__get_fd(prog, &fd);
+
+ if (err || fd < 0) {
+ pr_err("bpf: failed to get file descriptor\n");
+ return -EINVAL;
+ }
+ err = func(tev, fd, arg);
+ if (err) {
+ pr_err("bpf: call back failed, stop iterate\n");
+ return err;
+ }
+ }
+ }
+ }
+
+ return 0;
+}
diff --git a/tools/perf/util/bpf-loader.h b/tools/perf/util/bpf-loader.h
index ae0dc9b..ef9b3bb 100644
--- a/tools/perf/util/bpf-loader.h
+++ b/tools/perf/util/bpf-loader.h
@@ -6,10 +6,14 @@
#define __BPF_LOADER_H
#include <linux/compiler.h>
+#include "probe-event.h"
#include "debug.h"
#define PERF_BPF_PROBE_GROUP "perf_bpf_probe"
+typedef int (*bpf_prog_iter_callback_t)(struct probe_trace_event *tev,
+ int fd, void *arg);
+
#ifdef HAVE_LIBBPF_SUPPORT
int bpf__prepare_load(const char *filename, bool source);
int bpf__probe(void);
@@ -17,6 +21,8 @@ int bpf__unprobe(void);
int bpf__load(void);
void bpf__clear(void);
+
+int bpf__foreach_tev(bpf_prog_iter_callback_t func, void *arg);
#else
static inline int bpf__prepare_load(const char *filename __maybe_unused,
bool source __maybe_unused)
@@ -29,5 +35,12 @@ static inline int bpf__probe(void) { return 0; }
static inline int bpf__unprobe(void) { return 0; }
static inline int bpf__load(void) { return 0; }
static inline void bpf__clear(void) { }
+
+static inline int
+bpf__foreach_tev(bpf_prog_iter_callback_t func __maybe_unused,
+ void *arg __maybe_unused)
+{
+ return 0;
+}
#endif
#endif
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index f7d9c77..5a01c7f 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -14,6 +14,7 @@
#include "target.h"
#include "evlist.h"
#include "evsel.h"
+#include "bpf-loader.h"
#include "debug.h"
#include <unistd.h>
@@ -194,6 +195,46 @@ error:
return -ENOMEM;
}
+static int add_bpf_event(struct probe_trace_event *tev, int fd,
+ void *arg)
+{
+ struct perf_evlist *evlist = arg;
+ struct perf_evsel *pos;
+ struct list_head list;
+ int err, idx, entries;
+
+ pr_debug("add bpf event %s:%s and attach bpf program %d\n",
+ tev->group, tev->event, fd);
+ INIT_LIST_HEAD(&list);
+ idx = evlist->nr_entries;
+
+ pr_debug("adding %s:%s\n", tev->group, tev->event);
+ err = parse_events_add_tracepoint(&list, &idx, tev->group,
+ tev->event);
+ if (err) {
+ struct perf_evsel *evsel, *tmp;
+
+ pr_err("Failed to add BPF event %s:%s\n",
+ tev->group, tev->event);
+ list_for_each_entry_safe(evsel, tmp, &list, node) {
+ list_del(&evsel->node);
+ perf_evsel__delete(evsel);
+ }
+ return -EINVAL;
+ }
+
+ list_for_each_entry(pos, &list, node)
+ pos->bpf_fd = fd;
+ entries = idx - evlist->nr_entries;
+ perf_evlist__splice_list_tail(evlist, &list, entries);
+ return 0;
+}
+
+int perf_evlist__add_bpf(struct perf_evlist *evlist)
+{
+ return bpf__foreach_tev(add_bpf_event, evlist);
+}
+
static int perf_evlist__add_attrs(struct perf_evlist *evlist,
struct perf_event_attr *attrs, size_t nr_attrs)
{
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index 037633c..4e3b6b0 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -72,6 +72,7 @@ void perf_evlist__delete(struct perf_evlist *evlist);
void perf_evlist__add(struct perf_evlist *evlist, struct perf_evsel *entry);
int perf_evlist__add_default(struct perf_evlist *evlist);
+int perf_evlist__add_bpf(struct perf_evlist *evlist);
int __perf_evlist__add_default_attrs(struct perf_evlist *evlist,
struct perf_event_attr *attrs, size_t nr_attrs);
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 83c0803..fe80047 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -206,6 +206,7 @@ void perf_evsel__init(struct perf_evsel *evsel,
evsel->leader = evsel;
evsel->unit = "";
evsel->scale = 1.0;
+ evsel->bpf_fd = -1;
INIT_LIST_HEAD(&evsel->node);
perf_evsel__object.init(evsel);
evsel->sample_size = __perf_evsel__sample_size(attr->sample_type);
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index fe9f327..4bcd629 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -86,6 +86,7 @@ struct perf_evsel {
unsigned long *per_pkg_mask;
struct perf_evsel *leader;
char *group_name;
+ int bpf_fd;
};
union u64_swap {
--
1.8.3.4
This is the final patch which makes a basic BPF filter work. After
applying this patch, users can use a BPF filter like:
# perf record --event ./hello_world.c ls
In this patch the PERF_EVENT_IOC_SET_BPF ioctl is used to attach the eBPF
program to a newly created perf event. The file descriptor of the eBPF
program is passed to perf record by the previous patches and stored into
evsel->bpf_fd.
It is possible that different perf events are created for one kprobe event
on different CPUs. In this case, calling the ioctl fails with EEXIST. This
patch doesn't treat that as an error.
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/util/evsel.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index fe80047..73c17f3 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -1216,6 +1216,22 @@ retry_open:
err);
goto try_fallback;
}
+
+ if (evsel->bpf_fd >= 0) {
+ int evt_fd = FD(evsel, cpu, thread);
+ int bpf_fd = evsel->bpf_fd;
+
+ err = ioctl(evt_fd,
+ PERF_EVENT_IOC_SET_BPF,
+ bpf_fd);
+ if (err && errno != EEXIST) {
+ pr_err("failed to attach bpf fd %d: %s\n",
+ bpf_fd, strerror(errno));
+ err = -EINVAL;
+ goto out_close;
+ }
+ }
+
set_rlimit = NO_CHANGE;
/*
--
1.8.3.4
This patch suppresses the messages output by add_perf_probe_events() and
del_perf_probe_events() when they are triggered by BPF loading. Before
this patch, when using 'perf record' with a BPF object/source as the event
selector, the following messages were output:
Added new event:
perf_bpf_probe:lock_page_ret (on __lock_page%return)
You can now use it in all perf tools, such as:
perf record -e perf_bpf_probe:lock_page_ret -aR sleep 1
...
Removed event: perf_bpf_probe:lock_page_ret
This is misleading, especially 'use it in all perf tools', because those
events will be removed when 'perf record' exits.
In this patch, a 'silent' field is added to probe_conf to control the
output. bpf__{,un}probe() set it to true when calling
{add,del}_perf_probe_events().
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/util/bpf-loader.c | 6 ++++++
tools/perf/util/probe-event.c | 22 ++++++++++++++++------
tools/perf/util/probe-event.h | 1 +
3 files changed, 23 insertions(+), 6 deletions(-)
diff --git a/tools/perf/util/bpf-loader.c b/tools/perf/util/bpf-loader.c
index 63077b9..4aa372b 100644
--- a/tools/perf/util/bpf-loader.c
+++ b/tools/perf/util/bpf-loader.c
@@ -199,6 +199,7 @@ static bool is_probing;
int bpf__unprobe(void)
{
struct strfilter *delfilter;
+ bool old_silent = probe_conf.silent;
int ret;
if (!is_probing)
@@ -210,7 +211,9 @@ int bpf__unprobe(void)
return -ENOMEM;
}
+ probe_conf.silent = true;
ret = del_perf_probe_events(delfilter);
+ probe_conf.silent = old_silent;
strfilter__delete(delfilter);
if (ret < 0 && is_probing)
pr_err("Error: failed to delete events: %s\n",
@@ -223,15 +226,18 @@ int bpf__unprobe(void)
int bpf__probe(void)
{
int err;
+ bool old_silent = probe_conf.silent;
if (nr_probe_events <= 0)
return 0;
+ probe_conf.silent = true;
probe_conf.max_probes = MAX_PROBES;
	/* Let add_perf_probe_events keep probe_trace_event */
err = add_perf_probe_events(probe_event_array,
nr_probe_events,
false);
+ probe_conf.silent = old_silent;
	/* add_perf_probe_events returns negative when it fails */
if (err < 0)
diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index 083e8b4..b9573c5 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -51,7 +51,9 @@
#define PERFPROBE_GROUP "probe"
bool probe_event_dry_run; /* Dry run flag */
-struct probe_conf probe_conf;
+struct probe_conf probe_conf = {
+ .silent = false,
+};
#define semantic_error(msg ...) pr_err("Semantic error :" msg)
@@ -2250,10 +2252,12 @@ static int show_perf_probe_event(const char *group, const char *event,
ret = perf_probe_event__sprintf(group, event, pev, module, &buf);
if (ret >= 0) {
- if (use_stdout)
+ if (use_stdout && !probe_conf.silent)
printf("%s\n", buf.buf);
- else
+ else if (!probe_conf.silent)
pr_info("%s\n", buf.buf);
+ else
+ pr_debug("%s\n", buf.buf);
}
strbuf_release(&buf);
@@ -2512,7 +2516,10 @@ static int __add_probe_trace_events(struct perf_probe_event *pev,
safename = (pev->point.function && !strisglob(pev->point.function));
ret = 0;
- pr_info("Added new event%s\n", (ntevs > 1) ? "s:" : ":");
+ if (!probe_conf.silent)
+ pr_info("Added new event%s\n", (ntevs > 1) ? "s:" : ":");
+ else
+ pr_debug("Added new event%s\n", (ntevs > 1) ? "s:" : ":");
for (i = 0; i < ntevs; i++) {
tev = &tevs[i];
/* Skip if the symbol is out of .text or blacklisted */
@@ -2569,7 +2576,7 @@ static int __add_probe_trace_events(struct perf_probe_event *pev,
warn_uprobe_event_compat(tev);
/* Note that it is possible to skip all events because of blacklist */
- if (ret >= 0 && event) {
+ if (ret >= 0 && event && !probe_conf.silent) {
/* Show how to use the event. */
pr_info("\nYou can now use it in all perf tools, such as:\n\n");
pr_info("\tperf record -e %s:%s -aR sleep 1\n\n", group, event);
@@ -2865,7 +2872,10 @@ static int __del_trace_probe_event(int fd, struct str_node *ent)
goto error;
}
- pr_info("Removed event: %s\n", ent->s);
+ if (!probe_conf.silent)
+ pr_info("Removed event: %s\n", ent->s);
+ else
+ pr_debug("Removed event: %s\n", ent->s);
return 0;
error:
pr_warning("Failed to delete event: %s\n",
diff --git a/tools/perf/util/probe-event.h b/tools/perf/util/probe-event.h
index 4b7a951..116b0aa 100644
--- a/tools/perf/util/probe-event.h
+++ b/tools/perf/util/probe-event.h
@@ -13,6 +13,7 @@ struct probe_conf {
bool force_add;
bool no_inlines;
int max_probes;
+ bool silent;
};
extern struct probe_conf probe_conf;
extern bool probe_event_dry_run;
--
1.8.3.4
Although the previous patch allows setting BPF compiler related options in
perfconfig, in some ad-hoc situations it is still convenient to pass
options on the command line. This patch introduces two options to
'perf record' for this purpose: --clang-path and --clang-opt.
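For example (the clang path and the extra option are only illustrative):

	# perf record --clang-path=/opt/llvm/bin/clang \
		      --clang-opt=-Wno-unused-value \
		      --event ./hello_world.c ls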
Signed-off-by: Wang Nan <[email protected]>
---
tools/perf/builtin-record.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index bd189b1..e89c045 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -30,6 +30,7 @@
#include "util/auxtrace.h"
#include "util/parse-branch-options.h"
#include "util/bpf-loader.h"
+#include "util/llvm-utils.h"
#include <unistd.h>
#include <sched.h>
@@ -1073,6 +1074,12 @@ struct option __record_options[] = {
"opts", "AUX area tracing Snapshot Mode", ""),
OPT_UINTEGER(0, "proc-map-timeout", &record.opts.proc_map_timeout,
"per thread proc mmap processing timeout in ms"),
+#ifdef HAVE_LIBBPF_SUPPORT
+ OPT_STRING(0, "clang-path", &llvm_param.clang_path, "clang path",
+ "clang binary to use for compiling BPF scriptlets"),
+ OPT_STRING(0, "clang-opt", &llvm_param.clang_opt, "clang options",
+ "options passed to clang when compiling BPF scriptlets"),
+#endif
OPT_END()
};
--
1.8.3.4
In this patch, a caller of libbpf is able to control the loaded programs
by installing a preprocessor callback on a BPF program. With a
preprocessor, different instances can be created from one BPF program.
This will be used by perf to generate a different prologue for each
'struct probe_trace_event' instance matched by one
'struct perf_probe_event'.
bpf_program__set_prep() is added to support this. The caller passes libbpf
the number of instances to be created and a preprocessor function which is
called when the real loading is done. The callback returns an instruction
array for each instance.
nr_instance and instance_fds are added to struct bpf_program to support
multiple instances. bpf_program__get_nth_fd() is introduced to read the fd
of an instance. The old interface bpf_program__get_fd() won't work for a
program which has a preprocessor hooked.
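A preprocessor that simply reuses the original instructions for every
instance could look roughly like this (sketch; only the API added by this
patch is used, the callback name and instance count are arbitrary):

	static int passthrough_prep(struct bpf_program *prog, int n,
				    struct bpf_insn *insns, int insns_cnt,
				    struct bpf_prog_prep_result *res)
	{
		res->new_insn_ptr = insns;	/* keep the original instructions */
		res->new_insn_cnt = insns_cnt;
		res->pfd = NULL;		/* fds are read later */
		return 0;
	}

	/* create 4 instances of 'prog'; each is loaded separately */
	err = bpf_program__set_prep(prog, 4, passthrough_prep);

	/* after bpf_object__load(), fetch the fd of the n-th instance */
	err = bpf_program__get_nth_fd(prog, 0, &fd);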
Signed-off-by: Wang Nan <[email protected]>
Signed-off-by: He Kuang <[email protected]>
---
tools/lib/bpf/libbpf.c | 135 +++++++++++++++++++++++++++++++++++++++++++++++--
tools/lib/bpf/libbpf.h | 22 ++++++++
2 files changed, 152 insertions(+), 5 deletions(-)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 83a3b06..d467fcb 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -100,6 +100,10 @@ struct bpf_program {
int fd;
+ int nr_instance;
+ int *instance_fds;
+ bpf_program_prep_t preprocessor;
+
struct bpf_object *obj;
void *priv;
bpf_program_clear_priv_t clear_priv;
@@ -156,6 +160,19 @@ static void bpf_program__unload(struct bpf_program *prog)
return;
zclose(prog->fd);
+
+ if (prog->preprocessor) {
+ int i;
+
+ if (prog->nr_instance <= 0)
+ pr_warning("Internal error when unloading: instance is %d\n",
+ prog->nr_instance);
+ else
+ for (i = 0; i < prog->nr_instance; i++)
+ zclose(prog->instance_fds[i]);
+ prog->nr_instance = -1;
+ zfree(&prog->instance_fds);
+ }
}
static void bpf_program__clear(struct bpf_program *prog)
@@ -207,6 +224,8 @@ __bpf_program__new(void *data, size_t size, char *name, int idx,
prog->insns_cnt * sizeof(struct bpf_insn));
prog->idx = idx;
prog->fd = -1;
+ prog->nr_instance = -1;
+ prog->instance_fds = NULL;
return 0;
errout:
@@ -798,13 +817,57 @@ static int
bpf_program__load(struct bpf_program *prog,
char *license, u32 kern_version)
{
- int err, fd;
+ int err = 0, fd, i;
+
+ if (!prog->preprocessor) {
+ err = load_program(prog->insns, prog->insns_cnt,
+ license, kern_version, &fd);
+ if (!err)
+ prog->fd = fd;
+ goto out;
+ }
- err = load_program(prog->insns, prog->insns_cnt,
- license, kern_version, &fd);
- if (!err)
- prog->fd = fd;
+ if (prog->nr_instance <= 0 || !prog->instance_fds) {
+ pr_warning("Internal error when loading '%s'\n",
+ prog->section_name);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < prog->nr_instance; i++) {
+ struct bpf_prog_prep_result result;
+ bpf_program_prep_t preprocessor = prog->preprocessor;
+
+ bzero(&result, sizeof(result));
+ err = preprocessor(prog, i, prog->insns,
+ prog->insns_cnt, &result);
+ if (err) {
+ pr_warning("Preprocessing %dth instance of program '%s' failed\n",
+ i, prog->section_name);
+ goto out;
+ }
+
+ if (!result.new_insn_ptr || !result.new_insn_cnt) {
+ pr_debug("Skip loading %dth instance of program '%s'\n",
+ i, prog->section_name);
+ prog->instance_fds[i] = -1;
+ continue;
+ }
+ err = load_program(result.new_insn_ptr,
+ result.new_insn_cnt,
+ license, kern_version, &fd);
+
+ if (err) {
+ pr_warning("Loading %dth instance of program '%s' failed\n",
+ i, prog->section_name);
+ goto out;
+ }
+
+ if (result.pfd)
+ *result.pfd = fd;
+ prog->instance_fds[i] = fd;
+ }
+out:
if (err)
pr_warning("failed to load program '%s'\n",
prog->section_name);
@@ -1054,6 +1117,68 @@ int bpf_program__get_fd(struct bpf_program *prog, int *pfd)
if (!pfd)
return -EINVAL;
+ if (prog->preprocessor) {
+ pr_warning("Invalid using bpf_program__get_fd() on program %s\n",
+ prog->section_name);
+ return -EINVAL;
+ }
+
*pfd = prog->fd;
return 0;
}
+
+int bpf_program__set_prep(struct bpf_program *prog, int nr_instance,
+ bpf_program_prep_t prep)
+{
+ int *instance_fds;
+
+ if (nr_instance <= 0 || !prep)
+ return -EINVAL;
+
+ instance_fds = malloc(sizeof(int) * nr_instance);
+ if (!instance_fds) {
+ pr_warning("alloc memory failed for instance of fds\n");
+ return -ENOMEM;
+ }
+
+ /* fill all fd with -1 */
+ memset(instance_fds, 0xff, sizeof(int) * nr_instance);
+
+ prog->nr_instance = nr_instance;
+ prog->instance_fds = instance_fds;
+ prog->preprocessor = prep;
+ return 0;
+}
+
+int bpf_program__get_nth_fd(struct bpf_program *prog, int n, int *pfd)
+{
+ int fd;
+
+ if (!pfd)
+ return -EINVAL;
+
+ if (!prog->preprocessor ||
+ prog->nr_instance <= 0 ||
+ !prog->instance_fds) {
+
+ pr_warning("Invalid using bpf_program__get_nth_fd() on program %s\n",
+ prog->section_name);
+ return -EINVAL;
+ }
+
+ if (n >= prog->nr_instance || n < 0) {
+ pr_warning("Can't get the %dth fd from program %s: only %d instances\n",
+ n, prog->section_name, prog->nr_instance);
+ return -EINVAL;
+ }
+
+ fd = prog->instance_fds[n];
+ if (fd < 0) {
+ pr_warning("%dth instance of program '%s' is invalid\n",
+ n, prog->section_name);
+ return -EEXIST;
+ }
+
+ *pfd = fd;
+ return 0;
+}
diff --git a/tools/lib/bpf/libbpf.h b/tools/lib/bpf/libbpf.h
index 1643b57..8c2dc74 100644
--- a/tools/lib/bpf/libbpf.h
+++ b/tools/lib/bpf/libbpf.h
@@ -69,6 +69,28 @@ int bpf_program__get_title(struct bpf_program *prog,
int bpf_program__get_fd(struct bpf_program *prog, int *pfd);
+struct bpf_insn;
+struct bpf_prog_prep_result {
+ /*
+ * If not NULL, load new instruction array.
+ * If set to NULL, don't load this instance.
+ */
+ struct bpf_insn *new_insn_ptr;
+ int new_insn_cnt;
+
+ /* If not NULL, result fd is set to it */
+ int *pfd;
+};
+
+typedef int (*bpf_program_prep_t)(struct bpf_program *, int n,
+ struct bpf_insn *, int insn_cnt,
+ struct bpf_prog_prep_result *res);
+
+int bpf_program__set_prep(struct bpf_program *prog, int nr_instance,
+ bpf_program_prep_t prep);
+
+int bpf_program__get_nth_fd(struct bpf_program *prog, int n, int *pfd);
+
/*
* We don't need __attribute__((packed)) now since it is
* unnecessary for 'bpf_map_def' because they are all aligned.
--
1.8.3.4
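For reference, a minimal sketch (not part of the patch) of how a caller
might drive the preprocessor API added above. The two static helper names
below are illustrative only; the object open/load calls from earlier
patches in this series are elided:

#include "libbpf.h"

/* Preprocessor callback: instance 0 keeps the original instructions,
 * instance 1 is skipped (new_insn_ptr == NULL means "don't load"). */
static int prep_two_instances(struct bpf_program *prog, int n,
			      struct bpf_insn *insns, int insn_cnt,
			      struct bpf_prog_prep_result *res)
{
	if (n == 1) {
		res->new_insn_ptr = NULL;
		res->new_insn_cnt = 0;
		res->pfd = NULL;
		return 0;
	}
	res->new_insn_ptr = insns;
	res->new_insn_cnt = insn_cnt;
	res->pfd = NULL;
	return 0;
}

static int load_and_fetch_fd(struct bpf_program *prog)
{
	int err, fd;

	err = bpf_program__set_prep(prog, 2, prep_two_instances);
	if (err)
		return err;

	/* ... the object is loaded here, which invokes prep_two_instances()
	 * once per instance ... */

	err = bpf_program__get_nth_fd(prog, 0, &fd);
	if (err)
		return err;
	/* fd now refers to the 0th loaded instance of this program */
	return 0;
}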
Em Wed, Jul 08, 2015 at 01:13:49PM +0000, Wang Nan escreveu:
> Hi Arnaldo Carvalho de Melo,
Hi Wang (hope this shorter form is ok in your country, calling me just
"Arnaldo" is fine in mine :-))
> I rearranged the first 39 patches of this patchset according to
> your comments. After applying all of them you can use a hello world
> BPF program for testing. They are based on your 'tmp.perf/ebpf', commit
> 60cd37eb100c4880b28078a47f3062fac7572095.
> I hope I can manage a public avaliable git repository for you
> tomorrow (tomorrow means 24 hours later). What about a repository on
> github? However I have to do this out of my office because of company's
> IT policy.
Why not ask the kernel.org admins for a:
git://git.kernel.org/pub/scm/linux/kernel/git/wangnan0/linux.git
Area?
> In this v11 you can see following improvements:
>
> Commit messages improvements:
> 'bpf tools: Collect symbol table from SHT_SYMTAB section'
> 'bpf tools: Collect relocation sections from SHT_REL sections'
> 'bpf tools: Record map accessing instructions for each program'
> 'bpf tools: Relocate eBPF programs'
> 'bpf tools: Link all bpf objects onto a list'
>
> Decoupling:
> 'bpf tools: Collect eBPF programs from their own sections'
> 'bpf tools: Introduce accessors for struct bpf_program'
>
> Renaming: bpf_object__for_each -> bpf_object__for_each_safe
> 'bpf tools: Link all bpf objects onto a list'
>
> Patch ordering:
> 'perf tools: Make perf depend on libbpf'
>
> Error message improvement (refer to http://llvm.org/apt):
> 'perf tools: Call clang to compile C source to object code'
>
> In this v11 part 1 patch set, I haven't follow your comment in
> 'bpf tools: Introduce accessors for struct bpf_object' that let me
> update accessors API from returning error code to returning actual
> value and indicate error using invalid values. I prefer current API
> because I saw and fixed many bugs related to error code in perf's
> code (like commit ed30775).
> Reason of those bugs are misusing of error code: some part of code
> return negative on error, some part of code return non-zero on error,
> and developer forgot them. I don't want libbpf to introduce more bugs
> like them. But if you insist on it, I'll change it.
If you don't follow the chosen convention, bugs appear.
And the convention of returning < 0 for errors and >= 0 for success is
common, just see the libc wrappers for syscalls, see the open, read,
write man pages, etc, that is an ooooold convention :-)
And those wrappers struck me as exaggerated, see one of them:
int bpf_program__get_fd(struct bpf_program *prog, int *pfd)
{
	if (!pfd)
		return -EINVAL;

	*pfd = prog->fd;
	return 0;
}
What can go wrong with accessing a struct member? The only thing I
thought about was: hey, the struct pointer needs to be checked against
NULL, but no, in this case what you thought could go wrong was for the
library user to pass a NULL pointer as the return place (pfd).
So, yes, I still think this is way exaggerated, if you insist that the
struct must be opaque and thus we need accessors, I think that having:
int bpf_program__fd(struct bpf_program *prog)
{
	return prog->fd;
}
Is way more sane, yes, I would throw away those extra four characters
(get_).
Heck, in this case there is not even a possible problem where we would
want to return something negative instead of doing what was requested.
If you find any other part in tools/perf/ (or anywhere else) that
doesn't follow the convention it states it conforms to, please file a
bug or submit a patch, like you did in the case you mentioned (ed30775),
it would be a bug and has to be fixed.
- Arnaldo
> Wang Nan (39):
> bpf: Use correct #ifdef controller for trace_call_bpf()
> tracing, perf: Implement BPF programs attached to uprobes
> bpf tools: Introduce 'bpf' library and add bpf feature check
> bpf tools: Allow caller to set printing function
> bpf tools: Open eBPF object file and do basic validation
> bpf tools: Read eBPF object from buffer
> bpf tools: Check endianness and make libbpf fail early
> bpf tools: Iterate over ELF sections to collect information
> bpf tools: Collect version and license from ELF sections
> bpf tools: Collect map definitions from 'maps' section
> bpf tools: Collect symbol table from SHT_SYMTAB section
> bpf tools: Collect eBPF programs from their own sections
> bpf tools: Collect relocation sections from SHT_REL sections
> bpf tools: Record map accessing instructions for each program
> bpf tools: Add bpf.c/h for common bpf operations
> bpf tools: Create eBPF maps defined in an object file
> bpf tools: Relocate eBPF programs
> bpf tools: Introduce bpf_load_program() to bpf.c
> bpf tools: Load eBPF programs in object files into kernel
> bpf tools: Introduce accessors for struct bpf_program
> bpf tools: Introduce accessors for struct bpf_object
> bpf tools: Link all bpf objects onto a list
> perf tools: Introduce llvm config options
> perf tools: Call clang to compile C source to object code
> perf tools: Auto detecting kernel build directory
> perf tools: Auto detecting kernel include options
> perf tests: Add LLVM test for eBPF on-the-fly compiling
> perf tools: Make perf depend on libbpf
> perf record: Enable passing bpf object file to --event
> perf record: Compile scriptlets if pass '.c' to --event
> perf tools: Parse probe points of eBPF programs during preparation
> perf probe: Attach trace_probe_event with perf_probe_event
> perf record: Probe at kprobe points
> perf record: Load all eBPF object into kernel
> perf tools: Add bpf_fd field to evsel and config it
> perf tools: Attach eBPF program to perf event
> perf tools: Suppress probing messages when probing by BPF loading
> perf record: Add clang options for compiling BPF scripts
> bpf tools: Load a program with different instance using preprocessor
>
> include/linux/trace_events.h | 7 +-
> kernel/events/core.c | 4 +-
> kernel/trace/Kconfig | 2 +-
> kernel/trace/trace_uprobe.c | 5 +
> tools/build/Makefile.feature | 6 +-
> tools/build/feature/Makefile | 6 +-
> tools/build/feature/test-bpf.c | 18 +
> tools/lib/bpf/.gitignore | 2 +
> tools/lib/bpf/Build | 1 +
> tools/lib/bpf/Makefile | 195 +++++++
> tools/lib/bpf/bpf.c | 85 +++
> tools/lib/bpf/bpf.h | 23 +
> tools/lib/bpf/libbpf.c | 1184 +++++++++++++++++++++++++++++++++++++++
> tools/lib/bpf/libbpf.h | 107 ++++
> tools/perf/MANIFEST | 3 +
> tools/perf/Makefile.perf | 19 +-
> tools/perf/builtin-probe.c | 4 +-
> tools/perf/builtin-record.c | 43 +-
> tools/perf/config/Makefile | 19 +-
> tools/perf/tests/Build | 1 +
> tools/perf/tests/builtin-test.c | 4 +
> tools/perf/tests/llvm.c | 85 +++
> tools/perf/tests/make | 4 +-
> tools/perf/tests/tests.h | 1 +
> tools/perf/util/Build | 2 +
> tools/perf/util/bpf-loader.c | 310 ++++++++++
> tools/perf/util/bpf-loader.h | 46 ++
> tools/perf/util/config.c | 4 +
> tools/perf/util/debug.c | 5 +
> tools/perf/util/debug.h | 1 +
> tools/perf/util/evlist.c | 41 ++
> tools/perf/util/evlist.h | 1 +
> tools/perf/util/evsel.c | 17 +
> tools/perf/util/evsel.h | 1 +
> tools/perf/util/llvm-utils.c | 373 ++++++++++++
> tools/perf/util/llvm-utils.h | 39 ++
> tools/perf/util/parse-events.c | 16 +
> tools/perf/util/parse-events.h | 2 +
> tools/perf/util/parse-events.l | 6 +
> tools/perf/util/parse-events.y | 29 +-
> tools/perf/util/probe-event.c | 82 +--
> tools/perf/util/probe-event.h | 7 +-
> 42 files changed, 2759 insertions(+), 51 deletions(-)
> create mode 100644 tools/build/feature/test-bpf.c
> create mode 100644 tools/lib/bpf/.gitignore
> create mode 100644 tools/lib/bpf/Build
> create mode 100644 tools/lib/bpf/Makefile
> create mode 100644 tools/lib/bpf/bpf.c
> create mode 100644 tools/lib/bpf/bpf.h
> create mode 100644 tools/lib/bpf/libbpf.c
> create mode 100644 tools/lib/bpf/libbpf.h
> create mode 100644 tools/perf/tests/llvm.c
> create mode 100644 tools/perf/util/bpf-loader.c
> create mode 100644 tools/perf/util/bpf-loader.h
> create mode 100644 tools/perf/util/llvm-utils.c
> create mode 100644 tools/perf/util/llvm-utils.h
>
> --
> 1.8.3.4
Sent from my iPhone
> On Jul 8, 2015, at 10:03 PM, Arnaldo Carvalho de Melo <[email protected]> wrote:
>
> Em Wed, Jul 08, 2015 at 01:13:49PM +0000, Wang Nan escreveu:
>> Hi Arnaldo Carvalho de Melo,
>
> Hi Wang (hope this shorter form is ok on your country, calling me just
> "Arnaldo" is fine in mine :-))
>
>> I rearranged the first 39 patches of this patchset according to
>> your comments. After applying all of them you can use a hello world
>> BPF program for testing. They are based on your 'tmp.perf/ebpf', commit
>> 60cd37eb100c4880b28078a47f3062fac7572095.
>
>> I hope I can manage a public avaliable git repository for you
>> tomorrow (tomorrow means 24 hours later). What about a repository on
>> github? However I have to do this out of my office because of company's
>> IT policy.
>
> Why not ask the kernel.org admins for a:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/wangnan0/linux.git
>
> Area?
>
Is that possible for me? Could you please provide further instructions (or a web page describing them) so I can follow them to submit my application?
>> In this v11 you can see following improvements:
>>
>> Commit messages improvements:
>> 'bpf tools: Collect symbol table from SHT_SYMTAB section'
>> 'bpf tools: Collect relocation sections from SHT_REL sections'
>> 'bpf tools: Record map accessing instructions for each program'
>> 'bpf tools: Relocate eBPF programs'
>> 'bpf tools: Link all bpf objects onto a list'
>>
>> Decoupling:
>> 'bpf tools: Collect eBPF programs from their own sections'
>> 'bpf tools: Introduce accessors for struct bpf_program'
>>
>> Renaming: bpf_object__for_each -> bpf_object__for_each_safe
>> 'bpf tools: Link all bpf objects onto a list'
>>
>> Patch ordering:
>> 'perf tools: Make perf depend on libbpf'
>>
>> Error message improvement (refer to http://llvm.org/apt):
>> 'perf tools: Call clang to compile C source to object code'
>>
>> In this v11 part 1 patch set, I haven't follow your comment in
>> 'bpf tools: Introduce accessors for struct bpf_object' that let me
>> update accessors API from returning error code to returning actual
>> value and indicate error using invalid values. I prefer current API
>> because I saw and fixed many bugs related to error code in perf's
>> code (like commit ed30775).
>
>> Reason of those bugs are misusing of error code: some part of code
>> return negative on error, some part of code return non-zero on error,
>> and developer forgot them. I don't want libbpf to introduce more bugs
>> like them. But if you insist on it, I'll change it.
>
> If you don't follow the chosen convention, bugs appear.
>
> And the convention of returning < 0 for errors and >= 0 for success is
> common, just see the libc wrappers for syscalls, see the open, read,
> write man pages, etc, that is an ooooold convention :-)
>
> And those wrappers struck me as exaggerated, see one of them:
>
> int bpf_program__get_fd(struct bpf_program *prog, int *pfd)
> {
> if (!pfd)
> return -EINVAL;
>
> *pfd = prog->fd;
> return 0;
> }
>
> What can go wrong with accessing a struct member? The only thing I
> thought about was: hey, the struct pointer needs to be checked against
> NULL, but no, in this case what you thought could go wrong was for the
> library user to pass a NULL pointer as the return place (pfd).
>
OK, I will change it. You won't see it in the next version.
However, for the specific function you mentioned, please see patch 39/39 in this patchset. It shows that accessing a struct member can go wrong in ways that were unpredictable when I started working on this patch.
When I started working on this patch I thought this function could go wrong in only two obvious situations: 1: the caller feeds a NULL pointer; 2: the caller tries to get the file descriptor before loading that program. However, when we started working on the BPF prologue generator we realized that we must provide a way for one program to be loaded multiple times with different prologues. Patch 39/39 is the winner among many ideas we discussed and tried for weeks: it provides the desired functionality while not impacting the old code (new code for you) too much. In that patch we allow the caller to attach a preprocessor to a program, and require them to call a new function, bpf_program__get_nth_fd(), to get the nth descriptor. The semantics of the old bpf_program__get_fd() become hard to define, so we simply disallow calling it if preprocessing is required. This is how a new source of error arose during development.
Thank you.
> So, yes, I still think this is way exaggerated, if you insist that the
> struct must be opaque and thus we need accessors, I think that having:
>
> int bpf_program__fd(struct bpf_program *prog)
> {
> return prog->fd;
> }
>
> Is way more sane, yes, I would throw away those extra four characters
> (get_).
>
> Heck, in this case there is not even a possible problem where we would
> want to return something negative instead of doing what was requested.
>
> If you find any other part in tools/perf/ (or anywhere else) that
> doesn't follow the convention it states it conforms to, please file a
> bug or submit a patch, like you did in the case you mentioned (ed30775),
> it would be a bug and has to be fixed.
>
> - Arnaldo
Em Wed, Jul 08, 2015 at 11:57:13PM +0800, pi3orama escreveu:
> Sent from my iPhone
> > On Jul 8, 2015, at 10:03 PM, Arnaldo Carvalho de Melo <[email protected]> wrote:
> > And those wrappers struck me as exaggerated, see one of them:
> > int bpf_program__get_fd(struct bpf_program *prog, int *pfd)
> > {
> > if (!pfd)
> > return -EINVAL;
> >
> > *pfd = prog->fd;
> > return 0;
> > }
> > What can go wrong with accessing a struct member? The only thing I
> > thought about was: hey, the struct pointer needs to be checked against
> > NULL, but no, in this case what you thought could go wrong was for the
> > library user to pass a NULL pointer as the return place (pfd).
> >
>
> OK, I will change it. You won't see it in the next version.
> However, for the specific function you mentioned, please see patch
> 39/39 in this patchset. It shows that accessing a struct member can
> go wrong in ways that were unpredictable when I started working on
> this patch.
So, I haven't got there yet, i.e. I need to understand a patch by what
is in the parts of the code it changes or adds or by what is explained
in its changeset.
> When I started working on this patch I thought this function could
> go wrong in only two obvious situations: 1: the caller feeds a NULL
> pointer; 2: the caller tries to get the file descriptor before
> loading that program.
> However, when we started working on the BPF prologue generator we
> realized that we must provide a way for one program to be loaded
> multiple times with different prologues.
> Patch 39/39 is the winner among many ideas we discussed and tried
> for weeks: it provides the desired functionality while not impacting
> the old code (new code for you) too much.
I understand that this was discussed somewhere, but we need to have the
relevant context for a specific patch in its changelog comments or in
comments added in that specific patch, otherwise we will have to look at
multiple messages discussing it, filter out dead ends, etc.
> In that patch we allow the caller to attach a preprocessor to a
> program, and require them to call a new function,
> bpf_program__get_nth_fd(), to get the nth descriptor.
> The semantics of the old bpf_program__get_fd() become hard to define,
> so we simply disallow calling it if preprocessing is required. This
> is how a new source of error arose during development.
Ok, but then that other type of error can be returned as a negative
value:
int bpf_program__get_fd(struct bpf_program *prog)
{
	int ret = -EINVAL;

	if (prog == NULL)
		goto out;
	ret = -EBADFD;
	if (not yet opened)
		goto out;
	ret = -ESOMETHINGELSE;
	if (something else)
		goto out;
	/* More logic */
	ret = prog->fd;
out:
	return ret;
}

int some_other_func(struct bpf_program *prog)
{
	int fd = bpf_program__get_fd(prog);

	if (fd < 0)
		goto out_err;
	return 0;
out_err:
	printf("Something went wrong: %s\n", strerror(-fd));
	return fd;
}
Looking at the users, i.e. going to the future to figure out the current
patch we are talking about...
Oops:
[PATCH v11 35/39] perf tools: Add bpf_fd field to evsel and config it
+ err = bpf_program__get_fd(prog, &fd);
+
+ if (err || fd < 0) {
+ pr_err("bpf: failed to get file descriptor\n");
+ return -EINVAL;
+ }
Here you use _both_ the error return (and don't even check whether it is
less than zero) and, when it is zero, i.e. the operation went well, that
is still not enough: you have to look at the value of the fd anyway?
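Concretely, a sketch (the function name is made up, standing in for the
evsel config code) of how that call site could look if the accessor
simply returned the fd, negative on error, as argued above:

static int config_evsel_bpf_fd(struct bpf_program *prog)
{
	int fd = bpf_program__get_fd(prog);

	if (fd < 0) {
		pr_err("bpf: failed to get file descriptor\n");
		return fd;
	}
	/* ... store fd wherever the evsel keeps it ... */
	return 0;
}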
And 39/39:
It just checks prog->preprocessor and, if not set, returns -EINVAL as
well. I.e. its callers will have to continue checking for errors _and_
for the fd value returned even when it returns zero, I can say that this
is confusing :-\
Reading 39/39, I think you could have just:
struct bpf_program {
	struct {
		int nr;
		int *fds;
	} instance;
};
Then _always_ use that prog->instance.fds[] thing, it starts with 1
entry, i.e. what you call now prog->fd is really prog->instance.fds[0],
this way you don't have to have two functions (get_fd and get_nth_fd) to
access the fds associated with a bpf_program, just one, that receives an
index.
Or you could make:
static int bpf_program__get_fd(struct bpf_program *prog)
{
	return bpf_program__get_nth_fd(prog, 0);
}
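And, purely as a sketch assuming the 'instance' layout above, the single
indexed accessor could itself follow the <0-on-error convention:

int bpf_program__get_nth_fd(struct bpf_program *prog, int n)
{
	if (prog == NULL || n < 0 || n >= prog->instance.nr)
		return -EINVAL;
	if (prog->instance.fds[n] < 0)
		return -ENOENT;	/* that instance was skipped or not loaded */
	return prog->instance.fds[n];
}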
But now I don't even know if this all makes sense: this is the last
patch in the series, and I can't see a user of this preprocessor, to see
if this couldn't be done in some other way :-\
I'll continue looking at the patches a bit more...
- Arnaldo
> Thank you.
>
> > So, yes, I still think this is way exaggerated, if you insist that the
> > struct must be opaque and thus we need accessors, I think that having:
> >
> > int bpf_program__fd(struct bpf_program *prog)
> > {
> > return prog->fd;
> > }
> >
> > Is way more sane, yes, I would throw away those extra four characters
> > (get_).
> >
> > Heck, in this case there is not even a possible problem where we would
> > want to return something negative instead of doing what was requested.
> >
> > If you find any other part in tools/perf/ (or anywhere else) that
> > doesn't follow the convention it states it conforms to, please file a
> > bug or submit a patch, like you did in the case you mentioned (ed30775),
> > it would be a bug and has to be fixed.
> >
> > - Arnaldo
Em Wed, Jul 08, 2015 at 01:14:17PM +0000, Wang Nan escreveu:
> By adding libbpf into perf's Makefile, this patch enables perf to build
> libbpf during building if libelf is found and neither NO_LIBELF nor
> NO_LIBBPF is set. The newly introduced code is similar to libapi and
> libtraceevent building in Makefile.perf.
>
> MANIFEST is also updated for 'make perf-*-src-pkg'.
>
> Append make_no_libbpf to tools/perf/tests/make.
At this point this is also handy, to make ctags happy and allow us to
navigate the libbpf files when working on perf, using tools/perf/tags
diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index 7a4b549214e3..d4058dbb836d 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -439,7 +439,7 @@ INSTALL_DOC_TARGETS += quick-install-doc quick-install-man quick-install-html
$(DOC_TARGETS):
$(QUIET_SUBDIR0)Documentation $(QUIET_SUBDIR1) $(@:doc=all)
-TAG_FOLDERS= . ../lib/traceevent ../lib/api ../lib/symbol
+TAG_FOLDERS= . ../lib/traceevent ../lib/api ../lib/symbol ../lib/bpf
TAG_FILES= ../../include/uapi/linux/perf_event.h
TAGS:
Em Wed, Jul 08, 2015 at 11:57:13PM +0800, pi3orama escreveu:
> Sent from my iPhone
> > On Jul 8, 2015, at 10:03 PM, Arnaldo Carvalho de Melo <[email protected]> wrote:
> > Em Wed, Jul 08, 2015 at 01:13:49PM +0000, Wang Nan escreveu:
> >> I rearranged the first 39 patches of this patchset according to
> >> your comments. After applying all of them you can use a hello world
> >> BPF program for testing. They are based on your 'tmp.perf/ebpf', commit
> >> 60cd37eb100c4880b28078a47f3062fac7572095.
> >> I hope I can manage a public avaliable git repository for you
> >> tomorrow (tomorrow means 24 hours later). What about a repository on
> >> github? However I have to do this out of my office because of company's
> >> IT policy.
> > Why not ask the kernel.org admins for a:
> > git://git.kernel.org/pub/scm/linux/kernel/git/wangnan0/linux.git
So, please send any further patches on top of my perf/ebpf branch, which
I ask you to check to see if you find any problems; those I think are
good to go, and I intend to go on putting more patches there that we've
all agreed on, to then push to Ingo from that branch.
I am doing some changes in the patch right after that, will put it in
another branch and post the changes with my reasoning about those
changes.
For reference:
https://git.kernel.org/cgit/linux/kernel/git/acme/linux.git/commit/?h=perf/ebpf
git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git perf/ebpf
- Arnaldo