Changes v4 => v5:
1. Modify perf_snapshot_branch_stack_t to save some memcpy. (Andrii)
2. Minor fixes in selftests. (Andrii)
Changes v3 => v4:
1. Do not reshuffle intel_pmu_disable_all(). Use some inline to save LBR
entries. (Peter)
2. Move static_call(perf_snapshot_branch_stack) to the helper. (Alexei)
3. Add argument flags to bpf_get_branch_snapshot. (Andrii)
4. Make MAX_BRANCH_SNAPSHOT an enum and rename it
PERF_MAX_BRANCH_SNAPSHOT. (Andrii)
5. Make bpf_get_branch_snapshot similar to bpf_read_branch_records.
(Andrii)
6. Move the test target function to bpf_testmod. Update kallsyms_find_next
to work properly with modules. (Andrii)
Changes v2 => v3:
1. Fix the use of static_call. (Peter)
2. Limit the use to perfmon version >= 2. (Peter)
3. Modify intel_pmu_snapshot_branch_stack() to use intel_pmu_disable_all
and intel_pmu_enable_all().
Changes v1 => v2:
1. Rename the helper as bpf_get_branch_snapshot;
2. Fix/simplify the use of static_call;
3. Instead of percpu variables, let intel_pmu_snapshot_branch_stack output
branch records to an output argument of type perf_branch_snapshot.
Branch stack can be very useful in understanding software events. For
example, when a long function, e.g. sys_perf_event_open, returns an errno,
it is not obvious why the function failed. Branch stack could provide very
helpful information in this type of scenario.
This set adds support for reading the branch stack with a new BPF helper,
bpf_get_branch_snapshot(). Currently, this is only supported on Intel
systems. It is also possible to support the same feature on PowerPC.
The hardware that records the branch stack is not stopped automatically on
software events, so it needs to be stopped in software soon after the event;
otherwise, the hardware buffers/registers will be flushed. One of the key
design considerations in this set is to minimize the number of branch record
entries consumed between when the event triggers and when the hardware
recorder is stopped.
Based on this goal, the current design differs from the discussion in the
original RFC [1]:
1) A static call is used when supported, to avoid a function pointer
dereference;
2) intel_pmu_lbr_disable_all() is used instead of perf_pmu_disable(),
because the latter consumes about 10 entries before stopping the LBR.
With the current code, on Intel CPUs, the LBR is stopped within about 10
branch entries after the fexit event triggers:
ID: 0 from intel_pmu_lbr_disable_all+58 to intel_pmu_lbr_disable_all+93
ID: 1 from intel_pmu_lbr_disable_all+54 to intel_pmu_lbr_disable_all+58
ID: 2 from intel_pmu_snapshot_branch_stack+102 to intel_pmu_lbr_disable_all+0
ID: 3 from bpf_get_branch_snapshot+18 to intel_pmu_snapshot_branch_stack+0
ID: 4 from bpf_get_branch_snapshot+18 to bpf_get_branch_snapshot+0
ID: 5 from __brk_limit+474918983 to bpf_get_branch_snapshot+0
ID: 6 from __bpf_prog_enter+34 to __brk_limit+474918971
ID: 7 from migrate_disable+60 to __bpf_prog_enter+9
ID: 8 from __bpf_prog_enter+4 to migrate_disable+0
ID: 9 from bpf_testmod_loop_test+20 to __bpf_prog_enter+0
ID: 10 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
ID: 11 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
ID: 12 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
ID: 13 from bpf_testmod_loop_test+20 to bpf_testmod_loop_test+13
...
[1] https://lore.kernel.org/bpf/[email protected]/
Song Liu (3):
perf: enable branch record for software events
bpf: introduce helper bpf_get_branch_snapshot
selftests/bpf: add test for bpf_get_branch_snapshot
arch/x86/events/intel/core.c | 29 ++++-
arch/x86/events/intel/ds.c | 8 --
arch/x86/events/perf_event.h | 10 +-
include/linux/perf_event.h | 23 ++++
include/uapi/linux/bpf.h | 22 ++++
kernel/bpf/trampoline.c | 3 +-
kernel/events/core.c | 2 +
kernel/trace/bpf_trace.c | 30 ++++++
tools/include/uapi/linux/bpf.h | 22 ++++
.../selftests/bpf/bpf_testmod/bpf_testmod.c | 19 +++-
.../selftests/bpf/prog_tests/core_reloc.c | 14 +--
.../bpf/prog_tests/get_branch_snapshot.c | 100 ++++++++++++++++++
.../selftests/bpf/prog_tests/module_attach.c | 39 -------
.../selftests/bpf/progs/get_branch_snapshot.c | 40 +++++++
tools/testing/selftests/bpf/test_progs.c | 39 +++++++
tools/testing/selftests/bpf/test_progs.h | 2 +
tools/testing/selftests/bpf/trace_helpers.c | 37 +++++++
tools/testing/selftests/bpf/trace_helpers.h | 5 +
18 files changed, 378 insertions(+), 66 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
create mode 100644 tools/testing/selftests/bpf/progs/get_branch_snapshot.c
--
2.30.2
Introduce bpf_get_branch_snapshot(), which allows tracing programs to get
the branch trace from hardware (e.g. Intel LBR). To use the feature, the
user needs to create a perf_event with proper branch_record filtering
on each cpu, and then call bpf_get_branch_snapshot() in the BPF program.
On Intel CPUs, the VLBR event (raw event 0x1b00) can be used for this.
Acked-by: John Fastabend <[email protected]>
Acked-by: Andrii Nakryiko <[email protected]>
Signed-off-by: Song Liu <[email protected]>
---
include/uapi/linux/bpf.h | 22 ++++++++++++++++++++++
kernel/bpf/trampoline.c | 3 ++-
kernel/trace/bpf_trace.c | 30 ++++++++++++++++++++++++++++++
tools/include/uapi/linux/bpf.h | 22 ++++++++++++++++++++++
4 files changed, 76 insertions(+), 1 deletion(-)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 791f31dd0abee..b695ef151001e 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4877,6 +4877,27 @@ union bpf_attr {
* Get the struct pt_regs associated with **task**.
* Return
* A pointer to struct pt_regs.
+ *
+ * long bpf_get_branch_snapshot(void *entries, u32 size, u64 flags)
+ * Description
+ * Get branch trace from hardware engines like Intel LBR. The
+ * hardware engine is stopped shortly after the helper is
+ * called. Therefore, the user needs to filter branch entries
+ * based on the actual use case. To capture branch trace
+ * before the trigger point of the BPF program, the helper
+ * should be called at the beginning of the BPF program.
+ *
+ * The data is stored as struct perf_branch_entry into output
+ * buffer *entries*. *size* is the size of *entries* in bytes.
+ * *flags* is reserved for now and must be zero.
+ *
+ * Return
+ * On success, number of bytes written to *entries*. On error, a
+ * negative value.
+ *
+ * **-EINVAL** if *flags* is not zero.
+ *
+ * **-ENOENT** if architecture does not support branch records.
*/
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
@@ -5055,6 +5076,7 @@ union bpf_attr {
FN(get_func_ip), \
FN(get_attach_cookie), \
FN(task_pt_regs), \
+ FN(get_branch_snapshot), \
/* */
/* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
index fe1e857324e66..39eaaff81953d 100644
--- a/kernel/bpf/trampoline.c
+++ b/kernel/bpf/trampoline.c
@@ -10,6 +10,7 @@
#include <linux/rcupdate_trace.h>
#include <linux/rcupdate_wait.h>
#include <linux/module.h>
+#include <linux/static_call.h>
/* dummy _ops. The verifier will operate on target program's ops. */
const struct bpf_verifier_ops bpf_extension_verifier_ops = {
@@ -526,7 +527,7 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
}
#define NO_START_TIME 1
-static u64 notrace bpf_prog_start_time(void)
+static __always_inline u64 notrace bpf_prog_start_time(void)
{
u64 start = NO_START_TIME;
diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 8e2eb950aa829..067e88c3d2ee5 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -1017,6 +1017,34 @@ static const struct bpf_func_proto bpf_get_attach_cookie_proto_pe = {
.arg1_type = ARG_PTR_TO_CTX,
};
+BPF_CALL_3(bpf_get_branch_snapshot, void *, buf, u32, size, u64, flags)
+{
+#ifndef CONFIG_X86
+ return -ENOENT;
+#else
+ static const u32 br_entry_size = sizeof(struct perf_branch_entry);
+ u32 entry_cnt = size / br_entry_size;
+
+ entry_cnt = static_call(perf_snapshot_branch_stack)(buf, entry_cnt);
+
+ if (unlikely(flags))
+ return -EINVAL;
+
+ if (!entry_cnt)
+ return -ENOENT;
+
+ return entry_cnt * br_entry_size;
+#endif
+}
+
+static const struct bpf_func_proto bpf_get_branch_snapshot_proto = {
+ .func = bpf_get_branch_snapshot,
+ .gpl_only = true,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_UNINIT_MEM,
+ .arg2_type = ARG_CONST_SIZE_OR_ZERO,
+};
+
static const struct bpf_func_proto *
bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
@@ -1132,6 +1160,8 @@ bpf_tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
return &bpf_snprintf_proto;
case BPF_FUNC_get_func_ip:
return &bpf_get_func_ip_proto_tracing;
+ case BPF_FUNC_get_branch_snapshot:
+ return &bpf_get_branch_snapshot_proto;
default:
return bpf_base_func_proto(func_id);
}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 791f31dd0abee..b695ef151001e 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4877,6 +4877,27 @@ union bpf_attr {
* Get the struct pt_regs associated with **task**.
* Return
* A pointer to struct pt_regs.
+ *
+ * long bpf_get_branch_snapshot(void *entries, u32 size, u64 flags)
+ * Description
+ * Get branch trace from hardware engines like Intel LBR. The
+ * hardware engine is stopped shortly after the helper is
+ * called. Therefore, the user needs to filter branch entries
+ * based on the actual use case. To capture branch trace
+ * before the trigger point of the BPF program, the helper
+ * should be called at the beginning of the BPF program.
+ *
+ * The data is stored as struct perf_branch_entry into output
+ * buffer *entries*. *size* is the size of *entries* in bytes.
+ * *flags* is reserved for now and must be zero.
+ *
+ * Return
+ * On success, number of bytes written to *entries*. On error, a
+ * negative value.
+ *
+ * **-EINVAL** if *flags* is not zero.
+ *
+ * **-ENOENT** if architecture does not support branch records.
*/
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
@@ -5055,6 +5076,7 @@ union bpf_attr {
FN(get_func_ip), \
FN(get_attach_cookie), \
FN(task_pt_regs), \
+ FN(get_branch_snapshot), \
/* */
/* integer value in 'imm' field of BPF_CALL instruction selects which helper
--
2.30.2
> On Sep 7, 2021, at 1:27 PM, Song Liu <[email protected]> wrote:
Forgot to add changes:
Changes v5 => v6:
1. Add local_irq_save/restore to intel_pmu_snapshot_branch_stack.
(Peter)
2. Remove buf and size check in bpf_get_branch_snapshot, move the flags
check to later in the function. (Peter, Andrii)
3. Revise comments for bpf_get_branch_snapshot in bpf.h. (Andrii)
On Tue, Sep 7, 2021 at 1:31 PM Song Liu <[email protected]> wrote:
Looks great, thanks! Looking forward to being able to use it. Please
consider following up with migrate_disable() inlining as well.
For the series:
Acked-by: Andrii Nakryiko <[email protected]>
Hi Peter,
Do you have further comments/concerns on v6? If not, could you please
reply with your Reviewed-by or Acked-by?
Thanks,
Song