2021-04-16 22:14:56

by Song Liu

Subject: [PATCH v3 0/4] perf util: bpf perf improvements

This patch set improves bpf_perf (perf-stat --bpf-counter) by
1) exposing key definitions in a libperf header;
2) adding a compatibility check for perf_attr_map;
3) introducing the config option stat.bpf-counter-events;
4) introducing the 'b' modifier to the event parser.

Changes v2 => v3:
1. Add the 'b' modifier. (Jiri)
2. Allow configuring stat.bpf-counter-events with any event name (instead
of limiting it to hardware events). (Jiri)

Changes v1 => v2:
1. Separate 2/3 from 1/3. (Jiri)
2. Rename bperf.h to bpf_perf.h. (Jiri)
3. Other small fixes/optimizations. (Jiri)

Song Liu (4):
perf util: move bpf_perf definitions to a libperf header
perf bpf: check perf_attr_map is compatible with the perf binary
perf-stat: introduce config stat.bpf-counter-events
perf-stat: introduce ':b' modifier

tools/lib/perf/include/perf/bpf_perf.h | 31 ++++++++++++++++
tools/perf/Documentation/perf-stat.txt | 2 ++
tools/perf/builtin-stat.c | 43 ++++++++++++++--------
tools/perf/util/bpf_counter.c | 50 +++++++++++++-------------
tools/perf/util/config.c | 4 +++
tools/perf/util/evsel.c | 22 ++++++++++++
tools/perf/util/evsel.h | 9 +++++
tools/perf/util/parse-events.c | 8 ++++-
tools/perf/util/parse-events.l | 2 +-
tools/perf/util/target.h | 5 ---
10 files changed, 129 insertions(+), 47 deletions(-)
create mode 100644 tools/lib/perf/include/perf/bpf_perf.h

--
2.30.2


2021-04-16 22:15:07

by Song Liu

Subject: [PATCH v3 1/4] perf util: move bpf_perf definitions to a libperf header

Move perf_event_attr_map_entry and BPF_PERF_DEFAULT_ATTR_MAP_PATH to
bpf_perf.h so that other tools can follow the same protocol and share
hardware PMCs with perf.
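As a rough illustration of that protocol for an out-of-tree consumer
(hypothetical helper, not perf code; a plain file stands in for the
bpffs-pinned map, and the BPF syscalls are elided):

```c
#include <fcntl.h>
#include <stdint.h>
#include <sys/file.h>
#include <unistd.h>

/*
 * Hypothetical sketch: an external tool following the bpf_perf
 * protocol takes flock() on the pinned attr_map before any lookup or
 * update of a perf_event_attr_map_entry.
 */
struct perf_event_attr_map_entry {
	uint32_t link_id;     /* bpf_link ID of the leader prog */
	uint32_t diff_map_id; /* ID of the per-session diff map */
};

static int lock_attr_map(const char *path)
{
	int fd = open(path, O_RDWR | O_CREAT, 0600);

	if (fd < 0)
		return -1;
	if (flock(fd, LOCK_EX)) {	/* serialize with other sessions */
		close(fd);
		return -1;
	}
	/* ... bpf_map_lookup_elem()/bpf_map_update_elem() here ... */
	return fd;	/* caller releases with flock(fd, LOCK_UN) */
}
```

The struct layout matches the header added by this patch; the helper
name and the file-based stand-in are illustrative only.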

Signed-off-by: Song Liu <[email protected]>
---
tools/lib/perf/include/perf/bpf_perf.h | 31 ++++++++++++++++++++++++++
tools/perf/util/bpf_counter.c | 27 +++-------------------
2 files changed, 34 insertions(+), 24 deletions(-)
create mode 100644 tools/lib/perf/include/perf/bpf_perf.h

diff --git a/tools/lib/perf/include/perf/bpf_perf.h b/tools/lib/perf/include/perf/bpf_perf.h
new file mode 100644
index 0000000000000..e7cf6ba7b674b
--- /dev/null
+++ b/tools/lib/perf/include/perf/bpf_perf.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */
+#ifndef __LIBPERF_BPF_PERF_H
+#define __LIBPERF_BPF_PERF_H
+
+#include <linux/types.h> /* for __u32 */
+
+/*
+ * bpf_perf uses a hashmap, the attr_map, to track all the leader programs.
+ * The hashmap is pinned in bpffs. flock() on this file is used to ensure
+ * no concurrent access to the attr_map. The key of attr_map is struct
+ * perf_event_attr, and the value is struct perf_event_attr_map_entry.
+ *
+ * struct perf_event_attr_map_entry contains two __u32 IDs, bpf_link of the
+ * leader prog, and the diff_map. Each perf-stat session holds a reference
+ * to the bpf_link to make sure the leader prog is attached to sched_switch
+ * tracepoint.
+ *
+ * Since the hashmap only contains IDs of the bpf_link and diff_map, it
+ * does not hold any references to the leader program. Once all perf-stat
+ * sessions of these events exit, the leader prog, its maps, and the
+ * perf_events will be freed.
+ */
+struct perf_event_attr_map_entry {
+ __u32 link_id;
+ __u32 diff_map_id;
+};
+
+/* default attr_map name */
+#define BPF_PERF_DEFAULT_ATTR_MAP_PATH "perf_attr_map"
+
+#endif /* __LIBPERF_BPF_PERF_H */
diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
index 81d1df3c4ec0e..be484ddbbd5be 100644
--- a/tools/perf/util/bpf_counter.c
+++ b/tools/perf/util/bpf_counter.c
@@ -14,6 +14,7 @@
#include <bpf/btf.h>
#include <bpf/libbpf.h>
#include <api/fs/fs.h>
+#include <perf/bpf_perf.h>

#include "bpf_counter.h"
#include "counts.h"
@@ -29,28 +30,6 @@
#include "bpf_skel/bperf_leader.skel.h"
#include "bpf_skel/bperf_follower.skel.h"

-/*
- * bperf uses a hashmap, the attr_map, to track all the leader programs.
- * The hashmap is pinned in bpffs. flock() on this file is used to ensure
- * no concurrent access to the attr_map. The key of attr_map is struct
- * perf_event_attr, and the value is struct perf_event_attr_map_entry.
- *
- * struct perf_event_attr_map_entry contains two __u32 IDs, bpf_link of the
- * leader prog, and the diff_map. Each perf-stat session holds a reference
- * to the bpf_link to make sure the leader prog is attached to sched_switch
- * tracepoint.
- *
- * Since the hashmap only contains IDs of the bpf_link and diff_map, it
- * does not hold any references to the leader program. Once all perf-stat
- * sessions of these events exit, the leader prog, its maps, and the
- * perf_events will be freed.
- */
-struct perf_event_attr_map_entry {
- __u32 link_id;
- __u32 diff_map_id;
-};
-
-#define DEFAULT_ATTR_MAP_PATH "fs/bpf/perf_attr_map"
#define ATTR_MAP_SIZE 16

static inline void *u64_to_ptr(__u64 ptr)
@@ -341,8 +320,8 @@ static int bperf_lock_attr_map(struct target *target)
if (target->attr_map) {
scnprintf(path, PATH_MAX, "%s", target->attr_map);
} else {
- scnprintf(path, PATH_MAX, "%s/%s", sysfs__mountpoint(),
- DEFAULT_ATTR_MAP_PATH);
+ scnprintf(path, PATH_MAX, "%s/fs/bpf/%s", sysfs__mountpoint(),
+ BPF_PERF_DEFAULT_ATTR_MAP_PATH);
}

if (access(path, F_OK)) {
--
2.30.2

2021-04-16 22:15:13

by Song Liu

Subject: [PATCH v3 4/4] perf-stat: introduce ':b' modifier

Introduce the 'b' modifier to the event parser; it tells perf to use a
BPF program to manage the event. This is the same as the --bpf-counters
option, but applies only to this event. For example,

perf stat -e cycles:b,cs # use bpf for cycles, but not cs
perf stat -e cycles,cs --bpf-counters # use bpf for both cycles and cs

Suggested-by: Jiri Olsa <[email protected]>
Signed-off-by: Song Liu <[email protected]>
---
tools/perf/util/bpf_counter.c | 2 +-
tools/perf/util/evsel.h | 1 +
tools/perf/util/parse-events.c | 8 +++++++-
tools/perf/util/parse-events.l | 2 +-
4 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
index 3189b63714371..7b4d511c7e6ec 100644
--- a/tools/perf/util/bpf_counter.c
+++ b/tools/perf/util/bpf_counter.c
@@ -790,7 +790,7 @@ int bpf_counter__load(struct evsel *evsel, struct target *target)
{
if (target->bpf_str)
evsel->bpf_counter_ops = &bpf_program_profiler_ops;
- else if (target->use_bpf)
+ else if (target->use_bpf || evsel->bpf_counter)
evsel->bpf_counter_ops = &bperf_ops;
else if (evsel__match_bpf_counter_events(evsel->name))
evsel->bpf_counter_ops = &bperf_ops;
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index ce4b629d659c2..8f66cdcb338d0 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -82,6 +82,7 @@ struct evsel {
bool auto_merge_stats;
bool collect_stat;
bool weak_group;
+ bool bpf_counter;
int bpf_fd;
struct bpf_object *bpf_obj;
};
diff --git a/tools/perf/util/parse-events.c b/tools/perf/util/parse-events.c
index 8123d218ad17c..46ebd269a98d1 100644
--- a/tools/perf/util/parse-events.c
+++ b/tools/perf/util/parse-events.c
@@ -1804,6 +1804,7 @@ struct event_modifier {
int pinned;
int weak;
int exclusive;
+ int bpf_counter;
};

static int get_event_modifier(struct event_modifier *mod, char *str,
@@ -1824,6 +1825,7 @@ static int get_event_modifier(struct event_modifier *mod, char *str,
int exclude = eu | ek | eh;
int exclude_GH = evsel ? evsel->exclude_GH : 0;
int weak = 0;
+ int bpf_counter = 0;

memset(mod, 0, sizeof(*mod));

@@ -1867,6 +1869,8 @@ static int get_event_modifier(struct event_modifier *mod, char *str,
exclusive = 1;
} else if (*str == 'W') {
weak = 1;
+ } else if (*str == 'b') {
+ bpf_counter = 1;
} else
break;

@@ -1898,6 +1902,7 @@ static int get_event_modifier(struct event_modifier *mod, char *str,
mod->sample_read = sample_read;
mod->pinned = pinned;
mod->weak = weak;
+ mod->bpf_counter = bpf_counter;
mod->exclusive = exclusive;

return 0;
@@ -1912,7 +1917,7 @@ static int check_modifier(char *str)
char *p = str;

/* The sizeof includes 0 byte as well. */
- if (strlen(str) > (sizeof("ukhGHpppPSDIWe") - 1))
+ if (strlen(str) > (sizeof("ukhGHpppPSDIWeb") - 1))
return -1;

while (*p) {
@@ -1953,6 +1958,7 @@ int parse_events__modifier_event(struct list_head *list, char *str, bool add)
evsel->sample_read = mod.sample_read;
evsel->precise_max = mod.precise_max;
evsel->weak_group = mod.weak;
+ evsel->bpf_counter = mod.bpf_counter;

if (evsel__is_group_leader(evsel)) {
evsel->core.attr.pinned = mod.pinned;
diff --git a/tools/perf/util/parse-events.l b/tools/perf/util/parse-events.l
index 0b36285a9435d..fb8646cc3e834 100644
--- a/tools/perf/util/parse-events.l
+++ b/tools/perf/util/parse-events.l
@@ -210,7 +210,7 @@ name_tag [\'][a-zA-Z_*?\[\]][a-zA-Z0-9_*?\-,\.\[\]:=]*[\']
name_minus [a-zA-Z_*?][a-zA-Z0-9\-_*?.:]*
drv_cfg_term [a-zA-Z0-9_\.]+(=[a-zA-Z0-9_*?\.:]+)?
/* If you add a modifier you need to update check_modifier() */
-modifier_event [ukhpPGHSDIWe]+
+modifier_event [ukhpPGHSDIWeb]+
modifier_bp [rwx]{1,3}

%%
--
2.30.2

2021-04-16 22:17:10

by Song Liu

Subject: [PATCH v3 2/4] perf bpf: check perf_attr_map is compatible with the perf binary

perf_attr_map could be shared among different versions of the perf
binary. Add bperf_attr_map_compatible() to check whether the existing
attr_map is compatible with the current perf binary.

Signed-off-by: Song Liu <[email protected]>
---
tools/perf/util/bpf_counter.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)

diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
index be484ddbbd5be..5de991ab46af9 100644
--- a/tools/perf/util/bpf_counter.c
+++ b/tools/perf/util/bpf_counter.c
@@ -312,6 +312,20 @@ static __u32 bpf_map_get_id(int fd)
return map_info.id;
}

+static bool bperf_attr_map_compatible(int attr_map_fd)
+{
+ struct bpf_map_info map_info = {0};
+ __u32 map_info_len = sizeof(map_info);
+ int err;
+
+ err = bpf_obj_get_info_by_fd(attr_map_fd, &map_info, &map_info_len);
+
+ if (err)
+ return false;
+ return (map_info.key_size == sizeof(struct perf_event_attr)) &&
+ (map_info.value_size == sizeof(struct perf_event_attr_map_entry));
+}
+
static int bperf_lock_attr_map(struct target *target)
{
char path[PATH_MAX];
@@ -346,6 +360,11 @@ static int bperf_lock_attr_map(struct target *target)
return -1;
}

+ if (!bperf_attr_map_compatible(map_fd)) {
+ close(map_fd);
+ return -1;
+
+ }
err = flock(map_fd, LOCK_EX);
if (err) {
close(map_fd);
--
2.30.2

2021-04-16 22:17:21

by Song Liu

Subject: [PATCH v3 3/4] perf-stat: introduce config stat.bpf-counter-events

Currently, to use BPF to aggregate perf event counters, the user uses
the --bpf-counters option. Add a config option, stat.bpf-counter-events,
to enable BPF by default for selected events. Events whose names appear
in the option will use BPF.

This also enables mixing BPF events and regular events in the same session.
For example:

perf config stat.bpf-counter-events=instructions
perf stat -e instructions,cs

The second command will use BPF for "instructions" but not "cs".

Signed-off-by: Song Liu <[email protected]>
---
tools/perf/Documentation/perf-stat.txt | 2 ++
tools/perf/builtin-stat.c | 43 +++++++++++++++++---------
tools/perf/util/bpf_counter.c | 2 ++
tools/perf/util/config.c | 4 +++
tools/perf/util/evsel.c | 22 +++++++++++++
tools/perf/util/evsel.h | 8 +++++
tools/perf/util/target.h | 5 ---
7 files changed, 66 insertions(+), 20 deletions(-)

diff --git a/tools/perf/Documentation/perf-stat.txt b/tools/perf/Documentation/perf-stat.txt
index 6ec5960b08c3d..78afe13cd7d47 100644
--- a/tools/perf/Documentation/perf-stat.txt
+++ b/tools/perf/Documentation/perf-stat.txt
@@ -97,6 +97,8 @@ report::
Use BPF programs to aggregate readings from perf_events. This
allows multiple perf-stat sessions that are counting the same metric (cycles,
instructions, etc.) to share hardware counters.
+ To use BPF programs on common hardware events by default, use
+ "perf config stat.bpf-counter-events=<list_of_events>".

--bpf-attr-map::
With option "--bpf-counters", different perf-stat sessions share
diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
index 2a2c15cac80a3..0c76271f3ef53 100644
--- a/tools/perf/builtin-stat.c
+++ b/tools/perf/builtin-stat.c
@@ -413,6 +413,8 @@ static int read_affinity_counters(struct timespec *rs)
evlist__for_each_entry(evsel_list, counter) {
if (evsel__cpu_iter_skip(counter, cpu))
continue;
+ if (evsel__is_bpf(counter))
+ continue;
if (!counter->err) {
counter->err = read_counter_cpu(counter, rs,
counter->cpu_iter - 1);
@@ -423,17 +425,28 @@ static int read_affinity_counters(struct timespec *rs)
return 0;
}

+/*
+ * Returns:
+ * 0 if all events use BPF;
+ * 1 if some events do NOT use BPF;
+ * < 0 on errors;
+ */
static int read_bpf_map_counters(void)
{
+ bool has_none_bpf_events = false;
struct evsel *counter;
int err;

evlist__for_each_entry(evsel_list, counter) {
+ if (!evsel__is_bpf(counter)) {
+ has_none_bpf_events = true;
+ continue;
+ }
err = bpf_counter__read(counter);
if (err)
return err;
}
- return 0;
+ return has_none_bpf_events ? 1 : 0;
}

static void read_counters(struct timespec *rs)
@@ -442,9 +455,10 @@ static void read_counters(struct timespec *rs)
int err;

if (!stat_config.stop_read_counter) {
- if (target__has_bpf(&target))
- err = read_bpf_map_counters();
- else
+ err = read_bpf_map_counters();
+ if (err < 0)
+ return;
+ if (err)
err = read_affinity_counters(rs);
if (err < 0)
return;
@@ -535,12 +549,13 @@ static int enable_counters(void)
struct evsel *evsel;
int err;

- if (target__has_bpf(&target)) {
- evlist__for_each_entry(evsel_list, evsel) {
- err = bpf_counter__enable(evsel);
- if (err)
- return err;
- }
+ evlist__for_each_entry(evsel_list, evsel) {
+ if (!evsel__is_bpf(evsel))
+ continue;
+
+ err = bpf_counter__enable(evsel);
+ if (err)
+ return err;
}

if (stat_config.initial_delay < 0) {
@@ -784,11 +799,9 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
if (affinity__setup(&affinity) < 0)
return -1;

- if (target__has_bpf(&target)) {
- evlist__for_each_entry(evsel_list, counter) {
- if (bpf_counter__load(counter, &target))
- return -1;
- }
+ evlist__for_each_entry(evsel_list, counter) {
+ if (bpf_counter__load(counter, &target))
+ return -1;
}

evlist__for_each_cpu (evsel_list, i, cpu) {
diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
index 5de991ab46af9..3189b63714371 100644
--- a/tools/perf/util/bpf_counter.c
+++ b/tools/perf/util/bpf_counter.c
@@ -792,6 +792,8 @@ int bpf_counter__load(struct evsel *evsel, struct target *target)
evsel->bpf_counter_ops = &bpf_program_profiler_ops;
else if (target->use_bpf)
evsel->bpf_counter_ops = &bperf_ops;
+ else if (evsel__match_bpf_counter_events(evsel->name))
+ evsel->bpf_counter_ops = &bperf_ops;

if (evsel->bpf_counter_ops)
return evsel->bpf_counter_ops->load(evsel, target);
diff --git a/tools/perf/util/config.c b/tools/perf/util/config.c
index 6bcb5ef221f8c..63d472b336de2 100644
--- a/tools/perf/util/config.c
+++ b/tools/perf/util/config.c
@@ -18,6 +18,7 @@
#include "util/hist.h" /* perf_hist_config */
#include "util/llvm-utils.h" /* perf_llvm_config */
#include "util/stat.h" /* perf_stat__set_big_num */
+#include "util/evsel.h" /* evsel__hw_names, evsel__use_bpf_counters */
#include "build-id.h"
#include "debug.h"
#include "config.h"
@@ -460,6 +461,9 @@ static int perf_stat_config(const char *var, const char *value)
if (!strcmp(var, "stat.no-csv-summary"))
perf_stat__set_no_csv_summary(perf_config_bool(var, value));

+ if (!strcmp(var, "stat.bpf-counter-events"))
+ evsel__bpf_counter_events = strdup(value);
+
/* Add other config variables here. */
return 0;
}
diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
index 2d2614eeaa20e..080ddcfefbcd2 100644
--- a/tools/perf/util/evsel.c
+++ b/tools/perf/util/evsel.c
@@ -492,6 +492,28 @@ const char *evsel__hw_names[PERF_COUNT_HW_MAX] = {
"ref-cycles",
};

+char *evsel__bpf_counter_events;
+
+bool evsel__match_bpf_counter_events(const char *name)
+{
+ int name_len;
+ bool match;
+ char *ptr;
+
+ if (!evsel__bpf_counter_events)
+ return false;
+
+ ptr = strstr(evsel__bpf_counter_events, name);
+ name_len = strlen(name);
+
+ /* check name matches a full token in evsel__bpf_counter_events */
+ match = (ptr != NULL) &&
+ ((ptr == evsel__bpf_counter_events) || (*(ptr - 1) == ',')) &&
+ ((*(ptr + name_len) == ',') || (*(ptr + name_len) == '\0'));
+
+ return match;
+}
+
static const char *__evsel__hw_name(u64 config)
{
if (config < PERF_COUNT_HW_MAX && evsel__hw_names[config])
diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
index eccc4fd5b3eb4..ce4b629d659c2 100644
--- a/tools/perf/util/evsel.h
+++ b/tools/perf/util/evsel.h
@@ -239,6 +239,11 @@ void evsel__calc_id_pos(struct evsel *evsel);

bool evsel__is_cache_op_valid(u8 type, u8 op);

+static inline bool evsel__is_bpf(struct evsel *evsel)
+{
+ return evsel->bpf_counter_ops != NULL;
+}
+
#define EVSEL__MAX_ALIASES 8

extern const char *evsel__hw_cache[PERF_COUNT_HW_CACHE_MAX][EVSEL__MAX_ALIASES];
@@ -246,6 +251,9 @@ extern const char *evsel__hw_cache_op[PERF_COUNT_HW_CACHE_OP_MAX][EVSEL__MAX_ALI
extern const char *evsel__hw_cache_result[PERF_COUNT_HW_CACHE_RESULT_MAX][EVSEL__MAX_ALIASES];
extern const char *evsel__hw_names[PERF_COUNT_HW_MAX];
extern const char *evsel__sw_names[PERF_COUNT_SW_MAX];
+extern char *evsel__bpf_counter_events;
+bool evsel__match_bpf_counter_events(const char *name);
+
int __evsel__hw_cache_type_op_res_name(u8 type, u8 op, u8 result, char *bf, size_t size);
const char *evsel__name(struct evsel *evsel);

diff --git a/tools/perf/util/target.h b/tools/perf/util/target.h
index 1bce3eb28ef25..4ff56217f2a65 100644
--- a/tools/perf/util/target.h
+++ b/tools/perf/util/target.h
@@ -66,11 +66,6 @@ static inline bool target__has_cpu(struct target *target)
return target->system_wide || target->cpu_list;
}

-static inline bool target__has_bpf(struct target *target)
-{
- return target->bpf_str || target->use_bpf;
-}
-
static inline bool target__none(struct target *target)
{
return !target__has_task(target) && !target__has_cpu(target);
--
2.30.2

2021-04-17 16:46:12

by Namhyung Kim

[permalink] [raw]
Subject: Re: [PATCH v3 3/4] perf-stat: introduce config stat.bpf-counter-events

Hi Song,

On Sat, Apr 17, 2021 at 7:13 AM Song Liu <[email protected]> wrote:
>
> Currently, to use BPF to aggregate perf event counters, the user uses
> the --bpf-counters option. Add a config option, stat.bpf-counter-events,
> to enable BPF by default for selected events. Events whose names appear
> in the option will use BPF.
>
> This also enables mixing BPF events and regular events in the same session.
> For example:
>
> perf config stat.bpf-counter-events=instructions
> perf stat -e instructions,cs
>
> The second command will use BPF for "instructions" but not "cs".
>
> Signed-off-by: Song Liu <[email protected]>
> ---
> @@ -535,12 +549,13 @@ static int enable_counters(void)
> struct evsel *evsel;
> int err;
>
> - if (target__has_bpf(&target)) {
> - evlist__for_each_entry(evsel_list, evsel) {
> - err = bpf_counter__enable(evsel);
> - if (err)
> - return err;
> - }
> + evlist__for_each_entry(evsel_list, evsel) {
> + if (!evsel__is_bpf(evsel))
> + continue;
> +
> + err = bpf_counter__enable(evsel);
> + if (err)
> + return err;

I just realized it doesn't have a disable counterpart.

> }
>
> if (stat_config.initial_delay < 0) {
> @@ -784,11 +799,9 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
> if (affinity__setup(&affinity) < 0)
> return -1;
>
> - if (target__has_bpf(&target)) {
> - evlist__for_each_entry(evsel_list, counter) {
> - if (bpf_counter__load(counter, &target))
> - return -1;
> - }
> + evlist__for_each_entry(evsel_list, counter) {
> + if (bpf_counter__load(counter, &target))
> + return -1;
> }
>
> evlist__for_each_cpu (evsel_list, i, cpu) {

[SNIP]
> diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
> index 2d2614eeaa20e..080ddcfefbcd2 100644
> --- a/tools/perf/util/evsel.c
> +++ b/tools/perf/util/evsel.c
> @@ -492,6 +492,28 @@ const char *evsel__hw_names[PERF_COUNT_HW_MAX] = {
> "ref-cycles",
> };
>
> +char *evsel__bpf_counter_events;
> +
> +bool evsel__match_bpf_counter_events(const char *name)
> +{
> + int name_len;
> + bool match;
> + char *ptr;
> +
> + if (!evsel__bpf_counter_events)
> + return false;
> +
> + ptr = strstr(evsel__bpf_counter_events, name);
> + name_len = strlen(name);
> +
> + /* check name matches a full token in evsel__bpf_counter_events */
> + match = (ptr != NULL) &&
> + ((ptr == evsel__bpf_counter_events) || (*(ptr - 1) == ',')) &&
> + ((*(ptr + name_len) == ',') || (*(ptr + name_len) == '\0'));

I'm not sure we have an event name which is a substring of another.
Maybe it can retry if it fails to match.
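Something like this (illustrative names, not the perf code) would keep
scanning past substring-only hits, so that "cycles" is still found even
when the configured list starts with "ref-cycles":

```c
#include <stdbool.h>
#include <string.h>

/*
 * Illustrative sketch of the retry idea, not the perf code: when a
 * strstr() hit is only a substring of a longer token, keep scanning
 * instead of giving up.
 */
static bool match_token(const char *list, const char *name)
{
	size_t name_len = strlen(name);
	const char *p = list;

	while ((p = strstr(p, name)) != NULL) {
		bool starts_token = (p == list) || (p[-1] == ',');
		bool ends_token = (p[name_len] == ',') ||
				  (p[name_len] == '\0');

		if (starts_token && ends_token)
			return true;
		p++;	/* substring-only hit: retry from the next char */
	}
	return false;
}
```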

Thanks,
Namhyung

> +
> + return match;
> +}
> +
> static const char *__evsel__hw_name(u64 config)
> {
> if (config < PERF_COUNT_HW_MAX && evsel__hw_names[config])

2021-04-19 18:41:54

by Jiri Olsa

Subject: Re: [PATCH v3 3/4] perf-stat: introduce config stat.bpf-counter-events

On Fri, Apr 16, 2021 at 03:13:24PM -0700, Song Liu wrote:

SNIP

> +/*
> + * Returns:
> + * 0 if all events use BPF;
> + * 1 if some events do NOT use BPF;
> + * < 0 on errors;
> + */
> static int read_bpf_map_counters(void)
> {
> + bool has_none_bpf_events = false;
> struct evsel *counter;
> int err;
>
> evlist__for_each_entry(evsel_list, counter) {
> + if (!evsel__is_bpf(counter)) {
> + has_none_bpf_events = true;
> + continue;
> + }
> err = bpf_counter__read(counter);
> if (err)
> return err;
> }
> - return 0;
> + return has_none_bpf_events ? 1 : 0;
> }
>
> static void read_counters(struct timespec *rs)
> @@ -442,9 +455,10 @@ static void read_counters(struct timespec *rs)
> int err;
>
> if (!stat_config.stop_read_counter) {
> - if (target__has_bpf(&target))
> - err = read_bpf_map_counters();
> - else
> + err = read_bpf_map_counters();
> + if (err < 0)
> + return;
> + if (err)
> err = read_affinity_counters(rs);

this part is confusing for me.. I understand we don't want to enter
read_affinity_counters when there's no bpf counter, so we don't set
affinities in vain.. but there must be a better way ;-)
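For reference, the tri-state contract those two hunks establish can be
modeled in isolation with stub types (hypothetical names, not perf's):

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Stub model of the contract above: read_bpf() returns 0 when every
 * event is BPF-managed, 1 when some are not, < 0 on error; the caller
 * falls through to the affinity-based read only for the non-BPF
 * leftovers.
 */
struct stub_evsel {
	bool is_bpf;
	bool read_by_bpf;
	bool read_by_affinity;
};

static int read_bpf(struct stub_evsel *evsel, size_t n)
{
	bool has_non_bpf = false;
	size_t i;

	for (i = 0; i < n; i++) {
		if (!evsel[i].is_bpf) {
			has_non_bpf = true;
			continue;
		}
		evsel[i].read_by_bpf = true;
	}
	return has_non_bpf ? 1 : 0;
}

static int read_counters(struct stub_evsel *evsel, size_t n)
{
	int err = read_bpf(evsel, n);
	size_t i;

	if (err < 0)
		return err;
	if (err) {	/* some events are not BPF: read them normally */
		for (i = 0; i < n; i++)
			if (!evsel[i].is_bpf)
				evsel[i].read_by_affinity = true;
	}
	return 0;
}
```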

> diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
> index 5de991ab46af9..3189b63714371 100644
> --- a/tools/perf/util/bpf_counter.c
> +++ b/tools/perf/util/bpf_counter.c
> @@ -792,6 +792,8 @@ int bpf_counter__load(struct evsel *evsel, struct target *target)
> evsel->bpf_counter_ops = &bpf_program_profiler_ops;
> else if (target->use_bpf)
> evsel->bpf_counter_ops = &bperf_ops;
> + else if (evsel__match_bpf_counter_events(evsel->name))
> + evsel->bpf_counter_ops = &bperf_ops;

please put this with the target->use_bpf check,
it seems like it's another thing

thanks,
jirka

2021-04-19 22:19:45

by Song Liu

Subject: Re: [PATCH v3 3/4] perf-stat: introduce config stat.bpf-counter-events



> On Apr 19, 2021, at 7:26 AM, Jiri Olsa <[email protected]> wrote:
>
> On Fri, Apr 16, 2021 at 03:13:24PM -0700, Song Liu wrote:
>
> SNIP
>
>> +/*
>> + * Returns:
>> + * 0 if all events use BPF;
>> + * 1 if some events do NOT use BPF;
>> + * < 0 on errors;
>> + */
>> static int read_bpf_map_counters(void)
>> {
>> + bool has_none_bpf_events = false;
>> struct evsel *counter;
>> int err;
>>
>> evlist__for_each_entry(evsel_list, counter) {
>> + if (!evsel__is_bpf(counter)) {
>> + has_none_bpf_events = true;
>> + continue;
>> + }
>> err = bpf_counter__read(counter);
>> if (err)
>> return err;
>> }
>> - return 0;
>> + return has_none_bpf_events ? 1 : 0;
>> }
>>
>> static void read_counters(struct timespec *rs)
>> @@ -442,9 +455,10 @@ static void read_counters(struct timespec *rs)
>> int err;
>>
>> if (!stat_config.stop_read_counter) {
>> - if (target__has_bpf(&target))
>> - err = read_bpf_map_counters();
>> - else
>> + err = read_bpf_map_counters();
>> + if (err < 0)
>> + return;
>> + if (err)
>> err = read_affinity_counters(rs);
>
> this part is confusing for me.. I understand we don't want to enter
> read_affinity_counters when there's no bpf counter, so we don't set
> affinities in vain.. but there must be a better way ;-)
>
>> diff --git a/tools/perf/util/bpf_counter.c b/tools/perf/util/bpf_counter.c
>> index 5de991ab46af9..3189b63714371 100644
>> --- a/tools/perf/util/bpf_counter.c
>> +++ b/tools/perf/util/bpf_counter.c
>> @@ -792,6 +792,8 @@ int bpf_counter__load(struct evsel *evsel, struct target *target)
>> evsel->bpf_counter_ops = &bpf_program_profiler_ops;
>> else if (target->use_bpf)
>> evsel->bpf_counter_ops = &bperf_ops;
>> + else if (evsel__match_bpf_counter_events(evsel->name))
>> + evsel->bpf_counter_ops = &bperf_ops;
>
> please put this with the target->use_bpf check,
> it seems like it's another thing

Thanks for the review. Fixing these in v4.

Song

2021-04-19 22:20:27

by Song Liu

Subject: Re: [PATCH v3 3/4] perf-stat: introduce config stat.bpf-counter-events



> On Apr 17, 2021, at 9:45 AM, Namhyung Kim <[email protected]> wrote:
>
> Hi Song,
>
> On Sat, Apr 17, 2021 at 7:13 AM Song Liu <[email protected]> wrote:
>>
>> Currently, to use BPF to aggregate perf event counters, the user uses
>> the --bpf-counters option. Add a config option, stat.bpf-counter-events,
>> to enable BPF by default for selected events. Events whose names appear
>> in the option will use BPF.
>>
>> This also enables mixing BPF events and regular events in the same session.
>> For example:
>>
>> perf config stat.bpf-counter-events=instructions
>> perf stat -e instructions,cs
>>
>> The second command will use BPF for "instructions" but not "cs".
>>
>> Signed-off-by: Song Liu <[email protected]>
>> ---
>> @@ -535,12 +549,13 @@ static int enable_counters(void)
>> struct evsel *evsel;
>> int err;
>>
>> - if (target__has_bpf(&target)) {
>> - evlist__for_each_entry(evsel_list, evsel) {
>> - err = bpf_counter__enable(evsel);
>> - if (err)
>> - return err;
>> - }
>> + evlist__for_each_entry(evsel_list, evsel) {
>> + if (!evsel__is_bpf(evsel))
>> + continue;
>> +
>> + err = bpf_counter__enable(evsel);
>> + if (err)
>> + return err;
>
> I just realized it doesn't have a disable counterpart.

I guess it is not really necessary for perf-stat? It is probably good
to have it though. How about we do it in a follow-up patch?

[...]

>> + if (!evsel__bpf_counter_events)
>> + return false;
>> +
>> + ptr = strstr(evsel__bpf_counter_events, name);
>> + name_len = strlen(name);
>> +
>> + /* check name matches a full token in evsel__bpf_counter_events */
>> + match = (ptr != NULL) &&
>> + ((ptr == evsel__bpf_counter_events) || (*(ptr - 1) == ',')) &&
>> + ((*(ptr + name_len) == ',') || (*(ptr + name_len) == '\0'));
>
> I'm not sure we have an event name which is a substring of another.
> Maybe it can retry if it fails to match.

We have ref-cycles and cycles. And some raw events may be substrings of others?

Thanks,
Song

[...]

2021-04-20 17:33:02

by Jiri Olsa

Subject: Re: [PATCH v3 3/4] perf-stat: introduce config stat.bpf-counter-events

On Mon, Apr 19, 2021 at 08:26:02PM +0000, Song Liu wrote:
>
>
> > On Apr 17, 2021, at 9:45 AM, Namhyung Kim <[email protected]> wrote:
> >
> > Hi Song,
> >
> > On Sat, Apr 17, 2021 at 7:13 AM Song Liu <[email protected]> wrote:
> >>
> >> Currently, to use BPF to aggregate perf event counters, the user uses
> >> the --bpf-counters option. Add a config option, stat.bpf-counter-events,
> >> to enable BPF by default for selected events. Events whose names appear
> >> in the option will use BPF.
> >>
> >> This also enables mixing BPF events and regular events in the same session.
> >> For example:
> >>
> >> perf config stat.bpf-counter-events=instructions
> >> perf stat -e instructions,cs
> >>
> >> The second command will use BPF for "instructions" but not "cs".
> >>
> >> Signed-off-by: Song Liu <[email protected]>
> >> ---
> >> @@ -535,12 +549,13 @@ static int enable_counters(void)
> >> struct evsel *evsel;
> >> int err;
> >>
> >> - if (target__has_bpf(&target)) {
> >> - evlist__for_each_entry(evsel_list, evsel) {
> >> - err = bpf_counter__enable(evsel);
> >> - if (err)
> >> - return err;
> >> - }
> >> + evlist__for_each_entry(evsel_list, evsel) {
> >> + if (!evsel__is_bpf(evsel))
> >> + continue;
> >> +
> >> + err = bpf_counter__enable(evsel);
> >> + if (err)
> >> + return err;
> >
> > I just realized it doesn't have a disable counterpart.
>
> I guess it is not really necessary for perf-stat? It is probably good
> to have it though. How about we do it in a follow-up patch?

good catch, should it at least do:
evsel->follower_skel->bss->enabled = 0;

because then the follower goes down only with the perf process, right?
still doing the counts till the end..?

also while checking on that I realized we open the counters
in a separate path for this in bperf_reload_leader_program by
calling evsel__open_per_cpu.. and we're missing all the
fallback code and setup from create_perf_stat_counter

I think we should at least call create_perf_stat_counter,
and also propagate the error value if it fails

jirka

>
> [...]
>
> >> + if (!evsel__bpf_counter_events)
> >> + return false;
> >> +
> >> + ptr = strstr(evsel__bpf_counter_events, name);
> >> + name_len = strlen(name);
> >> +
> >> + /* check name matches a full token in evsel__bpf_counter_events */
> >> + match = (ptr != NULL) &&
> >> + ((ptr == evsel__bpf_counter_events) || (*(ptr - 1) == ',')) &&
> >> + ((*(ptr + name_len) == ',') || (*(ptr + name_len) == '\0'));
> >
> > I'm not sure we have an event name which is a substring of another.
> > Maybe it can retry if it fails to match.
>
> We have ref-cycles and cycles. And some raw events may be substring of others?
>
> Thanks,
> Song
>
> [...]
>