2020-11-16 12:17:47

by Alexey Budankov

Subject: [PATCH v3 00/12] Introduce threaded trace streaming for basic perf record operation


Changes in v3:
- dropped the redundant patch 3/15
- applied the "data file" and "data directory" terms throughout the patch set
- captured Acked-by: tags from Namhyung Kim
- avoided braces where not needed
- employed a thread local variable for serial trace streaming
- added specs for the --threads option: core, socket, numa and user defined
- added parallel loading of data directory files similar to the prototype [1]

v2: https://lore.kernel.org/lkml/[email protected]/

Changes in v2:
- explicitly added credit tags to patches 6/15 and 15/15,
in addition to the cites [1], [2]
- updated the description of 3/15 to explicitly mention the reason
for opening data directories in read access mode (e.g. for perf report)
- implemented a fix for the compilation error in 2/15
- explicitly elaborated on the issues found that still need to be
resolved for threaded AUX trace capture

v1: https://lore.kernel.org/lkml/[email protected]/

This patch set provides a parallel threaded trace streaming mode for
the basic perf record operation. The mode mitigates profiling data
losses and resolves scalability issues of the serial and asynchronous
(--aio) trace streaming modes on multicore server systems. The design
and implementation are based on the prototype [1], [2].

Parallel threaded mode runs trace streaming threads that read the
kernel data buffers and write the captured data into several data
files located in the data directory. The layout of the trace streaming
threads and their mapping to the data buffers to read can be
configured with the value of the --threads command line option. The
specification value is a list of masks separated by colon: each mask
defines the cpus to be monitored by one thread, and the thread
affinity mask is separated by slash. For example,
<cpus mask 1>/<affinity mask 1>:<cpus mask 2>/<affinity mask 2>
specifies a parallel threads layout that consists of two threads with
the corresponding assigned cpus to be monitored. The specification
value can also be a string, e.g. "cpu", "core", "socket" or "numa",
meaning creation of a data streaming thread for monitoring every cpu,
whole core, socket or numa node. The option provided with no or empty
value defaults to the "cpu" layout, creating a data streaming thread
for every cpu being monitored. The specification masks are filtered
by the mask provided via the -C option.
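
For example, on a system with at least eight cpus, the user defined
value below creates two streaming threads: the first one reads the
buffers of cpus 0-3 and is affined to cpu 3, the second one reads the
buffers of cpus 4-7 and is affined to cpu 4 (<workload> is just a
placeholder for the traced command):

  tools/perf/perf record -o prof.data --threads=0-3/3:4-7/4 -- <workload>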

The parallel streaming mode is compatible with Zstd
compression/decompression (--compression-level) and external control
commands (--control). The mode is not enabled for pipe mode, nor for
AUX area tracing and related or derived modes like --snapshot or
--aux-sample. The --switch-output-* and --timestamp-filename options
are also not enabled for parallel streaming. The initial intent to
enable AUX area tracing faced the need to define an optimal way to
store index data in the data directory, and the --switch-output-* and
--timestamp-filename use cases are not yet clear for data directories.
Asynchronous (--aio) trace streaming and affinity (--affinity) modes
are mutually exclusive with the parallel streaming mode.

Basic analysis of data directories is provided in perf report mode.
Raw dump and aggregated reports are available for data directories,
though still without memory consumption optimizations.

Tested:

tools/perf/perf record -o prof.data --threads -- matrix.gcc.g.O3
tools/perf/perf record -o prof.data --threads= -- matrix.gcc.g.O3
tools/perf/perf record -o prof.data --threads=cpu -- matrix.gcc.g.O3
tools/perf/perf record -o prof.data --threads=core -- matrix.gcc.g.O3
tools/perf/perf record -o prof.data --threads=socket -- matrix.gcc.g.O3
tools/perf/perf record -o prof.data --threads=numa -- matrix.gcc.g.O3
tools/perf/perf record -o prof.data --threads=0-3/3:4-7/4 -- matrix.gcc.g.O3
tools/perf/perf record -o prof.data -C 2,5 --threads=0-3/3:4-7/4 -- matrix.gcc.g.O3
tools/perf/perf record -o prof.data -C 3,4 --threads=0-3/3:4-7/4 -- matrix.gcc.g.O3
tools/perf/perf record -o prof.data -C 0,4,2,6 --threads=core -- matrix.gcc.g.O3
tools/perf/perf record -o prof.data -C 0,4,2,6 --threads=numa -- matrix.gcc.g.O3
tools/perf/perf record -o prof.data --threads -g --call-graph dwarf,4096 -- matrix.gcc.g.O3
tools/perf/perf record -o prof.data --threads -g --call-graph dwarf,4096 --compression-level=3 -- matrix.gcc.g.O3
tools/perf/perf record -o prof.data --threads -a
tools/perf/perf record -D -1 -e cpu-cycles -a --control fd:10,11 -- sleep 30
tools/perf/perf record --threads -D -1 -e cpu-cycles -a --control fd:10,11 -- sleep 30

tools/perf/perf report -i prof.data
tools/perf/perf report -i prof.data --call-graph=callee
tools/perf/perf report -i prof.data --stdio --header
tools/perf/perf report -i prof.data -D --header

[1] git clone https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git -b perf/record_threads
[2] https://lore.kernel.org/lkml/[email protected]/

---
Alexey Budankov (12):
perf record: introduce thread affinity and mmap masks
perf record: introduce thread specific data array
perf record: introduce thread local variable
perf record: stop threads at the end of trace streaming
perf record: start threads at the beginning of trace streaming
perf record: introduce data file at mmap buffer object
perf record: init data file at mmap buffer object
perf record: introduce --threads=<spec> command line option
perf record: document parallel data streaming mode
perf report: output data file name in raw trace dump
perf session: load data directory files for analysis
perf session: use reader functions to load perf data file

tools/include/linux/bitmap.h | 11 +
tools/lib/api/fd/array.c | 17 +
tools/lib/api/fd/array.h | 1 +
tools/lib/bitmap.c | 14 +
tools/perf/Documentation/perf-record.txt | 18 +
tools/perf/builtin-inject.c | 3 +-
tools/perf/builtin-record.c | 1019 ++++++++++++++++++++--
tools/perf/util/evlist.c | 16 +
tools/perf/util/evlist.h | 1 +
tools/perf/util/mmap.c | 6 +
tools/perf/util/mmap.h | 6 +
tools/perf/util/ordered-events.h | 1 +
tools/perf/util/record.h | 2 +
tools/perf/util/session.c | 484 +++++++---
tools/perf/util/session.h | 5 +
tools/perf/util/tool.h | 3 +-
16 files changed, 1398 insertions(+), 209 deletions(-)

--
2.24.1


2020-11-16 12:19:57

by Alexey Budankov

Subject: [PATCH v3 04/12] perf record: stop threads at the end of trace streaming


Signal a thread to terminate by closing the write fd of its comm.msg
pipe. Receive a THREAD_MSG__READY message as confirmation of the
thread's termination. Stop the threads created for parallel trace
streaming prior to processing their stats.
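
The handshake relies on standard pipe and poll() semantics: closing
the write end makes the reader observe POLLHUP, and the thread writes
an ack message back before exiting. A minimal standalone sketch of
that pattern (illustrative names, not the patch code):

  #include <poll.h>
  #include <unistd.h>

  enum { MSG_READY = 1 };

  /* main thread side: ask the worker to stop and wait for its ack */
  static void stop_worker(int msg_write_fd, int ack_read_fd)
  {
          int ack = 0;

          close(msg_write_fd);                  /* worker's poll() sees POLLHUP */
          read(ack_read_fd, &ack, sizeof(ack)); /* blocks until the worker acks */
  }

  /* worker side: detect POLLHUP on the msg pipe, then send the ack */
  static void worker_loop(int msg_read_fd, int ack_write_fd)
  {
          struct pollfd pfd = { .fd = msg_read_fd, .events = POLLIN };
          int msg = MSG_READY;

          for (;;) {
                  poll(&pfd, 1, -1);
                  if (pfd.revents & POLLHUP)
                          break;        /* write end was closed: terminate */
          }
          write(ack_write_fd, &msg, sizeof(msg));
  }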

Signed-off-by: Alexey Budankov <[email protected]>
---
tools/perf/builtin-record.c | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index e41e1cd90168..d0b528cde68b 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -111,6 +111,16 @@ struct thread_data {

static __thread struct thread_data *thread;

+enum thread_msg {
+ THREAD_MSG__UNDEFINED = 0,
+ THREAD_MSG__READY,
+ THREAD_MSG__MAX,
+};
+
+static const char *thread_msg_tags[THREAD_MSG__MAX] = {
+ "UNDEFINED", "READY"
+};
+
struct record {
struct perf_tool tool;
struct record_opts opts;
@@ -1818,6 +1828,23 @@ static void hit_auxtrace_snapshot_trigger(struct record *rec)
}
}

+static int record__terminate_thread(struct thread_data *thread_data)
+{
+ int res;
+ enum thread_msg ack = THREAD_MSG__UNDEFINED;
+ pid_t tid = thread_data->tid;
+
+ close(thread_data->comm.msg[1]);
+ res = read(thread_data->comm.ack[0], &ack, sizeof(ack));
+ if (res != -1)
+ pr_debug("threads[%d]: sent %s\n", tid, thread_msg_tags[ack]);
+ else
+ pr_err("threads[%d]: failed to recv msg=%s from tid=%d\n",
+ thread->tid, thread_msg_tags[ack], tid);
+
+ return 0;
+}
+
static int record__start_threads(struct record *rec)
{
struct thread_data *thread_data = rec->thread_data;
@@ -1834,6 +1861,9 @@ static int record__stop_threads(struct record *rec, unsigned long *waking)
int t;
struct thread_data *thread_data = rec->thread_data;

+ for (t = 1; t < rec->nr_threads; t++)
+ record__terminate_thread(&thread_data[t]);
+
for (t = 0; t < rec->nr_threads; t++) {
rec->samples += thread_data[t].samples;
*waking += thread_data[t].waking;
--
2.24.1

2020-11-16 12:20:32

by Alexey Budankov

Subject: [PATCH v3 02/12] perf record: introduce thread specific data array


Introduce a thread specific data object and an array of such objects
to store and manage thread local data. Implement functions to
allocate, initialize, finalize and release thread specific data.

The thread local maps and overwrite_maps arrays keep pointers to the
mmap buffer objects to be served according to the thread's maps mask.
The thread local pollfd array keeps the event fds connected to the
mmap buffers according to the thread's maps mask.

Thread control commands are delivered via the thread local comm pipes
and the ctlfd_pos fd. External control commands (--control option)
are delivered via the evlist ctlfd_pos fd and handled by the main
tool thread.

Signed-off-by: Alexey Budankov <[email protected]>
---
tools/lib/api/fd/array.c | 17 ++++
tools/lib/api/fd/array.h | 1 +
tools/perf/builtin-record.c | 191 +++++++++++++++++++++++++++++++++++-
3 files changed, 206 insertions(+), 3 deletions(-)

diff --git a/tools/lib/api/fd/array.c b/tools/lib/api/fd/array.c
index 5e6cb9debe37..de8bcbaea3f1 100644
--- a/tools/lib/api/fd/array.c
+++ b/tools/lib/api/fd/array.c
@@ -88,6 +88,23 @@ int fdarray__add(struct fdarray *fda, int fd, short revents, enum fdarray_flags
return pos;
}

+int fdarray__clone(struct fdarray *fda, int pos, struct fdarray *base)
+{
+ struct pollfd *entry;
+ int npos;
+
+ if (pos >= base->nr)
+ return -EINVAL;
+
+ entry = &base->entries[pos];
+
+ npos = fdarray__add(fda, entry->fd, entry->events, base->priv[pos].flags);
+ if (npos >= 0)
+ fda->priv[npos] = base->priv[pos];
+
+ return npos;
+}
+
int fdarray__filter(struct fdarray *fda, short revents,
void (*entry_destructor)(struct fdarray *fda, int fd, void *arg),
void *arg)
diff --git a/tools/lib/api/fd/array.h b/tools/lib/api/fd/array.h
index 7fcf21a33c0c..4a03da7f1fc1 100644
--- a/tools/lib/api/fd/array.h
+++ b/tools/lib/api/fd/array.h
@@ -42,6 +42,7 @@ struct fdarray *fdarray__new(int nr_alloc, int nr_autogrow);
void fdarray__delete(struct fdarray *fda);

int fdarray__add(struct fdarray *fda, int fd, short revents, enum fdarray_flags flags);
+int fdarray__clone(struct fdarray *fda, int pos, struct fdarray *base);
int fdarray__poll(struct fdarray *fda, int timeout);
int fdarray__filter(struct fdarray *fda, short revents,
void (*entry_destructor)(struct fdarray *fda, int fd, void *arg),
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 82f009703ad7..765a90e38f69 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -56,6 +56,7 @@
#include <poll.h>
#include <pthread.h>
#include <unistd.h>
+#include <sys/syscall.h>
#include <sched.h>
#include <signal.h>
#ifdef HAVE_EVENTFD_SUPPORT
@@ -90,6 +91,24 @@ struct thread_mask {
struct mmap_cpu_mask affinity;
};

+struct thread_data {
+ pid_t tid;
+ struct thread_mask *mask;
+ struct {
+ int msg[2];
+ int ack[2];
+ } comm;
+ struct fdarray pollfd;
+ int ctlfd_pos;
+ struct mmap **maps;
+ struct mmap **overwrite_maps;
+ int nr_mmaps;
+ struct record *rec;
+ unsigned long long samples;
+ unsigned long waking;
+ u64 bytes_written;
+};
+
struct record {
struct perf_tool tool;
struct record_opts opts;
@@ -114,6 +133,7 @@ struct record {
struct mmap_cpu_mask affinity_mask;
unsigned long output_max_size; /* = 0: unlimited */
struct thread_mask *thread_masks;
+ struct thread_data *thread_data;
int nr_threads;
};

@@ -842,9 +862,168 @@ static int record__kcore_copy(struct machine *machine, struct perf_data *data)
return kcore_copy(from_dir, kcore_dir);
}

+static int record__thread_data_init_comm(struct thread_data *thread_data)
+{
+ if (pipe(thread_data->comm.msg) || pipe(thread_data->comm.ack)) {
+ pr_err("Failed to create thread comm pipes, error %m\n");
+ return -ENOMEM;
+ }
+
+ pr_debug("thread_data[%p]: msg=[%d,%d], ack=[%d,%d]\n", thread_data,
+ thread_data->comm.msg[0], thread_data->comm.msg[1],
+ thread_data->comm.ack[0], thread_data->comm.ack[1]);
+
+ return 0;
+}
+
+static int record__thread_data_init_maps(struct thread_data *thread_data, struct evlist *evlist)
+{
+ int m, tm, nr_mmaps = evlist->core.nr_mmaps;
+ struct mmap *mmap = evlist->mmap;
+ struct mmap *overwrite_mmap = evlist->overwrite_mmap;
+ struct perf_cpu_map *cpus = evlist->core.cpus;
+
+ thread_data->nr_mmaps = bitmap_weight(thread_data->mask->maps.bits, thread_data->mask->maps.nbits);
+ if (mmap) {
+ thread_data->maps = zalloc(thread_data->nr_mmaps * sizeof(struct mmap *));
+ if (!thread_data->maps) {
+ pr_err("Failed to allocate maps thread data\n");
+ return -ENOMEM;
+ }
+ }
+ if (overwrite_mmap) {
+ thread_data->overwrite_maps = zalloc(thread_data->nr_mmaps * sizeof(struct mmap *));
+ if (!thread_data->overwrite_maps) {
+ pr_err("Failed to allocate overwrite maps thread data\n");
+ return -ENOMEM;
+ }
+ }
+ pr_debug("thread_data[%p]: nr_mmaps=%d, maps=%p, overwrite_maps=%p\n", thread_data,
+ thread_data->nr_mmaps, thread_data->maps, thread_data->overwrite_maps);
+
+ for (m = 0, tm = 0; m < nr_mmaps && tm < thread_data->nr_mmaps; m++) {
+ if (test_bit(cpus->map[m], thread_data->mask->maps.bits)) {
+ if (thread_data->maps) {
+ thread_data->maps[tm] = &mmap[m];
+ pr_debug("thread_data[%p]: maps[%d] -> mmap[%d], cpus[%d]\n",
+ thread_data, tm, m, cpus->map[m]);
+ }
+ if (thread_data->overwrite_maps) {
+ thread_data->overwrite_maps[tm] = &overwrite_mmap[m];
+ pr_debug("thread_data[%p]: overwrite_maps[%d] -> overwrite_mmap[%d], cpus[%d]\n",
+ thread_data, tm, m, cpus->map[m]);
+ }
+ tm++;
+ }
+ }
+
+ return 0;
+}
+
+static int record__thread_data_init_pollfd(struct thread_data *thread_data, struct evlist *evlist)
+{
+ int f, tm, pos;
+ struct mmap *map, *overwrite_map;
+
+ fdarray__init(&thread_data->pollfd, 64);
+
+ for (tm = 0; tm < thread_data->nr_mmaps; tm++) {
+ map = thread_data->maps ? thread_data->maps[tm] : NULL;
+ overwrite_map = thread_data->overwrite_maps ? thread_data->overwrite_maps[tm] : NULL;
+
+ for (f = 0; f < evlist->core.pollfd.nr; f++) {
+ void *ptr = evlist->core.pollfd.priv[f].ptr;
+
+ if ((map && ptr == map) || (overwrite_map && ptr == overwrite_map)) {
+ pos = fdarray__clone(&thread_data->pollfd, f, &evlist->core.pollfd);
+ if (pos < 0)
+ return pos;
+ pr_debug("thread_data[%p]: pollfd[%d] <- event_fd=%d\n",
+ thread_data, pos, evlist->core.pollfd.entries[f].fd);
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int record__alloc_thread_data(struct record *rec, struct evlist *evlist)
+{
+ int t, ret;
+ struct thread_data *thread_data;
+
+ thread_data = zalloc(rec->nr_threads * sizeof(*(rec->thread_data)));
+ if (!thread_data) {
+ pr_err("Failed to allocate thread data\n");
+ return -ENOMEM;
+ }
+
+ for (t = 0; t < rec->nr_threads; t++) {
+ thread_data[t].rec = rec;
+ thread_data[t].mask = &rec->thread_masks[t];
+ ret = record__thread_data_init_maps(&thread_data[t], evlist);
+ if (ret)
+ return ret;
+ ret = record__thread_data_init_pollfd(&thread_data[t], evlist);
+ if (ret)
+ return ret;
+ if (t) {
+ thread_data[t].tid = -1;
+ ret = record__thread_data_init_comm(&thread_data[t]);
+ if (ret)
+ return ret;
+ thread_data[t].ctlfd_pos = fdarray__add(&thread_data[t].pollfd,
+ thread_data[t].comm.msg[0],
+ POLLIN | POLLERR | POLLHUP,
+ fdarray_flag__nonfilterable);
+ if (thread_data[t].ctlfd_pos < 0)
+ return -ENOMEM;
+ pr_debug("thread_data[%p]: pollfd[%d] <- ctl_fd=%d\n",
+ thread_data, thread_data[t].ctlfd_pos,
+ thread_data[t].comm.msg[0]);
+ } else {
+ thread_data[t].tid = syscall(SYS_gettid);
+ if (evlist->ctl_fd.pos == -1)
+ continue;
+ thread_data[t].ctlfd_pos = fdarray__clone(&thread_data[t].pollfd,
+ evlist->ctl_fd.pos,
+ &evlist->core.pollfd);
+ if (thread_data[t].ctlfd_pos < 0)
+ return thread_data[t].ctlfd_pos;
+ pr_debug("thread_data[%p]: pollfd[%d] <- ctl_fd=%d\n",
+ thread_data, thread_data[t].ctlfd_pos,
+ evlist->core.pollfd.entries[evlist->ctl_fd.pos].fd);
+ }
+ }
+
+ rec->thread_data = thread_data;
+
+ return 0;
+}
+
+static int record__free_thread_data(struct record *rec)
+{
+ int t;
+
+ for (t = 0; t < rec->nr_threads; t++) {
+ close(rec->thread_data[t].comm.msg[0]);
+ close(rec->thread_data[t].comm.msg[1]);
+ close(rec->thread_data[t].comm.ack[0]);
+ close(rec->thread_data[t].comm.ack[1]);
+ zfree(&rec->thread_data[t].maps);
+ zfree(&rec->thread_data[t].overwrite_maps);
+ fdarray__exit(&rec->thread_data[t].pollfd);
+ }
+
+ zfree(&rec->thread_data);
+
+ return 0;
+}
+
static int record__mmap_evlist(struct record *rec,
struct evlist *evlist)
{
+ int ret;
struct record_opts *opts = &rec->opts;
bool auxtrace_overwrite = opts->auxtrace_snapshot_mode ||
opts->auxtrace_sample_mode;
@@ -875,6 +1054,14 @@ static int record__mmap_evlist(struct record *rec,
return -EINVAL;
}
}
+
+ if (evlist__initialize_ctlfd(evlist, opts->ctl_fd, opts->ctl_fd_ack))
+ return -1;
+
+ ret = record__alloc_thread_data(rec, evlist);
+ if (ret)
+ return ret;
+
return 0;
}

@@ -1845,9 +2032,6 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
perf_evlist__start_workload(rec->evlist);
}

- if (evlist__initialize_ctlfd(rec->evlist, opts->ctl_fd, opts->ctl_fd_ack))
- goto out_child;
-
if (opts->initial_delay) {
pr_info(EVLIST_DISABLED_MSG);
if (opts->initial_delay > 0) {
@@ -1998,6 +2182,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
record__synthesize_workload(rec, true);

out_child:
+ record__free_thread_data(rec);
evlist__finalize_ctlfd(rec->evlist);
record__mmap_read_all(rec, true);
record__aio_mmap_read_sync(rec);
--
2.24.1


2020-11-16 12:21:14

by Alexey Budankov

Subject: [PATCH v3 05/12] perf record: start threads at the beginning of trace streaming


Start threads in detached state because their management is
implemented via messaging, to avoid any scaling issues. Block signals
prior to thread start so that only the main tool thread gets notified
of external async signals during data collection. The thread affinity
mask is used to assign the cpus the thread is eligible to run on.
Wait and sync on thread start using the thread's comm.ack pipe.
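
The creation sequence follows the common pthread pattern of blocking
all signals around pthread_create() so the new thread inherits a
fully blocked mask, then restoring the original mask in the main
thread. A minimal standalone sketch of that pattern (illustrative
names, not the patch code):

  #include <pthread.h>
  #include <signal.h>

  static void *worker(void *arg)
  {
          /* async signals stay blocked here; only the main thread handles them */
          return arg;
  }

  static int start_detached_worker(pthread_t *handle)
  {
          sigset_t full, saved;
          pthread_attr_t attrs;
          int ret;

          sigfillset(&full);
          if (pthread_sigmask(SIG_SETMASK, &full, &saved))
                  return -1;

          pthread_attr_init(&attrs);
          pthread_attr_setdetachstate(&attrs, PTHREAD_CREATE_DETACHED);
          ret = pthread_create(handle, &attrs, worker, NULL);
          pthread_attr_destroy(&attrs);

          /* the new thread inherited the blocked mask; restore ours */
          pthread_sigmask(SIG_SETMASK, &saved, NULL);

          return ret;
  }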

Signed-off-by: Alexey Budankov <[email protected]>
---
tools/perf/builtin-record.c | 103 +++++++++++++++++++++++++++++++++++-
1 file changed, 102 insertions(+), 1 deletion(-)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index d0b528cde68b..13773739bedc 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -1407,6 +1407,64 @@ static void record__thread_munmap_filtered(struct fdarray *fda, int fd,
perf_mmap__put(map);
}

+static void *record__thread(void *arg)
+{
+ enum thread_msg msg = THREAD_MSG__READY;
+ bool terminate = false;
+ struct fdarray *pollfd;
+ int err, ctlfd_pos;
+
+ thread = arg;
+ thread->tid = syscall(SYS_gettid);
+
+ err = write(thread->comm.ack[1], &msg, sizeof(msg));
+ if (err == -1)
+ pr_err("threads[%d]: failed to notify on start. Error %m", thread->tid);
+
+ pr_debug("threads[%d]: started on cpu=%d\n", thread->tid, sched_getcpu());
+
+ pollfd = &thread->pollfd;
+ ctlfd_pos = thread->ctlfd_pos;
+
+ for (;;) {
+ unsigned long long hits = thread->samples;
+
+ if (record__mmap_read_all(thread->rec, false) < 0 || terminate)
+ break;
+
+ if (hits == thread->samples) {
+
+ err = fdarray__poll(pollfd, -1);
+ /*
+ * Propagate error, only if there's any. Ignore positive
+ * number of returned events and interrupt error.
+ */
+ if (err > 0 || (err < 0 && errno == EINTR))
+ err = 0;
+ thread->waking++;
+
+ if (fdarray__filter(pollfd, POLLERR | POLLHUP,
+ record__thread_munmap_filtered, NULL) == 0)
+ break;
+ }
+
+ if (pollfd->entries[ctlfd_pos].revents & POLLHUP) {
+ terminate = true;
+ close(thread->comm.msg[0]);
+ pollfd->entries[ctlfd_pos].fd = -1;
+ pollfd->entries[ctlfd_pos].events = 0;
+ }
+
+ pollfd->entries[ctlfd_pos].revents = 0;
+ }
+
+ err = write(thread->comm.ack[1], &msg, sizeof(msg));
+ if (err == -1)
+ pr_err("threads[%d]: failed to notify on termination. Error %m", thread->tid);
+
+ return NULL;
+}
+
static void record__init_features(struct record *rec)
{
struct perf_session *session = rec->session;
@@ -1847,13 +1905,56 @@ static int record__terminate_thread(struct thread_data *thread_data)

static int record__start_threads(struct record *rec)
{
+ int t, tt, ret = 0, nr_threads = rec->nr_threads;
struct thread_data *thread_data = rec->thread_data;
+ sigset_t full, mask;
+ pthread_t handle;
+ pthread_attr_t attrs;
+
+ sigfillset(&full);
+ if (sigprocmask(SIG_SETMASK, &full, &mask)) {
+ pr_err("Failed to block signals on threads start. Error: %m\n");
+ return -1;
+ }
+
+ pthread_attr_init(&attrs);
+ pthread_attr_setdetachstate(&attrs, PTHREAD_CREATE_DETACHED);
+
+ for (t = 1; t < nr_threads; t++) {
+ enum thread_msg msg = THREAD_MSG__UNDEFINED;
+
+ pthread_attr_setaffinity_np(&attrs, MMAP_CPU_MASK_BYTES(&(thread_data[t].mask->affinity)),
+ (cpu_set_t *)(thread_data[t].mask->affinity.bits));
+
+ if (pthread_create(&handle, &attrs, record__thread, &thread_data[t])) {
+ for (tt = 1; tt < t; tt++)
+ record__terminate_thread(&thread_data[t]);
+ pr_err("Failed to start threads. Error: %m\n");
+ ret = -1;
+ goto out_err;
+ }
+
+ if (read(thread_data[t].comm.ack[0], &msg, sizeof(msg)) > 0)
+ pr_debug("threads[%d]: sent %s\n", rec->thread_data[t].tid,
+ thread_msg_tags[msg]);
+ }
+
+ if (nr_threads > 1) {
+ sched_setaffinity(0, MMAP_CPU_MASK_BYTES(&thread_data[0].mask->affinity),
+ (cpu_set_t *)thread_data[0].mask->affinity.bits);
+ }

thread = &thread_data[0];

pr_debug("threads[%d]: started on cpu=%d\n", thread->tid, sched_getcpu());

- return 0;
+out_err:
+ if (sigprocmask(SIG_SETMASK, &mask, NULL)) {
+ pr_err("Failed to unblock signals on threads start. Error: %m\n");
+ ret = -1;
+ }
+
+ return ret;
}

static int record__stop_threads(struct record *rec, unsigned long *waking)
--
2.24.1

2020-11-16 12:21:49

by Alexey Budankov

Subject: [PATCH v3 03/12] perf record: introduce thread local variable


Introduce a thread local variable and use it for threaded trace
streaming. Use the thread affinity mask instead of the record
affinity mask in affinity modes. Introduce and use the
evlist__ctlfd_update() function to propagate external control
commands to the global evlist object.

Signed-off-by: Alexey Budankov <[email protected]>
---
tools/perf/builtin-record.c | 137 ++++++++++++++++++++++++------------
tools/perf/util/evlist.c | 16 +++++
tools/perf/util/evlist.h | 1 +
3 files changed, 109 insertions(+), 45 deletions(-)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 765a90e38f69..e41e1cd90168 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -109,6 +109,8 @@ struct thread_data {
u64 bytes_written;
};

+static __thread struct thread_data *thread;
+
struct record {
struct perf_tool tool;
struct record_opts opts;
@@ -130,7 +132,6 @@ struct record {
bool timestamp_boundary;
struct switch_output switch_output;
unsigned long long samples;
- struct mmap_cpu_mask affinity_mask;
unsigned long output_max_size; /* = 0: unlimited */
struct thread_mask *thread_masks;
struct thread_data *thread_data;
@@ -565,7 +566,7 @@ static int record__pushfn(struct mmap *map, void *to, void *bf, size_t size)
bf = map->data;
}

- rec->samples++;
+ thread->samples++;
return record__write(rec, map, bf, size);
}

@@ -1244,16 +1245,23 @@ static struct perf_event_header finished_round_event = {

static void record__adjust_affinity(struct record *rec, struct mmap *map)
{
+ int ret = 0;
+
if (rec->opts.affinity != PERF_AFFINITY_SYS &&
- !bitmap_equal(rec->affinity_mask.bits, map->affinity_mask.bits,
- rec->affinity_mask.nbits)) {
- bitmap_zero(rec->affinity_mask.bits, rec->affinity_mask.nbits);
- bitmap_or(rec->affinity_mask.bits, rec->affinity_mask.bits,
- map->affinity_mask.bits, rec->affinity_mask.nbits);
- sched_setaffinity(0, MMAP_CPU_MASK_BYTES(&rec->affinity_mask),
- (cpu_set_t *)rec->affinity_mask.bits);
- if (verbose == 2)
- mmap_cpu_mask__scnprintf(&rec->affinity_mask, "thread");
+ !bitmap_equal(thread->mask->affinity.bits, map->affinity_mask.bits,
+ thread->mask->affinity.nbits)) {
+ bitmap_zero(thread->mask->affinity.bits, thread->mask->affinity.nbits);
+ bitmap_or(thread->mask->affinity.bits, thread->mask->affinity.bits,
+ map->affinity_mask.bits, thread->mask->affinity.nbits);
+ ret = sched_setaffinity(0, MMAP_CPU_MASK_BYTES(&thread->mask->affinity),
+ (cpu_set_t *)thread->mask->affinity.bits);
+ if (ret)
+ pr_err("threads[%d]: sched_setaffinity() call failed: %m\n", thread->tid);
+ if (verbose == 2) {
+ pr_debug("threads[%d]: addr=", thread->tid);
+ mmap_cpu_mask__scnprintf(&thread->mask->affinity, "thread");
+ pr_debug("threads[%d]: on cpu=%d\n", thread->tid, sched_getcpu());
+ }
}
}

@@ -1291,17 +1299,21 @@ static size_t zstd_compress(struct perf_session *session, void *dst, size_t dst_
static int record__mmap_read_evlist(struct record *rec, struct evlist *evlist,
bool overwrite, bool synch)
{
- u64 bytes_written = rec->bytes_written;
+ u64 bytes_written;
int i;
int rc = 0;
- struct mmap *maps;
+ struct mmap **maps;
+ int nr_mmaps;
int trace_fd = rec->data.file.fd;
off_t off = 0;

if (!evlist)
return 0;

- maps = overwrite ? evlist->overwrite_mmap : evlist->mmap;
+ bytes_written = thread->bytes_written;
+ maps = overwrite ? thread->overwrite_maps : thread->maps;
+ nr_mmaps = thread->nr_mmaps;
+
if (!maps)
return 0;

@@ -1311,9 +1323,9 @@ static int record__mmap_read_evlist(struct record *rec, struct evlist *evlist,
if (record__aio_enabled(rec))
off = record__aio_get_pos(trace_fd);

- for (i = 0; i < evlist->core.nr_mmaps; i++) {
+ for (i = 0; i < nr_mmaps; i++) {
u64 flush = 0;
- struct mmap *map = &maps[i];
+ struct mmap *map = maps[i];

if (map->core.base) {
record__adjust_affinity(rec, map);
@@ -1376,6 +1388,15 @@ static int record__mmap_read_all(struct record *rec, bool synch)
return record__mmap_read_evlist(rec, rec->evlist, true, synch);
}

+static void record__thread_munmap_filtered(struct fdarray *fda, int fd,
+ void *arg __maybe_unused)
+{
+ struct perf_mmap *map = fda->priv[fd].ptr;
+
+ if (map)
+ perf_mmap__put(map);
+}
+
static void record__init_features(struct record *rec)
{
struct perf_session *session = rec->session;
@@ -1797,6 +1818,33 @@ static void hit_auxtrace_snapshot_trigger(struct record *rec)
}
}

+static int record__start_threads(struct record *rec)
+{
+ struct thread_data *thread_data = rec->thread_data;
+
+ thread = &thread_data[0];
+
+ pr_debug("threads[%d]: started on cpu=%d\n", thread->tid, sched_getcpu());
+
+ return 0;
+}
+
+static int record__stop_threads(struct record *rec, unsigned long *waking)
+{
+ int t;
+ struct thread_data *thread_data = rec->thread_data;
+
+ for (t = 0; t < rec->nr_threads; t++) {
+ rec->samples += thread_data[t].samples;
+ *waking += thread_data[t].waking;
+ pr_debug("threads[%d]: samples=%lld, wakes=%ld, trasferred=%ld, compressed=%ld\n",
+ thread_data[t].tid, thread_data[t].samples, thread_data[t].waking,
+ rec->session->bytes_transferred, rec->session->bytes_compressed);
+ }
+
+ return 0;
+}
+
static int __cmd_record(struct record *rec, int argc, const char **argv)
{
int err;
@@ -1904,7 +1952,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)

if (record__open(rec) != 0) {
err = -1;
- goto out_child;
+ goto out_free_threads;
}
session->header.env.comp_mmap_len = session->evlist->core.mmap_len;

@@ -1912,7 +1960,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
err = record__kcore_copy(&session->machines.host, data);
if (err) {
pr_err("ERROR: Failed to copy kcore\n");
- goto out_child;
+ goto out_free_threads;
}
}

@@ -1923,7 +1971,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
bpf__strerror_apply_obj_config(err, errbuf, sizeof(errbuf));
pr_err("ERROR: Apply config to BPF failed: %s\n",
errbuf);
- goto out_child;
+ goto out_free_threads;
}

/*
@@ -1941,11 +1989,11 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
if (data->is_pipe) {
err = perf_header__write_pipe(fd);
if (err < 0)
- goto out_child;
+ goto out_free_threads;
} else {
err = perf_session__write_header(session, rec->evlist, fd, false);
if (err < 0)
- goto out_child;
+ goto out_free_threads;
}

err = -1;
@@ -1953,16 +2001,16 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
&& !perf_header__has_feat(&session->header, HEADER_BUILD_ID)) {
pr_err("Couldn't generate buildids. "
"Use --no-buildid to profile anyway.\n");
- goto out_child;
+ goto out_free_threads;
}

err = record__setup_sb_evlist(rec);
if (err)
- goto out_child;
+ goto out_free_threads;

err = record__synthesize(rec, false);
if (err < 0)
- goto out_child;
+ goto out_free_threads;

if (rec->realtime_prio) {
struct sched_param param;
@@ -1971,10 +2019,13 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
if (sched_setscheduler(0, SCHED_FIFO, &param)) {
pr_err("Could not set realtime priority.\n");
err = -1;
- goto out_child;
+ goto out_free_threads;
}
}

+ if (record__start_threads(rec))
+ goto out_free_threads;
+
/*
* When perf is starting the traced process, all the events
* (apart from group members) have enable_on_exec=1 set,
@@ -2045,7 +2096,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
trigger_ready(&switch_output_trigger);
perf_hooks__invoke_record_start();
for (;;) {
- unsigned long long hits = rec->samples;
+ unsigned long long hits = thread->samples;

/*
* rec->evlist->bkw_mmap_state is possible to be
@@ -2114,20 +2165,26 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
alarm(rec->switch_output.time);
}

- if (hits == rec->samples) {
+ if (hits == thread->samples) {
if (done || draining)
break;
- err = evlist__poll(rec->evlist, -1);
+ err = fdarray__poll(&thread->pollfd, -1);
/*
* Propagate error, only if there's any. Ignore positive
* number of returned events and interrupt error.
*/
if (err > 0 || (err < 0 && errno == EINTR))
err = 0;
- waking++;
+ thread->waking++;

- if (evlist__filter_pollfd(rec->evlist, POLLERR | POLLHUP) == 0)
+ if (fdarray__filter(&thread->pollfd, POLLERR | POLLHUP,
+ record__thread_munmap_filtered, NULL) == 0)
draining = true;
+
+ if (thread->ctlfd_pos != -1) {
+ evlist__ctlfd_update(rec->evlist,
+ &thread->pollfd.entries[thread->ctlfd_pos]);
+ }
}

if (evlist__ctlfd_process(rec->evlist, &cmd) > 0) {
@@ -2175,18 +2232,20 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
goto out_child;
}

- if (!quiet)
- fprintf(stderr, "[ perf record: Woken up %ld times to write data ]\n", waking);
-
if (target__none(&rec->opts.target))
record__synthesize_workload(rec, true);

out_child:
+ record__stop_threads(rec, &waking);
+out_free_threads:
record__free_thread_data(rec);
evlist__finalize_ctlfd(rec->evlist);
record__mmap_read_all(rec, true);
record__aio_mmap_read_sync(rec);

+ if (!quiet)
+ fprintf(stderr, "[ perf record: Woken up %ld times to write data ]\n", waking);
+
if (rec->session->bytes_transferred && rec->session->bytes_compressed) {
ratio = (float)rec->session->bytes_transferred/(float)rec->session->bytes_compressed;
session->header.env.comp_ratio = ratio + 0.5;
@@ -2995,17 +3054,6 @@ int cmd_record(int argc, const char **argv)

symbol__init(NULL);

- if (rec->opts.affinity != PERF_AFFINITY_SYS) {
- rec->affinity_mask.nbits = cpu__max_cpu();
- rec->affinity_mask.bits = bitmap_alloc(rec->affinity_mask.nbits);
- if (!rec->affinity_mask.bits) {
- pr_err("Failed to allocate thread mask for %zd cpus\n", rec->affinity_mask.nbits);
- err = -ENOMEM;
- goto out_opts;
- }
- pr_debug2("thread mask[%zd]: empty\n", rec->affinity_mask.nbits);
- }
-
err = record__auxtrace_init(rec);
if (err)
goto out;
@@ -3134,7 +3182,6 @@ int cmd_record(int argc, const char **argv)

err = __cmd_record(&record, argc, argv);
out:
- bitmap_free(rec->affinity_mask.bits);
evlist__delete(rec->evlist);
symbol__exit();
auxtrace_record__free(rec->itr);
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 8bdf3d2c907c..758a4896fedd 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -1970,6 +1970,22 @@ int evlist__ctlfd_process(struct evlist *evlist, enum evlist_ctl_cmd *cmd)
return err;
}

+int evlist__ctlfd_update(struct evlist *evlist, struct pollfd *update)
+{
+ int ctlfd_pos = evlist->ctl_fd.pos;
+ struct pollfd *entries = evlist->core.pollfd.entries;
+
+ if (!evlist__ctlfd_initialized(evlist))
+ return 0;
+
+ if (entries[ctlfd_pos].fd != update->fd ||
+ entries[ctlfd_pos].events != update->events)
+ return -1;
+
+ entries[ctlfd_pos].revents = update->revents;
+ return 0;
+}
+
struct evsel *evlist__find_evsel(struct evlist *evlist, int idx)
{
struct evsel *evsel;
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index e1a450322bc5..9b73d6ccf066 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -380,6 +380,7 @@ void evlist__close_control(int ctl_fd, int ctl_fd_ack, bool *ctl_fd_close);
int evlist__initialize_ctlfd(struct evlist *evlist, int ctl_fd, int ctl_fd_ack);
int evlist__finalize_ctlfd(struct evlist *evlist);
bool evlist__ctlfd_initialized(struct evlist *evlist);
+int evlist__ctlfd_update(struct evlist *evlist, struct pollfd *update);
int evlist__ctlfd_process(struct evlist *evlist, enum evlist_ctl_cmd *cmd);
int evlist__ctlfd_ack(struct evlist *evlist);

--
2.24.1


2020-11-16 12:23:03

by Alexey Budankov

Subject: [PATCH v3 07/12] perf record: init data file at mmap buffer object


Initialize the data files referenced by the mmap buffer objects so
trace data can be written into several data files located in the
data directory.

Signed-off-by: Alexey Budankov <[email protected]>
---
tools/perf/builtin-record.c | 41 ++++++++++++++++++++++++++++++-------
tools/perf/util/record.h | 1 +
2 files changed, 35 insertions(+), 7 deletions(-)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 779676531edf..f5e5175da6a1 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -158,6 +158,11 @@ static const char *affinity_tags[PERF_AFFINITY_MAX] = {
"SYS", "NODE", "CPU"
};

+static int record__threads_enabled(struct record *rec)
+{
+ return rec->opts.threads_spec;
+}
+
static bool switch_output_signal(struct record *rec)
{
return rec->switch_output.signal &&
@@ -1060,7 +1065,7 @@ static int record__free_thread_data(struct record *rec)
static int record__mmap_evlist(struct record *rec,
struct evlist *evlist)
{
- int ret;
+ int i, ret;
struct record_opts *opts = &rec->opts;
bool auxtrace_overwrite = opts->auxtrace_snapshot_mode ||
opts->auxtrace_sample_mode;
@@ -1099,6 +1104,18 @@ static int record__mmap_evlist(struct record *rec,
if (ret)
return ret;

+ if (record__threads_enabled(rec)) {
+ ret = perf_data__create_dir(&rec->data, evlist->core.nr_mmaps);
+ if (ret)
+ return ret;
+ for (i = 0; i < evlist->core.nr_mmaps; i++) {
+ if (evlist->mmap)
+ evlist->mmap[i].file = &rec->data.dir.files[i];
+ if (evlist->overwrite_mmap)
+ evlist->overwrite_mmap[i].file = &rec->data.dir.files[i];
+ }
+ }
+
return 0;
}

@@ -1400,8 +1417,12 @@ static int record__mmap_read_evlist(struct record *rec, struct evlist *evlist,
/*
* Mark the round finished in case we wrote
* at least one event.
+ *
+ * No need for round events in directory mode,
+ * because per-cpu maps and files have data
+ * sorted by kernel.
*/
- if (bytes_written != rec->bytes_written)
+ if (!record__threads_enabled(rec) && bytes_written != rec->bytes_written)
rc = record__write(rec, NULL, &finished_round_event, sizeof(finished_round_event));

if (overwrite)
@@ -1514,7 +1535,9 @@ static void record__init_features(struct record *rec)
if (!rec->opts.use_clockid)
perf_header__clear_feat(&session->header, HEADER_CLOCK_DATA);

- perf_header__clear_feat(&session->header, HEADER_DIR_FORMAT);
+ if (!record__threads_enabled(rec))
+ perf_header__clear_feat(&session->header, HEADER_DIR_FORMAT);
+
if (!record__comp_enabled(rec))
perf_header__clear_feat(&session->header, HEADER_COMPRESSED);

@@ -1525,15 +1548,21 @@ static void
record__finish_output(struct record *rec)
{
struct perf_data *data = &rec->data;
- int fd = perf_data__fd(data);
+ int i, fd = perf_data__fd(data);

if (data->is_pipe)
return;

rec->session->header.data_size += rec->bytes_written;
data->file.size = lseek(perf_data__fd(data), 0, SEEK_CUR);
+ if (record__threads_enabled(rec)) {
+ for (i = 0; i < data->dir.nr; i++)
+ data->dir.files[i].size = lseek(data->dir.files[i].fd, 0, SEEK_CUR);
+ }

if (!rec->no_buildid) {
+ /* this will be recalculated during process_buildids() */
+ rec->samples = 0;
process_buildids(rec);

if (rec->buildid_all)
@@ -2438,8 +2467,6 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
status = err;

record__synthesize(rec, true);
- /* this will be recalculated during process_buildids() */
- rec->samples = 0;

if (!err) {
if (!rec->timestamp_filename) {
@@ -3179,7 +3206,7 @@ int cmd_record(int argc, const char **argv)

}

- if (rec->opts.kcore)
+ if (rec->opts.kcore || record__threads_enabled(rec))
rec->data.is_dir = true;

if (rec->opts.comp_level != 0) {
diff --git a/tools/perf/util/record.h b/tools/perf/util/record.h
index 266760ac9143..9c13a39cc58f 100644
--- a/tools/perf/util/record.h
+++ b/tools/perf/util/record.h
@@ -74,6 +74,7 @@ struct record_opts {
int ctl_fd;
int ctl_fd_ack;
bool ctl_fd_close;
+ int threads_spec;
};

extern const char * const *record_usage;
--
2.24.1

2020-11-16 12:24:08

by Alexey Budankov

Subject: [PATCH v3 08/12] perf record: introduce --threads=<spec> command line option


Provide the --threads option in the perf record command line
interface. The option can have a value in the form of masks that
specify the cpus to be monitored with data streaming threads and
their layout in the system topology. The masks can be filtered using
the cpu mask provided via the -C option.

The specification value can be a user defined list of masks. Masks
separated by colon define the cpus to be monitored by one thread,
and the affinity mask of that thread is separated by slash. For
example:
<cpus mask 1>/<affinity mask 1>:<cpus mask 2>/<affinity mask 2>
specifies a parallel threads layout that consists of two threads
with the corresponding assigned cpus to be monitored.

The specification value can be a string, e.g. "cpu", "core", "socket"
or "numa", meaning creation of a data streaming thread for every cpu,
core, socket or numa node to monitor distinct cpus or cpus grouped by
core, socket or numa node.

The option provided with no or empty value defaults to the per-cpu
parallel threads layout, creating a data streaming thread for every
cpu being monitored.
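
For example (<workload> is just a placeholder for the traced command):

  perf record --threads=core -- <workload>     # one streaming thread per core
  perf record --threads=socket -- <workload>   # one streaming thread per socket
  perf record --threads -- <workload>          # default: one thread per monitored cpu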

Feature design and implementation are based on prototypes [1], [2].

[1] git clone https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git -b perf/record_threads
[2] https://lore.kernel.org/lkml/[email protected]/

Suggested-by: Jiri Olsa <[email protected]>
Suggested-by: Namhyung Kim <[email protected]>
Signed-off-by: Alexey Budankov <[email protected]>
---
tools/include/linux/bitmap.h | 11 ++
tools/lib/bitmap.c | 14 ++
tools/perf/builtin-record.c | 308 ++++++++++++++++++++++++++++++++++-
tools/perf/util/record.h | 1 +
4 files changed, 332 insertions(+), 2 deletions(-)

diff --git a/tools/include/linux/bitmap.h b/tools/include/linux/bitmap.h
index 477a1cae513f..2eb1d1084543 100644
--- a/tools/include/linux/bitmap.h
+++ b/tools/include/linux/bitmap.h
@@ -18,6 +18,8 @@ int __bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
int __bitmap_equal(const unsigned long *bitmap1,
const unsigned long *bitmap2, unsigned int bits);
void bitmap_clear(unsigned long *map, unsigned int start, int len);
+int __bitmap_intersects(const unsigned long *bitmap1,
+ const unsigned long *bitmap2, unsigned int bits);

#define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) & (BITS_PER_LONG - 1)))

@@ -178,4 +180,13 @@ static inline int bitmap_equal(const unsigned long *src1,
return __bitmap_equal(src1, src2, nbits);
}

+static inline int bitmap_intersects(const unsigned long *src1,
+ const unsigned long *src2, unsigned int nbits)
+{
+ if (small_const_nbits(nbits))
+ return ((*src1 & *src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0;
+ else
+ return __bitmap_intersects(src1, src2, nbits);
+}
+
#endif /* _PERF_BITOPS_H */
diff --git a/tools/lib/bitmap.c b/tools/lib/bitmap.c
index 5043747ef6c5..3cc3a5b43bb5 100644
--- a/tools/lib/bitmap.c
+++ b/tools/lib/bitmap.c
@@ -86,3 +86,17 @@ int __bitmap_equal(const unsigned long *bitmap1,

return 1;
}
+
+int __bitmap_intersects(const unsigned long *bitmap1,
+ const unsigned long *bitmap2, unsigned int bits)
+{
+ unsigned int k, lim = bits/BITS_PER_LONG;
+ for (k = 0; k < lim; ++k)
+ if (bitmap1[k] & bitmap2[k])
+ return 1;
+
+ if (bits % BITS_PER_LONG)
+ if ((bitmap1[k] & bitmap2[k]) & BITMAP_LAST_WORD_MASK(bits))
+ return 1;
+ return 0;
+}
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index f5e5175da6a1..fd0587d636b2 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -49,6 +49,7 @@
#include "util/clockid.h"
#include "asm/bug.h"
#include "perf.h"
+#include "cputopo.h"

#include <errno.h>
#include <inttypes.h>
@@ -121,6 +122,20 @@ static const char *thread_msg_tags[THREAD_MSG__MAX] = {
"UNDEFINED", "READY"
};

+enum thread_spec {
+ THREAD_SPEC__UNDEFINED = 0,
+ THREAD_SPEC__CPU,
+ THREAD_SPEC__CORE,
+ THREAD_SPEC__SOCKET,
+ THREAD_SPEC__NUMA,
+ THREAD_SPEC__USER,
+ THREAD_SPEC__MAX,
+};
+
+static const char *thread_spec_tags[THREAD_SPEC__MAX] = {
+ "undefined", "cpu", "core", "socket", "numa", "user"
+};
+
struct record {
struct perf_tool tool;
struct record_opts opts;
@@ -2660,6 +2675,64 @@ static void record__thread_mask_free(struct thread_mask *mask)
record__mmap_cpu_mask_free(&mask->affinity);
}

+static int record__thread_mask_or(struct thread_mask *dest, struct thread_mask *src1,
+ struct thread_mask *src2)
+{
+ if (src1->maps.nbits != src2->maps.nbits || src1->affinity.nbits != src2->affinity.nbits ||
+ dest->maps.nbits != src1->maps.nbits || dest->affinity.nbits != src1->affinity.nbits)
+ return -EINVAL;
+
+ bitmap_or(dest->maps.bits, src1->maps.bits, src2->maps.bits, src1->maps.nbits);
+ bitmap_or(dest->affinity.bits, src1->affinity.bits, src2->affinity.bits, src1->affinity.nbits);
+
+ return 0;
+}
+
+static int record__thread_mask_intersects(struct thread_mask *mask_1, struct thread_mask *mask_2)
+{
+ int res1, res2;
+
+ if (mask_1->maps.nbits != mask_2->maps.nbits || mask_1->affinity.nbits != mask_2->affinity.nbits)
+ return -EINVAL;
+
+ res1 = bitmap_intersects(mask_1->maps.bits, mask_2->maps.bits, mask_1->maps.nbits);
+ res2 = bitmap_intersects(mask_1->affinity.bits, mask_2->affinity.bits, mask_1->affinity.nbits);
+ if (res1 || res2)
+ return 1;
+
+ return 0;
+}
+
+static int record__parse_threads(const struct option *opt, const char *str, int unset)
+{
+ int s;
+ struct record_opts *opts = opt->value;
+
+ if (unset || !str || !strlen(str)) {
+ opts->threads_spec = THREAD_SPEC__CPU;
+ } else {
+ for (s = 1; s < THREAD_SPEC__MAX; s++) {
+ if (s == THREAD_SPEC__USER) {
+ opts->threads_user_spec = strdup(str);
+ opts->threads_spec = THREAD_SPEC__USER;
+ break;
+ }
+ if (!strncasecmp(str, thread_spec_tags[s], strlen(thread_spec_tags[s]))) {
+ opts->threads_spec = s;
+ break;
+ }
+ }
+ }
+
+ pr_debug("threads_spec: %s", thread_spec_tags[opts->threads_spec]);
+ if (opts->threads_spec == THREAD_SPEC__USER)
+ pr_debug("=[%s]", opts->threads_user_spec);
+ pr_debug("\n");
+
+ return 0;
+}
+
static int parse_output_max_size(const struct option *opt,
const char *str, int unset)
{
@@ -3084,6 +3157,9 @@ static struct option __record_options[] = {
"\t\t\t Optionally send control command completion ('ack\\n') to ack-fd descriptor.\n"
"\t\t\t Alternatively, ctl-fifo / ack-fifo will be opened and used as ctl-fd / ack-fd.",
parse_control_option),
+ OPT_CALLBACK_OPTARG(0, "threads", &record.opts, NULL, "spec",
+ "write collected trace data into several data files using parallel threads",
+ record__parse_threads),
OPT_END()
};

@@ -3097,6 +3173,17 @@ static void record__mmap_cpu_mask_init(struct mmap_cpu_mask *mask, struct perf_c
set_bit(cpus->map[c], mask->bits);
}

+static void record__mmap_cpu_mask_init_spec(struct mmap_cpu_mask *mask, char *mask_spec)
+{
+ struct perf_cpu_map *cpus;
+
+ cpus = perf_cpu_map__new(mask_spec);
+ if (cpus) {
+ record__mmap_cpu_mask_init(mask, cpus);
+ perf_cpu_map__put(cpus);
+ }
+}
+
static int record__alloc_thread_masks(struct record *rec, int nr_threads, int nr_bits)
{
int t, ret;
@@ -3116,6 +3203,196 @@ static int record__alloc_thread_masks(struct record *rec, int nr_threads, int nr

return 0;
}
+
+static int record__init_thread_cpu_masks(struct record *rec, struct perf_cpu_map *cpus)
+{
+ int t, ret, nr_cpus = perf_cpu_map__nr(cpus);
+
+ ret = record__alloc_thread_masks(rec, nr_cpus, cpu__max_cpu());
+ if (ret)
+ return ret;
+
+ rec->nr_threads = nr_cpus;
+ pr_debug("threads: nr_threads=%d\n", rec->nr_threads);
+
+ for (t = 0; t < rec->nr_threads; t++) {
+ set_bit(cpus->map[t], rec->thread_masks[t].maps.bits);
+ pr_debug("thread_masks[%d]: maps mask [%d]\n", t, cpus->map[t]);
+ set_bit(cpus->map[t], rec->thread_masks[t].affinity.bits);
+ pr_debug("thread_masks[%d]: affinity mask [%d]\n", t, cpus->map[t]);
+ }
+
+ return 0;
+}
+
+static int record__init_thread_masks_spec(struct record *rec, struct perf_cpu_map *cpus,
+ char **maps_spec, char **affinity_spec, u32 nr_spec)
+{
+ u32 s;
+ int ret, nr_threads = 0;
+ struct mmap_cpu_mask cpus_mask;
+ struct thread_mask thread_mask, full_mask;
+
+ ret = record__mmap_cpu_mask_alloc(&cpus_mask, cpu__max_cpu());
+ if (ret)
+ return ret;
+ record__mmap_cpu_mask_init(&cpus_mask, cpus);
+ ret = record__thread_mask_alloc(&thread_mask, cpu__max_cpu());
+ if (ret)
+ return ret;
+ ret = record__thread_mask_alloc(&full_mask, cpu__max_cpu());
+ if (ret)
+ return ret;
+ record__thread_mask_clear(&full_mask);
+
+ for (s = 0; s < nr_spec; s++) {
+ record__thread_mask_clear(&thread_mask);
+
+ record__mmap_cpu_mask_init_spec(&thread_mask.maps, maps_spec[s]);
+ record__mmap_cpu_mask_init_spec(&thread_mask.affinity, affinity_spec[s]);
+
+ if (!bitmap_and(thread_mask.maps.bits, thread_mask.maps.bits,
+ cpus_mask.bits, thread_mask.maps.nbits) ||
+ !bitmap_and(thread_mask.affinity.bits, thread_mask.affinity.bits,
+ cpus_mask.bits, thread_mask.affinity.nbits))
+ continue;
+
+ ret = record__thread_mask_intersects(&thread_mask, &full_mask);
+ if (ret)
+ return ret;
+ record__thread_mask_or(&full_mask, &full_mask, &thread_mask);
+
+ rec->thread_masks = realloc(rec->thread_masks,
+ (nr_threads + 1) * sizeof(struct thread_mask));
+ if (!rec->thread_masks) {
+ pr_err("Failed to allocate thread masks\n");
+ return -ENOMEM;
+ }
+ rec->thread_masks[nr_threads] = thread_mask;
+ pr_debug("thread_masks[%d]: addr=", nr_threads);
+ mmap_cpu_mask__scnprintf(&rec->thread_masks[nr_threads].maps, "maps");
+ pr_debug("thread_masks[%d]: addr=", nr_threads);
+ mmap_cpu_mask__scnprintf(&rec->thread_masks[nr_threads].affinity, "affinity");
+ nr_threads++;
+ ret = record__thread_mask_alloc(&thread_mask, cpu__max_cpu());
+ if (ret)
+ return ret;
+ }
+
+ rec->nr_threads = nr_threads;
+ pr_debug("threads: nr_threads=%d\n", rec->nr_threads);
+
+ record__mmap_cpu_mask_free(&cpus_mask);
+ record__thread_mask_free(&thread_mask);
+ record__thread_mask_free(&full_mask);
+
+ return 0;
+}
+
+static int record__init_thread_core_masks(struct record *rec, struct perf_cpu_map *cpus)
+{
+ int ret;
+ struct cpu_topology *topo;
+
+ topo = cpu_topology__new();
+ if (!topo)
+ return -EINVAL;
+
+ ret = record__init_thread_masks_spec(rec, cpus, topo->thread_siblings,
+ topo->thread_siblings, topo->thread_sib);
+ cpu_topology__delete(topo);
+
+ return ret;
+}
+
+static int record__init_thread_socket_masks(struct record *rec, struct perf_cpu_map *cpus)
+{
+ int ret;
+ struct cpu_topology *topo;
+
+ topo = cpu_topology__new();
+ if (!topo)
+ return -EINVAL;
+
+ ret = record__init_thread_masks_spec(rec, cpus, topo->core_siblings,
+ topo->core_siblings, topo->core_sib);
+ cpu_topology__delete(topo);
+
+ return ret;
+}
+
+static int record__init_thread_numa_masks(struct record *rec, struct perf_cpu_map *cpus)
+{
+ u32 s;
+ int ret;
+ char **spec;
+ struct numa_topology *topo;
+
+ topo = numa_topology__new();
+ if (!topo)
+ return -EINVAL;
+ spec = zalloc(topo->nr * sizeof(char *));
+ if (!spec)
+ return -ENOMEM;
+ for (s = 0; s < topo->nr; s++)
+ spec[s] = topo->nodes[s].cpus;
+
+ ret = record__init_thread_masks_spec(rec, cpus, spec, spec, topo->nr);
+
+ zfree(&spec);
+
+ numa_topology__delete(topo);
+
+ return ret;
+}
+
+static int record__init_thread_user_masks(struct record *rec, struct perf_cpu_map *cpus)
+{
+ int t, ret;
+ u32 s, nr_spec = 0;
+ char **maps_spec = NULL, **affinity_spec = NULL;
+ char *spec, *spec_ptr, *user_spec, *mask, *mask_ptr;
+
+ for (t = 0, user_spec = (char *)rec->opts.threads_user_spec; ;t++, user_spec = NULL) {
+ spec = strtok_r(user_spec, ":", &spec_ptr);
+ if (spec == NULL)
+ break;
+ pr_debug(" spec[%d]: %s\n", t, spec);
+ mask = strtok_r(spec, "/", &mask_ptr);
+ if (mask == NULL)
+ break;
+ pr_debug(" maps mask: %s\n", mask);
+ maps_spec = realloc(maps_spec, (nr_spec + 1) * sizeof(char *));
+ if (!maps_spec) {
+ pr_err("Failed to realloc maps_spec\n");
+ return -ENOMEM;
+ }
+ maps_spec[nr_spec] = strdup(mask);
+ mask = strtok_r(NULL, "/", &mask_ptr);
+ if (mask == NULL)
+ break;
+ pr_debug(" affinity mask: %s\n", mask);
+ affinity_spec = realloc(affinity_spec, (nr_spec + 1) * sizeof(char *));
+ if (!affinity_spec) {
+ pr_err("Failed to realloc affinity_spec\n");
+ return -ENOMEM;
+ }
+ affinity_spec[nr_spec] = strdup(mask);
+ nr_spec++;
+ }
+
+ ret = record__init_thread_masks_spec(rec, cpus, maps_spec, affinity_spec, nr_spec);
+
+ for (s = 0; s < nr_spec; s++) {
+ free(maps_spec[s]);
+ free(affinity_spec[s]);
+ }
+ free(affinity_spec);
+ free(maps_spec);
+
+ return ret;
+}
+
static int record__init_thread_default_masks(struct record *rec, struct perf_cpu_map *cpus)
{
int ret;
@@ -3133,9 +3410,33 @@ static int record__init_thread_default_masks(struct record *rec, struct perf_cpu

static int record__init_thread_masks(struct record *rec)
{
+ int ret = 0;
struct perf_cpu_map *cpus = rec->evlist->core.cpus;

- return record__init_thread_default_masks(rec, cpus);
+ if (!record__threads_enabled(rec))
+ return record__init_thread_default_masks(rec, cpus);
+
+ switch (rec->opts.threads_spec) {
+ case THREAD_SPEC__CPU:
+ ret = record__init_thread_cpu_masks(rec, cpus);
+ break;
+ case THREAD_SPEC__CORE:
+ ret = record__init_thread_core_masks(rec, cpus);
+ break;
+ case THREAD_SPEC__SOCKET:
+ ret = record__init_thread_socket_masks(rec, cpus);
+ break;
+ case THREAD_SPEC__NUMA:
+ ret = record__init_thread_numa_masks(rec, cpus);
+ break;
+ case THREAD_SPEC__USER:
+ ret = record__init_thread_user_masks(rec, cpus);
+ break;
+ default:
+ break;
+ }
+
+ return ret;
}

static int record__fini_thread_masks(struct record *rec)
@@ -3361,7 +3662,10 @@ int cmd_record(int argc, const char **argv)

err = record__init_thread_masks(rec);
if (err) {
- pr_err("record__init_thread_masks failed, error %d\n", err);
+ if (err > 0)
+ pr_err("ERROR: parallel data streaming masks (--threads) intersect.\n");
+ else
+ pr_err("record__init_thread_masks failed, error %d\n", err);
goto out;
}

diff --git a/tools/perf/util/record.h b/tools/perf/util/record.h
index 9c13a39cc58f..7f64ff5da2b2 100644
--- a/tools/perf/util/record.h
+++ b/tools/perf/util/record.h
@@ -75,6 +75,7 @@ struct record_opts {
int ctl_fd_ack;
bool ctl_fd_close;
int threads_spec;
+ const char *threads_user_spec;
};

extern const char * const *record_usage;
--
2.24.1


2020-11-16 12:24:09

by Alexey Budankov

Subject: [PATCH v3 06/12] perf record: introduce data file at mmap buffer object


Introduce data file and compressor objects into the mmap object so
they can be used to process and store the data stream from the
corresponding kernel data buffer. Introduce bytes_transferred and
bytes_compressed stats so they capture statistics for the related
data buffer transfers. Make use of the introduced per-mmap file,
compressor and stats when they are initialized and available.

Signed-off-by: Alexey Budankov <[email protected]>
---
tools/perf/builtin-record.c | 64 +++++++++++++++++++++++++++++--------
tools/perf/util/mmap.c | 6 ++++
tools/perf/util/mmap.h | 6 ++++
3 files changed, 63 insertions(+), 13 deletions(-)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 13773739bedc..779676531edf 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -188,11 +188,19 @@ static int record__write(struct record *rec, struct mmap *map __maybe_unused,
{
struct perf_data_file *file = &rec->session->data->file;

+ if (map && map->file)
+ file = map->file;
+
if (perf_data_file__write(file, bf, size) < 0) {
pr_err("failed to write perf data, error: %m\n");
return -1;
}

+ if (map && map->file) {
+ map->bytes_written += size;
+ return 0;
+ }
+
rec->bytes_written += size;

if (record__output_max_size_exceeded(rec) && !done) {
@@ -210,8 +218,8 @@ static int record__write(struct record *rec, struct mmap *map __maybe_unused,

static int record__aio_enabled(struct record *rec);
static int record__comp_enabled(struct record *rec);
-static size_t zstd_compress(struct perf_session *session, void *dst, size_t dst_size,
- void *src, size_t src_size);
+static size_t zstd_compress(struct zstd_data *data,
+ void *dst, size_t dst_size, void *src, size_t src_size);

#ifdef HAVE_AIO_SUPPORT
static int record__aio_write(struct aiocb *cblock, int trace_fd,
@@ -345,9 +353,13 @@ static int record__aio_pushfn(struct mmap *map, void *to, void *buf, size_t size
*/

if (record__comp_enabled(aio->rec)) {
- size = zstd_compress(aio->rec->session, aio->data + aio->size,
- mmap__mmap_len(map) - aio->size,
+ struct zstd_data *zstd_data = &aio->rec->session->zstd_data;
+
+ aio->rec->session->bytes_transferred += size;
+ size = zstd_compress(zstd_data,
+ aio->data + aio->size, mmap__mmap_len(map) - aio->size,
buf, size);
+ aio->rec->session->bytes_compressed += size;
} else {
memcpy(aio->data + aio->size, buf, size);
}
@@ -572,8 +584,22 @@ static int record__pushfn(struct mmap *map, void *to, void *bf, size_t size)
struct record *rec = to;

if (record__comp_enabled(rec)) {
- size = zstd_compress(rec->session, map->data, mmap__mmap_len(map), bf, size);
+ struct zstd_data *zstd_data = &rec->session->zstd_data;
+
+ if (map->file) {
+ zstd_data = &map->zstd_data;
+ map->bytes_transferred += size;
+ } else {
+ rec->session->bytes_transferred += size;
+ }
+
+ size = zstd_compress(zstd_data, map->data, mmap__mmap_len(map), bf, size);
bf = map->data;
+
+ if (map->file)
+ map->bytes_compressed += size;
+ else
+ rec->session->bytes_compressed += size;
}

thread->samples++;
@@ -1291,18 +1317,15 @@ static size_t process_comp_header(void *record, size_t increment)
return size;
}

-static size_t zstd_compress(struct perf_session *session, void *dst, size_t dst_size,
+static size_t zstd_compress(struct zstd_data *zstd_data, void *dst, size_t dst_size,
void *src, size_t src_size)
{
size_t compressed;
size_t max_record_size = PERF_SAMPLE_MAX_SIZE - sizeof(struct perf_record_compressed) - 1;

- compressed = zstd_compress_stream_to_records(&session->zstd_data, dst, dst_size, src, src_size,
+ compressed = zstd_compress_stream_to_records(zstd_data, dst, dst_size, src, src_size,
max_record_size, process_comp_header);

- session->bytes_transferred += src_size;
- session->bytes_compressed += compressed;
-
return compressed;
}

@@ -1959,8 +1982,9 @@ static int record__start_threads(struct record *rec)

static int record__stop_threads(struct record *rec, unsigned long *waking)
{
- int t;
+ int t, tm;
struct thread_data *thread_data = rec->thread_data;
+ u64 bytes_written = 0, bytes_transferred = 0, bytes_compressed = 0;

for (t = 1; t < rec->nr_threads; t++)
record__terminate_thread(&thread_data[t]);
@@ -1968,9 +1992,23 @@ static int record__stop_threads(struct record *rec, unsigned long *waking)
for (t = 0; t < rec->nr_threads; t++) {
rec->samples += thread_data[t].samples;
*waking += thread_data[t].waking;
- pr_debug("threads[%d]: samples=%lld, wakes=%ld, trasferred=%ld, compressed=%ld\n",
+ for (tm = 0; tm < thread_data[t].nr_mmaps; tm++) {
+ if (thread_data[t].maps) {
+ bytes_transferred += thread_data[t].maps[tm]->bytes_transferred;
+ bytes_compressed += thread_data[t].maps[tm]->bytes_compressed;
+ bytes_written += thread_data[t].maps[tm]->bytes_written;
+ }
+ if (thread_data[t].overwrite_maps) {
+ bytes_transferred += thread_data[t].overwrite_maps[tm]->bytes_transferred;
+ bytes_compressed += thread_data[t].overwrite_maps[tm]->bytes_compressed;
+ bytes_written += thread_data[t].overwrite_maps[tm]->bytes_written;
+ }
+ }
+ rec->session->bytes_transferred += bytes_transferred;
+ rec->session->bytes_compressed += bytes_compressed;
+ pr_debug("threads[%d]: samples=%lld, wakes=%ld, transferred=%ld, compressed=%ld, written=%ld\n",
thread_data[t].tid, thread_data[t].samples, thread_data[t].waking,
- rec->session->bytes_transferred, rec->session->bytes_compressed);
+ bytes_transferred, bytes_compressed, bytes_written);
}

return 0;
diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index ab7108d22428..a2c5e4237592 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -230,6 +230,8 @@ void mmap__munmap(struct mmap *map)
{
bitmap_free(map->affinity_mask.bits);

+ zstd_fini(&map->zstd_data);
+
perf_mmap__aio_munmap(map);
if (map->data != NULL) {
munmap(map->data, mmap__mmap_len(map));
@@ -291,6 +293,10 @@ int mmap__mmap(struct mmap *map, struct mmap_params *mp, int fd, int cpu)
map->core.flush = mp->flush;

map->comp_level = mp->comp_level;
+ if (zstd_init(&map->zstd_data, map->comp_level)) {
+ pr_debug2("failed to init mmap compressor, error %d\n", errno);
+ return -1;
+ }

if (map->comp_level && !perf_mmap__aio_enabled(map)) {
map->data = mmap(NULL, mmap__mmap_len(map), PROT_READ|PROT_WRITE,
diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h
index 9d5f589f02ae..c04ca4b5adf5 100644
--- a/tools/perf/util/mmap.h
+++ b/tools/perf/util/mmap.h
@@ -13,6 +13,7 @@
#endif
#include "auxtrace.h"
#include "event.h"
+#include "util/compress.h"

struct aiocb;

@@ -43,6 +44,11 @@ struct mmap {
struct mmap_cpu_mask affinity_mask;
void *data;
int comp_level;
+ struct perf_data_file *file;
+ struct zstd_data zstd_data;
+ u64 bytes_transferred;
+ u64 bytes_compressed;
+ u64 bytes_written;
};

struct mmap_params {
--
2.24.1


2020-11-16 12:25:52

by Alexey Budankov

[permalink] [raw]
Subject: [PATCH v3 10/12] perf report: output data file name in raw trace dump


Print the path and name of a data file into the raw dump (-D) as
<file_offset>@<path/file>. Print the offset of the PERF_RECORD_COMPRESSED
record instead of zero for decompressed records:
<file_offset>@perf.data [0x30]: event: 9
or
<file_offset>@perf.data/data.7 [0x30]: event: 9

Acked-by: Namhyung Kim <[email protected]>
Signed-off-by: Alexey Budankov <[email protected]>
---
tools/perf/builtin-inject.c | 3 +-
tools/perf/util/ordered-events.h | 1 +
tools/perf/util/session.c | 77 +++++++++++++++++++-------------
tools/perf/util/session.h | 1 +
tools/perf/util/tool.h | 3 +-
5 files changed, 52 insertions(+), 33 deletions(-)

diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
index 452a75fe68e5..037f8a98220c 100644
--- a/tools/perf/builtin-inject.c
+++ b/tools/perf/builtin-inject.c
@@ -106,7 +106,8 @@ static int perf_event__repipe_op2_synth(struct perf_session *session,

static int perf_event__repipe_op4_synth(struct perf_session *session,
union perf_event *event,
- u64 data __maybe_unused)
+ u64 data __maybe_unused,
+ const char *str __maybe_unused)
{
return perf_event__repipe_synth(session->tool, event);
}
diff --git a/tools/perf/util/ordered-events.h b/tools/perf/util/ordered-events.h
index 75345946c4b9..42c9764c6b5b 100644
--- a/tools/perf/util/ordered-events.h
+++ b/tools/perf/util/ordered-events.h
@@ -9,6 +9,7 @@ struct perf_sample;
struct ordered_event {
u64 timestamp;
u64 file_offset;
+ const char *file_path;
union perf_event *event;
struct list_head list;
};
diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
index 098080287c68..3c2fafb3a04d 100644
--- a/tools/perf/util/session.c
+++ b/tools/perf/util/session.c
@@ -36,7 +36,8 @@

#ifdef HAVE_ZSTD_SUPPORT
static int perf_session__process_compressed_event(struct perf_session *session,
- union perf_event *event, u64 file_offset)
+ union perf_event *event, u64 file_offset,
+ const char *file_path)
{
void *src;
size_t decomp_size, src_size;
@@ -58,6 +59,7 @@ static int perf_session__process_compressed_event(struct perf_session *session,
}

decomp->file_pos = file_offset;
+ decomp->file_path = file_path;
decomp->mmap_len = mmap_len;
decomp->head = 0;

@@ -98,7 +100,8 @@ static int perf_session__process_compressed_event(struct perf_session *session,
static int perf_session__deliver_event(struct perf_session *session,
union perf_event *event,
struct perf_tool *tool,
- u64 file_offset);
+ u64 file_offset,
+ const char *file_path);

static int perf_session__open(struct perf_session *session)
{
@@ -180,7 +183,8 @@ static int ordered_events__deliver_event(struct ordered_events *oe,
ordered_events);

return perf_session__deliver_event(session, event->event,
- session->tool, event->file_offset);
+ session->tool, event->file_offset,
+ event->file_path);
}

struct perf_session *perf_session__new(struct perf_data *data,
@@ -452,7 +456,8 @@ static int process_stat_round_stub(struct perf_session *perf_session __maybe_unu

static int perf_session__process_compressed_event_stub(struct perf_session *session __maybe_unused,
union perf_event *event __maybe_unused,
- u64 file_offset __maybe_unused)
+ u64 file_offset __maybe_unused,
+ const char *file_path __maybe_unused)
{
dump_printf(": unhandled!\n");
return 0;
@@ -1241,13 +1246,14 @@ static void sample_read__printf(struct perf_sample *sample, u64 read_format)
}

static void dump_event(struct evlist *evlist, union perf_event *event,
- u64 file_offset, struct perf_sample *sample)
+ u64 file_offset, struct perf_sample *sample,
+ const char *file_path)
{
if (!dump_trace)
return;

- printf("\n%#" PRIx64 " [%#x]: event: %d\n",
- file_offset, event->header.size, event->header.type);
+ printf("\n%#" PRIx64 "@%s [%#x]: event: %d\n",
+ file_offset, file_path, event->header.size, event->header.type);

trace_event(event);
if (event->header.type == PERF_RECORD_SAMPLE && evlist->trace_event_sample_raw)
@@ -1438,12 +1444,13 @@ static int machines__deliver_event(struct machines *machines,
struct evlist *evlist,
union perf_event *event,
struct perf_sample *sample,
- struct perf_tool *tool, u64 file_offset)
+ struct perf_tool *tool, u64 file_offset,
+ const char *file_path)
{
struct evsel *evsel;
struct machine *machine;

- dump_event(evlist, event, file_offset, sample);
+ dump_event(evlist, event, file_offset, sample, file_path);

evsel = perf_evlist__id2evsel(evlist, sample->id);

@@ -1520,7 +1527,8 @@ static int machines__deliver_event(struct machines *machines,
static int perf_session__deliver_event(struct perf_session *session,
union perf_event *event,
struct perf_tool *tool,
- u64 file_offset)
+ u64 file_offset,
+ const char *file_path)
{
struct perf_sample sample;
int ret;
@@ -1538,7 +1546,7 @@ static int perf_session__deliver_event(struct perf_session *session,
return 0;

ret = machines__deliver_event(&session->machines, session->evlist,
- event, &sample, tool, file_offset);
+ event, &sample, tool, file_offset, file_path);

if (dump_trace && sample.aux_sample.size)
auxtrace__dump_auxtrace_sample(session, &sample);
@@ -1548,7 +1556,8 @@ static int perf_session__deliver_event(struct perf_session *session,

static s64 perf_session__process_user_event(struct perf_session *session,
union perf_event *event,
- u64 file_offset)
+ u64 file_offset,
+ const char *file_path)
{
struct ordered_events *oe = &session->ordered_events;
struct perf_tool *tool = session->tool;
@@ -1558,7 +1567,7 @@ static s64 perf_session__process_user_event(struct perf_session *session,

if (event->header.type != PERF_RECORD_COMPRESSED ||
tool->compressed == perf_session__process_compressed_event_stub)
- dump_event(session->evlist, event, file_offset, &sample);
+ dump_event(session->evlist, event, file_offset, &sample, file_path);

/* These events are processed right away */
switch (event->header.type) {
@@ -1617,9 +1626,9 @@ static s64 perf_session__process_user_event(struct perf_session *session,
case PERF_RECORD_HEADER_FEATURE:
return tool->feature(session, event);
case PERF_RECORD_COMPRESSED:
- err = tool->compressed(session, event, file_offset);
+ err = tool->compressed(session, event, file_offset, file_path);
if (err)
- dump_event(session->evlist, event, file_offset, &sample);
+ dump_event(session->evlist, event, file_offset, &sample, file_path);
return err;
default:
return -EINVAL;
@@ -1636,9 +1645,9 @@ int perf_session__deliver_synth_event(struct perf_session *session,
events_stats__inc(&evlist->stats, event->header.type);

if (event->header.type >= PERF_RECORD_USER_TYPE_START)
- return perf_session__process_user_event(session, event, 0);
+ return perf_session__process_user_event(session, event, 0, NULL);

- return machines__deliver_event(&session->machines, evlist, event, sample, tool, 0);
+ return machines__deliver_event(&session->machines, evlist, event, sample, tool, 0, NULL);
}

static void event_swap(union perf_event *event, bool sample_id_all)
@@ -1734,7 +1743,8 @@ int perf_session__peek_events(struct perf_session *session, u64 offset,
}

static s64 perf_session__process_event(struct perf_session *session,
- union perf_event *event, u64 file_offset)
+ union perf_event *event, u64 file_offset,
+ const char *file_path)
{
struct evlist *evlist = session->evlist;
struct perf_tool *tool = session->tool;
@@ -1749,7 +1759,7 @@ static s64 perf_session__process_event(struct perf_session *session,
events_stats__inc(&evlist->stats, event->header.type);

if (event->header.type >= PERF_RECORD_USER_TYPE_START)
- return perf_session__process_user_event(session, event, file_offset);
+ return perf_session__process_user_event(session, event, file_offset, file_path);

if (tool->ordered_events) {
u64 timestamp = -1ULL;
@@ -1763,7 +1773,7 @@ static s64 perf_session__process_event(struct perf_session *session,
return ret;
}

- return perf_session__deliver_event(session, event, tool, file_offset);
+ return perf_session__deliver_event(session, event, tool, file_offset, file_path);
}

void perf_event_header__bswap(struct perf_event_header *hdr)
@@ -2001,7 +2011,8 @@ static int __perf_session__process_pipe_events(struct perf_session *session)
}
}

- if ((skip = perf_session__process_event(session, event, head)) < 0) {
+ skip = perf_session__process_event(session, event, head, "pipe");
+ if (skip < 0) {
pr_err("%#" PRIx64 " [%#x]: failed to process type: %d\n",
head, event->header.size, event->header.type);
err = -EINVAL;
@@ -2082,7 +2093,7 @@ fetch_decomp_event(u64 head, size_t mmap_size, char *buf, bool needs_swap)
static int __perf_session__process_decomp_events(struct perf_session *session)
{
s64 skip;
- u64 size, file_pos = 0;
+ u64 size;
struct decomp *decomp = session->decomp_last;

if (!decomp)
@@ -2096,9 +2107,9 @@ static int __perf_session__process_decomp_events(struct perf_session *session)
break;

size = event->header.size;
-
- if (size < sizeof(struct perf_event_header) ||
- (skip = perf_session__process_event(session, event, file_pos)) < 0) {
+ if (size < sizeof(struct perf_event_header) ||
+ (skip = perf_session__process_event(session, event, decomp->file_pos,
+ decomp->file_path)) < 0) {
pr_err("%#" PRIx64 " [%#x]: failed to process type: %d\n",
decomp->file_pos + decomp->head, event->header.size, event->header.type);
return -EINVAL;
@@ -2129,10 +2140,12 @@ struct reader;

typedef s64 (*reader_cb_t)(struct perf_session *session,
union perf_event *event,
- u64 file_offset);
+ u64 file_offset,
+ const char *file_path);

struct reader {
int fd;
+ const char *path;
u64 data_size;
u64 data_offset;
reader_cb_t process;
@@ -2211,9 +2224,9 @@ reader__process_events(struct reader *rd, struct perf_session *session,
skip = -EINVAL;

if (size < sizeof(struct perf_event_header) ||
- (skip = rd->process(session, event, file_pos)) < 0) {
- pr_err("%#" PRIx64 " [%#x]: failed to process type: %d [%s]\n",
- file_offset + head, event->header.size,
+ (skip = rd->process(session, event, file_pos, rd->path)) < 0) {
+ pr_err("%#" PRIx64 " [%s] [%#x]: failed to process type: %d [%s]\n",
+ file_offset + head, rd->path, event->header.size,
event->header.type, strerror(-skip));
err = skip;
goto out;
@@ -2243,9 +2256,10 @@ reader__process_events(struct reader *rd, struct perf_session *session,

static s64 process_simple(struct perf_session *session,
union perf_event *event,
- u64 file_offset)
+ u64 file_offset,
+ const char *file_path)
{
- return perf_session__process_event(session, event, file_offset);
+ return perf_session__process_event(session, event, file_offset, file_path);
}

static int __perf_session__process_events(struct perf_session *session)
@@ -2255,6 +2269,7 @@ static int __perf_session__process_events(struct perf_session *session)
.data_size = session->header.data_size,
.data_offset = session->header.data_offset,
.process = process_simple,
+ .path = session->data->file.path,
};
struct ordered_events *oe = &session->ordered_events;
struct perf_tool *tool = session->tool;
diff --git a/tools/perf/util/session.h b/tools/perf/util/session.h
index f76480166d38..378ffc3e2809 100644
--- a/tools/perf/util/session.h
+++ b/tools/perf/util/session.h
@@ -46,6 +46,7 @@ struct perf_session {
struct decomp {
struct decomp *next;
u64 file_pos;
+ const char *file_path;
size_t mmap_len;
u64 head;
size_t size;
diff --git a/tools/perf/util/tool.h b/tools/perf/util/tool.h
index bbbc0dcd461f..c966531d3eca 100644
--- a/tools/perf/util/tool.h
+++ b/tools/perf/util/tool.h
@@ -28,7 +28,8 @@ typedef int (*event_attr_op)(struct perf_tool *tool,

typedef int (*event_op2)(struct perf_session *session, union perf_event *event);
typedef s64 (*event_op3)(struct perf_session *session, union perf_event *event);
-typedef int (*event_op4)(struct perf_session *session, union perf_event *event, u64 data);
+typedef int (*event_op4)(struct perf_session *session, union perf_event *event, u64 data,
+ const char *str);

typedef int (*event_oe)(struct perf_tool *tool, union perf_event *event,
struct ordered_events *oe);
--
2.24.1

2020-11-16 12:27:16

by Alexey Budankov

[permalink] [raw]
Subject: [PATCH v3 09/12] perf record: document parallel data streaming mode


Document the --threads option syntax and the parallel data streaming
modes in Documentation/perf-record.txt. Implement compatibility checks
for other modes and related command line options: asynchronous (--aio)
trace streaming and affinity (--affinity) modes, pipe mode, AUX
area tracing --snapshot and --aux-sample options, --switch-output,
--switch-output-event, --switch-max-files and --timestamp-filename
options. Parallel data streaming is compatible with Zstd compression
(--compression-level) and external control commands (--control). The
cpu mask provided via the -C option filters the --threads specification masks.
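
For illustration, a few possible invocations (the workload is a
placeholder and the mask values are arbitrary examples, not mandated by
this patch):

  perf record --threads -- <workload>               # one thread per monitored cpu
  perf record --threads=socket -- <workload>        # one thread per socket
  perf record --threads=0-3/3:4-7/4 -- <workload>   # cpus 0-3 read by a thread bound
                                                    # to cpu 3, cpus 4-7 by a thread
                                                    # bound to cpu 4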

Signed-off-by: Alexey Budankov <[email protected]>
---
tools/perf/Documentation/perf-record.txt | 18 ++++++++++
tools/perf/builtin-record.c | 43 ++++++++++++++++++++++--
2 files changed, 58 insertions(+), 3 deletions(-)

diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
index 768888b9326a..baf9428856e6 100644
--- a/tools/perf/Documentation/perf-record.txt
+++ b/tools/perf/Documentation/perf-record.txt
@@ -671,6 +671,24 @@ Example of bash shell script to enable and disable events during measurements:
wait -n ${perf_pid}
exit $?

+--threads=<spec>::
+Write collected trace data into several data files using parallel threads.
+<spec> value can be a user defined list of masks. Masks separated by colon
+define the cpus to be monitored by one thread, and the affinity mask of
+that thread is separated by slash. For example, a specification like:
+<cpus mask 1>/<affinity mask 1>:<cpu mask 2>/<affinity mask 2> specifies
+a parallel threads layout that consists of two threads with the
+corresponding assigned cpus to be monitored. <spec> value can also be
+a string selecting a predefined parallel threads layout:
+ cpu - create new data streaming thread for every monitored cpu
+ core - create new thread to monitor cpus grouped by a core
+ socket - create new thread to monitor cpus grouped by a socket
+ numa - create new thread to monitor cpus grouped by a numa domain
+Predefined layouts can be used on systems with a large number of cpus in
+order not to spawn one streaming thread per cpu while still avoiding LOST
+events in data directory files. The option specified with no or an empty
+value defaults to the cpu layout. Masks defined or provided by the option
+value are filtered through the mask provided by the -C option.

SEE ALSO
--------
diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index fd0587d636b2..9ea70dfa17d4 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -798,6 +798,12 @@ static int record__auxtrace_init(struct record *rec)
{
int err;

+ if ((rec->opts.auxtrace_snapshot_opts || rec->opts.auxtrace_sample_opts)
+ && record__threads_enabled(rec)) {
+ pr_err("AUX area tracing options are not available in parallel streaming mode.\n");
+ return -EINVAL;
+ }
+
if (!rec->itr) {
rec->itr = auxtrace_record__init(rec->evlist, &err);
if (err)
@@ -2107,6 +2113,11 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
return PTR_ERR(session);
}

+ if (record__threads_enabled(rec) && perf_data__is_pipe(&rec->data)) {
+ pr_err("Parallel trace streaming is not available in pipe mode.\n");
+ return -1;
+ }
+
fd = perf_data__fd(data);
rec->session = session;

@@ -2853,12 +2864,22 @@ static int switch_output_setup(struct record *rec)
* --switch-output=signal, as we'll send a SIGUSR2 from the side band
* thread to its parent.
*/
- if (rec->switch_output_event_set)
+ if (rec->switch_output_event_set) {
+ if (record__threads_enabled(rec)) {
+ pr_warning("WARNING: --switch-output-event option is not available in parallel streaming mode.\n");
+ return 0;
+ }
goto do_signal;
+ }

if (!s->set)
return 0;

+ if (record__threads_enabled(rec)) {
+ pr_warning("WARNING: --switch-output option is not available in parallel streaming mode.\n");
+ return 0;
+ }
+
if (!strcmp(s->str, "signal")) {
do_signal:
s->signal = true;
@@ -3137,8 +3158,8 @@ static struct option __record_options[] = {
"Set affinity mask of trace reading thread to NUMA node cpu mask or cpu of processed mmap buffer",
record__parse_affinity),
#ifdef HAVE_ZSTD_SUPPORT
- OPT_CALLBACK_OPTARG('z', "compression-level", &record.opts, &comp_level_default,
- "n", "Compressed records using specified level (default: 1 - fastest compression, 22 - greatest compression)",
+ OPT_CALLBACK_OPTARG('z', "compression-level", &record.opts, &comp_level_default, "n",
+ "Compress records using specified level (default: 1 - fastest compression, 22 - greatest compression)",
record__parse_comp_level),
#endif
OPT_CALLBACK(0, "max-size", &record.output_max_size,
@@ -3510,6 +3531,17 @@ int cmd_record(int argc, const char **argv)
if (rec->opts.kcore || record__threads_enabled(rec))
rec->data.is_dir = true;

+ if (record__threads_enabled(rec)) {
+ if (rec->opts.affinity != PERF_AFFINITY_SYS) {
+ pr_err("--affinity option is mutually exclusive to parallel streaming mode.\n");
+ goto out_opts;
+ }
+ if (record__aio_enabled(rec)) {
+ pr_err("Asynchronous streaming mode (--aio) is mutually exclusive to parallel streaming mode.\n");
+ goto out_opts;
+ }
+ }
+
if (rec->opts.comp_level != 0) {
pr_debug("Compression enabled, disabling build id collection at the end of the session.\n");
rec->no_buildid = true;
@@ -3543,6 +3575,11 @@ int cmd_record(int argc, const char **argv)
}
}

+ if (rec->timestamp_filename && record__threads_enabled(rec)) {
+ rec->timestamp_filename = false;
+ pr_warning("WARNING: --timestamp-filename option is not available in parallel streaming mode.\n");
+ }
+
/*
* Allow aliases to facilitate the lookup of symbols for address
* filters. Refer to auxtrace_parse_filters().
--
2.24.1

2020-11-16 12:27:18

by Alexey Budankov

[permalink] [raw]
Subject: [PATCH v3 12/12] perf session: use reader functions to load perf data file


Use the reader functions to load a perf data file, similar to the way
data directory files are loaded.
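
The reader functions boil down to a windowed mmap loop: map a chunk of
the file, consume records from it, and remap further into the file once
the current window is exhausted. Below is a rough standalone sketch of
such a loop (simplified illustration only; the WINDOW size and all names
are placeholders, none of this is the perf code):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define WINDOW	(1 << 20)	/* stands in for MMAP_SIZE */

int main(int argc, char **argv)
{
	struct stat st;
	off_t off = 0;
	int fd;

	if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0 || fstat(fd, &st))
		return 1;

	while (off < st.st_size) {
		size_t len = (st.st_size - off) < WINDOW ? (st.st_size - off) : WINDOW;
		char *buf = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, off);

		if (buf == MAP_FAILED)
			return 1;
		/* "process events" from this window; here only report its size */
		printf("window at offset %lld: %zu bytes\n", (long long)off, len);
		munmap(buf, len);
		off += len;	/* "remap": slide the window forward */
	}
	close(fd);
	return 0;
}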

Signed-off-by: Alexey Budankov <[email protected]>
---
tools/perf/util/session.c | 215 ++++++++++++--------------------------
1 file changed, 66 insertions(+), 149 deletions(-)

diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
index 3cb30c1667c0..f6b06187c6f5 100644
--- a/tools/perf/util/session.c
+++ b/tools/perf/util/session.c
@@ -2194,109 +2194,6 @@ static int __perf_session__process_decomp_events(struct perf_session *session)
return 0;
}

-static int
-reader__process_events(struct reader *rd, struct perf_session *session,
- struct ui_progress *prog)
-{
- u64 data_size = rd->data_size;
- u64 head, page_offset, file_offset, file_pos, size;
- int err = 0, mmap_prot, mmap_flags, map_idx = 0;
- size_t mmap_size;
- char *buf, *mmaps[NUM_MMAPS];
- union perf_event *event;
- s64 skip;
-
- page_offset = page_size * (rd->data_offset / page_size);
- file_offset = page_offset;
- head = rd->data_offset - page_offset;
-
- ui_progress__init_size(prog, data_size, "Processing events...");
-
- data_size += rd->data_offset;
-
- mmap_size = MMAP_SIZE;
- if (mmap_size > data_size) {
- mmap_size = data_size;
- session->one_mmap = true;
- }
-
- memset(mmaps, 0, sizeof(mmaps));
-
- mmap_prot = PROT_READ;
- mmap_flags = MAP_SHARED;
-
- if (session->header.needs_swap) {
- mmap_prot |= PROT_WRITE;
- mmap_flags = MAP_PRIVATE;
- }
-remap:
- buf = mmap(NULL, mmap_size, mmap_prot, mmap_flags, rd->fd,
- file_offset);
- if (buf == MAP_FAILED) {
- pr_err("failed to mmap file\n");
- err = -errno;
- goto out;
- }
- mmaps[map_idx] = buf;
- map_idx = (map_idx + 1) & (ARRAY_SIZE(mmaps) - 1);
- file_pos = file_offset + head;
- if (session->one_mmap) {
- session->one_mmap_addr = buf;
- session->one_mmap_offset = file_offset;
- }
-
-more:
- event = fetch_mmaped_event(head, mmap_size, buf, session->header.needs_swap);
- if (IS_ERR(event))
- return PTR_ERR(event);
-
- if (!event) {
- if (mmaps[map_idx]) {
- munmap(mmaps[map_idx], mmap_size);
- mmaps[map_idx] = NULL;
- }
-
- page_offset = page_size * (head / page_size);
- file_offset += page_offset;
- head -= page_offset;
- goto remap;
- }
-
- size = event->header.size;
-
- skip = -EINVAL;
-
- if (size < sizeof(struct perf_event_header) ||
- (skip = rd->process(session, event, file_pos, rd->path)) < 0) {
- pr_err("%#" PRIx64 " [%s] [%#x]: failed to process type: %d [%s]\n",
- file_offset + head, rd->path, event->header.size,
- event->header.type, strerror(-skip));
- err = skip;
- goto out;
- }
-
- if (skip)
- size += skip;
-
- head += size;
- file_pos += size;
-
- err = __perf_session__process_decomp_events(session);
- if (err)
- goto out;
-
- ui_progress__update(prog, size);
-
- if (session_done())
- goto out;
-
- if (file_pos < data_size)
- goto more;
-
-out:
- return err;
-}
-
static s64 process_simple(struct perf_session *session,
union perf_event *event,
u64 file_offset,
@@ -2305,52 +2202,6 @@ static s64 process_simple(struct perf_session *session,
return perf_session__process_event(session, event, file_offset, file_path);
}

-static int __perf_session__process_events(struct perf_session *session)
-{
- struct reader rd = {
- .fd = perf_data__fd(session->data),
- .data_size = session->header.data_size,
- .data_offset = session->header.data_offset,
- .process = process_simple,
- .path = session->data->file.path,
- };
- struct ordered_events *oe = &session->ordered_events;
- struct perf_tool *tool = session->tool;
- struct ui_progress prog;
- int err;
-
- perf_tool__fill_defaults(tool);
-
- if (rd.data_size == 0)
- return -1;
-
- ui_progress__init_size(&prog, rd.data_size, "Processing events...");
-
- err = reader__process_events(&rd, session, &prog);
- if (err)
- goto out_err;
- /* do the final flush for ordered samples */
- err = ordered_events__flush(oe, OE_FLUSH__FINAL);
- if (err)
- goto out_err;
- err = auxtrace__flush_events(session, tool);
- if (err)
- goto out_err;
- err = perf_session__flush_thread_stacks(session);
-out_err:
- ui_progress__finish();
- if (!tool->no_warn)
- perf_session__warn_about_errors(session);
- /*
- * We may switching perf.data output, make ordered_events
- * reusable.
- */
- ordered_events__reinit(&session->ordered_events);
- auxtrace__free_events(session);
- session->one_mmap = false;
- return err;
-}
-
static int
reader__init(struct reader *rd, bool *one_mmap)
{
@@ -2467,6 +2318,72 @@ reader__read_event(struct reader *rd, struct perf_session *session,
session->active_reader = NULL;
return ret;
}
+
+static int __perf_session__process_events(struct perf_session *session)
+{
+ struct reader *rd;
+ struct ordered_events *oe = &session->ordered_events;
+ struct perf_tool *tool = session->tool;
+ struct ui_progress prog;
+ int err;
+
+ perf_tool__fill_defaults(tool);
+
+ rd = session->readers = zalloc(sizeof(struct reader));
+ if (!rd)
+ return -ENOMEM;
+
+ session->nr_readers = 1;
+
+ *rd = (struct reader) {
+ .fd = perf_data__fd(session->data),
+ .data_size = session->header.data_size,
+ .data_offset = session->header.data_offset,
+ .process = process_simple,
+ .path = session->data->file.path,
+ };
+
+ ui_progress__init_size(&prog, rd->data_size, "Processing events...");
+
+ reader__init(rd, &session->one_mmap);
+ if ((err = reader__mmap(rd, session)) != READER_OK)
+ goto out_err;
+
+ while (true) {
+ if (session_done())
+ break;
+
+ err = reader__read_event(rd, session, &prog);
+ if (err < 0)
+ break;
+ if (err == READER_EOF) {
+ err = reader__mmap(rd, session);
+ if (err <= 0)
+ break;
+ }
+ }
+
+ if (err < 0)
+ goto out_err;
+
+ /* do the final flush for ordered samples */
+ err = ordered_events__flush(oe, OE_FLUSH__FINAL);
+ if (err)
+ goto out_err;
+ err = auxtrace__flush_events(session, tool);
+ if (err)
+ goto out_err;
+ err = perf_session__flush_thread_stacks(session);
+out_err:
+ ui_progress__finish();
+ if (!tool->no_warn)
+ perf_session__warn_about_errors(session);
+ /*
+ * We may be switching perf.data output, make ordered_events
+ * reusable.
+ */
+ ordered_events__reinit(&session->ordered_events);
+ auxtrace__free_events(session);
+ session->one_mmap = false;
+ return err;
+}
/*
* This function reads, merges and processes directory data.
* It assumes the version 1 of directory data, where each
--
2.24.1

2020-11-16 12:28:10

by Alexey Budankov

[permalink] [raw]
Subject: [PATCH v3 11/12] perf session: load data directory files for analysis


Introduce a decompressor into the trace reader object so that
decompression can be executed on a per data file basis, separately for
every data file located in the data directory.

Load data directory files and provide basic raw dump and aggregated
analysis support for data directories in report mode, still with no
memory consumption optimizations.
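
The per-file readers are drained round-robin in bounded batches (roughly
10MB in the real code) so that ordered events sorting keeps seeing nearly
time-sorted input from all files. A standalone toy sketch of that
scheduling follows; the types, counts and batch size are made up for
illustration and are not perf code:

#include <stdio.h>

#define NR_READERS	3
#define BATCH		4	/* stands in for the ~10MB batch in the real code */

struct toy_reader {
	int remaining;		/* events left in this "file" */
	int in_batch;		/* events consumed in the current batch */
};

/* returns 1 while events remain ("READER_OK"), 0 at end of file ("READER_EOF") */
static int toy_read_event(struct toy_reader *rd, int idx)
{
	if (!rd->remaining)
		return 0;
	rd->remaining--;
	rd->in_batch++;
	printf("event from reader %d\n", idx);
	return 1;
}

int main(void)
{
	struct toy_reader rd[NR_READERS] = { { 5, 0 }, { 9, 0 }, { 2, 0 } };
	int eof[NR_READERS] = { 0 };
	int i = 0, active = NR_READERS;

	while (active) {
		if (eof[i]) {				/* skip exhausted readers */
			i = (i + 1) % NR_READERS;
			continue;
		}
		if (!toy_read_event(&rd[i], i)) {
			eof[i] = 1;			/* this file is done */
			active--;
		} else if (rd[i].in_batch >= BATCH) {
			rd[i].in_batch = 0;		/* batch done, move on */
			i = (i + 1) % NR_READERS;
		}
	}
	return 0;
}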

Design and implementation are based on the prototype [1], [2].

[1] git clone https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git -b perf/record_threads
[2] https://lore.kernel.org/lkml/[email protected]/

Suggested-by: Jiri Olsa <[email protected]>
Signed-off-by: Alexey Budankov <[email protected]>
---
tools/perf/util/session.c | 350 +++++++++++++++++++++++++++++++++-----
tools/perf/util/session.h | 4 +
2 files changed, 315 insertions(+), 39 deletions(-)

diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
index 3c2fafb3a04d..3cb30c1667c0 100644
--- a/tools/perf/util/session.c
+++ b/tools/perf/util/session.c
@@ -34,6 +34,55 @@
#include "arch/common.h"
#include <internal/lib.h>

+struct reader;
+
+typedef s64 (*reader_cb_t)(struct perf_session *session,
+ union perf_event *event,
+ u64 file_offset,
+ const char *file_path);
+
+/*
+ * On 64bit we can mmap the data file in one go. No need for tiny mmap
+ * slices. On 32bit we use 32MB.
+ */
+#if BITS_PER_LONG == 64
+#define MMAP_SIZE ULLONG_MAX
+#define NUM_MMAPS 1
+#else
+#define MMAP_SIZE (32 * 1024 * 1024ULL)
+#define NUM_MMAPS 128
+#endif
+
+struct reader_state {
+ char *mmaps[NUM_MMAPS];
+ size_t mmap_size;
+ int mmap_idx;
+ char *mmap_cur;
+ u64 file_pos;
+ u64 file_offset;
+ u64 data_size;
+ u64 head;
+ bool eof;
+ u64 size;
+};
+
+enum {
+ READER_EOF = 0,
+ READER_OK = 1,
+};
+
+struct reader {
+ int fd;
+ const char *path;
+ u64 data_size;
+ u64 data_offset;
+ reader_cb_t process;
+ struct zstd_data zstd_data;
+ struct decomp *decomp;
+ struct decomp *decomp_last;
+ struct reader_state state;
+};
+
#ifdef HAVE_ZSTD_SUPPORT
static int perf_session__process_compressed_event(struct perf_session *session,
union perf_event *event, u64 file_offset,
@@ -43,7 +92,10 @@ static int perf_session__process_compressed_event(struct perf_session *session,
size_t decomp_size, src_size;
u64 decomp_last_rem = 0;
size_t mmap_len, decomp_len = session->header.env.comp_mmap_len;
- struct decomp *decomp, *decomp_last = session->decomp_last;
+ struct decomp *decomp, *decomp_last = session->active_reader ?
+ session->active_reader->decomp_last : session->decomp_last;
+ struct zstd_data *zstd_data = session->active_reader ?
+ &session->active_reader->zstd_data: &session->zstd_data;

if (decomp_last) {
decomp_last_rem = decomp_last->size - decomp_last->head;
@@ -71,7 +123,7 @@ static int perf_session__process_compressed_event(struct perf_session *session,
src = (void *)event + sizeof(struct perf_record_compressed);
src_size = event->pack.header.size - sizeof(struct perf_record_compressed);

- decomp_size = zstd_decompress_stream(&(session->zstd_data), src, src_size,
+ decomp_size = zstd_decompress_stream(zstd_data, src, src_size,
&(decomp->data[decomp_last_rem]), decomp_len - decomp_last_rem);
if (!decomp_size) {
munmap(decomp, mmap_len);
@@ -81,12 +133,22 @@ static int perf_session__process_compressed_event(struct perf_session *session,

decomp->size += decomp_size;

- if (session->decomp == NULL) {
- session->decomp = decomp;
- session->decomp_last = decomp;
+ if (session->active_reader) {
+ if (session->active_reader->decomp == NULL) {
+ session->active_reader->decomp = decomp;
+ session->active_reader->decomp_last = decomp;
+ } else {
+ session->active_reader->decomp_last->next = decomp;
+ session->active_reader->decomp_last = decomp;
+ }
} else {
- session->decomp_last->next = decomp;
- session->decomp_last = decomp;
+ if (session->decomp == NULL) {
+ session->decomp = decomp;
+ session->decomp_last = decomp;
+ } else {
+ session->decomp_last->next = decomp;
+ session->decomp_last = decomp;
+ }
}

pr_debug("decomp (B): %zd to %zd\n", src_size, decomp_size);
@@ -277,11 +339,10 @@ static void perf_session__delete_threads(struct perf_session *session)
machine__delete_threads(&session->machines.host);
}

-static void perf_session__release_decomp_events(struct perf_session *session)
+static void perf_decomp__release_events(struct decomp *next)
{
- struct decomp *next, *decomp;
+ struct decomp *decomp;
size_t mmap_len;
- next = session->decomp;
do {
decomp = next;
if (decomp == NULL)
@@ -294,13 +355,21 @@ static void perf_session__release_decomp_events(struct perf_session *session)

void perf_session__delete(struct perf_session *session)
{
+ int r;
+
if (session == NULL)
return;
auxtrace__free(session);
auxtrace_index__free(&session->auxtrace_index);
perf_session__destroy_kernel_maps(session);
perf_session__delete_threads(session);
- perf_session__release_decomp_events(session);
+ if (session->readers) {
+ for (r = 0; r < session->nr_readers; r++)
+ perf_decomp__release_events(session->readers[r].decomp);
+ zfree(&session->readers);
+ session->nr_readers = 0;
+ }
+ perf_decomp__release_events(session->decomp);
perf_env__exit(&session->header.env);
machines__exit(&session->machines);
if (session->data)
@@ -2094,7 +2163,8 @@ static int __perf_session__process_decomp_events(struct perf_session *session)
{
s64 skip;
u64 size;
- struct decomp *decomp = session->decomp_last;
+ struct decomp *decomp = session->active_reader ?
+ session->active_reader->decomp_last : session->decomp_last;

if (!decomp)
return 0;
@@ -2124,33 +2194,6 @@ static int __perf_session__process_decomp_events(struct perf_session *session)
return 0;
}

-/*
- * On 64bit we can mmap the data file in one go. No need for tiny mmap
- * slices. On 32bit we use 32MB.
- */
-#if BITS_PER_LONG == 64
-#define MMAP_SIZE ULLONG_MAX
-#define NUM_MMAPS 1
-#else
-#define MMAP_SIZE (32 * 1024 * 1024ULL)
-#define NUM_MMAPS 128
-#endif
-
-struct reader;
-
-typedef s64 (*reader_cb_t)(struct perf_session *session,
- union perf_event *event,
- u64 file_offset,
- const char *file_path);
-
-struct reader {
- int fd;
- const char *path;
- u64 data_size;
- u64 data_offset;
- reader_cb_t process;
-};
-
static int
reader__process_events(struct reader *rd, struct perf_session *session,
struct ui_progress *prog)
@@ -2308,6 +2351,232 @@ static int __perf_session__process_events(struct perf_session *session)
return err;
}

+static int
+reader__init(struct reader *rd, bool *one_mmap)
+{
+ struct reader_state *st = &rd->state;
+ char **mmaps = st->mmaps;
+
+ pr_debug("reader processing %s\n", rd->path);
+
+ st->head = rd->data_offset;
+
+ st->data_size = rd->data_size + rd->data_offset;
+
+ st->mmap_size = MMAP_SIZE;
+ if (st->mmap_size > st->data_size) {
+ st->mmap_size = st->data_size;
+ if (one_mmap)
+ *one_mmap = true;
+ }
+
+ memset(mmaps, 0, sizeof(st->mmaps));
+
+ if (zstd_init(&rd->zstd_data, 0))
+ return -1;
+
+ return 0;
+}
+
+static int
+reader__mmap(struct reader *rd, struct perf_session *session)
+{
+ struct reader_state *st = &rd->state;
+ int mmap_prot, mmap_flags;
+ char *buf, **mmaps = st->mmaps;
+ u64 page_offset;
+
+ if (st->file_pos >= st->data_size) {
+ st->eof = true;
+ return READER_EOF;
+ }
+
+ mmap_prot = PROT_READ;
+ mmap_flags = MAP_SHARED;
+
+ if (session->header.needs_swap) {
+ mmap_prot |= PROT_WRITE;
+ mmap_flags = MAP_PRIVATE;
+ }
+
+ if (mmaps[st->mmap_idx]) {
+ munmap(mmaps[st->mmap_idx], st->mmap_size);
+ mmaps[st->mmap_idx] = NULL;
+ }
+
+ page_offset = page_size * (st->head / page_size);
+ st->file_offset += page_offset;
+ st->head -= page_offset;
+
+ buf = mmap(NULL, st->mmap_size, mmap_prot, mmap_flags, rd->fd,
+ st->file_offset);
+ if (buf == MAP_FAILED) {
+ pr_err("failed to mmap file\n");
+ return -errno;
+ }
+ mmaps[st->mmap_idx] = st->mmap_cur = buf;
+ st->mmap_idx = (st->mmap_idx + 1) & (ARRAY_SIZE(st->mmaps) - 1);
+ st->file_pos = st->file_offset + st->head;
+ return READER_OK;
+}
+
+static int
+reader__read_event(struct reader *rd, struct perf_session *session,
+ struct ui_progress *prog)
+{
+ struct reader_state *st = &rd->state;
+ union perf_event *event;
+ int ret = READER_OK;
+ u64 size;
+ s64 skip;
+
+ event = fetch_mmaped_event(st->head, st->mmap_size, st->mmap_cur, session->header.needs_swap);
+ if (IS_ERR(event))
+ return PTR_ERR(event);
+
+ if (!event)
+ return READER_EOF;
+
+ session->active_reader = rd;
+ size = event->header.size;
+ skip = -EINVAL;
+
+ if (size < sizeof(struct perf_event_header) ||
+ (skip = perf_session__process_event(session, event, st->file_pos, rd->path)) < 0) {
+ pr_err("%#" PRIx64 " [%s] [%#x]: failed to process type: %d [%s]\n",
+ st->file_offset + st->head, rd->path, event->header.size,
+ event->header.type, strerror(-skip));
+ ret = skip;
+ goto out;
+ }
+
+ if (skip)
+ size += skip;
+
+ st->size += size;
+ st->head += size;
+ st->file_pos += size;
+
+ skip = __perf_session__process_decomp_events(session);
+ if (skip)
+ ret = skip;
+
+ ui_progress__update(prog, size);
+
+out:
+ session->active_reader = NULL;
+ return ret;
+}
+/*
+ * This function reads, merges and processes directory data.
+ * It assumes the version 1 of directory data, where each
+ * data file holds per-cpu data, already sorted by kernel.
+ */
+static int __perf_session__process_dir_events(struct perf_session *session)
+{
+ struct perf_data *data = session->data;
+ struct perf_tool *tool = session->tool;
+ int i, ret = 0, readers = 1;
+ struct ui_progress prog;
+ u64 total_size = perf_data__size(session->data);
+ struct reader *rd;
+
+ perf_tool__fill_defaults(tool);
+
+ ui_progress__init_size(&prog, total_size, "Sorting events...");
+
+ for (i = 0; i < data->dir.nr; i++) {
+ if (data->dir.files[i].size)
+ readers++;
+ }
+
+ rd = session->readers = zalloc(readers * sizeof(struct reader));
+ if (!rd)
+ return -ENOMEM;
+ session->nr_readers = readers;
+ readers = 0;
+
+ rd[readers] = (struct reader) {
+ .fd = perf_data__fd(session->data),
+ .path = session->data->file.path,
+ .data_size = session->header.data_size,
+ .data_offset = session->header.data_offset,
+ };
+ reader__init(&rd[readers], &session->one_mmap);
+ if (reader__mmap(&rd[readers], session) != READER_OK)
+ goto out_err;
+ readers++;
+
+ for (i = 0; i < data->dir.nr; i++) {
+ if (data->dir.files[i].size) {
+ rd[readers] = (struct reader) {
+ .fd = data->dir.files[i].fd,
+ .path = data->dir.files[i].path,
+ .data_size = data->dir.files[i].size,
+ .data_offset = 0,
+ };
+ reader__init(&rd[readers], &session->one_mmap);
+ if (reader__mmap(&rd[readers], session) != READER_OK)
+ goto out_err;
+ readers++;
+ }
+ }
+
+ i = 0;
+
+ while ((ret >= 0) && readers) {
+ if (session_done())
+ return 0;
+
+ if (rd[i].state.eof) {
+ i = (i + 1) % session->nr_readers;
+ continue;
+ }
+
+ ret = reader__read_event(&rd[i], session, &prog);
+ if (ret < 0)
+ break;
+ if (ret == READER_EOF) {
+ ret = reader__mmap(&rd[i], session);
+ if (ret < 0)
+ goto out_err;
+ if (ret == READER_EOF)
+ readers--;
+ }
+
+ /*
+ * Processing 10MBs of data from each reader in sequence,
+ * because that's the way the ordered events sorting works
+ * most efficiently.
+ */
+ if (rd[i].state.size >= 10*1024*1024) {
+ rd[i].state.size = 0;
+ i = (i + 1) % session->nr_readers;
+ }
+ }
+
+ if (ret < 0)
+ goto out_err;
+
+ ret = ordered_events__flush(&session->ordered_events, OE_FLUSH__FINAL);
+ if (ret)
+ goto out_err;
+
+ ret = perf_session__flush_thread_stacks(session);
+out_err:
+ ui_progress__finish();
+
+ if (!tool->no_warn)
+ perf_session__warn_about_errors(session);
+
+ /*
+ * We may be switching perf.data output, make ordered_events
+ * reusable.
+ */
+ ordered_events__reinit(&session->ordered_events);
+
+ session->one_mmap = false;
+
+ return ret;
+}
+
int perf_session__process_events(struct perf_session *session)
{
if (perf_session__register_idle_thread(session) < 0)
@@ -2316,6 +2585,9 @@ int perf_session__process_events(struct perf_session *session)
if (perf_data__is_pipe(session->data))
return __perf_session__process_pipe_events(session);

+ if (perf_data__is_dir(session->data))
+ return __perf_session__process_dir_events(session);
+
return __perf_session__process_events(session);
}

diff --git a/tools/perf/util/session.h b/tools/perf/util/session.h
index 378ffc3e2809..cbc54615d155 100644
--- a/tools/perf/util/session.h
+++ b/tools/perf/util/session.h
@@ -19,6 +19,7 @@ struct thread;

struct auxtrace;
struct itrace_synth_opts;
+struct reader;

struct perf_session {
struct perf_header header;
@@ -41,6 +42,9 @@ struct perf_session {
struct zstd_data zstd_data;
struct decomp *decomp;
struct decomp *decomp_last;
+ struct reader *readers;
+ int nr_readers;
+ struct reader *active_reader;
};

struct decomp {
--
2.24.1

2020-11-17 01:49:59

by Alexey Budankov

[permalink] [raw]
Subject: [PATCH v3 01/12] perf record: introduce thread affinity and mmap masks


Introduce affinity and mmap thread masks. The thread affinity mask
defines the cpus that a thread is allowed to run on. The thread maps
mask defines the mmap data buffers that the thread reads and streams
profiling data from.
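
As an illustration of how the two masks are meant to be used, here is a
small standalone sketch (placeholder types and cpu numbers, build with
-pthread; this is not the code added by the patch): the affinity mask
pins the streaming thread, the maps mask selects which per-cpu buffers
it reads.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

struct toy_thread_mask {
	cpu_set_t affinity;	/* cpus the streaming thread may run on */
	cpu_set_t maps;		/* per-cpu mmap buffers the thread serves */
};

static void *toy_stream_fn(void *arg)
{
	struct toy_thread_mask *mask = arg;
	int cpu;

	/* pin the thread according to its affinity mask */
	pthread_setaffinity_np(pthread_self(), sizeof(mask->affinity),
			       &mask->affinity);

	/* read only the buffers selected by the maps mask */
	for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
		if (CPU_ISSET(cpu, &mask->maps))
			printf("would stream data from cpu %d buffer\n", cpu);

	return NULL;
}

int main(void)
{
	struct toy_thread_mask mask;
	pthread_t tid;

	CPU_ZERO(&mask.affinity);
	CPU_ZERO(&mask.maps);
	CPU_SET(3, &mask.affinity);	/* run the thread on cpu 3 ... */
	CPU_SET(0, &mask.maps);		/* ... serving buffers of cpus 0-2 */
	CPU_SET(1, &mask.maps);
	CPU_SET(2, &mask.maps);

	pthread_create(&tid, NULL, toy_stream_fn, &mask);
	pthread_join(tid, NULL);
	return 0;
}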

Signed-off-by: Alexey Budankov <[email protected]>
---
tools/perf/builtin-record.c | 116 ++++++++++++++++++++++++++++++++++++
1 file changed, 116 insertions(+)

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index adf311d15d3d..82f009703ad7 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -85,6 +85,11 @@ struct switch_output {
int cur_file;
};

+struct thread_mask {
+ struct mmap_cpu_mask maps;
+ struct mmap_cpu_mask affinity;
+};
+
struct record {
struct perf_tool tool;
struct record_opts opts;
@@ -108,6 +113,8 @@ struct record {
unsigned long long samples;
struct mmap_cpu_mask affinity_mask;
unsigned long output_max_size; /* = 0: unlimited */
+ struct thread_mask *thread_masks;
+ int nr_threads;
};

static volatile int done;
@@ -2174,6 +2181,45 @@ static int record__parse_affinity(const struct option *opt, const char *str, int
return 0;
}

+static int record__mmap_cpu_mask_alloc(struct mmap_cpu_mask *mask, int nr_bits)
+{
+ mask->nbits = nr_bits;
+ mask->bits = bitmap_alloc(mask->nbits);
+ if (!mask->bits) {
+ pr_err("Failed to allocate mmap_cpu mask\n");
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void record__mmap_cpu_mask_free(struct mmap_cpu_mask *mask)
+{
+ bitmap_free(mask->bits);
+ mask->nbits = 0;
+}
+
+static void record__thread_mask_clear(struct thread_mask *mask)
+{
+ bitmap_zero(mask->maps.bits, mask->maps.nbits);
+ bitmap_zero(mask->affinity.bits, mask->affinity.nbits);
+}
+
+static int record__thread_mask_alloc(struct thread_mask *mask, int nr_bits)
+{
+ if (record__mmap_cpu_mask_alloc(&mask->maps, nr_bits) ||
+ record__mmap_cpu_mask_alloc(&mask->affinity, nr_bits))
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void record__thread_mask_free(struct thread_mask *mask)
+{
+ record__mmap_cpu_mask_free(&mask->maps);
+ record__mmap_cpu_mask_free(&mask->affinity);
+}
+
static int parse_output_max_size(const struct option *opt,
const char *str, int unset)
{
@@ -2603,6 +2649,69 @@ static struct option __record_options[] = {

struct option *record_options = __record_options;

+static void record__mmap_cpu_mask_init(struct mmap_cpu_mask *mask, struct perf_cpu_map *cpus)
+{
+ int c;
+
+ for (c = 0; c < cpus->nr; c++)
+ set_bit(cpus->map[c], mask->bits);
+}
+
+static int record__alloc_thread_masks(struct record *rec, int nr_threads, int nr_bits)
+{
+ int t, ret;
+
+ rec->thread_masks = zalloc(nr_threads * sizeof(*(rec->thread_masks)));
+ if (!rec->thread_masks) {
+ pr_err("Failed to allocate thread masks\n");
+ return -ENOMEM;
+ }
+
+ for (t = 0; t < nr_threads; t++) {
+ ret = record__thread_mask_alloc(&rec->thread_masks[t], nr_bits);
+ if (ret)
+ return ret;
+ record__thread_mask_clear(&rec->thread_masks[t]);
+ }
+
+ return 0;
+}
+static int record__init_thread_default_masks(struct record *rec, struct perf_cpu_map *cpus)
+{
+ int ret;
+
+ ret = record__alloc_thread_masks(rec, 1, cpu__max_cpu());
+ if (ret)
+ return ret;
+
+ record__mmap_cpu_mask_init(&rec->thread_masks->maps, cpus);
+
+ rec->nr_threads = 1;
+
+ return 0;
+}
+
+static int record__init_thread_masks(struct record *rec)
+{
+ struct perf_cpu_map *cpus = rec->evlist->core.cpus;
+
+ return record__init_thread_default_masks(rec, cpus);
+}
+
+static int record__fini_thread_masks(struct record *rec)
+{
+ int t;
+
+ for (t = 0; t < rec->nr_threads; t++)
+ record__thread_mask_free(&rec->thread_masks[t]);
+
+ zfree(&rec->thread_masks);
+
+ rec->nr_threads = 0;
+
+ return 0;
+}
+
int cmd_record(int argc, const char **argv)
{
int err;
@@ -2821,6 +2930,12 @@ int cmd_record(int argc, const char **argv)
goto out;
}

+ err = record__init_thread_masks(rec);
+ if (err) {
+ pr_err("record__init_thread_masks failed, error %d\n", err);
+ goto out;
+ }
+
if (rec->opts.nr_cblocks > nr_cblocks_max)
rec->opts.nr_cblocks = nr_cblocks_max;
pr_debug("nr_cblocks: %d\n", rec->opts.nr_cblocks);
@@ -2839,6 +2954,7 @@ int cmd_record(int argc, const char **argv)
symbol__exit();
auxtrace_record__free(rec->itr);
out_opts:
+ record__fini_thread_masks(rec);
evlist__close_control(rec->opts.ctl_fd, rec->opts.ctl_fd_ack, &rec->opts.ctl_fd_close);
return err;
}
--
2.24.1

2020-11-20 09:48:38

by Namhyung Kim

[permalink] [raw]
Subject: Re: [PATCH v3 00/12] Introduce threaded trace streaming for basic perf record operation

Hi,

Thanks for working on this!

On Mon, Nov 16, 2020 at 03:12:47PM +0300, Alexey Budankov wrote:
>
> Changes in v3:
> - avoided skipped redundant patch 3/15
> - applied "data file" and "data directory" terms allover the patch set
> - captured Acked-by: tags by Namhyung Kim
> - avoided braces where don't needed
> - employed thread local variable for serial trace streaming
> - added specs for --thread option - core, socket, numa and user defined
> - added parallel loading of data directory files similar to the prototype [1]

Can you please consider splitting tracing records (FORK/MMAP/...) into
a separate file? I think this change would put too much burden to the
perf report side. I'm saying this repeatedly because I'm afraid that
it'd be harder to change later once we accept this approach/format..

Thanks,
Namhyung


>
> v2: https://lore.kernel.org/lkml/[email protected]/
>
> Changes in v2:
> - explicitly added credit tags to patches 6/15 and 15/15,
> additionally to cites [1], [2]
> - updated description of 3/15 to explicitly mention the reason
> to open data directories in read access mode (e.g. for perf report)
> - implemented fix for compilation error of 2/15
> - explicitly elaborated on found issues to be resolved for
> threaded AUX trace capture
>
> v1: https://lore.kernel.org/lkml/[email protected]/
>
> Patch set provides parallel threaded trace streaming mode for basic
> perf record operation. Provided mode mitigates profiling data losses
> and resolves scalability issues of serial and asynchronous (--aio)
> trace streaming modes on multicore server systems. The design and
> implementation are based on the prototype [1], [2].
>
> Parallel threaded mode executes trace streaming threads that read kernel
> data buffers and write captured data into several data files located at
> data directory. Layout of trace streaming threads and their mapping to data
> buffers to read can be configured using a value of --thread command line
> option. Specification value provides masks separated by colon so the masks
> define cpus to be monitored by one thread and thread affinity mask is
> separated by slash. <cpus mask 1>/<affinity mask 1>:<cpu mask 2>/<affinity mask 2>
> specifies parallel threads layout that consists of two threads with
> corresponding assigned cpus to be monitored. Specification value can be
> a string e.g. "cpu", "core" or "socket" meaning creation of data streaming
> thread for monitoring every cpu, whole core or socket. The option provided
> with no or empty value defaults to "cpu" layout creating data streaming
> thread for every cpu being monitored. Specification masks are filtered
> by the mask provided via -C option.
>
> Parallel streaming mode is compatible with Zstd compression/decompression
> (--compression-level) and external control commands (--control). The mode
> is not enabled for pipe mode. The mode is not enabled for AUX area tracing,
> related and derived modes like --snapshot or --aux-sample. --switch-output-*
> and --timestamp-filename options are not enabled for parallel streaming.
> Initial intent to enable AUX area tracing faced the need to define some
> optimal way to store index data in data directory. --switch-output-* and
> --timestamp-filename use cases are not clear for data directories.
> Asynchronous(--aio) trace streaming and affinity (--affinity) modes are
> mutually exclusive to parallel streaming mode.
>
> Basic analysis of data directories is provided in perf report mode.
> Raw dump and aggregated reports are available for data directories,
> still with no memory consumption optimizations.
>
> Tested:
>
> tools/perf/perf record -o prof.data --threads -- matrix.gcc.g.O3
> tools/perf/perf record -o prof.data --threads= -- matrix.gcc.g.O3
> tools/perf/perf record -o prof.data --threads=cpu -- matrix.gcc.g.O3
> tools/perf/perf record -o prof.data --threads=core -- matrix.gcc.g.O3
> tools/perf/perf record -o prof.data --threads=socket -- matrix.gcc.g.O3
> tools/perf/perf record -o prof.data --threads=numa -- matrix.gcc.g.O3
> tools/perf/perf record -o prof.data --threads=0-3/3:4-7/4 -- matrix.gcc.g.O3
> tools/perf/perf record -o prof.data -C 2,5 --threads=0-3/3:4-7/4 -- matrix.gcc.g.O3
> tools/perf/perf record -o prof.data -C 3,4 --threads=0-3/3:4-7/4 -- matrix.gcc.g.O3
> tools/perf/perf record -o prof.data -C 0,4,2,6 --threads=core -- matrix.gcc.g.O3
> tools/perf/perf record -o prof.data -C 0,4,2,6 --threads=numa -- matrix.gcc.g.O3
> tools/perf/perf record -o prof.data --threads -g --call-graph dwarf,4096 -- matrix.gcc.g.O3
> tools/perf/perf record -o prof.data --threads -g --call-graph dwarf,4096 --compression-level=3 -- matrix.gcc.g.O3
> tools/perf/perf record -o prof.data --threads -a
> tools/perf/perf record -D -1 -e cpu-cycles -a --control fd:10,11 -- sleep 30
> tools/perf/perf record --threads -D -1 -e cpu-cycles -a --control fd:10,11 -- sleep 30
>
> tools/perf/perf report -i prof.data
> tools/perf/perf report -i prof.data --call-graph=callee
> tools/perf/perf report -i prof.data --stdio --header
> tools/perf/perf report -i prof.data -D --header
>
> [1] git clone https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git -b perf/record_threads
> [2] https://lore.kernel.org/lkml/[email protected]/
>
> ---
> Alexey Budankov (12):
> perf record: introduce thread affinity and mmap masks
> perf record: introduce thread specific data array
> perf record: introduce thread local variable
> perf record: stop threads in the end of trace streaming
> perf record: start threads in the beginning of trace streaming
> perf record: introduce data file at mmap buffer object
> perf record: init data file at mmap buffer object
> perf record: introduce --threads=<spec> command line option
> perf record: document parallel data streaming mode
> perf report: output data file name in raw trace dump
> perf session: load data directory files for analysis
> perf session: use reader functions to load perf data file
>
> tools/include/linux/bitmap.h | 11 +
> tools/lib/api/fd/array.c | 17 +
> tools/lib/api/fd/array.h | 1 +
> tools/lib/bitmap.c | 14 +
> tools/perf/Documentation/perf-record.txt | 18 +
> tools/perf/builtin-inject.c | 3 +-
> tools/perf/builtin-record.c | 1019 ++++++++++++++++++++--
> tools/perf/util/evlist.c | 16 +
> tools/perf/util/evlist.h | 1 +
> tools/perf/util/mmap.c | 6 +
> tools/perf/util/mmap.h | 6 +
> tools/perf/util/ordered-events.h | 1 +
> tools/perf/util/record.h | 2 +
> tools/perf/util/session.c | 484 +++++++---
> tools/perf/util/session.h | 5 +
> tools/perf/util/tool.h | 3 +-
> 16 files changed, 1398 insertions(+), 209 deletions(-)
>
> --
> 2.24.1
>

2020-11-20 10:03:16

by Namhyung Kim

[permalink] [raw]
Subject: Re: [PATCH v3 01/12] perf record: introduce thread affinity and mmap masks

On Mon, Nov 16, 2020 at 03:14:50PM +0300, Alexey Budankov wrote:
>
> Introduce affinity and mmap thread masks. Thread affinity mask
> defines cpus that a thread is allowed to run on. Thread maps
> mask defines mmap data buffers the thread serves to stream
> profiling data from.
>
> Signed-off-by: Alexey Budankov <[email protected]>
> ---
> tools/perf/builtin-record.c | 116 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 116 insertions(+)
>
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index adf311d15d3d..82f009703ad7 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
[SNIP]
> +static int record__alloc_thread_masks(struct record *rec, int nr_threads, int nr_bits)
> +{
> + int t, ret;
> +
> + rec->thread_masks = zalloc(nr_threads * sizeof(*(rec->thread_masks)));
> + if (!rec->thread_masks) {
> + pr_err("Failed to allocate thread masks\n");
> + return -ENOMEM;
> + }
> +
> + for (t = 0; t < nr_threads; t++) {
> + ret = record__thread_mask_alloc(&rec->thread_masks[t], nr_bits);
> + if (ret)
> + return ret;
> + record__thread_mask_clear(&rec->thread_masks[t]);
> + }
> +
> + return 0;
> +}
> +static int record__init_thread_default_masks(struct record *rec, struct perf_cpu_map *cpus)
> +{
> + int ret;
> +
> + ret = record__alloc_thread_masks(rec, 1, cpu__max_cpu());
> + if (ret)
> + return ret;
> +
> + record__mmap_cpu_mask_init(&rec->thread_masks->maps, cpus);
> +
> + rec->nr_threads = 1;
> +
> + return 0;
> +}
> +
> +static int record__init_thread_masks(struct record *rec)
> +{
> + struct perf_cpu_map *cpus = rec->evlist->core.cpus;
> +
> + return record__init_thread_default_masks(rec, cpus);
> +}
> +
> +static int record__fini_thread_masks(struct record *rec)
> +{
> + int t;
> +
> + for (t = 0; t < rec->nr_threads; t++)
> + record__thread_mask_free(&rec->thread_masks[t]);

The allocation of rec->thread_masks might have failed by the time this runs.

Thanks
Namhyung


> +
> + zfree(&rec->thread_masks);
> +
> + rec->nr_threads = 0;
> +
> + return 0;
> +}
> +
> int cmd_record(int argc, const char **argv)
> {
> int err;
> @@ -2821,6 +2930,12 @@ int cmd_record(int argc, const char **argv)
> goto out;
> }
>
> + err = record__init_thread_masks(rec);
> + if (err) {
> + pr_err("record__init_thread_masks failed, error %d\n", err);
> + goto out;
> + }
> +
> if (rec->opts.nr_cblocks > nr_cblocks_max)
> rec->opts.nr_cblocks = nr_cblocks_max;
> pr_debug("nr_cblocks: %d\n", rec->opts.nr_cblocks);
> @@ -2839,6 +2954,7 @@ int cmd_record(int argc, const char **argv)
> symbol__exit();
> auxtrace_record__free(rec->itr);
> out_opts:
> + record__fini_thread_masks(rec);
> evlist__close_control(rec->opts.ctl_fd, rec->opts.ctl_fd_ack, &rec->opts.ctl_fd_close);
> return err;
> }
> --
> 2.24.1
>

2020-11-20 10:18:13

by Namhyung Kim

[permalink] [raw]
Subject: Re: [PATCH v3 02/12] perf record: introduce thread specific data array

On Mon, Nov 16, 2020 at 03:15:42PM +0300, Alexey Budankov wrote:
>
> Introduce thread specific data object and array of such objects
> to store and manage thread local data. Implement functions to
> allocate, initialize, finalize and release thread specific data.
>
> Thread local maps and overwrite_maps arrays keep pointers to
> mmap buffer objects to serve according to maps thread mask.
> Thread local pollfd array keeps event fds connected to mmaps
> buffers according to maps thread mask.
>
> Thread control commands are delivered via thread local comm pipes
> and ctlfd_pos fd. External control commands (--control option)
> are delivered via evlist ctlfd_pos fd and handled by the main
> tool thread.
>
> Signed-off-by: Alexey Budankov <[email protected]>
> ---
> tools/lib/api/fd/array.c | 17 ++++
> tools/lib/api/fd/array.h | 1 +
> tools/perf/builtin-record.c | 191 +++++++++++++++++++++++++++++++++++-
> 3 files changed, 206 insertions(+), 3 deletions(-)
>
> diff --git a/tools/lib/api/fd/array.c b/tools/lib/api/fd/array.c
> index 5e6cb9debe37..de8bcbaea3f1 100644
> --- a/tools/lib/api/fd/array.c
> +++ b/tools/lib/api/fd/array.c
> @@ -88,6 +88,23 @@ int fdarray__add(struct fdarray *fda, int fd, short revents, enum fdarray_flags
> return pos;
> }
>
> +int fdarray__clone(struct fdarray *fda, int pos, struct fdarray *base)
> +{
> + struct pollfd *entry;
> + int npos;
> +
> + if (pos >= base->nr)
> + return -EINVAL;
> +
> + entry = &base->entries[pos];
> +
> + npos = fdarray__add(fda, entry->fd, entry->events, base->priv[pos].flags);
> + if (npos >= 0)
> + fda->priv[npos] = base->priv[pos];
> +
> + return npos;
> +}
> +
> int fdarray__filter(struct fdarray *fda, short revents,
> void (*entry_destructor)(struct fdarray *fda, int fd, void *arg),
> void *arg)
> diff --git a/tools/lib/api/fd/array.h b/tools/lib/api/fd/array.h
> index 7fcf21a33c0c..4a03da7f1fc1 100644
> --- a/tools/lib/api/fd/array.h
> +++ b/tools/lib/api/fd/array.h
> @@ -42,6 +42,7 @@ struct fdarray *fdarray__new(int nr_alloc, int nr_autogrow);
> void fdarray__delete(struct fdarray *fda);
>
> int fdarray__add(struct fdarray *fda, int fd, short revents, enum fdarray_flags flags);
> +int fdarray__clone(struct fdarray *fda, int pos, struct fdarray *base);
> int fdarray__poll(struct fdarray *fda, int timeout);
> int fdarray__filter(struct fdarray *fda, short revents,
> void (*entry_destructor)(struct fdarray *fda, int fd, void *arg),
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index 82f009703ad7..765a90e38f69 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -56,6 +56,7 @@
> #include <poll.h>
> #include <pthread.h>
> #include <unistd.h>
> +#include <sys/syscall.h>
> #include <sched.h>
> #include <signal.h>
> #ifdef HAVE_EVENTFD_SUPPORT
> @@ -90,6 +91,24 @@ struct thread_mask {
> struct mmap_cpu_mask affinity;
> };
>
> +struct thread_data {
> + pid_t tid;
> + struct thread_mask *mask;
> + struct {
> + int msg[2];
> + int ack[2];
> + } comm;

I think the name 'comm' is misleading as we already have the
thread's comm.


> + struct fdarray pollfd;
> + int ctlfd_pos;
> + struct mmap **maps;
> + struct mmap **overwrite_maps;
> + int nr_mmaps;
> + struct record *rec;
> + unsigned long long samples;
> + unsigned long waking;
> + u64 bytes_written;
> +};
> +
> struct record {
> struct perf_tool tool;
> struct record_opts opts;
> @@ -114,6 +133,7 @@ struct record {
> struct mmap_cpu_mask affinity_mask;
> unsigned long output_max_size; /* = 0: unlimited */
> struct thread_mask *thread_masks;
> + struct thread_data *thread_data;
> int nr_threads;
> };
>
> @@ -842,9 +862,168 @@ static int record__kcore_copy(struct machine *machine, struct perf_data *data)
> return kcore_copy(from_dir, kcore_dir);
> }
>
> +static int record__thread_data_init_comm(struct thread_data *thread_data)
> +{
> + if (pipe(thread_data->comm.msg) || pipe(thread_data->comm.ack)) {
> + pr_err("Failed to create thread comm pipes, error %m\n");
> + return -ENOMEM;
> + }
> +
> + pr_debug("thread_data[%p]: msg=[%d,%d], ack=[%d,%d]\n", thread_data,
> + thread_data->comm.msg[0], thread_data->comm.msg[1],
> + thread_data->comm.ack[0], thread_data->comm.ack[1]);
> +
> + return 0;
> +}
> +
> +static int record__thread_data_init_maps(struct thread_data *thread_data, struct evlist *evlist)
> +{
> + int m, tm, nr_mmaps = evlist->core.nr_mmaps;
> + struct mmap *mmap = evlist->mmap;
> + struct mmap *overwrite_mmap = evlist->overwrite_mmap;
> + struct perf_cpu_map *cpus = evlist->core.cpus;
> +
> + thread_data->nr_mmaps = bitmap_weight(thread_data->mask->maps.bits, thread_data->mask->maps.nbits);
> + if (mmap) {
> + thread_data->maps = zalloc(thread_data->nr_mmaps * sizeof(struct mmap *));
> + if (!thread_data->maps) {
> + pr_err("Failed to allocate maps thread data\n");
> + return -ENOMEM;
> + }
> + }
> + if (overwrite_mmap) {
> + thread_data->overwrite_maps = zalloc(thread_data->nr_mmaps * sizeof(struct mmap *));
> + if (!thread_data->overwrite_maps) {
> + pr_err("Failed to allocate overwrite maps thread data\n");
> + return -ENOMEM;
> + }
> + }
> + pr_debug("thread_data[%p]: nr_mmaps=%d, maps=%p, overwrite_maps=%p\n", thread_data,
> + thread_data->nr_mmaps, thread_data->maps, thread_data->overwrite_maps);
> +
> + for (m = 0, tm = 0; m < nr_mmaps && tm < thread_data->nr_mmaps; m++) {
> + if (test_bit(cpus->map[m], thread_data->mask->maps.bits)) {
> + if (thread_data->maps) {
> + thread_data->maps[tm] = &mmap[m];
> + pr_debug("thread_data[%p]: maps[%d] -> mmap[%d], cpus[%d]\n",
> + thread_data, tm, m, cpus->map[m]);
> + }
> + if (thread_data->overwrite_maps) {
> + thread_data->overwrite_maps[tm] = &overwrite_mmap[m];
> + pr_debug("thread_data[%p]: overwrite_maps[%d] -> overwrite_mmap[%d], cpus[%d]\n",
> + thread_data, tm, m, cpus->map[m]);

I'm afraid this will add too many debug messages for verbose=1. Maybe
we can demote some to pr_debug2()?


> + }
> + tm++;
> + }
> + }
> +
> + return 0;
> +}
> +
> +static int record__thread_data_init_pollfd(struct thread_data *thread_data, struct evlist *evlist)
> +{
> + int f, tm, pos;
> + struct mmap *map, *overwrite_map;
> +
> + fdarray__init(&thread_data->pollfd, 64);
> +
> + for (tm = 0; tm < thread_data->nr_mmaps; tm++) {
> + map = thread_data->maps ? thread_data->maps[tm] : NULL;
> + overwrite_map = thread_data->overwrite_maps ? thread_data->overwrite_maps[tm] : NULL;
> +
> + for (f = 0; f < evlist->core.pollfd.nr; f++) {
> + void *ptr = evlist->core.pollfd.priv[f].ptr;
> +
> + if ((map && ptr == map) || (overwrite_map && ptr == overwrite_map)) {
> + pos = fdarray__clone(&thread_data->pollfd, f, &evlist->core.pollfd);
> + if (pos < 0)
> + return pos;
> + pr_debug("thread_data[%p]: pollfd[%d] <- event_fd=%d\n",
> + thread_data, pos, evlist->core.pollfd.entries[f].fd);
> + }
> + }
> + }
> +
> + return 0;
> +}
> +
> +static int record__alloc_thread_data(struct record *rec, struct evlist *evlist)
> +{
> + int t, ret;
> + struct thread_data *thread_data;
> +
> + thread_data = zalloc(rec->nr_threads * sizeof(*(rec->thread_data)));
> + if (!thread_data) {
> + pr_err("Failed to allocate thread data\n");
> + return -ENOMEM;
> + }
> +
> + for (t = 0; t < rec->nr_threads; t++) {
> + thread_data[t].rec = rec;
> + thread_data[t].mask = &rec->thread_masks[t];
> + ret = record__thread_data_init_maps(&thread_data[t], evlist);
> + if (ret)
> + return ret;

This and other places that return in the middle will leak the
thread_data.
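
One way to unwind would be to publish the pointer early and funnel all
error paths through a single cleanup label, just as a sketch (it assumes
record__free_thread_data() can cope with entries that were only partially
initialized; the comm pipe and ctlfd setup would follow the same pattern):

	rec->thread_data = thread_data;

	for (t = 0; t < rec->nr_threads; t++) {
		thread_data[t].rec = rec;
		thread_data[t].mask = &rec->thread_masks[t];
		ret = record__thread_data_init_maps(&thread_data[t], evlist);
		if (ret)
			goto out_free;
		ret = record__thread_data_init_pollfd(&thread_data[t], evlist);
		if (ret)
			goto out_free;
	}

	return 0;

out_free:
	record__free_thread_data(rec);
	return ret;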


> + ret = record__thread_data_init_pollfd(&thread_data[t], evlist);
> + if (ret)
> + return ret;
> + if (t) {
> + thread_data[t].tid = -1;
> + ret = record__thread_data_init_comm(&thread_data[t]);
> + if (ret)
> + return ret;
> + thread_data[t].ctlfd_pos = fdarray__add(&thread_data[t].pollfd,
> + thread_data[t].comm.msg[0],
> + POLLIN | POLLERR | POLLHUP,
> + fdarray_flag__nonfilterable);
> + if (thread_data[t].ctlfd_pos < 0)
> + return -ENOMEM;
> + pr_debug("thread_data[%p]: pollfd[%d] <- ctl_fd=%d\n",
> + thread_data, thread_data[t].ctlfd_pos,
> + thread_data[t].comm.msg[0]);
> + } else {
> + thread_data[t].tid = syscall(SYS_gettid);
> + if (evlist->ctl_fd.pos == -1)
> + continue;
> + thread_data[t].ctlfd_pos = fdarray__clone(&thread_data[t].pollfd,
> + evlist->ctl_fd.pos,
> + &evlist->core.pollfd);
> + if (ret < 0)
> + return ret;

You should check ctlfd_pos instead.
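
i.e. something like (a sketch):

	thread_data[t].ctlfd_pos = fdarray__clone(&thread_data[t].pollfd,
						  evlist->ctl_fd.pos,
						  &evlist->core.pollfd);
	if (thread_data[t].ctlfd_pos < 0)
		return thread_data[t].ctlfd_pos;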

> + pr_debug("thread_data[%p]: pollfd[%d] <- ctl_fd=%d\n",
> + thread_data, thread_data[t].ctlfd_pos,
> + evlist->core.pollfd.entries[evlist->ctl_fd.pos].fd);
> + }
> + }
> +
> + rec->thread_data = thread_data;
> +
> + return 0;
> +}
> +
> +static int record__free_thread_data(struct record *rec)
> +{
> + int t;
> +
> + for (t = 0; t < rec->nr_threads; t++) {
> + close(rec->thread_data[t].comm.msg[0]);
> + close(rec->thread_data[t].comm.msg[1]);
> + close(rec->thread_data[t].comm.ack[0]);
> + close(rec->thread_data[t].comm.ack[1]);
> + zfree(&rec->thread_data[t].maps);
> + zfree(&rec->thread_data[t].overwrite_maps);
> + fdarray__exit(&rec->thread_data[t].pollfd);

The rec->thread_data might not be set.
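
e.g. an early guard (a sketch):

	if (rec->thread_data == NULL)
		return 0;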

Thanks,
Namhyung


> + }
> +
> + zfree(&rec->thread_data);
> +
> + return 0;
> +}
> +
> static int record__mmap_evlist(struct record *rec,
> struct evlist *evlist)
> {
> + int ret;
> struct record_opts *opts = &rec->opts;
> bool auxtrace_overwrite = opts->auxtrace_snapshot_mode ||
> opts->auxtrace_sample_mode;
> @@ -875,6 +1054,14 @@ static int record__mmap_evlist(struct record *rec,
> return -EINVAL;
> }
> }
> +
> + if (evlist__initialize_ctlfd(evlist, opts->ctl_fd, opts->ctl_fd_ack))
> + return -1;
> +
> + ret = record__alloc_thread_data(rec, evlist);
> + if (ret)
> + return ret;
> +
> return 0;
> }
>
> @@ -1845,9 +2032,6 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
> perf_evlist__start_workload(rec->evlist);
> }
>
> - if (evlist__initialize_ctlfd(rec->evlist, opts->ctl_fd, opts->ctl_fd_ack))
> - goto out_child;
> -
> if (opts->initial_delay) {
> pr_info(EVLIST_DISABLED_MSG);
> if (opts->initial_delay > 0) {
> @@ -1998,6 +2182,7 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
> record__synthesize_workload(rec, true);
>
> out_child:
> + record__free_thread_data(rec);
> evlist__finalize_ctlfd(rec->evlist);
> record__mmap_read_all(rec, true);
> record__aio_mmap_read_sync(rec);
> --
> 2.24.1
>
>

2020-11-20 10:23:36

by Namhyung Kim

[permalink] [raw]
Subject: Re: [PATCH v3 03/12] perf record: introduce thread local variable

On Mon, Nov 16, 2020 at 03:16:19PM +0300, Alexey Budankov wrote:
>
> Introduce thread local variable and use it for threaded trace streaming.
> Use thread affinity mask instead of the record affinity mask in affinity modes.
> Introduce and use the evlist__ctlfd_update() function to propagate external
> control commands to the global evlist object.
>
> Signed-off-by: Alexey Budankov <[email protected]>
> ---
> tools/perf/builtin-record.c | 137 ++++++++++++++++++++++++------------
> tools/perf/util/evlist.c | 16 +++++
> tools/perf/util/evlist.h | 1 +
> 3 files changed, 109 insertions(+), 45 deletions(-)
>
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index 765a90e38f69..e41e1cd90168 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
[SNIP]
> @@ -2114,20 +2165,26 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
> alarm(rec->switch_output.time);
> }
>
> - if (hits == rec->samples) {
> + if (hits == thread->samples) {
> if (done || draining)
> break;
> - err = evlist__poll(rec->evlist, -1);
> + err = fdarray__poll(&thread->pollfd, -1);
> /*
> * Propagate error, only if there's any. Ignore positive
> * number of returned events and interrupt error.
> */
> if (err > 0 || (err < 0 && errno == EINTR))
> err = 0;
> - waking++;
> + thread->waking++;
>
> - if (evlist__filter_pollfd(rec->evlist, POLLERR | POLLHUP) == 0)
> + if (fdarray__filter(&thread->pollfd, POLLERR | POLLHUP,
> + record__thread_munmap_filtered, NULL) == 0)
> draining = true;
> +
> + if (thread->ctlfd_pos != -1) {

Isn't it only for the first thread? I guess all threads should have
a non-negative ctlfd_pos, so this check seems meaningless, no?

Thanks,
Namhyung


> + evlist__ctlfd_update(rec->evlist,
> + &thread->pollfd.entries[thread->ctlfd_pos]);
> + }
> }
>
> if (evlist__ctlfd_process(rec->evlist, &cmd) > 0) {
> @@ -2175,18 +2232,20 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
> goto out_child;
> }
>
> - if (!quiet)
> - fprintf(stderr, "[ perf record: Woken up %ld times to write data ]\n", waking);
> -
> if (target__none(&rec->opts.target))
> record__synthesize_workload(rec, true);
>
> out_child:
> + record__stop_threads(rec, &waking);
> +out_free_threads:
> record__free_thread_data(rec);
> evlist__finalize_ctlfd(rec->evlist);
> record__mmap_read_all(rec, true);
> record__aio_mmap_read_sync(rec);
>
> + if (!quiet)
> + fprintf(stderr, "[ perf record: Woken up %ld times to write data ]\n", waking);
> +
> if (rec->session->bytes_transferred && rec->session->bytes_compressed) {
> ratio = (float)rec->session->bytes_transferred/(float)rec->session->bytes_compressed;
> session->header.env.comp_ratio = ratio + 0.5;
> @@ -2995,17 +3054,6 @@ int cmd_record(int argc, const char **argv)
>
> symbol__init(NULL);
>
> - if (rec->opts.affinity != PERF_AFFINITY_SYS) {
> - rec->affinity_mask.nbits = cpu__max_cpu();
> - rec->affinity_mask.bits = bitmap_alloc(rec->affinity_mask.nbits);
> - if (!rec->affinity_mask.bits) {
> - pr_err("Failed to allocate thread mask for %zd cpus\n", rec->affinity_mask.nbits);
> - err = -ENOMEM;
> - goto out_opts;
> - }
> - pr_debug2("thread mask[%zd]: empty\n", rec->affinity_mask.nbits);
> - }
> -
> err = record__auxtrace_init(rec);
> if (err)
> goto out;
> @@ -3134,7 +3182,6 @@ int cmd_record(int argc, const char **argv)
>
> err = __cmd_record(&record, argc, argv);
> out:
> - bitmap_free(rec->affinity_mask.bits);
> evlist__delete(rec->evlist);
> symbol__exit();
> auxtrace_record__free(rec->itr);
> diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
> index 8bdf3d2c907c..758a4896fedd 100644
> --- a/tools/perf/util/evlist.c
> +++ b/tools/perf/util/evlist.c
> @@ -1970,6 +1970,22 @@ int evlist__ctlfd_process(struct evlist *evlist, enum evlist_ctl_cmd *cmd)
> return err;
> }
>
> +int evlist__ctlfd_update(struct evlist *evlist, struct pollfd *update)
> +{
> + int ctlfd_pos = evlist->ctl_fd.pos;
> + struct pollfd *entries = evlist->core.pollfd.entries;
> +
> + if (!evlist__ctlfd_initialized(evlist))
> + return 0;
> +
> + if (entries[ctlfd_pos].fd != update->fd ||
> + entries[ctlfd_pos].events != update->events)
> + return -1;
> +
> + entries[ctlfd_pos].revents = update->revents;
> + return 0;
> +}
> +
> struct evsel *evlist__find_evsel(struct evlist *evlist, int idx)
> {
> struct evsel *evsel;
> diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
> index e1a450322bc5..9b73d6ccf066 100644
> --- a/tools/perf/util/evlist.h
> +++ b/tools/perf/util/evlist.h
> @@ -380,6 +380,7 @@ void evlist__close_control(int ctl_fd, int ctl_fd_ack, bool *ctl_fd_close);
> int evlist__initialize_ctlfd(struct evlist *evlist, int ctl_fd, int ctl_fd_ack);
> int evlist__finalize_ctlfd(struct evlist *evlist);
> bool evlist__ctlfd_initialized(struct evlist *evlist);
> +int evlist__ctlfd_update(struct evlist *evlist, struct pollfd *update);
> int evlist__ctlfd_process(struct evlist *evlist, enum evlist_ctl_cmd *cmd);
> int evlist__ctlfd_ack(struct evlist *evlist);
>
> --
> 2.24.1
>
>

2020-11-20 10:30:44

by Namhyung Kim

[permalink] [raw]
Subject: Re: [PATCH v3 06/12] perf record: introduce data file at mmap buffer object

On Mon, Nov 16, 2020 at 03:18:50PM +0300, Alexey Budankov wrote:
>
> Introduce data file and compressor objects into the mmap object so
> they can be used to process and store the data stream from the
> corresponding kernel data buffer. Introduce bytes_transferred
> and bytes_compressed stats so they capture statistics for
> the related data buffer transfers. Make use of the introduced
> per-mmap file, compressor and stats when they are initialized
> and available.

So the bytes_transferred == bytes read from the mmap buffer, right?

Thanks,
Namhyung

>
> Signed-off-by: Alexey Budankov <[email protected]>
> ---
> tools/perf/builtin-record.c | 64 +++++++++++++++++++++++++++++--------
> tools/perf/util/mmap.c | 6 ++++
> tools/perf/util/mmap.h | 6 ++++
> 3 files changed, 63 insertions(+), 13 deletions(-)
>
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index 13773739bedc..779676531edf 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -188,11 +188,19 @@ static int record__write(struct record *rec, struct mmap *map __maybe_unused,
> {
> struct perf_data_file *file = &rec->session->data->file;
>
> + if (map && map->file)
> + file = map->file;
> +
> if (perf_data_file__write(file, bf, size) < 0) {
> pr_err("failed to write perf data, error: %m\n");
> return -1;
> }
>
> + if (map && map->file) {
> + map->bytes_written += size;
> + return 0;
> + }
> +
> rec->bytes_written += size;
>
> if (record__output_max_size_exceeded(rec) && !done) {
> @@ -210,8 +218,8 @@ static int record__write(struct record *rec, struct mmap *map __maybe_unused,
>
> static int record__aio_enabled(struct record *rec);
> static int record__comp_enabled(struct record *rec);
> -static size_t zstd_compress(struct perf_session *session, void *dst, size_t dst_size,
> - void *src, size_t src_size);
> +static size_t zstd_compress(struct zstd_data *data,
> + void *dst, size_t dst_size, void *src, size_t src_size);
>
> #ifdef HAVE_AIO_SUPPORT
> static int record__aio_write(struct aiocb *cblock, int trace_fd,
> @@ -345,9 +353,13 @@ static int record__aio_pushfn(struct mmap *map, void *to, void *buf, size_t size
> */
>
> if (record__comp_enabled(aio->rec)) {
> - size = zstd_compress(aio->rec->session, aio->data + aio->size,
> - mmap__mmap_len(map) - aio->size,
> + struct zstd_data *zstd_data = &aio->rec->session->zstd_data;
> +
> + aio->rec->session->bytes_transferred += size;
> + size = zstd_compress(zstd_data,
> + aio->data + aio->size, mmap__mmap_len(map) - aio->size,
> buf, size);
> + aio->rec->session->bytes_compressed += size;
> } else {
> memcpy(aio->data + aio->size, buf, size);
> }
> @@ -572,8 +584,22 @@ static int record__pushfn(struct mmap *map, void *to, void *bf, size_t size)
> struct record *rec = to;
>
> if (record__comp_enabled(rec)) {
> - size = zstd_compress(rec->session, map->data, mmap__mmap_len(map), bf, size);
> + struct zstd_data *zstd_data = &rec->session->zstd_data;
> +
> + if (map->file) {
> + zstd_data = &map->zstd_data;
> + map->bytes_transferred += size;
> + } else {
> + rec->session->bytes_transferred += size;
> + }
> +
> + size = zstd_compress(zstd_data, map->data, mmap__mmap_len(map), bf, size);
> bf = map->data;
> +
> + if (map->file)
> + map->bytes_compressed += size;
> + else
> + rec->session->bytes_compressed += size;
> }
>
> thread->samples++;
> @@ -1291,18 +1317,15 @@ static size_t process_comp_header(void *record, size_t increment)
> return size;
> }
>
> -static size_t zstd_compress(struct perf_session *session, void *dst, size_t dst_size,
> +static size_t zstd_compress(struct zstd_data *zstd_data, void *dst, size_t dst_size,
> void *src, size_t src_size)
> {
> size_t compressed;
> size_t max_record_size = PERF_SAMPLE_MAX_SIZE - sizeof(struct perf_record_compressed) - 1;
>
> - compressed = zstd_compress_stream_to_records(&session->zstd_data, dst, dst_size, src, src_size,
> + compressed = zstd_compress_stream_to_records(zstd_data, dst, dst_size, src, src_size,
> max_record_size, process_comp_header);
>
> - session->bytes_transferred += src_size;
> - session->bytes_compressed += compressed;
> -
> return compressed;
> }
>
> @@ -1959,8 +1982,9 @@ static int record__start_threads(struct record *rec)
>
> static int record__stop_threads(struct record *rec, unsigned long *waking)
> {
> - int t;
> + int t, tm;
> struct thread_data *thread_data = rec->thread_data;
> + u64 bytes_written = 0, bytes_transferred = 0, bytes_compressed = 0;
>
> for (t = 1; t < rec->nr_threads; t++)
> record__terminate_thread(&thread_data[t]);
> @@ -1968,9 +1992,23 @@ static int record__stop_threads(struct record *rec, unsigned long *waking)
> for (t = 0; t < rec->nr_threads; t++) {
> rec->samples += thread_data[t].samples;
> *waking += thread_data[t].waking;
> - pr_debug("threads[%d]: samples=%lld, wakes=%ld, trasferred=%ld, compressed=%ld\n",
> + for (tm = 0; tm < thread_data[t].nr_mmaps; tm++) {
> + if (thread_data[t].maps) {
> + bytes_transferred += thread_data[t].maps[tm]->bytes_transferred;
> + bytes_compressed += thread_data[t].maps[tm]->bytes_compressed;
> + bytes_written += thread_data[t].maps[tm]->bytes_written;
> + }
> + if (thread_data[t].overwrite_maps) {
> + bytes_transferred += thread_data[t].overwrite_maps[tm]->bytes_transferred;
> + bytes_compressed += thread_data[t].overwrite_maps[tm]->bytes_compressed;
> + bytes_written += thread_data[t].overwrite_maps[tm]->bytes_written;
> + }
> + }
> + rec->session->bytes_transferred += bytes_transferred;
> + rec->session->bytes_compressed += bytes_compressed;
> + pr_debug("threads[%d]: samples=%lld, wakes=%ld, trasferred=%ld, compressed=%ld, written=%ld\n",
> thread_data[t].tid, thread_data[t].samples, thread_data[t].waking,
> - rec->session->bytes_transferred, rec->session->bytes_compressed);
> + bytes_transferred, bytes_compressed, bytes_written);
> }
>
> return 0;
> diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
> index ab7108d22428..a2c5e4237592 100644
> --- a/tools/perf/util/mmap.c
> +++ b/tools/perf/util/mmap.c
> @@ -230,6 +230,8 @@ void mmap__munmap(struct mmap *map)
> {
> bitmap_free(map->affinity_mask.bits);
>
> + zstd_fini(&map->zstd_data);
> +
> perf_mmap__aio_munmap(map);
> if (map->data != NULL) {
> munmap(map->data, mmap__mmap_len(map));
> @@ -291,6 +293,10 @@ int mmap__mmap(struct mmap *map, struct mmap_params *mp, int fd, int cpu)
> map->core.flush = mp->flush;
>
> map->comp_level = mp->comp_level;
> + if (zstd_init(&map->zstd_data, map->comp_level)) {
> + pr_debug2("failed to init mmap commpressor, error %d\n", errno);
> + return -1;
> + }
>
> if (map->comp_level && !perf_mmap__aio_enabled(map)) {
> map->data = mmap(NULL, mmap__mmap_len(map), PROT_READ|PROT_WRITE,
> diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h
> index 9d5f589f02ae..c04ca4b5adf5 100644
> --- a/tools/perf/util/mmap.h
> +++ b/tools/perf/util/mmap.h
> @@ -13,6 +13,7 @@
> #endif
> #include "auxtrace.h"
> #include "event.h"
> +#include "util/compress.h"
>
> struct aiocb;
>
> @@ -43,6 +44,11 @@ struct mmap {
> struct mmap_cpu_mask affinity_mask;
> void *data;
> int comp_level;
> + struct perf_data_file *file;
> + struct zstd_data zstd_data;
> + u64 bytes_transferred;
> + u64 bytes_compressed;
> + u64 bytes_written;
> };
>
> struct mmap_params {
> --
> 2.24.1
>
>

2020-11-20 10:53:48

by Namhyung Kim

[permalink] [raw]
Subject: Re: [PATCH v3 07/12] perf record: init data file at mmap buffer object

On Mon, Nov 16, 2020 at 03:19:41PM +0300, Alexey Budankov wrote:
>
> Initialize the data files referenced by mmap buffer objects so trace data
> can be written into several data files located in the data directory.
>
> Signed-off-by: Alexey Budankov <[email protected]>
> ---
> tools/perf/builtin-record.c | 41 ++++++++++++++++++++++++++++++-------
> tools/perf/util/record.h | 1 +
> 2 files changed, 35 insertions(+), 7 deletions(-)
>
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index 779676531edf..f5e5175da6a1 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -158,6 +158,11 @@ static const char *affinity_tags[PERF_AFFINITY_MAX] = {
> "SYS", "NODE", "CPU"
> };
>
> +static int record__threads_enabled(struct record *rec)
> +{
> + return rec->opts.threads_spec;
> +}
> +
> static bool switch_output_signal(struct record *rec)
> {
> return rec->switch_output.signal &&
> @@ -1060,7 +1065,7 @@ static int record__free_thread_data(struct record *rec)
> static int record__mmap_evlist(struct record *rec,
> struct evlist *evlist)
> {
> - int ret;
> + int i, ret;
> struct record_opts *opts = &rec->opts;
> bool auxtrace_overwrite = opts->auxtrace_snapshot_mode ||
> opts->auxtrace_sample_mode;
> @@ -1099,6 +1104,18 @@ static int record__mmap_evlist(struct record *rec,
> if (ret)
> return ret;
>
> + if (record__threads_enabled(rec)) {
> + ret = perf_data__create_dir(&rec->data, evlist->core.nr_mmaps);
> + if (ret)
> + return ret;
> + for (i = 0; i < evlist->core.nr_mmaps; i++) {
> + if (evlist->mmap)
> + evlist->mmap[i].file = &rec->data.dir.files[i];
> + if (evlist->overwrite_mmap)
> + evlist->overwrite_mmap[i].file = &rec->data.dir.files[i];
> + }
> + }
> +
> return 0;
> }
>
> @@ -1400,8 +1417,12 @@ static int record__mmap_read_evlist(struct record *rec, struct evlist *evlist,
> /*
> * Mark the round finished in case we wrote
> * at least one event.
> + *
> + * No need for round events in directory mode,
> + * because per-cpu maps and files have data
> + * sorted by kernel.

But it's not just for single cpu since task can migrate so we need to
look at other cpu's data too. Thus we use the ordered events queue
and round events help to determine when to flush the data. Without
the round events, it'd consume huge amount of memory during report.

If we separate tracking records and process them first, we should be
able to process samples immediately without sorting them in the
ordered event queue. This will save both cpu cycles and memory
footprint significantly IMHO.

Thanks,
Namhyung


> */
> - if (bytes_written != rec->bytes_written)
> + if (!record__threads_enabled(rec) && bytes_written != rec->bytes_written)
> rc = record__write(rec, NULL, &finished_round_event, sizeof(finished_round_event));
>
> if (overwrite)
> @@ -1514,7 +1535,9 @@ static void record__init_features(struct record *rec)
> if (!rec->opts.use_clockid)
> perf_header__clear_feat(&session->header, HEADER_CLOCK_DATA);
>
> - perf_header__clear_feat(&session->header, HEADER_DIR_FORMAT);
> + if (!record__threads_enabled(rec))
> + perf_header__clear_feat(&session->header, HEADER_DIR_FORMAT);
> +
> if (!record__comp_enabled(rec))
> perf_header__clear_feat(&session->header, HEADER_COMPRESSED);
>
> @@ -1525,15 +1548,21 @@ static void
> record__finish_output(struct record *rec)
> {
> struct perf_data *data = &rec->data;
> - int fd = perf_data__fd(data);
> + int i, fd = perf_data__fd(data);
>
> if (data->is_pipe)
> return;
>
> rec->session->header.data_size += rec->bytes_written;
> data->file.size = lseek(perf_data__fd(data), 0, SEEK_CUR);
> + if (record__threads_enabled(rec)) {
> + for (i = 0; i < data->dir.nr; i++)
> + data->dir.files[i].size = lseek(data->dir.files[i].fd, 0, SEEK_CUR);
> + }
>
> if (!rec->no_buildid) {
> + /* this will be recalculated during process_buildids() */
> + rec->samples = 0;
> process_buildids(rec);
>
> if (rec->buildid_all)
> @@ -2438,8 +2467,6 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
> status = err;
>
> record__synthesize(rec, true);
> - /* this will be recalculated during process_buildids() */
> - rec->samples = 0;
>
> if (!err) {
> if (!rec->timestamp_filename) {
> @@ -3179,7 +3206,7 @@ int cmd_record(int argc, const char **argv)
>
> }
>
> - if (rec->opts.kcore)
> + if (rec->opts.kcore || record__threads_enabled(rec))
> rec->data.is_dir = true;
>
> if (rec->opts.comp_level != 0) {
> diff --git a/tools/perf/util/record.h b/tools/perf/util/record.h
> index 266760ac9143..9c13a39cc58f 100644
> --- a/tools/perf/util/record.h
> +++ b/tools/perf/util/record.h
> @@ -74,6 +74,7 @@ struct record_opts {
> int ctl_fd;
> int ctl_fd_ack;
> bool ctl_fd_close;
> + int threads_spec;
> };
>
> extern const char * const *record_usage;
> --
> 2.24.1
>

2020-11-20 11:11:27

by Namhyung Kim

[permalink] [raw]
Subject: Re: [PATCH v3 08/12] perf record: introduce --threads=<spec> command line option

On Mon, Nov 16, 2020 at 03:20:26PM +0300, Alexey Budankov wrote:
>
> Provide the --threads option in the perf record command line interface.
> The option can have a value in the form of masks that specify
> cpus to be monitored with data streaming threads and their layout
> in the system topology. The masks can be filtered using the cpu mask
> provided via the -C option.
>
> The specification value can be a user defined list of masks. Masks
> separated by colon define cpus to be monitored by one thread and the
> affinity mask of that thread is separated by slash. For example:
> <cpus mask 1>/<affinity mask 1>:<cpu mask 2>/<affinity mask 2>
> specifies a parallel threads layout that consists of two threads
> with correspondingly assigned cpus to be monitored.
>
> The specification value can be a string e.g. "cpu", "core" or
> "socket" meaning creation of a data streaming thread for every
> cpu, core or socket to monitor distinct cpus or cpus grouped
> by core or socket.
>
> The option provided with no or an empty value defaults to the per-cpu
> parallel threads layout, creating a data streaming thread for every
> cpu being monitored.
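>
> For example (the mask values below are only illustrative):
>
>   perf record --threads=0-3/0:4-7/4 -- <workload>
>   perf record --threads=core -- <workload>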
>
> Feature design and implementation are based on prototypes [1], [2].
>
> [1] git clone https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git -b perf/record_threads
> [2] https://lore.kernel.org/lkml/[email protected]/
>
> Suggested-by: Jiri Olsa <[email protected]>
> Suggested-by: Namhyung Kim <[email protected]>
> Signed-off-by: Alexey Budankov <[email protected]>
> ---
[SNIP]
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index f5e5175da6a1..fd0587d636b2 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -3097,6 +3173,17 @@ static void record__mmap_cpu_mask_init(struct mmap_cpu_mask *mask, struct perf_c
> set_bit(cpus->map[c], mask->bits);
> }
>
> +static void record__mmap_cpu_mask_init_spec(struct mmap_cpu_mask *mask, char *mask_spec)
> +{
> + struct perf_cpu_map *cpus;
> +
> + cpus = perf_cpu_map__new(mask_spec);
> + if (cpus) {
> + record__mmap_cpu_mask_init(mask, cpus);
> + free(cpus);

Would be better to use perf_cpu_map__put().
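
i.e. (a sketch):

	cpus = perf_cpu_map__new(mask_spec);
	if (cpus) {
		record__mmap_cpu_mask_init(mask, cpus);
		perf_cpu_map__put(cpus);
	}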


> + }
> +}
> +
> static int record__alloc_thread_masks(struct record *rec, int nr_threads, int nr_bits)
> {
> int t, ret;
> @@ -3116,6 +3203,196 @@ static int record__alloc_thread_masks(struct record *rec, int nr_threads, int nr
>
> return 0;
> }
> +
> +static int record__init_thread_cpu_masks(struct record *rec, struct perf_cpu_map *cpus)
> +{
> + int t, ret, nr_cpus = perf_cpu_map__nr(cpus);
> +
> + ret = record__alloc_thread_masks(rec, nr_cpus, cpu__max_cpu());
> + if (ret)
> + return ret;
> +
> + rec->nr_threads = nr_cpus;
> + pr_debug("threads: nr_threads=%d\n", rec->nr_threads);
> +
> + for (t = 0; t < rec->nr_threads; t++) {
> + set_bit(cpus->map[t], rec->thread_masks[t].maps.bits);
> + pr_debug("thread_masks[%d]: maps mask [%d]\n", t, cpus->map[t]);
> + set_bit(cpus->map[t], rec->thread_masks[t].affinity.bits);
> + pr_debug("thread_masks[%d]: affinity mask [%d]\n", t, cpus->map[t]);
> + }
> +
> + return 0;
> +}
> +
> +static int record__init_thread_masks_spec(struct record *rec, struct perf_cpu_map *cpus,
> + char **maps_spec, char **affinity_spec, u32 nr_spec)
> +{
> + u32 s;
> + int ret, nr_threads = 0;
> + struct mmap_cpu_mask cpus_mask;
> + struct thread_mask thread_mask, full_mask;
> +
> + ret = record__mmap_cpu_mask_alloc(&cpus_mask, cpu__max_cpu());
> + if (ret)
> + return ret;
> + record__mmap_cpu_mask_init(&cpus_mask, cpus);
> + ret = record__thread_mask_alloc(&thread_mask, cpu__max_cpu());
> + if (ret)
> + return ret;
> + ret = record__thread_mask_alloc(&full_mask, cpu__max_cpu());
> + if (ret)
> + return ret;
> + record__thread_mask_clear(&full_mask);
> +
> + for (s = 0; s < nr_spec; s++) {
> + record__thread_mask_clear(&thread_mask);
> +
> + record__mmap_cpu_mask_init_spec(&thread_mask.maps, maps_spec[s]);
> + record__mmap_cpu_mask_init_spec(&thread_mask.affinity, affinity_spec[s]);
> +
> + if (!bitmap_and(thread_mask.maps.bits, thread_mask.maps.bits,
> + cpus_mask.bits, thread_mask.maps.nbits) ||
> + !bitmap_and(thread_mask.affinity.bits, thread_mask.affinity.bits,
> + cpus_mask.bits, thread_mask.affinity.nbits))
> + continue;
> +
> + ret = record__thread_mask_intersects(&thread_mask, &full_mask);
> + if (ret)
> + return ret;
> + record__thread_mask_or(&full_mask, &full_mask, &thread_mask);
> +
> + rec->thread_masks = realloc(rec->thread_masks,
> + (nr_threads + 1) * sizeof(struct thread_mask));
> + if (!rec->thread_masks) {
> + pr_err("Failed to allocate thread masks\n");
> + return -ENOMEM;

It'll leak previous rec->thread_masks as well as cpu/thread/full masks.


> + }
> + rec->thread_masks[nr_threads] = thread_mask;
> + pr_debug("thread_masks[%d]: addr=", nr_threads);
> + mmap_cpu_mask__scnprintf(&rec->thread_masks[nr_threads].maps, "maps");
> + pr_debug("thread_masks[%d]: addr=", nr_threads);
> + mmap_cpu_mask__scnprintf(&rec->thread_masks[nr_threads].affinity, "affinity");
> + nr_threads++;
> + ret = record__thread_mask_alloc(&thread_mask, cpu__max_cpu());
> + if (ret)
> + return ret;
> + }
> +
> + rec->nr_threads = nr_threads;
> + pr_debug("threads: nr_threads=%d\n", rec->nr_threads);
> +
> + record__mmap_cpu_mask_free(&cpus_mask);
> + record__thread_mask_free(&thread_mask);
> + record__thread_mask_free(&full_mask);
> +
> + return 0;
> +}
> +
> +static int record__init_thread_core_masks(struct record *rec, struct perf_cpu_map *cpus)
> +{
> + int ret;
> + struct cpu_topology *topo;
> +
> + topo = cpu_topology__new();
> + if (!topo)
> + return -EINVAL;
> +
> + ret = record__init_thread_masks_spec(rec, cpus, topo->thread_siblings,
> + topo->thread_siblings, topo->thread_sib);
> + cpu_topology__delete(topo);
> +
> + return ret;
> +}
> +
> +static int record__init_thread_socket_masks(struct record *rec, struct perf_cpu_map *cpus)
> +{
> + int ret;
> + struct cpu_topology *topo;
> +
> + topo = cpu_topology__new();
> + if (!topo)
> + return -EINVAL;
> +
> + ret = record__init_thread_masks_spec(rec, cpus, topo->core_siblings,
> + topo->core_siblings, topo->core_sib);
> + cpu_topology__delete(topo);
> +
> + return ret;
> +}
> +
> +static int record__init_thread_numa_masks(struct record *rec, struct perf_cpu_map *cpus)
> +{
> + u32 s;
> + int ret;
> + char **spec;
> + struct numa_topology *topo;
> +
> + topo = numa_topology__new();
> + if (!topo)
> + return -EINVAL;
> + spec = zalloc(topo->nr * sizeof(char *));
> + if (!spec)
> + return -ENOMEM;

Will leak topo.
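
e.g. (a sketch):

	spec = zalloc(topo->nr * sizeof(char *));
	if (!spec) {
		numa_topology__delete(topo);
		return -ENOMEM;
	}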


> + for (s = 0; s < topo->nr; s++)
> + spec[s] = topo->nodes[s].cpus;
> +
> + ret = record__init_thread_masks_spec(rec, cpus, spec, spec, topo->nr);
> +
> + zfree(&spec);
> +
> + numa_topology__delete(topo);
> +
> + return ret;
> +}
> +
> +static int record__init_thread_user_masks(struct record *rec, struct perf_cpu_map *cpus)
> +{
> + int t, ret;
> + u32 s, nr_spec = 0;
> + char **maps_spec = NULL, **affinity_spec = NULL;
> + char *spec, *spec_ptr, *user_spec, *mask, *mask_ptr;
> +
> + for (t = 0, user_spec = (char *)rec->opts.threads_user_spec; ;t++, user_spec = NULL) {
> + spec = strtok_r(user_spec, ":", &spec_ptr);
> + if (spec == NULL)
> + break;
> + pr_debug(" spec[%d]: %s\n", t, spec);
> + mask = strtok_r(spec, "/", &mask_ptr);
> + if (mask == NULL)
> + break;
> + pr_debug(" maps mask: %s\n", mask);
> + maps_spec = realloc(maps_spec, (nr_spec + 1) * sizeof(char *));
> + if (!maps_spec) {
> + pr_err("Failed to realloc maps_spec\n");
> + return -ENOMEM;

Likewise, you can use a temp variable and bail out to free all
existing specs.
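
Something along these lines (a sketch; 'tmp' and the 'out_free' label
are hypothetical here):

	char **tmp;

	tmp = realloc(maps_spec, (nr_spec + 1) * sizeof(char *));
	if (!tmp) {
		pr_err("Failed to realloc maps_spec\n");
		ret = -ENOMEM;
		goto out_free;	/* frees maps_spec[0..nr_spec-1] and affinity_spec */
	}
	maps_spec = tmp;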

> + }
> + maps_spec[nr_spec] = strdup(mask);
> + mask = strtok_r(NULL, "/", &mask_ptr);
> + if (mask == NULL)
> + break;
> + pr_debug(" affinity mask: %s\n", mask);
> + affinity_spec = realloc(affinity_spec, (nr_spec + 1) * sizeof(char *));
> + if (!maps_spec) {
> + pr_err("Failed to realloc affinity_spec\n");
> + return -ENOMEM;

Ditto.

Thanks,
Namhyung


> + }
> + affinity_spec[nr_spec] = strdup(mask);
> + nr_spec++;
> + }
> +
> + ret = record__init_thread_masks_spec(rec, cpus, maps_spec, affinity_spec, nr_spec);
> +
> + for (s = 0; s < nr_spec; s++) {
> + free(maps_spec[s]);
> + free(affinity_spec[s]);
> + }
> + free(affinity_spec);
> + free(maps_spec);
> +
> + return ret;
> +}
> +
> static int record__init_thread_default_masks(struct record *rec, struct perf_cpu_map *cpus)
> {
> int ret;
> @@ -3133,9 +3410,33 @@ static int record__init_thread_default_masks(struct record *rec, struct perf_cpu
>
> static int record__init_thread_masks(struct record *rec)
> {
> + int ret = 0;
> struct perf_cpu_map *cpus = rec->evlist->core.cpus;
>
> - return record__init_thread_default_masks(rec, cpus);
> + if (!record__threads_enabled(rec))
> + return record__init_thread_default_masks(rec, cpus);
> +
> + switch (rec->opts.threads_spec) {
> + case THREAD_SPEC__CPU:
> + ret = record__init_thread_cpu_masks(rec, cpus);
> + break;
> + case THREAD_SPEC__CORE:
> + ret = record__init_thread_core_masks(rec, cpus);
> + break;
> + case THREAD_SPEC__SOCKET:
> + ret = record__init_thread_socket_masks(rec, cpus);
> + break;
> + case THREAD_SPEC__NUMA:
> + ret = record__init_thread_numa_masks(rec, cpus);
> + break;
> + case THREAD_SPEC__USER:
> + ret = record__init_thread_user_masks(rec, cpus);
> + break;
> + default:
> + break;
> + }
> +
> + return ret;
> }
>
> static int record__fini_thread_masks(struct record *rec)
> @@ -3361,7 +3662,10 @@ int cmd_record(int argc, const char **argv)
>
> err = record__init_thread_masks(rec);
> if (err) {
> - pr_err("record__init_thread_masks failed, error %d\n", err);
> + if (err > 0)
> + pr_err("ERROR: parallel data streaming masks (--threads) intersect.\n");
> + else
> + pr_err("record__init_thread_masks failed, error %d\n", err);
> goto out;
> }
>
> diff --git a/tools/perf/util/record.h b/tools/perf/util/record.h
> index 9c13a39cc58f..7f64ff5da2b2 100644
> --- a/tools/perf/util/record.h
> +++ b/tools/perf/util/record.h
> @@ -75,6 +75,7 @@ struct record_opts {
> int ctl_fd_ack;
> bool ctl_fd_close;
> int threads_spec;
> + const char *threads_user_spec;
> };
>
> extern const char * const *record_usage;
> --
> 2.24.1
>
>

2020-12-15 15:08:55

by Alexei Budankov

[permalink] [raw]
Subject: Re: [PATCH v3 00/12] Introduce threaded trace streaming for basic perf record operation

Hi,

On 20.11.2020 12:45, Namhyung Kim wrote:
> Hi,
>
> Thanks for working on this!

Thanks for your review.
I just spotted the comments for this version.
Sorry for the delay.

>
> On Mon, Nov 16, 2020 at 03:12:47PM +0300, Alexey Budankov wrote:
>>
>> Changes in v3:
>> - avoided skipped redundant patch 3/15
>> - applied "data file" and "data directory" terms allover the patch set
>> - captured Acked-by: tags by Namhyung Kim
>> - avoided braces where don't needed
>> - employed thread local variable for serial trace streaming
>> - added specs for --thread option - core, socket, numa and user defined
>> - added parallel loading of data directory files similar to the prototype [1]
>
> Can you please consider splitting tracing records (FORK/MMAP/...) into
> a separate file? I think this change would put too much burden on the
> perf report side. I'm saying this repeatedly because I'm afraid that
> it'd be harder to change later once we accept this approach/format.

Alexey Bayduraev (in To/Cc) is going to proceed with this work,
so there is a good chance of an updated version soon.

Thanks,
Alexei

2021-03-01 11:19:17

by Bayduraev, Alexey V

[permalink] [raw]
Subject: Re: [PATCH v3 07/12] perf record: init data file at mmap buffer object

Hi,

On 20.11.2020 13:49, Namhyung Kim wrote:
> On Mon, Nov 16, 2020 at 03:19:41PM +0300, Alexey Budankov wrote:

<SNIP>

>>
>> @@ -1400,8 +1417,12 @@ static int record__mmap_read_evlist(struct record *rec, struct evlist *evlist,
>> /*
>> * Mark the round finished in case we wrote
>> * at least one event.
>> + *
>> + * No need for round events in directory mode,
>> + * because per-cpu maps and files have data
>> + * sorted by kernel.
>
> But it's not just for single cpu since task can migrate so we need to
> look at other cpu's data too. Thus we use the ordered events queue
> and round events help to determine when to flush the data. Without
> the round events, it'd consume huge amount of memory during report.
>
> If we separate tracking records and process them first, we should be
> able to process samples immediately without sorting them in the
> ordered event queue. This will save both cpu cycles and memory
> footprint significantly IMHO.
>
> Thanks,
> Namhyung
>

As far as I understand, to split tracing records (FORK/MMAP/COMM) into
a separate file, we need to implement a runtime trace decoder on the
perf-record side to recognize such tracing records coming from the kernel.
Is that what you mean?

IMHO this can be tricky to implement and adds some overhead that can lead
to possible data loss. Do you have any other ideas how to optimize memory
consumption on perf-report side without a runtime trace decoder?
Maybe "round events" would somehow help in directory mode?

BTW In our tool we use another approach: two-pass trace file loading.
The first loads tracing records, the second loads samples.

Thanks,
Alexey

>
>> */
>> - if (bytes_written != rec->bytes_written)
>> + if (!record__threads_enabled(rec) && bytes_written != rec->bytes_written)
>> rc = record__write(rec, NULL, &finished_round_event, sizeof(finished_round_event));
>>
>> if (overwrite)
>> @@ -1514,7 +1535,9 @@ static void record__init_features(struct record *rec)
>> if (!rec->opts.use_clockid)
>> perf_header__clear_feat(&session->header, HEADER_CLOCK_DATA);
>>
>> - perf_header__clear_feat(&session->header, HEADER_DIR_FORMAT);
>> + if (!record__threads_enabled(rec))
>> + perf_header__clear_feat(&session->header, HEADER_DIR_FORMAT);
>> +
>> if (!record__comp_enabled(rec))
>> perf_header__clear_feat(&session->header, HEADER_COMPRESSED);
>>
>> @@ -1525,15 +1548,21 @@ static void
>> record__finish_output(struct record *rec)
>> {
>> struct perf_data *data = &rec->data;
>> - int fd = perf_data__fd(data);
>> + int i, fd = perf_data__fd(data);
>>
>> if (data->is_pipe)
>> return;
>>
>> rec->session->header.data_size += rec->bytes_written;
>> data->file.size = lseek(perf_data__fd(data), 0, SEEK_CUR);
>> + if (record__threads_enabled(rec)) {
>> + for (i = 0; i < data->dir.nr; i++)
>> + data->dir.files[i].size = lseek(data->dir.files[i].fd, 0, SEEK_CUR);
>> + }
>>
>> if (!rec->no_buildid) {
>> + /* this will be recalculated during process_buildids() */
>> + rec->samples = 0;
>> process_buildids(rec);
>>
>> if (rec->buildid_all)
>> @@ -2438,8 +2467,6 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
>> status = err;
>>
>> record__synthesize(rec, true);
>> - /* this will be recalculated during process_buildids() */
>> - rec->samples = 0;
>>
>> if (!err) {
>> if (!rec->timestamp_filename) {
>> @@ -3179,7 +3206,7 @@ int cmd_record(int argc, const char **argv)
>>
>> }
>>
>> - if (rec->opts.kcore)
>> + if (rec->opts.kcore || record__threads_enabled(rec))
>> rec->data.is_dir = true;
>>
>> if (rec->opts.comp_level != 0) {
>> diff --git a/tools/perf/util/record.h b/tools/perf/util/record.h
>> index 266760ac9143..9c13a39cc58f 100644
>> --- a/tools/perf/util/record.h
>> +++ b/tools/perf/util/record.h
>> @@ -74,6 +74,7 @@ struct record_opts {
>> int ctl_fd;
>> int ctl_fd_ack;
>> bool ctl_fd_close;
>> + int threads_spec;
>> };
>>
>> extern const char * const *record_usage;
>> --
>> 2.24.1
>>

2021-03-01 13:36:52

by Bayduraev, Alexey V

[permalink] [raw]
Subject: Re: [PATCH v3 07/12] perf record: init data file at mmap buffer object

On 01.03.2021 14:44, Namhyung Kim wrote:
> Hello,
>
> On Mon, Mar 1, 2021 at 8:16 PM Bayduraev, Alexey V
> <[email protected]> wrote:
>>
>> Hi,
>>
>> On 20.11.2020 13:49, Namhyung Kim wrote:
>>> On Mon, Nov 16, 2020 at 03:19:41PM +0300, Alexey Budankov wrote:
>>
>> <SNIP>
>>
>>>>
>>>> @@ -1400,8 +1417,12 @@ static int record__mmap_read_evlist(struct record *rec, struct evlist *evlist,
>>>> /*
>>>> * Mark the round finished in case we wrote
>>>> * at least one event.
>>>> + *
>>>> + * No need for round events in directory mode,
>>>> + * because per-cpu maps and files have data
>>>> + * sorted by kernel.
>>>
>>> But it's not just for single cpu since task can migrate so we need to
>>> look at other cpu's data too. Thus we use the ordered events queue
>>> and round events help to determine when to flush the data. Without
>>> the round events, it'd consume huge amount of memory during report.
>>>
>>> If we separate tracking records and process them first, we should be
>>> able to process samples immediately without sorting them in the
>>> ordered event queue. This will save both cpu cycles and memory
>>> footprint significantly IMHO.
>>>
>>> Thanks,
>>> Namhyung
>>>
>>
>> As far as I understand, to split tracing records (FORK/MMAP/COMM) into
>> a separate file, we need to implement a runtime trace decoder on the
>> perf-record side to recognize such tracing records coming from the kernel.
>> Is that what you mean?
>
> No, I meant separating the mmap buffers so that the record process
> can save the data without decoding.
>

Thanks,

Do you think this can be implemented only on the user side, by creating a dummy
event and setting the mmap/comm/task flags of struct perf_event_attr?
Or are some changes on the kernel side necessary?
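
For reference, such a dummy event would be set up on the user side roughly
like this (a sketch, not a complete answer to the directory-mode question):

	struct perf_event_attr attr = {
		.type		= PERF_TYPE_SOFTWARE,
		.config		= PERF_COUNT_SW_DUMMY,
		.size		= sizeof(struct perf_event_attr),
		.mmap		= 1,	/* MMAP records */
		.comm		= 1,	/* COMM records */
		.task		= 1,	/* FORK/EXIT records */
		.sample_id_all	= 1,
	};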

Regards,
Alexey

>>
>> IMHO this can be tricky to implement and adds some overhead that can lead
>> to possible data loss. Do you have any other ideas how to optimize memory
>> consumption on perf-report side without a runtime trace decoder?
>> Maybe "round events" would somehow help in directory mode?
>>
>> BTW In our tool we use another approach: two-pass trace file loading.
>> The first loads tracing records, the second loads samples.
>
> Yeah, something like that. With the separated data, we can do it
> more efficiently IMHO.
>
> Thanks,
> Namhyung
>

2021-03-01 14:24:35

by Namhyung Kim

[permalink] [raw]
Subject: Re: [PATCH v3 07/12] perf record: init data file at mmap buffer object

On Mon, Mar 1, 2021 at 10:33 PM Bayduraev, Alexey V
<[email protected]> wrote:
>
> On 01.03.2021 14:44, Namhyung Kim wrote:
> > Hello,
> >
> > On Mon, Mar 1, 2021 at 8:16 PM Bayduraev, Alexey V
> > <[email protected]> wrote:
> >>
> >> Hi,
> >>
> >> On 20.11.2020 13:49, Namhyung Kim wrote:
> >>> On Mon, Nov 16, 2020 at 03:19:41PM +0300, Alexey Budankov wrote:
> >>
> >> <SNIP>
> >>
> >>>>
> >>>> @@ -1400,8 +1417,12 @@ static int record__mmap_read_evlist(struct record *rec, struct evlist *evlist,
> >>>> /*
> >>>> * Mark the round finished in case we wrote
> >>>> * at least one event.
> >>>> + *
> >>>> + * No need for round events in directory mode,
> >>>> + * because per-cpu maps and files have data
> >>>> + * sorted by kernel.
> >>>
> >>> But it's not just for single cpu since task can migrate so we need to
> >>> look at other cpu's data too. Thus we use the ordered events queue
> >>> and round events help to determine when to flush the data. Without
> >>> the round events, it'd consume huge amount of memory during report.
> >>>
> >>> If we separate tracking records and process them first, we should be
> >>> able to process samples immediately without sorting them in the
> >>> ordered event queue. This will save both cpu cycles and memory
> >>> footprint significantly IMHO.
> >>>
> >>> Thanks,
> >>> Namhyung
> >>>
> >>
> >> As far as I understand, to split tracing records (FORK/MMAP/COMM) into
> >> a separate file, we need to implement a runtime trace decoder on the
> >> perf-record side to recognize such tracing records coming from the kernel.
> >> Is that what you mean?
> >
> > No, I meant separating the mmap buffers so that the record process
> > can save the data without decoding.
> >
>
> Thanks,
>
> Do you think this can be implemented only on the user side, by creating a dummy
> event and setting the mmap/comm/task flags of struct perf_event_attr?
> Or are some changes on the kernel side necessary?

It's only user space changes, but they can be large. Actually I worked on
parallelizing perf report several years ago (not finished, and I don't have
time for it now). At the time, perf record didn't support directory output,
so I made it have indexes to different data parts. But you can get the idea
from the code in

https://git.kernel.org/pub/scm/linux/kernel/git/namhyung/linux-perf.git/log/?h=perf/threaded-v5

Thanks,
Namhyung

2021-03-03 03:03:44

by Namhyung Kim

[permalink] [raw]
Subject: Re: [PATCH v3 07/12] perf record: init data file at mmap buffer object

Hello,

On Mon, Mar 1, 2021 at 8:16 PM Bayduraev, Alexey V
<[email protected]> wrote:
>
> Hi,
>
> On 20.11.2020 13:49, Namhyung Kim wrote:
> > On Mon, Nov 16, 2020 at 03:19:41PM +0300, Alexey Budankov wrote:
>
> <SNIP>
>
> >>
> >> @@ -1400,8 +1417,12 @@ static int record__mmap_read_evlist(struct record *rec, struct evlist *evlist,
> >> /*
> >> * Mark the round finished in case we wrote
> >> * at least one event.
> >> + *
> >> + * No need for round events in directory mode,
> >> + * because per-cpu maps and files have data
> >> + * sorted by kernel.
> >
> > But it's not just for single cpu since task can migrate so we need to
> > look at other cpu's data too. Thus we use the ordered events queue
> > and round events help to determine when to flush the data. Without
> > the round events, it'd consume huge amount of memory during report.
> >
> > If we separate tracking records and process them first, we should be
> > able to process samples immediately without sorting them in the
> > ordered event queue. This will save both cpu cycles and memory
> > footprint significantly IMHO.
> >
> > Thanks,
> > Namhyung
> >
>
> As far as I understand, to split tracing records (FORK/MMAP/COMM) into
> a separate file, we need to implement a runtime trace decoder on the
> perf-record side to recognize such tracing records coming from the kernel.
> Is that what you mean?

No, I meant separating the mmap buffers so that the record process
can save the data without decoding.

>
> IMHO this can be tricky to implement and adds some overhead that can lead
> to possible data loss. Do you have any other ideas how to optimize memory
> consumption on perf-report side without a runtime trace decoder?
> Maybe "round events" would somehow help in directory mode?
>
> BTW In our tool we use another approach: two-pass trace file loading.
> The first loads tracing records, the second loads samples.

Yeah, something like that. With the separated data, we can do it
more efficiently IMHO.

Thanks,
Namhyung