2018-10-15 06:28:25

by Alexey Budankov

Subject: [PATCH v14 0/3]: perf: reduce data loss when profiling highly parallel CPU bound workloads


Currently in record mode the tool implements trace writing serially.
The algorithm loops over mapped per-cpu data buffers and stores
ready data chunks into a trace file using the write() system call.

Under some circumstances the kernel may lack free space in a buffer
because the buffer's other half has not yet been written to disk while
the tool is busy writing another buffer's data.

Thus the serial trace writing implementation may cause the kernel
to lose profiling data, and that is what is observed when profiling
highly parallel CPU bound workloads on machines with a large number
of cores.
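
For illustration, the serial scheme is roughly equivalent to the
sketch below, where read_ready_chunk() is a hypothetical helper
standing in for the real perf_mmap__push() machinery:

  static int write_buffers_serially(int trace_fd, struct perf_mmap *maps,
                                    int nr_mmaps)
  {
          char buf[64 * 1024];
          ssize_t size;
          int i;

          for (i = 0; i < nr_mmaps; i++) {
                  /* copy the ready data chunk out of the kernel buffer */
                  size = read_ready_chunk(&maps[i], buf, sizeof(buf));
                  if (size <= 0)
                          continue;
                  /* blocking write(); all other per-cpu buffers wait here */
                  if (write(trace_fd, buf, size) < 0)
                          return -1;
          }
          return 0;
  }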

An experiment profiling matrix multiplication code executing 128
threads on Intel Xeon Phi (KNM) with 272 cores, like below,
demonstrates a data loss metric value of 98%:

/usr/bin/time perf record -o /tmp/perf-ser.data -a -N -B -T -R -g \
--call-graph dwarf,1024 --user-regs=IP,SP,BP --switch-events \
-e cycles,instructions,ref-cycles,software/period=1,name=cs,config=0x3/Duk -- \
matrix.gcc

The data loss metric is the ratio lost_time/elapsed_time, where
lost_time is the sum of the time intervals containing PERF_RECORD_LOST
records and elapsed_time is the elapsed application run time under
profiling. For example, if 9.8s of a 10s profiled run fall within
such intervals, the metric value is 98%.

Applying asynchronous trace streaming through the Posix AIO API [1]
dramatically lowers the data loss metric value (from 98% to ~1%).
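
For reference, queueing a single buffer through the Posix AIO API
looks like the minimal standalone sketch below (not the tool code
itself; completion is checked later with aio_error()/aio_return()):

  #include <aio.h>
  #include <string.h>

  static int queue_async_write(int fd, void *buf, size_t size, off_t off,
                               struct aiocb *cb)
  {
          memset(cb, 0, sizeof(*cb));
          cb->aio_fildes = fd;
          cb->aio_buf = buf;
          cb->aio_nbytes = size;
          cb->aio_offset = off;
          /* no signal on completion; the caller polls aio_error() */
          cb->aio_sigevent.sigev_notify = SIGEV_NONE;
          /* returns immediately; 0 means the write was enqueued */
          return aio_write(cb);
  }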

Asynchronous trace streaming is currently limited to glibc linkage.
musl libc [5] also provides a Posix AIO API implementation, however
the patch kit has not been tested with it. There may be other libc
libraries linked by the Perf tool that currently lack Posix AIO API
support [2], [3], [4], so the NO_AIO define may be used to limit the
Perf tool binary to serial streaming only.
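
For example, a serial-only binary could be built like this (assuming
the usual tools/perf build flow):

  make -C tools/perf NO_AIO=1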

---
Alexey Budankov (3):
perf util: map data buffer for preserving collected data
perf record: enable asynchronous trace writing
perf record: extend trace writing to multi AIO

tools/perf/Documentation/perf-record.txt | 5 +
tools/perf/Makefile.config | 5 +
tools/perf/Makefile.perf | 7 +-
tools/perf/builtin-record.c | 252 ++++++++++++++++++++++++++++++-
tools/perf/perf.h | 1 +
tools/perf/util/evlist.c | 6 +-
tools/perf/util/evlist.h | 2 +-
tools/perf/util/mmap.c | 146 +++++++++++++++++-
tools/perf/util/mmap.h | 26 +++-
9 files changed, 439 insertions(+), 11 deletions(-)

---
Changes in v14:
- implement default nr_cblocks_default variable
- fix --aio option handling
Changes in v13:
- named new functions with _aio_ word
- grouped aio functions under single #ifdef HAVE_AIO_SUPPORT
- moved perf_mmap__aio_push() stub into header
- removed trailing whitespace
Changes in v12:
- applied stub functions design for the whole patch kit
- grouped AIO related data into a struct under struct perf_mmap
- implemented record__aio_get/set_pos(), record__aio_enabled()
- implemented simple --aio option
- extended --aio option to --aio-cblocks=<n>
Changes in v11:
- replaced both lseek() syscalls in every loop iteration with only two
syscalls just before and after the loop at record__mmap_read_evlist(),
advancing the *in-flight* off file pos value at perf_mmap__aio_push()
Changes in v10:
- moved specific code to perf_mmap__aio_mmap(), perf_mmap__aio_munmap();
- adjusted error reporting by using %m
- avoided lseek() setting file pos back in case of record__aio_write() failure
- compacted code selecting between serial and AIO streaming
- optimized call places of record__mmap_read_sync()
- added description of aio-cblocks option into perf-record.txt
Changes in v9:
- enable AIO streaming only when --aio-cblocks option is specified explicitly
- enable AIO based implementation when linking with glibc only
- define NO_AIO to limit Perf binary to serial implementation
Changes in v8:
- ran the whole thing through checkpatch.pl and corrected the issues found,
except lines longer than 80 characters
- corrected comments alignment and formatting
- moved multi AIO implementation into 3rd patch in the series
- implemented explicit cblocks array allocation
- split AIO completion check into separate record__aio_complete()
- set nr_cblocks default to 1 and max allowed value to 4
Changes in v7:
- implemented handling record.aio setting from perfconfig file
Changes in v6:
- adjusted setting of priorities for cblocks;
- handled errno == EAGAIN case from aio_write() return;
Changes in v5:
- resolved livelock on perf record -e intel_pt// -- dd if=/dev/zero of=/dev/null count=100000
- data loss metrics decreased from 25% to 2x in trialed configuration;
- reshaped layout of data structures;
- implemented --aio option;
- avoided nanosleep() prior to calling aio_suspend();
- switched to per-cpu aio multi buffer record__aio_sync();
- record_mmap_read_sync() now does global sync just before
switching trace file or collection stop;
Changes in v4:
- converted mmap()/munmap() to malloc()/free() for mmap->data buffer management
- converted void *bf to struct perf_mmap *md in signatures
- written comment in perf_mmap__push() just before perf_mmap__get();
- written comment in record__mmap_read_sync() on possible restarting
of aio_write() operation and releasing perf_mmap object after all;
- added perf_mmap__put() for the cases of failed aio_write();
Changes in v3:
- written comments about the nanosleep(0.5ms) call prior to aio_suspend()
to cope with the intrusiveness of its implementation in glibc;
- written comments about the rationale behind copying profiling data
into the mmap->data buffer;
Changes in v2:
- converted zalloc() to calloc() for allocation of mmap_aio array,
- cleared typo and adjusted fallback branch code;

---

[1] http://man7.org/linux/man-pages/man7/aio.7.html
[2] https://android.googlesource.com/platform/bionic/+/master/docs/status.md
[3] https://www.uclibc.org/
[4] https://uclibc-ng.org/
[5] https://www.musl-libc.org/


2018-10-15 06:44:39

by Alexey Budankov

Subject: [PATCH v14 1/3]: perf util: map data buffer for preserving collected data


The map->data buffer is used to preserve map->base profiling data
for writing to disk. The AIO map->cblock is used to queue the
corresponding map->data buffer for asynchronous writing.
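
The intended flow, implemented by the following patches, is roughly
the sketch below (ready_chunk, chunk_size, trace_fd and off are
hypothetical placeholders here):

  /* preserve kernel buffer data in userspace memory ... */
  memcpy(map->aio.data, ready_chunk, chunk_size);
  /* ... and queue the preserved copy for asynchronous writing */
  map->aio.cblock.aio_fildes = trace_fd;
  map->aio.cblock.aio_buf    = map->aio.data;
  map->aio.cblock.aio_nbytes = chunk_size;
  map->aio.cblock.aio_offset = off;
  aio_write(&map->aio.cblock);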

Signed-off-by: Alexey Budankov <[email protected]>
---
Changes in v13:
- grouped aio functions under single #ifdef HAVE_AIO_SUPPORT
Changes in v12:
- applied stub functions design for the whole patch kit
- grouped AIO related data into a struct under struct perf_mmap
Changes in v10:
- moved specific code to perf_mmap__aio_mmap(), perf_mmap__aio_munmap()
- adjusted error reporting by using %m
Changes in v9:
- implemented NO_AIO and HAVE_AIO_SUPPORT defines to cover cases of
libc implementations without Posix AIO API support
Changes in v7:
- implemented handling record.aio setting from perfconfig file
Changes in v6:
- adjusted setting of priorities for cblocks;
Changes in v5:
- reshaped layout of data structures;
- implemented --aio option;
Changes in v4:
- converted mmap()/munmap() to malloc()/free() for mmap->data buffer management
Changes in v2:
- converted zalloc() to calloc() for allocation of mmap_aio array,
- cleared typo and adjusted fallback branch code;
---
tools/perf/Makefile.config | 5 +++++
tools/perf/Makefile.perf | 7 ++++++-
tools/perf/util/evlist.c | 2 +-
tools/perf/util/mmap.c | 49 +++++++++++++++++++++++++++++++++++++++++++++-
tools/perf/util/mmap.h | 11 ++++++++++-
5 files changed, 70 insertions(+), 4 deletions(-)

diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
index f6d1a03c7523..2e90f4ce9214 100644
--- a/tools/perf/Makefile.config
+++ b/tools/perf/Makefile.config
@@ -355,6 +355,11 @@ endif # NO_LIBELF

ifeq ($(feature-glibc), 1)
CFLAGS += -DHAVE_GLIBC_SUPPORT
+ ifndef NO_AIO
+ ifndef BIONIC
+ CFLAGS += -DHAVE_AIO_SUPPORT
+ endif
+ endif
endif

ifdef NO_DWARF
diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
index 92514fb3689f..7becc6a72cf2 100644
--- a/tools/perf/Makefile.perf
+++ b/tools/perf/Makefile.perf
@@ -97,8 +97,13 @@ include ../scripts/utilities.mak
# Define LIBCLANGLLVM if you DO want builtin clang and llvm support.
# When selected, pass LLVM_CONFIG=/path/to/llvm-config to `make' if
# llvm-config is not in $PATH.
-
+#
# Define NO_CORESIGHT if you do not want support for CoreSight trace decoding.
+#
+# Define NO_AIO if you do not want support for Posix AIO based trace
+# streaming for record mode. Currently Posix AIO trace streaming is
+# supported only when linking with glibc.
+#

# As per kernel Makefile, avoid funny character set dependencies
unexport LC_ALL
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index be440df29615..1a83bf2c069c 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -1028,7 +1028,7 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
* Its value is decided by evsel's write_backward.
* So &mp should not be passed through const pointer.
*/
- struct mmap_params mp;
+ struct mmap_params mp = { .nr_cblocks = 0 };

if (!evlist->mmap)
evlist->mmap = perf_evlist__alloc_mmap(evlist, false);
diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index cdb95b3a1213..47cdc3ad6546 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -153,8 +153,55 @@ void __weak auxtrace_mmap_params__set_idx(struct auxtrace_mmap_params *mp __mayb
{
}

+#ifdef HAVE_AIO_SUPPORT
+static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
+{
+ int delta_max;
+
+ if (mp->nr_cblocks) {
+ map->aio.data = malloc(perf_mmap__mmap_len(map));
+ if (!map->aio.data) {
+ pr_debug2("failed to allocate data buffer, error %m\n");
+ return -1;
+ }
+ /*
+ * Use cblock.aio_fildes value different from -1
+ * to denote a started aio write operation on the
+ * cblock, so it requires an explicit record__aio_sync()
+ * call before the cblock may be reused again.
+ */
+ map->aio.cblock.aio_fildes = -1;
+ /*
+ * Allocate cblock with max priority delta to
+ * have faster aio write system calls.
+ */
+ delta_max = sysconf(_SC_AIO_PRIO_DELTA_MAX);
+ map->aio.cblock.aio_reqprio = delta_max;
+ }
+
+ return 0;
+}
+
+static void perf_mmap__aio_munmap(struct perf_mmap *map)
+{
+ if (map->aio.data)
+ zfree(&map->aio.data);
+}
+#else
+static int perf_mmap__aio_mmap(struct perf_mmap *map __maybe_unused,
+ struct mmap_params *mp __maybe_unused)
+{
+ return 0;
+}
+
+static void perf_mmap__aio_munmap(struct perf_mmap *map __maybe_unused)
+{
+}
+#endif
+
void perf_mmap__munmap(struct perf_mmap *map)
{
+ perf_mmap__aio_munmap(map);
if (map->base != NULL) {
munmap(map->base, perf_mmap__mmap_len(map));
map->base = NULL;
@@ -197,7 +244,7 @@ int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd, int c
&mp->auxtrace_mp, map->base, fd))
return -1;

- return 0;
+ return perf_mmap__aio_mmap(map, mp);
}

static int overwrite_rb_find_range(void *buf, int mask, u64 *start, u64 *end)
diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h
index e603314dc792..a46dbdcdcc8a 100644
--- a/tools/perf/util/mmap.h
+++ b/tools/perf/util/mmap.h
@@ -6,6 +6,9 @@
#include <linux/types.h>
#include <asm/barrier.h>
#include <stdbool.h>
+#ifdef HAVE_AIO_SUPPORT
+#include <aio.h>
+#endif
#include "auxtrace.h"
#include "event.h"

@@ -26,6 +29,12 @@ struct perf_mmap {
bool overwrite;
struct auxtrace_mmap auxtrace_mmap;
char event_copy[PERF_SAMPLE_MAX_SIZE] __aligned(8);
+#ifdef HAVE_AIO_SUPPORT
+ struct {
+ void *data;
+ struct aiocb cblock;
+ } aio;
+#endif
};

/*
@@ -57,7 +66,7 @@ enum bkw_mmap_state {
};

struct mmap_params {
- int prot, mask;
+ int prot, mask, nr_cblocks;
struct auxtrace_mmap_params auxtrace_mp;
};


2018-10-15 06:46:19

by Alexey Budankov

Subject: [PATCH v14 2/3]: perf record: enable asynchronous trace writing


The trace file offset is read once before the loop iterating over the
mmaps and written back after all performance data has been enqueued
for AIO writing. The trace file offset is incremented linearly after
every successful AIO write operation.

record__aio_sync() blocks until the started AIO operation completes
and then proceeds.

record__aio_mmap_read_sync() implements a barrier for all incomplete
AIO write requests.
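
Schematically, the flow added by this patch is the following
(a simplified sketch of the code below):

  off = record__aio_get_pos(trace_fd);          /* lseek(fd, 0, SEEK_CUR) */
  for (i = 0; i < evlist->nr_mmaps; i++) {
          record__aio_sync(map);                /* wait for map->data reuse */
          perf_mmap__aio_push(map, rec, record__aio_pushfn, &off);
          /* off advances in-flight after every successful enqueue */
  }
  record__aio_set_pos(trace_fd, off);           /* lseek(fd, off, SEEK_SET) */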

Signed-off-by: Alexey Budankov <[email protected]>
---
Changes in v14:
- implement default nr_cblocks_default variable
Changes in v13:
- named new functions with _aio_ word
- grouped aio functions under single #ifdef HAVE_AIO_SUPPORT
- moved perf_mmap__aio_push() stub into header
- removed trailing whitespace
Changes in v12:
- implemented record__aio_get/set_pos(), record__aio_enabled()
- implemented simple --aio option
Changes in v11:
- replaced both lseek() syscalls in every loop iteration with only two
syscalls just before and after the loop at record__mmap_read_evlist(),
advancing the *in-flight* off file pos value at perf_mmap__aio_push()
Changes in v10:
- avoided lseek() setting file pos back in case of record__aio_write() failure
- compacted code selecting between serial and AIO streaming
- optimized call places of record__mmap_read_sync()
Changes in v9:
- enable AIO streaming only when --aio-cblocks option is specified explicitly
Changes in v8:
- split AIO completion check into separate record__aio_complete()
Changes in v6:
- handled errno == EAGAIN case from aio_write();
Changes in v5:
- data loss metrics decreased from 25% to 2x in trialed configuration;
- avoided nanosleep() prior to calling aio_suspend();
- switched to per-cpu aio multi buffer record__aio_sync();
- record_mmap_read_sync() now does global barrier just before
switching trace file or collection stop;
- resolved livelock on perf record -e intel_pt// -- dd if=/dev/zero of=/dev/null count=100000
Changes in v4:
- converted void *bf to struct perf_mmap *md in signatures
- written comment in perf_mmap__push() just before perf_mmap__get();
- written comment in record__mmap_read_sync() on possible restarting
of aio_write() operation and releasing perf_mmap object after all;
- added perf_mmap__put() for the cases of failed aio_write();
Changes in v3:
- written comments about the nanosleep(0.5ms) call prior to aio_suspend()
to cope with the intrusiveness of its implementation in glibc;
- written comments about the rationale behind copying profiling data
into the mmap->data buffer;
---
tools/perf/Documentation/perf-record.txt | 5 +
tools/perf/builtin-record.c | 218 ++++++++++++++++++++++++++++++-
tools/perf/perf.h | 1 +
tools/perf/util/evlist.c | 6 +-
tools/perf/util/evlist.h | 2 +-
tools/perf/util/mmap.c | 77 ++++++++++-
tools/perf/util/mmap.h | 14 ++
7 files changed, 314 insertions(+), 9 deletions(-)

diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
index 246dee081efd..7efb4af88a68 100644
--- a/tools/perf/Documentation/perf-record.txt
+++ b/tools/perf/Documentation/perf-record.txt
@@ -435,6 +435,11 @@ Specify vmlinux path which has debuginfo.
--buildid-all::
Record build-id of all DSOs regardless whether it's actually hit or not.

+--aio::
+Enable asynchronous (Posix AIO) trace writing mode.
+Asynchronous mode is supported only when linking the Perf tool with a libc
+library providing an implementation of the Posix AIO API.
+
--all-kernel::
Configure all used events to run in kernel space.

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 0980dfe3396b..0c6105860123 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -124,6 +124,183 @@ static int record__write(struct record *rec, struct perf_mmap *map __maybe_unuse
return 0;
}

+#ifdef HAVE_AIO_SUPPORT
+static int record__aio_write(struct aiocb *cblock, int trace_fd,
+ void *buf, size_t size, off_t off)
+{
+ int rc;
+
+ cblock->aio_fildes = trace_fd;
+ cblock->aio_buf = buf;
+ cblock->aio_nbytes = size;
+ cblock->aio_offset = off;
+ cblock->aio_sigevent.sigev_notify = SIGEV_NONE;
+
+ do {
+ rc = aio_write(cblock);
+ if (rc == 0) {
+ break;
+ } else if (errno != EAGAIN) {
+ cblock->aio_fildes = -1;
+ pr_err("failed to queue perf data, error: %m\n");
+ break;
+ }
+ } while (1);
+
+ return rc;
+}
+
+static int record__aio_complete(struct perf_mmap *md, struct aiocb *cblock)
+{
+ void *rem_buf;
+ off_t rem_off;
+ size_t rem_size;
+ int rc, aio_errno;
+ ssize_t aio_ret, written;
+
+ aio_errno = aio_error(cblock);
+ if (aio_errno == EINPROGRESS)
+ return 0;
+
+ written = aio_ret = aio_return(cblock);
+ if (aio_ret < 0) {
+ if (aio_errno != EINTR)
+ pr_err("failed to write perf data, error: %m\n");
+ written = 0;
+ }
+
+ rem_size = cblock->aio_nbytes - written;
+
+ if (rem_size == 0) {
+ cblock->aio_fildes = -1;
+ /*
+ * md->refcount is incremented in perf_mmap__aio_push() for
+ * every enqueued aio write request, so decrement it because
+ * the request is now complete.
+ */
+ perf_mmap__put(md);
+ rc = 1;
+ } else {
+ /*
+ * aio write request may require a restart with the
+ * remainder if the kernel didn't write the whole
+ * chunk at once.
+ */
+ rem_off = cblock->aio_offset + written;
+ rem_buf = (void *)(cblock->aio_buf + written);
+ record__aio_write(cblock, cblock->aio_fildes,
+ rem_buf, rem_size, rem_off);
+ rc = 0;
+ }
+
+ return rc;
+}
+
+static void record__aio_sync(struct perf_mmap *md)
+{
+ struct aiocb *cblock = &md->aio.cblock;
+ struct timespec timeout = { 0, 1000 * 1000 * 1 }; /* 1ms */
+
+ do {
+ if (cblock->aio_fildes == -1 || record__aio_complete(md, cblock))
+ return;
+
+ while (aio_suspend((const struct aiocb**)&cblock, 1, &timeout)) {
+ if (!(errno == EAGAIN || errno == EINTR))
+ pr_err("failed to sync perf data, error: %m\n");
+ }
+ } while (1);
+}
+
+static int record__aio_pushfn(void *to, struct aiocb *cblock, void *bf, size_t size, off_t off)
+{
+ struct record *rec = to;
+ int ret, trace_fd = rec->session->data->file.fd;
+
+ rec->samples++;
+
+ ret = record__aio_write(cblock, trace_fd, bf, size, off);
+ if (!ret) {
+ rec->bytes_written += size;
+ if (switch_output_size(rec))
+ trigger_hit(&switch_output_trigger);
+ }
+
+ return ret;
+}
+
+static off_t record__aio_get_pos(int trace_fd)
+{
+ return lseek(trace_fd, 0, SEEK_CUR);
+}
+
+static void record__aio_set_pos(int trace_fd, off_t pos)
+{
+ lseek(trace_fd, pos, SEEK_SET);
+}
+
+static void record__aio_mmap_read_sync(struct record *rec)
+{
+ int i;
+ struct perf_evlist *evlist = rec->evlist;
+ struct perf_mmap *maps = evlist->mmap;
+
+ if (!rec->opts.nr_cblocks)
+ return;
+
+ for (i = 0; i < evlist->nr_mmaps; i++) {
+ struct perf_mmap *map = &maps[i];
+
+ if (map->base)
+ record__aio_sync(map);
+ }
+}
+
+static int nr_cblocks_default = 1;
+
+static int record__aio_parse(const struct option *opt,
+ const char *str __maybe_unused,
+ int unset)
+{
+ struct record_opts *opts = (struct record_opts *)opt->value;
+
+ if (unset)
+ opts->nr_cblocks = 0;
+ else
+ opts->nr_cblocks = nr_cblocks_default;
+
+ return 0;
+}
+#else /* HAVE_AIO_SUPPORT */
+static void record__aio_sync(struct perf_mmap *md __maybe_unused)
+{
+}
+
+static int record__aio_pushfn(void *to __maybe_unused, struct aiocb *cblock __maybe_unused,
+ void *bf __maybe_unused, size_t size __maybe_unused, off_t off __maybe_unused)
+{
+ return -1;
+}
+
+static off_t record__aio_get_pos(int trace_fd __maybe_unused)
+{
+ return -1;
+}
+
+static void record__aio_set_pos(int trace_fd __maybe_unused, off_t pos __maybe_unused)
+{
+}
+
+static void record__aio_mmap_read_sync(struct record *rec __maybe_unused)
+{
+}
+#endif
+
+static int record__aio_enabled(struct record *rec)
+{
+ return rec->opts.nr_cblocks > 0;
+}
+
static int process_synthesized_event(struct perf_tool *tool,
union perf_event *event,
struct perf_sample *sample __maybe_unused,
@@ -329,7 +506,7 @@ static int record__mmap_evlist(struct record *rec,

if (perf_evlist__mmap_ex(evlist, opts->mmap_pages,
opts->auxtrace_mmap_pages,
- opts->auxtrace_snapshot_mode) < 0) {
+ opts->auxtrace_snapshot_mode, opts->nr_cblocks) < 0) {
if (errno == EPERM) {
pr_err("Permission error mapping pages.\n"
"Consider increasing "
@@ -520,6 +697,8 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
int i;
int rc = 0;
struct perf_mmap *maps;
+ int trace_fd = rec->data.file.fd;
+ off_t off;

if (!evlist)
return 0;
@@ -531,13 +710,29 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
if (overwrite && evlist->bkw_mmap_state != BKW_MMAP_DATA_PENDING)
return 0;

+ if (record__aio_enabled(rec))
+ off = record__aio_get_pos(trace_fd);
+
for (i = 0; i < evlist->nr_mmaps; i++) {
struct perf_mmap *map = &maps[i];

if (map->base) {
- if (perf_mmap__push(map, rec, record__pushfn) != 0) {
- rc = -1;
- goto out;
+ if (!record__aio_enabled(rec)) {
+ if (perf_mmap__push(map, rec, record__pushfn) != 0) {
+ rc = -1;
+ goto out;
+ }
+ } else {
+ /*
+ * Call record__aio_sync() to wait till map->data buffer
+ * becomes available after previous aio write request.
+ */
+ record__aio_sync(map);
+ if (perf_mmap__aio_push(map, rec, record__aio_pushfn, &off) != 0) {
+ record__aio_set_pos(trace_fd, off);
+ rc = -1;
+ goto out;
+ }
}
}

@@ -548,6 +743,9 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
}
}

+ if (record__aio_enabled(rec))
+ record__aio_set_pos(trace_fd, off);
+
/*
* Mark the round finished in case we wrote
* at least one event.
@@ -650,6 +848,8 @@ record__switch_output(struct record *rec, bool at_exit)
/* Same Size: "2015122520103046"*/
char timestamp[] = "InvalidTimestamp";

+ record__aio_mmap_read_sync(rec);
+
record__synthesize(rec, true);
if (target__none(&rec->opts.target))
record__synthesize_workload(rec, true);
@@ -1157,6 +1357,8 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
record__synthesize_workload(rec, true);

out_child:
+ record__aio_mmap_read_sync(rec);
+
if (forks) {
int exit_status;

@@ -1681,6 +1883,11 @@ static struct option __record_options[] = {
"signal"),
OPT_BOOLEAN(0, "dry-run", &dry_run,
"Parse options then exit"),
+#ifdef HAVE_AIO_SUPPORT
+ OPT_CALLBACK_NOOPT(0, "aio", &record.opts,
+ NULL, "Enable asynchronous trace writing mode",
+ record__aio_parse),
+#endif
OPT_END()
};

@@ -1873,6 +2080,9 @@ int cmd_record(int argc, const char **argv)
goto out;
}

+ if (verbose > 0)
+ pr_info("nr_cblocks: %d\n", rec->opts.nr_cblocks);
+
err = __cmd_record(&record, argc, argv);
out:
perf_evlist__delete(rec->evlist);
diff --git a/tools/perf/perf.h b/tools/perf/perf.h
index 21bf7f5a3cf5..0a1ae2ae567a 100644
--- a/tools/perf/perf.h
+++ b/tools/perf/perf.h
@@ -82,6 +82,7 @@ struct record_opts {
bool use_clockid;
clockid_t clockid;
unsigned int proc_map_timeout;
+ int nr_cblocks;
};

struct option;
diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
index 1a83bf2c069c..6593bd0dc2af 100644
--- a/tools/perf/util/evlist.c
+++ b/tools/perf/util/evlist.c
@@ -1018,7 +1018,7 @@ int perf_evlist__parse_mmap_pages(const struct option *opt, const char *str,
*/
int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
unsigned int auxtrace_pages,
- bool auxtrace_overwrite)
+ bool auxtrace_overwrite, int nr_cblocks)
{
struct perf_evsel *evsel;
const struct cpu_map *cpus = evlist->cpus;
@@ -1028,7 +1028,7 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
* Its value is decided by evsel's write_backward.
* So &mp should not be passed through const pointer.
*/
- struct mmap_params mp = { .nr_cblocks = 0 };
+ struct mmap_params mp = { .nr_cblocks = nr_cblocks };

if (!evlist->mmap)
evlist->mmap = perf_evlist__alloc_mmap(evlist, false);
@@ -1060,7 +1060,7 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,

int perf_evlist__mmap(struct perf_evlist *evlist, unsigned int pages)
{
- return perf_evlist__mmap_ex(evlist, pages, 0, false);
+ return perf_evlist__mmap_ex(evlist, pages, 0, false, 0);
}

int perf_evlist__create_maps(struct perf_evlist *evlist, struct target *target)
diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
index dc66436add98..2464463879b4 100644
--- a/tools/perf/util/evlist.h
+++ b/tools/perf/util/evlist.h
@@ -162,7 +162,7 @@ unsigned long perf_event_mlock_kb_in_pages(void);

int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
unsigned int auxtrace_pages,
- bool auxtrace_overwrite);
+ bool auxtrace_overwrite, int nr_cblocks);
int perf_evlist__mmap(struct perf_evlist *evlist, unsigned int pages);
void perf_evlist__munmap(struct perf_evlist *evlist);

diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index 47cdc3ad6546..61aa381d05d0 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -158,7 +158,8 @@ static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
{
int delta_max;

- if (mp->nr_cblocks) {
+ map->aio.nr_cblocks = mp->nr_cblocks;
+ if (map->aio.nr_cblocks) {
map->aio.data = malloc(perf_mmap__mmap_len(map));
if (!map->aio.data) {
pr_debug2("failed to allocate data buffer, error %m\n");
@@ -187,6 +188,80 @@ static void perf_mmap__aio_munmap(struct perf_mmap *map)
if (map->aio.data)
zfree(&map->aio.data);
}
+
+int perf_mmap__aio_push(struct perf_mmap *md, void *to,
+ int push(void *to, struct aiocb *cblock, void *buf, size_t size, off_t off),
+ off_t *off)
+{
+ u64 head = perf_mmap__read_head(md);
+ unsigned char *data = md->base + page_size;
+ unsigned long size, size0 = 0;
+ void *buf;
+ int rc = 0;
+
+ rc = perf_mmap__read_init(md);
+ if (rc < 0)
+ return (rc == -EAGAIN) ? 0 : -1;
+
+ /*
+ * md->base data is copied into md->data buffer to
+ * release space in the kernel buffer as fast as possible,
+ * through perf_mmap__consume() below.
+ *
+ * That lets the kernel proceed with storing more
+ * profiling data into the kernel buffer earlier than other
+ * per-cpu kernel buffers are handled.
+ *
+ * Copying can be done in two steps in case the chunk of
+ * profiling data crosses the upper bound of the kernel buffer.
+ * In this case we first move the part of the data from md->start
+ * to the upper bound and then the remainder from the
+ * beginning of the kernel buffer to the end of
+ * the data chunk.
+ */
+
+ size = md->end - md->start;
+
+ if ((md->start & md->mask) + size != (md->end & md->mask)) {
+ buf = &data[md->start & md->mask];
+ size = md->mask + 1 - (md->start & md->mask);
+ md->start += size;
+ memcpy(md->aio.data, buf, size);
+ size0 = size;
+ }
+
+ buf = &data[md->start & md->mask];
+ size = md->end - md->start;
+ md->start += size;
+ memcpy(md->aio.data + size0, buf, size);
+
+ /*
+ * Increment md->refcount to guard the md->data buffer
+ * from premature deallocation because the md object can be
+ * released before the aio write request started
+ * on mmap->data completes.
+ *
+ * perf_mmap__put() is done at record__aio_complete()
+ * after the started request completes.
+ */
+ perf_mmap__get(md);
+
+ md->prev = head;
+ perf_mmap__consume(md);
+
+ rc = push(to, &md->aio.cblock, md->aio.data, size0 + size, *off);
+ if (!rc) {
+ *off += size0 + size;
+ } else {
+ /*
+ * Decrement md->refcount back if aio write
+ * operation failed to start.
+ */
+ perf_mmap__put(md);
+ }
+
+ return rc;
+}
#else
static int perf_mmap__aio_mmap(struct perf_mmap *map __maybe_unused,
struct mmap_params *mp __maybe_unused)
diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h
index a46dbdcdcc8a..9be06f18f17a 100644
--- a/tools/perf/util/mmap.h
+++ b/tools/perf/util/mmap.h
@@ -12,6 +12,7 @@
#include "auxtrace.h"
#include "event.h"

+struct aiocb;
/**
* struct perf_mmap - perf's ring buffer mmap details
*
@@ -33,6 +34,7 @@ struct perf_mmap {
struct {
void *data;
struct aiocb cblock;
+ int nr_cblocks;
} aio;
#endif
};
@@ -103,6 +105,18 @@ union perf_event *perf_mmap__read_event(struct perf_mmap *map);

int perf_mmap__push(struct perf_mmap *md, void *to,
int push(struct perf_mmap *map, void *to, void *buf, size_t size));
+#ifdef HAVE_AIO_SUPPORT
+int perf_mmap__aio_push(struct perf_mmap *md, void *to,
+ int push(void *to, struct aiocb *cblock, void *buf, size_t size, off_t off),
+ off_t *off);
+#else
+static inline int perf_mmap__aio_push(struct perf_mmap *md __maybe_unused, void *to __maybe_unused,
+ int push(void *to, struct aiocb *cblock, void *buf, size_t size, off_t off) __maybe_unused,
+ off_t *off __maybe_unused)
+{
+ return 0;
+}
+#endif

size_t perf_mmap__mmap_len(struct perf_mmap *map);


2018-10-15 06:47:17

by Alexey Budankov

Subject: [PATCH v14 3/3]: perf record: extend trace writing to multi AIO


Multi AIO trace writing allows caching more kernel data in userspace
memory, postponing trace writing for the sake of an overall increase
in profiling data throughput. It can be seen as an extension of the
kernel data buffers into userspace memory.

With an aio option value different from 0 (the default value is 1),
the tool gains the capability to cache more and more data in user
space while delegating the spill to AIO.

That allows avoiding the suspend at record__aio_sync() between calls
of record__mmap_read_evlist() and increases profiling data throughput
at the cost of userspace memory.
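
For example (illustrative command line), profiling with four
in-flight control blocks per mapped buffer:

  perf record --aio=4 -e cycles -- ./workload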

Signed-off-by: Alexey Budankov <[email protected]>
---
Changes in v14:
- fix --aio option handling
Changes in v13:
- preserved --aio option name avoiding complication
Changes in v12:
- extended --aio option to --aio-cblocks=<n>
Changes in v10:
- added description of the aio-cblocks option into perf-record.txt
---
tools/perf/Documentation/perf-record.txt | 4 +-
tools/perf/builtin-record.c | 64 ++++++++++++++++++++++++--------
tools/perf/util/mmap.c | 64 +++++++++++++++++++++-----------
tools/perf/util/mmap.h | 9 +++--
4 files changed, 99 insertions(+), 42 deletions(-)

diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
index 7efb4af88a68..d232b13ea713 100644
--- a/tools/perf/Documentation/perf-record.txt
+++ b/tools/perf/Documentation/perf-record.txt
@@ -435,8 +435,8 @@ Specify vmlinux path which has debuginfo.
--buildid-all::
Record build-id of all DSOs regardless whether it's actually hit or not.

---aio::
-Enable asynchronous (Posix AIO) trace writing mode.
+--aio[=n]::
+Use <n> control blocks in asynchronous (Posix AIO) trace writing mode (default: 1, max: 4).
Asynchronous mode is supported only when linking the Perf tool with a libc
library providing an implementation of the Posix AIO API.

diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
index 0c6105860123..c5d9288b8670 100644
--- a/tools/perf/builtin-record.c
+++ b/tools/perf/builtin-record.c
@@ -196,16 +196,35 @@ static int record__aio_complete(struct perf_mmap *md, struct aiocb *cblock)
return rc;
}

-static void record__aio_sync(struct perf_mmap *md)
+static int record__aio_sync(struct perf_mmap *md, bool sync_all)
{
- struct aiocb *cblock = &md->aio.cblock;
+ struct aiocb **aiocb = md->aio.aiocb;
+ struct aiocb *cblocks = md->aio.cblocks;
struct timespec timeout = { 0, 1000 * 1000 * 1 }; /* 1ms */
+ int i, do_suspend;

do {
- if (cblock->aio_fildes == -1 || record__aio_complete(md, cblock))
- return;
+ do_suspend = 0;
+ for (i = 0; i < md->aio.nr_cblocks; ++i) {
+ if (cblocks[i].aio_fildes == -1 || record__aio_complete(md, &cblocks[i])) {
+ if (sync_all)
+ aiocb[i] = NULL;
+ else
+ return i;
+ } else {
+ /*
+ * Started aio write is not complete yet
+ * so it has to be waited before the
+ * next allocation.
+ */
+ aiocb[i] = &cblocks[i];
+ do_suspend = 1;
+ }
+ }
+ if (!do_suspend)
+ return -1;

- while (aio_suspend((const struct aiocb**)&cblock, 1, &timeout)) {
+ while (aio_suspend((const struct aiocb **)aiocb, md->aio.nr_cblocks, &timeout)) {
if (!(errno == EAGAIN || errno == EINTR))
pr_err("failed to sync perf data, error: %m\n");
}
@@ -252,28 +271,33 @@ static void record__aio_mmap_read_sync(struct record *rec)
struct perf_mmap *map = &maps[i];

if (map->base)
- record__aio_sync(map);
+ record__aio_sync(map, true);
}
}

static int nr_cblocks_default = 1;

static int record__aio_parse(const struct option *opt,
- const char *str __maybe_unused,
+ const char *str,
int unset)
{
struct record_opts *opts = (struct record_opts *)opt->value;

- if (unset)
+ if (unset) {
opts->nr_cblocks = 0;
- else
- opts->nr_cblocks = nr_cblocks_default;
+ } else {
+ if (str)
+ opts->nr_cblocks = strtol(str, NULL, 0);
+ if (!opts->nr_cblocks)
+ opts->nr_cblocks = nr_cblocks_default;
+ }

return 0;
}
#else /* HAVE_AIO_SUPPORT */
-static void record__aio_sync(struct perf_mmap *md __maybe_unused)
+static int record__aio_sync(struct perf_mmap *md __maybe_unused, bool sync_all __maybe_unused)
{
+ return -1;
}

static int record__aio_pushfn(void *to __maybe_unused, struct aiocb *cblock __maybe_unused,
@@ -723,12 +747,13 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
goto out;
}
} else {
+ int idx;
/*
* Call record__aio_sync() to wait till map->data buffer
* becomes available after previous aio write request.
*/
- record__aio_sync(map);
- if (perf_mmap__aio_push(map, rec, record__aio_pushfn, &off) != 0) {
+ idx = record__aio_sync(map, false);
+ if (perf_mmap__aio_push(map, rec, idx, record__aio_pushfn, &off) != 0) {
record__aio_set_pos(trace_fd, off);
rc = -1;
goto out;
@@ -1492,6 +1517,13 @@ static int perf_record_config(const char *var, const char *value, void *cb)
var = "call-graph.record-mode";
return perf_default_config(var, value, cb);
}
+#ifdef HAVE_AIO_SUPPORT
+ if (!strcmp(var, "record.aio")) {
+ rec->opts.nr_cblocks = strtol(value, NULL, 0);
+ if (!rec->opts.nr_cblocks)
+ rec->opts.nr_cblocks = nr_cblocks_default;
+ }
+#endif

return 0;
}
@@ -1884,8 +1916,8 @@ static struct option __record_options[] = {
OPT_BOOLEAN(0, "dry-run", &dry_run,
"Parse options then exit"),
#ifdef HAVE_AIO_SUPPORT
- OPT_CALLBACK_NOOPT(0, "aio", &record.opts,
- NULL, "Enable asynchronous trace writing mode",
+ OPT_CALLBACK_OPTARG(0, "aio", &record.opts,
+ &nr_cblocks_default, "n", "Use <n> control blocks in asynchronous trace writing mode (default: 1, max: 4)",
record__aio_parse),
#endif
OPT_END()
@@ -2080,6 +2112,8 @@ int cmd_record(int argc, const char **argv)
goto out;
}

+ if (rec->opts.nr_cblocks > 4)
+ rec->opts.nr_cblocks = 4;
if (verbose > 0)
pr_info("nr_cblocks: %d\n", rec->opts.nr_cblocks);

diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
index 61aa381d05d0..ab30555d2afc 100644
--- a/tools/perf/util/mmap.c
+++ b/tools/perf/util/mmap.c
@@ -156,28 +156,50 @@ void __weak auxtrace_mmap_params__set_idx(struct auxtrace_mmap_params *mp __mayb
#ifdef HAVE_AIO_SUPPORT
static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
{
- int delta_max;
+ int delta_max, i, prio;

map->aio.nr_cblocks = mp->nr_cblocks;
if (map->aio.nr_cblocks) {
- map->aio.data = malloc(perf_mmap__mmap_len(map));
+ map->aio.aiocb = calloc(map->aio.nr_cblocks, sizeof(struct aiocb *));
+ if (!map->aio.aiocb) {
+ pr_debug2("failed to allocate aiocb for data buffer, error %m\n");
+ return -1;
+ }
+ map->aio.cblocks = calloc(map->aio.nr_cblocks, sizeof(struct aiocb));
+ if (!map->aio.cblocks) {
+ pr_debug2("failed to allocate cblocks for data buffer, error %m\n");
+ return -1;
+ }
+ map->aio.data = calloc(map->aio.nr_cblocks, sizeof(void *));
if (!map->aio.data) {
pr_debug2("failed to allocate data buffer, error %m\n");
return -1;
}
- /*
- * Use cblock.aio_fildes value different from -1
- * to denote a started aio write operation on the
- * cblock, so it requires an explicit record__aio_sync()
- * call before the cblock may be reused again.
- */
- map->aio.cblock.aio_fildes = -1;
- /*
- * Allocate cblock with max priority delta to
- * have faster aio write system calls.
- */
delta_max = sysconf(_SC_AIO_PRIO_DELTA_MAX);
- map->aio.cblock.aio_reqprio = delta_max;
+ for (i = 0; i < map->aio.nr_cblocks; ++i) {
+ map->aio.data[i] = malloc(perf_mmap__mmap_len(map));
+ if (!map->aio.data[i]) {
+ pr_debug2("failed to allocate data buffer area, error %m");
+ return -1;
+ }
+ /*
+ * Use cblock.aio_fildes value different from -1
+ * to denote a started aio write operation on the
+ * cblock, so it requires an explicit record__aio_sync()
+ * call before the cblock may be reused again.
+ */
+ map->aio.cblocks[i].aio_fildes = -1;
+ /*
+ * Allocate cblocks with priority delta to have
+ * faster aio write system calls because queued requests
+ * are kept in separate per-prio queues and adding
+ * a new request will iterate through a shorter per-prio
+ * list. Blocks with numbers higher than
+ * _SC_AIO_PRIO_DELTA_MAX go with priority 0.
+ */
+ prio = delta_max - i;
+ map->aio.cblocks[i].aio_reqprio = prio >= 0 ? prio : 0;
+ }
}

return 0;
@@ -189,7 +211,7 @@ static void perf_mmap__aio_munmap(struct perf_mmap *map)
zfree(&map->aio.data);
}

-int perf_mmap__aio_push(struct perf_mmap *md, void *to,
+int perf_mmap__aio_push(struct perf_mmap *md, void *to, int idx,
int push(void *to, struct aiocb *cblock, void *buf, size_t size, off_t off),
off_t *off)
{
@@ -204,7 +226,7 @@ int perf_mmap__aio_push(struct perf_mmap *md, void *to,
return (rc == -EAGAIN) ? 0 : -1;

/*
- * md->base data is copied into md->data buffer to
+ * md->base data is copied into md->data[idx] buffer to
* release space in the kernel buffer as fast as possible,
* through perf_mmap__consume() below.
*
@@ -226,20 +248,20 @@ int perf_mmap__aio_push(struct perf_mmap *md, void *to,
buf = &data[md->start & md->mask];
size = md->mask + 1 - (md->start & md->mask);
md->start += size;
- memcpy(md->aio.data, buf, size);
+ memcpy(md->aio.data[idx], buf, size);
size0 = size;
}

buf = &data[md->start & md->mask];
size = md->end - md->start;
md->start += size;
- memcpy(md->aio.data + size0, buf, size);
+ memcpy(md->aio.data[idx] + size0, buf, size);

/*
- * Increment md->refcount to guard the md->data buffer
+ * Increment md->refcount to guard the md->data[idx] buffer
* from premature deallocation because the md object can be
* released before the aio write request started
- * on mmap->data completes.
+ * on mmap->data[idx] completes.
*
* perf_mmap__put() is done at record__aio_complete()
* after the started request completes.
@@ -249,7 +271,7 @@ int perf_mmap__aio_push(struct perf_mmap *md, void *to,
md->prev = head;
perf_mmap__consume(md);

- rc = push(to, &md->aio.cblock, md->aio.data, size0 + size, *off);
+ rc = push(to, &md->aio.cblocks[idx], md->aio.data[idx], size0 + size, *off);
if (!rc) {
*off += size0 + size;
} else {
diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h
index 9be06f18f17a..cb63d357a248 100644
--- a/tools/perf/util/mmap.h
+++ b/tools/perf/util/mmap.h
@@ -32,8 +32,9 @@ struct perf_mmap {
char event_copy[PERF_SAMPLE_MAX_SIZE] __aligned(8);
#ifdef HAVE_AIO_SUPPORT
struct {
- void *data;
- struct aiocb cblock;
+ void **data;
+ struct aiocb *cblocks;
+ struct aiocb **aiocb;
int nr_cblocks;
} aio;
#endif
@@ -106,11 +107,11 @@ union perf_event *perf_mmap__read_event(struct perf_mmap *map);
int perf_mmap__push(struct perf_mmap *md, void *to,
int push(struct perf_mmap *map, void *to, void *buf, size_t size));
#ifdef HAVE_AIO_SUPPORT
-int perf_mmap__aio_push(struct perf_mmap *md, void *to,
+int perf_mmap__aio_push(struct perf_mmap *md, void *to, int idx,
int push(void *to, struct aiocb *cblock, void *buf, size_t size, off_t off),
off_t *off);
#else
-static inline int perf_mmap__aio_push(struct perf_mmap *md __maybe_unused, void *to __maybe_unused,
+static inline int perf_mmap__aio_push(struct perf_mmap *md __maybe_unused, void *to __maybe_unused, int idx __maybe_unused,
int push(void *to, struct aiocb *cblock, void *buf, size_t size, off_t off) __maybe_unused,
off_t *off __maybe_unused)
{

2018-10-15 10:18:40

by Jiri Olsa

Subject: Re: [PATCH v14 0/3]: perf: reduce data loss when profiling highly parallel CPU bound workloads

On Mon, Oct 15, 2018 at 09:26:09AM +0300, Alexey Budankov wrote:
>
> Currently in record mode the tool implements trace writing serially.
> The algorithm loops over mapped per-cpu data buffers and stores
> ready data chunks into a trace file using the write() system call.
>
> Under some circumstances the kernel may lack free space in a buffer
> because the buffer's other half has not yet been written to disk while
> the tool is busy writing another buffer's data.
>
> Thus the serial trace writing implementation may cause the kernel
> to lose profiling data, and that is what is observed when profiling
> highly parallel CPU bound workloads on machines with a large number
> of cores.
>
> An experiment profiling matrix multiplication code executing 128
> threads on Intel Xeon Phi (KNM) with 272 cores, like below,
> demonstrates a data loss metric value of 98%:
>
> /usr/bin/time perf record -o /tmp/perf-ser.data -a -N -B -T -R -g \
> --call-graph dwarf,1024 --user-regs=IP,SP,BP --switch-events \
> -e cycles,instructions,ref-cycles,software/period=1,name=cs,config=0x3/Duk -- \
> matrix.gcc

I ran above on 24 cpu server and could not see the gain,
but I guess I'd need much bigger server to see that

anyway, the code is now nicely separated, and given the
advertised results below I have no objections

Reviewed-by: Jiri Olsa <[email protected]>

thanks,
jirka


> [...]

2018-10-25 08:00:28

by Alexey Budankov

Subject: Re: [PATCH v14 0/3]: perf: reduce data loss when profiling highly parallel CPU bound workloads

Hi,

On 15.10.2018 13:17, Jiri Olsa wrote:
> On Mon, Oct 15, 2018 at 09:26:09AM +0300, Alexey Budankov wrote:
>>
>> Currently in record mode the tool implements trace writing serially.
>> The algorithm loops over mapped per-cpu data buffers and stores
>> ready data chunks into a trace file using the write() system call.
>>
>> Under some circumstances the kernel may lack free space in a buffer
>> because the buffer's other half has not yet been written to disk while
>> the tool is busy writing another buffer's data.
>>
>> Thus the serial trace writing implementation may cause the kernel
>> to lose profiling data, and that is what is observed when profiling
>> highly parallel CPU bound workloads on machines with a large number
>> of cores.
>>
>> An experiment profiling matrix multiplication code executing 128
>> threads on Intel Xeon Phi (KNM) with 272 cores, like below,
>> demonstrates a data loss metric value of 98%:
>>
>> /usr/bin/time perf record -o /tmp/perf-ser.data -a -N -B -T -R -g \
>> --call-graph dwarf,1024 --user-regs=IP,SP,BP --switch-events \
>> -e cycles,instructions,ref-cycles,software/period=1,name=cs,config=0x3/Duk -- \
>> matrix.gcc
>
> I ran above on 24 cpu server and could not see the gain,
> but I guess I'd need much bigger server to see that
>
> anyway, the code is now nicely separated, and given the
> advertised results below I have no objections
>
> Reviewed-by: Jiri Olsa <[email protected]>

Is the plan Jiri mentioned earlier to have it as a stand alone patch kit
or upstream the changes into mainline?

Thanks,
Alexey

> [...]

2018-10-25 08:56:45

by Jiri Olsa

Subject: Re: [PATCH v14 0/3]: perf: reduce data loss when profiling highly parallel CPU bound workloads

On Thu, Oct 25, 2018 at 10:59:36AM +0300, Alexey Budankov wrote:
> Hi,
>
> On 15.10.2018 13:17, Jiri Olsa wrote:
> > On Mon, Oct 15, 2018 at 09:26:09AM +0300, Alexey Budankov wrote:
> >>
> >> Currently in record mode the tool implements trace writing serially.
> >> The algorithm loops over mapped per-cpu data buffers and stores
> >> ready data chunks into a trace file using the write() system call.
> >>
> >> Under some circumstances the kernel may lack free space in a buffer
> >> because the buffer's other half has not yet been written to disk while
> >> the tool is busy writing another buffer's data.
> >>
> >> Thus the serial trace writing implementation may cause the kernel
> >> to lose profiling data, and that is what is observed when profiling
> >> highly parallel CPU bound workloads on machines with a large number
> >> of cores.
> >>
> >> An experiment profiling matrix multiplication code executing 128
> >> threads on Intel Xeon Phi (KNM) with 272 cores, like below,
> >> demonstrates a data loss metric value of 98%:
> >>
> >> /usr/bin/time perf record -o /tmp/perf-ser.data -a -N -B -T -R -g \
> >> --call-graph dwarf,1024 --user-regs=IP,SP,BP --switch-events \
> >> -e cycles,instructions,ref-cycles,software/period=1,name=cs,config=0x3/Duk -- \
> >> matrix.gcc
> >
> > I ran above on 24 cpu server and could not see the gain,
> > but I guess I'd need much bigger server to see that
> >
> > anyway, the code is now nicely separated, and given the
> > advertised results below I have no objections
> >
> > Reviewed-by: Jiri Olsa <[email protected]>
>
> Is the plan Jiri mentioned earlier to have it as a stand alone patch kit
> or upstream the changes into mainline?

I haven't heard from Arnaldo yet, but I'd like to have this merged in

jirka

2018-10-25 11:13:43

by Arnaldo Carvalho de Melo

Subject: Re: [PATCH v14 0/3]: perf: reduce data loss when profiling highly parallel CPU bound workloads

On Thu, Oct 25, 2018 at 10:54:39AM +0200, Jiri Olsa wrote:
> On Thu, Oct 25, 2018 at 10:59:36AM +0300, Alexey Budankov wrote:
> > Hi,
> >
> > On 15.10.2018 13:17, Jiri Olsa wrote:
> > > On Mon, Oct 15, 2018 at 09:26:09AM +0300, Alexey Budankov wrote:
> > >>
> > >> Currently in record mode the tool implements trace writing serially.
> > >> The algorithm loops over mapped per-cpu data buffers and stores
> > >> ready data chunks into a trace file using the write() system call.
> > >>
> > >> Under some circumstances the kernel may lack free space in a buffer
> > >> because the buffer's other half has not yet been written to disk while
> > >> the tool is busy writing another buffer's data.
> > >>
> > >> Thus the serial trace writing implementation may cause the kernel
> > >> to lose profiling data, and that is what is observed when profiling
> > >> highly parallel CPU bound workloads on machines with a large number
> > >> of cores.
> > >>
> > >> An experiment profiling matrix multiplication code executing 128
> > >> threads on Intel Xeon Phi (KNM) with 272 cores, like below,
> > >> demonstrates a data loss metric value of 98%:
> > >>
> > >> /usr/bin/time perf record -o /tmp/perf-ser.data -a -N -B -T -R -g \
> > >> --call-graph dwarf,1024 --user-regs=IP,SP,BP --switch-events \
> > >> -e cycles,instructions,ref-cycles,software/period=1,name=cs,config=0x3/Duk -- \
> > >> matrix.gcc
> > >
> > > I ran above on 24 cpu server and could not see the gain,
> > > but I guess I'd need much bigger server to see that
> > >
> > > anyway, the code is now nicely separated, and given the
> > > advertised results below I have no objections
> > >
> > > Reviewed-by: Jiri Olsa <[email protected]>
> >
> > Is the plan Jiri mentioned earlier to have it as a stand alone patch kit
> > or upstream the changes into mainline?
>
> I haven't heard from Arnaldo yet, but I'd like to have this merged in

I'll try and go over it today.

- Arnaldo

2018-11-01 18:33:03

by Song Liu

Subject: Re: [PATCH v14 3/3]: perf record: extend trace writing to multi AIO

On Sun, Oct 14, 2018 at 11:47 PM Alexey Budankov
<[email protected]> wrote:
>
>
> Multi AIO trace writing allows caching more kernel data in userspace
> memory, postponing trace writing for the sake of an overall increase
> in profiling data throughput. It can be seen as an extension of the
> kernel data buffers into userspace memory.
>
> With an aio option value different from 0 (the default value is 1),
> the tool gains the capability to cache more and more data in user
> space while delegating the spill to AIO.
>
> That allows avoiding the suspend at record__aio_sync() between calls
> of record__mmap_read_evlist() and increases profiling data throughput
> at the cost of userspace memory.
>
> Signed-off-by: Alexey Budankov <[email protected]>
> ---
> Changes in v14:
> - fix --aio option handling
> Changes in v13:
> - preserved --aio option name avoiding complication
> Changes in v12:
> - extended --aio option to --aio-cblocks=<n>
> Changes in v10:
> - added description of aio-cblocks option into perf-record.txt
> ---
> tools/perf/Documentation/perf-record.txt | 4 +-
> tools/perf/builtin-record.c | 64 ++++++++++++++++++++++++--------
> tools/perf/util/mmap.c | 64 +++++++++++++++++++++-----------
> tools/perf/util/mmap.h | 9 +++--
> 4 files changed, 99 insertions(+), 42 deletions(-)
>
> diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
> index 7efb4af88a68..d232b13ea713 100644
> --- a/tools/perf/Documentation/perf-record.txt
> +++ b/tools/perf/Documentation/perf-record.txt
> @@ -435,8 +435,8 @@ Specify vmlinux path which has debuginfo.
> --buildid-all::
> Record build-id of all DSOs regardless whether it's actually hit or not.
>
> ---aio::
> -Enable asynchronous (Posix AIO) trace writing mode.
> +--aio[=n]::
> +Use <n> control blocks in asynchronous (Posix AIO) trace writing mode (default: 1, max: 4).
> Asynchronous mode is supported only when linking the perf tool with a libc
> library providing an implementation of the Posix AIO API.
>
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index 0c6105860123..c5d9288b8670 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -196,16 +196,35 @@ static int record__aio_complete(struct perf_mmap *md, struct aiocb *cblock)
> return rc;
> }
>
> -static void record__aio_sync(struct perf_mmap *md)
> +static int record__aio_sync(struct perf_mmap *md, bool sync_all)
> {
> - struct aiocb *cblock = &md->aio.cblock;
> + struct aiocb **aiocb = md->aio.aiocb;
> + struct aiocb *cblocks = md->aio.cblocks;
> struct timespec timeout = { 0, 1000 * 1000 * 1 }; /* 1ms */
> + int i, do_suspend;
>
> do {
> - if (cblock->aio_fildes == -1 || record__aio_complete(md, cblock))
> - return;
> + do_suspend = 0;
> + for (i = 0; i < md->aio.nr_cblocks; ++i) {
> + if (cblocks[i].aio_fildes == -1 || record__aio_complete(md, &cblocks[i])) {
> + if (sync_all)
> + aiocb[i] = NULL;
> + else
> + return i;
> + } else {
> + /*
> + * A started aio write is not complete yet,
> + * so it has to be waited on before the
> + * next allocation.
> + */
> + aiocb[i] = &cblocks[i];
> + do_suspend = 1;
> + }
> + }
> + if (!do_suspend)
> + return -1;
>
> - while (aio_suspend((const struct aiocb**)&cblock, 1, &timeout)) {
> + while (aio_suspend((const struct aiocb **)aiocb, md->aio.nr_cblocks, &timeout)) {
> if (!(errno == EAGAIN || errno == EINTR))
> pr_err("failed to sync perf data, error: %m\n");
> }
> @@ -252,28 +271,33 @@ static void record__aio_mmap_read_sync(struct record *rec)
> struct perf_mmap *map = &maps[i];
>
> if (map->base)
> - record__aio_sync(map);
> + record__aio_sync(map, true);
> }
> }
>
> static int nr_cblocks_default = 1;
>
> static int record__aio_parse(const struct option *opt,
> - const char *str __maybe_unused,
> + const char *str,
> int unset)
> {
> struct record_opts *opts = (struct record_opts *)opt->value;
>
> - if (unset)
> + if (unset) {
> opts->nr_cblocks = 0;
> - else
> - opts->nr_cblocks = nr_cblocks_default;
> + } else {
> + if (str)
> + opts->nr_cblocks = strtol(str, NULL, 0);
> + if (!opts->nr_cblocks)
> + opts->nr_cblocks = nr_cblocks_default;
> + }
>
> return 0;
> }
> #else /* HAVE_AIO_SUPPORT */
> -static void record__aio_sync(struct perf_mmap *md __maybe_unused)
> +static int record__aio_sync(struct perf_mmap *md __maybe_unused, bool sync_all __maybe_unused)
> {
> + return -1;
> }
>
> static int record__aio_pushfn(void *to __maybe_unused, struct aiocb *cblock __maybe_unused,
> @@ -723,12 +747,13 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
> goto out;
> }
> } else {
> + int idx;
> /*
> * Call record__aio_sync() to wait till map->data buffer
> * becomes available after previous aio write request.
> */
> - record__aio_sync(map);
> - if (perf_mmap__aio_push(map, rec, record__aio_pushfn, &off) != 0) {
> + idx = record__aio_sync(map, false);
> + if (perf_mmap__aio_push(map, rec, idx, record__aio_pushfn, &off) != 0) {
> record__aio_set_pos(trace_fd, off);
> rc = -1;
> goto out;
> @@ -1492,6 +1517,13 @@ static int perf_record_config(const char *var, const char *value, void *cb)
> var = "call-graph.record-mode";
> return perf_default_config(var, value, cb);
> }
> +#ifdef HAVE_AIO_SUPPORT
> + if (!strcmp(var, "record.aio")) {
> + rec->opts.nr_cblocks = strtol(value, NULL, 0);
> + if (!rec->opts.nr_cblocks)
> + rec->opts.nr_cblocks = nr_cblocks_default;
> + }
> +#endif
>
> return 0;
> }
> @@ -1884,8 +1916,8 @@ static struct option __record_options[] = {
> OPT_BOOLEAN(0, "dry-run", &dry_run,
> "Parse options then exit"),
> #ifdef HAVE_AIO_SUPPORT
> - OPT_CALLBACK_NOOPT(0, "aio", &record.opts,
> - NULL, "Enable asynchronous trace writing mode",
> + OPT_CALLBACK_OPTARG(0, "aio", &record.opts,
> + &nr_cblocks_default, "n", "Use <n> control blocks in asynchronous trace writing mode (default: 1, max: 4)",
> record__aio_parse),
> #endif
> OPT_END()
> @@ -2080,6 +2112,8 @@ int cmd_record(int argc, const char **argv)
> goto out;
> }
>
> + if (rec->opts.nr_cblocks > 4)
> + rec->opts.nr_cblocks = 4;
nit: I feel this logic belongs in record__aio_parse(). And we should have

static int nr_cblocks_max = 4;

Other than this

Acked-by: Song Liu <[email protected]>
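
For illustration, the refactoring Song suggests might look like the
sketch below, based on the record__aio_parse() shown in the patch;
treating nr_cblocks_max as a named limit is an assumption here, not
committed code:

static int nr_cblocks_default = 1;
static int nr_cblocks_max = 4;

static int record__aio_parse(const struct option *opt,
			     const char *str,
			     int unset)
{
	struct record_opts *opts = (struct record_opts *)opt->value;

	if (unset) {
		opts->nr_cblocks = 0;
	} else {
		if (str)
			opts->nr_cblocks = strtol(str, NULL, 0);
		if (!opts->nr_cblocks)
			opts->nr_cblocks = nr_cblocks_default;
		/* clamp here instead of in cmd_record() */
		if (opts->nr_cblocks > nr_cblocks_max)
			opts->nr_cblocks = nr_cblocks_max;
	}

	return 0;
}

With that in place, a run like

  perf record --aio=4 -e cycles -- ./workload

would use four AIO control blocks per mmap, and the same value could
come through the record.aio branch of perf_record_config() quoted
above, e.g. in ~/.perfconfig:

  [record]
	aio = 4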

2018-11-01 18:43:05

by Song Liu

[permalink] [raw]
Subject: Re: [PATCH v14 2/3]: perf record: enable asynchronous trace writing

On Sun, Oct 14, 2018 at 11:45 PM Alexey Budankov
<[email protected]> wrote:
>
>
> The trace file offset is read once before the mmaps iterating loop and
> written back after all performance data has been enqueued for aio writing.
> The trace file offset is incremented linearly after every successful aio
> write operation.
>
> record__aio_sync() blocks till completion of the started AIO operation
> and then proceeds.
>
> record__aio_mmap_read_sync() implements a barrier for all incomplete
> aio write requests.
>
> Signed-off-by: Alexey Budankov <[email protected]>
> ---
> Changes in v14:
> - implemented the nr_cblocks_default variable
> Changes in v13:
> - named new functions with _aio_ word
> - grouped aio functions under single #ifdef HAVE_AIO_SUPPORT
> - moved perf_mmap__aio_push() stub into header
> - removed trailing white space
> Changes in v12:
> - implemented record__aio_get/set_pos(), record__aio_enabled()
> - implemented simple --aio option
> Changes in v11:
> - replaced both lseek() syscalls in every loop iteration with just two
> syscalls, before and after the loop at record__mmap_read_evlist(),
> advancing the *in-flight* off file pos value at perf_mmap__aio_push()
> Changes in v10:
> - avoided lseek() setting file pos back in case of record__aio_write() failure
> - compacted code selecting between serial and AIO streaming
> - optimized call places of record__mmap_read_sync()
> Changes in v9:
> - enable AIO streaming only when --aio-cblocks option is specified explicitly
> Changes in v8:
> - split AIO completion check into separate record__aio_complete()
> Changes in v6:
> - handled errno == EAGAIN case from aio_write();
> Changes in v5:
> - data loss metrics decreased from 25% to 2x in trialed configuration;
> - avoided nanosleep() prior to calling aio_suspend();
> - switched to per cpu multi record__aio_sync() aio
> - record_mmap_read_sync() now does global barrier just before
> switching trace file or collection stop;
> - resolved livelock on perf record -e intel_pt// -- dd if=/dev/zero of=/dev/null count=100000
> Changes in v4:
> - converted void *bf to struct perf_mmap *md in signatures
> - written comment in perf_mmap__push() just before perf_mmap__get();
> - written comment in record__mmap_read_sync() on possible restarting
> of aio_write() operation and releasing perf_mmap object after all;
> - added perf_mmap__put() for the cases of failed aio_write();
> Changes in v3:
> - written comments about the nanosleep(0.5ms) call prior to aio_suspend()
> to cope with the intrusiveness of its implementation in glibc;
> - written comments about the rationale behind copying profiling data
> into the mmap->data buffer;
> ---
> tools/perf/Documentation/perf-record.txt | 5 +
> tools/perf/builtin-record.c | 218 ++++++++++++++++++++++++++++++-
> tools/perf/perf.h | 1 +
> tools/perf/util/evlist.c | 6 +-
> tools/perf/util/evlist.h | 2 +-
> tools/perf/util/mmap.c | 77 ++++++++++-
> tools/perf/util/mmap.h | 14 ++
> 7 files changed, 314 insertions(+), 9 deletions(-)
>
> diff --git a/tools/perf/Documentation/perf-record.txt b/tools/perf/Documentation/perf-record.txt
> index 246dee081efd..7efb4af88a68 100644
> --- a/tools/perf/Documentation/perf-record.txt
> +++ b/tools/perf/Documentation/perf-record.txt
> @@ -435,6 +435,11 @@ Specify vmlinux path which has debuginfo.
> --buildid-all::
> Record build-id of all DSOs regardless whether it's actually hit or not.
>
> +--aio::
> +Enable asynchronous (Posix AIO) trace writing mode.
> +Asynchronous mode is supported only when linking the perf tool with a libc
> +library providing an implementation of the Posix AIO API.
> +
> --all-kernel::
> Configure all used events to run in kernel space.
>
> diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
> index 0980dfe3396b..0c6105860123 100644
> --- a/tools/perf/builtin-record.c
> +++ b/tools/perf/builtin-record.c
> @@ -124,6 +124,183 @@ static int record__write(struct record *rec, struct perf_mmap *map __maybe_unuse
> return 0;
> }
>
> +#ifdef HAVE_AIO_SUPPORT
> +static int record__aio_write(struct aiocb *cblock, int trace_fd,
> + void *buf, size_t size, off_t off)
> +{
> + int rc;
> +
> + cblock->aio_fildes = trace_fd;
> + cblock->aio_buf = buf;
> + cblock->aio_nbytes = size;
> + cblock->aio_offset = off;
> + cblock->aio_sigevent.sigev_notify = SIGEV_NONE;
> +
> + do {
> + rc = aio_write(cblock);
> + if (rc == 0) {
> + break;
> + } else if (errno != EAGAIN) {
> + cblock->aio_fildes = -1;
> + pr_err("failed to queue perf data, error: %m\n");
> + break;
> + }
> + } while (1);
> +
> + return rc;
> +}
> +
> +static int record__aio_complete(struct perf_mmap *md, struct aiocb *cblock)
> +{
> + void *rem_buf;
> + off_t rem_off;
> + size_t rem_size;
> + int rc, aio_errno;
> + ssize_t aio_ret, written;
> +
> + aio_errno = aio_error(cblock);
> + if (aio_errno == EINPROGRESS)
> + return 0;
> +
> + written = aio_ret = aio_return(cblock);
> + if (aio_ret < 0) {
> + if (aio_errno != EINTR)
> + pr_err("failed to write perf data, error: %m\n");
> + written = 0;
> + }
> +
> + rem_size = cblock->aio_nbytes - written;
> +
> + if (rem_size == 0) {
> + cblock->aio_fildes = -1;
> + /*
> + * md->refcount is incremented in perf_mmap__push() for
> + * every enqueued aio write request so decrement it because
> + * the request is now complete.
> + */
> + perf_mmap__put(md);
> + rc = 1;
> + } else {
> + /*
> + * An aio write request may require a restart with the
> + * remainder if the kernel didn't write the whole
> + * chunk at once.
> + */
> + rem_off = cblock->aio_offset + written;
> + rem_buf = (void *)(cblock->aio_buf + written);
> + record__aio_write(cblock, cblock->aio_fildes,
> + rem_buf, rem_size, rem_off);
> + rc = 0;
> + }
> +
> + return rc;
> +}
> +
> +static void record__aio_sync(struct perf_mmap *md)
> +{
> + struct aiocb *cblock = &md->aio.cblock;
> + struct timespec timeout = { 0, 1000 * 1000 * 1 }; /* 1ms */
> +
> + do {
> + if (cblock->aio_fildes == -1 || record__aio_complete(md, cblock))
> + return;
> +
> + while (aio_suspend((const struct aiocb**)&cblock, 1, &timeout)) {
> + if (!(errno == EAGAIN || errno == EINTR))
> + pr_err("failed to sync perf data, error: %m\n");
> + }
> + } while (1);
> +}
> +
> +static int record__aio_pushfn(void *to, struct aiocb *cblock, void *bf, size_t size, off_t off)
> +{
> + struct record *rec = to;
> + int ret, trace_fd = rec->session->data->file.fd;
> +
> + rec->samples++;
> +
> + ret = record__aio_write(cblock, trace_fd, bf, size, off);
> + if (!ret) {
> + rec->bytes_written += size;
> + if (switch_output_size(rec))
> + trigger_hit(&switch_output_trigger);
> + }
> +
> + return ret;
> +}
> +
> +static off_t record__aio_get_pos(int trace_fd)
> +{
> + return lseek(trace_fd, 0, SEEK_CUR);
> +}
> +
> +static void record__aio_set_pos(int trace_fd, off_t pos)
> +{
> + lseek(trace_fd, pos, SEEK_SET);
> +}
> +
> +static void record__aio_mmap_read_sync(struct record *rec)
> +{
> + int i;
> + struct perf_evlist *evlist = rec->evlist;
> + struct perf_mmap *maps = evlist->mmap;
> +
> + if (!rec->opts.nr_cblocks)
> + return;
> +
> + for (i = 0; i < evlist->nr_mmaps; i++) {
> + struct perf_mmap *map = &maps[i];
> +
> + if (map->base)
> + record__aio_sync(map);
> + }
> +}
> +
> +static int nr_cblocks_default = 1;
> +
> +static int record__aio_parse(const struct option *opt,
> + const char *str __maybe_unused,
> + int unset)
> +{
> + struct record_opts *opts = (struct record_opts *)opt->value;
> +
> + if (unset)
> + opts->nr_cblocks = 0;
> + else
> + opts->nr_cblocks = nr_cblocks_default;
> +
> + return 0;
> +}
> +#else /* HAVE_AIO_SUPPORT */
> +static void record__aio_sync(struct perf_mmap *md __maybe_unused)
> +{
> +}
> +
> +static int record__aio_pushfn(void *to __maybe_unused, struct aiocb *cblock __maybe_unused,
> + void *bf __maybe_unused, size_t size __maybe_unused, off_t off __maybe_unused)
> +{
> + return -1;
> +}
> +
> +static off_t record__aio_get_pos(int trace_fd __maybe_unused)
> +{
> + return -1;
> +}
> +
> +static void record__aio_set_pos(int trace_fd __maybe_unused, off_t pos __maybe_unused)
> +{
> +}
> +
> +static void record__aio_mmap_read_sync(struct record *rec __maybe_unused)
> +{
> +}
> +#endif
> +
> +static int record__aio_enabled(struct record *rec)
> +{
> + return rec->opts.nr_cblocks > 0;
> +}
> +
> static int process_synthesized_event(struct perf_tool *tool,
> union perf_event *event,
> struct perf_sample *sample __maybe_unused,
> @@ -329,7 +506,7 @@ static int record__mmap_evlist(struct record *rec,
>
> if (perf_evlist__mmap_ex(evlist, opts->mmap_pages,
> opts->auxtrace_mmap_pages,
> - opts->auxtrace_snapshot_mode) < 0) {
> + opts->auxtrace_snapshot_mode, opts->nr_cblocks) < 0) {
> if (errno == EPERM) {
> pr_err("Permission error mapping pages.\n"
> "Consider increasing "
> @@ -520,6 +697,8 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
> int i;
> int rc = 0;
> struct perf_mmap *maps;
> + int trace_fd = rec->data.file.fd;
> + off_t off;
>
> if (!evlist)
> return 0;
> @@ -531,13 +710,29 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
> if (overwrite && evlist->bkw_mmap_state != BKW_MMAP_DATA_PENDING)
> return 0;
>
> + if (record__aio_enabled(rec))
> + off = record__aio_get_pos(trace_fd);
> +
> for (i = 0; i < evlist->nr_mmaps; i++) {
> struct perf_mmap *map = &maps[i];
>
> if (map->base) {
> - if (perf_mmap__push(map, rec, record__pushfn) != 0) {
> - rc = -1;
> - goto out;
> + if (!record__aio_enabled(rec)) {
> + if (perf_mmap__push(map, rec, record__pushfn) != 0) {
> + rc = -1;
> + goto out;
> + }
> + } else {
> + /*
> + * Call record__aio_sync() to wait till map->data buffer
> + * becomes available after previous aio write request.
> + */
> + record__aio_sync(map);
> + if (perf_mmap__aio_push(map, rec, record__aio_pushfn, &off) != 0) {
> + record__aio_set_pos(trace_fd, off);
> + rc = -1;
> + goto out;
> + }
> }
> }
>
> @@ -548,6 +743,9 @@ static int record__mmap_read_evlist(struct record *rec, struct perf_evlist *evli
> }
> }
>
> + if (record__aio_enabled(rec))
> + record__aio_set_pos(trace_fd, off);
> +
> /*
> * Mark the round finished in case we wrote
> * at least one event.
> @@ -650,6 +848,8 @@ record__switch_output(struct record *rec, bool at_exit)
> /* Same Size: "2015122520103046"*/
> char timestamp[] = "InvalidTimestamp";
>
> + record__aio_mmap_read_sync(rec);
> +
> record__synthesize(rec, true);
> if (target__none(&rec->opts.target))
> record__synthesize_workload(rec, true);
> @@ -1157,6 +1357,8 @@ static int __cmd_record(struct record *rec, int argc, const char **argv)
> record__synthesize_workload(rec, true);
>
> out_child:
> + record__aio_mmap_read_sync(rec);
> +
> if (forks) {
> int exit_status;
>
> @@ -1681,6 +1883,11 @@ static struct option __record_options[] = {
> "signal"),
> OPT_BOOLEAN(0, "dry-run", &dry_run,
> "Parse options then exit"),
> +#ifdef HAVE_AIO_SUPPORT
> + OPT_CALLBACK_NOOPT(0, "aio", &record.opts,
> + NULL, "Enable asynchronous trace writing mode",
> + record__aio_parse),
> +#endif
> OPT_END()
> };
>
> @@ -1873,6 +2080,9 @@ int cmd_record(int argc, const char **argv)
> goto out;
> }
>
> + if (verbose > 0)
> + pr_info("nr_cblocks: %d\n", rec->opts.nr_cblocks);
> +

nit: I guess we don't really need this pr_info()?

Other than this,

Acked-by: Song Liu <[email protected]>
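
To make the mechanism concrete, here is a minimal standalone sketch of
the queue-then-suspend pattern that record__aio_write() and
record__aio_sync() build on (illustrative only, not the patch code;
link with -lrt on glibc):

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	static char buf[] = "profiling data chunk\n";
	struct aiocb cb;
	const struct aiocb *list[1] = { &cb };
	struct timespec timeout = { 0, 1000 * 1000 * 1 }; /* 1ms, as in the patch */
	int fd = open("/tmp/aio-demo.data", O_CREAT | O_WRONLY | O_TRUNC, 0644);

	if (fd < 0)
		return 1;

	memset(&cb, 0, sizeof(cb));
	cb.aio_fildes = fd;
	cb.aio_buf = buf;
	cb.aio_nbytes = sizeof(buf) - 1;
	cb.aio_offset = 0;
	cb.aio_sigevent.sigev_notify = SIGEV_NONE;

	if (aio_write(&cb))	/* queue the write; returns immediately */
		return 1;

	/* poll for completion, suspending up to 1ms per iteration */
	while (aio_error(&cb) == EINPROGRESS) {
		if (aio_suspend(list, 1, &timeout) &&
		    errno != EAGAIN && errno != EINTR)
			break;
	}
	printf("written: %zd bytes\n", aio_return(&cb));
	close(fd);
	return 0;
}

The record code follows the same shape per mmap buffer, except that the
offset comes from the linearly advancing file position and completion
goes through record__aio_complete(), which restarts partially written
requests with the remainder.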

2018-11-01 18:49:04

by Song Liu

[permalink] [raw]
Subject: Re: [PATCH v14 1/3]: perf util: map data buffer for preserving collected data

On Sun, Oct 14, 2018 at 11:44 PM Alexey Budankov
<[email protected]> wrote:
>
>
> The map->data buffer is used to preserve map->base profiling data
> for writing to disk. The AIO map->cblock is used to queue the
> corresponding map->data buffer for asynchronous writing.
>
> Signed-off-by: Alexey Budankov <[email protected]>

Acked-by: Song Liu <[email protected]>

Shall we add libaio to tools/build/Makefile.feature? It would be great to see
whether aio is enabled in "Auto-detecting system features:".

Thanks for implementing this!

Song

> ---
> Changes in v13:
> - grouped aio functions under single #ifdef HAVE_AIO_SUPPORT
> Changes in v12:
> - applied stub functions design for the whole patch kit
> - grouped AIO related data into a struct under struct perf_mmap
> Changes in v10:
> - moved specific code to perf_mmap__aio_mmap(), perf_mmap__aio_munmap()
> - adjusted error reporting by using %m
> Changes in v9:
> - implemented NO_AIO and HAVE_AIO_SUPPORT defines to cover cases of
> libc implementations without Posix AIO API support
> Changes in v7:
> - implemented handling record.aio setting from perfconfig file
> Changes in v6:
> - adjusted setting of priorities for cblocks;
> Changes in v5:
> - reshaped layout of data structures;
> - implemented --aio option;
> Changes in v4:
> - converted mmap()/munmap() to malloc()/free() for mmap->data buffer management
> Changes in v2:
> - converted zalloc() to calloc() for allocation of mmap_aio array,
> - fixed a typo and adjusted fallback branch code;
> ---
> tools/perf/Makefile.config | 5 +++++
> tools/perf/Makefile.perf | 7 ++++++-
> tools/perf/util/evlist.c | 2 +-
> tools/perf/util/mmap.c | 49 +++++++++++++++++++++++++++++++++++++++++++++-
> tools/perf/util/mmap.h | 11 ++++++++++-
> 5 files changed, 70 insertions(+), 4 deletions(-)
>
> diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
> index f6d1a03c7523..2e90f4ce9214 100644
> --- a/tools/perf/Makefile.config
> +++ b/tools/perf/Makefile.config
> @@ -355,6 +355,11 @@ endif # NO_LIBELF
>
> ifeq ($(feature-glibc), 1)
> CFLAGS += -DHAVE_GLIBC_SUPPORT
> + ifndef NO_AIO
> + ifndef BIONIC
> + CFLAGS += -DHAVE_AIO_SUPPORT
> + endif
> + endif
> endif
>
> ifdef NO_DWARF
> diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
> index 92514fb3689f..7becc6a72cf2 100644
> --- a/tools/perf/Makefile.perf
> +++ b/tools/perf/Makefile.perf
> @@ -97,8 +97,13 @@ include ../scripts/utilities.mak
> # Define LIBCLANGLLVM if you DO want builtin clang and llvm support.
> # When selected, pass LLVM_CONFIG=/path/to/llvm-config to `make' if
> # llvm-config is not in $PATH.
> -
> +#
> # Define NO_CORESIGHT if you do not want support for CoreSight trace decoding.
> +#
> +# Define NO_AIO if you do not want support for Posix AIO based trace
> +# streaming for record mode. Currently Posix AIO trace streaming is
> +# supported only when linking with glibc.
> +#
>
> # As per kernel Makefile, avoid funny character set dependencies
> unexport LC_ALL
> diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
> index be440df29615..1a83bf2c069c 100644
> --- a/tools/perf/util/evlist.c
> +++ b/tools/perf/util/evlist.c
> @@ -1028,7 +1028,7 @@ int perf_evlist__mmap_ex(struct perf_evlist *evlist, unsigned int pages,
> * Its value is decided by evsel's write_backward.
> * So &mp should not be passed through const pointer.
> */
> - struct mmap_params mp;
> + struct mmap_params mp = { .nr_cblocks = 0 };
>
> if (!evlist->mmap)
> evlist->mmap = perf_evlist__alloc_mmap(evlist, false);
> diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c
> index cdb95b3a1213..47cdc3ad6546 100644
> --- a/tools/perf/util/mmap.c
> +++ b/tools/perf/util/mmap.c
> @@ -153,8 +153,55 @@ void __weak auxtrace_mmap_params__set_idx(struct auxtrace_mmap_params *mp __mayb
> {
> }
>
> +#ifdef HAVE_AIO_SUPPORT
> +static int perf_mmap__aio_mmap(struct perf_mmap *map, struct mmap_params *mp)
> +{
> + int delta_max;
> +
> + if (mp->nr_cblocks) {
> + map->aio.data = malloc(perf_mmap__mmap_len(map));
> + if (!map->aio.data) {
> + pr_debug2("failed to allocate data buffer, error %m\n");
> + return -1;
> + }
> + /*
> + * A cblock.aio_fildes value different from -1
> + * denotes a started aio write operation on the
> + * cblock, so an explicit record__aio_sync() call
> + * is required before the cblock may be reused.
> + */
> + map->aio.cblock.aio_fildes = -1;
> + /*
> + * Allocate cblock with max priority delta to
> + * have faster aio write system calls.
> + */
> + delta_max = sysconf(_SC_AIO_PRIO_DELTA_MAX);
> + map->aio.cblock.aio_reqprio = delta_max;
> + }
> +
> + return 0;
> +}
> +
> +static void perf_mmap__aio_munmap(struct perf_mmap *map)
> +{
> + if (map->aio.data)
> + zfree(&map->aio.data);
> +}
> +#else
> +static int perf_mmap__aio_mmap(struct perf_mmap *map __maybe_unused,
> + struct mmap_params *mp __maybe_unused)
> +{
> + return 0;
> +}
> +
> +static void perf_mmap__aio_munmap(struct perf_mmap *map __maybe_unused)
> +{
> +}
> +#endif
> +
> void perf_mmap__munmap(struct perf_mmap *map)
> {
> + perf_mmap__aio_munmap(map);
> if (map->base != NULL) {
> munmap(map->base, perf_mmap__mmap_len(map));
> map->base = NULL;
> @@ -197,7 +244,7 @@ int perf_mmap__mmap(struct perf_mmap *map, struct mmap_params *mp, int fd, int c
> &mp->auxtrace_mp, map->base, fd))
> return -1;
>
> - return 0;
> + return perf_mmap__aio_mmap(map, mp);
> }
>
> static int overwrite_rb_find_range(void *buf, int mask, u64 *start, u64 *end)
> diff --git a/tools/perf/util/mmap.h b/tools/perf/util/mmap.h
> index e603314dc792..a46dbdcdcc8a 100644
> --- a/tools/perf/util/mmap.h
> +++ b/tools/perf/util/mmap.h
> @@ -6,6 +6,9 @@
> #include <linux/types.h>
> #include <asm/barrier.h>
> #include <stdbool.h>
> +#ifdef HAVE_AIO_SUPPORT
> +#include <aio.h>
> +#endif
> #include "auxtrace.h"
> #include "event.h"
>
> @@ -26,6 +29,12 @@ struct perf_mmap {
> bool overwrite;
> struct auxtrace_mmap auxtrace_mmap;
> char event_copy[PERF_SAMPLE_MAX_SIZE] __aligned(8);
> +#ifdef HAVE_AIO_SUPPORT
> + struct {
> + void *data;
> + struct aiocb cblock;
> + } aio;
> +#endif
> };
>
> /*
> @@ -57,7 +66,7 @@ enum bkw_mmap_state {
> };
>
> struct mmap_params {
> - int prot, mask;
> + int prot, mask, nr_cblocks;
> struct auxtrace_mmap_params auxtrace_mp;
> };
>
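
On Song's Makefile.feature question above: a feature probe in the style
of the existing tools/build/feature tests might look like the following
hypothetical test-posix-aio.c (a sketch only; the file name is assumed
and wiring it into the feature detection was not part of this thread):

#include <aio.h>

int main(void)
{
	struct aiocb cb = { .aio_fildes = -1 };

	/* only needs to compile and link against the Posix AIO API */
	return aio_error(&cb);
}

A probe like this would let the build report aio under
"Auto-detecting system features:" instead of keying off glibc alone.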

2018-11-02 16:11:58

by Alexey Budankov

[permalink] [raw]
Subject: Re: [PATCH v14 2/3]: perf record: enable asynchronous trace writing

Hi,

On 01.11.2018 21:42, Song Liu wrote:
> On Sun, Oct 14, 2018 at 11:45 PM Alexey Budankov
> <[email protected]> wrote:
>>
<SNIP>
>> + if (verbose > 0)
>> + pr_info("nr_cblocks: %d\n", rec->opts.nr_cblocks);
>> +
>
> nit: I guess we don't really need this pr_info()?

Diagnostic aid.

Thanks,
Alexey

>
> Other than this,
>
> Acked-by: Song Liu <[email protected]>
>
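
For context: with the patch applied, a verbose run such as the
hypothetical

  perf record -v --aio -e cycles -- ./workload

prints "nr_cblocks: 1" among its verbose output; that line is the
diagnostic aid being kept here.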

2018-11-02 16:13:52

by Alexey Budankov

[permalink] [raw]
Subject: Re: [PATCH v14 3/3]: perf record: extend trace writing to multi AIO

Hi,

On 01.11.2018 21:30, Song Liu wrote:
> On Sun, Oct 14, 2018 at 11:47 PM Alexey Budankov
> <[email protected]> wrote:
>>
<SNIP>
>>
>>
>> static int nr_cblocks_default = 1;
>>
>> static int record__aio_parse(const struct option *opt,
>> - const char *str __maybe_unused,
>> + const char *str,
>> int unset)
>> {
>> struct record_opts *opts = (struct record_opts *)opt->value;
>>
>> - if (unset)
>> + if (unset) {
>> opts->nr_cblocks = 0;
>> - else
>> - opts->nr_cblocks = nr_cblocks_default;
>> + } else {
>> + if (str)
>> + opts->nr_cblocks = strtol(str, NULL, 0);
>> + if (!opts->nr_cblocks)
>> + opts->nr_cblocks = nr_cblocks_default;
>> + }
>>
>> return 0;
>> }
>> #else /* HAVE_AIO_SUPPORT */
<SNIP>
>> @@ -1492,6 +1517,13 @@ static int perf_record_config(const char *var, const char *value, void *cb)
>> var = "call-graph.record-mode";
>> return perf_default_config(var, value, cb);
>> }
>> +#ifdef HAVE_AIO_SUPPORT
>> + if (!strcmp(var, "record.aio")) {
>> + rec->opts.nr_cblocks = strtol(value, NULL, 0);
>> + if (!rec->opts.nr_cblocks)
>> + rec->opts.nr_cblocks = nr_cblocks_default;
>> + }
>> +#endif
>>
>> return 0;
>> }
>> @@ -1884,8 +1916,8 @@ static struct option __record_options[] = {
>> OPT_BOOLEAN(0, "dry-run", &dry_run,
>> "Parse options then exit"),
>> #ifdef HAVE_AIO_SUPPORT
>> - OPT_CALLBACK_NOOPT(0, "aio", &record.opts,
>> - NULL, "Enable asynchronous trace writing mode",
>> + OPT_CALLBACK_OPTARG(0, "aio", &record.opts,
>> + &nr_cblocks_default, "n", "Use <n> control blocks in asynchronous trace writing mode (default: 1, max: 4)",
>> record__aio_parse),
>> #endif
>> OPT_END()
>> @@ -2080,6 +2112,8 @@ int cmd_record(int argc, const char **argv)
>> goto out;
>> }
>>
>> + if (rec->opts.nr_cblocks > 4)
>> + rec->opts.nr_cblocks = 4;
> nit: I feel this logic belongs in record__aio_parse(). And we should have
>
> static int nr_cblocks_max = 4;

This looks reasonable.

Thanks,
Alexey

>
> Other than this
>
> Acked-by: Song Liu <[email protected]>
>

2018-11-02 16:15:39

by Alexey Budankov

[permalink] [raw]
Subject: Re: [PATCH v14 1/3]: perf util: map data buffer for preserving collected data

Hi,

On 01.11.2018 21:47, Song Liu wrote:
> On Sun, Oct 14, 2018 at 11:44 PM Alexey Budankov
> <[email protected]> wrote:
>>
>>
>> The map->data buffer is used to preserve map->base profiling data
>> for writing to disk. The AIO map->cblock is used to queue the
>> corresponding map->data buffer for asynchronous writing.
>>
>> Signed-off-by: Alexey Budankov <[email protected]>
>
> Acked-by: Song Liu <[email protected]>

Thanks for review!

>
> Shall we add libaio to tools/build/Makefile.feature? It would be great to see
> whether aio is enabled in "Auto-detecting system features:".

Yes. This makes sense. There are targeted libc implementations that
currently lack Posix AIO API support required for this patch kit.

Thanks,
Alexey

>
> Thanks for implementing this!
>
> Song
>
<SNIP>