2021-04-30 07:03:52

by Adrian Hunter

Subject: [PATCH V2 00/12] perf intel-pt: Add more support for VMs

Hi

Here is V2, see changes list below.

Previously, limited support was added for timeless decoding of VM traces.

Primarily, these patches add support for decoding traces of virtual
machines using timestamps, assuming the VMX TSC Offset is unchanging
during perf record, and VMX TSC Scaling is not used.
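
The underlying relationship, assuming no TSC Scaling, is simply:

    guest TSC = host TSC + VMX TSC Offset

so, provided the offset stays constant for the duration of the recording,
host time can be recovered from each guest TSC packet by a single
subtraction.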

The first 3 patches add an itrace option to allow timeless decoding
even if timestamps have been recorded.

The next 2 patches add an option to perf inject to do "VM Time Correlation",
i.e. translate the guest timestamps in the trace to host timestamps.

There are 5 preparation patches, then the main changes in patch
"perf intel-pt: Add VM Time Correlation to decoder".

The final patch adds documentation which provides an example.


Changes in V2:

perf inject: Add facility to do in place update
Error out instead of prompt to continue

perf inject: Add --vm-time-correlation option
Tighten up parsing of "dry-run"

perf intel-pt: Add a tree for VMCS information
Rename lookup_vmcs to findnew_vmcs

perf intel-pt: Add VM Time Correlation to decoder
Rename lookup_vmcs to findnew_vmcs

perf intel-pt: Parse VM Time Correlation options and set up decoding
Rename lookup_vmcs to findnew_vmcs
Explain "dry-run" option
Adjust example for 'perf inject' changes


Adrian Hunter (12):
perf auxtrace: Add Z itrace option for timeless decoding
perf intel-pt: Move synth_opts initialization earlier
perf intel-pt: Support Z itrace option for timeless decoding
perf inject: Add facility to do in place update
perf inject: Add --vm-time-correlation option
perf auxtrace: Allow buffers to be mapped read / write
perf intel-pt: Let overlap detection handle VM timestamps
perf intel-pt: Add a tree for VMCS information
perf intel-pt: Pass the first timestamp to the decoder
perf intel-pt: Better 7-byte timestamp wraparound logic
perf intel-pt: Add VM Time Correlation to decoder
perf intel-pt: Parse VM Time Correlation options and set up decoding

tools/perf/Documentation/itrace.txt | 1 +
tools/perf/Documentation/perf-inject.txt | 10 +
tools/perf/Documentation/perf-intel-pt.txt | 114 +++-
tools/perf/builtin-inject.c | 73 ++-
tools/perf/util/auxtrace.c | 8 +-
tools/perf/util/auxtrace.h | 14 +-
tools/perf/util/data.c | 3 +-
tools/perf/util/data.h | 1 +
tools/perf/util/header.c | 5 +
.../perf/util/intel-pt-decoder/intel-pt-decoder.c | 723 ++++++++++++++++++++-
.../perf/util/intel-pt-decoder/intel-pt-decoder.h | 20 +-
tools/perf/util/intel-pt-decoder/intel-pt-log.h | 5 +
tools/perf/util/intel-pt.c | 224 ++++++-
tools/perf/util/session.c | 6 +-
14 files changed, 1169 insertions(+), 38 deletions(-)


Regards
Adrian


2021-04-30 07:04:02

by Adrian Hunter

Subject: [PATCH V2 03/12] perf intel-pt: Support Z itrace option for timeless decoding

Correlating virtual machine TSC packets is not supported at present, so
instead support the Z itrace option.
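
For example (illustration only; the letters after Z are just the documented
defaults), a VM trace with problematic timestamps could be decoded with:

    $ perf script --itrace=Zibxwpe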

Signed-off-by: Adrian Hunter <[email protected]>
---
tools/perf/Documentation/perf-intel-pt.txt | 5 +++++
tools/perf/util/intel-pt.c | 2 +-
2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/tools/perf/Documentation/perf-intel-pt.txt b/tools/perf/Documentation/perf-intel-pt.txt
index 1dcec73c910c..33df7898c378 100644
--- a/tools/perf/Documentation/perf-intel-pt.txt
+++ b/tools/perf/Documentation/perf-intel-pt.txt
@@ -869,6 +869,7 @@ The letters are:
L synthesize last branch entries on existing event records
s skip initial number of events
q quicker (less detailed) decoding
+ Z prefer to ignore timestamps (so-called "timeless" decoding)

"Instructions" events look like they were recorded by "perf record -e
instructions".
@@ -1062,6 +1063,10 @@ What *will* be decoded with the qq option:

- instruction pointer associated with PSB packets

+The Z option is equivalent to having recorded a trace without TSC
+(i.e. config term tsc=0). It can be useful to avoid timestamp issues when
+decoding a trace of a virtual machine.
+

dump option
~~~~~~~~~~~
diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index eef8970d98fe..7bee969eb335 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -899,7 +899,7 @@ static bool intel_pt_timeless_decoding(struct intel_pt *pt)
bool timeless_decoding = true;
u64 config;

- if (!pt->tsc_bit || !pt->cap_user_time_zero)
+ if (!pt->tsc_bit || !pt->cap_user_time_zero || pt->synth_opts.timeless_decoding)
return true;

evlist__for_each_entry(pt->session->evlist, evsel) {
--
2.25.1

2021-04-30 07:04:02

by Adrian Hunter

Subject: [PATCH V2 01/12] perf auxtrace: Add Z itrace option for timeless decoding

Issues correlating timestamps can be avoided with timeless decoding. Add
an option for that, so that timeless decoding can be used even when
timestamps are present.

Signed-off-by: Adrian Hunter <[email protected]>
---
tools/perf/Documentation/itrace.txt | 1 +
tools/perf/util/auxtrace.c | 3 +++
tools/perf/util/auxtrace.h | 2 ++
3 files changed, 6 insertions(+)

diff --git a/tools/perf/Documentation/itrace.txt b/tools/perf/Documentation/itrace.txt
index 0f1005209a2b..2d586fe5e4c5 100644
--- a/tools/perf/Documentation/itrace.txt
+++ b/tools/perf/Documentation/itrace.txt
@@ -20,6 +20,7 @@
L synthesize last branch entries on existing event records
s skip initial number of events
q quicker (less detailed) decoding
+ Z prefer to ignore timestamps (so-called "timeless" decoding)

The default is all events i.e. the same as --itrace=ibxwpe,
except for perf script where it is --itrace=ce
diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
index 5b6ccb90b397..48b88e2bca1e 100644
--- a/tools/perf/util/auxtrace.c
+++ b/tools/perf/util/auxtrace.c
@@ -1569,6 +1569,9 @@ int itrace_parse_synth_opts(const struct option *opt, const char *str,
case 'q':
synth_opts->quick += 1;
break;
+ case 'Z':
+ synth_opts->timeless_decoding = true;
+ break;
case ' ':
case ',':
break;
diff --git a/tools/perf/util/auxtrace.h b/tools/perf/util/auxtrace.h
index a4fbb33b7245..59c3c05384a4 100644
--- a/tools/perf/util/auxtrace.h
+++ b/tools/perf/util/auxtrace.h
@@ -89,6 +89,7 @@ enum itrace_period_type {
* @tlb: whether to synthesize TLB events
* @remote_access: whether to synthesize remote access events
* @mem: whether to synthesize memory events
+ * @timeless_decoding: prefer "timeless" decoding i.e. ignore timestamps
* @callchain_sz: maximum callchain size
* @last_branch_sz: branch context size
* @period: 'instructions' events period
@@ -128,6 +129,7 @@ struct itrace_synth_opts {
bool tlb;
bool remote_access;
bool mem;
+ bool timeless_decoding;
unsigned int callchain_sz;
unsigned int last_branch_sz;
unsigned long long period;
--
2.25.1

2021-04-30 07:04:22

by Adrian Hunter

Subject: [PATCH V2 02/12] perf intel-pt: Move synth_opts initialization earlier

Move synth_opts initialization earlier, so it can be used earlier.

Signed-off-by: Adrian Hunter <[email protected]>
---
tools/perf/util/intel-pt.c | 29 ++++++++++++++---------------
1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index 8658d42ce57a..eef8970d98fe 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -3479,6 +3479,20 @@ int intel_pt_process_auxtrace_info(union perf_event *event,

intel_pt_log_set_name(INTEL_PT_PMU_NAME);

+ if (session->itrace_synth_opts->set) {
+ pt->synth_opts = *session->itrace_synth_opts;
+ } else {
+ struct itrace_synth_opts *opts = session->itrace_synth_opts;
+
+ itrace_synth_opts__set_default(&pt->synth_opts, opts->default_no_sample);
+ if (!opts->default_no_sample && !opts->inject) {
+ pt->synth_opts.branches = false;
+ pt->synth_opts.callchain = true;
+ pt->synth_opts.add_callchain = true;
+ }
+ pt->synth_opts.thread_stack = opts->thread_stack;
+ }
+
pt->session = session;
pt->machine = &session->machines.host; /* No kvm support */
pt->auxtrace_type = auxtrace_info->type;
@@ -3609,21 +3623,6 @@ int intel_pt_process_auxtrace_info(union perf_event *event,
goto err_delete_thread;
}

- if (session->itrace_synth_opts->set) {
- pt->synth_opts = *session->itrace_synth_opts;
- } else {
- itrace_synth_opts__set_default(&pt->synth_opts,
- session->itrace_synth_opts->default_no_sample);
- if (!session->itrace_synth_opts->default_no_sample &&
- !session->itrace_synth_opts->inject) {
- pt->synth_opts.branches = false;
- pt->synth_opts.callchain = true;
- pt->synth_opts.add_callchain = true;
- }
- pt->synth_opts.thread_stack =
- session->itrace_synth_opts->thread_stack;
- }
-
if (pt->synth_opts.log)
intel_pt_log_enable();

--
2.25.1

2021-04-30 07:04:36

by Adrian Hunter

Subject: [PATCH V2 06/12] perf auxtrace: Allow buffers to be mapped read / write

To support in-place update, allow buffers to be mapped read / write.
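
A minimal sketch of how a caller might use the new interface (illustration
only, error handling elided):

    /* Map the buffer writably so that in-place fixes reach the file */
    void *data = auxtrace_buffer__get_data_rw(buffer, fd, /*rw=*/true);

    if (data) {
            /* ... rewrite packet payloads; the mapping is MAP_SHARED,
             * so the perf.data file itself is updated ... */
            auxtrace_buffer__put_data(buffer);
    }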

Signed-off-by: Adrian Hunter <[email protected]>
---
tools/perf/util/auxtrace.c | 5 +++--
tools/perf/util/auxtrace.h | 6 +++++-
2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/tools/perf/util/auxtrace.c b/tools/perf/util/auxtrace.c
index 48b88e2bca1e..ade655d58780 100644
--- a/tools/perf/util/auxtrace.c
+++ b/tools/perf/util/auxtrace.c
@@ -1120,8 +1120,9 @@ int auxtrace_queue_data(struct perf_session *session, bool samples, bool events)
auxtrace_queue_data_cb, &qd);
}

-void *auxtrace_buffer__get_data(struct auxtrace_buffer *buffer, int fd)
+void *auxtrace_buffer__get_data_rw(struct auxtrace_buffer *buffer, int fd, bool rw)
{
+ int prot = rw ? PROT_READ | PROT_WRITE : PROT_READ;
size_t adj = buffer->data_offset & (page_size - 1);
size_t size = buffer->size + adj;
off_t file_offset = buffer->data_offset - adj;
@@ -1130,7 +1131,7 @@ void *auxtrace_buffer__get_data(struct auxtrace_buffer *buffer, int fd)
if (buffer->data)
return buffer->data;

- addr = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, file_offset);
+ addr = mmap(NULL, size, prot, MAP_SHARED, fd, file_offset);
if (addr == MAP_FAILED)
return NULL;

diff --git a/tools/perf/util/auxtrace.h b/tools/perf/util/auxtrace.h
index 9ac2ac1bd793..472c0973b1f1 100644
--- a/tools/perf/util/auxtrace.h
+++ b/tools/perf/util/auxtrace.h
@@ -533,7 +533,11 @@ int auxtrace_queue_data(struct perf_session *session, bool samples,
bool events);
struct auxtrace_buffer *auxtrace_buffer__next(struct auxtrace_queue *queue,
struct auxtrace_buffer *buffer);
-void *auxtrace_buffer__get_data(struct auxtrace_buffer *buffer, int fd);
+void *auxtrace_buffer__get_data_rw(struct auxtrace_buffer *buffer, int fd, bool rw);
+static inline void *auxtrace_buffer__get_data(struct auxtrace_buffer *buffer, int fd)
+{
+ return auxtrace_buffer__get_data_rw(buffer, fd, false);
+}
void auxtrace_buffer__put_data(struct auxtrace_buffer *buffer);
void auxtrace_buffer__drop_data(struct auxtrace_buffer *buffer);
void auxtrace_buffer__free(struct auxtrace_buffer *buffer);
--
2.25.1

2021-04-30 07:04:36

by Adrian Hunter

Subject: [PATCH V2 05/12] perf inject: Add --vm-time-correlation option

Intel PT timestamps are affected by virtualization. Add a new option
that will allow the Intel PT decoder to correlate the timestamps and
translate the virtual machine timestamps to host timestamps.

The advantages of making this a separate step, rather than a part of
normal decoding, are that it is simpler to implement and that it needs to
be done only once.

This patch adds only the option. Later patches add Intel PT support.
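
For illustration, typical invocations (taken from the documentation added
later in this series) look like:

    $ perf inject -i perf.data.kvm --vm-time-correlation=dry-run
    $ perf inject -i perf.data.kvm --vm-time-correlation=0xffffe42722c64c41 --force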

Signed-off-by: Adrian Hunter <[email protected]>
---
tools/perf/Documentation/perf-inject.txt | 10 ++++++
tools/perf/builtin-inject.c | 44 ++++++++++++++++++++++++
tools/perf/util/auxtrace.h | 6 ++++
3 files changed, 60 insertions(+)

diff --git a/tools/perf/Documentation/perf-inject.txt b/tools/perf/Documentation/perf-inject.txt
index a8eccff21281..91108fe3ad5f 100644
--- a/tools/perf/Documentation/perf-inject.txt
+++ b/tools/perf/Documentation/perf-inject.txt
@@ -68,6 +68,16 @@ include::itrace.txt[]
--force::
Don't complain, do it.

+--vm-time-correlation[=OPTIONS]::
+ Some architectures may capture AUX area data which contains timestamps
+ affected by virtualization. This option will update those timestamps
+ in place, to correlate with host timestamps. The in-place update means
+ that an output file is not specified, and instead the input file is
+ modified. The options are architecture specific, except that they may
+ start with "dry-run" which will cause the file to be processed but
+ without updating it. Currently this option is supported only by
+ Intel PT, refer linkperf:perf-intel-pt[1]
+
SEE ALSO
--------
linkperf:perf-record[1], linkperf:perf-report[1], linkperf:perf-archive[1],
diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
index 6007f1844b01..102cafb0c0b3 100644
--- a/tools/perf/builtin-inject.c
+++ b/tools/perf/builtin-inject.c
@@ -31,6 +31,7 @@
#include <uapi/linux/mman.h> /* To get things like MAP_HUGETLB even on older libc headers */

#include <linux/list.h>
+#include <linux/string.h>
#include <errno.h>
#include <signal.h>

@@ -698,6 +699,36 @@ static void strip_init(struct perf_inject *inject)
evsel->handler = drop_sample;
}

+static int parse_vm_time_correlation(const struct option *opt, const char *str, int unset)
+{
+ struct perf_inject *inject = opt->value;
+ const char *args;
+ char *dry_run;
+
+ if (unset)
+ return 0;
+
+ inject->itrace_synth_opts.set = true;
+ inject->itrace_synth_opts.vm_time_correlation = true;
+ inject->in_place_update = true;
+
+ if (!str)
+ return 0;
+
+ dry_run = skip_spaces(str);
+ if (!strncmp(dry_run, "dry-run", strlen("dry-run"))) {
+ inject->itrace_synth_opts.vm_tm_corr_dry_run = true;
+ inject->in_place_update_dry_run = true;
+ args = dry_run + strlen("dry-run");
+ } else {
+ args = str;
+ }
+
+ inject->itrace_synth_opts.vm_tm_corr_args = strdup(args);
+
+ return inject->itrace_synth_opts.vm_tm_corr_args ? 0 : -ENOMEM;
+}
+
static int __cmd_inject(struct perf_inject *inject)
{
int ret = -EINVAL;
@@ -739,6 +770,15 @@ static int __cmd_inject(struct perf_inject *inject)
else if (!strncmp(name, "sched:sched_stat_", 17))
evsel->handler = perf_inject__sched_stat;
}
+ } else if (inject->itrace_synth_opts.vm_time_correlation) {
+ session->itrace_synth_opts = &inject->itrace_synth_opts;
+ memset(&inject->tool, 0, sizeof(inject->tool));
+ inject->tool.id_index = perf_event__process_id_index;
+ inject->tool.auxtrace_info = perf_event__process_auxtrace_info;
+ inject->tool.auxtrace = perf_event__process_auxtrace;
+ inject->tool.auxtrace_error = perf_event__process_auxtrace_error;
+ inject->tool.ordered_events = true;
+ inject->tool.ordering_requires_timestamps = true;
} else if (inject->itrace_synth_opts.set) {
session->itrace_synth_opts = &inject->itrace_synth_opts;
inject->itrace_synth_opts.inject = true;
@@ -880,6 +920,9 @@ int cmd_inject(int argc, const char **argv)
itrace_parse_synth_opts),
OPT_BOOLEAN(0, "strip", &inject.strip,
"strip non-synthesized events (use with --itrace)"),
+ OPT_CALLBACK_OPTARG(0, "vm-time-correlation", &inject, NULL, "opts",
+ "correlate time between VM guests and the host",
+ parse_vm_time_correlation),
OPT_END()
};
const char * const inject_usage[] = {
@@ -968,5 +1011,6 @@ int cmd_inject(int argc, const char **argv)
out_delete:
zstd_fini(&(inject.session->zstd_data));
perf_session__delete(inject.session);
+ free(inject.itrace_synth_opts.vm_tm_corr_args);
return ret;
}
diff --git a/tools/perf/util/auxtrace.h b/tools/perf/util/auxtrace.h
index 59c3c05384a4..9ac2ac1bd793 100644
--- a/tools/perf/util/auxtrace.h
+++ b/tools/perf/util/auxtrace.h
@@ -90,6 +90,9 @@ enum itrace_period_type {
* @remote_access: whether to synthesize remote access events
* @mem: whether to synthesize memory events
* @timeless_decoding: prefer "timeless" decoding i.e. ignore timestamps
+ * @vm_time_correlation: perform VM Time Correlation
+ * @vm_tm_corr_dry_run: VM Time Correlation dry-run
+ * @vm_tm_corr_args: VM Time Correlation implementation-specific arguments
* @callchain_sz: maximum callchain size
* @last_branch_sz: branch context size
* @period: 'instructions' events period
@@ -130,6 +133,9 @@ struct itrace_synth_opts {
bool remote_access;
bool mem;
bool timeless_decoding;
+ bool vm_time_correlation;
+ bool vm_tm_corr_dry_run;
+ char *vm_tm_corr_args;
unsigned int callchain_sz;
unsigned int last_branch_sz;
unsigned long long period;
--
2.25.1

2021-04-30 07:04:42

by Adrian Hunter

Subject: [PATCH V2 07/12] perf intel-pt: Let overlap detection handle VM timestamps

Intel PT timestamps are affected by virtualization. While TSC packets can
still be considered unique, the TSC values are no longer necessarily in
order. Adjust the overlap-detection algorithm accordingly.
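
To illustrate with hypothetical values: if the guest runs with a TSC Offset
of 0x100000000, a host TSC packet of 0x500000100 can be followed by a guest
TSC packet reading 0x600000200, followed again by a host packet of
0x500000300 - unique values, but no longer monotonic, so tsc_a < tsc_b can
no longer be taken to mean that the buffers do not overlap.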

Signed-off-by: Adrian Hunter <[email protected]>
---
tools/perf/util/intel-pt-decoder/intel-pt-decoder.c | 12 ++++++++----
tools/perf/util/intel-pt-decoder/intel-pt-decoder.h | 3 ++-
tools/perf/util/intel-pt.c | 10 +++++++++-
3 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
index 8c59677bee13..8f916f90205e 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
@@ -3227,6 +3227,7 @@ static unsigned char *adj_for_padding(unsigned char *buf_b,
* @len_b: size of second buffer
* @consecutive: returns true if there is data in buf_b that is consecutive
* to buf_a
+ * @ooo_tsc: out-of-order TSC due to VM TSC offset / scaling
*
* If the trace contains TSC we can look at the last TSC of @buf_a and the
* first TSC of @buf_b in order to determine if the buffers overlap, and then
@@ -3239,7 +3240,8 @@ static unsigned char *adj_for_padding(unsigned char *buf_b,
static unsigned char *intel_pt_find_overlap_tsc(unsigned char *buf_a,
size_t len_a,
unsigned char *buf_b,
- size_t len_b, bool *consecutive)
+ size_t len_b, bool *consecutive,
+ bool ooo_tsc)
{
uint64_t tsc_a, tsc_b;
unsigned char *p;
@@ -3274,7 +3276,7 @@ static unsigned char *intel_pt_find_overlap_tsc(unsigned char *buf_a,
start = buf_b + len_b - (rem_b - rem_a);
return adj_for_padding(start, buf_a, len_a);
}
- if (cmp < 0)
+ if (cmp < 0 && !ooo_tsc)
return buf_b; /* tsc_a < tsc_b => no overlap */
}

@@ -3292,6 +3294,7 @@ static unsigned char *intel_pt_find_overlap_tsc(unsigned char *buf_a,
* @have_tsc: can use TSC packets to detect overlap
* @consecutive: returns true if there is data in buf_b that is consecutive
* to buf_a
+ * @ooo_tsc: out-of-order TSC due to VM TSC offset / scaling
*
* When trace samples or snapshots are recorded there is the possibility that
* the data overlaps. Note that, for the purposes of decoding, data is only
@@ -3302,7 +3305,8 @@ static unsigned char *intel_pt_find_overlap_tsc(unsigned char *buf_a,
*/
unsigned char *intel_pt_find_overlap(unsigned char *buf_a, size_t len_a,
unsigned char *buf_b, size_t len_b,
- bool have_tsc, bool *consecutive)
+ bool have_tsc, bool *consecutive,
+ bool ooo_tsc)
{
unsigned char *found;

@@ -3315,7 +3319,7 @@ unsigned char *intel_pt_find_overlap(unsigned char *buf_a, size_t len_a,

if (have_tsc) {
found = intel_pt_find_overlap_tsc(buf_a, len_a, buf_b, len_b,
- consecutive);
+ consecutive, ooo_tsc);
if (found)
return found;
}
diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
index d9e62a7f6f0e..bebdb7d37b39 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
@@ -269,7 +269,8 @@ int intel_pt_fast_forward(struct intel_pt_decoder *decoder, uint64_t timestamp);

unsigned char *intel_pt_find_overlap(unsigned char *buf_a, size_t len_a,
unsigned char *buf_b, size_t len_b,
- bool have_tsc, bool *consecutive);
+ bool have_tsc, bool *consecutive,
+ bool ooo_tsc);

int intel_pt__strerror(int code, char *buf, size_t buflen);

diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index 7bee969eb335..a494d47aa1ad 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -278,9 +278,17 @@ static int intel_pt_do_fix_overlap(struct intel_pt *pt, struct auxtrace_buffer *
void *start;

start = intel_pt_find_overlap(a->data, a->size, b->data, b->size,
- pt->have_tsc, &consecutive);
+ pt->have_tsc, &consecutive,
+ pt->synth_opts.vm_time_correlation);
if (!start)
return -EINVAL;
+ /*
+ * In the case of vm_time_correlation, the overlap might contain TSC
+ * packets that will not be fixed, and that will then no longer work for
+ * overlap detection. Avoid that by zeroing out the overlap.
+ */
+ if (pt->synth_opts.vm_time_correlation)
+ memset(b->data, 0, start - b->data);
b->use_size = b->data + b->size - start;
b->use_data = start;
if (b->use_size && consecutive)
--
2.25.1

2021-04-30 07:05:15

by Adrian Hunter

Subject: [PATCH V2 11/12] perf intel-pt: Add VM Time Correlation to decoder

VM Time Correlation means determining whether each TSC packet belongs to a VM
Guest or to the Host. When the trace is "in context", that is indicated by
the NR flag in the PIP packet. However, when tracing kernel-only,
userspace-only, or using address filters, the trace can be "out of context",
in which case timing packets are produced but not PIP packets.

Nevertheless, it is very unlikely the VM Guest timestamps will be in
the same range as the Host timestamps. Host time ranges are established
by a starting side-band event timestamp, and subsequently by the buffer
timestamp, written when the buffer is copied to the perf.data file.

This patch supports updating the VM Guest timestamp packets, assuming an
unchanging (during perf record) VMX TSC Offset and no VMX TSC scaling.

Furthermore, it is possible to determine what the VMX TSC Offset is,
although not necessarily at the start. The dry-run option lets that
information be determined so that the user can pass it to a subsequent
run. For more detail, refer to the example in the Intel PT documentation
in a subsequent patch.

VM Time Correlation is also performed on the TSC value in PEBS-via-PT
records.
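
The core of the translation, reduced to a sketch (it mirrors
intel_pt_translate_vm_tsc() in the patch below, and is not a drop-in
implementation):

    /* VMX adds the TSC Offset, so subtract to get back to the host TSC.
     * TSC packets carry only 7 bytes, so mask accordingly. */
    host_tsc = (guest_tsc - vmcs_info->tsc_offset) & 0xffffffffffffffULL;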

Signed-off-by: Adrian Hunter <[email protected]>
---
.../util/intel-pt-decoder/intel-pt-decoder.c | 689 ++++++++++++++++++
.../util/intel-pt-decoder/intel-pt-decoder.h | 2 +
.../perf/util/intel-pt-decoder/intel-pt-log.h | 5 +
3 files changed, 696 insertions(+)

diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
index c4044bc2fb86..0db5f948801f 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
@@ -35,6 +35,10 @@

#define BIT63 (((uint64_t)1 << 63))

+#define SEVEN_BYTES 0xffffffffffffffULL
+
+#define NO_VMCS 0xffffffffffULL
+
#define INTEL_PT_RETURN 1

/* Maximum number of loops with no packets consumed i.e. stuck in a loop */
@@ -51,6 +55,11 @@ struct intel_pt_stack {
int pos;
};

+enum intel_pt_p_once {
+ INTEL_PT_PRT_ONCE_UNK_VMCS,
+ INTEL_PT_PRT_ONCE_ERANGE,
+};
+
enum intel_pt_pkt_state {
INTEL_PT_STATE_NO_PSB,
INTEL_PT_STATE_NO_IP,
@@ -64,6 +73,7 @@ enum intel_pt_pkt_state {
INTEL_PT_STATE_FUP_NO_TIP,
INTEL_PT_STATE_FUP_IN_PSB,
INTEL_PT_STATE_RESAMPLE,
+ INTEL_PT_STATE_VM_TIME_CORRELATION,
};

static inline bool intel_pt_sample_time(enum intel_pt_pkt_state pkt_state)
@@ -75,6 +85,7 @@ static inline bool intel_pt_sample_time(enum intel_pt_pkt_state pkt_state)
case INTEL_PT_STATE_IN_SYNC:
case INTEL_PT_STATE_TNT_CONT:
case INTEL_PT_STATE_RESAMPLE:
+ case INTEL_PT_STATE_VM_TIME_CORRELATION:
return true;
case INTEL_PT_STATE_TNT:
case INTEL_PT_STATE_TIP:
@@ -123,6 +134,11 @@ struct intel_pt_decoder {
bool in_psb;
bool hop;
bool leap;
+ bool vm_time_correlation;
+ bool vm_tm_corr_dry_run;
+ bool vm_tm_corr_reliable;
+ bool vm_tm_corr_same_buf;
+ bool vm_tm_corr_continuous;
bool nr;
bool next_nr;
enum intel_pt_param_flags flags;
@@ -141,6 +157,10 @@ struct intel_pt_decoder {
uint64_t cycle_cnt;
uint64_t cyc_ref_timestamp;
uint64_t first_timestamp;
+ uint64_t last_reliable_timestamp;
+ uint64_t vmcs;
+ uint64_t print_once;
+ uint64_t last_ctc;
uint32_t last_mtc;
uint32_t tsc_ctc_ratio_n;
uint32_t tsc_ctc_ratio_d;
@@ -219,6 +239,31 @@ static uint64_t intel_pt_lower_power_of_2(uint64_t x)
return x << i;
}

+__printf(1, 2)
+static void p_log(const char *fmt, ...)
+{
+ char buf[512];
+ va_list args;
+
+ va_start(args, fmt);
+ vsnprintf(buf, sizeof(buf), fmt, args);
+ va_end(args);
+
+ fprintf(stderr, "%s\n", buf);
+ intel_pt_log("%s\n", buf);
+}
+
+static bool intel_pt_print_once(struct intel_pt_decoder *decoder,
+ enum intel_pt_p_once id)
+{
+ uint64_t bit = 1ULL << id;
+
+ if (decoder->print_once & bit)
+ return false;
+ decoder->print_once |= bit;
+ return true;
+}
+
static uint64_t intel_pt_cyc_threshold(uint64_t ctl)
{
if (!(ctl & INTEL_PT_CYC_ENABLE))
@@ -266,7 +311,10 @@ struct intel_pt_decoder *intel_pt_decoder_new(struct intel_pt_params *params)
decoder->branch_enable = params->branch_enable;
decoder->hop = params->quick >= 1;
decoder->leap = params->quick >= 2;
+ decoder->vm_time_correlation = params->vm_time_correlation;
+ decoder->vm_tm_corr_dry_run = params->vm_tm_corr_dry_run;
decoder->first_timestamp = params->first_timestamp;
+ decoder->last_reliable_timestamp = params->first_timestamp;

decoder->flags = params->flags;

@@ -587,6 +635,7 @@ static int intel_pt_get_data(struct intel_pt_decoder *decoder, bool reposition)
intel_pt_reposition(decoder);
decoder->ref_timestamp = buffer.ref_timestamp;
decoder->state.trace_nr = buffer.trace_nr;
+ decoder->vm_tm_corr_same_buf = false;
intel_pt_log("Reference timestamp 0x%" PRIx64 "\n",
decoder->ref_timestamp);
return -ENOLINK;
@@ -1475,9 +1524,24 @@ static uint64_t intel_pt_8b_tsc(uint64_t timestamp, uint64_t ref_timestamp)
return timestamp;
}

+/* For use only when decoder->vm_time_correlation is true */
+static bool intel_pt_time_in_range(struct intel_pt_decoder *decoder,
+ uint64_t timestamp)
+{
+ uint64_t max_timestamp = decoder->buf_timestamp;
+
+ if (!max_timestamp) {
+ max_timestamp = decoder->last_reliable_timestamp +
+ 0x400000000ULL;
+ }
+ return timestamp >= decoder->last_reliable_timestamp &&
+ timestamp < max_timestamp;
+}
+
static void intel_pt_calc_tsc_timestamp(struct intel_pt_decoder *decoder)
{
uint64_t timestamp;
+ bool bad = false;

decoder->have_tma = false;

@@ -1507,8 +1571,13 @@ static void intel_pt_calc_tsc_timestamp(struct intel_pt_decoder *decoder)
} else {
intel_pt_log_to("Suppressing bad timestamp", timestamp);
timestamp = decoder->timestamp;
+ bad = true;
}
}
+ if (decoder->vm_time_correlation &&
+ (bad || !intel_pt_time_in_range(decoder, timestamp)) &&
+ intel_pt_print_once(decoder, INTEL_PT_PRT_ONCE_ERANGE))
+ p_log("Timestamp out of range");
decoder->timestamp = timestamp;
decoder->timestamp_insn_cnt = 0;
}
@@ -1585,6 +1654,7 @@ static void intel_pt_calc_tma(struct intel_pt_decoder *decoder)
intel_pt_mtc_cyc_cnt_upd(decoder);

decoder->last_mtc = (ctc >> decoder->mtc_shift) & 0xff;
+ decoder->last_ctc = ctc - ctc_rem;
decoder->ctc_timestamp = decoder->tsc_timestamp - fc;
if (decoder->tsc_ctc_mult) {
decoder->ctc_timestamp -= ctc_rem * decoder->tsc_ctc_mult;
@@ -1969,6 +2039,613 @@ static int intel_pt_resample(struct intel_pt_decoder *decoder)
return 0;
}

+struct intel_pt_vm_tsc_info {
+ struct intel_pt_pkt pip_packet;
+ struct intel_pt_pkt vmcs_packet;
+ struct intel_pt_pkt tma_packet;
+ bool tsc, pip, vmcs, tma, psbend;
+ uint64_t ctc_delta;
+ uint64_t last_ctc;
+ int max_lookahead;
+};
+
+/* Lookahead and get the PIP, VMCS and TMA packets from PSB+ */
+static int intel_pt_vm_psb_lookahead_cb(struct intel_pt_pkt_info *pkt_info)
+{
+ struct intel_pt_vm_tsc_info *data = pkt_info->data;
+
+ switch (pkt_info->packet.type) {
+ case INTEL_PT_PAD:
+ case INTEL_PT_MNT:
+ case INTEL_PT_MODE_EXEC:
+ case INTEL_PT_MODE_TSX:
+ case INTEL_PT_MTC:
+ case INTEL_PT_FUP:
+ case INTEL_PT_CYC:
+ case INTEL_PT_CBR:
+ break;
+
+ case INTEL_PT_TSC:
+ data->tsc = true;
+ break;
+
+ case INTEL_PT_TMA:
+ data->tma_packet = pkt_info->packet;
+ data->tma = true;
+ break;
+
+ case INTEL_PT_PIP:
+ data->pip_packet = pkt_info->packet;
+ data->pip = true;
+ break;
+
+ case INTEL_PT_VMCS:
+ data->vmcs_packet = pkt_info->packet;
+ data->vmcs = true;
+ break;
+
+ case INTEL_PT_PSBEND:
+ data->psbend = true;
+ return 1;
+
+ case INTEL_PT_TIP_PGE:
+ case INTEL_PT_PTWRITE:
+ case INTEL_PT_PTWRITE_IP:
+ case INTEL_PT_EXSTOP:
+ case INTEL_PT_EXSTOP_IP:
+ case INTEL_PT_MWAIT:
+ case INTEL_PT_PWRE:
+ case INTEL_PT_PWRX:
+ case INTEL_PT_BBP:
+ case INTEL_PT_BIP:
+ case INTEL_PT_BEP:
+ case INTEL_PT_BEP_IP:
+ case INTEL_PT_OVF:
+ case INTEL_PT_BAD:
+ case INTEL_PT_TNT:
+ case INTEL_PT_TIP_PGD:
+ case INTEL_PT_TIP:
+ case INTEL_PT_PSB:
+ case INTEL_PT_TRACESTOP:
+ default:
+ return 1;
+ }
+
+ return 0;
+}
+
+struct intel_pt_ovf_fup_info {
+ int max_lookahead;
+ bool found;
+};
+
+/* Lookahead to detect a FUP packet after OVF */
+static int intel_pt_ovf_fup_lookahead_cb(struct intel_pt_pkt_info *pkt_info)
+{
+ struct intel_pt_ovf_fup_info *data = pkt_info->data;
+
+ if (pkt_info->packet.type == INTEL_PT_CYC ||
+ pkt_info->packet.type == INTEL_PT_MTC ||
+ pkt_info->packet.type == INTEL_PT_TSC)
+ return !--(data->max_lookahead);
+ data->found = pkt_info->packet.type == INTEL_PT_FUP;
+ return 1;
+}
+
+static bool intel_pt_ovf_fup_lookahead(struct intel_pt_decoder *decoder)
+{
+ struct intel_pt_ovf_fup_info data = {
+ .max_lookahead = 16,
+ .found = false,
+ };
+
+ intel_pt_pkt_lookahead(decoder, intel_pt_ovf_fup_lookahead_cb, &data);
+ return data.found;
+}
+
+/* Lookahead and get the TMA packet after TSC */
+static int intel_pt_tma_lookahead_cb(struct intel_pt_pkt_info *pkt_info)
+{
+ struct intel_pt_vm_tsc_info *data = pkt_info->data;
+
+ if (pkt_info->packet.type == INTEL_PT_CYC ||
+ pkt_info->packet.type == INTEL_PT_MTC)
+ return !--(data->max_lookahead);
+
+ if (pkt_info->packet.type == INTEL_PT_TMA) {
+ data->tma_packet = pkt_info->packet;
+ data->tma = true;
+ }
+ return 1;
+}
+
+static uint64_t intel_pt_ctc_to_tsc(struct intel_pt_decoder *decoder, uint64_t ctc)
+{
+ if (decoder->tsc_ctc_mult)
+ return ctc * decoder->tsc_ctc_mult;
+ else
+ return multdiv(ctc, decoder->tsc_ctc_ratio_n, decoder->tsc_ctc_ratio_d);
+}
+
+static uint64_t intel_pt_calc_expected_tsc(struct intel_pt_decoder *decoder,
+ uint32_t ctc,
+ uint32_t fc,
+ uint64_t last_ctc_timestamp,
+ uint64_t ctc_delta,
+ uint32_t last_ctc)
+{
+ /* Number of CTC ticks from last_ctc_timestamp to last_mtc */
+ uint64_t last_mtc_ctc = last_ctc + ctc_delta;
+ /*
+ * Number of CTC ticks from there until current TMA packet. We would
+ * expect last_mtc_ctc to be before ctc, but the TSC packet can slip
+ * past an MTC, so a sign-extended value is used.
+ */
+ uint64_t delta = (int16_t)((uint16_t)ctc - (uint16_t)last_mtc_ctc);
+ /* Total CTC ticks from last_ctc_timestamp to current TMA packet */
+ uint64_t new_ctc_delta = ctc_delta + delta;
+ uint64_t expected_tsc;
+
+ /*
+ * Convert CTC ticks to TSC ticks, add the starting point
+ * (last_ctc_timestamp) and the fast counter from the TMA packet.
+ */
+ expected_tsc = last_ctc_timestamp + intel_pt_ctc_to_tsc(decoder, new_ctc_delta) + fc;
+
+ if (intel_pt_enable_logging) {
+ intel_pt_log_x64(last_mtc_ctc);
+ intel_pt_log_x32(last_ctc);
+ intel_pt_log_x64(ctc_delta);
+ intel_pt_log_x64(delta);
+ intel_pt_log_x32(ctc);
+ intel_pt_log_x64(new_ctc_delta);
+ intel_pt_log_x64(last_ctc_timestamp);
+ intel_pt_log_x32(fc);
+ intel_pt_log_x64(intel_pt_ctc_to_tsc(decoder, new_ctc_delta));
+ intel_pt_log_x64(expected_tsc);
+ }
+
+ return expected_tsc;
+}
+
+static uint64_t intel_pt_expected_tsc(struct intel_pt_decoder *decoder,
+ struct intel_pt_vm_tsc_info *data)
+{
+ uint32_t ctc = data->tma_packet.payload;
+ uint32_t fc = data->tma_packet.count;
+
+ return intel_pt_calc_expected_tsc(decoder, ctc, fc,
+ decoder->ctc_timestamp,
+ data->ctc_delta, data->last_ctc);
+}
+
+static void intel_pt_translate_vm_tsc(struct intel_pt_decoder *decoder,
+ struct intel_pt_vmcs_info *vmcs_info)
+{
+ uint64_t payload = decoder->packet.payload;
+
+ /* VMX adds the TSC Offset, so subtract to get host TSC */
+ decoder->packet.payload -= vmcs_info->tsc_offset;
+ /* TSC packet has only 7 bytes */
+ decoder->packet.payload &= SEVEN_BYTES;
+
+ /*
+ * The buffer is mmapped from the data file, so this also updates the
+ * data file.
+ */
+ if (!decoder->vm_tm_corr_dry_run)
+ memcpy((void *)decoder->buf + 1, &decoder->packet.payload, 7);
+
+ intel_pt_log("Translated VM TSC %#" PRIx64 " -> %#" PRIx64
+ " VMCS %#" PRIx64 " TSC Offset %#" PRIx64 "\n",
+ payload, decoder->packet.payload, vmcs_info->vmcs,
+ vmcs_info->tsc_offset);
+}
+
+static void intel_pt_translate_vm_tsc_offset(struct intel_pt_decoder *decoder,
+ uint64_t tsc_offset)
+{
+ struct intel_pt_vmcs_info vmcs_info = {
+ .vmcs = NO_VMCS,
+ .tsc_offset = tsc_offset
+ };
+
+ intel_pt_translate_vm_tsc(decoder, &vmcs_info);
+}
+
+static inline bool in_vm(uint64_t pip_payload)
+{
+ return pip_payload & 1;
+}
+
+static inline bool pip_in_vm(struct intel_pt_pkt *pip_packet)
+{
+ return pip_packet->payload & 1;
+}
+
+static void intel_pt_print_vmcs_info(struct intel_pt_vmcs_info *vmcs_info)
+{
+ p_log("VMCS: %#" PRIx64 " TSC Offset %#" PRIx64,
+ vmcs_info->vmcs, vmcs_info->tsc_offset);
+}
+
+static void intel_pt_vm_tm_corr_psb(struct intel_pt_decoder *decoder,
+ struct intel_pt_vm_tsc_info *data)
+{
+ memset(data, 0, sizeof(*data));
+ data->ctc_delta = decoder->ctc_delta;
+ data->last_ctc = decoder->last_ctc;
+ intel_pt_pkt_lookahead(decoder, intel_pt_vm_psb_lookahead_cb, data);
+ if (data->tsc && !data->psbend)
+ p_log("ERROR: PSB without PSBEND");
+ decoder->in_psb = data->psbend;
+}
+
+static void intel_pt_vm_tm_corr_first_tsc(struct intel_pt_decoder *decoder,
+ struct intel_pt_vm_tsc_info *data,
+ struct intel_pt_vmcs_info *vmcs_info,
+ uint64_t host_tsc)
+{
+ if (!decoder->in_psb) {
+ /* Can't happen */
+ p_log("ERROR: First TSC is not in PSB+");
+ }
+
+ if (data->pip) {
+ if (pip_in_vm(&data->pip_packet)) { /* Guest */
+ if (vmcs_info && vmcs_info->tsc_offset) {
+ intel_pt_translate_vm_tsc(decoder, vmcs_info);
+ decoder->vm_tm_corr_reliable = true;
+ } else {
+ p_log("ERROR: First TSC, unknown TSC Offset");
+ }
+ } else { /* Host */
+ decoder->vm_tm_corr_reliable = true;
+ }
+ } else { /* Host or Guest */
+ decoder->vm_tm_corr_reliable = false;
+ if (intel_pt_time_in_range(decoder, host_tsc)) {
+ /* Assume Host */
+ } else {
+ /* Assume Guest */
+ if (vmcs_info && vmcs_info->tsc_offset)
+ intel_pt_translate_vm_tsc(decoder, vmcs_info);
+ else
+ p_log("ERROR: First TSC, no PIP, unknown TSC Offset");
+ }
+ }
+}
+
+static void intel_pt_vm_tm_corr_tsc(struct intel_pt_decoder *decoder,
+ struct intel_pt_vm_tsc_info *data)
+{
+ struct intel_pt_vmcs_info *vmcs_info;
+ uint64_t tsc_offset = 0;
+ uint64_t vmcs;
+ bool reliable = true;
+ uint64_t expected_tsc;
+ uint64_t host_tsc;
+ uint64_t ref_timestamp;
+
+ bool assign = false;
+ bool assign_reliable = false;
+
+ /* Already have 'data' for the in_psb case */
+ if (!decoder->in_psb) {
+ memset(data, 0, sizeof(*data));
+ data->ctc_delta = decoder->ctc_delta;
+ data->last_ctc = decoder->last_ctc;
+ data->max_lookahead = 16;
+ intel_pt_pkt_lookahead(decoder, intel_pt_tma_lookahead_cb, data);
+ if (decoder->pge) {
+ data->pip = true;
+ data->pip_packet.payload = decoder->pip_payload;
+ }
+ }
+
+ /* Calculations depend on having TMA packets */
+ if (!data->tma) {
+ p_log("ERROR: TSC without TMA");
+ return;
+ }
+
+ vmcs = data->vmcs ? data->vmcs_packet.payload : decoder->vmcs;
+ if (vmcs == NO_VMCS)
+ vmcs = 0;
+
+ vmcs_info = decoder->findnew_vmcs_info(decoder->data, vmcs);
+
+ ref_timestamp = decoder->timestamp ? decoder->timestamp : decoder->buf_timestamp;
+ host_tsc = intel_pt_8b_tsc(decoder->packet.payload, ref_timestamp);
+
+ if (!decoder->ctc_timestamp) {
+ intel_pt_vm_tm_corr_first_tsc(decoder, data, vmcs_info, host_tsc);
+ return;
+ }
+
+ expected_tsc = intel_pt_expected_tsc(decoder, data);
+
+ tsc_offset = host_tsc - expected_tsc;
+
+ /* Determine if TSC is from Host or Guest */
+ if (data->pip) {
+ if (pip_in_vm(&data->pip_packet)) { /* Guest */
+ if (!vmcs_info) {
+ /* PIP NR=1 without VMCS cannot happen */
+ p_log("ERROR: Missing VMCS");
+ intel_pt_translate_vm_tsc_offset(decoder, tsc_offset);
+ decoder->vm_tm_corr_reliable = false;
+ return;
+ }
+ } else { /* Host */
+ decoder->last_reliable_timestamp = host_tsc;
+ decoder->vm_tm_corr_reliable = true;
+ return;
+ }
+ } else { /* Host or Guest */
+ reliable = false; /* Host/Guest is a guess, so not reliable */
+ if (decoder->in_psb) {
+ if (!tsc_offset)
+ return; /* Zero TSC Offset, assume Host */
+ /*
+ * TSC packet has only 7 bytes of TSC. We have no
+ * information about the Guest's 8th byte, but it
+ * doesn't matter because we only need 7 bytes.
+ * Here, since the 8th byte is unreliable and
+ * irrelevant, compare only 7 bytes.
+ */
+ if (vmcs_info &&
+ (tsc_offset & SEVEN_BYTES) ==
+ (vmcs_info->tsc_offset & SEVEN_BYTES)) {
+ /* Same TSC Offset as last VMCS, assume Guest */
+ goto guest;
+ }
+ }
+ /*
+ * Check if the host_tsc is within the expected range.
+ * Note, we could narrow the range more by looking ahead for
+ * the next host TSC in the same buffer, but we don't bother to
+ * do that because this is probably good enough.
+ */
+ if (host_tsc >= expected_tsc && intel_pt_time_in_range(decoder, host_tsc)) {
+ /* Within expected range for Host TSC, assume Host */
+ decoder->vm_tm_corr_reliable = false;
+ return;
+ }
+ }
+
+guest: /* Assuming Guest */
+
+ /* Determine whether to assign TSC Offset */
+ if (vmcs_info && vmcs_info->vmcs) {
+ if (vmcs_info->tsc_offset && vmcs_info->reliable) {
+ assign = false;
+ } else if (decoder->in_psb && data->pip && decoder->vm_tm_corr_reliable &&
+ decoder->vm_tm_corr_continuous && decoder->vm_tm_corr_same_buf) {
+ /* Continuous tracing, TSC in a PSB is not a time loss */
+ assign = true;
+ assign_reliable = true;
+ } else if (decoder->in_psb && data->pip && decoder->vm_tm_corr_same_buf) {
+ /*
+ * Unlikely to be a time loss TSC in a PSB which is not
+ * at the start of a buffer.
+ */
+ assign = true;
+ assign_reliable = false;
+ }
+ }
+
+ /* Record VMCS TSC Offset */
+ if (assign && (vmcs_info->tsc_offset != tsc_offset ||
+ vmcs_info->reliable != assign_reliable)) {
+ bool print = vmcs_info->tsc_offset != tsc_offset;
+
+ vmcs_info->tsc_offset = tsc_offset;
+ vmcs_info->reliable = assign_reliable;
+ if (print)
+ intel_pt_print_vmcs_info(vmcs_info);
+ }
+
+ /* Determine what TSC Offset to use */
+ if (vmcs_info && vmcs_info->tsc_offset) {
+ if (!vmcs_info->reliable)
+ reliable = false;
+ intel_pt_translate_vm_tsc(decoder, vmcs_info);
+ } else {
+ reliable = false;
+ if (vmcs_info) {
+ if (!vmcs_info->error_printed) {
+ p_log("ERROR: Unknown TSC Offset for VMCS %#" PRIx64,
+ vmcs_info->vmcs);
+ vmcs_info->error_printed = true;
+ }
+ } else {
+ if (intel_pt_print_once(decoder, INTEL_PT_PRT_ONCE_UNK_VMCS))
+ p_log("ERROR: Unknown VMCS");
+ }
+ intel_pt_translate_vm_tsc_offset(decoder, tsc_offset);
+ }
+
+ decoder->vm_tm_corr_reliable = reliable;
+}
+
+static void intel_pt_vm_tm_corr_pebs_tsc(struct intel_pt_decoder *decoder)
+{
+ uint64_t host_tsc = decoder->packet.payload;
+ uint64_t guest_tsc = decoder->packet.payload;
+ struct intel_pt_vmcs_info *vmcs_info;
+ uint64_t vmcs;
+
+ vmcs = decoder->vmcs;
+ if (vmcs == NO_VMCS)
+ vmcs = 0;
+
+ vmcs_info = decoder->findnew_vmcs_info(decoder->data, vmcs);
+
+ if (decoder->pge) {
+ if (in_vm(decoder->pip_payload)) { /* Guest */
+ if (!vmcs_info) {
+ /* PIP NR=1 without VMCS cannot happen */
+ p_log("ERROR: Missing VMCS");
+ }
+ } else { /* Host */
+ return;
+ }
+ } else { /* Host or Guest */
+ if (intel_pt_time_in_range(decoder, host_tsc)) {
+ /* Within expected range for Host TSC, assume Host */
+ return;
+ }
+ }
+
+ if (vmcs_info) {
+ /* Translate Guest TSC to Host TSC */
+ host_tsc = ((guest_tsc & SEVEN_BYTES) - vmcs_info->tsc_offset) & SEVEN_BYTES;
+ host_tsc = intel_pt_8b_tsc(host_tsc, decoder->timestamp);
+ intel_pt_log("Translated VM TSC %#" PRIx64 " -> %#" PRIx64
+ " VMCS %#" PRIx64 " TSC Offset %#" PRIx64 "\n",
+ guest_tsc, host_tsc, vmcs_info->vmcs,
+ vmcs_info->tsc_offset);
+ if (!intel_pt_time_in_range(decoder, host_tsc) &&
+ intel_pt_print_once(decoder, INTEL_PT_PRT_ONCE_ERANGE))
+ p_log("Timestamp out of range");
+ } else {
+ if (intel_pt_print_once(decoder, INTEL_PT_PRT_ONCE_UNK_VMCS))
+ p_log("ERROR: Unknown VMCS");
+ host_tsc = decoder->timestamp;
+ }
+
+ decoder->packet.payload = host_tsc;
+
+ if (!decoder->vm_tm_corr_dry_run)
+ memcpy((void *)decoder->buf + 1, &host_tsc, 8);
+}
+
+static int intel_pt_vm_time_correlation(struct intel_pt_decoder *decoder)
+{
+ struct intel_pt_vm_tsc_info data = { .psbend = false };
+ bool pge;
+ int err;
+
+ if (decoder->in_psb)
+ intel_pt_vm_tm_corr_psb(decoder, &data);
+
+ while (1) {
+ err = intel_pt_get_next_packet(decoder);
+ if (err == -ENOLINK)
+ continue;
+ if (err)
+ break;
+
+ switch (decoder->packet.type) {
+ case INTEL_PT_TIP_PGD:
+ decoder->pge = false;
+ decoder->vm_tm_corr_continuous = false;
+ break;
+
+ case INTEL_PT_TNT:
+ case INTEL_PT_TIP:
+ case INTEL_PT_TIP_PGE:
+ decoder->pge = true;
+ break;
+
+ case INTEL_PT_OVF:
+ decoder->in_psb = false;
+ pge = decoder->pge;
+ decoder->pge = intel_pt_ovf_fup_lookahead(decoder);
+ if (pge != decoder->pge)
+ intel_pt_log("Surprising PGE change in OVF!");
+ if (!decoder->pge)
+ decoder->vm_tm_corr_continuous = false;
+ break;
+
+ case INTEL_PT_FUP:
+ if (decoder->in_psb)
+ decoder->pge = true;
+ break;
+
+ case INTEL_PT_TRACESTOP:
+ decoder->pge = false;
+ decoder->vm_tm_corr_continuous = false;
+ decoder->have_tma = false;
+ break;
+
+ case INTEL_PT_PSB:
+ intel_pt_vm_tm_corr_psb(decoder, &data);
+ break;
+
+ case INTEL_PT_PIP:
+ decoder->pip_payload = decoder->packet.payload;
+ break;
+
+ case INTEL_PT_MTC:
+ intel_pt_calc_mtc_timestamp(decoder);
+ break;
+
+ case INTEL_PT_TSC:
+ intel_pt_vm_tm_corr_tsc(decoder, &data);
+ intel_pt_calc_tsc_timestamp(decoder);
+ decoder->vm_tm_corr_same_buf = true;
+ decoder->vm_tm_corr_continuous = decoder->pge;
+ break;
+
+ case INTEL_PT_TMA:
+ intel_pt_calc_tma(decoder);
+ break;
+
+ case INTEL_PT_CYC:
+ intel_pt_calc_cyc_timestamp(decoder);
+ break;
+
+ case INTEL_PT_CBR:
+ intel_pt_calc_cbr(decoder);
+ break;
+
+ case INTEL_PT_PSBEND:
+ decoder->in_psb = false;
+ data.psbend = false;
+ break;
+
+ case INTEL_PT_VMCS:
+ if (decoder->packet.payload != NO_VMCS)
+ decoder->vmcs = decoder->packet.payload;
+ break;
+
+ case INTEL_PT_BBP:
+ decoder->blk_type = decoder->packet.payload;
+ break;
+
+ case INTEL_PT_BIP:
+ if (decoder->blk_type == INTEL_PT_PEBS_BASIC &&
+ decoder->packet.count == 2)
+ intel_pt_vm_tm_corr_pebs_tsc(decoder);
+ break;
+
+ case INTEL_PT_BEP:
+ case INTEL_PT_BEP_IP:
+ decoder->blk_type = 0;
+ break;
+
+ case INTEL_PT_MODE_EXEC:
+ case INTEL_PT_MODE_TSX:
+ case INTEL_PT_MNT:
+ case INTEL_PT_PAD:
+ case INTEL_PT_PTWRITE_IP:
+ case INTEL_PT_PTWRITE:
+ case INTEL_PT_MWAIT:
+ case INTEL_PT_PWRE:
+ case INTEL_PT_EXSTOP_IP:
+ case INTEL_PT_EXSTOP:
+ case INTEL_PT_PWRX:
+ case INTEL_PT_BAD: /* Does not happen */
+ default:
+ break;
+ }
+ }
+
+ return err;
+}
+
#define HOP_PROCESS 0
#define HOP_IGNORE 1
#define HOP_RETURN 2
@@ -2910,6 +3587,15 @@ static int intel_pt_sync(struct intel_pt_decoder *decoder)
if (err)
return err;

+ if (decoder->vm_time_correlation) {
+ decoder->in_psb = true;
+ if (!decoder->timestamp)
+ decoder->timestamp = 1;
+ decoder->state.type = 0;
+ decoder->pkt_state = INTEL_PT_STATE_VM_TIME_CORRELATION;
+ return 0;
+ }
+
decoder->have_last_ip = true;
decoder->pkt_state = INTEL_PT_STATE_NO_IP;

@@ -2997,6 +3683,9 @@ const struct intel_pt_state *intel_pt_decode(struct intel_pt_decoder *decoder)
case INTEL_PT_STATE_RESAMPLE:
err = intel_pt_resample(decoder);
break;
+ case INTEL_PT_STATE_VM_TIME_CORRELATION:
+ err = intel_pt_vm_time_correlation(decoder);
+ break;
default:
err = intel_pt_bug(decoder);
break;
diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
index 4d78a8c35e2a..714c475808c0 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
@@ -258,6 +258,8 @@ struct intel_pt_params {
void *data;
bool return_compression;
bool branch_enable;
+ bool vm_time_correlation;
+ bool vm_tm_corr_dry_run;
uint64_t first_timestamp;
uint64_t ctl;
uint64_t period;
diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-log.h b/tools/perf/util/intel-pt-decoder/intel-pt-log.h
index 388661f89c44..d900aab24b21 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-log.h
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-log.h
@@ -67,4 +67,9 @@ static inline void intel_pt_log_to(const char *msg, uint64_t u)
intel_pt_log("%s to " x64_fmt "\n", msg, u);
}

+#define intel_pt_log_var(var, fmt) intel_pt_log("%s: " #var " " fmt "\n", __func__, var)
+
+#define intel_pt_log_x32(var) intel_pt_log_var(var, "%#x")
+#define intel_pt_log_x64(var) intel_pt_log_var(var, "%#" PRIx64)
+
#endif
--
2.25.1

2021-04-30 07:05:24

by Adrian Hunter

Subject: [PATCH V2 10/12] perf intel-pt: Better 7-byte timestamp wraparound logic

A timestamp should not go backwards. If it does, it is assumed that the
7-byte TSC packet value has wrapped. Improve that logic so that it will
not allow the timestamp to go past the buffer timestamp (which is recorded
when the buffer is copied out).
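
For example, with hypothetical values: if decoder->timestamp is
0x100fffffffffffff and a new packet reconstructs to 0x1000000000000001, the
timestamp appears to go backwards. Adding 2^56 gives 0x1100000000000001; if
the buffer timestamp is 0x1010000000000000, the wrapped value would be past
it, so the new value is treated as bad and suppressed rather than wrapped.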

Signed-off-by: Adrian Hunter <[email protected]>
---
tools/perf/util/intel-pt-decoder/intel-pt-decoder.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
index 7c93ae2bd56c..c4044bc2fb86 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
@@ -1499,9 +1499,15 @@ static void intel_pt_calc_tsc_timestamp(struct intel_pt_decoder *decoder)
timestamp = decoder->timestamp;
}
if (timestamp < decoder->timestamp) {
- intel_pt_log_to("Wraparound timestamp", timestamp);
- timestamp += (1ULL << 56);
- decoder->tsc_timestamp = timestamp;
+ if (!decoder->buf_timestamp ||
+ (timestamp + (1ULL << 56) < decoder->buf_timestamp)) {
+ intel_pt_log_to("Wraparound timestamp", timestamp);
+ timestamp += (1ULL << 56);
+ decoder->tsc_timestamp = timestamp;
+ } else {
+ intel_pt_log_to("Suppressing bad timestamp", timestamp);
+ timestamp = decoder->timestamp;
+ }
}
decoder->timestamp = timestamp;
decoder->timestamp_insn_cnt = 0;
--
2.25.1

2021-04-30 07:05:24

by Adrian Hunter

Subject: [PATCH V2 12/12] perf intel-pt: Parse VM Time Correlation options and set up decoding

Add parsing and validation of VM Time Correlation options, and pass
parameters to the decoder. Also update the Intel PT documentation
accordingly.
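
As an illustration of the per-VMCS form of the option (using the TSC Offset
and VMCS values from the documentation example below):

    $ perf inject -i perf.data.kvm --vm-time-correlation="0xffffe42722c64c41 : 0x1bff6a , 0x1cbc08" --force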

Signed-off-by: Adrian Hunter <[email protected]>
---
tools/perf/Documentation/perf-intel-pt.txt | 108 ++++++++++++++++++++-
tools/perf/util/intel-pt.c | 98 ++++++++++++++++++-
2 files changed, 202 insertions(+), 4 deletions(-)

diff --git a/tools/perf/Documentation/perf-intel-pt.txt b/tools/perf/Documentation/perf-intel-pt.txt
index 33df7898c378..8d6cce062a5f 100644
--- a/tools/perf/Documentation/perf-intel-pt.txt
+++ b/tools/perf/Documentation/perf-intel-pt.txt
@@ -1155,8 +1155,9 @@ include::build-xed.txt[]
Tracing Virtual Machines
------------------------

-Currently, only kernel tracing is supported and only with "timeless" decoding
-i.e. no TSC timestamps
+Currently, only kernel tracing is supported and only with either "timeless" decoding
+(i.e. no TSC timestamps) or VM Time Correlation. VM Time Correlation is an extra step
+using 'perf inject' and requires unchanging VMX TSC Offset and no VMX TSC Scaling.

Other limitations and caveats

@@ -1167,7 +1168,7 @@ Other limitations and caveats
Guest VCPU is unknown but may be able to be inferred from the host thread
Callchains are not supported

-Example
+Example using "timeless" decoding

Start VM

@@ -1231,6 +1232,107 @@ perf script can be used to provide an instruction trace
:1440 1440 ffffffffbb74603c clockevents_program_event+0x4c ([guest.kernel.kallsyms]) popq %rbx
:1440 1440 ffffffffbb74603d clockevents_program_event+0x4d ([guest.kernel.kallsyms]) popq %r12

+Example using VM Time Correlation
+
+Start VM
+
+ $ sudo virsh start kubuntu20.04
+ Domain kubuntu20.04 started
+
+Mount the guest file system. Note sshfs needs -o direct_io to enable reading of proc files. Root access is needed to read /proc/kcore.
+
+ $ mkdir -p vm0
+ $ sshfs -o direct_io root@vm0:/ vm0
+
+Copy the guest /proc/kallsyms, /proc/modules and /proc/kcore
+
+ $ perf buildid-cache -v --kcore vm0/proc/kcore
+ same kcore found in /home/user/.debug/[kernel.kcore]/cc9c55a98c5e4ec0aeda69302554aabed5cd6491/2021021312450777
+ $ KALLSYMS=/home/user/.debug/\[kernel.kcore\]/cc9c55a98c5e4ec0aeda69302554aabed5cd6491/2021021312450777/kallsyms
+
+Find the VM process
+
+ $ ps -eLl | grep 'KVM\|PID'
+ F S UID PID PPID LWP C PRI NI ADDR SZ WCHAN TTY TIME CMD
+ 3 S 64055 16998 1 17005 13 80 0 - 1818189 - ? 00:00:16 CPU 0/KVM
+ 3 S 64055 16998 1 17006 4 80 0 - 1818189 - ? 00:00:05 CPU 1/KVM
+ 3 S 64055 16998 1 17007 3 80 0 - 1818189 - ? 00:00:04 CPU 2/KVM
+ 3 S 64055 16998 1 17008 4 80 0 - 1818189 - ? 00:00:05 CPU 3/KVM
+
+Start an open-ended perf record, tracing the VM process, do something on the VM, and then ctrl-C to stop.
+IPC can be determined, hence cyc=1 can be added.
+Only kernel decoding is supported, so 'k' must be specified.
+Intel PT traces both the host and the guest so --guest and --host need to be specified.
+
+ $ sudo perf kvm --guest --host --guestkallsyms $KALLSYMS record --kcore -e intel_pt/cyc=1/k -p 16998
+ ^C[ perf record: Woken up 1 times to write data ]
+ [ perf record: Captured and wrote 9.041 MB perf.data.kvm ]
+
+Now 'perf inject' can be used to determine the VMX TSC Offset. Note, Intel PT TSC packets are
+only 7 bytes, so the TSC Offset might differ from the actual value in the 8th byte. That will
+have no effect, i.e. the resulting timestamps will be correct anyway.
+
+ $ perf inject -i perf.data.kvm --vm-time-correlation=dry-run
+ ERROR: Unknown TSC Offset for VMCS 0x1bff6a
+ VMCS: 0x1bff6a TSC Offset 0xffffe42722c64c41
+ ERROR: Unknown TSC Offset for VMCS 0x1cbc08
+ VMCS: 0x1cbc08 TSC Offset 0xffffe42722c64c41
+ ERROR: Unknown TSC Offset for VMCS 0x1c3ce8
+ VMCS: 0x1c3ce8 TSC Offset 0xffffe42722c64c41
+ ERROR: Unknown TSC Offset for VMCS 0x1cbce9
+ VMCS: 0x1cbce9 TSC Offset 0xffffe42722c64c41
+
+Each virtual CPU has a different Virtual Machine Control Structure (VMCS)
+shown above with the calculated TSC Offset. For an unchanging TSC Offset
+they should all be the same for the same virtual machine.
+
+Now that the TSC Offset is known, it can be provided to 'perf inject'
+
+ $ perf inject -i perf.data.kvm --vm-time-correlation="dry-run 0xffffe42722c64c41"
+
+Note the options for 'perf inject' --vm-time-correlation are:
+
+ [ dry-run ] [ <TSC Offset> [ : <VMCS> [ , <VMCS> ]... ] ]...
+
+So it is possible to specify different TSC Offsets for different VMCS.
+The option "dry-run" will cause the file to be processed but without updating it.
+Note it is also possible to get an intel_pt.log file by adding the option --itrace=d
+
+There were no errors, so do it for real
+
+ $ perf inject -i perf.data.kvm --vm-time-correlation=0xffffe42722c64c41 --force
+
+'perf script' can be used to see if there are any decoder errors
+
+ $ perf script -i perf.data.kvm --guestkallsyms $KALLSYMS --itrace=e-o
+
+There were none.
+
+'perf script' can be used to provide an instruction trace showing timestamps
+
+ $ perf script -i perf.data.kvm --guestkallsyms $KALLSYMS --insn-trace --xed -F+ipc | grep -C10 vmresume | head -21
+ CPU 1/KVM 17006 [001] 11500.262865593: ffffffff82133cdd __vmx_vcpu_run+0x3d ([kernel.kallsyms]) movq 0x48(%rax), %r9
+ CPU 1/KVM 17006 [001] 11500.262865593: ffffffff82133ce1 __vmx_vcpu_run+0x41 ([kernel.kallsyms]) movq 0x50(%rax), %r10
+ CPU 1/KVM 17006 [001] 11500.262865593: ffffffff82133ce5 __vmx_vcpu_run+0x45 ([kernel.kallsyms]) movq 0x58(%rax), %r11
+ CPU 1/KVM 17006 [001] 11500.262865593: ffffffff82133ce9 __vmx_vcpu_run+0x49 ([kernel.kallsyms]) movq 0x60(%rax), %r12
+ CPU 1/KVM 17006 [001] 11500.262865593: ffffffff82133ced __vmx_vcpu_run+0x4d ([kernel.kallsyms]) movq 0x68(%rax), %r13
+ CPU 1/KVM 17006 [001] 11500.262865593: ffffffff82133cf1 __vmx_vcpu_run+0x51 ([kernel.kallsyms]) movq 0x70(%rax), %r14
+ CPU 1/KVM 17006 [001] 11500.262865593: ffffffff82133cf5 __vmx_vcpu_run+0x55 ([kernel.kallsyms]) movq 0x78(%rax), %r15
+ CPU 1/KVM 17006 [001] 11500.262865593: ffffffff82133cf9 __vmx_vcpu_run+0x59 ([kernel.kallsyms]) movq (%rax), %rax
+ CPU 1/KVM 17006 [001] 11500.262865593: ffffffff82133cfc __vmx_vcpu_run+0x5c ([kernel.kallsyms]) callq 0xffffffff82133c40
+ CPU 1/KVM 17006 [001] 11500.262865593: ffffffff82133c40 vmx_vmenter+0x0 ([kernel.kallsyms]) jz 0xffffffff82133c46
+ CPU 1/KVM 17006 [001] 11500.262866075: ffffffff82133c42 vmx_vmenter+0x2 ([kernel.kallsyms]) vmresume IPC: 0.05 (40/769)
+ :17006 17006 [001] 11500.262869216: ffffffff82200cb0 asm_sysvec_apic_timer_interrupt+0x0 ([guest.kernel.kallsyms]) clac
+ :17006 17006 [001] 11500.262869216: ffffffff82200cb3 asm_sysvec_apic_timer_interrupt+0x3 ([guest.kernel.kallsyms]) pushq $0xffffffffffffffff
+ :17006 17006 [001] 11500.262869216: ffffffff82200cb5 asm_sysvec_apic_timer_interrupt+0x5 ([guest.kernel.kallsyms]) callq 0xffffffff82201160
+ :17006 17006 [001] 11500.262869216: ffffffff82201160 error_entry+0x0 ([guest.kernel.kallsyms]) cld
+ :17006 17006 [001] 11500.262869216: ffffffff82201161 error_entry+0x1 ([guest.kernel.kallsyms]) pushq %rsi
+ :17006 17006 [001] 11500.262869216: ffffffff82201162 error_entry+0x2 ([guest.kernel.kallsyms]) movq 0x8(%rsp), %rsi
+ :17006 17006 [001] 11500.262869216: ffffffff82201167 error_entry+0x7 ([guest.kernel.kallsyms]) movq %rdi, 0x8(%rsp)
+ :17006 17006 [001] 11500.262869216: ffffffff8220116c error_entry+0xc ([guest.kernel.kallsyms]) pushq %rdx
+ :17006 17006 [001] 11500.262869216: ffffffff8220116d error_entry+0xd ([guest.kernel.kallsyms]) pushq %rcx
+ :17006 17006 [001] 11500.262869216: ffffffff8220116e error_entry+0xe ([guest.kernel.kallsyms]) pushq %rax
+


SEE ALSO
diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index 858b679d00be..2a5fe1514e65 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -1018,6 +1018,19 @@ static bool intel_pt_have_tsc(struct intel_pt *pt)
return have_tsc;
}

+static bool intel_pt_have_mtc(struct intel_pt *pt)
+{
+ struct evsel *evsel;
+ u64 config;
+
+ evlist__for_each_entry(pt->session->evlist, evsel) {
+ if (intel_pt_get_config(pt, &evsel->core.attr, &config) &&
+ (config & pt->mtc_bit))
+ return true;
+ }
+ return false;
+}
+
static bool intel_pt_sampling_mode(struct intel_pt *pt)
{
struct evsel *evsel;
@@ -1182,6 +1195,8 @@ static struct intel_pt_queue *intel_pt_alloc_queue(struct intel_pt *pt,
params.tsc_ctc_ratio_n = pt->tsc_ctc_ratio_n;
params.tsc_ctc_ratio_d = pt->tsc_ctc_ratio_d;
params.quick = pt->synth_opts.quick;
+ params.vm_time_correlation = pt->synth_opts.vm_time_correlation;
+ params.vm_tm_corr_dry_run = pt->synth_opts.vm_tm_corr_dry_run;
params.first_timestamp = pt->first_timestamp;

if (pt->filts.cnt > 0)
@@ -2465,7 +2480,7 @@ static int intel_pt_run_decoder(struct intel_pt_queue *ptq, u64 *timestamp)
if (pt->per_cpu_mmaps &&
(pt->have_sched_switch == 1 || pt->have_sched_switch == 3) &&
!pt->timeless_decoding && intel_pt_tracing_kernel(pt) &&
- !pt->sampling_mode) {
+ !pt->sampling_mode && !pt->synth_opts.vm_time_correlation) {
pt->switch_ip = intel_pt_switch_ip(pt, &pt->ptss_ip);
if (pt->switch_ip) {
intel_pt_log("switch_ip: %"PRIx64" ptss_ip: %"PRIx64"\n",
@@ -3496,6 +3511,65 @@ static int intel_pt_setup_time_ranges(struct intel_pt *pt,
return 0;
}

+static int intel_pt_parse_vm_tm_corr_arg(struct intel_pt *pt, char **args)
+{
+ struct intel_pt_vmcs_info *vmcs_info;
+ u64 tsc_offset, vmcs;
+ char *p = *args;
+
+ errno = 0;
+
+ p = skip_spaces(p);
+ if (!*p)
+ return 1;
+
+ tsc_offset = strtoull(p, &p, 0);
+ if (errno)
+ return -errno;
+ p = skip_spaces(p);
+ if (*p != ':') {
+ pt->dflt_tsc_offset = tsc_offset;
+ *args = p;
+ return 0;
+ }
+ p += 1;
+ while (1) {
+ vmcs = strtoull(p, &p, 0);
+ if (errno)
+ return -errno;
+ if (!vmcs)
+ return -EINVAL;
+ vmcs_info = intel_pt_findnew_vmcs(&pt->vmcs_info, vmcs, tsc_offset);
+ if (!vmcs_info)
+ return -ENOMEM;
+ p = skip_spaces(p);
+ if (*p != ',')
+ break;
+ p += 1;
+ }
+ *args = p;
+ return 0;
+}
+
+static int intel_pt_parse_vm_tm_corr_args(struct intel_pt *pt)
+{
+ char *args = pt->synth_opts.vm_tm_corr_args;
+ int ret;
+
+ if (!args)
+ return 0;
+
+ do {
+ ret = intel_pt_parse_vm_tm_corr_arg(pt, &args);
+ } while (!ret);
+
+ if (ret < 0) {
+ pr_err("Failed to parse VM Time Correlation options\n");
+ return ret;
+ }
+
+ return 0;
+}
+
static const char * const intel_pt_info_fmts[] = {
[INTEL_PT_PMU_TYPE] = " PMU Type %"PRId64"\n",
[INTEL_PT_TIME_SHIFT] = " Time Shift %"PRIu64"\n",
@@ -3667,6 +3741,28 @@ int intel_pt_process_auxtrace_info(union perf_event *event,
pt->sampling_mode = intel_pt_sampling_mode(pt);
pt->est_tsc = !pt->timeless_decoding;

+ if (pt->synth_opts.vm_time_correlation) {
+ if (pt->timeless_decoding) {
+ pr_err("Intel PT has no time information for VM Time Correlation\n");
+ err = -EINVAL;
+ goto err_free_queues;
+ }
+ if (session->itrace_synth_opts->ptime_range) {
+ pr_err("Time ranges cannot be specified with VM Time Correlation\n");
+ err = -EINVAL;
+ goto err_free_queues;
+ }
+ /* Currently TSC Offset is calculated using MTC packets */
+ if (!intel_pt_have_mtc(pt)) {
+ pr_err("MTC packets must have been enabled for VM Time Correlation\n");
+ err = -EINVAL;
+ goto err_free_queues;
+ }
+ err = intel_pt_parse_vm_tm_corr_args(pt);
+ if (err)
+ goto err_free_queues;
+ }
+
pt->unknown_thread = thread__new(999999999, 999999999);
if (!pt->unknown_thread) {
err = -ENOMEM;
--
2.25.1

2021-04-30 07:06:02

by Adrian Hunter

[permalink] [raw]
Subject: [PATCH V2 04/12] perf inject: Add facility to do in place update

When only timestamps need to be modified, it is much simpler and quicker
to update the existing file than to re-write all of its contents.

In preparation for that, add the ability to modify the input file in place.
In practice that just means making the file descriptor and mmaps writable.
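
For illustration only, here is a minimal stand-alone sketch (not the perf
code itself; the helper name is hypothetical) of what in-place updating
amounts to: open the file read/write and map it PROT_WRITE with MAP_SHARED
so that stores land directly in the file.

    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    /* Hypothetical example: overwrite a u64 at a given file offset in place */
    static int patch_u64_in_place(const char *path, off_t offset, uint64_t val)
    {
            struct stat st;
            void *map;
            int fd;

            fd = open(path, O_RDWR);        /* writable fd, like in_place_update */
            if (fd < 0)
                    return -1;
            if (fstat(fd, &st) < 0) {
                    close(fd);
                    return -1;
            }
            /* PROT_WRITE + MAP_SHARED: modifications are written back to the file */
            map = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (map == MAP_FAILED) {
                    close(fd);
                    return -1;
            }
            memcpy((char *)map + offset, &val, sizeof(val));
            munmap(map, st.st_size);
            close(fd);
            return 0;
    }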

Signed-off-by: Adrian Hunter <[email protected]>
---
tools/perf/builtin-inject.c | 26 ++++++++++++++++++++++----
tools/perf/util/data.c | 3 ++-
tools/perf/util/data.h | 1 +
tools/perf/util/header.c | 5 +++++
tools/perf/util/session.c | 6 +++++-
5 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
index ddccc0eb7390..6007f1844b01 100644
--- a/tools/perf/builtin-inject.c
+++ b/tools/perf/builtin-inject.c
@@ -43,6 +43,8 @@ struct perf_inject {
bool have_auxtrace;
bool strip;
bool jit_mode;
+ bool in_place_update;
+ bool in_place_update_dry_run;
const char *input_name;
struct perf_data output;
u64 bytes_written;
@@ -701,7 +703,7 @@ static int __cmd_inject(struct perf_inject *inject)
int ret = -EINVAL;
struct perf_session *session = inject->session;
struct perf_data *data_out = &inject->output;
- int fd = perf_data__fd(data_out);
+ int fd = inject->in_place_update ? -1 : perf_data__fd(data_out);
u64 output_data_offset;

signal(SIGINT, sig_handler);
@@ -759,14 +761,14 @@ static int __cmd_inject(struct perf_inject *inject)
if (!inject->itrace_synth_opts.set)
auxtrace_index__free(&session->auxtrace_index);

- if (!data_out->is_pipe)
+ if (!data_out->is_pipe && !inject->in_place_update)
lseek(fd, output_data_offset, SEEK_SET);

ret = perf_session__process_events(session);
if (ret)
return ret;

- if (!data_out->is_pipe) {
+ if (!data_out->is_pipe && !inject->in_place_update) {
if (inject->build_ids)
perf_header__set_feat(&session->header,
HEADER_BUILD_ID);
@@ -900,7 +902,23 @@ int cmd_inject(int argc, const char **argv)
return -1;
}

- if (perf_data__open(&inject.output)) {
+ if (inject.in_place_update) {
+ if (!strcmp(inject.input_name, "-")) {
+ pr_err("Input file name required for in-place updating\n");
+ return -1;
+ }
+ if (strcmp(inject.output.path, "-")) {
+ pr_err("Output file name must not be specified for in-place updating\n");
+ return -1;
+ }
+ if (!data.force && !inject.in_place_update_dry_run) {
+ pr_err("The input file would be updated in place, "
+ "the --force option is required.\n");
+ return -1;
+ }
+ if (!inject.in_place_update_dry_run)
+ data.in_place_update = true;
+ } else if (perf_data__open(&inject.output)) {
perror("failed to create output file");
return -1;
}
diff --git a/tools/perf/util/data.c b/tools/perf/util/data.c
index f29af4fc3d09..209fe63d4e49 100644
--- a/tools/perf/util/data.c
+++ b/tools/perf/util/data.c
@@ -239,11 +239,12 @@ static bool is_dir(struct perf_data *data)

static int open_file_read(struct perf_data *data)
{
+ int flags = data->in_place_update ? O_RDWR : O_RDONLY;
struct stat st;
int fd;
char sbuf[STRERR_BUFSIZE];

- fd = open(data->file.path, O_RDONLY);
+ fd = open(data->file.path, flags);
if (fd < 0) {
int err = errno;

diff --git a/tools/perf/util/data.h b/tools/perf/util/data.h
index 62a3e66fbee8..c9de82af5584 100644
--- a/tools/perf/util/data.h
+++ b/tools/perf/util/data.h
@@ -31,6 +31,7 @@ struct perf_data {
bool is_dir;
bool force;
bool use_stdio;
+ bool in_place_update;
enum perf_data_mode mode;

struct {
diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
index aa1e42518d37..02b13c7a23be 100644
--- a/tools/perf/util/header.c
+++ b/tools/perf/util/header.c
@@ -3814,6 +3814,11 @@ int perf_session__read_header(struct perf_session *session)
if (perf_file_header__read(&f_header, header, fd) < 0)
return -EINVAL;

+ if (header->needs_swap && data->in_place_update) {
+ pr_err("In-place update not supported when byte-swapping is required\n");
+ return -EINVAL;
+ }
+
/*
* Sanity check that perf.data was written cleanly; data size is
* initialized to 0 and updated only if the on_exit function is run.
diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
index eba3769be3f1..edd068ea8a6c 100644
--- a/tools/perf/util/session.c
+++ b/tools/perf/util/session.c
@@ -2131,6 +2131,7 @@ struct reader {
u64 data_size;
u64 data_offset;
reader_cb_t process;
+ bool in_place_update;
};

static int
@@ -2164,7 +2165,9 @@ reader__process_events(struct reader *rd, struct perf_session *session,
mmap_prot = PROT_READ;
mmap_flags = MAP_SHARED;

- if (session->header.needs_swap) {
+ if (rd->in_place_update) {
+ mmap_prot |= PROT_WRITE;
+ } else if (session->header.needs_swap) {
mmap_prot |= PROT_WRITE;
mmap_flags = MAP_PRIVATE;
}
@@ -2250,6 +2253,7 @@ static int __perf_session__process_events(struct perf_session *session)
.data_size = session->header.data_size,
.data_offset = session->header.data_offset,
.process = process_simple,
+ .in_place_update = session->data->in_place_update,
};
struct ordered_events *oe = &session->ordered_events;
struct perf_tool *tool = session->tool;
--
2.25.1

2021-04-30 07:06:10

by Adrian Hunter

[permalink] [raw]
Subject: [PATCH V2 08/12] perf intel-pt: Add a tree for VMCS information

Even when the VMX TSC Offset does not change during perf record, different
virtual machines can have different TSC Offsets. There is a Virtual Machine
Control Structure (VMCS) for each virtual CPU, the address of which is
reported to Intel PT in the VMCS packet. We do not know which VMCS belongs
to which virtual machine, so use a tree to keep track of VMCS information.
Then the decoder will be able to use the current VMCS value to look up the
current TSC Offset.
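
As a rough sketch of how the decoder can then correct guest timestamps
(hypothetical helper names, not the decoder code): with VMX TSC Offsetting
and no TSC Scaling, the guest reads host TSC plus the TSC Offset, so
subtracting the per-VMCS offset maps a guest TSC value back to host time.

    #include <stdint.h>

    /* Hypothetical per-VMCS record, mirroring the idea of intel_pt_vmcs_info */
    struct vmcs_tsc {
            uint64_t vmcs;          /* VMCS address from the VMCS packet */
            uint64_t tsc_offset;    /* VMX TSC Offset for that virtual CPU */
    };

    /* guest_tsc = host_tsc + tsc_offset, so reverse it to recover host time */
    static uint64_t guest_tsc_to_host(uint64_t guest_tsc, const struct vmcs_tsc *v)
    {
            return v ? guest_tsc - v->tsc_offset : guest_tsc;
    }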

Signed-off-by: Adrian Hunter <[email protected]>
---
.../util/intel-pt-decoder/intel-pt-decoder.c | 2 +
.../util/intel-pt-decoder/intel-pt-decoder.h | 11 ++++
tools/perf/util/intel-pt.c | 66 +++++++++++++++++++
3 files changed, 79 insertions(+)

diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
index 8f916f90205e..c12e4a39b790 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
@@ -107,6 +107,7 @@ struct intel_pt_decoder {
uint64_t max_insn_cnt, void *data);
bool (*pgd_ip)(uint64_t ip, void *data);
int (*lookahead)(void *data, intel_pt_lookahead_cb_t cb, void *cb_data);
+ struct intel_pt_vmcs_info *(*findnew_vmcs_info)(void *data, uint64_t vmcs);
void *data;
struct intel_pt_state state;
const unsigned char *buf;
@@ -258,6 +259,7 @@ struct intel_pt_decoder *intel_pt_decoder_new(struct intel_pt_params *params)
decoder->walk_insn = params->walk_insn;
decoder->pgd_ip = params->pgd_ip;
decoder->lookahead = params->lookahead;
+ decoder->findnew_vmcs_info = params->findnew_vmcs_info;
decoder->data = params->data;
decoder->return_compression = params->return_compression;
decoder->branch_enable = params->branch_enable;
diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
index bebdb7d37b39..e6e782da45a4 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
@@ -11,6 +11,8 @@
#include <stddef.h>
#include <stdbool.h>

+#include <linux/rbtree.h>
+
#include "intel-pt-insn-decoder.h"

#define INTEL_PT_IN_TX (1 << 0)
@@ -199,6 +201,14 @@ struct intel_pt_blk_items {
bool is_32_bit;
};

+struct intel_pt_vmcs_info {
+ struct rb_node rb_node;
+ uint64_t vmcs;
+ uint64_t tsc_offset;
+ bool reliable;
+ bool error_printed;
+};
+
struct intel_pt_state {
enum intel_pt_sample_type type;
bool from_nr;
@@ -244,6 +254,7 @@ struct intel_pt_params {
uint64_t max_insn_cnt, void *data);
bool (*pgd_ip)(uint64_t ip, void *data);
int (*lookahead)(void *data, intel_pt_lookahead_cb_t cb, void *cb_data);
+ struct intel_pt_vmcs_info *(*findnew_vmcs_info)(void *data, uint64_t vmcs);
void *data;
bool return_compression;
bool branch_enable;
diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index a494d47aa1ad..8f6df8eec9b8 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -133,6 +133,9 @@ struct intel_pt {

struct ip_callchain *chain;
struct branch_stack *br_stack;
+
+ u64 dflt_tsc_offset;
+ struct rb_root vmcs_info;
};

enum switch_state {
@@ -271,6 +274,65 @@ static bool intel_pt_log_events(struct intel_pt *pt, u64 tm)
return !n || !perf_time__ranges_skip_sample(range, n, tm);
}

+static struct intel_pt_vmcs_info *intel_pt_findnew_vmcs(struct rb_root *rb_root,
+ u64 vmcs,
+ u64 dflt_tsc_offset)
+{
+ struct rb_node **p = &rb_root->rb_node;
+ struct rb_node *parent = NULL;
+ struct intel_pt_vmcs_info *v;
+
+ while (*p) {
+ parent = *p;
+ v = rb_entry(parent, struct intel_pt_vmcs_info, rb_node);
+
+ if (v->vmcs == vmcs)
+ return v;
+
+ if (vmcs < v->vmcs)
+ p = &(*p)->rb_left;
+ else
+ p = &(*p)->rb_right;
+ }
+
+ v = zalloc(sizeof(*v));
+ if (v) {
+ v->vmcs = vmcs;
+ v->tsc_offset = dflt_tsc_offset;
+ v->reliable = dflt_tsc_offset;
+
+ rb_link_node(&v->rb_node, parent, p);
+ rb_insert_color(&v->rb_node, rb_root);
+ }
+
+ return v;
+}
+
+static struct intel_pt_vmcs_info *intel_pt_findnew_vmcs_info(void *data, uint64_t vmcs)
+{
+ struct intel_pt_queue *ptq = data;
+ struct intel_pt *pt = ptq->pt;
+
+ if (!vmcs && !pt->dflt_tsc_offset)
+ return NULL;
+
+ return intel_pt_findnew_vmcs(&pt->vmcs_info, vmcs, pt->dflt_tsc_offset);
+}
+
+static void intel_pt_free_vmcs_info(struct intel_pt *pt)
+{
+ struct intel_pt_vmcs_info *v;
+ struct rb_node *n;
+
+ n = rb_first(&pt->vmcs_info);
+ while (n) {
+ v = rb_entry(n, struct intel_pt_vmcs_info, rb_node);
+ n = rb_next(n);
+ rb_erase(&v->rb_node, &pt->vmcs_info);
+ free(v);
+ }
+}
+
static int intel_pt_do_fix_overlap(struct intel_pt *pt, struct auxtrace_buffer *a,
struct auxtrace_buffer *b)
{
@@ -1109,6 +1171,7 @@ static struct intel_pt_queue *intel_pt_alloc_queue(struct intel_pt *pt,
params.get_trace = intel_pt_get_trace;
params.walk_insn = intel_pt_walk_next_insn;
params.lookahead = intel_pt_lookahead;
+ params.findnew_vmcs_info = intel_pt_findnew_vmcs_info;
params.data = ptq;
params.return_compression = intel_pt_return_compression(pt);
params.branch_enable = intel_pt_branch_enable(pt);
@@ -2970,6 +3033,7 @@ static void intel_pt_free(struct perf_session *session)
auxtrace_heap__free(&pt->heap);
intel_pt_free_events(session);
session->auxtrace = NULL;
+ intel_pt_free_vmcs_info(pt);
thread__put(pt->unknown_thread);
addr_filters__exit(&pt->filts);
zfree(&pt->chain);
@@ -3475,6 +3539,8 @@ int intel_pt_process_auxtrace_info(union perf_event *event,
if (!pt)
return -ENOMEM;

+ pt->vmcs_info = RB_ROOT;
+
addr_filters__init(&pt->filts);

err = perf_config(intel_pt_perf_config, pt);
--
2.25.1

2021-04-30 07:06:25

by Adrian Hunter

[permalink] [raw]
Subject: [PATCH V2 09/12] perf intel-pt: Pass the first timestamp to the decoder

VM Time Correlation will use time ranges to determine whether a TSC packet
belongs to the Host or Guest. To start, the first non-zero timestamp is
needed. Pass that to the decoder.
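
Purely as a sketch of how that first timestamp might be used (hypothetical
helper, not the actual decoder logic, and assuming both values are in the
same time base): a TSC value that falls before the first host timestamp of
the session cannot be a host time, so it can be treated as a guest value
that still needs its TSC Offset applied.

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Hypothetical sketch: the real decoder also has to deal with 7-byte TSC
     * wraparound and per-VMCS offsets, but this shows why the first non-zero
     * timestamp is needed before decoding starts.
     */
    static bool tsc_looks_like_host(uint64_t tsc, uint64_t first_timestamp)
    {
            return first_timestamp && tsc >= first_timestamp;
    }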

Signed-off-by: Adrian Hunter <[email protected]>
---
.../util/intel-pt-decoder/intel-pt-decoder.c | 8 ++++++++
.../util/intel-pt-decoder/intel-pt-decoder.h | 4 ++++
tools/perf/util/intel-pt.c | 19 +++++++++++++++++++
3 files changed, 31 insertions(+)

diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
index c12e4a39b790..7c93ae2bd56c 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.c
@@ -140,6 +140,7 @@ struct intel_pt_decoder {
uint64_t ctc_delta;
uint64_t cycle_cnt;
uint64_t cyc_ref_timestamp;
+ uint64_t first_timestamp;
uint32_t last_mtc;
uint32_t tsc_ctc_ratio_n;
uint32_t tsc_ctc_ratio_d;
@@ -265,6 +266,7 @@ struct intel_pt_decoder *intel_pt_decoder_new(struct intel_pt_params *params)
decoder->branch_enable = params->branch_enable;
decoder->hop = params->quick >= 1;
decoder->leap = params->quick >= 2;
+ decoder->first_timestamp = params->first_timestamp;

decoder->flags = params->flags;

@@ -314,6 +316,12 @@ struct intel_pt_decoder *intel_pt_decoder_new(struct intel_pt_params *params)
return decoder;
}

+void intel_pt_set_first_timestamp(struct intel_pt_decoder *decoder,
+ uint64_t first_timestamp)
+{
+ decoder->first_timestamp = first_timestamp;
+}
+
static void intel_pt_pop_blk(struct intel_pt_stack *stack)
{
struct intel_pt_blk *blk = stack->blk;
diff --git a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
index e6e782da45a4..4d78a8c35e2a 100644
--- a/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
+++ b/tools/perf/util/intel-pt-decoder/intel-pt-decoder.h
@@ -258,6 +258,7 @@ struct intel_pt_params {
void *data;
bool return_compression;
bool branch_enable;
+ uint64_t first_timestamp;
uint64_t ctl;
uint64_t period;
enum intel_pt_period_type period_type;
@@ -285,4 +286,7 @@ unsigned char *intel_pt_find_overlap(unsigned char *buf_a, size_t len_a,

int intel_pt__strerror(int code, char *buf, size_t buflen);

+void intel_pt_set_first_timestamp(struct intel_pt_decoder *decoder,
+ uint64_t first_timestamp);
+
#endif
diff --git a/tools/perf/util/intel-pt.c b/tools/perf/util/intel-pt.c
index 8f6df8eec9b8..858b679d00be 100644
--- a/tools/perf/util/intel-pt.c
+++ b/tools/perf/util/intel-pt.c
@@ -78,6 +78,7 @@ struct intel_pt {
u64 kernel_start;
u64 switch_ip;
u64 ptss_ip;
+ u64 first_timestamp;

struct perf_tsc_conversion tc;
bool cap_user_time_zero;
@@ -1181,6 +1182,7 @@ static struct intel_pt_queue *intel_pt_alloc_queue(struct intel_pt *pt,
params.tsc_ctc_ratio_n = pt->tsc_ctc_ratio_n;
params.tsc_ctc_ratio_d = pt->tsc_ctc_ratio_d;
params.quick = pt->synth_opts.quick;
+ params.first_timestamp = pt->first_timestamp;

if (pt->filts.cnt > 0)
params.pgd_ip = intel_pt_pgd_ip;
@@ -1245,6 +1247,21 @@ static void intel_pt_free_queue(void *priv)
free(ptq);
}

+static void intel_pt_first_timestamp(struct intel_pt *pt, u64 timestamp)
+{
+ unsigned int i;
+
+ pt->first_timestamp = timestamp;
+
+ for (i = 0; i < pt->queues.nr_queues; i++) {
+ struct auxtrace_queue *queue = &pt->queues.queue_array[i];
+ struct intel_pt_queue *ptq = queue->priv;
+
+ if (ptq && ptq->decoder)
+ intel_pt_set_first_timestamp(ptq->decoder, timestamp);
+ }
+}
+
static void intel_pt_set_pid_tid_cpu(struct intel_pt *pt,
struct auxtrace_queue *queue)
{
@@ -2947,6 +2964,8 @@ static int intel_pt_process_event(struct perf_session *session,
sample->time);
}
} else if (timestamp) {
+ if (!pt->first_timestamp)
+ intel_pt_first_timestamp(pt, timestamp);
err = intel_pt_process_queues(pt, timestamp);
}
if (err)
--
2.25.1

2021-04-30 21:30:42

by Andi Kleen

[permalink] [raw]
Subject: Re: [PATCH V2 00/12] perf intel-pt: Add more support for VMs


On 4/30/2021 12:02 AM, Adrian Hunter wrote:
> Hi
>
> Here is V2, see changes list below.


This will be very useful. I'll definitely be a user for future
debugging. Thanks for working on this.

For the series

Reviewed-by: Andi Kleen <[email protected]>

-Andi


2021-05-03 18:10:23

by Arnaldo Carvalho de Melo

[permalink] [raw]
Subject: Re: [PATCH V2 00/12] perf intel-pt: Add more support for VMs

Em Fri, Apr 30, 2021 at 02:29:23PM -0700, Andi Kleen escreveu:
>
> On 4/30/2021 12:02 AM, Adrian Hunter wrote:
> > Hi
> >
> > Here is V2, see changes list below.
>
>
> This will be very useful. I'll definitely be a user for future debugging.
> Thanks for working on this.
>
> For the series
>
> Reviewed-by: Andi Kleen <[email protected]>

Thanks, applied.

- Arnaldo