2015-04-13 22:15:20

by Arnaldo Carvalho de Melo

Subject: [GIT PULL 0/5] perf/core improvements and fixes

Hi Ingo,

Please consider pulling,

Best regards,

- Arnaldo

The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:

perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)

are available in the git repository at:

git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo

for you to fetch changes up to be8d5b1c6b468d10bd2928bbd1a5ca3fd2980402:

perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 17:59:41 -0300)

----------------------------------------------------------------
perf/core improvements and fixes:

New features:

- Analyze page allocator events also in 'perf kmem' (Namhyung Kim)

User visible fixes:

- Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)

- lazy_line probe fixes in 'perf probe' (He Kuang)

Infrastructure:

- Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)

Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>

----------------------------------------------------------------
He Kuang (3):
perf probe: Set retprobe flag when probe in address-based alternative mode
perf probe: Make --source avaiable when probe with lazy_line
perf probe: Fix segfault when probe with lazy_line to file

Namhyung Kim (2):
tracing, mm: Record pfn instead of pointer to struct page
perf kmem: Analyze page allocator events also

include/trace/events/filemap.h | 8 +-
include/trace/events/kmem.h | 42 +--
include/trace/events/vmscan.h | 8 +-
tools/perf/Documentation/perf-kmem.txt | 8 +-
tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++--
tools/perf/util/probe-event.c | 3 +-
tools/perf/util/probe-event.h | 2 +
tools/perf/util/probe-finder.c | 20 +-
8 files changed, 540 insertions(+), 51 deletions(-)


2015-04-13 22:15:28

by Arnaldo Carvalho de Melo

Subject: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page

From: Namhyung Kim <[email protected]>

The struct page is opaque for userspace tools, so it'd be better to save
the pfn in order to identify page frames.

The textual output of the $debugfs/tracing/trace file remains unchanged;
only the raw (binary) data format is changed. But thanks to libtraceevent,
userspace tools which deal with the raw data (like perf and trace-cmd)
can parse the format easily. So the impact on userspace will also be
minimal.
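
As an illustration (not part of the patch), here is a minimal sketch of
how a libtraceevent-based tool could read the new field by name; the
header name and the pevent_get_field_val() signature are assumptions
based on the libtraceevent of this era:

    #include <event-parse.h>

    /* Look up the "pfn" field by name in a raw record. Because the
     * lookup is by name, the binary layout change stays transparent
     * to callers. Returns (unsigned long long)-1 if the field is
     * missing (e.g. on older kernels that still record "page"). */
    static unsigned long long record_pfn(struct event_format *event,
                                         struct pevent_record *record)
    {
            unsigned long long pfn;

            if (pevent_get_field_val(NULL, event, "pfn", record, &pfn, 0))
                    return (unsigned long long)-1;
            return pfn;
    }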

Signed-off-by: Namhyung Kim <[email protected]>
Based-on-patch-by: Joonsoo Kim <[email protected]>
Acked-by: Ingo Molnar <[email protected]>
Acked-by: Steven Rostedt <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
include/trace/events/filemap.h | 8 ++++----
include/trace/events/kmem.h | 42 +++++++++++++++++++++---------------------
include/trace/events/vmscan.h | 8 ++++----
3 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/include/trace/events/filemap.h b/include/trace/events/filemap.h
index 0421f49a20f7..42febb6bc1d5 100644
--- a/include/trace/events/filemap.h
+++ b/include/trace/events/filemap.h
@@ -18,14 +18,14 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache,
TP_ARGS(page),

TP_STRUCT__entry(
- __field(struct page *, page)
+ __field(unsigned long, pfn)
__field(unsigned long, i_ino)
__field(unsigned long, index)
__field(dev_t, s_dev)
),

TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page_to_pfn(page);
__entry->i_ino = page->mapping->host->i_ino;
__entry->index = page->index;
if (page->mapping->host->i_sb)
@@ -37,8 +37,8 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache,
TP_printk("dev %d:%d ino %lx page=%p pfn=%lu ofs=%lu",
MAJOR(__entry->s_dev), MINOR(__entry->s_dev),
__entry->i_ino,
- __entry->page,
- page_to_pfn(__entry->page),
+ pfn_to_page(__entry->pfn),
+ __entry->pfn,
__entry->index << PAGE_SHIFT)
);

diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
index 4ad10baecd4d..81ea59812117 100644
--- a/include/trace/events/kmem.h
+++ b/include/trace/events/kmem.h
@@ -154,18 +154,18 @@ TRACE_EVENT(mm_page_free,
TP_ARGS(page, order),

TP_STRUCT__entry(
- __field( struct page *, page )
+ __field( unsigned long, pfn )
__field( unsigned int, order )
),

TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page_to_pfn(page);
__entry->order = order;
),

TP_printk("page=%p pfn=%lu order=%d",
- __entry->page,
- page_to_pfn(__entry->page),
+ pfn_to_page(__entry->pfn),
+ __entry->pfn,
__entry->order)
);

@@ -176,18 +176,18 @@ TRACE_EVENT(mm_page_free_batched,
TP_ARGS(page, cold),

TP_STRUCT__entry(
- __field( struct page *, page )
+ __field( unsigned long, pfn )
__field( int, cold )
),

TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page_to_pfn(page);
__entry->cold = cold;
),

TP_printk("page=%p pfn=%lu order=0 cold=%d",
- __entry->page,
- page_to_pfn(__entry->page),
+ pfn_to_page(__entry->pfn),
+ __entry->pfn,
__entry->cold)
);

@@ -199,22 +199,22 @@ TRACE_EVENT(mm_page_alloc,
TP_ARGS(page, order, gfp_flags, migratetype),

TP_STRUCT__entry(
- __field( struct page *, page )
+ __field( unsigned long, pfn )
__field( unsigned int, order )
__field( gfp_t, gfp_flags )
__field( int, migratetype )
),

TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page ? page_to_pfn(page) : -1UL;
__entry->order = order;
__entry->gfp_flags = gfp_flags;
__entry->migratetype = migratetype;
),

TP_printk("page=%p pfn=%lu order=%d migratetype=%d gfp_flags=%s",
- __entry->page,
- __entry->page ? page_to_pfn(__entry->page) : 0,
+ __entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
+ __entry->pfn != -1UL ? __entry->pfn : 0,
__entry->order,
__entry->migratetype,
show_gfp_flags(__entry->gfp_flags))
@@ -227,20 +227,20 @@ DECLARE_EVENT_CLASS(mm_page,
TP_ARGS(page, order, migratetype),

TP_STRUCT__entry(
- __field( struct page *, page )
+ __field( unsigned long, pfn )
__field( unsigned int, order )
__field( int, migratetype )
),

TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page ? page_to_pfn(page) : -1UL;
__entry->order = order;
__entry->migratetype = migratetype;
),

TP_printk("page=%p pfn=%lu order=%u migratetype=%d percpu_refill=%d",
- __entry->page,
- __entry->page ? page_to_pfn(__entry->page) : 0,
+ __entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
+ __entry->pfn != -1UL ? __entry->pfn : 0,
__entry->order,
__entry->migratetype,
__entry->order == 0)
@@ -260,7 +260,7 @@ DEFINE_EVENT_PRINT(mm_page, mm_page_pcpu_drain,
TP_ARGS(page, order, migratetype),

TP_printk("page=%p pfn=%lu order=%d migratetype=%d",
- __entry->page, page_to_pfn(__entry->page),
+ pfn_to_page(__entry->pfn), __entry->pfn,
__entry->order, __entry->migratetype)
);

@@ -275,7 +275,7 @@ TRACE_EVENT(mm_page_alloc_extfrag,
alloc_migratetype, fallback_migratetype),

TP_STRUCT__entry(
- __field( struct page *, page )
+ __field( unsigned long, pfn )
__field( int, alloc_order )
__field( int, fallback_order )
__field( int, alloc_migratetype )
@@ -284,7 +284,7 @@ TRACE_EVENT(mm_page_alloc_extfrag,
),

TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page_to_pfn(page);
__entry->alloc_order = alloc_order;
__entry->fallback_order = fallback_order;
__entry->alloc_migratetype = alloc_migratetype;
@@ -294,8 +294,8 @@ TRACE_EVENT(mm_page_alloc_extfrag,
),

TP_printk("page=%p pfn=%lu alloc_order=%d fallback_order=%d pageblock_order=%d alloc_migratetype=%d fallback_migratetype=%d fragmenting=%d change_ownership=%d",
- __entry->page,
- page_to_pfn(__entry->page),
+ pfn_to_page(__entry->pfn),
+ __entry->pfn,
__entry->alloc_order,
__entry->fallback_order,
pageblock_order,
diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
index 69590b6ffc09..f66476b96264 100644
--- a/include/trace/events/vmscan.h
+++ b/include/trace/events/vmscan.h
@@ -336,18 +336,18 @@ TRACE_EVENT(mm_vmscan_writepage,
TP_ARGS(page, reclaim_flags),

TP_STRUCT__entry(
- __field(struct page *, page)
+ __field(unsigned long, pfn)
__field(int, reclaim_flags)
),

TP_fast_assign(
- __entry->page = page;
+ __entry->pfn = page_to_pfn(page);
__entry->reclaim_flags = reclaim_flags;
),

TP_printk("page=%p pfn=%lu flags=%s",
- __entry->page,
- page_to_pfn(__entry->page),
+ pfn_to_page(__entry->pfn),
+ __entry->pfn,
show_reclaim_flags(__entry->reclaim_flags))
);

--
1.9.3

2015-04-13 22:15:33

by Arnaldo Carvalho de Melo

Subject: [PATCH 2/5] perf kmem: Analyze page allocator events also

From: Namhyung Kim <[email protected]>

The 'perf kmem' command records and analyzes kernel memory allocation only
for SLAB objects. This patch implements a simple page allocator analyzer
using the kmem:mm_page_alloc and kmem:mm_page_free events.

It adds two new options, --slab and --page. The --slab option analyzes
SLAB allocator events, which is what 'perf kmem' currently does.

The new --page option enables page allocator events and analyzes kernel
memory usage at page granularity. Currently, only the 'stat --alloc'
subcommand is implemented.

If neither --slab nor --page is specified, --slab is implied.

First run 'perf kmem record' to generate a suitable perf.data file:

# perf kmem record --page sleep 5

Then run 'perf kmem stat' to postprocess the perf.data file:

# perf kmem stat --page --alloc --line 10

-------------------------------------------------------------------------------
PFN | Total alloc (KB) | Hits | Order | Mig.type | GFP flags
-------------------------------------------------------------------------------
4045014 | 16 | 1 | 2 | RECLAIM | 00285250
4143980 | 16 | 1 | 2 | RECLAIM | 00285250
3938658 | 16 | 1 | 2 | RECLAIM | 00285250
4045400 | 16 | 1 | 2 | RECLAIM | 00285250
3568708 | 16 | 1 | 2 | RECLAIM | 00285250
3729824 | 16 | 1 | 2 | RECLAIM | 00285250
3657210 | 16 | 1 | 2 | RECLAIM | 00285250
4120750 | 16 | 1 | 2 | RECLAIM | 00285250
3678850 | 16 | 1 | 2 | RECLAIM | 00285250
3693874 | 16 | 1 | 2 | RECLAIM | 00285250
... | ... | ... | ... | ... | ...
-------------------------------------------------------------------------------

SUMMARY (page allocator)
========================
Total allocation requests : 44,260 [ 177,256 KB ]
Total free requests : 117 [ 468 KB ]

Total alloc+freed requests : 49 [ 196 KB ]
Total alloc-only requests : 44,211 [ 177,060 KB ]
Total free-only requests : 68 [ 272 KB ]

Total allocation failures : 0 [ 0 KB ]

Order Unmovable Reclaimable Movable Reserved CMA/Isolated
----- ------------ ------------ ------------ ------------ ------------
0 32 . 44,210 . .
1 . . . . .
2 . 18 . . .
3 . . . . .
4 . . . . .
5 . . . . .
6 . . . . .
7 . . . . .
8 . . . . .
9 . . . . .
10 . . . . .

Signed-off-by: Namhyung Kim <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Cc: David Ahern <[email protected]>
Cc: Jiri Olsa <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/Documentation/perf-kmem.txt | 8 +-
tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++--
2 files changed, 491 insertions(+), 17 deletions(-)

diff --git a/tools/perf/Documentation/perf-kmem.txt b/tools/perf/Documentation/perf-kmem.txt
index 150253cc3c97..23219c65c16f 100644
--- a/tools/perf/Documentation/perf-kmem.txt
+++ b/tools/perf/Documentation/perf-kmem.txt
@@ -3,7 +3,7 @@ perf-kmem(1)

NAME
----
-perf-kmem - Tool to trace/measure kernel memory(slab) properties
+perf-kmem - Tool to trace/measure kernel memory properties

SYNOPSIS
--------
@@ -46,6 +46,12 @@ OPTIONS
--raw-ip::
Print raw ip instead of symbol

+--slab::
+ Analyze SLAB allocator events.
+
+--page::
+ Analyze page allocator events
+
SEE ALSO
--------
linkperf:perf-record[1]
diff --git a/tools/perf/builtin-kmem.c b/tools/perf/builtin-kmem.c
index 4ebf65c79434..63ea01349b6e 100644
--- a/tools/perf/builtin-kmem.c
+++ b/tools/perf/builtin-kmem.c
@@ -22,6 +22,11 @@
#include <linux/string.h>
#include <locale.h>

+static int kmem_slab;
+static int kmem_page;
+
+static long kmem_page_size;
+
struct alloc_stat;
typedef int (*sort_fn_t)(struct alloc_stat *, struct alloc_stat *);

@@ -226,6 +231,244 @@ static int perf_evsel__process_free_event(struct perf_evsel *evsel,
return 0;
}

+static u64 total_page_alloc_bytes;
+static u64 total_page_free_bytes;
+static u64 total_page_nomatch_bytes;
+static u64 total_page_fail_bytes;
+static unsigned long nr_page_allocs;
+static unsigned long nr_page_frees;
+static unsigned long nr_page_fails;
+static unsigned long nr_page_nomatch;
+
+static bool use_pfn;
+
+#define MAX_MIGRATE_TYPES 6
+#define MAX_PAGE_ORDER 11
+
+static int order_stats[MAX_PAGE_ORDER][MAX_MIGRATE_TYPES];
+
+struct page_stat {
+ struct rb_node node;
+ u64 page;
+ int order;
+ unsigned gfp_flags;
+ unsigned migrate_type;
+ u64 alloc_bytes;
+ u64 free_bytes;
+ int nr_alloc;
+ int nr_free;
+};
+
+static struct rb_root page_tree;
+static struct rb_root page_alloc_tree;
+static struct rb_root page_alloc_sorted;
+
+static struct page_stat *search_page(unsigned long page, bool create)
+{
+ struct rb_node **node = &page_tree.rb_node;
+ struct rb_node *parent = NULL;
+ struct page_stat *data;
+
+ while (*node) {
+ s64 cmp;
+
+ parent = *node;
+ data = rb_entry(*node, struct page_stat, node);
+
+ cmp = data->page - page;
+ if (cmp < 0)
+ node = &parent->rb_left;
+ else if (cmp > 0)
+ node = &parent->rb_right;
+ else
+ return data;
+ }
+
+ if (!create)
+ return NULL;
+
+ data = zalloc(sizeof(*data));
+ if (data != NULL) {
+ data->page = page;
+
+ rb_link_node(&data->node, parent, node);
+ rb_insert_color(&data->node, &page_tree);
+ }
+
+ return data;
+}
+
+static int page_stat_cmp(struct page_stat *a, struct page_stat *b)
+{
+ if (a->page > b->page)
+ return -1;
+ if (a->page < b->page)
+ return 1;
+ if (a->order > b->order)
+ return -1;
+ if (a->order < b->order)
+ return 1;
+ if (a->migrate_type > b->migrate_type)
+ return -1;
+ if (a->migrate_type < b->migrate_type)
+ return 1;
+ if (a->gfp_flags > b->gfp_flags)
+ return -1;
+ if (a->gfp_flags < b->gfp_flags)
+ return 1;
+ return 0;
+}
+
+static struct page_stat *search_page_alloc_stat(struct page_stat *stat, bool create)
+{
+ struct rb_node **node = &page_alloc_tree.rb_node;
+ struct rb_node *parent = NULL;
+ struct page_stat *data;
+
+ while (*node) {
+ s64 cmp;
+
+ parent = *node;
+ data = rb_entry(*node, struct page_stat, node);
+
+ cmp = page_stat_cmp(data, stat);
+ if (cmp < 0)
+ node = &parent->rb_left;
+ else if (cmp > 0)
+ node = &parent->rb_right;
+ else
+ return data;
+ }
+
+ if (!create)
+ return NULL;
+
+ data = zalloc(sizeof(*data));
+ if (data != NULL) {
+ data->page = stat->page;
+ data->order = stat->order;
+ data->gfp_flags = stat->gfp_flags;
+ data->migrate_type = stat->migrate_type;
+
+ rb_link_node(&data->node, parent, node);
+ rb_insert_color(&data->node, &page_alloc_tree);
+ }
+
+ return data;
+}
+
+static bool valid_page(u64 pfn_or_page)
+{
+ if (use_pfn && pfn_or_page == -1UL)
+ return false;
+ if (!use_pfn && pfn_or_page == 0)
+ return false;
+ return true;
+}
+
+static int perf_evsel__process_page_alloc_event(struct perf_evsel *evsel,
+ struct perf_sample *sample)
+{
+ u64 page;
+ unsigned int order = perf_evsel__intval(evsel, sample, "order");
+ unsigned int gfp_flags = perf_evsel__intval(evsel, sample, "gfp_flags");
+ unsigned int migrate_type = perf_evsel__intval(evsel, sample,
+ "migratetype");
+ u64 bytes = kmem_page_size << order;
+ struct page_stat *stat;
+ struct page_stat this = {
+ .order = order,
+ .gfp_flags = gfp_flags,
+ .migrate_type = migrate_type,
+ };
+
+ if (use_pfn)
+ page = perf_evsel__intval(evsel, sample, "pfn");
+ else
+ page = perf_evsel__intval(evsel, sample, "page");
+
+ nr_page_allocs++;
+ total_page_alloc_bytes += bytes;
+
+ if (!valid_page(page)) {
+ nr_page_fails++;
+ total_page_fail_bytes += bytes;
+
+ return 0;
+ }
+
+ /*
+ * This is to find the current page (with correct gfp flags and
+ * migrate type) at free event.
+ */
+ stat = search_page(page, true);
+ if (stat == NULL)
+ return -ENOMEM;
+
+ stat->order = order;
+ stat->gfp_flags = gfp_flags;
+ stat->migrate_type = migrate_type;
+
+ this.page = page;
+ stat = search_page_alloc_stat(&this, true);
+ if (stat == NULL)
+ return -ENOMEM;
+
+ stat->nr_alloc++;
+ stat->alloc_bytes += bytes;
+
+ order_stats[order][migrate_type]++;
+
+ return 0;
+}
+
+static int perf_evsel__process_page_free_event(struct perf_evsel *evsel,
+ struct perf_sample *sample)
+{
+ u64 page;
+ unsigned int order = perf_evsel__intval(evsel, sample, "order");
+ u64 bytes = kmem_page_size << order;
+ struct page_stat *stat;
+ struct page_stat this = {
+ .order = order,
+ };
+
+ if (use_pfn)
+ page = perf_evsel__intval(evsel, sample, "pfn");
+ else
+ page = perf_evsel__intval(evsel, sample, "page");
+
+ nr_page_frees++;
+ total_page_free_bytes += bytes;
+
+ stat = search_page(page, false);
+ if (stat == NULL) {
+ pr_debug2("missing free at page %"PRIx64" (order: %d)\n",
+ page, order);
+
+ nr_page_nomatch++;
+ total_page_nomatch_bytes += bytes;
+
+ return 0;
+ }
+
+ this.page = page;
+ this.gfp_flags = stat->gfp_flags;
+ this.migrate_type = stat->migrate_type;
+
+ rb_erase(&stat->node, &page_tree);
+ free(stat);
+
+ stat = search_page_alloc_stat(&this, false);
+ if (stat == NULL)
+ return -ENOENT;
+
+ stat->nr_free++;
+ stat->free_bytes += bytes;
+
+ return 0;
+}
+
typedef int (*tracepoint_handler)(struct perf_evsel *evsel,
struct perf_sample *sample);

@@ -270,8 +513,9 @@ static double fragmentation(unsigned long n_req, unsigned long n_alloc)
return 100.0 - (100.0 * n_req / n_alloc);
}

-static void __print_result(struct rb_root *root, struct perf_session *session,
- int n_lines, int is_caller)
+static void __print_slab_result(struct rb_root *root,
+ struct perf_session *session,
+ int n_lines, int is_caller)
{
struct rb_node *next;
struct machine *machine = &session->machines.host;
@@ -323,9 +567,56 @@ static void __print_result(struct rb_root *root, struct perf_session *session,
printf("%.105s\n", graph_dotted_line);
}

-static void print_summary(void)
+static const char * const migrate_type_str[] = {
+ "UNMOVABL",
+ "RECLAIM",
+ "MOVABLE",
+ "RESERVED",
+ "CMA/ISLT",
+ "UNKNOWN",
+};
+
+static void __print_page_result(struct rb_root *root,
+ struct perf_session *session __maybe_unused,
+ int n_lines)
+{
+ struct rb_node *next = rb_first(root);
+ const char *format;
+
+ printf("\n%.80s\n", graph_dotted_line);
+ printf(" %-16s | Total alloc (KB) | Hits | Order | Mig.type | GFP flags\n",
+ use_pfn ? "PFN" : "Page");
+ printf("%.80s\n", graph_dotted_line);
+
+ if (use_pfn)
+ format = " %16llu | %'16llu | %'9d | %5d | %8s | %08lx\n";
+ else
+ format = " %016llx | %'16llu | %'9d | %5d | %8s | %08lx\n";
+
+ while (next && n_lines--) {
+ struct page_stat *data;
+
+ data = rb_entry(next, struct page_stat, node);
+
+ printf(format, (unsigned long long)data->page,
+ (unsigned long long)data->alloc_bytes / 1024,
+ data->nr_alloc, data->order,
+ migrate_type_str[data->migrate_type],
+ (unsigned long)data->gfp_flags);
+
+ next = rb_next(next);
+ }
+
+ if (n_lines == -1)
+ printf(" ... | ... | ... | ... | ... | ... \n");
+
+ printf("%.80s\n", graph_dotted_line);
+}
+
+static void print_slab_summary(void)
{
- printf("\nSUMMARY\n=======\n");
+ printf("\nSUMMARY (SLAB allocator)");
+ printf("\n========================\n");
printf("Total bytes requested: %'lu\n", total_requested);
printf("Total bytes allocated: %'lu\n", total_allocated);
printf("Total bytes wasted on internal fragmentation: %'lu\n",
@@ -335,13 +626,73 @@ static void print_summary(void)
printf("Cross CPU allocations: %'lu/%'lu\n", nr_cross_allocs, nr_allocs);
}

-static void print_result(struct perf_session *session)
+static void print_page_summary(void)
+{
+ int o, m;
+ u64 nr_alloc_freed = nr_page_frees - nr_page_nomatch;
+ u64 total_alloc_freed_bytes = total_page_free_bytes - total_page_nomatch_bytes;
+
+ printf("\nSUMMARY (page allocator)");
+ printf("\n========================\n");
+ printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total allocation requests",
+ nr_page_allocs, total_page_alloc_bytes / 1024);
+ printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total free requests",
+ nr_page_frees, total_page_free_bytes / 1024);
+ printf("\n");
+
+ printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total alloc+freed requests",
+ nr_alloc_freed, (total_alloc_freed_bytes) / 1024);
+ printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total alloc-only requests",
+ nr_page_allocs - nr_alloc_freed,
+ (total_page_alloc_bytes - total_alloc_freed_bytes) / 1024);
+ printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total free-only requests",
+ nr_page_nomatch, total_page_nomatch_bytes / 1024);
+ printf("\n");
+
+ printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total allocation failures",
+ nr_page_fails, total_page_fail_bytes / 1024);
+ printf("\n");
+
+ printf("%5s %12s %12s %12s %12s %12s\n", "Order", "Unmovable",
+ "Reclaimable", "Movable", "Reserved", "CMA/Isolated");
+ printf("%.5s %.12s %.12s %.12s %.12s %.12s\n", graph_dotted_line,
+ graph_dotted_line, graph_dotted_line, graph_dotted_line,
+ graph_dotted_line, graph_dotted_line);
+
+ for (o = 0; o < MAX_PAGE_ORDER; o++) {
+ printf("%5d", o);
+ for (m = 0; m < MAX_MIGRATE_TYPES - 1; m++) {
+ if (order_stats[o][m])
+ printf(" %'12d", order_stats[o][m]);
+ else
+ printf(" %12c", '.');
+ }
+ printf("\n");
+ }
+}
+
+static void print_slab_result(struct perf_session *session)
{
if (caller_flag)
- __print_result(&root_caller_sorted, session, caller_lines, 1);
+ __print_slab_result(&root_caller_sorted, session, caller_lines, 1);
+ if (alloc_flag)
+ __print_slab_result(&root_alloc_sorted, session, alloc_lines, 0);
+ print_slab_summary();
+}
+
+static void print_page_result(struct perf_session *session)
+{
if (alloc_flag)
- __print_result(&root_alloc_sorted, session, alloc_lines, 0);
- print_summary();
+ __print_page_result(&page_alloc_sorted, session, alloc_lines);
+ print_page_summary();
+}
+
+static void print_result(struct perf_session *session)
+{
+ if (kmem_slab)
+ print_slab_result(session);
+ if (kmem_page)
+ print_page_result(session);
}

struct sort_dimension {
@@ -353,8 +704,8 @@ struct sort_dimension {
static LIST_HEAD(caller_sort);
static LIST_HEAD(alloc_sort);

-static void sort_insert(struct rb_root *root, struct alloc_stat *data,
- struct list_head *sort_list)
+static void sort_slab_insert(struct rb_root *root, struct alloc_stat *data,
+ struct list_head *sort_list)
{
struct rb_node **new = &(root->rb_node);
struct rb_node *parent = NULL;
@@ -383,8 +734,8 @@ static void sort_insert(struct rb_root *root, struct alloc_stat *data,
rb_insert_color(&data->node, root);
}

-static void __sort_result(struct rb_root *root, struct rb_root *root_sorted,
- struct list_head *sort_list)
+static void __sort_slab_result(struct rb_root *root, struct rb_root *root_sorted,
+ struct list_head *sort_list)
{
struct rb_node *node;
struct alloc_stat *data;
@@ -396,26 +747,79 @@ static void __sort_result(struct rb_root *root, struct rb_root *root_sorted,

rb_erase(node, root);
data = rb_entry(node, struct alloc_stat, node);
- sort_insert(root_sorted, data, sort_list);
+ sort_slab_insert(root_sorted, data, sort_list);
+ }
+}
+
+static void sort_page_insert(struct rb_root *root, struct page_stat *data)
+{
+ struct rb_node **new = &root->rb_node;
+ struct rb_node *parent = NULL;
+
+ while (*new) {
+ struct page_stat *this;
+ int cmp = 0;
+
+ this = rb_entry(*new, struct page_stat, node);
+ parent = *new;
+
+ /* TODO: support more sort key */
+ cmp = data->alloc_bytes - this->alloc_bytes;
+
+ if (cmp > 0)
+ new = &parent->rb_left;
+ else
+ new = &parent->rb_right;
+ }
+
+ rb_link_node(&data->node, parent, new);
+ rb_insert_color(&data->node, root);
+}
+
+static void __sort_page_result(struct rb_root *root, struct rb_root *root_sorted)
+{
+ struct rb_node *node;
+ struct page_stat *data;
+
+ for (;;) {
+ node = rb_first(root);
+ if (!node)
+ break;
+
+ rb_erase(node, root);
+ data = rb_entry(node, struct page_stat, node);
+ sort_page_insert(root_sorted, data);
}
}

static void sort_result(void)
{
- __sort_result(&root_alloc_stat, &root_alloc_sorted, &alloc_sort);
- __sort_result(&root_caller_stat, &root_caller_sorted, &caller_sort);
+ if (kmem_slab) {
+ __sort_slab_result(&root_alloc_stat, &root_alloc_sorted,
+ &alloc_sort);
+ __sort_slab_result(&root_caller_stat, &root_caller_sorted,
+ &caller_sort);
+ }
+ if (kmem_page) {
+ __sort_page_result(&page_alloc_tree, &page_alloc_sorted);
+ }
}

static int __cmd_kmem(struct perf_session *session)
{
int err = -EINVAL;
+ struct perf_evsel *evsel;
const struct perf_evsel_str_handler kmem_tracepoints[] = {
+ /* slab allocator */
{ "kmem:kmalloc", perf_evsel__process_alloc_event, },
{ "kmem:kmem_cache_alloc", perf_evsel__process_alloc_event, },
{ "kmem:kmalloc_node", perf_evsel__process_alloc_node_event, },
{ "kmem:kmem_cache_alloc_node", perf_evsel__process_alloc_node_event, },
{ "kmem:kfree", perf_evsel__process_free_event, },
{ "kmem:kmem_cache_free", perf_evsel__process_free_event, },
+ /* page allocator */
+ { "kmem:mm_page_alloc", perf_evsel__process_page_alloc_event, },
+ { "kmem:mm_page_free", perf_evsel__process_page_free_event, },
};

if (!perf_session__has_traces(session, "kmem record"))
@@ -426,10 +830,20 @@ static int __cmd_kmem(struct perf_session *session)
goto out;
}

+ evlist__for_each(session->evlist, evsel) {
+ if (!strcmp(perf_evsel__name(evsel), "kmem:mm_page_alloc") &&
+ perf_evsel__field(evsel, "pfn")) {
+ use_pfn = true;
+ break;
+ }
+ }
+
setup_pager();
err = perf_session__process_events(session);
- if (err != 0)
+ if (err != 0) {
+ pr_err("error during process events: %d\n", err);
goto out;
+ }
sort_result();
print_result(session);
out:
@@ -612,6 +1026,22 @@ static int parse_alloc_opt(const struct option *opt __maybe_unused,
return 0;
}

+static int parse_slab_opt(const struct option *opt __maybe_unused,
+ const char *arg __maybe_unused,
+ int unset __maybe_unused)
+{
+ kmem_slab = (kmem_page + 1);
+ return 0;
+}
+
+static int parse_page_opt(const struct option *opt __maybe_unused,
+ const char *arg __maybe_unused,
+ int unset __maybe_unused)
+{
+ kmem_page = (kmem_slab + 1);
+ return 0;
+}
+
static int parse_line_opt(const struct option *opt __maybe_unused,
const char *arg, int unset __maybe_unused)
{
@@ -634,6 +1064,8 @@ static int __cmd_record(int argc, const char **argv)
{
const char * const record_args[] = {
"record", "-a", "-R", "-c", "1",
+ };
+ const char * const slab_events[] = {
"-e", "kmem:kmalloc",
"-e", "kmem:kmalloc_node",
"-e", "kmem:kfree",
@@ -641,10 +1073,19 @@ static int __cmd_record(int argc, const char **argv)
"-e", "kmem:kmem_cache_alloc_node",
"-e", "kmem:kmem_cache_free",
};
+ const char * const page_events[] = {
+ "-e", "kmem:mm_page_alloc",
+ "-e", "kmem:mm_page_free",
+ };
unsigned int rec_argc, i, j;
const char **rec_argv;

rec_argc = ARRAY_SIZE(record_args) + argc - 1;
+ if (kmem_slab)
+ rec_argc += ARRAY_SIZE(slab_events);
+ if (kmem_page)
+ rec_argc += ARRAY_SIZE(page_events);
+
rec_argv = calloc(rec_argc + 1, sizeof(char *));

if (rec_argv == NULL)
@@ -653,6 +1094,15 @@ static int __cmd_record(int argc, const char **argv)
for (i = 0; i < ARRAY_SIZE(record_args); i++)
rec_argv[i] = strdup(record_args[i]);

+ if (kmem_slab) {
+ for (j = 0; j < ARRAY_SIZE(slab_events); j++, i++)
+ rec_argv[i] = strdup(slab_events[j]);
+ }
+ if (kmem_page) {
+ for (j = 0; j < ARRAY_SIZE(page_events); j++, i++)
+ rec_argv[i] = strdup(page_events[j]);
+ }
+
for (j = 1; j < (unsigned int)argc; j++, i++)
rec_argv[i] = argv[j];

@@ -679,6 +1129,10 @@ int cmd_kmem(int argc, const char **argv, const char *prefix __maybe_unused)
OPT_CALLBACK('l', "line", NULL, "num", "show n lines", parse_line_opt),
OPT_BOOLEAN(0, "raw-ip", &raw_ip, "show raw ip instead of symbol"),
OPT_BOOLEAN('f', "force", &file.force, "don't complain, do it"),
+ OPT_CALLBACK_NOOPT(0, "slab", NULL, NULL, "Analyze slab allocator",
+ parse_slab_opt),
+ OPT_CALLBACK_NOOPT(0, "page", NULL, NULL, "Analyze page allocator",
+ parse_page_opt),
OPT_END()
};
const char *const kmem_subcommands[] = { "record", "stat", NULL };
@@ -695,6 +1149,9 @@ int cmd_kmem(int argc, const char **argv, const char *prefix __maybe_unused)
if (!argc)
usage_with_options(kmem_usage, kmem_options);

+ if (kmem_slab == 0 && kmem_page == 0)
+ kmem_slab = 1; /* for backward compatibility */
+
if (!strncmp(argv[0], "rec", 3)) {
symbol__init(NULL);
return __cmd_record(argc, argv);
@@ -706,6 +1163,17 @@ int cmd_kmem(int argc, const char **argv, const char *prefix __maybe_unused)
if (session == NULL)
return -1;

+ if (kmem_page) {
+ struct perf_evsel *evsel = perf_evlist__first(session->evlist);
+
+ if (evsel == NULL || evsel->tp_format == NULL) {
+ pr_err("invalid event found.. aborting\n");
+ return -1;
+ }
+
+ kmem_page_size = pevent_get_page_size(evsel->tp_format->pevent);
+ }
+
symbol__init(&session->header.env);

if (!strcmp(argv[0], "stat")) {
--
1.9.3

2015-04-13 22:16:33

by Arnaldo Carvalho de Melo

Subject: [PATCH 3/5] perf probe: Set retprobe flag when probe in address-based alternative mode

From: He Kuang <[email protected]>

When 'perf probe' searches a debuginfo file and fails, it retries with
an alternative, in the function get_alternative_probe_event():

memcpy(tmp, &pev->point, sizeof(*tmp));
memset(&pev->point, 0, sizeof(pev->point));

In doing so, it drops the retprobe flag and forgets to set it back in
find_alternative_probe_point(), so a '%return' probe is silently installed
as an ordinary entry probe.
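
As an aside, the underlying pattern is easy to reproduce outside perf.
A self-contained toy (invented names, not perf code) showing how the
memcpy/memset sequence above loses a field unless it is restored
explicitly:

    #include <stdio.h>
    #include <string.h>

    struct point { int offset; int retprobe; };

    int main(void)
    {
            struct point pev = { .offset = 42, .retprobe = 1 };
            struct point tmp;

            memcpy(&tmp, &pev, sizeof(tmp)); /* save the original point */
            memset(&pev, 0, sizeof(pev));    /* clear it for the retry  */

            pev.offset = tmp.offset;         /* retry restores the offset */
            printf("retprobe=%d (lost)\n", pev.retprobe);

            pev.retprobe = tmp.retprobe;     /* ...and must restore the
                                              * flag too, as the fix does */
            printf("retprobe=%d (restored)\n", pev.retprobe);
            return 0;
    }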

This can be reproduced as follows:

$ perf probe -v -k vmlinux --add='sys_write%return'
...
Added new event:
Writing event: p:probe/sys_write _stext+1584952
probe:sys_write (on sys_write%return)

$ cat /sys/kernel/debug/tracing/kprobe_events
p:probe/sys_write _stext+1584952

After this patch:

$ perf probe -v -k vmlinux --add='sys_write%return'
Added new event:
Writing event: r:probe/sys_write SyS_write+0
probe:sys_write (on sys_write%return)

$ cat /sys/kernel/debug/tracing/kprobe_events
r:probe/sys_write SyS_write

Signed-off-by: He Kuang <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/probe-event.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index 30545ce2c712..5483d98236d3 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -332,6 +332,7 @@ static int find_alternative_probe_point(struct debuginfo *dinfo,
else {
result->offset += pp->offset;
result->line += pp->line;
+ result->retprobe = pp->retprobe;
ret = 0;
}

--
1.9.3

2015-04-13 22:15:40

by Arnaldo Carvalho de Melo

Subject: [PATCH 4/5] perf probe: Make --source avaiable when probe with lazy_line

From: He Kuang <[email protected]>

Use get_real_path() to make the --source option work when probing with a
lazy_line pattern.

Before this patch:

$ perf probe -s ./kernel_src/ -k ./vmlinux --add='fs/super.c;s->s_count=1;'
Failed to open fs/super.c: No such file or directory
Error: Failed to add events.

After this patch:

$ perf probe -s ./kernel_src/ -k ./vmlinux --add='fs/super.c;s->s_count=1;'
Added new events:
probe:_stext (on @fs/super.c)
probe:_stext_1 (on @fs/super.c)
...

Signed-off-by: He Kuang <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/probe-event.c | 2 +-
tools/perf/util/probe-event.h | 2 ++
tools/perf/util/probe-finder.c | 18 +++++++++++++++---
3 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
index 5483d98236d3..35ee51a8724f 100644
--- a/tools/perf/util/probe-event.c
+++ b/tools/perf/util/probe-event.c
@@ -661,7 +661,7 @@ static int try_to_find_probe_trace_events(struct perf_probe_event *pev,
* a newly allocated path on success.
* Return 0 if file was found and readable, -errno otherwise.
*/
-static int get_real_path(const char *raw_path, const char *comp_dir,
+int get_real_path(const char *raw_path, const char *comp_dir,
char **new_path)
{
const char *prefix = symbol_conf.source_prefix;
diff --git a/tools/perf/util/probe-event.h b/tools/perf/util/probe-event.h
index d6b783447be9..21809ea9b2b4 100644
--- a/tools/perf/util/probe-event.h
+++ b/tools/perf/util/probe-event.h
@@ -135,6 +135,8 @@ extern int show_available_vars(struct perf_probe_event *pevs, int npevs,
struct strfilter *filter, bool externs);
extern int show_available_funcs(const char *module, struct strfilter *filter,
bool user);
+extern int get_real_path(const char *raw_path, const char *comp_dir,
+ char **new_path);

/* Maximum index number of event-name postfix */
#define MAX_EVENT_INDEX 1024
diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
index 7831e2d93949..431c12d299a2 100644
--- a/tools/perf/util/probe-finder.c
+++ b/tools/perf/util/probe-finder.c
@@ -791,11 +791,20 @@ static int find_lazy_match_lines(struct intlist *list,
ssize_t len;
int count = 0, linenum = 1;
char sbuf[STRERR_BUFSIZE];
+ char *realname = NULL;
+ int ret;

- fp = fopen(fname, "r");
+ ret = get_real_path(fname, NULL, &realname);
+ if (ret < 0) {
+ pr_warning("Failed to find source file %s.\n", fname);
+ return ret;
+ }
+
+ fp = fopen(realname, "r");
if (!fp) {
- pr_warning("Failed to open %s: %s\n", fname,
+ pr_warning("Failed to open %s: %s\n", realname,
strerror_r(errno, sbuf, sizeof(sbuf)));
+ free(realname);
return -errno;
}

@@ -817,7 +826,10 @@ static int find_lazy_match_lines(struct intlist *list,
fclose(fp);

if (count == 0)
- pr_debug("No matched lines found in %s.\n", fname);
+ pr_debug("No matched lines found in %s.\n", realname);
+
+ free(realname);
+
return count;
}

--
1.9.3

2015-04-13 22:15:42

by Arnaldo Carvalho de Melo

Subject: [PATCH 5/5] perf probe: Fix segfault when probe with lazy_line to file

From: He Kuang <[email protected]>

The first argument passed to find_probe_point_lazy() should be the CU die,
which is passed on to die_walk_lines() when lazy_line matches. Currently,
when probing with a lazy_line pattern against a file without a function
name, a NULL pointer is passed instead and causes a segmentation fault.

This can be reproduced as follows:

$ perf probe -k vmlinux --add='fs/super.c;s->s_count=1;'
[ 1958.984658] perf[1020]: segfault at 10 ip 00007fc6e10d8c71 sp
00007ffcbfaaf900 error 4 in libdw-0.161.so[7fc6e10ce000+34000]
Segmentation fault

After this patch:

$ perf probe -k vmlinux --add='fs/super.c;s->s_count=1;'
Added new event:
probe:_stext (on @fs/super.c)

You can now use it in all perf tools, such as:
perf record -e probe:_stext -aR sleep 1

Signed-off-by: He Kuang <[email protected]>
Tested-by: Arnaldo Carvalho de Melo <[email protected]>
Acked-by: Masami Hiramatsu <[email protected]>
Cc: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Wang Nan <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
---
tools/perf/util/probe-finder.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
index 431c12d299a2..e91101b8f60f 100644
--- a/tools/perf/util/probe-finder.c
+++ b/tools/perf/util/probe-finder.c
@@ -1067,7 +1067,7 @@ static int debuginfo__find_probes(struct debuginfo *dbg,
if (pp->function)
ret = find_probe_point_by_func(pf);
else if (pp->lazy_line)
- ret = find_probe_point_lazy(NULL, pf);
+ ret = find_probe_point_lazy(&pf->cu_die, pf);
else {
pf->lno = pp->line;
ret = find_probe_point_by_line(pf);
--
1.9.3

Subject: Re: [GIT PULL 0/5] perf/core improvements and fixes

Hi, Arnaldo,

> perf probe: Make --source avaiable when probe with lazy_line

No, could you pull Naohiro's patch?
I'd like to move get_real_path to probe_finder.c

Thank you,

(2015/04/14 7:14), Arnaldo Carvalho de Melo wrote:
> Hi Ingo,
>
> Please consider pulling,
>
> Best regards,
>
> - Arnaldo
>
> The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:
>
> perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)
>
> are available in the git repository at:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo
>
> for you to fetch changes up to be8d5b1c6b468d10bd2928bbd1a5ca3fd2980402:
>
> perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 17:59:41 -0300)
>
> ----------------------------------------------------------------
> perf/core improvements and fixes:
>
> New features:
>
> - Analyze page allocator events also in 'perf kmem' (Namhyung Kim)
>
> User visible fixes:
>
> - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)
>
> - lazy_line probe fixes in 'perf probe' (He Kuang)
>
> Infrastructure:
>
> - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)
>
> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
>
> ----------------------------------------------------------------
> He Kuang (3):
> perf probe: Set retprobe flag when probe in address-based alternative mode
> perf probe: Make --source avaiable when probe with lazy_line
> perf probe: Fix segfault when probe with lazy_line to file
>
> Namhyung Kim (2):
> tracing, mm: Record pfn instead of pointer to struct page
> perf kmem: Analyze page allocator events also
>
> include/trace/events/filemap.h | 8 +-
> include/trace/events/kmem.h | 42 +--
> include/trace/events/vmscan.h | 8 +-
> tools/perf/Documentation/perf-kmem.txt | 8 +-
> tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++--
> tools/perf/util/probe-event.c | 3 +-
> tools/perf/util/probe-event.h | 2 +
> tools/perf/util/probe-finder.c | 20 +-
> 8 files changed, 540 insertions(+), 51 deletions(-)


--
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: [email protected]

2015-04-13 23:09:34

by Arnaldo Carvalho de Melo

Subject: Re: [GIT PULL 0/5] perf/core improvements and fixes

On Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu wrote:
> Hi, Arnaldo,
>
> > perf probe: Make --source avaiable when probe with lazy_line
>
> No, could you pull Naohiro's patch?
> I'd like to move get_real_path to probe_finder.c

Oops, yeah, you asked for that... Ingo, please ignore this pull request
for now, thanks,

- Arnaldo

> Thank you,
>
> (2015/04/14 7:14), Arnaldo Carvalho de Melo wrote:
> > Hi Ingo,
> >
> > Please consider pulling,
> >
> > Best regards,
> >
> > - Arnaldo
> >
> > The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:
> >
> > perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)
> >
> > are available in the git repository at:
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo
> >
> > for you to fetch changes up to be8d5b1c6b468d10bd2928bbd1a5ca3fd2980402:
> >
> > perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 17:59:41 -0300)
> >
> > ----------------------------------------------------------------
> > perf/core improvements and fixes:
> >
> > New features:
> >
> > - Analyze page allocator events also in 'perf kmem' (Namhyung Kim)
> >
> > User visible fixes:
> >
> > - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)
> >
> > - lazy_line probe fixes in 'perf probe' (He Kuang)
> >
> > Infrastructure:
> >
> > - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)
> >
> > Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
> >
> > ----------------------------------------------------------------
> > He Kuang (3):
> > perf probe: Set retprobe flag when probe in address-based alternative mode
> > perf probe: Make --source avaiable when probe with lazy_line
> > perf probe: Fix segfault when probe with lazy_line to file
> >
> > Namhyung Kim (2):
> > tracing, mm: Record pfn instead of pointer to struct page
> > perf kmem: Analyze page allocator events also
> >
> > include/trace/events/filemap.h | 8 +-
> > include/trace/events/kmem.h | 42 +--
> > include/trace/events/vmscan.h | 8 +-
> > tools/perf/Documentation/perf-kmem.txt | 8 +-
> > tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++--
> > tools/perf/util/probe-event.c | 3 +-
> > tools/perf/util/probe-event.h | 2 +
> > tools/perf/util/probe-finder.c | 20 +-
> > 8 files changed, 540 insertions(+), 51 deletions(-)
>
>
> --
> Masami HIRAMATSU
> Linux Technology Research Center, System Productivity Research Dept.
> Center for Technology Innovation - Systems Engineering
> Hitachi, Ltd., Research & Development Group
> E-mail: [email protected]
>

2015-04-13 23:19:41

by Arnaldo Carvalho de Melo

Subject: Re: [GIT PULL 0/5] perf/core improvements and fixes

On Mon, Apr 13, 2015 at 08:09:23PM -0300, Arnaldo Carvalho de Melo wrote:
> On Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu wrote:
> > Hi, Arnaldo,
> >
> > > perf probe: Make --source avaiable when probe with lazy_line
> >
> > No, could you pull Naohiro's patch?
> > I'd like to move get_real_path to probe_finder.c
>
> Oops, yeah, you asked for that... Ingo, please ignore this pull request
> for now, thanks,

Ok, I did that and created a perf-core-for-mingo-2, Masami, please check
that all is right, ok?

- Arnaldo

The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:

perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)

are available in the git repository at:

git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo-2

for you to fetch changes up to f19e80c640d58ddfd70f2454ee597f81ba966690:

perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 20:12:21 -0300)

----------------------------------------------------------------
perf/core improvements and fixes:

New features:

- Analyze page allocator events also in 'perf kmem' (Namhyung Kim)

User visible fixes:

- Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)

- lazy_line probe fixes in 'perf probe' (Naohiro Aota, He Kuang)

Infrastructure:

- Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)

Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>

----------------------------------------------------------------
He Kuang (2):
perf probe: Set retprobe flag when probe in address-based alternative mode
perf probe: Fix segfault when probe with lazy_line to file

Namhyung Kim (2):
tracing, mm: Record pfn instead of pointer to struct page
perf kmem: Analyze page allocator events also

Naohiro Aota (1):
perf probe: Find compilation directory path for lazy matching

include/trace/events/filemap.h | 8 +-
include/trace/events/kmem.h | 42 +--
include/trace/events/vmscan.h | 8 +-
tools/perf/Documentation/perf-kmem.txt | 8 +-
tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++--
tools/perf/util/probe-event.c | 60 +---
tools/perf/util/probe-finder.c | 73 ++++-
tools/perf/util/probe-finder.h | 4 +
8 files changed, 596 insertions(+), 107 deletions(-)

Subject: Re: Re: [GIT PULL 0/5] perf/core improvements and fixes

(2015/04/14 8:19), Arnaldo Carvalho de Melo wrote:
> On Mon, Apr 13, 2015 at 08:09:23PM -0300, Arnaldo Carvalho de Melo wrote:
>> On Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu wrote:
>>> Hi, Arnaldo,
>>>
>>>> perf probe: Make --source avaiable when probe with lazy_line
>>>
>>> No, could you pull Naohiro's patch?
>>> I'd like to move get_real_path to probe_finder.c
>>
>> Oops, yeah, you asked for that... Ingo, please ignore this pull request
>> for now, thanks,
>
> Ok, I did that and created a perf-core-for-mingo-2, Masami, please check
> that all is right, ok?

OK, I've built and tested it :)

Acked-by: Masami Hiramatsu <[email protected]>
Tested-by: Masami Hiramatsu <[email protected]>

Thank you!

>
> - Arnaldo
>
> The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:
>
> perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)
>
> are available in the git repository at:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo-2
>
> for you to fetch changes up to f19e80c640d58ddfd70f2454ee597f81ba966690:
>
> perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 20:12:21 -0300)
>
> ----------------------------------------------------------------
> perf/core improvements and fixes:
>
> New features:
>
> - Analyze page allocator events also in 'perf kmem' (Namhyung Kim)
>
> User visible fixes:
>
> - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)
>
> - lazy_line probe fixes in 'perf probe' (Naohiro Aota, He Kuang)
>
> Infrastructure:
>
> - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)
>
> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
>
> ----------------------------------------------------------------
> He Kuang (2):
> perf probe: Set retprobe flag when probe in address-based alternative mode
> perf probe: Fix segfault when probe with lazy_line to file
>
> Namhyung Kim (2):
> tracing, mm: Record pfn instead of pointer to struct page
> perf kmem: Analyze page allocator events also
>
> Naohiro Aota (1):
> perf probe: Find compilation directory path for lazy matching
>
> include/trace/events/filemap.h | 8 +-
> include/trace/events/kmem.h | 42 +--
> include/trace/events/vmscan.h | 8 +-
> tools/perf/Documentation/perf-kmem.txt | 8 +-
> tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++--
> tools/perf/util/probe-event.c | 60 +---
> tools/perf/util/probe-finder.c | 73 ++++-
> tools/perf/util/probe-finder.h | 4 +
> 8 files changed, 596 insertions(+), 107 deletions(-)


--
Masami HIRAMATSU
Linux Technology Research Center, System Productivity Research Dept.
Center for Technology Innovation - Systems Engineering
Hitachi, Ltd., Research & Development Group
E-mail: [email protected]

2015-04-14 12:12:58

by Ingo Molnar

Subject: Re: [GIT PULL 0/5] perf/core improvements and fixes


* Arnaldo Carvalho de Melo <[email protected]> wrote:

> On Mon, Apr 13, 2015 at 08:09:23PM -0300, Arnaldo Carvalho de Melo wrote:
> > On Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu wrote:
> > > Hi, Arnaldo,
> > >
> > > > perf probe: Make --source avaiable when probe with lazy_line
> > >
> > > No, could you pull Naohiro's patch?
> > > I'd like to move get_real_path to probe_finder.c
> >
> > OOps, yeah, you asked for that... Ingo, please ignore this pull request
> > for now, thanks,
>
> Ok, I did that and created a perf-core-for-mingo-2, Masami, please check
> that all is right, ok?
>
> - Arnaldo
>
> The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:
>
> perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)
>
> are available in the git repository at:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo-2
>
> for you to fetch changes up to f19e80c640d58ddfd70f2454ee597f81ba966690:
>
> perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 20:12:21 -0300)
>
> ----------------------------------------------------------------
> perf/core improvements and fixes:
>
> New features:
>
> - Analyze page allocator events also in 'perf kmem' (Namhyung Kim)
>
> User visible fixes:
>
> - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)
>
> - lazy_line probe fixes in 'perf probe' (Naohiro Aota, He Kuang)
>
> Infrastructure:
>
> - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)
>
> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
>
> ----------------------------------------------------------------
> He Kuang (2):
> perf probe: Set retprobe flag when probe in address-based alternative mode
> perf probe: Fix segfault when probe with lazy_line to file
>
> Namhyung Kim (2):
> tracing, mm: Record pfn instead of pointer to struct page
> perf kmem: Analyze page allocator events also
>
> Naohiro Aota (1):
> perf probe: Find compilation directory path for lazy matching
>
> include/trace/events/filemap.h | 8 +-
> include/trace/events/kmem.h | 42 +--
> include/trace/events/vmscan.h | 8 +-
> tools/perf/Documentation/perf-kmem.txt | 8 +-
> tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++--
> tools/perf/util/probe-event.c | 60 +---
> tools/perf/util/probe-finder.c | 73 ++++-
> tools/perf/util/probe-finder.h | 4 +
> 8 files changed, 596 insertions(+), 107 deletions(-)

Pulled, thanks a lot Arnaldo!

Ingo

2015-04-14 12:17:26

by Arnaldo Carvalho de Melo

Subject: Re: Re: [GIT PULL 0/5] perf/core improvements and fixes

On Tue, Apr 14, 2015 at 04:04:29PM +0900, Masami Hiramatsu wrote:
> (2015/04/14 8:19), Arnaldo Carvalho de Melo wrote:
> > On Mon, Apr 13, 2015 at 08:09:23PM -0300, Arnaldo Carvalho de Melo wrote:
> >> On Tue, Apr 14, 2015 at 07:33:07AM +0900, Masami Hiramatsu wrote:
> >>> Hi, Arnaldo,
> >>>
> >>>> perf probe: Make --source avaiable when probe with lazy_line
> >>>
> >>> No, could you pull Naohiro's patch?
> >>> I'd like to move get_real_path to probe_finder.c
> >>
> >> Oops, yeah, you asked for that... Ingo, please ignore this pull request
> >> for now, thanks,
> >
> > Ok, I did that and created a perf-core-for-mingo-2, Masami, please check
> > that all is right, ok?
>
> OK, I've built and tested it :)
>
> Acked-by: Masami Hiramatsu <[email protected]>
> Tested-by: Masami Hiramatsu <[email protected]>

Thanks, and sorry for the slip up in getting the right patch as we
agreed in that discussion,

Regards,

- Arnaldo

> Thank you!
>
> >
> > - Arnaldo
> >
> > The following changes since commit 066450be419fa48007a9f29e19828f2a86198754:
> >
> > perf/x86/intel/pt: Clean up the control flow in pt_pmu_hw_init() (2015-04-12 11:21:15 +0200)
> >
> > are available in the git repository at:
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux.git tags/perf-core-for-mingo-2
> >
> > for you to fetch changes up to f19e80c640d58ddfd70f2454ee597f81ba966690:
> >
> > perf probe: Fix segfault when probe with lazy_line to file (2015-04-13 20:12:21 -0300)
> >
> > ----------------------------------------------------------------
> > perf/core improvements and fixes:
> >
> > New features:
> >
> > - Analyze page allocator events also in 'perf kmem' (Namhyung Kim)
> >
> > User visible fixes:
> >
> > - Fix retprobe 'perf probe' handling when failing to find needed debuginfo (He Kuang)
> >
> > - lazy_line probe fixes in 'perf probe' (Naohiro Aota, He Kuang)
> >
> > Infrastructure:
> >
> > - Record pfn instead of pointer to struct page in tracepoints (Namhyung Kim)
> >
> > Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
> >
> > ----------------------------------------------------------------
> > He Kuang (2):
> > perf probe: Set retprobe flag when probe in address-based alternative mode
> > perf probe: Fix segfault when probe with lazy_line to file
> >
> > Namhyung Kim (2):
> > tracing, mm: Record pfn instead of pointer to struct page
> > perf kmem: Analyze page allocator events also
> >
> > Naohiro Aota (1):
> > perf probe: Find compilation directory path for lazy matching
> >
> > include/trace/events/filemap.h | 8 +-
> > include/trace/events/kmem.h | 42 +--
> > include/trace/events/vmscan.h | 8 +-
> > tools/perf/Documentation/perf-kmem.txt | 8 +-
> > tools/perf/builtin-kmem.c | 500 +++++++++++++++++++++++++++++++--
> > tools/perf/util/probe-event.c | 60 +---
> > tools/perf/util/probe-finder.c | 73 ++++-
> > tools/perf/util/probe-finder.h | 4 +
> > 8 files changed, 596 insertions(+), 107 deletions(-)
>
>
> --
> Masami HIRAMATSU
> Linux Technology Research Center, System Productivity Research Dept.
> Center for Technology Innovation - Systems Engineering
> Hitachi, Ltd., Research & Development Group
> E-mail: [email protected]
>

2017-07-31 07:43:45

by Vlastimil Babka

Subject: Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page

On 04/14/2015 12:14 AM, Arnaldo Carvalho de Melo wrote:
> From: Namhyung Kim <[email protected]>
>
> The struct page is opaque for userspace tools, so it'd be better to save
> the pfn in order to identify page frames.
>
> The textual output of the $debugfs/tracing/trace file remains unchanged;
> only the raw (binary) data format is changed. But thanks to libtraceevent,
> userspace tools which deal with the raw data (like perf and trace-cmd)
> can parse the format easily.

Hmm, it seems trace-cmd doesn't work that well, at least on a current
x86_64 kernel where I noticed it:

trace-cmd-22020 [003] 105219.542610: mm_page_alloc: [FAILED TO PARSE] pfn=0x165cb4 order=0 gfp_flags=29491274 migratetype=1

I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
The events/kmem/mm_page_alloc/format file contains this for page:

REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)

I think userspace can't know vmemmap_base nor the implied sizeof(struct
page) for pointer arithmetic?

On an older 4.4-based kernel:

REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)

This also fails to parse, so it must be the struct page part?

I think the problem is that even if we solve this with some more
preprocessor trickery to make the format file contain only constant
numbers, pfn_to_page() on e.g. a sparse memory model without vmemmap is
more complicated than simple arithmetic, and can't be exported in the
format file.
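
For reference, the two pfn_to_page() flavours in question look roughly
like this (condensed from include/asm-generic/memory_model.h; treat the
exact form as approximate): with SPARSEMEM_VMEMMAP the conversion is plain
pointer arithmetic off vmemmap, which a format string can at least try to
express, while classic SPARSEMEM needs a runtime section lookup that no
format string can encode for userspace:

    #ifdef CONFIG_SPARSEMEM_VMEMMAP
    /* memmap is virtually contiguous: a pure offset calculation */
    #define __pfn_to_page(pfn)      (vmemmap + (pfn))
    #else /* classic SPARSEMEM */
    #define __pfn_to_page(pfn)                                      \
    ({      unsigned long __pfn = (pfn);                            \
            struct mem_section *__sec = __pfn_to_section(__pfn);    \
            __section_mem_map_addr(__sec) + __pfn;                  \
    })
    #endif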

I'm afraid that to support userspace parsing of the trace data, we will
have to store both the struct page pointer and the pfn... or perhaps give
up on reporting the struct page pointer completely. Thoughts?

> So the impact on userspace will also be
> minimal.
>
> Signed-off-by: Namhyung Kim <[email protected]>
> Based-on-patch-by: Joonsoo Kim <[email protected]>
> Acked-by: Ingo Molnar <[email protected]>
> Acked-by: Steven Rostedt <[email protected]>
> Cc: David Ahern <[email protected]>
> Cc: Jiri Olsa <[email protected]>
> Cc: Minchan Kim <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: [email protected]
> Link: http://lkml.kernel.org/r/[email protected]
> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
> ---
> include/trace/events/filemap.h | 8 ++++----
> include/trace/events/kmem.h | 42 +++++++++++++++++++++---------------------
> include/trace/events/vmscan.h | 8 ++++----
> 3 files changed, 29 insertions(+), 29 deletions(-)
>
> diff --git a/include/trace/events/filemap.h b/include/trace/events/filemap.h
> index 0421f49a20f7..42febb6bc1d5 100644
> --- a/include/trace/events/filemap.h
> +++ b/include/trace/events/filemap.h
> @@ -18,14 +18,14 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache,
> TP_ARGS(page),
>
> TP_STRUCT__entry(
> - __field(struct page *, page)
> + __field(unsigned long, pfn)
> __field(unsigned long, i_ino)
> __field(unsigned long, index)
> __field(dev_t, s_dev)
> ),
>
> TP_fast_assign(
> - __entry->page = page;
> + __entry->pfn = page_to_pfn(page);
> __entry->i_ino = page->mapping->host->i_ino;
> __entry->index = page->index;
> if (page->mapping->host->i_sb)
> @@ -37,8 +37,8 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache,
> TP_printk("dev %d:%d ino %lx page=%p pfn=%lu ofs=%lu",
> MAJOR(__entry->s_dev), MINOR(__entry->s_dev),
> __entry->i_ino,
> - __entry->page,
> - page_to_pfn(__entry->page),
> + pfn_to_page(__entry->pfn),
> + __entry->pfn,
> __entry->index << PAGE_SHIFT)
> );
>
> diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
> index 4ad10baecd4d..81ea59812117 100644
> --- a/include/trace/events/kmem.h
> +++ b/include/trace/events/kmem.h
> @@ -154,18 +154,18 @@ TRACE_EVENT(mm_page_free,
> TP_ARGS(page, order),
>
> TP_STRUCT__entry(
> - __field( struct page *, page )
> + __field( unsigned long, pfn )
> __field( unsigned int, order )
> ),
>
> TP_fast_assign(
> - __entry->page = page;
> + __entry->pfn = page_to_pfn(page);
> __entry->order = order;
> ),
>
> TP_printk("page=%p pfn=%lu order=%d",
> - __entry->page,
> - page_to_pfn(__entry->page),
> + pfn_to_page(__entry->pfn),
> + __entry->pfn,
> __entry->order)
> );
>
> @@ -176,18 +176,18 @@ TRACE_EVENT(mm_page_free_batched,
> TP_ARGS(page, cold),
>
> TP_STRUCT__entry(
> - __field( struct page *, page )
> + __field( unsigned long, pfn )
> __field( int, cold )
> ),
>
> TP_fast_assign(
> - __entry->page = page;
> + __entry->pfn = page_to_pfn(page);
> __entry->cold = cold;
> ),
>
> TP_printk("page=%p pfn=%lu order=0 cold=%d",
> - __entry->page,
> - page_to_pfn(__entry->page),
> + pfn_to_page(__entry->pfn),
> + __entry->pfn,
> __entry->cold)
> );
>
> @@ -199,22 +199,22 @@ TRACE_EVENT(mm_page_alloc,
> TP_ARGS(page, order, gfp_flags, migratetype),
>
> TP_STRUCT__entry(
> - __field( struct page *, page )
> + __field( unsigned long, pfn )
> __field( unsigned int, order )
> __field( gfp_t, gfp_flags )
> __field( int, migratetype )
> ),
>
> TP_fast_assign(
> - __entry->page = page;
> + __entry->pfn = page ? page_to_pfn(page) : -1UL;
> __entry->order = order;
> __entry->gfp_flags = gfp_flags;
> __entry->migratetype = migratetype;
> ),
>
> TP_printk("page=%p pfn=%lu order=%d migratetype=%d gfp_flags=%s",
> - __entry->page,
> - __entry->page ? page_to_pfn(__entry->page) : 0,
> + __entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
> + __entry->pfn != -1UL ? __entry->pfn : 0,
> __entry->order,
> __entry->migratetype,
> show_gfp_flags(__entry->gfp_flags))
> @@ -227,20 +227,20 @@ DECLARE_EVENT_CLASS(mm_page,
> TP_ARGS(page, order, migratetype),
>
> TP_STRUCT__entry(
> - __field( struct page *, page )
> + __field( unsigned long, pfn )
> __field( unsigned int, order )
> __field( int, migratetype )
> ),
>
> TP_fast_assign(
> - __entry->page = page;
> + __entry->pfn = page ? page_to_pfn(page) : -1UL;
> __entry->order = order;
> __entry->migratetype = migratetype;
> ),
>
> TP_printk("page=%p pfn=%lu order=%u migratetype=%d percpu_refill=%d",
> - __entry->page,
> - __entry->page ? page_to_pfn(__entry->page) : 0,
> + __entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
> + __entry->pfn != -1UL ? __entry->pfn : 0,
> __entry->order,
> __entry->migratetype,
> __entry->order == 0)
> @@ -260,7 +260,7 @@ DEFINE_EVENT_PRINT(mm_page, mm_page_pcpu_drain,
> TP_ARGS(page, order, migratetype),
>
> TP_printk("page=%p pfn=%lu order=%d migratetype=%d",
> - __entry->page, page_to_pfn(__entry->page),
> + pfn_to_page(__entry->pfn), __entry->pfn,
> __entry->order, __entry->migratetype)
> );
>
> @@ -275,7 +275,7 @@ TRACE_EVENT(mm_page_alloc_extfrag,
> alloc_migratetype, fallback_migratetype),
>
> TP_STRUCT__entry(
> - __field( struct page *, page )
> + __field( unsigned long, pfn )
> __field( int, alloc_order )
> __field( int, fallback_order )
> __field( int, alloc_migratetype )
> @@ -284,7 +284,7 @@ TRACE_EVENT(mm_page_alloc_extfrag,
> ),
>
> TP_fast_assign(
> - __entry->page = page;
> + __entry->pfn = page_to_pfn(page);
> __entry->alloc_order = alloc_order;
> __entry->fallback_order = fallback_order;
> __entry->alloc_migratetype = alloc_migratetype;
> @@ -294,8 +294,8 @@ TRACE_EVENT(mm_page_alloc_extfrag,
> ),
>
> TP_printk("page=%p pfn=%lu alloc_order=%d fallback_order=%d pageblock_order=%d alloc_migratetype=%d fallback_migratetype=%d fragmenting=%d change_ownership=%d",
> - __entry->page,
> - page_to_pfn(__entry->page),
> + pfn_to_page(__entry->pfn),
> + __entry->pfn,
> __entry->alloc_order,
> __entry->fallback_order,
> pageblock_order,
> diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
> index 69590b6ffc09..f66476b96264 100644
> --- a/include/trace/events/vmscan.h
> +++ b/include/trace/events/vmscan.h
> @@ -336,18 +336,18 @@ TRACE_EVENT(mm_vmscan_writepage,
> TP_ARGS(page, reclaim_flags),
>
> TP_STRUCT__entry(
> - __field(struct page *, page)
> + __field(unsigned long, pfn)
> __field(int, reclaim_flags)
> ),
>
> TP_fast_assign(
> - __entry->page = page;
> + __entry->pfn = page_to_pfn(page);
> __entry->reclaim_flags = reclaim_flags;
> ),
>
> TP_printk("page=%p pfn=%lu flags=%s",
> - __entry->page,
> - page_to_pfn(__entry->page),
> + pfn_to_page(__entry->pfn),
> + __entry->pfn,
> show_reclaim_flags(__entry->reclaim_flags))
> );
>
>

2017-08-31 11:38:44

by Vlastimil Babka

[permalink] [raw]
Subject: Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page

Ping?

On 07/31/2017 09:43 AM, Vlastimil Babka wrote:
> On 04/14/2015 12:14 AM, Arnaldo Carvalho de Melo wrote:
>> From: Namhyung Kim <[email protected]>
>>
>> The struct page is opaque for userspace tools, so it'd be better to save
>> pfn in order to identify page frames.
>>
>> The textual output of $debugfs/tracing/trace file remains unchanged and
>> only raw (binary) data format is changed - but thanks to libtraceevent,
>> userspace tools which deal with the raw data (like perf and trace-cmd)
>> can parse the format easily.
>
> Hmm, it seems trace-cmd doesn't work that well, at least on a current
> x86_64 kernel where I noticed it:
>
> trace-cmd-22020 [003] 105219.542610: mm_page_alloc: [FAILED TO PARSE] pfn=0x165cb4 order=0 gfp_flags=29491274 migratetype=1
>
> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
> The events/kmem/mm_page_alloc/format file contains this for page:
>
> REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)
>
> I think userspace can't know vmemmap_base nor the implied sizeof(struct
> page) for pointer arithmetic?
>
> On older 4.4-based kernel:
>
> REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)
>
> This also fails to parse, so it must be the struct page part?
>
> I think the problem is, even if we solve this with some more
> preprocessor trickery to make the format file contain only constant
> numbers, pfn_to_page() on e.g. a sparse memory model without vmemmap is
> more complicated than simple arithmetic, and can't be exported in the
> format file.
>
> I'm afraid that to support userspace parsing of the trace data, we will
> have to store both struct page and pfn... or perhaps give up on reporting
> the struct page pointer completely. Thoughts?
>
>> So impact on the userspace will also be
>> minimal.
>>
>> Signed-off-by: Namhyung Kim <[email protected]>
>> Based-on-patch-by: Joonsoo Kim <[email protected]>
>> Acked-by: Ingo Molnar <[email protected]>
>> Acked-by: Steven Rostedt <[email protected]>
>> Cc: David Ahern <[email protected]>
>> Cc: Jiri Olsa <[email protected]>
>> Cc: Minchan Kim <[email protected]>
>> Cc: Peter Zijlstra <[email protected]>
>> Cc: [email protected]
>> Link: http://lkml.kernel.org/r/[email protected]
>> Signed-off-by: Arnaldo Carvalho de Melo <[email protected]>
>> ---
>> include/trace/events/filemap.h | 8 ++++----
>> include/trace/events/kmem.h | 42 +++++++++++++++++++++---------------------
>> include/trace/events/vmscan.h | 8 ++++----
>> 3 files changed, 29 insertions(+), 29 deletions(-)
>>
>> diff --git a/include/trace/events/filemap.h b/include/trace/events/filemap.h
>> index 0421f49a20f7..42febb6bc1d5 100644
>> --- a/include/trace/events/filemap.h
>> +++ b/include/trace/events/filemap.h
>> @@ -18,14 +18,14 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache,
>> TP_ARGS(page),
>>
>> TP_STRUCT__entry(
>> - __field(struct page *, page)
>> + __field(unsigned long, pfn)
>> __field(unsigned long, i_ino)
>> __field(unsigned long, index)
>> __field(dev_t, s_dev)
>> ),
>>
>> TP_fast_assign(
>> - __entry->page = page;
>> + __entry->pfn = page_to_pfn(page);
>> __entry->i_ino = page->mapping->host->i_ino;
>> __entry->index = page->index;
>> if (page->mapping->host->i_sb)
>> @@ -37,8 +37,8 @@ DECLARE_EVENT_CLASS(mm_filemap_op_page_cache,
>> TP_printk("dev %d:%d ino %lx page=%p pfn=%lu ofs=%lu",
>> MAJOR(__entry->s_dev), MINOR(__entry->s_dev),
>> __entry->i_ino,
>> - __entry->page,
>> - page_to_pfn(__entry->page),
>> + pfn_to_page(__entry->pfn),
>> + __entry->pfn,
>> __entry->index << PAGE_SHIFT)
>> );
>>
>> diff --git a/include/trace/events/kmem.h b/include/trace/events/kmem.h
>> index 4ad10baecd4d..81ea59812117 100644
>> --- a/include/trace/events/kmem.h
>> +++ b/include/trace/events/kmem.h
>> @@ -154,18 +154,18 @@ TRACE_EVENT(mm_page_free,
>> TP_ARGS(page, order),
>>
>> TP_STRUCT__entry(
>> - __field( struct page *, page )
>> + __field( unsigned long, pfn )
>> __field( unsigned int, order )
>> ),
>>
>> TP_fast_assign(
>> - __entry->page = page;
>> + __entry->pfn = page_to_pfn(page);
>> __entry->order = order;
>> ),
>>
>> TP_printk("page=%p pfn=%lu order=%d",
>> - __entry->page,
>> - page_to_pfn(__entry->page),
>> + pfn_to_page(__entry->pfn),
>> + __entry->pfn,
>> __entry->order)
>> );
>>
>> @@ -176,18 +176,18 @@ TRACE_EVENT(mm_page_free_batched,
>> TP_ARGS(page, cold),
>>
>> TP_STRUCT__entry(
>> - __field( struct page *, page )
>> + __field( unsigned long, pfn )
>> __field( int, cold )
>> ),
>>
>> TP_fast_assign(
>> - __entry->page = page;
>> + __entry->pfn = page_to_pfn(page);
>> __entry->cold = cold;
>> ),
>>
>> TP_printk("page=%p pfn=%lu order=0 cold=%d",
>> - __entry->page,
>> - page_to_pfn(__entry->page),
>> + pfn_to_page(__entry->pfn),
>> + __entry->pfn,
>> __entry->cold)
>> );
>>
>> @@ -199,22 +199,22 @@ TRACE_EVENT(mm_page_alloc,
>> TP_ARGS(page, order, gfp_flags, migratetype),
>>
>> TP_STRUCT__entry(
>> - __field( struct page *, page )
>> + __field( unsigned long, pfn )
>> __field( unsigned int, order )
>> __field( gfp_t, gfp_flags )
>> __field( int, migratetype )
>> ),
>>
>> TP_fast_assign(
>> - __entry->page = page;
>> + __entry->pfn = page ? page_to_pfn(page) : -1UL;
>> __entry->order = order;
>> __entry->gfp_flags = gfp_flags;
>> __entry->migratetype = migratetype;
>> ),
>>
>> TP_printk("page=%p pfn=%lu order=%d migratetype=%d gfp_flags=%s",
>> - __entry->page,
>> - __entry->page ? page_to_pfn(__entry->page) : 0,
>> + __entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
>> + __entry->pfn != -1UL ? __entry->pfn : 0,
>> __entry->order,
>> __entry->migratetype,
>> show_gfp_flags(__entry->gfp_flags))
>> @@ -227,20 +227,20 @@ DECLARE_EVENT_CLASS(mm_page,
>> TP_ARGS(page, order, migratetype),
>>
>> TP_STRUCT__entry(
>> - __field( struct page *, page )
>> + __field( unsigned long, pfn )
>> __field( unsigned int, order )
>> __field( int, migratetype )
>> ),
>>
>> TP_fast_assign(
>> - __entry->page = page;
>> + __entry->pfn = page ? page_to_pfn(page) : -1UL;
>> __entry->order = order;
>> __entry->migratetype = migratetype;
>> ),
>>
>> TP_printk("page=%p pfn=%lu order=%u migratetype=%d percpu_refill=%d",
>> - __entry->page,
>> - __entry->page ? page_to_pfn(__entry->page) : 0,
>> + __entry->pfn != -1UL ? pfn_to_page(__entry->pfn) : NULL,
>> + __entry->pfn != -1UL ? __entry->pfn : 0,
>> __entry->order,
>> __entry->migratetype,
>> __entry->order == 0)
>> @@ -260,7 +260,7 @@ DEFINE_EVENT_PRINT(mm_page, mm_page_pcpu_drain,
>> TP_ARGS(page, order, migratetype),
>>
>> TP_printk("page=%p pfn=%lu order=%d migratetype=%d",
>> - __entry->page, page_to_pfn(__entry->page),
>> + pfn_to_page(__entry->pfn), __entry->pfn,
>> __entry->order, __entry->migratetype)
>> );
>>
>> @@ -275,7 +275,7 @@ TRACE_EVENT(mm_page_alloc_extfrag,
>> alloc_migratetype, fallback_migratetype),
>>
>> TP_STRUCT__entry(
>> - __field( struct page *, page )
>> + __field( unsigned long, pfn )
>> __field( int, alloc_order )
>> __field( int, fallback_order )
>> __field( int, alloc_migratetype )
>> @@ -284,7 +284,7 @@ TRACE_EVENT(mm_page_alloc_extfrag,
>> ),
>>
>> TP_fast_assign(
>> - __entry->page = page;
>> + __entry->pfn = page_to_pfn(page);
>> __entry->alloc_order = alloc_order;
>> __entry->fallback_order = fallback_order;
>> __entry->alloc_migratetype = alloc_migratetype;
>> @@ -294,8 +294,8 @@ TRACE_EVENT(mm_page_alloc_extfrag,
>> ),
>>
>> TP_printk("page=%p pfn=%lu alloc_order=%d fallback_order=%d pageblock_order=%d alloc_migratetype=%d fallback_migratetype=%d fragmenting=%d change_ownership=%d",
>> - __entry->page,
>> - page_to_pfn(__entry->page),
>> + pfn_to_page(__entry->pfn),
>> + __entry->pfn,
>> __entry->alloc_order,
>> __entry->fallback_order,
>> pageblock_order,
>> diff --git a/include/trace/events/vmscan.h b/include/trace/events/vmscan.h
>> index 69590b6ffc09..f66476b96264 100644
>> --- a/include/trace/events/vmscan.h
>> +++ b/include/trace/events/vmscan.h
>> @@ -336,18 +336,18 @@ TRACE_EVENT(mm_vmscan_writepage,
>> TP_ARGS(page, reclaim_flags),
>>
>> TP_STRUCT__entry(
>> - __field(struct page *, page)
>> + __field(unsigned long, pfn)
>> __field(int, reclaim_flags)
>> ),
>>
>> TP_fast_assign(
>> - __entry->page = page;
>> + __entry->pfn = page_to_pfn(page);
>> __entry->reclaim_flags = reclaim_flags;
>> ),
>>
>> TP_printk("page=%p pfn=%lu flags=%s",
>> - __entry->page,
>> - page_to_pfn(__entry->page),
>> + pfn_to_page(__entry->pfn),
>> + __entry->pfn,
>> show_reclaim_flags(__entry->reclaim_flags))
>> );
>>
>>
>

2017-08-31 13:43:10

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page

On Mon, 31 Jul 2017 09:43:41 +0200 Vlastimil Babka <[email protected]> wrote:

> On 04/14/2015 12:14 AM, Arnaldo Carvalho de Melo wrote:
> > From: Namhyung Kim <[email protected]>
> >
> > The struct page is opaque for userspace tools, so it'd be better to save
> > pfn in order to identify page frames.
> >
> > The textual output of $debugfs/tracing/trace file remains unchanged and
> > only raw (binary) data format is changed - but thanks to libtraceevent,
> > userspace tools which deal with the raw data (like perf and trace-cmd)
> > can parse the format easily.
>
> Hmm, it seems trace-cmd doesn't work that well, at least on a current
> x86_64 kernel where I noticed it:
>
> trace-cmd-22020 [003] 105219.542610: mm_page_alloc: [FAILED TO PARSE] pfn=0x165cb4 order=0 gfp_flags=29491274 migratetype=1

Which version of trace-cmd failed? It parses for me. Hmm, the
vmemmap_base isn't in the event format file. It's the actual address.
That's probably what failed to parse.

>
> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
> The events/kmem/mm_page_alloc/format file contains this for page:
>
> REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)

But yeah, I think the output is wrong. I just ran this:

page=0xffffea00000a62f4 pfn=680692 order=0 migratetype=0 gfp_flags=GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK

But running it with trace-cmd report -R (raw format):

mm_page_alloc: pfn=0xa62f4 order=0 gfp_flags=24150208 migratetype=0

The parser currently ignores types, so it doesn't do pointer
arithmetic correctly, and it would be hard to do so here as it doesn't
know the size of the struct page. What could work is if we changed the
printf fmt to be:

(unsigned long)(0xffffea0000000000UL) + (REC->pfn * sizeof(struct page))
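
For illustration, a rough sketch of how mm_page_alloc's TP_printk()
might be rewritten along those lines (untested, and note it would
still bake a config-specific vmemmap base into the format file):

	TP_printk("page=0x%lx pfn=%lu order=%d migratetype=%d gfp_flags=%s",
		__entry->pfn != -1UL ?
			(unsigned long)vmemmap +
				__entry->pfn * sizeof(struct page) : 0UL,
		__entry->pfn != -1UL ? __entry->pfn : 0,
		__entry->order,
		__entry->migratetype,
		show_gfp_flags(__entry->gfp_flags))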


>
> > I think userspace can't know vmemmap_base nor the implied sizeof(struct
> > page) for pointer arithmetic?
>
> On older 4.4-based kernel:
>
> REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)

This is what I have on 4.13-rc7

>
> This also fails to parse, so it must be the struct page part?

Again, what version of trace-cmd do you have?


>
> >> I think the problem is, even if we solve this with some more
> >> preprocessor trickery to make the format file contain only constant
> >> numbers, pfn_to_page() on e.g. a sparse memory model without vmemmap is
> >> more complicated than simple arithmetic, and can't be exported in the
> >> format file.
>
> I'm afraid that to support userspace parsing of the trace data, we will
> have to store both struct page and pfn... or perhaps give up on reporting
> the struct page pointer completely. Thoughts?

Had some thoughts up above.

-- Steve

2017-08-31 14:31:40

by Vlastimil Babka

[permalink] [raw]
Subject: Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page

On 08/31/2017 03:43 PM, Steven Rostedt wrote:
> On Mon, 31 Jul 2017 09:43:41 +0200 Vlastimil Babka <[email protected]> wrote:
>
>> On 04/14/2015 12:14 AM, Arnaldo Carvalho de Melo wrote:
>>> From: Namhyung Kim <[email protected]>
>>>
>>> The struct page is opaque for userspace tools, so it'd be better to save
>>> pfn in order to identify page frames.
>>>
>>> The textual output of $debugfs/tracing/trace file remains unchanged and
>>> only raw (binary) data format is changed - but thanks to libtraceevent,
>>> userspace tools which deal with the raw data (like perf and trace-cmd)
>>> can parse the format easily.
>>
> >> Hmm, it seems trace-cmd doesn't work that well, at least on a current
> >> x86_64 kernel where I noticed it:
>>
>> trace-cmd-22020 [003] 105219.542610: mm_page_alloc: [FAILED TO PARSE] pfn=0x165cb4 order=0 gfp_flags=29491274 migratetype=1
>
> Which version of trace-cmd failed? It parses for me. Hmm, the
> vmemmap_base isn't in the event format file. It's the actual address.
> That's probably what failed to parse.

Mine says 2.6. With 4.13-rc6 I get FAILED TO PARSE.

>
>>
>> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
>> The events/kmem/mm_page_alloc/format file contains this for page:
>>
>> REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)
>
> But yeah, I think the output is wrong. I just ran this:
>
> page=0xffffea00000a62f4 pfn=680692 order=0 migratetype=0 gfp_flags=GFP_KERNEL_ACCOUNT|__GFP_ZERO|__GFP_NOTRACK
>
> But running it with trace-cmd report -R (raw format):
>
> mm_page_alloc: pfn=0xa62f4 order=0 gfp_flags=24150208 migratetype=0
>
> The parser currently ignores types, so it doesn't do pointer
> arithmetic correctly, and it would be hard to do so here as it doesn't
> know the size of the struct page. What could work is if we changed the
> printf fmt to be:
>
> (unsigned long)(0xffffea0000000000UL) + (REC->pfn * sizeof(struct page))
>
>
>>
> >> I think userspace can't know vmemmap_base nor the implied sizeof(struct
> >> page) for pointer arithmetic?
>>
>> On older 4.4-based kernel:
>>
>> REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)
>
> This is what I have on 4.13-rc7
>
>>
>> This also fails to parse, so it must be the struct page part?
>
> Again, what version of trace-cmd do you have?

On the older distro it was 2.0.4

>
>>
> >> I think the problem is, even if we solve this with some more
> >> preprocessor trickery to make the format file contain only constant
> >> numbers, pfn_to_page() on e.g. a sparse memory model without vmemmap is
> >> more complicated than simple arithmetic, and can't be exported in the
> >> format file.
>>
>> I'm afraid that to support userspace parsing of the trace data, we will
>> have to store both struct page and pfn... or perhaps give up on reporting
>> the struct page pointer completely. Thoughts?
>
> Had some thoughts up above.

Yeah, it could be made to work for some configurations, but see the part
about "sparse memory model without vmemmap" above.

> -- Steve
>

2017-08-31 14:44:15

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page

On Thu, 31 Aug 2017 16:31:36 +0200
Vlastimil Babka <[email protected]> wrote:


> > Which version of trace-cmd failed? It parses for me. Hmm, the
> > vmemmap_base isn't in the event format file. It's the actual address.
> > That's probably what failed to parse.
>
> Mine says 2.6. With 4.13-rc6 I get FAILED TO PARSE.

Right, but you have the vmemmap_base in the event format, which can't
be parsed by userspace because it has no idea what the value of the
vmemmap_base is.

>
> >
> >>
> >> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
> >> The events/kmem/mm_page_alloc/format file contains this for page:
> >>
> >> REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)
> >


> >> On older 4.4-based kernel:
> >>
> >> REC->pfn != -1UL ? (((struct page *)(0xffffea0000000000UL)) + (REC->pfn)) : ((void *)0)
> >
> > This is what I have on 4.13-rc7
> >
> >>
> >> This also fails to parse, so it must be the struct page part?
> >
> > Again, what version of trace-cmd do you have?
>
> On the older distro it was 2.0.4

Right. That's probably why it failed to parse here. If you installed
the latest trace-cmd from the git repo, it probably will parse fine.

>
> >
> >>
> >> I think the problem is, even if we solve this with some more
> >> preprocessor trickery to make the format file contain only constant
> >> numbers, pfn_to_page() on e.g. a sparse memory model without vmemmap is
> >> more complicated than simple arithmetic, and can't be exported in the
> >> format file.
> >>
> >> I'm afraid that to support userspace parsing of the trace data, we will
> >> have to store both struct page and pfn... or perhaps give up on reporting
> >> the struct page pointer completely. Thoughts?
> >
> > Had some thoughts up above.
>
> Yeah, it could be made to work for some configurations, but see the part
> about "sparse memory model without vmemmap" above.

Right, but that should work with the latest trace-cmd. Does it?

-- Steve

2017-09-01 08:16:26

by Vlastimil Babka

[permalink] [raw]
Subject: Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page

On 08/31/2017 04:44 PM, Steven Rostedt wrote:
> On Thu, 31 Aug 2017 16:31:36 +0200
> Vlastimil Babka <[email protected]> wrote:
>
>
>>> Which version of trace-cmd failed? It parses for me. Hmm, the
>>> vmemmap_base isn't in the event format file. It's the actual address.
>>> That's probably what failed to parse.
>>
>> Mine says 2.6. With 4.13-rc6 I get FAILED TO PARSE.
>
> Right, but you have the vmemmap_base in the event format, which can't
> be parsed by userspace because it has no idea what the value of the
> vmemmap_base is.

This seems to be caused by CONFIG_RANDOMIZE_MEMORY. If we somehow put the value
in the format file, it's an info leak? (but I guess kernels that care must have
ftrace disabled anyway :)

>>
>>>
>>>>
>>>> I'm quite sure it's due to the "page=%p" part, which uses pfn_to_page().
>>>> The events/kmem/mm_page_alloc/format file contains this for page:
>>>>
>>>> REC->pfn != -1UL ? (((struct page *)vmemmap_base) + (REC->pfn)) : ((void *)0)
>>>>
>>>> I think the problem is, even if we solve this with some more
>>>> preprocessor trickery to make the format file contain only constant
>>>> numbers, pfn_to_page() on e.g. a sparse memory model without vmemmap is
>>>> more complicated than simple arithmetic, and can't be exported in the
>>>> format file.
>>>>
>>>> I'm afraid that to support userspace parsing of the trace data, we will
>>>> have to store both struct page and pfn... or perhaps give up on reporting
>>>> the struct page pointer completely. Thoughts?
>>>
>>> Had some thoughts up above.
>>
>> Yeah, it could be made to work for some configurations, but see the part
>> about "sparse memory model without vmemmap" above.
>
> Right, but that should work with the latest trace-cmd. Does it?

Hmm, by "sparse memory model without vmemmap" I don't mean there's a
number instead of "vmemmap_base". I mean CONFIG_SPARSEMEM=y

Then __pfn_to_page() looks like this:

#define __pfn_to_page(pfn) \
({ unsigned long __pfn = (pfn); \
struct mem_section *__sec = __pfn_to_section(__pfn); \
__section_mem_map_addr(__sec) + __pfn; \
})

Then the part of format file looks like this:

REC->pfn != -1UL ? ({ unsigned long __pfn = (REC->pfn); struct mem_section *__sec = __pfn_to_section(__pfn); __section_mem_map_addr(__sec) + __pfn; }) : ((void *)0)

The section lookups involve some array indexing, so I don't see how we
could pass this to tracing userspace. Would we want to special-case
this config to store both pfn and struct page in the trace frame? And
make sure the simpler ones work despite all the existing gotchas?
I'd rather say we should either store both the pfn and the page pointer,
or just throw away the page pointer, as the pfn is enough to e.g. match
alloc and free, and is also much more deterministic.
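
For the record, a sketch of what the "store both" variant could look
like for a simple event such as mm_page_free (illustrative only; it
costs an extra pointer-sized field per event):

	TP_STRUCT__entry(
		__field( struct page *, page )
		__field( unsigned long, pfn )
		__field( unsigned int, order )
	),

	TP_fast_assign(
		__entry->page = page;
		__entry->pfn = page_to_pfn(page);
		__entry->order = order;
	),

	TP_printk("page=%p pfn=%lu order=%d",
		__entry->page,
		__entry->pfn,
		__entry->order)

With that, the format file needs no pfn_to_page() arithmetic at all:
userspace prints the recorded pointer verbatim and still gets the pfn.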

> -- Steve
>

2017-09-01 11:15:46

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH 1/5] tracing, mm: Record pfn instead of pointer to struct page

On Fri, 1 Sep 2017 10:16:21 +0200
Vlastimil Babka <[email protected]> wrote:

> > Right, but that should work with the latest trace-cmd. Does it?
>
> Hmm, by "sparse memory model without vmemmap" I don't mean there's a
> number instead of "vmemmap_base". I mean CONFIG_SPARSEMEM=y
>
> Then __pfn_to_page() looks like this:
>
> #define __pfn_to_page(pfn) \
> ({ unsigned long __pfn = (pfn); \
> struct mem_section *__sec = __pfn_to_section(__pfn); \
> __section_mem_map_addr(__sec) + __pfn; \
> })
>
> Then the part of format file looks like this:
>
> REC->pfn != -1UL ? ({ unsigned long __pfn = (REC->pfn); struct mem_section *__sec = __pfn_to_section(__pfn); __section_mem_map_addr(__sec) + __pfn; }) : ((void *)0)

Ouch.

>
> The section lookups involve some array indexing, so I don't see how we
> could pass this to tracing userspace. Would we want to special-case
> this config to store both pfn and struct page in the trace frame? And
> make sure the simpler ones work despite all the existing gotchas?
> I'd rather say we should either store both the pfn and the page pointer,
> or just throw away the page pointer, as the pfn is enough to e.g. match
> alloc and free, and is also much more deterministic.

Write up a patch and we'll take a look.

-- Steve