2013-04-10 03:27:29

by zhangwei(Jovi)

Subject: [PATCH v3 00/12] event tracing expose change and bugfix/cleanup

From: "zhangwei(Jovi)" <[email protected]>

Hi Steven,

I have reworked this patchset again with minor changes.
[v2 -> v3:
- change the trace_descriptor_t definition in patch 3
- new patch "export ftrace_events"
- remove patch "export syscall metadata"
(syscall tracing uses the same event_trace_ops backend as normal event
tracepoints, so there is no need to export anything for syscalls)
- remove the private data field in struct ftrace_event_file (also not needed)
]

This patchset contains:
1) event tracing exposure work (v3)
The new implementation is based on the multi-instance buffer work;
it also converts syscall tracing to use the same event backend storage mechanism.
This covers patches 1-6 (patch 2 also fixes a long-standing minor bug).

2) some cleanups
These are patches 7-11.

3) patch 12, which fixes a libtraceevent warning

Note that these patches are based on the latest linux-trace git tree
(on top of the multi-instance buffer implementation):

git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace.git
tip/perf/core

All patches pass basic testing.


Note that ktap already makes use of this event tracing exposure work.
If you are interested, you can check the ktap code at the link below to see
how the exposed interface is used by an external module:
https://github.com/ktap/ktap/blob/master/library/trace.c

You can even give it a try. :)

Thanks very much

zhangwei(Jovi) (12):
tracing: move trace_array definition into include/linux/trace_array.h
tracing: fix irqs-off tag display in syscall tracing
tracing: expose event tracing infrastructure
tracing: export ftrace_events
tracing: switch syscall tracing to use event_trace_ops backend
tracing: expose structure ftrace_event_field
tracing: remove TRACE_EVENT_TYPE enum definition
tracing: remove obsolete macro guard _TRACE_PROFILE_INIT
tracing: remove ftrace(...) function
tracing: use per trace_array clock_id instead of global
trace_clock_id
tracing: guard tracing_selftest_disabled by
CONFIG_FTRACE_STARTUP_TEST
libtraceevent: add libtraceevent prefix in warning message

include/linux/ftrace_event.h | 32 ++++++++
include/linux/trace_array.h | 118 +++++++++++++++++++++++++++++
include/trace/ftrace.h | 71 ++++++------------
kernel/trace/trace.c | 27 +++----
kernel/trace/trace.h | 144 +-----------------------------------
kernel/trace/trace_events.c | 55 ++++++++++++++
kernel/trace/trace_syscalls.c | 36 ++++-----
tools/lib/traceevent/event-parse.c | 2 +-
8 files changed, 257 insertions(+), 228 deletions(-)
create mode 100644 include/linux/trace_array.h

--
1.7.9.7


2013-04-10 03:27:15

by zhangwei(Jovi)

Subject: [PATCH v3 04/12] tracing: export ftrace_events

From: "zhangwei(Jovi)" <[email protected]>

Export ftrace_events so that modules can access it.

Signed-off-by: zhangwei(Jovi) <[email protected]>
---
include/linux/ftrace_event.h | 1 +
kernel/trace/trace.h | 1 -
kernel/trace/trace_events.c | 2 ++
3 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 4b55272..f6a6e48 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -346,6 +346,7 @@ enum {
#define EVENT_STORAGE_SIZE 128
extern struct mutex event_storage_mutex;
extern char event_storage[EVENT_STORAGE_SIZE];
+extern struct list_head ftrace_events;

extern int trace_event_raw_init(struct ftrace_event_call *call);
extern int trace_define_field(struct ftrace_event_call *call, const char *type,
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 0a1f4be..8f4966b 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -917,7 +917,6 @@ extern int event_trace_add_tracer(struct dentry *parent, struct trace_array *tr)
extern int event_trace_del_tracer(struct trace_array *tr);

extern struct mutex event_mutex;
-extern struct list_head ftrace_events;

extern const char *__start___trace_bprintk_fmt[];
extern const char *__stop___trace_bprintk_fmt[];
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 09ca479..7c52a51 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -34,6 +34,8 @@ char event_storage[EVENT_STORAGE_SIZE];
EXPORT_SYMBOL_GPL(event_storage);

LIST_HEAD(ftrace_events);
+EXPORT_SYMBOL_GPL(ftrace_events);
+
static LIST_HEAD(ftrace_common_fields);

#define GFP_TRACE (GFP_KERNEL | __GFP_ZERO)
--
1.7.9.7

2013-04-10 03:27:18

by zhangwei(Jovi)

Subject: [PATCH v3 11/12] tracing: guard tracing_selftest_disabled by CONFIG_FTRACE_STARTUP_TEST

From: "zhangwei(Jovi)" <[email protected]>

The variable tracing_selftest_disabled has no meaning when
CONFIG_FTRACE_STARTUP_TEST is disabled.

This patch also removes the __read_mostly attribute, since
tracing_selftest_disabled is not actually read mostly.

Signed-off-by: zhangwei(Jovi) <[email protected]>
---
kernel/trace/trace.c | 6 ++++--
kernel/trace/trace.h | 2 +-
kernel/trace/trace_events.c | 2 ++
3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index ee4e110..09a3aa8 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -58,10 +58,12 @@ bool ring_buffer_expanded;
*/
static bool __read_mostly tracing_selftest_running;

+#ifdef CONFIG_FTRACE_STARTUP_TEST
/*
* If a tracer is running, we do not want to run SELFTEST.
*/
-bool __read_mostly tracing_selftest_disabled;
+bool tracing_selftest_disabled;
+#endif

/* For tracers that don't implement custom flags */
static struct tracer_opt dummy_tracer_opt[] = {
@@ -1069,8 +1071,8 @@ int register_tracer(struct tracer *type)
tracing_set_tracer(type->name);
default_bootup_tracer = NULL;
/* disable other selftests, since this will break it. */
- tracing_selftest_disabled = true;
#ifdef CONFIG_FTRACE_STARTUP_TEST
+ tracing_selftest_disabled = true;
printk(KERN_INFO "Disabling FTRACE selftests due to running tracer '%s'\n",
type->name);
#endif
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 9b8afa7..e9ef8b7 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -546,10 +546,10 @@ extern int DYN_FTRACE_TEST_NAME(void);
extern int DYN_FTRACE_TEST_NAME2(void);

extern bool ring_buffer_expanded;
-extern bool tracing_selftest_disabled;
DECLARE_PER_CPU(int, ftrace_cpu_disabled);

#ifdef CONFIG_FTRACE_STARTUP_TEST
+extern bool tracing_selftest_disabled;
extern int trace_selftest_startup_function(struct tracer *trace,
struct trace_array *tr);
extern int trace_selftest_startup_function_graph(struct tracer *trace,
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 7c52a51..7c4a16b 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -2251,7 +2251,9 @@ static __init int setup_trace_event(char *str)
{
strlcpy(bootup_event_buf, str, COMMAND_LINE_SIZE);
ring_buffer_expanded = true;
+#ifdef CONFIG_FTRACE_STARTUP_TEST
tracing_selftest_disabled = true;
+#endif

return 1;
}
--
1.7.9.7

2013-04-10 03:27:25

by zhangwei(Jovi)

Subject: [PATCH v3 08/12] tracing: remove obsolete macro guard _TRACE_PROFILE_INIT

From: "zhangwei(Jovi)" <[email protected]>

The macro _TRACE_PROFILE_INIT was removed a long time ago,
but its "#undef" guard was left behind; remove it.

Signed-off-by: zhangwei(Jovi) <[email protected]>
---
include/trace/ftrace.h | 2 --
1 file changed, 2 deletions(-)

diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index 743e754..b95cc52 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -677,5 +677,3 @@ static inline void perf_test_probe_##call(void) \
#include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
#endif /* CONFIG_PERF_EVENTS */

-#undef _TRACE_PROFILE_INIT
-
--
1.7.9.7

2013-04-10 03:27:35

by zhangwei(Jovi)

Subject: [PATCH v3 09/12] tracing: remove ftrace(...) function

From: "zhangwei(Jovi)" <[email protected]>

The only caller of the ftrace(...) function was removed a long time ago,
so remove the function body as well.

Signed-off-by: zhangwei(Jovi) <[email protected]>
---
kernel/trace/trace.c | 9 ---------
kernel/trace/trace.h | 5 -----
2 files changed, 14 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 224b152..dd0c122 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -1534,15 +1534,6 @@ trace_function(struct trace_array *tr,
__buffer_unlock_commit(buffer, event);
}

-void
-ftrace(struct trace_array *tr, struct trace_array_cpu *data,
- unsigned long ip, unsigned long parent_ip, unsigned long flags,
- int pc)
-{
- if (likely(!atomic_read(&data->disabled)))
- trace_function(tr, ip, parent_ip, flags, pc);
-}
-
#ifdef CONFIG_STACKTRACE

#define FTRACE_STACK_MAX_ENTRIES (PAGE_SIZE / sizeof(unsigned long))
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 9964695..bb3fd1b 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -445,11 +445,6 @@ void tracing_iter_reset(struct trace_iterator *iter, int cpu);

void poll_wait_pipe(struct trace_iterator *iter);

-void ftrace(struct trace_array *tr,
- struct trace_array_cpu *data,
- unsigned long ip,
- unsigned long parent_ip,
- unsigned long flags, int pc);
void tracing_sched_switch_trace(struct trace_array *tr,
struct task_struct *prev,
struct task_struct *next,
--
1.7.9.7

2013-04-10 03:27:45

by zhangwei(Jovi)

Subject: [PATCH v3 03/12] tracing: expose event tracing infrastructure

From: "zhangwei(Jovi)" <[email protected]>

Currently event tracing can only be used by ftrace and perf;
there is no mechanism that lets modules (such as external tracing tools)
register tracing callback functions.

Event tracing is implemented on top of tracepoints. Compared with raw
tracepoints, the event tracing infrastructure provides a built-in structured
event annotation format; this feature should be exposed to external users.

For example, this simple ktap pseudo-script demonstrates how the exposed
interface can be used:

function event_trace(e)
{
printf("%s", e.annotate);
}

os.trace("sched:sched_switch", event_trace);
os.trace("irq:softirq_raise", event_trace);

The resulting output:
sched_switch: prev_comm=rcu_sched prev_pid=10 prev_prio=120 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
softirq_raise: vec=1 [action=TIMER]
...

This change could also be used by other tracing tools, such as SystemTap
or LTTng, if they chose to implement it.

This patch introduces struct event_trace_ops in trace_array, which has
two callback functions, pre_trace and do_trace.
When a ftrace_raw_event_<call> function is hit, it calls the
registered event_trace_ops.

The benefit of this change is a kernel size reduction of ~18K.

(The kernel size will shrink further when the perf tracing code is
converted to use this mechanism in the future.)

text data bss dec hex filename
7402131 804364 3149824 11356319 ad489f vmlinux.old
7383115 804684 3149824 11337623 acff97 vmlinux.new

Signed-off-by: zhangwei(Jovi) <[email protected]>
---
include/linux/ftrace_event.h | 21 +++++++++++++
include/linux/trace_array.h | 1 +
include/trace/ftrace.h | 69 +++++++++++++-----------------------------
kernel/trace/trace.c | 4 ++-
kernel/trace/trace.h | 2 ++
kernel/trace/trace_events.c | 51 +++++++++++++++++++++++++++++++
6 files changed, 99 insertions(+), 49 deletions(-)

diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index 4e28b01..4b55272 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -6,6 +6,7 @@
#include <linux/percpu.h>
#include <linux/hardirq.h>
#include <linux/perf_event.h>
+#include <linux/trace_array.h>

struct trace_array;
struct trace_buffer;
@@ -245,6 +246,26 @@ struct ftrace_event_call {
#endif
};

+
+/*
+ * trace_descriptor_t is purpose for passing arguments between
+ * pre_trace and do_trace function.
+ */
+struct trace_descriptor_t {
+ struct ring_buffer_event *event;
+ struct ring_buffer *buffer;
+ unsigned long irq_flags;
+ int pc;
+};
+
+/* callback function for tracing */
+struct event_trace_ops {
+ void *(*pre_trace)(struct ftrace_event_file *file,
+ int entry_size, void *data);
+ void (*do_trace)(struct ftrace_event_file *file, void *entry,
+ int entry_size, void *data);
+};
+
struct trace_array;
struct ftrace_subsystem_dir;

diff --git a/include/linux/trace_array.h b/include/linux/trace_array.h
index c5b7a13..b362c5f 100644
--- a/include/linux/trace_array.h
+++ b/include/linux/trace_array.h
@@ -56,6 +56,7 @@ struct trace_array {
struct list_head list;
char *name;
struct trace_buffer trace_buffer;
+ struct event_trace_ops *ops;
#ifdef CONFIG_TRACER_MAX_TRACE
/*
* The max_buffer is used to snapshot the trace when a maximum
diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
index 4bda044..743e754 100644
--- a/include/trace/ftrace.h
+++ b/include/trace/ftrace.h
@@ -401,41 +401,28 @@ static inline notrace int ftrace_get_offsets_##call( \
*
* static struct ftrace_event_call event_<call>;
*
- * static void ftrace_raw_event_<call>(void *__data, proto)
+ * static notrace void ftrace_raw_event_##call(void *__data, proto)
* {
* struct ftrace_event_file *ftrace_file = __data;
- * struct ftrace_event_call *event_call = ftrace_file->event_call;
- * struct ftrace_data_offsets_<call> __maybe_unused __data_offsets;
- * struct ring_buffer_event *event;
- * struct ftrace_raw_<call> *entry; <-- defined in stage 1
- * struct ring_buffer *buffer;
- * unsigned long irq_flags;
- * int __data_size;
- * int pc;
+ * struct ftrace_data_offsets_##call __maybe_unused __data_offsets;
+ * struct trace_descriptor_t __desc;
+ * struct event_trace_ops *ops = ftrace_file->tr->ops;
+ * struct ftrace_raw_##call *entry; <-- defined in stage 1
+ * int __data_size, __entry_size;
*
- * if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT,
- * &ftrace_file->flags))
- * return;
- *
- * local_save_flags(irq_flags);
- * pc = preempt_count();
- *
- * __data_size = ftrace_get_offsets_<call>(&__data_offsets, args);
+ * __data_size = ftrace_get_offsets_##call(&__data_offsets, args);
+ * __entry_size = sizeof(*entry) + __data_size;
*
- * event = trace_event_buffer_lock_reserve(&buffer, ftrace_file,
- * event_<call>->event.type,
- * sizeof(*entry) + __data_size,
- * irq_flags, pc);
- * if (!event)
+ * entry = ops->pre_trace(ftrace_file, __entry_size, &__desc);
+ * if (!entry)
* return;
- * entry = ring_buffer_event_data(event);
+ *
+ * tstruct
*
* { <assign>; } <-- Here we assign the entries by the __field and
* __array macros.
*
- * if (!filter_current_check_discard(buffer, event_call, entry, event))
- * trace_nowake_buffer_unlock_commit(buffer,
- * event, irq_flags, pc);
+ * ops->do_trace(ftrace_file, entry, __entry_size, &__desc);
* }
*
* static struct trace_event ftrace_event_type_<call> = {
@@ -513,38 +500,24 @@ static notrace void \
ftrace_raw_event_##call(void *__data, proto) \
{ \
struct ftrace_event_file *ftrace_file = __data; \
- struct ftrace_event_call *event_call = ftrace_file->event_call; \
struct ftrace_data_offsets_##call __maybe_unused __data_offsets;\
- struct ring_buffer_event *event; \
+ struct trace_descriptor_t __desc; \
+ struct event_trace_ops *ops = ftrace_file->tr->ops; \
struct ftrace_raw_##call *entry; \
- struct ring_buffer *buffer; \
- unsigned long irq_flags; \
- int __data_size; \
- int pc; \
+ int __data_size, __entry_size; \
\
- if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, \
- &ftrace_file->flags)) \
- return; \
- \
- local_save_flags(irq_flags); \
- pc = preempt_count(); \
+ __data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
+ __entry_size = sizeof(*entry) + __data_size; \
\
- __data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
- \
- event = trace_event_buffer_lock_reserve(&buffer, ftrace_file, \
- event_call->event.type, \
- sizeof(*entry) + __data_size, \
- irq_flags, pc); \
- if (!event) \
+ entry = ops->pre_trace(ftrace_file, __entry_size, &__desc); \
+ if (!entry) \
return; \
- entry = ring_buffer_event_data(event); \
\
tstruct \
\
{ assign; } \
\
- if (!filter_current_check_discard(buffer, event_call, entry, event)) \
- trace_buffer_unlock_commit(buffer, event, irq_flags, pc); \
+ ops->do_trace(ftrace_file, entry, __entry_size, &__desc); \
}
/*
* The ftrace_test_probe is compiled out, it is only here as a build time check
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 829b2be..224b152 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -189,7 +189,7 @@ unsigned long long ns2usecs(cycle_t nsec)
* pages for the buffer for that CPU. Each CPU has the same number
* of pages allocated for its buffer.
*/
-static struct trace_array global_trace;
+static struct trace_array global_trace = {.ops = &ftrace_events_ops};

LIST_HEAD(ftrace_trace_arrays);

@@ -5773,6 +5773,8 @@ static int new_instance_create(const char *name)

list_add(&tr->list, &ftrace_trace_arrays);

+ tr->ops = &ftrace_events_ops;
+
mutex_unlock(&trace_types_lock);

return 0;
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index a8acfcd..0a1f4be 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -493,6 +493,8 @@ extern unsigned long nsecs_to_usecs(unsigned long nsecs);

extern unsigned long tracing_thresh;

+extern struct event_trace_ops ftrace_events_ops;
+
#ifdef CONFIG_TRACER_MAX_TRACE
extern unsigned long tracing_max_latency;

diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 53582e9..09ca479 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -241,6 +241,57 @@ void trace_event_enable_cmd_record(bool enable)
mutex_unlock(&event_mutex);
}

+static void *ftrace_events_pre_trace(struct ftrace_event_file *file,
+ int entry_size, void *data)
+{
+ struct ftrace_event_call *event_call = file->event_call;
+ struct trace_descriptor_t *desc = data;
+ struct ring_buffer_event *event;
+ struct ring_buffer *buffer;
+ unsigned long irq_flags;
+ int pc;
+
+ if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &file->flags))
+ return NULL;
+
+ local_save_flags(irq_flags);
+ pc = preempt_count();
+
+ event = trace_event_buffer_lock_reserve(&buffer, file,
+ event_call->event.type,
+ entry_size, irq_flags, pc);
+
+ if (!event)
+ return NULL;
+
+ desc->event = event;
+ desc->buffer = buffer;
+ desc->irq_flags = irq_flags;
+ desc->pc = pc;
+
+ return ring_buffer_event_data(event);
+}
+
+static void ftrace_events_do_trace(struct ftrace_event_file *file, void *entry,
+ int entry_size, void *data)
+{
+ struct ftrace_event_call *event_call = file->event_call;
+ struct trace_descriptor_t *desc = data;
+ struct ring_buffer_event *event = desc->event;
+ struct ring_buffer *buffer = desc->buffer;
+ unsigned long irq_flags = desc->irq_flags;
+ int pc = desc->pc;
+
+ if (!filter_current_check_discard(buffer, event_call, entry, event))
+ trace_buffer_unlock_commit(buffer, event, irq_flags, pc);
+}
+
+struct event_trace_ops ftrace_events_ops = {
+ .pre_trace = ftrace_events_pre_trace,
+ .do_trace = ftrace_events_do_trace,
+};
+
+
static int __ftrace_event_enable_disable(struct ftrace_event_file *file,
int enable, int soft_disable)
{
--
1.7.9.7

2013-04-10 03:27:51

by zhangwei(Jovi)

Subject: [PATCH v3 01/12] tracing: move trace_array definition into include/linux/trace_array.h

From: "zhangwei(Jovi)" <[email protected]>

Prepare for exposing the event tracing infrastructure.
(struct trace_array will be used by external modules.)

Signed-off-by: zhangwei(Jovi) <[email protected]>
---
include/linux/trace_array.h | 117 +++++++++++++++++++++++++++++++++++++++++++
kernel/trace/trace.h | 116 +-----------------------------------------
2 files changed, 118 insertions(+), 115 deletions(-)
create mode 100644 include/linux/trace_array.h

diff --git a/include/linux/trace_array.h b/include/linux/trace_array.h
new file mode 100644
index 0000000..c5b7a13
--- /dev/null
+++ b/include/linux/trace_array.h
@@ -0,0 +1,117 @@
+#ifndef _LINUX_KERNEL_TRACE_ARRAY_H
+#define _LINUX_KERNEL_TRACE_ARRAY_H
+
+#ifdef CONFIG_FTRACE_SYSCALLS
+#include <asm/unistd.h> /* For NR_SYSCALLS */
+#include <asm/syscall.h> /* some archs define it here */
+#endif
+
+struct trace_cpu {
+ struct trace_array *tr;
+ struct dentry *dir;
+ int cpu;
+};
+
+/*
+ * The CPU trace array - it consists of thousands of trace entries
+ * plus some other descriptor data: (for example which task started
+ * the trace, etc.)
+ */
+struct trace_array_cpu {
+ struct trace_cpu trace_cpu;
+ atomic_t disabled;
+ void *buffer_page; /* ring buffer spare */
+
+ unsigned long entries;
+ unsigned long saved_latency;
+ unsigned long critical_start;
+ unsigned long critical_end;
+ unsigned long critical_sequence;
+ unsigned long nice;
+ unsigned long policy;
+ unsigned long rt_priority;
+ unsigned long skipped_entries;
+ cycle_t preempt_timestamp;
+ pid_t pid;
+ kuid_t uid;
+ char comm[TASK_COMM_LEN];
+};
+
+struct tracer;
+
+struct trace_buffer {
+ struct trace_array *tr;
+ struct ring_buffer *buffer;
+ struct trace_array_cpu __percpu *data;
+ cycle_t time_start;
+ int cpu;
+};
+
+/*
+ * The trace array - an array of per-CPU trace arrays. This is the
+ * highest level data structure that individual tracers deal with.
+ * They have on/off state as well:
+ */
+struct trace_array {
+ struct list_head list;
+ char *name;
+ struct trace_buffer trace_buffer;
+#ifdef CONFIG_TRACER_MAX_TRACE
+ /*
+ * The max_buffer is used to snapshot the trace when a maximum
+ * latency is reached, or when the user initiates a snapshot.
+ * Some tracers will use this to store a maximum trace while
+ * it continues examining live traces.
+ *
+ * The buffers for the max_buffer are set up the same as the trace_buffer
+ * When a snapshot is taken, the buffer of the max_buffer is swapped
+ * with the buffer of the trace_buffer and the buffers are reset for
+ * the trace_buffer so the tracing can continue.
+ */
+ struct trace_buffer max_buffer;
+ bool allocated_snapshot;
+#endif
+ int buffer_disabled;
+ struct trace_cpu trace_cpu; /* place holder */
+#ifdef CONFIG_FTRACE_SYSCALLS
+ int sys_refcount_enter;
+ int sys_refcount_exit;
+ DECLARE_BITMAP(enabled_enter_syscalls, NR_syscalls);
+ DECLARE_BITMAP(enabled_exit_syscalls, NR_syscalls);
+#endif
+ int stop_count;
+ int clock_id;
+ struct tracer *current_trace;
+ unsigned int flags;
+ raw_spinlock_t start_lock;
+ struct dentry *dir;
+ struct dentry *options;
+ struct dentry *percpu_dir;
+ struct dentry *event_dir;
+ struct list_head systems;
+ struct list_head events;
+ struct task_struct *waiter;
+ int ref;
+};
+
+enum {
+ TRACE_ARRAY_FL_GLOBAL = (1 << 0)
+};
+
+extern struct list_head ftrace_trace_arrays;
+
+/*
+ * The global tracer (top) should be the first trace array added,
+ * but we check the flag anyway.
+ */
+static inline struct trace_array *top_trace_array(void)
+{
+ struct trace_array *tr;
+
+ tr = list_entry(ftrace_trace_arrays.prev,
+ typeof(*tr), list);
+ WARN_ON(!(tr->flags & TRACE_ARRAY_FL_GLOBAL));
+ return tr;
+}
+
+#endif /* _LINUX_KERNEL_TRACE_ARRAY_H */
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 9e01458..a8acfcd 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -12,11 +12,7 @@
#include <linux/hw_breakpoint.h>
#include <linux/trace_seq.h>
#include <linux/ftrace_event.h>
-
-#ifdef CONFIG_FTRACE_SYSCALLS
-#include <asm/unistd.h> /* For NR_SYSCALLS */
-#include <asm/syscall.h> /* some archs define it here */
-#endif
+#include <linux/trace_array.h>

enum trace_type {
__TRACE_FIRST_TYPE = 0,
@@ -133,116 +129,6 @@ enum trace_flag_type {

#define TRACE_BUF_SIZE 1024

-struct trace_array;
-
-struct trace_cpu {
- struct trace_array *tr;
- struct dentry *dir;
- int cpu;
-};
-
-/*
- * The CPU trace array - it consists of thousands of trace entries
- * plus some other descriptor data: (for example which task started
- * the trace, etc.)
- */
-struct trace_array_cpu {
- struct trace_cpu trace_cpu;
- atomic_t disabled;
- void *buffer_page; /* ring buffer spare */
-
- unsigned long entries;
- unsigned long saved_latency;
- unsigned long critical_start;
- unsigned long critical_end;
- unsigned long critical_sequence;
- unsigned long nice;
- unsigned long policy;
- unsigned long rt_priority;
- unsigned long skipped_entries;
- cycle_t preempt_timestamp;
- pid_t pid;
- kuid_t uid;
- char comm[TASK_COMM_LEN];
-};
-
-struct tracer;
-
-struct trace_buffer {
- struct trace_array *tr;
- struct ring_buffer *buffer;
- struct trace_array_cpu __percpu *data;
- cycle_t time_start;
- int cpu;
-};
-
-/*
- * The trace array - an array of per-CPU trace arrays. This is the
- * highest level data structure that individual tracers deal with.
- * They have on/off state as well:
- */
-struct trace_array {
- struct list_head list;
- char *name;
- struct trace_buffer trace_buffer;
-#ifdef CONFIG_TRACER_MAX_TRACE
- /*
- * The max_buffer is used to snapshot the trace when a maximum
- * latency is reached, or when the user initiates a snapshot.
- * Some tracers will use this to store a maximum trace while
- * it continues examining live traces.
- *
- * The buffers for the max_buffer are set up the same as the trace_buffer
- * When a snapshot is taken, the buffer of the max_buffer is swapped
- * with the buffer of the trace_buffer and the buffers are reset for
- * the trace_buffer so the tracing can continue.
- */
- struct trace_buffer max_buffer;
- bool allocated_snapshot;
-#endif
- int buffer_disabled;
- struct trace_cpu trace_cpu; /* place holder */
-#ifdef CONFIG_FTRACE_SYSCALLS
- int sys_refcount_enter;
- int sys_refcount_exit;
- DECLARE_BITMAP(enabled_enter_syscalls, NR_syscalls);
- DECLARE_BITMAP(enabled_exit_syscalls, NR_syscalls);
-#endif
- int stop_count;
- int clock_id;
- struct tracer *current_trace;
- unsigned int flags;
- raw_spinlock_t start_lock;
- struct dentry *dir;
- struct dentry *options;
- struct dentry *percpu_dir;
- struct dentry *event_dir;
- struct list_head systems;
- struct list_head events;
- struct task_struct *waiter;
- int ref;
-};
-
-enum {
- TRACE_ARRAY_FL_GLOBAL = (1 << 0)
-};
-
-extern struct list_head ftrace_trace_arrays;
-
-/*
- * The global tracer (top) should be the first trace array added,
- * but we check the flag anyway.
- */
-static inline struct trace_array *top_trace_array(void)
-{
- struct trace_array *tr;
-
- tr = list_entry(ftrace_trace_arrays.prev,
- typeof(*tr), list);
- WARN_ON(!(tr->flags & TRACE_ARRAY_FL_GLOBAL));
- return tr;
-}
-
#define FTRACE_CMP_TYPE(var, type) \
__builtin_types_compatible_p(typeof(var), type *)

--
1.7.9.7

2013-04-10 03:28:25

by zhangwei(Jovi)

Subject: [PATCH v3 10/12] tracing: use per trace_array clock_id instead of global trace_clock_id

From: "zhangwei(Jovi)" <[email protected]>

The tracing clock id has already been changed into a per-trace_array
variable, but the code still uses the global trace_clock_id, whose value
is now always 0.

Signed-off-by: zhangwei(Jovi) <[email protected]>
---
kernel/trace/trace.c | 8 +++-----
kernel/trace/trace.h | 2 --
2 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index dd0c122..ee4e110 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -652,8 +652,6 @@ static struct {
ARCH_TRACE_CLOCKS
};

-int trace_clock_id;
-
/*
* trace_parser_get_init - gets the buffer for trace parser
*/
@@ -2806,7 +2804,7 @@ __tracing_open(struct inode *inode, struct file *file, bool snapshot)
iter->iter_flags |= TRACE_FILE_ANNOTATE;

/* Output in nanoseconds only if we are using a clock in nanoseconds. */
- if (trace_clocks[trace_clock_id].in_ns)
+ if (trace_clocks[tr->clock_id].in_ns)
iter->iter_flags |= TRACE_FILE_TIME_IN_NS;

/* stop the trace while dumping if we are not opening "snapshot" */
@@ -3805,7 +3803,7 @@ static int tracing_open_pipe(struct inode *inode, struct file *filp)
iter->iter_flags |= TRACE_FILE_LAT_FMT;

/* Output in nanoseconds only if we are using a clock in nanoseconds. */
- if (trace_clocks[trace_clock_id].in_ns)
+ if (trace_clocks[tr->clock_id].in_ns)
iter->iter_flags |= TRACE_FILE_TIME_IN_NS;

iter->cpu_file = tc->cpu;
@@ -5075,7 +5073,7 @@ tracing_stats_read(struct file *filp, char __user *ubuf,
cnt = ring_buffer_bytes_cpu(trace_buf->buffer, cpu);
trace_seq_printf(s, "bytes: %ld\n", cnt);

- if (trace_clocks[trace_clock_id].in_ns) {
+ if (trace_clocks[tr->clock_id].in_ns) {
/* local or global for trace_clock */
t = ns2usecs(ring_buffer_oldest_event_ts(trace_buf->buffer, cpu));
usec_rem = do_div(t, USEC_PER_SEC);
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index bb3fd1b..9b8afa7 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -588,8 +588,6 @@ enum print_line_t print_trace_line(struct trace_iterator *iter);

extern unsigned long trace_flags;

-extern int trace_clock_id;
-
/* Standard output formatting function used for function return traces */
#ifdef CONFIG_FUNCTION_GRAPH_TRACER

--
1.7.9.7

2013-04-10 03:27:24

by zhangwei(Jovi)

Subject: [PATCH v3 12/12] libtraceevent: add libtraceevent prefix in warning message

From: "zhangwei(Jovi)" <[email protected]>

When using tracepoints with perf, perf outputs warning messages that make
it hard to understand what is wrong:

[root@jovi perf]# ./perf stat -e timer:* ls
Warning: unknown op '{'
Warning: unknown op '{'
...

These warning messages actually come from the libtraceevent format
parsing code.

So add a "libtraceevent" prefix to identify the source more clearly.

(In the future we should remove these warnings when running perf stat;
it is not necessary to parse the event format for perf stat.)

Signed-off-by: zhangwei(Jovi) <[email protected]>
---
tools/lib/traceevent/event-parse.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/lib/traceevent/event-parse.c b/tools/lib/traceevent/event-parse.c
index 82b0606..a3971d2 100644
--- a/tools/lib/traceevent/event-parse.c
+++ b/tools/lib/traceevent/event-parse.c
@@ -47,7 +47,7 @@ static int show_warning = 1;
#define do_warning(fmt, ...) \
do { \
if (show_warning) \
- warning(fmt, ##__VA_ARGS__); \
+ warning("libtraceevent: "fmt, ##__VA_ARGS__); \
} while (0)

static void init_input_buf(const char *buf, unsigned long long size)
--
1.7.9.7

2013-04-10 03:27:22

by zhangwei(Jovi)

Subject: [PATCH v3 07/12] tracing: remove TRACE_EVENT_TYPE enum definition

From: "zhangwei(Jovi)" <[email protected]>

The TRACE_EVENT_TYPE enum is currently unused; remove it.

Signed-off-by: zhangwei(Jovi) <[email protected]>
---
kernel/trace/trace.h | 6 ------
1 file changed, 6 deletions(-)

diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 89da073..9964695 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -794,12 +794,6 @@ static inline void trace_branch_disable(void)
/* set ring buffers to default size if not already done so */
int tracing_update_buffers(void);

-/* trace event type bit fields, not numeric */
-enum {
- TRACE_EVENT_TYPE_PRINTF = 1,
- TRACE_EVENT_TYPE_RAW = 2,
-};
-
struct event_filter {
int n_preds; /* Number assigned */
int a_preds; /* allocated */
--
1.7.9.7

2013-04-10 03:29:11

by zhangwei(Jovi)

Subject: [PATCH v3 02/12] tracing: fix irqs-off tag display in syscall tracing

From: "zhangwei(Jovi)" <[email protected]>

Currently the irqs-off tag of every syscall tracing entry is wrong;
the syscall enter path does not actually disable irqs.

[root@jovi tracing]#echo "syscalls:sys_enter_open" > set_event
[root@jovi tracing]# cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 13/13 #P:2
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
irqbalance-513 [000] d... 56115.496766: sys_open(filename: 804e1a6, flags: 0, mode: 1b6)
irqbalance-513 [000] d... 56115.497008: sys_open(filename: 804e1bb, flags: 0, mode: 1b6)
sendmail-771 [000] d... 56115.827982: sys_open(filename: b770e6d1, flags: 0, mode: 1b6)

The reason is that syscall tracing does not record irq_flags in the buffer.
After this patch:

[root@jovi tracing]#echo "syscalls:sys_enter_open" > set_event
[root@jovi tracing]# cat trace
# tracer: nop
#
# entries-in-buffer/entries-written: 14/14 #P:2
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
irqbalance-514 [001] .... 46.213921: sys_open(filename: 804e1a6, flags: 0, mode: 1b6)
irqbalance-514 [001] .... 46.214160: sys_open(filename: 804e1bb, flags: 0, mode: 1b6)
<...>-920 [001] .... 47.307260: sys_open(filename: 4e82a0c5, flags: 80000, mode: 0)

Signed-off-by: zhangwei(Jovi) <[email protected]>
---
kernel/trace/trace_syscalls.c | 21 +++++++++++++++++----
1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index 8f2ac73..322e164 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -306,6 +306,8 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
struct syscall_metadata *sys_data;
struct ring_buffer_event *event;
struct ring_buffer *buffer;
+ unsigned long irq_flags;
+ int pc;
int syscall_nr;
int size;

@@ -321,9 +323,12 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)

size = sizeof(*entry) + sizeof(unsigned long) * sys_data->nb_args;

+ local_save_flags(irq_flags);
+ pc = preempt_count();
+
buffer = tr->trace_buffer.buffer;
event = trace_buffer_lock_reserve(buffer,
- sys_data->enter_event->event.type, size, 0, 0);
+ sys_data->enter_event->event.type, size, irq_flags, pc);
if (!event)
return;

@@ -333,7 +338,8 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)

if (!filter_current_check_discard(buffer, sys_data->enter_event,
entry, event))
- trace_current_buffer_unlock_commit(buffer, event, 0, 0);
+ trace_current_buffer_unlock_commit(buffer, event,
+ irq_flags, pc);
}

static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
@@ -343,6 +349,8 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
struct syscall_metadata *sys_data;
struct ring_buffer_event *event;
struct ring_buffer *buffer;
+ unsigned long irq_flags;
+ int pc;
int syscall_nr;

syscall_nr = trace_get_syscall_nr(current, regs);
@@ -355,9 +363,13 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
if (!sys_data)
return;

+ local_save_flags(irq_flags);
+ pc = preempt_count();
+
buffer = tr->trace_buffer.buffer;
event = trace_buffer_lock_reserve(buffer,
- sys_data->exit_event->event.type, sizeof(*entry), 0, 0);
+ sys_data->exit_event->event.type, sizeof(*entry),
+ irq_flags, pc);
if (!event)
return;

@@ -367,7 +379,8 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)

if (!filter_current_check_discard(buffer, sys_data->exit_event,
entry, event))
- trace_current_buffer_unlock_commit(buffer, event, 0, 0);
+ trace_current_buffer_unlock_commit(buffer, event,
+ irq_flags, pc);
}

static int reg_event_syscall_enter(struct ftrace_event_file *file,
--
1.7.9.7

2013-04-10 03:29:30

by zhangwei(Jovi)

[permalink] [raw]
Subject: [PATCH v3 06/12] tracing: expose structure ftrace_event_field

From: "zhangwei(Jovi)" <[email protected]>

Event tracing field information is currently stored only in
struct ftrace_event_field, which is defined in the internal trace.h.
Move ftrace_event_field into include/linux/ftrace_event.h so that
external modules (like ktap) can use this structure to parse event
fields.

Signed-off-by: zhangwei(Jovi) <[email protected]>
---
include/linux/ftrace_event.h | 10 ++++++++++
kernel/trace/trace.h | 10 ----------
2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
index f6a6e48..ee4dc8d 100644
--- a/include/linux/ftrace_event.h
+++ b/include/linux/ftrace_event.h
@@ -176,6 +176,16 @@ enum trace_reg {
#endif
};

+struct ftrace_event_field {
+ struct list_head link;
+ const char *name;
+ const char *type;
+ int filter_type;
+ int offset;
+ int size;
+ int is_signed;
+};
+
struct ftrace_event_call;

struct ftrace_event_class {
diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
index 8f4966b..89da073 100644
--- a/kernel/trace/trace.h
+++ b/kernel/trace/trace.h
@@ -800,16 +800,6 @@ enum {
TRACE_EVENT_TYPE_RAW = 2,
};

-struct ftrace_event_field {
- struct list_head link;
- const char *name;
- const char *type;
- int filter_type;
- int offset;
- int size;
- int is_signed;
-};
-
struct event_filter {
int n_preds; /* Number assigned */
int a_preds; /* allocated */
--
1.7.9.7

2013-04-10 03:29:49

by zhangwei(Jovi)

[permalink] [raw]
Subject: [PATCH v3 05/12] tracing: switch syscall tracing to use event_trace_ops backend

From: "zhangwei(Jovi)" <[email protected]>

Other tracepoints have already switched to event_trace_ops as the
backend store mechanism; syscall tracing can use the same backend.

This change also exposes syscall tracing to external modules through
the same interface as other tracepoints.

Signed-off-by: zhangwei(Jovi) <[email protected]>
---
kernel/trace/trace_syscalls.c | 49 ++++++++++++++---------------------------
1 file changed, 16 insertions(+), 33 deletions(-)

diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
index 322e164..72675b1 100644
--- a/kernel/trace/trace_syscalls.c
+++ b/kernel/trace/trace_syscalls.c
@@ -302,12 +302,10 @@ static int __init syscall_exit_define_fields(struct ftrace_event_call *call)
static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
{
struct trace_array *tr = data;
+ struct ftrace_event_file event_file;
+ struct trace_descriptor_t desc;
struct syscall_trace_enter *entry;
struct syscall_metadata *sys_data;
- struct ring_buffer_event *event;
- struct ring_buffer *buffer;
- unsigned long irq_flags;
- int pc;
int syscall_nr;
int size;

@@ -323,34 +321,26 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)

size = sizeof(*entry) + sizeof(unsigned long) * sys_data->nb_args;

- local_save_flags(irq_flags);
- pc = preempt_count();
-
- buffer = tr->trace_buffer.buffer;
- event = trace_buffer_lock_reserve(buffer,
- sys_data->enter_event->event.type, size, irq_flags, pc);
- if (!event)
+ event_file.tr = tr;
+ event_file.event_call = sys_data->enter_event;
+ event_file.flags = FTRACE_EVENT_FL_ENABLED;
+ entry = tr->ops->pre_trace(&event_file, size, &desc);
+ if (!entry)
return;

- entry = ring_buffer_event_data(event);
entry->nr = syscall_nr;
syscall_get_arguments(current, regs, 0, sys_data->nb_args, entry->args);

- if (!filter_current_check_discard(buffer, sys_data->enter_event,
- entry, event))
- trace_current_buffer_unlock_commit(buffer, event,
- irq_flags, pc);
+ tr->ops->do_trace(&event_file, entry, size, &desc);
}

static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
{
struct trace_array *tr = data;
+ struct ftrace_event_file event_file;
+ struct trace_descriptor_t desc;
struct syscall_trace_exit *entry;
struct syscall_metadata *sys_data;
- struct ring_buffer_event *event;
- struct ring_buffer *buffer;
- unsigned long irq_flags;
- int pc;
int syscall_nr;

syscall_nr = trace_get_syscall_nr(current, regs);
@@ -363,24 +353,17 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
if (!sys_data)
return;

- local_save_flags(irq_flags);
- pc = preempt_count();
-
- buffer = tr->trace_buffer.buffer;
- event = trace_buffer_lock_reserve(buffer,
- sys_data->exit_event->event.type, sizeof(*entry),
- irq_flags, pc);
- if (!event)
+ event_file.tr = tr;
+ event_file.event_call = sys_data->exit_event;
+ event_file.flags = FTRACE_EVENT_FL_ENABLED;
+ entry = tr->ops->pre_trace(&event_file, sizeof(*entry), &desc);
+ if (!entry)
return;

- entry = ring_buffer_event_data(event);
entry->nr = syscall_nr;
entry->ret = syscall_get_return_value(current, regs);

- if (!filter_current_check_discard(buffer, sys_data->exit_event,
- entry, event))
- trace_current_buffer_unlock_commit(buffer, event,
- irq_flags, pc);
+ tr->ops->do_trace(&event_file, entry, sizeof(*entry), &desc);
}

static int reg_event_syscall_enter(struct ftrace_event_file *file,
--
1.7.9.7

2013-04-10 15:08:20

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH v3 00/12] event tracing expose change and bugfix/cleanup

On Wed, 2013-04-10 at 11:26 +0800, zhangwei(Jovi) wrote:
> From: "zhangwei(Jovi)" <[email protected]>
>
> Hi steven,
>
> I have reworked this patchset again with minor change.
> [v2 -> v3:
> - change trace_descriptor_t definition in patch 3
> - new patch "export ftrace_events"
> - remove patch "export syscall metadata"
> (syscall tracing uses the same event_trace_ops backend as normal event
> tracepoints, so there's no need to export anything for syscalls)
> - remove private data field in ftrace_event_file struct (also not needed)
> ]

Thanks,

Note, I'm trying to catch up on my -rt responsibilities, and most likely
won't get to this this week, and next week I'll be at collaboration
summit. It may not be till after I get back from that that I'll have a
chance to look at these.

Depending on when Linus opens the next merge window, even if everything
goes fine, this patch set may not make it into 3.10, and will have to
wait till 3.11.

Just giving you a heads up.

-- Steve

2013-04-11 03:32:14

by zhangwei(Jovi)

[permalink] [raw]
Subject: Re: [PATCH v3 00/12] event tracing expose change and bugfix/cleanup

On 2013/4/10 23:08, Steven Rostedt wrote:
> On Wed, 2013-04-10 at 11:26 +0800, zhangwei(Jovi) wrote:
>> From: "zhangwei(Jovi)" <[email protected]>
>>
>> Hi steven,
>>
>> I have reworked this patchset again with minor change.
>> [v2 -> v3:
>> - change trace_descriptor_t definition in patch 3
>> - new patch "export ftrace_events"
>> - remove patch "export syscall metadata"
>> (syscall tracing uses the same event_trace_ops backend as normal event
>> tracepoints, so there's no need to export anything for syscalls)
>> - remove private data field in ftrace_event_file struct (also not needed)
>> ]
>
> Thanks,
>
> Note, I'm trying to catch up on my -rt responsibilities, and most likely
> won't get to this this week, and next week I'll be at collaboration
> summit. It may not be till after I get back from that, that I'll have a
> chance to look at these.
>
> Depending on when Linus opens the next merge window, even if everything
> goes fine, this patch set may not make it into 3.10, and will have to
> wait till 3.11.
>
> Just giving you a heads up.
That's fine with me; let's review this patch set later.
Thanks.

2013-07-02 23:16:28

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH v3 00/12] event tracing expose change and bugfix/cleanup

On Wed, 2013-04-10 at 11:08 -0400, Steven Rostedt wrote:
> On Wed, 2013-04-10 at 11:26 +0800, zhangwei(Jovi) wrote:
> > From: "zhangwei(Jovi)" <[email protected]>
> >
> > Hi steven,
> >
> > I have reworked this patchset again with minor change.
> > [v2 -> v3:
> > - change trace_descriptor_t definition in patch 3
> > - new patch "export ftrace_events"
> > - remove patch "export syscall metadata"
> > (syscall tracing uses the same event_trace_ops backend as normal event
> > tracepoints, so there's no need to export anything for syscalls)
> > - remove private data field in ftrace_event_file struct (also not needed)
> > ]
>
> Thanks,
>
> Note, I'm trying to catch up on my -rt responsibilities, and most likely
> won't get to this this week, and next week I'll be at collaboration
> summit. It may not be till after I get back from that, that I'll have a
> chance to look at these.
>
> Depending on when Linus opens the next merge window, even if everything
> goes fine, this patch set may not make it into 3.10, and will have to
> wait till 3.11.

Sorry, I've been buried in other work, and going through the TODO list in
my inbox I've stumbled on this. If anything, it will have to wait for
3.12, but I have some questions about this patch set that need to be
answered first. I'll reply to the individual patches.

-- Steve

2013-07-02 23:19:54

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH v3 01/12] tracing: move trace_array definition into include/linux/trace_array.h

On Wed, 2013-04-10 at 11:26 +0800, zhangwei(Jovi) wrote:
> From: "zhangwei(Jovi)" <[email protected]>
>
> Prepare to expose the event tracing infrastructure.
> (struct trace_array will be used by external modules)
>

What module is going to be using this?

-- Steve

2013-07-02 23:25:28

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH v3 02/12] tracing: fix irqs-off tag display in syscall tracing

On Wed, 2013-04-10 at 11:26 +0800, zhangwei(Jovi) wrote:
> From: "zhangwei(Jovi)" <[email protected]>
>
> Currently the irqs-off tag is wrong for all syscall tracing output;
> the syscall enter path does not actually disable IRQs.
>
> [root@jovi tracing]#echo "syscalls:sys_enter_open" > set_event
> [root@jovi tracing]# cat trace
> # tracer: nop
> #
> # entries-in-buffer/entries-written: 13/13 #P:2
> #
> # _-----=> irqs-off
> # / _----=> need-resched
> # | / _---=> hardirq/softirq
> # || / _--=> preempt-depth
> # ||| / delay
> # TASK-PID CPU# |||| TIMESTAMP FUNCTION
> # | | | |||| | |
> irqbalance-513 [000] d... 56115.496766: sys_open(filename: 804e1a6, flags: 0, mode: 1b6)
> irqbalance-513 [000] d... 56115.497008: sys_open(filename: 804e1bb, flags: 0, mode: 1b6)
> sendmail-771 [000] d... 56115.827982: sys_open(filename: b770e6d1, flags: 0, mode: 1b6)
>
> The reason is that syscall tracing does not record irq_flags into the buffer.
> After this patch the tag is correct:
>
> [root@jovi tracing]#echo "syscalls:sys_enter_open" > set_event
> [root@jovi tracing]# cat trace
> # tracer: nop
> #
> # entries-in-buffer/entries-written: 14/14 #P:2
> #
> # _-----=> irqs-off
> # / _----=> need-resched
> # | / _---=> hardirq/softirq
> # || / _--=> preempt-depth
> # ||| / delay
> # TASK-PID CPU# |||| TIMESTAMP FUNCTION
> # | | | |||| | |
> irqbalance-514 [001] .... 46.213921: sys_open(filename: 804e1a6, flags: 0, mode: 1b6)
> irqbalance-514 [001] .... 46.214160: sys_open(filename: 804e1bb, flags: 0, mode: 1b6)
> <...>-920 [001] .... 47.307260: sys_open(filename: 4e82a0c5, flags: 80000, mode: 0)
>
> Signed-off-by: zhangwei(Jovi) <[email protected]>
> ---

I'll pull this one in for 3.11 and mark for stable.

Thanks,

-- Steve

2013-07-02 23:35:19

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH v3 03/12] tracing: expose event tracing infrastructure

On Wed, 2013-04-10 at 11:26 +0800, zhangwei(Jovi) wrote:
> From: "zhangwei(Jovi)" <[email protected]>
>
> Currently event tracing can only be used by ftrace and perf; there is
> no mechanism to let modules (like external tracing tools) register a
> tracing callback function.
>
> Event tracing is implemented on top of tracepoints. Compared with raw
> tracepoints, the event tracing infrastructure provides a built-in
> structured event annotation format, and this feature should be exposed
> to external users.
>
> For example, this simple pseudo ktap script demonstrates how to use
> the change.

Ah, it's for ktap.

Let's work on getting ktap into mainline first ;-)

>
> function event_trace(e)
> {
> printf("%s", e.annotate);
> }
>
> os.trace("sched:sched_switch", event_trace);
> os.trace("irq:softirq_raise", event_trace);
>
> The running result:
> sched_switch: prev_comm=rcu_sched prev_pid=10 prev_prio=120 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
> softirq_raise: vec=1 [action=TIMER]
> ...
>
> This change could also be used by other tracing tools, like
> SystemTap/LTTng, if they chose to implement it.
>
> This patch introduces struct event_trace_ops in trace_array, with two
> callback functions, pre_trace and do_trace. When a
> ftrace_raw_event_<call> function is hit, it calls all registered
> event_trace_ops.
>
> One benefit of this change is a kernel size shrink of ~18K

Now this is something that I would be more interested in having.

>
> (the kernel size will shrink further once the perf tracing code is
> converted to use this mechanism in the future)
>
> text data bss dec hex filename
> 7402131 804364 3149824 11356319 ad489f vmlinux.old
> 7383115 804684 3149824 11337623 acff97 vmlinux.new
>
> Signed-off-by: zhangwei(Jovi) <[email protected]>
> ---
> include/linux/ftrace_event.h | 21 +++++++++++++
> include/linux/trace_array.h | 1 +
> include/trace/ftrace.h | 69 +++++++++++++-----------------------------
> kernel/trace/trace.c | 4 ++-
> kernel/trace/trace.h | 2 ++
> kernel/trace/trace_events.c | 51 +++++++++++++++++++++++++++++++
> 6 files changed, 99 insertions(+), 49 deletions(-)
>
> diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
> index 4e28b01..4b55272 100644
> --- a/include/linux/ftrace_event.h
> +++ b/include/linux/ftrace_event.h
> @@ -6,6 +6,7 @@
> #include <linux/percpu.h>
> #include <linux/hardirq.h>
> #include <linux/perf_event.h>
> +#include <linux/trace_array.h>
>
> struct trace_array;
> struct trace_buffer;
> @@ -245,6 +246,26 @@ struct ftrace_event_call {
> #endif
> };
>
> +
> +/*
> + * trace_descriptor_t is purpose for passing arguments between
> + * pre_trace and do_trace function.
> + */
> +struct trace_descriptor_t {
> + struct ring_buffer_event *event;
> + struct ring_buffer *buffer;
> + unsigned long irq_flags;
> + int pc;
> +};
> +
> +/* callback function for tracing */
> +struct event_trace_ops {
> + void *(*pre_trace)(struct ftrace_event_file *file,
> + int entry_size, void *data);
> + void (*do_trace)(struct ftrace_event_file *file, void *entry,
> + int entry_size, void *data);
> +};
> +
> struct trace_array;
> struct ftrace_subsystem_dir;
>
> diff --git a/include/linux/trace_array.h b/include/linux/trace_array.h
> index c5b7a13..b362c5f 100644
> --- a/include/linux/trace_array.h
> +++ b/include/linux/trace_array.h
> @@ -56,6 +56,7 @@ struct trace_array {
> struct list_head list;
> char *name;
> struct trace_buffer trace_buffer;
> + struct event_trace_ops *ops;
> #ifdef CONFIG_TRACER_MAX_TRACE
> /*
> * The max_buffer is used to snapshot the trace when a maximum
> diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
> index 4bda044..743e754 100644
> --- a/include/trace/ftrace.h
> +++ b/include/trace/ftrace.h
> @@ -401,41 +401,28 @@ static inline notrace int ftrace_get_offsets_##call( \
> *
> * static struct ftrace_event_call event_<call>;
> *
> - * static void ftrace_raw_event_<call>(void *__data, proto)
> + * static notrace void ftrace_raw_event_##call(void *__data, proto)
> * {
> * struct ftrace_event_file *ftrace_file = __data;
> - * struct ftrace_event_call *event_call = ftrace_file->event_call;
> - * struct ftrace_data_offsets_<call> __maybe_unused __data_offsets;
> - * struct ring_buffer_event *event;
> - * struct ftrace_raw_<call> *entry; <-- defined in stage 1
> - * struct ring_buffer *buffer;
> - * unsigned long irq_flags;
> - * int __data_size;
> - * int pc;
> + * struct ftrace_data_offsets_##call __maybe_unused __data_offsets;
> + * struct trace_descriptor_t __desc;
> + * struct event_trace_ops *ops = ftrace_file->tr->ops;
> + * struct ftrace_raw_##call *entry; <-- defined in stage 1
> + * int __data_size, __entry_size;
> *
> - * if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT,
> - * &ftrace_file->flags))
> - * return;
> - *
> - * local_save_flags(irq_flags);
> - * pc = preempt_count();
> - *
> - * __data_size = ftrace_get_offsets_<call>(&__data_offsets, args);
> + * __data_size = ftrace_get_offsets_##call(&__data_offsets, args);
> + * __entry_size = sizeof(*entry) + __data_size;
> *
> - * event = trace_event_buffer_lock_reserve(&buffer, ftrace_file,
> - * event_<call>->event.type,
> - * sizeof(*entry) + __data_size,
> - * irq_flags, pc);
> - * if (!event)
> + * entry = ops->pre_trace(ftrace_file, __entry_size, &__desc);
> + * if (!entry)
> * return;
> - * entry = ring_buffer_event_data(event);
> + *
> + * tstruct
> *
> * { <assign>; } <-- Here we assign the entries by the __field and
> * __array macros.
> *
> - * if (!filter_current_check_discard(buffer, event_call, entry, event))
> - * trace_nowake_buffer_unlock_commit(buffer,
> - * event, irq_flags, pc);
> + * ops->do_trace(ftrace_file, entry, __entry_size, &__desc);
> * }
> *
> * static struct trace_event ftrace_event_type_<call> = {
> @@ -513,38 +500,24 @@ static notrace void \
> ftrace_raw_event_##call(void *__data, proto) \
> { \
> struct ftrace_event_file *ftrace_file = __data; \
> - struct ftrace_event_call *event_call = ftrace_file->event_call; \
> struct ftrace_data_offsets_##call __maybe_unused __data_offsets;\
> - struct ring_buffer_event *event; \
> + struct trace_descriptor_t __desc; \
> + struct event_trace_ops *ops = ftrace_file->tr->ops; \
> struct ftrace_raw_##call *entry; \
> - struct ring_buffer *buffer; \
> - unsigned long irq_flags; \
> - int __data_size; \
> - int pc; \
> + int __data_size, __entry_size; \
> \
> - if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, \
> - &ftrace_file->flags)) \
> - return; \
> - \
> - local_save_flags(irq_flags); \
> - pc = preempt_count(); \
> + __data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
> + __entry_size = sizeof(*entry) + __data_size; \
> \
> - __data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
> - \
> - event = trace_event_buffer_lock_reserve(&buffer, ftrace_file, \
> - event_call->event.type, \
> - sizeof(*entry) + __data_size, \
> - irq_flags, pc); \
> - if (!event) \
> + entry = ops->pre_trace(ftrace_file, __entry_size, &__desc); \
> + if (!entry) \
> return; \
> - entry = ring_buffer_event_data(event); \
> \
> tstruct \
> \
> { assign; } \
> \
> - if (!filter_current_check_discard(buffer, event_call, entry, event)) \
> - trace_buffer_unlock_commit(buffer, event, irq_flags, pc); \
> + ops->do_trace(ftrace_file, entry, __entry_size, &__desc); \
> }

Hmm, this is a major change. Something that will definitely have to wait
for 3.12.

-- Steve

> /*
> * The ftrace_test_probe is compiled out, it is only here as a build time check
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 829b2be..224b152 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -189,7 +189,7 @@ unsigned long long ns2usecs(cycle_t nsec)
> * pages for the buffer for that CPU. Each CPU has the same number
> * of pages allocated for its buffer.
> */
> -static struct trace_array global_trace;
> +static struct trace_array global_trace = {.ops = &ftrace_events_ops};
>
> LIST_HEAD(ftrace_trace_arrays);
>
> @@ -5773,6 +5773,8 @@ static int new_instance_create(const char *name)
>
> list_add(&tr->list, &ftrace_trace_arrays);
>
> + tr->ops = &ftrace_events_ops;
> +
> mutex_unlock(&trace_types_lock);
>
> return 0;
> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> index a8acfcd..0a1f4be 100644
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -493,6 +493,8 @@ extern unsigned long nsecs_to_usecs(unsigned long nsecs);
>
> extern unsigned long tracing_thresh;
>
> +extern struct event_trace_ops ftrace_events_ops;
> +
> #ifdef CONFIG_TRACER_MAX_TRACE
> extern unsigned long tracing_max_latency;
>
> diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
> index 53582e9..09ca479 100644
> --- a/kernel/trace/trace_events.c
> +++ b/kernel/trace/trace_events.c
> @@ -241,6 +241,57 @@ void trace_event_enable_cmd_record(bool enable)
> mutex_unlock(&event_mutex);
> }
>
> +static void *ftrace_events_pre_trace(struct ftrace_event_file *file,
> + int entry_size, void *data)
> +{
> + struct ftrace_event_call *event_call = file->event_call;
> + struct trace_descriptor_t *desc = data;
> + struct ring_buffer_event *event;
> + struct ring_buffer *buffer;
> + unsigned long irq_flags;
> + int pc;
> +
> + if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &file->flags))
> + return NULL;
> +
> + local_save_flags(irq_flags);
> + pc = preempt_count();
> +
> + event = trace_event_buffer_lock_reserve(&buffer, file,
> + event_call->event.type,
> + entry_size, irq_flags, pc);
> +
> + if (!event)
> + return NULL;
> +
> + desc->event = event;
> + desc->buffer = buffer;
> + desc->irq_flags = irq_flags;
> + desc->pc = pc;
> +
> + return ring_buffer_event_data(event);
> +}
> +
> +static void ftrace_events_do_trace(struct ftrace_event_file *file, void *entry,
> + int entry_size, void *data)
> +{
> + struct ftrace_event_call *event_call = file->event_call;
> + struct trace_descriptor_t *desc = data;
> + struct ring_buffer_event *event = desc->event;
> + struct ring_buffer *buffer = desc->buffer;
> + unsigned long irq_flags = desc->irq_flags;
> + int pc = desc->pc;
> +
> + if (!filter_current_check_discard(buffer, event_call, entry, event))
> + trace_buffer_unlock_commit(buffer, event, irq_flags, pc);
> +}
> +
> +struct event_trace_ops ftrace_events_ops = {
> + .pre_trace = ftrace_events_pre_trace,
> + .do_trace = ftrace_events_do_trace,
> +};
> +
> +
> static int __ftrace_event_enable_disable(struct ftrace_event_file *file,
> int enable, int soft_disable)
> {

2013-07-02 23:39:54

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH v3 07/12] tracing: remove TRACE_EVENT_TYPE enum definition

On Wed, 2013-04-10 at 11:26 +0800, zhangwei(Jovi) wrote:
> From: "zhangwei(Jovi)" <[email protected]>
>
> The TRACE_EVENT_TYPE enum is currently unused; remove it.

Looks reasonable, pulled.

Thanks,

-- Steve

>
> Signed-off-by: zhangwei(Jovi) <[email protected]>
> ---
> kernel/trace/trace.h | 6 ------
> 1 file changed, 6 deletions(-)
>
> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> index 89da073..9964695 100644
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -794,12 +794,6 @@ static inline void trace_branch_disable(void)
> /* set ring buffers to default size if not already done so */
> int tracing_update_buffers(void);
>
> -/* trace event type bit fields, not numeric */
> -enum {
> - TRACE_EVENT_TYPE_PRINTF = 1,
> - TRACE_EVENT_TYPE_RAW = 2,
> -};
> -
> struct event_filter {
> int n_preds; /* Number assigned */
> int a_preds; /* allocated */

2013-07-02 23:45:38

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH v3 09/12] tracing: remove ftrace(...) function

On Wed, 2013-04-10 at 11:26 +0800, zhangwei(Jovi) wrote:
> From: "zhangwei(Jovi)" <[email protected]>
>
> The only caller of the ftrace(...) function was removed a long time
> ago, so remove the function body as well.

Looks reasonable, pulled.

Thanks,

-- Steve

>
> Signed-off-by: zhangwei(Jovi) <[email protected]>
> ---
> kernel/trace/trace.c | 9 ---------
> kernel/trace/trace.h | 5 -----
> 2 files changed, 14 deletions(-)
>
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index 224b152..dd0c122 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -1534,15 +1534,6 @@ trace_function(struct trace_array *tr,
> __buffer_unlock_commit(buffer, event);
> }
>
> -void
> -ftrace(struct trace_array *tr, struct trace_array_cpu *data,
> - unsigned long ip, unsigned long parent_ip, unsigned long flags,
> - int pc)
> -{
> - if (likely(!atomic_read(&data->disabled)))
> - trace_function(tr, ip, parent_ip, flags, pc);
> -}
> -
> #ifdef CONFIG_STACKTRACE
>
> #define FTRACE_STACK_MAX_ENTRIES (PAGE_SIZE / sizeof(unsigned long))
> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> index 9964695..bb3fd1b 100644
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -445,11 +445,6 @@ void tracing_iter_reset(struct trace_iterator *iter, int cpu);
>
> void poll_wait_pipe(struct trace_iterator *iter);
>
> -void ftrace(struct trace_array *tr,
> - struct trace_array_cpu *data,
> - unsigned long ip,
> - unsigned long parent_ip,
> - unsigned long flags, int pc);
> void tracing_sched_switch_trace(struct trace_array *tr,
> struct task_struct *prev,
> struct task_struct *next,

2013-07-02 23:56:47

by Steven Rostedt

[permalink] [raw]
Subject: Re: [PATCH v3 11/12] tracing: guard tracing_selftest_disabled by CONFIG_FTRACE_STARTUP_TEST

On Wed, 2013-04-10 at 11:26 +0800, zhangwei(Jovi) wrote:
> From: "zhangwei(Jovi)" <[email protected]>
>
> The variable tracing_selftest_disabled is meaningless when
> CONFIG_FTRACE_STARTUP_TEST is disabled.
>
> This patch also removes the __read_mostly attribute, since
> tracing_selftest_disabled is really not read-mostly.

Yes, it is mostly read-only. Sure, it's not read much, but it is also
only written to once. That makes it "read mostly".

-- Steve

>
> Signed-off-by: zhangwei(Jovi) <[email protected]>
> ---
> kernel/trace/trace.c | 6 ++++--
> kernel/trace/trace.h | 2 +-
> kernel/trace/trace_events.c | 2 ++
> 3 files changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index ee4e110..09a3aa8 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -58,10 +58,12 @@ bool ring_buffer_expanded;
> */
> static bool __read_mostly tracing_selftest_running;
>
> +#ifdef CONFIG_FTRACE_STARTUP_TEST
> /*
> * If a tracer is running, we do not want to run SELFTEST.
> */
> -bool __read_mostly tracing_selftest_disabled;
> +bool tracing_selftest_disabled;
> +#endif
>
> /* For tracers that don't implement custom flags */
> static struct tracer_opt dummy_tracer_opt[] = {
> @@ -1069,8 +1071,8 @@ int register_tracer(struct tracer *type)
> tracing_set_tracer(type->name);
> default_bootup_tracer = NULL;
> /* disable other selftests, since this will break it. */
> - tracing_selftest_disabled = true;
> #ifdef CONFIG_FTRACE_STARTUP_TEST
> + tracing_selftest_disabled = true;
> printk(KERN_INFO "Disabling FTRACE selftests due to running tracer '%s'\n",
> type->name);
> #endif
> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
> index 9b8afa7..e9ef8b7 100644
> --- a/kernel/trace/trace.h
> +++ b/kernel/trace/trace.h
> @@ -546,10 +546,10 @@ extern int DYN_FTRACE_TEST_NAME(void);
> extern int DYN_FTRACE_TEST_NAME2(void);
>
> extern bool ring_buffer_expanded;
> -extern bool tracing_selftest_disabled;
> DECLARE_PER_CPU(int, ftrace_cpu_disabled);
>
> #ifdef CONFIG_FTRACE_STARTUP_TEST
> +extern bool tracing_selftest_disabled;
> extern int trace_selftest_startup_function(struct tracer *trace,
> struct trace_array *tr);
> extern int trace_selftest_startup_function_graph(struct tracer *trace,
> diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
> index 7c52a51..7c4a16b 100644
> --- a/kernel/trace/trace_events.c
> +++ b/kernel/trace/trace_events.c
> @@ -2251,7 +2251,9 @@ static __init int setup_trace_event(char *str)
> {
> strlcpy(bootup_event_buf, str, COMMAND_LINE_SIZE);
> ring_buffer_expanded = true;
> +#ifdef CONFIG_FTRACE_STARTUP_TEST
> tracing_selftest_disabled = true;
> +#endif
>
> return 1;
> }

2013-07-03 04:00:54

by zhangwei(Jovi)

[permalink] [raw]
Subject: Re: [PATCH v3 03/12] tracing: expose event tracing infrastructure

On 2013/7/3 7:35, Steven Rostedt wrote:
> On Wed, 2013-04-10 at 11:26 +0800, zhangwei(Jovi) wrote:
>> From: "zhangwei(Jovi)" <[email protected]>
>>
>> Currently event tracing can only be used by ftrace and perf; there is
>> no mechanism to let modules (like external tracing tools) register a
>> tracing callback function.
>>
>> Event tracing is implemented on top of tracepoints. Compared with raw
>> tracepoints, the event tracing infrastructure provides a built-in
>> structured event annotation format, and this feature should be exposed
>> to external users.
>>
>> For example, this simple pseudo ktap script demonstrates how to use
>> the change.
>
> Ah, it's for ktap.
>
> Let's work on getting ktap into mainline first ;-)
>
Sure.

I have to say this patch needs a little revision.

The original ktap was based on this patch, so the ktap user handler was invoked
when an event hit. From a strict technical point of view, though, this is not
needed at all, because the perf callback mechanism is already there; I simply
didn't use it (my fault). ktap has now been tuned to use the unified perf
interface, so it is easy to support tracepoint/kprobe/uprobe/PMU/hw_breakpoint
in a unified manner without any kernel patch. (I also suggest Tom's event
trigger patchset follow this approach if possible.)

But perhaps this patch still has value for the tracing subsystem, with one
benefit: it reduces kernel size by unifying ftrace_raw_event_##call and
perf_trace_##call.

Remember the size reduction in my v2 patch?
Link: https://lkml.org/lkml/2013/3/13/143
The kernel size shrinks by ~52K with that change.

Is it worth continuing to focus on kernel size shrinking?
Please disregard the "expose event tracing infrastructure" part.

>>
>> function event_trace(e)
>> {
>> printf("%s", e.annotate);
>> }
>>
>> os.trace("sched:sched_switch", event_trace);
>> os.trace("irq:softirq_raise", event_trace);
>>
>> The running result:
>> sched_switch: prev_comm=rcu_sched prev_pid=10 prev_prio=120 prev_state=S ==> next_comm=swapper/1 next_pid=0 next_prio=120
>> softirq_raise: vec=1 [action=TIMER]
>> ...
>>
>> This expose change can be use by other tracing tool, like systemtap/lttng,
>> if they would implement this.
>>
>> This patch introduce struct event_trace_ops in trace_array, it have
>> two callback functions, pre_trace and do_trace.
>> when ftrace_raw_event_<call> function hit, it will call all
>> registered event_trace_ops.
>>
>> the benefit of this change is kernel size shrink ~18K
>
> Now this is something that I would be more interested in having.
>
>>
>> (the kernel size will reduce more when perf tracing code
>> converting to use this mechanism in future)
>>
>> text data bss dec hex filename
>> 7402131 804364 3149824 11356319 ad489f vmlinux.old
>> 7383115 804684 3149824 11337623 acff97 vmlinux.new
>>
>> Signed-off-by: zhangwei(Jovi) <[email protected]>
>> ---
>> include/linux/ftrace_event.h | 21 +++++++++++++
>> include/linux/trace_array.h | 1 +
>> include/trace/ftrace.h | 69 +++++++++++++-----------------------------
>> kernel/trace/trace.c | 4 ++-
>> kernel/trace/trace.h | 2 ++
>> kernel/trace/trace_events.c | 51 +++++++++++++++++++++++++++++++
>> 6 files changed, 99 insertions(+), 49 deletions(-)
>>
>> diff --git a/include/linux/ftrace_event.h b/include/linux/ftrace_event.h
>> index 4e28b01..4b55272 100644
>> --- a/include/linux/ftrace_event.h
>> +++ b/include/linux/ftrace_event.h
>> @@ -6,6 +6,7 @@
>> #include <linux/percpu.h>
>> #include <linux/hardirq.h>
>> #include <linux/perf_event.h>
>> +#include <linux/trace_array.h>
>>
>> struct trace_array;
>> struct trace_buffer;
>> @@ -245,6 +246,26 @@ struct ftrace_event_call {
>> #endif
>> };
>>
>> +
>> +/*
>> + * trace_descriptor_t is purpose for passing arguments between
>> + * pre_trace and do_trace function.
>> + */
>> +struct trace_descriptor_t {
>> + struct ring_buffer_event *event;
>> + struct ring_buffer *buffer;
>> + unsigned long irq_flags;
>> + int pc;
>> +};
>> +
>> +/* callback function for tracing */
>> +struct event_trace_ops {
>> + void *(*pre_trace)(struct ftrace_event_file *file,
>> + int entry_size, void *data);
>> + void (*do_trace)(struct ftrace_event_file *file, void *entry,
>> + int entry_size, void *data);
>> +};
>> +
>> struct trace_array;
>> struct ftrace_subsystem_dir;
>>
>> diff --git a/include/linux/trace_array.h b/include/linux/trace_array.h
>> index c5b7a13..b362c5f 100644
>> --- a/include/linux/trace_array.h
>> +++ b/include/linux/trace_array.h
>> @@ -56,6 +56,7 @@ struct trace_array {
>> struct list_head list;
>> char *name;
>> struct trace_buffer trace_buffer;
>> + struct event_trace_ops *ops;
>> #ifdef CONFIG_TRACER_MAX_TRACE
>> /*
>> * The max_buffer is used to snapshot the trace when a maximum
>> diff --git a/include/trace/ftrace.h b/include/trace/ftrace.h
>> index 4bda044..743e754 100644
>> --- a/include/trace/ftrace.h
>> +++ b/include/trace/ftrace.h
>> @@ -401,41 +401,28 @@ static inline notrace int ftrace_get_offsets_##call( \
>> *
>> * static struct ftrace_event_call event_<call>;
>> *
>> - * static void ftrace_raw_event_<call>(void *__data, proto)
>> + * static notrace void ftrace_raw_event_##call(void *__data, proto)
>> * {
>> * struct ftrace_event_file *ftrace_file = __data;
>> - * struct ftrace_event_call *event_call = ftrace_file->event_call;
>> - * struct ftrace_data_offsets_<call> __maybe_unused __data_offsets;
>> - * struct ring_buffer_event *event;
>> - * struct ftrace_raw_<call> *entry; <-- defined in stage 1
>> - * struct ring_buffer *buffer;
>> - * unsigned long irq_flags;
>> - * int __data_size;
>> - * int pc;
>> + * struct ftrace_data_offsets_##call __maybe_unused __data_offsets;
>> + * struct trace_descriptor_t __desc;
>> + * struct event_trace_ops *ops = ftrace_file->tr->ops;
>> + * struct ftrace_raw_##call *entry; <-- defined in stage 1
>> + * int __data_size, __entry_size;
>> *
>> - * if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT,
>> - * &ftrace_file->flags))
>> - * return;
>> - *
>> - * local_save_flags(irq_flags);
>> - * pc = preempt_count();
>> - *
>> - * __data_size = ftrace_get_offsets_<call>(&__data_offsets, args);
>> + * __data_size = ftrace_get_offsets_##call(&__data_offsets, args);
>> + * __entry_size = sizeof(*entry) + __data_size;
>> *
>> - * event = trace_event_buffer_lock_reserve(&buffer, ftrace_file,
>> - * event_<call>->event.type,
>> - * sizeof(*entry) + __data_size,
>> - * irq_flags, pc);
>> - * if (!event)
>> + * entry = ops->pre_trace(ftrace_file, __entry_size, &__desc);
>> + * if (!entry)
>> * return;
>> - * entry = ring_buffer_event_data(event);
>> + *
>> + * tstruct
>> *
>> * { <assign>; } <-- Here we assign the entries by the __field and
>> * __array macros.
>> *
>> - * if (!filter_current_check_discard(buffer, event_call, entry, event))
>> - * trace_nowake_buffer_unlock_commit(buffer,
>> - * event, irq_flags, pc);
>> + * ops->do_trace(ftrace_file, entry, __entry_size, &__desc);
>> * }
>> *
>> * static struct trace_event ftrace_event_type_<call> = {
>> @@ -513,38 +500,24 @@ static notrace void \
>> ftrace_raw_event_##call(void *__data, proto) \
>> { \
>> struct ftrace_event_file *ftrace_file = __data; \
>> - struct ftrace_event_call *event_call = ftrace_file->event_call; \
>> struct ftrace_data_offsets_##call __maybe_unused __data_offsets;\
>> - struct ring_buffer_event *event; \
>> + struct trace_descriptor_t __desc; \
>> + struct event_trace_ops *ops = ftrace_file->tr->ops; \
>> struct ftrace_raw_##call *entry; \
>> - struct ring_buffer *buffer; \
>> - unsigned long irq_flags; \
>> - int __data_size; \
>> - int pc; \
>> + int __data_size, __entry_size; \
>> \
>> - if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, \
>> - &ftrace_file->flags)) \
>> - return; \
>> - \
>> - local_save_flags(irq_flags); \
>> - pc = preempt_count(); \
>> + __data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
>> + __entry_size = sizeof(*entry) + __data_size; \
>> \
>> - __data_size = ftrace_get_offsets_##call(&__data_offsets, args); \
>> - \
>> - event = trace_event_buffer_lock_reserve(&buffer, ftrace_file, \
>> - event_call->event.type, \
>> - sizeof(*entry) + __data_size, \
>> - irq_flags, pc); \
>> - if (!event) \
>> + entry = ops->pre_trace(ftrace_file, __entry_size, &__desc); \
>> + if (!entry) \
>> return; \
>> - entry = ring_buffer_event_data(event); \
>> \
>> tstruct \
>> \
>> { assign; } \
>> \
>> - if (!filter_current_check_discard(buffer, event_call, entry, event)) \
>> - trace_buffer_unlock_commit(buffer, event, irq_flags, pc); \
>> + ops->do_trace(ftrace_file, entry, __entry_size, &__desc); \
>> }
>
> Hmm, this is a major change. Something that will definitely have to wait
> for 3.12.
>
> -- Steve
>
>> /*
>> * The ftrace_test_probe is compiled out, it is only here as a build time check
>> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
>> index 829b2be..224b152 100644
>> --- a/kernel/trace/trace.c
>> +++ b/kernel/trace/trace.c
>> @@ -189,7 +189,7 @@ unsigned long long ns2usecs(cycle_t nsec)
>> * pages for the buffer for that CPU. Each CPU has the same number
>> * of pages allocated for its buffer.
>> */
>> -static struct trace_array global_trace;
>> +static struct trace_array global_trace = {.ops = &ftrace_events_ops};
>>
>> LIST_HEAD(ftrace_trace_arrays);
>>
>> @@ -5773,6 +5773,8 @@ static int new_instance_create(const char *name)
>>
>> list_add(&tr->list, &ftrace_trace_arrays);
>>
>> + tr->ops = &ftrace_events_ops;
>> +
>> mutex_unlock(&trace_types_lock);
>>
>> return 0;
>> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
>> index a8acfcd..0a1f4be 100644
>> --- a/kernel/trace/trace.h
>> +++ b/kernel/trace/trace.h
>> @@ -493,6 +493,8 @@ extern unsigned long nsecs_to_usecs(unsigned long nsecs);
>>
>> extern unsigned long tracing_thresh;
>>
>> +extern struct event_trace_ops ftrace_events_ops;
>> +
>> #ifdef CONFIG_TRACER_MAX_TRACE
>> extern unsigned long tracing_max_latency;
>>
>> diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
>> index 53582e9..09ca479 100644
>> --- a/kernel/trace/trace_events.c
>> +++ b/kernel/trace/trace_events.c
>> @@ -241,6 +241,57 @@ void trace_event_enable_cmd_record(bool enable)
>> mutex_unlock(&event_mutex);
>> }
>>
>> +static void *ftrace_events_pre_trace(struct ftrace_event_file *file,
>> + int entry_size, void *data)
>> +{
>> + struct ftrace_event_call *event_call = file->event_call;
>> + struct trace_descriptor_t *desc = data;
>> + struct ring_buffer_event *event;
>> + struct ring_buffer *buffer;
>> + unsigned long irq_flags;
>> + int pc;
>> +
>> + if (test_bit(FTRACE_EVENT_FL_SOFT_DISABLED_BIT, &file->flags))
>> + return NULL;
>> +
>> + local_save_flags(irq_flags);
>> + pc = preempt_count();
>> +
>> + event = trace_event_buffer_lock_reserve(&buffer, file,
>> + event_call->event.type,
>> + entry_size, irq_flags, pc);
>> +
>> + if (!event)
>> + return NULL;
>> +
>> + desc->event = event;
>> + desc->buffer = buffer;
>> + desc->irq_flags = irq_flags;
>> + desc->pc = pc;
>> +
>> + return ring_buffer_event_data(event);
>> +}
>> +
>> +static void ftrace_events_do_trace(struct ftrace_event_file *file, void *entry,
>> + int entry_size, void *data)
>> +{
>> + struct ftrace_event_call *event_call = file->event_call;
>> + struct trace_descriptor_t *desc = data;
>> + struct ring_buffer_event *event = desc->event;
>> + struct ring_buffer *buffer = desc->buffer;
>> + unsigned long irq_flags = desc->irq_flags;
>> + int pc = desc->pc;
>> +
>> + if (!filter_current_check_discard(buffer, event_call, entry, event))
>> + trace_buffer_unlock_commit(buffer, event, irq_flags, pc);
>> +}
>> +
>> +struct event_trace_ops ftrace_events_ops = {
>> + .pre_trace = ftrace_events_pre_trace,
>> + .do_trace = ftrace_events_do_trace,
>> +};
>> +
>> +
>> static int __ftrace_event_enable_disable(struct ftrace_event_file *file,
>> int enable, int soft_disable)
>> {
>
>
>
> .
>

2013-07-03 04:02:52

by zhangwei(Jovi)

Subject: Re: [PATCH v3 01/12] tracing: move trace_array definition into include/linux/trace_array.h

On 2013/7/3 7:19, Steven Rostedt wrote:
> On Wed, 2013-04-10 at 11:26 +0800, zhangwei(Jovi) wrote:
>> From: "zhangwei(Jovi)" <[email protected]>
>>
>> Prepare for expose event tracing infrastructure.
>> (struct trace_array shall be use by external modules)
>>
>
> What module is going to be using this?
>
> -- Steve
>
Please ignore this patch; the reason was posted in a reply on another thread.
Sorry about that.

jovi

2013-07-03 04:12:54

by zhangwei(Jovi)

Subject: Re: [PATCH v3 11/12] tracing: guard tracing_selftest_disabled by CONFIG_FTRACE_STARTUP_TEST

On 2013/7/3 7:56, Steven Rostedt wrote:
> On Wed, 2013-04-10 at 11:26 +0800, zhangwei(Jovi) wrote:
>> From: "zhangwei(Jovi)" <[email protected]>
>>
>> Variable tracing_selftest_disabled have not any sense when
>> CONFIG_FTRACE_STARTUP_TEST is disabled.
>>
>> This patch also remove __read_mostly attribute, since variable
>> tracing_selftest_disabled really not read mostly.
>
> Yes it is mostly read only. Sure, it's not read much, but it is also
> only written to once. That makes it, "read mostly".
>
> -- Steve
>
Ok, we can keep the __read_mostly attribute.

tracing_selftest_disabled can still be moved under the CONFIG_FTRACE_STARTUP_TEST
guard, since CONFIG_FTRACE_STARTUP_TEST is disabled on most systems.

Do I need to resend this patch?

jovi

>>
>> Signed-off-by: zhangwei(Jovi) <[email protected]>
>> ---
>> kernel/trace/trace.c | 6 ++++--
>> kernel/trace/trace.h | 2 +-
>> kernel/trace/trace_events.c | 2 ++
>> 3 files changed, 7 insertions(+), 3 deletions(-)
>>
>> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
>> index ee4e110..09a3aa8 100644
>> --- a/kernel/trace/trace.c
>> +++ b/kernel/trace/trace.c
>> @@ -58,10 +58,12 @@ bool ring_buffer_expanded;
>> */
>> static bool __read_mostly tracing_selftest_running;
>>
>> +#ifdef CONFIG_FTRACE_STARTUP_TEST
>> /*
>> * If a tracer is running, we do not want to run SELFTEST.
>> */
>> -bool __read_mostly tracing_selftest_disabled;
>> +bool tracing_selftest_disabled;
>> +#endif
>>
>> /* For tracers that don't implement custom flags */
>> static struct tracer_opt dummy_tracer_opt[] = {
>> @@ -1069,8 +1071,8 @@ int register_tracer(struct tracer *type)
>> tracing_set_tracer(type->name);
>> default_bootup_tracer = NULL;
>> /* disable other selftests, since this will break it. */
>> - tracing_selftest_disabled = true;
>> #ifdef CONFIG_FTRACE_STARTUP_TEST
>> + tracing_selftest_disabled = true;
>> printk(KERN_INFO "Disabling FTRACE selftests due to running tracer '%s'\n",
>> type->name);
>> #endif
>> diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
>> index 9b8afa7..e9ef8b7 100644
>> --- a/kernel/trace/trace.h
>> +++ b/kernel/trace/trace.h
>> @@ -546,10 +546,10 @@ extern int DYN_FTRACE_TEST_NAME(void);
>> extern int DYN_FTRACE_TEST_NAME2(void);
>>
>> extern bool ring_buffer_expanded;
>> -extern bool tracing_selftest_disabled;
>> DECLARE_PER_CPU(int, ftrace_cpu_disabled);
>>
>> #ifdef CONFIG_FTRACE_STARTUP_TEST
>> +extern bool tracing_selftest_disabled;
>> extern int trace_selftest_startup_function(struct tracer *trace,
>> struct trace_array *tr);
>> extern int trace_selftest_startup_function_graph(struct tracer *trace,
>> diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
>> index 7c52a51..7c4a16b 100644
>> --- a/kernel/trace/trace_events.c
>> +++ b/kernel/trace/trace_events.c
>> @@ -2251,7 +2251,9 @@ static __init int setup_trace_event(char *str)
>> {
>> strlcpy(bootup_event_buf, str, COMMAND_LINE_SIZE);
>> ring_buffer_expanded = true;
>> +#ifdef CONFIG_FTRACE_STARTUP_TEST
>> tracing_selftest_disabled = true;
>> +#endif
>>
>> return 1;
>> }
>
>
>
> .
>

2013-07-03 11:39:10

by Steven Rostedt

Subject: Re: [PATCH v3 11/12] tracing: guard tracing_selftest_disabled by CONFIG_FTRACE_STARTUP_TEST

On Wed, 2013-07-03 at 12:12 +0800, zhangwei(Jovi) wrote:

> Ok, we can leave the __read_mostly attribute.
>
> And tracing_selftest_disabled still can move to CONFIG_FTRACE_STARTUP_TEST
> guard, normally CONFIG_FTRACE_STARTUP_TEST is disabled in most system.
>
> Do I need to resend this patch?
>

Yes please. I don't like to modify someone else's patch in a way that makes
it no longer match the change log.

-- Steve